From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:14:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340075.565012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwBzl-00041l-Tf; Wed, 01 Jun 2022 00:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340075.565012; Wed, 01 Jun 2022 00:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwBzl-00041e-QY; Wed, 01 Jun 2022 00:13:53 +0000
Received: by outflank-mailman (input) for mailman id 340075;
 Wed, 01 Jun 2022 00:13:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwBzj-00041Y-LI
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 00:13:51 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5739779-e13f-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 02:13:49 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id DFF73614B3;
 Wed,  1 Jun 2022 00:13:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7DE4CC385A9;
 Wed,  1 Jun 2022 00:13:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5739779-e13f-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654042427;
	bh=ag10k4hTWze4mXNm0kqpInGHiNKVLfIEVeWL4b/oXgM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Z2WeFA4UGgGptD+7wpRGoVF54rYcHzOPQ+lfrfARxTA+9BO1Nfs2X2fsLZcGKXyYv
	 /5VIpDRuuPWM/tuvb0qfvzFJQ9DGWeex0cOZLWuQRuXg9Gwp5hSFNxemvc4s2nqWNN
	 48fupsDfSuYpTwgoui1FjV9yorfHLJz2LQNUDo9X91a96YA90p9YLWRh2c9XqmBgdA
	 SGGF2SRaXpeNfbFudvCHNkcpoI1NcFoihviHfUFOaXNfAk8DvhFabPY7bGfzWUaj+C
	 EFDnmdepE3x0mLVq76l2UV8vDlOA7cXgoKCbHX2bs01COt895xzl0h1B/KGDNJ3y20
	 /B9yx+fQs7Qxw==
Date: Tue, 31 May 2022 17:13:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Juergen Gross <jgross@suse.com>
cc: xen-devel@lists.xenproject.org, x86@kernel.org, 
    linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
    Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
    "H. Peter Anvin" <hpa@zytor.com>, 
    Greg Kroah-Hartman <gregkh@linuxfoundation.org>, 
    Jiri Slaby <jirislaby@kernel.org>, kernel test robot <lkp@intel.com>
Subject: Re: [PATCH] xen: replace xen_remap() with memremap()
In-Reply-To: <20220530082634.6339-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2205311713270.1905099@ubuntu-linux-20-04-desktop>
References: <20220530082634.6339-1-jgross@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 May 2022, Juergen Gross wrote:
> xen_remap() is used to establish mappings for frames not under direct
> control of the kernel: for Xenstore and console ring pages, and for
> grant pages of non-PV guests.
> 
> Today xen_remap() is defined to use ioremap() on x86 (doing uncached
> mappings), and ioremap_cache() on Arm (doing cached mappings).
> 
> Uncached mappings for those use cases are bad for performance, so they
> should be avoided if possible. As none of the xen_remap() use cases
> requires an uncached mapping (the mapped area is always physical RAM),
> a mapping using the standard WB cache mode is fine.
> 
> As sparse flags some of the xen_remap() use cases as not appropriate
> for iomem(), because the result is not annotated with the __iomem
> modifier, eliminate xen_remap() completely and replace all use cases
> with memremap(), specifying the MEMREMAP_WB caching mode.
> 
> xen_unmap() can be replaced with memunmap().
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  arch/x86/include/asm/xen/page.h   | 3 ---
>  drivers/tty/hvc/hvc_xen.c         | 2 +-
>  drivers/xen/grant-table.c         | 6 +++---
>  drivers/xen/xenbus/xenbus_probe.c | 8 ++++----
>  include/xen/arm/page.h            | 3 ---
>  5 files changed, 8 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 1fc67df50014..fa9ec20783fa 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -347,9 +347,6 @@ unsigned long arbitrary_virt_to_mfn(void *vaddr);
>  void make_lowmem_page_readonly(void *vaddr);
>  void make_lowmem_page_readwrite(void *vaddr);
>  
> -#define xen_remap(cookie, size) ioremap((cookie), (size))
> -#define xen_unmap(cookie) iounmap((cookie))
> -
>  static inline bool xen_arch_need_swiotlb(struct device *dev,
>  					 phys_addr_t phys,
>  					 dma_addr_t dev_addr)
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index ebaf7500f48f..7c23112dc923 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -253,7 +253,7 @@ static int xen_hvm_console_init(void)
>  	if (r < 0 || v == 0)
>  		goto err;
>  	gfn = v;
> -	info->intf = xen_remap(gfn << XEN_PAGE_SHIFT, XEN_PAGE_SIZE);
> +	info->intf = memremap(gfn << XEN_PAGE_SHIFT, XEN_PAGE_SIZE, MEMREMAP_WB);
>  	if (info->intf == NULL)
>  		goto err;
>  	info->vtermno = HVC_COOKIE;
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 1a1aec0a88a1..2f4f0ed5d8f8 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -632,7 +632,7 @@ int gnttab_setup_auto_xlat_frames(phys_addr_t addr)
>  	if (xen_auto_xlat_grant_frames.count)
>  		return -EINVAL;
>  
> -	vaddr = xen_remap(addr, XEN_PAGE_SIZE * max_nr_gframes);
> +	vaddr = memremap(addr, XEN_PAGE_SIZE * max_nr_gframes, MEMREMAP_WB);
>  	if (vaddr == NULL) {
>  		pr_warn("Failed to ioremap gnttab share frames (addr=%pa)!\n",
>  			&addr);
> @@ -640,7 +640,7 @@ int gnttab_setup_auto_xlat_frames(phys_addr_t addr)
>  	}
>  	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
>  	if (!pfn) {
> -		xen_unmap(vaddr);
> +		memunmap(vaddr);
>  		return -ENOMEM;
>  	}
>  	for (i = 0; i < max_nr_gframes; i++)
> @@ -659,7 +659,7 @@ void gnttab_free_auto_xlat_frames(void)
>  	if (!xen_auto_xlat_grant_frames.count)
>  		return;
>  	kfree(xen_auto_xlat_grant_frames.pfn);
> -	xen_unmap(xen_auto_xlat_grant_frames.vaddr);
> +	memunmap(xen_auto_xlat_grant_frames.vaddr);
>  
>  	xen_auto_xlat_grant_frames.pfn = NULL;
>  	xen_auto_xlat_grant_frames.count = 0;
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index d367f2bd2b93..58b732dcbfb8 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -752,8 +752,8 @@ static void xenbus_probe(void)
>  	xenstored_ready = 1;
>  
>  	if (!xen_store_interface) {
> -		xen_store_interface = xen_remap(xen_store_gfn << XEN_PAGE_SHIFT,
> -						XEN_PAGE_SIZE);
> +		xen_store_interface = memremap(xen_store_gfn << XEN_PAGE_SHIFT,
> +					       XEN_PAGE_SIZE, MEMREMAP_WB);
>  		/*
>  		 * Now it is safe to free the IRQ used for xenstore late
>  		 * initialization. No need to unbind: it is about to be
> @@ -1009,8 +1009,8 @@ static int __init xenbus_init(void)
>  #endif
>  			xen_store_gfn = (unsigned long)v;
>  			xen_store_interface =
> -				xen_remap(xen_store_gfn << XEN_PAGE_SHIFT,
> -					  XEN_PAGE_SIZE);
> +				memremap(xen_store_gfn << XEN_PAGE_SHIFT,
> +					 XEN_PAGE_SIZE, MEMREMAP_WB);
>  			if (xen_store_interface->connection != XENSTORE_CONNECTED)
>  				wait = true;
>  		}
> diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> index 7e199c6656b9..e5c84ff28c8b 100644
> --- a/include/xen/arm/page.h
> +++ b/include/xen/arm/page.h
> @@ -109,9 +109,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
>  	return __set_phys_to_machine(pfn, mfn);
>  }
>  
> -#define xen_remap(cookie, size) ioremap_cache((cookie), (size))
> -#define xen_unmap(cookie) iounmap((cookie))
> -
>  bool xen_arch_need_swiotlb(struct device *dev,
>  			   phys_addr_t phys,
>  			   dma_addr_t dev_addr);
> -- 
> 2.35.3
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:19:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:19:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340084.565023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwC4t-0004fQ-II; Wed, 01 Jun 2022 00:19:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340084.565023; Wed, 01 Jun 2022 00:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwC4t-0004fJ-Eq; Wed, 01 Jun 2022 00:19:11 +0000
Received: by outflank-mailman (input) for mailman id 340084;
 Wed, 01 Jun 2022 00:19:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwC4s-0004fD-O3
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 00:19:10 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74607df3-e140-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 02:19:09 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 52754B815FB;
 Wed,  1 Jun 2022 00:19:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7E051C385A9;
 Wed,  1 Jun 2022 00:19:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74607df3-e140-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654042747;
	bh=P/iz6+csAw0ciTKuenpfmOdcBMP5IIWA42cjgJk9vhg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=RlPEtkSEx6F6CWKZ47/uWC1iN+9ybjcUkUNW5g0uMGGhzkNEaQRVAMmLaJQlFNDxh
	 CyyxLLGFfORxaKuGG3G3sArabRmUALzpzpiJFSeRTAcPEycRf62F2vLWAC8uTsfXkw
	 iSwFuW4TWEEcW2jRMLDQ0PI34+aKf4miSBYzq2woXIXcLFjdHVZv9SB/cCrf27dfKO
	 X8DcegkVt/H/ogJBbWpohDKm6ML4sdv8KOdC2hUNzS1qs6Rr/2B9swycHxwo/clOMM
	 9BFBG8QT+fsYBWiPnnhU3IvFGeD3lrRImkuMBhlTvVX0IrhO7YJ2p25FwtPfffGELa
	 FcqBbiXBHGY0A==
Date: Tue, 31 May 2022 17:19:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    linux-arm-kernel@lists.infradead.org, Juergen Gross <jgross@suse.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH V3 3/8] xen/grant-dma-ops: Add option to restrict memory
 access under Xen
In-Reply-To: <1653944417-17168-4-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2205311718550.1905099@ubuntu-linux-20-04-desktop>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com> <1653944417-17168-4-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> Introduce the Xen grant DMA-mapping layer, which contains special
> DMA-mapping routines for providing grant references as DMA addresses
> to be used by frontends (e.g. virtio) in Xen guests.
> 
> Add this functionality by providing a special set of DMA ops that
> handle the grant operations for the I/O pages.
> 
> The subsequent commit will introduce the use case for xen-grant DMA ops
> layer to enable using virtio devices in Xen guests in a safe manner.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes RFC -> V1:
>    - squash with almost all changes from commit (except handling "xen,dev-domid"
>      property):
>      "[PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer"
>    - update commit subject/description and comments in code
>    - leave only single Kconfig option XEN_VIRTIO and remove architectural
>      dependencies
>    - introduce common xen_has_restricted_virtio_memory_access() in xen.h
>      and update arch_has_restricted_virtio_memory_access() for both
>      Arm and x86 to call new helper
>    - use (1ULL << 63) instead of 0x8000000000000000ULL for XEN_GRANT_ADDR_OFF
>    - implement xen_virtio_dma_map(unmap)_sg() using example in swiotlb-xen.c
>    - optimize padding by moving "broken" field in struct xen_virtio_data
>    - remove unneeded per-device spinlock
>    - remove the inclusion of virtio_config.h
>    - rename everything according to the new naming scheme:
>      s/virtio/grant_dma
>    - add new hidden config option XEN_GRANT_DMA_OPS
> 
> Changes V1 -> V2:
>    - fix checkpatch.pl warnings
>    - remove the inclusion of linux/pci.h
>    - rework to use xarray for data context
>    - remove EXPORT_SYMBOL_GPL(xen_grant_setup_dma_ops);
>    - remove the line of * after SPDX-License-Identifier
>    - split changes into grant-dma-ops.c and arch_has_restricted_virtio_memory_access()
>      and update commit subject/description accordingly
>    - remove "default n" for config XEN_VIRTIO
>    - implement xen_grant_dma_alloc(free)_pages()
> 
> Changes V2 -> V3:
>    - Stefano already gave his Reviewed-by; I dropped it due to the (minor) changes
>    - rename field "dev_domid" in struct xen_grant_dma_data to "backend_domid"
>    - remove local variable "domid" in xen_grant_setup_dma_ops()
> ---
>  drivers/xen/Kconfig         |   4 +
>  drivers/xen/Makefile        |   1 +
>  drivers/xen/grant-dma-ops.c | 311 ++++++++++++++++++++++++++++++++++++++++++++
>  include/xen/xen-ops.h       |   8 ++
>  4 files changed, 324 insertions(+)
>  create mode 100644 drivers/xen/grant-dma-ops.c
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 120d32f..313a9127 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -335,4 +335,8 @@ config XEN_UNPOPULATED_ALLOC
>  	  having to balloon out RAM regions in order to obtain physical memory
>  	  space to create such mappings.
>  
> +config XEN_GRANT_DMA_OPS
> +	bool
> +	select DMA_OPS
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 5aae66e..1a23cb0 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -39,3 +39,4 @@ xen-gntalloc-y				:= gntalloc.o
>  xen-privcmd-y				:= privcmd.o privcmd-buf.o
>  obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
>  obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
> +obj-$(CONFIG_XEN_GRANT_DMA_OPS)		+= grant-dma-ops.o
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> new file mode 100644
> index 00000000..44659f4
> --- /dev/null
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -0,0 +1,311 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Xen grant DMA-mapping layer - contains special DMA-mapping routines
> + * for providing grant references as DMA addresses to be used by frontends
> + * (e.g. virtio) in Xen guests
> + *
> + * Copyright (c) 2021, Juergen Gross <jgross@suse.com>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/dma-map-ops.h>
> +#include <linux/of.h>
> +#include <linux/pfn.h>
> +#include <linux/xarray.h>
> +#include <xen/xen.h>
> +#include <xen/grant_table.h>
> +
> +struct xen_grant_dma_data {
> +	/* The ID of backend domain */
> +	domid_t backend_domid;
> +	/* Is device behaving sane? */
> +	bool broken;
> +};
> +
> +static DEFINE_XARRAY(xen_grant_dma_devices);
> +
> +#define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)
> +
> +static inline dma_addr_t grant_to_dma(grant_ref_t grant)
> +{
> +	return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
> +}
> +
> +static inline grant_ref_t dma_to_grant(dma_addr_t dma)
> +{
> +	return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
> +}
> +
> +static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
> +{
> +	struct xen_grant_dma_data *data;
> +
> +	xa_lock(&xen_grant_dma_devices);
> +	data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
> +	xa_unlock(&xen_grant_dma_devices);
> +
> +	return data;
> +}
> +
> +/*
> + * DMA ops for Xen frontends (e.g. virtio).
> + *
> + * Used to act as a kind of software IOMMU for Xen guests by using grants as
> + * DMA addresses.
> + * Such a DMA address is formed by using the grant reference as a frame
> + * number and setting the highest address bit (this bit is for the backend
> + * to be able to distinguish it from e.g. a mmio address).
> + *
> + * Note that for now we hard wire dom0 to be the backend domain. In order
> + * to support any domain as backend we'd need to add a way to communicate
> + * the domid of this backend, e.g. via Xenstore, via the PCI-device's
> + * config space or DT/ACPI.
> + */
> +static void *xen_grant_dma_alloc(struct device *dev, size_t size,
> +				 dma_addr_t *dma_handle, gfp_t gfp,
> +				 unsigned long attrs)
> +{
> +	struct xen_grant_dma_data *data;
> +	unsigned int i, n_pages = PFN_UP(size);
> +	unsigned long pfn;
> +	grant_ref_t grant;
> +	void *ret;
> +
> +	data = find_xen_grant_dma_data(dev);
> +	if (!data)
> +		return NULL;
> +
> +	if (unlikely(data->broken))
> +		return NULL;
> +
> +	ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
> +	if (!ret)
> +		return NULL;
> +
> +	pfn = virt_to_pfn(ret);
> +
> +	if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
> +		free_pages_exact(ret, n_pages * PAGE_SIZE);
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < n_pages; i++) {
> +		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
> +				pfn_to_gfn(pfn + i), 0);
> +	}
> +
> +	*dma_handle = grant_to_dma(grant);
> +
> +	return ret;
> +}
> +
> +static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
> +			       dma_addr_t dma_handle, unsigned long attrs)
> +{
> +	struct xen_grant_dma_data *data;
> +	unsigned int i, n_pages = PFN_UP(size);
> +	grant_ref_t grant;
> +
> +	data = find_xen_grant_dma_data(dev);
> +	if (!data)
> +		return;
> +
> +	if (unlikely(data->broken))
> +		return;
> +
> +	grant = dma_to_grant(dma_handle);
> +
> +	for (i = 0; i < n_pages; i++) {
> +		if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
> +			dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
> +			data->broken = true;
> +			return;
> +		}
> +	}
> +
> +	gnttab_free_grant_reference_seq(grant, n_pages);
> +
> +	free_pages_exact(vaddr, n_pages * PAGE_SIZE);
> +}
> +
> +static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
> +					      dma_addr_t *dma_handle,
> +					      enum dma_data_direction dir,
> +					      gfp_t gfp)
> +{
> +	void *vaddr;
> +
> +	vaddr = xen_grant_dma_alloc(dev, size, dma_handle, gfp, 0);
> +	if (!vaddr)
> +		return NULL;
> +
> +	return virt_to_page(vaddr);
> +}
> +
> +static void xen_grant_dma_free_pages(struct device *dev, size_t size,
> +				     struct page *vaddr, dma_addr_t dma_handle,
> +				     enum dma_data_direction dir)
> +{
> +	xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
> +}
> +
> +static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
> +					 unsigned long offset, size_t size,
> +					 enum dma_data_direction dir,
> +					 unsigned long attrs)
> +{
> +	struct xen_grant_dma_data *data;
> +	unsigned int i, n_pages = PFN_UP(size);
> +	grant_ref_t grant;
> +	dma_addr_t dma_handle;
> +
> +	if (WARN_ON(dir == DMA_NONE))
> +		return DMA_MAPPING_ERROR;
> +
> +	data = find_xen_grant_dma_data(dev);
> +	if (!data)
> +		return DMA_MAPPING_ERROR;
> +
> +	if (unlikely(data->broken))
> +		return DMA_MAPPING_ERROR;
> +
> +	if (gnttab_alloc_grant_reference_seq(n_pages, &grant))
> +		return DMA_MAPPING_ERROR;
> +
> +	for (i = 0; i < n_pages; i++) {
> +		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
> +				xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
> +	}
> +
> +	dma_handle = grant_to_dma(grant) + offset;
> +
> +	return dma_handle;
> +}
> +
> +static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
> +				     size_t size, enum dma_data_direction dir,
> +				     unsigned long attrs)
> +{
> +	struct xen_grant_dma_data *data;
> +	unsigned int i, n_pages = PFN_UP(size);
> +	grant_ref_t grant;
> +
> +	if (WARN_ON(dir == DMA_NONE))
> +		return;
> +
> +	data = find_xen_grant_dma_data(dev);
> +	if (!data)
> +		return;
> +
> +	if (unlikely(data->broken))
> +		return;
> +
> +	grant = dma_to_grant(dma_handle);
> +
> +	for (i = 0; i < n_pages; i++) {
> +		if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
> +			dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
> +			data->broken = true;
> +			return;
> +		}
> +	}
> +
> +	gnttab_free_grant_reference_seq(grant, n_pages);
> +}
> +
> +static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
> +				   int nents, enum dma_data_direction dir,
> +				   unsigned long attrs)
> +{
> +	struct scatterlist *s;
> +	unsigned int i;
> +
> +	if (WARN_ON(dir == DMA_NONE))
> +		return;
> +
> +	for_each_sg(sg, s, nents, i)
> +		xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
> +				attrs);
> +}
> +
> +static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
> +				int nents, enum dma_data_direction dir,
> +				unsigned long attrs)
> +{
> +	struct scatterlist *s;
> +	unsigned int i;
> +
> +	if (WARN_ON(dir == DMA_NONE))
> +		return -EINVAL;
> +
> +	for_each_sg(sg, s, nents, i) {
> +		s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
> +				s->length, dir, attrs);
> +		if (s->dma_address == DMA_MAPPING_ERROR)
> +			goto out;
> +
> +		sg_dma_len(s) = s->length;
> +	}
> +
> +	return nents;
> +
> +out:
> +	xen_grant_dma_unmap_sg(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
> +	sg_dma_len(sg) = 0;
> +
> +	return -EIO;
> +}
> +
> +static int xen_grant_dma_supported(struct device *dev, u64 mask)
> +{
> +	return mask == DMA_BIT_MASK(64);
> +}
> +
> +static const struct dma_map_ops xen_grant_dma_ops = {
> +	.alloc = xen_grant_dma_alloc,
> +	.free = xen_grant_dma_free,
> +	.alloc_pages = xen_grant_dma_alloc_pages,
> +	.free_pages = xen_grant_dma_free_pages,
> +	.mmap = dma_common_mmap,
> +	.get_sgtable = dma_common_get_sgtable,
> +	.map_page = xen_grant_dma_map_page,
> +	.unmap_page = xen_grant_dma_unmap_page,
> +	.map_sg = xen_grant_dma_map_sg,
> +	.unmap_sg = xen_grant_dma_unmap_sg,
> +	.dma_supported = xen_grant_dma_supported,
> +};
> +
> +void xen_grant_setup_dma_ops(struct device *dev)
> +{
> +	struct xen_grant_dma_data *data;
> +
> +	data = find_xen_grant_dma_data(dev);
> +	if (data) {
> +		dev_err(dev, "Xen grant DMA data is already created\n");
> +		return;
> +	}
> +
> +	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
> +	if (!data)
> +		goto err;
> +
> +	/* XXX The dom0 is hardcoded as the backend domain for now */
> +	data->backend_domid = 0;
> +
> +	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
> +			GFP_KERNEL))) {
> +		dev_err(dev, "Cannot store Xen grant DMA data\n");
> +		goto err;
> +	}
> +
> +	dev->dma_ops = &xen_grant_dma_ops;
> +
> +	return;
> +
> +err:
> +	dev_err(dev, "Cannot set up Xen grant DMA ops, retain platform DMA ops\n");
> +}
> +
> +MODULE_DESCRIPTION("Xen grant DMA-mapping layer");
> +MODULE_AUTHOR("Juergen Gross <jgross@suse.com>");
> +MODULE_LICENSE("GPL");
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index a3584a3..4f9fad5 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -221,4 +221,12 @@ static inline void xen_preemptible_hcall_end(void) { }
>  
>  #endif /* CONFIG_XEN_PV && !CONFIG_PREEMPTION */
>  
> +#ifdef CONFIG_XEN_GRANT_DMA_OPS
> +void xen_grant_setup_dma_ops(struct device *dev);
> +#else
> +static inline void xen_grant_setup_dma_ops(struct device *dev)
> +{
> +}
> +#endif /* CONFIG_XEN_GRANT_DMA_OPS */
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 2.7.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:35:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340095.565035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCKE-0007J8-4d; Wed, 01 Jun 2022 00:35:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340095.565035; Wed, 01 Jun 2022 00:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCKD-0007J1-Vc; Wed, 01 Jun 2022 00:35:01 +0000
Received: by outflank-mailman (input) for mailman id 340095;
 Wed, 01 Jun 2022 00:35:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwCKC-0007Iq-Cn
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 00:35:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa016be4-e142-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 02:34:58 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3DF01614B3;
 Wed,  1 Jun 2022 00:34:57 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D44A2C385A9;
 Wed,  1 Jun 2022 00:34:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa016be4-e142-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654043696;
	bh=AGLxz/xreC1/Xeaa2Sjt8QqZXIylnuwZjPxvPygrygg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DfuMHCjbgseVb9ElRvU6+fEgDpIv3ulhOxsoIPOQDdh5GUh8KjIJ28oGLRaKeGvWL
	 0OGOBkYVHKvSBhLuz9k1LfSVAr5J41JWFoFQ4TABVCeTglNoTjSMmz8FJQ49sdckLg
	 qsGKa+Yr+MoyMmAdmNIWUZGeNcE1QpV9WDLAqZFgKgw12wg03tSKY2yAzfzVh+OlIr
	 IhSeKnYLMf2s3ZC0XoXksZyr4Mb/ha+g8gPhA0eQAKOPF5QkoCt5VR2mG2iYj77PY2
	 +OIvjXfirEI3JrU/2GAu0wBIXEs9XmzEmlNrwfNOn0Ao2FA84oEYKiBszft0wz/igv
	 pc1IcOOf15l3Q==
Date: Tue, 31 May 2022 17:34:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, devicetree@vger.kernel.org, 
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
    iommu@lists.linux-foundation.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Rob Herring <robh+dt@kernel.org>, Joerg Roedel <joro@8bytes.org>, 
    Will Deacon <will@kernel.org>, 
    Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>, 
    Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Christoph Hellwig <hch@infradead.org>, Arnd Bergmann <arnd@arndb.de>
Subject: Re: [PATCH V3 5/8] dt-bindings: Add xen,grant-dma IOMMU description
 for xen-grant DMA ops
In-Reply-To: <1653944417-17168-6-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2205311726000.1905099@ubuntu-linux-20-04-desktop>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com> <1653944417-17168-6-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The main purpose of this binding is to communicate Xen specific
> information using generic IOMMU device tree bindings (which is
> a good fit here) rather than introducing a custom property.
> 
> Introduce Xen specific IOMMU for the virtualized device (e.g. virtio)
> to be used by Xen grant DMA-mapping layer in the subsequent commit.
> 
> The reference to the Xen-specific IOMMU node via the "iommus" property
> indicates that Xen grant mappings need to be enabled for the device,
> and it specifies the ID of the domain where the corresponding backend
> resides. The domid (domain ID) is used as an argument to the Xen grant
> mapping APIs.
> 
> This is needed for the option to restrict memory access using Xen
> grant mappings to work; its primary goal is to enable using virtio
> devices in Xen guests.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> Changes RFC -> V1:
>    - update commit subject/description and text in description
>    - move to devicetree/bindings/arm/
> 
> Changes V1 -> V2:
>    - update text in description
>    - change the maintainer of the binding
>    - fix validation issue
>    - reference xen,dev-domid.yaml schema from virtio/mmio.yaml
> 
> Changes V2 -> V3:
>    - Stefano already gave his Reviewed-by; I dropped it due to the (significant) changes
>    - use generic IOMMU device tree bindings instead of custom property
>      "xen,dev-domid"
>    - change commit subject and description, was
>      "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"
> ---
>  .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 49 ++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> 
> diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> new file mode 100644
> index 00000000..ab5765c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> @@ -0,0 +1,49 @@
> +# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Xen specific IOMMU for virtualized devices (e.g. virtio)
> +
> +maintainers:
> +  - Stefano Stabellini <sstabellini@kernel.org>
> +
> +description:
> +  The reference to Xen specific IOMMU node using "iommus" property indicates
> +  that Xen grant mappings need to be enabled for the device, and it specifies
> +  the ID of the domain where the corresponding backend resides.
> +  The binding is required to restrict memory access using Xen grant mappings.

I think this is OK and in line with the discussion we had on the list. I
propose the following wording instead:

"""
The Xen IOMMU represents the Xen grant table interface. Grant mappings
are to be used with devices connected to the Xen IOMMU using the
"iommus" property, which also specifies the ID of the backend domain.
The binding is required to restrict memory access using Xen grant
mappings.
"""


> +properties:
> +  compatible:
> +    const: xen,grant-dma
> +
> +  '#iommu-cells':
> +    const: 1
> +    description:
> +      Xen specific IOMMU is multiple-master IOMMU device.
> +      The single cell describes the domid (domain ID) of the domain where
> +      the backend is running.

Here I would say:

"""
The single cell is the domid (domain ID) of the domain where the backend
is running.
"""

With the two wording improvements:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> +required:
> +  - compatible
> +  - "#iommu-cells"
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    xen_iommu {
> +        compatible = "xen,grant-dma";
> +        #iommu-cells = <1>;
> +    };
> +
> +    virtio@3000 {
> +        compatible = "virtio,mmio";
> +        reg = <0x3000 0x100>;
> +        interrupts = <41>;
> +
> +        /* The backend is located in Xen domain with ID 1 */
> +        iommus = <&xen_iommu 1>;
> +    };
> -- 
> 2.7.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:38:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340103.565046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCNg-0007vK-Ku; Wed, 01 Jun 2022 00:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340103.565046; Wed, 01 Jun 2022 00:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCNg-0007vD-Hi; Wed, 01 Jun 2022 00:38:36 +0000
Received: by outflank-mailman (input) for mailman id 340103;
 Wed, 01 Jun 2022 00:38:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwCNe-0007ur-Su
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 00:38:34 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2932b8cf-e143-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 02:38:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id EBE7FCE1902;
 Wed,  1 Jun 2022 00:38:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C660CC385A9;
 Wed,  1 Jun 2022 00:38:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2932b8cf-e143-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654043908;
	bh=2Ox7lz8z9nRCuRY+W2ivcqHbD3jf8K4mU3wNbncCJvE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ACXErPAVzHV3tUf/6olYKLPTuR5bOIjlhKKUyY3xFc6DSImyxGo37EY6xCnvD9bgw
	 4DM4E4vCLd9QLtYd71Fm48/NYJeeO3fxr3RYtBwZBdZAw1NvXfvLDGSBjRqk1TJg9o
	 ULhrZzG+WwQZXpVQXGUygWPinwY5HeAFfxKwWBc8TSkrAq1iOEM7QeNmt0iYnZcdpf
	 77HGEjp94iX1h/RI/iX9rREkuHalhYGzwGp2nRYo0a93bxi2pNVZJfU3CHiiQBxeKC
	 OMuqFBYwGsevlojee5Bjqd3LRBKuefX41tvKdrqypPorQ2gX65nGe8y/DoyrPYAfDV
	 9AModjdsChDtg==
Date: Tue, 31 May 2022 17:38:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    linux-arm-kernel@lists.infradead.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH V3 6/8] xen/grant-dma-iommu: Introduce stub IOMMU
 driver
In-Reply-To: <1653944417-17168-7-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2205311738180.1905099@ubuntu-linux-20-04-desktop>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com> <1653944417-17168-7-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> In order for the Xen grant DMA-mapping layer to reuse the generic IOMMU
> device tree bindings, we need to add this stub driver from a fw_devlink
> perspective (grant-dma-ops cannot be converted into a proper IOMMU
> driver).
> 
> Otherwise, just reusing IOMMU bindings (without having a corresponding
> driver) leads to the deferred probe timeout afterwards, because
> the IOMMU device never becomes available.
> 
> This stub driver does nothing except register empty iommu_ops; the
> upper "of_iommu" layer will treat this as a NO_IOMMU condition
> and won't return -EPROBE_DEFER.
> 
> As this driver is quite different from most hardware IOMMU
> implementations and is only needed in Xen guests, place it in the
> drivers/xen directory. The subsequent commit will make use of it.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> According to the discussion at:
> https://lore.kernel.org/xen-devel/c0f78aab-e723-fe00-a310-9fe52ec75e48@gmail.com/
> 
> Changes V2 -> V3:
>    - new patch
> ---
>  drivers/xen/Kconfig           |  4 +++
>  drivers/xen/Makefile          |  1 +
>  drivers/xen/grant-dma-iommu.c | 78 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 83 insertions(+)
>  create mode 100644 drivers/xen/grant-dma-iommu.c
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index a7bd8ce..35d20d9 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -335,6 +335,10 @@ config XEN_UNPOPULATED_ALLOC
>  	  having to balloon out RAM regions in order to obtain physical memory
>  	  space to create such mappings.
>  
> +config XEN_GRANT_DMA_IOMMU
> +	bool
> +	select IOMMU_API
> +
>  config XEN_GRANT_DMA_OPS
>  	bool
>  	select DMA_OPS
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 1a23cb0..c0503f1 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -40,3 +40,4 @@ xen-privcmd-y				:= privcmd.o privcmd-buf.o
>  obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
>  obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
>  obj-$(CONFIG_XEN_GRANT_DMA_OPS)		+= grant-dma-ops.o
> +obj-$(CONFIG_XEN_GRANT_DMA_IOMMU)	+= grant-dma-iommu.o
> diff --git a/drivers/xen/grant-dma-iommu.c b/drivers/xen/grant-dma-iommu.c
> new file mode 100644
> index 00000000..16b8bc0
> --- /dev/null
> +++ b/drivers/xen/grant-dma-iommu.c
> @@ -0,0 +1,78 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Stub IOMMU driver which does nothing.
> + * The main purpose of it being present is to reuse generic IOMMU device tree
> + * bindings by Xen grant DMA-mapping layer.
> + *
> + * Copyright (C) 2022 EPAM Systems Inc.
> + */
> +
> +#include <linux/iommu.h>
> +#include <linux/of.h>
> +#include <linux/platform_device.h>
> +
> +struct grant_dma_iommu_device {
> +	struct device *dev;
> +	struct iommu_device iommu;
> +};
> +
> +/* Nothing is really needed here */
> +static const struct iommu_ops grant_dma_iommu_ops;
> +
> +static const struct of_device_id grant_dma_iommu_of_match[] = {
> +	{ .compatible = "xen,grant-dma" },
> +	{ },
> +};
> +
> +static int grant_dma_iommu_probe(struct platform_device *pdev)
> +{
> +	struct grant_dma_iommu_device *mmu;
> +	int ret;
> +
> +	mmu = devm_kzalloc(&pdev->dev, sizeof(*mmu), GFP_KERNEL);
> +	if (!mmu)
> +		return -ENOMEM;
> +
> +	mmu->dev = &pdev->dev;
> +
> +	ret = iommu_device_register(&mmu->iommu, &grant_dma_iommu_ops, &pdev->dev);
> +	if (ret)
> +		return ret;
> +
> +	platform_set_drvdata(pdev, mmu);
> +
> +	return 0;
> +}
> +
> +static int grant_dma_iommu_remove(struct platform_device *pdev)
> +{
> +	struct grant_dma_iommu_device *mmu = platform_get_drvdata(pdev);
> +
> +	platform_set_drvdata(pdev, NULL);
> +	iommu_device_unregister(&mmu->iommu);
> +
> +	return 0;
> +}
> +
> +static struct platform_driver grant_dma_iommu_driver = {
> +	.driver = {
> +		.name = "grant-dma-iommu",
> +		.of_match_table = grant_dma_iommu_of_match,
> +	},
> +	.probe = grant_dma_iommu_probe,
> +	.remove = grant_dma_iommu_remove,
> +};
> +
> +static int __init grant_dma_iommu_init(void)
> +{
> +	struct device_node *iommu_np;
> +
> +	iommu_np = of_find_matching_node(NULL, grant_dma_iommu_of_match);
> +	if (!iommu_np)
> +		return 0;
> +
> +	of_node_put(iommu_np);
> +
> +	return platform_driver_register(&grant_dma_iommu_driver);
> +}
> +subsys_initcall(grant_dma_iommu_init);
> -- 
> 2.7.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:41:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:41:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340111.565057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCQt-00010c-3y; Wed, 01 Jun 2022 00:41:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340111.565057; Wed, 01 Jun 2022 00:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCQt-00010V-0U; Wed, 01 Jun 2022 00:41:55 +0000
Received: by outflank-mailman (input) for mailman id 340111;
 Wed, 01 Jun 2022 00:41:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwCQs-00010L-HX; Wed, 01 Jun 2022 00:41:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwCQs-0000Ve-F3; Wed, 01 Jun 2022 00:41:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwCQr-0001Ii-B2; Wed, 01 Jun 2022 00:41:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwCQr-0006L4-8N; Wed, 01 Jun 2022 00:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fb4vqnE4I/K1gKvzg7PW1MqwqWr7PyZTww9RxfC+1OQ=; b=4dC97vJGbXgLSc7bVd5k4Kn2AC
	xkptuKAE6SvYDQjO9cD8gPlOCsxb/PMVS/+sPLgfLVKZNosLKZIOZx3TfWuKUOWQpLzIYS0JmmyG0
	JPpLr6fGFfI4MyR2lHr2W+3kN0ymU/5mrTb2LEBT2uT6NSpf/+8lTOnVZ974wZ2WTpOA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170790-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170790: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
X-Osstest-Versions-That:
    xen=49dd52fb1311dadab29f6634d0bc1f4c022c357a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 00:41:53 +0000

flight 170790 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170790/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
baseline version:
 xen                  49dd52fb1311dadab29f6634d0bc1f4c022c357a

Last test of basis   170730  2022-05-25 14:01:47 Z    6 days
Testing same since   170790  2022-05-31 21:01:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   49dd52fb13..09a6a71097  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 00:53:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 00:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340121.565068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCcU-0002fL-7i; Wed, 01 Jun 2022 00:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340121.565068; Wed, 01 Jun 2022 00:53:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCcU-0002fE-4m; Wed, 01 Jun 2022 00:53:54 +0000
Received: by outflank-mailman (input) for mailman id 340121;
 Wed, 01 Jun 2022 00:53:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwCcT-0002f8-BS
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 00:53:53 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d04c200-e145-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 02:53:51 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D39666149D;
 Wed,  1 Jun 2022 00:53:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9ADDDC385A9;
 Wed,  1 Jun 2022 00:53:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d04c200-e145-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654044829;
	bh=EQ7y3MecILqxCm4jEd/aUGty8mqg6Pqf5GSMHZ3CV4A=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=iT2Hr0gEyj9nwHnMgGrg5ZFexC2dH48t7hzBN1IDF9PAB6qUaRSinConRCTu8BtPE
	 R1tuk/hLs/JgCcKEnZF5j9oLorGTg7DzoDSvFqO/vBKtO/JRHkeam7mqEoA2a+djs4
	 lKbMCzYh91dyRR5NW1X38ykgpdgx6O1b5QyzaXy7k0yyZYx/WtqiKC/ahl+i0FE+8W
	 GtN/07/8TzuFpbJcmZTqbfjai2y1dAUNiTnch5ddRsKJIMmFMqsD4VF61FuxALvEZZ
	 9PkilrXJ03elxU2G0UwJs/nlGnXPscuyIx7TkTVve2azJnESxMwH6ccexp8qW2NAFK
	 tbfL7jOR/49tg==
Date: Tue, 31 May 2022 17:53:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    linux-arm-kernel@lists.infradead.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH V3 7/8] xen/grant-dma-ops: Retrieve the ID of backend's
 domain for DT devices
In-Reply-To: <1653944417-17168-8-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2205311741470.1905099@ubuntu-linux-20-04-desktop>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com> <1653944417-17168-8-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII



On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Use the presence of "iommus" property pointed to the IOMMU node with
> recently introduced "xen,grant-dma" compatible as a clear indicator
> of enabling Xen grant mappings scheme for that device and read the ID
> of Xen domain where the corresponding backend is running. The domid
> (domain ID) is used as an argument to the Xen grant mapping APIs.
> 
> To avoid the deferred probe timeout which occurs after reusing the
> generic IOMMU device tree bindings (because the IOMMU device never
> becomes available), enable the recently introduced stub IOMMU driver
> by selecting XEN_GRANT_DMA_IOMMU.
> 
> Also introduce xen_is_grant_dma_device() to check whether xen-grant
> DMA ops need to be set for a passed device.
> 
> Remove the hardcoded domid 0 in xen_grant_setup_dma_ops().
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes RFC -> V1:
>    - new patch, split required changes from commit:
>     "[PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer"
>    - update checks in xen_virtio_setup_dma_ops() to only support
>      DT devices for now
>    - remove the "virtio,mmio" check from xen_is_virtio_device()
>    - remane everything according to the new naming scheme:
>      s/virtio/grant_dma
> 
> Changes V1 -> V2:
>    - remove dev_is_pci() check in xen_grant_setup_dma_ops()
>    - remove EXPORT_SYMBOL_GPL(xen_is_grant_dma_device);
> 
> Changes V2 -> V3:
>    - Stefano already gave his Reviewed-by; I dropped it due to the (significant) changes
>    - update commit description
>    - reuse generic IOMMU device tree bindings, select XEN_GRANT_DMA_IOMMU
>      to avoid the deferred probe timeout
> ---
>  drivers/xen/Kconfig         |  1 +
>  drivers/xen/grant-dma-ops.c | 48 ++++++++++++++++++++++++++++++++++++++-------
>  include/xen/xen-ops.h       |  5 +++++
>  3 files changed, 47 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 35d20d9..bfd5f4f 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -347,6 +347,7 @@ config XEN_VIRTIO
>  	bool "Xen virtio support"
>  	depends on VIRTIO
>  	select XEN_GRANT_DMA_OPS
> +	select XEN_GRANT_DMA_IOMMU if OF
>  	help
>  	  Enable virtio support for running as Xen guest. Depending on the
>  	  guest type this will require special support on the backend side
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index 44659f4..6586152 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -55,11 +55,6 @@ static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
>   * Such a DMA address is formed by using the grant reference as a frame
>   * number and setting the highest address bit (this bit is for the backend
>   * to be able to distinguish it from e.g. a mmio address).
> - *
> - * Note that for now we hard wire dom0 to be the backend domain. In order
> - * to support any domain as backend we'd need to add a way to communicate
> - * the domid of this backend, e.g. via Xenstore, via the PCI-device's
> - * config space or DT/ACPI.
>   */
>  static void *xen_grant_dma_alloc(struct device *dev, size_t size,
>  				 dma_addr_t *dma_handle, gfp_t gfp,
> @@ -275,9 +270,26 @@ static const struct dma_map_ops xen_grant_dma_ops = {
>  	.dma_supported = xen_grant_dma_supported,
>  };
>  
> +bool xen_is_grant_dma_device(struct device *dev)
> +{
> +	struct device_node *iommu_np;
> +	bool has_iommu;
> +
> +	/* XXX Handle only DT devices for now */
> +	if (!dev->of_node)
> +		return false;
> +
> +	iommu_np = of_parse_phandle(dev->of_node, "iommus", 0);
> +	has_iommu = iommu_np && of_device_is_compatible(iommu_np, "xen,grant-dma");
> +	of_node_put(iommu_np);
> +
> +	return has_iommu;
> +}
> +
>  void xen_grant_setup_dma_ops(struct device *dev)
>  {
>  	struct xen_grant_dma_data *data;
> +	struct of_phandle_args iommu_spec;
>  
>  	data = find_xen_grant_dma_data(dev);
>  	if (data) {
> @@ -285,12 +297,34 @@ void xen_grant_setup_dma_ops(struct device *dev)
>  		return;
>  	}
>  
> +	/* XXX ACPI device unsupported for now */
> +	if (!dev->of_node)
> +		goto err;
> +
> +	if (of_parse_phandle_with_args(dev->of_node, "iommus", "#iommu-cells",
> +			0, &iommu_spec)) {
> +		dev_err(dev, "Cannot parse iommus property\n");
> +		goto err;
> +	}
> +
> +	if (!of_device_is_compatible(iommu_spec.np, "xen,grant-dma") ||
> +			iommu_spec.args_count != 1) {
> +		dev_err(dev, "Incompatible IOMMU node\n");
> +		of_node_put(iommu_spec.np);
> +		goto err;
> +	}
> +
> +	of_node_put(iommu_spec.np);
> +
>  	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
>  	if (!data)
>  		goto err;
>  
> -	/* XXX The dom0 is hardcoded as the backend domain for now */
> -	data->backend_domid = 0;
> +	/*
> +	 * The endpoint ID here means the ID of the domain where the corresponding
> +	 * backend is running
> +	 */
> +	data->backend_domid = iommu_spec.args[0];
>  
>  	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
>  			GFP_KERNEL))) {
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 4f9fad5..62be9dc 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -223,10 +223,15 @@ static inline void xen_preemptible_hcall_end(void) { }
>  
>  #ifdef CONFIG_XEN_GRANT_DMA_OPS
>  void xen_grant_setup_dma_ops(struct device *dev);
> +bool xen_is_grant_dma_device(struct device *dev);
>  #else
>  static inline void xen_grant_setup_dma_ops(struct device *dev)
>  {
>  }
> +static inline bool xen_is_grant_dma_device(struct device *dev)
> +{
> +	return false;
> +}
>  #endif /* CONFIG_XEN_GRANT_DMA_OPS */
>  
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 2.7.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:04:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:04:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340130.565079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCmf-0002bC-B6; Wed, 01 Jun 2022 01:04:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340130.565079; Wed, 01 Jun 2022 01:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwCmf-0002b5-7a; Wed, 01 Jun 2022 01:04:25 +0000
Received: by outflank-mailman (input) for mailman id 340130;
 Wed, 01 Jun 2022 01:04:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwCme-0002az-ES
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:04:24 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c605d59f-e146-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 03:04:23 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 79F94B816D7;
 Wed,  1 Jun 2022 01:04:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 73A09C385A9;
 Wed,  1 Jun 2022 01:04:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c605d59f-e146-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654045461;
	bh=kW5xdDcGdNBM55f+KPx7Jkd8syxwJ//ekvpEVj5fpIg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=j3p+qbK3MoH6Hna9Wv7INOefAW+kCSe8/T9nNxnQs/3gxiqD/91gJIywTlvmp6ltD
	 3WNm901iZLSNvlpjNk1+yahbcFFTqIIensrcyZ0aP1EIFWRqFghA+EU71IhRpqQj7T
	 vvhYDnYOBBZsgAqz9fQ65ZztXFuM42FoQdfWnHZKvCKIvh2dlq0eHJSVQ5gd50HZhs
	 tZlNdloM5zXoBTFn5PFz2QfH1TquihkOEYq+DA7PupBAc5kJwYFh60Rd8f2nDSTjiX
	 8Z+X8EoM2lJ63L+Dq+GMQ3lkMosnKX2OQJvkr7n0H0n4fFmUZqeuve9K6qTlEiu2T8
	 tAYEAdZ2KVc3g==
Date: Tue, 31 May 2022 18:04:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
In-Reply-To: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2205311755010.1905099@ubuntu-linux-20-04-desktop>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-526242627-1654045461=:1905099"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-526242627-1654045461=:1905099
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Reuse the generic IOMMU device tree bindings to communicate Xen-specific
> information for the virtio devices for which restricted memory access
> using Xen grant mappings needs to be enabled.
> 
> Insert an "iommus" property pointing to the IOMMU node with the
> "xen,grant-dma" compatible into all virtio device nodes whose backends
> are going to run in non-hardware domains (which are untrusted by
> default).
> 
> Based on device-tree binding from Linux:
> Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> 
> An example of the generated nodes:
> 
> xen_iommu {
>     compatible = "xen,grant-dma";
>     #iommu-cells = <0x01>;
>     phandle = <0xfde9>;
> };
> 
> virtio@2000000 {
>     compatible = "virtio,mmio";
>     reg = <0x00 0x2000000 0x00 0x200>;
>     interrupts = <0x00 0x01 0xf01>;
>     interrupt-parent = <0xfde8>;
>     dma-coherent;
>     iommus = <0xfde9 0x01>;
> };
> 
> virtio@2000200 {
>     compatible = "virtio,mmio";
>     reg = <0x00 0x2000200 0x00 0x200>;
>     interrupts = <0x00 0x02 0xf01>;
>     interrupt-parent = <0xfde8>;
>     dma-coherent;
>     iommus = <0xfde9 0x01>;
> };
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> !!! This patch is based on the not-yet-upstreamed “Virtio support for
> toolstack on Arm” V8 series, which is currently under review:
> https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
> 
> The new device-tree binding (commit #5) is part of the solution to restrict
> memory access under Xen using the xen-grant DMA-mapping layer (which is also
> under review):
> https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
> 
> Changes RFC -> V1:
>    - update commit description
>    - rebase according to the recent changes to
>      "libxl: Introduce basic virtio-mmio support on Arm"
> 
> Changes V1 -> V2:
>    - Henry already gave his Reviewed-by, I dropped it due to the changes
>    - use generic IOMMU device tree bindings instead of custom property
>      "xen,dev-domid"
>    - change commit subject and description, was
>      "libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device node"
> ---
>  tools/libs/light/libxl_arm.c          | 49 ++++++++++++++++++++++++++++++++---
>  xen/include/public/device_tree_defs.h |  1 +
>  2 files changed, 47 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index 9be9b2a..72da3b1 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -865,9 +865,32 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
>      return 0;
>  }
>  
> +static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
> +{
> +    int res;
> +
> +    /* See Linux Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml */
> +    res = fdt_begin_node(fdt, "xen_iommu");
> +    if (res) return res;
> +
> +    res = fdt_property_compat(gc, fdt, 1, "xen,grant-dma");
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#iommu-cells", 1);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_IOMMU);
> +    if (res) return res;
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
>  
>  static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
> -                                 uint64_t base, uint32_t irq)
> +                                 uint64_t base, uint32_t irq,
> +                                 uint32_t backend_domid)
>  {
>      int res;
>      gic_interrupt intr;
> @@ -890,6 +913,16 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
>      res = fdt_property(fdt, "dma-coherent", NULL, 0);
>      if (res) return res;
>  
> +    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
> +        uint32_t iommus_prop[2];
> +
> +        iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
> +        iommus_prop[1] = cpu_to_fdt32(backend_domid);
> +
> +        res = fdt_property(fdt, "iommus", iommus_prop, sizeof(iommus_prop));
> +        if (res) return res;
> +    }
> +
>      res = fdt_end_node(fdt);
>      if (res) return res;
>  
> @@ -1097,6 +1130,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>      size_t fdt_size = 0;
>      int pfdt_size = 0;
>      libxl_domain_build_info *const info = &d_config->b_info;
> +    bool iommu_created;
>      unsigned int i;
>  
>      const libxl_version_info *vers;
> @@ -1204,11 +1238,20 @@ next_resize:
>          if (d_config->num_pcidevs)
>              FDT( make_vpci_node(gc, fdt, ainfo, dom) );
>  
> +        iommu_created = false;
>          for (i = 0; i < d_config->num_disks; i++) {
>              libxl_device_disk *disk = &d_config->disks[i];
>  
> -            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
> -                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
> +            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
> +                if (disk->backend_domid != LIBXL_TOOLSTACK_DOMID &&
> +                    !iommu_created) {
> +                    FDT( make_xen_iommu_node(gc, fdt) );
> +                    iommu_created = true;
> +                }
> +
> +                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
> +                                           disk->backend_domid) );
> +            }

This is a matter of taste, as the code also works as is, but I would
do the following instead:


bool iommu_needed = false;

for (i = 0; i < d_config->num_disks; i++) {
    libxl_device_disk *disk = &d_config->disks[i];

    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO &&
        disk->backend_domid != LIBXL_TOOLSTACK_DOMID)
        iommu_needed = true;
}

if (iommu_needed)
    FDT( make_xen_iommu_node(gc, fdt) );

for (i = 0; i < d_config->num_disks; i++) {
    libxl_device_disk *disk = &d_config->disks[i];

    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
        FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
                                   disk->backend_domid) );
}

But I would give my Acked-by either way.


>          }
>  
>          if (pfdt)
> diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h
> index 209d43d..df58944 100644
> --- a/xen/include/public/device_tree_defs.h
> +++ b/xen/include/public/device_tree_defs.h
> @@ -7,6 +7,7 @@
>   * onwards. Reserve a high value for the GIC phandle.
>   */
>  #define GUEST_PHANDLE_GIC (65000)
> +#define GUEST_PHANDLE_IOMMU (GUEST_PHANDLE_GIC + 1)
>  
>  #define GUEST_ROOT_ADDRESS_CELLS 2
>  #define GUEST_ROOT_SIZE_CELLS 2
> -- 
> 2.7.4
> 
--8323329-526242627-1654045461=:1905099--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:25:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:25:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340139.565095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwD7M-0005GX-Bs; Wed, 01 Jun 2022 01:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340139.565095; Wed, 01 Jun 2022 01:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwD7M-0005G6-6k; Wed, 01 Jun 2022 01:25:48 +0000
Received: by outflank-mailman (input) for mailman id 340139;
 Wed, 01 Jun 2022 01:25:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwD7K-0005E9-LG
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:25:46 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c19e9f6d-e149-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 03:25:44 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 60222614D1;
 Wed,  1 Jun 2022 01:25:43 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C92DC385A9;
 Wed,  1 Jun 2022 01:25:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c19e9f6d-e149-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654046742;
	bh=l3SMqciVZHaKgi7Ph50tHbWwmzq4dF2H+tB+R/o8Cs0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WLzJWy2WWEM3LPTRHgNloFvWlijXav99apATVtW3l0K/SZAY0YsAShjPpkIt/Gmu1
	 alxnxt8uYLEsghk9n7xaY/J5ibq7al7ZMjBmmRT0eJ2hUK9XwQkP5XYBGi1xiL9qv0
	 anlZbRhy2FMRIDjofiuCYlGJ5Ury1gDrCN4/0igO3jsDd4sN3xv5uB78Cig8rZs7f8
	 23WJosoXpvpahOS1CRgfy0SJL43ftXQju+9HjdSD+SBqU4UoSXdn54rVEsPNpCXzj2
	 4Zs9+nNsyUG3ec69WnLlc3qVEv/L3xXyTbsl3xbQfBC7kB4ONVArkPpVMWgiJStXRo
	 mRqh/WrR8V0/A==
Date: Tue, 31 May 2022 18:25:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <Andrew.Cooper3@citrix.com>, 
    Roger Pau Monne <roger.pau@citrix.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 1/2] docs/misra: introduce rules.rst
In-Reply-To: <10687069-5498-11f8-5474-fa34ee837025@xen.org>
Message-ID: <alpine.DEB.2.22.394.2205311823570.1905099@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2205241650160.1905099@ubuntu-linux-20-04-desktop> <alpine.DEB.2.22.394.2205251740280.1905099@ubuntu-linux-20-04-desktop> <0cf7383d-896e-76f0-b1cc-2f20bd7f368e@suse.com> <D9A44AC3-A959-442F-A94C-C9EFB359BEF1@arm.com>
 <da68ca4d-3498-ec6a-7a5d-040f23dd41a6@suse.com> <765738F2-97E9-40EF-A50E-2912C7D2A286@arm.com> <alpine.DEB.2.22.394.2205261233000.1905099@ubuntu-linux-20-04-desktop> <c0b481fb-5172-3515-764f-dba9f906c049@suse.com> <alpine.DEB.2.22.394.2205271602320.1905099@ubuntu-linux-20-04-desktop>
 <3882cc86-72a7-8e19-5f7b-b1cc89cce02e@xen.org> <5b790260-dd5c-9f62-7151-7684a0dc18fa@suse.com> <0cc9c342-f355-5816-09e9-a996624c6a79@xen.org> <6d6115a9-2810-0c9b-bba3-968b3ac50110@suse.com> <d4c6aa78-cc94-274c-db05-c62ff0badc9d@xen.org>
 <dcafd462-f912-8c59-f1bf-32f65ae45fd4@suse.com> <A7121189-9A68-41C6-A8EF-D823A0BBF4FF@citrix.com> <138D3C39-74A6-46CB-B598-2FC5FAD1E52D@arm.com> <10687069-5498-11f8-5474-fa34ee837025@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-756105103-1654046742=:1905099"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-756105103-1654046742=:1905099
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 31 May 2022, Julien Grall wrote:
> Hi,
> 
> On 30/05/2022 14:35, Bertrand Marquis wrote:
> > > Obviously something *else* we might want is a more convenient way to keep
> > > that rationale for the future, when we start to officially document
> > > deviations.  Given that the scanner will point out all the places where
> > > deviations happen, I don’t think an unstructured comment with an informal
> > > summary of the justification would be a problem — it seems like it would
> > > be a lot easier, when we start to officially document deviations, to
> > > transform comments in the existing codebase, than to search through the
> > > mailing lists and/or git commit history to find the rationale (or try to
> > > work out unaided what the intent was).  But I don’t have strong opinions
> > > on the matter.
> > 
> > Maybe we could agree on a simple tag to start with, which can later be
> > improved (Luca Fancellu on my side will start working on that with the
> > FuSa SIG and Eclair next month).
> > 
> > So I would suggest:
> > 
> > /**
> >   * MISRA_DEV: Rule ID
> >   * xxxxx justification
> >   *
> >   */
> > 
> > Once we have defined the final approach, we will replace those entries
> > with the new system.
> > 
> > Would that be an agreeable solution?
> 
> I am fine with that. One nit though: in Xen, the first line of a
> multi-line comment is "/*" rather than "/**".

I went with this (and added it at the top of the file). As George wrote,
I don't have a strong opinion; at this stage we just need to get the
ball rolling and all options are OK.
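
With that nit applied, the interim tag (keeping the placeholders from
Bertrand's proposal, to be filled in per deviation) would look like:

```c
/*
 * MISRA_DEV: Rule ID
 * xxxxx justification
 */
```

The only change from the proposal above is "/*" instead of "/**" on the
first line, matching the Xen comment style.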
--8323329-756105103-1654046742=:1905099--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:25:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340138.565090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwD7M-0005ER-20; Wed, 01 Jun 2022 01:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340138.565090; Wed, 01 Jun 2022 01:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwD7L-0005EK-Vb; Wed, 01 Jun 2022 01:25:47 +0000
Received: by outflank-mailman (input) for mailman id 340138;
 Wed, 01 Jun 2022 01:25:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Y5RR=WI=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1nwD7K-0005E9-67
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:25:46 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [2001:470:1f07:15ff::f7])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfdc4f4d-e149-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 03:25:43 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 2511PJYT083152
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 31 May 2022 21:25:25 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 2511PDJs083151;
 Tue, 31 May 2022 18:25:13 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfdc4f4d-e149-11ec-bd2c-47488cf2e6aa
Date: Tue, 31 May 2022 18:25:13 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/3] tools/xl: Allow specifying JSON for domain
 configuration file format
Message-ID: <Ypa/+X7FQT2WaX12@mattapan.m5p.com>
References: <cover.1651285313.git.ehem+xen@m5p.com>
 <9aa6160b2664a52ff778fad67c366d67d3a0f8ab.1651285313.git.ehem+xen@m5p.com>
 <Yoeh3nMNW0AfcHr/@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yoeh3nMNW0AfcHr/@perard.uk.xensource.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.5
X-Spam-Checker-Version: SpamAssassin 3.4.5 (2021-03-20) on mattapan.m5p.com

On Fri, May 20, 2022 at 03:12:46PM +0100, Anthony PERARD wrote:
> On Tue, Apr 19, 2022 at 06:23:41PM -0700, Elliott Mitchell wrote:
> > JSON is currently used when saving domains to mass storage.  Being able
> > to use JSON as the normal input to `xl create` has potential to be
> > valuable.  Add the functionality.
> > 
> > Move the memset() earlier so as to allow use of the structure sooner.
> > 
> > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> 
> So, I gave this a try, creating a guest from a JSON config, and it
> fails very early with "Unknown guest type".
> 
> Have you actually tried to create a guest from a config file written in
> JSON?
> 
> Also, this would need documentation about the new option and about the
> format. The man page needs to be edited.
> 
> An example of a config file written in JSON would be nice as well.

I'll be trying to get to these at some point, but there is no timeframe
yet.  This was an idea which occurred to me while looking at things, and
I'm wavering on whether this is the way to go...

The real goal is that I would like to generate a replacement for the
`xendomains` init script.  While functional, the script is woefully
inadequate for anything other than the tiniest installations.

Notably there can be ordering constraints for start/shutdown (worse,
those could be distinct).  One might also wish different strategies for
different domains (some get saved to disk on reboot, some might get
shutdown/restarted).


For some of the configuration for this, adding to domain.cfg files makes
sense.  This, though, raises the issue of what the extra data should
look like.

I'm oscillating between adding a section in something libxl's parser
treats as a comment, versus adding a configuration option to domain.cfg
(libxl's parser ignores unknown settings, which is not entirely good!).
JSON's structure would be good for such an addition, but JSON comes with
its own downsides.
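
As a purely hypothetical sketch (the top-level "c_info"/"b_info" layout
mimics libxl's JSON output, but the "xendomains" section and every field
name inside it are invented here for illustration), the extra data could
ride along in a JSON config and be read by the replacement script while
libxl ignores it:

```python
import json

# Hypothetical JSON guest config carrying an extra "xendomains" section
# with init-script metadata (ordering, shutdown strategy); all names in
# that section are illustrative, not an existing libxl schema.
CONFIG = """
{
    "c_info": { "name": "guest0" },
    "b_info": { "max_vcpus": 2, "max_memkb": 1048576 },
    "xendomains": {
        "start_after": ["storage"],
        "on_host_shutdown": "save"
    }
}
"""

cfg = json.loads(CONFIG)
# The init-script replacement would consult only its own section.
meta = cfg.get("xendomains", {})
print(meta["on_host_shutdown"])  # save
```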

Most likely such a thing would be implemented in Python.  It needs a bit
more math than shell is good for.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:44:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340155.565112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDOv-0008Un-OO; Wed, 01 Jun 2022 01:43:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340155.565112; Wed, 01 Jun 2022 01:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDOv-0008Ug-LM; Wed, 01 Jun 2022 01:43:57 +0000
Received: by outflank-mailman (input) for mailman id 340155;
 Wed, 01 Jun 2022 01:43:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwDOt-0008Ua-Gz
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:43:55 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b7b4e15-e14c-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 03:43:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id A43C8B8175B;
 Wed,  1 Jun 2022 01:43:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 06AC6C385A9;
 Wed,  1 Jun 2022 01:43:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b7b4e15-e14c-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654047832;
	bh=z6rZJmg35HTbcLY6xC8GV/XUKS+KBk9uUWKRQMRZZoU=;
	h=Date:From:To:cc:Subject:From;
	b=Tg4tIFjEbr0/yDE0vMg37RT+FcqGCA+RE1XNCgLhV9YHRoWkpj7f8TqGuc8SPtWLq
	 OBsXSn/yxhNrY+Svx/9ghrlptqHWXLyhUnfJHJ39LxiLU8nyEUrUR1jbFX9DaWCtFH
	 BxbvhM2rn+Da/AH8aQhyLu21N/eBWYrMBJ/ClRv6+9Qjkdau/OZOZlGhlmb7G+BoHs
	 IH6Cu1dsYwrsi/iKIuUB9V2s/TAHGVTMx0YOLf0u7FM2g6/3PcKhPemBs4r93SBc0C
	 VRBYAvTq7PLFqSJj+oF9N309Jb8kMt1/zJ/kvcIF43y7WctNRxa0fu6UEz+pkB9Wbd
	 jaxWLKbneSqhg==
Date: Tue, 31 May 2022 18:43:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, andrew.cooper3@citrix.com, jbeulich@suse.com, 
    roger.pau@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com, 
    George.Dunlap@citrix.com
Subject: [PATCH v2 0/2] introduce docs/misra/rules.rst
Message-ID: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This patch series is a follow-up to the MISRA C meeting last Thursday,
where we went through the list of MISRA C rules on the spreadsheet and
agreed to accept the first ones into the Xen coding style, starting from
Dir 2.1 up until Rule 5.1 (except for Rule 2.2), in descending
popularity order.

This is the full list of accepted rules so far:

Dir 2.1
Dir 4.7
Dir 4.10
Dir 4.14
Rule 1.3
Rule 3.2
Rule 5.1
Rule 6.2
Rule 8.1
Rule 8.4
Rule 8.5
Rule 8.6
Rule 8.8
Rule 8.12

This short patch series adds them as a new document under docs/misra, as
a list in RST format. The file can be used as input to cppcheck via a
small Python script from Bertrand (who will send it to xen-devel
separately).

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:44:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340156.565123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDP6-0000Mh-W5; Wed, 01 Jun 2022 01:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340156.565123; Wed, 01 Jun 2022 01:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDP6-0000MY-Sg; Wed, 01 Jun 2022 01:44:08 +0000
Received: by outflank-mailman (input) for mailman id 340156;
 Wed, 01 Jun 2022 01:44:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwDP5-0008Ua-N7
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:44:07 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5319790f-e14c-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 03:44:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id BBAE8B8175B;
 Wed,  1 Jun 2022 01:44:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 17BBFC3411C;
 Wed,  1 Jun 2022 01:44:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5319790f-e14c-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654047845;
	bh=WcTGrMPln2r4PN7xO0mKkWVF8X4wWkx3gFpXJmHXKEQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=nRGVkIGNwipP5Hv6OA8LFhVgl711eabBTutfxMp3T56F/51EHJHXBupb6vQqC0Xph
	 Ndy95yMWel2pcgn9/g/aS7hyrXIQrPUVoF7Sd8p7prUUbpEAfE9AkXcbfpm2fXARm1
	 fgInc9I+f5DVW9fBxsYdyC8aOA2GcQtXQMbInYf8fBO4N9SfgQJlF4q1gCjSYfWNFm
	 E7VTkieOtkk1E7QnF0EiWddNxxdAL4qvZZKL1qcLRHnyf46cPtV/zNSlCqshnOZKRv
	 M4O8ylTJnobKO6XeLTDM5g/kOxOv40oY7667zNxJ/AvIOtXKuJU2cO3Zb1csnwPHGL
	 29SrcaD2YAH6w==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	julien@xen.org,
	Bertrand.Marquis@arm.com,
	George.Dunlap@citrix.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 2/2] docs/misra: add Rule 5.1
Date: Tue, 31 May 2022 18:44:02 -0700
Message-Id: <20220601014402.2293524-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Add Rule 5.1, with the additional note that the character limit for Xen
is 40 characters.

The maximum-length identifiers found by ECLAIR are:

__mitigate_spectre_bhb_clear_insn_start
domain_pause_by_systemcontroller_nosync

Both of them are 40 characters long.

Explicitly mention that public headers might have longer identifiers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---

Changes in v2:
- lower the limit to 40
- mention public headers
- improve commit message
---
 docs/misra/rules.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 7d6a9fe063..6ccff07765 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -82,6 +82,13 @@ existing codebase are work-in-progress.
      - Line-splicing shall not be used in // comments
      -
 
+   * - `Rule 5.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_01_2.c>`_
+     - Required
+     - External identifiers shall be distinct
+     - The Xen character limit for identifiers is 40. Public headers
+       (xen/include/public/) are allowed to retain longer identifiers
+       for backward compatibility.
+
    * - `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
      - Required
      - Single-bit named bit fields shall not be of a signed type
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 01:44:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 01:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340157.565129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDP7-0000Qw-FY; Wed, 01 Jun 2022 01:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340157.565129; Wed, 01 Jun 2022 01:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwDP7-0000Q8-6a; Wed, 01 Jun 2022 01:44:09 +0000
Received: by outflank-mailman (input) for mailman id 340157;
 Wed, 01 Jun 2022 01:44:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwDP6-0000M1-GA
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 01:44:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 529f00ac-e14c-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 03:44:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 82066614F0;
 Wed,  1 Jun 2022 01:44:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 68A72C385A9;
 Wed,  1 Jun 2022 01:44:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 529f00ac-e14c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654047844;
	bh=3LNFb9eaJ6ljtq7k9YOeH/I4pjqCITnxKzvBBuGvkIo=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=uM65ol6KI7QKERi++wgwRfjAUZrM3IZs0C6dDBVZpfO80Is7zlNwU/jTF9A3bmvOm
	 vPDBoIXF2wJiv/ivYry7+RGr2dng5td+EMG0yOJGEs5jDCziEWFLq8KQEAdcSfx05O
	 hwcpFG2I045gwvq//B8ol3JIl7MUuBK2/nLkRX/lCfi6uboERzn/v8XEPkXDf//fWf
	 nzBlryIiN4rFhdUq4LZh2ByfR2sCbyy3J+y/uP3ZMIqbAdRDVLF1jmQGnLjuih0BTm
	 r64vqeBrv21ug9tgX92sAA3qzHYmDxpyDZlt99LUHjOy+xJCrtQDQ6jZVoOt1hAAHZ
	 ZnFEhXva3Peew==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	julien@xen.org,
	Bertrand.Marquis@arm.com,
	George.Dunlap@citrix.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH v2 1/2] docs/misra: introduce rules.rst
Date: Tue, 31 May 2022 18:44:01 -0700
Message-Id: <20220601014402.2293524-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Introduce a list of MISRA C rules that apply to the Xen hypervisor. The
list is in RST format.

Specify that rule deviations need to be documented. Introduce a
documentation tag for in-code comments to mark them as deviations. Also
mention that other documentation mechanisms are work-in-progress.

Add a mention of the new list to CODING_STYLE.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---

Note that I don't feel strongly about the deviations format. At this
stage anything is OK in my view. We'll need to improve it as we go along
and as we start to integrate better with MISRA C checkers. That said, an
in-code comment with a special tag is certainly a safe bet in terms of
tool integration (easy to parse, easy to convert). We'll need other
mechanisms too, which is why I kept the sentence about "Other
documentation mechanisms are work-in-progress."
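
For illustration, a tag of this shape really is easy to pick up with a
few lines of pattern matching. A minimal sketch of such a scanner (a
hypothetical helper, not part of this patch; Python used purely for
illustration):

```python
import re

# Hypothetical scanner, not part of this patch: extract "MISRA_DEV" tags of
# the proposed comment format from C source text, roughly as a MISRA checker
# integration might.
TAG_RE = re.compile(
    r"/\*\s*\n"                                 # comment opener
    r"\s*\*\s*MISRA_DEV:\s*(?P<rule>[^\n]+)\n"  # "MISRA_DEV: Rule ID" line
    r"(?P<body>(?:\s*\*[^\n]*\n)*?)"            # justification lines
    r"\s*\*/"                                   # comment closer
)

def find_deviations(source):
    """Return (rule, justification) pairs for each deviation tag found."""
    result = []
    for m in TAG_RE.finditer(source):
        lines = (l.strip().lstrip("*").strip()
                 for l in m.group("body").splitlines())
        result.append((m.group("rule").strip(),
                       " ".join(l for l in lines if l)))
    return result

example = """
/*
 * MISRA_DEV: Rule 8.4
 * Declaration only visible via a generated header.
 */
int foo(void);
"""
print(find_deviations(example))
# [('Rule 8.4', 'Declaration only visible via a generated header.')]
```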

Changes in v2:
- clarify that deviations are permitted
- introduce an in-code tag for deviations
- improve the document format, make it proper reStructuredText
- improve commit message
---
 CODING_STYLE         |   6 +++
 docs/index.rst       |  12 +++++
 docs/misra/rules.rst | 123 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 141 insertions(+)
 create mode 100644 docs/misra/rules.rst

diff --git a/CODING_STYLE b/CODING_STYLE
index 9f50d9cec4..3386ee1d90 100644
--- a/CODING_STYLE
+++ b/CODING_STYLE
@@ -14,6 +14,12 @@ explicitly (e.g. tools/libxl/CODING_STYLE) but often implicitly (Linux
 coding style is fairly common). In general you should copy the style
 of the surrounding code. If you are unsure please ask.
 
+MISRA C
+-------
+
+The Xen hypervisor follows some MISRA C coding rules. See
+docs/misra/rules.rst for details.
+
 Indentation
 -----------
 
diff --git a/docs/index.rst b/docs/index.rst
index b75487a05d..2c47cfa999 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -53,6 +53,18 @@ kind of development environment.
    hypervisor-guide/index
 
 
+MISRA C coding guidelines
+-------------------------
+
+MISRA C rules and directives to be used as coding guidelines when writing
+Xen hypervisor code.
+
+.. toctree::
+   :maxdepth: 2
+
+   misra/rules
+
+
 Miscellanea
 -----------
 
diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
new file mode 100644
index 0000000000..7d6a9fe063
--- /dev/null
+++ b/docs/misra/rules.rst
@@ -0,0 +1,123 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+MISRA C rules for Xen
+=====================
+
+.. note::
+
+   **IMPORTANT** All MISRA C rules, text, and examples are copyrighted
+   by the MISRA Consortium Limited and used with permission.
+
+   Please refer to https://www.misra.org.uk/ to obtain a copy of MISRA
+   C, or for licensing options for other use of the rules.
+
+The following is the list of MISRA C rules that apply to the Xen
+hypervisor.
+
+In specific circumstances it may be best not to follow a rule, either
+because following it is not possible or because the alternative leads
+to better code quality. Such cases are called "deviations". They are
+permissible as long as they are documented as an in-code comment using
+the following format::
+
+    /*
+     * MISRA_DEV: Rule ID
+     * Justification text.
+     */
+
+Other documentation mechanisms are work-in-progress.
+
+The existing codebase is not 100% compliant with the rules. Some of the
+violations are meant to be documented as deviations, while some others
+should be fixed. Both compliance and documenting deviations on the
+existing codebase are work-in-progress.
+
+.. list-table::
+   :header-rows: 1
+
+   * - Dir number
+     - Severity
+     - Summary
+     - Notes
+
+   * - `Dir 2.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_02_01.c>`_
+     - Required
+     - All source files shall compile without any compilation errors
+     -
+
+   * - `Dir 4.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_07.c>`_
+     - Required
+     - If a function returns error information then that error
+       information shall be tested
+     -
+
+   * - `Dir 4.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_10.c>`_
+     - Required
+     - Precautions shall be taken in order to prevent the contents of a
+       header file being included more than once
+     -
+
+   * - `Dir 4.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_14.c>`_
+     - Required
+     - The validity of values received from external sources shall be
+       checked
+     -
+
+.. list-table::
+   :header-rows: 1
+
+   * - Rule number
+     - Severity
+     - Summary
+     - Notes
+
+   * - `Rule 1.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_01_03.c>`_
+     - Required
+     - There shall be no occurrence of undefined or critical unspecified
+       behaviour
+     -
+
+   * - `Rule 3.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_03_02.c>`_
+     - Required
+     - Line-splicing shall not be used in // comments
+     -
+
+   * - `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
+     - Required
+     - Single-bit named bit fields shall not be of a signed type
+     -
+
+   * - `Rule 8.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_01.c>`_
+     - Required
+     - Types shall be explicitly specified
+     -
+
+   * - `Rule 8.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_04.c>`_
+     - Required
+     - A compatible declaration shall be visible when an object or
+       function with external linkage is defined
+     -
+
+   * - `Rule 8.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_05_2.c>`_
+     - Required
+     - An external object or function shall be declared once in one and only one file
+     -
+
+   * - `Rule 8.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_06_2.c>`_
+     - Required
+     - An identifier with external linkage shall have exactly one
+       external definition
+     - Declarations without definitions are allowed (specifically when
+       the definition is compiled-out or optimized-out by the compiler)
+
+   * - `Rule 8.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_08.c>`_
+     - Required
+     - The static storage class specifier shall be used in all
+       declarations of objects and functions that have internal linkage
+     -
+
+   * - `Rule 8.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_12.c>`_
+     - Required
+     - Within an enumerator list the value of an implicitly-specified
+       enumeration constant shall be unique
+     -
-- 
2.25.1
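
As an aside on the rules tabled in this patch: Rule 8.12 concerns an
implicitly-assigned enumerator value colliding with an explicit one. A
minimal sketch of the C value-assignment behaviour and the collision the
rule flags (hypothetical helper, not part of the series; Python purely
for illustration):

```python
def c_enum_values(enumerators):
    """Assign enumerator values the way C does: an explicit value resets
    the counter, an implicit one is the previous value plus one."""
    values = {}
    nxt = 0
    for name, explicit in enumerators:
        val = nxt if explicit is None else explicit
        values[name] = val
        nxt = val + 1
    return values

# GREEN is implicitly 1, colliding with the explicit RED = 1 -- the
# situation Rule 8.12 flags as non-compliant.
vals = c_enum_values([("BLUE", 0), ("GREEN", None), ("RED", 1)])
print(vals)  # {'BLUE': 0, 'GREEN': 1, 'RED': 1}
print(len(set(vals.values())) == len(vals))  # False -> duplicate values
```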



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 02:54:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 02:54:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340180.565144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwEUz-0000Je-PK; Wed, 01 Jun 2022 02:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340180.565144; Wed, 01 Jun 2022 02:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwEUz-0000JX-MX; Wed, 01 Jun 2022 02:54:17 +0000
Received: by outflank-mailman (input) for mailman id 340180;
 Wed, 01 Jun 2022 02:54:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=laj+=WI=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nwEUx-0000JO-Bv
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 02:54:15 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe06::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1dc45b9c-e156-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 04:54:13 +0200 (CEST)
Received: from DB6PR0601CA0039.eurprd06.prod.outlook.com (2603:10a6:4:17::25)
 by AS4PR08MB7925.eurprd08.prod.outlook.com (2603:10a6:20b:574::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.17; Wed, 1 Jun
 2022 02:54:09 +0000
Received: from DBAEUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:17:cafe::aa) by DB6PR0601CA0039.outlook.office365.com
 (2603:10a6:4:17::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13 via Frontend
 Transport; Wed, 1 Jun 2022 02:54:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT003.mail.protection.outlook.com (100.127.142.89) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Wed, 1 Jun 2022 02:54:09 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Wed, 01 Jun 2022 02:54:09 +0000
Received: from 288b93403eee.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2E03D0A5-8B63-4469-9049-A067BFE7B737.1; 
 Wed, 01 Jun 2022 02:53:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 288b93403eee.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 01 Jun 2022 02:53:59 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM0PR08MB5265.eurprd08.prod.outlook.com (2603:10a6:208:160::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.17; Wed, 1 Jun
 2022 02:53:56 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e%3]) with mapi id 15.20.5293.019; Wed, 1 Jun 2022
 02:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dc45b9c-e156-11ec-bd2c-47488cf2e6aa
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Thread-Topic: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Thread-Index: AQHYbm4HiKDL321oiEuXmbKNpj1MH605Bc8AgADVNxA=
Date: Wed, 1 Jun 2022 02:53:56 +0000
Message-ID:
 <PAXPR08MB7420F087CC36C8E8DB8DFF7E9EDF9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220523062525.2504290-1-wei.chen@arm.com>
 <20220523062525.2504290-8-wei.chen@arm.com>
 <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
In-Reply-To: <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: DD88D475026A6948B3CDB9BB28D0A34E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 2e032f31-1f67-48ad-3e38-08da437a0005
x-ms-traffictypediagnostic:
	AM0PR08MB5265:EE_|DBAEUR03FT003:EE_|AS4PR08MB7925:EE_
X-Microsoft-Antispam-PRVS:
	<AS4PR08MB7925848E72B1CD8C0164D2799EDF9@AS4PR08MB7925.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5265
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8ad6c369-1d0d-48f6-1b22-08da4379f87a
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 02:54:09.1761
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2e032f31-1f67-48ad-3e38-08da437a0005
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7925

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 31 May 2022 21:21
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: nd <nd@arm.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau
> Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Jiamei Xie
> <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v4 7/8] xen/x86: add detection of memory interleaves
> for different nodes
> 
> On 23.05.2022 08:25, Wei Chen wrote:
> > @@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end,
> nodeid_t node)
> >  	return 0;
> >  }
> >
> > -static __init int conflicting_memblks(paddr_t start, paddr_t end)
> > +static
> > +enum conflicts __init conflicting_memblks(nodeid_t nid, paddr_t start,
> > +					  paddr_t end, paddr_t nd_start,
> > +					  paddr_t nd_end, unsigned int *mblkid)
> >  {
> > -	int i;
> > +	unsigned int i;
> >
> > +	/*
> > +	 * Scan all recorded nodes' memory blocks to check conflicts:
> > +	 * Overlap or interleave.
> > +	 */
> >  	for (i = 0; i < num_node_memblks; i++) {
> >  		struct node *nd = &node_memblk_range[i];
> > +
> > +		*mblkid = i;
> > +
> > +		/* Skip 0 bytes node memory block. */
> >  		if (nd->start == nd->end)
> >  			continue;
> > +		/*
> > +		 * Use memblk range to check memblk overlaps, include the
> > +		 * self-overlap case.
> > +		 */
> >  		if (nd->end > start && nd->start < end)
> > -			return i;
> > +			return OVERLAP;
> >  		if (nd->end == end && nd->start == start)
> > -			return i;
> > +			return OVERLAP;
> 
> Knowing that nd's range is non-empty, is this 2nd condition actually
> needed here? (Such an adjustment, if you decided to make it and if not
> split out to a separate patch, would need calling out in the
> description.)

The 2nd condition here, you meant is "(nd->end == end && nd->start == start)"
or just "nd->start == start" after "&&"?

My understanding is the first case, "(nd->end == end && nd->start == start)"
will be covered by "(nd->end > start && nd->start < end)". If so, I'll remove
it in the next version and add some descriptions in the commit log and code
comment.

> 
> > +		/*
> > +		 * Use node memory range to check whether new range contains
> > +		 * memory from other nodes - interleave check. We just need
> > +		 * to check full contains situation. Because overlaps have
> > +		 * been checked above.
> > +		 */
> > +	        if (nid != memblk_nodeid[i] &&
> > +		    (nd_start < nd->start && nd->end < nd_end))
> > +			return INTERLEAVE;
> 
> Doesn't this need to be <= in both cases (albeit I think one of the two
> expressions would want switching around, to better line up with the
> earlier one, visible in context further up).
> 

Yes, I will add "=" in both cases. But for switching around, I also
wanted to make a better line up. But if nid == memblk_nodeid[i],
the check of (nd_start < nd->start && nd->end < nd_end) is meaningless.
I'll adjust their order in the next version if you think this is
acceptable.

> > @@ -275,10 +306,13 @@ acpi_numa_processor_affinity_init(const struct
> acpi_srat_cpu_affinity *pa)
> >  void __init
> >  acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
> >  {
> > +	enum conflicts status;
> 
> I don't think you need this local variable.
> 

Why I don't need this one? Did you mean I can use
switch (conflicting_memblks(...)) directly?

> > @@ -310,42 +344,78 @@ acpi_numa_memory_affinity_init(const struct
> acpi_srat_mem_affinity *ma)
> >  		bad_srat();
> >  		return;
> >  	}
> > +
> > +	/*
> > +	 * For the node that already has some memory blocks, we will
> > +	 * expand the node memory range temporarily to check memory
> > +	 * interleaves with other nodes. We will not use this node
> > +	 * temp memory range to check overlaps, because it will mask
> > +	 * the overlaps in same node.
> > +	 *
> > +	 * Node with 0 bytes memory doesn't need this expandsion.
> > +	 */
> > +	nd_start = start;
> > +	nd_end = end;
> > +	nd = &nodes[node];
> > +	if (nd->start != nd->end) {
> > +		if (nd_start > nd->start)
> > +			nd_start = nd->start;
> > +
> > +		if (nd_end < nd->end)
> > +			nd_end = nd->end;
> > +	}
> > +
> >  	/* It is fine to add this area to the nodes data it will be used
> later*/
> > -	i = conflicting_memblks(start, end);
> > -	if (i < 0)
> > -		/* everything fine */;
> > -	else if (memblk_nodeid[i] == node) {
> > -		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
> > -		                !test_bit(i, memblk_hotplug);
> > -
> > -		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with
> itself (%"PRIpaddr"-%"PRIpaddr")\n",
> > -		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
> > -		       node_memblk_range[i].start, node_memblk_range[i].end);
> > -		if (mismatch) {
> > +	status = conflicting_memblks(node, start, end, nd_start, nd_end, &i);
> > +	switch(status)
> > +	{
> 
> Style: Missing blank before ( and the brace goes on the same line here
> (Linux style).
> 

Ok.

> > +	case OVERLAP:
> > +	{
> 
> Please omit braces at case labels unless you need a new scope to declare
> variables.
> 

OK.

> > +		if (memblk_nodeid[i] == node) {
> > +			bool mismatch = !(ma->flags &
> > +					  ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
> > +			                !test_bit(i, memblk_hotplug);
> > +
> > +			printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr")
> overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
> > +			       mismatch ? KERN_ERR : KERN_WARNING, pxm, start,
> > +			       end, node_memblk_range[i].start,
> > +			       node_memblk_range[i].end);
> > +			if (mismatch) {
> > +				bad_srat();
> > +				return;
> > +			}
> > +			break;
> > +		} else {
> > +			printk(KERN_ERR
> > +			       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps
> with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
> > +			       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
> > +			       node_memblk_range[i].start,
> > +			       node_memblk_range[i].end);
> >  			bad_srat();
> >  			return;
> >  		}
> 
> To limit indentation depth, on of the two sides of the conditional can
> be moved out, by omitting the unnecessary "else". To reduce the diff
> it may be worthwhile to invert the if() condition, allowing the (then
> implicit) "else" case to remain (almost) unchanged from the original.
> 

I will adjust them in next version.

> > -	} else {
> > +	}
> > +
> > +	case INTERLEAVE:
> > +	{
> >  		printk(KERN_ERR
> > -		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with
> PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
> > -		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
> > +		       "SRAT： PXM %u: (%"PRIpaddr"-%"PRIpaddr") interleaves
> with PXM %u memblk (%"PRIpaddr"-%"PRIpaddr")\n",
> > +		       node, nd_start, nd_end, node_to_pxm(memblk_nodeid[i]),
> 
> Hmm, you have PXM in the log message text, but you still pass "node" as
> first argument.
> 

I will fix it.

> Since you're touching all these messages, could I ask you to convert
> all ranges to proper mathematical interval representation? I.e.
> [start,end) here aiui as the end addresses look to be non-inclusive.
> 

Sure, I will do it.

> >  		       node_memblk_range[i].start, node_memblk_range[i].end);
> >  		bad_srat();
> >  		return;
> >  	}
> > -	if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)) {
> > -		struct node *nd = &nodes[node];
> >
> > -		if (!node_test_and_set(node, memory_nodes_parsed)) {
> > -			nd->start = start;
> > -			nd->end = end;
> > -		} else {
> > -			if (start < nd->start)
> > -				nd->start = start;
> > -			if (nd->end < end)
> > -				nd->end = end;
> > -		}
> > +	default:
> > +		break;
> 
> This wants to be "case NO_CONFLICT:", such that the compiler would
> warn if a new enumerator appears without adding code here. (An
> alternative - which personally I don't like - would be to put
> ASSERT_UNREACHABLE() in the default: case. The downside is that
> then the issue would only be noticeable at runtime.)
> 

Thanks, I will adjust it to:
	case NO_CONFLICT:
		break;
	default:
		ASSERT_UNREACHABLE();
in next version.


> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 04:00:35 2022
Delivery-date: Wed, 01 Jun 2022 04:00:35 +0000
MIME-Version: 1.0
References: <cover.1653977696.git.xiexun162534@gmail.com> <016c56548eee75c2b713ef90e4069690c0ae11cb.1653977696.git.xiexun162534@gmail.com>
In-Reply-To: <016c56548eee75c2b713ef90e4069690c0ae11cb.1653977696.git.xiexun162534@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 1 Jun 2022 13:59:43 +1000
Message-ID: <CAKmqyKNQJsUSLAgsMB0arkT3zAXzzm6QF46ZpwDN1GdpvRQMSw@mail.gmail.com>
Subject: Re: [RFC PATCH 5/6] xen/riscv: Add early_printk
To: Xie Xun <xiexun162534@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 31, 2022 at 5:09 PM Xie Xun <xiexun162534@gmail.com> wrote:
>
> Signed-off-by: Xie Xun <xiexun162534@gmail.com>
> ---
>  xen/arch/riscv/Makefile                   |  1 +
>  xen/arch/riscv/early_printk.c             | 48 +++++++++++++++++++++++
>  xen/arch/riscv/include/asm/early_printk.h | 10 +++++
>  3 files changed, 59 insertions(+)
>  create mode 100644 xen/arch/riscv/early_printk.c
>  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index c61349818f..f9abc8401b 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -3,6 +3,7 @@ obj-y += lib/
>  obj-y   += domctl.o
>  obj-y   += domain.o
>  obj-y   += delay.o
> +obj-y   += early_printk.o
>  obj-y   += guestcopy.o
>  obj-y   += irq.o
>  obj-y   += p2m.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..81d69add01
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c

This file should be named differently: calling it
`sbi_console_early_printk.c` would better indicate that it's using the
SBI console.

The SBI print functions are useful, but they have been marked as
deprecated with no future replacement (see
https://github.com/riscv-non-isa/riscv-sbi-doc/blob/659950dc57f9840cf8242c1cff138c2ee67634bb/riscv-sbi.adoc#5-legacy-extensions-eids-0x00---0x0f)

For the initial port I think it's OK to use these, but this isn't a
long-term solution; we should aim to migrate to using the standard
hardware UART.

I'm sure Xen already has a driver for the ns16550 UART, so it might be
worth just using that directly rather than bothering with the sbi_console.

Alistair

> @@ -0,0 +1,48 @@
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/sbi.h>
> +#include <asm/early_printk.h>
> +#include <xen/stdarg.h>
> +#include <xen/lib.h>
> +
> +void _early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if (*s == '\n')
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +static void vprintk_early(const char *prefix, const char *fmt, va_list args)
> +{
> +    char buf[128];
> +    int sz;
> +
> +    early_puts(prefix);
> +
> +    sz = vscnprintf(buf, sizeof(buf), fmt, args);
> +
> +    if ( sz < 0 ) {
> +        early_puts("(XEN) vprintk_early error\n");
> +        return;
> +    }
> +
> +    if ( sz == 0 )
> +        return;
> +
> +    _early_puts(buf, sz);
> +}
> +
> +void early_printk(const char *fmt, ...)
> +{
> +    va_list args;
> +    va_start(args, fmt);
> +    vprintk_early("(XEN) ", fmt, args);
> +    va_end(args);
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..0d9928b333
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,10 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/string.h>
> +
> +#define early_puts(s) _early_puts((s), strlen((s)))
> +void _early_puts(const char *s, size_t nr);
> +void early_printk(const char *fmt, ...);
> +
> +#endif /* __EARLY_PRINTK_H__ */
> --
> 2.30.2
>
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 05:05:03 2022
Delivery-date: Wed, 01 Jun 2022 05:05:03 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170791-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170791: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 05:04:42 +0000

flight 170791 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170791/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 13 guest-start          fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install      fail REGR. vs. 170714
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714

version targeted for testing:
 linux                2a5699b0de4ee623d77f183c8e8e62691bd60a70
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z    8 days
Failing since        170716  2022-05-24 11:12:06 Z    7 days   23 attempts
Testing same since   170791  2022-05-31 22:10:46 Z    0 days    1 attempts

------------------------------------------------------------
1976 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 221541 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:04:23 2022
Delivery-date: Wed, 01 Jun 2022 06:04:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwHSp-0005gr-KD
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 06:04:15 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a92e1e7a-e170-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 08:04:13 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-25-RGKeoqWGPwu3UKswl-_Eyg-1; Wed, 01 Jun 2022 08:04:11 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB4424.eurprd04.prod.outlook.com (2603:10a6:20b:1e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Wed, 1 Jun
 2022 06:04:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 06:04:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <7716ff49-a306-9938-0e91-aad45deef313@suse.com>
Date: Wed, 1 Jun 2022 08:04:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v3 1/3] xsm: only search for a policy file when needed
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-2-dpsmith@apertussolutions.com>
 <1358771f-32ae-8a6b-9894-980014d7112c@suse.com>
 <604e79d6-d07f-1a28-83a0-55fede499e12@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <604e79d6-d07f-1a28-83a0-55fede499e12@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR02CA0015.eurprd02.prod.outlook.com
 (2603:10a6:20b:6e::28) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 31.05.2022 18:15, Daniel P. Smith wrote:
> 
> On 5/31/22 11:51, Jan Beulich wrote:
>> On 31.05.2022 17:08, Daniel P. Smith wrote:
>>> It is possible to select a few different build configurations that result in
>>> the boot module list being walked unnecessarily in search of a policy module.
>>> This specifically occurs when the flask policy is enabled but either the dummy
>>> or the SILO policy is selected as the enforcing policy. This is not ideal for
>>> configurations like hyperlaunch and dom0less, where there may be many modules
>>> to walk, or where an unnecessary device tree lookup would be performed.
>>>
>>> This patch introduces the policy_file_required flag for tracking when an XSM
>>> policy module requires a policy file. Only when the policy_file_required flag
>>> is set to true will XSM search the boot modules for a policy file.
>>>
>>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>>
>> Looks technically okay, so
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> but couldn't you ...
>>
>>> @@ -148,7 +160,7 @@ int __init xsm_multiboot_init(
>>>  
>>>      printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
>>>  
>>> -    if ( XSM_MAGIC )
>>> +    if ( policy_file_required && XSM_MAGIC )
>>>      {
>>>          ret = xsm_multiboot_policy_init(module_map, mbi, &policy_buffer,
>>>                                          &policy_size);
>>> @@ -176,7 +188,7 @@ int __init xsm_dt_init(void)
>>>  
>>>      printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
>>>  
>>> -    if ( XSM_MAGIC )
>>> +    if ( policy_file_required && XSM_MAGIC )
>>>      {
>>>          ret = xsm_dt_policy_init(&policy_buffer, &policy_size);
>>>          if ( ret )
>>
>> ... drop the two "&& XSM_MAGIC" here at this time? Afaict policy_file_required
>> cannot be true when XSM_MAGIC is zero.
> 
> I was on the fence about this, as it is indeed rendered redundant, as you
> point out. I am good with dropping it on the next spin.

I'd also be okay dropping this while committing, unless a v4 appears
first ...

Jan
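
The gating agreed in the review above can be sketched as follows. This is a minimal, self-contained model, not Xen's actual code: `xsm_multiboot_policy_init_stub()` and the surrounding scaffolding are invented stand-ins, and the redundant `&& XSM_MAGIC` condition is dropped as suggested.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the build configuration; the names mirror
 * the patch, but the values and scaffolding here are illustrative. */
#define CONFIG_XSM_FLASK_DEFAULT 1

/* The flag introduced by the patch: true only when the selected XSM
 * policy module actually needs a policy file loaded at boot. */
static bool policy_file_required = CONFIG_XSM_FLASK_DEFAULT;

/* Stub: in Xen this would walk the multiboot modules for a policy blob. */
static int xsm_multiboot_policy_init_stub(const void **buf, size_t *size)
{
    *buf = NULL;
    *size = 0;
    return 0;
}

/* Sketch of the gated init path: the (potentially long) boot-module walk
 * happens only when a policy file is actually required, so dummy/SILO
 * configurations skip it entirely. */
static int xsm_multiboot_init_sketch(void)
{
    const void *policy_buffer = NULL;
    size_t policy_size = 0;

    printf("XSM Framework initialized\n");

    if ( policy_file_required )
    {
        int ret = xsm_multiboot_policy_init_stub(&policy_buffer,
                                                 &policy_size);
        if ( ret )
            return ret;
    }

    return 0;
}
```

With `CONFIG_XSM_FLASK_DEFAULT` set to 0, the module walk is compiled in but never taken, which is the redundancy the patch removes from configurations that enforce the dummy or SILO policy.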



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:08:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 06:08:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340218.565189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHWf-0006MI-SH; Wed, 01 Jun 2022 06:08:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340218.565189; Wed, 01 Jun 2022 06:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHWf-0006MB-Of; Wed, 01 Jun 2022 06:08:13 +0000
Received: by outflank-mailman (input) for mailman id 340218;
 Wed, 01 Jun 2022 06:08:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwHWe-0006M5-Q0
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 06:08:12 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 36bc9111-e171-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 08:08:11 +0200 (CEST)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2056.outbound.protection.outlook.com [104.47.13.56]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-34-DJf_wPZWM6OXTpJdiZGj8g-1; Wed, 01 Jun 2022 08:08:09 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB4692.eurprd04.prod.outlook.com (2603:10a6:208:cc::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 06:08:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 06:08:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <f8697e9f-6b48-0c1b-8d5a-2d36dafa75b4@suse.com>
Date: Wed, 1 Jun 2022 08:08:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v3 1/3] xsm: only search for a policy file when needed
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-2-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531150857.19727-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0072.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 31.05.2022 17:08, Daniel P. Smith wrote:
> It is possible to select a few different build configurations that result in
> the boot module list being walked unnecessarily in search of a policy module.
> This specifically occurs when the flask policy is enabled but either the dummy
> or the SILO policy is selected as the enforcing policy. This is not ideal for
> configurations like hyperlaunch and dom0less, where there may be many modules
> to walk, or where an unnecessary device tree lookup would be performed.
> 
> This patch introduces the policy_file_required flag for tracking when an XSM
> policy module requires a policy file.

In light of the "flask=late" aspect of patch 2, I'd like to suggest slightly
altering the wording here: "... requires looking for a policy file."

> --- a/xen/xsm/xsm_core.c
> +++ b/xen/xsm/xsm_core.c
> @@ -55,19 +55,31 @@ static enum xsm_bootparam __initdata xsm_bootparam =
>      XSM_BOOTPARAM_DUMMY;
>  #endif
>  
> +static bool __initdata policy_file_required =
> +    IS_ENABLED(CONFIG_XSM_FLASK_DEFAULT);

The variable may then also want renaming, to e.g. "find_policy_file".

Jan
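
The `IS_ENABLED(CONFIG_XSM_FLASK_DEFAULT)` initializer quoted above relies on the Linux/Xen-style Kconfig macro. A simplified sketch of how that macro works is shown below; the real definitions in `xen/include/xen/kconfig.h` also cover module symbols, but the core trick is the same: a symbol `#define`d to 1 yields 1, and an undefined symbol yields 0, without any preprocessor error.

```c
/* Simplified sketch of the kernel-style IS_ENABLED() macro.  When the
 * argument expands to 1, __ARG_PLACEHOLDER_1 injects an extra argument
 * ("0,") so the second argument of __take_second_arg becomes 1; when the
 * argument is undefined, no extra argument appears and the result is 0. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x)        ___is_defined(x)
#define ___is_defined(val)     ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option)     __is_defined(option)

/* Pretend this build selects FLASK as the default policy module. */
#define CONFIG_XSM_FLASK_DEFAULT 1

/* The flag from the patch, using the reviewer's suggested name; here it
 * is an int rather than Xen's __initdata bool, purely for illustration. */
static const int find_policy_file = IS_ENABLED(CONFIG_XSM_FLASK_DEFAULT);
```

This is why the flag can be initialized at compile time: the whole expression folds to the constant 1 or 0 depending on the Kconfig selection, so no runtime check of the configuration is needed.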



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:14:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 06:14:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340226.565200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHcr-0007wk-Ij; Wed, 01 Jun 2022 06:14:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340226.565200; Wed, 01 Jun 2022 06:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHcr-0007wd-FJ; Wed, 01 Jun 2022 06:14:37 +0000
Received: by outflank-mailman (input) for mailman id 340226;
 Wed, 01 Jun 2022 06:14:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwHcp-0007wX-LY
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 06:14:35 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b64036a-e172-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 08:14:34 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2104.outbound.protection.outlook.com [104.47.17.104]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-33-3A7h9_OWOl2At6zZNkBSXw-1; Wed, 01 Jun 2022 08:14:32 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB4692.eurprd04.prod.outlook.com (2603:10a6:208:cc::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 06:14:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 06:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e7582bd3-1a3b-49e9-7d3f-f86ae3d4ab2b@suse.com>
Date: Wed, 1 Jun 2022 08:14:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v3 3/3] xsm: properly handle error from XSM init
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-4-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531150857.19727-4-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P195CA0107.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:86::48) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e25efef3-1b9e-4273-7fd6-08da4395fd4e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 06:14:30.7033
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bjnyDCOZVHXLcamsOYoP3Kq5e++uQzWqGFfhm9LxYvl3YJzBmvSs16IaLWysEC++eeTQll/rNztKIj0q+J4utQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB4692

On 31.05.2022 17:08, Daniel P. Smith wrote:
> @@ -1690,7 +1691,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>  
>      open_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ, new_tlbflush_clock_period);
>  
> -    if ( opt_watchdog ) 
> +    if ( opt_watchdog )
>          nmi_watchdog = NMI_LOCAL_APIC;
>  
>      find_smp_config();

Please omit formatting changes to entirely unrelated pieces of code.

> @@ -1700,7 +1701,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
>                                    RANGESETF_prettyprint_hex);
>  
> -    xsm_multiboot_init(module_map, mbi);
> +    if ( xsm_multiboot_init(module_map, mbi) )
> +        warning_add("WARNING: XSM failed to initialize.\n"
> +                    "This has implications on the security of the system,\n"
> +                    "as uncontrolled communications between trusted and\n"
> +                    "untrusted domains may occur.\n");

Uncontrolled communication isn't the only thing that could occur, aiui.
So at the very least "e.g." or some such would want adding imo.

Now that return values are checked, I think that in addition to what
you already do the two function declarations may want decorating with
__must_check.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:32:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 06:32:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340234.565211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHuI-0002Cp-4C; Wed, 01 Jun 2022 06:32:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340234.565211; Wed, 01 Jun 2022 06:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHuI-0002Ci-0h; Wed, 01 Jun 2022 06:32:38 +0000
Received: by outflank-mailman (input) for mailman id 340234;
 Wed, 01 Jun 2022 06:32:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwHuG-0002Cb-9Y
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 06:32:36 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f6b7f4b-e174-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 08:32:35 +0200 (CEST)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2053.outbound.protection.outlook.com [104.47.4.53]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-44-MoKtNdhzO8eQRAQL_zO-7A-1; Wed, 01 Jun 2022 08:32:32 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB5685.eurprd04.prod.outlook.com (2603:10a6:20b:a4::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.19; Wed, 1 Jun
 2022 06:32:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 06:32:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f6b7f4b-e174-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654065154;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Fd1Wr6W9tvg65teYXj8XuYJ0ugxs7y10RMYAOvZz5Hw=;
	b=flN7z/O7CowDkQmwTKM4Q9NNU0A/pvD4u+q3j1sV09HD/rvC9NXcXzFmQdlWTG5eMV1tM8
	TagM/QpSlkIltiMRpaJPx5CFGGKjkvEW/wtqoQ41gnr/IhbsXYfPQAruKOZCdQUu3rmJZP
	ywuE+UC7XcF5g5go/mnEF9EUg6xRUNo=
X-MC-Unique: MoKtNdhzO8eQRAQL_zO-7A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i5M+LNrsZSJheS4i+U2AuRbRAyFg939Parm2k3GMqykzIwCaH/cscCJpo7OhkmcHvXdEV1KGQ1bgk00TizGq1uIQgPSZSpGFo1BRzalx4sReoToFSTJnJg/5moODMBB+dDxWRch3XiADF2P47fISHBw1flkgVtwab3QEmfwFdS+6bqFGMdQ19JJEfLs5tLTpqZxcx278uBbkSiaqsZy6wHyQM/7P7wRuiLqpZU61sTFUzsu+IwzbxZp6/BPxW7TvZCtmAyDk0i4zF6RCWrc4Z5C2jsAkQSTnjMkT+FALFLPhMPDA45M4HqTJ6ACE9HOM6dOnNhDnjMlTAURNwVZcfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4nTLkMXITaCeRO+Z8Svw8f9/n5YbDmt/UYi99ABZAJA=;
 b=GY+ZOMcGbjba7tMhS/CDSibk4E8ucYfQIKVshJTMwiMZ1ZbyYb6d2mO40gxvC83goARhf1NQfoB29QG2/fXKjaX4p0ZGMi5z1gKLy7fJYnKFeA2R8/OvKSliQEUofJ8X05gS+CBIAapKIoPUrwxRDzSc+BweLhghkbe7LdvaMaublCWC+qjLOsbFgj/2eqd6ucmKJ0heiDBwbJpw1os1uWkEv2tih4LAFGaAOdp53TiNyK5Osrxn6xYbrDdS2HfDe+hhvDFltC17p+UXagFSdUYI9MdsIr/PDlHsZMzkT1U6D8Q2PzJclxNSLcOS4NglaGD/yuYhv7ZVMDKQlVTxEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dc158225-73d0-498d-8b30-ade1078edd51@suse.com>
Date: Wed, 1 Jun 2022 08:32:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
CC: nd <nd@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220523062525.2504290-1-wei.chen@arm.com>
 <20220523062525.2504290-8-wei.chen@arm.com>
 <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
 <PAXPR08MB7420F087CC36C8E8DB8DFF7E9EDF9@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <PAXPR08MB7420F087CC36C8E8DB8DFF7E9EDF9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6P192CA0015.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:209:83::28) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7a57248b-aae7-43f8-760a-08da43988152
X-MS-TrafficTypeDiagnostic: AM6PR04MB5685:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR04MB568592FA2534DCE10745E7F4B3DF9@AM6PR04MB5685.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ABVzi1vtizstXLEhlkSySRvYpmLrKzwUBtfwyGthBqVvnttb6hxpOeaZwd1kcWWaYGpPjHgDWfoIHjDK1QgtrAEhX828RBvfgaXVRhALXKTdsJZOVSmMi5mmSqozgAw6BdILKAKC34vFt1isgwh7GfBzUsuN5Nlnm7h0jzwEUPG0g26Jx/vYO+K0oYJjRz2IaJM5TIVSNOVi8PSzpM6907gMmi9FgXfiVBynrqve/iLLRFggtjme+t09hRpRTfe1wnGLGFHJWR4DbT2pL02iUHz+NBd6PrSQ888WpQkUx4w9FuMgEEJ5kwUmwCYlbWAvYodk+cM9E0AG82zszBOoT9n1D5skOVFqNGG5NwoQKtL3aSQyeU7ULud+G37B/blEiSCP4BhDFE+NnCn5SaPyPNUE3MSXR2i+Yr2rU0Vw02J7A3kmm5P50AO+E178M9VetwXJNfo1s6M5BvwvDEQF273nSGUDA9Y3s2Gv8v27y9JD0Sa1rgzer6MDfrL5O24N/yGJJHJTP2Eyb+lXYPsx3F6QCT7M9VeqL7vLHMZ/KB4uTkhZIvhdUrfaeNycqpvRKqzuKawP3x0D6iD9HFrp0VrCd8Ttq47OLJ/GFfC3tJCjFG2NVRzYX8RiFfKAR4v+epp3dOhFPRDOJ57Cp0IOTjTuNwB6Jiim4GvCEUQm2Mu2hjXHjOuAYA3Coas6T4RsFsAkFCcawOzSgnSatb9lKgz37XXPBz2R3eauwVXcmA7dgqvvekF2O1dUHlTIMpMp
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(66476007)(66946007)(2906002)(66556008)(6486002)(4326008)(8676002)(508600001)(31686004)(53546011)(6506007)(186003)(8936002)(26005)(6512007)(2616005)(86362001)(36756003)(31696002)(6916009)(54906003)(38100700002)(316002)(5660300002)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?KBRjZrsdSfehOVjzrg9ZKa4J4I56UUj84BO92hi7foQnTYMp8+o7qdpq7x//?=
 =?us-ascii?Q?/f2MXh+NXzClAGcvCMoqCFC5Bnm28l32x1M5SOTCpDoVJb8aViaqfTtnu3lE?=
 =?us-ascii?Q?KkVJEOKk4BTbR/ZAJ+Hc1lwy9dq9FiiN736vZQoIbtuOFyd8zvAID3gnhzqQ?=
 =?us-ascii?Q?uHsBxgaYiPtQrh+snX8HOQBnQplaVK+k92BW21CRoro9TXtdaLa4DzvcLbye?=
 =?us-ascii?Q?FgZ4igyxZDAjgiFT1iRw4NVRm2jZahE8mLrhC39p2blO9wE4WqRTBQjxqVBn?=
 =?us-ascii?Q?zhLNGmyzi3nICFgDznK+9DzieL1DbOUcv5sqfar3p3lSTj6W+nhy9R69bj9P?=
 =?us-ascii?Q?vw0BrzoCLGoRB82y11GHXjyvsPEiuuRjrunvgqQZjtZ+HHrbescL/uwZIPP5?=
 =?us-ascii?Q?eYA021ZHgItQvmTonAsqstF9METVHurVyUITZ+T3AKousB310dWsqmSMPG+d?=
 =?us-ascii?Q?85LxqcnEVmLUvdY0Ybw4HsGMYpzQvcQ+DDtScr9iSQDOSzv8sCKamT+oo5OJ?=
 =?us-ascii?Q?CNV/PMmhvXKpBOx5D40cTjNn/jE7RuQIH66O4F07Pq1ANettdA6c2G81oA2C?=
 =?us-ascii?Q?HVDaYzPTkbqDQzjpMM5exg+jBcSuK5u8Lv7wTWgXT5biK3hJV/gifa/PkmtO?=
 =?us-ascii?Q?mJnCrY2hU4u4NWoTeojr5h52EtRTk8d2LaKIJwa+Cmis7LZQ0qNtmwdYUr9/?=
 =?us-ascii?Q?ljLp2Gf8b9oCkvSJJZKw76pnA4enix3uqTX2u/7QHZBHbp8sCVjAfVbjmW6M?=
 =?us-ascii?Q?YJXwIn/nUsmiPtJKpIutpUIBQz9BhVNfjXkClAJphwb4SKea1RD3ZrWYmfBG?=
 =?us-ascii?Q?Iddz+XtYLVKaJ56g1oHxjgRau9c6PvAJEkMQP0wjVM0h+pcY0frTRuT2zMSp?=
 =?us-ascii?Q?ky9R9rwRHt4TaRAvhYIAHkIVESIWpXNoivUa0o4Tmf2GWUDJmzvT6Z3sKG21?=
 =?us-ascii?Q?obi9TfFhLc+i+9zl8JonfDsVGj1F5NZ8Ux+PkwGWoSccUbV3cLJbRYwcc9ww?=
 =?us-ascii?Q?G38bLTa4L0qnDzHHvcBRsCZneFxjKIsCw9zB2eeMd5v4TlvbVA2kxxTor3pl?=
 =?us-ascii?Q?1mOzvb+Eu+l35/Eq59WocXybBIPuBNtt+F00QQlktauhMkKLJ0wn5scUyKlU?=
 =?us-ascii?Q?I52aefRPYSSo6GPOGPXgFFopMca7MtIacmY5MDTigZnbPeaKBwnxlyiSIF14?=
 =?us-ascii?Q?1ImbTa1JLbIeSketqeRFs/u6JhK4AOkNsWzaXyHJwcVKYRIv3ATad8s3fAwl?=
 =?us-ascii?Q?DnYsdLjGDyh1/K4b9u0MiAM4tQIvRjf7bMnwDb/iZJBVy6z1FwlJpDifhIDr?=
 =?us-ascii?Q?YTPHpXlNB0xcCJ1+t3zuOwZkU4aFjOyIZ+6qPhXiMvpia7DplwqJMMMF19Ze?=
 =?us-ascii?Q?0J3ezGAKe4spw4flpwHsWjrmQ+X/g5tkk6+KVDNJ1tak8d6bjUATArFe6PVf?=
 =?us-ascii?Q?lgr2BjaxxTZv31NM+1+/r8ofsV4Z1ek1VEvqj5DFyGpxo4yVIKWlXsa95vqi?=
 =?us-ascii?Q?4vpHAFh4IfYX0ZMkYtfATdENLmw2brD3Q5n54tfk2yNOhurD1bhaeWhGoJbI?=
 =?us-ascii?Q?ZOhC/yiQ+gdjcENNHScK4GlxN8UbxCo7qaz/hdu4FZW/2NIHPUOtMkzu/Txy?=
 =?us-ascii?Q?YbpBzgCHNFOzUDOm3YEzOG6zgvynLBigG7HmPNW2NRxoTw1BxkR8FmsRQsQf?=
 =?us-ascii?Q?0+2gJtkX2NavMbaS/zvoRVtNkZK0DQg2Je6ka9jlq+nCEvuHHzUvFysLkVks?=
 =?us-ascii?Q?30y+xQa0xw=3D=3D?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a57248b-aae7-43f8-760a-08da43988152
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 06:32:31.1976
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: w9yxBl0AprWwpLQL029JQHs+hbTffWf1BGDnd736R98UlQyIR3qp29naWQmvbS3zb5g33Ju/gvQSfrKu9ky0Yw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB5685

On 01.06.2022 04:53, Wei Chen wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 31 May 2022 21:21
>>
>> On 23.05.2022 08:25, Wei Chen wrote:
>>> @@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
>>>  	return 0;
>>>  }
>>>
>>> -static __init int conflicting_memblks(paddr_t start, paddr_t end)
>>> +static
>>> +enum conflicts __init conflicting_memblks(nodeid_t nid, paddr_t start,
>>> +					  paddr_t end, paddr_t nd_start,
>>> +					  paddr_t nd_end, unsigned int *mblkid)
>>>  {
>>> -	int i;
>>> +	unsigned int i;
>>>
>>> +	/*
>>> +	 * Scan all recorded nodes' memory blocks to check conflicts:
>>> +	 * Overlap or interleave.
>>> +	 */
>>>  	for (i = 0; i < num_node_memblks; i++) {
>>>  		struct node *nd = &node_memblk_range[i];
>>> +
>>> +		*mblkid = i;
>>> +
>>> +		/* Skip 0 bytes node memory block. */
>>>  		if (nd->start == nd->end)
>>>  			continue;
>>> +		/*
>>> +		 * Use memblk range to check memblk overlaps, include the
>>> +		 * self-overlap case.
>>> +		 */
>>>  		if (nd->end > start && nd->start < end)
>>> -			return i;
>>> +			return OVERLAP;
>>>  		if (nd->end == end && nd->start == start)
>>> -			return i;
>>> +			return OVERLAP;
>>
>> Knowing that nd's range is non-empty, is this 2nd condition actually
>> needed here? (Such an adjustment, if you decided to make it and if not
>> split out to a separate patch, would need calling out in the
>> description.)
>
> The 2nd condition here, do you mean "(nd->end == end && nd->start == start)"
> or just "nd->start == start" after "&&"?

No, I mean the entire 2nd if().

>>> +		/*
>>> +		 * Use node memory range to check whether new range contains
>>> +		 * memory from other nodes - interleave check. We just need
>>> +		 * to check full contains situation. Because overlaps have
>>> +		 * been checked above.
>>> +		 */
>>> +	        if (nid != memblk_nodeid[i] &&
>>> +		    (nd_start < nd->start && nd->end < nd_end))
>>> +			return INTERLEAVE;
>>
>> Doesn't this need to be <= in both cases (albeit I think one of the two
>> expressions would want switching around, to better line up with the
>> earlier one, visible in context further up).
>>
>
> Yes, I will add "=" in both cases. But for switching around, I also
> wanted to line things up better. But if nid == memblk_nodeid[i],
> the check of (nd_start < nd->start && nd->end < nd_end) is meaningless.
> I'll adjust their order in the next version if you think this is
> acceptable.

I wasn't referring to the "nid != memblk_nodeid[i]" part at all. What
I'm after is for the two range checks to come as close as possible to
what the other range check does. (Which, as I notice only now, would
include the dropping of the unnecessary inner pair of parentheses.)
E.g. (there are other variations of it)

	        if (nid != memblk_nodeid[i] &&
		    nd->start >= nd_start && nd->end <= nd_end)
			return INTERLEAVE;

>>> @@ -275,10 +306,13 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
>>>  void __init
>>>  acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
>>>  {
>>> +	enum conflicts status;
>>
>> I don't think you need this local variable.
>>
>
> Why don't I need this one? Did you mean I can use
> switch (conflicting_memblks(...)) directly?

Yes. Why could this not be possible?

>>>  		       node_memblk_range[i].start, node_memblk_range[i].end);
>>>  		bad_srat();
>>>  		return;
>>>  	}
>>> -	if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)) {
>>> -		struct node *nd = &nodes[node];
>>>
>>> -		if (!node_test_and_set(node, memory_nodes_parsed)) {
>>> -			nd->start = start;
>>> -			nd->end = end;
>>> -		} else {
>>> -			if (start < nd->start)
>>> -				nd->start = start;
>>> -			if (nd->end < end)
>>> -				nd->end = end;
>>> -		}
>>> +	default:
>>> +		break;
>>
>> This wants to be "case NO_CONFLICT:", such that the compiler would
>> warn if a new enumerator appears without adding code here. (An
>> alternative - which personally I don't like - would be to put
>> ASSERT_UNREACHABLE() in the default: case. The downside is that
>> then the issue would only be noticeable at runtime.)
>>
>
> Thanks, I will adjust it to:
> 	case NO_CONFLICT:
> 		break;
> 	default:
> 		ASSERT_UNREACHABLE();
> in the next version.

As said - I consider this form less desirable, as it'll defer
noticing of an issue from build-time to runtime. If you think that
form is better, may I ask why?

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:37:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 06:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340243.565221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHzE-0002uU-Rx; Wed, 01 Jun 2022 06:37:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340243.565221; Wed, 01 Jun 2022 06:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwHzE-0002uN-PH; Wed, 01 Jun 2022 06:37:44 +0000
Received: by outflank-mailman (input) for mailman id 340243;
 Wed, 01 Jun 2022 06:37:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwHzE-0002uH-1l
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 06:37:44 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56c3404d-e175-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 08:37:42 +0200 (CEST)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2168.outbound.protection.outlook.com [104.47.17.168]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-2-vzrmx1faNX-3ogzMxevZnw-1; Wed, 01 Jun 2022 08:37:41 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR0402MB3952.eurprd04.prod.outlook.com (2603:10a6:803:1c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.19; Wed, 1 Jun
 2022 06:37:39 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 06:37:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56c3404d-e175-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654065462;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qlp3TaauqFQczu+72cAPOsSh/Z9upua5oV5Z+S8Lczc=;
	b=bd2qRB6fe8dyt97YRr+NaEeP54qWwR+MRDB/ttEQDnJo6NNfQz8FP9m+3p94AEaThxKeAf
	uZFcocEOAY7YJww7HWn6K+DJbh7sTVfE5hQmLONXgwV1jOOcEBjjDacxm5JKi9RyOjuiHf
	ZWAfi6v8zzL9Y9sPH4b+f0vJj2z0pvA=
X-MC-Unique: vzrmx1faNX-3ogzMxevZnw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <239d1e45-7094-42b5-027c-6697d0e23f0e@suse.com>
Date: Wed, 1 Jun 2022 08:37:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [RFC PATCH 5/6] xen/riscv: Add early_printk
Content-Language: en-US
To: Alistair Francis <alistair23@gmail.com>, Xie Xun <xiexun162534@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1653977696.git.xiexun162534@gmail.com>
 <016c56548eee75c2b713ef90e4069690c0ae11cb.1653977696.git.xiexun162534@gmail.com>
 <CAKmqyKNQJsUSLAgsMB0arkT3zAXzzm6QF46ZpwDN1GdpvRQMSw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKmqyKNQJsUSLAgsMB0arkT3zAXzzm6QF46ZpwDN1GdpvRQMSw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR10CA0067.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:80::44) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3e6b8a28-fe4f-4181-446b-08da43993872
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3952:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB39524F40790E8AD157257591B3DF9@VI1PR0402MB3952.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3e6b8a28-fe4f-4181-446b-08da43993872
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 06:37:38.8197
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OlT5bGHNAhzu7iWvGRJvOqkUQR430ATgTsgRp1pDy61KL2j1b6+fN/0NGzh8fQl5MKgqNutBA2UScQjnxN1Atg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3952

On 01.06.2022 05:59, Alistair Francis wrote:
> On Tue, May 31, 2022 at 5:09 PM Xie Xun <xiexun162534@gmail.com> wrote:
>> --- /dev/null
>> +++ b/xen/arch/riscv/early_printk.c
> 
> This should be named differently. This file should be called
> `sbi_console_early_printk.c` to better indicate that it's using the
> sbi_console

To not go overboard with file name length, perhaps sbi-earlyprintk.c
or sbi-early-printk.c would suffice? Or, if other variants are to
appear, *-sbi.c?

> The SBI print functions are useful, but they have been marked as
> deprecated with no future replacement (see
> https://github.com/riscv-non-isa/riscv-sbi-doc/blob/659950dc57f9840cf8242c1cff138c2ee67634bb/riscv-sbi.adoc#5-legacy-extensions-eids-0x00---0x0f)
> 
> For the initial port I think it's ok to use these, but this isn't a
> long term solution, we should aim to migrate to using the standard
> hardware UART.
> 
> I'm sure Xen already has a driver for the ns16550 UART so it might be
> worth just using that directly and not worrying with the sbi_console.

Of course we have a driver for that, but depending on the device flavors
in use on RISC-V it may require touching.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 06:50:03 2022
Message-ID: <40db300c-4d20-8339-599f-bcf6521442fa@suse.com>
Date: Wed, 1 Jun 2022 08:49:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v4 3/3] xsm: properly handle error from XSM init
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 "jandryuk@gmail.com" <jandryuk@gmail.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-4-dpsmith@apertussolutions.com>
 <c206a20b-ee5f-aa5b-64ba-fe06469f0f2f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c206a20b-ee5f-aa5b-64ba-fe06469f0f2f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 31.05.2022 21:18, Andrew Cooper wrote:
> On 31/05/2022 19:20, Daniel P. Smith wrote:
>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index 53a73010e0..ed67b50c9d 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1700,7 +1701,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
>>                                    RANGESETF_prettyprint_hex);
>>  
>> -    xsm_multiboot_init(module_map, mbi);
>> +    if ( xsm_multiboot_init(module_map, mbi) )
>> +        warning_add("WARNING: XSM failed to initialize.\n"
>> +                    "This has implications on the security of the system,\n"
>> +                    "as uncontrolled communications between trusted and\n"
>> +                    "untrusted domains may occur.\n");
> 
> The problem with this approach is that it forces each architecture to
> opencode the failure string, in a function which is very busy with other
> things too.
> 
> Couldn't xsm_{multiboot,dt}_init() be void, and the warning_add() move
> into them, like the SLIO warning for ARM already?

I, too, was considering suggesting this (but then didn't on v3).
Furthermore, the warning_add() could then be wrapped in a trivial helper
function to be used by both MB and DT.
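A self-contained sketch of that suggestion follows; the stubs for warning_add() and a failing core init are stand-ins so the control flow compiles outside Xen, and the helper name is purely illustrative:

```c
#include <assert.h>
#include <stdio.h>

/* Stand-ins for Xen internals, so the flow is demonstrable in isolation. */
static int warnings;
static void warning_add(const char *w) { warnings++; fputs(w, stdout); }
static int xsm_core_init(void) { return -1; /* pretend XSM init failed */ }

/* Trivial helper shared by the multiboot and DT init paths. */
static void xsm_fixup_warning(void)
{
    warning_add("WARNING: XSM failed to initialize.\n"
                "This has implications on the security of the system,\n"
                "as uncontrolled communications between trusted and\n"
                "untrusted domains may occur.\n");
}

/* Both entry points become void and report failure themselves, so the
 * per-arch setup code no longer open-codes the message. */
static void xsm_multiboot_init(void)
{
    if ( xsm_core_init() )
        xsm_fixup_warning();
}

static void xsm_dt_init(void)
{
    if ( xsm_core_init() )
        xsm_fixup_warning();
}
```

__start_xen() would then simply call the init function with no return value to check, mirroring what ARM already does for its SILO warning.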

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:10:38 2022
Message-ID: <e5bc83c8-3962-4d43-4ef1-f338ca2fb782@suse.com>
Date: Wed, 1 Jun 2022 09:10:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1de2cc0a-e89c-6be9-9d6e-a10219f6f9aa@suse.com>
 <YpYozCRkfs1KdBus@Air-de-Roger>
 <22d2f071-4046-52c6-6f11-23fb23fb61c1@suse.com>
 <YpY/Pm43mMJFGYql@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpY/Pm43mMJFGYql@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 31.05.2022 18:15, Roger Pau Monné wrote:
> On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
>> On 31.05.2022 16:40, Roger Pau Monné wrote:
>>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>>>       * that fall in unusable ranges for PV Dom0.
>>>>       */
>>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>>>> -        return false;
>>>> +        return 0;
>>>>
>>>>      switch ( type = page_get_ram_type(mfn) )
>>>>      {
>>>>      case RAM_TYPE_UNUSABLE:
>>>> -        return false;
>>>> +        return 0;
>>>>
>>>>      case RAM_TYPE_CONVENTIONAL:
>>>>          if ( iommu_hwdom_strict )
>>>> -            return false;
>>>> +            return 0;
>>>>          break;
>>>>
>>>>      default:
>>>>          if ( type & RAM_TYPE_RESERVED )
>>>>          {
>>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>>>> -                return false;
>>>> +                perms = 0;
>>>>          }
>>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>>>> -            return false;
>>>> +        else if ( is_hvm_domain(d) )
>>>> +            return 0;
>>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>>>> +            perms = 0;
>>>>      }
>>>>
>>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>>>> -        return false;
>>>> +        return 0;
>>>>      /* ... or the IO-APIC */
>>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>> -            return false;
>>>> +    if ( has_vioapic(d) )
>>>> +    {
>>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>> +                return 0;
>>>> +    }
>>>> +    else if ( is_pv_domain(d) )
>>>> +    {
>>>> +        /*
>>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>>>> +         * have such established for IOMMUs.
>>>> +         */
>>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>>>> +            perms = IOMMUF_readable;
>>>> +    }
>>>>      /*
>>>>       * ... or the PCIe MCFG regions.
>>
>> With this comment (which I leave alone) ...
>>
>>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>>>       * don't overlap with already mapped regions, thus preventing trapping.
>>>>       */
>>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>>>> -        return false;
>>>> +        return 0;
>>>> +    else if ( is_pv_domain(d) )
>>>> +    {
>>>> +        /*
>>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>>>> +         * These shouldn't be accessed via DMA by devices.
>>>
>>> Could you expand the comment a bit to explicitly mention the reason
>>> why MMCFG regions shouldn't be accessible from device DMA operations?
>>
>> ... it's hard to tell what I should write here. I'd expect extended
>> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
>> comment, if only I knew what to write. "We don't want them to be
>> accessed that way" seems a little blunt. I could say "Devices have
>> other means to access PCI config space", but this not being said there
>> I took as being implied.
>
> But we could likely say the same about IO-APIC or HPET MMIO regions.
> I don't think we expect them to be accessed by devices, yet we provide
> them for coherency with CPU side mappings in the PV case.

As to "say the same" - yes for the first part of my earlier reply, but
no for the latter part.

>> Or else what was the reason to exclude these
>> for PVH Dom0?
>=20
> The reason for PVH is because the config space is (partially) emulated
> for the hardware domain, so we don't allow untrapped access by the CPU
> either.

Hmm, right - there's read emulation there as well, while for PV we
only intercept writes.

So overall should we perhaps permit r/o access to MMCFG for PV? Of
course that would only end up consistent once we adjust mappings
dynamically when MMCFG ranges are put in use (IOW if we can't verify
an MMCFG range is suitably reserved, we'd not find it in
mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU-side
mapping even if CPU-side mappings are permitted). But for the
patch here it would simply mean dropping some of the code I did add
for v5.

Otherwise, i.e. if the code is to remain as is, I'm afraid I still
wouldn't see what to put usefully in the comment.
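Stripped of the Xen plumbing, the permission-returning shape of the hook in the quoted hunk can be reduced to a self-contained sketch; the flag values and the two range predicates here are illustrative stand-ins, not Xen's real ones:

```c
#include <assert.h>
#include <stdbool.h>

#define IOMMUF_readable  (1u << 0)   /* illustrative values */
#define IOMMUF_writable  (1u << 1)

/* Stand-ins for the mmio_ro_ranges / vpci_is_mmcfg_address() lookups. */
static bool in_mmio_ro_ranges(unsigned long pfn) { return pfn == 0xfed00; /* e.g. HPET */ }
static bool is_mmcfg(unsigned long pfn) { return pfn >= 0xf8000 && pfn < 0xf9000; }

/* Returns the permission set for a PV Dom0 IOMMU mapping of pfn,
 * instead of the old bool "map or don't". */
static unsigned int hwdom_iommu_perms(unsigned long pfn)
{
    /* Interrupt Address Range: never mapped. */
    if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
        return 0;

    /* Match r/o CPU mappings (e.g. HPET) with r/o IOMMU mappings. */
    if ( in_mmio_ro_ranges(pfn) )
        return IOMMUF_readable;

    /* MMCFG deliberately stays unmapped for device DMA. */
    if ( is_mmcfg(pfn) )
        return 0;

    return IOMMUF_readable | IOMMUF_writable;
}
```

The open question above is exactly whether the MMCFG branch should instead fall through to the r/o path once such ranges land in mmio_ro_ranges.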

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:30:20 2022
Message-ID: <0146fc48-096a-1c5e-406f-bd7b471fc1fa@suse.com>
Date: Wed, 1 Jun 2022 09:30:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <67fd1ed1-4a62-c014-51c0-f547e33fb427@suse.com>
 <YpY71HuPOP59Do+Y@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpY71HuPOP59Do+Y@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS8PR04CA0105.eurprd04.prod.outlook.com
 (2603:10a6:20b:31e::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 186d9cfa-edc6-459e-3c8d-08da43a08e41
X-MS-TrafficTypeDiagnostic: AM6PR04MB4743:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR04MB4743A0E56DA01DC3D38DB083B3DF9@AM6PR04MB4743.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 186d9cfa-edc6-459e-3c8d-08da43a08e41
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 07:30:08.8689
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0NUJfaRTwuV19Blv9hzdfrFqCDae0JOIx1sOWFHljyEa09yv32Vjo0sKte+tm3QYKvgBANDha46bpsl8RcD60w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB4743

On 31.05.2022 18:01, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:12:48PM +0200, Jan Beulich wrote:
>> For large page mappings to be easily usable (i.e. in particular without
>> un-shattering of smaller page mappings) and for mapping operations to
>> then also be more efficient, pass batches of Dom0 memory to iommu_map().
>> In dom0_construct_pv() and its helpers (covering strict mode) this
>> additionally requires establishing the type of those pages (albeit with
>> zero type references).
>>
>> The earlier establishing of PGT_writable_page | PGT_validated requires
>> the existing places where this gets done (through get_page_and_type())
>> to be updated: For pages which actually have a mapping, the type
>> refcount needs to be 1.
>>
>> There is actually a related bug that gets fixed here as a side effect:
>> Typically the last L1 table would get marked as such only after
>> get_page_and_type(..., PGT_writable_page). While this is fine as far as
>> refcounting goes, the page did remain mapped in the IOMMU in this case
>> (when "iommu=dom0-strict").
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Subsequently p2m_add_identity_entry() may want to also gain an order
>> parameter, for arch_iommu_hwdom_init() to use. While this only affects
>> non-RAM regions, systems typically have 2-16Mb of reserved space
>> immediately below 4Gb, which hence could be mapped more efficiently.
>>
>> Eventually we may want to overhaul this logic to use a rangeset based
>> approach instead, punching holes into originally uniformly large-page-
>> mapped regions. Doing so right here would first and foremost be yet more
>> of a change.
>>
>> The installing of zero-ref writable types has in fact shown (observed
>> while putting together the change) that despite the intention by the
>> XSA-288 changes (affecting DomU-s only) for Dom0 a number of
>> sufficiently ordinary pages (at the very least initrd and P2M ones as
>> well as pages that are part of the initial allocation but not part of
>> the initial mapping) still have been starting out as PGT_none, meaning
>> that they would have gained IOMMU mappings only the first time these
>> pages would get mapped writably. Consequently an open question is
>> whether iommu_memory_setup() should set the pages to PGT_writable_page
>> independent of need_iommu_pt_sync().
>
> Hm, I see, non strict PV dom0s won't get the pages set to
> PGT_writable_page even when accessible by devices by virtue of such
> domain having all RAM mapped in the IOMMU page-tables.
>
> I guess it does make sense to also have the pages set as
> PGT_writable_page by default in that case, as the pages _are_
> writable by the IOMMU.  Do pages added during runtime (ie: ballooned
> in) also get PGT_writable_page set?

Yes, by virtue of going through guest_physmap_add_page().

>> @@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
>>          if ( !perms )
>>              rc = 0;
>>          else if ( paging_mode_translate(d) )
>> +        {
>>              rc = p2m_add_identity_entry(d, pfn,
>>                                          perms & IOMMUF_writable ? p2m_access_rw
>>                                                                  : p2m_access_r,
>>                                          0);
>> +            if ( rc )
>> +                printk(XENLOG_WARNING
>> +                       "%pd: identity mapping of %lx failed: %d\n",
>> +                       d, pfn, rc);
>> +        }
>> +        else if ( pfn != start + count || perms != start_perms )
>> +        {
>> +        commit:
>> +            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
>> +                           &flush_flags);
>> +            if ( rc )
>> +                printk(XENLOG_WARNING
>> +                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
>> +                       d, pfn, pfn + count, rc);
>> +            SWAP(start, pfn);
>> +            start_perms = perms;
>> +            count = 1;
>> +        }
>>          else
>> -            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
>> -                           perms, &flush_flags);
>> +        {
>> +            ++count;
>> +            rc = 0;
>> +        }
>>
>> -        if ( rc )
>> -            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
>> -                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
>>
>> -        if (!(i & 0xfffff))
>> +        if ( !(++i & 0xfffff) )
>>              process_pending_softirqs();
>> +
>> +        if ( i == top && count )
>
> Nit: do you really need to check for count != 0? AFAICT this is only
> possible in the first iteration.

Yes, to avoid taking the PV path for PVH on the last iteration (count
remains zero for PVH throughout the entire loop).
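
For illustration only, the coalescing pattern discussed above can be sketched outside of Xen. This is a minimal simulation with hypothetical names (struct batch, coalesce()); it models just the start/count/start_perms bookkeeping from the hunk, recording a batch wherever the real code would issue iommu_map(), and omits the actual mapping, the goto-commit tail handling, and error paths:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical record of one batched mapping request. */
struct batch {
    unsigned long start;
    unsigned long count;
    unsigned int perms;
};

/*
 * Accumulate contiguous, identically-permissioned pfns; emit a batch
 * whenever contiguity breaks or the permissions change, and flush the
 * final partial batch at the end.
 */
static size_t coalesce(const unsigned long *pfns, const unsigned int *perms,
                       size_t n, struct batch *out)
{
    size_t nbatches = 0;
    unsigned long start = 0, count = 0;
    unsigned int start_perms = 0;

    for ( size_t i = 0; i < n; ++i )
    {
        if ( count && (pfns[i] != start + count || perms[i] != start_perms) )
        {
            /* The real code would call iommu_map() here. */
            out[nbatches++] = (struct batch){ start, count, start_perms };
            count = 0;
        }
        if ( !count )
        {
            start = pfns[i];
            start_perms = perms[i];
        }
        ++count;
    }
    if ( count )
        out[nbatches++] = (struct batch){ start, count, start_perms };

    return nbatches;
}
```

With pfns {0,1,2,5,6} and permissions {3,3,1,1,1} this yields three batches: [0,2) with perms 3, [2,3) with perms 1, and [5,7) with perms 1.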

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:32:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 07:32:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340279.565278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwIqb-0003Dp-6s; Wed, 01 Jun 2022 07:32:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340279.565278; Wed, 01 Jun 2022 07:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwIqb-0003Di-30; Wed, 01 Jun 2022 07:32:53 +0000
Received: by outflank-mailman (input) for mailman id 340279;
 Wed, 01 Jun 2022 07:32:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwIqa-0003Da-2w
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 07:32:52 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ab55379-e17d-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 09:32:51 +0200 (CEST)
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03lp2169.outbound.protection.outlook.com [104.47.51.169]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-43-htjD7aovM9u5tpDSzeowog-1; Wed, 01 Jun 2022 09:32:48 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB4202.eurprd04.prod.outlook.com (2603:10a6:5:25::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 07:32:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 07:32:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ab55379-e17d-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654068770;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Eq+SXhEEWyZlcv8g0wbjQAiNce1U/8ZrVCoKA72CPjI=;
	b=QRXvDQW9T9R4Uju6jMPCEp5+XF0c4cs7NXsIF4CqaC1DzxIHbufnn2m0rXimElHktR1EaU
	135qRQN8taaR/OVhLxZGKcxhH2GRSZDf8MWe2MhtFWKJEnwaGPlLDDc673E1HIr3xJ9Iz3
	qSKfZiff367GVk3Cqu1fBLUENVEQ9OU=
X-MC-Unique: htjD7aovM9u5tpDSzeowog-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y7oovN/VM8jEsbRYPDHXRaZ4ps5UU37B4YGWuqLFb5gQ4rTL2wQ+4rRA2IH9bu+mtrDg1egorJFXGDdGFS7jmmcliBgThruXMLGb72sKpBX+3DRsUPiKfaLOCBfPtNjnfDcydiN3R/s7Jiq0jRbrT80w3qjgE2Mze9U4iyym398H2M/zPfpateYAxRO23G6dDz1sCK59KfAeCxp+11zt1rbc4lokmiUkZTXwDNexLHnCZHo6N9eskRpc9QgGilFG/spFOV5x/d4sU7zTYe4DLolbh9hUVhYvplZiL5JRHfUUXbLruYdV7VeCwm+u0A+1PW4Uep2TKDL80DKdhTStNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ICqHSupe+g2orTcf3YUEr+7It+/wynk6o0K/KG5vsyY=;
 b=CFU4XCHVwOvffLkFmRSH+OjnPDahLuoE/c7ITUMD8f1gcazkpQDKVYo1dHpqtI4eUhQIBpMJmniCiUKCewdt5MzMrPl91TfVhvTKaWf+h4RpS+O7SDX4R4+xFHjP/vLlLNb2MdjjYLelq21I2YWBtm928WHvTgoMaO8OPWlwri+XlQnMAdOhEmDPHM0weCnaZUavVO8ILt5YrX45a0gHrsWbECPacUrtYFLE9jnnoMMluQwhmwf98hkuugJGzSOymtuMcPBh2xYaKUT1sEVKQoku3rFPIufJnRkDGgmnd2N59nGKgiVYfgBCA0PjDBql122xw2io773wmZOxmpTsuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <372325ed-18b6-9329-901d-6596ce6e497d@suse.com>
Date: Wed, 1 Jun 2022 09:32:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <614413d8-5043-f0e3-929b-f161fa89bb35@suse.com>
 <YpZBjVxRdJOzJzZx@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpZBjVxRdJOzJzZx@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6PR08CA0040.eurprd08.prod.outlook.com
 (2603:10a6:20b:c0::28) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 37a20e7c-3c50-4a2b-1b6b-08da43a0ec38
X-MS-TrafficTypeDiagnostic: DB7PR04MB4202:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR04MB4202CC3EB72B8288AF0FFEE7B3DF9@DB7PR04MB4202.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37a20e7c-3c50-4a2b-1b6b-08da43a0ec38
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 07:32:46.4524
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ViwfPFnVp1rRPot7wXDIqhxy/ld7QxwLjTb1y4RNbWiL+2EVfxYtdAdIs5ArR/gh5nQERysZ6n3fT6vZnuEqew==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB4202

On 31.05.2022 18:25, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>>      return pg;
>>  }
>>
>> +/*
>> + * Intermediate page tables which get replaced by large pages may only be
>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>> + * requiring any locking.)
>> + */
>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>> +
>> +static void free_queued_pgtables(void *arg)
>> +{
>> +    struct page_list_head *list = arg;
>> +    struct page_info *pg;
>> +    unsigned int done = 0;
>> +
>> +    while ( (pg = page_list_remove_head(list)) )
>> +    {
>> +        free_domheap_page(pg);
>> +
>> +        /* Granularity of checking somewhat arbitrary. */
>> +        if ( !(++done & 0x1ff) )
>> +             process_pending_softirqs();
>
> Hm, I'm wondering whether we really want to process pending softirqs
> here.
>
> Such processing will prevent the watchdog from triggering, which we
> likely want in production builds.  OTOH in debug builds we should make
> sure that free_queued_pgtables() doesn't take longer than a watchdog
> window, or else it's likely to cause issues to guests scheduled on
> this same pCPU (and calling process_pending_softirqs() will just mask
> it).

Doesn't this consideration apply to about every use of the function we
already have in the code base?

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:40:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 07:40:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340287.565289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwIxt-0004fk-19; Wed, 01 Jun 2022 07:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340287.565289; Wed, 01 Jun 2022 07:40:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwIxs-0004fd-Tq; Wed, 01 Jun 2022 07:40:24 +0000
Received: by outflank-mailman (input) for mailman id 340287;
 Wed, 01 Jun 2022 07:40:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=esiv=WI=citrix.com=prvs=1449ffc77=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1nwIxr-0004fW-9L
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 07:40:23 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15eeebf5-e17e-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 09:40:21 +0200 (CEST)
Received: from mail-dm6nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 Jun 2022 03:40:14 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by BL0PR03MB4228.namprd03.prod.outlook.com (2603:10b6:208:68::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 07:40:13 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308%4]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 07:40:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15eeebf5-e17e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654069220;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=jq0wdIyUfobfG2FBvyvsaB3PwseR77vdc+81MI9QyRY=;
  b=W5jJVC6p1G55Fhoe59zz0M6vBGcsYEb8HdeQpwPgoqxNokjgzdKyd3Af
   s4pq9OtrwpIi8n2S8EVBpe5mC6DAVehLUvSPK3fKT2VjDn/ND4qPBvtTl
   SMwV0mofsz6JSJdBN9eo7xmaQRiqtXPhg+MMjFTh3HfAm6nitL0XmvOXY
   I=;
X-IronPort-RemoteIP: 104.47.58.102
X-IronPort-MID: 71950180
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,266,1647316800"; 
   d="asc'?scan'208,217";a="71950180"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NPoyHLuHqg/V2IGli49OBV46OIZRom69JFN+KcRQFxW57NdidSezY52a4+Sv/c12Th6LCeJip0tL622msYfEenFWxlk0uQDoJn2xZbA2xouKd2BJJ5jDhwmZnjB0MMK0qXcNwQbqdA9dqGCNA4VyXxzOr/2TW4AA0PF1i3p8Y0ukUD6l9fuf3qHWzS0zg7chvfHQTMJUpzgi+QgTzA7r23xTP6ucoLCp4+RqGHsDsGNQaRjvsigb06+ea5c3+ueTbGuXak4FcnxyxIuDL2xBEkJwo5qxVqeH+UufWYBI/T8mD73ksJKdfKvxQZMknfnXVHMXBkf3xzP0kqumPBX45w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jyVIV3LeyRDNNE3omh0hYjfR0gRzgBqV4vaR1GTJwGc=;
 b=QBcfO1HQMt/WGPSFQ7oU6dCa39K8bIO+GwwDEWRlKTkHyhutMZ5C909NB9pRkhNgMvoNpO6zM0h9yayOfXO417PpfoTeXNyaUv/ABDkk6cm4eyxRsUgpGPJ0XZye63Xui8uevuesuL2fo3mXff+UOLe1G2jMCKPUa1RT2oEPpgLLyntgYyR55FgGRw131pKudGmyIY0KWHTch7kXkcnsTkNxkVyuBUoqCi+c1uqrWMBQP7xGJS4eKH/PuleYJoJjm6ZmYVSNRolxEZyTgenxP3IqLZEVr0+aYBbInJEAyvdwolPzuwTSDheSe9PGUUa9cfEqDFKz5fZ3nkXURls7eQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jyVIV3LeyRDNNE3omh0hYjfR0gRzgBqV4vaR1GTJwGc=;
 b=uu2XpIebykOURZJR5tQWwQDfsqzLhlMZk4Aj+4xHXVOO/wkTJCd6AtlNrMvxInx61U0lHSdjdT3bMR5pdg9L7aScbf7oXrWWEs/uLpX2+PibmDx72Xgclwd9jSz9vDZG7VGXC6omNemU0DMXzmlXAQ8OAPeOlUbn+CIeLLZVXs4=
From: George Dunlap <George.Dunlap@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: "Daniel P. Smith" <dpsmith@apertussolutions.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, xen-devel <xen-devel@lists.xenproject.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>,
	"scott.davis@starlab.io" <scott.davis@starlab.io>,
	"christopher.clark@starlab.io" <christopher.clark@starlab.io>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
Thread-Topic: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
Thread-Index: AQHYdHaBi3mMFaceck2YWC859wyqAa04suMAgAAbYQCAADQ0AIABKjuA
Date: Wed, 1 Jun 2022 07:40:12 +0000
Message-ID: <8F3BD9BB-EC62-4721-9BD0-3E4CC1E75A22@citrix.com>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <2F13F24D-0A55-4BC3-9AC6-606C7E1626E8@arm.com>
 <4ebbb465-00ec-f4ba-8555-711cd76517ed@apertussolutions.com>
 <YpYdqglsWIlsFsaB@Air-de-Roger>
In-Reply-To: <YpYdqglsWIlsFsaB@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/signed;
	boundary="Apple-Mail=_89C0914C-E71D-4E70-861B-5565183C51A1";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0

--Apple-Mail=_89C0914C-E71D-4E70-861B-5565183C51A1
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_E277AD90-BD39-4FC2-818A-A6A371C2E64C"


--Apple-Mail=_E277AD90-BD39-4FC2-818A-A6A371C2E64C
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8



> On 31 May 2022, at 14:52, Roger Pau Monne <roger.pau@citrix.com> wrote:
>
> On Tue, May 31, 2022 at 06:45:52AM -0400, Daniel P. Smith wrote:
>> On 5/31/22 05:07, Bertrand Marquis wrote:
>>> Hi Daniel,
>>
>> Greetings Bertrand.
>>
>>>> On 31 May 2022, at 03:41, Daniel P. Smith <dpsmith@apertussolutions.com> wrote:
>>>>
>>>> For x86 the number of allowable multiboot modules varies between the different
>>>> entry points, non-efi boot, pvh boot, and efi boot. In the case of both Arm and
>>>> x86 this value is fixed to values based on generalized assumptions. With
>>>> hyperlaunch for x86 and dom0less on Arm, use of static sizes results in large
>>>> allocations compiled into the hypervisor that will go unused by many use cases.
>>>>
>>>> This commit introduces a Kconfig variable that is set with sane defaults based
>>>> on configuration selection. This variable is in turned used as the array size
>>>> for the cases where a static allocated array of boot modules is declared.
>>>>
>>>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>>>> ---
>>>> xen/arch/Kconfig                  | 12 ++++++++++++
>>>> xen/arch/arm/include/asm/setup.h  |  5 +++--
>>>> xen/arch/x86/efi/efi-boot.h       |  2 +-
>>>> xen/arch/x86/guest/xen/pvh-boot.c |  2 +-
>>>> xen/arch/x86/setup.c              |  4 ++--
>>>> 5 files changed, 19 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>>> index f16eb0df43..57b14e22c9 100644
>>>> --- a/xen/arch/Kconfig
>>>> +++ b/xen/arch/Kconfig
>>>> @@ -17,3 +17,15 @@ config NR_CPUS
>>>> 	 For CPU cores which support Simultaneous Multi-Threading or similar
>>>> 	 technologies, this the number of logical threads which Xen will
>>>> 	 support.
>>>> +
>>>> +config NR_BOOTMODS
>>>> +	int "Maximum number of boot modules that a loader can pass"
>>>> +	range 1 64
>>>> +	default "8" if X86
>>>> +	default "32" if ARM
>>>> +	help
>>>> +	 Controls the build-time size of various arrays allocated for
>>>> +	 parsing the boot modules passed by a loader when starting Xen.
>>>> +
>>>> +	 This is of particular interest when using Xen's hypervisor domain
>>>> +	 capabilities such as dom0less.
>>>> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
>>>> index 2bb01ecfa8..312a3e4209 100644
>>>> --- a/xen/arch/arm/include/asm/setup.h
>>>> +++ b/xen/arch/arm/include/asm/setup.h
>>>> @@ -10,7 +10,8 @@
>>>>
>>>> #define NR_MEM_BANKS 256
>>>>
>>>> -#define MAX_MODULES 32 /* Current maximum useful modules */
>>>> +/* Current maximum useful modules */
>>>> +#define MAX_MODULES CONFIG_NR_BOOTMODS
>>>>
>>>> typedef enum {
>>>> BOOTMOD_XEN,
>>>> @@ -38,7 +39,7 @@ struct meminfo {
>>>> * The domU flag is set for kernels and ramdisks of "xen,domain" nodes.
>>>> * The purpose of the domU flag is to avoid getting confused in
>>>> * kernel_probe, where we try to guess which is the dom0 kernel and
>>>> - * initrd to be compatible with all versions of the multiboot spec. 
>>>> + * initrd to be compatible with all versions of the multiboot spec.
>>>
>>> This seems to be a spurious change.
>>
>> I have been trying to clean up trailing white space when I see it
>> nearby. I can drop this one if that is desired.
>
> IMO it's best if such white space removal is only done when already
> modifying the line, or else it makes it harder to track changes when
> using `git blame` for example (not likely in this case since it's a
> multi line comment).

The down side of this is that you can’t use “automatically remove
trailing whitespace on save” features of some editors.

Without such automation, I introduce loads of trailing whitespace.
With such automation, I end up removing random trailing whitespace as
I happen to touch files.  I’ve always done this by just adding “While
here, remove some trailing whitespace” to the commit message, and
there haven’t been any complaints.

If we actually care about trailing whitespace, then I think we should
accept random fix-ups as files are touched.  OTOH if we want to avoid
random fix-ups, we should remove the aversion to trailing whitespace.

 -George
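[For readers skimming the archive: the build-time allocation the series
shrinks can be sketched as follows. This is a simplified, self-contained
illustration; the struct fields and helper name are hypothetical, not
Xen's actual declarations. The point is only that the array's footprint
is fixed at compile time by CONFIG_NR_BOOTMODS.]

```c
#include <stddef.h>

/*
 * Normally provided by Kconfig via the generated autoconf.h; defined
 * here so the sketch compiles on its own. 8 matches the proposed
 * x86 default.
 */
#ifndef CONFIG_NR_BOOTMODS
#define CONFIG_NR_BOOTMODS 8
#endif

/* Illustrative stand-in for a boot module descriptor. */
struct boot_module {
    unsigned long start;
    unsigned long size;
};

/*
 * Sized at build time: every unused slot still occupies space in the
 * hypervisor image, which is the waste the commit message describes.
 */
static struct boot_module boot_modules[CONFIG_NR_BOOTMODS];

size_t max_boot_modules(void)
{
    return sizeof(boot_modules) / sizeof(boot_modules[0]);
}
```

Building with a smaller CONFIG_NR_BOOTMODS shrinks the static array
directly, so configurations that pass few modules pay only for what
they use.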


--Apple-Mail=_E277AD90-BD39-4FC2-818A-A6A371C2E64C--

--Apple-Mail=_89C0914C-E71D-4E70-861B-5565183C51A1--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:50:53 2022
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5b7ccf0a-71d9-869c-aaac-2b10dd2b4d9f@xen.org>
Date: Wed, 1 Jun 2022 08:50:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Peng Fan <peng.fan@nxp.com>, Bertrand Marquis <bertrand.marquis@arm.com>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <da899c6a-a9ec-fa25-75ef-6f375dfd422a@xen.org>
 <alpine.DEB.2.22.394.2205311327330.1905099@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2205311327330.1905099@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/06/2022 00:13, Stefano Stabellini wrote:
>>> arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory
>>> regions
>>>
>>> This is the enhancement of the 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
>>> Those patch introduces p2m_mmio_direct_nc_x p2m type which sets the
>>> e->p2m.xn = 0 for the reserved-memory, such as vpu encoder/decoder.
>>>
>>> Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
>>> same way it does in map_mmio_regions. This change is for the case
>>> when vpu encoder/decoder works in DomO and not passed-through to the
>>> Guest Domains.
>>>
>>> Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
>>> ---
>>>    xen/arch/arm/p2m.c | 7 +++++++
>>>    1 file changed, 7 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index e9568dab88..bb1f681b71 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
>>>                         mfn_t mfn,
>>>                         p2m_type_t p2mt)
>>>    {
>>> +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
>>> +        (((long)gfn_x(gfn) + nr) <=
>>> +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE)>> PAGE_SHIFT)))
>>
>> I am afraid I don't understand what this check is for. In a normal setup, we
>> don't know where the reserved regions are mapped. Only the caller may know
>> that.
>>
>> For dom0, this decision could be taken in map_range_to_domain(). For the domU,
>> we would need to let the toolstack to chose the memory attribute.
> 
> I think the intent of the check is to recognize that map_regions_p2mt
> was called for a normal memory location and, if so, change the p2m type
> to p2m_mmio_direct_nc_x.

That would have made sense if it were for a domU. But AFAICT the intent 
is to address the problem for dom0.

Technically, GUEST_RAM0_BASE describes the RAM for the guest and not 
dom0. Maybe they are the same on his HW, but without more details I 
can't confirm that. And therefore...

> 
> As a downstream, the patch below is one of the easiest way to have a
> self-contained change to fix the problem described above.
... there is no way for me to tell whether this patch would still be 
fine for a downstream project.
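[Stripped of Xen's accessors, the guard being questioned is a plain
range-containment test. A self-contained restatement (with illustrative
base/size constants, not the values from Xen's guest ABI headers) makes
the objection concrete: the bounds are guest-layout constants, which say
nothing about where dom0's reserved-memory regions actually live.]

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT      12
/* Illustrative layout constants; real values come from Xen's headers. */
#define GUEST_RAM0_BASE 0x40000000UL
#define GUEST_RAM0_SIZE 0x40000000UL

/* True iff the frame range [gfn, gfn + nr) lies entirely inside RAM0. */
bool gfn_range_in_ram0(uint64_t gfn, uint64_t nr)
{
    uint64_t start = GUEST_RAM0_BASE >> PAGE_SHIFT;
    uint64_t end   = (GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT;

    return gfn >= start && gfn + nr <= end;
}
```

For dom0 on Arm the guest frame numbers are typically the host frame
numbers, so a test against GUEST_RAM0_* bounds can pass or fail for
reasons unrelated to whether the region is reserved-memory. That is why
the p2m type choice belongs with the caller (e.g. map_range_to_domain
for dom0, or the toolstack for a domU) rather than inside
map_regions_p2mt.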

Cheers,

[...]

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 07:59:32 2022
 bh=PZaCf6yj+k2pTxQ/obHa+4j8WN3nS1YOXvsVM/ZQM/8=;
 b=JEkVn/7jLdCplfpeckoAz6eoMKodgxcmvPq3kQQ+KSLufF4h/0VAlQIBN2ptinu0KQR7Whb/xmwLy+y70vJUcGmGViu/VXAqLTpiVqa9qOY4g5Bz8mRhsU9Cxy72EwMYg9ZhRt5429ErPqPtLK5DNBhIb9iOlHCwllC8dI4tE4Y=
From: Peng Fan <peng.fan@nxp.com>
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: RE: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Topic: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Index: AQHYdDjfc3Z/1SeAUE+eDUzXC1OEPa06Mh/Q
Date: Wed, 1 Jun 2022 07:59:23 +0000
Message-ID:
 <DU0PR04MB94175823AAA60802503F802588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
References: <20220530152102.GA883104@EPUAKYIW015D>
In-Reply-To: <20220530152102.GA883104@EPUAKYIW015D>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nxp.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ce67c8b1-1f99-4729-4324-08da43a4a444
x-ms-traffictypediagnostic: HE1PR0402MB2828:EE_
x-microsoft-antispam-prvs:
 <HE1PR0402MB28288DDE7718587E4D883EBC88DF9@HE1PR0402MB2828.eurprd04.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DU0PR04MB9417.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce67c8b1-1f99-4729-4324-08da43a4a444
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Jun 2022 07:59:23.5512
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: su7kiL4n7QHgrRo0rHWp03T+0YOXGqjVhLYV2/4Qw+cAeHencqhq07QIkYasMZE9fzJmvQKYu5cROb0MFdCb/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0402MB2828

> Subject: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
>
> Hello,
>
> I'm getting a permission fault from the SMMU when trying to init the
> VPU_Encoder/Decoder in Dom0 on an IMX8QM board:
> (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408,
> iova=0x86000a60, fsynr=0x1c0062, cb=0
> This error appears when the vpu_encoder/decoder tries to memcpy the
> firmware image to address 0x86000000, which is defined in the
> reserved-memory node of the Xen device tree as the
> encoder_boot/decoder_boot region.
>
> I'm using Xen from the xen-project/staging-4.16 branch plus the
> imx-related patches, which were taken from
> https://source.codeaurora.org/external/imx/imx-xen.
>
> After some investigation I found that this issue was fixed by Peng Fan
> in commit 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (hash from
> codeaurora), but only for the guest domains.
> It introduces a new p2m_type, p2m_mmio_direct_nc_x, which differs from
> p2m_mmio_direct_nc by having XN = 0. This type is set for the reserved
> memory region in the map_mmio_regions function.
>
> I was able to fix the issue in Dom0 by setting the p2m_mmio_direct_nc_x
> type for the reserved memory in map_regions_p2mt, which is used to map
> memory during Dom0 creation.
> The patch can be found below.
>
> Based on initial discussions on the IRC channel, the XN bit did the
> trick because it looks like the VPU decoder executes some code from
> this memory.
>
> The purpose of this email is to discuss this issue and hopefully
> produce a generic solution for it.
>
> Best regards,
> Oleksii.
>
> ---
> arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory regions
>
> This is an enhancement of commit
> 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> That patch introduces the p2m_mmio_direct_nc_x p2m type, which sets
> e->p2m.xn = 0 for reserved-memory regions such as the VPU
> encoder/decoder.
>
> Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
> same way it is done in map_mmio_regions. This change is for the case
> when the VPU encoder/decoder works in Dom0 and is not passed through
> to the guest domains.

For Dom0, there is no SMMU in this setup, so the executable (x) variant is not needed. Plain nc is enough.

Regards,
Peng.

>
> Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> ---
>  xen/arch/arm/p2m.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index e9568dab88..bb1f681b71 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
>                       mfn_t mfn,
>                       p2m_type_t p2mt)
>  {
> +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> +        (((long)gfn_x(gfn) + nr) <=
> +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
> +    {
> +        p2m_remove_mapping(d, gfn, nr, mfn);
> +        return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_nc_x);
> +    }
>      return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
>  }
>
> --
> 2.27.0


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 08:18:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 08:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340316.565322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwJYE-0001zu-To; Wed, 01 Jun 2022 08:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340316.565322; Wed, 01 Jun 2022 08:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwJYE-0001zn-R1; Wed, 01 Jun 2022 08:17:58 +0000
Received: by outflank-mailman (input) for mailman id 340316;
 Wed, 01 Jun 2022 08:17:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s3TG=WI=citrix.com=prvs=1441e74d4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwJYC-0001ze-F9
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 08:17:56 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 54b96cf0-e183-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 10:17:53 +0200 (CEST)
Received: from mail-bn7nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 Jun 2022 04:17:45 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by DM6PR03MB4428.namprd03.prod.outlook.com (2603:10b6:5:101::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Wed, 1 Jun
 2022 08:17:43 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.013; Wed, 1 Jun 2022
 08:17:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54b96cf0-e183-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654071473;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=1QhUNLOsmuk8+a3r4l41JsA+JNR82+WRjZjAvauoONQ=;
  b=dKgdoVs9qfjMTo59I+FjyzeaVBK1MPwGtflCovTlmrLGWSC/oOcTSoFY
   1nThZg1uUmYd7rqr3UH9w250szVx6Knuk4POsf/hTQKOS8BKocQcV4SO8
   shJuPcl3WsaVl3sjjjrrL9XxDO6/bK88eXJwlfKWL23WPWxQQexzpvEhC
   s=;
X-IronPort-RemoteIP: 104.47.70.103
X-IronPort-MID: 75133149
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ACiDRTg/jTFbuEcWY4xg/8PNEKEVsLl46uj0UQSJ6paiuygxQF/pGD0hV7JQbtINjv9Yfcjb05RSi4IJl3r6L1Snt9ehPYvj7od+Q+JGKAEsB9x/NnY03fOylOga/hTHBQuhF9HWnYbWoSP7C0SV/OpZupLTUP8ROo1uKAfi9Ipqtg/zeKLYIhSRtfNnYJwsWpO+U8NuWon/7fTPqK5pEZy/IdoXMxqKraFxpog5p1HXtieJSLa5ENzPdDWC3YeHzyOna8SXFvQk4DrQiwprKPXBj4HlADLBdhT3U+zdjZOKgnFyQ+UtfDqMsok9nc+OIhAM3CwHUgwtueHMcfSTkQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RmOwzsdsnI55k4oa+wPCZ/S7Lttrw7ckiBEpig/WKbg=;
 b=Zwy0kuDBBnP5e6f5gt2IZztIqE75rOoxhfynVKaYDwEgPhCiXacOCwtYK1Dy9bAOzp171Sm74VT37XfI3e3wvBnxlwJt1pnWqOzYj3onTva7yxysXiNFP+u1MXtpjq93SldNSIn8ct8UCTupkNTvt2776BKxoSXgjytoanexUscE1UzaXk1el0X6IgvO1xX0E1TlUd3647V3Ecks9OVpd/nZEdGwo8tW1RR9Bwh11xWJFTZmflwnF4d9dY94KZPw3GzYTfm3gkkinUgLGD7jCl1HlKInOAk2ofssapNf3j1CAZYorl8aOQ84po7afgNkz/mDzO4sXnmfhLpxRPWVGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RmOwzsdsnI55k4oa+wPCZ/S7Lttrw7ckiBEpig/WKbg=;
 b=DZ7D6bkyRmO+PgW+LaruJOEuay6vP081sFKDwaBvQjzctFIpYvtzYzNXyiJfgctqSWcbOdPsBlV+Pr6m5V9ZuuuaISf8MhV7o+n8QvES0r0uhdRzHxSvvTBXl675ycncfvPxCsIzVd1FbDp4gm0FKDIJXgwp7t+2q8kigTlYCr0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 1 Jun 2022 10:17:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
Message-ID: <YpcgoYkJMzQnXUkb@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1de2cc0a-e89c-6be9-9d6e-a10219f6f9aa@suse.com>
 <YpYozCRkfs1KdBus@Air-de-Roger>
 <22d2f071-4046-52c6-6f11-23fb23fb61c1@suse.com>
 <YpY/Pm43mMJFGYql@Air-de-Roger>
 <e5bc83c8-3962-4d43-4ef1-f338ca2fb782@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e5bc83c8-3962-4d43-4ef1-f338ca2fb782@suse.com>
X-ClientProxiedBy: LO2P265CA0343.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e6dcb0e7-f318-406d-58c1-08da43a73381
X-MS-TrafficTypeDiagnostic: DM6PR03MB4428:EE_
X-Microsoft-Antispam-PRVS:
	<DM6PR03MB4428665D2DF03D1071CB045F8FDF9@DM6PR03MB4428.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e6dcb0e7-f318-406d-58c1-08da43a73381
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 08:17:43.1091
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HhfCE+cYemrEsem2bvNTRS4WVWE7C5tK5y5g6vyRpJtUtaQkRLgJOM19BfHYIacAXUClxcm5RB23V1LSz4GF/g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4428

On Wed, Jun 01, 2022 at 09:10:09AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:15, Roger Pau Monné wrote:
> > On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
> >> On 31.05.2022 16:40, Roger Pau Monné wrote:
> >>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
> >>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
> >>>>       * that fall in unusable ranges for PV Dom0.
> >>>>       */
> >>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
> >>>> -        return false;
> >>>> +        return 0;
> >>>>  
> >>>>      switch ( type = page_get_ram_type(mfn) )
> >>>>      {
> >>>>      case RAM_TYPE_UNUSABLE:
> >>>> -        return false;
> >>>> +        return 0;
> >>>>  
> >>>>      case RAM_TYPE_CONVENTIONAL:
> >>>>          if ( iommu_hwdom_strict )
> >>>> -            return false;
> >>>> +            return 0;
> >>>>          break;
> >>>>  
> >>>>      default:
> >>>>          if ( type & RAM_TYPE_RESERVED )
> >>>>          {
> >>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
> >>>> -                return false;
> >>>> +                perms = 0;
> >>>>          }
> >>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
> >>>> -            return false;
> >>>> +        else if ( is_hvm_domain(d) )
> >>>> +            return 0;
> >>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
> >>>> +            perms = 0;
> >>>>      }
> >>>>  
> >>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
> >>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
> >>>> -        return false;
> >>>> +        return 0;
> >>>>      /* ... or the IO-APIC */
> >>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
> >>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >>>> -            return false;
> >>>> +    if ( has_vioapic(d) )
> >>>> +    {
> >>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
> >>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >>>> +                return 0;
> >>>> +    }
> >>>> +    else if ( is_pv_domain(d) )
> >>>> +    {
> >>>> +        /*
> >>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
> >>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
> >>>> +         * have such established for IOMMUs.
> >>>> +         */
> >>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
> >>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
> >>>> +            perms = IOMMUF_readable;
> >>>> +    }
> >>>>      /*
> >>>>       * ... or the PCIe MCFG regions.
> >>
> >> With this comment (which I leave alone) ...
> >>
> >>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
> >>>>       * don't overlap with already mapped regions, thus preventing trapping.
> >>>>       */
> >>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
> >>>> -        return false;
> >>>> +        return 0;
> >>>> +    else if ( is_pv_domain(d) )
> >>>> +    {
> >>>> +        /*
> >>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
> >>>> +         * These shouldn't be accessed via DMA by devices.
> >>>
> >>> Could you expand the comment a bit to explicitly mention the reason
> >>> why MMCFG regions shouldn't be accessible from device DMA operations?
> >>
> >> ... it's hard to tell what I should write here. I'd expect extended
> >> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
> >> comment, if only I knew what to write. "We don't want them to be
> >> accessed that way" seems a little blunt. I could say "Devices have
> >> other means to access PCI config space", but this not being said there
> >> I took as being implied.
> > 
> > But we could likely say the same about IO-APIC or HPET MMIO regions.
> > I don't think we expect them to be accessed by devices, yet we provide
> > them for coherency with CPU side mappings in the PV case.
> 
> As to "say the same" - yes for the first part of my earlier reply, but
> no for the latter part.

Yes, obviously devices cannot access the HPET or the IO-APIC MMIO from
the PCI config space :).

> >> Or else what was the reason to exclude these
> >> for PVH Dom0?
> > 
> > The reason for PVH is because the config space is (partially) emulated
> > for the hardware domain, so we don't allow untrapped access by the CPU
> > either.
> 
> Hmm, right - there's read emulation there as well, while for PV we
> only intercept writes.
> 
> So overall should we perhaps permit r/o access to MMCFG for PV? Of
> course that would only end up consistent once we adjust mappings
> dynamically when MMCFG ranges are put in use (IOW if we can't verify
> an MMCFG range is suitably reserved, we'd not find it in
> mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU
> side mapping even if CPU side mappings are permitted). But for the
> patch here it would simply mean dropping some of the code I did add
> for v5.

I would be OK with that, as I think we would then be consistent with
how IO-APIC and HPET MMIO regions are handled.  We would have to add
some small helper/handling in PHYSDEVOP_pci_mmcfg_reserved for PV.

> Otherwise, i.e. if the code is to remain as is, I'm afraid I still
> wouldn't see what to put usefully in the comment.

IMO the important part is to note whether there's a reason why the
handling of the IO-APIC and HPET RO regions differs from that of the
MMCFG RO regions in PV mode. I.e., if we don't want to handle MMCFG in
RO mode for device mappings because of the complication of handling
dynamic changes as a result of PHYSDEVOP_pci_mmcfg_reserved, we should
just note that.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 08:49:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 08:49:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340326.565332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwK2m-0005o0-Is; Wed, 01 Jun 2022 08:49:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340326.565332; Wed, 01 Jun 2022 08:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwK2m-0005nt-G2; Wed, 01 Jun 2022 08:49:32 +0000
Received: by outflank-mailman (input) for mailman id 340326;
 Wed, 01 Jun 2022 08:49:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwK2m-0005nj-0i; Wed, 01 Jun 2022 08:49:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwK2l-0004bK-Sv; Wed, 01 Jun 2022 08:49:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwK2l-00067H-Bh; Wed, 01 Jun 2022 08:49:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwK2l-0001rF-BD; Wed, 01 Jun 2022 08:49:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wUnwDE24fJ2nH/bGAGCxU+J1GQYURWQqNtDcixZLMik=; b=SWoe8NqoPz7O8OJy9wryo0z5Q6
	wYlZ/bQn0eUoe7wXwxLmBodU+pVNQsILzkuZMbEowbLX5VFKnRO63gCqYPNOQ/BFd+louaX/BkeUv
	qa3tbQDPWXb+QyKE6fHQTUipOqHDVbYWYQUEnS44FEwsKGyNZt5GGQMcR+kyFRjr1cmQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170792-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170792: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
X-Osstest-Versions-That:
    xen=49dd52fb1311dadab29f6634d0bc1f4c022c357a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 08:49:31 +0000

flight 170792 xen-unstable real [real]
flight 170795 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170792/
http://logs.test-lab.xenproject.org/osstest/logs/170795/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  7 xen-install      fail pass in 170795-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 170772
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170780
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170780
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170780
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170780
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170780
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170780
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170780
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170780
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170780
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170780
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170780
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170780
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170780
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
baseline version:
 xen                  49dd52fb1311dadab29f6634d0bc1f4c022c357a

Last test of basis   170780  2022-05-31 01:53:10 Z    1 days
Testing same since   170792  2022-06-01 01:09:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   49dd52fb13..09a6a71097  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:04:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 09:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340337.565348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKHe-00005J-T1; Wed, 01 Jun 2022 09:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340337.565348; Wed, 01 Jun 2022 09:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKHe-000053-Pj; Wed, 01 Jun 2022 09:04:54 +0000
Received: by outflank-mailman (input) for mailman id 340337;
 Wed, 01 Jun 2022 09:04:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdo/=WI=epam.com=prvs=8151ed00d6=oleksii_moisieiev@srs-se1.protection.inumbo.net>)
 id 1nwKHd-00004v-Dy
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 09:04:53 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e439799f-e189-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 11:04:51 +0200 (CEST)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 2518PCPc015107;
 Wed, 1 Jun 2022 09:04:39 GMT
Received: from eur03-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2057.outbound.protection.outlook.com [104.47.9.57])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3ge4k8r4yg-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 01 Jun 2022 09:04:39 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com (2603:10a6:102:ea::23)
 by DB7PR03MB4970.eurprd03.prod.outlook.com (2603:10a6:10:7c::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Wed, 1 Jun
 2022 09:04:35 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::31b5:dfd5:2d38:c0b2]) by PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::31b5:dfd5:2d38:c0b2%9]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 09:04:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e439799f-e189-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TmsHQLJO7V2Svpj9+yx/nFs1XzrUQ6D5egy+K+2Py4H67PcfIisVHOe7OSdqONjLYMhMGDx7bYCYdrj4KmB06gPIFv+QcsyLSYBUDIBOVHRfDIAQV4OO0RnlyBEVLpBtD/yn1kmcspztdqe/1wrADEMyAbh7yH08psoJ/2qbE3RiVSrNRSlivXs/zYs2EtUnSd6fwbcaF3xsppWSioAw9cqLuqZPJJJYFGqKZszFY9i+XFI4lWoBkdThYjDM3Ngw/o9YTewe5p5W1b/8LHGo5HVgsr4tQYYOvd5gcC1IvebcxZ+gZvH1vv6GqZyodSe9eOcU32Ilx2skpGmQr4No8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FJGi0jDrsdDVUTUEv+r1p4LlW3UjXyOGb5PcbVcR0pw=;
 b=PZSI866UMfDLuy7DQSZgZMOKpLcsXBpYaW6qFpVGUyG7IqQkAesX+kdunrPP7I6b+SDewKl7lkfVMH3MaP9qA/SdKOU3NwLv6lRXtPA/4KBUK7+mqbX+9yyDVDmWRA4Z/tPJhxPIxs7ordZIBwqAwVEqY5NJXqS8km8GZuzMSggcUaMlCQ+++PKRfDkgWjUKLOSks8APy1Ly+Rpqx/oBL1gJSYh/DHrJ0F7TYmTtsqryEfPDfNEE7RNMWyei5+9aAfKRqHv0KsATpqUTA16jM6+FQaD1MFPSXDeXEOQCEfJ/jYdJpTX6D6r5avvKerw+AmpYW3bzNvmr4bt1lDgOXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FJGi0jDrsdDVUTUEv+r1p4LlW3UjXyOGb5PcbVcR0pw=;
 b=Mfdpqcw4g84HyleYO1kbNDWAUqR9yZ5Hsh5av/IlqjIla9zgCzyIQTCBqwpBeK2PnM8iXfnKG1GaEM8XfEpqZGSuBnvbYDRBdFtQGXXGzMMwFsfcxKOdqOSc48SPtuGJhfGILtMoidVGNfGVF0esK6vutKzV0qZxDvjmzqVTSU/4YUwXwCPC/7G2OtIc03FelAzk5ElWzBf6pJ7KsvqLs/HYArMUZIvlypeVzvJR0tmtTGxiK0hf5UaLDrqEdMv/CQnJ10K2xi+8SiSrEz7DFE8DiEd8MF1Pp6q2LrO4zltMjicSq7fDqvv+rXIGx3RANco+ZmlxKW22IGGm48f7iw==
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Peng
 Fan <peng.fan@nxp.com>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Topic: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Index: AQHYdDjfc3Z/1SeAUE+eDUzXC1OEPa03j+IAgAK05QA=
Date: Wed, 1 Jun 2022 09:04:35 +0000
Message-ID: <20220601090434.GA3644565@EPUAKYIW015D>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <da899c6a-a9ec-fa25-75ef-6f375dfd422a@xen.org>
In-Reply-To: <da899c6a-a9ec-fa25-75ef-6f375dfd422a@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1f22742d-ea74-421c-b8a1-08da43adc017
x-ms-traffictypediagnostic: DB7PR03MB4970:EE_
x-ld-processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
x-microsoft-antispam-prvs: 
 <DB7PR03MB49701D92F5230210AB01A59FE3DF9@DB7PR03MB4970.eurprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <398587978E43FF4AA8E68A6215483E7E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On Mon, May 30, 2022 at 04:44:36PM +0100, Julien Grall wrote:
Hi Julien,

> (+ Stefano)
>
> On 30/05/2022 16:21, Oleksii Moisieiev wrote:
> > Hello,
>
> Hi Oleksii,
>
> > I'm getting a permission fault from the SMMU when trying to init the VPU encoder/decoder
> > in Dom0 on an IMX8QM board:
> > (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408, iova=0x86000a60, fsynr=0x1c0062, cb=0
> > This error appears when the vpu_encoder/decoder driver tries to memcpy the firmware image to
> > address 0x86000000, which is defined in the reserved-memory node of the xen device-tree
> > as the encoder_boot/decoder_boot region.
>
> It is not clear to me who is executing the memcpy(). Is it the device or
> your domain? If the former, where was the instruction fetched from?
>
> The reason I am asking is that, from what you wrote, memcpy() will write to
> 0x86000000. So the write should not result in an instruction abort. Only an
> instruction fetch would lead to such an abort.

My configuration is the following:
In Dom0 I have the vpu_decoder, operated by the vpu_malone driver.
During initialization, in vpu_firmware_download(), it requests the
firmware and puts it into the decoder_boot memory using memcpy(), then
waits for an interrupt from the device. It looks like the decoder then
tries to execute something from this memory.

>
> >
> > I'm using xen from the branch xen-project/staging-4.16 + imx related patches, which were
> > taken from https://source.codeaurora.org/external/imx/imx-xen.
> >
> > After some investigation I found that this issue was fixed by Peng Fan in
> > commit 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (hash from codeaurora), but only for
> > the guest domains.
> > It introduces a new p2m type, p2m_mmio_direct_nc_x, which differs from
> > p2m_mmio_direct_nc by having XN = 0. This type is set on the reserved memory region in
> > the map_mmio_regions function.
> >
> > I was able to fix the issue in Dom0 by setting the p2m_mmio_direct_nc_x type for the
> > reserved memory in map_regions_p2mt, which is used to map memory during Dom0 creation.
> > The patch can be found below.
> >
> > Based on initial discussions on the IRC channel - the XN bit did the trick because it looks
> > like the vpu decoder is executing some code from this memory.
>
> It was a surprise to me that a device could also execute from memory. From the
> SMMU spec, this looks like a legitimate thing. Before relaxing the type, I would like
> to confirm this is what's happening in your case.
>
> [...]
>
> > ---
> > arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory
> > regions
> >
> > This is an enhancement of 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> > That patch introduces the p2m_mmio_direct_nc_x p2m type, which sets
> > e->p2m.xn = 0 for reserved-memory such as the vpu encoder/decoder.
> >
> > Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
> > same way it is done in map_mmio_regions. This change is for the case
> > when the vpu encoder/decoder works in Dom0 and is not passed through to
> > the guest domains.
> >
> > Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> > ---
> >   xen/arch/arm/p2m.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> >
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index e9568dab88..bb1f681b71 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
> >                        mfn_t mfn,
> >                        p2m_type_t p2mt)
> >   {
> > +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> > +        (((long)gfn_x(gfn) + nr) <=
> > +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
>
> I am afraid I don't understand what this check is for. In a normal setup, we
> don't know where the reserved regions are mapped. Only the caller may know
> that.
>
> For dom0, this decision could be taken in map_range_to_domain(). For the
> domU, we would need to let the toolstack choose the memory attribute.
> Stefano attempted to do that a few years ago (see [1]). Maybe we should
> revive it?
>
> > +    {
> > +        p2m_remove_mapping(d, gfn, nr, mfn);
> > +        return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_nc_x);
> > +    }
> >       return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
> >   }
>
> Cheers,
>
> [1] https://lore.kernel.org/xen-devel/alpine.DEB.2.10.1902261501020.20689@sstabellini-ThinkPad-X260/
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:06:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 09:06:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340346.565359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKJ1-0000io-Ab; Wed, 01 Jun 2022 09:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340346.565359; Wed, 01 Jun 2022 09:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKJ1-0000ih-7X; Wed, 01 Jun 2022 09:06:19 +0000
Received: by outflank-mailman (input) for mailman id 340346;
 Wed, 01 Jun 2022 09:06:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s3TG=WI=citrix.com=prvs=1441e74d4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwKIz-0000iX-Dy
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 09:06:17 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 161d166f-e18a-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 11:06:15 +0200 (CEST)
Received: from mail-bn8nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 Jun 2022 05:06:11 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by DM6PR03MB5178.namprd03.prod.outlook.com (2603:10b6:5:240::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13; Wed, 1 Jun
 2022 09:06:08 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.013; Wed, 1 Jun 2022
 09:06:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 161d166f-e18a-11ec-837f-e5687231ffcc
Date: Wed, 1 Jun 2022 11:06:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>,
	"scott.davis@starlab.io" <scott.davis@starlab.io>,
	"christopher.clark@starlab.io" <christopher.clark@starlab.io>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
Message-ID: <Ypcr/N/0FpxepyTc@Air-de-Roger>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <2F13F24D-0A55-4BC3-9AC6-606C7E1626E8@arm.com>
 <4ebbb465-00ec-f4ba-8555-711cd76517ed@apertussolutions.com>
 <YpYdqglsWIlsFsaB@Air-de-Roger>
 <8F3BD9BB-EC62-4721-9BD0-3E4CC1E75A22@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8F3BD9BB-EC62-4721-9BD0-3E4CC1E75A22@citrix.com>
MIME-Version: 1.0

On Wed, Jun 01, 2022 at 07:40:12AM +0000, George Dunlap wrote:
> 
> 
> > On 31 May 2022, at 14:52, Roger Pau Monne <roger.pau@citrix.com> wrote:
> > 
> > On Tue, May 31, 2022 at 06:45:52AM -0400, Daniel P. Smith wrote:
> >> On 5/31/22 05:07, Bertrand Marquis wrote:
> >>> Hi Daniel,
> >> 
> >> Greetings Bertrand.
> >> 
> >>>> On 31 May 2022, at 03:41, Daniel P. Smith <dpsmith@apertussolutions.com> wrote:
> >>>> 
> >>>> For x86, the number of allowable multiboot modules varies between the different
> >>>> entry points: non-EFI boot, PVH boot, and EFI boot. On both Arm and
> >>>> x86 this value is fixed, based on generalized assumptions. With
> >>>> hyperlaunch for x86 and dom0less on Arm, the use of static sizes results in large
> >>>> allocations compiled into the hypervisor that will go unused in many use cases.
> >>>> 
> >>>> This commit introduces a Kconfig variable that is set with sane defaults based
> >>>> on the configuration selection. This variable is in turn used as the array size
> >>>> for the cases where a statically allocated array of boot modules is declared.
> >>>> 
> >>>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> >>>> ---
> >>>> xen/arch/Kconfig | 12 ++++++++++++
> >>>> xen/arch/arm/include/asm/setup.h | 5 +++--
> >>>> xen/arch/x86/efi/efi-boot.h | 2 +-
> >>>> xen/arch/x86/guest/xen/pvh-boot.c | 2 +-
> >>>> xen/arch/x86/setup.c | 4 ++--
> >>>> 5 files changed, 19 insertions(+), 6 deletions(-)
> >>>> 
> >>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
> >>>> index f16eb0df43..57b14e22c9 100644
> >>>> --- a/xen/arch/Kconfig
> >>>> +++ b/xen/arch/Kconfig
> >>>> @@ -17,3 +17,15 @@ config NR_CPUS
> >>>> 	 For CPU cores which support Simultaneous Multi-Threading or similar
> >>>> 	 technologies, this the number of logical threads which Xen will
> >>>> 	 support.
> >>>> +
> >>>> +config NR_BOOTMODS
> >>>> +	int "Maximum number of boot modules that a loader can pass"
> >>>> +	range 1 64
> >>>> +	default "8" if X86
> >>>> +	default "32" if ARM
> >>>> +	help
> >>>> +	 Controls the build-time size of various arrays allocated for
> >>>> +	 parsing the boot modules passed by a loader when starting Xen.
> >>>> +
> >>>> +	 This is of particular interest when using Xen's hypervisor domain
> >>>> +	 capabilities such as dom0less.
> >>>> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> >>>> index 2bb01ecfa8..312a3e4209 100644
> >>>> --- a/xen/arch/arm/include/asm/setup.h
> >>>> +++ b/xen/arch/arm/include/asm/setup.h
> >>>> @@ -10,7 +10,8 @@
> >>>> 
> >>>> #define NR_MEM_BANKS 256
> >>>> 
> >>>> -#define MAX_MODULES 32 /* Current maximum useful modules */
> >>>> +/* Current maximum useful modules */
> >>>> +#define MAX_MODULES CONFIG_NR_BOOTMODS
> >>>> 
> >>>> typedef enum {
> >>>> BOOTMOD_XEN,
> >>>> @@ -38,7 +39,7 @@ struct meminfo {
> >>>> * The domU flag is set for kernels and ramdisks of "xen,domain" nodes.
> >>>> * The purpose of the domU flag is to avoid getting confused in
> >>>> * kernel_probe, where we try to guess which is the dom0 kernel and
> >>>> - * initrd to be compatible with all versions of the multiboot spec.
> >>>> + * initrd to be compatible with all versions of the multiboot spec.
> >>> 
> >>> This seems to be a spurious change.
> >> 
> >> I have been trying to clean up trailing white space when I see it
> >> nearby. I can drop this one if that is desired.
> > 
> > IMO it's best if such white space removal is only done when already
> > modifying the line, or else it makes it harder to track changes when
> > using `git blame` for example (not likely in this case since it's a
> > multi line comment).
> 
> The downside of this is that you can’t use the “automatically remove trailing whitespace on save” features of some editors.
> 
> Without such automation, I introduce loads of trailing whitespace.  With such automation, I end up removing random trailing whitespace as I happen to touch files.  I’ve always done this by just adding “While here, remove some trailing whitespace” to the commit message, and there haven’t been any complaints.

FWIW, I have an editor feature that highlights trailing whitespace,
but doesn't remove it.

As I said, I find it cumbersome to have to jump through extra hoops when
using `git blame` or similar, just because of unrelated cleanups.

I don't think it's good practice to wholesale remove all trailing
whitespace from a file as part of an unrelated change.  If people do
think this is fine, I'm OK with it, but it's not my preference.  I just
raised this because such changes make it harder to use `git blame`
(you have to remember to pass -w, and that won't help people using the
web interface, for example).
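A quick illustration with a hypothetical throwaway repo: a whitespace-only change takes over the blame by default, and `-w` gives the original author back:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'int x = 1;\n' > f.c
git add f.c
git -c user.name=Alice -c user.email=alice@example.com commit -qm 'add f.c'
printf 'int x = 1; \n' > f.c   # trailing-whitespace-only change
git -c user.name=Bob -c user.email=bob@example.com commit -qam 'whitespace churn'
git blame f.c | head -n1       # blamed on Bob (the cleanup commit)
git blame -w f.c | head -n1    # -w ignores whitespace: blamed on Alice
```

So the history is recoverable, but only for people who know to ask for it.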

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:08:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 09:08:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340354.565370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKLa-0001Nb-OR; Wed, 01 Jun 2022 09:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340354.565370; Wed, 01 Jun 2022 09:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKLa-0001NU-LW; Wed, 01 Jun 2022 09:08:58 +0000
Received: by outflank-mailman (input) for mailman id 340354;
 Wed, 01 Jun 2022 09:08:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s3TG=WI=citrix.com=prvs=1441e74d4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwKLY-0001NO-Q8
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 09:08:56 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75dc1f85-e18a-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 11:08:55 +0200 (CEST)
Received: from mail-dm6nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 Jun 2022 05:08:43 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by BLAPR03MB5393.namprd03.prod.outlook.com (2603:10b6:208:291::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 09:08:40 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.013; Wed, 1 Jun 2022
 09:08:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75dc1f85-e18a-11ec-837f-e5687231ffcc
Date: Wed, 1 Jun 2022 11:08:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
Message-ID: <Ypcsk/5eaUHkFJL2@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <67fd1ed1-4a62-c014-51c0-f547e33fb427@suse.com>
 <YpY71HuPOP59Do+Y@Air-de-Roger>
 <0146fc48-096a-1c5e-406f-bd7b471fc1fa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0146fc48-096a-1c5e-406f-bd7b471fc1fa@suse.com>
MIME-Version: 1.0

On Wed, Jun 01, 2022 at 09:30:07AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:01, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:12:48PM +0200, Jan Beulich wrote:
> >> @@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
> >>          if ( !perms )
> >>              rc = 0;
> >>          else if ( paging_mode_translate(d) )
> >> +        {
> >>              rc = p2m_add_identity_entry(d, pfn,
> >>                                          perms & IOMMUF_writable ? p2m_access_rw
> >>                                                                  : p2m_access_r,
> >>                                          0);
> >> +            if ( rc )
> >> +                printk(XENLOG_WARNING
> >> +                       "%pd: identity mapping of %lx failed: %d\n",
> >> +                       d, pfn, rc);
> >> +        }
> >> +        else if ( pfn != start + count || perms != start_perms )
> >> +        {
> >> +        commit:
> >> +            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
> >> +                           &flush_flags);
> >> +            if ( rc )
> >> +                printk(XENLOG_WARNING
> >> +                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
> >> +                       d, pfn, pfn + count, rc);
> >> +            SWAP(start, pfn);
> >> +            start_perms = perms;
> >> +            count = 1;
> >> +        }
> >>          else
> >> -            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
> >> -                           perms, &flush_flags);
> >> +        {
> >> +            ++count;
> >> +            rc = 0;
> >> +        }
> >>  
> >> -        if ( rc )
> >> -            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
> >> -                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
> >>  
> >> -        if (!(i & 0xfffff))
> >> +        if ( !(++i & 0xfffff) )
> >>              process_pending_softirqs();
> >> +
> >> +        if ( i == top && count )
> > 
> > Nit: do you really need to check for count != 0? AFAICT this is only
> > possible in the first iteration.
> 
> Yes, to avoid taking the PV path for PVH on the last iteration (count
> remains zero for PVH throughout the entire loop).

Oh, I see, that chunk is shared by both PV and PVH.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:13:22 2022
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: Peng Fan <peng.fan@nxp.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Topic: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Index: AQHYdDjfc3Z/1SeAUE+eDUzXC1OEPa06Mh/QgAAVEIA=
Date: Wed, 1 Jun 2022 09:13:12 +0000
Message-ID: <20220601091311.GA3658954@EPUAKYIW015D>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <DU0PR04MB94175823AAA60802503F802588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
In-Reply-To: 
 <DU0PR04MB94175823AAA60802503F802588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <78295EC09488B94EBF7C5D5A190555F3@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

On Wed, Jun 01, 2022 at 07:59:23AM +0000, Peng Fan wrote:
> > Subject: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
> >
> > Hello,
> >
> > I'm getting a permission fault from the SMMU when trying to init the
> > VPU_Encoder/Decoder in Dom0 on an IMX8QM board:
> > (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408,
> > iova=0x86000a60, fsynr=0x1c0062, cb=0
> > This error appears when the vpu_encoder/decoder tries to memcpy the
> > firmware image to address 0x86000000, which is defined in a reserved-memory
> > node in the xen device-tree as the encoder_boot/decoder_boot region.
> >
> > I'm using xen from branch xen-project/staging-4.16 + imx related patches,
> > which were taken from
> > https://source.codeaurora.org/external/imx/imx-xen.
> >
> > After some investigation I found that this issue was fixed by Peng Fan in
> > commit: 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (hash from
> > codeaurora), but only for the Guest domains.
> > It introduces a new p2m_type, p2m_mmio_direct_nc_x, which differs from
> > p2m_mmio_direct_nc by XN = 0. This type is set for the reserved memory
> > region in the map_mmio_regions function.
> >
> > I was able to fix the issue in Dom0 by setting the p2m_mmio_direct_nc_x
> > type for the reserved memory in map_regions_p2mt, which is used to map
> > memory during Dom0 creation.
> > The patch can be found below.
> >
> > Based on initial discussions on the IRC channel, the XN bit did the trick
> > because it looks like the vpu decoder is executing some code from this
> > memory.
> >
> > The purpose of this email is to discuss this issue and hopefully produce
> > a generic solution for it.
> >
> > Best regards,
> > Oleksii.
> >
> > ---
> > arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory regions
> >
> > This is an enhancement of
> > 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> > That patch introduces the p2m_mmio_direct_nc_x p2m type, which sets
> > e->p2m.xn = 0 for reserved-memory such as the vpu encoder/decoder.
> >
> > Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
> > same way it is done in map_mmio_regions. This change is for the case when
> > the vpu encoder/decoder works in Dom0 and is not passed-through to the
> > Guest Domains.
>
> For Dom0, there is no SMMU, so there is no need for x. Just nc is enough.
>
> Regards,
> Peng.

I would say that the SMMU is not necessary for Dom0 because it is mapped
1:1. But using a device behind the SMMU in Dom0 is still a valid case, for
example to protect the device from reaching addresses assigned to another
domain, since Dom0 is trusted.

>
> >
> > Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> > ---
> >  xen/arch/arm/p2m.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index e9568dab88..bb1f681b71 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
> >                       mfn_t mfn,
> >                       p2m_type_t p2mt)
> >  {
> > +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> > +        (((long)gfn_x(gfn) + nr) <=
> > +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
> > +    {
> > +        p2m_remove_mapping(d, gfn, nr, mfn);
> > +        return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_nc_x);
> > +    }
> >      return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
> >  }
> >
> > --
> > 2.27.0


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:24:35 2022
Date: Wed, 1 Jun 2022 11:24:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
Message-ID: <YpcwOCBEzI+qvTga@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <614413d8-5043-f0e3-929b-f161fa89bb35@suse.com>
 <YpZBjVxRdJOzJzZx@Air-de-Roger>
 <372325ed-18b6-9329-901d-6596ce6e497d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <372325ed-18b6-9329-901d-6596ce6e497d@suse.com>
MIME-Version: 1.0

On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:25, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
> >> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
> >>      return pg;
> >>  }
> >>  
> >> +/*
> >> + * Intermediate page tables which get replaced by large pages may only be
> >> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> >> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> >> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> >> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> >> + * requiring any locking.)
> >> + */
> >> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> >> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> >> +
> >> +static void free_queued_pgtables(void *arg)
> >> +{
> >> +    struct page_list_head *list = arg;
> >> +    struct page_info *pg;
> >> +    unsigned int done = 0;
> >> +
> >> +    while ( (pg = page_list_remove_head(list)) )
> >> +    {
> >> +        free_domheap_page(pg);
> >> +
> >> +        /* Granularity of checking somewhat arbitrary. */
> >> +        if ( !(++done & 0x1ff) )
> >> +             process_pending_softirqs();
> > 
> > Hm, I'm wondering whether we really want to process pending softirqs
> > here.
> > 
> > Such processing will prevent the watchdog from triggering, which we
> > likely want in production builds.  OTOH in debug builds we should make
> > sure that free_queued_pgtables() doesn't take longer than a watchdog
> > window, or else it's likely to cause issues to guests scheduled on
> > this same pCPU (and calling process_pending_softirqs() will just mask
> > it).
> 
> Doesn't this consideration apply to about every use of the function we
> already have in the code base?

Not really, at least when used by init code or by the debug key
handlers.  This use is IMO different from those, as it's a
guest-triggered path that we believe does require such processing.
Normally we would use continuations for such long-running
guest-triggered operations.

Thanks, Roger.
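The batching pattern in the quoted hunk can be modeled outside Xen as a
counter that yields every 512 iterations.  Below is a minimal standalone
sketch (not Xen code, and `count_yields` is a hypothetical name): the list
walk is reduced to a loop and `process_pending_softirqs()` to a yield
counter, so the granularity behavior the thread discusses can be checked in
isolation.

```c
/*
 * Standalone model (not Xen code) of the batching pattern in the quoted
 * hunk: walk a long list, and every 512 items give pending softirqs a
 * chance to run so the watchdog and timers are not starved.  The page
 * freeing is elided and process_pending_softirqs() becomes "yields++".
 */
static unsigned int count_yields(unsigned int total)
{
    unsigned int done = 0, yields = 0;

    while ( done < total )
    {
        /* free_domheap_page(pg) would go here in the real code. */

        /* Granularity of checking somewhat arbitrary: 0x1ff == 511. */
        if ( !(++done & 0x1ff) )
            yields++;   /* stands in for process_pending_softirqs() */
    }

    return yields;
}
```

With this model, a backlog of 2048 pages yields 4 times; whether 512 items
between yields stays under a watchdog window is exactly the open question
in the thread.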


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:28:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 09:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340381.565402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKeP-0005Gf-2B; Wed, 01 Jun 2022 09:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340381.565402; Wed, 01 Jun 2022 09:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKeO-0005GY-VS; Wed, 01 Jun 2022 09:28:24 +0000
Received: by outflank-mailman (input) for mailman id 340381;
 Wed, 01 Jun 2022 09:28:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RHfR=WI=nxp.com=peng.fan@srs-se1.protection.inumbo.net>)
 id 1nwKeN-0005GO-Es
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 09:28:23 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0628.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2dbca34c-e18d-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 11:28:21 +0200 (CEST)
Received: from DU0PR04MB9417.eurprd04.prod.outlook.com (2603:10a6:10:358::11)
 by VI1PR0401MB2656.eurprd04.prod.outlook.com (2603:10a6:800:56::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Wed, 1 Jun
 2022 09:28:18 +0000
Received: from DU0PR04MB9417.eurprd04.prod.outlook.com
 ([fe80::a892:e4a9:4769:13a5]) by DU0PR04MB9417.eurprd04.prod.outlook.com
 ([fe80::a892:e4a9:4769:13a5%9]) with mapi id 15.20.5293.019; Wed, 1 Jun 2022
 09:28:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2dbca34c-e18d-11ec-837f-e5687231ffcc
From: Peng Fan <peng.fan@nxp.com>
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: RE: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Topic: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Index: AQHYdDjfc3Z/1SeAUE+eDUzXC1OEPa06Mh/QgAAVEICAAANxYA==
Date: Wed, 1 Jun 2022 09:28:18 +0000
Message-ID:
 <DU0PR04MB9417A3436FB754A7F374C44888DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <DU0PR04MB94175823AAA60802503F802588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
 <20220601091311.GA3658954@EPUAKYIW015D>
In-Reply-To: <20220601091311.GA3658954@EPUAKYIW015D>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: nxp.com

> Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init
> vpu_decoder
>
> On Wed, Jun 01, 2022 at 07:59:23AM +0000, Peng Fan wrote:
> > > Subject: [Xen-devel] SMMU permission fault on Dom0 when init
> > > vpu_decoder
> > >
> > > Hello,
> > >
> > > I'm getting permission fault from SMMU when trying to init
> > > VPU_Encoder/Decoder in Dom0 on IMX8QM board:
> > > (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408,
> > > iova=0x86000a60, fsynr=0x1c0062, cb=0
> > > This error appears when
> > > vpu_encoder/decoder tries to memcpy the firmware image to address
> > > 0x86000000, which is defined in the reserved-memory node in the xen
> > > device-tree as the encoder_boot/decoder_boot region.
> > >
> > > I'm using xen from branch xen-project/staging-4.16 + imx related
> > > patches, which were taken from
> > > https://source.codeaurora.org/external/imx/imx-xen.
> > >
> > > After some investigation I found that this issue was fixed by Peng
> > > Fan in
> > > commit: 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (Hash from
> > > codeaurora), but only for the Guest domains.
> > > It introduces new p2m_type p2m_mmio_direct_nc_x, which differs from
> > > p2m_mmio_direct_nc by XN = 0. This type is set to the reserved
> > > memory region in map_mmio_regions function.
> > >
> > > I was able to fix the issue in Dom0 by setting the p2m_mmio_direct_nc_x type
> > > for the reserved memory in map_regions_p2mt, which is used to map
> > > memory during
> > > Dom0 creation.
> > > Patch can be found below.
> > >
> > > Based on initial discussions on the IRC channel, the XN bit did the
> > > trick, because it looks like the vpu decoder is executing some code
> > > from this memory.
> > >
> > > The purpose of this email is to discuss this issue and probably
> > > produce a generic solution for it.
> > >
> > > Best regards,
> > > Oleksii.
> > >
> > > ---
> > > arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory
> > > regions
> > >
> > > This is an enhancement of commit
> > > 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> > > That patch introduces the p2m_mmio_direct_nc_x p2m type, which sets
> > > e->p2m.xn = 0 for reserved-memory regions, such as the vpu encoder/decoder.
> > >
> > > Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
> > > same way it is done in map_mmio_regions. This change is for the case
> > > when the vpu encoder/decoder works in Dom0 and is not passed through
> > > to the guest domains.
> >
> > For Dom0, there is no SMMU, so no need for x. Just nc is enough.
> >
> > Regards,
> > Peng.
>
> I would say that the SMMU is not necessary for Dom0 because it's mapped 1:1.
> But using a device under the SMMU in Dom0 is still a valid case, for
> example to protect a device from reaching an address assigned to another
> domain, since Dom0 is trusted.

I mean the reason to use nc_x is that the VPU DomU is accessing DRAM through
the SMMU. It needs X because VPU firmware runs in DomU.

I don't see why X is needed for Dom0, unless you assign a SID to the VPU
and create an SMMU mapping for the VPU in Dom0.

Regards,
Peng.

>
> >
> > >
> > > Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> > > ---
> > >  xen/arch/arm/p2m.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index
> > > e9568dab88..bb1f681b71 100644
> > > --- a/xen/arch/arm/p2m.c
> > > +++ b/xen/arch/arm/p2m.c
> > > @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
> > >                       mfn_t mfn,
> > >                       p2m_type_t p2mt)
> > >  {
> > > +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> > > +        (((long)gfn_x(gfn) + nr) <=
> > > +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
> > > +    {
> > > +        p2m_remove_mapping(d, gfn, nr, mfn);
> > > +        return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_nc_x);
> > > +    }
> > >      return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
> > >  }
> > >
> > > --
> > > 2.27.0


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:31:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 09:31:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340391.565414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKhW-0006jc-L7; Wed, 01 Jun 2022 09:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340391.565414; Wed, 01 Jun 2022 09:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwKhW-0006jV-Hy; Wed, 01 Jun 2022 09:31:38 +0000
Received: by outflank-mailman (input) for mailman id 340391;
 Wed, 01 Jun 2022 09:31:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdo/=WI=epam.com=prvs=8151ed00d6=oleksii_moisieiev@srs-se1.protection.inumbo.net>)
 id 1nwKhV-0006jM-3p
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 09:31:37 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0888031-e18d-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 11:31:35 +0200 (CEST)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 2519F1fQ026935;
 Wed, 1 Jun 2022 09:31:33 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3gdqkvj8kt-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 01 Jun 2022 09:31:33 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com (2603:10a6:102:ea::23)
 by AM0PR03MB3652.eurprd03.prod.outlook.com (2603:10a6:208:44::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 1 Jun
 2022 09:31:29 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::31b5:dfd5:2d38:c0b2]) by PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::31b5:dfd5:2d38:c0b2%9]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 09:31:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0888031-e18d-11ec-837f-e5687231ffcc
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: Peng Fan <peng.fan@nxp.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Topic: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Thread-Index: AQHYdDjfc3Z/1SeAUE+eDUzXC1OEPa06Mh/QgAAVEICAAANxYIAAAasA
Date: Wed, 1 Jun 2022 09:31:29 +0000
Message-ID: <20220601093128.GA3667062@EPUAKYIW015D>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <DU0PR04MB94175823AAA60802503F802588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
 <20220601091311.GA3658954@EPUAKYIW015D>
 <DU0PR04MB9417A3436FB754A7F374C44888DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
In-Reply-To: 
 <DU0PR04MB9417A3436FB754A7F374C44888DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8F2EFFD60671124CA0F0F21C65B906A3@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PA4PR03MB7136.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: af626d61-291e-4118-e895-08da43b181d3
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Jun 2022 09:31:29.2142
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 4AM3ApIhywrvFcQU246DI2teLrgOCiJvsw7J0BFt5k1FJ76H9bD0ZdIrYPg9+YvU110B8O/o9LFiNB0Qt5cD6eBxufAlJtnmfEpM4FtbKkQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3652
X-Proofpoint-ORIG-GUID: 9GBITxillXq-OxgZ-bnnzNSY5LCySwvU
X-Proofpoint-GUID: 9GBITxillXq-OxgZ-bnnzNSY5LCySwvU
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514
 definitions=2022-06-01_03,2022-05-30_03,2022-02-23_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 spamscore=0 mlxscore=0 suspectscore=0 mlxlogscore=999 malwarescore=0
 bulkscore=0 phishscore=0 priorityscore=1501 clxscore=1015 impostorscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2204290000 definitions=main-2206010042

On Wed, Jun 01, 2022 at 09:28:18AM +0000, Peng Fan wrote:
> > Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init
> > vpu_decoder
> >
> > On Wed, Jun 01, 2022 at 07:59:23AM +0000, Peng Fan wrote:
> > > > Subject: [Xen-devel] SMMU permission fault on Dom0 when init
> > > > vpu_decoder
> > > >
> > > > Hello,
> > > >
> > > > I'm getting a permission fault from the SMMU when trying to init
> > > > the VPU encoder/decoder in Dom0 on an IMX8QM board:
> > > > (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408,
> > > > iova=0x86000a60, fsynr=0x1c0062, cb=0
> > > > This error appears when vpu_encoder/decoder tries to memcpy the
> > > > firmware image to address 0x86000000, which is defined in the
> > > > reserved-memory node of the Xen device tree as the
> > > > encoder_boot/decoder_boot region.
> > > >
> > > > I'm using xen from branch xen-project/staging-4.16 + imx related
> > > > patches, which were taken from
> > > > https://source.codeaurora.org/external/imx/imx-xen.
> > > >
> > > > After some investigation I found that this issue was fixed by Peng
> > > > Fan in
> > > > commit 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (hash from
> > > > codeaurora), but only for the guest domains.
> > > > It introduces a new p2m_type, p2m_mmio_direct_nc_x, which differs
> > > > from p2m_mmio_direct_nc by having XN = 0. This type is set on the
> > > > reserved memory region in the map_mmio_regions function.
> > > >
> > > > I was able to fix the issue in Dom0 by setting the
> > > > p2m_mmio_direct_nc_x type for the reserved memory in
> > > > map_regions_p2mt, which is used to map memory during Dom0 creation.
> > > > The patch can be found below.
> > > >
> > > > Based on initial discussions on the IRC channel, clearing the XN
> > > > bit did the trick because the VPU decoder appears to execute some
> > > > code from this memory.
> > > >
> > > > The purpose of this email is to discuss this issue and hopefully
> > > > produce a generic solution for it.
> > > >
> > > > Best regards,
> > > > Oleksii.
> > > >
> > > > ---
> > > > arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory
> > > > regions
> > > >
> > > > This is an enhancement of commit
> > > > 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> > > > That patch introduces the p2m_mmio_direct_nc_x p2m type, which sets
> > > > e->p2m.xn = 0 for reserved-memory regions such as the VPU
> > > > encoder/decoder.
> > > >
> > > > Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory
> > > > the same way map_mmio_regions does. This change is for the case
> > > > when the VPU encoder/decoder runs in Dom0 and is not passed through
> > > > to the guest domains.
> > >
> > > For Dom0, there is no SMMU, so no need for X. Just nc is enough.
> > >
> > > Regards,
> > > Peng.
> >
> > I would say that the SMMU is not necessary for Dom0 because it's
> > mapped 1:1.
> > But using a device behind the SMMU in Dom0 is still a valid case, for
> > example to protect the device from reaching addresses assigned to
> > another domain, since Dom0 is trusted.
>
> I mean the reason to use nc_x is that the VPU DomU is accessing DRAM
> through the SMMU.
> It needs X because there is VPU firmware running in the DomU.
>
> I don't see why X is needed for Dom0, unless you assign an SID to the
> VPU and create an SMMU mapping for the VPU in Dom0.
>
> Regards,
> Peng.

That is my case: I've created an SMMU mapping for the VPU in Dom0 and use
the vpu_encoder/decoder from Dom0.

>
> >
> > >
> > > >
> > > > Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> > > > ---
> > > >  xen/arch/arm/p2m.c | 7 +++++++
> > > >  1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > index e9568dab88..bb1f681b71 100644
> > > > --- a/xen/arch/arm/p2m.c
> > > > +++ b/xen/arch/arm/p2m.c
> > > > @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
> > > >                       mfn_t mfn,
> > > >                       p2m_type_t p2mt)
> > > >  {
> > > > +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> > > > +        (((long)gfn_x(gfn) + nr) <=
> > > > +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
> > > > +    {
> > > > +        p2m_remove_mapping(d, gfn, nr, mfn);
> > > > +        return p2m_insert_mapping(d, gfn, nr, mfn,
> > > > +                                  p2m_mmio_direct_nc_x);
> > > > +    }
> > > >      return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
> > > >  }
> > > >
> > > > --
> > > > 2.27.0


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:50:07 2022
From: Peng Fan <peng.fan@nxp.com>
To: Julien Grall <julien@xen.org>, Oleksii Moisieiev
	<Oleksii_Moisieiev@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <bertrand.marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: RE: [Xen-devel] SMMU permission fault on Dom0 when init vpu_decoder
Date: Wed, 1 Jun 2022 09:49:56 +0000
Message-ID:
 <DU0PR04MB941731169B3086156FF485F588DF9@DU0PR04MB9417.eurprd04.prod.outlook.com>
References: <20220530152102.GA883104@EPUAKYIW015D>
 <da899c6a-a9ec-fa25-75ef-6f375dfd422a@xen.org>
In-Reply-To: <da899c6a-a9ec-fa25-75ef-6f375dfd422a@xen.org>

> Subject: Re: [Xen-devel] SMMU permission fault on Dom0 when init
> vpu_decoder
>
> (+ Stefano)
>
> On 30/05/2022 16:21, Oleksii Moisieiev wrote:
> > Hello,
>
> Hi Oleksii,
>
> > I'm getting permission fault from SMMU when trying to init
> > VPU_Encoder/Decoder in Dom0 on IMX8QM board:
> > (XEN) smmu: /iommu@51400000: Unhandled context fault: fsr=0x408,
> > iova=0x86000a60, fsynr=0x1c0062, cb=0
> > This error appears when vpu_encoder/decoder tries to memcpy firmware
> > image to 0x86000000 address, which is defined in reserved-memory node
> > in xen device-tree as encoder_boot/decoder_boot region.
>
> It is not clear to me who is executing the memcpy(). Is it the device or
> your domain? If the former, where was the instruction fetched from?
>
> The reason I am asking is that, from what you wrote, memcpy() will write
> to 0x86000000, so the write should not result in an instruction abort.
> Only an instruction fetch would lead to such an abort.
>
> >
> > I'm using xen from branch xen-project/staging-4.16 + imx related
> > patches, which were taken from
> > https://source.codeaurora.org/external/imx/imx-xen.
> >
> > After some investigation I found that this issue was fixed by Peng Fan
> > in
> > commit: 46b3dd3718144ca6ac2c12a3b106e57fb7156554 (Hash from
> > codeaurora), but only for the Guest domains.
> > It introduces new p2m_type p2m_mmio_direct_nc_x, which differs from
> > p2m_mmio_direct_nc by XN = 0. This type is set to the reserved memory
> > region in map_mmio_regions function.
> >
> > I was able to fix issue in Dom0 by setting p2m_mmio_direct_nc_x type
> > for the reserved memory in map_regions_p2mt, which is used to map
> memory during Dom0 creation.
> > Patch can be found below.
> >
> > Based on initial discussions on IRC channel - XN bit did the trick
> > because looks like vpu decoder is executing some code from this memory.
>
> This was a surprise to me, that a device could also execute memory. From
> the SMMU spec, this looks like a legitimate thing. Before relaxing the
> type, I would like to confirm this is what's happening in your case.

It is not the device as such: the VPU can execute code, it has firmware.
Just like using the SMMU to isolate the M4 core - the M4 can execute code.

Regards,
Peng.

>
> [...]
>
> > ---
> > arm: Set p2m_type to p2m_mmio_direct_nc_x for reserved memory regions
> >
> > This is the enhancement of the
> > 46b3dd3718144ca6ac2c12a3b106e57fb7156554.
> > Those patch introduces p2m_mmio_direct_nc_x p2m type which sets the
> > e->p2m.xn = 0 for the reserved-memory, such as vpu encoder/decoder.
> >
> > Set p2m_mmio_direct_nc_x in map_regions_p2mt for reserved-memory the
> > same way it does in map_mmio_regions. This change is for the case when
> > vpu encoder/decoder works in Dom0 and not passed-through to the Guest
> > Domains.
> >
> > Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
> > ---
> >   xen/arch/arm/p2m.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> >
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index e9568dab88..bb1f681b71 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -1333,6 +1333,13 @@ int map_regions_p2mt(struct domain *d,
> >                        mfn_t mfn,
> >                        p2m_type_t p2mt)
> >   {
> > +    if (((long)gfn_x(gfn) >= (GUEST_RAM0_BASE >> PAGE_SHIFT)) &&
> > +        (((long)gfn_x(gfn) + nr) <=
> > +        ((GUEST_RAM0_BASE + GUEST_RAM0_SIZE) >> PAGE_SHIFT)))
>
> I am afraid I don't understand what this check is for. In a normal
> setup, we don't know where the reserved regions are mapped. Only the
> caller may know that.
>
> For dom0, this decision could be taken in map_range_to_domain(). For the
> domU, we would need to let the toolstack choose the memory attribute.
> Stefano attempted to do that a few years ago (see [1]). Maybe we should
> revive it?
>
> > +    {
> > +        p2m_remove_mapping(d, gfn, nr, mfn);
> > +        return p2m_insert_mapping(d, gfn, nr, mfn,
> > +                                  p2m_mmio_direct_nc_x);
> > +    }
> >       return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
> >   }
> >
>
> Cheers,
>
> [1]
> https://lore.kernel.org/xen-devel/alpine.DEB.2.10.1902261501020.20689@sstabellini-ThinkPad-X260/
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 09:57:43 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>, Tamas K Lengyel
	<tamas.k.lengyel@gmail.com>, "intel-xen@intel.com" <intel-xen@intel.com>,
	"daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Sergey Dyasli <sergey.dyasli@citrix.com>,
	Christopher Clark <christopher.w.clark@gmail.com>, Rich Persaud
	<persaur@gmail.com>, Kevin Pearson <kevin.pearson@ortmanconsulting.com>,
	Juergen Gross <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>,
	"Ji, John" <john.ji@intel.com>, "edgar.iglesias@xilinx.com"
	<edgar.iglesias@xilinx.com>, "robin.randhawa@arm.com"
	<robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>,
	Matt Spencer <Matt.Spencer@arm.com>, Stewart Hildebrand
	<Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Jeff Kubascik <Jeff.Kubascik@dornerworks.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, Doug Goldstein <cardoe@cardoe.com>,
	George Dunlap <George.Dunlap@citrix.com>, David Woodhouse
	<dwmw@amazon.co.uk>, Amit Shah <amit@infradead.org>,
	Varad Gautam <varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>,
	Robert Townley <rob.townley@gmail.com>, Bobby Eshleman
	<bobby.eshleman@gmail.com>, Corey Minyard <cminyard@mvista.com>,
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Ash Wilding <ash.j.wilding@gmail.com>,
	Rahul Singh <Rahul.Singh@arm.com>, Piotr Król <piotr.krol@3mdeb.com>,
	Brendan Kerrigan <brendank310@gmail.com>, "Thierry Laurion (Insurgo)"
	<insurgo@riseup.net>, Oleksandr Andrushchenko
	<oleksandr_andrushchenko@epam.com>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Deepthi <deepthi.m@ltts.com>, Scott Davis
	<scottwd@gmail.com>, Ben Boyd <ben@exotanium.io>, Anthony Perard
	<anthony.perard@citrix.com>, Michal Orzel <michal.orzel@arm.com>
Subject: MOVING COMMUNITY CALL Call for agenda items for 9 June Community Call
 @ 1500 UTC
Thread-Topic: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Thread-Index: AQHYdZ38qt6Ww/ZR/UOW6AbWsKoU5w==
Date: Wed, 1 Jun 2022 09:57:21 +0000
Message-ID: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 919bf152-7f26-4990-ebba-08da43b51f12
x-ms-traffictypediagnostic: BYAPR03MB3541:EE_
x-microsoft-antispam-prvs:
 <BYAPR03MB35412F78E1387062B688BDA799DF9@BYAPR03MB3541.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/signed;
	boundary="Apple-Mail=_C3089C04-88FB-4DAB-A9D6-74807E8441F7";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 919bf152-7f26-4990-ebba-08da43b51f12
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Jun 2022 09:57:21.5040
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: QRePYN27cRE6VU1+9lakfmnGNBqbu4efONb9Oe9LX7XEPs+UJASRj8O574+EaCt10GDiQIbmMk4S/FJ9nZL4jqfoWRTMPvRADjESFuk+wG0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3541

--Apple-Mail=_C3089C04-88FB-4DAB-A9D6-74807E8441F7
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Hi all,

Sorry for sending this out so late; my calendar was screwed up.  Due to
it being a public holiday in the UK, I propose moving the monthly
community call to NEXT THURSDAY, 9 June, same time.

The proposed agenda is in
https://cryptpad.fr/pad/#/2/pad/edit/URCDNNBOVKsEK2grXf2l954a/ and you
can edit it to add items.  Alternatively, you can reply to this mail
directly.

Agenda items are appreciated a few days before the call: please put your
name beside any items you add if you edit the document.

Note the following administrative conventions for the call:
* Unless otherwise agreed in the previous meeting, the call is on the
1st Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a
provisional agenda

* To allow time to switch between meetings, we'll plan on starting the
agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate
time to sort out technical difficulties &c

* If you want to be CC'ed, please add yourself to (or remove yourself
from) the sign-up sheet at
https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George



== Dial-in Information ==
## Meeting time
15:00 - 16:00 UTC
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2022&month=06&day=9&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall

--Apple-Mail=_C3089C04-88FB-4DAB-A9D6-74807E8441F7
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKXOAAACgkQshXHp8eE
G+1h4QgAv6//RWfhmVRGJzCWeJA8deQ+FPGXjkmKVh8dnUZYPrCPTJE9oNDnnzd/
FT1FWShaDhp2TtdkJZKjxHZWnZJFuQ8JzCj83JUqXPwRVULDo6atrx3GlpBvsN6C
CWTVvvhYT2khmBgM5z4pVqrE2S+IBFy5TRL2nvXvTH75cjmDmlw5ysp2SIXLyLli
ANixAhM2qJT9Gj6kZS+12AgSbd6fC4bB2ZQvFPVVVqUXeF8QkM9Fi8jjWvZ3ESiY
H8/kZR4wYVKJsMQR8TtXe5JyIsIeDpXF825uOYZ3sD3TpUvay79aKj1IcI8loeHc
Bgr/aroyeHrddH6yhBSoE+pGleVYSQ==
=VO+B
-----END PGP SIGNATURE-----

--Apple-Mail=_C3089C04-88FB-4DAB-A9D6-74807E8441F7--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 10:32:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 10:32:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340417.565447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwLdx-0006ha-01; Wed, 01 Jun 2022 10:32:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340417.565447; Wed, 01 Jun 2022 10:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwLdw-0006hT-ST; Wed, 01 Jun 2022 10:32:00 +0000
Received: by outflank-mailman (input) for mailman id 340417;
 Wed, 01 Jun 2022 10:31:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwLdv-0006hN-2l
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 10:31:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e6bb8c8-e196-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 12:31:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e6bb8c8-e196-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654079516;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=TSX9ZJegCdU06HW+9bJrKO5UO1rWK/XvE7f5oVsFUSE=;
  b=YeoJzS+D4LwuOB1mt90ltRLL8j7GdBvZhfkwFCxuGUSP1i/E6zzmH72j
   p8OMdz3AGB1PP4I50CyBH5/0PO2m5X71vkei+t+uVy3c3QUrUTlqzvfh2
   Wa8xoY0usW7+cAPaexaCSwGgCapnsvZn4oKb1tbapZH1b5/iKMDwHV6rJ
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 71960707
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,268,1647316800"; 
   d="scan'208";a="71960707"
Date: Wed, 1 Jun 2022 11:31:38 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <luca.fancellu@arm.com>
CC: <xen-devel@lists.xenproject.org>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/libxl: optimize domain creation skipping domain
 cpupool move
Message-ID: <YpdACp5Wra1yWjow@perard.uk.xensource.com>
References: <20220526081230.3194-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20220526081230.3194-1-luca.fancellu@arm.com>

On Thu, May 26, 2022 at 09:12:30AM +0100, Luca Fancellu wrote:
> Commit 92ea9c54fc81 ("arm/dom0less: assign dom0less guests to cpupools")
> introduced a way to start a domain directly on a certain cpupool,
> adding a "cpupool_id" member to struct xen_domctl_createdomain.
> 
> This was done to be able to start dom0less guests in different pools than
> cpupool0, but the toolstack can benefit from it because it can now use
> the struct member directly instead of creating the guest in cpupool0
> and then moving it to the target cpupool.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 11:29:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 11:29:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340426.565458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwMXV-0004EZ-B3; Wed, 01 Jun 2022 11:29:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340426.565458; Wed, 01 Jun 2022 11:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwMXV-0004ES-7Y; Wed, 01 Jun 2022 11:29:25 +0000
Received: by outflank-mailman (input) for mailman id 340426;
 Wed, 01 Jun 2022 11:29:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s3TG=WI=citrix.com=prvs=1441e74d4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwMXT-0004EM-ID
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 11:29:23 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 13ed77f3-e19e-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 13:29:21 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 Jun 2022 07:29:16 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SN6PR03MB4205.namprd03.prod.outlook.com (2603:10b6:805:c7::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13; Wed, 1 Jun
 2022 11:29:11 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.013; Wed, 1 Jun 2022
 11:29:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13ed77f3-e19e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654082961;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=f1CnLgqqnK0eKYzt2XQLaDWrTRLbcJ0tquj82uVdlxA=;
  b=L2IOtlVXu0aSCfOZEzFThyCLNO7C0P7d3qP7agCLxv5PuXljFQ8cNOdA
   uV2kGbUuksdvQ0bMtyBULWJBFd5Bkc0l5zB7Cf6orQSXdasXA6kwcMBn8
   LOgcBVdJrWu2rj5w2apkKDeslk8BBF4qsF6htGbXjCCjVVAeSIpSCVJaL
   E=;
X-IronPort-RemoteIP: 104.47.70.106
X-IronPort-MID: 72460125
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,268,1647316800"; 
   d="scan'208";a="72460125"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Or7GHXPapmwFzcUpzWP29USdbCZ4ReV2jiwcYoXAnVM4Ju9um25ZGtgiQkk8YrNVC+qFYlad6CZAIxpuRPXxfrm+lEbUxyN/oU13wUF9GeHGQ4CXm9qhERgfQK8+NCXW8C8dY3PFkCSy5At5+3TCnV684F4ya2pGvVCz6CzS/DoEx9Ih2a1XbqMOiIp6EBAmi30FqvliJhHos/j3FIldF77tziIthu6Ble62ZD7jqNQfYq6oveFIO5coAFgneT63+oUwOTW+E3tzzzmlzC2Bzs7X2BdxeI5ytwM/U9woQbRJynIvmY8Bus3G1MbuWLIbweH/4Kt5ax/ChO/LHJotig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=apl+oZUNBUkv+oZsSLPe55QO3VhhU92cOwCPBQ+mZ3c=;
 b=Sj5/7jP2x6EcEaKeMTYLK2DGw/DPk+2Wljzu9R/U0dL9hsW8fKrW+21J0sox0yITpV1vnB+Pt10bwqHiJ/YJoWu9nTpPPBq/9iYKmod4/GspFP3J+RPbvfQgAof+Q2EwXN+JnDnJcBGHwjqMXIuD8oLgC3X9foy7+DcnyVPHcmgElOqofspQiaCJuRCjyPCbWOfhoSwww32AF0Meuki5ZcMq5fpqYeLLJm77MoWpqjJwRIRK8LVSUevopGz0RpeFyQBXAE+M66PAILDt5/jRUI8GJ6W4WBKcsfmMTJNLr0k4N+U72Xp2YbWjNzZVlkcPKcqbD2SRatQhxByAcEGllA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=apl+oZUNBUkv+oZsSLPe55QO3VhhU92cOwCPBQ+mZ3c=;
 b=J84O+RPnpczP/9ghT1XlryY6X422IlB3Da4vXevG0qjsXxQpkLLPWJeC0tWu1D/g1LsFztupOajE8+CSKWQakJVufFXMMvpuVYri4oDNw5hpi+as7SEThwjeXWjzB0ToZqgxlNdtjLmNclQwc6qzraK16UYYfPGD17X7XrQJ7RA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 1 Jun 2022 13:29:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 07/15] x86: introduce helper for recording degree of
 contiguity in page tables
Message-ID: <YpdNg5fgAncfSeTK@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1fec512a-8c7b-69b5-40bf-88b42e9ecb7d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1fec512a-8c7b-69b5-40bf-88b42e9ecb7d@suse.com>
X-ClientProxiedBy: LO4P265CA0114.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c3::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 899559bc-c72e-47b6-d9f7-08da43c1f33c
X-MS-TrafficTypeDiagnostic: SN6PR03MB4205:EE_
X-Microsoft-Antispam-PRVS:
	<SN6PR03MB4205219A7BF7CAE4D08788118FDF9@SN6PR03MB4205.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 899559bc-c72e-47b6-d9f7-08da43c1f33c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 11:29:11.6353
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AOo6sL4DOOgdil15GRjIGg6c8yVJpqIhSt0qtJzkOj1BE/RQBZkwubVaMGs0R2JvF7RFi7dxn1AYP2uFJiHurg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4205

On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
> This is a re-usable helper (kind of a template) which gets introduced
> without users so that the individual subsequent patches introducing such
> users can get committed independently of one another.
> 
> See the comment at the top of the new file. To demonstrate the effect,
> if a page table had just 16 entries, this would be the set of markers
> for a page table with fully contiguous mappings:
> 
> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
> 
> "Contiguous" here means not only present entries with successively
> increasing MFNs, each one suitably aligned for its slot, but also a
> respective number of all non-present entries.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> @Roger: I've retained your R-b, but I was on the edge of dropping it.

Sure, that's fine.

> ---
> v5: Bail early from step 1 if possible. Arrange for consumers who are
>     just after CONTIG_{LEVEL_SHIFT,NR}. Extend comment.
> v3: Rename function and header. Introduce IS_CONTIG().
> v2: New.
> 
> --- /dev/null
> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
> @@ -0,0 +1,110 @@
> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
> +#define __ASM_X86_PT_CONTIG_MARKERS_H
> +
> +/*
> + * Short of having function templates in C, the function defined below is
> + * intended to be used by multiple parties interested in recording the
> + * degree of contiguity in mappings by a single page table.
> + *
> + * Scheme: Every entry records the order of contiguous successive entries,
> + * up to the maximum order covered by that entry (which is the number of
> + * clear low bits in its index, with entry 0 being the exception using
> + * the base-2 logarithm of the number of entries in a single page table).
> + * While a few entries need touching upon update, knowing whether the
> + * table is fully contiguous (and can hence be replaced by a higher level
> + * leaf entry) is then possible by simply looking at entry 0's marker.
> + *
> + * Prereqs:
> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
> + *   contiguous bits (ignored by hardware), before including this file (or
> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
> + * - page tables to be passed to the helper need to be initialized with
> + *   correct markers,
> + * - not-present entries need to be entirely clear except for the marker.
> + */
> +
> +/* This is the same for all anticipated users, so doesn't need passing in. */
> +#define CONTIG_LEVEL_SHIFT 9
> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
> +
> +#ifdef CONTIG_MASK
> +
> +#include <xen/bitops.h>
> +#include <xen/lib.h>
> +#include <xen/page-size.h>
> +
> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
> +#define SET_MARKER(e, m) \
> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
> +
> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
> +    ((kind) == PTE_kind_leaf \
> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
> +     : !((pt)[i] & ~CONTIG_MASK))
> +
> +enum PTE_kind {
> +    PTE_kind_null,
> +    PTE_kind_leaf,
> +    PTE_kind_table,
> +};
> +
> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
> +                                     unsigned int level, enum PTE_kind kind)
> +{
> +    unsigned int b, i = idx;
> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
> +
> +    ASSERT(idx < CONTIG_NR);
> +    ASSERT(!(pt[idx] & CONTIG_MASK));
> +
> +    /* Step 1: Reduce markers in lower numbered entries. */
> +    while ( i )
> +    {
> +        b = find_first_set_bit(i);
> +        i &= ~(1U << b);
> +        if ( GET_MARKER(pt[i]) <= b )
> +            break;
> +        SET_MARKER(pt[i], b);
> +    }
> +
> +    /* An intermediate table is never contiguous with anything. */
> +    if ( kind == PTE_kind_table )
> +        return false;
> +
> +    /*
> +     * Present entries need in-sync index and address to be a candidate
> +     * for being contiguous: What we're after is whether ultimately the
> +     * intermediate table can be replaced by a superpage.
> +     */
> +    if ( kind != PTE_kind_null &&
> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
> +        return false;
> +
> +    /* Step 2: Check higher numbered entries for contiguity. */
> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
> +    {
> +        i = idx | (1U << b);
> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> +            break;
> +    }
> +
> +    /* Step 3: Update markers in this and lower numbered entries. */
> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
> +    {
> +        i = idx ^ (1U << b);
> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> +            break;
> +        idx &= ~(1U << b);
> +    }
> +
> +    return b == CONTIG_LEVEL_SHIFT;
> +}
> +
> +#undef IS_CONTIG
> +#undef SET_MARKER
> +#undef GET_MARKER
> +#undef CONTIG_MASK

Is it fine to undef CONTIG_MASK here, when it was defined outside of
this file?  It does seem weird to me.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 11:51:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 11:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340436.565469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwMsD-0007lY-6Y; Wed, 01 Jun 2022 11:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340436.565469; Wed, 01 Jun 2022 11:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwMsD-0007lR-3g; Wed, 01 Jun 2022 11:50:49 +0000
Received: by outflank-mailman (input) for mailman id 340436;
 Wed, 01 Jun 2022 11:50:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwMsC-0007lH-EY; Wed, 01 Jun 2022 11:50:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwMsC-0007mm-Ag; Wed, 01 Jun 2022 11:50:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwMsB-00077A-Tb; Wed, 01 Jun 2022 11:50:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwMsB-0000dA-T5; Wed, 01 Jun 2022 11:50:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Tie1e6RALmuh7Y8dE6cHueIBUuDrrBQMuXLKHssoQs=; b=wy2GUL7CXeDWQpRRSur966S95Z
	UfQ65pEQ+cwdD2sn4GQbdVzsk3HspezOc2OK8R1XGNl/UZQvGRXr5g20HYLgUWOtb4WBGoWxFZZNp
	TIiNVQ/5fM2FZS/uzyMo7/2arSEJDeHy4WrOIqgF6884ccjGfFj2mzjCx1kK1g0Ac2iA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170793-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170793: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=8b8fd1bc67efa22852cbdf5594c9be5b99922df1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 11:50:47 +0000

flight 170793 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170793/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              8b8fd1bc67efa22852cbdf5594c9be5b99922df1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  691 days
Failing since        151818  2020-07-11 04:18:52 Z  690 days  672 attempts
Testing same since   170793  2022-06-01 04:20:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 110353 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 12:12:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 12:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340447.565479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwNCm-0002It-6d; Wed, 01 Jun 2022 12:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340447.565479; Wed, 01 Jun 2022 12:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwNCm-0002Im-3j; Wed, 01 Jun 2022 12:12:04 +0000
Received: by outflank-mailman (input) for mailman id 340447;
 Wed, 01 Jun 2022 12:12:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6G0=WI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwNCk-0002Ig-3p
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 12:12:02 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0a1dc21f-e1a4-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 14:12:00 +0200 (CEST)
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2058.outbound.protection.outlook.com [104.47.13.58]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-16-8T--mCyDN_CdcEQOHta9jA-1; Wed, 01 Jun 2022 14:11:57 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8744.eurprd04.prod.outlook.com (2603:10a6:10:2e2::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Wed, 1 Jun
 2022 12:11:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.012; Wed, 1 Jun 2022
 12:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a1dc21f-e1a4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654085520;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tmNljydhbifzkJYxeJr/2ZbCc04r/6gwjTVNoNW7FrY=;
	b=CQo9IQ6jdK0+yp+UIvA6/EA7AUJuj3JJ3Q1VZ+qwBjxtcppXcAwJeSORhwlowj0Ty1S6No
	tPFNRUt/m0IV2Tizuu57nYSMyGEUYXqFWByVcJOks6z71jZG2FiI2sJ6fzmUHK9vL7M580
	m3tkL7YVJskTc2y5VMg5EMURe0M4MnE=
X-MC-Unique: 8T--mCyDN_CdcEQOHta9jA-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <803e1e01-5a8e-36fa-1fe1-35bcf147c8e6@suse.com>
Date: Wed, 1 Jun 2022 14:11:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v5 07/15] x86: introduce helper for recording degree of
 contiguity in page tables
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1fec512a-8c7b-69b5-40bf-88b42e9ecb7d@suse.com>
 <YpdNg5fgAncfSeTK@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpdNg5fgAncfSeTK@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR06CA0211.eurprd06.prod.outlook.com
 (2603:10a6:20b:45e::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ac5e9ad-11a7-4bd0-786d-08da43c7eb80
X-MS-TrafficTypeDiagnostic: DU2PR04MB8744:EE_
X-Microsoft-Antispam-PRVS:
	<DU2PR04MB8744B558494CBDAFE95D668FB3DF9@DU2PR04MB8744.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ac5e9ad-11a7-4bd0-786d-08da43c7eb80
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Jun 2022 12:11:56.0589
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xrO8SvKlxl+/bUiTBp059SU5DSzgC34VYc8ta0c9mU9TZqAsIlND0Xgh8PDrs+9O1feC4Jsx5wwcQ4NWET7/jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8744

On 01.06.2022 13:29, Roger Pau Monn=C3=A9 wrote:
> On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
>> --- /dev/null
>> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
>> @@ -0,0 +1,110 @@
>> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
>> +#define __ASM_X86_PT_CONTIG_MARKERS_H
>> +
>> +/*
>> + * Short of having function templates in C, the function defined below is
>> + * intended to be used by multiple parties interested in recording the
>> + * degree of contiguity in mappings by a single page table.
>> + *
>> + * Scheme: Every entry records the order of contiguous successive entries,
>> + * up to the maximum order covered by that entry (which is the number of
>> + * clear low bits in its index, with entry 0 being the exception using
>> + * the base-2 logarithm of the number of entries in a single page table).
>> + * While a few entries need touching upon update, knowing whether the
>> + * table is fully contiguous (and can hence be replaced by a higher level
>> + * leaf entry) is then possible by simply looking at entry 0's marker.
>> + *
>> + * Prereqs:
>> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
>> + *   contiguous bits (ignored by hardware), before including this file (or
>> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
>> + * - page tables to be passed to the helper need to be initialized with
>> + *   correct markers,
>> + * - not-present entries need to be entirely clear except for the marker.
>> + */
>> +
>> +/* This is the same for all anticipated users, so doesn't need passing in. */
>> +#define CONTIG_LEVEL_SHIFT 9
>> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
>> +
>> +#ifdef CONTIG_MASK
>> +
>> +#include <xen/bitops.h>
>> +#include <xen/lib.h>
>> +#include <xen/page-size.h>
>> +
>> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
>> +#define SET_MARKER(e, m) \
>> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
>> +
>> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
>> +    ((kind) == PTE_kind_leaf \
>> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
>> +     : !((pt)[i] & ~CONTIG_MASK))
>> +
>> +enum PTE_kind {
>> +    PTE_kind_null,
>> +    PTE_kind_leaf,
>> +    PTE_kind_table,
>> +};
>> +
>> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
>> +                                     unsigned int level, enum PTE_kind kind)
>> +{
>> +    unsigned int b, i = idx;
>> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
>> +
>> +    ASSERT(idx < CONTIG_NR);
>> +    ASSERT(!(pt[idx] & CONTIG_MASK));
>> +
>> +    /* Step 1: Reduce markers in lower numbered entries. */
>> +    while ( i )
>> +    {
>> +        b = find_first_set_bit(i);
>> +        i &= ~(1U << b);
>> +        if ( GET_MARKER(pt[i]) <= b )
>> +            break;
>> +        SET_MARKER(pt[i], b);
>> +    }
>> +
>> +    /* An intermediate table is never contiguous with anything. */
>> +    if ( kind == PTE_kind_table )
>> +        return false;
>> +
>> +    /*
>> +     * Present entries need in-sync index and address to be a candidate
>> +     * for being contiguous: What we're after is whether ultimately the
>> +     * intermediate table can be replaced by a superpage.
>> +     */
>> +    if ( kind != PTE_kind_null &&
>> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
>> +        return false;
>> +
>> +    /* Step 2: Check higher numbered entries for contiguity. */
>> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
>> +    {
>> +        i = idx | (1U << b);
>> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
>> +            break;
>> +    }
>> +
>> +    /* Step 3: Update markers in this and lower numbered entries. */
>> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
>> +    {
>> +        i = idx ^ (1U << b);
>> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
>> +            break;
>> +        idx &= ~(1U << b);
>> +    }
>> +
>> +    return b == CONTIG_LEVEL_SHIFT;
>> +}
>> +
>> +#undef IS_CONTIG
>> +#undef SET_MARKER
>> +#undef GET_MARKER
>> +#undef CONTIG_MASK
> 
> Is it fine to undef CONTIG_MASK here, when it was defined outside of
> this file?  It does seem weird to me.

I consider it not just fine, but desirable. Use sites of this header #define
this just for the purpose of this header. And I want to leave name space as
uncluttered as possible. Should there really arise a need to keep this, we
can always consider removing the #undef (just like I did for
CONTIG_LEVEL_SHIFT and CONTIG_NR because of feedback of yours on another
patch).

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:00:10 2022
Date: Wed, 1 Jun 2022 14:59:41 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
Message-ID: <YpdivYC3MlpYPBLB@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1df469a9-ddf2-2036-105a-f303f0554f06@suse.com>
In-Reply-To: <1df469a9-ddf2-2036-105a-f303f0554f06@suse.com>

On Fri, May 27, 2022 at 01:17:35PM +0200, Jan Beulich wrote:
> Page tables are used for two purposes after allocation: They either
> start out all empty, or they are filled to replace a superpage.
> Subsequently, to replace all empty or fully contiguous page tables,
> contiguous sub-regions will be recorded within individual page tables.
> Install the initial set of markers immediately after allocation. Make
> sure to retain these markers when further populating a page table in
> preparation for it to replace a superpage.
> 
> The markers are simply 4-bit fields holding the order value of
> contiguous entries. To demonstrate this, if a page table had just 16
> entries, this would be the initial (fully contiguous) set of markers:
> 
> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
> 
> "Contiguous" here means not only present entries with successively
> increasing MFNs, each one suitably aligned for its slot, and identical
> attributes, but also a respective number of all non-present (zero except
> for the markers) entries.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -26,6 +26,7 @@
>  #include <asm/hvm/io.h>
>  #include <asm/io_apic.h>
>  #include <asm/mem_paging.h>
> +#include <asm/pt-contig-markers.h>
>  #include <asm/setup.h>
>  
>  const struct iommu_init_ops *__initdata iommu_init_ops;
> @@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
>      return 0;
>  }
>  
> -struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
> +struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
> +                                      uint64_t contig_mask)
>  {
>      unsigned int memflags = 0;
>      struct page_info *pg;
> -    void *p;
> +    uint64_t *p;
>  
>  #ifdef CONFIG_NUMA
>      if ( hd->node != NUMA_NO_NODE )
> @@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
>          return NULL;
>  
>      p = __map_domain_page(pg);
> -    clear_page(p);
> +
> +    if ( contig_mask )
> +    {
> +        /* See pt-contig-markers.h for a description of the marker scheme. */
> +        unsigned int i, shift = find_first_set_bit(contig_mask);
> +
> +        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
> +
> +        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
> +        p[1] = 0;
> +        p[2] = 1ull << shift;
> +        p[3] = 0;
> +
> +        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )

FWIW, you could also use sizeof(*p) instead of hardcoding 8.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:02:53 2022
Date: Wed, 1 Jun 2022 15:02:39 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 07/15] x86: introduce helper for recording degree of
 contiguity in page tables
Message-ID: <Ypdjb++na2OGLr7u@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1fec512a-8c7b-69b5-40bf-88b42e9ecb7d@suse.com>
 <YpdNg5fgAncfSeTK@Air-de-Roger>
 <803e1e01-5a8e-36fa-1fe1-35bcf147c8e6@suse.com>
In-Reply-To: <803e1e01-5a8e-36fa-1fe1-35bcf147c8e6@suse.com>

On Wed, Jun 01, 2022 at 02:11:53PM +0200, Jan Beulich wrote:
> On 01.06.2022 13:29, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
> >> --- /dev/null
> >> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
> >> @@ -0,0 +1,110 @@
> >> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
> >> +#define __ASM_X86_PT_CONTIG_MARKERS_H
> >> +
> >> +/*
> >> + * Short of having function templates in C, the function defined below is
> >> + * intended to be used by multiple parties interested in recording the
> >> + * degree of contiguity in mappings by a single page table.
> >> + *
> >> + * Scheme: Every entry records the order of contiguous successive entries,
> >> + * up to the maximum order covered by that entry (which is the number of
> >> + * clear low bits in its index, with entry 0 being the exception using
> >> + * the base-2 logarithm of the number of entries in a single page table).
> >> + * While a few entries need touching upon update, knowing whether the
> >> + * table is fully contiguous (and can hence be replaced by a higher level
> >> + * leaf entry) is then possible by simply looking at entry 0's marker.
> >> + *
> >> + * Prereqs:
> >> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
> >> + *   contiguous bits (ignored by hardware), before including this file (or
> >> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
> >> + * - page tables to be passed to the helper need to be initialized with
> >> + *   correct markers,
> >> + * - not-present entries need to be entirely clear except for the marker.
> >> + */
> >> +
> >> +/* This is the same for all anticipated users, so doesn't need passing in. */
> >> +#define CONTIG_LEVEL_SHIFT 9
> >> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
> >> +
> >> +#ifdef CONTIG_MASK
> >> +
> >> +#include <xen/bitops.h>
> >> +#include <xen/lib.h>
> >> +#include <xen/page-size.h>
> >> +
> >> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
> >> +#define SET_MARKER(e, m) \
> >> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
> >> +
> >> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
> >> +    ((kind) == PTE_kind_leaf \
> >> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
> >> +     : !((pt)[i] & ~CONTIG_MASK))
> >> +
> >> +enum PTE_kind {
> >> +    PTE_kind_null,
> >> +    PTE_kind_leaf,
> >> +    PTE_kind_table,
> >> +};
> >> +
> >> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
> >> +                                     unsigned int level, enum PTE_kind kind)
> >> +{
> >> +    unsigned int b, i = idx;
> >> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
> >> +
> >> +    ASSERT(idx < CONTIG_NR);
> >> +    ASSERT(!(pt[idx] & CONTIG_MASK));
> >> +
> >> +    /* Step 1: Reduce markers in lower numbered entries. */
> >> +    while ( i )
> >> +    {
> >> +        b = find_first_set_bit(i);
> >> +        i &= ~(1U << b);
> >> +        if ( GET_MARKER(pt[i]) <= b )
> >> +            break;
> >> +        SET_MARKER(pt[i], b);
> >> +    }
> >> +
> >> +    /* An intermediate table is never contiguous with anything. */
> >> +    if ( kind == PTE_kind_table )
> >> +        return false;
> >> +
> >> +    /*
> >> +     * Present entries need in-sync index and address to be a candidate
> >> +     * for being contiguous: What we're after is whether ultimately the
> >> +     * intermediate table can be replaced by a superpage.
> >> +     */
> >> +    if ( kind != PTE_kind_null &&
> >> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
> >> +        return false;
> >> +
> >> +    /* Step 2: Check higher numbered entries for contiguity. */
> >> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
> >> +    {
> >> +        i = idx | (1U << b);
> >> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> >> +            break;
> >> +    }
> >> +
> >> +    /* Step 3: Update markers in this and lower numbered entries. */
> >> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
> >> +    {
> >> +        i = idx ^ (1U << b);
> >> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> >> +            break;
> >> +        idx &= ~(1U << b);
> >> +    }
> >> +
> >> +    return b == CONTIG_LEVEL_SHIFT;
> >> +}
> >> +
> >> +#undef IS_CONTIG
> >> +#undef SET_MARKER
> >> +#undef GET_MARKER
> >> +#undef CONTIG_MASK
> > 
> > Is it fine to undef CONTIG_MASK here, when it was defined outside of
> > this file?  It does seem weird to me.
> 
> I consider it not just fine, but desirable. Use sites of this header #define
> this just for the purpose of this header. And I want to leave name space as
> uncluttered as possible. Should there really arise a need to keep this, we
> can always consider removing the #undef (just like I did for
> CONTIG_LEVEL_SHIFT and CONTIG_NR because of feedback of yours on another
> patch).

OK, I find it kind of unexpected to undef in a file where it's not
defined, but I think that's fine.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:06:27 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170796-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170796: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 13:06:22 +0000

flight 170796 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170796/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5

Last test of basis   170790  2022-05-31 21:01:38 Z    0 days
Testing same since   170796  2022-06-01 08:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   09a6a71097..58ce5b6c33  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:17:58 2022
Message-ID: <b57e7c46-338e-c4e7-d9ed-b8c52e710ece@suse.com>
Date: Wed, 1 Jun 2022 15:17:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1df469a9-ddf2-2036-105a-f303f0554f06@suse.com>
 <YpdivYC3MlpYPBLB@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpdivYC3MlpYPBLB@Air-de-Roger>

On 01.06.2022 14:59, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:17:35PM +0200, Jan Beulich wrote:
>> Page tables are used for two purposes after allocation: They either
>> start out all empty, or they are filled to replace a superpage.
>> Subsequently, to replace all empty or fully contiguous page tables,
>> contiguous sub-regions will be recorded within individual page tables.
>> Install the initial set of markers immediately after allocation. Make
>> sure to retain these markers when further populating a page table in
>> preparation for it to replace a superpage.
>>
>> The markers are simply 4-bit fields holding the order value of
>> contiguous entries. To demonstrate this, if a page table had just 16
>> entries, this would be the initial (fully contiguous) set of markers:
>>
>> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
>> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
>>
>> "Contiguous" here means not only present entries with successively
>> increasing MFNs, each one suitably aligned for its slot, and identical
>> attributes, but also a respective number of all non-present (zero except
>> for the markers) entries.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> @@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
>>      return 0;
>>  }
>> 
>> -struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
>> +struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
>> +                                      uint64_t contig_mask)
>>  {
>>      unsigned int memflags = 0;
>>      struct page_info *pg;
>> -    void *p;
>> +    uint64_t *p;
>> 
>>  #ifdef CONFIG_NUMA
>>      if ( hd->node != NUMA_NO_NODE )
>> @@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
>>          return NULL;
>> 
>>      p = __map_domain_page(pg);
>> -    clear_page(p);
>> +
>> +    if ( contig_mask )
>> +    {
>> +        /* See pt-contig-markers.h for a description of the marker scheme. */
>> +        unsigned int i, shift = find_first_set_bit(contig_mask);
>> +
>> +        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
>> +
>> +        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
>> +        p[1] = 0;
>> +        p[2] = 1ull << shift;
>> +        p[3] = 0;
>> +
>> +        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )
> 
> FWIW, you could also use sizeof(*p) instead of hardcoding 8.

Indeed. Changed.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:24:43 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich
	<jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>, Juergen
 Gross <jgross@suse.com>, Roger Pau Monne <roger.pau@citrix.com>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: Re: Process for cherry-picking patches from other projects
Thread-Topic: Process for cherry-picking patches from other projects
Thread-Index: AQHYZtZ1suK3TAKIFU2tuY9R7u+Tcq0jN1mAgADFIoCAAEpAAIAWYU6A
Date: Wed, 1 Jun 2022 13:24:03 +0000
Message-ID: <EC24A8EA-BB7B-4D66-9799-929761913493@citrix.com>
References: <396325A0-7EE6-4EAC-9BB9-BA67D878E6AE@citrix.com>
 <5e4d505c-a02c-eb54-8299-b1078943a8a5@suse.com>
 <alpine.DEB.2.22.394.2205172012100.1905099@ubuntu-linux-20-04-desktop>
 <9bb6855e-ee93-691b-877e-b187db91dbd7@xen.org>
In-Reply-To: <9bb6855e-ee93-691b-877e-b187db91dbd7@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bf493ddb-4ddb-4030-98b6-08da43d1ff44
x-ms-traffictypediagnostic: SA2PR03MB5882:EE_
x-microsoft-antispam-prvs:
 <SA2PR03MB5882E85701930B698D66ACC399DF9@SA2PR03MB5882.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 5c0xP3LskNJnZKncu23KbVFSr4651lMqFRQaDUE+ZnCE4PCvNOPx1+aAmmAq96HKG7HosybdmU5s474cQUaMzMudiEhco7XbHrmgvmdADPOQP/XXgXalZ1bt/hznrnt0jBT1nKeqzrbP5oxzXsrsY8GGaFFDa7+A/fNJ8xHNagS4Ak0qRgMPPcZ8Xb6Tos6hWvSGpLSimcWuiwRqZ0y3ePSqVxzIo1FkL8hl8v2BJkF2MgoaV8ALE59Li3IZSFCx/QazL16SIZ0EG4AWFI8Xv9S2vmvVd4g/gUvE0LLTE6xnCxgw65PLUnOxkJb8byqf2TZxhpLTe9GjdFJczho8Y7icNjyAKaxWJ4SDK0UHpJxrGUn5OpeHdTKNlzxyig63vsOEo8gwQvBzjCz4B4U6blCTSx2XHTV64sfUy4X4R5dQckSGvzh46lTky4ldXA5PkAYppw8p3lsvq2Gy3GIsNqTy9YSl1efWq+TuzT9z1NKLh2r+yCLs8/A3YOH1tlcwgu2wskF59N9kGjO390mIDgEzgZRXcKLXacYXUlagZJUOu6ls6RRgih+uHxghGcHUy0MxjOZBiXNp7zpaPELDchr3gdFccGmPU/51nFqRI98blVyXAVY54MBi+aQge0cZYBsXHq3nWGolShl1Hr0PlEaoEdL5ifSnUGjpiz5jIdE8VTCvWwlH0Tj8GZV74xnb02aKEfY47trpzufM9fxiTxO39WXy231WAl3JJaM2cRwKKqCVuA0mkL1+tM6lOVp/CTjIAqqk8WFWGV7+pTiuhqZevNzDf0mC6AisE4UA0bBV6qpjH+V+P+e5YqwthDqInXv3kMeRPrTHnjlXB8B0IA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(53546011)(122000001)(38100700002)(5660300002)(38070700005)(6506007)(83380400001)(508600001)(4326008)(91956017)(8676002)(76116006)(36756003)(66446008)(66556008)(66476007)(64756008)(66946007)(33656002)(2616005)(82960400001)(86362001)(71200400001)(8936002)(316002)(54906003)(99936003)(966005)(6486002)(6916009)(2906002)(186003)(6512007)(26005)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?Rm1OaG1PaGhib3FXT1RvdE05cW9YU011d1BoVzVvUDdkTFhWYzd2bzFJV0tX?=
 =?utf-8?B?WTdlRXJCNkVPUUVEdC9GVkRyRVFEbktPTk54U3lwLzFqWnZnRzJ2VEtHbGhU?=
 =?utf-8?B?SkpEUWY4YVZXWlhIZ2VmMUVjYURpUllaTWNHZExWdWFITllKbzRTc0VxL3l2?=
 =?utf-8?B?Uk5aOS9sc1VCVXV1NWJhSjZ5UVdsOENoZHAwNXo0MTVwd2FUM0lUdlIraU5N?=
 =?utf-8?B?NStiNGFGNnBDNHRWSU5HRHFWa0ZwNWxzYlZKMUZqRWNKUDg2Mm4rdTBIZ2E3?=
 =?utf-8?B?cGt6cyt0bFRneTREUVl5U0NBczlOcjZ6SS9vQlhsR3lTYnA0TXg2S3FXUkZi?=
 =?utf-8?B?MjBjSnF0ZlNwZVJIZS9oUlNqaXRGVG1rUHRWOWNWbEhXeHBFYW4zVGozeUNE?=
 =?utf-8?B?c1BMczNqM2hJaEhiWmI1SWJRYkErQUIvR2tUc3FuS1F3Tyt1M0VSMkY4ZGNZ?=
 =?utf-8?B?UkU5TWRtTjMxQ3ZjK3kwS1pDSDlRZ3BNak90ak5udW4zODdtekx2bmZSaDZy?=
 =?utf-8?B?WE5MTDZiN1V1Y1FDMkVlM1NKa2x4SU0vQ28wRnZLdkgrTFJjYWcraG1RZVJz?=
 =?utf-8?B?N0RSaVB2c1JsZEpHS3JRbFBoMHVtbmFWaHV4UklLZUl6RUx2OC9idjJwN1lx?=
 =?utf-8?B?Zi80QzZuMU95S0ZXNG40YlhVNkc3ckU1U0ROb000cnowbmVwRGdMaStwNnVO?=
 =?utf-8?B?UjBIcVVld2FqRHVmZ1Y1elhxVzhHQUZTaGFaZ0hCb1FBUTQyL1Z5LzN2S0Vx?=
 =?utf-8?B?YVNqekxrUUMwK0w3Y0lyR29uWWEralQ0TzByeGtTQUo1anA1QlpiTGZ1VGJ1?=
 =?utf-8?B?cG96S1BkWUZvdXQ2SWpSWVNWWjc2a1VWZXhDV3pkNWNBa2prT0kvWGc4SGpR?=
 =?utf-8?B?Q0J6TFd1T1g4R0J0RU5LeWwxZFBFUDhyL2RvZWF2UDB6UW1Pek45MDhZYlhO?=
 =?utf-8?B?RDd5aTMrMHlQaFdEOGI0ZnVCZm9Zbkgydkp6bkdacGpXTlVVR1dZOHFsbTdT?=
 =?utf-8?B?ZnBJM2hLT3pldFhuWUxxVlZDaFE0ekNDTDA0L0E1YkNRQnFGeWVPV2FKeFFa?=
 =?utf-8?B?OWM3TnJOUksrczQrdDNHb3RUNk1xUmV4TUxEV1dpaXhjTGZMT3JRME03VGhD?=
 =?utf-8?B?NGprOUYwamhQODliWEJiWVBlVzh2MHFGUk10QUVvb294RmxrQVp3YzlicTlS?=
 =?utf-8?B?blU0c2g1ZTVjWUNFSUpnRTYwa2dsZ0wwb1I1cVNtcFByWjl5YkVGL1hWTnVz?=
 =?utf-8?B?UDR3NjdmOUFVWmYvbUxBQjhPNUZJMjJwQnE5anhZUW9vZE14Z1UybWFOSUla?=
 =?utf-8?B?a0w0Z3FKbHZvZTBjREVPeUV2NjRIMkNxNzhvZ3dzeUhseERoa2l3SGVib0xR?=
 =?utf-8?B?UUh6VVZpd2dqSERkbVFzS1N0Zkh6UHFJWVVCVUZrbFBEa3YzV3dqSDhwM3Rt?=
 =?utf-8?B?YmRMMVpPZk1mRnQxWC8xcHBLR1VOVms3SExHc2NITVRsS3N0NkcrVUEzR0V4?=
 =?utf-8?B?RXgrUnVWbFpXaTlDaVR4QTlLMEtQRzRZdmNSdHZoUW5JYWJWRFBPRHl6TGZE?=
 =?utf-8?B?NzlLUUxmdGJkcnFkNUtqS21VcERMNVRTZERyL1FZQ3lSOHo3akZsR0wyUTRW?=
 =?utf-8?B?ZVAvL1R5Q2RyTDBDQlgyZ0ZtMWkrRXVJaWRqK1VGR0R6bU1hQmhjTnd5MkhL?=
 =?utf-8?B?blZGbjdrRlF0S1JOTXhtdWh3QjB0amlHUmRYUktSZGFkTWlQaklVL0lMTndU?=
 =?utf-8?B?cjhyK3R3b3pQdjYvL2UxakxhL0h3RTlqNUlDdkw1QmhaNDh6K1Q4cGF0Rmw2?=
 =?utf-8?B?bzUveDdWWWhWWklNMjVLK0VsUTVTbzJvRFVQRktQS05RNGtpbXhZQVY5bVU2?=
 =?utf-8?B?SElJbTNGSHVpMjE0Z2dWdjVKL1A4OFRrbzFkTzhWbUpqemNmZnJMa29nUlha?=
 =?utf-8?B?YzNwYkFhSlFyTWhtNnpnRmQwYW9LVnAzeERuYTN6S0JCVTM1OStjTndtczBi?=
 =?utf-8?B?alJFMERtNG1ydTZPWmFuM2N6QThGRGc3OWMwSGwwNlhseFVPSVhjU1ZTTnVh?=
 =?utf-8?B?ZXQ5T0hDMkdpNm5FM3VOT2tCNnk2K0NSMThGQWovYmovbExGYTNwdldMTHpH?=
 =?utf-8?B?TXBjdThyQURTZkxIRVNsd3JoWEp6eWR3SEkwVk8zVlRoaGNabUtIOWxzRll2?=
 =?utf-8?B?dk5IOStCSEZOZ0RLdHd6akN5TlNEb2EwRHRRWDFIWHl4M1cvVmUyckwxQWdi?=
 =?utf-8?B?aEVqckQ5ZU5SR2IzSTV1LzB6cGtFNGhLb1ZjNUFGTWUzS2M0dFRnRkFWZU9H?=
 =?utf-8?B?YTg4d3pRWnBNWFhxNHV1MTFPSFord1V2SHh4MXhFMkxsbzdQYWNGbXRTQ211?=
 =?utf-8?Q?RfLqEP7CyabCjkUw=3D?=
Content-Type: multipart/signed;
	boundary="Apple-Mail=_54787B9F-FECB-492B-B290-9D5F87F874F5";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf493ddb-4ddb-4030-98b6-08da43d1ff44
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Jun 2022 13:24:03.6101
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: I2o7lLdEfyD4HuzyYr33uNQhJuhE5HeIjUtOSbRrqPeR2NNmNi4K1Mit1VFT7hDqdSG77/2jgcDkd4u9d4c6wB97UX3ODD7cWmnXjW7py1c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5882

--Apple-Mail=_54787B9F-FECB-492B-B290-9D5F87F874F5
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8



> On 18 May 2022, at 08:38, Julien Grall <julien@xen.org> wrote:
> 
> Hi Stefano,
> 
> On 18/05/2022 04:12, Stefano Stabellini wrote:
>> On Tue, 17 May 2022, Jan Beulich wrote:
>>> Hmm. The present rules written down in docs/process/sending-patches.pandoc
>>> are a result of me having been accused of unduly stripping S-o-b (and maybe
>>> a few other) tags. If that was for a real reason (and not just because of
>>> someone's taste), how could it ever be okay to remove S-o-b? (Personally I
>>> agree with what you propose, it just doesn't line up with that discussion
>>> we had not all that long ago.)
>> This is the meaning of the DCO: https://developercertificate.org
>> The relevant case is:
>> (b) The contribution is based upon previous work that, to the best
>>     of my knowledge, is covered under an appropriate open source
>>     license and I have the right under that license to submit that
>>     work with modifications, whether created in whole or in part
>>     by me, under the same open source license (unless I am
>>     permitted to submit under a different license), as indicated
>>     in the file; or
>> IANAL, but I read this to mean that only the submitter's Signed-off-by
>> is required. Also consider that the code could come from a place where
>> Signed-off-by is not used. As long as the copyright checks out, then we
>> are OK.
> 
> I don't think I can write better than what Ian wrote back then:
> 
> "
> Please can we keep the Linux S-o-b.
> 
> Note that S-o-b is not a chain of *approval* (whose relevance to us is
> debatable) but a chain of *custody and transmission* for
> copyright/licence/gdpr purposes.  That latter chain is highly
> relevant to us.
> 
> All such S-o-b should be retained.
> 
> Of course if you got the patch somewhere other than the Linux commit,
> then the chain of custody doesn't pass through the Linux commit.  But
> in that case I expect you to be able to say where you got it.
> "

So the thread in question is "[PATCH 1/7] xz: add fall-through comments
to a switch statement" [1].

This effectively turned into a policy discussion that happened on a
random thread about compression algorithms.  It's likely that a lot of
people who might have had opinions didn't read the thread; that's why
I started a new thread, to make sure people knew there was a policy
discussion going on.

I was on parental leave when this discussion happened.  Looking at the
thread, I agree with Julien's request to just copy-and-paste the whole
Linux commit message: it seems both simpler and more... fitting?
Respectful? Something like that; and it additionally saves the
reviewer from having to think too hard about whether the removed
S-o-b's were necessary.  It's something we should just do because it's
easy and generally better, particularly as we now have a way of
indicating "above this line is *their* way of doing things, which may
contain useless data; below this line is *ours*."

However, the justification Ian put forward in that thread -- that
"S-o-b is ...a chain of *custody and transmission* for
copyright/licence/gdpr purposes" -- must be incorrect.  If it were
true, then when we import a file from another project, we would have
to check in *the entire git log for that file up to that point*,
including all patches.  After all, we would need to know the copyright
provenance *for each line*; even having a massive list of all S-o-b's
from the history of the file wouldn't be of any use if a copyright
dispute actually came up.  I think that shows the absurdity of the
position.

What we need to be able to do, in the event of some sort of challenge,
is to track down, for each line of code, where it came from and who
originally asserted that it was GPL.  As long as we have the Linux
commit at the point of import, we can track everything else down.  In
fact, it will be much easier to track down from a Linux git commit
than from anything checked into the commit message.

I'll double-check with LF legal on this, but I'm pretty sure that
having a "pointer" to where the code came from (either a git commit or
a message-id) should be fine.

 -George

[1] https://patchwork.kernel.org/project/xen-devel/patch/0ed245fa-58a7-a5f6-b82e-48f9ed0b6970@suse.com/

--Apple-Mail=_54787B9F-FECB-492B-B290-9D5F87F874F5
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKXaHEACgkQshXHp8eE
G+31eQgAhB7H9S/sKs9QPTrMvd0g20y1ZI5PxqfMQUpSDukDhSndbeY+RHZ/haEN
cGiAPubApwJRKcR7BVOQUrhe0aqBNbyWPTfOz2qhj7Mx9Eh6HT/UH58BFMLVHCpt
xdOnSTPIpFi4maBU7HQ/l7+BNrZM0fnLFEueaBMs00jOxhxT+U50juxp3sK7cSnH
Z1A3RYSzFQSbrfeGtlOBYF7/c50esoYvPS1apsNjP4gLK/edwNE1xyQSZeL6TuKW
P8vxSaXJ5qEIAI9FbkjgrpNLrxToNG+y4HcfjQGD4YtAz2MmBman62UZzO/OC1Lm
zCTRR0rLReV68rzWAfypsTPn6hcpjg==
=v3+m
-----END PGP SIGNATURE-----

--Apple-Mail=_54787B9F-FECB-492B-B290-9D5F87F874F5--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 13:32:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 13:32:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340503.565546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwOSG-00069X-Mi; Wed, 01 Jun 2022 13:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340503.565546; Wed, 01 Jun 2022 13:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwOSG-00069Q-Ju; Wed, 01 Jun 2022 13:32:08 +0000
Received: by outflank-mailman (input) for mailman id 340503;
 Wed, 01 Jun 2022 13:32:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Edd5=WI=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwOSF-00069K-AV
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 13:32:07 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a9e996f-e1af-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 15:32:06 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id i10so2829347lfj.0
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 06:32:06 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 d20-20020a05651233d400b0047255d2115csm366642lfg.139.2022.06.01.06.32.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 01 Jun 2022 06:32:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a9e996f-e1af-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=6AmNd1uoVJElSSIZ5NcicKz1NijvWaHzL3bjyTaCTH4=;
        b=ldSt7vIuRD+IuLgBfjyNqBye32eNJ62ST9orWrrQ7xCyXt5nToDaIro4j7IujIzZIw
         zgZ8vXfHbYGORoIsxAJLQvjkcc6sykEM9L1EyLORHH6oaIcpSY1IUyFUk03VKLCwnPN6
         +6LwGg/O5yk6KhmJNaozXZv2nmZ9jjdhr218OF8Xzfva6FguxwE3Hdd8DrotuQXxODYV
         ocw8FcfH9d/LPrBKVlTYvsTkJqyYatVMDZ+6eGYRZc3e5wOgbRaPB/gd6lX7imnGwALe
         3Zrzu6o06WNYbgVRgdyOI4ykt7Tb46alUg5IEUKgeCqhd4Re1nl50dMdjR8kh7X+CtQR
         trgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=6AmNd1uoVJElSSIZ5NcicKz1NijvWaHzL3bjyTaCTH4=;
        b=OEdSds2V5Y4xKW5n4WiZRss21Iw7DtREWaDI8jnWOoOfTNePKoXdyfR7ZC1XOpDg5S
         Ny+Y2B/EJB+xqJA6aimK/MsvMaD/FEIYPa+OS2y7uXmgMg9HComMV5uICOcSbVBrWYZw
         AWKkSvvUje17KZHIsyj1bd1t6kHieeL9AE3xk6x5/EyktlRMis9KJ0z0GXSwFLzNxBp3
         QwxNkMF5gnfa8/QqkEB6ZSLGXgYFVO61p+GFQGdbY0/m10Pe5tiOaVENxLmtPPVx+JzL
         HCHdU/uzH4o8bfNaOD+Aowwwu7Xaaurkbjb+cKV10s/7ZbjAOEB9X14XFEdwGxB0/4A7
         z2Ug==
X-Gm-Message-State: AOAM533RgT8ujtqsfNlR1unSF8nv8JwgUcYog7w7ewZqOheEltcu61kM
	x+Sifjz0poMN711WgP1lnjU=
X-Google-Smtp-Source: ABdhPJxB3fNJrJkKRrllMC2ycuvNXzvXlHnLWls/OtCzWnm+TxujSRcM7FqomSJRwMxSqo/CEZZNhg==
X-Received: by 2002:a05:6512:1041:b0:478:afc6:5846 with SMTP id c1-20020a056512104100b00478afc65846mr24643051lfb.132.1654090325618;
        Wed, 01 Jun 2022 06:32:05 -0700 (PDT)
Subject: Re: [PATCH V3 5/8] dt-bindings: Add xen,grant-dma IOMMU description
 for xen-grant DMA ops
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, devicetree@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 iommu@lists.linux-foundation.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Rob Herring <robh+dt@kernel.org>, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>,
 Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
 Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig
 <hch@infradead.org>, Arnd Bergmann <arnd@arndb.de>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com>
 <1653944417-17168-6-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2205311726000.1905099@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <31c21ef0-3847-a896-a387-c2e1cc0f9467@gmail.com>
Date: Wed, 1 Jun 2022 16:32:03 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2205311726000.1905099@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 01.06.22 03:34, Stefano Stabellini wrote:

Hello Stefano

> On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The main purpose of this binding is to communicate Xen specific
>> information using generic IOMMU device tree bindings (which is
>> a good fit here) rather than introducing a custom property.
>>
>> Introduce Xen specific IOMMU for the virtualized device (e.g. virtio)
>> to be used by Xen grant DMA-mapping layer in the subsequent commit.
>>
>> The reference to Xen specific IOMMU node using "iommus" property
>> indicates that Xen grant mappings need to be enabled for the device,
>> and it specifies the ID of the domain where the corresponding backend
>> resides. The domid (domain ID) is used as an argument to the Xen grant
>> mapping APIs.
>>
>> This is needed for the option to restrict memory access using Xen grant
>> mappings to work, whose primary goal is to enable using virtio devices
>> in Xen guests.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>> Changes RFC -> V1:
>>     - update commit subject/description and text in description
>>     - move to devicetree/bindings/arm/
>>
>> Changes V1 -> V2:
>>     - update text in description
>>     - change the maintainer of the binding
>>     - fix validation issue
>>     - reference xen,dev-domid.yaml schema from virtio/mmio.yaml
>>
>> Change V2 -> V3:
>>     - Stefano already gave his Reviewed-by, I dropped it due to the changes (significant)
>>     - use generic IOMMU device tree bindings instead of custom property
>>       "xen,dev-domid"
>>     - change commit subject and description, was
>>       "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"
>> ---
>>   .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 49 ++++++++++++++++++++++
>>   1 file changed, 49 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>> new file mode 100644
>> index 00000000..ab5765c
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>> @@ -0,0 +1,49 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Xen specific IOMMU for virtualized devices (e.g. virtio)
>> +
>> +maintainers:
>> +  - Stefano Stabellini <sstabellini@kernel.org>
>> +
>> +description:
>> +  The reference to Xen specific IOMMU node using "iommus" property indicates
>> +  that Xen grant mappings need to be enabled for the device, and it specifies
>> +  the ID of the domain where the corresponding backend resides.
>> +  The binding is required to restrict memory access using Xen grant mappings.
> I think this is OK and in line with the discussion we had on the list. I
> propose the following wording instead:
>
> """
> The Xen IOMMU represents the Xen grant table interface. Grant mappings
> are to be used with devices connected to the Xen IOMMU using the
> "iommus" property, which also specifies the ID of the backend domain.
> The binding is required to restrict memory access using Xen grant
> mappings.
> """
>
>
>> +properties:
>> +  compatible:
>> +    const: xen,grant-dma
>> +
>> +  '#iommu-cells':
>> +    const: 1
>> +    description:
>> +      Xen specific IOMMU is multiple-master IOMMU device.
>> +      The single cell describes the domid (domain ID) of the domain where
>> +      the backend is running.
> Here I would say:
>
> """
> The single cell is the domid (domain ID) of the domain where the backend
> is running.
> """
>
> With the two wording improvements:

I am happy with proposed wording improvements, will update.


>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks!


>
>
>> +required:
>> +  - compatible
>> +  - "#iommu-cells"
>> +
>> +additionalProperties: false
>> +
>> +examples:
>> +  - |
>> +    xen_iommu {
>> +        compatible = "xen,grant-dma";
>> +        #iommu-cells = <1>;
>> +    };
>> +
>> +    virtio@3000 {
>> +        compatible = "virtio,mmio";
>> +        reg = <0x3000 0x100>;
>> +        interrupts = <41>;
>> +
>> +        /* The backend is located in Xen domain with ID 1 */
>> +        iommus = <&xen_iommu 1>;
>> +    };
>> -- 
>> 2.7.4
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 14:25:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 14:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340511.565557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwPHo-0003jI-Ma; Wed, 01 Jun 2022 14:25:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340511.565557; Wed, 01 Jun 2022 14:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwPHo-0003jB-Jl; Wed, 01 Jun 2022 14:25:24 +0000
Received: by outflank-mailman (input) for mailman id 340511;
 Wed, 01 Jun 2022 14:25:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Edd5=WI=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwPHm-0003j4-OB
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 14:25:22 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aacf5421-e1b6-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 16:25:20 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id 1so2179479ljp.8
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 07:25:20 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 be34-20020a056512252200b0047255d2111csm400301lfb.75.2022.06.01.07.25.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 01 Jun 2022 07:25:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aacf5421-e1b6-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=FBKSGghGMyehS2TUOWQO4R6WCmFwpZJjX3cauamzd94=;
        b=Ultw8DU29BFuHHv+SAl79K1cLAUml40sbYhf7/PNERagVaxUEYg66jGdS44iSwOWlN
         lWW2GoUaD/bEMI45tTYaZ+VrJza8xUox2zbdsm6A1B5GNJVjCMmiF888AgbA4h4m5aLg
         tn04ixsSJnKFIOwQRTmXkajhAmOBFv+X3vyNQ50rVoM6CwgPo61c3A6SUfi0bTVWMxdh
         z8+sz6cZ2yjIAK6Wa/lyR4bfrIUynhxz2XEYplpOkgbCoA+Gqhma7dRyrwfPrV5pt6CB
         PNV8QzOoeRVsBC7bv34CN2R3y1QEpFKFXNBfqhf6Vx5A3MC02CZRwFylWK207E8wRJ3C
         oLCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=FBKSGghGMyehS2TUOWQO4R6WCmFwpZJjX3cauamzd94=;
        b=EcYXnX3QabuDwGsJgbrtOx7/1xhwNyY/8SUdTllWcJlTcJnteGvYqITEHlnEsRZPg0
         jpXsZqdC4w9FXO0DQtyBufHNTDjq8isvIhJdgPZimkRycIinxBK2xuq1HJgPCEJY5m0A
         8p+SqXHTrFptufZXsjAvtPq2S9D1jWtEDqfABhOIzvMj3fageTuo3c1aEibYZYCohDRx
         FfIG5Xh8COhUIt/DcXGUiQwCdZrPtL+PMHjyWuoVCBC6NiTz+gfxbbM3xBgXZ3sJW30w
         2OoiGJqWJ3bLnFQEsCRKypUCSoHcQRQLMcuQD7jDma6RD5fRIwwbeyyJB4bSDl96M4KV
         EN7w==
X-Gm-Message-State: AOAM532e5l9TtEGrs5jCxbCE+t/ah5xeOrL2NWSpnydRDO4hcsRIRwqj
	nZj7sGfyZSKzmsGx5oEA7+k=
X-Google-Smtp-Source: ABdhPJzdgU0TEzWOzw1uVTXtzjAfJDCmQ71KdCZvzgA7rJtMp7zu5PwgQORHQ3eXgUOOk1oqeJ0a0Q==
X-Received: by 2002:a2e:a36f:0:b0:253:d948:731c with SMTP id i15-20020a2ea36f000000b00253d948731cmr37053467ljn.159.1654093520369;
        Wed, 01 Jun 2022 07:25:20 -0700 (PDT)
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2205311755010.1905099@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e67bde26-2eff-948a-a2c3-08cc474affa6@gmail.com>
Date: Wed, 1 Jun 2022 17:25:18 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2205311755010.1905099@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.06.22 04:04, Stefano Stabellini wrote:


Hello Stefano


> On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Reuse generic IOMMU device tree bindings to communicate Xen specific
>> information for the virtio devices for which restricted memory
>> access using Xen grant mappings needs to be enabled.
>>
>> Insert an "iommus" property pointing to the IOMMU node with "xen,grant-dma"
>> compatible into all virtio device nodes whose backends are going to run in
>> non-hardware domains (which are untrusted by default).
>>
>> Based on device-tree binding from Linux:
>> Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>>
>> The example of generated nodes:
>>
>> xen_iommu {
>>      compatible = "xen,grant-dma";
>>      #iommu-cells = <0x01>;
>>      phandle = <0xfde9>;
>> };
>>
>> virtio@2000000 {
>>      compatible = "virtio,mmio";
>>      reg = <0x00 0x2000000 0x00 0x200>;
>>      interrupts = <0x00 0x01 0xf01>;
>>      interrupt-parent = <0xfde8>;
>>      dma-coherent;
>>      iommus = <0xfde9 0x01>;
>> };
>>
>> virtio@2000200 {
>>      compatible = "virtio,mmio";
>>      reg = <0x00 0x2000200 0x00 0x200>;
>>      interrupts = <0x00 0x02 0xf01>;
>>      interrupt-parent = <0xfde8>;
>>      dma-coherent;
>>      iommus = <0xfde9 0x01>;
>> };
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>> !!! This patch is based on non upstreamed yet “Virtio support for toolstack
>> on Arm” V8 series which is on review now:
>> https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
>>
>> New device-tree binding (commit #5) is a part of solution to restrict memory
>> access under Xen using xen-grant DMA-mapping layer (which is also on review):
>> https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
>>
>> Changes RFC -> V1:
>>     - update commit description
>>     - rebase according to the recent changes to
>>       "libxl: Introduce basic virtio-mmio support on Arm"
>>
>> Changes V1 -> V2:
>>     - Henry already gave his Reviewed-by, I dropped it due to the changes
>>     - use generic IOMMU device tree bindings instead of custom property
>>       "xen,dev-domid"
>>     - change commit subject and description, was
>>       "libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device node"
>> ---
>>   tools/libs/light/libxl_arm.c          | 49 ++++++++++++++++++++++++++++++++---
>>   xen/include/public/device_tree_defs.h |  1 +
>>   2 files changed, 47 insertions(+), 3 deletions(-)
>>
>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>> index 9be9b2a..72da3b1 100644
>> --- a/tools/libs/light/libxl_arm.c
>> +++ b/tools/libs/light/libxl_arm.c
>> @@ -865,9 +865,32 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
>>       return 0;
>>   }
>>   
>> +static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
>> +{
>> +    int res;
>> +
>> +    /* See Linux Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml */
>> +    res = fdt_begin_node(fdt, "xen_iommu");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_compat(gc, fdt, 1, "xen,grant-dma");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#iommu-cells", 1);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_IOMMU);
>> +    if (res) return res;
>> +
>> +    res = fdt_end_node(fdt);
>> +    if (res) return res;
>> +
>> +    return 0;
>> +}
>>   
>>   static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
>> -                                 uint64_t base, uint32_t irq)
>> +                                 uint64_t base, uint32_t irq,
>> +                                 uint32_t backend_domid)
>>   {
>>       int res;
>>       gic_interrupt intr;
>> @@ -890,6 +913,16 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
>>       res = fdt_property(fdt, "dma-coherent", NULL, 0);
>>       if (res) return res;
>>   
>> +    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
>> +        uint32_t iommus_prop[2];
>> +
>> +        iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
>> +        iommus_prop[1] = cpu_to_fdt32(backend_domid);
>> +
>> +        res = fdt_property(fdt, "iommus", iommus_prop, sizeof(iommus_prop));
>> +        if (res) return res;
>> +    }
>> +
>>       res = fdt_end_node(fdt);
>>       if (res) return res;
>>   
>> @@ -1097,6 +1130,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>>       size_t fdt_size = 0;
>>       int pfdt_size = 0;
>>       libxl_domain_build_info *const info = &d_config->b_info;
>> +    bool iommu_created;
>>       unsigned int i;
>>   
>>       const libxl_version_info *vers;
>> @@ -1204,11 +1238,20 @@ next_resize:
>>           if (d_config->num_pcidevs)
>>               FDT( make_vpci_node(gc, fdt, ainfo, dom) );
>>   
>> +        iommu_created = false;
>>           for (i = 0; i < d_config->num_disks; i++) {
>>               libxl_device_disk *disk = &d_config->disks[i];
>>   
>> -            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
>> -                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
>> +            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
>> +                if (disk->backend_domid != LIBXL_TOOLSTACK_DOMID &&
>> +                    !iommu_created) {
>> +                    FDT( make_xen_iommu_node(gc, fdt) );
>> +                    iommu_created = true;
>> +                }
>> +
>> +                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
>> +                                           disk->backend_domid) );
>> +            }
> This is a matter of taste as the code would also work as is, but I would
> do the following instead:
>
>
> if ( d_config->num_disks > 0 &&
>       disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
>       FDT( make_xen_iommu_node(gc, fdt) );
> }
>
> for (i = 0; i < d_config->num_disks; i++) {
>      libxl_device_disk *disk = &d_config->disks[i];
>
>      if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
>          FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
> }

I see your idea to avoid using the local "iommu_created". For that, I 
think, we would need to modify the first check to make sure that we have 
at least one virtio device, otherwise we might end up inserting an 
unused IOMMU node. But that turns into an extra loop through num_disks 
looking for LIBXL_DISK_SPECIFICATION_VIRTIO.



>
> but I would give my acked-by anyway

Thanks!


>
>
>>           }
>>   
>>           if (pfdt)
>> diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h
>> index 209d43d..df58944 100644
>> --- a/xen/include/public/device_tree_defs.h
>> +++ b/xen/include/public/device_tree_defs.h
>> @@ -7,6 +7,7 @@
>>    * onwards. Reserve a high value for the GIC phandle.
>>    */
>>   #define GUEST_PHANDLE_GIC (65000)
>> +#define GUEST_PHANDLE_IOMMU (GUEST_PHANDLE_GIC + 1)
>>   
>>   #define GUEST_ROOT_ADDRESS_CELLS 2
>>   #define GUEST_ROOT_SIZE_CELLS 2
>> -- 
>> 2.7.4

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 15:06:42 2022
Message-ID: <f463934c-c196-6c25-12d1-85f5494de593@suse.com>
Date: Wed, 1 Jun 2022 17:06:26 +0200
Subject: Re: [PATCH] x86/spec-ctrl: Enumeration for new Intel BHI controls
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20220531192137.12468-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531192137.12468-1-andrew.cooper3@citrix.com>

On 31.05.2022 21:21, Andrew Cooper wrote:
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 15:10:30 2022
Message-ID: <671e1678-0bcd-6cc2-af9f-55b6e71db894@suse.com>
Date: Wed, 1 Jun 2022 17:10:17 +0200
Subject: Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
To: Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <1de2cc0a-e89c-6be9-9d6e-a10219f6f9aa@suse.com>
 <YpYozCRkfs1KdBus@Air-de-Roger>
 <22d2f071-4046-52c6-6f11-23fb23fb61c1@suse.com>
 <YpY/Pm43mMJFGYql@Air-de-Roger>
 <e5bc83c8-3962-4d43-4ef1-f338ca2fb782@suse.com>
 <YpcgoYkJMzQnXUkb@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpcgoYkJMzQnXUkb@Air-de-Roger>

On 01.06.2022 10:17, Roger Pau Monné wrote:
> On Wed, Jun 01, 2022 at 09:10:09AM +0200, Jan Beulich wrote:
>> On 31.05.2022 18:15, Roger Pau Monné wrote:
>>> On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
>>>> On 31.05.2022 16:40, Roger Pau Monné wrote:
>>>>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>>>>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>>>>>       * that fall in unusable ranges for PV Dom0.
>>>>>>       */
>>>>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>
>>>>>>      switch ( type = page_get_ram_type(mfn) )
>>>>>>      {
>>>>>>      case RAM_TYPE_UNUSABLE:
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>
>>>>>>      case RAM_TYPE_CONVENTIONAL:
>>>>>>          if ( iommu_hwdom_strict )
>>>>>> -            return false;
>>>>>> +            return 0;
>>>>>>          break;
>>>>>>
>>>>>>      default:
>>>>>>          if ( type & RAM_TYPE_RESERVED )
>>>>>>          {
>>>>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>>>>>> -                return false;
>>>>>> +                perms = 0;
>>>>>>          }
>>>>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>>>>>> -            return false;
>>>>>> +        else if ( is_hvm_domain(d) )
>>>>>> +            return 0;
>>>>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>>>>>> +            perms = 0;
>>>>>>      }
>>>>>>
>>>>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>>>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>      /* ... or the IO-APIC */
>>>>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>>>>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>>>> -            return false;
>>>>>> +    if ( has_vioapic(d) )
>>>>>> +    {
>>>>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>>>>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>>>> +                return 0;
>>>>>> +    }
>>>>>> +    else if ( is_pv_domain(d) )
>>>>>> +    {
>>>>>> +        /*
>>>>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>>>>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>>>>>> +         * have such established for IOMMUs.
>>>>>> +         */
>>>>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>>>>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>>>>>> +            perms = IOMMUF_readable;
>>>>>> +    }
>>>>>>      /*
>>>>>>       * ... or the PCIe MCFG regions.
>>>>
>>>> With this comment (which I leave alone) ...
>>>>
>>>>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>>>>>       * don't overlap with already mapped regions, thus preventing trapping.
>>>>>>       */
>>>>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>> +    else if ( is_pv_domain(d) )
>>>>>> +    {
>>>>>> +        /*
>>>>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>>>>>> +         * These shouldn't be accessed via DMA by devices.
>>>>>
>>>>> Could you expand the comment a bit to explicitly mention the reason
>>>>> why MMCFG regions shouldn't be accessible from device DMA operations?
>>>>
>>>> ... it's hard to tell what I should write here. I'd expect extended
>>>> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
>>>> comment, if only I knew what to write. "We don't want them to be
>>>> accessed that way" seems a little blunt. I could say "Devices have
>>>> other means to access PCI config space", but this not being said there
>>>> I took as being implied.
>>>
>>> But we could likely say the same about IO-APIC or HPET MMIO regions.
>>> I don't think we expect them to be accessed by devices, yet we provide
>>> them for coherency with CPU side mappings in the PV case.
>>
>> As to "say the same" - yes for the first part of my earlier reply, but
>> no for the latter part.
>
> Yes, obviously devices cannot access the HPET or the IO-APIC MMIO from
> the PCI config space :).
>
>>>> Or else what was the reason to exclude these
>>>> for PVH Dom0?
>>>
>>> The reason for PVH is because the config space is (partially) emulated
>>> for the hardware domain, so we don't allow untrapped access by the CPU
>>> either.
>>
>> Hmm, right - there's read emulation there as well, while for PV we
>> only intercept writes.
>>
>> So overall should we perhaps permit r/o access to MMCFG for PV? Of
>> course that would only end up consistent once we adjust mappings
>> dynamically when MMCFG ranges are put in use (IOW if we can't verify
>> an MMCFG range is suitably reserved, we'd not find it in
>> mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU
>> side mapping even if CPU side mappings are permitted). But for the
>> patch here it would simply mean dropping some of the code I did add
>> for v5.
>
> I would be OK with that, as I think we would then be consistent with
> how IO-APIC and HPET MMIO regions are handled.  We would have to add
> some small helper/handling in PHYSDEVOP_pci_mmcfg_reserved for PV.

Okay, I'll drop that code again then. But I'm not going to look into
making the dynamic part work, at least not within this series.

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 15:23:52 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170798-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170798: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=62044aa99bcf0a7b1581b24ad8e8f105e48fa15a
X-Osstest-Versions-That:
    ovmf=df1c7e91b46db364ba1ce5e21660987c29c35334
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 15:23:44 +0000

flight 170798 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170798/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 62044aa99bcf0a7b1581b24ad8e8f105e48fa15a
baseline version:
 ovmf                 df1c7e91b46db364ba1ce5e21660987c29c35334

Last test of basis   170786  2022-05-31 11:42:56 Z    1 days
Testing same since   170798  2022-06-01 13:11:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Gonda <pgonda@google.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   df1c7e91b4..62044aa99b  62044aa99bcf0a7b1581b24ad8e8f105e48fa15a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 15:25:25 2022
From: Jan Beulich <jbeulich@suse.com>
Date: Wed, 1 Jun 2022 17:25:16 +0200
To: Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
Message-ID: <2014c9a1-1c38-b36b-160e-f79afcdc3a10@suse.com>
In-Reply-To: <YpcwOCBEzI+qvTga@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <614413d8-5043-f0e3-929b-f161fa89bb35@suse.com>
 <YpZBjVxRdJOzJzZx@Air-de-Roger>
 <372325ed-18b6-9329-901d-6596ce6e497d@suse.com>
 <YpcwOCBEzI+qvTga@Air-de-Roger>

On 01.06.2022 11:24, Roger Pau Monné wrote:
> On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
>> On 31.05.2022 18:25, Roger Pau Monné wrote:
>>> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
>>>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>>>>      return pg;
>>>>  }
>>>>
>>>> +/*
>>>> + * Intermediate page tables which get replaced by large pages may only be
>>>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>>>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>>>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>>>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>>>> + * requiring any locking.)
>>>> + */
>>>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>>>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>>>> +
>>>> +static void free_queued_pgtables(void *arg)
>>>> +{
>>>> +    struct page_list_head *list = arg;
>>>> +    struct page_info *pg;
>>>> +    unsigned int done = 0;
>>>> +
>>>> +    while ( (pg = page_list_remove_head(list)) )
>>>> +    {
>>>> +        free_domheap_page(pg);
>>>> +
>>>> +        /* Granularity of checking somewhat arbitrary. */
>>>> +        if ( !(++done & 0x1ff) )
>>>> +             process_pending_softirqs();
>>>
>>> Hm, I'm wondering whether we really want to process pending softirqs
>>> here.
>>>
>>> Such processing will prevent the watchdog from triggering, which we
>>> likely want in production builds.  OTOH in debug builds we should make
>>> sure that free_queued_pgtables() doesn't take longer than a watchdog
>>> window, or else it's likely to cause issues to guests scheduled on
>>> this same pCPU (and calling process_pending_softirqs() will just mask
>>> it).
>>
>> Doesn't this consideration apply to about every use of the function we
>> already have in the code base?
>
> Not really, at least when used by init code or by the debug key
> handlers.  This use is IMO different from what I would expect, as it's
> a guest-triggered path that we believe does require such processing.
> Normally we would use continuations for such long-running
> guest-triggered operations.

So what do you suggest I do? Putting the call inside #ifndef CONFIG_DEBUG
is not a good option imo. Re-scheduling the tasklet wouldn't help, aiui
(it would still run again right away). Moving the work to another CPU so
this one can do other things isn't very good either - what if other CPUs
are similarly busy? That leaves making things more complicated here by
involving a timer, whose handler would re-schedule the tasklet. I have to
admit I don't like that very much either, all the more so since the use
of process_pending_softirqs() is "just in case" here anyway - if lots of
page tables were to be queued, I'd expect the queuing entity to be
preempted before a rather large pile could accumulate.

Maybe I could make iommu_queue_free_pgtable() return non-void, to instruct
the caller to bubble up a preemption notification once a certain number
of pages have been queued for freeing. This might end up intrusive ...

Jan
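
The idea floated above — have the queuing function report back after a batch so the caller can bubble up a preemption notification — might look roughly like the following in isolation. The names (`pt_queue`, `queue_free_page`, `PREEMPT_BATCH`) are hypothetical, and the list handling is deliberately simplified; Xen's real iommu_queue_free_pgtable() and page_list primitives differ.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's page and per-CPU list structures. */
struct page { struct page *next; };

struct pt_queue {
    struct page *head;
    unsigned int since_signal;  /* pages queued since last preemption signal */
};

#define PREEMPT_BATCH 512       /* mirrors the 0x1ff check granularity */

/*
 * Queue a page for deferred freeing.  Returns 1 ("please preempt") once
 * every PREEMPT_BATCH insertions, so the caller can arrange a
 * continuation instead of letting an unbounded pile accumulate; the
 * tasklet then never has more than roughly one batch to free at a time.
 */
static int queue_free_page(struct pt_queue *q, struct page *pg)
{
    pg->next = q->head;
    q->head = pg;

    if ( ++q->since_signal >= PREEMPT_BATCH )
    {
        q->since_signal = 0;
        return 1;
    }
    return 0;
}
```

The intrusiveness Jan anticipates comes from propagating that return value through every caller of the queuing function up to a point where a hypercall continuation can be created.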



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:43:57 2022
From: Stefano Stabellini <sstabellini@kernel.org>
Date: Wed, 1 Jun 2022 09:43:31 -0700 (PDT)
To: George Dunlap <George.Dunlap@citrix.com>
cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
In-Reply-To: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>

Hi all,

I would like to suggest holding the MISRA C meeting just before the
community call (7 AM California time). If that is difficult for any of
the must-have attendees, then I would ask that we reserve 30 minutes of
the community call to make progress on MISRA.

Cheers,

Stefano


On Wed, 1 Jun 2022, George Dunlap wrote:
> Hi all,
> 
> Sorry for sending this out so late; my calendar was screwed up.  Due to it being a public holiday in the UK, I propose moving the monthly community call to NEXT THURSDAY, 9 June, same time.
> 
> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/URCDNNBOVKsEK2grXf2l954a/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
> 
> Agenda items appreciated a few days before the call: please put your name besides items if you edit the document.
> 
> Note the following administrative conventions for the call:
> * Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
> * I usually send out a meeting reminder a few days before with a provisional agenda
> 
> * To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c
> 
> * If you want to be CC'ed please add or remove yourself from the sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/
> 
> Best Regards
> George
> 
> 
> 
> == Dial-in Information ==
> ## Meeting time
> 15:00 - 16:00 UTC
> Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2022&month=06&day=9&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179
> 
> 
> ## Dial in details
> Web: https://meet.jit.si/XenProjectCommunityCall
> 
> Dial-in info and pin can be found here:
> 
> https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:43:59 2022
From: Julien Grall <julien@xen.org>
Date: Wed, 1 Jun 2022 17:43:53 +0100
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
Message-ID: <847647f6-8583-ca22-3cec-90cebe36896d@xen.org>
In-Reply-To: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>

Hi Oleksandr,

On 30/05/2022 22:06, Oleksandr Tyshchenko wrote:
> diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h
> index 209d43d..df58944 100644
> --- a/xen/include/public/device_tree_defs.h
> +++ b/xen/include/public/device_tree_defs.h
> @@ -7,6 +7,7 @@
>    * onwards. Reserve a high value for the GIC phandle.

This comment needs to be updated.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:47:55 2022
X-Inumbo-ID: 93e2cb62-e1ca-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=5ysxTQwFCS3GVEBRkfwuqA9tewg5aHhB76j7ktEmfGg=;
        b=L6FimHJy5ITfh5gfMmINRQzz96RRLbml0sApFN8Z4wkmrHYUfO3A4c6FvETODLbhCs
         ijRxNNKPa6tHGktbPIBOGZDXxvbeC9jzha1lSjTkn9uKY3I9ozuHghaV/JWXBXRL/mKN
         ipffNDCoKNSjdPLqYQoagvM3CbTqvyqOPhS+H6svrp8RGj98Ovs4HwzFprknXRn8seA/
         UDiv9AK1CFlFEW8y87MsynBUnwV1ds081o0SBMBLKkWSa6Y64YFny+oIC7zQPBZVUHcG
         GXEvSYQFt8R5C3skusvqMPQbtsPlOutKJ5LT4JvwKoDfDLnyp3xSWdb8sn2GS9NBAOTS
         O6JA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=5ysxTQwFCS3GVEBRkfwuqA9tewg5aHhB76j7ktEmfGg=;
        b=yU6a66R3WA2YMBm+5cc33WA8kuBynSkFtQWHGC4T7Q5kyNO/p+EDosNs7hN0hK5LaU
         4jVQkFLZ12jzmEXkeiAsXgxUhXUsnXLzYHeL3d8j2+86JgWHEpPwqlo6uFeCC+FADq1l
         Mtqy3bIvr+HxPQwlVH6we9e28xGVp41F9JyaXbXyduYWnAQCWDEXnvkdZp3OWhbhvPup
         aaeVEmoK2UGDUr3fEZntQaLap3qP7yIDU17hV3vOC2kxVpYpjVS42BTZhEXzryhLY5PP
         LFhsm0pvs5Lkm3lRaIf8WOeRtXkEsbBUOjw5l8FTJYi2hCLXAmUQ6bQXtz96xAsTMtqW
         9aFw==
X-Gm-Message-State: AOAM532r8rqMhcndUqhHHB59vc19PIlLilGX1ipowXYpbWJATsBtjELE
	bi6TwwE4rJOuevTZCNEjSGc=
X-Google-Smtp-Source: ABdhPJwKNFicLIC5mwu7lt+zKuQXf++lAQwUeQal6wex/+NKo02XLGh2/OKF90jLQYjQaQ622TY+0g==
X-Received: by 2002:a05:6512:c08:b0:478:6a03:220d with SMTP id z8-20020a0565120c0800b004786a03220dmr37710106lfu.479.1654102071716;
        Wed, 01 Jun 2022 09:47:51 -0700 (PDT)
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
 <847647f6-8583-ca22-3cec-90cebe36896d@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <c2119161-8b94-d1f0-81bf-bac024087940@gmail.com>
Date: Wed, 1 Jun 2022 19:47:50 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <847647f6-8583-ca22-3cec-90cebe36896d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.06.22 19:43, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien



>
> On 30/05/2022 22:06, Oleksandr Tyshchenko wrote:
>> diff --git a/xen/include/public/device_tree_defs.h 
>> b/xen/include/public/device_tree_defs.h
>> index 209d43d..df58944 100644
>> --- a/xen/include/public/device_tree_defs.h
>> +++ b/xen/include/public/device_tree_defs.h
>> @@ -7,6 +7,7 @@
>>    * onwards. Reserve a high value for the GIC phandle.
>
> This comment needs to be updated.


Indeed, will do


>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:59:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 16:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340582.565656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgr-0000Cm-8V; Wed, 01 Jun 2022 16:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340582.565656; Wed, 01 Jun 2022 16:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgr-0000Cd-4r; Wed, 01 Jun 2022 16:59:25 +0000
Received: by outflank-mailman (input) for mailman id 340582;
 Wed, 01 Jun 2022 16:59:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwRgp-0008OC-Qn
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 16:59:23 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2f24dd1d-e1cc-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 18:59:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f24dd1d-e1cc-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654102762;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=BlbE29TWg9aEJQUORmjYM/XevnXuD7OicnrE0Ym51MU=;
  b=JNbjtcqWZFvy6o/cx+wxbv8WVyQvgE/078WIaH18FxC9U/VLJ6hw60nK
   weZaU77N+UfqGxz+TBZkyjfKPYjq8aJ5a82RQ8S2WgEDJQfrl5XGBU+u8
   i+0/Zl5HmaF2TDZrUT1sIDEZXlLe6gViqjcW+DJzg8JeUYawpTV11dE6H
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73044234
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,269,1647316800"; 
   d="scan'208";a="73044234"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 1/4] build: xen/include: use if_changed
Date: Wed, 1 Jun 2022 17:59:06 +0100
Message-ID: <20220601165909.46588-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220601165909.46588-1-anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Use "define" for the headers*_chk commands as otherwise the "#"
is interpreted as a comment and make can't find the end of
$(foreach,).

Add several .PRECIOUS annotations, as without them `make` deletes the
intermediate targets. This is an issue because the macro $(if_changed,)
checks whether the target exists in order to decide whether to recreate
the target.

Remove the call to `mkdir` from the commands. It isn't needed
anymore because a rune in Rules.mk creates the directory for each
of the $(targets).

Remove "export PYTHON" as it is already exported.
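
As an aside for reviewers, a minimal sketch of the "#" problem described
above (illustrative only, not part of the patch; the variable names here
are hypothetical):

```make
# With a plain single-line assignment, the "#" starts a make comment,
# and because backslash-newline continues a comment, everything up to
# and including the closing paren of a $(foreach ...) can be eaten:
#
#   cmd_broken = echo "#include <stdint.h>" | \
#       $(CC) -x c -S -o /dev/null -
#
# Inside a "define" block the "#" is kept literally, so the shell sees
# the intended "#include" line:
quiet_cmd_gen_inc = GEN     $@
define cmd_gen_inc
	echo "#include <stdint.h>" | $(CC) -x c -S -o /dev/null -
endef
```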

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 108 ++++++++++++++++++++++++++++++-------------
 1 file changed, 76 insertions(+), 32 deletions(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 03baf10efb..6d9bcc19b0 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -45,38 +45,65 @@ public-$(CONFIG_ARM) := $(wildcard $(srcdir)/public/arch-arm/*.h $(srcdir)/publi
 .PHONY: all
 all: $(addprefix $(obj)/,$(headers-y))
 
-$(obj)/compat/%.h: $(obj)/compat/%.i $(srcdir)/Makefile $(srctree)/tools/compat-build-header.py
-	$(PYTHON) $(srctree)/tools/compat-build-header.py <$< $(patsubst $(obj)/%,%,$@) >>$@.new; \
-	mv -f $@.new $@
-
-$(obj)/compat/%.i: $(obj)/compat/%.c $(srcdir)/Makefile
-	$(CPP) $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $<
-
-$(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srcdir)/Makefile $(srctree)/tools/compat-build-source.py
-	mkdir -p $(@D)
-	$(PYTHON) $(srctree)/tools/compat-build-source.py $(srcdir)/xlat.lst <$< >$@.new
-	mv -f $@.new $@
-
-$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh $(srcdir)/Makefile
-	export PYTHON=$(PYTHON); \
-	while read what name; do \
-		$(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
-	done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new
-	mv -f $@.new $@
+quiet_cmd_compat_h = GEN     $@
+cmd_compat_h = \
+    $(PYTHON) $(srctree)/tools/compat-build-header.py <$< $(patsubst $(obj)/%,%,$@) >>$@.new; \
+    mv -f $@.new $@
+
+quiet_cmd_compat_i = CPP     $@
+cmd_compat_i = $(CPP) $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $<
+
+quiet_cmd_compat_c = GEN     $@
+cmd_compat_c = \
+   $(PYTHON) $(srctree)/tools/compat-build-source.py $(srcdir)/xlat.lst <$< >$@.new; \
+   mv -f $@.new $@
+
+quiet_cmd_xlat_headers = GEN     $@
+cmd_xlat_headers = \
+    while read what name; do \
+        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
+    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
+    mv -f $@.new $@
+
+targets += $(headers-y)
+$(obj)/compat/%.h: $(obj)/compat/%.i $(srctree)/tools/compat-build-header.py FORCE
+	$(call if_changed,compat_h)
+
+.PRECIOUS: $(obj)/compat/%.i
+targets += $(patsubst %.h, %.i, $(headers-y))
+$(obj)/compat/%.i: $(obj)/compat/%.c FORCE
+	$(call if_changed,compat_i)
+
+.PRECIOUS: $(obj)/compat/%.c
+targets += $(patsubst %.h, %.c, $(headers-y))
+$(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-build-source.py FORCE
+	$(call if_changed,compat_c)
+
+targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
+$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
+	$(call if_changed,xlat_headers)
+
+quiet_cmd_xlat_lst = GEN     $@
+cmd_xlat_lst = \
+	grep -v '^[[:blank:]]*$(pound)' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' >$@.new; \
+	$(call move-if-changed,$@.new,$@)
 
 .PRECIOUS: $(obj)/compat/.xlat/%.lst
-$(obj)/compat/.xlat/%.lst: $(srcdir)/xlat.lst $(srcdir)/Makefile
-	mkdir -p $(@D)
-	grep -v '^[[:blank:]]*#' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' >$@.new
-	$(call move-if-changed,$@.new,$@)
+targets += $(patsubst compat/%.h, compat/.xlat/%.lst, $(headers-y))
+$(obj)/compat/.xlat/%.lst: $(srcdir)/xlat.lst FORCE
+	$(call if_changed,xlat_lst)
 
 xlat-y := $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,^[?!][[:blank:]]+[^[:blank:]]+[[:blank:]]+,,p' $(srcdir)/xlat.lst | uniq)
 xlat-y := $(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))
 
-$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf $(srcdir)/Makefile
-	cat $(filter %.h,$^) >$@.new
+quiet_cmd_xlat_h = GEN     $@
+cmd_xlat_h = \
+	cat $(filter %.h,$^) >$@.new; \
 	mv -f $@.new $@
 
+$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf FORCE
+	$(call if_changed,xlat_h)
+
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
 
 all: $(obj)/headers.chk $(obj)/headers99.chk $(obj)/headers++.chk
@@ -102,27 +129,31 @@ PUBLIC_C99_HEADERS := $(call public-filter-headers,public-c99-headers)
 $(src)/public/io/9pfs.h-prereq := string
 $(src)/public/io/pvcalls.h-prereq := string
 
-$(obj)/headers.chk: $(PUBLIC_ANSI_HEADERS) $(srcdir)/Makefile
+quiet_cmd_header_chk = CHK     $@
+cmd_header_chk = \
 	for i in $(filter %.h,$^); do \
 	    $(CC) -x c -ansi -Wall -Werror -include stdint.h \
 	          -S -o /dev/null $$i || exit 1; \
 	    echo $$i; \
-	done >$@.new
+	done >$@.new; \
 	mv $@.new $@
 
-$(obj)/headers99.chk: $(PUBLIC_C99_HEADERS) $(srcdir)/Makefile
-	rm -f $@.new
+quiet_cmd_headers99_chk = CHK     $@
+define cmd_headers99_chk
+	rm -f $@.new; \
 	$(foreach i, $(filter %.h,$^),                                        \
 	    echo "#include "\"$(i)\"                                          \
 	    | $(CC) -x c -std=c99 -Wall -Werror                               \
 	      -include stdint.h                                               \
 	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include $(j).h) \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;)
+	    || exit $$?; echo $(i) >> $@.new;) \
 	mv $@.new $@
+endef
 
-$(obj)/headers++.chk: $(PUBLIC_HEADERS) $(srcdir)/Makefile
-	rm -f $@.new
+quiet_cmd_headerscxx_chk = CHK     $@
+define cmd_headerscxx_chk
+	rm -f $@.new; \
 	if ! $(CXX) -v >/dev/null 2>&1; then                                  \
 	    touch $@.new;                                                     \
 	    exit 0;                                                           \
@@ -133,8 +164,21 @@ $(obj)/headers++.chk: $(PUBLIC_HEADERS) $(srcdir)/Makefile
 	      -include stdint.h -include $(srcdir)/public/xen.h               \
 	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;)
+	    || exit $$?; echo $(i) >> $@.new;) \
 	mv $@.new $@
+endef
+
+targets += headers.chk
+$(obj)/headers.chk: $(PUBLIC_ANSI_HEADERS) FORCE
+	$(call if_changed,header_chk)
+
+targets += headers99.chk
+$(obj)/headers99.chk: $(PUBLIC_C99_HEADERS) FORCE
+	$(call if_changed,headers99_chk)
+
+targets += headers++.chk
+$(obj)/headers++.chk: $(PUBLIC_HEADERS) FORCE
+	$(call if_changed,headerscxx_chk)
 
 endif
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:59:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 16:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340581.565644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgq-0008OQ-0k; Wed, 01 Jun 2022 16:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340581.565644; Wed, 01 Jun 2022 16:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgp-0008OI-Tv; Wed, 01 Jun 2022 16:59:23 +0000
Received: by outflank-mailman (input) for mailman id 340581;
 Wed, 01 Jun 2022 16:59:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwRgo-0008OC-OF
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 16:59:22 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2d384560-e1cc-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 18:59:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d384560-e1cc-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654102760;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=l6wUEv/HeH0FfQAgRb9dzcQpJAYhvYgm8+/ckh3K7VA=;
  b=F43tD4yzQ/a49hBifYYg+Or2/Zb/0F/OujhRnwb/eMk2X4L2+c1Dfdgb
   IIqzfoMlzQAJNQeX1MltPhIJaed2qHwZi98v+yamVIwPinJCV5VmBXVUU
   IkiR+UUu1N48yU1qIDWhzA3eBOBmPG+PSwcTy0H0T/BCAfGCvcCRfg6nx
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73044229
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,269,1647316800"; 
   d="scan'208";a="73044229"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>
Subject: [XEN PATCH 0/4] xen: rework compat headers generation
Date: Wed, 1 Jun 2022 17:59:05 +0100
Message-ID: <20220601165909.46588-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v1

Hi,

This patch series brings two improvements. The first is to use $(if_changed,)
in "include/Makefile" to make the generation of the compat headers less verbose
and to make the command line part of the decision to rebuild the headers.
The second is to replace one slow script with a much faster one, saving time
when generating the headers.
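
For reference, a simplified sketch of the Kbuild-style $(if_changed,)
idiom relied on throughout the series (illustrative only; the real
definitions live in the xen build system, and the rule below is a
hypothetical example, not from the patches):

```make
# $(if_changed,foo) expands $(cmd_foo) and re-runs it when a
# prerequisite is newer than the target *or* when the command line
# recorded in the hidden .<target>.cmd file differs from the one
# about to run. Rules using it depend on FORCE so the recipe is
# always considered, with the macro deciding whether to do the work:
quiet_cmd_copy = COPY    $@
cmd_copy = cp $< $@

targets += out.bin
$(obj)/out.bin: $(src)/in.bin FORCE
	$(call if_changed,copy)
```

This is why the targets must also be listed in $(targets) and kept
with .PRECIOUS: the saved command files and the targets themselves
must survive between runs for the comparison to be meaningful.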

Thanks.

Anthony PERARD (4):
  build: xen/include: use if_changed
  build: set PERL
  build: replace get-fields.sh by a perl script
  build: remove auto.conf prerequisite from compat/xlat.h target

 xen/Makefile                 |   1 +
 xen/include/Makefile         | 106 ++++---
 xen/tools/compat-xlat-header | 539 +++++++++++++++++++++++++++++++++++
 xen/tools/get-fields.sh      | 528 ----------------------------------
 4 files changed, 614 insertions(+), 560 deletions(-)
 create mode 100755 xen/tools/compat-xlat-header
 delete mode 100644 xen/tools/get-fields.sh

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:59:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 16:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340583.565667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgz-0000Yv-ME; Wed, 01 Jun 2022 16:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340583.565667; Wed, 01 Jun 2022 16:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgz-0000Yo-JF; Wed, 01 Jun 2022 16:59:33 +0000
Received: by outflank-mailman (input) for mailman id 340583;
 Wed, 01 Jun 2022 16:59:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwRgy-0008OC-J8
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 16:59:32 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3345643a-e1cc-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 18:59:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3345643a-e1cc-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654102770;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=lTcMhdu7U3PbutCiovcP5+LL1tCro+vwpmFhh06+tHU=;
  b=HDHkZMN6n30hVAZN2MKH+LXvMUEx0ozJUY54MirDnBYAKWcf5VAK5MzC
   Lz/DzZr4ocTHPOpoLs1zK8CyGYLFHocWT+6Uwnj/W+DFhg0xsrphq8vrz
   PVA6SgWsFp30PJj2wHF2LFlFz2nA0kO7agoBNxorV7ORhH0ZfsXw29zmh
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72647996
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,268,1647316800"; 
   d="scan'208";a="72647996"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 4/4] build: remove auto.conf prerequisite from compat/xlat.h target
Date: Wed, 1 Jun 2022 17:59:09 +0100
Message-ID: <20220601165909.46588-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220601165909.46588-1-anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Now that the command line generating "xlat.h" is checked on rebuild, the
header will be regenerated whenever the list of xlat headers changes
due to a change in ".config". We don't need to force a regeneration for
every change in ".config".

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index b7e7148665..937a8bc884 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -99,7 +99,7 @@ cmd_xlat_h = \
 	cat $(filter %.h,$^) >$@.new; \
 	mv -f $@.new $@
 
-$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf FORCE
+$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) FORCE
 	$(call if_changed,xlat_h)
 
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:59:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 16:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340584.565672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRh0-0000c4-0K; Wed, 01 Jun 2022 16:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340584.565672; Wed, 01 Jun 2022 16:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRgz-0000ah-RH; Wed, 01 Jun 2022 16:59:33 +0000
Received: by outflank-mailman (input) for mailman id 340584;
 Wed, 01 Jun 2022 16:59:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwRgy-0000Xw-Rb
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 16:59:32 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3355f586-e1cc-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 18:59:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3355f586-e1cc-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654102771;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=XFVVIbvewXpq18wLagkSmqKQFGebkO+PaAyJtLO/9K8=;
  b=VF+AqHuON8WHLriyTwRdVXR3ra0uqaQ0z3ibpz++GYpR2cB6LM4Gk1DB
   zjpUn7xWn4u1O/qg8rRb5OS0SlST6OKjC8BZc7AfNnxlBckTHdh2EaQQo
   W3r6FT1c+YcGR0hcfvTDyp44iz5CRYKE4ic3HxmXpQMJFhDeSNsBurloC
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72494142
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,269,1647316800"; 
   d="scan'208";a="72494142"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 2/4] build: set PERL
Date: Wed, 1 Jun 2022 17:59:07 +0100
Message-ID: <20220601165909.46588-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220601165909.46588-1-anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We are going to use it in the next patch.
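For context, `?=` in GNU make assigns only when the variable is not
already set, so `export PERL ?= perl` still lets callers override the
interpreter from the environment or the command line. A minimal
standalone sketch of that behaviour (the temp file name is illustrative):

```shell
# Write a tiny makefile using the same conditional assignment:
printf 'PERL ?= perl\nall:\n\t@echo $(PERL)\n' > /tmp/perl-demo.mk

# Default: the ?= assignment takes effect.
make -s -f /tmp/perl-demo.mk            # prints: perl

# A command-line override wins over ?=.
make -s -f /tmp/perl-demo.mk PERL=perl5 # prints: perl5
```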

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/Makefile b/xen/Makefile
index 82f5310b12..a6650a2acc 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
 export PYTHON		?= $(PYTHON_INTERPRETER)
 
 export CHECKPOLICY	?= checkpolicy
+export PERL		?= perl
 
 $(if $(filter __%, $(MAKECMDGOALS)), \
     $(error targets prefixed with '__' are only for internal use))
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 16:59:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 16:59:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340585.565689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRh3-00019Z-EW; Wed, 01 Jun 2022 16:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340585.565689; Wed, 01 Jun 2022 16:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwRh3-00019G-7p; Wed, 01 Jun 2022 16:59:37 +0000
Received: by outflank-mailman (input) for mailman id 340585;
 Wed, 01 Jun 2022 16:59:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CPHY=WI=citrix.com=prvs=144c139f6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nwRh0-0000Xw-V5
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 16:59:35 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3519baf0-e1cc-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 18:59:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3519baf0-e1cc-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654102772;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=WoBfmEao3DrJCNeVSjlgM3yjFCOP00okKErgLU5uy7I=;
  b=AYkU6r0jO3EA+mYR3hGMK0uv7pgVFrJ9sCZDMFNx0HxbVvIbtlXEoz6s
   QTgWcPYEZ9gTYj1JzrCXtMY2Ttjbq7gjejF6g1qPId+nJHVFajDdFtk0U
   md4zn9e/Cn0axz3zfCJG5U7DmlfcKO3+OidxK6qLTzj2sMVbOAfnCaMse
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72494147
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,269,1647316800"; 
   d="scan'208";a="72494147"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 3/4] build: replace get-fields.sh by a perl script
Date: Wed, 1 Jun 2022 17:59:08 +0100
Message-ID: <20220601165909.46588-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220601165909.46588-1-anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The get-fields.sh script, which generates all of the
include/compat/.xlat/*.h headers, is quite slow. For example, it takes
nearly 3 seconds to generate platform.h on a recent machine, and 2.3
seconds for memory.h.

Since it is only text processing, rewriting the mix of shell/sed/python
into a single Perl script makes the generation of those files a lot
faster.

I tried to keep the Perl code looking similar to the shell version, to
ease review. As a result, some of the Perl code might look odd or could
be written better.
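The text processing in question is essentially tokenisation: both the
shell and the Perl version pad punctuation with spaces and then split on
whitespace. A rough standalone sketch of that step, using an
illustrative input line:

```shell
# Pad the same punctuation class the script handles ( ] [ , ; : { } )
# with spaces, then emit one token per line.
printf 'struct compat_foo { int a[2]; };\n' \
    | sed 's/\([][,;:{}]\)/ \1 /g' \
    | tr -s ' ' '\n'
# Tokens, one per line: struct compat_foo { int a [ 2 ] ; } ;
```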

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile         |   6 +-
 xen/tools/compat-xlat-header | 539 +++++++++++++++++++++++++++++++++++
 xen/tools/get-fields.sh      | 528 ----------------------------------
 3 files changed, 541 insertions(+), 532 deletions(-)
 create mode 100755 xen/tools/compat-xlat-header
 delete mode 100644 xen/tools/get-fields.sh

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 6d9bcc19b0..b7e7148665 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -60,9 +60,7 @@ cmd_compat_c = \
 
 quiet_cmd_xlat_headers = GEN     $@
 cmd_xlat_headers = \
-    while read what name; do \
-        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
-    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
+    $(PERL) $(srctree)/tools/compat-xlat-header $< $(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst > $@.new; \
     mv -f $@.new $@
 
 targets += $(headers-y)
@@ -80,7 +78,7 @@ $(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-
 	$(call if_changed,compat_c)
 
 targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
-$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
+$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header FORCE
 	$(call if_changed,xlat_headers)
 
 quiet_cmd_xlat_lst = GEN     $@
diff --git a/xen/tools/compat-xlat-header b/xen/tools/compat-xlat-header
new file mode 100755
index 0000000000..f1f42a9dde
--- /dev/null
+++ b/xen/tools/compat-xlat-header
@@ -0,0 +1,539 @@
+#!/usr/bin/perl -w
+
+use strict;
+use warnings;
+
+open COMPAT_LIST, "<$ARGV[1]" or die "can't open $ARGV[1], $!";
+open HEADER, "<$ARGV[0]" or die "can't open $ARGV[0], $!";
+
+my @typedefs;
+
+my @header_tokens;
+while (<HEADER>) {
+    next if m/^\s*#.*/;
+    s/([\]\[,;:{}])/ $1 /g;
+    s/^\s+//;
+    push(@header_tokens, split(/\s+/));
+}
+
+sub get_fields {
+    my ($looking_for) = @_;
+    my $level = 1;
+    my $aggr = 0;
+    my ($name, @fields);
+
+    foreach (@header_tokens) {
+        if (/^(struct|union)$/) {
+            unless ($level != 1) {
+                $aggr = 1;
+                @fields = ();
+                $name = '';
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+        } elsif ($_ eq '}') {
+            $level--;
+            if ($level == 1 and $name eq $looking_for) {
+                push (@fields, $_);
+                return @fields;
+            }
+        } elsif (/^[a-zA-Z_].*/) {
+            unless ($aggr == 0 or $name ne "") {
+                $name = $_;
+            }
+        }
+        unless ($aggr == 0) {
+            push (@fields, $_);
+        }
+    }
+    return ();
+}
+
+sub get_typedefs {
+    my $level = 1;
+    my $state = 0;
+    my @typedefs;
+    foreach (@_) {
+        if ($_ eq 'typedef') {
+            unless ($level != 1) {
+                $state = 1;
+            }
+        } elsif (m/^COMPAT_HANDLE\(.*\)$/) {
+            unless ($level != 1 or $state != 1) {
+                $state = 2;
+            }
+        } elsif (m/^[\[\{]$/) {
+            $level++;
+        } elsif (m/^[\]\}]$/) {
+            $level--;
+        } elsif ($_ eq ';') {
+            unless  ($level != 1) {
+                $state = 0;
+            }
+        } elsif (m/^[a-zA-Z_].*$/) {
+            unless ($level != 1 or $state != 2) {
+                push (@typedefs, $_);
+            }
+        }
+    }
+    return @typedefs;
+}
+
+sub build_enums {
+    my ($name, @tokens) = @_;
+
+    my $level = 1;
+    my $kind = '';
+    my $named = '';
+    my (@fields, @members, $id);
+
+    foreach (@tokens) {
+        if (m/^(struct|union)$/) {
+            unless ($level != 2) {
+                @fields = ('');
+            }
+            $kind="$_;$kind";
+        } elsif ($_ eq '{') {
+            $level++;
+        } elsif ($_ eq '}') {
+            $level--;
+            if ($level == 1) {
+                my $subkind = $kind;
+                $subkind =~ s/;.*//;
+                if ($subkind eq 'union') {
+                    print "\nenum XLAT_$name {\n";
+                    foreach (@members) {
+                        print "    XLAT_${name}_$_,\n";
+                    }
+                    print "};\n";
+                }
+                return;
+            } elsif ($level == 2) {
+                $named = '?';
+            }
+        } elsif (/^[a-zA-Z_].*$/) {
+            $id = $_;
+            my $k = $kind;
+            $k =~ s/.*?;//;
+            if ($named ne '' and $k ne '') {
+                shift @fields if @fields > 0 and $fields[0] eq '';
+                build_enums("${name}_$_", @fields);
+                $named = '!';
+            }
+        } elsif ($_ eq ',') {
+            unless ($level != 2) {
+                push (@members, $id);
+            }
+        } elsif ($_ eq ';') {
+            unless ($level != 2) {
+                push (@members, $id);
+            }
+            unless ($named eq '') {
+                $kind =~ s/.*?;//;
+            }
+            $named = '';
+        }
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+}
+
+sub handle_field {
+    my ($prefix, $name, $id, $type, @fields) = @_;
+
+    if (@fields == 0) {
+        print " \\\n";
+        if ($type eq '') {
+            print "$prefix\(_d_)->$id = (_s_)->$id;"
+        } else {
+            my $k = $id;
+            $k =~ s/\./_/g;
+            print "${prefix}XLAT_${name}_HNDL_${k}(_d_, _s_);";
+        }
+    } elsif ("@fields" !~ m/[{}]/) {
+        my $tag = "@fields";
+        $tag =~ s/\s*(struct|union)\s+(compat_)?(\w+)\s.*/$3/;
+        print " \\\n";
+        print "${prefix}XLAT_$tag(&(_d_)->$id, &(_s_)->$id);"
+    } else {
+        my $func_id = $id;
+        my @func_tokens = @fields;
+        my $kind = '';
+        my $array = "";
+        my $level = 1;
+        my $arrlvl = 1;
+        my $array_type = '';
+        my $id = '';
+        my $type = '';
+        @fields = ();
+        foreach (@func_tokens) {
+            if (/^(struct|union)$/) {
+                unless ($level != 2) {
+                    @fields = ('');
+                }
+                if ($level == 1) {
+                    $kind = $_;
+                    if ($kind eq 'union') {
+                        my $tmp = $func_id;
+                        $tmp =~ s/\./_/g;
+                        print " \\\n";
+                        print  "${prefix}switch ($tmp) {"
+                    }
+                }
+            } elsif ($_ eq '{') {
+                $level++;
+                $id = '';
+            } elsif ($_ eq '}') {
+                $level--;
+                $id = '';
+                if ($level == 1 and $kind eq 'union') {
+                    print " \\\n";
+                    print "$prefix}"
+                }
+            } elsif ($_ eq '[') {
+                if ($level != 2 or $arrlvl != 1) {
+                    # skip
+                } elsif ($array eq '') {
+                    $array = ' ';
+                } else {
+                    $array = "$array;";
+                }
+                $arrlvl++;
+            } elsif ($_ eq ']') {
+                $arrlvl--;
+            } elsif (m/^COMPAT_HANDLE\((.*)\)$/) {
+                if ($level == 2 and $id eq '') {
+                    $type = $1;
+                    $type =~ s/^compat_//;
+                }
+            } elsif ($_ eq "compat_domain_handle_t") {
+                if ($level == 2 and $id eq '') {
+                    $array_type = $_;
+                }
+            } elsif (m/^[a-zA-Z_].*$/) {
+                if ($id eq '' and $type eq '' and $array_type eq '') {
+                    foreach $id (@typedefs) {
+                        unless ($id ne $_) {
+                            $type = $id;
+                        }
+                    }
+                    if ($type eq '') {
+                        $id = $_;
+                    } else {
+                        $id = '';
+                    }
+                } else {
+                    $id = $_;
+                }
+            } elsif (m/^[,;]$/) {
+                if ($level == 2 and $id !~ /^_pad\d*$/) {
+                    if ($kind eq 'union') {
+                        my $tmp = "$func_id.$id";
+                        $tmp =~ s/\./_/g;
+                        print " \\\n";
+                        print  "${prefix}case XLAT_${name}_$tmp:";
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_field("$prefix    ", $name,  "$func_id.$id", $type, @fields);
+                    } elsif ($array eq '' and $array_type eq '') {
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_field($prefix, $name, "$func_id.$id", $type, @fields);
+                    } elsif ($array eq '') {
+                        copy_array("    ", "$func_id.$id");
+                    } else {
+                        $array =~ s/^.*?;//;
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_array($prefix, $name, "$func_id.$id", $array, $type, @fields);
+                    }
+                    unless ($_ ne ';') {
+                        @fields = ();
+                        $id = '';
+                        $type = '';
+                    }
+                    $array = '';
+                    if ($kind eq 'union') {
+                        print " \\\n";
+                        print "$prefix    break;";
+                    }
+                }
+            } else {
+                if ($array ne '') {
+                    $array = "$array $_";
+                }
+            }
+            unless (@fields == 0) {
+                push (@fields, $_);
+            }
+        }
+    }
+}
+
+sub copy_array {
+    my ($prefix, $id) = @_;
+
+    print " \\\n";
+    print  "${prefix}if ((_d_)->$id != (_s_)->$id) \\\n";
+    print  "$prefix    memcpy((_d_)->$id, (_s_)->$id, sizeof((_d_)->$id));";
+}
+
+sub handle_array {
+    my ($prefix, $name, $id, $array, $type, @fields) = @_;
+
+    my $i = $array;
+    $i =~ s/[^;]//g;
+    $i = length($i);
+    $i = "i$i";
+
+    print " \\\n";
+    print "$prefix\{ \\\n";
+    print "$prefix    unsigned int $i; \\\n";
+    my $tmp = $array;
+    $tmp =~ s/;.*$//;
+    $tmp =~ s/^\s*(.*)\s*$/$1/;
+    print "$prefix    for ($i = 0; $i < $tmp; ++$i) {";
+    if ($array !~ m/^.*?;/) {
+        handle_field("$prefix        ", $name, "$id\[$i]", $type, @fields);
+    } else {
+        handle_array("$prefix        " ,$name, "$id\[$i]", $', $type, @fields);
+    }
+    print " \\\n";
+    print "$prefix    } \\\n";
+    print "$prefix\}";
+}
+
+sub build_body {
+    my ($name, @tokens) = @_;
+    my $level = 1;
+    my $id = '';
+    my $array = '';
+    my $arrlvl = 1;
+    my $array_type = '';
+    my $type = '';
+    my @fields;
+
+    printf "\n#define XLAT_$name(_d_, _s_) do {";
+
+    foreach (@tokens) {
+        if (/^(struct|union)$/) {
+            unless ($level != 2) {
+                @fields = ('');
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+            $id = '';
+        } elsif ($_ eq '}') {
+            $level--;
+            $id = '';
+        } elsif ($_ eq '[') {
+            if ($level != 2 or $arrlvl != 1) {
+                # skip
+            } elsif ($array eq '') {
+                $array = ' ';
+            } else {
+                $array = "$array;";
+            }
+            $arrlvl++;
+        } elsif ($_ eq ']') {
+            $arrlvl--;
+        } elsif (m/^COMPAT_HANDLE\((.*)\)$/) {
+            if ($level == 2 and $id eq '') {
+                $type = $1;
+                $type =~ s/^compat_//;
+            }
+        } elsif ($_ eq "compat_domain_handle_t") {
+            if ($level == 2 and $id eq '') {
+                $array_type = $_;
+            }
+        } elsif (m/^[a-zA-Z_].*$/) {
+            if ($array ne '') {
+                $array = "$array $_";
+            } elsif ($id eq '' and $type eq '' and $array_type eq '') {
+                foreach $id (@typedefs) {
+                    unless ($id ne $_) {
+                        $type = $id;
+                    }
+                }
+                if ($type eq '') {
+                    $id = $_;
+                } else {
+                    $id = '';
+                }
+            } else {
+                $id = $_;
+            }
+        } elsif (m/^[,;]$/) {
+            if ($level == 2 and $id !~ /^_pad\d*$/) {
+                if ($array eq '' and $array_type eq '') {
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    handle_field("    ", $name, $id, $type, @fields);
+                } elsif ($array eq '') {
+                    copy_array("    ", $id);
+                } else {
+                    my $tmp = $array;
+                    $tmp =~ s/^.*?;//;
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    handle_array("    ", $name, $id, $tmp, $type, @fields);
+                }
+                unless ($_ ne ';') {
+                    @fields = ();
+                    $id = '';
+                    $type = '';
+                }
+                $array = '';
+            }
+        } else {
+            if ($array ne '') {
+                $array = "$array $_";
+            }
+        }
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+    print " \\\n} while (0)\n";
+}
+
+sub check_field {
+    my ($kind, $name, $field, @extrafields) = @_;
+
+    if ("@extrafields" !~ m/[{}]/) {
+        print "; \\\n";
+        if (@extrafields != 0) {
+            foreach (@extrafields) {
+                if (m/^(struct|union)$/) {
+                    # skip
+                } elsif (m/^[a-zA-Z_].*/) {
+                    s/^xen_//;
+                    print "    CHECK_$_";
+                    last;
+                } else {
+                    die "Malformed compound declaration: '$_'";
+                }
+            }
+        } elsif ($field !~ m/\./) {
+            print "    CHECK_FIELD_($kind, $name, $field)";
+        } else {
+            my $n = $field =~ s/\./, /g;
+            print  "    CHECK_SUBFIELD_${n}_($kind, $name, $field)";
+        }
+    } else {
+        my $level = 1;
+        my @fields = ();
+        my $id = '';
+
+        foreach (@extrafields) {
+            if (m/^(struct|union)$/) {
+                unless ($level != 2) {
+                    @fields = ('');
+                }
+            } elsif ($_ eq '{') {
+                $level++;
+                $id = '';
+            } elsif ($_ eq '}') {
+                $level--;
+                $id = '';
+            } elsif (/^compat_.*_t$/) {
+                if ($level == 2) {
+                    @fields = ('');
+                    s/_t$//;
+                    s/^compat_//;
+                }
+            } elsif (/^evtchn_.*_compat_t$/) {
+                if ($level == 2 and $_ ne "evtchn_port_compat_t") {
+                    @fields = ('');
+                    s/_compat_t$//;
+                }
+            } elsif (/^[a-zA-Z_].*$/) {
+                $id = $_;
+            } elsif (/^[,;]$/) {
+                if ($level == 2 and $id !~ /^_pad\d*$/) {
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    check_field($kind, $name, "$field.$id", @fields);
+                    unless ($_ ne ";") {
+                        @fields = ();
+                        $id = '';
+                    }
+                }
+            }
+            unless (@fields == 0) {
+                push (@fields, $_);
+            }
+        }
+    }
+}
+
+sub build_check {
+    my ($name, @tokens) = @_;
+    my $level = 1;
+    my (@fields, $kind, $id);
+    my $arrlvl = 1;
+
+    print "\n";
+    print "#define CHECK_$name \\\n";
+
+    foreach (@tokens) {
+        if (/^(struct|union)$/) {
+            if ($level == 1) {
+                $kind = $_;
+                print "    CHECK_SIZE_($kind, $name)";
+            } elsif ($level == 2) {
+                @fields = ('');
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+            $id = '';
+        } elsif ($_ eq '}') {
+            $level--;
+            $id = '';
+        } elsif ($_ eq '[') {
+            $arrlvl++;
+        } elsif ($_ eq ']') {
+            $arrlvl--;
+        } elsif (/^compat_.*_t$/) {
+            if ($level == 2 and $_ ne "compat_argo_port_t") {
+                @fields = ('');
+                s/_t$//;
+                s/^compat_//;
+            }
+        } elsif (/^[a-zA-Z_].*$/) {
+            unless ($level != 2 or $arrlvl != 1) {
+                $id = $_;
+            }
+        } elsif (/^[,;]$/) {
+            if ($level == 2 and $id !~ /^_pad\d*$/) {
+                shift @fields if @fields > 0 and $fields[0] eq '';
+                check_field($kind, $name, $id, @fields);
+                unless ($_ ne ";") {
+                    @fields = ();
+                    $id = '';
+                }
+            }
+        }
+
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+    print "\n";
+}
+
+@typedefs = get_typedefs(@header_tokens);
+
+while (<COMPAT_LIST>) {
+    my ($what, $name) = split(/\s+/, $_);
+    $name =~ s/^xen//;
+
+    my @fields = get_fields("compat_$name");
+    if (@fields == 0) {
+        die "Fields of 'compat_$name' not found in '$ARGV[0]'";
+    }
+
+    if ($what eq "!") {
+        build_enums($name, @fields);
+        build_body($name, @fields);
+    } elsif ($what eq "?") {
+        build_check($name, @fields);
+    } else {
+        die "Invalid translation indicator: '$what'";
+    }
+}
diff --git a/xen/tools/get-fields.sh b/xen/tools/get-fields.sh
deleted file mode 100644
index 002db2093f..0000000000
--- a/xen/tools/get-fields.sh
+++ /dev/null
@@ -1,528 +0,0 @@
-test -n "$1" -a -n "$2" -a -n "$3"
-set -ef
-
-SED=sed
-if test -x /usr/xpg4/bin/sed; then
-	SED=/usr/xpg4/bin/sed
-fi
-if test -z ${PYTHON}; then
-	PYTHON=`/usr/bin/env python`
-fi
-if test -z ${PYTHON}; then
-	echo "Python not found"
-	exit 1
-fi
-
-get_fields ()
-{
-	local level=1 aggr=0 name= fields=
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 1 || aggr=1 fields= name=
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 -a $name = $1 ]
-			then
-				echo "$fields }"
-				return 0
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $aggr = 0 -o -n "$name" || name="$token"
-			;;
-		esac
-		test $aggr = 0 || fields="$fields $token"
-	done
-}
-
-get_typedefs ()
-{
-	local level=1 state=
-	for token in $1
-	do
-		case "$token" in
-		typedef)
-			test $level != 1 || state=1
-			;;
-		COMPAT_HANDLE\(*\))
-			test $level != 1 -o "$state" != 1 || state=2
-			;;
-		[\{\[])
-			level=$(expr $level + 1)
-			;;
-		[\}\]])
-			level=$(expr $level - 1)
-			;;
-		";")
-			test $level != 1 || state=
-			;;
-		[a-zA-Z_]*)
-			test $level != 1 -o "$state" != 2 || echo "$token"
-			;;
-		esac
-	done
-}
-
-build_enums ()
-{
-	local level=1 kind= fields= members= named= id= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			kind="$token;$kind"
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 ]
-			then
-				if [ "${kind%%;*}" = union ]
-				then
-					echo
-					echo "enum XLAT_$1 {"
-					for m in $members
-					do
-						echo "    XLAT_${1}_$m,"
-					done
-					echo "};"
-				fi
-				return 0
-			elif [ $level = 2 ]
-			then
-				named='?'
-			fi
-			;;
-		[a-zA-Z]*)
-			id=$token
-			if [ -n "$named" -a -n "${kind#*;}" ]
-			then
-				build_enums ${1}_$token "$fields"
-				named='!'
-			fi
-			;;
-		",")
-			test $level != 2 || members="$members $id"
-			;;
-		";")
-			test $level != 2 || members="$members $id"
-			test -z "$named" || kind=${kind#*;}
-			named=
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-}
-
-handle_field ()
-{
-	if [ -z "$5" ]
-	then
-		echo " \\"
-		if [ -z "$4" ]
-		then
-			printf %s "$1(_d_)->$3 = (_s_)->$3;"
-		else
-			printf %s "$1XLAT_${2}_HNDL_$(echo $3 | $SED 's,\.,_,g')(_d_, _s_);"
-		fi
-	elif [ -z "$(echo "$5" | $SED 's,[^{}],,g')" ]
-	then
-		local tag=$(echo "$5" | ${PYTHON} -c '
-import re,sys
-for line in sys.stdin.readlines():
-    sys.stdout.write(re.subn(r"\s*(struct|union)\s+(compat_)?(\w+)\s.*", r"\3", line)[0].rstrip() + "\n")
-')
-		echo " \\"
-		printf %s "${1}XLAT_$tag(&(_d_)->$3, &(_s_)->$3);"
-	else
-		local level=1 kind= fields= id= array= arrlvl=1 array_type= type= token
-		for token in $5
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				if [ $level = 1 ]
-				then
-					kind=$token
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}switch ($(echo $3 | $SED 's,\.,_,g')) {"
-					fi
-				fi
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				if [ $level = 1 -a $kind = union ]
-				then
-					echo " \\"
-					printf %s "$1}"
-				fi
-				;;
-			"[")
-				if [ $level != 2 -o $arrlvl != 1 ]
-				then
-					:
-				elif [ -z "$array" ]
-				then
-					array=" "
-				else
-					array="$array;"
-				fi
-				arrlvl=$(expr $arrlvl + 1)
-				;;
-			"]")
-				arrlvl=$(expr $arrlvl - 1)
-				;;
-			COMPAT_HANDLE\(*\))
-				if [ $level = 2 -a -z "$id" ]
-				then
-					type=${token#COMPAT_HANDLE?}
-					type=${type%?}
-					type=${type#compat_}
-				fi
-				;;
-			compat_domain_handle_t)
-				if [ $level = 2 -a -z "$id" ]
-				then
-					array_type=$token
-				fi
-				;;
-			[a-zA-Z]*)
-				if [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-				then
-					for id in $typedefs
-					do
-						test $id != "$token" || type=$id
-					done
-					if [ -z "$type" ]
-					then
-						id=$token
-					else
-						id=
-					fi
-				else
-					id=$token
-				fi
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}case XLAT_${2}_$(echo $3.$id | $SED 's,\.,_,g'):"
-						handle_field "$1    " $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" -a -z "$array_type" ]
-					then
-						handle_field "$1" $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" ]
-					then
-						copy_array "    " $3.$id
-					else
-						handle_array "$1" $2 $3.$id "${array#*;}" "$type" "$fields"
-					fi
-					test "$token" != ";" || fields= id= type=
-					array=
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "$1    break;"
-					fi
-				fi
-				;;
-			*)
-				if [ -n "$array" ]
-				then
-					array="$array $token"
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-copy_array ()
-{
-	echo " \\"
-	echo "${1}if ((_d_)->$2 != (_s_)->$2) \\"
-	printf %s "$1    memcpy((_d_)->$2, (_s_)->$2, sizeof((_d_)->$2));"
-}
-
-handle_array ()
-{
-	local i="i$(echo $4 | $SED 's,[^;], ,g' | wc -w | $SED 's,[[:space:]]*,,g')"
-	echo " \\"
-	echo "$1{ \\"
-	echo "$1    unsigned int $i; \\"
-	printf %s "$1    for ($i = 0; $i < "${4%%;*}"; ++$i) {"
-	if [ "$4" = "${4#*;}" ]
-	then
-		handle_field "$1        " $2 $3[$i] "$5" "$6"
-	else
-		handle_array "$1        " $2 $3[$i] "${4#*;}" "$5" "$6"
-	fi
-	echo " \\"
-	echo "$1    } \\"
-	printf %s "$1}"
-}
-
-build_body ()
-{
-	echo
-	printf %s "#define XLAT_$1(_d_, _s_) do {"
-	local level=1 fields= id= array= arrlvl=1 array_type= type= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			if [ $level != 2 -o $arrlvl != 1 ]
-			then
-				:
-			elif [ -z "$array" ]
-			then
-				array=" "
-			else
-				array="$array;"
-			fi
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		COMPAT_HANDLE\(*\))
-			if [ $level = 2 -a -z "$id" ]
-			then
-				type=${token#COMPAT_HANDLE?}
-				type=${type%?}
-				type=${type#compat_}
-			fi
-			;;
-		compat_domain_handle_t)
-			if [ $level = 2 -a -z "$id" ]
-			then
-				array_type=$token
-			fi
-			;;
-		[a-zA-Z_]*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			elif [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-			then
-				for id in $typedefs
-				do
-					test $id != "$token" || type=$id
-				done
-				if [ -z "$type" ]
-				then
-					id=$token
-				else
-					id=
-				fi
-			else
-				id=$token
-			fi
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				if [ -z "$array" -a -z "$array_type" ]
-				then
-					handle_field "    " $1 $id "$type" "$fields"
-				elif [ -z "$array" ]
-				then
-					copy_array "    " $id
-				else
-					handle_array "    " $1 $id "${array#*;}" "$type" "$fields"
-				fi
-				test "$token" != ";" || fields= id= type=
-				array=
-			fi
-			;;
-		*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo " \\"
-	echo "} while (0)"
-}
-
-check_field ()
-{
-	if [ -z "$(echo "$4" | $SED 's,[^{}],,g')" ]
-	then
-		echo "; \\"
-		local n=$(echo $3 | $SED 's,[^.], ,g' | wc -w | $SED 's,[[:space:]]*,,g')
-		if [ -n "$4" ]
-		then
-			for n in $4
-			do
-				case $n in
-				struct|union)
-					;;
-				[a-zA-Z_]*)
-					printf %s "    CHECK_${n#xen_}"
-					break
-					;;
-				*)
-					echo "Malformed compound declaration: '$n'" >&2
-					exit 1
-					;;
-				esac
-			done
-		elif [ $n = 0 ]
-		then
-			printf %s "    CHECK_FIELD_($1, $2, $3)"
-		else
-			printf %s "    CHECK_SUBFIELD_${n}_($1, $2, $(echo $3 | $SED 's!\.!, !g'))"
-		fi
-	else
-		local level=1 fields= id= token
-		for token in $4
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				;;
-			compat_*_t)
-				if [ $level = 2 ]
-				then
-					fields=" "
-					token="${token%_t}"
-					token="${token#compat_}"
-				fi
-				;;
-			evtchn_*_compat_t)
-				if [ $level = 2 -a $token != evtchn_port_compat_t ]
-				then
-					fields=" "
-					token="${token%_compat_t}"
-				fi
-				;;
-			[a-zA-Z]*)
-				id=$token
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					check_field $1 $2 $3.$id "$fields"
-					test "$token" != ";" || fields= id=
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-build_check ()
-{
-	echo
-	echo "#define CHECK_$1 \\"
-	local level=1 fields= kind= id= arrlvl=1 token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			if [ $level = 1 ]
-			then
-				kind=$token
-				printf %s "    CHECK_SIZE_($kind, $1)"
-			elif [ $level = 2 ]
-			then
-				fields=" "
-			fi
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		compat_*_t)
-			if [ $level = 2 -a $token != compat_argo_port_t ]
-			then
-				fields=" "
-				token="${token%_t}"
-				token="${token#compat_}"
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $level != 2 -o $arrlvl != 1 || id=$token
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				check_field $kind $1 $id "$fields"
-				test "$token" != ";" || fields= id=
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo ""
-}
-
-list="$($SED -e 's,^[[:space:]]#.*,,' -e 's!\([]\[,;:{}]\)! \1 !g' $3)"
-fields="$(get_fields $(echo $2 | $SED 's,^compat_xen,compat_,') "$list")"
-if [ -z "$fields" ]
-then
-	echo "Fields of '$2' not found in '$3'" >&2
-	exit 1
-fi
-name=${2#compat_}
-name=${name#xen}
-case "$1" in
-"!")
-	typedefs="$(get_typedefs "$list")"
-	build_enums $name "$fields"
-	build_body $name "$fields"
-	;;
-"?")
-	build_check $name "$fields"
-	;;
-*)
-	echo "Invalid translation indicator: '$1'" >&2
-	exit 1
-	;;
-esac
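As context for the script being removed above: the heart of get-fields.sh is a token loop driven by the sed pass near the end, which pads brackets, braces, commas and semicolons with spaces so a plain `for token in $list` loop sees them as separate tokens. A standalone sketch of that tokenizer (the "struct demo" input is invented for illustration):

```shell
# Sketch of the tokenizer at the bottom of get-fields.sh: the sed invocation
# pads every ] \ [ , ; : { } character with spaces so that word splitting
# turns the declaration into a stream of individual tokens.
SED=sed
list="$(printf '%s\n' 'struct demo { int a[4]; };' |
        $SED -e 's,^[[:space:]]#.*,,' -e 's!\([]\[,;:{}]\)! \1 !g')"
for token in $list
do
    printf '<%s> ' "$token"
done
echo
# prints: <struct> <demo> <{> <int> <a> <[> <4> <]> <;> <}> <;>
```

The build_body/build_check loops above then dispatch on exactly these tokens (`"{"`, `"}"`, `"["`, `[\,\;]`, identifiers) to track nesting level and collect field names.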
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:17:52 2022
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [XEN PATCH 0/4] xen: rework compat headers generation
Thread-Topic: [XEN PATCH 0/4] xen: rework compat headers generation
Thread-Index: AQHYddjyz0tXrY3j20Cs3apay7U+za06y0cA
Date: Wed, 1 Jun 2022 17:17:36 +0000
Message-ID: <5e94648f-ba89-3691-0d80-1a8cca588ca6@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
In-Reply-To: <20220601165909.46588-1-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 01/06/2022 17:59, Anthony PERARD wrote:
> Patch series available in this git branch:
> https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v1
>
> Hi,
>
> This patch series is about 2 improvements. First one is to use $(if_changed, )
> in "include/Makefile" to make the generation of the compat headers less verbose
> and to have the command line part of the decision to rebuild the headers.
> Second one is to replace one slow script by a much faster one, and save time
> when generating the headers.
>
> Thanks.
>
> Anthony PERARD (4):
>   build: xen/include: use if_changed
>   build: set PERL
>   build: replace get-fields.sh by a perl script
>   build: remove auto.conf prerequisite from compat/xlat.h target
>
>  xen/Makefile                 |   1 +
>  xen/include/Makefile         | 106 ++++---
>  xen/tools/compat-xlat-header | 539 +++++++++++++++++++++++++++++++++++
>  xen/tools/get-fields.sh      | 528 ----------------------------------

Excellent.  I was planning to ask you about this.  (I also need to
refresh my half series cleaning up other bits of the build.)

One trivial observation is that it would probably be nicer to name the
script with a .pl extension.

Any numbers on what the speedup in patch 3 is?

Are the generated compat headers identical before and after this
series?  If yes, I'm very tempted to ack and commit it straight away.

~Andrew
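For readers unfamiliar with the $(if_changed) idiom the cover letter refers to, here is a rough shell rendition of the idea (an illustrative sketch, not the actual Kbuild implementation): regenerate a target when either its inputs or the generating command line itself changed, by recording the command in a stamp file.

```shell
# Sketch of the $(if_changed,...) idea: re-run the generation command when
# the target is missing or when the saved command line differs from the
# current one. Function and file names here are made up for illustration.
gen_header() {
    target=$1 cmd=$2
    old=$(cat "$target.cmd" 2>/dev/null || :)
    if [ ! -e "$target" ] || [ "$cmd" != "$old" ]
    then
        eval "$cmd"
        printf '%s' "$cmd" > "$target.cmd"
        echo "regenerated $target"
    fi
}
```

Running `gen_header xlat.h "..."` twice with the same command does the work only once; changing the command (not just the inputs) triggers a rebuild, which is the "command line part of the decision" mentioned above.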


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:18:19 2022
Message-ID: <d6369ba7-4794-18dd-edcf-ae68ef927b97@xen.org>
Date: Wed, 1 Jun 2022 18:18:15 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v2 1/2] docs/misra: introduce rules.rst
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com, roger.pau@citrix.com,
 Bertrand.Marquis@arm.com, George.Dunlap@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
 <20220601014402.2293524-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220601014402.2293524-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/06/2022 02:44, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Introduce a list of MISRA C rules that apply to the Xen hypervisor. The
> list is in RST format.
> 
> Specify that rules deviations need to be documented. Introduce a
> documentation tag for in-code comments to mark them as deviations. Also
> mention that other documentation mechanisms are work-in-progress.
> 
> Add a mention of the new list to CODING_STYLE.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:19:04 2022
Message-ID: <c31a6b07-8c19-5a67-2dc4-84ab0ee59ac7@xen.org>
Date: Wed, 1 Jun 2022 18:19:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v2 2/2] docs/misra: add Rule 5.1
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com, roger.pau@citrix.com,
 Bertrand.Marquis@arm.com, George.Dunlap@citrix.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
 <20220601014402.2293524-2-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220601014402.2293524-2-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/06/2022 02:44, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Add Rule 5.1, with the additional note that the character limit for Xen
> is 40 characters.
> 
> The max length identifiers found by ECLAIR are:
> 
> __mitigate_spectre_bhb_clear_insn_start
> domain_pause_by_systemcontroller_nosync
> 
> Both of them are 40 characters long.
> 
> Explicitly mention that public headers might have longer identifiers.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:25:44 2022
Date: Wed, 1 Jun 2022 10:25:31 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org, x86@kernel.org,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
References: <20220404050559.132378-1-hch@lst.de>
 <20220404050559.132378-10-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220404050559.132378-10-hch@lst.de>

Hi Christoph,

On Mon, Apr 04, 2022 at 07:05:53AM +0200, Christoph Hellwig wrote:
> Pass a bool to pass if swiotlb needs to be enabled based on the
> addressing needs and replace the verbose argument with a set of
> flags, including one to force enable bounce buffering.
> 
> Note that this patch removes the possibility to force xen-swiotlb
> use using swiotlb=force on the command line on x86 (arm and arm64
> never supported that), but this interface will be restored shortly.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

I bisected a performance regression in WSL2 to this change as commit
c6af2aa9ffc9 ("swiotlb: make the swiotlb_init interface more useful") in
mainline (bisect log below). I initially noticed it because accessing the
Windows filesystem through the /mnt/c mount is about 40x slower if I am doing
my math right based on the benchmarks below.

Before:

$ uname -r; and hyperfine "ls -l /mnt/c/Users/natec/Downloads"
5.18.0-rc3-microsoft-standard-WSL2-00008-ga3e230926708
Benchmark 1: ls -l /mnt/c/Users/natec/Downloads
  Time (mean ± σ):     564.5 ms ±  24.1 ms    [User: 2.5 ms, System: 130.3 ms]
  Range (min … max):   510.2 ms … 588.0 ms    10 runs

After:

$ uname -r; and hyperfine "ls -l /mnt/c/Users/natec/Downloads"
5.18.0-rc3-microsoft-standard-WSL2-00009-gc6af2aa9ffc9
Benchmark 1: ls -l /mnt/c/Users/natec/Downloads
  Time (mean ± σ):     23.282 s ±  1.220 s    [User: 0.013 s, System: 0.101 s]
  Range (min … max):   21.793 s … 25.317 s    10 runs
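(Editorial note: the "about 40x" estimate checks out against the two hyperfine means above; a quick sanity check using the reported numbers:)

```python
# Slowdown implied by the two hyperfine means quoted above:
# 564.5 ms on the last good commit vs 23.282 s on the first bad one.
before_s = 0.5645
after_s = 23.282
slowdown = after_s / before_s
print(f"{slowdown:.0f}x slower")  # -> 41x slower
```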

I do see 'swiotlb=force' on the cmdline:

$ cat /proc/cmdline
initrd=\initrd.img panic=-1 nr_cpus=8 swiotlb=force earlycon=uart8250,io,0x3f8,115200 console=hvc0 debug pty.legacy_count=0

/mnt/c appears to be a 9p mount, not sure if that is relevant here:

$ mount &| grep /mnt/c
drvfs on /mnt/c type 9p (rw,noatime,dirsync,aname=drvfs;path=C:\;uid=1000;gid=1000;symlinkroot=/mnt/,mmap,access=client,msize=262144,trans=virtio)

If there is any other information I can provide, please let me know.

Cheers,
Nathan

# bad: [700170bf6b4d773e328fa54ebb70ba444007c702] Merge tag 'nfs-for-5.19-1' of git://git.linux-nfs.org/projects/anna/linux-nfs
# good: [4b0986a3613c92f4ec1bdc7f60ec66fea135991f] Linux 5.18
git bisect start '700170bf6b4d773e328fa54ebb70ba444007c702' 'v5.18'
# good: [86c87bea6b42100c67418af690919c44de6ede6e] Merge tag 'devicetree-for-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
git bisect good 86c87bea6b42100c67418af690919c44de6ede6e
# bad: [ae862183285cbb2ef9032770d98ffa9becffe9d5] Merge tag 'arm-dt-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
git bisect bad ae862183285cbb2ef9032770d98ffa9becffe9d5
# good: [2518f226c60d8e04d18ba4295500a5b0b8ac7659] Merge tag 'drm-next-2022-05-25' of git://anongit.freedesktop.org/drm/drm
git bisect good 2518f226c60d8e04d18ba4295500a5b0b8ac7659
# bad: [babf0bb978e3c9fce6c4eba6b744c8754fd43d8e] Merge tag 'xfs-5.19-for-linus' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
git bisect bad babf0bb978e3c9fce6c4eba6b744c8754fd43d8e
# good: [beed983621fbdfd291e6e3a0cdc4d10517e60af8] ASoC: Intel: avs: Machine board registration
git bisect good beed983621fbdfd291e6e3a0cdc4d10517e60af8
# good: [fbe86daca0ba878b04fa241b85e26e54d17d4229] Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
git bisect good fbe86daca0ba878b04fa241b85e26e54d17d4229
# good: [166afc45ed5523298541fd0297f9ad585cc2708c] Merge tag 'reflink-speedups-5.19_2022-04-28' of git://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux into xfs-5.19-for-next
git bisect good 166afc45ed5523298541fd0297f9ad585cc2708c
# bad: [e375780b631a5fc2a61a3b4fa12429255361a31e] Merge tag 'fsnotify_for_v5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
git bisect bad e375780b631a5fc2a61a3b4fa12429255361a31e
# bad: [4a37f3dd9a83186cb88d44808ab35b78375082c9] dma-direct: don't over-decrypt memory
git bisect bad 4a37f3dd9a83186cb88d44808ab35b78375082c9
# bad: [742519538e6b07250c8085bbff4bd358bc03bf16] swiotlb: pass a gfp_mask argument to swiotlb_init_late
git bisect bad 742519538e6b07250c8085bbff4bd358bc03bf16
# good: [9bbe7a7fc126e3d14fefa4b035854aba080926d9] arm/xen: don't check for xen_initial_domain() in xen_create_contiguous_region
git bisect good 9bbe7a7fc126e3d14fefa4b035854aba080926d9
# good: [a3e230926708125205ffd06d3dc2175a8263ae7e] x86: centralize setting SWIOTLB_FORCE when guest memory encryption is enabled
git bisect good a3e230926708125205ffd06d3dc2175a8263ae7e
# bad: [8ba2ed1be90fc210126f68186564707478552c95] swiotlb: add a SWIOTLB_ANY flag to lift the low memory restriction
git bisect bad 8ba2ed1be90fc210126f68186564707478552c95
# bad: [c6af2aa9ffc9763826607bc2664ef3ea4475ed18] swiotlb: make the swiotlb_init interface more useful
git bisect bad c6af2aa9ffc9763826607bc2664ef3ea4475ed18
# first bad commit: [c6af2aa9ffc9763826607bc2664ef3ea4475ed18] swiotlb: make the swiotlb_init interface more useful
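(Editorial note: the bisect log above is an ordinary binary search over the commit range; a minimal sketch of the narrowing logic, with a toy commit list and predicate rather than the real history:)

```python
def first_bad(commits, is_bad):
    """Return the first commit for which is_bad() is true.

    Mirrors what `git bisect` does: commits are ordered oldest to
    newest, the first is known good and the last known bad, and each
    test halves the remaining window.
    """
    lo, hi = 0, len(commits) - 1  # lo is known good, hi is known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid  # regression is at mid or earlier
        else:
            lo = mid  # regression is after mid
    return commits[hi]

# Toy history of 16 commits with the regression introduced at commit 9.
history = list(range(16))
print(first_bad(history, lambda c: c >= 9))  # -> 9
```

With 16 candidates this converges in four tests, which matches the handful of good/bad verdicts needed per narrowing step in the log above.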


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:32:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:32:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340661.565744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSD5-00017E-C4; Wed, 01 Jun 2022 17:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340661.565744; Wed, 01 Jun 2022 17:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSD5-000177-8Q; Wed, 01 Jun 2022 17:32:43 +0000
Received: by outflank-mailman (input) for mailman id 340661;
 Wed, 01 Jun 2022 17:32:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Y5RR=WI=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1nwSD4-000171-Az
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:32:42 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [2001:470:1f07:15ff::f7])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d51c21ca-e1d0-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 19:32:40 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 251HWBV0087434
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 1 Jun 2022 13:32:16 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 251HWA8w087433;
 Wed, 1 Jun 2022 10:32:10 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d51c21ca-e1d0-11ec-837f-e5687231ffcc
Date: Wed, 1 Jun 2022 10:32:10 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 3/4] build: replace get-fields.sh by a perl script
Message-ID: <Ypeimt5XHHog64qw@mattapan.m5p.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-4-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220601165909.46588-4-anthony.perard@citrix.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.5
X-Spam-Checker-Version: SpamAssassin 3.4.5 (2021-03-20) on mattapan.m5p.com

On Wed, Jun 01, 2022 at 05:59:08PM +0100, Anthony PERARD wrote:
> diff --git a/xen/tools/compat-xlat-header b/xen/tools/compat-xlat-header
> new file mode 100755
> index 0000000000..f1f42a9dde
> --- /dev/null
> +++ b/xen/tools/compat-xlat-header
> @@ -0,0 +1,539 @@
> +#!/usr/bin/perl -w
> +
> +use strict;
> +use warnings;

I hope to take more of a look at this, but one thing I immediately
notice:  -w is redundant with "use warnings;".  I strongly prefer
"use warnings;", but others may have different preferences.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:34:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340671.565755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSF5-0001m6-Rc; Wed, 01 Jun 2022 17:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340671.565755; Wed, 01 Jun 2022 17:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSF5-0001lz-OH; Wed, 01 Jun 2022 17:34:47 +0000
Received: by outflank-mailman (input) for mailman id 340671;
 Wed, 01 Jun 2022 17:34:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYJT=WI=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1nwSF4-0001ld-Np
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:34:46 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 206ef2b4-e1d1-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 19:34:45 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9F29B68AA6; Wed,  1 Jun 2022 19:34:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 206ef2b4-e1d1-11ec-837f-e5687231ffcc
Date: Wed, 1 Jun 2022 19:34:41 +0200
From: Christoph Hellwig <hch@lst.de>
To: Nathan Chancellor <nathan@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, iommu@lists.linux-foundation.org,
	x86@kernel.org, Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <20220601173441.GB27582@lst.de>
References: <20220404050559.132378-1-hch@lst.de> <20220404050559.132378-10-hch@lst.de> <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
User-Agent: Mutt/1.5.17 (2007-11-01)

Can you send me the full dmesg and the content of
/sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?

Thanks!


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:35:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:35:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340680.565766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSFq-0002KK-5A; Wed, 01 Jun 2022 17:35:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340680.565766; Wed, 01 Jun 2022 17:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSFq-0002KD-1h; Wed, 01 Jun 2022 17:35:34 +0000
Received: by outflank-mailman (input) for mailman id 340680;
 Wed, 01 Jun 2022 17:35:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nwSFo-0002Jz-DL
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:35:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nwSFm-00064G-MY; Wed, 01 Jun 2022 17:35:30 +0000
Received: from [54.239.6.189] (helo=[192.168.4.58])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nwSFm-0003hZ-Er; Wed, 01 Jun 2022 17:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=J0Av9ftFQoR40bCjcL7qGxFHMTNxq1BrXuURHuR68O0=; b=rjDlUhkcEGxe8jZF7psshcx/HV
	1y3HNMHfdI7Sqhr24C0yaWx/ju+UXh2Y01HGcIGM3xddx62bLC8Ggn7DceuzeYA1e1PoEwYfFjUss
	FTWShzsmjzUTPbJ5fbJlpR+Cts17MR3hjlz+tW9r6EMtoXHzAotVTj6MIgnLbklOuPAA=;
Message-ID: <337d6dbf-e8ee-5de7-a75e-97be815f4467@xen.org>
Date: Wed, 1 Jun 2022 18:35:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <ab531f8b-a602-22e0-dabf-c7d073c88236@xen.org>
 <be06db4d-43c4-7d24-db0d-349c0a1e4999@apertussolutions.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <be06db4d-43c4-7d24-db0d-349c0a1e4999@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 31/05/2022 11:53, Daniel P. Smith wrote:
> On 5/31/22 05:25, Julien Grall wrote:
>> Hi,
>>
>> On 31/05/2022 03:41, Daniel P. Smith wrote:
>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>> index f16eb0df43..57b14e22c9 100644
>>> --- a/xen/arch/Kconfig
>>> +++ b/xen/arch/Kconfig
>>> @@ -17,3 +17,15 @@ config NR_CPUS
>>>          For CPU cores which support Simultaneous Multi-Threading or
>>> similar
>>>          technologies, this the number of logical threads which Xen will
>>>          support.
>>> +
>>> +config NR_BOOTMODS
>>> +    int "Maximum number of boot modules that a loader can pass"
>>> +    range 1 64
>>
>> OOI, any reason to limit the size?
> 
> I modelled this entirely after NR_CPUS, which applied a limit

The limit for NR_CPUS makes sense because there are scalability issues 
beyond that point (although 4095 seems quite high) and/or the HW imposes a limit.

> , and it
> seemed reasonable to me at the time. I choose 64 since it was double
> currently what Arm had set for MAX_MODULES. As such, I have no hard
> reason for there to be a limit. It can easily be removed or adjusted to
> whatever the reviewers feel would be appropriate.

Ok. In which case I would drop the limit because it also prevents users 
from creating more than 64 dom0less domUs (actually a bit fewer because 
some modules are used by Xen). I don't think there is a strong reason to 
prevent that, right?

>>> +    default "8" if X86
>>> +    default "32" if ARM
>>> +    help
>>> +      Controls the build-time size of various arrays allocated for
>>> +      parsing the boot modules passed by a loader when starting Xen.
>>> +
>>> +      This is of particular interest when using Xen's hypervisor domain
>>> +      capabilities such as dom0less.
>>> diff --git a/xen/arch/arm/include/asm/setup.h
>>> b/xen/arch/arm/include/asm/setup.h
>>> index 2bb01ecfa8..312a3e4209 100644
>>> --- a/xen/arch/arm/include/asm/setup.h
>>> +++ b/xen/arch/arm/include/asm/setup.h
>>> @@ -10,7 +10,8 @@
>>>      #define NR_MEM_BANKS 256
>>>    -#define MAX_MODULES 32 /* Current maximum useful modules */
>>> +/* Current maximum useful modules */
>>> +#define MAX_MODULES CONFIG_NR_BOOTMODS
>>
>> There are only a handful number of use of MAX_MODULES in Arm. So I would
>> prefer if we replace all the use with CONFIG_NR_BOOTMODS.
> 
> No problem, as I stated above, I mimicked what was done for NR_CPUS
> which did a similar #define for MAX_CPUS.

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:47:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340688.565776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSR1-00043C-60; Wed, 01 Jun 2022 17:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340688.565776; Wed, 01 Jun 2022 17:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSR1-000435-2e; Wed, 01 Jun 2022 17:47:07 +0000
Received: by outflank-mailman (input) for mailman id 340688;
 Wed, 01 Jun 2022 17:47:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pr6R=WI=kernel.org=nathan@srs-se1.protection.inumbo.net>)
 id 1nwSQz-00042z-At
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:47:05 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d634a393-e1d2-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 19:47:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A088461635;
 Wed,  1 Jun 2022 17:46:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 00445C385A5;
 Wed,  1 Jun 2022 17:46:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d634a393-e1d2-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654105618;
	bh=xSqWarjhALj8dKEeRZN7Pwk+UoI0xRS0grTqPGkq0Ek=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Z37bXwE16cFb4POTGHZIQMsdzIxBGSyCmSmdB5IiiiESZz9r5EYnXKBoaTwznCLi+
	 61PrQ9XC/Ssb/OnXH81eChsmzWHCsfXAIDr8t4rK6VSoqBhSv6c1xj7USCdYxENWBG
	 KC4m24lnXGw36Q6DVvasYy4ko87g7cyYRsF0DIh6ac0NK6JLcqRFCmQLs0bWr9pS8F
	 8pEuYNeBvPGmOWoUuf6M6kxwT+mvzFgsx0V02f1nrRkXum9nJRISzqFt35KZlB7cID
	 gga8ZTMBciVQHE/xDoutTxe9S5YZnn49BRdwPBH3eFCEowSTAjy4EVHfsQ7ItHJCEJ
	 x35A+j0MGTPTQ==
Date: Wed, 1 Jun 2022 10:46:54 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org, x86@kernel.org,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <YpemDuzdoaO3rijX@Ryzen-9-3900X.>
References: <20220404050559.132378-1-hch@lst.de>
 <20220404050559.132378-10-hch@lst.de>
 <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
 <20220601173441.GB27582@lst.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="euROFU3ViQK6AW7w"
Content-Disposition: inline
In-Reply-To: <20220601173441.GB27582@lst.de>


--euROFU3ViQK6AW7w
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Jun 01, 2022 at 07:34:41PM +0200, Christoph Hellwig wrote:
> Can you send me the full dmesg and the content of
> /sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?

Sure thing, they are attached! If there is anything else I can provide
or test, I am more than happy to do so.

Cheers,
Nathan
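(Editorial note: the attached good log reports io_tlb_nslabs = 32768. Assuming the usual IO_TLB_SHIFT of 11 in the kernel sources, i.e. 2 KiB per slab, that corresponds to the standard 64 MiB default swiotlb pool; a quick check:)

```python
# Each swiotlb slab is 1 << IO_TLB_SHIFT bytes (2 KiB), so the good
# boot's 32768 slabs correspond to the default 64 MiB bounce-buffer pool.
IO_TLB_SHIFT = 11
nslabs = 32768
pool_mib = (nslabs << IO_TLB_SHIFT) >> 20
print(pool_mib, "MiB")  # -> 64 MiB
```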

--euROFU3ViQK6AW7w
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="good.log"

# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs
32768

# dmesg
[    0.000000] Linux version 5.18.0-rc3-microsoft-standard-WSL2-00008-ga3e230926708 (nathan@dev-arch.thelio-3990X) (gcc (GCC) 12.1.0, GNU ld (GNU Binutils) 2.38) #1 SMP PREEMPT_DYNAMIC Wed Jun 1 10:38:34 MST 2022
[    0.000000] Command line: initrd=\initrd.img panic=-1 nr_cpus=8 swiotlb=force earlycon=uart8250,io,0x3f8,115200 console=hvc0 debug pty.legacy_count=0
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[    0.000000] signal: max sigframe size: 1776
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000e0fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000001fffff] ACPI data
[    0.000000] BIOS-e820: [mem 0x0000000000200000-0x00000000f7ffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000407ffffff] usable
[    0.000000] earlycon: uart8250 at I/O port 0x3f8 (options '115200')
[    0.000000] printk: bootconsole [uart8250] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI not present or invalid.
[    0.000000] Hypervisor detected: Microsoft Hyper-V
[    0.000000] Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0xc2c, misc 0xe0bed7b6
[    0.000000] Hyper-V: Host Build 10.0.22000.708-0-0
[    0.000000] Hyper-V: Nested features: 0x4a0000
[    0.000000] Hyper-V: LAPIC Timer Frequency: 0x1e8480
[    0.000000] Hyper-V: Using hypercall for remote TLB flush
[    0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
[    0.000005] tsc: Detected 3800.008 MHz processor
[    0.001901] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.004593] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.006806] last_pfn = 0x408000 max_arch_pfn = 0x400000000
[    0.009042] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[    0.011760] last_pfn = 0xf8000 max_arch_pfn = 0x400000000
[    0.013959] Using GB pages for direct mapping
[    0.015749] RAMDISK: [mem 0x0371f000-0x03779fff]
[    0.017616] ACPI: Early table checksum verification disabled
[    0.019854] ACPI: RSDP 0x00000000000E0000 000024 (v02 VRTUAL)
[    0.022162] ACPI: XSDT 0x0000000000100000 000044 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.025624] ACPI: FACP 0x0000000000101000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.029022] ACPI: DSDT 0x00000000001011B8 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
[    0.032413] ACPI: FACS 0x0000000000101114 000040
[    0.034280] ACPI: OEM0 0x0000000000101154 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.037699] ACPI: SRAT 0x000000000011F33C 000330 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.041089] ACPI: APIC 0x000000000011F66C 000088 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.044475] ACPI: Reserving FACP table memory at [mem 0x101000-0x101113]
[    0.047159] ACPI: Reserving DSDT table memory at [mem 0x1011b8-0x11f33b]
[    0.049905] ACPI: Reserving FACS table memory at [mem 0x101114-0x101153]
[    0.052693] ACPI: Reserving OEM0 table memory at [mem 0x101154-0x1011b7]
[    0.055404] ACPI: Reserving SRAT table memory at [mem 0x11f33c-0x11f66b]
[    0.058040] ACPI: Reserving APIC table memory at [mem 0x11f66c-0x11f6f3]
[    0.061078] Zone ranges:
[    0.062074]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.066106]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.068763]   Normal   [mem 0x0000000100000000-0x0000000407ffffff]
[    0.071235]   Device   empty
[    0.072384] Movable zone start for each node
[    0.074058] Early memory node ranges
[    0.075515]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.077979]   node   0: [mem 0x0000000000200000-0x00000000f7ffffff]
[    0.080483]   node   0: [mem 0x0000000100000000-0x0000000407ffffff]
[    0.082980] Initmem setup node 0 [mem 0x0000000000001000-0x0000000407ffffff]
[    0.085954] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.085972] On node 0, zone DMA: 352 pages in unavailable ranges
[    0.098043] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
[    0.102995] IOAPIC[0]: apic_id 8, version 17, address 0xfec00000, GSI 0-23
[    0.105726] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.108349] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.110909] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.112902] [mem 0xf8000000-0xffffffff] available for PCI devices
[    0.115315] Booting paravirtualized kernel on Hyper-V
[    0.117347] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.126056] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8 nr_node_ids:1
[    0.129513] percpu: Embedded 53 pages/cpu s178408 r8192 d30488 u262144
[    0.132139] pcpu-alloc: s178408 r8192 d30488 u262144 alloc=1*2097152
[    0.134679] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    0.136341] Hyper-V: PV spinlocks enabled
[    0.137917] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.140833] Built 1 zonelists, mobility grouping on.  Total pages: 4127749
[    0.143636] Kernel command line: initrd=\initrd.img panic=-1 nr_cpus=8 swiotlb=force earlycon=uart8250,io,0x3f8,115200 console=hvc0 debug pty.legacy_count=0
[    0.151440] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[    0.155808] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[    0.159017] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.184685] Memory: 4126096K/16775804K available (18449K kernel code, 2647K rwdata, 3952K rodata, 1536K init, 2448K bss, 392692K reserved, 0K cma-reserved)
[    0.190358] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[    0.192969] ftrace: allocating 51788 entries in 203 pages
[    0.201434] ftrace: allocated 203 pages with 5 groups
[    0.204349] Dynamic Preempt: none
[    0.205846] rcu: Preemptible hierarchical RCU implementation.
[    0.208162] rcu: 	RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=8.
[    0.210799] 	Trampoline variant of Tasks RCU enabled.
[    0.212756] 	Rude variant of Tasks RCU enabled.
[    0.214541] 	Tracing variant of Tasks RCU enabled.
[    0.216404] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies.
[    0.219393] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
[    0.225278] Using NULL legacy PIC
[    0.226573] NR_IRQS: 16640, nr_irqs: 488, preallocated irqs: 0
[    0.229131] random: crng init done (trusting CPU's manufacturer)
[    0.231548] Console: colour dummy device 80x25
[    0.233276] ACPI: Core revision 20211217
[    0.234931] Failed to register legacy timer interrupt
[    0.236944] APIC: Switch to symmetric I/O mode setup
[    0.238947] Hyper-V: enabling crash_kexec_post_notifiers
[    0.241016] Hyper-V: Using IPI hypercalls
[    0.242591] Hyper-V: Using enlightened APIC (xapic mode)
[    0.242680] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x6d8cbf8869f, max_idle_ns: 881590921691 ns
[    0.248870] Calibrating delay loop (skipped), value calculated using timer frequency.. 7600.01 BogoMIPS (lpj=38000080)
[    0.253092] pid_max: default: 32768 minimum: 301
[    0.254926] LSM: Security Framework initializing
[    0.256750] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.258867] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.258867] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.258867] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
[    0.258867] Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0
[    0.258867] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.258867] Spectre V2 : Mitigation: Retpolines
[    0.258867] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.258867] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.258867] Spectre V2 : User space: Mitigation: STIBP via prctl
[    0.258867] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[    0.258867] Freeing SMP alternatives memory: 56K
[    0.258867] smpboot: CPU0: AMD Ryzen 9 3900X 12-Core Processor (family: 0x17, model: 0x71, stepping: 0x0)
[    0.258867] cblist_init_generic: Setting adjustable number of callback queues.
[    0.258867] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.258867] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.258895] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.261292] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[    0.263817] ... version:                0
[    0.265391] ... bit width:              48
[    0.267068] ... generic registers:      6
[    0.268634] ... value mask:             0000ffffffffffff
[    0.268871] ... max period:             00007fffffffffff
[    0.270991] ... fixed-purpose events:   0
[    0.272571] ... event mask:             000000000000003f
[    0.274695] rcu: Hierarchical SRCU implementation.
[    0.276947] smp: Bringing up secondary CPUs ...
[    0.278794] x86: Booting SMP configuration:
[    0.278872] .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
[    0.279525] smp: Brought up 1 node, 8 CPUs
[    0.282814] smpboot: Max logical packages: 1
[    0.284479] smpboot: Total of 8 processors activated (60800.12 BogoMIPS)
[    0.298895] node 0 deferred pages initialised in 10ms
[    0.300921] devtmpfs: initialized
[    0.300921] x86/mm: Memory block size: 128MB
[    0.302426] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.308889] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[    0.312151] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.314571] thermal_sys: Registered thermal governor 'step_wise'
[    0.314572] thermal_sys: Registered thermal governor 'user_space'
[    0.316961] cpuidle: using governor menu
[    0.318916] PCI: Fatal: No config space access function found
[    0.322206] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[    0.322206] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.322206] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.328925] raid6: skipped pq benchmark and selected avx2x4
[    0.331164] raid6: using avx2x2 recovery algorithm
[    0.331164] ACPI: Added _OSI(Module Device)
[    0.331164] ACPI: Added _OSI(Processor Device)
[    0.332457] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.334283] ACPI: Added _OSI(Processor Aggregator Device)
[    0.336463] ACPI: Added _OSI(Linux-Dell-Video)
[    0.338283] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.348871] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.355075] ACPI: 1 ACPI AML tables successfully acquired and loaded
[    0.358946] ACPI: Interpreter enabled
[    0.360583] ACPI: PM: (supports S0 S5)
[    0.362198] ACPI: Using IOAPIC for interrupt routing
[    0.364304] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.368017] ACPI: Enabled 1 GPEs in block 00 to 0F
[    0.369481] iommu: Default domain type: Translated 
[    0.371449] iommu: DMA domain TLB invalidation policy: lazy mode 
[    0.373937] SCSI subsystem initialized
[    0.375541] ACPI: bus type USB registered
[    0.377164] usbcore: registered new interface driver usbfs
[    0.378875] usbcore: registered new interface driver hub
[    0.380985] usbcore: registered new device driver usb
[    0.383023] hv_vmbus: Vmbus version:5.2
[    0.383023] PCI: Using ACPI for IRQ routing
[    0.383023] PCI: System does not support PCI
[    0.389034] clocksource: Switched to clocksource tsc-early
[    0.389075] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60
[    0.394146] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a
[    0.396909] hv_vmbus: Unknown GUID: dde9cbc0-5060-4436-9448-ea1254a5d177
[    0.399744] VFS: Disk quotas dquot_6.6.0
[    0.401370] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.404186] FS-Cache: Loaded
[    0.405386] pnp: PnP ACPI init
[    0.406693] pnp: PnP ACPI: found 3 devices
[    0.412166] NET: Registered PF_INET protocol family
[    0.414416] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.418515] tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
[    0.422058] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.425602] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    0.428661] TCP: Hash tables configured (established 131072 bind 65536)
[    0.431375] UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.434579] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.437587] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.440120] RPC: Registered named UNIX socket transport module.
[    0.442979] RPC: Registered udp transport module.
[    0.445120] RPC: Registered tcp transport module.
[    0.447250] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.449989] PCI: CLS 0 bytes, default 64
[    0.451619] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.451654] Trying to unpack rootfs image as initramfs...
[    0.454256] software IO TLB: mapped [mem 0x00000000f4000000-0x00000000f8000000] (64MB)
[    0.456463] Freeing initrd memory: 364K
[    0.541510] kvm: no hardware support for 'kvm_intel'
[    0.543867] SVM: TSC scaling supported
[    0.545319] kvm: Nested Virtualization enabled
[    0.547040] SVM: kvm: Nested Paging enabled
[    0.548653] SVM: kvm: Hyper-V enlightened NPT TLB flush enabled
[    0.550967] SVM: kvm: Hyper-V Direct TLB Flush enabled
[    0.552946] SVM: Virtual VMLOAD VMSAVE supported
[    0.618969] Initialise system trusted keyrings
[    0.620900] workingset: timestamp_bits=46 max_order=22 bucket_order=0
[    0.624053] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.626726] NFS: Registering the id_resolver key type
[    0.628690] Key type id_resolver registered
[    0.630376] Key type id_legacy registered
[    0.631915] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.634537] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    0.637515] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    0.640546] Key type cifs.idmap registered
[    0.642255] fuse: init (API version 7.36)
[    0.643973] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled
[    0.647888] 9p: Installing v9fs 9p2000 file system support
[    0.650100] ceph: loaded (mds proto 32)
[    0.653965] NET: Registered PF_ALG protocol family
[    0.655832] xor: automatically using best checksumming function   avx       
[    0.658591] Key type asymmetric registered
[    0.660227] Asymmetric key parser 'x509' registered
[    0.662151] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[    0.665489] hv_vmbus: registering driver hv_pci
[    0.667566] hv_pci b440727e-7525-4d9c-a556-d52029b00086: PCI VMBus probing: Using version 0x10004
[    0.671953] hv_pci b440727e-7525-4d9c-a556-d52029b00086: PCI host bridge to bus 7525:00
[    0.675135] pci_bus 7525:00: root bus resource [mem 0x9ffe00000-0x9ffe02fff window]
[    0.678177] pci_bus 7525:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.682285] pci 7525:00:00.0: [1af4:1043] type 00 class 0x010000
[    0.686012] pci 7525:00:00.0: reg 0x10: [mem 0x9ffe00000-0x9ffe00fff 64bit]
[    0.689489] pci 7525:00:00.0: reg 0x18: [mem 0x9ffe01000-0x9ffe01fff 64bit]
[    0.693005] pci 7525:00:00.0: reg 0x20: [mem 0x9ffe02000-0x9ffe02fff 64bit]
[    0.698805] pci_bus 7525:00: busn_res: [bus 00-ff] end is updated to 00
[    0.701529] pci 7525:00:00.0: BAR 0: assigned [mem 0x9ffe00000-0x9ffe00fff 64bit]
[    0.705034] pci 7525:00:00.0: BAR 2: assigned [mem 0x9ffe01000-0x9ffe01fff 64bit]
[    0.708415] pci 7525:00:00.0: BAR 4: assigned [mem 0x9ffe02000-0x9ffe02fff 64bit]
[    0.712037] hv_pci 121abb12-7ab2-45ff-b4ee-e85a1067c860: PCI VMBus probing: Using version 0x10004
[    0.716192] hv_pci 121abb12-7ab2-45ff-b4ee-e85a1067c860: PCI host bridge to bus 7ab2:00
[    0.719504] pci_bus 7ab2:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.723020] pci 7ab2:00:00.0: [1414:008e] type 00 class 0x030200
[    0.729607] pci_bus 7ab2:00: busn_res: [bus 00-ff] end is updated to 00
[    0.732348] hv_pci 0c573a50-ca5d-42a9-9657-3920a455151c: PCI VMBus probing: Using version 0x10004
[    0.736748] hv_pci 0c573a50-ca5d-42a9-9657-3920a455151c: PCI host bridge to bus ca5d:00
[    0.739875] pci_bus ca5d:00: root bus resource [mem 0x9ffe04000-0x9ffe06fff window]
[    0.742859] pci_bus ca5d:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.746939] pci ca5d:00:00.0: [1af4:1049] type 00 class 0x010000
[    0.750206] pci ca5d:00:00.0: reg 0x10: [mem 0x9ffe04000-0x9ffe04fff 64bit]
[    0.753516] pci ca5d:00:00.0: reg 0x18: [mem 0x9ffe05000-0x9ffe05fff 64bit]
[    0.756826] pci ca5d:00:00.0: reg 0x20: [mem 0x9ffe06000-0x9ffe06fff 64bit]
[    0.763185] pci_bus ca5d:00: busn_res: [bus 00-ff] end is updated to 00
[    0.765793] pci ca5d:00:00.0: BAR 0: assigned [mem 0x9ffe04000-0x9ffe04fff 64bit]
[    0.769223] pci ca5d:00:00.0: BAR 2: assigned [mem 0x9ffe05000-0x9ffe05fff 64bit]
[    0.772634] pci ca5d:00:00.0: BAR 4: assigned [mem 0x9ffe06000-0x9ffe06fff 64bit]
[    0.784382] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.787110] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.790429] 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[    0.829682] Non-volatile memory driver v1.3
[    0.831440] printk: console [hvc0] enabled
[    0.833112] printk: bootconsole [uart8250] disabled
[    0.836997] brd: module loaded
[    0.838058] loop: module loaded
[    0.838265] hv_vmbus: registering driver hv_storvsc
[    0.838832] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[    0.839306] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[    0.839681] tun: Universal TUN/TAP device driver, 1.6
[    0.839972] PPP generic driver version 2.4.2
[    0.840236] scsi host0: storvsc_host_t
[    0.840302] PPP BSD Compression module registered
[    0.840692] PPP Deflate Compression module registered
[    0.841134] PPP MPPE Compression module registered
[    0.841392] NET: Registered PF_PPPOX protocol family
[    0.841654] usbcore: registered new interface driver cdc_ether
[    0.841957] usbcore: registered new interface driver cdc_ncm
[    0.842276] hv_vmbus: registering driver hv_netvsc
[    0.842639] VFIO - User Level meta-driver version: 0.3
[    0.843061] usbcore: registered new interface driver cdc_acm
[    0.843378] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[    0.843756] usbcore: registered new interface driver ch341
[    0.843966] usbserial: USB Serial support registered for ch341-uart
[    0.844287] usbcore: registered new interface driver cp210x
[    0.844545] usbserial: USB Serial support registered for cp210x
[    0.844853] usbcore: registered new interface driver ftdi_sio
[    0.845182] usbserial: USB Serial support registered for FTDI USB Serial Device
[    0.845673] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[    0.846016] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1
[    0.846374] vhci_hcd: created sysfs vhci_hcd.0
[    0.846742] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.18
[    0.847077] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.847418] usb usb1: Product: USB/IP Virtual Host Controller
[    0.847660] usb usb1: Manufacturer: Linux 5.18.0-rc3-microsoft-standard-WSL2-00008-ga3e230926708 vhci_hcd
[    0.847995] usb usb1: SerialNumber: vhci_hcd.0
[    0.848335] hub 1-0:1.0: USB hub found
[    0.848536] hub 1-0:1.0: 8 ports detected
[    0.848976] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[    0.849254] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2
[    0.849621] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    0.850102] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.18
[    0.850427] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.850718] usb usb2: Product: USB/IP Virtual Host Controller
[    0.850961] usb usb2: Manufacturer: Linux 5.18.0-rc3-microsoft-standard-WSL2-00008-ga3e230926708 vhci_hcd
[    0.851300] usb usb2: SerialNumber: vhci_hcd.0
[    0.851614] hub 2-0:1.0: USB hub found
[    0.851810] hub 2-0:1.0: 8 ports detected
[    0.852249] hv_vmbus: registering driver hyperv_keyboard
[    0.852593] rtc_cmos 00:02: RTC can wake from S4
[    0.854483] rtc_cmos 00:02: registered as rtc0
[    0.854995] rtc_cmos 00:02: setting system clock to 2022-06-01T17:39:04 UTC (1654105144)
[    0.855309] rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
[    0.855718] device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: dm-devel@redhat.com
[    0.856172] device-mapper: raid: Loading target version 1.15.1
[    0.856511] usbcore: registered new interface driver usbhid
[    0.856703] usbhid: USB HID core driver
[    0.856901] hv_utils: Registering HyperV Utility Driver
[    0.857113] hv_vmbus: registering driver hv_utils
[    0.857333] hv_vmbus: registering driver hv_balloon
[    0.857338] hv_utils: cannot register PTP clock: 0
[    0.857607] drop_monitor: Initializing network drop monitor service
[    0.858067] hv_utils: TimeSync IC version 4.0
[    0.858073] hv_balloon: Using Dynamic Memory protocol version 2.0
[    0.858248] Mirror/redirect action on
[    0.859029] Free page reporting enabled
[    0.859197] hv_balloon: Cold memory discard hint enabled
[    0.859770] IPVS: Registered protocols (TCP, UDP)
[    0.860029] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
[    0.860395] IPVS: ipvs loaded.
[    0.860546] IPVS: [rr] scheduler registered.
[    0.860743] IPVS: [wrr] scheduler registered.
[    0.860937] IPVS: [sh] scheduler registered.
[    0.861159] ipip: IPv4 and MPLS over IPv4 tunneling driver
[    0.861418] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
[    0.861666] Initializing XFRM netlink socket
[    0.861903] NET: Registered PF_INET6 protocol family
[    0.862439] Segment Routing with IPv6
[    0.862619] In-situ OAM (IOAM) with IPv6
[    0.862791] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.863092] NET: Registered PF_PACKET protocol family
[    0.863332] Bridge firewalling registered
[    0.863488] 8021q: 802.1Q VLAN Support v1.8
[    0.863653] sctp: Hash tables configured (bind 256/256)
[    0.863885] 9pnet: Installing 9P2000 support
[    0.875420] Key type dns_resolver registered
[    0.875693] Key type ceph registered
[    0.875979] libceph: loaded (mon/osd proto 15/24)
[    0.876260] NET: Registered PF_VSOCK protocol family
[    0.876540] hv_vmbus: registering driver hv_sock
[    0.876769] IPI shorthand broadcast: enabled
[    0.876968] sched_clock: Marking stable (856497151, 19750700)->(930661400, -54413549)
[    0.877527] registered taskstats version 1
[    0.878008] Loading compiled-in X.509 certificates
[    0.878392] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[    0.883871] Freeing unused kernel image (initmem) memory: 1536K
[    0.958961] Write protecting the kernel read-only data: 24576k
[    0.959743] Freeing unused kernel image (text/rodata gap) memory: 2028K
[    0.960146] Freeing unused kernel image (rodata/data gap) memory: 144K
[    0.960406] Run /init as init process
[    0.960549]   with arguments:
[    0.960690]     /init
[    0.960790]   with environment:
[    0.960931]     HOME=/
[    0.961046]     TERM=linux
[    0.963320] scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    0.964110] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    0.964747] sd 0:0:0:0: [sda] 641944 512-byte logical blocks: (329 MB/313 MiB)
[    0.965293] sd 0:0:0:0: [sda] Write Protect is on
[    0.965554] sd 0:0:0:0: [sda] Mode Sense: 0f 00 80 00
[    0.965967] sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    1.459424] EXT4-fs (sda): mounted filesystem without journal. Quota mode: none.
[    1.677905] hv_pci 75e234e6-9a41-4c1d-8ac8-5d395d071dd2: PCI VMBus probing: Using version 0x10004
[    1.678919] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6d8cbf8869f, max_idle_ns: 881590921691 ns
[    1.679926] hv_pci 75e234e6-9a41-4c1d-8ac8-5d395d071dd2: PCI host bridge to bus 9a41:00
[    1.680478] pci_bus 9a41:00: root bus resource [mem 0xa00000000-0xc00001fff window]
[    1.680783] pci_bus 9a41:00: No busn resource found for root bus, will use [bus 00-ff]
[    1.682091] pci 9a41:00:00.0: [1af4:105a] type 00 class 0x088000
[    1.684295] pci 9a41:00:00.0: reg 0x10: [mem 0xc00000000-0xc00000fff 64bit]
[    1.686271] pci 9a41:00:00.0: reg 0x18: [mem 0xc00001000-0xc00001fff 64bit]
[    1.688288] pci 9a41:00:00.0: reg 0x20: [mem 0xa00000000-0xbffffffff 64bit]
[    1.693245] pci_bus 9a41:00: busn_res: [bus 00-ff] end is updated to 00
[    1.693513] pci 9a41:00:00.0: BAR 4: assigned [mem 0xa00000000-0xbffffffff 64bit]
[    1.695391] pci 9a41:00:00.0: BAR 0: assigned [mem 0xc00000000-0xc00000fff 64bit]
[    1.697251] pci 9a41:00:00.0: BAR 2: assigned [mem 0xc00001000-0xc00001fff 64bit]
[    1.712810] virtiofs virtio2: Cache len: 0x200000000 @ 0xa00000000
[    1.713702] clocksource: Switched to clocksource tsc
[    1.739139] sd 0:0:0:0: [sda] Attached SCSI disk
[    1.778098] memmap_init_zone_device initialised 2097152 pages in 10ms
[    1.779673] WSL2: SetEphemeralPortRange is a no-op : range (0, 0)
[    1.848936] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    2.479025] scsi 0:0:0:1: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    2.479872] sd 0:0:0:1: Attached scsi generic sg1 type 0
[    2.480323] sd 0:0:0:1: [sdb] 2097160 512-byte logical blocks: (1.07 GB/1.00 GiB)
[    2.480417] scsi 0:0:0:2: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    2.480787] sd 0:0:0:1: [sdb] 4096-byte physical blocks
[    2.481485] sd 0:0:0:2: Attached scsi generic sg2 type 0
[    2.481508] sd 0:0:0:1: [sdb] Write Protect is off
[    2.482052] sd 0:0:0:2: [sdc] 536870912 512-byte logical blocks: (275 GB/256 GiB)
[    2.482106] sd 0:0:0:1: [sdb] Mode Sense: 0f 00 00 00
[    2.482522] sd 0:0:0:2: [sdc] 4096-byte physical blocks
[    2.482915] sd 0:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.483103] sd 0:0:0:2: [sdc] Write Protect is off
[    2.483857] sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00
[    2.484274] sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.485716] sd 0:0:0:1: [sdb] Attached SCSI disk
[    2.491559] sd 0:0:0:2: [sdc] Attached SCSI disk
[    2.500504] EXT4-fs (sdc): recovery complete
[    2.501459] EXT4-fs (sdc): mounted filesystem with ordered data mode. Quota mode: none.
[    2.522816] Adding 1048576k swap on /dev/sdb.  Priority:-2 extents:1 across:1048576k 
[    2.681998] hv_pci 218ee01d-fcab-4f07-9ea0-57e00b7600be: PCI VMBus probing: Using version 0x10004
[    2.683056] 9pnet_virtio: no channels available for device drvfs
[    2.683289] hv_pci 218ee01d-fcab-4f07-9ea0-57e00b7600be: PCI host bridge to bus fcab:00
[    2.683349] init: (1) WARNING: mount: waiting for virtio device...
[    2.683645] pci_bus fcab:00: root bus resource [mem 0x9ffe08000-0x9ffe0afff window]
[    2.684167] pci_bus fcab:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.685517] pci fcab:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.686817] pci fcab:00:00.0: reg 0x10: [mem 0x9ffe08000-0x9ffe08fff 64bit]
[    2.687677] pci fcab:00:00.0: reg 0x18: [mem 0x9ffe09000-0x9ffe09fff 64bit]
[    2.688539] pci fcab:00:00.0: reg 0x20: [mem 0x9ffe0a000-0x9ffe0afff 64bit]
[    2.692528] pci_bus fcab:00: busn_res: [bus 00-ff] end is updated to 00
[    2.692830] pci fcab:00:00.0: BAR 0: assigned [mem 0x9ffe08000-0x9ffe08fff 64bit]
[    2.693631] pci fcab:00:00.0: BAR 2: assigned [mem 0x9ffe09000-0x9ffe09fff 64bit]
[    2.694374] pci fcab:00:00.0: BAR 4: assigned [mem 0x9ffe0a000-0x9ffe0afff 64bit]
[    2.788379] hv_pci 5767d560-6512-42aa-bddc-89eca93e4c84: PCI VMBus probing: Using version 0x10004
[    2.789471] 9pnet_virtio: no channels available for device drvfs
[    2.789751] hv_pci 5767d560-6512-42aa-bddc-89eca93e4c84: PCI host bridge to bus 6512:00
[    2.789774] init: (1) WARNING: mount: waiting for virtio device...
[    2.790068] pci_bus 6512:00: root bus resource [mem 0x9ffe0c000-0x9ffe0efff window]
[    2.790613] pci_bus 6512:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.791858] pci 6512:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.792992] pci 6512:00:00.0: reg 0x10: [mem 0x9ffe0c000-0x9ffe0cfff 64bit]
[    2.793834] pci 6512:00:00.0: reg 0x18: [mem 0x9ffe0d000-0x9ffe0dfff 64bit]
[    2.794694] pci 6512:00:00.0: reg 0x20: [mem 0x9ffe0e000-0x9ffe0efff 64bit]
[    2.798621] pci_bus 6512:00: busn_res: [bus 00-ff] end is updated to 00
[    2.798961] pci 6512:00:00.0: BAR 0: assigned [mem 0x9ffe0c000-0x9ffe0cfff 64bit]
[    2.799695] pci 6512:00:00.0: BAR 2: assigned [mem 0x9ffe0d000-0x9ffe0dfff 64bit]
[    2.800416] pci 6512:00:00.0: BAR 4: assigned [mem 0x9ffe0e000-0x9ffe0efff 64bit]
[    2.895193] hv_pci 4fae7a25-5e7f-42d1-8a8f-7e1eb4128529: PCI VMBus probing: Using version 0x10004
[    2.896654] hv_pci 4fae7a25-5e7f-42d1-8a8f-7e1eb4128529: PCI host bridge to bus 5e7f:00
[    2.897036] pci_bus 5e7f:00: root bus resource [mem 0x9ffe10000-0x9ffe12fff window]
[    2.897405] pci_bus 5e7f:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.898729] pci 5e7f:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.900002] pci 5e7f:00:00.0: reg 0x10: [mem 0x9ffe10000-0x9ffe10fff 64bit]
[    2.900885] pci 5e7f:00:00.0: reg 0x18: [mem 0x9ffe11000-0x9ffe11fff 64bit]
[    2.901753] pci 5e7f:00:00.0: reg 0x20: [mem 0x9ffe12000-0x9ffe12fff 64bit]
[    2.905797] pci_bus 5e7f:00: busn_res: [bus 00-ff] end is updated to 00
[    2.906134] pci 5e7f:00:00.0: BAR 0: assigned [mem 0x9ffe10000-0x9ffe10fff 64bit]
[    2.906894] pci 5e7f:00:00.0: BAR 2: assigned [mem 0x9ffe11000-0x9ffe11fff 64bit]
[    2.907627] pci 5e7f:00:00.0: BAR 4: assigned [mem 0x9ffe12000-0x9ffe12fff 64bit]
[    3.077268] hv_pci 98cd7659-96ca-4b6a-95f7-0896075bc081: PCI VMBus probing: Using version 0x10004
[    3.078351] 9pnet_virtio: no channels available for device drvfs
[    3.078666] init: (1) WARNING: mount: waiting for virtio device...
[    3.078865] hv_pci 98cd7659-96ca-4b6a-95f7-0896075bc081: PCI host bridge to bus 96ca:00
[    3.079411] pci_bus 96ca:00: root bus resource [mem 0x9ffe14000-0x9ffe16fff window]
[    3.079774] pci_bus 96ca:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.081309] pci 96ca:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.082647] pci 96ca:00:00.0: reg 0x10: [mem 0x9ffe14000-0x9ffe14fff 64bit]
[    3.083650] pci 96ca:00:00.0: reg 0x18: [mem 0x9ffe15000-0x9ffe15fff 64bit]
[    3.084634] pci 96ca:00:00.0: reg 0x20: [mem 0x9ffe16000-0x9ffe16fff 64bit]
[    3.088677] pci_bus 96ca:00: busn_res: [bus 00-ff] end is updated to 00
[    3.089026] pci 96ca:00:00.0: BAR 0: assigned [mem 0x9ffe14000-0x9ffe14fff 64bit]
[    3.089905] pci 96ca:00:00.0: BAR 2: assigned [mem 0x9ffe15000-0x9ffe15fff 64bit]
[    3.090676] pci 96ca:00:00.0: BAR 4: assigned [mem 0x9ffe16000-0x9ffe16fff 64bit]
[    3.183923] hv_pci d00b62d9-165b-4f27-8533-5d0402fb00d8: PCI VMBus probing: Using version 0x10004
[    3.185457] hv_pci d00b62d9-165b-4f27-8533-5d0402fb00d8: PCI host bridge to bus 165b:00
[    3.185861] pci_bus 165b:00: root bus resource [mem 0x9ffe18000-0x9ffe1afff window]
[    3.185908] 9pnet_virtio: no channels available for device drvfs
[    3.186186] pci_bus 165b:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.186580] init: (1) WARNING: mount: waiting for virtio device...
[    3.187888] pci 165b:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.189081] pci 165b:00:00.0: reg 0x10: [mem 0x9ffe18000-0x9ffe18fff 64bit]
[    3.189964] pci 165b:00:00.0: reg 0x18: [mem 0x9ffe19000-0x9ffe19fff 64bit]
[    3.190841] pci 165b:00:00.0: reg 0x20: [mem 0x9ffe1a000-0x9ffe1afff 64bit]
[    3.194800] pci_bus 165b:00: busn_res: [bus 00-ff] end is updated to 00
[    3.195067] pci 165b:00:00.0: BAR 0: assigned [mem 0x9ffe18000-0x9ffe18fff 64bit]
[    3.195787] pci 165b:00:00.0: BAR 2: assigned [mem 0x9ffe19000-0x9ffe19fff 64bit]
[    3.196517] pci 165b:00:00.0: BAR 4: assigned [mem 0x9ffe1a000-0x9ffe1afff 64bit]
[    3.292120] hv_pci ff437ee8-6a13-4119-800a-411d24b930e4: PCI VMBus probing: Using version 0x10004
[    3.293643] hv_pci ff437ee8-6a13-4119-800a-411d24b930e4: PCI host bridge to bus 6a13:00
[    3.294135] pci_bus 6a13:00: root bus resource [mem 0x9ffe1c000-0x9ffe1efff window]
[    3.294500] pci_bus 6a13:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.295857] pci 6a13:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.297136] pci 6a13:00:00.0: reg 0x10: [mem 0x9ffe1c000-0x9ffe1cfff 64bit]
[    3.298009] pci 6a13:00:00.0: reg 0x18: [mem 0x9ffe1d000-0x9ffe1dfff 64bit]
[    3.298981] pci 6a13:00:00.0: reg 0x20: [mem 0x9ffe1e000-0x9ffe1efff 64bit]
[    3.303144] pci_bus 6a13:00: busn_res: [bus 00-ff] end is updated to 00
[    3.303421] pci 6a13:00:00.0: BAR 0: assigned [mem 0x9ffe1c000-0x9ffe1cfff 64bit]
[    3.304141] pci 6a13:00:00.0: BAR 2: assigned [mem 0x9ffe1d000-0x9ffe1dfff 64bit]
[    3.304851] pci 6a13:00:00.0: BAR 4: assigned [mem 0x9ffe1e000-0x9ffe1efff 64bit]
[   49.723597] hv_balloon: Max. dynamic memory size: 16384 MB

[Attachment: bad.log]

# cat /sys/kernel/debug/swiotlb/io_tlb_nslabs
32768

# dmesg
[    0.000000] Linux version 5.18.0-rc3-microsoft-standard-WSL2-00009-gc6af2aa9ffc9 (nathan@dev-arch.thelio-3990X) (gcc (GCC) 12.1.0, GNU ld (GNU Binutils) 2.38) #1 SMP PREEMPT_DYNAMIC Wed Jun 1 10:41:21 MST 2022
[    0.000000] Command line: initrd=\initrd.img panic=-1 nr_cpus=8 swiotlb=force earlycon=uart8250,io,0x3f8,115200 console=hvc0 debug pty.legacy_count=0
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[    0.000000] signal: max sigframe size: 1776
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000e0fff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000001fffff] ACPI data
[    0.000000] BIOS-e820: [mem 0x0000000000200000-0x00000000f7ffffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000407ffffff] usable
[    0.000000] earlycon: uart8250 at I/O port 0x3f8 (options '115200')
[    0.000000] printk: bootconsole [uart8250] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI not present or invalid.
[    0.000000] Hypervisor detected: Microsoft Hyper-V
[    0.000000] Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0xc2c, misc 0xe0bed7b6
[    0.000000] Hyper-V: Host Build 10.0.22000.708-0-0
[    0.000000] Hyper-V: Nested features: 0x4a0000
[    0.000000] Hyper-V: LAPIC Timer Frequency: 0x1e8480
[    0.000000] Hyper-V: Using hypercall for remote TLB flush
[    0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
[    0.000003] tsc: Detected 3800.008 MHz processor
[    0.001844] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.004500] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.006752] last_pfn = 0x408000 max_arch_pfn = 0x400000000
[    0.008928] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[    0.011617] last_pfn = 0xf8000 max_arch_pfn = 0x400000000
[    0.013721] Using GB pages for direct mapping
[    0.015504] RAMDISK: [mem 0x0371f000-0x03779fff]
[    0.017288] ACPI: Early table checksum verification disabled
[    0.019511] ACPI: RSDP 0x00000000000E0000 000024 (v02 VRTUAL)
[    0.021806] ACPI: XSDT 0x0000000000100000 000044 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.025255] ACPI: FACP 0x0000000000101000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.028644] ACPI: DSDT 0x00000000001011B8 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
[    0.032074] ACPI: FACS 0x0000000000101114 000040
[    0.033876] ACPI: OEM0 0x0000000000101154 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.037335] ACPI: SRAT 0x000000000011F33C 000330 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.040839] ACPI: APIC 0x000000000011F66C 000088 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.044205] ACPI: Reserving FACP table memory at [mem 0x101000-0x101113]
[    0.046856] ACPI: Reserving DSDT table memory at [mem 0x1011b8-0x11f33b]
[    0.049526] ACPI: Reserving FACS table memory at [mem 0x101114-0x101153]
[    0.052323] ACPI: Reserving OEM0 table memory at [mem 0x101154-0x1011b7]
[    0.054916] ACPI: Reserving SRAT table memory at [mem 0x11f33c-0x11f66b]
[    0.057563] ACPI: Reserving APIC table memory at [mem 0x11f66c-0x11f6f3]
[    0.060644] Zone ranges:
[    0.061621]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.064031]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.066551]   Normal   [mem 0x0000000100000000-0x0000000407ffffff]
[    0.069096]   Device   empty
[    0.070236] Movable zone start for each node
[    0.071880] Early memory node ranges
[    0.073323]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.075736]   node   0: [mem 0x0000000000200000-0x00000000f7ffffff]
[    0.078260]   node   0: [mem 0x0000000100000000-0x0000000407ffffff]
[    0.080698] Initmem setup node 0 [mem 0x0000000000001000-0x0000000407ffffff]
[    0.083706] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.083724] On node 0, zone DMA: 352 pages in unavailable ranges
[    0.095850] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
[    0.100817] IOAPIC[0]: apic_id 8, version 17, address 0xfec00000, GSI 0-23
[    0.103601] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.106156] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.108743] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.110810] [mem 0xf8000000-0xffffffff] available for PCI devices
[    0.113193] Booting paravirtualized kernel on Hyper-V
[    0.115221] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.124086] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8 nr_node_ids:1
[    0.127620] percpu: Embedded 53 pages/cpu s178408 r8192 d30488 u262144
[    0.130204] pcpu-alloc: s178408 r8192 d30488 u262144 alloc=1*2097152
[    0.132746] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    0.134452] Hyper-V: PV spinlocks enabled
[    0.136082] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.138923] Built 1 zonelists, mobility grouping on.  Total pages: 4127749
[    0.141669] Kernel command line: initrd=\initrd.img panic=-1 nr_cpus=8 swiotlb=force earlycon=uart8250,io,0x3f8,115200 console=hvc0 debug pty.legacy_count=0
[    0.149809] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[    0.154232] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[    0.157400] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.182296] Memory: 4126096K/16775804K available (18449K kernel code, 2647K rwdata, 3952K rodata, 1536K init, 2448K bss, 392692K reserved, 0K cma-reserved)
[    0.188084] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[    0.190824] ftrace: allocating 51788 entries in 203 pages
[    0.199397] ftrace: allocated 203 pages with 5 groups
[    0.202320] Dynamic Preempt: none
[    0.203777] rcu: Preemptible hierarchical RCU implementation.
[    0.206055] rcu: 	RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=8.
[    0.208807] 	Trampoline variant of Tasks RCU enabled.
[    0.210844] 	Rude variant of Tasks RCU enabled.
[    0.213292] 	Tracing variant of Tasks RCU enabled.
[    0.215445] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies.
[    0.218380] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
[    0.224306] Using NULL legacy PIC
[    0.225589] NR_IRQS: 16640, nr_irqs: 488, preallocated irqs: 0
[    0.228245] random: crng init done (trusting CPU's manufacturer)
[    0.230636] Console: colour dummy device 80x25
[    0.232438] ACPI: Core revision 20211217
[    0.234033] Failed to register legacy timer interrupt
[    0.236047] APIC: Switch to symmetric I/O mode setup
[    0.238014] Hyper-V: enabling crash_kexec_post_notifiers
[    0.240107] Hyper-V: Using IPI hypercalls
[    0.241666] Hyper-V: Using enlightened APIC (xapic mode)
[    0.241757] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x6d8cbf8869f, max_idle_ns: 881590921691 ns
[    0.247930] Calibrating delay loop (skipped), value calculated using timer frequency.. 7600.01 BogoMIPS (lpj=38000080)
[    0.252067] pid_max: default: 32768 minimum: 301
[    0.253895] LSM: Security Framework initializing
[    0.255751] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.257927] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.257927] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.257927] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
[    0.257927] Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0
[    0.257927] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.257927] Spectre V2 : Mitigation: Retpolines
[    0.257927] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.257927] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.257927] Spectre V2 : User space: Mitigation: STIBP via prctl
[    0.257927] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[    0.257927] Freeing SMP alternatives memory: 56K
[    0.257927] smpboot: CPU0: AMD Ryzen 9 3900X 12-Core Processor (family: 0x17, model: 0x71, stepping: 0x0)
[    0.257927] cblist_init_generic: Setting adjustable number of callback queues.
[    0.257927] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.257927] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.257953] cblist_init_generic: Setting shift to 3 and lim to 1.
[    0.260344] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[    0.262870] ... version:                0
[    0.264433] ... bit width:              48
[    0.266024] ... generic registers:      6
[    0.267586] ... value mask:             0000ffffffffffff
[    0.267931] ... max period:             00007fffffffffff
[    0.269978] ... fixed-purpose events:   0
[    0.271549] ... event mask:             000000000000003f
[    0.273693] rcu: Hierarchical SRCU implementation.
[    0.275947] smp: Bringing up secondary CPUs ...
[    0.277746] x86: Booting SMP configuration:
[    0.277931] .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
[    0.278598] smp: Brought up 1 node, 8 CPUs
[    0.281786] smpboot: Max logical packages: 1
[    0.283442] smpboot: Total of 8 processors activated (60800.12 BogoMIPS)
[    0.297948] node 0 deferred pages initialised in 10ms
[    0.300095] devtmpfs: initialized
[    0.300095] x86/mm: Memory block size: 128MB
[    0.301372] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.307950] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[    0.311117] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.313590] thermal_sys: Registered thermal governor 'step_wise'
[    0.313591] thermal_sys: Registered thermal governor 'user_space'
[    0.315970] cpuidle: using governor menu
[    0.317983] PCI: Fatal: No config space access function found
[    0.321181] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[    0.321181] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.321181] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.327982] raid6: skipped pq benchmark and selected avx2x4
[    0.330080] raid6: using avx2x2 recovery algorithm
[    0.331916] ACPI: Added _OSI(Module Device)
[    0.337945] ACPI: Added _OSI(Processor Device)
[    0.339664] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.341472] ACPI: Added _OSI(Processor Aggregator Device)
[    0.343581] ACPI: Added _OSI(Linux-Dell-Video)
[    0.345290] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.347310] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.351846] ACPI: 1 ACPI AML tables successfully acquired and loaded
[    0.354878] ACPI: Interpreter enabled
[    0.357948] ACPI: PM: (supports S0 S5)
[    0.359386] ACPI: Using IOAPIC for interrupt routing
[    0.361295] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.364875] ACPI: Enabled 1 GPEs in block 00 to 0F
[    0.367281] iommu: Default domain type: Translated 
[    0.367930] iommu: DMA domain TLB invalidation policy: lazy mode 
[    0.370342] SCSI subsystem initialized
[    0.371819] ACPI: bus type USB registered
[    0.373384] usbcore: registered new interface driver usbfs
[    0.375508] usbcore: registered new interface driver hub
[    0.377556] usbcore: registered new device driver usb
[    0.378007] hv_vmbus: Vmbus version:5.2
[    0.379525] PCI: Using ACPI for IRQ routing
[    0.381148] PCI: System does not support PCI
[    0.382985] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60
[    0.385656] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a
[    0.385656] hv_vmbus: Unknown GUID: dde9cbc0-5060-4436-9448-ea1254a5d177
[    0.385656] clocksource: Switched to clocksource tsc-early
[    0.395132] VFS: Disk quotas dquot_6.6.0
[    0.398629] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.401503] FS-Cache: Loaded
[    0.402695] pnp: PnP ACPI init
[    0.403992] pnp: PnP ACPI: found 3 devices
[    0.409671] NET: Registered PF_INET protocol family
[    0.411938] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.416026] tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
[    0.419575] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.423230] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[    0.426347] TCP: Hash tables configured (established 131072 bind 65536)
[    0.429190] UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.432209] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.435648] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.438375] RPC: Registered named UNIX socket transport module.
[    0.440789] RPC: Registered udp transport module.
[    0.442671] RPC: Registered tcp transport module.
[    0.444673] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.447332] PCI: CLS 0 bytes, default 64
[    0.448993] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.449028] Trying to unpack rootfs image as initramfs...
[    0.451710] software IO TLB: mapped [mem 0x00000000f4000000-0x00000000f8000000] (64MB)
[    0.454079] Freeing initrd memory: 364K
[    0.541430] kvm: no hardware support for 'kvm_intel'
[    0.543753] SVM: TSC scaling supported
[    0.545214] kvm: Nested Virtualization enabled
[    0.546939] SVM: kvm: Nested Paging enabled
[    0.548655] SVM: kvm: Hyper-V enlightened NPT TLB flush enabled
[    0.550982] SVM: kvm: Hyper-V Direct TLB Flush enabled
[    0.553055] SVM: Virtual VMLOAD VMSAVE supported
[    0.619230] Initialise system trusted keyrings
[    0.621148] workingset: timestamp_bits=46 max_order=22 bucket_order=0
[    0.624304] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.626917] NFS: Registering the id_resolver key type
[    0.628911] Key type id_resolver registered
[    0.630622] Key type id_legacy registered
[    0.632252] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.635096] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    0.638073] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    0.641200] Key type cifs.idmap registered
[    0.642865] fuse: init (API version 7.36)
[    0.644622] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled
[    0.648593] 9p: Installing v9fs 9p2000 file system support
[    0.650987] ceph: loaded (mds proto 32)
[    0.654860] NET: Registered PF_ALG protocol family
[    0.656711] xor: automatically using best checksumming function   avx       
[    0.659472] Key type asymmetric registered
[    0.661072] Asymmetric key parser 'x509' registered
[    0.662976] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[    0.666333] hv_vmbus: registering driver hv_pci
[    0.668465] hv_pci a154a52e-df93-4a32-a470-37362af9092a: PCI VMBus probing: Using version 0x10004
[    0.672862] hv_pci a154a52e-df93-4a32-a470-37362af9092a: PCI host bridge to bus df93:00
[    0.675963] pci_bus df93:00: root bus resource [mem 0x9ffe00000-0x9ffe02fff window]
[    0.678965] pci_bus df93:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.683016] pci df93:00:00.0: [1af4:1043] type 00 class 0x010000
[    0.686425] pci df93:00:00.0: reg 0x10: [mem 0x9ffe00000-0x9ffe00fff 64bit]
[    0.689851] pci df93:00:00.0: reg 0x18: [mem 0x9ffe01000-0x9ffe01fff 64bit]
[    0.693249] pci df93:00:00.0: reg 0x20: [mem 0x9ffe02000-0x9ffe02fff 64bit]
[    0.698971] pci_bus df93:00: busn_res: [bus 00-ff] end is updated to 00
[    0.701554] pci df93:00:00.0: BAR 0: assigned [mem 0x9ffe00000-0x9ffe00fff 64bit]
[    0.704894] pci df93:00:00.0: BAR 2: assigned [mem 0x9ffe01000-0x9ffe01fff 64bit]
[    0.708243] pci df93:00:00.0: BAR 4: assigned [mem 0x9ffe02000-0x9ffe02fff 64bit]
[    0.711877] hv_pci ca66d0fe-4477-439b-9d67-02d7bd2dcb05: PCI VMBus probing: Using version 0x10004
[    0.716001] hv_pci ca66d0fe-4477-439b-9d67-02d7bd2dcb05: PCI host bridge to bus 4477:00
[    0.719136] pci_bus 4477:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.722586] pci 4477:00:00.0: [1414:008e] type 00 class 0x030200
[    0.729016] pci_bus 4477:00: busn_res: [bus 00-ff] end is updated to 00
[    0.731822] hv_pci c42c9658-a709-40bd-8d72-5ee9b78e28ca: PCI VMBus probing: Using version 0x10004
[    0.736240] hv_pci c42c9658-a709-40bd-8d72-5ee9b78e28ca: PCI host bridge to bus a709:00
[    0.739381] pci_bus a709:00: root bus resource [mem 0x9ffe04000-0x9ffe06fff window]
[    0.742368] pci_bus a709:00: No busn resource found for root bus, will use [bus 00-ff]
[    0.746437] pci a709:00:00.0: [1af4:1049] type 00 class 0x010000
[    0.749663] pci a709:00:00.0: reg 0x10: [mem 0x9ffe04000-0x9ffe04fff 64bit]
[    0.752975] pci a709:00:00.0: reg 0x18: [mem 0x9ffe05000-0x9ffe05fff 64bit]
[    0.756273] pci a709:00:00.0: reg 0x20: [mem 0x9ffe06000-0x9ffe06fff 64bit]
[    0.762929] pci_bus a709:00: busn_res: [bus 00-ff] end is updated to 00
[    0.765585] pci a709:00:00.0: BAR 0: assigned [mem 0x9ffe04000-0x9ffe04fff 64bit]
[    0.769014] pci a709:00:00.0: BAR 2: assigned [mem 0x9ffe05000-0x9ffe05fff 64bit]
[    0.772436] pci a709:00:00.0: BAR 4: assigned [mem 0x9ffe06000-0x9ffe06fff 64bit]
[    0.784024] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.786788] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.790004] 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[    0.828478] Non-volatile memory driver v1.3
[    0.830180] printk: console [hvc0] enabled
[    0.831804] printk: bootconsole [uart8250] disabled
[    0.835700] brd: module loaded
[    0.836692] loop: module loaded
[    0.836890] hv_vmbus: registering driver hv_storvsc
[    0.837466] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[    0.837853] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[    0.838217] scsi host0: storvsc_host_t
[    0.838399] tun: Universal TUN/TAP device driver, 1.6
[    0.838812] PPP generic driver version 2.4.2
[    0.839159] PPP BSD Compression module registered
[    0.839395] PPP Deflate Compression module registered
[    0.839781] PPP MPPE Compression module registered
[    0.840046] NET: Registered PF_PPPOX protocol family
[    0.840326] usbcore: registered new interface driver cdc_ether
[    0.840687] usbcore: registered new interface driver cdc_ncm
[    0.841009] hv_vmbus: registering driver hv_netvsc
[    0.841331] VFIO - User Level meta-driver version: 0.3
[    0.841707] usbcore: registered new interface driver cdc_acm
[    0.841999] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[    0.842346] usbcore: registered new interface driver ch341
[    0.842613] usbserial: USB Serial support registered for ch341-uart
[    0.842907] usbcore: registered new interface driver cp210x
[    0.843226] usbserial: USB Serial support registered for cp210x
[    0.843574] usbcore: registered new interface driver ftdi_sio
[    0.843874] usbserial: USB Serial support registered for FTDI USB Serial Device
[    0.844325] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[    0.844630] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1
[    0.844968] vhci_hcd: created sysfs vhci_hcd.0
[    0.845394] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.18
[    0.845749] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.846033] usb usb1: Product: USB/IP Virtual Host Controller
[    0.846274] usb usb1: Manufacturer: Linux 5.18.0-rc3-microsoft-standard-WSL2-00009-gc6af2aa9ffc9 vhci_hcd
[    0.846606] usb usb1: SerialNumber: vhci_hcd.0
[    0.846887] hub 1-0:1.0: USB hub found
[    0.847077] hub 1-0:1.0: 8 ports detected
[    0.847565] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller
[    0.847828] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2
[    0.848246] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    0.848690] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.18
[    0.848997] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.849279] usb usb2: Product: USB/IP Virtual Host Controller
[    0.849523] usb usb2: Manufacturer: Linux 5.18.0-rc3-microsoft-standard-WSL2-00009-gc6af2aa9ffc9 vhci_hcd
[    0.849855] usb usb2: SerialNumber: vhci_hcd.0
[    0.850138] hub 2-0:1.0: USB hub found
[    0.850333] hub 2-0:1.0: 8 ports detected
[    0.850772] hv_vmbus: registering driver hyperv_keyboard
[    0.851119] rtc_cmos 00:02: RTC can wake from S4
[    0.853017] rtc_cmos 00:02: registered as rtc0
[    0.853536] rtc_cmos 00:02: setting system clock to 2022-06-01T17:41:59 UTC (1654105319)
[    0.853854] rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
[    0.854249] device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: dm-devel@redhat.com
[    0.854710] device-mapper: raid: Loading target version 1.15.1
[    0.855003] usbcore: registered new interface driver usbhid
[    0.855198] usbhid: USB HID core driver
[    0.855394] hv_utils: Registering HyperV Utility Driver
[    0.855608] hv_vmbus: registering driver hv_utils
[    0.855910] hv_vmbus: registering driver hv_balloon
[    0.855914] hv_utils: cannot register PTP clock: 0
[    0.856165] drop_monitor: Initializing network drop monitor service
[    0.856597] hv_balloon: Using Dynamic Memory protocol version 2.0
[    0.856611] hv_utils: TimeSync IC version 4.0
[    0.856742] Mirror/redirect action on
[    0.857542] Free page reporting enabled
[    0.857744] hv_balloon: Cold memory discard hint enabled
[    0.858304] IPVS: Registered protocols (TCP, UDP)
[    0.858534] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
[    0.858872] IPVS: ipvs loaded.
[    0.859018] IPVS: [rr] scheduler registered.
[    0.859213] IPVS: [wrr] scheduler registered.
[    0.859405] IPVS: [sh] scheduler registered.
[    0.859622] ipip: IPv4 and MPLS over IPv4 tunneling driver
[    0.860047] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
[    0.860293] Initializing XFRM netlink socket
[    0.860522] NET: Registered PF_INET6 protocol family
[    0.861019] Segment Routing with IPv6
[    0.861193] In-situ OAM (IOAM) with IPv6
[    0.861362] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.861655] NET: Registered PF_PACKET protocol family
[    0.861875] Bridge firewalling registered
[    0.862024] 8021q: 802.1Q VLAN Support v1.8
[    0.862189] sctp: Hash tables configured (bind 256/256)
[    0.862419] 9pnet: Installing 9P2000 support
[    0.874428] Key type dns_resolver registered
[    0.874663] Key type ceph registered
[    0.874921] libceph: loaded (mon/osd proto 15/24)
[    0.875208] NET: Registered PF_VSOCK protocol family
[    0.875419] hv_vmbus: registering driver hv_sock
[    0.875644] IPI shorthand broadcast: enabled
[    0.875840] sched_clock: Marking stable (855501109, 19696300)->(934890300, -59692891)
[    0.876403] registered taskstats version 1
[    0.876928] Loading compiled-in X.509 certificates
[    0.877333] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[    0.882685] Freeing unused kernel image (initmem) memory: 1536K
[    0.918057] Write protecting the kernel read-only data: 24576k
[    0.918975] Freeing unused kernel image (text/rodata gap) memory: 2028K
[    0.919393] Freeing unused kernel image (rodata/data gap) memory: 144K
[    0.919654] Run /init as init process
[    0.919798]   with arguments:
[    0.919940]     /init
[    0.920047]   with environment:
[    0.920190]     HOME=/
[    0.920286]     TERM=linux
[    0.922620] scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    0.923252] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    0.923893] sd 0:0:0:0: [sda] 641944 512-byte logical blocks: (329 MB/313 MiB)
[    0.924490] sd 0:0:0:0: [sda] Write Protect is on
[    0.924760] sd 0:0:0:0: [sda] Mode Sense: 0f 00 80 00
[    0.925199] sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    1.538410] EXT4-fs (sda): mounted filesystem without journal. Quota mode: none.
[    1.640080] hv_pci 88309164-1127-4233-9560-6e0cad455d8f: PCI VMBus probing: Using version 0x10004
[    1.642187] hv_pci 88309164-1127-4233-9560-6e0cad455d8f: PCI host bridge to bus 1127:00
[    1.642548] pci_bus 1127:00: root bus resource [mem 0xa00000000-0xc00001fff window]
[    1.642895] pci_bus 1127:00: No busn resource found for root bus, will use [bus 00-ff]
[    1.644287] pci 1127:00:00.0: [1af4:105a] type 00 class 0x088000
[    1.647194] pci 1127:00:00.0: reg 0x10: [mem 0xc00000000-0xc00000fff 64bit]
[    1.649501] pci 1127:00:00.0: reg 0x18: [mem 0xc00001000-0xc00001fff 64bit]
[    1.651514] pci 1127:00:00.0: reg 0x20: [mem 0xa00000000-0xbffffffff 64bit]
[    1.656603] pci_bus 1127:00: busn_res: [bus 00-ff] end is updated to 00
[    1.656902] pci 1127:00:00.0: BAR 4: assigned [mem 0xa00000000-0xbffffffff 64bit]
[    1.658754] pci 1127:00:00.0: BAR 0: assigned [mem 0xc00000000-0xc00000fff 64bit]
[    1.660797] pci 1127:00:00.0: BAR 2: assigned [mem 0xc00001000-0xc00001fff 64bit]
[    1.676734] virtiofs virtio2: Cache len: 0x200000000 @ 0xa00000000
[    1.687964] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x6d8cbf8869f, max_idle_ns: 881590921691 ns
[    1.688537] clocksource: Switched to clocksource tsc
[    1.718131] sd 0:0:0:0: [sda] Attached SCSI disk
[    1.737024] memmap_init_zone_device initialised 2097152 pages in 10ms
[    1.738457] WSL2: SetEphemeralPortRange is a no-op : range (0, 0)
[    1.847973] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    2.418129] scsi 0:0:0:1: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    2.418887] sd 0:0:0:1: Attached scsi generic sg1 type 0
[    2.419446] scsi 0:0:0:2: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    2.419498] sd 0:0:0:1: [sdb] 2097160 512-byte logical blocks: (1.07 GB/1.00 GiB)
[    2.420146] sd 0:0:0:2: Attached scsi generic sg2 type 0
[    2.420236] sd 0:0:0:1: [sdb] 4096-byte physical blocks
[    2.420789] sd 0:0:0:2: [sdc] 536870912 512-byte logical blocks: (275 GB/256 GiB)
[    2.420849] sd 0:0:0:1: [sdb] Write Protect is off
[    2.421225] sd 0:0:0:2: [sdc] 4096-byte physical blocks
[    2.421541] sd 0:0:0:1: [sdb] Mode Sense: 0f 00 00 00
[    2.421888] sd 0:0:0:2: [sdc] Write Protect is off
[    2.422260] sd 0:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.422392] sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00
[    2.423302] sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.424840] sd 0:0:0:1: [sdb] Attached SCSI disk
[    2.426063] sd 0:0:0:2: [sdc] Attached SCSI disk
[    2.444484] EXT4-fs (sdc): recovery complete
[    2.445457] EXT4-fs (sdc): mounted filesystem with ordered data mode. Quota mode: none.
[    2.459372] Adding 1048576k swap on /dev/sdb.  Priority:-2 extents:1 across:1048576k 
[    2.649008] hv_pci aed388e8-de3a-45a5-bb0c-ee534e806689: PCI VMBus probing: Using version 0x10004
[    2.650439] hv_pci aed388e8-de3a-45a5-bb0c-ee534e806689: PCI host bridge to bus de3a:00
[    2.650756] pci_bus de3a:00: root bus resource [mem 0x9ffe08000-0x9ffe0afff window]
[    2.651045] pci_bus de3a:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.652320] pci de3a:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.653496] pci de3a:00:00.0: reg 0x10: [mem 0x9ffe08000-0x9ffe08fff 64bit]
[    2.654363] pci de3a:00:00.0: reg 0x18: [mem 0x9ffe09000-0x9ffe09fff 64bit]
[    2.655227] pci de3a:00:00.0: reg 0x20: [mem 0x9ffe0a000-0x9ffe0afff 64bit]
[    2.659234] pci_bus de3a:00: busn_res: [bus 00-ff] end is updated to 00
[    2.659501] pci de3a:00:00.0: BAR 0: assigned [mem 0x9ffe08000-0x9ffe08fff 64bit]
[    2.660251] pci de3a:00:00.0: BAR 2: assigned [mem 0x9ffe09000-0x9ffe09fff 64bit]
[    2.660989] pci de3a:00:00.0: BAR 4: assigned [mem 0x9ffe0a000-0x9ffe0afff 64bit]
[    2.663932] 9pnet_virtio: no channels available for device drvfs
[    2.664237] init: (1) WARNING: mount: waiting for virtio device...
[    2.770766] hv_pci 81d9a8d7-9fce-46b7-9ff9-b146da3089e4: PCI VMBus probing: Using version 0x10004
[    2.772061] hv_pci 81d9a8d7-9fce-46b7-9ff9-b146da3089e4: PCI host bridge to bus 9fce:00
[    2.772372] pci_bus 9fce:00: root bus resource [mem 0x9ffe0c000-0x9ffe0efff window]
[    2.772664] pci_bus 9fce:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.773910] pci 9fce:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.775050] pci 9fce:00:00.0: reg 0x10: [mem 0x9ffe0c000-0x9ffe0cfff 64bit]
[    2.775911] pci 9fce:00:00.0: reg 0x18: [mem 0x9ffe0d000-0x9ffe0dfff 64bit]
[    2.776761] pci 9fce:00:00.0: reg 0x20: [mem 0x9ffe0e000-0x9ffe0efff 64bit]
[    2.780694] pci_bus 9fce:00: busn_res: [bus 00-ff] end is updated to 00
[    2.780986] pci 9fce:00:00.0: BAR 0: assigned [mem 0x9ffe0c000-0x9ffe0cfff 64bit]
[    2.781703] pci 9fce:00:00.0: BAR 2: assigned [mem 0x9ffe0d000-0x9ffe0dfff 64bit]
[    2.781986] 9pnet_virtio: no channels available for device drvfs
[    2.782311] init: (1) WARNING: mount: waiting for virtio device...
[    2.782471] pci 9fce:00:00.0: BAR 4: assigned [mem 0x9ffe0e000-0x9ffe0efff 64bit]
[    2.898491] hv_pci 86858b54-a9ff-42c1-85be-12646bde6957: PCI VMBus probing: Using version 0x10004
[    2.899831] hv_pci 86858b54-a9ff-42c1-85be-12646bde6957: PCI host bridge to bus a9ff:00
[    2.900148] pci_bus a9ff:00: root bus resource [mem 0x9ffe10000-0x9ffe12fff window]
[    2.900437] pci_bus a9ff:00: No busn resource found for root bus, will use [bus 00-ff]
[    2.901684] pci a9ff:00:00.0: [1af4:1049] type 00 class 0x010000
[    2.902856] pci a9ff:00:00.0: reg 0x10: [mem 0x9ffe10000-0x9ffe10fff 64bit]
[    2.903716] pci a9ff:00:00.0: reg 0x18: [mem 0x9ffe11000-0x9ffe11fff 64bit]
[    2.904609] pci a9ff:00:00.0: reg 0x20: [mem 0x9ffe12000-0x9ffe12fff 64bit]
[    2.908519] pci_bus a9ff:00: busn_res: [bus 00-ff] end is updated to 00
[    2.908842] pci a9ff:00:00.0: BAR 0: assigned [mem 0x9ffe10000-0x9ffe10fff 64bit]
[    2.909600] pci a9ff:00:00.0: BAR 2: assigned [mem 0x9ffe11000-0x9ffe11fff 64bit]
[    2.910335] pci a9ff:00:00.0: BAR 4: assigned [mem 0x9ffe12000-0x9ffe12fff 64bit]
[    3.009967] hv_pci 14a6c250-2f5e-4544-972d-e4af992a128a: PCI VMBus probing: Using version 0x10004
[    3.011462] hv_pci 14a6c250-2f5e-4544-972d-e4af992a128a: PCI host bridge to bus 2f5e:00
[    3.011804] pci_bus 2f5e:00: root bus resource [mem 0x9ffe14000-0x9ffe16fff window]
[    3.012172] pci_bus 2f5e:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.013562] pci 2f5e:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.014756] pci 2f5e:00:00.0: reg 0x10: [mem 0x9ffe14000-0x9ffe14fff 64bit]
[    3.015633] pci 2f5e:00:00.0: reg 0x18: [mem 0x9ffe15000-0x9ffe15fff 64bit]
[    3.016516] pci 2f5e:00:00.0: reg 0x20: [mem 0x9ffe16000-0x9ffe16fff 64bit]
[    3.020834] pci_bus 2f5e:00: busn_res: [bus 00-ff] end is updated to 00
[    3.021157] pci 2f5e:00:00.0: BAR 0: assigned [mem 0x9ffe14000-0x9ffe14fff 64bit]
[    3.022095] pci 2f5e:00:00.0: BAR 2: assigned [mem 0x9ffe15000-0x9ffe15fff 64bit]
[    3.022867] pci 2f5e:00:00.0: BAR 4: assigned [mem 0x9ffe16000-0x9ffe16fff 64bit]
[    3.030227] 9pnet_virtio: no channels available for device drvfs
[    3.030578] init: (1) WARNING: mount: waiting for virtio device...
[    3.143192] hv_pci ed843176-e057-48d4-ab94-623a027b1629: PCI VMBus probing: Using version 0x10004
[    3.144600] hv_pci ed843176-e057-48d4-ab94-623a027b1629: PCI host bridge to bus e057:00
[    3.144927] pci_bus e057:00: root bus resource [mem 0x9ffe18000-0x9ffe1afff window]
[    3.145222] pci_bus e057:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.146529] pci e057:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.147766] pci e057:00:00.0: reg 0x10: [mem 0x9ffe18000-0x9ffe18fff 64bit]
[    3.148685] pci e057:00:00.0: reg 0x18: [mem 0x9ffe19000-0x9ffe19fff 64bit]
[    3.149556] pci e057:00:00.0: reg 0x20: [mem 0x9ffe1a000-0x9ffe1afff 64bit]
[    3.153686] pci_bus e057:00: busn_res: [bus 00-ff] end is updated to 00
[    3.154060] pci e057:00:00.0: BAR 0: assigned [mem 0x9ffe18000-0x9ffe18fff 64bit]
[    3.154855] pci e057:00:00.0: BAR 2: assigned [mem 0x9ffe19000-0x9ffe19fff 64bit]
[    3.155592] pci e057:00:00.0: BAR 4: assigned [mem 0x9ffe1a000-0x9ffe1afff 64bit]
[    3.188714] hv_pci c963c476-4546-43da-a32c-7e4ddeb55d93: PCI VMBus probing: Using version 0x10004
[    3.190328] hv_pci c963c476-4546-43da-a32c-7e4ddeb55d93: PCI host bridge to bus 4546:00
[    3.190732] pci_bus 4546:00: root bus resource [mem 0x9ffe1c000-0x9ffe1efff window]
[    3.191104] pci_bus 4546:00: No busn resource found for root bus, will use [bus 00-ff]
[    3.192578] pci 4546:00:00.0: [1af4:1049] type 00 class 0x010000
[    3.193870] pci 4546:00:00.0: reg 0x10: [mem 0x9ffe1c000-0x9ffe1cfff 64bit]
[    3.194935] pci 4546:00:00.0: reg 0x18: [mem 0x9ffe1d000-0x9ffe1dfff 64bit]
[    3.196022] pci 4546:00:00.0: reg 0x20: [mem 0x9ffe1e000-0x9ffe1efff 64bit]
[    3.200503] pci_bus 4546:00: busn_res: [bus 00-ff] end is updated to 00
[    3.200855] pci 4546:00:00.0: BAR 0: assigned [mem 0x9ffe1c000-0x9ffe1cfff 64bit]
[    3.201736] pci 4546:00:00.0: BAR 2: assigned [mem 0x9ffe1d000-0x9ffe1dfff 64bit]
[    3.202501] pci 4546:00:00.0: BAR 4: assigned [mem 0x9ffe1e000-0x9ffe1efff 64bit]
[   49.723721] hv_balloon: Max. dynamic memory size: 16384 MB
From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:58:02 2022
Date: Wed, 1 Jun 2022 19:57:43 +0200
From: Christoph Hellwig <hch@lst.de>
To: Nathan Chancellor <nathan@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, iommu@lists.linux-foundation.org,
	x86@kernel.org, Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <20220601175743.GA28082@lst.de>
References: <20220404050559.132378-1-hch@lst.de> <20220404050559.132378-10-hch@lst.de> <YpehC7BwBlnuxplF@dev-arch.thelio-3990X> <20220601173441.GB27582@lst.de> <YpemDuzdoaO3rijX@Ryzen-9-3900X.>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YpemDuzdoaO3rijX@Ryzen-9-3900X.>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 01, 2022 at 10:46:54AM -0700, Nathan Chancellor wrote:
> On Wed, Jun 01, 2022 at 07:34:41PM +0200, Christoph Hellwig wrote:
> > Can you send me the full dmesg and the content of
> > /sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?
> 
> Sure thing, they are attached! If there is anything else I can provide
> or test, I am more than happy to do so.

Nothing interesting.  But the performance numbers almost look like
swiotlb=force was being ignored before (even if I can't explain why).
Do you, by any chance, get similar performance with the new kernel
without swiotlb=force as with the old kernel with that argument?



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:58:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340698.565799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbe-00065M-J7; Wed, 01 Jun 2022 17:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340698.565799; Wed, 01 Jun 2022 17:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbe-00065F-Fo; Wed, 01 Jun 2022 17:58:06 +0000
Received: by outflank-mailman (input) for mailman id 340698;
 Wed, 01 Jun 2022 17:58:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Edd5=WI=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwSbc-0005nF-PR
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:58:04 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6237e503-e1d4-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 19:58:03 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id a15so3995939lfb.9
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 10:58:04 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 f1-20020a2e1f01000000b002555bc8f782sm435358ljf.39.2022.06.01.10.58.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 01 Jun 2022 10:58:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6237e503-e1d4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=k1wgktw6Jz1Aqx56EJW/aeWx4Yi6q/5k3Q6KwTdWZ5Y=;
        b=HIGwhXO/yHVx7Hoe08gc1p3tbdz+vN0kP33/Nrsy1wH224GSjSJ4Mr5OJERNkNTke2
         /31/mlMM/Ee9Z87DkXB9Y/LIhS2POLiwykBv/JCy2Bs4FczGL/LMJJJqvyUfeY3wsfuA
         sVzpJD+UNv50U46dlc92i4GMF0rbID240SL77tadHtKf3pBKEVk5fAXDEJkYPakfwmHe
         KcbRhSbDYi8Scowih7Z+s3tl3UVpQ57PIvQ6597F2ymzHQXq/QGQSvjb6XSeiA0fUmhM
         4ENi9SKnWzkXVHHbteEn8+JU2PCE3zWVqUV5O/X9os9CMLtTv4Ai2sdhyomVCtE9Lsfh
         Uwyg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=k1wgktw6Jz1Aqx56EJW/aeWx4Yi6q/5k3Q6KwTdWZ5Y=;
        b=jDKLymu4RApmDUVX+IKk+rTQYB1O4YgIQppA6b0vTUjGRKwT0hZflIqPUb7M68JM0f
         CMLt+WaEmQKuQixGQoO+M8YZs2IV+nDxTz8b+FI7aLXbRD00AMn2f+ijstFhtIdOiHHg
         AX/x3Ejy8IWC0wgwoRdOQaBnxwCYOa+3gunHzZwLP5ya2cgHiiAh4cB94DF4Jo8TUl5E
         Givi2RMuao6rETOdloKX6E8XCfDehcs2ntu0jnCDQyIreKQx8Dkif7W0lHJukBnCt9a3
         hqO7jKbodtBKfCaQwswZUouLRZfIAz8kgfFBwhELnROyRpZORr1lwX+FFwqZoIJK4tUE
         equw==
X-Gm-Message-State: AOAM530ovccMxrohH9mpXDo40smBthZxbivTonWmn8Fg6nWWp27oTb15
	e+zoCqJYNayDxQ6PqfY5UZOyFsEabT8=
X-Google-Smtp-Source: ABdhPJyoQiFwkcGMxtwpgpCGnRfxFp9WKbrIiktUMcfiFBCsfcHF404xwv5VJbaYlmJRFQm6T6bv3w==
X-Received: by 2002:a05:6512:a89:b0:478:80fe:29c0 with SMTP id m9-20020a0565120a8900b0047880fe29c0mr487116lfu.682.1654106283289;
        Wed, 01 Jun 2022 10:58:03 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>,
	Henry Wang <Henry.Wang@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V9 0/2] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Wed,  1 Jun 2022 20:57:39 +0300
Message-Id: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add the missing virtio-mmio bits to the Xen toolstack on Arm.
The Virtio support for the toolstack [1] was postponed, as the main target was to upstream IOREQ/DM
support on Arm in the first place. Now that IOREQ support is in, we can resume the Virtio
enabling work. You can find the previous discussions at [2].

The patch series [3] is based on the recent "staging" branch
(49dd52fb1311dadab29f6634d0bc1f4c022c357a tools/xenstore: fix event sending in introduce_domain())
and was tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a virtio-mmio based virtio-disk
backend [4] running in Dom0 (or a driver domain) and an unmodified Linux guest running on the existing
virtio-blk driver (frontend). No issues were observed. Guest domain 'reboot/destroy' use-cases work properly.

Any feedback/help would be highly appreciated.

[1]
https://lore.kernel.org/xen-devel/1610488352-18494-24-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1610488352-18494-25-git-send-email-olekstysh@gmail.com/
[2]
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02536.html
https://lore.kernel.org/xen-devel/1621626361-29076-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1638982784-14390-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1649442065-8332-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/

[3] https://github.com/otyshchenko1/xen/commits/libxl_virtio4
[4] https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant

Julien Grall (1):
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (1):
  libxl: Add support for Virtio disk configuration

 docs/man/xl-disk-configuration.5.pod.in   |  38 +-
 tools/golang/xenlight/helpers.gen.go      |   8 +
 tools/golang/xenlight/types.gen.go        |  18 +
 tools/include/libxl.h                     |   7 +
 tools/libs/light/libxl_arm.c              | 121 +++-
 tools/libs/light/libxl_device.c           |  62 +-
 tools/libs/light/libxl_disk.c             | 140 ++++-
 tools/libs/light/libxl_internal.h         |   2 +
 tools/libs/light/libxl_types.idl          |  18 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 959 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   9 +
 tools/xl/xl_block.c                       |  11 +
 xen/include/public/arch-arm.h             |   7 +
 16 files changed, 923 insertions(+), 482 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:58:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:58:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340699.565810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbh-0006NA-TE; Wed, 01 Jun 2022 17:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340699.565810; Wed, 01 Jun 2022 17:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbh-0006N1-Pg; Wed, 01 Jun 2022 17:58:09 +0000
Received: by outflank-mailman (input) for mailman id 340699;
 Wed, 01 Jun 2022 17:58:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Edd5=WI=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwSbf-0005nF-Nt
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:58:07 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6397a7ab-e1d4-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 19:58:06 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id o15so2818760ljp.10
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 10:58:06 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 f1-20020a2e1f01000000b002555bc8f782sm435358ljf.39.2022.06.01.10.58.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 01 Jun 2022 10:58:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6397a7ab-e1d4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=QLfhssRYKH9JFmc8Xi4ALxksAwYW0L5cZrbrOlebBOI=;
        b=KLVWoX7Dswhgi7kfe+Kogc2ANzEvmsdecYTfz/4yYZx8nnJ7O1F8DzMuK7ZlbxwHR5
         GYQ9HV0t+LdrDx8hzOK3Ua6hzlaZMHOyz31O1HCQQ2buxBOt/Qou96SMAjk7NdwH5Uso
         jrtkIv8jD+G84c//QXProvywfnbQK43GJ4qYIojq4/6gdAm9zWqScwKEgOb8vvhE8U2A
         FoIRKcj5LqvHK3adxk8YJ2icl3oZxcUj5W8Ov0c+LVPICA2V3TkKhXALchEJAjp0bflI
         N7Jsjrhm0Yq6g+j125Y7PFVRVr/1MvWp376RLHOwQxTg0/fpJHO6B7fFv554YOAEoCB7
         5NfA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=QLfhssRYKH9JFmc8Xi4ALxksAwYW0L5cZrbrOlebBOI=;
        b=Fl3mhYuCWNMQrmgdTSp7BcQ2ZFbQKOCzUHARbau0LDL3u3o9fj1VseI3QhS0FCB/vm
         pf01rj4Em9KKcL/3YokOFO+3KkAK4XH98vCUZYIxHKNlv2T9shBTm16FKW/HzH3oMakV
         5wCebT9yMYoW7gP3zLDiLDuXE9/iZ2a8wg22GPHWzWYpwnftXOwt7tKQU0+sR9mr8rnt
         qW/BrgidVG6u4UWUaEuyzWMoWjxHAm++I1SbM0jABXZcimmtNVXCwrnZTVDQ+Pvug126
         HetRyFhDXY4EESAC4NiNMrrqth3xsqkcXDnyL9tRnRXK+I4bNWbrtJyd8KQYCrZs99Ax
         DmJw==
X-Gm-Message-State: AOAM5323FQIFDDS8AxRJ+aW0F6OOM9g2GX35ra5NCx30tVajbdMuB6VC
	lVoQ6Y3dYoTs+UCxNRrO+9OagpQO170=
X-Google-Smtp-Source: ABdhPJxpXWEtWZ5OrURxfAiwQWN52eDsnbdJN+9PrkY4vPBfbN4ErTQGC2wi2B536AB44b/+qPP1iw==
X-Received: by 2002:a05:651c:244:b0:253:ecad:a4ee with SMTP id x4-20020a05651c024400b00253ecada4eemr28489912ljn.21.1654106285723;
        Wed, 01 Jun 2022 10:58:05 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V9 2/2] libxl: Introduce basic virtio-mmio support on Arm
Date: Wed,  1 Jun 2022 20:57:41 +0300
Message-Id: <1654106261-28044-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <julien.grall@arm.com>

This patch introduces helpers to allocate Virtio MMIO params
(IRQ and memory region) and to create the corresponding device node in
the Guest device-tree with the allocated params. In order to deal
with multiple Virtio devices, corresponding ranges are reserved.
For now, we reserve 1MB for memory regions and 10 SPIs.

As these helpers should be used for every Virtio device attached
to the Guest, call them for Virtio disk(s).
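
For reference, the device node that make_virtio_mmio_node() (added below)
generates for the first allocated device would look roughly like this; the
interrupt specifier is shown in conventional DT form and is illustrative
(libxl additionally encodes a CPU mask into the trigger cell):

```
virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x02000000 0x0 0x200>;   /* base, VIRTIO_MMIO_DEV_SIZE */
        interrupts = <0 1 1>;               /* SPI 33 => 33 - 32, edge rising */
        dma-coherent;
};
```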

Please note, with statically allocated Virtio IRQs there is
a risk of a clash with the physical IRQs of passthrough devices.
For the first version this is fine, but we should consider allocating
the Virtio IRQs automatically. Thankfully, we know in advance which
IRQs will be used for passthrough, so we are able to choose
non-clashing ones.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - drop an extra "virtio" configuration option
   - update patch description
   - use CONTAINER_OF instead of own implementation
   - reserve ranges for Virtio MMIO params and put them
     in correct location
   - create helpers to allocate Virtio MMIO params, add
     corresponding sanity-checks
   - add comment why MMIO size 0x200 is chosen
   - update debug print
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging

Changes V6 -> V7:
   - rebase on current staging
   - add T-b and R-b tags
   - update according to the recent changes to
     "libxl: Add support for Virtio disk configuration"

Changes V7 -> V8:
   - drop T-b and R-b tags
   - make virtio_mmio_base/irq global variables to be local in
     libxl__arch_domain_prepare_config() and initialize them at
     the beginning of the function, then rework alloc_virtio_mmio_base/irq()
     to take a pointer to virtio_mmio_base/irq variables as an argument
   - update according to the recent changes to
     "libxl: Add support for Virtio disk configuration"

Changes V8 -> V9:
   - Stefano already gave his Reviewed-by, I dropped it due to the changes
   - remove the second set of parentheses for check in alloc_virtio_mmio_base()
   - clarify the updating of "nr_spis" right after num_disks loop in
     libxl__arch_domain_prepare_config() and add a comment on top of it
   - use GCSPRINTF() instead of using a buffer of a static size
     calculated by hand in make_virtio_mmio_node()
---
 tools/libs/light/libxl_arm.c  | 121 +++++++++++++++++++++++++++++++++++++++++-
 xen/include/public/arch-arm.h |   7 +++
 2 files changed, 126 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0..9be9b2a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,46 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+/*
+ * There are no clear requirements for the total size of the Virtio MMIO region.
+ * The size of the control registers is 0x100 and the device-specific
+ * configuration registers start at offset 0x100; however, their size depends
+ * on the device and the driver. Pick the biggest known size at the moment to
+ * cover most of the devices (also consider allowing the user to configure the
+ * size via a config file for the ones not conforming with the proposed value).
+ */
+#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
+
+static uint64_t alloc_virtio_mmio_base(libxl__gc *gc, uint64_t *virtio_mmio_base)
+{
+    uint64_t base = *virtio_mmio_base;
+
+    /* Make sure we have enough reserved resources */
+    if (base + VIRTIO_MMIO_DEV_SIZE >
+        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
+            base);
+        return 0;
+    }
+    *virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
+
+    return base;
+}
+
+static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc, uint32_t *virtio_mmio_irq)
+{
+    uint32_t irq = *virtio_mmio_irq;
+
+    /* Make sure we have enough reserved resources */
+    if (irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n", irq);
+        return 0;
+    }
+    (*virtio_mmio_irq)++;
+
+    return irq;
+}
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -26,8 +66,10 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq = 0;
+    bool vuart_enabled = false, virtio_enabled = false;
+    uint64_t virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
+    uint32_t virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +81,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    for (i = 0; i < d_config->num_disks; i++) {
+        libxl_device_disk *disk = &d_config->disks[i];
+
+        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+            disk->base = alloc_virtio_mmio_base(gc, &virtio_mmio_base);
+            if (!disk->base)
+                return ERROR_FAIL;
+
+            disk->irq = alloc_virtio_mmio_irq(gc, &virtio_mmio_irq);
+            if (!disk->irq)
+                return ERROR_FAIL;
+
+            if (virtio_irq < disk->irq)
+                virtio_irq = disk->irq;
+            virtio_enabled = true;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
+                disk->vdev, disk->irq, disk->base);
+        }
+    }
+
+    /*
+     * Every virtio-mmio device uses one emulated SPI. If Virtio devices are
+     * present, make sure that we allocate enough SPIs for them.
+     * The resulting "nr_spis" needs to cover the highest possible SPI.
+     */
+    if (virtio_enabled)
+        nr_spis = max(nr_spis, virtio_irq - 32 + 1);
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +129,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -787,6 +865,37 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    const char *name = GCSPRINTF("virtio@%"PRIx64, base);
+
+    res = fdt_begin_node(fdt, name);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, VIRTIO_MMIO_DEV_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -988,6 +1097,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
     size_t fdt_size = 0;
     int pfdt_size = 0;
     libxl_domain_build_info *const info = &d_config->b_info;
+    unsigned int i;
 
     const libxl_version_info *vers;
     const struct arch_info *ainfo;
@@ -1094,6 +1204,13 @@ next_resize:
         if (d_config->num_pcidevs)
             FDT( make_vpci_node(gc, fdt, ainfo, dom) );
 
+        for (i = 0; i < d_config->num_disks; i++) {
+            libxl_device_disk *disk = &d_config->disks[i];
+
+            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+        }
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index ab05fe1..c8b6058 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -407,6 +407,10 @@ typedef uint64_t xen_callback_t;
 
 /* Physical Address Space */
 
+/* Virtio MMIO mappings */
+#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
+
 /*
  * vGIC mappings: Only one set of mapping is used by the guest.
  * Therefore they can overlap.
@@ -493,6 +497,9 @@ typedef uint64_t xen_callback_t;
 
 #define GUEST_VPL011_SPI        32
 
+#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
+#define GUEST_VIRTIO_MMIO_SPI_LAST    43
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 17:58:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 17:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340700.565821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbk-0006fE-8q; Wed, 01 Jun 2022 17:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340700.565821; Wed, 01 Jun 2022 17:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSbk-0006ez-3K; Wed, 01 Jun 2022 17:58:12 +0000
Received: by outflank-mailman (input) for mailman id 340700;
 Wed, 01 Jun 2022 17:58:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Edd5=WI=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwSbi-0006Qx-HK
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 17:58:10 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 63ef8cc4-e1d4-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 19:58:06 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id j10so3987000lfe.12
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 10:58:06 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 f1-20020a2e1f01000000b002555bc8f782sm435358ljf.39.2022.06.01.10.58.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 01 Jun 2022 10:58:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63ef8cc4-e1d4-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=x8A1wlT3HfZQFThS6Pms3+KAZSMNs1nwi+xyFk+yUdc=;
        b=df0WXHZGwhfjvPnjAxC7/GY6xDTP+X3KXOqP2pA0FDuzwUa5U4pujRBZiiJIrgGh9y
         JvPzEfE8gwaAZl3p0d4ij+plmu+Z08v8TLZHj+KIeT7MzDjL6oppNDmsLTeH5s8GSJtE
         qHnmtvldsa97UPmxQf7g2wVxO+3bFfbOXc0+m6JiA03g/FyWmqtJBDcuKJr4DONCRmLR
         9oTSDgzWdI8SVdkpcssSrrvqBb4NacMMz9vZW65qfl2Uf6NdSAeemLyvZUiiWlpmkuxP
         2npZksrKlO6wFhaiFdGa7sfVjb2050RIkRr9R75/GdBFknovS9t4WZBQG37TWY+V29wL
         /r5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=x8A1wlT3HfZQFThS6Pms3+KAZSMNs1nwi+xyFk+yUdc=;
        b=2sOmy2FGauTzC3flqYGF+ltUCDsIbj4fUb/6F+OOjdOjlV1BJzEirPbWewSmkxXZWW
         AKit3DLRlWSF7Vs8EJXU5QnxJqfvXuR+OUCx+X3ESAIF2p21sA94HrRvJ54Z+e5IYnmz
         6ShPEQNxqDanpdIBIl7OvToF0ceXf/pB7+Pe+OZcm/3A0BOeS/czIjhEyjAnL8G0mcb8
         uYUtmqqzSXe2zWMY2tMUBT9HxzwylM0HLzFYDVijvL+XPVNHTyflW/OAibEQLkkcFfJl
         saCPcXNS5cxkBwLRBoGWtHB68m9JTbXtF07siM1QqKJFWye8wHVruZjmh6B54Tq7BRpG
         2WkA==
X-Gm-Message-State: AOAM533pqhMeT7cVxlamzBVvXoaLTlOzSvbOIBfnHhYPuFABNHxN2DKY
	IpyyfQkoG+/hGTjBDy7NTH1n/blKuRY=
X-Google-Smtp-Source: ABdhPJwZa5eQWbNcOFizwLiGov6PXztuKkCWinaACkr1K90rD3Pup3TVFdnmIDcynnE8TwC3+yh+lA==
X-Received: by 2002:a05:6512:169e:b0:470:2124:63fb with SMTP id bu30-20020a056512169e00b00470212463fbmr478280lfb.616.1654106284655;
        Wed, 01 Jun 2022 10:58:04 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH V9 1/2] libxl: Add support for Virtio disk configuration
Date: Wed,  1 Jun 2022 20:57:40 +0300
Message-Id: <1654106261-28044-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting a virtio-mmio
based virtio-disk backend (emulator) which is intended to run outside of
Qemu and can be run in any domain.
Although the Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack's point of view:
 - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
   written to Xenstore is fetched by the frontend currently ("vdev"
   is not passed to the frontend). But this might need to be revised
   in the future, so frontend data might be written to Xenstore in order
   to support hotplugging virtio devices or passing the backend domain id
   on arches where the device-tree is not available.
 - the ring-ref/event-channel are not used for the backend<->frontend
   communication; the proposed IPC for Virtio is IOREQ/DM
it is still a "block device" and ought to be integrated into the existing
"disk" handling. So, re-use (and adapt) the "disk" parsing/configuration
logic to deal with Virtio devices as well.

For the immediate purpose, and for the ability to extend this support to
other use-cases in the future (Qemu, virtio-pci, etc.), perform the
following actions:
- Add a new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
  that in the configuration
- Introduce new disk "specification" and "transport" fields in struct
  libxl_device_disk. Both are written to Xenstore. The transport
  field is only used for the "virtio" specification and assumes
  only the "mmio" value for now.
- Introduce a new "specification" option with the "xen" communication
  protocol being the default value.
- Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as the current
  one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk model

An example of domain configuration for Virtio disk:
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']

Nothing has changed for default Xen disk configuration.
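For comparison, a default Xen PV disk entry keeps its usual form; a minimal
sketch (the device path and vdev below are illustrative, not taken from the
patch):

```
# default Xen disk: backendtype/specification may be omitted, defaults apply
disk = [ 'phy:/dev/vg/guest-volume,xvda,w' ]
```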

Please note that this patch alone is not enough for virtio-disk to work
on Xen (Arm), as for every Virtio device (including a disk) we need
to allocate Virtio MMIO params (IRQ and memory region), pass them
to the backend, and also update the guest device-tree. The subsequent
patch will add these missing bits. For the current patch,
the default "irq" and "base" are just written to Xenstore.
This is not an ideal split, but this way we avoid breaking
bisectability.
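As a rough sketch of the outcome, the backend nodes written for a virtio
disk could be expected to look as below (the paths, domain/device ids and
values here are assumptions for illustration, not output from a running
system):

```
# hypothetical Xenstore layout for a virtio disk backend (domid/devid made up)
/local/domain/0/backend/virtio_disk/1/51712/
    params        = "/dev/mmcblk0p3"
    specification = "virtio"
    transport     = "mmio"
    base          = "<default, to be allocated by the subsequent patch>"
    irq           = "<default, to be allocated by the subsequent patch>"
```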

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - update patch description
   - don't introduce a new "vdisk" configuration option with its own parsing
     logic; re-use the Xen PV block "disk" parsing/configuration logic for
     the virtio-disk
   - introduce "virtio" flag and document its usage
   - add LIBXL_HAVE_DEVICE_DISK_VIRTIO
   - update libxlu_disk_l.[ch]
   - drop num_disks variable/MAX_VIRTIO_DISKS
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging
   - use "%"PRIu64 instead of %lu for disk->base in device_disk_add()
   - update *.gen.go files

Changes V6 -> V7:
   - rebase on current staging
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description
   - rework significantly to support more flexible configuration and to
     have a more generic basic implementation that can be extended for
     other use-cases (virtio-pci, qemu, etc.)

Changes V7 -> V8:
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description and comments in the code
   - use "specification" config option instead of "protocol"
   - update libxl_types.idl and code according to new fields
     in libxl_device_disk

Changes V8 -> V9:
   - update (and harden) checks in libxl__device_disk_setdefault(),
     return error in case of incorrect settings of specification
     and transport
   - remove both asserts in device_disk_add()
   - update virtio related code in libxl__disk_from_xenstore(),
     do not fail if specification node is absent, replace
     open-coded checks of fetched specification and transport by
     libxl_disk_specification_from_string() and libxl_disk_transport_from_string()
     respectively
   - s/libxl_device_disk_get_path/libxl__device_disk_get_path
   - add a comment for virtio-mmio parameters in struct libxl_device_disk
---
 docs/man/xl-disk-configuration.5.pod.in   |  38 +-
 tools/golang/xenlight/helpers.gen.go      |   8 +
 tools/golang/xenlight/types.gen.go        |  18 +
 tools/include/libxl.h                     |   7 +
 tools/libs/light/libxl_device.c           |  62 +-
 tools/libs/light/libxl_disk.c             | 140 ++++-
 tools/libs/light/libxl_internal.h         |   2 +
 tools/libs/light/libxl_types.idl          |  18 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 959 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   9 +
 tools/xl/xl_block.c                       |  11 +
 14 files changed, 797 insertions(+), 480 deletions(-)

diff --git a/docs/man/xl-disk-configuration.5.pod.in b/docs/man/xl-disk-configuration.5.pod.in
index 71d0e86..487ffef 100644
--- a/docs/man/xl-disk-configuration.5.pod.in
+++ b/docs/man/xl-disk-configuration.5.pod.in
@@ -232,7 +232,7 @@ Specifies the backend implementation to use
 
 =item Supported values
 
-phy, qdisk
+phy, qdisk, other
 
 =item Mandatory
 
@@ -244,11 +244,13 @@ Automatically determine which backend to use.
 
 =back
 
-This does not affect the guest's view of the device.  It controls
-which software implementation of the Xen backend driver is used.
+It controls which software implementation of the backend driver is used.
+Depending on the "specification" option, this may affect the guest's view
+of the device.
 
 Not all backend drivers support all combinations of other options.
-For example, "phy" does not support formats other than "raw".
+For example, "phy" and "other" do not support formats other than "raw",
+and "other" does not support specifications other than "virtio".
 Normally this option should not be specified, in which case libxl will
 automatically determine the most suitable backend.
 
@@ -344,8 +346,36 @@ can be used to disable "hole punching" for file based backends which
 were intentionally created non-sparse to avoid fragmentation of the
 file.
 
+=item B<specification>=I<SPECIFICATION>
+
+=over 4
+
+=item Description
+
+Specifies the communication protocol (specification) to use for the chosen
+"backendtype" option
+
+=item Supported values
+
+xen, virtio
+
+=item Mandatory
+
+No
+
+=item Default value
+
+xen
+
 =back
 
+Besides forcing the toolstack to use a specific backend implementation,
+this also affects the guest's view of the device. For example, "virtio"
+requires the Virtio frontend driver (virtio-blk) to be used. Please note
+that the virtual device (vdev) is not passed to the guest in that case,
+but it must still be specified for internal purposes.
+
+=back
 
 =head1 COLO Parameters
 
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index b746ff1..00f10b9 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1751,6 +1751,10 @@ x.DirectIoSafe = bool(xc.direct_io_safe)
 if err := x.DiscardEnable.fromC(&xc.discard_enable);err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+x.Specification = DiskSpecification(xc.specification)
+x.Transport = DiskTransport(xc.transport)
+x.Irq = uint32(xc.irq)
+x.Base = uint64(xc.base)
 if err := x.ColoEnable.fromC(&xc.colo_enable);err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
@@ -1788,6 +1792,10 @@ xc.direct_io_safe = C.bool(x.DirectIoSafe)
 if err := x.DiscardEnable.toC(&xc.discard_enable); err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+xc.specification = C.libxl_disk_specification(x.Specification)
+xc.transport = C.libxl_disk_transport(x.Transport)
+xc.irq = C.uint32_t(x.Irq)
+xc.base = C.uint64_t(x.Base)
 if err := x.ColoEnable.toC(&xc.colo_enable); err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b1e84d5..cc52936 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -99,6 +99,20 @@ DiskBackendUnknown DiskBackend = 0
 DiskBackendPhy DiskBackend = 1
 DiskBackendTap DiskBackend = 2
 DiskBackendQdisk DiskBackend = 3
+DiskBackendOther DiskBackend = 4
+)
+
+type DiskSpecification int
+const(
+DiskSpecificationUnknown DiskSpecification = 0
+DiskSpecificationXen DiskSpecification = 1
+DiskSpecificationVirtio DiskSpecification = 2
+)
+
+type DiskTransport int
+const(
+DiskTransportUnknown DiskTransport = 0
+DiskTransportMmio DiskTransport = 1
 )
 
 type NicType int
@@ -643,6 +657,10 @@ Readwrite int
 IsCdrom int
 DirectIoSafe bool
 DiscardEnable Defbool
+Specification DiskSpecification
+Transport DiskTransport
+Irq uint32
+Base uint64
 ColoEnable Defbool
 ColoRestoreEnable Defbool
 ColoHost string
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 51a9b6c..cd8067b 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -528,6 +528,13 @@
 #define LIBXL_HAVE_MAX_GRANT_VERSION 1
 
 /*
+ * LIBXL_HAVE_DEVICE_DISK_SPECIFICATION indicates that 'specification' and
+ * 'transport' fields (of libxl_disk_specification and libxl_disk_transport
+ * types respectively) are present in libxl_device_disk.
+ */
+#define LIBXL_HAVE_DEVICE_DISK_SPECIFICATION 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e6025d1..a38d2e2 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -289,9 +289,16 @@ static int disk_try_backend(disk_try_backend_args *a,
                             libxl_disk_backend backend)
  {
     libxl__gc *gc = a->gc;
+    libxl_disk_specification specification = a->disk->specification;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
 
+    if ((specification == LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend != LIBXL_DISK_BACKEND_OTHER) ||
+        (specification != LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend == LIBXL_DISK_BACKEND_OTHER))
+        goto bad_specification;
+
     switch (backend) {
     case LIBXL_DISK_BACKEND_PHY:
         if (a->disk->format != LIBXL_DISK_FORMAT_RAW) {
@@ -329,6 +336,29 @@ static int disk_try_backend(disk_try_backend_args *a,
         if (a->disk->script) goto bad_script;
         return backend;
 
+    case LIBXL_DISK_BACKEND_OTHER:
+        if (a->disk->format != LIBXL_DISK_FORMAT_RAW)
+            goto bad_format;
+
+        if (a->disk->script)
+            goto bad_script;
+
+        if (libxl_defbool_val(a->disk->colo_enable))
+            goto bad_colo;
+
+        if (a->disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(DEBUG, "Disk vdev=%s, is using a storage driver domain, "
+                       "skipping physical device check", a->disk->vdev);
+            return backend;
+        }
+
+        if (libxl__try_phy_backend(a->stab.st_mode))
+            return backend;
+
+        LOG(DEBUG, "Disk vdev=%s, backend other unsuitable as phys path not a "
+                   "block device", a->disk->vdev);
+        return 0;
+
     default:
         LOG(DEBUG, "Disk vdev=%s, backend %d unknown", a->disk->vdev, backend);
         return 0;
@@ -352,6 +382,12 @@ static int disk_try_backend(disk_try_backend_args *a,
     LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with colo",
         a->disk->vdev, libxl_disk_backend_to_string(backend));
     return 0;
+
+ bad_specification:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with specification %s",
+        a->disk->vdev, libxl_disk_backend_to_string(backend),
+        libxl_disk_specification_to_string(specification));
+    return 0;
 }
 
 int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
@@ -376,8 +412,9 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
     a.gc = gc;
     a.disk = disk;
 
-    LOG(DEBUG, "Disk vdev=%s spec.backend=%s", disk->vdev,
-               libxl_disk_backend_to_string(disk->backend));
+    LOG(DEBUG, "Disk vdev=%s spec.backend=%s specification=%s", disk->vdev,
+               libxl_disk_backend_to_string(disk->backend),
+               libxl_disk_specification_to_string(disk->specification));
 
     if (disk->format == LIBXL_DISK_FORMAT_EMPTY) {
         if (!disk->is_cdrom) {
@@ -392,7 +429,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         }
         memset(&a.stab, 0, sizeof(a.stab));
     } else if ((disk->backend == LIBXL_DISK_BACKEND_UNKNOWN ||
-                disk->backend == LIBXL_DISK_BACKEND_PHY) &&
+                disk->backend == LIBXL_DISK_BACKEND_PHY ||
+                disk->backend == LIBXL_DISK_BACKEND_OTHER) &&
                disk->backend_domid == LIBXL_TOOLSTACK_DOMID &&
                !disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
@@ -408,7 +446,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         ok=
             disk_try_backend(&a, LIBXL_DISK_BACKEND_PHY) ?:
             disk_try_backend(&a, LIBXL_DISK_BACKEND_QDISK) ?:
-            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP);
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP) ?:
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_OTHER);
         if (ok)
             LOG(DEBUG, "Disk vdev=%s, using backend %s",
                        disk->vdev,
@@ -441,10 +480,25 @@ char *libxl__device_disk_string_of_backend(libxl_disk_backend backend)
         case LIBXL_DISK_BACKEND_QDISK: return "qdisk";
         case LIBXL_DISK_BACKEND_TAP: return "phy";
         case LIBXL_DISK_BACKEND_PHY: return "phy";
+        case LIBXL_DISK_BACKEND_OTHER: return "other";
+        default: return NULL;
+    }
+}
+
+char *libxl__device_disk_string_of_specification(libxl_disk_specification specification)
+{
+    switch (specification) {
+        case LIBXL_DISK_SPECIFICATION_XEN: return "xen";
+        case LIBXL_DISK_SPECIFICATION_VIRTIO: return "virtio";
         default: return NULL;
     }
 }
 
+char *libxl__device_disk_string_of_transport(libxl_disk_transport transport)
+{
+    return (transport == LIBXL_DISK_TRANSPORT_MMIO ? "mmio" : NULL);
+}
+
 const char *libxl__qemu_disk_format_string(libxl_disk_format format)
 {
     switch (format) {
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index a5ca778..e90bc25 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -163,6 +163,25 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
     rc = libxl__resolve_domid(gc, disk->backend_domname, &disk->backend_domid);
     if (rc < 0) return rc;
 
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
+        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
+        LOGD(ERROR, domid, "Transport is only supported for specification virtio");
+        return ERROR_FAIL;
+    }
+
+    /* Force transport mmio for specification virtio for now */
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
+              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
+            LOGD(ERROR, domid, "Unsupported transport for specification virtio");
+            return ERROR_FAIL;
+        }
+        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
+    }
+
     /* Force Qdisk backend for CDROM devices of guests with a device model. */
     if (disk->is_cdrom != 0 &&
         libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
@@ -204,6 +223,9 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         case LIBXL_DISK_BACKEND_QDISK:
             device->backend_kind = LIBXL__DEVICE_KIND_QDISK;
             break;
+        case LIBXL_DISK_BACKEND_OTHER:
+            device->backend_kind = LIBXL__DEVICE_KIND_VIRTIO_DISK;
+            break;
         default:
             LOGD(ERROR, domid, "Unrecognized disk backend type: %d",
                  disk->backend);
@@ -212,7 +234,8 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
 
     device->domid = domid;
     device->devid = devid;
-    device->kind  = LIBXL__DEVICE_KIND_VBD;
+    device->kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
 
     return 0;
 }
@@ -330,7 +353,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+            case LIBXL_DISK_BACKEND_OTHER:
+                dev = disk->pdev_path;
 
+                flexarray_append(back, "params");
+                flexarray_append(back, dev);
+
+                assert(device->backend_kind == LIBXL__DEVICE_KIND_VIRTIO_DISK);
+                break;
             case LIBXL_DISK_BACKEND_TAP:
                 LOG(ERROR, "blktap is not supported");
                 rc = ERROR_FAIL;
@@ -386,6 +416,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append_pair(back, "discard-enable",
                               libxl_defbool_val(disk->discard_enable) ?
                               "1" : "0");
+        flexarray_append(back, "specification");
+        flexarray_append(back, libxl__device_disk_string_of_specification(disk->specification));
+        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+            flexarray_append(back, "transport");
+            flexarray_append(back, libxl__device_disk_string_of_transport(disk->transport));
+            flexarray_append_pair(back, "base", GCSPRINTF("%"PRIu64, disk->base));
+            flexarray_append_pair(back, "irq", GCSPRINTF("%u", disk->irq));
+        }
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, GCSPRINTF("%d", disk->backend_domid));
@@ -532,6 +570,53 @@ static int libxl__disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
     }
     libxl_string_to_backend(ctx, tmp, &(disk->backend));
 
+    tmp = libxl__xs_read(gc, XBT_NULL,
+                         GCSPRINTF("%s/specification", libxl_path));
+    if (!tmp) {
+        LOG(DEBUG, "Missing xenstore node %s/specification, assuming specification xen", libxl_path);
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+    } else {
+        rc = libxl_disk_specification_from_string(tmp, &disk->specification);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/specification", libxl_path);
+            goto cleanup;
+        }
+    }
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/transport", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        rc = libxl_disk_transport_from_string(tmp, &disk->transport);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        if (disk->transport != LIBXL_DISK_TRANSPORT_MMIO) {
+            LOG(ERROR, "Only transport mmio is expected for specification virtio");
+            goto cleanup;
+        }
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/base", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/base", libxl_path);
+            goto cleanup;
+        }
+        disk->base = strtoul(tmp, NULL, 10);
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/irq", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/irq", libxl_path);
+            goto cleanup;
+        }
+        disk->irq = strtoul(tmp, NULL, 10);
+    }
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          GCSPRINTF("%s/dev", libxl_path), &len);
     if (!disk->vdev) {
@@ -575,6 +660,41 @@ cleanup:
     return rc;
 }
 
+static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
+                                       char **path)
+{
+    const char *dir;
+    int rc;
+
+    /*
+     * As we don't know exactly which device kind is used here, guess it by
+     * checking the presence of the corresponding path in Xenstore. First,
+     * try to read the path for the vbd device (default) and, if it does not
+     * exist, read the path for the virtio_disk device. This works as long
+     * as Xen PV and Virtio disk devices are not both assigned to the guest.
+     */
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    if (dir)
+        return 0;
+
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    return 0;
+}
+
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
                               const char *vdev, libxl_device_disk *disk)
 {
@@ -588,10 +708,12 @@ int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
 
     libxl_device_disk_init(disk);
 
-    libxl_path = libxl__domain_device_libxl_path(gc, domid, devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+    rc = libxl__device_disk_get_path(gc, domid, &libxl_path);
+    if (rc)
+        return rc;
 
-    rc = libxl__disk_from_xenstore(gc, libxl_path, devid, disk);
+    rc = libxl__disk_from_xenstore(gc, GCSPRINTF("%s/%d", libxl_path, devid),
+                                   devid, disk);
 
     GC_FREE;
     return rc;
@@ -605,16 +727,19 @@ int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
     char *fe_path, *libxl_path;
     char *val;
     int rc;
+    libxl__device_kind kind;
 
     diskinfo->backend = NULL;
 
     diskinfo->devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
 
-    /* tap devices entries in xenstore are written as vbd devices. */
+    /* tap device entries in xenstore are written as vbd/virtio_disk devices. */
+    kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
     fe_path = libxl__domain_device_frontend_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     libxl_path = libxl__domain_device_libxl_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     diskinfo->backend = xs_read(ctx->xsh, XBT_NULL,
                                 GCSPRINTF("%s/backend", libxl_path), NULL);
     if (!diskinfo->backend) {
@@ -1418,6 +1543,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 #define libxl__device_disk_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
+    .get_path    = libxl__device_disk_get_path,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index bdef5a6..cb9e8b3 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1493,6 +1493,8 @@ _hidden char * libxl__domain_pvcontrol_read(libxl__gc *gc,
 
 /* from xl_device */
 _hidden char *libxl__device_disk_string_of_backend(libxl_disk_backend backend);
+_hidden char *libxl__device_disk_string_of_specification(libxl_disk_specification specification);
+_hidden char *libxl__device_disk_string_of_transport(libxl_disk_transport transport);
 _hidden char *libxl__device_disk_string_of_format(libxl_disk_format format);
 _hidden const char *libxl__qemu_disk_format_string(libxl_disk_format format);
 _hidden int libxl__device_disk_set_backend(libxl__gc*, libxl_device_disk*);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2a42da2..858e32b 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -130,6 +130,18 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (1, "PHY"),
     (2, "TAP"),
     (3, "QDISK"),
+    (4, "OTHER"),
+    ])
+
+libxl_disk_specification = Enumeration("disk_specification", [
+    (0, "UNKNOWN"),
+    (1, "XEN"),
+    (2, "VIRTIO"),
+    ])
+
+libxl_disk_transport = Enumeration("disk_transport", [
+    (0, "UNKNOWN"),
+    (1, "MMIO"),
     ])
 
 libxl_nic_type = Enumeration("nic_type", [
@@ -704,6 +716,12 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("specification", libxl_disk_specification),
+    ("transport", libxl_disk_transport),
+    # Note that virtio-mmio parameters (irq and base) are for internal use
+    # by libxl and can't be modified.
+    ("irq", uint32),
+    ("base", uint64),
     # Note that the COLO configuration settings should be considered unstable.
     # They may change incompatibly in future versions of Xen.
     ("colo_enable", libxl_defbool),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index e5e6b2d..f55915e 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -297,6 +297,8 @@ int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend
         *backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(s, "qdisk")) {
         *backend = LIBXL_DISK_BACKEND_QDISK;
+    } else if (!strcmp(s, "other")) {
+        *backend = LIBXL_DISK_BACKEND_OTHER;
     } else if (!strcmp(s, "tap")) {
         p = strchr(s, ':');
         if (!p) {
diff --git a/tools/libs/util/libxlu_disk_l.c b/tools/libs/util/libxlu_disk_l.c
index 32d4b74..bb1337c 100644
--- a/tools/libs/util/libxlu_disk_l.c
+++ b/tools/libs/util/libxlu_disk_l.c
@@ -549,8 +549,8 @@ static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-#define YY_NUM_RULES 36
-#define YY_END_OF_BUFFER 37
+#define YY_NUM_RULES 37
+#define YY_END_OF_BUFFER 38
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -558,74 +558,77 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static const flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[594] =
     {   0,
-       35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
-    16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   33,   34,
-       36,   33,   34,   36,   33,   34,   36,   33,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   35,   36,
-       36,   33,   33, 8193,   33, 8193,   33,16385, 8193,   33,
-     8193,   33,   33, 8224,   33,16416,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   35,
-     8193,   33, 8193,   33, 8193, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   28, 8224,   33,16416,   33,   33,   15,
-       33,   33,   33,   33,   33,   33,   33,   33,   33, 8217,
-     8224,   33,16409,16416,   33,   33,   31, 8224,   33,16416,
-       33, 8216, 8224,   33,16408,16416,   33,   33, 8219, 8224,
-       33,16411,16416,   33,   33,   33,   33,   33,   28, 8224,
-
-       33,   28, 8224,   33,   28,   33,   28, 8224,   33,    3,
-       33,   15,   33,   33,   33,   33,   33,   30, 8224,   33,
-    16416,   33,   33,   33, 8217, 8224,   33, 8217, 8224,   33,
-     8217,   33, 8217, 8224,   33,   33,   31, 8224,   33,   31,
-     8224,   33,   31,   33,   31, 8224, 8216, 8224,   33, 8216,
-     8224,   33, 8216,   33, 8216, 8224,   33, 8219, 8224,   33,
-     8219, 8224,   33, 8219,   33, 8219, 8224,   33,   33,   10,
-       33,   33,   28, 8224,   33,   28, 8224,   33,   28, 8224,
-       28,   33,   28,   33,    3,   33,   33,   33,   33,   33,
-       33,   33,   30, 8224,   33,   30, 8224,   33,   30,   33,
-
-       30, 8224,   33,   33,   29, 8224,   33,16416, 8217, 8224,
-       33, 8217, 8224,   33, 8217, 8224, 8217,   33, 8217,   33,
-       33,   31, 8224,   33,   31, 8224,   33,   31, 8224,   31,
-       33,   31, 8216, 8224,   33, 8216, 8224,   33, 8216, 8224,
-     8216,   33, 8216,   33, 8219, 8224,   33, 8219, 8224,   33,
-     8219, 8224, 8219,   33, 8219,   33,   33,   10,   23,   10,
-        7,   33,   33,   33,   33,   33,   33,   33,   13,   33,
-       30, 8224,   33,   30, 8224,   33,   30, 8224,   30,   33,
-       30,    2,   33,   29, 8224,   33,   29, 8224,   33,   29,
-       33,   29, 8224,   16,   33,   33,   11,   33,   22,   10,
-
-       10,   23,    7,   23,    7,   33,    8,   33,   33,   33,
-       33,    6,   33,   13,   33,    2,   23,    2,   33,   29,
-     8224,   33,   29, 8224,   33,   29, 8224,   29,   33,   29,
-       16,   33,   33,   11,   23,   11,   26, 8224,   33,16416,
-       22,   23,   22,    7,    7,   23,   33,    8,   23,    8,
-       33,   33,   33,   33,    6,   23,    6,    6,   23,    6,
-       23,   33,    2,    2,   23,   33,   33,   11,   11,   23,
-       26, 8224,   33,   26, 8224,   33,   26,   33,   26, 8224,
-       22,   23,   33,    8,    8,   23,   33,   33,   17,   18,
-        6,    6,   23,    6,    6,   33,   33,   14,   33,   26,
-
-     8224,   33,   26, 8224,   33,   26, 8224,   26,   33,   26,
-       33,   33,   33,   17,   23,   17,   18,   23,   18,    6,
-        6,   33,   33,   14,   33,   20,    9,   19,   17,   17,
-       23,   18,   18,   23,    6,    5,    6,   33,   21,   20,
-       23,   20,    9,   23,    9,   19,   23,   19,    4,    6,
-        5,    6,   33,   21,   23,   21,   20,   20,   23,    9,
-        9,   23,   19,   19,   23,    4,    6,   12,   33,   21,
-       21,   23,   12,   33
+       36,   36,   38,   34,   35,   37, 8193,   34,   35,   37,
+    16385, 8193,   34,   37,16385,   34,   35,   37,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   34,   35,
+       37,   34,   35,   37,   34,   35,   37,   34,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   36,   37,
+       37,   34,   34, 8193,   34, 8193,   34,16385, 8193,   34,
+     8193,   34,   34, 8225,   34,16417,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       36, 8193,   34, 8193,   34, 8193, 8225,   34, 8225,   34,
+     8225,   24,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34, 8225,   34, 8225,
+       34, 8225,   24,   34,   34,   29, 8225,   34,16417,   34,
+       34,   16,   34,   34,   34,   34,   34,   34,   34,   34,
+       34, 8218, 8225,   34,16410,16417,   34,   34,   32, 8225,
+       34,16417,   34, 8217, 8225,   34,16409,16417,   34,   34,
+       34, 8220, 8225,   34,16412,16417,   34,   34,   34,   34,
+
+       34,   29, 8225,   34,   29, 8225,   34,   29,   34,   29,
+     8225,   34,    3,   34,   16,   34,   34,   34,   34,   34,
+       31, 8225,   34,16417,   34,   34,   34, 8218, 8225,   34,
+     8218, 8225,   34, 8218,   34, 8218, 8225,   34,   34,   32,
+     8225,   34,   32, 8225,   34,   32,   34,   32, 8225, 8217,
+     8225,   34, 8217, 8225,   34, 8217,   34, 8217, 8225,   34,
+       34, 8220, 8225,   34, 8220, 8225,   34, 8220,   34, 8220,
+     8225,   34,   34,   11,   34,   34,   29, 8225,   34,   29,
+     8225,   34,   29, 8225,   29,   34,   29,   34,    3,   34,
+       34,   34,   34,   34,   34,   34,   31, 8225,   34,   31,
+
+     8225,   34,   31,   34,   31, 8225,   34,   34,   30, 8225,
+       34,16417, 8218, 8225,   34, 8218, 8225,   34, 8218, 8225,
+     8218,   34, 8218,   34,   34,   32, 8225,   34,   32, 8225,
+       34,   32, 8225,   32,   34,   32, 8217, 8225,   34, 8217,
+     8225,   34, 8217, 8225, 8217,   34, 8217,   34,   34, 8220,
+     8225,   34, 8220, 8225,   34, 8220, 8225, 8220,   34, 8220,
+       34,   34,   11,   24,   11,    7,   34,   34,   34,   34,
+       34,   34,   34,   14,   34,   31, 8225,   34,   31, 8225,
+       34,   31, 8225,   31,   34,   31,    2,   34,   30, 8225,
+       34,   30, 8225,   34,   30,   34,   30, 8225,   17,   34,
+
+       34,   12,   34,   34,   23,   11,   11,   24,    7,   24,
+        7,   34,    8,   34,   34,   34,   34,    6,   34,   14,
+       34,    2,   24,    2,   34,   30, 8225,   34,   30, 8225,
+       34,   30, 8225,   30,   34,   30,   17,   34,   34,   12,
+       24,   12,   34,   27, 8225,   34,16417,   23,   24,   23,
+        7,    7,   24,   34,    8,   24,    8,   34,   34,   34,
+       34,    6,   24,    6,    6,   24,    6,   24,   34,    2,
+        2,   24,   34,   34,   12,   12,   24,   34,   27, 8225,
+       34,   27, 8225,   34,   27,   34,   27, 8225,   23,   24,
+       34,    8,    8,   24,   34,   34,   18,   19,    6,    6,
+
+       24,    6,    6,   34,   34,   15,   34,   34,   27, 8225,
+       34,   27, 8225,   34,   27, 8225,   27,   34,   27,   34,
+       34,   34,   18,   24,   18,   19,   24,   19,    6,    6,
+       34,   34,   15,   34,   34,   21,    9,   20,   18,   18,
+       24,   19,   19,   24,    6,    5,    6,   34,   22,   34,
+       21,   24,   21,    9,   24,    9,   20,   24,   20,    4,
+        6,    5,    6,   34,   22,   24,   22,   34,   21,   21,
+       24,    9,    9,   24,   20,   20,   24,    4,    6,   13,
+       34,   22,   22,   24,   10,   13,   34,   10,   24,   10,
+       10,   10,   24
+
     } ;
 
-static const flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[373] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -633,39 +636,41 @@ static const flex_int16_t yy_accept[356] =
        74,   76,   79,   81,   82,   83,   84,   87,   87,   88,
        89,   90,   91,   92,   93,   94,   95,   96,   97,   98,
        99,  100,  101,  102,  103,  104,  105,  106,  107,  108,
-      109,  110,  111,  113,  115,  116,  118,  120,  121,  122,
+      109,  110,  111,  112,  114,  116,  117,  119,  121,  122,
       123,  124,  125,  126,  127,  128,  129,  130,  131,  132,
       133,  134,  135,  136,  137,  138,  139,  140,  141,  142,
-      143,  144,  145,  146,  148,  150,  151,  152,  153,  154,
-
-      158,  159,  160,  162,  163,  164,  165,  166,  167,  168,
-      169,  170,  175,  176,  177,  181,  182,  187,  188,  189,
-      194,  195,  196,  197,  198,  199,  202,  205,  207,  209,
-      210,  212,  214,  215,  216,  217,  218,  222,  223,  224,
-      225,  228,  231,  233,  235,  236,  237,  240,  243,  245,
-      247,  250,  253,  255,  257,  258,  261,  264,  266,  268,
-      269,  270,  271,  272,  273,  276,  279,  281,  283,  284,
-      285,  287,  288,  289,  290,  291,  292,  293,  296,  299,
-      301,  303,  304,  305,  309,  312,  315,  317,  319,  320,
-      321,  322,  325,  328,  330,  332,  333,  336,  339,  341,
-
-      343,  344,  345,  348,  351,  353,  355,  356,  357,  358,
-      360,  361,  362,  363,  364,  365,  366,  367,  368,  369,
-      371,  374,  377,  379,  381,  382,  383,  384,  387,  390,
-      392,  394,  396,  397,  398,  399,  400,  401,  403,  405,
-      406,  407,  408,  409,  410,  411,  412,  413,  414,  416,
-      418,  419,  420,  423,  426,  428,  430,  431,  433,  434,
-      436,  437,  441,  443,  444,  445,  447,  448,  450,  451,
-      452,  453,  454,  455,  457,  458,  460,  462,  463,  464,
-      466,  467,  468,  469,  471,  474,  477,  479,  481,  483,
-      484,  485,  487,  488,  489,  490,  491,  492,  494,  495,
-
-      496,  497,  498,  500,  503,  506,  508,  510,  511,  512,
-      513,  514,  516,  517,  519,  520,  521,  522,  523,  524,
-      526,  527,  528,  529,  530,  532,  533,  535,  536,  538,
-      539,  540,  542,  543,  545,  546,  548,  549,  551,  553,
-      554,  556,  557,  558,  560,  561,  563,  564,  566,  568,
-      570,  571,  573,  575,  575
+      143,  144,  145,  146,  147,  148,  150,  152,  153,  154,
+
+      155,  156,  160,  161,  162,  164,  165,  166,  167,  168,
+      169,  170,  171,  172,  177,  178,  179,  183,  184,  189,
+      190,  191,  192,  197,  198,  199,  200,  201,  202,  205,
+      208,  210,  212,  213,  215,  217,  218,  219,  220,  221,
+      225,  226,  227,  228,  231,  234,  236,  238,  239,  240,
+      243,  246,  248,  250,  253,  256,  258,  260,  261,  262,
+      265,  268,  270,  272,  273,  274,  275,  276,  277,  280,
+      283,  285,  287,  288,  289,  291,  292,  293,  294,  295,
+      296,  297,  300,  303,  305,  307,  308,  309,  313,  316,
+      319,  321,  323,  324,  325,  326,  329,  332,  334,  336,
+
+      337,  340,  343,  345,  347,  348,  349,  350,  353,  356,
+      358,  360,  361,  362,  363,  365,  366,  367,  368,  369,
+      370,  371,  372,  373,  374,  376,  379,  382,  384,  386,
+      387,  388,  389,  392,  395,  397,  399,  401,  402,  403,
+      404,  405,  406,  407,  409,  411,  412,  413,  414,  415,
+      416,  417,  418,  419,  420,  422,  424,  425,  426,  429,
+      432,  434,  436,  437,  439,  440,  442,  443,  444,  448,
+      450,  451,  452,  454,  455,  457,  458,  459,  460,  461,
+      462,  464,  465,  467,  469,  470,  471,  473,  474,  475,
+      476,  478,  479,  482,  485,  487,  489,  491,  492,  493,
+
+      495,  496,  497,  498,  499,  500,  502,  503,  504,  505,
+      506,  508,  509,  512,  515,  517,  519,  520,  521,  522,
+      523,  525,  526,  528,  529,  530,  531,  532,  533,  535,
+      536,  537,  538,  539,  540,  542,  543,  545,  546,  548,
+      549,  550,  551,  553,  554,  556,  557,  559,  560,  562,
+      564,  565,  567,  568,  569,  570,  572,  573,  575,  576,
+      578,  580,  582,  583,  585,  586,  588,  590,  591,  592,
+      594,  594
     } ;
 
 static const YY_CHAR yy_ec[256] =
@@ -708,216 +713,224 @@ static const YY_CHAR yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static const flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[443] =
     {   0,
-        0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
-       45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
-       69,   70,  889,   71,  888,   75,    0,  905,  893,  905,
-       91,   94,    0,    0,  103,  886,  112,    0,   89,   98,
-      113,   92,  114,   99,  100,   48,  121,  116,  119,   74,
-      124,  129,  123,  135,  132,  133,  137,  134,  138,  139,
-      141,    0,  155,    0,    0,  164,    0,    0,  849,  142,
-      152,  164,  140,  161,  165,  166,  167,  168,  169,  173,
-      174,  178,  176,  180,  184,  208,  189,  183,  192,  195,
-      215,  191,  193,  223,    0,    0,  905,  208,  204,  236,
-
-      219,  209,  238,  196,  237,  831,  242,  815,  241,  224,
-      243,  261,  244,  259,  277,  266,  286,  250,  288,  298,
-      249,  283,  274,  282,  294,  308,    0,  310,    0,  295,
-      305,  905,  308,  306,  313,  314,  342,  319,  316,  320,
-      331,    0,  349,    0,  342,  344,  356,    0,  358,    0,
-      365,    0,  367,    0,  354,  375,    0,  377,    0,  363,
-      356,  809,  327,  322,  384,    0,    0,    0,    0,  379,
-      905,  382,  384,  386,  390,  372,  392,  403,    0,  410,
-        0,  407,  413,  423,  426,    0,    0,    0,    0,  409,
-      424,  435,    0,    0,    0,    0,  437,    0,    0,    0,
-
-        0,  433,  444,    0,    0,    0,    0,  391,  440,  781,
-      905,  769,  439,  445,  444,  447,  449,  454,  453,  399,
-      464,    0,    0,    0,    0,  757,  465,  476,    0,  478,
-        0,  479,  476,  753,  462,  490,  749,  905,  745,  905,
-      483,  737,  424,  485,  487,  490,  500,  493,  905,  729,
-      905,  502,  518,    0,    0,    0,    0,  905,  498,  721,
-      905,  527,  713,    0,  705,  905,  495,  697,  905,  365,
-      521,  528,  530,  685,  905,  534,  540,  540,  657,  905,
-      537,  542,  650,  905,  553,    0,  557,    0,    0,  551,
-      641,  905,  558,  557,  633,  614,  613,  905,  547,  555,
-
-      563,  565,  569,  584,    0,    0,    0,    0,  583,  570,
-      585,  612,  905,  601,  905,  522,  580,  589,  594,  905,
-      600,  585,  563,  520,  905,  514,  905,  586,  486,  597,
-      480,  441,  905,  416,  905,  345,  905,  334,  905,  601,
-      254,  905,  242,  905,  200,  905,  151,  905,  905,  607,
-       86,  905,  905,  905,  620,  624,  627,  631,  635,  639,
-      643,  647,  651,  655,  659,  663,  667,  671,  675,  679,
-      683,  687,  691,  695,  699,  703,  707,  711,  715,  719,
-      723,  727,  731,  735,  739,  743,  747,  751,  755,  759,
-      763,  767,  771,  775,  779,  783,  787,  791,  795,  799,
-
-      803,  807,  811,  815,  819,  823,  827,  831,  835,  839,
-      843,  847,  851,  855,  859,  863,  867,  871,  875,  879,
-      883,  887,  891
+        0,    0,  936,  935,  937,  932,   33,   36,  940,  940,
+       45,   63,   31,   42,   51,   52,  925,   33,   65,   67,
+       69,   70,  924,   71,  923,   75,    0,  940,  928,  940,
+       91,   95,    0,    0,  104,  921,  113,    0,   91,   99,
+      114,   92,  115,   80,  100,   48,  119,  121,  122,   74,
+      123,  128,  131,  129,  125,  133,  135,  136,  137,  143,
+      138,  145,    0,  157,    0,    0,  168,    0,    0,  926,
+      140,  146,  165,  159,  152,  164,  155,  168,  171,  176,
+      177,  170,  180,  175,  184,  188,  212,  191,  185,  192,
+      193,  194,  219,  212,  199,  230,    0,    0,  940,  195,
+
+      200,  239,  235,  197,  246,  225,  226,  919,  244,  918,
+      243,  236,  245,  266,  248,  264,  282,  271,  291,  248,
+      270,  254,  300,  279,  296,  302,  288,  303,  311,    0,
+      315,    0,  311,  318,  940,  313,  319,  208,  313,  344,
+      321,  331,  325,  333,    0,  352,    0,  345,  347,  359,
+        0,  361,    0,  368,    0,  370,    0,  322,  366,  379,
+        0,  381,    0,  359,  357,  923,  382,  384,  392,    0,
+        0,    0,    0,  387,  940,  386,  390,  392,  329,  401,
+      397,  409,    0,  417,    0,  399,  412,  426,  429,    0,
+        0,    0,    0,  412,  427,  438,    0,    0,    0,    0,
+
+      440,    0,    0,    0,    0,  436,  405,  447,    0,    0,
+        0,    0,  438,  443,  922,  940,  921,  442,  450,  449,
+      452,  454,  459,  458,  453,  469,    0,    0,    0,    0,
+      920,  470,  481,    0,  483,    0,  484,  481,  919,  368,
+      467,  495,  918,  940,  917,  940,  488,  916,  479,  490,
+      492,  495,  505,  498,  940,  915,  940,  507,  523,    0,
+        0,    0,    0,  940,  503,  864,  940,  846,  532,  836,
+        0,  824,  940,  516,  796,  940,  513,  530,  536,  538,
+      784,  940,  542,  535,  547,  772,  940,  549,  551,  768,
+      940,  502,  562,    0,  564,    0,    0,  562,  764,  940,
+
+      544,  557,  760,  752,  744,  940,  552,  568,  571,  568,
+      581,  577,  588,    0,    0,    0,    0,  589,  580,  591,
+      736,  940,  728,  940,  601,  602,  597,  599,  940,  603,
+      720,  712,  700,  672,  940,  665,  940,  610,  656,  603,
+      648,  607,  629,  940,  627,  940,  625,  940,  624,  940,
+      607,  574,  940,  614,  572,  940,  491,  940,  433,  940,
+      940,  622,  389,  940,  303,  940,  261,  940,  204,  940,
+      940,  635,  639,  642,  646,  650,  654,  658,  662,  666,
+      670,  674,  678,  682,  686,  690,  694,  698,  702,  706,
+      710,  714,  718,  722,  726,  730,  734,  738,  742,  746,
+
+      750,  754,  758,  762,  766,  770,  774,  778,  782,  786,
+      790,  794,  798,  802,  806,  810,  814,  818,  822,  826,
+      830,  834,  838,  842,  846,  850,  854,  858,  862,  866,
+      870,  874,  878,  882,  886,  890,  894,  898,  902,  906,
+      910,  914
     } ;
 
-static const flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[443] =
     {   0,
-      354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
-      358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  359,  354,  356,  354,
-      360,  357,  361,  361,  362,   12,  356,  363,   12,   12,
-       12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
+      371,    1,  372,  372,  371,  373,  374,  374,  371,  371,
+      375,  375,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  376,  371,  373,  371,
+      377,  374,  378,  378,  379,   12,  373,  380,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  359,  360,  361,  361,  364,  365,  365,  354,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  362,   12,   12,   12,   12,
-       12,   12,   12,  364,  365,  365,  354,   12,   12,  366,
-
+       12,   12,  376,  377,  378,  378,  381,  382,  382,  371,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  367,   86,   86,  368,   12,  369,   12,   12,  370,
-       12,   12,   12,   12,   12,  371,  372,  366,  372,   12,
-       12,  354,   86,   12,   12,   12,  373,   12,   12,   12,
-      374,  375,  367,  375,   86,   86,  376,  377,  368,  377,
-      378,  379,  369,  379,   12,  380,  381,  370,  381,   12,
-       12,  382,   12,   12,  371,  372,  372,  383,  383,   12,
-      354,   86,   86,   86,   12,   12,   12,  384,  385,  373,
-      385,   12,   12,  386,  374,  375,  375,  387,  387,   86,
-       86,  376,  377,  377,  388,  388,  378,  379,  379,  389,
-
-      389,   12,  380,  381,  381,  390,  390,   12,   12,  391,
-      354,  392,   86,   12,   86,   86,   86,   12,   86,   12,
-      384,  385,  385,  393,  393,  394,   86,  395,  396,  386,
-      396,   86,   86,  397,   12,  398,  391,  354,  399,  354,
-       86,  400,   12,   86,   86,   86,  401,   86,  354,  402,
-      354,   86,  395,  396,  396,  403,  403,  354,   86,  404,
-      354,  405,  406,  406,  399,  354,   86,  407,  354,   12,
-       86,   86,   86,  408,  354,  408,  408,   86,  402,  354,
-       86,   86,  404,  354,  409,  410,  405,  410,  406,   86,
-      407,  354,   12,   86,  411,  412,  408,  354,  408,  408,
-
-       86,   86,   86,  409,  410,  410,  413,  413,   86,   12,
-       86,  414,  354,  415,  354,  408,  408,   86,   86,  354,
-      416,  417,  418,  414,  354,  415,  354,  408,  408,   86,
-      419,  420,  354,  421,  354,  422,  354,  408,  354,   86,
-      423,  354,  420,  354,  421,  354,  422,  354,  354,   86,
-      423,  354,  354,    0,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354
+       12,   12,   12,   12,   12,   12,  379,   12,   12,   12,
+       12,   12,   12,   12,   12,  381,  382,  382,  371,   12,
+
+       12,  383,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,  384,   87,   87,  385,   12,  386,   12,
+       12,   12,  387,   12,   12,   12,   12,   12,  388,  389,
+      383,  389,   12,   12,  371,   87,   12,   12,   12,  390,
+       12,   12,   12,  391,  392,  384,  392,   87,   87,  393,
+      394,  385,  394,  395,  396,  386,  396,   12,   12,  397,
+      398,  387,  398,   12,   12,  399,   12,   12,  388,  389,
+      389,  400,  400,   12,  371,   87,   87,   87,   12,   12,
+       12,  401,  402,  390,  402,   12,   12,  403,  391,  392,
+      392,  404,  404,   87,   87,  393,  394,  394,  405,  405,
+
+      395,  396,  396,  406,  406,   12,   12,  397,  398,  398,
+      407,  407,   12,   12,  408,  371,  409,   87,   12,   87,
+       87,   87,   12,   87,   12,  401,  402,  402,  410,  410,
+      411,   87,  412,  413,  403,  413,   87,   87,  414,   12,
+       12,  415,  408,  371,  416,  371,   87,  417,   12,   87,
+       87,   87,  418,   87,  371,  419,  371,   87,  412,  413,
+      413,  420,  420,  371,   87,  421,  371,   12,  422,  423,
+      423,  416,  371,   87,  424,  371,   12,   87,   87,   87,
+      425,  371,  425,  425,   87,  419,  371,   87,   87,  421,
+      371,   12,  426,  427,  422,  427,  423,   87,  424,  371,
+
+       12,   87,  428,  429,  425,  371,  425,  425,   87,   87,
+       87,   12,  426,  427,  427,  430,  430,   87,   12,   87,
+      431,  371,  432,  371,  425,  425,   87,   87,  371,   12,
+      433,  434,  435,  431,  371,  432,  371,  425,  425,   87,
+      436,   12,  437,  371,  438,  371,  439,  371,  425,  371,
+       87,  440,  371,   12,  437,  371,  438,  371,  439,  371,
+      371,   87,  440,  371,  441,  371,  442,  371,  442,  371,
+        0,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371
     } ;
 
-static const flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[975] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
        17,   17,   20,   17,   21,   22,   23,   24,   25,   17,
        26,   17,   17,   17,   32,   32,   33,   32,   32,   33,
        36,   34,   36,   42,   34,   29,   29,   29,   30,   35,
-       50,   36,   37,   38,   43,   44,   39,   36,   79,   45,
+       50,   36,   37,   38,   43,   44,   39,   36,   80,   45,
        36,   36,   40,   29,   29,   29,   30,   35,   46,   48,
        37,   38,   41,   47,   36,   49,   36,   53,   36,   36,
-       36,   56,   58,   36,   36,   55,   82,   60,   51,  342,
-       54,   61,   52,   29,   64,   32,   32,   33,   36,   65,
-
-       70,   36,   34,   29,   29,   29,   30,   36,   36,   36,
-       29,   38,   66,   66,   66,   67,   66,   71,   74,   66,
-       68,   72,   36,   36,   73,   36,   77,   78,   36,   76,
-       36,   53,   36,   36,   75,   85,   80,   83,   36,   86,
-       84,   36,   36,   36,   36,   81,   36,   36,   36,   36,
-       36,   36,   93,   89,  337,   98,   88,   29,   64,  101,
-       90,   36,   91,   65,   92,   87,   29,   95,   89,   99,
-       36,  100,   96,   36,   36,   36,   36,   36,   36,  106,
-      105,   85,   36,   36,  102,   36,  107,   36,  103,   36,
-      109,  112,   36,   36,  104,  108,  115,  110,   36,  117,
-
-       36,   36,   36,  335,   36,   36,  122,  111,   29,   29,
-       29,   30,  118,   36,  116,   29,   38,   36,   36,  113,
-      114,  119,  120,  123,   36,   29,   95,  121,   36,  134,
-      131,   96,  130,   36,  125,  124,  126,  126,   66,  127,
-      126,  132,  133,  126,  129,  333,   36,   36,  135,  137,
-       36,   36,   36,  140,  139,   35,   35,  352,   36,   36,
-       85,  141,  141,   66,  142,  141,  160,  145,  141,  144,
-       35,   35,   89,  117,  155,   36,  146,  147,  147,   66,
-      148,  147,  162,   36,  147,  150,  151,  151,   66,  152,
-      151,   36,   36,  151,  154,  120,  161,   36,  156,  156,
-
-       66,  157,  156,   36,   36,  156,  159,  164,  171,  163,
-       29,  166,   29,  168,   36,   36,  167,  170,  169,   35,
-       35,  172,   36,   36,  173,   36,  213,  184,   36,   36,
-      175,   36,  174,   29,  186,  212,   36,  349,  183,  187,
-      177,  176,  178,  178,   66,  179,  178,  182,  348,  178,
-      181,   29,  188,   35,   35,   35,   35,  189,   29,  193,
-       29,  195,  190,   36,  194,   36,  196,   29,  198,   29,
-      200,  191,   36,  199,   36,  201,  219,   29,  204,   29,
-      206,   36,  202,  205,  209,  207,   29,  166,   36,  293,
-      208,  214,  167,   35,   35,   35,   35,   35,   35,   36,
-
-       36,   36,  249,  218,  220,   29,  222,  216,   36,  217,
-      235,  223,   29,  224,  215,  226,   36,  227,  225,  346,
-       35,   35,   36,  228,  228,   66,  229,  228,   29,  186,
-      228,  231,  232,   36,  187,  233,   35,   29,  193,   29,
-      198,  234,   36,  194,  344,  199,   29,  204,  236,   36,
-       35,  241,  205,  242,   36,   35,   35,  270,   35,   35,
-       35,   35,  247,   36,   35,   35,   29,  222,  244,  262,
-      248,   36,  223,  243,  245,  246,   35,  252,   29,  254,
-       29,  256,  258,  342,  255,  259,  257,   35,   35,  339,
-       35,   35,   69,  264,   35,   35,   35,   35,   35,   35,
-
-      267,   35,   35,  275,   35,   35,   35,   35,  271,   35,
-       35,  276,  277,   35,   35,  272,  278,  315,  273,  281,
-       29,  254,  290,  313,  282,  275,  255,  285,  285,   66,
-      286,  285,   35,   35,  285,  288,  295,  298,  296,   35,
-       35,   35,   35,  298,  301,  328,  299,  294,   35,   35,
-      275,   35,   35,   35,  303,   29,  305,  300,  275,   29,
-      307,  306,   35,   35,  302,  308,  337,   36,   35,   35,
-      309,  310,  320,  316,   35,   35,   35,   35,  322,   36,
-       35,   35,  317,  275,  319,  311,   29,  305,  335,  275,
-      318,  321,  306,  323,   35,   35,   35,   35,  330,  329,
-
-       35,   35,  331,  333,  327,   35,   35,  338,   35,   35,
-      353,  340,   35,   35,  350,  325,  275,  315,   35,   35,
-       27,   27,   27,   27,   29,   29,   29,   31,   31,   31,
-       31,   36,   36,   36,   36,   62,  313,   62,   62,   63,
-       63,   63,   63,   65,  269,   65,   65,   35,   35,   35,
-       35,   69,   69,  261,   69,   94,   94,   94,   94,   96,
-      251,   96,   96,  128,  128,  128,  128,  143,  143,  143,
-      143,  149,  149,  149,  149,  153,  153,  153,  153,  158,
-      158,  158,  158,  165,  165,  165,  165,  167,  298,  167,
-      167,  180,  180,  180,  180,  185,  185,  185,  185,  187,
-
-      292,  187,  187,  192,  192,  192,  192,  194,  240,  194,
-      194,  197,  197,  197,  197,  199,  289,  199,  199,  203,
-      203,  203,  203,  205,  284,  205,  205,  210,  210,  210,
-      210,  169,  280,  169,  169,  221,  221,  221,  221,  223,
-      269,  223,  223,  230,  230,  230,  230,  189,  266,  189,
-      189,  196,  211,  196,  196,  201,  261,  201,  201,  207,
-      251,  207,  207,  237,  237,  237,  237,  239,  239,  239,
-      239,  225,  240,  225,  225,  250,  250,  250,  250,  253,
-      253,  253,  253,  255,  238,  255,  255,  260,  260,  260,
-      260,  263,  263,  263,  263,  265,  265,  265,  265,  268,
-
-      268,  268,  268,  274,  274,  274,  274,  279,  279,  279,
-      279,  257,  211,  257,  257,  283,  283,  283,  283,  287,
-      287,  287,  287,  264,  138,  264,  264,  291,  291,  291,
-      291,  297,  297,  297,  297,  304,  304,  304,  304,  306,
-      136,  306,  306,  312,  312,  312,  312,  314,  314,  314,
-      314,  308,   97,  308,  308,  324,  324,  324,  324,  326,
-      326,  326,  326,  332,  332,  332,  332,  334,  334,  334,
-      334,  336,  336,  336,  336,  341,  341,  341,  341,  343,
-      343,  343,  343,  345,  345,  345,  345,  347,  347,  347,
-      347,  351,  351,  351,  351,   36,   30,   59,   57,   36,
-
-       30,  354,   28,   28,    5,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       36,   56,   58,   36,   36,   55,   83,   61,   51,   36,
+       54,   62,   52,   29,   65,   59,   32,   32,   33,   66,
+
+       36,   36,   71,   34,   29,   29,   29,   30,   36,   36,
+       77,   29,   38,   67,   67,   67,   68,   67,   75,   72,
+       67,   69,   73,   36,   36,   74,   78,   79,   36,   53,
+       36,   36,   36,   87,   36,   76,   84,   36,   36,   85,
+       36,   81,   36,   86,   36,   36,   36,   36,   82,   36,
+       92,   95,   36,  100,   36,   36,   89,   90,   88,   29,
+       65,   36,   91,  101,   36,   66,   90,   93,   36,   94,
+       29,   97,  102,   36,   36,  104,   98,   36,  103,   36,
+       36,  107,  108,  106,   36,   36,   36,  105,   86,   36,
+      109,  110,  111,   36,   36,  114,  112,   36,  117,  119,
+
+       36,   36,   36,   36,   36,  121,   36,  368,   36,   36,
+      120,  113,   29,   29,   29,   30,  118,   36,  134,   29,
+       38,   36,  127,  115,  116,  122,  123,  125,   36,  126,
+      128,  124,   29,   97,   36,   36,  180,  138,   98,  129,
+      129,   67,  130,  129,   36,   36,  129,  132,  133,  135,
+      136,  140,   36,   36,   36,   36,  142,   36,  137,   35,
+       35,  123,   86,   36,  370,  143,  144,  144,   67,  145,
+      144,  148,  158,  144,  147,   35,   35,   90,  119,   36,
+       36,  149,  150,  150,   67,  151,  150,  159,   36,  150,
+      153,  154,  154,   67,  155,  154,  164,   36,  154,  157,
+
+      160,  160,   67,  161,  160,   36,  368,  160,  163,  165,
+      166,   36,   36,   29,  170,  167,  168,   29,  172,  171,
+       36,  175,   36,  173,   35,   35,  176,   36,   36,  177,
+       36,   36,  188,  174,   36,   29,  190,  178,   36,  181,
+       36,  191,  223,  179,  182,  182,   67,  183,  182,  186,
+      206,  182,  185,  187,   29,  192,   35,   35,   35,   35,
+      193,   29,  197,   29,  199,  194,   36,  198,   36,  200,
+       29,  202,   29,  204,  195,   36,  203,   36,  205,  268,
+      207,   29,  209,   29,  211,  214,  213,  210,  218,  212,
+      217,   36,  353,   36,   29,  170,   36,   35,   35,  219,
+
+      171,   35,   35,   35,   35,  224,   36,  231,   36,  225,
+       36,   29,  227,  221,   36,  222,  232,  228,  220,   29,
+      229,   36,  240,   35,   35,  230,  233,  233,   67,  234,
+      233,   29,  190,  233,  236,  237,  348,  191,  238,   35,
+       29,  197,   29,  202,  239,   36,  198,   36,  203,   29,
+      209,  242,   36,   35,  247,  210,  255,  241,  248,   36,
+       35,   35,   36,   35,   35,   35,   35,  253,   36,   35,
+       35,   29,  227,  250,  269,  254,   36,  228,  249,  251,
+      252,   35,  258,   29,  260,   29,  262,  264,   36,  261,
+      265,  263,   35,   35,  346,   35,   35,   70,  271,   35,
+
+       35,   35,   35,   35,   35,  274,   35,   35,  282,   35,
+       35,   36,  277,  278,   35,   35,  283,  284,   35,   35,
+      279,  285,   36,  280,  288,   29,  260,   35,   35,  289,
+      312,  261,  293,  293,   67,  294,  293,  301,  306,  293,
+      296,   35,   35,  298,  303,  306,  304,   35,   35,   35,
+       35,  309,  308,   36,  307,  282,  302,  319,   35,   35,
+       35,   35,   35,  311,   29,  314,   29,  316,   35,   35,
+      315,  282,  317,   35,   35,  344,  310,  364,  325,   35,
+       35,  318,   35,   35,  329,  320,   36,  328,  332,   36,
+       29,  314,   35,   35,  330,  326,  315,  331,  327,  333,
+
+       35,   35,   35,   35,  282,  282,  340,  341,   35,   35,
+       35,   35,   36,  282,   35,   35,   36,  351,   35,   35,
+      362,  339,  365,   36,  338,  366,  342,  361,  360,  354,
+      358,  349,  356,   35,   35,   27,   27,   27,   27,   29,
+       29,   29,   31,   31,   31,   31,   36,   36,   36,   36,
+       63,  353,   63,   63,   64,   64,   64,   64,   66,  350,
+       66,   66,   35,   35,   35,   35,   70,   70,  324,   70,
+       96,   96,   96,   96,   98,  322,   98,   98,  131,  131,
+      131,  131,  146,  146,  146,  146,  152,  152,  152,  152,
+      156,  156,  156,  156,  162,  162,  162,  162,  169,  169,
+
+      169,  169,  171,  348,  171,  171,  184,  184,  184,  184,
+      189,  189,  189,  189,  191,  346,  191,  191,  196,  196,
+      196,  196,  198,  344,  198,  198,  201,  201,  201,  201,
+      203,  337,  203,  203,  208,  208,  208,  208,  210,  335,
+      210,  210,  215,  215,  215,  215,  173,  282,  173,  173,
+      226,  226,  226,  226,  228,  324,  228,  228,  235,  235,
+      235,  235,  193,  322,  193,  193,  200,  276,  200,  200,
+      205,  267,  205,  205,  212,  257,  212,  212,  243,  243,
+      243,  243,  245,  245,  245,  245,  230,  306,  230,  230,
+      256,  256,  256,  256,  259,  259,  259,  259,  261,  300,
+
+      261,  261,  266,  266,  266,  266,  270,  270,  270,  270,
+      272,  272,  272,  272,  275,  275,  275,  275,  281,  281,
+      281,  281,  286,  286,  286,  286,  263,  246,  263,  263,
+      290,  290,  290,  290,  295,  295,  295,  295,  271,  297,
+      271,  271,  299,  299,  299,  299,  305,  305,  305,  305,
+      313,  313,  313,  313,  315,  292,  315,  315,  321,  321,
+      321,  321,  323,  323,  323,  323,  317,  291,  317,  317,
+      334,  334,  334,  334,  336,  336,  336,  336,  343,  343,
+      343,  343,  345,  345,  345,  345,  347,  347,  347,  347,
+      352,  352,  352,  352,  355,  355,  355,  355,  357,  357,
+
+      357,  357,  359,  359,  359,  359,  363,  363,  363,  363,
+      367,  367,  367,  367,  369,  369,  369,  369,  287,  276,
+      273,  216,  267,  257,  246,  244,  216,  141,  139,   99,
+       36,   30,   60,   57,   36,   30,  371,   28,   28,    5,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
-static const flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[975] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -927,101 +940,105 @@ static const flex_int16_t yy_chk[940] =
        18,   14,   11,   11,   13,   14,   11,   46,   46,   14,
        15,   16,   11,   12,   12,   12,   12,   12,   14,   16,
        12,   12,   12,   15,   19,   16,   20,   20,   21,   22,
-       24,   22,   24,   50,   26,   21,   50,   26,   19,  351,
-       20,   26,   19,   31,   31,   32,   32,   32,   39,   31,
-
-       39,   42,   32,   35,   35,   35,   35,   40,   44,   45,
-       35,   35,   37,   37,   37,   37,   37,   39,   42,   37,
-       37,   40,   41,   43,   41,   48,   45,   45,   49,   44,
-       47,   47,   53,   51,   43,   53,   48,   51,   52,   54,
-       52,   55,   56,   58,   54,   49,   57,   59,   60,   73,
-       61,   70,   60,   61,  347,   70,   56,   63,   63,   73,
-       58,   71,   59,   63,   59,   55,   66,   66,   57,   71,
-       74,   72,   66,   72,   75,   76,   77,   78,   79,   78,
-       77,   79,   80,   81,   74,   83,   80,   82,   75,   84,
-       82,   85,   88,   85,   76,   81,   87,   83,   87,   89,
-
-       92,   89,   93,  345,   90,  104,   92,   84,   86,   86,
-       86,   86,   90,   99,   88,   86,   86,   98,  102,   86,
-       86,   91,   91,   93,   91,   94,   94,   91,  101,  104,
-      102,   94,  101,  110,   99,   98,  100,  100,  100,  100,
-      100,  103,  103,  100,  100,  343,  105,  103,  105,  107,
-      109,  107,  111,  110,  109,  113,  113,  341,  121,  118,
-      111,  112,  112,  112,  112,  112,  121,  113,  112,  112,
-      114,  114,  116,  116,  118,  116,  114,  115,  115,  115,
-      115,  115,  123,  123,  115,  115,  117,  117,  117,  117,
-      117,  124,  122,  117,  117,  119,  122,  119,  120,  120,
-
-      120,  120,  120,  125,  130,  120,  120,  125,  131,  124,
-      126,  126,  128,  128,  131,  134,  126,  130,  128,  133,
-      133,  133,  135,  136,  133,  139,  164,  140,  138,  140,
-      134,  164,  133,  141,  141,  163,  163,  338,  139,  141,
-      136,  135,  137,  137,  137,  137,  137,  138,  336,  137,
-      137,  143,  143,  145,  145,  146,  146,  143,  147,  147,
-      149,  149,  145,  155,  147,  161,  149,  151,  151,  153,
-      153,  146,  160,  151,  270,  153,  176,  156,  156,  158,
-      158,  176,  155,  156,  161,  158,  165,  165,  170,  270,
-      160,  170,  165,  172,  172,  173,  173,  174,  174,  175,
-
-      208,  177,  220,  175,  177,  178,  178,  173,  220,  174,
-      208,  178,  180,  180,  172,  182,  182,  183,  180,  334,
-      190,  190,  183,  184,  184,  184,  184,  184,  185,  185,
-      184,  184,  190,  243,  185,  191,  191,  192,  192,  197,
-      197,  202,  202,  192,  332,  197,  203,  203,  209,  209,
-      213,  213,  203,  214,  214,  215,  215,  243,  216,  216,
-      217,  217,  218,  218,  219,  219,  221,  221,  215,  235,
-      219,  235,  221,  214,  216,  217,  227,  227,  228,  228,
-      230,  230,  232,  331,  228,  233,  230,  233,  233,  329,
-      232,  232,  236,  236,  241,  241,  244,  244,  245,  245,
-
-      241,  246,  246,  247,  248,  248,  267,  267,  244,  259,
-      259,  247,  247,  252,  252,  245,  248,  326,  246,  252,
-      253,  253,  267,  324,  259,  316,  253,  262,  262,  262,
-      262,  262,  271,  271,  262,  262,  272,  276,  273,  272,
-      272,  273,  273,  277,  278,  316,  276,  271,  281,  281,
-      299,  278,  278,  282,  282,  285,  285,  277,  300,  287,
-      287,  285,  290,  290,  281,  287,  323,  293,  294,  294,
-      290,  293,  303,  299,  301,  301,  302,  302,  310,  310,
-      303,  303,  300,  317,  302,  294,  304,  304,  322,  328,
-      301,  309,  304,  311,  309,  309,  311,  311,  318,  317,
-
-      318,  318,  319,  321,  314,  319,  319,  328,  330,  330,
-      350,  330,  340,  340,  340,  312,  297,  296,  350,  350,
-      355,  355,  355,  355,  356,  356,  356,  357,  357,  357,
-      357,  358,  358,  358,  358,  359,  295,  359,  359,  360,
-      360,  360,  360,  361,  291,  361,  361,  362,  362,  362,
-      362,  363,  363,  283,  363,  364,  364,  364,  364,  365,
-      279,  365,  365,  366,  366,  366,  366,  367,  367,  367,
-      367,  368,  368,  368,  368,  369,  369,  369,  369,  370,
-      370,  370,  370,  371,  371,  371,  371,  372,  274,  372,
-      372,  373,  373,  373,  373,  374,  374,  374,  374,  375,
-
-      268,  375,  375,  376,  376,  376,  376,  377,  265,  377,
-      377,  378,  378,  378,  378,  379,  263,  379,  379,  380,
-      380,  380,  380,  381,  260,  381,  381,  382,  382,  382,
-      382,  383,  250,  383,  383,  384,  384,  384,  384,  385,
-      242,  385,  385,  386,  386,  386,  386,  387,  239,  387,
-      387,  388,  237,  388,  388,  389,  234,  389,  389,  390,
-      226,  390,  390,  391,  391,  391,  391,  392,  392,  392,
-      392,  393,  212,  393,  393,  394,  394,  394,  394,  395,
-      395,  395,  395,  396,  210,  396,  396,  397,  397,  397,
-      397,  398,  398,  398,  398,  399,  399,  399,  399,  400,
-
-      400,  400,  400,  401,  401,  401,  401,  402,  402,  402,
-      402,  403,  162,  403,  403,  404,  404,  404,  404,  405,
-      405,  405,  405,  406,  108,  406,  406,  407,  407,  407,
-      407,  408,  408,  408,  408,  409,  409,  409,  409,  410,
-      106,  410,  410,  411,  411,  411,  411,  412,  412,  412,
-      412,  413,   69,  413,  413,  414,  414,  414,  414,  415,
-      415,  415,  415,  416,  416,  416,  416,  417,  417,  417,
-      417,  418,  418,  418,  418,  419,  419,  419,  419,  420,
-      420,  420,  420,  421,  421,  421,  421,  422,  422,  422,
-      422,  423,  423,  423,  423,   36,   29,   25,   23,   17,
-
-        6,    5,    4,    3,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       24,   22,   24,   50,   26,   21,   50,   26,   19,   44,
+       20,   26,   19,   31,   31,   24,   32,   32,   32,   31,
+
+       39,   42,   39,   32,   35,   35,   35,   35,   40,   45,
+       44,   35,   35,   37,   37,   37,   37,   37,   42,   39,
+       37,   37,   40,   41,   43,   41,   45,   45,   47,   47,
+       48,   49,   51,   54,   55,   43,   51,   52,   54,   52,
+       53,   48,   56,   53,   57,   58,   59,   61,   49,   71,
+       59,   61,   60,   71,   62,   72,   56,   62,   55,   64,
+       64,   75,   58,   72,   77,   64,   57,   60,   74,   60,
+       67,   67,   73,   76,   73,   75,   67,   78,   74,   82,
+       79,   78,   79,   77,   84,   80,   81,   76,   80,   83,
+       81,   82,   83,   85,   89,   86,   84,   86,   88,   90,
+
+       88,   90,   91,   92,  100,   92,  104,  369,   95,  101,
+       91,   85,   87,   87,   87,   87,   89,  138,  104,   87,
+       87,   94,  100,   87,   87,   93,   93,   94,   93,   95,
+      101,   93,   96,   96,  106,  107,  138,  107,   96,  102,
+      102,  102,  102,  102,  103,  112,  102,  102,  103,  105,
+      105,  109,  111,  109,  113,  105,  111,  120,  106,  115,
+      115,  122,  113,  122,  367,  112,  114,  114,  114,  114,
+      114,  115,  120,  114,  114,  116,  116,  118,  118,  121,
+      118,  116,  117,  117,  117,  117,  117,  121,  124,  117,
+      117,  119,  119,  119,  119,  119,  124,  127,  119,  119,
+
+      123,  123,  123,  123,  123,  125,  365,  123,  123,  125,
+      126,  126,  128,  129,  129,  127,  128,  131,  131,  129,
+      133,  134,  139,  131,  136,  136,  136,  134,  137,  136,
+      141,  158,  143,  133,  143,  144,  144,  136,  179,  139,
+      142,  144,  179,  137,  140,  140,  140,  140,  140,  141,
+      158,  140,  140,  142,  146,  146,  148,  148,  149,  149,
+      146,  150,  150,  152,  152,  148,  165,  150,  164,  152,
+      154,  154,  156,  156,  149,  159,  154,  240,  156,  240,
+      159,  160,  160,  162,  162,  165,  164,  160,  168,  162,
+      167,  167,  363,  168,  169,  169,  174,  176,  176,  174,
+
+      169,  177,  177,  178,  178,  180,  181,  186,  186,  181,
+      180,  182,  182,  177,  207,  178,  187,  182,  176,  184,
+      184,  187,  207,  194,  194,  184,  188,  188,  188,  188,
+      188,  189,  189,  188,  188,  194,  359,  189,  195,  195,
+      196,  196,  201,  201,  206,  206,  196,  213,  201,  208,
+      208,  214,  214,  218,  218,  208,  225,  213,  219,  219,
+      220,  220,  225,  221,  221,  222,  222,  223,  223,  224,
+      224,  226,  226,  220,  241,  224,  241,  226,  219,  221,
+      222,  232,  232,  233,  233,  235,  235,  237,  249,  233,
+      238,  235,  238,  238,  357,  237,  237,  242,  242,  247,
+
+      247,  250,  250,  251,  251,  247,  252,  252,  253,  254,
+      254,  292,  249,  250,  265,  265,  253,  253,  258,  258,
+      251,  254,  277,  252,  258,  259,  259,  274,  274,  265,
+      292,  259,  269,  269,  269,  269,  269,  277,  284,  269,
+      269,  278,  278,  274,  279,  283,  280,  279,  279,  280,
+      280,  285,  284,  301,  283,  307,  278,  301,  285,  285,
+      288,  288,  289,  289,  293,  293,  295,  295,  302,  302,
+      293,  308,  295,  298,  298,  355,  288,  352,  307,  310,
+      310,  298,  309,  309,  311,  302,  312,  310,  319,  319,
+      313,  313,  311,  311,  312,  308,  313,  318,  309,  320,
+
+      318,  318,  320,  320,  325,  326,  327,  328,  327,  327,
+      328,  328,  330,  338,  340,  340,  342,  340,  351,  351,
+      351,  326,  354,  354,  325,  362,  330,  349,  347,  342,
+      345,  338,  343,  362,  362,  372,  372,  372,  372,  373,
+      373,  373,  374,  374,  374,  374,  375,  375,  375,  375,
+      376,  341,  376,  376,  377,  377,  377,  377,  378,  339,
+      378,  378,  379,  379,  379,  379,  380,  380,  336,  380,
+      381,  381,  381,  381,  382,  334,  382,  382,  383,  383,
+      383,  383,  384,  384,  384,  384,  385,  385,  385,  385,
+      386,  386,  386,  386,  387,  387,  387,  387,  388,  388,
+
+      388,  388,  389,  333,  389,  389,  390,  390,  390,  390,
+      391,  391,  391,  391,  392,  332,  392,  392,  393,  393,
+      393,  393,  394,  331,  394,  394,  395,  395,  395,  395,
+      396,  323,  396,  396,  397,  397,  397,  397,  398,  321,
+      398,  398,  399,  399,  399,  399,  400,  305,  400,  400,
+      401,  401,  401,  401,  402,  304,  402,  402,  403,  403,
+      403,  403,  404,  303,  404,  404,  405,  299,  405,  405,
+      406,  290,  406,  406,  407,  286,  407,  407,  408,  408,
+      408,  408,  409,  409,  409,  409,  410,  281,  410,  410,
+      411,  411,  411,  411,  412,  412,  412,  412,  413,  275,
+
+      413,  413,  414,  414,  414,  414,  415,  415,  415,  415,
+      416,  416,  416,  416,  417,  417,  417,  417,  418,  418,
+      418,  418,  419,  419,  419,  419,  420,  272,  420,  420,
+      421,  421,  421,  421,  422,  422,  422,  422,  423,  270,
+      423,  423,  424,  424,  424,  424,  425,  425,  425,  425,
+      426,  426,  426,  426,  427,  268,  427,  427,  428,  428,
+      428,  428,  429,  429,  429,  429,  430,  266,  430,  430,
+      431,  431,  431,  431,  432,  432,  432,  432,  433,  433,
+      433,  433,  434,  434,  434,  434,  435,  435,  435,  435,
+      436,  436,  436,  436,  437,  437,  437,  437,  438,  438,
+
+      438,  438,  439,  439,  439,  439,  440,  440,  440,  440,
+      441,  441,  441,  441,  442,  442,  442,  442,  256,  248,
+      245,  243,  239,  231,  217,  215,  166,  110,  108,   70,
+       36,   29,   25,   23,   17,    6,    5,    4,    3,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -1160,9 +1177,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -1199,9 +1224,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
-#line 1202 "libxlu_disk_l.c"
+#line 1227 "libxlu_disk_l.c"
 
-#line 1204 "libxlu_disk_l.c"
+#line 1229 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1477,13 +1502,13 @@ YY_DECL
 		}
 
 	{
-#line 177 "libxlu_disk_l.l"
+#line 185 "libxlu_disk_l.l"
 
 
-#line 180 "libxlu_disk_l.l"
+#line 188 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1486 "libxlu_disk_l.c"
+#line 1511 "libxlu_disk_l.c"
 
 	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
@@ -1515,14 +1540,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 355 )
+				if ( yy_current_state >= 372 )
 					yy_c = yy_meta[yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 354 );
+		while ( yy_current_state != 371 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1572,152 +1597,158 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 186 "libxlu_disk_l.l"
+#line 194 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 188 "libxlu_disk_l.l"
+#line 196 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 190 "libxlu_disk_l.l"
+#line 198 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 199 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 193 "libxlu_disk_l.l"
+#line 201 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 194 "libxlu_disk_l.l"
+#line 202 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+#line 204 "libxlu_disk_l.l"
+{ STRIP(','); setspecification(DPC,FROMEQUALS); }
 	YY_BREAK
 case 11:
 /* rule 11 can match eol */
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+#line 206 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 12:
+/* rule 12 can match eol */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
-{ DPC->disk->direct_io_safe = 1; }
+#line 207 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 case 13:
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+#line 208 "libxlu_disk_l.l"
+{ DPC->disk->direct_io_safe = 1; }
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+#line 209 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
 	YY_BREAK
-/* Note that the COLO configuration settings should be considered unstable.
-  * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
+#line 210 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
+/* Note that the COLO configuration settings should be considered unstable.
+  * They may change incompatibly in future versions of Xen. */
 case 16:
 YY_RULE_SETUP
-#line 205 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
+#line 213 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 17:
-/* rule 17 can match eol */
 YY_RULE_SETUP
-#line 206 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
+#line 214 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
-#line 207 "libxlu_disk_l.l"
-{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
+#line 215 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
-#line 208 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
+#line 216 "libxlu_disk_l.l"
+{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
-#line 209 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+#line 217 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 210 "libxlu_disk_l.l"
+#line 218 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+	YY_BREAK
+case 22:
+/* rule 22 can match eol */
+YY_RULE_SETUP
+#line 219 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 22:
+case 23:
 YY_RULE_SETUP
-#line 214 "libxlu_disk_l.l"
+#line 223 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 23:
-/* rule 23 can match eol */
+case 24:
+/* rule 24 can match eol */
 YY_RULE_SETUP
-#line 218 "libxlu_disk_l.l"
+#line 227 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 225 "libxlu_disk_l.l"
+#line 234 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 231 "libxlu_disk_l.l"
+#line 240 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1731,65 +1762,65 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 26:
+case 27:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 253 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 27:
+case 28:
 YY_RULE_SETUP
-#line 245 "libxlu_disk_l.l"
+#line 254 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 28:
+case 29:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 246 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 29:
+case 30:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 247 "libxlu_disk_l.l"
+#line 256 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 30:
+case 31:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 248 "libxlu_disk_l.l"
+#line 257 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 31:
+case 32:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 249 "libxlu_disk_l.l"
+#line 258 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 32:
-/* rule 32 can match eol */
+case 33:
+/* rule 33 can match eol */
 YY_RULE_SETUP
-#line 251 "libxlu_disk_l.l"
+#line 260 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 33:
-/* rule 33 can match eol */
+case 34:
+/* rule 34 can match eol */
 YY_RULE_SETUP
-#line 258 "libxlu_disk_l.l"
+#line 267 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1816,27 +1847,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 34:
+case 35:
 YY_RULE_SETUP
-#line 284 "libxlu_disk_l.l"
+#line 293 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 35:
+case 36:
 YY_RULE_SETUP
-#line 288 "libxlu_disk_l.l"
+#line 297 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 36:
+case 37:
 YY_RULE_SETUP
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1839 "libxlu_disk_l.c"
+#line 1870 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -2104,7 +2135,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 355 )
+			if ( yy_current_state >= 372 )
 				yy_c = yy_meta[yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
@@ -2128,11 +2159,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 355 )
+		if ( yy_current_state >= 372 )
 			yy_c = yy_meta[yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
-	yy_is_jam = (yy_current_state == 354);
+	yy_is_jam = (yy_current_state == 371);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2941,4 +2972,4 @@ void yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
diff --git a/tools/libs/util/libxlu_disk_l.h b/tools/libs/util/libxlu_disk_l.h
index 6abeecf..509aad6 100644
--- a/tools/libs/util/libxlu_disk_l.h
+++ b/tools/libs/util/libxlu_disk_l.h
@@ -694,7 +694,7 @@ extern int yylex (yyscan_t yyscanner);
 #undef yyTABLES_NAME
 #endif
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 
 #line 699 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff --git a/tools/libs/util/libxlu_disk_l.l b/tools/libs/util/libxlu_disk_l.l
index 3bd639a..47b8ee0 100644
--- a/tools/libs/util/libxlu_disk_l.l
+++ b/tools/libs/util/libxlu_disk_l.l
@@ -122,9 +122,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -192,6 +200,7 @@ devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
 backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
+specification=[^,]*,? { STRIP(','); setspecification(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..f2b0ff5 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
         return 0;
     }
 
+    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        fprintf(stderr, "block-attach is only supported for specification xen\n");
+        return 1;
+    }
+
     if (libxl_device_disk_add(ctx, fe_domid, &disk, 0)) {
         fprintf(stderr, "libxl_device_disk_add failed.\n");
         return 1;
@@ -119,6 +124,12 @@ int main_blockdetach(int argc, char **argv)
         fprintf(stderr, "Error: Device %s not connected.\n", argv[optind+1]);
         return 1;
     }
+
+    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        fprintf(stderr, "block-detach is only supported for specification xen\n");
+        return 1;
+    }
+
     rc = !force ? libxl_device_disk_safe_remove(ctx, domid, &disk, 0) :
         libxl_device_disk_destroy(ctx, domid, &disk, 0);
     if (rc) {
-- 
2.7.4
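
[Editor's note: a hedged sketch of the guest configuration syntax the patch
above enables. The `specification=` and `backendtype=other` keywords come from
the patch; the vdev names, paths, and image formats are illustrative only.]

```
# Two disks for one guest: the first uses the (default) Xen PV block
# protocol, the second requests a virtio-blk frontend via the new
# "specification=" keyword parsed by setspecification() above.
disk = [ 'target=/dev/vg/guest-root,format=raw,vdev=xvda,specification=xen',
         'target=/var/lib/xen/images/data.qcow2,format=qcow2,vdev=xvdb,specification=virtio' ]
```

Note that per the xl_block.c hunks above, `xl block-attach` and
`xl block-detach` reject disks whose specification is not xen, so
virtio disks can only be configured at domain creation time.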



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 18:12:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 18:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340731.565832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSpD-0001v8-O3; Wed, 01 Jun 2022 18:12:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340731.565832; Wed, 01 Jun 2022 18:12:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSpD-0001v1-Kh; Wed, 01 Jun 2022 18:12:07 +0000
Received: by outflank-mailman (input) for mailman id 340731;
 Wed, 01 Jun 2022 18:12:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pr6R=WI=kernel.org=nathan@srs-se1.protection.inumbo.net>)
 id 1nwSpC-0001uv-A8
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 18:12:06 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 563d59b7-e1d6-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 20:12:04 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id 9BED2CE1D21;
 Wed,  1 Jun 2022 18:12:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BDA63C385A5;
 Wed,  1 Jun 2022 18:11:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 563d59b7-e1d6-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654107120;
	bh=XyRtnaG4f6v1X+0b1Icwpx1CnomvwrWKo4GYEBNHb+8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=kBs/HAWx4uM3HDoEzXU7+T8V9lyNwNqn4zd6Uh/AJGNo4QD8EzwUXHg8SF5aQu/eR
	 anmAddXVYpYTPQXJYiVXSgAM6oeXwkinLakcGmSyuAZtpT4Gg5dXcDILW5BYrqp6yC
	 vfOgfDXfoYgyw9l9+lVWSvldxx5jHryrGn/d5UzcJCJjQVQvAnn5PGnRbongr5jgLY
	 JziHqqThI4bmA2WwIpuZHilJdJjc4uSPMPWYG0WUdO4ZhUny70sDURv0RtrDDn5Tm8
	 XZjMRYHCt4Yp4VSnSP3UbNmDy1oSOc8nqKRVMKzHpT2VEuCPCTYpdk41t0XRp7VTN5
	 o4MHC+cQamxCw==
Date: Wed, 1 Jun 2022 11:11:57 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org, x86@kernel.org,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <Yper7agk7XfCCQNa@dev-arch.thelio-3990X>
References: <20220404050559.132378-1-hch@lst.de>
 <20220404050559.132378-10-hch@lst.de>
 <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
 <20220601173441.GB27582@lst.de>
 <YpemDuzdoaO3rijX@Ryzen-9-3900X.>
 <20220601175743.GA28082@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220601175743.GA28082@lst.de>

On Wed, Jun 01, 2022 at 07:57:43PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 01, 2022 at 10:46:54AM -0700, Nathan Chancellor wrote:
> > On Wed, Jun 01, 2022 at 07:34:41PM +0200, Christoph Hellwig wrote:
> > > Can you send me the full dmesg and the content of
> > > /sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?
> > 
> > Sure thing, they are attached! If there is anything else I can provide
> > or test, I am more than happy to do so.
> 
> Nothing interesting.  But the performance numbers almost look like
> swiotlb=force got ignored before (even if I can't explain why).

I was able to get my performance back with this diff, but I don't know
whether it is a hack or a proper fix in the context of the series.

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dfa1de89dc94..0bfb2fe3d8c5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -276,7 +276,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		      __func__, alloc_size, PAGE_SIZE);
 
 	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
-	mem->force_bounce = flags & SWIOTLB_FORCE;
+	mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
 
 	if (flags & SWIOTLB_VERBOSE)
 		swiotlb_print_info();

> Do you get a similar performance with the new kernel without
> swiotlb=force as the old one with that argument by any chance?

I'll see if I can test that, as I am not sure I have control over those
cmdline arguments.

Cheers,
Nathan


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 18:21:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 18:21:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340739.565843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSya-0003aB-LN; Wed, 01 Jun 2022 18:21:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340739.565843; Wed, 01 Jun 2022 18:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwSya-0003a4-IJ; Wed, 01 Jun 2022 18:21:48 +0000
Received: by outflank-mailman (input) for mailman id 340739;
 Wed, 01 Jun 2022 18:21:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYJT=WI=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1nwSyY-0003Zf-SM
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 18:21:46 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1a5f090-e1d7-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 20:21:45 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 0E98968AA6; Wed,  1 Jun 2022 20:21:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1a5f090-e1d7-11ec-837f-e5687231ffcc
Date: Wed, 1 Jun 2022 20:21:41 +0200
From: Christoph Hellwig <hch@lst.de>
To: Nathan Chancellor <nathan@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, iommu@lists.linux-foundation.org,
	x86@kernel.org, Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <20220601182141.GA28309@lst.de>
References: <20220404050559.132378-1-hch@lst.de> <20220404050559.132378-10-hch@lst.de> <YpehC7BwBlnuxplF@dev-arch.thelio-3990X> <20220601173441.GB27582@lst.de> <YpemDuzdoaO3rijX@Ryzen-9-3900X.> <20220601175743.GA28082@lst.de> <Yper7agk7XfCCQNa@dev-arch.thelio-3990X>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yper7agk7XfCCQNa@dev-arch.thelio-3990X>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 01, 2022 at 11:11:57AM -0700, Nathan Chancellor wrote:
> On Wed, Jun 01, 2022 at 07:57:43PM +0200, Christoph Hellwig wrote:
> > On Wed, Jun 01, 2022 at 10:46:54AM -0700, Nathan Chancellor wrote:
> > > On Wed, Jun 01, 2022 at 07:34:41PM +0200, Christoph Hellwig wrote:
> > > > Can you send me the full dmesg and the content of
> > > > /sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?
> > > 
> > > Sure thing, they are attached! If there is anything else I can provide
> > > or test, I am more than happy to do so.
> > 
> > Nothing interesting.  But the performance numbers almost look like
> > swiotlb=force got ignored before (even if I can't explain why).
> 
> I was able to get my performance back with this diff, but I don't know if
> this is a hack or a proper fix in the context of the series.

This looks good, but needs a little tweak.  I'd go for this variant of
it:


diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dfa1de89dc944..cb50f8d383606 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -192,7 +192,7 @@ void __init swiotlb_update_mem_attributes(void)
 }
 
 static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
-				    unsigned long nslabs, bool late_alloc)
+		unsigned long nslabs, unsigned int flags, bool late_alloc)
 {
 	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
@@ -203,8 +203,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	mem->index = 0;
 	mem->late_alloc = late_alloc;
 
-	if (swiotlb_force_bounce)
-		mem->force_bounce = true;
+	mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
 
 	spin_lock_init(&mem->lock);
 	for (i = 0; i < mem->nslabs; i++) {
@@ -275,8 +274,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
 
-	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
-	mem->force_bounce = flags & SWIOTLB_FORCE;
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, flags, false);
 
 	if (flags & SWIOTLB_VERBOSE)
 		swiotlb_print_info();
@@ -348,7 +346,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 
 	set_memory_decrypted((unsigned long)vstart,
 			     (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
-	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, 0, true);
 
 	swiotlb_print_info();
 	return 0;
@@ -835,8 +833,8 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 
 		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
 				     rmem->size >> PAGE_SHIFT);
-		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
-		mem->force_bounce = true;
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, SWIOTLB_FORCE,
+				false);
 		mem->for_alloc = true;
 
 		rmem->priv = mem;



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 18:40:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 18:40:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340748.565854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwTGN-00068b-63; Wed, 01 Jun 2022 18:40:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340748.565854; Wed, 01 Jun 2022 18:40:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwTGN-00068U-35; Wed, 01 Jun 2022 18:40:11 +0000
Received: by outflank-mailman (input) for mailman id 340748;
 Wed, 01 Jun 2022 18:40:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pr6R=WI=kernel.org=nathan@srs-se1.protection.inumbo.net>)
 id 1nwTGM-00068O-2f
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 18:40:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 424ac224-e1da-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 20:40:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B8B8761684;
 Wed,  1 Jun 2022 18:40:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7E30C385B8;
 Wed,  1 Jun 2022 18:40:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 424ac224-e1da-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654108806;
	bh=vpZ1oNTlay8NWlxGY9rUMzeRkjnKLNWUB5LHGYBDV9E=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=SV4cmB3MpgVr8rN3mq/AgsHnJ59ZjgcffarURyAfkAdkEzNpXND21AvmUXs/TCSKG
	 JN6Kln9Qh2fKKoNGqhtzaeN3GVMD5O+WlHNEGZq1+TVFKTgXvUeTxpOC2opw2jzy3M
	 bzs+i28vGY/FzwHvhAo/7MfauYLqirVh5qrYbf/9uggvCsKOW9dGOrTbtu8Bly2ogq
	 gdOF0hMn+j3+XeZkNnzwCZiw7gyMKIj6V8TC3bRb8s5c+amjSWyv+xMo80ucbbrM+M
	 K8Ccnhjy4g9orAMgOh/NR3ZQR6ir+xGwf35PRNSGyFHOR15Ppn2g/2bRwSBc69z3WO
	 cpjY5uJWVEvrQ==
Date: Wed, 1 Jun 2022 11:40:03 -0700
From: Nathan Chancellor <nathan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org, x86@kernel.org,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 09/15] swiotlb: make the swiotlb_init interface more
 useful
Message-ID: <Ypeyg2Dm/WfoKDZt@dev-arch.thelio-3990X>
References: <20220404050559.132378-1-hch@lst.de>
 <20220404050559.132378-10-hch@lst.de>
 <YpehC7BwBlnuxplF@dev-arch.thelio-3990X>
 <20220601173441.GB27582@lst.de>
 <YpemDuzdoaO3rijX@Ryzen-9-3900X.>
 <20220601175743.GA28082@lst.de>
 <Yper7agk7XfCCQNa@dev-arch.thelio-3990X>
 <20220601182141.GA28309@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220601182141.GA28309@lst.de>

On Wed, Jun 01, 2022 at 08:21:41PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 01, 2022 at 11:11:57AM -0700, Nathan Chancellor wrote:
> > On Wed, Jun 01, 2022 at 07:57:43PM +0200, Christoph Hellwig wrote:
> > > On Wed, Jun 01, 2022 at 10:46:54AM -0700, Nathan Chancellor wrote:
> > > > On Wed, Jun 01, 2022 at 07:34:41PM +0200, Christoph Hellwig wrote:
> > > > > Can you send me the full dmesg and the content of
> > > > > /sys/kernel/debug/swiotlb/io_tlb_nslabs for a good and a bad boot?
> > > > 
> > > > Sure thing, they are attached! If there is anything else I can provide
> > > > or test, I am more than happy to do so.
> > > 
> > > Nothing interesting.  But the performance numbers almost look like
> > > swiotlb=force got ignored before (even if I can't explain why).
> > 
> > I was able to get my performance back with this diff, but I don't know if
> > this is a hack or a proper fix in the context of the series.
> 
> This looks good, but needs a little tweak.  I'd go for this variant of
> it:

Tested-by: Nathan Chancellor <nathan@kernel.org>

Thanks a lot for the quick fix!

> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index dfa1de89dc944..cb50f8d383606 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -192,7 +192,7 @@ void __init swiotlb_update_mem_attributes(void)
>  }
>  
>  static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> -				    unsigned long nslabs, bool late_alloc)
> +		unsigned long nslabs, unsigned int flags, bool late_alloc)
>  {
>  	void *vaddr = phys_to_virt(start);
>  	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> @@ -203,8 +203,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>  	mem->index = 0;
>  	mem->late_alloc = late_alloc;
>  
> -	if (swiotlb_force_bounce)
> -		mem->force_bounce = true;
> +	mem->force_bounce = swiotlb_force_bounce || (flags & SWIOTLB_FORCE);
>  
>  	spin_lock_init(&mem->lock);
>  	for (i = 0; i < mem->nslabs; i++) {
> @@ -275,8 +274,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>  		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>  		      __func__, alloc_size, PAGE_SIZE);
>  
> -	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
> -	mem->force_bounce = flags & SWIOTLB_FORCE;
> +	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, flags, false);
>  
>  	if (flags & SWIOTLB_VERBOSE)
>  		swiotlb_print_info();
> @@ -348,7 +346,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
>  
>  	set_memory_decrypted((unsigned long)vstart,
>  			     (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
> -	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
> +	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, 0, true);
>  
>  	swiotlb_print_info();
>  	return 0;
> @@ -835,8 +833,8 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
>  
>  		set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
>  				     rmem->size >> PAGE_SHIFT);
> -		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> -		mem->force_bounce = true;
> +		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, SWIOTLB_FORCE,
> +				false);
>  		mem->for_alloc = true;
>  
>  		rmem->priv = mem;
> 

Cheers,
Nathan


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 19:40:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 19:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340760.565865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUCK-0004jh-Vk; Wed, 01 Jun 2022 19:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340760.565865; Wed, 01 Jun 2022 19:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUCK-0004iz-Pv; Wed, 01 Jun 2022 19:40:04 +0000
Received: by outflank-mailman (input) for mailman id 340760;
 Wed, 01 Jun 2022 19:40:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwUCJ-0004WR-Gm; Wed, 01 Jun 2022 19:40:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwUCJ-0008Ef-Dz; Wed, 01 Jun 2022 19:40:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwUCI-0001kN-Sn; Wed, 01 Jun 2022 19:40:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwUCI-0003e5-SM; Wed, 01 Jun 2022 19:40:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mA00f9Thsp32SXaG1q6ej3wSUv14aS4OaaAfvG8M/Yo=; b=F/z5yyIh3E+u8Q+dxGwnIXqusc
	vYwmnke2yQHdTakt9wHZMa7mlbcDWzycmo9HwcHOmVfu07eOHlBDFFLAR7IrH+iKv5/FOyY7+sDfg
	g+KLT9WyP4KSWut2dcbpMl3N71Mwoh8p9L7f3RLkfazmx3X/T7xO0YK0N+Fx8AXunRy8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170794-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170794: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=700170bf6b4d773e328fa54ebb70ba444007c702
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 19:40:02 +0000

flight 170794 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170794/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 13 guest-start          fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install      fail REGR. vs. 170714
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714

version targeted for testing:
 linux                700170bf6b4d773e328fa54ebb70ba444007c702
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z    8 days
Failing since        170716  2022-05-24 11:12:06 Z    8 days   24 attempts
Testing same since   170794  2022-06-01 05:09:10 Z    0 days    1 attempts

------------------------------------------------------------
1989 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 222957 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 19:54:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 19:54:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340771.565876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUQD-00078N-FL; Wed, 01 Jun 2022 19:54:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340771.565876; Wed, 01 Jun 2022 19:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUQD-00078G-CW; Wed, 01 Jun 2022 19:54:25 +0000
Received: by outflank-mailman (input) for mailman id 340771;
 Wed, 01 Jun 2022 19:54:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k3DM=WI=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1nwUQB-000788-EA
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 19:54:23 +0000
Received: from mail-qv1-xf2e.google.com (mail-qv1-xf2e.google.com
 [2607:f8b0:4864:20::f2e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1144b85-e1e4-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 21:54:22 +0200 (CEST)
Received: by mail-qv1-xf2e.google.com with SMTP id ea7so2194700qvb.12
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jun 2022 12:54:22 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:c14:2162:8bd0:d95])
 by smtp.gmail.com with ESMTPSA id
 x22-20020ae9e916000000b0069fc13ce205sm1706737qkf.54.2022.06.01.12.54.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 01 Jun 2022 12:54:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1144b85-e1e4-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=36+ek962W4ESiUtGpxVRYFAjd+VwVWl5kF1SkjDtZMw=;
        b=UgQWMNlwAFDSyhOYxlXAYkCIl2mc/sh4SXHL5oEpmTBjCaoAGPvuIVJY/EBVObhLpn
         Tz91Ti4l40qj4sNbS7uKLNC9gcnNRG5R+zImHy1Cp949Upag0/8ZrU1yI6qHOdzk3vmU
         SBeuB10CrxBpO3gRydMNeTXEZucGMbZc5K7F2E3UrLO8iOWDda/dE0Zo2p/9PLMv5/6z
         bzVRmmCN567UJ78Ejrskf3ZRMFAUDpCM/u7icHrzh0Z17zrP9xbdUmXisw6dKLBclYlu
         TX2X+ASp7cE/K5iM3qCQ/abc7GzGW8Qqm/ks1GR6NE3stV8bKbmnhKyh44hxwPl2+Hk7
         DpUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=36+ek962W4ESiUtGpxVRYFAjd+VwVWl5kF1SkjDtZMw=;
        b=6IFT6cb7sK8777JgU2+Rs1CbrJBNDSipKkWcJi786o0ySnbRVyL8pzPdVEOb0JgS6d
         CzdwSEBaip0JfR1JFuwzSpsiV/9UzebkjQTaEU+0t2ayvN7Ms+zDxaS0Y+8U3yOGzny7
         DeCi6Nwc9cUIm2Lxip0mxntxa4lWGdooCZWsSAS7QI0za4wNhhKM2DgOle2+UGZuypvC
         4tnrgbnlFxwKhk4yVacZn+zXtAmfxwHrzdJ02At5F+76cuEThRSkRRchaYiQYZeZeVpk
         vaeEoz1D0gQoPPg17MktRSSMFx9qTRbf9RVq7jFrfFsnpBt3/d0ucLblKcXJF/xbRdKl
         IGag==
X-Gm-Message-State: AOAM531Nx6asBdAfvtZ3drG0YJNP97amD9zVmTZ/QrjdBJeIojU2TKgF
	ZsB43LsQua0/k2xXRVttnCA=
X-Google-Smtp-Source: ABdhPJxGZUdP2s4a2nDmTBCP5a2DzP2YjvJu5cPJlIPUmCxdTJANOqM8werYrhiWx0CWKbMBpK77Vw==
X-Received: by 2002:ad4:4ee5:0:b0:464:358b:4a00 with SMTP id dv5-20020ad44ee5000000b00464358b4a00mr21716379qvb.19.1654113260787;
        Wed, 01 Jun 2022 12:54:20 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH] xen-blkfront: Handle NULL gendisk
Date: Wed,  1 Jun 2022 15:53:41 -0400
Message-Id: <20220601195341.28581-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When a VBD is not fully created and then closed, the kernel can hit a
NULL pointer dereference.

The reproducer is trivial:

[user@dom0 ~]$ sudo xl block-attach work backend=sys-usb vdev=xvdi target=/dev/sdz
[user@dom0 ~]$ xl block-list work
Vdev  BE  handle state evt-ch ring-ref BE-path
51712 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51712
51728 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51728
51744 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51744
51760 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51760
51840 3   241    3     -1     -1       /local/domain/3/backend/vbd/241/51840
                 ^ note state, the /dev/sdz doesn't exist in the backend

[user@dom0 ~]$ sudo xl block-detach work xvdi
[user@dom0 ~]$ xl block-list work
Vdev  BE  handle state evt-ch ring-ref BE-path
work is an invalid domain identifier

And its console has:

BUG: kernel NULL pointer dereference, address: 0000000000000050
PGD 80000000edebb067 P4D 80000000edebb067 PUD edec2067 PMD 0
Oops: 0000 [#1] PREEMPT SMP PTI
CPU: 1 PID: 52 Comm: xenwatch Not tainted 5.16.18-2.43.fc32.qubes.x86_64 #1
RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
Call Trace:
 <TASK>
 blkback_changed+0x95/0x137 [xen_blkfront]
 ? read_reply+0x160/0x160
 xenwatch_thread+0xc0/0x1a0
 ? do_wait_intr_irq+0xa0/0xa0
 kthread+0x16b/0x190
 ? set_kthread_struct+0x40/0x40
 ret_from_fork+0x22/0x30
 </TASK>
Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore ipt_REJECT nf_reject_ipv4 xt_state xt_conntrack nft_counter nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables nfnetlink intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel xen_netfront pcspkr xen_scsiback target_core_mod xen_netback xen_privcmd xen_gntdev xen_gntalloc xen_blkback xen_evtchn ipmi_devintf ipmi_msghandler fuse bpf_preload ip_tables overlay xen_blkfront
CR2: 0000000000000050
---[ end trace 7bc9597fd06ae89d ]---
RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
Kernel panic - not syncing: Fatal exception
Kernel Offset: disabled

info->rq and info->gd are only set in blkfront_connect(), which is
called for state 4 (XenbusStateConnected).  Guard against dereferencing
these NULL pointers in blkfront_closing() to avoid the issue.

The rest of blkfront_closing() looks okay: if info->nr_rings is 0, then
for_each_rinfo won't do anything.

blkfront_remove() also needs to check for non-NULL pointers before
cleaning up the gendisk and request queue.

Fixes: 05d69d950d9d ("xen-blkfront: sanitize the removal state machine")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 drivers/block/xen-blkfront.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 003056d4f7f5..966a6bf4c162 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2137,9 +2137,11 @@ static void blkfront_closing(struct blkfront_info *info)
 		return;
 
 	/* No more blkif_request(). */
-	blk_mq_stop_hw_queues(info->rq);
-	blk_mark_disk_dead(info->gd);
-	set_capacity(info->gd, 0);
+	if (info->rq && info->gd) {
+		blk_mq_stop_hw_queues(info->rq);
+		blk_mark_disk_dead(info->gd);
+		set_capacity(info->gd, 0);
+	}
 
 	for_each_rinfo(info, rinfo, i) {
 		/* No more gnttab callback work. */
@@ -2480,16 +2482,19 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 
 	dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
 
-	del_gendisk(info->gd);
+	if (info->gd)
+		del_gendisk(info->gd);
 
 	mutex_lock(&blkfront_mutex);
 	list_del(&info->info_list);
 	mutex_unlock(&blkfront_mutex);
 
 	blkif_free(info, 0);
-	xlbd_release_minors(info->gd->first_minor, info->gd->minors);
-	blk_cleanup_disk(info->gd);
-	blk_mq_free_tag_set(&info->tag_set);
+	if (info->gd) {
+		xlbd_release_minors(info->gd->first_minor, info->gd->minors);
+		blk_cleanup_disk(info->gd);
+		blk_mq_free_tag_set(&info->tag_set);
+	}
 
 	kfree(info);
 	return 0;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 01 20:28:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 20:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340780.565887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUwc-0002fB-3U; Wed, 01 Jun 2022 20:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340780.565887; Wed, 01 Jun 2022 20:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwUwc-0002f4-0R; Wed, 01 Jun 2022 20:27:54 +0000
Received: by outflank-mailman (input) for mailman id 340780;
 Wed, 01 Jun 2022 20:27:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwUwa-0002ek-5n
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 20:27:52 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4d9f2572-e1e9-11ec-837f-e5687231ffcc;
 Wed, 01 Jun 2022 22:27:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 35869B81985;
 Wed,  1 Jun 2022 20:27:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90DA0C385B8;
 Wed,  1 Jun 2022 20:27:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d9f2572-e1e9-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654115266;
	bh=mSmM2+7GRV4+/0dAzKlo87gel3iDbQlAbmki+vKiF6E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l8s3AkTdah2m0bxUpiOYsuMpbealnOFc9MRKhSmAhzJgncpSuS+YrstHaI3tss9eh
	 ySf/GqGp5ACqr9yVQrJbe3vjkibWmeIrTWTOdOspBmG+HKjNyjreGQWztO36m7rFny
	 aTOLsXElaW2vej4mbxsG1p6Ak7ncfQNAaBRJ5asxgBVEqGr+XHnHyJsW/i2sWcn1T1
	 muQ6YYgKBDbF0WDNVmHk9sO5N/pg32qcf2rFLDZIYKFW6WVXtMZP4pjuQ8i9RHO4Xi
	 h12IDANhHsw6xCjpdTBbdzT/v6fGCF+cHUhTUsFArd9uJsrmwNXeIGkPmmOCnkX6Wt
	 Dpyr83trwi2JA==
Date: Wed, 1 Jun 2022 13:27:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: George Dunlap <George.Dunlap@citrix.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Roger Pau Monne <roger.pau@citrix.com>, 
    Artem Mygaiev <Artem_Mygaiev@epam.com>, jbeulich@suse.com, 
    Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com, 
    fusa-sig@lists.xenproject.org
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
In-Reply-To: <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com> <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Reducing CC and adding fusa-sig

Actually Jun 9 at 8AM California / 4PM UK doesn't work for some of you,
so it is either:
1) Jun 9 at 7AM California / 3PM UK
2) Jun 14 at 8AM California / 4PM UK

My preference is the first option because it is sooner, but let me know
if it doesn't work and we'll try the second option.



On Wed, 1 Jun 2022, Stefano Stabellini wrote:
> Hi all,
> 
> I would like to suggest having the MISRA C meeting just before the
> community call (7AM California time). If that is difficult for any of the
> must-have attendees, then I would like to ask that we reserve 30 minutes
> of the community call to make progress on MISRA.
> 
> Cheers,
> 
> Stefano
> 
> 
> On Wed, 1 Jun 2022, George Dunlap wrote:
> > Hi all,
> > 
> > Sorry for sending this out so late; my calendar was screwed up.  Due to it being a public holiday in the UK, I propose moving the monthly community call to NEXT THURSDAY, 9 June, same time.
> > 
> > The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/URCDNNBOVKsEK2grXf2l954a/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
> > 
> > Agenda items are appreciated a few days before the call: please put your name beside items if you edit the document.
> > 
> > Note the following administrative conventions for the call:
> > * Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
> > * I usually send out a meeting reminder a few days before with a provisional agenda
> > 
> > * To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c
> > 
> > * If you want to be CC'ed please add or remove yourself from the sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/
> > 
> > Best Regards
> > George
> > 
> > 
> > 
> > == Dial-in Information ==
> > ## Meeting time
> > 15:00 - 16:00 UTC
> > Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2022&month=06&day=9&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179
> > 
> > 
> > ## Dial in details
> > Web: https://meet.jit.si/XenProjectCommunityCall
> > 
> > Dial-in info and pin can be found here:
> > 
> > https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall
> > 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 20:39:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 20:39:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340789.565900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwV7a-0004OT-5n; Wed, 01 Jun 2022 20:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340789.565900; Wed, 01 Jun 2022 20:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwV7a-0004OM-31; Wed, 01 Jun 2022 20:39:14 +0000
Received: by outflank-mailman (input) for mailman id 340789;
 Wed, 01 Jun 2022 20:39:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1L9V=WI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwV7Z-0004OG-8S
 for xen-devel@lists.xenproject.org; Wed, 01 Jun 2022 20:39:13 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e344afed-e1ea-11ec-bd2c-47488cf2e6aa;
 Wed, 01 Jun 2022 22:39:11 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id 27A36CE1C02;
 Wed,  1 Jun 2022 20:39:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 369CBC385B8;
 Wed,  1 Jun 2022 20:39:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e344afed-e1ea-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654115946;
	bh=uCCYMUDRoK5TEeCPzFx34TSsizz+TIyNlx4Fb9bR6Tg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tX0fE9SqDfvPBx5QmUFNLMBxJI3zg5nP5HcbcryDlsMyOKrpMVEX+meb48q4uZ573
	 bv47/ky/Phs0K1S3V1+4d0GOCd/qeTHQ7EGCR8u1b1eDDC/5UKKjDkoEjpswJty4MI
	 /DcEHFQGkmjmcx4jZu0lg0bJI2TcYRe54QjugSEEbyUdAy75kb55C4GcEc/hHkT38E
	 PtctCO32aN7JSVQdPRYL8FRFOGTPhrmUHKlzPDu+xluQUljuBiOkSvXjmmMhgsn18O
	 XUZ2u9sr7Ul6mZv5l0b0RK+NjcWAHMGF1VmD3pI0ElZOltaMYRqA/EUDxUKgkdrnIG
	 VhTm5Y2VjjUXQ==
Date: Wed, 1 Jun 2022 13:39:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
In-Reply-To: <e67bde26-2eff-948a-a2c3-08cc474affa6@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206011338310.1905099@ubuntu-linux-20-04-desktop>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.22.394.2205311755010.1905099@ubuntu-linux-20-04-desktop> <e67bde26-2eff-948a-a2c3-08cc474affa6@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1716422508-1654115946=:1905099"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1716422508-1654115946=:1905099
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 1 Jun 2022, Oleksandr wrote:
> On 01.06.22 04:04, Stefano Stabellini wrote:
> > On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
> > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > 
> > > Reuse the generic IOMMU device tree bindings to communicate Xen-specific
> > > information for the virtio devices for which restricted memory
> > > access using Xen grant mappings needs to be enabled.
> > > 
> > > Insert an "iommus" property pointing to the IOMMU node with the
> > > "xen,grant-dma" compatible into all virtio device nodes whose backends
> > > are going to run in non-hardware domains (which are not trusted by
> > > default).
> > > 
> > > Based on device-tree binding from Linux:
> > > Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> > > 
> > > The example of generated nodes:
> > > 
> > > xen_iommu {
> > >      compatible = "xen,grant-dma";
> > >      #iommu-cells = <0x01>;
> > >      phandle = <0xfde9>;
> > > };
> > > 
> > > virtio@2000000 {
> > >      compatible = "virtio,mmio";
> > >      reg = <0x00 0x2000000 0x00 0x200>;
> > >      interrupts = <0x00 0x01 0xf01>;
> > >      interrupt-parent = <0xfde8>;
> > >      dma-coherent;
> > >      iommus = <0xfde9 0x01>;
> > > };
> > > 
> > > virtio@2000200 {
> > >      compatible = "virtio,mmio";
> > >      reg = <0x00 0x2000200 0x00 0x200>;
> > >      interrupts = <0x00 0x02 0xf01>;
> > >      interrupt-parent = <0xfde8>;
> > >      dma-coherent;
> > >      iommus = <0xfde9 0x01>;
> > > };
> > > 
> > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > ---
> > > !!! This patch is based on the not-yet-upstreamed “Virtio support for
> > > toolstack on Arm” V8 series which is under review now:
> > > https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
> > > 
> > > The new device-tree binding (commit #5) is part of a solution to restrict
> > > memory access under Xen using the xen-grant DMA-mapping layer (which is
> > > also under review):
> > > https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
> > > 
> > > Changes RFC -> V1:
> > >     - update commit description
> > >     - rebase according to the recent changes to
> > >       "libxl: Introduce basic virtio-mmio support on Arm"
> > > 
> > > Changes V1 -> V2:
> > >     - Henry already gave his Reviewed-by, I dropped it due to the changes
> > >     - use generic IOMMU device tree bindings instead of custom property
> > >       "xen,dev-domid"
> > >     - change commit subject and description, was
> > >       "libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device node"
> > > ---
> > >   tools/libs/light/libxl_arm.c          | 49 ++++++++++++++++++++++++++++++++---
> > >   xen/include/public/device_tree_defs.h |  1 +
> > >   2 files changed, 47 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> > > index 9be9b2a..72da3b1 100644
> > > --- a/tools/libs/light/libxl_arm.c
> > > +++ b/tools/libs/light/libxl_arm.c
> > > @@ -865,9 +865,32 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
> > >       return 0;
> > >   }
> > >   +static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
> > > +{
> > > +    int res;
> > > +
> > > +    /* See Linux Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml */
> > > +    res = fdt_begin_node(fdt, "xen_iommu");
> > > +    if (res) return res;
> > > +
> > > +    res = fdt_property_compat(gc, fdt, 1, "xen,grant-dma");
> > > +    if (res) return res;
> > > +
> > > +    res = fdt_property_cell(fdt, "#iommu-cells", 1);
> > > +    if (res) return res;
> > > +
> > > +    res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_IOMMU);
> > > +    if (res) return res;
> > > +
> > > +    res = fdt_end_node(fdt);
> > > +    if (res) return res;
> > > +
> > > +    return 0;
> > > +}
> > >     static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
> > > -                                 uint64_t base, uint32_t irq)
> > > +                                 uint64_t base, uint32_t irq,
> > > +                                 uint32_t backend_domid)
> > >   {
> > >       int res;
> > >       gic_interrupt intr;
> > > @@ -890,6 +913,16 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
> > >       res = fdt_property(fdt, "dma-coherent", NULL, 0);
> > >       if (res) return res;
> > >   +    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
> > > +        uint32_t iommus_prop[2];
> > > +
> > > +        iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
> > > +        iommus_prop[1] = cpu_to_fdt32(backend_domid);
> > > +
> > > +        res = fdt_property(fdt, "iommus", iommus_prop, sizeof(iommus_prop));
> > > +        if (res) return res;
> > > +    }
> > > +
> > >       res = fdt_end_node(fdt);
> > >       if (res) return res;
> > >   @@ -1097,6 +1130,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
> > >       size_t fdt_size = 0;
> > >       int pfdt_size = 0;
> > >       libxl_domain_build_info *const info = &d_config->b_info;
> > > +    bool iommu_created;
> > >       unsigned int i;
> > >         const libxl_version_info *vers;
> > > @@ -1204,11 +1238,20 @@ next_resize:
> > >           if (d_config->num_pcidevs)
> > >               FDT( make_vpci_node(gc, fdt, ainfo, dom) );
> > >   +        iommu_created = false;
> > >           for (i = 0; i < d_config->num_disks; i++) {
> > >               libxl_device_disk *disk = &d_config->disks[i];
> > >   -            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
> > > -                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
> > > +            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
> > > +                if (disk->backend_domid != LIBXL_TOOLSTACK_DOMID &&
> > > +                    !iommu_created) {
> > > +                    FDT( make_xen_iommu_node(gc, fdt) );
> > > +                    iommu_created = true;
> > > +                }
> > > +
> > > +                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
> > > +                                           disk->backend_domid) );
> > > +            }
> > This is a matter of taste as the code would also work as is, but I would
> > do the following instead:
> > 
> > 
> > if ( d_config->num_disks > 0 &&
> >       d_config->disks[0].backend_domid != LIBXL_TOOLSTACK_DOMID) {
> >       FDT( make_xen_iommu_node(gc, fdt) );
> > }
> > 
> > for (i = 0; i < d_config->num_disks; i++) {
> >      libxl_device_disk *disk = &d_config->disks[i];
> > 
> >      if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
> >          FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
> > }
> 
> I see your idea to avoid using the local "iommu_created". For that, I think
> we need to modify the first check to make sure that we have at least one
> virtio device; otherwise we might end up inserting an unused IOMMU node.
> But that turns into an extra loop through num_disks looking for
> LIBXL_DISK_SPECIFICATION_VIRTIO.

I see, then just keep it as is.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-1716422508-1654115946=:1905099--


From xen-devel-bounces@lists.xenproject.org Wed Jun 01 20:40:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jun 2022 20:40:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340797.565911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwV8t-0005iu-HV; Wed, 01 Jun 2022 20:40:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340797.565911; Wed, 01 Jun 2022 20:40:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwV8t-0005in-Em; Wed, 01 Jun 2022 20:40:35 +0000
Received: by outflank-mailman (input) for mailman id 340797;
 Wed, 01 Jun 2022 20:40:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwV8s-0005iZ-PF; Wed, 01 Jun 2022 20:40:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwV8s-0000vT-LD; Wed, 01 Jun 2022 20:40:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwV8s-00039z-8o; Wed, 01 Jun 2022 20:40:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwV8s-00028q-8N; Wed, 01 Jun 2022 20:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AbgwXo97R48p2NN/2Y1I7TeeX1Iqrsvv5i6Jw07BkQc=; b=tbrO/eeBiyeO6/WU75moOlDea2
	8OX/9EWD8msvcrwNDNFUJ8q08S8fzjMjzmCvwDewvaTFisw9S1Wnjh9vYiu7+iG8dsgPrpPkmVSK3
	tAmGTgrk+c/AdHvGiRgpS05thTfbAQx6eDJEvGG7xf/96K860OuV3Uh6ogjBR/K4wOu4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170797-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170797: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
X-Osstest-Versions-That:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jun 2022 20:40:34 +0000

flight 170797 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170797/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 170792 pass in 170797
 test-armhf-armhf-xl-credit2   8 xen-boot                   fail pass in 170792

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 170792 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 170792 never pass
 test-amd64-i386-freebsd10-amd64  7 xen-install                fail like 170792
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170792
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170792
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170792
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170792
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170792
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170792
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170792
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170792
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170792
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170792
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170792
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170792
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170792
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
baseline version:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5

Last test of basis   170797  2022-06-01 08:51:44 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 02:04:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 02:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340809.565923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaCM-0000xG-6Q; Thu, 02 Jun 2022 02:04:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340809.565923; Thu, 02 Jun 2022 02:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaCM-0000x9-29; Thu, 02 Jun 2022 02:04:30 +0000
Received: by outflank-mailman (input) for mailman id 340809;
 Thu, 02 Jun 2022 02:04:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifcq=WJ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nwaCL-0000x3-3b
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 02:04:29 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0627.outbound.protection.outlook.com
 [2a01:111:f400:fe06::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5457ea22-e218-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 04:04:27 +0200 (CEST)
Received: from DB6PR0801CA0057.eurprd08.prod.outlook.com (2603:10a6:4:2b::25)
 by HE1PR0802MB2233.eurprd08.prod.outlook.com (2603:10a6:3:c1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13; Thu, 2 Jun
 2022 02:04:22 +0000
Received: from DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::ff) by DB6PR0801CA0057.outlook.office365.com
 (2603:10a6:4:2b::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Thu, 2 Jun 2022 02:04:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT054.mail.protection.outlook.com (100.127.142.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Thu, 2 Jun 2022 02:04:15 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Thu, 02 Jun 2022 02:04:15 +0000
Received: from f41c9e2c0c89.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D546E7A8-CB2F-4D6A-895D-EB49630924A1.1; 
 Thu, 02 Jun 2022 02:04:09 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f41c9e2c0c89.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 02 Jun 2022 02:04:09 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by VI1PR08MB4575.eurprd08.prod.outlook.com (2603:10a6:803:eb::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.16; Thu, 2 Jun
 2022 02:04:04 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b%9]) with mapi id 15.20.5314.012; Thu, 2 Jun 2022
 02:04:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5457ea22-e218-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=kPKU7DF4dqwKfb+mdbGxaI2dIodc3c/kalfdLEXUADDk8hbAK+VzbxjTNyxMx0s7zkVPv7OLLi3L0bDT4qwkr69B8XrMS26ETIvh982eKe93bejXstQXAHAbi8KtApCQtaTbasC5FmQDjoJo3vrQKNC5Q8RrJ9U0hzPIaxWwEFFOnouxhRCsrMNCDsdmH6BsaZy9ap8dRIGgNWAb8q4lTkYAbqRFtxbo3QTEy17968q3m7k2sghRXlVo8TMooqoQwRYsk6k+gFRqUR/1CVLOBR9MtltC31XjAZsTeujuH49xxQqAiB0oAXet0TtXMU9ivzKz9oSKjnt8iAncOhBkGQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+Q0SNOjqYoMnPS2aHUS5gbY0mU7+DnIoWUni+XmCRDE=;
 b=NzyNUx7KCm3v8jP9VTjfRS0FruEPEThMZHjSh8d1Sgz2Qa6J+wC6baH2whC7kWaPRmhq60yMF7Xgku7PRIdgxNsF8lOoSgWy9WBTTLXm20DTmjdvVA5mrpgyzny1BOpkEa6rlpv/DCmnBr7t0xfegqaglqjURdTXrlW12IRCfevUPR40USqYf3mNAYDUXuizR/qY+ZKS7M3dBK/B1KDM5Thn1nVhSgVP6aINTlRX4t4p4C5gt2I2KjL/i3BRcg4Zw0t0i7RUu+chE2bC8jge0aQ9cYJRVvSWr/wrWhZecz4f97i2+RE4Trrz4XWpB0ifzi14qZzu33pyuWBYawXVqg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Q0SNOjqYoMnPS2aHUS5gbY0mU7+DnIoWUni+XmCRDE=;
 b=KZTNuI2LG62Q06ifZ6WC/Vlq7HmhfGrb+0IzJ9TiYWBjm3PPs7rdUOThT2r+jmSP119j3AvsY77a3Y1g9rZq8xQKHAlVa2dKCqn8fhKD0NamyeszUTTGlLOOPlTgg0l/0Frmctz5fPvnWTzU/QSzJHPDwUCSOZAbyBvP4I1qO6w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GwvAmHyhSG0YOd64XwkhC43G0HtUxgZ9V59JekVO+jL/Hk3wkqAxYBTI98yMRf9+HcLURsCgtGSi6OK5kE/ELVXgMkqugBdIuWnuVMLhQtNk9qWKvdhEjwHXY5KCWfMs5TMTIDTbcoGFI+deLe8P6O6rhqN6pSK4/2zfEswgVSMPl479riPmrjlmMg54IQKuyV6ap1JuLHbnxBs8IPZpDEW84nzFQT3rfkQ28DdWvFcSvBY/WZrNVSOOrs8cscFusdoTTMkNWNvgkspCJGCJhRoJTKMo5RRompUMryhB5SX8pSofmjxq68pZaQSGhCtc6mOZcF02qtc7z6oWLOa3bg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+Q0SNOjqYoMnPS2aHUS5gbY0mU7+DnIoWUni+XmCRDE=;
 b=kz1fghKgdj34BydkIVjecxKNzIcLvhY0zZfLoFiObaGLJ2bv2abST03nkiOLTn/RvCJcJCKT6+hMFGZJWNhMBFylB5tK+UV56kXKbocDHEnH/2Kq/FwVPL/Hs6BGjVQINxz4QTe7B7oJ3OpsSd1ZZw/HtJQvQcmtuGM1dNDjmuW27LzOnwPl3ZsFkYnZn9sPWnNDDdy7Ckii/tGDAebuOVSUyZyQgb0MwLZmGLSuXVnvw6ytkbnx/vEciMA8hjtPhO+zemlRIZZYF8OGWWMlAWodZuWsN+9aKmurJaD9kyujRlUYKj8XJ+8PhK12i6gCO89gRvudaEgzReWy93E0vA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Q0SNOjqYoMnPS2aHUS5gbY0mU7+DnIoWUni+XmCRDE=;
 b=KZTNuI2LG62Q06ifZ6WC/Vlq7HmhfGrb+0IzJ9TiYWBjm3PPs7rdUOThT2r+jmSP119j3AvsY77a3Y1g9rZq8xQKHAlVa2dKCqn8fhKD0NamyeszUTTGlLOOPlTgg0l/0Frmctz5fPvnWTzU/QSzJHPDwUCSOZAbyBvP4I1qO6w=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v5 1/9] xen/arm: rename PGC_reserved to PGC_staticmem
Thread-Topic: [PATCH v5 1/9] xen/arm: rename PGC_reserved to PGC_staticmem
Thread-Index: AQHYdJxp1L6/0pKvUUGfNOQBsY7s5K04qNIAgAKz+nA=
Date: Thu, 2 Jun 2022 02:04:03 +0000
Message-ID:
 <DU2PR08MB7325AAD5E8AE856593154150F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220531031241.90374-1-Penny.Zheng@arm.com>
 <20220531031241.90374-2-Penny.Zheng@arm.com>
 <6cb78106-3d5b-4fc4-7f51-9919f161d69f@suse.com>
In-Reply-To: <6cb78106-3d5b-4fc4-7f51-9919f161d69f@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 2DEC61F820E7E246BD6F4F9CDF9C5CAB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: fc2e934e-e2b0-4e6f-2453-08da443c3245
x-ms-traffictypediagnostic:
	VI1PR08MB4575:EE_|DBAEUR03FT054:EE_|HE1PR0802MB2233:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB223363EF10C1850F208633BFF7DE9@HE1PR0802MB2233.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4575
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2aa615e7-de0d-4501-f622-08da443c2aee
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 02:04:15.8308
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fc2e934e-e2b0-4e6f-2453-08da443c3245
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2233

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 31, 2022 4:33 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 1/9] xen/arm: rename PGC_reserved to PGC_staticmem
> 
> On 31.05.2022 05:12, Penny Zheng wrote:
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -151,8 +151,8 @@
> >  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> > #endif
> >
> > -#ifndef PGC_reserved
> > -#define PGC_reserved 0
> > +#ifndef PGC_staticmem
> > +#define PGC_staticmem 0
> >  #endif
> 
> Just wondering: Is the "mem" part of the name really significant? Pages always
> represent memory of some form, don't they?
> 

Sure, it seems redundant, I'll rename to PGC_static.

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 02:18:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 02:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340817.565934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaQF-0002f8-Gf; Thu, 02 Jun 2022 02:18:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340817.565934; Thu, 02 Jun 2022 02:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaQF-0002f1-Da; Thu, 02 Jun 2022 02:18:51 +0000
Received: by outflank-mailman (input) for mailman id 340817;
 Thu, 02 Jun 2022 02:18:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifcq=WJ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nwaQD-0002ev-W7
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 02:18:50 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55d21a54-e21a-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 04:18:48 +0200 (CEST)
Received: from AM6PR0502CA0039.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::16) by AM0PR08MB4577.eurprd08.prod.outlook.com
 (2603:10a6:208:fe::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Thu, 2 Jun
 2022 02:18:45 +0000
Received: from VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::c8) by AM6PR0502CA0039.outlook.office365.com
 (2603:10a6:20b:56::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Thu, 2 Jun 2022 02:18:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT056.mail.protection.outlook.com (10.152.19.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Thu, 2 Jun 2022 02:18:44 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Thu, 02 Jun 2022 02:18:44 +0000
Received: from cd60b10cd450.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F17ABFF4-8732-4516-A114-F69D7D8700A2.1; 
 Thu, 02 Jun 2022 02:18:33 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cd60b10cd450.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 02 Jun 2022 02:18:33 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM9PR08MB6817.eurprd08.prod.outlook.com (2603:10a6:20b:2ff::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Thu, 2 Jun
 2022 02:18:31 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b%9]) with mapi id 15.20.5314.012; Thu, 2 Jun 2022
 02:18:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55d21a54-e21a-11ec-837f-e5687231ffcc
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=nPB2BJYWdFQT5pi+7NxcY6XUzI2IQ4unNxd4QXB9Ydz36v6myuuIS0pyd9qtLK85ZLeOtxMglmkctYLdRO6LlqadCdD1xLJ4qDFWm/VkeLC2gGh8rcI5z59vrHtrbW7LmPLSqhK6ORzImmG/TtwGinkw0YeMCMktF9p1ZUmV9YnC4gSyIF2pcwk/YlCSneb7KwrtpuwfM6KSHXfDaWjfb/L6w9Fl/clu33Au98JFyY7b5VrYdGL3UwBjpzHnGiu6jG0zTRtIZao+El2Qu/S/r0kb/Bi3lPr0sZZgAn5muI6597DEV3Ox+QtzidaUN66HyMcfo9cjBviSu9w3uF5V6A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7f7fpUrKH8MgG0mcY6KWXtg8YpxTMk9tH5sCFADrNfs=;
 b=PjNIMtvzjifw832BgLpO/h27hTXKiOzPX9OeO3pR43+XwWXmqswPT/ObQmaU2Qg/QVqLbIqkb5wIolW8ju/9jg7ZSGC9n0Z+7g9pbZjGuA0YxAK79e9BXtYMsLMyC1qCGvasEx1JH3Z4Ey4w/VXPlIsL3p11YwkGkQ8/D1SFxvC9tb9wQ3sEzNZsApP4NA8sZ1+jrGlt5vdIpt0mxJclMHHfqA+H4lZEBcPmicDSKr4Oyv9MtsltniwdaYJGhV4Ih28vLj4CfCzYM2pD/Y6DjuILp5bn16v4ju7411a4Q8QCz4GyGSEwQO8Y3xwoB9fDzbcKv5OlTcQZoqtd4Cpwng==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7f7fpUrKH8MgG0mcY6KWXtg8YpxTMk9tH5sCFADrNfs=;
 b=C7S7rNh72jLajs+nYOpSdiQpyH4iFMnmzfWR+9o6nhF36Vmm8bcTIU6LBGKZysp/fDhTKjp3RzP4HBS6Xq7vG5ruU546BSuLoyiu2FJTYmEjV9o3EKjLHQMFTr8HiECvUqvtuahOKO5uQgV99w63/uBsc2gBu0ehiG1LfGCCJLQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oYeh31fCwMRUjTDK+qybMYqfmn08Nmu2Qhn5aQCaCfkWgtRQH3pmqNxvsJaWN1sQjEUiBeVWdbkFOR3dFn+oH9xaXHBKpxhst6A7VFu1eBG5Od76/QTF0Qbo1YyZwCL8En6+QkEk58dugWkH7e9Zd/KIB7VXvhaf0Cl8VyIutkK2auNi0n6AwNBdJeKDi1Dcvlom4lfMVecbdkOT+sOQJYoT1ggUiIjsTeWiP1HifOdl6mhfa2LwZgApZ/9Seq0Vkik1qRy2FtIpW7DTMGCfhya32MH0ZRyF/tPxon4aV7qb7vINSNCInUWt44mLhbt30Nm6/Z0a2zagGPNVX08Ziw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7f7fpUrKH8MgG0mcY6KWXtg8YpxTMk9tH5sCFADrNfs=;
 b=bKQVX1l88EZmohkfdqG67HfMrSyty2wwEbcpTwm6oSUKZkXYQ1rCkJ4S7PV8DMshkmFU8WNl1MICodh3qkJdCp3pDQaYqPGhbjeHtuBuOXMHbQFF9h1pH0pLcXIRTwVZz5x9+VMWkk9FNweyvfBDUEeip3H8IDctCTKoM8hJ44FYSJYEPbkV2Mx2xfWA3z7ExlU3LaMwy/UUIWqJm01Z98TUOGZahxLMTQutHvYmbkoGlnLg9S8Oa6bfscvtaTnf19/O7V1SW72THNE5E/oKZcASbWgvVd1ZyIxkLoGSV4DW3gYqAHwaBpGG/QzAeE0PH+CocYgl3mtn2jFbuOSCZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v5 2/9] xen: do not free reserved memory into heap
Thread-Topic: [PATCH v5 2/9] xen: do not free reserved memory into heap
Thread-Index: AQHYdJxqwyiFdjgwHkGm0BKg5tWdR604qfGAgAK5SiA=
Date: Thu, 2 Jun 2022 02:18:31 +0000
Message-ID:
 <DU2PR08MB7325D759DF37AB836B3C2C79F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220531031241.90374-1-Penny.Zheng@arm.com>
 <20220531031241.90374-3-Penny.Zheng@arm.com>
 <9f0d9d47-236a-d7c5-3498-d8706c616fcd@suse.com>
In-Reply-To: <9f0d9d47-236a-d7c5-3498-d8706c616fcd@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9C8AE440865B3846AF73486696F94349.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: f4bb4320-5f71-4a88-16bf-08da443e380a
x-ms-traffictypediagnostic:
	AM9PR08MB6817:EE_|VE1EUR03FT056:EE_|AM0PR08MB4577:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB457720CA7858F2F146A74E85F7DE9@AM0PR08MB4577.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6817
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	035cca26-5ad5-4f55-ce77-08da443e3012
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 02:18:44.3786
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f4bb4320-5f71-4a88-16bf-08da443e380a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4577


> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 31, 2022 4:37 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 2/9] xen: do not free reserved memory into heap
> 
> On 31.05.2022 05:12, Penny Zheng wrote:
> > Pages used as guest RAM for static domain, shall be reserved to this
> > domain only.
> > So in case reserved pages being used for other purpose, users shall
> > not free them back to heap, even when last ref gets dropped.
> >
> > free_staticmem_pages will be called by free_heap_pages in runtime for
> > static domain freeing memory resource, so let's drop the __init flag.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > v5 changes:
> > - In order to avoid stub functions, we #define PGC_staticmem to
> > non-zero only when CONFIG_STATIC_MEMORY
> > - use "unlikely()" around pg->count_info & PGC_staticmem
> > - remove pointless "if", since mark_page_free() is going to set
> > count_info to PGC_state_free and by consequence clear PGC_staticmem
> > - move #define PGC_staticmem 0 to mm.h
> > ---
> > v4 changes:
> > - no changes
> > ---
> > v3 changes:
> > - fix possible racy issue in free_staticmem_pages()
> > - introduce a stub free_staticmem_pages() for the
> > !CONFIG_STATIC_MEMORY case
> > - move the change to free_heap_pages() to cover other potential call
> > sites
> > - fix the indentation
> > ---
> > v2 changes:
> > - new commit
> > ---
> >  xen/arch/arm/include/asm/mm.h |  2 ++
> >  xen/common/page_alloc.c       | 16 +++++++++++-------
> >  xen/include/xen/mm.h          |  6 +++++-
> >  3 files changed, 16 insertions(+), 8 deletions(-)
> >
> > diff --git a/xen/arch/arm/include/asm/mm.h
> > b/xen/arch/arm/include/asm/mm.h index 1226700085..56d0939318 100644
> > --- a/xen/arch/arm/include/asm/mm.h
> > +++ b/xen/arch/arm/include/asm/mm.h
> > @@ -108,9 +108,11 @@ struct page_info
> >    /* Page is Xen heap? */
> >  #define _PGC_xen_heap     PG_shift(2)
> >  #define PGC_xen_heap      PG_mask(1, 2)
> > +#ifdef CONFIG_STATIC_MEMORY
> >    /* Page is static memory */
> >  #define _PGC_staticmem    PG_shift(3)
> >  #define PGC_staticmem     PG_mask(1, 3)
> > +#endif
> >  /* ... */
> >  /* Page is broken? */
> >  #define _PGC_broken       PG_shift(7)
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 44600dd9cd..6425761116 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -151,10 +151,6 @@
> >  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> > #endif
> >
> > -#ifndef PGC_staticmem
> > -#define PGC_staticmem 0
> > -#endif
> > -
> 
> Is the moving of this into the header really a necessary part of this change?
> Afaics the symbol is still only ever used in this one C file.

Later, in commit "xen/arm: unpopulate memory when domain is static",
we will use this flag in xen/arch/arm/include/asm/mm.h

> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -85,10 +85,10 @@ bool scrub_free_pages(void);  } while ( false )
> > #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >
> > -#ifdef CONFIG_STATIC_MEMORY
> >  /* These functions are for static memory */  void
> > free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> >                       bool need_scrub);
> > +#ifdef CONFIG_STATIC_MEMORY
> >  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int
> nr_mfns,
> >                              unsigned int memflags);  #endif
> 
> Is the #ifdef really worth retaining at this point? Code is generally better
> readable without.
> 

Sure, will remove

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 02:20:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 02:20:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340826.565945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaRp-00043t-VJ; Thu, 02 Jun 2022 02:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340826.565945; Thu, 02 Jun 2022 02:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwaRp-00043m-SO; Thu, 02 Jun 2022 02:20:29 +0000
Received: by outflank-mailman (input) for mailman id 340826;
 Thu, 02 Jun 2022 02:20:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Cpeu=WJ=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nwaRo-00043e-Ev
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 02:20:28 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe07::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90b8f85f-e21a-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 04:20:27 +0200 (CEST)
Received: from DB7PR05CA0063.eurprd05.prod.outlook.com (2603:10a6:10:2e::40)
 by AM6PR08MB4135.eurprd08.prod.outlook.com (2603:10a6:20b:a9::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13; Thu, 2 Jun
 2022 02:20:24 +0000
Received: from DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::ac) by DB7PR05CA0063.outlook.office365.com
 (2603:10a6:10:2e::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.13 via Frontend
 Transport; Thu, 2 Jun 2022 02:20:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT047.mail.protection.outlook.com (100.127.143.25) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Thu, 2 Jun 2022 02:20:23 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Thu, 02 Jun 2022 02:20:23 +0000
Received: from 7c3b2ad821d3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 91096236-7B5C-4FE6-AA12-BA8294653CF5.1; 
 Thu, 02 Jun 2022 02:20:12 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7c3b2ad821d3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 02 Jun 2022 02:20:12 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS8PR08MB7094.eurprd08.prod.outlook.com (2603:10a6:20b:401::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 02:20:11 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e%3]) with mapi id 15.20.5293.019; Thu, 2 Jun 2022
 02:20:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90b8f85f-e21a-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=XNIUFDHEdl1sf1ATyYPQ+i820HplWGITl8+2X54eC1lkT3SXm6dpdS3NXo1fcdX6C5jckUpyWyWWJLXDYiEGy7SRuhRKekbBQyk3MQMHdYx74TenmG5ayBMNqTMxNUE0KTiwz/bZ/qJ60MpCtplrAd6e6Q1ETYdwsaDZkL8SaVS3X18eL5f8bSTgUmO7A1vYKX0XzACGOiSRMVenAINTtj4H2NzbdthWP48v7tYxT3HeYGXuFEQVDpNTeUo3Z/rUCx9RXeSmcojZLkQ4qmGPlUED0V+a3swvnErJa30rx1AJngIuPB2hX0hoHTSKNWBHeboJHnTHjGq1UhZ3dQbkJg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=q+Qp4/v8H8bsT09zM7B9CbaKi4RLdX9Cv22MapYeN20=;
 b=GrSnUPFBmwEh9c8AzZRJ3kOzXLE+rIbyLQfqwzzkcmTRqjTgd4OTNV6BchnxDV6uBQ6/5z+19dyDeiewQfiHp/kj1E4aYiWkEwWjFFQEpICONHDQNKqhO6N49lqIZyVO8aVTctdRHj4BZuegwhj4a0baaks5kVERXvkLbYqMnkdD6jNznA5WOxvJK6y6fOElsw67EOc17qAnYbQIbUzBnE+FsV+TPOgvVSOBdbSmRT3lSqW5N2agJW7izclq+3y8TinfGIhvlhZx22rHpbkT63gbmGykA/lf8Pzo67r5pb0QGYAzZ8RR3Mi8eC8Po9hWMNpRc68+o47yTTsPb6qNQA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q+Qp4/v8H8bsT09zM7B9CbaKi4RLdX9Cv22MapYeN20=;
 b=f6j9ab8Y1X1oYH0zwmxSGGIYesbllRvoMGTEhHsjWWXk2D8HxYxfOaCKwmSeJMNsTWkyA36rZCYU7vcYDRvhOi44mFce/cNxOgiID2E4QM06ir8tt0gwONEr2mYpqAYv74l4T3N/twjtGkbqqditK98eG1b4/uAVNY+8iAK0BGM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZzkIe8zEFo79kvIBjrjv9Fqx6rdvoeAQBBzXjb0vbg3Hzd4vLUqsvwfoLQ48Mp1DX4e14npMIYeLxio+N7KyTvdCh20hBz6luoizL6K4GSChdH2z9XP40uqfSSB2zvGAfgXg6CJdv4ZSwDCR/Hbpdn5moxii4aHhe2rUDdW77A5aV2CDlAq0xTTj2QFOpCEhRStsWumcONd3XCxacYUKX2Ftaw/RW09sE/6mY02InYK4rdL9qdDGx4xM9yufJfO0mWYzuEBxT4lk26nC9WAo+0LzNFq8mNIZG9mz8xlT3auD7KdaT5Q6/Z9ot9Bf3HO2o6cyLzJv9HcEvueowyexNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=q+Qp4/v8H8bsT09zM7B9CbaKi4RLdX9Cv22MapYeN20=;
 b=DEr7qTKLdI8Y22R2gEtM8ervJHWLfAYLLApRi0Kg3NcQ5L2AzqGoF9sHMwNAOAP5bJFMJERxR7PlELGQmygrmE3ngsE/QBClwVyzQKEyyQgUFOzX7oxWFQrxwDePQaLrzuYTWUvtfNoTAAyISmVGVJk3t8M5CmegORGusFUcQU+FFBWTgRF4i69YRBkM/dYHxWVo+AlhrphCl9AZ4Me2EUdLRxJM/XrDMl9DnLO+Oso9hQekey4v4yfWzcgiRn0UvpHksAnZRjj6/TJ1KhT6G7wmow/UsvWkpF36rEJ+jhll2ueOARz1pyx/THqWvTNC87MTHpzkmvl2puOoqdBCRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q+Qp4/v8H8bsT09zM7B9CbaKi4RLdX9Cv22MapYeN20=;
 b=f6j9ab8Y1X1oYH0zwmxSGGIYesbllRvoMGTEhHsjWWXk2D8HxYxfOaCKwmSeJMNsTWkyA36rZCYU7vcYDRvhOi44mFce/cNxOgiID2E4QM06ir8tt0gwONEr2mYpqAYv74l4T3N/twjtGkbqqditK98eG1b4/uAVNY+8iAK0BGM=
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Thread-Topic: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Thread-Index: AQHYbm4HiKDL321oiEuXmbKNpj1MH605Bc8AgADVNxCAAErZgIABQrBA
Date: Thu, 2 Jun 2022 02:20:10 +0000
Message-ID:
 <PAXPR08MB74208B9B507B780CEC065C9F9EDE9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220523062525.2504290-1-wei.chen@arm.com>
 <20220523062525.2504290-8-wei.chen@arm.com>
 <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
 <PAXPR08MB7420F087CC36C8E8DB8DFF7E9EDF9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <dc158225-73d0-498d-8b30-ade1078edd51@suse.com>
In-Reply-To: <dc158225-73d0-498d-8b30-ade1078edd51@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 66567AB6FDA22A4E80CBBAA15CF1F594.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: e738049a-ccd3-430e-aa30-08da443e7307
x-ms-traffictypediagnostic:
	AS8PR08MB7094:EE_|DBAEUR03FT047:EE_|AM6PR08MB4135:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4135BC0CD212C90A6EFEB8629EDE9@AM6PR08MB4135.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7r/SEQwRA3K+AYdw0WbfKSZvoboH814xtaueu4W5Vsdho9Fd+4jPyhiWa2OSzutyKlqBeJYWMfYNy+WMxvTz0PmGiB0A9kCD6/jlNv88vVw+mgE9YdP2if/XepCDdtVcsaVLavwbCm4T3bNA4eDd8UF3S9CD8x3JoR+TjVr6dLUjEZ+kwJvR1y3CkMDyla2Ox5zijNCvFo570YGHtI16/exRuqiUmkgf1zb9dQsMpjxytbNPGi3au11aBkwgujcyzH1TR9U8oyaYAMXusZi/+MwNwmC7bfUlCOk1HQUc3l9cEr63cCn8PglZyxWC7hHuldKqJJehIY0zPPUUGzVuKIDEgIBKfKo9W82FYMNKJ3hshh1WM3cQrvzbhnYD5P2ztkEwoyqbL5nkeSGhQoZ8vsFrSJwPetpCLgKZYT5zo3+E853YCoGhmU5hcNpM0wfbukZ5wLCDbz3Nyr8JW4/98CEb0mQv40CkwGQ98fpmWwvcXNA2uaYEmI0MEOKMTZ8/AuCGcZW5PyOKX4uYONv+f+IoiWFHGDyFjDxOO3WHUf9OjcaMbLYrs4qmwaeQw62Uv/0EhA1TeYWXrnTl63gBTDZMDUYrSJtS87a4k+2CpOLUV1M69baFAO/k1xULIHQv4eqb6NF425v1TuGs/Msz0z+eNy5xVO4eFbgTVvAMDPPr1+AjaHOtPm1kxRWbM7pFUaB/b8ElhpKmBbh+e1sPmA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(38100700002)(55016003)(26005)(66446008)(4326008)(66556008)(66476007)(66946007)(8676002)(64756008)(76116006)(316002)(6916009)(186003)(54906003)(53546011)(38070700005)(2906002)(6506007)(7696005)(5660300002)(508600001)(122000001)(33656002)(83380400001)(8936002)(71200400001)(86362001)(9686003)(52536014);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7094
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	782115b5-bd4a-4ee1-62fb-08da443e6b98
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hNlfV6fVM7toZPZHEgIlR6EkeLzrIxvD4ZbvaI4IBKxscIMjMS0YhkhKCGL6f+lT0obrQngTQeI6l3BsF8MPb9Y38SsI/5r7YLY5/SoP73N1ShnSoldkuWPiYP72jBs3abab7pM/qD7HjDShr72bSAZwWg8HdO46WJD2OhIv5FrC13ozcTTgi+kMnDNsfk4c3WQMTE9vbrACtzcmCTBj1o+pAm1g8cFg+z6rcnudeoK9WvhW7w+bb74tuteNf5uQb1PvBH5MUkQNbkOXf4eAJCmazYenZX5Y/fPO4HmMlaTtrdJtR455HTEk3dhVaII52KijrnKRG53FOYgH6FuUkdSIDJYeSURGpa7w7ntmvYUMbGlTnyRN5j9ZVJfCrXK28Kpf3zILAQSfHJ1aGMJYqE6R7BMcagTDmd+G++O9t7PMEtAaIYN7T3Y1oW2ErQlVDVpLPc8xG31KyAtX4s/bSn0H71gP5W84SFrz4o4MmBC3lDBliey7HiDBHW/l9lHxzz0UvcmVGbq+BDBXc1nRGB/VrJw1DiAfjw/sO3FlPvjhzS1IdOajLQKSiIrYKhjzUpB+ZrCiAmPj2Pai/9PXVK/aI6vERUBPmJFDpEVbmKnBo1ZMVKe3B0bnZ6L3agTNg2oAf/dXBH2QxgSfEEccrN3CUM72HHgb2j/ethJiuXOU0YM3OVojD4BVZLLH2xeU
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(40470700004)(70206006)(6862004)(52536014)(70586007)(55016003)(54906003)(316002)(8936002)(8676002)(86362001)(4326008)(186003)(336012)(7696005)(33656002)(5660300002)(40460700003)(508600001)(81166007)(83380400001)(36860700001)(2906002)(26005)(6506007)(9686003)(53546011)(47076005)(356005)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 02:20:23.4889
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e738049a-ccd3-430e-aa30-08da443e7307
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4135

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gU2VudDogMjAyMuW5tDbmnIgx5pelIDE0OjMyDQo+
IFRvOiBXZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNvbT4NCj4gQ2M6IG5kIDxuZEBhcm0uY29tPjsg
QW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT47IFJvZ2VyIFBhdQ0KPiBN
b25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPjsgV2VpIExpdSA8d2xAeGVuLm9yZz47IEppYW1l
aSBYaWUNCj4gPEppYW1laS5YaWVAYXJtLmNvbT47IHhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0
Lm9yZw0KPiBTdWJqZWN0OiBSZTogW1BBVENIIHY0IDcvOF0geGVuL3g4NjogYWRkIGRldGVjdGlv
biBvZiBtZW1vcnkgaW50ZXJsZWF2ZXMNCj4gZm9yIGRpZmZlcmVudCBub2Rlcw0KPiANCj4gT24g
MDEuMDYuMjAyMiAwNDo1MywgV2VpIENoZW4gd3JvdGU6DQo+ID4+IEZyb206IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gPj4gU2VudDogMjAyMuW5tDXmnIgzMeaXpSAyMToyMQ0K
PiA+Pg0KPiA+PiBPbiAyMy4wNS4yMDIyIDA4OjI1LCBXZWkgQ2hlbiB3cm90ZToNCj4gPj4+IEBA
IC0xMTksMjAgKzEyNSw0NSBAQCBpbnQgdmFsaWRfbnVtYV9yYW5nZShwYWRkcl90IHN0YXJ0LCBw
YWRkcl90IGVuZCwNCj4gPj4gbm9kZWlkX3Qgbm9kZSkNCj4gPj4+ICAJcmV0dXJuIDA7DQo+ID4+
PiAgfQ0KPiA+Pj4NCj4gPj4+IC1zdGF0aWMgX19pbml0IGludCBjb25mbGljdGluZ19tZW1ibGtz
KHBhZGRyX3Qgc3RhcnQsIHBhZGRyX3QgZW5kKQ0KPiA+Pj4gK3N0YXRpYw0KPiA+Pj4gK2VudW0g
Y29uZmxpY3RzIF9faW5pdCBjb25mbGljdGluZ19tZW1ibGtzKG5vZGVpZF90IG5pZCwgcGFkZHJf
dCBzdGFydCwNCj4gPj4+ICsJCQkJCSAgcGFkZHJfdCBlbmQsIHBhZGRyX3QgbmRfc3RhcnQsDQo+
ID4+PiArCQkJCQkgIHBhZGRyX3QgbmRfZW5kLCB1bnNpZ25lZCBpbnQgKm1ibGtpZCkNCj4gPj4+
ICB7DQo+ID4+PiAtCWludCBpOw0KPiA+Pj4gKwl1bnNpZ25lZCBpbnQgaTsNCj4gPj4+DQo+ID4+
PiArCS8qDQo+ID4+PiArCSAqIFNjYW4gYWxsIHJlY29yZGVkIG5vZGVzJyBtZW1vcnkgYmxvY2tz
IHRvIGNoZWNrIGNvbmZsaWN0czoNCj4gPj4+ICsJICogT3ZlcmxhcCBvciBpbnRlcmxlYXZlLg0K
PiA+Pj4gKwkgKi8NCj4gPj4+ICAJZm9yIChpID0gMDsgaSA8IG51bV9ub2RlX21lbWJsa3M7IGkr
Kykgew0KPiA+Pj4gIAkJc3RydWN0IG5vZGUgKm5kID0gJm5vZGVfbWVtYmxrX3JhbmdlW2ldOw0K
PiA+Pj4gKw0KPiA+Pj4gKwkJKm1ibGtpZCA9IGk7DQo+ID4+PiArDQo+ID4+PiArCQkvKiBTa2lw
IDAgYnl0ZXMgbm9kZSBtZW1vcnkgYmxvY2suICovDQo+ID4+PiAgCQlpZiAobmQtPnN0YXJ0ID09
IG5kLT5lbmQpDQo+ID4+PiAgCQkJY29udGludWU7DQo+ID4+PiArCQkvKg0KPiA+Pj4gKwkJICog
VXNlIG1lbWJsayByYW5nZSB0byBjaGVjayBtZW1ibGsgb3ZlcmxhcHMsIGluY2x1ZGUgdGhlDQo+
ID4+PiArCQkgKiBzZWxmLW92ZXJsYXAgY2FzZS4NCj4gPj4+ICsJCSAqLw0KPiA+Pj4gIAkJaWYg
KG5kLT5lbmQgPiBzdGFydCAmJiBuZC0+c3RhcnQgPCBlbmQpDQo+ID4+PiAtCQkJcmV0dXJuIGk7
DQo+ID4+PiArCQkJcmV0dXJuIE9WRVJMQVA7DQo+ID4+PiAgCQlpZiAobmQtPmVuZCA9PSBlbmQg
JiYgbmQtPnN0YXJ0ID09IHN0YXJ0KQ0KPiA+Pj4gLQkJCXJldHVybiBpOw0KPiA+Pj4gKwkJCXJl
dHVybiBPVkVSTEFQOw0KPiA+Pg0KPiA+PiBLbm93aW5nIHRoYXQgbmQncyByYW5nZSBpcyBub24t
ZW1wdHksIGlzIHRoaXMgMm5kIGNvbmRpdGlvbiBhY3R1YWxseQ0KPiA+PiBuZWVkZWQgaGVyZT8g
KFN1Y2ggYW4gYWRqdXN0bWVudCwgaWYgeW91IGRlY2lkZWQgdG8gbWFrZSBpdCBhbmQgaWYgbm90
DQo+ID4+IHNwbGl0IG91dCB0byBhIHNlcGFyYXRlIHBhdGNoLCB3b3VsZCBuZWVkIGNhbGxpbmcg
b3V0IGluIHRoZQ0KPiA+PiBkZXNjcmlwdGlvbi4pDQo+ID4NCj4gPiBUaGUgMm5kIGNvbmRpdGlv
biBoZXJlLCB5b3UgbWVhbnQgaXMgIihuZC0+ZW5kID09IGVuZCAmJiBuZC0+c3RhcnQgPT0NCj4g
c3RhcnQpIg0KPiA+IG9yIGp1c3QgIm5kLT5zdGFydCA9PSBzdGFydCIgYWZ0ZXIgIiYmIj8NCj4g
DQo+IE5vLCBJIG1lYW4gdGhlIGVudGlyZSAybmQgaWYoKS4NCj4gDQoNCk9LLg0KDQo+ID4+PiAr
CQkvKg0KPiA+Pj4gKwkJICogVXNlIG5vZGUgbWVtb3J5IHJhbmdlIHRvIGNoZWNrIHdoZXRoZXIg
bmV3IHJhbmdlIGNvbnRhaW5zDQo+ID4+PiArCQkgKiBtZW1vcnkgZnJvbSBvdGhlciBub2RlcyAt
IGludGVybGVhdmUgY2hlY2suIFdlIGp1c3QgbmVlZA0KPiA+Pj4gKwkJICogdG8gY2hlY2sgZnVs
bCBjb250YWlucyBzaXR1YXRpb24uIEJlY2F1c2Ugb3ZlcmxhcHMgaGF2ZQ0KPiA+Pj4gKwkJICog
YmVlbiBjaGVja2VkIGFib3ZlLg0KPiA+Pj4gKwkJICovDQo+ID4+PiArCSAgICAgICAgaWYgKG5p
ZCAhPSBtZW1ibGtfbm9kZWlkW2ldICYmDQo+ID4+PiArCQkgICAgKG5kX3N0YXJ0IDwgbmQtPnN0
YXJ0ICYmIG5kLT5lbmQgPCBuZF9lbmQpKQ0KPiA+Pj4gKwkJCXJldHVybiBJTlRFUkxFQVZFOw0K
PiA+Pg0KPiA+PiBEb2Vzbid0IHRoaXMgbmVlZCB0byBiZSA8PSBpbiBib3RoIGNhc2VzIChhbGJl
aXQgSSB0aGluayBvbmUgb2YgdGhlIHR3bw0KPiA+PiBleHByZXNzaW9ucyB3b3VsZCB3YW50IHN3
aXRjaGluZyBhcm91bmQsIHRvIGJldHRlciBsaW5lIHVwIHdpdGggdGhlDQo+ID4+IGVhcmxpZXIg
b25lLCB2aXNpYmxlIGluIGNvbnRleHQgZnVydGhlciB1cCkuDQo+ID4+DQo+ID4NCj4gPiBZZXMs
IEkgd2lsbCBhZGQgIj0iaW4gYm90aCBjYXNlcy4gQnV0IGZvciBzd2l0Y2hpbmcgYXJvdW5kLCBJ
IGFsc28NCj4gPiB3YW50ZWQgdG8gbWFrZSBhIGJldHRlciBsaW5lIHVwLiBCdXQgaWYgbmlkID09
IG1lbWJsa19ub2RlaWRbaV0sDQo+ID4gdGhlIGNoZWNrIG9mIChuZF9zdGFydCA8IG5kLT5zdGFy
dCAmJiBuZC0+ZW5kIDwgbmRfZW5kKSBpcyBtZWFuaW5nbGVzcy4NCj4gPiBJJ2xsIGFkanVzdCB0
aGVpciBvcmRlciBpbiB0aGUgbmV4dCB2ZXJzaW9uIGlmIHlvdSB0aGluayB0aGlzIGlzDQo+ID4g
YWNjZXB0YWJsZS4NCj4gDQo+IEkgd2Fzbid0IHJlZmVycmluZyB0byB0aGUgIm5pZCAhPSBtZW1i
bGtfbm9kZWlkW2ldIiBwYXJ0IGF0IGFsbC4gV2hhdA0KPiBJJ20gYWZ0ZXIgaXMgZm9yIHRoZSB0
d28gcmFuZ2UgY2hlY2tzIHRvIGNvbWUgYXMgY2xvc2UgYXMgcG9zc2libGUgdG8NCj4gd2hhdCB0
aGUgb3RoZXIgcmFuZ2UgY2hlY2sgZG9lcy4gKFdoaWNoLCBhcyBJIG5vdGljZSBvbmx5IG5vdywg
d291bGQNCj4gaW5jbHVkZSB0aGUgZHJvcHBpbmcgb2YgdGhlIHVubmVjZXNzYXJ5IGlubmVyIHBh
aXIgb2YgcGFyZW50aGVzZXMuKQ0KPiBFLmcuICh0aGVyZSBhcmUgb3RoZXIgdmFyaWF0aW9ucyBv
ZiBpdCkNCj4gDQo+IAkgICAgICAgIGlmIChuaWQgIT0gbWVtYmxrX25vZGVpZFtpXSAmJg0KPiAJ
CSAgICBuZC0+c3RhcnQgPj0gbmRfc3RhcnQgJiYgbmQtPmVuZCA8PSBuZF9lbmQpDQo+IAkJCXJl
dHVybiBJTlRFUkxFQVZFOw0KPiANCg0KT2gsIHRoYW5rcy4gSSBoYWQgdGhvdWdodCB0b28gbXVj
aCwgSSB3aWxsIGRyb3AgdGhlbS4NCg0KPiA+Pj4gQEAgLTI3NSwxMCArMzA2LDEzIEBAIGFjcGlf
bnVtYV9wcm9jZXNzb3JfYWZmaW5pdHlfaW5pdChjb25zdCBzdHJ1Y3QNCj4gPj4gYWNwaV9zcmF0
X2NwdV9hZmZpbml0eSAqcGEpDQo+ID4+PiAgdm9pZCBfX2luaXQNCj4gPj4+ICBhY3BpX251bWFf
bWVtb3J5X2FmZmluaXR5X2luaXQoY29uc3Qgc3RydWN0IGFjcGlfc3JhdF9tZW1fYWZmaW5pdHkN
Cj4gKm1hKQ0KPiA+Pj4gIHsNCj4gPj4+ICsJZW51bSBjb25mbGljdHMgc3RhdHVzOw0KPiA+Pg0K
PiA+PiBJIGRvbid0IHRoaW5rIHlvdSBuZWVkIHRoaXMgbG9jYWwgdmFyaWFibGUuDQo+ID4+DQo+
ID4NCj4gPiBXaHkgSSBkb24ndCBuZWVkIHRoaXMgb25lPyBEaWQgeW91IG1lYW4gSSBjYW4gdXNl
DQo+ID4gc3dpdGNoIChjb25mbGljdGluZ19tZW1ibGtzKC4uLikpIGRpcmVjdGx5Pw0KPiANCj4g
WWVzLiBXaHkgY291bGQgdGhpcyBub3QgYmUgcG9zc2libGU/DQo+IA0KDQpPay4NCg0KPiA+Pj4g
IAkJICAgICAgIG5vZGVfbWVtYmxrX3JhbmdlW2ldLnN0YXJ0LCBub2RlX21lbWJsa19yYW5nZVtp
XS5lbmQpOw0KPiA+Pj4gIAkJYmFkX3NyYXQoKTsNCj4gPj4+ICAJCXJldHVybjsNCj4gPj4+ICAJ
fQ0KPiA+Pj4gLQlpZiAoIShtYS0+ZmxhZ3MgJiBBQ1BJX1NSQVRfTUVNX0hPVF9QTFVHR0FCTEUp
KSB7DQo+ID4+PiAtCQlzdHJ1Y3Qgbm9kZSAqbmQgPSAmbm9kZXNbbm9kZV07DQo+ID4+Pg0KPiA+
Pj4gLQkJaWYgKCFub2RlX3Rlc3RfYW5kX3NldChub2RlLCBtZW1vcnlfbm9kZXNfcGFyc2VkKSkg
ew0KPiA+Pj4gLQkJCW5kLT5zdGFydCA9IHN0YXJ0Ow0KPiA+Pj4gLQkJCW5kLT5lbmQgPSBlbmQ7
DQo+ID4+PiAtCQl9IGVsc2Ugew0KPiA+Pj4gLQkJCWlmIChzdGFydCA8IG5kLT5zdGFydCkNCj4g
Pj4+IC0JCQkJbmQtPnN0YXJ0ID0gc3RhcnQ7DQo+ID4+PiAtCQkJaWYgKG5kLT5lbmQgPCBlbmQp
DQo+ID4+PiAtCQkJCW5kLT5lbmQgPSBlbmQ7DQo+ID4+PiAtCQl9DQo+ID4+PiArCWRlZmF1bHQ6
DQo+ID4+PiArCQlicmVhazsNCj4gPj4NCj4gPj4gVGhpcyB3YW50cyB0byBiZSAiY2FzZSBOT19D
T05GTElDVDoiLCBzdWNoIHRoYXQgdGhlIGNvbXBpbGVyIHdvdWxkDQo+ID4+IHdhcm4gaWYgYSBu
ZXcgZW51bWVyYXRvciBhcHBlYXJzIHdpdGhvdXQgYWRkaW5nIGNvZGUgaGVyZS4gKEFuDQo+ID4+
IGFsdGVybmF0aXZlIC0gd2hpY2ggcGVyc29uYWxseSBJIGRvbid0IGxpa2UgLSB3b3VsZCBiZSB0
byBwdXQNCj4gPj4gQVNTRVJUX1VOUkVBQ0hBQkxFKCkgaW4gdGhlIGRlZmF1bHQ6IGNhc2UuIFRo
ZSBkb3duc2lkZSBpcyB0aGF0DQo+ID4+IHRoZW4gdGhlIGlzc3VlIHdvdWxkIG9ubHkgYmUgbm90
aWNlYWJsZSBhdCBydW50aW1lLikNCj4gPj4NCj4gPg0KPiA+IFRoYW5rcywgSSB3aWxsIGFkanVz
dCBpdCB0bzoNCj4gPiAJY2FzZSBOT19DT05GTElDVDoNCj4gPiAJCWJyZWFrOw0KPiA+IAlkZWZh
dWx0Og0KPiA+IAkJQVNTRVJUX1VOUkVBQ0hBQkxFKCk7DQo+ID4gaW4gbmV4dCB2ZXJzaW9uLg0K
PiANCj4gQXMgc2FpZCAtIEkgY29uc2lkZXIgdGhpcyBmb3JtIGxlc3MgZGVzaXJhYmxlLCBhcyBp
dCdsbCBkZWZlcg0KPiBub3RpY2luZyBvZiBhbiBpc3N1ZSBmcm9tIGJ1aWxkLXRpbWUgdG8gcnVu
dGltZS4gSWYgeW91IHRoaW5rIHRoYXQNCj4gZm9ybSBpcyBiZXR0ZXIsIG1heSBJIGFzayB3aHk/
DQo+IA0KDQpPay4gSSB3aWxsIGRyb3AgdGhlIGRlZmF1bHQuIEkgaGFkIG1pcy11bmRlcnN0b29k
IHlvdXIgY29tbWVudC4NCg0KPiBKYW4NCg0K


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 04:11:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 04:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340835.565955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwcAV-0000nf-W3; Thu, 02 Jun 2022 04:10:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340835.565955; Thu, 02 Jun 2022 04:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwcAV-0000nY-TN; Thu, 02 Jun 2022 04:10:43 +0000
Received: by outflank-mailman (input) for mailman id 340835;
 Thu, 02 Jun 2022 04:10:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Cpeu=WJ=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nwcAU-0000nR-K5
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 04:10:43 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on061b.outbound.protection.outlook.com
 [2a01:111:f400:fe02::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f63bdbbe-e229-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 06:10:40 +0200 (CEST)
Received: from DU2PR04CA0015.eurprd04.prod.outlook.com (2603:10a6:10:3b::20)
 by DB9PR08MB7699.eurprd08.prod.outlook.com (2603:10a6:10:392::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Thu, 2 Jun
 2022 04:10:35 +0000
Received: from DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3b:cafe::d5) by DU2PR04CA0015.outlook.office365.com
 (2603:10a6:10:3b::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Thu, 2 Jun 2022 04:10:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT059.mail.protection.outlook.com (100.127.142.102) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Thu, 2 Jun 2022 04:10:34 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Thu, 02 Jun 2022 04:10:34 +0000
Received: from 5043a14e0ac5.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C11244C2-0C8D-4A7C-8AD3-5A5D86C2B028.1; 
 Thu, 02 Jun 2022 04:10:27 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5043a14e0ac5.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 02 Jun 2022 04:10:27 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by DB7PR08MB3355.eurprd08.prod.outlook.com (2603:10a6:5:18::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.17; Thu, 2 Jun
 2022 04:10:25 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::d007:5582:9bbe:425e%3]) with mapi id 15.20.5293.019; Thu, 2 Jun 2022
 04:10:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f63bdbbe-e229-11ec-837f-e5687231ffcc
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=ktUSISaclaDrZ+GxiIo0w+hDS/kvsySeZNFD0B4cxnq/4jYc5+TpHZdI6ryQovOrSW9gePJ2WRKHVhzMOsvnYE3mJwwXySSojB0pnaVMByTuszlsIg7laL/gWjgdgMTtyBJ/yAkf1lTl7Zy8XjAXj7QnHIDAwSlYs0YPicumh0nqTIGgvBLIJAXd1+7saMjB0XUMYrC146bQ9oG/Sv66cYRtM5REvjGiP5B/wAO7MvbE+otfxXSgVe650KIdzobwjP2y4FQyhTZQ/w83SD388pol7UsdMXIiwqZBwCqipEV0a/lz0mmfAAY0KOUcUStwY2FSFNg3FsxolmBayWvg2A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RMrJHla0pmJ3TZNCdSIdkv1KpR1wdQRFa5FSC1mu+4c=;
 b=GuWP+XZNRm0qB1UQGSSZ233LWNY6Ss7P0grHrIunJ+IzI5Rr440QSiQOWWeXkyBnoPRj/zNkGFBdAjUlNEd/UWPFzL4j/XzfNgU8hfyztfE7/7b2wamH8VJmEvyKOL0dpftvYEnH8ox2dJa3ZC2J98YOndXcHEEpwIrVvNdobCsbiUP6UPx3jW9o98KcXCJmFad/35Ko/9EbQUXDABibMibvAA6HFG5bivNcf4NRQNjjocZFyR7fDm5YrLCESYaXudL1akEs4/aH4IpJ1YduYLvzbFaIvsXohwmZi9vH/WsdzaBJr0jG2O5V57iLZS22e3bhLgBQNsEO39UkT1wHrQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RMrJHla0pmJ3TZNCdSIdkv1KpR1wdQRFa5FSC1mu+4c=;
 b=FUmMqvMm26ZTvTHnzetf80+SZcBBkPmoaYAXV0mg2+d1LvTtlT6Uzv7W4P4eGYUOZeO9hf6HRdYXyEb+1N6MZZaokVlkIdBDTIc+znJmn/hCUu+06eawn2G9IaOFjEhu4RPYL0Bi48AU04HdsIPYa7YCGReZ+Co6XRnud6Mrof8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 681082eba03b6977
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K+DN5D968g6veNblwwlvJkvB5QiVyJo/AHBY22deuNp6hxquW6XfDwBifzvJCmklbx7jhcXp1FRfqpfQ2ftc88vA0OzXYqVvB0Ch2TYWGBMExNg0nnq3/jFGrkAg/feib2sXegU/mFd8vQeFjxOZQW8SrUW+ddXSMUTVCMUQdcxdUKrVMa+gSml/BL1xrYSn7HWT7OcaCNYPDHcNO4RHtNFZO7iIxFF08Y/zMZlfjYlsv6d6JsCMKEfpCHfHye+V6o+TBh8ERLBp9HTScntKV0bEDIYgeoxnLx4VSN/rommjnP94MY0aqf22tfJ7w95hNhvAM26HGiEf53MiODpLLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RMrJHla0pmJ3TZNCdSIdkv1KpR1wdQRFa5FSC1mu+4c=;
 b=FFG5LRfGiKC2WiFERRedf87mpJHlE+XPqSkRPhcTGo8i6+0MCAWhtgruGMhoxl3b+foYK9ktOx1MR+f6CDdDLpiFWoVECUlttHHClpR+o2XiwQCFe3B+MFH0yxGIvJ3s6Te3Zw4ouqJJ+n9Dv4Gpjl612svRNptrcyijSR0MKyTgDTbq85cIdAUX0Haf5ACYXN6/epqm1WrJoKzxDu0AvoRDIVx7eE+Cvotx7xXFtCmI+udu9skkoP8ucBnrhtp9oJg9oRVC93RrW9O5Xvxn9zUUrO0zOKilH3ha88PHo+ySCE8MaDPtINd+2LhXP5dcil71A7GRsc/cgZYekmV6XQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RMrJHla0pmJ3TZNCdSIdkv1KpR1wdQRFa5FSC1mu+4c=;
 b=FUmMqvMm26ZTvTHnzetf80+SZcBBkPmoaYAXV0mg2+d1LvTtlT6Uzv7W4P4eGYUOZeO9hf6HRdYXyEb+1N6MZZaokVlkIdBDTIc+znJmn/hCUu+06eawn2G9IaOFjEhu4RPYL0Bi48AU04HdsIPYa7YCGReZ+Co6XRnud6Mrof8=
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Message-ID: <f8674bd2-523f-bf89-2998-6f1176ad97be@arm.com>
Date: Thu, 2 Jun 2022 12:10:19 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: nd@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>, xen-devel@lists.xenproject.org
References: <20220523062525.2504290-1-wei.chen@arm.com>
 <20220523062525.2504290-8-wei.chen@arm.com>
 <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
From: Wei Chen <Wei.Chen@arm.com>
In-Reply-To: <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SG2PR02CA0119.apcprd02.prod.outlook.com
 (2603:1096:4:92::35) To PAXPR08MB7420.eurprd08.prod.outlook.com
 (2603:10a6:102:2b9::9)
MIME-Version: 1.0
X-MS-Office365-Filtering-Correlation-Id: 84e491fb-6a0f-48fd-c63f-08da444dd783
X-MS-TrafficTypeDiagnostic:
	DB7PR08MB3355:EE_|DBAEUR03FT059:EE_|DB9PR08MB7699:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR08MB7699F61017843D8A432BBFED9EDE9@DB9PR08MB7699.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 PiePK9fHtOP4E0OjAFL1sNGC8S9kS1bbGWU7FU1jZ4sJQFtb2MVFEc2uc82SxsyNEn23LMrcGz0raQLcMqAyZU5BKfBL9nwGeInHiessIH6vdIxv+dc151gdLi1e10AVqf8a1Nks//bQMlhiqkbzvMCNqBKi/2HIOQDT67Ymx1DQn42JJs14zMXbWcGMPUNLQkPaLxWYzklPi+20GbQE6muM/ouQmFY0OPz6auAJ2vogtm+QPx+b8BDYDpem5a2O49Fum0JVuFci/vF55LHj6djVXfUQYmkmpq3UocsGFVsGlYAPaVb3ViyOrr5ZvhlwmeC/Tp2OKTOnDzErPdHhOH++EySbPjRV3JaXjkVsR4AqGMPDe2Le+BSlK3uOY4byPBos6vr1Y/K2z6TUkoTcnkswyhIbNRU1caR33SpCVJ1BW4jEKLTd2nD3Gi6+ziqD3zPpKv1L2F28U+6Ei+dK3AdJLujSsRrcyOTJdZBJGcldwFOzEhrRkz4oDKkfbc49YS+Rx1rpZ6SzEpM7SvDJ9rdfHB1KkkFdzNrdy+D32OBi1rInWNb6bgAd3Od3haLKcMVebADdumV3B6IbUC7IjLSm7YWULL33PaPl9Y0cTEH6vu1juLPh3YZFLtW2oDX19E8BhBuGcVpGK0PH9gbgmXnoS7QnMWz4oSTVEcE8DufmiAFtnBr5Fz2RAM0LnOd7eCiwZGZCp4pvWRI9yoUElm0MzrHdVv466dLrc2HMFD0=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(4326008)(5660300002)(83380400001)(6506007)(8936002)(38100700002)(8676002)(2906002)(31686004)(66946007)(6666004)(66476007)(66556008)(53546011)(6512007)(6486002)(36756003)(508600001)(6916009)(316002)(26005)(86362001)(54906003)(186003)(2616005)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3355
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	57bf3d8c-baff-4049-e81c-08da444dd190
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2Jbmba+C1/uCiE/sk5Mv8WVdu953lZJjDYKjusJwxDIjGkkvEiTsAeoQCCxx+rac4Z/IXzddjIHWWxv5qXn/aoGkBzan3cuQPSqprjs4CpHTdTSi10WDhhWiIh3+8buknQh6gDfau6VyOoTJ4rwdJemR9aNSb4HOeIns6cP+anMflq1wqfPmHkaxXIo2jlr08gBAvdMfDlVa8QMLM2CUZvnoFo/IfX+bOuwr0jks5Aon0EW1pkq9NkKVXXIHzbZtIIwpPs3TdwTYsHQMFFD88PsR0fJR+AP8wS2s9tLYAMCoNlEj+uslypSXAdJrht6XhuN3eOqNotDzl27LcjvRQZMTNiG0Dxd9YiOaXTUUC3kxsDLxuFkyNbRqwnYT4JwYgsgMn4SmH9mNYO0VdsFc/6qJbieXyF3ap0RFbL0xcDJfY1RIPiD3UXUAXYEoYQAj1gsMWsAj9U46+1FTzeG261mWx9YZzsqlT3HHFP0al+uxebidVNp21rK2Qzow9BuKK4heicOuUoVwxJU3DNgbde+lN5W7tD5TOGZ0ecv3WPDwN0gvSi3N3ARcMpu810VmH7+/sqFlTP/XnURwJjXtmp+GC7TZU8yiP55uUp7phNtAnDYswz6vO0oOqGWV+k2XgyiOkMiRuvjUp4PlNWA0nRzGTtF2u/+7YumPqRP83TRaqSCJGR1buEcSsM4fZwES53WkmlDmOD+xNicit4IYnA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(40470700004)(36840700001)(6486002)(336012)(47076005)(4326008)(70206006)(70586007)(8676002)(82310400005)(83380400001)(508600001)(26005)(53546011)(54906003)(316002)(6512007)(86362001)(31696002)(186003)(6506007)(6666004)(2616005)(2906002)(31686004)(356005)(81166007)(40460700003)(6862004)(5660300002)(8936002)(36860700001)(36756003)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 04:10:34.5254
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 84e491fb-6a0f-48fd-c63f-08da444dd783
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7699

Hi Jan,

On 2022/5/31 21:21, Jan Beulich wrote:
> On 23.05.2022 08:25, Wei Chen wrote:
>> @@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
>>   	return 0;

> 
> To limit indentation depth, one of the two sides of the conditional can
> be moved out, by omitting the unnecessary "else". To reduce the diff
> it may be worthwhile to invert the if() condition, allowing the (then
> implicit) "else" case to remain (almost) unchanged from the original.
> 
>> -	} else {
>> +	}
>> +
>> +	case INTERLEAVE:
>> +	{
>>   		printk(KERN_ERR
>> -		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
>> -		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
>> +		       "SRAT： PXM %u: (%"PRIpaddr"-%"PRIpaddr") interleaves with PXM %u memblk (%"PRIpaddr"-%"PRIpaddr")\n",
>> +		       node, nd_start, nd_end, node_to_pxm(memblk_nodeid[i]),
> 
> Hmm, you have PXM in the log message text, but you still pass "node" as
> first argument.
> 
> Since you're touching all these messages, could I ask you to convert
> all ranges to proper mathematical interval representation? I.e.
> [start,end) here aiui as the end addresses look to be non-inclusive.
> 

Sorry, I'd like to confirm this comment with you once more. The 
messages now look like:
(XEN) NUMA： PXM 0: (0000000080000000-00000008d8000000) interleaves...

So which form addresses your comment:
[0000000080000000-00000008d8000000) or
(0000000080000000-00000008d7ffffff)?
Literally, I think it's case #1?

Thanks,
Wei Chen

> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 06:02:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 06:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340844.565967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwduc-0005gg-7R; Thu, 02 Jun 2022 06:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340844.565967; Thu, 02 Jun 2022 06:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwduc-0005gZ-4I; Thu, 02 Jun 2022 06:02:26 +0000
Received: by outflank-mailman (input) for mailman id 340844;
 Thu, 02 Jun 2022 06:02:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uk5x=WJ=bombadil.srs.infradead.org=BATV+b56869c198be7c3a2e83+6857+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nwdua-0005gT-2Y
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 06:02:24 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f6515d4-e239-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 08:02:21 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nwduL-001bSd-8A; Thu, 02 Jun 2022 06:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f6515d4-e239-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Transfer-Encoding
	:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=h2ZY8VEFgpSaofTKAI8U3Wwg8+8pVIAh16eQkIT60+w=; b=eqmhX3QclN9MlaR/0mwyStXb9K
	NOfcklljy6MQ77D15aoesLM1/t5jG5lKjBAg+zjdCmOipjYeIwcMAt7c391tje9RzUCmz40ZUr8SD
	PkwWxPOcLvYoWgEN/5fzukgPf85mr7wPyo0x2Hd2fs0dFBprgWahMRCVfWbbBuFE1y+hVQNpNUhUV
	AfozbbnIKEaIMOubfcgJZgIwBi4JNMy7O36+tB2XoNgBRJ5q4QiGgIgYzXhWWy4sKEteuV5GeV0Mr
	fpc6Fuh5n1cq/iKxGb0E6G9KmKJLmmzyM3T6LurMLWWRhvyLFGVouXm3HQ0VY1hmfflcPHWfDCwz3
	vzc0SN2w==;
Date: Wed, 1 Jun 2022 23:02:09 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
Message-ID: <YphSYfdzy8kekhTZ@infradead.org>
References: <20220601195341.28581-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 01, 2022 at 03:53:41PM -0400, Jason Andryuk wrote:
> When a VBD is not fully created and then closed, the kernel can have a
> NULL pointer dereference:
> 
> The reproducer is trivial:
> 
> [user@dom0 ~]$ sudo xl block-attach work backend=sys-usb vdev=xvdi target=/dev/sdz
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> 51712 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51712
> 51728 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51728
> 51744 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51744
> 51760 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51760
> 51840 3   241    3     -1     -1       /local/domain/3/backend/vbd/241/51840
>                  ^ note state, the /dev/sdz doesn't exist in the backend
> 
> [user@dom0 ~]$ sudo xl block-detach work xvdi
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> work is an invalid domain identifier
> 
> And its console has:
> 
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> PGD 80000000edebb067 P4D 80000000edebb067 PUD edec2067 PMD 0
> Oops: 0000 [#1] PREEMPT SMP PTI
> CPU: 1 PID: 52 Comm: xenwatch Not tainted 5.16.18-2.43.fc32.qubes.x86_64 #1
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Call Trace:
>  <TASK>
>  blkback_changed+0x95/0x137 [xen_blkfront]
>  ? read_reply+0x160/0x160
>  xenwatch_thread+0xc0/0x1a0
>  ? do_wait_intr_irq+0xa0/0xa0
>  kthread+0x16b/0x190
>  ? set_kthread_struct+0x40/0x40
>  ret_from_fork+0x22/0x30
>  </TASK>
> Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore ipt_REJECT nf_reject_ipv4 xt_state xt_conntrack nft_counter nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables nfnetlink intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel xen_netfront pcspkr xen_scsiback target_core_mod xen_netback xen_privcmd xen_gntdev xen_gntalloc xen_blkback xen_evtchn ipmi_devintf ipmi_msghandler fuse bpf_preload ip_tables overlay xen_blkfront
> CR2: 0000000000000050
> ---[ end trace 7bc9597fd06ae89d ]---
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Kernel panic - not syncing: Fatal exception
> Kernel Offset: disabled
> 
> info->rq and info->gd are only set in blkfront_connect(), which is
> called for state 4 (XenbusStateConnected).  Guard against using NULL
> variables in blkfront_closing() to avoid the issue.
> 
> The rest of blkfront_closing looks okay.  If info->nr_rings is 0, then
> for_each_rinfo won't do anything.
> 
> blkfront_remove also needs to check for non-NULL pointers before
> cleaning up the gendisk and request queue.
> 
> Fixes: 05d69d950d9d ("xen-blkfront: sanitize the removal state machine")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

This looks ok, but do we have anything that prevents races between
blkfront_connect, blkfront_closing and blkfront_remove?


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 06:42:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 06:42:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340854.565978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nweXN-0002I6-82; Thu, 02 Jun 2022 06:42:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340854.565978; Thu, 02 Jun 2022 06:42:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nweXN-0002Hz-4Z; Thu, 02 Jun 2022 06:42:29 +0000
Received: by outflank-mailman (input) for mailman id 340854;
 Thu, 02 Jun 2022 06:42:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweXL-0002Hg-KI; Thu, 02 Jun 2022 06:42:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweXL-0005Bx-Hp; Thu, 02 Jun 2022 06:42:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweXL-0006TU-3i; Thu, 02 Jun 2022 06:42:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nweXL-0006nY-3I; Thu, 02 Jun 2022 06:42:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hoXIIWr4rnZklsjMDLdgBEEkbccayGG3++aXcHcLBlg=; b=XApLqlFgCCjFRnRcNJjlj8xrlY
	VVxlIm+FcGjHbliDww3/RUKglg4+rLNtBXB2HMCK6rX1xgUcxrSig7PB28Gqyi1g26z1NxfzcyPz7
	oE2Nz6dH4AAxZnlJhJhiEcvZ64e74oJbTQtAnBxzS/sDVeLNn3B15qK61gzkExPm0j2Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170800-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170800: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=8171acb8bc9b33f3ed827f0615b24f7a06495cd0
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 06:42:27 +0000

flight 170800 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170800/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 13 guest-start          fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install      fail REGR. vs. 170714
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714

version targeted for testing:
 linux                8171acb8bc9b33f3ed827f0615b24f7a06495cd0
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z    9 days
Failing since        170716  2022-05-24 11:12:06 Z    8 days   25 attempts
Testing same since   170800  2022-06-01 20:12:59 Z    0 days    1 attempts

------------------------------------------------------------
2007 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 224886 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 07:09:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 07:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340864.565989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwewx-0005GP-Cj; Thu, 02 Jun 2022 07:08:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340864.565989; Thu, 02 Jun 2022 07:08:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwewx-0005GI-9d; Thu, 02 Jun 2022 07:08:55 +0000
Received: by outflank-mailman (input) for mailman id 340864;
 Thu, 02 Jun 2022 07:08:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweww-0005G8-On; Thu, 02 Jun 2022 07:08:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweww-0005e2-Kd; Thu, 02 Jun 2022 07:08:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nweww-00079T-7f; Thu, 02 Jun 2022 07:08:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nweww-0003iL-7C; Thu, 02 Jun 2022 07:08:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+Z6UAXeWe0tNgBx5woctrCelWKkIwaihJmtgYfkKnlU=; b=060oRJSj2/WBd5bUxcWgieuRgi
	hRD0Qw2C1+bxpfyGEO3cUJmp0BjLXj6u5X6CW9uWGivFI0pdbeJaXLYP3SzDI9IuQNofYe02agWB6
	zVh0hSbzn+TRT8Rp+J+DWApSTlE4/zU0rcwf1I60Zmc425nl8ABe7IoMMentggQwlUCU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170801-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170801: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 07:08:54 +0000

flight 170801 xen-unstable real [real]
flight 170804 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170801/
http://logs.test-lab.xenproject.org/osstest/logs/170804/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail pass in 170804-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170797
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170797
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170797
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170797
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170797
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170797
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170797
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170797
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170797
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170797
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170797
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170797
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  09a6a71097e3e7d28eaa0f55e8f2c4b879c299f5

Last test of basis   170797  2022-06-01 08:51:44 Z    0 days
Testing same since   170801  2022-06-01 21:08:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   09a6a71097..58ce5b6c33  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 07:47:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 07:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340878.566011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwfYS-00025C-LX; Thu, 02 Jun 2022 07:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340878.566011; Thu, 02 Jun 2022 07:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwfYS-000255-IX; Thu, 02 Jun 2022 07:47:40 +0000
Received: by outflank-mailman (input) for mailman id 340878;
 Thu, 02 Jun 2022 07:47:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwfYR-00024v-Ip; Thu, 02 Jun 2022 07:47:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwfYR-0006HE-FL; Thu, 02 Jun 2022 07:47:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwfYQ-00086r-TX; Thu, 02 Jun 2022 07:47:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwfYQ-0004WT-T7; Thu, 02 Jun 2022 07:47:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NdIhPvq+n/BnYSw0+tj+avZJ+xGXHO1zEym/STe0tGA=; b=ydAJSNojizPoFg09C0DFbgBxK2
	5xcBFfffs7j+h3JTtPLmKEUIbt1SUX26NOIe3TTvNCZQCxICV0k+5waJ/5W427nBi9ROcvriLEN3I
	U72UbPuUz2f/XsKjQ3deCHCfwBomxgbdwpfepmLQytip2BiatwvOeBAGlkqTwQhN/K7s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170802-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170802: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e2c2d575991cbccc39da81f1b54e78523a24ed11
X-Osstest-Versions-That:
    qemuu=7077fcb9b68f058809c9dd9fd1dacae1881e886c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 07:47:38 +0000

flight 170802 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170802/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170783
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170783
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170783
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170783
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170783
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170783
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170783
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170783
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170783
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                e2c2d575991cbccc39da81f1b54e78523a24ed11
baseline version:
 qemuu                7077fcb9b68f058809c9dd9fd1dacae1881e886c

Last test of basis   170783  2022-05-31 04:03:50 Z    2 days
Testing same since   170802  2022-06-01 21:38:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7077fcb9b6..e2c2d57599  e2c2d575991cbccc39da81f1b54e78523a24ed11 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:24:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:24:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340893.566023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwg7Q-0007YX-Uv; Thu, 02 Jun 2022 08:23:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340893.566023; Thu, 02 Jun 2022 08:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwg7Q-0007YQ-S6; Thu, 02 Jun 2022 08:23:48 +0000
Received: by outflank-mailman (input) for mailman id 340893;
 Thu, 02 Jun 2022 08:23:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwg7O-0007YG-VI; Thu, 02 Jun 2022 08:23:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwg7O-0007QE-Sg; Thu, 02 Jun 2022 08:23:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwg7O-0000tU-CS; Thu, 02 Jun 2022 08:23:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwg7O-0008Kw-By; Thu, 02 Jun 2022 08:23:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lpmlAhkEnakMZzk1T1tA0xCRoq7NFQqa8FLecYP1NbM=; b=X9r9G/rjjlDLorcg3wL7nwuHaG
	J+50XBjQ3PNa08bAJixyMy3gwZJzDr+NtZUphv95KrpRCHHdMZM5flYkW31E/OFlED5eZDrhFwp9t
	0Upr9cpty+FYSHkWSVM62dRUBEHt5bhzdlSq4jjHMU9X+UGVGOXiptxzCkgx7M8cqe48=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170803-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170803: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=215b2466cd45bd00d4d71db40364384c0bf9313d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 08:23:46 +0000

flight 170803 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170803/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              215b2466cd45bd00d4d71db40364384c0bf9313d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  692 days
Failing since        151818  2020-07-11 04:18:52 Z  691 days  673 attempts
Testing same since   170803  2022-06-02 04:20:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111297 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:32:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:32:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340904.566034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgFh-0000r6-T8; Thu, 02 Jun 2022 08:32:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340904.566034; Thu, 02 Jun 2022 08:32:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgFh-0000qz-Pw; Thu, 02 Jun 2022 08:32:21 +0000
Received: by outflank-mailman (input) for mailman id 340904;
 Thu, 02 Jun 2022 08:32:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgFh-0000qt-2j
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 08:32:21 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 83ea4989-e24e-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 10:32:19 +0200 (CEST)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2104.outbound.protection.outlook.com [104.47.18.104]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-15-riDSTDHOOQGVVcQ6PWFMGA-1; Thu, 02 Jun 2022 10:32:17 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB5044.eurprd04.prod.outlook.com (2603:10a6:208:c0::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 08:32:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 08:32:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83ea4989-e24e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654158738;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kbVkT9jt21FrSVa845FeKmjOc7k+AO9S1HY7jKZ1+FY=;
	b=aMmusv6APq41rSTXBnnkx0tLgrQEPjUFJ2ms6F2+Idpj8Ff+m7ofJYjQenVv0k1OP/wtOL
	dc0Yv5gqRd+iHNqeJflsgMif7ReXEcKslv7+/FFGBEp12xgX8FCsGjMio1FLmqnWVybKpT
	4faT7I2VRcphodKtPH81R+XV6MuqOuw=
X-MC-Unique: riDSTDHOOQGVVcQ6PWFMGA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O0WtDOQbdmh97KpPpdZ2rh+PwY4JNkw31IjDe+iEUaeKLrL+BLwqEAPUYR8awskSlJWzEe38z1cMrYG6oLXXKtojyAILwlVWxnlRLcAYkCU0E3Ue7TdwNbHQDT2Xpv8cIwEaIkpAAaCsjmfbBF5/qLXS7C7z2sdmmCScn4Mgfe1Sb2F0HuWEuJIrWD3Z+LV+vEnq/R77dzk41lZoYOrndfKFLv2qJo004wnu/tTg9YwjoVV4cffeHn8pH8x0EDTPCXmxT1/SE4QO1FGAmQNmeMymEqc7bzhiORd6aoaMi2+iLoCngt4mUGjj7GOabc+Kk0azzXXW9jSKMvMVEI/Jtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sYw7DR/++nnzD2xe4Y41/Rg7EXl+Gj32T/ZsfmyxtW8=;
 b=Dpoc9SIYIVY/P6rVgGr2+OyWOGBtOOFNOTXgUdW2WyvCVz+BAS9QN1MupP467QdXisAlXmHrU0ut8ByDf0FJ72MYToIUnxMjHi2jIpjC0pGxV6FAQQe0LJLg7j+Eb6OWj59yhWcrBdrLOPQkmK4h0T46bGXPYu8oLyX4KFJsorFUMz/ErTBEylWo3hUdyj9xtBwkUTyFnbWeEqdeDTzQYGrBY4ahPGI0MnWjSJZfiw70CHaxKH5/r2CuXgFdVvsDrkCnzRAtFRWkCyjFy/ouC6IA6u6sfeRXJYjy0dEQh7JSsGezq8PZd8v4uZWtU96EsVeSK3exnzoDw+ywLmbYRQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f807d861-5ac6-318b-8faf-17a3b7d85b71@suse.com>
Date: Thu, 2 Jun 2022 10:32:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v4 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
CC: nd@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>, xen-devel@lists.xenproject.org
References: <20220523062525.2504290-1-wei.chen@arm.com>
 <20220523062525.2504290-8-wei.chen@arm.com>
 <6003b7a5-63c5-9bd3-03db-a4bac5ba8e00@suse.com>
 <f8674bd2-523f-bf89-2998-6f1176ad97be@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f8674bd2-523f-bf89-2998-6f1176ad97be@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR06CA0294.eurprd06.prod.outlook.com
 (2603:10a6:20b:45a::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b247b3f2-fc15-4746-7df6-08da4472661a
X-MS-TrafficTypeDiagnostic: AM0PR04MB5044:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB504436C0A8A6766A56006CD5B3DE9@AM0PR04MB5044.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b247b3f2-fc15-4746-7df6-08da4472661a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 08:32:16.2256
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bWJGIHad5ETNzGPFs8H71j7d+wKeEIppWxVoRr1qsILprH7ztyThKogw0y1u30/5LIENZIHEovUnw1PYTvOiaQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5044

On 02.06.2022 06:10, Wei Chen wrote:
> Hi Jan,
>
> On 2022/5/31 21:21, Jan Beulich wrote:
>> On 23.05.2022 08:25, Wei Chen wrote:
>>> @@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
>>>   	return 0;
>
>>
>> To limit indentation depth, one of the two sides of the conditional can
>> be moved out by omitting the unnecessary "else". To reduce the diff
>> it may be worthwhile to invert the if() condition, allowing the (then
>> implicit) "else" case to remain (almost) unchanged from the original.
>>
>>> -	} else {
>>> +	}
>>> +
>>> +	case INTERLEAVE:
>>> +	{
>>>   		printk(KERN_ERR
>>> -		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
>>> -		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
>>> +		       "SRAT： PXM %u: (%"PRIpaddr"-%"PRIpaddr") interleaves with PXM %u memblk (%"PRIpaddr"-%"PRIpaddr")\n",
>>> +		       node, nd_start, nd_end, node_to_pxm(memblk_nodeid[i]),
>>
>> Hmm, you have PXM in the log message text, but you still pass "node" as
>> first argument.
>>
>> Since you're touching all these messages, could I ask you to convert
>> all ranges to proper mathematical interval representation? I.e.
>> [start,end) here aiui as the end addresses look to be non-inclusive.
>>
> 
> Sorry, I want to confirm with you about this comment again. Now the
> messages look like:
> (XEN) NUMA： PXM 0: (0000000080000000-00000008d8000000) interleaves...
> 
> So I want to know, is it [0000000080000000-00000008d8000000) or
> (0000000080000000-00000008d7ffffff) that addresses your comment?
> Literally, I think it's case #1?

The former or [0000000080000000-00000008d7ffffff]. Parentheses stand for
exclusive boundaries, while square brackets stand for inclusive ones.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:36:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:36:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340912.566045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgJD-0001UH-E3; Thu, 02 Jun 2022 08:35:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340912.566045; Thu, 02 Jun 2022 08:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgJD-0001UA-9T; Thu, 02 Jun 2022 08:35:59 +0000
Received: by outflank-mailman (input) for mailman id 340912;
 Thu, 02 Jun 2022 08:35:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgJB-0001Ty-Qz
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 08:35:57 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 059d4ead-e24f-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 10:35:56 +0200 (CEST)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2050.outbound.protection.outlook.com [104.47.12.50]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-43-Vst1ESMrNsCGbm4JhY31mw-1; Thu, 02 Jun 2022 10:35:55 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB3PR0402MB3785.eurprd04.prod.outlook.com (2603:10a6:8:f::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 08:35:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 08:35:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 059d4ead-e24f-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654158956;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c9Hjxzh1343s7FWsxgC4jtHmUl37vjhUolmpdIgd/Kc=;
	b=efRrRPwXARHCYliLrOktixMXt7lH8S08ywO29PTVE/vmgYbHyMn3JXSvcN3n3f8iLYI42F
	j3Xe3O14ls9TEu3cK3vWFN4g3lADAAxnT2ZJdiwF+qLng9QlRHL7Chc/iL+te/OQeAjuWd
	rF/Yd5bwzS0xrFeklOO1HAM9BbrO0d8=
X-MC-Unique: Vst1ESMrNsCGbm4JhY31mw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lm4ENZpDPqAMFSSQFV2D7Vj6Z+0Tvnc5xOSDna2uBGj6hGbyFjQP+1zs8MGXsCQaQYnOzPXISZ0bN1B/0+7wYrMYlvwT4gLil+R7US4lfbwSfognIRzH0jqLy5kN9luGrgRquXNc9dwlBOCUQ4UelGq2i6GuGjn1nZ9PZvZ4WmB0rS0EoT0E9C7p2GDBlpXLnkQ/8CxgD39GwxsRjGykkPGdwDkl1UxwUIySVu15Cb/tJ+iauSaJyJwxHces3POeVyej/HP1xiiyZm3stsZPuvwE2IZRobutV3u44tn87t/nv6iieHvPzTMkH80Un3r06X8YEC3cnB+hJ/sWDhooLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=c9Hjxzh1343s7FWsxgC4jtHmUl37vjhUolmpdIgd/Kc=;
 b=edwVNab+pPiSTYZ1Q7UlDLvbn/XBYb3BAsXwZ5KAIrx76N2HS1a9ZWarmC3yU7/isJ9pS+UTobhOuAKPV7M2s9jM5UlK60hpTjqizb5KE5qsiaKvWKncTxwORu4OxtS6d892Jop5ZOGKtRAkzhs3E5oxC57PFF64xZMpsOk1HDnqgb2BZW2FcOeV/RA1Ulm0k1ZkhII/uPZlXH/VW623ymyAEVwUIaaKfm3d3Vvd1gWwUNq5vWld5jndnp2+3pI/GiB6oc2KSer6Flr7INC17kc7B70SexBjovo4XtCcaEQIkFSNV0cVdbzxO6Zn7PEFIq4KPSeXy6f5Od4khfKFMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
Date: Thu, 2 Jun 2022 10:35:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR04CA0139.eurprd04.prod.outlook.com
 (2603:10a6:20b:48a::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1f2f75f5-8a3d-426a-bdab-08da4472e803
X-MS-TrafficTypeDiagnostic: DB3PR0402MB3785:EE_
X-Microsoft-Antispam-PRVS:
	<DB3PR0402MB3785B15337B6C114C684452FB3DE9@DB3PR0402MB3785.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f2f75f5-8a3d-426a-bdab-08da4472e803
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 08:35:53.7897
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Fpd8iUH4tF8Bzh127w3UI23AtAz9ALqTFt6ZZOabK1lE32/vZFBNAqoVoD7B5/xbckxai4FxF9qixuZTYOQ75Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR0402MB3785

On 01.06.2022 22:27, Stefano Stabellini wrote:
> Reducing CC and adding fusa-sig
> 
> Actually Jun 9 at 8AM California / 4PM UK doesn't work for some of you,
> so it is either:
> 1) Jun 9 at 7AM California / 3PM UK
> 2) Jun 14 at 8AM California / 4PM UK
> 
> My preference is the first option because it is sooner but let me know
> if it doesn't work and we'll try the second option.

I don't think I was aware that another call was needed. Was that said
somewhere earlier and I missed it? In any event, either option
is okay with me.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:43:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340922.566056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgQa-00039J-5v; Thu, 02 Jun 2022 08:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340922.566056; Thu, 02 Jun 2022 08:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgQa-00039C-2u; Thu, 02 Jun 2022 08:43:36 +0000
Received: by outflank-mailman (input) for mailman id 340922;
 Thu, 02 Jun 2022 08:43:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgQY-000396-Fr
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 08:43:34 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 159f8207-e250-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 10:43:33 +0200 (CEST)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2055.outbound.protection.outlook.com [104.47.12.55]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-44-yfgAqjP3NEKo5uFjW6Sv_w-1; Thu, 02 Jun 2022 10:43:31 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB5008.eurprd04.prod.outlook.com (2603:10a6:803:62::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 08:43:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 08:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 159f8207-e250-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654159412;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=qMzAE4sPMKZChY6NYGrtUtUufgeougc+sPwTUOuU+f0=;
	b=GMJaHWQZ/248ROZMLohZLxP2Qoxn1rG4F8JZUJYoTgJSUlMeb0ueN4MBxikcKiLEXnDwO8
	8zhn6nHyd+cfsKRyk7KPC91NUVLK0eOaSrbWkM1VRJDxtabRoNPpKhtnDiNhEqOEyPwX4d
	KCc0blVSYcJbtrAea6Xc2VllKhcAmX4=
X-MC-Unique: yfgAqjP3NEKo5uFjW6Sv_w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XhO1LBTPjXpEFTtcTzJPx5G8j8o4P9gCby/ZRMKtrhqwxtZIN7nOjADN89/lJ6eWDtghRs30h9lQ9X36t1yG6X014ZQwEPETrf2/JtJ/JOplb6XD7mU8Lwj8ZNOAMHIWUhYXWUFLf9+SSg34vCbiId9MNFzx4bOZHpnX+1eHnfAVOHvfJaJekT2PGw31bJqi86amMMr86AIoqEVpR4hqMnJ/eQf5FLAeh7+E4ph6tSi6cIyRZDZ5Mg5/L63evBMU2oRHGSr2fWGqLJZvXQkk8/BEokYzwEnUCu+qU64Yoq7hTPsLxMaFUk8mBPyGVZUjX/bJOathOQYwoja0FVVRHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qMzAE4sPMKZChY6NYGrtUtUufgeougc+sPwTUOuU+f0=;
 b=Py9+jc/kxlCPHXKBL7AzkTmHWBakkCyfuSPNyT2o82B2n5dIJlr3e/dMCwhua4cf+nto30riqQ0B1vYiczDpMHO2G+5H3tRgNXcFgVgR8qThxreOCXqEMzni80zylB1b722uG3vuSAX9gRFod+MUWyFCbimYdnLxFxWyEB3f3MD6lL6dkHsRzN5ZmYrqipYnHhlK5ui2fPqjld1Fqa/i3KjYXBKTAFhZP7eUjesLQ35fW5Ts5D3Sw09QxjpHWQT31Wz5cdtaCPPHCofkS71mHDXSJ5e9E9dNPvVUL8BVlosnO8cONbjnixchAXBbJmMmVeitzd+M3eNrPjcWVGfhTQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <873890bc-e5fd-a9ed-77ff-7bd06d390ae9@suse.com>
Date: Thu, 2 Jun 2022 10:43:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3] SUPPORT.md: extend security support for x86 hosts to 12
 TiB of memory
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR05CA0021.eurprd05.prod.outlook.com
 (2603:10a6:20b:2e::34) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5b9c4ddd-2347-49ca-fc02-08da4473f7a9
X-MS-TrafficTypeDiagnostic: VI1PR04MB5008:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5008E9B4F7CA86B0EEDB904AB3DE9@VI1PR04MB5008.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lSe0+KDjC1kILJZlXaCfpOR9/Tz7sozMXt6Kuu28OPx2G1kEsO1VXZdmYbmAcq9fHrwJ9opUWi5mIS1J0wqThxSNN24UH9P/FVLmUUTUBi/ekUH6ZWXdTV6assQadw1YqS6RNrop5lUc5l1OjIAPxmdY3/NfNFwubMQ6cUz55po5cqNZaz1mcYQEB2a2nT5S9JTd/ayIFpXqPA4/cOquArdQTsvARQuvlayytOK80UG1dwOQciydFsvapJux4TqFMJm4LTwjmZR+8WRmyQHIbC1087YJjUym6OARQNwUFvjmdZjO5GfRNV9Nm150UA8rh1YCGbLq3D3aOs3BHE0VJGk+Si4mB5bwzMzBx0FtmTL7nzit4TDDncQmRdawB6n4/RlKBrvuPLMo8JbmPnjgVoh582WAR1oNVpty9NAEhCUXd5uzMXdWs3ZcYILB9oOcApJDE4SBWtzto8ak9ahqwmkBVjlFO2we9lyOq0cEOaK4/4+h2Llp3jrELowMW6sY8dNipQIUYTf4oKlzaTYNydV6ZPOU/u7wGhTxTTfKfcAbOn4XYA0yyrb9H9pT06+ZQav5n2rqa+bosigEpysUcElMsWCg4Qsb2VkarN3W12zOcxbwsqGJ5Cg2fNoJvB0cKnDXfqV2VrHLAbMFm2013fYLU27EFoHbPaDMJskqFBv+sQS+JO1vx1K9pH1e0v/ddSJh3iMefqcxCzllrWgOOLsSBQDfDe6X2W0iaEITlmNrSWdDBkE71LzKysS5wCRGAiY3xS2d2Tg+JPTtHmfPklLlqJGaTWepbNP3lImiDvJtfpmMxo6ZLuGKeG4GvMnQ
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(6506007)(966005)(31686004)(6512007)(86362001)(2906002)(508600001)(5660300002)(15650500001)(8936002)(6486002)(26005)(36756003)(6916009)(38100700002)(66556008)(186003)(66946007)(66476007)(54906003)(83380400001)(2616005)(8676002)(4326008)(31696002)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MW1iSWVmV2VjZUVpdm1nYTE2TTRDT1U1VXMwV0ZTeXlIaEh2N3R2RXp6b1d4?=
 =?utf-8?B?Q1dRTWpvZklSb2Y2S0pRL0RKc0trTzFpNVdyY1JqV0g0eC9XTDA5L1I3K0NT?=
 =?utf-8?B?UHRNcWNMK20xL2NpWEVsTFd5WUliblkwbG9nV1l3eVZGT0RxUWQ4Z3pDMHBh?=
 =?utf-8?B?RFZ2NXdBRUZDWXJkV1dubG1tblo2Ykp1Vmxqc05CTURieGF3ekswME02aWxP?=
 =?utf-8?B?QWs5b3RtZGpxMEx6Q1dhZ2dhK2RROEhKc1drMitQT3BQMDFkK21IOTNoQys5?=
 =?utf-8?B?NEM0NXgybDl4UWtkRTNTakpnWGtwbmVxcDFneXVjemlVeGI4T0NIS1hsVVpw?=
 =?utf-8?B?dEh2d3RVSEIraUlVY096OUJYUnZEaldYcWI5Uk5PZ2t6MncrVFhiWFUwQXp5?=
 =?utf-8?B?U2YvT0praXZMMnJraFZnNWVEQTR3U3BYZC8rckVCUWp0N01QMmh2bi9Sa3ZP?=
 =?utf-8?B?K0hWNm1qdnVpZFdydzJ5Sm9VdEtZeXlFZzllVjVEb2oyVXBWTnRhWEprUmNT?=
 =?utf-8?B?eTg4RGlWYUxJR1dQRHpkUUFRc2QyRDNTaEU3amFTU3BvMDdLaklXUVRXbzVL?=
 =?utf-8?B?L3g0OUNzRjNxSVpvcDRqNnlKNzcwTEc0c2wvYjRnNEJEVDVHZ2szcGhHekNX?=
 =?utf-8?B?dC93WTBUbFMzQ2NONSt0RjRSRGdRTjJDcnNiOWtmeG8vUE1WeTBkT0U4cDU1?=
 =?utf-8?B?UEdZMS93U3V3NnJBdGRmWTFEaGtjK2JkZ0UxZDNaNkFWU1RkdzY4RmN4L2VR?=
 =?utf-8?B?Y1FMakQ1VjRlMVRLUTlBL0hmM0VYM05mTW5mbFI0NnZQSWVxZ1FBN3FHajRv?=
 =?utf-8?B?NTc3dGR4U3Z3S1IzVDhKWDVnVEJSNzJlSnhVaWZOc2lVUHYwVTNkYk1vME1U?=
 =?utf-8?B?enBNdEJyN3FwY2xNa2FUR2owUXUrNFFzckdzUWJGQXFPTXA5Y1R0TnVtOVZm?=
 =?utf-8?B?UWZ4WnFoV3J0SWVicDBSdjIrUmNrZnppVENMdEt1S2x5Y25DSDZoS1Vhb1pR?=
 =?utf-8?B?VzVYcEVFTEJhSEpuVlZqVDlCTE41bFFsY1ZlSU4zaURESmFCS3RwbmxXYTh5?=
 =?utf-8?B?RGRPQmV5UTlqREYrNjY2eWZYbHU0NW03MWNEaHNNYmlrL0xUZGxsWEQxZm5V?=
 =?utf-8?B?NUl3NVk5bG1BVnJ4N3JzVEx4SXdYZ25YUjF2UEF3R0tsRys0dGVubHRhM293?=
 =?utf-8?B?dG0ycnJTVTlWVzhodFEyRCtscjdCNlRSY0hZZHArZGUxa3lYdisybFl5VkZN?=
 =?utf-8?B?RFd2MHRmSHFCd3RNS0NPbGkxN09GVW9KQWxQVzZKdDQvNmlpR1RFbERiMXl5?=
 =?utf-8?B?T0NOalc5Rm5HTUZCQ1RaZVo4N2RnZmZqVEMrNGNDRzFkR1FONWFNYnQrQS9V?=
 =?utf-8?B?NzRoR09ZQy9MSWtTYmZKRi8vb2FsNVdPbjlRNWptejhDNDR4RUVQVkh1T1Na?=
 =?utf-8?B?R1V0YXgrVDFueHAvZmZqZldYQUN0b0NXVHAwNnBUSUhSV2RDaDFSd1UzN2Fh?=
 =?utf-8?B?UUlBZlF3NWlRbVp2NEFJM1JhQk5BSHpJMzZhdlhldkN1TmhlVG5OT3kwSmZa?=
 =?utf-8?B?TmZIRU9aUklpQys4ek8xMEorMU9lcXJjNXRoNThFd3o4dVkzSUVXTEhCTllu?=
 =?utf-8?B?cUI4d0lYZ2FTenFFdmxGZWNjMDU0U081MDVGK2RPQ3F0d3poUTZsemFuNndh?=
 =?utf-8?B?bEViTUdBUW9QcmFmUFd2T0prYXFaV0pqZHZYVU0xZC8zMmxQL0pYaGI2eDYr?=
 =?utf-8?B?cXphMExPTjMyWTJ0ZzRUWFBsMzBZcTJtbW1OdUJkN2E3ckNCYVFUUEw2eVAx?=
 =?utf-8?B?cjQxbkZ6V21oUGtKcHZ0UTllOEVScFYwUDFFamd3UHRabXpISUxPOXFuZGkz?=
 =?utf-8?B?Rk1KYlJOclJYZlVjWDFVVDVTVnhZVGx5bHhmaE41alFPWisrRURXWnpwZmNh?=
 =?utf-8?B?UWZKdVM3YWpvRE9BUi8vanp5bVllZis3cFRmMXdhYkVzMW02MkJ1dmZHazJ3?=
 =?utf-8?B?d0ZIcWcvakJpRGtxcVU1QVdGNTZQZEhBNGQ5d2RyR3k1Ui9ucUZWdlJ6TFY5?=
 =?utf-8?B?YjAzejRQVnJWODF5SVQ0QXhWWFcvNklnNzlrSmY4UUltSmJ3bFU3V0F5amMx?=
 =?utf-8?B?bGttd05TdkxnZ0FvdXhiL0tHenFhU05jcFBxR0ovMDdkMnpEOStyRXdkV3I5?=
 =?utf-8?B?a2RtM0x3MnBBeERKd01TNWJtbjRoekR5L0ExbGdXZFdibXpaWmRBUHZDRWRy?=
 =?utf-8?B?UmdxeENjaGlPbnF5RWNDVVVYU1F2dUhqbEFXbEhrcjBsR0JBVG5JcFVWMGY0?=
 =?utf-8?B?YS96UCtsY2gzc3IvZ2J3MlY0NEZKSCs4c1h0VUtLOFU3eGpQdFp5QT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b9c4ddd-2347-49ca-fc02-08da4473f7a9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 08:43:29.5261
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IVyahrL47lyXXuVluMJV4sUZ+nQY8P4E7rbo3jqaQYfzxjks0HLtrYkNsj2GKNSdaBNhtVWwYS0iybuaAfle/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5008

c49ee0329ff3 ("SUPPORT.md: limit security support for hosts with very
much memory"), as a result of XSA-385, restricted security support to
8 TiB of host memory; the limit was subsequently restricted further for
Arm. Extend the x86 limit to 12 TiB, putting in place a guest
restriction of 8 TiB (or yet less for Arm) in exchange.

A 12 TiB x86 host was certified successfully for use with Xen 4.14 as
per https://www.suse.com/nbswebapp/yesBulletin.jsp?bulletinNumber=150753.
This in particular included running as many guests (2 TiB each) as
possible in parallel, to prove that all of the memory can actually be
used this way. It may be relevant to note that the Optane memory there
was used in memory-only mode, with DRAM acting as cache.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
---
v3: Correct Arm32 guest value. Restrict guest "leeway" to x86.
v2: Rebase over new host limits for Arm. Refine new guest values for
    Arm.

--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -50,7 +50,7 @@ For the Cortex A57 r0p0 - r1p1, see Erra
 
 ### Physical Memory
 
-    Status, x86: Supported up to 8 TiB. Hosts with more memory are supported, but not security supported.
+    Status, x86: Supported up to 12 TiB. Hosts with more memory are supported, but not security supported.
     Status, Arm32: Supported up to 12 GiB
     Status, Arm64: Supported up to 2 TiB
 
@@ -121,6 +121,14 @@ ARM only has one guest type at the momen
 
     Status: Supported
 
+## Guest Limits
+
+### Memory
+
+    Status, x86: Supported up to 8 TiB. Guests with more memory, but less than 16 TiB, are supported, but not security supported.
+    Status, Arm32: Supported up to 12 GiB
+    Status, Arm64: Supported up to 1 TiB
+
 ## Hypervisor file system
 
 ### Build info



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:49:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340931.566067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgW6-0003r7-W6; Thu, 02 Jun 2022 08:49:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340931.566067; Thu, 02 Jun 2022 08:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgW6-0003r0-Qv; Thu, 02 Jun 2022 08:49:18 +0000
Received: by outflank-mailman (input) for mailman id 340931;
 Thu, 02 Jun 2022 08:49:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgW5-0003qu-Do
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 08:49:17 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e22a6414-e250-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 10:49:16 +0200 (CEST)
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2053.outbound.protection.outlook.com [104.47.4.53]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-8-exQFmV4IMaSM38xEmmm3Vg-1; Thu, 02 Jun 2022 10:49:14 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB4133.eurprd04.prod.outlook.com (2603:10a6:209:4b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 08:49:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 08:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e22a6414-e250-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654159756;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pIVBj2xQdgbcY405BJW2RkwS3GpAgcNUfmowHVhbnmg=;
	b=FiszSDrvDwTcz/G1rpRVJ0TlyfKprkdMFvQYYwenQvj1YcdENvXmGNwbUoTFBE4q/xeZC4
	47LofYJjcD0N1xGdiNAsRmdKq8e6HtUgLMOlzWElv3saWb5jsO0zixzeRYuMfTs2v0KS5G
	iNOZZ/6+YnG32cw2hyUejuVdS57qS+E=
X-MC-Unique: exQFmV4IMaSM38xEmmm3Vg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Nbcmo+bY1P16pyWA0Lof8COFmZla9sRetd1PaKVCVxOBNmVjJXktyp/hdsJ7qY6UWEELTKkefkseKodTRwygENpLH+ul6nwYFhmJSBHOO3v96m98lEHsh080/1rDdkHzp0FoggPjV2PTmYMkSDlYwtMxUSyDkHn3T+nD3k+iwt1JeMQvIL6SUtD0b1uSolnvYNzoVoEaGqmwDHxzxVqOKnT45O6n+oDDTeHi1bBNVMBRgD+/3n3w0WQwDxvvpob/8l38cxp7hXVHe4N4v9bf89oO1wBGavpJDzNwl955Str8SRNMLOUr87bAjhg1pBv3QUKFKtZ531ysq2lbxXD3Fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Lw5SxRxEO65t+yhs01ufb5vk+O5Mr3rvHtFxBuNUXNw=;
 b=S9SQzF0JOhgZAqrW1TIKdPTJ/SOavvLebTc8W01VuHXvDjBplAfAD2j7k7AHPNw86kF80oCKh/zspS7XAdcbQJHLJcTPuAcfAjsr1ybMbxpWe/KF1g494w//v1cLnnvIAdNCYJd677FSmjUeAjIO9X8ivrbqXTuNZbaZ54OQ6KAiMB1N/U8hz18lyTNLYr1O7FDkoGxjmRJS2MjTZWkVLxmA0V24S/oVPluXDn31Ivm/dJMfbO2GUFr2XDa9PYj8mApGBJCpzfqwwQl14tQurTp5ws83JRjUVX9JXDleK3CDzKmxlcGq5nV2afxr4eXuYTRC7bmPIbnpP1wl31XTmA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <de93f0f5-374e-6fc8-22c3-70023a7d2f9b@suse.com>
Date: Thu, 2 Jun 2022 10:49:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
Content-Language: en-US
To: Julien Grall <julien@xen.org>
CC: scott.davis@starlab.io, christopher.clark@starlab.io,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <ab531f8b-a602-22e0-dabf-c7d073c88236@xen.org>
 <be06db4d-43c4-7d24-db0d-349c0a1e4999@apertussolutions.com>
 <337d6dbf-e8ee-5de7-a75e-97be815f4467@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <337d6dbf-e8ee-5de7-a75e-97be815f4467@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR06CA0398.eurprd06.prod.outlook.com
 (2603:10a6:20b:461::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d70368a8-7108-4187-ca21-08da4474c4bc
X-MS-TrafficTypeDiagnostic: AM6PR04MB4133:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR04MB41336D4A29334F64F606DF05B3DE9@AM6PR04MB4133.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	o6pQc6zNNvbwX6b8dndkRyRom1z/oDNUj975rM2PVPKb/4+e4J8qULj8xuejsptTJfrZyJr34cY/WbV9CgIMI/98ftjaU1S+NlmIoO70ouuZPJl4zlsh+0QCvszeF2cczip87TS2lYXZQTGe4BPUppRMa8eY0LOTMaBeAoUsMesdgoIsM9MhMNMepcfIwJksklm7zLx/4TkeCgshOKElZ5hSnWYj0u24b/74wZS3DBF0cTD2Wiaen/NLGWewwH8IxFcC5A8Liw7dubs+ryWYJ1jX2uEfsnF/6CREIRXXxqQ8biXiG10Wq2tFFR18ePiwjeTr2+a8bLv04Tzg4IgdMmSaghlqRvS7786746DPSTQ4PuLeeUwwAPam5lV7n3wPoc9RANWorH8/pmO7L2Bb23h+ntMT8ZJH7Xen/5Prse89tfVSDJsgL18PrL6c8+aff2QPtUWNy/uzVVTZt5a4coU5ql7ui2HOidQ+hXqTtWy0xzKpbZJBb44tvMSoeofILqLcvDdIi5tieNkGLiIKy/Ahy1YdYdQogOFXarDI3CzEavRhNcWesc14Gp3liWy1o4foGLwzp0ihVpgndaYuO2T/Lfyo8lMoVxx/8//5klbpAYEOgbomE6NAz0SeCsn3ASo+4pDufaNJJ/4x5tGzoeqlRk2i+7Th/NDWRAwTHReyNd14FIkSmxeKrNa7Iuf274ZZfLpQvudqFg45KtFXDaWgaI+obxe8x/Amuxn4HCwKOzLzVTICgNfmmcvqPpac
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(38100700002)(31696002)(54906003)(2616005)(6916009)(8936002)(7416002)(5660300002)(508600001)(6486002)(66476007)(66946007)(316002)(4326008)(66556008)(8676002)(2906002)(36756003)(83380400001)(186003)(31686004)(53546011)(86362001)(6512007)(6506007)(26005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d70368a8-7108-4187-ca21-08da4474c4bc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 08:49:13.5510
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EKyHL01vHXcq3PbdqVO9a6IQ7rUbgPfqtQGxtNe7eNsRazySLFifF5f+Q13seCxnt1d+q4gBCKBpj2aBO/7+eA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB4133

On 01.06.2022 19:35, Julien Grall wrote:
> 
> 
> On 31/05/2022 11:53, Daniel P. Smith wrote:
>> On 5/31/22 05:25, Julien Grall wrote:
>>> Hi,
>>>
>>> On 31/05/2022 03:41, Daniel P. Smith wrote:
>>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>>> index f16eb0df43..57b14e22c9 100644
>>>> --- a/xen/arch/Kconfig
>>>> +++ b/xen/arch/Kconfig
>>>> @@ -17,3 +17,15 @@ config NR_CPUS
>>>>         For CPU cores which support Simultaneous Multi-Threading or
>>>> similar
>>>>         technologies, this the number of logical threads which Xen will
>>>>         support.
>>>> +
>>>> +config NR_BOOTMODS
>>>> +    int "Maximum number of boot modules that a loader can pass"
>>>> +    range 1 64
>>>
>>> OOI, any reason to limit the size?
>>
>> I modelled this entirely after NR_CPUS, which applied a limit
> 
> The limit for NR_CPUS makes sense because there are scalability issues
> after that (although 4095 seems quite high) and/or the HW imposes a limit.

The 4095 is actually a software limit (due to how spinlocks are
implemented).

>> , and it
>> seemed reasonable to me at the time. I chose 64 since it was double
>> what Arm currently had set for MAX_MODULES. As such, I have no hard
>> reason for there to be a limit. It can easily be removed or adjusted to
>> whatever the reviewers feel would be appropriate.
> 
> Ok. In which case I would drop the limit because it also prevents users
> from creating more than 64 dom0less domUs (actually a bit fewer, because
> some modules are used by Xen). I don't think there is a strong reason to
> prevent that, right?

At least as per the kconfig language doc the upper bound is not
optional, so if a range is specified (which I think it should be,
to enforce the lower limit of 1) an upper bound is needed. To
address your concern with dom0less - 32768 maybe?

Jan
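Per the kconfig language, `range` takes both a lower and an upper bound for an `int` symbol. A sketch of what the hunk could become with Jan's suggested upper bound (the default and help text shown here are illustrative, not taken from the patch):

```kconfig
config NR_BOOTMODS
    int "Maximum number of boot modules that a loader can pass"
    range 1 32768
    default 64
    help
      Controls the build-time size of the array Xen uses to track
      boot modules handed over by the boot loader. Dom0less setups
      may need a larger value, since each domU consumes several
      modules (kernel, ramdisk, device tree).
```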



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 08:57:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 08:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340939.566078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwge8-0005To-Qa; Thu, 02 Jun 2022 08:57:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340939.566078; Thu, 02 Jun 2022 08:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwge8-0005Tg-MV; Thu, 02 Jun 2022 08:57:36 +0000
Received: by outflank-mailman (input) for mailman id 340939;
 Thu, 02 Jun 2022 08:57:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nn9h=WJ=citrix.com=prvs=1458da55d=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwge8-0005Ta-7g
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 08:57:36 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09fc2717-e252-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 10:57:34 +0200 (CEST)
Received: from mail-dm6nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 Jun 2022 04:57:31 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by BLAPR03MB5377.namprd03.prod.outlook.com (2603:10b6:208:285::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Thu, 2 Jun
 2022 08:57:30 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.015; Thu, 2 Jun 2022
 08:57:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09fc2717-e252-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654160254;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=QDtgbft/Cxdtj97ZtW1AEyEOYV69zeM224TKCB5HUhY=;
  b=DJhWDZ+7PrHfipSJ2UN3m8M0RX6fqC26s+yEtLK480ryeouXDPicvYvl
   c6WL8ceOSr4WzfvCKjvoTZY3EG005bm+TtWvbnRJVLKUN9raoLOdj28Oj
   aObDDct24Ccijwn2Tr7IbziPJS3LdM+3SySsaMsvIjpv+05j9MkCgsf3Y
   Q=;
X-IronPort-RemoteIP: 104.47.58.109
X-IronPort-MID: 72702191
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:0jNtVKn6Fio22Vi5R82guKjo5gz3J0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xIaWWyOaP7famSgft9+O9u0pE9QuZPRx9VhTQZk/CpkFSMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EsLd9IR2NYy24DnW1rV5
 bsenuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYaQguLKvKnsUhSEMFHC8vHvFi8eaXCC3q2SCT5xWun3rE5dxLVRlzEahGv+F9DCdJ6
 OASLy0LYlabneWqzbmnS+5qwMM+MM3sO4BZsXZlpd3bJa9+HdafHOOXtZkBhGZYasNmRJ4yY
 +IDbjVidlLYagBnMVYLEpMu2uyvgxETdhUH8g3N//NmugA/yiRWjJn2OuPEJOaHVJx5pFiF+
 2vC1SfAV0Ry2Nu3jGDtHmiXru3FkD7/WYkSPKal7fMsi1qWrkQDBRtTWValrP2Rjk+lR8kZO
 0ES4jApr6U56AqsVNaVdwWxvXqsrhMaHd1KHIUHBBqlz6PV50OVAzYCRzsYMNg+7pZuFHoty
 0ODmM7vCXp3qrqJRHmB97CS6zSvJSwSKmxEbigBJecY3+TeTEgIpkqnZr5e/GSd17UZxRmYL
 +i2kRUD
IronPort-HdrOrdr: A9a23:pH7UPqqBhrxVScp0APPyzc4aV5u5L9V00zEX/kB9WHVpm5Oj+v
 xGzc5w6farsl0ssREb9uxo9pPwJE800aQFmbX5Wo3SJzUO2VHYVb2KiLGP/9SOIU3DH4JmpM
 Rdmu1FeafN5DtB/LnHCWuDYrEdKbC8mcjH5Ns2jU0dKz2CA5sQkzuRYTzrdnGeKjM2Z6bQQ/
 Gnl7d6TnebCD0qR/X+IkNAc/nIptXNmp6jSRkaByQ/4A3LqT+z8rb1HzWRwx9bClp0sPwf2F
 mAtza8yrSosvm9xBOZ/2jP765OkN+k7tdYHsSDhuUcNz2poAe1Y4ZKXaGEoVkO0amSwWdvtO
 OJjwYrPsx15X+UVmapoSH10w2l6zoq42+K8y7tvVLT5ejCAB4qActIgoxUNjHD7VA7gd162K
 VXm0qEqpt+F3r77WvAzumNcysvulu/oHIkn+JWpWdYS5EiZLhYqpFa1F9JEa0HADnx5OkcYa
 VT5fnnlbdrmG6hHjDkVjEF+q3uYp1zJGbKfqE6gL3a79AM90oJjXfxx6Qk7wI9HdwGOtx5Dt
 //Q9VVfYF1P7ErhJ1GdZc8qOuMexvwqEH3QRSvyWqOLtB1B1v977jK3Z4S2MaGPLQ18bpaou
 WybLofjx95R37T
X-IronPort-AV: E=Sophos;i="5.91,270,1647316800"; 
   d="scan'208";a="72702191"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jTa+OwJb2x+JWQd2oWr9Q3T9B4XFNA2AYtZFRaTVPQ8HB2zJ1OFe347+Gd8WWLwu+nR5J6IQ0kWaOcPvdGWrYmwppMqfjLvO3hlHx3DLZMFSdiIzNRHQJCucsv63EQLmnqLOEp9V6F+oKqDcJPbk96GY9PNXl91aadj+Q7zjW5lrfdHWzTX0CjC7i7UPBQvsDLImo8muOYfXBgg/yumiz5t2z4nYK4hzHu2UElnUHKqpUR3+EsT6uLlKZWBQhOvbGA6kJd+zPW0tll8I+7DwMuEqmcC8utkf2HfRTyw2E42eEDCal0Vog3ye/Xs3CSsMMLRlgAO2q0qpbuuJjlnYDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JyK9lpuqccAyMNRDPPVpi2RaPYRe2kccwtZ553nC4iQ=;
 b=frm0V4PiNYqah1pfKdtl11qpDFJeQ0K2r50VsJ6VhO0S5LzhB8iC6Chf8Ji6WKZoZ9MgdWKZ9qfCX96bFHyh5/3haZC7RJPjuqpNg5mvvyruHQKRBmxI8Ek8uBeIprtiIs5hDG0LPer7jgm2CZDXQNogIqS8tFKiEpdZf219jCXJbBkdcN13QrDO7YebnM01s5ueGyZG4XxF2UlvJPifeNLmqjUU3NcifIzF/sk8e4meCv+kKO8cwdTOL2KhJdL3fg9V/MQagF+g13PTnsdhwjdHs6l56CrvwmHMy3/8p63fG1UP+2HeZxxYvqDsLaZidfd62MldzDWkQHyoda5Caw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JyK9lpuqccAyMNRDPPVpi2RaPYRe2kccwtZ553nC4iQ=;
 b=AXdVURYja0qtYTNCNJXkv6LWtmY3uLlXXxP/DD/EFaz1eF0kgUipmvOwxbnGAwf4I8uN1wrGe9tAPUplBpghdD2gv5lpf0ZvmfQCZD8Awp8/pxVFfr/9vQOdkn1pS1MoM/5RmOFTHGZP+Unpxf5Vu8MqwYAEpES0dx5Kr9EfIKM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 2 Jun 2022 10:57:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
Message-ID: <Yph7dbKcJY8rZinX@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <614413d8-5043-f0e3-929b-f161fa89bb35@suse.com>
 <YpZBjVxRdJOzJzZx@Air-de-Roger>
 <372325ed-18b6-9329-901d-6596ce6e497d@suse.com>
 <YpcwOCBEzI+qvTga@Air-de-Roger>
 <2014c9a1-1c38-b36b-160e-f79afcdc3a10@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2014c9a1-1c38-b36b-160e-f79afcdc3a10@suse.com>
X-ClientProxiedBy: LO2P265CA0044.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: de0f25b2-e1c7-4c61-ed2b-08da4475ec99
X-MS-TrafficTypeDiagnostic: BLAPR03MB5377:EE_
X-Microsoft-Antispam-PRVS:
	<BLAPR03MB537747C4A98D7C92603327FD8FDE9@BLAPR03MB5377.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SzV4+NaV5VzhBrtZR28MkU1EjklZ2kaihgmu6S45Dy+VuL4nU4drAKKirMS+LcmyNQARra57U5zHFZ5DWadBwQ0i+g5VawJhaHqgd4nhq5i5NU6KH727Rn/a0HVRwQIVkRA2BE1EIFi6wN1qloJ3DF1RKVH1ouhKdme1F09K2YUAqvQ4rBhWDgcIojDG4rwwv2WlLtAdsISei7hYU3FHNpVR1lEDyLr2LNRfwTOw32iEd8vo9wNkazNeTmrGGB6xEpvWMuvGdXT//kIBoMIw+YlEvsIt2ChNvZTZ3q/z0upk6B//Un+UcGM2HhJeNFtWvRslSIwkomiyFNsKmOi9qdFa4yHUBroWcBJoSyhEtKL3j+PyZj+onXQZs2JOrBBAbkClv1MJK0Px6tylfDzzDtW1w8e+O5Pwn1nJStrv+8ZVOtyMJoJHkvGBZcxt/eRCxdupnIpW5JD2RCMw6hAAd4/Z23Ssobp7c8amaIRyUkL+VbT8tN8TQFQrJolIZF/WKSrFteRsoBjjcmQWXWy6Q3RA8C8GVuiUanSDJoFAzSpMgv4rYPVYMDOVU6KsKec1q3Y/iT/F06A4NUOKKt74qvefgzSeDg+OV0Ukn8m5QoKFIK1N49R0AWG9ig1xLReXv2a3CAzC/XlEb5+/aBhAWg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(7916004)(366004)(6486002)(54906003)(66476007)(8676002)(6666004)(4326008)(66556008)(85182001)(316002)(6916009)(82960400001)(33716001)(9686003)(66946007)(6512007)(2906002)(508600001)(53546011)(26005)(86362001)(8936002)(83380400001)(5660300002)(186003)(6506007)(38100700002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: de0f25b2-e1c7-4c61-ed2b-08da4475ec99
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 08:57:29.9168
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BjtyPOuHdCNiuh/ic8RiLV/8rVcMt+IizRwy0gCE7RBcDN68cO6brFmjTRV3oMJtNo9tm5U4WPekWo585go9/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5377

On Wed, Jun 01, 2022 at 05:25:16PM +0200, Jan Beulich wrote:
> On 01.06.2022 11:24, Roger Pau Monné wrote:
> > On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
> >> On 31.05.2022 18:25, Roger Pau Monné wrote:
> >>> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
> >>>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
> >>>>      return pg;
> >>>>  }
> >>>>  
> >>>> +/*
> >>>> + * Intermediate page tables which get replaced by large pages may only be
> >>>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> >>>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> >>>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> >>>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> >>>> + * requiring any locking.)
> >>>> + */
> >>>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> >>>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> >>>> +
> >>>> +static void free_queued_pgtables(void *arg)
> >>>> +{
> >>>> +    struct page_list_head *list = arg;
> >>>> +    struct page_info *pg;
> >>>> +    unsigned int done = 0;
> >>>> +
> >>>> +    while ( (pg = page_list_remove_head(list)) )
> >>>> +    {
> >>>> +        free_domheap_page(pg);
> >>>> +
> >>>> +        /* Granularity of checking somewhat arbitrary. */
> >>>> +        if ( !(++done & 0x1ff) )
> >>>> +             process_pending_softirqs();
> >>>
> >>> Hm, I'm wondering whether we really want to process pending softirqs
> >>> here.
> >>>
> >>> Such processing will prevent the watchdog from triggering, which we
> >>> likely want in production builds.  OTOH in debug builds we should make
> >>> sure that free_queued_pgtables() doesn't take longer than a watchdog
> >>> window, or else it's likely to cause issues to guests scheduled on
> >>> this same pCPU (and calling process_pending_softirqs() will just mask
> >>> it).
> >>
> >> Doesn't this consideration apply to about every use of the function we
> >> already have in the code base?
> > 
> > Not really, at least when used by init code or by the debug key
> > handlers.  This use is IMO different from what I would expect, as it's
> > a guest triggered path that we believe does require such processing.
> > Normally we would use continuations for such long-running guest
> > triggered operations.
> 
> So what do you suggest I do? Putting the call inside #ifndef CONFIG_DEBUG
> is not a good option imo. Re-scheduling the tasklet wouldn't help, aiui
> (it would still run again right away). Moving the work to another CPU so
> this one can do other things isn't very good either - what if other CPUs
> are similarly busy? That leaves making things more complicated here by
> involving a timer, the handler of which would re-schedule the tasklet. I
> have to admit I don't like that very much either. All the more so as the
> use of process_pending_softirqs() is "just in case" here anyway - if lots
> of page tables were to be queued, I'd expect the queuing entity to be
> preempted before a rather large pile could accumulate.

I would be fine with adding a comment here that we don't expect the
processing of softirqs to be necessary, but it's added merely as a
safety guard.

Long term I think we want to go with the approach of freeing such
pages in guest context before resuming execution, much like how we
defer vPCI BAR mappings using vpci_process_pending.  But this requires
some extra work, and we would also need to handle the case of the
freeing being triggered in a remote domain context.

> Maybe I could make iommu_queue_free_pgtable() return non-void, to instruct
> the caller to bubble up a preemption notification once a certain number
> of pages have been queued for freeing. This might end up intrusive ...

Hm, it will end up being pretty arbitrary, as the time taken to free
the pages is highly variable depending on contention on the heap
lock, and hence setting a limit in number of pages will be hard.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:01:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340948.566089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgi4-00071P-HY; Thu, 02 Jun 2022 09:01:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340948.566089; Thu, 02 Jun 2022 09:01:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgi4-00071I-Cs; Thu, 02 Jun 2022 09:01:40 +0000
Received: by outflank-mailman (input) for mailman id 340948;
 Thu, 02 Jun 2022 09:01:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgi2-00071C-TA
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:01:38 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c2e2049-e252-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 11:01:37 +0200 (CEST)
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2057.outbound.protection.outlook.com [104.47.12.57]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-44-D7V-1X0zOKaG90eZ2DFGaA-1; Thu, 02 Jun 2022 11:01:34 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8822.eurprd04.prod.outlook.com (2603:10a6:10:2e1::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:01:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:01:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c2e2049-e252-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654160497;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J4JFGsBy5OgRtxViF+1rg5wEOUhlGowDONoLjfgoE/s=;
	b=ZKjQHoV56uk0g32tV9zdjsjKZwb8ix5eE9TPTwZlE/5Rx3uI0qcdkkwga4HJpE2TWgLuWr
	WLf7h+xbKzjMhtzh5zoxE433ieCsf+bh+7yZQILgQbLPxbSceXkxm4kl2Ffuma85ZVvKwi
	iAg2sApGTUcQ+J4FHYpD65p8EbIjdAU=
X-MC-Unique: D7V-1X0zOKaG90eZ2DFGaA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iDOJyq5gyi6QFrOFZd8UyhgOaBi4p81rAcf5DFw9KFtKdHds3V6dYJ75NmAmiVyf81HyJmCW/47T8bAHO6zMoYAtMbJKPTulRbrVYr1DuOaJMXjKbVt3NEFAInIqhA6T3d3E6JypHX+huh0Rh/tPKaD/nL/MFhSpmxV5YZUhhrpjwNe3SpR7bp9Gn/KAEkEdNajZvbvLOxSXtlqJOFm4Qj09Y4NoPK8OLuu7Aur8ZIqwIvAmeAcmUpVMC1/vD4nA32WXcDZlfcBjqdmEWsTHfyD+n0niLkEz+RKitANRuAjFEVkaoI+ekqIKD18rAYjYWyZSpijOdf+jXrFOHrmnVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=J4JFGsBy5OgRtxViF+1rg5wEOUhlGowDONoLjfgoE/s=;
 b=Gj2rOtVy/t4ZCtcFjTUdj3NP6vm+dJAqfnnxeibLqnKlS+VkwIDve1XcSCoYqspLdIQGMIHsQZLRIa6E0qmz3nnZx+6eYyT5uubvrhbs/HiUsXHxRl2RB+Udnvp58Jog3BiJxB2huHBXi77vBGhMv0n9OIUGWx1U4rcuGnhmRtSNV7c1zRBFW8utdxXIq0kffAeRGUJJuSuVpOLYRbjwY8dm33gTMGPsTmvaM5W3z5xD7pHGb8CTQMjrvnBteGpzSnFFIKYXOKB+A+VSxSBUGk3gmEkY0BIRITEny3B72eOOmX4VwKF9KhHcRffG+lD6TsrpPQuQQA5oyaIsukSuXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <092852c0-d833-0c7c-1bc4-5d2e86610a4d@suse.com>
Date: Thu, 2 Jun 2022 11:01:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 2/4] build: set PERL
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-3-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220601165909.46588-3-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0084.eurprd06.prod.outlook.com
 (2603:10a6:20b:464::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ba588a5e-64ff-4388-ebca-08da44767d3b
X-MS-TrafficTypeDiagnostic: DU2PR04MB8822:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ba588a5e-64ff-4388-ebca-08da44767d3b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:01:32.5817
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8cKk524xCqwhjLcDqcHOKFb69amFYkiZWPAdMvvdxvSVA/ZX7s+oXY3x8ob9KFg8+cWTPiEeXBfbCUPICf2NNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8822

On 01.06.2022 18:59, Anthony PERARD wrote:
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
>  export PYTHON		?= $(PYTHON_INTERPRETER)
>  
>  export CHECKPOLICY	?= checkpolicy
> +export PERL		?= perl
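
As context for the `?=` assignment quoted above: GNU make only applies the default when the variable isn't already set, so both the environment and the make command line override it. A stand-alone illustration (demo.mk is not a file in the Xen tree; .RECIPEPREFIX merely avoids literal tabs here):

```shell
cat > demo.mk <<'EOF'
.RECIPEPREFIX = >
export PERL ?= perl

all:
> @echo "using: $(PERL)"
EOF
make -f demo.mk                        # using: perl
PERL=/usr/bin/perl5.36 make -f demo.mk # environment wins over ?=
make -f demo.mk PERL=perl-custom       # command line wins as well
```

The `export` additionally propagates whichever value won into the environment of sub-makes and of the scripts the build invokes.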

For the intended use, is there a minimum version requirement? If so,
it needs documenting in ./README (and it preferably wouldn't be any
newer than the versions of our other dependencies from around that
time). And even if the uses are fully backwards compatible, I think
the need for the tool wants mentioning there.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:11:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340957.566100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgrW-0000HU-Gx; Thu, 02 Jun 2022 09:11:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340957.566100; Thu, 02 Jun 2022 09:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgrW-0000HL-Do; Thu, 02 Jun 2022 09:11:26 +0000
Received: by outflank-mailman (input) for mailman id 340957;
 Thu, 02 Jun 2022 09:11:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgrV-0000HF-20
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:11:25 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f9247587-e253-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 11:11:24 +0200 (CEST)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2055.outbound.protection.outlook.com [104.47.8.55]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-12-prTvffC3PKmDYHPMyIYW2Q-1; Thu, 02 Jun 2022 11:11:20 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0401MB2441.eurprd04.prod.outlook.com (2603:10a6:3:7e::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:11:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:11:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9247587-e253-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654161083;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qLWD3VcNB0vWWMWTO4wRtqhdJKKGbreu5BnmIxRUPxI=;
	b=PjM9ow/wsB5op8Or52odGFczOO/vC7C34/1LVj0U8IdkEUurYjwa2F3q6Z4LNJxCyaGySo
	Wej8/gldB3RdR/bgVHJVx1L0HU88ctMSp17CL2F5QwYXgmjwm1u2781kZOW/5thEm/K2LO
	OqgSu7aZZ4FZdwbP84cDoin03s3elZo=
X-MC-Unique: prTvffC3PKmDYHPMyIYW2Q-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lhy5bNpN38w7T5GLqBm1a17mNDhGmu3RoINsKGXzrm4Uq0kfYdZ6M2xQ6hNBptQr3XC0cmt/rmOnkuG4r6aQbgC/Q7nMn0oVGjCleVLKC8yK9Sp2frLtXcvJ9qSwx6F0u3oMSxRpqr0I4Pyh/W91AuF4fdXsc1z4ZSym7k3LCucAYSMbRGPEN+4BeuHhRrs7ke3kyA35OhdKgpELE+yqFSH3z0m/D3aeC8n/VtHfQrLLDfYQzQm2J8RDc6MrlS9pEfsJ0hpXHjr6NUwDSmgcNKNbwlvL3+pP5gXovAcDrCnGhUUsdVMkd8bCc4rqQzQgjRD4FndoWX9rSuOrVI9Jlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qLWD3VcNB0vWWMWTO4wRtqhdJKKGbreu5BnmIxRUPxI=;
 b=F1BGONFrYh9JHJbBuD31d8jBpTiX7EQHcFa6aYmEjJwOHr3YtSKjnJrp1TvlMQ7CiIYg9bz8ZUQzVSrdVFqai6oip1qdrlxA2a8aY2/0sGUBi3q+72SgJe1MNWWTra75vd5Hy/srMY1FPCUw+02THYJojrNmHGtgjBHtD2+vCtoecOZlw+Cfe7+ROd42gOLU9SgfB4wXrE1Vbgu+KvtsuP5Esb0qG5+02L4PraT2HVM5jLWPK+RimyN8W9jPsbMHXsiFIqRsQCJL2kbhjHPx2XO2zMeCthlOnz2KENHA2MM22YNedCguQWLw5/mPgp1hQOCNWkPVta8fYIxgkH/z4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0f8f0c20-690c-f02a-e1f8-957462118999@suse.com>
Date: Thu, 2 Jun 2022 11:11:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220601165909.46588-2-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR04CA0186.eurprd04.prod.outlook.com
 (2603:10a6:20b:2f3::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d482c5de-41c5-4f78-1952-08da4477d9e6
X-MS-TrafficTypeDiagnostic: HE1PR0401MB2441:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d482c5de-41c5-4f78-1952-08da4477d9e6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:11:17.5441
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nAOq+LtbUoLIDCSM0zCD27Y4Md9adY7gRsdXlgNJ1Ub+jLhEnTs+OfVS8d/GyES95OnwR1+VNqbM95URekI4Gw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0401MB2441

On 01.06.2022 18:59, Anthony PERARD wrote:
> Use "define" for the headers*_chk commands as otherwise the "#"
> is interpreted as a comment and make can't find the end of
> $(foreach,).

In cmd_xlat_lst you use $(pound) - any reason this doesn't work in
these rules? Note that I don't mind the use of "define", just that I'm
puzzled by the justification.
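
For reference, the behaviour in question can be reproduced in a few lines of GNU make (the names below are illustrative, not from the Xen tree; .RECIPEPREFIX merely avoids literal tabs): a literal "#" in a plain assignment starts a make comment, while both $(pound) and a define/endef body preserve it:

```shell
cat > demo.mk <<'EOF'
.RECIPEPREFIX = >
pound := \#

# A literal '#' starts a make comment: 'after' is lost from the value.
plain   = before # after
# A '#' produced by expanding $(pound) is kept.
escaped = before $(pound) after
# Lines inside define/endef are stored verbatim, '#' included.
define block
before # after
endef

all:
> @printf '%s\n' 'plain:   $(plain)'
> @printf '%s\n' 'escaped: $(escaped)'
> @printf '%s\n' 'block:   $(block)'
EOF
make -f demo.mk
```

With any recent GNU make this prints "plain:   before" (the comment ate the tail) but "escaped: before # after" and "block:   before # after", i.e. either escape mechanism would do.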

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:16:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:16:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340965.566110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgw7-00018u-3j; Thu, 02 Jun 2022 09:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340965.566110; Thu, 02 Jun 2022 09:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwgw7-00018n-0m; Thu, 02 Jun 2022 09:16:11 +0000
Received: by outflank-mailman (input) for mailman id 340965;
 Thu, 02 Jun 2022 09:16:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwgw5-00018h-OY
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:16:09 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a31890b2-e254-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 11:16:08 +0200 (CEST)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2055.outbound.protection.outlook.com [104.47.0.55]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-5-0VyicMXIOM29Qgeh1V_07g-1; Thu, 02 Jun 2022 11:16:06 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB4329.eurprd04.prod.outlook.com (2603:10a6:5:22::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:16:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:16:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a31890b2-e254-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654161368;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4A0y2OIPRh60XTuImXXNs/nsDNBronnHLrHCIry5G8g=;
	b=f9zpbWHhhJamX+aihHtVBoLNLQsU/25XrdEuqqkwtopHY2JVJSm8MQa9G+PUWk0Rhk4ZGL
	0YfryUBJRKpLSpO8t4cW5g9teOQa6gaW5QM+ZSlFnXtO1ZRvkId90xmqE7ykPhmCnoXMVJ
	Dgp35KQ7lK5ZxPKBX1kzycRVszaFIPg=
X-MC-Unique: 0VyicMXIOM29Qgeh1V_07g-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VcaFSrmZS7dCB8R1Htn3hXkT2OC3gSUisAckcq4eEVyZHP2DKYI1rqUbLniRwiv/IuYHEMppszBTSb8RfJU06maUTol6PdvfU9B8dA0FQP+2uCbKgk8cUQQqhZaG5K1N3B6KBcjVZTMCYEKy6nxbHnpbKml3OjMTnEXcHNKULMyK42/6JaSDRt5n55o/zADuXEdqcsA7oo6BxWURNbS5vy8FnenbWV5HOfN8Xg6YRCwQNTXNrew7NDV6gLRoMAmI9n1Rnk+hpP5NBhmtcV6OL7hjarYNVLgO/1kgMv0OMMtehf0aJgrXcjMlHxEIihs9QPNBO0I7SvES4K67RS4q3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4A0y2OIPRh60XTuImXXNs/nsDNBronnHLrHCIry5G8g=;
 b=bXFfimcBZzEwTwnukyluxcPbN0i1MPhXvmzCL45E/qq+DIqcrnBRqrW6sozOdMqEUy4T3p78+aUerc5TQ7tgOX1RvlCS/ktVjMVCL52m9lpc/9KvPz5+YUYTOvPschE2Q0c2TZ/9C5hKTtQLA6B8q+66lryfUddsVusHwso/j2NZPulYXzC9/VHFz1FDyY2BefjI8F8o9Ld6AI9i5cGECORw4hrLryJtgrvmuDmZ9nparJyswZ51WhJVaw47gWH87+/qpi9glJ3hTO4IOX2haxHF6caujzJAFshJUWEiEH3g8Sxs/5cgkK6/zB89tC/EPMcM3AM8KCi1b+RNFtIg5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1b9cf24f-87ec-4b49-04b8-72eb94c82c97@suse.com>
Date: Thu, 2 Jun 2022 11:16:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 4/4] build: remove auto.conf prerequisite from
 compat/xlat.h target
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-5-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220601165909.46588-5-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P194CA0074.EURP194.PROD.OUTLOOK.COM
 (2603:10a6:209:8f::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 995acec0-8318-4618-67b1-08da447884b8
X-MS-TrafficTypeDiagnostic: DB7PR04MB4329:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 995acec0-8318-4618-67b1-08da447884b8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:16:04.1350
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: l9pG1h1iOlbvLmnF5uVGUqojtvhiGH0FIi8TgZWvhKaxfdBUNkfDKQCMbsEnKXNl7CC1vsvU/RrXqe5ghWV3Kw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB4329

On 01.06.2022 18:59, Anthony PERARD wrote:
> Now that the command line generating "xlat.h" is checked on rebuild,
> the header will be regenerated whenever the list of xlat headers
> changes due to a change in ".config". We don't need to force a
> regeneration for every change in ".config".
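
The mechanism relied on here can be sketched outside the Xen tree (the file names and the cmp-based stamp below are illustrative stand-ins for Kbuild's if_changed machinery; .RECIPEPREFIX merely avoids literal tabs): the generating command line is saved alongside the target, and the target is re-made only when that saved command differs:

```shell
cd "$(mktemp -d)"
echo 'a b c' > headers.list

cat > Makefile <<'EOF'
.RECIPEPREFIX = >
HEADERS := $(shell cat headers.list)
cmd = echo $(HEADERS) > xlat.h

# Re-run $(cmd) only when the saved command line differs from the
# current one, i.e. when the header list actually changed.
xlat.h: .xlat.h.cmd
> $(cmd)
.xlat.h.cmd: FORCE
> @echo '$(cmd)' | cmp -s - $@ || echo '$(cmd)' > $@
FORCE:
EOF

make                          # first run: xlat.h is generated
make                          # command unchanged: xlat.h left alone
echo 'a b c d' > headers.list
make                          # command changed: xlat.h regenerated
```

Since the command line embeds the header list, any .config change that alters the list already retriggers generation, which is the property the patch description appeals to.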

This looks to be dependent on only patch 1; I wonder if it shouldn't be
viewed as an integral part of that adjustment. Anyway - if you want to
keep it on its own:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:23:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:23:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340973.566122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwh36-0002mq-Tz; Thu, 02 Jun 2022 09:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340973.566122; Thu, 02 Jun 2022 09:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwh36-0002mj-PZ; Thu, 02 Jun 2022 09:23:24 +0000
Received: by outflank-mailman (input) for mailman id 340973;
 Thu, 02 Jun 2022 09:23:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwh36-0002md-9y
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:23:24 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a63aa59e-e255-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 11:23:23 +0200 (CEST)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2059.outbound.protection.outlook.com [104.47.6.59]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-17-BRSUWzCHNpCEEUneJqFEQA-1; Thu, 02 Jun 2022 11:23:21 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM5PR0402MB2786.eurprd04.prod.outlook.com (2603:10a6:203:99::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:23:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:23:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a63aa59e-e255-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654161803;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wr7Z/Fosh+oBdriOy2r/m8ovEcSH8S2+SZzmoD/iwG0=;
	b=HPMw43wGYdOBFkSEL8GVKS9byrNk3a2+64jnFLIu+0yRkkUb2Kq1+WpjGablYi6B+Fd+al
	5xMcGEV1H+lP3TxR+5Z0Y3ZsOdBCZ48enRBAzJJmuWY3ZeAeEvQfNSMIQ/FB+f3sTfGxiA
	GWLVup0mFd+95UMhEvYiEOF+ZLjrr88=
X-MC-Unique: BRSUWzCHNpCEEUneJqFEQA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W5u3u7SJVWdznnUQYViMoluA13O73dCqqnSwh8W+bGG+17zLSAnT3Ou0ipZrLdA1CcjB70F/iY42a60vtmvVXxZL1VcbB3EbXMsJ5cg8sdpeaLkeNfC0N/4kdV6/A47RF9f9Zm8GU/9Q5O3TcSosRzzlRtMBRr9BfsR3wLZLePclcjVJPYFjOfOsHF5iEKKefAp5A032pm49BFytiDHRw0P4U5cLJa8ehbc2IOJkiZlxOaKvBSpXc9OKuENHbVxmolDo8hQb0p6vnsDefdSFmGxCk7GDha0Siqaj/TeUGdCfG8c9SAHFChdiGpemP94wrUkMPiKJQ3wCSziH817NCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wr7Z/Fosh+oBdriOy2r/m8ovEcSH8S2+SZzmoD/iwG0=;
 b=KHHS/itU4KTNVhOClAM961xKhdro2tRZ8dOAWdc1Q2IJXe3sSbZSpk3RUmZVFSxVbgmsnZ85jpfUDaNjPyN9nQPiTEhUoCTRph9j3arJkAxfrd6GZr/14pYJ18zX79VPaQpS2D/1mlfxHRV/RV3MKCpmFWqR9+ca08cFzf1KUDfgq5WHqVhbZBJocSiC26Z7432h9Q/GaPXy3JZXYin4G/muJ6IsHtnZ3yJbe4YIrXnckYb+EvmNy0BjAts0UO1ruxqXoa3zg3SCHqHN9QAkGHY5+7C5eaO2gXT/hehlnq4+1bJfNP4aT7C34Hw30g9rEByizMkMdzvTFjBUbiXG9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fa7f4bff-654c-a2a9-092d-d48b6bd87525@suse.com>
Date: Thu, 2 Jun 2022 11:23:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v5 2/9] xen: do not free reserved memory into heap
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220531031241.90374-1-Penny.Zheng@arm.com>
 <20220531031241.90374-3-Penny.Zheng@arm.com>
 <9f0d9d47-236a-d7c5-3498-d8706c616fcd@suse.com>
 <DU2PR08MB7325D759DF37AB836B3C2C79F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU2PR08MB7325D759DF37AB836B3C2C79F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0339.eurprd06.prod.outlook.com
 (2603:10a6:20b:466::31) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e1a6e6cd-058e-4405-a0ba-08da4479880c
X-MS-TrafficTypeDiagnostic: AM5PR0402MB2786:EE_
X-Microsoft-Antispam-PRVS:
	<AM5PR0402MB2786809E6B09199500C7EB3EB3DE9@AM5PR0402MB2786.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+zJn9IEWG1qftDLEf+/hBRdtkOjEWeilsJ2hWZeGDC/IEuKs/sf61yEmePfAGb+3CppRivhyffDMl6cfCE4/AMZzeg6yba4aaqsJByPnZ8v5pdV6rl+7TCtCL3Z+Gr7Z4SOvtFilDeWF+Q6iF6oBQbAzEDbo0fvHwPF+pz3ytPSk+RhA3BXEbSBptYDSaTcfddZV3To7uBg8zr4AB6zRr/6tJa5qFbzsErYIE7fzZWZBcHW2jKCXWXMroEx7hzb4GxFgnMOKrvTs5e97X0Ry8cid+XtRltCJVNdkh/mo72rk5mXJJ4EzEaXcc72/Mu1CIBd2JLBqtaFE4DIkiYdjLmCfPF+CkTNrE1SRfnC9y90ezgHUkJ98kLnIalLIORt1z1ehmGpuwhZ0AsMRaxWlR6OERUzxFqMxwBNlbbuStuapctzMRjxUpi1grtj7QwdaMokaC0QUuDogajh+e2ptDkm1jc5bUm2Vss7DSwx2MDiHdS+7NtMsuBF5OnT/LpfeRTPEnSUmfpV2CeGmvIzPjUWVL7Pl+SRGWYINbxRrCnCrXGiUXIIGc/B/C4rAQPdi8cJ3OnePTQq6rAmLyXlY+ipzJ6lWaCO/WtgO8oP+7Ddk4InfPZJJpXHl6ZnBufv/EFLbFHRF+DbgbevfnBA/OuYxu8eStAdyqcmPP+SEJePcaPvPhpUJSaBVcihf6lraIWGT1j/xLqQKkAJuucoOeV6v7tVrYs/k3ECsGaB94Yr76WuGvgRctqw8zYgz5CLU
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(186003)(31686004)(4326008)(2906002)(8936002)(7416002)(4744005)(5660300002)(36756003)(8676002)(2616005)(54906003)(53546011)(66556008)(6486002)(6916009)(6506007)(66476007)(26005)(66946007)(6512007)(38100700002)(86362001)(31696002)(508600001)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L3lVNDNIdmcvR0trNHk2TmJZbVRDbDFacnhwSnFUSXNmZVU1aS9IK0xTSUVq?=
 =?utf-8?B?VGVtZ3BMRnE2MDgvWHRUalhRTm5MamQzOGZsM0QrclkyTGZTbWxzNUIvSnlJ?=
 =?utf-8?B?YTFxeEtGMnRCenFUOVovVjE1bWNub0R4KzQwQjk1dVVjMjZEL2ZtSEFZY1NL?=
 =?utf-8?B?RnA3NWgvSWd0NVA4amdoOGQrQUpaQWJvTXg2Uy9EZzN4RnZyOWVMWEozRjZF?=
 =?utf-8?B?UXlKMFo1Tkdyb2dGV0lHWEJEN3pJV1h6T3B0eVBCaTJtREw1OW50aXZIWVNh?=
 =?utf-8?B?VTFsaS9Fd3BXL0hBZnRiYzMycmxiN2dsT3YvT25PNUNPc1V1OGJvWDRWK0xo?=
 =?utf-8?B?Z1ExZlZMU3EzN1dtdFB5OEdDakF5SHkza2hwSm1oblhNOTdYQStDRTM5cE14?=
 =?utf-8?B?ekVEK2pGRkE1eTdKM0I4ZWo0Q1pTRU5NcnZQbmVaeTdOanVUd2Jjc1plZU5y?=
 =?utf-8?B?WXgweEp5R0N4NzMybTVFSU1rNUJNZVVBTWFFRVN2SDNHUllhdjIvdUgrKzhi?=
 =?utf-8?B?SitOdnVtWjBiQ1M4NzkwRXpCRkx5NUN2R3kzd1NNTHlEbWc5RHBDWmphVDc0?=
 =?utf-8?B?UTRtb2pabENSY1c2cklmZnhQN0dQK0dMVVZ0NGVkdkhYbGp3bU9OYkVyN2I3?=
 =?utf-8?B?MFlVa0o4dzhsbjJLZmpBbXZQZTVVK1ZGdzhyVWVTWXE5dGpBY3NjV0FtMEEr?=
 =?utf-8?B?QjFjNzJZVFhNZ29LSlQ0Skc4ZmNaYU50bzVDV3JmcDR0clhuUVM2UlczSzR3?=
 =?utf-8?B?cG5OYktFREtTbzNiMHpNWFROTkFsNm5mQWJwQ2xFcnB6WG4rOERmU0puNVpx?=
 =?utf-8?B?RWZIdzkxbjJ3cnRhbG13S3ZnNWQ0S3M4bCtIa1ZBSXhaWHRJeWNDVWtwODVo?=
 =?utf-8?B?d0UyT2FtOXU4cTVGeGMzY29aMFBBK01LMk1yMkdpK2IzSk5QdSsyY0VJbnVV?=
 =?utf-8?B?QWYzZ090RlEwSHNNN0FaR3YyRkE4aGxucVNhYnd4eWdCc29KT3JKc244cXdB?=
 =?utf-8?B?cFRqMW9LTnVoa093YXM3YS84VnJOUVRsVE9yNDBOWTRhWWZIdWU2N3hGaE5x?=
 =?utf-8?B?WUFGNFU2NDVWT3Z6dFhOcW1nL3ZwaUlzSlFtdGVFbkNIeHE1anJmTmhlbEYv?=
 =?utf-8?B?bzRhYUtaMmMwWjAySkMrNkZqU09VenN4djl5ZHFZeEZUZm53V1FEdXBML010?=
 =?utf-8?B?SGZzNGxHd2RzcHBRUmF6ejhsVlZsSjFEdi9qaGsvb0E3bXVncTdvM1MzWEFT?=
 =?utf-8?B?VDZvNHY1K1FPd2M0Z2M2aUl6aTZqWEU3OGw0YWx0RzVTb3MyOTJMMUU5MCtz?=
 =?utf-8?B?NjNBeXNQbVVRZ3V5U01uaHltZURyd0ZjSTM3MFNtQllKdERPQS81SXIzNzBx?=
 =?utf-8?B?N3E2MGtCYnh1SXlMNHUyRTBXbWp4U09WWURFTFl1cUtYajZETkVwQ2tKOVhw?=
 =?utf-8?B?WXNWK2pLL1puWERybFR2N0thTVJoa0ZLSnF4ajhPREExcHJ3LzcvaFlRbFNw?=
 =?utf-8?B?T2IzcXNpL084Y1pUclI5cjVWREFhVWFKSDk0NkQ1dldFUFRUc0haVVh2cWIr?=
 =?utf-8?B?bWxoL0RJWVFGMlg0ODVvZFd0TUVYTjZaZUlJWXBkUGVjVTdEN2Z6blErSERY?=
 =?utf-8?B?cjFqSWQrdlBlWm5GVzQwbUtaN0FRWVAvZThXdXU1akp0UnNhR0V2YndBTW9y?=
 =?utf-8?B?VFdxbGYxczZTNDJFL3laYmJPSXp4RSt1bTc4K0svSkZldkRLNUtrTUZoT3NL?=
 =?utf-8?B?aDR2NEw1eDFsUzcyUWxaT1RaZ0M4OFFjOFdJd0tTZVR6dWRiMkc1TFZnRExR?=
 =?utf-8?B?d2NMREdwM1gxUS8xYnFwR25BTUlLVU9wL0ZFY1k3TVlSTTYraVpSQnBTTWFN?=
 =?utf-8?B?OEMvOENvOUwxckZ0NUlmaHAvK0xDalo5S05BMlkzQ2FCR3J5UFl2RjBjcWhL?=
 =?utf-8?B?eTVFeHNRdmJScHZCRU5ocko3OTFhVmNHSUxtVDZuVlE4TUVsVkpSOFJCaWlq?=
 =?utf-8?B?R2FuaGFKd3lMVG8yemNCV1A3NlF0c3lJVUxNWFgyZk1JNkMzaXp2dTNDZ3Jx?=
 =?utf-8?B?bEtkS1FrdTZ3VzZ3d29RQURkbXFvc2xxa2NHalJWWkV6eG8zdXNxZnNPUEVI?=
 =?utf-8?B?OEtCSFJPR1VYZHR2VnlFNEUvWnM4Zi9rNDhPNk5zajdoQXBucXBubmJYd3RU?=
 =?utf-8?B?M3lvdjNzVXJQaWpSZ0hYTklqVi9saGhEVFBzRjVia2c4THZwd0szeHhKMzRP?=
 =?utf-8?B?TkhxRUg3NWxhT05ydXArbkJidDNzZTJNT242TitJYzlXZ1F3clg0R3hpWExK?=
 =?utf-8?B?VHZqc3QydStkQVhGV0RZRC93WE85VzV0cERHQVNVK3IwMWFZTzZvUT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e1a6e6cd-058e-4405-a0ba-08da4479880c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:23:19.1856
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W30dAD0tDU8ID61ayyfvSYs/27Tl19e7cx2d2UGij4FXP70gsfCfe7WniYer0I3eTh76S8Cc00O9233mbJyFQw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0402MB2786

On 02.06.2022 04:18, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 31, 2022 4:37 PM
>>
>> On 31.05.2022 05:12, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -151,10 +151,6 @@
>>>  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>>> #endif
>>>
>>> -#ifndef PGC_staticmem
>>> -#define PGC_staticmem 0
>>> -#endif
>>> -
>>
>> Is the moving of this into the header really a necessary part of this change?
>> Afaics the symbol is still only ever used in this one C file.
> 
> Later, in commit "xen/arm: unpopulate memory when domain is static",
> we will use this flag in xen/arch/arm/include/asm/mm.h.

IOW you want to move this change there.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:26:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:26:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340982.566133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwh5Y-0003Rd-E3; Thu, 02 Jun 2022 09:25:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340982.566133; Thu, 02 Jun 2022 09:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwh5Y-0003RW-BI; Thu, 02 Jun 2022 09:25:56 +0000
Received: by outflank-mailman (input) for mailman id 340982;
 Thu, 02 Jun 2022 09:25:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwh5X-0003RQ-A7
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:25:55 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 004011f6-e256-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 11:25:54 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-37-dKQVpGNfPu-R313OaBPUqg-1; Thu, 02 Jun 2022 11:25:52 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB4909.eurprd04.prod.outlook.com (2603:10a6:803:52::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:25:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 004011f6-e256-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654161954;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h84eh+LwxOmJ1STbZjiq0E+X9h89ANQX3Xr6OCQww/w=;
	b=NH3d5LwXEbaw6W1K7RpTlbYSShAMOp/vMGTSJ0TeWV+QSDS4FD4bXXKca+6/MfiqPzDMDD
	NYt0KrhzhiYjwyXy30si7g3TitpHrIEA8FNZ5Xxjg27BLtOgU5BHzEIA5maQgLDzwGnDDF
	gBk+uYGfieDSGSXjReH/Dv0hX/5YdBQ=
X-MC-Unique: dKQVpGNfPu-R313OaBPUqg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n8m6vW+NElyWtIvfAqpNQe8CZgbaRjm/E4q+fg9BuqzVf/CdqqdEkARzPuqet7Luhd4pDZsyd54uyfH3sERf6ppHx4BZHppUDFGOBc77l2xHsWcbLHZW6fSis5glxgLTQJ9G+UKJ4Tq41vDNHhxS+cTIBjyqskP+wgSwCBSLhuhgJWInuKwRUZIhKHqdj8S+sBjB1RqbhqRGWnpjGEIYufoQqiNEkRsc3t+vnbAmuuhQ/OJOD79hx4ccn54haAuZ1fJbSot5FleyLWq00rZtWihxBe/mWD3LGwMMHdMBAvKvIc3IWOX7gPQBe4g0IZjj1kcoX+Zzw/bY+TetF2HoNg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h84eh+LwxOmJ1STbZjiq0E+X9h89ANQX3Xr6OCQww/w=;
 b=mx1RG4sYRdh/Kt8p/zNOuBT96BA2ccD5rxgMGhhwqCxjyRfwrYYEJD+vbWttWEHdpa9SG0/xcgkxTxNPXCrEEjboSpsk3BQ6JdsbuZl2mQWfaNGdsQ2q2eZXeJT0FoLWX8kGBGsCDA/dmvVoMRMYx8lVUxz1FbKK7W6YGgz/gN+DdgF1xmWU7AcXr2AV5iGpb6NJC71goPO4gJl61c/8zS+QJyPUkQiaXhzjAWdjOShJzIRRamztw/jY5hcvoI+oW8uj1Uk3NW/fVKDfgWHHfhaeV/llriKhl7xQUf10976h3aQH9UWFNa+SuJULDtNNK0nCQ6cpho2IQEUGU4VWDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ed48b5d3-a8f1-f2e9-120f-476c3e19ebbf@suse.com>
Date: Thu, 2 Jun 2022 11:25:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v4 1/3] xsm: only search for a policy file when needed
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-2-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531182041.10640-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0402CA0023.eurprd04.prod.outlook.com
 (2603:10a6:203:90::33) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6ee11c5b-5057-4800-c113-08da4479e242
X-MS-TrafficTypeDiagnostic: VI1PR04MB4909:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB4909786DB63A27C25A442331B3DE9@VI1PR04MB4909.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sTIzuzsFVozPjlPRVvR/v7GwnV4ik+uEgIPbw+4Ds9Eiid+0ZrmgIyZOodNIVtBkRG0dv+9XcSPuI9kQty7MP4UIKuXW/yMy9MtYVWsEitPUPMDdn6gZoq+jyKVuEdKRMC40kSIUKlb3TWPRIqxKuyaKba5xkq7EQaPvlhuE2Ve4BolZp/az+9CQcMcLAyl0dVCVZl6+EbbPbnAK6k2zWZmhtopyZUX1luPqkO0HxebwhDA5ZRXk7ksvjVIJTPjbE0wlvumZV3f4spffblfLyKBgt5unXEnDmGMi+Tt26+NlGdCFw8uChcxe9/BJXPD9XtDEFpp+nZHBMkO+ZoWeXIyG112V1xqr5s6jwNGRqdnA2xt4iicSNbUkS5sqjQ4pYUnnrQt0D5j8YJ27sLJZESr8Lhj4OuEzYGcbyZD3HDufLpoikm7ULwPJwXhrKCvh7pX8TogDg0LHuNfthfHfkhqGH9Z+An/JNNntjP3EMWwfDC6fogE4UQ5qw9xUSb5lDmPFwuiUii/sUnAzxfIaR/cNLG2XasIHQmNPelDpWIBjOhIXYEnDQtsG8inAejDFS1OeMLcTtRf+2NRpTYKqJ78ktDADV3b6M8n3H5xdZQgDAOZBydym0yHgQTNQvuapF4xgc9P6SKhSqkloU0kP+2pxYpXPky3VBeOFisfP8YNdE76qBcGrrc1UBJ+J7ffpYpJee/lZe/iTwn6WojPAevKJkV4blYk6yx2GWhLJQloYH/+kfmmbkhatUyGdQiYu
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(2906002)(83380400001)(5660300002)(31686004)(2616005)(508600001)(8936002)(186003)(36756003)(6486002)(53546011)(38100700002)(6512007)(6506007)(26005)(6916009)(8676002)(316002)(31696002)(66556008)(66476007)(66946007)(4326008)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SHR1amJjTXEvSWVmbjA5S3U1QUZ5d054UFR3Y3VEVkpGVUVLTWRnTEd2TDBn?=
 =?utf-8?B?OTVLeGR0b0s0OFRBaEZIWDF1WkFlNllqWFZjWnZsWHJTMlZQQVppQ1U1RlJx?=
 =?utf-8?B?bENCOENmWkxTU2ptRjBEbkRIWUxtY1U0T09SQ21aQkN6Z05FcjRUeDZNeThY?=
 =?utf-8?B?WDdyQ0lVTzFwSEYwT2E1R2tsMDdUTnErMXo0dGRwcG43czE1aFgxVFBWY3Zl?=
 =?utf-8?B?RlFVdkJmaWRTbHppZEZCYzFGdzV5TVRqbWs1YURPUi9tTlhGWXZSTDUwMWJL?=
 =?utf-8?B?ZmMvWExKRzlPM2FrdjA0andxcUZJOERtLzV3REpKazM4NmVJTUhFd09lNENH?=
 =?utf-8?B?b2ZVa2xtSGxTT0VlT1lmaTIyQjNObzZkdlVxU2h5Wk10cGtvSFpUdFZyZitZ?=
 =?utf-8?B?Nzc1TjhkQzRGZ2FwcXpCaC9uSkM3WitXNnBJMG0zMVpGOGRqN2ZnZjVNNjFJ?=
 =?utf-8?B?Tkd2OFZzOHl4Y1hYNDBZNWd6Qk9sRDF3emtsOWwvUHpXVVl2Ym9pUUU4U3F4?=
 =?utf-8?B?QkdhTHlpem0zWitrQ3FPY1UrNTdnaGp3Z2Q4ajBxNkFYODZZamtRZCtOWGtP?=
 =?utf-8?B?UVZiRkwyb3IvLzdqWW9TQ1FBbHRvUkZJT1N3eUhWNFFndmgrMlpFS0Y1Nndl?=
 =?utf-8?B?N09WNG9mWVIxTHhzK1V4Q25NTlNEM3VSS0M5ZTRSQzArb3JyRFNrMkdxaTBu?=
 =?utf-8?B?NktiWXA5ZThKMXFWWnlvWUNqd0ZaR1RuMCtYK1FGa0ZLWVA0N2RGUTZjUTQw?=
 =?utf-8?B?c1lCRnZwMDE4YUV3bkRQSmxsVkJKa3dOaHZsMFp4QWZUZVpmTVhnQ2w3UHNO?=
 =?utf-8?B?UXJmc0xxMkxocW0xL0E1OVhOZjdOTldiM2dnaW8vOHJOQ284MXFmS2RSNzBK?=
 =?utf-8?B?NmxySXlLanp2RnJXa09RRWFkS2tqUUlYNC8xV2I3V2RlUDhCeVBIeHBTNWJE?=
 =?utf-8?B?d1Nhc2FCV0NMRURSNEQxMmpWNzdGdS9jY2MyMjA2czJuMy9ReVFwc0gwZzF5?=
 =?utf-8?B?RUR3QlpxRlBtaDU5TG5yTXZIcTc0Q0JVOXJET0VPSnFOSkhXdGNJVkhpaC90?=
 =?utf-8?B?eGtVUmRCQjB3UHdwNW5zTVRVR0l4UlRuTUFWRXpTdWdwVUxrM0xjTFViSExU?=
 =?utf-8?B?YW50MjczZ0FkL2VzRVo0N2pHR0xPVjgvcXIvSWFYdmR4NFpyNHNPRHp2SzVt?=
 =?utf-8?B?ejdIYjcvcURwRmRtZy9OamtVLzc2S1ppNEJjNVpVSDl6TEQ1SkJkVCtmZWdz?=
 =?utf-8?B?REVZSDVHRWFjcmJLQy9HS3A5bEUzTkZSME92K05XZWk4ckViSmRtRCtFaG9Q?=
 =?utf-8?B?UEVpZklhdVBlb00reW9Bd1VXTGtoNjA5UXpyUHNITVl3bDM4cVJZbkMzVCsr?=
 =?utf-8?B?ZjdnQmtBcE5xNDdFSTRJaWExYnBBVnR6ZFN0d3M1MUV4bUhpM2RXeHBzRE1V?=
 =?utf-8?B?ditrUU9SZFZDVWs1dVpwWUFJN1R5MGx0WStHRWdZWTlyR2xyL2t5UHJPcWdk?=
 =?utf-8?B?ckpodFBjeUVYQlp6WDZZQU14ODBsNGJWMVhXUDErd1lZWXM1amhvVUxnZTB0?=
 =?utf-8?B?NFhDMUpJeTZMdmFnYUNrYTZBWnlCYW0zQ3R3L2NTa2FjNkxCUmV6d3JMWGFF?=
 =?utf-8?B?TkZuUkNIVDFtSE9oemdzTWhiam1mTWd3ZUk1OFVIbjFMNURVaTAxOGVoNXM5?=
 =?utf-8?B?bWdaakJqeXl2VnVaTkFmQmpGZzA1MlVOYjRYZ1BnamdqTzRJRFJFZFNWbkJ3?=
 =?utf-8?B?Mys4ck9xVllZTWFMbGpRRmlsMERTcUdtMGJZSlhvWnFPSmc1UDdUZ0RsbzR4?=
 =?utf-8?B?K3MwSlYwRzV1WlRMcUxDRlkvNkdYVEp6SHRtZnZFYUl4S3FIdkQzSEVaT3Vo?=
 =?utf-8?B?Yk5JT0Z6WmwzZ1d2NnlxaWtVRHg5SWNiTGtqWXFhWURBZ1VvNnZMRXNYS2RS?=
 =?utf-8?B?WFpZYkdHem9JQ29Md1M3Z0JDRzhtZ2ZUNlZTSEF3Qi9DMzVvcUdxVkVXTzNW?=
 =?utf-8?B?UC9qY0NSbmROVTlJQ3IzWU9jSldzSmJuNUlLaXA5YzJreWhjSHdqUnlzM0hL?=
 =?utf-8?B?aURvWWZnNlJnN3hiNkxpbmlxV2UxSER2NlJCUEE5eDN1L01kUjJ3ZnJBV0ZD?=
 =?utf-8?B?c3ovb0ExeDdKbWlHbWErNG8xQ3U2SVArNmRKMjM5ZGRCSVk4RkdXa2tXK1NH?=
 =?utf-8?B?aVMvYmVDbmRJdHFpYnlaVFRzajdEbjBsRm1jaW8rcG5QRnUzeDlKT1ViVitB?=
 =?utf-8?B?S0VBY0lza0V2UW5SeVVGTnBGNDY4a09TYVc3QTh4azNsNTJsYmZLNW1WV09G?=
 =?utf-8?B?Ni9yaGQ1c1pIQnFXRldyaDg5YzBCbXpDOHhBbWcxaUpxTXA4elVQQT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ee11c5b-5057-4800-c113-08da4479e242
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:25:50.5197
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o+K6d7fLTWAWx4cLeGi98555NE+xJIC8ivTz2L9XL/j+1WMGo6kjDfs1JwA6dSWdWoNv0sVy3JvwPDqBZ4GZ6Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4909

On 31.05.2022 20:20, Daniel P. Smith wrote:
> It is possible to select a few different build configurations that result in
> unnecessary walking of the boot module list looking for a policy module.
> This specifically occurs when the flask policy is enabled but either the dummy
> or the SILO policy is selected as the enforcing policy. This is not ideal for
> configurations like hyperlaunch and dom0less, where there could be a number of
> modules to walk or an unnecessary device tree lookup to perform.
> 
> This patch introduces the policy_file_required flag for tracking when an XSM
> policy module requires a policy file. Only when policy_file_required is set
> to true will XSM search the boot modules for a policy file.
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/xsm/xsm_core.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)

FTAOD my (later) v3 comments still apply; v4 simply appeared too quickly.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:35:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:35:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340990.566144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhEi-00057k-Ca; Thu, 02 Jun 2022 09:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340990.566144; Thu, 02 Jun 2022 09:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhEi-00057d-94; Thu, 02 Jun 2022 09:35:24 +0000
Received: by outflank-mailman (input) for mailman id 340990;
 Thu, 02 Jun 2022 09:35:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nn9h=WJ=citrix.com=prvs=1458da55d=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nwhEg-00057W-PA
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:35:23 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 50cbf915-e257-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 11:35:20 +0200 (CEST)
Received: from mail-bn8nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 Jun 2022 05:35:17 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5712.namprd03.prod.outlook.com (2603:10b6:a03:2dd::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:35:15 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.015; Thu, 2 Jun 2022
 09:35:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50cbf915-e257-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654162520;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Ot3xRBf4vJwPkna0YYEXw716xFNFqFIIxEaQR5vbs6M=;
  b=cJep1QPzMONDzHLRUyadP3wjhWWsH6QOwYA+lq0AG5+MudAn6GUwAd3D
   iEHnozfAstGHdzqqxsW/xgN+XB+gqdjMy7VBsyBaVlNT5jklg2kuYXdMf
   eDE20kF8/NBv3yRNvCyXVZPM2BTtLLwvhfthpTecjHS3L85Bo9O6DXpXY
   k=;
X-IronPort-RemoteIP: 104.47.58.175
X-IronPort-MID: 72695322
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Ae3tCqooywHnTimQwDwRVioce1peBmJuZBIvgKrLsJaIsI4StFCzt
 garIBnQb/2JZ2ChfYt/a4nl9hsH6MCHnIJkQAU6rS01EXgXpJuZCYyVIHmrMnLJJKUvbq7GA
 +byyDXkBJppJpMJjk71atANlVEliefQAOCU5NfsYkidfyc9IMsaoU8lyrdRbrJA24DjWVvT4
 Yqq/6UzBXf+s9JKGjNMg068gEsHUMTa4Fv0aXRnOJinFHeH/5UkJMp3yZOZdhMUcaENdgKOf
 M7RzanRw4/s10xF5uVJMFrMWhZirrb6ZWBig5fNMkSoqkAqSicais7XOBeAAKv+Zvrgc91Zk
 b1wWZKMpQgBFaLiuaMCCwNjPiRELPZs16DKGGShiJnGp6HGWyOEL/RGKmgTZNRd0MAnRGZE+
 LofNSwHaQ2Fi6Su2rWnR+Jwh8Mlas72IIcYvXImxjbcZRokacmbH+OWupkFjHFp2Z0m8fX2P
 qL1bRJ1axvNeVtXM0o/A5Mihua4wHL4dlW0rXrK//RmvjOJlmSd1pDLMev8SPisRf5M3UK3o
 GWc5V3aABEjYYn3JT2ttyjEavX0tSHxVZ8WFba43uV3m1DVzWsWYDUGWF3+rfSnh0qWX9NEN
 1dS6icotbI19kGgUp/6RRLQiGaNoxo0S9dWVeog52mwJrH85g+YAi0OSG5HYdl/7csuH2V1i
 xmOgs/jAiFpvPuNU3WB+7yIrDS0fy8IMWsFYixCRgwAizX+nLwOYtv0Zo4LOMaIYhfdQFkcH
 xjiQPACuogu
IronPort-HdrOrdr: A9a23:Ylrhkaxj5V2snVtm/9VqKrPxvuskLtp133Aq2lEZdPULSKGlfp
 GV9sjziyWetN9wYh4dcB67Scy9qFfnhOZICO4qTMyftWjdyRKVxeRZgbcKrAeBJ8STzJ8/6U
 4kSdkFNDSSNykEsS+Z2njeLz9I+rDunsGVbKXlvhFQpGlRGt1dBmxCe2Km+yNNNWt77c1TLu
 vg2iMLnUvoRV0nKuCAQlUVVenKoNPG0LrgfB49HhYirC2Dlymh5rLWGwWRmk52aUIF/Z4StU
 z+1yDp7KSqtP+2jjfaym/o9pxT3P/s0MFKCsCggtUcbh/slgGrToJ8XKDqhkF8nMifrHIR1P
 XcqRYpOMp+r1vXY2GOuBPonzLt1T4/gkWSvWOwsD/Gm4jUVTg6A81OicZyaR3C8Xctu9l6ze
 Ziw3+Zn4A/N2KOoA3No/zzEz16nEu9pnQv1cQJiWZEbIcYYLhN6aQC4UJuFosaFi6S0vFqLA
 BXNrCc2B9qSyLbU5iA1VMfg+BEH05DUytue3Jy9PB8iFNt7TJEJ0hx/r1rop5PzuN5d3B+3Z
 W0Dk1ZrsAxciYoV9MMOA4ge7rBNoWfe2O7DIqtSW6XZ50vCjbql6PdxokTyaWDRKEopaFC6q
 gpFmko/1IPRw==
X-IronPort-AV: E=Sophos;i="5.91,270,1647316800"; 
   d="scan'208";a="72695322"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xengb/0ZL+Dz+xsO+KKuSv1066h/AW8ae0c1bl1cw86AiAmeuM618fA5GxNVbmXVjpvMnB2R74GNgIcfngTphBqD1hRIQhx8zGV2BT8ElNEeIO5v4lNibdO5ZuSCf/drl1qU5gNPBM1qyMjHKrQ5nzFW60YeUSRbCAbhxIro0O2PU+PJzj1KcHiKkht+IBcirmsl+ZsTn59HfgDa9cMg2Wm0/3skjTfdaZ67k1S+aQvJiXftjV+cs+sMNRD+39SdSVe1UHrhlfCUdOUmDmNO6YTmOsPBusATliLOjULW7kACCLRkDfHTuJ5LzY8fJgQJKW6B0yS68sNvaQ4KbFqLxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nG2N9ig/MGr7QL3pa8iPLcL2fzuWvxlFjObLcXccWpA=;
 b=K8W8rk2gLQUpxPG63Ai6g62KvnMHTKTQ21hnfiKg+b3ePNPxa+ON0q2rUgNkDoMOnqjG2ELYtvMcOFY2+0R9TwSY4UkiXbuowzGe8/ZtMsEZ9Iev/fLEpl4rRkWH2qsM0d3saWg9k5lrgzWoU5B3Z1KTuppPhlBffn77MUiji1tE8uR4ahQJwA/aDChVwUOwENjH63zEWpmjGDqReXihoefdLd7GfeGehgrflR6hhwwGeUJI2VNfEtSK5mjzZNlzQ5sc1b/1NsQzTm0yiYffh4lcrnuxg9DkfXBFRJFNOvtvVOJwMWjzNSHoo5blcVq+gQMAb2RmFknnMfzeK0dG0w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nG2N9ig/MGr7QL3pa8iPLcL2fzuWvxlFjObLcXccWpA=;
 b=FTbxgPehWXUirP5DVoDFtmPxQUiBUTYFwA7QrwTHqySlJeanPEfPELczp0OEi9lxkUrDrIfQ9ksCnvn11EopA8HcmkXzIBDgXdUuLNDyXd2dBUoeRtRj7YrFtp+iPpLY1BUApCqDfxFs1weCP+SGRdRn4H8mQvEXAj1xxGQUBfM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 2 Jun 2022 11:35:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by
 superpage mappings
Message-ID: <YpiETolItQ7FvcsG@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <b3126189-2fec-ec14-7129-7897cde980a8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b3126189-2fec-ec14-7129-7897cde980a8@suse.com>
X-ClientProxiedBy: LO4P123CA0323.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> When a page table ends up with all contiguous entries (including all
> identical attributes), it can be replaced by a superpage entry at the
> next higher level. The page table itself can then be scheduled for
> freeing.
> 
> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> for whenever we (and obviously hardware) start supporting 512G mappings.
> 
> Note that cache sync-ing is likely more strict than necessary. This is
> both to be on the safe side as well as to maintain the pattern of all
> updates of (potentially) live tables being accompanied by a flush (if so
> needed).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Unlike the freeing of all-empty page tables, this causes quite a bit of
> back and forth for PV domains, due to their mapping/unmapping of pages
> when they get converted to/from being page tables. It may therefore be
> worth considering to delay re-coalescing a little, to avoid doing so
> when the superpage would otherwise get split again pretty soon. But I
> think this would better be the subject of a separate change anyway.
> 
> Of course this could also be helped by more "aware" kernel side
> behavior: They could avoid immediately mapping freed page tables
> writable again, in anticipation of re-using that same page for another
> page table elsewhere.

Could we provide an option to select whether to use super-pages for
IOMMU, so that PV domains could keep the previous behavior?

Thanks, Roger.
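The coalescing condition the patch description talks about — a page table whose entries all map contiguous memory with identical attributes, so the table can be replaced by one superpage entry at the next level — can be sketched roughly as below. This is an illustrative stand-in with a simplified entry layout (hypothetical ADDR_MASK split), not the actual VT-d dma_pte format or the bookkeeping the patch uses:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PTES_PER_TABLE 512
#define PAGE_SHIFT     12
/* Hypothetical layout: bits 51:12 hold the frame, the rest are attributes. */
#define ADDR_MASK      0x000ffffffffff000ULL

/*
 * True if every entry maps base + i * 4KiB with the same attributes as
 * entry 0, i.e. the whole table could be one 2MiB superpage mapping.
 */
static bool table_is_coalescible(const uint64_t pte[PTES_PER_TABLE])
{
    uint64_t base  = pte[0] & ADDR_MASK;
    uint64_t attrs = pte[0] & ~ADDR_MASK;

    /* The superpage itself must be aligned at the next level's granularity. */
    if (base & (((uint64_t)PTES_PER_TABLE << PAGE_SHIFT) - 1))
        return false;

    for (size_t i = 0; i < PTES_PER_TABLE; i++) {
        if ((pte[i] & ADDR_MASK) != base + ((uint64_t)i << PAGE_SHIFT))
            return false;               /* not physically contiguous */
        if ((pte[i] & ~ADDR_MASK) != attrs)
            return false;               /* differing attributes */
    }
    return true;
}
```

A full check would also walk lower levels recursively; the mail thread's point is that for PV guests this condition flips back and forth frequently, which is why delaying re-coalescing is floated as follow-up work.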


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:47:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:47:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.340999.566155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhQK-0006oS-Fu; Thu, 02 Jun 2022 09:47:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 340999.566155; Thu, 02 Jun 2022 09:47:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhQK-0006oL-CQ; Thu, 02 Jun 2022 09:47:24 +0000
Received: by outflank-mailman (input) for mailman id 340999;
 Thu, 02 Jun 2022 09:47:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwhQI-0006oF-Oi
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:47:22 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff73d2a9-e258-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 11:47:21 +0200 (CEST)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2053.outbound.protection.outlook.com [104.47.8.53]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-46-ZPj2J4fHM02g8Hhg8GW9aw-1; Thu, 02 Jun 2022 11:47:18 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7409.eurprd04.prod.outlook.com (2603:10a6:20b:1c5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:47:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff73d2a9-e258-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654163241;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=R2nAychlMBFxvawYdIVjlApNpLN1F+sVNEB0C89rDOo=;
	b=ZCauABVqLH5JnQ6TljekDnpd68GALssVM43wwErzi7KrOHdqhgoaqxZBwuPeWdkyTxUTe/
	d3mOzjcbnBmV+UWyvFzgzVklmuK+z5DnRhCdWAjVzfcgJwzcz2iekf9z1bnC0IVlF8Jueg
	dDXECuByeitWlTU4l/KHeU5HBS72KZs=
X-MC-Unique: ZPj2J4fHM02g8Hhg8GW9aw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
Date: Thu, 2 Jun 2022 11:47:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531182041.10640-3-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0748.eurprd06.prod.outlook.com
 (2603:10a6:20b:487::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 31.05.2022 20:20, Daniel P. Smith wrote:
> Previously, initializing the policy buffer was split between two functions,
> xsm_{multiboot,dt}_policy_init() and xsm_core_init(). The latter for loading
> the policy from boot modules and the former for falling back to built-in
> policy.
> 
> This patch moves all policy buffer initialization logic under the
> xsm_{multiboot,dt}_policy_init() functions. It then ensures that an error
> message is printed for every error condition that may occur in the functions.
> With all policy buffer init contained and only called when the policy buffer
> must be populated, the respective xsm_{mb,dt}_init() functions will panic for
> all errors except ENOENT. An ENOENT signifies that a policy file could not be
> located. Since it is not possible to know if late loading of the policy file is
> intended, a warning is reported and XSM initialization is continued.

Is it really not possible to know? flask_init() panics in the one case
where a policy is strictly required. And I'm not convinced it is
appropriate to issue both an error and a warning in most (all?) of the
other cases (and it should be at most one of the two anyway imo).

> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -775,7 +775,7 @@ int xsm_multiboot_init(
>      unsigned long *module_map, const multiboot_info_t *mbi);
>  int xsm_multiboot_policy_init(
>      unsigned long *module_map, const multiboot_info_t *mbi,
> -    void **policy_buffer, size_t *policy_size);
> +    const unsigned char *policy_buffer[], size_t *policy_size);

See my v3 comment on the use of [] here. And that comment was actually
made before you sent v4 (unlike the later comment on patch 1). Oh,
actually you did change this in the function definition, just not here.
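The `[]` vs `**` point above comes down to C's parameter adjustment rule: in a function parameter list, `T *p[]` and `T **p` declare the same type, so a prototype and a definition using different spellings still refer to one function — the review is only asking for the two to be written consistently. A minimal illustration (hypothetical names, not the Xen declarations):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Prototype spelled with []: the outermost array of a parameter
 * adjusts to a pointer, so this is the same type as the definition
 * below spelled with **.
 */
void set_first(const unsigned char *buf[], size_t n);

void set_first(const unsigned char **buf, size_t n)
{
    if (n)
        buf[0] = (const unsigned char *)"policy";
}
```

Either spelling compiles against either use; the mismatch flagged in the review is purely a style/consistency issue between `xsm.h` and the definition.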

> @@ -32,14 +32,21 @@
>  #ifdef CONFIG_MULTIBOOT
>  int __init xsm_multiboot_policy_init(
>      unsigned long *module_map, const multiboot_info_t *mbi,
> -    void **policy_buffer, size_t *policy_size)
> +    const unsigned char **policy_buffer, size_t *policy_size)
>  {
>      int i;
>      module_t *mod = (module_t *)__va(mbi->mods_addr);
> -    int rc = 0;
> +    int rc = -ENOENT;
>      u32 *_policy_start;
>      unsigned long _policy_len;
>  
> +#ifdef CONFIG_XSM_FLASK_POLICY
> +    /* Initially set to builtin policy, overriden if boot module is found. */

Nit: "overridden"

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:54:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341008.566166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhXK-00005R-EY; Thu, 02 Jun 2022 09:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341008.566166; Thu, 02 Jun 2022 09:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhXK-00005K-B4; Thu, 02 Jun 2022 09:54:38 +0000
Received: by outflank-mailman (input) for mailman id 341008;
 Thu, 02 Jun 2022 09:54:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=R5Si=WJ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nwhXJ-00005D-7y
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:54:37 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 027027f5-e25a-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 11:54:36 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 629321F99A;
 Thu,  2 Jun 2022 09:54:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1A114134F3;
 Thu,  2 Jun 2022 09:54:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hxLpBNuImGJpUAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 02 Jun 2022 09:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 027027f5-e25a-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654163675; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Y+zb1z8ZfmyQ511e2hgXKSEcDrjipQXOzf/DXU4KCD0=;
	b=ueLr4R7P6F8mSupk6/9LCgD2EOYMRBSix21XvcnqViBp0f+DsmdJUtdEewZyd2pyH+2kA/
	9yRO4QtuWGtXHsKPjvzzK6lVYLzl56AXvU3L0SaQjsbc/HTSuRO8QDIbX/ozD7ZL/5UU5/
	sd3+ZF/iUeB3a2JCHNagwzYEscCKyPw=
Message-ID: <f4c96c08-9f87-e0d1-9b07-b8d654f36e2d@suse.com>
Date: Thu, 2 Jun 2022 11:54:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.8.0
Subject: Re: [PATCH v2] xen/gntdev: Avoid blocking in unmap_grant_pages()
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jennifer Herbert <jennifer.herbert@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20220525184153.6059-1-demi@invisiblethingslab.com>
 <00c0b10c-a35d-6729-5b4f-424febd9d5a3@suse.com> <YpUD3PnPoGj84jMq@itl-email>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <YpUD3PnPoGj84jMq@itl-email>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------y5L0b7i63kwPIVxx9zNZmi1G"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------y5L0b7i63kwPIVxx9zNZmi1G
Content-Type: multipart/mixed; boundary="------------u5JGZ2POLcDttpEs0aSKpENK";
 protected-headers="v1"

--------------u5JGZ2POLcDttpEs0aSKpENK
Content-Type: multipart/mixed; boundary="------------etPTrSgLchsz5EqajCsT55hr"

--------------etPTrSgLchsz5EqajCsT55hr
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.05.22 19:50, Demi Marie Obenour wrote:
> On Mon, May 30, 2022 at 08:41:20AM +0200, Juergen Gross wrote:
>> On 25.05.22 20:41, Demi Marie Obenour wrote:
>>> unmap_grant_pages() currently waits for the pages to no longer be used.
>>> In https://github.com/QubesOS/qubes-issues/issues/7481, this lead to a
>>> deadlock against i915: i915 was waiting for gntdev's MMU notifier to
>>> finish, while gntdev was waiting for i915 to free its pages.  I also
>>> believe this is responsible for various deadlocks I have experienced in
>>> the past.
>>>
>>> Avoid these problems by making unmap_grant_pages async.  This requires
>>> making it return void, as any errors will not be available when the
>>> function returns.  Fortunately, the only use of the return value is a
>>> WARN_ON(), which can be replaced by a WARN_ON when the error is
>>> detected.  Additionally, a failed call will not prevent further calls
>>> from being made, but this is harmless.
>>>
>>> Because unmap_grant_pages is now async, the grant handle will be sent to
>>> INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
>>> handle.  Instead, a separate bool array is allocated for this purpose.
>>> This wastes memory, but stuffing this information in padding bytes is
>>> too fragile.  Furthermore, it is necessary to grab a reference to the
>>> map before making the asynchronous call, and release the reference when
>>> the call returns.
>>
>> I think there is even more syncing needed:
>>
>> - In the error path of gntdev_mmap() unmap_grant_pages() is being called and
>>    it is assumed, map is available afterwards again. This should be rather easy
>>    to avoid by adding a counter of active mappings to struct gntdev_grant_map
>>    (number of pages not being unmapped yet). In case this counter is not zero
>>    gntdev_mmap() should bail out early.
>
> Is it possible to just unmap the pages directly here?  I don’t think
> there can be any other users of these pages yet.  Userspace could race
> against the unmap and cause a page fault, but that should just cause
> userspace to get SIGSEGV or SIGBUS.  In any case this code can use the
> sync version; if it gets blocked that’s userspace’s problem.
>
>> - gntdev_put_map() is calling unmap_grant_pages() in case the refcount has
>>    dropped to zero. This call can set the refcount to 1 again, so there is
>>    another delay needed before freeing map. I think unmap_grant_pages() should
>>    return in case the count of mapped pages is zero (see above), thus avoiding
>>    to increment the refcount of map if nothing is to be done. This would enable
>>    gntdev_put_map() to just return after the call of unmap_grant_pages() in case
>>    the refcount has been incremented again.
>
> I will change this in v3, but I do wonder if gntdev is using the wrong
> MMU notifier callback.  It seems that the appropriate callback is
> actually release(): if I understand correctly, release() is called
> precisely when the refcount on the physical page is about to drop to 0,
> and that is what we want.

No, I don't think this is correct.

release() is being called when the refcount of the address space is about
to drop to 0. It has no page granularity.


Juergen
--------------etPTrSgLchsz5EqajCsT55hr
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------etPTrSgLchsz5EqajCsT55hr--

--------------u5JGZ2POLcDttpEs0aSKpENK--

--------------y5L0b7i63kwPIVxx9zNZmi1G
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKYiNoFAwAAAAAACgkQsN6d1ii/Ey+q
7ggAkvIDKvAzNM5SG+gjnxFkbxikvZcBfjkneXTiYxO+IKXD67Wbrw+mqQAN5geOpUeYyNKCZVJV
zdDedKrY25LpR0LT1U/LGWesrxjw2dVLIkNL0IOe5X1DHuVtGDbbgjrfp7qqZPNHvFEsQRohNbox
oij9GZMWxAIuHmfCBK+kYttp7BAqtTTFmJ/3SnT20IjZHzL8hfBJ8vaJ4sM4eteN62PdMCN/5Jg8
2dY0rdFCYvjG0VHjvg3E3b0WhGEPhW4etZQwVCcTaNBbOiOQZelNXvncrz8HTdAi8wPYhuU21v9C
sHGuUNh6KROfz49dA2tjfb3a8cm6JRASNoW4kIEDNw==
=qnL6
-----END PGP SIGNATURE-----

--------------y5L0b7i63kwPIVxx9zNZmi1G--


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 09:58:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 09:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341016.566177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhbV-0000jV-W2; Thu, 02 Jun 2022 09:58:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341016.566177; Thu, 02 Jun 2022 09:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhbV-0000jO-T4; Thu, 02 Jun 2022 09:58:57 +0000
Received: by outflank-mailman (input) for mailman id 341016;
 Thu, 02 Jun 2022 09:58:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LFK=WJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nwhbU-0000jI-V5
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 09:58:56 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9d58c1a1-e25a-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 11:58:55 +0200 (CEST)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2113.outbound.protection.outlook.com [104.47.18.113]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-15-1JbG6UkvMLiNqJE9MhjXOg-1; Thu, 02 Jun 2022 11:58:52 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0401MB2299.eurprd04.prod.outlook.com (2603:10a6:3:24::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 2 Jun
 2022 09:58:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 09:58:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d58c1a1-e25a-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654163935;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NcMv9D5X8E0jFAIqXRfytNiygm3d2EUv5oRVxbQlCxE=;
	b=lKmP4foNQ9eKVTBq0l9PdIOqMEbtPnQylCyZBkzwIpiYgDH9r0UMqGctbiTFofdMoj8Sfa
	miOEXqbEd7WKk5+Toest8uq34wiXNwDH3IEAOi8+ubePN6/3d1OsWOO94AW1Vh9kNukiPN
	EqYjy/xgJoBOmGsRJE3wEeX6PvpE/9g=
X-MC-Unique: 1JbG6UkvMLiNqJE9MhjXOg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mgB0OvlBwyQ+b+YviM6e1/AplbAoc42kcu0iT4tzrrO+rXymyc7L8Jp7XO/sy0NwPDOEjFJIcyEUQQfpQDZqcTgNG/0CI6MQDw4hWFlXmJtSD7Yf1RnGP/bJP9yuMhggIel8Lto7mR3I5E1HeseahARwLFVqvNGQMyek7C/Pk4fYOz7IDni79+kUF0ytX+/FAVmBAxEZKxon0zqdZKf8MgasBGu3PlPdKeEO5GOwh7H3siaYyutlIZBkmqXg0IHtYgJ4E0sYiairTctvxMBvSysxah9E0x46zMB6ou8BMn+QPSzEW6cIqEAtRzbyuhKe23GzWNn2qvK9PlVpjzBW9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QzbKjmGi1yxR+GbC/efFDx//kj10EeSkNuSqnIx3GWQ=;
 b=Il7t00scpAWDIAqzAr8M7a38Ob2PVxGDMAn34XUwg9XkzIylhz2VCEOKM3zvQKONLMXx49JTVfeESqOzcZ3HzR3D6uNM/3mQ2//AZKn1EYN/Fdo/B69k8u3lPh7G1cSVG4/fvV5c4jfMVKd6ciKH6R6yABzz0Lx/G+pnALkMOsgbblMwL4AhLYtwe/3ST/idbbTd7H3shSAn57UEX5MlZMX52YwIn6an2QfbAgy7LXloeVaoEh9YqVhH+No0WGDqcXL/S9Bi0phtcr2naHiCQ0fnNIWvAo98fIvgKJf1kDyx12w+XOC/+8Z0ZQUiBOfT8WxJNL5kU4bbJpjebONxEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <749f47ee-9258-c3a5-fab2-125c1bdda845@suse.com>
Date: Thu, 2 Jun 2022 11:58:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by
 superpage mappings
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <b3126189-2fec-ec14-7129-7897cde980a8@suse.com>
 <YpiETolItQ7FvcsG@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpiETolItQ7FvcsG@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: FR0P281CA0071.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3233de07-65c1-4183-ca10-08da447e7e18
X-MS-TrafficTypeDiagnostic: HE1PR0401MB2299:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0401MB2299465C7948E9F73BB44DDEB3DE9@HE1PR0401MB2299.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JhE6fjPiBVd7HQY+c7jE1US/+jQtzZ8ioDyS5/yxCQD2T5yfRWfhHx9NuYGCETDtAjjiBFgXGwtIkgxotq+YyWxMWOprlytt+mK/k9bz3U5m3DtWOWmObtRWUFZTBuUZDThct3uYMOZPWG1AqQI5DWaeELaPUyigSDrqFejwusGDojyFQ282E7VCh1Ov8HZ1BawoN5u9yVhHG8a/DuHBvkgqedrOL8yphG6JWawCGvMF3tTEKqE8KRhN1K2Oq31797yIOFn2vVOgJRUin/PyuZyG5t+qVqPcT35LJYKcfJuXwV1Sgc66QRb9Yr+MyY6xJHfsN2EGmhpMfIJw67sMtPZJ2v6/OyvqUB4KmIp9oHCmXRCtyfTnt7bq1ie3FuUqHBlw/mk6l0N4zviViRX2j9IovwUoyX5bIcMLJsogS09ZhSRJEjLwF0Q2imDiFSZonfEAKdVe0jFSAtLpvX4QMN0yIxYTwasOffwtL+yI1FvHaPZinWdAyzXjcAknjG7UYDpOmtZWJsBWHiMZy3QZewv4L2ic++0vtvmOxe+YpKmtk7Ao8n8A/1iR7W52fX/G3ewj8nx0rUu6iG1oIQhtLmjEoJmaWrLjVwlloRdyRT/4eT8twoH2rIEBv8xT+Y/FnSC0y0Xj2tGM4JTIgxYjz7hT4x38lMv8nxKbUh0uj+/Soci+IGG9nPA08gZqx+5JGcUAt1HsgvdS8q/zMf90OmtqXGKUjs1IzgXCgFiIniYMyUC92uQyJvxc5Meq4RXg
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(31686004)(4326008)(66946007)(8676002)(66556008)(66476007)(2616005)(54906003)(8936002)(5660300002)(6512007)(6486002)(36756003)(508600001)(2906002)(53546011)(26005)(6506007)(6916009)(38100700002)(83380400001)(86362001)(31696002)(186003)(316002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?VoRXEg8/a2xVvvMJaAfGdcO/dlHsnDAopFyOX+fuM692YdXd8fTTuLWHTK4l?=
 =?us-ascii?Q?2Ez1NLscsy6ugPmgsSyVvrp+Kp8vAn30daAyZERo6BQiVH5de/4Qd5/eXqIO?=
 =?us-ascii?Q?XnZIJsQVsECAI/S87uSJBf+n1czCZ3q/28sO1hPeSTpbDSxAAbJ6FO8slWKY?=
 =?us-ascii?Q?T6u9Qyghs+F8145m05SVmbZs+6sLhZDhNCWo5VUJQyJgP2Odbr1ph4ajRRUb?=
 =?us-ascii?Q?4WkDtQR1gh8cm5hO6/KAImdS0qI+b9A7leMMmaIkYrf+qSJd5Ybr457BClyl?=
 =?us-ascii?Q?lUoZkjTcxzWOyioNhqTX2UO+T9R/28bAboX8ktR/2InfU4T7Ng2i+8gO3yhp?=
 =?us-ascii?Q?SmSe2v+Ah2hgt6DMOIksfSH0LxXgnlnoTsVbQrRNqnhO4ZXYCb46bj2LjXex?=
 =?us-ascii?Q?OfUXwOqXXQ+zxVA98WsdYfC9PRBmKZW80LWpZ5tU+yYZIGMjSguFQUF4kDj7?=
 =?us-ascii?Q?/zHosGXhaPUsnf6RhJLJAos8dTjt013BK4xlbm31+fX6lOFouUU3NxiFgt26?=
 =?us-ascii?Q?XpMd35HUd9DUbgcKJYHpHuqReG4HlAtIvQRjCPx4+oQG28fTcf6zWv1qI+xA?=
 =?us-ascii?Q?DduyVVdCHDnty774vU56v2fv1GDbSPxx3TNukHeaFUTb6EsAPBW7oXvqD9HI?=
 =?us-ascii?Q?jrx+2IjSxX+edyNg4Gc37EzVRmgTsSBWMgZJKawRPgwOKT/kRr0rTsz9sS5N?=
 =?us-ascii?Q?WrFak/cuO5PTErPg9Y0Nqw0zLekruX+26ZoSHftL4jq5qntA/RksMYsZkzNt?=
 =?us-ascii?Q?oGkkM5auJfIspYpJd1af0RcF5PJN0iDFRdRXnUTxtU7K6SS3u0WwEm02lkKG?=
 =?us-ascii?Q?A5YiosaHod14VP/VyqzqFrqpEbDiyMCAHFOJ9w0Oc0e7bzSRbRjC4mJxbKqU?=
 =?us-ascii?Q?Mz1tP40xH4iT8zQmUXNgejika/5a49wF6xSxIV/Peu4h5/3xz/3oTYSJmaHV?=
 =?us-ascii?Q?UZV6OBSPC8IlrygXrGSrsugi4cA+um2IDbiMNgrmzDutrsrYe2829aDyAaAp?=
 =?us-ascii?Q?AMrAB5sZ9ueEqgbZBjUc9P7+NtEoGwh5LM3yO8N/+mI2TdfjB0eJkte6Rti9?=
 =?us-ascii?Q?/9DTfBkLhJFV3WM+HLEXVWVZVmdi7JdDYriUmfE5Ehu9/cgN0CqELOK+a4wq?=
 =?us-ascii?Q?8N8ZNjD1ZBIR8RNNxKKrUjmEHO58Ld3ZK1p8W5d7PYlcm94sIKdIomBjnmA4?=
 =?us-ascii?Q?czbTuGVyAT++0T4r1pY+slI6huMfWrIURp86mNKTv7T7Y88SSQSgChwAGrLY?=
 =?us-ascii?Q?MSTkMVKOqEqZcWHSuq8lAF1XcuUESKr5Ddn2evkMHJWa8mzYjZYJBbernY7u?=
 =?us-ascii?Q?TZ1RFp8Ly1ufIjqQZrXb+nlYhgWOHG5gDENEVO1TA81kKeTElH6KIUHX3um7?=
 =?us-ascii?Q?6KWMH08qHF3vf5gEZqpYoHi4ErQxhTV+YIbQHpF5f6uU3ddRRMed4fUbNYFe?=
 =?us-ascii?Q?8qrNQiXHbicIvCiKuG2MxwN/VaHoIV22iu2Tcn8Id9H80wpNyuBS0cBUoTrc?=
 =?us-ascii?Q?wWfZFHjqxb/0hdD7sySXl/Ic+a/StCls7bNIHNCt3p/1YThafCCVoyyCMJHH?=
 =?us-ascii?Q?nip5smQ16Tx4/MrnURJA0fwpqUZRbVbiMwXejCv9V2FFOIYA6lLM5a2CVY32?=
 =?us-ascii?Q?jX0dMxm0q7hytFqrNUCIVOoXwhB79LbRwghD4tKSlnXzsazW5ZBLhC6YD3ge?=
 =?us-ascii?Q?uu/YpuUIMf+SQMUTwobNXneeRq/RHBA7P51Wwr87OkiLD8Z8Idabt7BzXAlX?=
 =?us-ascii?Q?OzTl7zV71g=3D=3D?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3233de07-65c1-4183-ca10-08da447e7e18
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 09:58:49.9405
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NO749DJkAtEgfpnz9J/uHCzKVCsVJH+87MeGNix71O+dLQ55nmLqohgy1L5Cpc0vyIf7l+cr0yzgBHWaGuA1jQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0401MB2299

On 02.06.2022 11:35, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
>> When a page table ends up with all contiguous entries (including all
>> identical attributes), it can be replaced by a superpage entry at the
>> next higher level. The page table itself can then be scheduled for
>> freeing.
>>
>> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
>> for whenever we (and obviously hardware) start supporting 512G mappings.
>>
>> Note that cache sync-ing is likely more strict than necessary. This is
>> both to be on the safe side as well as to maintain the pattern of all
>> updates of (potentially) live tables being accompanied by a flush (if so
>> needed).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Unlike the freeing of all-empty page tables, this causes quite a bit of
>> back and forth for PV domains, due to their mapping/unmapping of pages
>> when they get converted to/from being page tables. It may therefore be
>> worth considering to delay re-coalescing a little, to avoid doing so
>> when the superpage would otherwise get split again pretty soon. But I
>> think this would better be the subject of a separate change anyway.
>>
>> Of course this could also be helped by more "aware" kernel side
>> behavior: They could avoid immediately mapping freed page tables
>> writable again, in anticipation of re-using that same page for another
>> page table elsewhere.
>
> Could we provide an option to select whether to use super-pages for
> IOMMU, so that PV domains could keep the previous behavior?

Hmm, I did (a while ago) consider adding a command line option, largely
to have something to fall back on in case of problems, but here you're
asking about a per-domain setting. Possible, sure, but I have to admit
I'm always somewhat hesitant about changes that require touching the
tool stack in non-trivial ways (which would be needed here, besides a
separate Dom0 control).

It's also not clear what granularity we'd want to allow control at:
Just yes/no, or also an upper bound on the page sizes permitted, or
even a map of (dis)allowed page sizes?
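
For illustration only, all three granularities could be expressed by one
mechanism: a per-domain map of permitted page-table levels, of which plain
yes/no and an upper bound on the page size are special cases. All names and
constants below are invented for this sketch; they are not Xen's:

```c
#include <stdbool.h>

/* Purely hypothetical per-domain control: a map of permitted IOMMU
 * page sizes (bit 0 = 4k leaves, bit 1 = 2M superpages, bit 2 = 1G).
 * "Yes/no" is the map 0x1 vs. all bits set; an upper bound on the
 * page size is a map with all bits up to that level set. */
#define IOMMU_PGSIZE_4K (1U << 0)
#define IOMMU_PGSIZE_2M (1U << 1)
#define IOMMU_PGSIZE_1G (1U << 2)

/* Would a mapping at the given page-table level be permitted?
 * (level 0 = 4k leaf, level 1 = 2M superpage, level 2 = 1G.) */
static bool iommu_superpage_allowed(unsigned int pgsize_map,
                                    unsigned int level)
{
    return pgsize_map & (1U << level);
}
```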

Finally, what would the behavior be for HVM guests using shared page
tables? Should the IOMMU option there also suppress CPU-side large
pages? Or should the IOMMU option, when not fulfillable with shared
page tables, lead to use of separate page tables (and an error if
shared page tables were explicitly requested)?

Jan
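
As a rough illustration of the re-coalescing check the patch description
refers to (not Xen's actual implementation; the constants and helper names
are made up), a table qualifies for replacement by a superpage when its base
is suitably aligned and every PTE maps the next 4k page with identical
attributes:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PT_ENTRIES 512                   /* PTEs per VT-d page table    */
#define PAGE_SHIFT 12
#define ADDR_MASK  0x000ffffffffff000ULL /* address bits of a PTE       */

/* Hypothetical predicate: true if the whole table maps one contiguous,
 * identically-attributed range, so it could be replaced by a single
 * superpage entry at the next higher level. */
static bool pt_is_all_contiguous(const uint64_t pt[PT_ENTRIES])
{
    uint64_t base  = pt[0] & ADDR_MASK;
    uint64_t attrs = pt[0] & ~ADDR_MASK;
    size_t i;

    /* The base must itself be aligned to the would-be superpage size. */
    if (base & (((uint64_t)PT_ENTRIES << PAGE_SHIFT) - 1))
        return false;

    for (i = 0; i < PT_ENTRIES; i++)
    {
        if ((pt[i] & ADDR_MASK) != base + ((uint64_t)i << PAGE_SHIFT))
            return false;                /* addresses not contiguous    */
        if ((pt[i] & ~ADDR_MASK) != attrs)
            return false;                /* attributes differ           */
    }

    return true;
}

/* Helper for experimenting: fill a table with contiguous entries. */
static void pt_fill_contiguous(uint64_t pt[PT_ENTRIES], uint64_t base,
                               uint64_t attrs)
{
    size_t i;

    for (i = 0; i < PT_ENTRIES; i++)
        pt[i] = (base + ((uint64_t)i << PAGE_SHIFT)) | attrs;
}
```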



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:07:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 10:07:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341024.566187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhjb-0002SH-RB; Thu, 02 Jun 2022 10:07:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341024.566187; Thu, 02 Jun 2022 10:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwhjb-0002SA-OJ; Thu, 02 Jun 2022 10:07:19 +0000
Received: by outflank-mailman (input) for mailman id 341024;
 Thu, 02 Jun 2022 10:07:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ifcq=WJ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nwhjZ-0002S4-WC
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 10:07:18 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20609.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6e1cd2a-e25b-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 12:07:15 +0200 (CEST)
Received: from DB6PR0301CA0074.eurprd03.prod.outlook.com (2603:10a6:6:30::21)
 by VI1PR08MB5296.eurprd08.prod.outlook.com (2603:10a6:803:eb::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.16; Thu, 2 Jun
 2022 10:07:11 +0000
Received: from DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::6d) by DB6PR0301CA0074.outlook.office365.com
 (2603:10a6:6:30::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.19 via Frontend
 Transport; Thu, 2 Jun 2022 10:07:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT020.mail.protection.outlook.com (100.127.143.27) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Thu, 2 Jun 2022 10:07:10 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Thu, 02 Jun 2022 10:07:09 +0000
Received: from 835691efd495.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DEAC8EB5-9375-45A2-9220-C09808849F4E.1; 
 Thu, 02 Jun 2022 10:07:03 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 835691efd495.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 02 Jun 2022 10:07:03 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM6PR08MB3110.eurprd08.prod.outlook.com (2603:10a6:209:45::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5293.17; Thu, 2 Jun
 2022 10:07:00 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::ccd5:23df:33bc:1b%9]) with mapi id 15.20.5314.012; Thu, 2 Jun 2022
 10:07:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6e1cd2a-e25b-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=CBnRDK3+929jG2BEtKkF6ERVKUyv1B7CnsMtIYELe1PrJXPN04Ht474fyGkMBonJhIMqJZW1JAKSWuJrMGjPWYIlNg/bSUMvgaSuPHIj9x3AqFS9N6xRzGN4b3f7HNAjsDh+taRQx2aSVYhljq5UH5oUwdhRphkyinVlTCKIBU2HRjVK1UM/h9rc8ssmNlsIKAJdrn8IH85MCQEPpfV3NWc2W3x5qu/UcDoDxCHbwSrccioB84bjKMabpyqbkVvAn6MhtKkB5avg7EkwkrtaFFsEoO1kdICqjZR3okGXYiMhOABvmT4eVlRZ+wWDrF86e0xjI4YZq8BJMVGiuHRofg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AGLT+CTPi9nVsaGk6By6eC6CzYWHyDyS0UPXgKX49HY=;
 b=Ry9Lo8PgPvPbLdOA0+bSRI4bxBWN/e1thiRR4SkcxwchmuRErm2U7W+qyYUBBesRhFX/hDFWpJ1Rk/XyG9u8GK0nlJkhxioOI/Pk7csNSezld1ToL+mmEFxugftYonx51fVeHvDtFN0TJbbyqrvbpBgKVfCGnFxmdMuehZ/ZIuEX1Sgxqm4LAiO1yuUy9ssXXILg3hHK5qjHjK1Eb0MTfR0YekjEa9EX6bMvMeHZWalenpOdoLh71rQ7HwgoxXC6a/JCv5VBcx1IBlOC3gsMH56OJPjZDR0Nd17/yRZ50ZwMrBLFxKZL4psljEmMgsaeHoQcQd7bqUxJfz7+T1gkKA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AGLT+CTPi9nVsaGk6By6eC6CzYWHyDyS0UPXgKX49HY=;
 b=ov54RxpNDBvAILM2GcMXIaPayRf/4FF5cwIhuR+K+fBGEPmFA4X/2kOHFn5BkcTTNAFfwx+qXtBUusmmlZyHBk5XXy+86FnH7dzP/zXqJKJISaCjtUKsesg80XqkBcT4VDZD53NFfkOeHRVyoVjt9/ExVS+OsHkuj7sC02r0Fl4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z7u7eFgUcwHF8iZzec8YbaqjPGVytLP6H5kdz+8uq5LJUv0nJpKq9uF8+KXtv80ldDsalqCFwZqKnAFdmngLXDpCBptqF8mcui9Hb7ZRLsvoagE9V0vNjHgcevMTLck9IMhStZpAkR3vvWwPm8qgMnh79Voj/PtJU4obuFKHv6Ga6BoZc+WQ98y8QjejQTlHZInp6NM9Zy1DFx2GvBpMi58QbaU9lac00hGOGHb3UacDmhd7jsBYdTZ64lhtE5Y3J+QHBf8eXLiewFOK1otXeC1YpMiqfBZ+Ebib1bFjza/bybpE0PaMLPTHkKitDzM7py0diW5xJU6uPqo6sFB08Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AGLT+CTPi9nVsaGk6By6eC6CzYWHyDyS0UPXgKX49HY=;
 b=dcLEaWe376H17e15eGGLx/paNCnri37DozboQ8zx/ezg0l7vIKLRfVF7nTnptfh578oWLyWmKSY+yy1YeWRCVCEEJAJRgc/Gi4wgTOhYgVgue61F+joSbTDwobnoi5w5k5iP4MoiN8SJ6qi7dT/QUHFHI2OLZgZLSN+pTrljnmouEYfg52JkZ3RerwdZ4JvUnSpBm4IbUvOiTMi00GrAtJzYfF6d80LPOnpCENWnNjgpVCDk84iKnInDMwpSrkBp62PuHFRBQ8Qv806xHqDWFC38UUBDdyIb7yTTkJOdbTANX0cYVqWSAs3XDqmA8iRLVC+nP+/bshPCywjBAzKxQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AGLT+CTPi9nVsaGk6By6eC6CzYWHyDyS0UPXgKX49HY=;
 b=ov54RxpNDBvAILM2GcMXIaPayRf/4FF5cwIhuR+K+fBGEPmFA4X/2kOHFn5BkcTTNAFfwx+qXtBUusmmlZyHBk5XXy+86FnH7dzP/zXqJKJISaCjtUKsesg80XqkBcT4VDZD53NFfkOeHRVyoVjt9/ExVS+OsHkuj7sC02r0Fl4=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v5 6/9] xen/arm: introduce CDF_staticmem
Thread-Topic: [PATCH v5 6/9] xen/arm: introduce CDF_staticmem
Thread-Index: AQHYdJxz5gwtTkiV8EuW1ruWinPeo604qxSAgAM6niA=
Date: Thu, 2 Jun 2022 10:07:00 +0000
Message-ID:
 <DU2PR08MB7325C5D659A735D1BC3A1214F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220531031241.90374-1-Penny.Zheng@arm.com>
 <20220531031241.90374-7-Penny.Zheng@arm.com>
 <cd7455ab-c26f-8a91-fbf0-acf988d8371f@suse.com>
In-Reply-To: <cd7455ab-c26f-8a91-fbf0-acf988d8371f@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 444F69286ACCE746AB631C1C1B26DBD7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 1d0da567-5b4f-4aa9-ea08-08da447fa84e
x-ms-traffictypediagnostic:
	AM6PR08MB3110:EE_|DBAEUR03FT020:EE_|VI1PR08MB5296:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB52968853C31AA3FD6A63B4B1F7DE9@VI1PR08MB5296.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 f+PzCb9VWUpBeYurg29pGFCJELTuu6DNKSOLKM7DN3UMVL02MkpDZhX147PyYu/MISwbrLLi0zu5MYdX0iwUf8yGogBkDVbZ51DrayKzDv5niKrueitZOA8I7b6AdUSilW0sA9awrc9pVftFDJ5ZUZChwmfyh28PgBpUBEMN9xAmisJLACCes5r6ONGPar14mA91nnpOr1X3tFDUCQXkhiWvOppZ3ugXBTJPZcIvYd5md3kJQdLXcq7yrU8wcjkGTbyT5dYgc3i7cf7wkHIhWnpEyXkbeSlD+9lg43FRIarYbpTTDIzUcDCQPhw4FpVpRqNcbhoWEsQDthB6udYTQqzYjctq0Zi3GjbJVnj/WDkgTuItG1va/6qt3L6xeCdyl8G9AleFmaBU5OgmVbS6dam5XRJpYWov4lYwCVV9EYuq3frhpNDSz5KXuRNCjSSf8CwX80T/OgiZEBnSYfQH7jWbLciICg4odWCARWrsKsaRLSsMU1Xv2IeGZcq73U7AS5c2PvDoOJAC80lHq7Y6UUJOyXbYkI4zeln8J5vzjbgxZk4BOou+1bgpIU+hQYpwCG3PgzMhbEFxGeM8tpze2dmiSNxVOGDAzjqtUoqOLriCBi4VdugMMVeMSqvE3hstQhkY1/iDXQKcHa17cIotuuzBcsNA5ahWPdkPZdKq/TswhnqXK5ge8JDcSeiknsn2YScgsHaII42HOTyu4Mtp8w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(5660300002)(53546011)(26005)(8676002)(52536014)(4326008)(54906003)(8936002)(86362001)(9686003)(6916009)(2906002)(33656002)(38100700002)(83380400001)(7696005)(186003)(66946007)(122000001)(508600001)(76116006)(66446008)(66476007)(316002)(64756008)(66556008)(38070700005)(6506007)(55016003)(71200400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3110
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bc843a2a-c7c9-4007-ab25-08da447fa26f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uSHW+a0u/2Lp7ezO+KgOdxVPkIkApPmKqwvDt7CEQilB1bdxMRswKvy7k4vq4xqM9Xru874iqR/hAzjT6UIAAUcpXyu9Nspyc6NTk+Jo/XIEaEpWPIayq0C00xkM3E2F6KeFSrv61640ODSmVKUkkHDfpsn0/6mEXHtbTFaLIGQ9j5ls64+PXfQUViuZ08w57oP7/ZNfU1E9cnZYgLxQlxd+D0+43uVHwlHVZWHMKI2lfhUIKDkDKs3YL9DoFt35xvTHa5QVY6VtT2WwJWwKaCBJXlZtbqoJNSM9Be8HS+35Yxm4BIcJ3eOlHYx3Xh1Dj1H/pmMkR8Y3YwXs+ANWikrXp4BJ4nK8JFOu+QMpLA7mVZU38mlahkoCoIWR7xFFbkDOVLaE8piC7HOpq7UQiadr6rnZMDPkkzlBZHFndGuB/8mURtql8gaKmkSqdJfwqMcxBoyQjk1jhqm1sPFCvgN1ib9laZzdEM8Wb/kCAaL2VnR7Fb+dPMiB+JVAStMSBTvln63iMog2g3pDANg060f3bQ/gYwOL8Layo0oruOzl3RUXLCWAAhL0TAGScITqNEFEmKMJzSMJ0jCrITjk+Q6NORYKob5KGevWO1JvLAxHqf/6oHWgUfAtVOt47LCJJrOa7SRrXFdI0oEk0JEzJKVDsYsAvCI+tQlZqBGe+FpJxeVSgKuOIZ0BpMQ2l6x8
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(2906002)(81166007)(6862004)(316002)(36860700001)(8676002)(4326008)(70586007)(83380400001)(33656002)(70206006)(5660300002)(6506007)(356005)(186003)(26005)(9686003)(55016003)(7696005)(53546011)(86362001)(508600001)(8936002)(336012)(82310400005)(40460700003)(52536014)(54906003)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jun 2022 10:07:10.1625
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d0da567-5b4f-4aa9-ea08-08da447fa84e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5296

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 31, 2022 4:41 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v5 6/9] xen/arm: introduce CDF_staticmem
>
> On 31.05.2022 05:12, Penny Zheng wrote:
> > --- a/xen/arch/arm/include/asm/domain.h
> > +++ b/xen/arch/arm/include/asm/domain.h
> > @@ -31,6 +31,10 @@ enum domain_type {
> >
> >  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> >
> > +#ifdef CONFIG_STATIC_MEMORY
> > +#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> > +#endif
>
> Why is this in the Arm header, rather than ...
>
> > --- a/xen/include/xen/domain.h
> > +++ b/xen/include/xen/domain.h
> > @@ -34,6 +34,12 @@ void arch_get_domain_info(const struct domain *d,
> > #ifdef CONFIG_ARM
> >  /* Should domain memory be directly mapped? */
> >  #define CDF_directmap            (1U << 1)
> > +/* Is domain memory on static allocation? */
> > +#define CDF_staticmem            (1U << 2)
> > +#endif
> > +
> > +#ifndef is_domain_using_staticmem
> > +#define is_domain_using_staticmem(d) ((void)(d), false)
> >  #endif
>
> ... here (with what you have here now simply becoming the #else part)?
> Once living here, I expect it also can be an inline function rather than a macro,
> with the #ifdef merely inside its body.
>

In order to avoid bringing the chicken-and-egg problem into
xen/include/xen/domain.h, I may need to move the static inline function to
xen/include/xen/sched.h (which has already included the domain.h header).

> Jan
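
The shape suggested in the exchange above — one common definition with the
Kconfig #ifdef inside the function body rather than around the whole macro —
can be sketched roughly as follows. The types here are simplified stand-ins
for Xen's, and CONFIG_STATIC_MEMORY is assumed enabled for the sketch:

```c
#include <stdbool.h>

/* Simplified stand-ins for Xen's definitions, just for this sketch. */
#define CONFIG_STATIC_MEMORY 1      /* assume the option is enabled here */
#define CDF_staticmem (1U << 2)

struct domain {
    unsigned int cdf;
};

/* One common inline in the common header; only its body varies with the
 * configuration, so callers need no #ifdef of their own. */
static inline bool is_domain_using_staticmem(const struct domain *d)
{
#ifdef CONFIG_STATIC_MEMORY
    return d->cdf & CDF_staticmem;
#else
    (void)d;
    return false;
#endif
}
```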


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:25:58 2022
Message-ID: <f8cebd1c-1679-7b67-c43b-8c0cbe8ffa52@suse.com>
Date: Thu, 2 Jun 2022 12:25:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86/mwait-idle: (remaining) SPR support
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Still pretty fresh from Linux 5.18 (and with adjustments to address issues
noticed while porting).

1: add 'preferred_cstates' module argument
2: add core C6 optimization for SPR

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:26:58 2022
Message-ID: <46c6bf61-5669-0de2-163d-64b9a3ce07a7@suse.com>
Date: Thu, 2 Jun 2022 12:26:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 1/2] x86/mwait-idle: add 'preferred-cstates' command line
 option
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f8cebd1c-1679-7b67-c43b-8c0cbe8ffa52@suse.com>
In-Reply-To: <f8cebd1c-1679-7b67-c43b-8c0cbe8ffa52@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

On Sapphire Rapids Xeon (SPR) the C1 and C1E states are basically mutually
exclusive - only one of them can be enabled. By default, 'intel_idle' driver
enables C1 and disables C1E. However, some users prefer to use C1E instead of
C1, because it saves more energy.

This patch adds a new module parameter ('preferred_cstates') for enabling C1E
and disabling C1. Here is the idea behind it.

1. This option has effect only for "mutually exclusive" C-states like C1 and
   C1E on SPR.
2. It does not have any effect on independent C-states, which do not require
   other C-states to be disabled (most states on most platforms as of today).
3. For mutually exclusive C-states, the 'intel_idle' driver always has a
   reasonable default, such as enabling C1 on SPR by default. On other
   platforms, the default may be different.
4. Users can override the default using the 'preferred_cstates' parameter.
5. The parameter accepts the preferred C-states bit-mask, similarly to the
   existing 'states_off' parameter.
6. This parameter is not limited to C1/C1E, and leaves room for supporting
   other mutually exclusive C-states, if they come in the future.

Today 'intel_idle' can only be compiled-in, which means that on SPR, in order
to disable C1 and enable C1E, users should boot with the following kernel
argument: intel_idle.preferred_cstates=4

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git da0e58c038e6

Enable C1E (if requested) not only on the BSP's socket / package. Alter
command line option to fit our model, and extend it to also accept
string form arguments.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Also accept string form arguments for command line option. Restore
    C1E-control related enum from Linux, despite our somewhat different
    use (and bigger code churn).

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1885,6 +1885,12 @@ paging controls access to usermode addre
 ### ple_window (Intel)
 > `= <integer>`
 
+### preferred-cstates (x86)
+> `= ( <integer> | List of ( C1 | C1E | C2 | ... ) )`
+
+This is a mask of C-states which are to be used preferably.  This option is
+applicable only on hardware where certain C-states are exclusive of one another.
+
 ### psr (Intel)
 > `= List of ( cmt:<boolean> | rmid_max:<integer> | cat:<boolean> | cos_max:<integer> | cdp:<boolean> )`
 
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -82,10 +82,29 @@ boolean_param("mwait-idle", opt_mwait_id
 
 static unsigned int mwait_substates;
 
+/*
+ * Some platforms come with mutually exclusive C-states, so that if one is
+ * enabled, the other C-states must not be used. Example: C1 and C1E on
+ * Sapphire Rapids platform. This parameter allows for selecting the
+ * preferred C-states among the groups of mutually exclusive C-states - the
+ * selected C-states will be registered, the other C-states from the mutually
+ * exclusive group won't be registered. If the platform has no mutually
+ * exclusive C-states, this parameter has no effect.
+ */
+static unsigned int __ro_after_init preferred_states_mask;
+static char __initdata preferred_states[64];
+string_param("preferred-cstates", preferred_states);
+
 #define LAPIC_TIMER_ALWAYS_RELIABLE 0xFFFFFFFF
 /* Reliable LAPIC Timer States, bit 1 for C1 etc. Default to only C1. */
 static unsigned int lapic_timer_reliable_states = (1 << 1);
 
+enum c1e_promotion {
+	C1E_PROMOTION_PRESERVE,
+	C1E_PROMOTION_ENABLE,
+	C1E_PROMOTION_DISABLE
+};
+
 struct idle_cpu {
 	const struct cpuidle_state *state_table;
 
@@ -95,7 +114,7 @@ struct idle_cpu {
 	 */
 	unsigned long auto_demotion_disable_flags;
 	bool byt_auto_demotion_disable_flag;
-	bool disable_promotion_to_c1e;
+	enum c1e_promotion c1e_promotion;
 };
 
 static const struct idle_cpu *icpu;
@@ -924,6 +943,15 @@ static void cf_check byt_auto_demotion_d
 	wrmsrl(MSR_MC6_DEMOTION_POLICY_CONFIG, 0);
 }
 
+static void cf_check c1e_promotion_enable(void *dummy)
+{
+	uint64_t msr_bits;
+
+	rdmsrl(MSR_IA32_POWER_CTL, msr_bits);
+	msr_bits |= 0x2;
+	wrmsrl(MSR_IA32_POWER_CTL, msr_bits);
+}
+
 static void cf_check c1e_promotion_disable(void *dummy)
 {
 	u64 msr_bits;
@@ -936,7 +964,7 @@ static void cf_check c1e_promotion_disab
 static const struct idle_cpu idle_cpu_nehalem = {
 	.state_table = nehalem_cstates,
 	.auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_atom = {
@@ -954,64 +982,64 @@ static const struct idle_cpu idle_cpu_li
 
 static const struct idle_cpu idle_cpu_snb = {
 	.state_table = snb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_byt = {
 	.state_table = byt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_cht = {
 	.state_table = cht_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_ivb = {
 	.state_table = ivb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_ivt = {
 	.state_table = ivt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_hsw = {
 	.state_table = hsw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_bdw = {
 	.state_table = bdw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skl = {
 	.state_table = skl_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skx = {
 	.state_table = skx_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_icx = {
        .state_table = icx_cstates,
-       .disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static struct idle_cpu __read_mostly idle_cpu_spr = {
 	.state_table = spr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_avn = {
 	.state_table = avn_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_knl = {
@@ -1020,17 +1048,17 @@ static const struct idle_cpu idle_cpu_kn
 
 static const struct idle_cpu idle_cpu_bxt = {
 	.state_table = bxt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_dnv = {
 	.state_table = dnv_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_snr = {
 	.state_table = snr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 #define ICPU(model, cpu) \
@@ -1241,6 +1269,25 @@ static void __init skx_idle_state_table_
 }
 
 /*
+ * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
+ */
+static void __init spr_idle_state_table_update(void)
+{
+	/* Check if user prefers C1E over C1. */
+	if (preferred_states_mask & BIT(2, U)) {
+		if (preferred_states_mask & BIT(1, U))
+			/* Both can't be enabled, stick to the defaults. */
+			return;
+
+		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
+		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
+
+		/* Request enabling C1E using the "C1E promotion" bit. */
+		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
+	}
+}
+
+/*
  * mwait_idle_state_table_update()
  *
  * Update the default state_table for this CPU-id
@@ -1261,6 +1308,9 @@ static void __init mwait_idle_state_tabl
 	case INTEL_FAM6_SKYLAKE_X:
 		skx_idle_state_table_update();
 		break;
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+		spr_idle_state_table_update();
+		break;
 	}
 }
 
@@ -1268,6 +1318,7 @@ static int __init mwait_idle_probe(void)
 {
 	unsigned int eax, ebx, ecx;
 	const struct x86_cpu_id *id = x86_match_cpu(intel_idle_ids);
+	const char *str;
 
 	if (!id) {
 		pr_debug(PREFIX "does not run on family %d model %d\n",
@@ -1309,6 +1360,39 @@ static int __init mwait_idle_probe(void)
 	pr_debug(PREFIX "lapic_timer_reliable_states %#x\n",
 		 lapic_timer_reliable_states);
 
+	str = preferred_states;
+	if (isdigit(str[0]))
+		preferred_states_mask = simple_strtoul(str, &str, 0);
+	else if (str[0])
+	{
+		const char *ss;
+
+		do {
+			const struct cpuidle_state *state = icpu->state_table;
+			unsigned int bit = 1;
+
+			ss = strchr(str, ',');
+			if (!ss)
+				ss = strchr(str, '\0');
+
+			for (; state->name[0]; ++state) {
+				bit <<= 1;
+				if (!cmdline_strcmp(str, state->name)) {
+					preferred_states_mask |= bit;
+					break;
+				}
+			}
+			if (!state->name[0])
+				break;
+
+			str = ss + 1;
+		} while (*ss);
+
+		str -= str == ss + 1;
+	}
+	if (str[0])
+		printk("unrecognized \"preferred-cstates=%s\"\n", str);
+
 	mwait_idle_state_table_update();
 
 	return 0;
@@ -1400,8 +1484,18 @@ static int cf_check mwait_idle_cpu_init(
 	if (icpu->byt_auto_demotion_disable_flag)
 		on_selected_cpus(cpumask_of(cpu), byt_auto_demotion_disable, NULL, 1);
 
-	if (icpu->disable_promotion_to_c1e)
+	switch (icpu->c1e_promotion) {
+	case C1E_PROMOTION_DISABLE:
 		on_selected_cpus(cpumask_of(cpu), c1e_promotion_disable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_ENABLE:
+		on_selected_cpus(cpumask_of(cpu), c1e_promotion_enable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_PRESERVE:
+		break;
+	}
 
 	return NOTIFY_DONE;
 }



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:27:15 2022
Message-ID: <525b2d31-4218-b4d4-3000-a3a045ae7f59@suse.com>
Date: Thu, 2 Jun 2022 12:27:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 2/2] x86/mwait-idle: add core C6 optimization for SPR
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f8cebd1c-1679-7b67-c43b-8c0cbe8ffa52@suse.com>
In-Reply-To: <f8cebd1c-1679-7b67-c43b-8c0cbe8ffa52@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

Add a Sapphire Rapids Xeon C6 optimization, similar to what we have for
Sky Lake Xeon: if package C6 is disabled, adjust C6 exit latency and
target residency to match core C6 values, instead of using the default
package C6 values.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 3a9cf77b60dc

Make sure a contradictory "preferred-cstates" doesn't bypass the added
logic.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Sync with the Linux side fix to the noticed issue. Re-base over
    change to earlier patch.

--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -1273,18 +1273,31 @@ static void __init skx_idle_state_table_
  */
 static void __init spr_idle_state_table_update(void)
 {
-	/* Check if user prefers C1E over C1. */
-	if (preferred_states_mask & BIT(2, U)) {
-		if (preferred_states_mask & BIT(1, U))
-			/* Both can't be enabled, stick to the defaults. */
-			return;
+	uint64_t msr;
 
+	/* Check if user prefers C1E over C1. */
+	if (preferred_states_mask & BIT(2, U) &&
+	    !(preferred_states_mask & BIT(1, U))) {
+		/* Disable C1 and enable C1E. */
 		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
 		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
 
 		/* Request enabling C1E using the "C1E promotion" bit. */
 		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
 	}
+
+	/*
+	 * By default, the C6 state assumes the worst-case scenario of package
+	 * C6. However, if PC6 is disabled, we update the numbers to match
+	 * core C6.
+	 */
+	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);
+
+	/* Limit value 2 and above allow for PC6. */
+	if ((msr & 0x7) < 2) {
+		spr_cstates[2].exit_latency = 190;
+		spr_cstates[2].target_residency = 600;
+	}
 }
 
 /*



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:28:57 2022
Message-ID: <a5301950-8cff-108c-3364-baad978c5707@suse.com>
Date: Thu, 2 Jun 2022 12:28:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul/test: encourage compiler to use more embedded
 broadcast
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

For one it was an oversight to leave dup_{hi,lo}() undefined for 512-bit
vector size. And then in FMA testing we can also arrange for the
compiler to (hopefully) recognize broadcasting potential.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/simd.c
+++ b/tools/tests/x86_emulator/simd.c
@@ -912,6 +912,13 @@ static inline vec_t movlhps(vec_t x, vec
 })
 #  endif
 # endif
+#elif VEC_SIZE == 64
+# if FLOAT_SIZE == 4
+#  define dup_hi(x) B(movshdup, _mask, x, undef(), ~0)
+#  define dup_lo(x) B(movsldup, _mask, x, undef(), ~0)
+# elif FLOAT_SIZE == 8
+#  define dup_lo(x) B(movddup, _mask, x, undef(), ~0)
+# endif
 #endif
 #if VEC_SIZE == 16 && defined(__SSSE3__) && !defined(__AVX512VL__)
 # if INT_SIZE == 1
--- a/tools/tests/x86_emulator/simd-fma.c
+++ b/tools/tests/x86_emulator/simd-fma.c
@@ -63,6 +63,9 @@ int fma_test(void)
 {
     unsigned int i;
     vec_t x, y, z, src, inv, one;
+#ifdef __AVX512F__
+    typeof(one[0]) one_ = 1;
+#endif
 
     for ( i = 0; i < ELEM_COUNT; ++i )
     {
@@ -71,6 +74,10 @@ int fma_test(void)
         one[i] = 1;
     }
 
+#ifdef __AVX512F__
+# define one one_
+#endif
+
     x = (src + one) * inv;
     y = (src - one) * inv;
     touch(src);
@@ -93,22 +100,28 @@ int fma_test(void)
     x = src + inv;
     y = src - inv;
     touch(inv);
+    touch(one);
     z = src * one + inv;
     if ( !eq(x, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = -src * one - inv;
     if ( !eq(-x, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = src * one - inv;
     if ( !eq(y, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = -src * one + inv;
     if ( !eq(-y, z) ) return __LINE__;
     touch(inv);
 
+#undef one
+
 #if defined(addsub) && defined(fmaddsub)
     x = addsub(src * inv, one);
     y = addsub(src * inv, -one);



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:32:09 2022
Date: Thu, 2 Jun 2022 12:31:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by
 superpage mappings
Message-ID: <YpiRdYJP5f65pRYG@Air-de-Roger>
References: <80448822-bc1c-9f7d-ade5-fdf7c46421fe@suse.com>
 <b3126189-2fec-ec14-7129-7897cde980a8@suse.com>
 <YpiETolItQ7FvcsG@Air-de-Roger>
 <749f47ee-9258-c3a5-fab2-125c1bdda845@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <749f47ee-9258-c3a5-fab2-125c1bdda845@suse.com>

On Thu, Jun 02, 2022 at 11:58:48AM +0200, Jan Beulich wrote:
> On 02.06.2022 11:35, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> >> When a page table ends up with all contiguous entries (including all
> >> identical attributes), it can be replaced by a superpage entry at the
> >> next higher level. The page table itself can then be scheduled for
> >> freeing.
> >>
> >> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> >> for whenever we (and obviously hardware) start supporting 512G mappings.
> >>
> >> Note that cache sync-ing is likely more strict than necessary. This is
> >> both to be on the safe side as well as to maintain the pattern of all
> >> updates of (potentially) live tables being accompanied by a flush (if so
> >> needed).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> >> ---
> >> Unlike the freeing of all-empty page tables, this causes quite a bit of
> >> back and forth for PV domains, due to their mapping/unmapping of pages
> >> when they get converted to/from being page tables. It may therefore be
> >> worth considering to delay re-coalescing a little, to avoid doing so
> >> when the superpage would otherwise get split again pretty soon. But I
> >> think this would better be the subject of a separate change anyway.
> >>
> >> Of course this could also be helped by more "aware" kernel side
> >> behavior: They could avoid immediately mapping freed page tables
> >> writable again, in anticipation of re-using that same page for another
> >> page table elsewhere.
> > 
> > Could we provide an option to select whether to use super-pages for
> > IOMMU, so that PV domains could keep the previous behavior?
> 
> Hmm, I did (a while ago) consider adding a command line option, largely
> to have something in case of problems, but here you're asking about a
> per-domain setting. Possible, sure, but I have to admit I'm always
> somewhat hesitant when it comes to changes requiring to touch the tool
> stack in non-trivial ways (required besides a separate Dom0 control).

Well, per-domain is always better IMO, but I don't want to block you
on this, so I guess a command line option would be OK.

Per-domain control would IMO be helpful in this case because an admin
might wish to disable IOMMU super-pages just for PV guests, in order to
prevent the back-and-forth described above.  We could also do that with
a command line option, but it's not the most user-friendly approach.

> It's also not clear what granularity we'd want to allow control at:
> Just yes/no, or also an upper bound on the page sizes permitted, or
> even a map of (dis)allowed page sizes?

I would be fine with just yes/no.  I don't think we need to complicate
the logic; this should be a fallback in case things don't work as
expected.

> Finally, what would the behavior be for HVM guests using shared page
> tables? Should the IOMMU option there also suppress CPU-side large
> pages? Or should the IOMMU option, when not fulfillable with shared
> page tables, lead to use of separate page tables (and an error if
> shared page tables were explicitly requested)?

I think the option should error out (or be ignored?) when used with
shared page tables; there are already options to control the page sizes
for the CPU-side page tables, and those should be used when page tables
are shared.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 10:34:18 2022
Message-ID: <17bfe5d9-3bd7-88c7-e59c-e309e1668e12@suse.com>
Date: Thu, 2 Jun 2022 12:34:07 +0200
Subject: Re: [PATCH v5 6/9] xen/arm: introduce CDF_staticmem
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220531031241.90374-1-Penny.Zheng@arm.com>
 <20220531031241.90374-7-Penny.Zheng@arm.com>
 <cd7455ab-c26f-8a91-fbf0-acf988d8371f@suse.com>
 <DU2PR08MB7325C5D659A735D1BC3A1214F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU2PR08MB7325C5D659A735D1BC3A1214F7DE9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 02.06.2022 12:07, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 31, 2022 4:41 PM
>>
>> On 31.05.2022 05:12, Penny Zheng wrote:
>>> --- a/xen/arch/arm/include/asm/domain.h
>>> +++ b/xen/arch/arm/include/asm/domain.h
>>> @@ -31,6 +31,10 @@ enum domain_type {
>>>
>>>  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>
>>> +#ifdef CONFIG_STATIC_MEMORY
>>> +#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>> +#endif
>>
>> Why is this in the Arm header, rather than ...
>>
>>> --- a/xen/include/xen/domain.h
>>> +++ b/xen/include/xen/domain.h
>>> @@ -34,6 +34,12 @@ void arch_get_domain_info(const struct domain *d,
>>> #ifdef CONFIG_ARM
>>>  /* Should domain memory be directly mapped? */
>>>  #define CDF_directmap            (1U << 1)
>>> +/* Is domain memory on static allocation? */
>>> +#define CDF_staticmem            (1U << 2)
>>> +#endif
>>> +
>>> +#ifndef is_domain_using_staticmem
>>> +#define is_domain_using_staticmem(d) ((void)(d), false)
>>>  #endif
>>
>> ... here (with what you have here now simply becoming the #else part)?
>> Once living here, I expect it also can be an inline function rather than a macro,
>> with the #ifdef merely inside its body.
>>
> 
In order to avoid introducing a chicken-and-egg problem in
xen/include/xen/domain.h, I may need to move the static inline function
to xen/include/xen/sched.h (which already includes the domain.h header).

Hmm, yes, if made an inline function it would need to move to sched.h.
But as a macro it could live here (without one half needing to live
in the Arm header).

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 11:18:11 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170807-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170807: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=64706ef761273ba403f9cb3b7a986bfb804c0a87
X-Osstest-Versions-That:
    ovmf=62044aa99bcf0a7b1581b24ad8e8f105e48fa15a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 11:18:01 +0000

flight 170807 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170807/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 64706ef761273ba403f9cb3b7a986bfb804c0a87
baseline version:
 ovmf                 62044aa99bcf0a7b1581b24ad8e8f105e48fa15a

Last test of basis   170798  2022-06-01 13:11:48 Z    0 days
Testing same since   170807  2022-06-02 09:13:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   62044aa99b..64706ef761  64706ef761273ba403f9cb3b7a986bfb804c0a87 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 12:23:10 2022
References: <20220601195341.28581-1-jandryuk@gmail.com> <YphSYfdzy8kekhTZ@infradead.org>
In-Reply-To: <YphSYfdzy8kekhTZ@infradead.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 2 Jun 2022 08:22:40 -0400
Message-ID: <CAKf6xpsbgyvJjdRGrE3ru114iuXv98rumf8nVvKu5WmErf+zTA@mail.gmail.com>
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
To: Christoph Hellwig <hch@infradead.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>, 
	xen-devel <xen-devel@lists.xenproject.org>, linux-block@vger.kernel.org, 
	open list <linux-kernel@vger.kernel.org>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jun 2, 2022 at 2:02 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Wed, Jun 01, 2022 at 03:53:41PM -0400, Jason Andryuk wrote:
> > When a VBD is not fully created and then closed, the kernel can have a
> > NULL pointer dereference:
> >

> >
> > info->rq and info->gd are only set in blkfront_connect(), which is
> > called for state 4 (XenbusStateConnected).  Guard against using NULL
> > pointers in blkfront_closing() to avoid the issue.
> >
> > The rest of blkfront_closing looks okay.  If info->nr_rings is 0, then
> > for_each_rinfo won't do anything.
> >
> > blkfront_remove also needs to check for non-NULL pointers before
> > cleaning up the gendisk and request queue.
> >
> > Fixes: 05d69d950d9d ("xen-blkfront: sanitize the removal state machine")
> > Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>
> This looks ok, but do we have anything that prevents races between
> blkfront_connect, blkfront_closing and blkfront_remove?

Thanks for taking a look, Christoph.

blkfront_connect and blkfront_closing are called by the state machine
in blkback_changed, which is the xenbus_driver .otherend_changed
callback.  The xenwatch kthread invokes those callbacks synchronously
and one at a time, so that seems okay today.

blkfront_remove is the xenbus_driver .remove callback, so it is tied
to the life cycle of the device.  It's called after the
otherend_changed callback is unregistered, so those won't run when
blkfront_remove is running.

Given that, I think it's okay.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 12:36:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 12:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341131.566299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwk43-0008EB-CU; Thu, 02 Jun 2022 12:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341131.566299; Thu, 02 Jun 2022 12:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwk43-0008E3-80; Thu, 02 Jun 2022 12:36:35 +0000
Received: by outflank-mailman (input) for mailman id 341131;
 Thu, 02 Jun 2022 12:36:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=R5Si=WJ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nwk42-0008Dx-Br
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 12:36:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a149e3f7-e270-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 14:36:31 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 472BD1FA8F;
 Thu,  2 Jun 2022 12:36:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id F386C134F3;
 Thu,  2 Jun 2022 12:36:31 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id vK71Oc+umGI4IQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 02 Jun 2022 12:36:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a149e3f7-e270-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654173392; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=9yC5c2X+hXMyc3sLZQelOrFODQa3a5r3ikpeDJhSbTc=;
	b=l+1+ltfcefSiXG1KFHbQ9nBWAvfVJopyuVQvCB0hWpJY+sql/TtNSz+MdkXqN+9cfJnjU1
	9aPalkfh3r/OvhLSpTufxV4JYna9TvjrHMi88dNoiT/1+Y36j5oBcLSHP3wnFytUtFtuZB
	53jtqrpGFQAguyEr2Dtp7NILflSqXh0=
Message-ID: <428b872b-691c-0ba7-5955-d94bafb8b9a6@suse.com>
Date: Thu, 2 Jun 2022 14:36:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.8.0
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20220601195341.28581-1-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------t7xHsP582YmvVr9YdplTtH6v"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------t7xHsP582YmvVr9YdplTtH6v
Content-Type: multipart/mixed; boundary="------------SlNGl81hCA1N4j0PLGOuPKal";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Message-ID: <428b872b-691c-0ba7-5955-d94bafb8b9a6@suse.com>
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
References: <20220601195341.28581-1-jandryuk@gmail.com>
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>

--------------SlNGl81hCA1N4j0PLGOuPKal
Content-Type: multipart/mixed; boundary="------------ndNhzZq9ZtcE08toAAEkFguO"

--------------ndNhzZq9ZtcE08toAAEkFguO
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDEuMDYuMjIgMjE6NTMsIEphc29uIEFuZHJ5dWsgd3JvdGU6DQo+IFdoZW4gYSBWQkQg
aXMgbm90IGZ1bGx5IGNyZWF0ZWQgYW5kIHRoZW4gY2xvc2VkLCB0aGUga2VybmVsIGNhbiBo
YXZlIGENCj4gTlVMTCBwb2ludGVyIGRlcmVmZXJlbmNlOg0KPiANCj4gVGhlIHJlcHJvZHVj
ZXIgaXMgdHJpdmlhbDoNCj4gDQo+IFt1c2VyQGRvbTAgfl0kIHN1ZG8geGwgYmxvY2stYXR0
YWNoIHdvcmsgYmFja2VuZD1zeXMtdXNiIHZkZXY9eHZkaSB0YXJnZXQ9L2Rldi9zZHoNCj4g
W3VzZXJAZG9tMCB+XSQgeGwgYmxvY2stbGlzdCB3b3JrDQo+IFZkZXYgIEJFICBoYW5kbGUg
c3RhdGUgZXZ0LWNoIHJpbmctcmVmIEJFLXBhdGgNCj4gNTE3MTIgMCAgIDI0MSAgICA0ICAg
ICAtMSAgICAgLTEgICAgICAgL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmJkLzI0MS81MTcx
Mg0KPiA1MTcyOCAwICAgMjQxICAgIDQgICAgIC0xICAgICAtMSAgICAgICAvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92YmQvMjQxLzUxNzI4DQo+IDUxNzQ0IDAgICAyNDEgICAgNCAgICAg
LTEgICAgIC0xICAgICAgIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC8yNDEvNTE3NDQN
Cj4gNTE3NjAgMCAgIDI0MSAgICA0ICAgICAtMSAgICAgLTEgICAgICAgL2xvY2FsL2RvbWFp
bi8wL2JhY2tlbmQvdmJkLzI0MS81MTc2MA0KPiA1MTg0MCAzICAgMjQxICAgIDMgICAgIC0x
ICAgICAtMSAgICAgICAvbG9jYWwvZG9tYWluLzMvYmFja2VuZC92YmQvMjQxLzUxODQwDQo+
ICAgICAgICAgICAgICAgICAgIF4gbm90ZSBzdGF0ZSwgdGhlIC9kZXYvc2R6IGRvZXNuJ3Qg
ZXhpc3QgaW4gdGhlIGJhY2tlbmQNCj4gDQo+IFt1c2VyQGRvbTAgfl0kIHN1ZG8geGwgYmxv
Y2stZGV0YWNoIHdvcmsgeHZkaQ0KPiBbdXNlckBkb20wIH5dJCB4bCBibG9jay1saXN0IHdv
cmsNCj4gVmRldiAgQkUgIGhhbmRsZSBzdGF0ZSBldnQtY2ggcmluZy1yZWYgQkUtcGF0aA0K
PiB3b3JrIGlzIGFuIGludmFsaWQgZG9tYWluIGlkZW50aWZpZXINCj4gDQo+IEFuZCBpdHMg
Y29uc29sZSBoYXM6DQo+IA0KPiBCVUc6IGtlcm5lbCBOVUxMIHBvaW50ZXIgZGVyZWZlcmVu
Y2UsIGFkZHJlc3M6IDAwMDAwMDAwMDAwMDAwNTANCj4gUEdEIDgwMDAwMDAwZWRlYmIwNjcg
UDREIDgwMDAwMDAwZWRlYmIwNjcgUFVEIGVkZWMyMDY3IFBNRCAwDQo+IE9vcHM6IDAwMDAg
WyMxXSBQUkVFTVBUIFNNUCBQVEkNCj4gQ1BVOiAxIFBJRDogNTIgQ29tbTogeGVud2F0Y2gg
Tm90IHRhaW50ZWQgNS4xNi4xOC0yLjQzLmZjMzIucXViZXMueDg2XzY0ICMxDQo+IFJJUDog
MDAxMDpibGtfbXFfc3RvcF9od19xdWV1ZXMrMHg1LzB4NDANCj4gQ29kZTogMDAgNDggODMg
ZTAgZmQgODMgYzMgMDEgNDggODkgODUgYTggMDAgMDAgMDAgNDEgMzkgNWMgMjQgNTAgNzcg
YzAgNWIgNWQgNDEgNWMgNDEgNWQgYzMgYzMgMGYgMWYgODAgMDAgMDAgMDAgMDAgMGYgMWYg
NDQgMDAgMDAgPDhiPiA0NyA1MCA4NSBjMCA3NCAzMiA0MSA1NCA0OSA4OSBmYyA1NSA1MyAz
MSBkYiA0OSA4YiA0NCAyNCA0OCA0OA0KPiBSU1A6IDAwMTg6ZmZmZmM5MDAwMGJjZmU5OCBF
RkxBR1M6IDAwMDEwMjkzDQo+IFJBWDogZmZmZmZmZmZjMDAwODM3MCBSQlg6IDAwMDAwMDAw
MDAwMDAwMDUgUkNYOiAwMDAwMDAwMDAwMDAwMDAwDQo+IFJEWDogMDAwMDAwMDAwMDAwMDAw
MCBSU0k6IDAwMDAwMDAwMDAwMDAwMDUgUkRJOiAwMDAwMDAwMDAwMDAwMDAwDQo+IFJCUDog
ZmZmZjg4ODAwNzc1ZjAwMCBSMDg6IDAwMDAwMDAwMDAwMDAwMDEgUjA5OiBmZmZmODg4MDA2
ZTYyMGI4DQo+IFIxMDogZmZmZjg4ODAwNmU2MjBiMCBSMTE6IGYwMDAwMDAwMDAwMDAwMDAg
UjEyOiBmZmZmODg4MGJmZjM5MDAwDQo+IFIxMzogZmZmZjg4ODBiZmYzOTAwMCBSMTQ6IDAw
MDAwMDAwMDAwMDAwMDAgUjE1OiBmZmZmODg4MDA2MDRiZTAwDQo+IEZTOiAgMDAwMDAwMDAw
MDAwMDAwMCgwMDAwKSBHUzpmZmZmODg4MGYzMzAwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAw
MDAwMDAwMDANCj4gQ1M6ICAwMDEwIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4
MDA1MDAzMw0KPiBDUjI6IDAwMDAwMDAwMDAwMDAwNTAgQ1IzOiAwMDAwMDAwMGU5MzJlMDAy
IENSNDogMDAwMDAwMDAwMDM3MDZlMA0KPiBDYWxsIFRyYWNlOg0KPiAgIDxUQVNLPg0KPiAg
IGJsa2JhY2tfY2hhbmdlZCsweDk1LzB4MTM3IFt4ZW5fYmxrZnJvbnRdDQo+ICAgPyByZWFk
X3JlcGx5KzB4MTYwLzB4MTYwDQo+ICAgeGVud2F0Y2hfdGhyZWFkKzB4YzAvMHgxYTANCj4g
ICA/IGRvX3dhaXRfaW50cl9pcnErMHhhMC8weGEwDQo+ICAga3RocmVhZCsweDE2Yi8weDE5
MA0KPiAgID8gc2V0X2t0aHJlYWRfc3RydWN0KzB4NDAvMHg0MA0KPiAgIHJldF9mcm9tX2Zv
cmsrMHgyMi8weDMwDQo+ICAgPC9UQVNLPg0KPiBNb2R1bGVzIGxpbmtlZCBpbjogc25kX3Nl
cV9kdW1teSBzbmRfaHJ0aW1lciBzbmRfc2VxIHNuZF9zZXFfZGV2aWNlIHNuZF90aW1lciBz
bmQgc291bmRjb3JlIGlwdF9SRUpFQ1QgbmZfcmVqZWN0X2lwdjQgeHRfc3RhdGUgeHRfY29u
bnRyYWNrIG5mdF9jb3VudGVyIG5mdF9jaGFpbl9uYXQgeHRfTUFTUVVFUkFERSBuZl9uYXQg
bmZfY29ubnRyYWNrIG5mX2RlZnJhZ19pcHY2IG5mX2RlZnJhZ19pcHY0IG5mdF9jb21wYXQg
bmZfdGFibGVzIG5mbmV0bGluayBpbnRlbF9yYXBsX21zciBpbnRlbF9yYXBsX2NvbW1vbiBj
cmN0MTBkaWZfcGNsbXVsIGNyYzMyX3BjbG11bCBjcmMzMmNfaW50ZWwgZ2hhc2hfY2xtdWxu
aV9pbnRlbCB4ZW5fbmV0ZnJvbnQgcGNzcGtyIHhlbl9zY3NpYmFjayB0YXJnZXRfY29yZV9t
b2QgeGVuX25ldGJhY2sgeGVuX3ByaXZjbWQgeGVuX2dudGRldiB4ZW5fZ250YWxsb2MgeGVu
X2Jsa2JhY2sgeGVuX2V2dGNobiBpcG1pX2RldmludGYgaXBtaV9tc2doYW5kbGVyIGZ1c2Ug
YnBmX3ByZWxvYWQgaXBfdGFibGVzIG92ZXJsYXkgeGVuX2Jsa2Zyb250DQo+IENSMjogMDAw
MDAwMDAwMDAwMDA1MA0KPiAtLS1bIGVuZCB0cmFjZSA3YmM5NTk3ZmQwNmFlODlkIF0tLS0N
Cj4gUklQOiAwMDEwOmJsa19tcV9zdG9wX2h3X3F1ZXVlcysweDUvMHg0MA0KPiBDb2RlOiAw
MCA0OCA4MyBlMCBmZCA4MyBjMyAwMSA0OCA4OSA4NSBhOCAwMCAwMCAwMCA0MSAzOSA1YyAy
NCA1MCA3NyBjMCA1YiA1ZCA0MSA1YyA0MSA1ZCBjMyBjMyAwZiAxZiA4MCAwMCAwMCAwMCAw
MCAwZiAxZiA0NCAwMCAwMCA8OGI+IDQ3IDUwIDg1IGMwIDc0IDMyIDQxIDU0IDQ5IDg5IGZj
IDU1IDUzIDMxIGRiIDQ5IDhiIDQ0IDI0IDQ4IDQ4DQo+IFJTUDogMDAxODpmZmZmYzkwMDAw
YmNmZTk4IEVGTEFHUzogMDAwMTAyOTMNCj4gUkFYOiBmZmZmZmZmZmMwMDA4MzcwIFJCWDog
MDAwMDAwMDAwMDAwMDAwNSBSQ1g6IDAwMDAwMDAwMDAwMDAwMDANCj4gUkRYOiAwMDAwMDAw
MDAwMDAwMDAwIFJTSTogMDAwMDAwMDAwMDAwMDAwNSBSREk6IDAwMDAwMDAwMDAwMDAwMDAN
Cj4gUkJQOiBmZmZmODg4MDA3NzVmMDAwIFIwODogMDAwMDAwMDAwMDAwMDAwMSBSMDk6IGZm
ZmY4ODgwMDZlNjIwYjgNCj4gUjEwOiBmZmZmODg4MDA2ZTYyMGIwIFIxMTogZjAwMDAwMDAw
MDAwMDAwMCBSMTI6IGZmZmY4ODgwYmZmMzkwMDANCj4gUjEzOiBmZmZmODg4MGJmZjM5MDAw
IFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODgwMDYwNGJlMDANCj4gRlM6ICAw
MDAwMDAwMDAwMDAwMDAwKDAwMDApIEdTOmZmZmY4ODgwZjMzMDAwMDAoMDAwMCkga25sR1M6
MDAwMDAwMDAwMDAwMDAwMA0KPiBDUzogIDAwMTAgRFM6IDAwMDAgRVM6IDAwMDAgQ1IwOiAw
MDAwMDAwMDgwMDUwMDMzDQo+IENSMjogMDAwMDAwMDAwMDAwMDA1MCBDUjM6IDAwMDAwMDAw
ZTkzMmUwMDIgQ1I0OiAwMDAwMDAwMDAwMzcwNmUwDQo+IEtlcm5lbCBwYW5pYyAtIG5vdCBz
eW5jaW5nOiBGYXRhbCBleGNlcHRpb24NCj4gS2VybmVsIE9mZnNldDogZGlzYWJsZWQNCj4g
DQo+IGluZm8tPnJxIGFuZCBpbmZvLT5nZCBhcmUgb25seSBzZXQgaW4gYmxrZnJvbnRfY29u
bmVjdCgpLCB3aGljaCBpcw0KPiBjYWxsZWQgZm9yIHN0YXRlIDQgKFhlbmJ1c1N0YXRlQ29u
bmVjdGVkKS4gIEd1YXJkIGFnYWluc3QgdXNpbmcgTlVMTA0KPiB2YXJpYWJsZXMgaW4gYmxr
ZnJvbnRfY2xvc2luZygpIHRvIGF2b2lkIHRoZSBpc3N1ZS4NCj4gDQo+IFRoZSByZXN0IG9m
IGJsa2Zyb250X2Nsb3NpbmcgbG9va3Mgb2theS4gIElmIGluZm8tPm5yX3JpbmdzIGlzIDAs
IHRoZW4NCj4gZm9yX2VhY2hfcmluZm8gd29uJ3QgZG8gYW55dGhpbmcuDQo+IA0KPiBibGtm
cm9udF9yZW1vdmUgYWxzbyBuZWVkcyB0byBjaGVjayBmb3Igbm9uLU5VTEwgcG9pbnRlcnMg
YmVmb3JlDQo+IGNsZWFuaW5nIHVwIHRoZSBnZW5kaXNrIGFuZCByZXF1ZXN0IHF1ZXVlLg0K
PiANCj4gRml4ZXM6IDA1ZDY5ZDk1MGQ5ZCAieGVuLWJsa2Zyb250OiBzYW5pdGl6ZSB0aGUg
cmVtb3ZhbCBzdGF0ZSBtYWNoaW5lIg0KPiBSZXBvcnRlZC1ieTogTWFyZWsgTWFyY3p5a293
c2tpLUfDs3JlY2tpIDxtYXJtYXJla0BpbnZpc2libGV0aGluZ3NsYWIuY29tPg0KPiBTaWdu
ZWQtb2ZmLWJ5OiBKYXNvbiBBbmRyeXVrIDxqYW5kcnl1a0BnbWFpbC5jb20+DQoNClJldmll
d2VkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQoNCg0KSnVlcmdlbg0K

--------------ndNhzZq9ZtcE08toAAEkFguO
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ndNhzZq9ZtcE08toAAEkFguO--

--------------SlNGl81hCA1N4j0PLGOuPKal--

--------------t7xHsP582YmvVr9YdplTtH6v
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKYrs8FAwAAAAAACgkQsN6d1ii/Ey9i
zQf8DzxSycrKgOoEwi8rX8mvlovcQ1eCJuvuU4/OabjZtV06Ms0XRBGHiXi771h+H3MRN8wYB0zM
zmd0xlRut4G3397FKvay6GgO+jT5Vt7w+rPl+OSjKSXqdoLyIGk6y47g7KJ1BN/IFdRXvOFR/MqZ
UB/o+ZKvCJs/RstSPQxOAqYaT0PKItxxTxAijRTLibZAgn2ZPbhGGgtouKRHOSXgYdq3voDmvmJq
AteAdd2IpG5d7W8ziYpuevzF+t8bcxHt5Jq4Hs4r0SbiEfyi+7Swvnr5JX0ovmFzX9rizlmkinWI
oHuU8xi1nBFFRcUPOaLZlUVHkeuoQAIFQQNvwkp1Kw==
=r7rQ
-----END PGP SIGNATURE-----

--------------t7xHsP582YmvVr9YdplTtH6v--


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 12:49:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 12:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341140.566311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwkGL-0001Xg-I4; Thu, 02 Jun 2022 12:49:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341140.566311; Thu, 02 Jun 2022 12:49:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwkGL-0001XZ-Cw; Thu, 02 Jun 2022 12:49:17 +0000
Received: by outflank-mailman (input) for mailman id 341140;
 Thu, 02 Jun 2022 12:49:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwkGK-0001XT-7v
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 12:49:16 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68792d0b-e272-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 14:49:14 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id i10so7711313lfj.0
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 05:49:14 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 v17-20020a2e9f51000000b00253e37c971asm832692ljk.75.2022.06.02.05.49.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 02 Jun 2022 05:49:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68792d0b-e272-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=bsTLsgBWur6uu9OlkhXTdmb21vQyR52FOVipLjd+uS4=;
        b=GZkGTjvTy9vObELpqI+097O57dXF5yumiXJgnYzhdM7ygzRIoboJ8wrZ3gCamVGTCU
         QNYYb5e/uRuAIHGgX2smRbqAkJg8mzLIaV4x+yKwlgplMlxEnTI6flgAgoqf6vhFBeCW
         mXjdZA0DzqYmVkVWOr813LlXgcFH1N7PzRnRSG/XzG2uzqFxEHXFtHjzBhBlVVVPSQuN
         IB/jp8hnT+K1YSVRfvUfN+xm8bBkHoy1thxoOaneRgIiHH6bpepKVH6Bo0kRjiAIEkbv
         p9voW4PVhGvhzCZqlHTqFeVxuZjRDIdOoNFtW7MxydtdhHPbJ2K4+7dKAT4nwQg25duf
         q3fA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=bsTLsgBWur6uu9OlkhXTdmb21vQyR52FOVipLjd+uS4=;
        b=B/3GiCHxrTZGgJm9wOH+RyM7QxMR6VtgJuV7qsyZOxvRTty1/k/zJdS5vm3r9fMFD6
         SgoSpwYCPxUeZHn0gT1wK5J1OSsnGXFV81K6x3BrrnJgkw4cth/4f1rWMTjJkuH7iH/m
         C/lWEGkNIyITnRX8HSJklHtKUFpzHGvqBivSf2D4vv661TmsOtNblFG9uTXr8wCVCI3T
         a7VD0S17ofWUe6VwCeIbCTrVhRIuExbopwcEypBo5/CMRt3cNO4VjS4cvSLYSR5BkLpa
         FqJqEnQ69LrRrl9jvF/wVFnVvaN2caokwgBJsnkaG4o72ll81GDeYqlIPHVb9bvOgCPe
         96Uw==
X-Gm-Message-State: AOAM533w1oLMnScsMTRfxz1Ps3tmTaBfGv72+z6Qmh59EMJoIz4aUZKl
	Szp7giVxuJnuBmHVf1ubo1w=
X-Google-Smtp-Source: ABdhPJwaXlD35k4IsIKvKitQcl/ikmFVgXqey4Sag2YEYkD87wMAyzyt3YWXxh8dBVecrMjoKXujNg==
X-Received: by 2002:a19:5f1d:0:b0:478:f072:8dc9 with SMTP id t29-20020a195f1d000000b00478f0728dc9mr3498506lfb.440.1654174154472;
        Thu, 02 Jun 2022 05:49:14 -0700 (PDT)
Subject: Re: [PATCH V3 4/8] xen/virtio: Enable restricted memory access using
 Xen grant mappings
From: Oleksandr <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Julien Grall <julien@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig <hch@infradead.org>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com>
 <1653944417-17168-5-git-send-email-olekstysh@gmail.com>
Message-ID: <c2ae069d-ec95-50ea-1789-b2a667d6fb4c@gmail.com>
Date: Thu, 2 Jun 2022 15:49:12 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1653944417-17168-5-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 31.05.22 00:00, Oleksandr Tyshchenko wrote:

Hello all.

> From: Juergen Gross <jgross@suse.com>
>
> In order to support virtio in Xen guests, add a config option XEN_VIRTIO
> enabling the user to specify whether virtio in all Xen guests should be
> able to access memory only via Xen grant mappings on the host side.
>
> Also set the PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS feature from the guest
> initialization code on Arm and x86 if CONFIG_XEN_VIRTIO is enabled.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> Changes V1 -> V2:
>     - new patch, split required changes from commit:
>      "[PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen"
>     - rework according to new platform_has() infrastructure
>
> Changes V2 -> V3:
>     - add Stefano's R-b

May I please ask for an ack or comments for the x86 side here?



> ---
>   arch/arm/xen/enlighten.c     |  2 ++
>   arch/x86/xen/enlighten_hvm.c |  2 ++
>   arch/x86/xen/enlighten_pv.c  |  2 ++
>   drivers/xen/Kconfig          | 11 +++++++++++
>   include/xen/xen.h            |  8 ++++++++
>   5 files changed, 25 insertions(+)
>
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 07eb69f..1f9c3ba 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -443,6 +443,8 @@ static int __init xen_guest_init(void)
>   	if (!xen_domain())
>   		return 0;
>   
> +	xen_set_restricted_virtio_memory_access();
> +
>   	if (!acpi_disabled)
>   		xen_acpi_guest_init();
>   	else
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index 517a9d8..8b71b1d 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -195,6 +195,8 @@ static void __init xen_hvm_guest_init(void)
>   	if (xen_pv_domain())
>   		return;
>   
> +	xen_set_restricted_virtio_memory_access();
> +
>   	init_hvm_pv_info();
>   
>   	reserve_shared_info();
> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
> index ca85d14..30d24fe 100644
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -108,6 +108,8 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>   
>   static void __init xen_pv_init_platform(void)
>   {
> +	xen_set_restricted_virtio_memory_access();
> +
>   	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>   
>   	set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 313a9127..a7bd8ce 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -339,4 +339,15 @@ config XEN_GRANT_DMA_OPS
>   	bool
>   	select DMA_OPS
>   
> +config XEN_VIRTIO
> +	bool "Xen virtio support"
> +	depends on VIRTIO
> +	select XEN_GRANT_DMA_OPS
> +	help
> +	  Enable virtio support for running as Xen guest. Depending on the
> +	  guest type this will require special support on the backend side
> +	  (qemu or kernel, depending on the virtio device types used).
> +
> +	  If in doubt, say n.
> +
>   endmenu
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a99bab8..0780a81 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,6 +52,14 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>   extern u64 xen_saved_max_mem_size;
>   #endif
>   
> +#include <linux/platform-feature.h>
> +
> +static inline void xen_set_restricted_virtio_memory_access(void)
> +{
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> +		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> +}
> +
>   #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>   int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>   void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 13:39:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 13:39:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341148.566321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwl2R-0007ip-8Q; Thu, 02 Jun 2022 13:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341148.566321; Thu, 02 Jun 2022 13:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwl2R-0007ii-4d; Thu, 02 Jun 2022 13:38:59 +0000
Received: by outflank-mailman (input) for mailman id 341148;
 Thu, 02 Jun 2022 13:38:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwl2Q-0007iY-HS; Thu, 02 Jun 2022 13:38:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwl2Q-0004Uf-FY; Thu, 02 Jun 2022 13:38:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwl2Q-0000hI-2h; Thu, 02 Jun 2022 13:38:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwl2Q-00039Q-2F; Thu, 02 Jun 2022 13:38:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EuI1VFADoTdrcnTnH0lw+yvRcdD3eECGYNnyE6thr2Y=; b=c088BmGUlQUX1f80N8bf8tBkxD
	U+dcvE6sWe9st1Rg/2oWrGk2zBLVwMIGKEH08xzQ7Zoxu73i/GhOsJpQ02F7HlxaRhR5VOl7pJ2tF
	iOaic62aXslUSec2fWwG2/1U+zSICAu1WEp8w6yemiLjW7G5Fo4BdtLCudg7IIPv5RLs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170805-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170805: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=d1dc87763f406d4e67caf16dbe438a5647692395
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 13:38:58 +0000

flight 170805 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170805/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 13 guest-start          fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install      fail REGR. vs. 170714
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714

version targeted for testing:
 linux                d1dc87763f406d4e67caf16dbe438a5647692395
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z    9 days
Failing since        170716  2022-05-24 11:12:06 Z    9 days   26 attempts
Testing same since   170805  2022-06-02 06:46:34 Z    0 days    1 attempts

------------------------------------------------------------
2021 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 228471 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 15:07:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 15:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341159.566335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwmPg-0001f8-4J; Thu, 02 Jun 2022 15:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341159.566335; Thu, 02 Jun 2022 15:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwmPg-0001f1-0j; Thu, 02 Jun 2022 15:07:04 +0000
Received: by outflank-mailman (input) for mailman id 341159;
 Thu, 02 Jun 2022 15:07:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwmPe-0001eq-Kl; Thu, 02 Jun 2022 15:07:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwmPe-00064H-HO; Thu, 02 Jun 2022 15:07:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwmPd-00031g-SA; Thu, 02 Jun 2022 15:07:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwmPd-0002uO-Ra; Thu, 02 Jun 2022 15:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R0/sbNIGp9CYAzbtb2xnHXe8qgYlskYGHykRXQSa6pM=; b=PecMCVuAkuBa1QuvjaWrm+ADYK
	Rzh03oczDBbgIZ/f7FQTTsOMuW7wKUs/e5QNY6WTTBwhM/huOIhg4J0fxsRppO6xki+4L7v5bfGaJ
	Eh5woz90rWVJy012h8vVi8TrLiYLV/2Tlc7WztTqwHG8n99N6xB8dvzBKH46hdSgWFns=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170806-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170806: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 15:07:01 +0000

flight 170806 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170806/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine-uefi  6 xen-install                fail pass in 170801

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail like 170801
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170801
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170801
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170801
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170801
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170801
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170801
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170801
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170801
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170801
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170801
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170801
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170801
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170806  2022-06-02 07:12:19 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:10:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341170.566345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqCj-0005Mt-AQ; Thu, 02 Jun 2022 19:09:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341170.566345; Thu, 02 Jun 2022 19:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqCj-0005Mm-7X; Thu, 02 Jun 2022 19:09:57 +0000
Received: by outflank-mailman (input) for mailman id 341170;
 Thu, 02 Jun 2022 19:09:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KMRg=WJ=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1nwqCh-0005Mb-Fd
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:09:56 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93ebe4bd-e2a7-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 21:09:52 +0200 (CEST)
Received: from pps.filterd (m0246630.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 252Huhvm028854;
 Thu, 2 Jun 2022 19:09:12 GMT
Received: from iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta02.appoci.oracle.com [147.154.18.20])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gbc7ku12j-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 02 Jun 2022 19:09:11 +0000
Received: from pps.filterd
 (iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 252J0sk8010287; Thu, 2 Jun 2022 19:09:11 GMT
Received: from nam04-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam04lp2043.outbound.protection.outlook.com [104.47.73.43])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com with ESMTP id
 3gc8kmxyus-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 02 Jun 2022 19:09:11 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BN6PR10MB1410.namprd10.prod.outlook.com (2603:10b6:404:46::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Thu, 2 Jun
 2022 19:09:09 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::84df:d22d:95d6:b4f9]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::84df:d22d:95d6:b4f9%4]) with mapi id 15.20.5314.013; Thu, 2 Jun 2022
 19:09:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93ebe4bd-e2a7-11ec-837f-e5687231ffcc
Message-ID: <3f067ced-8fe3-2e4a-79bb-909b8636db17@oracle.com>
Date: Thu, 2 Jun 2022 15:09:00 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH V3 4/8] xen/virtio: Enable restricted memory access using
 Xen grant mappings
Content-Language: en-US
To: Oleksandr <olekstysh@gmail.com>, xen-devel@lists.xenproject.org,
        x86@kernel.org, linux-arm-kernel@lists.infradead.org,
        linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Russell King <linux@armlinux.org.uk>,
        Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H. Peter Anvin" <hpa@zytor.com>, Julien Grall <julien@xen.org>,
        "Michael S. Tsirkin" <mst@redhat.com>,
        Christoph Hellwig <hch@infradead.org>
References: <1653944417-17168-1-git-send-email-olekstysh@gmail.com>
 <1653944417-17168-5-git-send-email-olekstysh@gmail.com>
 <c2ae069d-ec95-50ea-1789-b2a667d6fb4c@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
In-Reply-To: <c2ae069d-ec95-50ea-1789-b2a667d6fb4c@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SA0PR11CA0122.namprd11.prod.outlook.com
 (2603:10b6:806:131::7) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0


On 6/2/22 8:49 AM, Oleksandr wrote:
>
> On 31.05.22 00:00, Oleksandr Tyshchenko wrote:
>
> Hello all.
>
>> From: Juergen Gross <jgross@suse.com>
>>
>> In order to support virtio in Xen guests, add a config option XEN_VIRTIO
>> enabling the user to specify whether virtio in Xen guests should be able
>> to access guest memory on the host side only via Xen grant mappings.
>>
>> Also set PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS feature from the guest
>> initialization code on Arm and x86 if CONFIG_XEN_VIRTIO is enabled.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>> ---
>> Changes V1 -> V2:
>>     - new patch, split required changes from commit:
>>      "[PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen"
>>     - rework according to new platform_has() infrastructure
>>
>> Changes V2 -> V3:
>>     - add Stefano's R-b
>
> May I please ask for an ack or comments on the x86 side here?
>
>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341181.566379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQg-0000EI-BQ; Thu, 02 Jun 2022 19:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341181.566379; Thu, 02 Jun 2022 19:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQg-0000EB-6i; Thu, 02 Jun 2022 19:24:22 +0000
Received: by outflank-mailman (input) for mailman id 341181;
 Thu, 02 Jun 2022 19:24:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQe-00089L-I9
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:20 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99505fe7-e2a9-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 21:24:19 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id m20so11797117ejj.10
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:19 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.17
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99505fe7-e2a9-11ec-837f-e5687231ffcc
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 2/8] xen/grants: support allocating consecutive grants
Date: Thu,  2 Jun 2022 22:23:47 +0300
Message-Id: <1654197833-25362-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Juergen Gross <jgross@suse.com>

To support virtio via grant mappings, larger mappings using consecutive
grants are needed in rare cases. Support those by adding a bitmap of
free grants.

As consecutive grants will be needed only in very rare cases (e.g. when
configuring a virtio device with a multi-page ring), optimize for the
normal case of non-consecutive allocations.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - no changes

Changes V2 -> V3:
   - rebase, move "if (unlikely(ref < GNTTAB_NR_RESERVED_ENTRIES))"
     to put_free_entry_locked()
   - do not overwrite "i" in gnttab_init(), introduce local max_nr_grefs
   - add a comment on top of "while (from < to)" in get_free_seq()
   - add Boris' R-b

Changes V3 -> V4:
   - no changes
---
 drivers/xen/grant-table.c | 251 +++++++++++++++++++++++++++++++++++++++-------
 include/xen/grant_table.h |   4 +
 2 files changed, 219 insertions(+), 36 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 7a18292..738029d 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -33,6 +33,7 @@
 
 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
 
+#include <linux/bitmap.h>
 #include <linux/memblock.h>
 #include <linux/sched.h>
 #include <linux/mm.h>
@@ -70,9 +71,32 @@
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
+
+/*
+ * Handling of free grants:
+ *
+ * Free grants are in a simple list anchored in gnttab_free_head. They are
+ * linked by grant ref, the last element contains GNTTAB_LIST_END. The number
+ * of free entries is stored in gnttab_free_count.
+ * Additionally there is a bitmap of free entries anchored in
+ * gnttab_free_bitmap. This is being used for simplifying allocation of
+ * multiple consecutive grants, which is needed e.g. for support of virtio.
+ * gnttab_last_free is used to add free entries of new frames at the end of
+ * the free list.
+ * gnttab_free_tail_ptr specifies the variable which references the start
+ * of consecutive free grants ending with gnttab_last_free. This pointer is
+ * updated in a rather defensive way, in order to avoid performance hits in
+ * hot paths.
+ * All those variables are protected by gnttab_list_lock.
+ */
 static int gnttab_free_count;
-static grant_ref_t gnttab_free_head;
+static unsigned int gnttab_size;
+static grant_ref_t gnttab_free_head = GNTTAB_LIST_END;
+static grant_ref_t gnttab_last_free = GNTTAB_LIST_END;
+static grant_ref_t *gnttab_free_tail_ptr;
+static unsigned long *gnttab_free_bitmap;
 static DEFINE_SPINLOCK(gnttab_list_lock);
+
 struct grant_frames xen_auto_xlat_grant_frames;
 static unsigned int xen_gnttab_version;
 module_param_named(version, xen_gnttab_version, uint, 0);
@@ -168,16 +192,116 @@ static int get_free_entries(unsigned count)
 
 	ref = head = gnttab_free_head;
 	gnttab_free_count -= count;
-	while (count-- > 1)
-		head = gnttab_entry(head);
+	while (count--) {
+		bitmap_clear(gnttab_free_bitmap, head, 1);
+		if (gnttab_free_tail_ptr == __gnttab_entry(head))
+			gnttab_free_tail_ptr = &gnttab_free_head;
+		if (count)
+			head = gnttab_entry(head);
+	}
 	gnttab_free_head = gnttab_entry(head);
 	gnttab_entry(head) = GNTTAB_LIST_END;
 
+	if (!gnttab_free_count) {
+		gnttab_last_free = GNTTAB_LIST_END;
+		gnttab_free_tail_ptr = NULL;
+	}
+
 	spin_unlock_irqrestore(&gnttab_list_lock, flags);
 
 	return ref;
 }
 
+static int get_seq_entry_count(void)
+{
+	if (gnttab_last_free == GNTTAB_LIST_END || !gnttab_free_tail_ptr ||
+	    *gnttab_free_tail_ptr == GNTTAB_LIST_END)
+		return 0;
+
+	return gnttab_last_free - *gnttab_free_tail_ptr + 1;
+}
+
+/* Rebuilds the free grant list and tries to find count consecutive entries. */
+static int get_free_seq(unsigned int count)
+{
+	int ret = -ENOSPC;
+	unsigned int from, to;
+	grant_ref_t *last;
+
+	gnttab_free_tail_ptr = &gnttab_free_head;
+	last = &gnttab_free_head;
+
+	for (from = find_first_bit(gnttab_free_bitmap, gnttab_size);
+	     from < gnttab_size;
+	     from = find_next_bit(gnttab_free_bitmap, gnttab_size, to + 1)) {
+		to = find_next_zero_bit(gnttab_free_bitmap, gnttab_size,
+					from + 1);
+		if (ret < 0 && to - from >= count) {
+			ret = from;
+			bitmap_clear(gnttab_free_bitmap, ret, count);
+			from += count;
+			gnttab_free_count -= count;
+			if (from == to)
+				continue;
+		}
+
+		/*
+		 * Recreate the free list in order to have it properly sorted.
+		 * This is needed to make sure that the free tail has the maximum
+		 * possible size.
+		 */
+		while (from < to) {
+			*last = from;
+			last = __gnttab_entry(from);
+			gnttab_last_free = from;
+			from++;
+		}
+		if (to < gnttab_size)
+			gnttab_free_tail_ptr = __gnttab_entry(to - 1);
+	}
+
+	*last = GNTTAB_LIST_END;
+	if (gnttab_last_free != gnttab_size - 1)
+		gnttab_free_tail_ptr = NULL;
+
+	return ret;
+}
+
+static int get_free_entries_seq(unsigned int count)
+{
+	unsigned long flags;
+	int ret = 0;
+
+	spin_lock_irqsave(&gnttab_list_lock, flags);
+
+	if (gnttab_free_count < count) {
+		ret = gnttab_expand(count - gnttab_free_count);
+		if (ret < 0)
+			goto out;
+	}
+
+	if (get_seq_entry_count() < count) {
+		ret = get_free_seq(count);
+		if (ret >= 0)
+			goto out;
+		ret = gnttab_expand(count - get_seq_entry_count());
+		if (ret < 0)
+			goto out;
+	}
+
+	ret = *gnttab_free_tail_ptr;
+	*gnttab_free_tail_ptr = gnttab_entry(ret + count - 1);
+	gnttab_free_count -= count;
+	if (!gnttab_free_count)
+		gnttab_free_tail_ptr = NULL;
+	bitmap_clear(gnttab_free_bitmap, ret, count);
+
+ out:
+	spin_unlock_irqrestore(&gnttab_list_lock, flags);
+
+	return ret;
+}
+
 static void do_free_callbacks(void)
 {
 	struct gnttab_free_callback *callback, *next;
@@ -204,21 +328,51 @@ static inline void check_free_callbacks(void)
 		do_free_callbacks();
 }
 
-static void put_free_entry(grant_ref_t ref)
+static void put_free_entry_locked(grant_ref_t ref)
 {
-	unsigned long flags;
-
 	if (unlikely(ref < GNTTAB_NR_RESERVED_ENTRIES))
 		return;
 
-	spin_lock_irqsave(&gnttab_list_lock, flags);
 	gnttab_entry(ref) = gnttab_free_head;
 	gnttab_free_head = ref;
+	if (!gnttab_free_count)
+		gnttab_last_free = ref;
+	if (gnttab_free_tail_ptr == &gnttab_free_head)
+		gnttab_free_tail_ptr = __gnttab_entry(ref);
 	gnttab_free_count++;
+	bitmap_set(gnttab_free_bitmap, ref, 1);
+}
+
+static void put_free_entry(grant_ref_t ref)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&gnttab_list_lock, flags);
+	put_free_entry_locked(ref);
 	check_free_callbacks();
 	spin_unlock_irqrestore(&gnttab_list_lock, flags);
 }
 
+static void gnttab_set_free(unsigned int start, unsigned int n)
+{
+	unsigned int i;
+
+	for (i = start; i < start + n - 1; i++)
+		gnttab_entry(i) = i + 1;
+
+	gnttab_entry(i) = GNTTAB_LIST_END;
+	if (!gnttab_free_count) {
+		gnttab_free_head = start;
+		gnttab_free_tail_ptr = &gnttab_free_head;
+	} else {
+		gnttab_entry(gnttab_last_free) = start;
+	}
+	gnttab_free_count += n;
+	gnttab_last_free = i;
+
+	bitmap_set(gnttab_free_bitmap, start, n);
+}
+
 /*
  * Following applies to gnttab_update_entry_v1 and gnttab_update_entry_v2.
  * Introducing a valid entry into the grant table:
@@ -450,23 +604,31 @@ void gnttab_free_grant_references(grant_ref_t head)
 {
 	grant_ref_t ref;
 	unsigned long flags;
-	int count = 1;
-	if (head == GNTTAB_LIST_END)
-		return;
+
 	spin_lock_irqsave(&gnttab_list_lock, flags);
-	ref = head;
-	while (gnttab_entry(ref) != GNTTAB_LIST_END) {
-		ref = gnttab_entry(ref);
-		count++;
+	while (head != GNTTAB_LIST_END) {
+		ref = gnttab_entry(head);
+		put_free_entry_locked(head);
+		head = ref;
 	}
-	gnttab_entry(ref) = gnttab_free_head;
-	gnttab_free_head = head;
-	gnttab_free_count += count;
 	check_free_callbacks();
 	spin_unlock_irqrestore(&gnttab_list_lock, flags);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_grant_references);
 
+void gnttab_free_grant_reference_seq(grant_ref_t head, unsigned int count)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&gnttab_list_lock, flags);
+	for (i = count; i > 0; i--)
+		put_free_entry_locked(head + i - 1);
+	check_free_callbacks();
+	spin_unlock_irqrestore(&gnttab_list_lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_free_grant_reference_seq);
+
 int gnttab_alloc_grant_references(u16 count, grant_ref_t *head)
 {
 	int h = get_free_entries(count);
@@ -480,6 +642,24 @@ int gnttab_alloc_grant_references(u16 count, grant_ref_t *head)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_grant_references);
 
+int gnttab_alloc_grant_reference_seq(unsigned int count, grant_ref_t *first)
+{
+	int h;
+
+	if (count == 1)
+		h = get_free_entries(1);
+	else
+		h = get_free_entries_seq(count);
+
+	if (h < 0)
+		return -ENOSPC;
+
+	*first = h;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_alloc_grant_reference_seq);
+
 int gnttab_empty_grant_references(const grant_ref_t *private_head)
 {
 	return (*private_head == GNTTAB_LIST_END);
@@ -572,16 +752,13 @@ static int grow_gnttab_list(unsigned int more_frames)
 			goto grow_nomem;
 	}
 
+	gnttab_set_free(gnttab_size, extra_entries);
 
-	for (i = grefs_per_frame * nr_grant_frames;
-	     i < grefs_per_frame * new_nr_grant_frames - 1; i++)
-		gnttab_entry(i) = i + 1;
-
-	gnttab_entry(i) = gnttab_free_head;
-	gnttab_free_head = grefs_per_frame * nr_grant_frames;
-	gnttab_free_count += extra_entries;
+	if (!gnttab_free_tail_ptr)
+		gnttab_free_tail_ptr = __gnttab_entry(gnttab_size);
 
 	nr_grant_frames = new_nr_grant_frames;
+	gnttab_size += extra_entries;
 
 	check_free_callbacks();
 
@@ -1424,20 +1601,20 @@ static int gnttab_expand(unsigned int req_entries)
 int gnttab_init(void)
 {
 	int i;
-	unsigned long max_nr_grant_frames;
+	unsigned long max_nr_grant_frames, max_nr_grefs;
 	unsigned int max_nr_glist_frames, nr_glist_frames;
-	unsigned int nr_init_grefs;
 	int ret;
 
 	gnttab_request_version();
 	max_nr_grant_frames = gnttab_max_grant_frames();
+	max_nr_grefs = max_nr_grant_frames *
+			gnttab_interface->grefs_per_grant_frame;
 	nr_grant_frames = 1;
 
 	/* Determine the maximum number of frames required for the
 	 * grant reference free list on the current hypervisor.
 	 */
-	max_nr_glist_frames = (max_nr_grant_frames *
-			       gnttab_interface->grefs_per_grant_frame / RPP);
+	max_nr_glist_frames = max_nr_grefs / RPP;
 
 	gnttab_list = kmalloc_array(max_nr_glist_frames,
 				    sizeof(grant_ref_t *),
@@ -1454,6 +1631,12 @@ int gnttab_init(void)
 		}
 	}
 
+	gnttab_free_bitmap = bitmap_zalloc(max_nr_grefs, GFP_KERNEL);
+	if (!gnttab_free_bitmap) {
+		ret = -ENOMEM;
+		goto ini_nomem;
+	}
+
 	ret = arch_gnttab_init(max_nr_grant_frames,
 			       nr_status_frames(max_nr_grant_frames));
 	if (ret < 0)
@@ -1464,15 +1647,10 @@ int gnttab_init(void)
 		goto ini_nomem;
 	}
 
-	nr_init_grefs = nr_grant_frames *
-			gnttab_interface->grefs_per_grant_frame;
-
-	for (i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++)
-		gnttab_entry(i) = i + 1;
+	gnttab_size = nr_grant_frames * gnttab_interface->grefs_per_grant_frame;
 
-	gnttab_entry(nr_init_grefs - 1) = GNTTAB_LIST_END;
-	gnttab_free_count = nr_init_grefs - GNTTAB_NR_RESERVED_ENTRIES;
-	gnttab_free_head  = GNTTAB_NR_RESERVED_ENTRIES;
+	gnttab_set_free(GNTTAB_NR_RESERVED_ENTRIES,
+			gnttab_size - GNTTAB_NR_RESERVED_ENTRIES);
 
 	printk("Grant table initialized\n");
 	return 0;
@@ -1481,6 +1659,7 @@ int gnttab_init(void)
 	for (i--; i >= 0; i--)
 		free_page((unsigned long)gnttab_list[i]);
 	kfree(gnttab_list);
+	bitmap_free(gnttab_free_bitmap);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(gnttab_init);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 527c990..e279be3 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -127,10 +127,14 @@ int gnttab_try_end_foreign_access(grant_ref_t ref);
  */
 int gnttab_alloc_grant_references(u16 count, grant_ref_t *pprivate_head);
 
+int gnttab_alloc_grant_reference_seq(unsigned int count, grant_ref_t *first);
+
 void gnttab_free_grant_reference(grant_ref_t ref);
 
 void gnttab_free_grant_references(grant_ref_t head);
 
+void gnttab_free_grant_reference_seq(grant_ref_t head, unsigned int count);
+
 int gnttab_empty_grant_references(const grant_ref_t *pprivate_head);
 
 int gnttab_claim_grant_reference(grant_ref_t *pprivate_head);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:23 2022
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 3/8] xen/grant-dma-ops: Add option to restrict memory access under Xen
Date: Thu,  2 Jun 2022 22:23:48 +0300
Message-Id: <1654197833-25362-4-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Juergen Gross <jgross@suse.com>

Introduce a Xen grant DMA-mapping layer which contains special DMA-mapping
routines for providing grant references as DMA addresses, to be used by
frontends (e.g. virtio) in Xen guests.

Add the needed functionality by providing a special set of DMA ops that
handle the required grant operations for the I/O pages.

A subsequent commit will introduce the use case for the xen-grant DMA ops
layer: enabling the use of virtio devices in Xen guests in a safe manner.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes RFC -> V1:
   - squash with almost all changes from commit (except handling "xen,dev-domid"
     property):
     "[PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer"
   - update commit subject/description and comments in code
   - leave only single Kconfig option XEN_VIRTIO and remove architectural
     dependencies
   - introduce common xen_has_restricted_virtio_memory_access() in xen.h
     and update arch_has_restricted_virtio_memory_access() for both
     Arm and x86 to call new helper
   - use (1ULL << 63) instead of 0x8000000000000000ULL for XEN_GRANT_ADDR_OFF
   - implement xen_virtio_dma_map(unmap)_sg() using example in swiotlb-xen.c
   - optimize padding by moving "broken" field in struct xen_virtio_data
   - remove unneeded per-device spinlock
   - remove the inclusion of virtio_config.h
   - rename everything according to the new naming scheme:
     s/virtio/grant_dma
   - add new hidden config option XEN_GRANT_DMA_OPS

Changes V1 -> V2:
   - fix checkpatch.pl warnings
   - remove the inclusion of linux/pci.h
   - rework to use xarray for data context
   - remove EXPORT_SYMBOL_GPL(xen_grant_setup_dma_ops);
   - remove the line of * after SPDX-License-Identifier
   - split changes into grant-dma-ops.c and arch_has_restricted_virtio_memory_access()
     and update commit subject/description accordingly
   - remove "default n" for config XEN_VIRTIO
   - implement xen_grant_dma_alloc(free)_pages()

Changes V2 -> V3:
   - Stefano already gave his Reviewed-by, I dropped it due to the changes (minor)
   - rename field "dev_domid" in struct xen_grant_dma_data to "backend_domid"
   - remove local variable "domid" in xen_grant_setup_dma_ops()

Changes V3 -> V4:
   - add Stefano's R-b
---
 drivers/xen/Kconfig         |   4 +
 drivers/xen/Makefile        |   1 +
 drivers/xen/grant-dma-ops.c | 311 ++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h       |   8 ++
 4 files changed, 324 insertions(+)
 create mode 100644 drivers/xen/grant-dma-ops.c

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 120d32f..313a9127 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -335,4 +335,8 @@ config XEN_UNPOPULATED_ALLOC
 	  having to balloon out RAM regions in order to obtain physical memory
 	  space to create such mappings.
 
+config XEN_GRANT_DMA_OPS
+	bool
+	select DMA_OPS
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 5aae66e..1a23cb0 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -39,3 +39,4 @@ xen-gntalloc-y				:= gntalloc.o
 xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
 obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
+obj-$(CONFIG_XEN_GRANT_DMA_OPS)		+= grant-dma-ops.o
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
new file mode 100644
index 00000000..44659f4
--- /dev/null
+++ b/drivers/xen/grant-dma-ops.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Xen grant DMA-mapping layer - contains special DMA-mapping routines
+ * for providing grant references as DMA addresses to be used by frontends
+ * (e.g. virtio) in Xen guests
+ *
+ * Copyright (c) 2021, Juergen Gross <jgross@suse.com>
+ */
+
+#include <linux/module.h>
+#include <linux/dma-map-ops.h>
+#include <linux/of.h>
+#include <linux/pfn.h>
+#include <linux/xarray.h>
+#include <xen/xen.h>
+#include <xen/grant_table.h>
+
+struct xen_grant_dma_data {
+	/* The ID of backend domain */
+	domid_t backend_domid;
+	/* Is device behaving sane? */
+	bool broken;
+};
+
+static DEFINE_XARRAY(xen_grant_dma_devices);
+
+#define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)
+
+static inline dma_addr_t grant_to_dma(grant_ref_t grant)
+{
+	return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
+}
+
+static inline grant_ref_t dma_to_grant(dma_addr_t dma)
+{
+	return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
+}
+
+static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
+{
+	struct xen_grant_dma_data *data;
+
+	xa_lock(&xen_grant_dma_devices);
+	data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
+	xa_unlock(&xen_grant_dma_devices);
+
+	return data;
+}
+
+/*
+ * DMA ops for Xen frontends (e.g. virtio).
+ *
+ * Used to act as a kind of software IOMMU for Xen guests by using grants as
+ * DMA addresses.
+ * Such a DMA address is formed by using the grant reference as a frame
+ * number and setting the highest address bit (this bit lets the backend
+ * distinguish it from e.g. an MMIO address).
+ *
+ * Note that for now we hard wire dom0 to be the backend domain. In order
+ * to support any domain as backend we'd need to add a way to communicate
+ * the domid of this backend, e.g. via Xenstore, via the PCI-device's
+ * config space or DT/ACPI.
+ */
+static void *xen_grant_dma_alloc(struct device *dev, size_t size,
+				 dma_addr_t *dma_handle, gfp_t gfp,
+				 unsigned long attrs)
+{
+	struct xen_grant_dma_data *data;
+	unsigned int i, n_pages = PFN_UP(size);
+	unsigned long pfn;
+	grant_ref_t grant;
+	void *ret;
+
+	data = find_xen_grant_dma_data(dev);
+	if (!data)
+		return NULL;
+
+	if (unlikely(data->broken))
+		return NULL;
+
+	ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+	if (!ret)
+		return NULL;
+
+	pfn = virt_to_pfn(ret);
+
+	if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
+		free_pages_exact(ret, n_pages * PAGE_SIZE);
+		return NULL;
+	}
+
+	for (i = 0; i < n_pages; i++) {
+		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+				pfn_to_gfn(pfn + i), 0);
+	}
+
+	*dma_handle = grant_to_dma(grant);
+
+	return ret;
+}
+
+static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
+			       dma_addr_t dma_handle, unsigned long attrs)
+{
+	struct xen_grant_dma_data *data;
+	unsigned int i, n_pages = PFN_UP(size);
+	grant_ref_t grant;
+
+	data = find_xen_grant_dma_data(dev);
+	if (!data)
+		return;
+
+	if (unlikely(data->broken))
+		return;
+
+	grant = dma_to_grant(dma_handle);
+
+	for (i = 0; i < n_pages; i++) {
+		if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+			dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+			data->broken = true;
+			return;
+		}
+	}
+
+	gnttab_free_grant_reference_seq(grant, n_pages);
+
+	free_pages_exact(vaddr, n_pages * PAGE_SIZE);
+}
+
+static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
+					      dma_addr_t *dma_handle,
+					      enum dma_data_direction dir,
+					      gfp_t gfp)
+{
+	void *vaddr;
+
+	vaddr = xen_grant_dma_alloc(dev, size, dma_handle, gfp, 0);
+	if (!vaddr)
+		return NULL;
+
+	return virt_to_page(vaddr);
+}
+
+static void xen_grant_dma_free_pages(struct device *dev, size_t size,
+				     struct page *vaddr, dma_addr_t dma_handle,
+				     enum dma_data_direction dir)
+{
+	xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
+}
+
+static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
+					 unsigned long offset, size_t size,
+					 enum dma_data_direction dir,
+					 unsigned long attrs)
+{
+	struct xen_grant_dma_data *data;
+	unsigned int i, n_pages = PFN_UP(size);
+	grant_ref_t grant;
+	dma_addr_t dma_handle;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return DMA_MAPPING_ERROR;
+
+	data = find_xen_grant_dma_data(dev);
+	if (!data)
+		return DMA_MAPPING_ERROR;
+
+	if (unlikely(data->broken))
+		return DMA_MAPPING_ERROR;
+
+	if (gnttab_alloc_grant_reference_seq(n_pages, &grant))
+		return DMA_MAPPING_ERROR;
+
+	for (i = 0; i < n_pages; i++) {
+		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
+				xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+	}
+
+	dma_handle = grant_to_dma(grant) + offset;
+
+	return dma_handle;
+}
+
+static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+				     size_t size, enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	struct xen_grant_dma_data *data;
+	unsigned int i, n_pages = PFN_UP(size);
+	grant_ref_t grant;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	data = find_xen_grant_dma_data(dev);
+	if (!data)
+		return;
+
+	if (unlikely(data->broken))
+		return;
+
+	grant = dma_to_grant(dma_handle);
+
+	for (i = 0; i < n_pages; i++) {
+		if (unlikely(!gnttab_end_foreign_access_ref(grant + i))) {
+			dev_alert(dev, "Grant still in use by backend domain, disabled for further use\n");
+			data->broken = true;
+			return;
+		}
+	}
+
+	gnttab_free_grant_reference_seq(grant, n_pages);
+}
+
+static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+				   int nents, enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	struct scatterlist *s;
+	unsigned int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	for_each_sg(sg, s, nents, i)
+		xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
+				attrs);
+}
+
+static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
+				int nents, enum dma_data_direction dir,
+				unsigned long attrs)
+{
+	struct scatterlist *s;
+	unsigned int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return -EINVAL;
+
+	for_each_sg(sg, s, nents, i) {
+		s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
+				s->length, dir, attrs);
+		if (s->dma_address == DMA_MAPPING_ERROR)
+			goto out;
+
+		sg_dma_len(s) = s->length;
+	}
+
+	return nents;
+
+out:
+	xen_grant_dma_unmap_sg(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
+	sg_dma_len(sg) = 0;
+
+	return -EIO;
+}
+
+static int xen_grant_dma_supported(struct device *dev, u64 mask)
+{
+	return mask == DMA_BIT_MASK(64);
+}
+
+static const struct dma_map_ops xen_grant_dma_ops = {
+	.alloc = xen_grant_dma_alloc,
+	.free = xen_grant_dma_free,
+	.alloc_pages = xen_grant_dma_alloc_pages,
+	.free_pages = xen_grant_dma_free_pages,
+	.mmap = dma_common_mmap,
+	.get_sgtable = dma_common_get_sgtable,
+	.map_page = xen_grant_dma_map_page,
+	.unmap_page = xen_grant_dma_unmap_page,
+	.map_sg = xen_grant_dma_map_sg,
+	.unmap_sg = xen_grant_dma_unmap_sg,
+	.dma_supported = xen_grant_dma_supported,
+};
+
+void xen_grant_setup_dma_ops(struct device *dev)
+{
+	struct xen_grant_dma_data *data;
+
+	data = find_xen_grant_dma_data(dev);
+	if (data) {
+		dev_err(dev, "Xen grant DMA data is already created\n");
+		return;
+	}
+
+	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto err;
+
+	/* XXX Dom0 is hardcoded as the backend domain for now */
+	data->backend_domid = 0;
+
+	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
+			GFP_KERNEL))) {
+		dev_err(dev, "Cannot store Xen grant DMA data\n");
+		goto err;
+	}
+
+	dev->dma_ops = &xen_grant_dma_ops;
+
+	return;
+
+err:
+	dev_err(dev, "Cannot set up Xen grant DMA ops, retain platform DMA ops\n");
+}
+
+MODULE_DESCRIPTION("Xen grant DMA-mapping layer");
+MODULE_AUTHOR("Juergen Gross <jgross@suse.com>");
+MODULE_LICENSE("GPL");
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index a3584a3..4f9fad5 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -221,4 +221,12 @@ static inline void xen_preemptible_hcall_end(void) { }
 
 #endif /* CONFIG_XEN_PV && !CONFIG_PREEMPTION */
 
+#ifdef CONFIG_XEN_GRANT_DMA_OPS
+void xen_grant_setup_dma_ops(struct device *dev);
+#else
+static inline void xen_grant_setup_dma_ops(struct device *dev)
+{
+}
+#endif /* CONFIG_XEN_GRANT_DMA_OPS */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:23 2022
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Russell King <linux@armlinux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 1/8] arm/xen: Introduce xen_setup_dma_ops()
Date: Thu,  2 Jun 2022 22:23:46 +0300
Message-Id: <1654197833-25362-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch introduces a new helper and places it in a new header.
The helper's purpose is to assign any Xen-specific DMA ops in a
single place. For now, we deal with xen-swiotlb DMA ops only.
One of the subsequent commits in this series will add the
xen-grant DMA ops case.

Also re-use the xen_swiotlb_detect() check on Arm32.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
[For arm64]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
Changes RFC -> V1:
   - update commit description
   - move commit to the beginning of the series
   - move #ifdef CONFIG_XEN from dma-mapping.c to xen-ops.h

Changes V1 -> V2:
   - add Stefano's R-b
   - add missing SPDX-License-Identifier to xen-ops.h

Changes V2 -> V3:
   - add Catalin's A-b

Changes V3 -> V4:
   - no changes
---
 arch/arm/include/asm/xen/xen-ops.h   |  2 ++
 arch/arm/mm/dma-mapping.c            |  7 ++-----
 arch/arm64/include/asm/xen/xen-ops.h |  2 ++
 arch/arm64/mm/dma-mapping.c          |  7 ++-----
 include/xen/arm/xen-ops.h            | 15 +++++++++++++++
 5 files changed, 23 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/xen-ops.h
 create mode 100644 arch/arm64/include/asm/xen/xen-ops.h
 create mode 100644 include/xen/arm/xen-ops.h

diff --git a/arch/arm/include/asm/xen/xen-ops.h b/arch/arm/include/asm/xen/xen-ops.h
new file mode 100644
index 00000000..7ebb7eb
--- /dev/null
+++ b/arch/arm/include/asm/xen/xen-ops.h
@@ -0,0 +1,2 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/arm/xen-ops.h>
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 82ffac6..059cce0 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -33,7 +33,7 @@
 #include <asm/dma-iommu.h>
 #include <asm/mach/map.h>
 #include <asm/system_info.h>
-#include <xen/swiotlb-xen.h>
+#include <asm/xen/xen-ops.h>
 
 #include "dma.h"
 #include "mm.h"
@@ -2287,10 +2287,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 
 	set_dma_ops(dev, dma_ops);
 
-#ifdef CONFIG_XEN
-	if (xen_initial_domain())
-		dev->dma_ops = &xen_swiotlb_dma_ops;
-#endif
+	xen_setup_dma_ops(dev);
 	dev->archdata.dma_ops_setup = true;
 }
 
diff --git a/arch/arm64/include/asm/xen/xen-ops.h b/arch/arm64/include/asm/xen/xen-ops.h
new file mode 100644
index 00000000..7ebb7eb
--- /dev/null
+++ b/arch/arm64/include/asm/xen/xen-ops.h
@@ -0,0 +1,2 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/arm/xen-ops.h>
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 6719f9e..6099c81 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -9,9 +9,9 @@
 #include <linux/dma-map-ops.h>
 #include <linux/dma-iommu.h>
 #include <xen/xen.h>
-#include <xen/swiotlb-xen.h>
 
 #include <asm/cacheflush.h>
+#include <asm/xen/xen-ops.h>
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
@@ -52,8 +52,5 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	if (iommu)
 		iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
 
-#ifdef CONFIG_XEN
-	if (xen_swiotlb_detect())
-		dev->dma_ops = &xen_swiotlb_dma_ops;
-#endif
+	xen_setup_dma_ops(dev);
 }
diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
new file mode 100644
index 00000000..288deb1
--- /dev/null
+++ b/include/xen/arm/xen-ops.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM_XEN_OPS_H
+#define _ASM_ARM_XEN_OPS_H
+
+#include <xen/swiotlb-xen.h>
+
+static inline void xen_setup_dma_ops(struct device *dev)
+{
+#ifdef CONFIG_XEN
+	if (xen_swiotlb_detect())
+		dev->dma_ops = &xen_swiotlb_dma_ops;
+#endif
+}
+
+#endif /* _ASM_ARM_XEN_OPS_H */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341179.566357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQd-00089d-OZ; Thu, 02 Jun 2022 19:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341179.566357; Thu, 02 Jun 2022 19:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQd-00089W-LI; Thu, 02 Jun 2022 19:24:19 +0000
Received: by outflank-mailman (input) for mailman id 341179;
 Thu, 02 Jun 2022 19:24:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQc-00089L-Db
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:18 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97f434de-e2a9-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 21:24:17 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id q21so11856781ejm.1
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:16 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.14
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97f434de-e2a9-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=uZWf8OfW06wjRVZAgl1HK+YLzR5URPKYW1hO+2XycMY=;
        b=WXCbAGOdwthNz7JOaFwSAUnxA/ls8r/957G0EZDOoet9Ob1zjdRpDX2mg5f7nRnQMe
         i7STT3KF5AT8zIFgesLqvyTUBaBAlR5fiYvzZ+Fws0kLwrNbREbLOhCQbg7kloyNOGhN
         RnenAJBMA6moK+K4pnXCG4gdMKwFnwfLhiEwFjpWRXcdTIqWwNglLc40m8Yn6zAu1pLu
         tpLnrRaBP3GCVuBAREF1faftktovinZR7ZIR2gDTIwlx3VvIt5dvw7EUscc+xzBE6NXC
         Y8wPNXXUy/gecJ9bHOIdAFdYMOAS5N76qAxkz8MPPGJlGmLdVZF+VeBdGiJFhM6vWQgz
         GKlw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=uZWf8OfW06wjRVZAgl1HK+YLzR5URPKYW1hO+2XycMY=;
        b=61Q79hld1a0nSqy9GS79+rSpruhueUb3q635WCJUdnDzccPGaWHPRxi9TeNIt+DvdY
         eiXog/mJQmHvq5VGo0CGkRW+5O7WROyzyctSfW5RhpqBpTliSHa509zKEEDePzplPXUW
         zt+5AllnmBAPDeyWWdQ2lEV1nmmK+Lhyvjkpu8JjP4hCMAOmUzzFky5jH9aqQXeH3VWf
         spqrW3qmcQMYUUvtzP2C9su5b5O4nSSPnslRhhWE9ueykCccDoXlybI4/D+tsnUTw0dh
         wCjimGm3EF6bbAWdDkWcPPg8l3IpMveO72r1vzBlR8N+94NtjFSafb5urBxZpzc4msEw
         ptOw==
X-Gm-Message-State: AOAM530XcBj8fkgFptB/4/2/KC4zkku+1R+cieFV9UF9YbIg+a4Dt04z
	aq73T/VhgKrEsVoV8qFZNl3hOFHfdn4=
X-Google-Smtp-Source: ABdhPJzGG+rmpMMdlJHu1m2b41/LSpc6IQX1oGnbgOADMIvgbQTpOYfgkFHqvKd7AUU946nanljHwg==
X-Received: by 2002:a17:907:7f0d:b0:6ff:b84:a4aa with SMTP id qf13-20020a1709077f0d00b006ff0b84a4aamr5807189ejc.595.1654197855998;
        Thu, 02 Jun 2022 12:24:15 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V4 0/8] virtio: Solution to restrict memory access under Xen using xen-grant DMA-mapping layer
Date: Thu,  2 Jun 2022 22:23:45 +0300
Message-Id: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add support for restricting memory access under Xen using a dedicated
grant table [1] based DMA-mapping layer. The series is based on Juergen Gross’ initial work [2], which implies
using grant references instead of raw guest physical addresses (GPAs) for virtio communication (a kind of
software IOMMU).

You can find the RFC-V3 patch series (and the previous discussions) at [3].

Please note: the only diff between V3 and V4 is in commit #5; I have also collected the acks (commits #4-#7).

The high-level idea is to create a new Xen grant table based DMA-mapping layer for guest Linux whose main
purpose is to provide a special 64-bit DMA address. This address is formed from the grant reference (for a page
to be shared with the backend) plus the offset within the page, with the highest address bit set (so that the
backend can distinguish a grant-ref based DMA address from a normal GPA). For this to work we need the ability
to allocate contiguous (consecutive) grant references for multi-page allocations. The backend then needs to
offer the VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support the virtio-mmio modern
transport for 64-bit addresses in the virtqueue).

Xen's grant mapping mechanism is a secure and safe solution for sharing pages between domains that has been
proven to work for years (in the context of traditional Xen PV drivers, for example). So far, foreign mapping
has been used for the virtio backend to map and access guest memory. With foreign mapping, the backend is able
to map arbitrary pages from guest memory (or even from Dom0 memory), and as a result a malicious backend
running in a non-trusted domain can take advantage of this. With grant mapping, in contrast, the backend is
only allowed to map pages which were explicitly granted by the guest beforehand and nothing else. According to
the discussions in various mainline threads, this solution is likely to be welcome because it fits perfectly
into the security model Xen provides.

What is more, the grant table based solution requires zero changes to the Xen hypervisor itself, at least with
virtio-mmio and DT. In comparison, a "foreign mapping + virtio-iommu" solution, for example, would require a
whole new complex emulator in the hypervisor, in addition to new functionality/hypercalls to pass the IOVA from
the virtio backend running elsewhere to the hypervisor and translate it to a GPA before mapping it into the
P2M, or to deny the foreign mapping request if no corresponding IOVA-GPA mapping is present in the IOMMU page
table for that particular device. We only need to update the toolstack to insert a "xen,grant-dma" IOMMU node
(to be referred to by the virtio-mmio device using the "iommus" property) when creating the guest device tree.
This node is an indicator for the guest to use the Xen grant mapping scheme for that device; the IOMMU endpoint
ID is the ID of the Xen domain where the corresponding backend is running, and that backend domid is used as an
argument to the grant mapping APIs. It is worth mentioning that the toolstack patch is based on the
not-yet-upstreamed "Virtio support for toolstack on Arm" series, which is under review now [4].
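As a purely hypothetical illustration of what the toolstack would generate (node names, the MMIO address, and the domid value here are made up for the example; the actual binding schema is added in patch #5 of this series), the guest device tree could contain something like:

```dts
/* Hypothetical fragment: the virtio backend runs in Xen domain 1. */
xen_iommu: iommu {
    compatible = "xen,grant-dma";
    #iommu-cells = <1>;
};

virtio@2000000 {
    compatible = "virtio,mmio";
    reg = <0x2000000 0x200>;
    /* The single cell is the domid of the backend domain. */
    iommus = <&xen_iommu 1>;
};
```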

Please note the following:
- The patch series only covers Arm and virtio-mmio (device tree) for now. To enable the restricted memory
  access feature on Arm, the following option should be set:
  CONFIG_XEN_VIRTIO=y
- The patch series is based on the "kernel: add new infrastructure for platform_has() support" series, which
  is under review now [5]
- Xen should be built with the following options:
  CONFIG_IOREQ_SERVER=y
  CONFIG_EXPERT=y

The patch series is rebased on the "for-linus-5.19" branch [6] with the "platform_has()" series applied, and was tested
on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a standalone userspace (non-QEMU) virtio-mmio based virtio-disk
backend running in the driver domain and a Linux guest running the existing virtio-blk driver (frontend). No issues were
observed. Guest domain "reboot/destroy" use-cases work properly.
I have also tested other use-cases, such as assigning several virtio block devices, or a mix of virtio and Xen PV block
devices, to the guest. The patch series was build-tested on Arm32 and x86.

1. Xen changes are located at (last patch):
https://github.com/otyshchenko1/xen/commits/libxl_virtio_next2_1
2. Linux changes are located at (last 8 patches):
https://github.com/otyshchenko1/linux/commits/virtio_grant9
3. virtio-disk changes are located at:
https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant

Any feedback/help would be highly appreciated.

[1] https://xenbits.xenproject.org/docs/4.16-testing/misc/grant-tables.txt
[2] https://www.youtube.com/watch?v=IrlEdaIUDPk
[3] https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekstysh@gmail.com/
    https://lore.kernel.org/xen-devel/1650646263-22047-1-git-send-email-olekstysh@gmail.com/
    https://lore.kernel.org/xen-devel/1651947548-4055-1-git-send-email-olekstysh@gmail.com/
    https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
[4] https://lore.kernel.org/xen-devel/1654106261-28044-1-git-send-email-olekstysh@gmail.com/
    https://lore.kernel.org/xen-devel/1653944813-17970-1-git-send-email-olekstysh@gmail.com/
[5] https://lore.kernel.org/xen-devel/20220504155703.13336-1-jgross@suse.com/
[6] https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git/log/?h=for-linus-5.19

Juergen Gross (3):
  xen/grants: support allocating consecutive grants
  xen/grant-dma-ops: Add option to restrict memory access under Xen
  xen/virtio: Enable restricted memory access using Xen grant mappings

Oleksandr Tyshchenko (5):
  arm/xen: Introduce xen_setup_dma_ops()
  dt-bindings: Add xen,grant-dma IOMMU description for xen-grant DMA ops
  xen/grant-dma-iommu: Introduce stub IOMMU driver
  xen/grant-dma-ops: Retrieve the ID of backend's domain for DT devices
  arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices

 .../devicetree/bindings/iommu/xen,grant-dma.yaml   |  39 +++
 arch/arm/include/asm/xen/xen-ops.h                 |   2 +
 arch/arm/mm/dma-mapping.c                          |   7 +-
 arch/arm/xen/enlighten.c                           |   2 +
 arch/arm64/include/asm/xen/xen-ops.h               |   2 +
 arch/arm64/mm/dma-mapping.c                        |   7 +-
 arch/x86/xen/enlighten_hvm.c                       |   2 +
 arch/x86/xen/enlighten_pv.c                        |   2 +
 drivers/xen/Kconfig                                |  20 ++
 drivers/xen/Makefile                               |   2 +
 drivers/xen/grant-dma-iommu.c                      |  78 +++++
 drivers/xen/grant-dma-ops.c                        | 345 +++++++++++++++++++++
 drivers/xen/grant-table.c                          | 251 ++++++++++++---
 include/xen/arm/xen-ops.h                          |  18 ++
 include/xen/grant_table.h                          |   4 +
 include/xen/xen-ops.h                              |  13 +
 include/xen/xen.h                                  |   8 +
 17 files changed, 756 insertions(+), 46 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
 create mode 100644 arch/arm/include/asm/xen/xen-ops.h
 create mode 100644 arch/arm64/include/asm/xen/xen-ops.h
 create mode 100644 drivers/xen/grant-dma-iommu.c
 create mode 100644 drivers/xen/grant-dma-ops.c
 create mode 100644 include/xen/arm/xen-ops.h

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341183.566398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQi-0000hC-2l; Thu, 02 Jun 2022 19:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341183.566398; Thu, 02 Jun 2022 19:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQh-0000gF-U4; Thu, 02 Jun 2022 19:24:23 +0000
Received: by outflank-mailman (input) for mailman id 341183;
 Thu, 02 Jun 2022 19:24:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQg-00089L-80
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:22 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9aae726a-e2a9-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 21:24:21 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id q21so11856781ejm.1
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:21 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9aae726a-e2a9-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=vx2kLvIZS2juZNk5FVfsvDVXqGt8DgrAljecfWnEYWc=;
        b=dJJZFFwM4qaW1Hz4dYQgCfKswqdA9aOKawg49H4QWfa1bpfs3bSdZPuAxn/Xlr/1S0
         udjnMGecRU2bFne+nBMM8u5UDXRnNfywbI2TyTkY8otdiVk2aQjOq4BQmwbkJSJ3L7S1
         jwaMwvMxLCxk8leL7XwK0V5JvtOeksJ0PrbA3kniJymeTZP0TBCUJHLEOGpfIh4ex5Zb
         znL91zrw0l+8zPhwEWoiplmzqptkYlU8bd4N/Kot4q9ALGcez+a6AsMGXTf/vNe2P9Ip
         nFE1N1MwvX1kJfSwjfnr1NTJJ7xhp9xOVpXwjExSAy9XIGsJdoe4MxqN5eEv9PvhmaB7
         gKMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=vx2kLvIZS2juZNk5FVfsvDVXqGt8DgrAljecfWnEYWc=;
        b=aD3QhKxbHQxqNgBWEYCwD4K48yNMq8IIH82VZilKfvRLwk/tWQkjjh0Rkx2jrjQ8vk
         gfO3fnKi0oUKf2646sFIqq8YdFosiHspVSPdfK9+46MKDWhEscnBIzg6VFDNIukTiWlb
         RB4B+vds38PnExA/zP0WCEJA8leKxtyAHV21ITnwwtGvw/4Am8bvOgz9SvXjyt21bGp8
         LtrPo6MuqsfT7uZO6QPNUbZeVH2ZUXLLkxePT2MsKto5ns+gzAdp51HpehflyAol3GWF
         44ZCwG06DcnYPRBD26tmJF9lI4dggAfLsPzlGQ4KCYa3rs8ZcqYh78hFQsLykZN5+pwU
         G2OA==
X-Gm-Message-State: AOAM533uZD/WXIvu0Qk2j9Ttd8gKSE/EzJwxblFprqa2rM6aIZi4cSX0
	EcvmszV5Qiz4opxOhTpkIfGEXA99v2Y=
X-Google-Smtp-Source: ABdhPJyBS8TsE8u/9Wi3VTZm5zZNB5OZHRxuUlBCbPVvVOS3nqruK/+dbbz+8v2AJMH8brXw7F7JaA==
X-Received: by 2002:a17:907:1b24:b0:6ff:235c:2ffd with SMTP id mp36-20020a1709071b2400b006ff235c2ffdmr5887568ejc.116.1654197860794;
        Thu, 02 Jun 2022 12:24:20 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Russell King <linux@armlinux.org.uk>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Julien Grall <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 4/8] xen/virtio: Enable restricted memory access using Xen grant mappings
Date: Thu,  2 Jun 2022 22:23:49 +0300
Message-Id: <1654197833-25362-5-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Juergen Gross <jgross@suse.com>

In order to support virtio in Xen guests, add a config option XEN_VIRTIO
enabling the user to specify whether, in all Xen guests, virtio should
only be able to access memory via Xen grant mappings on the host side.

Also set the PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS feature from the
guest initialization code on Arm and x86 if CONFIG_XEN_VIRTIO is enabled.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes V1 -> V2:
   - new patch, split required changes from commit:
    "[PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen"
   - rework according to new platform_has() infrastructure

Changes V2 -> V3:
   - add Stefano's R-b

Changes V3 -> V4:
   - add Boris' R-b
---
 arch/arm/xen/enlighten.c     |  2 ++
 arch/x86/xen/enlighten_hvm.c |  2 ++
 arch/x86/xen/enlighten_pv.c  |  2 ++
 drivers/xen/Kconfig          | 11 +++++++++++
 include/xen/xen.h            |  8 ++++++++
 5 files changed, 25 insertions(+)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 07eb69f..1f9c3ba 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -443,6 +443,8 @@ static int __init xen_guest_init(void)
 	if (!xen_domain())
 		return 0;
 
+	xen_set_restricted_virtio_memory_access();
+
 	if (!acpi_disabled)
 		xen_acpi_guest_init();
 	else
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 517a9d8..8b71b1d 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -195,6 +195,8 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_pv_domain())
 		return;
 
+	xen_set_restricted_virtio_memory_access();
+
 	init_hvm_pv_info();
 
 	reserve_shared_info();
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index ca85d14..30d24fe 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -108,6 +108,8 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
 static void __init xen_pv_init_platform(void)
 {
+	xen_set_restricted_virtio_memory_access();
+
 	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
 
 	set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 313a9127..a7bd8ce 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -339,4 +339,15 @@ config XEN_GRANT_DMA_OPS
 	bool
 	select DMA_OPS
 
+config XEN_VIRTIO
+	bool "Xen virtio support"
+	depends on VIRTIO
+	select XEN_GRANT_DMA_OPS
+	help
+	  Enable virtio support for running as Xen guest. Depending on the
+	  guest type this will require special support on the backend side
+	  (qemu or kernel, depending on the virtio device types used).
+
+	  If in doubt, say n.
+
 endmenu
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a99bab8..0780a81 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,6 +52,14 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
+#include <linux/platform-feature.h>
+
+static inline void xen_set_restricted_virtio_memory_access(void)
+{
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
+		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+}
+
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341184.566412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQk-00017G-9y; Thu, 02 Jun 2022 19:24:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341184.566412; Thu, 02 Jun 2022 19:24:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQk-000173-6V; Thu, 02 Jun 2022 19:24:26 +0000
Received: by outflank-mailman (input) for mailman id 341184;
 Thu, 02 Jun 2022 19:24:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQi-0000dK-6i
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:24 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9baf920f-e2a9-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 21:24:23 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id y19so11826897ejq.6
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:23 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9baf920f-e2a9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=JATYvLe48LS2H6IM0secvITvXQRFYJatDO7tIGdxKew=;
        b=VTT0OBaJLQkU0DXmYjkEMy4pAELAV+jBlguhxMn7A/UqRxOPNmS5CHgPtJ8qEHZ37t
         dJchUBQgCQYu0sVoQ3XZJeZMu/D3NfZmgNzNcTSM6d3mQgW/gfiuwtDFZ3C/zopEft70
         53psImCdfXqfPzsfnsJ0x1WX2TilCTwWqqoFx42M3g540JOK5MIBPNJ4HDu3rtPd0NvR
         Q9zPUyeZA5Hv2CYInUh8MlFvL4nmj/eO1LHolt3AXfJI734JjCLw7bSnIYS5ZeZ4TIuR
         W3MaS7EEAi42acDU5NxK7pTxBgmMuOOT0q63i2Wpbh1F/ySWKn407asxBaQC4EJYFYUB
         ZPxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=JATYvLe48LS2H6IM0secvITvXQRFYJatDO7tIGdxKew=;
        b=sikCSiT7VNwQ2CGBqHKTY68QomlhyeNdB7nsn3XWEiFcCRMzcfqrCLptjtX9iNvJEk
         Kwmj+O/2kH9rgOim7CRPhkIhXu3LrorEW7uM0JmpiNrssZlmODLD1z2Nxk7KmDNq6XY1
         +VTHJMrvX5yGkLsQZq0vEqb+oRVGIfDmlgwn7p2dlSVSF16Jj/q5SZHadUeR2JNZ+uw+
         4m3tbBuOTJBSxgYNP9pitEzY6RaukOTab0xLJww8v5nj+3K94XeeKmS0yZ2dXraKQAwH
         RuY/c4XWT9Hiw1rDH0v171bMW7b0hCtGEUiH94UpePstY3kwQg4HMQ4wOekbr8p3Zo2w
         q7Uw==
X-Gm-Message-State: AOAM5308JzgloSop9IBc8CI4xqlbcvoxkEIVoCSOpT2uiOAogRPAbSsH
	/xYHS/xSUH5JokG1RfkY5izhS91B0mI=
X-Google-Smtp-Source: ABdhPJwiCOLciOxN6TBYEcpArcIF1bwM/jhBDOJnBMmkSZAOupAwiR+Hr2XzW8lOBJHN2sRoLEmt4Q==
X-Received: by 2002:a17:906:8146:b0:6ff:119c:881f with SMTP id z6-20020a170906814600b006ff119c881fmr5659395ejw.38.1654197862292;
        Thu, 02 Jun 2022 12:24:22 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	iommu@lists.linux-foundation.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Rob Herring <robh+dt@kernel.org>,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
	Julien Grall <julien@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH V4 5/8] dt-bindings: Add xen,grant-dma IOMMU description for xen-grant DMA ops
Date: Thu,  2 Jun 2022 22:23:50 +0300
Message-Id: <1654197833-25362-6-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The main purpose of this binding is to communicate Xen-specific
information using the generic IOMMU device tree bindings (which are
a good fit here) rather than introducing a custom property.

Introduce a Xen-specific IOMMU for virtualized devices (e.g. virtio)
to be used by the Xen grant DMA-mapping layer in a subsequent commit.

A reference to the Xen-specific IOMMU node using the "iommus" property
indicates that Xen grant mappings need to be enabled for the device,
and it specifies the ID of the domain where the corresponding backend
resides. The domid (domain ID) is used as an argument to the Xen grant
mapping APIs.

This is needed for the option to restrict memory access using Xen grant
mappings, whose primary goal is to enable the use of virtio devices in
Xen guests, to work.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes RFC -> V1:
   - update commit subject/description and text in description
   - move to devicetree/bindings/arm/

Changes V1 -> V2:
   - update text in description
   - change the maintainer of the binding
   - fix validation issue
   - reference xen,dev-domid.yaml schema from virtio/mmio.yaml

Change V2 -> V3:
   - Stefano had already given his Reviewed-by; I dropped it due to the (significant) changes
   - use generic IOMMU device tree bindings instead of custom property
     "xen,dev-domid"
   - change commit subject and description, was
     "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"

Changes V3 -> V4:
   - add Stefano's R-b
   - remove underscore in iommu node name
   - remove consumer example virtio@3000
   - update text for two descriptions
---
 .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 39 ++++++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml

diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
new file mode 100644
index 00000000..be1539d
--- /dev/null
+++ b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xen specific IOMMU for virtualized devices (e.g. virtio)
+
+maintainers:
+  - Stefano Stabellini <sstabellini@kernel.org>
+
+description:
+  The Xen IOMMU represents the Xen grant table interface. Grant mappings
+  are to be used with devices connected to the Xen IOMMU using the "iommus"
+  property, which also specifies the ID of the backend domain.
+  The binding is required to restrict memory access using Xen grant mappings.
+
+properties:
+  compatible:
+    const: xen,grant-dma
+
+  '#iommu-cells':
+    const: 1
+    description:
+      The single cell is the domid (domain ID) of the domain where the backend
+      is running.
+
+required:
+  - compatible
+  - "#iommu-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    iommu {
+        compatible = "xen,grant-dma";
+        #iommu-cells = <1>;
+    };
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341185.566417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQk-0001AC-RF; Thu, 02 Jun 2022 19:24:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341185.566417; Thu, 02 Jun 2022 19:24:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQk-00019J-Hi; Thu, 02 Jun 2022 19:24:26 +0000
Received: by outflank-mailman (input) for mailman id 341185;
 Thu, 02 Jun 2022 19:24:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQi-0000dK-Vq
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:25 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c3c75da-e2a9-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 21:24:24 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id s12so4610717ejx.3
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:24 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.22
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c3c75da-e2a9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=iNk74UaWLtQZL938Xxs3lBfmlvnA9FseeUHkafxECOY=;
        b=l5kSUmTyGQGXi74E515dldlWlFe3o08m4WJUZfONrAYROcSF48RNd1TEGdK3LNFW+s
         tHjQHCQNYLmJh9wphr3xZq+NVXxnGx3cH+ocoirkpGeSZ869Kup4B5xOfIsyusPe34JW
         fimV2t9ST7TGBac/fuJaEnuwygaZgZ26gqFiCY1muNWsRbEzGytMUMQZC+PpYHUU7H29
         PVza1L+tH2IrywKG2WMwfvahBQ9yUpDBlpQqq+UjuysWSRqg/fQRnKtJgG0s+bfA05tE
         a1K1tERwgCp1gamFt3ZSwmgs8FxhO+ttN+3bbIuzaXgPzMAYhqlnluJoFMc+67kQwS3r
         9Ecg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=iNk74UaWLtQZL938Xxs3lBfmlvnA9FseeUHkafxECOY=;
        b=4tKQGjnt8cPOhMHY1OTfDlmJbKZEJtZqiMT8oiLwL6GXY/fc0uTfJeZ3ijPi0TH+xR
         1UHWeF758izIqmBnh7xxyDY/ruBlrCOdYH7WNmu+LRXVO0ZTobAOVZZQdngrOIvSp375
         wGZaK0wJS0WBQ0Joy1j4E+NHmcpkjTzphBQvJSaLI5HSOSaMoA0viKwIrVvLfIBWdTjm
         DA1yCakrK0a2M6eD+EUIgBwjCHXOA5a2lN2NN+QZ8qpT6ya8nFTyBPO8t85epW9iWoNN
         FSO/g2J79sWVAZ3lmCFnEchyYu59VJ3QrcvOgeX2UhA9/M4/7QbbUG3msvZ9JdCe+UdE
         hVgw==
X-Gm-Message-State: AOAM531t39dhlQY+kKFukx9HBByrPN5Raj19nHYMlLQx6FZ6LLZGjhTV
	itoJKc68MaYUSoo0M/EPIuttlf2ngUA=
X-Google-Smtp-Source: ABdhPJwzZQbzIbCIFf3Pkn1ya1TGT8EEvSTk8i6RqQF4X2KAXdSL8hILlCy1w1gJmVJ7oRgwQh7H8w==
X-Received: by 2002:a17:906:d1cc:b0:709:567f:3506 with SMTP id bs12-20020a170906d1cc00b00709567f3506mr5538761ejb.363.1654197863445;
        Thu, 02 Jun 2022 12:24:23 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 6/8] xen/grant-dma-iommu: Introduce stub IOMMU driver
Date: Thu,  2 Jun 2022 22:23:51 +0300
Message-Id: <1654197833-25362-7-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

In order to reuse the generic IOMMU device tree bindings in the Xen grant
DMA-mapping layer, we need to add this stub driver from a fw_devlink
perspective (grant-dma-ops cannot be converted into a proper IOMMU
driver).

Otherwise, just reusing the IOMMU bindings without a corresponding
driver leads to a deferred probe timeout later on, because the IOMMU
device never becomes available.

This stub driver does nothing except register empty iommu_ops; the
upper "of_iommu" layer will treat this as a NO_IOMMU condition and
won't return -EPROBE_DEFER.

As this driver is quite different from most hardware IOMMU
implementations and is only needed in Xen guests, place it in the
drivers/xen directory. The subsequent commit will make use of it.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
According to the discussion at:
https://lore.kernel.org/xen-devel/c0f78aab-e723-fe00-a310-9fe52ec75e48@gmail.com/

Change V2 -> V3:
   - new patch

Changes V3 -> V4:
   - add Stefano's R-b
---
 drivers/xen/Kconfig           |  4 +++
 drivers/xen/Makefile          |  1 +
 drivers/xen/grant-dma-iommu.c | 78 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 83 insertions(+)
 create mode 100644 drivers/xen/grant-dma-iommu.c

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index a7bd8ce..35d20d9 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -335,6 +335,10 @@ config XEN_UNPOPULATED_ALLOC
 	  having to balloon out RAM regions in order to obtain physical memory
 	  space to create such mappings.
 
+config XEN_GRANT_DMA_IOMMU
+	bool
+	select IOMMU_API
+
 config XEN_GRANT_DMA_OPS
 	bool
 	select DMA_OPS
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 1a23cb0..c0503f1 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -40,3 +40,4 @@ xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
 obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
 obj-$(CONFIG_XEN_GRANT_DMA_OPS)		+= grant-dma-ops.o
+obj-$(CONFIG_XEN_GRANT_DMA_IOMMU)	+= grant-dma-iommu.o
diff --git a/drivers/xen/grant-dma-iommu.c b/drivers/xen/grant-dma-iommu.c
new file mode 100644
index 00000000..16b8bc0
--- /dev/null
+++ b/drivers/xen/grant-dma-iommu.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Stub IOMMU driver which does nothing.
+ * The main purpose of it being present is to reuse generic IOMMU device tree
+ * bindings by Xen grant DMA-mapping layer.
+ *
+ * Copyright (C) 2022 EPAM Systems Inc.
+ */
+
+#include <linux/iommu.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+struct grant_dma_iommu_device {
+	struct device *dev;
+	struct iommu_device iommu;
+};
+
+/* Nothing is really needed here */
+static const struct iommu_ops grant_dma_iommu_ops;
+
+static const struct of_device_id grant_dma_iommu_of_match[] = {
+	{ .compatible = "xen,grant-dma" },
+	{ },
+};
+
+static int grant_dma_iommu_probe(struct platform_device *pdev)
+{
+	struct grant_dma_iommu_device *mmu;
+	int ret;
+
+	mmu = devm_kzalloc(&pdev->dev, sizeof(*mmu), GFP_KERNEL);
+	if (!mmu)
+		return -ENOMEM;
+
+	mmu->dev = &pdev->dev;
+
+	ret = iommu_device_register(&mmu->iommu, &grant_dma_iommu_ops, &pdev->dev);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, mmu);
+
+	return 0;
+}
+
+static int grant_dma_iommu_remove(struct platform_device *pdev)
+{
+	struct grant_dma_iommu_device *mmu = platform_get_drvdata(pdev);
+
+	platform_set_drvdata(pdev, NULL);
+	iommu_device_unregister(&mmu->iommu);
+
+	return 0;
+}
+
+static struct platform_driver grant_dma_iommu_driver = {
+	.driver = {
+		.name = "grant-dma-iommu",
+		.of_match_table = grant_dma_iommu_of_match,
+	},
+	.probe = grant_dma_iommu_probe,
+	.remove = grant_dma_iommu_remove,
+};
+
+static int __init grant_dma_iommu_init(void)
+{
+	struct device_node *iommu_np;
+
+	iommu_np = of_find_matching_node(NULL, grant_dma_iommu_of_match);
+	if (!iommu_np)
+		return 0;
+
+	of_node_put(iommu_np);
+
+	return platform_driver_register(&grant_dma_iommu_driver);
+}
+subsys_initcall(grant_dma_iommu_init);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341186.566432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQm-0001cd-Gt; Thu, 02 Jun 2022 19:24:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341186.566432; Thu, 02 Jun 2022 19:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQm-0001br-CA; Thu, 02 Jun 2022 19:24:28 +0000
Received: by outflank-mailman (input) for mailman id 341186;
 Thu, 02 Jun 2022 19:24:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQk-0000dK-2K
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:26 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9cda2f64-e2a9-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 21:24:25 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id v1so1130087ejg.13
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:25 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cda2f64-e2a9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=z5QlzeQbdKIOve+OBZzBdwGWf+LJ0VAgmCK8UR9Ip/I=;
        b=JsGs3AOUU2Y9WUix6EzdEzmQ3rqReff1j2L+DhP0gYv0LiZycnCA/GqH2+Y4du2Uad
         8pIDJxQ+ZbH8xFSIa4iHlHz1kf10KRLyvRXu8pYX+m7htkxYxy5W7rW9e5AU0SZKIB1f
         ZlNaolglE+ZvYXmncHkSx2TMsS3BX0vio1PNHxfT+XiqVUwschxzD+4/rDILAEIxdpre
         JinQjTFlMMiD7jcogL7DYqS3TYvavYjod3/XZULDumCHOHcUMksAu1JZWvL0xp7D5+3Y
         sGmaZ4oBtjIb7qUGxJSvKVsRWuAUQM/6LaNoR1g1cdI7S2ppacHZqVvqdTP/1nokkEXr
         grxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=z5QlzeQbdKIOve+OBZzBdwGWf+LJ0VAgmCK8UR9Ip/I=;
        b=rRFdG9k9g6db6ShwAA/rQjDdc5a/ohdZXaaB+jIYsIsvJ1wWunBf2UihfsmrsO8apP
         iM3ZDd/RySVGkLFrwyK7JTrk9r+9g7rXgf0TIU/E6HyVHKtUDBsZmIOm9drWzlPB7Mfl
         AQT2JHXRQ30X/6WUUEi0WGgmBH/DyAWr36S4gduz04h9GMg8i9oqYUlRIhanBgOwotTV
         xIsFTyRi++AhC+yQFtFnROBv41ae5cQ5njnCApzTuxaMCNnFrsCeU1LdxWkDNncdTkt1
         Gf8KlNlkEdYVzcNwjsAFd8mgwl1hT52N8gk6TjllnTMlkpI7gur9GAQq72Lj2AWwxdfx
         R83A==
X-Gm-Message-State: AOAM532y+nyxkfpDiKRw6HDa27FyFn+5+lL+52q1S+CBTOB02/qJJWH0
	RF8DKvjcBFLiMNIHP/c1cdOhZzlFX1U=
X-Google-Smtp-Source: ABdhPJyum/BEFc4hal1QK39RdGrJhyKx4sfjLkjCqt1swmw2sAGVsLdhJxMQyWsjJ3IkhUXEWuqIsA==
X-Received: by 2002:a17:907:ea1:b0:6fe:f6a5:6009 with SMTP id ho33-20020a1709070ea100b006fef6a56009mr5910513ejc.275.1654197864490;
        Thu, 02 Jun 2022 12:24:24 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 7/8] xen/grant-dma-ops: Retrieve the ID of backend's domain for DT devices
Date: Thu,  2 Jun 2022 22:23:52 +0300
Message-Id: <1654197833-25362-8-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Use the presence of an "iommus" property pointing to an IOMMU node with
the recently introduced "xen,grant-dma" compatible string as a clear
indicator that the Xen grant mappings scheme is enabled for that device,
and read the ID of the Xen domain where the corresponding backend is
running. The domid (domain ID) is used as an argument to the Xen grant
mapping APIs.

To avoid the deferred probe timeout that occurs after reusing the
generic IOMMU device tree bindings (because the IOMMU device never
becomes available), enable the recently introduced stub IOMMU driver by
selecting XEN_GRANT_DMA_IOMMU.

Also introduce xen_is_grant_dma_device() to check whether the xen-grant
DMA ops need to be set for a given device.

Remove the hardcoded domid 0 in xen_grant_setup_dma_ops().

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes RFC -> V1:
   - new patch, split required changes from commit:
    "[PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer"
   - update checks in xen_virtio_setup_dma_ops() to only support
     DT devices for now
   - remove the "virtio,mmio" check from xen_is_virtio_device()
   - rename everything according to the new naming scheme:
     s/virtio/grant_dma

Changes V1 -> V2:
   - remove dev_is_pci() check in xen_grant_setup_dma_ops()
   - remove EXPORT_SYMBOL_GPL(xen_is_grant_dma_device);

Changes V2 -> V3:
   - Stefano already gave his Reviewed-by; I dropped it due to the significant changes
   - update commit description
   - reuse generic IOMMU device tree bindings, select XEN_GRANT_DMA_IOMMU
     to avoid the deferred probe timeout

Changes V3 -> V4:
   - add Stefano's R-b
---
 drivers/xen/Kconfig         |  1 +
 drivers/xen/grant-dma-ops.c | 48 ++++++++++++++++++++++++++++++++++++++-------
 include/xen/xen-ops.h       |  5 +++++
 3 files changed, 47 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 35d20d9..bfd5f4f 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -347,6 +347,7 @@ config XEN_VIRTIO
 	bool "Xen virtio support"
 	depends on VIRTIO
 	select XEN_GRANT_DMA_OPS
+	select XEN_GRANT_DMA_IOMMU if OF
 	help
 	  Enable virtio support for running as Xen guest. Depending on the
 	  guest type this will require special support on the backend side
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 44659f4..6586152 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -55,11 +55,6 @@ static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
  * Such a DMA address is formed by using the grant reference as a frame
  * number and setting the highest address bit (this bit is for the backend
  * to be able to distinguish it from e.g. a mmio address).
- *
- * Note that for now we hard wire dom0 to be the backend domain. In order
- * to support any domain as backend we'd need to add a way to communicate
- * the domid of this backend, e.g. via Xenstore, via the PCI-device's
- * config space or DT/ACPI.
  */
 static void *xen_grant_dma_alloc(struct device *dev, size_t size,
 				 dma_addr_t *dma_handle, gfp_t gfp,
@@ -275,9 +270,26 @@ static const struct dma_map_ops xen_grant_dma_ops = {
 	.dma_supported = xen_grant_dma_supported,
 };
 
+bool xen_is_grant_dma_device(struct device *dev)
+{
+	struct device_node *iommu_np;
+	bool has_iommu;
+
+	/* XXX Handle only DT devices for now */
+	if (!dev->of_node)
+		return false;
+
+	iommu_np = of_parse_phandle(dev->of_node, "iommus", 0);
+	has_iommu = iommu_np && of_device_is_compatible(iommu_np, "xen,grant-dma");
+	of_node_put(iommu_np);
+
+	return has_iommu;
+}
+
 void xen_grant_setup_dma_ops(struct device *dev)
 {
 	struct xen_grant_dma_data *data;
+	struct of_phandle_args iommu_spec;
 
 	data = find_xen_grant_dma_data(dev);
 	if (data) {
@@ -285,12 +297,34 @@ void xen_grant_setup_dma_ops(struct device *dev)
 		return;
 	}
 
+	/* XXX ACPI device unsupported for now */
+	if (!dev->of_node)
+		goto err;
+
+	if (of_parse_phandle_with_args(dev->of_node, "iommus", "#iommu-cells",
+			0, &iommu_spec)) {
+		dev_err(dev, "Cannot parse iommus property\n");
+		goto err;
+	}
+
+	if (!of_device_is_compatible(iommu_spec.np, "xen,grant-dma") ||
+			iommu_spec.args_count != 1) {
+		dev_err(dev, "Incompatible IOMMU node\n");
+		of_node_put(iommu_spec.np);
+		goto err;
+	}
+
+	of_node_put(iommu_spec.np);
+
 	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
 	if (!data)
 		goto err;
 
-	/* XXX The dom0 is hardcoded as the backend domain for now */
-	data->backend_domid = 0;
+	/*
+	 * The endpoint ID here means the ID of the domain where the corresponding
+	 * backend is running
+	 */
+	data->backend_domid = iommu_spec.args[0];
 
 	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
 			GFP_KERNEL))) {
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 4f9fad5..62be9dc 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -223,10 +223,15 @@ static inline void xen_preemptible_hcall_end(void) { }
 
 #ifdef CONFIG_XEN_GRANT_DMA_OPS
 void xen_grant_setup_dma_ops(struct device *dev);
+bool xen_is_grant_dma_device(struct device *dev);
 #else
 static inline void xen_grant_setup_dma_ops(struct device *dev)
 {
 }
+static inline bool xen_is_grant_dma_device(struct device *dev)
+{
+	return false;
+}
 #endif /* CONFIG_XEN_GRANT_DMA_OPS */
 
 #endif /* INCLUDE_XEN_OPS_H */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:24:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:24:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341187.566437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQn-0001jh-72; Thu, 02 Jun 2022 19:24:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341187.566437; Thu, 02 Jun 2022 19:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqQm-0001i1-Sc; Thu, 02 Jun 2022 19:24:28 +0000
Received: by outflank-mailman (input) for mailman id 341187;
 Thu, 02 Jun 2022 19:24:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqQl-0000dK-06
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:24:27 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d68adeb-e2a9-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 21:24:26 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id s12so4610717ejx.3
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:24:25 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 eg13-20020a056402288d00b0042dce73168csm2938301edb.13.2022.06.02.12.24.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:24:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d68adeb-e2a9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=1vkkmAPV7KLk5lLiYTidlJE/79j9EHin+NPAEwFGGPw=;
        b=jmz6IGbzJezrRiTA2mBa10TijR6VR0Wsv+zWCJK861wll6nY+yQF0eqIKkBS5xRnY3
         06uO1Xxv/f6WxMrF7ZyLzMMJpLL7T61a9BflqTJ5BPHI2FF2yaJJHfW/bvUVTWX/bfjV
         TROS8lfNzLEEAzJE9nYscufoMwjo6IO6XfMO3atpb5xGlgWq24wpMqelyhVadYrVknNw
         V4Dfqy8uPCBCPMqt8EIVIXfDHx1hVqeQ4NxiznFjv2NBcgYhlvr/UPQpIcO2wVH5JkM0
         oetd2/yrhmoOzPa4PebgUEMktci6Q0lz4PC7beZD+tIq3arTBN3tTXxehvdl7SduyR5Q
         8frg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=1vkkmAPV7KLk5lLiYTidlJE/79j9EHin+NPAEwFGGPw=;
        b=vUahCzipRlcj4LeynpZKSp317kFZZn2QFGFiDEh65tSZqWywSsgYM1J/kUVaTymGMc
         M0cGU4OeVK01c4ckNFK6HOkDDAVH3GJ8B8htCvwYdi4Stm9Ljn3jIWi0lBikzbYFnGdH
         yvQUuw/4jlUp/n0FeDYHP5iO/ZROyYkc+zbVqUbTcGxnxlcrDtkm6qiR+Z3GQlDVMVgi
         YxtuYrLKGTPPMHmQ+3skMtvQOD5iDg1cwA36rQM4iPyV5634W2J4JQEy95VL0duQj+rg
         r+h1lqiZU+27huxyG3zsqo6HWqrrVEz6rkpAbh0e3Tf3CjcFX+2oQHKsKPpXvmJo4JeU
         JPvQ==
X-Gm-Message-State: AOAM533XC2TxC8PjQx8j/Q6pt2eCBH+YbXLPU580CETNaiphKJ/bdKcu
	ii8CZ5DvGXjWUlI4Skisid8HkqrLPBM=
X-Google-Smtp-Source: ABdhPJzV8VoTn9s996wPN+vTdvWiSQPgr/k13Tl/xR8W22maSDMF8OZr0DW6z0mMDrujt2w9yOf1MQ==
X-Received: by 2002:a17:907:7f20:b0:6fe:f0c8:8e6f with SMTP id qf32-20020a1709077f2000b006fef0c88e6fmr5510662ejc.453.1654197865496;
        Thu, 02 Jun 2022 12:24:25 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Christoph Hellwig <hch@infradead.org>
Subject: [PATCH V4 8/8] arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices
Date: Thu,  2 Jun 2022 22:23:53 +0300
Message-Id: <1654197833-25362-9-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

By assigning the xen-grant DMA ops we restrict memory access for the
passed device using Xen grant mappings. This is needed to use any
virtualized device (e.g. virtio) in Xen guests in a safe manner.

Please note that for virtio devices the XEN_VIRTIO config option should
be enabled (it forces ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS).

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes RFC -> V1:
   - update commit subject/description
   - remove #ifdef CONFIG_XEN_VIRTIO
   - re-organize the check, taking into account that the
     swiotlb and virtio cases are mutually exclusive
   - update according to the new naming scheme:
     s/virtio/grant_dma

Changes V1 -> V2:
   - add Stefano's R-b
   - remove arch_has_restricted_virtio_memory_access() check
   - update commit description
   - remove the inclusion of virtio_config.h

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes
---
 include/xen/arm/xen-ops.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
index 288deb1..b0766a6 100644
--- a/include/xen/arm/xen-ops.h
+++ b/include/xen/arm/xen-ops.h
@@ -3,11 +3,14 @@
 #define _ASM_ARM_XEN_OPS_H
 
 #include <xen/swiotlb-xen.h>
+#include <xen/xen-ops.h>
 
 static inline void xen_setup_dma_ops(struct device *dev)
 {
 #ifdef CONFIG_XEN
-	if (xen_swiotlb_detect())
+	if (xen_is_grant_dma_device(dev))
+		xen_grant_setup_dma_ops(dev);
+	else if (xen_swiotlb_detect())
 		dev->dma_ops = &xen_swiotlb_dma_ops;
 #endif
 }
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 19:28:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 19:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341209.566457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqUJ-00051S-NM; Thu, 02 Jun 2022 19:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341209.566457; Thu, 02 Jun 2022 19:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwqUJ-00051L-Ia; Thu, 02 Jun 2022 19:28:07 +0000
Received: by outflank-mailman (input) for mailman id 341209;
 Thu, 02 Jun 2022 19:28:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2EeJ=WJ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nwqUI-00051E-Sq
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 19:28:06 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 204d01ba-e2aa-11ec-bd2c-47488cf2e6aa;
 Thu, 02 Jun 2022 21:28:05 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id 25so7256838edw.8
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 12:28:05 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 j3-20020a170906094300b0070ad296e4b0sm926605ejd.186.2022.06.02.12.28.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 02 Jun 2022 12:28:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 204d01ba-e2aa-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=vbAEqTtMltIjLFnJDROsfHStyE044+oJNOWsnHjqrXE=;
        b=Nv50WEmTh4wFPvQammySLQ1FWccD9Mr60pS3bU9HudwtAfHmbg1RwuEQI2FucZFpnn
         9wbz8eSVurUvBDDYvosAJGBmETp1zNDv6VzXWOB0/XCHsttljXR6K7P8Oo2+X5NUTfqi
         qbegVq5F0eeHV8e2vo7sq0TUY6O/lMIMULf2yozMmZcWcv7McgIL0kDZrgAKzt4LxkS5
         ZN00Dd8pWM1f+lED5Cyh/NkO7V4WpuImuxiF5nBLYte+DUAJAB7Kqs7XkKa5GKqL4tKc
         pRAk9R8o7iegb6pxwVKiLgrNtNzs6eoLg6Fl4IdVTnC3LXS20elAGncc4MpVvtVAafln
         Qxbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=vbAEqTtMltIjLFnJDROsfHStyE044+oJNOWsnHjqrXE=;
        b=RrhEm4Tre/4LyUF9tP3QXIkhLpy19q/lG/hv5p0sazsy3veSDWngYPf0EKe3SOpL3C
         APkJzL/uDtLzGMoG+/RRt+9+8LIXAI0B7YFpnXCpJtq2Yx6HOTgVeSwSNi8CNP4RcHv+
         Bb0s3ZpEv+dxcGDdSX53Rfey2SSQ2hkyoh79qudwHykjdOUzfTBU0xKXRsrNfLlXA4DJ
         L1O/1M4gXrLZU++brPT8NTIiYscCO0Q5GhNb9PhFa8RBRg0OcT+/JqhC9+ABqibVHZ5r
         LJBE7S6Gl7H1P6rAZXQxfgsA2xH7LxNI5wEizsoScLpRlKrRYJ4cfPTEQxoGnDwLABGp
         NYBg==
X-Gm-Message-State: AOAM533jU4OzAX8uFHUQkS5rtUU5TH5/tYxynaqrZedsn8K2GWIkVScR
	OYkqVsVYv80UUhzkPszL+uU=
X-Google-Smtp-Source: ABdhPJwWPyT4XNOPOjxAa68FXYb7xAdVTT+TH8Liokjf020d0Q0m/psH7TjjmhOZhEBSJJusjo1qbA==
X-Received: by 2002:a05:6402:3227:b0:42d:df54:ba24 with SMTP id g39-20020a056402322700b0042ddf54ba24mr7071532eda.49.1654198085236;
        Thu, 02 Jun 2022 12:28:05 -0700 (PDT)
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2205311755010.1905099@ubuntu-linux-20-04-desktop>
 <e67bde26-2eff-948a-a2c3-08cc474affa6@gmail.com>
 <alpine.DEB.2.22.394.2206011338310.1905099@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <ff312d2b-ece4-e4e3-a654-413636a6c8dd@gmail.com>
Date: Thu, 2 Jun 2022 22:28:03 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2206011338310.1905099@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 01.06.22 23:39, Stefano Stabellini wrote:

Hello Stefano

> On Wed, 1 Jun 2022, Oleksandr wrote:
>> On 01.06.22 04:04, Stefano Stabellini wrote:
>>> On Tue, 31 May 2022, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> Reuse the generic IOMMU device tree bindings to communicate Xen-specific
>>>> information for the virtio devices for which restricted memory access
>>>> using Xen grant mappings needs to be enabled.
>>>>
>>>> Insert an "iommus" property pointing to the IOMMU node with the
>>>> "xen,grant-dma" compatible into all virtio devices whose backends are
>>>> going to run in non-hardware domains (which are not trusted by default).
>>>>
>>>> Based on the device-tree binding from Linux:
>>>> Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>>>>
>>>> An example of the generated nodes:
>>>>
>>>> xen_iommu {
>>>>       compatible = "xen,grant-dma";
>>>>       #iommu-cells = <0x01>;
>>>>       phandle = <0xfde9>;
>>>> };
>>>>
>>>> virtio@2000000 {
>>>>       compatible = "virtio,mmio";
>>>>       reg = <0x00 0x2000000 0x00 0x200>;
>>>>       interrupts = <0x00 0x01 0xf01>;
>>>>       interrupt-parent = <0xfde8>;
>>>>       dma-coherent;
>>>>       iommus = <0xfde9 0x01>;
>>>> };
>>>>
>>>> virtio@2000200 {
>>>>       compatible = "virtio,mmio";
>>>>       reg = <0x00 0x2000200 0x00 0x200>;
>>>>       interrupts = <0x00 0x02 0xf01>;
>>>>       interrupt-parent = <0xfde8>;
>>>>       dma-coherent;
>>>>       iommus = <0xfde9 0x01>;
>>>> };
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> ---
>>>> !!! This patch is based on the not-yet-upstreamed “Virtio support for
>>>> toolstack on Arm” V8 series which is under review now:
>>>> https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
>>>>
>>>> The new device-tree binding (commit #5) is part of a solution to restrict
>>>> memory access under Xen using the xen-grant DMA-mapping layer (which is
>>>> also under review):
>>>> https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
>>>>
>>>> Changes RFC -> V1:
>>>>      - update commit description
>>>>      - rebase according to the recent changes to
>>>>        "libxl: Introduce basic virtio-mmio support on Arm"
>>>>
>>>> Changes V1 -> V2:
>>>>      - Henry already gave his Reviewed-by, I dropped it due to the changes
>>>>      - use generic IOMMU device tree bindings instead of custom property
>>>>        "xen,dev-domid"
>>>>      - change commit subject and description, was
>>>>        "libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device
>>>> node"
>>>> ---
>>>>  tools/libs/light/libxl_arm.c          | 49 ++++++++++++++++++++++++++++++++---
>>>>  xen/include/public/device_tree_defs.h |  1 +
>>>>  2 files changed, 47 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>>>> index 9be9b2a..72da3b1 100644
>>>> --- a/tools/libs/light/libxl_arm.c
>>>> +++ b/tools/libs/light/libxl_arm.c
>>>> @@ -865,9 +865,32 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
>>>>      return 0;
>>>>  }
>>>>
>>>> +static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
>>>> +{
>>>> +    int res;
>>>> +
>>>> +    /* See Linux Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml */
>>>> +    res = fdt_begin_node(fdt, "xen_iommu");
>>>> +    if (res) return res;
>>>> +
>>>> +    res = fdt_property_compat(gc, fdt, 1, "xen,grant-dma");
>>>> +    if (res) return res;
>>>> +
>>>> +    res = fdt_property_cell(fdt, "#iommu-cells", 1);
>>>> +    if (res) return res;
>>>> +
>>>> +    res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_IOMMU);
>>>> +    if (res) return res;
>>>> +
>>>> +    res = fdt_end_node(fdt);
>>>> +    if (res) return res;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>>  static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
>>>> -                                 uint64_t base, uint32_t irq)
>>>> +                                 uint64_t base, uint32_t irq,
>>>> +                                 uint32_t backend_domid)
>>>>  {
>>>>      int res;
>>>>      gic_interrupt intr;
>>>> @@ -890,6 +913,16 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
>>>>      res = fdt_property(fdt, "dma-coherent", NULL, 0);
>>>>      if (res) return res;
>>>>
>>>> +    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
>>>> +        uint32_t iommus_prop[2];
>>>> +
>>>> +        iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
>>>> +        iommus_prop[1] = cpu_to_fdt32(backend_domid);
>>>> +
>>>> +        res = fdt_property(fdt, "iommus", iommus_prop, sizeof(iommus_prop));
>>>> +        if (res) return res;
>>>> +    }
>>>> +
>>>>      res = fdt_end_node(fdt);
>>>>      if (res) return res;
>>>>
>>>> @@ -1097,6 +1130,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>>>>      size_t fdt_size = 0;
>>>>      int pfdt_size = 0;
>>>>      libxl_domain_build_info *const info = &d_config->b_info;
>>>> +    bool iommu_created;
>>>>      unsigned int i;
>>>>
>>>>      const libxl_version_info *vers;
>>>> @@ -1204,11 +1238,20 @@ next_resize:
>>>>          if (d_config->num_pcidevs)
>>>>              FDT( make_vpci_node(gc, fdt, ainfo, dom) );
>>>>
>>>> +        iommu_created = false;
>>>>          for (i = 0; i < d_config->num_disks; i++) {
>>>>              libxl_device_disk *disk = &d_config->disks[i];
>>>>
>>>> -            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
>>>> -                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
>>>> +            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
>>>> +                if (disk->backend_domid != LIBXL_TOOLSTACK_DOMID &&
>>>> +                    !iommu_created) {
>>>> +                    FDT( make_xen_iommu_node(gc, fdt) );
>>>> +                    iommu_created = true;
>>>> +                }
>>>> +
>>>> +                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
>>>> +                                           disk->backend_domid) );
>>>> +            }
>>> This is a matter of taste as the code would also work as is, but I would
>>> do the following instead:
>>>
>>>
>>> if ( d_config->num_disks > 0 &&
>>>        disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
>>>        FDT( make_xen_iommu_node(gc, fdt) );
>>> }
>>>
>>> for (i = 0; i < d_config->num_disks; i++) {
>>>       libxl_device_disk *disk = &d_config->disks[i];
>>>
>>>       if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
>>>           FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
>>> }
>> I get your idea to avoid using the local "iommu_created". For that, I think,
>> we would need to modify the first check to make sure that we have at least
>> one virtio device, otherwise we might end up inserting an unused IOMMU node.
>> But that would turn into an extra loop over num_disks looking for
>> LIBXL_DISK_SPECIFICATION_VIRTIO.
> I see, then just keep it as is.

ok
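(For the record, the extra scan discussed above could be sketched roughly as
follows -- a minimal illustration using simplified, hypothetical stand-ins for
the libxl types, not actual libxl code:)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical, simplified stand-ins for the libxl types discussed in
 * this thread -- NOT the actual patch code, just an illustration of the
 * first-pass scan that would be needed to drop "iommu_created".
 */
#define LIBXL_TOOLSTACK_DOMID 0u

typedef enum {
    DISK_SPEC_OTHER,
    DISK_SPEC_VIRTIO,   /* stands in for LIBXL_DISK_SPECIFICATION_VIRTIO */
} disk_spec;

typedef struct {
    disk_spec specification;
    unsigned int backend_domid;
} fake_disk;

/*
 * First pass: decide whether the "xen,grant-dma" IOMMU node is needed,
 * i.e. whether at least one virtio disk has a non-toolstack backend.
 */
static bool needs_xen_iommu_node(const fake_disk *disks, size_t num_disks)
{
    for (size_t i = 0; i < num_disks; i++) {
        if (disks[i].specification == DISK_SPEC_VIRTIO &&
            disks[i].backend_domid != LIBXL_TOOLSTACK_DOMID)
            return true;
    }
    return false;
}
```

With such a check the IOMMU node could be emitted once before the main loop,
but at the cost of walking num_disks twice -- the trade-off described above.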


>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks!


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 02 20:34:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 20:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341262.566467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwrWG-0005BR-Ik; Thu, 02 Jun 2022 20:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341262.566467; Thu, 02 Jun 2022 20:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwrWG-0005BK-Ft; Thu, 02 Jun 2022 20:34:12 +0000
Received: by outflank-mailman (input) for mailman id 341262;
 Thu, 02 Jun 2022 20:34:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UOPH=WJ=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nwrWE-0005BE-Tj
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 20:34:11 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59129eec-e2b3-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 22:34:08 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654202041607149.97825902238378;
 Thu, 2 Jun 2022 13:34:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59129eec-e2b3-11ec-837f-e5687231ffcc
ARC-Seal: i=1; a=rsa-sha256; t=1654202044; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=jI7oPNrdL9B7UXMz4sP5T3Td9d/l7FTXZagqhEHve1BSXm8wyVJAXn1Uh/OAt/dd2rO8XK+LAxDaBM41cFJMI6aQeTxLL3fZR+qS2sVzftN+UGu6teqThbI+ob4bu/kKrAxjWLSddEdIB8KCmIkMfc+K8V1eouvKWITWtpZlcEo=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654202044; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=jmSMCc7An9p2x9gZRj95kKJbQaxi7b3qBXJ2yh2fLb4=; 
	b=SZIdSomcZ8qoWC3N05KhHFUKgtZ1YWn9zki7C0n00KJe5SEKh6YU5prwA9fWJuV5F+niZGtAS0XffytG16w8PJQSxVbTj2N1vdBpI/z3ilgaE+M9/eYy+vtJ1f2551QtcDC1rZg/ARgnYsKHnK2UdTq8hogOcbOFxRIwBuWeXAY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654202044;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=jmSMCc7An9p2x9gZRj95kKJbQaxi7b3qBXJ2yh2fLb4=;
	b=hOdOCoF157gkQgwvsbgA/S8C5pQ4sM+WtOj3jCu373rcbtWR5UlFX/mXQDTpsj10
	MQWAwBtazU4strePB6PgKzmhvooaPBtVRmv03OgLJXJTm5uZe3xBOqgjvg6ELqTHrmV
	raUe9gv950HRntX4HcUJG36gs8ye+KPJlIpV6UzI=
Message-ID: <da87eeb4-516c-64bf-ee96-619a19875add@apertussolutions.com>
Date: Thu, 2 Jun 2022 16:32:30 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Luca Fancellu <luca.fancellu@arm.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220531145646.10062-1-dpsmith@apertussolutions.com>
 <20220531145646.10062-3-dpsmith@apertussolutions.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v8 2/2] flask: implement xsm_set_system_active
In-Reply-To: <20220531145646.10062-3-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/31/22 10:56, Daniel P. Smith wrote:
> This commit implements full support for starting the idle domain privileged by
> introducing a new flask label xenboot_t which the idle domain is labeled with
> at creation.  It then provides the implementation for the XSM hook
> xsm_set_system_active to relabel the idle domain to the existing xen_t flask
> label.
> 
> In the reference flask policy a new macro, xen_build_domain(target), is
> introduced for creating policies for dom0less/hyperlaunch allowing the
> hypervisor to create and assign the necessary resources for domain
> construction.
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>

I am still debugging, but I now have a dom0 crashing due to an AVC that
is being tripped with this patch applied to the tip of staging. I just
wanted to give a heads-up, and I will follow back up once I can
determine the root cause.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 21:20:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 21:20:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341270.566478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsEl-0002gQ-2i; Thu, 02 Jun 2022 21:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341270.566478; Thu, 02 Jun 2022 21:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsEk-0002gJ-VM; Thu, 02 Jun 2022 21:20:10 +0000
Received: by outflank-mailman (input) for mailman id 341270;
 Thu, 02 Jun 2022 21:20:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UOPH=WJ=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nwsEj-0002gD-Ci
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 21:20:09 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c60c3e87-e2b9-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 23:20:07 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654204804662743.2078611312962;
 Thu, 2 Jun 2022 14:20:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c60c3e87-e2b9-11ec-837f-e5687231ffcc
ARC-Seal: i=1; a=rsa-sha256; t=1654204805; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=ZAKHKc0ihEqMlIkC7Pg4x00z9OtuyZSOAF8r8POW78p4f4G43L6jf5el0IQMk6ID0F+HzkJRBXP3kkU/lwIyCDGjTTBTZrwMoWITa+7tqehlG6r5wyUDD6fCmUAR4zIcnAfJtORxPGdTtVPRhtsljSHZQia9cdmfZPPV4nOhkog=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654204805; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=FJKAaVTT+R2zvKSvaDWAN2iQq0wixUB9o/f6YbTjDmU=; 
	b=BjGukJQfAYZNzTWcH6PuNMnfFTwT95KcK9NPzfOmwxLhYBL2F7f6VGuAvMNO1VjZhslY14GdyV5BQBWq4+1+t7NhBLcTTmyP+n/Xa+LilI0X88psgzHAPSZcekRCCAUjfYEOangs3JEMVUhjC2teJ8m+r787hyrm/Z9D6uxep2s=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654204805;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:From:From:To:To:Cc:Cc:References:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=FJKAaVTT+R2zvKSvaDWAN2iQq0wixUB9o/f6YbTjDmU=;
	b=lChteZMH/J95LziRS7/r+G62+vJG/aVj4WHScNL7uD+GBM22mXA1BL/9HxDrAq9M
	5Z0CTP6fptRyu/02sv7pAkayx62T9BRBjsnnLISDymYS9isBuVpj8IK+bNurSE52K9D
	8f4u8tTX1AIvRNL9tp19JKL1jKpLM8TKo1Nd91us=
Message-ID: <8dde6c92-2c4a-8665-8519-38a01e7421a3@apertussolutions.com>
Date: Thu, 2 Jun 2022 17:18:34 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v8 2/2] flask: implement xsm_set_system_active
Content-Language: en-US
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Luca Fancellu <luca.fancellu@arm.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220531145646.10062-1-dpsmith@apertussolutions.com>
 <20220531145646.10062-3-dpsmith@apertussolutions.com>
 <da87eeb4-516c-64bf-ee96-619a19875add@apertussolutions.com>
In-Reply-To: <da87eeb4-516c-64bf-ee96-619a19875add@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 6/2/22 16:32, Daniel P. Smith wrote:
> On 5/31/22 10:56, Daniel P. Smith wrote:
>> This commit implements full support for starting the idle domain privileged by
>> introducing a new flask label xenboot_t which the idle domain is labeled with
>> at creation.  It then provides the implementation for the XSM hook
>> xsm_set_system_active to relabel the idle domain to the existing xen_t flask
>> label.
>>
>> In the reference flask policy a new macro, xen_build_domain(target), is
>> introduced for creating policies for dom0less/hyperlaunch allowing the
>> hypervisor to create and assign the necessary resources for domain
>> construction.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
>> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> I am still debugging, but I now have a dom0 crashing due to an AVC that
> is being tripped with this patch applied to the tip of staging. I just
> wanted to give a heads-up, and I will follow back up once I can
> determine the root cause.

Please ignore and my apologies for the noise. The updated policy file
was not getting synced into the test environment.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 21:30:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 21:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341278.566489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsP3-0004Og-3e; Thu, 02 Jun 2022 21:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341278.566489; Thu, 02 Jun 2022 21:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsP3-0004OZ-0X; Thu, 02 Jun 2022 21:30:49 +0000
Received: by outflank-mailman (input) for mailman id 341278;
 Thu, 02 Jun 2022 21:30:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QKCk=WJ=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1nwsP2-0004OT-2Q
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 21:30:48 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43925484-e2bb-11ec-837f-e5687231ffcc;
 Thu, 02 Jun 2022 23:30:46 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id 14so4648889qkl.6
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jun 2022 14:30:46 -0700 (PDT)
Received: by 2002:ac8:7f14:0:0:0:0:0 with HTTP;
 Thu, 2 Jun 2022 14:30:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43925484-e2bb-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:in-reply-to:references:from:date:message-id:subject:to
         :cc;
        bh=B/1eqqbHZnQqzfEJFI/2PMisMT8zWNIw4xhukZQmi8Q=;
        b=lCFMpTN4exoAZQ9n/fx4VXIwJwyfRwqE2Ui1VGvznjFtxo36qExWwecmrmeIgu1rWI
         wasYI8C7LdH7k2CbE5KZcax5XZ2KdWW1DhVtZuzYr6oF1bxYxV2q1xZqN4h6oupP/xhL
         /twg9f9HSpWVKu3wwePcw469DdYLgyoezQhu6WLRPAGCktT2J1gjYxr/HHVaL97zCpEu
         Utn+OIrLmERFLHOdnrIP/l9xkj2edr3Aeb7C4N948mC9oDCNlaS6jpTcazDuPFiCYRYr
         ofvBGZA+pG5+BUJebadb52KJsh26TZ2g1U+oJwvebpXd2JFfBYrjyiGKVg4Wbw/OVtJN
         UJGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:in-reply-to:references:from:date
         :message-id:subject:to:cc;
        bh=B/1eqqbHZnQqzfEJFI/2PMisMT8zWNIw4xhukZQmi8Q=;
        b=tuFVi6X2B5S5bITuSXa9oAnrfhEWIascNSDCJ/MNaxhpv9lVeDXPMPojBeYDj7kjtv
         gQRh5duc/IJY0PW9GMW9OgFVkdUqqhWsnygRrBWhQ4u8+uuzCRxm04icJiBiow6+eWrI
         vCcPZ9wIOI7tPUlZl7d34e1LNfZvlTjhb9cm6ZW82jJtySj4xiaagPRENBADN4vzQTor
         lk849zKezLJ2kWEMsCopAx/DdfmScu5SqKMA24mRouU+nFnhCsBiH2ulP2zv2qQfAkSR
         tit41Npp1Bl/nDVCyrqSJc4QRJYqx52++1rpcjkv+6xwIH7UIWzj3rUW9m3J4EjuU5QZ
         zi7w==
X-Gm-Message-State: AOAM531uG01G7TEvz+49fRlfGy+SkIyMjsn+U2FJ51lxde4vFsm9cBnH
	ja8ahZ6Kc15NHmfNmWJH9SU/96ZjSSS4fL8/8iw=
X-Google-Smtp-Source: ABdhPJwKW4JeFyoI54chyz2cWk2dZWekJhToO/7Dlpc/klI1T3tB3WkKbgxiJUpS1AAmIe1zrgsVE0eNKAnElADl81c=
X-Received: by 2002:a37:917:0:b0:6a6:9a14:b542 with SMTP id
 23-20020a370917000000b006a69a14b542mr458042qkj.562.1654205445841; Thu, 02 Jun
 2022 14:30:45 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: <43BCAA1E-7499-4584-AB60-C5004AA0643B@gmail.com>
References: <20220513180957.90514-1-shentey@gmail.com> <43BCAA1E-7499-4584-AB60-C5004AA0643B@gmail.com>
From: Bernhard Beschow <shentey@gmail.com>
Date: Thu, 2 Jun 2022 23:30:45 +0200
Message-ID: <CAG4p6K6kZHfC6KLoioozmGWomUoUZwceUQcU+Y9qDo9FraXfyQ@mail.gmail.com>
Subject: Re: [PATCH v2 0/3] PIIX3-IDE XEN cleanup
To: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Cc: qemu-trivial@nongnu.org, sstabellini@kernel.org, anthony.perard@citrix.com, 
	paul@xen.org, xen-devel@lists.xenproject.org, 
	Bernhard Beschow <shentey@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>, 
	John Snow <jsnow@redhat.com>, qemu-block@nongnu.org
Content-Type: multipart/alternative; boundary="000000000000fc28c705e07db752"

--000000000000fc28c705e07db752
Content-Type: text/plain; charset="UTF-8"

On Saturday, May 28, 2022, Bernhard Beschow <shentey@gmail.com> wrote:
> Am 13. Mai 2022 18:09:54 UTC schrieb Bernhard Beschow <shentey@gmail.com>:
>>v2:
>>* Have pci_xen_ide_unplug() return void (Paul Durrant)
>>* CC Xen maintainers (Michael S. Tsirkin)
>>
>>v1:
>>This patch series first removes the redundant "piix3-ide-xen" device class and
>>then moves a XEN-specific helper function from PIIX3 code to XEN code. The idea
>>is to decouple PIIX3-IDE and XEN and to compile XEN-specific bits only if XEN
>>support is enabled.
>>
>>Testing done:
>>'qemu-system-x86_64 -M pc -m 1G -cdrom archlinux-2022.05.01-x86_64.iso' boots
>>successfully and a 'poweroff' inside the VM also shuts it down correctly.
>>
>>XEN mode wasn't tested for the time being since its setup procedure seems quite
>>sophisticated. Please let me know in case this is an obstacle.
>>
>>Bernhard Beschow (3):
>>  hw/ide/piix: Remove redundant "piix3-ide-xen" device class
>>  hw/ide/piix: Add some documentation to pci_piix3_xen_ide_unplug()
>>  include/hw/ide: Unexport pci_piix3_xen_ide_unplug()
>>
>> hw/i386/pc_piix.c          |  3 +--
>> hw/i386/xen/xen_platform.c | 48 +++++++++++++++++++++++++++++++++++++-
>> hw/ide/piix.c              | 42 ---------------------------------
>> include/hw/ide.h           |  3 ---
>> 4 files changed, 48 insertions(+), 48 deletions(-)
>>
>
> Ping
>
> Whole series is reviewed/acked.

Ping 2


--000000000000fc28c705e07db752--


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 21:59:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 21:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341286.566500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsqB-0007cE-Cc; Thu, 02 Jun 2022 21:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341286.566500; Thu, 02 Jun 2022 21:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwsqB-0007c7-9n; Thu, 02 Jun 2022 21:58:51 +0000
Received: by outflank-mailman (input) for mailman id 341286;
 Thu, 02 Jun 2022 21:58:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwsq9-0007bx-Um; Thu, 02 Jun 2022 21:58:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwsq9-0005a8-Qd; Thu, 02 Jun 2022 21:58:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwsq9-0002MU-7G; Thu, 02 Jun 2022 21:58:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwsq9-0007Do-6h; Thu, 02 Jun 2022 21:58:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j4COZevJqUjPeWbhQIbcx9AnbpzAUJX9aEFrNiyzCnM=; b=esQT0HvbecTyPfGaJ+nwSnxIX8
	i7W/wG2yOzoS6lRYGPKaN2xEK7E8RaCEUUA6V9P4257lsO21poqf94N9Y2BmwXwp8LRZtQZQC+8MO
	1/THwp8VIzMCq9SQ1I8UGOreYoDAAoXTL8ckPBXIPjSKuYPLqhkRZ0b0F7TqLcY9ZSXs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170808-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170808: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-start/debian:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-pygrub:debian-di-install:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-start:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=d1dc87763f406d4e67caf16dbe438a5647692395
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jun 2022 21:58:49 +0000

flight 170808 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170808/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start           fail REGR. vs. 170714
 test-amd64-amd64-xl          14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 13 guest-start          fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-pair        25 guest-start/debian       fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-amd64-coresched-amd64-xl 14 guest-start             fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-arndale  14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-shadow   14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-cubietruck 14 guest-start            fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-amd64-amd64-pygrub      12 debian-di-install        fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw 12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install      fail REGR. vs. 170714
 test-armhf-armhf-xl          14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 170714
 test-armhf-armhf-libvirt-raw 12 debian-di-install fail in 170805 REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 10 host-ping-check-xen        fail pass in 170805

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     14 guest-start              fail REGR. vs. 170714
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714

version targeted for testing:
 linux                d1dc87763f406d4e67caf16dbe438a5647692395
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z    9 days
Failing since        170716  2022-05-24 11:12:06 Z    9 days   27 attempts
Testing same since   170805  2022-06-02 06:46:34 Z    0 days    2 attempts

------------------------------------------------------------
2021 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 228471 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 02 22:54:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jun 2022 22:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341297.566510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwthp-0006Xa-J6; Thu, 02 Jun 2022 22:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341297.566510; Thu, 02 Jun 2022 22:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwthp-0006XT-Fy; Thu, 02 Jun 2022 22:54:17 +0000
Received: by outflank-mailman (input) for mailman id 341297;
 Thu, 02 Jun 2022 22:54:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKJR=WJ=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1nwthn-0006XN-CL
 for xen-devel@lists.xenproject.org; Thu, 02 Jun 2022 22:54:15 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea6e65fe-e2c6-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 00:54:12 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 441855C023B;
 Thu,  2 Jun 2022 18:54:10 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Thu, 02 Jun 2022 18:54:10 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 2 Jun 2022 18:54:09 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea6e65fe-e2c6-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:sender:subject:subject:to:to; s=fm1; t=1654210450; x=
	1654296850; bh=MRGhsRZXeF/lnjLM/zzVqIm8xqV/+/8mPjK0qPm3CQw=; b=a
	g/TBKdA2UctNTjQ2euW+1J0DMxccU/toTBCoeCdlhI4E7r3RYs9O2QdO+D8umFiW
	/RbbgXwj+zE/X2GQjqJcdBH3Uaagp+TdrzdWMFCpFyYkE1LOwi6I3QiKc032e0IB
	DQv3huQlFPNw/gaiTLps3EDrIo0njdhdRvYDvkdo8dkZ8CMNhhmVpVj+Khf/Ad7h
	kzfrVP+hkF8H/xq/b5HGlqkP51y5V6+SLSMVYyzi8G/frNDExqjVpHuFaiRcakP8
	Ag8bjhkkCmKVuKcW34F9NP8sF8LsXrB5Ao4jqdefVm0wBcP5u1r5TMShHm0TSW/Z
	D05iKLQY1+Q9oMatsbFZA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1654210450; x=1654296850; bh=MRGhsRZXeF/lnjLM/zzVqIm8xqV/+/8mPjK
	0qPm3CQw=; b=LSEv6Dzkf7oNE3OFW6NUYG5sj1K0v2Nof3qim1YszUBX2kzpAN6
	GrjKm4Q44hQlTBqIBXmSSsFKycO3Wg9QeT5vU6TV20Q6jpsMBl+typg6xLgu5pK1
	A3weZcN/I56JRDzohGjfhL7szuog9qYAhjj25uYEiRA+5osgLDq2c/8HJKejmbNF
	mqf6X0foCRqgVIIZkT0QpKmkFpUU2OBG9wzNFTwqbzxR1c8QcsP3FNU+aMC48gmA
	L0dysPLKpQtjwyhV/EgzB6xdjGclxYbx8RdIPhk+0RCOVNeZfejPdK0mjUohd7dT
	BoGqqlnYwUBvIbR1W80HH1en/Z3xAfhmg0w==
X-ME-Sender: <xms:kT-ZYlRdfjIC7-XofoF2xBY_hsZHnGC2n7uNZpInUw3Sz7rmfAj-hg>
    <xme:kT-ZYuxQDTGQrVuj3ZzS8TEyWxfwyK35TmVwE5ltx2zvae9-CMymDkCaCpriSYP_K
    mcJnFYsGNPqkgg>
X-ME-Received: <xmr:kT-ZYq3UTxLN5_I0Li-aifoDPxW9nGyHNdB41TXdwLxiEeDzD4GDn8MRY6qRnopppEnXWddPGKcc>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrleehgddugecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpefhvfevufffkffoggfgsedtkeertdertddtnecuhfhrohhmpeffvghmihcuofgr
    rhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgshhlrg
    gsrdgtohhmqeenucggtffrrghtthgvrhhnpeeijeegudefueekieejudekgfeukedvgeei
    tddufefhtdffheffueevfedvgfelgfenucffohhmrghinhepghhithhhuhgsrdgtohhmne
    cuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepuggvmhhi
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:kT-ZYtBncN_-y1LVql4CO2Y8pBXTIAtzBPnJrUD4SRVA-U4T-w_biQ>
    <xmx:kT-ZYujXJHC4G6j22lIjPGNhp-5uOmRHMcrYxOuAikhvYIuBqGZ8KA>
    <xmx:kT-ZYhr6cJRXZQ6hXFusYsJ7RBqItRky7sxhpBijcc3czqqwdeMLdg>
    <xmx:kj-ZYgZQ9EeyEAcPpxWUWz8WbhjJ0Acu3-LM4wkJLHjfY8FMnjV8kw>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH v3] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Thu,  2 Jun 2022 18:53:52 -0400
Message-Id: <20220602225352.3201-1-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will not prevent further
calls from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/xen/gntdev-common.h |   7 ++
 drivers/xen/gntdev.c        | 153 ++++++++++++++++++++++++------------
 2 files changed, 109 insertions(+), 51 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 20d7d059dadb..15c2e3afcc2b 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -16,6 +16,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
 #include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -56,6 +57,7 @@ struct gntdev_grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
 
@@ -73,6 +75,11 @@ struct gntdev_grant_map {
 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
 	xen_pfn_t *frames;
 #endif
+
+	/* Number of live grants */
+	atomic_long_t live_grants;
+	/* Needed to avoid allocation in __unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 59ffea800079..e8b83ea1eacd 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -60,10 +61,11 @@ module_param(limit, uint, 0644);
 MODULE_PARM_DESC(limit,
 	"Maximum number of grants that may be mapped by one mapping request");
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-			     int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+			      int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 	kvfree(map->unmap_ops);
 	kvfree(map->kmap_ops);
 	kvfree(map->kunmap_ops);
+	kvfree(map->being_removed);
 	kfree(map);
 }
 
@@ -140,10 +143,13 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	add->unmap_ops = kvmalloc_array(count, sizeof(add->unmap_ops[0]),
 					GFP_KERNEL);
 	add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 	if (use_ptemod) {
 		add->kmap_ops   = kvmalloc_array(count, sizeof(add->kmap_ops[0]),
@@ -250,9 +256,34 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 	if (!refcount_dec_and_test(&map->users))
 		return;
 
-	if (map->pages && !use_ptemod)
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * nonzero reference count, so they will return before this code
+		 * is reached.  The recursion depth is thus limited to 1.
+		 */
+		refcount_inc(&map->users);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
 		unmap_grant_pages(map, 0, map->count);
 
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.  FIXME: this is far too complex.
+		 */
+	}
+
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
 		evtchn_put(map->notify.event);
@@ -283,6 +314,7 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -331,97 +363,114 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 			map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err) {
+				/* FIXME: should this be a WARN()? */
 				err = -EINVAL;
+			}
 		}
 	}
+	atomic_long_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			       int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
-
-	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
-		int pgno = (map->notify.addr >> PAGE_SHIFT);
-		if (pgno >= offset && pgno < offset + pages) {
-			/* No need for kmap, pages are in lowmem */
-			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
-			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
-			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
-		}
-	}
-
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
-
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
+	unsigned int i;
+	struct gntdev_grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
+	atomic_long_sub(data->count, &map->live_grants);
 
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
 		pr_debug("unmap handle=%d st=%d\n",
 			map->unmap_ops[offset+i].handle,
 			map->unmap_ops[offset+i].status);
 		map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
 		if (use_ptemod) {
-			if (map->kunmap_ops[offset+i].status)
-				err = -EINVAL;
+			WARN_ON(map->kunmap_ops[offset+i].status);
 			pr_debug("kunmap handle=%u st=%d\n",
 				 map->kunmap_ops[offset+i].handle,
 				 map->kunmap_ops[offset+i].status);
 			map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
 		}
 	}
-	return err;
+
+	/* Release reference taken by __unmap_grant_pages */
+	gntdev_put_map(NULL, map);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			     int pages)
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			       int pages)
 {
-	int range, err = 0;
+	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
+		if (pgno >= offset && pgno < offset + pages) {
+			/* No need for kmap, pages are in lowmem */
+			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
+			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
+			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
+		}
+	}
+
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
+
+	gnttab_unmap_refs_async(&map->unmap_data);
+}
+
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			      int pages)
+{
+	int range;
+
+	if (atomic_long_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages &&
-		       map->unmap_ops[offset].handle == INVALID_GRANT_HANDLE) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset + range].handle ==
-			    INVALID_GRANT_HANDLE)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -473,7 +522,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 	struct gntdev_grant_map *map =
 		container_of(mn, struct gntdev_grant_map, notifier);
 	unsigned long mstart, mend;
-	int err;
 
 	if (!mmu_notifier_range_blockable(range))
 		return false;
@@ -494,10 +542,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			range->start, range->end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 
 	return true;
 }
@@ -985,6 +1032,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	if (use_ptemod && map->vma)
 		goto unlock_out;
+	if (atomic_long_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:21:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341307.566522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwv4F-0000kd-1E; Fri, 03 Jun 2022 00:21:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341307.566522; Fri, 03 Jun 2022 00:21:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwv4E-0000kW-Uf; Fri, 03 Jun 2022 00:21:30 +0000
Received: by outflank-mailman (input) for mailman id 341307;
 Fri, 03 Jun 2022 00:21:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwv4C-0000kQ-E5
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 00:21:28 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1aa5b77e-e2d3-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 02:21:25 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id E3608B821AC;
 Fri,  3 Jun 2022 00:21:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0D94C385A5;
 Fri,  3 Jun 2022 00:21:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1aa5b77e-e2d3-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654215683;
	bh=XrglRYkc9Y3P+fOC9ZgX+DqSTLBEyliylyelU+Kex7o=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EyUUbRusivx3tXswMqfWgcZMfXtlli04Kc9CbecD1rFkHg6konrrNxbpb4Q3tO9uD
	 O/PgqKGP/pEYnsOJWlTQ2/KYv21MWOQSxEsoug0rDkNQAFx2sSJR1Ws5gPcDP4upVs
	 QjK0rc0DYwfdsxgyC5WLLdjFXsp4kKVrBJh+pk8oy0FIFAGQQfcKNOi8H5tbdgnYDs
	 Y8NXyrcg0nvJ7sJUmncxnkLvj9/zjX0UIJj561O9+xkyoeDcP63WRCAu89MtynbQbs
	 TsqGQZwBYjBxppBDkPyK3HZhU6sdFEY/99x9K2Ty1NE+W+9pfZRgFy98a/08SlqNg9
	 6xZRvuyl2f2xw==
Date: Thu, 2 Jun 2022 17:21:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
    Juergen Gross <jgross@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH V9 2/2] libxl: Introduce basic virtio-mmio support on
 Arm
In-Reply-To: <1654106261-28044-3-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206021721130.2783803@ubuntu-linux-20-04-desktop>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com> <1654106261-28044-3-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1213045464-1654215683=:2783803"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1213045464-1654215683=:2783803
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 1 Jun 2022, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> This patch introduces helpers to allocate Virtio MMIO params
> (IRQ and memory region) and create specific device node in
> the Guest device-tree with allocated params. In order to deal
> with multiple Virtio devices, reserve corresponding ranges.
> For now, we reserve 1MB for memory regions and 10 SPIs.
> 
> As these helpers should be used for every Virtio device attached
> to the Guest, call them for Virtio disk(s).
> 
> Please note, with statically allocated Virtio IRQs there is
> a risk of a clash with the physical IRQs of passthrough devices.
> For the first version this is fine, but we should consider allocating
> the Virtio IRQs automatically. Thankfully, we know in advance which
> IRQs will be used for passthrough, so we can choose non-clashing
> ones.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - was squashed with:
>      "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
>      "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
>      "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
>    - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h
> 
> Changes V1 -> V2:
>    - update the author of a patch
> 
> Changes V2 -> V3:
>    - no changes
> 
> Changes V3 -> V4:
>    - no changes
> 
> Changes V4 -> V5:
>    - split the changes, change the order of the patches
>    - drop an extra "virtio" configuration option
>    - update patch description
>    - use CONTAINER_OF instead of own implementation
>    - reserve ranges for Virtio MMIO params and put them
>      in correct location
>    - create helpers to allocate Virtio MMIO params, add
>      corresponding sanity-checks
>    - add comment why MMIO size 0x200 is chosen
>    - update debug print
>    - drop Wei's T-b
> 
> Changes V5 -> V6:
>    - rebase on current staging
> 
> Changes V6 -> V7:
>    - rebase on current staging
>    - add T-b and R-b tags
>    - update according to the recent changes to
>      "libxl: Add support for Virtio disk configuration"
> 
> Changes V7 -> V8:
>    - drop T-b and R-b tags
>    - make virtio_mmio_base/irq global variables to be local in
>      libxl__arch_domain_prepare_config() and initialize them at
>      the beginning of the function, then rework alloc_virtio_mmio_base/irq()
>      to take a pointer to virtio_mmio_base/irq variables as an argument
>    - update according to the recent changes to
>      "libxl: Add support for Virtio disk configuration"
> 
> Changes V8 -> V9:
>    - Stefano already gave his Reviewed-by, I dropped it due to the changes
>    - remove the second set of parentheses for check in alloc_virtio_mmio_base()
>    - clarify the updating of "nr_spis" right after num_disks loop in
>      libxl__arch_domain_prepare_config() and add a comment on top of it
>    - use GCSPRINTF() instead of using a buffer of a static size
>      calculated by hand in make_virtio_mmio_node()
> ---
>  tools/libs/light/libxl_arm.c  | 121 +++++++++++++++++++++++++++++++++++++++++-
>  xen/include/public/arch-arm.h |   7 +++
>  2 files changed, 126 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index eef1de0..9be9b2a 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -8,6 +8,46 @@
>  #include <assert.h>
>  #include <xen/device_tree_defs.h>
>  
> +/*
> + * There are no clear requirements for the total size of the Virtio MMIO region.
> + * The size of the control registers is 0x100, and the device-specific
> + * configuration registers start at offset 0x100; however, their size depends on
> + * the device and the driver. Pick the biggest size currently known to cover
> + * most of the devices (also consider allowing the user to configure the size
> + * via the config file for devices not conforming to the proposed value).
> + */
> +#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
> +
> +static uint64_t alloc_virtio_mmio_base(libxl__gc *gc, uint64_t *virtio_mmio_base)
> +{
> +    uint64_t base = *virtio_mmio_base;
> +
> +    /* Make sure we have enough reserved resources */
> +    if (base + VIRTIO_MMIO_DEV_SIZE >
> +        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE) {
> +        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
> +            base);
> +        return 0;
> +    }
> +    *virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
> +
> +    return base;
> +}
> +
> +static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc, uint32_t *virtio_mmio_irq)
> +{
> +    uint32_t irq = *virtio_mmio_irq;
> +
> +    /* Make sure we have enough reserved resources */
> +    if (irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
> +        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n", irq);
> +        return 0;
> +    }
> +    (*virtio_mmio_irq)++;
> +
> +    return irq;
> +}
> +
>  static const char *gicv_to_string(libxl_gic_version gic_version)
>  {
>      switch (gic_version) {
> @@ -26,8 +66,10 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>  {
>      uint32_t nr_spis = 0;
>      unsigned int i;
> -    uint32_t vuart_irq;
> -    bool vuart_enabled = false;
> +    uint32_t vuart_irq, virtio_irq = 0;
> +    bool vuart_enabled = false, virtio_enabled = false;
> +    uint64_t virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
> +    uint32_t virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
>  
>      /*
>       * If pl011 vuart is enabled then increment the nr_spis to allow allocation
> @@ -39,6 +81,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>          vuart_enabled = true;
>      }
>  
> +    for (i = 0; i < d_config->num_disks; i++) {
> +        libxl_device_disk *disk = &d_config->disks[i];
> +
> +        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
> +            disk->base = alloc_virtio_mmio_base(gc, &virtio_mmio_base);
> +            if (!disk->base)
> +                return ERROR_FAIL;
> +
> +            disk->irq = alloc_virtio_mmio_irq(gc, &virtio_mmio_irq);
> +            if (!disk->irq)
> +                return ERROR_FAIL;
> +
> +            if (virtio_irq < disk->irq)
> +                virtio_irq = disk->irq;
> +            virtio_enabled = true;
> +
> +            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
> +                disk->vdev, disk->irq, disk->base);
> +        }
> +    }
> +
> +    /*
> +     * Every virtio-mmio device uses one emulated SPI. If Virtio devices are
> +     * present, make sure that we allocate enough SPIs for them.
> +     * The resulting "nr_spis" needs to cover the highest possible SPI.
> +     */
> +    if (virtio_enabled)
> +        nr_spis = max(nr_spis, virtio_irq - 32 + 1);
> +
>      for (i = 0; i < d_config->b_info.num_irqs; i++) {
>          uint32_t irq = d_config->b_info.irqs[i];
>          uint32_t spi;
> @@ -58,6 +129,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>              return ERROR_FAIL;
>          }
>  
> +        /* The same check as for vpl011 */
> +        if (virtio_enabled &&
> +            (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
> +            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
> +            return ERROR_FAIL;
> +        }
> +
>          if (irq < 32)
>              continue;
>  
> @@ -787,6 +865,37 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
>      return 0;
>  }
>  
> +
> +static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
> +                                 uint64_t base, uint32_t irq)
> +{
> +    int res;
> +    gic_interrupt intr;
> +    const char *name = GCSPRINTF("virtio@%"PRIx64, base);
> +
> +    res = fdt_begin_node(fdt, name);
> +    if (res) return res;
> +
> +    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
> +    if (res) return res;
> +
> +    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
> +                            1, base, VIRTIO_MMIO_DEV_SIZE);
> +    if (res) return res;
> +
> +    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
> +    res = fdt_property_interrupts(gc, fdt, &intr, 1);
> +    if (res) return res;
> +
> +    res = fdt_property(fdt, "dma-coherent", NULL, 0);
> +    if (res) return res;
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
> +
>  static const struct arch_info *get_arch_info(libxl__gc *gc,
>                                               const struct xc_dom_image *dom)
>  {
> @@ -988,6 +1097,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>      size_t fdt_size = 0;
>      int pfdt_size = 0;
>      libxl_domain_build_info *const info = &d_config->b_info;
> +    unsigned int i;
>  
>      const libxl_version_info *vers;
>      const struct arch_info *ainfo;
> @@ -1094,6 +1204,13 @@ next_resize:
>          if (d_config->num_pcidevs)
>              FDT( make_vpci_node(gc, fdt, ainfo, dom) );
>  
> +        for (i = 0; i < d_config->num_disks; i++) {
> +            libxl_device_disk *disk = &d_config->disks[i];
> +
> +            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
> +                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
> +        }
> +
>          if (pfdt)
>              FDT( copy_partial_fdt(gc, fdt, pfdt) );
>  
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index ab05fe1..c8b6058 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -407,6 +407,10 @@ typedef uint64_t xen_callback_t;
>  
>  /* Physical Address Space */
>  
> +/* Virtio MMIO mappings */
> +#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
> +#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
> +
>  /*
>   * vGIC mappings: Only one set of mapping is used by the guest.
>   * Therefore they can overlap.
> @@ -493,6 +497,9 @@ typedef uint64_t xen_callback_t;
>  
>  #define GUEST_VPL011_SPI        32
>  
> +#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
> +#define GUEST_VIRTIO_MMIO_SPI_LAST    43
> +
>  /* PSCI functions */
>  #define PSCI_cpu_suspend 0
>  #define PSCI_cpu_off     1
> -- 
> 2.7.4
> 
--8323329-1213045464-1654215683=:2783803--


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:35:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:35:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341316.566533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvHK-0002ns-F9; Fri, 03 Jun 2022 00:35:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341316.566533; Fri, 03 Jun 2022 00:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvHK-0002nl-Ac; Fri, 03 Jun 2022 00:35:02 +0000
Received: by outflank-mailman (input) for mailman id 341316;
 Fri, 03 Jun 2022 00:35:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwvHJ-0002nY-EW; Fri, 03 Jun 2022 00:35:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwvHJ-0000Vm-6u; Fri, 03 Jun 2022 00:35:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwvHI-000842-Lx; Fri, 03 Jun 2022 00:35:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwvHI-0000gU-LU; Fri, 03 Jun 2022 00:35:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8rDKasEgq+031fw1pwy5pJtkqZxuklOCpdLxbj9D/8Q=; b=JkMpunCgXJHowKLpJZnK/e55mI
	rDWXnanmjuS07RffoTasKH+2bWi7iuMUhFrUKUCwFGu8Ax8SSo9dGcAvjTVFE/TBgH5JHny+MQxdH
	uvBvtu6p2YE11VYJcy347M16QJfk37JutQBYNRYGD8QxKNUVkL6Wo3ayeK9Hgh8DSDJk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170809-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170809: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1e62a82574fc28e64deca589a23cf55ada2e1a7d
X-Osstest-Versions-That:
    qemuu=e2c2d575991cbccc39da81f1b54e78523a24ed11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 00:35:00 +0000

flight 170809 qemu-mainline real [real]
flight 170811 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170809/
http://logs.test-lab.xenproject.org/osstest/logs/170811/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore   fail pass in 170811-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170802
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170802
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170802
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170802
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170802
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170802
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170802
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170802
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1e62a82574fc28e64deca589a23cf55ada2e1a7d
baseline version:
 qemuu                e2c2d575991cbccc39da81f1b54e78523a24ed11

Last test of basis   170802  2022-06-01 21:38:16 Z    1 days
Testing same since   170809  2022-06-02 15:09:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laurent Vivier <laurent@vivier.eu>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   e2c2d57599..1e62a82574  1e62a82574fc28e64deca589a23cf55ada2e1a7d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:45:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341326.566544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvRK-0004VM-H5; Fri, 03 Jun 2022 00:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341326.566544; Fri, 03 Jun 2022 00:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvRK-0004VF-D9; Fri, 03 Jun 2022 00:45:22 +0000
Received: by outflank-mailman (input) for mailman id 341326;
 Fri, 03 Jun 2022 00:45:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvRJ-0004V9-4W
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 00:45:21 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70c41ece-e2d6-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 02:45:19 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0120161957;
 Fri,  3 Jun 2022 00:45:18 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1DBD5C385A5;
 Fri,  3 Jun 2022 00:45:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70c41ece-e2d6-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654217117;
	bh=DBmfjowaN657Cd99JHjlKuX84DHWg1Q4arRQB2dxD/0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qsmDVO4ZIKqkdTYOiRGFsmU2smuq4lNjkRDI47tTSFoUxQ8ZNVxDT3RzXBSc5mUBv
	 poCkD/Gx6ce08Hb4ht7WCwGkJB/LxppBpwi0SBEEMhd4C6lc06g5vFgS739XiUdemL
	 KGsbYU2IusRid1+JvEuXCyu8FCQKGAAEZJFWMBsW4oOUsU4LSUUZeCMsa3PrprlHGY
	 FQPi1kZPjo+e64AjO63D8TgL6b7nEdsF0Eb9ektYjQJv+qs4kt0xWR/d9Ir2qy5rSa
	 Du0nu1XAPKWkmQrz1/0mWntKL9xjm7FJEqXCf1tztNs/N1tTRt0vkmN3YDaHpsqJGH
	 xw/rg0CUkZh2A==
Date: Thu, 2 Jun 2022 17:45:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/4] arm: add ISAR2, MMFR0 and MMFR1 fields in
 cpufeature
In-Reply-To: <4a0aef106ac7b6c16048ff3554eda1d8b3eab61a.1653993431.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021738430.2783803@ubuntu-linux-20-04-desktop>
References: <cover.1653993431.git.bertrand.marquis@arm.com> <4a0aef106ac7b6c16048ff3554eda1d8b3eab61a.1653993431.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Bertrand Marquis wrote:
> Complete AA64ISAR2 and AA64MMFR[0-1] with more fields.
> While there, add a comment for the MMFR bitfields, as is done for other
> registers in the cpuinfo structure definition.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in v2:
> - patch introduced to isolate changes in cpufeature.h
> - complete MMFR0 and ISAR2 to sync with sysregs.h status
> ---
>  xen/arch/arm/include/asm/cpufeature.h | 28 ++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index 9649a7afee..57eb6773d3 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -234,6 +234,7 @@ struct cpuinfo_arm {
>      union {
>          register_t bits[3];
>          struct {
> +            /* MMFR0 */
>              unsigned long pa_range:4;
>              unsigned long asid_bits:4;
>              unsigned long bigend:4;
> @@ -242,18 +243,31 @@ struct cpuinfo_arm {
>              unsigned long tgranule_16K:4;
>              unsigned long tgranule_64K:4;
>              unsigned long tgranule_4K:4;
> -            unsigned long __res0:32;
> -
> +            unsigned long tgranule_16k_2:4;
> +            unsigned long tgranule_64k_2:4;
> +            unsigned long tgranule_4k:4;

Should be tgranule_4k_2:4


> +            unsigned long exs:4;
> +            unsigned long __res0:8;
> +            unsigned long fgt:4;
> +            unsigned long ecv:4;
> +
> +            /* MMFR1 */
>              unsigned long hafdbs:4;
>              unsigned long vmid_bits:4;
>              unsigned long vh:4;
>              unsigned long hpds:4;
>              unsigned long lo:4;
>              unsigned long pan:4;
> -            unsigned long __res1:8;
> -            unsigned long __res2:28;
> +            unsigned long specsei:4;
> +            unsigned long xnx:4;
> +            unsigned long twed:4;
> +            unsigned long ets:4;
> +            unsigned long __res1:4;

hcx?


> +            unsigned long afp:4;
> +            unsigned long __res2:12;

ntlbpa
tidcp1
cmow

>              unsigned long ecbhb:4;

Strangely enough I am looking at DDI0487H and ecbhb is not there
(D13.2.65). Am I looking at the wrong location?


> +            /* MMFR2 */
>              unsigned long __res3:64;
>          };
>      } mm64;
> @@ -297,7 +311,11 @@ struct cpuinfo_arm {
>              unsigned long __res2:8;
>  
>              /* ISAR2 */
> -            unsigned long __res3:28;
> +            unsigned long wfxt:4;
> +            unsigned long rpres:4;
> +            unsigned long gpa3:4;
> +            unsigned long apa3:4;
> +            unsigned long __res3:12;

mops
bc
pac_frac


>              unsigned long clearbhb:4;

And again this is not described at D13.2.63. Probably the bhb stuff
didn't make it into the ARM ARM yet.


>  
>              unsigned long __res4:32;
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:45:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341327.566555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvRQ-0004mT-Pr; Fri, 03 Jun 2022 00:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341327.566555; Fri, 03 Jun 2022 00:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvRQ-0004mK-M2; Fri, 03 Jun 2022 00:45:28 +0000
Received: by outflank-mailman (input) for mailman id 341327;
 Fri, 03 Jun 2022 00:45:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvRP-0004lA-BH
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 00:45:27 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75481183-e2d6-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 02:45:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 2EA0BB821AD;
 Fri,  3 Jun 2022 00:45:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A23C7C385A5;
 Fri,  3 Jun 2022 00:45:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75481183-e2d6-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654217123;
	bh=pswBL66gapywbHHoSo80JuIpUzyBo8FhSXc1vRvxQ/s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IF370UXKjJmpF5G2kYj31nB7kpo77aTpE1SRyTcxF+0C3o2o7BxpdRBs4wasm1ZbN
	 pF2sEMyvgzGSIvy6Pjktp8dnVHR1biVeFt72bwkTZG6BQ6F/jNHDnj61uO+grIrzr7
	 FeT6O97Eu9moJ3wcV9/KuR+s8Wkf3Pl8/D9ndeCHXY1FqdXLT6Q2Et/xaXh2QJRBwa
	 vp64qgpB8tdeFqqn6U5caFVAsrP+vXpJ6fK05tSuxvCmNRS3txaxkeF99AY093rYD+
	 DnXc6A9QIt5T0lNnljrEOGp0WCcTmvQml+tYRtHKJHNZB2I8WdI3v48UDJTQNSWkju
	 hTflZnJtksV6Q==
Date: Thu, 2 Jun 2022 17:45:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/4] xen/arm: Sync sysregs and cpuinfo with Linux
 5.18-rc3
In-Reply-To: <6b828874989f198afe9041185075938f718dd495.1653993431.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021730330.2783803@ubuntu-linux-20-04-desktop>
References: <cover.1653993431.git.bertrand.marquis@arm.com> <6b828874989f198afe9041185075938f718dd495.1653993431.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Bertrand Marquis wrote:
> Sync existing ID registers sanitization with the status of Linux kernel
> version 5.18-rc3 and add sanitization of ISAR2 registers.
> 
> Sync sysregs.h bit shift definitions with the status of Linux kernel
> version 5.18-rc3.
> 
> The changes in this patch are split across a number of patches in the
> Linux kernel and, as the previous synchronisation point was not clear,
> they are done here in a single patch, whose result can easily be
> compared by diffing the Xen files against the Linux kernel files.
> 
> Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git b2d229d4ddb1
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v2
> - move changes in cpufeature.h in an independent patch
> - add proper origin tag in the commit
> - rework the commit message
> ---
>  xen/arch/arm/arm64/cpufeature.c          | 18 +++++-
>  xen/arch/arm/include/asm/arm64/sysregs.h | 76 ++++++++++++++++++++----
>  2 files changed, 80 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
> index 6e5d30dc7b..d9039d37b2 100644
> --- a/xen/arch/arm/arm64/cpufeature.c
> +++ b/xen/arch/arm/arm64/cpufeature.c
> @@ -143,6 +143,16 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>  	ARM64_FTR_END,
>  };
>  
> +static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR2_APA3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_GPA3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
>  static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
> @@ -158,8 +168,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>  	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
>  	ARM64_FTR_END,
>  };
>  
> @@ -197,7 +207,7 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
>  };
>  
>  static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
>  	/*
> @@ -243,6 +253,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
>  };
>  
>  static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
> @@ -588,6 +599,7 @@ void update_system_features(const struct cpuinfo_arm *new)
>  
>  	SANITIZE_ID_REG(isa64, 0, aa64isar0);
>  	SANITIZE_ID_REG(isa64, 1, aa64isar1);
> +	SANITIZE_ID_REG(isa64, 2, aa64isar2);
>  
>  	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
>  
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index eac08ed33f..54670084c3 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -144,6 +144,30 @@
>  
>  /* id_aa64isar2 */
>  #define ID_AA64ISAR2_CLEARBHB_SHIFT 28
> +#define ID_AA64ISAR2_APA3_SHIFT     12
> +#define ID_AA64ISAR2_GPA3_SHIFT     8
> +#define ID_AA64ISAR2_RPRES_SHIFT    4
> +#define ID_AA64ISAR2_WFXT_SHIFT     0
> +
> +#define ID_AA64ISAR2_RPRES_8BIT     0x0
> +#define ID_AA64ISAR2_RPRES_12BIT    0x1
> +/*
> + * Value 0x1 has been removed from the architecture, and is
> + * reserved, but has not yet been removed from the ARM ARM
> + * as of ARM DDI 0487G.b.
> + */
> +#define ID_AA64ISAR2_WFXT_NI        0x0
> +#define ID_AA64ISAR2_WFXT_SUPPORTED 0x2
> +
> +#define ID_AA64ISAR2_APA3_NI                  0x0
> +#define ID_AA64ISAR2_APA3_ARCHITECTED         0x1
> +#define ID_AA64ISAR2_APA3_ARCH_EPAC           0x2
> +#define ID_AA64ISAR2_APA3_ARCH_EPAC2          0x3
> +#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC     0x4
> +#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC_CMB 0x5
> +
> +#define ID_AA64ISAR2_GPA3_NI             0x0
> +#define ID_AA64ISAR2_GPA3_ARCHITECTED    0x1
>  
>  /* id_aa64pfr0 */
>  #define ID_AA64PFR0_CSV3_SHIFT       60
> @@ -165,14 +189,13 @@
>  #define ID_AA64PFR0_AMU              0x1
>  #define ID_AA64PFR0_SVE              0x1
>  #define ID_AA64PFR0_RAS_V1           0x1
> +#define ID_AA64PFR0_RAS_V1P1         0x2
>  #define ID_AA64PFR0_FP_NI            0xf
>  #define ID_AA64PFR0_FP_SUPPORTED     0x0
>  #define ID_AA64PFR0_ASIMD_NI         0xf
>  #define ID_AA64PFR0_ASIMD_SUPPORTED  0x0
> -#define ID_AA64PFR0_EL1_64BIT_ONLY   0x1
> -#define ID_AA64PFR0_EL1_32BIT_64BIT  0x2
> -#define ID_AA64PFR0_EL0_64BIT_ONLY   0x1
> -#define ID_AA64PFR0_EL0_32BIT_64BIT  0x2
> +#define ID_AA64PFR0_ELx_64BIT_ONLY   0x1
> +#define ID_AA64PFR0_ELx_32BIT_64BIT  0x2
>  
>  /* id_aa64pfr1 */
>  #define ID_AA64PFR1_MPAMFRAC_SHIFT   16
> @@ -189,6 +212,7 @@
>  #define ID_AA64PFR1_MTE_NI           0x0
>  #define ID_AA64PFR1_MTE_EL0          0x1
>  #define ID_AA64PFR1_MTE              0x2
> +#define ID_AA64PFR1_MTE_ASYMM        0x3
>  
>  /* id_aa64zfr0 */
>  #define ID_AA64ZFR0_F64MM_SHIFT      56
> @@ -228,17 +252,37 @@
>  #define ID_AA64MMFR0_ASID_SHIFT      4
>  #define ID_AA64MMFR0_PARANGE_SHIFT   0
>  
> -#define ID_AA64MMFR0_TGRAN4_NI         0xf
> -#define ID_AA64MMFR0_TGRAN4_SUPPORTED  0x0
> -#define ID_AA64MMFR0_TGRAN64_NI        0xf
> -#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
> -#define ID_AA64MMFR0_TGRAN16_NI        0x0
> -#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
> +#define ID_AA64MMFR0_ASID_8          0x0
> +#define ID_AA64MMFR0_ASID_16         0x2
> +
> +#define ID_AA64MMFR0_TGRAN4_NI             0xf
> +#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN  0x0
> +#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX  0x7
> +#define ID_AA64MMFR0_TGRAN64_NI            0xf
> +#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0
> +#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7
> +#define ID_AA64MMFR0_TGRAN16_NI            0x0
> +#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1
> +#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf
> +
> +#define ID_AA64MMFR0_PARANGE_32        0x0
> +#define ID_AA64MMFR0_PARANGE_36        0x1
> +#define ID_AA64MMFR0_PARANGE_40        0x2
> +#define ID_AA64MMFR0_PARANGE_42        0x3
> +#define ID_AA64MMFR0_PARANGE_44        0x4
>  #define ID_AA64MMFR0_PARANGE_48        0x5
>  #define ID_AA64MMFR0_PARANGE_52        0x6
>  
> +#define ARM64_MIN_PARANGE_BITS     32
> +
> +#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0
> +#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE    0x1
> +#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN     0x2
> +#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX     0x7
> +
>  /* id_aa64mmfr1 */
>  #define ID_AA64MMFR1_ECBHB_SHIFT     60
> +#define ID_AA64MMFR1_AFP_SHIFT       44
>  #define ID_AA64MMFR1_ETS_SHIFT       36
>  #define ID_AA64MMFR1_TWED_SHIFT      32
>  #define ID_AA64MMFR1_XNX_SHIFT       28
> @@ -271,6 +315,9 @@
>  #define ID_AA64MMFR2_CNP_SHIFT       0
>  
>  /* id_aa64dfr0 */
> +#define ID_AA64DFR0_MTPMU_SHIFT      48
> +#define ID_AA64DFR0_TRBE_SHIFT       44
> +#define ID_AA64DFR0_TRACE_FILT_SHIFT 40
>  #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
>  #define ID_AA64DFR0_PMSVER_SHIFT     32
>  #define ID_AA64DFR0_CTX_CMPS_SHIFT   28
> @@ -284,11 +331,18 @@
>  #define ID_AA64DFR0_PMUVER_8_1       0x4
>  #define ID_AA64DFR0_PMUVER_8_4       0x5
>  #define ID_AA64DFR0_PMUVER_8_5       0x6
> +#define ID_AA64DFR0_PMUVER_8_7       0x7
>  #define ID_AA64DFR0_PMUVER_IMP_DEF   0xf
>  
> +#define ID_AA64DFR0_PMSVER_8_2      0x1
> +#define ID_AA64DFR0_PMSVER_8_3      0x2
> +
>  #define ID_DFR0_PERFMON_SHIFT        24
>  
> -#define ID_DFR0_PERFMON_8_1          0x4
> +#define ID_DFR0_PERFMON_8_0         0x3
> +#define ID_DFR0_PERFMON_8_1         0x4
> +#define ID_DFR0_PERFMON_8_4         0x5
> +#define ID_DFR0_PERFMON_8_5         0x6
>  
>  #define ID_ISAR4_SWP_FRAC_SHIFT        28
>  #define ID_ISAR4_PSR_M_SHIFT           24
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:51:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341343.566566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvXJ-0006YR-IW; Fri, 03 Jun 2022 00:51:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341343.566566; Fri, 03 Jun 2022 00:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvXJ-0006YK-Fn; Fri, 03 Jun 2022 00:51:33 +0000
Received: by outflank-mailman (input) for mailman id 341343;
 Fri, 03 Jun 2022 00:51:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvXH-0006YE-DE
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 00:51:31 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4dd4731d-e2d7-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 02:51:30 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E6EC960C92;
 Fri,  3 Jun 2022 00:51:28 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 058E0C385A5;
 Fri,  3 Jun 2022 00:51:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dd4731d-e2d7-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654217488;
	bh=2Evjsba6HoIHlQIqA37xEcgouS2i8iyCI0A8MlhrQOY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gd9kMrkKvvIkK3NmFSni3Uo6I0wVmArSh1ugIcGRKvipnQddpYKnvzDKYTucsogEu
	 5Vsl7FSNE7STQ/DEz3ZandxKQYCvQK5r4mNwmO7z7jB8/R/gTuWwniufjWpLC5uyQF
	 dYSLza+64oPBYSoxgVFzSEPenpKGyFwEmzk6iAKu378+5Fke7fOparBeYim++M+63i
	 UbwtIM936Jbaa1qKkiFZA0+e9nZSWbhLVEdxgGEfE7u6gYa639CV86bhMC4cwj63NL
	 KEqWxbzszx0fe37oLvlFwdx7UbGKi+UJ6/pmWDoh1dHkRIsk86dymqH8JiRTV/ZklX
	 6uwGfe6tHkhjA==
Date: Thu, 2 Jun 2022 17:51:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 0/4] Spectre BHB follow up
In-Reply-To: <cover.1653993431.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021749110.2783803@ubuntu-linux-20-04-desktop>
References: <cover.1653993431.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

I reviewed patches #1 and #3. Julien had already started reviewing the
other patches in detail, so it is probably better if he continues his
reviews of those; I skipped them for now. Let me know if you'd like
me to review them.

On Tue, 31 May 2022, Bertrand Marquis wrote:
> Following up on the handling of Spectre BHB on Arm (XSA-398), this series
> contains several changes which were not needed in the XSA patches but
> should be done in Xen:
> - Sync sysregs and cpuinfo with latest version of Linux (5.18-rc3)
> - Add new fields inside cpufeature
> - Add sb instruction support. Some newer CPU generations (Neoverse-N2)
>   do support the instruction, so add support for it in Xen.
> - Create hidden Kconfig entries for CONFIG_ values actually used in
>   arm64 cpufeature.
> 
> Changes in v2
> - remove patch which was merged (workaround 1 when workaround 3 is done)
> - split sync with linux and update of cpufeatures
> - add patch to define kconfig entries used by arm64 cpufeature
> 
> Bertrand Marquis (4):
>   xen/arm: Sync sysregs and cpuinfo with Linux 5.18-rc3
>   xen/arm: Add sb instruction support
>   arm: add ISAR2, MMFR0 and MMFR1 fields in cpufeature
>   arm: Define kconfig symbols used by arm64 cpufeatures
> 
>  xen/arch/arm/Kconfig                     | 28 +++++++++
>  xen/arch/arm/arm64/cpufeature.c          | 18 +++++-
>  xen/arch/arm/cpufeature.c                | 28 +++++++++
>  xen/arch/arm/include/asm/arm64/sysregs.h | 76 ++++++++++++++++++++----
>  xen/arch/arm/include/asm/cpufeature.h    | 34 +++++++++--
>  xen/arch/arm/include/asm/macros.h        | 33 +++++++---
>  xen/arch/arm/setup.c                     |  3 +
>  xen/arch/arm/smpboot.c                   |  1 +
>  8 files changed, 193 insertions(+), 28 deletions(-)
> 
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 00:59:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 00:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341351.566577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvfH-0007U7-F6; Fri, 03 Jun 2022 00:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341351.566577; Fri, 03 Jun 2022 00:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvfH-0007U0-BA; Fri, 03 Jun 2022 00:59:47 +0000
Received: by outflank-mailman (input) for mailman id 341351;
 Fri, 03 Jun 2022 00:59:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvfG-0007Tu-53
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 00:59:46 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74273651-e2d8-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 02:59:44 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id 0A1E1CE1D34;
 Fri,  3 Jun 2022 00:59:42 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D13C1C385A5;
 Fri,  3 Jun 2022 00:59:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74273651-e2d8-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654217980;
	bh=6VjJLc3D4YoG+52fRsVLrMuE+lidkCVepcikso8+YUs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fKAoks7G7jMQ+ENDtgFlFk0VzUus04JejR6r4D4dhy9tJU2QNmvr6qLeNEPtYrCLK
	 /3SxZLGLCHZJc3848mRvNnc19rvYdM1ZTzvrwwyEAMct3IJOur+/ZX0mPempcG5snJ
	 QZ5HimeB/D0jUEmRMAIh+i67PlQ/eLTANFw8+dppolkIfCDH2mpJnpJJu74H6Xf+Tx
	 Gx6pet55yzm8MHD6VDdXnGcnVx82UWbS+nq+o1pH1bTByMdxguF4srfOevgUJlzzJo
	 5tNhVeg7RTzO2iKmYa0ox3OJeZ8itu8suu8AqMg45wMWTP7dHh2AIq9mpQN3mYhNkM
	 NJEeSX7zFPiLQ==
Date: Thu, 2 Jun 2022 17:59:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Penny Zheng <Penny.Zheng@arm.com>
cc: xen-devel@lists.xenproject.org, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 3/9] xen: update SUPPORT.md for static allocation
In-Reply-To: <20220531031241.90374-4-Penny.Zheng@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021758010.2783803@ubuntu-linux-20-04-desktop>
References: <20220531031241.90374-1-Penny.Zheng@arm.com> <20220531031241.90374-4-Penny.Zheng@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Penny Zheng wrote:
> SUPPORT.md doesn't explicitly say whether static memory allocation is
> supported, so this commit updates SUPPORT.md to list the static allocation
> feature as Tech Preview for now.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v5 changes:
> - new commit
> ---
>  SUPPORT.md | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index ee2cd319e2..5980a82c4b 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -278,6 +278,13 @@ to boot with memory < maxmem.
>  
>      Status, x86 HVM: Supported
>  
> +### Static Allocation
> +
> +Static allocation refers to system or sub-system(domains) for which memory
> +areas are pre-defined by configuration using physical address ranges.

Although I completely understand what you mean, I would rather not use
the word "sub-system", as we haven't used it before in a Xen context.
Instead I would only use "domains". I would write it like this:


### Static Allocation

Static allocation refers to domains for which memory areas are
pre-defined by configuration using physical address ranges.

Status, ARM: Tech Preview


With that you can add

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> +    Status, ARM: Tech Preview
> +
>  ### Memory Sharing
>  
>  Allow sharing of identical pages between guests
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 01:06:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 01:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341359.566588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvlD-0007PJ-3B; Fri, 03 Jun 2022 01:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341359.566588; Fri, 03 Jun 2022 01:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvlD-0007PC-0C; Fri, 03 Jun 2022 01:05:55 +0000
Received: by outflank-mailman (input) for mailman id 341359;
 Fri, 03 Jun 2022 01:05:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvlB-0007P6-Q0
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 01:05:53 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5033c4b7-e2d9-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 03:05:52 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id D613EB821A6;
 Fri,  3 Jun 2022 01:05:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0DDE5C385A5;
 Fri,  3 Jun 2022 01:05:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5033c4b7-e2d9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654218350;
	bh=W6vsh6gkSltB0lQbkBL9hpDAbNIsH/xfNzMFiPILaos=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MCnCN9KHcd7ReoRlu9n1dx4knm6omVffJHrtojfR5MOhIlFw5tydUSWHGQS+Pi+3G
	 c04uZK76VWOslDrorY9MuWxhvfqGAZ2V8fCOS/3W2OyScTQcwS2dfpUbls0+Gf7XFQ
	 SYplbLjkE/H2bFg7Eb6w1bnVkZnpLC1yECTO/g+xM+qfKrDjj5es0Kv8xZ3n+ve11Q
	 loxthL8J30lg6hm5xLjllj6UCuUmNKV8nrYCYdZaASu6uLOz2I3fWjTPfIcZ21kKw5
	 VVYg+OgiFB7x4MJtMZg2c9tpEUtHWvKOPEmHDRyzR9L6PIjsvktTSrI3G+F80S+Ja1
	 HjZYKWMcr0MZg==
Date: Thu, 2 Jun 2022 18:05:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Penny Zheng <Penny.Zheng@arm.com>
cc: xen-devel@lists.xenproject.org, wei.chen@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 6/9] xen/arm: introduce CDF_staticmem
In-Reply-To: <20220531031241.90374-7-Penny.Zheng@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021805140.2783803@ubuntu-linux-20-04-desktop>
References: <20220531031241.90374-1-Penny.Zheng@arm.com> <20220531031241.90374-7-Penny.Zheng@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Penny Zheng wrote:
> In order to have an easy and quick way to find out whether a domain's memory
> is statically configured, this commit introduces a new flag CDF_staticmem and
> a new helper, is_domain_using_staticmem(), to query it.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

I realize Jan asked for improvements but I just wanted to say that on my
side it looks good.


> ---
> v5 changes:
> - guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
> - #define is_domain_using_staticmem zero if undefined
> ---
> v4 changes:
> - no changes
> ---
> v3 changes:
> - change name from "is_domain_static()" to "is_domain_using_staticmem"
> ---
> v2 changes:
> - change name from "is_domain_on_static_allocation" to "is_domain_static()"
> ---
>  xen/arch/arm/domain_build.c       | 5 ++++-
>  xen/arch/arm/include/asm/domain.h | 4 ++++
>  xen/include/xen/domain.h          | 6 ++++++
>  3 files changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7ddd16c26d..f6e2e44c1e 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3287,9 +3287,12 @@ void __init create_domUs(void)
>          if ( !dt_device_is_compatible(node, "xen,domain") )
>              continue;
>  
> +        if ( dt_find_property(node, "xen,static-mem", NULL) )
> +            flags |= CDF_staticmem;
> +
>          if ( dt_property_read_bool(node, "direct-map") )
>          {
> -            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !dt_find_property(node, "xen,static-mem", NULL) )
> +            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !(flags & CDF_staticmem) )
>                  panic("direct-map is not valid for domain %s without static allocation.\n",
>                        dt_node_name(node));
>  
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index fe7a029ebf..6bb999aff0 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -31,6 +31,10 @@ enum domain_type {
>  
>  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>  
> +#ifdef CONFIG_STATIC_MEMORY
> +#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> +#endif
> +
>  /*
>   * Is the domain using the host memory layout?
>   *
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 1c3c88a14d..c613afa57e 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -34,6 +34,12 @@ void arch_get_domain_info(const struct domain *d,
>  #ifdef CONFIG_ARM
>  /* Should domain memory be directly mapped? */
>  #define CDF_directmap            (1U << 1)
> +/* Is domain memory on static allocation? */
> +#define CDF_staticmem            (1U << 2)
> +#endif
> +
> +#ifndef is_domain_using_staticmem
> +#define is_domain_using_staticmem(d) ((void)(d), false)
>  #endif
>  
>  /*
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 01:12:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 01:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341367.566599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvrA-0000cm-P8; Fri, 03 Jun 2022 01:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341367.566599; Fri, 03 Jun 2022 01:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwvrA-0000cf-LD; Fri, 03 Jun 2022 01:12:04 +0000
Received: by outflank-mailman (input) for mailman id 341367;
 Fri, 03 Jun 2022 01:12:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nwvr9-0000cZ-1N
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 01:12:03 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c3e58dd-e2da-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 03:12:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 21129B821B7;
 Fri,  3 Jun 2022 01:12:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 652C6C385A5;
 Fri,  3 Jun 2022 01:11:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c3e58dd-e2da-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654218719;
	bh=R4U6+hWyPYxvxGO+R4tI7iADPGDeeWkKT7LC37G7bPY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AWoxhE0o/WhOqyWgG9lVXA40J3l68eO1cqNB5xfpQ2gkYSxnWOdV406OVDInqQS3P
	 9qhlAOzqsIj9JlSyfni8zibyF5XP0Jw5h3oFqLeHl9/H7Zqiv9dCjyUSY/+37xNWXC
	 CwmavIGfKI8HkQabmGlQl/mk+SYcMUdVJFrWRGuxYQoMePcVyV98XYyKlXSK10RSjT
	 WPiwqHcW7C8/9Ha6VfiFx7286QqnbQTPLCbWfgkrUb3hzhPzbNV4Yv+Mo+MPeJYBN8
	 q9TE91rrmWSL/IS05EijKNdERy0wwadv6nTSBLB7hCtj88c1OrkkutCg6ZwFMBz/Ve
	 9HNrQ9kKBssUQ==
Date: Thu, 2 Jun 2022 18:11:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Penny Zheng <Penny.Zheng@arm.com>
cc: xen-devel@lists.xenproject.org, wei.chen@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 7/9] xen/arm: unpopulate memory when domain is
 static
In-Reply-To: <20220531031241.90374-8-Penny.Zheng@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206021808090.2783803@ubuntu-linux-20-04-desktop>
References: <20220531031241.90374-1-Penny.Zheng@arm.com> <20220531031241.90374-8-Penny.Zheng@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Penny Zheng wrote:
> Today, when a domain unpopulates memory at runtime, it always hands the
> memory back to the heap allocator. This is a problem if the domain is
> static.
> 
> Pages used as guest RAM for a static domain shall be reserved to that domain
> only and not be used for any other purpose, so they shall never go back to
> the heap allocator.
> 
> This commit puts reserved pages on the new list resv_page_list, only after
> having taken them off the "normal" list, when the last reference is dropped.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

ARM bits:
Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v5 changes:
> - adapt this patch for PGC_staticmem
> ---
> v4 changes:
> - no changes
> ---
> v3 changes:
> - have page_list_del() just once out of the if()
> - remove resv_pages counter
> - make arch_free_heap_page be an expression, not a compound statement.
> ---
> v2 changes:
> - put reserved pages on resv_page_list after having taken them off
> the "normal" list
> ---
>  xen/arch/arm/include/asm/mm.h | 12 ++++++++++++
>  xen/common/domain.c           |  4 ++++
>  xen/include/xen/sched.h       |  3 +++
>  3 files changed, 19 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 56d0939318..ca384a3939 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -360,6 +360,18 @@ void clear_and_clean_page(struct page_info *page);
>  
>  unsigned int arch_get_dma_bitsize(void);
>  
> +/*
> + * Put free pages on the resv page list after having taken them
> + * off the "normal" page list, when pages from static memory
> + */
> +#ifdef CONFIG_STATIC_MEMORY
> +#define arch_free_heap_page(d, pg) ({                   \
> +    page_list_del(pg, page_to_list(d, pg));             \
> +    if ( (pg)->count_info & PGC_staticmem )              \
> +        page_list_add_tail(pg, &(d)->resv_page_list);   \
> +})
> +#endif
> +
>  #endif /*  __ARCH_ARM_MM__ */
>  /*
>   * Local variables:
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index a3ef991bd1..a49574fa24 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -604,6 +604,10 @@ struct domain *domain_create(domid_t domid,
>      INIT_PAGE_LIST_HEAD(&d->page_list);
>      INIT_PAGE_LIST_HEAD(&d->extra_page_list);
>      INIT_PAGE_LIST_HEAD(&d->xenpage_list);
> +#ifdef CONFIG_STATIC_MEMORY
> +    INIT_PAGE_LIST_HEAD(&d->resv_page_list);
> +#endif
> +
>  
>      spin_lock_init(&d->node_affinity_lock);
>      d->node_affinity = NODE_MASK_ALL;
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 5191853c18..3e22c77333 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -381,6 +381,9 @@ struct domain
>      struct page_list_head page_list;  /* linked list */
>      struct page_list_head extra_page_list; /* linked list (size extra_pages) */
>      struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
> +#ifdef CONFIG_STATIC_MEMORY
> +    struct page_list_head resv_page_list; /* linked list (size resv_pages) */
> +#endif
>  
>      /*
>       * This field should only be directly accessed by domain_adjust_tot_pages()
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 05:31:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 05:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341376.566610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwztY-0005sw-5J; Fri, 03 Jun 2022 05:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341376.566610; Fri, 03 Jun 2022 05:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nwztY-0005sp-26; Fri, 03 Jun 2022 05:30:48 +0000
Received: by outflank-mailman (input) for mailman id 341376;
 Fri, 03 Jun 2022 05:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwztW-0005sf-6W; Fri, 03 Jun 2022 05:30:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwztW-0004Ti-3p; Fri, 03 Jun 2022 05:30:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nwztV-0006LI-Hj; Fri, 03 Jun 2022 05:30:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nwztV-0008Kl-Eq; Fri, 03 Jun 2022 05:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sD5wE1fGgiyWkmWb+/P3BMQeeooqsHjYqXoGUGFvpZY=; b=Gl5rAPK4Iex1hn67qbJA5XNSJJ
	Xv91gpzYAfj5TgVqxcc/rodXhou5tSqgC6QIRVw6gknRBFPr0EWovFf8TUBM7Q+vvOmM1isDSYozw
	ptlFNXCAovCx/a+o1QSDwP86VShssFEJ8/atHqlH/YgOO7FmU/B0bkRj+VA8Cjz0PsVY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170810-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170810: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58f9d52ff689a262bec7f5713c07f5a79e115168
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 05:30:45 +0000

flight 170810 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170810/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58f9d52ff689a262bec7f5713c07f5a79e115168
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   10 days
Failing since        170716  2022-05-24 11:12:06 Z    9 days   28 attempts
Testing same since   170810  2022-06-02 22:12:29 Z    0 days    1 attempts

------------------------------------------------------------
2055 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 231856 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 09:12:33 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170812-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170812: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c9641eb422905cc0804a7e310269abf09543cce8
X-Osstest-Versions-That:
    qemuu=1e62a82574fc28e64deca589a23cf55ada2e1a7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 09:12:11 +0000

flight 170812 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170812/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail blocked in 170809
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170809
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170809
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170809
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170809
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170809
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170809
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170809
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170809
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c9641eb422905cc0804a7e310269abf09543cce8
baseline version:
 qemuu                1e62a82574fc28e64deca589a23cf55ada2e1a7d

Last test of basis   170809  2022-06-02 15:09:54 Z    0 days
Testing same since   170812  2022-06-03 00:39:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Joel Stanley <joel@jms.id.au>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1e62a82574..c9641eb422  c9641eb422905cc0804a7e310269abf09543cce8 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 09:38:42 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170814-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170814: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64:xen-build:fail:regression
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=8d2567bccff06ee99064efdcfbc22cdaf5611d97
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 09:38:35 +0000

flight 170814 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170814/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64                   6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              8d2567bccff06ee99064efdcfbc22cdaf5611d97
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  693 days
Failing since        151818  2020-07-11 04:18:52 Z  692 days  674 attempts
Testing same since   170814  2022-06-03 04:20:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111322 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 10:27:43 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170816-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170816: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0223898f3e45e8c7e9d8ba01aea5f95a282f420b
X-Osstest-Versions-That:
    ovmf=64706ef761273ba403f9cb3b7a986bfb804c0a87
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 10:27:27 +0000

flight 170816 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170816/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0223898f3e45e8c7e9d8ba01aea5f95a282f420b
baseline version:
 ovmf                 64706ef761273ba403f9cb3b7a986bfb804c0a87

Last test of basis   170807  2022-06-02 09:13:22 Z    1 days
Testing same since   170816  2022-06-03 08:41:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dov Murik <dovmurik@linux.ibm.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiewen Yao <Jiewen.yao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   64706ef761..0223898f3e  0223898f3e45e8c7e9d8ba01aea5f95a282f420b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 10:43:28 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170813-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170813: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 10:43:20 +0000

flight 170813 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170813/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine-uefi  6 xen-install      fail in 170806 pass in 170813
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 170806

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail in 170806 like 170801
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170806
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170806
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170806
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170806
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170806
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170806
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170806
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170806
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170806
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170806
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170806
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170806
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170813  2022-06-03 01:53:58 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 11:08:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 11:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341449.566668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx5AK-00053l-5n; Fri, 03 Jun 2022 11:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341449.566668; Fri, 03 Jun 2022 11:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx5AK-00053e-1W; Fri, 03 Jun 2022 11:08:28 +0000
Received: by outflank-mailman (input) for mailman id 341449;
 Fri, 03 Jun 2022 11:08:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BWPM=WK=gmail.com=matiasevara@srs-se1.protection.inumbo.net>)
 id 1nx5AI-00053S-HK
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 11:08:26 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c05acbb-e32d-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 13:08:25 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id q1so15234280ejz.9
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jun 2022 04:08:23 -0700 (PDT)
Received: from horizon ([84.46.98.218]) by smtp.gmail.com with ESMTPSA id
 y17-20020a170906525100b006ff9e36cfeesm2671699ejm.196.2022.06.03.04.08.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 03 Jun 2022 04:08:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c05acbb-e32d-11ec-bd2c-47488cf2e6aa
Date: Fri, 3 Jun 2022 13:08:20 +0200
From: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
	Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 2/2] tools/misc: Add xen-stats tool
Message-ID: <20220603110820.GA193297@horizon>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <e233c4f60c6fe97b93b3adf9affeb0404c554130.1652797713.git.matias.vara@vates.fr>
 <YpX48uwOGVqayb/x@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YpX48uwOGVqayb/x@perard.uk.xensource.com>

Hello Anthony, and thanks for your comments. I have addressed them below:

On Tue, May 31, 2022 at 12:16:02PM +0100, Anthony PERARD wrote:
> Hi Matias,
> 
> On Tue, May 17, 2022 at 04:33:15PM +0200, Matias Ezequiel Vara Larsen wrote:
> > Add a demonstration tool that uses the stats_table resource to
> > query vcpu time for a DomU.
> > 
> > Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
> > ---
> > diff --git a/tools/misc/Makefile b/tools/misc/Makefile
> > index 2b683819d4..b510e3aceb 100644
> > --- a/tools/misc/Makefile
> > +++ b/tools/misc/Makefile
> > @@ -135,4 +135,9 @@ xencov: xencov.o
> >  xen-ucode: xen-ucode.o
> >  	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
> >  
> > +xen-stats.o: CFLAGS += $(CFLAGS_libxenforeignmemory)
> > +
> > +xen-stats: xen-stats.o
> 
> The tool seems to be only about vCPUs; maybe `xen-stats` is a bit too
> generic. Would `xen-vcpus-stats`, or maybe something with `time` in it,
> be better?

Do you think `xen-vcpus-stats` would be good enough?

> Also, is it a tool that could be useful enough to be installed by
> default? Should we at least build it by default so it doesn't rot? (By
> adding it only to $(TARGETS).)

I will build this tool by default in the next version of the patches.
 
> > +	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
> > +
> >  -include $(DEPS_INCLUDE)
> > diff --git a/tools/misc/xen-stats.c b/tools/misc/xen-stats.c
> > new file mode 100644
> > index 0000000000..5d4a3239cc
> > --- /dev/null
> > +++ b/tools/misc/xen-stats.c
> > @@ -0,0 +1,83 @@
> > +#include <err.h>
> > +#include <errno.h>
> > +#include <error.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <sys/mman.h>
> > +#include <signal.h>
> > +
> > +#include <xenctrl.h>
> 
> It seems overkill to use this header when the tool only uses the
> xenforeignmemory interface. But I don't know how to replace
> XC_PAGE_SHIFT, so I guess that's ok.
> 
> > +#include <xenforeignmemory.h>
> > +#include <xen-tools/libs.h>
> 
> What do you use these headers for? Are they left over?

`xenforeignmemory.h` is used for the `xenforeignmemory_*` functions.
`xen-tools/libs.h` is a leftover, so I will remove it in the next version.

> > +static volatile sig_atomic_t interrupted;
> > +static void close_handler(int signum)
> > +{
> > +    interrupted = 1;
> > +}
> > +
> > +int main(int argc, char **argv)
> > +{
> > +    xenforeignmemory_handle *fh;
> > +    xenforeignmemory_resource_handle *res;
> > +    size_t size;
> > +    int rc, nr_frames, domid, frec, vcpu;
> > +    uint64_t * info;
> > +    struct sigaction act;
> > +
> > +    if (argc != 4 ) {
> > +        fprintf(stderr, "Usage: %s <domid> <vcpu> <period>\n", argv[0]);
> > +        return 1;
> > +    }
> > +
> > +    // TODO: this depends on the resource
> > +    nr_frames = 1;
> > +
> > +    domid = atoi(argv[1]);
> > +    frec = atoi(argv[3]);
> > +    vcpu = atoi(argv[2]);
> 
> Can you swap the last two lines? I think it is better if the order is the
> same as on the command line.

Yes, I can.

> > +
> > +    act.sa_handler = close_handler;
> > +    act.sa_flags = 0;
> > +    sigemptyset(&act.sa_mask);
> > +    sigaction(SIGHUP,  &act, NULL);
> > +    sigaction(SIGTERM, &act, NULL);
> > +    sigaction(SIGINT,  &act, NULL);
> > +    sigaction(SIGALRM, &act, NULL);
> > +
> > +    fh = xenforeignmemory_open(NULL, 0);
> > +
> > +    if ( !fh )
> > +        err(1, "xenforeignmemory_open");
> > +
> > +    rc = xenforeignmemory_resource_size(
> > +        fh, domid, XENMEM_resource_stats_table,
> > +        vcpu, &size);
> > +
> > +    if ( rc )
> > +        err(1, "    Fail: Get size: %d - %s\n", errno, strerror(errno));
> 
> It seems that err() already prints strerror() and adds a "\n", so
> why print it again? Also, if we have strerror(), what's the point of
> printing errno?

I will remove errno, strerror(errno), and the extra "\n".

> Also, I'm not sure the extra indentation in the error message is really
> useful, but that doesn't really matter.

I will remove the indentation.

> > +
> > +    if ( (size >> XC_PAGE_SHIFT) != nr_frames )
> > +        err(1, "    Fail: Get size: expected %u frames, got %zu\n",
> > +                    nr_frames, size >> XC_PAGE_SHIFT);
> 
> err() prints strerror(errno); maybe errx() is better here.

I will use errx().

Thanks,
 
> 
> Thanks,
> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 12:17:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 12:17:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341465.566684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Ec-00056i-Ko; Fri, 03 Jun 2022 12:16:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341465.566684; Fri, 03 Jun 2022 12:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Ec-00055q-Fx; Fri, 03 Jun 2022 12:16:58 +0000
Received: by outflank-mailman (input) for mailman id 341465;
 Fri, 03 Jun 2022 12:16:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx6Eb-00052x-Er
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 12:16:57 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d988743-e337-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 14:16:53 +0200 (CEST)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2059.outbound.protection.outlook.com [104.47.6.59]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-6-gUUoVrmrOGaCS5rgYj74NQ-1; Fri, 03 Jun 2022 14:16:51 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by HE1PR0401MB2476.eurprd04.prod.outlook.com (2603:10a6:3:83::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 3 Jun
 2022 12:16:48 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 12:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d988743-e337-11ec-837f-e5687231ffcc
Message-ID: <3165a99b-3a91-2ca3-80a0-af88d87e9bb0@suse.com>
Date: Fri, 3 Jun 2022 14:16:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220526111157.24479-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR06CA0287.eurprd06.prod.outlook.com
 (2603:10a6:20b:45a::21) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0

On 26.05.2022 13:11, Roger Pau Monne wrote:
> Add support for enabling Bus Lock Detection on Intel systems.  Such
> detection works by triggering a vmexit, which ought to be enough of a
> pause to prevent a guest from abusing the Bus Lock.
>
> Add an extra Xen perf counter to track the number of Bus Locks detected.
> This is done because Bus Locks can also be reported by setting bit 26
> in the exit reason field, so those are also accounted for.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

This implements just the VMexit part of the feature - maybe the
title wants to reflect that? The vmx: tag could also mean there
is exposure to guests included for the #DB part of the feature.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 12:17:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 12:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341463.566679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Ec-00053K-AO; Fri, 03 Jun 2022 12:16:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341463.566679; Fri, 03 Jun 2022 12:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Ec-00053D-7e; Fri, 03 Jun 2022 12:16:58 +0000
Received: by outflank-mailman (input) for mailman id 341463;
 Fri, 03 Jun 2022 12:16:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx6Ea-00052y-Qe; Fri, 03 Jun 2022 12:16:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx6Ea-0003S3-OA; Fri, 03 Jun 2022 12:16:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx6Ea-0001om-F3; Fri, 03 Jun 2022 12:16:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nx6Ea-0003ko-Ea; Fri, 03 Jun 2022 12:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+d/6Fr/6OAGRlSULLZCZKT0zXQcSwdJT0zBfRSbP1Co=; b=bi/WgGlscuCYoxUWIv0mAR9c8T
	GC7TCqsdqe3diQRxu+wOMI8WgwFx8Iz+VHMA2DIuETklTmpYfwTl1te2Vatg+UMwn2UTUMCyUmPyF
	pQuOyZguA78udhdCp8fQd8RDtiq9ZZ7W//+bipbREX6jEVdeYh5+TuhQKgMTeRdi8lUg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170818-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170818: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=632574ced10fe184d5665b73c62c959109c39961
X-Osstest-Versions-That:
    ovmf=0223898f3e45e8c7e9d8ba01aea5f95a282f420b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 12:16:56 +0000

flight 170818 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170818/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 632574ced10fe184d5665b73c62c959109c39961
baseline version:
 ovmf                 0223898f3e45e8c7e9d8ba01aea5f95a282f420b

Last test of basis   170816  2022-06-03 08:41:38 Z    0 days
Testing same since   170818  2022-06-03 10:46:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0223898f3e..632574ced1  632574ced10fe184d5665b73c62c959109c39961 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 12:21:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 12:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341481.566701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Id-00071R-4x; Fri, 03 Jun 2022 12:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341481.566701; Fri, 03 Jun 2022 12:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6Id-00071K-1R; Fri, 03 Jun 2022 12:21:07 +0000
Received: by outflank-mailman (input) for mailman id 341481;
 Fri, 03 Jun 2022 12:21:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx6Ib-00071C-Tv
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 12:21:05 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a37721e2-e337-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 14:21:04 +0200 (CEST)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2052.outbound.protection.outlook.com [104.47.8.52]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-3-STEBMNdRM2SViYaKXgYZlQ-1; Fri, 03 Jun 2022 14:21:02 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DB6PR0401MB2438.eurprd04.prod.outlook.com (2603:10a6:4:33::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 3 Jun
 2022 12:21:00 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 12:21:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a37721e2-e337-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654258864;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XrCx3zvRfXMrvo/sgtY5g7UtAwTMvt59RmLiIHbQNDg=;
	b=Nkano507JBZ7AGWIjUw2RrlO2LWd3PSVKYXavMzN1g18/dlf4cGpCFukkApHudn69l/mkg
	9tGN8UYNW9rEUgavgQeaNubQGohchMmj8kd8RrpL6OgM7UY5TV/9dog1OrNtVTK+QYER7h
	Fk63ftLaBzicatKqduuO458GM3mJXzo=
X-MC-Unique: STEBMNdRM2SViYaKXgYZlQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3629434d-a035-59c0-0fc8-f3db5fa6c18e@suse.com>
Date: Fri, 3 Jun 2022 14:20:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/3] x86/vmx: introduce helper to set
 VMX_INTR_SHADOW_NMI
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220526111157.24479-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6PR0202CA0062.eurprd02.prod.outlook.com
 (2603:10a6:20b:3a::39) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4b5c70bd-b364-4ed6-7b5c-08da455b8535
X-MS-TrafficTypeDiagnostic: DB6PR0401MB2438:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0401MB24384E56DA30385B6329F170B3A19@DB6PR0401MB2438.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b5c70bd-b364-4ed6-7b5c-08da455b8535
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 12:21:00.6655
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CK6kEiQo2+dCAT8tMJV36Ys5IDza6Zgm7lLOweBCGbUDTlK+ApU0t5gnAEhwz5dy7CM8dMCco/+BU24awv9CGg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0401MB2438

On 26.05.2022 13:11, Roger Pau Monne wrote:
> Introduce a small helper to OR VMX_INTR_SHADOW_NMI in
> GUEST_INTERRUPTIBILITY_INFO in order to help dealing with the NMI
> unblocked by IRET case.  Replace the existing usage in handling
> EXIT_REASON_EXCEPTION_NMI and also add such handling to EPT violations
> and page-modification log-full events.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 12:50:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 12:50:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341489.566712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6kf-0002Np-E7; Fri, 03 Jun 2022 12:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341489.566712; Fri, 03 Jun 2022 12:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6kf-0002N5-Au; Fri, 03 Jun 2022 12:50:05 +0000
Received: by outflank-mailman (input) for mailman id 341489;
 Fri, 03 Jun 2022 12:50:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx6kd-00020s-L2
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 12:50:03 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aedbfe11-e33b-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 14:50:02 +0200 (CEST)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2050.outbound.protection.outlook.com [104.47.6.50]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-28-wN2GYYO6Okuko51Vraw2IQ-1; Fri, 03 Jun 2022 14:49:58 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by HE1PR0401MB2361.eurprd04.prod.outlook.com (2603:10a6:3:27::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 3 Jun
 2022 12:49:56 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 12:49:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aedbfe11-e33b-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654260601;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cYPWkSKycOf0YUDd120qlDhf1HGkNEEQOEaXgbeQNRA=;
	b=helhaWjp6/dS5+pb2NNRklWXc/OIpaHL0PPU2amVjP9VmbCXw7hlKdOCd8FMhl8ZRMjZ6j
	7J0TWqwVhXUKUDHs1s0Ou8pjyG1kNxjWrAcNNx2qpDDE6FEK9MkcGLJzcr+SoxUxF6OQXW
	yDrii1V/p/DP7E66EBNsJXm/3zNjF9Q=
X-MC-Unique: wN2GYYO6Okuko51Vraw2IQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
Date: Fri, 3 Jun 2022 14:49:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220526111157.24479-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6PR10CA0050.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:80::27) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3a43c38d-3c65-4745-1473-08da455f8fc5
X-MS-TrafficTypeDiagnostic: HE1PR0401MB2361:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0401MB23615DA6AAFA8FB7F0BE068BB3A19@HE1PR0401MB2361.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a43c38d-3c65-4745-1473-08da455f8fc5
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 12:49:56.3551
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RZUozTMvu95uVG7gVjxoYr5E5kFgn3QAx7WS3SW/g0XcDUeUQrg7OCDRugRu1VOseJC3YAPKqWvwHWRunMJH3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0401MB2361

On 26.05.2022 13:11, Roger Pau Monne wrote:
> Under certain conditions guests can get the CPU stuck in an unbounded
> loop without the possibility of an interrupt window to occur on
> instruction boundary.  This was the case with the scenarios described
> in XSA-156.
>
> Make use of the Notify VM Exit mechanism, which will trigger a VM Exit
> if no interrupt window occurs for a specified amount of time.  Note
> that using the Notify VM Exit avoids having to trap #AC and #DB
> exceptions, as Xen is guaranteed to get a VM Exit even if the guest
> puts the CPU in a loop without an interrupt window; as such, disable
> the intercepts if the feature is available and enabled.
>
> Setting the notify VM exit window to 0 is safe because there's a
> threshold added by the hardware in order to have a sane window value.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Properly update debug state when using notify VM exit.
>  - Reword commit message.
> ---
> This change enables the notify VM exit by default, KVM however doesn't
> seem to enable it by default, and there's the following note in the
> commit message:
>
> "- There's a possibility, however small, that a notify VM exit happens
>    with VM_CONTEXT_INVALID set in exit qualification. In this case, the
>    vcpu can no longer run. To avoid killing a well-behaved guest, set
>    notify window as -1 to disable this feature by default."
>
> It's not obviously clear to me whether the comment was meant to be:
> "There's a possibility, however small, that a notify VM exit _wrongly_
> happens with VM_CONTEXT_INVALID".
>
> It's also not clear whether such wrong hardware behavior only affects
> a specific set of hardware, in a way that we could avoid enabling
> notify VM exit there.
>
> There's a discussion in one of the Linux patches that 128K might be
> the safer value in order to prevent false positives, but I have no
> formal confirmation about this.  Maybe our Intel maintainers can
> provide some more feedback on a suitable notify VM exit window
> value.

This certainly wants sorting one way or another before I, for one,
would consider giving an R-b here.

> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -67,6 +67,9 @@ integer_param("ple_gap", ple_gap);
>  static unsigned int __read_mostly ple_window = 4096;
>  integer_param("ple_window", ple_window);
>
> +static unsigned int __ro_after_init vm_notify_window;
> +integer_param("vm-notify-window", vm_notify_window);

Could even be a runtime param, I guess. Albeit I realize this would
complicate things further down.

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
>
>  void vmx_update_debug_state(struct vcpu *v)
>  {
> +    unsigned int mask = 1u << TRAP_int3;
> +
> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )

I'm puzzled by the lack of symmetry between this and ...

> +        /*
> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> +         */
> +        mask |= 1u << TRAP_debug;
> +
>      if ( v->arch.hvm.debug_state_latch )
> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> +        v->arch.hvm.vmx.exception_bitmap |= mask;
>      else
> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
>
>      vmx_vmcs_enter(v);
>      vmx_update_exception_bitmap(v);
> @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>          switch ( vector )
>          {
>          case TRAP_debug:
> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> +                goto exit_and_crash;

... this condition. Shouldn't one be the inverse of the other (and
then it's the one down here which wants adjusting)?

> @@ -126,5 +126,6 @@ PERFCOUNTER(realmode_exits,      "vmexits from realmode")
>  PERFCOUNTER(pauseloop_exits, "vmexits from Pause-Loop Detection")
>
>  PERFCOUNTER(buslock, "Bus Locks Detected")
> +PERFCOUNTER(vmnotify_crash, "domains crashed by Notify VM Exit")

I think the text is not entirely correct and would want to be
"domain crashes by ...". Multiple vCPU-s of a single domain can
experience this in parallel (granted this would require "good"
timing).

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 12:58:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 12:58:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341498.566722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6sh-0003Uw-Cj; Fri, 03 Jun 2022 12:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341498.566722; Fri, 03 Jun 2022 12:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx6sh-0003Up-9e; Fri, 03 Jun 2022 12:58:23 +0000
Received: by outflank-mailman (input) for mailman id 341498;
 Fri, 03 Jun 2022 12:58:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx6sf-0003Uj-Vs
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 12:58:21 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8436cf6-e33c-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 14:58:21 +0200 (CEST)
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2053.outbound.protection.outlook.com [104.47.9.53]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-25-NWJKvzpZMjyMc3zhy6ck2w-1; Fri, 03 Jun 2022 14:58:18 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by PAXPR04MB8334.eurprd04.prod.outlook.com (2603:10a6:102:1cc::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.14; Fri, 3 Jun
 2022 12:58:16 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 12:58:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8436cf6-e33c-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654261100;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P09xsg/S0JeIpfNf9MHrZwsWQDycBLb0/iFvbFqey6o=;
	b=joArDh/WIsaAlPlScZ9Abei+3s6goJT4sj4JojASDcsBjkdGKwU1US4RGFV9T4Eo5v/jAT
	eg/eyytZ4LW8I+U5Xy3oP8skG2f08Hd9Usv61KZV+zNvaoZ0SvTdjrEDJRHetelxOw7g9U
	fMLNfdYkRIIVMSwrxNqm0p6kmSPC89Y=
X-MC-Unique: NWJKvzpZMjyMc3zhy6ck2w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PVCR5ohBTumrojmlXscazpol0D+86O8I3b39IHzHBfHmuZ481w6AK2AXKlsVsyvt1dERo7W3/oUXHBurwriE0XCmuAfuS0vK9UVvBMaNUj2rSAClUr6tpdoqwima1VTG8FWOu9mKuNGI1WlTl4zyin6uZSdA14z4QqS4of1J3os8jErYiqofxorLOw2oundjo6VTjmhlcBongtPkAP1RlFS0WKIsGxxl8hCO2Ne4fyTBbSvjR06zy6aP2eCm3Tw74pNpp2nBAhIqBazRJ9hkFR/0Ocx9h1J2SJeKXojm7gfVJRW3u1GIRlj3+0TGNh9ry+vhevfG2N4lkS7Y++bJew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=P09xsg/S0JeIpfNf9MHrZwsWQDycBLb0/iFvbFqey6o=;
 b=Cs3aFTtzonWzQDMOIl9i25ITPCh10QjuJ+Y/YWnWgUikhmqte+jQOCE9aoYMkFgFrzo6HPPZL4tuSxbM3YQFoc1+gipplKbE0oNEfRwIVREFSDz8L3b3uHX0aAvCBmLIGNEsAWT9Bzp0qhlSosvkGsP9gz0EC+asjRR5lBKtybejaL+JVZEnLOZTHi/JuL03Wm7HOblanXV0RpJvKhzzg9Po4osPfYuqUIkMbkJwl3e5sE/O8nl0/mO/ucQTSAYBgfOKB4OlJUNNaRcIArFwzV1UKB/g7qezYxefGqKQli+Yof5YBeV5HrPFvx8tinwm6GZDRvVQYL0qjFj89IKpHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fb6c3858-ccdb-4f16-2c16-3aa01b0ba9ec@suse.com>
Date: Fri, 3 Jun 2022 14:58:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220531024127.23669-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR04CA0113.eurprd04.prod.outlook.com
 (2603:10a6:20b:531::18) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1673682e-254d-4dc7-8b01-08da4560b9ec
X-MS-TrafficTypeDiagnostic: PAXPR04MB8334:EE_
X-Microsoft-Antispam-PRVS:
	<PAXPR04MB8334DF5BBB3544E8664C83B7B3A19@PAXPR04MB8334.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1673682e-254d-4dc7-8b01-08da4560b9ec
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 12:58:16.5747
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dDP606Da2M6pCOE/k4dJLG/8xT8aA14CE/fKmsiACKoNBibxqsd4w41+qMBHdp4mvb/9/upZ/Xo9dQrXZ82XIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8334

On 31.05.2022 04:41, Daniel P. Smith wrote:
> --- a/xen/arch/x86/guest/xen/pvh-boot.c
> +++ b/xen/arch/x86/guest/xen/pvh-boot.c
> @@ -32,7 +32,7 @@ bool __initdata pvh_boot;
>  uint32_t __initdata pvh_start_info_pa;
>  
>  static multiboot_info_t __initdata pvh_mbi;
> -static module_t __initdata pvh_mbi_mods[8];
> +static module_t __initdata pvh_mbi_mods[CONFIG_NR_BOOTMOD + 1];

Did this build successfully for you? Looks like the trailing S is
missing ...

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1017,9 +1017,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>          panic("dom0 kernel not specified. Check bootloader configuration\n");
>  
>      /* Check that we don't have a silly number of modules. */
> -    if ( mbi->mods_count > sizeof(module_map) * 8 )
> +    if ( mbi->mods_count > CONFIG_NR_BOOTMODS )
>      {
> -        mbi->mods_count = sizeof(module_map) * 8;
> +        mbi->mods_count = CONFIG_NR_BOOTMODS;
>          printk("Excessive multiboot modules - using the first %u only\n",
>                 mbi->mods_count);
>      }

You'll want to accompany this by a BUILD_BUG_ON() checking that
the Kconfig value doesn't exceed the capacity of module_map. Without
that it could be quite hard to notice that a bump of the upper bound
in the Kconfig file would result in an issue here.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 13:20:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 13:20:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341507.566734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx7DJ-0006Lq-48; Fri, 03 Jun 2022 13:19:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341507.566734; Fri, 03 Jun 2022 13:19:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx7DJ-0006Lj-1E; Fri, 03 Jun 2022 13:19:41 +0000
Received: by outflank-mailman (input) for mailman id 341507;
 Fri, 03 Jun 2022 13:19:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx7DH-0006Ld-TH
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 13:19:40 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d16cfa5f-e33f-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 15:19:38 +0200 (CEST)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2107.outbound.protection.outlook.com [104.47.18.107]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-4-VNSnLDjaOB60QQMAvBzc1w-1; Fri, 03 Jun 2022 15:19:36 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB5309.eurprd04.prod.outlook.com (2603:10a6:803:59::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 3 Jun
 2022 13:19:35 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 13:19:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d16cfa5f-e33f-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654262377;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VOXCvtqmBtZIAq4CJH2vDsiKigZEDIZPjycWIf0etV0=;
	b=a0DXQfH/ayh8L08uYKgvUafBTVnT80kABZz1nPQnHr9RzfA3KKBQIGTU44kb3eKuOVMR1F
	nmkEKzJiT/G3+6RlcPXvokG21YcHCbLQv9tAl2YbrYS+jki7fESll4eBrw2sxqJzFeDAAf
	qfNnP1OykuC2sq3QnywNKz00BuFbs18=
X-MC-Unique: VNSnLDjaOB60QQMAvBzc1w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JXXDN2sidmKop24wFKXERRx9qhLL1bz6vAwVuAPqdV5pGEA93H4b9gyyuEq4IienxiX69RI/PuBk61LVE49pXBCYJ9gEG/1+AM/bcSNs8W4rWote0Nw8ermmcHnwlC8JGwN/fl0ZDSqIwIN8CCyc1OfP8Kr02zlSoRG+aqJvsZ2Rk0kL/8vSmyPEvX1fWOhMmx3VkNuWBBYUWitWrG23FAGwncGFJeV3Bzmw842b45xrKcVAgyrrtqsT2IMvT32OMY0lUmKI91+cNsDNgnEKqKa99z0g2LogTJeMU6GmuNtVLQTRH7hNj6AKxumJFRnwgJzdhVch1KoePKC+ASvH0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bvziG45W3IYoT6P5Sf/XiRelkbBtO0LL+8k2buESsbs=;
 b=KlRjZ+IJyzR2q1JnNcoWZaJBQ03Iu0KFky0dwAvmOoCgYKvW4ZHeGsbbaGQQkc9yXm58G5WCVGc3Jiu/X8V09FqbmVON8QrPW8QmHcjmelwbhVn7FfULbs2vDp8nQ35Dbr0lzYnflyUW+VAo5aa0nZDbhBb0Zr2zFgEOzeTPCsR5rI2YfGPKpayOjuIlfJifOMl1kfObOnonF/FUKqdtTUFiHlq18xO/RitZ7CThjYUXnMXokwUg/AOT1+PhTMiljGUyTgh9NMyw26nBYC/6QQdHs8vk33eK1EHLRGCVm3d5tWAqUK+k+oVOGKu4jewghkdkp3W4HMAtGLa2g06I/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <85dfc48f-3440-1e6a-dc44-4c2bb050184b@suse.com>
Date: Fri, 3 Jun 2022 15:19:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RFC 1/6] x86/ioapic: set disable hook for masking edge
 interrupts
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220421132114.35118-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR07CA0047.eurprd07.prod.outlook.com
 (2603:10a6:20b:46b::23) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 34d7b6ee-2bae-4d4b-1e84-08da4563b42a
X-MS-TrafficTypeDiagnostic: VI1PR04MB5309:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5309BAFEBCCCE2866D219335B3A19@VI1PR04MB5309.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34d7b6ee-2bae-4d4b-1e84-08da4563b42a
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 13:19:35.4490
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FbmhNMH/WEOl0lKbVYQByIaYmi1lZJvHJtR9SNW0Pxi+FAzaGsRtFuglHAJbbb245cmud2cv1ZqS9CI3Ln/5zA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5309

On 21.04.2022 15:21, Roger Pau Monne wrote:
> Allow disabling (masking) IO-APIC pins set to edge trigger mode.  This
> is required in order to safely migrate such interrupts between CPUs,
> as the write to update the IO-APIC RTE (or the IRTE) is not done
> atomically,

For IRTE on VT-d we use cmpxchg16b if available (i.e. virtually always).

> so there's a window where there's a mismatch between the
> destination CPU and the vector:
>
> (XEN) CPU1: No irq handler for vector b5 (IRQ -11, LAPIC)
> (XEN) IRQ10 a=0002[0002,0008] v=bd[b5] t=IO-APIC-edge s=00000030

I think this needs some further explanation, as we generally move
edge IRQs only when an un-acked interrupt is in flight (and hence
no further one can arrive).

Jan

> The main risk with masking edge triggered interrupts is losing them,
> but getting them injected while in the process of updating the RTE is
> equally harmful as the interrupt gets lost anyway.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/io_apic.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
> index c086f40f63..2e5964640b 100644
> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -1782,7 +1782,7 @@ static hw_irq_controller ioapic_edge_type = {
>      .startup 	= startup_edge_ioapic_irq,
>      .shutdown 	= irq_shutdown_none,
>      .enable 	= unmask_IO_APIC_irq,
> -    .disable 	= irq_disable_none,
> +    .disable 	= mask_IO_APIC_irq,
>      .ack 		= ack_edge_ioapic_irq,
>      .set_affinity 	= set_ioapic_affinity_irq,
>  };



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 13:24:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 13:24:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341516.566745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx7IB-0007yv-OJ; Fri, 03 Jun 2022 13:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341516.566745; Fri, 03 Jun 2022 13:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx7IB-0007yo-KC; Fri, 03 Jun 2022 13:24:43 +0000
Received: by outflank-mailman (input) for mailman id 341516;
 Fri, 03 Jun 2022 13:24:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MD4n=WK=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nx7IA-0007yi-AJ
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 13:24:42 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 862164a3-e340-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 15:24:41 +0200 (CEST)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2055.outbound.protection.outlook.com [104.47.1.55]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-34-L4Ggy98INCOyRBxp4Ta6OA-1; Fri, 03 Jun 2022 15:24:39 +0200
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM0PR04MB6308.eurprd04.prod.outlook.com (2603:10a6:208:140::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 3 Jun
 2022 13:24:39 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::7577:8567:33c7:685b%5]) with mapi id 15.20.5314.013; Fri, 3 Jun 2022
 13:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 862164a3-e340-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654262680;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/I5zZaeAkEMmi5YQBwsUh7EHJ4817ik95m2Kt1PGYME=;
	b=ALWzu+3o6SP7Kg9MMJ2P1x50I0hqu/rRMFKCVPNp1NyetqWcTJ4lRtg29bTNkg67wEbbMi
	A6o794SG04roOaEeJtzMyrQ7yeUPcrRHBzbZ6Sj8kWabRK2hTxxejqQVFmN7ACPMrm+Bhf
	9ft5/MTvOys0fXXJPVH4VDwksq/I3rQ=
X-MC-Unique: L4Ggy98INCOyRBxp4Ta6OA-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I9mmHqrHyLtoPqezXfPTW5Pp5Q97w8M23pVW+eImZNYsEoV9wxNN4krpJEVHj78+KYt4unw642DWydRtwcb0lwTXemoIuIiYPh/un1HrybuDjEWiWbvrpRMYa2ElB8nq6WZAzi1NeT1ub9ThqdT7Jq0qXOXl9RyfsCEahDwvXhcRaRA8/tp8UgbZGgQ6TAdAopHWNTQfgR0MIIMYlQW8MmW863TQUi5ENG1ZAKCMRcSVmgYEaEOrm7rk64npRzVaGjdtDUVJefaYbgDo2vesFniJKvLRKl8dTgt4vfnf2eglVsV2c28Z2FSqLHgAS9QgEkSQHM206qAJDzw/UFAdKQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=v+gnUXqY97Gbhx/6t8uCAb+z73FX6uTiHPoZvGJiS7w=;
 b=cuwPUi8MMSKiZOqXb81lGo9wgxi2y/4UE18FWQNqcvjekN+hy78aH/0iUNYaOkXdL4520QN43I+OUfbAz1ubyk1BRLhYdu3CaLhHeXdfOeHZQLu9WQ+RhEqGxoe2lxdivwOmzhAlGobx3IDObNu0V+W0XkuBtnjmiSXX5Fbac0MqViKELrhMxzNKbsprYTnvn+sSfgYfjn6tTL/KVQaZLTnldYgbM+KaLsylEHmbh8mi/LfD6yIK0g4Tfaype556dQRbsWBJXNUBblcoQBgJuRN6KaVZxF2yuUmwin2ShnBx87YxXW+wMbvsT9Gg5NSuj5GVkRWf2u2igKTNo7ONWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3b14d173-b0bf-f8e5-70c4-2c9a3085bffa@suse.com>
Date: Fri, 3 Jun 2022 15:24:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RFC 2/6] x86/ioapic: add a raw field to RTE struct
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220421132114.35118-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM5PR0301CA0035.eurprd03.prod.outlook.com
 (2603:10a6:206:14::48) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6703bda9-9416-4a9c-5633-08da456468ff
X-MS-TrafficTypeDiagnostic: AM0PR04MB6308:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB63086A3CCC821862C8EDC491B3A19@AM0PR04MB6308.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;

On 21.04.2022 15:21, Roger Pau Monne wrote:
> No functional change intended.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Once I see the purpose (in a later patch, I suppose) I certainly
don't mind. We do have a couple of literal initializers, though (see
e.g. the top of ioapic_guest_write()). Do those still compile fine
(warning-free) even with old gcc?

Jan

> --- a/xen/arch/x86/include/asm/io_apic.h
> +++ b/xen/arch/x86/include/asm/io_apic.h
> @@ -89,35 +89,38 @@ enum ioapic_irq_destination_types {
>  };
>
>  struct IO_APIC_route_entry {
> -    unsigned int vector:8;
> -    unsigned int delivery_mode:3; /*
> -                                   * 000: FIXED
> -                                   * 001: lowest prio
> -                                   * 111: ExtINT
> -                                   */
> -    unsigned int dest_mode:1;     /* 0: physical, 1: logical */
> -    unsigned int delivery_status:1;
> -    unsigned int polarity:1;      /* 0: low, 1: high */
> -    unsigned int irr:1;
> -    unsigned int trigger:1;       /* 0: edge, 1: level */
> -    unsigned int mask:1;          /* 0: enabled, 1: disabled */
> -    unsigned int __reserved_2:15;
> -
>      union {
>          struct {
> -            unsigned int __reserved_1:24;
> -            unsigned int physical_dest:4;
> -            unsigned int __reserved_2:4;
> -        } physical;
> -
> -        struct {
> -            unsigned int __reserved_1:24;
> -            unsigned int logical_dest:8;
> -        } logical;
> -
> -        /* used when Interrupt Remapping with EIM is enabled */
> -        unsigned int dest32;
> -    } dest;
> +            unsigned int vector:8;
> +            unsigned int delivery_mode:3; /*
> +                                           * 000: FIXED
> +                                           * 001: lowest prio
> +                                           * 111: ExtINT
> +                                           */
> +            unsigned int dest_mode:1;     /* 0: physical, 1: logical */
> +            unsigned int delivery_status:1;
> +            unsigned int polarity:1;      /* 0: low, 1: high */
> +            unsigned int irr:1;
> +            unsigned int trigger:1;       /* 0: edge, 1: level */
> +            unsigned int mask:1;          /* 0: enabled, 1: disabled */
> +            unsigned int __reserved_2:15;
> +
> +            union {
> +                struct {
> +                    unsigned int __reserved_1:24;
> +                    unsigned int physical_dest:4;
> +                    unsigned int __reserved_2:4;
> +                } physical;
> +
> +                struct {
> +                    unsigned int __reserved_1:24;
> +                    unsigned int logical_dest:8;
> +                } logical;
> +                unsigned int dest32;
> +            } dest;
> +        };
> +        uint64_t raw;
> +    };
>  };
>
>  /*



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 13:34:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 13:34:45 +0000
Message-ID: <febbff78-6a2d-f2fb-d8ea-a15f97a3abf4@suse.com>
Date: Fri, 3 Jun 2022 15:34:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RFC 3/6] x86/ioapic: RTE modifications must use
 ioapic_write_entry
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220421132114.35118-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 21.04.2022 15:21, Roger Pau Monne wrote:
> Do not allow writes to RTE registers using io_apic_write; instead
> require RTE changes to be performed using ioapic_write_entry.

Hmm, this doubles the number of MMIO accesses in the affected code fragments.

> --- a/xen/arch/x86/include/asm/io_apic.h
> +++ b/xen/arch/x86/include/asm/io_apic.h
> @@ -161,22 +161,11 @@ static inline void __io_apic_write(unsigned int apic, unsigned int reg, unsigned
>  
>  static inline void io_apic_write(unsigned int apic, unsigned int reg, unsigned int value)
>  {
> -    if ( ioapic_reg_remapped(reg) )
> -        return iommu_update_ire_from_apic(apic, reg, value);
> +    /* RTE writes must use ioapic_write_entry. */
> +    BUG_ON(reg >= 0x10);
>      __io_apic_write(apic, reg, value);
>  }
>  
> -/*
> - * Re-write a value: to be used for read-modify-write
> - * cycles where the read already set up the index register.
> - */
> -static inline void io_apic_modify(unsigned int apic, unsigned int reg, unsigned int value)
> -{
> -    if ( ioapic_reg_remapped(reg) )
> -        return iommu_update_ire_from_apic(apic, reg, value);
> -    *(IO_APIC_BASE(apic) + 4) = value;
> -}

While its last caller goes away, I don't think this strictly needs to
be dropped (it could just gain a BUG_ON() like the one you add a few
lines up)?

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 13:44:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 13:44:10 +0000
Message-ID: <a5590d8c-2255-6dbb-6823-2db0ff714da6@suse.com>
Date: Fri, 3 Jun 2022 15:43:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [RFC v2 6/8] tools/arm: Introduce force_assign_without_iommu
 option to xl.cfg
Content-Language: en-US
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Nick Rosbrook <rosbrookn@ainfosec.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Paul Durrant <paul@xen.org>
References: <cover.1644341635.git.oleksii_moisieiev@epam.com>
 <d333126d12f2281f8df92e66cfba1c9eb2425dca.1644341635.git.oleksii_moisieiev@epam.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d333126d12f2281f8df92e66cfba1c9eb2425dca.1644341635.git.oleksii_moisieiev@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 08.02.2022 19:00, Oleksii Moisieiev wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -512,7 +512,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>  
>      if ( iommu )
>      {
> -        if ( config->iommu_opts & ~XEN_DOMCTL_IOMMU_no_sharept )
> +        if ( config->iommu_opts >> XEN_DOMCTL_IOMMU_MAX )
>          {
>              dprintk(XENLOG_INFO, "Unknown IOMMU options %#x\n",
>                      config->iommu_opts);

While in common code this is perhaps okay, the new bit wants rejecting
(or also implementing) for x86.

> @@ -534,6 +536,7 @@ int iommu_do_domctl(
>  {
>      int ret = -ENODEV;
>  
> +
>      if ( !is_iommu_enabled(d) )
>          return -EOPNOTSUPP;

Please don't.

> @@ -542,7 +545,7 @@ int iommu_do_domctl(
>  #endif
>  
>  #ifdef CONFIG_HAS_DEVICE_TREE
> -    if ( ret == -ENODEV )
> +    if ( ret == -ENOSYS )
>          ret = iommu_do_dt_domctl(domctl, d, u_domctl);
>  #endif

Why?

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:29:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:29:47 +0000
X-IronPort-AV: E=Sophos;i="5.91,274,1647316800"; 
   d="scan'208";a="72813593"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FD+0x/3JpLm2NC2cVypUbA1J1VSDzKLVhOyMQLWUHNac+gDyJ5vprfS8ke+vyuJA7fTOJWTnNsixAIqUvTFwXl6p0ZJQSVP99O/d3CVTCJjvgjqaxzpoIyEkzQStpvNMJRQ6WGl/b4PHNYPlOv6bL9YLT+NXvUuMHyfW2zrjKiyRjhmJNms/FCwpe9D+DM1TPfrxq/hXGJXfD7p+ul7sjJykQ6alLdfLiwwcm28F4EecmOKW0g9WVdRd1XKzCnfoo/UNZDStpY9wcWTn0uUNMlyRcCPlMKyLKX8F5U+a6hLVf/vxHqKyup+zZylld9p4FvmnUR2d/kHapxRG6W34CA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RAPsFEWd6e/1cHrxy5c/bDfyKaS9E1r6zvQaDUEcdfY=;
 b=Ybw1cxDXouBiDMvJ99rMYDSm9GbZsViW1pyuzOmIuPdKivZ2tQlHtoBnvuoA9L4pfPUtwODCjY7/3KJk1s2/p6EBPrBpKAW1NpRD6aQ5szY+6tyLhXpdeRUGrBpXzWLTRt0Y6IyEJ/4KV0hN4MsI4wrxGx9cEpoxeHIJd6T1HesGrx0AFBHMk8R6vCOp1tBsjto4pNv5UH5xLA8L2setxDAeSBSndoZZlBcNdogfLPb/oyT8BizI4k7fk1ASLQ1I7s5r1XDY16vb/2E1n4OHH5DhPaw2JcqWcWZuOoG9bAx3mErQPdXNG9tx1xv6WfgYOXV0ctsYE7I0reOLITudNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RAPsFEWd6e/1cHrxy5c/bDfyKaS9E1r6zvQaDUEcdfY=;
 b=KHMEI5FC3KBedKJQm03reHjlhz55wSi3D1nsGlAAJXkPjxjayQhL+0mPiLH+8yOA8irANO3Mf++0Rdgdfo+Zf8cuiD84U+yezrdi8pu1z+sqsedRdl68suwy5re78MxmimHxMu4Khpgyvs+l1/w6Sub/VwXMAOVFwZ61oPqsEyk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 3 Jun 2022 16:29:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Message-ID: <YpoavceqO238Q6Ld@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
 <3165a99b-3a91-2ca3-80a0-af88d87e9bb0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3165a99b-3a91-2ca3-80a0-af88d87e9bb0@suse.com>
X-ClientProxiedBy: LO4P265CA0055.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2af::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: bc170b7b-f69e-4016-e9e9-08da456d73e3
X-MS-TrafficTypeDiagnostic: DS7PR03MB5400:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bc170b7b-f69e-4016-e9e9-08da456d73e3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 14:29:22.6289
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HV3Jj1QKXEPS2ATT6qQ0mJzEiWPU6gTSxhRUx7FnKeGUwfk+UAF/829wyL6caPz0NlxsatpczIH8wSi3Z88YBA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5400

On Fri, Jun 03, 2022 at 02:16:47PM +0200, Jan Beulich wrote:
> On 26.05.2022 13:11, Roger Pau Monne wrote:
> > Add support for enabling Bus Lock Detection on Intel systems.  Such
> > detection works by triggering a vmexit, which ought to be enough of a
> > pause to prevent a guest from abusing the Bus Lock.
> > 
> > Add an extra Xen perf counter to track the number of Bus Locks detected.
> > This is done because Bus Locks can also be reported by setting bit
> > 26 in the exit reason field, so those are accounted for as well.
> > 
> > Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> This implements just the VM exit part of the feature - maybe the
> title should reflect that? The vmx: tag could otherwise be taken to
> mean that exposure of the #DB part of the feature to guests is included.

Maybe:

"x86/vmx: add Bus Lock detection to the hypervisor"

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:45:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341554.566789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8YT-0002r5-QF; Fri, 03 Jun 2022 14:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341554.566789; Fri, 03 Jun 2022 14:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8YT-0002qy-Kw; Fri, 03 Jun 2022 14:45:37 +0000
Received: by outflank-mailman (input) for mailman id 341554;
 Fri, 03 Jun 2022 14:45:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx8YR-0002qo-Ry; Fri, 03 Jun 2022 14:45:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx8YR-00063V-NA; Fri, 03 Jun 2022 14:45:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx8YR-0005ck-9Q; Fri, 03 Jun 2022 14:45:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nx8YR-00015u-8y; Fri, 03 Jun 2022 14:45:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ygX5k92vk+UB4dQ75YIxnRETOe4MbtSnSxtdaqnE68E=; b=qP03qClsJqNa6XVLnjBLtZjSFe
	ZUwfiXYcp5yb11fWlepTCA80eiPdnWjmhZ6bA635Wwgc3R8QYzMzyiMJ+BoV743DLHvfKvNakSYl2
	paS9AvMthlcDuL/eYXchG+KARVwca2CYO8lqyXIeQQ9J6mkqbnfImhv1Vc/SUK1yYOrM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170819-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170819: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0a4019ec9de64c6565ea545dc8d847afe2b30d6c
X-Osstest-Versions-That:
    ovmf=632574ced10fe184d5665b73c62c959109c39961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 14:45:35 +0000

flight 170819 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170819/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0a4019ec9de64c6565ea545dc8d847afe2b30d6c
baseline version:
 ovmf                 632574ced10fe184d5665b73c62c959109c39961

Last test of basis   170818  2022-06-03 10:46:12 Z    0 days
Testing same since   170819  2022-06-03 12:40:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiewen Yao <jiewen.yao@intel.com>
  Min Xu <min.m.xu@intel.com>
  Sebastien Boeuf <sebastien.boeuf@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   632574ced1..0a4019ec9d  0a4019ec9de64c6565ea545dc8d847afe2b30d6c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:46:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341564.566800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8ZL-0003OZ-4V; Fri, 03 Jun 2022 14:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341564.566800; Fri, 03 Jun 2022 14:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8ZL-0003OS-1J; Fri, 03 Jun 2022 14:46:31 +0000
Received: by outflank-mailman (input) for mailman id 341564;
 Fri, 03 Jun 2022 14:46:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DOPF=WK=citrix.com=prvs=146533d13=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nx8ZJ-0003LX-W9
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 14:46:30 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1d0c313-e34b-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 16:46:28 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jun 2022 10:46:25 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB6778.namprd03.prod.outlook.com (2603:10b6:a03:40d::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 3 Jun
 2022 14:46:21 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.015; Fri, 3 Jun 2022
 14:46:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1d0c313-e34b-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654267588;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=nGzKT/FJuZe2Z3k3KNP/6gffcayKs891FLMtAUAnmsY=;
  b=KlX6Fg/DDchpL5zTf5LXDY8+zxX0GqRq7MILzkBMBCY3MdqHfFvtYaQo
   zl+DCIYwWpC33J02pGmp6yS2QYBzsZyfvKlO1crPmwR3ySfIzTwKUH1Sj
   ZaoOStNXCIBb5Gau1LALzqKwKb56iFtCZqHXGSGis/YBNFRDPTDdzX5Ea
   A=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 73212222
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,274,1647316800"; 
   d="scan'208";a="73212222"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R35LYOY3EV6uFymOmqCyCMB6/u0BAJOHr6VHLuZA5Vad+BBhkibn5DshnJjrBbDLM0GXp/lUalpPMzuCAD+xXr3ydIwvThsB8rPz0VfBF18ZKcNa/qNBBXmD9AiOor2sSqvaDaeYs+zucZEn79RkjUtIXoiXaejUwnGggTNsmtq3uGypJmeXP6kyTcMbiHygSQ1V8cnoZWhGG59PyDKuckBpkwe4Cmiy0LuuP46P6xXH/HNz6zHGEF3NWNpHBmFG4jiSbOsfEBjHg0vLVcoBgL6PHopYAGh9OZUpNFaXwNoVjTb4DWLmq3dvNR6kiHJj39hXiIdZbIsVdq0/tnuZTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=K2rdynz9N7r99RF4nuN3SzI28v2vuLNHK2WstN8gwRY=;
 b=iQAoqb5QGBl11R9oPsfthhv7MfQCRZjtwdAKjLACmS1OvD+DAOI+1mXQzfuaKRVB9f1HWGu9CK7g/56a+yIBrOW2yFuwBD7zJNGdzcEWZLztnl54p8sQhQZUGZRbTwqV+B44yLbzWkiLBbohk7ZXHhjrqw0I+9GbObCIvjQNMwuCgWiBFlIPgfDyCksohOGy9UG1xfu8YJBSPHIL2UeOkACoy7JXQo9ocrJ2zIM/b0Ql9AEbs4koXVIMZiTEWunHRFpvxOk8dlIysKirIdLzWMwYwP11yrJu+T/iqXmbzKI994UJr7uuohAdr9FObSlNY99R3CK6DMPKYKYTsl4SHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K2rdynz9N7r99RF4nuN3SzI28v2vuLNHK2WstN8gwRY=;
 b=ssVeD+OKb31e9YdL9stDddHLxExkimCnGTZ5GnB7+0E5m7xytTPdFOdi9/jTnFyPxFEwGqIFyxGTmfGMsQCbzj6s71k9kQieIAa44BJxPhNO5g9Dq7EHmbvOW1Yu/9vvrESI9IRti4HJF2RWMvTORW9u8RC6ZF//oN/5q7nPhoA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 3 Jun 2022 16:46:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Kevin Tian <kevin.tian@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Message-ID: <YpoeuOJPS0gobz5u@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
X-ClientProxiedBy: LO4P123CA0386.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3b1729a7-5bba-4205-43d3-08da456fd30e
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6778:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3b1729a7-5bba-4205-43d3-08da456fd30e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 14:46:21.2991
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vwEfc/O0XuCOmtB4uhZt+7sE6iie3KjEY9SCB7T0OnA7bb09/xFVYOUHJaBJe4hPfRHCebAvKlVZX7Ul3JKbvQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6778

On Fri, Jun 03, 2022 at 02:49:54PM +0200, Jan Beulich wrote:
> On 26.05.2022 13:11, Roger Pau Monne wrote:
> > Under certain conditions guests can get the CPU stuck in an unbounded
> > loop without the possibility of an interrupt window occurring on an
> > instruction boundary.  This was the case with the scenarios described
> > in XSA-156.
> > 
> > Make use of the Notify VM Exit mechanism, which will trigger a VM Exit
> > if no interrupt window occurs for a specified amount of time.  Note
> > that using Notify VM Exit avoids having to trap the #AC and #DB
> > exceptions: Xen is guaranteed to get a VM Exit even if the guest
> > puts the CPU in a loop without an interrupt window, so disable
> > those intercepts when the feature is available and enabled.
> > 
> > Setting the notify VM exit window to 0 is safe because there's a
> > threshold added by the hardware in order to have a sane window value.
> > 
> > Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v1:
> >  - Properly update debug state when using notify VM exit.
> >  - Reword commit message.
> > ---
> > This change enables the notify VM exit by default, KVM however doesn't
> > seem to enable it by default, and there's the following note in the
> > commit message:
> > 
> > "- There's a possibility, however small, that a notify VM exit happens
> >    with VM_CONTEXT_INVALID set in exit qualification. In this case, the
> >    vcpu can no longer run. To avoid killing a well-behaved guest, set
> >    notify window as -1 to disable this feature by default."
> > 
> > It's not entirely clear to me whether the comment was meant to say:
> > "There's a possibility, however small, that a notify VM exit _wrongly_
> > happens with VM_CONTEXT_INVALID".
> > 
> > It's also not clear whether such wrong hardware behavior only affects
> > a specific set of hardware, in a way that we could avoid enabling
> > notify VM exit there.
> > 
> > There's a discussion in one of the Linux patches that 128K might be
> > the safer value in order to prevent false positives, but I have no
> > formal confirmation about this.  Maybe our Intel maintainers can
> > provide some more feedback on a suitable notify VM exit window
> > value.
> 
> This certainly wants sorting one way or another before I, for one,
> would consider giving an R-b here.

I was hoping for either Kevin or Jun (now moved from Cc to To) to
provide some guidance here on what a suitable default value would be.

> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -67,6 +67,9 @@ integer_param("ple_gap", ple_gap);
> >  static unsigned int __read_mostly ple_window = 4096;
> >  integer_param("ple_window", ple_window);
> >  
> > +static unsigned int __ro_after_init vm_notify_window;
> > +integer_param("vm-notify-window", vm_notify_window);
> 
> Could even be a runtime param, I guess. Albeit I realize this would
> complicate things further down.
> 
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
> >  
> >  void vmx_update_debug_state(struct vcpu *v)
> >  {
> > +    unsigned int mask = 1u << TRAP_int3;
> > +
> > +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> 
> I'm puzzled by the lack of symmetry between this and ...
> 
> > +        /*
> > +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> > +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> > +         */
> > +        mask |= 1u << TRAP_debug;
> > +
> >      if ( v->arch.hvm.debug_state_latch )
> > -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> > +        v->arch.hvm.vmx.exception_bitmap |= mask;
> >      else
> > -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> > +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
> >  
> >      vmx_vmcs_enter(v);
> >      vmx_update_exception_bitmap(v);
> > @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
> >          switch ( vector )
> >          {
> >          case TRAP_debug:
> > +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> > +                goto exit_and_crash;
> 
> ... this condition. Shouldn't one be the inverse of the other (and
> then it's the one down here which wants adjusting)?

The condition in vmx_update_debug_state() builds the mask so that
TRAP_debug is only added to or removed from the bitmap when
!cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting (note
that TRAP_debug is otherwise set unconditionally when
!cpu_has_vmx_notify_vm_exiting, as part of the XSA-156 fix).

Hence it's impossible to get a TRAP_debug VM exit with
cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting, because
TRAP_debug is never set by vmx_update_debug_state() in that case.

> 
> > @@ -126,5 +126,6 @@ PERFCOUNTER(realmode_exits,      "vmexits from realmode")
> >  PERFCOUNTER(pauseloop_exits, "vmexits from Pause-Loop Detection")
> >  
> >  PERFCOUNTER(buslock, "Bus Locks Detected")
> > +PERFCOUNTER(vmnotify_crash, "domains crashed by Notify VM Exit")
> 
> I think the text is not entirely correct and would want to be
> "domain crashes by ...". Multiple vCPU-s of a single domain can
> experience this in parallel (granted this would require "good"
> timing).

Sure, thanks for the review.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:53:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:53:31 +0000
Date: Fri, 3 Jun 2022 16:53:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH RFC 1/6] x86/ioapic: set disable hook for masking edge
 interrupts
Message-ID: <YpogVFqGl1zS3VCU@Air-de-Roger>
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-2-roger.pau@citrix.com>
 <85dfc48f-3440-1e6a-dc44-4c2bb050184b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <85dfc48f-3440-1e6a-dc44-4c2bb050184b@suse.com>
MIME-Version: 1.0

On Fri, Jun 03, 2022 at 03:19:34PM +0200, Jan Beulich wrote:
> On 21.04.2022 15:21, Roger Pau Monne wrote:
> > Allow disabling (masking) IO-APIC pins set to edge trigger mode.  This
> > is required in order to safely migrate such interrupts between CPUs,
> > as the write to update the IO-APIC RTE (or the IRTE) is not done
> > atomically,
> 
> For IRTE on VT-d we use cmpxchg16b if available (i.e. virtually always).
> 
> > so there's a window where there's a mismatch between the
> > destination CPU and the vector:
> > 
> > (XEN) CPU1: No irq handler for vector b5 (IRQ -11, LAPIC)
> > (XEN) IRQ10 a=0002[0002,0008] v=bd[b5] t=IO-APIC-edge s=00000030
> 
> I think this needs some further explanation, as we generally move
> edge IRQs only when an un-acked interrupt is in flight (and hence
> no further one can arrive).

A further one can arrive as soon as you modify either the vector or
the destination field of the IO-APIC RTE, because at that point the
non-EOI'ed lapic vector no longer blocks delivery (the interrupt has
been moved to a different destination or vector).

This is the issue with updating the IO-APIC RTE using two separate
writes: even when using interrupt remapping the IRTE cannot be
atomically updated, and there's a window where the interrupt is not
masked but the destination and vector fields are out of sync, because
they reside in different halves of the RTE (the destination in the
high 32 bits, the vector in the low 32 bits).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:54:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:54:37 +0000
Message-ID: <2ab85f0c-2605-c401-ca59-7afca5807349@suse.com>
Date: Fri, 3 Jun 2022 16:54:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.8.0
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20220602225352.3201-1-demi@invisiblethingslab.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3] xen/gntdev: Avoid blocking in unmap_grant_pages()
In-Reply-To: <20220602225352.3201-1-demi@invisiblethingslab.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------1ucI0exEQEjkl2gz5HyhYmQ0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------1ucI0exEQEjkl2gz5HyhYmQ0
Content-Type: multipart/mixed; boundary="------------pa7EHydxJHVUZYrUBb4pyDrz";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Message-ID: <2ab85f0c-2605-c401-ca59-7afca5807349@suse.com>
Subject: Re: [PATCH v3] xen/gntdev: Avoid blocking in unmap_grant_pages()
References: <20220602225352.3201-1-demi@invisiblethingslab.com>
In-Reply-To: <20220602225352.3201-1-demi@invisiblethingslab.com>

--------------pa7EHydxJHVUZYrUBb4pyDrz
Content-Type: multipart/mixed; boundary="------------dpJdUNQI0TrfY3q1bJvkn0vi"

--------------dpJdUNQI0TrfY3q1bJvkn0vi
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDYuMjIgMDA6NTMsIERlbWkgTWFyaWUgT2Jlbm91ciB3cm90ZToNCj4gdW5tYXBf
Z3JhbnRfcGFnZXMoKSBjdXJyZW50bHkgd2FpdHMgZm9yIHRoZSBwYWdlcyB0byBubyBsb25n
ZXIgYmUgdXNlZC4NCj4gSW4gaHR0cHM6Ly9naXRodWIuY29tL1F1YmVzT1MvcXViZXMtaXNz
dWVzL2lzc3Vlcy83NDgxLCB0aGlzIGxlYWQgdG8gYQ0KPiBkZWFkbG9jayBhZ2FpbnN0IGk5
MTU6IGk5MTUgd2FzIHdhaXRpbmcgZm9yIGdudGRldidzIE1NVSBub3RpZmllciB0bw0KPiBm
aW5pc2gsIHdoaWxlIGdudGRldiB3YXMgd2FpdGluZyBmb3IgaTkxNSB0byBmcmVlIGl0cyBw
YWdlcy4gIEkgYWxzbw0KPiBiZWxpZXZlIHRoaXMgaXMgcmVzcG9uc2libGUgZm9yIHZhcmlv
dXMgZGVhZGxvY2tzIEkgaGF2ZSBleHBlcmllbmNlZCBpbg0KPiB0aGUgcGFzdC4NCj4gDQo+
IEF2b2lkIHRoZXNlIHByb2JsZW1zIGJ5IG1ha2luZyB1bm1hcF9ncmFudF9wYWdlcyBhc3lu
Yy4gIFRoaXMgcmVxdWlyZXMNCj4gbWFraW5nIGl0IHJldHVybiB2b2lkLCBhcyBhbnkgZXJy
b3JzIHdpbGwgbm90IGJlIGF2YWlsYWJsZSB3aGVuIHRoZQ0KPiBmdW5jdGlvbiByZXR1cm5z
LiAgRm9ydHVuYXRlbHksIHRoZSBvbmx5IHVzZSBvZiB0aGUgcmV0dXJuIHZhbHVlIGlzIGEN
Cj4gV0FSTl9PTigpLCB3aGljaCBjYW4gYmUgcmVwbGFjZWQgYnkgYSBXQVJOX09OIHdoZW4g
dGhlIGVycm9yIGlzDQo+IGRldGVjdGVkLiAgQWRkaXRpb25hbGx5LCBhIGZhaWxlZCBjYWxs
IHdpbGwgbm90IHByZXZlbnQgZnVydGhlciBjYWxscw0KPiBmcm9tIGJlaW5nIG1hZGUsIGJ1
dCB0aGlzIGlzIGhhcm1sZXNzLg0KPiANCj4gQmVjYXVzZSB1bm1hcF9ncmFudF9wYWdlcyBp
cyBub3cgYXN5bmMsIHRoZSBncmFudCBoYW5kbGUgd2lsbCBiZSBzZW50IHRvDQo+IElOVkFM
SURfR1JBTlRfSEFORExFIHRvbyBsYXRlIHRvIHByZXZlbnQgbXVsdGlwbGUgdW5tYXBzIG9m
IHRoZSBzYW1lDQo+IGhhbmRsZS4gIEluc3RlYWQsIGEgc2VwYXJhdGUgYm9vbCBhcnJheSBp
cyBhbGxvY2F0ZWQgZm9yIHRoaXMgcHVycG9zZS4NCj4gVGhpcyB3YXN0ZXMgbWVtb3J5LCBi
dXQgc3R1ZmZpbmcgdGhpcyBpbmZvcm1hdGlvbiBpbiBwYWRkaW5nIGJ5dGVzIGlzDQo+IHRv
byBmcmFnaWxlLiAgRnVydGhlcm1vcmUsIGl0IGlzIG5lY2Vzc2FyeSB0byBncmFiIGEgcmVm
ZXJlbmNlIHRvIHRoZQ0KPiBtYXAgYmVmb3JlIG1ha2luZyB0aGUgYXN5bmNocm9ub3VzIGNh
bGwsIGFuZCByZWxlYXNlIHRoZSByZWZlcmVuY2Ugd2hlbg0KPiB0aGUgY2FsbCByZXR1cm5z
Lg0KPiANCj4gSXQgaXMgYWxzbyBuZWNlc3NhcnkgdG8gZ3VhcmQgYWdhaW5zdCByZWVudHJh
bmN5IGluIGdudGRldl9tYXBfcHV0KCksDQo+IGFuZCB0byBoYW5kbGUgdGhlIGNhc2Ugd2hl
cmUgdXNlcnNwYWNlIHRyaWVzIHRvIG1hcCBhIG1hcHBpbmcgd2hvc2UNCj4gY29udGVudHMg
aGF2ZSBub3QgYWxsIGJlZW4gZnJlZWQgeWV0Lg0KPiANCj4gRml4ZXM6IDc0NTI4MjI1NmM3
NSAoInhlbi9nbnRkZXY6IHNhZmVseSB1bm1hcCBncmFudHMgaW4gY2FzZSB0aGV5IGFyZSBz
dGlsbCBpbiB1c2UiKQ0KPiBDYzogc3RhYmxlQHZnZXIua2VybmVsLm9yZw0KPiBTaWduZWQt
b2ZmLWJ5OiBEZW1pIE1hcmllIE9iZW5vdXIgPGRlbWlAaW52aXNpYmxldGhpbmdzbGFiLmNv
bT4NCj4gLS0tDQo+ICAgZHJpdmVycy94ZW4vZ250ZGV2LWNvbW1vbi5oIHwgICA3ICsrDQo+
ICAgZHJpdmVycy94ZW4vZ250ZGV2LmMgICAgICAgIHwgMTUzICsrKysrKysrKysrKysrKysr
KysrKysrKy0tLS0tLS0tLS0tLQ0KPiAgIDIgZmlsZXMgY2hhbmdlZCwgMTA5IGluc2VydGlv
bnMoKyksIDUxIGRlbGV0aW9ucygtKQ0KPiANCj4gZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVu
L2dudGRldi1jb21tb24uaCBiL2RyaXZlcnMveGVuL2dudGRldi1jb21tb24uaA0KPiBpbmRl
eCAyMGQ3ZDA1OWRhZGIuLjE1YzJlM2FmY2MyYiAxMDA2NDQNCj4gLS0tIGEvZHJpdmVycy94
ZW4vZ250ZGV2LWNvbW1vbi5oDQo+ICsrKyBiL2RyaXZlcnMveGVuL2dudGRldi1jb21tb24u
aA0KPiBAQCAtMTYsNiArMTYsNyBAQA0KPiAgICNpbmNsdWRlIDxsaW51eC9tbXVfbm90aWZp
ZXIuaD4NCj4gICAjaW5jbHVkZSA8bGludXgvdHlwZXMuaD4NCj4gICAjaW5jbHVkZSA8eGVu
L2ludGVyZmFjZS9ldmVudF9jaGFubmVsLmg+DQo+ICsjaW5jbHVkZSA8eGVuL2dyYW50X3Rh
YmxlLmg+DQo+ICAgDQo+ICAgc3RydWN0IGdudGRldl9kbWFidWZfcHJpdjsNCj4gICANCj4g
QEAgLTU2LDYgKzU3LDcgQEAgc3RydWN0IGdudGRldl9ncmFudF9tYXAgew0KPiAgIAlzdHJ1
Y3QgZ250dGFiX3VubWFwX2dyYW50X3JlZiAqdW5tYXBfb3BzOw0KPiAgIAlzdHJ1Y3QgZ250
dGFiX21hcF9ncmFudF9yZWYgICAqa21hcF9vcHM7DQo+ICAgCXN0cnVjdCBnbnR0YWJfdW5t
YXBfZ3JhbnRfcmVmICprdW5tYXBfb3BzOw0KPiArCWJvb2wgKmJlaW5nX3JlbW92ZWQ7DQo+
ICAgCXN0cnVjdCBwYWdlICoqcGFnZXM7DQo+ICAgCXVuc2lnbmVkIGxvbmcgcGFnZXNfdm1f
c3RhcnQ7DQo+ICAgDQo+IEBAIC03Myw2ICs3NSwxMSBAQCBzdHJ1Y3QgZ250ZGV2X2dyYW50
X21hcCB7DQo+ICAgCS8qIE5lZWRlZCB0byBhdm9pZCBhbGxvY2F0aW9uIGluIGdudHRhYl9k
bWFfZnJlZV9wYWdlcygpLiAqLw0KPiAgIAl4ZW5fcGZuX3QgKmZyYW1lczsNCj4gICAjZW5k
aWYNCj4gKw0KPiArCS8qIE51bWJlciBvZiBsaXZlIGdyYW50cyAqLw0KPiArCWF0b21pY19s
b25nX3QgbGl2ZV9ncmFudHM7DQoNCkFueSByZWFzb24gdG8gdXNlIGF0b21pY19sb25nX3Qg
aW5zdGVhZCBvZiBhdG9taWNfdD8NCg0KQXMgdGhlIG1heCBudW1iZXIgb2YgbWFwcGluZ3Mg
aXMgbWFwLT5jb3VudCwgd2hpY2ggaXMgYW4gaW50LCBJIGRvbid0IHNlZSB3aHkNCmF0b21p
Y190IHdvdWxkbid0IHdvcmsgaGVyZS4NCg0KPiArCS8qIE5lZWRlZCB0byBhdm9pZCBhbGxv
Y2F0aW9uIGluIF9fdW5tYXBfZ3JhbnRfcGFnZXMgKi8NCj4gKwlzdHJ1Y3QgZ250YWJfdW5t
YXBfcXVldWVfZGF0YSB1bm1hcF9kYXRhOw0KPiAgIH07DQo+ICAgDQo+ICAgc3RydWN0IGdu
dGRldl9ncmFudF9tYXAgKmdudGRldl9hbGxvY19tYXAoc3RydWN0IGdudGRldl9wcml2ICpw
cml2LCBpbnQgY291bnQsDQo+IGRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9nbnRkZXYuYyBi
L2RyaXZlcnMveGVuL2dudGRldi5jDQo+IGluZGV4IDU5ZmZlYTgwMDA3OS4uZThiODNlYTFl
YWNkIDEwMDY0NA0KPiAtLS0gYS9kcml2ZXJzL3hlbi9nbnRkZXYuYw0KPiArKysgYi9kcml2
ZXJzL3hlbi9nbnRkZXYuYw0KPiBAQCAtMzUsNiArMzUsNyBAQA0KPiAgICNpbmNsdWRlIDxs
aW51eC9zbGFiLmg+DQo+ICAgI2luY2x1ZGUgPGxpbnV4L2hpZ2htZW0uaD4NCj4gICAjaW5j
bHVkZSA8bGludXgvcmVmY291bnQuaD4NCj4gKyNpbmNsdWRlIDxsaW51eC93b3JrcXVldWUu
aD4NCj4gICANCj4gICAjaW5jbHVkZSA8eGVuL3hlbi5oPg0KPiAgICNpbmNsdWRlIDx4ZW4v
Z3JhbnRfdGFibGUuaD4NCj4gQEAgLTYwLDEwICs2MSwxMSBAQCBtb2R1bGVfcGFyYW0obGlt
aXQsIHVpbnQsIDA2NDQpOw0KPiAgIE1PRFVMRV9QQVJNX0RFU0MobGltaXQsDQo+ICAgCSJN
YXhpbXVtIG51bWJlciBvZiBncmFudHMgdGhhdCBtYXkgYmUgbWFwcGVkIGJ5IG9uZSBtYXBw
aW5nIHJlcXVlc3QiKTsNCj4gICANCj4gKy8qIFRydWUgaW4gUFYgbW9kZSwgZmFsc2Ugb3Ro
ZXJ3aXNlICovDQo+ICAgc3RhdGljIGludCB1c2VfcHRlbW9kOw0KPiAgIA0KPiAtc3RhdGlj
IGludCB1bm1hcF9ncmFudF9wYWdlcyhzdHJ1Y3QgZ250ZGV2X2dyYW50X21hcCAqbWFwLA0K
PiAtCQkJICAgICBpbnQgb2Zmc2V0LCBpbnQgcGFnZXMpOw0KPiArc3RhdGljIHZvaWQgdW5t
YXBfZ3JhbnRfcGFnZXMoc3RydWN0IGdudGRldl9ncmFudF9tYXAgKm1hcCwNCj4gKwkJCSAg
ICAgIGludCBvZmZzZXQsIGludCBwYWdlcyk7DQo+ICAgDQo+ICAgc3RhdGljIHN0cnVjdCBt
aXNjZGV2aWNlIGdudGRldl9taXNjZGV2Ow0KPiAgIA0KPiBAQCAtMTIwLDYgKzEyMiw3IEBA
IHN0YXRpYyB2b2lkIGdudGRldl9mcmVlX21hcChzdHJ1Y3QgZ250ZGV2X2dyYW50X21hcCAq
bWFwKQ0KPiAgIAlrdmZyZWUobWFwLT51bm1hcF9vcHMpOw0KPiAgIAlrdmZyZWUobWFwLT5r
bWFwX29wcyk7DQo+ICAgCWt2ZnJlZShtYXAtPmt1bm1hcF9vcHMpOw0KPiArCWt2ZnJlZSht
YXAtPmJlaW5nX3JlbW92ZWQpOw0KPiAgIAlrZnJlZShtYXApOw0KPiAgIH0NCj4gICANCj4g
QEAgLTE0MCwxMCArMTQzLDEzIEBAIHN0cnVjdCBnbnRkZXZfZ3JhbnRfbWFwICpnbnRkZXZf
YWxsb2NfbWFwKHN0cnVjdCBnbnRkZXZfcHJpdiAqcHJpdiwgaW50IGNvdW50LA0KPiAgIAlh
ZGQtPnVubWFwX29wcyA9IGt2bWFsbG9jX2FycmF5KGNvdW50LCBzaXplb2YoYWRkLT51bm1h
cF9vcHNbMF0pLA0KPiAgIAkJCQkJR0ZQX0tFUk5FTCk7DQo+ICAgCWFkZC0+cGFnZXMgICAg
ID0ga3ZjYWxsb2MoY291bnQsIHNpemVvZihhZGQtPnBhZ2VzWzBdKSwgR0ZQX0tFUk5FTCk7
DQo+ICsJYWRkLT5iZWluZ19yZW1vdmVkID0NCj4gKwkJa3ZjYWxsb2MoY291bnQsIHNpemVv
ZihhZGQtPmJlaW5nX3JlbW92ZWRbMF0pLCBHRlBfS0VSTkVMKTsNCj4gICAJaWYgKE5VTEwg
PT0gYWRkLT5ncmFudHMgICAgfHwNCj4gICAJICAgIE5VTEwgPT0gYWRkLT5tYXBfb3BzICAg
fHwNCj4gICAJICAgIE5VTEwgPT0gYWRkLT51bm1hcF9vcHMgfHwNCj4gLQkgICAgTlVMTCA9
PSBhZGQtPnBhZ2VzKQ0KPiArCSAgICBOVUxMID09IGFkZC0+cGFnZXMgICAgIHx8DQo+ICsJ
ICAgIE5VTEwgPT0gYWRkLT5iZWluZ19yZW1vdmVkKQ0KPiAgIAkJZ290byBlcnI7DQo+ICAg
CWlmICh1c2VfcHRlbW9kKSB7DQo+ICAgCQlhZGQtPmttYXBfb3BzICAgPSBrdm1hbGxvY19h
cnJheShjb3VudCwgc2l6ZW9mKGFkZC0+a21hcF9vcHNbMF0pLA0KPiBAQCAtMjUwLDkgKzI1
NiwzNCBAQCB2b2lkIGdudGRldl9wdXRfbWFwKHN0cnVjdCBnbnRkZXZfcHJpdiAqcHJpdiwg
c3RydWN0IGdudGRldl9ncmFudF9tYXAgKm1hcCkNCj4gICAJaWYgKCFyZWZjb3VudF9kZWNf
YW5kX3Rlc3QoJm1hcC0+dXNlcnMpKQ0KPiAgIAkJcmV0dXJuOw0KPiAgIA0KPiAtCWlmICht
YXAtPnBhZ2VzICYmICF1c2VfcHRlbW9kKQ0KPiArCWlmIChtYXAtPnBhZ2VzICYmICF1c2Vf
cHRlbW9kKSB7DQo+ICsJCS8qDQo+ICsJCSAqIEluY3JlbWVudCB0aGUgcmVmZXJlbmNlIGNv
dW50LiAgVGhpcyBlbnN1cmVzIHRoYXQgdGhlDQo+ICsJCSAqIHN1YnNlcXVlbnQgY2FsbCB0
byB1bm1hcF9ncmFudF9wYWdlcygpIHdpbGwgbm90IHdpbmQgdXANCj4gKwkJICogcmUtZW50
ZXJpbmcgaXRzZWxmLiAgSXQgKmNhbiogd2luZCB1cCBjYWxsaW5nDQo+ICsJCSAqIGdudGRl
dl9wdXRfbWFwKCkgcmVjdXJzaXZlbHksIGJ1dCBzdWNoIGNhbGxzIHdpbGwgYmUgd2l0aCBh
DQo+ICsJCSAqIG5vbnplcm8gcmVmZXJlbmNlIGNvdW50LCBzbyB0aGV5IHdpbGwgcmV0dXJu
IGJlZm9yZSB0aGlzIGNvZGUNCj4gKwkJICogaXMgcmVhY2hlZC4gIFRoZSByZWN1cnNpb24g
ZGVwdGggaXMgdGh1cyBsaW1pdGVkIHRvIDEuDQo+ICsJCSAqLw0KPiArCQlyZWZjb3VudF9p
bmMoJm1hcC0+dXNlcnMpOw0KPiArDQo+ICsJCS8qDQo+ICsJCSAqIFVubWFwIHRoZSBncmFu
dHMuICBUaGlzIG1heSBvciBtYXkgbm90IGJlIGFzeW5jaHJvbm91cywgc28gaXQNCj4gKwkJ
ICogaXMgcG9zc2libGUgdGhhdCB0aGUgcmVmZXJlbmNlIGNvdW50IGlzIDEgb24gcmV0dXJu
LCBidXQgaXQNCj4gKwkJICogY291bGQgYWxzbyBiZSBncmVhdGVyIHRoYW4gMS4NCj4gKwkJ
ICovDQo+ICAgCQl1bm1hcF9ncmFudF9wYWdlcyhtYXAsIDAsIG1hcC0+Y291bnQpOw0KPiAg
IA0KPiArCQkvKiBDaGVjayBpZiB0aGUgbWVtb3J5IG5vdyBuZWVkcyB0byBiZSBmcmVlZCAq
Lw0KPiArCQlpZiAoIXJlZmNvdW50X2RlY19hbmRfdGVzdCgmbWFwLT51c2VycykpDQo+ICsJ
CQlyZXR1cm47DQo+ICsNCj4gKwkJLyoNCj4gKwkJICogQWxsIHBhZ2VzIGhhdmUgYmVlbiBy
ZXR1cm5lZCB0byB0aGUgaHlwZXJ2aXNvciwgc28gZnJlZSB0aGUNCj4gKwkJICogbWFwLiAg
RklYTUU6IHRoaXMgaXMgZmFyIHRvbyBjb21wbGV4Lg0KPiArCQkgKi8NCg0KRG8geW91IGhh
dmUgYW4gaWRlYSBob3cgdG8gc2ltcGxpZnkgdGhpcz8NCg0KSWYgeWVzLCBJJ20gZmluZSB3
aXRoIHRoZSBjb21tZW50LiBJZiBubywgSSdkIHJhdGhlciBkcm9wIHRoZSAiRklYTUUiLg0K
DQo+ICsJfQ0KPiArDQo+ICAgCWlmIChtYXAtPm5vdGlmeS5mbGFncyAmIFVOTUFQX05PVElG
WV9TRU5EX0VWRU5UKSB7DQo+ICAgCQlub3RpZnlfcmVtb3RlX3ZpYV9ldnRjaG4obWFwLT5u
b3RpZnkuZXZlbnQpOw0KPiAgIAkJZXZ0Y2huX3B1dChtYXAtPm5vdGlmeS5ldmVudCk7DQo+
IEBAIC0yODMsNiArMzE0LDcgQEAgc3RhdGljIGludCBmaW5kX2dyYW50X3B0ZXMocHRlX3Qg
KnB0ZSwgdW5zaWduZWQgbG9uZyBhZGRyLCB2b2lkICpkYXRhKQ0KPiAgIA0KPiAgIGludCBn
bnRkZXZfbWFwX2dyYW50X3BhZ2VzKHN0cnVjdCBnbnRkZXZfZ3JhbnRfbWFwICptYXApDQo+
ICAgew0KPiArCXNpemVfdCBhbGxvY2VkID0gMDsNCj4gICAJaW50IGksIGVyciA9IDA7DQo+
ICAgDQo+ICAgCWlmICghdXNlX3B0ZW1vZCkgew0KPiBAQCAtMzMxLDk3ICszNjMsMTE0IEBA
IGludCBnbnRkZXZfbWFwX2dyYW50X3BhZ2VzKHN0cnVjdCBnbnRkZXZfZ3JhbnRfbWFwICpt
YXApDQo+ICAgCQkJbWFwLT5jb3VudCk7DQo+ICAgDQo+ICAgCWZvciAoaSA9IDA7IGkgPCBt
YXAtPmNvdW50OyBpKyspIHsNCj4gLQkJaWYgKG1hcC0+bWFwX29wc1tpXS5zdGF0dXMgPT0g
R05UU1Rfb2theSkNCj4gKwkJaWYgKG1hcC0+bWFwX29wc1tpXS5zdGF0dXMgPT0gR05UU1Rf
b2theSkgew0KPiAgIAkJCW1hcC0+dW5tYXBfb3BzW2ldLmhhbmRsZSA9IG1hcC0+bWFwX29w
c1tpXS5oYW5kbGU7DQo+IC0JCWVsc2UgaWYgKCFlcnIpDQo+ICsJCQlpZiAoIXVzZV9wdGVt
b2QpDQo+ICsJCQkJYWxsb2NlZCsrOw0KPiArCQl9IGVsc2UgaWYgKCFlcnIpDQo+ICAgCQkJ
ZXJyID0gLUVJTlZBTDsNCj4gICANCj4gICAJCWlmIChtYXAtPmZsYWdzICYgR05UTUFQX2Rl
dmljZV9tYXApDQo+ICAgCQkJbWFwLT51bm1hcF9vcHNbaV0uZGV2X2J1c19hZGRyID0gbWFw
LT5tYXBfb3BzW2ldLmRldl9idXNfYWRkcjsNCj4gICANCj4gICAJCWlmICh1c2VfcHRlbW9k
KSB7DQo+IC0JCQlpZiAobWFwLT5rbWFwX29wc1tpXS5zdGF0dXMgPT0gR05UU1Rfb2theSkN
Cj4gKwkJCWlmIChtYXAtPmttYXBfb3BzW2ldLnN0YXR1cyA9PSBHTlRTVF9va2F5KSB7DQo+
ICsJCQkJaWYgKG1hcC0+bWFwX29wc1tpXS5zdGF0dXMgPT0gR05UU1Rfb2theSkNCj4gKwkJ
CQkJYWxsb2NlZCsrOw0KPiAgIAkJCQltYXAtPmt1bm1hcF9vcHNbaV0uaGFuZGxlID0gbWFw
LT5rbWFwX29wc1tpXS5oYW5kbGU7DQo+IC0JCQllbHNlIGlmICghZXJyKQ0KPiArCQkJfSBl
bHNlIGlmICghZXJyKSB7DQo+ICsJCQkJLyogRklYTUU6IHNob3VsZCB0aGlzIGJlIGEgV0FS
TigpPyAqLw0KDQpJIGRvbid0IHRoaW5rIGEgV0FSTigpIHdvdWxkIGJlIGFwcHJvcHJpYXRl
IGhlcmUsIGFzIHRoZSBwYWdlIGlzIGJhc2ljYWxseQ0Kc2VsY3RhYmxlIHZpYSB1c2VyIGNv
ZGUuIEluIGNhc2UgdGhlIGNhbGxlciBpcyBwYXNzaW5nIGUuZy4gYSB1c2VyIGFkZHJlc3MN
CndoaWNoIGlzIGFscmVhZHkgbWFwcGluZyBhIGZvcmVpZ24gZnJhbWUsIHRoaXMgd291bGQg
cmVzdWx0IGluIGFuIGVycm9yIGhlcmUuDQoNClNvIGp1c3QgcmV0dXJuaW5nIGFuIGVycm9y
IGlzIGJldHRlciB0aGFuIGZsb29kaW5nIHRoZSBjb25zb2xlIHdpdGggbWVzc2FnZXMuDQoN
Cj4gICAJCQkJZXJyID0gLUVJTlZBTDsNCj4gKwkJCX0NCj4gICAJCX0NCj4gICAJfQ0KPiAr
CWF0b21pY19sb25nX2FkZChhbGxvY2VkLCAmbWFwLT5saXZlX2dyYW50cyk7DQo+ICAgCXJl
dHVybiBlcnI7DQo+ICAgfQ0KPiAgIA0KPiAtc3RhdGljIGludCBfX3VubWFwX2dyYW50X3Bh
Z2VzKHN0cnVjdCBnbnRkZXZfZ3JhbnRfbWFwICptYXAsIGludCBvZmZzZXQsDQo+IC0JCQkg
ICAgICAgaW50IHBhZ2VzKQ0KPiArc3RhdGljIHZvaWQgX191bm1hcF9ncmFudF9wYWdlc19k
b25lKGludCByZXN1bHQsDQo+ICsJCXN0cnVjdCBnbnRhYl91bm1hcF9xdWV1ZV9kYXRhICpk
YXRhKQ0KPiAgIHsNCj4gLQlpbnQgaSwgZXJyID0gMDsNCj4gLQlzdHJ1Y3QgZ250YWJfdW5t
YXBfcXVldWVfZGF0YSB1bm1hcF9kYXRhOw0KPiAtDQo+IC0JaWYgKG1hcC0+bm90aWZ5LmZs
YWdzICYgVU5NQVBfTk9USUZZX0NMRUFSX0JZVEUpIHsNCj4gLQkJaW50IHBnbm8gPSAobWFw
LT5ub3RpZnkuYWRkciA+PiBQQUdFX1NISUZUKTsNCj4gLQkJaWYgKHBnbm8gPj0gb2Zmc2V0
ICYmIHBnbm8gPCBvZmZzZXQgKyBwYWdlcykgew0KPiAtCQkJLyogTm8gbmVlZCBmb3Iga21h
cCwgcGFnZXMgYXJlIGluIGxvd21lbSAqLw0KPiAtCQkJdWludDhfdCAqdG1wID0gcGZuX3Rv
X2thZGRyKHBhZ2VfdG9fcGZuKG1hcC0+cGFnZXNbcGdub10pKTsNCj4gLQkJCXRtcFttYXAt
Pm5vdGlmeS5hZGRyICYgKFBBR0VfU0laRS0xKV0gPSAwOw0KPiAtCQkJbWFwLT5ub3RpZnku
ZmxhZ3MgJj0gflVOTUFQX05PVElGWV9DTEVBUl9CWVRFOw0KPiAtCQl9DQo+IC0JfQ0KPiAt
DQo+IC0JdW5tYXBfZGF0YS51bm1hcF9vcHMgPSBtYXAtPnVubWFwX29wcyArIG9mZnNldDsN
Cj4gLQl1bm1hcF9kYXRhLmt1bm1hcF9vcHMgPSB1c2VfcHRlbW9kID8gbWFwLT5rdW5tYXBf
b3BzICsgb2Zmc2V0IDogTlVMTDsNCj4gLQl1bm1hcF9kYXRhLnBhZ2VzID0gbWFwLT5wYWdl
cyArIG9mZnNldDsNCj4gLQl1bm1hcF9kYXRhLmNvdW50ID0gcGFnZXM7DQo+IC0NCj4gLQll
cnIgPSBnbnR0YWJfdW5tYXBfcmVmc19zeW5jKCZ1bm1hcF9kYXRhKTsNCj4gLQlpZiAoZXJy
KQ0KPiAtCQlyZXR1cm4gZXJyOw0KPiArCXVuc2lnbmVkIGludCBpOw0KPiArCXN0cnVjdCBn
bnRkZXZfZ3JhbnRfbWFwICptYXAgPSBkYXRhLT5kYXRhOw0KPiArCXVuc2lnbmVkIGludCBv
ZmZzZXQgPSBkYXRhLT51bm1hcF9vcHMgLSBtYXAtPnVubWFwX29wczsNCj4gKwlhdG9taWNf
bG9uZ19zdWIoZGF0YS0+Y291bnQsICZtYXAtPmxpdmVfZ3JhbnRzKTsNCg0KU2hvdWxkbid0
IHRoaXMgYmUgZG9uZSBvbmx5IGFmdGVyIHRoZSBsYXN0IHVzYWdlIG9mIG1hcCAoaS5lLiBh
ZnRlciB0aGUNCmZvbGx3aW5nIGxvb3ApPyBPdGhlcndpc2UgZ250ZGV2X21tYXAoKSB3b3Vs
ZCBubyBsb25nZXIgYmUgYmxvY2tlZCBmcm9tDQpyZXN1aW5nIG1hcC4NCg0KPiAgIA0KPiAt
CWZvciAoaSA9IDA7IGkgPCBwYWdlczsgaSsrKSB7DQo+IC0JCWlmIChtYXAtPnVubWFwX29w
c1tvZmZzZXQraV0uc3RhdHVzKQ0KPiAtCQkJZXJyID0gLUVJTlZBTDsNCj4gKwlmb3IgKGkg
PSAwOyBpIDwgZGF0YS0+Y291bnQ7IGkrKykgew0KPiArCQlXQVJOX09OKG1hcC0+dW5tYXBf
b3BzW29mZnNldCtpXS5zdGF0dXMpOw0KPiAgIAkJcHJfZGVidWcoInVubWFwIGhhbmRsZT0l
ZCBzdD0lZFxuIiwNCj4gICAJCQltYXAtPnVubWFwX29wc1tvZmZzZXQraV0uaGFuZGxlLA0K
PiAgIAkJCW1hcC0+dW5tYXBfb3BzW29mZnNldCtpXS5zdGF0dXMpOw0KPiAgIAkJbWFwLT51
bm1hcF9vcHNbb2Zmc2V0K2ldLmhhbmRsZSA9IElOVkFMSURfR1JBTlRfSEFORExFOw0KPiAg
IAkJaWYgKHVzZV9wdGVtb2QpIHsNCj4gLQkJCWlmIChtYXAtPmt1bm1hcF9vcHNbb2Zmc2V0
K2ldLnN0YXR1cykNCj4gLQkJCQllcnIgPSAtRUlOVkFMOw0KPiArCQkJV0FSTl9PTihtYXAt
Pmt1bm1hcF9vcHNbb2Zmc2V0K2ldLnN0YXR1cyk7DQo+ICAgCQkJcHJfZGVidWcoImt1bm1h
cCBoYW5kbGU9JXUgc3Q9JWRcbiIsDQo+ICAgCQkJCSBtYXAtPmt1bm1hcF9vcHNbb2Zmc2V0
K2ldLmhhbmRsZSwNCj4gICAJCQkJIG1hcC0+a3VubWFwX29wc1tvZmZzZXQraV0uc3RhdHVz
KTsNCj4gICAJCQltYXAtPmt1bm1hcF9vcHNbb2Zmc2V0K2ldLmhhbmRsZSA9IElOVkFMSURf
R1JBTlRfSEFORExFOw0KPiAgIAkJfQ0KPiAgIAl9DQo+IC0JcmV0dXJuIGVycjsNCj4gKw0K
PiArCS8qIFJlbGVhc2UgcmVmZXJlbmNlIHRha2VuIGJ5IF9fdW5tYXBfZ3JhbnRfcGFnZXMg
Ki8NCj4gKwlnbnRkZXZfcHV0X21hcChOVUxMLCBtYXApOw0KPiAgIH0NCj4gICANCj4gLXN0
YXRpYyBpbnQgdW5tYXBfZ3JhbnRfcGFnZXMoc3RydWN0IGdudGRldl9ncmFudF9tYXAgKm1h
cCwgaW50IG9mZnNldCwNCj4gLQkJCSAgICAgaW50IHBhZ2VzKQ0KPiArc3RhdGljIHZvaWQg
X191bm1hcF9ncmFudF9wYWdlcyhzdHJ1Y3QgZ250ZGV2X2dyYW50X21hcCAqbWFwLCBpbnQg
b2Zmc2V0LA0KPiArCQkJICAgICAgIGludCBwYWdlcykNCj4gICB7DQo+IC0JaW50IHJhbmdl
LCBlcnIgPSAwOw0KPiArCWlmIChtYXAtPm5vdGlmeS5mbGFncyAmIFVOTUFQX05PVElGWV9D
TEVBUl9CWVRFKSB7DQo+ICsJCWludCBwZ25vID0gKG1hcC0+bm90aWZ5LmFkZHIgPj4gUEFH
RV9TSElGVCk7DQo+ICsNCj4gKwkJaWYgKHBnbm8gPj0gb2Zmc2V0ICYmIHBnbm8gPCBvZmZz
ZXQgKyBwYWdlcykgew0KPiArCQkJLyogTm8gbmVlZCBmb3Iga21hcCwgcGFnZXMgYXJlIGlu
IGxvd21lbSAqLw0KPiArCQkJdWludDhfdCAqdG1wID0gcGZuX3RvX2thZGRyKHBhZ2VfdG9f
cGZuKG1hcC0+cGFnZXNbcGdub10pKTsNCj4gKw0KPiArCQkJdG1wW21hcC0+bm90aWZ5LmFk
ZHIgJiAoUEFHRV9TSVpFLTEpXSA9IDA7DQo+ICsJCQltYXAtPm5vdGlmeS5mbGFncyAmPSB+
VU5NQVBfTk9USUZZX0NMRUFSX0JZVEU7DQo+ICsJCX0NCj4gKwl9DQo+ICsNCj4gKwltYXAt
PnVubWFwX2RhdGEudW5tYXBfb3BzID0gbWFwLT51bm1hcF9vcHMgKyBvZmZzZXQ7DQo+ICsJ
bWFwLT51bm1hcF9kYXRhLmt1bm1hcF9vcHMgPSB1c2VfcHRlbW9kID8gbWFwLT5rdW5tYXBf
b3BzICsgb2Zmc2V0IDogTlVMTDsNCj4gKwltYXAtPnVubWFwX2RhdGEucGFnZXMgPSBtYXAt
PnBhZ2VzICsgb2Zmc2V0Ow0KPiArCW1hcC0+dW5tYXBfZGF0YS5jb3VudCA9IHBhZ2VzOw0K
PiArCW1hcC0+dW5tYXBfZGF0YS5kb25lID0gX191bm1hcF9ncmFudF9wYWdlc19kb25lOw0K
PiArCW1hcC0+dW5tYXBfZGF0YS5kYXRhID0gbWFwOw0KPiArCXJlZmNvdW50X2luYygmbWFw
LT51c2Vycyk7IC8qIHRvIGtlZXAgbWFwIGFsaXZlIGR1cmluZyBhc3luYyBjYWxsIGJlbG93
ICovDQo+ICsNCj4gKwlnbnR0YWJfdW5tYXBfcmVmc19hc3luYygmbWFwLT51bm1hcF9kYXRh
KTsNCj4gK30NCj4gKw0KPiArc3RhdGljIHZvaWQgdW5tYXBfZ3JhbnRfcGFnZXMoc3RydWN0
IGdudGRldl9ncmFudF9tYXAgKm1hcCwgaW50IG9mZnNldCwNCj4gKwkJCSAgICAgIGludCBw
YWdlcykNCj4gK3sNCj4gKwlpbnQgcmFuZ2U7DQo+ICsNCj4gKwlpZiAoYXRvbWljX2xvbmdf
cmVhZCgmbWFwLT5saXZlX2dyYW50cykgPT0gMCkNCj4gKwkJcmV0dXJuOyAvKiBOb3RoaW5n
IHRvIGRvICovDQo+ICAgDQo+ICAgCXByX2RlYnVnKCJ1bm1hcCAlZCslZCBbJWQrJWRdXG4i
LCBtYXAtPmluZGV4LCBtYXAtPmNvdW50LCBvZmZzZXQsIHBhZ2VzKTsNCj4gICANCj4gICAJ
LyogSXQgaXMgcG9zc2libGUgdGhlIHJlcXVlc3RlZCByYW5nZSB3aWxsIGhhdmUgYSAiaG9s
ZSIgd2hlcmUgd2UNCj4gICAJICogYWxyZWFkeSB1bm1hcHBlZCBzb21lIG9mIHRoZSBncmFu
dHMuIE9ubHkgdW5tYXAgdmFsaWQgcmFuZ2VzLg0KPiAgIAkgKi8NCj4gLQl3aGlsZSAocGFn
ZXMgJiYgIWVycikgew0KPiAtCQl3aGlsZSAocGFnZXMgJiYNCj4gLQkJICAgICAgIG1hcC0+
dW5tYXBfb3BzW29mZnNldF0uaGFuZGxlID09IElOVkFMSURfR1JBTlRfSEFORExFKSB7DQo+
ICsJd2hpbGUgKHBhZ2VzKSB7DQo+ICsJCXdoaWxlIChwYWdlcyAmJiBtYXAtPmJlaW5nX3Jl
bW92ZWRbb2Zmc2V0XSkgew0KPiAgIAkJCW9mZnNldCsrOw0KPiAgIAkJCXBhZ2VzLS07DQo+
ICAgCQl9DQo+ICAgCQlyYW5nZSA9IDA7DQo+ICAgCQl3aGlsZSAocmFuZ2UgPCBwYWdlcykg
ew0KPiAtCQkJaWYgKG1hcC0+dW5tYXBfb3BzW29mZnNldCArIHJhbmdlXS5oYW5kbGUgPT0N
Cj4gLQkJCSAgICBJTlZBTElEX0dSQU5UX0hBTkRMRSkNCj4gKwkJCWlmIChtYXAtPmJlaW5n
X3JlbW92ZWRbb2Zmc2V0ICsgcmFuZ2VdKQ0KPiAgIAkJCQlicmVhazsNCj4gKwkJCW1hcC0+
YmVpbmdfcmVtb3ZlZFtvZmZzZXQgKyByYW5nZV0gPSB0cnVlOw0KPiAgIAkJCXJhbmdlKys7
DQo+ICAgCQl9DQo+IC0JCWVyciA9IF9fdW5tYXBfZ3JhbnRfcGFnZXMobWFwLCBvZmZzZXQs
IHJhbmdlKTsNCj4gKwkJaWYgKHJhbmdlKQ0KPiArCQkJX191bm1hcF9ncmFudF9wYWdlcyht
YXAsIG9mZnNldCwgcmFuZ2UpOw0KPiAgIAkJb2Zmc2V0ICs9IHJhbmdlOw0KPiAgIAkJcGFn
ZXMgLT0gcmFuZ2U7DQo+ICAgCX0NCj4gLQ0KPiAtCXJldHVybiBlcnI7DQo+ICAgfQ0KPiAg
IA0KPiAgIC8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLSAqLw0KPiBAQCAtNDczLDcgKzUyMiw2IEBAIHN0YXRp
YyBib29sIGdudGRldl9pbnZhbGlkYXRlKHN0cnVjdCBtbXVfaW50ZXJ2YWxfbm90aWZpZXIg
Km1uLA0KPiAgIAlzdHJ1Y3QgZ250ZGV2X2dyYW50X21hcCAqbWFwID0NCj4gICAJCWNvbnRh
aW5lcl9vZihtbiwgc3RydWN0IGdudGRldl9ncmFudF9tYXAsIG5vdGlmaWVyKTsNCj4gICAJ
dW5zaWduZWQgbG9uZyBtc3RhcnQsIG1lbmQ7DQo+IC0JaW50IGVycjsNCj4gICANCj4gICAJ
aWYgKCFtbXVfbm90aWZpZXJfcmFuZ2VfYmxvY2thYmxlKHJhbmdlKSkNCj4gICAJCXJldHVy
biBmYWxzZTsNCj4gQEAgLTQ5NCwxMCArNTQyLDkgQEAgc3RhdGljIGJvb2wgZ250ZGV2X2lu
dmFsaWRhdGUoc3RydWN0IG1tdV9pbnRlcnZhbF9ub3RpZmllciAqbW4sDQo+ICAgCQkJbWFw
LT5pbmRleCwgbWFwLT5jb3VudCwNCj4gICAJCQltYXAtPnZtYS0+dm1fc3RhcnQsIG1hcC0+
dm1hLT52bV9lbmQsDQo+ICAgCQkJcmFuZ2UtPnN0YXJ0LCByYW5nZS0+ZW5kLCBtc3RhcnQs
IG1lbmQpOw0KPiAtCWVyciA9IHVubWFwX2dyYW50X3BhZ2VzKG1hcCwNCj4gKwl1bm1hcF9n
cmFudF9wYWdlcyhtYXAsDQo+ICAgCQkJCShtc3RhcnQgLSBtYXAtPnZtYS0+dm1fc3RhcnQp
ID4+IFBBR0VfU0hJRlQsDQo+ICAgCQkJCShtZW5kIC0gbXN0YXJ0KSA+PiBQQUdFX1NISUZU
KTsNCj4gLQlXQVJOX09OKGVycik7DQo+ICAgDQo+ICAgCXJldHVybiB0cnVlOw0KPiAgIH0N
Cj4gQEAgLTk4NSw2ICsxMDMyLDEwIEBAIHN0YXRpYyBpbnQgZ250ZGV2X21tYXAoc3RydWN0
IGZpbGUgKmZsaXAsIHN0cnVjdCB2bV9hcmVhX3N0cnVjdCAqdm1hKQ0KPiAgIAkJZ290byB1
bmxvY2tfb3V0Ow0KPiAgIAlpZiAodXNlX3B0ZW1vZCAmJiBtYXAtPnZtYSkNCj4gICAJCWdv
dG8gdW5sb2NrX291dDsNCj4gKwlpZiAoYXRvbWljX2xvbmdfcmVhZCgmbWFwLT5saXZlX2dy
YW50cykpIHsNCj4gKwkJZXJyID0gLUVBR0FJTjsNCj4gKwkJZ290byB1bmxvY2tfb3V0Ow0K
PiArCX0NCj4gICAJcmVmY291bnRfaW5jKCZtYXAtPnVzZXJzKTsNCj4gICANCj4gICAJdm1h
LT52bV9vcHMgPSAmZ250ZGV2X3Ztb3BzOw0KDQoNCkp1ZXJnZW4NCg==
--------------dpJdUNQI0TrfY3q1bJvkn0vi--

--------------pa7EHydxJHVUZYrUBb4pyDrz--

--------------1ucI0exEQEjkl2gz5HyhYmQ0--


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 14:55:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 14:55:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341589.566833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8hi-0006GM-Jt; Fri, 03 Jun 2022 14:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341589.566833; Fri, 03 Jun 2022 14:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8hi-0006GF-GQ; Fri, 03 Jun 2022 14:55:10 +0000
Received: by outflank-mailman (input) for mailman id 341589;
 Fri, 03 Jun 2022 14:55:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DOPF=WK=citrix.com=prvs=146533d13=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nx8hh-00064G-14
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 14:55:09 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27eaf85e-e34d-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 16:55:07 +0200 (CEST)
Received: from mail-bn8nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jun 2022 10:55:05 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by DM6PR03MB5180.namprd03.prod.outlook.com (2603:10b6:5:1::10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.13; Fri, 3 Jun 2022 14:55:03 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.015; Fri, 3 Jun 2022
 14:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27eaf85e-e34d-11ec-bd2c-47488cf2e6aa
Date: Fri, 3 Jun 2022 16:54:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH RFC 2/6] x86/ioapic: add a raw field to RTE struct
Message-ID: <YpogwwKnxDjCc1xs@Air-de-Roger>
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-3-roger.pau@citrix.com>
 <3b14d173-b0bf-f8e5-70c4-2c9a3085bffa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3b14d173-b0bf-f8e5-70c4-2c9a3085bffa@suse.com>
MIME-Version: 1.0

On Fri, Jun 03, 2022 at 03:24:37PM +0200, Jan Beulich wrote:
> On 21.04.2022 15:21, Roger Pau Monne wrote:
> > No functional change intended.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Once seeing the purpose (in a later patch, I suppose) I certainly
> don't mind. We do have a couple of literal initializers, though (see
> e.g. the top of ioapic_guest_write()). Do those still compile fine
> (warning free) even with old gcc?

Will likely need to test it with the gitlab CI.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 15:01:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 15:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341601.566844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8np-0007rF-GB; Fri, 03 Jun 2022 15:01:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341601.566844; Fri, 03 Jun 2022 15:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx8np-0007r8-DH; Fri, 03 Jun 2022 15:01:29 +0000
Received: by outflank-mailman (input) for mailman id 341601;
 Fri, 03 Jun 2022 15:01:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DOPF=WK=citrix.com=prvs=146533d13=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nx8no-0007r2-3g
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 15:01:28 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 098c8316-e34e-11ec-837f-e5687231ffcc;
 Fri, 03 Jun 2022 17:01:26 +0200 (CEST)
Received: from mail-bn7nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jun 2022 11:01:24 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MN2PR03MB5325.namprd03.prod.outlook.com (2603:10b6:208:1e4::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Fri, 3 Jun
 2022 15:01:22 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.015; Fri, 3 Jun 2022
 15:01:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 098c8316-e34e-11ec-837f-e5687231ffcc
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jweHrj49xWVULGG2jMQ0CzOE1DRKXh3yACBN7lOJSMw=;
 b=H3SAo1J+AyM4zQfuG8Io7YRG0+XfOfnHB7Kn9vZVWvlAP/U941ju4KfKd5JwEDdxcZahjXhVOpSMa/oIUV/75hqydJaoKg6UhQfrKdhPFOfyF3sRU/itHG9Isvppw3w3xCRqvv5Fu0YgQtygAzI26mu/eupzlZssbc8vDxg25dA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 3 Jun 2022 17:01:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH RFC 3/6] x86/ioapic: RTE modifications must use
 ioapic_write_entry
Message-ID: <YpoiPRETkjBskr1d@Air-de-Roger>
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-4-roger.pau@citrix.com>
 <febbff78-6a2d-f2fb-d8ea-a15f97a3abf4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <febbff78-6a2d-f2fb-d8ea-a15f97a3abf4@suse.com>
X-ClientProxiedBy: LO2P265CA0298.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f68bcb09-0066-42ac-f5fa-08da4571ebea
X-MS-TrafficTypeDiagnostic: MN2PR03MB5325:EE_
X-Microsoft-Antispam-PRVS:
	<MN2PR03MB53254C4A651DC1EECE40AF358FA19@MN2PR03MB5325.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f68bcb09-0066-42ac-f5fa-08da4571ebea
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2022 15:01:22.0042
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: c4g2a92nwpq/8PDpkaEc55cxghCCyYdStwf7JmTGcZ8oDvVgNRbRU7c87YkwCErI81y55cjyarInxqqeSIV7OA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5325

On Fri, Jun 03, 2022 at 03:34:33PM +0200, Jan Beulich wrote:
> On 21.04.2022 15:21, Roger Pau Monne wrote:
> > Do not allow writes to RTE registers through io_apic_write; instead,
> > require changes to RTEs to be performed via ioapic_write_entry.
> 
> Hmm, this doubles the number of MMIO accesses in the affected code fragments.

But it does allow simplifying the IOMMU side quite a lot, by no longer
having to update the IRTE using two different calls.  I'm quite sure
it saves a fair number of accesses to the IOMMU RTE in the following
patches.

> > --- a/xen/arch/x86/include/asm/io_apic.h
> > +++ b/xen/arch/x86/include/asm/io_apic.h
> > @@ -161,22 +161,11 @@ static inline void __io_apic_write(unsigned int apic, unsigned int reg, unsigned
> >  
> >  static inline void io_apic_write(unsigned int apic, unsigned int reg, unsigned int value)
> >  {
> > -    if ( ioapic_reg_remapped(reg) )
> > -        return iommu_update_ire_from_apic(apic, reg, value);
> > +    /* RTE writes must use ioapic_write_entry. */
> > +    BUG_ON(reg >= 0x10);
> >      __io_apic_write(apic, reg, value);
> >  }
> >  
> > -/*
> > - * Re-write a value: to be used for read-modify-write
> > - * cycles where the read already set up the index register.
> > - */
> > -static inline void io_apic_modify(unsigned int apic, unsigned int reg, unsigned int value)
> > -{
> > -    if ( ioapic_reg_remapped(reg) )
> > -        return iommu_update_ire_from_apic(apic, reg, value);
> > -    *(IO_APIC_BASE(apic) + 4) = value;
> > -}
> 
> While the last caller goes away, I don't think this strictly needs to
> be dropped (but could just gain a BUG_ON() like you do a few lines up)?

Hm, could do, but it won't be suitable for modifying RTEs anymore,
and given that was its only usage I didn't see much value in leaving
it around.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 15:48:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 15:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341614.566855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx9Wr-0004me-0B; Fri, 03 Jun 2022 15:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341614.566855; Fri, 03 Jun 2022 15:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nx9Wq-0004mX-S6; Fri, 03 Jun 2022 15:48:00 +0000
Received: by outflank-mailman (input) for mailman id 341614;
 Fri, 03 Jun 2022 15:47:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx9Wp-0004mN-Tg; Fri, 03 Jun 2022 15:47:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx9Wp-00078q-R1; Fri, 03 Jun 2022 15:47:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nx9Wp-0007BO-84; Fri, 03 Jun 2022 15:47:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nx9Wp-0007pV-7W; Fri, 03 Jun 2022 15:47:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YVsQocZ9FwwBXArJNQOebVuz1nGFXYhU3KvBY7Z69nk=; b=I7YmdUn1DQHfJqGW0Ac75uUj9D
	U50s3LBRb0ii7PFcR7WkA+x6uXfxqcp5tjK8vUvHnha2mbuJyms1yOMA3g0LhrGeNZdr6hxUAefDf
	8ROjVSI7qFlGdo0vfrWpqQ9C8Jwb589o7NHF4l9dlkwsvCTifBB0gZs1mt3ZeHMsRur0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170815-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170815: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=50fd82b3a9a9335df5d50c7ddcb81c81d358c4fc
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 15:47:59 +0000

flight 170815 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170815/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                50fd82b3a9a9335df5d50c7ddcb81c81d358c4fc
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   10 days
Failing since        170716  2022-05-24 11:12:06 Z   10 days   29 attempts
Testing same since   170815  2022-06-03 05:33:34 Z    0 days    1 attempts

------------------------------------------------------------
2072 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 234163 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 21:19:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 21:19:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341645.566866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxEhI-0000oF-NJ; Fri, 03 Jun 2022 21:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341645.566866; Fri, 03 Jun 2022 21:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxEhI-0000o8-Iw; Fri, 03 Jun 2022 21:19:08 +0000
Received: by outflank-mailman (input) for mailman id 341645;
 Fri, 03 Jun 2022 21:19:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nxEhG-0000o2-Lz
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 21:19:06 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb896c6e-e382-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 23:19:04 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 7FFEEB82331;
 Fri,  3 Jun 2022 21:19:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CEB51C385A9;
 Fri,  3 Jun 2022 21:19:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb896c6e-e382-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654291142;
	bh=d1Q9IPKvjPok8OAnN8PoPnUQadcNmLlAPGerCI7jOBM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=L65XZnLU73fFbHiQ3HoNW52hsUxf9wZWVQr+vBMiuBwt/iIssaNOSaeWuzM14O7L5
	 NFgJ3Lv9JzFl3/SWM+DJ/tosGwEb5jDEFxwPcVc5x2414DnhuEuq8FzOG+bA29TtZV
	 kugUY/cxExetrr3OfBTvmj5+kpt3LPhpaHqtpbwKReJtLaKpfd28cLFGOLaOeVm50R
	 6FN8NVG8v3f+Z5CSQE/KsYTNI/iFsOv9Tnvj5muUWnepDJVbb30pjvA4TgGKBvWXJC
	 1kLBk96A7GMGF6SMp37Xdwy5z+Kth8tEviWn/h3P3ozj+d2uyeGRYS0DWti4DZSYj9
	 CcY7anQdJYB3A==
Date: Fri, 3 Jun 2022 14:19:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
In-Reply-To: <1652810658-27810-3-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-3-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled, unpopulated
> DMAable (contiguous) pages will be allocated to map grants into,
> instead of ballooning out real RAM pages.
> 
> TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
> fails.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
>  drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 8ccccac..2bb4392 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
>   */
>  int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>  {
> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
> +	int ret;

This is an alternative implementation of the same function. If we are
going to use #ifdef, I would #ifdef the entire function rather than
just its body. Alternatively, within the function body we could use
IS_ENABLED().


> +	ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
> +			args->pages);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
> +	if (ret < 0) {
> +		gnttab_dma_free_pages(args);

Shouldn't it be xen_free_unpopulated_dma_pages() instead?


> +		return ret;
> +	}
> +
> +	args->vaddr = page_to_virt(args->pages[0]);
> +	args->dev_bus_addr = page_to_phys(args->pages[0]);

There are two things to note here. 

The first thing to note is that normally we would call pfn_to_bfn to
retrieve the dev_bus_addr of a page because pfn_to_bfn takes into
account foreign mappings. However, these are freshly allocated pages
without foreign mappings, so page_to_phys/dma should be sufficient.


The second has to do with physical addresses and DMA addresses. The
functions are called gnttab_dma_alloc_pages and
xen_alloc_unpopulated_dma_pages, which makes you think we are retrieving
a DMA address here. However, to get a DMA address we would need to call
page_to_dma rather than page_to_phys.

page_to_dma takes into account special offsets that some devices have
when accessing memory. There are real cases on ARM where the physical
address != DMA address, e.g. RPi4.

However, to call page_to_dma you need to pass as the first argument the
DMA-capable device that is expected to use those pages for DMA (e.g. an
ethernet device or an MMC controller), while the args->dev we have in
gnttab_dma_alloc_pages is the gntdev_miscdev.

So this interface cannot actually be used to allocate memory that is
supposed to be DMA-able by a DMA-capable device, such as an ethernet
device.

But I think that should be fine because the memory is meant to be used
by a userspace PV backend for grant mappings. If any of those mappings
end up being used for actual DMA in the kernel, they should go through
drivers/xen/swiotlb-xen.c, and xen_phys_to_dma should be called, which
ends up calling page_to_dma as appropriate.

It would be good to double-check that the above is correct and, if so,
maybe add a short in-code comment about it:

/*
 * These are not actually DMA addresses but regular physical addresses.
 * If these pages end up being used in a DMA operation then the
 * swiotlb-xen functions are called and xen_phys_to_dma takes care of
 * the address translations:
 *
 * - from gfn to bfn in case of foreign mappings
 * - from physical to DMA addresses in case the two are different for a
 *   given DMA-mastering device
 */



> +	return ret;
> +#else
>  	unsigned long pfn, start_pfn;
>  	size_t size;
>  	int i, ret;
> @@ -910,6 +929,7 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>  fail:
>  	gnttab_dma_free_pages(args);
>  	return ret;
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>  
> @@ -919,6 +939,12 @@ EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>   */
>  int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>  {
> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
> +	gnttab_pages_clear_private(args->nr_pages, args->pages);
> +	xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);
> +
> +	return 0;
> +#else
>  	size_t size;
>  	int i, ret;
>  
> @@ -946,6 +972,7 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>  		dma_free_wc(args->dev, size,
>  			    args->vaddr, args->dev_bus_addr);
>  	return ret;
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(gnttab_dma_free_pages);
>  #endif
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 21:53:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 21:53:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341654.566877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFE7-0005hD-9r; Fri, 03 Jun 2022 21:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341654.566877; Fri, 03 Jun 2022 21:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFE7-0005h6-5m; Fri, 03 Jun 2022 21:53:03 +0000
Received: by outflank-mailman (input) for mailman id 341654;
 Fri, 03 Jun 2022 21:53:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nxFE5-0005h0-TO
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 21:53:01 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88f442a9-e387-11ec-bd2c-47488cf2e6aa;
 Fri, 03 Jun 2022 23:53:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 6FDCBB824BE;
 Fri,  3 Jun 2022 21:52:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4719C385B8;
 Fri,  3 Jun 2022 21:52:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88f442a9-e387-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654293178;
	bh=pgXaHU253phBXXtx1iKJe+k6xQpEHHkD4NWOfztbCWA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DjOo6fbwg+bg41Heda3LMLvcCtBXaZQ+cxFwrckVqaRGhnBMfum6FQ22n8eHlbgPQ
	 aWTo/63JBG/GVCzorqGcNfB3ENyrkPhj4kIHbf9YAK6MckjObo74FjEQs59pu3aaVl
	 iP2JL2QQft9D2esFF/VhaoOl90Zdu78cCwof2sw4QyG059+mawjfIef6QduD6pX94i
	 QshTjhCEQsfV4HSOJ63I8Ck3mkrUE5X4/gdkx42FP9bUeKmYHfyXqhJhUakv2lYQYr
	 oQq4D5r06PF74120zwja0pW3vggve0L7GMzSqgeN8uOSt6Y8mwqFhRpiq2arlPOcnF
	 KpnK1Ttcl3hdg==
Date: Fri, 3 Jun 2022 14:52:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for
 DMA allocations
In-Reply-To: <1652810658-27810-2-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-2-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Add ability to allocate unpopulated DMAable (contiguous) pages
> suitable for grant mapping into. This is going to be used by gnttab
> code (see gnttab_dma_alloc_pages()).
> 
> TODO: There is some code duplication in fill_dma_pool(). Also, pool
> operations likely need to be protected by the lock.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
>  drivers/xen/unpopulated-alloc.c | 167 ++++++++++++++++++++++++++++++++++++++++
>  include/xen/xen.h               |  15 ++++
>  2 files changed, 182 insertions(+)
> 
> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
> index a39f2d3..bca0198 100644
> --- a/drivers/xen/unpopulated-alloc.c
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0
>  #include <linux/errno.h>
> +#include <linux/genalloc.h>
>  #include <linux/gfp.h>
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
> @@ -16,6 +17,8 @@ static DEFINE_MUTEX(list_lock);
>  static struct page *page_list;
>  static unsigned int list_count;
>  
> +static struct gen_pool *dma_pool;
> +
>  static struct resource *target_resource;
>  
>  /*
> @@ -230,6 +233,161 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>  }
>  EXPORT_SYMBOL(xen_free_unpopulated_pages);
>  
> +static int fill_dma_pool(unsigned int nr_pages)
> +{

I think we shouldn't need to add this function at all as we should be
able to reuse fill_list even for contiguous pages. fill_list could
always call gen_pool_add_virt before returning.


> +	struct dev_pagemap *pgmap;
> +	struct resource *res, *tmp_res = NULL;
> +	void *vaddr;
> +	unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> +	struct range mhp_range;
> +	int ret;
> +
> +	res = kzalloc(sizeof(*res), GFP_KERNEL);
> +	if (!res)
> +		return -ENOMEM;
> +
> +	res->name = "Xen DMA pool";
> +	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
> +
> +	mhp_range = mhp_get_pluggable_range(true);
> +
> +	ret = allocate_resource(target_resource, res,
> +				alloc_pages * PAGE_SIZE, mhp_range.start, mhp_range.end,
> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
> +	if (ret < 0) {
> +		pr_err("Cannot allocate new IOMEM resource\n");
> +		goto err_resource;
> +	}
> +
> +	/*
> +	 * Reserve the region previously allocated from Xen resource to avoid
> +	 * re-using it by someone else.
> +	 */
> +	if (target_resource != &iomem_resource) {
> +		tmp_res = kzalloc(sizeof(*tmp_res), GFP_KERNEL);
> +		if (!tmp_res) {
> +			ret = -ENOMEM;
> +			goto err_insert;
> +		}
> +
> +		tmp_res->name = res->name;
> +		tmp_res->start = res->start;
> +		tmp_res->end = res->end;
> +		tmp_res->flags = res->flags;
> +
> +		ret = request_resource(&iomem_resource, tmp_res);
> +		if (ret < 0) {
> +			pr_err("Cannot request resource %pR (%d)\n", tmp_res, ret);
> +			kfree(tmp_res);
> +			goto err_insert;
> +		}
> +	}
> +
> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
> +	if (!pgmap) {
> +		ret = -ENOMEM;
> +		goto err_pgmap;
> +	}
> +
> +	pgmap->type = MEMORY_DEVICE_GENERIC;
> +	pgmap->range = (struct range) {
> +		.start = res->start,
> +		.end = res->end,
> +	};
> +	pgmap->nr_range = 1;
> +	pgmap->owner = res;
> +
> +	vaddr = memremap_pages(pgmap, NUMA_NO_NODE);
> +	if (IS_ERR(vaddr)) {
> +		pr_err("Cannot remap memory range\n");
> +		ret = PTR_ERR(vaddr);
> +		goto err_memremap;
> +	}
> +
> +	ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
> +			alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
> +	if (ret)
> +		goto err_pool;
> +
> +	return 0;
> +
> +err_pool:
> +	memunmap_pages(pgmap);
> +err_memremap:
> +	kfree(pgmap);
> +err_pgmap:
> +	if (tmp_res) {
> +		release_resource(tmp_res);
> +		kfree(tmp_res);
> +	}
> +err_insert:
> +	release_resource(res);
> +err_resource:
> +	kfree(res);
> +	return ret;
> +}
> +
> +/**
> + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> +		struct page **pages)
> +{
> +	void *vaddr;
> +	bool filled = false;
> +	unsigned int i;
> +	int ret;


It might also be better if xen_alloc_unpopulated_pages and
xen_alloc_unpopulated_dma_pages shared a common implementation,
something along these lines:

int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
{
    return _xen_alloc_unpopulated_pages(nr_pages, pages, false);
}

int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
		struct page **pages)
{
    return _xen_alloc_unpopulated_pages(nr_pages, pages, true);
}

static int _xen_alloc_unpopulated_pages(unsigned int nr_pages,
        struct page **pages, bool contiguous)
{
	unsigned int i;
	void *vaddr;
	int ret = 0;

    if (contiguous && !xen_feature(XENFEAT_auto_translated_physmap))
        return -EINVAL;

	/*
	 * Fallback to default behavior if we do not have any suitable resource
	 * to allocate required region from and as the result we won't be able to
	 * construct pages.
	 */
	if (!target_resource) {
        if (contiguous)
            return -EINVAL;
		return xen_alloc_ballooned_pages(nr_pages, pages);
    }

	mutex_lock(&list_lock);
	if (list_count < nr_pages) {
		ret = fill_list(nr_pages - list_count);
		if (ret)
			goto out;
	}

    if (contiguous) {
        vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE);

        for (i = 0; i < nr_pages; i++)
            pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
    } else {
        for (i = 0; i < nr_pages; i++) {
            struct page *pg = page_list;

            BUG_ON(!pg);
            page_list = pg->zone_device_data;
            list_count--;
            pages[i] = pg;

    #ifdef CONFIG_XEN_HAVE_PVMMU
            if (!xen_feature(XENFEAT_auto_translated_physmap)) {
                ret = xen_alloc_p2m_entry(page_to_pfn(pg));
                if (ret < 0) {
                    unsigned int j;

                    for (j = 0; j <= i; j++) {
                        pages[j]->zone_device_data = page_list;
                        page_list = pages[j];
                        list_count++;
                    }
                    goto out;
                }
            }
    #endif
        }
    }

out:
	mutex_unlock(&list_lock);
	return ret;
}

	

> +	if (!dma_pool)
> +		return -ENODEV;
> +
> +	/* XXX Handle devices which support 64-bit DMA address only for now */
> +	if (dma_get_mask(dev) != DMA_BIT_MASK(64))
> +		return -EINVAL;
> +
> +	while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
> +		if (filled)
> +			return -ENOMEM;
> +		else {
> +			ret = fill_dma_pool(nr_pages);
> +			if (ret)
> +				return ret;
> +
> +			filled = true;
> +		}
> +	}
> +
> +	for (i = 0; i < nr_pages; i++)
> +		pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
> +
> +/**
> + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> +		struct page **pages)
> +{
> +	void *vaddr;
> +
> +	if (!dma_pool)
> +		return;
> +
> +	vaddr = page_to_virt(pages[0]);
> +
> +	gen_pool_free(dma_pool, (unsigned long)vaddr, nr_pages * PAGE_SIZE);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
> +
>  static int __init unpopulated_init(void)
>  {
>  	int ret;
> @@ -241,8 +399,17 @@ static int __init unpopulated_init(void)
>  	if (ret) {
>  		pr_err("xen:unpopulated: Cannot initialize target resource\n");
>  		target_resource = NULL;
> +		return ret;
>  	}
>  
> +	dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> +	if (!dma_pool) {
> +		pr_err("xen:unpopulated: Cannot create DMA pool\n");
> +		return -ENOMEM;
> +	}
> +
> +	gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
> +
>  	return ret;
>  }
>  early_initcall(unpopulated_init);
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a99bab8..a6a7a59 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,9 +52,15 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>  extern u64 xen_saved_max_mem_size;
>  #endif
>  
> +struct device;
> +
>  #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>  int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> +		struct page **pages);
> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> +		struct page **pages);
>  #include <linux/ioport.h>
>  int arch_xen_unpopulated_init(struct resource **res);
>  #else
> @@ -69,6 +75,15 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
>  {
>  	xen_free_ballooned_pages(nr_pages, pages);
>  }
> +static inline int xen_alloc_unpopulated_dma_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +	return -1;
> +}
> +static inline void xen_free_unpopulated_dma_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +}
>  #endif

Given that we have these stubs, maybe we don't need to #ifdef so much
code in the next patch.


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 22:16:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 22:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341662.566888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFaj-00007i-8L; Fri, 03 Jun 2022 22:16:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341662.566888; Fri, 03 Jun 2022 22:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFaj-00007a-5K; Fri, 03 Jun 2022 22:16:25 +0000
Received: by outflank-mailman (input) for mailman id 341662;
 Fri, 03 Jun 2022 22:16:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxFai-00007Q-5F; Fri, 03 Jun 2022 22:16:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxFai-0005tU-3K; Fri, 03 Jun 2022 22:16:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxFah-0007QS-O2; Fri, 03 Jun 2022 22:16:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxFah-0007Bd-NM; Fri, 03 Jun 2022 22:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sJNJ3NCh/upyx/Fsudxo4IU7QOnCkxYvjwd7pbZVm/c=; b=IkYVUdYGUya/m9fYOIEHwkUaPS
	y2HR7yTOK4gQHV5pWwKsC46cwaY+0w3heiCCvotH204n0tiPL7SRa4+Q6gRaCn+LSl3j1r0dnqKjl
	S8r7KeSschrTfx75q2xkegRz/Umm/zY/9m9Bwm1Q5bvDMoKeZ50gdOdKkzQwWta1/o5U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170820-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170820: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=70e975203f366f2f30daaeb714bb852562b7b72f
X-Osstest-Versions-That:
    qemuu=c9641eb422905cc0804a7e310269abf09543cce8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jun 2022 22:16:23 +0000

flight 170820 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170820/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170812
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170812
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170812
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 170812
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170812
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170812
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170812
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170812
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170812
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                70e975203f366f2f30daaeb714bb852562b7b72f
baseline version:
 qemuu                c9641eb422905cc0804a7e310269abf09543cce8

Last test of basis   170812  2022-06-03 00:39:54 Z    0 days
Testing same since   170820  2022-06-03 15:38:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Cornelia Huck <cohuck@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Gautam Agrawal <gautamnagrawal@gmail.com>
  Gonglei <arei.gonglei@huawei.com>
  Hailiang Zhang <zhanghailiang@xfusion.com>
  Janis Schoetterl-Glausch <scgl@linux.ibm.com>
  Miaoqian Lin <linmq006@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Wenchao Wang <wenchao.wang@intel.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c9641eb422..70e975203f  70e975203f366f2f30daaeb714bb852562b7b72f -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 22:27:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 22:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341673.566900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFld-00028o-GF; Fri, 03 Jun 2022 22:27:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341673.566900; Fri, 03 Jun 2022 22:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFld-00028g-Av; Fri, 03 Jun 2022 22:27:41 +0000
Received: by outflank-mailman (input) for mailman id 341673;
 Fri, 03 Jun 2022 22:27:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nxFlb-00028Q-LU
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 22:27:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ebd7c31-e38c-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 00:27:37 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2DFEA61B33;
 Fri,  3 Jun 2022 22:27:36 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 155D0C385B8;
 Fri,  3 Jun 2022 22:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ebd7c31-e38c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654295255;
	bh=4gpuc7+BG1J+YAalEOT+1m4zN0TRUyiuQmY0ekOxsQg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LuJjPqJyh7NxG7/rpbkb8YCAnak2zly9u/kEFH5h/2xdudWmXxwiiXSoqwzbSVbCw
	 tFRwY9KeJesHUGkV4xrBuQziVcWGAlUQ4jINu35jMCVWS5qzvY2tCsmqJuZJFwIatF
	 r/oln//BlcTJrWhMic2C9H3OTTKypwskR08otaELCY4Wj5aoDZY0Mbpp8ZbtSYHVRh
	 723lqMKE+DfX1abOINhefuDWgVZjFyjt/vEp/C5TuEb4dLN5IbNu2AJjvyT9zQV8sD
	 6Q+HZi81aspT9pqODJo6i09q6UbJPZuGf063GLdglmHb9CKTbaNnRoKnBTiNYLsxwL
	 F5Y82bN/sjXrA==
Date: Fri, 3 Jun 2022 15:27:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>, Hongda Deng <Hongda.Heng@arm.com>
Subject: Re: [PATCH 01/16] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <20220520120937.28925-2-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031526280.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-2-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment, xen_pt_update_entry() only supports mappings at level 3
> (i.e. 4KB mappings). While this is fine for most of the runtime helpers,
> the boot code will require the use of superpage mappings.
> 
> We don't want to allow superpage mappings by default as some of the
> callers may expect small mappings (e.g. populate_pt_range()) or may
> expect to unmap only a part of a superpage.
> 
> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
> allow the caller to enable superpage mapping.
> 
> As the code doesn't support all the combinations, xen_pt_check_entry()
> is extended to take into account the cases we don't support when
> using block mapping:
>     - Replacing a table with a mapping. This may happen if a region was
>     first mapped with 4KB mappings and later replaced with a 2MB
>     (or 1GB) mapping.
>     - Removing/modifying a table. This may happen if a caller tries to
>     remove a region with _PAGE_BLOCK set when it was created without it.
> 
> Note that the current restriction means that the caller must ensure that
> _PAGE_BLOCK is consistently set/cleared across all the updates on a
> given virtual region. This ought to be fine with the expected use-cases.
> 
> More rework would be necessary if we wanted to remove these restrictions.
> 
> Note that nr_mfns is now marked const as it is used for flushing the
> TLBs and we don't want it to be modified.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Hongda Deng <Hongda.Heng@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Add Hongda's reviewed-by
>         - Add a comment why nr_mfns is const
>         - Open-code pfn_to_paddr()
> 
>     Changes in v3:
>         - Fix clash after prefixing the PT macros with XEN_PT_
>         - Fix typoes in the commit message
>         - Support superpage mappings even if nr is not suitably aligned
>         - Move the logic to find the level in a separate function
> 
>     Changes in v2:
>         - Pass the target level rather than the order to
>         xen_pt_update_entry()
>         - Update some comments
>         - Open-code paddr_to_pfn()
>         - Add my AWS signed-off-by
> ---
>  xen/arch/arm/include/asm/page.h |   4 ++
>  xen/arch/arm/mm.c               | 109 ++++++++++++++++++++++++++------
>  2 files changed, 95 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/page.h
> index c6f9fb0d4e0c..07998df47bac 100644
> --- a/xen/arch/arm/include/asm/page.h
> +++ b/xen/arch/arm/include/asm/page.h
> @@ -69,6 +69,7 @@
>   * [3:4] Permission flags
>   * [5]   Page present
>   * [6]   Only populate page tables
> + * [7]   Superpage mappings are allowed
>   */
>  #define PAGE_AI_MASK(x) ((x) & 0x7U)
>  
> @@ -82,6 +83,9 @@
>  #define _PAGE_PRESENT    (1U << 5)
>  #define _PAGE_POPULATE   (1U << 6)
>  
> +#define _PAGE_BLOCK_BIT     7
> +#define _PAGE_BLOCK         (1U << _PAGE_BLOCK_BIT)
> +
>  /*
>   * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not
>   * meant to be used outside of this header.
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 7b1f2f49060d..be2ac302d731 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1078,9 +1078,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>  }
>  
>  /* Sanity check of the entry */
> -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
> +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
> +                               unsigned int flags)
>  {
> -    /* Sanity check when modifying a page. */
> +    /* Sanity check when modifying an entry. */
>      if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
>      {
>          /* We don't allow modifying an invalid entry. */
> @@ -1090,6 +1091,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>  
> +        /* We don't allow modifying a table entry */
> +        if ( !lpae_is_mapping(entry, level) )
> +        {
> +            mm_printk("Modifying a table entry is not allowed.\n");
> +            return false;
> +        }
> +
>          /* We don't allow changing memory attributes. */
>          if ( entry.pt.ai != PAGE_AI_MASK(flags) )
>          {
> @@ -1105,7 +1113,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>      }
> -    /* Sanity check when inserting a page */
> +    /* Sanity check when inserting a mapping */
>      else if ( flags & _PAGE_PRESENT )
>      {
>          /* We should be here with a valid MFN. */
> @@ -1114,18 +1122,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>          /* We don't allow replacing any valid entry. */
>          if ( lpae_is_valid(entry) )
>          {
> -            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> -                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            if ( lpae_is_mapping(entry, level) )
> +                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> +                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            else
> +                mm_printk("Trying to replace a table with a mapping.\n");
>              return false;
>          }
>      }
> -    /* Sanity check when removing a page. */
> +    /* Sanity check when removing a mapping. */
>      else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
>      {
>          /* We should be here with an invalid MFN. */
>          ASSERT(mfn_eq(mfn, INVALID_MFN));
>  
> -        /* We don't allow removing page with contiguous bit set. */
> +        /* We don't allow removing a table */
> +        if ( lpae_is_table(entry, level) )
> +        {
> +            mm_printk("Removing a table is not allowed.\n");
> +            return false;
> +        }
> +
> +        /* We don't allow removing a mapping with contiguous bit set. */
>          if ( entry.pt.contig )
>          {
>              mm_printk("Removing entry with contiguous bit set is not allowed.\n");
> @@ -1143,13 +1161,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>      return true;
>  }
>  
> +/* Update an entry at the level @target. */
>  static int xen_pt_update_entry(mfn_t root, unsigned long virt,
> -                               mfn_t mfn, unsigned int flags)
> +                               mfn_t mfn, unsigned int target,
> +                               unsigned int flags)
>  {
>      int rc;
>      unsigned int level;
> -    /* We only support 4KB mapping (i.e level 3) for now */
> -    unsigned int target = 3;
>      lpae_t *table;
>      /*
>       * The intermediate page tables are read-only when the MFN is not valid
> @@ -1204,7 +1222,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>      entry = table + offsets[level];
>  
>      rc = -EINVAL;
> -    if ( !xen_pt_check_entry(*entry, mfn, flags) )
> +    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
>          goto out;
>  
>      /* If we are only populating page-table, then we are done. */
> @@ -1222,8 +1240,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>          {
>              pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
>  
> -            /* Third level entries set pte.pt.table = 1 */
> -            pte.pt.table = 1;
> +            /*
> +             * First and second level pages set pte.pt.table = 0, but
> +             * third level entries set pte.pt.table = 1.
> +             */
> +            pte.pt.table = (level == 3);
>          }
>          else /* We are updating the permission => Copy the current pte. */
>              pte = *entry;
> @@ -1243,15 +1264,57 @@ out:
>      return rc;
>  }
>  
> +/* Return the level where mapping should be done */
> +static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned long nr,
> +                                unsigned int flags)
> +{
> +    unsigned int level;
> +    unsigned long mask;
> +
> +    /*
> +      * Don't take into account the MFN when removing a mapping (i.e.
> +      * INVALID_MFN) to calculate the correct target order.
> +      *
> +      * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned.
> +      * They are or-ed together and then checked against the size of
> +      * each level.
> +      *
> +      * `left` is not included and checked separately to allow
> +      * superpage mapping even if it is not properly aligned (the
> +      * user may have asked to map 2MB + 4k).
> +      */
> +     mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
> +     mask |= vfn;
> +
> +     /*
> +      * Always use level 3 mapping unless the caller requests block
> +      * mapping.
> +      */
> +     if ( likely(!(flags & _PAGE_BLOCK)) )
> +         level = 3;
> +     else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) &&
> +               (nr >= BIT(FIRST_ORDER, UL)) )
> +         level = 1;
> +     else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) &&
> +               (nr >= BIT(SECOND_ORDER, UL)) )
> +         level = 2;
> +     else
> +         level = 3;
> +
> +     return level;
> +}
> +
>  static DEFINE_SPINLOCK(xen_pt_lock);
>  
>  static int xen_pt_update(unsigned long virt,
>                           mfn_t mfn,
> -                         unsigned long nr_mfns,
> +                         /* const on purpose as it is used for TLB flush */
> +                         const unsigned long nr_mfns,
>                           unsigned int flags)
>  {
>      int rc = 0;
> -    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
> +    unsigned long vfn = virt >> PAGE_SHIFT;
> +    unsigned long left = nr_mfns;
>  
>      /*
>       * For arm32, page-tables are different on each CPUs. Yet, they share
> @@ -1283,14 +1346,24 @@ static int xen_pt_update(unsigned long virt,
>  
>      spin_lock(&xen_pt_lock);
>  
> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> +    while ( left )
>      {
> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> +        unsigned int order, level;
> +
> +        level = xen_pt_mapping_level(vfn, mfn, left, flags);
> +        order = XEN_PT_LEVEL_ORDER(level);
> +
> +        ASSERT(left >= BIT(order, UL));
> +
> +        rc = xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level, flags);
>          if ( rc )
>              break;
>  
> +        vfn += 1U << order;
>          if ( !mfn_eq(mfn, INVALID_MFN) )
> -            mfn = mfn_add(mfn, 1);
> +            mfn = mfn_add(mfn, 1U << order);
> +
> +        left -= (1U << order);
>      }
>  
>      /*
> -- 
> 2.32.0
> 
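[Editorial aside: the level-selection rule in xen_pt_mapping_level() above can be illustrated stand-alone. The sketch below is a hypothetical re-implementation, not the Xen code; the FIRST_ORDER/SECOND_ORDER values are assumptions based on the usual 4KB-granule LPAE layout, where a level 1 block covers 2^18 pages (1GB) and a level 2 block covers 2^9 pages (2MB).]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stand-alone sketch of the level-selection logic in
 * xen_pt_mapping_level() from the patch above (illustrative only).
 * Assumed 4KB-granule LPAE orders: level 1 = 2^18 pages (1GB),
 * level 2 = 2^9 pages (2MB), level 3 = a single 4KB page.
 */
#define FIRST_ORDER  18UL
#define SECOND_ORDER 9UL

static unsigned int mapping_level(unsigned long vfn, unsigned long mfn,
                                  unsigned long nr, bool block_allowed)
{
    /*
     * Both frame numbers must be aligned to the block size; or-ing them
     * lets a single mask check cover both. nr is checked separately so
     * that, e.g., a 2MB + 4KB request still gets a 2MB block first and
     * the remainder is handled on the next loop iteration.
     */
    unsigned long mask = vfn | mfn;

    if (!block_allowed)                 /* caller did not pass _PAGE_BLOCK */
        return 3;
    if (!(mask & ((1UL << FIRST_ORDER) - 1)) && nr >= (1UL << FIRST_ORDER))
        return 1;                       /* 1GB-aligned and big enough */
    if (!(mask & ((1UL << SECOND_ORDER) - 1)) && nr >= (1UL << SECOND_ORDER))
        return 2;                       /* 2MB-aligned and big enough */
    return 3;                           /* fall back to 4KB pages */
}
```

One point worth seeing in the numbers: misaligning either frame number by a single page forces level 3, regardless of how large the request is.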


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 22:30:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 22:30:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341681.566910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFoD-0003cc-Qr; Fri, 03 Jun 2022 22:30:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341681.566910; Fri, 03 Jun 2022 22:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFoD-0003cU-Ny; Fri, 03 Jun 2022 22:30:21 +0000
Received: by outflank-mailman (input) for mailman id 341681;
 Fri, 03 Jun 2022 22:30:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nxFoB-0003cF-Ls
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 22:30:19 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bf2993b1-e38c-11ec-837f-e5687231ffcc;
 Sat, 04 Jun 2022 00:30:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id CED42B824CB;
 Fri,  3 Jun 2022 22:30:17 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4965FC385A9;
 Fri,  3 Jun 2022 22:30:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf2993b1-e38c-11ec-837f-e5687231ffcc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654295416;
	bh=atfnncD2H3LRRgyTYv2tMls9s3FmBiZ/DMqBNAVMSfU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KMdZl20tzhNW61FQFgpU8NwKjAtHI8PHrI+AU1HW95bvB8qr4kOrYzMSxtkXY3Oab
	 UAe0UGfRti6aiRT3hSoDjycL2jrD/hQXd6r/B8fgB4Qf8s67zFSLMEvnkYPHDpV8lx
	 rUldZDLZOn4sp4A1eNiSecYktt9FYpEkDjBLI1bgmdc5NXl3wxRPTyg3AWvFik/P+4
	 3Js54obF8LLCw2MKQBHB2F0bVgz0JWqtHfR70blsJCaIUBACPcQ3hv9JEOZ5IBYGey
	 oqrHXc1+1zdQidmqvmGFdyfE+Q5TOQwxSIXQwXgqPtFB99k9WZXGZPIFAAKtVyX8zH
	 Y6fL93BUE8yWw==
Date: Fri, 3 Jun 2022 15:30:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 03/16] xen/arm: mm: Avoid flushing the TLBs when mapping
 are inserted
In-Reply-To: <20220520120937.28925-4-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031529490.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-4-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the function xen_pt_update() will flush the TLBs even when
> mappings are inserted. This is a bit wasteful because we don't
> allow mapping replacement. Even if we did, the flush would need to
> happen earlier because mapping replacement should use Break-Before-Make
> when updating the entry.
> 
> A single call to xen_pt_update() can perform a single action. IOW, it
> is not possible to, for instance, mix inserting and removing mappings.
> Therefore, we can use `flags` to determine what action is performed.
> 
> This change will be particularly helpful to limit the impact of switching
> the boot-time mappings to use xen_pt_update().
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Switch the check to a different expression that will still
>         result to the same truth table.
> 
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index c4487dd7fc46..747083d820dd 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1119,7 +1119,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
>          /* We should be here with a valid MFN. */
>          ASSERT(!mfn_eq(mfn, INVALID_MFN));
>  
> -        /* We don't allow replacing any valid entry. */
> +        /*
> +         * We don't allow replacing any valid entry.
> +         *
> +         * Note that the function xen_pt_update() relies on this
> +         * assumption and will skip the TLB flush. The function will need
> +         * to be updated if the check is relaxed.
> +         */
>          if ( lpae_is_valid(entry) )
>          {
>              if ( lpae_is_mapping(entry, level) )
> @@ -1434,11 +1440,16 @@ static int xen_pt_update(unsigned long virt,
>      }
>  
>      /*
> -     * Flush the TLBs even in case of failure because we may have
> +     * The TLBs flush can be safely skipped when a mapping is inserted
> +     * as we don't allow mapping replacement (see xen_pt_check_entry()).
> +     *
> +     * For all the other cases, the TLBs will be flushed unconditionally
> +     * even if the mapping has failed. This is because we may have
>       * partially modified the PT. This will prevent any unexpected
>       * behavior afterwards.
>       */
> -    flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
> +    if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
> +        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
>  
>      spin_unlock(&xen_pt_lock);
>  
> -- 
> 2.32.0
> 
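[Editorial aside: the new flush condition reduces to a small predicate whose truth table matches the commit message. The sketch below is illustrative, not the Xen code; the _PAGE_PRESENT value is taken from the bit layout in patch 01, and the Xen-specific MFN type is replaced by a plain boolean for "valid MFN".]

```c
#include <assert.h>
#include <stdbool.h>

/* Bit 5 per the flag layout in asm/page.h (patch 01 above). */
#define _PAGE_PRESENT (1U << 5)

/*
 * Sketch of the condition guarding flush_xen_tlb_range_va() in the
 * patch above: a flush is needed unless we are inserting a brand-new
 * mapping (present flag set AND a valid MFN). Insertions never replace
 * a valid entry (enforced by xen_pt_check_entry()), so no stale TLB
 * entry can exist for them; removals, permission changes and
 * table population still flush.
 */
static bool needs_tlb_flush(unsigned int flags, bool mfn_is_valid)
{
    return !((flags & _PAGE_PRESENT) && mfn_is_valid);
}
```

The "present flag set but invalid MFN" case is a permission update of an existing entry, which is exactly why it must still flush.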


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 22:31:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jun 2022 22:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341689.566920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFph-0004Vi-65; Fri, 03 Jun 2022 22:31:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341689.566920; Fri, 03 Jun 2022 22:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxFph-0004Vb-2r; Fri, 03 Jun 2022 22:31:53 +0000
Received: by outflank-mailman (input) for mailman id 341689;
 Fri, 03 Jun 2022 22:31:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fj01=WK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nxFpf-0004VK-Qu
 for xen-devel@lists.xenproject.org; Fri, 03 Jun 2022 22:31:51 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5c91fbb-e38c-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 00:31:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AEB0661B04;
 Fri,  3 Jun 2022 22:31:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE42BC385A9;
 Fri,  3 Jun 2022 22:31:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5c91fbb-e38c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654295509;
	bh=4YRymr/7MusvD0AOvlJ/NnAqXxFe39jTCG5ekZbC/ZQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HJY+B3ZahBAJsBPMB/o5atjBGKKDkYlbeWKiDzGaWCjrh+Lk75EY+xSuPHLEWoeUi
	 7+byfHdn1EjOKdG9ppHGUHTH7bRbfkYzhDgoPhivZhasLjXE2KdWaa+3G7MbCBo95x
	 OJBwz4SHaZ320z9+VwLXt7ylo+0Z99j1mp3tY9cQ6gRkKKBp9+2IugDKAo6jRJUg/h
	 CPUNBzcIwd3P+puteU890Dzfb2tvR8lJ6ah1Q3wPgf0WH3zVEoBNgj+YqjPkqRM7Ax
	 9boRtZXWD9YzxNhTPAQtAiiBaSWRnFU2YPYie8169hhmMbCaagfJ50FcSPs8Se5JqH
	 jRBsRR/N3wpcg==
Date: Fri, 3 Jun 2022 15:31:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>, Hongda Deng <Hongda.Heng@arm.com>
Subject: Re: [PATCH 04/16] xen/arm: mm: Don't open-code Xen PT update in
 remove_early_mappings()
In-Reply-To: <20220520120937.28925-5-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031531410.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-5-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that xen_pt_update_entry() is able to deal with different mapping
> sizes, we can replace the open-coded page-table update with a call
> to modify_xen_mappings().
> 
> As the function is not meant to fail, a BUG_ON() is added to check the
> return value.
> 
> Note that we don't use destroy_xen_mappings() because the helper doesn't
> allow us to pass flags. In theory we could add an extra parameter to
> the function, however there are no other expected users. Hence
> modify_xen_mappings() is used.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Hongda Deng <Hongda.Heng@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Add Hongda's reviewed-by
>         - Add a comment to explain what modify_xen_mappings() does.
>         - Clarify in the commit message why modify_xen_mappings() is
>           used rather than destroy_xen_mappings().
> 
>     Changes in v2:
>         - Stay consistent with how function names are used in the commit
>         message
>         - Add my AWS signed-off-by
> ---
>  xen/arch/arm/mm.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 747083d820dd..64a79d45b38c 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -614,11 +614,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>  void __init remove_early_mappings(void)
>  {
> -    lpae_t pte = {0};
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
> -              pte);
> -    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
> +    int rc;
> +
> +    /* destroy the _PAGE_BLOCK mapping */
> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
> +                             _PAGE_BLOCK);
> +    BUG_ON(rc);
>  }
>  
>  /*
> -- 
> 2.32.0
> 
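The conversion above boils down to replacing two per-slot PTE writes with one range-based call. A toy model of that equivalence, with invented names and a plain array standing in for Xen's second-level table (not Xen's actual types or constants):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins, for illustration only. */
#define SLOT_SIZE   0x200000UL          /* 2MB second-level slot */
#define FDT_VIRT    0x40000000UL        /* assumed BOOT_FDT_VIRT_START */
#define FDT_SLOTS   2                   /* assumed BOOT_FDT_SLOT_SIZE / SLOT_SIZE */

static uint64_t xen_second[512];        /* toy second-level table */

static unsigned int slot_of(unsigned long va)
{
    return (va / SLOT_SIZE) % 512;
}

/* Open-coded variant: one explicit write per 2MB slot (the old code's shape). */
static void remove_open_coded(void)
{
    xen_second[slot_of(FDT_VIRT)] = 0;
    xen_second[slot_of(FDT_VIRT + SLOT_SIZE)] = 0;
}

/* Range-based variant: the shape of asking a helper such as
 * modify_xen_mappings() to drop a whole [start, end) region. */
static void remove_by_range(unsigned long start, unsigned long end)
{
    for ( unsigned long va = start; va < end; va += SLOT_SIZE )
        xen_second[slot_of(va)] = 0;
}
```

Both variants clear exactly the same slots; the range-based form simply scales to any region size without duplicating the write per slot.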


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 22:34:03 2022
Date: Fri, 3 Jun 2022 15:33:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>, Hongda Deng <Hongda.Heng@arm.com>
Subject: Re: [PATCH 05/16] xen/arm: mm: Re-implement early_fdt_map() using
 map_pages_to_xen()
In-Reply-To: <20220520120937.28925-6-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031533400.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-6-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() calls by map_pages_to_xen() calls.
> 
> The mapping can also be marked read-only as Xen should not modify
> the host Device Tree during boot.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Hongda Deng <Hongda.Heng@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Fix typo in the commit message
>         - Add Hongda's reviewed-by
> 
>     Changes in v2:
>         - Add my AWS signed-off-by
>         - Fix typo in the commit message
> ---
>  xen/arch/arm/mm.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 64a79d45b38c..03f970e4d10b 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -574,6 +574,7 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      paddr_t offset;
>      void *fdt_virt;
>      uint32_t size;
> +    int rc;
>  
>      /*
>       * Check whether the physical FDT address is set and meets the minimum
> @@ -589,8 +590,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      /* The FDT is mapped using 2MB superpage */
>      BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
>  
> -    create_mappings(xen_second, BOOT_FDT_VIRT_START, paddr_to_pfn(base_paddr),
> -                    SZ_2M >> PAGE_SHIFT, SZ_2M);
> +    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
> +                          SZ_2M >> PAGE_SHIFT,
> +                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to map the device-tree.\n");
> +
>  
>      offset = fdt_paddr % SECOND_SIZE;
>      fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
> @@ -604,9 +609,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>      if ( (offset + size) > SZ_2M )
>      {
> -        create_mappings(xen_second, BOOT_FDT_VIRT_START + SZ_2M,
> -                        paddr_to_pfn(base_paddr + SZ_2M),
> -                        SZ_2M >> PAGE_SHIFT, SZ_2M);
> +        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
> +                              maddr_to_mfn(base_paddr + SZ_2M),
> +                              SZ_2M >> PAGE_SHIFT,
> +                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +        if ( rc )
> +            panic("Unable to map the device-tree\n");
>      }
>  
>      return fdt_virt;
> -- 
> 2.32.0
> 
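The address arithmetic in the hunk above can be sketched in isolation: the FDT is reached through a 2MB superpage, so the physical address is aligned down to 2MB and the remainder becomes an offset into the virtual slot; a second superpage is needed only when offset + size crosses the 2MB boundary. A minimal model (names invented, mirroring but not taken from Xen):

```c
#include <assert.h>
#include <stdbool.h>

#define SZ_2M 0x200000UL

/* 2MB-aligned base (the base_paddr passed to the mapping call). */
static unsigned long fdt_base(unsigned long fdt_paddr)
{
    return fdt_paddr & ~(SZ_2M - 1);
}

/* Offset of the FDT within its 2MB slot. */
static unsigned long fdt_offset(unsigned long fdt_paddr)
{
    return fdt_paddr % SZ_2M;
}

/* Does the FDT spill past the first superpage, requiring a second mapping? */
static bool needs_second_superpage(unsigned long fdt_paddr, unsigned long size)
{
    return (fdt_offset(fdt_paddr) + size) > SZ_2M;
}
```

This is why the patch keeps two mapping calls: the first covers the aligned 2MB slot, and the second is made only when the check above fires.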


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 23:09:22 2022
Date: Fri, 3 Jun 2022 16:09:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 13/16] xen/arm32: setup: Move out the code to populate
 the boot allocator
In-Reply-To: <20220520120937.28925-14-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031557080.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-14-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> In a follow-up patch, we will want to populate the boot allocator
> separately for arm64. The code will end up being very similar to the one
> on arm32. So move the code out into a new helper, populate_boot_allocator().
> 
> For now the code is still protected by CONFIG_ARM_32 to avoid any build
> failure on arm64.
> 
> Take the opportunity to replace mfn_add(xen_mfn_start, xenheap_pages) with
> xenheap_mfn_end as they are equivalent.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
>     Changes in v4:
>         - Patch added
> ---
>  xen/arch/arm/setup.c | 90 +++++++++++++++++++++++++-------------------
>  1 file changed, 51 insertions(+), 39 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index d5d0792ed48a..3d5a2283d4ef 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -637,10 +637,58 @@ static void __init init_staticmem_pages(void)
>  }
>  
>  #ifdef CONFIG_ARM_32
> +/*
> + * Populate the boot allocator. All the RAM but the following regions
> + * will be added:
> + *  - Modules (e.g., Xen, Kernel)
> + *  - Reserved regions
> + *  - Xenheap
> + */
> +static void __init populate_boot_allocator(void)
> +{
> +    unsigned int i;
> +    const struct meminfo *banks = &bootinfo.mem;
> +
> +    for ( i = 0; i < banks->nr_banks; i++ )
> +    {
> +        const struct membank *bank = &banks->bank[i];
> +        paddr_t bank_end = bank->start + bank->size;
> +        paddr_t s, e;
> +
> +        s = bank->start;
> +        while ( s < bank_end )
> +        {
> +            paddr_t n = bank_end;
> +
> +            e = next_module(s, &n);
> +
> +            if ( e == ~(paddr_t)0 )
> +                e = n = bank_end;
> +
> +            /*
> +             * Module in a RAM bank other than the one which we are
> +             * not dealing with here.
> +             */
> +            if ( e > bank_end )
> +                e = bank_end;
> +
> +            /* Avoid the xenheap */
> +            if ( s < mfn_to_maddr(xenheap_mfn_end) &&
> +                 mfn_to_maddr(xenheap_mfn_start) < e )
> +            {
> +                e = mfn_to_maddr(xenheap_mfn_start);
> +                n = mfn_to_maddr(xenheap_mfn_end);
> +            }
> +
> +            fw_unreserved_regions(s, e, init_boot_pages, 0);
> +            s = n;
> +        }
> +    }
> +}
> +
>  static void __init setup_mm(void)
>  {
> -    paddr_t ram_start, ram_end, ram_size;
> -    paddr_t s, e;
> +    paddr_t ram_start, ram_end, ram_size, e;
>      unsigned long ram_pages;
>      unsigned long heap_pages, xenheap_pages, domheap_pages;
>      int i;
> @@ -718,43 +766,7 @@ static void __init setup_mm(void)
>      setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
>  
>      /* Add non-xenheap memory */
> -    for ( i = 0; i < bootinfo.mem.nr_banks; i++ )
> -    {
> -        paddr_t bank_start = bootinfo.mem.bank[i].start;
> -        paddr_t bank_end = bank_start + bootinfo.mem.bank[i].size;
> -
> -        s = bank_start;
> -        while ( s < bank_end )
> -        {
> -            paddr_t n = bank_end;
> -
> -            e = next_module(s, &n);
> -
> -            if ( e == ~(paddr_t)0 )
> -            {
> -                e = n = ram_end;
> -            }
> -
> -            /*
> -             * Module in a RAM bank other than the one which we are
> -             * not dealing with here.
> -             */
> -            if ( e > bank_end )
> -                e = bank_end;
> -
> -            /* Avoid the xenheap */
> -            if ( s < mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages))
> -                 && mfn_to_maddr(xenheap_mfn_start) < e )
> -            {
> -                e = mfn_to_maddr(xenheap_mfn_start);
> -                n = mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages));
> -            }
> -
> -            fw_unreserved_regions(s, e, init_boot_pages, 0);
> -
> -            s = n;
> -        }
> -    }
> +    populate_boot_allocator();
>  
>      /* Frame table covers all of RAM region, including holes */
>      setup_frametable_mappings(ram_start, ram_end);
> -- 
> 2.32.0
> 
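The loop being moved above is an interval walk: each RAM bank is scanned, reserved regions (modules, and on arm32 the xenheap) are carved out, and only the remaining free pieces are handed to the allocator. A toy version of the carving step for a single reserved region, with invented names and no Xen types:

```c
#include <assert.h>

struct range { unsigned long s, e; };   /* [s, e) */

/* Subtract one reserved region from a bank, emitting the free pieces.
 * Returns the number of free sub-ranges written to out (0, 1, or 2). */
static unsigned int carve_bank(struct range bank, struct range rsv,
                               struct range *out)
{
    unsigned int n = 0;

    /* Free space before the reserved region, clipped to the bank. */
    if ( rsv.s > bank.s )
        out[n++] = (struct range){ bank.s, rsv.s < bank.e ? rsv.s : bank.e };

    /* Free space after the reserved region, clipped to the bank. */
    if ( rsv.e < bank.e )
        out[n++] = (struct range){ rsv.e > bank.s ? rsv.e : bank.s, bank.e };

    return n;
}
```

The real code drives this with next_module()/fw_unreserved_regions() across all banks; the sketch only shows the geometry of a single subtraction.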


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 23:09:22 2022
Date: Fri, 3 Jun 2022 16:09:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 16/16] xen/arm: mm: Re-implement setup_frame_table_mappings()
 with map_pages_to_xen()
In-Reply-To: <20220520120937.28925-17-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031607510.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-17-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() call by a map_pages_to_xen() call.
> 
> This has the advantage of removing the differences between 32-bit and
> 64-bit code.
> 
> Lastly, remove create_mappings() as there are no more callers.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Add missing _PAGE_BLOCK
> 
>     Changes in v3:
>         - Fix typo in the commit message
>         - Remove the TODO regarding contiguous bit
> 
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 64 +++++------------------------------------------
>  1 file changed, 6 insertions(+), 58 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 65af44f42232..be37176a4725 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -369,40 +369,6 @@ void clear_fixmap(unsigned map)
>      BUG_ON(res != 0);
>  }
>  
> -/* Create Xen's mappings of memory.
> - * Mapping_size must be either 2MB or 32MB.
> - * Base and virt must be mapping_size aligned.
> - * Size must be a multiple of mapping_size.
> - * second must be a contiguous set of second level page tables
> - * covering the region starting at virt_offset. */
> -static void __init create_mappings(lpae_t *second,
> -                                   unsigned long virt_offset,
> -                                   unsigned long base_mfn,
> -                                   unsigned long nr_mfns,
> -                                   unsigned int mapping_size)
> -{
> -    unsigned long i, count;
> -    const unsigned long granularity = mapping_size >> PAGE_SHIFT;
> -    lpae_t pte, *p;
> -
> -    ASSERT((mapping_size == MB(2)) || (mapping_size == MB(32)));
> -    ASSERT(!((virt_offset >> PAGE_SHIFT) % granularity));
> -    ASSERT(!(base_mfn % granularity));
> -    ASSERT(!(nr_mfns % granularity));
> -
> -    count = nr_mfns / XEN_PT_LPAE_ENTRIES;
> -    p = second + second_linear_offset(virt_offset);
> -    pte = mfn_to_xen_entry(_mfn(base_mfn), MT_NORMAL);
> -    if ( granularity == 16 * XEN_PT_LPAE_ENTRIES )
> -        pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
> -    for ( i = 0; i < count; i++ )
> -    {
> -        write_pte(p + i, pte);
> -        pte.pt.base += 1 << XEN_PT_LPAE_SHIFT;
> -    }
> -    flush_xen_tlb_local();
> -}
> -
>  #ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(mfn_t mfn)
>  {
> @@ -862,36 +828,18 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
>      mfn_t base_mfn;
>      const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
> -#ifdef CONFIG_ARM_64
> -    lpae_t *second, pte;
> -    unsigned long nr_second;
> -    mfn_t second_base;
> -    int i;
> -#endif
> +    int rc;
>  
>      frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>      /* Round up to 2M or 32M boundary, as appropriate. */
>      frametable_size = ROUNDUP(frametable_size, mapping_size);
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
>  
> -#ifdef CONFIG_ARM_64
> -    /* Compute the number of second level pages. */
> -    nr_second = ROUNDUP(frametable_size, FIRST_SIZE) >> FIRST_SHIFT;
> -    second_base = alloc_boot_pages(nr_second, 1);
> -    second = mfn_to_virt(second_base);
> -    for ( i = 0; i < nr_second; i++ )
> -    {
> -        clear_page(mfn_to_virt(mfn_add(second_base, i)));
> -        pte = mfn_to_xen_entry(mfn_add(second_base, i), MT_NORMAL);
> -        pte.pt.table = 1;
> -        write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> -    }
> -    create_mappings(second, 0, mfn_x(base_mfn), frametable_size >> PAGE_SHIFT,
> -                    mapping_size);
> -#else
> -    create_mappings(xen_second, FRAMETABLE_VIRT_START, mfn_x(base_mfn),
> -                    frametable_size >> PAGE_SHIFT, mapping_size);
> -#endif
> +    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
> +                          frametable_size >> PAGE_SHIFT,
> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to setup the frametable mappings.\n");
>  
>      memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
>      memset(&frame_table[nr_pdxs], -1,
> -- 
> 2.32.0
> 
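The sizing logic retained by the patch above is worth spelling out: small frame tables use 2MB mappings, large ones 32MB, and the allocation is rounded up to that granularity so every mapping is a whole superpage. A minimal sketch of just that arithmetic (macros re-derived here, not copied from Xen):

```c
#include <assert.h>

#define MB(x)         ((unsigned long)(x) << 20)
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Size actually allocated for a frame table of the given raw size:
 * pick the mapping granularity, then round up to it. */
static unsigned long frametable_alloc_size(unsigned long frametable_size)
{
    unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);

    return ROUNDUP(frametable_size, mapping_size);
}
```

With _PAGE_BLOCK passed to map_pages_to_xen(), this rounding is what lets the whole table be covered by superpage mappings rather than 4KB pages.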


From xen-devel-bounces@lists.xenproject.org Fri Jun 03 23:09:22 2022
Date: Fri, 3 Jun 2022 16:09:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 14/16] xen/arm64: mm: Add memory to the boot allocator
 first
In-Reply-To: <20220520120937.28925-15-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206031606070.2783803@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-15-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, memory is added to the boot allocator after the xenheap
> mappings are done. This will break if the first mapping is more than
> 512GB of RAM.
> 
> In addition to that, a follow-up patch will rework setup_xenheap_mappings()
> to use smaller mappings (e.g. 2MB, 4KB). So it will be necessary to have
> memory in the boot allocator earlier.
> 
> Only free memory (e.g. not reserved or modules) can be added to the boot
> allocator. It might be possible that some regions (including the first
> one) will have no free memory.
> 
> So we need to add all the free memory to the boot allocator first
> and then do the mappings.
> 
> Populating the boot allocator is nearly the same between arm32 and
> arm64. The only difference is that on the former we need to exclude the
> xenheap from the boot allocator. Gate the difference with CONFIG_ARM_32
> so the code can be re-used on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
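The 512GB figure in the commit message falls out of the LPAE translation geometry: with a 4KB granule and 512 entries per table, each lookup level multiplies coverage by 512, so one first-level table spans 512 x 1GB = 512GB. A small sketch of that arithmetic (constants restated here for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12   /* 4KB granule */
#define LPAE_SHIFT 9    /* 512 entries per table */

/* Bytes covered by one entry at a given lookup level (3 = page level).
 * Level 3 -> 4KB, level 2 -> 2MB, level 1 -> 1GB, level 0 -> 512GB. */
static uint64_t level_coverage(unsigned int level)
{
    return 1ULL << (PAGE_SHIFT + LPAE_SHIFT * (3 - level));
}
```

Mapping more than 512GB therefore needs entries beyond a single first-level table, which is why populating the boot allocator before the xenheap mappings matters here.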


> ---
>     Changes in v4:
>         - The implementation of populate_boot_allocator() has been
>           moved in a separate patch.
>         - Fix typo
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/setup.c | 55 +++++++++++++++++++-------------------------
>  1 file changed, 24 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 3d5a2283d4ef..db1768c03f03 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -636,13 +636,12 @@ static void __init init_staticmem_pages(void)
>  #endif
>  }
>  
> -#ifdef CONFIG_ARM_32
>  /*
>   * Populate the boot allocator. All the RAM but the following regions
>   * will be added:
>   *  - Modules (e.g., Xen, Kernel)
>   *  - Reserved regions
> - *  - Xenheap
> + *  - Xenheap (arm32 only)
>   */
>  static void __init populate_boot_allocator(void)
>  {
> @@ -672,6 +671,7 @@ static void __init populate_boot_allocator(void)
>              if ( e > bank_end )
>                  e = bank_end;
>  
> +#ifdef CONFIG_ARM_32
>              /* Avoid the xenheap */
>              if ( s < mfn_to_maddr(xenheap_mfn_end) &&
>                   mfn_to_maddr(xenheap_mfn_start) < e )
> @@ -679,6 +679,7 @@ static void __init populate_boot_allocator(void)
>                  e = mfn_to_maddr(xenheap_mfn_start);
>                  n = mfn_to_maddr(xenheap_mfn_end);
>              }
> +#endif
>  
>              fw_unreserved_regions(s, e, init_boot_pages, 0);
>              s = n;
> @@ -686,6 +687,7 @@ static void __init populate_boot_allocator(void)
>      }
>  }
>  
> +#ifdef CONFIG_ARM_32
>  static void __init setup_mm(void)
>  {
>      paddr_t ram_start, ram_end, ram_size, e;
> @@ -781,45 +783,36 @@ static void __init setup_mm(void)
>  #else /* CONFIG_ARM_64 */
>  static void __init setup_mm(void)
>  {
> +    const struct meminfo *banks = &bootinfo.mem;
>      paddr_t ram_start = ~0;
>      paddr_t ram_end = 0;
>      paddr_t ram_size = 0;
> -    int bank;
> +    unsigned int i;
>  
>      init_pdx();
>  
> -    total_pages = 0;
> -    for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
> -    {
> -        paddr_t bank_start = bootinfo.mem.bank[bank].start;
> -        paddr_t bank_size = bootinfo.mem.bank[bank].size;
> -        paddr_t bank_end = bank_start + bank_size;
> -        paddr_t s, e;
> -
> -        ram_size = ram_size + bank_size;
> -        ram_start = min(ram_start,bank_start);
> -        ram_end = max(ram_end,bank_end);
> -
> -        setup_xenheap_mappings(bank_start>>PAGE_SHIFT, bank_size>>PAGE_SHIFT);
> -
> -        s = bank_start;
> -        while ( s < bank_end )
> -        {
> -            paddr_t n = bank_end;
> +    /*
> +     * We need some memory to allocate the page-tables used for the xenheap
> +     * mappings. But some regions may contain memory already allocated
> +     * for other uses (e.g. modules, reserved-memory...).
> +     *
> +     * For simplicity, add all the free regions in the boot allocator.
> +     */
> +    populate_boot_allocator();
>  
> -            e = next_module(s, &n);
> +    total_pages = 0;
>  
> -            if ( e == ~(paddr_t)0 )
> -            {
> -                e = n = bank_end;
> -            }
> +    for ( i = 0; i < banks->nr_banks; i++ )
> +    {
> +        const struct membank *bank = &banks->bank[i];
> +        paddr_t bank_end = bank->start + bank->size;
>  
> -            if ( e > bank_end )
> -                e = bank_end;
> +        ram_size = ram_size + bank->size;
> +        ram_start = min(ram_start, bank->start);
> +        ram_end = max(ram_end, bank_end);
>  
> -            fw_unreserved_regions(s, e, init_boot_pages, 0);
> -            s = n;
> -        }
> +        setup_xenheap_mappings(PFN_DOWN(bank->start),
> +                               PFN_DOWN(bank->size));
>      }
>  
>      total_pages += ram_size >> PAGE_SHIFT;
> -- 
> 2.32.0
> 


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 00:31:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 00:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341741.566976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxHhB-0004OC-5N; Sat, 04 Jun 2022 00:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341741.566976; Sat, 04 Jun 2022 00:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxHhB-0004O5-1G; Sat, 04 Jun 2022 00:31:13 +0000
Received: by outflank-mailman (input) for mailman id 341741;
 Sat, 04 Jun 2022 00:31:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxHhA-0004Nv-5z; Sat, 04 Jun 2022 00:31:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxHhA-0000Lk-3u; Sat, 04 Jun 2022 00:31:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxHh9-0002FY-HX; Sat, 04 Jun 2022 00:31:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxHh9-0006Ei-Gz; Sat, 04 Jun 2022 00:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L5YQrrbavU1Ndh4VRX+iUhzZy3alNljlBve6NQc5r1E=; b=upWNEcHJaEyGeFycNvaHe6JcGW
	4pw8lZ2KoD9PJuuf8WGPaN6eXf9PsS9wgb61LkGO47gudavFtbmX2FN+R+zIlTLGb/jAq2CJyJSlK
	DRkYyZlf2MzFPJTFJimRx+lOjtUZo9y/Kul+/pedXraAsevLJdntov8LtAXp/ipYu6FA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170821-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170821: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=50fd82b3a9a9335df5d50c7ddcb81c81d358c4fc
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 00:31:11 +0000

flight 170821 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170821/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 170815
 test-arm64-arm64-libvirt-raw  8 xen-boot                   fail pass in 170815
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 170815
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 170815

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 170815 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 170815 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 170815 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 170815 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 170815 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 170815 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                50fd82b3a9a9335df5d50c7ddcb81c81d358c4fc
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   10 days
Failing since        170716  2022-05-24 11:12:06 Z   10 days   30 attempts
Testing same since   170815  2022-06-03 05:33:34 Z    0 days    2 attempts

------------------------------------------------------------
2072 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 234163 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 01:10:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 01:10:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341752.566987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxIJJ-0007ja-6d; Sat, 04 Jun 2022 01:10:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341752.566987; Sat, 04 Jun 2022 01:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxIJJ-0007jT-3t; Sat, 04 Jun 2022 01:10:37 +0000
Received: by outflank-mailman (input) for mailman id 341752;
 Sat, 04 Jun 2022 01:10:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YCBW=WL=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1nxIJH-0007jN-M0
 for xen-devel@lists.xenproject.org; Sat, 04 Jun 2022 01:10:36 +0000
Received: from sonic301-20.consmr.mail.gq1.yahoo.com
 (sonic301-20.consmr.mail.gq1.yahoo.com [98.137.64.146])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 20ad5394-e3a3-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 03:10:32 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Sat, 4 Jun 2022 01:10:30 +0000
Received: by hermes--canary-production-ne1-799d7bd497-2pzdr (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID 349a39a7e5a8590bc241a8c8d4ccce1e; 
 Sat, 04 Jun 2022 01:10:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ad5394-e3a3-11ec-bd2c-47488cf2e6aa
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <32638cee-de07-aa33-810b-534da4fa08ae@netscape.net>
Date: Fri, 3 Jun 2022 21:10:25 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH] tools/libs/light/libxl_pci.c: explicitly grant access
 to Intel IGD opregion
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz.ref@netscape.net>
 <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz@netscape.net>
 <YkSQIoYhomhNKpYR@perard.uk.xensource.com>
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <YkSQIoYhomhNKpYR@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20225 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 7347

On 3/30/22 1:15 PM, Anthony PERARD wrote:
> Hi Chuck,
>
> On Sun, Mar 13, 2022 at 11:41:37PM -0400, Chuck Zmudzinski wrote:
>> When gfx_passthru is enabled for the Intel IGD, hvmloader maps the IGD
>> opregion to the guest but libxl does not grant the guest permission to
> I'm not reading the same thing when looking at code in hvmloader. It
> seems that hvmloader allocates some memory for the IGD opregion rather
> than mapping it.
>
>
> tools/firmware/hvmloader/pci.c:184
>      if ( vendor_id == 0x8086 )
>      {
>          igd_opregion_pgbase = mem_hole_alloc(IGD_OPREGION_PAGES);
>          /*
>           * Write the OpRegion offset to give the opregion
>           * address to the device model. The device model will trap
>           * and map the OpRegion at the given address.
>           */
>          pci_writel(vga_devfn, PCI_INTEL_OPREGION,
>                     igd_opregion_pgbase << PAGE_SHIFT);
>      }
>
> I think this write would go through QEMU, does it do something with it?
> (I kind of reply to this question at the end of the mail.)
>
> Is this code in hvmloader actually run in your case?
>
>> Currently, because of another bug in Qemu upstream, this crash can
>> only be reproduced using the traditional Qemu device model, and of
> qemu-traditional is a bit old... What is the bug in QEMU? Do you have a
> link to a patch/mail?

I finally found a patch for the other bug in Qemu upstream. The
patch is currently being used in QubesOS, and they first added
it to their version of Qemu way back in 2017:

https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/pull/3/commits/ab2b4c2ad02827a73c52ba561e9a921cc4bb227c

Although this patch is advertised as applying to the device model
running in a Linux stub domain, it is also needed (at least on my
system) with the device model running in Dom0.

Here is the story:

The patch is titled "qemu: fix wrong mask in pci capability registers 
handling"

There is scant information in the commit message about the nature of
the problem, but I discovered the following in my testing:

On my Intel Haswell system configured for PCI passthrough to the
Xen HVM guest, Qemu does indeed set the emulated register
incorrectly: it uses a wrong mask that clears, instead of
preserving, the PCI_STATUS_CAP_LIST bit of the PCI_STATUS register.

This disables the MSI-x capability of two of the three PCI devices
passed through to the Xen HVM guest. The problem only
manifests in a harmful way in a Linux guest, not in a Windows guest.

One possible reason that only Linux guests are affected is that
I discovered in the Xen xl-dmesg verbose logs that Windows and
Linux use different callbacks for interrupts:

(XEN) Dom1 callback via changed to GSI 28
...
(XEN) Dom3 callback via changed to Direct Vector 0xf3

Dom1 is a Windows Xen HVM and Dom3 is a Linux HVM

Apparently the Direct Vector callback that Linux uses requires
MSI or MSI-x capability of the passed-through devices, but the
wrong mask in Qemu disables that capability.

After applying the QubesOS patch to Qemu upstream, the
PCI_STATUS_CAP_LIST bit is set correctly for the guest and
PCI and Intel IGD passthrough works normally because the
Linux guest can make use of the MSI-x capability of the
PCI devices.

The problem was discovered almost five years ago. I don't
know why the fix has not been committed to Qemu
upstream yet.

After this, I was able to determine that libxl only needs to
explicitly grant permission for access to the Intel OpRegion
with the old traditional device model, because the Xen
hypercall to gain permission is included in upstream
Qemu but omitted from the old traditional device model.

So this patch is not needed for users of the upstream device
model who also add the QubesOS patch to fix the
PCI_STATUS_CAP_LIST bit in the PCI_STATUS register.

This all assumes the device model is running in Dom0. The
permission for access to the Intel OpRegion might still be
needed if the upstream device model is running in a stub
domain.

There are other problems in addition to this problem of access
to the Intel OpRegion that are discussed here:

https://www.qubes-os.org/news/2017/10/18/msi-support/

As old as that post is, the feature of allowing PCI and VGA
passthrough to HVM domains is still not well supported,
especially for the case when the device model runs in a
stub domain.

Since my proposed patch only applies to the very insecure
case of the old traditional device model running in Dom0,
I will not pursue it further.

I will look for this feature in future versions of Xen. Currently,
Xen 4.16 advertises support for Linux-based stub domains
as "Tech Preview," so future versions of Xen might handle
this problem in libxl or perhaps in some other way. Hopefully
the patch to Qemu that fixes the PCI capabilities mask
can also be committed to Qemu upstream soon, so that this
feature of Intel IGD passthrough can at least work with
Linux guests and the upstream Qemu running in Dom0.

Regards,

Chuck

>
>> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
>> index 4bbbfe9f16..a4fc473de9 100644
>> --- a/tools/libs/light/libxl_pci.c
>> +++ b/tools/libs/light/libxl_pci.c
>> @@ -2531,6 +2572,37 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
>>                     domid, vga_iomem_start, (vga_iomem_start + 0x20 - 1));
>>               return ret;
>>           }
>> +
>> +        /* If this is an Intel IGD, allow access to the IGD opregion */
>> +        if (!libxl__is_igd_vga_passthru(gc, d_config)) return 0;
>> +
>> +        uint32_t igd_opregion = sysfs_dev_get_igd_opregion(gc, pci);
>> +        uint32_t error = 0xffffffff;
>> +        if (igd_opregion == error) break;
>> +
>> +        vga_iomem_start = ( (uint64_t) igd_opregion ) >> XC_PAGE_SHIFT;
>> +        ret = xc_domain_iomem_permission(CTX->xch, stubdom_domid,
>> +                                         vga_iomem_start,
>> +                                         IGD_OPREGION_PAGES, 1);
>> +        if (ret < 0) {
>> +            LOGED(ERROR, domid,
>> +                  "failed to give stubdom%d access to iomem range "
>> +                  "%"PRIx64"-%"PRIx64" for IGD passthru",
>> +                  stubdom_domid, vga_iomem_start, (vga_iomem_start +
>> +                                                IGD_OPREGION_PAGES - 1));
>> +            return ret;
>> +        }
>> +        ret = xc_domain_iomem_permission(CTX->xch, domid,
>> +                                         vga_iomem_start,
>> +                                         IGD_OPREGION_PAGES, 1);
> Here, you seem to add permission to an address that is read from the
> pci config space of the device, but as I've pointed out above, hvmloader
> seems to overwrite this address. Does this call to
> xc_domain_iomem_permission() actually do anything useful?
> Or is it by luck that the address returned by
> sysfs_dev_get_igd_opregion() happened to be the address that hvmloader
> is going to write?
>
> Or maybe hvmloader doesn't actually do anything?
>
>
> Some more thoughts on that: looking at QEMU, it seems that there's already
> a call to xc_domain_iomem_permission(), in igd_write_opregion(). So
> adding one in libxl would seem redundant, or maybe it's the one for the
> device model's domain that's needed (named 'stubdom_domid' here)?
>
> Thanks,
>



From xen-devel-bounces@lists.xenproject.org Sat Jun 04 05:56:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 05:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341773.566998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxMlh-0006lx-Ed; Sat, 04 Jun 2022 05:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341773.566998; Sat, 04 Jun 2022 05:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxMlh-0006lq-B4; Sat, 04 Jun 2022 05:56:13 +0000
Received: by outflank-mailman (input) for mailman id 341773;
 Sat, 04 Jun 2022 05:56:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mc85=WL=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nxMlf-0006lk-Ml
 for xen-devel@lists.xenproject.org; Sat, 04 Jun 2022 05:56:11 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 081f2ab6-e3cb-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 07:56:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 307C61F8DE;
 Sat,  4 Jun 2022 05:56:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0790813A5F;
 Sat,  4 Jun 2022 05:56:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 035nAPnzmmKgLwAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 04 Jun 2022 05:56:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 081f2ab6-e3cb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654322169; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=vspAA2gcSD1QkHhRhCOdcZ08ie07v+Yx0GVIhaTtDrI=;
	b=JWiouut0d2cp7hKK+zKFICpWUjpYRhU6Qx3HWCZb43yVunl4EfZZAHMW93i6Jeld75JC2F
	tDt1p7M6eQKEb7BJdquXb5ebIBSrtSBj6jzTnT9GvSr+OCIMbY5hsgcfkx1g4pe/IMKgH9
	t8oHvYA39STnitYCOzrj+QL6mLfzwtE=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v5.19-rc1b
Date: Sat,  4 Jun 2022 07:56:08 +0200
Message-Id: <20220604055608.9037-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19-rc1b-tag

xen: 2nd batch for v5.19-rc1

It contains 2 cleanup patches for Xen related code and (more important)
an update of MAINTAINERS for Xen, as Boris Ostrovsky decided to step
down.

Thanks.

Juergen

 MAINTAINERS                         | 18 ++++++++++++------
 arch/x86/include/asm/xen/page.h     |  3 ---
 drivers/block/xen-blkfront.c        |  6 +++---
 drivers/input/misc/xen-kbdfront.c   |  4 ++--
 drivers/net/xen-netfront.c          |  7 +++----
 drivers/tty/hvc/hvc_xen.c           |  2 +-
 drivers/xen/gntalloc.c              |  9 +++------
 drivers/xen/gntdev-dmabuf.c         |  2 +-
 drivers/xen/grant-table.c           | 14 +++++++-------
 drivers/xen/pvcalls-front.c         |  6 +++---
 drivers/xen/xen-front-pgdir-shbuf.c |  2 +-
 drivers/xen/xenbus/xenbus_client.c  |  2 +-
 drivers/xen/xenbus/xenbus_probe.c   |  8 ++++----
 include/xen/arm/page.h              |  3 ---
 include/xen/grant_table.h           |  6 +++---
 net/9p/trans_xen.c                  |  8 ++++----
 16 files changed, 48 insertions(+), 52 deletions(-)

Boris Ostrovsky (1):
      MAINTAINERS: Update Xen maintainership

Juergen Gross (2):
      xen: switch gnttab_end_foreign_access() to take a struct page pointer
      xen: replace xen_remap() with memremap()


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 09:03:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 09:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341805.567009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxPgk-0003XN-7c; Sat, 04 Jun 2022 09:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341805.567009; Sat, 04 Jun 2022 09:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxPgk-0003XG-4l; Sat, 04 Jun 2022 09:03:18 +0000
Received: by outflank-mailman (input) for mailman id 341805;
 Sat, 04 Jun 2022 09:03:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxPgj-0003WK-HB; Sat, 04 Jun 2022 09:03:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxPgj-0008Eg-Do; Sat, 04 Jun 2022 09:03:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxPgj-0004FM-0J; Sat, 04 Jun 2022 09:03:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxPgi-0004q1-W8; Sat, 04 Jun 2022 09:03:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=k1JxKfBayuhE1/sj/1zFk3QBjUOsP3vgpFnbJtBeFn4=; b=StY84ujM2SMOvV0pupHTNv7Mq6
	ZDcwO8yHl1duFBnp3MCt3MWYvnYTpv0jL0osC7k+PkurmnSoNiHGFTc/KpFOcT1oySyeP+CMqYa8b
	miRKFRvSEve3iX53lDgEyGr2m9NNVEq/ISXeOWvsdvRZcrOOFHKUGkvIJSL/rgbZspwg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170822-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170822: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1f952675835bfe18d6ae494a5581724d68c52352
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 09:03:16 +0000

flight 170822 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170822/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                1f952675835bfe18d6ae494a5581724d68c52352
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   11 days
Failing since        170716  2022-05-24 11:12:06 Z   10 days   31 attempts
Testing same since   170822  2022-06-04 00:43:08 Z    0 days    1 attempts

------------------------------------------------------------
2252 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 263586 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 09:44:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 09:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341825.567020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxQKd-0000F3-H5; Sat, 04 Jun 2022 09:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341825.567020; Sat, 04 Jun 2022 09:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxQKd-0000Er-DX; Sat, 04 Jun 2022 09:44:31 +0000
Received: by outflank-mailman (input) for mailman id 341825;
 Sat, 04 Jun 2022 09:44:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aAYa=WL=tls.msk.ru=mjt@srs-se1.protection.inumbo.net>)
 id 1nxQKc-0000ES-NS
 for xen-devel@lists.xenproject.org; Sat, 04 Jun 2022 09:44:30 +0000
Received: from isrv.corpit.ru (isrv.corpit.ru [86.62.121.231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ed6b9807-e3ea-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 11:44:29 +0200 (CEST)
Received: from tsrv.corpit.ru (tsrv.tls.msk.ru [192.168.177.2])
 by isrv.corpit.ru (Postfix) with ESMTP id 0FE9C4012A;
 Sat,  4 Jun 2022 12:44:28 +0300 (MSK)
Received: from [192.168.177.130] (mjt.wg.tls.msk.ru [192.168.177.130])
 by tsrv.corpit.ru (Postfix) with ESMTP id 6BE352A;
 Sat,  4 Jun 2022 12:44:27 +0300 (MSK)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed6b9807-e3ea-11ec-bd2c-47488cf2e6aa
Message-ID: <0dd3627b-8f4c-93fd-89e7-3c8c3584994b@msgid.tls.msk.ru>
Date: Sat, 4 Jun 2022 12:44:27 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: QEMU Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org
From: Michael Tokarev <mjt@tls.msk.ru>
Subject: q: incorrect register emulation mask for Xen PCI passthrough?
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

There's an open Debian bug report - now reassigned against qemu - https://bugs.debian.org/988333 -
which initially reported that VGA Intel IGD passthrough to Debian Xen HVM DomUs was not
working, while the same passthrough worked okay with Windows DomUs.

The most interesting comment in there is the last one, https://bugs.debian.org/988333#146 ,
which sums the whole issue up and provides a patch for it, at
https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/pull/3/commits/ab2b4c2ad02827a73c52ba561e9a921cc4bb227c
which dates back to 2017 (!).

I wonder whether we should apply this one upstream, or, if it is somehow an incorrect
fix, fix this particular issue the right way instead.  The patch is 5 years old already
and people are still suffering... :)

Thanks,

/mjt


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 11:29:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 11:29:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341848.567031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxRxq-0003JQ-5a; Sat, 04 Jun 2022 11:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341848.567031; Sat, 04 Jun 2022 11:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxRxq-0003JJ-2L; Sat, 04 Jun 2022 11:29:06 +0000
Received: by outflank-mailman (input) for mailman id 341848;
 Sat, 04 Jun 2022 11:29:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxRxo-0003J9-Rt; Sat, 04 Jun 2022 11:29:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxRxo-0002PB-NK; Sat, 04 Jun 2022 11:29:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxRxo-0005Qk-4g; Sat, 04 Jun 2022 11:29:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxRxo-0006wL-4G; Sat, 04 Jun 2022 11:29:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fagTM8Z+uXt9LILrXqPz2ZkUkiWoFO1km4b8//+0aV0=; b=LJdr24h2cBuBC0WcSkpnfEgi9k
	RBmY8WAiYXhD3DFpA92gRauUnaV5bBCedoPRpEMcaN9UexfnI9Z+REtZfTCxx73hCTYYonyAZc0uo
	h+hpj/x3Uk498Hd9QpvlYeLWkYMXBKqLGY8izdnUweDxnSjdXhCIDL7mFYv2dkZTn/Ds=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170825-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170825: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a939d4d86919a1f9ffcdc053a852422f9184a00d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 11:29:04 +0000

flight 170825 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170825/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a939d4d86919a1f9ffcdc053a852422f9184a00d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  694 days
Failing since        151818  2020-07-11 04:18:52 Z  693 days  675 attempts
Testing same since   170825  2022-06-04 04:20:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111348 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 12:30:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 12:30:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341860.567042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxSvM-0002gI-UB; Sat, 04 Jun 2022 12:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341860.567042; Sat, 04 Jun 2022 12:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxSvM-0002gB-Pm; Sat, 04 Jun 2022 12:30:36 +0000
Received: by outflank-mailman (input) for mailman id 341860;
 Sat, 04 Jun 2022 12:30:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxSvL-0002g1-Vv; Sat, 04 Jun 2022 12:30:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxSvL-0003R1-Ss; Sat, 04 Jun 2022 12:30:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxSvL-0007EG-CM; Sat, 04 Jun 2022 12:30:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxSvL-0002Cq-Bv; Sat, 04 Jun 2022 12:30:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sa2ORyskqQ2mKDYjkfhxJ5BAiIjOQv4Xw94xqlQJjgQ=; b=hKTNG7TXsbghV7TqvDBbVwtgfo
	PdhLndDzcvvAq+/i0D/+1P80k/piyGjs3/lO70XLZSqBN/XpY+BHpgKGNLIjDoHetpyiiA2+WEirS
	TbkJWBiwMEFTCXEr3mJ7rgG0SUbN6VtJ3tsaAb75Bc8NmfcQFyq1tg5qjVbd/X8o65N4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170823-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170823: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 12:30:35 +0000

flight 170823 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170823/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 170813

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 170813 like 170806
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170813
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170813
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170813
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170813
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170813
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170813
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170813
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170813
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170813
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170813
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170813
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170813
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170823  2022-06-04 01:53:21 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 04 13:13:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 13:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341871.567053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxTb6-0007kZ-7e; Sat, 04 Jun 2022 13:13:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341871.567053; Sat, 04 Jun 2022 13:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxTb6-0007kS-42; Sat, 04 Jun 2022 13:13:44 +0000
Received: by outflank-mailman (input) for mailman id 341871;
 Sat, 04 Jun 2022 13:13:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxTb5-0007kI-Ki; Sat, 04 Jun 2022 13:13:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxTb5-00048U-HK; Sat, 04 Jun 2022 13:13:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxTb5-00005R-67; Sat, 04 Jun 2022 13:13:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxTb5-0007R1-5f; Sat, 04 Jun 2022 13:13:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PP6gM0v9+0ADDkENwcfEt9wbe15vt5Z1VgnAm9dfkhg=; b=wJSKS5j5wAesubxcZgzue7i9wM
	430NGGx7NcWqwlguvSo+ZXTr9O7nHOAjancMDVFh+2v5y7/yMIjcMVbGrYnpJ1NIzQppUCuZO4JoT
	SFBlmP1w48q/1CmUSLvh82cN5FMIyMbQJkcRyzwoJ9HexlF/v9SGGuya44hgWHK4d0u0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170824-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170824: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ca127b3fc247517ec7d4dad291f2c0f90602ce5b
X-Osstest-Versions-That:
    qemuu=70e975203f366f2f30daaeb714bb852562b7b72f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 13:13:43 +0000

flight 170824 qemu-mainline real [real]
flight 170827 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170824/
http://logs.test-lab.xenproject.org/osstest/logs/170827/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 170820

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 170827-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170820
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170820
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170820
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170820
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170820
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170820
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170820
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170820
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ca127b3fc247517ec7d4dad291f2c0f90602ce5b
baseline version:
 qemuu                70e975203f366f2f30daaeb714bb852562b7b72f

Last test of basis   170820  2022-06-03 15:38:21 Z    0 days
Testing same since   170824  2022-06-04 03:08:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dmitry Tikhov <d.tihov@yadro.com>
  Dmitry Tikhov <ddtikhov@gmail.com>
  Klaus Jensen <k.jensen@samsung.com>
  Richard Henderson <richard.henderson@linaro.org>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ca127b3fc247517ec7d4dad291f2c0f90602ce5b
Merge: 70e975203f d7fe639cab
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Fri Jun 3 14:14:24 2022 -0700

    Merge tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme into staging
    
    hw/nvme updates
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEUigzqnXi3OaiR2bATeGvMW1PDekFAmKaZmgACgkQTeGvMW1P
    # DenI7wgAxY4QtRlUnufzaZqcoi+affFTKlKm0JYKZm/Ldxt2RtHoWxRZDLLIUp8B
    # 4XAlIGJw7VwrafEtSkx4K6cSyKluMJ9Ax8pNd03sEweXBBfdhNizspPprp+Jm9P9
    # hRcH8kSiBp5B451cORBlgmoHguWeWawe1r66uFLTCbEMtfQQNaxNVsTsgAsOvtwv
    # XsjLVFVKGNDWXGRta+lzu4seNNuzfucsAmKWUjg5HN38rstY7XxfLVMzt8ORcwjk
    # oNmQuy3JiKujdPVhE5PVgNRZkigwoDt3hDA1QTncGTBUoA/CtaB5SK9EhcJ5xJVI
    # EHv99S9LQ8ng5BJC2pUSU32yRkaNOQ==
    # =XTXH
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Fri 03 Jun 2022 12:52:08 PM PDT
    # gpg:                using RSA key 522833AA75E2DCE6A24766C04DE1AF316D4F0DE9
    # gpg: Good signature from "Klaus Jensen <its@irrelevant.dk>" [unknown]
    # gpg:                 aka "Klaus Jensen <k.jensen@samsung.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: DDCA 4D9C 9EF9 31CC 3468  4272 63D5 6FC5 E55D A838
    #      Subkey fingerprint: 5228 33AA 75E2 DCE6 A247  66C0 4DE1 AF31 6D4F 0DE9
    
    * tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme:
      hw/nvme: add new command abort case
      hw/nvme: deprecate the use-intel-id compatibility parameter
      hw/nvme: bump firmware revision
      hw/nvme: do not report null uuid
      hw/nvme: do not auto-generate uuid
      hw/nvme: do not auto-generate eui64
      hw/nvme: enforce common serial per subsystem
      hw/nvme: fix smart aen
      hw/nvme: fix copy cmd for pi enabled namespaces
      hw/nvme: add missing return statement
      hw/nvme: fix narrowing conversion
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit d7fe639cabf778903f6cab23ff58c905c71375ec
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Wed Apr 20 11:20:44 2022 +0300

    hw/nvme: add new command abort case
    
    NVMe command set specification for end-to-end data protection formatted
    namespace states:
    
        o If the Reference Tag Check bit of the PRCHK field is set to ‘1’ and
          the namespace is formatted for Type 3 protection, then the
          controller:
              ▪ should not compare the protection Information Reference Tag
                field to the computed reference tag; and
              ▪ may ignore the ILBRT and EILBRT fields. If a command is
                aborted as a result of the Reference Tag Check bit of the
                PRCHK field being set to ‘1’, then that command should be
                aborted with a status code of Invalid Protection Information,
                but may be aborted with a status code of Invalid Field in
                Command.
    
    Currently, QEMU compares the reftag in the nvme_dif_prchk function
    whenever the Reference Tag Check bit is set in the command. For type 3
    namespaces, however, the caller of nvme_dif_prchk, nvme_dif_check,
    does not increment the reftag for each subsequent logical block. As a
    result, commands covering more than one logical block on type 3
    formatted namespaces with the reftag check bit set always fail with
    End-to-end Reference Tag Check Error. Comply with the spec by handling
    the case of a set Reference Tag Check bit on a type 3 formatted
    namespace.
    
    Fixes: 146f720c5563 ("hw/block/nvme: end-to-end data protection")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 8b1e59a6873662a01379cf052384e5dedefe7447
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Tue Apr 19 13:24:23 2022 +0200

    hw/nvme: deprecate the use-intel-id compatibility parameter
    
    Since version 5.2 commit 6eb7a071292a ("hw/block/nvme: change controller
    pci id"), the emulated NVMe controller has defaulted to a non-Intel PCI
    identifier.
    
    Deprecate the compatibility parameter so we can get rid of it once and
    for all.
    
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit fbba243bc700a4e479331e20544c7f6a41ae87b3
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:36 2022 +0200

    hw/nvme: bump firmware revision
    
    The Linux kernel quirks the QEMU NVMe controller pretty heavily because
    of the namespace identifier mess. Since this is now fixed, bump the
    firmware revision number to allow the quirk to be disabled for this
    revision.
    
    As of now, bump the firmware revision number to be equal to the QEMU
    release version number.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9f2e1acf83c332752f52c39dad390c94ec2ba9f5
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:35 2022 +0200

    hw/nvme: do not report null uuid
    
    Do not report the "null uuid" (all zeros) in the namespace
    identification descriptors.
    
    Reported-by: Luis Chamberlain <mcgrof@kernel.org>
    Reported-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit bd9f371c6f6eeb8e907dfc770876ad8ef4ff85fc
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:34 2022 +0200

    hw/nvme: do not auto-generate uuid
    
    Do not generate a UUID for namespaces by default when one is not
    explicitly specified.
    
    This is technically a breaking change in behavior. However, since the
    UUID changes on every VM launch, it is not spec compliant and of
    little use (the UUID cannot be relied upon anyway), so the behavior
    prior to this patch must be considered buggy.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 36d83272d5e45dff13e988ee0a59f11c58b442ba
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:33 2022 +0200

    hw/nvme: do not auto-generate eui64
    
    We cannot provide auto-generated unique or persistent namespace
    identifiers (EUI64, NGUID, UUID) easily. Since 6.1, namespaces have been
    assigned a generated EUI64 of the form "52:54:00:<namespace counter>".
    This will be unique within a QEMU instance, but not globally.
    
    Revert the automatic assignment and immediately deprecate the
    compatibility parameter. Users can opt in to this with the
    `eui64-default=on` device parameter or set it explicitly with
    `eui64=UINT64`.
    
    Cc: libvir-list@redhat.com
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
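As a concrete illustration, opting back in would look like this on the QEMU command line. This is a hedged sketch: the serial and EUI64 values are made up, and only the `eui64-default` and `eui64` parameters come from the commit message above.

```shell
# keep the pre-change behavior (auto-generated EUI64) for this device
qemu-system-x86_64 ... -device nvme,serial=deadbeef,eui64-default=on

# or pin an explicit, stable EUI64 instead
qemu-system-x86_64 ... -device nvme,serial=deadbeef,eui64=0x5254000017835f99
```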

commit a859eb9f8f64e116671048a43a07d87bc6527a55
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:32 2022 +0200

    hw/nvme: enforce common serial per subsystem
    
    The Identify Controller Serial Number (SN) is the serial number for
    the NVM subsystem and must be the same across all controllers in the
    NVM subsystem.
    
    Enforce this.
    
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9235a72a5df0fae1ede89f02717b597ef91cf6ad
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri May 6 00:21:47 2022 +0200

    hw/nvme: fix smart aen
    
    Pass the right constant to nvme_smart_event(). The NVME_AER* values hold
    the bit position in the SMART byte, not the shifted value that we expect
    it to be in nvme_smart_event().
    
    Fixes: c62720f137df ("hw/block/nvme: trigger async event during injecting smart warning")
    Acked-by: zhenwei pi <pizhenwei@bytedance.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 2e8f952ae7de23b4847937dbbf51f7a1ab10a2af
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Thu Apr 21 13:51:58 2022 +0300

    hw/nvme: fix copy cmd for pi enabled namespaces
    
    The current implementation has a problem in the read part of the copy
    command. Because there is no metadata mangling before the
    nvme_dif_check invocation, a reftag error can be raised for blocks of
    the namespace that have not previously been written to.
    
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 51c453266309166c2737623211c0afc12884cccd
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Fri Apr 15 23:48:32 2022 +0300

    hw/nvme: add missing return statement
    
    Since there is no return after the nvme_dsm_cb invocation, metadata
    associated with a non-zero block range is currently zeroed. This
    behaviour also leads to a segfault, because we schedule iocb->bh
    twice: first when entering nvme_dsm_cb with iocb->idx == iocb->nr,
    and second, due to the missing return, when the call stack unwinds
    through blk_aio_pwrite_zeroes and the subsequent nvme_dsm_cb
    callback.
    
    Fixes: d7d1474fd85d ("hw/nvme: reimplement dsm to allow cancellation")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 1e64facc015e16d8e4efa239feaeda9e4e9aeb04
Author: Dmitry Tikhov <ddtikhov@gmail.com>
Date:   Tue Apr 12 11:59:09 2022 +0300

    hw/nvme: fix narrowing conversion
    
    Since nlbas is of type int, it does not work with large namespace
    sizes. For example, a 9 TB file backing the namespace, with 8 bytes
    of metadata and a 4096-byte lbasz, gives a negative nlbas value,
    which is later promoted to a negative int64_t value and results in a
    negative ns->moff, which breaks the namespace.
    
    Signed-off-by: Dmitry Tikhov <ddtikhov@gmail.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 17:09:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 17:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341924.567076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxXHI-00086e-Ik; Sat, 04 Jun 2022 17:09:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341924.567076; Sat, 04 Jun 2022 17:09:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxXHI-00086X-EG; Sat, 04 Jun 2022 17:09:32 +0000
Received: by outflank-mailman (input) for mailman id 341924;
 Sat, 04 Jun 2022 17:09:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxXHH-00086K-6D; Sat, 04 Jun 2022 17:09:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxXHH-0000g0-35; Sat, 04 Jun 2022 17:09:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxXHG-0001wO-Dv; Sat, 04 Jun 2022 17:09:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxXHG-0002Fw-DU; Sat, 04 Jun 2022 17:09:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N57QNdg7zwko784ifa4nr9bHmDwB0Ra+Pk80yRuZWXs=; b=Y/MjRPNS+wrB8VvOBNPuPemCEG
	bdVXn7cHo1g7c5oJil2OMPKLLnF5NuOv/m5liswmdLVZON7yTyJpjqiehh5A9IXtz0N7H/S57e0Hh
	NReSjQcsF3v/6c9TB6mZuDuHjFTCwuyNLmRtbyaoKypFuPV3O1XslMCJWUFh3BGP/lTc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170826-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170826: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=032dcf09e2bf7c822be25b4abef7a6c913870d98
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 17:09:30 +0000

flight 170826 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170826/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                032dcf09e2bf7c822be25b4abef7a6c913870d98
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   11 days
Failing since        170716  2022-05-24 11:12:06 Z   11 days   32 attempts
Testing same since   170826  2022-06-04 09:06:06 Z    0 days    1 attempts

------------------------------------------------------------
2254 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 263710 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 20:04:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 20:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341950.567086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxa0p-0002A4-6z; Sat, 04 Jun 2022 20:04:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341950.567086; Sat, 04 Jun 2022 20:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxa0p-00029x-4C; Sat, 04 Jun 2022 20:04:43 +0000
Received: by outflank-mailman (input) for mailman id 341950;
 Sat, 04 Jun 2022 20:04:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxa0o-00029n-94; Sat, 04 Jun 2022 20:04:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxa0o-0003lg-6G; Sat, 04 Jun 2022 20:04:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxa0n-0000Ba-Jm; Sat, 04 Jun 2022 20:04:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxa0n-0005Is-J8; Sat, 04 Jun 2022 20:04:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tbxbNiHCbiUh/UYRj38liXNZfNSiQggixrTjav3AeW4=; b=ouPVc2ZgXEIqOx7ER8jdTeVDsO
	v3DQ2Fom0YeANzJ8RAtGtgnzOsCljpEwgi03m3RVsPE7q/7ao+ub4tkW+vIVLgVAKU4VBiTAnILzq
	DWyUy7GJYuGas+TPo4Jg/szGAiOckYLd6uUwywP7Vtzcg9LAqrx7ZJNGrc6XSq9AvwAE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170829-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170829: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ca127b3fc247517ec7d4dad291f2c0f90602ce5b
X-Osstest-Versions-That:
    qemuu=70e975203f366f2f30daaeb714bb852562b7b72f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jun 2022 20:04:41 +0000

flight 170829 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170829/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 170820

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 170824 pass in 170829
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 170824
 test-armhf-armhf-xl-vhd      18 guest-start.2              fail pass in 170824

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 170824 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 170824 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170820
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170820
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170820
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170820
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170820
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170820
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170820
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170820
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ca127b3fc247517ec7d4dad291f2c0f90602ce5b
baseline version:
 qemuu                70e975203f366f2f30daaeb714bb852562b7b72f

Last test of basis   170820  2022-06-03 15:38:21 Z    1 days
Testing same since   170824  2022-06-04 03:08:51 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dmitry Tikhov <d.tihov@yadro.com>
  Dmitry Tikhov <ddtikhov@gmail.com>
  Klaus Jensen <k.jensen@samsung.com>
  Richard Henderson <richard.henderson@linaro.org>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ca127b3fc247517ec7d4dad291f2c0f90602ce5b
Merge: 70e975203f d7fe639cab
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Fri Jun 3 14:14:24 2022 -0700

    Merge tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme into staging
    
    hw/nvme updates
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEUigzqnXi3OaiR2bATeGvMW1PDekFAmKaZmgACgkQTeGvMW1P
    # DenI7wgAxY4QtRlUnufzaZqcoi+affFTKlKm0JYKZm/Ldxt2RtHoWxRZDLLIUp8B
    # 4XAlIGJw7VwrafEtSkx4K6cSyKluMJ9Ax8pNd03sEweXBBfdhNizspPprp+Jm9P9
    # hRcH8kSiBp5B451cORBlgmoHguWeWawe1r66uFLTCbEMtfQQNaxNVsTsgAsOvtwv
    # XsjLVFVKGNDWXGRta+lzu4seNNuzfucsAmKWUjg5HN38rstY7XxfLVMzt8ORcwjk
    # oNmQuy3JiKujdPVhE5PVgNRZkigwoDt3hDA1QTncGTBUoA/CtaB5SK9EhcJ5xJVI
    # EHv99S9LQ8ng5BJC2pUSU32yRkaNOQ==
    # =XTXH
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Fri 03 Jun 2022 12:52:08 PM PDT
    # gpg:                using RSA key 522833AA75E2DCE6A24766C04DE1AF316D4F0DE9
    # gpg: Good signature from "Klaus Jensen <its@irrelevant.dk>" [unknown]
    # gpg:                 aka "Klaus Jensen <k.jensen@samsung.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: DDCA 4D9C 9EF9 31CC 3468  4272 63D5 6FC5 E55D A838
    #      Subkey fingerprint: 5228 33AA 75E2 DCE6 A247  66C0 4DE1 AF31 6D4F 0DE9
    
    * tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme:
      hw/nvme: add new command abort case
      hw/nvme: deprecate the use-intel-id compatibility parameter
      hw/nvme: bump firmware revision
      hw/nvme: do not report null uuid
      hw/nvme: do not auto-generate uuid
      hw/nvme: do not auto-generate eui64
      hw/nvme: enforce common serial per subsystem
      hw/nvme: fix smart aen
      hw/nvme: fix copy cmd for pi enabled namespaces
      hw/nvme: add missing return statement
      hw/nvme: fix narrowing conversion
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit d7fe639cabf778903f6cab23ff58c905c71375ec
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Wed Apr 20 11:20:44 2022 +0300

    hw/nvme: add new command abort case
    
    NVMe command set specification for end-to-end data protection formatted
    namespace states:
    
        o If the Reference Tag Check bit of the PRCHK field is set to ‘1’ and
          the namespace is formatted for Type 3 protection, then the
          controller:
              ▪ should not compare the protection Information Reference Tag
                field to the computed reference tag; and
              ▪ may ignore the ILBRT and EILBRT fields. If a command is
                aborted as a result of the Reference Tag Check bit of the
                PRCHK field being set to ‘1’, then that command should be
                aborted with a status code of Invalid Protection Information,
                but may be aborted with a status code of Invalid Field in
                Command.
    
    Currently qemu compares reftag in the nvme_dif_prchk function whenever
    Reference Tag Check bit is set in the command. For type 3 namespaces
    however, caller of nvme_dif_prchk - nvme_dif_check does not increment
    reftag for each subsequent logical block. That way commands incorporating
    more than one logical block for type 3 formatted namespaces with reftag
    check bit set, always fail with End-to-end Reference Tag Check Error.
    Comply with spec by handling case of set Reference Tag Check
    bit in the type 3 formatted namespace.
    
    Fixes: 146f720c5563 ("hw/block/nvme: end-to-end data protection")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 8b1e59a6873662a01379cf052384e5dedefe7447
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Tue Apr 19 13:24:23 2022 +0200

    hw/nvme: deprecate the use-intel-id compatibility parameter
    
    Since version 5.2 commit 6eb7a071292a ("hw/block/nvme: change controller
    pci id"), the emulated NVMe controller has defaulted to a non-Intel PCI
    identifier.
    
    Deprecate the compatibility parameter so we can get rid of it once and
    for all.
    
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit fbba243bc700a4e479331e20544c7f6a41ae87b3
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:36 2022 +0200

    hw/nvme: bump firmware revision
    
    The Linux kernel quirks the QEMU NVMe controller pretty heavily because
    of the namespace identifier mess. Since this is now fixed, bump the
    firmware revision number to allow the quirk to be disabled for this
    revision.
    
    As of now, bump the firmware revision number to be equal to the QEMU
    release version number.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9f2e1acf83c332752f52c39dad390c94ec2ba9f5
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:35 2022 +0200

    hw/nvme: do not report null uuid
    
    Do not report the "null uuid" (all zeros) in the namespace
    identification descriptors.
    
    Reported-by: Luis Chamberlain <mcgrof@kernel.org>
    Reported-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit bd9f371c6f6eeb8e907dfc770876ad8ef4ff85fc
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:34 2022 +0200

    hw/nvme: do not auto-generate uuid
    
    Do not default to generating a UUID for namespaces if one is not
    explicitly specified.
    
    This is a technically a breaking change in behavior. However, since the
    UUID changes on every VM launch, it is not spec compliant and is of
    little use since the UUID cannot be used reliably anyway and the
    behavior prior to this patch must be considered buggy.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 36d83272d5e45dff13e988ee0a59f11c58b442ba
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:33 2022 +0200

    hw/nvme: do not auto-generate eui64
    
    We cannot provide auto-generated unique or persistent namespace
    identifiers (EUI64, NGUID, UUID) easily. Since 6.1, namespaces have been
    assigned a generated EUI64 of the form "52:54:00:<namespace counter>".
    This will be unique within a QEMU instance, but not globally.
    
    Revert that this is assigned automatically and immediately deprecate the
    compatibility parameter. Users can opt-in to this with the
    `eui64-default=on` device parameter or set it explicitly with
    `eui64=UINT64`.
    
    Cc: libvir-list@redhat.com
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit a859eb9f8f64e116671048a43a07d87bc6527a55
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:32 2022 +0200

    hw/nvme: enforce common serial per subsystem
    
    The Identify Controller Serial Number (SN) is the serial number for the
    NVM subsystem and must be the same across all controllers in the NVM
    subsystem.
    
    Enforce this.
    
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9235a72a5df0fae1ede89f02717b597ef91cf6ad
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri May 6 00:21:47 2022 +0200

    hw/nvme: fix smart aen
    
    Pass the right constant to nvme_smart_event(). The NVME_AER* values hold
    the bit position in the SMART byte, not the shifted value that we expect
    it to be in nvme_smart_event().
    
    Fixes: c62720f137df ("hw/block/nvme: trigger async event during injecting smart warning")
    Acked-by: zhenwei pi <pizhenwei@bytedance.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 2e8f952ae7de23b4847937dbbf51f7a1ab10a2af
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Thu Apr 21 13:51:58 2022 +0300

    hw/nvme: fix copy cmd for pi enabled namespaces
    
    The current implementation has a problem in the read part of the copy
    command. Because there is no metadata mangling before the nvme_dif_check
    invocation, a reftag error could be thrown for blocks of the namespace
    that have not been previously written to.
    
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 51c453266309166c2737623211c0afc12884cccd
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Fri Apr 15 23:48:32 2022 +0300

    hw/nvme: add missing return statement
    
    Since there is no return after nvme_dsm_cb invocation, metadata
    associated with non-zero block range is currently zeroed. Also this
    behaviour leads to segfault since we schedule iocb->bh two times.
    First when entering nvme_dsm_cb with iocb->idx == iocb->nr and
    second because of missing return on call stack unwinding by calling
    blk_aio_pwrite_zeroes and subsequent nvme_dsm_cb callback.
    
    Fixes: d7d1474fd85d ("hw/nvme: reimplement dsm to allow cancellation")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 1e64facc015e16d8e4efa239feaeda9e4e9aeb04
Author: Dmitry Tikhov <ddtikhov@gmail.com>
Date:   Tue Apr 12 11:59:09 2022 +0300

    hw/nvme: fix narrowing conversion
    
    Since nlbas is of type int, it does not work with large namespace size
    values; e.g., a 9 TB file-backed namespace with 8 bytes of metadata and
    a 4096-byte lbasz gives a negative nlbas value, which is later promoted
    to a negative int64_t value and results in a negative ns->moff, which
    breaks the namespace.
    
    Signed-off-by: Dmitry Tikhov <ddtikhov@gmail.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>


From xen-devel-bounces@lists.xenproject.org Sat Jun 04 20:56:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jun 2022 20:56:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341961.567098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxaoW-0007wj-BZ; Sat, 04 Jun 2022 20:56:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341961.567098; Sat, 04 Jun 2022 20:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxaoW-0007wc-6l; Sat, 04 Jun 2022 20:56:04 +0000
Received: by outflank-mailman (input) for mailman id 341961;
 Sat, 04 Jun 2022 20:56:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AJVq=WL=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1nxaoU-0007wW-LS
 for xen-devel@lists.xenproject.org; Sat, 04 Jun 2022 20:56:02 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd3f2cc3-e448-11ec-bd2c-47488cf2e6aa;
 Sat, 04 Jun 2022 22:56:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id E1796B80AD5;
 Sat,  4 Jun 2022 20:55:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 93E55C385B8;
 Sat,  4 Jun 2022 20:55:58 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 77C0BEAC081; Sat,  4 Jun 2022 20:55:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd3f2cc3-e448-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654376158;
	bh=ZlwVPaT7l/IdNQNwYIHS1lieP4Aisx5mKwnqL0f1nXE=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=esF5ImWVpzAlPFEH0Tw+L9gHkm+zOiW+H+3JO9Gb4kcwKZNzRsa7E0RIKro6UsdNs
	 h144wUIHI7huwMbk6uTHSxO69PD7QI0AITNeuo3/xcwEXwoSiGVExLxxIKeao5ywFJ
	 Kx7N6rdtYJ3ofV+IgLSsqmYF9aDbLGPC6IwhgSQbeeAHQLZTRVn1M4RACfAZxQ/4xc
	 fDVko6YjZ5Cfwc8RdXDtgH8UzYgpOKtvCMAN+0fia4GtxEmswX9ERjYZ/niKVqTI5b
	 zRJLb+ENwLkNzOFkueKvcCPMt67YZlUQm9Oewl5hJbB5TCSK5Bd8Hx+fkpYNwhXUQ9
	 JzwX/TSy2EErQ==
Subject: Re: [GIT PULL] xen: branch for v5.19-rc1b
From: pr-tracker-bot@kernel.org
In-Reply-To: <20220604055608.9037-1-jgross@suse.com>
References: <20220604055608.9037-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20220604055608.9037-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19-rc1b-tag
X-PR-Tracked-Commit-Id: 41925b105e345ebc84cedb64f59d20cb14a62613
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 4ccbe91de91a8f9559052179d15c0229a8ac9f8a
Message-Id: <165437615848.25005.15611342522221631046.pr-tracker-bot@kernel.org>
Date: Sat, 04 Jun 2022 20:55:58 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Sat,  4 Jun 2022 07:56:08 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19-rc1b-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4ccbe91de91a8f9559052179d15c0229a8ac9f8a

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 02:48:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 02:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341985.567109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxgJi-0001es-2y; Sun, 05 Jun 2022 02:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341985.567109; Sun, 05 Jun 2022 02:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxgJh-0001el-Vw; Sun, 05 Jun 2022 02:48:37 +0000
Received: by outflank-mailman (input) for mailman id 341985;
 Sun, 05 Jun 2022 02:48:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxgJh-0001eb-1k; Sun, 05 Jun 2022 02:48:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxgJg-0000r6-Um; Sun, 05 Jun 2022 02:48:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxgJg-00024h-Dw; Sun, 05 Jun 2022 02:48:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxgJg-0007zB-DJ; Sun, 05 Jun 2022 02:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KCUiXruFIdlSEl30KQJIGhCCKWnid1qnhpQ9CKDK0a8=; b=Uy2LwvhLmEvjtMcx0dy8Y/jlPR
	EPt07gVhsFwEnm+0uDcZ0WQ5MIm+OSwIhK3Va8KxjciBkfpx+4WSOftnP3851JCd1MCS+zc/U+4jb
	h5AJRu4xkz+lRvPNuquElV1nuRHKA+TXgDfAmc4lvfKhD7sQgOmry9wMGii4jdx/Fq70=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170831-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170831: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=032dcf09e2bf7c822be25b4abef7a6c913870d98
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 02:48:36 +0000

flight 170831 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170831/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                032dcf09e2bf7c822be25b4abef7a6c913870d98
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   11 days
Failing since        170716  2022-05-24 11:12:06 Z   11 days   33 attempts
Testing same since   170826  2022-06-04 09:06:06 Z    0 days    2 attempts

------------------------------------------------------------
2254 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 263710 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 04:27:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 04:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.341997.567119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxhrI-0004BG-Cx; Sun, 05 Jun 2022 04:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 341997.567119; Sun, 05 Jun 2022 04:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxhrI-0004B9-9p; Sun, 05 Jun 2022 04:27:24 +0000
Received: by outflank-mailman (input) for mailman id 341997;
 Sun, 05 Jun 2022 04:27:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxhrH-0004Az-Gl; Sun, 05 Jun 2022 04:27:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxhrH-0002iG-DK; Sun, 05 Jun 2022 04:27:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxhrG-0007Fi-Ou; Sun, 05 Jun 2022 04:27:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxhrG-0000jh-O6; Sun, 05 Jun 2022 04:27:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u3httWF2+0sEeLyDidBnBl9O7EKBwMeTwqIj8O48ckU=; b=d+ecArnXyQH9iepNKsg/Dk5oCr
	RxQR2x+Cw0bouGPFXp8VZXBkE3UuNXoclqD4Z7aSiFblPM3h2dgnHBia3AFgwleedrd7Q2fohqYA3
	tm8mMyPUS1Y/oo06wWTDYT8i8FRsr9bBAdEuiwJP6ZdR3+yA0hAi+Yf3QNDs0yq96fSM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170832-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170832: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ca127b3fc247517ec7d4dad291f2c0f90602ce5b
X-Osstest-Versions-That:
    qemuu=70e975203f366f2f30daaeb714bb852562b7b72f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 04:27:22 +0000

flight 170832 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170832/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 170820

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 170824 pass in 170832
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 170829 pass in 170832
 test-armhf-armhf-xl-vhd      18 guest-start.2    fail in 170829 pass in 170832
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 170824
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install     fail pass in 170829

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 170824 like 170820
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170820
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170820
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170820
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170820
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170820
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170820
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170820
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ca127b3fc247517ec7d4dad291f2c0f90602ce5b
baseline version:
 qemuu                70e975203f366f2f30daaeb714bb852562b7b72f

Last test of basis   170820  2022-06-03 15:38:21 Z    1 days
Testing same since   170824  2022-06-04 03:08:51 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dmitry Tikhov <d.tihov@yadro.com>
  Dmitry Tikhov <ddtikhov@gmail.com>
  Klaus Jensen <k.jensen@samsung.com>
  Richard Henderson <richard.henderson@linaro.org>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ca127b3fc247517ec7d4dad291f2c0f90602ce5b
Merge: 70e975203f d7fe639cab
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Fri Jun 3 14:14:24 2022 -0700

    Merge tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme into staging
    
    hw/nvme updates
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEUigzqnXi3OaiR2bATeGvMW1PDekFAmKaZmgACgkQTeGvMW1P
    # DenI7wgAxY4QtRlUnufzaZqcoi+affFTKlKm0JYKZm/Ldxt2RtHoWxRZDLLIUp8B
    # 4XAlIGJw7VwrafEtSkx4K6cSyKluMJ9Ax8pNd03sEweXBBfdhNizspPprp+Jm9P9
    # hRcH8kSiBp5B451cORBlgmoHguWeWawe1r66uFLTCbEMtfQQNaxNVsTsgAsOvtwv
    # XsjLVFVKGNDWXGRta+lzu4seNNuzfucsAmKWUjg5HN38rstY7XxfLVMzt8ORcwjk
    # oNmQuy3JiKujdPVhE5PVgNRZkigwoDt3hDA1QTncGTBUoA/CtaB5SK9EhcJ5xJVI
    # EHv99S9LQ8ng5BJC2pUSU32yRkaNOQ==
    # =XTXH
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Fri 03 Jun 2022 12:52:08 PM PDT
    # gpg:                using RSA key 522833AA75E2DCE6A24766C04DE1AF316D4F0DE9
    # gpg: Good signature from "Klaus Jensen <its@irrelevant.dk>" [unknown]
    # gpg:                 aka "Klaus Jensen <k.jensen@samsung.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: DDCA 4D9C 9EF9 31CC 3468  4272 63D5 6FC5 E55D A838
    #      Subkey fingerprint: 5228 33AA 75E2 DCE6 A247  66C0 4DE1 AF31 6D4F 0DE9
    
    * tag 'nvme-next-pull-request' of git://git.infradead.org/qemu-nvme:
      hw/nvme: add new command abort case
      hw/nvme: deprecate the use-intel-id compatibility parameter
      hw/nvme: bump firmware revision
      hw/nvme: do not report null uuid
      hw/nvme: do not auto-generate uuid
      hw/nvme: do not auto-generate eui64
      hw/nvme: enforce common serial per subsystem
      hw/nvme: fix smart aen
      hw/nvme: fix copy cmd for pi enabled namespaces
      hw/nvme: add missing return statement
      hw/nvme: fix narrowing conversion
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit d7fe639cabf778903f6cab23ff58c905c71375ec
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Wed Apr 20 11:20:44 2022 +0300

    hw/nvme: add new command abort case
    
    NVMe command set specification for end-to-end data protection formatted
    namespace states:
    
        o If the Reference Tag Check bit of the PRCHK field is set to ‘1’ and
          the namespace is formatted for Type 3 protection, then the
          controller:
              ▪ should not compare the protection Information Reference Tag
                field to the computed reference tag; and
              ▪ may ignore the ILBRT and EILBRT fields. If a command is
                aborted as a result of the Reference Tag Check bit of the
                PRCHK field being set to ‘1’, then that command should be
                aborted with a status code of Invalid Protection Information,
                but may be aborted with a status code of Invalid Field in
                Command.
    
    Currently, QEMU compares the reftag in the nvme_dif_prchk function
    whenever the Reference Tag Check bit is set in the command. For Type 3
    namespaces, however, the caller of nvme_dif_prchk (nvme_dif_check)
    does not increment the reftag for each subsequent logical block. As a
    result, commands spanning more than one logical block on a Type 3
    formatted namespace with the reftag check bit set always fail with an
    End-to-end Reference Tag Check Error. Comply with the spec by handling
    the case of a set Reference Tag Check bit on a Type 3 formatted
    namespace.
    
    Fixes: 146f720c5563 ("hw/block/nvme: end-to-end data protection")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
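The fix described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the constant names and the status-code value follow QEMU's conventions but are used here only as assumptions; the real change lives in hw/nvme/dif.c.

```c
#include <stdint.h>

/* Illustrative constants, assumed for this sketch only. */
#define NVME_PRINFO_PRCHK_REF   0x1
#define NVME_ID_NS_DPS_TYPE_3   3
#define NVME_SUCCESS            0x0
#define NVME_INVALID_PROT_INFO  0x181

/* Per the spec quote above: a command that sets the Reference Tag Check
 * bit on a Type 3 formatted namespace should be aborted with Invalid
 * Protection Information rather than have its reftag compared. */
static uint16_t check_prinfo(uint8_t prinfo, uint8_t pi_type)
{
    if ((prinfo & NVME_PRINFO_PRCHK_REF) &&
        pi_type == NVME_ID_NS_DPS_TYPE_3) {
        return NVME_INVALID_PROT_INFO;
    }
    return NVME_SUCCESS;
}
```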

commit 8b1e59a6873662a01379cf052384e5dedefe7447
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Tue Apr 19 13:24:23 2022 +0200

    hw/nvme: deprecate the use-intel-id compatibility parameter
    
    Since version 5.2 commit 6eb7a071292a ("hw/block/nvme: change controller
    pci id"), the emulated NVMe controller has defaulted to a non-Intel PCI
    identifier.
    
    Deprecate the compatibility parameter so we can get rid of it once and
    for all.
    
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit fbba243bc700a4e479331e20544c7f6a41ae87b3
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:36 2022 +0200

    hw/nvme: bump firmware revision
    
    The Linux kernel quirks the QEMU NVMe controller pretty heavily because
    of the namespace identifier mess. Since this is now fixed, bump the
    firmware revision number to allow the quirk to be disabled for this
    revision.
    
    For now, set the firmware revision number equal to the QEMU release
    version number.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9f2e1acf83c332752f52c39dad390c94ec2ba9f5
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:35 2022 +0200

    hw/nvme: do not report null uuid
    
    Do not report the "null uuid" (all zeros) in the namespace
    identification descriptors.
    
    Reported-by: Luis Chamberlain <mcgrof@kernel.org>
    Reported-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit bd9f371c6f6eeb8e907dfc770876ad8ef4ff85fc
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:34 2022 +0200

    hw/nvme: do not auto-generate uuid
    
    Do not generate a UUID for namespaces by default if one is not
    explicitly specified.
    
    This is technically a breaking change in behavior. However, since the
    UUID changes on every VM launch, it is not spec compliant and of
    little use, as it cannot be relied upon anyway; the behavior prior to
    this patch must be considered buggy.
    
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 36d83272d5e45dff13e988ee0a59f11c58b442ba
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:33 2022 +0200

    hw/nvme: do not auto-generate eui64
    
    We cannot provide auto-generated unique or persistent namespace
    identifiers (EUI64, NGUID, UUID) easily. Since 6.1, namespaces have been
    assigned a generated EUI64 of the form "52:54:00:<namespace counter>".
    This will be unique within a QEMU instance, but not globally.
    
    Revert the automatic assignment and immediately deprecate the
    compatibility parameter. Users can opt in to the old behavior with
    the `eui64-default=on` device parameter, or set an EUI64 explicitly
    with `eui64=UINT64`.
    
    Cc: libvir-list@redhat.com
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit a859eb9f8f64e116671048a43a07d87bc6527a55
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri Apr 29 10:33:32 2022 +0200

    hw/nvme: enforce common serial per subsystem
    
    The Identify Controller Serial Number (SN) is the serial number for
    the NVM subsystem and must be the same across all controllers in the
    NVM subsystem.
    
    Enforce this.
    
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 9235a72a5df0fae1ede89f02717b597ef91cf6ad
Author: Klaus Jensen <k.jensen@samsung.com>
Date:   Fri May 6 00:21:47 2022 +0200

    hw/nvme: fix smart aen
    
    Pass the right constant to nvme_smart_event(). The NVME_AER* values
    hold the bit position in the SMART byte, not the shifted value that
    nvme_smart_event() expects.
    
    Fixes: c62720f137df ("hw/block/nvme: trigger async event during injecting smart warning")
    Acked-by: zhenwei pi <pizhenwei@bytedance.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
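The distinction the commit describes, a bit position versus a pre-shifted mask, can be illustrated in isolation. The names below are illustrative, not QEMU's actual identifiers; the point is only that a consumer expecting a mask must be handed (1 << position), not the position itself.

```c
#include <stdint.h>

/* Assumed, illustrative bit positions within the SMART
 * critical-warning byte (not QEMU's real constants). */
enum {
    SMART_SPARE_BIT       = 0,
    SMART_TEMPERATURE_BIT = 1,
};

/* Convert a bit position into the shifted mask value that a
 * mask-expecting consumer (like nvme_smart_event()) needs. */
static uint8_t smart_warning_mask(unsigned bit_position)
{
    return (uint8_t)(1u << bit_position);
}
```

Passing `SMART_TEMPERATURE_BIT` (the value 1) where `0x02` is expected is exactly the class of bug the commit fixes.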

commit 2e8f952ae7de23b4847937dbbf51f7a1ab10a2af
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Thu Apr 21 13:51:58 2022 +0300

    hw/nvme: fix copy cmd for pi enabled namespaces
    
    The current implementation has a problem in the read part of the copy
    command. Because there is no metadata mangling before the
    nvme_dif_check invocation, a reftag error could be thrown for blocks
    of the namespace that have not previously been written to.
    
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

commit 51c453266309166c2737623211c0afc12884cccd
Author: Dmitry Tikhov <d.tihov@yadro.com>
Date:   Fri Apr 15 23:48:32 2022 +0300

    hw/nvme: add missing return statement
    
    Since there is no return after the nvme_dsm_cb invocation, metadata
    associated with a non-zero block range is currently zeroed. This
    behaviour also leads to a segfault, since iocb->bh is scheduled
    twice: first when entering nvme_dsm_cb with iocb->idx == iocb->nr,
    and second, because of the missing return, on call stack unwinding
    via blk_aio_pwrite_zeroes and the subsequent nvme_dsm_cb callback.
    
    Fixes: d7d1474fd85d ("hw/nvme: reimplement dsm to allow cancellation")
    Signed-off-by: Dmitry Tikhov <d.tihov@yadro.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
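The control-flow shape of the bug can be reduced to a few lines. This is a standalone sketch, not the QEMU code: when a step schedules a continuation, the caller must return immediately, or the completion path runs a second time.

```c
/* Count how many completion steps run. In the buggy variant, the
 * missing return lets execution fall through after the continuation
 * is kicked off, so the completion path runs twice. */
static int steps_run_buggy(int more_work)
{
    int completions = 0;
    if (more_work) {
        completions++;      /* continuation scheduled... */
        /* missing return: execution falls through */
    }
    completions++;          /* ...and the completion path runs again */
    return completions;
}

static int steps_run_fixed(int more_work)
{
    int completions = 0;
    if (more_work) {
        completions++;
        return completions; /* the added return statement */
    }
    completions++;
    return completions;
}
```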

commit 1e64facc015e16d8e4efa239feaeda9e4e9aeb04
Author: Dmitry Tikhov <ddtikhov@gmail.com>
Date:   Tue Apr 12 11:59:09 2022 +0300

    hw/nvme: fix narrowing conversion
    
    Since nlbas is of type int, it does not work with large namespace
    sizes. For example, a 9 TB file backing the namespace, with 8 bytes
    of metadata and a 4096-byte lbasz, gives a negative nlbas value,
    which is later promoted to a negative int64_t value and results in a
    negative ns->moff, which breaks the namespace.
    
    Signed-off-by: Dmitry Tikhov <ddtikhov@gmail.com>
    Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
    Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
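The arithmetic in the commit message checks out and is easy to demonstrate. The helper names below are made up for illustration (the real code is in QEMU's namespace setup): 9 TB divided by a 4104-byte stride (4096-byte LBA plus 8 bytes of metadata) is about 2.4 billion blocks, which exceeds INT_MAX, so storing the quotient in an int wraps negative.

```c
#include <stdint.h>

/* Buggy shape: the 64-bit quotient is narrowed to a 32-bit int,
 * then promoted back, carrying the wrapped (negative) value. */
static int64_t nlbas_narrow(int64_t size, int lbasz, int ms)
{
    int nlbas = (int)(size / (lbasz + ms));
    return nlbas;
}

/* Fixed shape: keep the count in 64-bit arithmetic throughout. */
static int64_t nlbas_wide(int64_t size, int lbasz, int ms)
{
    int64_t nlbas = size / (lbasz + ms);
    return nlbas;
}
```

With size = 9 * 2^40 bytes, lbasz = 4096, and ms = 8, the wide variant yields a block count above 2^31 while the narrow variant goes negative.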


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 09:49:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 09:49:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342033.567130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxmsJ-0004PL-Lo; Sun, 05 Jun 2022 09:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342033.567130; Sun, 05 Jun 2022 09:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxmsJ-0004PE-JE; Sun, 05 Jun 2022 09:48:47 +0000
Received: by outflank-mailman (input) for mailman id 342033;
 Sun, 05 Jun 2022 09:48:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxmsI-0004Ol-73; Sun, 05 Jun 2022 09:48:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxmsI-0000Ub-2O; Sun, 05 Jun 2022 09:48:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxmsH-0001L9-KS; Sun, 05 Jun 2022 09:48:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxmsH-0006TA-K5; Sun, 05 Jun 2022 09:48:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LR3rhYiUNdfUX1YT+jWE00gDp9nLTyxQH/2WqKdkgZo=; b=AlfZExpr9Rdj1RxZfsVqR4iVLi
	KnLJq3pgHpoKBMNd8El1RWch/b7K3QyVBnhNWpGszXcAYPJljdrzw/1G0tk/jQSqS5xXUvP/ujRXn
	2Q1HJj0YxObZt7mfWU7rfwrV7nzqwJ/gZNbes0uhPGeeAvhsorbsLTWZiF1ViRjK3hxg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170835-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170835: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a939d4d86919a1f9ffcdc053a852422f9184a00d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 09:48:45 +0000

flight 170835 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170835/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a939d4d86919a1f9ffcdc053a852422f9184a00d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  695 days
Failing since        151818  2020-07-11 04:18:52 Z  694 days  676 attempts
Testing same since   170825  2022-06-04 04:20:31 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111348 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 10:04:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 10:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342045.567142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxn7D-000702-0R; Sun, 05 Jun 2022 10:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342045.567142; Sun, 05 Jun 2022 10:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxn7C-0006zv-Te; Sun, 05 Jun 2022 10:04:10 +0000
Received: by outflank-mailman (input) for mailman id 342045;
 Sun, 05 Jun 2022 10:04:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxn7B-0006zl-Br; Sun, 05 Jun 2022 10:04:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxn7B-0000pR-7e; Sun, 05 Jun 2022 10:04:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxn7A-0001pK-Li; Sun, 05 Jun 2022 10:04:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxn7A-0004hi-LD; Sun, 05 Jun 2022 10:04:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ybknle97bskx55ethsOT2u99Yc7cynodNKvU5+0lMjE=; b=HVHNiqrLq2EkloOeUQsyAMESSz
	Zh3frT3of+oeC/z6lae8i/1znRqHIQVe680c6gftfXflB6piBPr89WN0gOEj7TIzgZTUsujLBZMs7
	iDzMPk/AB6Yaq5tnxmPURzdC8VoyWCKrQv9r6MW+6h3Co18PwX7IElfqFSZ0/3mJNUL4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170833-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170833: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 10:04:08 +0000

flight 170833 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170833/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-3          <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3        5 host-install(5)          broken pass in 170823
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 170823 pass in 170833

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 170823 like 170813
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170813
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170823
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170823
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170823
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170823
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170823
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170823
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170823
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170823
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170823
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170823
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170823
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170833  2022-06-05 01:54:00 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       broken  
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-xtf-amd64-amd64-3 broken
broken-step test-xtf-amd64-amd64-3 host-install(5)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 05 12:33:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 12:33:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342060.567152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxpRU-0005kJ-1F; Sun, 05 Jun 2022 12:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342060.567152; Sun, 05 Jun 2022 12:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxpRT-0005kC-Us; Sun, 05 Jun 2022 12:33:15 +0000
Received: by outflank-mailman (input) for mailman id 342060;
 Sun, 05 Jun 2022 12:33:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxpRS-0005k2-I1; Sun, 05 Jun 2022 12:33:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxpRS-0003M9-FU; Sun, 05 Jun 2022 12:33:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxpRS-0006lE-1u; Sun, 05 Jun 2022 12:33:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxpRS-0006jV-1U; Sun, 05 Jun 2022 12:33:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/i3a7WJfdFsv0NjASWW4GtWSmrahhFbOP4pjgdwaOoA=; b=r5sFtr8u8iAw0f10jbrIWT47Qu
	KqHj50CRua3BwFRXj4pxhq/0/23fWTjPzhKJ59TtcZn4z2W7A7b0zQSUZF+Qndk+cDYFoKGEoKu+b
	dzw1o+Agu94h2ZZez8WrgyRElbyvEU/lVSvUNgkelcCqsKD4UeMoEzjq/WyYwUUSNcmA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170834-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170834: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:debian-install:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=952923ddc01120190dcf671e7b354364ce1d1362
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 12:33:14 +0000

flight 170834 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170834/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-armhf-armhf-libvirt     12 debian-install           fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                952923ddc01120190dcf671e7b354364ce1d1362
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   12 days
Failing since        170716  2022-05-24 11:12:06 Z   12 days   34 attempts
Testing same since   170834  2022-06-05 03:05:03 Z    0 days    1 attempts

------------------------------------------------------------
2261 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 265687 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 13:52:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 13:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342076.567163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxqfZ-0005vU-Nb; Sun, 05 Jun 2022 13:51:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342076.567163; Sun, 05 Jun 2022 13:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxqfZ-0005vN-Ku; Sun, 05 Jun 2022 13:51:53 +0000
Received: by outflank-mailman (input) for mailman id 342076;
 Sun, 05 Jun 2022 13:51:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxqfZ-0005vD-3h; Sun, 05 Jun 2022 13:51:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxqfZ-0004ek-0U; Sun, 05 Jun 2022 13:51:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxqfY-0000DZ-Kd; Sun, 05 Jun 2022 13:51:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxqfY-0002jJ-KB; Sun, 05 Jun 2022 13:51:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nBA+8vu8xeKM1bPmiMGb870a0axlLHpGdy1WH7qCMYU=; b=1XK6gvOsali8ULlozYikdfyPgX
	6Ie8jmXtbcIv1JqnZ6qexRSrP0CAsYHSMNMDH0i/FnmYoR1xuHjCllzxGSCMDTcUtQNceh7rNUloT
	pO5mm3XExZ4jR0jvlffWz3iAOZEuML3ngmZLdQ09D2HjnS3to5fXVCGOTUrxYb4NPmFE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170836-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170836: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ca127b3fc247517ec7d4dad291f2c0f90602ce5b
X-Osstest-Versions-That:
    qemuu=70e975203f366f2f30daaeb714bb852562b7b72f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 13:51:52 +0000

flight 170836 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170836/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 170824 pass in 170836
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 170824 pass in 170836
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 170824

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170820
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170820
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170820
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170820
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170820
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170820
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170820
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170820
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ca127b3fc247517ec7d4dad291f2c0f90602ce5b
baseline version:
 qemuu                70e975203f366f2f30daaeb714bb852562b7b72f

Last test of basis   170820  2022-06-03 15:38:21 Z    1 days
Testing same since   170824  2022-06-04 03:08:51 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dmitry Tikhov <d.tihov@yadro.com>
  Dmitry Tikhov <ddtikhov@gmail.com>
  Klaus Jensen <k.jensen@samsung.com>
  Richard Henderson <richard.henderson@linaro.org>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   70e975203f..ca127b3fc2  ca127b3fc247517ec7d4dad291f2c0f90602ce5b -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 20:38:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 20:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342118.567175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxx0U-0004iQ-Jg; Sun, 05 Jun 2022 20:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342118.567175; Sun, 05 Jun 2022 20:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxx0U-0004iJ-GE; Sun, 05 Jun 2022 20:37:54 +0000
Received: by outflank-mailman (input) for mailman id 342118;
 Sun, 05 Jun 2022 20:37:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxx0S-0004ht-Q1; Sun, 05 Jun 2022 20:37:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxx0S-0004JR-MU; Sun, 05 Jun 2022 20:37:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nxx0S-0004Ki-2y; Sun, 05 Jun 2022 20:37:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nxx0S-00081W-2U; Sun, 05 Jun 2022 20:37:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pRV46ji8UVQIPu2iX8o8xySdXsHRIepbEgzUdPkLSMo=; b=qgNZeR0S9aO+0tOGo5OzGw0UrN
	VtMo3rS1bUniwrt6CA1dfovF2gnXt7J4sGBLt6NSFR6gBDz0CAgMRLYKlskIlM40PfQTiOWZxMJTi
	w6FPvRGS5yHIxP8aPeQ+nDJ/GcwtqEIkff2suaOdS88iRrgxjk1yoGGW+For+eepZxS0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170837-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170837: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:debian-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=952923ddc01120190dcf671e7b354364ce1d1362
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jun 2022 20:37:52 +0000

flight 170837 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170837/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt     12 debian-install   fail in 170834 pass in 170837
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 170834 pass in 170837
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot            fail pass in 170834

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                952923ddc01120190dcf671e7b354364ce1d1362
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   12 days
Failing since        170716  2022-05-24 11:12:06 Z   12 days   35 attempts
Testing same since   170834  2022-06-05 03:05:03 Z    0 days    2 attempts

------------------------------------------------------------
2261 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 265687 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 05 22:46:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jun 2022 22:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342130.567185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxz0B-00010i-Om; Sun, 05 Jun 2022 22:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342130.567185; Sun, 05 Jun 2022 22:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nxz0B-00010b-Lg; Sun, 05 Jun 2022 22:45:43 +0000
Received: by outflank-mailman (input) for mailman id 342130;
 Sun, 05 Jun 2022 22:45:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O8Eh=WM=gmail.com=robherring2@srs-se1.protection.inumbo.net>)
 id 1nxz09-00010V-E2
 for xen-devel@lists.xenproject.org; Sun, 05 Jun 2022 22:45:41 +0000
Received: from mail-qv1-f46.google.com (mail-qv1-f46.google.com
 [209.85.219.46]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37c20dc5-e521-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 00:45:38 +0200 (CEST)
Received: by mail-qv1-f46.google.com with SMTP id b17so772480qvz.0
 for <xen-devel@lists.xenproject.org>; Sun, 05 Jun 2022 15:45:38 -0700 (PDT)
Received: from robh.at.kernel.org ([2607:fb90:1bdb:2e61:f12:452:5315:9c7e])
 by smtp.gmail.com with ESMTPSA id
 v128-20020a37dc86000000b0069fc13ce244sm10341437qki.117.2022.06.05.15.45.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 05 Jun 2022 15:45:36 -0700 (PDT)
Received: (nullmailer pid 3667844 invoked by uid 1000);
 Sun, 05 Jun 2022 22:45:33 -0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37c20dc5-e521-11ec-b605-df0040e90b76
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=uDzznG00WmBRTYCBcArlD1mf1zVfk2p9nd9xHJn3CVw=;
        b=jzhRNB8WyHLNw1/C3wBunaI05YBqGijRgqe+bk4w4z5d3y0K9XEKoiS/4frChLXtrL
         mE2eD1GhAAMwnj530igAyhw42IPuP0wIEfgGM5qGaYv3aOx6RSe77u0wwDTxM7QliUb6
         +cX86Kh/6do5EgXiKXUJHoXeZrqR4fH4HyvGr6eJXODRX9IQSfMoK9uNTKGWv/93H9Ur
         Gx6E08E09ogvGlql+oqoDGTsgLWKtxR3ZKaw9vFjkTLXdilP6+rfJEwKV5FOptttkIN3
         sXTBsiMdijzXne84AjH3T1V3vf2R34SVHDN7Eg5xlCxBwEndFY+oCpCAUNbbZAfEAZ2u
         hTmA==
X-Gm-Message-State: AOAM531YAddF4PjY1vNIjMUADZtExtrnpMNwtJ1li26MKsYCWlmxPkM8
	aw4Rsuab9zne0mFdnCb3kw==
X-Google-Smtp-Source: ABdhPJwmsDYIVR7qLkG6UWB/x8H/AgliTU6o4B+wWC9F21jxx0e09sjuNDCoG74Zm/3g90Nz9x+0LQ==
X-Received: by 2002:ad4:5447:0:b0:461:d7a7:f0ec with SMTP id h7-20020ad45447000000b00461d7a7f0ecmr70549704qvt.21.1654469136911;
        Sun, 05 Jun 2022 15:45:36 -0700 (PDT)
Date: Sun, 5 Jun 2022 17:45:33 -0500
From: Rob Herring <robh@kernel.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org, Joerg Roedel <joro@8bytes.org>, "Michael S. Tsirkin" <mst@redhat.com>, Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, devicetree@vger.kernel.org, Will Deacon <will@kernel.org>, Juergen Gross <jgross@suse.com>, Arnd Bergmann <arnd@arndb.de>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>, linux-arm-kernel@lists.infradead.org, Christoph Hellwig <hch@infradead.org>, Rob Herring <robh+dt@kernel.org>
Subject: Re: [PATCH V4 5/8] dt-bindings: Add xen,grant-dma IOMMU description
 for xen-grant DMA ops
Message-ID: <20220605224533.GA3667788-robh@kernel.org>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
 <1654197833-25362-6-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1654197833-25362-6-git-send-email-olekstysh@gmail.com>

On Thu, 02 Jun 2022 22:23:50 +0300, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The main purpose of this binding is to communicate Xen specific
> information using generic IOMMU device tree bindings (which is
> a good fit here) rather than introducing a custom property.
> 
> Introduce Xen specific IOMMU for the virtualized device (e.g. virtio)
> to be used by Xen grant DMA-mapping layer in the subsequent commit.
> 
> The reference to Xen specific IOMMU node using "iommus" property
> indicates that Xen grant mappings need to be enabled for the device,
> and it specifies the ID of the domain where the corresponding backend
> resides. The domid (domain ID) is used as an argument to the Xen grant
> mapping APIs.
> 
> This is needed for the option to restrict memory access using Xen grant
> mappings to work which primary goal is to enable using virtio devices
> in Xen guests.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> Changes RFC -> V1:
>    - update commit subject/description and text in description
>    - move to devicetree/bindings/arm/
> 
> Changes V1 -> V2:
>    - update text in description
>    - change the maintainer of the binding
>    - fix validation issue
>    - reference xen,dev-domid.yaml schema from virtio/mmio.yaml
> 
> Change V2 -> V3:
>    - Stefano already gave his Reviewed-by, I dropped it due to the changes (significant)
>    - use generic IOMMU device tree bindings instead of custom property
>      "xen,dev-domid"
>    - change commit subject and description, was
>      "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"
> 
> Changes V3 -> V4:
>    - add Stefano's R-b
>    - remove underscore in iommu node name
>    - remove consumer example virtio@3000
>    - update text for two descriptions
> ---
>  .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 39 ++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> 

Reviewed-by: Rob Herring <robh@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 02:33:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 02:33:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342142.567196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny2Yo-0006Ev-Gp; Mon, 06 Jun 2022 02:33:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342142.567196; Mon, 06 Jun 2022 02:33:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny2Yo-0006Eo-E1; Mon, 06 Jun 2022 02:33:42 +0000
Received: by outflank-mailman (input) for mailman id 342142;
 Mon, 06 Jun 2022 02:33:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny2Yn-0006Ee-5g; Mon, 06 Jun 2022 02:33:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny2Yn-0000kc-3n; Mon, 06 Jun 2022 02:33:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny2Ym-0007WI-HG; Mon, 06 Jun 2022 02:33:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ny2Ym-00048W-Gp; Mon, 06 Jun 2022 02:33:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kyr6ZfGbvyG3lbYEJ1tgYS0JtrXQVyQJdMbK0LF+j2c=; b=iPxMFDsNfYDxNInYFhqJAG6NNf
	Hxl7dX+eGgkulZYbNeY0KO9hkQXzSrvGBTmdtysIF7hBNRHg7uQdoLkKld0inQ6PehSH/XNbwsDch
	WJqYB0uwwF/lgPeH/9Zh+kojao566foavE87ldJqoZVcsUIS7tk/H8SeTV1DFFDjsh9U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170839-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170839: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0b36dea3f8b71ddddc4fdf3a5d2edf395115568b
X-Osstest-Versions-That:
    ovmf=0a4019ec9de64c6565ea545dc8d847afe2b30d6c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 02:33:40 +0000

flight 170839 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170839/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0b36dea3f8b71ddddc4fdf3a5d2edf395115568b
baseline version:
 ovmf                 0a4019ec9de64c6565ea545dc8d847afe2b30d6c

Last test of basis   170819  2022-06-03 12:40:25 Z    2 days
Testing same since   170839  2022-06-06 00:11:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jake Garver <jake@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0a4019ec9d..0b36dea3f8  0b36dea3f8b71ddddc4fdf3a5d2edf395115568b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:13:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342152.567207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3Ap-0002iU-Jv; Mon, 06 Jun 2022 03:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342152.567207; Mon, 06 Jun 2022 03:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3Ap-0002iN-Gu; Mon, 06 Jun 2022 03:12:59 +0000
Received: by outflank-mailman (input) for mailman id 342152;
 Mon, 06 Jun 2022 03:12:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny3Ao-0002iB-BE; Mon, 06 Jun 2022 03:12:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny3Ao-0001so-8f; Mon, 06 Jun 2022 03:12:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny3An-0000tN-Qk; Mon, 06 Jun 2022 03:12:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ny3An-0002Qc-QI; Mon, 06 Jun 2022 03:12:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3TVPY5dpvAsXErbiZNd4nT5dppxu+Y1wQXzf8b42G0I=; b=MOdFpqF2qgedE3hkKldIDG49Qq
	OlsWOhrOfwi8hTWhPDCvrE/mK69q3Lof1G6LirtXuajx9H1rXrpRA2aTLK9bITv+UPv1kkFd7wC7Z
	MwLWRvps103W6zmYZTpkDxhNIKKThESK3VOuvhZE4XpFSMK6oOtfAPGCeIy6M6kXjh30=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170838-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170838: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d717180e7f9775d468f415c10a4a474640146001
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 03:12:57 +0000

flight 170838 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170838/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d717180e7f9775d468f415c10a4a474640146001
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   12 days
Failing since        170716  2022-05-24 11:12:06 Z   12 days   36 attempts
Testing same since   170838  2022-06-05 20:43:00 Z    0 days    1 attempts

------------------------------------------------------------
2273 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268049 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342169.567285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bq-0007nD-Re; Mon, 06 Jun 2022 03:40:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342169.567285; Mon, 06 Jun 2022 03:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bq-0007my-Nn; Mon, 06 Jun 2022 03:40:54 +0000
Received: by outflank-mailman (input) for mailman id 342169;
 Mon, 06 Jun 2022 03:40:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bo-0006AI-Cz
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:52 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75cd1f4c-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:51 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id A9F915C0061;
 Sun,  5 Jun 2022 23:40:50 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Sun, 05 Jun 2022 23:40:50 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:49 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75cd1f4c-e54a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654486850; x=1654573250; bh=NGSmc+qtu3
	4gNopUlqM1N80pu5ni6kp1Nm5hYXfdT1w=; b=Obd1H0trnpDacJCfnGwuE/Lj6D
	EmLq6/S6pIRz3axGfWdZnP+tDN4tAzEKqIqHscd/W5lVhquhNznD5zEuXZDeXpgL
	S6053qeQOVeb5UvTeU14L580mcICg7iOr2y0ywJgNZTY0yyLBUunNmMzA8xyZSG3
	kwJVS3RThqtGzvv+w3X6OeottG634ba6EIFh786yxpAoF2r20GWXwayqzXWB0/ZX
	qVGJIRHaAGHiO1y4tOfrh0AanE318hQhh50N+YVfRTBAhkg91hvjgVDzdLRw2oSL
	uM9j6im+VKEU5oElR+2kwvsxN3msYLNiyoMZHZzutroWb0DNAG0L836YTcTg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654486850; x=
	1654573250; bh=NGSmc+qtu34gNopUlqM1N80pu5ni6kp1Nm5hYXfdT1w=; b=o
	Io98yxeiOq1ZwOguKQVHDO1642SzCGnhJixkSIWE/vl+ZbS6QKvS5KXDBtrt/GUO
	ES/GHkLjFIsAsGS9SgsOzx2gFgPaDYEUXMCppUkRPCcCdRewm/McHnebkq639kD2
	1mjeBlFknfVPJCJHcTkjLIbekbbRA2EbFmRkntW8pCPXRbgK4bMo2+baxogab5PV
	inozapL7dcNZIAjygrL++fMti6vjMgy7oSIXXGnZpI5+JXPSoWGa2R0CmSwgyCU6
	a/vUOOKo8d8Akqzd1McgWkGE9XDfCHNFzJZlCY6gcL8//gTaN5U0Kr0HrZ8+YepR
	UlZndUpqIQHP3K7UxdoIw==
X-ME-Sender: <xms:QnedYgNWyrenokUQzcjqOq2JO8thG-hZJEKZxyL50YXJbhSuQIAGcw>
    <xme:QnedYm9tJzdoErhsj_p8mpub3JBy9Qo-6GYKW70baWrDO0kZ0ZlVfY9-CMKdfO5mq
    hSyxJIYwiLUtA>
X-ME-Received: <xmr:QnedYnSqzwPCvMhZHKUkz27Rh8PsmxRZglbktIA7MPQLPdGbSAUgx68PWB4rforQic8Dhx2aEWU6_UMDkd27KE02Y0iuYyKwYkgZS6SKyPQlhOekerA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtuddgjeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:QnedYos3YzL632hKDmOXeoSq2RxWfRq3JJW74JsXFFcJ5JCROrxwHg>
    <xmx:QnedYodMjZKEWId34cAI5gKqZ4jU7TG4EOzIj6Ig3FLYJcCCvzOkiQ>
    <xmx:QnedYs17RXiy3SU3_1eEh6s3Hwwrtsm0Y1_tW6eKVihcOF_P-5zqVA>
    <xmx:QnedYtECHspf_yPAOESfN4hb0qyRGdWkPthvgYkDqAtAH4KcM3BBMw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 06/12] console: support multiple serial console simultaneously
Date: Mon,  6 Jun 2022 05:40:18 +0200
Message-Id: <0762fece2dc26ff926d92840e64ce30167cd3260.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Previously, only one serial console could be active at a time. Using
console=com1,dbgp,vga silently ignored all but the last serial console
(in this case, only dbgp and vga were active).

Fix this by storing an array of sercon_handles (up to MAX_SERCONS
entries) instead of a single one. The value of MAX_SERCONS (4) is
arbitrary, inspired by the number of SERHND_IDX values.
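The scheme described above - an array of handles filled during command-line parsing, with every console write fanned out to each registered handle - can be sketched outside Xen as follows. This is a minimal model, not the patch itself: names such as sercon_register() and sercon_broadcast() are hypothetical stand-ins, and real output is replaced by writing into per-handle buffers so the behaviour is observable.

```c
/* Minimal model of the multi-serial-console scheme: a fixed-size array
 * of handles plus a broadcast write. Hypothetical names; not Xen code. */
#include <stdio.h>

#define MAX_SERCONS 4
#define LINE_LEN    64

static int sercon_handle[MAX_SERCONS];
static int nr_sercon_handle;

/* Register one parsed serial handle. Extras beyond MAX_SERCONS are
 * silently dropped, mirroring the patch. Returns 1 if stored, 0 if not. */
static int sercon_register(int sh)
{
    if ( nr_sercon_handle >= MAX_SERCONS )
        return 0;
    sercon_handle[nr_sercon_handle++] = sh;
    return 1;
}

/* Analogue of console_serial_puts(): write the string once per
 * registered handle. Here each "write" lands in out[i], tagged with the
 * handle, instead of going to real hardware. Returns handles written. */
static int sercon_broadcast(const char *s, char out[MAX_SERCONS][LINE_LEN])
{
    int i;

    for ( i = 0; i < nr_sercon_handle; i++ )
        snprintf(out[i], LINE_LEN, "%d:%s", sercon_handle[i], s);
    return nr_sercon_handle;
}
```

With two handles registered, a single broadcast produces one tagged copy of the message per handle, which is the observable difference from the old single-handle behaviour.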

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/console.c | 58 ++++++++++++++++++++++++++++++---------
 1 file changed, 45 insertions(+), 13 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index f9937c5134c0..f576cfdc3b62 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -113,7 +113,9 @@ static char *__read_mostly conring = _conring;
 static uint32_t __read_mostly conring_size = _CONRING_SIZE;
 static uint32_t conringc, conringp;
 
-static int __read_mostly sercon_handle = -1;
+#define MAX_SERCONS 4
+static int __read_mostly sercon_handle[MAX_SERCONS];
+static int __read_mostly nr_sercon_handle = 0;
 
 #ifdef CONFIG_X86
 /* Tristate: 0 disabled, 1 user enabled, -1 default enabled */
@@ -395,9 +397,17 @@ static unsigned int serial_rx_cons, serial_rx_prod;
 
 static void (*serial_steal_fn)(const char *, size_t nr) = early_puts;
 
+/* Redirect any console output to *fn*, if *handle* is configured as a console. */
 int console_steal(int handle, void (*fn)(const char *, size_t nr))
 {
-    if ( (handle == -1) || (handle != sercon_handle) )
+    int i;
+
+    if ( handle == -1 )
+        return 0;
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        if ( handle == sercon_handle[i] )
+            break;
+    if ( i == nr_sercon_handle )
         return 0;
 
     if ( serial_steal_fn != NULL )
@@ -415,10 +425,13 @@ void console_giveback(int id)
 
 void console_serial_puts(const char *s, size_t nr)
 {
+    int i;
+
     if ( serial_steal_fn != NULL )
         serial_steal_fn(s, nr);
     else
-        serial_puts(sercon_handle, s, nr);
+        for ( i = 0; i < nr_sercon_handle; i++ )
+            serial_puts(sercon_handle[i], s, nr);
 
     /* Copy all serial output into PV console */
     pv_console_puts(s, nr);
@@ -956,7 +969,7 @@ void guest_printk(const struct domain *d, const char *fmt, ...)
 void __init console_init_preirq(void)
 {
     char *p;
-    int sh;
+    int sh, i;
 
     serial_init_preirq();
 
@@ -977,7 +990,8 @@ void __init console_init_preirq(void)
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
         {
-            sercon_handle = sh;
+            if ( nr_sercon_handle < MAX_SERCONS )
+                sercon_handle[nr_sercon_handle++] = sh;
             serial_steal_fn = NULL;
         }
         else
@@ -996,7 +1010,8 @@ void __init console_init_preirq(void)
         opt_console_xen = 0;
 #endif
 
-    serial_set_rx_handler(sercon_handle, serial_rx);
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_set_rx_handler(sercon_handle[i], serial_rx);
     pv_console_set_rx_handler(serial_rx);
 
     /* HELLO WORLD --- start-of-day banner text. */
@@ -1014,7 +1029,8 @@ void __init console_init_preirq(void)
 
     if ( opt_sync_console )
     {
-        serial_start_sync(sercon_handle);
+        for ( i = 0; i < nr_sercon_handle; i++ )
+            serial_start_sync(sercon_handle[i]);
         add_taint(TAINT_SYNC_CONSOLE);
         printk("Console output is synchronous.\n");
         warning_add(warning_sync_console);
@@ -1121,13 +1137,19 @@ int __init console_has(const char *device)
 
 void console_start_log_everything(void)
 {
-    serial_start_log_everything(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_start_log_everything(sercon_handle[i]);
     atomic_inc(&print_everything);
 }
 
 void console_end_log_everything(void)
 {
-    serial_end_log_everything(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_end_log_everything(sercon_handle[i]);
     atomic_dec(&print_everything);
 }
 
@@ -1149,23 +1171,32 @@ void console_unlock_recursive_irqrestore(unsigned long flags)
 
 void console_force_unlock(void)
 {
+    int i;
+
     watchdog_disable();
     spin_debug_disable();
     spin_lock_init(&console_lock);
-    serial_force_unlock(sercon_handle);
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_force_unlock(sercon_handle[i]);
     console_locks_busted = 1;
     console_start_sync();
 }
 
 void console_start_sync(void)
 {
+    int i;
+
     atomic_inc(&print_everything);
-    serial_start_sync(sercon_handle);
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_start_sync(sercon_handle[i]);
 }
 
 void console_end_sync(void)
 {
-    serial_end_sync(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_end_sync(sercon_handle[i]);
     atomic_dec(&print_everything);
 }
 
@@ -1291,7 +1322,8 @@ static int suspend_steal_id;
 
 int console_suspend(void)
 {
-    suspend_steal_id = console_steal(sercon_handle, suspend_steal_fn);
+    if ( nr_sercon_handle )
+        suspend_steal_id = console_steal(sercon_handle[0], suspend_steal_fn);
     serial_suspend();
     return 0;
 }
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342164.567224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bk-0006Dj-76; Mon, 06 Jun 2022 03:40:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342164.567224; Mon, 06 Jun 2022 03:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bk-0006Ca-1Q; Mon, 06 Jun 2022 03:40:48 +0000
Received: by outflank-mailman (input) for mailman id 342164;
 Mon, 06 Jun 2022 03:40:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bi-0006AI-Qm
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:46 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 722c1c16-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:45 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 8FFC15C00E6;
 Sun,  5 Jun 2022 23:40:44 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Sun, 05 Jun 2022 23:40:44 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:43 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 722c1c16-e54a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654486844; x=1654573244; bh=eX7fSKteqe
	onuY94tGwQDaahOOYr2OCH2EyjM2W2Mdk=; b=RTUScW8EXvoifl2aFua3yOjN42
	DoDd+q7Bz+sdizPVk8CdgetgBwlorqpm6CV4vSswMnEQt1hzdm5+XUWv8glK8aP3
	jJM42MJqK7vCSCzZCH2+jXYQHgrey8hsSzmER0IAfk2XfcPCFFMaC3XJdicCk+vu
	u/AuWuZ+PjptqKMwm/TNd1vl3SQWBYYAsYnsElV4iKnORt6Zp+SM60R3w4zlHK4J
	4mcSsDCdCo24dDP5tw8fkZcCxfxUTz+eH3M7KWl21Wiv7k6120fYrOhtzHEIIsMa
	9O5WZi01vfNohSlmSJhJnOfIpjc6CfTMSZjwFXsGCiCwgQQUImmT+F9QcW/w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654486844; x=
	1654573244; bh=eX7fSKteqeonuY94tGwQDaahOOYr2OCH2EyjM2W2Mdk=; b=q
	KkPMpIyBeYquR/feUT4HEvTY3j2rB4kGyEEYqK57IYMRcH7f7dy81GXY3xfzLidO
	lmh0Uk/Eh2RB5EJ8f1ddxmFXYKQAYBKS83l6h2Bz1y+ntSzZEKQMcTOV+KaaLlMb
	1ZLnEBXvTuGxgieLWFqR9oi0cksjnZq0lIZHeKikn4NAlmDmfubAQ992ITYi9kR4
	1t62rdMP6/pBC3aQksattom/OiKqzUGK1uz4GVfExyLA0Ru7GkP0mJM8s3b50ZdO
	Gi1C5ZPQjJZFzkJzSyTrMszNXFB+8lJQdNHbHFYXX3XkVCxhUebqnBC9b8TacEMb
	wYbNwCC5rLr7GJ9sH9tHg==
X-ME-Sender: <xms:PHedYrB9lhcU2nf48qPphXHLhaSdPDz10fkHTBIN9rmJPHHWff3SJQ>
    <xme:PHedYhjANQaZZdnVDFcvoJfy_iiC6tWmjQ1lytv2AVDHiE5LexLYeEan_sS4kSawA
    gJvlH7Y4oaqxA>
X-ME-Received: <xmr:PHedYmmy-Myg-Zjr39809TmCApD8Nbf7yR4x4h5N2M4Bbia80bf8-jQy7sOEagZXIuejJ0cHiBLbQPzmtXI1ULAQMQM9TC6-tDn4zh47sDJr1gA3-L0>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtuddgjeegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:PHedYty43YGeinW2J6q9np7DyOIIF9QL6pTUqru-4Dshalhbw21thA>
    <xmx:PHedYgQike4r9v2UjlKckAGj9CPJ95bGaYTO_VdW0V1Fu8vJnmC_iw>
    <xmx:PHedYgaw8Uw1EKLzqBE-eqSCj8nJmWddyqzPpmOFk8ayMNcewp4-uw>
    <xmx:PHedYlKTWovfNv5NizIRrGXFjU8VcwF65uvZcn5qjSQDzup3klmfWw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 02/12] xue: annotate functions with cf_check
Date: Mon,  6 Jun 2022 05:40:14 +0200
Message-Id: <7f248769d2e50077b8719f7524c607d843199951.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Annotate indirectly called functions for IBT.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c | 12 ++++++------
 xen/include/xue.h      | 24 ++++++++++++------------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index 8aefae482b71..98334090c078 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -39,7 +39,7 @@ struct xue_uart {
 static struct xue_uart xue_uart;
 static struct xue_ops xue_ops;
 
-static void xue_uart_poll(void *data)
+static void cf_check xue_uart_poll(void *data)
 {
     struct serial_port *port = data;
     struct xue_uart *uart = port->uart;
@@ -56,13 +56,13 @@ static void xue_uart_poll(void *data)
     set_timer(&uart->timer, NOW() + MICROSECS(XUE_POLL_INTERVAL));
 }
 
-static void __init xue_uart_init_preirq(struct serial_port *port)
+static void __init cf_check xue_uart_init_preirq(struct serial_port *port)
 {
     struct xue_uart *uart = port->uart;
     uart->lock = &port->tx_lock;
 }
 
-static void __init xue_uart_init_postirq(struct serial_port *port)
+static void __init cf_check xue_uart_init_postirq(struct serial_port *port)
 {
     struct xue_uart *uart = port->uart;
 
@@ -71,7 +71,7 @@ static void __init xue_uart_init_postirq(struct serial_port *port)
     set_timer(&uart->timer, NOW() + MILLISECS(1));
 }
 
-static int xue_uart_tx_ready(struct serial_port *port)
+static int cf_check xue_uart_tx_ready(struct serial_port *port)
 {
     struct xue_uart *uart = port->uart;
     struct xue *xue = &uart->xue;
@@ -79,13 +79,13 @@ static int xue_uart_tx_ready(struct serial_port *port)
     return XUE_WORK_RING_CAP - xue_work_ring_size(&xue->dbc_owork);
 }
 
-static void xue_uart_putc(struct serial_port *port, char c)
+static void cf_check xue_uart_putc(struct serial_port *port, char c)
 {
     struct xue_uart *uart = port->uart;
     xue_putc(&uart->xue, c);
 }
 
-static inline void xue_uart_flush(struct serial_port *port)
+static inline void cf_check xue_uart_flush(struct serial_port *port)
 {
     s_time_t goal;
     struct xue_uart *uart = port->uart;
diff --git a/xen/include/xue.h b/xen/include/xue.h
index 66f7d66447f3..7515244f6af3 100644
--- a/xen/include/xue.h
+++ b/xen/include/xue.h
@@ -684,45 +684,45 @@ static inline void xue_sys_clflush(void *sys, void *ptr)
 #define xue_error(...) printk("xue error: " __VA_ARGS__)
 #define XUE_SYSID xue_sysid_xen
 
-static inline int xue_sys_init(void *sys) { return 1; }
-static inline void xue_sys_sfence(void *sys) { wmb(); }
-static inline void xue_sys_lfence(void *sys) { rmb(); }
-static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count) {}
-static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order) {}
+static inline int cf_check xue_sys_init(void *sys) { return 1; }
+static inline void cf_check xue_sys_sfence(void *sys) { wmb(); }
+static inline void cf_check xue_sys_lfence(void *sys) { rmb(); }
+static inline void cf_check xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count) {}
+static inline void cf_check xue_sys_free_dma(void *sys, void *addr, uint64_t order) {}
 
-static inline void xue_sys_pause(void *sys)
+static inline void cf_check xue_sys_pause(void *sys)
 {
     (void)sys;
     __asm volatile("pause" ::: "memory");
 }
 
-static inline void xue_sys_clflush(void *sys, void *ptr)
+static inline void cf_check xue_sys_clflush(void *sys, void *ptr)
 {
     (void)sys;
     __asm volatile("clflush %0" : "+m"(*(volatile char *)ptr));
 }
 
-static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+static inline void *cf_check xue_sys_alloc_dma(void *sys, uint64_t order)
 {
     return NULL;
 }
 
-static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+static inline uint32_t cf_check xue_sys_ind(void *sys, uint32_t port)
 {
     return inl(port);
 }
 
-static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t data)
+static inline void cf_check xue_sys_outd(void *sys, uint32_t port, uint32_t data)
 {
     outl(data, port);
 }
 
-static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+static inline uint64_t cf_check xue_sys_virt_to_dma(void *sys, const void *virt)
 {
     return virt_to_maddr(virt);
 }
 
-static void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t size)
+static void *cf_check xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t size)
 {
     size_t i;
 
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342168.567274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bo-0007T0-J5; Mon, 06 Jun 2022 03:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342168.567274; Mon, 06 Jun 2022 03:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bo-0007S0-EO; Mon, 06 Jun 2022 03:40:52 +0000
Received: by outflank-mailman (input) for mailman id 342168;
 Mon, 06 Jun 2022 03:40:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bm-0006LY-L8
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:50 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74e2a6e1-e54a-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 05:40:49 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 2738F5C005B;
 Sun,  5 Jun 2022 23:40:49 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Sun, 05 Jun 2022 23:40:49 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:47 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74e2a6e1-e54a-11ec-bd2c-47488cf2e6aa
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 05/12] ehci-dbgp: fix selecting n-th ehci controller
Date: Mon,  6 Jun 2022 05:40:17 +0200
Message-Id: <c24ab71e7ee5ae4f411d4070fe7d29bf07c02dc7.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The <n> in the ehci<n> option was parsed, but its value was then ignored, so
the first controller was always used.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/ehci-dbgp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/ehci-dbgp.c b/xen/drivers/char/ehci-dbgp.c
index 16c8ff394d5c..92c588ec0aa3 100644
--- a/xen/drivers/char/ehci-dbgp.c
+++ b/xen/drivers/char/ehci-dbgp.c
@@ -1480,7 +1480,7 @@ void __init ehci_dbgp_init(void)
         unsigned int num = 0;
 
         if ( opt_dbgp[4] )
-            simple_strtoul(opt_dbgp + 4, &e, 10);
+            num = simple_strtoul(opt_dbgp + 4, &e, 10);
 
         dbgp->cap = find_dbgp(dbgp, num);
         if ( !dbgp->cap )
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342163.567219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bj-0006Ab-TM; Mon, 06 Jun 2022 03:40:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342163.567219; Mon, 06 Jun 2022 03:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bj-0006AU-Qa; Mon, 06 Jun 2022 03:40:47 +0000
Received: by outflank-mailman (input) for mailman id 342163;
 Mon, 06 Jun 2022 03:40:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bi-0006AI-0I
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:46 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7021a82f-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:42 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 07AAE5C0121;
 Sun,  5 Jun 2022 23:40:41 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sun, 05 Jun 2022 23:40:41 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:39 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7021a82f-e54a-11ec-b605-df0040e90b76
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Date: Mon,  6 Jun 2022 05:40:12 +0200
Message-Id: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is integration of https://github.com/connojd/xue into mainline Xen.
This patch series includes several patches that I made in the process, some of
which are only loosely related.

The RFC status is to collect feedback on the shape of this series, specifically:

1. The actual Xue driver is a header-only library. Most of the code is in a
series of inline functions in xue.h. I kept it this way to ease integrating
Xue updates; that is also why I preserved its original code style. Is it okay,
or should I move the code to a .c file?

2. The xue.h file includes bindings for several other environments too (EFI,
Linux, ...). This is unused code, behind #ifdef. Again, I kept it to ease
updating. Should I remove it?

3. Adding the IOMMU reserved memory is necessary even when "hiding" the device
from dom0; otherwise, VT-d will deny its DMA. That's probably not the most
elegant solution, but Xen doesn't seem to have provisions for devices doing DMA
into Xen's memory.

4. To preserve authorship, I included Connor's "drivers/char: Add support for
Xue USB3 debugger" commit unmodified, and only added my changes on top. This
means that, up to that commit, the driver doesn't work yet (but it does
compile). Is that okay, or should I fold the fixes into that commit and mark
the authorship in the commit message somehow?

5. The last patch(es) allow dom0 to use the controller even while Xen uses the
DbC part. That's possible because the capability was designed specifically to
allow a separate driver to handle it, in parallel with an unmodified xHCI
driver (a separate set of registers, pretending the port is "disconnected" to
the main xHCI driver, etc.). It works with a Linux dom0, although it requires
an awful hack: re-enabling bus mastering behind dom0's back. Is it okay to
leave this functionality as is, guard it behind some cmdline option, or maybe
remove it completely?

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>

Connor Davis (1):
  drivers/char: Add support for Xue USB3 debugger

Marek Marczykowski-Górecki (11):
  xue: annotate functions with cf_check
  xue: reset XHCI ports when initializing dbc
  xue: add support for selecting specific xhci
  ehci-dbgp: fix selecting n-th ehci controller
  console: support multiple serial console simultaneously
  IOMMU: add common API for device reserved memory
  IOMMU/VT-d: wire common device reserved memory API
  IOMMU/AMD: wire common device reserved memory API
  xue: mark DMA buffers as reserved for the device
  xue: prevent dom0 (or other domain) from using the device
  xue: allow driving the reset of XHCI by a domain while Xen uses DbC

 docs/misc/xen-command-line.pandoc        |    5 +-
 xen/arch/x86/include/asm/fixmap.h        |    4 +-
 xen/arch/x86/setup.c                     |    5 +-
 xen/drivers/char/Makefile                |    1 +-
 xen/drivers/char/console.c               |   58 +-
 xen/drivers/char/ehci-dbgp.c             |    2 +-
 xen/drivers/char/xue.c                   |  197 ++-
 xen/drivers/passthrough/amd/iommu_acpi.c |   16 +-
 xen/drivers/passthrough/iommu.c          |   40 +-
 xen/drivers/passthrough/vtd/dmar.c       |  203 +-
 xen/include/xen/iommu.h                  |   11 +-
 xen/include/xue.h                        | 1942 +++++++++++++++++++++++-
 12 files changed, 2387 insertions(+), 97 deletions(-)
 create mode 100644 xen/drivers/char/xue.c
 create mode 100644 xen/include/xue.h

base-commit: 49dd52fb1311dadab29f6634d0bc1f4c022c357a
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342170.567290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3br-0007uf-Ih; Mon, 06 Jun 2022 03:40:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342170.567290; Mon, 06 Jun 2022 03:40:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3br-0007td-DR; Mon, 06 Jun 2022 03:40:55 +0000
Received: by outflank-mailman (input) for mailman id 342170;
 Mon, 06 Jun 2022 03:40:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bp-0006AI-KJ
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:53 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 767fca2c-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:52 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id D2ECA5C0067;
 Sun,  5 Jun 2022 23:40:51 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Sun, 05 Jun 2022 23:40:51 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:50 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 767fca2c-e54a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654486851; x=1654573251; bh=9Z09zPwAMV
	UfGBsTlLDLpiIwvOBt3luu8w7C4olNcaY=; b=rfKtrEAxCdTEyWbp73pYhMWN6/
	786II0vy2qb982ghSlOhqlBaiBfdCYJYHM7lo0cK4DRoSsHLQiUrNNZPOUqy4VFR
	qbwFZMTMjo+WphmQhUnz/kRGRp4yGzSUBGGbSuHUP2ylk0tpfcB+EVFKNvDSNhRy
	xEYe7WosfAUF586LGgxo24t2Qmz3cdvgClydzreRnkhodSDijzUGRwke1wQRp5kp
	awAkewluaX9KGPuI8q054T4xzynQyTjb+wgi4ARQXggI38aD86pEu63WaXY1STDN
	Gl/MQchrJepiJE/MMrvJh2ZD7Hqgc1uIMsRjbgIc2fSfWeuATXdPDV0ZhmtQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654486851; x=
	1654573251; bh=9Z09zPwAMVUfGBsTlLDLpiIwvOBt3luu8w7C4olNcaY=; b=R
	JvWzyPG/YtFe/BIlHdR7UejC82pvAzRXFlCROrJfOWORsaZN8fKD4iwMROYjCC3c
	Zo54r0ilw5MiWF6oHg1tKFc7dlmspwJOKju37OwkU9jLyTmjPQ9EuhLQE5mqKTAp
	aCGQXFK9OkmbL825GBaJFTalfCo7leWkituOOd9TPuw373W+bNPNQB1KWXKHKufc
	NkP6CqM+AeK8yKlHlUGo13nw55GbWwODYBu2S0Zu9ZHIrz9FfjSLr0d3vbFmsmJz
	VQU/d2DWlfyGq6cnAtXdu5xlTWs+t2Ne6DhkEgxEE2T/n6NmtDochWlBu858sSr2
	zyX8sJ9JEbtGLu4J7CQiQ==
X-ME-Sender: <xms:Q3edYnwO7bQVWiMN3-IOTS4I02TRTwwRJbDtUaPCEWxcXnfPJdvxZg>
    <xme:Q3edYvSuOC6k-_aqrrYZnKqnvp3yCOrTQxP_endrVJE2yUNz-wOxuljfDvJC8Fzlg
    guqFDuZkIcQTw>
X-ME-Received: <xmr:Q3edYhUOonOVpVJTpeSphw6wXeEvjyDRSuiQg8RKwL47l5SmpxBEk3F9eIH-_Zdjjx8JVJXE2kIhAYJuxOm5zU8fxo2KvBM3qT5YAqF8fwNqcq7qeLI>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtuddgjeegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedvnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:Q3edYhhJ5krkcRvyqHDidOkKuHvf0K_naO6OQdlm1TiEmQo6oSSdNg>
    <xmx:Q3edYpBzHDgx6nbwhl_IImREVH8Vcs-m4RhoAg3Cn9FkIWb-F6qsrQ>
    <xmx:Q3edYqIwl121yafWzxRtX8eOPxQkX1Bs_v2DAwndIS3rwiB8jjpS5A>
    <xmx:Q3edYsNiDMhV-eaBjxKZ0bzq7NLvx6NV6HRlgOFQFiz1zh29XycmMA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [RFC PATCH 07/12] IOMMU: add common API for device reserved memory
Date: Mon,  6 Jun 2022 05:40:19 +0200
Message-Id: <8ba3e1cbfc7ee1d0a5da54e47f33a14c526691d3.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add an API similar to the rmrr= and ivmd= command line arguments, but in
common code. This allows drivers to register reserved memory regardless
of the IOMMU vendor.

The direct motivation for this API is the xhci-dbc console driver (aka
Xue), which needs to use DMA. A future change may also unify the command
line arguments for user-supplied reserved memory, and the API may be
useful for other drivers too.

This commit only introduces the API; subsequent patches will plug it
into the appropriate places. The reserved memory ranges need to be saved
locally, because at the point when they are collected, Xen does not yet
know which IOMMU driver will be used.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/iommu.c | 40 ++++++++++++++++++++++++++++++++++-
 xen/include/xen/iommu.h         | 11 +++++++++-
 2 files changed, 51 insertions(+)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9393d987c788..5c4162912359 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -654,6 +654,46 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
+#define MAX_EXTRA_RESERVED_RANGES 20
+struct extra_reserved_range {
+    xen_pfn_t start;
+    xen_ulong_t nr;
+    u32 sbdf;
+};
+static unsigned int __initdata nr_extra_reserved_ranges;
+static struct extra_reserved_range __initdata extra_reserved_ranges[MAX_EXTRA_RESERVED_RANGES];
+
+int iommu_add_extra_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr, u32 sbdf)
+{
+    unsigned int idx;
+
+    if ( nr_extra_reserved_ranges >= MAX_EXTRA_RESERVED_RANGES )
+        return -ENOMEM;
+
+    idx = nr_extra_reserved_ranges++;
+    extra_reserved_ranges[idx].start = start;
+    extra_reserved_ranges[idx].nr = nr;
+    extra_reserved_ranges[idx].sbdf = sbdf;
+    return 0;
+}
+
+int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
+{
+    unsigned int idx;
+    int ret;
+
+    for ( idx = 0; idx < nr_extra_reserved_ranges; idx++ )
+    {
+        ret = func(extra_reserved_ranges[idx].start,
+                   extra_reserved_ranges[idx].nr,
+                   extra_reserved_ranges[idx].sbdf,
+                   ctxt);
+        if ( ret < 0 )
+            return ret;
+    }
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index e0f82712ed73..97424130247c 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -296,6 +296,17 @@ struct iommu_ops {
 #endif
 };
 
+/*
+ * To be called by Xen internally, to register extra RMRR/IVMD ranges.
+ * Needs to be called before IOMMU initialization.
+ */
+extern int iommu_add_extra_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr, u32 sbdf);
+/*
+ * To be called by specific IOMMU driver during initialization,
+ * to fetch ranges registered with iommu_add_extra_reserved_device_memory().
+ */
+extern int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func, void *ctxt);
+
 #include <asm/iommu.h>
 
 #ifndef iommu_call
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342165.567240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bl-0006fR-Cz; Mon, 06 Jun 2022 03:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342165.567240; Mon, 06 Jun 2022 03:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3bl-0006fI-9p; Mon, 06 Jun 2022 03:40:49 +0000
Received: by outflank-mailman (input) for mailman id 342165;
 Mon, 06 Jun 2022 03:40:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bj-0006AI-HQ
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:40:47 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 713c33db-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:43 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 07A8A5C0145;
 Sun,  5 Jun 2022 23:40:43 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Sun, 05 Jun 2022 23:40:43 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:41 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 713c33db-e54a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm1; t=
	1654486842; x=1654573242; bh=CFmNRn1sKegjaekfi5r4EENRtjIFuajMleR
	bsmOlPR0=; b=h6ZiwRe3juzDkGKl8KWuU+hxR2Ws/Rh2huRnP+phOkHQ9Wcff9n
	xHd76tdGUgNh2kcVj9yzcN5odBCxzqLEp4Z6OZZghw2tWTFYScOq9DW+K3qTynd0
	hpjsxzR+Se4vr91gGL5+Lfe5PjLV4pqy2iESd8lTMDNn2zkBC1ZsAIyQjPEE31/B
	kJQEC5JBsKC0MHGnmoFbzvQNWKpm5kuf9Yq99VqjdP/dHhf7hiGwG9axDHnYXhc3
	0yUtxLrcF9HmhrrKmcR1CqM4/xj0GW+bSc6e8I1lwDENSMPL0qASELDpMp8VymDi
	GD/zPVmrQbNDPo8paH+SxemYcirTsa/DwQg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1654486842; x=1654573242; bh=CFmNRn1sKegja
	ekfi5r4EENRtjIFuajMleRbsmOlPR0=; b=xmBpL7b7zH4+lQwmBHNOuKui+jOGZ
	jDRScHF9h/ElsH6EhcygzZwXkAC3jr5UXEsUMQ+D9j2Ik2e4/4OiLGpjWy9HRicR
	m71MlHZdWAEhSnj1Fykllyz5SzHtML/+yQObhW47kTQDsKoZN+8HdflE8U2NrcTX
	Byka9OtxHSsddFpQXdFO0kJArlLLP60HCdRdCYe8tpXEaJWxKRhOqx4f13X27fcz
	2Hz6EwqSs2XQ5h4ixOStL70p5WphEDxVzyWzmMtSyI3+xz050UaETZa9G+Jof2Rn
	pQigHW/+V+zYPXvJYCBanaXmf2NFFQwzeooAF+6wsfkyW7cNS2gpcvogg==
X-ME-Sender: <xms:OnedYquOKpeS3rguA-xzONmv4M2WJgLVHwcQEvtsi-MmT_mZjiY4mw>
    <xme:OnedYvd85rAx-3j1kD3o3aU5CStlH3tGFtVtDyiz3iXP3Ch_DJSf1wQCiimZItIUo
    6zObBU-YTDxCw>
X-ME-Received: <xmr:OnedYlx3fcMAE9wfO0YBE2sbj6dC8MrgL2K8qT-T3aF9f9lVUpyV_lUKtb2QGkiG51ek59A0Pk1hOHnMZxPG2dsGSzwGwkBKKwE6Bd0MIkYz7JCZRJ4>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtuddgjeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhgggfestdekredtredtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhephedt
    keekieeffeekieevgfffjeejhfehtdelveehjefhieduveethedtueduudfhnecuffhomh
    grihhnpehgnhhurdhorhhgnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:OnedYlPmWwvMwmH2Ht8ikFxgOykH1Ah8du6velw8862xbIUF0zrlhg>
    <xmx:OnedYq9n1_Fau0A03SzaMWRJ3j4yl3ymBvWAyWfVRVi5CsY0ZUsiuQ>
    <xmx:OnedYtX7GFgCPZKOzOUd3_Bxg4Ftu5mMUKvlDDZWWmSRb41KouN3VA>
    <xmx:OnedYqz01Cdg46S10TOm8VoaOJeP6BnAIku4UsqtKS7P63MXVFK65g>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
Date: Mon,  6 Jun 2022 05:40:13 +0200
Message-Id: <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Connor Davis <davisc@ainfosec.com>

Xue is a cross-platform USB 3 debugger that drives the Debug
Capability (DbC) of xHCI-compliant host controllers. This patch
implements the operations needed for Xue to initialize the host
controller's DbC and communicate with it. It also implements a struct
uart_driver that uses Xue as a backend. Note that only target -> host
communication is supported for now. To use Xue as a console, add
'console=dbgp dbgp=xue' to the Xen command line.
---
 xen/arch/x86/include/asm/fixmap.h |    4 +-
 xen/arch/x86/setup.c              |    5 +-
 xen/drivers/char/Makefile         |    1 +-
 xen/drivers/char/xue.c            |  156 +++-
 xen/include/xue.h                 | 1855 ++++++++++++++++++++++++++++++-
 5 files changed, 2021 insertions(+)
 create mode 100644 xen/drivers/char/xue.c
 create mode 100644 xen/include/xue.h

diff --git a/xen/arch/x86/include/asm/fixmap.h b/xen/arch/x86/include/asm/fixmap.h
index 20746afd0a2a..bc39ffe896b1 100644
--- a/xen/arch/x86/include/asm/fixmap.h
+++ b/xen/arch/x86/include/asm/fixmap.h
@@ -25,6 +25,8 @@
 #include <asm/msi.h>
 #include <acpi/apei.h>
 
+#define MAX_XHCI_PAGES 16
+
 /*
  * Here we define all the compile-time 'special' virtual
  * addresses. The point is to have a constant address at
@@ -43,6 +45,8 @@ enum fixed_addresses {
     FIX_COM_BEGIN,
     FIX_COM_END,
     FIX_EHCI_DBGP,
+    FIX_XHCI_BEGIN,
+    FIX_XHCI_END = FIX_XHCI_BEGIN + MAX_XHCI_PAGES - 1,
 #ifdef CONFIG_XEN_GUEST
     FIX_PV_CONSOLE,
     FIX_XEN_SHARED_INFO,
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 53a73010e029..8fda7107da25 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -56,6 +56,8 @@
 #include <asm/microcode.h>
 #include <asm/pv/domain.h>
 
+#include <xue.h>
+
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool __initdata opt_nosmp;
 boolean_param("nosmp", opt_nosmp);
@@ -852,6 +854,8 @@ static struct domain *__init create_dom0(const module_t *image,
 /* How much of the directmap is prebuilt at compile time. */
 #define PREBUILT_MAP_LIMIT (1 << L2_PAGETABLE_SHIFT)
 
+void __init xue_uart_init(void);
+
 void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
@@ -946,6 +950,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     ns16550.irq     = 3;
     ns16550_init(1, &ns16550);
     ehci_dbgp_init();
+    xue_uart_init();
     console_init_preirq();
 
     if ( pvh_boot )
diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
index 14e67cf072d7..292d1bbd78d3 100644
--- a/xen/drivers/char/Makefile
+++ b/xen/drivers/char/Makefile
@@ -11,5 +11,6 @@ obj-$(CONFIG_HAS_EHCI) += ehci-dbgp.o
 obj-$(CONFIG_HAS_IMX_LPUART) += imx-lpuart.o
 obj-$(CONFIG_ARM) += arm-uart.o
 obj-y += serial.o
+obj-y += xue.o
 obj-$(CONFIG_XEN_GUEST) += xen_pv_console.o
 obj-$(CONFIG_PV_SHIM) += consoled.o
diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
new file mode 100644
index 000000000000..8aefae482b71
--- /dev/null
+++ b/xen/drivers/char/xue.c
@@ -0,0 +1,156 @@
+/*
+ * drivers/char/xue.c
+ *
+ * Xen port for the xue debugger
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2019 Assured Information Security.
+ */
+
+#include <xen/delay.h>
+#include <xen/types.h>
+#include <asm/string.h>
+#include <asm/system.h>
+#include <xen/serial.h>
+#include <xen/timer.h>
+#include <xen/param.h>
+#include <xue.h>
+
+#define XUE_POLL_INTERVAL 100 /* us */
+
+struct xue_uart {
+    struct xue xue;
+    struct timer timer;
+    spinlock_t *lock;
+};
+
+static struct xue_uart xue_uart;
+static struct xue_ops xue_ops;
+
+static void xue_uart_poll(void *data)
+{
+    struct serial_port *port = data;
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+    unsigned long flags = 0;
+
+    if ( spin_trylock_irqsave(&port->tx_lock, flags) )
+    {
+        xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+        spin_unlock_irqrestore(&port->tx_lock, flags);
+    }
+
+    serial_tx_interrupt(port, guest_cpu_user_regs());
+    set_timer(&uart->timer, NOW() + MICROSECS(XUE_POLL_INTERVAL));
+}
+
+static void __init xue_uart_init_preirq(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+    uart->lock = &port->tx_lock;
+}
+
+static void __init xue_uart_init_postirq(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+
+    serial_async_transmit(port);
+    init_timer(&uart->timer, xue_uart_poll, port, 0);
+    set_timer(&uart->timer, NOW() + MILLISECS(1));
+}
+
+static int xue_uart_tx_ready(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+
+    return XUE_WORK_RING_CAP - xue_work_ring_size(&xue->dbc_owork);
+}
+
+static void xue_uart_putc(struct serial_port *port, char c)
+{
+    struct xue_uart *uart = port->uart;
+    xue_putc(&uart->xue, c);
+}
+
+static inline void xue_uart_flush(struct serial_port *port)
+{
+    s_time_t goal;
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+
+    xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+
+    goal = NOW() + MICROSECS(XUE_POLL_INTERVAL);
+    if ( uart->timer.expires > goal )
+        set_timer(&uart->timer, goal);
+}
+
+static struct uart_driver xue_uart_driver = {
+    .init_preirq = xue_uart_init_preirq,
+    .init_postirq = xue_uart_init_postirq,
+    .endboot = NULL,
+    .suspend = NULL,
+    .resume = NULL,
+    .tx_ready = xue_uart_tx_ready,
+    .putc = xue_uart_putc,
+    .flush = xue_uart_flush,
+    .getc = NULL
+};
+
+static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_erst_segment erst __aligned(64);
+static struct xue_dbc_ctx ctx __aligned(64);
+static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static char str_buf[XUE_PAGE_SIZE] __aligned(64);
+static char __initdata opt_dbgp[30];
+
+string_param("dbgp", opt_dbgp);
+
+void __init xue_uart_init(void)
+{
+    struct xue_uart *uart = &xue_uart;
+    struct xue *xue = &uart->xue;
+
+    if ( strncmp(opt_dbgp, "xue", 3) )
+        return;
+
+    memset(xue, 0, sizeof(*xue));
+    memset(&xue_ops, 0, sizeof(xue_ops));
+
+    xue->dbc_ctx = &ctx;
+    xue->dbc_erst = &erst;
+    xue->dbc_ering.trb = evt_trb;
+    xue->dbc_oring.trb = out_trb;
+    xue->dbc_iring.trb = in_trb;
+    xue->dbc_owork.buf = wrk_buf;
+    xue->dbc_str = str_buf;
+
+    xue->dma_allocated = 1;
+    xue->sysid = xue_sysid_xen;
+    xue_open(xue, &xue_ops, NULL);
+
+    serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
+}
+
+void xue_uart_dump(void)
+{
+    struct xue_uart *uart = &xue_uart;
+    struct xue *xue = &uart->xue;
+
+    xue_dump(xue);
+}
diff --git a/xen/include/xue.h b/xen/include/xue.h
new file mode 100644
index 000000000000..66f7d66447f3
--- /dev/null
+++ b/xen/include/xue.h
@@ -0,0 +1,1855 @@
+/*
+ * Copyright (C) 2019 Assured Information Security, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef XUE_H
+#define XUE_H
+
+/* @cond */
+
+#define XUE_PAGE_SIZE 4096ULL
+
+/* Supported xHC PCI configurations */
+#define XUE_XHC_CLASSC 0xC0330ULL
+#define XUE_XHC_VEN_INTEL 0x8086ULL
+#define XUE_XHC_DEV_Z370 0xA2AFULL
+#define XUE_XHC_DEV_Z390 0xA36DULL
+#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
+#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
+#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL
+
+/* DbC idVendor and idProduct */
+#define XUE_DBC_VENDOR 0x1D6B
+#define XUE_DBC_PRODUCT 0x0010
+#define XUE_DBC_PROTOCOL 0x0000
+
+/* DCCTRL fields */
+#define XUE_CTRL_DCR 0
+#define XUE_CTRL_HOT 2
+#define XUE_CTRL_HIT 3
+#define XUE_CTRL_DRC 4
+#define XUE_CTRL_DCE 31
+
+/* DCPORTSC fields */
+#define XUE_PSC_PED 1
+#define XUE_PSC_CSC 17
+#define XUE_PSC_PRC 21
+#define XUE_PSC_PLC 22
+#define XUE_PSC_CEC 23
+
+#define XUE_PSC_ACK_MASK                                                       \
+    ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
+     (1UL << XUE_PSC_CEC))
+
+static inline int known_xhc(uint32_t dev_ven)
+{
+    switch (dev_ven) {
+    case (XUE_XHC_DEV_Z370 << 16) | XUE_XHC_VEN_INTEL:
+    case (XUE_XHC_DEV_Z390 << 16) | XUE_XHC_VEN_INTEL:
+    case (XUE_XHC_DEV_WILDCAT_POINT << 16) | XUE_XHC_VEN_INTEL:
+    case (XUE_XHC_DEV_SUNRISE_POINT << 16) | XUE_XHC_VEN_INTEL:
+    case (XUE_XHC_DEV_CANNON_POINT << 16) | XUE_XHC_VEN_INTEL:
+        return 1;
+    default:
+        return 0;
+    }
+}
+
+/* Xue system id */
+enum {
+    xue_sysid_linux,
+    xue_sysid_windows,
+    xue_sysid_efi,
+    xue_sysid_xen,
+    xue_sysid_test
+};
+
+/* Userspace testing */
+#if defined(XUE_TEST)
+#include <cstdint>
+#include <cstdio>
+
+#define xue_debug(...) printf("xue debug: " __VA_ARGS__)
+#define xue_alert(...) printf("xue alert: " __VA_ARGS__)
+#define xue_error(...) printf("xue error: " __VA_ARGS__)
+#define XUE_SYSID xue_sysid_test
+
+extern "C" {
+static inline int xue_sys_init(void *) { return 1; }
+static inline void xue_sys_sfence(void *) {}
+static inline void xue_sys_lfence(void *) {}
+static inline void xue_sys_pause(void *) {}
+static inline void xue_sys_clflush(void *, void *) {}
+static inline void *xue_sys_map_xhc(void *, uint64_t, uint64_t) { return NULL; }
+static inline void xue_sys_unmap_xhc(void *sys, void *, uint64_t) {}
+static inline void *xue_sys_alloc_dma(void *, uint64_t) { return NULL; }
+static inline void xue_sys_free_dma(void *sys, void *, uint64_t) {}
+static inline void xue_sys_outd(void *sys, uint32_t, uint32_t) {}
+static inline uint32_t xue_sys_ind(void *, uint32_t) { return 0; }
+static inline uint64_t xue_sys_virt_to_dma(void *, const void *virt)
+{
+    return (uint64_t)virt;
+}
+}
+
+#endif
+
+/* Bareflank VMM */
+#if defined(VMM)
+#include <arch/intel_x64/barrier.h>
+#include <arch/intel_x64/pause.h>
+#include <arch/x64/cache.h>
+#include <arch/x64/portio.h>
+#include <cstdio>
+#include <debug/serial/serial_ns16550a.h>
+#include <memory_manager/arch/x64/cr3.h>
+#include <memory_manager/memory_manager.h>
+#include <string>
+
+static_assert(XUE_PAGE_SIZE == BAREFLANK_PAGE_SIZE);
+
+#define xue_printf(...)                                                        \
+    do {                                                                       \
+        char buf[256] { 0 };                                                   \
+        snprintf(buf, 256, __VA_ARGS__);                                       \
+        for (int i = 0; i < 256; i++) {                                        \
+            if (buf[i]) {                                                      \
+                bfvmm::DEFAULT_COM_DRIVER::instance()->write(buf[i]);          \
+            } else {                                                           \
+                break;                                                         \
+            }                                                                  \
+        }                                                                      \
+    } while (0)
+
+#define xue_debug(...) xue_printf("xue debug: " __VA_ARGS__)
+#define xue_alert(...) xue_printf("xue alert: " __VA_ARGS__)
+#define xue_error(...) xue_printf("xue error: " __VA_ARGS__)
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline int xue_sys_init(void *) { return 1; }
+static inline void xue_sys_sfence(void *) { ::intel_x64::wmb(); }
+static inline void xue_sys_lfence(void *) { ::intel_x64::rmb(); }
+static inline void xue_sys_pause(void *) { _pause(); }
+static inline void xue_sys_clflush(void *, void *ptr) { _clflush(ptr); }
+
+static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+{
+    (void)sys;
+    return g_mm->virtptr_to_physint((void *)virt);
+}
+
+static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+{
+    (void)sys;
+    return g_mm->alloc(XUE_PAGE_SIZE << order);
+}
+
+static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order)
+{
+    (void)sys;
+    (void)order;
+
+    g_mm->free(addr);
+}
+
+static inline void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t count)
+{
+    (void)sys;
+
+    void *virt = g_mm->alloc_map(count);
+
+    for (uint64_t i = 0U; i < count; i += XUE_PAGE_SIZE) {
+        using attr_t = bfvmm::x64::cr3::mmap::attr_type;
+        using mem_t = bfvmm::x64::cr3::mmap::memory_type;
+
+        g_cr3->map_4k((uint64_t)virt + i, phys + i, attr_t::read_write,
+                      mem_t::uncacheable);
+    }
+
+    return virt;
+}
+
+static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count)
+{
+    (void)sys;
+
+    for (uint64_t i = 0U; i < count; i += XUE_PAGE_SIZE) {
+        g_cr3->unmap((uint64_t)virt + i);
+    }
+
+    g_mm->free_map(virt);
+}
+
+static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t data)
+{
+    (void)sys;
+    _outd(port, data);
+}
+
+static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+{
+    (void)sys;
+    return _ind(port);
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Linux driver */
+#if defined(MODULE) && defined(__linux__)
+#include <asm/cacheflush.h>
+#include <asm/io.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#define xue_debug(...) printk(KERN_DEBUG "xue debug: " __VA_ARGS__)
+#define xue_alert(...) printk(KERN_ALERT "xue alert: " __VA_ARGS__)
+#define xue_error(...) printk(KERN_ERR "xue error: " __VA_ARGS__)
+#define XUE_SYSID xue_sysid_linux
+
+static inline int xue_sys_init(void *sys) { return 1; }
+static inline void xue_sys_sfence(void *sys) { wmb(); }
+static inline void xue_sys_lfence(void *sys) { rmb(); }
+static inline void xue_sys_clflush(void *sys, void *ptr) { clflush(ptr); }
+
+static inline void xue_sys_pause(void *sys)
+{
+    (void)sys;
+    __asm volatile("pause" ::: "memory");
+}
+
+static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+{
+    return (void *)__get_free_pages(GFP_KERNEL | GFP_DMA, order);
+}
+
+static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order)
+{
+    free_pages((unsigned long)addr, order);
+}
+
+static inline void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t count)
+{
+    return ioremap(phys, (long unsigned int)count);
+}
+
+static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count)
+{
+    (void)count;
+    iounmap((volatile void *)virt);
+}
+
+static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t data)
+{
+    outl(data, port);
+}
+
+static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+{
+    return inl((int32_t)port);
+}
+
+static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+{
+    return virt_to_phys((volatile void *)virt);
+}
+
+#endif
+
+/* Windows driver */
+#if defined(_WIN32)
+#include <basetsd.h>
+typedef INT8 int8_t;
+typedef INT16 int16_t;
+typedef INT32 int32_t;
+typedef INT64 int64_t;
+typedef UINT8 uint8_t;
+typedef UINT16 uint16_t;
+typedef UINT32 uint32_t;
+typedef UINT64 uint64_t;
+typedef UINT_PTR uintptr_t;
+typedef INT_PTR intptr_t;
+
+#define XUE_SYSID xue_sysid_windows
+
+#define xue_debug(...)                                                         \
+    DbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,                         \
+               "xue debug: " __VA_ARGS__)
+#define xue_alert(...)                                                         \
+    DbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,                         \
+               "xue alert: " __VA_ARGS__)
+#define xue_error(...)                                                         \
+    DbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_ERROR_LEVEL,                        \
+               "xue error: " __VA_ARGS__)
+
+static inline int xue_sys_init(void *sys)
+{
+    (void)sys;
+
+    xue_error("Xue cannot be used from windows drivers");
+    return 0;
+}
+
+static inline void xue_sys_sfence(void *sys)
+{
+    (void)sys;
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline void xue_sys_lfence(void *sys)
+{
+    (void)sys;
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline void xue_sys_pause(void *sys)
+{
+    (void)sys;
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+{
+    (void)sys;
+    (void)order;
+
+    xue_error("Xue cannot be used from windows drivers");
+    return NULL;
+}
+
+static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order)
+{
+    (void)sys;
+    (void)addr;
+    (void)order;
+
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t count)
+{
+    (void)sys;
+    (void)phys;
+    (void)count;
+
+    xue_error("Xue cannot be used from windows drivers");
+    return NULL;
+}
+
+static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count)
+{
+    (void)sys;
+    (void)virt;
+    (void)count;
+
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t data)
+{
+    (void)sys;
+    (void)port;
+    (void)data;
+
+    xue_error("Xue cannot be used from windows drivers");
+}
+
+static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+{
+    (void)sys;
+    (void)port;
+
+    xue_error("Xue cannot be used from windows drivers");
+    return 0U;
+}
+
+static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+{
+    (void)sys;
+    (void)virt;
+
+    xue_error("Xue cannot be used from Windows drivers");
+    return 0U;
+}
+
+#endif
+
+/* UEFI driver (based on gnuefi) */
+#if defined(EFI)
+#include <efilib.h>
+
+#define xue_debug(...) Print(L"xue debug: " __VA_ARGS__)
+#define xue_alert(...) Print(L"xue alert: " __VA_ARGS__)
+#define xue_error(...) Print(L"xue error: " __VA_ARGS__)
+#define XUE_SYSID xue_sysid_efi
+
+/* NOTE: see xue_alloc_dma for the number of buffers created by alloc_dma */
+#define XUE_DMA_DESC_CAP 7
+
+struct xue_efi_dma {
+    UINTN pages;
+    EFI_PHYSICAL_ADDRESS dma_addr;
+    VOID *cpu_addr;
+    VOID *mapping;
+};
+
+struct xue_efi {
+    EFI_HANDLE img_hand;
+    EFI_HANDLE pci_hand;
+    EFI_PCI_IO *pci_io;
+    struct xue_efi_dma dma_desc[XUE_DMA_DESC_CAP];
+};
+
+static inline int xue_sys_init(void *sys)
+{
+    EFI_STATUS rc;
+    EFI_HANDLE *hand;
+    UINTN nr_hand;
+    UINTN i;
+
+    struct xue_efi *efi = (struct xue_efi *)sys;
+    ZeroMem((VOID *)&efi->dma_desc, sizeof(efi->dma_desc));
+
+    rc = LibLocateHandle(ByProtocol, &PciIoProtocol, NULL, &nr_hand, &hand);
+    if (EFI_ERROR(rc)) {
+        xue_error("LocateHandle failed: 0x%llx\n", rc);
+        return 0;
+    }
+
+    for (i = 0; i < nr_hand; i++) {
+        UINT32 dev_ven;
+        EFI_PCI_IO *pci_io = NULL;
+
+        rc = gBS->OpenProtocol(hand[i], &PciIoProtocol, (VOID **)&pci_io,
+                               efi->img_hand, NULL,
+                               EFI_OPEN_PROTOCOL_GET_PROTOCOL);
+        if (EFI_ERROR(rc)) {
+            continue;
+        }
+
+        rc = pci_io->Pci.Read(pci_io, EfiPciIoWidthUint32, 0, 1, &dev_ven);
+        if (EFI_ERROR(rc)) {
+            gBS->CloseProtocol(hand[i], &PciIoProtocol, efi->img_hand, NULL);
+            continue;
+        }
+
+        if (known_xhc(dev_ven)) {
+            efi->pci_hand = hand[i];
+            efi->pci_io = pci_io;
+            return 1;
+        }
+    }
+
+    xue_error("Failed to open PCI_IO_PROTOCOL on any known xHC\n");
+    return 0;
+}
+
+static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+{
+    const EFI_ALLOCATE_TYPE atype = AllocateAnyPages;
+    const EFI_MEMORY_TYPE mtype = EfiRuntimeServicesData;
+    const UINTN attrs = EFI_PCI_ATTRIBUTE_MEMORY_CACHED;
+    const UINTN pages = 1UL << order;
+
+    struct xue_efi_dma *dma = NULL;
+    struct xue_efi *efi = (struct xue_efi *)sys;
+    EFI_PCI_IO *pci = efi->pci_io;
+    EFI_STATUS rc = 0;
+    VOID *addr = NULL;
+    UINTN i = 0;
+
+    for (; i < XUE_DMA_DESC_CAP; i++) {
+        dma = &efi->dma_desc[i];
+        if (!dma->cpu_addr) {
+            break;
+        }
+        dma = NULL;
+    }
+
+    if (!dma) {
+        xue_error("Out of DMA descriptors\n");
+        return NULL;
+    }
+
+    rc = pci->AllocateBuffer(pci, atype, mtype, pages, &addr, attrs);
+    if (EFI_ERROR(rc)) {
+        xue_error("AllocateBuffer failed: 0x%llx\n", rc);
+        return NULL;
+    }
+
+    dma->pages = pages;
+    dma->cpu_addr = addr;
+
+    return addr;
+}
+
+static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order)
+{
+    (void)order;
+
+    struct xue_efi_dma *dma = NULL;
+    struct xue_efi *efi = (struct xue_efi *)sys;
+    EFI_PCI_IO *pci = efi->pci_io;
+    EFI_STATUS rc = 0;
+    UINTN i = 0;
+
+    for (; i < XUE_DMA_DESC_CAP; i++) {
+        dma = &efi->dma_desc[i];
+        if (dma->cpu_addr == addr) {
+            break;
+        }
+        dma = NULL;
+    }
+
+    if (!dma) {
+        return;
+    }
+
+    if (dma->mapping) {
+        rc = pci->Unmap(pci, dma->mapping);
+        if (EFI_ERROR(rc)) {
+            xue_error("pci->Unmap failed: 0x%llx\n", rc);
+        }
+    }
+
+    rc = pci->FreeBuffer(pci, dma->pages, addr);
+    if (EFI_ERROR(rc)) {
+        xue_error("FreeBuffer failed: 0x%llx\n", rc);
+    }
+
+    ZeroMem((VOID *)dma, sizeof(*dma));
+}
+
+static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+{
+    UINTN i = 0;
+    UINTN offset = 0;
+    UINTN needed = 0;
+    UINTN mapped = 0;
+    struct xue_efi *efi = (struct xue_efi *)sys;
+    struct xue_efi_dma *dma = NULL;
+    EFI_PHYSICAL_ADDRESS dma_addr = 0;
+    EFI_PCI_IO *pci = efi->pci_io;
+    EFI_STATUS rc = 0;
+    VOID *mapping = NULL;
+
+    for (; i < XUE_DMA_DESC_CAP; i++) {
+        dma = &efi->dma_desc[i];
+        UINTN p = 0;
+
+        for (; p < dma->pages; p++) {
+            UINTN addr = (UINTN)dma->cpu_addr + (p * XUE_PAGE_SIZE);
+            if ((UINTN)virt == addr) {
+                offset = addr - (UINTN)dma->cpu_addr;
+                goto found;
+            }
+        }
+
+        dma = NULL;
+    }
+
+    if (!dma) {
+        xue_error("CPU addr 0x%llx not found in any DMA descriptor\n",
+                  (UINTN)virt);
+        return 0;
+    }
+
+found:
+    if (dma->dma_addr && dma->mapping) {
+        return dma->dma_addr + offset;
+    }
+
+    needed = dma->pages << EFI_PAGE_SHIFT;
+    mapped = needed;
+    rc = pci->Map(pci, EfiPciIoOperationBusMasterCommonBuffer, (void *)virt,
+                  &mapped, &dma_addr, &mapping);
+    if (EFI_ERROR(rc) || mapped != needed) {
+        xue_error("pci->Map failed: rc: 0x%llx, mapped: %llu, needed: %llu\n",
+                  rc, mapped, needed);
+        return 0;
+    }
+
+    dma->dma_addr = dma_addr;
+    dma->mapping = mapping;
+
+    if ((const void *)dma_addr != virt) {
+        xue_alert("Non-identity DMA mapping: dma: 0x%llx cpu: 0x%llx\n",
+                  dma_addr, (UINTN)virt);
+    }
+
+    return dma_addr;
+}
+
+static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t val)
+{
+    (void)sys;
+
+    /* "a" places val in eax; "Nd" places port in dx (or an imm8) */
+    __asm volatile("outl %0, %1" : : "a"(val), "Nd"((uint16_t)port));
+}
+
+static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+{
+    (void)sys;
+    uint32_t ret;
+
+    /* "Nd" places port in dx; the 32-bit result comes back in eax */
+    __asm volatile("inl %1, %0" : "=a"(ret) : "Nd"((uint16_t)port));
+    return ret;
+}
+
+static inline void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t count)
+{
+    (void)sys;
+    (void)count;
+
+    return (void *)phys;
+}
+
+static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count)
+{
+    (void)sys;
+    (void)virt;
+    (void)count;
+}
+
+static inline void xue_sys_sfence(void *sys)
+{
+    (void)sys;
+    __asm volatile("sfence" ::: "memory");
+}
+
+static inline void xue_sys_lfence(void *sys)
+{
+    (void)sys;
+    __asm volatile("lfence" ::: "memory");
+}
+
+static inline void xue_sys_pause(void *sys)
+{
+    (void)sys;
+    __asm volatile("pause" ::: "memory");
+}
+
+static inline void xue_sys_clflush(void *sys, void *ptr)
+{
+    (void)sys;
+    __asm volatile("clflush %0" : "+m"(*(volatile char *)ptr));
+}
+
+#endif
+
+#if defined(__XEN__) && !defined(VMM)
+
+#include <asm/fixmap.h>
+#include <asm/io.h>
+#include <xen/mm.h>
+#include <xen/types.h>
+
+#define xue_debug(...) printk("xue debug: " __VA_ARGS__)
+#define xue_alert(...) printk("xue alert: " __VA_ARGS__)
+#define xue_error(...) printk("xue error: " __VA_ARGS__)
+#define XUE_SYSID xue_sysid_xen
+
+static inline int xue_sys_init(void *sys) { return 1; }
+static inline void xue_sys_sfence(void *sys) { wmb(); }
+static inline void xue_sys_lfence(void *sys) { rmb(); }
+static inline void xue_sys_unmap_xhc(void *sys, void *virt, uint64_t count) {}
+static inline void xue_sys_free_dma(void *sys, void *addr, uint64_t order) {}
+
+static inline void xue_sys_pause(void *sys)
+{
+    (void)sys;
+    __asm volatile("pause" ::: "memory");
+}
+
+static inline void xue_sys_clflush(void *sys, void *ptr)
+{
+    (void)sys;
+    __asm volatile("clflush %0" : "+m"(*(volatile char *)ptr));
+}
+
+static inline void *xue_sys_alloc_dma(void *sys, uint64_t order)
+{
+    return NULL;
+}
+
+static inline uint32_t xue_sys_ind(void *sys, uint32_t port)
+{
+    return inl(port);
+}
+
+static inline void xue_sys_outd(void *sys, uint32_t port, uint32_t data)
+{
+    outl(data, port);
+}
+
+static inline uint64_t xue_sys_virt_to_dma(void *sys, const void *virt)
+{
+    return virt_to_maddr(virt);
+}
+
+static void *xue_sys_map_xhc(void *sys, uint64_t phys, uint64_t size)
+{
+    size_t i;
+
+    if (size != MAX_XHCI_PAGES * XUE_PAGE_SIZE) {
+        return NULL;
+    }
+
+    for (i = FIX_XHCI_END; i >= FIX_XHCI_BEGIN; i--) {
+        set_fixmap_nocache(i, phys);
+        phys += XUE_PAGE_SIZE;
+    }
+
+    /*
+     * The fixmap grows downward, so the lowest virt is
+     * at the highest index
+     */
+    return fix_to_virt(FIX_XHCI_END);
+}
+
+#endif
+
+/******************************************************************************
+ * TRB ring (summarized from the manual):
+ *
+ * TRB rings are circular queues of TRBs shared between the xHC and the driver.
+ * Each ring has one producer and one consumer. The DbC has one event
+ * ring and two transfer rings: one IN and one OUT.
+ *
+ * The DbC hardware is the producer on the event ring, and
+ * xue is the consumer. This means that event TRBs are read-only from
+ * xue's perspective.
+ *
+ * Conversely, xue is the producer of transfer TRBs on the two transfer
+ * rings: xue enqueues transfers, and the hardware dequeues
+ * them. xue reads the dequeue pointer of a transfer ring by
+ * examining the latest transfer event TRB on the event ring. The
+ * transfer event TRB contains the address of the transfer TRB that generated
+ * the event.
+ *
+ * To make each transfer ring circular, the last TRB must be a link TRB, which
+ * points to the beginning of the next queue. Note that this implementation
+ * does not support multiple segments, so each link TRB points back to the
+ * beginning of its own segment.
+ ******************************************************************************/
+
+/* TRB types */
+enum {
+    xue_trb_norm = 1,
+    xue_trb_link = 6,
+    xue_trb_tfre = 32,
+    xue_trb_psce = 34
+};
+
+/* TRB completion codes */
+enum { xue_trb_cc_success = 1, xue_trb_cc_trb_err = 5 };
+
+/* DbC endpoint types */
+enum { xue_ep_bulk_out = 2, xue_ep_bulk_in = 6 };
+
+/* DMA/MMIO structures */
+#pragma pack(push, 1)
+struct xue_trb {
+    uint64_t params;
+    uint32_t status;
+    uint32_t ctrl;
+};
+
+struct xue_erst_segment {
+    uint64_t base;
+    uint16_t size;
+    uint8_t rsvdz[6];
+};
+
+#define XUE_CTX_SIZE 16
+#define XUE_CTX_BYTES (XUE_CTX_SIZE * 4)
+
+struct xue_dbc_ctx {
+    uint32_t info[XUE_CTX_SIZE];
+    uint32_t ep_out[XUE_CTX_SIZE];
+    uint32_t ep_in[XUE_CTX_SIZE];
+};
+
+struct xue_dbc_reg {
+    uint32_t id;
+    uint32_t db;
+    uint32_t erstsz;
+    uint32_t rsvdz;
+    uint64_t erstba;
+    uint64_t erdp;
+    uint32_t ctrl;
+    uint32_t st;
+    uint32_t portsc;
+    uint32_t rsvdp;
+    uint64_t cp;
+    uint32_t ddi1;
+    uint32_t ddi2;
+};
+#pragma pack(pop)
+
+#define XUE_TRB_MAX_TFR (XUE_PAGE_SIZE << 4)
+#define XUE_TRB_PER_PAGE (XUE_PAGE_SIZE / sizeof(struct xue_trb))
+
+/* Defines the size in bytes of TRB rings as 2^XUE_TRB_RING_ORDER * 4096 */
+#ifndef XUE_TRB_RING_ORDER
+#define XUE_TRB_RING_ORDER 4
+#endif
+#define XUE_TRB_RING_CAP (XUE_TRB_PER_PAGE * (1ULL << XUE_TRB_RING_ORDER))
+#define XUE_TRB_RING_BYTES (XUE_TRB_RING_CAP * sizeof(struct xue_trb))
+#define XUE_TRB_RING_MASK (XUE_TRB_RING_BYTES - 1U)
+
+struct xue_trb_ring {
+    struct xue_trb *trb; /* Array of TRBs */
+    uint32_t enq; /* The offset of the enqueue ptr */
+    uint32_t deq; /* The offset of the dequeue ptr */
+    uint8_t cyc; /* Cycle state toggled on each wrap-around */
+    uint8_t db; /* Doorbell target */
+};
+
+#define XUE_DB_OUT 0x0
+#define XUE_DB_IN 0x1
+#define XUE_DB_INVAL 0xFF
+
+/* Defines the size in bytes of work rings as 2^XUE_WORK_RING_ORDER * 4096 */
+#ifndef XUE_WORK_RING_ORDER
+#define XUE_WORK_RING_ORDER 3
+#endif
+#define XUE_WORK_RING_CAP (XUE_PAGE_SIZE * (1ULL << XUE_WORK_RING_ORDER))
+#define XUE_WORK_RING_BYTES XUE_WORK_RING_CAP
+
+#if XUE_WORK_RING_CAP > XUE_TRB_MAX_TFR
+#error "XUE_WORK_RING_ORDER must be at most 4"
+#endif
+
+struct xue_work_ring {
+    uint8_t *buf;
+    uint32_t enq;
+    uint32_t deq;
+    uint64_t dma;
+};
+
+/* @endcond */
+
+/**
+ * Set of system-specific operations required by xue to initialize and
+ * control the DbC. An instance of this structure must be passed to
+ * xue_open. Any field that is NULL will default to the xue_sys_*
+ * implementation defined for the target platform. <em>Any non-NULL field will
+ * simply be called</em>.
+ */
+struct xue_ops {
+    /**
+     * Perform system-specific init operations
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @return != 0 iff successful
+     */
+    int (*init)(void *sys);
+
+    /**
+     * Allocate pages for read/write DMA
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param order allocate 2^order pages
+     * @return a cpu-relative virtual address for accessing the DMA buffer
+     */
+    void *(*alloc_dma)(void *sys, uint64_t order);
+
+    /**
+     * Free pages previously allocated with alloc_dma
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param addr the cpu-relative address of the DMA range to free
+     * @param order the order of the set of pages to free
+     */
+    void (*free_dma)(void *sys, void *addr, uint64_t order);
+
+    /**
+     * Map in the xHC MMIO region as uncacheable memory
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param phys the value from the xHC's BAR
+     * @param size the number of bytes to map in
+     * @return the mapped virtual address
+     */
+    void *(*map_xhc)(void *sys, uint64_t phys, uint64_t size);
+
+    /**
+     * Unmap xHC MMIO region
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param virt the MMIO address to unmap
+     * @param size the number of bytes to unmap
+     */
+    void (*unmap_xhc)(void *sys, void *virt, uint64_t size);
+
+    /**
+     * Write 32 bits to IO port
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param port the port to write to
+     * @param data the data to write
+     */
+    void (*outd)(void *sys, uint32_t port, uint32_t data);
+
+    /**
+     * Read 32 bits from IO port
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param port the port to read from
+     * @return the data read from the port
+     */
+    uint32_t (*ind)(void *sys, uint32_t port);
+
+    /**
+     * Translate a virtual address to a DMA address
+     *
+     * @param sys a pointer to a system-specific data structure
+     * @param virt the address returned from a previous alloc_dma call
+     * @return the resulting bus-relative DMA address
+     */
+    uint64_t (*virt_to_dma)(void *sys, const void *virt);
+
+    /**
+     * Perform a write memory barrier
+     * @param sys a pointer to a system-specific data structure
+     */
+    void (*sfence)(void *sys);
+
+    /**
+     * Perform a read memory barrier
+     * @param sys a pointer to a system-specific data structure
+     */
+    void (*lfence)(void *sys);
+
+    /**
+     * Pause CPU execution
+     * @param sys a pointer to a system-specific data structure
+     */
+    void (*pause)(void *sys);
+
+    /**
+     * Flush the cache line at the given address
+     * @param sys a pointer to a system-specific data structure
+     * @param ptr the address to flush
+     */
+    void (*clflush)(void *sys, void *ptr);
+};
+
+/* @cond */
+
+struct xue {
+    struct xue_ops *ops;
+    void *sys;
+
+    struct xue_dbc_reg *dbc_reg;
+    struct xue_dbc_ctx *dbc_ctx;
+    struct xue_erst_segment *dbc_erst;
+    struct xue_trb_ring dbc_ering;
+    struct xue_trb_ring dbc_oring;
+    struct xue_trb_ring dbc_iring;
+    struct xue_work_ring dbc_owork;
+    char *dbc_str;
+
+    uint32_t xhc_cf8;
+    uint64_t xhc_mmio_phys;
+    uint64_t xhc_mmio_size;
+    uint64_t xhc_dbc_offset;
+    void *xhc_mmio;
+
+    int dma_allocated;
+    int open;
+    int sysid;
+};
+
+static inline void *xue_mset(void *dest, int c, uint64_t size)
+{
+    uint64_t i;
+    char *d = (char *)dest;
+
+    for (i = 0; i < size; i++) {
+        d[i] = (char)c;
+    }
+
+    return dest;
+}
+
+static inline void *xue_mcpy(void *dest, const void *src, uint64_t size)
+{
+    uint64_t i;
+    char *d = (char *)dest;
+    const char *s = (const char *)src;
+
+    for (i = 0; i < size; i++) {
+        d[i] = s[i];
+    }
+
+    return dest;
+}
+
+static inline void xue_flush_range(struct xue *xue, void *ptr, uint32_t bytes)
+{
+    uint32_t i;
+
+    const uint32_t clshft = 6;
+    const uint32_t clsize = (1UL << clshft);
+    const uint32_t clmask = clsize - 1;
+
+    uint32_t lines = (bytes >> clshft);
+    lines += (bytes & clmask) != 0;
+
+    for (i = 0; i < lines; i++) {
+        xue->ops->clflush(xue->sys, (void *)((uint64_t)ptr + (i * clsize)));
+    }
+}
+
+static inline uint32_t xue_pci_read(struct xue *xue, uint32_t cf8, uint32_t reg)
+{
+    void *sys = xue->sys;
+    uint32_t addr = (cf8 & 0xFFFFFF03UL) | (reg << 2);
+
+    xue->ops->outd(sys, 0xCF8, addr);
+    return xue->ops->ind(sys, 0xCFC);
+}
+
+static inline void xue_pci_write(struct xue *xue, uint32_t cf8, uint32_t reg,
+                                 uint32_t val)
+{
+    void *sys = xue->sys;
+    uint32_t addr = (cf8 & 0xFFFFFF03UL) | (reg << 2);
+
+    xue->ops->outd(sys, 0xCF8, addr);
+    xue->ops->outd(sys, 0xCFC, val);
+}
+
+static inline int xue_init_xhc(struct xue *xue)
+{
+    uint32_t bar0;
+    uint64_t bar1;
+    uint64_t devfn;
+
+    struct xue_ops *ops = xue->ops;
+    void *sys = xue->sys;
+    xue->xhc_cf8 = 0;
+
+    /*
+     * Search PCI bus 0 for the xHC. All the host controllers supported so far
+     * are part of the chipset and are on bus 0.
+     */
+    for (devfn = 0; devfn < 256; devfn++) {
+        uint32_t dev = (devfn & 0xF8) >> 3;
+        uint32_t fun = devfn & 0x07;
+        uint32_t cf8 = (1UL << 31) | (dev << 11) | (fun << 8);
+        uint32_t hdr = (xue_pci_read(xue, cf8, 3) & 0xFF0000U) >> 16;
+
+        if (hdr == 0 || hdr == 0x80) {
+            if ((xue_pci_read(xue, cf8, 2) >> 8) == XUE_XHC_CLASSC) {
+                xue->xhc_cf8 = cf8;
+                break;
+            }
+        }
+    }
+
+    if (!xue->xhc_cf8) {
+        xue_error("Compatible xHC not found on bus 0\n");
+        return 0;
+    }
+
+    /* ...we found it, so parse the BAR and map the registers */
+    bar0 = xue_pci_read(xue, xue->xhc_cf8, 4);
+    bar1 = xue_pci_read(xue, xue->xhc_cf8, 5);
+
+    /* IO BARs not allowed; BAR must be 64-bit */
+    if ((bar0 & 0x1) != 0 || ((bar0 & 0x6) >> 1) != 2) {
+        return 0;
+    }
+
+    xue_pci_write(xue, xue->xhc_cf8, 4, 0xFFFFFFFF);
+    xue->xhc_mmio_size = ~(xue_pci_read(xue, xue->xhc_cf8, 4) & 0xFFFFFFF0) + 1;
+    xue_pci_write(xue, xue->xhc_cf8, 4, bar0);
+
+    xue->xhc_mmio_phys = (bar0 & 0xFFFFFFF0) | (bar1 << 32);
+    xue->xhc_mmio = ops->map_xhc(sys, xue->xhc_mmio_phys, xue->xhc_mmio_size);
+
+    return xue->xhc_mmio != NULL;
+}
+
+/**
+ * The first register of the debug capability is found by traversing the
+ * host controller's capability list (xcap) until a capability
+ * with ID = 0xA is found. The xHCI capability list begins at address
+ * mmio + (HCCPARAMS1[31:16] << 2)
+ */
+static inline struct xue_dbc_reg *xue_find_dbc(struct xue *xue)
+{
+    uint32_t *xcap;
+    uint32_t next;
+    uint32_t id;
+    uint8_t *mmio = (uint8_t *)xue->xhc_mmio;
+    uint32_t *hccp1 = (uint32_t *)(mmio + 0x10);
+    const uint32_t DBC_ID = 0xA;
+
+    /**
+     * Paranoid check against a zero value. The spec mandates that
+     * at least one "supported protocol" capability must be implemented,
+     * so this condition should never be true.
+     */
+    if ((*hccp1 & 0xFFFF0000) == 0) {
+        return NULL;
+    }
+
+    xcap = (uint32_t *)(mmio + (((*hccp1 & 0xFFFF0000) >> 16) << 2));
+    next = (*xcap & 0xFF00) >> 8;
+    id = *xcap & 0xFF;
+
+    /**
+     * Table 7-1 states that 'next' is relative to
+     * the current value of xcap and is a dword offset.
+     */
+    while (id != DBC_ID && next) {
+        xcap += next;
+        id = *xcap & 0xFF;
+        next = (*xcap & 0xFF00) >> 8;
+    }
+
+    if (id != DBC_ID) {
+        return NULL;
+    }
+
+    xue->xhc_dbc_offset = (uint64_t)xcap - (uint64_t)mmio;
+    return (struct xue_dbc_reg *)xcap;
+}
+
+/**
+ * Fields with the same interpretation for every TRB type (section 4.11.1).
+ * These are the fields defined in the TRB template, minus the ENT bit. That
+ * bit is the toggle cycle bit in link TRBs, so it shouldn't be in the
+ * template.
+ */
+static inline uint32_t xue_trb_cyc(const struct xue_trb *trb)
+{
+    return trb->ctrl & 0x1;
+}
+
+static inline uint32_t xue_trb_type(const struct xue_trb *trb)
+{
+    return (trb->ctrl & 0xFC00) >> 10;
+}
+
+static inline void xue_trb_set_cyc(struct xue_trb *trb, uint32_t c)
+{
+    trb->ctrl &= ~0x1UL;
+    trb->ctrl |= c;
+}
+
+static inline void xue_trb_set_type(struct xue_trb *trb, uint32_t t)
+{
+    trb->ctrl &= ~0xFC00UL;
+    trb->ctrl |= (t << 10);
+}
+
+/* Fields for normal TRBs */
+static inline void xue_trb_norm_set_buf(struct xue_trb *trb, uint64_t addr)
+{
+    trb->params = addr;
+}
+
+static inline void xue_trb_norm_set_len(struct xue_trb *trb, uint32_t len)
+{
+    trb->status &= ~0x1FFFFUL;
+    trb->status |= len;
+}
+
+static inline void xue_trb_norm_set_ioc(struct xue_trb *trb)
+{
+    trb->ctrl |= 0x20;
+}
+
+/**
+ * Fields for Transfer Event TRBs (see section 6.4.2.1). Note that event
+ * TRBs are read-only from software
+ */
+static inline uint64_t xue_trb_tfre_ptr(const struct xue_trb *trb)
+{
+    return trb->params;
+}
+
+static inline uint32_t xue_trb_tfre_cc(const struct xue_trb *trb)
+{
+    return trb->status >> 24;
+}
+
+/* Fields for link TRBs (section 6.4.4.1) */
+static inline void xue_trb_link_set_rsp(struct xue_trb *trb, uint64_t rsp)
+{
+    trb->params = rsp;
+}
+
+static inline void xue_trb_link_set_tc(struct xue_trb *trb)
+{
+    trb->ctrl |= 0x2;
+}
+
+static inline void xue_trb_ring_init(const struct xue *xue,
+                                     struct xue_trb_ring *ring, int producer,
+                                     int doorbell)
+{
+    xue_mset(ring->trb, 0, XUE_TRB_RING_CAP * sizeof(ring->trb[0]));
+
+    ring->enq = 0;
+    ring->deq = 0;
+    ring->cyc = 1;
+    ring->db = (uint8_t)doorbell;
+
+    /*
+     * Producer implies transfer ring, so we have to place a
+     * link TRB at the end that points back to trb[0]
+     */
+    if (producer) {
+        struct xue_trb *trb = &ring->trb[XUE_TRB_RING_CAP - 1];
+        xue_trb_set_type(trb, xue_trb_link);
+        xue_trb_link_set_tc(trb);
+        xue_trb_link_set_rsp(trb, xue->ops->virt_to_dma(xue->sys, ring->trb));
+    }
+}
+
+static inline int xue_trb_ring_full(const struct xue_trb_ring *ring)
+{
+    return ((ring->enq + 1) & (XUE_TRB_RING_CAP - 1)) == ring->deq;
+}
+
+static inline int xue_work_ring_full(const struct xue_work_ring *ring)
+{
+    return ((ring->enq + 1) & (XUE_WORK_RING_CAP - 1)) == ring->deq;
+}
+
+static inline uint64_t xue_work_ring_size(const struct xue_work_ring *ring)
+{
+    if (ring->enq >= ring->deq) {
+        return ring->enq - ring->deq;
+    }
+
+    return XUE_WORK_RING_CAP - ring->deq + ring->enq;
+}
+
+static inline void xue_push_trb(struct xue *xue, struct xue_trb_ring *ring,
+                                uint64_t dma, uint64_t len)
+{
+    struct xue_trb trb;
+
+    if (ring->enq == XUE_TRB_RING_CAP - 1) {
+        /*
+         * We have to make sure the xHC processes the link TRB in order
+         * for wrap-around to work properly. We do this by marking the
+         * xHC as owner of the link TRB by setting the TRB's cycle bit
+         * (just like with normal TRBs).
+         */
+        struct xue_trb *link = &ring->trb[ring->enq];
+        xue_trb_set_cyc(link, ring->cyc);
+
+        ring->enq = 0;
+        ring->cyc ^= 1;
+    }
+
+    trb.params = 0;
+    trb.status = 0;
+    trb.ctrl = 0;
+
+    xue_trb_set_type(&trb, xue_trb_norm);
+    xue_trb_set_cyc(&trb, ring->cyc);
+
+    xue_trb_norm_set_buf(&trb, dma);
+    xue_trb_norm_set_len(&trb, (uint32_t)len);
+    xue_trb_norm_set_ioc(&trb);
+
+    ring->trb[ring->enq++] = trb;
+    xue_flush_range(xue, &ring->trb[ring->enq - 1], sizeof(trb));
+}
+
+static inline int64_t xue_push_work(struct xue *xue, struct xue_work_ring *ring,
+                                    const char *buf, int64_t len)
+{
+    int64_t i = 0;
+    uint32_t start = ring->enq;
+    uint32_t end = 0;
+
+    while (!xue_work_ring_full(ring) && i < len) {
+        ring->buf[ring->enq] = buf[i++];
+        ring->enq = (ring->enq + 1) & (XUE_WORK_RING_CAP - 1);
+    }
+
+    end = ring->enq;
+
+    if (end > start) {
+        xue_flush_range(xue, &ring->buf[start], end - start);
+    } else if (i > 0) {
+        xue_flush_range(xue, &ring->buf[start], XUE_WORK_RING_CAP - start);
+        xue_flush_range(xue, &ring->buf[0], end);
+    }
+
+    return i;
+}
+
+/*
+ * Note that if IN transfer support is added, then this
+ * will need to be changed; it assumes an OUT transfer ring only
+ */
+static inline void xue_pop_events(struct xue *xue)
+{
+    const int trb_shift = 4;
+
+    void *sys = xue->sys;
+    struct xue_ops *ops = xue->ops;
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+    struct xue_trb_ring *er = &xue->dbc_ering;
+    struct xue_trb_ring *tr = &xue->dbc_oring;
+    struct xue_trb *event = &er->trb[er->deq];
+    uint64_t erdp = reg->erdp;
+
+    ops->lfence(sys);
+
+    while (xue_trb_cyc(event) == er->cyc) {
+        switch (xue_trb_type(event)) {
+        case xue_trb_tfre:
+            if (xue_trb_tfre_cc(event) != xue_trb_cc_success) {
+                xue_alert("tfre error cc: %u\n", xue_trb_tfre_cc(event));
+                break;
+            }
+            tr->deq =
+                (xue_trb_tfre_ptr(event) & XUE_TRB_RING_MASK) >> trb_shift;
+            break;
+        case xue_trb_psce:
+            reg->portsc |= (XUE_PSC_ACK_MASK & reg->portsc);
+            break;
+        default:
+            break;
+        }
+
+        er->cyc = (er->deq == XUE_TRB_RING_CAP - 1) ? er->cyc ^ 1 : er->cyc;
+        er->deq = (er->deq + 1) & (XUE_TRB_RING_CAP - 1);
+        event = &er->trb[er->deq];
+    }
+
+    erdp &= ~XUE_TRB_RING_MASK;
+    erdp |= (er->deq << trb_shift);
+    ops->sfence(sys);
+    reg->erdp = erdp;
+}
+
+/**
+ * xue_init_ep
+ *
+ * Initializes the endpoint as specified in sections 7.6.3.2 and 7.6.9.2.
+ * Each endpoint is Bulk, so the MaxPStreams, LSA, HID, CErr, FE,
+ * Interval, Mult, and Max ESIT Payload fields are all 0.
+ *
+ * Max packet size: 1024
+ * Max burst size: debug mbs (from dbc_reg->ctrl register)
+ * EP type: 2 for OUT bulk, 6 for IN bulk
+ * TR dequeue ptr: physical base address of transfer ring
+ * Avg TRB length: software defined (see 4.14.1.1 for suggested defaults)
+ */
+static inline void xue_init_ep(uint32_t *ep, uint64_t mbs, uint32_t type,
+                               uint64_t ring_dma)
+{
+    xue_mset(ep, 0, XUE_CTX_BYTES);
+
+    ep[1] = (1024 << 16) | ((uint32_t)mbs << 8) | (type << 3);
+    ep[2] = (ring_dma & 0xFFFFFFFF) | 1;
+    ep[3] = ring_dma >> 32;
+    ep[4] = 3 * 1024;
+}
+
+/* Initialize the DbC info with USB string descriptor addresses */
+static inline void xue_init_strings(struct xue *xue, uint32_t *info)
+{
+    uint64_t *sda;
+
+    /* clang-format off */
+    const char strings[] = {
+        6,  3, 9, 0, 4, 0,
+        8,  3, 'A', 0, 'I', 0, 'S', 0,
+        30, 3, 'X', 0, 'u', 0, 'e', 0, ' ', 0,
+               'D', 0, 'b', 0, 'C', 0, ' ', 0,
+               'D', 0, 'e', 0, 'v', 0, 'i', 0, 'c', 0, 'e', 0,
+        4, 3, '0', 0
+    };
+    /* clang-format on */
+
+    xue_mcpy(xue->dbc_str, strings, sizeof(strings));
+
+    sda = (uint64_t *)&info[0];
+    sda[0] = xue->ops->virt_to_dma(xue->sys, xue->dbc_str);
+    sda[1] = sda[0] + 6;
+    sda[2] = sda[0] + 6 + 8;
+    sda[3] = sda[0] + 6 + 8 + 30;
+    info[8] = (4 << 24) | (30 << 16) | (8 << 8) | 6;
+}
+
+static inline void xue_dump(struct xue *xue)
+{
+    struct xue_ops *op = xue->ops;
+    struct xue_dbc_reg *r = xue->dbc_reg;
+
+    xue_debug("XUE DUMP:\n");
+    xue_debug("    ctrl: 0x%x stat: 0x%x psc: 0x%x\n", r->ctrl, r->st,
+              r->portsc);
+    xue_debug("    id: 0x%x, db: 0x%x\n", r->id, r->db);
+#if defined(__XEN__) || defined(VMM)
+    xue_debug("    erstsz: %u, erstba: 0x%lx\n", r->erstsz, r->erstba);
+    xue_debug("    erdp: 0x%lx, cp: 0x%lx\n", r->erdp, r->cp);
+#else
+    xue_debug("    erstsz: %u, erstba: 0x%llx\n", r->erstsz, r->erstba);
+    xue_debug("    erdp: 0x%llx, cp: 0x%llx\n", r->erdp, r->cp);
+#endif
+    xue_debug("    ddi1: 0x%x, ddi2: 0x%x\n", r->ddi1, r->ddi2);
+    xue_debug("    erstba == virt_to_dma(erst): %d\n",
+              r->erstba == op->virt_to_dma(xue->sys, xue->dbc_erst));
+    xue_debug("    erdp == virt_to_dma(erst[0].base): %d\n",
+              r->erdp == xue->dbc_erst[0].base);
+    xue_debug("    cp == virt_to_dma(ctx): %d\n",
+              r->cp == op->virt_to_dma(xue->sys, xue->dbc_ctx));
+}
+
+static inline void xue_enable_dbc(struct xue *xue)
+{
+    void *sys = xue->sys;
+    struct xue_ops *ops = xue->ops;
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+
+    ops->sfence(sys);
+    reg->ctrl |= (1UL << XUE_CTRL_DCE);
+    ops->sfence(sys);
+
+    while ((reg->ctrl & (1UL << XUE_CTRL_DCE)) == 0) {
+        ops->pause(sys);
+    }
+
+    ops->sfence(sys);
+    reg->portsc |= (1UL << XUE_PSC_PED);
+    ops->sfence(sys);
+
+    /*
+     * TODO:
+     *
+     * There is a slight difference in behavior between enabling the DbC from
+     * pre and post-EFI. From post-EFI, if the cable is connected when the DbC
+     * is enabled, the host automatically enumerates the DbC. Pre-EFI, you
+     * have to plug the cable in after the DCE bit is set on some systems
+     * for it to enumerate.
+     *
+     * I suspect the difference is due to the state of the port prior to
+     * initializing the DbC. Section 4.19.1.2.4.2 seems like a good place to
+     * start a deeper investigation into this.
+     */
+    if (xue->sysid == xue_sysid_efi) {
+        xue_debug("Please insert the debug cable to continue...\n");
+    }
+
+    while ((reg->ctrl & (1UL << XUE_CTRL_DCR)) == 0) {
+        ops->pause(sys);
+    }
+}
+
+static inline void xue_disable_dbc(struct xue *xue)
+{
+    void *sys = xue->sys;
+    struct xue_ops *ops = xue->ops;
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+
+    reg->portsc &= ~(1UL << XUE_PSC_PED);
+    ops->sfence(sys);
+    reg->ctrl &= ~(1UL << XUE_CTRL_DCE);
+
+    while (reg->ctrl & (1UL << XUE_CTRL_DCE)) {
+        ops->pause(sys);
+    }
+}
+
+static inline int xue_init_dbc(struct xue *xue)
+{
+    uint64_t erdp = 0;
+    uint64_t out = 0;
+    uint64_t in = 0;
+    uint64_t mbs = 0;
+    struct xue_ops *op = xue->ops;
+    struct xue_dbc_reg *reg = xue_find_dbc(xue);
+
+    if (!reg) {
+        return 0;
+    }
+
+    xue->dbc_reg = reg;
+    xue_disable_dbc(xue);
+
+    xue_trb_ring_init(xue, &xue->dbc_ering, 0, XUE_DB_INVAL);
+    xue_trb_ring_init(xue, &xue->dbc_oring, 1, XUE_DB_OUT);
+    xue_trb_ring_init(xue, &xue->dbc_iring, 1, XUE_DB_IN);
+
+    erdp = op->virt_to_dma(xue->sys, xue->dbc_ering.trb);
+    if (!erdp) {
+        return 0;
+    }
+
+    xue_mset(xue->dbc_erst, 0, sizeof(*xue->dbc_erst));
+    xue->dbc_erst->base = erdp;
+    xue->dbc_erst->size = XUE_TRB_RING_CAP;
+
+    mbs = (reg->ctrl & 0xFF0000) >> 16;
+    out = op->virt_to_dma(xue->sys, xue->dbc_oring.trb);
+    in = op->virt_to_dma(xue->sys, xue->dbc_iring.trb);
+
+    xue_mset(xue->dbc_ctx, 0, sizeof(*xue->dbc_ctx));
+    xue_init_strings(xue, xue->dbc_ctx->info);
+    xue_init_ep(xue->dbc_ctx->ep_out, mbs, xue_ep_bulk_out, out);
+    xue_init_ep(xue->dbc_ctx->ep_in, mbs, xue_ep_bulk_in, in);
+
+    reg->erstsz = 1;
+    reg->erstba = op->virt_to_dma(xue->sys, xue->dbc_erst);
+    reg->erdp = erdp;
+    reg->cp = op->virt_to_dma(xue->sys, xue->dbc_ctx);
+    reg->ddi1 = (XUE_DBC_VENDOR << 16) | XUE_DBC_PROTOCOL;
+    reg->ddi2 = XUE_DBC_PRODUCT;
+
+    xue_flush_range(xue, xue->dbc_ctx, sizeof(*xue->dbc_ctx));
+    xue_flush_range(xue, xue->dbc_erst, sizeof(*xue->dbc_erst));
+    xue_flush_range(xue, xue->dbc_ering.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_oring.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_iring.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_owork.buf, XUE_WORK_RING_BYTES);
+
+    return 1;
+}
+
+static inline void xue_free(struct xue *xue)
+{
+    void *sys = xue->sys;
+    struct xue_ops *ops = xue->ops;
+
+    if (!ops->free_dma) {
+        return;
+    }
+
+    ops->free_dma(sys, xue->dbc_str, 0);
+    ops->free_dma(sys, xue->dbc_owork.buf, XUE_WORK_RING_ORDER);
+    ops->free_dma(sys, xue->dbc_iring.trb, XUE_TRB_RING_ORDER);
+    ops->free_dma(sys, xue->dbc_oring.trb, XUE_TRB_RING_ORDER);
+    ops->free_dma(sys, xue->dbc_ering.trb, XUE_TRB_RING_ORDER);
+    ops->free_dma(sys, xue->dbc_erst, 0);
+    ops->free_dma(sys, xue->dbc_ctx, 0);
+    xue->dma_allocated = 0;
+
+    ops->unmap_xhc(sys, xue->xhc_mmio, xue->xhc_mmio_size);
+}
+
+static inline int xue_alloc(struct xue *xue)
+{
+    void *sys = xue->sys;
+    struct xue_ops *ops = xue->ops;
+
+    if (xue->dma_allocated) {
+        return 1;
+    }
+
+    if (!ops->alloc_dma) {
+        return 1;
+    } else if (!ops->free_dma) {
+        return 0;
+    }
+
+    xue->dbc_ctx = (struct xue_dbc_ctx *)ops->alloc_dma(sys, 0);
+    if (!xue->dbc_ctx) {
+        return 0;
+    }
+
+    xue->dbc_erst = (struct xue_erst_segment *)ops->alloc_dma(sys, 0);
+    if (!xue->dbc_erst) {
+        goto free_ctx;
+    }
+
+    xue->dbc_ering.trb =
+        (struct xue_trb *)ops->alloc_dma(sys, XUE_TRB_RING_ORDER);
+    if (!xue->dbc_ering.trb) {
+        goto free_erst;
+    }
+
+    xue->dbc_oring.trb =
+        (struct xue_trb *)ops->alloc_dma(sys, XUE_TRB_RING_ORDER);
+    if (!xue->dbc_oring.trb) {
+        goto free_etrb;
+    }
+
+    xue->dbc_iring.trb =
+        (struct xue_trb *)ops->alloc_dma(sys, XUE_TRB_RING_ORDER);
+    if (!xue->dbc_iring.trb) {
+        goto free_otrb;
+    }
+
+    xue->dbc_owork.buf = (uint8_t *)ops->alloc_dma(sys, XUE_WORK_RING_ORDER);
+    if (!xue->dbc_owork.buf) {
+        goto free_itrb;
+    }
+
+    xue->dbc_str = (char *)ops->alloc_dma(sys, 0);
+    if (!xue->dbc_str) {
+        goto free_owrk;
+    }
+
+    xue->dma_allocated = 1;
+    return 1;
+
+free_owrk:
+    ops->free_dma(sys, xue->dbc_owork.buf, XUE_WORK_RING_ORDER);
+free_itrb:
+    ops->free_dma(sys, xue->dbc_iring.trb, XUE_TRB_RING_ORDER);
+free_otrb:
+    ops->free_dma(sys, xue->dbc_oring.trb, XUE_TRB_RING_ORDER);
+free_etrb:
+    ops->free_dma(sys, xue->dbc_ering.trb, XUE_TRB_RING_ORDER);
+free_erst:
+    ops->free_dma(sys, xue->dbc_erst, 0);
+free_ctx:
+    ops->free_dma(sys, xue->dbc_ctx, 0);
+
+    return 0;
+}
+
+#define xue_set_op(op)                                                         \
+    do {                                                                       \
+        if (!ops->op) {                                                        \
+            ops->op = xue_sys_##op;                                            \
+        }                                                                      \
+    } while (0)
+
+static inline void xue_init_ops(struct xue *xue, struct xue_ops *ops)
+{
+    xue_set_op(init);
+    xue_set_op(alloc_dma);
+    xue_set_op(free_dma);
+    xue_set_op(map_xhc);
+    xue_set_op(unmap_xhc);
+    xue_set_op(outd);
+    xue_set_op(ind);
+    xue_set_op(virt_to_dma);
+    xue_set_op(sfence);
+    xue_set_op(lfence);
+    xue_set_op(pause);
+    xue_set_op(clflush);
+
+    xue->ops = ops;
+}
+
+static inline void xue_init_work_ring(struct xue *xue,
+                                      struct xue_work_ring *wrk)
+{
+    wrk->enq = 0;
+    wrk->deq = 0;
+    wrk->dma = xue->ops->virt_to_dma(xue->sys, wrk->buf);
+}
+
+/* @endcond */
+
+/**
+ * Initialize the DbC and enable it for transfers. First map in the DbC
+ * registers from the host controller's MMIO region. Then allocate and map
+ * DMA for the event and transfer rings. Finally, enable the DbC for
+ * the host to enumerate. On success, the DbC is ready to send packets.
+ *
+ * @param xue the xue to open (!= NULL)
+ * @param ops the xue ops to use (!= NULL)
+ * @param sys the system-specific data (may be NULL)
+ * @return 1 iff xue_open succeeded
+ */
+static inline int64_t xue_open(struct xue *xue, struct xue_ops *ops, void *sys)
+{
+    if (!xue || !ops) {
+        return 0;
+    }
+
+    xue_init_ops(xue, ops);
+    xue->sys = sys;
+
+    if (!ops->init(sys)) {
+        return 0;
+    }
+
+    if (!xue_init_xhc(xue)) {
+        return 0;
+    }
+
+    if (!xue_alloc(xue)) {
+        return 0;
+    }
+
+    if (!xue_init_dbc(xue)) {
+        xue_free(xue);
+        return 0;
+    }
+
+    xue_init_work_ring(xue, &xue->dbc_owork);
+    xue_enable_dbc(xue);
+    xue->open = 1;
+
+    return 1;
+}
+
+/**
+ * Commit the pending transfer TRBs to the DbC. This notifies
+ * the DbC of any previously-queued data on the work ring and
+ * rings the doorbell.
+ *
+ * @param xue the xue to flush
+ * @param trb the ring containing the TRBs to transfer
+ * @param wrk the work ring containing data to be flushed
+ */
+static inline void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
+                             struct xue_work_ring *wrk)
+{
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+    uint32_t db = (reg->db & 0xFFFF00FF) | (trb->db << 8);
+
+    if (xue->open && !(reg->ctrl & (1UL << XUE_CTRL_DCE))) {
+        if (!xue_init_dbc(xue)) {
+            xue_free(xue);
+            return;
+        }
+
+        xue_init_work_ring(xue, &xue->dbc_owork);
+        xue_enable_dbc(xue);
+    }
+
+    xue_pop_events(xue);
+
+    if (!(reg->ctrl & (1UL << XUE_CTRL_DCR))) {
+        xue_error("DbC not configured");
+        return;
+    }
+
+    if (reg->ctrl & (1UL << XUE_CTRL_DRC)) {
+        reg->ctrl |= (1UL << XUE_CTRL_DRC);
+        reg->portsc |= (1UL << XUE_PSC_PED);
+        xue->ops->sfence(xue->sys);
+    }
+
+    if (xue_trb_ring_full(trb)) {
+        return;
+    }
+
+    if (wrk->enq == wrk->deq) {
+        return;
+    } else if (wrk->enq > wrk->deq) {
+        xue_push_trb(xue, trb, wrk->dma + wrk->deq, wrk->enq - wrk->deq);
+        wrk->deq = wrk->enq;
+    } else {
+        xue_push_trb(xue, trb, wrk->dma + wrk->deq,
+                     XUE_WORK_RING_CAP - wrk->deq);
+        wrk->deq = 0;
+        if (wrk->enq > 0 && !xue_trb_ring_full(trb)) {
+            xue_push_trb(xue, trb, wrk->dma, wrk->enq);
+            wrk->deq = wrk->enq;
+        }
+    }
+
+    xue->ops->sfence(xue->sys);
+    reg->db = db;
+}
+
+/**
+ * Queue the data referenced by the given buffer to the DbC. A transfer TRB
+ * will be created and the DbC will be notified that data is available for
+ * writing to the debug host.
+ *
+ * @param xue the xue to write to
+ * @param buf the data to write
+ * @param size the length in bytes of buf
+ * @return the number of bytes written
+ */
+static inline int64_t xue_write(struct xue *xue, const char *buf, uint64_t size)
+{
+    int64_t ret;
+
+    if (!buf || size == 0) {
+        return 0;
+    }
+
+    ret = xue_push_work(xue, &xue->dbc_owork, buf, size);
+    if (!ret) {
+        return 0;
+    }
+
+    xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+    return ret;
+}
+
+/**
+ * Queue a single character to the DbC. A transfer TRB will be created
+ * if the character is a newline and the DbC will be notified that data is
+ * available for writing to the debug host.
+ *
+ * @param xue the xue to write to
+ * @param c the character to write
+ * @return the number of bytes written
+ */
+static inline int64_t xue_putc(struct xue *xue, char c)
+{
+    if (!xue_push_work(xue, &xue->dbc_owork, &c, 1)) {
+        return 0;
+    }
+
+    if (c == '\n') {
+        xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+    }
+
+    return 1;
+}
+
+/**
+ * Disable the DbC and free DMA and MMIO resources back to the host system.
+ *
+ * @param xue the xue to close
+ */
+static inline void xue_close(struct xue *xue)
+{
+    xue_disable_dbc(xue);
+    xue_free(xue);
+    xue->open = 0;
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 03/12] xue: reset XHCI ports when initializing dbc
Date: Mon,  6 Jun 2022 05:40:15 +0200
Message-Id: <27748e5f94769a66900697e521b35b61b1da01d8.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Reset the ports to force the host system to re-enumerate devices.
Otherwise it would require the cable to be re-plugged, or would wait in
the "configuring" state indefinitely.

Trick and code copied from Linux:
drivers/usb/early/xhci-dbc.c:xdbc_start()->xdbc_reset_debug_port()

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/include/xue.h | 69 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 69 insertions(+)

diff --git a/xen/include/xue.h b/xen/include/xue.h
index 7515244f6af3..6048dcdd5509 100644
--- a/xen/include/xue.h
+++ b/xen/include/xue.h
@@ -59,6 +59,10 @@
     ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
      (1UL << XUE_PSC_CEC))
 
+#define     XUE_XHC_EXT_PORT_MAJOR(x)  (((x) >> 24) & 0xff)
+#define PORT_RESET  (1 << 4)
+#define PORT_CONNECT  (1 << 0)
+
 static inline int known_xhc(uint32_t dev_ven)
 {
     switch (dev_ven) {
@@ -1420,6 +1424,67 @@ static inline void xue_init_strings(struct xue *xue, uint32_t *info)
     info[8] = (4 << 24) | (30 << 16) | (8 << 8) | 6;
 }
 
+static inline void xue_do_reset_debug_port(struct xue *xue, u32 id, u32 count)
+{
+    uint32_t *ops_reg;
+    uint32_t *portsc;
+    u32 val, cap_length;
+    int i;
+
+    cap_length = (*(uint32_t*)xue->xhc_mmio) & 0xff;
+    ops_reg = xue->xhc_mmio + cap_length;
+
+    id--;
+    for (i = id; i < (id + count); i++) {
+        portsc = ops_reg + 0x100 + i * 0x4;
+        val = *portsc;
+        if (!(val & PORT_CONNECT))
+            *portsc = val | PORT_RESET;
+    }
+}
+
+
+static inline void xue_reset_debug_port(struct xue *xue)
+{
+    u32 val, port_offset, port_count;
+    uint32_t *xcap;
+    uint32_t next;
+    uint32_t id;
+    uint8_t *mmio = (uint8_t *)xue->xhc_mmio;
+    uint32_t *hccp1 = (uint32_t *)(mmio + 0x10);
+    const uint32_t PROTOCOL_ID = 0x2;
+
+    /**
+     * Paranoid check against a zero value. The spec mandates that
+     * at least one "supported protocol" capability must be implemented,
+     * so this should always be false.
+     */
+    if ((*hccp1 & 0xFFFF0000) == 0) {
+        return;
+    }
+
+    xcap = (uint32_t *)(mmio + (((*hccp1 & 0xFFFF0000) >> 16) << 2));
+    next = (*xcap & 0xFF00) >> 8;
+    id = *xcap & 0xFF;
+
+    /* Look for "supported protocol" capability, major revision 3 */
+    for (;next; xcap += next, id = *xcap & 0xFF, next = (*xcap & 0xFF00) >> 8) {
+        if (id != PROTOCOL_ID && next)
+            continue;
+
+        if (XUE_XHC_EXT_PORT_MAJOR(*xcap) != 0x3)
+            continue;
+
+        /* extract port offset and count from the capability structure */
+        val = *(xcap + 2);
+        port_offset = val & 0xff;
+        port_count = (val >> 8) & 0xff;
+
+        /* and reset them all */
+        xue_do_reset_debug_port(xue, port_offset, port_count);
+    }
+}
+
 static inline void xue_dump(struct xue *xue)
 {
     struct xue_ops *op = xue->ops;
@@ -1459,6 +1524,10 @@ static inline void xue_enable_dbc(struct xue *xue)
         ops->pause(sys);
     }
 
+    /* Reset ports on initial open to force re-enumeration by the host. */
+    if (!xue->open)
+        xue_reset_debug_port(xue);
+
     ops->sfence(sys);
     reg->portsc |= (1UL << XUE_PSC_PED);
     ops->sfence(sys);
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [RFC PATCH 08/12] IOMMU/VT-d: wire common device reserved memory API
Date: Mon,  6 Jun 2022 05:40:20 +0200
Message-Id: <386e7729069f427c537e2a59eb05f95427504e2e.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Re-use the rmrr= parameter handling code to process common device
reserved memory.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/vtd/dmar.c | 201 +++++++++++++++++-------------
 1 file changed, 119 insertions(+), 82 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 367304c8739c..661a182b08d9 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -861,111 +861,148 @@ static struct user_rmrr __initdata user_rmrrs[MAX_USER_RMRR];
 
 /* Macro for RMRR inclusive range formatting. */
 #define ERMRRU_FMT "[%lx-%lx]"
-#define ERMRRU_ARG(eru) eru.base_pfn, eru.end_pfn
+#define ERMRRU_ARG base_pfn, end_pfn
+
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    u32 *sbdf);
 
 static int __init add_user_rmrr(void)
 {
+    unsigned int i;
+    int ret;
+
+    for ( i = 0; i < nr_rmrr; i++ )
+    {
+        ret = add_one_user_rmrr(user_rmrrs[i].base_pfn,
+                                user_rmrrs[i].end_pfn,
+                                user_rmrrs[i].dev_count,
+                                user_rmrrs[i].sbdf);
+        if ( ret < 0 )
+            return ret;
+    }
+    return 0;
+}
+
+/* Returns 1 on success, 0 when ignoring and < 0 on error. */
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    u32 *sbdf)
+{
     struct acpi_rmrr_unit *rmrr, *rmrru;
-    unsigned int idx, seg, i;
-    unsigned long base, end;
+    unsigned int idx, seg;
+    unsigned long base_iter;
     bool overlap;
 
-    for ( i = 0; i < nr_rmrr; i++ )
+    if ( iommu_verbose )
+        printk(XENLOG_DEBUG VTDPREFIX
+               "Adding RMRR for %d device ([0]: %#x) range "ERMRRU_FMT"\n",
+               dev_count, sbdf[0], ERMRRU_ARG);
+
+    if ( base_pfn > end_pfn )
+    {
+        printk(XENLOG_ERR VTDPREFIX
+               "Invalid RMRR Range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        return 0;
+    }
+
+    if ( (end_pfn - base_pfn) >= MAX_USER_RMRR_PAGES )
     {
-        base = user_rmrrs[i].base_pfn;
-        end = user_rmrrs[i].end_pfn;
+        printk(XENLOG_ERR VTDPREFIX
+               "RMRR range "ERMRRU_FMT" exceeds "\
+               __stringify(MAX_USER_RMRR_PAGES)" pages\n",
+               ERMRRU_ARG);
+        return 0;
+    }
 
-        if ( base > end )
+    overlap = false;
+    list_for_each_entry(rmrru, &acpi_rmrr_units, list)
+    {
+        if ( pfn_to_paddr(base_pfn) <= rmrru->end_address &&
+             rmrru->base_address <= pfn_to_paddr(end_pfn) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "Invalid RMRR Range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
+                   ERMRRU_ARG,
+                   paddr_to_pfn(rmrru->base_address),
+                   paddr_to_pfn(rmrru->end_address));
+            overlap = true;
+            break;
         }
+    }
+    /* Don't add overlapping RMRR. */
+    if ( overlap )
+        return 0;
 
-        if ( (end - base) >= MAX_USER_RMRR_PAGES )
+    base_iter = base_pfn;
+    do
+    {
+        if ( !mfn_valid(_mfn(base_iter)) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "RMRR range "ERMRRU_FMT" exceeds "\
-                   __stringify(MAX_USER_RMRR_PAGES)" pages\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
+                   ERMRRU_ARG);
+            break;
         }
+    } while ( base_iter++ < end_pfn );
 
-        overlap = false;
-        list_for_each_entry(rmrru, &acpi_rmrr_units, list)
-        {
-            if ( pfn_to_paddr(base) <= rmrru->end_address &&
-                 rmrru->base_address <= pfn_to_paddr(end) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
-                       ERMRRU_ARG(user_rmrrs[i]),
-                       paddr_to_pfn(rmrru->base_address),
-                       paddr_to_pfn(rmrru->end_address));
-                overlap = true;
-                break;
-            }
-        }
-        /* Don't add overlapping RMRR. */
-        if ( overlap )
-            continue;
+    /* Invalid pfn in range as the loop ended before end_pfn was reached. */
+    if ( base_iter <= end_pfn )
+        return 0;
 
-        do
-        {
-            if ( !mfn_valid(_mfn(base)) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
-                       ERMRRU_ARG(user_rmrrs[i]));
-                break;
-            }
-        } while ( base++ < end );
+    rmrr = xzalloc(struct acpi_rmrr_unit);
+    if ( !rmrr )
+        return -ENOMEM;
 
-        /* Invalid pfn in range as the loop ended before end_pfn was reached. */
-        if ( base <= end )
-            continue;
+    rmrr->scope.devices = xmalloc_array(u16, dev_count);
+    if ( !rmrr->scope.devices )
+    {
+        xfree(rmrr);
+        return -ENOMEM;
+    }
 
-        rmrr = xzalloc(struct acpi_rmrr_unit);
-        if ( !rmrr )
-            return -ENOMEM;
+    seg = 0;
+    for ( idx = 0; idx < dev_count; idx++ )
+    {
+        rmrr->scope.devices[idx] = sbdf[idx];
+        seg |= PCI_SEG(sbdf[idx]);
+    }
+    if ( seg != PCI_SEG(sbdf[0]) )
+    {
+        printk(XENLOG_ERR VTDPREFIX
+               "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        scope_devices_free(&rmrr->scope);
+        xfree(rmrr);
+        return 0;
+    }
 
-        rmrr->scope.devices = xmalloc_array(u16, user_rmrrs[i].dev_count);
-        if ( !rmrr->scope.devices )
-        {
-            xfree(rmrr);
-            return -ENOMEM;
-        }
+    rmrr->segment = seg;
+    rmrr->base_address = pfn_to_paddr(base_pfn);
+    /* Align the end_address to the end of the page */
+    rmrr->end_address = pfn_to_paddr(end_pfn) | ~PAGE_MASK;
+    rmrr->scope.devices_cnt = dev_count;
 
-        seg = 0;
-        for ( idx = 0; idx < user_rmrrs[i].dev_count; idx++ )
-        {
-            rmrr->scope.devices[idx] = user_rmrrs[i].sbdf[idx];
-            seg |= PCI_SEG(user_rmrrs[i].sbdf[idx]);
-        }
-        if ( seg != PCI_SEG(user_rmrrs[i].sbdf[0]) )
-        {
-            printk(XENLOG_ERR VTDPREFIX
-                   "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            scope_devices_free(&rmrr->scope);
-            xfree(rmrr);
-            continue;
-        }
+    if ( register_one_rmrr(rmrr) )
+        printk(XENLOG_ERR VTDPREFIX
+               "Could not register RMMR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
 
-        rmrr->segment = seg;
-        rmrr->base_address = pfn_to_paddr(user_rmrrs[i].base_pfn);
-        /* Align the end_address to the end of the page */
-        rmrr->end_address = pfn_to_paddr(user_rmrrs[i].end_pfn) | ~PAGE_MASK;
-        rmrr->scope.devices_cnt = user_rmrrs[i].dev_count;
+    return 1;
+}
 
-        if ( register_one_rmrr(rmrr) )
-            printk(XENLOG_ERR VTDPREFIX
-                   "Could not register RMMR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-    }
+static int __init cf_check add_one_extra_rmrr(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt)
+{
+    u32 sbdf_array[] = { id };
+    return add_one_user_rmrr(start, start+nr, 1, sbdf_array);
+}
 
-    return 0;
+static int __init add_extra_rmrr(void)
+{
+    return iommu_get_extra_reserved_device_memory(add_one_extra_rmrr, NULL);
 }
 
 #include <asm/tboot.h>
@@ -1010,7 +1047,7 @@ int __init acpi_dmar_init(void)
     {
         iommu_init_ops = &intel_iommu_init_ops;
 
-        return add_user_rmrr();
+        return add_user_rmrr() || add_extra_rmrr();
     }
 
     return ret;
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:56 2022
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 04/12] xue: add support for selecting specific xhci
Date: Mon,  6 Jun 2022 05:40:16 +0200
Message-Id: <c7a261e10c611de3bd457d540524b8207f98fcfc.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Handle parameters similarly to dbgp=ehci.

Implement this by not resetting xhc_cf8 in xue_init_xhc(), but using the
value found there if it is non-zero. Additionally, add xue->xhc_num to
select the n-th controller.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 docs/misc/xen-command-line.pandoc |  5 ++++-
 xen/drivers/char/xue.c            | 17 ++++++++++++++-
 xen/include/xue.h                 | 38 +++++++++++++++++++-------------
 3 files changed, 45 insertions(+), 15 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 881fe409ac76..37a564c2386f 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -721,10 +721,15 @@ Available alternatives, with their meaning, are:
 
 ### dbgp
 > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
+> `= xue[ <integer> | @pci<bus>:<slot>.<func> ]`
 
 Specify the USB controller to use, either by instance number (when going
 over the PCI busses sequentially) or by PCI device (must be on segment 0).
 
+Use `ehci` for EHCI debug port, use `xue` for XHCI debug capability.
+Xue driver will wait indefinitely for the debug host to connect - make sure the
+cable is connected.
+
 ### debug_stack_lines
 > `= <integer>`
 
diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index 98334090c078..632141715d4d 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -125,6 +125,7 @@ void __init xue_uart_init(void)
 {
     struct xue_uart *uart = &xue_uart;
     struct xue *xue = &uart->xue;
+    const char *e;
 
     if ( strncmp(opt_dbgp, "xue", 3) )
         return;
@@ -132,6 +133,22 @@ void __init xue_uart_init(void)
     memset(xue, 0, sizeof(*xue));
     memset(&xue_ops, 0, sizeof(xue_ops));
 
+    if ( isdigit(opt_dbgp[3]) || !opt_dbgp[3] )
+    {
+        if ( opt_dbgp[3] )
+            xue->xhc_num = simple_strtoul(opt_dbgp + 3, &e, 10);
+    }
+    else if ( strncmp(opt_dbgp + 3, "@pci", 4) == 0 )
+    {
+        unsigned int bus, slot, func;
+
+        e = parse_pci(opt_dbgp + 7, NULL, &bus, &slot, &func);
+        if ( !e || *e )
+            return;
+
+        xue->xhc_cf8 = (1UL << 31) | (bus << 16) | (slot << 11) | (func << 8);
+    }
+
     xue->dbc_ctx = &ctx;
     xue->dbc_erst = &erst;
     xue->dbc_ering.trb = evt_trb;
diff --git a/xen/include/xue.h b/xen/include/xue.h
index 6048dcdd5509..b1f304958679 100644
--- a/xen/include/xue.h
+++ b/xen/include/xue.h
@@ -998,6 +998,7 @@ struct xue {
     int dma_allocated;
     int open;
     int sysid;
+    int xhc_num; /* look for n-th xhc */
 };
 
 static inline void *xue_mset(void *dest, int c, uint64_t size)
@@ -1068,24 +1069,31 @@ static inline int xue_init_xhc(struct xue *xue)
 
     struct xue_ops *ops = xue->ops;
     void *sys = xue->sys;
-    xue->xhc_cf8 = 0;
 
-    /*
-     * Search PCI bus 0 for the xHC. All the host controllers supported so far
-     * are part of the chipset and are on bus 0.
-     */
-    for (devfn = 0; devfn < 256; devfn++) {
-        uint32_t dev = (devfn & 0xF8) >> 3;
-        uint32_t fun = devfn & 0x07;
-        uint32_t cf8 = (1UL << 31) | (dev << 11) | (fun << 8);
-        uint32_t hdr = (xue_pci_read(xue, cf8, 3) & 0xFF0000U) >> 16;
-
-        if (hdr == 0 || hdr == 0x80) {
-            if ((xue_pci_read(xue, cf8, 2) >> 8) == XUE_XHC_CLASSC) {
-                xue->xhc_cf8 = cf8;
-                break;
+    if (xue->xhc_cf8 == 0) {
+        /*
+         * Search PCI bus 0 for the xHC. All the host controllers supported so far
+         * are part of the chipset and are on bus 0.
+         */
+        for (devfn = 0; devfn < 256; devfn++) {
+            uint32_t dev = (devfn & 0xF8) >> 3;
+            uint32_t fun = devfn & 0x07;
+            uint32_t cf8 = (1UL << 31) | (dev << 11) | (fun << 8);
+            uint32_t hdr = (xue_pci_read(xue, cf8, 3) & 0xFF0000U) >> 16;
+
+            if (hdr == 0 || hdr == 0x80) {
+                if ((xue_pci_read(xue, cf8, 2) >> 8) == XUE_XHC_CLASSC) {
+                    if (xue->xhc_num--)
+                        continue;
+                    xue->xhc_cf8 = cf8;
+                    break;
+                }
             }
         }
+    } else {
+        /* Verify if selected device is really xHC */
+        if ((xue_pci_read(xue, xue->xhc_cf8, 2) >> 8) != XUE_XHC_CLASSC)
+            xue->xhc_cf8 = 0;
     }
 
     if (!xue->xhc_cf8) {
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:57 2022
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [RFC PATCH 09/12] IOMMU/AMD: wire common device reserved memory API
Date: Mon,  6 Jun 2022 05:40:21 +0200
Message-Id: <caea51060b225a8ff208962c76deb59cc409ff9f.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Register common device reserved memory similarly to how the ivmd=
parameter is handled.
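The conversion the patch performs is simple: a reserved PFN range
[start, start + nr) is turned into the byte-granular address and length
that an IVMD block carries. A minimal standalone sketch (pfns_to_ivmd()
and struct ivmd_range are illustrative stand-ins, not the ACPI
structures themselves):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* x86 4 KiB pages, as assumed by the patch */

/* Mirrors just the two IVMD fields the patch derives from a PFN range. */
struct ivmd_range {
    uint64_t start_address;   /* byte address of the region start */
    uint64_t memory_length;   /* region length in bytes */
};

/* Convert a page-frame range into the byte-addressed unity-mapped
 * region registered with the AMD IOMMU. */
static struct ivmd_range pfns_to_ivmd(uint64_t start_pfn, uint64_t nr_pages)
{
    struct ivmd_range r = {
        .start_address = start_pfn << PAGE_SHIFT,
        .memory_length = nr_pages << PAGE_SHIFT,
    };
    return r;
}
```

The remaining IVMD header fields (unity/read/write flags, device ID,
block type) are constants filled in per entry, so reusing the existing
parse_ivmd_block() path keeps the command-line and driver-provided
ranges on one code path.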

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/amd/iommu_acpi.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index ac6835225bae..2a4896c05442 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -1078,6 +1078,20 @@ static inline bool_t is_ivmd_block(u8 type)
             type == ACPI_IVRS_TYPE_MEMORY_IOMMU);
 }
 
+static int __init cf_check add_one_extra_ivmd(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt)
+{
+    struct acpi_ivrs_memory ivmd;
+
+    ivmd.start_address = start << PAGE_SHIFT;
+    ivmd.memory_length = nr << PAGE_SHIFT;
+    ivmd.header.flags = ACPI_IVMD_UNITY |
+                        ACPI_IVMD_READ | ACPI_IVMD_WRITE;
+    ivmd.header.length = sizeof(ivmd);
+    ivmd.header.device_id = id;
+    ivmd.header.type = ACPI_IVRS_TYPE_MEMORY_ONE;
+    return parse_ivmd_block(&ivmd);
+}
+
 static int __init cf_check parse_ivrs_table(struct acpi_table_header *table)
 {
     const struct acpi_ivrs_header *ivrs_block;
@@ -1121,6 +1135,8 @@ static int __init cf_check parse_ivrs_table(struct acpi_table_header *table)
         AMD_IOMMU_DEBUG("IVMD: %u command line provided entries\n", nr_ivmd);
     for ( i = 0; !error && i < nr_ivmd; ++i )
         error = parse_ivmd_block(user_ivmds + i);
+    if ( !error )
+        error = iommu_get_extra_reserved_device_memory(add_one_extra_ivmd, NULL);
 
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:40:59 2022
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [RFC PATCH 10/12] xue: mark DMA buffers as reserved for the device
Date: Mon,  6 Jun 2022 05:40:22 +0200
Message-Id: <2080c30addd42a0a6d72c4608da54cbc3fe2d860.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The important part is to include those buffers in the IOMMU page tables
relevant for the USB controller. Otherwise, DbC will stop working as
soon as the IOMMU is enabled, regardless of which domain the device is
assigned to (be it Xen or dom0).
If the device is passed through to dom0 or another domain (see later
patches), that domain will effectively have access to those buffers too.
This gives such a domain yet another way to DoS the system (as is
already the case with an assigned PCI device), but also possibly a way
to steal the console ring content. Thus, such a domain should be a
trusted one.
In any case, prevent anything else from being placed on those pages by
adding artificial padding.
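The layout trick can be sketched in isolation: gather every DMA-visible
buffer into one page-aligned struct and end it with a zero-size,
page-aligned member, so sizeof() rounds up to a whole number of pages
and nothing else can share the pages handed to the IOMMU as
device-reserved. This is a minimal GCC/Clang sketch (struct dma_bufs and
its members are illustrative, not the driver's actual buffer set):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096  /* stand-in for XUE_PAGE_SIZE */

struct dma_bufs {
    /* page-aligned ring buffer, as the TRB rings in the driver are */
    uint8_t ring[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
    /* smaller 64-byte-aligned object, like the ERST/context structs */
    uint8_t work[512] __attribute__((aligned(64)));
    /*
     * Zero-size trailing member: contributes no data, but its alignment
     * pads sizeof(struct dma_bufs) up to the next page boundary, so the
     * reserved pages contain nothing but these buffers.
     */
    char pad[0] __attribute__((aligned(PAGE_SIZE)));
};
```

Because the struct occupies whole pages, the PFN_DOWN(start) /
PFN_UP(size) range registered as reserved covers exactly these buffers
and no adjacent static data.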

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c             | 46 ++++++++++++++++++++-----------
 xen/drivers/passthrough/vtd/dmar.c |  2 +-
 2 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index 632141715d4d..8863b996c619 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -26,6 +26,7 @@
 #include <xen/serial.h>
 #include <xen/timer.h>
 #include <xen/param.h>
+#include <xen/iommu.h>
 #include <xue.h>
 
 #define XUE_POLL_INTERVAL 100 /* us */
@@ -110,13 +111,21 @@ static struct uart_driver xue_uart_driver = {
     .getc = NULL
 };
 
-static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_erst_segment erst __aligned(64);
-static struct xue_dbc_ctx ctx __aligned(64);
-static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static char str_buf[XUE_PAGE_SIZE] __aligned(64);
+struct xue_dma_bufs {
+    struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_erst_segment erst __aligned(64);
+    struct xue_dbc_ctx ctx __aligned(64);
+    uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    char str_buf[XUE_PAGE_SIZE] __aligned(64);
+    /*
+     * Don't place anything else on this page - it will be
+     * DMA-reachable by the USB controller.
+     */
+    char _pad[0] __aligned(XUE_PAGE_SIZE);
+};
+static struct xue_dma_bufs xue_dma_bufs __aligned(XUE_PAGE_SIZE);
 static char __initdata opt_dbgp[30];
 
 string_param("dbgp", opt_dbgp);
@@ -149,17 +158,24 @@ void __init xue_uart_init(void)
         xue->xhc_cf8 = (1UL << 31) | (bus << 16) | (slot << 11) | (func << 8);
     }
 
-    xue->dbc_ctx = &ctx;
-    xue->dbc_erst = &erst;
-    xue->dbc_ering.trb = evt_trb;
-    xue->dbc_oring.trb = out_trb;
-    xue->dbc_iring.trb = in_trb;
-    xue->dbc_owork.buf = wrk_buf;
-    xue->dbc_str = str_buf;
+
+    xue->dbc_ctx = &xue_dma_bufs.ctx;
+    xue->dbc_erst = &xue_dma_bufs.erst;
+    xue->dbc_ering.trb = xue_dma_bufs.evt_trb;
+    xue->dbc_oring.trb = xue_dma_bufs.out_trb;
+    xue->dbc_iring.trb = xue_dma_bufs.in_trb;
+    xue->dbc_owork.buf = xue_dma_bufs.wrk_buf;
+    xue->dbc_str = xue_dma_bufs.str_buf;
 
     xue->dma_allocated = 1;
     xue->sysid = xue_sysid_xen;
-    xue_open(xue, &xue_ops, NULL);
+    if (xue_open(xue, &xue_ops, NULL))
+    {
+        iommu_add_extra_reserved_device_memory(
+                PFN_DOWN(virt_to_maddr(&xue_dma_bufs)),
+                PFN_UP(sizeof(xue_dma_bufs)),
+                (uart->xue.xhc_cf8 >> 8) & 0xffff);
+    }
 
     serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
 }
diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 661a182b08d9..2caa3e9ad1b0 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -845,7 +845,7 @@ out:
     return ret;
 }
 
-#define MAX_USER_RMRR_PAGES 16
+#define MAX_USER_RMRR_PAGES 64
 #define MAX_USER_RMRR 10
 
 /* RMRR units derived from command line rmrr option. */
-- 
git-series 0.9.1


    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedunecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:SHedYq6LMFP48lT7Aee9_514AdYmN-b6XnJ3sNcizFD_hY4mL7X2rQ>
    <xmx:SHedYm6R1e1FbPbs0z7iDzQrLVzrqxFTy11OCcOFLUbV7Hxmt-uNEg>
    <xmx:SHedYihc5hr40FSOm93vKIVLXdi57IRTuxy30e7V45GvcRgapDyLbA>
    <xmx:SXedYoSNb_UC64k8g5jBC9NvRXw9MaGtgVWWLjsG6dg3jg8Jlx42nQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 11/12] xue: prevent dom0 (or other domain) from using the device
Date: Mon,  6 Jun 2022 05:40:23 +0200
Message-Id: <74ffafdaa109f6067f2ecbc89b885a8a9cdbadaf.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Mark the config space of the device as read-only, to prevent dom0 (or
other domains) from re-configuring the USB controller while it's used
as a debug console.

This isn't strictly necessary, as the xhci and xhci-dbc drivers can
co-exist, but it is the simpler option.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index 8863b996c619..ff62b868e906 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -70,6 +70,13 @@ static void __init cf_check xue_uart_init_postirq(struct serial_port *port)
     serial_async_transmit(port);
     init_timer(&uart->timer, xue_uart_poll, port, 0);
     set_timer(&uart->timer, NOW() + MILLISECS(1));
+
+    if ( pci_ro_device(0, (uart->xue.xhc_cf8 >> 16) & 0xff,
+                          (uart->xue.xhc_cf8 >> 8) & 0xff) )
+            printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
+                   (uart->xue.xhc_cf8 >> 16) & 0xff,
+                   (uart->xue.xhc_cf8 >> 11) & 0x1f,
+                   (uart->xue.xhc_cf8 >> 8) & 0x0f);
 }
 
 static int cf_check xue_uart_tx_ready(struct serial_port *port)
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 03:41:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 03:41:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342177.567346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3by-00014j-TG; Mon, 06 Jun 2022 03:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342177.567346; Mon, 06 Jun 2022 03:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny3by-00013b-EM; Mon, 06 Jun 2022 03:41:02 +0000
Received: by outflank-mailman (input) for mailman id 342177;
 Mon, 06 Jun 2022 03:41:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ny3bw-0006AI-69
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 03:41:00 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a80778f-e54a-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 05:40:59 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 8D86E5C00D9;
 Sun,  5 Jun 2022 23:40:58 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Sun, 05 Jun 2022 23:40:58 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 5 Jun 2022 23:40:57 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a80778f-e54a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654486858; x=1654573258; bh=Ul/YTZELjd
	XhM4yte1+Tqag6FgjJfdtEGmy0W8pthfU=; b=ODhMNVSmVnU+BovLPcl9pPwYol
	vgV/TPSH8AnyApCoRV37VwaQBaTdew7GP5M3kL7Vgj23TjpnM6f9cvuBPgUBSDY/
	AUHh/R8HqIRw1IY8xqxM4s4DswLhOJxne8V65TvrVOTF6dYwm598m8p/yNN/LgfK
	NZaxJocsBshN8IJ+n7YJY43teIoW6gRKGsu/mXmZZyJ1u+ffO3ZWUcGHBw+mL/Ht
	3YW8cAVDMsrbVEL14J2upacDPWqexqz7vBheU468FlnB0dJzsXEaUYFqFJqtONvs
	JnOEG56vlKyPdnT/qycODnr2TFL0mtNgv8Z4KHADArZISoJ8mXwMJmxXOjCQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654486858; x=
	1654573258; bh=Ul/YTZELjdXhM4yte1+Tqag6FgjJfdtEGmy0W8pthfU=; b=H
	suzKn2Dd738Q1vwBxYukEroFNsufQJddh/iizV9eAY9HQy3+DB+EXVF4HtfcLq5+
	ds0kHEwJ/ou3x7dzPw0xH4hRSu/SI0I4jQlZXUtfhauoiiXNCVmKdiYFjiqo6suM
	bRwnPkVPdlo/cozd7u8zB9nCx/zNeVt4DtIZ7RMMhY5uilfqut0svYg4i02abqw2
	MoESh5Xluc9vOouB/Y9eZQKR5+nc9KjV0Tfh0KUr3olyFQSlylgY/LrUxyd0IW4V
	mORk2Zbs+ohdDYmxemPQIcLnseOX8JYHEwQcTQM1pNx9rdTvIvW7DLjC2RVc17bs
	KS4dmqvwSta545M1uRDmQ==
X-ME-Sender: <xms:SnedYgKiZmbTZd-KPzPKLCAv0KJLLoiMofrRk-Vd0eLJc27pP4zmNA>
    <xme:SnedYgLjDi0wsvijAWwe25vL0WtM4-n7E-dPUd2oPsM1-KXHsn8nMIkAaH8mYDYBd
    -DJxLcYRoC3fQ>
X-ME-Received: <xmr:SnedYgvMgojUOIbe4MHTNpgRKlZezKDkSpPQEKNPQu0YqE3RzbhEQZb1SWuHImXYin5-i1SpgwnpmNPAPTA0uwbn2TxIEaw5624zq2RsXDx2L_T1sTA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtuddgjeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedvnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:SnedYtYuCozn_drnUtE09-0-J3E-oJ_2NkahLxz-E1WnE0YbrMwf-A>
    <xmx:SnedYnb59cNmy_C4moKOkfrlt4mSuQm9z9KrBXJCdp9cIji6LvH8cg>
    <xmx:SnedYpDXiTtw3okCKQaUsXDy3fyg6p30uN5lcr_8q524u_b5R_E2qg>
    <xmx:SnedYixIaBy1t2wVMOSbn8UC9TlY8K5L74_FhCeAfUXBijF1mzroeg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 12/12] xue: allow driving the reset of XHCI by a domain while Xen uses DbC
Date: Mon,  6 Jun 2022 05:40:24 +0200
Message-Id: <2f7660330861b1c6db9520332bee20388178c162.1654486751.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

That's possible because the DbC capability was designed specifically
to allow a separate driver to handle it, in parallel with an
unmodified xhci driver (it has a separate set of registers, the port
is reported as "disconnected" to the main xhci driver, etc.). It works
with a Linux dom0, although it requires an awful hack - re-enabling
bus mastering behind dom0's back. The Linux driver does a similar
thing - see drivers/usb/early/xhci-dbc.c:xdbc_handle_events().

To avoid Linux messing with the DbC, mark this MMIO area as read-only.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c | 13 +++++++------
 xen/include/xue.h      | 10 ++++++++++
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index ff62b868e906..437ed6468630 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -27,6 +27,7 @@
 #include <xen/timer.h>
 #include <xen/param.h>
 #include <xen/iommu.h>
+#include <xen/rangeset.h>
 #include <xue.h>
 
 #define XUE_POLL_INTERVAL 100 /* us */
@@ -71,12 +72,12 @@ static void __init cf_check xue_uart_init_postirq(struct serial_port *port)
     init_timer(&uart->timer, xue_uart_poll, port, 0);
     set_timer(&uart->timer, NOW() + MILLISECS(1));
 
-    if ( pci_ro_device(0, (uart->xue.xhc_cf8 >> 16) & 0xff,
-                          (uart->xue.xhc_cf8 >> 8) & 0xff) )
-            printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
-                   (uart->xue.xhc_cf8 >> 16) & 0xff,
-                   (uart->xue.xhc_cf8 >> 11) & 0x1f,
-                   (uart->xue.xhc_cf8 >> 8) & 0x0f);
+#ifdef CONFIG_X86
+    if ( rangeset_add_range(mmio_ro_ranges,
+                PFN_DOWN(uart->xue.xhc_mmio_phys + uart->xue.xhc_dbc_offset),
+                PFN_UP(uart->xue.xhc_mmio_phys + uart->xue.xhc_dbc_offset + sizeof(*uart->xue.dbc_reg)) - 1) )
+        printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
+#endif
 }
 
 static int cf_check xue_uart_tx_ready(struct serial_port *port)
diff --git a/xen/include/xue.h b/xen/include/xue.h
index b1f304958679..87b821429fd8 100644
--- a/xen/include/xue.h
+++ b/xen/include/xue.h
@@ -1818,6 +1818,7 @@ static inline void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
 {
     struct xue_dbc_reg *reg = xue->dbc_reg;
     uint32_t db = (reg->db & 0xFFFF00FF) | (trb->db << 8);
+    uint32_t cmd;
 
     if (xue->open && !(reg->ctrl & (1UL << XUE_CTRL_DCE))) {
         if (!xue_init_dbc(xue)) {
@@ -1829,6 +1830,15 @@ static inline void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
         xue_enable_dbc(xue);
     }
 
+    /* Re-enable bus mastering, if dom0 (or other) disabled it in the meantime. */
+    cmd = xue_pci_read(xue, xue->xhc_cf8, 1);
+#define XUE_XHCI_CMD_REQUIRED (PCI_COMMAND_MEMORY|PCI_COMMAND_MASTER)
+    if ((cmd & XUE_XHCI_CMD_REQUIRED) != XUE_XHCI_CMD_REQUIRED) {
+        cmd |= XUE_XHCI_CMD_REQUIRED;
+        xue_pci_write(xue, xue->xhc_cf8, 1, cmd);
+    }
+#undef XUE_XHCI_CMD_REQUIRED
+
     xue_pop_events(xue);
 
     if (!(reg->ctrl & (1UL << XUE_CTRL_DCR))) {
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:09:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342283.567362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43m-0007p7-LY; Mon, 06 Jun 2022 04:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342283.567362; Mon, 06 Jun 2022 04:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43m-0007p0-HG; Mon, 06 Jun 2022 04:09:46 +0000
Received: by outflank-mailman (input) for mailman id 342283;
 Mon, 06 Jun 2022 04:09:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny43k-0007op-IR
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:09:44 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe06::606])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7dc59290-e54e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 06:09:42 +0200 (CEST)
Received: from AM6PR10CA0014.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:89::27)
 by HE1PR08MB2842.eurprd08.prod.outlook.com (2603:10a6:7:34::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.17; Mon, 6 Jun
 2022 04:09:39 +0000
Received: from VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:89:cafe::66) by AM6PR10CA0014.outlook.office365.com
 (2603:10a6:209:89::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT011.mail.protection.outlook.com (10.152.18.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:38 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Mon, 06 Jun 2022 04:09:37 +0000
Received: from 9632029c29a8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1C627D30-FCB2-4D6B-957F-2A2996E7B99A.1; 
 Mon, 06 Jun 2022 04:09:31 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9632029c29a8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:31 +0000
Received: from DU2PR04CA0336.eurprd04.prod.outlook.com (2603:10a6:10:2b4::15)
 by VI1PR0801MB2015.eurprd08.prod.outlook.com (2603:10a6:800:8b::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:25 +0000
Received: from DBAEUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b4:cafe::e5) by DU2PR04CA0336.outlook.office365.com
 (2603:10a6:10:2b4::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:25 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DBAEUR03FT039.mail.protection.outlook.com (100.127.142.225) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:24 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:25 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dc59290-e54e-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=MgDC/9BBLmp13PuCyZjIw88j4yl6DfzHObk1x0R9PZTfGOPa2bFQJbrb2kdH3mbw/i6VH+y1pxjnNHD5ZRyctagdhDDom8DNqoh5wLXJo/wvvDQk/1+PXMQzTr7QXEr7Kbd00M1gqp72Hta87tUUo7U0h+WyZzMCZRT5TJsq4SpD+nPAwr+pA9cdmGAajlX0Zd5gU/P+uLvLKsLJeQae3PgTpb1exhBwMBR63R200rR0q3I+f9szZGlQO2M9loYZcNkLQGo89/w+C3aUR8ETMEcKNNoylYo/S93h+mCF6XvEM2yx8k8CXXADs9DiZeHsd/b9UH8CumrzqplIvCrotA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IgqMEKoYGJ9JOofanIwPVFJx2bQLFrHzSgsSU34qiaY=;
 b=UcIIhPoOUYGZVlphrCVWoxwbjZDAhWMd28LfzEiWBZQ+FX3borRM6D63bszdOm4tuaHVY/b+VxwbW5FR4ivUrdO9Zjx2LV8GWZYycoYypXO9pfq+mzzLa1iH4rLmqcj3OPPbpT6+DiWkQrOTC7uHWKjfQeXHkoOQvATT6Lx2z/bfXSaPXb/gr9M0Zm3R74STHH8M3Aq21XuM2ueMjfRar7Lbt+I6bY19ATgQjZdsC6e1VmKYL0zS2P0VbJRHm4DKr0r+nkou5mZ00BSNUaQ8aZffy8muJgm4a3KmVDmjlN5eo7JPatC7yJq9tiM46qePASkFLUPTN/D66wNe6ceLNQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IgqMEKoYGJ9JOofanIwPVFJx2bQLFrHzSgsSU34qiaY=;
 b=CmjxrqyAyx7ZSNghRujaEIUn1XdeLBI9XsMjtTjSSMF4kcEoPj6dtNnW8BTo9i+/U7jEHMQukyjvPhGLdUyTQs88k6vyJrXF8rQdPGSLRzRo9Wrnf2fbu9hUaO5NudQea5HnipGtB8Z+g+Z7QRadlqb0Z2kroqDy/xi2Do6uQW8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: ace7788932735120
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MCJeAQPDDd36JVQpCk7rsAuhq4bgN/8oGoPvwWw+hrKJ1mNsPZIJSnu0i5H82IQyttQgtUZOw8fX72N/Wh9Ei0LALc6dwKraFLhqh/wS7zdZLSHiAmIix6DdEVqRRtHpenvWy+yvG7OfsgW4buzD9X0FHR2mOMFK16tIDhiMcp5CF/RwLVcSDpbHj4sQaJ9GaYHWAv+ZvJLWYguylkObfp+GGJRaOsR2IdwcTPN+IwjNOkLzKXUfA4dVN6iTZviFERh6NsSF5z/6rw9I2XcooTrdGT8HkGdwQebz3+C22SmFluiLk1tiB4SQTPlHwtMgW2mDm9pd9lOuDRmVZtzg4A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IgqMEKoYGJ9JOofanIwPVFJx2bQLFrHzSgsSU34qiaY=;
 b=Xt/nlZl/PGl7mlAt4y7F/O/4hQj51RPmQkgqTotYMojOUnT04s1/4n0pA+0rvD11py8ZSpZa/OFafAN/n/ukOpU7fh+mbHlnOfD3oduiYrSUIxOhp4ckh101fwqe0i/5/mk3WR/0uddqNNnZbJX75VBXCy7u04Mohh6Mvv6uxQCJ3uU0ldziM+N9hvBhlyHFyXBXNsWF6fLKeb1+aE/nOH52EsgDlqkxH+hYMqa2MD9D3sLkZCXtkd78lrncje01QXqp9dyzJmQy70b9nE5y5eKafuUGwZgbSP5kiXZE6l/537kEmkcUvzoVYTWTcUF+tit3jJpK667Wybe5slsR+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IgqMEKoYGJ9JOofanIwPVFJx2bQLFrHzSgsSU34qiaY=;
 b=CmjxrqyAyx7ZSNghRujaEIUn1XdeLBI9XsMjtTjSSMF4kcEoPj6dtNnW8BTo9i+/U7jEHMQukyjvPhGLdUyTQs88k6vyJrXF8rQdPGSLRzRo9Wrnf2fbu9hUaO5NudQea5HnipGtB8Z+g+Z7QRadlqb0Z2kroqDy/xi2Do6uQW8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v5 0/8] Device tree based NUMA support for Arm - Part#1 
Date: Mon, 6 Jun 2022 12:09:08 +0800
Message-ID: <20220606040916.122184-1-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 51d35738-af1f-4e39-8270-08da47725fb4
X-MS-TrafficTypeDiagnostic:
	VI1PR0801MB2015:EE_|VE1EUR03FT011:EE_|HE1PR08MB2842:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR08MB2842BDDFC6410E57B2C381BD9EA29@HE1PR08MB2842.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 JhEy32ixmYoxnztIsF5qehAYMe3rEwfs1LavGOlD3E9mHejBP2tFYpGS2/SI6PuMiC8TFq5qPShyzfbAb1UAeaIR5LJvOFG4xdfY2Rz5fT75g+vv/o4V/fL5yySomCv/1NjuUGNmcCFiD4nxh+lNxKNdTdCxBIH4SIYSgHLgafYDwSKhXB+rbrw3np37G7Ilpd9dUxymfTn+h0BDmqYZ+2B88jizpKaxw0qXPOcvuOJdDHD9DHlRhhgUO/m0YaAb97gYuKKDEc99c/5W2sw6H6u27IONwysEG8UHLjF3PlbjywN1gHFBXWP1MkL606D+6Z17hUYPubFEMOzP6Dhp+mnPrxvVLcYfMrpc0KNsLoB6tlkKzjH4IvCsaortDpNnqxR2UEerHQdfS9u/IOv0J9Z+Ti1MFC9FS9dRlmI+MrVm2RZ07/tLdylWjgMqfmlwDSV3ooQpx8jkgAyFbJE46mKcBq4yBHkxgramYQuYGZEL0NalsJd+YtIUH1rg+I9JR86d57/Gq9WY5I+sOo0NgWwTGsfSMuty/cNYdsDPgorxLw8EemORFxfkgGt9TVfO1AFxMx8YMTAePKOFdg4x2z1Tgx5842yru6ehRbSA9IzigrUD9jTvV2/wZ/5Egp+1uR2WqhCj4hB3e/sZkvubEG43cGlzAWnqBVZ3TNIFej8mklFhvDMJyvfkvfpttTdchgO1lsrlLqJg9Kg/uo/c1ZVdTZi6qUneRzLwpESEdWE=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(8936002)(356005)(44832011)(81166007)(54906003)(70206006)(70586007)(6916009)(5660300002)(2906002)(316002)(82310400005)(8676002)(86362001)(508600001)(4326008)(83380400001)(6666004)(186003)(2616005)(1076003)(36756003)(426003)(336012)(47076005)(7696005)(26005)(36860700001)(4743002)(21314003)(17413003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB2015
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	effd145e-5a1b-4ac7-097e-08da4772577b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	b0SOaecEayVd/vMzuEvwBOR0Khss+5QQ9RyoUV8d4MWYa4sv1o69SBZTkSgnblRdGyDsvtJNKkuIId31D71hqcGQn6pmRKg7o5sXV4HIIfih2xXbRjOTfCve/j4QxJ4PltFf8tJ297eL8g57XQt1fYfnpBZg/tB/+vnPFZQKWrJioCXZMm7EBcZRNKdhqP8yasC5yAuDtpPVGXum8UVZaB1sDe2winNXLoAiHWKFow3Dq/LcKN5VNGlOst//ubLuYgTzEN82X5eFjc8b6h0yrUBmm5XjcFF+PIYg1YIpfulfI/nYzMXFQ/ZS2PqGafQmKo9nFnLTMaaEPOfjY2btII17uUktQWPQAI8jvkmp859norclrTifjBw2n1n5Jwr8RHpP5BX/Tvg1hrlzcQYjyaSj4I6SFiV/wIsgh8Xi8WfsoS8EYQ22mOiQy4T/5NTJEvmKw5QUi1He1n9F05Adve8Ot8rIMjiE8HzV+hX9B67n7bASeKWjOlSnYisWYgfl94QR22LwAWGwMPoXXjwHkhcg5zDE9Bnd1UzWtz95F+fKXOXhNUl3+5dhhZh2QQWonMq81064C/cdxxYDgEC8nJZUN/CpXP3Mmdnl1+7C18ZI56zySgtu+aig1J0juMRPcJpm/4NqPWvwzOBPIK2wp7lvnVa1Rvtc+d8n8H6+UH4VuC+2IylMfa3Nw6CsdnvpQapyLiKUCyn84IeYKmlSAA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(36860700001)(82310400005)(6666004)(7696005)(83380400001)(26005)(4743002)(107886003)(508600001)(47076005)(2616005)(1076003)(336012)(186003)(426003)(8936002)(86362001)(81166007)(4326008)(8676002)(44832011)(70206006)(70586007)(5660300002)(2906002)(6916009)(54906003)(316002)(36756003)(17413003)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:38.2761
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 51d35738-af1f-4e39-8270-08da47725fb4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR08MB2842

(The Arm device tree based NUMA support patch set contains 35
patches. To make review easier, I split them into 3 parts:
1. Preparation: re-sort the patch series and move independent
   patches to the head of the series.
2. Move generically usable code from x86 to common.
3. Add new code to support Arm.

This series only contains the first part.)

Xen's memory allocation and scheduler modules are NUMA aware, but
only x86 implements the architecture APIs needed to support NUMA.
Arm provides a set of fake architecture APIs to stay compatible with
the NUMA-aware memory allocator and scheduler.

Arm systems worked fine as single-node NUMA systems with these fake
APIs, because there were no multi-node NUMA systems on Arm. In recent
years, however, more and more Arm devices have shipped with multiple
NUMA nodes.

So now we have a new problem. When Xen runs on these Arm devices, it
still treats them as single-node SMP systems, and the NUMA affinity
capability of the memory allocator and scheduler becomes meaningless,
because they rely on input data that does not reflect the real NUMA
layout.

Xen still assumes all memory has the same access time for all CPUs,
so it may allocate a VM's memory from NUMA nodes with different
access speeds. This difference can be amplified by workloads inside
the VM, causing performance instability and timeouts.

So in this patch series, we implement a set of NUMA APIs that use the
device tree to describe the NUMA layout. We reuse most of the x86
NUMA code to create and maintain the mapping between memory and CPUs,
and to build the distance matrix between any two NUMA nodes. Except
for ACPI and some x86-specific code, we have moved the rest to common
code. In the next stage, when we implement ACPI-based NUMA for Arm64,
we may move the ACPI NUMA code to common as well, but for now we keep
it x86-only.

This patch series has been tested and boots well on one Arm64 NUMA
machine and one HPE x86 NUMA machine.

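For context, the device-tree NUMA description this series consumes
looks roughly like the following illustrative two-node fragment,
following the Linux "numa-node-id" / "numa-distance-map-v1" binding
(addresses and unit names here are invented for the example):

```
cpus {
    cpu@0 {
        /* ... */
        numa-node-id = <0>;
    };
    cpu@100 {
        /* ... */
        numa-node-id = <1>;
    };
};

memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x80000000>;
    numa-node-id = <0>;
};

distance-map {
    compatible = "numa-distance-map-v1";
    /* <node_a node_b distance> triplets */
    distance-matrix = <0 0 10>, <0 1 20>,
                      <1 0 20>, <1 1 10>;
};
```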
---
Part1 v4->v5:
1. Remove "nd->end == end && nd->start == start" from
   conflicting_memblks.
2. Use case NO_CONFLICT instead of "default".
3. Correct wrong "node" to "pxm" in print message.
4. Remove unnecessary "else" to reduce the indent depth.
5. Convert all ranges to proper mathematical interval
   representation.
6. Fix code-style comments.
Part1 v3->v4:
1. Add indent to make ln and test to be aligned in EFI
   common makefile.
2. Drop "ERR" prefix for node conflict check enumeration,
   and remove init value.
3. Use "switch case" for enumeration, and add "default:"
4. Use "PXM" in log messages.
5. Use unsigned int for node memory block id.
6. Fix some code-style comments.
7. Use "nd->end" in node range expansion check.
Part1 v2->v3:
1. Rework EFI stub patch:
   1.1. Add a check for existing files: if a regular stub file
        exists, the links to the common/efi stub files are ignored.
   1.2. Keep stub.c in x86/efi to include common/efi/stub.c
   1.3. Restore efi_compat_xxx stub functions to x86/efi.c.
        Other architectures will not use efi_compat_xxx.
   1.4. Remove ARM_EFI dependency from ARM_64.
   1.5. Add comment for adding stub.o to EFIOBJ-y.
   1.6. Merge patch#2 and patch#3 to one patch.
2. Rename arch_have_default_dmazone to arch_want_default_dmazone
3. Use uint64_t for size in acpi_scan_nodes, to keep it
   consistent with numa_emulation.
4. Merge the interleaves checking code from a separate function
   to conflicting_memblks.
5. Use INFO level for node's without memory log message.
6. Move "xen/x86: Use ASSERT instead of VIRTUAL_BUG_ON for
   phys_to_nid" to part#2.
Part1 v1->v2:
1. Move independent patches from later to early of this series.
2. Drop the Arm copy of EFI stub.c and share the common x86
   EFI stub code with Arm.
3. Use CONFIG_ARM_EFI to replace CONFIG_EFI and remove help text
   and make CONFIG_ARM_EFI invisible.
4. Use ASSERT to replace VIRTUAL_BUG_ON in phys_to_nid.
5. Move MAX_NUMNODES from xen/numa.h to asm/numa.h for x86.
6. Extend the description of Arm's workaround for reserved DMA
   allocations, to avoid having the same discussion every time
   about arch_have_default_dmazone.
7. Update commit messages.

Wei Chen (8):
  xen: reuse x86 EFI stub functions for Arm
  xen/arm: Keep memory nodes in device tree when Xen boots from EFI
  xen: introduce an arch helper for default dma zone status
  xen: decouple NUMA from ACPI in Kconfig
  xen/arm: use !CONFIG_NUMA to keep fake NUMA API
  xen/x86: use paddr_t for addresses in NUMA node structure
  xen/x86: add detection of memory interleaves for different nodes
  xen/x86: use INFO level for node's without memory log message

 xen/arch/arm/Kconfig              |   4 +
 xen/arch/arm/Makefile             |   2 +-
 xen/arch/arm/bootfdt.c            |   8 +-
 xen/arch/arm/efi/Makefile         |   8 ++
 xen/arch/arm/efi/efi-boot.h       |  25 -----
 xen/arch/arm/include/asm/numa.h   |   6 ++
 xen/arch/x86/Kconfig              |   2 +-
 xen/arch/x86/efi/stub.c           |  32 +-----
 xen/arch/x86/include/asm/config.h |   1 -
 xen/arch/x86/include/asm/numa.h   |   9 +-
 xen/arch/x86/numa.c               |  32 +++---
 xen/arch/x86/srat.c               | 157 +++++++++++++++++++++---------
 xen/common/Kconfig                |   3 +
 xen/common/efi/efi-common.mk      |   3 +-
 xen/common/efi/stub.c             |  32 ++++++
 xen/common/page_alloc.c           |   2 +-
 xen/drivers/acpi/Kconfig          |   3 +-
 xen/drivers/acpi/Makefile         |   2 +-
 18 files changed, 199 insertions(+), 132 deletions(-)
 create mode 100644 xen/common/efi/stub.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:09:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342284.567368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43n-0007sl-02; Mon, 06 Jun 2022 04:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342284.567368; Mon, 06 Jun 2022 04:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43m-0007sE-Pj; Mon, 06 Jun 2022 04:09:46 +0000
Received: by outflank-mailman (input) for mailman id 342284;
 Mon, 06 Jun 2022 04:09:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny43l-0007op-AP
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:09:45 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20611.outbound.protection.outlook.com
 [2a01:111:f400:7d00::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e484c36-e54e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 06:09:43 +0200 (CEST)
Received: from AS9PR0301CA0030.eurprd03.prod.outlook.com
 (2603:10a6:20b:468::30) by PAXPR08MB7575.eurprd08.prod.outlook.com
 (2603:10a6:102:23d::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Mon, 6 Jun
 2022 04:09:41 +0000
Received: from VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:468:cafe::86) by AS9PR0301CA0030.outlook.office365.com
 (2603:10a6:20b:468::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT050.mail.protection.outlook.com (10.152.19.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:40 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Mon, 06 Jun 2022 04:09:39 +0000
Received: from 255bd13d042f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B9A23BA5-FF17-4655-BCDC-E795D194E9C6.1; 
 Mon, 06 Jun 2022 04:09:33 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 255bd13d042f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:33 +0000
Received: from DB9PR05CA0004.eurprd05.prod.outlook.com (2603:10a6:10:1da::9)
 by AM9PR08MB7290.eurprd08.prod.outlook.com (2603:10a6:20b:435::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:32 +0000
Received: from DBAEUR03FT015.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1da:cafe::f2) by DB9PR05CA0004.outlook.office365.com
 (2603:10a6:10:1da::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:32 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DBAEUR03FT015.mail.protection.outlook.com (100.127.142.112) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:31 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:31 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e484c36-e54e-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=TNr0nDJy/7sNDpTUonGvwMqf8fkAXncvAU69Ru2/xzzsjcUtYj57MwVe/MPbgM24s0fBHZ9iv6Ysr/A3RmPVX4/6z42AeAUn7lb+ziPLQOU7X+/VcmQyM71pTauqPsZwZpqNgdYOL6MCz3IsalafwkVLdbk1xQm9sWn2QYiTmPDYkm+3g60Res6mdNINU54ltNINiuKVwyXkvl3zF5lKxciI96gC3CNM1abCBK+2zLTc/k1dnCRY/2Z/wWuN1Im4sJXcW1ehR/VgZihyIGEyvuQO3tvT+ZTVaCSTAW8y7PC3f2SK/QY4di/6vjsX9XLGfoQl40zPuP9oP5zQxQxo3A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=eJORL2AI6zonlga9Yz9hu3+XMxnqTsol28qGqXVhfU8pGroO1RQVlO3iv3Ybp5fd1IXBgji1Yj0cGZQ08HVFyfHUU0uZkro5Ot+IYVm9mk2lKk8zlge1vlsh3k3+hjcJhjkJa/sqIUP9HtZMYNHkrOlPE2Rv3Chf+vgfQCwA1D9FUrKHI8p0aRrMuRU9saigJgddIpNyur0INewWsC0BG1L+NMQ8YpoLbUNc53xAfbKSUbJxuUZL15BM7V+e2SiMwrZqKDd8y27Y3WZrySsIL7m8z/vrDFkkDYSwCafEtuK1+TKY2Suf8jqGNPXbvsDXYuv8hINYWwXiODbm4AbyGQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=9thj/v2wtL4oVwQ/98ic3OTVm0anXvgjoDf+8B62IolFr6XWht4FvuDFdMmYlRTOJA2VfjDW140Q5PtFs/pNhwIbLUZPPtB4kFgEu8FbVRk7Lf8g3sgPd7vHUjwux/0ismwQy5744pT5TjXaf/8jBv7DE+OLirXSnyMqiuGiZsk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3f02ca4c2c1ffa36
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hvm8xSS3GZChorjhWRJBrOHsUVgeDPIcCWscxdoQSuxY6tehw547EKGkRtdvYEZ6xw4dmyknP9sSJrSy2kniQLtVQiOOvjqVFnjNFvC/S1j0klqxazexjJ72+M5DkDqXHlnzQWLDxU4zr/2rcdmkVqFxjpvaLhCP/lS2Y/rpVjEDytdEYNiqVOF3lzCi027HVIRHazU6eDbPcaO2rxRwdv8W9iXR2eZXNg3VDqqrFwuaCFYJkl2oZwOsz4EabyESTZAWTHgtR8ORVAkzjEHMpqrlekiKGGnlj66EOCRUNVOIffBRr1mpU/WmbXV8CJmpOcOOGOssZnTMj77IuEJXbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=HWTMmD7ii6tGT3gg7ZNeTmmPWNfsFeebDNX7SWLfbBTzlFk7dI/wDY5EoF7YzEX1/BpoBnqZVRNd/mlWpW4WXXZRCFlpP5pTlj//pIjywly7D+UktPzytXVRc/6658BPaKmYdP1VhnY/4/BDiexPVE5AvAHczoNmtGxvb1nj6v8O9hxqwbUBCJsP0SeU8feYMTq2qR0Lcu23vixd6QIctFLyrS5f3jePy86Kw948P1BQ73YISJhrhgBWkBdeFaxG7eZfRbp9ijBnut5VtdlZklhzNrs+bm5DvfngQnKPGfbKcWpBET+araCZpZTvLPz9p0kuKmkZVTELvYlqTwJuZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=9thj/v2wtL4oVwQ/98ic3OTVm0anXvgjoDf+8B62IolFr6XWht4FvuDFdMmYlRTOJA2VfjDW140Q5PtFs/pNhwIbLUZPPtB4kFgEu8FbVRk7Lf8g3sgPd7vHUjwux/0ismwQy5744pT5TjXaf/8jBv7DE+OLirXSnyMqiuGiZsk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 2/8] xen/arm: Keep memory nodes in device tree when Xen boots from EFI
Date: Mon, 6 Jun 2022 12:09:10 +0800
Message-ID: <20220606040916.122184-3-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: c1a36ab5-1900-453d-886a-08da477260dc
X-MS-TrafficTypeDiagnostic:
	AM9PR08MB7290:EE_|VE1EUR03FT050:EE_|PAXPR08MB7575:EE_
X-Microsoft-Antispam-PRVS:
	<PAXPR08MB757516BFD2DDE53BA82E853B9EA29@PAXPR08MB7575.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9VK3bKh8C+P/DnVk0+KnSSzZ4OpgFJ07m8tpIs8MXS6cchtt5mqOItI2BudaxnIjU0sBHbBrBGH3dwjqscEinWsfCOBem4HQhWt8zsQLm5papxFgKtGpBuFyes9OIG/9pdcFT7bQLB9Ehx36GBdLeYIm7wchPGDKDL5moA/mgu/R3qHEEVl/VX3xJYBvjYQMz8K4yWq9dg3AQoHYY45YTC7w2qS+aQQVs5N43eh418jaq8r5w17OLUM2Ep30b0YmqZV0l6Ua2iLzSQsdpoR8OfqnGrXgqJx1fT/RPBTaz07sHZoFaf8Zk0ZSUky1RQ2xQyMeJf0D3+Qnc3eSIQpR0bDXnI/EPckrBhKoHuZW0eZ+R4XGDLJ/K3GTeGJLgNuKOz9JnKl9ZuQteB/LZYXiiQyEZouT+T8HRlH+g481b3+ZPeaef3IrGibwL6gk6ANMl0ItzNnL8wURFxgjXTFZ+8F0SEJt+fJJGb9SMb7nDvRM+t++jfd9gRp4A1bqnE7PleVhDyjy/v6XfUqasidUxD9HgtRMRZOYs1aV6VochmfiwbtbrvJpynUPp3OGVlXxvb5gpBLxHVvXtV2bsjcRziIAUj1ZFsh5tjanA6GNRKKGjv0wZvYoV1WIFGFuHuzxDfquTH3PhSwoHfkGHXz6kC8kFXZUHrLOQks3qHChEJnUXIIZXfJ2M0mNGy6wjVAN77mYZUsCncLBu1Jjf2YVCQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(40470700004)(356005)(26005)(2906002)(6916009)(6666004)(54906003)(7696005)(81166007)(83380400001)(44832011)(316002)(36756003)(86362001)(36860700001)(47076005)(426003)(40460700003)(336012)(2616005)(8676002)(4326008)(1076003)(70586007)(70206006)(5660300002)(186003)(508600001)(82310400005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB7290
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9fee6d00-a119-4b93-70c0-08da47725b6d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NpF85UpSvQMazgglJArmmY9xxazetA7Z1bRriDFN/ruz9PSOJ3i4/FrrPiuyXQiqPA2cWwZ/txpx4PeZfwOSswdDtlodD2E9bIkVR65qKnDlQuauy4j9wSC2WE2SqhchmcH0eTFc87QWzkRb2ZmpfDDjcyP2PamaCJdi0COFDGGVe96DEkONxwoKqGfnCXG7kTkoPIYwj2QvYYEDO6DBo1Ev8Nf4pC2auZoiaT20GhJWJnca4coFbIIFq3L6dOPrcbFI4azkOLAwjkOOSqU+GT/KFWAYRWqrQafch0ImIIFcft85T9rjz3feElW9cjBvoOw3eSbDCXuFhUQrT6Z0LSonjm8qcjyw5LKEOgSBHJmeGjYXtZ/610VpZOWCBPShvdyOE3p7VCURTV+KTsZm/xqNjDUytaW6bAsFmNbiuf6JjArKP2YYnNEmkiOAoB10hvJ51mR2OFjI0kCtAn+aAA0E223+9YWDdJkRRJXmHX1J9BN1ex3qBT727UcqLqRIAygwoD4bySHgbubx4oQDnGbTxZUkLScrBg+1VKlQm8puA+DXr+TfgPjreLrJSuj1kpbO9SGCmoiqL6Neb0AMhPpJ3w5ouEPuwoGQiUxu4th487aNNB1GDkyVkpgVVJK4Z8LO1pCNAaylVeysoozraQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(81166007)(4326008)(8936002)(8676002)(36860700001)(5660300002)(36756003)(44832011)(82310400005)(70206006)(70586007)(47076005)(2906002)(316002)(6666004)(83380400001)(86362001)(426003)(508600001)(336012)(6916009)(54906003)(7696005)(26005)(1076003)(186003)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:40.2162
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c1a36ab5-1900-453d-886a-08da477260dc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7575

Currently, when Xen boots from EFI, it deletes all memory
nodes from the device tree. This works well today, because
Xen can get the memory map from the EFI system table.
However, the EFI system table cannot completely replace the
device tree's memory nodes: it does not contain memory NUMA
information. Xen depends on ACPI SRAT or device tree memory
nodes to parse the NUMA mapping of memory blocks. So in an
EFI + DTB boot, Xen has no way to get the numa-node-id for
memory blocks any more, which makes device tree based NUMA
support impossible for Xen in an EFI + DTB boot.
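
To illustrate, the numa-node-id property that is lost when memory
nodes are deleted lives on the memory node itself, per the standard
devicetree NUMA binding (address values here are illustrative):

```dts
memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x80000000>;
    /* This NUMA tag has no counterpart in the EFI memory map. */
    numa-node-id = <0>;
};
```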

So in this patch, we keep the memory nodes in the device tree
so that the NUMA code can parse the memory numa-node-id later.

As a side effect, if we still parsed boot memory information in
early_scan_node, bootinfo.mem would accumulate the memory ranges
from the memory nodes twice. So we have to prevent early_scan_node
from parsing memory nodes in an EFI boot.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. Add Rb.
v1 -> v2:
1. Move this patch from later to early of this series.
2. Refine commit message.
---
 xen/arch/arm/bootfdt.c      |  8 +++++++-
 xen/arch/arm/efi/efi-boot.h | 25 -------------------------
 2 files changed, 7 insertions(+), 26 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 29671c8df0..ec81a45de9 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,6 +11,7 @@
 #include <xen/lib.h>
 #include <xen/kernel.h>
 #include <xen/init.h>
+#include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/sort.h>
@@ -367,7 +368,12 @@ static int __init early_scan_node(const void *fdt,
 {
     int rc = 0;
 
-    if ( device_tree_node_matches(fdt, node, "memory") )
+    /*
+     * If Xen has been booted via UEFI, the memory banks are
+     * populated. So we should skip the parsing.
+     */
+    if ( !efi_enabled(EFI_BOOT) &&
+         device_tree_node_matches(fdt, node, "memory") )
         rc = process_memory_node(fdt, node, name, depth,
                                  address_cells, size_cells, &bootinfo.mem);
     else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )
diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index e452b687d8..59d93c24a1 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -231,33 +231,8 @@ EFI_STATUS __init fdt_add_uefi_nodes(EFI_SYSTEM_TABLE *sys_table,
     int status;
     u32 fdt_val32;
     u64 fdt_val64;
-    int prev;
     int num_rsv;
 
-    /*
-     * Delete any memory nodes present.  The EFI memory map is the only
-     * memory description provided to Xen.
-     */
-    prev = 0;
-    for (;;)
-    {
-        const char *type;
-        int len;
-
-        node = fdt_next_node(fdt, prev, NULL);
-        if ( node < 0 )
-            break;
-
-        type = fdt_getprop(fdt, node, "device_type", &len);
-        if ( type && strncmp(type, "memory", len) == 0 )
-        {
-            fdt_del_node(fdt, node);
-            continue;
-        }
-
-        prev = node;
-    }
-
    /*
     * Delete all memory reserve map entries. When booting via UEFI,
     * kernel will use the UEFI memory map to find reserved regions.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:09:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342285.567384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43q-0008L2-9u; Mon, 06 Jun 2022 04:09:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342285.567384; Mon, 06 Jun 2022 04:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43q-0008Kt-6I; Mon, 06 Jun 2022 04:09:50 +0000
Received: by outflank-mailman (input) for mailman id 342285;
 Mon, 06 Jun 2022 04:09:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny43o-0007op-J4
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:09:48 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe05::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8090972d-e54e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 06:09:47 +0200 (CEST)
Received: from AM5PR0701CA0055.eurprd07.prod.outlook.com (2603:10a6:203:2::17)
 by AM7PR08MB5431.eurprd08.prod.outlook.com (2603:10a6:20b:10c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:44 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::5d) by AM5PR0701CA0055.outlook.office365.com
 (2603:10a6:203:2::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.9 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:43 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Mon, 06 Jun 2022 04:09:43 +0000
Received: from 164833bb9af9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 75AA96CA-C5D1-4716-85C6-A99FC37081BC.1; 
 Mon, 06 Jun 2022 04:09:36 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 164833bb9af9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:36 +0000
Received: from AM5PR0701CA0019.eurprd07.prod.outlook.com
 (2603:10a6:203:51::29) by DU0PR08MB7761.eurprd08.prod.outlook.com
 (2603:10a6:10:3bb::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.18; Mon, 6 Jun
 2022 04:09:35 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:51:cafe::52) by AM5PR0701CA0019.outlook.office365.com
 (2603:10a6:203:51::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.9 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:35 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:34 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:36 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Mon, 6
 Jun 2022 04:09:33 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8090972d-e54e-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Rw4YCyc3kplTi0dy0J9sE2MdXGplsaHNaDGQ1DRVaBzbHFTA27zvi6lZvxugqn2WEaR88o2VOd9U2SiCOC2V/PX89LtiNWXGyX+VgcEcLUV2QvyQwWYUbays+kJP3EitWkuqwl+iD/Rd/2hkd4sO/zrFg1LUerfRhQUNLYZgIvy5G2JYZnLh0K33pnsYZ5emmB+MZD3b1v42jmED3G0HwCBrLRJkPpOPUoROndrzUTnLNnGgHtILR6Jwgk3l4Ga+JFSL049yFJ99bU4pHKMT0gxAMB7GjR1Ip7S0I4bcjkwgliHc9XZAnGt5fjWfic3GL4y9erJHPGPa9ATnNU+ePQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zV9BPdeAndxQ7scfKxz8oRzzRCdsj7R5+H/m4APAu1s=;
 b=BSk0r40Ul5Di9UWLMU0mPTASjblIR4xtTM7WXFGafB6NCYka3ZhaSDT+2B3O4liXCADhddupwtOjG2Vl7YpGzRKsvlo0+tk3PGWNiwqpraLe/2JpKr+u9A4NVNPOSajkrv9e6xHrduDuH2AVm9EFpoqAyXCnVHyME1w+LnnCcimgl3vA2KM2bYgwFyyHhr1bDbUB9GQdP2ICJBHUFwWtVf7AZBy44HiZaoEmpMBr1eZWQONCkrxfEzshSfSVvf6R+qcWbFOwuGdfO/lQv1XKE2gOUpgkZDXfup2AULaj9sj/8YoW1IOLntCdPb2kKGB71Z8vzmGr4ocNXalg91v5QA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zV9BPdeAndxQ7scfKxz8oRzzRCdsj7R5+H/m4APAu1s=;
 b=LUgz2Xj5TEJfM2/hdPvibXP6XPMKn376Wz6Aa/VgnUm5dQh5vfRnt++CBMZEkLng4M0BHQuxYxSMSSDOO9/+brnK81rCW0rXte0miG49JOtFyC6gTrfwtY8BHgc1iSRnfd27vch+zov+UK8sWQdnxDYLZYPnRKb1uTe/7RYvjkk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: ab51ead58236c1aa
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c7zrYHEOZWdptqK6uM8V7TGdX5QvWrCeeMU1XVr2edHrFzDm9VG+t2z3Bt2A2UIDYElk/Ptk1cdHtpUNDQRs3x1ejOCOzndQ/K8lgKATwq/FtD831HEcOAawCaRzBlhZgW+WaVJ0zejgYGr9GaPDFoBRQ47ceF/Ui+g0ROZ95QQcOKecX1lpFGdLb86F7Hw1efWChynFhxFNohR5MPP/b11w1uLjiQk/gUg78Vih/ZnfVu8HkUmdVepBrbY4nQ9L0l/oQcQSBxf4fwo2ejGin1X+RL+M3f+nx46Ft5vIRyuricfRlOaPCdtHIWC9ADSI16XENRuU8raG9UfXhf6LGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zV9BPdeAndxQ7scfKxz8oRzzRCdsj7R5+H/m4APAu1s=;
 b=mKcKsXHA+4sRVb8MJqxmMVdNNk8LqADOh3womIJ63rNs+oAEQFTncDAe+Ka/x1IBDLHvftanIJsyL3ysUFJb+noTzNoypRHlkIrhhrSH9OhPcSL+Jd89ZnZ7xkanwr5Hi1LoLe6ZT0XCNHOqzUCVqeoVjGaMLhxhPvrJVLqQaBFxtySoFR4z0X2nUJmemLIowMzxpRkKN7dmjXKo0ga/Dow5gtXW5F4RJN72XOW2YcOCHKzV0izQJaWq/ljpMYFBpER2gbIbIw0mpYkBDv5jRETTJ1WEXwPH6Ch2QhO657PY0lzITmZrZBPPialskBujGMVtRQHu15QmKe31wt2YRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zV9BPdeAndxQ7scfKxz8oRzzRCdsj7R5+H/m4APAu1s=;
 b=LUgz2Xj5TEJfM2/hdPvibXP6XPMKn376Wz6Aa/VgnUm5dQh5vfRnt++CBMZEkLng4M0BHQuxYxSMSSDOO9/+brnK81rCW0rXte0miG49JOtFyC6gTrfwtY8BHgc1iSRnfd27vch+zov+UK8sWQdnxDYLZYPnRKb1uTe/7RYvjkk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 3/8] xen: introduce an arch helper for default dma zone status
Date: Mon, 6 Jun 2022 12:09:11 +0800
Message-ID: <20220606040916.122184-4-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 4a4a9bcd-d268-49d8-fcae-08da477262e7
X-MS-TrafficTypeDiagnostic:
	DU0PR08MB7761:EE_|AM5EUR03FT003:EE_|AM7PR08MB5431:EE_
X-Microsoft-Antispam-PRVS:
	<AM7PR08MB54314B12A3DD8F5A872B997A9EA29@AM7PR08MB5431.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 0voF7hWhQyKGPp/Aia+w9JJERbQIWLdPYZXsOOcek9d55j5lEIJZP220vtJP+N6qwz6AVqY1OfrYM3XG4SIuWsXODfMM/MD/n0S2VFnjhf7D9hRG8LXlMZv3KS7q2fg6RzLm5Y+X5OFW84kjIVg2clEIolN5JLxJsZGJT1ZHW2T5w+hOOKJ4Go6IlUANMpFKc/N8wVarYqlbzlgxcGTmN5r6ZN+BpdUm7FPTU3OLCnDPHy+wqaPzwsaUs63CUOUxUDrD1+Xqg3Fm9JKiRMq63qfPgIHVBja3YQdaOwIN6FnxTSYNE9dAXuUG6OA/0ORNcAOqSDkoxD36iwXxkMEjzUA0RGaZfKS9jy7QNEl8eoxviU+h5tOkbl84vXeyTDEgLph5golsIi5MUp1Fjka1lHhoSbxHRBek2+zU2QZVh3syHHveiSDv/PgvNbiwZg8ZfSOcFWEvYSPCdxX+ytlgm5/2dZtJzkkKyoVHH8N3ZGqC+vy0FcGceC5FTjrVRbyrSXMg67jrD3rq1rK1BAdaM/TjAu3UK23BqRACU7ZuuDv8zPQGxKFGvr7rz6tZYEcJLv2+fHiw9NoZlmYAtftk2tX2zrFGFmO2aE0I83WgPp5bYiAQxxvwF+xzhqN9plAxSM1bIJPaP3u6DcxWCM+K4x1VGXxv7hO50qaOmZSfm2DFFyu/ceh4XcjzYwQ8//98Xj33eIsbkmyTxitp4AHAtQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(46966006)(40470700004)(36840700001)(186003)(1076003)(426003)(47076005)(336012)(36860700001)(82310400005)(2616005)(86362001)(316002)(54906003)(36756003)(6916009)(70586007)(70206006)(4326008)(8676002)(44832011)(5660300002)(508600001)(8936002)(26005)(81166007)(7696005)(40460700003)(2906002)(6666004)(83380400001)(356005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7761
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	06bf1fe2-53e7-43fe-53cd-08da47725d84
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UbCWdCCvzKmReqAe8wtB1ZY2ofrH4zL7wlgZRiIZGabmCcENtXDUReq4WFo1TYa8RP4d/yiSFVpqWpn1elXQwv79/8csuvT7k5ins3MckuXBfpPfaeFRS6Z9B12LpNEMr8caKPG2w3DcwX22VSz4f/5pxAbmv2cGXBmnV/qY6s/dvnL1E+PfaB0E/iJF+xSfbKDB37rLC1l5N/FzWnJ2kH5kGWTJKQ0VzZ+Vrqz9V6wbpwDxK9t0V4QTd+jVg8DEzSDWauGjDYR8e/KdGMHyGEcsoLtwUTuPO0N130STRPA0Up0oRO038bgOrOprhVR+93JvLBimtWBsKFemODNGiR2G34q0dYY79ETU8fw8vbMwCG2jvKIvQlQ+A+uyBDJXC3k43uYZEn/CfwEkDzDQXJE1lkTgAkGwL6DFvU1orzQOIepakxdIAWk2YM4JNluPU6gZiEbUSKJdWKetXfmDusUkxgftzklmXdiV7VxD37BMsIDbxuBaampkp+SV1Z+9Lxf6X/aKINnq/E2FVeLgPCXpe10Xl54jbkKPXcvBGgeoxzSV0gwu4MsRYfndk1ZzFRT6nfynqIatGXLJ4eybEuhLc7tOUDSDxSDeuB8dfHhYBGLhXNVeOurnu4LVIC0qwcxrfzUhT/kHbXX+zzHs9w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(426003)(336012)(82310400005)(70206006)(70586007)(8936002)(47076005)(5660300002)(186003)(508600001)(4326008)(2616005)(8676002)(1076003)(7696005)(54906003)(81166007)(26005)(6666004)(2906002)(6916009)(36756003)(316002)(83380400001)(36860700001)(86362001)(44832011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:43.6936
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4a4a9bcd-d268-49d8-fcae-08da477262e7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5431

Currently, when Xen runs on a multi-node NUMA system, it sets
dma_bitsize in end_boot_allocator to reserve some low-address
memory as a DMA zone.

This behaviour carries some x86-specific assumptions, because on
x86 memory starts from address 0. On a multi-node NUMA system,
if a single node contains the majority or all of the DMA-capable
memory, x86 prefers to serve allocations from non-local nodes
rather than exhausting the DMA memory ranges. Hence x86 uses
dma_bitsize to set aside a largely arbitrary amount of memory as
a DMA zone; allocations from that zone happen only after all
other nodes' memory has been exhausted.

But these assumptions do not hold on all architectures. For
example, Arm likewise cannot guarantee the availability of
memory below a certain boundary for DMA-limited devices, yet
Arm currently does not need a reserved DMA zone in Xen: Xen
itself performs no DMA, and for guests, Xen on Arm only allows
Dom0 to perform DMA operations without an IOMMU. At boot time,
Xen tries to allocate Dom0's memory below 4GB, or below the
boundary given by dma_bitsize. As for DomU, even though Xen can
pass devices through to a DomU without an IOMMU, Xen on Arm does
not guarantee their DMA operations. So Xen on Arm does not need
a reserved DMA zone to provide DMA memory for guests.

This patch introduces an arch_want_default_dmazone helper that
lets each architecture decide whether dma_bitsize should be set
to reserve a DMA zone.

In the future, an x86 Xen built with CONFIG_PV=n could probably
leverage this new helper to skip DMA zone reservation
altogether.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Add Acked-by.
v2 -> v3:
1. Add Tested-by.
2. Rename arch_have_default_dmazone to arch_want_default_dmazone.
v1 -> v2:
1. Extend the description of Arm's workaround for reserved DMA
   allocations, to avoid having the same discussion every time.
2. Use a macro to define arch_have_default_dmazone, because it's
   a little hard to make the x86 version a static inline. Using
   a macro also avoids adding __init to this function.
3. Change the arch_have_default_dmazone return value from
   unsigned int to bool.
4. Un-addressed comment: make x86's arch_have_default_dmazone a
   static inline. If we moved arch_have_default_dmazone to
   x86/asm/numa.h, it would depend on nodemask.h to provide
   num_online_nodes, but nodemask.h needs numa.h to provide
   MAX_NUMNODES, causing a circular dependency. Also, this
   function is only used in end_boot_allocator, during Xen
   initialization, so compared to the changes an inline would
   introduce, it doesn't gain much.
---
 xen/arch/arm/include/asm/numa.h | 1 +
 xen/arch/x86/include/asm/numa.h | 1 +
 xen/common/page_alloc.c         | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 31a6de4e23..e4c4d89192 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -24,6 +24,7 @@ extern mfn_t first_valid_mfn;
 #define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
+#define arch_want_default_dmazone() (false)
 
 #endif /* __ARCH_ARM_NUMA_H */
 /*
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index bada2c0bb9..5d8385f2e1 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -74,6 +74,7 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 #define node_spanned_pages(nid)	(NODE_DATA(nid)->node_spanned_pages)
 #define node_end_pfn(nid)       (NODE_DATA(nid)->node_start_pfn + \
 				 NODE_DATA(nid)->node_spanned_pages)
+#define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 extern int valid_numa_range(u64 start, u64 end, nodeid_t node);
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 319029140f..b3bddc719b 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1889,7 +1889,7 @@ void __init end_boot_allocator(void)
     }
     nr_bootmem_regions = 0;
 
-    if ( !dma_bitsize && (num_online_nodes() > 1) )
+    if ( !dma_bitsize && arch_want_default_dmazone() )
         dma_bitsize = arch_get_dma_bitsize();
 
     printk("Domain heap initialised");
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:09:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342286.567395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43u-0000Fp-Ow; Mon, 06 Jun 2022 04:09:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342286.567395; Mon, 06 Jun 2022 04:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43u-0000Fc-LU; Mon, 06 Jun 2022 04:09:54 +0000
Received: by outflank-mailman (input) for mailman id 342286;
 Mon, 06 Jun 2022 04:09:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny43s-0007op-UE
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:09:53 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 833f7a36-e54e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 06:09:52 +0200 (CEST)
Received: from AS9P250CA0003.EURP250.PROD.OUTLOOK.COM (2603:10a6:20b:532::9)
 by AM6PR08MB4950.eurprd08.prod.outlook.com (2603:10a6:20b:ea::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Mon, 6 Jun
 2022 04:09:49 +0000
Received: from VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:532:cafe::1c) by AS9P250CA0003.outlook.office365.com
 (2603:10a6:20b:532::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT037.mail.protection.outlook.com (10.152.19.70) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:48 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Mon, 06 Jun 2022 04:09:47 +0000
Received: from 762fe3f51f72.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3BCC5203-6EFE-4C16-91A0-14043C0652B0.1; 
 Mon, 06 Jun 2022 04:09:41 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 762fe3f51f72.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:41 +0000
Received: from AS9PR06CA0012.eurprd06.prod.outlook.com (2603:10a6:20b:462::34)
 by VI1PR0801MB1759.eurprd08.prod.outlook.com (2603:10a6:800:5b::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:39 +0000
Received: from AM5EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:462:cafe::49) by AS9PR06CA0012.outlook.office365.com
 (2603:10a6:20b:462::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:39 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT052.mail.protection.outlook.com (10.152.17.161) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:38 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:39 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 833f7a36-e54e-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=fxYbdHYa8asQcYDJv0n2fVU+OsXbw9xW4PW5kpexPlTjug+wJiIgWQTWtC7ZARpJFE+NgwUz35lJrQ0H5L5kGYAlh2h2TMCpPSS7e20NiHW8aaqM1/ZYL+47NhE8JeQF9t+C9yKj+612Vfwd67oopkYn+iqOBZgdNDUjv7Be4RK+OfHQk1og6Z3mkfI2saC6u6IR8EXGvll3y1MjKdl3P5p1IqSopGdhDtM7maw69GW/N+irPeic9O/hJmUw4cR4YSozsMFkpzznwSSqiOheSnYc4JZ1fxo+JYR2yo6T2iv8v5u5skqRct4mNObQ3lTZ7+53EFsFCuLc+597ElsZAA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=BOQv2qYshyelR5e2vEGcmCaURjDz3T08zosgXAkocUCFgVYSWqa4+6qkLFdWSWABCgkdejDzAdWbQUnXx9Ce35f0jGyM1vs8G2GYo2qeApYr8c6ZvuLsDY25kRNhwcnHmc1lM/E5lLVULX4icsIBe8AC6VSZHj4y6pjY/CdCnaUBidm5lMTumI5JDpIFvRag0RXcsB7RG4tdWb2IU+u/OsMyJGam7ikCBWtnScEXVpqfV55bVrokeUOUehkNY16HN7Xkx8l+BdWxjIzfjWAs8qRG4pvRAaT4BtSCQ12QII3PAm59gxB6y5z57Z4gKgM44PjpSyy0GM3hxxsFPiaa8A==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=xW/j8+qFLv8pbDzYo4QwzKcXANDdkeC8bfXuRdxT0o/OknYKjgm0i2JML5RLV2JHdZXnDC9L0tvS6OsmKPsDQ/iCpFANwMvgbW2Ot0NCbU8pZl0yriaqSl6nU93aeEskWDQ7jxxNxgsxJYf9pBLDkv6yRrHB8U2YSHimlA3h5iQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 647a2828cfb7d5e9
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dGDXm5uZphA3M0atgeqP/kup1VWg4qjSqU/GwAG1xy8TL/flRgnjwFqSF9H+xIiI23PPaWT6BckQQC0Nmb9jVnv7kDk+NInItLn4+V/7+RXHuFN8k0ASq/bdKYwq80S9aB81hwiaRpCbkU/JncQVWZqtyUo1LPY/0+d8Qv3Xha9GugXUgf+COoPu0WYIRu6gM5b3EIIhHlNqCJRuzp2CGPja1YCGn9wUEYapD1WPR7n1Zjo1/WdTQUFvl1dzV6lvLT7JZ1rw4YIZTZmBlT60a7VRKNqxrWN5lt7kHB9RT+mGbiTPotVtwb77ZY5OHBcQOGpODg6vJlyZFPe9/q7GeQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=T5xhqDadgerMt7tCG34LjGOA8sX5b5dvKN6Q1auOE35DUX5uoh9CIFEsov8VxfVwQZAJp44hCcr9mwf6OdqclTJS1f0r5PUssY8pVj7za9OT8XYaGXhdeibDsa+Y1i4Sn9/qfTKFzJbTvSaHDaJSgK0nBGjfulSpV69ggjrT3ADqEtwL8oTJ1/tllJVIJQlDoh6RFMMbKIHC+jmFmifEiAw1Jvr+s8Esgi+EzxJa1ElG7w76CiX9mWHiJz0c3nGTSVCOrp4Og8ZgfUrT5sS8vMd3+pwqiJoHOgySNEMbwdW28IsRqwk8yQ/IbMalPwwCPb3iMdmLyxngQhxAUC0Vbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=xW/j8+qFLv8pbDzYo4QwzKcXANDdkeC8bfXuRdxT0o/OknYKjgm0i2JML5RLV2JHdZXnDC9L0tvS6OsmKPsDQ/iCpFANwMvgbW2Ot0NCbU8pZl0yriaqSl6nU93aeEskWDQ7jxxNxgsxJYf9pBLDkv6yRrHB8U2YSHimlA3h5iQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Jiamei Xie
	<jiamei.xie@arm.com>
Subject: [PATCH v5 4/8] xen: decouple NUMA from ACPI in Kconfig
Date: Mon, 6 Jun 2022 12:09:12 +0800
Message-ID: <20220606040916.122184-5-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: cebca336-03f6-4e7c-fa4d-08da477265a8
X-MS-TrafficTypeDiagnostic:
	VI1PR0801MB1759:EE_|VE1EUR03FT037:EE_|AM6PR08MB4950:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4950CB4DB9AB639DDEB928D29EA29@AM6PR08MB4950.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 MQ4rVsW4luiQfddiP0f1Ce7HXL+Rh6ViUwEFg0Wyunh9AaiVqNccENb0c+3y0q4/1/oLXP4l5We/qHkUe1VmRuIF/lKaoD7HOiJC+NZktkLCZSrcdH1FDcnK8ItYKgImMvBt/Lz/lg1NqwMj787crkETEg/cBQ1QpMAonAVumHLZFhGq6Rv6bO6fNwKdk6wv1Twq1OKHBinG6ZUYKSyx5qzWq8gvyJ0DuqOjGvGf9DrYOIP78z6j7z2PirAIzJApRyeIU4fK6ltX8i5cCNBr9rPlQlNJcEd9phVQzIahm76+/X2ZZpX1gJoRgiCDUmctvIU95cEinY5jHHhVNDHIUnzNfzhOiSTYLfmz1ie2v1+P7Upp/vhdYsBFlg8EBGdjZIe0aGSBTfqtUkOah5QUv2UO1BncYH56lAn77OjkZfM1UArhRywplEVBL+xh20UZf/1FoPbAO4eerDOqVUxEL9PwLsblDyp/eOYAVZmbyi7q/kFazAm2dPukub6I5ykS+3rGFZqpvdLgXwVCGxGks68zc9NObdfREYO7Y/UUSO06u/9sREBEDfOUyAwEq+wT2TK3SDQI/X+e1mroFUoGajLv1mARBRUJQQ8MfVkwRa8Hmqejz36kuHvbMjiHsq/R60fvGeHckAYYjcr4OvFmP4f0GNQMop868C71+Vaj8An8tJHl/GUDRm7j/3urVk+k
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(508600001)(6666004)(82310400005)(47076005)(7696005)(36860700001)(86362001)(4326008)(5660300002)(36756003)(2906002)(70206006)(70586007)(186003)(8676002)(2616005)(8936002)(44832011)(1076003)(83380400001)(54906003)(356005)(6916009)(426003)(26005)(316002)(81166007)(336012)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1759
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	80b6b823-920b-4ad5-9223-08da47725fb5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ULd/RJRei9xnfDZisnjm1y5kdwVyj7kzFlnZxYoyl4Tvhe5uAvwAgwZ/E8kwi73DZpBbQQlbBkb/eJ+U20f2UFitKVdHw5TDXBtc77EffQwkFshlEhBoRD01quTWSKghN0EKHHyxUX+pv+q5LMo0nTUfC2/BR0kQa5k3066z3RvOBgggkvXAqMsXpx8Jgw257EtWb7XGEEDitXgjr3E0cPQvrHTXL6dd+h28S4rR85ePo4DkGXqMamp7eXXuxx3waFLJda4DZSYqv1WUQXO2nQl522Ud8lcmPDIxH/Fyty5MaDUPLYq8G+o81Tifcv4wOSwz/tOP/wAk2ZgCXRkZ6IB4UtHSCnpFl1rMlkS1evBrb2ozZrl8Z1bqe+hAPzi5QuBbLUoiDZLkXAlghAf4WdStbzvWtnnfEETUVTIQLCKWNcSx2o30yxvM2F0jUa8Vo5Sq+JkxQfiFLmEqgEO2fFdkSMFYFIhgVkdsvM8Kae/1bNLB5jIpjul+92SwtXbjWasl5+mgS/FpYkbr09Df9xr7bUF5kWECnHtb3a0NAGfW+BMcSUWCPWygj+XkT876zHagbZTx95MboODySpPUZIk/RFlPEwTSDaEeokBYaiHISviDemFv6Jk8z+9ZwVySpnYV9d4ykrOUZxXGhCtxMA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(7696005)(1076003)(6666004)(2616005)(26005)(2906002)(54906003)(86362001)(426003)(5660300002)(83380400001)(186003)(44832011)(336012)(8936002)(36756003)(316002)(81166007)(82310400005)(70586007)(4326008)(70206006)(6916009)(8676002)(36860700001)(47076005)(508600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:48.2635
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cebca336-03f6-4e7c-fa4d-08da477265a8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4950

The current Xen code only implements x86 ACPI-based NUMA
support, so in Xen's Kconfig system NUMA is equivalent to
ACPI_NUMA: x86 selects NUMA by default, and CONFIG_ACPI_NUMA is
hardcoded in config.h.

A follow-up patch will introduce NUMA support based on the
device tree, which means there will be two NUMA implementations.
This patch therefore decouples NUMA from ACPI-based NUMA in
Kconfig, making NUMA a common feature that device-tree-based
NUMA can also select.
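As a sketch of what the decoupling enables, a future device-tree NUMA
option could select the now-common NUMA symbol the same way ACPI_NUMA
does after this patch. The symbol name DT_NUMA below is hypothetical
and not part of this series:

```kconfig
# Hypothetical device-tree NUMA option; selecting the common
# NUMA symbol works because NUMA is no longer tied to ACPI.
config DT_NUMA
	bool
	select NUMA
	depends on HAS_DEVICE_TREE
```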

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
No change.
v2 -> v3:
Add Tested-by.
v1 -> v2:
No change.
---
 xen/arch/x86/Kconfig              | 2 +-
 xen/arch/x86/include/asm/config.h | 1 -
 xen/common/Kconfig                | 3 +++
 xen/drivers/acpi/Kconfig          | 3 ++-
 xen/drivers/acpi/Makefile         | 2 +-
 5 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 06d6fbc864..1e31edc99f 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -6,6 +6,7 @@ config X86
 	def_bool y
 	select ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP
+	select ACPI_NUMA
 	select ALTERNATIVE_CALL
 	select ARCH_SUPPORTS_INT128
 	select CORE_PARKING
@@ -26,7 +27,6 @@ config X86
 	select HAS_UBSAN
 	select HAS_VPCI if HVM
 	select NEEDS_LIBELF
-	select NUMA
 
 config ARCH_DEFCONFIG
 	string
diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index de20642524..07bcd15831 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -31,7 +31,6 @@
 /* Intel P4 currently has largest cache line (L2 line size is 128 bytes). */
 #define CONFIG_X86_L1_CACHE_SHIFT 7
 
-#define CONFIG_ACPI_NUMA 1
 #define CONFIG_ACPI_SRAT 1
 #define CONFIG_ACPI_CSTATE 1
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index d921c74d61..d65add3fc6 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -70,6 +70,9 @@ config MEM_ACCESS
 config NEEDS_LIBELF
 	bool
 
+config NUMA
+	bool
+
 config STATIC_MEMORY
 	bool "Static Allocation Support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM
diff --git a/xen/drivers/acpi/Kconfig b/xen/drivers/acpi/Kconfig
index b64d3731fb..e3f3d8f4b1 100644
--- a/xen/drivers/acpi/Kconfig
+++ b/xen/drivers/acpi/Kconfig
@@ -5,5 +5,6 @@ config ACPI
 config ACPI_LEGACY_TABLES_LOOKUP
 	bool
 
-config NUMA
+config ACPI_NUMA
 	bool
+	select NUMA
diff --git a/xen/drivers/acpi/Makefile b/xen/drivers/acpi/Makefile
index 4f8e97228e..2fc5230253 100644
--- a/xen/drivers/acpi/Makefile
+++ b/xen/drivers/acpi/Makefile
@@ -3,7 +3,7 @@ obj-y += utilities/
 obj-$(CONFIG_X86) += apei/
 
 obj-bin-y += tables.init.o
-obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_ACPI_NUMA) += numa.o
 obj-y += osl.o
 obj-$(CONFIG_HAS_CPUFREQ) += pmstat.o
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:09:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342287.567400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43v-0000Jt-7A; Mon, 06 Jun 2022 04:09:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342287.567400; Mon, 06 Jun 2022 04:09:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny43v-0000J3-1P; Mon, 06 Jun 2022 04:09:55 +0000
Received: by outflank-mailman (input) for mailman id 342287;
 Mon, 06 Jun 2022 04:09:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny43t-0007op-UH
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:09:54 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe06::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8322dc6e-e54e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 06:09:51 +0200 (CEST)
Received: from FR0P281CA0062.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:49::23)
 by PAXPR08MB6970.eurprd08.prod.outlook.com (2603:10a6:102:1d9::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:37 +0000
Received: from VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:49:cafe::6a) by FR0P281CA0062.outlook.office365.com
 (2603:10a6:d10:49::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.9 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT023.mail.protection.outlook.com (10.152.18.133) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:36 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Mon, 06 Jun 2022 04:09:36 +0000
Received: from e42c15b7b331.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AC0E2A91-87D3-4EF0-96B1-62105DFA0037.1; 
 Mon, 06 Jun 2022 04:09:30 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e42c15b7b331.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:30 +0000
Received: from DU2P250CA0013.EURP250.PROD.OUTLOOK.COM (2603:10a6:10:231::18)
 by VE1PR08MB5773.eurprd08.prod.outlook.com (2603:10a6:800:1a9::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:28 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:231:cafe::2f) by DU2P250CA0013.outlook.office365.com
 (2603:10a6:10:231::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:28 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:27 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:29 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Mon, 6
 Jun 2022 04:09:26 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8322dc6e-e54e-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=dgFkJlv52PVEumheILJpFC6xWelZlCEtiz5sj6DxgUYO8b9T2dkLSGWliCO1hKA56h/HMkmdGNDHywzKXfBL1bj1EcSWnONxHO20jXegeJo4NvUp2YtYe6zzsOU71pUvXoyRFqfKU6UdtGMnxvdr0uqJMsTBhvf5HOEto+3pb0W5/YZxeTPZwmSFRKLBrlUjOcABqq2bTme6p3csbeIhs4x330ZMZJy5RI/0JTgQfGR+voqJR7MmKxB5kjcwJCafI+nbVmBy7S3TwkG1tWqDZ6XNv4oCJK1pNWHa8eZW0PhJmvAiPa7gLyo1ndS9La/7r0GxlYnSrlXnMB4mJJj/cw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h1Mk+96NdZCuYCNVMfuYJAf8YYMD5xhqAKxBbA5l6eU=;
 b=g1RbakPyXM/1+m8jFzh5zPO7rJLzG/isLsmXkyFUBIaWSkqmqM8y/puA8uZgbVANbYlDFydvWxdBivtF0avXJAlPMIsyfzHKbOtrGuFf28aln+kk3rqhm0hPsuXZkyQnCuGi/ZVcrOgvF/3kM8wESlHppog7PNBkn9+5cvwVkp//+tI+zD+G4KVa8Ct61ZqYz3A3h47mTb4NUICyYfqEH9Zq/Sc+uCVcsN6++zqif8hQpcVeHElN+k5VRefjq5/QZKGwbVnkE05d+8tRbaoUqu0hcroNN/ub6Z9All4zhIsSXIeXnzZfzI1qoRyRG6+Yn7ufxRcett9opt820dFEtA==
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 1/8] xen: reuse x86 EFI stub functions for Arm
Date: Mon, 6 Jun 2022 12:09:09 +0800
Message-ID: <20220606040916.122184-2-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

x86 uses a compiler feature test to decide whether the EFI build
is enabled. When the EFI build is disabled, x86 builds an
efi/stub.c file in place of efi/runtime.c. Following this idea,
we introduce a stub file for Arm, but use CONFIG_ARM_EFI to
decide whether the EFI build is enabled.

Most of the functions in the x86 EFI stub.c can be reused by
other architectures, such as Arm. So we move them to common code
and keep only the x86-specific functions in x86/efi/stub.c.

To avoid symbolic link conflict errors when linking the common
stub files into x86/efi, we add a regular file check to the EFI
stub files' link rule. With this check, we can skip the link
step for stub files that already exist in x86/efi.

As there are no Arm-specific EFI stub functions at this stage,
Arm can still use the existing symbolic link method for its EFI
stub files.
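
The regular file check described above can be sketched in plain
shell, outside of make (the demo/ paths below are illustrative,
not the actual Xen tree layout):

```shell
# Create the symlink only when no regular file already exists at the
# destination; this mirrors the "test -f $@ || ln -nfs ..." make rule.
mkdir -p demo/x86/efi demo/common/efi
echo 'int common_stub;' > demo/common/efi/stub.c
echo 'int x86_stub;'    > demo/x86/efi/stub.c   # pre-existing regular file

link_stub() {
    # $1 = destination, $2 = symlink target (relative to destination dir)
    test -f "$1" || ln -nfs "$2" "$1"
}

link_stub demo/x86/efi/stub.c ../../common/efi/stub.c

# The regular file wins: no symlink replaces it.
if test -L demo/x86/efi/stub.c; then echo symlink; else echo regular; fi
```

Because `test -f` succeeds for the pre-existing x86 stub.c, the
`ln` is skipped and the architecture-specific file is kept.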

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v4 -> v5:
1. Add acked-by.
v3 -> v4:
1. Add indentation so that the ln and test commands are aligned.
v2 -> v3:
1. Add a check for existing files: if a regular stub file already
   exists, linking the common stub file is skipped.
2. Keep stub.c in x86/efi to include common/efi/stub.c
3. Restore efi_compat_xxx stub functions to x86/efi.c.
   Other architectures will not use efi_compat_xxx.
4. Remove ARM_EFI dependency from ARM_64.
5. Add comment for adding stub.o to EFIOBJ-y.
6. Merge patch#2 and patch#3 to one patch.
v1 -> v2:
1. Drop the copy of stub.c from Arm EFI.
2. Share common codes of x86 EFI stub for other architectures.
3. Use CONFIG_ARM_EFI to replace CONFIG_EFI
4. Remove help text and make CONFIG_ARM_EFI invisible.
5. Merge one following patch:
   xen/arm: introduce a stub file for non-EFI architectures
6. Use the common stub.c instead of creating new one.
---
 xen/arch/arm/Kconfig         |  4 ++++
 xen/arch/arm/Makefile        |  2 +-
 xen/arch/arm/efi/Makefile    |  8 ++++++++
 xen/arch/x86/efi/stub.c      | 32 +-------------------------------
 xen/common/efi/efi-common.mk |  3 ++-
 xen/common/efi/stub.c        | 32 ++++++++++++++++++++++++++++++++
 6 files changed, 48 insertions(+), 33 deletions(-)
 create mode 100644 xen/common/efi/stub.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4..8a16d43bd5 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -6,6 +6,7 @@ config ARM_64
 	def_bool y
 	depends on !ARM_32
 	select 64BIT
+	select ARM_EFI
 	select HAS_FAST_MULTIPLY
 
 config ARM
@@ -33,6 +34,9 @@ config ACPI
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
 	  an alternative to device tree on ARM64.
 
+config ARM_EFI
+	bool
+
 config GICV3
 	bool "GICv3 driver"
 	depends on ARM_64 && !NEW_VGIC
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 1d862351d1..bb7a6151c1 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,6 +1,5 @@
 obj-$(CONFIG_ARM_32) += arm32/
 obj-$(CONFIG_ARM_64) += arm64/
-obj-$(CONFIG_ARM_64) += efi/
 obj-$(CONFIG_ACPI) += acpi/
 obj-$(CONFIG_HAS_PCI) += pci/
 ifneq ($(CONFIG_NO_PLAT),y)
@@ -20,6 +19,7 @@ obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-y += efi/
 obj-y += gic.o
 obj-y += gic-v2.o
 obj-$(CONFIG_GICV3) += gic-v3.o
diff --git a/xen/arch/arm/efi/Makefile b/xen/arch/arm/efi/Makefile
index 4313c39066..dffe72e589 100644
--- a/xen/arch/arm/efi/Makefile
+++ b/xen/arch/arm/efi/Makefile
@@ -1,4 +1,12 @@
 include $(srctree)/common/efi/efi-common.mk
 
+ifeq ($(CONFIG_ARM_EFI),y)
 obj-y += $(EFIOBJ-y)
 obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
+else
+# Add stub.o to EFIOBJ-y to re-use the clean-files in
+# efi-common.mk. Otherwise the link of stub.c in arm/efi
+# will not be cleaned in "make clean".
+EFIOBJ-y += stub.o
+obj-y += stub.o
+endif
diff --git a/xen/arch/x86/efi/stub.c b/xen/arch/x86/efi/stub.c
index 9984932626..f2365bc041 100644
--- a/xen/arch/x86/efi/stub.c
+++ b/xen/arch/x86/efi/stub.c
@@ -1,7 +1,5 @@
 #include <xen/efi.h>
-#include <xen/errno.h>
 #include <xen/init.h>
-#include <xen/lib.h>
 #include <asm/asm_defns.h>
 #include <asm/efibind.h>
 #include <asm/page.h>
@@ -10,6 +8,7 @@
 #include <efi/eficon.h>
 #include <efi/efidevp.h>
 #include <efi/efiapi.h>
+#include "../../../common/efi/stub.c"
 
 /*
  * Here we are in EFI stub. EFI calls are not supported due to lack
@@ -45,11 +44,6 @@ void __init noreturn efi_multiboot2(EFI_HANDLE ImageHandle,
     unreachable();
 }
 
-bool efi_enabled(unsigned int feature)
-{
-    return false;
-}
-
 void __init efi_init_memory(void) { }
 
 bool efi_boot_mem_unused(unsigned long *start, unsigned long *end)
@@ -62,32 +56,8 @@ bool efi_boot_mem_unused(unsigned long *start, unsigned long *end)
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e) { }
 
-bool efi_rs_using_pgtables(void)
-{
-    return false;
-}
-
-unsigned long efi_get_time(void)
-{
-    BUG();
-    return 0;
-}
-
-void efi_halt_system(void) { }
-void efi_reset_system(bool warm) { }
-
-int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
-{
-    return -ENOSYS;
-}
-
 int efi_compat_get_info(uint32_t idx, union compat_pf_efi_info *)
     __attribute__((__alias__("efi_get_info")));
 
-int efi_runtime_call(struct xenpf_efi_runtime_call *op)
-{
-    return -ENOSYS;
-}
-
 int efi_compat_runtime_call(struct compat_pf_efi_runtime_call *)
     __attribute__((__alias__("efi_runtime_call")));
diff --git a/xen/common/efi/efi-common.mk b/xen/common/efi/efi-common.mk
index 4298ceaee7..ec2c34f198 100644
--- a/xen/common/efi/efi-common.mk
+++ b/xen/common/efi/efi-common.mk
@@ -9,7 +9,8 @@ CFLAGS-y += -iquote $(srcdir)
 # e.g.: It transforms "dir/foo/bar" into successively
 #       "dir foo bar", ".. .. ..", "../../.."
 $(obj)/%.c: $(srctree)/common/efi/%.c FORCE
-	$(Q)ln -nfs $(subst $(space),/,$(patsubst %,..,$(subst /, ,$(obj))))/source/common/efi/$(<F) $@
+	$(Q)test -f $@ || \
+	    ln -nfs $(subst $(space),/,$(patsubst %,..,$(subst /, ,$(obj))))/source/common/efi/$(<F) $@
 
 clean-files += $(patsubst %.o, %.c, $(EFIOBJ-y:.init.o=.o) $(EFIOBJ-))
 
diff --git a/xen/common/efi/stub.c b/xen/common/efi/stub.c
new file mode 100644
index 0000000000..15694632c2
--- /dev/null
+++ b/xen/common/efi/stub.c
@@ -0,0 +1,32 @@
+#include <xen/efi.h>
+#include <xen/errno.h>
+#include <xen/lib.h>
+
+bool efi_enabled(unsigned int feature)
+{
+    return false;
+}
+
+bool efi_rs_using_pgtables(void)
+{
+    return false;
+}
+
+unsigned long efi_get_time(void)
+{
+    BUG();
+    return 0;
+}
+
+void efi_halt_system(void) { }
+void efi_reset_system(bool warm) { }
+
+int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
+{
+    return -ENOSYS;
+}
+
+int efi_runtime_call(struct xenpf_efi_runtime_call *op)
+{
+    return -ENOSYS;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:10:01 2022
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 6/8] xen/x86: use paddr_t for addresses in NUMA node structure
Date: Mon, 6 Jun 2022 12:09:14 +0800
Message-ID: <20220606040916.122184-7-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The NUMA node structure "struct node" uses u64 for the node
memory range. To allow other architectures to reuse this NUMA
node code, we replace u64 with paddr_t, and use pfn_to_paddr and
paddr_to_pfn in place of explicit shift operations. The related
PRIx64 specifiers in print messages are replaced by PRIpaddr at
the same time. Some being-phased-out types such as u64 on the
lines we touched are also converted to uint64_t or unsigned long.

Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Add Tb.
v2 -> v3:
1. Use uint64_t for size in acpi_scan_nodes, make it be
   consistent with numa_emulation.
2. Add Tb.
v1 -> v2:
1. Drop useless cast.
2. Use initializers of the variables.
3. Replace u64 by uint64_t.
4. Use unsigned long for start_pfn and end_pfn.
---
 xen/arch/x86/include/asm/numa.h |  8 ++++----
 xen/arch/x86/numa.c             | 32 +++++++++++++++-----------------
 xen/arch/x86/srat.c             | 25 +++++++++++++------------
 3 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 5d8385f2e1..c32ccffde3 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -18,7 +18,7 @@ extern cpumask_t     node_to_cpumask[];
 #define node_to_cpumask(node)    (node_to_cpumask[node])
 
 struct node { 
-	u64 start,end; 
+	paddr_t start, end;
 };
 
 extern int compute_hash_shift(struct node *nodes, int numnodes,
@@ -38,7 +38,7 @@ extern void numa_set_node(int cpu, nodeid_t node);
 extern nodeid_t setup_node(unsigned int pxm);
 extern void srat_detect_node(int cpu);
 
-extern void setup_node_bootmem(nodeid_t nodeid, u64 start, u64 end);
+extern void setup_node_bootmem(nodeid_t nodeid, paddr_t start, paddr_t end);
 extern nodeid_t apicid_to_node[];
 extern void init_cpu_to_node(void);
 
@@ -76,9 +76,9 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 				 NODE_DATA(nid)->node_spanned_pages)
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
-extern int valid_numa_range(u64 start, u64 end, nodeid_t node);
+extern int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node);
 
-void srat_parse_regions(u64 addr);
+void srat_parse_regions(paddr_t addr);
 extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 680b7d9002..627ae8aa95 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -162,12 +162,10 @@ int __init compute_hash_shift(struct node *nodes, int numnodes,
     return shift;
 }
 /* initialize NODE_DATA given nodeid and start/end */
-void __init setup_node_bootmem(nodeid_t nodeid, u64 start, u64 end)
-{ 
-    unsigned long start_pfn, end_pfn;
-
-    start_pfn = start >> PAGE_SHIFT;
-    end_pfn = end >> PAGE_SHIFT;
+void __init setup_node_bootmem(nodeid_t nodeid, paddr_t start, paddr_t end)
+{
+    unsigned long start_pfn = paddr_to_pfn(start);
+    unsigned long end_pfn = paddr_to_pfn(end);
 
     NODE_DATA(nodeid)->node_start_pfn = start_pfn;
     NODE_DATA(nodeid)->node_spanned_pages = end_pfn - start_pfn;
@@ -198,11 +196,12 @@ void __init numa_init_array(void)
 static int numa_fake __initdata = 0;
 
 /* Numa emulation */
-static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
+static int __init numa_emulation(unsigned long start_pfn,
+                                 unsigned long end_pfn)
 {
     int i;
     struct node nodes[MAX_NUMNODES];
-    u64 sz = ((end_pfn - start_pfn)<<PAGE_SHIFT) / numa_fake;
+    uint64_t sz = pfn_to_paddr(end_pfn - start_pfn) / numa_fake;
 
     /* Kludge needed for the hash function */
     if ( hweight64(sz) > 1 )
@@ -218,9 +217,9 @@ static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
     memset(&nodes,0,sizeof(nodes));
     for ( i = 0; i < numa_fake; i++ )
     {
-        nodes[i].start = (start_pfn<<PAGE_SHIFT) + i*sz;
+        nodes[i].start = pfn_to_paddr(start_pfn) + i * sz;
         if ( i == numa_fake - 1 )
-            sz = (end_pfn<<PAGE_SHIFT) - nodes[i].start;
+            sz = pfn_to_paddr(end_pfn) - nodes[i].start;
         nodes[i].end = nodes[i].start + sz;
         printk(KERN_INFO "Faking node %d at %"PRIx64"-%"PRIx64" (%"PRIu64"MB)\n",
                i,
@@ -246,6 +245,8 @@ static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
 void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
 { 
     int i;
+    paddr_t start = pfn_to_paddr(start_pfn);
+    paddr_t end = pfn_to_paddr(end_pfn);
 
 #ifdef CONFIG_NUMA_EMU
     if ( numa_fake && !numa_emulation(start_pfn, end_pfn) )
@@ -253,17 +254,15 @@ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
 #endif
 
 #ifdef CONFIG_ACPI_NUMA
-    if ( !numa_off && !acpi_scan_nodes((u64)start_pfn << PAGE_SHIFT,
-         (u64)end_pfn << PAGE_SHIFT) )
+    if ( !numa_off && !acpi_scan_nodes(start, end) )
         return;
 #endif
 
     printk(KERN_INFO "%s\n",
            numa_off ? "NUMA turned off" : "No NUMA configuration found");
 
-    printk(KERN_INFO "Faking a node at %016"PRIx64"-%016"PRIx64"\n",
-           (u64)start_pfn << PAGE_SHIFT,
-           (u64)end_pfn << PAGE_SHIFT);
+    printk(KERN_INFO "Faking a node at %"PRIpaddr"-%"PRIpaddr"\n",
+           start, end);
     /* setup dummy node covering all memory */
     memnode_shift = BITS_PER_LONG - 1;
     memnodemap = _memnodemap;
@@ -276,8 +275,7 @@ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
     for ( i = 0; i < nr_cpu_ids; i++ )
         numa_set_node(i, 0);
     cpumask_copy(&node_to_cpumask[0], cpumask_of(0));
-    setup_node_bootmem(0, (u64)start_pfn << PAGE_SHIFT,
-                    (u64)end_pfn << PAGE_SHIFT);
+    setup_node_bootmem(0, start, end);
 }
 
 void numa_add_cpu(int cpu)
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index cfe24c7e78..8ffe43bdfe 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -104,7 +104,7 @@ nodeid_t setup_node(unsigned pxm)
 	return node;
 }
 
-int valid_numa_range(u64 start, u64 end, nodeid_t node)
+int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
 {
 	int i;
 
@@ -119,7 +119,7 @@ int valid_numa_range(u64 start, u64 end, nodeid_t node)
 	return 0;
 }
 
-static __init int conflicting_memblks(u64 start, u64 end)
+static __init int conflicting_memblks(paddr_t start, paddr_t end)
 {
 	int i;
 
@@ -135,7 +135,7 @@ static __init int conflicting_memblks(u64 start, u64 end)
 	return -1;
 }
 
-static __init void cutoff_node(int i, u64 start, u64 end)
+static __init void cutoff_node(int i, paddr_t start, paddr_t end)
 {
 	struct node *nd = &nodes[i];
 	if (nd->start < start) {
@@ -275,7 +275,7 @@ acpi_numa_processor_affinity_init(const struct acpi_srat_cpu_affinity *pa)
 void __init
 acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 {
-	u64 start, end;
+	paddr_t start, end;
 	unsigned pxm;
 	nodeid_t node;
 	int i;
@@ -318,7 +318,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
 		                !test_bit(i, memblk_hotplug);
 
-		printk("%sSRAT: PXM %u (%"PRIx64"-%"PRIx64") overlaps with itself (%"PRIx64"-%"PRIx64")\n",
+		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
 		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
 		       node_memblk_range[i].start, node_memblk_range[i].end);
 		if (mismatch) {
@@ -327,7 +327,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		}
 	} else {
 		printk(KERN_ERR
-		       "SRAT: PXM %u (%"PRIx64"-%"PRIx64") overlaps with PXM %u (%"PRIx64"-%"PRIx64")\n",
+		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
 		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
 		       node_memblk_range[i].start, node_memblk_range[i].end);
 		bad_srat();
@@ -346,7 +346,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 				nd->end = end;
 		}
 	}
-	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIx64"-%"PRIx64"%s\n",
+	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIpaddr"-%"PRIpaddr"%s\n",
 	       node, pxm, start, end,
 	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
 
@@ -369,7 +369,7 @@ static int __init nodes_cover_memory(void)
 
 	for (i = 0; i < e820.nr_map; i++) {
 		int j, found;
-		unsigned long long start, end;
+		paddr_t start, end;
 
 		if (e820.map[i].type != E820_RAM) {
 			continue;
@@ -396,7 +396,7 @@ static int __init nodes_cover_memory(void)
 
 		if (start < end) {
 			printk(KERN_ERR "SRAT: No PXM for e820 range: "
-				"%016Lx - %016Lx\n", start, end);
+				"%"PRIpaddr" - %"PRIpaddr"\n", start, end);
 			return 0;
 		}
 	}
@@ -432,7 +432,7 @@ static int __init cf_check srat_parse_region(
 	return 0;
 }
 
-void __init srat_parse_regions(u64 addr)
+void __init srat_parse_regions(paddr_t addr)
 {
 	u64 mask;
 	unsigned int i;
@@ -457,7 +457,7 @@ void __init srat_parse_regions(u64 addr)
 }
 
 /* Use the information discovered above to actually set up the nodes. */
-int __init acpi_scan_nodes(u64 start, u64 end)
+int __init acpi_scan_nodes(paddr_t start, paddr_t end)
 {
 	int i;
 	nodemask_t all_nodes_parsed;
@@ -489,7 +489,8 @@ int __init acpi_scan_nodes(u64 start, u64 end)
 	/* Finally register nodes */
 	for_each_node_mask(i, all_nodes_parsed)
 	{
-		u64 size = nodes[i].end - nodes[i].start;
+		uint64_t size = nodes[i].end - nodes[i].start;
+
 		if ( size == 0 )
 			printk(KERN_WARNING "SRAT: Node %u has no memory. "
 			       "BIOS Bug or mis-configured hardware?\n", i);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:10:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342292.567429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny445-0001g8-Cm; Mon, 06 Jun 2022 04:10:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342292.567429; Mon, 06 Jun 2022 04:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny445-0001fk-2v; Mon, 06 Jun 2022 04:10:05 +0000
Received: by outflank-mailman (input) for mailman id 342292;
 Mon, 06 Jun 2022 04:10:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny442-0000w0-SU
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:10:02 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88e9a548-e54e-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 06:10:01 +0200 (CEST)
Received: from AM6P193CA0083.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::24)
 by VI1PR0802MB2157.eurprd08.prod.outlook.com (2603:10a6:800:9c::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:58 +0000
Received: from VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::c7) by AM6P193CA0083.outlook.office365.com
 (2603:10a6:209:88::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT012.mail.protection.outlook.com (10.152.18.211) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:57 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Mon, 06 Jun 2022 04:09:57 +0000
Received: from b15dbf3b8acb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 50CD7091-62ED-45AF-BC14-8C68E4E28BF3.1; 
 Mon, 06 Jun 2022 04:09:50 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b15dbf3b8acb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:50 +0000
Received: from AS9PR06CA0010.eurprd06.prod.outlook.com (2603:10a6:20b:462::33)
 by AM0PR08MB3956.eurprd08.prod.outlook.com (2603:10a6:208:131::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:09:42 +0000
Received: from AM5EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:462:cafe::e8) by AS9PR06CA0010.outlook.office365.com
 (2603:10a6:20b:462::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.14 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:42 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT052.mail.protection.outlook.com (10.152.17.161) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:41 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:42 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88e9a548-e54e-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=bq8vLUcWh3JhP8o399uCZLIUC4NA0PnujoK2ZiSQ/7dttnZCWcd/xx1GE2FO/4DOm4u1NnF1xwvwZovlToDKm+k7DyoP39fTOBC72mgMTRZ01HZvbIDMMSb1za5TTxolQ/Jr4ObBKzeUzhmBuG8gM5PZX3gUJmTtK3FjO2B4gprzy0+sBDxAFuqCWpaIDK/T+fSxMxIRp/GE8tikAhhMH2PHH7M26gJTx/GQi7hyTE5WkYTMLS52o5nQ34B2u4I35i+wjz3mQMzZjcO1nnpthGKXmoZ7dhl3FKRH7ogXA36tt62SxwdP4XDRut82h4xCwIaqg1/3XZmQ+wVlvQWi6Q==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=FO48yEIw7nOe/WhMducvyy5OQaZS255AmNQdAmQ41zDDKnoYY+eKGUULKRMUekT3kpBbesHHRb6Fqh+wV43yze9dTqwcVeu5IudLeDTRhed2gPdlQvzZJCgW5n1S25z99U4OHeFkeNm1Qn0HqOqH/ynHvyzKguaNb/z7P688uweYdvsKgh8XxqRoqRHXS20rk4JAQvKdjWvCPsuEHA/2tgOQKg+FngmbJ6XSQbpZfAtEI5UK8aghN101AdULDjQt86xdIbF3gbgvbak9r95AcqjrgiqTQwBZiqcqno2K1TiIn05FGSsvVAiC+4KAqNXBnQH0JS4XbmzeDWr9W/9Jvg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=IWHcnCoHzLJ9obstO+SZIfCtTAClU9IJY3Th8es+fDplCczMP5w+wziIx9ez3B8lILIUs/uD9LMANftRxBc6kPkRhyWtd5WoAQ+bsqbFg3Jql/OY0hN3dy6SysFF/NOajS61dZCEtU/R9Ol7s7j/gsoQgBmhe5pyDzqqlxkvQtQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: bf6fd438c57bcb5f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iK7r0ye4H/mJT9pVfduEvLMivP3VhWa9cdj08q3xB6DRKvQw0+NgaGY7Lan0S2tfqV1cN38zmpLo+scEujmLuyB0XFY1oiW8BuZ08/xhTqLPFwrEqJUMKMXFudijqrJndDsh/AFKzFaOkdMmchVv9nhstMw8w85BCwQWv1Bw76NMTAvilCVjOCLpqFdTv1DEKlZAltbWDcqH3+VnzZNosXADyCe++zY+wxvu3cR5D3OxCQMVS1mUfD3pZZM8n2zO9DpMwc+KlA18DAiTq0cw6H1ywmzrtjGZIizuKOCIPHEKD/YY9/3WVjYLp4htRCFPXxdr1s2forAhRJuy3wYMRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=iIFguqocLwr74I8CijEwvVmWGw5q46v9hgzrx9lSqqrtJWe7q/BebwBl8CfuemrZEcJRuSbAmhf3Bm7Xc9TKcx0xPPbpB8bNUPQy6sHfg7UTgoeyehkTLwt5jZLnywtsAtTOgjlWe4GiyIW50PkesFkFRRBvEifMzjDqvQ6a43fs1JJoL0EZHg/9R51CMCbciTLBl52Tp2Zbr+gO+/0gKjDZcfQk5tmBflW4ArkmXHPT7b3MEk++lALWBknSMvrdmYjxkA8ZYN/9rCO/GT+B0hOODGGtluz+GIVmgCw5wauhwtSE/ZxmosO5jiBhgL8kzS7hac0qlROppP6gf+l4Uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=IWHcnCoHzLJ9obstO+SZIfCtTAClU9IJY3Th8es+fDplCczMP5w+wziIx9ez3B8lILIUs/uD9LMANftRxBc6kPkRhyWtd5WoAQ+bsqbFg3Jql/OY0hN3dy6SysFF/NOajS61dZCEtU/R9Ol7s7j/gsoQgBmhe5pyDzqqlxkvQtQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 5/8] xen/arm: use !CONFIG_NUMA to keep fake NUMA API
Date: Mon, 6 Jun 2022 12:09:13 +0800
Message-ID: <20220606040916.122184-6-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 19bf49c7-5193-4fa0-cb81-08da47726b18
X-MS-TrafficTypeDiagnostic:
	AM0PR08MB3956:EE_|VE1EUR03FT012:EE_|VI1PR0802MB2157:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR0802MB2157EEDE4FC3EE6C7B3B300D9EA29@VI1PR0802MB2157.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 42tXNiIWn2M6QZaO6o7AaLjx2pcpicxiSpUbbPjt0HyUJcQgWEREg2b2EfM2gwjab6NcxjaZwNJ9VX0LClgP6oMTLRJdGChAMtbN3oRJZ603TsICC0HjvF38tDYZZzrrqDRDbyBwvWSMk13domZcBBtNaPRFkw+UWLT3jjsifvKY8G0xEmxVBTBOJfRgbRbP2bpqmOKQ1aPOg3IdLdIJSUglftp9sMMFTldLKgBH777BOfB7KWtdD49oFFxLxKuz4ZGKBx6Y2mueZ2juBTRJd2wqb6LGd/rX/PjEmQ+pXIi8hlS6QmNWIW/5BPyvIkNq17bkX4ERlDwVddYRNBjrASmmwiG6XQ3FzpfLZkJioYS8suqSTugjwWlV7XYF2AkI9hE/1efVN+2PD2pK006H6Hm9EYyVW141mrVyyiXhl0Q/dEuhpH0pxO/eiUpSIXd+DG3H0AWnuV4ulc+n6PeaoV5j95bUWqaNIQusy3GS84Mmz2a2Jhlr9W/HkhaSnOFX9d/rG19h6JWxtQdb4Cg1cqp1r2NlCXl3Gxq0UZW3NY+LcUjOZpqcoB8BR8S8KkSa1wPBXt2ZeErdb8LRgkHDvrzLcIC6qKkFuN26+TY0jxxVm5+S4yCHBzk9Ap1MgMIANRMGu2zbHbEXCMYb4u3FHEG6PxnA3OqvC0h+zUpDYGwZTrPSzn+RIiRjxl3hop9Y1E/4P3yZZOMDcpUQmNyeqQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(356005)(26005)(2906002)(6916009)(6666004)(7696005)(54906003)(81166007)(44832011)(316002)(36756003)(86362001)(36860700001)(83380400001)(426003)(40460700003)(336012)(82310400005)(8676002)(2616005)(4326008)(1076003)(70586007)(70206006)(5660300002)(186003)(508600001)(47076005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3956
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	18e30440-f57e-497d-22d6-08da477261b6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8om3xkk/OXKpDM8CGnoYvtu9qLDcOToMjxmKmwHEKv4zoxMhNWh6Fj4U2aIdn9C8DuveB+T2q2knr1GeBYXHzk35LrtV0Xt0KXAKps8E57UQC91B2sPlxudLyxO7x5qMQZiwiDRRigEJ14ERWJ9/Uqx4Mzy4Wu5RI+LmqoSZpMT5QySn3OCp6VQwYMY65nCV2/I1lZrAoIeNMGQss9BKpnQKdingnyqYAmP3u3xOlKaL0fVheRZ2xVcYsAyGLDA1+kwOdTlad6L/eJZiNS4UZc4gumifnb3R8S9pTRQpvbpXMjvizuNG1DBTeAH1v+s/KLieqDKhQPuJsFLkNqkTvEOtsaumZXvDCeJrcjJKKv/6sofFTe0OKI8HSMXuy+egclwLQHZkS7ChVmo8uTMuMos9cQvzCpCeH6J1gTHlt9tdsDK/+1QCARapaEY8srleNCuX2fWSQ9bj+KZIn6IO+DHBqg8AlPhSJ1I+jEtP7GgVVPrbPzGFlhzEKbYNKQJJOoTWbLG9iHxrXM/HYJ4yHjf5F3k2s9kGKtjQ+lORyzpbkw4jMcrS1smOScWPGwsTGShkBkPFRfEfaUTinrAEWkuFTi0kcvkx17uKojNnAc43GftUrIl7NrPzywlfM8nWFbj7YWuxxzprsaq8WZdCOA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(36756003)(54906003)(70206006)(6916009)(336012)(70586007)(426003)(47076005)(316002)(5660300002)(26005)(81166007)(8936002)(2906002)(4326008)(82310400005)(8676002)(1076003)(83380400001)(7696005)(36860700001)(44832011)(6666004)(2616005)(508600001)(186003)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:57.3883
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 19bf49c7-5193-4fa0-cb81-08da47726b18
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2157

We introduced CONFIG_NUMA in a previous patch. At the current stage, this
option is enabled only on x86; a follow-up patch will enable it for Arm
as well. However, we still want users to be able to disable CONFIG_NUMA
via Kconfig. For that case, keeping the fake NUMA API lets Arm code
continue to work with NUMA-aware memory allocation and the scheduler.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v3 -> v4:
No change.
v2 -> v3:
Add Tested-by.
v1 -> v2:
No change.
---
 xen/arch/arm/include/asm/numa.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e4c4d89192..268a9db055 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -5,6 +5,8 @@
 
 typedef u8 nodeid_t;
 
+#ifndef CONFIG_NUMA
+
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
@@ -24,6 +26,9 @@ extern mfn_t first_valid_mfn;
 #define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
+
+#endif
+
 #define arch_want_default_dmazone() (false)
 
 #endif /* __ARCH_ARM_NUMA_H */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:10:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:10:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342301.567439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny447-0002Jv-VN; Mon, 06 Jun 2022 04:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342301.567439; Mon, 06 Jun 2022 04:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny447-0002Iy-PL; Mon, 06 Jun 2022 04:10:07 +0000
Received: by outflank-mailman (input) for mailman id 342301;
 Mon, 06 Jun 2022 04:10:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny445-0000w0-Vn
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:10:05 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8acb4aea-e54e-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 06:10:04 +0200 (CEST)
Received: from AS9PR06CA0461.eurprd06.prod.outlook.com (2603:10a6:20b:49a::12)
 by DB9PR08MB6777.eurprd08.prod.outlook.com (2603:10a6:10:2ac::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 04:10:01 +0000
Received: from VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:49a:cafe::c3) by AS9PR06CA0461.outlook.office365.com
 (2603:10a6:20b:49a::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13 via Frontend
 Transport; Mon, 6 Jun 2022 04:10:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT022.mail.protection.outlook.com (10.152.18.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:10:00 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Mon, 06 Jun 2022 04:10:00 +0000
Received: from 3484997ec089.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C0F3D47B-AE65-4203-9233-9B3ADCB3A999.1; 
 Mon, 06 Jun 2022 04:09:54 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3484997ec089.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:54 +0000
Received: from AS9PR06CA0076.eurprd06.prod.outlook.com (2603:10a6:20b:464::29)
 by AM8PR08MB5715.eurprd08.prod.outlook.com (2603:10a6:20b:1d7::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Mon, 6 Jun
 2022 04:09:51 +0000
Received: from AM5EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:464:cafe::b3) by AS9PR06CA0076.outlook.office365.com
 (2603:10a6:20b:464::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.14 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:51 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT010.mail.protection.outlook.com (10.152.16.134) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:50 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:51 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8acb4aea-e54e-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=McmPWhlSq6lWKHNeSOzWyidTNHMMj1MEIJ5WR5EbFdlldBAT6Qz+XQCZ8csMghcHLDqupPXGqrKPFEp6zLdZAlgtzpb/SmwOub/WZ5jyEhzfw5oAjLduEHnfURw4vi2LiCy2BTNL7AQIxXtKBCsTpi2j6nmuUBNIb0n6rolycxj4NWeuW1U1JPqFVnh1nUwWQuGwVyEK3jm7yLWOMltM+vCCFD91GCdNd1wZ9Intavtj0hpzRp2vrW9yznTpJYb1y30VCwwyNCoo+pMjHWENJiRmSJ+GbBkhjV3VFa9YKWDFfQ661c/7XXfkTvWkzMYAf9cibM820PtQGT52aRmNIg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7lmHKEnpMdOqTyUo5+1ssJ2WrW3qZKZlnI4Gu4KfwTE=;
 b=ZWFcK8So5MvYvkYjwAe8qnQFQPugyvUF8l5HXRerJoOVQgDcva4pSpVJlLEO2VJDy0Egf/sF3y+bY9jOXrm3JScE0jBF1cLeGG2ku+pTKJpwH6w/WDKcnCLyQE7NOLe3mSbXy0TU8iKNwJobSccnFzKA4DMXxeVZ19kxA3gRXDYQpWranuONJjKT4V36O4dvp8kN3YKpIolbwfpEd25wv1l5GYc24EXHiuq7tTgBG3pcO0yqnCSyJviJxG1oNHwXPS1Mf7cmgLSJzpmAUfRKo99iywn1Jr+PB+X7xvNnWVg6KL3izJ8Vmui70pJLYhoRc4wYi4nXJ3A+MBACL7/4/g==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7lmHKEnpMdOqTyUo5+1ssJ2WrW3qZKZlnI4Gu4KfwTE=;
 b=pK8yxXSvzKbG+ckCOfnix8PSAfMVODEyKQ5MxFK/Qj89L6FwasfMb72//OQvyT3+RFxbOiFJU1CJnRsn+EJavLBuYCKgoHogM5A4+WfMUpYzE6ahUiWXPiTonyaPrawHbpiEes9pwakeobry56tFLUebicMfTCCRzNRrG6yyjKs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: fb62ea7806747b8e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xlco7QV7pMAHSwJ6nKlXvhaw2l1o5c1C0V82tcuoXQqqdJduZNiOtCqapSTU3zXSLjxVceq+n+7IwSSN3zEKaSIOZolj3Ufi78/lloQo4+Enia2kyzPVmVXucG65o8ojAQFpKxYybKL3lPuy5quwLl0J/q9X5Fo/xf+xb5PObjFPaAzXrcndBdfrdDMJzRyxuIhVxprNEEVp1Z6BLy7ZkMEWEDfqR910RA6lN6SSbn8LzuKvxbNcZ58U/n9ak7hJJbgHvgw5OpqP/vg1VLd2TFU1SDjriqYmNdO9+K9pD30foJdKY5N7G09qXFVNhFlLnawc75ozIX5kEjw7+Ki4UA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7lmHKEnpMdOqTyUo5+1ssJ2WrW3qZKZlnI4Gu4KfwTE=;
 b=FE+qQz27ToaiHkJAj2DjUR7+cw5lBj9aJ7Vi/10hgpm+XSZ9MXHIEMP3dM+zFe/HTp+rAcDBTKwnwINgPOEefzvcGuJNe6d0d8j9KCxRcMG9epvYR2X9XJhhd8LgWsYZl3Munxd7yznw8VDYsFJYxQZz08gq+aYX1zpAMEDTNC7rjNz16elr+T1RZvO1aHxTOpZW6Kf7ahjyMkmpzJxYrGj2wilnghRrVddYv4o/Ubr1K9i5fHa9eSpnt+y/BFgS1UVAlDtva49lZYo7vOelwQKbdh4ZaXBw/4Nx4KhxeXV/s/CDW9xHXyaFTYoUQkOl3rgHiMY1hO58AubaRJT0Gg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7lmHKEnpMdOqTyUo5+1ssJ2WrW3qZKZlnI4Gu4KfwTE=;
 b=pK8yxXSvzKbG+ckCOfnix8PSAfMVODEyKQ5MxFK/Qj89L6FwasfMb72//OQvyT3+RFxbOiFJU1CJnRsn+EJavLBuYCKgoHogM5A4+WfMUpYzE6ahUiWXPiTonyaPrawHbpiEes9pwakeobry56tFLUebicMfTCCRzNRrG6yyjKs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v5 8/8] xen/x86: use INFO level for nodes without memory log message
Date: Mon, 6 Jun 2022 12:09:16 +0800
Message-ID: <20220606040916.122184-9-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 66f69509-8db2-43bc-42e4-08da47726cdf
X-MS-TrafficTypeDiagnostic:
	AM8PR08MB5715:EE_|VE1EUR03FT022:EE_|DB9PR08MB6777:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR08MB677761F767F027914980ED049EA29@DB9PR08MB6777.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FWC2x4tEu6SxTosykt9/VSzWl6pJ0mgCWsE0dDVTGSpeV9a5DGkmwJPG5KmrGTxAMNJf8o6MvIoXmBW22DgoHk8bJM5K5TyRGJyivWya1nl2XaI9VjDL8IgRIlS0H8gEQhhTu7Y2sgM+EXTNKPIo2z20F9H/7737Ogz2ePV7kBlt8tbPqmxpkuSPPt494hllcapVQc755hbHIsJE8G1W1+5WYKsPdgOccjxtZa5z/CxeYgChZxOI/iYT6gkMS/misRIPlXKN/eqzkv0X1ltlQZFsThA5QpKmUmVOpadSxTqXf+PjklOClQza1Xk28OuXylW3y5ezKpvuWgVv2426MzR7hwWshI3jakEMVcGVOBi1/rtJpBtkVymPeV8sweJJ/mHIImPvdC7q7h4rQ8Twb8/kL/LOwB860H6hEmXlfg1KtT/JmKVNTn+HpAVQ3K/MneZETVRAKe3TUcdom+ednHFsnQOecKxIetJDPzz4HCgEw2VpowCsFwzHgvJQC9s7ZnBcYBMIVdqNcrTQVo+gmwN9LcHu/NlusHVT9E0fOTfSyb1TvTq0HJ/xcJhYCNQGWFBmHJs7I7btVlxvIKmYJq/Kq09KMFMC1Cq5qeClOVnmgmbF+L9R9/EKUIxX5toVmlUMda+iuvYqSj/D9PT9klxa9iBFJUN9VamkZ8gzuNAK+WOhsMtXi/g9ANGQngwjgBxfWFHf6Cn9jzShLvNb1A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(4326008)(8676002)(356005)(316002)(15650500001)(86362001)(44832011)(5660300002)(40460700003)(2906002)(70206006)(70586007)(8936002)(81166007)(2616005)(26005)(426003)(47076005)(1076003)(336012)(186003)(508600001)(7696005)(6666004)(54906003)(82310400005)(83380400001)(36860700001)(6916009)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5715
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b0979449-1c4f-4088-bfbd-08da4772672a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UkOAeq6FoH6NQtsCs2ZRmzyggNWShq1ZnUB8HbtvG36DzzwR0mTiL3IxCalFcaWXv/Bh612GGxBvpXNBz6728qh/9N57P/Dvp8j3zi7m7dfFgkkug5np5YffYIYnp2Zcg8n/C+QHIKkfhqhxm5caIzuDYRdf9hWG5pbC+6/IyxzAmOFUdbc6snyU+DL0f/qoYcLscTSUN6ldeqW1gNEwm7T+42M25FNgM8GrE5Q7137eVISZFlO7KcrNLd0zQlR/VjdfjBnff7xAEQn4neo11ukIM04Ba56ivsQzbzjhP1lRET0kbY3L4YLhmZrkFhkKUjsDssMwjoKhHwa8qhek8D32+PlctogM8VKij0GEiaWBy+n1ZrTlFeilfn3irgU7nQaemG+vANUcM07n6wE4CzLcqNqXb4+XPuuLpC6OvwbGVwxzgLzbUOS7dIZPQygb8E/ZdfcBL1ENw1uN0+XvUOUrfTJ9dTyyprig0ohBhCfdlsIyz87ntCy/vQAOIWUnQkR9j/OkvN+gh4EBue/OryG/dqwu16mvgPZ8+Mio+bi8FZ0uHZTuUMMzqv+3BjfilLdJGfbzCjn1oVzqsVTlvVxF+WWpbA7L1LRHclcj0vsnRpX1oBHgVaMGIcPj1vhfauM3yopY+lWIrlu1GUfGdA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(86362001)(8936002)(5660300002)(8676002)(4326008)(44832011)(6666004)(70586007)(70206006)(1076003)(7696005)(36860700001)(26005)(316002)(6916009)(54906003)(83380400001)(82310400005)(186003)(508600001)(2616005)(36756003)(81166007)(2906002)(336012)(47076005)(426003)(15650500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:10:00.3668
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 66f69509-8db2-43bc-42e4-08da47726cdf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6777

Previously, Xen used KERN_WARNING for the log message printed
when it found a node without memory, suggesting that this might
be a BIOS bug or mis-configured hardware. But this warning is
bogus, because in a NUMA setting nodes can legitimately have
only processors and 0 bytes of memory. So it is unreasonable to
warn about BIOS or hardware corruption based on the detection of
a node with 0 bytes of memory.

So in this patch we remove the warning message and keep only an
info message to inform users that there are one or more nodes
with 0 bytes of memory in the system.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Remove full stop and use lower-case for node.
2. Add Rb.
v2 -> v3:
new commit.
---
 xen/arch/x86/srat.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index b145ccb847..c267d86c54 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -555,8 +555,7 @@ int __init acpi_scan_nodes(paddr_t start, paddr_t end)
 		uint64_t size = nodes[i].end - nodes[i].start;
 
 		if ( size == 0 )
-			printk(KERN_WARNING "SRAT: Node %u has no memory. "
-			       "BIOS Bug or mis-configured hardware?\n", i);
+			printk(KERN_INFO "SRAT: node %u has no memory\n", i);
 
 		setup_node_bootmem(i, nodes[i].start, nodes[i].end);
 	}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 04:10:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 04:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342302.567449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny449-0002gK-EG; Mon, 06 Jun 2022 04:10:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342302.567449; Mon, 06 Jun 2022 04:10:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny449-0002eu-8E; Mon, 06 Jun 2022 04:10:09 +0000
Received: by outflank-mailman (input) for mailman id 342302;
 Mon, 06 Jun 2022 04:10:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aNt7=WN=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1ny446-0000w0-Vq
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 04:10:07 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b6e0821-e54e-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 06:10:05 +0200 (CEST)
Received: from AS8P251CA0013.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::28)
 by PAXPR08MB7333.eurprd08.prod.outlook.com (2603:10a6:102:230::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Mon, 6 Jun
 2022 04:09:59 +0000
Received: from VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2:cafe::e) by AS8P251CA0013.outlook.office365.com
 (2603:10a6:20b:2f2::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT033.mail.protection.outlook.com (10.152.18.147) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:58 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Mon, 06 Jun 2022 04:09:57 +0000
Received: from 2814ff798609.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9678FAF5-AF60-4C70-BEBB-639851BF9B12.1; 
 Mon, 06 Jun 2022 04:09:51 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2814ff798609.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 04:09:51 +0000
Received: from AM7PR02CA0013.eurprd02.prod.outlook.com (2603:10a6:20b:100::23)
 by DB7PR08MB3579.eurprd08.prod.outlook.com (2603:10a6:10:46::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.17; Mon, 6 Jun
 2022 04:09:47 +0000
Received: from AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::53) by AM7PR02CA0013.outlook.office365.com
 (2603:10a6:20b:100::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:47 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT053.mail.protection.outlook.com (10.152.16.210) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 04:09:47 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Mon, 6 Jun
 2022 04:09:49 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Mon, 6
 Jun 2022 04:09:45 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Mon, 6 Jun 2022 04:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b6e0821-e54e-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=EwkcM7u9VXg1qxkt/TYpjxp1EzeemT0fPTlxfjlIj/Yx0EBe/iHEI9eIk2U56ZwFsE+zXeiUrqL1Enazn2cNvetdHeLnNTR+PtXNj+Y5Zsl4s2OgjPTqsKngvnldIqtc18kRXjvjkkwfVf2XUdDuD2Cvc7Sb/wUxWxkdWq95SY0Zji3yqiZGjgUU8mj6sj4Xf3EJtLDNekbuJlR6iIwHLgVat4dMD47iEtRneaqWA17PS13DFGrGr/Xi7+RcOX+Uvcfiveb5f7wXq/URp7xbz9etSxwFxhppln3MI4DC8Yu/zImBz6W+OiAYCO0ysOZbBgpBJmbaerRLAObls85KIQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WTmh/NoFukFLTLwt/M1hKfyUJ4QPT35nQO5A1fR5J9I=;
 b=FRmTfyBKsI50TBnpmXR6YYPqHBUxlp8u3yr3fG+X3yrhwxI8aQEAKyFRW4b86LbKx0cV/2NKpd1/A2VFWi57m2Tb+cmiFFR5ih6OsAoKApT93Zxy/BR3WR+4TshykdqqqP7SREqKA/n4fUJaw7Zb1PPOwd60nUPNoOUmdky6DHQl6AVLzyFe5kb3Z9M5QPImANo1rkFG4EJpkbxvJoJ5hiyfccb6WUauwTzXRHdQOsh3ss2mDp5PNkbI8uglWvAhc9HcjhUOaJDKfEd0hLaltSTbGs+/z/eqVYfBemOeGn3OoDsCmBlNXJnMJGy5jLDk/3IshVGcWfvexOW5d8lj7A==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WTmh/NoFukFLTLwt/M1hKfyUJ4QPT35nQO5A1fR5J9I=;
 b=OTZHXfrIlZ+JAyIN11mT60uMAZgDVAruGKGyt7sP/w/RJO4Y2CSk8ppaOVX2fWMPXbLVHxBhTPSS6BDQbypU27fWSONk3XB5i5FXYmnD/gw5iubjHdZU4uE1X6O8M4iQmPAmb58c/eJ3939NOkPDpRrFoeaAdoCrOQJVzzsDmMI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c57d4c5667ec94b8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iGVe3yLR09bcUf0GtnFj5Ou6NJCQQiMmcIIhcF1788zdFsvb2uLl+gpWsx7i7p5US7UdPLSFV4XNlvJx+ZaiD6Kc1OehGykU8h/rSr9gwP78MZ3qNwKshfi7IRH5Eq7zf71WQYmG+rPbgXbygwpGHZ9giSjHtN3QkLv//J+g5hfeJdi0O1EpbVMyahgLaOcth9JPLlFdvEnMm8RhplmDKQc7DYOiK/u5fBOpeJVxGJUGSzrxi5bFq8ezmrzhmGwg1bPTQW1O3XEUolzLAdV5q52DEeeOwxk1QNwNn9C1sKP7QeXBMUxf5SgPrQSs6GB1PmvovZQNsXM55nn2EgYTRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WTmh/NoFukFLTLwt/M1hKfyUJ4QPT35nQO5A1fR5J9I=;
 b=I1NYQPFV+2XnTTDuJoGZxkWud4t1RS8M/wLh9X/uojlSSbUe/VZchn9DkYucPo0fkXCI1EI14rmdJ44ZkhGlRmTgecEc/hhYpvFn15BzjSleXAJ1Caf++mGnvm+WHQAYkQvdFmKewen9v7nS+JW+sz2NZkIgdImxXqhb2UWP/UM4LVd9UbQS6qpOTyi5QtIfXWNMptWZuMUEPYlyIqoRz2S+veQdn/+L27CiBrRzV/CBtTQsANKjEgKjWpd+vbYPcs1xEjh3MeknPliyyF3hvvfsJKou+kTMHxYYjzHOkrjtL4dWM/CJdirL1XGr00RsX78Bo+id9b+15sh6BJRpOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WTmh/NoFukFLTLwt/M1hKfyUJ4QPT35nQO5A1fR5J9I=;
 b=OTZHXfrIlZ+JAyIN11mT60uMAZgDVAruGKGyt7sP/w/RJO4Y2CSk8ppaOVX2fWMPXbLVHxBhTPSS6BDQbypU27fWSONk3XB5i5FXYmnD/gw5iubjHdZU4uE1X6O8M4iQmPAmb58c/eJ3939NOkPDpRrFoeaAdoCrOQJVzzsDmMI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v5 7/8] xen/x86: add detection of memory interleaves for different nodes
Date: Mon, 6 Jun 2022 12:09:15 +0800
Message-ID: <20220606040916.122184-8-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220606040916.122184-1-wei.chen@arm.com>
References: <20220606040916.122184-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: fb20b1cd-32e2-41f2-ecba-08da47726b97
X-MS-TrafficTypeDiagnostic:
	DB7PR08MB3579:EE_|VE1EUR03FT033:EE_|PAXPR08MB7333:EE_
X-Microsoft-Antispam-PRVS:
	<PAXPR08MB7333F8C6A30E5CACA41259E49EA29@PAXPR08MB7333.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KAhorGSmHG4e1esHgk0GHFqSTES2vg8hDkYbQ8ZNxU++ETqKWXzjyF8jIhbnVwE1jezfRh1QicdOJGx9mpnUhEVaWxu3TxKi3wTpKG5ej6euo4/vZxD7jdeiyktR6sbtsBmK0O4dP3NC0Iv2Diw/+g0XGM2pEbIkaqYF4eEdWtYo6lBcPPlhmtlIsQfU7ynWikzTLLPglyDLExp9R5et5pI0VUCpt/jRvZY4kXOKx7byMIlyc0vXHD8FaiHd47ky133hAwMF6+AsQoJPzU4Bvj6XkyrQVc+hX9xaYz594SwzeWWv4EDKsJad8Zjb/a2fC6lnstSq4DFinONmcbpxTLToC7ozSQGZyU8uh6hTIAm1RRanG5Ru4691C5oi70pWs6Haqy4tcryft1bIk8Js6q3ZBAFXbC9ygKkhqG2sIkRxCC3NeDZquYQm3Rzi6YJyDJ/C6pl0hppfkQ/9qEiRS9Z+UuMZftsS0oA/0WOIXkc+fxqQ78enk1+vjFyJxBDAf52s9QPKySmYfoSqvksdAAyJJwLNm+Npxkw+5udF2lb6aibCYApA1MY8yElx62HB67KlVRFQRBuzYYdfhZgCp1qMIzemfAD17BoDhnd0YcWWXQ0rM94rj8Ez/GBb4t5vsCmoKfTPl5Yo+Olc3vIVO/pJewuU7YwTTovxyVCOs/Z4Y9fBbNAOAsvO19AonqF0
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(83380400001)(7696005)(356005)(4326008)(26005)(44832011)(70586007)(6666004)(81166007)(336012)(82310400005)(47076005)(426003)(8676002)(316002)(8936002)(70206006)(6916009)(186003)(1076003)(36756003)(5660300002)(36860700001)(2616005)(86362001)(2906002)(508600001)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3579
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	43128504-97cf-4e8b-cbc5-08da477264e9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	t/VlKtAUWAaO6loms/yLtV/telJIqs1zkRPDv3UVJjGCuCeXQbF8/axpJndhD4onuUWtOBIFBsDP0uVhZ1ZdXoL6JhhQLkN1EpFezI08Bz8tdsTe80jlyIAfJ7CN4XrVQpQjk8YSoHm9vETRDN8CPCn+oIDoG847n3uKFVx6Kp2MbVbnmxK/e7IAB3sLAgNeuEB3Ekm19OZ4mPtRV2iCpYhrdpL0NS5LIg529OFzpeuddmS+t+dszguUfQpRIfEvUFXIE+BpX9pwE96Y0Tf3ngQwwkzt30I/fI53M69qY2uqffxkje0dqhM39vVGyN6NJtojj5EBMuPXrf+CAhVsyfoYdn9yFECE94atyfu/kpWaNVZhojaHxMBBb8/7xCM4fFZ77TckDIQdb5ZqRW0rSzALe3Xku2Zv8ZXWCsVZI05l34BIvh3R02ZvICyJ8hkOEGtQZ0u/blhBM4Df3fxtudqjKDgr1VnaaxntpFLg970vce+RGxiBFpNVpSSGB5DrJWOepAsKRw/S13bHc97piPHahq9QtKzdqpfIgoc8JTZY8pQxHXu3esrL1lHG0W7+lfiDYLP/xhYX3zd/SJrvVQ612iqRxIxLaDqfL45EcFGvp+E0mux+mkVfjNf1JXk438sV5mZD9UDyySm/4C0SJA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(83380400001)(7696005)(36756003)(54906003)(2616005)(4326008)(8676002)(70206006)(186003)(2906002)(44832011)(5660300002)(6916009)(1076003)(316002)(508600001)(426003)(70586007)(8936002)(86362001)(336012)(26005)(47076005)(36860700001)(6666004)(81166007)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 04:09:58.1746
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fb20b1cd-32e2-41f2-ecba-08da47726b97
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7333

One NUMA node may contain several memory blocks. In current Xen
code, Xen maintains a node memory range for each node to cover
all of its memory blocks. But here comes the problem: if, in the
gap between two of a node's memory blocks, there are memory
blocks that don't belong to this node (remote memory blocks),
this node's memory range will be expanded to cover these remote
memory blocks.

A node's memory range containing other nodes' memory is
obviously not reasonable. This means the current NUMA code can
only support nodes that have no interleaved memory blocks.
However, on a physical machine, the addresses of multiple nodes
can be interleaved.

So in this patch, we add code to detect memory interleaving
across different nodes. NUMA initialization will fail and error
messages will be printed when Xen detects such a hardware
configuration.

As we have checked the node's range before, for a non-empty node
the "nd->end == end && nd->start == start" check is unnecessary,
so we remove it from conflicting_memblks as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
---
v4 -> v5:
1. Remove "nd->end == end && nd->start == start" from
   conflicting_memblks.
2. Use case NO_CONFLICT instead of "default".
3. Correct wrong "node" to "pxm" in print message.
4. Remove unnecessary "else" to reduce the indent depth.
5. Convert all ranges to proper mathematical interval
   representation.
6. Fix code-style comments.
v3 -> v4:
1. Drop "ERR" prefix for enumeration, and remove init value.
2. Use "switch case" for enumeration, and add "default:"
3. Use "PXM" in log messages.
4. Use unsigned int for node memory block id.
5. Fix some code-style comments.
6. Use "nd->end" in node range expansion check.
v2 -> v3:
1. Merge the check code from a separate function to
   conflicting_memblks. This will reduce the loop
   times of node memory blocks.
2. Use an enumeration to indicate conflict check status.
3. Use a pointer to get conflict memory block id.
v1 -> v2:
1. Update the description to say that what we're after is no
   memory interleaving between different nodes.
2. Only update node range when it passes the interleave check.
3. Don't use full upper-case for "node".
---
 xen/arch/x86/srat.c | 139 ++++++++++++++++++++++++++++++++------------
 1 file changed, 101 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 8ffe43bdfe..b145ccb847 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -42,6 +42,12 @@ static struct node node_memblk_range[NR_NODE_MEMBLKS];
 static nodeid_t memblk_nodeid[NR_NODE_MEMBLKS];
 static __initdata DECLARE_BITMAP(memblk_hotplug, NR_NODE_MEMBLKS);
 
+enum conflicts {
+	NO_CONFLICT,
+	OVERLAP,
+	INTERLEAVE,
+};
+
 static inline bool node_found(unsigned idx, unsigned pxm)
 {
 	return ((pxm2node[idx].pxm == pxm) &&
@@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
 	return 0;
 }
 
-static __init int conflicting_memblks(paddr_t start, paddr_t end)
+static
+enum conflicts __init conflicting_memblks(nodeid_t nid, paddr_t start,
+					  paddr_t end, paddr_t nd_start,
+					  paddr_t nd_end, unsigned int *mblkid)
 {
-	int i;
+	unsigned int i;
 
+	/*
+	 * Scan all recorded nodes' memory blocks to check for
+	 * conflicts: overlap or interleave.
+	 */
 	for (i = 0; i < num_node_memblks; i++) {
 		struct node *nd = &node_memblk_range[i];
+
+		*mblkid = i;
+
+		/* Skip 0-byte node memory blocks. */
 		if (nd->start == nd->end)
 			continue;
+		/*
+		 * Use the memblk range to check for memblk overlaps, including
+		 * the self-overlap case. As nd's range is non-empty, the special
+		 * case "nd->end == end && nd->start == start" is also covered.
+		 */
 		if (nd->end > start && nd->start < end)
-			return i;
-		if (nd->end == end && nd->start == start)
-			return i;
+			return OVERLAP;
+
+		/*
+		 * Use the node memory range to check whether the new range
+		 * contains memory from other nodes - the interleave check. Only
+		 * full containment needs to be checked here, because overlaps
+		 * have been checked above.
+		 */
+		if (nid != memblk_nodeid[i] &&
+		    nd->start >= nd_start && nd->end <= nd_end)
+			return INTERLEAVE;
 	}
-	return -1;
+
+	return NO_CONFLICT;
 }
 
 static __init void cutoff_node(int i, paddr_t start, paddr_t end)
@@ -275,10 +306,12 @@ acpi_numa_processor_affinity_init(const struct acpi_srat_cpu_affinity *pa)
 void __init
 acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 {
+	struct node *nd;
+	paddr_t nd_start, nd_end;
 	paddr_t start, end;
 	unsigned pxm;
 	nodeid_t node;
-	int i;
+	unsigned int i;
 
 	if (srat_disabled())
 		return;
@@ -310,44 +343,74 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		bad_srat();
 		return;
 	}
+
+	/*
+	 * For a node that already has some memory blocks, we will
+	 * expand the node memory range temporarily to check for memory
+	 * interleaves with other nodes. We will not use this temporary
+	 * node memory range to check overlaps, because that would mask
+	 * overlaps within the same node.
+	 *
+	 * A node with 0 bytes of memory doesn't need this expansion.
+	 */
+	nd_start = start;
+	nd_end = end;
+	nd = &nodes[node];
+	if (nd->start != nd->end) {
+		if (nd_start > nd->start)
+			nd_start = nd->start;
+
+		if (nd_end < nd->end)
+			nd_end = nd->end;
+	}
+
 	/* It is fine to add this area to the nodes data it will be used later*/
-	i = conflicting_memblks(start, end);
-	if (i < 0)
-		/* everything fine */;
-	else if (memblk_nodeid[i] == node) {
-		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
-		                !test_bit(i, memblk_hotplug);
-
-		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
-		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
-		       node_memblk_range[i].start, node_memblk_range[i].end);
-		if (mismatch) {
-			bad_srat();
-			return;
+	switch (conflicting_memblks(node, start, end, nd_start, nd_end, &i)) {
+	case OVERLAP:
+		if (memblk_nodeid[i] == node) {
+			bool mismatch = !(ma->flags &
+					  ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
+			                !test_bit(i, memblk_hotplug);
+
+			printk("%sSRAT: PXM %u [%"PRIpaddr"-%"PRIpaddr"] overlaps with itself [%"PRIpaddr"-%"PRIpaddr"]\n",
+			       mismatch ? KERN_ERR : KERN_WARNING, pxm, start,
+			       end - 1, node_memblk_range[i].start,
+			       node_memblk_range[i].end - 1);
+			if (mismatch) {
+				bad_srat();
+				return;
+			}
+			break;
 		}
-	} else {
+
+		printk(KERN_ERR
+		       "SRAT: PXM %u [%"PRIpaddr"-%"PRIpaddr"] overlaps with PXM %u [%"PRIpaddr"-%"PRIpaddr"]\n",
+		       pxm, start, end - 1, node_to_pxm(memblk_nodeid[i]),
+		       node_memblk_range[i].start,
+		       node_memblk_range[i].end - 1);
+		bad_srat();
+		return;
+
+	case INTERLEAVE:
 		printk(KERN_ERR
-		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
-		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
-		       node_memblk_range[i].start, node_memblk_range[i].end);
+		       "SRAT: PXM %u: [%"PRIpaddr"-%"PRIpaddr"] interleaves with PXM %u memblk [%"PRIpaddr"-%"PRIpaddr"]\n",
+		       pxm, nd_start, nd_end - 1, node_to_pxm(memblk_nodeid[i]),
+		       node_memblk_range[i].start, node_memblk_range[i].end - 1);
 		bad_srat();
 		return;
+
+	case NO_CONFLICT:
+		break;
 	}
+
 	if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)) {
-		struct node *nd = &nodes[node];
-
-		if (!node_test_and_set(node, memory_nodes_parsed)) {
-			nd->start = start;
-			nd->end = end;
-		} else {
-			if (start < nd->start)
-				nd->start = start;
-			if (nd->end < end)
-				nd->end = end;
-		}
+		node_set(node, memory_nodes_parsed);
+		nd->start = nd_start;
+		nd->end = nd_end;
 	}
-	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIpaddr"-%"PRIpaddr"%s\n",
-	       node, pxm, start, end,
+
+	printk(KERN_INFO "SRAT: Node %u PXM %u [%"PRIpaddr"-%"PRIpaddr"]%s\n",
+	       node, pxm, start, end - 1,
 	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
 
 	node_memblk_range[num_node_memblks].start = start;
@@ -396,7 +459,7 @@ static int __init nodes_cover_memory(void)
 
 		if (start < end) {
 			printk(KERN_ERR "SRAT: No PXM for e820 range: "
-				"%"PRIpaddr" - %"PRIpaddr"\n", start, end);
+				"[%"PRIpaddr" - %"PRIpaddr"]\n", start, end - 1);
 			return 0;
 		}
 	}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 05:07:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 05:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342375.567460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny4x8-0002r0-68; Mon, 06 Jun 2022 05:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342375.567460; Mon, 06 Jun 2022 05:06:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny4x8-0002qt-3I; Mon, 06 Jun 2022 05:06:58 +0000
Received: by outflank-mailman (input) for mailman id 342375;
 Mon, 06 Jun 2022 05:00:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+HWk=WN=kernel.org=masahiroy@srs-se1.protection.inumbo.net>)
 id 1ny4qx-0002fn-OO
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 05:00:36 +0000
Received: from conuserg-12.nifty.com (conuserg-12.nifty.com [210.131.2.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 948d2711-e555-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 07:00:30 +0200 (CEST)
Received: from grover.sesame (133-32-177-133.west.xps.vectant.ne.jp
 [133.32.177.133]) (authenticated)
 by conuserg-12.nifty.com with ESMTP id 2564xNUT024943;
 Mon, 6 Jun 2022 13:59:23 +0900
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 948d2711-e555-11ec-b605-df0040e90b76
DKIM-Filter: OpenDKIM Filter v2.10.3 conuserg-12.nifty.com 2564xNUT024943
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nifty.com;
	s=dec2015msa; t=1654491564;
	bh=02SaZmEKBPMSF76nbpCPhuGFilCILCVnozQUdrfNE4A=;
	h=From:To:Cc:Subject:Date:From;
	b=RkkHLUjqmtVqOahyc+gud1UArOBglFYPpJZgVzg8bXRPULtWmrJ+kliGNUKHHtDny
	 JiWmrEm1pKeMQhx2JCtK9wQffux2NWWrU2BsRIZYg4fVi8QAJBz/YpNdoCzKKf7LoC
	 vlI95bU/MbYRII7rJ4vFT9VnOmGsvYvVjmfBi67GMgy6q/HMo/eJE1mr/Cp8Ciqoyv
	 0YxSF9sCrBzJ/Pj0oh9NLLnSXjg/VriRCI5ccSBEb2gDb7aiUlEeceQTnmO3Sunl3e
	 3lE9jXK7ywjcnYJylguEOjMYY88BLGf0e+gIJPOolZo3FUDfdRnG2wpAtAxSG0/lgj
	 ONhdPeLT6l+rg==
X-Nifty-SrcIP: [133.32.177.133]
From: Masahiro Yamada <masahiroy@kernel.org>
To: Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        xen-devel@lists.xenproject.org (moderated for non-subscribers)
Cc: Masahiro Yamada <masahiroy@kernel.org>,
        Stephen Rothwell <sfr@canb.auug.org.au>,
        Julien Grall <julien.grall@arm.com>,
        Shannon Zhao <shannon.zhao@linaro.org>, linux-kernel@vger.kernel.org,
        xen-devel@lists.xenproject.org
Subject: [PATCH] xen: unexport __init-annotated xen_xlate_map_ballooned_pages()
Date: Mon,  6 Jun 2022 13:59:20 +0900
Message-Id: <20220606045920.4161881-1-masahiroy@kernel.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

EXPORT_SYMBOL and __init is a bad combination because the .init.text
section is freed up after initialization. Hence, modules cannot use
symbols annotated __init. Accessing a freed symbol may end up in a
kernel panic.

modpost used to detect this, but it has been broken for a decade.

Recently, I fixed modpost so it warns about this again, and then this
showed up in linux-next builds.

There are two ways to fix it:

  - Remove __init
  - Remove EXPORT_SYMBOL

I chose the latter for this case because none of the in-tree call-sites
(arch/arm/xen/enlighten.c, arch/x86/xen/grant-table.c) is compiled as
modular.

Fixes: 243848fc018c ("xen/grant-table: Move xlated_setup_gnttab_pages to common place")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
---

 drivers/xen/xlate_mmu.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index 34742c6e189e..f17c4c03db30 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
 
 struct remap_pfn {
 	struct mm_struct *mm;
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 08:58:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 08:58:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342396.567472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny8Z7-0001SB-1A; Mon, 06 Jun 2022 08:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342396.567472; Mon, 06 Jun 2022 08:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny8Z6-0001S4-Tm; Mon, 06 Jun 2022 08:58:24 +0000
Received: by outflank-mailman (input) for mailman id 342396;
 Mon, 06 Jun 2022 08:58:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny8Z6-0001Ru-4m; Mon, 06 Jun 2022 08:58:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny8Z6-0000jY-0q; Mon, 06 Jun 2022 08:58:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny8Z5-0002Bs-Ev; Mon, 06 Jun 2022 08:58:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ny8Z5-00080s-Bq; Mon, 06 Jun 2022 08:58:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aKX0QuXNwsnuO1ZWo0WyNBzb5tZN16Jfk1md0iTVreE=; b=tJ7Sebxxof4bcGF1yipr1V+34j
	hzRaZFcmJCmkJdFdfqPTwn2frQXUWO3VbIUPdt9G5lDJz2MhypeXX0wEOUuGq3n7LUDN1LaHG3XP0
	dryz93rP5LyyV5239FuCZcorwtEBBcK+S8cGr6jld4w2pVtRl1tyjcj9HEQYw/pKdLwA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170840: FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-3:<job status>:broken:regression
    xen-unstable:test-xtf-amd64-amd64-3:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 08:58:23 +0000

flight 170840 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170840/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-3          <job status>                 broken  in 170833

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3       5 host-install(5) broken in 170833 pass in 170840
 test-amd64-i386-freebsd10-i386  7 xen-install              fail pass in 170833
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 170833
 test-amd64-i386-livepatch     7 xen-install                fail pass in 170833
 test-armhf-armhf-xl-rtds     10 host-ping-check-xen        fail pass in 170833

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 170833 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 170833 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170833
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170833
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170833
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170833
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170833
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170833
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170833
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170833
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170833
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170833
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170833
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170833
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170840  2022-06-06 01:51:50 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-xtf-amd64-amd64-3 broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 09:39:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 09:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342412.567482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny9D2-00062E-8K; Mon, 06 Jun 2022 09:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342412.567482; Mon, 06 Jun 2022 09:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny9D2-000627-5a; Mon, 06 Jun 2022 09:39:40 +0000
Received: by outflank-mailman (input) for mailman id 342412;
 Mon, 06 Jun 2022 09:39:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W7SY=WN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1ny9D1-000621-3C
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 09:39:39 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20605.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 938ae22c-e57c-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 11:39:37 +0200 (CEST)
Received: from AS8PR04CA0153.eurprd04.prod.outlook.com (2603:10a6:20b:331::8)
 by AM0PR08MB3780.eurprd08.prod.outlook.com (2603:10a6:208:10a::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.16; Mon, 6 Jun
 2022 09:39:33 +0000
Received: from AM5EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:331:cafe::c4) by AS8PR04CA0153.outlook.office365.com
 (2603:10a6:20b:331::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19 via Frontend
 Transport; Mon, 6 Jun 2022 09:39:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT010.mail.protection.outlook.com (10.152.16.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.12 via Frontend Transport; Mon, 6 Jun 2022 09:39:33 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Mon, 06 Jun 2022 09:39:32 +0000
Received: from d8e439f21f68.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 652818D8-A3A8-4819-A851-6D34197265A2.1; 
 Mon, 06 Jun 2022 09:39:26 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d8e439f21f68.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 06 Jun 2022 09:39:26 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM9PR08MB6307.eurprd08.prod.outlook.com (2603:10a6:20b:2dc::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 09:39:24 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Mon, 6 Jun 2022
 09:39:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 938ae22c-e57c-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=RytXc5vdmD8zrca6CG4UpLAV93kCNwvvM9dFm8vZUQsX+Vq3NBybo9bnJhS6Ia+DClDcb0Sl8HGm5Xn/74sqdBukrmM05PKTzZcVjYBp7VtJsGL3RsE0PpygTXSHY0XiihkrftnHPl0ZiEZDyDZr+e/l7GcNYUmaRF5sf+8AeYydnpUFeHJKEVVdye274hEMzqgtkl3dn9kQJFhjU66K+CJZL5GTq5xfydqIEl63m7TmQTt2uR9QXOqLLbvX+rONWWKfcIxNs0RYzYk1tzIetPBSQZC4UImUM1c+cc2+7iZT0kO0b28pjN4D2FmPTKvFFySOakKvILcIgYM4mLwQWA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ONp9U4zHjzYZ57SmjD3yLd0d6EF2VRvkTdag3n8mqy8=;
 b=U+KnIabnAT7zdRjL07lRCywsiso5GUW38qdAbVaccb+xfRpKz9kYOysCY3nV9RNdSWP8u8cCokwSY3/CfLKY+y6SbFdHsiVLtw0wAFNlPEKnUQ69JMe3X8/9IAeE4pLPPOM/2z3XSdQHJHvS9mNRY7RLvlfpl6JN801f5wtiGQijON8rIICn8evtOqSMQxrrHbcPhRlQn84oeY9b/0Qv3zjeIjrAW5cpSN6OqiSX8TsuKhN/eAsIvH65QHE7hD3ccFcCvHETSTRPN6ZhQYc+a2sxbZZziNVVnT86UMMCBek3745o15G8Kp7rvUq9EL9bhHbshqEfAKpPW+NQHeJTdA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ONp9U4zHjzYZ57SmjD3yLd0d6EF2VRvkTdag3n8mqy8=;
 b=jFP1Nqa050Hv4qVNUIP6H1oQlkglr7o9mNUzpaMj7OQT1/uXJkfAvAxn1lAWXHUeFFyvET2UlVolvGptSV+aW4/DrTH2Oo05GD5vSuInLwL26NI/BNEifsl4wM/RMUzwZ31IGwQ8FRRY9zKA7duqoTKfEZAhIvGGun6EoHiJfPE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b24ed6c71288ab81
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MA7rftDGnvm+i5/jTW0NSj/d4a3T6jNCx8iRQLono35Q5UowO8MOubUJhe855mPwcLKeOzardC+7rxPaDY8GA3vlusmzG542vHS5erl5BFWtXnuk6xyczU9RG9Vyz2xgaocg8Dh4Sqn/BjQl3/o8pgKtjH26k2mBoFwgEE8Ss1tCq3JitjCr1oxov7auEu5w16J5LhOOQZp0Z3Ro3L5DWwXWYzYaP+x4bpcQRkZiTbw7XzfOI03YeY9EVPkT+VVS1xe4UOkOr69CPpuNZN4+zfaRWnFllIdQggByQGMIzBxpDrHgssbajnGeLtNO+nFbJtj95aMw2tG+Bc1vS/zONA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ONp9U4zHjzYZ57SmjD3yLd0d6EF2VRvkTdag3n8mqy8=;
 b=FBYzU3Zu9slvhQNoHW9LH9eYfa8uRE/tD1iuezPt+oJCUxJHCr0gqSzKJhNI0gehHAvjTk7eEKr2e2amsj+dksuC4hhNBC8Y0ZnIjxlV7gqX7NxBEfiGgutzb+dMTML9gfImp/6RxpM2F0Eoq0fkZzporY+ZosVDyeGt3an4X+tADgSbo8t5hRorz9mqtCr0h6hVJnU2rTmxRJ/RLfcdmRe5rZ0mX0hgn8hKQ0t/EjW/X6A2rns+KzKTMceLY8w9ciuvGG22aWsdH7M0pTrBTyiqUhnqnUkik7D4/gv7t2UkXn+9Tza0UpZlLFGlSq8G7rtoTBUj2+6gD9YXItaTdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ONp9U4zHjzYZ57SmjD3yLd0d6EF2VRvkTdag3n8mqy8=;
 b=jFP1Nqa050Hv4qVNUIP6H1oQlkglr7o9mNUzpaMj7OQT1/uXJkfAvAxn1lAWXHUeFFyvET2UlVolvGptSV+aW4/DrTH2Oo05GD5vSuInLwL26NI/BNEifsl4wM/RMUzwZ31IGwQ8FRRY9zKA7duqoTKfEZAhIvGGun6EoHiJfPE=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/4] arm: add ISAR2, MMFR0 and MMFR1 fields in
 cpufeature
Thread-Topic: [PATCH v2 3/4] arm: add ISAR2, MMFR0 and MMFR1 fields in
 cpufeature
Thread-Index: AQHYdNtPuqCIYoKAP0q4q0ryuoAsQK083KqAgAVMOoA=
Date: Mon, 6 Jun 2022 09:39:23 +0000
Message-ID: <FE1F683F-FD96-4D55-8863-B9DB373CE790@arm.com>
References: <cover.1653993431.git.bertrand.marquis@arm.com>
 <4a0aef106ac7b6c16048ff3554eda1d8b3eab61a.1653993431.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.22.394.2206021738430.2783803@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206021738430.2783803@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: dfaa5456-001a-49f5-b387-08da47a07658
x-ms-traffictypediagnostic:
	AM9PR08MB6307:EE_|AM5EUR03FT010:EE_|AM0PR08MB3780:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3780752C24734E4F159D56D99DA29@AM0PR08MB3780.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3479F4905B5C90439208BDDF26A27A70@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6307
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f5ab8134-6bd3-4afa-c13f-08da47a07094
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2022 09:39:33.1741
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dfaa5456-001a-49f5-b387-08da47a07658
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3780

Hi Stefano,

> On 3 Jun 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 31 May 2022, Bertrand Marquis wrote:
>> Complete AA64ISAR2 and AA64MMFR[0-1] with more fields.
>> While there add a comment for MMFR bitfields as for other registers in
>> the cpuinfo structure definition.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in v2:
>> - patch introduced to isolate changes in cpufeature.h
>> - complete MMFR0 and ISAR2 to sync with sysregs.h status
>> ---
>> xen/arch/arm/include/asm/cpufeature.h | 28 ++++++++++++++++++++++-----
>> 1 file changed, 23 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
>> index 9649a7afee..57eb6773d3 100644
>> --- a/xen/arch/arm/include/asm/cpufeature.h
>> +++ b/xen/arch/arm/include/asm/cpufeature.h
>> @@ -234,6 +234,7 @@ struct cpuinfo_arm {
>> union {
>> register_t bits[3];
>> struct {
>> + /* MMFR0 */
>> unsigned long pa_range:4;
>> unsigned long asid_bits:4;
>> unsigned long bigend:4;
>> @@ -242,18 +243,31 @@ struct cpuinfo_arm {
>> unsigned long tgranule_16K:4;
>> unsigned long tgranule_64K:4;
>> unsigned long tgranule_4K:4;
>> - unsigned long __res0:32;
>> -
>> + unsigned long tgranule_16k_2:4;
>> + unsigned long tgranule_64k_2:4;
>> + unsigned long tgranule_4k:4;
>
> Should be tgranule_4k_2:4

Right, I will fix that.

>
>
>> + unsigned long exs:4;
>> + unsigned long __res0:8;
>> + unsigned long fgt:4;
>> + unsigned long ecv:4;
>> +
>> + /* MMFR1 */
>> unsigned long hafdbs:4;
>> unsigned long vmid_bits:4;
>> unsigned long vh:4;
>> unsigned long hpds:4;
>> unsigned long lo:4;
>> unsigned long pan:4;
>> - unsigned long __res1:8;
>> - unsigned long __res2:28;
>> + unsigned long specsei:4;
>> + unsigned long xnx:4;
>> + unsigned long twed:4;
>> + unsigned long ets:4;
>> + unsigned long __res1:4;
>
> hcx?
>
>
>> + unsigned long afp:4;
>> + unsigned long __res2:12;
>
> ntlbpa
> tidcp1
> cmow
>
>> unsigned long ecbhb:4;
>
> Strangely enough I am looking at DDI0487H and ecbhb is not there
> (D13.2.65). Am I looking at the wrong location?

Right now I have only declared the fields which have a corresponding
declaration in sysregs.h.
If I add more fields here, we will no longer be in sync with it.

And on ecbhb: yes, it will be in the next revision of the manual.


>
>
>> + /* MMFR2 */
>> unsigned long __res3:64;
>> };
>> } mm64;
>> @@ -297,7 +311,11 @@ struct cpuinfo_arm {
>> unsigned long __res2:8;
>>
>> /* ISAR2 */
>> - unsigned long __res3:28;
>> + unsigned long wfxt:4;
>> + unsigned long rpres:4;
>> + unsigned long gpa3:4;
>> + unsigned long apa3:4;
>> + unsigned long __res3:12;
>
> mops
> bc
> pac_frac
>
>
>> unsigned long clearbhb:4;
>
> And again this is not described at D13.2.63. Probably the bhb stuff
> didn't make it into the ARM ARM yet.

As said before, are you OK with only adding the fields already declared in
sysregs.h, to make it simpler to stay in sync with Linux?

Cheers
Bertrand

>
>
>>
>> unsigned long __res4:32;
>> --
>> 2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 10:18:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 10:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342434.567498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny9oZ-0002I8-G4; Mon, 06 Jun 2022 10:18:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342434.567498; Mon, 06 Jun 2022 10:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ny9oZ-0002I1-Cj; Mon, 06 Jun 2022 10:18:27 +0000
Received: by outflank-mailman (input) for mailman id 342434;
 Mon, 06 Jun 2022 10:18:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny9oY-0002Hl-0k; Mon, 06 Jun 2022 10:18:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny9oX-00028f-SH; Mon, 06 Jun 2022 10:18:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ny9oX-0004Jb-A7; Mon, 06 Jun 2022 10:18:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ny9oX-000435-9j; Mon, 06 Jun 2022 10:18:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xCJ7dZLR3SM1xSRjI+yi4IsEqTzocOyGxgynE+xRkQc=; b=T6rHntAwU2PjGnt3PhicDgn+Fk
	v7Xkjzj142+iS2z1AmZbRznhgtJWYyKaoTTir4z3jsAsfaHEBjn7LAfsNFUx9wkGuFuj2GM5WnltR
	H9tyH+SUVIW6ya+6qLT+PVyuHeuINvQmvBkDpPOS6E3s8nu3+6HaVIYneMFcW+OiePDE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170842-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170842: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a939d4d86919a1f9ffcdc053a852422f9184a00d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 10:18:25 +0000

flight 170842 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170842/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a939d4d86919a1f9ffcdc053a852422f9184a00d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  696 days
Failing since        151818  2020-07-11 04:18:52 Z  695 days  677 attempts
Testing same since   170825  2022-06-04 04:20:31 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111348 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:19:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:19:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342500.567540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdI-0004F6-By; Mon, 06 Jun 2022 13:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342500.567540; Mon, 06 Jun 2022 13:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdI-0004Ez-79; Mon, 06 Jun 2022 13:19:00 +0000
Received: by outflank-mailman (input) for mailman id 342500;
 Mon, 06 Jun 2022 13:18:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oh0U=WN=citrix.com=prvs=149ed92ed=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nyCdH-0003zN-EF
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:18:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 375e1b3e-e59b-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 15:18:57 +0200 (CEST)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jun 2022 09:18:47 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB4962.namprd03.prod.outlook.com (2603:10b6:408:7b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 13:18:46 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Mon, 6 Jun 2022
 13:18:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 375e1b3e-e59b-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654521537;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=HoN4d8nMMn+mRNEtlPcCAAQtSQmw+GmljetBKUYnKPs=;
  b=ZzkMBaoClbNSC9OMdxlzBmN0c+2PBs1LCGeed8h/N45hqzCn1/qNqT/K
   TZHoQR4k+V67pA+WaVf96OsMzsSknFq/wFNrpbnK/z8NsZu4/asD9V+nw
   jM0WPXkwJ6fvwRBBEDP9fqsj8xvhUKalwwU6sAcr4H/oiwKLRKlP/YH5F
   c=;
X-IronPort-RemoteIP: 104.47.70.107
X-IronPort-MID: 72292677
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,280,1647316800"; 
   d="scan'208";a="72292677"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=blSG7GesJIm5gHtS3GhmP8gf84T8tlPqhIgOdcUEnt3jbOhaKbb6wh+pWesVqcfWFYyLPwk3S9p0rT5apmbebya+pbPdYOf5uP+CEvjjs4WafoYwapek4r+NGN+qJm4H1x2QWEHMQ8hCCIcSugFMvqKNbQETdlPTF8V4pbxvB2//+63DsrfGEfozppqiGNOcr7NyTy8aIH/ey2lTP+w2oe1TW2K7pcVoKi83Tt56QJq9OukXtc2jERBxaGzQ53xGpXz8RChaM+A2nVBpEzyOBovW/6V+czxzyxQAz8wTSotmi91SPWR6q/QvAsMpnUNA075p4mMqMVCa8tTAfIx/6g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HoN4d8nMMn+mRNEtlPcCAAQtSQmw+GmljetBKUYnKPs=;
 b=F/G747CHGteW6x+OsTnJ5s08WWFAajcCflr+qdp4GAOri8WEHcFFv0l1l7sLavIe+fNswJUWO8o4lbYDD0ifDI85J6zH8aMbplU+60SV9dUCwaHwiwKs3eQaniAO6N2HEYqn46hIGkBmDTZi7qTf4c5zGRZzli43L0k3LCXcXQttp1OJwuX6lMlsUaY0l9gDtWlybYlIp3Oxsl2yTulooUjcJBkH4LuxtJwPr/1RvaSMf/1TXvVCSHwFScdbhrM6lESzpGKfu/R+mHML35Tg/lSTNa/USlCIRvWfhZegdvgqHGUga16Kay3tYckpOkK51sZz/goO2JJXEs5qLBltwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HoN4d8nMMn+mRNEtlPcCAAQtSQmw+GmljetBKUYnKPs=;
 b=aFZ2+SZCO+vK+EghtSJhlr+1m/8DlW3pG45G7GUrepY/aXKhrh5czdtzEvdiCkV2vcZS/AE2vo8XNfxuNCN+aDBDLio3TFn5O/cez2ZW/ueveBBN//xPHJpUkVhjebEssAsJLGX8d4LuwhEfi8f2o/MjaiLGMDlzHKC4iShPhJo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, Paul Durrant
	<paul@xen.org>, Kevin Tian <kevin.tian@intel.com>, Connor Davis
	<connojdavis@gmail.com>
Subject: Re: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Thread-Topic: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Thread-Index: AQHYeVc0shRplu45PUy7PQfpSV53Bq1CXTeA
Date: Mon, 6 Jun 2022 13:18:45 +0000
Message-ID: <d1de82b2-333d-9c7f-0b82-4bf18b4ec469@citrix.com>
References:
 <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
In-Reply-To:
 <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 06/06/2022 04:40, Marek Marczykowski-Górecki wrote:
> This is integration of https://github.com/connojd/xue into mainline Xen.
> This patch series includes several patches that I made in the process, some
> are very loosely related.

Thanks for looking into this.  CC'ing Connor just so he's aware.

>
> The RFC status is to collect feedback on the shape of this series,
> specifically:
>
> 1. The actual Xue driver is a header-only library. Most of the code is in a
> series of inline functions in xue.h. I kept it this way, to ease integrating
> Xue updates. That's also why I preserved its original code style. Is it okay,
> or should I move the code to a .c file?

It doesn't matter much if it's a .h or .c file.  It could perfectly
easily live as xen/drivers/char/xue.h and be included only by xue.c.  (More
specifically, it doesn't want to live as xen/include/xue.h.)

That said, as soon as you get to patch 2, it's no longer unmodified from
upstream, and with patch 3, we're gaining concrete functionality that
upstream doesn't have.

>
> 2. The xue.h file includes bindings for several other environments too (EFI,
> Linux, ...). This is unused code, behind #ifdef. Again, I kept it to ease
> updating. Should I remove it?

Drop it, please.  Xen is an embedded environment, and supporting other
environments is a waste of space and time.

I'm slowly ripping out other examples.

> 3. Adding IOMMU reserved memory is necessary even when "hiding" the device
> from dom0. Otherwise, VT-d will deny DMA. That's probably not the most
> elegant solution, but Xen doesn't seem to have provisions for devices doing
> DMA into Xen's memory.

I think that's to be expected, as the device should end up in quarantine.

That said, the model is broken for devices that Xen uses exclusively,
which includes IOMMU devices.  IOMMUs don't have any kind of applicable
IOMMU context, and things used exclusively by Xen don't want to be in
the general quarantine pool, because then any malicious device can
interfere with the ring buffer.

> 4. To preserve authorship, I included the unmodified "drivers/char: Add
> support for Xue USB3 debugger" commit from Connor, and only added my changes
> on top. This means that with that commit alone, the driver doesn't work yet
> (but it does compile). Is it okay, or should I combine fixes into that
> commit and somehow mark authorship in the commit message?

That depends on how much has changed.  Other options are a dual SoB with
Connor still as the author (I typically do this for substantial code
movement, programmatic changes, etc.), or, for a major rewrite, changing
authorship and being very clear in the commit message where the code
originated.

> 5. The last patch(es) enable using the controller from dom0, even when Xen
> uses the DbC part. That's possible because the capability was designed
> specifically to allow a separate driver to handle it, in parallel to an
> unmodified xhci driver (separate set of registers, pretending the port is
> "disconnected" to the main xhci driver, etc.). It works with Linux dom0,
> although it requires an awful hack - re-enabling bus mastering behind dom0's
> back. Is it okay to leave this functionality as is, or guard it behind some
> cmdline option, or maybe remove it completely?

"Xen is configured to use USB3 debugging" is the only relevant signal.
We do not want anything else.  If this triggers hacks for dom0, then fine.

OOI, how does the dual driver stack work in Linux?  At a minimum they've
surely got to coordinate device resets.

In an ideal world, dom0 would be fully unaware.  We hide the DbC
controls (so dom0 doesn't get any clever ideas), but we do need to keep
the device active when dom0 wants to reset (which will probably require
a fair chunk of emulation).

Connor did upstream code into Linux to cause it to ignore an
already-active DbC session.  I hope this will cause Linux to be duly
careful with resets, and it is probably the easiest way forward.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:19:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:19:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342499.567529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdF-0003za-1G; Mon, 06 Jun 2022 13:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342499.567529; Mon, 06 Jun 2022 13:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdE-0003zT-UH; Mon, 06 Jun 2022 13:18:56 +0000
Received: by outflank-mailman (input) for mailman id 342499;
 Mon, 06 Jun 2022 13:18:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8G/q=WN=intel.com=tamas.lengyel@srs-se1.protection.inumbo.net>)
 id 1nyCdC-0003zN-7X
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:18:54 +0000
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32bbea1c-e59b-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 15:18:49 +0200 (CEST)
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 06 Jun 2022 06:18:46 -0700
Received: from fmsmsx606.amr.corp.intel.com ([10.18.126.86])
 by fmsmga007.fm.intel.com with ESMTP; 06 Jun 2022 06:18:46 -0700
Received: from fmsmsx608.amr.corp.intel.com (10.18.126.88) by
 fmsmsx606.amr.corp.intel.com (10.18.126.86) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Mon, 6 Jun 2022 06:18:45 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx608.amr.corp.intel.com (10.18.126.88) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Mon, 6 Jun 2022 06:18:45 -0700
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (104.47.66.46) by
 edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Mon, 6 Jun 2022 06:18:45 -0700
Received: from CY4PR11MB0056.namprd11.prod.outlook.com (2603:10b6:910:7c::30)
 by MW3PR11MB4524.namprd11.prod.outlook.com (2603:10b6:303:2c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 13:18:42 +0000
Received: from CY4PR11MB0056.namprd11.prod.outlook.com
 ([fe80::4d92:bf6e:eae9:27c0]) by CY4PR11MB0056.namprd11.prod.outlook.com
 ([fe80::4d92:bf6e:eae9:27c0%3]) with mapi id 15.20.5314.019; Mon, 6 Jun 2022
 13:18:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32bbea1c-e59b-11ec-bd2c-47488cf2e6aa
From: "Lengyel, Tamas" <tamas.lengyel@intel.com>
To: "Marczykowski, Marek" <marmarek@invisiblethingslab.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "Marczykowski, Marek" <marmarek@invisiblethingslab.com>, "Cooper, Andrew"
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	"Beulich, Jan" <JBeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"Pau Monné, Roger" <roger.pau@citrix.com>, Paul Durrant
	<paul@xen.org>, "Tian, Kevin" <kevin.tian@intel.com>, Tamas K Lengyel
	<tamas@tklengyel.com>
Subject: RE: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Thread-Topic: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Thread-Index: AQHYeVd8ujqeRGJ5skKWtq2o4Wh0c61CXJcg
Date: Mon, 6 Jun 2022 13:18:42 +0000
Message-ID: <CY4PR11MB0056686DC984391051B53D59FBA29@CY4PR11MB0056.namprd11.prod.outlook.com>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Marek
> Marczykowski-Górecki
> Sent: Sunday, June 5, 2022 11:40 PM
> To: xen-devel@lists.xenproject.org
> Cc: Marczykowski, Marek <marmarek@invisiblethingslab.com>; Cooper, Andrew
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Beulich, Jan <JBeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Pau Monné, Roger
> <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Tian, Kevin
> <kevin.tian@intel.com>
> Subject: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
>
> This is integration of https://github.com/connojd/xue into mainline Xen.
> This patch series includes several patches that I made in the process, some
> are very loosely related.
>
> The RFC status is to collect feedback on the shape of this series,
> specifically:
>
> 1. The actual Xue driver is a header-only library. Most of the code is in a
> series of inline functions in xue.h. I kept it this way, to ease integrating
> Xue updates. That's also why I preserved its original code style. Is it
> okay, or should I move the code to a .c file?
>
> 2. The xue.h file includes bindings for several other environments too
> (EFI, Linux, ...). This is unused code, behind #ifdef. Again, I kept it to
> ease updating. Should I remove it?
>
> 3. Adding IOMMU reserved memory is necessary even when "hiding" the device
> from dom0. Otherwise, VT-d will deny DMA. That's probably not the most
> elegant solution, but Xen doesn't seem to have provisions for devices doing
> DMA into Xen's memory.
>
> 4. To preserve authorship, I included the unmodified "drivers/char: Add
> support for Xue USB3 debugger" commit from Connor, and only added my changes
> on top. This means that with that commit alone, the driver doesn't work yet
> (but it does compile). Is it okay, or should I combine fixes into that
> commit and somehow mark authorship in the commit message?
>
> 5. The last patch(es) enable using the controller from dom0, even when Xen
> uses the DbC part. That's possible because the capability was designed
> specifically to allow a separate driver to handle it, in parallel to an
> unmodified xhci driver (separate set of registers, pretending the port is
> "disconnected" to the main xhci driver, etc.). It works with Linux dom0,
> although it requires an awful hack - re-enabling bus mastering behind
> dom0's back. Is it okay to leave this functionality as is, or guard it
> behind some cmdline option, or maybe remove it completely?

Happy to see this effort; it's been long overdue to get this feature
upstream!  If you have a git branch somewhere, I'm happy to test it out.
I have already tested Xue on my NUC before, and it was working well.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:19:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:19:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342504.567551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdg-0004lW-Rd; Mon, 06 Jun 2022 13:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342504.567551; Mon, 06 Jun 2022 13:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCdg-0004lP-OT; Mon, 06 Jun 2022 13:19:24 +0000
Received: by outflank-mailman (input) for mailman id 342504;
 Mon, 06 Jun 2022 13:19:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyCdf-0004h3-21; Mon, 06 Jun 2022 13:19:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyCde-0005Ok-Uh; Mon, 06 Jun 2022 13:19:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyCde-0008WF-Dz; Mon, 06 Jun 2022 13:19:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyCde-0005xz-Da; Mon, 06 Jun 2022 13:19:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AE0LfdqkL/Gd3gFXvA2hB3LWbhlO2LMmSAhR+P0dSk8=; b=eq6NgxqIzyoN4HXmkLwk+O45ME
	uvbM/QI51vU4eH3/fNztP0sOTJY/Bu+Y+tiv0HdRJ2JUL8EtS/M2mkr/nz1mtQhxiTDocDtu4l0/y
	lp0egGtEKrspgYSw8wBVAdChhqS2TpvfWdWMyVlh5YPQ2whIx+vK0OSvQcE2xW/gtVxY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170841: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f2906aa863381afb0015a9eb7fefad885d4e5a56
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 13:19:22 +0000

flight 170841 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f2906aa863381afb0015a9eb7fefad885d4e5a56
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   13 days
Failing since        170716  2022-05-24 11:12:06 Z   13 days   37 attempts
Testing same since   170841  2022-06-06 03:15:23 Z    0 days    1 attempts

------------------------------------------------------------
2274 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268374 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:24:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:24:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342527.567561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCiV-0006hn-EM; Mon, 06 Jun 2022 13:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342527.567561; Mon, 06 Jun 2022 13:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCiV-0006hg-BW; Mon, 06 Jun 2022 13:24:23 +0000
Received: by outflank-mailman (input) for mailman id 342527;
 Mon, 06 Jun 2022 13:24:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Jy1=WN=citrix.com=prvs=14988d3bc=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nyCiT-0006ha-U2
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:24:21 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f81a8059-e59b-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 15:24:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f81a8059-e59b-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654521860;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=nB7uJlkx7Z0K/2USL+juAs6nLnZo62u3Fcdbi9B4HVg=;
  b=NethKb4mGHHBGBOgfTDVM04acdmZgUGdH4AfnnCb1zwBPBF7cXH4MzvD
   LuKGII15+JWLjR92MHlKAY3Uh526l+BpZ4nLi9JIigforbZrqWuXtzimF
   swBUHbd3aY/oLrbLG6s8/dL1UKPuQbJYE290/3dwBT4X4fQ04fhOU1IlI
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72939522
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:GzFse6gbWV9JBSjn83jFLclHX161GBAKZh0ujC45NGQN5FlHY01je
 htvWW+OMvaCYmP3Ktt3PY7kpk9SvpTSydBjQQc5/ig9Hnwb9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CU6jefSLlbFILas1hpZHGeIcw98z0M68wIFqtQw24LhXVvQ4
 YmaT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YQoPGrHFg94ybwt/IQp4P49ayO7GeHfq5KR/z2WeG5ft6/BnDUVwNowE4OdnR2pJ8
 JT0KhhUMErF3bjvhuvmFK883azPL+GyVG8bknhm0THeC+dgWZ3ZSr/GzdRZwC0xloZFGvO2i
 88xNmA+N0WdOUcn1lE/OLk6prmHjHrEVCxlk1yR//UbwUuK9VkkuFTqGIWMIYHbLSlPpW6Ho
 krW8mK/BQsVXPSE0iaM+H+ogu7JnAv4VZgUGbn+8eRl6HWRzGEODBwdVXOgvOK0zEW5Xrp3O
 0ESvyYjs6U23EiqVcXmGQ21pmaeuRwRUMYWFPc1gCmP167V7gCxFmUCCDlbZ7QbWNQeHGJwk
 AXTxpWwWGIp4Ob9pW+hGqm8oBWWJSEOF0A+dwg2Zg0HwcXT8YBqgUeaJjp8K5JZnuEZCBmpn
 W3W9nJm2+9M5SIY//7lpA6a2lpAsrCMF1dovVuPAwpJ+ysjPOaYi5qUBU83BBqqBKKQVRG/s
 XcNgKByB8heXMjWxERhrAjgdYxFBspp0xWG2DaD57F7q1yQF4eLJOi8Gg1WKkZzKdojcjT0e
 kLVsg45zMYNYSf2MPMvO9/vU5RCIU3c+TPND6m8UzazSsIpKF/vEN9GOSZ8IFwBYGBzyPpia
 P93gO6nDGoACLQP8Qdas9w1iOdxrghnnDu7bcmik3yPjOvFDFbIGOhtDbd7Rr1ghE9yiF6No
 4g32grj40g3bdASlQGOqN5PfQ5XdiFrbX00wuQOHtO+zsNdMDlJI5fsLXkJIeSJQ4w9ej/0w
 0yA
IronPort-HdrOrdr: A9a23:YrSXVasPeT+7MRXCL+oUwJG77skCKYAji2hC6mlwRA09TyXGra
 2TdaUgvyMc1gx7ZJh5o6H6BEGBKUmslqKceeEqTPiftXrdyRGVxeZZnMXfKlzbamHDH4tmuZ
 uIHJIOb+EYYWIasS++2njBLz9C+qjHzEnLv5a5854Fd2gDBM9dBkVCe3+m+yZNNWt77O8CZf
 6hD7181l+dkBosDviTNz0gZazuttfLnJXpbVotHBg88jSDijuu9frTDwWY9g12aUIN/Z4StU
 z+1yDp7KSqtP+2jjXG0XXI0phQkNz9jvNeGc23jNQPIDmEsHfrWG0hYczGgNkGmpDp1L8Yqq
 iLn/7mBbUr15rlRBDwnfIq4Xi57N9h0Q649bbSuwqfnSWwfkNHNyMGv/MYTvKR0TtfgDk3up
 g7oF6xpt5ZCwjNkz/64MWNXxZ2llCsqX5niuILiWdDOLFuI4O5ArZvjn+9Pa1wVR4S0rpXWN
 WGzfuskcp+YBefdTTUr2NvyNujUjA6GQqHWFELvoiQ3yJNlH50wkMEzIhH901wua4VWt1B/a
 DJI65onLZBQosfar98Hv4IRY+yBnbWSRzBPWqOKRDsFb0BOXjKt5nriY9Frt2CadgN1t8/iZ
 7BWFRXuSo7fF/vE9SH2NlR/hXEUAyGLELQIwFllu9EU5HHNcrW2He4OSETeuOb0oYiK9yeXe
 qvM5RLBPKmJXfyGO9yrnnDZ6U=
X-IronPort-AV: E=Sophos;i="5.91,280,1647316800"; 
   d="scan'208";a="72939522"
Date: Mon, 6 Jun 2022 14:24:13 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Julien
 Grall" <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [XEN PATCH 0/4] xen: rework compat headers generation
Message-ID: <Yp3//c/CAcwLHCvi@perard.uk.xensource.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <5e94648f-ba89-3691-0d80-1a8cca588ca6@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5e94648f-ba89-3691-0d80-1a8cca588ca6@citrix.com>

On Wed, Jun 01, 2022 at 05:17:36PM +0000, Andrew Cooper wrote:
> On 01/06/2022 17:59, Anthony PERARD wrote:
> > Patch series available in this git branch:
> > https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v1
> >
> > Hi,
> >
> > This patch series is about 2 improvements. First one is to use $(if_changed, )
> > in "include/Makefile" to make the generation of the compat headers less verbose
> > and to have the command line part of the decision to rebuild the headers.
> > Second one is to replace one slow script by a much faster one, and save time
> > when generating the headers.
> >
> > Thanks.
> >
> > Anthony PERARD (4):
> >   build: xen/include: use if_changed
> >   build: set PERL
> >   build: replace get-fields.sh by a perl script
> >   build: remove auto.conf prerequisite from compat/xlat.h target
> >
> >  xen/Makefile                 |   1 +
> >  xen/include/Makefile         | 106 ++++---
> >  xen/tools/compat-xlat-header | 539 +++++++++++++++++++++++++++++++++++
> >  xen/tools/get-fields.sh      | 528 ----------------------------------
> 
> Excellent. I was planning to ask you about this. (I also need to
> refresh my half series cleaning up other bits of the build.)
> 
> One trivial observation is that it would probably be nicer to name the
> script with a .pl extension.

Sounds fine, I don't think it matters much here.

> Any numbers on what the speedup in patch 3 is?

Yes, on my machine when doing a full build, with `ccache` enabled, it
saves about 1.17 seconds (out of ~17s), and without ccache, it saves
about 2.0 seconds (out of ~37s).

With ccache:

before:
    $ for i in `seq 4`; do time make -j$(nproc) -s O=build 2>/dev/null >/dev/null; rm -r build; done
    make --no-print-directory -j$(nproc) -s O=build > /dev/null  244.98s user 29.24s system 683% cpu 40.146 total
    make --no-print-directory -j$(nproc) -s O=build > /dev/null  47.05s user 11.50s system 332% cpu 17.610 total
    make --no-print-directory -j$(nproc) -s O=build > /dev/null  47.35s user 11.22s system 330% cpu 17.733 total
    make --no-print-directory -j$(nproc) -s O=build > /dev/null  47.31s user 11.23s system 333% cpu 17.577 total

after:
    $ for i in `seq 4`; do time make -j$(nproc) -s O=build 2>/dev/null>/dev/null; rm -r build; done
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  237.28s user 25.97s system 667% cpu 39.456 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  37.60s user 8.20s system 282% cpu 16.214 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  37.95s user 8.67s system 279% cpu 16.651 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  38.02s user 8.40s system 280% cpu 16.545 total

And without ccache:

before:
    $ for i in `seq 4`; do time make -j$(nproc) -s O=build 2>/dev/null>/dev/null; rm -r build; done
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  206.37s user 22.19s system 640% cpu 35.695 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  221.45s user 22.26s system 667% cpu 36.537 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  233.95s user 23.80s system 686% cpu 37.518 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  234.27s user 23.83s system 680% cpu 37.923 total

after:
    $ for i in `seq 4`; do time make -j$(nproc) -s O=build 2>/dev/null>/dev/null; rm -r build; done
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  198.62s user 18.64s system 642% cpu 33.841 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  202.91s user 19.46s system 655% cpu 33.912 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  224.42s user 20.89s system 680% cpu 36.025 total
    make --no-print-directory -j$(nproc) -s O=build 2> /dev/null > /dev/null  222.93s user 21.29s system 683% cpu 35.708 total


> Are the generated compat headers identical before and after this
> series? If yes, I'm very tempted to ack and commit it straight away.

Yes, the headers are identical. I believe I've managed to check with
all compat headers enabled, since the set generated depends on .config.

Cheers,

-- 
Anthony PERARD
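
[Editor's note: the core of the $(if_changed,) mechanism the series adopts is "rebuild when the inputs OR the recorded command line changed". A small shell sketch of that idea follows; the gen() helper and file names are illustrative, while real Kbuild stores the previous command line in hidden .*.cmd files next to each target.]

```shell
#!/bin/sh
# Sketch of Kbuild's $(if_changed,...): regenerate a target when the
# command used to build it last time (saved in .<target>.cmd) differs.

gen() {
    target=$1
    cmd=$2
    cmdfile=".${target}.cmd"
    # Rebuild if the target is missing or the recorded command differs.
    if [ -f "$target" ] && [ "$(cat "$cmdfile" 2>/dev/null)" = "$cmd" ]; then
        echo "skip $target"
    else
        eval "$cmd" > "$target"
        printf '%s' "$cmd" > "$cmdfile"
        echo "rebuild $target"
    fi
}

workdir=$(mktemp -d)
cd "$workdir"

gen xlat.h "echo '/* v1 */'"   # first run: no target yet -> rebuild
gen xlat.h "echo '/* v1 */'"   # same command line -> skip
gen xlat.h "echo '/* v2 */'"   # command line changed -> rebuild
```

This is why the cover letter notes the command line becomes "part of the decision to rebuild the headers": editing the Makefile rule alone, with no source change, still triggers regeneration.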


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:27:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:27:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342536.567573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyClV-0007In-Se; Mon, 06 Jun 2022 13:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342536.567573; Mon, 06 Jun 2022 13:27:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyClV-0007Ig-PU; Mon, 06 Jun 2022 13:27:29 +0000
Received: by outflank-mailman (input) for mailman id 342536;
 Mon, 06 Jun 2022 13:27:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oh0U=WN=citrix.com=prvs=149ed92ed=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nyClU-0007Ia-Io
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:27:28 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6739e078-e59c-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 15:27:26 +0200 (CEST)
Received: from mail-dm6nam12lp2173.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jun 2022 09:27:23 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6086.namprd03.prod.outlook.com (2603:10b6:208:31c::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Mon, 6 Jun
 2022 13:27:20 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Mon, 6 Jun 2022
 13:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6739e078-e59c-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654522046;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=4WNgenD7RO/Hdtc9C1WsobotvwJWZeqx6euE/GeXOac=;
  b=VKrOdYny9FZXvoCh0TGdGMvJq1wGlHMoLieSU/rmkYqLl0qTHxMxdD0V
   QE8Z11DnngewtgLjCGmWBkqO8S0Hd1GswLHbYFwVDtZvc2L9Lqfb8eNFD
   bhhkPvfnEit/tW4FKtGWCNaRWPwt3EsplF+SPULE+j5HhpVe3yKfCs52f
   o=;
X-IronPort-RemoteIP: 104.47.59.173
X-IronPort-MID: 72939774
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,280,1647316800"; 
   d="scan'208";a="72939774"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QNEuvxGjjMbo6BDoRXqF1ZiG/77L9DVq3auEl3LOpOBjV09uEajVRKIX7IZlmWUJyiKpdGsCts7PwaIulNMvEeqXv2bTLTAAP3ghHpCRvi23nVOHUoKwOwFvYnzZNjn451yD4EBN1D3vJHkbCc4ip3Q8LQs+RJ750JUvLQa5xhbREqpN/4kHTgkSvV3YoTy0goympwqVLKciWDjJsXHrup7XBaVxq9/s9icLXmTVgUwS8b5ZduH+R9ogg9HO4b8hfDInbiM6+o/Q0LCGHpJcV+73quTDGDMk/I1ZCH9cTx7hgg1RG9kb4Rv650lFJcrHwqf0uEM2vT83nBWT1hEo9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4WNgenD7RO/Hdtc9C1WsobotvwJWZeqx6euE/GeXOac=;
 b=YPLjZNn33gm/5nYKo3OoiVSuZ9onxF+zicugQlPYmNVHAtX2kWRc43S4ZhfwDhwpzXh4dcl2LqeGsbMXArcz6CX2XcWvL/NTlNNLr0bkxaBNO0SRYiOqO6whtGgeyreufF2WvfvPxrqY4QSNhpjswOB2uk9ev3Y6r1wurr9RNFFdFREFLv+CeoSOgQpY2GSUToE72A/d5oEyxQ8PreGBQIdD2j3a3kzCl+O2JMRtZJaFvCwvJYw4fM4ssKj6Aj89mxqEq2NKjbvXJYl5XbyzVP741deluGuXOPYSQnjL0MP3vplSg+yAQ8bsuvoqX8XbjhuBNexS+/c7s6Csv2aJWQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4WNgenD7RO/Hdtc9C1WsobotvwJWZeqx6euE/GeXOac=;
 b=xbo56kkYP6pQAIQbifuX90PK8St1/WmjfcsVWkGXvlVIPdz6LA4p0oOpaynNTbX23/86Rokk/vExTtuaDGvh8Vz1wIz+zjqJMfiypKRKuVNv7O6l573lxD/GZaQs/xTgj8Pwg3FF299MMn1vaZX+jk2p2pPK2UA1OvsiExMtKIA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Thread-Topic: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Thread-Index: AQHYcPF6ki4pJ9UKm0etGSMPsHK7Oa1CcGeA
Date: Mon, 6 Jun 2022 13:27:19 +0000
Message-ID: <3c8b0b72-0a9a-3dfd-bf5c-b6cc40a4ce3a@citrix.com>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
In-Reply-To: <20220526111157.24479-2-roger.pau@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0c0abd9d-603f-42f6-899d-08da47c0485e
x-ms-traffictypediagnostic: BL1PR03MB6086:EE_
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-microsoft-antispam-prvs:
 <BL1PR03MB6086A237CADB4551BE278F94BAA29@BL1PR03MB6086.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <15688EEA390DE446941CA5FADAB2988C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0c0abd9d-603f-42f6-899d-08da47c0485e
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jun 2022 13:27:19.9638
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: nbSXwq4LUmi3LzFUEGEa9pbEDy5d+TZrPG6iECwCAQ6ChToTIMv5RC49EIACYTwMi1HDbzvodjVeaLxnYwkzwV5D3t62QMQ2PqtUJYXgyF0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6086

On 26/05/2022 12:11, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index f08a00dcbb..476ab72463 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -4065,6 +4065,16 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>  
>      if ( unlikely(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) )
>          return vmx_failed_vmentry(exit_reason, regs);
> +    if ( unlikely(exit_reason & VMX_EXIT_REASONS_BUS_LOCK) )
> +    {
> +        /*
> +         * Delivery of Bus Lock VM exit was pre-empted by a higher priority VM
> +         * exit.
> +         */
> +        exit_reason &= ~VMX_EXIT_REASONS_BUS_LOCK;
> +        if ( exit_reason != EXIT_REASON_BUS_LOCK )
> +            perfc_incr(buslock);
> +    }

I know this post-dates you posting v2, but given the latest update from
Intel, VMX_EXIT_REASONS_BUS_LOCK will be set on all exits.

So the code logic would be simpler as:

if ( exit_reason & VMX_EXIT_REASONS_BUS_LOCK )
{
    perfc_incr(buslock);
    exit_reason &= ~VMX_EXIT_REASONS_BUS_LOCK;
}

and ...

>  
>      if ( v->arch.hvm.vmx.vmx_realmode )
>      {
> @@ -4561,6 +4571,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>          vmx_handle_descriptor_access(exit_reason);
>          break;
>  
> +    case EXIT_REASON_BUS_LOCK:
> +        perfc_incr(buslock);

... dropping this perf counter.

With something along these lines, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:33:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342546.567584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCrJ-0000SK-Nu; Mon, 06 Jun 2022 13:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342546.567584; Mon, 06 Jun 2022 13:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCrJ-0000SD-JX; Mon, 06 Jun 2022 13:33:29 +0000
Received: by outflank-mailman (input) for mailman id 342546;
 Mon, 06 Jun 2022 13:33:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BfO3=WN=gmail.com=tamas.k.lengyel@srs-se1.protection.inumbo.net>)
 id 1nyCrI-0000S7-4I
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:33:28 +0000
Received: from mail-oi1-x22b.google.com (mail-oi1-x22b.google.com
 [2607:f8b0:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e85017d-e59d-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 15:33:27 +0200 (CEST)
Received: by mail-oi1-x22b.google.com with SMTP id q11so4354155oih.10
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jun 2022 06:33:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e85017d-e59d-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=WIcUftR+4nv81E/2CqeXj/bd28eck61W6b3k9iCB2bE=;
        b=BBHpY2AjnLz6HdZWei/zL3xZAXI8vOwEey2CpW9uG6+994RZPcQI2fJ5qDsPMTnSjs
         gFgdup5OcVsFTAjRBgmtfMVcpFauLZOlJosLs6A8RqCzvGawWpmRJ7pLawASh9PzE4pm
         XzScHdRLWuVtzaMLE4WzZMBh9nwb8quzQ43zzuhd35+P/3WtFKDMCK8OtXo4hYX7fkpw
         2khlTDOOG2dvQH2HteIUvATcL7CsmMywIJ+QuE8Jhj9KR5YrEwTpYwcjCunVvlObIQSR
         R/8aK8hqdUrvbYY2mzCMTAVQlpP3dVzKAlJ97588QVDy+bKMZVEWe5WoCQRDjCCjZk/Z
         0vYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=WIcUftR+4nv81E/2CqeXj/bd28eck61W6b3k9iCB2bE=;
        b=Bu7dkSQze/+CqlTNdagERDVrL1LqB6wNvnEkqVo6mUTurrdubbJSwieoiySIBL62dK
         axep1md+RjkWRmz5XSmpg+F85lTHK6MEy6p0oiq8QaDcvaycTreiDcEqZFEgYMS9zfdH
         aPkQupR8zxQVyZPhabH7fvJKVDKNjUp9kc7CNcTaWjxIdFO2exxNjc48Nd9uNdyzY8t+
         YAPWZcZkzLAISS3C9p+YQEZGLs+J3hRB7/ssw7eXIYFkQ+Uf0BT4LViu38+uB4hot7bW
         16sHOm5mgkw7RD50MmNYkJG5tCVuQQ9P5du4Bf9yryUjwkqB8p56E8v81nc0wMQ/cTZ6
         4keQ==
X-Gm-Message-State: AOAM533tYqlN0yU+wOwoAu6OQ549zalAzT6qoL+R/CO15tVJHujint/H
	qi3gCf4vuh5Gvn5LvY7b1wtwXdhcNxeqIRYJlL8=
X-Google-Smtp-Source: ABdhPJz4/TxLIUR6urV41GuW3LPuzBHBcBkJZ3rL+s55taAfZaj3UpH8FcPsV6rgyKDn4eWxtKvXQWRII4i93+REU8A=
X-Received: by 2002:a05:6808:302b:b0:2f9:eeef:f03 with SMTP id
 ay43-20020a056808302b00b002f9eeef0f03mr30387637oib.128.1654522405852; Mon, 06
 Jun 2022 06:33:25 -0700 (PDT)
MIME-Version: 1.0
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 6 Jun 2022 09:32:52 -0400
Message-ID: <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monné <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

> +/* Supported xHC PCI configurations */
> +#define XUE_XHC_CLASSC 0xC0330ULL
> +#define XUE_XHC_VEN_INTEL 0x8086ULL
> +#define XUE_XHC_DEV_Z370 0xA2AFULL
> +#define XUE_XHC_DEV_Z390 0xA36DULL
> +#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
> +#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
> +#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL

I had to add an extra device ID here to get it working on my NUC; it
would be nice if we could add it to the list of supported configs so I
don't need to carry a custom patch:

#define XUE_XHC_DEV_COMET_LAKE 0x02EDULL

The patch is here:
https://github.com/tklengyel/xen/commit/dd0423aba6caa4ef41dff65470598a1c0c1105ae

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:38:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342554.567595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCvm-00014q-91; Mon, 06 Jun 2022 13:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342554.567595; Mon, 06 Jun 2022 13:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCvm-00014j-64; Mon, 06 Jun 2022 13:38:06 +0000
Received: by outflank-mailman (input) for mailman id 342554;
 Mon, 06 Jun 2022 13:38:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyCvk-00014N-Dp
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:38:04 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e21e7185-e59d-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 15:38:02 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 794915C0161;
 Mon,  6 Jun 2022 09:38:00 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 06 Jun 2022 09:38:00 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 6 Jun 2022 09:37:58 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e21e7185-e59d-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm1; t=1654522680; x=
	1654609080; bh=NV+ZKZwxOXIqN4jf9GPyMHPIHRuU7FgMyNT5XXB5+1g=; b=g
	fVdI/JeOojWucbFo0dQCpuXNwRWpBA2TSjCcXLkPE8N2M0A+ypBfATfmoFtRXoz4
	pF//ITbLe/b4ZDfzzXxFyT6i7W7Adrn0vnWFxri2x/t6NDBAAwM8/VhN5Gv0kWPc
	QJ7Pi5S+sxoL4jFTIv2Td5O73uTWxs5tXT2gV5Rw9/TMcQ0WygNJZEDtquLB8nQR
	cPbXU8mRJEF9G0I+Q+JDEECFybbwQrLOiC54Bf/tHVvue/XMsuvdzsBVVqrHzWlI
	8uLZrfUUw0QVymFgfjvB00PIAfY1fiuui7DU6rx2cg3Yy5rqzXZlLlExaqLSPead
	/yp8nUYJ5iBLVICX+o5Hw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1654522680; x=1654609080; bh=NV+ZKZwxOXIqN4jf9GPyMHPIHRuU
	7FgMyNT5XXB5+1g=; b=a6Y7k0etXHqiLj697Lq+tdtC3+ayjZCe4PuWPzvCcw6Q
	1BHaCPCE1brWr7n+WkUiiX//+ISBZ8j2cABI96n2UIGa2foBZHBI0VkysUVm6mcC
	RdJjDHIuOmivjV79A+UU1nL1wdlbmFyzq93QQJ6SqRmJLN8i5K0LiCUDkPLfms7B
	aLDJRTNrVrcKtSCdNH5UYJfGgkpCSBlNdv2xwQWZUwXPEKgjlJrEY5yXsd3YtMwu
	72PAJzKxmUWjvcp1zdNr2ykOvQQ6AELrYO2jW6DAQIFkcU/6C2I2kUC3yMT+zm5V
	NSWZ6dH6P7WKbP6i93TwikTwBpFv/rRGVZU23ooICQ==
X-ME-Sender: <xms:OAOeYmaGzf494ZLJE2370KwY9X0SekBIAMlhhZeA3atCtC7ji71sEg>
    <xme:OAOeYpb5P8aT6YRXee_0CCnHuf0eG6WDZ0lypkNjRhK9AWnbnlwGZUdIVT_mtpbKs
    8n9s990fxL1mQ>
X-ME-Received: <xmr:OAOeYg9hIgWc7omaBxyrnG3MYJHTMr27K540yrVmWFVPy18G3WDFMBNHZFWo7ZQe2OItJxI-SK1QcCxvz5sTYT3adWF-xkHvpg>
X-ME-Proxy: <xmx:OAOeYoo5jtObSaEORs30uLnYg8rvRNl5kPxWsUtTHi1lPx0vXV8I-A>
    <xmx:OAOeYhpAvM7NAMPIK0YZCm_sQ6n1h2w6pAP3C54iZLtT5_1siah6vA>
    <xmx:OAOeYmRDnUwQYufGZTNNF-0r9O8WWdmtwRCVIGRXMVqBt2e9bgPa5g>
    <xmx:OAOeYs3QLMMXK0k2mdWSi8FuXT4YKwo6V4I6xdZbloVQfCg3v7R__w>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 6 Jun 2022 15:37:55 +0200
From: "Marczykowski, Marek" <marmarek@invisiblethingslab.com>
To: "Lengyel, Tamas" <tamas.lengyel@intel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"Beulich, Jan" <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Pau Monné, Roger <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>, "Tian, Kevin" <kevin.tian@intel.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Message-ID: <Yp4DM0L8e+f77oRe@mail-itl>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <CY4PR11MB0056686DC984391051B53D59FBA29@CY4PR11MB0056.namprd11.prod.outlook.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+XH+T9PvfQDREpFq"
Content-Disposition: inline
In-Reply-To: <CY4PR11MB0056686DC984391051B53D59FBA29@CY4PR11MB0056.namprd11.prod.outlook.com>


--+XH+T9PvfQDREpFq
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 6 Jun 2022 15:37:55 +0200
From: "Marczykowski, Marek" <marmarek@invisiblethingslab.com>
To: "Lengyel, Tamas" <tamas.lengyel@intel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"Beulich, Jan" <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Pau Monné, Roger <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>, "Tian, Kevin" <kevin.tian@intel.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability

On Mon, Jun 06, 2022 at 01:18:42PM +0000, Lengyel, Tamas wrote:
> Happy to see this effort, it's been long overdue to get this feature
> upstream! If you have a git branch somewhere I'm happy to test it out. I
> already have tested Xue before on my NUC and it was working well.

It's here:
https://github.com/marmarek/xen/tree/master-xue
warning: I do force-push to this branch from time to time

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--+XH+T9PvfQDREpFq
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmKeAzMACgkQ24/THMrX
1ywk/gf9GVdRcp41W4swFkulvGo9XmGMe7HtlL2B8UzIjrt+sItv1OJ+w08TOzbd
oJWlUfwu0M5yPwaXAe04EoUiqoYE9ffEHgK7CRrGMfUujPj9gPY49JDoNk+WkxBs
P74mVw5up8ZL1xbvcQ4sjLVJVId5IMnDTkwutxAQFEPkAastsXwVt3IK4jtXw4Fd
8wY7K/WbBGnC+3Ge1spg3BE+G74n9/8//eLvDi/M59BzLJ737iACoWv28dNdtQhu
Eh1neCpy6kz/tTLatP8I6tS+NXZAykFi6OHiF3a92rJtUP1KZKG2jSg3mIejZOnT
TmsFB/ThPBP8NI4Tkbc3a66UKAxLcQ==
=nDXm
-----END PGP SIGNATURE-----

--+XH+T9PvfQDREpFq--


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:39:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 13:39:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342563.567605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCx5-0001fP-KR; Mon, 06 Jun 2022 13:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342563.567605; Mon, 06 Jun 2022 13:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyCx5-0001fI-Gr; Mon, 06 Jun 2022 13:39:27 +0000
Received: by outflank-mailman (input) for mailman id 342563;
 Mon, 06 Jun 2022 13:39:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Jy1=WN=citrix.com=prvs=14988d3bc=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nyCx3-0001fA-Ot
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 13:39:25 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 132d315c-e59e-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 15:39:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 132d315c-e59e-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654522764;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=fTLexGdJToP1puX9yfkdXLVy4Nlx8Eqcf7dDJZT9zow=;
  b=e2ef+aDX7n5W7MO4rEjSAXXzdMsypsjmAPMhDvBqgqkrLTvN3ucVAC0q
   d0VAiLjGq4TrAyrCXui2lVKu9Rge+2dqaKeIecaBP+0TlGFvP4X+z0APy
   D8T0dnumPcxY5wgOHxeTpbw82+6UeHOwQ7yJYDDmvvSUO88JmIQ+6vPiz
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72294500
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,280,1647316800"; 
   d="scan'208";a="72294500"
Date: Mon, 6 Jun 2022 14:39:18 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Message-ID: <Yp4Dhj4UkORelT8D@perard.uk.xensource.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <0f8f0c20-690c-f02a-e1f8-957462118999@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <0f8f0c20-690c-f02a-e1f8-957462118999@suse.com>

On Thu, Jun 02, 2022 at 11:11:15AM +0200, Jan Beulich wrote:
> On 01.06.2022 18:59, Anthony PERARD wrote:
> > Use "define" for the headers*_chk commands as otherwise the "#"
> > is interpreted as a comment and make can't find the end of
> > $(foreach,).
> 
> In cmd_xlat_lst you use $(pound) - any reason this doesn't work in
> these rules? Note that I don't mind the use of "define", just that I'm
> puzzled by the justification.

I think I just forgot about $(pound) when I tried to debug why the
command didn't work in some environment (that is when not using define).

I think the second reason I didn't replace '#' with "$(pound)" is that
reading "#include" in the Makefile is probably better than reading
"$(pound)include".

I guess we should add something like this to the justification:
    That allows us to keep writing "#include" in the Makefile without
    having to replace it with "$(pound)include", which would make the
    command's purpose a bit less obvious.
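For reference, the behaviour being discussed can be sketched with a toy
makefile (a sketch assuming GNU make 3.82+ for .RECIPEPREFIX; the file and
variable names are illustrative, not the actual Xen rules): in a plain
`var = value` assignment the `#` starts a comment and truncates the value,
while lines between define/endef are kept verbatim.

```shell
# Toy demo (illustrative, not the Xen build system): '#' truncates a
# plain variable assignment, but survives inside define/endef.
cat > /tmp/pound-demo.mk <<'EOF'
.RECIPEPREFIX := >
plain = #include

define defined
#include
endef

show:
> @echo 'plain: [$(plain)]'
> @echo 'defined: [$(defined)]'
EOF
make -s -f /tmp/pound-demo.mk show
```

This should print `plain: []` (the value was eaten by the comment) followed
by `defined: [#include]`.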

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:47:32 2022
Date: Mon, 6 Jun 2022 14:47:18 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 2/4] build: set PERL
Message-ID: <Yp4FZsVKjQcy6Ly6@perard.uk.xensource.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-3-anthony.perard@citrix.com>
 <092852c0-d833-0c7c-1bc4-5d2e86610a4d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <092852c0-d833-0c7c-1bc4-5d2e86610a4d@suse.com>

On Thu, Jun 02, 2022 at 11:01:30AM +0200, Jan Beulich wrote:
> On 01.06.2022 18:59, Anthony PERARD wrote:
> > --- a/xen/Makefile
> > +++ b/xen/Makefile
> > @@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
> >  export PYTHON		?= $(PYTHON_INTERPRETER)
> >  
> >  export CHECKPOLICY	?= checkpolicy
> > +export PERL		?= perl
> 
> For the intended use, is there a minimum version requirement? If so,
> it needs documenting in ./README (and it preferably wouldn't be any
> newer than from around the times our other dependencies are). And
> even when the uses are fully backwards compatible, I think the need
> for the tool wants mentioning there.

I don't think there's a minimum version. The script works in our
Gitlab CI, or at least the builds don't break.

Yes, it would be better to document the tool, I'll add it to the README.
(We already use it in the toolstack, at least for libxl, so it was at
least partially needed before.)
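A minimal sketch of what the `?=` in the patch buys (assuming GNU make
3.82+ for .RECIPEPREFIX; the paths are illustrative, not the actual
xen/Makefile): the assignment only takes effect when PERL is not already
defined, so a value from the environment or the command line overrides
the `perl`-from-$PATH fallback.

```shell
# Toy demo of "export PERL ?= perl" (illustrative, not the actual
# xen/Makefile): '?=' assigns only when PERL is not already defined.
cat > /tmp/perl-var.mk <<'EOF'
.RECIPEPREFIX := >
export PERL ?= perl

show:
> @echo PERL=$(PERL)
EOF
env -u PERL make -s -f /tmp/perl-var.mk show
PERL=/opt/myperl/bin/perl make -s -f /tmp/perl-var.mk show
```

The first invocation falls back to `perl`; the second reports the
environment-supplied path instead.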

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:52:02 2022
Date: Mon, 6 Jun 2022 14:51:51 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 3/4] build: replace get-fields.sh by a perl script
Message-ID: <Yp4Gd6leI/MSPkHw@perard.uk.xensource.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-4-anthony.perard@citrix.com>
 <Ypeimt5XHHog64qw@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <Ypeimt5XHHog64qw@mattapan.m5p.com>

On Wed, Jun 01, 2022 at 10:32:10AM -0700, Elliott Mitchell wrote:
> On Wed, Jun 01, 2022 at 05:59:08PM +0100, Anthony PERARD wrote:
> > diff --git a/xen/tools/compat-xlat-header b/xen/tools/compat-xlat-header
> > new file mode 100755
> > index 0000000000..f1f42a9dde
> > --- /dev/null
> > +++ b/xen/tools/compat-xlat-header
> > @@ -0,0 +1,539 @@
> > +#!/usr/bin/perl -w
> > +
> > +use strict;
> > +use warnings;
> 
> I hope to take more of a look at this, but one thing I immediately
> notice:  -w is redundant with "use warnings;".  I strongly prefer
> "use warnings;", but others may have different preferences.

Sounds good. I might have copied the shebang and the "use ..." lines from
another script in our repo without checking what -w stands for.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:56:47 2022
Date: Mon, 6 Jun 2022 15:56:28 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [RFC PATCH 00/12] Add Xue - console over USB 3 Debug Capability
Message-ID: <Yp4HjHHh5pT8CIOS@mail-itl>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <d1de82b2-333d-9c7f-0b82-4bf18b4ec469@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <d1de82b2-333d-9c7f-0b82-4bf18b4ec469@citrix.com>



On Mon, Jun 06, 2022 at 01:18:45PM +0000, Andrew Cooper wrote:
> On 06/06/2022 04:40, Marek Marczykowski-Górecki wrote:
> > This is integration of https://github.com/connojd/xue into mainline Xen.
> > This patch series includes several patches that I made in the process;
> > some are very loosely related.
> 
> Thanks for looking into this.  CC'ing Connor just so he's aware.
> 
> >
> > The RFC status is to collect feedback on the shape of this series,
> > specifically:
> >
> > 1. The actual Xue driver is a header-only library. Most of the code is
> > in a series of inline functions in xue.h. I kept it this way, to ease
> > integrating Xue updates. That's also why I preserved its original code
> > style. Is it okay, or should I move the code to a .c file?
> 
> It doesn't matter much if it's a .h or .c file.  It could perfectly
> easily live as xen/drivers/char/xue.h and be included only by xue.c.
> (More specifically, it doesn't want to live as xen/include/xue.h)
> 
> That said, as soon as you get to patch 2, it's no longer unmodified from
> upstream, and with patch 3, we're gaining concrete functionality that
> upstream doesn't have.
> 
> >
> > 2. The xue.h file includes bindings for several other environments too
> > (EFI, Linux, ...). This is unused code, behind #ifdef. Again, I kept it
> > to ease updating. Should I remove it?
> 
> Drop please.  Xen is an embedded environment, and supporting other
> environments is a waste of space and time.
> 
> I'm slowly ripping out other examples.

Given both of the above answers, I'll see how much work refactoring it
into a Xen-only driver will be. Dropping the other environments should
make the xue_ops abstraction unnecessary, which should simplify the code
quite a bit.

> > 3. Adding the IOMMU reserved memory is necessary even when "hiding" the
> > device from dom0. Otherwise, VT-d will deny DMA. That's probably not the
> > most elegant solution, but Xen doesn't seem to have provisions for
> > devices doing DMA into Xen's memory.
> 
> I think that's to be expected, as the device should end up in quarantine.
> 
> That said, the model is broken for devices that Xen exclusively uses,
> which includes IOMMU devices.  IOMMUs don't have any kind of applicable
> IOMMU context, and things used exclusively by Xen don't want to be in
> the general quarantine pool, because then all malicious devices can
> interfere with the ringbuffer.

That's yet another reason for assigning it to dom0... this way, only
dom0(-assigned devices) can interfere with the ringbuffer. That's still
sub-optimal, but the current granularity of IOMMU configuration in Xen
doesn't allow doing any better.
I'll drop patch 11.

> > 4. To preserve authorship, I included the unmodified "drivers/char: Add
> > support for Xue USB3 debugger" commit from Connor, and only added my
> > changes on top. This means that, as of that commit, the driver doesn't
> > work yet (but it does compile). Is it okay, or should I combine fixes
> > into that commit and somehow mark authorship in the commit message?
> 
> That depends on how much changes.  Other options are a dual SoB with
> Connor still as the author (I typically do this for substantial code
> movement, programmatic changes, etc), or for a major rewrite, changing
> authorship and being very clear in the commit message where the code
> originated.

If I go with the refactor to get rid of xue_ops, then indeed it makes
more sense to create a new commit and reference the code's origin in the
commit message.

> > 5. The last patch(es) enable using the controller by dom0, even when Xen
> > uses the DbC part. That's possible because the capability was designed
> > specifically to allow a separate driver to handle it, in parallel to an
> > unmodified xhci driver (a separate set of registers, pretending the port
> > is "disconnected" for the main xhci driver, etc.). It works with a Linux
> > dom0, although it requires an awful hack - re-enabling bus mastering
> > behind dom0's back. Is it okay to leave this functionality as is, guard
> > it behind some cmdline option, or maybe remove it completely?
> 
> "Xen is configured to use USB3 debugging" is the only relevant signal.
> We do not want anything else.  If this triggers hacks for dom0, then
> fine.

I'm worried here about depending on specific dom0 behavior. With the
current Linux driver, I needed just the bus mastering hack, but since in
this case dom0 has more or less full control over the controller, there
could be other ways it could disrupt DbC in the future.

> OOI, how does the dual driver stack work in Linux?  At a minimum
> they've surely got to coordinate device resets.

Kind of. The DbC driver (both Linux and Xue) checks whether anything has
disabled DbC in the meantime (for example via a device reset). When that
happens, it re-enables it.
I haven't tried what happens if DbC is enabled both in Xen and in Linux
at the same time, but one of the possibilities is a spectacular
explosion. (In theory Xen should win, since I make the DbC-related MMIO
area read-only.)

> In an ideal world, dom0 would be fully unaware.  We hide the DbC
> controls (so dom0 doesn't get any clever ideas), but we do need to keep
> the device active when dom0 wants to reset (which will probably require
> a fair chunk of emulation).

Yeah, I don't want to go the "emulate almost-reset" way...

> Connor did upstream code into Linux to cause it to ignore an
> already-active DbC session.  I hope this will cause Linux to be duly
> careful with resets, and is probably the easiest way forward.

Not really, see above.

At least it doesn't try to explicitly disable it.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 13:58:30 2022
Date: Mon, 6 Jun 2022 21:57:57 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>
Cc: kbuild-all@lists.01.org, xen-devel@lists.xenproject.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [xen-tip:linux-next 6/10] drivers/xen/grant-dma-ops.c:278:6:
 warning: no previous prototype for 'xen_grant_setup_dma_ops'
Message-ID: <202206062149.cNjVOFb7-lkp@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Juergen,

First bad commit (maybe != root cause):

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
head:   bb1b8419ea23d8d2de3c886a540f41e39dfe82a9
commit: 6b268a48884cf8ef00477a0e652864638391587c [6/10] xen/virtio: Enable restricted memory access using Xen grant mappings
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20220606/202206062149.cNjVOFb7-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0
reproduce (this is a W=1 build):
        # https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git/commit/?id=6b268a48884cf8ef00477a0e652864638391587c
        git remote add xen-tip https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
        git fetch --no-tags xen-tip linux-next
        git checkout 6b268a48884cf8ef00477a0e652864638391587c
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/net/usb/ drivers/xen/

If you fix the issue, kindly add the following tag where applicable:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/xen/grant-dma-ops.c:278:6: warning: no previous prototype for 'xen_grant_setup_dma_ops' [-Wmissing-prototypes]
     278 | void xen_grant_setup_dma_ops(struct device *dev)
         |      ^~~~~~~~~~~~~~~~~~~~~~~


vim +/xen_grant_setup_dma_ops +278 drivers/xen/grant-dma-ops.c

2c73e39aceb90b Juergen Gross 2022-06-02  277  
2c73e39aceb90b Juergen Gross 2022-06-02 @278  void xen_grant_setup_dma_ops(struct device *dev)
2c73e39aceb90b Juergen Gross 2022-06-02  279  {
2c73e39aceb90b Juergen Gross 2022-06-02  280  	struct xen_grant_dma_data *data;
2c73e39aceb90b Juergen Gross 2022-06-02  281  
2c73e39aceb90b Juergen Gross 2022-06-02  282  	data = find_xen_grant_dma_data(dev);
2c73e39aceb90b Juergen Gross 2022-06-02  283  	if (data) {
2c73e39aceb90b Juergen Gross 2022-06-02  284  		dev_err(dev, "Xen grant DMA data is already created\n");
2c73e39aceb90b Juergen Gross 2022-06-02  285  		return;
2c73e39aceb90b Juergen Gross 2022-06-02  286  	}
2c73e39aceb90b Juergen Gross 2022-06-02  287  
2c73e39aceb90b Juergen Gross 2022-06-02  288  	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
2c73e39aceb90b Juergen Gross 2022-06-02  289  	if (!data)
2c73e39aceb90b Juergen Gross 2022-06-02  290  		goto err;
2c73e39aceb90b Juergen Gross 2022-06-02  291  
2c73e39aceb90b Juergen Gross 2022-06-02  292  	/* XXX The dom0 is hardcoded as the backend domain for now */
2c73e39aceb90b Juergen Gross 2022-06-02  293  	data->backend_domid = 0;
2c73e39aceb90b Juergen Gross 2022-06-02  294  
2c73e39aceb90b Juergen Gross 2022-06-02  295  	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
2c73e39aceb90b Juergen Gross 2022-06-02  296  			GFP_KERNEL))) {
2c73e39aceb90b Juergen Gross 2022-06-02  297  		dev_err(dev, "Cannot store Xen grant DMA data\n");
2c73e39aceb90b Juergen Gross 2022-06-02  298  		goto err;
2c73e39aceb90b Juergen Gross 2022-06-02  299  	}
2c73e39aceb90b Juergen Gross 2022-06-02  300  
2c73e39aceb90b Juergen Gross 2022-06-02  301  	dev->dma_ops = &xen_grant_dma_ops;
2c73e39aceb90b Juergen Gross 2022-06-02  302  
2c73e39aceb90b Juergen Gross 2022-06-02  303  	return;
2c73e39aceb90b Juergen Gross 2022-06-02  304  
2c73e39aceb90b Juergen Gross 2022-06-02  305  err:
2c73e39aceb90b Juergen Gross 2022-06-02  306  	dev_err(dev, "Cannot set up Xen grant DMA ops, retain platform DMA ops\n");
2c73e39aceb90b Juergen Gross 2022-06-02  307  }
2c73e39aceb90b Juergen Gross 2022-06-02  308  

:::::: The code at line 278 was first introduced by commit
:::::: 2c73e39aceb90b59058cdbc497916049e798963c xen/grant-dma-ops: Add option to restrict memory access under Xen

:::::: TO: Juergen Gross <jgross@suse.com>
:::::: CC: Juergen Gross <jgross@suse.com>

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 14:03:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 14:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342609.567661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyDKR-0007Zr-LR; Mon, 06 Jun 2022 14:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342609.567661; Mon, 06 Jun 2022 14:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyDKR-0007Zk-I5; Mon, 06 Jun 2022 14:03:35 +0000
Received: by outflank-mailman (input) for mailman id 342609;
 Mon, 06 Jun 2022 14:03:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyDKQ-0007Ze-7L
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 14:03:34 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 722b6449-e5a1-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 16:03:33 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id A739D5C0110;
 Mon,  6 Jun 2022 10:03:30 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Mon, 06 Jun 2022 10:03:30 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 6 Jun 2022 10:03:28 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 722b6449-e5a1-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm1; t=1654524210; x=
	1654610610; bh=0LW+vEZ1opUnoAkqaBAnedNH5NgRMTuq2RBhTx9Lf8I=; b=u
	MxW7oIIN6AdeD61I2uDx8IW5BTCbisyK7E4E4Rzg2dWgrpsd0uoUg31VAjjd06c7
	hvet437LE4JX7jIOkhG7ehy0kUbhcyFXNSDth9rOzW6QnvN9qU6Ek9AFL1CTvgkP
	5Du7UIZkvf2aJB0KDWP17TB15BB0H2lIm9GueHUlNLIFbGcKhme2AAU4dHfzYOAt
	DDENN/zNrZvDmDYozCKyphSaMSDtvryBw6+wVP5FMa3QidA4Fv0haT7pPTtPeFLE
	vFq3eevuqftmr9HhSpVCV+aNCix+5b2izFL1PeeE4S39KyF46yRB1UiHODLIlKQo
	XX+duAnPFBg74H+1G2wlw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1654524210; x=1654610610; bh=0LW+vEZ1opUnoAkqaBAnedNH5NgR
	MTuq2RBhTx9Lf8I=; b=YaEICWL/7Z87vdgoPTXqyLLKVBkEnRCZEqpYijwdgr90
	/acuXvpVqDnhmiR6I6AbAnCG1UyU4mE5qZAbuVXAUXby2l969mNoOuSHVtJQl12m
	mt2LmMG7qvmRbR42qHc+dRC5K9hcZHan/P+g4L1LZJ1u5rOVRXU7qWDGjH6JehQq
	DxhVTa6ekSzEnWijCW1YjwclOP3Q5bFl0k2cM1fcmyivsOitBxkzjnIwLJeXYa4U
	WTW/SZ2WEaPsB+cZ1YNLfBuodMW/I5CW8dGCkTNNnUBFnXd0IQCuh8AR+sy/8zHY
	lvJjIadLvX0JhQH3H/wJ9dyMXwFzUnAZ1mSgqSVzRA==
X-ME-Sender: <xms:MgmeYh4leBjjcVeQ4AfvygF1Wae8ZaxPxMHtPx7e96QU80aqhZ3Ffw>
    <xme:MgmeYu7dHuAqZqicIlq-5bGOGdiOFtdDq8-udVeMUvHIbqoDlP6Blh-FB4i-TJYKv
    um9QikfU7fIwA>
X-ME-Received: <xmr:MgmeYoeqUBtS7Q7vaRQlZOdzOwyNZd0h42Yd5-nJqrRpE_TMZcVTV7UGXA-9dSv6jtyS2mn9kV9Yq-Gs_V7C3upaVGZlLnwQMA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtvddgjeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepueek
    teetgefggfekudehteegieeljeejieeihfejgeevhfetgffgteeuteetueetnecuffhomh
    grihhnpehgihhthhhusgdrtghomhenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:MgmeYqJR5aVkkonxzs-dTuSWlLUbbunN0wr4BFWmWJyNSn7GXnTkew>
    <xmx:MgmeYlISl6MkuNt1P-iYyzADAtldeMJpsClHHOIcxbKB3bc6H8gg8w>
    <xmx:MgmeYjwVR-IE2TrssEY6214xA3eQzJ1UNDbLj_0YeHxGH4Tl_NTmaA>
    <xmx:MgmeYt8N0ePWQ_GNlK9wlFaDJEH2Mbx5l9hpgqMuHGSdWdsIXNnZ6Q>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 6 Jun 2022 16:03:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
Message-ID: <Yp4JLd8UGS3jjD5Z@mail-itl>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
 <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="F9xz0k6yD4ufAAN4"
Content-Disposition: inline
In-Reply-To: <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com>


--F9xz0k6yD4ufAAN4
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 6 Jun 2022 16:03:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger

On Mon, Jun 06, 2022 at 09:32:52AM -0400, Tamas K Lengyel wrote:
> > +/* Supported xHC PCI configurations */
> > +#define XUE_XHC_CLASSC 0xC0330ULL
> > +#define XUE_XHC_VEN_INTEL 0x8086ULL
> > +#define XUE_XHC_DEV_Z370 0xA2AFULL
> > +#define XUE_XHC_DEV_Z390 0xA36DULL
> > +#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
> > +#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
> > +#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL
>
> I had to add an extra device ID here to get it working on my NUC,
> would be nice if we could add that to the list of supported configs so
> I don't need to custom patch:
>
> #define XUE_XHC_DEV_COMET_LAKE 0x02EDULL
>
> The patch is here:
> https://github.com/tklengyel/xen/commit/dd0423aba6caa4ef41dff65470598a1c0c1105ae

Interesting, I think known_xhc() is used only in the EFI variant of Xue.
The Xen one just looks for any xHC based on the device class. And indeed,
it works for me on Tiger Lake, which is not included here.

I did need to select a specific controller, since I have 3 of them:
00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01)
00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)

So, I need dbgp=xue2 or dbgp=xue@pci00:14.0.
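
The two matching strategies discussed above (the EFI variant's known_xhc() allow-list versus Xen matching any controller by PCI class) can be sketched roughly as follows. The class code value comes from the XUE_XHC_CLASSC define quoted earlier; the table layout and function shapes here are illustrative, not Xen's actual code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* PCI class code for an xHCI USB controller (base class 0x0C, subclass
 * 0x03, prog-if 0x30), matching XUE_XHC_CLASSC from the patch. */
#define XUE_XHC_CLASSC 0xC0330UL

/* EFI-variant style: an allow-list of known vendor/device pairs. */
static const struct { uint16_t vendor, device; } known_ids[] = {
	{ 0x8086, 0xA2AF },	/* Z370 */
	{ 0x8086, 0xA36D },	/* Z390 */
	{ 0x8086, 0x9CB1 },	/* Wildcat Point */
	{ 0x8086, 0x9D2F },	/* Sunrise Point */
	{ 0x8086, 0x9DED },	/* Cannon Point */
	{ 0x8086, 0x02ED },	/* Comet Lake, per Tamas's patch */
};

/* Only specifically listed controllers are accepted. */
static bool known_xhc(uint16_t vendor, uint16_t device)
{
	for (size_t i = 0; i < sizeof(known_ids) / sizeof(known_ids[0]); i++)
		if (known_ids[i].vendor == vendor &&
		    known_ids[i].device == device)
			return true;
	return false;
}

/* Xen-variant style: any controller with the xHCI class code matches,
 * which is why an unlisted part such as Tiger Lake still works. */
static bool is_xhc_class(uint32_t class_code)
{
	return class_code == XUE_XHC_CLASSC;
}
```

The flip side of class-based matching is that it is ambiguous when several xHCs are present, hence the need to pick one explicitly with `dbgp=xue2` or `dbgp=xue@pci00:14.0`.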

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--F9xz0k6yD4ufAAN4
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmKeCS4ACgkQ24/THMrX
1yzzPgf/UVvhxRdvX8W06oSi6RZr466w+x/jNB09lxPjujceegvIKJrvLS3esone
GEBOePDS5NcLlwP1Rj1RGHEfu+0NPg5wXh+NwmxQS5RLU2GqjQB2a8BGt0iTzD9b
cK4pbMzP7EcdJNttjlfciGQu5T0IoSQyot62sYwcttXfPZLPqFZPucZ8UQD0oLuq
QfEjB4nypO2e/l+0tYqVUiWMf0EuhmA96mQ0s9l9HucfvPjMkU1EQ+HWhDcSmnYY
mLBQftYZk1TRyev7/H9ZzJ9hqXRBeQm95PDyYDcHtUt1/RCts3Hwwyb5QTOMvTPr
M0rVE0jLJjPu+UWuAl7x/cQCXbHpkg==
=zxEu
-----END PGP SIGNATURE-----

--F9xz0k6yD4ufAAN4--


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 14:11:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 14:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342618.567671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyDRm-0000Yo-EX; Mon, 06 Jun 2022 14:11:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342618.567671; Mon, 06 Jun 2022 14:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyDRm-0000Yh-BR; Mon, 06 Jun 2022 14:11:10 +0000
Received: by outflank-mailman (input) for mailman id 342618;
 Mon, 06 Jun 2022 14:11:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BfO3=WN=gmail.com=tamas.k.lengyel@srs-se1.protection.inumbo.net>)
 id 1nyDRl-0000Yb-6P
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 14:11:09 +0000
Received: from mail-oo1-xc2a.google.com (mail-oo1-xc2a.google.com
 [2607:f8b0:4864:20::c2a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81e5d5e4-e5a2-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 16:11:07 +0200 (CEST)
Received: by mail-oo1-xc2a.google.com with SMTP id
 c17-20020a4ac311000000b0040ea8bf80f2so2811681ooq.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jun 2022 07:11:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81e5d5e4-e5a2-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=XEqJbeUtBUA7H3dETz0qV/lyhxCqf96iNnmhrd/lyBM=;
        b=nysprAenqNM4Mh2AKXkktVBD4SzwxWdU+Ux0NQ1Poi1P39NDpwkIieYI83zhNR+vpB
         wLL4HKuZhu6KtKsVC9+YvtiPE/0g7/QPg8Ex964zKYVJ+WCrbq6/rEfI/wtK62Ywu0ca
         nnaK1itiSkud0inplRvkuh43CgNx5T0Ek4e14aKDbpKpwU55oqI4uIPUu3YoZpJSM0tb
         KkomVsCewUKdYyRIOStaSOdeDuS0XXGc1JxhFXJAQRnhUTfD0D57R5ebQXhJJvHWiJve
         cApzA5pXnnmXk2b2g8Jy+8Uqw+bAoM0LfyE756JT5sJMr3XGqtL3BGZEhk9AoGeR7WcM
         1vuQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=XEqJbeUtBUA7H3dETz0qV/lyhxCqf96iNnmhrd/lyBM=;
        b=EX8p4RMQsiSVUIOwQUj72n7vUXhPaVOmwKGmrdiqLZ0zoBm/Ag3pU+coEqVu5ZCuYj
         O17SkxBvjDWlpKSqUUmQSrJzuT/Li2nDlpQp77FPeFwQrMgVl0Uypx8nN1snHtbU2mAK
         avl00HMR+WTuqbI8a8P2epCO8MR6/aaZ+zypMi8Y2leZ3/majY4ZixsokNjJJkCcjjqU
         Ts5ftDFDvQcUM3rrDwv3E0pieinXL1hZU6/SU75BSlURCYCoHwbFn6+F1jNuoIV1oALs
         2df+9Id1euIphAI8to3w5PvO9I0OD6nu4s6dMpYtPSfwKN2CjgKK0p5Y2vhiG2DO41ne
         Xpww==
X-Gm-Message-State: AOAM530nq81xDhT3QKJwWCtajUApVNQ8DkuIzVaBaj9Xw4vGMeqxgdWu
	UWhiOpgKRC3xRW7kFAelJOxWsk2tR5P+qhRjbcI=
X-Google-Smtp-Source: ABdhPJyrTZDaPeuBNCzLwj6fz+hW7GMmUfhUnksLKfF0rD0O+/mmMLD+Axjfuq0nQv2I9DOds4sEnec3Ze/pIdF6YcY=
X-Received: by 2002:a4a:6f49:0:b0:35e:1902:1d3b with SMTP id
 i9-20020a4a6f49000000b0035e19021d3bmr10104470oof.1.1654524666311; Mon, 06 Jun
 2022 07:11:06 -0700 (PDT)
MIME-Version: 1.0
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
 <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com> <Yp4JLd8UGS3jjD5Z@mail-itl>
In-Reply-To: <Yp4JLd8UGS3jjD5Z@mail-itl>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 6 Jun 2022 10:10:32 -0400
Message-ID: <CABfawhmz3+EJZ6qrqKOQ==Hmm93i+a4WBk8FcbOaBSxmaM3New@mail.gmail.com>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 6, 2022 at 10:03 AM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
>
> On Mon, Jun 06, 2022 at 09:32:52AM -0400, Tamas K Lengyel wrote:
> > > +/* Supported xHC PCI configurations */
> > > +#define XUE_XHC_CLASSC 0xC0330ULL
> > > +#define XUE_XHC_VEN_INTEL 0x8086ULL
> > > +#define XUE_XHC_DEV_Z370 0xA2AFULL
> > > +#define XUE_XHC_DEV_Z390 0xA36DULL
> > > +#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
> > > +#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
> > > +#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL
> >
> > I had to add an extra device ID here to get it working on my NUC,
> > would be nice if we could add that to the list of supported configs so
> > I don't need to custom patch:
> >
> > #define XUE_XHC_DEV_COMET_LAKE 0x02EDULL
> >
> > The patch is here:
> > https://github.com/tklengyel/xen/commit/dd0423aba6caa4ef41dff65470598a1c0c1105ae
>
> Interesting, I think known_xhc() is used only in the EFI variant of Xue.
> The Xen one just looks for any xHC based on the device class. And indeed,
> it works for me on Tiger Lake, which is not included here.
>
> I did need to select a specific controller, since I have 3 of them:
> 00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
> 00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01)
> 00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
>
> So, I need dbgp=xue2 or dbgp=xue@pci00:14.0.

Interesting! OK, I'll give that a shot and see if it works that way
for me too; it's certainly been a while since I last tested :)

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 15:51:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 15:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342630.567683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyF0k-0002Tt-Rv; Mon, 06 Jun 2022 15:51:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342630.567683; Mon, 06 Jun 2022 15:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyF0k-0002Tm-Nr; Mon, 06 Jun 2022 15:51:22 +0000
Received: by outflank-mailman (input) for mailman id 342630;
 Mon, 06 Jun 2022 15:51:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyF0j-0002Tc-8Q; Mon, 06 Jun 2022 15:51:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyF0j-00082K-4J; Mon, 06 Jun 2022 15:51:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyF0i-0005uC-43; Mon, 06 Jun 2022 15:51:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyF0i-0001FY-3b; Mon, 06 Jun 2022 15:51:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=US2k2BHBTckLqFFYL3yf2jDNoMlFuZqv44+W3Wncwls=; b=UqWCzy6mmb2fHDesHoUQ4lqwnf
	Eo7F2+OgMI+0Q0fnyCtyhwJSsUnL8eaSk55PuYfK6zHLb30DDwh9N8kJYcY/XAdDGZVjtuuFfq2Fy
	fHFN9di6Zjis+4Whv3dF1hbnD7sAsCDnFC2Y06a80PhHX91zFGExqXj/x4Bz0CAHFQbI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170843-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170843: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:build-amd64:xen-build:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 15:51:20 +0000

flight 170843 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170843/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 170736
 test-armhf-armhf-xl-credit2  14 guest-start              fail REGR. vs. 170736
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170736

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 170724
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170736
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   11 days
Testing same since   170843  2022-06-06 06:44:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1081 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 16:18:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 16:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342642.567697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyFQX-0005nP-4W; Mon, 06 Jun 2022 16:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342642.567697; Mon, 06 Jun 2022 16:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyFQX-0005nI-0f; Mon, 06 Jun 2022 16:18:01 +0000
Received: by outflank-mailman (input) for mailman id 342642;
 Mon, 06 Jun 2022 16:17:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NV1H=WN=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nyFQV-0005nC-Iv
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 16:17:59 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3aabeae7-e5b4-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 18:17:58 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id q7so20529305wrg.5
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jun 2022 09:17:58 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 l9-20020a7bc349000000b0039746638d6esm17590755wmj.33.2022.06.06.09.17.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 06 Jun 2022 09:17:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3aabeae7-e5b4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=RaQhq4KYkeTpBP9+9/YIEr0G6PjvF+TDJNb+GzMN8pE=;
        b=HCzloWuy8I0Pmkrc7dz8TytjmK6UWi8puFvzfUR+KM+5a651GzRP7FiAcVbTbXc5aF
         RpSyHIBoUAGSSTyLU1+2UgnXIGQ3x2Ml+503h9bYVrx10KgYl/iVV1rIvGsH8yz4fMrW
         THxdwFoza4sqeOLwiNooiNIHwvyWq++PCiYnhvUEsxQp5wyUKi3onG8GkasksmAj6W91
         T3dcgZOBFYHLdWRP3bupBiooLri+byrfeV7Qpi+zmlI+Bqps8LbOk5k+rrYQk6R7ObDI
         yohDP+sRBKbbIFKO51mIv+NPs8E47vrNQey39aELE8uZoBYAOvtLXVa2kdC3jzl57ldc
         DTIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=RaQhq4KYkeTpBP9+9/YIEr0G6PjvF+TDJNb+GzMN8pE=;
        b=Rb+puOO5Xhg6L3Vb2fTbpQpHoEX7GXS1qgadNzwH+yZBFMdxesm5Spj8/1TBRgBFRp
         gzpPwCrRE4LUW7yPImxqOujM7Yc4Hka1wk9McRuGiGpMLYTVkqy8p2hzUdJrlhCnywpM
         dIujFMYc9tkEoKhNJ5ML6qsheJXnoCgRe58qXyRqaI33u7b0Fd4Ye6FQdLqCOKzJP/ok
         mSFWQ9EpcpFwLUwJnTWrRFwrJ1YIsxllktjGdC3RrLedsasxgaofBd6Xbfb4qVBLP3PY
         d75HNG9h/dvg6SFjFjghyK1MWWH2nu0JEn/wWaL4nZTCdok0Kjtu6tKsA3aZiM0ebHxy
         IH/w==
X-Gm-Message-State: AOAM532ufNOF9BLqK4AcM2gEe2wKxclNaypUQAFF9QVHd4YIEe+b+opr
	k/mPmEcnyPK6GiQKEUxBrmU=
X-Google-Smtp-Source: ABdhPJyubms71mI0yq0O8eaFjibbBPQLqWwuuEbqo88skGsbrd3DUTneQb2KakphwFPQ3zQZUIvfJg==
X-Received: by 2002:adf:a3d3:0:b0:213:baff:7654 with SMTP id m19-20020adfa3d3000000b00213baff7654mr17611749wrb.158.1654532277594;
        Mon, 06 Jun 2022 09:17:57 -0700 (PDT)
Subject: Re: [PATCH] xen: unexport __init-annotated
 xen_xlate_map_ballooned_pages()
To: Masahiro Yamada <masahiroy@kernel.org>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 moderated for non-subscribers <xen-devel@lists.xenproject.org>,
 Stephen Rothwell <sfr@canb.auug.org.au>, Julien Grall
 <julien.grall@arm.com>, Shannon Zhao <shannon.zhao@linaro.org>,
 linux-kernel@vger.kernel.org
References: <20220606045920.4161881-1-masahiroy@kernel.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <20c9cd23-429f-896c-b59b-c518ff2562e2@gmail.com>
Date: Mon, 6 Jun 2022 19:17:56 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20220606045920.4161881-1-masahiroy@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 06.06.22 07:59, Masahiro Yamada wrote:

Hello

> EXPORT_SYMBOL and __init is a bad combination because the .init.text
> section is freed up after the initialization. Hence, modules cannot
> use symbols annotated __init. The access to a freed symbol may end up
> with kernel panic.
>
> modpost used to detect it, but it has been broken for a decade.
>
> Recently, I fixed modpost so it started to warn it again, then this
> showed up in linux-next builds.
>
> There are two ways to fix it:
>
>    - Remove __init
>    - Remove EXPORT_SYMBOL
>
> I chose the latter for this case because none of the in-tree call-sites
> (arch/arm/xen/enlighten.c, arch/x86/xen/grant-table.c) is compiled as
> modular.

Good description.


>
> Fixes: 243848fc018c ("xen/grant-table: Move xlated_setup_gnttab_pages to common place")
> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

I think the patch is correct.

Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

> ---
>
>   drivers/xen/xlate_mmu.c | 1 -
>   1 file changed, 1 deletion(-)
>
> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
> index 34742c6e189e..f17c4c03db30 100644
> --- a/drivers/xen/xlate_mmu.c
> +++ b/drivers/xen/xlate_mmu.c
> @@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>   
>   	return 0;
>   }
> -EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
>   
>   struct remap_pfn {
>   	struct mm_struct *mm;

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Jun 06 16:58:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 16:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342654.567720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyG3I-0001s5-CW; Mon, 06 Jun 2022 16:58:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342654.567720; Mon, 06 Jun 2022 16:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyG3I-0001ry-7f; Mon, 06 Jun 2022 16:58:04 +0000
Received: by outflank-mailman (input) for mailman id 342654;
 Mon, 06 Jun 2022 16:58:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BfO3=WN=gmail.com=tamas.k.lengyel@srs-se1.protection.inumbo.net>)
 id 1nyG3H-0001rs-4j
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 16:58:03 +0000
Received: from mail-ot1-x335.google.com (mail-ot1-x335.google.com
 [2607:f8b0:4864:20::335])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d2cb4155-e5b9-11ec-b605-df0040e90b76;
 Mon, 06 Jun 2022 18:58:01 +0200 (CEST)
Received: by mail-ot1-x335.google.com with SMTP id
 f9-20020a9d2c09000000b0060bf1fa91f4so3961827otb.2
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jun 2022 09:58:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2cb4155-e5b9-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=ppm3GrxvJwEnNEauVXNoAW5TIJNNp56oxbhAKdsNZMg=;
        b=ouRIAQGbIvSRxuiIoxiBxOqePqwzGCNocbGL1MRXuBJfDRftNNmwROsteWE90Ib9VN
         QLMjFD5UxI7OrUkjHEYvXpU0Xg/7+KfJ7ZCfe7WRfxf6j7j2OwtD1YKnKjMybcmqZhVk
         UCAGADAgx/si41FNtsZ1R232B4v3Km/Nl9FvfQQtw/2SasQv094JTaRzNpzL8it/jGwU
         fQW8xgkwPR/R9AezkWoI3iVJZ8itZ34+qgR85ABER6a2N5lTWClP1uLABKcko/JHvK4A
         1fQdthrUr8iPxhjVk8uL10aWhlm/oyWwHLy6DlIwWtNGOtlFo6ieHk3pAUVPaqsB+fnj
         r64Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=ppm3GrxvJwEnNEauVXNoAW5TIJNNp56oxbhAKdsNZMg=;
        b=gSpYiy8TGLNEZ9UdNts+3em8wlGJJhgU/E5geQc9/+VxdCu/Ea6G45VL3viOWGL7r/
         rK8JO6SY022Fu718whlgxHtgqgz6tq5SO8jXWLHG7Bc923fC5jszhVcXYogMJjX1KDka
         zNKcCMsEov0cCzS3Y/Pp1goiX2B7fwBW8R2QQgLS/ZPng1vL6hwKfbnOAuWdHN9Q/Tv8
         yx0sfE2LypwscyztA1KnG0gQKoTvwJXAqn66AmWWCh0AlUcwHpwxHyhf3kShjy1rzR0B
         xDIbD8z4oARnh7f9hmmoKH8Ok9B9uqNY4FATET9AmXbUDe+yUbTld2Rf7O26w9bjwg8g
         Sfaw==
X-Gm-Message-State: AOAM530kiDKE/Sk7NqvQboQofjDjRII5FD7H6B4m3j2jMj4A+BgsBAaY
	TAkLTh2b8mTMu5MWwhgbp5jfNV51yRQUaVscjjqU3OXuvKV5Sw==
X-Google-Smtp-Source: ABdhPJx5wfsbyttE6EtMN0PpFUqQPjjcqUsz09nvDqDEdnms+pmJWJ+hO2u5jVBbwIzroqgiHhLGu5Isjr4UsVMEnmI=
X-Received: by 2002:a9d:5c01:0:b0:60b:f1fc:3a04 with SMTP id
 o1-20020a9d5c01000000b0060bf1fc3a04mr4629804otk.204.1654534680462; Mon, 06
 Jun 2022 09:58:00 -0700 (PDT)
MIME-Version: 1.0
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
 <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com>
 <Yp4JLd8UGS3jjD5Z@mail-itl> <CABfawhmz3+EJZ6qrqKOQ==Hmm93i+a4WBk8FcbOaBSxmaM3New@mail.gmail.com>
In-Reply-To: <CABfawhmz3+EJZ6qrqKOQ==Hmm93i+a4WBk8FcbOaBSxmaM3New@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 6 Jun 2022 12:57:26 -0400
Message-ID: <CABfawhkkGr6Lp4SBEw7nsqfUs28QEqoCuVgTRBg9ZUCirLW5_g@mail.gmail.com>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 6, 2022 at 10:10 AM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Mon, Jun 6, 2022 at 10:03 AM Marek Marczykowski-Górecki
> <marmarek@invisiblethingslab.com> wrote:
> >
> > On Mon, Jun 06, 2022 at 09:32:52AM -0400, Tamas K Lengyel wrote:
> > > > +/* Supported xHC PCI configurations */
> > > > +#define XUE_XHC_CLASSC 0xC0330ULL
> > > > +#define XUE_XHC_VEN_INTEL 0x8086ULL
> > > > +#define XUE_XHC_DEV_Z370 0xA2AFULL
> > > > +#define XUE_XHC_DEV_Z390 0xA36DULL
> > > > +#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
> > > > +#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
> > > > +#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL
> > >
> > > I had to add an extra device ID here to get it working on my NUC,
> > > would be nice if we could add that to the list of supported configs so
> > > I don't need to custom patch:
> > >
> > > #define XUE_XHC_DEV_COMET_LAKE 0x02EDULL
> > >
> > > The patch is here:
> > > https://github.com/tklengyel/xen/commit/dd0423aba6caa4ef41dff65470598a1c0c1105ae
> >
> > Interesting, I think known_xhc() is used only in the EFI variant of Xue.
> > The Xen one just looks for any XHC based on the device class. And indeed, it
> > works for me on Tiger Lake, which is not included here.
> >
> > I did need to select a specific controller, since I have 3 of them:
> > 00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
> > 00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01)
> > 00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
> >
> > So, I need dbgp=xue2 or dbgp=xue@pci00:14.0.
>
> Interesting! OK, I'll give that a shot and see if it works that way
> for me too, it's certainly been a while since I last tested :)
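As an aside for readers of the archive, the two detection strategies contrasted above can be sketched roughly as below. This is illustrative C only, not the actual Xue code: the XUE_XHC_* values come from the quoted hunk, but the function bodies and the is_xhc_class() helper are assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Constants taken from the quoted patch hunk. */
#define XUE_XHC_CLASSC 0xC0330ULL         /* serial-bus/USB/xHCI class code */
#define XUE_XHC_VEN_INTEL 0x8086ULL
#define XUE_XHC_DEV_Z370 0xA2AFULL
#define XUE_XHC_DEV_COMET_LAKE 0x02EDULL  /* the ID Tamas had to add */

/* EFI-variant style: accept only explicitly listed vendor/device pairs. */
bool known_xhc(uint16_t vendor, uint16_t device)
{
    return vendor == XUE_XHC_VEN_INTEL &&
           (device == XUE_XHC_DEV_Z370 ||
            device == XUE_XHC_DEV_COMET_LAKE /* , ... more IDs ... */);
}

/* Xen-variant style: accept any controller advertising the xHCI class. */
bool is_xhc_class(uint32_t class_code)
{
    return class_code == XUE_XHC_CLASSC;
}
```

The class-based match would explain why the Xen side works on Tiger Lake even though no Tiger Lake device ID appears in the list.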

Yeap, with console=dbgp dbgp=xue@pci00:14.0 it works as expected.
Xen's boot does hang if you don't have a debug cable connected or if
the other end is not plugged into the right USB3 port. Not sure if
that behavior is documented anywhere. Once I found the right USB3 port
on the machine that receives the debug output, it started booting and
everything works as expected (i.e. one-way communication only).
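For anyone reproducing this from the archive, the working setup translates into boot entries along these lines. This is an illustrative GRUB fragment, not taken from the thread; the BDF 00:14.0 is specific to this machine and the file paths are placeholders.

```shell
# Illustrative grub.cfg fragment -- adjust paths and the PCI BDF.
# Find candidate xHCI controllers with: lspci | grep -i usb
menuentry 'Xen with Xue console' {
    multiboot2 /boot/xen.gz console=dbgp dbgp=xue@pci00:14.0
    module2 /boot/vmlinuz root=/dev/sda1 ro
    module2 /boot/initrd.img
}
```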

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 17:04:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 17:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342662.567730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyG9n-0003Qu-32; Mon, 06 Jun 2022 17:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342662.567730; Mon, 06 Jun 2022 17:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyG9n-0003Qn-02; Mon, 06 Jun 2022 17:04:47 +0000
Received: by outflank-mailman (input) for mailman id 342662;
 Mon, 06 Jun 2022 17:04:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PxEH=WN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyG9l-0003Qh-9A
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 17:04:45 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0d040df-e5ba-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 19:04:43 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id EFED35C00EF;
 Mon,  6 Jun 2022 13:04:39 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Mon, 06 Jun 2022 13:04:39 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 6 Jun 2022 13:04:37 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0d040df-e5ba-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm1; t=1654535079; x=
	1654621479; bh=hcfGpKGkS/9iwKrvuCJTKt3bI0+5tbPMzlzGdi+DVB8=; b=T
	neXTQCNrnFutC04mYjsiLR6lIUJ0FK6L7VjdrTBFglMF8usXopgDk1A2tqyPlEjM
	70ZVqxrTcNJDNKGJaiZeeGvyxVmMY6ja3VP0wZ3B74bB3m0DDWVe3FTsD+oGNMz7
	inobeS3plHybu7Sr4RLkrGX9I+ZkKhXB/54Y/cq27eVcreaZhYRMW8xMbEbctIrV
	yAbSbZKiqQ3vbzxcQb6gGM39v/SXFFSGWTZFZvByiEH1ag+oKEs45EVx7Bm2P8Rp
	WrtiWA9qLRD/+T90XEMOTmYpPbHPz/KTiibRIwezQKHSENJmlncN0MWRX74FujlS
	xn49o07tiYn7n42hjzSFQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1654535079; x=1654621479; bh=hcfGpKGkS/9iwKrvuCJTKt3bI0+5
	tbPMzlzGdi+DVB8=; b=jVAbXvYmp/4umacGNwbrxWxOEnaUjzrL61solks6nNe5
	wjjJskToGG818vSXogIrH+vOmUIKCmf8B+CILhIdWQVrU1vTjNQnbCpv2iLCFDcq
	oNPiJU5B9pkob6G9J5tMv2P0OofXJzIVMEycT/pFApcVJqueVJDFN9AhlLDSQGGq
	VAnDfF74BP3zYK/mht4wiBBAd3wSosqtfJG/2+YSihduX/MNw6SihSj0SqT/Hjxz
	uIpDiH403TmtB+WboTlSvrRNk5HQhB7N8MwV7/5CvEzd2mPWDhgpulg5f5nTnHKb
	JWQ5BGZJx/CtuolnWL49U6ipei8+6ze4YTPlhPY1yw==
X-ME-Sender: <xms:pzOeYgXRsUXUTsnzfaVMgOf8wpA2IUjHKN2s3ZBXnizeCvYK6z5sYg>
    <xme:pzOeYkmZEr0xswikKlxjRa18IEy183TgZE-_8G4X8w3iHpjohuC2rf5zu9DRGDW-f
    HKtA4Gysl80sA>
X-ME-Received: <xmr:pzOeYkYyT8vRKyZ_0PYB5Ale6IklTMTUfXx39452DSz9WQs3nAOfp24dnaOrovdak-mrxBQscE2ZksV_rw3s5-GSK7nYjCIyMQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddtfedgudehucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepueek
    teetgefggfekudehteegieeljeejieeihfejgeevhfetgffgteeuteetueetnecuffhomh
    grihhnpehgihhthhhusgdrtghomhenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:pzOeYvW3Tgk3ualzId29zcGUoUauxwGBNUdlCKTY6q9prHQlt7evvg>
    <xmx:pzOeYqk16ve4W218pIn8USRa2LtNjy4DGVI2NSxhdQzxvnYoZrfhsw>
    <xmx:pzOeYkdLoYG69Y_JbvPkivIC6cjFeQXSNxa1bqEnXB9xmZaVNdd6og>
    <xmx:pzOeYoaBeVf7m7q_PPVBaPgf0LSViWWGfgEE0ByzcR-MV6PlWNFLzQ>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 6 Jun 2022 19:04:34 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger
Message-ID: <Yp4zo1UQV19euwRb@mail-itl>
References: <cover.07846d9c1900bd8c905a05e7afb214b4cf4ab881.1654486751.git-series.marmarek@invisiblethingslab.com>
 <8d45d05dae0b5a2aa62120c7ff62295319a74478.1654486751.git-series.marmarek@invisiblethingslab.com>
 <CABfawhn7XGoMRb9LsSwNyaCb92KD5jC4juM+NwOMyOntOgo5pg@mail.gmail.com>
 <Yp4JLd8UGS3jjD5Z@mail-itl>
 <CABfawhmz3+EJZ6qrqKOQ==Hmm93i+a4WBk8FcbOaBSxmaM3New@mail.gmail.com>
 <CABfawhkkGr6Lp4SBEw7nsqfUs28QEqoCuVgTRBg9ZUCirLW5_g@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="I253qF8crHtcqfQr"
Content-Disposition: inline
In-Reply-To: <CABfawhkkGr6Lp4SBEw7nsqfUs28QEqoCuVgTRBg9ZUCirLW5_g@mail.gmail.com>


--I253qF8crHtcqfQr
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 6 Jun 2022 19:04:34 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH 01/12] drivers/char: Add support for Xue USB3 debugger

On Mon, Jun 06, 2022 at 12:57:26PM -0400, Tamas K Lengyel wrote:
> On Mon, Jun 6, 2022 at 10:10 AM Tamas K Lengyel
> <tamas.k.lengyel@gmail.com> wrote:
> >
> > On Mon, Jun 6, 2022 at 10:03 AM Marek Marczykowski-Górecki
> > <marmarek@invisiblethingslab.com> wrote:
> > >
> > > On Mon, Jun 06, 2022 at 09:32:52AM -0400, Tamas K Lengyel wrote:
> > > > > +/* Supported xHC PCI configurations */
> > > > > +#define XUE_XHC_CLASSC 0xC0330ULL
> > > > > +#define XUE_XHC_VEN_INTEL 0x8086ULL
> > > > > +#define XUE_XHC_DEV_Z370 0xA2AFULL
> > > > > +#define XUE_XHC_DEV_Z390 0xA36DULL
> > > > > +#define XUE_XHC_DEV_WILDCAT_POINT 0x9CB1ULL
> > > > > +#define XUE_XHC_DEV_SUNRISE_POINT 0x9D2FULL
> > > > > +#define XUE_XHC_DEV_CANNON_POINT 0x9DEDULL
> > > >
> > > > I had to add an extra device ID here to get it working on my NUC,
> > > > would be nice if we could add that to the list of supported configs so
> > > > I don't need to custom patch:
> > > >
> > > > #define XUE_XHC_DEV_COMET_LAKE 0x02EDULL
> > > >
> > > > The patch is here:
> > > > https://github.com/tklengyel/xen/commit/dd0423aba6caa4ef41dff65470598a1c0c1105ae
> > >
> > > Interesting, I think known_xhc() is used only in the EFI variant of Xue.
> > > The Xen one just looks for any xHC based on the device class. And indeed,
> > > it works for me on Tiger Lake, which is not included here.
> > >
> > > I did need to select a specific controller, since I have 3 of them:
> > > 00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
> > > 00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01)
> > > 00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
> > >
> > > So, I need dbgp=xue2 or dbgp=xue@pci00:14.0.
> >
> > Interesting! OK, I'll give that a shot and see if it works that way
> > for me too, it's certainly been a while since I last tested :)
>
> Yeap, with console=dbgp dbgp=xue@pci00:14.0 it works as expected.
> Xen's boot does hang if you don't have a debug cable connected or if
> the other end is not plugged into the right USB3 port. Not sure if
> that behavior is documented anywhere. Once I found the right USB3 port
> on the machine that receives the debug output, it started booting and
> everything works as expected (i.e. one-way communication only).

Indeed, the indefinite wait for the connection is not the most
convenient. For debugging, I added a timeout, but it was based on
loop iterations, not actual time (not sure if there is any time
source available at this early stage...). I'll see if this can be
improved.
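A rough shape of that kind of iteration-bounded wait, as a sketch only: the names and the spin count below are invented for illustration and are not from the patch; a real version would calibrate the bound or switch to a proper time source once one is available.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented for illustration: cap the connect wait by iteration count,
 * since an early-boot time source may not be available yet. */
#define XUE_CONNECT_SPINS 100000UL

/* poll_ready() stands in for reading the DbC "connected" status bit;
 * ctx lets a caller (or test) supply its own state. */
bool xue_wait_for_connect(bool (*poll_ready)(void *), void *ctx)
{
    unsigned long i;

    for ( i = 0; i < XUE_CONNECT_SPINS; i++ )
        if ( poll_ready(ctx) )
            return true;

    return false; /* give up instead of hanging boot indefinitely */
}

/* Helpers used only to exercise the loop. */
bool never_ready(void *ctx) { (void)ctx; return false; }
bool ready_at_count(void *ctx)
{
    unsigned long *n = ctx;
    return (*n)-- == 0; /* becomes ready after *n polls */
}
```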

--=20
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--I253qF8crHtcqfQr
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmKeM6IACgkQ24/THMrX
1yxoiwf/VqgGjT8amugHOiHkDWgx4j1qB/NtSaOIRwYmrsFeaunBDi3VeNZCSPLL
xI9CtNbH4Do3SqSYyyPcFtct7gCP4dCyM0MeS7AeqN6O1sZ55SCRblGzbXXwrQyi
HOposvRgSSzv8Muw/wNW8LvRr9EmPhU4HbaP/O3E/WTnbYpmbmiY0MugZAqgBouH
LUiWqg28zmQjpvrwMrTpLeLzDJV2LJMu5vrAltbH7//tGgyTeGhsBzgtPiWNRWOk
NBH8HtvvNdVihC8ikHDTcnQKSytrIKbQ6jYtWnIiiBQPOPQ4IFgRkU44sAjP3sBD
tFtddmRxdrPqF11mw0B18NkCBqzFoA==
=7044
-----END PGP SIGNATURE-----

--I253qF8crHtcqfQr--


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 18:48:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 18:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342672.567746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyHlk-0005RJ-PQ; Mon, 06 Jun 2022 18:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342672.567746; Mon, 06 Jun 2022 18:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyHlk-0005RC-KN; Mon, 06 Jun 2022 18:48:04 +0000
Received: by outflank-mailman (input) for mailman id 342672;
 Mon, 06 Jun 2022 18:48:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qjuy=WN=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1nyHli-0005R4-UU
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 18:48:03 +0000
Received: from sonic301-22.consmr.mail.gq1.yahoo.com
 (sonic301-22.consmr.mail.gq1.yahoo.com [98.137.64.148])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2e591c75-e5c9-11ec-bd2c-47488cf2e6aa;
 Mon, 06 Jun 2022 20:47:59 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Mon, 6 Jun 2022 18:47:56 +0000
Received: by hermes--canary-production-ne1-799d7bd497-2pzdr (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID 45bd2f9c05a2726a2f7095440d7fd00d; 
 Mon, 06 Jun 2022 18:47:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e591c75-e5c9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netscape.net; s=a2048; t=1654541276; bh=kxA8kdihdD4E0EIVi7e3E/kzwhCP82CLQhp0tYo1HTo=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=eRfMLgO84clK9Is9adCzXkF2Nu9ovNwQNdLqPGYfoK4E4gjLQTFTZqgf2VGDOctQhf8zF2pLZElqo5Ga1NjfVfKjLvDL8d/8rNllYVNBp3mjPqm26zpoXLMu1Aj6FHktbEyYmzO9K+Cr/ImKX1mFISP0B8Yljd5NmZ2IhOjwA/necbEapS09u5uK1GAKH6yBg5gG2trs3DA15WCckIyjbIjZDGmxwuRzzfYZ31jQgHJPKD+rHjwjGrjDPkfeaZ0w9+e3aWVTjUivvbHmEhJdVTL22G2CIKsVlB5xWHLBcnPC9ZdxdOWMKY/bpd342bcfcKtRKP0gB/VaL84Ghd4chw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1654541276; bh=w/TEIgVgCTuIVvpXjrf7yUQgX1KaApCiU8xGdhmbVAZ=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=FfAHTDGel5XDTwCbRGTJWNmf2ZHPQlB53WgtDDOjlB84+8Wv/xLRH6qnoAUaEGhzUdphBGNDkP+3Sf5twKDclLxLxo7fEO5ycliHDa43q29PijytdLufru15dBmfHnQH/ZX9W5GRRc77z7ArnhVXk7npCED3Z9pGf4yxcVyE3CjhJbkepNhKx5e3MuRo7LeFYHTZBRURq9EZl+8/56zNUZnLCoOmbZJfnSQX1og2ilkz+nyszzdAAB0Df44wrg666u0YkCR85WAWJfdoLIoyijqLf5tlokYRhOHrOUfhxGLKlaa6sWmBKC4+6jb5q3SAImVNOSNCbANLInKygrnZnA==
X-YMail-OSG: YNpYGrwVM1kKe8RFKhr3QgzrMOuYABQljs98yXzsBHKRF07LumTfjB.dEyN_CRa
 JZ5O4xvnT0CnZj6ccawmGH6K.kkuVKibez8PMCA5WZuBeTC7KGyMC.JF5w2xxoerKrtZ.me4kg2n
 LcQv6hXCep5WpfrnvORrG0l0iFkiQZ7GVAfo9P58bu81mD4kwBitJ__3DqIl3TtqEnubbZqtm68h
 jXUyrG.jD1nKjevtHHSpwsPw.7IF4xUQdoPtL.3SgQWphm8P8BOpX1BswRSfrFvzPX_P8aA1EajM
 6XOwb0dOmV34WXhqj1qZUwx9VpdqJCCxSmw7zbgHfQcTRnvHNPpfeKmw2kWdjIjNvKrgM7whb4pM
 i4zNX2mK5GuVCI.bbBRW8THXkwMQe_LVkTLOJ3OKFfIB6.wQTe._NnVQUrJw70ZOqfA741hj1OfW
 8erjekcOnjDg9STmJRf8Ud5kkapZ7haDvGhaOPYiMljZDoTCreOgEemd2txbXDM_e54gqyjFNpNo
 p.sH_jEGQmzb81PLwh3PWAtgEqczo2NS0nrP6LTi2eaHFt3kCnUa3T0dCOUeeYUQPbcxsMpNHQ.y
 L.RO9JD7AhNn0sZdAlqEviDKo9ddM5cYtNKmWA1djztePlK_NTIGXRFroVs267zK_ZYYlGBF82pT
 ppR8RzeIpPHVfwhEqBKEqfp4wYkRyp6ur6ndAv9VvnAhEnDL1MvtNypDirRyo4QsQW6DG2PQcPIW
 zh2PoVZpdiJh0DJRt3cby_16OeUKOtm3tonO9ZCpdFHq11KZLH_0v.3YaUkJHN5zENDXEojhoOvu
 ophpEWrJPI0MCJTBU4tnfK6yuNwWmbxwaPnlJoP7D_CAYdstHwpHzDOIP2NlCUEcW2jmcy5KDJZY
 rzHbpApM4SMHpa6Aq4p5gCEBmQvHLvqgQVd_Mw8RnE9odP2_mrHEjZZuwNXne97rxQFufU.PQI1h
 4Hw31_2gXt_o4AKYInEoa7cgavivh8AyPZq_BtBB_tI0aEofueIE6z_SV.AsRyK6RNx3zf1EhQat
 Vfyb6NOV5ji8BdrzRllYf0v6FferiUtkk_jXZS2fkxgNKFtBZ.zJdfkac9bW7veAnB3pbUfGMmyI
 vG5q1.8AvbavDJSlfB5K1HRS6cjC_FSpBapM3cki0WoqkS_w30nZjZK0nLhf43d.yUGGcKgr0giW
 MMnmrpZsIv6F0Qv7ClLh7qMrrB.mhx.Ae6EyJ4trT3t6JWfN_gUDoD7U9fwwciWy6hpMgXjaJNUn
 7cyPvZxd5.Kk1aKt2KKBHTs2U7c30utNu3NOeqYMbPuYFIwh.g10EEdcgAjA581LSXHMq1NGSLyT
 QzzGk8pKLezJD7caLqPXfRJOiF0keX7BDlw8zk4l2HbJDXsnXBPtOnm7UFI8Gb.87SZYyS6HoJdQ
 0A.ygh8ZQy6QIQ3e0U8IDy.J17qeBUee7wYkOYukutXdbcMxymgftU18O2c682D1yGUD1L1_OBez
 ldUsTBn6741.hA0bGaGcFsRZjsR6r2E_chffdR.iUoLJMCYg_LxxQrO5Z.i0w0TE3.BhVzelOcb8
 wHxJq8i4zZ91yQqoI2Wxhgvri1I1Bt2k3Pksa9smZWZeadus5yhD1mB5l38sgEx8.Uil.5NDAUWG
 Vu_sfp7oiNRuQoh4VIHyStjn1RsNjH.cHyvSg1JAQYcOJ_6MAmr0BnbN0hTLAdZkD1SiDaE3L3H7
 LA6KBEELw6boaKJQgjEOgQb_2RwnVo6edTYG44frmFNCVh.fyLFpbN28drpVjYQpvbvq9gT4y3zg
 QpxhNlPMQKXxDz33pJxvJqyABcyB5zv184NRokJK5Ee1DnRF43EuFvs5ZzTn1g0RaxuHRfERVG.T
 wL3asrkpOEcj2gMLvcDPYLiIP5epB_2tyyv5Ew6tycf.XZVdErGedMwMQTzS.qJM59k8hMKssKRT
 u7j42YXSkx5bCJwwl06vDAMVvirxRAdrxq1CPedI_f5cwvmZ_0hn6I2tsIu3cyl5B.v3Abc0sHwM
 RFovZRiQT9QZa2cvHpZ_6pXaRrtOhXebi9fquv8IFZGpVGKq0oAbKQ.GsPf7g0cTcJpG6Zzm16Ls
 yDzS5TUslt_dgKpCpDkhYJKIzwIjkf2prTOjJ7sYbIMkXmuMN6q7seUXoWu5_tFxHcpgrzMnpYyg
 hUFtHMjTM8oDPHCLsVk7.5FoSmANI2WSZbNGi86Rk2kr1RHDtj4_xiZ9CksB7kJKy_HuAtS6mMBA
 W07JWp2RBpGGbAxQEQd2bL.rG
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <23bdc704-7bf6-c3a1-755c-ad66bcb6695e@netscape.net>
Date: Mon, 6 Jun 2022 14:47:50 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH] tools/libs/light/libxl_pci.c: explicitly grant access
 to Intel IGD opregion
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@netscape.net>
To: Anthony PERARD <anthony.perard@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz.ref@netscape.net>
 <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz@netscape.net>
 <YkSQIoYhomhNKpYR@perard.uk.xensource.com>
 <32638cee-de07-aa33-810b-534da4fa08ae@netscape.net>
In-Reply-To: <32638cee-de07-aa33-810b-534da4fa08ae@netscape.net>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20225 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5964

On 6/3/22 9:10 PM, Chuck Zmudzinski wrote:
> On 3/30/22 1:15 PM, Anthony PERARD wrote:
>> Hi Chuck,
>>
>> On Sun, Mar 13, 2022 at 11:41:37PM -0400, Chuck Zmudzinski wrote:
>>> When gfx_passthru is enabled for the Intel IGD, hvmloader maps the IGD
>>> opregion to the guest but libxl does not grant the guest permission to
>> I'm not reading the same thing when looking at the code in hvmloader. It
>> seems that hvmloader allocates some memory for the IGD opregion rather
>> than mapping it.
>>
>>
>> tools/firmware/hvmloader/pci.c:184
>>      if ( vendor_id == 0x8086 )
>>      {
>>          igd_opregion_pgbase = mem_hole_alloc(IGD_OPREGION_PAGES);
>>          /*
>>           * Write the OpRegion offset to give the opregion
>>           * address to the device model. The device model will trap
>>           * and map the OpRegion at the given address.
>>           */
>>          pci_writel(vga_devfn, PCI_INTEL_OPREGION,
>>                     igd_opregion_pgbase << PAGE_SHIFT);
>>      }
>>
>> I think this write would go through QEMU, does it do something with it?
>> (I kind of replying to this question at the end of the mail.)
>>
>> Is this code in hvmloader actually run in your case?
>>
>>> Currently, because of another bug in Qemu upstream, this crash can
>>> only be reproduced using the traditional Qemu device model, and of
>> qemu-traditional is a bit old... What is the bug in QEMU? Do you have a
>> link to a patch/mail?

I finally found a patch that fixes the upstream bug on my system
but I am not sure it is the best fix. It is a patch that QubesOS has
been using since 2017.

I just opened an issue titled "Incorrect register mask for PCI
passthrough on Xen" with Qemu upstream about this problem;
it gives all the details:
https://gitlab.com/qemu-project/qemu/-/issues/1061

When you get a chance, can you take a look at it?

Not being an official Xen or Qemu developer, I would appreciate
any suggestions you might have for me before I formally submit
a patch to Qemu upstream. Please reply here or on the Qemu issue
tracker.

Best Regards,

Chuck

>
> I finally found a patch for the other bug in Qemu upstream. The
> patch is currently being used in QubesOS, and they first added
> it to their version of Qemu way back in 2017:
>
> https://github.com/QubesOS/qubes-vmm-xen-stubdom-linux/pull/3/commits/ab2b4c2ad02827a73c52ba561e9a921cc4bb227c 
>
>
> Although this patch is advertised as applying to the device model
> running in a Linux stub domain, it is also needed (at least on my
> system) with the device model running in Dom0.
>
> Here is the story:
>
> The patch is titled "qemu: fix wrong mask in pci capability registers 
> handling"
>
> There is scant information in the commit message about the nature of
> the problem, but I discovered the following in my testing:
>
> On my Intel Haswell system configured for PCI passthrough to the
> Xen HVM guest, Qemu does indeed incorrectly set the emulated
> register: it uses a wrong mask that disables, instead of enabling,
> the PCI_STATUS_CAP_LIST bit of the PCI_STATUS register.
>
> This disabled the MSI-X capability of two of the three PCI devices
> passed through to the Xen HVM guest. The problem only
> manifests badly in a Linux guest, not in a Windows guest.
>
> One possible reason that only Linux guests are affected is that
> I discovered in the Xen xl-dmesg verbose logs that Windows and
> Linux use different callbacks for interrupts:
>
> (XEN) Dom1 callback via changed to GSI 28
> ...
> (XEN) Dom3 callback via changed to Direct Vector 0xf3
>
> Dom1 is a Windows Xen HVM guest and Dom3 is a Linux HVM guest.
>
> Apparently the Direct Vector callback that Linux uses requires
> MSI or MSI-X capability on the passed-through devices, but the
> wrong mask in Qemu disables that capability.
>
> After applying the QubesOS patch to Qemu upstream, the
> PCI_STATUS_CAP_LIST bit is set correctly for the guest and
> PCI and Intel IGD passthrough works normally because the
> Linux guest can make use of the MSI-X capability of the
> PCI devices.
>
> The problem was discovered almost five years ago. I don't
> know why the fix has not been committed to Qemu
> upstream yet.
>
> After this, I was able to discover that the need for libxl to
> explicitly grant permission for access to the Intel OpRegion
> is only needed for the old traditional device model because
> the Xen hypercall to gain permission is included in upstream
> Qemu, but it is omitted from the old traditional device model.
>
> So this patch is not needed for users of the upstream device
> model who also add the QubesOS patch to fix the
> PCI_STATUS_CAP_LIST bit in the PCI_STATUS register.
>
> This all assumes the device model is running in Dom0. The
> permission for access to the Intel OpRegion might still be
> needed if the upstream device model is running in a stub
> domain.
>
> There are other problems in addition to this problem of access
> to the Intel OpRegion that are discussed here:
>
> https://www.qubes-os.org/news/2017/10/18/msi-support/
>
> As old as that post is, the feature of allowing PCI and VGA
> passthrough to HVM domains is still not well supported,
> especially for the case when the device model runs in a
> stub domain.
>
> Since my proposed patch only applies to the very insecure
> case of the old traditional device model running in Dom0,
> I will not pursue it further.
>
> I will look for this feature in future versions of Xen. Currently,
> Xen 4.16 advertises support for Linux-based stub domains
> as "Tech Preview." So future versions of Xen might handle
> this problem in libxl or perhaps in some other way, and also
> hopefully the patch to Qemu to fix the PCI capabilities mask
> can be committed to Qemu upstream soon so this feature
> of Intel IGD passthrough can at least work with Linux
> guests and the upstream Qemu running in Dom0.
>
> Regards,
>
> Chuck
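To make the mask issue above concrete: a device model typically builds the value a guest reads from a passed-through config register by merging hardware bits with emulated bits under a mask. The sketch below is illustrative C, not QEMU's actual code; only the PCI_STATUS_CAP_LIST value (bit 4) comes from the PCI specification. With a mask that fails to pass the capabilities-list bit through, the guest sees it cleared and never walks the capability chain to find MSI-X.

```c
#include <assert.h>
#include <stdint.h>

#define PCI_STATUS_CAP_LIST 0x0010 /* capabilities list present (PCI spec) */

/* Illustrative merge: bits set in hw_mask are taken from the physical
 * device, all other bits come from the emulated register state. */
uint16_t merge_pci_status(uint16_t hw_val, uint16_t emu_val, uint16_t hw_mask)
{
    return (hw_val & hw_mask) | (emu_val & ~hw_mask);
}
```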


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 22:43:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 22:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342702.567763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyLRR-0000PJ-LL; Mon, 06 Jun 2022 22:43:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342702.567763; Mon, 06 Jun 2022 22:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyLRR-0000Ot-B6; Mon, 06 Jun 2022 22:43:21 +0000
Received: by outflank-mailman (input) for mailman id 342702;
 Mon, 06 Jun 2022 22:43:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyLRQ-0000O7-3f; Mon, 06 Jun 2022 22:43:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKo3-0006Q3-9S; Mon, 06 Jun 2022 22:02:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKo2-0008Rk-PA; Mon, 06 Jun 2022 22:02:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKo2-0000dc-Og; Mon, 06 Jun 2022 22:02:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TetnxRZwv5DLtIWmGDnKfwA8xcu1ZT0hMBGYMPtkJNU=; b=GGyZe42dIzdItbKRkUiYIn7a7p
	2DEMDXWbNEQlOu0yUd5jul1bleuyqkFECBBfPU5T0AAluQWvIm/Jo5QdNlZe2QesmbH15YmuLTcsj
	buE1BMEYQovkiuk4dYnYdv0jo6LIBaG1WHRUnFKlKfDxXzvtj0z0v1Jo4PSyUolJyzfo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170844-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170844: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f2906aa863381afb0015a9eb7fefad885d4e5a56
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 22:02:38 +0000

flight 170844 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170844/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f2906aa863381afb0015a9eb7fefad885d4e5a56
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   13 days
Failing since        170716  2022-05-24 11:12:06 Z   13 days   38 attempts
Testing same since   170841  2022-06-06 03:15:23 Z    0 days    2 attempts

------------------------------------------------------------
2274 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268374 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 22:43:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 22:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342703.567765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyLRR-0000Qe-Sx; Mon, 06 Jun 2022 22:43:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342703.567765; Mon, 06 Jun 2022 22:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyLRR-0000PN-MZ; Mon, 06 Jun 2022 22:43:21 +0000
Received: by outflank-mailman (input) for mailman id 342703;
 Mon, 06 Jun 2022 22:43:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyLRQ-0000O7-4c; Mon, 06 Jun 2022 22:43:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKqq-0006Tk-Hz; Mon, 06 Jun 2022 22:05:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKqq-00009B-1n; Mon, 06 Jun 2022 22:05:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyKqq-0002np-1L; Mon, 06 Jun 2022 22:05:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5S/7W14/Q0OuknkC2K345TQBcc249GuBEYJtLruIs3s=; b=IPgrDDdOjy2jPzw4/mnXdr4cRQ
	dZdRJSxPJ7uxRZTy459eUuw3viSEh1G5cjhkJOP4rHYqZpK7FXp7yxC19lrUAX1+iYycuzlgL6i4T
	m5DlAVGSwHQ7jXcK1CGKEMA99f1fv8RkyreQt8uQxQ15ZwJdqXptHFXq0/tjodKdqwYY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170850-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170850: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jun 2022 22:05:32 +0000

flight 170850 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170850/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170796  2022-06-01 08:00:24 Z    5 days
Testing same since   170850  2022-06-06 18:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   58ce5b6c33..cea9ae0622  cea9ae06229577cd5b77019ce122f9cdd1568106 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 23:44:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 23:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342725.567791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyMNx-0007PZ-B3; Mon, 06 Jun 2022 23:43:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342725.567791; Mon, 06 Jun 2022 23:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyMNx-0007PS-8J; Mon, 06 Jun 2022 23:43:49 +0000
Received: by outflank-mailman (input) for mailman id 342725;
 Mon, 06 Jun 2022 23:43:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HB7Z=WN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyMNw-0007PM-E3
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 23:43:48 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 816ba312-e5f2-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 01:43:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 24B3AB81A99;
 Mon,  6 Jun 2022 23:43:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 617FAC385A9;
 Mon,  6 Jun 2022 23:43:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 816ba312-e5f2-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654559023;
	bh=2RsyEJs423JMN7A4DtLut/N786UxEQNyTITZ2k18fmk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Trpi4HAVxEqaKQq4B+EdeXeU+KZkabNoC/7XURg0Z/+qu5AtKD4f3zRolw/7wIX10
	 N/B8UeHfiiUzZb1bQyFzl8sN0JzdXg2D/3XzGIz16C8yz0m59b0HIIWV3yldkT9JTJ
	 fWwq71Nn5rMyKGiew4pGv+/7TaLA6SHgmURb54Nk1rXBo1muZ1bblrVmNDkR6QdQ9y
	 v7QOHLt0tyfeJGKz+ezS1Ftkn4FFoN0VMw/C7HV/qliXWQ8APfO06dEZeQgEE+C1wB
	 XwG3Y1K87Db1701zGVxDWwaEY1DP3Y0MpD9NlcYc+DrBt8Dw+7SkemjECmnbxVRIXB
	 goYLcu/X0N9eg==
Date: Mon, 6 Jun 2022 16:43:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Masahiro Yamada <masahiroy@kernel.org>, Juergen Gross <jgross@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    moderated for non-subscribers <xen-devel@lists.xenproject.org>, 
    Stephen Rothwell <sfr@canb.auug.org.au>, 
    Julien Grall <julien.grall@arm.com>, 
    Shannon Zhao <shannon.zhao@linaro.org>, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] xen: unexport __init-annotated
 xen_xlate_map_ballooned_pages()
In-Reply-To: <20c9cd23-429f-896c-b59b-c518ff2562e2@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206061643250.277622@ubuntu-linux-20-04-desktop>
References: <20220606045920.4161881-1-masahiroy@kernel.org> <20c9cd23-429f-896c-b59b-c518ff2562e2@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 6 Jun 2022, Oleksandr wrote:
> On 06.06.22 07:59, Masahiro Yamada wrote:
> 
> Hello
> 
> > EXPORT_SYMBOL and __init is a bad combination because the .init.text
> > section is freed up after the initialization. Hence, modules cannot
> > use symbols annotated __init. The access to a freed symbol may end up
> > with kernel panic.
> > 
> > modpost used to detect it, but it has been broken for a decade.
> > 
> > Recently, I fixed modpost so it started to warn it again, then this
> > showed up in linux-next builds.
> > 
> > There are two ways to fix it:
> > 
> >    - Remove __init
> >    - Remove EXPORT_SYMBOL
> > 
> > I chose the latter for this case because none of the in-tree call-sites
> > (arch/arm/xen/enlighten.c, arch/x86/xen/grant-table.c) is compiled as
> > modular.
> 
> Good description.
> 
> 
> > 
> > Fixes: 243848fc018c ("xen/grant-table: Move xlated_setup_gnttab_pages to
> > common place")
> > Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
> > Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
> 
> I think the patch is correct.
> 
> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> > 
> >   drivers/xen/xlate_mmu.c | 1 -
> >   1 file changed, 1 deletion(-)
> > 
> > diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
> > index 34742c6e189e..f17c4c03db30 100644
> > --- a/drivers/xen/xlate_mmu.c
> > +++ b/drivers/xen/xlate_mmu.c
> > @@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t
> > **gfns, void **virt,
> >     	return 0;
> >   }
> > -EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
> >     struct remap_pfn {
> >   	struct mm_struct *mm;
> 
> -- 
> Regards,
> 
> Oleksandr Tyshchenko
> 


From xen-devel-bounces@lists.xenproject.org Mon Jun 06 23:51:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jun 2022 23:51:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342733.567802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyMVT-0000PO-4W; Mon, 06 Jun 2022 23:51:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342733.567802; Mon, 06 Jun 2022 23:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyMVT-0000PH-0x; Mon, 06 Jun 2022 23:51:35 +0000
Received: by outflank-mailman (input) for mailman id 342733;
 Mon, 06 Jun 2022 23:51:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HB7Z=WN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyMVR-0000PB-K5
 for xen-devel@lists.xenproject.org; Mon, 06 Jun 2022 23:51:33 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 96a81075-e5f3-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 01:51:32 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 417DD60DB5;
 Mon,  6 Jun 2022 23:51:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5FF1FC385A9;
 Mon,  6 Jun 2022 23:51:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96a81075-e5f3-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654559489;
	bh=9RgMCcCNdditTAFm/8CFCB2qfs95j5mT6jhToGpSwEU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pP+r4euzwEJYLNybS7H69mN/G96q6AEOmPWS29vC7GyUquu/XI0fqDJ2Ox25aCZAs
	 E2UsgQ3ZGshuXI5k66HHIWsHqWnOjesket0ygtYLdRcHp4tUd1GWH0PlmwQFyTrBmk
	 1wjlstt0Cq6cg5jHfdUE15DMGd/nBlK0jl8jv2agoiQDtJWNnx9410hyeN1cjX4kBr
	 gFcS3lLcmQuep378iV4Y49Zr49I2349tI19/csjHEMXHaCs0wJfgpE66vVk2acNq3X
	 f+LWFA7wdNM9Beqv2ENiXG5xz93jULPWDa1P4kNlv7TlOFGWR7Y/zPJc6BIXODqe1w
	 0bDALzlhhPVCg==
Date: Mon, 6 Jun 2022 16:51:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/4] arm: add ISAR2, MMFR0 and MMFR1 fields in
 cpufeature
In-Reply-To: <FE1F683F-FD96-4D55-8863-B9DB373CE790@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206061650330.277622@ubuntu-linux-20-04-desktop>
References: <cover.1653993431.git.bertrand.marquis@arm.com> <4a0aef106ac7b6c16048ff3554eda1d8b3eab61a.1653993431.git.bertrand.marquis@arm.com> <alpine.DEB.2.22.394.2206021738430.2783803@ubuntu-linux-20-04-desktop>
 <FE1F683F-FD96-4D55-8863-B9DB373CE790@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 6 Jun 2022, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 3 Jun 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Tue, 31 May 2022, Bertrand Marquis wrote:
> >> Complete AA64ISAR2 and AA64MMFR[0-1] with more fields.
> >> While there add a comment for MMFR bitfields as for other registers in
> >> the cpuinfo structure definition.
> >> 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> ---
> >> Changes in v2:
> >> - patch introduced to isolate changes in cpufeature.h
> >> - complete MMFR0 and ISAR2 to sync with sysregs.h status
> >> ---
> >> xen/arch/arm/include/asm/cpufeature.h | 28 ++++++++++++++++++++++-----
> >> 1 file changed, 23 insertions(+), 5 deletions(-)
> >> 
> >> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> >> index 9649a7afee..57eb6773d3 100644
> >> --- a/xen/arch/arm/include/asm/cpufeature.h
> >> +++ b/xen/arch/arm/include/asm/cpufeature.h
> >> @@ -234,6 +234,7 @@ struct cpuinfo_arm {
> >> union {
> >> register_t bits[3];
> >> struct {
> >> + /* MMFR0 */
> >> unsigned long pa_range:4;
> >> unsigned long asid_bits:4;
> >> unsigned long bigend:4;
> >> @@ -242,18 +243,31 @@ struct cpuinfo_arm {
> >> unsigned long tgranule_16K:4;
> >> unsigned long tgranule_64K:4;
> >> unsigned long tgranule_4K:4;
> >> - unsigned long __res0:32;
> >> -
> >> + unsigned long tgranule_16k_2:4;
> >> + unsigned long tgranule_64k_2:4;
> >> + unsigned long tgranule_4k:4;
> > 
> > Should be tgranule_4k_2:4
> 
> Right I will fix that.
> 
> > 
> > 
> >> + unsigned long exs:4;
> >> + unsigned long __res0:8;
> >> + unsigned long fgt:4;
> >> + unsigned long ecv:4;
> >> +
> >> + /* MMFR1 */
> >> unsigned long hafdbs:4;
> >> unsigned long vmid_bits:4;
> >> unsigned long vh:4;
> >> unsigned long hpds:4;
> >> unsigned long lo:4;
> >> unsigned long pan:4;
> >> - unsigned long __res1:8;
> >> - unsigned long __res2:28;
> >> + unsigned long specsei:4;
> >> + unsigned long xnx:4;
> >> + unsigned long twed:4;
> >> + unsigned long ets:4;
> >> + unsigned long __res1:4;
> > 
> > hcx?
> > 
> > 
> >> + unsigned long afp:4;
> >> + unsigned long __res2:12;
> > 
> > ntlbpa
> > tidcp1
> > cmow
> > 
> >> unsigned long ecbhb:4;
> > 
> > Strangely enough I am looking at DDI0487H and ecbhb is not there
> > (D13.2.65). Am I looking at the wrong location?
> 
> Right now I declared here only the values which have a corresponding
> declaration in sysregs.h
> If I add more fields here we will not be in sync with it anymore.
> 
> And on ecbhb it will be in the next revision of the manual yes.
> 
> 
> > 
> > 
> >> + /* MMFR2 */
> >> unsigned long __res3:64;
> >> };
> >> } mm64;
> >> @@ -297,7 +311,11 @@ struct cpuinfo_arm {
> >> unsigned long __res2:8;
> >> 
> >> /* ISAR2 */
> >> - unsigned long __res3:28;
> >> + unsigned long wfxt:4;
> >> + unsigned long rpres:4;
> >> + unsigned long gpa3:4;
> >> + unsigned long apa3:4;
> >> + unsigned long __res3:12;
> > 
> > mops
> > bc
> > pac_frac
> > 
> > 
> >> unsigned long clearbhb:4;
> > 
> > And again this is not described at D13.2.63. Probably the bhb stuff
> > didn't make it into the ARM ARM yet.
> 
> As said before, are you ok with only adding stuff declared in sysregs
> to make it simpler to sync with Linux ?

Yes, that makes sense. In that case just fix tgranule_4k_2 and you can
add my

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 00:30:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 00:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342743.567814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyN6j-0005KQ-Hh; Tue, 07 Jun 2022 00:30:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342743.567814; Tue, 07 Jun 2022 00:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyN6j-0005Jy-BP; Tue, 07 Jun 2022 00:30:05 +0000
Received: by outflank-mailman (input) for mailman id 342743;
 Tue, 07 Jun 2022 00:30:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SxjK=WO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyN6h-0004sN-4w
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 00:30:03 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f813a3b8-e5f8-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 02:30:02 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 89B8BB81C54;
 Tue,  7 Jun 2022 00:30:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC327C34119;
 Tue,  7 Jun 2022 00:29:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f813a3b8-e5f8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654561800;
	bh=cWxrmJATm07wUGXJMvWhUjgKxdBBxFXIq1sX3C3q1pk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XG0Spwioyc+myXEVdQEsiIYSawyyHFXG3reRq4/EzZBvrdVmQTLURs+R9hlf5XNj0
	 Jyzfp8irwBQIH5dT1sqYSgyaHXBU9fVPar73Zy4dNqQe7a27yk1/2Dc8q/ZO83PjiU
	 SkZv0zrdp/BfaVXB6Aqe8zS6CuO52sEZOQpcySV0uC6l3SNlSgSDnsKXPF27TVBvp2
	 b5xm1bx7z+OCQqEtba1V5QHS076SDFqA4fogbeWtLuA1aLandUEDxA946EeKdAQHEE
	 ZnJeG+eFKKQnzPBNiWk7L/+Ot3zKtGZe7TzbsgsFb2dciZ+qyrrlULPYcfJwYM3xUC
	 xKBpxKnG4PgUw==
Date: Mon, 6 Jun 2022 17:29:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    jbeulich@suse.com, roger.pau@citrix.com, julien@xen.org, 
    Bertrand.Marquis@arm.com, George.Dunlap@citrix.com
Subject: Re: [PATCH v2 0/2] introduce docs/misra/rules.rst
In-Reply-To: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2206061728420.277622@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2205311816170.1905099@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 31 May 2022, Stefano Stabellini wrote:
> Hi all,
> 
> This patch series is a follow-up from the MISRA C meeting last Thursday,
> when we went through the list of MISRA C rules on the spreadsheet and
> agreed to accept into the Xen coding style the first ones, starting from
> Dir 2.1 up until Rule 5.1 (except for Rule 2.2) in descending popularity
> order.
> 
> This is the full list of accepted rules so far:
> 
> Dir 2.1
> Dir 4.7
> Dir 4.10
> Dir 4.14
> Rule 1.3
> Rule 3.2
> Rule 5.1
> Rule 6.2
> Rule 8.1
> Rule 8.4
> Rule 8.5
> Rule 8.6
> Rule 8.8
> Rule 8.12
> 
> This short patch series adds them as a new document under docs/misra as a
> list in rst format. The file can be used as input to cppcheck using a
> small python script from Bertrand (who will send it to xen-devel
> separately.)

The two patches are acked and I plan to commit them in a day or two.


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 02:18:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 02:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342760.567827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyOmv-00063G-Bo; Tue, 07 Jun 2022 02:17:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342760.567827; Tue, 07 Jun 2022 02:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyOmv-000639-8M; Tue, 07 Jun 2022 02:17:45 +0000
Received: by outflank-mailman (input) for mailman id 342760;
 Tue, 07 Jun 2022 02:17:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SxjK=WO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyOmt-00061K-Ub
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 02:17:44 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 02586f3a-e608-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 04:17:41 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 3753FB81826;
 Tue,  7 Jun 2022 02:17:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 63F31C385A9;
 Tue,  7 Jun 2022 02:17:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02586f3a-e608-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654568259;
	bh=G/OCd3vUJTb9gQkavsuraROzp2lGVfO55Ao2rS3VQyI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nt/BJoBK32hpV9/IJ00L7dUSCo7085TNhohCvxlbhKm6+KIShSjGTJqukogik0n0b
	 zqd/x/sZBHRn78JPmXY7tm4LQJ5adY9BCnR4C6+IlZ0MjxAuVsNKWGxotM4gD82kn9
	 uD6WWT0csXljZp0pvMHyf3fj0qrG0umqfMy6fstSGW6WVcU2Z14Qb0FtscV0HtMPjd
	 0nxo5r5aDgp2tDh0LtJyy+Md/noSR3HFJMJlmPOEoYt5d9Hg3R+LBH18CF9qD+iqdP
	 MXoNcRV6wJO/N5vEmkcLrqwWm5iqMgqU3S5a54cK/g3oGI1KGLOFw3Owumqob7I3+h
	 FJR/0Xj/iQLqg==
Date: Mon, 6 Jun 2022 19:17:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    George Dunlap <George.Dunlap@citrix.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Roger Pau Monne <roger.pau@citrix.com>, 
    Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com, 
    julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org, 
    roberto.bagnara@bugseng.com
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
In-Reply-To: <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com> <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop> <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop> <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 2 Jun 2022, Jan Beulich wrote:
> On 01.06.2022 22:27, Stefano Stabellini wrote:
> > Reducing CC and adding fusa-sig
> > 
> > Actually Jun 9 at 8AM California / 4PM UK doesn't work for some of you,
> > so it is either:
> > 1) Jun 9 at 7AM California / 3PM UK
> > 2) Jun 14 at 8AM California / 4PM UK
> > 
> > My preference is the first option because it is sooner but let me know
> > if it doesn't work and we'll try the second option.
> 
> I don't think I was aware that another call is needed. Was that said
> somewhere earlier that I missed? In any event, either option
> looks to be okay here.

I sent out the meeting invite for Jun 9 at 7AM California / 3PM UK.

Just a reminder to fill in the remaining "yellow" rules of the
spreadsheet in advance of the meeting.


There are a couple of interesting questions on the remaining rules,
which I tried to shed some light on.



# Rule 9.1 "The value of an object with automatic storage duration shall not be read before it has been set"

The question is whether -Wuninitialized already covers this case or not.
I think it does.

Eclair is reporting a few issues where variables are "possibly
uninitialized". We should ask Roberto about them; I don't think they are
actual errors, more like extra warnings?


# Rule 9.3 "Arrays shall not be partially initialized"

Andrew was pointing out that "we use implicit 0's all over the place".

That is true, but as far as I can tell that is permitted. There are a
couple of exceptions to the rule:

- { 0 } is allowed

- sparse initialization using designated initializers is allowed
  e.g. char ar[3] = { [2] = 'c' }

So I think we are fine there.


# Rule 9.4 "An element of an object shall not be initialized more than once"

Andrew was noting that "There's one pattern using range syntax to set a
default where this rule would be violated, but the code is far cleaner
to read."

Range initializers are a GCC extension, not part of the C standard, and
are not considered by the MISRA rule. The MISRA rule seems focused on
preventing accidental double initializations (by copy/paste errors, for
instance), which is fair.

So I think we should be OK here, but we need to clarify our usage of
range initializers. What we do or don't do with range initializers
should be a separate discussion, I think.


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 03:59:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 03:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342771.567844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQNA-0007sd-Ff; Tue, 07 Jun 2022 03:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342771.567844; Tue, 07 Jun 2022 03:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQNA-0007sW-Cm; Tue, 07 Jun 2022 03:59:16 +0000
Received: by outflank-mailman (input) for mailman id 342771;
 Tue, 07 Jun 2022 03:59:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQN9-0007sM-P1; Tue, 07 Jun 2022 03:59:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQN9-0002ia-Fz; Tue, 07 Jun 2022 03:59:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQN9-0003ud-4w; Tue, 07 Jun 2022 03:59:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQN9-0006E0-4V; Tue, 07 Jun 2022 03:59:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WKPQ7uZqBF8WdLx72HVH8xeiummJZ1lNpidkRgsIGxg=; b=hngeveCdMM8QRDNEQqDaXZb5oj
	rsRiQ22750PvDjtjQhdZq/md3Qz7TIWJrb1LqNhg6IWS219jWR4tUEp2M31chFkwBMqZ+0gi8XV/t
	osWN734I04jT7XtrRWFtUX+YixDAULHZvWD/GLiu+YvJ/dSDuf5J8lDTfLPvigb/tn7c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170855-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170855: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4f89e4b3e80329b9a445500009c658d2ebce8475
X-Osstest-Versions-That:
    ovmf=0b36dea3f8b71ddddc4fdf3a5d2edf395115568b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 03:59:15 +0000

flight 170855 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170855/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4f89e4b3e80329b9a445500009c658d2ebce8475
baseline version:
 ovmf                 0b36dea3f8b71ddddc4fdf3a5d2edf395115568b

Last test of basis   170839  2022-06-06 00:11:39 Z    1 days
Testing same since   170855  2022-06-07 01:57:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kun Qin <kuqin12@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0b36dea3f8..4f89e4b3e8  4f89e4b3e80329b9a445500009c658d2ebce8475 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 04:22:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 04:22:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342783.567855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQjQ-00030V-DK; Tue, 07 Jun 2022 04:22:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342783.567855; Tue, 07 Jun 2022 04:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQjQ-00030O-AG; Tue, 07 Jun 2022 04:22:16 +0000
Received: by outflank-mailman (input) for mailman id 342783;
 Tue, 07 Jun 2022 04:22:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQjP-00030E-3U; Tue, 07 Jun 2022 04:22:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQjO-0003CU-W0; Tue, 07 Jun 2022 04:22:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQjO-00056E-9D; Tue, 07 Jun 2022 04:22:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyQjO-0003LX-8k; Tue, 07 Jun 2022 04:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+ZYJncJs6gqrcM6Rs9GMhiMHuFrUQNN5+0IuxblSBpo=; b=UeZF5kKWBlZypScG4jJ00Wu083
	esKlz9QDyjXhfbTfCE5beKxZN2ugj57kaJxi16SIuEYZsdD7KkXvwmOKiDcdEc49t4ab0jsG63TjU
	USAyCg+wOpGOhMi9JAbOFhjvdBnTGWkXkrQQkmGXu/qHNSXqSkbujlPfXTKjJ28tBgP8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170849-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170849: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=57c9363c452af64fe058aa946cc923eae7f7ad33
X-Osstest-Versions-That:
    qemuu=ca127b3fc247517ec7d4dad291f2c0f90602ce5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 04:22:14 +0000

flight 170849 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170849/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 170824
 test-armhf-armhf-xl-rtds     14 guest-start                  fail  like 170829
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170836
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170836
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170836
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170836
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170836
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170836
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170836
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170836
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                57c9363c452af64fe058aa946cc923eae7f7ad33
baseline version:
 qemuu                ca127b3fc247517ec7d4dad291f2c0f90602ce5b

Last test of basis   170836  2022-06-05 04:31:21 Z    1 days
Testing same since   170849  2022-06-06 17:39:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Jose R. Ziviani <jziviani@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Richard Henderson <richard.henderson@linaro.org>
  Stephen Michael Jothen <sjothen@gmail.com>
  Yang Zhong <yang.zhong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ca127b3fc2..57c9363c45  57c9363c452af64fe058aa946cc923eae7f7ad33 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 04:37:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 04:37:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342780.567866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQyR-0004l9-SZ; Tue, 07 Jun 2022 04:37:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342780.567866; Tue, 07 Jun 2022 04:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyQyR-0004l2-PY; Tue, 07 Jun 2022 04:37:47 +0000
Received: by outflank-mailman (input) for mailman id 342780;
 Tue, 07 Jun 2022 04:00:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+ro=WO=proton.me=alex.nlnnfn@srs-se1.protection.inumbo.net>)
 id 1nyQNu-0008C0-Oa
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 04:00:03 +0000
Received: from mail-40138.protonmail.ch (mail-40138.protonmail.ch
 [185.70.40.138]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d221d3f-e616-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 06:00:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d221d3f-e616-11ec-bd2c-47488cf2e6aa
Date: Tue, 07 Jun 2022 03:59:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=proton.me;
	s=protonmail3; t=1654574399; x=1654833599;
	bh=jVMnhGK6XgxrgzWoGQWmzfBU3spCn/kuI/yEL6d/JDM=;
	h=Date:To:From:Reply-To:Subject:Message-ID:Feedback-ID:From:To:Cc:
	 Date:Subject:Reply-To:Feedback-ID:Message-ID;
	b=sy3iRNy705BiKoSPh2yewNGZx/C+9O39x6REgazaT924L6iqPFwvFjr/JmMgoyjxs
	 eAcW6CxkcWPrs1mEZ6LuIO7Vs8IK5lm8KEArf7qGFmlwagcGRabQwrmpQ1TmGaHWPG
	 2BmmsCqedN/qUvCZ3wAwpTqeDsS9VNv/XO5mhk0vCS4T6rngkUk03VCZ0FKeyt6ynT
	 XlxE9z+D2oo8sFYbeXSy2roTyboXr00VTZxH2o75Ph+nR5dsKVstzoYD0DfNWYN0Qa
	 K3vCN6nd6GQIwdSNfrFm+podgxeEjCCBmaXGqVvcxX4ZWaOU7r6tK0c7+5nFV0qI+h
	 bOBLxKXdz51EA==
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: alex.nlnnfn@proton.me
Reply-To: alex.nlnnfn@proton.me
Subject: memory overcommitment with SR-IOV device assignment
Message-ID: <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
Feedback-ID: 49537399:user:proton
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello list,

I have looked through the Xen documentation and the Xen wiki, but I could not
find a definitive answer on whether Xen supports memory over-commitment when
VMs use SR-IOV device assignment (passthrough). By memory over-commitment I
mean giving VMs more RAM in total than is available on the host.

I know that ESX doesn't support it, and that QEMU/KVM pins all guest RAM when
a device is directly assigned to a VM (via the VFIO_IOMMU_MAP_DMA ioctl). I
have two questions:

1. Does Xen support memory over-commitment when all VMs use direct device
assignment, e.g. SR-IOV?
2. Could you please point me to the code that does the pinning?

Thanks,
Alex





From xen-devel-bounces@lists.xenproject.org Tue Jun 07 05:00:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 05:00:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342802.567880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRKF-0008NT-OA; Tue, 07 Jun 2022 05:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342802.567880; Tue, 07 Jun 2022 05:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRKF-0008NM-Ku; Tue, 07 Jun 2022 05:00:19 +0000
Received: by outflank-mailman (input) for mailman id 342802;
 Tue, 07 Jun 2022 05:00:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zm5l=WO=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nyRKE-0008NG-5L
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 05:00:18 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8c4504b-e61e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 07:00:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 12D941F9E9;
 Tue,  7 Jun 2022 05:00:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AA22113638;
 Tue,  7 Jun 2022 05:00:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OvrgJ1/bnmIeTgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 07 Jun 2022 05:00:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8c4504b-e61e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654578016; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fnU4RMnsa5sD+TcpbeKCXtrhbgvP7w6SkTbj+L8ZBIU=;
	b=mmeBPJQAZvsXCliDTHgQxfq6spCQvkCsEvO5h7nXHiurnCNWwlinlobqjt35S9QWs3zxxM
	g8yzUnYsmC3Z1m+iJBnz+oMZRg6ZfKfOotI8OJunxSVb5Or4mSfdr+7aYyvytQGpS+j5KX
	7dWjCRezmCo1/QIdOX4UvlRRmtirLNM=
Message-ID: <88bd4769-ed35-196c-293e-0782999a4ade@suse.com>
Date: Tue, 7 Jun 2022 07:00:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Content-Language: en-US
To: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220324140139.5899-2-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------v06V60687PTKUn6gYcwqjPDG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------v06V60687PTKUn6gYcwqjPDG
Content-Type: multipart/mixed; boundary="------------yWo6yeRMmpKmhLxa6sN6TkpR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
Message-ID: <88bd4769-ed35-196c-293e-0782999a4ade@suse.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
In-Reply-To: <20220324140139.5899-2-jgross@suse.com>

--------------yWo6yeRMmpKmhLxa6sN6TkpR
Content-Type: multipart/mixed; boundary="------------pzL4Og88yAmC0YHxwDadbDJV"

--------------pzL4Og88yAmC0YHxwDadbDJV
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDMuMjIgMTU6MDEsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+IFRoZSBlbnRyeSBw
b2ludCB1c2VkIGZvciB0aGUgdmNwdV9vcCBoeXBlcmNhbGwgb24gQXJtIGlzIGRpZmZlcmVu
dA0KPiBmcm9tIHRoZSBvbmUgb24geDg2IHRvZGF5LCBhcyBzb21lIG9mIHRoZSBjb21tb24g
c3ViLW9wcyBhcmUgbm90DQo+IHN1cHBvcnRlZCBvbiBBcm0uIFRoZSBBcm0gc3BlY2lmaWMg
aGFuZGxlciBmaWx0ZXJzIG91dCB0aGUgbm90DQo+IHN1cHBvcnRlZCBzdWItb3BzIGFuZCB0
aGVuIGNhbGxzIHRoZSBjb21tb24gaGFuZGxlci4gVGhpcyBsZWFkcyB0byB0aGUNCj4gd2Vp
cmQgY2FsbCBoaWVyYXJjaHk6DQo+IA0KPiAgICBkb19hcm1fdmNwdV9vcCgpDQo+ICAgICAg
ZG9fdmNwdV9vcCgpDQo+ICAgICAgICBhcmNoX2RvX3ZjcHVfb3AoKQ0KPiANCj4gQ2xlYW4g
dGhpcyB1cCBieSByZW5hbWluZyBkb192Y3B1X29wKCkgdG8gY29tbW9uX3ZjcHVfb3AoKSBh
bmQNCj4gYXJjaF9kb192Y3B1X29wKCkgaW4gZWFjaCBhcmNoaXRlY3R1cmUgdG8gZG9fdmNw
dV9vcCgpLiBUaGlzIHdheSBvbmUNCj4gb2YgYWJvdmUgY2FsbHMgY2FuIGJlIGF2b2lkZWQg
d2l0aG91dCByZXN0cmljdGluZyBhbnkgcG90ZW50aWFsDQo+IGZ1dHVyZSB1c2Ugb2YgY29t
bW9uIHN1Yi1vcHMgZm9yIEFybS4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jv
c3MgPGpncm9zc0BzdXNlLmNvbT4NCj4gUmV2aWV3ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdy
YWxsQGFtYXpvbi5jb20+DQoNClRoZXJlIGlzIHN0aWxsIGFuIEFjayBtaXNzaW5nIGZvciB4
ODYgc2lkZS4gSmFuIGFscmVhZHkgc2FpZCBoZSBpc24ndA0KaGFwcHkgdG8gZ2l2ZSBvbmUs
IGJ1dCB3b24ndCBOYWNrIGl0LiBSb2dlciwgQW5kcmV3LCBhbnkgY29tbWVudHMgZm9yDQp0
aGlzIHBhdGNoPyBJdCBpcyBibG9ja2luZyBmdXJ0aGVyIGNsZWFudXAgcGF0Y2hlcyB0byBn
byBpbiAocGF0Y2hlcw0KMiBhbmQgNCBvZiB0aGlzIHNlcmllcykuDQoNCg0KSnVlcmdlbg0K
DQo+IC0tLQ0KPiBWNDoNCj4gLSBkb24ndCByZW1vdmUgSFlQRVJDQUxMX0FSTSgpDQo+IFY0
LjE6DQo+IC0gYWRkIG1pc3NpbmcgY2ZfY2hlY2sgKEFuZHJldyBDb29wZXIpDQo+IFY1Og0K
PiAtIHVzZSB2IGluc3RlYWQgb2YgY3VycmVudCAoSnVsaWVuIEdyYWxsKQ0KPiAtLS0NCj4g
ICB4ZW4vYXJjaC9hcm0vZG9tYWluLmMgICAgICAgICAgICAgICAgfCAxNSArKysrKysrKy0t
LS0tLS0NCj4gICB4ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vaHlwZXJjYWxsLmggfCAgMiAt
LQ0KPiAgIHhlbi9hcmNoL2FybS90cmFwcy5jICAgICAgICAgICAgICAgICB8ICAyICstDQo+
ICAgeGVuL2FyY2gveDg2L2RvbWFpbi5jICAgICAgICAgICAgICAgIHwgMTIgKysrKysrKyst
LS0tDQo+ICAgeGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2h5cGVyY2FsbC5oIHwgIDIgKy0N
Cj4gICB4ZW4vYXJjaC94ODYveDg2XzY0L2RvbWFpbi5jICAgICAgICAgfCAxOCArKysrKysr
KysrKysrLS0tLS0NCj4gICB4ZW4vY29tbW9uL2NvbXBhdC9kb21haW4uYyAgICAgICAgICAg
fCAxNSArKysrKystLS0tLS0tLS0NCj4gICB4ZW4vY29tbW9uL2RvbWFpbi5jICAgICAgICAg
ICAgICAgICAgfCAxMiArKysrLS0tLS0tLS0NCj4gICB4ZW4vaW5jbHVkZS94ZW4vaHlwZXJj
YWxsLmggICAgICAgICAgfCAgMiArLQ0KPiAgIDkgZmlsZXMgY2hhbmdlZCwgNDIgaW5zZXJ0
aW9ucygrKSwgMzggZGVsZXRpb25zKC0pDQo+IA0KPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
YXJtL2RvbWFpbi5jIGIveGVuL2FyY2gvYXJtL2RvbWFpbi5jDQo+IGluZGV4IDgxMTBjMWRm
ODYuLjJmOGVhYWI3YjUgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL2FybS9kb21haW4uYw0K
PiArKysgYi94ZW4vYXJjaC9hcm0vZG9tYWluLmMNCj4gQEAgLTEwNzksMjMgKzEwNzksMjQg
QEAgdm9pZCBhcmNoX2R1bXBfZG9tYWluX2luZm8oc3RydWN0IGRvbWFpbiAqZCkNCj4gICB9
DQo+ICAgDQo+ICAgDQo+IC1sb25nIGRvX2FybV92Y3B1X29wKGludCBjbWQsIHVuc2lnbmVk
IGludCB2Y3B1aWQsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkgYXJnKQ0KPiArbG9u
ZyBkb192Y3B1X29wKGludCBjbWQsIHVuc2lnbmVkIGludCB2Y3B1aWQsIFhFTl9HVUVTVF9I
QU5ETEVfUEFSQU0odm9pZCkgYXJnKQ0KPiAgIHsNCj4gKyAgICBzdHJ1Y3QgZG9tYWluICpk
ID0gY3VycmVudC0+ZG9tYWluOw0KPiArICAgIHN0cnVjdCB2Y3B1ICp2Ow0KPiArDQo+ICsg
ICAgaWYgKCAodiA9IGRvbWFpbl92Y3B1KGQsIHZjcHVpZCkpID09IE5VTEwgKQ0KPiArICAg
ICAgICByZXR1cm4gLUVOT0VOVDsNCj4gKw0KPiAgICAgICBzd2l0Y2ggKCBjbWQgKQ0KPiAg
ICAgICB7DQo+ICAgICAgICAgICBjYXNlIFZDUFVPUF9yZWdpc3Rlcl92Y3B1X2luZm86DQo+
ICAgICAgICAgICBjYXNlIFZDUFVPUF9yZWdpc3Rlcl9ydW5zdGF0ZV9tZW1vcnlfYXJlYToN
Cj4gLSAgICAgICAgICAgIHJldHVybiBkb192Y3B1X29wKGNtZCwgdmNwdWlkLCBhcmcpOw0K
PiArICAgICAgICAgICAgcmV0dXJuIGNvbW1vbl92Y3B1X29wKGNtZCwgdiwgYXJnKTsNCj4g
ICAgICAgICAgIGRlZmF1bHQ6DQo+ICAgICAgICAgICAgICAgcmV0dXJuIC1FSU5WQUw7DQo+
ICAgICAgIH0NCj4gICB9DQo+ICAgDQo+IC1sb25nIGFyY2hfZG9fdmNwdV9vcChpbnQgY21k
LCBzdHJ1Y3QgdmNwdSAqdiwgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBhcmcpDQo+
IC17DQo+IC0gICAgcmV0dXJuIC1FTk9TWVM7DQo+IC19DQo+IC0NCj4gICB2b2lkIGFyY2hf
ZHVtcF92Y3B1X2luZm8oc3RydWN0IHZjcHUgKnYpDQo+ICAgew0KPiAgICAgICBnaWNfZHVt
cF9pbmZvKHYpOw0KPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2h5
cGVyY2FsbC5oIGIveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2h5cGVyY2FsbC5oDQo+IGlu
ZGV4IDM5ZDJlNzg4OWQuLmZhYzRkNjBmMTcgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL2Fy
bS9pbmNsdWRlL2FzbS9oeXBlcmNhbGwuaA0KPiArKysgYi94ZW4vYXJjaC9hcm0vaW5jbHVk
ZS9hc20vaHlwZXJjYWxsLmgNCj4gQEAgLTQsOCArNCw2IEBADQo+ICAgI2luY2x1ZGUgPHB1
YmxpYy9kb21jdGwuaD4gLyogZm9yIGFyY2hfZG9fZG9tY3RsICovDQo+ICAgaW50IGRvX2Fy
bV9waHlzZGV2X29wKGludCBjbWQsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkgYXJn
KTsNCj4gICANCj4gLWxvbmcgZG9fYXJtX3ZjcHVfb3AoaW50IGNtZCwgdW5zaWduZWQgaW50
IHZjcHVpZCwgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBhcmcpOw0KPiAtDQo+ICAg
bG9uZyBzdWJhcmNoX2RvX2RvbWN0bChzdHJ1Y3QgeGVuX2RvbWN0bCAqZG9tY3RsLCBzdHJ1
Y3QgZG9tYWluICpkLA0KPiAgICAgICAgICAgICAgICAgICAgICAgICAgWEVOX0dVRVNUX0hB
TkRMRV9QQVJBTSh4ZW5fZG9tY3RsX3QpIHVfZG9tY3RsKTsNCj4gICANCj4gZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL2FybS90cmFwcy5jIGIveGVuL2FyY2gvYXJtL3RyYXBzLmMNCj4gaW5k
ZXggNDNmMzA3NDdjZi4uZTkwNmJiNGE4OSAxMDA2NDQNCj4gLS0tIGEveGVuL2FyY2gvYXJt
L3RyYXBzLmMNCj4gKysrIGIveGVuL2FyY2gvYXJtL3RyYXBzLmMNCj4gQEAgLTEzODAsNyAr
MTM4MCw3IEBAIHN0YXRpYyBhcm1faHlwZXJjYWxsX3QgYXJtX2h5cGVyY2FsbF90YWJsZVtd
ID0gew0KPiAgICNlbmRpZg0KPiAgICAgICBIWVBFUkNBTEwobXVsdGljYWxsLCAyKSwNCj4g
ICAgICAgSFlQRVJDQUxMKHBsYXRmb3JtX29wLCAxKSwNCj4gLSAgICBIWVBFUkNBTExfQVJN
KHZjcHVfb3AsIDMpLA0KPiArICAgIEhZUEVSQ0FMTCh2Y3B1X29wLCAzKSwNCj4gICAgICAg
SFlQRVJDQUxMKHZtX2Fzc2lzdCwgMiksDQo+ICAgI2lmZGVmIENPTkZJR19BUkdPDQo+ICAg
ICAgIEhZUEVSQ0FMTChhcmdvX29wLCA1KSwNCj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4
Ni9kb21haW4uYyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYw0KPiBpbmRleCBhNTA0OGVkNjU0
Li5kNTY2ZmM4MmI0IDEwMDY0NA0KPiAtLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMNCj4g
KysrIGIveGVuL2FyY2gveDg2L2RvbWFpbi5jDQo+IEBAIC0xNDg5LDExICsxNDg5LDE1IEBA
IGludCBhcmNoX3ZjcHVfcmVzZXQoc3RydWN0IHZjcHUgKnYpDQo+ICAgICAgIHJldHVybiAw
Ow0KPiAgIH0NCj4gICANCj4gLWxvbmcNCj4gLWFyY2hfZG9fdmNwdV9vcCgNCj4gLSAgICBp
bnQgY21kLCBzdHJ1Y3QgdmNwdSAqdiwgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBh
cmcpDQo+ICtsb25nIGNmX2NoZWNrIGRvX3ZjcHVfb3AoaW50IGNtZCwgdW5zaWduZWQgaW50
IHZjcHVpZCwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICBYRU5fR1VFU1RfSEFORExF
X1BBUkFNKHZvaWQpIGFyZykNCj4gICB7DQo+ICAgICAgIGxvbmcgcmMgPSAwOw0KPiArICAg
IHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47DQo+ICsgICAgc3RydWN0IHZj
cHUgKnY7DQo+ICsNCj4gKyAgICBpZiAoICh2ID0gZG9tYWluX3ZjcHUoZCwgdmNwdWlkKSkg
PT0gTlVMTCApDQo+ICsgICAgICAgIHJldHVybiAtRU5PRU5UOw0KPiAgIA0KPiAgICAgICBz
d2l0Y2ggKCBjbWQgKQ0KPiAgICAgICB7DQo+IEBAIC0xNTQ1LDcgKzE1NDksNyBAQCBhcmNo
X2RvX3ZjcHVfb3AoDQo+ICAgICAgIH0NCj4gICANCj4gICAgICAgZGVmYXVsdDoNCj4gLSAg
ICAgICAgcmMgPSAtRU5PU1lTOw0KPiArICAgICAgICByYyA9IGNvbW1vbl92Y3B1X29wKGNt
ZCwgdiwgYXJnKTsNCj4gICAgICAgICAgIGJyZWFrOw0KPiAgICAgICB9DQo+ICAgDQo+IGRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vaHlwZXJjYWxsLmggYi94ZW4v
YXJjaC94ODYvaW5jbHVkZS9hc20vaHlwZXJjYWxsLmgNCj4gaW5kZXggNjFiZjg5NzE0Ny4u
ZDZkYWE3ZTRjYiAxMDA2NDQNCj4gLS0tIGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2h5
cGVyY2FsbC5oDQo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9oeXBlcmNhbGwu
aA0KPiBAQCAtMTQ1LDcgKzE0NSw3IEBAIGNvbXBhdF9waHlzZGV2X29wKA0KPiAgICAgICBY
RU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQpIGFyZyk7DQo+ICAgDQo+ICAgZXh0ZXJuIGlu
dA0KPiAtYXJjaF9jb21wYXRfdmNwdV9vcCgNCj4gK2NvbXBhdF9jb21tb25fdmNwdV9vcCgN
Cj4gICAgICAgaW50IGNtZCwgc3RydWN0IHZjcHUgKnYsIFhFTl9HVUVTVF9IQU5ETEVfUEFS
QU0odm9pZCkgYXJnKTsNCj4gICANCj4gICBleHRlcm4gaW50IGNmX2NoZWNrIGNvbXBhdF9t
bXVleHRfb3AoDQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L2RvbWFpbi5j
IGIveGVuL2FyY2gveDg2L3g4Nl82NC9kb21haW4uYw0KPiBpbmRleCBjNDZkY2NjMjVhLi45
YzU1OWFhM2VhIDEwMDY0NA0KPiAtLS0gYS94ZW4vYXJjaC94ODYveDg2XzY0L2RvbWFpbi5j
DQo+ICsrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvZG9tYWluLmMNCj4gQEAgLTEyLDExICsx
MiwxNSBAQA0KPiAgIENIRUNLX3ZjcHVfZ2V0X3BoeXNpZDsNCj4gICAjdW5kZWYgeGVuX3Zj
cHVfZ2V0X3BoeXNpZA0KPiAgIA0KPiAtaW50DQo+IC1hcmNoX2NvbXBhdF92Y3B1X29wKA0K
PiAtICAgIGludCBjbWQsIHN0cnVjdCB2Y3B1ICp2LCBYRU5fR1VFU1RfSEFORExFX1BBUkFN
KHZvaWQpIGFyZykNCj4gK2ludCBjZl9jaGVjaw0KPiArY29tcGF0X3ZjcHVfb3AoaW50IGNt
ZCwgdW5zaWduZWQgaW50IHZjcHVpZCwgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBh
cmcpDQo+ICAgew0KPiAtICAgIGludCByYyA9IC1FTk9TWVM7DQo+ICsgICAgaW50IHJjOw0K
PiArICAgIHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47DQo+ICsgICAgc3Ry
dWN0IHZjcHUgKnY7DQo+ICsNCj4gKyAgICBpZiAoICh2ID0gZG9tYWluX3ZjcHUoZCwgdmNw
dWlkKSkgPT0gTlVMTCApDQo+ICsgICAgICAgIHJldHVybiAtRU5PRU5UOw0KPiAgIA0KPiAg
ICAgICBzd2l0Y2ggKCBjbWQgKQ0KPiAgICAgICB7DQo+IEBAIC01NSw3ICs1OSwxMSBAQCBh
cmNoX2NvbXBhdF92Y3B1X29wKA0KPiAgICAgICB9DQo+ICAgDQo+ICAgICAgIGNhc2UgVkNQ
VU9QX2dldF9waHlzaWQ6DQo+IC0gICAgICAgIHJjID0gYXJjaF9kb192Y3B1X29wKGNtZCwg
diwgYXJnKTsNCj4gKyAgICAgICAgcmMgPSBkb192Y3B1X29wKGNtZCwgdmNwdWlkLCBhcmcp
Ow0KPiArICAgICAgICBicmVhazsNCj4gKw0KPiArICAgIGRlZmF1bHQ6DQo+ICsgICAgICAg
IHJjID0gY29tcGF0X2NvbW1vbl92Y3B1X29wKGNtZCwgdiwgYXJnKTsNCj4gICAgICAgICAg
IGJyZWFrOw0KPiAgICAgICB9DQo+ICAgDQo+IGRpZmYgLS1naXQgYS94ZW4vY29tbW9uL2Nv
bXBhdC9kb21haW4uYyBiL3hlbi9jb21tb24vY29tcGF0L2RvbWFpbi5jDQo+IGluZGV4IGFm
YWUyN2VlYmEuLjExMTk1MzQ2NzkgMTAwNjQ0DQo+IC0tLSBhL3hlbi9jb21tb24vY29tcGF0
L2RvbWFpbi5jDQo+ICsrKyBiL3hlbi9jb21tb24vY29tcGF0L2RvbWFpbi5jDQo+IEBAIC0z
OCwxNSArMzgsMTIgQEAgQ0hFQ0tfdmNwdV9odm1fY29udGV4dDsNCj4gICANCj4gICAjZW5k
aWYNCj4gICANCj4gLWludCBjZl9jaGVjayBjb21wYXRfdmNwdV9vcCgNCj4gLSAgICBpbnQg
Y21kLCB1bnNpZ25lZCBpbnQgdmNwdWlkLCBYRU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQp
IGFyZykNCj4gK2ludCBjb21wYXRfY29tbW9uX3ZjcHVfb3AoaW50IGNtZCwgc3RydWN0IHZj
cHUgKnYsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgIFhFTl9HVUVTVF9IQU5ETEVf
UEFSQU0odm9pZCkgYXJnKQ0KPiAgIHsNCj4gLSAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3Vy
cmVudC0+ZG9tYWluOw0KPiAtICAgIHN0cnVjdCB2Y3B1ICp2Ow0KPiAgICAgICBpbnQgcmMg
PSAwOw0KPiAtDQo+IC0gICAgaWYgKCAodiA9IGRvbWFpbl92Y3B1KGQsIHZjcHVpZCkpID09
IE5VTEwgKQ0KPiAtICAgICAgICByZXR1cm4gLUVOT0VOVDsNCj4gKyAgICBzdHJ1Y3QgZG9t
YWluICpkID0gY3VycmVudC0+ZG9tYWluOw0KPiArICAgIHVuc2lnbmVkIGludCB2Y3B1aWQg
PSB2LT52Y3B1X2lkOw0KPiAgIA0KPiAgICAgICBzd2l0Y2ggKCBjbWQgKQ0KPiAgICAgICB7
DQo+IEBAIC0xMDMsNyArMTAwLDcgQEAgaW50IGNmX2NoZWNrIGNvbXBhdF92Y3B1X29wKA0K
PiAgICAgICBjYXNlIFZDUFVPUF9zdG9wX3NpbmdsZXNob3RfdGltZXI6DQo+ICAgICAgIGNh
c2UgVkNQVU9QX3JlZ2lzdGVyX3ZjcHVfaW5mbzoNCj4gICAgICAgY2FzZSBWQ1BVT1Bfc2Vu
ZF9ubWk6DQo+IC0gICAgICAgIHJjID0gZG9fdmNwdV9vcChjbWQsIHZjcHVpZCwgYXJnKTsN
Cj4gKyAgICAgICAgcmMgPSBjb21tb25fdmNwdV9vcChjbWQsIHYsIGFyZyk7DQo+ICAgICAg
ICAgICBicmVhazsNCj4gICANCj4gICAgICAgY2FzZSBWQ1BVT1BfZ2V0X3J1bnN0YXRlX2lu
Zm86DQo+IEBAIC0xMzQsNyArMTMxLDcgQEAgaW50IGNmX2NoZWNrIGNvbXBhdF92Y3B1X29w
KA0KPiAgICAgICB9DQo+ICAgDQo+ICAgICAgIGRlZmF1bHQ6DQo+IC0gICAgICAgIHJjID0g
YXJjaF9jb21wYXRfdmNwdV9vcChjbWQsIHYsIGFyZyk7DQo+ICsgICAgICAgIHJjID0gLUVO
T1NZUzsNCj4gICAgICAgICAgIGJyZWFrOw0KPiAgICAgICB9DQo+ICAgDQo+IGRpZmYgLS1n
aXQgYS94ZW4vY29tbW9uL2RvbWFpbi5jIGIveGVuL2NvbW1vbi9kb21haW4uYw0KPiBpbmRl
eCAzNTEwMjlmOGIyLi43MDc0N2MwMmU2IDEwMDY0NA0KPiAtLS0gYS94ZW4vY29tbW9uL2Rv
bWFpbi5jDQo+ICsrKyBiL3hlbi9jb21tb24vZG9tYWluLmMNCj4gQEAgLTE1NzAsMTUgKzE1
NzAsMTEgQEAgaW50IGRlZmF1bHRfaW5pdGlhbGlzZV92Y3B1KHN0cnVjdCB2Y3B1ICp2LCBY
RU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQpIGFyZykNCj4gICAgICAgcmV0dXJuIHJjOw0K
PiAgIH0NCj4gICANCj4gLWxvbmcgY2ZfY2hlY2sgZG9fdmNwdV9vcCgNCj4gLSAgICBpbnQg
Y21kLCB1bnNpZ25lZCBpbnQgdmNwdWlkLCBYRU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQp
IGFyZykNCj4gK2xvbmcgY29tbW9uX3ZjcHVfb3AoaW50IGNtZCwgc3RydWN0IHZjcHUgKnYs
IFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkgYXJnKQ0KPiAgIHsNCj4gLSAgICBzdHJ1
Y3QgZG9tYWluICpkID0gY3VycmVudC0+ZG9tYWluOw0KPiAtICAgIHN0cnVjdCB2Y3B1ICp2
Ow0KPiAgICAgICBsb25nIHJjID0gMDsNCj4gLQ0KPiAtICAgIGlmICggKHYgPSBkb21haW5f
dmNwdShkLCB2Y3B1aWQpKSA9PSBOVUxMICkNCj4gLSAgICAgICAgcmV0dXJuIC1FTk9FTlQ7
DQo+ICsgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbjsNCj4gKyAgICB1bnNpZ25l
ZCBpbnQgdmNwdWlkID0gdi0+dmNwdV9pZDsNCj4gICANCj4gICAgICAgc3dpdGNoICggY21k
ICkNCj4gICAgICAgew0KPiBAQCAtMTc1MCw3ICsxNzQ2LDcgQEAgbG9uZyBjZl9jaGVjayBk
b192Y3B1X29wKA0KPiAgICAgICB9DQo+ICAgDQo+ICAgICAgIGRlZmF1bHQ6DQo+IC0gICAg
ICAgIHJjID0gYXJjaF9kb192Y3B1X29wKGNtZCwgdiwgYXJnKTsNCj4gKyAgICAgICAgcmMg
PSAtRU5PU1lTOw0KPiAgICAgICAgICAgYnJlYWs7DQo+ICAgICAgIH0NCj4gICANCj4gZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3hlbi9oeXBlcmNhbGwuaCBiL3hlbi9pbmNsdWRlL3hl
bi9oeXBlcmNhbGwuaA0KPiBpbmRleCBhMWI2NTc1OTc2Li44MWFhZTdhNjYyIDEwMDY0NA0K
PiAtLS0gYS94ZW4vaW5jbHVkZS94ZW4vaHlwZXJjYWxsLmgNCj4gKysrIGIveGVuL2luY2x1
ZGUveGVuL2h5cGVyY2FsbC5oDQo+IEBAIC0xMTAsNyArMTEwLDcgQEAgZG9fdmNwdV9vcCgN
Cj4gICANCj4gICBzdHJ1Y3QgdmNwdTsNCj4gICBleHRlcm4gbG9uZw0KPiAtYXJjaF9kb192
Y3B1X29wKGludCBjbWQsDQo+ICtjb21tb25fdmNwdV9vcChpbnQgY21kLA0KPiAgICAgICBz
dHJ1Y3QgdmNwdSAqdiwNCj4gICAgICAgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBh
cmcpOw0KPiAgIA0KDQo=
--------------pzL4Og88yAmC0YHxwDadbDJV
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------pzL4Og88yAmC0YHxwDadbDJV--

--------------yWo6yeRMmpKmhLxa6sN6TkpR--

--------------v06V60687PTKUn6gYcwqjPDG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKe218FAwAAAAAACgkQsN6d1ii/Ey/h
9Qf+JGjHZjLN0fpYnvzdhV1gjdmrZ/e3IQQLNhSdJI6eyqYSRPFkvlXU2+IctlwZvv8IjLZ1ERK+
fLRvVVKcXQfava13U5nr+8Y4Qqn1A5w84IXYdF+kLXDkkCaXcJm7cFjJqMSeUaN3ePHJe21auGau
Hlep832OZ0dzTIW+Q7aARtCcq1azBXlz3fNIS/7QJ4+J6JcCiTagXmshGdTs3k2YJ4VvQDvCYCBh
r/1eR0QxPt7L/kmIjEXYaUdd3d0lzGauhfAowrzWNWMMeRab4dE0It2XbMhyhqXqmu5KY3GNnUYO
nEekBNNW0VXnTcIQSkfqPCV2xa9TlCYBhuime2Y8tQ==
=0w0D
-----END PGP SIGNATURE-----

--------------v06V60687PTKUn6gYcwqjPDG--


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 05:05:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 05:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342811.567894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRPf-0000gD-Bc; Tue, 07 Jun 2022 05:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342811.567894; Tue, 07 Jun 2022 05:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRPf-0000g6-8s; Tue, 07 Jun 2022 05:05:55 +0000
Received: by outflank-mailman (input) for mailman id 342811;
 Tue, 07 Jun 2022 05:05:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zm5l=WO=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nyRPd-0000g0-Gv
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 05:05:53 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80d09774-e61f-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 07:05:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DC3181F9EA;
 Tue,  7 Jun 2022 05:05:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 64F2613638;
 Tue,  7 Jun 2022 05:05:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id gUljFq/cnmKDTwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 07 Jun 2022 05:05:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80d09774-e61f-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654578351; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mFiIxVhCDeG+JJKnKKjNN3if/baI4WREDSCbcqrTuY4=;
	b=E26UaeEM5TUfXZ+XatuursvD/131YC48uRXqTqQDmg1Lz1xqKirfn0hHypra/IP1RcQorW
	7wh3naeKnzjeS5R3ZVNEq6cBKT6CPsbdDmBZJ+NQf+lrCsV9Xn0i/jc2s1DDOoC5j0JR8y
	HWU1d3UphxHbGqw062mLvzzMNT4Etds=
Message-ID: <264daffc-cc65-4bd6-195e-c6ccd1f46971@suse.com>
Date: Tue, 7 Jun 2022 07:05:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v6 0/9] xen: drop hypercall function tables
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Christopher Clark <christopher.w.clark@gmail.com>,
 Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
 <dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <06edd55a-86f2-52e3-e275-ee928a956fdf@suse.com>
 <8baf689f-2a20-cf07-6878-9f9459063a25@suse.com>
 <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
In-Reply-To: <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------W6F17ZzGCEiDslkZ0lY2SHDB"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------W6F17ZzGCEiDslkZ0lY2SHDB
Content-Type: multipart/mixed; boundary="------------Fyx7mSCFkglUUsbBlWS0LiVt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Christopher Clark <christopher.w.clark@gmail.com>,
 Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
 <dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <264daffc-cc65-4bd6-195e-c6ccd1f46971@suse.com>
Subject: Re: [PATCH v6 0/9] xen: drop hypercall function tables
References: <20220324140139.5899-1-jgross@suse.com>
 <06edd55a-86f2-52e3-e275-ee928a956fdf@suse.com>
 <8baf689f-2a20-cf07-6878-9f9459063a25@suse.com>
 <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
In-Reply-To: <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>

--------------Fyx7mSCFkglUUsbBlWS0LiVt
Content-Type: multipart/mixed; boundary="------------cF0VOitKla0W90m0tckMZTjS"

--------------cF0VOitKla0W90m0tckMZTjS
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTguMDUuMjIgMTE6NDUsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+IE9uIDA0LjA1LjIy
IDA5OjUzLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gT24gMTkuMDQuMjIgMTA6MDEsIEp1
ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+Pj4gT24gMjQuMDMuMjIgMTU6MDEsIEp1ZXJnZW4gR3Jv
c3Mgd3JvdGU6DQo+Pj4+IEluIG9yZGVyIHRvIGF2b2lkIGluZGlyZWN0IGZ1bmN0aW9uIGNh
bGxzIG9uIHRoZSBoeXBlcmNhbGwgcGF0aCBhcw0KPj4+PiBtdWNoIGFzIHBvc3NpYmxlIHRo
aXMgc2VyaWVzIGlzIHJlbW92aW5nIHRoZSBoeXBlcmNhbGwgZnVuY3Rpb24gdGFibGVzDQo+
Pj4+IGFuZCBpcyByZXBsYWNpbmcgdGhlIGh5cGVyY2FsbCBoYW5kbGVyIGNhbGxzIHZpYSB0
aGUgZnVuY3Rpb24gYXJyYXkNCj4+Pj4gYnkgYXV0b21hdGljYWxseSBnZW5lcmF0ZWQgY2Fs
bCBtYWNyb3MuDQo+Pj4+DQo+Pj4+IEFub3RoZXIgYnktcHJvZHVjdCBvZiBnZW5lcmF0aW5n
IHRoZSBjYWxsIG1hY3JvcyBpcyB0aGUgYXV0b21hdGljDQo+Pj4+IGdlbmVyYXRpbmcgb2Yg
dGhlIGh5cGVyY2FsbCBoYW5kbGVyIHByb3RvdHlwZXMgZnJvbSB0aGUgc2FtZSBkYXRhIGJh
c2UNCj4+Pj4gd2hpY2ggaXMgdXNlZCB0byBnZW5lcmF0ZSB0aGUgbWFjcm9zLg0KPj4+Pg0K
Pj4+PiBUaGlzIGhhcyB0aGUgYWRkaXRpb25hbCBhZHZhbnRhZ2Ugb2YgdXNpbmcgdHlwZSBz
YWZlIGNhbGxzIG9mIHRoZQ0KPj4+PiBoYW5kbGVycyBhbmQgdG8gZW5zdXJlIHJlbGF0ZWQg
aGFuZGxlciAoZS5nLiBQViBhbmQgSFZNIG9uZXMpIHNoYXJlDQo+Pj4+IHRoZSBzYW1lIHBy
b3RvdHlwZXMuDQo+Pj4+DQo+Pj4+IEEgdmVyeSBicmllZiBwZXJmb3JtYW5jZSB0ZXN0IChw
YXJhbGxlbCBidWlsZCBvZiB0aGUgWGVuIGh5cGVydmlzb3INCj4+Pj4gaW4gYSA2IHZjcHUg
Z3Vlc3QpIHNob3dlZCBhIHZlcnkgc2xpbSBpbXByb3ZlbWVudCAobGVzcyB0aGFuIDElKSBv
Zg0KPj4+PiB0aGUgcGVyZm9ybWFuY2Ugd2l0aCB0aGUgcGF0Y2hlcyBhcHBsaWVkLiBUaGUg
dGVzdCB3YXMgcGVyZm9ybWVkIHVzaW5nDQo+Pj4+IGEgUFYgYW5kIGEgUFZIIGd1ZXN0Lg0K
Pj4+DQo+Pj4gQSBnZW50bGUgcGluZyByZWdhcmRpbmcgdGhpcyBzZXJpZXMuDQo+Pj4NCj4+
PiBJIHRoaW5rIHBhdGNoIDEgc3RpbGwgbGFja3MgYW4gQWNrIGZyb20geDg2IHNpZGUuIE90
aGVyIHRoYW4gdGhhdA0KPj4+IHBhdGNoZXMgMSwgMiBhbmQgNCBzaG91bGQgYmUgZmluZSB0
byBnbyBpbiwgYXMgdGhleSBhcmUgY2xlYW51cHMgd2hpY2gNCj4+PiBhcmUgZmluZSBvbiB0
aGVpciBvd24gSU1ITy4NCj4+Pg0KPj4+IEFuZHJldywgeW91IHdhbnRlZCB0byBnZXQgc29t
ZSBwZXJmb3JtYW5jZSBudW1iZXJzIG9mIHRoZSBzZXJpZXMgdXNpbmcNCj4+PiB0aGUgQ2l0
cml4IHRlc3QgZW52aXJvbm1lbnQuIEFueSBuZXdzIG9uIHRoZSBwcm9ncmVzcyBoZXJlPw0K
Pj4NCj4+IEFuZCBhbm90aGVyIHBpbmcuDQo+Pg0KPj4gQW5kcmV3LCBjb3VsZCB5b3UgcGxl
YXNlIGdpdmUgc29tZSBmZWVkYmFjayByZWdhcmRpbmcgcGVyZm9ybWFuY2UNCj4+IHRlc3Rp
bmcgcHJvZ3Jlc3M/DQo+IA0KPiBUaGlzIGlzIGJlY29taW5nIHJpZGljdWxvdXMuIEFuZHJl
dywgSSBrbm93IHlvdSBhcmUgYnVzeSwgYnV0IG5vdCByZWFjdGluZw0KPiBhdCBhbGwgdG8g
ZXhwbGljaXQgcXVlc3Rpb25zIGlzIGtpbmQgb2YgYW5ub3lpbmcuDQoNCiAgX19fXyBfX18g
XyAgIF8gIF9fX18NCnwgIF8gXF8gX3wgXCB8IHwvIF9fX3wNCnwgfF8pIHwgfHwgIFx8IHwg
fCAgXw0KfCAgX18vfCB8fCB8XCAgfCB8X3wgfA0KfF98ICB8X19ffF98IFxffFxfX19ffA0K
DQoNCkp1ZXJnZW4NCg==
--------------cF0VOitKla0W90m0tckMZTjS
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------cF0VOitKla0W90m0tckMZTjS--

--------------Fyx7mSCFkglUUsbBlWS0LiVt--

--------------W6F17ZzGCEiDslkZ0lY2SHDB
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKe3K4FAwAAAAAACgkQsN6d1ii/Ey89
RQf+MwmYYU6mnHqaioRtGiaaarsGdn9eb++zWYfMGwXxas4qy30UYDPOae8J9Jua57HSMIDtBrsr
cwVF15SaHfEm0bl2rMjpwOLYdeZMO68cdjZoiTjwGwnTVDZyGbkjzOT/vqALnAW5fmlhGHLzekMV
4CzaOC0RsSbHwvrXr/xRzJGnsqlMEUnlcnOcsL3A8wO9Gi9RY8I12siX2r9QDY/lv8jxa8zrd1EC
xNXNWiCqMu82hiid+4o6AdCAbNHgJKtuB8DCGGl7Y6xgvngBb2U3XfG2+o54FYagA+rQ6oXfe9Z8
hXb/t+gFQi6cK6BD+J7BtHWzj9A86T50GbmmeIGDIA==
=e0d4
-----END PGP SIGNATURE-----

--------------W6F17ZzGCEiDslkZ0lY2SHDB--


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 05:28:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 05:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342820.567904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRli-0003If-8b; Tue, 07 Jun 2022 05:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342820.567904; Tue, 07 Jun 2022 05:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRli-0003IY-5x; Tue, 07 Jun 2022 05:28:42 +0000
Received: by outflank-mailman (input) for mailman id 342820;
 Tue, 07 Jun 2022 05:28:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zm5l=WO=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nyRlh-0003IS-A6
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 05:28:41 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0006aac-e622-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 07:28:40 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 80EDA1F9F4;
 Tue,  7 Jun 2022 05:28:39 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2D8EA13A5F;
 Tue,  7 Jun 2022 05:28:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id nZW5CQfinmJmVQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 07 Jun 2022 05:28:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0006aac-e622-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654579719; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type;
	bh=rg81TiujhMLmUuZodoAmrrDcPk8e0ct7YAcUKxC6kzc=;
	b=Z7rLus0YlSrBnGMwMQ0XumBby6FqnYi8nm9bhLlLXM4x3CvCuFFI2cj/u1I5dUQBoD23aG
	DVFBJjvnF84ECn8Oa9vDqqm59uUO4TQeaZhLvYedPjFwHaYfusHCadcSBi0/bHB+OiC8HM
	B+akrWVTi/SbbwMHrckYBZg1BV9Rj8A=
Message-ID: <6507870c-1c32-ebf6-f85f-4bf2ede41367@suse.com>
Date: Tue, 7 Jun 2022 07:28:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
From: Juergen Gross <jgross@suse.com>
Subject: [PATCH Resend] xen/netback: do some code cleanup
To: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,
 Paolo Abeni <pabeni@redhat.com>
Content-Language: en-US
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------TQgRmYuvpfQLUkEYt03SVZJm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------TQgRmYuvpfQLUkEYt03SVZJm
Content-Type: multipart/mixed; boundary="------------WD09QCmLozusZfCQ3ECpvsB6";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wei.liu@kernel.org>,
 Paul Durrant <paul@xen.org>, "David S. Miller" <davem@davemloft.net>,
 Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,
 Paolo Abeni <pabeni@redhat.com>
Message-ID: <6507870c-1c32-ebf6-f85f-4bf2ede41367@suse.com>
Subject: [PATCH Resend] xen/netback: do some code cleanup

--------------WD09QCmLozusZfCQ3ECpvsB6
Content-Type: multipart/mixed; boundary="------------u0JUY0dZL7ULrQJw0tfsC1fF"

--------------u0JUY0dZL7ULrQJw0tfsC1fF
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

UmVtb3ZlIHNvbWUgdW51c2VkIG1hY3JvcyBhbmQgZnVuY3Rpb25zLCBtYWtlIGxvY2FsIGZ1
bmN0aW9ucyBzdGF0aWMuDQoNClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9z
c0BzdXNlLmNvbT4NCkFja2VkLWJ5OiBXZWkgTGl1IDx3ZWkubGl1QGtlcm5lbC5vcmc+DQot
LS0NCiAgZHJpdmVycy9uZXQveGVuLW5ldGJhY2svY29tbW9uLmggICAgfCAxMiAtLS0tLS0t
LS0tLS0NCiAgZHJpdmVycy9uZXQveGVuLW5ldGJhY2svaW50ZXJmYWNlLmMgfCAxNiArLS0t
LS0tLS0tLS0tLS0tDQogIGRyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYyAgIHwg
IDQgKysrLQ0KICBkcml2ZXJzL25ldC94ZW4tbmV0YmFjay9yeC5jICAgICAgICB8ICAyICst
DQogIDQgZmlsZXMgY2hhbmdlZCwgNSBpbnNlcnRpb25zKCspLCAyOSBkZWxldGlvbnMoLSkN
Cg0KZGlmZiAtLWdpdCBhL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL2NvbW1vbi5oIGIvZHJp
dmVycy9uZXQveGVuLW5ldGJhY2svY29tbW9uLmgNCmluZGV4IGQ5ZGVhNDgyOWM4Ni4uODE3
NGQ3YjI5NjZjIDEwMDY0NA0KLS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svY29tbW9u
LmgNCisrKyBiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL2NvbW1vbi5oDQpAQCAtNDgsNyAr
NDgsNiBAQA0KICAjaW5jbHVkZSA8bGludXgvZGVidWdmcy5oPg0KICAgdHlwZWRlZiB1bnNp
Z25lZCBpbnQgcGVuZGluZ19yaW5nX2lkeF90Ow0KLSNkZWZpbmUgSU5WQUxJRF9QRU5ESU5H
X1JJTkdfSURYICh+MFUpDQogICBzdHJ1Y3QgcGVuZGluZ190eF9pbmZvIHsNCiAgCXN0cnVj
dCB4ZW5fbmV0aWZfdHhfcmVxdWVzdCByZXE7IC8qIHR4IHJlcXVlc3QgKi8NCkBAIC04Miw4
ICs4MSw2IEBAIHN0cnVjdCB4ZW52aWZfcnhfbWV0YSB7DQogIC8qIERpc2NyaW1pbmF0ZSBm
cm9tIGFueSB2YWxpZCBwZW5kaW5nX2lkeCB2YWx1ZS4gKi8NCiAgI2RlZmluZSBJTlZBTElE
X1BFTkRJTkdfSURYIDB4RkZGRg0KICAtI2RlZmluZSBNQVhfQlVGRkVSX09GRlNFVCBYRU5f
UEFHRV9TSVpFDQotDQogICNkZWZpbmUgTUFYX1BFTkRJTkdfUkVRUyBYRU5fTkVUSUZfVFhf
UklOR19TSVpFDQogICAvKiBUaGUgbWF4aW11bSBudW1iZXIgb2YgZnJhZ3MgaXMgZGVyaXZl
ZCBmcm9tIHRoZSBzaXplIG9mIGEgZ3JhbnQgKHNhbWUNCkBAIC0zNjcsMTEgKzM2NCw2IEBA
IHZvaWQgeGVudmlmX2ZyZWUoc3RydWN0IHhlbnZpZiAqdmlmKTsNCiAgaW50IHhlbnZpZl94
ZW5idXNfaW5pdCh2b2lkKTsNCiAgdm9pZCB4ZW52aWZfeGVuYnVzX2Zpbmkodm9pZCk7DQog
IC1pbnQgeGVudmlmX3NjaGVkdWxhYmxlKHN0cnVjdCB4ZW52aWYgKnZpZik7DQotDQotaW50
IHhlbnZpZl9xdWV1ZV9zdG9wcGVkKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKTsNCi12
b2lkIHhlbnZpZl93YWtlX3F1ZXVlKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKTsNCi0N
CiAgLyogKFVuKU1hcCBjb21tdW5pY2F0aW9uIHJpbmdzLiAqLw0KICB2b2lkIHhlbnZpZl91
bm1hcF9mcm9udGVuZF9kYXRhX3JpbmdzKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKTsN
CiAgaW50IHhlbnZpZl9tYXBfZnJvbnRlbmRfZGF0YV9yaW5ncyhzdHJ1Y3QgeGVudmlmX3F1
ZXVlICpxdWV1ZSwNCkBAIC0zOTQsNyArMzg2LDYgQEAgaW50IHhlbnZpZl9kZWFsbG9jX2t0
aHJlYWQodm9pZCAqZGF0YSk7DQogIGlycXJldHVybl90IHhlbnZpZl9jdHJsX2lycV9mbihp
bnQgaXJxLCB2b2lkICpkYXRhKTsNCiAgIGJvb2wgeGVudmlmX2hhdmVfcnhfd29yayhzdHJ1
Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSwgYm9vbCB0ZXN0X2t0aHJlYWQpOw0KLXZvaWQgeGVu
dmlmX3J4X2FjdGlvbihzdHJ1Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSk7DQogIHZvaWQgeGVu
dmlmX3J4X3F1ZXVlX3RhaWwoc3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUsIHN0cnVjdCBz
a19idWZmICpza2IpOw0KICAgdm9pZCB4ZW52aWZfY2Fycmllcl9vbihzdHJ1Y3QgeGVudmlm
ICp2aWYpOw0KQEAgLTQwMyw5ICszOTQsNiBAQCB2b2lkIHhlbnZpZl9jYXJyaWVyX29uKHN0
cnVjdCB4ZW52aWYgKnZpZik7DQogIHZvaWQgeGVudmlmX3plcm9jb3B5X2NhbGxiYWNrKHN0
cnVjdCBza19idWZmICpza2IsIHN0cnVjdCB1YnVmX2luZm8gKnVidWYsDQogIAkJCSAgICAg
IGJvb2wgemVyb2NvcHlfc3VjY2Vzcyk7DQogIC0vKiBVbm1hcCBhIHBlbmRpbmcgcGFnZSBh
bmQgcmVsZWFzZSBpdCBiYWNrIHRvIHRoZSBndWVzdCAqLw0KLXZvaWQgeGVudmlmX2lkeF91
bm1hcChzdHJ1Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSwgdTE2IHBlbmRpbmdfaWR4KTsNCi0N
CiAgc3RhdGljIGlubGluZSBwZW5kaW5nX3JpbmdfaWR4X3QgbnJfcGVuZGluZ19yZXFzKHN0
cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQ0KICB7DQogIAlyZXR1cm4gTUFYX1BFTkRJTkdf
UkVRUyAtDQpkaWZmIC0tZ2l0IGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svaW50ZXJmYWNl
LmMgDQpiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL2ludGVyZmFjZS5jDQppbmRleCA4ZTAz
NTM3NGEzNzAuLmZiMzJhZTgyZDliMCAxMDA2NDQNCi0tLSBhL2RyaXZlcnMvbmV0L3hlbi1u
ZXRiYWNrL2ludGVyZmFjZS5jDQorKysgYi9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9pbnRl
cmZhY2UuYw0KQEAgLTY5LDcgKzY5LDcgQEAgdm9pZCB4ZW52aWZfc2tiX3plcm9jb3B5X2Nv
bXBsZXRlKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQ0KICAJd2FrZV91cCgmcXVldWUt
PmRlYWxsb2Nfd3EpOw0KICB9DQogIC1pbnQgeGVudmlmX3NjaGVkdWxhYmxlKHN0cnVjdCB4
ZW52aWYgKnZpZikNCitzdGF0aWMgaW50IHhlbnZpZl9zY2hlZHVsYWJsZShzdHJ1Y3QgeGVu
dmlmICp2aWYpDQogIHsNCiAgCXJldHVybiBuZXRpZl9ydW5uaW5nKHZpZi0+ZGV2KSAmJg0K
ICAJCXRlc3RfYml0KFZJRl9TVEFUVVNfQ09OTkVDVEVELCAmdmlmLT5zdGF0dXMpICYmDQpA
QCAtMTc3LDIwICsxNzcsNiBAQCBpcnFyZXR1cm5fdCB4ZW52aWZfaW50ZXJydXB0KGludCBp
cnEsIHZvaWQgKmRldl9pZCkNCiAgCXJldHVybiBJUlFfSEFORExFRDsNCiAgfQ0KICAtaW50
IHhlbnZpZl9xdWV1ZV9zdG9wcGVkKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQ0KLXsN
Ci0Jc3RydWN0IG5ldF9kZXZpY2UgKmRldiA9IHF1ZXVlLT52aWYtPmRldjsNCi0JdW5zaWdu
ZWQgaW50IGlkID0gcXVldWUtPmlkOw0KLQlyZXR1cm4gbmV0aWZfdHhfcXVldWVfc3RvcHBl
ZChuZXRkZXZfZ2V0X3R4X3F1ZXVlKGRldiwgaWQpKTsNCi19DQotDQotdm9pZCB4ZW52aWZf
d2FrZV9xdWV1ZShzdHJ1Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSkNCi17DQotCXN0cnVjdCBu
ZXRfZGV2aWNlICpkZXYgPSBxdWV1ZS0+dmlmLT5kZXY7DQotCXVuc2lnbmVkIGludCBpZCA9
IHF1ZXVlLT5pZDsNCi0JbmV0aWZfdHhfd2FrZV9xdWV1ZShuZXRkZXZfZ2V0X3R4X3F1ZXVl
KGRldiwgaWQpKTsNCi19DQotDQogIHN0YXRpYyB1MTYgeGVudmlmX3NlbGVjdF9xdWV1ZShz
dHJ1Y3QgbmV0X2RldmljZSAqZGV2LCBzdHJ1Y3Qgc2tfYnVmZiAqc2tiLA0KICAJCQkgICAg
ICAgc3RydWN0IG5ldF9kZXZpY2UgKnNiX2RldikNCiAgew0KZGlmZiAtLWdpdCBhL2RyaXZl
cnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYyBiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNr
L25ldGJhY2suYw0KaW5kZXggZDkzODE0YzE0YTIzLi5mYzYxYTQ0MTg3MzcgMTAwNjQ0DQot
LS0gYS9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9uZXRiYWNrLmMNCisrKyBiL2RyaXZlcnMv
bmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYw0KQEAgLTExMiw2ICsxMTIsOCBAQCBzdGF0aWMg
dm9pZCBtYWtlX3R4X3Jlc3BvbnNlKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlLA0KICAJ
CQkgICAgIHM4ICAgICAgIHN0KTsNCiAgc3RhdGljIHZvaWQgcHVzaF90eF9yZXNwb25zZXMo
c3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUpOw0KICArc3RhdGljIHZvaWQgeGVudmlmX2lk
eF91bm1hcChzdHJ1Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSwgdTE2IHBlbmRpbmdfaWR4KTsN
CisNCiAgc3RhdGljIGlubGluZSBpbnQgdHhfd29ya190b2RvKHN0cnVjdCB4ZW52aWZfcXVl
dWUgKnF1ZXVlKTsNCiAgIHN0YXRpYyBpbmxpbmUgdW5zaWduZWQgbG9uZyBpZHhfdG9fcGZu
KHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlLA0KQEAgLTE0MTgsNyArMTQyMCw3IEBAIHN0
YXRpYyB2b2lkIHB1c2hfdHhfcmVzcG9uc2VzKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVl
KQ0KICAJCW5vdGlmeV9yZW1vdGVfdmlhX2lycShxdWV1ZS0+dHhfaXJxKTsNCiAgfQ0KICAt
dm9pZCB4ZW52aWZfaWR4X3VubWFwKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlLCB1MTYg
cGVuZGluZ19pZHgpDQorc3RhdGljIHZvaWQgeGVudmlmX2lkeF91bm1hcChzdHJ1Y3QgeGVu
dmlmX3F1ZXVlICpxdWV1ZSwgdTE2IHBlbmRpbmdfaWR4KQ0KICB7DQogIAlpbnQgcmV0Ow0K
ICAJc3RydWN0IGdudHRhYl91bm1hcF9ncmFudF9yZWYgdHhfdW5tYXBfb3A7DQpkaWZmIC0t
Z2l0IGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svcnguYyBiL2RyaXZlcnMvbmV0L3hlbi1u
ZXRiYWNrL3J4LmMNCmluZGV4IGRiYWM0YzAzZDIxYS4uOGRmMmM3MzZmZDIzIDEwMDY0NA0K
LS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svcnguYw0KKysrIGIvZHJpdmVycy9uZXQv
eGVuLW5ldGJhY2svcnguYw0KQEAgLTQ4Niw3ICs0ODYsNyBAQCBzdGF0aWMgdm9pZCB4ZW52
aWZfcnhfc2tiKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQ0KICAgI2RlZmluZSBSWF9C
QVRDSF9TSVpFIDY0DQogIC12b2lkIHhlbnZpZl9yeF9hY3Rpb24oc3RydWN0IHhlbnZpZl9x
dWV1ZSAqcXVldWUpDQorc3RhdGljIHZvaWQgeGVudmlmX3J4X2FjdGlvbihzdHJ1Y3QgeGVu
dmlmX3F1ZXVlICpxdWV1ZSkNCiAgew0KICAJc3RydWN0IHNrX2J1ZmZfaGVhZCBjb21wbGV0
ZWRfc2ticzsNCiAgCXVuc2lnbmVkIGludCB3b3JrX2RvbmUgPSAwOw0KLS0gDQoyLjM1LjMN
Cg0K
--------------u0JUY0dZL7ULrQJw0tfsC1fF
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------u0JUY0dZL7ULrQJw0tfsC1fF--

--------------WD09QCmLozusZfCQ3ECpvsB6--

--------------TQgRmYuvpfQLUkEYt03SVZJm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKe4gYFAwAAAAAACgkQsN6d1ii/Ey+3
mQgAic6FDHDgalJuzvf5gui7i7fCdePLfBzZJH6JE8qIckVEUk0q3d0ws4RJPkXpW7uI+ZSQWvYQ
11o7nd1TlU0vfBl8zw2eokh2OrosVEl+jpJ9XmCNFo0wrTPYZRHVvEKybR4gNsFhax0xpd70AcdG
RBKdbe1GpDv5FGNa503zSIvx98QdojIFmhQfWiXLPwzim2PeoG/5u27vrnIZ3RbPQP9lVKIkaCa6
dbg0Wwq/rx+MTOhsujOqbsK67idQC2/e9Xenic6OdoI4/FrN9oSmLFyqXZy8yTK4d+N25tfkPFzJ
+ZIVPUehziEq6suIoxLgMdbIiAPdQsBdFatm2XscFQ==
=emxx
-----END PGP SIGNATURE-----

--------------TQgRmYuvpfQLUkEYt03SVZJm--


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 05:39:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 05:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342829.567916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRwR-0004qd-A7; Tue, 07 Jun 2022 05:39:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342829.567916; Tue, 07 Jun 2022 05:39:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyRwR-0004qW-6g; Tue, 07 Jun 2022 05:39:47 +0000
Received: by outflank-mailman (input) for mailman id 342829;
 Tue, 07 Jun 2022 05:39:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyRwP-0004qM-Rn; Tue, 07 Jun 2022 05:39:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyRwP-0004us-NQ; Tue, 07 Jun 2022 05:39:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyRwP-0006wE-C4; Tue, 07 Jun 2022 05:39:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyRwP-0000hR-Ba; Tue, 07 Jun 2022 05:39:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZJCjgbX2Vy4lrwJpGf5QmdRmNUKnFirtTYIt9EWyf5Y=; b=ZAXCGxCi+YL6RVl/DuL0XcH7O5
	4Pmd86ZMD23GX63bgBDJaBNL1kKOIjTN+/PUJEH1FkkNRZIslDmsuj8BBoC8Oxao7uL+xOSqmJlLT
	40fJLWEfRoHKjM9BTVAotd6tndOfygnYNOf1oFouKwFiFsLlv6WRscz/naqzFLQzPC1M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170846-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170846: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-examine-uefi:xen-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:build-amd64:xen-build:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 05:39:45 +0000

flight 170846 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170846/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-examine-uefi  6 xen-install              fail REGR. vs. 170736
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170736
 build-amd64                   6 xen-build      fail in 170843 REGR. vs. 170736

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2  14 guest-start      fail in 170843 pass in 170846
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 170843
 test-armhf-armhf-xl-arndale  18 guest-start/debian.repeat  fail pass in 170843
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat  fail pass in 170843

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-examine      1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)        blocked in 170843 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-examine-bios  1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 170843 n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)    blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-pair         1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)      blocked in 170843 n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)          blocked in 170843 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 170843 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)        blocked in 170843 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-examine       1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 170843 n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 170843 n/a
 test-amd64-i386-xl-vhd        1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)          blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-examine-bios  1 build-check(1)          blocked in 170843 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)    blocked in 170843 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)    blocked in 170843 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 170843 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)      blocked in 170843 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-pygrub       1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 170843 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)          blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 170843 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 170843 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)          blocked in 170843 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 170843 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 170843 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl           1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 170843 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 170843 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 170843 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)     blocked in 170843 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 170843 n/a
 build-amd64-libvirt           1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 170843 n/a
 test-amd64-i386-examine-uefi  1 build-check(1)           blocked in 170843 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 170843 n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)           blocked in 170843 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 170843 n/a
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 170843 like 170724
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 170843 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 170843 never pass
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170736
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170736
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170736
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170736
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 170736
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170736
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   12 days
Testing same since   170843  2022-06-06 06:44:17 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1081 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 06:21:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 06:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342839.567926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nySab-0001lW-ID; Tue, 07 Jun 2022 06:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342839.567926; Tue, 07 Jun 2022 06:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nySab-0001lP-FW; Tue, 07 Jun 2022 06:21:17 +0000
Received: by outflank-mailman (input) for mailman id 342839;
 Tue, 07 Jun 2022 06:21:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nySaa-0001lI-7N
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 06:21:16 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 083fd312-e62a-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 08:21:14 +0200 (CEST)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2051.outbound.protection.outlook.com [104.47.14.51]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-26-LBTpDmAoNtOpDGHiqVXJog-1; Tue, 07 Jun 2022 08:21:11 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB6PR04MB3063.eurprd04.prod.outlook.com (2603:10a6:6:5::28) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.19; Tue, 7 Jun 2022 06:21:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 06:21:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 083fd312-e62a-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654582874;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PBUU9tjmVFGmXSma0OP00HIuqU3SrVJmCUAbmMc5LX0=;
	b=KnpjeFU8e3gk1XM7hPiWW0nJGexTvm7UoP5SCrbq29tagAF+H+y6EnYRBTMdWjS2tZYdI/
	18at/tWlveTfGbKC2HpZ2Y20ikBNnBy6NPyu8tpdV9bMBy7r33TcECkgkcrvP3a6ObSsG1
	+ka3iHPhES4TjhPPFR59Bo6CLJMSz30=
X-MC-Unique: LBTpDmAoNtOpDGHiqVXJog-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GrHXvYJ/rFqVK0yvY+N/5UIVOiBCIGh88W292UUEO4/VimFRZ6AaDbIn4bcAb0oWjcnp5XmcPxt6+FDRq4wqFa+Yni8fPjiYR4m3Of9V2zuMnw8VziKhY6QOAY1m5LIxL5hg4kDFfcFcw5y0ZcT9AtgAx84V9Ms+LXWf9fI1nf96E0JX+9jOoFETlXX6Lghcx2gfU55TFT3UU00FC8ENmekH5xdE8dTw6l26nkt9lmv+tEQNTzByre2DVFxrIJD03sUPzsfO3gaWHmQ52ful2NhorBRGDMslHvacmuQ0lkvTueXTZmMSJAs2Gah0pUbvD9laGRtlsFW2pWyK8lqzOA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PBUU9tjmVFGmXSma0OP00HIuqU3SrVJmCUAbmMc5LX0=;
 b=ZYTBSMtPcwAQMWZ+7kWY2uvjSfxoqQ7JgNN3wEGpiaN7jFIEXJTOTE1pWpXusSsBHermF+Gj/g33k43tVenKcsjZ8xdYyD1vOoEqpJUBi72gSel0mpM/kauFcoJr2U4jItq+aP3CaiQM80eQQbP9+A3cEI7lLFTsttMQLPFccTHpzh14fvXgIgeI1pCGS6NcOsS2JPeTmLFvSCMVYWRHN7RUwxw6lN160zaT9C4bqdN+SNEog4Dr5mTFfzxNA3eUObQrkTpehhbfUH0do1qnzBJD8Lkbx3Jng5Ut0I2JQzurWHnLeSJc7DUdgiSZdXuLjcfxK1mKZiI9ttCTJiMWrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PBUU9tjmVFGmXSma0OP00HIuqU3SrVJmCUAbmMc5LX0=;
 b=PQMcH6LX74pC8kbTMeneo4l2XNEelQpQp2bu6f+33qd3x4SSu/x14NIPzytIuWNHxAgrmcc+IcRS9tjIcW7ekaQcFA4Di2HvwAlaOSbUHLsG3LKYX+n8gNMhdaxFQwVXieMlBBnm3e/rrJjOEj81yZmUhep9COcQ2uBLgYO6oKsIAHxdipYCoUtefT3BhsFKukMWqH4bSslFktIbD6z6YYX+wnRA53XdVXbljGoVcXiCQ+xOF5d2kVcQ+xQANVn22tylP0lov/ZB8plV9s+jkrTLbdfmJj26si75D6JvatcUtSQ2lkLCHxx2711L5He6stDBi7e+jHO88mLctVPhtw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cf484a23-9478-5b1e-6f15-c8e335dab332@suse.com>
Date: Tue, 7 Jun 2022 08:21:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>, Andrew.Cooper3@citrix.com
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, julien@xen.org,
 Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org,
 roberto.bagnara@bugseng.com
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR07CA0050.eurprd07.prod.outlook.com
 (2603:10a6:20b:459::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 67abe334-abdf-41c0-701c-08da484de993
X-MS-TrafficTypeDiagnostic: DB6PR04MB3063:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR04MB30635DB34BC68E5947D98E6AB3A59@DB6PR04MB3063.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BFhhCXnZo4nxSzcJwSXC871YlpSojqzXl5MoYyXWwunkSN/mB4GBrdpA9LV+41g7Hglkd6qYt4MBV/ZlRyfCkvHiL5u+pJlnN24qmSO8q01DMOC6+3oVQz+ZlI5775le0q2wg5SEM99L7YdZVbtYoxhh+/kJith/q9y/KydBmE10wdqWb6DG6UG3swD8+zoR1XemzGpJ2+ytl3AQ6iPxPofKKgSVGnRImGnGAG7zaudVgEc+H3q8+BShIXzN7CsbQ1TgaGMOlcsZZh0gZqKnimCsUkGfZdifp5ZYz/j/dyiC6nuesDkv3823KeQhEuWa0GiRZHMUlh5APAc7QW/llj9Z5tzidzTRZCkrT9Vggf33WtVTSyOvwjfOuWyBQ5RWg4UwWga2KCJCJJZ5aPvgpWQn17Jbgi8Ddq02a3YgcyQcwc6mz5hUdvVuAEJLFX9tDEdL8XLIdwhPFO5QO++uz4VwwuHEytg4Bgl1UQ4r55zbH66fIlp4hQCP4V6HqrW82jCTWRFOC3dtjDlR1WQNRAY2zAH9gLlRpSWp5eLDLpD93argqfSuYUx7uwe0cQ1xRwz6ZG531rZPTGnPOuYhogAc9mRfofSRmYSKq/uSD+V/6ZuP7DGdOLWfT9iFUq+TlFwQ6aqa4V8fVnCp/G0iAMdIHwqTiUVev0l9G9zrXF4ypd6eyVpteAe+G1nV2nnRpRAb0fnu8orlNZ1+uNl0b6WXT5IPRc1IMFbEzlZY4sG1OlWk2tpk56bsJkZmnhrM4bC2gz4UAvU3Q+sSjKLG+w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(2616005)(6506007)(53546011)(31696002)(6666004)(26005)(86362001)(2906002)(54906003)(38100700002)(36756003)(316002)(66946007)(66556008)(4326008)(66476007)(8676002)(4744005)(31686004)(6486002)(83380400001)(186003)(6512007)(508600001)(8936002)(5660300002)(7416002)(221023002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?alMxQVZ2ckdaVWs4MGpXWGlTajJOZ3dIa2hEeGhWUkNaOFdiNnRNN29XbzJv?=
 =?utf-8?B?Z0NzZWlHaHZJbmJQWGtkM09ON3p3WTZRNUx0NGJtd2tJam1UOHlyNUNOc0Iw?=
 =?utf-8?B?V0VMRjlzZFVvTDJta2hMSGxYQ0lUS0NIOHlhL2ZMZGwvVkU1eWMyZ2s4Q3h1?=
 =?utf-8?B?bkk1YnBxSjhvKzIwcnM0VGdBOU1qR1JJWlZxbjFpRWd4L1JOMElGVkxlUk1Y?=
 =?utf-8?B?dG5wM01tQWYyRUY1M1dHNlgrdkh0UnR0VjRHODN6Q0srdFl5UUdLQUpBSFhR?=
 =?utf-8?B?U0tLZEY4OWJ2Wlk4dFoyN0hINXhNUGxHbDFwczdINTQyTElEdTlLQkw5MFdH?=
 =?utf-8?B?RnpPL2Y5NElxV1dVcGsxUXJpOHZDb1VQbVpXbEphaUt6NjBJUUtuLyttTE0v?=
 =?utf-8?B?RHlXOTBDSzVDa0JVWTlldndaRFY4ckZJWjRnTjA1MG9NUUJXTU5GUisvc0ls?=
 =?utf-8?B?NTYySDNQNUM0WFgzUjFqZk9RNy9YWXoxTnRnT3RRNUdEZ25WN05qa05ocXRK?=
 =?utf-8?B?d3JlYXg3SnVIS3NINFNwdGNralNPTml5MTFsMlRZZ1lGRzR2b1k3VnkvK2xn?=
 =?utf-8?B?RHNPZ0d0aHMxaTdTbE0vU1gyeGJYc2lNS2tja09XRnFBTjRESHdxcXdwLzYx?=
 =?utf-8?B?NEhnaWZYOG5FMEM0b1d6MXJhZEErbk5UNnhOK1crSmJySWM1N3NnRXlmajdn?=
 =?utf-8?B?aW0yS3NOZWhYNXN0ODdSOWt6SXJnMHZwWmRmZXhpRWloTGZRZWgvL0hCTDlS?=
 =?utf-8?B?VGg0bGtJYmFSTGtYOVNDYy91eUhCTWVwOUJSV3A3eWVyYXB1RTlEWFlpTFVo?=
 =?utf-8?B?ZEw3WTF6K0V4MGord25md1Nua1FhRFN2Z3U1dmgyUUNHQXRveGRrWGx3c21C?=
 =?utf-8?B?a0NYT1d4eWE1R3VPRk5QcXRSeHhvZ2hmZzg2eDdYSy9lZGlSbUszais2REJ3?=
 =?utf-8?B?TXBYU3ZKTndiZTFSRHVGbmN0L2xHRFkvQk9JbmIvQ0J2NG1PenVZQmtRb3FS?=
 =?utf-8?B?VlBMVDBFbklSQnBzQ0VNTUhONHVmY25NM3I5UDg0dnpvd2l3a016cytjMVVt?=
 =?utf-8?B?YzlxaXgzZUlSWkYrZU92UDQzSXg2RlRFT08zOTEzclUxZEZlaEtNN1ZWWVhj?=
 =?utf-8?B?NFcwaldpVHFpenpkTnp1anpkSjhmbUZrejVyU2FaeVE4bEZwOVNYNkw2a0tP?=
 =?utf-8?B?dnEwakFKUUhNYkpqTmQ0YTVtdDdmT0dpRWE4ckNaZ3orTlpIVHU2MUhFcTZs?=
 =?utf-8?B?bXhZQ1ZDOWMvOFNhZkxra01IY0FUN3JWeFBIaUJzeEVJdUJFckIwMDN0dHpX?=
 =?utf-8?B?QldURmlTcy9HTGU0dHlTQ1dLTXZNckI2em4yY21kR2FocERJeDd2dnJyaEF3?=
 =?utf-8?B?UEVJQVFxSXkxSHllcDI5Wlpaa1FucXF4bUU1S0s5dXZiRm94cngrQjBwVnJx?=
 =?utf-8?B?aXRzTTdsTlU5T0RQNEdiUmhLeTlzYmJtTWwrMGVoMGtSa0w3ZU9GbDREbFRk?=
 =?utf-8?B?RTMvM3ROcnpoNUpnVXVYaC9IZWN2eW5acGZTZUFxMTQ0TUMwemx0OGUxeUUr?=
 =?utf-8?B?M3g1ayt3Qld5UU9tMWo3MERZdldkUjVmMzdZcmx5SXpWRWErTDQrWTFHc2Jx?=
 =?utf-8?B?VXFSVjhlcmpPRk8rMEF4cmVpMExoL0JkaUJLZjhDdm15Qk40MnNsclR4MW9I?=
 =?utf-8?B?NVJHTmdjTkJPOGRoT1BHZXU5dFlvcjRoeERpd2FmbGlyQ04vblY2NVlJRUlV?=
 =?utf-8?B?bjQ1WWhzQlg0Yys5MkF2RDdQWk16Mm9wL3J4T1dUUnV0Z2dUQ1phUzQ4Ukth?=
 =?utf-8?B?bEtqUnZKOHJ4bThMK09KNUw0dHRHK09KbnRLcEdld24xaCtEV2hiamVCbVJw?=
 =?utf-8?B?ZmFsZzluN0hYbWFNbXZsSEx0N3RaSGJxc3Zrc0Joa1VzMDVYTnE0eDk3RC83?=
 =?utf-8?B?QmRBbHlCeEdLOTVlSXBrSmhyOUFBa29UZGJacDhObm9rOVJLL2xKK1R0UERY?=
 =?utf-8?B?VXlHc2x5TnhwM0poZjRoSUx1ejh6ZkpnQnBFU1FGK3BhL2hRV3ZSRFJBK2Iy?=
 =?utf-8?B?Q1NaZEFzWms2QmVvNit4L2JHcGVKa1dpYmI0eE1SUTg1TFlPdDVzUUY3azRE?=
 =?utf-8?B?dU51dnJmWmdHK1JxTmk1bC8zSHM2VUNjNWVXTFREY2VlbVl3ZEoybVM0TG5j?=
 =?utf-8?B?MC9HcEs4aU9uU1dqSlJ5TmF5VmFDWjNaVHJ3VlZDTVVtSjhsTUZuUW1KZ3JX?=
 =?utf-8?B?cnB1dS8wSkZHbXlCNWVJRnJIb3JacGdYb3ZxSXBjeStWVnJhZXdCOGtRZVg4?=
 =?utf-8?B?YTVaUVE4VkZjMmlQVjk1cUdSZXFXM1M3VEtEZWdzZHFiK3FydmF2UT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 67abe334-abdf-41c0-701c-08da484de993
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 06:21:09.6969
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6ZUyOY2pfXy2srQJrBNV+DJflNfqZS2Rbglf0+FxHxcyTWJ8mvxrltIwSioXZchoTM+V906nOA0zbcuXDDGqDg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR04MB3063

On 07.06.2022 04:17, Stefano Stabellini wrote:
> # Rule 9.4 "An element of an object shall not be initialized more than once"
> 
> Andrew was noting that "There's one pattern using range syntax to set a
> default where this rule would be violated, but the code is far cleaner
> to read."

I'm actually not sure we have such uses, as I seem to vaguely recall clang
warning about them. Off the top of your head, do you know of an instance,
Andrew?

Jan

> Range initializers are a GCC extension, not part of the C standard, and
> are not considered by the MISRA rule. The MISRA rule seems focused on
> preventing accidental double initializations (from copy/paste errors,
> for instance), which is fair.
> 
> So I think we should be OK here, and we need to clarify our usage of
> range initializers. What we do or don't do with range initializers
> should be a separate discussion, I think.
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 06:26:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 06:26:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342849.567938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nySfZ-0002VO-6N; Tue, 07 Jun 2022 06:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342849.567938; Tue, 07 Jun 2022 06:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nySfZ-0002VH-3b; Tue, 07 Jun 2022 06:26:25 +0000
Received: by outflank-mailman (input) for mailman id 342849;
 Tue, 07 Jun 2022 06:26:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nySfY-0002VB-Lv
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 06:26:24 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c081359c-e62a-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 08:26:23 +0200 (CEST)
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2052.outbound.protection.outlook.com [104.47.6.52]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-26-fdvmRkQJOlCsRkL7AGhntQ-1; Tue, 07 Jun 2022 08:26:22 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8740.eurprd04.prod.outlook.com (2603:10a6:20b:42d::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 06:26:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 06:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c081359c-e62a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654583183;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s0LUxGRu6M1umnyDasRxv1iWXBTB8HSglNSzzceqsfo=;
	b=NMwsRYd+txJ1gmereDGJHRjE3659PzS5VnTeCALdQJh82xW3uIlZtDKXw/5HNpzGnqkyiA
	BJagBpTcoVM+rskGcaFOqjP+08eyclG2m0uHEtIZNbsjhLajYw+VpT2g6P4swlu+B/FpMa
	Mq2G7zOXuqW46+XxG0IHNzo5I1PoUVk=
X-MC-Unique: fdvmRkQJOlCsRkL7AGhntQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WyK0B+5QqUog+2L+1APVs5BvfY5trp7dxupV6OaX+s1EYXpBI2lUtOZ5VDas7Z3Jf8RrIahaxHBP3qducb955LqRIdPnUa6U853T1Pr4z7TA91kMTj/RIdiHJZz4OQ23kwroCtzoEXB1hnrUQLj45oDJ2HBhy3TAvFWaCb6XGR98IElPoY7fKvkJoXyRoFJXsfxwpmI8KMTdn/MoogKfviOr77AiDQXT14q/XdX7Ol10auW359Lyk88Qc2uBiHA3yJETxvan10XJOERG1u95021Tx5UH6Xk7/dD4TefxaDBBMNTAsuTexceEJKQJLDYdZLOL8SS4PVu6L26fMWKDbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s0LUxGRu6M1umnyDasRxv1iWXBTB8HSglNSzzceqsfo=;
 b=ee2mHRRsywWnepscHCzhyy63nFk3CyHICt/eE/7eupW5wgeAdkMUsnoItEsMqpehZzaQecIxroQFJquPBP5VvMxC8LJotRlF7zVUcsPoHLrNTQOW2dQT/T2qpuvt2XpRw0SY3BImuXLQatGQDksPqUcQv6CYgitVTMAgPE/OJ6I3uMFv2PMe8f6ddgRH2IGizQSBp5fP6FnhixkWbHR8lFMR5yxft0UUQAMuF6cM4NNG/vveYtWD9hNWzj/EvxHmPxtz7cvGcXqivc3GAEQ3aztJK/so2KpbWlhhH1rzUq7wGJp6umzc0CPE+fKXPfK3cNnyYCVGFMJpxjW8+mTfsQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s0LUxGRu6M1umnyDasRxv1iWXBTB8HSglNSzzceqsfo=;
 b=CBTpevhYyF7gtgixlXo8ccRw5eJX7qqLobdNi5DvgjQ+c1iWyw22oKHZMEe2Bd0R6WFSdwVJBRZ288eGW2U8lznXrfzlPAf02NV5CgIP8A0tijN0yDF+Bu/odI4d9N73ZThi3461+vDQ5BtG4z9bqbFSb77z9HPav+lsXNZ8GM5DWVqA6U10SaW2rE311WNhjjJ8ehivTrQs++IGD/Na9behydG7IkHSIl43DiLMNG7oI1UsbmzIBWCgd/mCCrqRvUayT/k8X78uovVzYklIPZA1+8BczpNfA3dok6natJg+ZGeWMsXUSp1hMTzKRFZmNi2+kL79sQbBP0zu2x7mxA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5fb634bf-d13a-ccb0-a01f-83d7c0424b00@suse.com>
Date: Tue, 7 Jun 2022 08:26:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <0f8f0c20-690c-f02a-e1f8-957462118999@suse.com>
 <Yp4Dhj4UkORelT8D@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yp4Dhj4UkORelT8D@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9P250CA0016.EURP250.PROD.OUTLOOK.COM
 (2603:10a6:20b:532::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e783c82b-ff21-4902-f7d2-08da484ea2a2
X-MS-TrafficTypeDiagnostic: AS8PR04MB8740:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB874014F9E05764A0BB676F9FB3A59@AS8PR04MB8740.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Ay2sViwtM9o9+BxKtGmVNZDbABIdDDVYZvdm048LT/ExwdftFWzpeZWvHbJmgJpYAzBKBzyd7oGGltDvqpZXSvhm2A0iVC944uZncSy3Q1sHHTuFh99xakAAelrsAZHVfbh7GLIOhf1STLIXlvTeTFe1GsNPU0yxky1sazg2kE7UxPTfiZbZsOrDzY2KgmedjsbsAfyP0qlp4LKSw9IpZ5AlLdCEeUos/F/gU0To9xd4jTnoTd1oaqaaioIDPX9eCCqdXrFDFwlI2DHRin3+H2H/yG6IjHLLpgY/mb2v6wSU5KeweD/TylEN9Y3JQxza/dQlaqpnDdQqd8/5Ed8Irp2tQRPGwtKvVcewD+sMiLuzd1A0FUFXAy5Rw/0+hshkyZTtc5Ceo3P0aFin71f5FvoyWNh/7lLa0eKuT26MKFHwdX7XL7/WcKKojXeT2xzpbBvVshwvR+ivUpqLQS/O2BoSN2+h9rfj0Q0YSqjZRNp5OGMieAx0TacJgcvxhaGI+ecSsXr+fad53bm3LlrMrShvlfhQtcL3XwMzoAaEl2n8NqcFT7q1w6c/vmobOBA9Vea/m0tYN5Ht46cLTrt7OsLNeFZVikRqRUOhDDiRXhibzNBmoZYEWY4X8ARwbZKGpToj+DOEosuqTNbXXwKjCzlW4DVmTiHL5Zb70QzakokNWkUx5MtihH2zmy9HyxhaSBZt2yrEkMWHhslN6qO95OxDLjftYh95D2SehgihJGddeZSmqWLY/Fww1Kzs3JnS
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(316002)(26005)(186003)(53546011)(8936002)(54906003)(6916009)(6512007)(2616005)(508600001)(2906002)(36756003)(31696002)(6486002)(38100700002)(86362001)(6506007)(5660300002)(31686004)(4326008)(8676002)(66946007)(66556008)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TENLTzRXQldDd0RoODJkWlVwRVM5TkxEYkdsNTRJbWRwRmkzZUFGUG9PZll5?=
 =?utf-8?B?NkszYXdlVExGQXhqcjhjL3AxRmNralYwbEZTY0YyMjhtc25UWENJRlpKV3ZD?=
 =?utf-8?B?NUFNSXNuK0dLcEh3VUFWOFZUNW5oelhBeDcrWTlGR1c4Ymkyb1VoNk1RN3Y5?=
 =?utf-8?B?VEdkSnU5M3lUbG1jbzZXcHlOUTBQZHplNWtLTmxhOG5MTkgwTHRmS3hSNHhV?=
 =?utf-8?B?Q1VhdlRIcUdhYUJzd1NBajdsOVVKRDM1d1Z2SkNoZlBEL3EzZEw0dXlCWEFY?=
 =?utf-8?B?RkY3S29pQkRqdlpKNC84UktyQklnY2xzMmx6NEtoSmVNRTdDak1FVy9DeFFR?=
 =?utf-8?B?bTNKSW12VGtTR2NQc0o4dmh0bUk5WUwrWmZZenJmMGJQT3ArRmNQa0hMNnpB?=
 =?utf-8?B?eER4TlRmWW9BUjhqSzdmZkxaSStReWgwWGFUTjVqUXZrWTFmN0ZjL0IwSHVs?=
 =?utf-8?B?NmpaVmdPYWpXdmpaTlJUa2dPTUFYdUZHWVdDWDVMVGhFZHplK1U3eTAvYWpO?=
 =?utf-8?B?cDFMdXBKRG01SjFRMVIzOHJHeUhVOFI0UFdKWDNnTHg1dU1scjcvMXFITlV6?=
 =?utf-8?B?RFdtdWhKV3k3alZJUlZ0aXExb1hEY0djbFM1UzZPTEgzU2RsOHdOSThFbFlj?=
 =?utf-8?B?c1JMZW9mZ3Y5TS9tWjJxVWVuUmtjNEFKZkxucVJERGZWNVBmeURJak1oSmRJ?=
 =?utf-8?B?TEJoT0pVNGVNR3ZGMFViU0xRbk45dG5VQ3ZYOG8zV3VnRUtoQmVIOWtvd2JE?=
 =?utf-8?B?WS83YTVuUnFhbUQwcmFDTTJ3OW1JUHNxdVFVNUVvbDExYzYyaW9nU09ydXpB?=
 =?utf-8?B?Z01XMmg2MlNkWElvRDBkSjQwRm02ZytDd0dMdVZGczJYWVBPd2VWNGgwMmJu?=
 =?utf-8?B?a3dBZGNuRjNhc0RESkIvSHJlNXBubVBzRVpDMW5DVWlQOWR6ZzNpd3BuNTlh?=
 =?utf-8?B?cThEQmowSi96WC9EVzEzWTd6blBVdVljZ090L3BZblZuOFFON1VndnVnd3BJ?=
 =?utf-8?B?Nm1PTm9ZS3Rkc212bVVNV3J0K0ZMc1lyV3RaOUtucWhVeURNbUxZS3FybXlZ?=
 =?utf-8?B?SCtFaU13bllZaUlWRWErcEs0M2w0YW1DWVhWaEJjNXh1L2g2RERPZitLdkhN?=
 =?utf-8?B?RFpwNEVtN0ZMSDd0ZjQ5WFhVVmVzcWh6c0xwb2h2cTFRUDU3S3BuOUdDMVZV?=
 =?utf-8?B?R2dlemZKVFF1VUhaRmZzRXFPT0ZBZmI0dzltbmlLK0pwREgvQjhyODE2R3Vl?=
 =?utf-8?B?c2N1dXdkaFJYM2dLT3Q2Y1ZibmVNNzgrTlFjb0hZZVpIMERUdUlmVkQrWDNs?=
 =?utf-8?B?ZExVUmtjZmRQaXl1YzluMEwxL3g2WVlGZFQ2QmtIajN3ZzhIZ3BvOTBIN3E2?=
 =?utf-8?B?VG5uZlQ1U042SnhsQWQ4UmFIUk5meU1ocTlnc1o1WUpJQWtBb3lCcFFuODFt?=
 =?utf-8?B?VmxNYUxFQ0J6QU9WdlJEOFpUN2ozNGZsTjlwUUhWYjFyM1FwblFhaHIrNG9C?=
 =?utf-8?B?MzRlYm40T0pCUXJFbWR5dzk3V0JHaythazJ3RGVSYUlRZVhaMW01UXhYT1N4?=
 =?utf-8?B?SUFLbVllMmoyQXpmK0lqVVNNYVdrNGhiSVFnbVUrVWdvWnA3MXpzLy9JQVZB?=
 =?utf-8?B?M2Y2K0gwZWVKWUt3dEgzUlRwbEdXVXdQVEFIL2U2ZDNqOVVkdU1qR1lDVjF2?=
 =?utf-8?B?T1UxMzVRcnR1d1UzbFpWRjJUVU00bmVWblJrSktNNWVEaEtjVW1wdkx1N0k2?=
 =?utf-8?B?bzhVekRSM1orOExWbkd6a3A3RWRmdFppbjhzU2ljYzgzTmZCNlJlRjMyVmpR?=
 =?utf-8?B?NnpvR1ZXMnJQemlyOUtnUEo0Um1UdEgrS0lhWitsY256UnZpcHBJMVFiU2pz?=
 =?utf-8?B?RU5kYVNNaEp0Ujc1THJWS24zZXEzS0dwMEY0VCt0RkNXTjIzOVB1MTJMbzky?=
 =?utf-8?B?cVY1UlRLNzdreHVWbnBGcjFiM25WNC8ydVBoVHJSVWNQM0NyNXhQbTNQUmRQ?=
 =?utf-8?B?R1JMNmVZZGxZTGwyV2VVS25iM0tvRTNrM1ZoVDAyeWhvZ01oekorMjh5WkJu?=
 =?utf-8?B?NlE3eTRMaVhJRjZudE40cTBVVUxUZUU0eVBPa3M3TTVLTks3RWJDWU9iRmlC?=
 =?utf-8?B?KzhUTkx5N0NYcEp1WEVxK3ZiU3dhNlZ6bFlwc096dnhHNzBJUXdMclZ1bXVV?=
 =?utf-8?B?WlVZekVSenVVKzY5ZkZOaVdybEIyU0QxOGJTTWlMTUJSYkRKTUVDVDN4TUd2?=
 =?utf-8?B?bXNTU1ZUbFk0MW9vQzVqemFERWZRbEdsZnBNdUdLMXVNb01MOGZGNkNKMGYv?=
 =?utf-8?B?YlE3RnRVNXJTdVBXWnA1Q2xHYldRUFRidXhybEZMSUg2LzVSSCs0UT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e783c82b-ff21-4902-f7d2-08da484ea2a2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 06:26:20.1147
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kSO2vLqBHfTeUScyyg/cKDA/TizWnapZdINRKu2NqYvasIhYyRbwjjWaZOVA+3gNM3vgFl7QkjOPqIsvpmcZug==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8740

On 06.06.2022 15:39, Anthony PERARD wrote:
> On Thu, Jun 02, 2022 at 11:11:15AM +0200, Jan Beulich wrote:
>> On 01.06.2022 18:59, Anthony PERARD wrote:
>>> Use "define" for the headers*_chk commands as otherwise the "#"
>>> is interpreted as a comment and make can't find the end of
>>> $(foreach,).
>>
>> In cmd_xlat_lst you use $(pound) - any reason this doesn't work in
>> these rules? Note that I don't mind the use of "define", just that I'm
>> puzzled by the justification.
> 
> I think I just forgot about $(pound) when I tried to debug why the
> command didn't work in some environments (that is, when not using
> define).
> 
> I think the second thing that made me not replace '#' with "$(pound)"
> is that reading "#include" in the Makefile is probably better than
> reading "$(pound)include".
> 
> I guess we should add something to the justification like:
>     That allows us to keep writing "#include" in the Makefile without
>     having to replace it with "$(pound)include", which would make the
>     purpose of the command line a bit less obvious.

I'd be okay with this, and I'd also be okay with adding this while
committing. With it added
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
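For reference, a minimal sketch of the distinction being discussed, using a hypothetical throwaway makefile (.RECIPEPREFIX is set only to avoid literal tabs in this demo): in a plain `=` assignment an unescaped `#` starts a make comment and must be written as `$(pound)`, whereas a `define`/`endef` body is kept verbatim, so "#include" can be written as it will appear.

```shell
cat > /tmp/pound-demo.mk <<'EOF'
.RECIPEPREFIX := >
pound := \#

# In a plain '=' assignment an unescaped '#' would start a comment,
# so the escape variable is needed:
cmd_escaped = echo '$(pound)include <xen/lib.h>'

# Inside define/endef the '#' is kept verbatim:
define cmd_verbatim
echo '#include <xen/lib.h>'
endef

all:
>@$(cmd_escaped)
>@$(cmd_verbatim)
EOF
make -sf /tmp/pound-demo.mk
```

Both recipe lines print the same `#include <xen/lib.h>` text; the difference is purely in how readable the makefile source is, which is the point of the proposed justification.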



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 06:54:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 06:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342857.567948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyT6a-0005x6-Cz; Tue, 07 Jun 2022 06:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342857.567948; Tue, 07 Jun 2022 06:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyT6a-0005wz-AB; Tue, 07 Jun 2022 06:54:20 +0000
Received: by outflank-mailman (input) for mailman id 342857;
 Tue, 07 Jun 2022 06:54:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyT6Z-0005wt-5G
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 06:54:19 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a6896684-e62e-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 08:54:18 +0200 (CEST)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2168.outbound.protection.outlook.com [104.47.17.168]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-23-RzqjxCpYPnOpPbKmyHVerg-1; Tue, 07 Jun 2022 08:54:15 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7683.eurprd04.prod.outlook.com (2603:10a6:20b:2d7::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 06:54:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 06:54:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6896684-e62e-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654584857;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SC+WSNVhO9GML4oFShhKwiJYTk/8SC67TBz8JVyBbsg=;
	b=Cg3t9JXCeHYqYNdcFTpmChAkvyIM1Bf6IeMOemZO8E5vho2syibBxImMfs+HZgu/hCsPLx
	C0fFYUf/nzO4drYkv3PBJYy2zbF4a94rrtzoeO+MZn5yO5VOnStpVp6qHfptTkhwYW9XcK
	BVCERBPyz6T12laG/WRc3UijEqPzDlA=
X-MC-Unique: RzqjxCpYPnOpPbKmyHVerg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fvseONVHoJKEuqnpiD0xNWqSULxAEd65vM6lSZXr0WAqRWoNdD0b07PBttztLvZQy4bb4/hp3ZqXqY/2em3ql7HyB6WH18GhnaP3Os1cjmomUCKkIoW5kmncqrNJ98NYdd+8D6gVnLqyCk5Lj1mlU1paIJGIeV8Da1qj+Os0mdxLqRuB9iOTgun0WdJ2G/7eTlkNCC1DwgpPToCpYRwdbIWdmd9d1kbj+AuVBX9jUcwF9ygupCBaKwld0suA5sHLl/dWMfRkPth8A2l3THtwtLW/l59GidXmDiaN+8IlSISUkCS/Y5gImghCPKChsbgv91d9XjfZoO81VUHE8/HP5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SC+WSNVhO9GML4oFShhKwiJYTk/8SC67TBz8JVyBbsg=;
 b=HOm8p3Zs/BPodXanjP3AWoLoRAfw+SQlA9nqy5UA2KzS0PvNFf8yfXYBjr4mrIIJ9Jefiaj9sRCiHjmMA3JK+aUyRv/uXL2qgQGOPibTs9So56YGKO3v6Wu9zCXIX0lUlSRouTcFt4BB8cVzyhC3kJxggQZtpun10DfA3Md1YSBwhkuqBNxwcqZ0VOUre1rAdGS+1J7badKorJTIDV7g2GCAzLs20iceBMOgsLJiX4/yB559Ns5nDYdZ0yFZyeD1XUWh85Z/yipmAbP88PrQwn2QSq3hkEGzCpNjBwftUbmoESS8KmuaavjdQEnX7C4Iw6kqoJxeVKcfIMxfShF+PA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SC+WSNVhO9GML4oFShhKwiJYTk/8SC67TBz8JVyBbsg=;
 b=h5CzfMsy441O9KkmsvVkTpoVZYPVoLbTxx2LybMWLsj0/Hj43OjSu6rhbybRjCRaOi8wtoFtj7UHKu9NuM7YWa1fQS6pEamZGlQ09XoE/KnIwPnFcVaYh2RMf0Bdb/s8PcIj72KhT2/Y71y27OFziWtRxqztRT8FfcYiAQ5dBxS6r85p/ZNm/kePLkOC2BnVg21PAO20CWKNMmitIBzYB8amEApTkwAgp+sgVdXQ1vcN0kYB7M5CClEb8Jg+mlW9t1rHjvS3HRtixP97JrZbDe9EP6DDYBWblCBInGzhmW3Oo6V6+2uZ3TQoXqZGkYIr1L0wAUon28b4POUfr++00A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fe0fbfb8-859c-7b24-89bc-d5c68c7b133e@suse.com>
Date: Tue, 7 Jun 2022 08:54:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
 <3c8b0b72-0a9a-3dfd-bf5c-b6cc40a4ce3a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3c8b0b72-0a9a-3dfd-bf5c-b6cc40a4ce3a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 06.06.2022 15:27, Andrew Cooper wrote:
> On 26/05/2022 12:11, Roger Pau Monne wrote:
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index f08a00dcbb..476ab72463 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -4065,6 +4065,16 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>  
>>      if ( unlikely(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) )
>>          return vmx_failed_vmentry(exit_reason, regs);
>> +    if ( unlikely(exit_reason & VMX_EXIT_REASONS_BUS_LOCK) )
>> +    {
>> +        /*
>> +         * Delivery of Bus Lock VM exit was pre-empted by a higher priority VM
>> +         * exit.
>> +         */
>> +        exit_reason &= ~VMX_EXIT_REASONS_BUS_LOCK;
>> +        if ( exit_reason != EXIT_REASON_BUS_LOCK )
>> +            perfc_incr(buslock);
>> +    }
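
[Editor's note: a minimal standalone sketch of the masking in the hunk above, for readers without the Xen tree at hand. The constants follow the Intel SDM layout described in this thread (basic exit reason 74 for Bus Lock, flag bit 26 co-fired on a pre-empted Bus Lock exit); the function and counter names are illustrative, not Xen's.]

```c
#include <stdint.h>

#define VMX_EXIT_REASONS_BUS_LOCK (1u << 26) /* bit 26: Bus Lock asserted alongside another exit */
#define EXIT_REASON_BUS_LOCK      74u        /* basic exit reason for a plain Bus Lock VM exit */

static unsigned int buslock_count; /* stand-in for perfc_incr(buslock) */

static void account_bus_lock(uint32_t *exit_reason)
{
    if ( *exit_reason & VMX_EXIT_REASONS_BUS_LOCK )
    {
        /*
         * Delivery of the Bus Lock VM exit was pre-empted by a higher
         * priority exit; strip the flag so the basic reason dispatches
         * normally, and count the Bus Lock here (a plain Bus Lock exit
         * would be counted by its own dedicated handler instead).
         */
        *exit_reason &= ~VMX_EXIT_REASONS_BUS_LOCK;
        if ( *exit_reason != EXIT_REASON_BUS_LOCK )
            buslock_count++;
    }
}
```

So for, say, an external-interrupt exit with bit 26 set, the flag is cleared, the counter bumps, and the switch on the basic reason proceeds unchanged.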
> 
> I know this post-dates you posting v2, but given the latest update from
> Intel, VMX_EXIT_REASONS_BUS_LOCK will be set on all exits.

Mind me asking what "latest update" you're referring to? Neither the SDM
nor the ISE has seen a recent update, afaict.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:11:31 2022
Message-ID: <8a48ad48-ad52-1939-fdbf-bf5d0fc1d621@suse.com>
Date: Tue, 7 Jun 2022 09:11:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: memory overcommitment with SR-IOV device assignment
Content-Language: en-US
To: alex.nlnnfn@proton.me
References: <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2022 05:59, alex.nlnnfn@proton.me wrote:
> I looked into the Xen documentation and the Xen wiki and I couldn't find a definitive answer as to whether Xen supports memory over-commitment when VMs use SR-IOV device assignment (passthrough). By memory over-commitment I mean giving VMs more RAM than is available on the host.
> 
> I know that ESX doesn't support it, and that QEMU/KVM pins all RAM when a device is directly assigned to a VM (the VFIO_IOMMU_MAP_DMA ioctl). I have two questions:
> 
> 1. Does Xen support memory over-commitment when all VMs are using direct device assignment, e.g. SR-IOV?

What you describe looks to match mem-paging / mem-sharing in Xen. Neither is
compatible with device pass-through (SR-IOV or not is irrelevant), and both
are only in "experimental" state anyway.

> 2. Would you please point me to the code that does the pinning?

That's a KVM concept with no direct equivalent in Xen (largely due to Xen
being a type-1 hypervisor, while KVM is a type-2 one).

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:20:39 2022
Message-ID: <5569489c-b3f2-948d-134d-f95b0fdb689f@suse.com>
Date: Tue, 7 Jun 2022 09:20:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
 <3165a99b-3a91-2ca3-80a0-af88d87e9bb0@suse.com>
 <YpoavceqO238Q6Ld@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpoavceqO238Q6Ld@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.06.2022 16:29, Roger Pau Monné wrote:
> On Fri, Jun 03, 2022 at 02:16:47PM +0200, Jan Beulich wrote:
>> On 26.05.2022 13:11, Roger Pau Monne wrote:
>>> Add support for enabling Bus Lock Detection on Intel systems.  Such
>>> detection works by triggering a vmexit, which ought to be enough of a
>>> pause to prevent a guest from abusing of the Bus Lock.
>>>
>>> Add an extra Xen perf counter to track the number of Bus Locks detected.
>>> This is done because Bus Locks can also be reported by setting the bit
>>> 26 in the exit reason field, so also account for those.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> This implements just the VMexit part of the feature - maybe the
>> title wants to reflect that? The vmx: tag could also mean there
>> is exposure to guests included for the #DB part of the feature.
>
> Maybe:
>
> "x86/vmx: add Bus Lock detection to the hypervisor"

Fine with me.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:04 2022
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 1/9] xen/arm: rename PGC_reserved to PGC_static
Date: Tue,  7 Jun 2022 15:30:23 +0800
Message-Id: <20220607073031.722174-2-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

PGC_reserved is ambiguous: the name does not say what the pages are
reserved for. So this commit renames PGC_reserved to PGC_static, which
clearly indicates that the page is reserved for static memory.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v6 changes:
- rename PGC_staticmem to PGC_static
---
v5 changes:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  6 +++---
 xen/common/page_alloc.c       | 22 +++++++++++-----------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 424aaf2823..fbff11c468 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -108,9 +108,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
-  /* Page is reserved */
-#define _PGC_reserved     PG_shift(3)
-#define PGC_reserved      PG_mask(1, 3)
+  /* Page is static memory */
+#define _PGC_static    PG_shift(3)
+#define PGC_static     PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 319029140f..9e5c757847 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,8 +151,8 @@
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
 
-#ifndef PGC_reserved
-#define PGC_reserved 0
+#ifndef PGC_static
+#define PGC_static 0
 #endif
 
 /*
@@ -2286,7 +2286,7 @@ int assign_pages(
 
         for ( i = 0; i < nr; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2346,7 +2346,7 @@ int assign_pages(
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
-            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
+            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
 
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
@@ -2652,8 +2652,8 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
             scrub_one_page(pg);
         }
 
-        /* In case initializing page of static memory, mark it PGC_reserved. */
-        pg[i].count_info |= PGC_reserved;
+        /* In case initializing page of static memory, mark it PGC_static. */
+        pg[i].count_info |= PGC_static;
     }
 }
 
@@ -2682,8 +2682,8 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
 
     for ( i = 0; i < nr_mfns; i++ )
     {
-        /* The page should be reserved and not yet allocated. */
-        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
+        /* The page should be static and not yet allocated. */
+        if ( pg[i].count_info != (PGC_state_free | PGC_static) )
         {
             printk(XENLOG_ERR
                    "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
@@ -2697,10 +2697,10 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
                                 &tlbflush_timestamp);
 
         /*
-         * Preserve flag PGC_reserved and change page state
+         * Preserve flag PGC_static and change page state
          * to PGC_state_inuse.
          */
-        pg[i].count_info = PGC_reserved | PGC_state_inuse;
+        pg[i].count_info = PGC_static | PGC_state_inuse;
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
@@ -2722,7 +2722,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
 
  out_err:
     while ( i-- )
-        pg[i].count_info = PGC_reserved | PGC_state_free;
+        pg[i].count_info = PGC_static | PGC_state_free;
 
     spin_unlock(&heap_lock);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342882.567982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTfx-00039v-Tq; Tue, 07 Jun 2022 07:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342882.567982; Tue, 07 Jun 2022 07:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTfx-00039o-R1; Tue, 07 Jun 2022 07:30:53 +0000
Received: by outflank-mailman (input) for mailman id 342882;
 Tue, 07 Jun 2022 07:30:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTfw-00039i-Fh
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:30:52 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id c0b001a4-e633-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 09:30:49 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D7A9A143D;
 Tue,  7 Jun 2022 00:30:48 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 52BBF3F66F;
 Tue,  7 Jun 2022 00:30:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0b001a4-e633-11ec-b605-df0040e90b76
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 0/9] populate/unpopulate memory when domain on static allocation
Date: Tue,  7 Jun 2022 15:30:22 +0800
Message-Id: <20220607073031.722174-1-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Today, when a domain unpopulates memory at runtime, the pages are always
handed back to the heap allocator. This is a problem for static domains:
pages used as guest RAM for a static domain shall be reserved for that
domain only, and never be used for any other purpose, so they must never
go back to the heap allocator.

This patch series fixes the issue by putting pages on a new list,
resv_page_list, after taking them off the "normal" list when unpopulating
memory, and by retrieving pages from resv_page_list when populating
memory.
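The flow described above can be sketched as follows. This is a minimal stand-in model, not Xen's real page_info/page_list API: the struct layout, list helpers, and the locking that the series adds around rsv_page_list are all simplified away here.

```c
#include <stddef.h>

/* Simplified stand-in for struct page_info. */
struct page {
    struct page *next;
    int in_use;
};

/* Reserved pages are parked here instead of going back to the heap. */
static struct page *resv_page_list;

/*
 * Unpopulate: take the page off the domain's "normal" list (elided) and
 * park it on resv_page_list, so it stays reserved for the static domain.
 */
static void unpopulate_page(struct page *pg)
{
    pg->in_use = 0;
    pg->next = resv_page_list;
    resv_page_list = pg;
}

/* Populate: retrieve a previously reserved page, if any is available. */
static struct page *populate_page(void)
{
    struct page *pg = resv_page_list;

    if ( pg )
    {
        resv_page_list = pg->next;
        pg->in_use = 1;
    }
    return pg;
}
```

The key property is that populate/unpopulate cycles only ever recycle the domain's own reserved pages; the heap allocator is never involved.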

---
v6 changes:
- rename PGC_staticmem to PGC_static
- remove #ifdef around function declaration
- use domain instead of sub-systems
- move non-zero is_domain_using_staticmem() from ARM header to common
header
- move PGC_static !CONFIG_STATIC_MEMORY definition to common header
- drop the lock before returning
---
v5 changes:
- introduce three new commits
- In order to avoid stub functions, we #define PGC_staticmem to non-zero only
when CONFIG_STATIC_MEMORY
- use "unlikely()" around pg->count_info & PGC_staticmem
- remove pointless "if", since mark_page_free() is going to set count_info
to PGC_state_free and by consequence clear PGC_staticmem
- move #define PGC_staticmem 0 to mm.h
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
- extract common codes for assigning pages into a helper assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently. So access to rsv_page_list
needs to be protected with a spinlock
---
v4 changes:
- commit message refinement
- miss dropping __init in acquire_domstatic_pages
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
---
v3 changes:
- fix possible racy issue in free_staticmem_pages()
- introduce a stub free_staticmem_pages() for the !CONFIG_STATIC_MEMORY case
- move the change to free_heap_pages() to cover other potential call sites
- change fixed width type uint32_t to unsigned int
- change "flags" to a more descriptive name "cdf"
- change name from "is_domain_static()" to "is_domain_using_staticmem"
- have page_list_del() just once out of the if()
- remove resv_pages counter
- make arch_free_heap_page be an expression, not a compound statement.
- move #ifndef is_domain_using_staticmem to the common header file
- remove #ifdef CONFIG_STATIC_MEMORY-ary
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- let "flags" live in the struct domain. So other arch can take
advantage of it in the future
- change name from "is_domain_on_static_allocation" to "is_domain_static()"
- put reserved pages on resv_page_list after having taken them off
the "normal" list
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
- fix coding style

Penny Zheng (9):
  xen/arm: rename PGC_reserved to PGC_static
  xen: do not free reserved memory into heap
  xen: update SUPPORT.md for static allocation
  xen: do not merge reserved pages in free_heap_pages()
  xen: add field "flags" to cover all internal CDF_XXX
  xen/arm: introduce CDF_staticmem
  xen/arm: unpopulate memory when domain is static
  xen: introduce prepare_staticmem_pages
  xen: retrieve reserved pages on populate_physmap

 SUPPORT.md                        |   7 ++
 xen/arch/arm/domain.c             |   2 -
 xen/arch/arm/domain_build.c       |   5 +-
 xen/arch/arm/include/asm/domain.h |   3 +-
 xen/arch/arm/include/asm/mm.h     |  20 +++-
 xen/common/domain.c               |   7 ++
 xen/common/memory.c               |  23 +++++
 xen/common/page_alloc.c           | 149 ++++++++++++++++++++----------
 xen/include/xen/domain.h          |   8 ++
 xen/include/xen/mm.h              |   7 +-
 xen/include/xen/sched.h           |   6 ++
 11 files changed, 178 insertions(+), 59 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342884.568004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTg5-0003hg-Kw; Tue, 07 Jun 2022 07:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342884.568004; Tue, 07 Jun 2022 07:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTg5-0003hX-FW; Tue, 07 Jun 2022 07:31:01 +0000
Received: by outflank-mailman (input) for mailman id 342884;
 Tue, 07 Jun 2022 07:30:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTg3-0003fm-GI
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:30:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id c599db1d-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:30:57 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3282B1480;
 Tue,  7 Jun 2022 00:30:57 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 770003F66F;
 Tue,  7 Jun 2022 00:30:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c599db1d-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 2/9] xen: do not free reserved memory into heap
Date: Tue,  7 Jun 2022 15:30:24 +0800
Message-Id: <20220607073031.722174-3-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Pages used as guest RAM for a static domain shall be reserved for this
domain only. So, to prevent reserved pages from being used for any other
purpose, they shall not be freed back to the heap, even when the last
ref gets dropped.

free_staticmem_pages will be called by free_heap_pages at runtime for a
static domain freeing memory resources, so drop the __init flag.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v6 changes:
- adapt to PGC_static
- remove #ifdef around function declaration
---
v5 changes:
- In order to avoid stub functions, we #define PGC_staticmem to non-zero only
when CONFIG_STATIC_MEMORY
- use "unlikely()" around pg->count_info & PGC_staticmem
- remove pointless "if", since mark_page_free() is going to set count_info
to PGC_state_free and by consequence clear PGC_staticmem
- move #define PGC_staticmem 0 to mm.h
---
v4 changes:
- no changes
---
v3 changes:
- fix possible racy issue in free_staticmem_pages()
- introduce a stub free_staticmem_pages() for the !CONFIG_STATIC_MEMORY case
- move the change to free_heap_pages() to cover other potential call sites
- fix the indentation
---
v2 changes:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  4 +++-
 xen/common/page_alloc.c       | 12 +++++++++---
 xen/include/xen/mm.h          |  2 --
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index fbff11c468..7442893e77 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -108,9 +108,11 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
-  /* Page is static memory */
+#ifdef CONFIG_STATIC_MEMORY
+/* Page is static memory */
 #define _PGC_static    PG_shift(3)
 #define PGC_static     PG_mask(1, 3)
+#endif
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 9e5c757847..6876869fa6 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1443,6 +1443,13 @@ static void free_heap_pages(
 
     ASSERT(order <= MAX_ORDER);
 
+    if ( unlikely(pg->count_info & PGC_static) )
+    {
+        /* Pages of static memory shall not go back to the heap. */
+        free_staticmem_pages(pg, 1UL << order, need_scrub);
+        return;
+    }
+
     spin_lock(&heap_lock);
 
     for ( i = 0; i < (1 << order); i++ )
@@ -2636,8 +2643,8 @@ struct domain *get_pg_owner(domid_t domid)
 
 #ifdef CONFIG_STATIC_MEMORY
 /* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
-void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
-                                 bool need_scrub)
+void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                          bool need_scrub)
 {
     mfn_t mfn = page_to_mfn(pg);
     unsigned long i;
@@ -2652,7 +2659,6 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
             scrub_one_page(pg);
         }
 
-        /* In case initializing page of static memory, mark it PGC_static. */
         pg[i].count_info |= PGC_static;
     }
 }
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 3be754da92..1c4ddb336b 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,13 +85,11 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
-#ifdef CONFIG_STATIC_MEMORY
 /* These functions are for static memory */
 void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
-#endif
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342885.568015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTg7-0003xu-0e; Tue, 07 Jun 2022 07:31:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342885.568015; Tue, 07 Jun 2022 07:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTg6-0003wc-SC; Tue, 07 Jun 2022 07:31:02 +0000
Received: by outflank-mailman (input) for mailman id 342885;
 Tue, 07 Jun 2022 07:31:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTg6-0003fm-6H
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id c7d1595a-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D72931480;
 Tue,  7 Jun 2022 00:31:00 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9F30C3F66F;
 Tue,  7 Jun 2022 00:30:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7d1595a-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 3/9] xen: update SUPPORT.md for static allocation
Date: Tue,  7 Jun 2022 15:30:25 +0800
Message-Id: <20220607073031.722174-4-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SUPPORT.md doesn't explicitly say whether static memory allocation is
supported, so this commit updates SUPPORT.md to document the static
allocation feature as Tech Preview for now.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v6 changes:
- use domain instead of sub-systems
---
v5 changes:
- new commit
---
 SUPPORT.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index ee2cd319e2..f50bc3a0fd 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -278,6 +278,13 @@ to boot with memory < maxmem.
 
     Status, x86 HVM: Supported
 
+### Static Allocation
+
+Static allocation refers to domains for which memory areas are
+pre-defined by configuration using physical address ranges.
+
+    Status, ARM: Tech Preview
+
 ### Memory Sharing
 
 Allow sharing of identical pages between guests
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342889.568026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgF-0004QG-Ba; Tue, 07 Jun 2022 07:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342889.568026; Tue, 07 Jun 2022 07:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgF-0004Pv-75; Tue, 07 Jun 2022 07:31:11 +0000
Received: by outflank-mailman (input) for mailman id 342889;
 Tue, 07 Jun 2022 07:31:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgE-0003fm-AY
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ca2f7dd8-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:05 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C0F93143D;
 Tue,  7 Jun 2022 00:31:04 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 503663F66F;
 Tue,  7 Jun 2022 00:31:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca2f7dd8-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 4/9] xen: do not merge reserved pages in free_heap_pages()
Date: Tue,  7 Jun 2022 15:30:26 +0800
Message-Id: <20220607073031.722174-5-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The code in free_heap_pages() will try to merge pages with the
successor/predecessor if pages are suitably aligned. So if the reserved
pages are right next to pages given to the heap allocator,
free_heap_pages() will merge them and, as a result, accidentally hand
the reserved pages over to the heap allocator.

To avoid the above scenario, this commit updates free_heap_pages() to
check whether the predecessor and/or successor has PGC_static set when
trying to merge the about-to-be-freed chunk with the predecessor and/or
successor.

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
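For illustration, the guard added below can be modelled in isolation. This is a toy sketch under assumed flag values, not Xen's real count_info encoding: a neighbouring chunk qualifies for merging only if it is free AND not part of a static-memory reservation.

```c
#include <stdbool.h>

/* Assumed flag values, for illustration only. */
#define PGC_state_free 0x1u
#define PGC_static     0x2u

struct page {
    unsigned int count_info;
};

/*
 * A free chunk may only be merged with a neighbour that is itself free
 * and does not carry PGC_static (order/node checks elided).
 */
static bool can_merge(const struct page *neighbour)
{
    if ( !(neighbour->count_info & PGC_state_free) )
        return false;
    if ( neighbour->count_info & PGC_static )
        return false;
    return true;
}
```

Rejecting PGC_static neighbours is what keeps a reserved chunk from being absorbed into an adjacent heap chunk during the buddy merge.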
---
v6 changes:
- adapt to PGC_static
---
v5 changes:
- change PGC_reserved to adapt to PGC_staticmem
---
v4 changes:
- no changes
---
v3 changes:
- no changes
---
v2 changes:
- new commit
---
 xen/common/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 6876869fa6..7fb28e2e07 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1486,6 +1486,7 @@ static void free_heap_pages(
             /* Merge with predecessor block? */
             if ( !mfn_valid(page_to_mfn(predecessor)) ||
                  !page_state_is(predecessor, free) ||
+                 (predecessor->count_info & PGC_static) ||
                  (PFN_ORDER(predecessor) != order) ||
                  (phys_to_nid(page_to_maddr(predecessor)) != node) )
                 break;
@@ -1509,6 +1510,7 @@ static void free_heap_pages(
             /* Merge with successor block? */
             if ( !mfn_valid(page_to_mfn(successor)) ||
                  !page_state_is(successor, free) ||
+                 (successor->count_info & PGC_static) ||
                  (PFN_ORDER(successor) != order) ||
                  (phys_to_nid(page_to_maddr(successor)) != node) )
                 break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342890.568037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgG-0004ic-Lr; Tue, 07 Jun 2022 07:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342890.568037; Tue, 07 Jun 2022 07:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgG-0004hH-Gl; Tue, 07 Jun 2022 07:31:12 +0000
Received: by outflank-mailman (input) for mailman id 342890;
 Tue, 07 Jun 2022 07:31:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgF-0003fm-Af
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:11 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ccceee0a-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:09 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 34CCF1480;
 Tue,  7 Jun 2022 00:31:09 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3A1303F66F;
 Tue,  7 Jun 2022 00:31:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccceee0a-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 5/9] xen: add field "flags" to cover all internal CDF_XXX
Date: Tue,  7 Jun 2022 15:30:27 +0800
Message-Id: <20220607073031.722174-6-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With more and more internal CDF_xxx flags being added, and to save space, this
commit introduces a new field "cdf" in struct domain to store the CDF_*
internal flags directly.

Another new CDF_xxx will be introduced in the next patch.
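
The flag-packing pattern the patch relies on can be sketched standalone as
follows. This is a minimal illustration only: the simplified struct domain
and the CDF_privileged bit are assumptions for the sketch, not Xen's real
definitions (only CDF_directmap's bit position comes from the patch).

```c
/* Minimal standalone sketch of the flag-packing pattern. Each CDF_* flag
 * is a distinct bit in one unsigned int, so adding a flag costs one bit
 * rather than one bool field per flag. */
#define CDF_privileged (1U << 0)   /* hypothetical example flag */
#define CDF_directmap  (1U << 1)   /* bit position as in the patch */

struct domain {
    unsigned int cdf;   /* holds the CDF_* creation flags */
};

static int is_domain_direct_mapped(const struct domain *d)
{
    return (d->cdf & CDF_directmap) != 0;
}

/* Exercise a set and a cleared flag; returns 1 on success. */
static int demo(void)
{
    struct domain d = { .cdf = CDF_directmap | CDF_privileged };

    if ( !is_domain_direct_mapped(&d) )
        return 0;
    d.cdf &= ~CDF_directmap;           /* clear just this one bit */
    return !is_domain_direct_mapped(&d) && (d.cdf & CDF_privileged);
}
```

Because every flag is a single bit in one word, later patches can add new
CDF_* constants without growing struct domain.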

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v6 changes:
- no change
---
v5 changes:
- no change
---
v4 changes:
- no change
---
v3 changes:
- change fixed width type uint32_t to unsigned int
- change "flags" to a more descriptive name "cdf"
---
v2 changes:
- let "flags" live in the struct domain. So other arch can take
advantage of it in the future
- fix coding style
---
 xen/arch/arm/domain.c             | 2 --
 xen/arch/arm/include/asm/domain.h | 3 +--
 xen/common/domain.c               | 3 +++
 xen/include/xen/sched.h           | 3 +++
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8110c1df86..74189d9878 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -709,8 +709,6 @@ int arch_domain_create(struct domain *d,
     ioreq_domain_init(d);
 #endif
 
-    d->arch.directmap = flags & CDF_directmap;
-
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f9..fe7a029ebf 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -29,7 +29,7 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-#define is_domain_direct_mapped(d) (d)->arch.directmap
+#define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 
 /*
  * Is the domain using the host memory layout?
@@ -103,7 +103,6 @@ struct arch_domain
     void *tee;
 #endif
 
-    bool directmap;
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7570eae91a..a3ef991bd1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -567,6 +567,9 @@ struct domain *domain_create(domid_t domid,
     /* Sort out our idea of is_system_domain(). */
     d->domain_id = domid;
 
+    /* Holding CDF_* internal flags. */
+    d->cdf = flags;
+
     /* Debug sanity. */
     ASSERT(is_system_domain(d) ? config == NULL : config != NULL);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 463d41ffb6..5191853c18 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -596,6 +596,9 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
+
+    /* Holding CDF_* constant. Internal flags for domain creation. */
+    unsigned int cdf;
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:31:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342896.568047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgK-0005IB-Tm; Tue, 07 Jun 2022 07:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342896.568047; Tue, 07 Jun 2022 07:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTgK-0005GX-OC; Tue, 07 Jun 2022 07:31:16 +0000
Received: by outflank-mailman (input) for mailman id 342896;
 Tue, 07 Jun 2022 07:31:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgI-0003fm-Od
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:14 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cf482a9e-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:13 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 68A6B143D;
 Tue,  7 Jun 2022 00:31:13 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A11323F66F;
 Tue,  7 Jun 2022 00:31:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf482a9e-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 6/9] xen/arm: introduce CDF_staticmem
Date: Tue,  7 Jun 2022 15:30:28 +0800
Message-Id: <20220607073031.722174-7-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to have an easy and quick way to find out whether a domain's memory
is statically configured, this commit introduces a new flag CDF_staticmem and a
new helper is_domain_using_staticmem() to check it.
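
The `((void)(d), false)` stub idiom used when the feature is compiled out can
be sketched standalone like this (the simplified struct domain is an
assumption for the sketch, not Xen's real one):

```c
#include <stdbool.h>

/* When CONFIG_STATIC_MEMORY is off, the helper expands to
 * ((void)(d), false): d is still evaluated and type-checked, so callers
 * compile identically in both configurations, and the constant false
 * lets the compiler discard the dead branch. */
#define CDF_staticmem (1U << 2)   /* bit position as in the patch */

struct domain { unsigned int cdf; };

#ifdef CONFIG_STATIC_MEMORY
#define is_domain_using_staticmem(d) (((d)->cdf & CDF_staticmem) != 0)
#else
#define is_domain_using_staticmem(d) ((void)(d), false)
#endif

/* Returns 1 when the helper behaves as expected for this build config. */
static int demo(void)
{
    struct domain d = { .cdf = CDF_staticmem };

#ifdef CONFIG_STATIC_MEMORY
    return is_domain_using_staticmem(&d);
#else
    return is_domain_using_staticmem(&d) == false;   /* always false */
#endif
}
```

The comma operator discards the `(void)(d)` result, so passing an invalid
expression as `d` still fails to compile even in the stub configuration.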

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v6 changes:
- move non-zero is_domain_using_staticmem() from ARM header to common
header
---
v5 changes:
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
---
v4 changes:
- no changes
---
v3 changes:
- change name from "is_domain_static()" to "is_domain_using_staticmem"
---
v2 changes:
- change name from "is_domain_on_static_allocation" to "is_domain_static()"
---
 xen/arch/arm/domain_build.c | 5 ++++-
 xen/include/xen/domain.h    | 8 ++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..f6e2e44c1e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3287,9 +3287,12 @@ void __init create_domUs(void)
         if ( !dt_device_is_compatible(node, "xen,domain") )
             continue;
 
+        if ( dt_find_property(node, "xen,static-mem", NULL) )
+            flags |= CDF_staticmem;
+
         if ( dt_property_read_bool(node, "direct-map") )
         {
-            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !dt_find_property(node, "xen,static-mem", NULL) )
+            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !(flags & CDF_staticmem) )
                 panic("direct-map is not valid for domain %s without static allocation.\n",
                       dt_node_name(node));
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1c3c88a14d..c847452414 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -34,6 +34,14 @@ void arch_get_domain_info(const struct domain *d,
 #ifdef CONFIG_ARM
 /* Should domain memory be directly mapped? */
 #define CDF_directmap            (1U << 1)
+/* Is domain memory on static allocation? */
+#define CDF_staticmem            (1U << 2)
+#endif
+
+#ifdef CONFIG_STATIC_MEMORY
+#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
+#else
+#define is_domain_using_staticmem(d) ((void)(d), false)
 #endif
 
 /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:33:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342915.568059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTiM-0007DK-96; Tue, 07 Jun 2022 07:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342915.568059; Tue, 07 Jun 2022 07:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTiM-0007DD-5F; Tue, 07 Jun 2022 07:33:22 +0000
Received: by outflank-mailman (input) for mailman id 342915;
 Tue, 07 Jun 2022 07:33:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgV-00039i-FE
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:27 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d61425d5-e633-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 09:31:25 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E218D143D;
 Tue,  7 Jun 2022 00:31:24 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B40613F66F;
 Tue,  7 Jun 2022 00:31:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d61425d5-e633-11ec-b605-df0040e90b76
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 9/9] xen: retrieve reserved pages on populate_physmap
Date: Tue,  7 Jun 2022 15:30:31 +0800
Message-Id: <20220607073031.722174-10-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When a static domain populates memory through populate_physmap at runtime,
it shall retrieve reserved pages from resv_page_list to make sure that
guest RAM is still restricted to the statically configured memory regions.
This commit also introduces a new helper, acquire_reserved_page, to make this work.
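
The retrieval pattern can be sketched standalone as below. Types and names
here are simplified assumptions, not Xen's real ones, and the sketch omits
the locking: a real implementation would take the domain's page_alloc lock
around the list operations, as the patch does.

```c
#include <stddef.h>
#include <stdbool.h>

/* Pop one page off the domain's reserved list; if preparing/assigning it
 * fails, push it back so the reserved page is never lost to the domain. */
struct page { struct page *next; bool assigned; };

struct dom { struct page *resv_head; };

static struct page *acquire_reserved(struct dom *d, bool simulate_failure)
{
    struct page *pg = d->resv_head;      /* remove the list head */

    if ( !pg )
        return NULL;
    d->resv_head = pg->next;

    if ( simulate_failure )              /* prepare/assign failed: */
    {
        pg->next = d->resv_head;         /* give the page back */
        d->resv_head = pg;
        return NULL;
    }
    pg->assigned = true;
    return pg;
}

/* Failure path restores the list; success path hands out the page. */
static int demo(void)
{
    struct page p = { NULL, false };
    struct dom d = { &p };

    if ( acquire_reserved(&d, true) != NULL || d.resv_head != &p )
        return 0;
    if ( acquire_reserved(&d, false) != &p || d.resv_head != NULL )
        return 0;
    return p.assigned;
}
```

Putting the page back on error matters because a static domain has no other
source of RAM: a dropped reserved page could never be reallocated to it.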

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v6 changes:
- drop the lock before returning
---
v5 changes:
- extract common codes for assigning pages into a helper assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently. So access to resv_page_list
needs to be protected with a spinlock
---
v4 changes:
- miss dropping __init in acquire_domstatic_pages
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
---
v3 changes:
- move is_domain_using_staticmem to the common header file
- remove #ifdef CONFIG_STATIC_MEMORY-ary
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
---
 xen/common/memory.c     | 23 ++++++++++++++
 xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++----------
 xen/include/xen/mm.h    |  1 +
 3 files changed, 77 insertions(+), 17 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index f2d009843a..cb330ce877 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
 
                 mfn = _mfn(gpfn);
             }
+            else if ( is_domain_using_staticmem(d) )
+            {
+                /*
+                 * No easy way to guarantee the retrieved pages are contiguous,
+                 * so forbid non-zero-order requests here.
+                 */
+                if ( a->extent_order != 0 )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "Cannot allocate static order-%u pages for static %pd\n",
+                             a->extent_order, d);
+                    goto out;
+                }
+
+                mfn = acquire_reserved_page(d, a->memflags);
+                if ( mfn_eq(mfn, INVALID_MFN) )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "%pd: failed to retrieve a reserved page\n",
+                             d);
+                    goto out;
+                }
+            }
             else
             {
                 page = alloc_domheap_pages(d, a->extent_order, a->memflags);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 9004dd41c1..57d28304df 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2661,9 +2661,8 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     }
 }
 
-static bool __init prepare_staticmem_pages(struct page_info *pg,
-                                           unsigned long nr_mfns,
-                                           unsigned int memflags)
+static bool prepare_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                    unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -2744,21 +2743,9 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     return pg;
 }
 
-/*
- * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
- * then assign them to one specific domain #d.
- */
-int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
-                                   unsigned int nr_mfns, unsigned int memflags)
+static int assign_domstatic_pages(struct domain *d, struct page_info *pg,
+                                  unsigned int nr_mfns, unsigned int memflags)
 {
-    struct page_info *pg;
-
-    ASSERT(!in_irq());
-
-    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
-    if ( !pg )
-        return -ENOENT;
-
     if ( !d || (memflags & (MEMF_no_owner | MEMF_no_refcount)) )
     {
         /*
@@ -2777,6 +2764,55 @@ int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
 
     return 0;
 }
+
+/*
+ * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ */
+int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
+                                   unsigned int nr_mfns, unsigned int memflags)
+{
+    struct page_info *pg;
+
+    ASSERT(!in_irq());
+
+    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
+    if ( !pg )
+        return -ENOENT;
+
+    if ( assign_domstatic_pages(d, pg, nr_mfns, memflags) )
+        return -EINVAL;
+
+    return 0;
+}
+
+/*
+ * Acquire a page from reserved page list(resv_page_list), when populating
+ * memory for static domain on runtime.
+ */
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
+{
+    struct page_info *page;
+
+    spin_lock(&d->page_alloc_lock);
+    /* Acquire a page from reserved page list(resv_page_list). */
+    page = page_list_remove_head(&d->resv_page_list);
+    spin_unlock(&d->page_alloc_lock);
+    if ( unlikely(!page) )
+        return INVALID_MFN;
+
+    if ( !prepare_staticmem_pages(page, 1, memflags) )
+        goto fail;
+
+    if ( assign_domstatic_pages(d, page, 1, memflags) )
+        goto fail;
+
+    return page_to_mfn(page);
+
+ fail:
+    page_list_add_tail(page, &d->resv_page_list);
+    return INVALID_MFN;
+}
 #endif
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index e80b4bdcde..e100151e50 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -90,6 +90,7 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags);
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:33:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:33:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342932.568070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTiR-0007Z6-Jq; Tue, 07 Jun 2022 07:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342932.568070; Tue, 07 Jun 2022 07:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTiR-0007Yz-G0; Tue, 07 Jun 2022 07:33:27 +0000
Received: by outflank-mailman (input) for mailman id 342932;
 Tue, 07 Jun 2022 07:33:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgN-0003fm-7S
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:19 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id d1c68291-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:18 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9A2BA1480;
 Tue,  7 Jun 2022 00:31:17 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D68233F66F;
 Tue,  7 Jun 2022 00:31:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1c68291-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 7/9] xen/arm: unpopulate memory when domain is static
Date: Tue,  7 Jun 2022 15:30:29 +0800
Message-Id: <20220607073031.722174-8-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today, when a domain unpopulates memory at runtime, it always hands the
memory back to the heap allocator. This is a problem if the domain is
static.

Pages serving as guest RAM for a static domain shall be reserved to that
domain only and never be used for any other purpose, so they shall never
go back to the heap allocator.

This commit puts reserved pages on the new list resv_page_list only after
having taken them off the "normal" list, once the last reference has been
dropped.
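
The freeing policy can be sketched standalone as follows. The types are
simplified assumptions for the sketch (PGC_static here is just a bit in a
hand-rolled count_info, not Xen's real field layout):

```c
#include <stddef.h>

/* When the last reference to a page is dropped, a static page goes onto
 * the domain's resv_page_list instead of back to the heap. */
#define PGC_static (1U << 0)

struct page { unsigned int count_info; struct page *next; };

struct dom {
    struct page *resv_head;   /* reserved (static) pages stay here */
    struct page *heap_head;   /* everything else returns to the "heap" */
};

static void free_domain_page(struct dom *d, struct page *pg)
{
    if ( pg->count_info & PGC_static )
    {
        pg->next = d->resv_head;      /* kept for this domain only */
        d->resv_head = pg;
    }
    else
    {
        pg->next = d->heap_head;      /* normal path: back to the heap */
        d->heap_head = pg;
    }
}

/* A static page lands on the resv list, a normal page on the heap list. */
static int demo(void)
{
    struct page s = { PGC_static, NULL }, n = { 0, NULL };
    struct dom d = { NULL, NULL };

    free_domain_page(&d, &s);
    free_domain_page(&d, &n);
    return d.resv_head == &s && d.heap_head == &n;
}
```

Routing on a per-page flag at free time means callers of the generic free
path need no knowledge of whether the domain is static.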

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v6 changes:
- refine in-code comment
- move PGC_static !CONFIG_STATIC_MEMORY definition to common header
---
v5 changes:
- adapt this patch for PGC_staticmem
---
v4 changes:
- no changes
---
v3 changes:
- have page_list_del() just once out of the if()
- remove resv_pages counter
- make arch_free_heap_page be an expression, not a compound statement.
---
v2 changes:
- put reserved pages on resv_page_list after having taken them off
the "normal" list
---
 xen/arch/arm/include/asm/mm.h | 12 ++++++++++++
 xen/common/domain.c           |  4 ++++
 xen/common/page_alloc.c       |  4 ----
 xen/include/xen/mm.h          |  4 ++++
 xen/include/xen/sched.h       |  3 +++
 5 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 7442893e77..2ce4d80796 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -360,6 +360,18 @@ void clear_and_clean_page(struct page_info *page);
 
 unsigned int arch_get_dma_bitsize(void);
 
+/*
+ * Put free pages on the resv page list after having taken them
+ * off the "normal" page list, when pages from static memory
+ */
+#ifdef CONFIG_STATIC_MEMORY
+#define arch_free_heap_page(d, pg) ({                   \
+    page_list_del(pg, page_to_list(d, pg));             \
+    if ( (pg)->count_info & PGC_static )              \
+        page_list_add_tail(pg, &(d)->resv_page_list);   \
+})
+#endif
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/common/domain.c b/xen/common/domain.c
index a3ef991bd1..a49574fa24 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -604,6 +604,10 @@ struct domain *domain_create(domid_t domid,
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
+#ifdef CONFIG_STATIC_MEMORY
+    INIT_PAGE_LIST_HEAD(&d->resv_page_list);
+#endif
+
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7fb28e2e07..886b5d82a2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,10 +151,6 @@
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
 
-#ifndef PGC_static
-#define PGC_static 0
-#endif
-
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
  * e.g. 'badpage=0x3f45,0x8a321'.
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 1c4ddb336b..e80b4bdcde 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -210,6 +210,10 @@ extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+#ifndef PGC_static
+#define PGC_static 0
+#endif
+
 static inline bool is_special_page(const struct page_info *page)
 {
     return is_xen_heap_page(page) || (page->count_info & PGC_extra);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5191853c18..bd2782b3c5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -381,6 +381,9 @@ struct domain
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
+#ifdef CONFIG_STATIC_MEMORY
+    struct page_list_head resv_page_list; /* linked list */
+#endif
 
     /*
      * This field should only be directly accessed by domain_adjust_tot_pages()
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:33:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342958.568081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTir-0008Sd-U2; Tue, 07 Jun 2022 07:33:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342958.568081; Tue, 07 Jun 2022 07:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTir-0008SW-Q1; Tue, 07 Jun 2022 07:33:53 +0000
Received: by outflank-mailman (input) for mailman id 342958;
 Tue, 07 Jun 2022 07:33:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uj6L=WO=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nyTgR-0003fm-B3
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:31:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id d40a8812-e633-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:31:21 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 47AD5143D;
 Tue,  7 Jun 2022 00:31:21 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 17FD33F66F;
 Tue,  7 Jun 2022 00:31:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d40a8812-e633-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v6 8/9] xen: introduce prepare_staticmem_pages
Date: Tue,  7 Jun 2022 15:30:30 +0800
Message-Id: <20220607073031.722174-9-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220607073031.722174-1-Penny.Zheng@arm.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Later, we want to use acquire_domstatic_pages() to populate memory for a
static domain at runtime. However, it does a lot of pointless work
(checking mfn_valid(), scrubbing the free part, cleaning the cache...)
given that we already know the pages are valid and belong to the guest.

This commit splits acquire_staticmem_pages() in two, introducing
prepare_staticmem_pages() to bypass all the "pointless work".
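
The shape of the split can be sketched standalone as below. The types are
simplified assumptions, not Xen's real code: a validating wrapper keeps the
per-page checks for the boot path, while the trusted core only claims the
pages, so runtime callers that already know their page is valid can call
the core directly.

```c
#include <stddef.h>
#include <stdbool.h>

struct page { bool valid; bool in_use; };

/* Trusted core: no validity checks, just claim the range. */
static bool prepare_pages(struct page *pg, size_t nr)
{
    for ( size_t i = 0; i < nr; i++ )
    {
        if ( pg[i].in_use )
            return false;    /* a real version would also unwind here */
        pg[i].in_use = true;
    }
    return true;
}

/* Boot-time wrapper: validate every page, then delegate to the core. */
static struct page *acquire_pages(struct page *pg, size_t nr)
{
    for ( size_t i = 0; i < nr; i++ )
        if ( !pg[i].valid )
            return NULL;
    return prepare_pages(pg, nr) ? pg : NULL;
}

/* Valid pages are acquired once; re-acquiring the same range fails. */
static int demo(void)
{
    struct page pgs[2] = { { true, false }, { true, false } };

    if ( acquire_pages(pgs, 2) != pgs )
        return 0;
    return acquire_pages(pgs, 2) == NULL;
}
```

Keeping the checks only in the wrapper preserves the boot-time safety net
while letting the runtime path skip work it can prove unnecessary.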

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v6 changes:
- adapt to PGC_static
---
v5 changes:
- new commit
---
 xen/common/page_alloc.c | 61 ++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 886b5d82a2..9004dd41c1 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2661,26 +2661,13 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     }
 }
 
-/*
- * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
- * static memory.
- * This function needs to be reworked if used outside of boot.
- */
-static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
-                                                         unsigned long nr_mfns,
-                                                         unsigned int memflags)
+static bool __init prepare_staticmem_pages(struct page_info *pg,
+                                           unsigned long nr_mfns,
+                                           unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
     unsigned long i;
-    struct page_info *pg;
-
-    ASSERT(nr_mfns);
-    for ( i = 0; i < nr_mfns; i++ )
-        if ( !mfn_valid(mfn_add(smfn, i)) )
-            return NULL;
-
-    pg = mfn_to_page(smfn);
 
     spin_lock(&heap_lock);
 
@@ -2691,7 +2678,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         {
             printk(XENLOG_ERR
                    "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
-                   i, mfn_x(smfn) + i,
+                   i, mfn_x(page_to_mfn(pg)) + i,
                    pg[i].count_info, pg[i].tlbflush_timestamp);
             goto out_err;
         }
@@ -2715,6 +2702,38 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
 
+    return true;
+
+ out_err:
+    while ( i-- )
+        pg[i].count_info = PGC_static | PGC_state_free;
+
+    spin_unlock(&heap_lock);
+
+    return false;
+}
+
+/*
+ * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
+ * static memory.
+ * This function needs to be reworked if used outside of boot.
+ */
+static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
+                                                         unsigned long nr_mfns,
+                                                         unsigned int memflags)
+{
+    unsigned long i;
+    struct page_info *pg;
+
+    ASSERT(nr_mfns);
+    for ( i = 0; i < nr_mfns; i++ )
+        if ( !mfn_valid(mfn_add(smfn, i)) )
+            return NULL;
+
+    pg = mfn_to_page(smfn);
+    if ( !prepare_staticmem_pages(pg, nr_mfns, memflags) )
+        return NULL;
+
     /*
      * Ensure cache and RAM are consistent for platforms where the guest
      * can control its own visibility of/through the cache.
@@ -2723,14 +2742,6 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         flush_page_to_ram(mfn_x(smfn) + i, !(memflags & MEMF_no_icache_flush));
 
     return pg;
-
- out_err:
-    while ( i-- )
-        pg[i].count_info = PGC_static | PGC_state_free;
-
-    spin_unlock(&heap_lock);
-
-    return NULL;
 }
 
 /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:43:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:43:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342974.568092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTsC-0001mu-RW; Tue, 07 Jun 2022 07:43:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342974.568092; Tue, 07 Jun 2022 07:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTsC-0001mn-O4; Tue, 07 Jun 2022 07:43:32 +0000
Received: by outflank-mailman (input) for mailman id 342974;
 Tue, 07 Jun 2022 07:43:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyTsB-0001mh-1U
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:43:31 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 85925ac3-e635-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 09:43:29 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2108.outbound.protection.outlook.com [104.47.17.108]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-46-3RJYIxp5PACP9e7h-z-wPA-1; Tue, 07 Jun 2022 09:43:25 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB5379.eurprd04.prod.outlook.com (2603:10a6:208:11e::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 07:43:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 07:43:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85925ac3-e635-11ec-b605-df0040e90b76
Message-ID: <c8f22652-abd5-76f8-75d3-ed581d1c4752@suse.com>
Date: Tue, 7 Jun 2022 09:43:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
 <YpoeuOJPS0gobz5u@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpoeuOJPS0gobz5u@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR01CA0038.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 03.06.2022 16:46, Roger Pau Monné wrote:
> On Fri, Jun 03, 2022 at 02:49:54PM +0200, Jan Beulich wrote:
>> On 26.05.2022 13:11, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
>>> 
>>>  void vmx_update_debug_state(struct vcpu *v)
>>>  {
>>> +    unsigned int mask = 1u << TRAP_int3;
>>> +
>>> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
>>
>> I'm puzzled by the lack of symmetry between this and ...
>>
>>> +        /*
>>> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
>>> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
>>> +         */
>>> +        mask |= 1u << TRAP_debug;
>>> +
>>>      if ( v->arch.hvm.debug_state_latch )
>>> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
>>> +        v->arch.hvm.vmx.exception_bitmap |= mask;
>>>      else
>>> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
>>> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
>>> 
>>>      vmx_vmcs_enter(v);
>>>      vmx_update_exception_bitmap(v);
>>> @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>          switch ( vector )
>>>          {
>>>          case TRAP_debug:
>>> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
>>> +                goto exit_and_crash;
>>
>> ... this condition. Shouldn't one be the inverse of the other (and
>> then it's the one down here which wants adjusting)?
> 
> The condition in vmx_update_debug_state() sets the mask so that
> TRAP_debug will only be added or removed from the bitmap if
> !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting (note that
> otherwise TRAP_debug is unconditionally set if
> !cpu_has_vmx_notify_vm_exiting).
> 
> Hence it's impossible to get a VMExit TRAP_debug with
> cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting because
> TRAP_debug will never be set by vmx_update_debug_state() in that
> case.

Hmm, yes, I've been misguided by you not altering the existing setting
of v->arch.hvm.vmx.exception_bitmap in construct_vmcs(). Instead you
add an entirely new block of code near the bottom of the function. Is
there any chance you could move up that adjustment, perhaps along the
lines of

    v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
              | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
              | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
    if ( cpu_has_vmx_notify_vm_exiting )
    {
        __vmwrite(NOTIFY_WINDOW, vm_notify_window);
        /*
         * Disable #AC and #DB interception: by using VM Notify Xen is
         * guaranteed to get a VM exit even if the guest manages to lock the
         * CPU.
         */
        v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
                                              (1U << TRAP_alignment_check));
    }
    vmx_update_exception_bitmap(v);

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:45:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342982.568103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTtj-0002MS-70; Tue, 07 Jun 2022 07:45:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342982.568103; Tue, 07 Jun 2022 07:45:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTtj-0002ML-3q; Tue, 07 Jun 2022 07:45:07 +0000
Received: by outflank-mailman (input) for mailman id 342982;
 Tue, 07 Jun 2022 07:45:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyTth-0002LF-MW
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:45:05 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bea83ba9-e635-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 09:45:05 +0200 (CEST)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-10-T_y9WbukMSe1Wj9xMsjCOA-1; Tue, 07 Jun 2022 09:44:59 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0402MB2892.eurprd04.prod.outlook.com (2603:10a6:3:e1::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 07:44:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 07:44:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bea83ba9-e635-11ec-b605-df0040e90b76
Message-ID: <eb7cc2b9-51cd-b3f5-8e48-9311d0d11cb6@suse.com>
Date: Tue, 7 Jun 2022 09:44:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/9] xen/arm: rename PGC_reserved to PGC_static
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-2-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220607073031.722174-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR07CA0047.eurprd07.prod.outlook.com
 (2603:10a6:20b:46b::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 07.06.2022 09:30, Penny Zheng wrote:
> PGC_reserved could be ambiguous, and we have to specify what the pages
> are reserved for, so this commit renames PGC_reserved to PGC_static,
> which clearly indicates that the page is reserved for static memory.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:46:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342991.568114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTum-0002zW-L4; Tue, 07 Jun 2022 07:46:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342991.568114; Tue, 07 Jun 2022 07:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyTum-0002zP-HF; Tue, 07 Jun 2022 07:46:12 +0000
Received: by outflank-mailman (input) for mailman id 342991;
 Tue, 07 Jun 2022 07:46:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyTul-0002tu-HW
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:46:11 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2807bc1-e635-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:46:05 +0200 (CEST)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2176.outbound.protection.outlook.com [104.47.17.176]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-27-2AHxPTiDPIy9Vb68RN653g-1; Tue, 07 Jun 2022 09:46:03 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0402MB2892.eurprd04.prod.outlook.com (2603:10a6:3:e1::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 07:45:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 07:45:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2807bc1-e635-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654587964;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ff1Ip9EsM18YRgzfQAU0fgBNQDcFsnC4bdh+5wp6gHE=;
	b=N+zF2jFgH9oaH2VZNO//FSvKPQG6OxfgV+qJuC+HfNEyz8+c1dDIsh/NeFya9Wf+4yDk9h
	iX33IoKObXlS0ghnmS7v+jlO0KIkbNg7RsY7EYeQxwYBUWnZK+VT47szWrIornRKEa2FDC
	/onXy0PoSuD8kr0KROGFkgnylWEPpEw=
X-MC-Unique: 2AHxPTiDPIy9Vb68RN653g-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X/CwzGdAwwb803NOdAUcXdWHtqTdJueA9TJ6sy1BGejaQryzclYHScbnOuoVDKMytvjlQ9gdU88uyiHDsHvtz6kHeempoiBWPXYjFy4q4qeRFaVN+a/oqvZ3P69qO49EfC9kHB+rSoaLE1kQ9pXV1R+abuqDpxXUQO5U3PAttmS978jc9Us1KhD+vHMGhnzm+N3dEceeiLg6ajZO9PhSTEc75FueA5ZcyZjwUhOcdbdBc+18jGUCwDxR8qBNpwMUe7Er8T7Fp7S6HwoJwTQo7whQHjxZI/F3rTp7Raz3ziHx/HXYPJduL2IP3GQxU0RM7v4GHalqMfpv1IBD+D5Seg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ff1Ip9EsM18YRgzfQAU0fgBNQDcFsnC4bdh+5wp6gHE=;
 b=NxdhpoUgkwtewJUkTvnm9VJ2MVEPENo+utiIr+UWKs/DbTeSbC/MofdnezjbFZ82sH4ynkJ+kH+WicwLplKTjF4xBBDNmFan0mqm3OxjimbGS6KWRZJKUwnE1jyPx+1UszGKKCuVYfULtuv4PE2OGk/lttPJELWlrEkV77abEwA5QGRE3UjqJszwyF4sfXFa/BYo94mfyDSx37vUq4zsx80UoB0LVeSNw7JVCdllIJX5BE434PIVJJBxQ4/xkc+nZ3sZ7NyiGNL76p/ASRjovlj64viEKpjzxZVh4vy3ElL6491tx4r2d1ObKIvz0MxUyOmxtcEpufAl1k4+Cxp11Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ff1Ip9EsM18YRgzfQAU0fgBNQDcFsnC4bdh+5wp6gHE=;
 b=U35rvm68jxrvC7spb63JRePTVuzgrQ5NDWDUN+ZH1bD/FeOrTAAkPd5VuWfhUa+yqWpdnqAuELdlnAf7XhN0n5cLQhsy93lG00StKKb47M6eDT89H5DDuJ+NX/Jl8YsEb73vkRUM85T8FnZLdl/V+GLUpPjvAiE2pv3Sj0CaM4pTX0ZzxFZ1NofbIcYOVv6f4T/uXoU7i2dXRevlqvw/HFblc6n941+VDBV/gyNjLA8DPGL2e8Ngl09L3wLuBbb/oacO3Qu1WnpejmmXRxqSEeqIKCQSbdqY2ss8cyohx56z582bNfgdTpfJ4htmOc4PFaplOCSjzUr9QTvXRTWPgw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <46f151c5-b7e3-527b-a7c1-4d06e612eaaf@suse.com>
Date: Tue, 7 Jun 2022 09:45:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-3-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220607073031.722174-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR07CA0044.eurprd07.prod.outlook.com
 (2603:10a6:20b:46b::30) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6157a1ee-46a1-4b5a-8410-08da4859bcb1
X-MS-TrafficTypeDiagnostic: HE1PR0402MB2892:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0402MB289231C16C35A50ACEA40B83B3A59@HE1PR0402MB2892.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6157a1ee-46a1-4b5a-8410-08da4859bcb1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 07:45:48.2323
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TpakXyxUhy6Mad/155Gj8FZ+BNB12val6agMlfA67UEfZz+gSClv6BPyD/9naeWT9ZvpIrrbYVrerDdoTEdN0w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0402MB2892

On 07.06.2022 09:30, Penny Zheng wrote:
> Pages used as guest RAM for a static domain shall be reserved to that
> domain only.
> So even if reserved pages end up being used for another purpose,
> they must not be freed back to the heap, even when the last ref gets
> dropped.
> 
> free_staticmem_pages will be called by free_heap_pages at runtime when
> a static domain frees a memory resource, so let's drop the __init
> attribute.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:55:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:55:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.342999.568125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyU3G-0004af-Hk; Tue, 07 Jun 2022 07:54:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 342999.568125; Tue, 07 Jun 2022 07:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyU3G-0004aY-E6; Tue, 07 Jun 2022 07:54:58 +0000
Received: by outflank-mailman (input) for mailman id 342999;
 Tue, 07 Jun 2022 07:54:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyU3E-0004aS-Ul
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:54:56 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ee73a87-e637-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 09:54:55 +0200 (CEST)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-12-r3yzcu60MjO8o4-kgtsuZg-2; Tue, 07 Jun 2022 09:54:51 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7625.eurprd04.prod.outlook.com (2603:10a6:10:202::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 07:54:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 07:54:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ee73a87-e637-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654588495;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MTc/EEkBFCDuOeKCw4rqXpZJkxzvOOQxY29jKUWEpAE=;
	b=JKTpMlzHr/+ByIeIxoC1GGXhsV66jTsYlodPIJwAfXoazWFFxkdauyT/gx5eEPyP0kxUk/
	j3GixXp5jjcW47lOU2GBryg1h7h/W9KwU4Vha/huFz++TxR4WVDRuwpWJEgP1i+i4KA48c
	/NVqtIv2A+0CCn6AfKJc0XsZA9A/cQc=
X-MC-Unique: r3yzcu60MjO8o4-kgtsuZg-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QtvWjyC1060ev5OhtCp/yhDqysGkrPC3yDe1KLoxfqJwp6au8mNZMBkQsXgsuqej943Kn4FDLH9xZIak6PdM7XpSQ/EQ99bMbBjKh4AKvHaBb74kvWxrc0DBJ/cCVMoZKI+hT/8OR5caIkbkxdoDHkSZVSeYs8p8hOcvgn37ILcYqQdY/0ciuNnKXEjs36nqyqM1OlsPdRgPuxpLW8/rwU6/iaN1raz0Sq+A2KFkCjkKfY7N7+66o6dLhSNSUKG/Ug6zOqCcs1WqN9E+j6DAovSDDubZL+N3E3BUcFmAiyPfqqXcVFR8Xz70oKuwi9kcUb4UKE0OY9+Vib/JiLribg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MTc/EEkBFCDuOeKCw4rqXpZJkxzvOOQxY29jKUWEpAE=;
 b=SBRziRh0Gy/HnHEnnqeZ4RfcRDBsdU3y14+EfwsWcM7RmyyPvlwgnbdI8lRL/OBtXThebgmssCg13srm2s1ERTeC5LIJx1HlCGpFlcKiISLb/erYOd3EOxLGxJQWa6GEtFZcY32erf/5ckWsH5m8VKEYD8+r6GSY/pccyyzQDdcgl2za9iaqqXJDkjjYVg8pdviPW8Q/abxsQXsovTYswGuzQYz4+9VdJ3ukJPSOHuNVrRo7zqud7neN9g7O3cVIrmr74ufItTMteiK53MwQgWwjVuovaKAPDepy3LsFSCREj0Bd2M6gXyp1Rh6HS2fBUvZ2veRuYKztrDxkKIlrEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MTc/EEkBFCDuOeKCw4rqXpZJkxzvOOQxY29jKUWEpAE=;
 b=ISY9UYAwBAjIzzzzE1fPOo330RHLSeOz7CU2uhTknkD3GSsEUqX9P5rEUv5GqSk0YpSX2E7RScBT0jMD8ijSzNvOUFIgUDn+RVGbX36jvqQSwFqOC/TTSEGyTej6IC1ZoxsBTk1m159v+Ja2tVL769HKO+MFrk/JQYogdimRtEF4eHqQ/ZA4lKiGhspvGR8TXy9zBZZMIf1FfMHJqSG6UYevF3WNHHmXG8btuNjjAEsgga8CZzZuHeAUOyKSl1rUNnnCvsf9xgI5GWtZk40VsnGFRWkfYwbxFscRSxgvPmg8axmiWCuVT/81ZvZ96pPj0nljYpOpaDETavXQXI6v7g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <291312d3-9a05-fd12-86ee-a70bde5d9c89@suse.com>
Date: Tue, 7 Jun 2022 09:54:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 6/9] xen/arm: introduce CDF_staticmem
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-7-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220607073031.722174-7-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0701CA0057.eurprd07.prod.outlook.com
 (2603:10a6:203:2::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d0e70e33-7f96-4a3e-a282-08da485aff4f
X-MS-TrafficTypeDiagnostic: DBBPR04MB7625:EE_
X-Microsoft-Antispam-PRVS:
	<DBBPR04MB76255E94AE66F077B78B75D7B3A59@DBBPR04MB7625.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0e70e33-7f96-4a3e-a282-08da485aff4f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 07:54:49.5261
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4QgCFOxnZLUj3wkuflNSs78qFSqGX9ZhBef29hAuDUq3qZtv2lhLnvif+NBi0YNAnI8obD/OESzNTyFhfYvsUw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7625

On 07.06.2022 09:30, Penny Zheng wrote:
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3287,9 +3287,12 @@ void __init create_domUs(void)
>          if ( !dt_device_is_compatible(node, "xen,domain") )
>              continue;
>  
> +        if ( dt_find_property(node, "xen,static-mem", NULL) )
> +            flags |= CDF_staticmem;
> +
>          if ( dt_property_read_bool(node, "direct-map") )
>          {
> -            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !dt_find_property(node, "xen,static-mem", NULL) )
> +            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !(flags & CDF_staticmem) )
>                  panic("direct-map is not valid for domain %s without static allocation.\n",
>                        dt_node_name(node));

I think either a change is needed elsewhere (you may have it in a later
patch, but it would appear to belong here), or you may want to deal
right here with CDF_staticmem being invalid without STATIC_MEMORY:

        if ( IS_ENABLED(CONFIG_STATIC_MEMORY) &&
             dt_find_property(node, "xen,static-mem", NULL) )
            flags |= CDF_staticmem;

        if ( dt_property_read_bool(node, "direct-map") )
        {
            if ( !(flags & CDF_staticmem) )
                panic("direct-map is not valid for domain %s without static allocation.\n",
                      dt_node_name(node));

Another option would seem to be

/* Is domain memory on static allocation? */
#ifdef CONFIG_STATIC_MEMORY
# define CDF_staticmem            (1U << 2)
#else
# define CDF_staticmem            0
#endif

in xen/domain.h (at which point, as far as I can tell, no IS_ENABLED()
would be needed anymore).
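[Editorial note: a minimal stand-alone sketch of that second option, compiled
outside Xen. CONFIG_STATIC_MEMORY here is just a plain preprocessor symbol
standing in for the real Kconfig option, and uses_static_mem() is a
hypothetical helper illustrating how call sites simplify; neither is actual
Xen code.]

```c
#include <stdbool.h>

/*
 * When CONFIG_STATIC_MEMORY is not defined, CDF_staticmem collapses to 0,
 * so any "flags & CDF_staticmem" test is constant-false and the separate
 * IS_ENABLED(CONFIG_STATIC_MEMORY) check at call sites becomes redundant.
 */
#ifdef CONFIG_STATIC_MEMORY
# define CDF_staticmem            (1U << 2)
#else
# define CDF_staticmem            0
#endif

/* Hypothetical call-site helper: no IS_ENABLED() needed here. */
static bool uses_static_mem(unsigned int flags)
{
    return flags & CDF_staticmem;
}
```

With the option disabled, the compiler folds the test away entirely, which is
the point of preferring the conditional definition over scattered
IS_ENABLED() checks.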

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 07:58:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 07:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343007.568136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyU6Y-0005DS-0Q; Tue, 07 Jun 2022 07:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343007.568136; Tue, 07 Jun 2022 07:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyU6X-0005DL-Th; Tue, 07 Jun 2022 07:58:21 +0000
Received: by outflank-mailman (input) for mailman id 343007;
 Tue, 07 Jun 2022 07:58:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyU6W-0005DE-Ev
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 07:58:20 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9830e31b-e637-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 09:58:19 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2108.outbound.protection.outlook.com [104.47.17.108]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-36-4IfsE_cyMCOvfs8rHzQazg-1; Tue, 07 Jun 2022 09:58:15 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8008.eurprd04.prod.outlook.com (2603:10a6:20b:2a7::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 07:58:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 07:58:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9830e31b-e637-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654588699;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hP7yVvo9zlakhizZ+biJ02m01sYJKs/TzFLtYZT6wO0=;
	b=UHPxMDgozc7ZJAJ2bokCzBzRWKoRQ3rG+l66dmB4TEqB3lGty1/AZiwzTWXxBGRQKfRKff
	aMjbBr58ZHOssYijKVx8E+kTc3yu7gPaISAepPjmI22vt/Mb6yxvgKx1ICJMm09ZlSNXLX
	Q+golWLKJ0tJHH2xbNwi/zUzsvf6W5E=
X-MC-Unique: 4IfsE_cyMCOvfs8rHzQazg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Efj7rFc6WExoH0YcStL7WLDd6E56KopzDTv+8ntv+GGM3BuJKHr7HQTPuygTP41O+pTskS3+Xs+jsAj5XnPxg0L879a/GBMN0Qj2rq96nhCGKkSWhrNjdR6wJpgypMNJ/PtsQRhKyDsx+D6e1cdrJua6kPs9LvUjk4pPGWePFy3KshJSy53/QRrm/9Ft0IxFICrM07mBOC/XbUkBf8V5fkoQRI9dwngOqs+sIVvBrpyyxGRSf9f2YTMDEe9UVGz5dHzjj5oOZOqQqTX5YWoA44jgz9BVYim2RtwnVGrU1NGXnZstFoNLQQSc1QsRhT6WKnz3A/SWmKv4sXd8HDWzLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hP7yVvo9zlakhizZ+biJ02m01sYJKs/TzFLtYZT6wO0=;
 b=T4SX2H2rbi+9odP3pmFGiup7/7DWoJDG4eje1ZMZ/as+NxQ4ywci94+IsKExBAlc5fp51Dp5VRW6ki1L5gDqFokrYqBKweowb3VLgcuKuytxepg01ZCc2BN99AEWExcU0Z8ac0QoSqneDlmvnmJaJXz5RkDX6iYxzC+2qsN0zFWggJ0+JqAfQCdiubtj0eHqJ65cgJwErrH44TXMfiZF571GhRtzJ70hs7U2WNzyb3YTPWtrCHt9m8Obgidjixg8ggiFbJZkukyTuyCg0Q0NgJqySzSEAfH/Zffq/jjC7hHDB3Ye0kdC0M794THCNo+dHByGXdhuMdZuceOoxp4lFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hP7yVvo9zlakhizZ+biJ02m01sYJKs/TzFLtYZT6wO0=;
 b=DkMtSeKGsBNKB/i1TLLLt/mws34MfaEHbbnt1in+VWmRUWnrNOS+M8J2KPBO9ZtQ4/UOGn0qIwxmfMLe1WgkBQIt1R+9+tEstJIWRxg3pW2szdTdmQjviD9hHpEkCvGAGgFzdzrAsd4R4bbx8wVqEO7rotsIawSw/mE+qKopqVZS2s4UnEdVRZ4F0k+LY2yfNu+cMIeIWTPaWVCBkEbWREById8/vMMlWrq/C8+HKCMzZDhjrbgEfKJn10Ezm8YXJoghLrt3Sg/QVvc6U6hguhrWAVkB1hQxXhFRDrA34LGpoE96uei5dG5ld0d3vAirZEDWTx/ZXyUheHa+8/PP0A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c0dd7fcf-28b5-d206-701c-8c3e62597eb6@suse.com>
Date: Tue, 7 Jun 2022 09:58:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 9/9] xen: retrieve reserved pages on populate_physmap
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-10-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220607073031.722174-10-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P194CA0101.EURP194.PROD.OUTLOOK.COM
 (2603:10a6:209:8f::42) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b7b2f42d-9f58-49ae-7544-08da485b78bf
X-MS-TrafficTypeDiagnostic: AS8PR04MB8008:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB8008ED9F12485C8A7B0D2E29B3A59@AS8PR04MB8008.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WFpvalZWUktpUUVHR2kvdElCcVdqZ2M1MUZLMGw0M1JsMkFBdjhGMmNyZ0hJ?=
 =?utf-8?B?Z1VUNlFXYTdLbzNrYWZDaW92clhYQ2NlbnVhYVBjQ2ZsMldXelV3S2VYSlky?=
 =?utf-8?B?WjRHaFlFK0R3VEZyQkxGMzE2N1FwVEpmSXAzRDFyNVBOQVhiVHZ3dHlWSGdq?=
 =?utf-8?B?QTV0Mi9XVHQ5Z2lQb0JsTFV5OTdYUTA2Zm5JRXBlUEJEUjhTMzJFZmNsRjhr?=
 =?utf-8?B?TjJJalJDZ1AxWTlPVWN5Z1JLekFKL3FQNmZlWm9SLzQ0WXdMYnh0dG1weldS?=
 =?utf-8?B?UEdkdThERWU0bVYrWG5kRnAxbWwvVkgzbVpwcDJndzZrblFXeFhTRjFkSmpQ?=
 =?utf-8?B?RW1wSXlWeitPWWdTQUNVTko2bzlFZFUzd3pXcWxzMlVQbGp6S3kweEhUMzBX?=
 =?utf-8?B?L3o3KzBtWXd2OE5IMUpVOHBjcHVzQUJtaEY4R2dadzU4MS9CYzhGWGw3aWxC?=
 =?utf-8?B?U1dhRSsxaUZrSXBtWW54djdBeEI5bVUrZHdzTk4vL0hBWW9hMi9KeVh6ZW9o?=
 =?utf-8?B?YXQ2Ykk5RkdNZ3IyRFFpUUMvcWc3dFRoYnFwenY2ak9ieExWMDVoRHFjb1cr?=
 =?utf-8?B?RWVraW5WWk1NQitwa0RWRU05dWF5c2lJL3dtaWNmV1RwaXM2YXpFeE8yY2lF?=
 =?utf-8?B?cThZdUd6MjlIM241TkFEa2VpZUVzeHVNSlBpczhGaEpEYkJMZU51U2JTcWZ2?=
 =?utf-8?B?TWRmdzZIOTVwdWNRWEJSZ050cXRBZmdmOE84RHo4ODMvMHRYdTFLSVhEbVFG?=
 =?utf-8?B?U3pYclhkS1lKUHh6MjFRRnc5R0pzbGdvUHBBSEl2SU5PaVJuYUJHYkxsRTZF?=
 =?utf-8?B?MFpXdUFkM2dGSW1zZGYrZFdYZTN3ZG5SMDdqZ1kxeGZYV0RWVXNxeWxMdUlC?=
 =?utf-8?B?bXJzaHhMWCt3NnczTktBT0lJOE5HaXVvQVc5eXBqREx4SjFaaHVZd2NvbzRY?=
 =?utf-8?B?NjdBQmM3Y0ZFM29IamdkblZ2Tit6eWdScE5FaDFVYkRGM3hoQ0lWYTRkWVUy?=
 =?utf-8?B?M0x4dDAxYnUzSFRoL0h0bUQ0Vmg4L3lsaGVLS2lIamUrUnhNVXNoSThiUW1W?=
 =?utf-8?B?ekpMQ1pJbWdtUEpqQmlacmpXanB1RnU1MWlNQlkwdCtKcURGdldoSlh2SWZI?=
 =?utf-8?B?MFh0TVZabWgvc2dwdXBzcGY5dDJmdTM0bkxDditPbjhVOU1zOGE0RVgySWMx?=
 =?utf-8?B?WHQwVUg3dmloUy94VG0wQ0p0RmR5MUd6MlhmWm5mTHlHZGpQTUpwOFhmZVhq?=
 =?utf-8?B?Tm5LNHp3VHlKQVJkenpwamh1b2k0NDJkK0FYdzN1UWpNK1RiMW1kRGI1RWRv?=
 =?utf-8?B?U1dmRk14V29rUmQ0WnlteFlheUFBNHk1cm9Ca3VqditBS1BEVytwOFBIS3Zv?=
 =?utf-8?B?d1lEU0txM3NrTkJ1U0FCdVlxS2xVQ0ZqOEt4b09pSzlaYU5xUWEyMkRXVDB1?=
 =?utf-8?B?NGRZZ1JSaER3SGtxcGxkbFdjdFhMUGIvRkRyMEJzbjZUUUUrakJSaHBMYUZK?=
 =?utf-8?B?OTdrNjVweHVWTHhTd3F2aE0rQjNRRTVkRXVKK0RCOXVmc01ybHdDbXdkc2hH?=
 =?utf-8?B?bmJPZEliMmVPdUNMTEs4cXlRYVc0REYzNXhqVTlNejM1bzRQNVNWdlFJOWZs?=
 =?utf-8?B?MDEzUlIrdlFhOWtpZDd4U2VUVmpaOHRVajF2akNneWZibGFVL1JlSGFlbEdQ?=
 =?utf-8?B?RmR2YXZJZHBtbE1McVVDc1Jtbk53VGdyZGozeExTdGw5K0c4bVRKcEtVdER6?=
 =?utf-8?B?Z2RxN2REQnIrOTM0ZndNZ0JiSVJjWjgvT3g4VHR3N1FDdk15ZUpsZ1k4UFNk?=
 =?utf-8?B?NzdKQjY5dVdNK0lKMklQT0lVQVVkYWp5bHZzcTFxK3F3RXp1Nk1ySk8vR3lP?=
 =?utf-8?B?SGpKRys5K2hQMWNLd3I0RWw1dTlRa3RYS0lvUlk0M05mT0xRbmk0YjFaWnRM?=
 =?utf-8?B?Nkw1ZkNNYmVxWlJNWEdsSUNnT3lwaHNZcGVDZDZHeFhWYjRKU3VjZGhXTVhW?=
 =?utf-8?B?U25yL09MNDNuVGcvQW9hOGJxdnhHNnZrVEZhRXNUWnFNZnNOamlVRmRUakND?=
 =?utf-8?B?R09VM2ZmVnIwVldYYnNZWjlCNHhReVpMR2dLQ09mSk90eWlRNEkxUkhYQkZT?=
 =?utf-8?B?VE1OWTVCWlhhekdlWUdDSXB5bXJTa3h0RE5lQmlnZnlKNE52dnVtQ0ZlVEpY?=
 =?utf-8?B?eGNnMmY5QXJ5TmU1dmFCUDN5MnZKV0JIcnZFd2NwNCtDYkd3YTFjSFUyR2xw?=
 =?utf-8?B?T2tlTzFYbElhUFhrK1g1dzFRbmhrWWEzSGV2Qkh0RnFwTnVZMnRKUjdUMDhG?=
 =?utf-8?B?SnF4NHdlRUhNb2RLWDczVnRJRTRsN0dLNHcyWlNQaTlYd2Y4RUhjZz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b7b2f42d-9f58-49ae-7544-08da485b78bf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 07:58:13.2633
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3mPk0sZIszKxVC+eGs/c4lRlkCtSIBxmJYK3+pSFWmvHte4/rSJs4Kkaowv5EJhEHGDqoiJG5x4WS3oajzUWjw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8008

On 07.06.2022 09:30, Penny Zheng wrote:
> +/*
> + * Acquire a page from the reserved page list (resv_page_list) when
> + * populating memory for a static domain at runtime.
> + */
> +mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
> +{
> +    struct page_info *page;
> +
> +    spin_lock(&d->page_alloc_lock);
> +    /* Acquire a page from reserved page list(resv_page_list). */
> +    page = page_list_remove_head(&d->resv_page_list);
> +    spin_unlock(&d->page_alloc_lock);

With page removal done under lock, ...

> +    if ( unlikely(!page) )
> +        return INVALID_MFN;
> +
> +    if ( !prepare_staticmem_pages(page, 1, memflags) )
> +        goto fail;
> +
> +    if ( assign_domstatic_pages(d, page, 1, memflags) )
> +        goto fail;
> +
> +    return page_to_mfn(page);
> +
> + fail:
> +    page_list_add_tail(page, &d->resv_page_list);
> +    return INVALID_MFN;

... doesn't re-adding the page to the list also need to be done
with the lock held?

Jan
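The fix being asked for here can be sketched standalone. The following is an illustrative mock only, not the hypervisor code: a pthread mutex and a plain singly linked list stand in for Xen's spinlock and page list types, and a single flag models whether the prepare/assign steps succeed. The point is the failure path, which re-adds the page with the lock held:

```c
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's types; illustrative only. */
struct page_info { struct page_info *next; };

struct domain {
    pthread_mutex_t page_alloc_lock;   /* models d->page_alloc_lock */
    struct page_info *resv_page_list;  /* models d->resv_page_list */
};

static struct page_info *page_list_remove_head(struct page_info **list)
{
    struct page_info *page = *list;

    if ( page )
        *list = page->next;
    return page;
}

static void page_list_add_tail(struct page_info *page, struct page_info **list)
{
    struct page_info **pp = list;

    while ( *pp )
        pp = &(*pp)->next;
    page->next = NULL;
    *pp = page;
}

/* Models prepare_staticmem_pages()/assign_domstatic_pages() succeeding. */
static int setup_succeeds;

struct page_info *acquire_reserved_page(struct domain *d)
{
    struct page_info *page;

    pthread_mutex_lock(&d->page_alloc_lock);
    page = page_list_remove_head(&d->resv_page_list);
    pthread_mutex_unlock(&d->page_alloc_lock);

    if ( !page )
        return NULL;

    if ( !setup_succeeds )
        goto fail;

    return page;

 fail:
    /*
     * Re-add to the list with the lock held, so a concurrent caller
     * cannot race with the list manipulation.
     */
    pthread_mutex_lock(&d->page_alloc_lock);
    page_list_add_tail(page, &d->resv_page_list);
    pthread_mutex_unlock(&d->page_alloc_lock);
    return NULL;
}
```

Both list operations now happen under the lock; only the prepare/assign work is done outside it.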



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 08:03:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 08:03:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343021.568150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyUB1-0007Ch-3y; Tue, 07 Jun 2022 08:02:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343021.568150; Tue, 07 Jun 2022 08:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyUB1-0007Ca-1F; Tue, 07 Jun 2022 08:02:59 +0000
Received: by outflank-mailman (input) for mailman id 343021;
 Tue, 07 Jun 2022 08:02:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyUB0-0007CU-0Z
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 08:02:58 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d7e7706-e638-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 10:02:56 +0200 (CEST)
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2055.outbound.protection.outlook.com [104.47.0.55]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-5-zgrFwNUeNCOOt0mZzJWK0w-2; Tue, 07 Jun 2022 10:02:55 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7735.eurprd04.prod.outlook.com (2603:10a6:20b:2a5::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 08:02:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 08:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d7e7706-e638-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654588976;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ygwaL7VnH1TwlrUp/oCPl8jfiaqxo4uF7WI0QKrbZ8E=;
	b=P0O1PqnNbCgulQ4F77NFBSboWFko2vnlyfCF6+4aVECHbv3g7t4zcJ6BDbfLNAT9jgSjYl
	7Vvsl40OgW7EBKigKCTfcRaL/Hou2B4iWleZp2W99EZ15UxlTZir70l5QvBiOegfr/g5i7
	IEii0QrJuV3vhknoioM95BZvAd60YNc=
X-MC-Unique: zgrFwNUeNCOOt0mZzJWK0w-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S8Beee25w8ip32EWgampJ5X8Fh1Qd/k5VxlLGy9BWILExC8zpwfyJmp02FUSeRfu7QH41eWAsF40ipW0yTCb+K0kFTAaYlDEIsmimT2bm8NOZn2kLmrvStasRIw6P98605cEgZTnH+do+ECiYzPhqZtMUoE4q5gV6GY6HwvD7cMgiEJvZCnZS0tOIwrzHIKs3IE1Hug4Lxr/tdOqt29ddCKA1/vSDW7anDKj4/y4HOP1Fu4WkiVCoB7JyEoz2jpYq7jnRsiGNNpnSnpjyEBjstBrq5MrdcxydJzAqwMwXmM1c6LjWfKGUI/u+gpSMp45rpT82TvTTSIuG+Q9+fgdMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8+UGL11kf7cinjSaSYFT2bKH4Hc9cdI1wPnU9V4K7v4=;
 b=jpz+blz6EN7Rhb9/hrlWV6mYWD5Mj+l/QmaT3Wz/whldPDM3WYnk8yFOs/qHu27D3SC/eN2zbFyh74l2tmbDi9WWkiXZ4ZeXQehuHwhG0bWWCRl4iGL3HP24dFDXCXIvLau60ON89lTe4694Dz4Cc26BIvupVWc/mWCQsqeum/ZSb5e02Hl2kC1hOAb5p7d8+EOWhu3ywSnE8ziJ+I5wvZve4Pf7q14kttaHr6E8LtzIwIxz71aPyUnzhrncy0kPd9SjUAQs9q7ggA04YwCp4dLBDNELjUgFV8eyqdbVk6XKlCbwM5Nkwo/C+j/pAWrdJ0L/NXic8fvx/g8BtTAFVg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8+UGL11kf7cinjSaSYFT2bKH4Hc9cdI1wPnU9V4K7v4=;
 b=JHbadFJiFZF+m6CISH086rk0SoVHHm1DXLihQi2t7OfOXrfOlKMbGBHnRxqxFnrHaMeMJnBFmr3/OC9kom9875ZEtK11FcBg/Yie6j8D0MR8Rn6XHtGqop7uBMSgdo5g+U/Y2twa69DOa7eKEo4Oc3aHgBO/vG0zSJ3+HOX6Dzwutzs/hkUki3KpJ/IUCW/t3m3WLi/y3NBJDqd/UuuU9lGQ8uruXwKPDSl+xfk1ekqnbyA8dsgxo1pit9f98Il4NJ/OsKQditrP5QTb0RYPHeAE49TwIVrqZtoI0Q0uTIsYrqJPe5jxt1Hv1GvC7oStZjDIji8KytJ7Hl57tyD3UA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2e951338-6b54-c5c2-1c44-96fff795671a@suse.com>
Date: Tue, 7 Jun 2022 10:02:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RFC 1/6] x86/ioapic: set disable hook for masking edge
 interrupts
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-2-roger.pau@citrix.com>
 <85dfc48f-3440-1e6a-dc44-4c2bb050184b@suse.com>
 <YpogVFqGl1zS3VCU@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpogVFqGl1zS3VCU@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6P195CA0089.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:86::30) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 28ab3a3a-1f2c-420d-a058-08da485c1f01
X-MS-TrafficTypeDiagnostic: AS8PR04MB7735:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB7735B23CBF3C3B5554DA0CD5B3A59@AS8PR04MB7735.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 28ab3a3a-1f2c-420d-a058-08da485c1f01
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 08:02:52.1831
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xCSxbwWEFFQCeG7xSMFU2cGd6AwXUfzif9WWAnP/gY79OCWHNQNFANuhkSW+sz3HsHVDBnlZLcV410ctfk0H2A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7735

On 03.06.2022 16:53, Roger Pau Monné wrote:
> On Fri, Jun 03, 2022 at 03:19:34PM +0200, Jan Beulich wrote:
>> On 21.04.2022 15:21, Roger Pau Monne wrote:
>>> Allow disabling (masking) IO-APIC pins set to edge trigger mode.  This
>>> is required in order to safely migrate such interrupts between CPUs,
>>> as the write to update the IO-APIC RTE (or the IRTE) is not done
>>> atomically,
>>
>> For IRTE on VT-d we use cmpxchg16b if available (i.e. virtually always).
>>
>>> so there's a window where there's a mismatch between the
>>> destination CPU and the vector:
>>>
>>> (XEN) CPU1: No irq handler for vector b5 (IRQ -11, LAPIC)
>>> (XEN) IRQ10 a=0002[0002,0008] v=bd[b5] t=IO-APIC-edge s=00000030
>>
>> I think this needs some further explanation, as we generally move
>> edge IRQs only when an un-acked interrupt is in flight (and hence
>> no further one can arrive).
>
> A further one can arrive as soon as you modify either the vector or
> the destination fields of the IO-APIC RTE, as then the non-EOI'ed
> lapic vector is no longer there (because you have moved to a different
> destination or vector).

Right - this is what I'm asking you to spell out in the description.

Jan

> This is the issue with updating the IO-APIC RTE using two separate
> writes: even when using interrupt remapping the IRTE cannot be
> atomically updated and there's a window where the interrupt is not
> masked, but the destination and vector fields are not in sync, because
> they reside in different parts of the RTE (the destination is in the
> high 32 bits, the vector in the low 32 bits of the RTE).
>
> Thanks, Roger.
>
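The window Roger describes can be shown with a toy model of the 64-bit RTE, where the two 32-bit halves take separate writes. This is an illustration only (the field layout is simplified, and the values merely mirror the log quoted above), not the actual IO-APIC access code:

```c
#include <stdint.h>

/* Toy model: the destination lives in the high 32-bit half of the RTE,
 * the vector in the low half, and each half needs its own MMIO write. */
static uint32_t rte_dest, rte_vector;

/* Old routing: destination CPU1, vector 0xb5 (as in the quoted log). */
static void set_old_routing(void)
{
    rte_dest = 1;
    rte_vector = 0xb5;
}

/* Non-atomic retarget to CPU2/vector 0xbd: after the first write and
 * before the second, an unmasked edge interrupt is delivered to the
 * *new* CPU with the *old* vector -- "No irq handler for vector b5". */
static void retarget(uint32_t *seen_dest, uint32_t *seen_vector)
{
    rte_dest = 2;              /* first 32-bit write */
    *seen_dest = rte_dest;     /* an interrupt firing in the window ... */
    *seen_vector = rte_vector; /* ... observes this mixed state */
    rte_vector = 0xbd;         /* second 32-bit write */
}
```

Masking the pin across the two writes (as the patch enables) closes this window.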



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 08:35:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 08:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343030.568164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyUgK-0002T1-Ms; Tue, 07 Jun 2022 08:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343030.568164; Tue, 07 Jun 2022 08:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyUgK-0002Su-K0; Tue, 07 Jun 2022 08:35:20 +0000
Received: by outflank-mailman (input) for mailman id 343030;
 Tue, 07 Jun 2022 08:35:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyUgI-0002So-Ei
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 08:35:18 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.109.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c1cadc15-e63c-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 10:35:16 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-5-9fe0DDkEMF6gxwO0-hIw9w-1; Tue, 07 Jun 2022 10:35:15 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6242.eurprd04.prod.outlook.com (2603:10a6:208:147::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 08:35:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 08:35:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1cadc15-e63c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654590916;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iaEelIFpJ0H8LkmQxGJXlUvQrdaB0uJTBaA3rewwCMM=;
	b=Pg0Y7AxNWJN9V9WbIYnQb3195WNZcudoccm3vAjvVSwQb6G+CSOMuU1Nvt0IAPmzxGtk/k
	wEgZLoBMNSP3LJaoR7hPtnade21uwHI8SRVEvZ+Bbcwm2bnhBMfaWxpxGDKi/Aqj6QVa2v
	96yKJQw+abLrJ49tPmXC2ZI/px/suF0=
X-MC-Unique: 9fe0DDkEMF6gxwO0-hIw9w-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hPr+/XbvzXnDA7IltLtukzIlYbJBHJiDOHA8z6WLO7/1gDHJqkO4XDbkiXaOYcq+lJaHBJ6fTnsx6UgcFKWhO/dYVNufsb/7TC3vMFnl+GnAxrxk910jHY1pdgnWzylkvNoFsTqrf8gelLiluwIHSkMIhE7b4rbTVNi8yItHmv4d4g5cvGGx65a5XRwRIM1T07mPDu//u/94vGWcvpxz1NroXBt9setnAo99BMkjU9Dt1Z0BvW23yDABa/idCzhD452G+fLljsBlxS/QGpHqae8eSLBATIxzlZjjhCec/5SW1sikzX13jxr2e+0l7MczUPtC8zDij3/jLVlhA/vd2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=D1EPj90kGl9tpAxjS29XufcEzd/5q0uJ3GMdf1z9HQw=;
 b=OJyG67HHrE3k7IAU7BlAAXEaVCxxe+8upjs04Ducu0L/gaBZIN+0EBfs4gxAkLjkXhMmJ4nT8c43dYKI+/m2bALl0H2PccJewIR0FZ2XxLDXbposuQQgaBBapEMLb3IT79xGeAJq6ZsecLPvSt8O9i8Ga1FxTBHivLuWs2v6DhgHT0yVOZZcTKTPAgem8baXw4wstHhedXto/dHNUIL8cP3iANNVDbg8ajL9YnxRo9BI+uHZKRkYgq3xY4iC0N/8lKTEG23/APGXoZVDAJ+7XPAEpDsCMLfnsFwWgK5RVG3j+es+x6J8Uz9qmR/EVCFnn+OBiiNGDq6KOMixZWixNQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=D1EPj90kGl9tpAxjS29XufcEzd/5q0uJ3GMdf1z9HQw=;
 b=AwJNbAdipaSSm/R+4GFRr0Tk/BLSFOc7Jqnr5e0zio1VeZgsRJAgldYTyd87fnmXDHG/KvcmMC3Qvh71NpzZvFYgoFQ56JOk3UalxWmfizCYBrMGJsIw7PwM7XCvftRDqTARgBcxRtotrLQ0zgLqP+bL61gwjirsQ+3LcVvKHp6ppCx29ykxGnHH8RP9FD+JMGbfIBo1ysIUd3QGaLAMLy9tdPlWBEdRQotXBmKVY3cX5NohZLYnsAIvWMXjPvIkAlYC2NiI2uClo5wM4EIBl0wVX24ge12hecQ/UR9JDDyv3FActkB4LgHjjstACxRZEpePQFU3b0ny44nWSjLJmA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f6638b64-ee36-0959-1ab4-e4ab2f55980b@suse.com>
Date: Tue, 7 Jun 2022 10:35:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RFC 3/6] x86/ioapic: RTE modifications must use
 ioapic_write_entry
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220421132114.35118-1-roger.pau@citrix.com>
 <20220421132114.35118-4-roger.pau@citrix.com>
 <febbff78-6a2d-f2fb-d8ea-a15f97a3abf4@suse.com>
 <YpoiPRETkjBskr1d@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YpoiPRETkjBskr1d@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM6PR04CA0009.eurprd04.prod.outlook.com
 (2603:10a6:20b:92::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 93cc85e9-9619-4774-a374-08da4860a487
X-MS-TrafficTypeDiagnostic: AM0PR04MB6242:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB6242A977D949E6FFE0359C50B3A59@AM0PR04MB6242.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 93cc85e9-9619-4774-a374-08da4860a487
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 08:35:14.1853
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aLNk8GekFVkzZ5lPIYe+9vzhm/HOigKk3tCDRchQ3UWngQ4aQnFgdLosMQwOvdmpiFanUU6tPfNovyr3Oajz9A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6242

On 03.06.2022 17:01, Roger Pau Monné wrote:
> On Fri, Jun 03, 2022 at 03:34:33PM +0200, Jan Beulich wrote:
>> On 21.04.2022 15:21, Roger Pau Monne wrote:
>>> Do not allow to write to RTE registers using io_apic_write and instead
>>> require changes to RTE to be performed using ioapic_write_entry.
>>
>> Hmm, this doubles the number of MMIO accesses in affected code fragments.
>
> But it does allow simplifying the IOMMU side quite a lot by no longer
> having to update the IRTE using two different calls.  I'm quite sure
> it saves quite some accesses to the IOMMU RTE in the following
> patches.

Right. You may want to mention upsides and downsides (and the ultimate
balance) in the description.

>>> --- a/xen/arch/x86/include/asm/io_apic.h
>>> +++ b/xen/arch/x86/include/asm/io_apic.h
>>> @@ -161,22 +161,11 @@ static inline void __io_apic_write(unsigned int apic, unsigned int reg, unsigned
>>>
>>>  static inline void io_apic_write(unsigned int apic, unsigned int reg, unsigned int value)
>>>  {
>>> -    if ( ioapic_reg_remapped(reg) )
>>> -        return iommu_update_ire_from_apic(apic, reg, value);
>>> +    /* RTE writes must use ioapic_write_entry. */
>>> +    BUG_ON(reg >= 0x10);
>>>      __io_apic_write(apic, reg, value);
>>>  }
>>>
>>> -/*
>>> - * Re-write a value: to be used for read-modify-write
>>> - * cycles where the read already set up the index register.
>>> - */
>>> -static inline void io_apic_modify(unsigned int apic, unsigned int reg, unsigned int value)
>>> -{
>>> -    if ( ioapic_reg_remapped(reg) )
>>> -        return iommu_update_ire_from_apic(apic, reg, value);
>>> -    *(IO_APIC_BASE(apic) + 4) = value;
>>> -}
>>
>> While the last caller goes away, I don't think this strictly needs to
>> be dropped (but could just gain a BUG_ON() like you do a few lines up)?
>
> Hm, could do, but it won't be suitable to be used to modify RTEs
> anymore, and given that was its only usage I didn't see much value
> for leaving it around.

I could see room for use of it elsewhere, e.g. setup_ioapic_ids_from_mpc(),
io_apic_get_unique_id() (albeit read and write may be a little far apart in
both of them) or ioapic_resume(). Otoh one may argue its benefit is
marginal, so with some extra justification I could also see the function go
away on this occasion.

Jan
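If io_apic_modify were kept rather than dropped, it would presumably just gain the same guard the patch adds to io_apic_write. The following standalone mock sketches that (BUG_ON and IO_APIC_BASE are replaced by assert and a fake register window so the sketch builds outside Xen; the real helper writes the IO-APIC data register via MMIO):

```c
#include <assert.h>

/* Stand-ins so the sketch builds outside the hypervisor. */
#define BUG_ON(cond) assert(!(cond))
static unsigned int fake_ioapic[8];          /* fake MMIO window */
#define IO_APIC_BASE(apic) (&fake_ioapic[0]) /* data register at +4 */

/*
 * Re-write a value: to be used for read-modify-write cycles where the
 * read already set up the index register.  As with io_apic_write in
 * this patch, RTE registers (reg >= 0x10) would have to go through
 * ioapic_write_entry instead.
 */
static inline void io_apic_modify(unsigned int apic, unsigned int reg,
                                  unsigned int value)
{
    BUG_ON(reg >= 0x10);
    *(IO_APIC_BASE(apic) + 4) = value;
}
```

As in the original helper, `reg` is only checked, not re-programmed, since the preceding read is assumed to have set up the index register already.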



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 08:58:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 08:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343038.568175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyV2c-00052l-JS; Tue, 07 Jun 2022 08:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343038.568175; Tue, 07 Jun 2022 08:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyV2c-00052e-GV; Tue, 07 Jun 2022 08:58:22 +0000
Received: by outflank-mailman (input) for mailman id 343038;
 Tue, 07 Jun 2022 08:58:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyV2b-00052Y-Ek
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 08:58:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyV2b-0000Yt-0Q; Tue, 07 Jun 2022 08:58:21 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.140]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyV2a-0005ce-NY; Tue, 07 Jun 2022 08:58:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Ym+TPfi90d4P7YBhK8HDAaQK2v5n9QwP7LrydDG9wUI=; b=2ANxNonLk7LQsbGCmiimZNIwpS
	gbQIY58kl7fljJDjKI8DKtPZZ9xhNx2Gcj0hHIf4LtJjKLpfc/R5M3u+Aozf7UJeDnIJZmngBk9kj
	seBEkmK2MbFRV0i/UmtX2T+yv/fIENFoaT8q8FwPqwAd+GuujNYGzZ7HKjVOoWvSpjVk=;
Message-ID: <9f6013f2-c365-4d6f-1ebc-f30b774dbd28@xen.org>
Date: Tue, 7 Jun 2022 09:58:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v6 1/9] xen/arm: rename PGC_reserved to PGC_static
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-2-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220607073031.722174-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 07/06/2022 08:30, Penny Zheng wrote:
> PGC_reserved could be ambiguous, and we have to tell what the pages are
> reserved for, so this commit intends to rename PGC_reserved to
> PGC_static, which clearly indicates the page is reserved for static
> memory.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> v6 changes:
> - rename PGC_staticmem to PGC_static
> ---
> v5 changes:
> - new commit
> ---
>   xen/arch/arm/include/asm/mm.h |  6 +++---
>   xen/common/page_alloc.c       | 22 +++++++++++-----------
>   2 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 424aaf2823..fbff11c468 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -108,9 +108,9 @@ struct page_info
>     /* Page is Xen heap? */
>   #define _PGC_xen_heap     PG_shift(2)
>   #define PGC_xen_heap      PG_mask(1, 2)
> -  /* Page is reserved */
> -#define _PGC_reserved     PG_shift(3)
> -#define PGC_reserved      PG_mask(1, 3)
> +  /* Page is static memory */
> +#define _PGC_static    PG_shift(3)
> +#define PGC_static     PG_mask(1, 3)
>   /* ... */
>   /* Page is broken? */
>   #define _PGC_broken       PG_shift(7)
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 319029140f..9e5c757847 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -151,8 +151,8 @@
>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>   #endif
>   
> -#ifndef PGC_reserved
> -#define PGC_reserved 0
> +#ifndef PGC_static
> +#define PGC_static 0
>   #endif
>   
>   /*
> @@ -2286,7 +2286,7 @@ int assign_pages(
>   
>           for ( i = 0; i < nr; i++ )
>           {
> -            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
>               if ( pg[i].count_info & PGC_extra )
>                   extra_pages++;
>           }
> @@ -2346,7 +2346,7 @@ int assign_pages(
>           page_set_owner(&pg[i], d);
>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
> -            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
> +            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
>   
>           page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>       }
> @@ -2652,8 +2652,8 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
>               scrub_one_page(pg);
>           }
>   
> -        /* In case initializing page of static memory, mark it PGC_reserved. */
> -        pg[i].count_info |= PGC_reserved;
> +        /* In case initializing page of static memory, mark it PGC_static. */
> +        pg[i].count_info |= PGC_static;
>       }
>   }
>   
> @@ -2682,8 +2682,8 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
>   
>       for ( i = 0; i < nr_mfns; i++ )
>       {
> -        /* The page should be reserved and not yet allocated. */
> -        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> +        /* The page should be static and not yet allocated. */
> +        if ( pg[i].count_info != (PGC_state_free | PGC_static) )
>           {
>               printk(XENLOG_ERR
>                      "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
> @@ -2697,10 +2697,10 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
>                                   &tlbflush_timestamp);
>   
>           /*
> -         * Preserve flag PGC_reserved and change page state
> +         * Preserve flag PGC_static and change page state
>            * to PGC_state_inuse.
>            */
> -        pg[i].count_info = PGC_reserved | PGC_state_inuse;
> +        pg[i].count_info = PGC_static | PGC_state_inuse;
>           /* Initialise fields which have other uses for free pages. */
>           pg[i].u.inuse.type_info = 0;
>           page_set_owner(&pg[i], NULL);
> @@ -2722,7 +2722,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
>   
>    out_err:
>       while ( i-- )
> -        pg[i].count_info = PGC_reserved | PGC_state_free;
> +        pg[i].count_info = PGC_static | PGC_state_free;
>   
>       spin_unlock(&heap_lock);
>   

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 09:13:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 09:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343048.568186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyVHB-0007cB-2v; Tue, 07 Jun 2022 09:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343048.568186; Tue, 07 Jun 2022 09:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyVHA-0007c4-Uc; Tue, 07 Jun 2022 09:13:24 +0000
Received: by outflank-mailman (input) for mailman id 343048;
 Tue, 07 Jun 2022 09:13:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyVHA-0007by-0K
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 09:13:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyVH9-0000qD-Hv; Tue, 07 Jun 2022 09:13:23 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.140]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyVH9-0006hq-B7; Tue, 07 Jun 2022 09:13:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OUK/PR+IfqCxodMwNH+9TbV6dj84dz7LHITxsHsalEQ=; b=2fHJBjBhx3VmdKBLSb3J97mRGE
	jBrHFpe3OduvJ2wzdqX5vv7Swt8Fpv2aUJBGXDubGBERPxgsflmtcb3aTLVAc0N5KCm4uIYDeFO3b
	ns/kEp22F3AvdYijGpskMztqCUP1NnJwlkyP1whhCCjILqy4qFJNcd7Jr+8IMD6+lacs=;
Message-ID: <d43d2dbd-6b0e-fb0c-5e0a-d409db4e18e9@xen.org>
Date: Tue, 7 Jun 2022 10:13:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-3-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220607073031.722174-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 07/06/2022 08:30, Penny Zheng wrote:
> Pages used as guest RAM for a static domain shall be reserved to this
> domain only. So in case reserved pages are used for another purpose,
> users shall not free them back to the heap, even when the last ref gets
> dropped.
> 
> free_staticmem_pages() will be called by free_heap_pages() at runtime
> when a static domain frees a memory resource, so let's drop the __init
> flag.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v6 changes:
> - adapt to PGC_static
> - remove #ifdef around the function declaration
> ---
> v5 changes:
> - In order to avoid stub functions, we #define PGC_staticmem to non-zero only
> when CONFIG_STATIC_MEMORY
> - use "unlikely()" around pg->count_info & PGC_staticmem
> - remove pointless "if", since mark_page_free() is going to set count_info
> to PGC_state_free and by consequence clear PGC_staticmem
> - move #define PGC_staticmem 0 to mm.h
> ---
> v4 changes:
> - no changes
> ---
> v3 changes:
> - fix possible racy issue in free_staticmem_pages()
> - introduce a stub free_staticmem_pages() for the !CONFIG_STATIC_MEMORY case
> - move the change to free_heap_pages() to cover other potential call sites
> - fix the indentation
> ---
> v2 changes:
> - new commit
> ---
>   xen/arch/arm/include/asm/mm.h |  4 +++-
>   xen/common/page_alloc.c       | 12 +++++++++---
>   xen/include/xen/mm.h          |  2 --
>   3 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index fbff11c468..7442893e77 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -108,9 +108,11 @@ struct page_info
>     /* Page is Xen heap? */
>   #define _PGC_xen_heap     PG_shift(2)
>   #define PGC_xen_heap      PG_mask(1, 2)
> -  /* Page is static memory */

Nitpicking: you added this comment in patch #1 and are now removing the
leading space. Any reason to drop the space?

> +#ifdef CONFIG_STATIC_MEMORY

I think this change ought to be explained in the commit message. AFAIU, 
this is necessary to allow the compiler to remove code and avoid linking 
issues. Is that correct?

> +/* Page is static memory */
>   #define _PGC_static    PG_shift(3)
>   #define PGC_static     PG_mask(1, 3)
> +#endif
>   /* ... */
>   /* Page is broken? */
>   #define _PGC_broken       PG_shift(7)
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 9e5c757847..6876869fa6 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1443,6 +1443,13 @@ static void free_heap_pages(
>   
>       ASSERT(order <= MAX_ORDER);
>   
> +    if ( unlikely(pg->count_info & PGC_static) )
> +    {
> +        /* Pages of static memory shall not go back to the heap. */
> +        free_staticmem_pages(pg, 1UL << order, need_scrub);

I can't remember whether I asked this before (I couldn't find the thread).

free_staticmem_pages() doesn't seem to be protected by any lock. So how 
do you prevent concurrent access to the page info with respect to the 
acquire part?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 09:19:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 09:19:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343056.568197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyVNG-0008Fu-PH; Tue, 07 Jun 2022 09:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343056.568197; Tue, 07 Jun 2022 09:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyVNG-0008Fn-KJ; Tue, 07 Jun 2022 09:19:42 +0000
Received: by outflank-mailman (input) for mailman id 343056;
 Tue, 07 Jun 2022 09:19:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyVNF-0008Fh-4I
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 09:19:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyVNE-0000wY-91; Tue, 07 Jun 2022 09:19:40 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.140]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyVNE-0006rb-2H; Tue, 07 Jun 2022 09:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2opv/XBw1QQEw+2S/BMwrXlgcVc9+WDeXBnjQRn0s+0=; b=nzDFjR1AgFWNcZBBSFhiE2iIHc
	MJR3a3E+SfQPp/jOwwzK9kAJqIcRBoGxZWN4c3o3II4EN0unTgSEWkch3BW46yB8DbaLJf9S92a6D
	inOFFYA3orjNz6JIcv3X+0DiRh+RNPIRTEyCuR9LuJlf1Yr+Qr0CfFL1MpyuMtD7Veyw=;
Message-ID: <72bec2ab-13d7-8de9-6bb9-f1e4f9de6a3b@xen.org>
Date: Tue, 7 Jun 2022 10:19:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v6 7/9] xen/arm: unpopulate memory when domain is static
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-8-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220607073031.722174-8-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 07/06/2022 08:30, Penny Zheng wrote:
> Today, when a domain unpopulates memory at runtime, it will always
> hand the memory back to the heap allocator, which is a problem if the
> domain is static.
> 
> Pages used as guest RAM for a static domain shall be reserved to this
> domain only and not be used for any other purpose, so they shall never
> go back to the heap allocator.
> 
> This commit puts reserved pages on the new list resv_page_list only
> after having taken them off the "normal" list, when the last ref is
> dropped.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> ---
> v6 changes:
> - refine in-code comment
> - move PGC_static !CONFIG_STATIC_MEMORY definition to common header

I don't understand why this change is necessary for this patch. AFAICT, 
all the users of PGC_static will be protected by #ifdef 
CONFIG_STATIC_MEMORY and therefore PGC_static should always be defined.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:03:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 10:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343075.568224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW3W-0005Um-7K; Tue, 07 Jun 2022 10:03:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343075.568224; Tue, 07 Jun 2022 10:03:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW3W-0005Uf-46; Tue, 07 Jun 2022 10:03:22 +0000
Received: by outflank-mailman (input) for mailman id 343075;
 Tue, 07 Jun 2022 10:03:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7Lz=WO=citrix.com=prvs=1504b46c8=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nyW3V-0005UZ-5s
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 10:03:21 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d33bb74-e649-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 12:03:19 +0200 (CEST)
Received: from mail-bn8nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 07 Jun 2022 06:03:15 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5529.namprd03.prod.outlook.com (2603:10b6:806:bc::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Tue, 7 Jun
 2022 10:03:13 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 10:03:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d33bb74-e649-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654596199;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=a4a79oM4NPTFiMHtN1P2yssXZy0Fl5/x919PDQ1ra6g=;
  b=XQvjxpW3GqgPG2APqw/gJDmU/mN28EhNmbnKYL923cUDPZozhlAWfPxm
   4jwQssq7smEdukvAsgjdDq3rBMaDi+n6La41+73Qj82tixjiqQr+p3ZE8
   LEpc/pk+qG7Xh6Xt1lawdobMHrHp4/vvGAqlZ5IkQ/PLdNSuTF/T1nDTI
   Y=;
X-IronPort-RemoteIP: 104.47.58.173
X-IronPort-MID: 73028372
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Xu0zhazYt8x8hRdlbAB6t+ctxyrEfRIJ4+MujC+fZmUNrF6WrkVRz
 WMZWmiHOqqKZjTyKNonat7l9xwHuJGEmoRmGwo5rSAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX5JZS5LwbZj2NY22YfhWmthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 NplrpCQUix0Y6D2w/0TVEh4SwRvPIlvweqSSZS/mZT7I0zuVVLJmqwrJmdmeIoS96BwHH1E8
 uEeJHYVdBefiumqwbW9DO5xmsAkK8qtN4Qa0p1i5WiBUbB6HtaeE+OTuoQwMDQY36iiGd7EY
 MUUc3x3ZQnoaBxTIFYHTpk5mY9Eg1GgKGUC+AnK/8Lb5UD8y1Z+2qSyPOOIRYyged5fuh61n
 kDZqjGR7hYycYb3JSC+2nCmi/LLnCj7cJkPD7D+/flv6HWR22pVDhQVXFm6pPCRi0iiVtYZI
 EsRkgItoLYz8gq3T9D7dxy+vHOA+BUbXrJ4DOkS+AyLjK3O7G6xFmUCCzJMdtEinMs3XiAxk
 E+EmcvzAj5iu6HTTmiSnop4thu3MCkRaGUENSkNSFJc58G5+d5oyBXSUtxkDai5yMXvHi39y
 CyLqy54gKgPickM1OOw+lWvby+Qm6UlhzUdvm3/Nl9JJCsjDGJ5T+REMWTm0Ms=
IronPort-HdrOrdr: A9a23:gixSlqmz0BGiHPpdMgIvqN11mnbpDfO/imdD5ihNYBxZY6Wkfp
 +V8cjzhCWftN9OYhodcLC7V5Voj0mskKKdxbNhRYtKPTOWwVdASbsP0WKM+V3d8kHFh41gPO
 JbAtND4b7LfCRHZKTBkW6F+r8bqbHokZxAx92uqUuFJTsaFp2IhD0JbjpzfHcGJjWvUvECZe
 ChD4d81nOdkTN9VLXKOlA1G8z44/HbnpPvZhALQzYh9Qm1lDutrJr3CQKR0BsyWy5Ghe5Kyx
 mPryXJooGY992rwB7V0GHeq7xQhdva09NGQOiBkNIcJDnAghuhIK5hR7qBljYop/zH0idjrP
 D85zMbe+hj4XLYeW+45TPrxgnbyT4rr0TvzFeJ6EGT1fDRdXYfMY5slIhZehzW5w4Lp9dnyp
 9G2Gqfqt5+EQ7AtD6V3amGazha0m6P5VYym+8aiHJSFaEEbqVKkIAZ9ERJVL8dASPB7pw9Gu
 UGNrCR2B9vSyLaU5nlhBgu/DT1NU5DXStuA3Jy9/B96gIm0kyQlCAjtY4idnRpzuNJd3AL3Z
 WADk1SrsAxciYnV9MDOA4/e7rHNoXse2O6DIvAGyWQKEk4U0i92KLf0fES2NyAXqAu4d8bpK
 nhOWkox1LaPXieQ/Gz4A==
X-IronPort-AV: E=Sophos;i="5.91,283,1647316800"; 
   d="scan'208";a="73028372"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UQ3JThqEisiahtdEeYh3zttpXtQTV95dvBAT2pSP/OozZX67H8T6ckf98N3756ZalW28VgA5WEoBrba0IK1Qux+UCra7jATrl3GDP7Yij7EEK/YxTp22WjMUI+jwYtiMpDeBaCrPReKhjNp/PJw7D9Aw4Ph5yBCdJAE7kAQieH9ihKLE0y0/Rul2rPlt0Czoiwc82IiW+QKoKN1C+bLoJQ7Wxznv5bdgaAEJcs465Cc8P/eoZkW/HxKXcdC9/aFRr4P7nAlzEt+0+qQHLVdh4RMTlB5C2Wfk24MvhC8dY+HZ5FuIYdH4CmQrtVMofk8wbwcvhCZqNgxuc3b46od5dA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hh1TNP0OG4TWHAc4H1wBDB5R+zLphn5Hzu+hEwe6Sjo=;
 b=dz7d+FZdvYS1p2bAYSNUka9nsTrziabi/tKVwLcMHEHTSmx379R5mta9q26hJ2tYazlOTKml/BPs8pbpEGGObZjkV2S0H5YY6Leqi2U+/jsBJF6h0IJKRy4SKWK6EXkUSf2kQyOeHGI71V+QOJdBnWySHRUIvLuGNA9bru6gMKfsEDjTVftVYl4IH5W3h8B9B5g46/HZ78nAhIY9IriVOz+W3TgmQxtJMPOi7hoFnQGIgMTpaXqiMrE2cLP4j8desjYNR2l/D8cVVZ5/n7raPPUDXT8S8vVWjyHUR+e7oF0ELRoHd5CB0vSBDXVkjeTM3OhljFDnixV/9fpa7vz9Fg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hh1TNP0OG4TWHAc4H1wBDB5R+zLphn5Hzu+hEwe6Sjo=;
 b=tGynxmAaWOZ0+W3YYDgXeyGfMBL/C17CVrB6AVmigfemjRs0WL0xVs6dGNwwXFB+KqusXN51Fcr0X9HFPCJrDbFDWT9uB7KCllmjt0u9bKRWzJJj+qgM1+pazEcZZ1W4dv5JkbcwkmEY17q3Tz99pqBcE1/J4vCSDg5gPZtkp1U=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 7 Jun 2022 12:03:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 1/3] x86/vmx: implement Bus Lock detection
Message-ID: <Yp8iXWT+tXRB1wYP@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-2-roger.pau@citrix.com>
 <3c8b0b72-0a9a-3dfd-bf5c-b6cc40a4ce3a@citrix.com>
 <fe0fbfb8-859c-7b24-89bc-d5c68c7b133e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <fe0fbfb8-859c-7b24-89bc-d5c68c7b133e@suse.com>
X-ClientProxiedBy: LO2P265CA0488.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 02c0792d-6e7a-450d-daf8-08da486cef4d
X-MS-TrafficTypeDiagnostic: SA0PR03MB5529:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-Microsoft-Antispam-PRVS:
	<SA0PR03MB5529F937A8EBDBECF131CDAF8FA59@SA0PR03MB5529.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HERhQqj7FXVZoBkn4+WZDiSSrBjVyJcGF+2JfwAQ1IyBzrhNZHOjVoQxqzzxukWR/MK/KLuTtIVnb5e8fMAxGlQRDHYBI1o4K9kteVgK5jmFjjrFnMZgmQajOrzhfKEfiaHmyXB8UPKTc/sMSUuELMXe77g5+apF49jtHGr0T+9MUnCYctpfgSKX3uMbaIpUz6dfG8PR/bU4Bg78v00nueMmjlT3h24xIalwqeyViIXMMPxHuXzjgkxd1myYCIhxL1CcpIqnmBlwpuLhhD6Te1nBjtpNotyjEKs17A560hNRTNXv7NTeFMLygdehyDdgfKq10zRcVfEzMAs2r1ixLrpuxNwwz525sZ5CMZTmQbXMKZEV8crNDt4+REHsvIJ9QCf8QDPxp8mKudOJNQS+DoTuOR4OvMya5OHvbQDGzQUWCPCQ76fqqM7+Uo0BACPS720ORzrk+trt7mNCaxyWOBgQU7IgAj2X8yysBIX9kJzwbvMWF+OkHG9uLqIe/NjYyfd05fWknui9hZOkLRyEOqdX9j9Xfb6zxtUQAHRwrHU682PJZXrMOnsqGEQz5wW0swp+wuHjB0BH402KS2Lb3dvPo4SonbKJ+ngsSF4TAGdW7wXdidUhckvnzieOS1DMLHiyQJamK5ZziwGWRs512yH68FphLzZ8n+oUT6uuBUlsh/aPYcYcvAjg/lAQAzfg/H0Tb8R9G8Mv0IhuGNPbSA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(7916004)(4636009)(366004)(86362001)(38100700002)(4326008)(5660300002)(8676002)(66476007)(66556008)(66946007)(82960400001)(6506007)(6486002)(9686003)(6916009)(26005)(6666004)(316002)(53546011)(186003)(8936002)(83380400001)(2906002)(508600001)(33716001)(85182001)(6512007)(54906003)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QTNmYVkxaTJreXFRWjN4Y3E2QmZTbWp6RzVZdHJnbjM1SU5WbW9aNjV4UTRV?=
 =?utf-8?B?NW4xRzllWlJZdmF0Q0xlVU1zVk1xM0NNaVRKUW5XdFROc1d4bjNCSW9tQXdh?=
 =?utf-8?B?TGNkRjFxeHZhU2MwK2l1Rit6YjVSRU5NNUJSdkJXT2NlRWx5NWdkSW1BVVFh?=
 =?utf-8?B?Y3NaeXY1clJHcVJvNVE0c3YxcnV2b2F3ZEl3YUIwQndtYWxTd243MEFRK1hv?=
 =?utf-8?B?a0p6SFIwTnNuaFRmZzduc05rdlpzVG1yeHo0WTE5NzBmZGNiYXl6SXZPVXJF?=
 =?utf-8?B?Q0hIK1RQL3ZJanpLc1RJMzM1dmlaR2VmV2Vlb2hkYXZIR0lNZ2VTODd0em1V?=
 =?utf-8?B?ekZ3MEFJai9STDZMYzRYeVB0UjFyY3ZNTkNpb3RxSUI4T3c4L2VUTUZHSEpz?=
 =?utf-8?B?cTFVZ0xoa1JkYks1cWt6OTBjelJsTWxjalRqTzRZQ2tEZlFkbjluRHpqTFNM?=
 =?utf-8?B?SW4xNThDb3NVd3lzZXUvbVNKclFwcHVrSnhZTkNTNm1UR1pnVDhuaHFQb3lp?=
 =?utf-8?B?Uy9LdFZMQlpjVk5EdDhyTW5UeW02VlllS0VYbEw1dTUwSDJDTDR6VW1jQUEr?=
 =?utf-8?B?UDMrcHY5NWhUWWpac2RYS2JaVjg3Tms4U2hKL09kR2RlUmtUZ0JuMkl6OWNq?=
 =?utf-8?B?QUYyL3BMSGtaMUdqNE1OSER0VFE4RldXTWp1RTcrZjMxeHRqcE5JOEtWa1FN?=
 =?utf-8?B?c0NpZzVSRVp6TURSKy9ueENtWUc5cXVlU21Ya2Y3eUpOZjVYMlo2U0c1WFFh?=
 =?utf-8?B?WHduWTZPcUlxd2ZhWXBhV1BUVXRzbWhQblBQajBiU2Jqb2hPeERSYlU5YTNh?=
 =?utf-8?B?cmFITTJPOVRPcVZqNFdiZ3QxRVQ2TlVhM1ZSVzZjNzg4R1JmYWRVejhkNVVn?=
 =?utf-8?B?N3cyRGtpUHYrMjJ6YnRpVzg1WCt3OGlZTnByQzQ3YWY5USs0U2hKZUkwK1Vh?=
 =?utf-8?B?YmJkL1h2SG5rRURFRXZsMFRRZGRiZjJsdDRXd2ZqaXlLSUhqTHFsZUEvSWtl?=
 =?utf-8?B?VWxRSHZXeFRWVjZ2cmdLTHh3NU5vaEIzQU5uZWFnd0xiWDZzemtZS09rWHRn?=
 =?utf-8?B?c2thWTg5VWRpZ1dHTjdFUmU1N2lYSm1SUUNPN3hSclF2TGhSQTBoenJmazVy?=
 =?utf-8?B?UEE3a1NRTG5IQXU3MzhoZ24yZGZoVHRtZEplZkxlMzM0eGhFYXBBYkh3Y3ov?=
 =?utf-8?B?UlNlNmFVK3BmL3dTWjVNYjUzQlBqelZBVlIrQ0RTUTZONHVCanFtWkNmd1Ur?=
 =?utf-8?B?YmovUDhkYXpDVTRndC8zWUNlSFQ3dTFqejE4bXNSOEhGQTdVbXU2NU1GN1ND?=
 =?utf-8?B?REJZejR4TVptNVhBbXp5NzR2SFliZmdvOHlPL1NidGkzUTlDdkNiUlN3cHJi?=
 =?utf-8?B?b21YUzdLYktNaGx2UGpBYWd2QkczREtsL2I1YnNhbWNDS3R0V25HRVdnWjVy?=
 =?utf-8?B?VnNvT2xDcGdwTTF5N24rdEkrL0pPaWdnNzkxRDBFWFo0WVJxUzRSeTZqaUZE?=
 =?utf-8?B?Sk04SDJpQmRkd0ZzR1Z2ajhKbVo0ZVJLTVNVMERmbG5rbkp3QmgzZXlHOXZN?=
 =?utf-8?B?N3pBb3ZxVFg4U1ZKSUZ5SHZaNkQwQzBzcEdWZVNDTlY1NEJQaWl2MDlWcTVM?=
 =?utf-8?B?NXBLTWlKMXNXcGlRUHRPc3VHb0RWQlFBNlJTRUpFL2xDazJmZ1NlNVU0ZzBZ?=
 =?utf-8?B?K250QzU2TnBXRUFmZkVUT3FVb3NUTzErU2hOdnZDbllCazdTSy9pS2VtVTBm?=
 =?utf-8?B?MXR4VEcwM0VIQjl0bkN2aWRpaUplbnFPdVk5eEN4ZjhTZDdtV1FGRG5BNjZQ?=
 =?utf-8?B?OFE1TkxWOXUzSEJPb1RtZ0Y3c2QySktqNXlTZ1VRazd6SnNwdHFqWDZDaVFM?=
 =?utf-8?B?dnM1Y1dadTMwVmRKaXU4OWt2YzBkeGZUQzArMmFNN0lhMHNJaWgyVXNSN1VE?=
 =?utf-8?B?TDBVcE1KUm5FWVdGV3pVa1MwdWNNSkg0ZTdvRDFYaFROM2lyYklON0l2N3JK?=
 =?utf-8?B?cnozVzNpZVJ6TFVad3BIRnY1MWpZU2JISFZ1dGZTZlBmVjUzSFh1aUdtWkxP?=
 =?utf-8?B?Ym1RL0xLQlV3VmhCY1NHMkgwSENnbzBSY3cyRVhuT1FBdlBhRmZCUi95TkJT?=
 =?utf-8?B?RzJLZi9HdFNySDYvRzIwNGRWVThVQjRsamhJQTI1cUNLTC9HZmYwUXFEcUtw?=
 =?utf-8?B?OU4xYjhkc1lHOTZTcXlyNVF1aUVScHEvWjN4dUJXUzdxMFJWc04rNGlNOWg5?=
 =?utf-8?B?WEdVODViWWQ2QU1ia1RmQVgwclQzdFNMWUZXWG54d09KV1dOQXd3TGwxeHNM?=
 =?utf-8?B?bXpmWWJLNHBSbGhQeXdXcnpvVXJjYjhHSWpoZVg0YjJXVit6d1g5bG5QZFBi?=
 =?utf-8?Q?QKVHY2+AbffQV3+A=3D?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02c0792d-6e7a-450d-daf8-08da486cef4d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 10:03:13.6591
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qJ3fBaaD9uSPunu3sywYU26oIYsHPGDr4YkoL9wN5xBeu88R7eFAqqMM9hf3w8+qEk3SHcwQitQV1cugUHprcA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5529

On Tue, Jun 07, 2022 at 08:54:15AM +0200, Jan Beulich wrote:
> On 06.06.2022 15:27, Andrew Cooper wrote:
> > On 26/05/2022 12:11, Roger Pau Monne wrote:
> >> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> >> index f08a00dcbb..476ab72463 100644
> >> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> @@ -4065,6 +4065,16 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
> >>  
> >>      if ( unlikely(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) )
> >>          return vmx_failed_vmentry(exit_reason, regs);
> >> +    if ( unlikely(exit_reason & VMX_EXIT_REASONS_BUS_LOCK) )
> >> +    {
> >> +        /*
> >> +         * Delivery of Bus Lock VM exit was pre-empted by a higher priority VM
> >> +         * exit.
> >> +         */
> >> +        exit_reason &= ~VMX_EXIT_REASONS_BUS_LOCK;
> >> +        if ( exit_reason != EXIT_REASON_BUS_LOCK )
> >> +            perfc_incr(buslock);
> >> +    }
> > 
> > I know this post-dates you posting v2, but given the latest update from
> > Intel, VMX_EXIT_REASONS_BUS_LOCK will be set on all exits.
> 
> Mind me asking what "latest update" you're referring to? Neither SDM nor
> ISE have seen a recent update, afaict.

Following Andrew's request, we got in touch with Intel regarding whether
VMX_EXIT_REASONS_BUS_LOCK is set when exit_reason ==
EXIT_REASON_BUS_LOCK, and they will update the ISE to state:

"the bit 26 in the exit-reason field will always be set on VM exits
due to bus locks."

So I will apply the changes requested by Andrew to match this
behavior.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:04:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 10:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343081.568235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW4M-0005z1-HU; Tue, 07 Jun 2022 10:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343081.568235; Tue, 07 Jun 2022 10:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW4M-0005yu-E0; Tue, 07 Jun 2022 10:04:14 +0000
Received: by outflank-mailman (input) for mailman id 343081;
 Tue, 07 Jun 2022 10:04:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TXeP=WO=citrix.com=prvs=150b9cda6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nyW4L-0005UZ-Gj
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 10:04:13 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2d3ce90d-e649-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 12:04:12 +0200 (CEST)
Received: from mail-co1nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 07 Jun 2022 06:04:09 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5388.namprd03.prod.outlook.com (2603:10b6:806:b7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Tue, 7 Jun
 2022 10:04:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 10:04:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d3ce90d-e649-11ec-bd2c-47488cf2e6aa
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: "alex.nlnnfn@proton.me" <alex.nlnnfn@proton.me>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: memory overcommitment with SR-IOV device assignment
Thread-Topic: memory overcommitment with SR-IOV device assignment
Thread-Index: AQHYeihemc0HW39+c0mUd/wlJkeuKK1Dt4iA
Date: Tue, 7 Jun 2022 10:04:07 +0000
Message-ID: <eb7a2869-d9f8-7a9f-3884-60d1a61e36f4@citrix.com>
References:
 <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
In-Reply-To:
 <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5701c5e5-83de-4646-8efb-08da486d0fb8
x-ms-traffictypediagnostic: SA0PR03MB5388:EE_
Content-Type: text/plain; charset="utf-8"
Content-ID: <8B9C8D4E29A18945AF9F74A42496F2C8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5701c5e5-83de-4646-8efb-08da486d0fb8
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jun 2022 10:04:07.8422
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: n+MtOg56nHZPQz6BCzFs7KYQEYLEaxQtQQFu6seksJBbPEZOs5AE2pB0tbX8X5eYllIgBrI6QVttsjfV/LQZ/f/vHyb6zBSplbplgMGeS5Y=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5388

On 07/06/2022 04:59, alex.nlnnfn@proton.me wrote:
> Hello list,
>
> I looked into the Xen documentation and also the Xen wiki, and I couldn't find a definitive answer on whether Xen supports memory over-commitment when VMs use SR-IOV device assignment (passthrough). By memory over-commitment I mean giving VMs more RAM than is available in the host.
>
> I know that ESX doesn't support it, and also that QEMU/KVM pins all RAM when a device is directly assigned to a VM (the VFIO_IOMMU_MAP_DMA ioctl). I have two questions:
>
> 1. Does Xen support memory over-commitment when all VMs are using direct device assignment, e.g. SR-IOV?

No.  Memory overcommit is fundamentally incompatible with having real
devices.

On the CPU side, EPT_VIOLATION/EPT_MISCONFIG are faults, just like #PF,
so we can play games with swapping out some other part of the guest and
paging in the part which is currently being accessed.

But IOMMU violations are not restartable.  We can't just take an IOMMU
fault and shuffle the guest's memory, because the PCIe protocol has
timeouts.  These aren't generally long enough to even send an interrupt
to request software help, let alone page one part out and another part in.

For an IOMMU mapping to exist, it must point at real RAM, because any
DMA targeting it cannot be paused and delayed for later.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:06:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 10:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343091.568246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW64-0006hO-3W; Tue, 07 Jun 2022 10:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343091.568246; Tue, 07 Jun 2022 10:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyW63-0006hH-Uk; Tue, 07 Jun 2022 10:05:59 +0000
Received: by outflank-mailman (input) for mailman id 343091;
 Tue, 07 Jun 2022 10:05:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7Lz=WO=citrix.com=prvs=1504b46c8=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nyW62-0006hB-RH
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 10:05:58 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bbe0746-e649-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 12:05:57 +0200 (CEST)
Received: from mail-bn8nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 07 Jun 2022 06:05:54 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5529.namprd03.prod.outlook.com (2603:10b6:806:bc::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Tue, 7 Jun
 2022 10:05:52 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 10:05:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bbe0746-e649-11ec-b605-df0040e90b76
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 7 Jun 2022 12:05:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Kevin Tian <kevin.tian@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Message-ID: <Yp8i/C+X82ZbIrSn@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
 <YpoeuOJPS0gobz5u@Air-de-Roger>
 <c8f22652-abd5-76f8-75d3-ed581d1c4752@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c8f22652-abd5-76f8-75d3-ed581d1c4752@suse.com>
X-ClientProxiedBy: LO2P265CA0025.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cb8e83bf-9549-48ba-aba2-08da486d4dab
X-MS-TrafficTypeDiagnostic: SA0PR03MB5529:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cb8e83bf-9549-48ba-aba2-08da486d4dab
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 10:05:51.9171
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +0P3zcOYtyPJ503fFTTvkTZuOcRQDd/06FAEIkqwNHOH/bASH+pL8wq1/0Jc+PFAuf01pSofbb7MG8zCd3Iaiw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5529

On Tue, Jun 07, 2022 at 09:43:25AM +0200, Jan Beulich wrote:
> On 03.06.2022 16:46, Roger Pau Monné wrote:
> > On Fri, Jun 03, 2022 at 02:49:54PM +0200, Jan Beulich wrote:
> >> On 26.05.2022 13:11, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/vmx/vmx.c
> >>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >>> @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
> >>>  
> >>>  void vmx_update_debug_state(struct vcpu *v)
> >>>  {
> >>> +    unsigned int mask = 1u << TRAP_int3;
> >>> +
> >>> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> >>
> >> I'm puzzled by the lack of symmetry between this and ...
> >>
> >>> +        /*
> >>> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> >>> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> >>> +         */
> >>> +        mask |= 1u << TRAP_debug;
> >>> +
> >>>      if ( v->arch.hvm.debug_state_latch )
> >>> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> >>> +        v->arch.hvm.vmx.exception_bitmap |= mask;
> >>>      else
> >>> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> >>> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
> >>>  
> >>>      vmx_vmcs_enter(v);
> >>>      vmx_update_exception_bitmap(v);
> >>> @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
> >>>          switch ( vector )
> >>>          {
> >>>          case TRAP_debug:
> >>> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> >>> +                goto exit_and_crash;
> >>
> >> ... this condition. Shouldn't one be the inverse of the other (and
> >> then it's the one down here which wants adjusting)?
> > 
> > The condition in vmx_update_debug_state() sets the mask so that
> > TRAP_debug will only be added or removed from the bitmap if
> > !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting (note that
> > otherwise TRAP_debug is unconditionally set if
> > !cpu_has_vmx_notify_vm_exiting).
> > 
> > Hence it's impossible to get a VMExit TRAP_debug with
> > cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting because
> > TRAP_debug will never be set by vmx_update_debug_state() in that
> > case.
> 
> Hmm, yes, I've been misguided by you not altering the existing setting
> of v->arch.hvm.vmx.exception_bitmap in construct_vmcs(). Instead you
> add an entirely new block of code near the bottom of the function. Is
> there any chance you could move up that adjustment, perhaps along the
> lines of
> 
>     v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
>               | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
>               | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
>     if ( cpu_has_vmx_notify_vm_exiting )
>     {
>         __vmwrite(NOTIFY_WINDOW, vm_notify_window);
>         /*
>          * Disable #AC and #DB interception: by using VM Notify Xen is
>          * guaranteed to get a VM exit even if the guest manages to lock the
>          * CPU.
>          */
>         v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
>                                               (1U << TRAP_alignment_check));
>     }
>     vmx_update_exception_bitmap(v);

Sure, will move up when posting a new version then.  I will wait for
feedback from Jun or Kevin regarding the default window size before
doing so.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:10:36 2022
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH 0/2] Xen FF-A mediator
Date: Tue,  7 Jun 2022 12:10:08 +0200
Message-Id: <20220607101010.3136600-1-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch set adds an FF-A [1] mediator modeled after the TEE mediator
already present in Xen. The FF-A mediator implements the subset of the FF-A
1.1 specification needed to communicate with OP-TEE, using FF-A as the
transport mechanism instead of SMC/HVC as with the TEE mediator. It allows
a design in OP-TEE similar to that of the TEE mediator, where OP-TEE
presents one virtual partition of itself to each guest in Xen.

The FF-A mediator is generic in the sense that it contains nothing OP-TEE
specific, except that only the subset needed for OP-TEE is implemented so
far. The hooks needed to inform OP-TEE that a guest has been created or
destroyed are part of the FF-A specification.

It should be possible to extend the FF-A mediator to implement a larger
portion of the FF-A 1.1 specification without breaking the way OP-TEE is
communicated with here, so it should be possible to support any TEE or
Secure Partition that uses FF-A as its transport with this mediator.

[1] https://developer.arm.com/documentation/den0077/latest

Thanks,
Jens

Jens Wiklander (2):
  xen/arm: smccc: add support for SMCCCv1.2 extended input/output
    registers
  xen/arm: add FF-A mediator

 xen/arch/arm/Kconfig         |   11 +
 xen/arch/arm/Makefile        |    1 +
 xen/arch/arm/arm64/smc.S     |   43 +
 xen/arch/arm/domain.c        |   10 +
 xen/arch/arm/ffa.c           | 1624 ++++++++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c          |   19 +-
 xen/include/asm-arm/domain.h |    4 +
 xen/include/asm-arm/ffa.h    |   71 ++
 xen/include/asm-arm/smccc.h  |   42 +
 9 files changed, 1821 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/include/asm-arm/ffa.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:10:36 2022
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended input/output registers
Date: Tue,  7 Jun 2022 12:10:09 +0200
Message-Id: <20220607101010.3136600-2-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220607101010.3136600-1-jens.wiklander@linaro.org>
References: <20220607101010.3136600-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SMCCC v1.2 on AArch64 allows x0-x17 to be used as both parameter
registers and result registers for the SMC and HVC instructions.

The Arm Firmware Framework for Armv8-A specification makes use of x0-x7
as parameter and result registers.

Let us add a new interface to support this extended set of input/output
registers.

This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
extended input/output registers") by Sudeep Holla from the Linux kernel.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/arm64/smc.S    | 43 +++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c         |  2 +-
 xen/include/asm-arm/smccc.h | 42 ++++++++++++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
index b0752be57e8f..e024fa4d44d1 100644
--- a/xen/arch/arm/arm64/smc.S
+++ b/xen/arch/arm/arm64/smc.S
@@ -30,3 +30,46 @@ ENTRY(__arm_smccc_1_0_smc)
         stp     x2, x3, [x4, #SMCCC_RES_a2]
 1:
         ret
+
+
+/*
+ * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+ *                        struct arm_smccc_1_2_regs *res)
+ */
+ENTRY(arm_smccc_1_2_smc)
+    /* Save `res` and free a GPR that won't be clobbered */
+    stp     x1, x19, [sp, #-16]!
+
+    /* Ensure `args` won't be clobbered while loading regs in next step */
+    mov	x19, x0
+
+    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
+    ldp	x0, x1, [x19, #0]
+    ldp	x2, x3, [x19, #16]
+    ldp	x4, x5, [x19, #32]
+    ldp	x6, x7, [x19, #48]
+    ldp	x8, x9, [x19, #64]
+    ldp	x10, x11, [x19, #80]
+    ldp	x12, x13, [x19, #96]
+    ldp	x14, x15, [x19, #112]
+    ldp	x16, x17, [x19, #128]
+
+    smc #0
+
+    /* Load the `res` from the stack */
+    ldr	x19, [sp]
+
+    /* Store the registers x0 - x17 into the result structure */
+    stp	x0, x1, [x19, #0]
+    stp	x2, x3, [x19, #16]
+    stp	x4, x5, [x19, #32]
+    stp	x6, x7, [x19, #48]
+    stp	x8, x9, [x19, #64]
+    stp	x10, x11, [x19, #80]
+    stp	x12, x13, [x19, #96]
+    stp	x14, x15, [x19, #112]
+    stp	x16, x17, [x19, #128]
+
+    /* Restore original x19 */
+    ldp     xzr, x19, [sp], #16
+    ret
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index a36db15fffc0..33b0ddc634da 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
     switch ( fid )
     {
     case ARM_SMCCC_VERSION_FID:
-        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
+        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
         return true;
 
     case ARM_SMCCC_ARCH_FEATURES_FID:
diff --git a/xen/include/asm-arm/smccc.h b/xen/include/asm-arm/smccc.h
index 9d94beb3df2d..8128283bc7b6 100644
--- a/xen/include/asm-arm/smccc.h
+++ b/xen/include/asm-arm/smccc.h
@@ -33,6 +33,7 @@
 
 #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
 #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
+#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
 
 /*
  * This file provides common defines for ARM SMC Calling Convention as
@@ -217,6 +218,7 @@ struct arm_smccc_res {
 #ifdef CONFIG_ARM_32
 #define arm_smccc_1_0_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
 #define arm_smccc_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
+
 #else
 
 void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
@@ -265,8 +267,48 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
         else                                                    \
             arm_smccc_1_0_smc(__VA_ARGS__);                     \
     } while ( 0 )
+
+/**
+ * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
+ * @a0-a17: argument values from registers 0 to 17
+ */
+struct arm_smccc_1_2_regs {
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long a8;
+    unsigned long a9;
+    unsigned long a10;
+    unsigned long a11;
+    unsigned long a12;
+    unsigned long a13;
+    unsigned long a14;
+    unsigned long a15;
+    unsigned long a16;
+    unsigned long a17;
+};
 #endif /* CONFIG_ARM_64 */
 
+/**
+ * arm_smccc_1_2_smc() - make SMC calls
+ * @args: arguments passed via struct arm_smccc_1_2_regs
+ * @res: result values via struct arm_smccc_1_2_regs
+ *
+ * This function is used to make SMC calls following SMC Calling Convention
+ * v1.2 or above. The contents of the supplied args structure are copied
+ * to registers prior to the SMC instruction. The result structure is
+ * updated with the register contents on return from the SMC
+ * instruction.
+ */
+void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+                       struct arm_smccc_1_2_regs *res);
+
+
 #endif /* __ASSEMBLY__ */
 
 /*
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:10:38 2022
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH 2/2] xen/arm: add FF-A mediator
Date: Tue,  7 Jun 2022 12:10:10 +0200
Message-Id: <20220607101010.3136600-3-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220607101010.3136600-1-jens.wiklander@linaro.org>
References: <20220607101010.3136600-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds an FF-A version 1.1 [1] mediator to communicate with a Secure
Partition in the secure world.

The implementation is the bare minimum to be able to communicate with
OP-TEE running as an SPMC at S-EL1.

This is loosely based on the TEE mediator framework and the OP-TEE
mediator.

[1] https://developer.arm.com/documentation/den0077/latest
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/Kconfig         |   11 +
 xen/arch/arm/Makefile        |    1 +
 xen/arch/arm/domain.c        |   10 +
 xen/arch/arm/ffa.c           | 1624 ++++++++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c          |   17 +-
 xen/include/asm-arm/domain.h |    4 +
 xen/include/asm-arm/ffa.h    |   71 ++
 7 files changed, 1735 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/include/asm-arm/ffa.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 277738826581..281edb85cbff 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -114,6 +114,17 @@ config TEE
 
 source "arch/arm/tee/Kconfig"
 
+config FFA
+	bool "Enable FF-A mediator support" if EXPERT
+	default n
+	depends on ARM_64
+	help
+	  This option enables a minimal FF-A mediator. The mediator is
+	  generic as it follows the FF-A specification [1], but it only
+	  implements a small subset of the specification.
+
+	  [1] https://developer.arm.com/documentation/den0077/latest
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 1ded44d20047..90c43a4a9edf 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -17,6 +17,7 @@ obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_FFA) += ffa.o
 obj-y += gic.o
 obj-y += gic-v2.o
 obj-$(CONFIG_GICV3) += gic-v3.o
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ac6a419f55c9..451f3eed550f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -25,6 +25,7 @@
 #include <asm/cpufeature.h>
 #include <asm/current.h>
 #include <asm/event.h>
+#include <asm/ffa.h>
 #include <asm/gic.h>
 #include <asm/guest_access.h>
 #include <asm/guest_atomics.h>
@@ -737,6 +738,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
         goto fail;
 
+    if ( (rc = ffa_domain_init(d)) != 0 )
+        goto fail;
+
     update_domain_wallclock_time(d);
 
     /*
@@ -974,6 +978,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
  */
 enum {
     PROG_tee = 1,
+    PROG_ffa,
     PROG_xen,
     PROG_page,
     PROG_mapping,
@@ -1012,6 +1017,11 @@ int domain_relinquish_resources(struct domain *d)
         if (ret )
             return ret;
 
+    PROGRESS(ffa):
+        ret = ffa_relinquish_resources(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
new file mode 100644
index 000000000000..9063b7f2b59e
--- /dev/null
+++ b/xen/arch/arm/ffa.c
@@ -0,0 +1,1624 @@
+/*
+ * xen/arch/arm/ffa.c
+ *
+ * Arm Firmware Framework for Armv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2021  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <xen/domain_page.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+#include <xen/sizes.h>
+#include <xen/bitops.h>
+
+#include <asm/smccc.h>
+#include <asm/event.h>
+#include <asm/ffa.h>
+#include <asm/regs.h>
+
+/* Error codes */
+#define FFA_RET_OK			0
+#define FFA_RET_NOT_SUPPORTED		-1
+#define FFA_RET_INVALID_PARAMETERS	-2
+#define FFA_RET_NO_MEMORY		-3
+#define FFA_RET_BUSY			-4
+#define FFA_RET_INTERRUPTED		-5
+#define FFA_RET_DENIED			-6
+#define FFA_RET_RETRY			-7
+#define FFA_RET_ABORTED			-8
+
+/* FFA_VERSION helpers */
+#define FFA_VERSION_MAJOR		_AC(1,U)
+#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
+#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
+#define FFA_VERSION_MINOR		_AC(1,U)
+#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
+#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
+#define MAKE_FFA_VERSION(major, minor)	\
+	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
+	 ((minor) & FFA_VERSION_MINOR_MASK))
+
+#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
+#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
+						 FFA_VERSION_MINOR)
+
+
+#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
+
+/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
+#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
+
+/* Memory access permissions: Read-write */
+#define FFA_MEM_ACC_RW			_AC(0x2,U)
+
+/* Clear memory before mapping in receiver */
+#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
+/* Relayer may time slice this operation */
+#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
+/* Clear memory after receiver relinquishes it */
+#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
+
+/* Share memory transaction */
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
+/* Relayer must choose the alignment boundary */
+#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
+
+#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
+
+/* Framework direct request/response */
+#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
+#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
+#define FFA_MSG_PSCI			_AC(0x0,U)
+#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
+#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
+#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
+#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
+
+/*
+ * Flags used for the FFA_PARTITION_INFO_GET return message:
+ * BIT(0): Supports receipt of direct requests
+ * BIT(1): Can send direct requests
+ * BIT(2): Can send and receive indirect messages
+ * BIT(3): Supports receipt of notifications
+ * BIT(4-5): Partition ID is a PE endpoint ID
+ */
+#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
+#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
+#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
+#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
+#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
+#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
+#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
+#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
+#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
+
+/* Function IDs */
+#define FFA_ERROR			_AC(0x84000060,U)
+#define FFA_SUCCESS_32			_AC(0x84000061,U)
+#define FFA_SUCCESS_64			_AC(0xC4000061,U)
+#define FFA_INTERRUPT			_AC(0x84000062,U)
+#define FFA_VERSION			_AC(0x84000063,U)
+#define FFA_FEATURES			_AC(0x84000064,U)
+#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
+#define FFA_RX_RELEASE			_AC(0x84000065,U)
+#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
+#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
+#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
+#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
+#define FFA_ID_GET			_AC(0x84000069,U)
+#define FFA_SPM_ID_GET			_AC(0x84000085,U)
+#define FFA_MSG_WAIT			_AC(0x8400006B,U)
+#define FFA_MSG_YIELD			_AC(0x8400006C,U)
+#define FFA_MSG_RUN			_AC(0x8400006D,U)
+#define FFA_MSG_SEND2			_AC(0x84000086,U)
+#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
+#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
+#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
+#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
+#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
+#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
+#define FFA_MEM_LEND_32			_AC(0x84000072,U)
+#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
+#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
+#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
+#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
+#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
+#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
+#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
+#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
+#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
+#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
+#define FFA_MSG_SEND			_AC(0x8400006E,U)
+#define FFA_MSG_POLL			_AC(0x8400006A,U)
+
+/* Partition information descriptor */
+struct ffa_partition_info_1_0 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+};
+
+struct ffa_partition_info_1_1 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+    uint8_t uuid[16];
+};
+
+/* Constituent memory region descriptor */
+struct ffa_address_range {
+    uint64_t address;
+    uint32_t page_count;
+    uint32_t reserved;
+};
+
+/* Composite memory region descriptor */
+struct ffa_mem_region {
+    uint32_t total_page_count;
+    uint32_t address_range_count;
+    uint64_t reserved;
+    struct ffa_address_range address_range_array[];
+};
+
+/* Memory access permissions descriptor */
+struct ffa_mem_access_perm {
+    uint16_t endpoint_id;
+    uint8_t perm;
+    uint8_t flags;
+};
+
+/* Endpoint memory access descriptor */
+struct ffa_mem_access {
+    struct ffa_mem_access_perm access_perm;
+    uint32_t region_offs;
+    uint64_t reserved;
+};
+
+/* Lend, donate or share memory transaction descriptor */
+struct ffa_mem_transaction_1_0 {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t reserved0;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t reserved1;
+    uint32_t mem_access_count;
+    struct ffa_mem_access mem_access_array[];
+};
+
+struct ffa_mem_transaction_1_1 {
+    uint16_t sender_id;
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t mem_access_size;
+    uint32_t mem_access_count;
+    uint32_t mem_access_offs;
+    uint8_t reserved[12];
+};
+
+/*
+ * The parts needed from struct ffa_mem_transaction_1_0 or struct
+ * ffa_mem_transaction_1_1, used to provide an abstraction of difference in
+ * data structures between version 1.0 and 1.1. This is just an internal
+ * interface and can be changed without changing any ABI.
+ */
+struct ffa_mem_transaction_x {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t flags;
+    uint8_t mem_access_size;
+    uint8_t mem_access_count;
+    uint16_t mem_access_offs;
+    uint64_t global_handle;
+    uint64_t tag;
+};
+
+/* Endpoint RX/TX descriptor */
+struct ffa_endpoint_rxtx_descriptor_1_0 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_range_count;
+    uint32_t tx_range_count;
+};
+
+struct ffa_endpoint_rxtx_descriptor_1_1 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_region_offs;
+    uint32_t tx_region_offs;
+};
+
+struct ffa_ctx {
+    void *rx;
+    void *tx;
+    struct page_info *rx_pg;
+    struct page_info *tx_pg;
+    unsigned int page_count;
+    uint32_t guest_vers;
+    bool tx_is_mine;
+    bool interrupted;
+};
+
+struct ffa_shm_mem {
+    struct list_head list;
+    uint16_t sender_id;
+    uint16_t ep_id;     /* endpoint, the one lending */
+    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
+    unsigned int page_count;
+    struct page_info *pages[];
+};
+
+struct mem_frag_state {
+    struct list_head list;
+    struct ffa_shm_mem *shm;
+    uint32_t range_count;
+    unsigned int current_page_idx;
+    unsigned int frag_offset;
+    unsigned int range_offset;
+    uint8_t *buf;
+    unsigned int buf_size;
+    struct ffa_address_range range;
+};
+
+/*
+ * Our rx/tx buffers shared with the SPMC
+ */
+static uint32_t ffa_version;
+static uint16_t *subsr_vm_created;
+static unsigned int subsr_vm_created_count;
+static uint16_t *subsr_vm_destroyed;
+static unsigned int subsr_vm_destroyed_count;
+static void *ffa_rx;
+static void *ffa_tx;
+static unsigned int ffa_page_count;
+static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;
+
+static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
+static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);
+static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;
+
+static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
+
+static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
+{
+    return (uint64_t)reg0 << 32 | reg1;
+}
+
+static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
+{
+    *reg0 = val >> 32;
+    *reg1 = val;
+}
+
+static bool ffa_get_version(uint32_t *vers)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
+    {
+        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
+        return false;
+    }
+
+    *vers = resp.a0;
+    return true;
+}
+
+static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
+                             uint32_t page_count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+#ifdef CONFIG_ARM_64
+        .a0 = FFA_RXTX_MAP_64,
+#endif
+#ifdef CONFIG_ARM_32
+        .a0 = FFA_RXTX_MAP_32,
+#endif
+        .a1 = tx_addr, .a2 = rx_addr,
+        .a3 = page_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                       uint32_t w4, uint32_t w5,
+                                       uint32_t *count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
+        .a5 = w5,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    *count = resp.a2;
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_rx_release(void)
+{
+    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
+                             register_t addr, uint32_t pg_count,
+                             uint64_t *handle)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
+        .a4 = pg_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    /*
+     * For arm64 we must use the 64-bit calling convention if the buffer
+     * isn't passed in our tx buffer.
+     */
+    if ( sizeof(addr) > 4 && addr )
+        arg.a0 = FFA_MEM_SHARE_64;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        *handle = reg_pair_to_64(resp.a3, resp.a2);
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        *handle = reg_pair_to_64(resp.a2, resp.a1);
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
+                               uint16_t sender_id)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
+        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
+                                uint32_t flags)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
+                                      uint8_t msg)
+{
+    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
+    int32_t res;
+
+    if ( msg != FFA_MSG_SEND_VM_CREATED && msg != FFA_MSG_SEND_VM_DESTROYED )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( msg == FFA_MSG_SEND_VM_CREATED )
+        exp_resp |= FFA_MSG_RESP_VM_CREATED;
+    else
+        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
+
+    do {
+        const struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
+            .a1 = sp_id,
+            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
+            .a5 = vm_id,
+        };
+        struct arm_smccc_1_2_regs resp;
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
+        {
+            /*
+             * This is an invalid response, likely due to some error in the
+             * implementation of the ABI.
+             */
+            return FFA_RET_INVALID_PARAMETERS;
+        }
+        res = resp.a3;
+    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
+
+    return res;
+}
+
+static uint16_t get_vm_id(struct domain *d)
+{
+    /* +1 since 0 is reserved for the hypervisor in FF-A */
+    return d->domain_id + 1;
+}
+
+static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
+                     register_t v2, register_t v3, register_t v4, register_t v5,
+                     register_t v6, register_t v7)
+{
+    set_user_reg(regs, 0, v0);
+    set_user_reg(regs, 1, v1);
+    set_user_reg(regs, 2, v2);
+    set_user_reg(regs, 3, v3);
+    set_user_reg(regs, 4, v4);
+    set_user_reg(regs, 5, v5);
+    set_user_reg(regs, 6, v6);
+    set_user_reg(regs, 7, v7);
+}
+
+static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
+{
+    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
+}
+
+static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
+                             uint32_t w3)
+{
+    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
+}
+
+static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
+                             uint32_t handle_hi, uint32_t frag_offset,
+                             uint16_t sender_id)
+{
+    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
+             (uint32_t)sender_id << 16, 0, 0, 0);
+}
+
+static void handle_version(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t vers = get_user_reg(regs, 1);
+
+    if ( vers < FFA_VERSION_1_1 )
+        vers = FFA_VERSION_1_0;
+    else
+        vers = FFA_VERSION_1_1;
+
+    ctx->guest_vers = vers;
+    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
+}
+
+static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                                register_t rx_addr, uint32_t page_count)
+{
+    uint32_t ret = FFA_RET_NOT_SUPPORTED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct page_info *tx_pg;
+    struct page_info *rx_pg;
+    p2m_type_t t;
+    void *rx;
+    void *tx;
+
+    if ( !smccc_is_conv_64(fid) )
+    {
+        tx_addr &= UINT32_MAX;
+        rx_addr &= UINT32_MAX;
+    }
+
+    /* For now to keep things simple, only deal with a single page */
+    if ( page_count != 1 )
+        return FFA_RET_NOT_SUPPORTED;
+
+    /* Already mapped */
+    if ( ctx->rx )
+        return FFA_RET_DENIED;
+
+    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
+    if ( !tx_pg )
+        return FFA_RET_NOT_SUPPORTED;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_tx_pg;
+
+    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
+    if ( !rx_pg )
+        goto err_put_tx_pg;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_rx_pg;
+
+    tx = __map_domain_page_global(tx_pg);
+    if ( !tx )
+        goto err_put_rx_pg;
+
+    rx = __map_domain_page_global(rx_pg);
+    if ( !rx )
+        goto err_unmap_tx;
+
+    ctx->rx = rx;
+    ctx->tx = tx;
+    ctx->rx_pg = rx_pg;
+    ctx->tx_pg = tx_pg;
+    ctx->page_count = 1;
+    ctx->tx_is_mine = true;
+    return FFA_RET_OK;
+
+err_unmap_tx:
+    unmap_domain_page_global(tx);
+err_put_rx_pg:
+    put_page(rx_pg);
+err_put_tx_pg:
+    put_page(tx_pg);
+    return ret;
+}
+
+static uint32_t handle_rxtx_unmap(void)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t ret;
+
+    if ( !ctx->rx )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    ret = ffa_rxtx_unmap(get_vm_id(d));
+    if ( ret )
+        return ret;
+
+    unmap_domain_page_global(ctx->rx);
+    unmap_domain_page_global(ctx->tx);
+    put_page(ctx->rx_pg);
+    put_page(ctx->tx_pg);
+    ctx->rx = NULL;
+    ctx->tx = NULL;
+    ctx->rx_pg = NULL;
+    ctx->tx_pg = NULL;
+    ctx->page_count = 0;
+    ctx->tx_is_mine = false;
+
+    return FFA_RET_OK;
+}
+
+static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                          uint32_t w4, uint32_t w5,
+                                          uint32_t *count)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    if ( !ffa_page_count )
+        return FFA_RET_DENIED;
+
+    spin_lock(&ffa_buffer_lock);
+    if ( !ctx->page_count || !ctx->tx_is_mine )
+        goto out;
+    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
+    if ( ret )
+        goto out;
+    if ( ctx->guest_vers == FFA_VERSION_1_0 )
+    {
+        size_t n;
+        struct ffa_partition_info_1_1 *src = ffa_rx;
+        struct ffa_partition_info_1_0 *dst = ctx->rx;
+
+        for ( n = 0; n < *count; n++ )
+        {
+            dst[n].id = src[n].id;
+            dst[n].execution_context = src[n].execution_context;
+            dst[n].partition_properties = src[n].partition_properties;
+        }
+    }
+    else
+    {
+        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
+
+        memcpy(ctx->rx, ffa_rx, sz);
+    }
+    ffa_rx_release();
+    ctx->tx_is_mine = false;
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    return ret;
+}
+
+static uint32_t handle_rx_release(void)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    spin_lock(&ffa_buffer_lock);
+    if ( !ctx->page_count || ctx->tx_is_mine )
+        goto out;
+    ret = FFA_RET_OK;
+    ctx->tx_is_mine = true;
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    return ret;
+}
+
+static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
+    struct arm_smccc_1_2_regs resp = { };
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t src_dst;
+    uint64_t mask;
+
+    if ( smccc_is_conv_64(fid) )
+        mask = GENMASK_ULL(63, 0);
+    else
+        mask = GENMASK_ULL(31, 0);
+
+    src_dst = get_user_reg(regs, 1);
+    if ( (src_dst >> 16) != get_vm_id(d) )
+    {
+        resp.a0 = FFA_ERROR;
+        resp.a2 = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    arg.a1 = src_dst;
+    arg.a2 = get_user_reg(regs, 2) & mask;
+    arg.a3 = get_user_reg(regs, 3) & mask;
+    arg.a4 = get_user_reg(regs, 4) & mask;
+    arg.a5 = get_user_reg(regs, 5) & mask;
+    arg.a6 = get_user_reg(regs, 6) & mask;
+    arg.a7 = get_user_reg(regs, 7) & mask;
+
+    while ( true )
+    {
+        arm_smccc_1_2_smc(&arg, &resp);
+
+        switch ( resp.a0 )
+        {
+        case FFA_INTERRUPT:
+            ctx->interrupted = true;
+            goto out;
+        case FFA_ERROR:
+        case FFA_SUCCESS_32:
+        case FFA_SUCCESS_64:
+        case FFA_MSG_SEND_DIRECT_RESP_32:
+        case FFA_MSG_SEND_DIRECT_RESP_64:
+            goto out;
+        default:
+            /* Bad fid, report back. */
+            memset(&arg, 0, sizeof(arg));
+            arg.a0 = FFA_ERROR;
+            arg.a1 = src_dst;
+            arg.a2 = FFA_RET_NOT_SUPPORTED;
+            continue;
+        }
+    }
+
+out:
+    set_user_reg(regs, 0, resp.a0);
+    set_user_reg(regs, 1, resp.a1 & mask);
+    set_user_reg(regs, 2, resp.a2 & mask);
+    set_user_reg(regs, 3, resp.a3 & mask);
+    set_user_reg(regs, 4, resp.a4 & mask);
+    set_user_reg(regs, 5, resp.a5 & mask);
+    set_user_reg(regs, 6, resp.a6 & mask);
+    set_user_reg(regs, 7, resp.a7 & mask);
+}
+
+static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
+                         struct ffa_address_range *range, uint32_t range_count,
+                         unsigned int start_page_idx,
+                         unsigned int *last_page_idx)
+{
+    unsigned int pg_idx = start_page_idx;
+    unsigned long gfn;
+    unsigned int n;
+    unsigned int m;
+    p2m_type_t t;
+    uint64_t addr;
+
+    for ( n = 0; n < range_count; n++ )
+    {
+        for ( m = 0; m < range[n].page_count; m++ )
+        {
+            if ( pg_idx >= shm->page_count )
+                return FFA_RET_INVALID_PARAMETERS;
+
+            addr = read_atomic(&range[n].address);
+            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
+            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
+            if ( !shm->pages[pg_idx] )
+                return FFA_RET_DENIED;
+            pg_idx++;
+            /* Only normal RAM for now */
+            if ( t != p2m_ram_rw )
+                return FFA_RET_DENIED;
+        }
+    }
+
+    *last_page_idx = pg_idx;
+
+    return FFA_RET_OK;
+}
+
+static void put_shm_pages(struct ffa_shm_mem *shm)
+{
+    unsigned int n;
+
+    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
+    {
+        put_page(shm->pages[n]);
+        shm->pages[n] = NULL;
+    }
+}
+
+static void init_range(struct ffa_address_range *addr_range,
+                       paddr_t pa)
+{
+    memset(addr_range, 0, sizeof(*addr_range));
+    addr_range->address = pa;
+    addr_range->page_count = 1;
+}
+
+static int share_shm(struct ffa_shm_mem *shm)
+{
+    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
+    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
+    struct ffa_mem_access *mem_access_array;
+    struct ffa_mem_region *region_descr;
+    struct ffa_address_range *addr_range;
+    paddr_t pa;
+    paddr_t last_pa;
+    unsigned int n;
+    uint32_t frag_len;
+    uint32_t tot_len;
+    int ret;
+    unsigned int range_count;
+    unsigned int range_base;
+    bool first;
+
+    memset(descr, 0, sizeof(*descr));
+    descr->sender_id = shm->sender_id;
+    descr->global_handle = shm->handle;
+    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
+    descr->mem_access_count = 1;
+    descr->mem_access_size = sizeof(*mem_access_array);
+    descr->mem_access_offs = sizeof(*descr);
+    mem_access_array = (void *)(descr + 1);
+    region_descr = (void *)(mem_access_array + 1);
+
+    memset(mem_access_array, 0, sizeof(*mem_access_array));
+    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
+    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
+    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
+
+    memset(region_descr, 0, sizeof(*region_descr));
+    region_descr->total_page_count = shm->page_count;
+
+    region_descr->address_range_count = 1;
+    last_pa = page_to_maddr(shm->pages[0]);
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            continue;
+        }
+        region_descr->address_range_count++;
+    }
+
+    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
+              sizeof(*region_descr) +
+              region_descr->address_range_count * sizeof(*addr_range);
+
+    addr_range = region_descr->address_range_array;
+    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
+    last_pa = page_to_maddr(shm->pages[0]);
+    init_range(addr_range, last_pa);
+    first = true;
+    range_count = 1;
+    range_base = 0;
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            addr_range->page_count++;
+            continue;
+        }
+
+        if ( frag_len == max_frag_len )
+        {
+            if ( first )
+            {
+                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+                first = false;
+            }
+            else
+            {
+                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+            }
+            if ( ret <= 0 )
+                return ret;
+            range_base = range_count;
+            range_count = 0;
+            frag_len = sizeof(*addr_range);
+            addr_range = ffa_tx;
+        }
+        else
+        {
+            frag_len += sizeof(*addr_range);
+            addr_range++;
+        }
+        init_range(addr_range, pa);
+        range_count++;
+    }
+
+    if ( first )
+        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    else
+        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+}
+
+static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
+                                struct ffa_mem_transaction_x *trans)
+{
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint32_t count;
+    uint32_t offs;
+    uint32_t size;
+
+    if ( ffa_vers >= FFA_VERSION_1_1 )
+    {
+        struct ffa_mem_transaction_1_1 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = read_atomic(&descr->mem_access_size);
+        offs = read_atomic(&descr->mem_access_offs);
+    }
+    else
+    {
+        struct ffa_mem_transaction_1_0 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = sizeof(struct ffa_mem_access);
+        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
+    }
+
+    if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
+         count > UINT8_MAX || offs > UINT16_MAX )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Check that the endpoint memory access descriptor array fits */
+    if ( size * count + offs > blen )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    trans->mem_reg_attr = mem_reg_attr;
+    trans->flags = flags;
+    trans->mem_access_size = size;
+    trans->mem_access_count = count;
+    trans->mem_access_offs = offs;
+    return 0;
+}
+
+static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
+                              unsigned int frag_len)
+{
+    struct domain *d = current->domain;
+    unsigned int o = offs;
+    unsigned int l;
+    int ret;
+
+    if ( frag_len < o )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Fill up the first struct ffa_address_range */
+    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
+    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
+    s->range_offset += l;
+    o += l;
+    if ( s->range_offset != sizeof(s->range) )
+        goto out;
+    s->range_offset = 0;
+
+    while ( true )
+    {
+        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
+                            &s->current_page_idx);
+        if ( ret )
+            return ret;
+        if ( s->range_count == 1 )
+            return 0;
+        s->range_count--;
+        if ( frag_len - o < sizeof(s->range) )
+            break;
+        memcpy(&s->range, s->buf + o, sizeof(s->range));
+        o += sizeof(s->range);
+    }
+
+    /* Collect any remaining bytes for the next struct ffa_address_range */
+    s->range_offset = frag_len - o;
+    memcpy(&s->range, s->buf + o, frag_len - o);
+out:
+    s->frag_offset += frag_len;
+    return s->frag_offset;
+}
+
+static void handle_mem_share(struct cpu_user_regs *regs)
+{
+    uint32_t tot_len = get_user_reg(regs, 1);
+    uint32_t frag_len = get_user_reg(regs, 2);
+    uint64_t addr = get_user_reg(regs, 3);
+    uint32_t page_count = get_user_reg(regs, 4);
+    struct ffa_mem_transaction_x trans;
+    struct ffa_mem_access *mem_access;
+    struct ffa_mem_region *region_descr;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct ffa_shm_mem *shm = NULL;
+    unsigned int last_page_idx = 0;
+    uint32_t range_count;
+    uint32_t region_offs;
+    int ret = FFA_RET_DENIED;
+    uint32_t handle_hi = 0;
+    uint32_t handle_lo = 0;
+
+    spin_lock(&ffa_buffer_lock);
+
+    /*
+     * We're only accepting memory transaction descriptors via the rx/tx
+     * buffer.
+     */
+    if ( addr )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* Check that the fragment length doesn't exceed the total length */
+    if ( frag_len > tot_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    if ( frag_len > ctx->page_count * PAGE_SIZE )
+        goto out_unlock;
+
+    if ( !ffa_page_count )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out_unlock;
+    }
+
+    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
+    if ( ret )
+        goto out_unlock;
+
+    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* Only supports sharing it with one SP for now */
+    if ( trans.mem_access_count != 1 )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    if ( trans.sender_id != get_vm_id(d) )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* Check that it fits in the supplied data */
+    if ( trans.mem_access_offs + trans.mem_access_size > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
+    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_offs = read_atomic(&mem_access->region_offs);
+    if ( sizeof(*region_descr) + region_offs > frag_len )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
+    range_count = read_atomic(&region_descr->address_range_count);
+    page_count = read_atomic(&region_descr->total_page_count);
+
+    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
+    if ( !shm )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out;
+    }
+    shm->sender_id = trans.sender_id;
+    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+    shm->page_count = page_count;
+
+    if ( frag_len != tot_len )
+    {
+        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
+
+        if ( !s )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out;
+        }
+        s->shm = shm;
+        s->range_count = range_count;
+        s->buf = ctx->tx;
+        s->buf_size = ffa_page_count * PAGE_SIZE;
+        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
+                                 frag_len);
+        if ( ret <= 0 )
+        {
+            xfree(s);
+            if ( ret < 0 )
+                goto out;
+        }
+        else
+        {
+            shm->handle = next_handle++;
+            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+            spin_lock(&ffa_mem_list_lock);
+            list_add_tail(&s->list, &ffa_frag_list);
+            spin_unlock(&ffa_mem_list_lock);
+        }
+        goto out_unlock;
+    }
+
+    /* Check that the composite memory region descriptor fits */
+    if ( sizeof(*region_descr) + region_offs +
+         range_count * sizeof(struct ffa_address_range) > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
+                        0, &last_page_idx);
+    if ( ret )
+        goto out;
+    if ( last_page_idx != shm->page_count )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* Note that share_shm() uses our tx buffer */
+    ret = share_shm(shm);
+    if ( ret )
+        goto out;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_add_tail(&shm->list, &ffa_mem_list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+
+out:
+    if ( ret && shm )
+    {
+        put_shm_pages(shm);
+        xfree(shm);
+    }
+out_unlock:
+    spin_unlock(&ffa_buffer_lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static struct mem_frag_state *find_frag_state(uint64_t handle)
+{
+    struct mem_frag_state *s;
+
+    list_for_each_entry(s, &ffa_frag_list, list)
+        if ( s->shm->handle == handle )
+            return s;
+
+    return NULL;
+}
+
+static void handle_mem_frag_tx(struct cpu_user_regs *regs)
+{
+    uint32_t frag_len = get_user_reg(regs, 3);
+    uint32_t handle_lo = get_user_reg(regs, 1);
+    uint32_t handle_hi = get_user_reg(regs, 2);
+    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
+    struct mem_frag_state *s;
+    uint16_t sender_id = 0;
+    int ret;
+
+    spin_lock(&ffa_buffer_lock);
+    s = find_frag_state(handle);
+    if ( !s )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+    sender_id = s->shm->sender_id;
+
+    if ( frag_len > s->buf_size )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = add_mem_share_frag(s, 0, frag_len);
+    if ( ret == 0 )
+    {
+        /* Note that share_shm() uses our tx buffer */
+        ret = share_shm(s->shm);
+        if ( ret == 0 )
+        {
+            spin_lock(&ffa_mem_list_lock);
+            list_add_tail(&s->shm->list, &ffa_mem_list);
+            spin_unlock(&ffa_mem_list_lock);
+        }
+        else
+        {
+            put_shm_pages(s->shm);
+            xfree(s->shm);
+        }
+        spin_lock(&ffa_mem_list_lock);
+        list_del(&s->list);
+        spin_unlock(&ffa_mem_list_lock);
+        xfree(s);
+    }
+    else if ( ret < 0 )
+    {
+        put_shm_pages(s->shm);
+        xfree(s->shm);
+        spin_lock(&ffa_mem_list_lock);
+        list_del(&s->list);
+        spin_unlock(&ffa_mem_list_lock);
+        xfree(s);
+    }
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
+{
+    struct ffa_shm_mem *shm;
+    uint32_t handle_hi;
+    uint32_t handle_lo;
+    int ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_for_each_entry(shm, &ffa_mem_list, list)
+    {
+        if ( shm->handle == handle )
+            goto found_it;
+    }
+    shm = NULL;
+found_it:
+    spin_unlock(&ffa_mem_list_lock);
+
+    if ( !shm )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    reg_pair_from_64(&handle_hi, &handle_lo, handle);
+    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
+    if ( ret )
+        return ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_del(&shm->list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    put_shm_pages(shm);
+    xfree(shm);
+
+    return ret;
+}
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t count;
+    uint32_t e;
+
+    if ( !ctx )
+        return false;
+
+    switch ( fid )
+    {
+    case FFA_VERSION:
+        handle_version(regs);
+        return true;
+    case FFA_ID_GET:
+        set_regs_success(regs, get_vm_id(d), 0);
+        return true;
+    case FFA_RXTX_MAP_32:
+#ifdef CONFIG_ARM_64
+    case FFA_RXTX_MAP_64:
+#endif
+        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
+                            get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_RXTX_UNMAP:
+        e = handle_rxtx_unmap();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_PARTITION_INFO_GET:
+        e = handle_partition_info_get(get_user_reg(regs, 1),
+                                      get_user_reg(regs, 2),
+                                      get_user_reg(regs, 3),
+                                      get_user_reg(regs, 4),
+                                      get_user_reg(regs, 5), &count);
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, count, 0);
+        return true;
+    case FFA_RX_RELEASE:
+        e = handle_rx_release();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MSG_SEND_DIRECT_REQ_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MSG_SEND_DIRECT_REQ_64:
+#endif
+        handle_msg_send_direct_req(regs, fid);
+        return true;
+    case FFA_MEM_SHARE_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MEM_SHARE_64:
+#endif
+        handle_mem_share(regs);
+        return true;
+    case FFA_MEM_RECLAIM:
+        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
+                                              get_user_reg(regs, 1)),
+                               get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MEM_FRAG_TX:
+        handle_mem_frag_tx(regs);
+        return true;
+
+    default:
+        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
+        return false;
+    }
+}
+
+int ffa_domain_init(struct domain *d)
+{
+    struct ffa_ctx *ctx;
+    unsigned int n;
+    unsigned int m;
+    unsigned int c_pos;
+    int32_t res;
+
+    if ( !ffa_version )
+        return 0;
+
+    ctx = xzalloc(struct ffa_ctx);
+    if ( !ctx )
+        return -ENOMEM;
+
+    for ( n = 0; n < subsr_vm_created_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_CREATED);
+        if ( res )
+        {
+            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_created[n], res);
+            c_pos = n;
+            goto err;
+        }
+    }
+
+    d->arch.ffa = ctx;
+
+    return 0;
+
+err:
+    /* Undo any already sent VM-created messages */
+    for ( n = 0; n < c_pos; n++ )
+        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
+            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
+                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
+                                       FFA_MSG_SEND_VM_DESTROYED);
+    xfree(ctx);
+    return -ENOMEM;
+}
+
+int ffa_relinquish_resources(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.ffa;
+    unsigned int n;
+    int32_t res;
+
+    if ( !ctx )
+        return 0;
+
+    for ( n = 0; n < subsr_vm_destroyed_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_DESTROYED);
+
+        if ( res )
+            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_destroyed[n], res);
+    }
+
+    XFREE(d->arch.ffa);
+
+    return 0;
+}
+
+static bool __init init_subscribers(void)
+{
+    struct ffa_partition_info_1_1 *fpi;
+    bool ret = false;
+    uint32_t count;
+    uint32_t e;
+    uint32_t n;
+    uint32_t c_pos;
+    uint32_t d_pos;
+
+    if ( ffa_version < FFA_VERSION_1_1 )
+        return true;
+
+    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
+        goto out;
+    }
+
+    /* The rx buffer is read below, so only release it at the end */
+    fpi = ffa_rx;
+    subsr_vm_created_count = 0;
+    subsr_vm_destroyed_count = 0;
+    for ( n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created_count++;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed_count++;
+    }
+
+    if ( subsr_vm_created_count )
+        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
+    if ( subsr_vm_destroyed_count )
+        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
+    if ( (subsr_vm_created_count && !subsr_vm_created) ||
+         (subsr_vm_destroyed_count && !subsr_vm_destroyed) )
+    {
+        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
+        subsr_vm_created_count = 0;
+        subsr_vm_destroyed_count = 0;
+        XFREE(subsr_vm_created);
+        XFREE(subsr_vm_destroyed);
+        goto out;
+    }
+
+    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created[c_pos++] = fpi[n].id;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed[d_pos++] = fpi[n].id;
+    }
+
+    ret = true;
+out:
+    ffa_rx_release();
+    return ret;
+}
+
+static int __init ffa_init(void)
+{
+    uint32_t vers;
+    uint32_t e;
+    unsigned int major_vers;
+    unsigned int minor_vers;
+
+    /*
+     * psci_init_smccc() updates this value with what's reported by EL-3
+     * or secure world.
+     */
+    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
+    {
+        printk(XENLOG_ERR
+               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
+               smccc_ver, ARM_SMCCC_VERSION_1_2);
+        return 0;
+    }
+
+    if ( !ffa_get_version(&vers) )
+        return 0;
+
+    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
+    {
+        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
+        return 0;
+    }
+
+    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
+    minor_vers = vers & FFA_VERSION_MINOR_MASK;
+    printk(XENLOG_INFO "ARM FF-A Mediator version %u.%u\n",
+           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
+    printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
+           major_vers, minor_vers);
+
+    ffa_rx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_rx )
+        return 0;
+
+    ffa_tx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_tx )
+        goto err_free_ffa_rx;
+
+    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
+        goto err_free_ffa_tx;
+    }
+    ffa_page_count = 1;
+    ffa_version = vers;
+
+    if ( !init_subscribers() )
+        goto err_free_ffa_tx;
+
+    return 0;
+
+err_free_ffa_tx:
+    free_xenheap_pages(ffa_tx, 0);
+    ffa_tx = NULL;
+err_free_ffa_rx:
+    free_xenheap_pages(ffa_rx, 0);
+    ffa_rx = NULL;
+    ffa_page_count = 0;
+    ffa_version = 0;
+    XFREE(subsr_vm_created);
+    subsr_vm_created_count = 0;
+    XFREE(subsr_vm_destroyed);
+    subsr_vm_destroyed_count = 0;
+    return 0;
+}
+
+__initcall(ffa_init);
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 33b0ddc634da..cfc56715191b 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -20,6 +20,7 @@
 #include <public/arch-arm/smccc.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
+#include <asm/ffa.h>
 #include <asm/monitor.h>
 #include <asm/regs.h>
 #include <asm/smccc.h>
@@ -32,7 +33,7 @@
 #define XEN_SMCCC_FUNCTION_COUNT 3
 
 /* Number of functions currently supported by Standard Service Service Calls. */
-#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
+#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
 
 static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
 {
@@ -186,13 +187,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
     return do_vpsci_0_1_call(regs, fid);
 }
 
+static bool is_psci_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    /* fn is unsigned, so only the upper bound needs checking */
+    return fn <= 0x1fU;
+}
+
 /* PSCI 0.2 interface and other Standard Secure Calls */
 static bool handle_sssc(struct cpu_user_regs *regs)
 {
     uint32_t fid = (uint32_t)get_user_reg(regs, 0);
 
-    if ( do_vpsci_0_2_call(regs, fid) )
-        return true;
+    if ( is_psci_fid(fid) )
+        return do_vpsci_0_2_call(regs, fid);
+
+    if ( is_ffa_fid(fid) )
+        return ffa_handle_call(regs, fid);
 
     switch ( fid )
     {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 4e2f582006a5..e8ac49e29899 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -91,6 +91,10 @@ struct arch_domain
 #ifdef CONFIG_TEE
     void *tee;
 #endif
+
+#ifdef CONFIG_FFA
+    void *ffa;
+#endif
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/include/asm-arm/ffa.h b/xen/include/asm-arm/ffa.h
new file mode 100644
index 000000000000..1c6ce6421294
--- /dev/null
+++ b/xen/include/asm-arm/ffa.h
@@ -0,0 +1,71 @@
+/*
+ * xen/include/asm-arm/ffa.h
+ *
+ * Arm Firmware Framework for Armv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2021  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __ASM_ARM_FFA_H__
+#define __ASM_ARM_FFA_H__
+
+#include <xen/const.h>
+
+#include <asm/smccc.h>
+#include <asm/types.h>
+
+#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
+#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
+
+static inline bool is_ffa_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
+}
+
+#ifdef CONFIG_FFA
+#define FFA_NR_FUNCS    11
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
+int ffa_domain_init(struct domain *d);
+int ffa_relinquish_resources(struct domain *d);
+#else
+#define FFA_NR_FUNCS    0
+
+static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    return false;
+}
+
+static inline int ffa_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+static inline int ffa_relinquish_resources(struct domain *d)
+{
+    return 0;
+}
+#endif
+
+#endif /* __ASM_ARM_FFA_H__ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 10:37:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 10:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343137.568294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyWar-0003lB-Mu; Tue, 07 Jun 2022 10:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343137.568294; Tue, 07 Jun 2022 10:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyWar-0003l4-Il; Tue, 07 Jun 2022 10:37:49 +0000
Received: by outflank-mailman (input) for mailman id 343137;
 Tue, 07 Jun 2022 10:37:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyWaq-0003ku-GT; Tue, 07 Jun 2022 10:37:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyWaq-0002Ks-A5; Tue, 07 Jun 2022 10:37:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyWap-0008Ff-V0; Tue, 07 Jun 2022 10:37:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyWap-000385-UO; Tue, 07 Jun 2022 10:37:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M68BnqzOw5MbNi8HQX//mAWqBgW1oLVXQ5o82AgTMa0=; b=wLe/QcimkfXqmR1fnSC5GAf8Eo
	wHClaTjguzmf2M0I/uF3pn+tjQJxoZshYZgWh6GfPe9PNAaip9aPWzWBlaU/mTqCfRVylXYUAi1ky
	/s+0TSrpdF7LJuA9jwF/Wzqw80noHIotBvE2yIgaDtTM/s2BjVE6FEKViO55PJs9853c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170852: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 10:37:47 +0000

flight 170852 xen-unstable real [real]
flight 170862 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170852/
http://logs.test-lab.xenproject.org/osstest/logs/170862/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 170840

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair         11 xen-install/dst_host         fail  like 170840
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170840
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170840
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170840
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170840
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170840
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170840
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170840
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170840  2022-06-06 01:51:50 Z    1 days
Testing same since   170852  2022-06-06 22:09:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cea9ae06229577cd5b77019ce122f9cdd1568106
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 18 16:02:51 2022 +0000

    x86/spec-ctrl: Enumeration for new Intel BHI controls
    
    https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 11:35:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 11:35:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343152.568313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXU0-0001vO-5H; Tue, 07 Jun 2022 11:34:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343152.568313; Tue, 07 Jun 2022 11:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXU0-0001vH-2B; Tue, 07 Jun 2022 11:34:48 +0000
Received: by outflank-mailman (input) for mailman id 343152;
 Tue, 07 Jun 2022 11:34:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyXTz-0001v9-0e
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 11:34:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyXTw-0003GR-PX; Tue, 07 Jun 2022 11:34:44 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.140]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyXTw-0006VT-Iv; Tue, 07 Jun 2022 11:34:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=FDoIl+m/RiJW6Wizp3jfa76PNVptIK6ZFlP5XDbG9e8=; b=aTzV0d9g5mmzSiXtd3Ep/s0Wtg
	uJyDUSQGJRdoQIQqN+fhqvFC6xqEUsJpQ2Zkds0mDwoy/mGpLhmc0BBm9bVpHAFqwcFigV7w2mPlU
	CBy7/GbaWky5q9G9s+GqyKwO7giqjCzgEh4uENaudyWsgOrz7QyZUEhUjfmzghr9Ycnk=;
Message-ID: <9e02210b-237b-0af3-b5b8-57ddeef55707@xen.org>
Date: Tue, 7 Jun 2022 12:34:41 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>,
 "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <2F13F24D-0A55-4BC3-9AC6-606C7E1626E8@arm.com>
 <4ebbb465-00ec-f4ba-8555-711cd76517ed@apertussolutions.com>
 <YpYdqglsWIlsFsaB@Air-de-Roger>
 <8F3BD9BB-EC62-4721-9BD0-3E4CC1E75A22@citrix.com>
 <Ypcr/N/0FpxepyTc@Air-de-Roger>
From: Julien Grall <julien@xen.org>
In-Reply-To: <Ypcr/N/0FpxepyTc@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 01/06/2022 10:06, Roger Pau Monné wrote:
> On Wed, Jun 01, 2022 at 07:40:12AM +0000, George Dunlap wrote:
>> The down side of this is that you can’t use “automatically remove trailing whitespace on save” features of some editors.
>>
>> Without such automation, I introduce loads of trailing whitespace.  With such automation, I end up removing random trailing whitespace as I happen to touch files.  I’ve always done this by just adding “While here, remove some trailing whitespace” to the commit message, and there haven’t been any complaints.
> 
> FWIW, I have an editor feature that highlights trailing whitespace,
> but doesn't remove it.
> 
> As said, I find it cumbersome to have to jump through extra hoops while
> using `git blame` or similar, just because of unrelated cleanups.
> 
> IMO it is not good practice to wholesale replace all trailing
> whitespace in a file as part of an unrelated change.

+1
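The highlight-without-silently-removing approach described above can also be had from git itself: `git diff --check` flags whitespace errors (including trailing whitespace) in your own changes without touching unrelated lines. A minimal sketch (assumes git is installed; file names are illustrative):

```shell
# Demonstrate git's built-in trailing-whitespace detection on a throwaway repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'clean line\n' > file.c
git add file.c
git -c user.email=dev@example.org -c user.name=dev commit -qm init

# Introduce trailing whitespace in a new change.
printf 'clean line\nbad line   \n' > file.c

# Flags only the modified line, e.g. "file.c:2: trailing whitespace.",
# and exits non-zero, so it can gate a commit hook.
git diff --check
```

This catches new trailing whitespace before it lands, without generating the file-wide cleanup churn discussed above.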

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 11:37:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 11:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343160.568324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXWw-0002WW-Ju; Tue, 07 Jun 2022 11:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343160.568324; Tue, 07 Jun 2022 11:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXWw-0002WP-HI; Tue, 07 Jun 2022 11:37:50 +0000
Received: by outflank-mailman (input) for mailman id 343160;
 Tue, 07 Jun 2022 11:37:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyXWv-0002WJ-Qo
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 11:37:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyXWr-0003Kj-90; Tue, 07 Jun 2022 11:37:45 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.140]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyXWr-0006lN-1P; Tue, 07 Jun 2022 11:37:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2DTmA/fVDPnd+03QzqdhfZ9OgzBy9cn82PZPGPcz5og=; b=tq7vdxoAZo7388JuVC8qNQcQUE
	vuSvy2nlYUqYvGxzJsbuyY5eIyDezguLpyBtfh0uC1/SoQYDOgtZ1Zt9Kocj7V878bzpWp5SvVr5H
	+U8ZuAYEiMSMi2MyfauplNPN/WC5SJBDIsxEbr9k0K3FQiZefazOjZtDE0A+Je819e+Y=;
Message-ID: <76399014-f256-6be4-299a-9d46742abd8a@xen.org>
Date: Tue, 7 Jun 2022 12:37:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [RFC PATCH 1/4] kconfig: allow configuration of maximum modules
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531024127.23669-1-dpsmith@apertussolutions.com>
 <20220531024127.23669-2-dpsmith@apertussolutions.com>
 <ab531f8b-a602-22e0-dabf-c7d073c88236@xen.org>
 <be06db4d-43c4-7d24-db0d-349c0a1e4999@apertussolutions.com>
 <337d6dbf-e8ee-5de7-a75e-97be815f4467@xen.org>
 <de93f0f5-374e-6fc8-22c3-70023a7d2f9b@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <de93f0f5-374e-6fc8-22c3-70023a7d2f9b@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 02/06/2022 09:49, Jan Beulich wrote:
> On 01.06.2022 19:35, Julien Grall wrote:
>>
>>
>> On 31/05/2022 11:53, Daniel P. Smith wrote:
>>> On 5/31/22 05:25, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 31/05/2022 03:41, Daniel P. Smith wrote:
>>>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>>>> index f16eb0df43..57b14e22c9 100644
>>>>> --- a/xen/arch/Kconfig
>>>>> +++ b/xen/arch/Kconfig
>>>>> @@ -17,3 +17,15 @@ config NR_CPUS
>>>>>           For CPU cores which support Simultaneous Multi-Threading or
>>>>> similar
>>>>>           technologies, this the number of logical threads which Xen will
>>>>>           support.
>>>>> +
>>>>> +config NR_BOOTMODS
>>>>> +    int "Maximum number of boot modules that a loader can pass"
>>>>> +    range 1 64
>>>>
>>>> OOI, any reason to limit the size?
>>>
>>> I modelled this entirely after NR_CPUS, which applied a limit
>>
>> The limit for NR_CPUS makes sense because there are scalability issues
>> after that (although 4095 seems quite high) and/or the HW impose a limit.
> 
> The 4095 is actually a software limit (due to how spinlocks are
> implemented).
> 
>>> , and it
>>> seemed reasonable to me at the time. I choose 64 since it was double
>>> currently what Arm had set for MAX_MODULES. As such, I have no hard
>>> reason for there to be a limit. It can easily be removed or adjusted to
>>> whatever the reviewers feel would be appropriate.
>>
>> Ok. In which case I would drop the limit because it also prevents users
>> from creating more than 64 dom0less domUs (actually a bit fewer, because
>> some modules are used by Xen). I don't think there is a strong reason to
>> prevent that, right?
> 
> At least as per the kconfig language doc the upper bound is not
> optional, so if a range is specified (which I think it should be,
> to enforce the lower limit of 1) an upper bound is needed. To
> address your concern with dom0less - 32768 maybe?

I am fine with that.
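For reference, with the agreed range the entry would read something like the sketch below (hypothetical respin of the quoted hunk; only the `range` value is what was agreed here, the rest mirrors the original patch):

```kconfig
config NR_BOOTMODS
	int "Maximum number of boot modules that a loader can pass"
	range 1 32768
```

Per the kconfig language, `range 1 32768` still enforces the lower bound of 1 while leaving ample headroom for dom0less configurations that pass one or more modules per domU.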

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 12:01:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 12:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343171.568337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXth-000654-PW; Tue, 07 Jun 2022 12:01:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343171.568337; Tue, 07 Jun 2022 12:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyXth-00064x-KX; Tue, 07 Jun 2022 12:01:21 +0000
Received: by outflank-mailman (input) for mailman id 343171;
 Tue, 07 Jun 2022 12:01:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyXtf-00064l-Qd; Tue, 07 Jun 2022 12:01:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyXtf-0003ld-NX; Tue, 07 Jun 2022 12:01:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyXtf-0004F9-BH; Tue, 07 Jun 2022 12:01:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyXtf-0000Zx-Am; Tue, 07 Jun 2022 12:01:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8EEKrzubm06Iqs5vpYVZ1NTwmVBHYs43rBl67rQutXs=; b=vc4JHTLTresDUUJNK2Do3/xwVJ
	WMUGKo9nIFr3dm2vMRFsTup4x5R4tEMlGYyEVZzVwtY9L/pzXkLyRr2ktbZvsA6c+SrpnXJ/fASUp
	Jb1bFKfJ61ImojPoAazR6VxJq7n1TrtWmyrB26qHWN0qtpVE8AMRXAJHHTFhQROBCmfQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170853-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170853: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f2906aa863381afb0015a9eb7fefad885d4e5a56
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 12:01:19 +0000

flight 170853 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170853/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd       8 xen-boot                   fail pass in 170844
 test-amd64-amd64-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 170844

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 170844 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 170844 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f2906aa863381afb0015a9eb7fefad885d4e5a56
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   14 days
Failing since        170716  2022-05-24 11:12:06 Z   14 days   39 attempts
Testing same since   170841  2022-06-06 03:15:23 Z    1 days    3 attempts

------------------------------------------------------------
2274 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268374 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 12:09:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 12:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343182.568349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY1p-0006uW-K9; Tue, 07 Jun 2022 12:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343182.568349; Tue, 07 Jun 2022 12:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY1p-0006uP-HB; Tue, 07 Jun 2022 12:09:45 +0000
Received: by outflank-mailman (input) for mailman id 343182;
 Tue, 07 Jun 2022 12:09:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyY1o-0006uJ-Pr
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 12:09:44 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b58a84e1-e65a-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 14:09:42 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 165460377364539.0024044712826;
 Tue, 7 Jun 2022 05:09:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b58a84e1-e65a-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; t=1654603779; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=dPjRd8sYiMzP0IwITrb4TfhNnXFtLRstH2f17kz4Q2A72E+pptieTSWoZIpuMN0N00ZrJFScYrSRX4i3TYo269dkZxaCxvnuARTH8zPojtn86aipcO1p/KnA+MLEyH8t160wNmMBQKQN6VIFizhffGAkem04KW8kMB6XL+HCnWg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654603779; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=ScXb0xdi8YgdMGfuuVK8ZgufA8fQmT/uIWL/VcY2uL0=; 
	b=QWTyTRTUvJyJ982Du1iWooSedRej7un+HfCj5zSiG+gGfxBnIfvaFpVL8MyL/GagyFlNOYc8yyPI/paiz0fpS3MxhshVYFmW7s+a74kYgcnYBo80k9Goo+w5LKlyqijr1irOxIS9Y3ScKMTlu4Vbmu8XK1Lh9EI2tySF3pezclo=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654603779;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=ScXb0xdi8YgdMGfuuVK8ZgufA8fQmT/uIWL/VcY2uL0=;
	b=mikFuU57+OfEbK7MamU6otc2gID4KI4ke+QFDHHMsEOiqWqGzjNhf45u0QNlXK5S
	yq/I6IB59SfCM+iV+2ujMiO9Gjd4yG8FPafiGNGDv1lVHD/oVPmNhZKxbe2QCPfRl2Y
	fJMHdMlo9SUhwWr3gGm27nCnN59ulftPtXO3+M2w=
Message-ID: <f51609ea-765a-034c-a5ca-845bad6ad5ed@apertussolutions.com>
Date: Tue, 7 Jun 2022 08:07:57 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v3 1/3] xsm: only search for a policy file when needed
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-2-dpsmith@apertussolutions.com>
 <1358771f-32ae-8a6b-9894-980014d7112c@suse.com>
 <604e79d6-d07f-1a28-83a0-55fede499e12@apertussolutions.com>
 <7716ff49-a306-9938-0e91-aad45deef313@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <7716ff49-a306-9938-0e91-aad45deef313@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 6/1/22 02:04, Jan Beulich wrote:
> On 31.05.2022 18:15, Daniel P. Smith wrote:
>>
>> On 5/31/22 11:51, Jan Beulich wrote:
>>> On 31.05.2022 17:08, Daniel P. Smith wrote:
>>>> It is possible to select a few different build configurations that result in
>>>> the unnecessary walking of the boot module list looking for a policy module.
>>>> This specifically occurs when the flask policy is enabled but either the dummy
>>>> or the SILO policy is selected as the enforcing policy. This is not ideal for
>>>> configurations like hyperlaunch and dom0less when there could be a number of
>>>> modules to be walked or doing an unnecessary device tree lookup.
>>>>
>>>> This patch introduces the policy_file_required flag for tracking when an XSM
>>>> policy module requires a policy file. Only when the policy_file_required flag
>>>> is set to true, will XSM search the boot modules for a policy file.
>>>>
>>>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>>>
>>> Looks technically okay, so
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> but couldn't you ...
>>>
>>>> @@ -148,7 +160,7 @@ int __init xsm_multiboot_init(
>>>>  
>>>>      printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
>>>>  
>>>> -    if ( XSM_MAGIC )
>>>> +    if ( policy_file_required && XSM_MAGIC )
>>>>      {
>>>>          ret = xsm_multiboot_policy_init(module_map, mbi, &policy_buffer,
>>>>                                          &policy_size);
>>>> @@ -176,7 +188,7 @@ int __init xsm_dt_init(void)
>>>>  
>>>>      printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
>>>>  
>>>> -    if ( XSM_MAGIC )
>>>> +    if ( policy_file_required && XSM_MAGIC )
>>>>      {
>>>>          ret = xsm_dt_policy_init(&policy_buffer, &policy_size);
>>>>          if ( ret )
>>>
>>> ... drop the two "&& XSM_MAGIC" here at this time? Afaict policy_file_required
>>> cannot be true when XSM_MAGIC is zero.
>>
>> I was on the fence about this, as it should be rendered as redundant as
>> you point out. I am good with dropping on next spin.
> 
> I'd also be okay dropping this while committing, unless a v4 appears
> first ...

ack

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 12:10:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 12:10:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343189.568362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY2d-0008CN-WA; Tue, 07 Jun 2022 12:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343189.568362; Tue, 07 Jun 2022 12:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY2d-0008CG-RT; Tue, 07 Jun 2022 12:10:35 +0000
Received: by outflank-mailman (input) for mailman id 343189;
 Tue, 07 Jun 2022 12:10:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyY2c-0007DE-DA
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 12:10:34 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d345eab2-e65a-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 14:10:32 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654603825043241.9183558585422;
 Tue, 7 Jun 2022 05:10:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d345eab2-e65a-11ec-b605-df0040e90b76
ARC-Seal: i=1; a=rsa-sha256; t=1654603829; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=hwOfAdb7lnmi4VyDM1Cwqj1a7LBQkH8uZDBoAPczEghKnfrxb7y2fL0TQg1HsoTr+iTXRrkukrxa8igHLY41cfGYae4bUApmTlXRaBogjS45rU8jxQCtkM1LdNot72bmTOiStc2V4N9hKKidie3/iP2U7mTfdAOejMcUzSEeoOI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654603829; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=/Yts9vn3EFors/IAA0m0XFxstC3Nquu1IHMlw8qecvA=; 
	b=Y0KAR+cWeuWYNg4vvDbt3xjyQk7wG89SInpzzVQR/KymdXY1haIDqgSXbIuJm/5gQvQNCxWb5IXrCevTdqHMCF9+W0Bs6KiqpSKsHza5XEnBTMYlJ7lIQXW1kZV8BqgjSy5hzHagZicq/yyry6btX+mKo+jKJaaJOrG1sOp0XuE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654603829;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=/Yts9vn3EFors/IAA0m0XFxstC3Nquu1IHMlw8qecvA=;
	b=Gn/mcWNFYQ5qKluQ/zoXwjBA6S9WwuIG285O+cwJvls5ATkIcRGiJGzGcXS/jtBu
	NsH/Uuc9bTzlDEB6Ypk3q/VrrriMtJrOSVZpDmI5+eD5W0NYjI7fJbxJRDxSa7dkjn0
	2zKdLPFVthzDYO2ITtJrkMYv/6X7O9y5/vC5wqLA=
Message-ID: <32993db2-360d-1d08-b31a-0a29ef23dd98@apertussolutions.com>
Date: Tue, 7 Jun 2022 08:08:49 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v3 1/3] xsm: only search for a policy file when needed
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-2-dpsmith@apertussolutions.com>
 <f8697e9f-6b48-0c1b-8d5a-2d36dafa75b4@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <f8697e9f-6b48-0c1b-8d5a-2d36dafa75b4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 6/1/22 02:08, Jan Beulich wrote:
> On 31.05.2022 17:08, Daniel P. Smith wrote:
>> It is possible to select a few different build configurations that result in
>> the unnecessary walking of the boot module list looking for a policy module.
>> This specifically occurs when the flask policy is enabled but either the dummy
>> or the SILO policy is selected as the enforcing policy. This is not ideal for
>> configurations like hyperlaunch and dom0less when there could be a number of
>> modules to be walked or doing an unnecessary device tree lookup.
>>
>> This patch introduces the policy_file_required flag for tracking when an XSM
>> policy module requires a policy file.
> 
> In light of the "flask=late" aspect of patch 2, I'd like to suggest to
> slightly alter wording here: "... requires looking for a policy file."

ack

>> --- a/xen/xsm/xsm_core.c
>> +++ b/xen/xsm/xsm_core.c
>> @@ -55,19 +55,31 @@ static enum xsm_bootparam __initdata xsm_bootparam =
>>      XSM_BOOTPARAM_DUMMY;
>>  #endif
>>  
>> +static bool __initdata policy_file_required =
>> +    IS_ENABLED(CONFIG_XSM_FLASK_DEFAULT);
> 
> The variable may then also want renaming, to e.g. "find_policy_file".

ack

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 12:16:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 12:16:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343219.568455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY84-0001Z5-IX; Tue, 07 Jun 2022 12:16:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343219.568455; Tue, 07 Jun 2022 12:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyY84-0001Yy-Fu; Tue, 07 Jun 2022 12:16:12 +0000
Received: by outflank-mailman (input) for mailman id 343219;
 Tue, 07 Jun 2022 12:16:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyY83-0001Ys-Q3
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 12:16:11 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9cfb4696-e65b-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 14:16:10 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654604158771316.67427795343394;
 Tue, 7 Jun 2022 05:15:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cfb4696-e65b-11ec-b605-df0040e90b76
ARC-Seal: i=1; a=rsa-sha256; t=1654604165; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=ggJv8rlHPL9u04m2KPPkaH+voFIJIw6qfNNHk5w0n8a9cg5m6z040O1FrxStQ27jH0ozJSFJ5ANG8XMVkFRhwDMxjbdIBWn+G1Rkh8mDXz/e+MlLVuEAO8sSNDfy9uCgdh6pvOvqkHOPkKpigHFnFOGcgcdQMsmDKKvhp7vI8+Q=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654604165; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=EkEJdzTn5wsupDMSIaSNG6LmQ+ASAiYE6Uzwve0WIJE=; 
	b=jWr599UIyhQbZ8Q8Q8Bt9FmrxG4SrN1RAZvgkDiiEolmp0pW+uHFYkCKqlVjnu4Uvox66ch1GJnhPKGR37hJcHlCQLXxuYdC67MtZGhoEhZymVKLYhaIM2HbyvcO2MsHHTdCFHuXaOiuKPELf9Bx4yynOV5SLsfkcyL6kuLec0w=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654604165;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=EkEJdzTn5wsupDMSIaSNG6LmQ+ASAiYE6Uzwve0WIJE=;
	b=c4gCprB/pMp792NIlbB5zJnJg8pMmFU/48HiAfmjWo/NF26NTkLgQp73xkxI/1Oo
	SlKFVbWXO6A4CyVtF52G0Cd1cs+DD5IZvkulGnU0N94xyFozfOi0rl4baIHWBwvcfyL
	gIkDBE+AcCSJVjeoaWLrQajMj9/x1M7nOM17cokU=
Message-ID: <a1f633c1-f494-e59d-d5eb-f84c478e5b53@apertussolutions.com>
Date: Tue, 7 Jun 2022 08:14:22 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-4-dpsmith@apertussolutions.com>
 <e7582bd3-1a3b-49e9-7d3f-f86ae3d4ab2b@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v3 3/3] xsm: properly handle error from XSM init
In-Reply-To: <e7582bd3-1a3b-49e9-7d3f-f86ae3d4ab2b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 6/1/22 02:14, Jan Beulich wrote:
> On 31.05.2022 17:08, Daniel P. Smith wrote:
>> @@ -1690,7 +1691,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>  
>>      open_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ, new_tlbflush_clock_period);
>>  
>> -    if ( opt_watchdog ) 
>> +    if ( opt_watchdog )
>>          nmi_watchdog = NMI_LOCAL_APIC;
>>  
>>      find_smp_config();
> 
> Please omit formatting changes to entirely unrelated pieces of code.

Ack. This was in a similar vein to the other patches, where I cleaned up
nearby trailing whitespace.

>> @@ -1700,7 +1701,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
>>                                    RANGESETF_prettyprint_hex);
>>  
>> -    xsm_multiboot_init(module_map, mbi);
>> +    if ( xsm_multiboot_init(module_map, mbi) )
>> +        warning_add("WARNING: XSM failed to initialize.\n"
>> +                    "This has implications on the security of the system,\n"
>> +                    "as uncontrolled communications between trusted and\n"
>> +                    "untrusted domains may occur.\n");
> 
> Uncontrolled communication isn't the only thing that could occur, aiui.
> So at the very least "e.g." or some such would want adding imo.

Agreed. This reuses the existing message, and honestly I would like to
believe it was the original author's attempt at brevity, rather than
writing out a detailed message covering every implication for the
security of the system.

> Now that return values are checked, I think that in addition to what
> you already do the two function declarations may want decorating with
> __must_check.

Understood, but it is likely not necessary given Andy's review
suggestion to change these functions to return void.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:16:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 13:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343256.568550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZ4A-0000iP-5y; Tue, 07 Jun 2022 13:16:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343256.568550; Tue, 07 Jun 2022 13:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZ4A-0000iI-2o; Tue, 07 Jun 2022 13:16:14 +0000
Received: by outflank-mailman (input) for mailman id 343256;
 Tue, 07 Jun 2022 13:16:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyZ48-0000iC-Hw
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 13:16:12 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fbde11b7-e663-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 15:16:09 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654607754456498.10169702409087;
 Tue, 7 Jun 2022 06:15:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbde11b7-e663-11ec-b605-df0040e90b76
ARC-Seal: i=1; a=rsa-sha256; t=1654607759; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=MMu85ma6SasvrDBt1w8Bx9KnKmR0DbGrJz2QIHYitIgB8ZNnhkZjZpp3CEeOX5kRRbwD9mjJW2uKiscVARr6cDX/s3tFDo9fx3GlugswaxNsFMH5mwbVDoApaDXuiPjUS3gYlXqTLHxMcw3G7A03w/UqKenfpbyBFNmro3jnIFs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654607759; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=caIyPaIFAl+kjEWmMLkymdTyCOuK766Y5p0fopuRovg=; 
	b=V3P0VwCAr54C6vYiv2cfoLtckWxvTdO7m9G6W7UzpKvVsGMjd3S2KXknURhxKnSxF62ip6+CUn0NjXLrk69VhCmXQ6eFy19W539+GUouH/SfJsjXI4X8z5/3Wg6Nc7L9ZOWUL2FwNObsPwMeRkHjLaLNJghdacq8g7z5/Oy2Uow=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654607759;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=caIyPaIFAl+kjEWmMLkymdTyCOuK766Y5p0fopuRovg=;
	b=VgYuvbSVcQMO1Pq7SeqcUwYqIBNOpd1yLcQnM9Qi/FVvN4/uB5ykUEHMs8aGPvnN
	sgLIYFGPhGX4ezZDjYgkSya3JEi4gSJnBbcRez1SEPtD/3yrNxCEqpCCsHaWr+8Bd22
	dQBH++c+IzW0halCPSBr+Q6hc24ju/IAz4OwHIlM=
Message-ID: <6d6b14ad-2a0d-c7aa-d835-3131f83e0f68@apertussolutions.com>
Date: Tue, 7 Jun 2022 09:14:18 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 "jandryuk@gmail.com" <jandryuk@gmail.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
 <6f177e81-9c97-0f7f-2f79-88cc4db83e02@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
In-Reply-To: <6f177e81-9c97-0f7f-2f79-88cc4db83e02@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/31/22 15:07, Andrew Cooper wrote:
> On 31/05/2022 19:20, Daniel P. Smith wrote:
>> diff --git a/xen/xsm/xsm_policy.c b/xen/xsm/xsm_policy.c
>> index 8dafbc9381..690fd23e9f 100644
>> --- a/xen/xsm/xsm_policy.c
>> +++ b/xen/xsm/xsm_policy.c
>> @@ -8,7 +8,7 @@
>>   *  Contributors:
>>   *  Michael LeMay, <mdlemay@epoch.ncsc.mil>
>>   *  George Coker, <gscoker@alpha.ncsc.mil>
>> - *  
>> + *
>>   *  This program is free software; you can redistribute it and/or modify
>>   *  it under the terms of the GNU General Public License version 2,
>>   *  as published by the Free Software Foundation.
>> @@ -32,14 +32,21 @@
>>  #ifdef CONFIG_MULTIBOOT
>>  int __init xsm_multiboot_policy_init(
>>      unsigned long *module_map, const multiboot_info_t *mbi,
>> -    void **policy_buffer, size_t *policy_size)
>> +    const unsigned char **policy_buffer, size_t *policy_size)
>>  {
>>      int i;
>>      module_t *mod = (module_t *)__va(mbi->mods_addr);
>> -    int rc = 0;
>> +    int rc = -ENOENT;
>>      u32 *_policy_start;
>>      unsigned long _policy_len;
>>  
>> +#ifdef CONFIG_XSM_FLASK_POLICY
>> +    /* Initially set to builtin policy, overriden if boot module is found. */
>> +    *policy_buffer = xsm_flask_init_policy;
>> +    *policy_size = xsm_flask_init_policy_size;
>> +    rc = 0;
>> +#endif
> 
> Does
> 
> if ( IS_ENABLED(CONFIG_XSM_FLASK_POLICY) )
> {
>     ...
> }
> 
> compile properly?  You'll need to drop the ifdefary in xsm.h, but this
> would be a better approach (more compiler coverage in normal builds).
> 
> Same for the related hunk on the DT side.

Yes, I know this pattern is used elsewhere, so it should work fine here.

>> +
>>      /*
>>       * Try all modules and see whichever could be the binary policy.
>>       * Adjust module_map for the module that is the binary policy.
>> @@ -54,13 +61,14 @@ int __init xsm_multiboot_policy_init(
>>  
>>          if ( (xsm_magic_t)(*_policy_start) == XSM_MAGIC )
>>          {
>> -            *policy_buffer = _policy_start;
>> +            *policy_buffer = (unsigned char *)_policy_start;
> 
> The existing logic is horrible.  To start with, there's a buffer overrun
> for a module of fewer than 4 bytes.  (Same on the DT side.)
> 
> It would be slightly less horrible as
> 
> for ( ... )
> {
>     void *ptr;
> 
>     if ( !test_bit(i, module_map) || mod[i].mod_end < sizeof(xsm_header) )
>         continue;
> 
>     ptr = bootstrap_map(...);
> 
>     if ( memcmp(ptr, XSM_MAGIC, sizeof(XSM_MAGIC)) == 0 )
>     {
>         *policy_buffer = ptr;
>         *policy_len = mod[i].mod_end;
> 
>         ...
>         break;
>     }
> 
>     bootstrap_map(NULL);
> }
> 
> because at least this gets rid of the intermediate variables, the buffer
> overrun, and the awkward casting to find XSM_MAGIC.

Since you were kind enough to take the time to write out the fix, I will
incorporate it.

> That said, it's still unclear what's going on, because has_xsm_magic()
> says the header is 16 bytes long, rather than 4 (assuming that it means
> uint32_t.  policydb_read() uses uint32_t).
> 
> Also, policydb_read() uses le32_to_cpu() so the multiboot/dt checks are
> wrong on big-endian systems.
> 
> Also also, policydb_read() really doesn't need to make a temporary
> memory allocation to check a fixed string of fixed length.  This is
> horrible.
> 
> I suspect we're getting into "in a subsequent patch" territory here.

Unfortunately, the scope of what this series set out to solve (avoiding
walking all the boot modules when no policy file is needed) and what the
reviewers have been requesting keep diverging. Granted, given the
technical debt currently present in XSM, I understand why this is
occurring. Unfortunately, "in a subsequent patch" here means adding it
to my list of things I would like to fix or work on in XSM.

v/r,
dps



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:21:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 13:21:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343264.568561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZ9F-00027u-RP; Tue, 07 Jun 2022 13:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343264.568561; Tue, 07 Jun 2022 13:21:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZ9F-00027n-Nq; Tue, 07 Jun 2022 13:21:29 +0000
Received: by outflank-mailman (input) for mailman id 343264;
 Tue, 07 Jun 2022 13:21:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8rqf=WO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nyZ9D-00027h-UZ
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 13:21:28 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc1246a3-e664-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 15:21:27 +0200 (CEST)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-32-q9sVxRYWOwylh4HX9I03Tw-1; Tue, 07 Jun 2022 15:21:23 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB5865.eurprd04.prod.outlook.com (2603:10a6:10:a6::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Tue, 7 Jun
 2022 13:21:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 13:21:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc1246a3-e664-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654608086;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LoKlRfCzoBq5luuz09Ka/Zz1sWvP7iEM7NgBl5PN9qc=;
	b=ExX/KUjr3zAnZd1HvFFYhj+BXRvbD5wPxa89fu78kxT01ZqE+0yuNg3awLkj+4jdaoSf5V
	buVyY9CWgtXeuV9+2/B85HeI4XhnLAEq587J18K88znhXm3bI35CwsPDlLeqUsBY20+bjt
	0hLxdipm89UEKWUQD1lN/uMwKq1SEn4=
X-MC-Unique: q9sVxRYWOwylh4HX9I03Tw-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UqPzMrZ7M2vwh++Q4FCnKJcLGwZN7bhhKCkHWbLLADoouFdRVewXZoXXm9HW1mWecyGL7pps3dcumKNKG59enICEsqYoZ2PDo1Uvb1/ejO2ib5HxEHtb/p1xnKgDvuQFWYRE/74MygYTFqYpXpa6kRoBh3WvQtffzBtA2KiD8/pNiZvkhy2bskno7dLMopYK1hkQ7WpQte+ZLKzvQO9171CudwiXKF77B0brjIQ0Nv8s6+am17q7RPZR6kgzxbVfpvXsi8WCn4fLimvD359rzs06+/LWHcBdenih8z/K22vRkdjCUwZpTuW5NFHMExH9TERUc9rq8dO1XjoDp0GBAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LoKlRfCzoBq5luuz09Ka/Zz1sWvP7iEM7NgBl5PN9qc=;
 b=QXcLTpAqOimKVs1A5g5WbTkkNUuZ2Q59FdohFBIg7H01wNLD9ueWfupUXb+mSU28diX93TIgoDi9ElGZ53mH276yflzkW8xavudWjfHY6iHzZLfr4VrpQgoatCiWrtm9oYLVruYakJs0iUgFrCyayslFhp2jMMLIHQ0I3UwM2BCY0ASU08kS3wsoc03Az5jCnuqspDjtxXvvkddYLHCneAcYacAXvhj+ZX+dX4lkot63iZKM9aJkwhTw3+GNhbrN6JhKSEHj8Ob3Ony/mKal7vNu4epzOmub2LiKONdh3GmRROoHaMX75rEurbWPzPPd5Q8mCp1pYnb6U81fb2DMuA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LoKlRfCzoBq5luuz09Ka/Zz1sWvP7iEM7NgBl5PN9qc=;
 b=BhU2IJe3Dyfog7XihWLAZXimbRIBWEQP1YGPo3DXLB6nF6kV0g/tdNdH5P14YO5+6t8LoF3s3Jz4Lnc6fBLbNeea3MeS0XzElpILFUuhNqKo7WXmns989eDgvC2ht6HPLQeAfQEFsdpHeEhQKKTVKVKb8vK/x1S+1+0fZXzGjxxR7/ndq153QX2znMBv6N+FvVGkMohfxwxqu4A7kAVKmBCKWVvX3bSi++coBCtwluVe5x4k1ELyYczJL+Gj/wf4jxb342jXdw849EISM/nnWxFQxp7wtzD6TrujKWS8anekXLAFEsFoQghx4UEYpYYSvQFtLD21siRzZCVg9CgXJg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <021d1c9e-7989-ed9b-0fc8-1e522e75a12a@suse.com>
Date: Tue, 7 Jun 2022 15:21:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v3 3/3] xsm: properly handle error from XSM init
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531150857.19727-1-dpsmith@apertussolutions.com>
 <20220531150857.19727-4-dpsmith@apertussolutions.com>
 <e7582bd3-1a3b-49e9-7d3f-f86ae3d4ab2b@suse.com>
 <a1f633c1-f494-e59d-d5eb-f84c478e5b53@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a1f633c1-f494-e59d-d5eb-f84c478e5b53@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM7PR04CA0001.eurprd04.prod.outlook.com
 (2603:10a6:20b:110::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b1e2f61b-09ff-4986-65ba-08da48889cac
X-MS-TrafficTypeDiagnostic: DB8PR04MB5865:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b1e2f61b-09ff-4986-65ba-08da48889cac
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 13:21:20.9204
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 81bqv7dk64hk9x7WIcVWGhr14f5abjQo9FDLuUHza6PUQyRp0td5LlfTOxoZqd5SsnpJqnqyuAwgpwjcdM+cBg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB5865

On 07.06.2022 14:14, Daniel P. Smith wrote:
> On 6/1/22 02:14, Jan Beulich wrote:
>> Now that return values are checked, I think that in addition to what
>> you already do the two function declarations may want decorating with
>> __must_check.
> 
> Understood, but it is likely not necessary given Andy's review
> suggestion to change these functions to return void.

Of course - once they return void, they cannot be __must_check.
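
For readers following along: __must_check only makes sense on a function
with a non-void return type, because it asks the compiler to warn when a
caller discards the result. A minimal sketch (the macro definition
mirrors the usual warn_unused_result attribute, and the function name is
illustrative, not the series' actual code):

```c
#include <assert.h>

/* __must_check is conventionally defined in terms of the compiler
 * attribute below; callers that discard the return value then get a
 * -Wunused-result diagnostic from GCC/Clang. */
#define __must_check __attribute__((warn_unused_result))

/* Illustrative stand-in for the functions under discussion: once such
 * a function returns void, there is no result to check and the
 * annotation no longer applies. */
static __must_check int xsm_policy_load(int have_module)
{
    return have_module ? 0 : -1;
}
```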

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:48:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 13:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343276.568575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZZX-0004uC-1Y; Tue, 07 Jun 2022 13:48:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343276.568575; Tue, 07 Jun 2022 13:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZZW-0004u5-Ty; Tue, 07 Jun 2022 13:48:38 +0000
Received: by outflank-mailman (input) for mailman id 343276;
 Tue, 07 Jun 2022 13:48:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyZZV-0004tv-Gi; Tue, 07 Jun 2022 13:48:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyZZV-0005e8-EM; Tue, 07 Jun 2022 13:48:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyZZV-0002qn-7A; Tue, 07 Jun 2022 13:48:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyZZV-0006U5-6i; Tue, 07 Jun 2022 13:48:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hYXcvkenuhYrLI3x/iGm4lAFCtat5DYWq6DLzNmsWsU=; b=vPRgMYhSLAa8A3X1X5zDXieEEf
	QOKkUTH57nZ/T4MMA7QcTf8e/ysETg5/NuqOI5slzsgrMf+WzhXelycbI6LRsLleEnF5Ki3UFa9OW
	SeJLB4r0CjG6bbAbY6BtXaKE5dr0w9QjhazmqSVCYwbzYBvgeK8FPezhNIK+g+qfkdO4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170867-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170867: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a81a650da1dc40ec2b2825d1878cdf2778b4be14
X-Osstest-Versions-That:
    ovmf=4f89e4b3e80329b9a445500009c658d2ebce8475
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 13:48:37 +0000

flight 170867 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170867/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a81a650da1dc40ec2b2825d1878cdf2778b4be14
baseline version:
 ovmf                 4f89e4b3e80329b9a445500009c658d2ebce8475

Last test of basis   170855  2022-06-07 01:57:37 Z    0 days
Testing same since   170867  2022-06-07 11:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4f89e4b3e8..a81a650da1  a81a650da1dc40ec2b2825d1878cdf2778b4be14 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:49:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 13:49:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343286.568585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZaa-0005R9-BL; Tue, 07 Jun 2022 13:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343286.568585; Tue, 07 Jun 2022 13:49:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZaa-0005R2-8U; Tue, 07 Jun 2022 13:49:44 +0000
Received: by outflank-mailman (input) for mailman id 343286;
 Tue, 07 Jun 2022 13:49:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyZaY-0005Qs-Du
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 13:49:42 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad16a1c4-e668-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 15:49:41 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654609773475473.3381688403167;
 Tue, 7 Jun 2022 06:49:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad16a1c4-e668-11ec-b605-df0040e90b76
ARC-Seal: i=1; a=rsa-sha256; t=1654609777; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=GsqfH/xRBbpoECvLih4W7OE5L3wqkOnkNAYo92peDB+BtVzZDVt9ZvB2oyxtHl70e6IEiCc1uMeY5XgkUj/2czBja797Fzyc9CQMaxi3hyAVBwCMMvVqWQn0hT/WlKzkxQ9CEULLcH0sPq2Wq/IeER/+vO9c/QFt2SNnhuoSeO4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654609777; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=dOsy6EjkVih1OL+NrztAnvYWxEfNj6E/Al9okX3Yf1Q=; 
	b=b69NotZvp57/6VDwDFZ9rdHvzfJ813iYJRN2Snx6gbeJqCQVBvFSb/HryMdnhP1MUrhUKyYlx/bWt0/tKf4zs/dC2BkFFEon7OWWLfwV2SDz2NZt3Y1Wk5IPwZofsRH+9SlTDaj82ATbX/AH4kmRAmJ7U0PGXvYqZWZRjWXm+/Q=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654609777;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=dOsy6EjkVih1OL+NrztAnvYWxEfNj6E/Al9okX3Yf1Q=;
	b=RAg9smD1DPlz+1vCi+TQpcvqGs+rLnzq0UgJy2IH+eVQUvLmnDnsRacFLwpUNle0
	eKkIg8Hbj4xefoQSGA2BThHu7Jrd56uwGruSyRgkHdPEApQblX6TCszOdbcCF91DVhI
	xy3eeaLcsiVxR/Wcayi4c9yGx+1mwS2CeS5iISuw=
Message-ID: <6447f0ff-993c-9d39-52a8-40a434be9e52@apertussolutions.com>
Date: Tue, 7 Jun 2022 09:47:57 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
 <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
In-Reply-To: <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External


On 6/2/22 05:47, Jan Beulich wrote:
> On 31.05.2022 20:20, Daniel P. Smith wrote:
>> Previously, initializing the policy buffer was split between two functions,
>> xsm_{multiboot,dt}_policy_init() and xsm_core_init(). The latter for loading
>> the policy from boot modules and the former for falling back to built-in
>> policy.
>>
>> This patch moves all policy buffer initialization logic under the
>> xsm_{multiboot,dt}_policy_init() functions. It then ensures that an error
>> message is printed for every error condition that may occur in the functions.
>> With all policy buffer init contained and only called when the policy buffer
>> must be populated, the respective xsm_{mb,dt}_init() functions will panic for
>> all errors except ENOENT. An ENOENT signifies that a policy file could not be
>> located. Since it is not possible to know if late loading of the policy file is
>> intended, a warning is reported and XSM initialization is continued.
> 
> Is it really not possible to know? flask_init() panics in the one case
> where a policy is strictly required. And I'm not convinced it is
> appropriate to issue both an error and a warning in most (all?) of the
> other cases (and it should be at most one of the two anyway imo).

With how XSM currently works, I do not see how it could know without
creating a layering violation, as you had mentioned before. It is
possible for flask_init() to know because the flask= parameter belongs
to the flask policy module and can be checked directly.

I think we view this differently. A call to
xsm_{multiboot,dt}_policy_init() is a request for a policy file to be
loaded; failing to do so is an error. That error is reported back to
xsm_{multiboot,dt}_init(), which is responsible for initializing the
XSM framework. Any error that inhibits it from initializing XSM is
fatal, whereas an error that does not block initialization should only
produce a warning to the user. While it is true that the only error
xsm_multiboot_policy_init() can currently return is a failure to locate
a policy file, ENOENT, I don't see how that changes the understanding.
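
The split described above, where only ENOENT is downgraded to a warning
while any other failure blocks initialization, can be sketched roughly
as follows (the function and enum names are illustrative, not the
series' actual code):

```c
#include <assert.h>
#include <errno.h>

enum xsm_init_action { XSM_INIT_OK, XSM_INIT_WARN, XSM_INIT_PANIC };

/* Hypothetical classification of an xsm_{multiboot,dt}_policy_init()
 * return value: ENOENT (no policy module located) merely warns, since
 * the policy may still be loaded late; any other failure blocks XSM
 * initialization. */
static enum xsm_init_action classify_policy_rc(int rc)
{
    if ( rc == 0 )
        return XSM_INIT_OK;
    if ( rc == -ENOENT )
        return XSM_INIT_WARN;
    return XSM_INIT_PANIC;
}
```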

>> --- a/xen/include/xsm/xsm.h
>> +++ b/xen/include/xsm/xsm.h
>> @@ -775,7 +775,7 @@ int xsm_multiboot_init(
>>      unsigned long *module_map, const multiboot_info_t *mbi);
>>  int xsm_multiboot_policy_init(
>>      unsigned long *module_map, const multiboot_info_t *mbi,
>> -    void **policy_buffer, size_t *policy_size);
>> +    const unsigned char *policy_buffer[], size_t *policy_size);
> 
> See my v3 comment on the use of [] here. And that comment was actually
> made before you sent v4 (unlike the later comment on patch 1). Oh,
> actually you did change this in the function definition, just not here.

ack
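
As a side note on the `[]` question: in a parameter list, C adjusts
`const unsigned char *policy_buffer[]` to
`const unsigned char **policy_buffer`, so a declaration using one form
and a definition using the other denote the same type; the remaining
issue is consistent spelling. A small sketch (names and the sample
policy string are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Declared with array-of-pointer syntax... */
static int set_policy(const unsigned char *policy_buffer[], size_t *size);

static const unsigned char builtin[] = "builtin-policy";

/* ...and defined with pointer-to-pointer syntax: the parameter types
 * are identical after the standard array-to-pointer adjustment, so
 * both compile as the same function. */
static int set_policy(const unsigned char **policy_buffer, size_t *size)
{
    *policy_buffer = builtin;
    *size = sizeof(builtin) - 1;
    return 0;
}
```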

>> @@ -32,14 +32,21 @@
>>  #ifdef CONFIG_MULTIBOOT
>>  int __init xsm_multiboot_policy_init(
>>      unsigned long *module_map, const multiboot_info_t *mbi,
>> -    void **policy_buffer, size_t *policy_size)
>> +    const unsigned char **policy_buffer, size_t *policy_size)
>>  {
>>      int i;
>>      module_t *mod = (module_t *)__va(mbi->mods_addr);
>> -    int rc = 0;
>> +    int rc = -ENOENT;
>>      u32 *_policy_start;
>>      unsigned long _policy_len;
>>  
>> +#ifdef CONFIG_XSM_FLASK_POLICY
>> +    /* Initially set to builtin policy, overriden if boot module is found. */
> 
> Nit: "overridden"

ack

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:57:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 13:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343296.568596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZhn-00074i-7i; Tue, 07 Jun 2022 13:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343296.568596; Tue, 07 Jun 2022 13:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyZhn-00074b-4f; Tue, 07 Jun 2022 13:57:11 +0000
Received: by outflank-mailman (input) for mailman id 343296;
 Tue, 07 Jun 2022 13:57:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dSew=WO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1nyZhm-00074V-0r
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 13:57:10 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b785aae1-e669-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 15:57:08 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1654610219359757.4448433028651;
 Tue, 7 Jun 2022 06:56:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b785aae1-e669-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; t=1654610224; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=NgD/lrOma2MdX58Kdyi3sc+FlohT0S9Zd8/aikUnsQDOKnKbquphyoV0N2bSL8mdTq7Ge/clRm5pCL4/xkUs1teONELvrH2wZCC6Gf4ot4IQ+BYeZxkIFxT/lf7cUbQK40dvET9RI1nWXvZN1HF0ImR8ssmG4gWtsHLrPumpn0I=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1654610224; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=i+13/atmT2pRkn+f2pb7VWfDJx2TihjcLgz8X3iQ3xw=; 
	b=KZxlGwKk9j9FBxbYKxmd4O/Aului3XMlpWM9ZNPm2Mf1/ehzPqFN/VqMJZn14dQ4TCDRqd3EeBU8wab+eivCV7z8spx/XuuIkwOZItzZTC5HVLgzDFA/6pVHeUF18ahoK9hT7YhjHHVEn//io7/7ufMpl2d0NGriJghoaRXsvWs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1654610224;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=i+13/atmT2pRkn+f2pb7VWfDJx2TihjcLgz8X3iQ3xw=;
	b=OL9cfskioxpqc/hD+3UrKbIljiRoA6HiLS8x8+3Ms/Hj0ZdRCCq7Qn5CE5ZTeQlH
	n7In/RJWXBKc4GKBg9HNituzbApuLQ1WCwAw53Q5GSXIuk8XZIgK5GM7ek4ColfR8NG
	a+TkZRePdwSGpCbu2qqE9OKwq46Gc994scukI9zU=
Message-ID: <76d59a32-203a-5a8b-83f4-ab60d419ef9f@apertussolutions.com>
Date: Tue, 7 Jun 2022 09:55:22 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
Cc: "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 "jandryuk@gmail.com" <jandryuk@gmail.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-4-dpsmith@apertussolutions.com>
 <c206a20b-ee5f-aa5b-64ba-fe06469f0f2f@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 3/3] xsm: properly handle error from XSM init
In-Reply-To: <c206a20b-ee5f-aa5b-64ba-fe06469f0f2f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External


On 5/31/22 15:18, Andrew Cooper wrote:
> On 31/05/2022 19:20, Daniel P. Smith wrote:
>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index 53a73010e0..ed67b50c9d 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1700,7 +1701,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
>>                                    RANGESETF_prettyprint_hex);
>>  
>> -    xsm_multiboot_init(module_map, mbi);
>> +    if ( xsm_multiboot_init(module_map, mbi) )
>> +        warning_add("WARNING: XSM failed to initialize.\n"
>> +                    "This has implications on the security of the system,\n"
>> +                    "as uncontrolled communications between trusted and\n"
>> +                    "untrusted domains may occur.\n");
> 
> The problem with this approach is that it forces each architecture to
> opencode the failure string, in a function which is very busy with other
> things too.
> 
> Couldn't xsm_{multiboot,dt}_init() be void, and the warning_add() move
> into them, like the SLIO warning for ARM already?
> 
> That would simplify both ARM and x86's __start_xen(), and be an
> improvement for the RISC-V series just posted to xen-devel...

I was trying to address the MISRA review comment about the unhandled
return value while also providing a uniform implementation across
architectures. Changing xsm_{multiboot,dt}_init() to return void will
address both and, as you point out, make things simpler overall.
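
The suggested shape, where the init function becomes void and issues the
warning itself, could look roughly like this. warning_add() is stubbed
out here to record the message (the real Xen function queues text for
display at the end of boot), and the policy-init stub and its parameter
are assumptions of this sketch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for Xen's warning_add(): record the last queued warning. */
static char last_warning[256];
static void warning_add(const char *msg)
{
    snprintf(last_warning, sizeof(last_warning), "%s", msg);
}

/* Hypothetical policy-init stub; returns non-zero on failure. */
static int policy_init(int have_policy)
{
    return have_policy ? 0 : -1;
}

/* The init function becomes void and owns its failure message, so
 * per-architecture callers in __start_xen() no longer need to
 * open-code the warning string. */
static void xsm_multiboot_init(int have_policy)
{
    if ( policy_init(have_policy) )
        warning_add("WARNING: XSM failed to initialize.\n");
}
```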

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:58:28 2022
Message-ID: <054d4009-6042-c985-cc10-b133fc2341b1@suse.com>
Date: Tue, 7 Jun 2022 15:58:19 +0200
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
 <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
 <6447f0ff-993c-9d39-52a8-40a434be9e52@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6447f0ff-993c-9d39-52a8-40a434be9e52@apertussolutions.com>

On 07.06.2022 15:47, Daniel P. Smith wrote:
> 
> On 6/2/22 05:47, Jan Beulich wrote:
>> On 31.05.2022 20:20, Daniel P. Smith wrote:
>>> Previously, initializing the policy buffer was split between two functions,
>>> xsm_{multiboot,dt}_policy_init() and xsm_core_init(): the former loaded the
>>> policy from boot modules, while the latter fell back to the built-in
>>> policy.
>>>
>>> This patch moves all policy buffer initialization logic under the
>>> xsm_{multiboot,dt}_policy_init() functions. It then ensures that an error
>>> message is printed for every error condition that may occur in the functions.
>>> With all policy buffer init contained and only called when the policy buffer
>>> must be populated, the respective xsm_{mb,dt}_init() functions will panic for
>>> all errors except ENOENT. An ENOENT signifies that a policy file could not be
>>> located. Since it is not possible to know if late loading of the policy file is
>>> intended, a warning is reported and XSM initialization is continued.
>>
>> Is it really not possible to know? flask_init() panics in the one case
>> where a policy is strictly required. And I'm not convinced it is
>> appropriate to issue both an error and a warning in most (all?) of the
>> other cases (and it should be at most one of the two anyway imo).
> 
> With how XSM currently works, I do not see how it could know without
> creating a layering violation, as you had mentioned before. It is possible
> for flask_init() to know because the flask= parameter belongs to the flask
> policy module and can be directly checked.
> 
> I think we view this differently. A call to
> xsm_{multiboot,dt}_policy_init() is a request for a policy file to be
> loaded; failing to do so is an error. That error is reported back to
> xsm_{multiboot,dt}_init(), which is responsible for initializing the XSM
> framework. An error that inhibits it from initializing XSM is fatal,
> whereas an error that does not block initialization should only warn the
> user. While it is true that the only error xsm_multiboot_policy_init() can
> currently return is a failure to locate a policy file, ENOENT, I don't see
> how that changes the understanding.

Well, I think that to avoid the layering violation the decision whether
an error is severe enough to warrant a warning (or is even fatal) needs
to be left to the specific model (i.e. Flask in this case).

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 13:58:31 2022
Message-ID: <f2fc5c13-fd33-5fff-1135-e21e979009ca@apertussolutions.com>
Date: Tue, 7 Jun 2022 09:56:43 -0400
Subject: Re: [PATCH v4 3/3] xsm: properly handle error from XSM init
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 "jandryuk@gmail.com" <jandryuk@gmail.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-4-dpsmith@apertussolutions.com>
 <c206a20b-ee5f-aa5b-64ba-fe06469f0f2f@citrix.com>
 <40db300c-4d20-8339-599f-bcf6521442fa@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <40db300c-4d20-8339-599f-bcf6521442fa@suse.com>

On 6/1/22 02:49, Jan Beulich wrote:
> On 31.05.2022 21:18, Andrew Cooper wrote:
>> On 31/05/2022 19:20, Daniel P. Smith wrote:
>>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>>> index 53a73010e0..ed67b50c9d 100644
>>> --- a/xen/arch/x86/setup.c
>>> +++ b/xen/arch/x86/setup.c
>>> @@ -1700,7 +1701,11 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>>      mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
>>>                                    RANGESETF_prettyprint_hex);
>>>  
>>> -    xsm_multiboot_init(module_map, mbi);
>>> +    if ( xsm_multiboot_init(module_map, mbi) )
>>> +        warning_add("WARNING: XSM failed to initialize.\n"
>>> +                    "This has implications on the security of the system,\n"
>>> +                    "as uncontrolled communications between trusted and\n"
>>> +                    "untrusted domains may occur.\n");
>>
>> The problem with this approach is that it forces each architecture to
>> opencode the failure string, in a function which is very busy with other
>> things too.
>>
>> Couldn't xsm_{multiboot,dt}_init() be void, and the warning_add() move
>> into them, like the SILO warning for ARM already?
> 
> I, too, was considering suggesting this (but then didn't on v3).
> Furthermore the warning_add() could then be wrapped in a trivial helper
> function to be used by both MB and DT.

Re: helper function, ack.


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:12:45 2022
Message-ID: <523f3594-02d5-e762-27bf-9d48d4b8c6e1@apertussolutions.com>
Date: Tue, 7 Jun 2022 10:10:50 -0400
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
 <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
 <6447f0ff-993c-9d39-52a8-40a434be9e52@apertussolutions.com>
 <054d4009-6042-c985-cc10-b133fc2341b1@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
In-Reply-To: <054d4009-6042-c985-cc10-b133fc2341b1@suse.com>

On 6/7/22 09:58, Jan Beulich wrote:
> On 07.06.2022 15:47, Daniel P. Smith wrote:
>>
>> On 6/2/22 05:47, Jan Beulich wrote:
>>> On 31.05.2022 20:20, Daniel P. Smith wrote:
>>>> Previously, initializing the policy buffer was split between two functions,
>>>> xsm_{multiboot,dt}_policy_init() and xsm_core_init(): the former loaded the
>>>> policy from boot modules, while the latter fell back to the built-in
>>>> policy.
>>>>
>>>> This patch moves all policy buffer initialization logic under the
>>>> xsm_{multiboot,dt}_policy_init() functions. It then ensures that an error
>>>> message is printed for every error condition that may occur in the functions.
>>>> With all policy buffer init contained and only called when the policy buffer
>>>> must be populated, the respective xsm_{mb,dt}_init() functions will panic for
>>>> all errors except ENOENT. An ENOENT signifies that a policy file could not be
>>>> located. Since it is not possible to know if late loading of the policy file is
>>>> intended, a warning is reported and XSM initialization is continued.
>>>
>>> Is it really not possible to know? flask_init() panics in the one case
>>> where a policy is strictly required. And I'm not convinced it is
>>> appropriate to issue both an error and a warning in most (all?) of the
>>> other cases (and it should be at most one of the two anyway imo).
>>
>> With how XSM currently works, I do not see how it could know without
>> creating a layering violation, as you had mentioned before. It is possible
>> for flask_init() to know because the flask= parameter belongs to the flask
>> policy module and can be directly checked.
>>
>> I think we view this differently. A call to
>> xsm_{multiboot,dt}_policy_init() is a request for a policy file to be
>> loaded; failing to do so is an error. That error is reported back to
>> xsm_{multiboot,dt}_init(), which is responsible for initializing the XSM
>> framework. An error that inhibits it from initializing XSM is fatal,
>> whereas an error that does not block initialization should only warn the
>> user. While it is true that the only error xsm_multiboot_policy_init() can
>> currently return is a failure to locate a policy file, ENOENT, I don't see
>> how that changes the understanding.
> 
> Well, I think that to avoid the layering violation the decision whether
> an error is severe enough to warrant a warning (or is even fatal) needs
> to be left to the specific model (i.e. Flask in this case).

Except that it is not the policy module that loads the policy, and the
load is where the error can occur. As you pointed out, for MISRA
compliance you cannot have unhandled errors. So either the errors must be
ignored where they occur, leaving the user to hit a larger, non-specific
panic later, or a nesting of error handling/reporting must be provided so
the user can see in the log why they ended up at the panic.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:15:55 2022
Message-ID: <bab0e4ed-643d-01ad-abf8-de569ae83036@suse.com>
Date: Tue, 7 Jun 2022 16:15:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, christopher.clark@starlab.io, jandryuk@gmail.com,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xenproject.org
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com>
 <17edde4a-0d00-0da7-5910-09874ab70838@suse.com>
 <6447f0ff-993c-9d39-52a8-40a434be9e52@apertussolutions.com>
 <054d4009-6042-c985-cc10-b133fc2341b1@suse.com>
 <523f3594-02d5-e762-27bf-9d48d4b8c6e1@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <523f3594-02d5-e762-27bf-9d48d4b8c6e1@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0017.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e4f99fcc-e5ba-405e-2897-08da4890375e
X-MS-TrafficTypeDiagnostic: VI1PR04MB4205:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB42055A4A5208E78CB941F62BB3A59@VI1PR04MB4205.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xixvD3Ya4U90+q68XqnTHlu9myM8RXkHE1CuY8b2i2yU06UOgBRPt74VMXu6dHfsaQ3aVVVA6sK6jI5llKe1wFs6OPP1hwSdMcGuXCi1mwxyvlfWilzWBUcuEh9QCeU9BXxXM+UJUTTnmijpqUUrQ68wDXufn2oIiVAy/NByTHG+MLMYm4GegFUf/33mU5AxU1bMIAohTlJfZjmW9kjAGzI0CT6HVH1HbBA7hw1Z1kjX9u0A2GNnlCROjkTfinv0TeyBqNjqrWdoRdtEkntwxr9xy7YIziRMUwUIaF02y5egqx4XKfNaMw8klsB+lUzCLygi4HMGakAjVO9hF8H+nQNHrBrJLLelcTJQMlZ0HeCtk8BjxgMAXO0h5j0oFOquFf9OtR57i/A+uZoJC1YZ2ybX+ENr0PFmr4RgkWlz0Aw+1updtysnRl+SxcUsU7PiUsr+aUYCmSd0idIYTcfF0xi3Lpkxblqab7YcUQ2EUBfOfJbbMFF5FAZFJU/STvzT6CvDCY2bOvEyDufF0PVg+Uz9r7fQHB/cg7DqXGyDRE7ySMfmwmkW69EAtspWAT5PjGM3KhWgQwSWcYJhFfwAjE5GBTRFoHbiNcHmKZtBpCLR5YODxirL/dMif+VwiIbF/sYfezJl7vYhvJ1/mdrBjc/C9+k4giusq7dYCOWrfA25AhSbZPt1Mc0PFPnSM/lWApP9QCK6oTMaQ6IQ/F8ominkRkHgsRDLRO8kq5U+tksS3ycGAENR0OSYY92mKtPk
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(38100700002)(66946007)(8936002)(66556008)(6916009)(36756003)(2906002)(316002)(5660300002)(31686004)(26005)(6512007)(66476007)(4326008)(83380400001)(6506007)(8676002)(53546011)(2616005)(508600001)(6486002)(186003)(31696002)(86362001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bkc1RGpZNVlWTDJYZGMveUE4cEVJUlY5V05ZNjdxOVp4Yi9jSFN3Mk1BejR1?=
 =?utf-8?B?TTRGOUhqOFRBZy9sSUxzY0grRW1nYTlJS3NOdDBRTGVIWDV6VUMrbE1RUEE3?=
 =?utf-8?B?K3E1anhFVW5tVVYrTFI3WTlFMUJzWnM3N0MyY05nU2hOazY5blVMaTlleHVz?=
 =?utf-8?B?cmRPQ3RnREkydlRtekNGTWRBUEFZTDFiTkpuNHdKRU9kZXl4alhkYWdFTUM2?=
 =?utf-8?B?NWx3MHZFaVloTExKMkNQRGVtTUErVW42cWVjdGpVMG8zUllrY1dKR25idHBo?=
 =?utf-8?B?SjhWTkVObjdsZ3hsTXI0NUprbFVFSTZvTFFoN1owakpmYnVLS0NpOHRZQnIv?=
 =?utf-8?B?OUVaT29ldlZyTFRXVVRzMWJtWG02cVRURk1ETlZXOWEvYVVPRkw1Tmx4d2l4?=
 =?utf-8?B?RUJUdXhKbUc4ZklsV0tsRHlmR01kUFVvKzlHd1NFOTJiNHlIOUVQc2Rsa1RX?=
 =?utf-8?B?UElGaFBHdWd1Sk9Ha0RiYXZqMnNCSERyakFjem5uOTB0L2NMa2RlczQxNDdq?=
 =?utf-8?B?VVBFRGdjREFjcllObzBFMTBXak9RY1JSY3U5YndGMUJTNGpSUG5BbG9VTjZu?=
 =?utf-8?B?b216ZkdOTlhQTW5hajBRNjRSc256VHA0d3g2T0tCc3lRZVpqZ0ZObnlZeXFj?=
 =?utf-8?B?Sys0aFluNEwzWXNoYlVzRitKdEhKbkRZamNFQjg3M0QwWW9CZGoxU0s1MUhX?=
 =?utf-8?B?OVVzU2RuSi91aHVoNCtPUUZrUHgzSDVDdUYxN2RKQXFxUUwwc3NlcDd0Qm5F?=
 =?utf-8?B?Z1JFRWVDbnJnQ2FqdHB0d3NncnZjTjNBWXVCSHFud0VvUG5KandEU0J3cTl3?=
 =?utf-8?B?akFaR2YzSmZpNGk2K2lHWjJNNWJpZXQxaUFVRTJFc05zS3hQcHFzd1ZJUXEy?=
 =?utf-8?B?SFJaemcxcEdFaFNsWUEwbmpXam96MG14SUM3dFl4SzV6UExrQ0lTR1lFbDVL?=
 =?utf-8?B?WEZPclFtNS9KWjFiOFF0ZmdSc3RUcnU2RWorQ01QVDdyZXRGQmdMVGZrQldn?=
 =?utf-8?B?WUhhMWh4aXVKQUIvSHh4NDlYWHphKzBuUzFIUjVlbjZCSzJweWtFcmovY0hr?=
 =?utf-8?B?M3pDbkJoSiswaUtxb050c3VFaDBUdUJJZ1U1RUlnWEo2OXl3bGpvcDNFK1Bm?=
 =?utf-8?B?aXNQdXdPSU9rcjE0Z2NJU2lPYlVYeDVBZ3FhVDd3Q2FVdnk3RHBLb0k2cDI3?=
 =?utf-8?B?Mzk2ek5kT0hYVTN0aVdDRnc5Yy9xU3dVUExaM01sQmtWakc0NE5nTWRUUmtx?=
 =?utf-8?B?aWpwa01iZzg2NDRqazdBZ0NaS2p4VHdrNktEWVlXQlpRRG5HUjkyUHFyR3hO?=
 =?utf-8?B?RUtobkwxb3dJNWc3NUdSLzlZTzdHRmdLZDVlT253TzdxOUJrdjJrcmc3WE5S?=
 =?utf-8?B?c0lYRkVSS1VKRGxiSGhKdXNuWTJOaUc3ZitBOUQ0WWR6MjloeGFDR1JyZEs0?=
 =?utf-8?B?SmsvQjFOb2o4Sy9lZEsxY1I3R0R0ZU9OVW9PQzBlTFNYcGRML1pIc015czFV?=
 =?utf-8?B?d0V6OE5kRm9oQm1mMVI0TExCbkJFcTNidWIycHRrRUtrcEg4VVhobDR1RFly?=
 =?utf-8?B?UW1iaEJvRlNXeU5nZlBtbk9UQURaWVBKaC9mN2JzVVNzN0xJRUduUG1OWXdl?=
 =?utf-8?B?ZzBSK0lnZ3hVYVdYYkdJdmdXbjJvRkZKcFVqN214QmhCQTNOWjdjV2R4Y0Jn?=
 =?utf-8?B?Vmk5cmNjWlp2MWZmK3JxVldMbXpZejM0b2tWRTcxdlBkd2l5SEtZaU0yYkhQ?=
 =?utf-8?B?NUVvZFhiQW1hUzRqZERsUGxLUFFicnFaeUI5NW0vWmJWYzM0ZHpneExFc1F4?=
 =?utf-8?B?QUJhWi9Ub0kzUU9yWlFOYjJndzBaL3lCVWRaL0RJZG0zQW00TU43UkxiOWxE?=
 =?utf-8?B?cnY5am5aT0ZWOTlOcDBJd0t2NGNBMkU0V3NWRUh5Wm1xampPR21yTkh0Rnl0?=
 =?utf-8?B?cVVLS1BHRC9iQmFtb2NqdEFxS040eGJ5Qm1kR08ybWhndzJWV0ZMc1VQNlJG?=
 =?utf-8?B?OG1BRmgvOWpzN0ZKK3Fpd28vYUtJeGF5NmsyY1A3RDRLbzQ4MlRYa0tyVEd4?=
 =?utf-8?B?VXNLbHBkbVJSSFUzcmo4SDFHa3U5V28zRkNzZjJOUWVYdzdTbnh6RGhDdUZx?=
 =?utf-8?B?eVE0S3Z0NjB1Wkl4UEliVjlnTzF0VFg2V3g4QTBsaTJZU0RPZkpFTzd6cFBO?=
 =?utf-8?B?QWZFN3VLVVlkbVBHVmNueE9tbzIyZmNiTDJYK0s1em1CTnVNOE4vSGc4ZXlV?=
 =?utf-8?B?eDdKRUcvRlUwczFLSGRYaWRLZTNjSFAzc2tRcVcwTE9qOGNMekd5Z3hZRlZu?=
 =?utf-8?B?N1pVaEZHamQzbmR4NUt0cE5Gdi9zS0swc1VRd0tvVVp4eVF1ak9tdz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e4f99fcc-e5ba-405e-2897-08da4890375e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2022 14:15:46.9004
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HZxkKwVDamEDKjproAJpB/utiTkmDqEv3sdwDk4ZwicE0696bvUNJ404UhjyBYw31PegCM7XiDe2+kwPdMdgZg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4205

On 07.06.2022 16:10, Daniel P. Smith wrote:
> On 6/7/22 09:58, Jan Beulich wrote:
>> On 07.06.2022 15:47, Daniel P. Smith wrote:
>>>
>>> On 6/2/22 05:47, Jan Beulich wrote:
>>>> On 31.05.2022 20:20, Daniel P. Smith wrote:
>>>>> Previously, initializing the policy buffer was split between two functions,
>>>>> xsm_{multiboot,dt}_policy_init() and xsm_core_init(). The latter for loading
>>>>> the policy from boot modules and the former for falling back to built-in
>>>>> policy.
>>>>>
>>>>> This patch moves all policy buffer initialization logic under the
>>>>> xsm_{multiboot,dt}_policy_init() functions. It then ensures that an error
>>>>> message is printed for every error condition that may occur in the functions.
>>>>> With all policy buffer init contained and only called when the policy buffer
>>>>> must be populated, the respective xsm_{mb,dt}_init() functions will panic for
>>>>> all errors except ENOENT. An ENOENT signifies that a policy file could not be
>>>>> located. Since it is not possible to know if late loading of the policy file is
>>>>> intended, a warning is reported and XSM initialization is continued.
>>>>
>>>> Is it really not possible to know? flask_init() panics in the one case
>>>> where a policy is strictly required. And I'm not convinced it is
>>>> appropriate to issue both an error and a warning in most (all?) of the
>>>> other cases (and it should be at most one of the two anyway imo).
>>>
>>> With how XSM currently works, I do not see how without creating a
>>> layering violation, as you had mentioned before. It is possible for
>>> flask_init() to know because the flask= parameter belongs to the flask
>>> policy module and can be directly checked.
>>>
>>> I think we view this differently. A call to
>>> xsm_{multiboot,dt}_policy_init() is asking for a policy file to be
>>> loaded. If it is not able to do so, that is an error. This error is
>>> reported back to xsm_{multiboot,dt}_init(), which is responsible for
>>> initializing the XSM framework. An error that inhibits it from
>>> initializing XSM is fatal, whereas an error that does not block
>>> initialization should merely warn the user. While it is true that the
>>> only error from xsm_multiboot_policy_init() currently is a failure to
>>> locate a policy file, ENOENT, I don't see how that changes the
>>> understanding.
>>
>> Well, I think that to avoid the layering violation the decision whether
>> an error is severe enough to warrant a warning (or is even fatal) needs
>> to be left to the specific model (i.e. Flask in this case).
> 
> Except that it is not the policy module that loads the policy, where the
> error could occur. As you pointed out for MISRA compliance, you cannot
> have unhandled errors. So either the errors must be ignored where they
> occur, waiting for a larger, non-specific panic, or nested
> handling/reporting of the errors needs to be provided so a user can see
> in the log why they ended up at the panic.

Right - I was thinking that the error code could be propagated to Flask
instead of (or, less desirable, along with) the NULL pointer indicating
the absence of a policy module. That still would satisfy the "error
indications need to be checked for" MISRA requirement.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:29:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:29:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343337.568652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaCS-0004h2-SV; Tue, 07 Jun 2022 14:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343337.568652; Tue, 07 Jun 2022 14:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaCS-0004gv-Pp; Tue, 07 Jun 2022 14:28:52 +0000
Received: by outflank-mailman (input) for mailman id 343337;
 Tue, 07 Jun 2022 14:28:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0mQ=WO=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1nyaCR-0004go-4D
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:28:51 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 25bb5404-e66e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 16:28:49 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id t25so28581893lfg.7
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jun 2022 07:28:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25bb5404-e66e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=4Cp/kyM102WmBQnkCm9RNeqyZdx7wIJ+Bk+Kzplg+ZE=;
        b=QzccWae2yrURKMCTOJ3MlyR29k9PwmbUktfz9xzPMWMLqTJXzCP3Z7wr20btdLbYWY
         Cnlb7Vfytse8b6Aqj+HOWrdjsT0bUD11pCYpY6twTaN7fhbjw0jFIXr+LWcM3D4aXH6P
         w1CO5ezX3IA08Db1V+Mn2OyWXDmYEmiTMrZJPCBz8UMazF4IRzRrOrvP21fC2rfP3lGZ
         asnpPT8v1XyCRHnwP2FCPt9cU8x/glMWSYeaIxktgZr2tdrf6pIh1Cbs1YfCFLYdXjv1
         4ya2GRrpL5WPxkryHu/YcNKyIGwTlixscYAh1RDJthWcB9BduwC4gLpjRwMCkDVz7NXc
         0XaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=4Cp/kyM102WmBQnkCm9RNeqyZdx7wIJ+Bk+Kzplg+ZE=;
        b=crH7PeqiDoBPXKE8RMsAhICItH38gQKg/vzsHKSv8m7ap1bEFv364bPin5cYn0wM/k
         NSgWwAcxreJ5RfvxFL22fEMX3dqtyWCcNEvPJZwYW5IRJsFoBqGGDfEDnT83k9pf9GGS
         yF8NR+VRWbzjGr9P7AhSSeSjHTNoI0QpDyezCi/M9AfhxnUZjTi2A79GGY3pZ9aQ8NsB
         Bqy2sKyZxUwUp97sjDQuSNgg2DpLg09hXLuf4/UTSYquv/Gkbnoc/1F0yskybP7a6Rp5
         PbXItjLsfBRT4N6LBRDeYkwOJp3iWS76I01UcmiU4AWNCp8bzv1UVg6sqDoYser0haP+
         DKjg==
X-Gm-Message-State: AOAM532mUByDa0zXo79YXYV9mZst2pAkj5rpRi3hWEpx8SByUbbPXZxX
	fLJ0+SslRHpRqqmXzDRS+NCprPG1FdTehYoqcPo=
X-Google-Smtp-Source: ABdhPJyBzkxpbAnjAcYH30+5JeC4fL+UQAGjJXcJHNVqJIbc3+U3+XDrfpeKzEJjDRg1ytMERd82aOE+3dsqmFR2eu8=
X-Received: by 2002:a05:6512:249:b0:479:a3c:de with SMTP id
 b9-20020a056512024900b004790a3c00demr19183936lfo.128.1654612129170; Tue, 07
 Jun 2022 07:28:49 -0700 (PDT)
MIME-Version: 1.0
References: <20220531182041.10640-1-dpsmith@apertussolutions.com>
 <20220531182041.10640-3-dpsmith@apertussolutions.com> <6f177e81-9c97-0f7f-2f79-88cc4db83e02@citrix.com>
 <6d6b14ad-2a0d-c7aa-d835-3131f83e0f68@apertussolutions.com>
In-Reply-To: <6d6b14ad-2a0d-c7aa-d835-3131f83e0f68@apertussolutions.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 7 Jun 2022 10:28:37 -0400
Message-ID: <CAKf6xpuT1Dg1+JmXMngz0B03aqSUvW6+DOeapLvhhsLzS6_N2Q@mail.gmail.com>
Subject: Re: [PATCH v4 2/3] xsm: consolidate loading the policy buffer
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	"scott.davis@starlab.io" <scott.davis@starlab.io>, 
	"christopher.clark@starlab.io" <christopher.clark@starlab.io>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jun 7, 2022 at 9:16 AM Daniel P. Smith
<dpsmith@apertussolutions.com> wrote:
> Unfortunately, the scope of what this series started out to solve (not
> walking all the boot modules when no policy file is needed) and what the
> reviewers have been requesting be addressed are continually diverging.

You only need patch 1/3 to skip the walk, AFAICT.  So maybe you should
just submit that independently.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343345.568663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEm-00061S-9j; Tue, 07 Jun 2022 14:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343345.568663; Tue, 07 Jun 2022 14:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEm-00061L-6M; Tue, 07 Jun 2022 14:31:16 +0000
Received: by outflank-mailman (input) for mailman id 343345;
 Tue, 07 Jun 2022 14:31:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEj-000619-Rs
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:14 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 797a938f-e66e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 16:31:11 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 0F4E85C009E;
 Tue,  7 Jun 2022 10:31:10 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 07 Jun 2022 10:31:10 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:07 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 797a938f-e66e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to; s=fm1; t=
	1654612270; x=1654698670; bh=R7ACutaosRorqKmEfDpSiNv/WJ0bp2X+dUr
	jFXbFlLE=; b=BukLq8aYQVUqGFBvGB5c79tvLsRiyYeFEkyTZT987Q1vGIUU/IQ
	nNw/7jj9+a3OHve83km1Wch4tUWz3FdMLSJloYjOmtp45KwxXfWNJTBWju81Nbh7
	8GYY29h82zR2/EyA9eAScOY/HxRGdCd2nmVg9CMpF+ESwel1NEmZaMb4sxN/QRFn
	GloNiIcQ2gwfLcXWpKiBhTgOH7xji6Jinz59Sb5PGGJTPKNM7CyZ/+m80k480MVe
	8CyI5H2U7kUXcCaCCYfaNBROTFYFsREFWwqO9kHo0ndYqvSFKy3saen69cd6c7MC
	3axYkp1Y94b4bE1UZNLsf/XfbZ9+Ex7N7pg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1654612270; x=1654698670; bh=R7ACutaosRorq
	KmEfDpSiNv/WJ0bp2X+dUrjFXbFlLE=; b=BPf11H9T2WD6/YZ6LCN9RfvsvYZwa
	cJaurm1EnjUJb2N0cpTu8jTo3FRW3HJOxeuazNRwKnAW9bLEhXJXXRLuQMQ3cblH
	eJwsxIqg1CXzHsJydQwFqpI3bl9bf27ZTtQoxB/2LPcvlpvGccTWi0bY3OPL/ZHR
	V6jwZx6BDZe3a438HqxfhI3V/pxjkMB1Xq9hpwoLw/9/AnLiuL1pwuotc41EEM+E
	jrmJr9/uWHyI4AL4wOrs5ED5QuiuNVn5RgJ46Jilu5uzbWHfnyH7ML73PvCRjpSZ
	ldD4w2KoBpEcReQL27q9UKqXoYnk4Cwd7a4PudsTqX8uTTkRV6TIglnqQ==
X-ME-Sender: <xms:LWGfYsK5A69vdfLE98_MidZDABvZUbdOYirSG0xzkWnhjOEArLve7w>
    <xme:LWGfYsJ_SKC66dD2M3emnfmsFlTqSkhu34Xfhs_As6W0tQ4_dPwElM-EWqw508pKA
    CwWwBkICFRxCQ>
X-ME-Received: <xmr:LWGfYssJ31tLVE97orF4Pg01iv6bbjqsri282Fppva3IVkZ0Lstyg16Tnyf88tHicWh5LWEwSbeB2uA1PsmtW3pPSc9z1pi2kZnPrFgKICi9CALKFRQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddthedgjeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeefgffg
    geevhffggfetfefhffeuvefhvdevkeehkedttddtgeefkeduheevffduleenucffohhmrg
    hinhepghhithhhuhgsrdgtohhmnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghm
    pehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslh
    grsgdrtghomh
X-ME-Proxy: <xmx:LWGfYpb5ytZa_dz10dx15H08k-q7QpXgd98R1Bf6vxmC31ogSgiWKA>
    <xmx:LWGfYjbNNDf3KNSqqUt6no1mHGR8B54d0p-bLVGcTF3dmNo7r7fyTQ>
    <xmx:LWGfYlCAiHTQvQIC8rZMZbZ1AUq7wO-WMdNfKuw69gM-wkoufnFNdg>
    <xmx:LmGfYtmriZTmcLe2yhFp6Xml2BzHjhRS9Mo5yAFFLyhCyXqG9cAg1w>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 00/10] Add Xue - console over USB 3 Debug Capability
Date: Tue,  7 Jun 2022 16:30:06 +0200
Message-Id: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is an integration of https://github.com/connojd/xue into mainline Xen.
The series includes several patches that I made in the process; some are
only loosely related.

The driver developed by Connor provides an output-only console via the USB3
Debug Capability (DbC). The capability is designed to operate mostly
independently of the normal XHCI driver, so this patch series allows dom0 to
drive the standard USB3 controller, while Xen uses DbC for console output.

Changes since RFC:
 - move the driver to xue.c, remove non-Xen parts, remove now unneeded abstraction
 - adjust for Xen code style
 - build for x86 only
 - drop patch hiding the device from dom0

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Connor Davis <connojdavis@gmail.com>

Connor Davis (1):
  drivers/char: Add support for Xue USB3 debugger

Marek Marczykowski-Górecki (9):
  xue: reset XHCI ports when initializing dbc
  xue: add support for selecting specific xhci
  ehci-dbgp: fix selecting n-th ehci controller
  console: support multiple serial console simultaneously
  IOMMU: add common API for device reserved memory
  IOMMU/VT-d: wire common device reserved memory API
  IOMMU/AMD: wire common device reserved memory API
  xue: mark DMA buffers as reserved for the device
  xue: allow driving the rest of XHCI by a domain while Xen uses DbC

 docs/misc/xen-command-line.pandoc        |    5 +-
 xen/arch/x86/Kconfig                     |    1 +-
 xen/arch/x86/include/asm/fixmap.h        |    4 +-
 xen/arch/x86/setup.c                     |    1 +-
 xen/drivers/char/Kconfig                 |    7 +-
 xen/drivers/char/Makefile                |    1 +-
 xen/drivers/char/console.c               |   58 +-
 xen/drivers/char/ehci-dbgp.c             |    2 +-
 xen/drivers/char/xue.c                   | 1089 +++++++++++++++++++++++-
 xen/drivers/passthrough/amd/iommu_acpi.c |   16 +-
 xen/drivers/passthrough/iommu.c          |   40 +-
 xen/drivers/passthrough/vtd/dmar.c       |  203 ++--
 xen/include/xen/iommu.h                  |   11 +-
 xen/include/xen/serial.h                 |    1 +-
 14 files changed, 1342 insertions(+), 97 deletions(-)
 create mode 100644 xen/drivers/char/xue.c

base-commit: 49dd52fb1311dadab29f6634d0bc1f4c022c357a
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343346.568669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEm-00064z-Jw; Tue, 07 Jun 2022 14:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343346.568669; Tue, 07 Jun 2022 14:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEm-00064p-FV; Tue, 07 Jun 2022 14:31:16 +0000
Received: by outflank-mailman (input) for mailman id 343346;
 Tue, 07 Jun 2022 14:31:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEl-000619-Nd
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:15 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7bcb5aaa-e66e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 16:31:14 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 1A2265C01E3;
 Tue,  7 Jun 2022 10:31:14 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 07 Jun 2022 10:31:14 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:12 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bcb5aaa-e66e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612274; x=1654698674; bh=hjjCyL93xP
	zVhTjBxPMqGX4BQBGEIc0Cak0bF/sHaPE=; b=VyGiUkOtBFMTxv7ZoHNmIbNOsd
	AB177RNXxwaSJWtagThnKSrkUjb5VNbsMLrTPv9n+jApy9ASVbcNMK2OVcfLUmhz
	0BIUbAeV4t8IEQLZHQK6zBWHVikGpjc1PvKqhpVadoazrvobSkNmdrkm+tn6bTex
	NrVXxUxZPDjXWf752c20zj0LowKRi6aTmSVWCkIzdAy4+VUEo3EYZ0BafpBjZzjh
	HhPhRZDycba/TePH6up79LLmHE3xzqyAoxoEBe55jw8vf6grHr9pf6NVi9xSalVV
	VweqCoKiEcoAPUhqqjvRGA9H/r7qCkmeWcCbgQiF+rKuAXI9gyk3r04vfFAQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612274; x=
	1654698674; bh=hjjCyL93xPzVhTjBxPMqGX4BQBGEIc0Cak0bF/sHaPE=; b=b
	G/zZt35M7Z/JIcwnFutal3cTjRw5VWuiwzM9heDUnGkkNsnyMHNrNfU8VRjFGIf8
	DWlQWykXOiBW/F0kJtQqzsjE6abvaW++gl9EPEfyJXQETnxFB+k/ZPwPud3gT/Yl
	WGXbI1GG8Lt29FOm35DA40JoKHsOZknAuMy4Axny/2jYktjbsfEqg5DSU8u1UKe4
	I8y5oxDn7w5uAoB3G4Q5e17S1p/uf9j/G9ZIEwito3XY6zCXfc+Imd9DDQ8MOCeI
	D4Z8+TyMroYRiQrYJW/SEDT5L5k7OkzZmpm4ebaM9A0ijQx0tcKe7yrmXXsqOBwE
	bdeWxAxb+xrCGA5heT+2g==
X-ME-Sender: <xms:MWGfYoqd5TDAYxH4Y_e8QvEUiH3rCGOH_GR_6Ec5fAOtRFi4b6YRcQ>
    <xme:MWGfYuorfEwqutM0VXSkxCA_FUaHIPIHzE_kf9tBx0SuJ1O3G_T_UIka_WXaIdTqc
    SThyy0lmyFiQw>
X-ME-Received: <xmr:MWGfYtNGa39QlgG7hirnwxW6qPnxa4qwYk-O5DUUdyhKXa6Z781GdVmqs1RB1e6SWArXkEa7u5ncmdSvwdxeWDpTptjCFmZkug2cd__p0vXBsTTOZqI>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddthedgjeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:MWGfYv4ltiv27I8yt0Sprm54VTOxeEYbJBxNoJA1o-V-bRt-KqpIrg>
    <xmx:MWGfYn4lHMlK_U5KTfoEOrz6CBUm26hB5zAaRq2N5VG_tYOtm3JZCA>
    <xmx:MWGfYvjGuT8SZFy80Go6NWa9aE5Q_7JDhnxltCC1FCx0GrokNnbr9w>
    <xmx:MmGfYtQW4zWLK4OQ3boShJ-3X6ZBfj68oUMiFQtTDSt49wpZLANcOg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 02/10] xue: reset XHCI ports when initializing dbc
Date: Tue,  7 Jun 2022 16:30:08 +0200
Message-Id: <f89ad3528e9d57e4598ac450f08a81391538fa69.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Reset the ports to force the host system to re-enumerate devices. Otherwise
the cable would need to be re-plugged, or the device would wait in the
"configuring" state indefinitely.

Trick and code copied from Linux:
drivers/usb/early/xhci-dbc.c:xdbc_start()->xdbc_reset_debug_port()

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c | 70 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 70 insertions(+)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index e95dd09d39a8..a9ba25d9d07e 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -60,6 +60,10 @@
     ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
      (1UL << XUE_PSC_CEC))
 
+#define XUE_XHC_EXT_PORT_MAJOR(x)  (((x) >> 24) & 0xff)
+#define PORT_RESET  (1 << 4)
+#define PORT_CONNECT  (1 << 0)
+
 #define xue_debug(...) printk("xue debug: " __VA_ARGS__)
 #define xue_alert(...) printk("xue alert: " __VA_ARGS__)
 #define xue_error(...) printk("xue error: " __VA_ARGS__)
@@ -604,6 +608,68 @@ static void xue_init_strings(struct xue *xue, uint32_t *info)
     info[8] = (4 << 24) | (30 << 16) | (8 << 8) | 6;
 }
 
+static void xue_do_reset_debug_port(struct xue *xue, u32 id, u32 count)
+{
+    uint32_t *ops_reg;
+    uint32_t *portsc;
+    u32 val, cap_length;
+    int i;
+
+    cap_length = (*(uint32_t*)xue->xhc_mmio) & 0xff;
+    ops_reg = xue->xhc_mmio + cap_length;
+
+    id--;
+    for ( i = id; i < (id + count); i++ )
+    {
+        portsc = ops_reg + 0x100 + i * 0x4;
+        val = *portsc;
+        if ( !(val & PORT_CONNECT) )
+            *portsc = val | PORT_RESET;
+    }
+}
+
+
+static void xue_reset_debug_port(struct xue *xue)
+{
+    u32 val, port_offset, port_count;
+    uint32_t *xcap;
+    uint32_t next;
+    uint32_t id;
+    uint8_t *mmio = (uint8_t *)xue->xhc_mmio;
+    uint32_t *hccp1 = (uint32_t *)(mmio + 0x10);
+    const uint32_t PROTOCOL_ID = 0x2;
+
+    /**
+     * Paranoid check against a zero value. The spec mandates that
+     * at least one "supported protocol" capability must be implemented,
+     * so this should always be false.
+     */
+    if ( (*hccp1 & 0xFFFF0000) == 0 )
+        return;
+
+    xcap = (uint32_t *)(mmio + (((*hccp1 & 0xFFFF0000) >> 16) << 2));
+    next = (*xcap & 0xFF00) >> 8;
+    id = *xcap & 0xFF;
+
+    /* Look for "supported protocol" capability, major revision 3 */
+    for ( ; next; xcap += next, id = *xcap & 0xFF, next = (*xcap & 0xFF00) >> 8)
+    {
+        if ( id != PROTOCOL_ID )
+            continue;
+
+        if ( XUE_XHC_EXT_PORT_MAJOR(*xcap) != 0x3 )
+            continue;
+
+        /* extract ports offset and count from the capability structure */
+        val = *(xcap + 2);
+        port_offset = val & 0xff;
+        port_count = (val >> 8) & 0xff;
+
+        /* and reset them all */
+        xue_do_reset_debug_port(xue, port_offset, port_count);
+    }
+}
+
 static void xue_dump(struct xue *xue)
 {
     struct xue_dbc_reg *r = xue->dbc_reg;
@@ -639,6 +705,10 @@ static void xue_enable_dbc(struct xue *xue)
     while ( (reg->ctrl & (1UL << XUE_CTRL_DCE)) == 0 )
         xue_sys_pause();
 
+    /* Reset ports on initial open, to force re-enumeration by the host */
+    if ( !xue->open )
+        xue_reset_debug_port(xue);
+
     wmb();
     reg->portsc |= (1UL << XUE_PSC_PED);
     wmb();
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343347.568685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEo-0006Xj-3V; Tue, 07 Jun 2022 14:31:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343347.568685; Tue, 07 Jun 2022 14:31:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEn-0006Xa-VG; Tue, 07 Jun 2022 14:31:17 +0000
Received: by outflank-mailman (input) for mailman id 343347;
 Tue, 07 Jun 2022 14:31:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEm-00061K-Er
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:16 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ac4aee9-e66e-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 16:31:12 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 115835C01B2;
 Tue,  7 Jun 2022 10:31:12 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 07 Jun 2022 10:31:12 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:10 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ac4aee9-e66e-11ec-b605-df0040e90b76
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Connor Davis <davisc@ainfosec.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1 01/10] drivers/char: Add support for Xue USB3 debugger
Date: Tue,  7 Jun 2022 16:30:07 +0200
Message-Id: <87c73737fe8ec6d9fe31c844b72b6c979b90c25d.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Connor Davis <davisc@ainfosec.com>

[Connor]
Xue is a cross-platform USB 3 debugger that drives the Debug
Capability (DbC) of xHCI-compliant host controllers. This patch
implements the operations needed for xue to initialize the host
controller's DbC and communicate with it. It also implements a struct
uart_driver that uses xue as a backend. Note that only target -> host
communication is supported for now. To use Xue as a console, add
'console=dbgp dbgp=xue' to the command line.
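
As a concrete example, a GRUB entry enabling the Xue console might look like
the following (a sketch only; the kernel/initrd paths and other Xen options
are placeholders for whatever the system already uses):

```shell
# /etc/grub.d or grub.cfg fragment (illustrative paths)
multiboot2 /boot/xen.gz console=dbgp dbgp=xue
module2    /boot/vmlinuz root=/dev/sda1
```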

[Marek]
The Xue driver is taken from https://github.com/connojd/xue and heavily
refactored to fit into the Xen code base. Major changes include:
- drop support for non-Xen systems
- drop xue_ops abstraction
- use Xen's native helper functions for PCI access
- move all the code to xue.c, drop "inline"
- build for x86 only
- annotate functions with cf_check
- adjust for Xen's code style

Signed-off-by: Connor Davis <davisc@ainfosec.com>
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/arch/x86/Kconfig              |   1 +-
 xen/arch/x86/include/asm/fixmap.h |   4 +-
 xen/arch/x86/setup.c              |   1 +-
 xen/drivers/char/Kconfig          |   7 +-
 xen/drivers/char/Makefile         |   1 +-
 xen/drivers/char/xue.c            | 957 +++++++++++++++++++++++++++++++-
 xen/include/xen/serial.h          |   1 +-
 7 files changed, 972 insertions(+)
 create mode 100644 xen/drivers/char/xue.c

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 06d6fbc86478..9260f464d478 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -13,6 +13,7 @@ config X86
 	select HAS_COMPAT
 	select HAS_CPUFREQ
 	select HAS_EHCI
+	select HAS_XHCI
 	select HAS_EX_TABLE
 	select HAS_FAST_MULTIPLY
 	select HAS_IOPORTS
diff --git a/xen/arch/x86/include/asm/fixmap.h b/xen/arch/x86/include/asm/fixmap.h
index 20746afd0a2a..bc39ffe896b1 100644
--- a/xen/arch/x86/include/asm/fixmap.h
+++ b/xen/arch/x86/include/asm/fixmap.h
@@ -25,6 +25,8 @@
 #include <asm/msi.h>
 #include <acpi/apei.h>
 
+#define MAX_XHCI_PAGES 16
+
 /*
  * Here we define all the compile-time 'special' virtual
  * addresses. The point is to have a constant address at
@@ -43,6 +45,8 @@ enum fixed_addresses {
     FIX_COM_BEGIN,
     FIX_COM_END,
     FIX_EHCI_DBGP,
+    FIX_XHCI_BEGIN,
+    FIX_XHCI_END = FIX_XHCI_BEGIN + MAX_XHCI_PAGES - 1,
 #ifdef CONFIG_XEN_GUEST
     FIX_PV_CONSOLE,
     FIX_XEN_SHARED_INFO,
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 53a73010e029..801081befa78 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -946,6 +946,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     ns16550.irq     = 3;
     ns16550_init(1, &ns16550);
     ehci_dbgp_init();
+    xue_uart_init();
     console_init_preirq();
 
     if ( pvh_boot )
diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index e5f7b1d8eb8a..55d35af960e5 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -74,3 +74,10 @@ config HAS_EHCI
 	help
 	  This selects the USB based EHCI debug port to be used as a UART. If
 	  you have an x86 based system with USB, say Y.
+
+config HAS_XHCI
+	bool
+	depends on X86
+	help
+	  This selects the USB based XHCI debug capability to be used as a UART.
+	  If you have an x86 based system with USB3, say Y.
diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
index 14e67cf072d7..bda1e44d3f39 100644
--- a/xen/drivers/char/Makefile
+++ b/xen/drivers/char/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_HAS_MVEBU) += mvebu-uart.o
 obj-$(CONFIG_HAS_OMAP) += omap-uart.o
 obj-$(CONFIG_HAS_SCIF) += scif-uart.o
 obj-$(CONFIG_HAS_EHCI) += ehci-dbgp.o
+obj-$(CONFIG_HAS_XHCI) += xue.o
 obj-$(CONFIG_HAS_IMX_LPUART) += imx-lpuart.o
 obj-$(CONFIG_ARM) += arm-uart.o
 obj-y += serial.o
diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
new file mode 100644
index 000000000000..e95dd09d39a8
--- /dev/null
+++ b/xen/drivers/char/xue.c
@@ -0,0 +1,957 @@
+/*
+ * drivers/char/xue.c
+ *
+ * Xen port for the xue debugger
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2019 Assured Information Security.
+ */
+
+#include <xen/delay.h>
+#include <xen/types.h>
+#include <asm/string.h>
+#include <asm/system.h>
+#include <xen/serial.h>
+#include <xen/timer.h>
+#include <xen/param.h>
+#include <asm/fixmap.h>
+#include <asm/io.h>
+#include <xen/mm.h>
+
+#define XUE_POLL_INTERVAL 100 /* us */
+
+#define XUE_PAGE_SIZE 4096ULL
+
+/* Supported xHC PCI configurations */
+#define XUE_XHC_CLASSC 0xC0330ULL
+
+/* DbC idVendor and idProduct */
+#define XUE_DBC_VENDOR 0x1D6B
+#define XUE_DBC_PRODUCT 0x0010
+#define XUE_DBC_PROTOCOL 0x0000
+
+/* DCCTRL fields */
+#define XUE_CTRL_DCR 0
+#define XUE_CTRL_HOT 2
+#define XUE_CTRL_HIT 3
+#define XUE_CTRL_DRC 4
+#define XUE_CTRL_DCE 31
+
+/* DCPORTSC fields */
+#define XUE_PSC_PED 1
+#define XUE_PSC_CSC 17
+#define XUE_PSC_PRC 21
+#define XUE_PSC_PLC 22
+#define XUE_PSC_CEC 23
+
+#define XUE_PSC_ACK_MASK                                                       \
+    ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
+     (1UL << XUE_PSC_CEC))
+
+#define xue_debug(...) printk("xue debug: " __VA_ARGS__)
+#define xue_alert(...) printk("xue alert: " __VA_ARGS__)
+#define xue_error(...) printk("xue error: " __VA_ARGS__)
+
+/******************************************************************************
+ * TRB ring (summarized from the manual):
+ *
+ * TRB rings are circular queues of TRBs shared between the xHC and the driver.
+ * Each ring has one producer and one consumer. The DbC has one event
+ * ring and two transfer rings; one IN and one OUT.
+ *
+ * The DbC hardware is the producer on the event ring, and
+ * xue is the consumer. This means that event TRBs are read-only from
+ * the xue.
+ *
+ * OTOH, xue is the producer of transfer TRBs on the two transfer
+ * rings, so xue enqueues transfers, and the hardware dequeues
+ * them. The dequeue pointer of a transfer ring is read by
+ * xue by examining the latest transfer event TRB on the event ring. The
+ * transfer event TRB contains the address of the transfer TRB that generated
+ * the event.
+ *
+ * To make each transfer ring circular, the last TRB must be a link TRB, which
+ * points to the beginning of the next queue. Note that this implementation
+ * does not support multiple segments, so each link TRB points back to the
+ * beginning of its own segment.
+ ******************************************************************************/
+
+/* TRB types */
+enum {
+    xue_trb_norm = 1,
+    xue_trb_link = 6,
+    xue_trb_tfre = 32,
+    xue_trb_psce = 34
+};
+
+/* TRB completion codes */
+enum { xue_trb_cc_success = 1, xue_trb_cc_trb_err = 5 };
+
+/* DbC endpoint types */
+enum { xue_ep_bulk_out = 2, xue_ep_bulk_in = 6 };
+
+/* DMA/MMIO structures */
+#pragma pack(push, 1)
+struct xue_trb {
+    uint64_t params;
+    uint32_t status;
+    uint32_t ctrl;
+};
+
+struct xue_erst_segment {
+    uint64_t base;
+    uint16_t size;
+    uint8_t rsvdz[6];
+};
+
+#define XUE_CTX_SIZE 16
+#define XUE_CTX_BYTES (XUE_CTX_SIZE * 4)
+
+struct xue_dbc_ctx {
+    uint32_t info[XUE_CTX_SIZE];
+    uint32_t ep_out[XUE_CTX_SIZE];
+    uint32_t ep_in[XUE_CTX_SIZE];
+};
+
+struct xue_dbc_reg {
+    uint32_t id;
+    uint32_t db;
+    uint32_t erstsz;
+    uint32_t rsvdz;
+    uint64_t erstba;
+    uint64_t erdp;
+    uint32_t ctrl;
+    uint32_t st;
+    uint32_t portsc;
+    uint32_t rsvdp;
+    uint64_t cp;
+    uint32_t ddi1;
+    uint32_t ddi2;
+};
+#pragma pack(pop)
+
+#define XUE_TRB_MAX_TFR (XUE_PAGE_SIZE << 4)
+#define XUE_TRB_PER_PAGE (XUE_PAGE_SIZE / sizeof(struct xue_trb))
+
+/* Defines the size in bytes of TRB rings as 2^XUE_TRB_RING_ORDER * 4096 */
+#ifndef XUE_TRB_RING_ORDER
+#define XUE_TRB_RING_ORDER 4
+#endif
+#define XUE_TRB_RING_CAP (XUE_TRB_PER_PAGE * (1ULL << XUE_TRB_RING_ORDER))
+#define XUE_TRB_RING_BYTES (XUE_TRB_RING_CAP * sizeof(struct xue_trb))
+#define XUE_TRB_RING_MASK (XUE_TRB_RING_BYTES - 1U)
+
+struct xue_trb_ring {
+    struct xue_trb *trb; /* Array of TRBs */
+    uint32_t enq; /* The offset of the enqueue ptr */
+    uint32_t deq; /* The offset of the dequeue ptr */
+    uint8_t cyc; /* Cycle state toggled on each wrap-around */
+    uint8_t db; /* Doorbell target */
+};
+
+#define XUE_DB_OUT 0x0
+#define XUE_DB_IN 0x1
+#define XUE_DB_INVAL 0xFF
+
+/* Defines the size in bytes of work rings as 2^XUE_WORK_RING_ORDER * 4096 */
+#ifndef XUE_WORK_RING_ORDER
+#define XUE_WORK_RING_ORDER 3
+#endif
+#define XUE_WORK_RING_CAP (XUE_PAGE_SIZE * (1ULL << XUE_WORK_RING_ORDER))
+#define XUE_WORK_RING_BYTES XUE_WORK_RING_CAP
+
+#if XUE_WORK_RING_CAP > XUE_TRB_MAX_TFR
+#error "XUE_WORK_RING_ORDER must be at most 4"
+#endif
+
+struct xue_work_ring {
+    uint8_t *buf;
+    uint32_t enq;
+    uint32_t deq;
+    uint64_t dma;
+};
+
+struct xue {
+    struct xue_dbc_reg *dbc_reg;
+    struct xue_dbc_ctx *dbc_ctx;
+    struct xue_erst_segment *dbc_erst;
+    struct xue_trb_ring dbc_ering;
+    struct xue_trb_ring dbc_oring;
+    struct xue_trb_ring dbc_iring;
+    struct xue_work_ring dbc_owork;
+    char *dbc_str;
+
+    pci_sbdf_t sbdf;
+    uint64_t xhc_mmio_phys;
+    uint64_t xhc_mmio_size;
+    uint64_t xhc_dbc_offset;
+    void *xhc_mmio;
+
+    int open;
+};
+
+static void xue_sys_pause(void)
+{
+    __asm volatile("pause" ::: "memory");
+}
+
+static void *xue_sys_map_xhc(uint64_t phys, uint64_t size)
+{
+    size_t i;
+
+    if ( size != MAX_XHCI_PAGES * XUE_PAGE_SIZE )
+        return NULL;
+
+    for ( i = FIX_XHCI_END; i >= FIX_XHCI_BEGIN; i-- )
+    {
+        set_fixmap_nocache(i, phys);
+        phys += XUE_PAGE_SIZE;
+    }
+
+    /*
+     * The fixmap grows downward, so the lowest virt is
+     * at the highest index
+     */
+    return fix_to_virt(FIX_XHCI_END);
+}
+
+static void xue_flush_range(struct xue *xue, void *ptr, uint32_t bytes)
+{
+    uint32_t i;
+
+    const uint32_t clshft = 6;
+    const uint32_t clsize = (1UL << clshft);
+    const uint32_t clmask = clsize - 1;
+
+    uint32_t lines = (bytes >> clshft);
+    lines += (bytes & clmask) != 0;
+
+    for ( i = 0; i < lines; i++ )
+        clflush((void *)((uint64_t)ptr + (i * clsize)));
+}
+
+static int xue_init_xhc(struct xue *xue)
+{
+    uint32_t bar0;
+    uint64_t bar1;
+    uint64_t devfn;
+
+    /*
+     * Search PCI bus 0 for the xHC. All the host controllers supported so far
+     * are part of the chipset and are on bus 0.
+     */
+    for ( devfn = 0; devfn < 256; devfn++ ) {
+        uint32_t dev = (devfn & 0xF8) >> 3;
+        uint32_t fun = devfn & 0x07;
+        pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);
+        uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
+
+        if ( hdr == 0 || hdr == 0x80 )
+        {
+            if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
+            {
+                xue->sbdf = sbdf;
+                break;
+            }
+        }
+    }
+
+    if ( !xue->sbdf.sbdf )
+    {
+        xue_error("Compatible xHC not found on bus 0\n");
+        return 0;
+    }
+
+    /* ...we found it, so parse the BAR and map the registers */
+    bar0 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0);
+    bar1 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_1);
+
+    /* IO BARs not allowed; BAR must be 64-bit */
+    if ( (bar0 & 0x1) != 0 || ((bar0 & 0x6) >> 1) != 2 )
+        return 0;
+
+    pci_conf_write32(xue->sbdf, PCI_BASE_ADDRESS_0, 0xFFFFFFFF);
+    xue->xhc_mmio_size = ~(pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0) & 0xFFFFFFF0) + 1;
+    pci_conf_write32(xue->sbdf, PCI_BASE_ADDRESS_0, bar0);
+
+    xue->xhc_mmio_phys = (bar0 & 0xFFFFFFF0) | (bar1 << 32);
+    xue->xhc_mmio = xue_sys_map_xhc(xue->xhc_mmio_phys, xue->xhc_mmio_size);
+
+    return xue->xhc_mmio != NULL;
+}
+
+/**
+ * The first register of the debug capability is found by traversing the
+ * host controller's capability list (xcap) until a capability
+ * with ID = 0xA is found. The xHCI capability list begins at address
+ * mmio + (HCCPARAMS1[31:16] << 2)
+ */
+static struct xue_dbc_reg *xue_find_dbc(struct xue *xue)
+{
+    uint32_t *xcap;
+    uint32_t next;
+    uint32_t id;
+    uint8_t *mmio = (uint8_t *)xue->xhc_mmio;
+    uint32_t *hccp1 = (uint32_t *)(mmio + 0x10);
+    const uint32_t DBC_ID = 0xA;
+
+    /**
+     * Paranoid check against a zero value. The spec mandates that
+     * at least one "supported protocol" capability must be implemented,
+     * so this should always be false.
+     */
+    if ( (*hccp1 & 0xFFFF0000) == 0 )
+        return NULL;
+
+    xcap = (uint32_t *)(mmio + (((*hccp1 & 0xFFFF0000) >> 16) << 2));
+    next = (*xcap & 0xFF00) >> 8;
+    id = *xcap & 0xFF;
+
+    /**
+     * Table 7-1 states that 'next' is relative to
+     * the current value of xcap and is a dword offset.
+     */
+    while ( id != DBC_ID && next ) {
+        xcap += next;
+        id = *xcap & 0xFF;
+        next = (*xcap & 0xFF00) >> 8;
+    }
+
+    if ( id != DBC_ID )
+        return NULL;
+
+    xue->xhc_dbc_offset = (uint64_t)xcap - (uint64_t)mmio;
+    return (struct xue_dbc_reg *)xcap;
+}
+
+/**
+ * Fields with the same interpretation for every TRB type (section 4.11.1).
+ * These are the fields defined in the TRB template, minus the ENT bit. That
+ * bit is the toggle cycle bit in link TRBs, so it shouldn't be in the
+ * template.
+ */
+static uint32_t xue_trb_cyc(const struct xue_trb *trb)
+{
+    return trb->ctrl & 0x1;
+}
+
+static uint32_t xue_trb_type(const struct xue_trb *trb)
+{
+    return (trb->ctrl & 0xFC00) >> 10;
+}
+
+static void xue_trb_set_cyc(struct xue_trb *trb, uint32_t c)
+{
+    trb->ctrl &= ~0x1UL;
+    trb->ctrl |= c;
+}
+
+static void xue_trb_set_type(struct xue_trb *trb, uint32_t t)
+{
+    trb->ctrl &= ~0xFC00UL;
+    trb->ctrl |= (t << 10);
+}
+
+/* Fields for normal TRBs */
+static void xue_trb_norm_set_buf(struct xue_trb *trb, uint64_t addr)
+{
+    trb->params = addr;
+}
+
+static void xue_trb_norm_set_len(struct xue_trb *trb, uint32_t len)
+{
+    trb->status &= ~0x1FFFFUL;
+    trb->status |= len;
+}
+
+static void xue_trb_norm_set_ioc(struct xue_trb *trb)
+{
+    trb->ctrl |= 0x20;
+}
+
+/**
+ * Fields for Transfer Event TRBs (see section 6.4.2.1). Note that event
+ * TRBs are read-only from software
+ */
+static uint64_t xue_trb_tfre_ptr(const struct xue_trb *trb)
+{
+    return trb->params;
+}
+
+static uint32_t xue_trb_tfre_cc(const struct xue_trb *trb)
+{
+    return trb->status >> 24;
+}
+
+/* Fields for link TRBs (section 6.4.4.1) */
+static void xue_trb_link_set_rsp(struct xue_trb *trb, uint64_t rsp)
+{
+    trb->params = rsp;
+}
+
+static void xue_trb_link_set_tc(struct xue_trb *trb)
+{
+    trb->ctrl |= 0x2;
+}
+
+static void xue_trb_ring_init(const struct xue *xue,
+                              struct xue_trb_ring *ring, int producer,
+                              int doorbell)
+{
+    memset(ring->trb, 0, XUE_TRB_RING_CAP * sizeof(ring->trb[0]));
+
+    ring->enq = 0;
+    ring->deq = 0;
+    ring->cyc = 1;
+    ring->db = (uint8_t)doorbell;
+
+    /*
+     * Producer implies transfer ring, so we have to place a
+     * link TRB at the end that points back to trb[0]
+     */
+    if ( producer )
+    {
+        struct xue_trb *trb = &ring->trb[XUE_TRB_RING_CAP - 1];
+        xue_trb_set_type(trb, xue_trb_link);
+        xue_trb_link_set_tc(trb);
+        xue_trb_link_set_rsp(trb, virt_to_maddr(ring->trb));
+    }
+}
+
+static int xue_trb_ring_full(const struct xue_trb_ring *ring)
+{
+    return ((ring->enq + 1) & (XUE_TRB_RING_CAP - 1)) == ring->deq;
+}
+
+static int xue_work_ring_full(const struct xue_work_ring *ring)
+{
+    return ((ring->enq + 1) & (XUE_WORK_RING_CAP - 1)) == ring->deq;
+}
+
+static uint64_t xue_work_ring_size(const struct xue_work_ring *ring)
+{
+    if ( ring->enq >= ring->deq )
+        return ring->enq - ring->deq;
+
+    return XUE_WORK_RING_CAP - ring->deq + ring->enq;
+}
+
+static void xue_push_trb(struct xue *xue, struct xue_trb_ring *ring,
+                         uint64_t dma, uint64_t len)
+{
+    struct xue_trb trb;
+
+    if ( ring->enq == XUE_TRB_RING_CAP - 1 )
+    {
+        /*
+         * We have to make sure the xHC processes the link TRB in order
+         * for wrap-around to work properly. We do this by marking the
+         * xHC as owner of the link TRB by setting the TRB's cycle bit
+         * (just like with normal TRBs).
+         */
+        struct xue_trb *link = &ring->trb[ring->enq];
+        xue_trb_set_cyc(link, ring->cyc);
+
+        ring->enq = 0;
+        ring->cyc ^= 1;
+    }
+
+    trb.params = 0;
+    trb.status = 0;
+    trb.ctrl = 0;
+
+    xue_trb_set_type(&trb, xue_trb_norm);
+    xue_trb_set_cyc(&trb, ring->cyc);
+
+    xue_trb_norm_set_buf(&trb, dma);
+    xue_trb_norm_set_len(&trb, (uint32_t)len);
+    xue_trb_norm_set_ioc(&trb);
+
+    ring->trb[ring->enq++] = trb;
+    xue_flush_range(xue, &ring->trb[ring->enq - 1], sizeof(trb));
+}
+
+static int64_t xue_push_work(struct xue *xue, struct xue_work_ring *ring,
+                             const char *buf, int64_t len)
+{
+    int64_t i = 0;
+    uint32_t start = ring->enq;
+    uint32_t end = 0;
+
+    while ( !xue_work_ring_full(ring) && i < len )
+    {
+        ring->buf[ring->enq] = buf[i++];
+        ring->enq = (ring->enq + 1) & (XUE_WORK_RING_CAP - 1);
+    }
+
+    end = ring->enq;
+
+    if ( end > start )
+        xue_flush_range(xue, &ring->buf[start], end - start);
+    else if ( i > 0 )
+    {
+        xue_flush_range(xue, &ring->buf[start], XUE_WORK_RING_CAP - start);
+        xue_flush_range(xue, &ring->buf[0], end);
+    }
+
+    return i;
+}
+
+/*
+ * Note that if IN transfer support is added, then this
+ * will need to be changed; it assumes an OUT transfer ring only
+ */
+static void xue_pop_events(struct xue *xue)
+{
+    const int trb_shift = 4;
+
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+    struct xue_trb_ring *er = &xue->dbc_ering;
+    struct xue_trb_ring *tr = &xue->dbc_oring;
+    struct xue_trb *event = &er->trb[er->deq];
+    uint64_t erdp = reg->erdp;
+
+    rmb();
+
+    while ( xue_trb_cyc(event) == er->cyc ) {
+        switch (xue_trb_type(event)) {
+        case xue_trb_tfre:
+            if ( xue_trb_tfre_cc(event) != xue_trb_cc_success )
+            {
+                xue_alert("tfre error cc: %u\n", xue_trb_tfre_cc(event));
+                break;
+            }
+            tr->deq =
+                (xue_trb_tfre_ptr(event) & XUE_TRB_RING_MASK) >> trb_shift;
+            break;
+        case xue_trb_psce:
+            reg->portsc |= (XUE_PSC_ACK_MASK & reg->portsc);
+            break;
+        default:
+            break;
+        }
+
+        er->cyc = (er->deq == XUE_TRB_RING_CAP - 1) ? er->cyc ^ 1 : er->cyc;
+        er->deq = (er->deq + 1) & (XUE_TRB_RING_CAP - 1);
+        event = &er->trb[er->deq];
+    }
+
+    erdp &= ~XUE_TRB_RING_MASK;
+    erdp |= (er->deq << trb_shift);
+    wmb();
+    reg->erdp = erdp;
+}
+
+/**
+ * xue_init_ep
+ *
+ * Initializes the endpoint as specified in sections 7.6.3.2 and 7.6.9.2.
+ * Each endpoint is Bulk, so the MaxPStreams, LSA, HID, CErr, FE,
+ * Interval, Mult, and Max ESIT Payload fields are all 0.
+ *
+ * Max packet size: 1024
+ * Max burst size: debug mbs (from dbc_reg->ctrl register)
+ * EP type: 2 for OUT bulk, 6 for IN bulk
+ * TR dequeue ptr: physical base address of transfer ring
+ * Avg TRB length: software defined (see 4.14.1.1 for suggested defaults)
+ */
+static void xue_init_ep(uint32_t *ep, uint64_t mbs, uint32_t type,
+                        uint64_t ring_dma)
+{
+    memset(ep, 0, XUE_CTX_BYTES);
+
+    ep[1] = (1024 << 16) | ((uint32_t)mbs << 8) | (type << 3);
+    ep[2] = (ring_dma & 0xFFFFFFFF) | 1;
+    ep[3] = ring_dma >> 32;
+    ep[4] = 3 * 1024;
+}
+
+/* Initialize the DbC info with USB string descriptor addresses */
+static void xue_init_strings(struct xue *xue, uint32_t *info)
+{
+    uint64_t *sda;
+
+    /* clang-format off */
+    const char strings[] = {
+        6,  3, 9, 0, 4, 0,
+        8,  3, 'A', 0, 'I', 0, 'S', 0,
+        30, 3, 'X', 0, 'u', 0, 'e', 0, ' ', 0,
+               'D', 0, 'b', 0, 'C', 0, ' ', 0,
+               'D', 0, 'e', 0, 'v', 0, 'i', 0, 'c', 0, 'e', 0,
+        4, 3, '0', 0
+    };
+    /* clang-format on */
+
+    memcpy(xue->dbc_str, strings, sizeof(strings));
+
+    sda = (uint64_t *)&info[0];
+    sda[0] = virt_to_maddr(xue->dbc_str);
+    sda[1] = sda[0] + 6;
+    sda[2] = sda[0] + 6 + 8;
+    sda[3] = sda[0] + 6 + 8 + 30;
+    info[8] = (4 << 24) | (30 << 16) | (8 << 8) | 6;
+}
+
+static void xue_dump(struct xue *xue)
+{
+    struct xue_dbc_reg *r = xue->dbc_reg;
+
+    xue_debug("XUE DUMP:\n");
+    xue_debug("    ctrl: 0x%x stat: 0x%x psc: 0x%x\n", r->ctrl, r->st,
+              r->portsc);
+    xue_debug("    id: 0x%x, db: 0x%x\n", r->id, r->db);
+#if defined(__XEN__) || defined(VMM)
+    xue_debug("    erstsz: %u, erstba: 0x%lx\n", r->erstsz, r->erstba);
+    xue_debug("    erdp: 0x%lx, cp: 0x%lx\n", r->erdp, r->cp);
+#else
+    xue_debug("    erstsz: %u, erstba: 0x%llx\n", r->erstsz, r->erstba);
+    xue_debug("    erdp: 0x%llx, cp: 0x%llx\n", r->erdp, r->cp);
+#endif
+    xue_debug("    ddi1: 0x%x, ddi2: 0x%x\n", r->ddi1, r->ddi2);
+    xue_debug("    erstba == virt_to_dma(erst): %d\n",
+              r->erstba == virt_to_maddr(xue->dbc_erst));
+    xue_debug("    erdp == virt_to_dma(erst[0].base): %d\n",
+              r->erdp == xue->dbc_erst[0].base);
+    xue_debug("    cp == virt_to_dma(ctx): %d\n",
+              r->cp == virt_to_maddr(xue->dbc_ctx));
+}
+
+static void xue_enable_dbc(struct xue *xue)
+{
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+
+    wmb();
+    reg->ctrl |= (1UL << XUE_CTRL_DCE);
+    wmb();
+
+    while ( (reg->ctrl & (1UL << XUE_CTRL_DCE)) == 0 )
+        xue_sys_pause();
+
+    wmb();
+    reg->portsc |= (1UL << XUE_PSC_PED);
+    wmb();
+
+    while ( (reg->ctrl & (1UL << XUE_CTRL_DCR)) == 0 )
+        xue_sys_pause();
+}
+
+static void xue_disable_dbc(struct xue *xue)
+{
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+
+    reg->portsc &= ~(1UL << XUE_PSC_PED);
+    wmb();
+    reg->ctrl &= ~(1UL << XUE_CTRL_DCE);
+
+    while ( reg->ctrl & (1UL << XUE_CTRL_DCE) )
+        xue_sys_pause();
+}
+
+static int xue_init_dbc(struct xue *xue)
+{
+    uint64_t erdp = 0;
+    uint64_t out = 0;
+    uint64_t in = 0;
+    uint64_t mbs = 0;
+    struct xue_dbc_reg *reg = xue_find_dbc(xue);
+
+    if ( !reg )
+        return 0;
+
+    xue->dbc_reg = reg;
+    xue_disable_dbc(xue);
+
+    xue_trb_ring_init(xue, &xue->dbc_ering, 0, XUE_DB_INVAL);
+    xue_trb_ring_init(xue, &xue->dbc_oring, 1, XUE_DB_OUT);
+    xue_trb_ring_init(xue, &xue->dbc_iring, 1, XUE_DB_IN);
+
+    erdp = virt_to_maddr(xue->dbc_ering.trb);
+    if ( !erdp )
+        return 0;
+
+    memset(xue->dbc_erst, 0, sizeof(*xue->dbc_erst));
+    xue->dbc_erst->base = erdp;
+    xue->dbc_erst->size = XUE_TRB_RING_CAP;
+
+    mbs = (reg->ctrl & 0xFF0000) >> 16;
+    out = virt_to_maddr(xue->dbc_oring.trb);
+    in = virt_to_maddr(xue->dbc_iring.trb);
+
+    memset(xue->dbc_ctx, 0, sizeof(*xue->dbc_ctx));
+    xue_init_strings(xue, xue->dbc_ctx->info);
+    xue_init_ep(xue->dbc_ctx->ep_out, mbs, xue_ep_bulk_out, out);
+    xue_init_ep(xue->dbc_ctx->ep_in, mbs, xue_ep_bulk_in, in);
+
+    reg->erstsz = 1;
+    reg->erstba = virt_to_maddr(xue->dbc_erst);
+    reg->erdp = erdp;
+    reg->cp = virt_to_maddr(xue->dbc_ctx);
+    reg->ddi1 = (XUE_DBC_VENDOR << 16) | XUE_DBC_PROTOCOL;
+    reg->ddi2 = XUE_DBC_PRODUCT;
+
+    xue_flush_range(xue, xue->dbc_ctx, sizeof(*xue->dbc_ctx));
+    xue_flush_range(xue, xue->dbc_erst, sizeof(*xue->dbc_erst));
+    xue_flush_range(xue, xue->dbc_ering.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_oring.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_iring.trb, XUE_TRB_RING_BYTES);
+    xue_flush_range(xue, xue->dbc_owork.buf, XUE_WORK_RING_BYTES);
+
+    return 1;
+}
+
+static void xue_init_work_ring(struct xue *xue,
+                               struct xue_work_ring *wrk)
+{
+    wrk->enq = 0;
+    wrk->deq = 0;
+    wrk->dma = virt_to_maddr(wrk->buf);
+}
+
+/* @endcond */
+
+/**
+ * Initialize the DbC and enable it for transfers. First map in the DbC
+ * registers from the host controller's MMIO region. Then allocate and map
+ * DMA for the event and transfer rings. Finally, enable the DbC for
+ * the host to enumerate. On success, the DbC is ready to send packets.
+ *
+ * @param xue the xue to open (!= NULL)
+ * @return 1 iff xue_open succeeded
+ */
+static int64_t xue_open(struct xue *xue)
+{
+    if ( !xue )
+        return 0;
+
+    if ( !xue_init_xhc(xue) )
+        return 0;
+
+    if ( !xue_init_dbc(xue) )
+        return 0;
+
+    xue_init_work_ring(xue, &xue->dbc_owork);
+    xue_enable_dbc(xue);
+    xue->open = 1;
+
+    return 1;
+}
+
+/**
+ * Commit the pending transfer TRBs to the DbC. This notifies
+ * the DbC of any previously-queued data on the work ring and
+ * rings the doorbell.
+ *
+ * @param xue the xue to flush
+ * @param trb the ring containing the TRBs to transfer
+ * @param wrk the work ring containing data to be flushed
+ */
+static void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
+                      struct xue_work_ring *wrk)
+{
+    struct xue_dbc_reg *reg = xue->dbc_reg;
+    uint32_t db = (reg->db & 0xFFFF00FF) | (trb->db << 8);
+
+    if ( xue->open && !(reg->ctrl & (1UL << XUE_CTRL_DCE)) )
+    {
+        if ( !xue_init_dbc(xue) )
+            return;
+
+        xue_init_work_ring(xue, &xue->dbc_owork);
+        xue_enable_dbc(xue);
+    }
+
+    xue_pop_events(xue);
+
+    if ( !(reg->ctrl & (1UL << XUE_CTRL_DCR)) )
+    {
+        xue_error("DbC not configured");
+        return;
+    }
+
+    if ( reg->ctrl & (1UL << XUE_CTRL_DRC) )
+    {
+        reg->ctrl |= (1UL << XUE_CTRL_DRC);
+        reg->portsc |= (1UL << XUE_PSC_PED);
+        wmb();
+    }
+
+    if ( xue_trb_ring_full(trb) )
+        return;
+
+    if ( wrk->enq == wrk->deq )
+        return;
+    else if ( wrk->enq > wrk->deq )
+    {
+        xue_push_trb(xue, trb, wrk->dma + wrk->deq, wrk->enq - wrk->deq);
+        wrk->deq = wrk->enq;
+    }
+    else
+    {
+        xue_push_trb(xue, trb, wrk->dma + wrk->deq,
+                     XUE_WORK_RING_CAP - wrk->deq);
+        wrk->deq = 0;
+        if ( wrk->enq > 0 && !xue_trb_ring_full(trb) )
+        {
+            xue_push_trb(xue, trb, wrk->dma, wrk->enq);
+            wrk->deq = wrk->enq;
+        }
+    }
+
+    wmb();
+    reg->db = db;
+}
+
+/**
+ * Queue a single character to the DbC. A transfer TRB will be created
+ * if the character is a newline and the DbC will be notified that data is
+ * available for writing to the debug host.
+ *
+ * @param xue the xue to write to
+ * @param c the character to write
+ * @return the number of bytes written
+ */
+static int64_t xue_putc(struct xue *xue, char c)
+{
+    if ( !xue_push_work(xue, &xue->dbc_owork, &c, 1) )
+        return 0;
+
+    if ( c == '\n' )
+        xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+
+    return 1;
+}
+
+struct xue_uart {
+    struct xue xue;
+    struct timer timer;
+    spinlock_t *lock;
+};
+
+static struct xue_uart xue_uart;
+
+static void cf_check xue_uart_poll(void *data)
+{
+    struct serial_port *port = data;
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+    unsigned long flags = 0;
+
+    if ( spin_trylock_irqsave(&port->tx_lock, flags) )
+    {
+        xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+        spin_unlock_irqrestore(&port->tx_lock, flags);
+    }
+
+    serial_tx_interrupt(port, guest_cpu_user_regs());
+    set_timer(&uart->timer, NOW() + MICROSECS(XUE_POLL_INTERVAL));
+}
+
+static void __init cf_check xue_uart_init_preirq(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+    uart->lock = &port->tx_lock;
+}
+
+static void __init cf_check xue_uart_init_postirq(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+
+    serial_async_transmit(port);
+    init_timer(&uart->timer, xue_uart_poll, port, 0);
+    set_timer(&uart->timer, NOW() + MILLISECS(1));
+}
+
+static int cf_check xue_uart_tx_ready(struct serial_port *port)
+{
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+
+    return XUE_WORK_RING_CAP - xue_work_ring_size(&xue->dbc_owork);
+}
+
+static void cf_check xue_uart_putc(struct serial_port *port, char c)
+{
+    struct xue_uart *uart = port->uart;
+    xue_putc(&uart->xue, c);
+}
+
+static void cf_check xue_uart_flush(struct serial_port *port)
+{
+    s_time_t goal;
+    struct xue_uart *uart = port->uart;
+    struct xue *xue = &uart->xue;
+
+    xue_flush(xue, &xue->dbc_oring, &xue->dbc_owork);
+
+    goal = NOW() + MICROSECS(XUE_POLL_INTERVAL);
+    if ( uart->timer.expires > goal )
+        set_timer(&uart->timer, goal);
+}
+
+static struct uart_driver xue_uart_driver = {
+    .init_preirq = xue_uart_init_preirq,
+    .init_postirq = xue_uart_init_postirq,
+    .endboot = NULL,
+    .suspend = NULL,
+    .resume = NULL,
+    .tx_ready = xue_uart_tx_ready,
+    .putc = xue_uart_putc,
+    .flush = xue_uart_flush,
+    .getc = NULL
+};
+
+static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static struct xue_erst_segment erst __aligned(64);
+static struct xue_dbc_ctx ctx __aligned(64);
+static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
+static char str_buf[XUE_PAGE_SIZE] __aligned(64);
+static char __initdata opt_dbgp[30];
+
+string_param("dbgp", opt_dbgp);
+
+void __init xue_uart_init(void)
+{
+    struct xue_uart *uart = &xue_uart;
+    struct xue *xue = &uart->xue;
+
+    if ( strncmp(opt_dbgp, "xue", 3) )
+        return;
+
+    memset(xue, 0, sizeof(*xue));
+
+    xue->dbc_ctx = &ctx;
+    xue->dbc_erst = &erst;
+    xue->dbc_ering.trb = evt_trb;
+    xue->dbc_oring.trb = out_trb;
+    xue->dbc_iring.trb = in_trb;
+    xue->dbc_owork.buf = wrk_buf;
+    xue->dbc_str = str_buf;
+
+    xue_open(xue);
+
+    serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
+}
+
+void xue_uart_dump(void)
+{
+    struct xue_uart *uart = &xue_uart;
+    struct xue *xue = &uart->xue;
+
+    xue_dump(xue);
+}
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index 6548f0b0a9cf..803e24cda0b6 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -171,6 +171,7 @@ struct ns16550_defaults {
 };
 void ns16550_init(int index, struct ns16550_defaults *defaults);
 void ehci_dbgp_init(void);
+void xue_uart_init(void);
 
 void arm_uart_init(void);
 
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:19 2022
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 03/10] xue: add support for selecting specific xhci
Date: Tue,  7 Jun 2022 16:30:09 +0200
Message-Id: <b5466e495943210adc48c754df98862ae49ee489.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Handle parameters similarly to dbgp=ehci.

Implement this by not resetting xhc_cf8 again in xue_init_xhc(), but
using the value found there if non-zero. Additionally, add xue->xhc_num to
select the n-th matching controller.
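
For example, the resulting syntax looks like this (the PCI address below is
hypothetical; the device must be on segment 0):

```
dbgp=xue            # use the first xHC found on bus 0
dbgp=xue1           # use the second xHC found on bus 0
dbgp=xue@pci0:14.0  # use the xHC at PCI 00:14.0
```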

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 docs/misc/xen-command-line.pandoc |  5 +++-
 xen/drivers/char/xue.c            | 56 ++++++++++++++++++++++++--------
 2 files changed, 47 insertions(+), 14 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 881fe409ac76..37a564c2386f 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -721,10 +721,15 @@ Available alternatives, with their meaning, are:
 
 ### dbgp
 > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
+> `= xue[ <integer> | @pci<bus>:<slot>.<func> ]`
 
 Specify the USB controller to use, either by instance number (when going
 over the PCI busses sequentially) or by PCI device (must be on segment 0).
 
+Use `ehci` for an EHCI debug port, or `xue` for the xHCI Debug Capability.
+The xue driver will wait indefinitely for the debug host to connect - make
+sure the cable is connected.
+
 ### debug_stack_lines
 > `= <integer>`
 
diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index a9ba25d9d07e..b253426a95f8 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -204,6 +204,7 @@ struct xue {
     void *xhc_mmio;
 
     int open;
+    int xhc_num; /* look for n-th xhc */
 };
 
 static void xue_sys_pause(void)
@@ -252,24 +253,34 @@ static int xue_init_xhc(struct xue *xue)
     uint64_t bar1;
     uint64_t devfn;
 
-    /*
-     * Search PCI bus 0 for the xHC. All the host controllers supported so far
-     * are part of the chipset and are on bus 0.
-     */
-    for ( devfn = 0; devfn < 256; devfn++ ) {
-        uint32_t dev = (devfn & 0xF8) >> 3;
-        uint32_t fun = devfn & 0x07;
-        pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);
-        uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
-
-        if ( hdr == 0 || hdr == 0x80 )
+    if ( xue->sbdf.sbdf == 0 )
+    {
+        /*
+         * Search PCI bus 0 for the xHC. All the host controllers supported so far
+         * are part of the chipset and are on bus 0.
+         */
+        for ( devfn = 0; devfn < 256; devfn++ )
         {
-            if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
+            uint32_t dev = (devfn & 0xF8) >> 3;
+            uint32_t fun = devfn & 0x07;
+            pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);
+            uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
+
+            if ( hdr == 0 || hdr == 0x80 )
             {
-                xue->sbdf = sbdf;
-                break;
+                if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
+                {
+                    if ( xue->xhc_num-- )
+                        continue;
+                    xue->sbdf = sbdf;
+                    break;
+                }
             }
         }
+    } else {
+        /* Verify that the selected device really is an xHC */
+        if ( (pci_conf_read32(xue->sbdf, PCI_CLASS_REVISION) >> 8) != XUE_XHC_CLASSC )
+            xue->sbdf.sbdf = 0;
     }
 
     if ( !xue->sbdf.sbdf )
@@ -999,12 +1010,29 @@ void __init xue_uart_init(void)
 {
     struct xue_uart *uart = &xue_uart;
     struct xue *xue = &uart->xue;
+    const char *e;
 
     if ( strncmp(opt_dbgp, "xue", 3) )
         return;
 
     memset(xue, 0, sizeof(*xue));
 
+    if ( isdigit(opt_dbgp[3]) || !opt_dbgp[3] )
+    {
+        if ( opt_dbgp[3] )
+            xue->xhc_num = simple_strtoul(opt_dbgp + 3, &e, 10);
+    }
+    else if ( strncmp(opt_dbgp + 3, "@pci", 4) == 0 )
+    {
+        unsigned int bus, slot, func;
+
+        e = parse_pci(opt_dbgp + 7, NULL, &bus, &slot, &func);
+        if ( !e || *e )
+            return;
+
+        xue->sbdf = PCI_SBDF(0, bus, slot, func);
+    }
+
     xue->dbc_ctx = &ctx;
     xue->dbc_erst = &erst;
     xue->dbc_ering.trb = evt_trb;
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:20 2022
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 04/10] ehci-dbgp: fix selecting n-th ehci controller
Date: Tue,  7 Jun 2022 16:30:10 +0200
Message-Id: <e7d63b72873de3791e26a6551fef7132fcc9f241.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The ehci<n> number was parsed but ignored.
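
With the fix, the parsed index is actually used; for example (hypothetical
setup with multiple controllers):

```
dbgp=ehci1   # selects the second EHCI controller instead of always the first
```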

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/ehci-dbgp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/ehci-dbgp.c b/xen/drivers/char/ehci-dbgp.c
index 16c8ff394d5c..92c588ec0aa3 100644
--- a/xen/drivers/char/ehci-dbgp.c
+++ b/xen/drivers/char/ehci-dbgp.c
@@ -1480,7 +1480,7 @@ void __init ehci_dbgp_init(void)
         unsigned int num = 0;
 
         if ( opt_dbgp[4] )
-            simple_strtoul(opt_dbgp + 4, &e, 10);
+            num = simple_strtoul(opt_dbgp + 4, &e, 10);
 
         dbgp->cap = find_dbgp(dbgp, num);
         if ( !dbgp->cap )
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:22 2022
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 05/10] console: support multiple serial console simultaneously
Date: Tue,  7 Jun 2022 16:30:11 +0200
Message-Id: <e13ee6e75e41da9468c3c38a18ec265879985976.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Previously only one serial console was supported at a time: using
console=com1,dbgp,vga silently ignored all but the last serial console
listed (in this case, only dbgp and vga were active).

Fix this by storing an array of sercon_handle entries instead of a
single one, with up to MAX_SERCONS entries. The value of MAX_SERCONS (4)
is arbitrary, inspired by the number of SERHND_IDX values.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/console.c | 58 ++++++++++++++++++++++++++++++---------
 1 file changed, 45 insertions(+), 13 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index f9937c5134c0..44b703296487 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -113,7 +113,9 @@ static char *__read_mostly conring = _conring;
 static uint32_t __read_mostly conring_size = _CONRING_SIZE;
 static uint32_t conringc, conringp;
 
-static int __read_mostly sercon_handle = -1;
+#define MAX_SERCONS 4
+static int __read_mostly sercon_handle[MAX_SERCONS];
+static int __read_mostly nr_sercon_handle = 0;
 
 #ifdef CONFIG_X86
 /* Tristate: 0 disabled, 1 user enabled, -1 default enabled */
@@ -395,9 +397,17 @@ static unsigned int serial_rx_cons, serial_rx_prod;
 
 static void (*serial_steal_fn)(const char *, size_t nr) = early_puts;
 
+/* Redirect any console output to *fn*, if *handle* is configured as a console. */
 int console_steal(int handle, void (*fn)(const char *, size_t nr))
 {
-    if ( (handle == -1) || (handle != sercon_handle) )
+    int i;
+
+    if ( handle == -1 )
+        return 0;
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        if ( handle == sercon_handle[i] )
+            break;
+    if ( nr_sercon_handle && i == nr_sercon_handle )
         return 0;
 
     if ( serial_steal_fn != NULL )
@@ -415,10 +425,13 @@ void console_giveback(int id)
 
 void console_serial_puts(const char *s, size_t nr)
 {
+    int i;
+
     if ( serial_steal_fn != NULL )
         serial_steal_fn(s, nr);
     else
-        serial_puts(sercon_handle, s, nr);
+        for ( i = 0; i < nr_sercon_handle; i++ )
+            serial_puts(sercon_handle[i], s, nr);
 
     /* Copy all serial output into PV console */
     pv_console_puts(s, nr);
@@ -956,7 +969,7 @@ void guest_printk(const struct domain *d, const char *fmt, ...)
 void __init console_init_preirq(void)
 {
     char *p;
-    int sh;
+    int sh, i;
 
     serial_init_preirq();
 
@@ -977,7 +990,8 @@ void __init console_init_preirq(void)
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
         {
-            sercon_handle = sh;
+            if ( nr_sercon_handle < MAX_SERCONS )
+                sercon_handle[nr_sercon_handle++] = sh;
             serial_steal_fn = NULL;
         }
         else
@@ -996,7 +1010,8 @@ void __init console_init_preirq(void)
         opt_console_xen = 0;
 #endif
 
-    serial_set_rx_handler(sercon_handle, serial_rx);
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_set_rx_handler(sercon_handle[i], serial_rx);
     pv_console_set_rx_handler(serial_rx);
 
     /* HELLO WORLD --- start-of-day banner text. */
@@ -1014,7 +1029,8 @@ void __init console_init_preirq(void)
 
     if ( opt_sync_console )
     {
-        serial_start_sync(sercon_handle);
+        for ( i = 0; i < nr_sercon_handle; i++ )
+            serial_start_sync(sercon_handle[i]);
         add_taint(TAINT_SYNC_CONSOLE);
         printk("Console output is synchronous.\n");
         warning_add(warning_sync_console);
@@ -1121,13 +1137,19 @@ int __init console_has(const char *device)
 
 void console_start_log_everything(void)
 {
-    serial_start_log_everything(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_start_log_everything(sercon_handle[i]);
     atomic_inc(&print_everything);
 }
 
 void console_end_log_everything(void)
 {
-    serial_end_log_everything(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_end_log_everything(sercon_handle[i]);
     atomic_dec(&print_everything);
 }
 
@@ -1149,23 +1171,32 @@ void console_unlock_recursive_irqrestore(unsigned long flags)
 
 void console_force_unlock(void)
 {
+    int i;
+
     watchdog_disable();
     spin_debug_disable();
     spin_lock_init(&console_lock);
-    serial_force_unlock(sercon_handle);
+    for ( i = 0 ; i < nr_sercon_handle ; i++ )
+        serial_force_unlock(sercon_handle[i]);
     console_locks_busted = 1;
     console_start_sync();
 }
 
 void console_start_sync(void)
 {
+    int i;
+
     atomic_inc(&print_everything);
-    serial_start_sync(sercon_handle);
+    for ( i = 0 ; i < nr_sercon_handle ; i++ )
+        serial_start_sync(sercon_handle[i]);
 }
 
 void console_end_sync(void)
 {
-    serial_end_sync(sercon_handle);
+    int i;
+
+    for ( i = 0; i < nr_sercon_handle; i++ )
+        serial_end_sync(sercon_handle[i]);
     atomic_dec(&print_everything);
 }
 
@@ -1291,7 +1322,8 @@ static int suspend_steal_id;
 
 int console_suspend(void)
 {
-    suspend_steal_id = console_steal(sercon_handle, suspend_steal_fn);
+    if ( nr_sercon_handle )
+        suspend_steal_id = console_steal(sercon_handle[0], suspend_steal_fn);
     serial_suspend();
     return 0;
 }
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343351.568729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEu-0007l7-UN; Tue, 07 Jun 2022 14:31:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343351.568729; Tue, 07 Jun 2022 14:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEu-0007kc-Ng; Tue, 07 Jun 2022 14:31:24 +0000
Received: by outflank-mailman (input) for mailman id 343351;
 Tue, 07 Jun 2022 14:31:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEs-00061K-Ln
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:22 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8016fee5-e66e-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 16:31:21 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id E625E5C01DB;
 Tue,  7 Jun 2022 10:31:20 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 07 Jun 2022 10:31:20 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:19 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8016fee5-e66e-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612280; x=1654698680; bh=9Z09zPwAMV
	UfGBsTlLDLpiIwvOBt3luu8w7C4olNcaY=; b=KwbseHJmSVm51Z4o4plrUu7Ohp
	WVSyTi0N9mwuNZ5H4GKcEGFy99iQyEg3SqnjTjmz+HXBqcmZrLBh9bdsVIAbd3mM
	jqiGXk+YgoY1X999yxsd10+7zuAMz+nygATXfwDxn22y8YetIo0WSxKrdnQiJU0F
	GuTfm5Kr8xbjSq9HfoUWsm27UjPOs4vtkXVD1WmASb7240SNJWME6kRvJlupuVJR
	1hVkN22/N3rGtwX/leE4OyTS7x9jDAvXU9OXBzaKzu7E8mHb0bqyMCi3Kgq60ivX
	paFC194VSGV3S9oQmdNsV3yKRBfHb2ukJ3hr1RHX81Bk8yet27T8vGsmNlZA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612280; x=
	1654698680; bh=9Z09zPwAMVUfGBsTlLDLpiIwvOBt3luu8w7C4olNcaY=; b=Y
	OhGYlkT512yD3GpdI3c0a8g3LpUsSWdrGUyjXKpl92I+gf2PypmHCAaIOPyOXZ5k
	esJ+pnxpMC6Rjgedvb/0ej8yZw4eUfu8fFxyEmylfElmqrN3VHOnB13RycpbnCz9
	wzqSxIK1LI4TWHQ8UUbgJnAJgzO6jbgaRx5FqQcA06Pyd9BZS/abQlZWUzXS27Yv
	xqMRhhyQ0+Q6hZG/CUR8bn8hspxbTJAS7Dg+4g4fCrMqcl5dxWt63Uzlt8cgnM7h
	FtXmlqvXtD9SULnuPbuB9FljoKLpuH6uZr0N4cg3g/Rl5EZ3o54JQML6R4PWLm4O
	HQgP5LNfJS6/5ONAcDR9w==
X-ME-Sender: <xms:OGGfYsBWBUWN1OagrZucFB29kr0k-dAjBoYPzL8fE_7qh4PvL0Z-lw>
    <xme:OGGfYuiLLPBE7cTUCr-v4vT74GSCnvNBWU4qq_UO5nLAm3lzOieq-NuYTRWj4O_JY
    3eS_JcwpN74yA>
X-ME-Received: <xmr:OGGfYvlZ6mkMw3zq908jtmIAgaq1GwWowVZ1hTqHFV8KboM2TsmNlDEqWMwcq9ZKSm6J4rfIg1pBdSVdffnTePwr53p-EtoOJnrDh07Nnmb8siDhc88>
X-ME-Proxy: <xmx:OGGfYixxDcoh7wkTuKFpcnRGGk005AD1p6pffliHYo3dn1fvrIdkVw>
    <xmx:OGGfYhR8OtTo_nopHsz-w9NBa-0RUInEMfw-lja4-JxZMuqu4nfLbA>
    <xmx:OGGfYtZotyPdShqATimJwHbBnE5bJ9OKtPszLGyXUI1C0ryMeHn6fw>
    <xmx:OGGfYsd3Scal2QVaG3NzdfQ6t3cZ4LC1F_OzBhINmmeds8A-a_cIBw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v1 06/10] IOMMU: add common API for device reserved memory
Date: Tue,  7 Jun 2022 16:30:12 +0200
Message-Id: <9a5b2f380244c0932b3c2c9ada7346a4d6a0433d.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add an API similar to the rmrr= and ivmd= command line arguments, but
in common code. This will allow drivers to register reserved memory
regardless of the IOMMU vendor.
The direct motivation for this API is the xhci-dbc console driver (aka
xue), which needs to use DMA. But a future change may unify the command
line arguments for user-supplied reserved memory, and the API may be
useful for other drivers too.

This commit only introduces the API; subsequent patches will plug it in
at the appropriate places. The reserved memory ranges need to be saved
locally because, at the point when they are collected, Xen doesn't yet
know which IOMMU driver will be used.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/iommu.c | 40 ++++++++++++++++++++++++++++++++++-
 xen/include/xen/iommu.h         | 11 +++++++++-
 2 files changed, 51 insertions(+)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9393d987c788..5c4162912359 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -654,6 +654,46 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
+#define MAX_EXTRA_RESERVED_RANGES 20
+struct extra_reserved_range {
+    xen_pfn_t start;
+    xen_ulong_t nr;
+    u32 sbdf;
+};
+static unsigned int __initdata nr_extra_reserved_ranges;
+static struct extra_reserved_range __initdata extra_reserved_ranges[MAX_EXTRA_RESERVED_RANGES];
+
+int iommu_add_extra_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr, u32 sbdf)
+{
+    unsigned int idx;
+
+    if ( nr_extra_reserved_ranges >= MAX_EXTRA_RESERVED_RANGES )
+        return -ENOMEM;
+
+    idx = nr_extra_reserved_ranges++;
+    extra_reserved_ranges[idx].start = start;
+    extra_reserved_ranges[idx].nr = nr;
+    extra_reserved_ranges[idx].sbdf = sbdf;
+    return 0;
+}
+
+int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
+{
+    unsigned int idx;
+    int ret;
+
+    for ( idx = 0; idx < nr_extra_reserved_ranges; idx++ )
+    {
+        ret = func(extra_reserved_ranges[idx].start,
+                   extra_reserved_ranges[idx].nr,
+                   extra_reserved_ranges[idx].sbdf,
+                   ctxt);
+        if ( ret < 0 )
+            return ret;
+    }
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index e0f82712ed73..97424130247c 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -296,6 +296,17 @@ struct iommu_ops {
 #endif
 };
 
+/*
+ * To be called by Xen internally, to register extra RMRR/IVMD ranges.
+ * Needs to be called before IOMMU initialization.
+ */
+extern int iommu_add_extra_reserved_device_memory(xen_pfn_t start, xen_ulong_t nr, u32 sbdf);
+/*
+ * To be called by specific IOMMU driver during initialization,
+ * to fetch ranges registered with iommu_add_extra_reserved_device_memory().
+ */
+extern int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func, void *ctxt);
+
 #include <asm/iommu.h>
 
 #ifndef iommu_call
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343352.568735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEv-0007rT-K2; Tue, 07 Jun 2022 14:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343352.568735; Tue, 07 Jun 2022 14:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEv-0007qU-BL; Tue, 07 Jun 2022 14:31:25 +0000
Received: by outflank-mailman (input) for mailman id 343352;
 Tue, 07 Jun 2022 14:31:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEt-00061K-QX
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:24 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80a6abbd-e66e-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 16:31:22 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id E60835C01ED;
 Tue,  7 Jun 2022 10:31:21 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Tue, 07 Jun 2022 10:31:21 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:21 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80a6abbd-e66e-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612281; x=1654698681; bh=evEV0WFYEo
	SI8Y7yrgXC8kLje73u4yjd3P/XL+ndlpo=; b=ZxYqhXY/YIi8dTivWNGloIFnBH
	2MHNOBgTxoURpbsF7RR5y2q7U90JZLBqvdcqh4O3teqCu9y1LSwrut6//Huny2m+
	9jPve5Gqo4yq2Q5m5hNX0Q00BtdQ3bQfxDt668kykyxywRZNypqO+881n1Wwyib6
	ibCEr2yvOEYq92KZl00SBQ7jmRPAJ6Y3be61toBD9TQHK7sY81UsqlKDFWub4Cbq
	p6Kei2XDSdDa4cKcPZEgRW0NevnP2+VYEbxSzXF4wHgco3Mdp3pSKjtcQaciiD5E
	D9CT3g1J5B4x+UKH0gEY3kBcCl9xnOHnwS72LdC7Yl0axx/U/jrmHHUhB3dA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612281; x=
	1654698681; bh=evEV0WFYEoSI8Y7yrgXC8kLje73u4yjd3P/XL+ndlpo=; b=A
	Daxk6mZF3PB/WoZhtKAkeciH3EcZpYg8ujSfcM7SkjxHgwvH45LpAdqkF0D/b0vm
	oC+xq3nCSPe266xWIab7jHelDnQM1Bq/m6Ep5uyyk2wviH3BVpGNQSKBM3w0/09n
	7fK6A9kAnoQCuoCT+itsps73hUkbyd0n3jvQkd5ld53SnMGI5GNE6RFbxrjw9hCn
	oULQFugauGb9PEssT9Uxf5iwRvrbovere3iVQu6omZmJLJokesA8ifWsYZWEmKiN
	60j4My0Ihe+nKOvxUff6Vnl54TS9TLwQH9ILEi0f2zUHZQxLr+VPu9/ztxvhaKRj
	OTwLpBV09ogcsfxiQ8Xfw==
X-ME-Sender: <xms:OWGfYtGcVr8107Nu0M2bzG2GCoGmji0d88UVVwFpLG6ycdTbLUYRUg>
    <xme:OWGfYiWnZl_6Wx56HL6PkhMc89BYgpGmBqwarZKo44QgApI7pDrCCXXSQi4-bmKTC
    KPCkKvPW73ViA>
X-ME-Received: <xmr:OWGfYvLAXPxUDZZudAzz9On_4qs5JY4PqeMDYj6tKO8IgEYdH_B-2vfiqDFE-H8vy3kxuquZFA-DeoKhi2QgtQtErki5uqd41INFL3lCofpdbOENYGU>
X-ME-Proxy: <xmx:OWGfYjH_0sgtPvoMn9uvlDFdG_7g_55w6jufpheX_hztf5o7xMiITw>
    <xmx:OWGfYjVyjA6EOQG7v0FavNvwm7AXrh7N0APSuEzzLRMYYSOGcGljyQ>
    <xmx:OWGfYuPaKiDlexo2fbU9P98G6uYHkFl00uF98YII75DMt2EShXM7jg>
    <xmx:OWGfYkdKEn2JpVwXd9kj8fL6vlAxRQQECwFkW8m_0CwHoDD2QU7WFA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v1 07/10] IOMMU/VT-d: wire common device reserved memory API
Date: Tue,  7 Jun 2022 16:30:13 +0200
Message-Id: <80a37be8a8fd6ba2f401a45b5e073939ceaca2cf.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Re-use the rmrr= parameter handling code to handle common device
reserved memory ranges.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/vtd/dmar.c | 201 +++++++++++++++++-------------
 1 file changed, 119 insertions(+), 82 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 367304c8739c..661a182b08d9 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -861,111 +861,148 @@ static struct user_rmrr __initdata user_rmrrs[MAX_USER_RMRR];
 
 /* Macro for RMRR inclusive range formatting. */
 #define ERMRRU_FMT "[%lx-%lx]"
-#define ERMRRU_ARG(eru) eru.base_pfn, eru.end_pfn
+#define ERMRRU_ARG base_pfn, end_pfn
+
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    u32 *sbdf);
 
 static int __init add_user_rmrr(void)
 {
+    unsigned int i;
+    int ret;
+
+    for ( i = 0; i < nr_rmrr; i++ )
+    {
+        ret = add_one_user_rmrr(user_rmrrs[i].base_pfn,
+                                user_rmrrs[i].end_pfn,
+                                user_rmrrs[i].dev_count,
+                                user_rmrrs[i].sbdf);
+        if ( ret < 0 )
+            return ret;
+    }
+    return 0;
+}
+
+/* Returns 1 on success, 0 when ignoring and < 0 on error. */
+static int __init add_one_user_rmrr(unsigned long base_pfn,
+                                    unsigned long end_pfn,
+                                    unsigned int dev_count,
+                                    u32 *sbdf)
+{
     struct acpi_rmrr_unit *rmrr, *rmrru;
-    unsigned int idx, seg, i;
-    unsigned long base, end;
+    unsigned int idx, seg;
+    unsigned long base_iter;
     bool overlap;
 
-    for ( i = 0; i < nr_rmrr; i++ )
+    if ( iommu_verbose )
+        printk(XENLOG_DEBUG VTDPREFIX
+               "Adding RMRR for %d device ([0]: %#x) range "ERMRRU_FMT"\n",
+               dev_count, sbdf[0], ERMRRU_ARG);
+
+    if ( base_pfn > end_pfn )
+    {
+        printk(XENLOG_ERR VTDPREFIX
+               "Invalid RMRR Range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        return 0;
+    }
+
+    if ( (end_pfn - base_pfn) >= MAX_USER_RMRR_PAGES )
     {
-        base = user_rmrrs[i].base_pfn;
-        end = user_rmrrs[i].end_pfn;
+        printk(XENLOG_ERR VTDPREFIX
+               "RMRR range "ERMRRU_FMT" exceeds "\
+               __stringify(MAX_USER_RMRR_PAGES)" pages\n",
+               ERMRRU_ARG);
+        return 0;
+    }
 
-        if ( base > end )
+    overlap = false;
+    list_for_each_entry(rmrru, &acpi_rmrr_units, list)
+    {
+        if ( pfn_to_paddr(base_pfn) <= rmrru->end_address &&
+             rmrru->base_address <= pfn_to_paddr(end_pfn) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "Invalid RMRR Range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
+                   ERMRRU_ARG,
+                   paddr_to_pfn(rmrru->base_address),
+                   paddr_to_pfn(rmrru->end_address));
+            overlap = true;
+            break;
         }
+    }
+    /* Don't add overlapping RMRR. */
+    if ( overlap )
+        return 0;
 
-        if ( (end - base) >= MAX_USER_RMRR_PAGES )
+    base_iter = base_pfn;
+    do
+    {
+        if ( !mfn_valid(_mfn(base_iter)) )
         {
             printk(XENLOG_ERR VTDPREFIX
-                   "RMRR range "ERMRRU_FMT" exceeds "\
-                   __stringify(MAX_USER_RMRR_PAGES)" pages\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            continue;
+                   "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
+                   ERMRRU_ARG);
+            break;
         }
+    } while ( base_iter++ < end_pfn );
 
-        overlap = false;
-        list_for_each_entry(rmrru, &acpi_rmrr_units, list)
-        {
-            if ( pfn_to_paddr(base) <= rmrru->end_address &&
-                 rmrru->base_address <= pfn_to_paddr(end) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Overlapping RMRRs: "ERMRRU_FMT" and [%lx-%lx]\n",
-                       ERMRRU_ARG(user_rmrrs[i]),
-                       paddr_to_pfn(rmrru->base_address),
-                       paddr_to_pfn(rmrru->end_address));
-                overlap = true;
-                break;
-            }
-        }
-        /* Don't add overlapping RMRR. */
-        if ( overlap )
-            continue;
+    /* Invalid pfn in range as the loop ended before end_pfn was reached. */
+    if ( base_iter <= end_pfn )
+        return 0;
 
-        do
-        {
-            if ( !mfn_valid(_mfn(base)) )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "Invalid pfn in RMRR range "ERMRRU_FMT"\n",
-                       ERMRRU_ARG(user_rmrrs[i]));
-                break;
-            }
-        } while ( base++ < end );
+    rmrr = xzalloc(struct acpi_rmrr_unit);
+    if ( !rmrr )
+        return -ENOMEM;
 
-        /* Invalid pfn in range as the loop ended before end_pfn was reached. */
-        if ( base <= end )
-            continue;
+    rmrr->scope.devices = xmalloc_array(u16, dev_count);
+    if ( !rmrr->scope.devices )
+    {
+        xfree(rmrr);
+        return -ENOMEM;
+    }
 
-        rmrr = xzalloc(struct acpi_rmrr_unit);
-        if ( !rmrr )
-            return -ENOMEM;
+    seg = 0;
+    for ( idx = 0; idx < dev_count; idx++ )
+    {
+        rmrr->scope.devices[idx] = sbdf[idx];
+        seg |= PCI_SEG(sbdf[idx]);
+    }
+    if ( seg != PCI_SEG(sbdf[0]) )
+    {
+        printk(XENLOG_ERR VTDPREFIX
+               "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
+        scope_devices_free(&rmrr->scope);
+        xfree(rmrr);
+        return 0;
+    }
 
-        rmrr->scope.devices = xmalloc_array(u16, user_rmrrs[i].dev_count);
-        if ( !rmrr->scope.devices )
-        {
-            xfree(rmrr);
-            return -ENOMEM;
-        }
+    rmrr->segment = seg;
+    rmrr->base_address = pfn_to_paddr(base_pfn);
+    /* Align the end_address to the end of the page */
+    rmrr->end_address = pfn_to_paddr(end_pfn) | ~PAGE_MASK;
+    rmrr->scope.devices_cnt = dev_count;
 
-        seg = 0;
-        for ( idx = 0; idx < user_rmrrs[i].dev_count; idx++ )
-        {
-            rmrr->scope.devices[idx] = user_rmrrs[i].sbdf[idx];
-            seg |= PCI_SEG(user_rmrrs[i].sbdf[idx]);
-        }
-        if ( seg != PCI_SEG(user_rmrrs[i].sbdf[0]) )
-        {
-            printk(XENLOG_ERR VTDPREFIX
-                   "Segments are not equal for RMRR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-            scope_devices_free(&rmrr->scope);
-            xfree(rmrr);
-            continue;
-        }
+    if ( register_one_rmrr(rmrr) )
+        printk(XENLOG_ERR VTDPREFIX
+               "Could not register RMMR range "ERMRRU_FMT"\n",
+               ERMRRU_ARG);
 
-        rmrr->segment = seg;
-        rmrr->base_address = pfn_to_paddr(user_rmrrs[i].base_pfn);
-        /* Align the end_address to the end of the page */
-        rmrr->end_address = pfn_to_paddr(user_rmrrs[i].end_pfn) | ~PAGE_MASK;
-        rmrr->scope.devices_cnt = user_rmrrs[i].dev_count;
+    return 1;
+}
 
-        if ( register_one_rmrr(rmrr) )
-            printk(XENLOG_ERR VTDPREFIX
-                   "Could not register RMMR range "ERMRRU_FMT"\n",
-                   ERMRRU_ARG(user_rmrrs[i]));
-    }
+static int __init cf_check add_one_extra_rmrr(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt)
+{
+    u32 sbdf_array[] = { id };
+    return add_one_user_rmrr(start, start+nr, 1, sbdf_array);
+}
 
-    return 0;
+static int __init add_extra_rmrr(void)
+{
+    return iommu_get_extra_reserved_device_memory(add_one_extra_rmrr, NULL);
 }
 
 #include <asm/tboot.h>
@@ -1010,7 +1047,7 @@ int __init acpi_dmar_init(void)
     {
         iommu_init_ops = &intel_iommu_init_ops;
 
-        return add_user_rmrr();
+        return add_user_rmrr() || add_extra_rmrr();
     }
 
     return ret;
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343353.568749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEx-0008EJ-6b; Tue, 07 Jun 2022 14:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343353.568749; Tue, 07 Jun 2022 14:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEw-0008DY-OQ; Tue, 07 Jun 2022 14:31:26 +0000
Received: by outflank-mailman (input) for mailman id 343353;
 Tue, 07 Jun 2022 14:31:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEu-00061K-MG
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:24 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81604b6c-e66e-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 16:31:23 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 26C525C00E3;
 Tue,  7 Jun 2022 10:31:23 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 07 Jun 2022 10:31:23 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:22 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81604b6c-e66e-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612283; x=1654698683; bh=A6IUb5PhcA
	I6HvRB0V+hA466Exyej9hEMYg/SSDCyCk=; b=kvLeIvF/rub3mbujq80JZ6rqh8
	xFM+jPqRlvU2zoq6eFhJP10jWYcqRebmHXjkUeb5cV0cLtZjr0SxKhyCoceU/c+/
	gCS+YBjMc/39D+Ab0Swy87DAeQcSVuVg+Vd5fknFmvFT+87x6ocyDF9pGsaHGr9N
	Fo3c8EOilP8kSCULmALBid6fS57MDKnC3GwHUSRyDgR/1EsjItYCaDZeqbEM3asD
	gTsP6dJjgIYMceDf+wiSTmnf7w/HulDgWtNFFRw/7UNyHlkR1f6D/vojpt6pQOxX
	ux7iG+5dVwcYPlFuI6c8b42fIxh6vo65jGdARhpG9gf3XyYtQhdSU4apSJIg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612283; x=
	1654698683; bh=A6IUb5PhcAI6HvRB0V+hA466Exyej9hEMYg/SSDCyCk=; b=P
	EZAkeVuZqW40QfZ+2Iw50ElZFKjh0SaSOZSTxddgnhHWS0WID4RthSk687b1APWq
	WAl+zFBx/wgbljGAl4SHS6lE+EoXogSbVtJjSoLRnZjEz6DMrwEUDquSFgSk3ZCK
	XjR5dcbvcMAmaxk52yKgSr3LQjlGB0kNRR9hWqvzqzfMvRqUQuV8QwN0Ee4q1qJs
	YbC5jjPzbMOsYv9gwedWnaVlbmLRK9c9GuNTciOGdZ6yIp0K7707tkeZekkKTrYL
	lUcXDLyLzlRI2LjIMlJpHQLLA70eDpWT1GbWK/C4cjTiu8rMA9dmh5DDQ692tVWr
	hAQuUZ6RHAdAnvpsyckjQ==
X-ME-Sender: <xms:OmGfYqATAHN10ezeXJqXFbpq6D8CRhs_ap7G5UjvSCEmBbHxJZWQrw>
    <xme:OmGfYkg3IChUTgLDtbXxykOkPV_zWlfd9iCreh2EUgEghwpxOEQ6nm9jqpTwtbtLA
    9gx7kaJw5tHLA>
X-ME-Received: <xmr:OmGfYtmRImV0apA2s9uIHszoIQh12JvZmUl1NP-l9obyby5ZTuHiIx5A6Qw2x7kO_xlKJYS1mPI1Wre6v8Z7_8B7hu6R3dphCE9AfknKiXm6ODeXAHs>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddthedgjeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpeefnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:OmGfYox7sAX0EiOO31JlFU0oRWhUCZO0-lP4uGW9Scl5iV1VLFUZTA>
    <xmx:OmGfYvQmeISZB39E8_TVpxNxkdBHntJ8G1fHWbVWXb6Zd7tS25awRw>
    <xmx:OmGfYjZq_lvgGCdYvfG1SfEas8VOnHx0blW-HGp9s0V-SnI3-yGBiQ>
    <xmx:O2GfYmLsDkNRAvbkuyiREyx5_A87lHCipugx7yx9yLeQ646XL1EHkw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v1 08/10] IOMMU/AMD: wire common device reserved memory API
Date: Tue,  7 Jun 2022 16:30:14 +0200
Message-Id: <cd54972e6ffd7a6d98740cc9eb1c48ac94f422e8.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Register common device reserved memory similarly to how the ivmd= parameter
is handled.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/passthrough/amd/iommu_acpi.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index ac6835225bae..2a4896c05442 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -1078,6 +1078,20 @@ static inline bool_t is_ivmd_block(u8 type)
             type == ACPI_IVRS_TYPE_MEMORY_IOMMU);
 }
 
+static int __init cf_check add_one_extra_ivmd(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt)
+{
+    struct acpi_ivrs_memory ivmd;
+
+    ivmd.start_address = start << PAGE_SHIFT;
+    ivmd.memory_length = nr << PAGE_SHIFT;
+    ivmd.header.flags = ACPI_IVMD_UNITY |
+                        ACPI_IVMD_READ | ACPI_IVMD_WRITE;
+    ivmd.header.length = sizeof(ivmd);
+    ivmd.header.device_id = id;
+    ivmd.header.type = ACPI_IVRS_TYPE_MEMORY_ONE;
+    return parse_ivmd_block(&ivmd);
+}
+
 static int __init cf_check parse_ivrs_table(struct acpi_table_header *table)
 {
     const struct acpi_ivrs_header *ivrs_block;
@@ -1121,6 +1135,8 @@ static int __init cf_check parse_ivrs_table(struct acpi_table_header *table)
         AMD_IOMMU_DEBUG("IVMD: %u command line provided entries\n", nr_ivmd);
     for ( i = 0; !error && i < nr_ivmd; ++i )
         error = parse_ivmd_block(user_ivmds + i);
+    if ( !error )
+        error = iommu_get_extra_reserved_device_memory(add_one_extra_ivmd, NULL);
 
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343354.568760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEy-0000C5-N0; Tue, 07 Jun 2022 14:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343354.568760; Tue, 07 Jun 2022 14:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaEy-0000B4-EX; Tue, 07 Jun 2022 14:31:28 +0000
Received: by outflank-mailman (input) for mailman id 343354;
 Tue, 07 Jun 2022 14:31:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEw-000619-K1
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:26 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 823b965b-e66e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 16:31:25 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id D8C015C01A9;
 Tue,  7 Jun 2022 10:31:24 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 07 Jun 2022 10:31:24 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:23 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 823b965b-e66e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612284; x=1654698684; bh=bVSELf5uLQ
	6uKc/B+4q8TUi8lxYysoFD6qxpJaiy/dU=; b=XVGLR8gbDGDMJ01qLDdQa2s0RW
	bwVjQzaCsTdHatp1qUKzzjlbTwJBKXg25OCF8wYvZoSvKl2f0VECtYye3DpnBRPu
	bwqERkdJ/khzDWLbSUPfEgy99MlYjf2jxwzfvzx7h3FquGFG2jhHjRF70K6vtGag
	ayYkckmyyfCZHRWYSi4EXr0Wiq7h38KIV6kapI6wxAqwCTeVm3B6jUdfKHPMpiWu
	bVhxKOctQKzqe7gnWHNE97EeRaHp5DMtQCO/hxvP2N8B56SDWLV/fGtMALZLjUQZ
	LdDK9KL+XOSGIRbglmwISUMKXVsrglYW4cK8nHHudh7voVstTI80rKYKDYHQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612284; x=
	1654698684; bh=bVSELf5uLQ6uKc/B+4q8TUi8lxYysoFD6qxpJaiy/dU=; b=S
	29bcsaAtrem1xMhKQHKkXzBYB57nkIGxGhCiFTiin1F7h/K0pUe6G8mMH62j1zuC
	kON8mTJN0pcJPMODq6oAaPudqfOeWpc02sNX9vpBFD5qGlwlI+X3wCNYR+scmnhA
	l6FC3IBFIrz3g9/j3l7vrZ+YvRF7g1Nj39yiLG/iYe45ymogY0ZFplYnQw/23Rej
	m8MbDF40qsRSCYR4LScwt+2YX0oYfSO617v7EwUgegQER+t8wbH3tLyaL5VRDpuF
	hX6OzsaTnNrpp/FXe4dQzBSeirp6mPZos/WvDk+kxq/8pcfNqLk+vR2YGiU5gGvT
	xWL8L684T7FmSsLoRdjog==
X-ME-Sender: <xms:PGGfYs6zgr0jaxbThCymxdp1MXrvda5TvEdmEJAfij-8f42qYCyHig>
    <xme:PGGfYt6-D8zFTfsPCBAyK_5fqxuR8fEvrrNWsnomq6fexofxleNl9aH5gVmoeH1WD
    oFMnPdA6Jd_Eg>
X-ME-Received: <xmr:PGGfYrcQOHgC8ky9q3Td8DkbwK4RDRXxVC1WrRod5OF-_nTDNX_3fv4t_a_fTGpXLuj3Xn-xMnZufQmRdBkiJ45-gtW5B0sOPTSQsZ6QYe0NyKGc_Es>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddthedgjeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:PGGfYhL0eu40Z3-R6hE2Fl30oqcE1qTxjNWM8kflqFWwI1qR6RBQQw>
    <xmx:PGGfYgLcfdrpmn53m1LYlf0LYTL5Ie6nZl34eTNza-8F3RFyhydF7A>
    <xmx:PGGfYiwV_ynWb3Ou5T3iBK0TpCp3kpGH5chzYyX71CZpO1ZHcO9yYQ>
    <xmx:PGGfYo-oXIg3gy5mA5DCK92vITUFzeJ4MrsrKq3b2wHTpYwpTSEjHg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v1 09/10] xue: mark DMA buffers as reserved for the device
Date: Tue,  7 Jun 2022 16:30:15 +0200
Message-Id: <f1f5d65f08915f9977d46e0dc2ffb9296298cd3a.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The important part is to include those buffers in the IOMMU page tables
relevant for the USB controller. Otherwise, DbC will stop working as soon
as the IOMMU is enabled, regardless of which domain the device is assigned
to (be it Xen or dom0).
If the device is passed through to dom0 or another domain (see later
patches), that domain will effectively have access to those buffers too.
This gives such a domain yet another way to DoS the system (as is already
the case with an assigned PCI device), but also possibly to steal the
console ring content. Thus, such a domain should be a trusted one.
In any case, prevent anything else from being placed on those pages by
adding artificial padding.

Using this API for DbC pages requires raising MAX_USER_RMRR_PAGES.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c             | 45 ++++++++++++++++++++-----------
 xen/drivers/passthrough/vtd/dmar.c |  2 +-
 2 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index b253426a95f8..6fd26c3d38a8 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -26,6 +26,7 @@
 #include <xen/serial.h>
 #include <xen/timer.h>
 #include <xen/param.h>
+#include <xen/iommu.h>
 #include <asm/fixmap.h>
 #include <asm/io.h>
 #include <xen/mm.h>
@@ -995,13 +996,21 @@ static struct uart_driver xue_uart_driver = {
     .getc = NULL
 };
 
-static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static struct xue_erst_segment erst __aligned(64);
-static struct xue_dbc_ctx ctx __aligned(64);
-static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
-static char str_buf[XUE_PAGE_SIZE] __aligned(64);
+struct xue_dma_bufs {
+    struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    struct xue_erst_segment erst __aligned(64);
+    struct xue_dbc_ctx ctx __aligned(64);
+    uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
+    char str_buf[XUE_PAGE_SIZE] __aligned(64);
+    /*
+     * Don't place anything else on this page - it will be
+     * DMA-reachable by the USB controller.
+     */
+    char _pad[0] __aligned(XUE_PAGE_SIZE);
+};
+static struct xue_dma_bufs xue_dma_bufs __aligned(XUE_PAGE_SIZE);
 static char __initdata opt_dbgp[30];
 
 string_param("dbgp", opt_dbgp);
@@ -1033,15 +1042,21 @@ void __init xue_uart_init(void)
         xue->sbdf = PCI_SBDF(0, bus, slot, func);
     }
 
-    xue->dbc_ctx = &ctx;
-    xue->dbc_erst = &erst;
-    xue->dbc_ering.trb = evt_trb;
-    xue->dbc_oring.trb = out_trb;
-    xue->dbc_iring.trb = in_trb;
-    xue->dbc_owork.buf = wrk_buf;
-    xue->dbc_str = str_buf;
+    xue->dbc_ctx = &xue_dma_bufs.ctx;
+    xue->dbc_erst = &xue_dma_bufs.erst;
+    xue->dbc_ering.trb = xue_dma_bufs.evt_trb;
+    xue->dbc_oring.trb = xue_dma_bufs.out_trb;
+    xue->dbc_iring.trb = xue_dma_bufs.in_trb;
+    xue->dbc_owork.buf = xue_dma_bufs.wrk_buf;
+    xue->dbc_str = xue_dma_bufs.str_buf;
 
-    xue_open(xue);
+    if ( xue_open(xue) )
+    {
+        iommu_add_extra_reserved_device_memory(
+                PFN_DOWN(virt_to_maddr(&xue_dma_bufs)),
+                PFN_UP(sizeof(xue_dma_bufs)),
+                uart->xue.sbdf.sbdf);
+    }
 
     serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
 }
diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 661a182b08d9..2caa3e9ad1b0 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -845,7 +845,7 @@ out:
     return ret;
 }
 
-#define MAX_USER_RMRR_PAGES 16
+#define MAX_USER_RMRR_PAGES 64
 #define MAX_USER_RMRR 10
 
 /* RMRR units derived from command line rmrr option. */
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 14:31:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 14:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343355.568770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaF0-0000Wh-Fv; Tue, 07 Jun 2022 14:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343355.568770; Tue, 07 Jun 2022 14:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyaF0-0000U8-15; Tue, 07 Jun 2022 14:31:30 +0000
Received: by outflank-mailman (input) for mailman id 343355;
 Tue, 07 Jun 2022 14:31:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbs7=WO=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1nyaEy-000619-2q
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 14:31:28 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8319eddd-e66e-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 16:31:26 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 546365C01BE;
 Tue,  7 Jun 2022 10:31:26 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 07 Jun 2022 10:31:26 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 7 Jun 2022 10:31:25 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8319eddd-e66e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1654612286; x=1654698686; bh=sGlhYvcRcQ
	5DJkbewOsNiwax4QikybgKWGNbHy1b/To=; b=XGPSWXvQbtm+CS+eaOfrElY0ut
	zgiez65mQYYyvwyfMyrXmcXC9yY4jonAu8A6H07J+cLVqAMNKgp14DHkLbOIn7uS
	xQPhUhVE2rR02HvD5esvwfM6/Zed6GgmgtGsLeboi4GOzMkHwL/OEoQ5H93EfUrx
	2+4Xio2CVonGty74H3TOdnKDmrFTfsXpp0Ew+V/ewSusMoCbXopF2R6eorFhRBdx
	r89oAfQbFNc7Vjq4FuUlw5fX1CfYkVcotG/Ya4yFX+blwIJaLXisKC9SgkIiuP08
	z+oxVyKDPKBjJy7Qsh97WwCpIMtMes+DDcLJ2tpQ7OvHhd5Mg31kSGGmT4yg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1654612286; x=
	1654698686; bh=sGlhYvcRcQ5DJkbewOsNiwax4QikybgKWGNbHy1b/To=; b=h
	8K3dtAM37ho2kR1GuCG0lGqQezLmA9O5PF6EhvBk3e75yilPIQo7CqQ06r6WivfL
	aZFXj9hDqUxkBUJFZ8G6s2qaEloZ3LKpLH3GhjuKNtN5h38BGqe739R0huqj+fSm
	aNSD+GMOFdRYp82kpomLwDrZSNIjL59WzjsQQeN4PUv58SeSkiWG/W8Cm3qNmnmR
	zzbZQw1hhdaXq5YK5u5qcOHcX4h2gqLUO1lx5TLVP3dLLuuHMk+3nMFreLMxe55F
	8kAPia2ck5TzlxbYEbAgxoCgqQBQE7h7ubE1x/J0Sv8w4nUWcRlf5yiY9PSX6H+b
	O/x7daW3163aBDgr8jSWA==
X-ME-Sender: <xms:PmGfYivtkpu2khFqG9Vh0vmXdTta1nJxKMksidyCpqjDiXejRuwKYg>
    <xme:PmGfYne5PXRi7FD4S3F4lDFlVkvJ8lD1PdhDMf5-l4jvY-5V1ytq-brQm881UJPbX
    QHrMJTmu8bnbw>
X-ME-Received: <xmr:PmGfYtw5_RX_9XR04XErXjsRes5pgkv3HOLfR70Rpwv5wtSmQ4mrhUdRKa7-sXdTDleAsdrjb7mZ_Ui27J7JS49esVqLJJh7D84r-RsuQB7vAT6bpJM>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedruddthedgjeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpeefnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:PmGfYtO2gELOatYkZUZh3S_WwwJAn4WVd_2OOQ2v9FZP9O8ex5_McA>
    <xmx:PmGfYi8u17Vyxa0depxYDXOyhS9XdySeQBjIlkqMxz_7hyPbMzroVg>
    <xmx:PmGfYlWAV5b1Yrs8rAbAm1vUzh2HLv7EDJWA4gfbbOmknYSyc4mhQg>
    <xmx:PmGfYpnf7W43aXkRbQMVNS_UNQyN9mbagXj3jFyPSg6UYofFVr8IPQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 10/10] xue: allow driving the rest of XHCI by a domain while Xen uses DbC
Date: Tue,  7 Jun 2022 16:30:16 +0200
Message-Id: <9c35e0e4ac14a273ed59fab22034fa11a264e394.1654612169.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

That's possible because the capability was designed specifically to allow a
separate driver to handle it, in parallel to an unmodified xHCI driver
(a separate set of registers, the port pretending to be "disconnected" to
the main xHCI driver, etc.). It works with a Linux dom0, although it
requires an awful hack - re-enabling bus mastering behind dom0's back.
The Linux driver does a similar thing - see
drivers/usb/early/xhci-dbc.c:xdbc_handle_events().

To prevent Linux from messing with the DbC, mark this MMIO area as read-only.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 xen/drivers/char/xue.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/xen/drivers/char/xue.c b/xen/drivers/char/xue.c
index 6fd26c3d38a8..85aed0bccbbf 100644
--- a/xen/drivers/char/xue.c
+++ b/xen/drivers/char/xue.c
@@ -27,6 +27,7 @@
 #include <xen/timer.h>
 #include <xen/param.h>
 #include <xen/iommu.h>
+#include <xen/rangeset.h>
 #include <asm/fixmap.h>
 #include <asm/io.h>
 #include <xen/mm.h>
@@ -846,6 +847,7 @@ static void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
 {
     struct xue_dbc_reg *reg = xue->dbc_reg;
     uint32_t db = (reg->db & 0xFFFF00FF) | (trb->db << 8);
+    uint32_t cmd;
 
     if ( xue->open && !(reg->ctrl & (1UL << XUE_CTRL_DCE)) )
     {
@@ -856,6 +858,16 @@ static void xue_flush(struct xue *xue, struct xue_trb_ring *trb,
         xue_enable_dbc(xue);
     }
 
+    /* Re-enable bus mastering, if dom0 (or other) disabled it in the meantime. */
+    cmd = pci_conf_read16(xue->sbdf, PCI_COMMAND);
+#define XUE_XHCI_CMD_REQUIRED (PCI_COMMAND_MEMORY|PCI_COMMAND_MASTER)
+    if ( (cmd & XUE_XHCI_CMD_REQUIRED) != XUE_XHCI_CMD_REQUIRED )
+    {
+        cmd |= XUE_XHCI_CMD_REQUIRED;
+        pci_conf_write16(xue->sbdf, PCI_COMMAND, cmd);
+    }
+#undef XUE_XHCI_CMD_REQUIRED
+
     xue_pop_events(xue);
 
     if ( !(reg->ctrl & (1UL << XUE_CTRL_DCR)) )
@@ -955,6 +967,13 @@ static void __init cf_check xue_uart_init_postirq(struct serial_port *port)
     serial_async_transmit(port);
     init_timer(&uart->timer, xue_uart_poll, port, 0);
     set_timer(&uart->timer, NOW() + MILLISECS(1));
+
+#ifdef CONFIG_X86
+    if ( rangeset_add_range(mmio_ro_ranges,
+                PFN_DOWN(uart->xue.xhc_mmio_phys + uart->xue.xhc_dbc_offset),
+                PFN_UP(uart->xue.xhc_mmio_phys + uart->xue.xhc_dbc_offset + sizeof(*uart->xue.dbc_reg)) - 1) )
+        printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
+#endif
 }
 
 static int cf_check xue_uart_tx_ready(struct serial_port *port)
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 15:23:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 15:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343455.568784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyb2w-0000u1-Ur; Tue, 07 Jun 2022 15:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343455.568784; Tue, 07 Jun 2022 15:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyb2w-0000tu-Rq; Tue, 07 Jun 2022 15:23:06 +0000
Received: by outflank-mailman (input) for mailman id 343455;
 Tue, 07 Jun 2022 15:23:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyb2w-0000tk-FQ; Tue, 07 Jun 2022 15:23:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyb2w-0007Mk-B9; Tue, 07 Jun 2022 15:23:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyb2v-0007x9-VF; Tue, 07 Jun 2022 15:23:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyb2v-0005B1-Uo; Tue, 07 Jun 2022 15:23:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yxgrg/BADn8UaQ1grPnEtbEe/nLJQF1HWByJmN0VEQA=; b=d/ka+dMpX1JVeYSFrSJiskM7PI
	QxNLdqbcfslDdL2rqjRq8YjWmlj8Qs75/4ZejfoK/bkl73+USVvX06e/tdC9B7V4CDSala6a4NiKr
	H/4FFShi8GQZK9t11MHgqNF/lCzdRIdQQlBEOXSK7itA+a06JBuq8+v0pXCd4V1koTJQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170857-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170857: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=95bd13721647fda869f4256c18e8b33a52f7afb6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 15:23:05 +0000

flight 170857 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170857/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              95bd13721647fda869f4256c18e8b33a52f7afb6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  697 days
Failing since        151818  2020-07-11 04:18:52 Z  696 days  678 attempts
Testing same since   170857  2022-06-07 04:25:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 111388 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 18:48:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 18:48:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343472.568807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyeFt-0005Nw-QB; Tue, 07 Jun 2022 18:48:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343472.568807; Tue, 07 Jun 2022 18:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyeFt-0005Np-NG; Tue, 07 Jun 2022 18:48:41 +0000
Received: by outflank-mailman (input) for mailman id 343472;
 Tue, 07 Jun 2022 18:48:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OQGf=WO=gmail.com=tamas.k.lengyel@srs-se1.protection.inumbo.net>)
 id 1nyeFs-0005Nj-J7
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 18:48:40 +0000
Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com
 [2607:f8b0:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 716c0ba2-e692-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 20:48:39 +0200 (CEST)
Received: by mail-oi1-x22a.google.com with SMTP id h187so11967492oif.4
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jun 2022 11:48:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 716c0ba2-e692-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=tgKco9MRubG8WBIGWhqaMAjuefz45fkYscFCEEt8VNo=;
        b=EMniyzo/gc9yNNU7LvcM5tky+bmChS0d0WNwi8holR7nNenOnchm2eh0DhQPZ4ENvD
         L/5AVAFeMBzXbfw6xLcDjDKSl1euXC+ELRl+HWQ8MqO2AXIEx61xtueuavN/xYckKAKz
         DA2zOfl4OYSLPN06dDbTNoWs+8OYKNetMGVytR/sHMxpnyz8lXinZmpX2zNE/XbU9eo5
         JPeFASn/c7dq7X0tjDt1RnawsHAmmH+vougGIS48y9uCpMfs0MqSJYWeToSZ4gPyvkjl
         +mx7DJhXEjU++j31MLrhLrLGQBqcj03kxp6HgiF1WqTeOQHaoz+KojLuaqVlcobn4TL5
         n6+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=tgKco9MRubG8WBIGWhqaMAjuefz45fkYscFCEEt8VNo=;
        b=4JcDkdMAyVeuvcKA/FT8dlwvH4/z5obeJxFLZZHsArSt/d6NDiguh/YwpqK+whOMMc
         HE5libKoaR75YnexoHaKBuhC4OAh2t0gnhnc4IdzeK+Ohx3KSYR6ePNW6lgzaEpnVfYM
         LDZMnq1D1B9wdBWT3RYSbKdxX3BeVZ8q/qe4SYIF8rPKvCfkSMtHY/CTk35WM/CxeVe0
         rhNt1QDZ7o7AuQDnlSEQvJkdzaTq3YriTtoqH4p08k8g1zpXMGc77vik+HQHh6383g6m
         1T+/w4dNejH2P4Q7nGifXI+bLUP/PxLoBgJDv8A1DOHjcsBnTz/xlpHqXEB58GcXQGoN
         xswA==
X-Gm-Message-State: AOAM5319M1+I1nCUij12E1/VVYtWTo/EgAdQ6ZKuBMrzY//2BAvdMcRD
	awTYAMsYN6SZMpC8y071RZeF1qUMSIrEdPcmgHQ=
X-Google-Smtp-Source: ABdhPJxcBHyJoQXbk17VvTdPQCYRoUzhpmn8EG3+I48sv6EecSJa+IiYDY3v/jvxYS69VkZreRx42i88V8cgDHVRtDE=
X-Received: by 2002:a54:4486:0:b0:32e:aa60:134a with SMTP id
 v6-20020a544486000000b0032eaa60134amr183949oiv.9.1654627718000; Tue, 07 Jun
 2022 11:48:38 -0700 (PDT)
MIME-Version: 1.0
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 7 Jun 2022 14:48:01 -0400
Message-ID: <CABfawhn_R-Kd1w8OhPh2Lf6UDS28j-6W=5EOJ9CARV83U328aQ@mail.gmail.com>
Subject: Re: [PATCH v1 00/10] Add Xue - console over USB 3 Debug Capability
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 7, 2022 at 10:31 AM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
>
> This is integration of https://github.com/connojd/xue into mainline Xen.
> This patch series includes several patches that I made in the process, some are
> very loosely related.
>
> The driver developed by Connor supports output-only console via USB3 debug
> capability. The capability is designed to operate mostly independently of
> normal XHCI driver, so this patch series allows dom0 to drive standard USB3
> controller part, while Xen uses DbC for console output.
>
> Changes since RFC:
>  - move the driver to xue.c, remove non-Xen parts, remove now unneeded abstraction
>  - adjust for Xen code style
>  - build for x86 only
>  - drop patch hiding the device from dom0

Tested-by: Tamas K Lengyel <tamas@tklengyel.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 19:19:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 19:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343480.568818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyejb-0000Y1-8o; Tue, 07 Jun 2022 19:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343480.568818; Tue, 07 Jun 2022 19:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyejb-0000Xu-4Y; Tue, 07 Jun 2022 19:19:23 +0000
Received: by outflank-mailman (input) for mailman id 343480;
 Tue, 07 Jun 2022 19:19:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyejZ-0000Xk-Fv; Tue, 07 Jun 2022 19:19:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyejZ-0003d1-Cv; Tue, 07 Jun 2022 19:19:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyejY-0006Ln-Um; Tue, 07 Jun 2022 19:19:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyejY-0005Bo-Tw; Tue, 07 Jun 2022 19:19:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qX9K56v+YKhw5Ou++F+drHGUy/JUcyDLUzhFvdX8njc=; b=UwRc/Hc8t/JUaAH/ijvF5Lk2dq
	Hon3Np+FZNSPWxtr5phPrqaI1n2TmzIqepjAOdSwIpdKRKeQQozazqndvdkeB90kdIqfGcpxThf3o
	Ag8dndLDtx/RoeMdp5GYiE3qDhL1AfEOEb/N/DocK01aBGpFTsm7bcFOqQ/mnw9vIVUs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170858-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170858: trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
X-Osstest-Versions-That:
    qemuu=57c9363c452af64fe058aa946cc923eae7f7ad33
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 19:19:20 +0000

flight 170858 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170858/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <job status>        broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken REGR. vs. 170849

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170849
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170849
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170849
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170849
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170849
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170849
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170849
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170849
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
baseline version:
 qemuu                57c9363c452af64fe058aa946cc923eae7f7ad33

Last test of basis   170849  2022-06-06 17:39:05 Z    1 days
Testing same since   170858  2022-06-07 04:25:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Song Gao <gaosong@loongson.cn>
  Xiaojuan Yang <yangxiaojuan@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow host-install(5)

Not pushing.

(No revision log; it would be 734 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 19:22:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 19:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343491.568829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyemv-0001yg-Tu; Tue, 07 Jun 2022 19:22:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343491.568829; Tue, 07 Jun 2022 19:22:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyemv-0001yZ-Qc; Tue, 07 Jun 2022 19:22:49 +0000
Received: by outflank-mailman (input) for mailman id 343491;
 Tue, 07 Jun 2022 19:22:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+ro=WO=proton.me=alex.nlnnfn@srs-se1.protection.inumbo.net>)
 id 1nyemt-0001yP-0m
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 19:22:48 +0000
Received: from mail-4325.protonmail.ch (mail-4325.protonmail.ch [185.70.43.25])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3557de3f-e697-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 21:22:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3557de3f-e697-11ec-b605-df0040e90b76
Date: Tue, 07 Jun 2022 19:22:35 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=proton.me;
	s=protonmail3; t=1654629764; x=1654888964;
	bh=n/ltimDOnvv4ixV3kd3ax/zvXL0LLP0GtcLhtcAZT20=;
	h=Date:To:From:Cc:Reply-To:Subject:Message-ID:In-Reply-To:
	 References:Feedback-ID:From:To:Cc:Date:Subject:Reply-To:
	 Feedback-ID:Message-ID;
	b=a2u2qlNHmVoRS1Ja0eTffzMU8Pb5cBIddPheZoRNgxR9qh/VB3y6iTrZom5g1J9Wp
	 JmTTngqK7jz9cc0g4axZY9Jtsf4I3JYfi+wahdam6fvK6u0SBfcmzovUhl8a6lVD07
	 mxYpslGy2j1rqgwJqjQ1x1Px1jJw82h5mwfLKXnk5CgUrbotq+qSTTg3Gfrob4yUN6
	 9AVbXl0e1r/px0mg2T+6QXd9ZJB0aw6FSo+u2aslHucKV16KVQRAAb/oxXaar3JJGB
	 ll6x151LxABxHHVaMmf8EpHWmAwOz1rsZvuJ2aeiEqraby4tGZQiF6jIvvsCaNkfCF
	 XBWX6YCqqUh1A==
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
From: Alex Nln <alex.nlnnfn@proton.me>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Reply-To: Alex Nln <alex.nlnnfn@proton.me>
Subject: Re: memory overcommitment with SR-IOV device assignment
Message-ID: <RLsJJNdXwhH2qzKW2TCaLRSv8JcF5heYpDGfzMxwwrkzQvi0cOZ2YyiwcI0K_UkgiH7fFFCNsFScpqH-oFA00BnyRIcdWKKbuCsal_9aFhE=@proton.me>
In-Reply-To: <eb7a2869-d9f8-7a9f-3884-60d1a61e36f4@citrix.com>
References: <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me> <eb7a2869-d9f8-7a9f-3884-60d1a61e36f4@citrix.com>
Feedback-ID: 49537399:user:proton
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit


------- Original Message -------
On Tuesday, June 7th, 2022 at 3:04 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> But IOMMU violations are not restartable. We can't just take an IOMMU
> fault, and shuffle the guest's memory, because the PCIe protocol has
> timeouts. These aren't generally long enough to even send an interrupt
> to request software help, let alone page one part out and another part in.
>
> For an IOMMU mapping to exist, it must point at real RAM, because any
> DMA targeting it cannot be paused and delayed for later.
>
>

Thanks for the information!

Speaking about IOMMUs: can Xen emulate a virtual IOMMU? Just thinking out loud.
If Xen exposes a virtual IOMMU to a guest VM, wouldn't it be possible
for the hypervisor to pin the VM's pages that are mapped (dynamically)
by the guest virtual IOMMU? But I guess it would eliminate the performance
benefits of direct device assignment.



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 19:47:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 19:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343499.568840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyfAa-0004ku-UV; Tue, 07 Jun 2022 19:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343499.568840; Tue, 07 Jun 2022 19:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyfAa-0004kn-Po; Tue, 07 Jun 2022 19:47:16 +0000
Received: by outflank-mailman (input) for mailman id 343499;
 Tue, 07 Jun 2022 19:47:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TXeP=WO=citrix.com=prvs=150b9cda6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nyfAZ-0004kh-6Y
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 19:47:15 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f101479-e69a-11ec-b605-df0040e90b76;
 Tue, 07 Jun 2022 21:47:13 +0200 (CEST)
Received: from mail-bn8nam04lp2049.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 07 Jun 2022 15:47:04 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB4958.namprd03.prod.outlook.com (2603:10b6:208:1ad::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Tue, 7 Jun
 2022 19:47:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Tue, 7 Jun 2022
 19:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f101479-e69a-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654631232;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=ccZ9Gby7PCU6na6QwRjpasAAHx9d+SrXzMkgZ3b6Xhg=;
  b=gaL3mokw35lPCoIsa+DuM9BFpwVlQlJkK54IR7dMiYsA7+GyUF23KZCb
   nhtY3ujG24oNz7vbOSiuOExQSfl6wXJhauduS5w+gce8P/+AtUVgOJNnb
   m7+Fj1qmU/DETTgbiS/Re6aYzlkv8LOM8t+2hnWl8rtdz6q1tVSuXS8Tl
   Y=;
X-IronPort-RemoteIP: 104.47.74.49
X-IronPort-MID: 72928762
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,284,1647316800"; 
   d="scan'208";a="72928762"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ccZ9Gby7PCU6na6QwRjpasAAHx9d+SrXzMkgZ3b6Xhg=;
 b=LWybnVooo2a+OqJYtJrtHx7DP3yoGoF85H/RTCLJveVfe/Cjh2N4kthEpkw7/LqXUZNmc0QTfftlzpEX+OUaGwqeHkRuH/mX8yi2cnjPkvaM3HIP8Mn9yMflaJznTRCfU7uUUJ0CDIyhmzpitmYmflzd9DHdYdp6mnZ9otKrZyI=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Alex Nln <alex.nlnnfn@proton.me>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: memory overcommitment with SR-IOV device assignment
Thread-Topic: memory overcommitment with SR-IOV device assignment
Thread-Index: AQHYeihemc0HW39+c0mUd/wlJkeuKK1Dt4iAgACcCYCAAAbTgA==
Date: Tue, 7 Jun 2022 19:47:02 +0000
Message-ID: <f3fa2179-6b4f-74eb-03b9-b6755aa26278@citrix.com>
References:
 <6f_bjKs323m5sDcqqvk3uosOLiugoCHlAvJ_tEMTl9d_05VqR-nOKtBBS4QWK29iKmorefG54vrbEgUM40Fl71BPZ0hwVyY3P0LjjJVjO-c=@proton.me>
 <eb7a2869-d9f8-7a9f-3884-60d1a61e36f4@citrix.com>
 <RLsJJNdXwhH2qzKW2TCaLRSv8JcF5heYpDGfzMxwwrkzQvi0cOZ2YyiwcI0K_UkgiH7fFFCNsFScpqH-oFA00BnyRIcdWKKbuCsal_9aFhE=@proton.me>
In-Reply-To:
 <RLsJJNdXwhH2qzKW2TCaLRSv8JcF5heYpDGfzMxwwrkzQvi0cOZ2YyiwcI0K_UkgiH7fFFCNsFScpqH-oFA00BnyRIcdWKKbuCsal_9aFhE=@proton.me>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d7c51fe1-7923-4c7c-c854-08da48be7e25
x-ms-traffictypediagnostic: MN2PR03MB4958:EE_
x-microsoft-antispam-prvs:
 <MN2PR03MB49581F191488DF36E667DA5CBAA59@MN2PR03MB4958.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <457CF527D24E6B44A8FDA559C14AAD68@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7c51fe1-7923-4c7c-c854-08da48be7e25
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jun 2022 19:47:02.3115
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: wUZZZn3NUQPmj7ER48lwc1WrqB8XoZ/sUV1f54XEBb7pEKq6XSPR8hzgKlbEJbbvk2XpcXAsSd5nW3v9QVfy84SyZYjkQaFZOIemsv1FDMw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4958

On 07/06/2022 20:22, Alex Nln wrote:
> ------- Original Message -------
> On Tuesday, June 7th, 2022 at 3:04 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>> But IOMMU violations are not restartable. We can't just take an IOMMU
>> fault, and shuffle the guest's memory, because the PCIe protocol has
>> timeouts. These aren't generally long enough to even send an interrupt
>> to request software help, let alone page one part out and another part in.
>>
>> For an IOMMU mapping to exist, it must point at real RAM, because any
>> DMA targeting it cannot be paused and delayed for later.
>>
> Thanks for the information!
>
> Speaking about IOMMUs: can Xen emulate a virtual IOMMU? Just thinking out loud.
> If Xen exposes a virtual IOMMU to a guest VM, wouldn't it be possible
> for the hypervisor to pin the VM's pages that are mapped (dynamically)
> by the guest virtual IOMMU? But I guess it would eliminate the performance
> benefits of direct device assignment.

As Jan pointed out, there's no such thing as pinning like that.  Xen is
not KVM.  See
https://xenbits.xen.org/docs/latest/admin-guide/introduction.html


But "can you only deal with the subset mapped in a virtual IOMMU" is a
reasonable question.  The answer is no, for two reasons.

First, what do you do if the guest maps its whole guest physical
address space in the virtual IOMMU?  There's no "sorry - not got enough
memory to arrange that" error available in either Intel or AMD's IOMMU
spec, because there's no such thing as memory overcommit in a real system.

Secondly, and more importantly, the at-reset behaviour of an IOMMU is no
translation, which for a guest means that DMA can go anywhere in the
guest physical address space, meaning the whole guest physical address
space needs mapping into the IOMMU pagetables.

~Andrew
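[Archive editor's note: a toy sketch of the "pin on vIOMMU map" idea debated
above.  All names (HostPinPool, viommu_map) are hypothetical illustration,
not Xen or KVM code.  It models a host that can only pin a bounded number of
frames: pinning works for a small DMA window, but collapses when the guest
maps its whole guest-physical address space through the virtual IOMMU, which
is the first failure mode described in the reply.]

```python
class HostPinPool:
    """Hypothetical host-side pool of frames that can be pinned in real RAM."""

    def __init__(self, frames_available):
        self.frames_available = frames_available
        self.pinned = set()

    def pin(self, gfn):
        """Pin one guest frame; fail once host RAM is exhausted.

        Real Intel/AMD IOMMU specs have no such error, which is why
        this scheme cannot work for a full address-space mapping.
        """
        if gfn in self.pinned:
            return True
        if len(self.pinned) >= self.frames_available:
            return False
        self.pinned.add(gfn)
        return True


def viommu_map(pool, first_gfn, nr_frames):
    """Guest maps [first_gfn, first_gfn + nr_frames) through its vIOMMU."""
    return all(pool.pin(g) for g in range(first_gfn, first_gfn + nr_frames))


pool = HostPinPool(frames_available=1024)   # overcommitted host
print(viommu_map(pool, 0, 256))     # small DMA working set: True
print(viommu_map(pool, 0, 4096))    # whole guest address space: False
```

The second point in the reply is the same failure in disguise: at reset a
(v)IOMMU imposes no translation, so the "subset" is the entire guest
physical address space from the very first instruction.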


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 21:26:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 21:26:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343511.568857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nygiU-00070y-9D; Tue, 07 Jun 2022 21:26:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343511.568857; Tue, 07 Jun 2022 21:26:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nygiU-00070r-6C; Tue, 07 Jun 2022 21:26:22 +0000
Received: by outflank-mailman (input) for mailman id 343511;
 Tue, 07 Jun 2022 21:26:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nygiT-00070R-CW; Tue, 07 Jun 2022 21:26:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nygiT-0005q0-9s; Tue, 07 Jun 2022 21:26:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nygiS-0006NW-UW; Tue, 07 Jun 2022 21:26:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nygiS-0005pv-U4; Tue, 07 Jun 2022 21:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gWGJ5RLIkWxgfauxW7gRB1vhkQAxbyIPu048S4Wzub0=; b=FMBdXpeEMxwn6SLGdG3DyxNJ39
	nF6swSxM4F40pjU4PJHEBsgHMpEoq16TIGXaOcGpvdpsU0gEC0Y4UcCA3fNqz9O53j8DzpF+2rOjA
	da7EiwzOn3zA9uM7C/BTo6YPwgZuSu+Ls3vEqSQnWFzru6Z/y90SdgvFSbBiuzdOHH+Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170860: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 21:26:20 +0000

flight 170860 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170860/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 170736

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine-uefi  6 xen-install      fail in 170846 pass in 170860
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 170846 pass in 170860
 test-armhf-armhf-xl-arndale 18 guest-start/debian.repeat fail in 170846 pass in 170860
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat fail in 170846 pass in 170860
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 170846
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail pass in 170846
 test-armhf-armhf-libvirt-qcow2 13 guest-start              fail pass in 170846

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 170846 like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 170846 like 170736
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 170846 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 170846 never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 170846 never pass
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 170724
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170736
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170736
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170736
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170736
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   13 days
Testing same since   170843  2022-06-06 06:44:17 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1081 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 21:55:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 21:55:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343523.568868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyhAh-0002E0-PY; Tue, 07 Jun 2022 21:55:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343523.568868; Tue, 07 Jun 2022 21:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyhAh-0002Dt-Ls; Tue, 07 Jun 2022 21:55:31 +0000
Received: by outflank-mailman (input) for mailman id 343523;
 Tue, 07 Jun 2022 21:55:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SxjK=WO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyhAg-0002Dn-Sj
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 21:55:30 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8a7d4c03-e6ac-11ec-bd2c-47488cf2e6aa;
 Tue, 07 Jun 2022 23:55:29 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B7C6761768;
 Tue,  7 Jun 2022 21:55:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 749E5C385A5;
 Tue,  7 Jun 2022 21:55:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a7d4c03-e6ac-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654638926;
	bh=Wf4mpmab6OumvfGAsjsDUb6O4YEDpkpZwwzVRTvvVmc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=agVfJOcHwOsJSslTtgbtoXo6V3I1rxzyX7KPE1Sh4pMKRN9Ab4YTO/V5erJ050WvK
	 98V5Phh6cL0KV0a4K/N4dysYhY4f1OCVUW9X7UIQcy56HnMhunrenVx1ofIDRD9M6J
	 zr35wNFMPajI57T+zkC2de6WgGDU+N9O1LyufOEiMSE0CBCk85d7CTc/3KlY4YkVFd
	 Uxhwq7MvMpmMXYkPddp+htmBbZjXFY8uDqXl6oUq1vO6Lj0xhbOgm9PgUXbgJvQ58G
	 bloUjQ+vJ0Gn2LSQcXyUs/PpCCxgl21KM/c91bVE+KZDcw45D6erqeXcKVAOysfshr
	 YelDCQ8M/w0Uw==
Date: Tue, 7 Jun 2022 14:55:24 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jens Wiklander <jens.wiklander@linaro.org>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/2] Xen FF-A mediator
In-Reply-To: <20220607101010.3136600-1-jens.wiklander@linaro.org>
Message-ID: <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop>
References: <20220607101010.3136600-1-jens.wiklander@linaro.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 7 Jun 2022, Jens Wiklander wrote:
> Hi,
> 
> This patch set adds an FF-A [1] mediator modeled after the TEE mediator
> already present in Xen. The FF-A mediator implements the subset of the FF-A
> 1.1 specification needed to communicate with OP-TEE using FF-A as the
> transport mechanism instead of SMC/HVC as with the TEE mediator. It allows
> a similar design in OP-TEE as with the TEE mediator, where OP-TEE presents
> one virtual partition of itself to each guest in Xen.
> 
> The FF-A mediator is generic in the sense that it has nothing OP-TEE
> specific, except that only the subset needed for OP-TEE is implemented so
> far. The hooks needed to inform OP-TEE that a guest is created or destroyed
> are part of the FF-A specification.
> 
> It should be possible to extend the FF-A mediator to implement a larger
> portion of the FF-A 1.1 specification without breaking the way OP-TEE is
> communicated with here. So it should be possible to support any TEE or
> Secure Partition using FF-A as transport with this mediator.
> 
> [1] https://developer.arm.com/documentation/den0077/latest
> 
> Thanks,
> Jens

Hi Jens,

Many thanks for the patches! I tried to apply them to the master branch
but unfortunately they don't apply any longer. Could you please rebase
them on master (or even better rebase them on staging) and resend?

Thank you!



> Jens Wiklander (2):
>   xen/arm: smccc: add support for SMCCCv1.2 extended input/output
>     registers
>   xen/arm: add FF-A mediator
> 
>  xen/arch/arm/Kconfig         |   11 +
>  xen/arch/arm/Makefile        |    1 +
>  xen/arch/arm/arm64/smc.S     |   43 +
>  xen/arch/arm/domain.c        |   10 +
>  xen/arch/arm/ffa.c           | 1624 ++++++++++++++++++++++++++++++++++
>  xen/arch/arm/vsmc.c          |   19 +-
>  xen/include/asm-arm/domain.h |    4 +
>  xen/include/asm-arm/ffa.h    |   71 ++
>  xen/include/asm-arm/smccc.h  |   42 +
>  9 files changed, 1821 insertions(+), 4 deletions(-)
>  create mode 100644 xen/arch/arm/ffa.c
>  create mode 100644 xen/include/asm-arm/ffa.h



From xen-devel-bounces@lists.xenproject.org Tue Jun 07 22:47:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 22:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343536.568879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyhyP-00080b-J6; Tue, 07 Jun 2022 22:46:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343536.568879; Tue, 07 Jun 2022 22:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyhyP-00080U-GA; Tue, 07 Jun 2022 22:46:53 +0000
Received: by outflank-mailman (input) for mailman id 343536;
 Tue, 07 Jun 2022 22:46:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyhyO-00080K-3H; Tue, 07 Jun 2022 22:46:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyhyN-000785-So; Tue, 07 Jun 2022 22:46:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyhyN-0003Ha-D4; Tue, 07 Jun 2022 22:46:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyhyN-0004Pr-CJ; Tue, 07 Jun 2022 22:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Q68f1+kkJGB8ICUhVlMV2zCklsg0M1EpicqNHspdNSQ=; b=EnRrxIA7ZeDWxggSGM17O3sfFO
	uVl0GpXsYbt3Vb0G2w/Pd1zA6q8xp2Il0WD9d3Cs96wKzwvQfL7Lf0uKZw4qa3nsizcpU4rHcFI94
	moTiLwb6W4GDsBcB/r+hl60rV6DA8oWACmKQhADGHoRppOCKMrWoNAjhHUyfa0FkY0HI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170865-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170865: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jun 2022 22:46:51 +0000

flight 170865 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170865/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 170840

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install         fail pass in 170852

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair        11 xen-install/dst_host fail in 170852 like 170840
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170840
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170840
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170840
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170840
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170840
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170840
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170840
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170840  2022-06-06 01:51:50 Z    1 days
Testing same since   170852  2022-06-06 22:09:25 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cea9ae06229577cd5b77019ce122f9cdd1568106
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 18 16:02:51 2022 +0000

    x86/spec-ctrl: Enumeration for new Intel BHI controls
    
    https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 07 23:07:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jun 2022 23:07:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343546.568889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyiI2-0002Bp-6h; Tue, 07 Jun 2022 23:07:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343546.568889; Tue, 07 Jun 2022 23:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyiI2-0002Bi-3s; Tue, 07 Jun 2022 23:07:10 +0000
Received: by outflank-mailman (input) for mailman id 343546;
 Tue, 07 Jun 2022 23:07:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SxjK=WO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nyiI0-0002Bc-5w
 for xen-devel@lists.xenproject.org; Tue, 07 Jun 2022 23:07:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c618745-e6b6-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 01:07:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 111B8609D0;
 Tue,  7 Jun 2022 23:07:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 38CDEC3411C;
 Tue,  7 Jun 2022 23:07:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c618745-e6b6-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654643224;
	bh=LWfyMb2v5MSdyL+3ClcROnPz1ASejoPP0cLJZf/35SU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=liQYECUeX1Q6fBqgwq2D3ZXwL4V0xoawyK0hCPgfVSXpD26d0ss9qqxyUg7LefCfz
	 WIKOvKC8V4B0/Xse2MjmssJbX46oII0O5chMInC23VMoby+QO+p3LwleEFdL5j0dde
	 4Ot5h2bc3Nyqs6EsB2b5zas4f9P5N3RuQke3syPmBRU3aoN7+G2AToPL+LVcUpcLst
	 zVvPNjriyLhYmVieVbT11r0B3tMLjEmlv4Jj9MQjrhUdaJB26kaC7HOTITmxRS2FJO
	 vAoV1ocSXycYOv0+PRe6He69vyE9Q5e0zr8Ckk1bdes1R+S4X1aR2IUrK4f7q9/a/x
	 2rGQcUYjgZwhA==
Date: Tue, 7 Jun 2022 16:07:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org, 
    Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/2] Xen FF-A mediator
In-Reply-To: <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2206071606550.21215@ubuntu-linux-20-04-desktop>
References: <20220607101010.3136600-1-jens.wiklander@linaro.org> <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 7 Jun 2022, Stefano Stabellini wrote:
> On Tue, 7 Jun 2022, Jens Wiklander wrote:
> > Hi,
> > 
> > This patch set adds an FF-A [1] mediator modeled after the TEE mediator
> > already present in Xen. The FF-A mediator implements the subset of the FF-A
> > 1.1 specification needed to communicate with OP-TEE using FF-A as transport
> > mechanism instead of SMC/HVC as with the TEE mediator. It allows a similar
> > design in OP-TEE as with the TEE mediator where OP-TEE presents one virtual
> > partition of itself to each guest in Xen.
> > 
> > The FF-A mediator is generic in the sense it has nothing OP-TEE specific
> > except that only the subset needed for OP-TEE is implemented so far. The
> > hooks needed to inform OP-TEE that a guest is created or destroyed is part
> > of the FF-A specification.
> > 
> > It should be possible to extend the FF-A mediator to implement a larger
> > portion of the FF-A 1.1 specification without breaking with the way OP-TEE
> > is communicated with here. So it should be possible to support any TEE or
> > Secure Partition using FF-A as transport with this mediator.
> > 
> > [1] https://developer.arm.com/documentation/den0077/latest
> > 
> > Thanks,
> > Jens
> 
> Hi Jens,
> 
> Many thanks for the patches! I tried to apply them to the master branch
> but unfortunately they don't apply any longer. Could you please rebase
> them on master (or even better rebase them on staging) and resend?
> 
> Thank you!

One question without having looked at the patches in detail. These
patches are necessary to mediate OS (e.g. Linux) interactions with
OP-TEE. The difference between xen/arch/arm/ffa.c and
xen/arch/arm/tee/optee.c is the transport mechanism: shared mem vs. SMC.
Is that right?

If only the transport is different, would it make sense to place ffa.c
under xen/arch/arm/tee?

Without having looked at the details of the transport or the FF-A
protocol, let me ask you a question. Do you think it would be possible to
share part of the implementation with xen/arch/arm/tee/optee.c? I am
asking because intuitively, if only the transport is different, I would
have thought that some things could be common. But it doesn't look like
the current patches are reusing anything from xen/arch/arm/tee/optee.c.
Are the two protocols too different?


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 00:26:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 00:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343558.568901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyjWL-0002vo-8n; Wed, 08 Jun 2022 00:26:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343558.568901; Wed, 08 Jun 2022 00:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyjWL-0002vh-4s; Wed, 08 Jun 2022 00:26:01 +0000
Received: by outflank-mailman (input) for mailman id 343558;
 Wed, 08 Jun 2022 00:26:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtW+=WP=kernel.org=kuba@srs-se1.protection.inumbo.net>)
 id 1nyjWK-0002vb-P6
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 00:26:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 908ab562-e6c1-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 02:25:58 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 89C7D61782;
 Wed,  8 Jun 2022 00:25:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9750BC34114;
 Wed,  8 Jun 2022 00:25:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 908ab562-e6c1-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654647956;
	bh=Swm8frSQYF0dasnZJ7rfGFppedE+tAaQ3A6Qn1RLL4w=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=rzzogDpbsRIe8kdUfMX4KxxsBmYM5b21oee7lee8hmr0Hu2pzI8Xk8IDGxFFIfx7l
	 wF0TmrAZA+kfgY7aYxSXJLGt6NeWzXlBPbldOXGTi+AVdtUvQIfpYX0VDsfq65p1PL
	 sVyK3V8cLsn0mkRbDJzQUkvQ2fVzqBgaiu7ccT/+ewiLuDEm96HxRvUQ1/gs2wezWy
	 tL22JChrMYkoKKSeR9c6gPwv18/GIg/psS+1LqYd5c5oN6h0mlXUL1GIF8nYN2HALe
	 ku2HPHMg1vetN5eB0JE1bkbD1QF0c2H7zTfm9Zknr3LT0L47JlrdN18HCWjrQmjplH
	 GrfKrj+F4MKSw==
Date: Tue, 7 Jun 2022 17:25:54 -0700
From: Jakub Kicinski <kuba@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>, Paul Durrant
 <paul@xen.org>, "David S. Miller" <davem@davemloft.net>, Eric Dumazet
 <edumazet@google.com>, Paolo Abeni <pabeni@redhat.com>
Subject: Re: [PATCH Resend] xen/netback: do some code cleanup
Message-ID: <20220607172554.4b24d138@kernel.org>
In-Reply-To: <6507870c-1c32-ebf6-f85f-4bf2ede41367@suse.com>
References: <6507870c-1c32-ebf6-f85f-4bf2ede41367@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 7 Jun 2022 07:28:38 +0200 Juergen Gross wrote:
> Remove some unused macros and functions, make local functions static.

> --- a/drivers/net/xen-netback/rx.c
> +++ b/drivers/net/xen-netback/rx.c
> @@ -486,7 +486,7 @@ static void xenvif_rx_skb(struct xenvif_queue *queue)
>    #define RX_BATCH_SIZE 64
>   -void xenvif_rx_action(struct xenvif_queue *queue)
> +static void xenvif_rx_action(struct xenvif_queue *queue)

Strange, I haven't seen this kind of corruption before, but the patch
certainly looks corrupted. It doesn't apply.
Could you "git send-email" it?

>   {



From xen-devel-bounces@lists.xenproject.org Wed Jun 08 01:03:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 01:03:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343571.568926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyk6A-0005gs-98; Wed, 08 Jun 2022 01:03:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343571.568926; Wed, 08 Jun 2022 01:03:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyk6A-0005gl-5l; Wed, 08 Jun 2022 01:03:02 +0000
Received: by outflank-mailman (input) for mailman id 343571;
 Wed, 08 Jun 2022 01:03:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyk68-0005gb-G1; Wed, 08 Jun 2022 01:03:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyk68-0007y6-Bc; Wed, 08 Jun 2022 01:03:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyk67-0004fd-VC; Wed, 08 Jun 2022 01:03:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyk67-000096-Uk; Wed, 08 Jun 2022 01:02:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BZheXOlygOdALYRqWx7/3ZKxle8hwTE64PoGT9/XuA4=; b=lsIeNm0u13b+hEFjJgv6gi+Sni
	Cx8aqPlVQvP+S5aDmZ7DOxej3OjOLT5ttdONbHzrEDsBJbJWQgfRZeiKHWWYpMzWznE08LOp6tHLv
	p62GobX+HzIBiNASNsyRnSYOXTxH5ljHIQ/cJwFv3uRehN1gUg6WZioGSLWEsUGI1Jz4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170868-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170868: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e71e60cd74df9386c3f684c54888f2367050b831
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 01:02:59 +0000

flight 170868 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170868/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e71e60cd74df9386c3f684c54888f2367050b831
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   14 days
Failing since        170716  2022-05-24 11:12:06 Z   14 days   40 attempts
Testing same since   170868  2022-06-07 12:09:40 Z    0 days    1 attempts

------------------------------------------------------------
2274 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268423 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 01:08:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 01:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343581.568936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykBX-0006UD-WE; Wed, 08 Jun 2022 01:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343581.568936; Wed, 08 Jun 2022 01:08:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykBX-0006U6-TL; Wed, 08 Jun 2022 01:08:35 +0000
Received: by outflank-mailman (input) for mailman id 343581;
 Wed, 08 Jun 2022 01:08:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8TEA=WP=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nykBW-0006U0-37
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 01:08:34 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 83a19904-e6c7-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 03:08:32 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 869DEB82489;
 Wed,  8 Jun 2022 01:08:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B27E3C34114;
 Wed,  8 Jun 2022 01:08:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83a19904-e6c7-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654650510;
	bh=I4fcBOTQjhfyeFkrEprR88T4LuWZ2bbkS5qhS+uJu9I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qh+pJ2hLltIdqnZGKY6F8mA2dlapUDMDyLJN5xdVL7BhiMKssaYIJPHHrGAH1xbHV
	 iFgzYDhn04dbKEhOw8VnK356yqIMcWpmecS12SSkkDco+GfbGvmx4bGuy1G2yUSAeb
	 IdZ82ucOMdLiyvHAm5HNUdJR69l+x5oWryrh/86XcGHxNB16/RFvLSXx8tDGDZ2Ek5
	 d3Uy/ddeYb4sNKBevmXyzP3xS+srnNcibam8AaVO88iSGUEnMroQ+ZezRN6N1eOcnE
	 QNi62z8AYqy/paRVBLY2U7U30J0HjYzRzV5VyfzF0WSGjH0tgqz7L32UBqjcsQCrF1
	 fTMwRyA59MuFg==
Date: Tue, 7 Jun 2022 18:08:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Hongyan Xia <hongyxia@amazon.com>, Julien Grall <jgrall@amazon.com>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [PATCH 10/16] xen/arm: add Persistent Map (PMAP)
 infrastructure
In-Reply-To: <20220520120937.28925-11-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206071806390.21215@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-11-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2035065655-1654650510=:21215"


--8323329-2035065655-1654650510=:21215
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 20 May 2022, Julien Grall wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> The basic idea is like Persistent Kernel Map (PKMAP) in Linux. We
> pre-populate all the relevant page tables before the system is fully
> set up.
> 
> We will need it on Arm in order to rework the arm64 version of
> xenheap_setup_mappings() as we may need to use pages allocated from
> the boot allocator before they are effectively mapped.
> 
> This infrastructure is not lock-protected and therefore can only be used
> before smpboot. After smpboot, map_domain_page() has to be used.
> 
> This is based on the x86 version [1] that was originally implemented
> by Wei Liu.
> 
> The PMAP infrastructure is implemented in common code with some
> arch helpers to set/clear the page-table entries and to convert
> a fixmap slot to a virtual address.
> 
> As mfn_to_xen_entry() now needs to be exported, take the opportunity
> to switch the parameter attr from unsigned to unsigned int.
> 
> [1] <e92da4ad6015b6089737fcccba3ec1d6424649a5.1588278317.git.hongyxia@amazon.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> [julien: Adapted for Arm]
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v4:
>         - Move xen_fixmap in fixmap.h and add a comment about its usage.
>         - Update comments
>         - Use DECLARE_BITMAP()
>         - Replace local_irq_{enable, disable} with an ASSERT() as there
>           should be no user of pmap() in interrupt context.
> 
>     Changes in v3:
>         - s/BITS_PER_LONG/BITS_PER_BYTE/
>         - Move pmap to common code
> 
>     Changes in v2:
>         - New patch
> 
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/arm/Kconfig              |  1 +
>  xen/arch/arm/include/asm/fixmap.h | 24 +++++++++++
>  xen/arch/arm/include/asm/lpae.h   |  8 ++++
>  xen/arch/arm/include/asm/pmap.h   | 32 ++++++++++++++
>  xen/arch/arm/mm.c                 |  7 +--
>  xen/common/Kconfig                |  3 ++
>  xen/common/Makefile               |  1 +
>  xen/common/pmap.c                 | 72 +++++++++++++++++++++++++++++++
>  xen/include/xen/pmap.h            | 16 +++++++
>  9 files changed, 158 insertions(+), 6 deletions(-)
>  create mode 100644 xen/arch/arm/include/asm/pmap.h
>  create mode 100644 xen/common/pmap.c
>  create mode 100644 xen/include/xen/pmap.h
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ecfa6822e4d3..a89a67802aa9 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -14,6 +14,7 @@ config ARM
>  	select HAS_DEVICE_TREE
>  	select HAS_PASSTHROUGH
>  	select HAS_PDX
> +	select HAS_PMAP
>  	select IOMMU_FORCE_PT_SHARE
>  
>  config ARCH_DEFCONFIG
> diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
> index 1cee51e52ab9..365a2385a087 100644
> --- a/xen/arch/arm/include/asm/fixmap.h
> +++ b/xen/arch/arm/include/asm/fixmap.h
> @@ -5,20 +5,44 @@
>  #define __ASM_FIXMAP_H
>  
>  #include <xen/acpi.h>
> +#include <xen/pmap.h>
>  
>  /* Fixmap slots */
>  #define FIXMAP_CONSOLE  0  /* The primary UART */
>  #define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
>  #define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
>  #define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> +#define FIXMAP_PMAP_BEGIN (FIXMAP_ACPI_END + 1) /* Start of PMAP */
> +#define FIXMAP_PMAP_END (FIXMAP_PMAP_BEGIN + NUM_FIX_PMAP - 1) /* End of PMAP */
> +
> +#define FIXMAP_LAST FIXMAP_PMAP_END
> +
> +#define FIXADDR_START FIXMAP_ADDR(0)
> +#define FIXADDR_TOP FIXMAP_ADDR(FIXMAP_LAST)
>  
>  #ifndef __ASSEMBLY__
>  
> +/*
> + * Direct access to xen_fixmap[] should only happen when {set,
> + * clear}_fixmap() is unusable (e.g. where we would end up
> + * recursively calling the helpers).
> + */
> +extern lpae_t xen_fixmap[XEN_PT_LPAE_ENTRIES];
> +
>  /* Map a page in a fixmap entry */
>  extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
>  /* Remove a mapping from a fixmap entry */
>  extern void clear_fixmap(unsigned map);
>  
> +#define fix_to_virt(slot) ((void *)FIXMAP_ADDR(slot))
> +
> +static inline unsigned int virt_to_fix(vaddr_t vaddr)
> +{
> +    BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
> +
> +    return ((vaddr - FIXADDR_START) >> PAGE_SHIFT);
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* __ASM_FIXMAP_H */
> diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
> index aecb320dec45..fc19cbd84772 100644
> --- a/xen/arch/arm/include/asm/lpae.h
> +++ b/xen/arch/arm/include/asm/lpae.h
> @@ -4,6 +4,7 @@
>  #ifndef __ASSEMBLY__
>  
>  #include <xen/page-defs.h>
> +#include <xen/mm-frame.h>
>  
>  /*
>   * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
> @@ -168,6 +169,13 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>          third_table_offset(addr)            \
>      }
>  
> +/*
> + * Standard entry type that we'll use to build Xen's own pagetables.
> + * We put the same permissions at every level, because they're ignored
> + * by the walker in non-leaf entries.
> + */
> +lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
> +
>  #endif /* __ASSEMBLY__ */
>  
>  /*
> diff --git a/xen/arch/arm/include/asm/pmap.h b/xen/arch/arm/include/asm/pmap.h
> new file mode 100644
> index 000000000000..74398b4c4fe6
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/pmap.h
> @@ -0,0 +1,32 @@
> +#ifndef __ASM_PMAP_H__
> +#define __ASM_PMAP_H__
> +
> +#include <xen/mm.h>
> +
> +#include <asm/fixmap.h>
> +
> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> +{
> +    lpae_t *entry = &xen_fixmap[slot];
> +    lpae_t pte;
> +
> +    ASSERT(!lpae_is_valid(*entry));
> +
> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> +    pte.pt.table = 1;
> +    write_pte(entry, pte);

Here we don't need a TLB flush because we never go from one valid
mapping directly to another valid mapping: teardown always goes through
arch_pmap_unmap, which clears the mapping and also flushes the TLB. Is
that right?


> +}
> +
> +static inline void arch_pmap_unmap(unsigned int slot)
> +{
> +    lpae_t pte = {};
> +
> +    write_pte(&xen_fixmap[slot], pte);
> +
> +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
> +}
> +
> +void arch_pmap_map_slot(unsigned int slot, mfn_t mfn);
> +void arch_pmap_clear_slot(void *ptr);

What are these two? They don't seem to be defined anywhere.


> +#endif /* __ASM_PMAP_H__ */
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 52b2a0394047..bd1348a99716 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -305,12 +305,7 @@ void dump_hyp_walk(vaddr_t addr)
>      dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
>  }
>  
> -/*
> - * Standard entry type that we'll use to build Xen's own pagetables.
> - * We put the same permissions at every level, because they're ignored
> - * by the walker in non-leaf entries.
> - */
> -static inline lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned attr)
> +lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
>  {
>      lpae_t e = (lpae_t) {
>          .pt = {
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index d921c74d615e..5b6b2406c028 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -49,6 +49,9 @@ config HAS_KEXEC
>  config HAS_PDX
>  	bool
>  
> +config HAS_PMAP
> +	bool
> +
>  config HAS_SCHED_GRANULARITY
>  	bool
>  
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index b1e076c30b81..3baf83d527d8 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -29,6 +29,7 @@ obj-y += notifier.o
>  obj-y += page_alloc.o
>  obj-$(CONFIG_HAS_PDX) += pdx.o
>  obj-$(CONFIG_PERF_COUNTERS) += perfc.o
> +obj-bin-$(CONFIG_HAS_PMAP) += pmap.init.o
>  obj-y += preempt.o
>  obj-y += random.o
>  obj-y += rangeset.o
> diff --git a/xen/common/pmap.c b/xen/common/pmap.c
> new file mode 100644
> index 000000000000..9355cacb7373
> --- /dev/null
> +++ b/xen/common/pmap.c
> @@ -0,0 +1,72 @@
> +#include <xen/bitops.h>
> +#include <xen/init.h>
> +#include <xen/irq.h>
> +#include <xen/pmap.h>
> +
> +#include <asm/pmap.h>
> +#include <asm/fixmap.h>
> +
> +/*
> + * Simple mapping infrastructure to map / unmap pages in fixed map.
> + * This is used to set up the page tables before the map-domain-page
> + * infrastructure is initialized.
> + *
> + * This structure is not protected by any locks, so it must not be used after
> + * smp bring-up.
> + */
> +
> +/* Bitmap to track which slot is used */
> +static __initdata DECLARE_BITMAP(inuse, NUM_FIX_PMAP);
> +
> +void *__init pmap_map(mfn_t mfn)
> +{
> +    unsigned int idx;
> +    unsigned int slot;
> +
> +    ASSERT(system_state < SYS_STATE_smp_boot);
> +    ASSERT(!in_irq());
> +
> +    idx = find_first_zero_bit(inuse, NUM_FIX_PMAP);
> +    if ( idx == NUM_FIX_PMAP )
> +        panic("Out of PMAP slots\n");
> +
> +    __set_bit(idx, inuse);
> +
> +    slot = idx + FIXMAP_PMAP_BEGIN;
> +    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +
> +    /*
> +     * We cannot use set_fixmap() here. We use PMAP when the domain map
> +     * page infrastructure is not yet initialized, so map_pages_to_xen() called
> +     * by set_fixmap() needs to map pages on demand, which then calls pmap()
> +     * again, resulting in a loop. Modify the PTEs directly instead. The same
> +     * is true for pmap_unmap().
> +     */
> +    arch_pmap_map(slot, mfn);
> +
> +    return fix_to_virt(slot);
> +}
> +
> +void __init pmap_unmap(const void *p)
> +{
> +    unsigned int idx;
> +    unsigned int slot = virt_to_fix((unsigned long)p);
> +
> +    ASSERT(system_state < SYS_STATE_smp_boot);
> +    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +    ASSERT(!in_irq());
> +
> +    idx = slot - FIXMAP_PMAP_BEGIN;
> +
> +    __clear_bit(idx, inuse);
> +    arch_pmap_unmap(slot);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/xen/pmap.h b/xen/include/xen/pmap.h
> new file mode 100644
> index 000000000000..93e61b10870e
> --- /dev/null
> +++ b/xen/include/xen/pmap.h
> @@ -0,0 +1,16 @@
> +#ifndef __XEN_PMAP_H__
> +#define __XEN_PMAP_H__
> +
> +/* Large enough for mapping 5 levels of page tables with some headroom */
> +#define NUM_FIX_PMAP 8
> +
> +#ifndef __ASSEMBLY__
> +
> +#include <xen/mm-frame.h>
> +
> +void *pmap_map(mfn_t mfn);
> +void pmap_unmap(const void *p);
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __XEN_PMAP_H__ */
> -- 
> 2.32.0
> 
--8323329-2035065655-1654650510=:21215--


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 01:09:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 01:09:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343586.568947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykC0-000702-Ei; Wed, 08 Jun 2022 01:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343586.568947; Wed, 08 Jun 2022 01:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykC0-0006zv-BK; Wed, 08 Jun 2022 01:09:04 +0000
Received: by outflank-mailman (input) for mailman id 343586;
 Wed, 08 Jun 2022 01:09:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8TEA=WP=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nykBz-0006yD-09
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 01:09:03 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9448e9e3-e6c7-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 03:09:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6F50761755;
 Wed,  8 Jun 2022 01:08:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 65FB4C34114;
 Wed,  8 Jun 2022 01:08:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9448e9e3-e6c7-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654650538;
	bh=UCRrUBCvkRAHHH8YBFR6ClbMWiZGaosAvN0eiDdDXK0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TZunASziJLqT9SQGPmjEAi1TsSPf8dhnUodaLFxPxaLI+/q82H7hfKW10zOaRHrTD
	 e6RE+fayzCgTo3EY2MZmnBQ09COvKOm/ITZY9K9GLlKFTV5Lprw8GYUM4IYeKSZEE3
	 DwfGgYFR/wGOPoLdY49ILDk9zkH0XgV996zlECs6cLWuGG4dy8xIW5QWxWbOTzct2M
	 FtIdKAIYmKfPNepjQKHBK+KqUMqnHEhaX2HEyunYv7vC/LKt+dIOKz3dzemH8D0T3t
	 2QdGuNNNP6g08xKlw7/SLL5eT+1pWggRP2Ew57lJZEpjHr2z6dJtJujoVHgPnat72Y
	 gNw33FEbgmqjQ==
Date: Tue, 7 Jun 2022 18:08:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 09/16] xen/arm: Move fixmap definitions in a separate
 header
In-Reply-To: <20220520120937.28925-10-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206071808350.21215@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-10-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> To use the fixmap definitions properly, their users would also
> need to include <xen/acpi.h>. This is not great when the user
> itself is not meant to use ACPI definitions directly.
> 
> Including <xen/acpi.h> in <asm/config.h> is not an option because
> the latter header is included by everyone. So move the fixmap
> entry definitions out into a new header.
> 
> Take the opportunity to also move the {set, clear}_fixmap()
> prototypes into the new header.
> 
> Note that most of the definitions in <xen/acpi.h> now need to be
> surrounded with #ifndef __ASSEMBLY__ because <asm/fixmap.h> will
> be used in assembly (see EARLY_UART_VIRTUAL_ADDRESS).
> 
> The split will become more helpful in a follow-up patch where new
> fixmap entries will be defined.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> ---
>     There was some disagreement with Stefano on whether fixmap.h
>     should include acpi.h or this should be the other way around.
> 
>     I chose the former because each component should decide how
>     many entries in the fixmap it needs, and also because this is
>     the current behavior on x86. We should stay consistent
>     between architectures to avoid any header mess.
> 
>     Jan acked this patch, so I am assuming he is happy with this
>     approach. I would be OK to rework it if others agree with
>     Stefano's view.

No one is speaking up, so:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



>     Changes in v4:
>         - Add Jan's acked-by
>         - Record Stefano's disagreement on the approach
> 
>     Changes in v3:
>         - Patch added
> ---
>  xen/arch/arm/acpi/lib.c                 |  2 ++
>  xen/arch/arm/include/asm/config.h       |  6 ------
>  xen/arch/arm/include/asm/early_printk.h |  1 +
>  xen/arch/arm/include/asm/fixmap.h       | 24 ++++++++++++++++++++++++
>  xen/arch/arm/include/asm/mm.h           |  4 ----
>  xen/arch/arm/kernel.c                   |  1 +
>  xen/arch/arm/mm.c                       |  1 +
>  xen/include/xen/acpi.h                  | 18 +++++++++++-------
>  8 files changed, 40 insertions(+), 17 deletions(-)
>  create mode 100644 xen/arch/arm/include/asm/fixmap.h
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index a59cc4074cfb..41d521f720ac 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -25,6 +25,8 @@
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  
> +#include <asm/fixmap.h>
> +
>  static bool fixmap_inuse;
>  
>  char *__acpi_map_table(paddr_t phys, unsigned long size)
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index b25c9d39bb32..3e2a55a91058 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -169,12 +169,6 @@
>  
>  #endif
>  
> -/* Fixmap slots */
> -#define FIXMAP_CONSOLE  0  /* The primary UART */
> -#define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
> -#define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
> -#define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> -
>  #define NR_hypercalls 64
>  
>  #define STACK_ORDER 3
> diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
> index 8dc911cf48a3..c5149b2976da 100644
> --- a/xen/arch/arm/include/asm/early_printk.h
> +++ b/xen/arch/arm/include/asm/early_printk.h
> @@ -11,6 +11,7 @@
>  #define __ARM_EARLY_PRINTK_H__
>  
>  #include <xen/page-size.h>
> +#include <asm/fixmap.h>
>  
>  #ifdef CONFIG_EARLY_PRINTK
>  
> diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
> new file mode 100644
> index 000000000000..1cee51e52ab9
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/fixmap.h
> @@ -0,0 +1,24 @@
> +/*
> + * fixmap.h: compile-time virtual memory allocation
> + */
> +#ifndef __ASM_FIXMAP_H
> +#define __ASM_FIXMAP_H
> +
> +#include <xen/acpi.h>
> +
> +/* Fixmap slots */
> +#define FIXMAP_CONSOLE  0  /* The primary UART */
> +#define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
> +#define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
> +#define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> +
> +#ifndef __ASSEMBLY__
> +
> +/* Map a page in a fixmap entry */
> +extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
> +/* Remove a mapping from a fixmap entry */
> +extern void clear_fixmap(unsigned map);
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __ASM_FIXMAP_H */
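Fixmap slots like the ones defined above are just indices into a compile-time virtual range, one page per slot. The sketch below shows the usual fix_to_virt()/virt_to_fix() arithmetic; the base address, the upward growth, and the integer (rather than pointer) return type are illustrative assumptions, not taken from this patch.

```c
#include <assert.h>

#define PAGE_SHIFT        12
#define PAGE_SIZE         (1UL << PAGE_SHIFT)
#define FIXMAP_VIRT_START 0x00200000UL   /* illustrative base, not Xen's */

/* Virtual address of a fixmap slot: base + slot * PAGE_SIZE. */
static unsigned long fix_to_virt(unsigned int slot)
{
    return FIXMAP_VIRT_START + slot * PAGE_SIZE;
}

/* Inverse mapping: recover the slot from any address inside its page. */
static unsigned int virt_to_fix(unsigned long va)
{
    return (unsigned int)((va - FIXMAP_VIRT_START) >> PAGE_SHIFT);
}
```

Because each slot is exactly one page, virt_to_fix() works for any address within the mapped page, which is what lets pmap_unmap() recover the slot from the pointer alone.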
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 424aaf28230b..045a8ba4bb63 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -191,10 +191,6 @@ extern void mmu_init_secondary_cpu(void);
>  extern void setup_xenheap_mappings(unsigned long base_mfn, unsigned long nr_mfns);
>  /* Map a frame table to cover physical addresses ps through pe */
>  extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
> -/* Map a 4k page in a fixmap entry */
> -extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
> -/* Remove a mapping from a fixmap entry */
> -extern void clear_fixmap(unsigned map);
>  /* map a physical range in virtual memory */
>  void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned attributes);
>  
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 8f43caa1866d..25ded1c056d9 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -15,6 +15,7 @@
>  #include <xen/vmap.h>
>  
>  #include <asm/byteorder.h>
> +#include <asm/fixmap.h>
>  #include <asm/kernel.h>
>  #include <asm/setup.h>
>  
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 6b7b72de27fe..52b2a0394047 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -41,6 +41,7 @@
>  #include <xen/sizes.h>
>  #include <xen/libfdt/libfdt.h>
>  
> +#include <asm/fixmap.h>
>  #include <asm/setup.h>
>  
>  /* Override macros from asm/page.h to make them work with mfn_t */
> diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
> index 39d51fcd01dd..1b9c75e68fc4 100644
> --- a/xen/include/xen/acpi.h
> +++ b/xen/include/xen/acpi.h
> @@ -28,6 +28,15 @@
>  #define _LINUX
>  #endif
>  
> +/*
> + * Fixmap pages to reserve for ACPI boot-time tables (see
> + * arch/x86/include/asm/fixmap.h or arch/arm/include/asm/fixmap.h);
> + * 64 pages (256KB) is large enough for most cases.
> + */
> +#define NUM_FIXMAP_ACPI_PAGES  64
> +
> +#ifndef __ASSEMBLY__
> +
>  #include <xen/list.h>
>  
>  #include <acpi/acpi.h>
> @@ -39,13 +48,6 @@
>  #define ACPI_MADT_GET_POLARITY(inti)	ACPI_MADT_GET_(POLARITY, inti)
>  #define ACPI_MADT_GET_TRIGGER(inti)	ACPI_MADT_GET_(TRIGGER, inti)
>  
> -/*
> - * Fixmap pages to reserve for ACPI boot-time tables (see
> - * arch/x86/include/asm/fixmap.h or arch/arm/include/asm/config.h,
> - * 64 pages(256KB) is large enough for most cases.)
> - */
> -#define NUM_FIXMAP_ACPI_PAGES  64
> -
>  #define BAD_MADT_ENTRY(entry, end) (                                        \
>                  (!(entry)) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||  \
>                  (entry)->header.length < sizeof(*(entry)))
> @@ -207,4 +209,6 @@ void acpi_reboot(void);
>  void acpi_dmar_zap(void);
>  void acpi_dmar_reinstate(void);
>  
> +#endif /* __ASSEMBLY__ */
> +
>  #endif /*_LINUX_ACPI_H*/
> -- 
> 2.32.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 01:14:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 01:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343599.568959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykGs-0000D0-3v; Wed, 08 Jun 2022 01:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343599.568959; Wed, 08 Jun 2022 01:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nykGs-0000Ct-0E; Wed, 08 Jun 2022 01:14:06 +0000
Received: by outflank-mailman (input) for mailman id 343599;
 Wed, 08 Jun 2022 01:14:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8TEA=WP=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nykGq-0000Cn-TH
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 01:14:04 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 484f55c3-e6c8-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 03:14:03 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id C5A5A6194B;
 Wed,  8 Jun 2022 01:14:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C94F6C34114;
 Wed,  8 Jun 2022 01:14:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 484f55c3-e6c8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654650841;
	bh=0djJC9KLui+Kdr5MsphZ6+DQIIwNtfeYRS1QGzcW9e8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=g4FMtoRQlVT+W58NZUtmi6qds/mOVh/iLZ/Npp0Xx5FjnMcSORyR3kSEj4DPebMxP
	 JyRM4kCPM47nyv0PnRJnhpzlEvSPQyVKed6Nj986+hB8MdhHq0QfMuQnfnPSgA6QLY
	 /cs1b0JLRGaIxDqPt/lDTKAIprG6QbWGJUy87cEb6LD2o9GmtmeR3uzG97Ja9hkG5b
	 aKawokZH4MoYHCOxPilZv9HxrhWERS78GbJRm932f3ELQkg+bdSU7RE4T3qY3QLcC8
	 UpuuMD8GGfdKQlfkaCGdezSO3FKZyPb6uaw6uqDOVvv+bKVgIqlFg7WQoS14YZrudN
	 PeutfnJ92YSzQ==
Date: Tue, 7 Jun 2022 18:14:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 16/16] xen/arm: mm: Re-implement setup_frame_table_mappings()
 with map_pages_to_xen()
In-Reply-To: <20220520120937.28925-17-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206071813470.21215@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-17-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 May 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() call with a map_pages_to_xen() call.
> 
> This has the advantage of removing the differences between the 32-bit
> and 64-bit code.
> 
> Lastly, remove create_mappings() as there are no more callers.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v4:
>         - Add missing _PAGE_BLOCK
> 
>     Changes in v3:
>         - Fix typo in the commit message
>         - Remove the TODO regarding contiguous bit
> 
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 64 +++++------------------------------------------
>  1 file changed, 6 insertions(+), 58 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 65af44f42232..be37176a4725 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -369,40 +369,6 @@ void clear_fixmap(unsigned map)
>      BUG_ON(res != 0);
>  }
>  
> -/* Create Xen's mappings of memory.
> - * Mapping_size must be either 2MB or 32MB.
> - * Base and virt must be mapping_size aligned.
> - * Size must be a multiple of mapping_size.
> - * second must be a contiguous set of second level page tables
> - * covering the region starting at virt_offset. */
> -static void __init create_mappings(lpae_t *second,
> -                                   unsigned long virt_offset,
> -                                   unsigned long base_mfn,
> -                                   unsigned long nr_mfns,
> -                                   unsigned int mapping_size)
> -{
> -    unsigned long i, count;
> -    const unsigned long granularity = mapping_size >> PAGE_SHIFT;
> -    lpae_t pte, *p;
> -
> -    ASSERT((mapping_size == MB(2)) || (mapping_size == MB(32)));
> -    ASSERT(!((virt_offset >> PAGE_SHIFT) % granularity));
> -    ASSERT(!(base_mfn % granularity));
> -    ASSERT(!(nr_mfns % granularity));
> -
> -    count = nr_mfns / XEN_PT_LPAE_ENTRIES;
> -    p = second + second_linear_offset(virt_offset);
> -    pte = mfn_to_xen_entry(_mfn(base_mfn), MT_NORMAL);
> -    if ( granularity == 16 * XEN_PT_LPAE_ENTRIES )
> -        pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
> -    for ( i = 0; i < count; i++ )
> -    {
> -        write_pte(p + i, pte);
> -        pte.pt.base += 1 << XEN_PT_LPAE_SHIFT;
> -    }
> -    flush_xen_tlb_local();
> -}
> -
>  #ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(mfn_t mfn)
>  {
> @@ -862,36 +828,18 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
>      mfn_t base_mfn;
>      const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
> -#ifdef CONFIG_ARM_64
> -    lpae_t *second, pte;
> -    unsigned long nr_second;
> -    mfn_t second_base;
> -    int i;
> -#endif
> +    int rc;
>  
>      frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>      /* Round up to 2M or 32M boundary, as appropriate. */
>      frametable_size = ROUNDUP(frametable_size, mapping_size);
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
>  
> -#ifdef CONFIG_ARM_64
> -    /* Compute the number of second level pages. */
> -    nr_second = ROUNDUP(frametable_size, FIRST_SIZE) >> FIRST_SHIFT;
> -    second_base = alloc_boot_pages(nr_second, 1);
> -    second = mfn_to_virt(second_base);
> -    for ( i = 0; i < nr_second; i++ )
> -    {
> -        clear_page(mfn_to_virt(mfn_add(second_base, i)));
> -        pte = mfn_to_xen_entry(mfn_add(second_base, i), MT_NORMAL);
> -        pte.pt.table = 1;
> -        write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> -    }
> -    create_mappings(second, 0, mfn_x(base_mfn), frametable_size >> PAGE_SHIFT,
> -                    mapping_size);
> -#else
> -    create_mappings(xen_second, FRAMETABLE_VIRT_START, mfn_x(base_mfn),
> -                    frametable_size >> PAGE_SHIFT, mapping_size);
> -#endif
> +    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
> +                          frametable_size >> PAGE_SHIFT,
> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to setup the frametable mappings.\n");
>  
>      memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
>      memset(&frame_table[nr_pdxs], -1,
> -- 
> 2.32.0
> 
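The sizing logic in setup_frametable_mappings() above picks 2MB mappings for frametables smaller than 32MB and 32MB mappings otherwise, then rounds the table size up to that granularity. Just that arithmetic can be checked in isolation; frametable_bytes() is an illustrative helper, and the entry size stands in for sizeof(struct page_info).

```c
#define MB(x)         ((x) * 1024UL * 1024UL)
/* Round x up to a multiple of a (a must be a power of two). */
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Rounded frametable size for nr_pdxs entries of 'entry' bytes each. */
static unsigned long frametable_bytes(unsigned long nr_pdxs,
                                      unsigned long entry)
{
    unsigned long size = nr_pdxs * entry;
    unsigned long mapping_size = size < MB(32) ? MB(2) : MB(32);

    return ROUNDUP(size, mapping_size);
}
```

The threshold keeps small systems from burning a 32MB superpage on a tiny frametable, while large systems still benefit from the bigger mapping granularity.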


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 03:51:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 03:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343607.568970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nymiz-0000n2-AC; Wed, 08 Jun 2022 03:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343607.568970; Wed, 08 Jun 2022 03:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nymiz-0000mv-69; Wed, 08 Jun 2022 03:51:17 +0000
Received: by outflank-mailman (input) for mailman id 343607;
 Wed, 08 Jun 2022 03:51:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MCZa=WP=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1nymiw-0000mp-GV
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 03:51:16 +0000
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3982a8c4-e6de-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 05:51:09 +0200 (CEST)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 07 Jun 2022 20:51:05 -0700
Received: from orsmsx602.amr.corp.intel.com ([10.22.229.15])
 by orsmga006.jf.intel.com with ESMTP; 07 Jun 2022 20:51:05 -0700
Received: from orsmsx607.amr.corp.intel.com (10.22.229.20) by
 ORSMSX602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Tue, 7 Jun 2022 20:51:05 -0700
Received: from ORSEDG602.ED.cps.intel.com (10.7.248.7) by
 orsmsx607.amr.corp.intel.com (10.22.229.20) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Tue, 7 Jun 2022 20:51:05 -0700
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (104.47.55.101)
 by edgegateway.intel.com (134.134.137.103) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Tue, 7 Jun 2022 20:51:05 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by DM6PR11MB4202.namprd11.prod.outlook.com (2603:10b6:5:1df::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.12; Wed, 8 Jun
 2022 03:51:03 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::a1cb:c445:9900:65c8]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::a1cb:c445:9900:65c8%7]) with mapi id 15.20.5314.019; Wed, 8 Jun 2022
 03:51:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3982a8c4-e6de-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1654660269; x=1686196269;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=bkkumWqgbaZ3AgzGuQ+NeBErrcxjIN5t/ctVRL3LnYE=;
  b=YvMaWvsiSIYMiuCR5lGJ9P2KmZd6sQhjIAYRUBeZxpOaBPJJd4u9RRnp
   s+v0ziw1GA1a11PdTGfwTnGGlQa9rQjfWg6XJvZg/KQjBMcMIfTrdHIMR
   r/3I5mGTgfVL5H4gp1S7HL6Sof7xPbeJUGmqJHUzCXjxC+vxC6Srp1fIb
   VDxpRlnxGYkm/xdPiWOb6X2Ck/0GFaJQ8d/apnB9pz23nHM0EZudDpjMd
   w91nGDK131t9Z66EpFL7O6doWAM0wyEQC9/1P5sUYSBlY9UKLlN8m3JwR
   gwvILiNQWUeQjmoIE863IOrR87AXQoeQ0HREX4vhPWElSiVZDOmalNGjf
   w==;
X-IronPort-AV: E=McAfee;i="6400,9594,10371"; a="340833729"
X-IronPort-AV: E=Sophos;i="5.91,285,1647327600"; 
   d="scan'208";a="340833729"
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.91,285,1647327600"; 
   d="scan'208";a="555225124"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SjZ3E/f2CD/kAykjOLjc+rgX0DBTXXw7JhM9OyNqp51ztXOTxywjoXu1Mf+/B8Rc8B2KYbvK+zz5E8Zzkbl6/BzeLuCodrsbqERTZpKxIHOj0CSbHjTt/t+rxwyVzNm3OhxOMZ9kRsjBREt42rLmM1c1uq6EsvZAjhnwXbHsSlRdEhYgQWfOzZzCIk1Qg8I7c/2IHOQqyzRnEi2g6RcH8hOoz5bdrrwElcLwziZkJ1b4gc7y2N7CSqJkIKP9bkXvq5FOnX7IFmOaXpxThN/id0TbbSGLZ/Rvqqno3hYcC5OTTjwjYvq9ZtlwCyUxYDcEkj6zdJMOwTvTj8ChlGXdDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bkkumWqgbaZ3AgzGuQ+NeBErrcxjIN5t/ctVRL3LnYE=;
 b=jfdIMt3VIdNmVrLRyZ81I1LrUgtbSo7qh3xOdib20t6+YU0VBMug9ff38k8K+EPZw1MlyCwsif90H6QifmSgr/WYoQz5E23m7gMEy7/MqzyeQu/jSe77W5LhGAsB5hMyKSeYldyFqKV7poUiwq+Q6eBOQvNLuLL/EucFtr6qfEmrVY3raUvVFyt0wIvB5YajstxSGfwj5i2he6AnocG8frFS5pYSLZpH5dtevA1IPm1d/jqTXLbboJsJmtDj4yTUS/diQPkMICo+vlZgbtgQD7VFkOzmWV8WpmX4Feu4pWjhVNtcczDWCEkrvlpH9XpsGPBob3NeMiaO5zUAmkqhvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Pau Monné, Roger" <roger.pau@citrix.com>, "Beulich,
 Jan" <JBeulich@suse.com>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Thread-Topic: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Thread-Index: AQHYcPGLnpUujBwR5EWVeB9tTYBUAq09rvQAgAAghACABdMugIAAJ8gAgAEpgQA=
Date: Wed, 8 Jun 2022 03:51:03 +0000
Message-ID: <BN9PR11MB52769B1AEEE39A3B679B2EE28CA49@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <6fa93f8c-9336-331a-75c1-7e815d96ff49@suse.com>
 <YpoeuOJPS0gobz5u@Air-de-Roger>
 <c8f22652-abd5-76f8-75d3-ed581d1c4752@suse.com>
 <Yp8i/C+X82ZbIrSn@Air-de-Roger>
In-Reply-To: <Yp8i/C+X82ZbIrSn@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f59ced17-2f59-44c5-9e31-08da49021bf4
x-ms-traffictypediagnostic: DM6PR11MB4202:EE_
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <DM6PR11MB42028709AC023D5FB694E0148CA49@DM6PR11MB4202.namprd11.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f59ced17-2f59-44c5-9e31-08da49021bf4
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 Jun 2022 03:51:03.3968
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: QB0AXND5wwDdeNtlh5Aab1VqsJ+Upgj8nkDGZVDVDnbh3IDBj3SpLwiDwel3dV0j+xXjgCk1xyLzco15FwjX1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR11MB4202
X-OriginatorOrg: intel.com

> From: Roger Pau Monné
> Sent: Tuesday, June 7, 2022 6:06 PM
>
> On Tue, Jun 07, 2022 at 09:43:25AM +0200, Jan Beulich wrote:
> > On 03.06.2022 16:46, Roger Pau Monné wrote:
> > > On Fri, Jun 03, 2022 at 02:49:54PM +0200, Jan Beulich wrote:
> > >> On 26.05.2022 13:11, Roger Pau Monne wrote:
> > >>> --- a/xen/arch/x86/hvm/vmx/vmx.c
> > >>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > >>> @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
> > >>>
> > >>>  void vmx_update_debug_state(struct vcpu *v)
> > >>>  {
> > >>> +    unsigned int mask = 1u << TRAP_int3;
> > >>> +
> > >>> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> > >>
> > >> I'm puzzled by the lack of symmetry between this and ...
> > >>
> > >>> +        /*
> > >>> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> > >>> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> > >>> +         */
> > >>> +        mask |= 1u << TRAP_debug;
> > >>> +
> > >>>      if ( v->arch.hvm.debug_state_latch )
> > >>> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> > >>> +        v->arch.hvm.vmx.exception_bitmap |= mask;
> > >>>      else
> > >>> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> > >>> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
> > >>>
> > >>>      vmx_vmcs_enter(v);
> > >>>      vmx_update_exception_bitmap(v);
> > >>> @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
> > >>>          switch ( vector )
> > >>>          {
> > >>>          case TRAP_debug:
> > >>> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> > >>> +                goto exit_and_crash;
> > >>
> > >> ... this condition. Shouldn't one be the inverse of the other (and
> > >> then it's the one down here which wants adjusting)?
> > >
> > > The condition in vmx_update_debug_state() sets the mask so that
> > > TRAP_debug will only be added or removed from the bitmap if
> > > !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting (note that
> > > otherwise TRAP_debug is unconditionally set if
> > > !cpu_has_vmx_notify_vm_exiting).
> > >
> > > Hence it's impossible to get a VMExit TRAP_debug with
> > > cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting because
> > > TRAP_debug will never be set by vmx_update_debug_state() in that
> > > case.
> >
> > Hmm, yes, I've been misguided by you not altering the existing setting
> > of v->arch.hvm.vmx.exception_bitmap in construct_vmcs(). Instead you
> > add an entirely new block of code near the bottom of the function. Is
> > there any chance you could move up that adjustment, perhaps along the
> > lines of
> >
> >     v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
> >               | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
> >               | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
> >     if ( cpu_has_vmx_notify_vm_exiting )
> >     {
> >         __vmwrite(NOTIFY_WINDOW, vm_notify_window);
> >         /*
> >          * Disable #AC and #DB interception: by using VM Notify Xen is
> >          * guaranteed to get a VM exit even if the guest manages to lock the
> >          * CPU.
> >          */
> >         v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
> >                                               (1U << TRAP_alignment_check));
> >     }
> >     vmx_update_exception_bitmap(v);
>
> Sure, will move up when posting a new version then.  I will wait for
> feedback from Jun or Kevin regarding the default window size before
> doing so.
>

let me check internally.


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 04:37:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 04:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343616.568980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nynRo-0005fX-RN; Wed, 08 Jun 2022 04:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343616.568980; Wed, 08 Jun 2022 04:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nynRo-0005fQ-OT; Wed, 08 Jun 2022 04:37:36 +0000
Received: by outflank-mailman (input) for mailman id 343616;
 Wed, 08 Jun 2022 04:37:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dgi5=WP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nynRn-0005fJ-1N
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 04:37:35 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b67b1ae4-e6e4-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 06:37:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id BC28D1F9C5;
 Wed,  8 Jun 2022 04:37:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 657F813A15;
 Wed,  8 Jun 2022 04:37:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KylBF4wnoGIEYAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 08 Jun 2022 04:37:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b67b1ae4-e6e4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654663052; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=5b0LCcjxt2G2c58tf8D2HlEk4AjfaFqov/aOlpz/9ps=;
	b=Clad6PfPBjwlU+3zwJgEWSQiKzQeeiKfI0l2WATAP3iDRom/7TNf8QKUIMnNN3n4yTPqfe
	+EGmk/r/ijLOE0BZ46sVNoUtsYFNhKfey1U+7wgTQbgWaOb83AoIUq+LpWMaB1yBIpojVw
	QEHhxwnX2bcmSQwV4qlNaRrW9WTAqlc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>
Subject: [PATCH Resend] xen/netback: do some code cleanup
Date: Wed,  8 Jun 2022 06:37:26 +0200
Message-Id: <20220608043726.9380-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove some unused macros and functions, and make functions only used
locally static.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
---
 drivers/net/xen-netback/common.h    | 12 ------------
 drivers/net/xen-netback/interface.c | 16 +---------------
 drivers/net/xen-netback/netback.c   |  4 +++-
 drivers/net/xen-netback/rx.c        |  2 +-
 4 files changed, 5 insertions(+), 29 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index d9dea4829c86..8174d7b2966c 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -48,7 +48,6 @@
 #include <linux/debugfs.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
 struct pending_tx_info {
 	struct xen_netif_tx_request req; /* tx request */
@@ -82,8 +81,6 @@ struct xenvif_rx_meta {
 /* Discriminate from any valid pending_idx value. */
 #define INVALID_PENDING_IDX 0xFFFF
 
-#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
-
 #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
 
 /* The maximum number of frags is derived from the size of a grant (same
@@ -367,11 +364,6 @@ void xenvif_free(struct xenvif *vif);
 int xenvif_xenbus_init(void);
 void xenvif_xenbus_fini(void);
 
-int xenvif_schedulable(struct xenvif *vif);
-
-int xenvif_queue_stopped(struct xenvif_queue *queue);
-void xenvif_wake_queue(struct xenvif_queue *queue);
-
 /* (Un)Map communication rings. */
 void xenvif_unmap_frontend_data_rings(struct xenvif_queue *queue);
 int xenvif_map_frontend_data_rings(struct xenvif_queue *queue,
@@ -394,7 +386,6 @@ int xenvif_dealloc_kthread(void *data);
 irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
 
 bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
-void xenvif_rx_action(struct xenvif_queue *queue);
 void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
 
 void xenvif_carrier_on(struct xenvif *vif);
@@ -403,9 +394,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 void xenvif_zerocopy_callback(struct sk_buff *skb, struct ubuf_info *ubuf,
 			      bool zerocopy_success);
 
-/* Unmap a pending page and release it back to the guest */
-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
-
 static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 8e035374a370..fb32ae82d9b0 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -69,7 +69,7 @@ void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
 	wake_up(&queue->dealloc_wq);
 }
 
-int xenvif_schedulable(struct xenvif *vif)
+static int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) &&
 		test_bit(VIF_STATUS_CONNECTED, &vif->status) &&
@@ -177,20 +177,6 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-int xenvif_queue_stopped(struct xenvif_queue *queue)
-{
-	struct net_device *dev = queue->vif->dev;
-	unsigned int id = queue->id;
-	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
-}
-
-void xenvif_wake_queue(struct xenvif_queue *queue)
-{
-	struct net_device *dev = queue->vif->dev;
-	unsigned int id = queue->id;
-	netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
-}
-
 static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
 			       struct net_device *sb_dev)
 {
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index d93814c14a23..fc61a4418737 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -112,6 +112,8 @@ static void make_tx_response(struct xenvif_queue *queue,
 			     s8       st);
 static void push_tx_responses(struct xenvif_queue *queue);
 
+static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
+
 static inline int tx_work_todo(struct xenvif_queue *queue);
 
 static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
@@ -1418,7 +1420,7 @@ static void push_tx_responses(struct xenvif_queue *queue)
 		notify_remote_via_irq(queue->tx_irq);
 }
 
-void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
 {
 	int ret;
 	struct gnttab_unmap_grant_ref tx_unmap_op;
diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index dbac4c03d21a..8df2c736fd23 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -486,7 +486,7 @@ static void xenvif_rx_skb(struct xenvif_queue *queue)
 
 #define RX_BATCH_SIZE 64
 
-void xenvif_rx_action(struct xenvif_queue *queue)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	struct sk_buff_head completed_skbs;
 	unsigned int work_done = 0;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 08 04:54:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 04:54:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343624.568992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyni7-0008Ow-C0; Wed, 08 Jun 2022 04:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343624.568992; Wed, 08 Jun 2022 04:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyni7-0008Op-7d; Wed, 08 Jun 2022 04:54:27 +0000
Received: by outflank-mailman (input) for mailman id 343624;
 Wed, 08 Jun 2022 04:54:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyni5-0008OP-Fh; Wed, 08 Jun 2022 04:54:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyni5-000499-BL; Wed, 08 Jun 2022 04:54:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyni4-0002eq-Px; Wed, 08 Jun 2022 04:54:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyni4-0006rH-Oz; Wed, 08 Jun 2022 04:54:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i50V6k4JR5T4TFlh81EhmRQsgg5AAnGaTsmqV2hCMdQ=; b=y5zHWQeddsUE6SuWTiaKN5fWBv
	i2O8v8YKUPNvtahPad0HzvQrvk73U54Ha2DwpIkxl69i9suRhzb9E7LkRYH/xkVWt/HscL+AiuBA8
	q1XwSw2yP7H1p3eNj/394/YK1oUIGJYWHUqsDNWxw2/HA8XtMiN1SZsjU7q/J7VvflF4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170870-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 170870: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=64249afeb63cf7d70b4faf02e76df5eed82371f9
X-Osstest-Versions-That:
    xen=d9e73f6320b311d739546d6325e530f07392c100
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 04:54:24 +0000

flight 170870 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170870/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 169237

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169237
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169237
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169237
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169237
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169237
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169237
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169237
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 169237
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169237
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 169237
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 169237
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 169237
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  64249afeb63cf7d70b4faf02e76df5eed82371f9
baseline version:
 xen                  d9e73f6320b311d739546d6325e530f07392c100

Last test of basis   169237  2022-04-08 13:06:58 Z   60 days
Testing same since   170870  2022-06-07 12:36:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  David Vrabel <dvrabel@amazon.co.uk>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d9e73f6320..64249afeb6  64249afeb63cf7d70b4faf02e76df5eed82371f9 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 05:28:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 05:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343654.569083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoEW-0004r1-Td; Wed, 08 Jun 2022 05:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343654.569083; Wed, 08 Jun 2022 05:27:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoEW-0004qu-Q8; Wed, 08 Jun 2022 05:27:56 +0000
Received: by outflank-mailman (input) for mailman id 343654;
 Wed, 08 Jun 2022 05:27:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyoEU-0004qh-RW; Wed, 08 Jun 2022 05:27:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyoEU-00054p-PU; Wed, 08 Jun 2022 05:27:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyoEU-0005Tw-8J; Wed, 08 Jun 2022 05:27:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyoEU-0007wz-7t; Wed, 08 Jun 2022 05:27:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H1Bl/swMCUSUjR3TaHnpqQcziyrj0pc7btiyGDDzy8Q=; b=6t4b0yTQgc4G43LpL6lrTxfKCg
	LtstLXYtwSAB5bU84yPtXpvID8wq9dA2sRXrWHNUBzCIFxTBuh1HKEDg1E90vZEnahnjxEgYSzaEC
	ZzJucL2X8otPNeEDP1mpWeeq9IRqDSSTEBHpsi1WNNC1sQNLfODIJhxn1lDXlP+3l5lc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170879: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c1cc69748f4719e6ba8a275c2ecd60747c52c21
X-Osstest-Versions-That:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 05:27:54 +0000

flight 170879 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170879/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c1cc69748f4719e6ba8a275c2ecd60747c52c21
baseline version:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106

Last test of basis   170850  2022-06-06 18:00:31 Z    1 days
Testing same since   170879  2022-06-08 01:00:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cea9ae0622..8c1cc69748  8c1cc69748f4719e6ba8a275c2ecd60747c52c21 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 05:36:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 05:36:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343665.569094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoN0-0006UD-RM; Wed, 08 Jun 2022 05:36:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343665.569094; Wed, 08 Jun 2022 05:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoN0-0006U6-OX; Wed, 08 Jun 2022 05:36:42 +0000
Received: by outflank-mailman (input) for mailman id 343665;
 Wed, 08 Jun 2022 05:36:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dgi5=WP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nyoMy-0006U0-Qh
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 05:36:40 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f80dbe59-e6ec-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 07:36:39 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 922BF1F8D1;
 Wed,  8 Jun 2022 05:36:38 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0601E13638;
 Wed,  8 Jun 2022 05:36:38 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id bvEEAGY1oGK7bwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 08 Jun 2022 05:36:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f80dbe59-e6ec-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654666598; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=KG3MDIb67Cqj3RdRueGPvqWvK42QvayLSNru55CBb90=;
	b=l6p0EyyOntxOCkumtHy+TYe+0fcvpu5XzkIU9L9sw5emNs2sb9u6q0cbasELYKZ6guYJ8y
	bqfBuxiQMkkeDsKknt5kFOY103Tn/3WWPgHQ6XipWaOOOY2/0M2q8lnEPJ5c3lL3/UtZ8Q
	Plu258Ciw/rvsjRMm3ohr4Dk/Rh4UAg=
Message-ID: <6f09de90-e287-b2c0-2d28-2689337ed768@suse.com>
Date: Wed, 8 Jun 2022 07:36:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under
 Xen using xen-grant DMA-mapping layer
Content-Language: en-US
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org,
 virtualization@lists.linux-foundation.org, x86@kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig
 <hch@infradead.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Henry Wang <Henry.Wang@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------P34y19ERRK8XCSHtZtdwIlHD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------P34y19ERRK8XCSHtZtdwIlHD
Content-Type: multipart/mixed; boundary="------------4Cjni3eD5XJyKJTrynMPYfTl";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org,
 virtualization@lists.linux-foundation.org, x86@kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig
 <hch@infradead.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Henry Wang <Henry.Wang@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>
Message-ID: <6f09de90-e287-b2c0-2d28-2689337ed768@suse.com>
Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under
 Xen using xen-grant DMA-mapping layer
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>

--------------4Cjni3eD5XJyKJTrynMPYfTl
Content-Type: multipart/mixed; boundary="------------dd4p2vIgYKq6Mo1aY8vQl8HD"

--------------dd4p2vIgYKq6Mo1aY8vQl8HD
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.06.22 21:23, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Hello all.
> 
> The purpose of this patch series is to add support for restricting memory access under Xen using specific
> grant table [1] based DMA-mapping layer. Patch series is based on Juergen Gross’ initial work [2] which implies
> using grant references instead of raw guest physical addresses (GPA) for the virtio communications (some
> kind of the software IOMMU).
> 
> You can find RFC-V3 patch series (and previous discussions) at [3].
> 
> !!! Please note, the only diff between V3 and V4 is in commit #5, also I have collected the acks (commits ##4-7).
> 
> The high level idea is to create new Xen’s grant table based DMA-mapping layer for the guest Linux whose main
> purpose is to provide a special 64-bit DMA address which is formed by using the grant reference (for a page
> to be shared with the backend) with offset and setting the highest address bit (this is for the backend to
> be able to distinguish grant ref based DMA address from normal GPA). For this to work we need the ability
> to allocate contiguous (consecutive) grant references for multi-page allocations. And the backend then needs
> to offer VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support virtio-mmio modern
> transport for 64-bit addresses in the virtqueue).
> 
> Xen's grant mapping mechanism is the secure and safe solution to share pages between domains which proven
> to work and works for years (in the context of traditional Xen PV drivers for example). So far, the foreign
> mapping is used for the virtio backend to map and access guest memory. With the foreign mapping, the backend
> is able to map arbitrary pages from the guest memory (or even from Dom0 memory). And as the result, the malicious
> backend which runs in a non-trusted domain can take advantage of this. Instead, with the grant mapping
> the backend is only allowed to map pages which were explicitly granted by the guest before and nothing else.
> According to the discussions in various mainline threads this solution would likely be welcome because it
> perfectly fits in the security model Xen provides.
> 
> What is more, the grant table based solution requires zero changes to the Xen hypervisor itself at least
> with virtio-mmio and DT (in comparison, for example, with "foreign mapping + virtio-iommu" solution which would
> require the whole new complex emulator in hypervisor in addition to new functionality/hypercall to pass IOVA
> from the virtio backend running elsewhere to the hypervisor and translate it to the GPA before mapping into
> P2M or denying the foreign mapping request if no corresponding IOVA-GPA mapping present in the IOMMU page table
> for that particular device). We only need to update toolstack to insert "xen,grant-dma" IOMMU node (to be referred
> by the virtio-mmio device using "iommus" property) when creating a guest device-tree (this is an indicator for
> the guest to use Xen grant mappings scheme for that device with the endpoint ID being used as ID of Xen domain
> where the corresponding backend is running, the backend domid is used as an argument to the grant mapping APIs).
> It worth mentioning that toolstack patch is based on non upstreamed yet “Virtio support for toolstack on Arm”
> series which is on review now [4].
> 
> Please note the following:
> - Patch series only covers Arm and virtio-mmio (device-tree) for now. To enable the restricted memory access
>    feature on Arm the following option should be set:
>    CONFIG_XEN_VIRTIO=y
> - Patch series is based on "kernel: add new infrastructure for platform_has() support" patch series which
>    is on review now [5]
> - Xen should be built with the following options:
>    CONFIG_IOREQ_SERVER=y
>    CONFIG_EXPERT=y
> 
> Patch series is rebased on "for-linus-5.19" branch [6] with "platform_has()" series applied and tested on Renesas
> Salvator-X board + H3 ES3.0 SoC (Arm64) with standalone userspace (non-Qemu) virtio-mmio based virtio-disk backend
> running in Driver domain and Linux guest running on existing virtio-blk driver (frontend). No issues were observed.
> Guest domain 'reboot/destroy' use-cases work properly.
> I have also tested other use-cases such as assigning several virtio block devices or a mix of virtio and Xen PV block
> devices to the guest. Patch series was build-tested on Arm32 and x86.
> 
> 1. Xen changes located at (last patch):
> https://github.com/otyshchenko1/xen/commits/libxl_virtio_next2_1
> 2. Linux changes located at (last 8 patches):
> https://github.com/otyshchenko1/linux/commits/virtio_grant9
> 3. virtio-disk changes located at:
> https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant
> 
> Any feedback/help would be highly appreciated.
> 
> [1] https://xenbits.xenproject.org/docs/4.16-testing/misc/grant-tables.txt
> [2] https://www.youtube.com/watch?v=IrlEdaIUDPk
> [3] https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekstysh@gmail.com/
>      https://lore.kernel.org/xen-devel/1650646263-22047-1-git-send-email-olekstysh@gmail.com/
>      https://lore.kernel.org/xen-devel/1651947548-4055-1-git-send-email-olekstysh@gmail.com/
>      https://lore.kernel.org/xen-devel/1653944417-17168-1-git-send-email-olekstysh@gmail.com/
> [4] https://lore.kernel.org/xen-devel/1654106261-28044-1-git-send-email-olekstysh@gmail.com/
>      https://lore.kernel.org/xen-devel/1653944813-17970-1-git-send-email-olekstysh@gmail.com/
> [5] https://lore.kernel.org/xen-devel/20220504155703.13336-1-jgross@suse.com/
> [6] https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git/log/?h=for-linus-5.19
> 
> Juergen Gross (3):
>    xen/grants: support allocating consecutive grants
>    xen/grant-dma-ops: Add option to restrict memory access under Xen
>    xen/virtio: Enable restricted memory access using Xen grant mappings
> 
> Oleksandr Tyshchenko (5):
>    arm/xen: Introduce xen_setup_dma_ops()
>    dt-bindings: Add xen,grant-dma IOMMU description for xen-grant DMA ops
>    xen/grant-dma-iommu: Introduce stub IOMMU driver
>    xen/grant-dma-ops: Retrieve the ID of backend's domain for DT devices
>    arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices
> 
>   .../devicetree/bindings/iommu/xen,grant-dma.yaml   |  39 +++
>   arch/arm/include/asm/xen/xen-ops.h                 |   2 +
>   arch/arm/mm/dma-mapping.c                          |   7 +-
>   arch/arm/xen/enlighten.c                           |   2 +
>   arch/arm64/include/asm/xen/xen-ops.h               |   2 +
>   arch/arm64/mm/dma-mapping.c                        |   7 +-
>   arch/x86/xen/enlighten_hvm.c                       |   2 +
>   arch/x86/xen/enlighten_pv.c                        |   2 +
>   drivers/xen/Kconfig                                |  20 ++
>   drivers/xen/Makefile                               |   2 +
>   drivers/xen/grant-dma-iommu.c                      |  78 +++++
>   drivers/xen/grant-dma-ops.c                        | 345 +++++++++++++++++++++
>   drivers/xen/grant-table.c                          | 251 ++++++++++++---
>   include/xen/arm/xen-ops.h                          |  18 ++
>   include/xen/grant_table.h                          |   4 +
>   include/xen/xen-ops.h                              |  13 +
>   include/xen/xen.h                                  |   8 +
>   17 files changed, 756 insertions(+), 46 deletions(-)
>   create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>   create mode 100644 arch/arm/include/asm/xen/xen-ops.h
>   create mode 100644 arch/arm64/include/asm/xen/xen-ops.h
>   create mode 100644 drivers/xen/grant-dma-iommu.c
>   create mode 100644 drivers/xen/grant-dma-ops.c
>   create mode 100644 include/xen/arm/xen-ops.h
> 

Series pushed to xen/tip.git for-linus-5.19a


Juergen
--------------dd4p2vIgYKq6Mo1aY8vQl8HD--

--------------4Cjni3eD5XJyKJTrynMPYfTl--

--------------P34y19ERRK8XCSHtZtdwIlHD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKgNWUFAwAAAAAACgkQsN6d1ii/Ey+Y
VQf+MQwIc1tv3W1x0pqbhO3puOeqRHmfRA5BwDPmgpPLyriNEGyDPD2/94WysvuZSCqvQFFq9V3m
E1XshX6hY1GwLE0tx6CKT7pcRbAJxKr6sV5bn7TUBHE0Xc5A3Tm3IyHzrgPMuK2+2pIIMpN5NpoO
kOq9Exlj3ZNktlJytoEyWcbTqHrSK8L0RNjszXglC8qzG0qkWCaimopz23VnV7tD3mFjGWgz+6qS
RLM+cRYW2A86h07mIFtB5qRvsPIdaCWAe9/MrKt5mZ4QlPaHUX4WEkEF1kjS19bX5pzuhvwwNY5P
YaWzFRnrPQQjKqSl0s+7nLarLEhIvC2tx0qBxDFHFw==
=4rPU
-----END PGP SIGNATURE-----

--------------P34y19ERRK8XCSHtZtdwIlHD--


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 05:37:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 05:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343667.569105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoNV-0006ww-50; Wed, 08 Jun 2022 05:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343667.569105; Wed, 08 Jun 2022 05:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoNV-0006vq-1R; Wed, 08 Jun 2022 05:37:13 +0000
Received: by outflank-mailman (input) for mailman id 343667;
 Wed, 08 Jun 2022 05:37:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dgi5=WP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nyoNT-0006t8-P5
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 05:37:11 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ae79603-e6ed-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 07:37:10 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7CAD121A6B;
 Wed,  8 Jun 2022 05:37:10 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2F07D13638;
 Wed,  8 Jun 2022 05:37:10 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Su0UCoY1oGIDcAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 08 Jun 2022 05:37:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ae79603-e6ed-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654666630; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=e/FPclAX4SEQhI3WAYAvS5WpRBi4c3ovJp5fbx59ihQ=;
	b=gaL6+OK02YkQai6LRYBqGDzbwRYz887l/KNSZ9kW7PRK7y11HZMHu33Qg3KUQvJRWMaUpV
	Xqx9zSHQ4NP5bWaDWM0hG2bL2/58rBbX3t5BpEUM8q6fTaJEn8vECY18p2tiydox3GClBJ
	37oMixruWaW2fUSzaYkXeR1TTau431Y=
Message-ID: <d7ab14e9-2411-774e-21f1-2ecf3cffd2e9@suse.com>
Date: Wed, 8 Jun 2022 07:37:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen: unexport __init-annotated
 xen_xlate_map_ballooned_pages()
Content-Language: en-US
To: Masahiro Yamada <masahiroy@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 moderated for non-subscribers <xen-devel@lists.xenproject.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
 Julien Grall <julien.grall@arm.com>, Shannon Zhao <shannon.zhao@linaro.org>,
 linux-kernel@vger.kernel.org
References: <20220606045920.4161881-1-masahiroy@kernel.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220606045920.4161881-1-masahiroy@kernel.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------oj1mRICSkSejk0rWguOK0LQn"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------oj1mRICSkSejk0rWguOK0LQn
Content-Type: multipart/mixed; boundary="------------JvVY88ac2iha0WITW905MGd9";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Masahiro Yamada <masahiroy@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 moderated for non-subscribers <xen-devel@lists.xenproject.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
 Julien Grall <julien.grall@arm.com>, Shannon Zhao <shannon.zhao@linaro.org>,
 linux-kernel@vger.kernel.org
Message-ID: <d7ab14e9-2411-774e-21f1-2ecf3cffd2e9@suse.com>
Subject: Re: [PATCH] xen: unexport __init-annotated
 xen_xlate_map_ballooned_pages()
References: <20220606045920.4161881-1-masahiroy@kernel.org>
In-Reply-To: <20220606045920.4161881-1-masahiroy@kernel.org>

--------------JvVY88ac2iha0WITW905MGd9
Content-Type: multipart/mixed; boundary="------------ZeiZdpYwEdKOOY0zd3kClKE9"

--------------ZeiZdpYwEdKOOY0zd3kClKE9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDYuMDYuMjIgMDY6NTksIE1hc2FoaXJvIFlhbWFkYSB3cm90ZToNCj4gRVhQT1JUX1NZ
TUJPTCBhbmQgX19pbml0IGlzIGEgYmFkIGNvbWJpbmF0aW9uIGJlY2F1c2UgdGhlIC5pbml0
LnRleHQNCj4gc2VjdGlvbiBpcyBmcmVlZCB1cCBhZnRlciB0aGUgaW5pdGlhbGl6YXRpb24u
IEhlbmNlLCBtb2R1bGVzIGNhbm5vdA0KPiB1c2Ugc3ltYm9scyBhbm5vdGF0ZWQgX19pbml0
LiBUaGUgYWNjZXNzIHRvIGEgZnJlZWQgc3ltYm9sIG1heSBlbmQgdXANCj4gd2l0aCBrZXJu
ZWwgcGFuaWMuDQo+IA0KPiBtb2Rwb3N0IHVzZWQgdG8gZGV0ZWN0IGl0LCBidXQgaXQgaGFz
IGJlZW4gYnJva2VuIGZvciBhIGRlY2FkZS4NCj4gDQo+IFJlY2VudGx5LCBJIGZpeGVkIG1v
ZHBvc3Qgc28gaXQgc3RhcnRlZCB0byB3YXJuIGl0IGFnYWluLCB0aGVuIHRoaXMNCj4gc2hv
d2VkIHVwIGluIGxpbnV4LW5leHQgYnVpbGRzLg0KPiANCj4gVGhlcmUgYXJlIHR3byB3YXlz
IHRvIGZpeCBpdDoNCj4gDQo+ICAgIC0gUmVtb3ZlIF9faW5pdA0KPiAgICAtIFJlbW92ZSBF
WFBPUlRfU1lNQk9MDQo+IA0KPiBJIGNob3NlIHRoZSBsYXR0ZXIgZm9yIHRoaXMgY2FzZSBi
ZWNhdXNlIG5vbmUgb2YgdGhlIGluLXRyZWUgY2FsbC1zaXRlcw0KPiAoYXJjaC9hcm0veGVu
L2VubGlnaHRlbi5jLCBhcmNoL3g4Ni94ZW4vZ3JhbnQtdGFibGUuYykgaXMgY29tcGlsZWQg
YXMNCj4gbW9kdWxhci4NCj4gDQo+IEZpeGVzOiAyNDM4NDhmYzAxOGMgKCJ4ZW4vZ3JhbnQt
dGFibGU6IE1vdmUgeGxhdGVkX3NldHVwX2dudHRhYl9wYWdlcyB0byBjb21tb24gcGxhY2Ui
KQ0KPiBSZXBvcnRlZC1ieTogU3RlcGhlbiBSb3Rod2VsbCA8c2ZyQGNhbmIuYXV1Zy5vcmcu
YXU+DQo+IFNpZ25lZC1vZmYtYnk6IE1hc2FoaXJvIFlhbWFkYSA8bWFzYWhpcm95QGtlcm5l
bC5vcmc+DQoNClB1c2hlZCB0byB4ZW4vdGlwLmdpdCBmb3ItbGludXMtNS4xOWENCg0KDQpK
dWVyZ2VuDQo=
--------------ZeiZdpYwEdKOOY0zd3kClKE9
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ZeiZdpYwEdKOOY0zd3kClKE9--

--------------JvVY88ac2iha0WITW905MGd9--

--------------oj1mRICSkSejk0rWguOK0LQn
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKgNYUFAwAAAAAACgkQsN6d1ii/Ey9e
Xgf+KwHpnK/tq7Xc1VnnhPGEDHSqDxRDwlvwcfgFFl2Z9EqwXcDwIQ7HGtnRVS5FyxGzDx1Z/RrA
tuPi1k/MjwoqMVWMuoxRxfoIe2fQN/3PtawkVviWyNjL+3D8QDwks/dBhlrnYxHMP6XCtwtoxzz7
jGla6k3nDKEsb5GI7d6aaMzt1dXduBIqW/e8QX46rh7isnR7byqkGv8hC+4VQ8X8iOH2KAIvX1UV
k7EAwLZzhxXgegCDr7CWvb/jOqo7dnKmVPAyvUeqDWou29zlq+XovMB7I5gpl8q0SXr8SOJsB89s
fFNGCjpClpothDKrjtjlZ3RxwXgQHP+ZIviJcJZqdw==
=tDYM
-----END PGP SIGNATURE-----

--------------oj1mRICSkSejk0rWguOK0LQn--


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 05:50:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 05:50:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343682.569116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoaD-00015g-GV; Wed, 08 Jun 2022 05:50:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343682.569116; Wed, 08 Jun 2022 05:50:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyoaD-00015Z-Dd; Wed, 08 Jun 2022 05:50:21 +0000
Received: by outflank-mailman (input) for mailman id 343682;
 Wed, 08 Jun 2022 05:50:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Sw9=WP=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1nyoaB-00015T-QQ
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 05:50:19 +0000
Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com
 [2607:f8b0:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfbd124f-e6ee-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 07:50:18 +0200 (CEST)
Received: by mail-pg1-x52e.google.com with SMTP id e66so17932723pgc.8
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jun 2022 22:50:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfbd124f-e6ee-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=WMH4xaktW67fldV3OvwLVmjB8V7WBSrOQAIMHKOo7/0=;
        b=YLtN/1xvl1ndg9z+YLs3P1QkANWxHp+nqIdUTuj14TOK/KOLmdMastT0ADpzO0GAgn
         ajXH0SI6RuWci8WOgL5Uy+RbF9CRqjfIEw5MjCG+h2DXsJu+4LQ4iRZ5RyDWtzWlO6MI
         +p19x5KFRplnCE+e1ptn/+ddNOtR+9mBSbvB6fDIsXMRIuTfe9D/VRHkoh8R3Hjore9o
         /uFtC+Akjt3JODqhkUe2kuLttMeX7Aga7RnAGekwAbHIWnmHIpd25O7Ib3YQmA0NXq3w
         bBXkMUeoEZjAfpDUdjxjHJH6x4nyiE/N7wd3bF3zZvMBb1HSyGhUi4EufQ2K1nhNDFDB
         vaCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=WMH4xaktW67fldV3OvwLVmjB8V7WBSrOQAIMHKOo7/0=;
        b=gfMGJUZg/vyqfDoA+0XrJITZmV8JQVR8EyLQvKFyn1d9JS8hQb6GwxzSUhccsTEUIG
         soC5QxLkrmGJcMopql/svQKg1sXunJePqaDdeB/ElS4Koh25PqQ2OKZtKL4AqvNFdlqV
         ozgfcRvkrNT3lK3t4VHIFTcn+NpxG9xT8q0F88uDBiiPTR8oREYg5+G/7/mC64UL4WwK
         4LqvG4sWSKX8+DM5MLwxyI4puvIWgGxl4y6ncBAHU8IJoVVg1jej9seqkAvYQq6mEvY6
         hY6VTQOud/undRjpDAw4HfeLupHFoAOh/KGc8iLCkrtUgBVE5Bd7jotT93gkamMfElyL
         VEhw==
X-Gm-Message-State: AOAM532O/Rft6uuLcLg/GjqwLSyS0dUnRnA5VkQLWYw43OJs16evPqEl
	mR4Lq+DmEr461lhnLkJSDlbTf+x5g7h3gBpfC1PvWw==
X-Google-Smtp-Source: ABdhPJw690q0GJUcYUgZdXT7/vJPtzYrYeX6BrX8JGhe6gcNUrzWLQNwHE8Nbdl7KINDk1AiVve6p6QStmlyiBBfdjs=
X-Received: by 2002:a65:674c:0:b0:3fd:cf53:6d44 with SMTP id
 c12-20020a65674c000000b003fdcf536d44mr11244820pgu.428.1654667416424; Tue, 07
 Jun 2022 22:50:16 -0700 (PDT)
MIME-Version: 1.0
References: <20220607101010.3136600-1-jens.wiklander@linaro.org> <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Wed, 8 Jun 2022 07:50:05 +0200
Message-ID: <CAHUa44EL8oj4St=wX2Akio5tV1yH3kKs3SapJA9GFBRVezEARQ@mail.gmail.com>
Subject: Re: [PATCH 0/2] Xen FF-A mediator
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

Hi Stefano,

On Tue, Jun 7, 2022 at 11:55 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Tue, 7 Jun 2022, Jens Wiklander wrote:
> > Hi,
> >
> > This patch set adds an FF-A [1] mediator modeled after the TEE mediator
> > already present in Xen. The FF-A mediator implements the subset of the FF-A
> > 1.1 specification needed to communicate with OP-TEE using FF-A as transport
> > mechanism instead of SMC/HVC as with the TEE mediator. It allows a similar
> > design in OP-TEE as with the TEE mediator where OP-TEE presents one virtual
> > partition of itself to each guest in Xen.
> >
> > The FF-A mediator is generic in the sense it has nothing OP-TEE specific
> > except that only the subset needed for OP-TEE is implemented so far. The
> > hooks needed to inform OP-TEE that a guest is created or destroyed are part
> > of the FF-A specification.
> >
> > It should be possible to extend the FF-A mediator to implement a larger
> > portion of the FF-A 1.1 specification without breaking the way OP-TEE
> > is communicated with here. So it should be possible to support any TEE or
> > Secure Partition using FF-A as transport with this mediator.
> >
> > [1] https://developer.arm.com/documentation/den0077/latest
> >
> > Thanks,
> > Jens
>
> Hi Jens,
>
> Many thanks for the patches! I tried to apply them to the master branch
> but unfortunately they don't apply any longer. Could you please rebase
> them on master (or even better rebase them on staging) and resend?

No problem, I'll rebase and send out a v2.

Thanks,
Jens

>
> Thank you!
>
>
>
> > Jens Wiklander (2):
> >   xen/arm: smccc: add support for SMCCCv1.2 extended input/output
> >     registers
> >   xen/arm: add FF-A mediator
> >
> >  xen/arch/arm/Kconfig         |   11 +
> >  xen/arch/arm/Makefile        |    1 +
> >  xen/arch/arm/arm64/smc.S     |   43 +
> >  xen/arch/arm/domain.c        |   10 +
> >  xen/arch/arm/ffa.c           | 1624 ++++++++++++++++++++++++++++++++++
> >  xen/arch/arm/vsmc.c          |   19 +-
> >  xen/include/asm-arm/domain.h |    4 +
> >  xen/include/asm-arm/ffa.h    |   71 ++
> >  xen/include/asm-arm/smccc.h  |   42 +
> >  9 files changed, 1821 insertions(+), 4 deletions(-)
> >  create mode 100644 xen/arch/arm/ffa.c
> >  create mode 100644 xen/include/asm-arm/ffa.h
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 06:41:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 06:41:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343702.569135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nypNb-0006yK-JR; Wed, 08 Jun 2022 06:41:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343702.569135; Wed, 08 Jun 2022 06:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nypNb-0006yD-GS; Wed, 08 Jun 2022 06:41:23 +0000
Received: by outflank-mailman (input) for mailman id 343702;
 Wed, 08 Jun 2022 06:41:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Sw9=WP=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1nypNa-0006y7-VA
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 06:41:23 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 01bb6ab8-e6f6-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 08:41:21 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 w2-20020a17090ac98200b001e0519fe5a8so17486451pjt.4
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jun 2022 23:41:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01bb6ab8-e6f6-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=a/sCgo9v/efv0g+VDiQLZhtLiKBOj2X+pUuTz2zgxFs=;
        b=bEaIz2+5gq4CgE0nIzhKk7HytX9pUXUbNQmMS0giHUXeS+w1eZKGXfNlITgwaVMTVT
         6DzTFMp0t9ZVvV1O5Pu9cdyjVQCdBdzRZT2OQyMv/wYA3qPBFkRdSfZohQBsoeYmk5R0
         rd6eIZ5yglo4eX1EYW/eMGNqA0djmJzppLJeDUPay4i2MY2O1voxo0M4tdMt+4uC2O+i
         ePeXhgK9iMicrzUvARDYbxGgP2nP5POcor88KIyZWZICO+i7vYQDShS0CaYM8jPv6jbN
         QhYVJGxBol2tXyALVTbS4cWcPoXpAYZibZwNbp0rBmgEV+MKr4YAyMk/0TZJ2T+m/kqO
         m+PQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=a/sCgo9v/efv0g+VDiQLZhtLiKBOj2X+pUuTz2zgxFs=;
        b=oAGba/6hLC3IKdNbIKHHZj7d2TMck88Gr5EH2e/RL11NhMK4GtEQLTtw3VCkF0z49F
         ZjSoc129yKnO3cMk7p7Zq7+ibXz22v0oiYNukO5tNaylb3yOhaAjKFhaqUjPIjpKTfDT
         mzHNOWV6pXUfvuBxBPZsaz+YmQCLyq17LsT1FZpug+O5r85/aEhae7EuvBnZLXARvQev
         Q46dDN2gJDXuNg6hupn9B2GNKl6yr//gSGDkjNK2H/XXKnuSVaDlvqZd3mVUteXAxaWN
         XmaCuInCtUStJirM8Hcsi6z1dK2NJi1qHG9XhCCGcYs9KqEn8UGSPA5XBMvhtGVKSjq0
         Q3Jw==
X-Gm-Message-State: AOAM532xUJRKjzxUxR2lXrVcXdUF0Aw8OZ82MjvXR9rAao4lLRRDK4lO
	GLTQxywhH7UoiV5axxsYlfxqLbEucs8j/zuTljLEAV+1Ul/H/w==
X-Google-Smtp-Source: ABdhPJz0shapgQITbxTI6b/BAtMb3pKcdHJgDK5/zKbirtbCi9sz7LDJeqY4WIREFx9+9Vvar5kFP6is37QCeH8kWt0=
X-Received: by 2002:a17:90b:388c:b0:1e2:f376:b8d with SMTP id
 mu12-20020a17090b388c00b001e2f3760b8dmr53598507pjb.148.1654670480104; Tue, 07
 Jun 2022 23:41:20 -0700 (PDT)
MIME-Version: 1.0
References: <20220607101010.3136600-1-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206071454020.21215@ubuntu-linux-20-04-desktop> <alpine.DEB.2.22.394.2206071606550.21215@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2206071606550.21215@ubuntu-linux-20-04-desktop>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Wed, 8 Jun 2022 08:41:09 +0200
Message-ID: <CAHUa44E2jnPsm=hPKOd54PUwzLFEc8Zwjup1d0K5YKc6GkEmxQ@mail.gmail.com>
Subject: Re: [PATCH 0/2] Xen FF-A mediator
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 8, 2022 at 1:07 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Tue, 7 Jun 2022, Stefano Stabellini wrote:
> > On Tue, 7 Jun 2022, Jens Wiklander wrote:
> > > Hi,
> > >
> > > This patch set adds an FF-A [1] mediator modeled after the TEE mediator
> > > already present in Xen. The FF-A mediator implements the subset of the FF-A
> > > 1.1 specification needed to communicate with OP-TEE using FF-A as transport
> > > mechanism instead of SMC/HVC as with the TEE mediator. It allows a similar
> > > design in OP-TEE as with the TEE mediator where OP-TEE presents one virtual
> > > partition of itself to each guest in Xen.
> > >
> > > The FF-A mediator is generic in the sense it has nothing OP-TEE specific
> > > except that only the subset needed for OP-TEE is implemented so far. The
> > > hooks needed to inform OP-TEE that a guest is created or destroyed are part
> > > of the FF-A specification.
> > >
> > > It should be possible to extend the FF-A mediator to implement a larger
> > > portion of the FF-A 1.1 specification without breaking the way OP-TEE
> > > is communicated with here. So it should be possible to support any TEE or
> > > Secure Partition using FF-A as transport with this mediator.
> > >
> > > [1] https://developer.arm.com/documentation/den0077/latest
> > >
> > > Thanks,
> > > Jens
> >
> > Hi Jens,
> >
> > Many thanks for the patches! I tried to apply them to the master branch
> > but unfortunately they don't apply any longer. Could you please rebase
> > them on master (or even better rebase them on staging) and resend?
> >
> > Thank you!
>
> One question without having looked at the patches in details. These
> patches are necessary to mediate OS (e.g. Linux) interactions with
> OPTEE. The difference between xen/arch/arm/ffa.c and
> xen/arch/arm/tee/optee.c is the transport mechanism: shared mem vs. SMC.
> Is that right?
>
> If only the transport is different, would it make sense to place ffa.c
> under xen/arch/arm/tee?

FF-A is a standard Arm service with a different range of SMC Function
Identifiers. FF-A is used to communicate with SPs, Secure Partitions.
An SP can be a TEE but also something different.

There are similarities between xen/arch/arm/ffa.c and
xen/arch/arm/tee/optee.c, but I believe they are mostly broad,
surface-level similarities.

FF-A is a transport protocol generic enough that Xen doesn't need to
know anything about the entity it's facilitating communication with
beyond what FF-A itself provides. The idea is that only the end points
need to be aware of the details of the protocol run on top of FF-A.
This means that EL2 (Xen), EL3 (SPMD/TF-A) and S-EL2 (SPMC/Hafnium)
can in principle be kept unchanged and agnostic to Trusted OSes and
the like.
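The separation described above works because FF-A occupies its own range of
SMC function identifiers, so a hypervisor can route the calls without parsing
their payload. A minimal sketch in plain C of what such routing might look
like (this is an illustration, not Xen's actual vsmc.c code; the routing enum
and the 0x60-0xff function-number bound are assumptions made for the sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * SMCCC encodes the owning service in bits 29:24 of the function ID;
 * owner 4 is the "Standard Secure Service" range that FF-A lives in.
 */
#define SMCCC_OWNER_MASK     0x3f000000u
#define SMCCC_OWNER_SHIFT    24
#define SMCCC_OWNER_STANDARD 4u

/*
 * FF-A function IDs start at function number 0x60 (e.g. FFA_VERSION is
 * 0x84000063); the upper bound used here is illustrative.
 */
static bool is_ffa_fid(uint32_t fid)
{
    uint32_t owner = (fid & SMCCC_OWNER_MASK) >> SMCCC_OWNER_SHIFT;
    uint32_t fn = fid & 0xffffu;

    return owner == SMCCC_OWNER_STANDARD && fn >= 0x60 && fn <= 0xff;
}

/* Hypothetical routing result for a vsmc.c-style trap handler. */
typedef enum { ROUTE_FFA, ROUTE_OTHER } smc_route_t;

static smc_route_t route_smc(uint32_t fid)
{
    if (is_ffa_fid(fid))
        return ROUTE_FFA;   /* would be handled by the FF-A mediator */
    return ROUTE_OTHER;     /* PSCI, TEE mediator, etc. */
}
```

Note that PSCI also uses owner 4 (e.g. PSCI_VERSION is 0x84000000), which is
why the function-number range, not just the owning entity, has to be checked.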

>
> Without having looked at the details of the transport or the FF-A
> protocol let me ask you a question. Do you think it would be possible to
> share part of the implementation with xen/arch/arm/tee/optee.c? I am
> asking because intuitively, if only the transport is different, I would
> have thought that some things could be common. But it doesn't look like
> the current patches are reusing anything from xen/arch/arm/tee/optee.c.
> Are the two protocols too different?

The two protocols are quite different. So far I haven't been able to
identify suitable common helper functions.

Thanks,
Jens


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 07:53:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 07:53:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343710.569146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyqVB-0006LG-Tg; Wed, 08 Jun 2022 07:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343710.569146; Wed, 08 Jun 2022 07:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyqVB-0006L9-Pq; Wed, 08 Jun 2022 07:53:17 +0000
Received: by outflank-mailman (input) for mailman id 343710;
 Wed, 08 Jun 2022 07:53:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyqVA-0006Jg-KV; Wed, 08 Jun 2022 07:53:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyqVA-0007aO-Eu; Wed, 08 Jun 2022 07:53:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyqV9-0001eV-7T; Wed, 08 Jun 2022 07:53:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyqV9-0000I0-71; Wed, 08 Jun 2022 07:53:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NEA/fP26dOBPV2xEBuSnbrG1uhR8crbc3xE/pzZXH+I=; b=qUqbqNMpmi9olPZT8CBvGaCMm6
	D3uDlsxMtgzu7ZU3I/3376kVPNY4ziIyQT99FT4HVZaIOW4ZXlnXUI7+p3stPKUqYko9AUt7+Vlv2
	0LgGUxVTaHVkNuwiXS7yVsA5ae8qRuHoUtHq/dNn3Q9ck53cD/YRAR/P9Q0ER0wV9Ut8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 170871: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-xl-vhd:xen-install:fail:heisenbug
    xen-4.16-testing:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e11ec8fbf6f933f8854f4bc54226653316903f2
X-Osstest-Versions-That:
    xen=f26544492298cb82d66f9bf36e29d2f75b3133f2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 07:53:15 +0000

flight 170871 xen-4.16-testing real [real]
flight 170882 xen-4.16-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170871/
http://logs.test-lab.xenproject.org/osstest/logs/170882/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd        7 xen-install         fail pass in 170882-retest
 test-amd64-i386-libvirt-raw   7 xen-install         fail pass in 170882-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 170882 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169333
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169333
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169333
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169333
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169333
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 169333
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 169333
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169333
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169333
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169333
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 169333
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 169333
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  8e11ec8fbf6f933f8854f4bc54226653316903f2
baseline version:
 xen                  f26544492298cb82d66f9bf36e29d2f75b3133f2

Last test of basis   169333  2022-04-12 12:36:49 Z   56 days
Testing same since   170871  2022-06-07 12:36:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  David Vrabel <dvrabel@amazon.co.uk>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f265444922..8e11ec8fbf  8e11ec8fbf6f933f8854f4bc54226653316903f2 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 09:42:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 09:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343752.569245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysCg-0002cl-EK; Wed, 08 Jun 2022 09:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343752.569245; Wed, 08 Jun 2022 09:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysCg-0002ce-BP; Wed, 08 Jun 2022 09:42:18 +0000
Received: by outflank-mailman (input) for mailman id 343752;
 Wed, 08 Jun 2022 09:42:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nysCe-0002cY-8W
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 09:42:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nysCd-0001ad-SD; Wed, 08 Jun 2022 09:42:15 +0000
Received: from [54.239.6.189] (helo=[192.168.10.106])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nysCd-0007Sv-Kg; Wed, 08 Jun 2022 09:42:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=CSO9GolwBr1UYgcGNdZNafgXrjTnSwewvwOS535JqD0=; b=oMGtuUmooWoWAXuPe3guBYe0E1
	fLad/RENSNnk2gI8ULNkFeoSrofSMfqHNRsWyBijTHt5yIYhvEMnK81BLSVj9YUZsk8vZOafPgtCE
	iclH1FwBniqoranAk+euy0IABhKO80ufbDWwBv8cg5npFiy6F0bwHeknTQUoNTcFLzHE=;
Message-ID: <f9b78205-bbc1-9675-f7fe-79cd40ce2baa@xen.org>
Date: Wed, 8 Jun 2022 10:42:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v3 2/2] xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
 xmalloc()
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Wei Chen <Wei.Chen@arm.com>,
 Julien Grall <jgrall@amazon.com>
References: <20220507025434.1063710-1-Henry.Wang@arm.com>
 <20220507025434.1063710-3-Henry.Wang@arm.com>
 <5c0e81f1-fac4-f14f-f4a1-2a00f6d16f47@xen.org>
 <AS8PR08MB7991500E0D0127986F4D957B92D79@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB7991500E0D0127986F4D957B92D79@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 24/05/2022 02:53, Henry Wang wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH v3 2/2] xen/common: Use enhanced
>> ASSERT_ALLOC_CONTEXT in xmalloc()
>>
>> Hi,
>>
>> On 07/05/2022 03:54, Henry Wang wrote:
>>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>>> index e866e0d864..ea59cd1a4a 100644
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -162,13 +162,6 @@
>>>    static char __initdata opt_badpage[100] = "";
>>>    string_param("badpage", opt_badpage);
>>>
>>> -/*
>>> - * Heap allocations may need TLB flushes which may require IRQs to be
>>> - * enabled (except when only 1 PCPU is online).
>>> - */
>>> -#define ASSERT_ALLOC_CONTEXT() \
>>> -    ASSERT(!in_irq() && (local_irq_is_enabled() || num_online_cpus() <=
>> 1))
>>> -
>> FYI, the patch introducing ASSERT_ALLOC_CONTEXT() has been reverted. I
>> intend to re-introduce it once your previous patch and the one fixing
>> the ITS (not yet formally sent) have been committed.
> 
> Thanks for the information! IIUC the patch:
> "xen/arm: gic-v3-lpi: Allocate the pending table while preparing the CPU"
> is merged. So I guess both "page_alloc: assert IRQs are enabled in heap alloc/free"
> and this patch can be re-introduced if everyone is happy with the patch?

I have re-committed David's patch and committed yours.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 10:01:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 10:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343765.569264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysVO-0005NC-7L; Wed, 08 Jun 2022 10:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343765.569264; Wed, 08 Jun 2022 10:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysVO-0005N5-2Y; Wed, 08 Jun 2022 10:01:38 +0000
Received: by outflank-mailman (input) for mailman id 343765;
 Wed, 08 Jun 2022 10:01:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nysVN-0005Mw-Bb
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 10:01:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nysVM-00021A-9B; Wed, 08 Jun 2022 10:01:36 +0000
Received: from [54.239.6.189] (helo=[192.168.10.106])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nysVL-0000Gq-UZ; Wed, 08 Jun 2022 10:01:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=p0wvtAVS4QIIRYpjqkGJh7255/WfzqDwRwO7WO5ZFxs=; b=c2+wrRMO5B3JA1Xsy+2s7b/Zgy
	4Adgcr0AbtvTSifHOMbGO7kSaukSsuI4wXhGCidH6mjwBYpj14hJ3VuSl/8bbDLrxtoASspLIQ6Ma
	sIri0NdS5+BEhXUmOC64fBJzAzZMC2GXfl0ImIY82vTKRJ6eFRMSY4QFATc5ei34ZtAQ=;
Message-ID: <dec50384-5172-67b6-f4ac-a9c40d80a641@xen.org>
Date: Wed, 8 Jun 2022 11:01:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH 10/16] xen/arm: add Persistent Map (PMAP) infrastructure
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Hongyan Xia <hongyxia@amazon.com>,
 Julien Grall <jgrall@amazon.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20220520120937.28925-1-julien@xen.org>
 <20220520120937.28925-11-julien@xen.org>
 <alpine.DEB.2.22.394.2206071806390.21215@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206071806390.21215@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 08/06/2022 02:08, Stefano Stabellini wrote:
>> diff --git a/xen/arch/arm/include/asm/pmap.h b/xen/arch/arm/include/asm/pmap.h
>> new file mode 100644
>> index 000000000000..74398b4c4fe6
>> --- /dev/null
>> +++ b/xen/arch/arm/include/asm/pmap.h
>> @@ -0,0 +1,32 @@
>> +#ifndef __ASM_PMAP_H__
>> +#define __ASM_PMAP_H__
>> +
>> +#include <xen/mm.h>
>> +
>> +#include <asm/fixmap.h>
>> +
>> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
>> +{
>> +    lpae_t *entry = &xen_fixmap[slot];
>> +    lpae_t pte;
>> +
>> +    ASSERT(!lpae_is_valid(*entry));
>> +
>> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
>> +    pte.pt.table = 1;
>> +    write_pte(entry, pte);
> 
> Here we don't need a tlb flush because we never go from a valid mapping
> to another valid mapping.

A TLB flush would not be sufficient here. You would need to follow the 
break-before-make sequence in order to replace a valid mapping with 
another valid mapping.

> We also go through arch_pmap_unmap which
> clears the mapping and also flushes the tlb. Is that right?

The PMAP code uses a bitmap to track which entries are in use. So when 
arch_pmap_map() is called, we also guarantee the entry will be invalid 
(hence the ASSERT(!lpae_is_valid())).

The bit in the bitmap will only be cleared by pmap_unmap(), which will 
result in a TLB flush.

>> +}
>> +
>> +static inline void arch_pmap_unmap(unsigned int slot)
>> +{
>> +    lpae_t pte = {};
>> +
>> +    write_pte(&xen_fixmap[slot], pte);
>> +
>> +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
>> +}
>> +
>> +void arch_pmap_map_slot(unsigned int slot, mfn_t mfn);
>> +void arch_pmap_clear_slot(void *ptr);
> 
> What are these two? They are not defined anywhere?

They are left-overs. I will drop them.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 10:17:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 10:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343773.569275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysks-00078l-Hs; Wed, 08 Jun 2022 10:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343773.569275; Wed, 08 Jun 2022 10:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nysks-00078e-El; Wed, 08 Jun 2022 10:17:38 +0000
Received: by outflank-mailman (input) for mailman id 343773;
 Wed, 08 Jun 2022 10:17:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyskr-00078Y-5Z
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 10:17:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyskp-0002I3-Vz; Wed, 08 Jun 2022 10:17:35 +0000
Received: from [54.239.6.189] (helo=[192.168.10.106])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyskp-0001J5-J4; Wed, 08 Jun 2022 10:17:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=IoMv2n41yEwpZh6S8L3Tl/wbkXKn28BjTOqLofJm3/s=; b=rDZc7qeZZLknBhTCLY7apulSTi
	h+8AZ+9sOuKeWcV063q8Mt8vdo9kXivt0tqmf+9JxWbf40MU7gN0W0mfY2GkXh+43HxQg7qKH/fHO
	q/4KQQ8AVJUrV7WlRAwcBO/m82VRYmVdgPhf4V65pQ3Eyif120yjiIopUmgSHNFEMlmk=;
Message-ID: <ab4d4425-400b-2745-589f-d643d67a4f59@xen.org>
Date: Wed, 8 Jun 2022 11:17:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH] xen/arm: Allow setting the number of CPUs to activate at
 runtime
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220523091324.137350-1-michal.orzel@arm.com>
 <A1DC403E-3BAF-4BED-AFF1-68313005669F@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <A1DC403E-3BAF-4BED-AFF1-68313005669F@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 30/05/2022 10:09, Bertrand Marquis wrote:
>> On 23 May 2022, at 10:13, Michal Orzel <Michal.Orzel@arm.com> wrote:
>>
>> Introduce a command line parameter "maxcpus" on Arm to allow adjusting
>> the number of CPUs to activate. Currently the limit is defined by the
>> config option CONFIG_NR_CPUS. Such a parameter already exists on x86.
>>
>> Define a parameter "maxcpus" and a corresponding static variable
>> max_cpus in Arm smpboot.c. Modify function smp_get_max_cpus to take
>> max_cpus as a limit and to return a proper unsigned int instead of int.
>>
>> Take the opportunity to remove redundant variable cpus from start_xen
>> function and to directly assign the return value from smp_get_max_cpus
>> to nr_cpu_ids (global variable in Xen used to store the number of CPUs
>> actually activated).
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> 
> With the warning added in the documentation (which is ok to do on commit):
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I have committed it with the update in the documentation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 10:40:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 10:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343792.569326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyt79-0002aS-TM; Wed, 08 Jun 2022 10:40:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343792.569326; Wed, 08 Jun 2022 10:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyt79-0002aL-Q8; Wed, 08 Jun 2022 10:40:39 +0000
Received: by outflank-mailman (input) for mailman id 343792;
 Wed, 08 Jun 2022 10:40:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyt79-0002aB-C7; Wed, 08 Jun 2022 10:40:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyt79-0002iH-8m; Wed, 08 Jun 2022 10:40:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyt78-00067c-U6; Wed, 08 Jun 2022 10:40:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyt78-0001WP-TX; Wed, 08 Jun 2022 10:40:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QpdhfR2QZT5NCjS9Ny4AhtmoDw5eqTus2Tbsd35ma/U=; b=ddPWIr0U78eyX0/R6ng7k0FvZy
	kbrmo9wUkRhyecrWXbvLBe/YrGXSvYcInJXYbTlSZc22JGbm25u2XTnvUPGgGxOytQ05CRX4igrgS
	7g+fGb/bazab6MX5WCueWvaQWL0Xt1YfDqdqxiW0ROx2ikDGjT1w+GLAf7rxUVSdbfOI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170874-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170874: FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
X-Osstest-Versions-That:
    qemuu=57c9363c452af64fe058aa946cc923eae7f7ad33
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 10:40:38 +0000

flight 170874 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170874/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow <job status> broken in 170858

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken in 170858 pass in 170874
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 170858

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170849
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170849
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170849
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170849
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170849
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 170849
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170849
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170849
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170849
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
baseline version:
 qemuu                57c9363c452af64fe058aa946cc923eae7f7ad33

Last test of basis   170849  2022-06-06 17:39:05 Z    1 days
Testing same since   170858  2022-06-07 04:25:53 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Song Gao <gaosong@loongson.cn>
  Xiaojuan Yang <yangxiaojuan@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow broken

Not pushing.

(No revision log; it would be 734 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 13:27:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 13:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343810.569336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyviW-0002nu-1Q; Wed, 08 Jun 2022 13:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343810.569336; Wed, 08 Jun 2022 13:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyviV-0002nn-Uf; Wed, 08 Jun 2022 13:27:23 +0000
Received: by outflank-mailman (input) for mailman id 343810;
 Wed, 08 Jun 2022 13:27:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyviV-0002nc-Gb; Wed, 08 Jun 2022 13:27:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyviV-0005aC-En; Wed, 08 Jun 2022 13:27:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyviV-0006I4-4U; Wed, 08 Jun 2022 13:27:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyviV-0001BD-3v; Wed, 08 Jun 2022 13:27:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9DRJ7pX7aib9PA/6iBUXJJBmHHmp/KoLdXb+KMqtBIw=; b=1Do2njb6la7SckPUQcFKcU5PtH
	2NAkPAOUC+wOcwBoaUhJEMj97XBBFGa9ERxdRYCBRzSsPwjn9Zbu4prjweiSIv82FoX/dDCLsE6ZY
	wGpfJthAp6JwMTShSRq2/LZRufCTcUhuQoUDYuKT7ReaSp13UQ0TBPfmjD5cb6eHlvgE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170883-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170883: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5047cd1d5deaa734ce67b4d706ac59d9a258c3e1
X-Osstest-Versions-That:
    xen=8c1cc69748f4719e6ba8a275c2ecd60747c52c21
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 13:27:23 +0000

flight 170883 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170883/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5047cd1d5deaa734ce67b4d706ac59d9a258c3e1
baseline version:
 xen                  8c1cc69748f4719e6ba8a275c2ecd60747c52c21

Last test of basis   170879  2022-06-08 01:00:32 Z    0 days
Testing same since   170883  2022-06-08 10:02:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <dvrabel@amazon.co.uk>
  Henry Wang <Henry.Wang@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c1cc69748..5047cd1d5d  5047cd1d5deaa734ce67b4d706ac59d9a258c3e1 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:13:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:13:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343821.569347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywRD-0000A4-Cf; Wed, 08 Jun 2022 14:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343821.569347; Wed, 08 Jun 2022 14:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywRD-00009u-9q; Wed, 08 Jun 2022 14:13:35 +0000
Received: by outflank-mailman (input) for mailman id 343821;
 Wed, 08 Jun 2022 14:13:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywRB-00009h-Qq; Wed, 08 Jun 2022 14:13:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywRB-0006T4-P9; Wed, 08 Jun 2022 14:13:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywRB-0007xy-De; Wed, 08 Jun 2022 14:13:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nywRB-0007VF-DE; Wed, 08 Jun 2022 14:13:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2OFx+eP1trTGvb8yOwQShBNse4OoLKit0MdAZCllE+Q=; b=TJ555VUb3H6xUFGVrOACKPlo3G
	MAiYBqlYI/htF0drWU3TL8vMA9iGbhkpH1Wp/waf9iNVnuh7rFe5gHoRa9ZUCLvpA9FS5vOixurPh
	0gPDOUoA6qjaoX5veHPKmPSKAFZFDVKvk+TAcVjHMi3fhtGmE/GpcwW5nYj/BgsB093w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170885-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170885: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ff36b2550f94dc5fac838cf298ae5a23cfddf204
X-Osstest-Versions-That:
    ovmf=a81a650da1dc40ec2b2825d1878cdf2778b4be14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 14:13:33 +0000

flight 170885 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170885/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ff36b2550f94dc5fac838cf298ae5a23cfddf204
baseline version:
 ovmf                 a81a650da1dc40ec2b2825d1878cdf2778b4be14

Last test of basis   170867  2022-06-07 11:10:44 Z    1 days
Testing same since   170885  2022-06-08 12:11:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a81a650da1..ff36b2550f  ff36b2550f94dc5fac838cf298ae5a23cfddf204 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:40:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:40:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343832.569359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywqh-0002rC-Gx; Wed, 08 Jun 2022 14:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343832.569359; Wed, 08 Jun 2022 14:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywqh-0002r5-Dx; Wed, 08 Jun 2022 14:39:55 +0000
Received: by outflank-mailman (input) for mailman id 343832;
 Wed, 08 Jun 2022 14:39:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywqg-0002qq-Fb; Wed, 08 Jun 2022 14:39:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywqg-0006wb-BM; Wed, 08 Jun 2022 14:39:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nywqf-0000FF-Ql; Wed, 08 Jun 2022 14:39:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nywqf-0005EY-QE; Wed, 08 Jun 2022 14:39:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1guT4xqzsutadgTptfKpQJi3cvYRFTBViI9N3369YeU=; b=o0T1lsrOr/qAjnmGqvwICKJ7zq
	UFaFLm8LqyEjudhEsOL6va0J6puYAik9OGeleEnAeBRjg+usn8CT+gfnn42DEbne5bM+XolneJsTy
	5uy5xVMJcwAfAgIJL7CkCSzxZvTW78kISoni1siY6XxG7Tgkdzne8IizbQP7Viw3lAMw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170876-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170876: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 14:39:53 +0000

flight 170876 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-credit1  14 guest-start    fail in 170846 REGR. vs. 170736

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   5 host-install(5)          broken pass in 170860
 test-amd64-i386-examine-uefi  6 xen-install      fail in 170846 pass in 170876
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 170846 pass in 170876
 test-armhf-armhf-xl-arndale 18 guest-start/debian.repeat fail in 170846 pass in 170876
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat fail in 170846 pass in 170876
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail in 170860 pass in 170876
 test-armhf-armhf-libvirt-qcow2 13 guest-start    fail in 170860 pass in 170876
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 170846

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 170846 like 170736
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 170846 like 170736
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 170846 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 170846 never pass
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 170860 like 170724
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170736
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170736
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   13 days
Testing same since   170843  2022-06-06 06:44:17 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-step test-armhf-armhf-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 1081 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343851.569406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyk-0005HS-3h; Wed, 08 Jun 2022 14:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343851.569406; Wed, 08 Jun 2022 14:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyj-0005Fn-N6; Wed, 08 Jun 2022 14:48:13 +0000
Received: by outflank-mailman (input) for mailman id 343851;
 Wed, 08 Jun 2022 14:47:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxx-0004Sj-RU
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:25 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6db3ddb-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywww-0066AY-GC; Wed, 08 Jun 2022 14:46:23 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 7B1E63021D5;
 Wed,  8 Jun 2022 16:46:20 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 3CA0E20C0F9A5; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6db3ddb-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=O62myI6pQAS8t81lbED6UulH15u+VxDiccCxL8l46b0=; b=gT8EMkBd6bMPLcSrcn+tIlmqrQ
	gTlIn15eBCUJQJ7odUKeDR2ZWGk6cr6CL+IgYOg0zC0XwW4KvSjAi4QIaQShcZztyyB5uW4tjGOZ+
	Xx0k619Zyq8PzUIepWyk++0mEYVwQwRDrMoV5OMsBFqmPNytGFdVmw1KNESQJd2/bvekElwzIcwfq
	h9dM8RgSuGh8dAJ/hoh8FLTtUTAzinFeiF1y1HPLwogXCp0Uo4fTbKBO+SfwljZRT7cvOxCuC5v2c
	rVPi+OaAn7J4nbIx17Sn50Rh+9lBzO0j0yDzRiHN+flr/WJFb/Do9/yGDSiiRsQ1W4fIUFcfmvAbk
	rGLVyLLg==;
Message-ID: <20220608144516.047149313@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:25 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 02/36] x86/idle: Replace x86_idle with a static_call
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Typical boot time setup; no need to suffer an indirect call for that.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/x86/kernel/process.c |   50 +++++++++++++++++++++++++---------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -24,6 +24,7 @@
 #include <linux/cpuidle.h>
 #include <linux/acpi.h>
 #include <linux/elf-randomize.h>
+#include <linux/static_call.h>
 #include <trace/events/power.h>
 #include <linux/hw_breakpoint.h>
 #include <asm/cpu.h>
@@ -692,7 +693,23 @@ void __switch_to_xtra(struct task_struct
 unsigned long boot_option_idle_override = IDLE_NO_OVERRIDE;
 EXPORT_SYMBOL(boot_option_idle_override);
 
-static void (*x86_idle)(void);
+/*
+ * We use this if we don't have any better idle routine..
+ */
+void __cpuidle default_idle(void)
+{
+	raw_safe_halt();
+}
+#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
+EXPORT_SYMBOL(default_idle);
+#endif
+
+DEFINE_STATIC_CALL_NULL(x86_idle, default_idle);
+
+static bool x86_idle_set(void)
+{
+	return !!static_call_query(x86_idle);
+}
 
 #ifndef CONFIG_SMP
 static inline void play_dead(void)
@@ -715,28 +732,17 @@ void arch_cpu_idle_dead(void)
 /*
  * Called from the generic idle code.
  */
-void arch_cpu_idle(void)
-{
-	x86_idle();
-}
-
-/*
- * We use this if we don't have any better idle routine..
- */
-void __cpuidle default_idle(void)
+void __cpuidle arch_cpu_idle(void)
 {
-	raw_safe_halt();
+	static_call(x86_idle)();
 }
-#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
-EXPORT_SYMBOL(default_idle);
-#endif
 
 #ifdef CONFIG_XEN
 bool xen_set_default_idle(void)
 {
-	bool ret = !!x86_idle;
+	bool ret = x86_idle_set();
 
-	x86_idle = default_idle;
+	static_call_update(x86_idle, default_idle);
 
 	return ret;
 }
@@ -859,20 +865,20 @@ void select_idle_routine(const struct cp
 	if (boot_option_idle_override == IDLE_POLL && smp_num_siblings > 1)
 		pr_warn_once("WARNING: polling idle and HT enabled, performance may degrade\n");
 #endif
-	if (x86_idle || boot_option_idle_override == IDLE_POLL)
+	if (x86_idle_set() || boot_option_idle_override == IDLE_POLL)
 		return;
 
 	if (boot_cpu_has_bug(X86_BUG_AMD_E400)) {
 		pr_info("using AMD E400 aware idle routine\n");
-		x86_idle = amd_e400_idle;
+		static_call_update(x86_idle, amd_e400_idle);
 	} else if (prefer_mwait_c1_over_halt(c)) {
 		pr_info("using mwait in idle threads\n");
-		x86_idle = mwait_idle;
+		static_call_update(x86_idle, mwait_idle);
 	} else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
 		pr_info("using TDX aware idle routine\n");
-		x86_idle = tdx_safe_halt;
+		static_call_update(x86_idle, tdx_safe_halt);
 	} else
-		x86_idle = default_idle;
+		static_call_update(x86_idle, default_idle);
 }
 
 void amd_e400_c1e_apic_setup(void)
@@ -925,7 +931,7 @@ static int __init idle_setup(char *str)
 		 * To continue to load the CPU idle driver, don't touch
 		 * the boot_option_idle_override.
 		 */
-		x86_idle = default_idle;
+		static_call_update(x86_idle, default_idle);
 		boot_option_idle_override = IDLE_HALT;
 	} else if (!strcmp(str, "nomwait")) {
 		/*




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343846.569381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyi-0004oO-HU; Wed, 08 Jun 2022 14:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343846.569381; Wed, 08 Jun 2022 14:48:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyi-0004nD-BS; Wed, 08 Jun 2022 14:48:12 +0000
Received: by outflank-mailman (input) for mailman id 343846;
 Wed, 08 Jun 2022 14:47:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxw-0004Sj-Ce
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e62e071c-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:20 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx2-0066CG-FE; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9C506302E89;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 8875D20C10ECE; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e62e071c-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=Nqou5pdyyCauM7UPuuZujxiA/xlRh3RuPNWmJQy6NBI=; b=L7miXvYMId1vcjF94h7EBydmu4
	ttxKNvdzfXSuw3gtJMufZU0+4OpDCCzxfuP/RtmtaGWdzCLzVzK3K3ZROIEKQ12fnbNonYhoVBWET
	qbOTM65AeO8Z1c2ze/3bhgeaht4FUH8xxTg0iIHysiydKgvyqux5JmaNNl/3bZLVOW1QBhQO/Vntq
	duMyjsMbQXhr/+Eb7B4+s1xm4G5jsqR+LR4hZJQhFD7nrBbyMSNhSriAWdkwNvfT9XUDLR02XSbjT
	sKnHoYsBy4ZL05rEHDgjoNgR3PBuki9A8bwPV/4K47QdfHlIefAT+eYwnGvOUXAWh12382TVypQqj
	sDrl35Dg==;
Message-ID: <20220608144517.188449351@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:43 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 20/36] arch/idle: Change arch_cpu_idle() IRQ behaviour
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Currently, arch_cpu_idle() is called with IRQs disabled but returns
with IRQs enabled.

However, the very first thing the generic code does after calling
arch_cpu_idle() is raw_local_irq_disable(). This means that
architectures that can idle with IRQs disabled end up doing a
pointless 'enable-disable' dance.

Therefore, push this IRQ disabling into the idle function, meaning
that those architectures can avoid the pointless IRQ state flipping.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/alpha/kernel/process.c      |    1 -
 arch/arc/kernel/process.c        |    3 +++
 arch/arm/kernel/process.c        |    1 -
 arch/arm/mach-gemini/board-dt.c  |    3 ++-
 arch/arm64/kernel/idle.c         |    1 -
 arch/csky/kernel/process.c       |    1 -
 arch/csky/kernel/smp.c           |    2 +-
 arch/hexagon/kernel/process.c    |    1 -
 arch/ia64/kernel/process.c       |    1 +
 arch/microblaze/kernel/process.c |    1 -
 arch/mips/kernel/idle.c          |    8 +++-----
 arch/nios2/kernel/process.c      |    1 -
 arch/openrisc/kernel/process.c   |    1 +
 arch/parisc/kernel/process.c     |    2 --
 arch/powerpc/kernel/idle.c       |    5 ++---
 arch/riscv/kernel/process.c      |    1 -
 arch/s390/kernel/idle.c          |    1 -
 arch/sh/kernel/idle.c            |    1 +
 arch/sparc/kernel/leon_pmc.c     |    4 ++++
 arch/sparc/kernel/process_32.c   |    1 -
 arch/sparc/kernel/process_64.c   |    3 ++-
 arch/um/kernel/process.c         |    1 -
 arch/x86/coco/tdx/tdx.c          |    3 +++
 arch/x86/kernel/process.c        |   15 ++++-----------
 arch/xtensa/kernel/process.c     |    1 +
 kernel/sched/idle.c              |    2 --
 26 files changed, 28 insertions(+), 37 deletions(-)

--- a/arch/alpha/kernel/process.c
+++ b/arch/alpha/kernel/process.c
@@ -57,7 +57,6 @@ EXPORT_SYMBOL(pm_power_off);
 void arch_cpu_idle(void)
 {
 	wtint(0);
-	raw_local_irq_enable();
 }
 
 void arch_cpu_idle_dead(void)
--- a/arch/arc/kernel/process.c
+++ b/arch/arc/kernel/process.c
@@ -114,6 +114,8 @@ void arch_cpu_idle(void)
 		"sleep %0	\n"
 		:
 		:"I"(arg)); /* can't be "r" has to be embedded const */
+
+	raw_local_irq_disable();
 }
 
 #else	/* ARC700 */
@@ -122,6 +124,7 @@ void arch_cpu_idle(void)
 {
 	/* sleep, but enable both set E1/E2 (levels of interrupts) before committing */
 	__asm__ __volatile__("sleep 0x3	\n");
+	raw_local_irq_disable();
 }
 
 #endif
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -78,7 +78,6 @@ void arch_cpu_idle(void)
 		arm_pm_idle();
 	else
 		cpu_do_idle();
-	raw_local_irq_enable();
 }
 
 void arch_cpu_idle_prepare(void)
--- a/arch/arm/mach-gemini/board-dt.c
+++ b/arch/arm/mach-gemini/board-dt.c
@@ -42,8 +42,9 @@ static void gemini_idle(void)
 	 */
 
 	/* FIXME: Enabling interrupts here is racy! */
-	local_irq_enable();
+	raw_local_irq_enable();
 	cpu_do_idle();
+	raw_local_irq_disable();
 }
 
 static void __init gemini_init_machine(void)
--- a/arch/arm64/kernel/idle.c
+++ b/arch/arm64/kernel/idle.c
@@ -42,5 +42,4 @@ void noinstr arch_cpu_idle(void)
 	 * tricks
 	 */
 	cpu_do_idle();
-	raw_local_irq_enable();
 }
--- a/arch/csky/kernel/process.c
+++ b/arch/csky/kernel/process.c
@@ -101,6 +101,5 @@ void arch_cpu_idle(void)
 #ifdef CONFIG_CPU_PM_STOP
 	asm volatile("stop\n");
 #endif
-	raw_local_irq_enable();
 }
 #endif
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -314,7 +314,7 @@ void arch_cpu_idle_dead(void)
 	while (!secondary_stack)
 		arch_cpu_idle();
 
-	local_irq_disable();
+	raw_local_irq_disable();
 
 	asm volatile(
 		"mov	sp, %0\n"
--- a/arch/hexagon/kernel/process.c
+++ b/arch/hexagon/kernel/process.c
@@ -44,7 +44,6 @@ void arch_cpu_idle(void)
 {
 	__vmwait();
 	/*  interrupts wake us up, but irqs are still disabled */
-	raw_local_irq_enable();
 }
 
 /*
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -241,6 +241,7 @@ void arch_cpu_idle(void)
 		(*mark_idle)(1);
 
 	raw_safe_halt();
+	raw_local_irq_disable();
 
 	if (mark_idle)
 		(*mark_idle)(0);
--- a/arch/microblaze/kernel/process.c
+++ b/arch/microblaze/kernel/process.c
@@ -138,5 +138,4 @@ int dump_fpu(struct pt_regs *regs, elf_f
 
 void arch_cpu_idle(void)
 {
-       raw_local_irq_enable();
 }
--- a/arch/mips/kernel/idle.c
+++ b/arch/mips/kernel/idle.c
@@ -33,13 +33,13 @@ static void __cpuidle r3081_wait(void)
 {
 	unsigned long cfg = read_c0_conf();
 	write_c0_conf(cfg | R30XX_CONF_HALT);
-	raw_local_irq_enable();
 }
 
 void __cpuidle r4k_wait(void)
 {
 	raw_local_irq_enable();
 	__r4k_wait();
+	raw_local_irq_disable();
 }
 
 /*
@@ -57,7 +57,6 @@ void __cpuidle r4k_wait_irqoff(void)
 		"	.set	arch=r4000	\n"
 		"	wait			\n"
 		"	.set	pop		\n");
-	raw_local_irq_enable();
 }
 
 /*
@@ -77,7 +76,6 @@ static void __cpuidle rm7k_wait_irqoff(v
 		"	wait						\n"
 		"	mtc0	$1, $12		# stalls until W stage	\n"
 		"	.set	pop					\n");
-	raw_local_irq_enable();
 }
 
 /*
@@ -103,6 +101,8 @@ static void __cpuidle au1k_wait(void)
 	"	nop				\n"
 	"	.set	pop			\n"
 	: : "r" (au1k_wait), "r" (c0status));
+
+	raw_local_irq_disable();
 }
 
 static int __initdata nowait;
@@ -245,8 +245,6 @@ void arch_cpu_idle(void)
 {
 	if (cpu_wait)
 		cpu_wait();
-	else
-		raw_local_irq_enable();
 }
 
 #ifdef CONFIG_CPU_IDLE
--- a/arch/nios2/kernel/process.c
+++ b/arch/nios2/kernel/process.c
@@ -33,7 +33,6 @@ EXPORT_SYMBOL(pm_power_off);
 
 void arch_cpu_idle(void)
 {
-	raw_local_irq_enable();
 }
 
 /*
--- a/arch/openrisc/kernel/process.c
+++ b/arch/openrisc/kernel/process.c
@@ -102,6 +102,7 @@ void arch_cpu_idle(void)
 	raw_local_irq_enable();
 	if (mfspr(SPR_UPR) & SPR_UPR_PMP)
 		mtspr(SPR_PMR, mfspr(SPR_PMR) | SPR_PMR_DME);
+	raw_local_irq_disable();
 }
 
 void (*pm_power_off)(void) = NULL;
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -187,8 +187,6 @@ void arch_cpu_idle_dead(void)
 
 void __cpuidle arch_cpu_idle(void)
 {
-	raw_local_irq_enable();
-
 	/* nop on real hardware, qemu will idle sleep. */
 	asm volatile("or %%r10,%%r10,%%r10\n":::);
 }
--- a/arch/powerpc/kernel/idle.c
+++ b/arch/powerpc/kernel/idle.c
@@ -51,10 +51,9 @@ void arch_cpu_idle(void)
 		 * Some power_save functions return with
 		 * interrupts enabled, some don't.
 		 */
-		if (irqs_disabled())
-			raw_local_irq_enable();
+		if (!irqs_disabled())
+			raw_local_irq_disable();
 	} else {
-		raw_local_irq_enable();
 		/*
 		 * Go into low thread priority and possibly
 		 * low power mode.
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -39,7 +39,6 @@ extern asmlinkage void ret_from_kernel_t
 void arch_cpu_idle(void)
 {
 	cpu_do_idle();
-	raw_local_irq_enable();
 }
 
 void __show_regs(struct pt_regs *regs)
--- a/arch/s390/kernel/idle.c
+++ b/arch/s390/kernel/idle.c
@@ -66,7 +66,6 @@ void arch_cpu_idle(void)
 	idle->idle_count++;
 	account_idle_time(cputime_to_nsecs(idle_time));
 	raw_write_seqcount_end(&idle->seqcount);
-	raw_local_irq_enable();
 }
 
 static ssize_t show_idle_count(struct device *dev,
--- a/arch/sh/kernel/idle.c
+++ b/arch/sh/kernel/idle.c
@@ -25,6 +25,7 @@ void default_idle(void)
 	raw_local_irq_enable();
 	/* Isn't this racy ? */
 	cpu_sleep();
+	raw_local_irq_disable();
 	clear_bl_bit();
 }
 
--- a/arch/sparc/kernel/leon_pmc.c
+++ b/arch/sparc/kernel/leon_pmc.c
@@ -57,6 +57,8 @@ static void pmc_leon_idle_fixup(void)
 		"lda	[%0] %1, %%g0\n"
 		:
 		: "r"(address), "i"(ASI_LEON_BYPASS));
+
+	raw_local_irq_disable();
 }
 
 /*
@@ -70,6 +72,8 @@ static void pmc_leon_idle(void)
 
 	/* For systems without power-down, this will be no-op */
 	__asm__ __volatile__ ("wr	%g0, %asr19\n\t");
+
+	raw_local_irq_disable();
 }
 
 /* Install LEON Power Down function */
--- a/arch/sparc/kernel/process_32.c
+++ b/arch/sparc/kernel/process_32.c
@@ -71,7 +71,6 @@ void arch_cpu_idle(void)
 {
 	if (sparc_idle)
 		(*sparc_idle)();
-	raw_local_irq_enable();
 }
 
 /* XXX cli/sti -> local_irq_xxx here, check this works once SMP is fixed. */
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -59,7 +59,6 @@ void arch_cpu_idle(void)
 {
 	if (tlb_type != hypervisor) {
 		touch_nmi_watchdog();
-		raw_local_irq_enable();
 	} else {
 		unsigned long pstate;
 
@@ -90,6 +89,8 @@ void arch_cpu_idle(void)
 			"wrpr %0, %%g0, %%pstate"
 			: "=&r" (pstate)
 			: "i" (PSTATE_IE));
+
+		raw_local_irq_disable();
 	}
 }
 
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -216,7 +216,6 @@ void arch_cpu_idle(void)
 {
 	cpu_tasks[current_thread_info()->cpu].pid = os_getpid();
 	um_idle_sleep();
-	raw_local_irq_enable();
 }
 
 int __cant_sleep(void) {
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -178,6 +178,9 @@ void __cpuidle tdx_safe_halt(void)
 	 */
 	if (__halt(irq_disabled, do_sti))
 		WARN_ONCE(1, "HLT instruction emulation failed\n");
+
+	/* XXX I can't make sense of what @do_sti actually does */
+	raw_local_irq_disable();
 }
 
 static bool read_msr(struct pt_regs *regs)
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -699,6 +699,7 @@ EXPORT_SYMBOL(boot_option_idle_override)
 void __cpuidle default_idle(void)
 {
 	raw_safe_halt();
+	raw_local_irq_disable();
 }
 #if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
 EXPORT_SYMBOL(default_idle);
@@ -804,13 +805,7 @@ static void amd_e400_idle(void)
 
 	default_idle();
 
-	/*
-	 * The switch back from broadcast mode needs to be called with
-	 * interrupts disabled.
-	 */
-	raw_local_irq_disable();
 	tick_broadcast_exit();
-	raw_local_irq_enable();
 }
 
 /*
@@ -849,12 +844,10 @@ static __cpuidle void mwait_idle(void)
 		}
 
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
-		if (!need_resched())
+		if (!need_resched()) {
 			__sti_mwait(0, 0);
-		else
-			raw_local_irq_enable();
-	} else {
-		raw_local_irq_enable();
+			raw_local_irq_disable();
+		}
 	}
 	__current_clr_polling();
 }
--- a/arch/xtensa/kernel/process.c
+++ b/arch/xtensa/kernel/process.c
@@ -183,6 +183,7 @@ void coprocessor_flush_release_all(struc
 void arch_cpu_idle(void)
 {
 	platform_idle();
+	raw_local_irq_disable();
 }
 
 /*
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -79,7 +79,6 @@ void __weak arch_cpu_idle_dead(void) { }
 void __weak arch_cpu_idle(void)
 {
 	cpu_idle_force_poll = 1;
-	raw_local_irq_enable();
 }
 
 /**
@@ -96,7 +95,6 @@ void __cpuidle default_idle_call(void)
 
 		cpuidle_rcu_enter();
 		arch_cpu_idle();
-		raw_local_irq_disable();
 		cpuidle_rcu_exit();
 
 		start_critical_timings();




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343845.569374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyi-0004jo-76; Wed, 08 Jun 2022 14:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343845.569374; Wed, 08 Jun 2022 14:48:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyi-0004iu-1d; Wed, 08 Jun 2022 14:48:12 +0000
Received: by outflank-mailman (input) for mailman id 343845;
 Wed, 08 Jun 2022 14:47:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxv-0004Sj-RG
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:23 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4a60525-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:19 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx3-0066CV-M7; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C2880302F13;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id A2E5D20C119A0; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4a60525-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=o5HILV8lXWZVR34kQkfXu7ELtM3PHvDf12JWqGFz8as=; b=FdF0WwTfqAD0qSj5IztZ4sLhvT
	k5gLeMLjDAi8AiG+FPGxmqabFeIQHQCNOTIfhG/cpcbGLZPNynYxoEOkV1KR6W1cM26gk09NcMOho
	OTH5ruv6nF9XsgSF9ERM0tY+ijfUIOhLwICgBWU3v/b6jxNumek114l8uiB/Ho+YCAnkYFEEtHFA1
	yJwPilsqg79N5SwaSL+a8z3EFTRK0XJDDbWj1Ixr9OQeIT2Fab/9sy2Cr+siozN3wTiG232WtGG/1
	MiOLZNXIY/fnByd71PGk1LDgy98zbKguCtDMF+RXQCVcyn0bJrc/NWlQckEgt24lFU5RTP60t+1Fw
	I4HHiDAQ==;
Message-ID: <20220608144517.570101266@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:49 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 26/36] cpuidle,sched: Remove annotations from TIF_{POLLING_NRFLAG,NEED_RESCHED}
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: mwait_idle+0x5: call to current_set_polling_and_test() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xc5: call to current_set_polling_and_test() leaves .noinstr.text section
vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle+0xbc: call to current_set_polling_and_test() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0xea: call to current_set_polling_and_test() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_s2idle+0xb4: call to current_set_polling_and_test() leaves .noinstr.text section

vmlinux.o: warning: objtool: intel_idle+0xa6: call to current_clr_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0xbf: call to current_clr_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_s2idle+0xa1: call to current_clr_polling() leaves .noinstr.text section

vmlinux.o: warning: objtool: mwait_idle+0xe: call to __current_set_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xc5: call to __current_set_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle+0xbc: call to __current_set_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0xea: call to __current_set_polling() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_s2idle+0xb4: call to __current_set_polling() leaves .noinstr.text section

vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_s2idle+0x73: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0x91: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle+0x78: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_safe_halt+0xf: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/sched/idle.h  |   40 ++++++++++++++++++++++++++++++----------
 include/linux/thread_info.h |   18 +++++++++++++++++-
 2 files changed, 47 insertions(+), 11 deletions(-)

--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -23,12 +23,37 @@ static inline void wake_up_if_idle(int c
  */
 #ifdef TIF_POLLING_NRFLAG
 
-static inline void __current_set_polling(void)
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
+
+static __always_inline void __current_set_polling(void)
 {
-	set_thread_flag(TIF_POLLING_NRFLAG);
+	arch_set_bit(TIF_POLLING_NRFLAG,
+		     (unsigned long *)(&current_thread_info()->flags));
 }
 
-static inline bool __must_check current_set_polling_and_test(void)
+static __always_inline void __current_clr_polling(void)
+{
+	arch_clear_bit(TIF_POLLING_NRFLAG,
+		       (unsigned long *)(&current_thread_info()->flags));
+}
+
+#else
+
+static __always_inline void __current_set_polling(void)
+{
+	set_bit(TIF_POLLING_NRFLAG,
+		(unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void __current_clr_polling(void)
+{
+	clear_bit(TIF_POLLING_NRFLAG,
+		  (unsigned long *)(&current_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H */
+
+static __always_inline bool __must_check current_set_polling_and_test(void)
 {
 	__current_set_polling();
 
@@ -41,12 +66,7 @@ static inline bool __must_check current_
 	return unlikely(tif_need_resched());
 }
 
-static inline void __current_clr_polling(void)
-{
-	clear_thread_flag(TIF_POLLING_NRFLAG);
-}
-
-static inline bool __must_check current_clr_polling_and_test(void)
+static __always_inline bool __must_check current_clr_polling_and_test(void)
 {
 	__current_clr_polling();
 
@@ -73,7 +93,7 @@ static inline bool __must_check current_
 }
 #endif
 
-static inline void current_clr_polling(void)
+static __always_inline void current_clr_polling(void)
 {
 	__current_clr_polling();
 
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -177,7 +177,23 @@ static __always_inline unsigned long rea
 	clear_ti_thread_flag(task_thread_info(t), TIF_##fl)
 #endif /* !CONFIG_GENERIC_ENTRY */
 
-#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+
+static __always_inline bool tif_need_resched(void)
+{
+	return arch_test_bit(TIF_NEED_RESCHED,
+			     (unsigned long *)(&current_thread_info()->flags));
+}
+
+#else
+
+static __always_inline bool tif_need_resched(void)
+{
+	return test_bit(TIF_NEED_RESCHED,
+			(unsigned long *)(&current_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
 
 #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
 static inline int arch_within_stack_frames(const void * const stack,




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343850.569398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyj-00058n-Mp; Wed, 08 Jun 2022 14:48:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343850.569398; Wed, 08 Jun 2022 14:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyj-00056n-AN; Wed, 08 Jun 2022 14:48:13 +0000
Received: by outflank-mailman (input) for mailman id 343850;
 Wed, 08 Jun 2022 14:47:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxx-0004T5-MD
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:25 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e57aae47-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx3-0066CO-6k; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A15F3302ECB;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 8D44D20C10ED6; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e57aae47-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=n+8kbD5k3joQL4tu1DQuka314sY5HtbiPc/MGM3uXCw=; b=CFsFgByJSeY2PmftE59lIwd61A
	fjJ72jju0l9GwBYm1tDQavNTQt1uwtfxtPtqec7K6IBFRtI8mUyMjSb23xu2nggDLQ3Izti62YLZ9
	81i/g9DPcf892kC/BAGwIehJW/W7Fewyul66uCAJh+Bsk0BWV05Jio2VzipiczXpOVQR7AS6M8em5
	wh3T1DJLe1ciqy+OR6gyRoeQARA3peLCGum9kfZYaXxBeujZ3coueZc7Rw5XccGKhOx8fJer5LVm7
	DK6L3i+rJEhIKvEmmEfkmunPBAh5TavkMzTbRzPaMqLIGOwpl+o4n2Ge2dXsInCxEuWYARNglpkyB
	ylWDIb/Q==;
Message-ID: <20220608144517.251109029@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:44 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org,
 Isaku Yamahata <isaku.yamahata@gmail.com>,
 kirill.shutemov@linux.intel.com
Subject: [PATCH 21/36] x86/tdx: Remove TDX_HCALL_ISSUE_STI
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Now that arch_cpu_idle() is expected to return with IRQs disabled,
avoid the useless STI/CLI dance.

Per the specs this is supposed to work, but nobody has yet relied upon
this behaviour, so broken implementations are possible.

Cc: Isaku Yamahata <isaku.yamahata@gmail.com>
Cc: kirill.shutemov@linux.intel.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/coco/tdx/tdcall.S        |   13 -------------
 arch/x86/coco/tdx/tdx.c           |   23 ++++-------------------
 arch/x86/include/asm/shared/tdx.h |    1 -
 3 files changed, 4 insertions(+), 33 deletions(-)

--- a/arch/x86/coco/tdx/tdcall.S
+++ b/arch/x86/coco/tdx/tdcall.S
@@ -139,19 +139,6 @@ SYM_FUNC_START(__tdx_hypercall)
 
 	movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
 
-	/*
-	 * For the idle loop STI needs to be called directly before the TDCALL
-	 * that enters idle (EXIT_REASON_HLT case). STI instruction enables
-	 * interrupts only one instruction later. If there is a window between
-	 * STI and the instruction that emulates the HALT state, there is a
-	 * chance for interrupts to happen in this window, which can delay the
-	 * HLT operation indefinitely. Since this is the not the desired
-	 * result, conditionally call STI before TDCALL.
-	 */
-	testq $TDX_HCALL_ISSUE_STI, %rsi
-	jz .Lskip_sti
-	sti
-.Lskip_sti:
 	tdcall
 
 	/*
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -124,7 +124,7 @@ static u64 get_cc_mask(void)
 	return BIT_ULL(gpa_width - 1);
 }
 
-static u64 __cpuidle __halt(const bool irq_disabled, const bool do_sti)
+static u64 __cpuidle __halt(const bool irq_disabled)
 {
 	struct tdx_hypercall_args args = {
 		.r10 = TDX_HYPERCALL_STANDARD,
@@ -144,20 +144,14 @@ static u64 __cpuidle __halt(const bool i
 	 * can keep the vCPU in virtual HLT, even if an IRQ is
 	 * pending, without hanging/breaking the guest.
 	 */
-	return __tdx_hypercall(&args, do_sti ? TDX_HCALL_ISSUE_STI : 0);
+	return __tdx_hypercall(&args, 0);
 }
 
 static bool handle_halt(void)
 {
-	/*
-	 * Since non safe halt is mainly used in CPU offlining
-	 * and the guest will always stay in the halt state, don't
-	 * call the STI instruction (set do_sti as false).
-	 */
 	const bool irq_disabled = irqs_disabled();
-	const bool do_sti = false;
 
-	if (__halt(irq_disabled, do_sti))
+	if (__halt(irq_disabled))
 		return false;
 
 	return true;
@@ -165,22 +159,13 @@ static bool handle_halt(void)
 
 void __cpuidle tdx_safe_halt(void)
 {
-	 /*
-	  * For do_sti=true case, __tdx_hypercall() function enables
-	  * interrupts using the STI instruction before the TDCALL. So
-	  * set irq_disabled as false.
-	  */
 	const bool irq_disabled = false;
-	const bool do_sti = true;
 
 	/*
 	 * Use WARN_ONCE() to report the failure.
 	 */
-	if (__halt(irq_disabled, do_sti))
+	if (__halt(irq_disabled))
 		WARN_ONCE(1, "HLT instruction emulation failed\n");
-
-	/* XXX I can't make sense of what @do_sti actually does */
-	raw_local_irq_disable();
 }
 
 static bool read_msr(struct pt_regs *regs)
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -8,7 +8,6 @@
 #define TDX_HYPERCALL_STANDARD  0
 
 #define TDX_HCALL_HAS_OUTPUT	BIT(0)
-#define TDX_HCALL_ISSUE_STI	BIT(1)
 
 #define TDX_CPUID_LEAF_ID	0x21
 #define TDX_IDENT		"IntelTDX    "




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343844.569370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyh-0004gx-Tu; Wed, 08 Jun 2022 14:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343844.569370; Wed, 08 Jun 2022 14:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyh-0004gq-PF; Wed, 08 Jun 2022 14:48:11 +0000
Received: by outflank-mailman (input) for mailman id 343844;
 Wed, 08 Jun 2022 14:47:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxt-0004Sj-F2
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:23 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e5a1a972-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:19 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx3-0066CL-0a; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A082F302E9A;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 90A2620C10ED8; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5a1a972-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=rveYe0whW8HL+t/HGSaazkf4IJzq2XAc7vdeRKjh+gA=; b=nT18fRjmtv8PtOHNlSRz7z7OA3
	ocSKQvQMk90e4BYQ8FG72UOd1vXBcUYk3m24zX9nucFet4rZzw6IRnvpGNzErQ7vF6gmeSoA/I3hv
	UbyXINmvnZz3foLi4MgiuIC4OlBuTtckcjJAePYfIiJ2CqwvUU5HtHdW4N26hlwOmjLBkcG7LqG8Q
	BoJE3LoXRlBBcMZ7RvdkoEP2AdgziBJ305TMSdp+BvfA1B2vsf69NthlS94vm+wkGw/Vop/LGC4Dq
	IFK+3BYDKT97pJkb9/zTHujIgColQowacfqug2BBLzHPsqX11KJsuZ63kgKCy3HEu4aklD3cuhGeV
	CF8D6kbg==;
Message-ID: <20220608144517.313931267@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:45 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 22/36] arm,smp: Remove trace_.*_rcuidle() usage
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

None of these functions should ever be run with RCU disabled anymore.

Specifically, do_handle_IPI() is only called from handle_IPI() which
explicitly does irq_enter()/irq_exit() which ensures RCU is watching.

The problem with smp_cross_call() was, per commit 7c64cc0531fa ("arm: Use
_rcuidle for smp_cross_call() tracepoints"), that
cpuidle_enter_state_coupled() already had RCU disabled, but that's
long been fixed by commit 1098582a0f6c ("sched,idle,rcu: Push rcu_idle
deeper into the idle path").

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/kernel/smp.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -639,7 +639,7 @@ static void do_handle_IPI(int ipinr)
 	unsigned int cpu = smp_processor_id();
 
 	if ((unsigned)ipinr < NR_IPI)
-		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
+		trace_ipi_entry(ipi_types[ipinr]);
 
 	switch (ipinr) {
 	case IPI_WAKEUP:
@@ -686,7 +686,7 @@ static void do_handle_IPI(int ipinr)
 	}
 
 	if ((unsigned)ipinr < NR_IPI)
-		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
+		trace_ipi_exit(ipi_types[ipinr]);
 }
 
 /* Legacy version, should go away once all irqchips have been converted */
@@ -709,7 +709,7 @@ static irqreturn_t ipi_handler(int irq,
 
 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
 {
-	trace_ipi_raise_rcuidle(target, ipi_types[ipinr]);
+	trace_ipi_raise(target, ipi_types[ipinr]);
 	__ipi_send_mask(ipi_desc[ipinr], target);
 }
 




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343847.569390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyj-00051u-3X; Wed, 08 Jun 2022 14:48:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343847.569390; Wed, 08 Jun 2022 14:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyi-00050Y-Qu; Wed, 08 Jun 2022 14:48:12 +0000
Received: by outflank-mailman (input) for mailman id 343847;
 Wed, 08 Jun 2022 14:47:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxw-0004Sj-UG
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6dc62a2-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwx-0066BF-JJ; Wed, 08 Jun 2022 14:46:24 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 8DD01301134;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 44C8420C0F9B2; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6dc62a2-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=UoTZx1hq9H2CvH0rrTcV0HznPxDNWA5jpJDFLHo4BPo=; b=pWyQNOHjBAsuQpOso13wfLuQhy
	thcRQJ5QKxIY2Kx66e1E+YWNvAX7hqXi16FG/Yvv/6jL9gxmDHqHQPKhI/0ggi7JOI+l8ZF0tpt+J
	/gMx4WY4gISoTMfEmXYlgqylwRgmyofm3M3UVZmyco3ziu7Gt9h7q5c6wPiUbTexd+wgemxfyY0hu
	i2lO18Et3kvZasrH8//VkE/uwwYBCUK8seXG2zWjgtEkf9QnvC7u4APyLvyuKQI1XGO+wwn5fXDdh
	Vbf00ESlaotTdYhjQqpHmvpB4FpOELLEZl4FjWs1MRv1VKlYsOC56S74pB+SmAESsa1OcuRYoS/lu
	XKg+WotA==;
Message-ID: <20220608144516.172460444@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:27 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
Xeons") wrecked intel_idle in two ways:

 - idle functions must not have tracing in them
 - cpuidle ->enter() callbacks must return with IRQs disabled

Additionally, it added a branch for no good reason.

Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/idle/intel_idle.c |   48 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 11 deletions(-)

--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -129,21 +137,37 @@ static unsigned int mwait_substates __in
  *
  * Must be called under local_irq_disable().
  */
+
-static __cpuidle int intel_idle(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv, int index)
+static __always_inline int __intel_idle(struct cpuidle_device *dev,
+					struct cpuidle_driver *drv, int index)
 {
 	struct cpuidle_state *state = &drv->states[index];
 	unsigned long eax = flg2MWAIT(state->flags);
 	unsigned long ecx = 1; /* break on interrupt flag */
 
-	if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
-		local_irq_enable();
-
 	mwait_idle_with_hints(eax, ecx);
 
 	return index;
 }
 
+static __cpuidle int intel_idle(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv, int index)
+{
+	return __intel_idle(dev, drv, index);
+}
+
+static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
+				    struct cpuidle_driver *drv, int index)
+{
+	int ret;
+
+	raw_local_irq_enable();
+	ret = __intel_idle(dev, drv, index);
+	raw_local_irq_disable();
+
+	return ret;
+}
+
 /**
  * intel_idle_s2idle - Ask the processor to enter the given idle state.
  * @dev: cpuidle device of the target CPU.
@@ -1801,6 +1824,9 @@ static void __init intel_idle_init_cstat
 		/* Structure copy. */
 		drv->states[drv->state_count] = cpuidle_state_table[cstate];
 
+		if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE)
+			drv->states[drv->state_count].enter = intel_idle_irq;
+
 		if ((disabled_states_mask & BIT(drv->state_count)) ||
 		    ((icpu->use_acpi || force_use_acpi) &&
 		     intel_idle_off_by_default(mwait_hint) &&




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343854.569413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyk-0005QK-Hq; Wed, 08 Jun 2022 14:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343854.569413; Wed, 08 Jun 2022 14:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyk-0005Nc-73; Wed, 08 Jun 2022 14:48:14 +0000
Received: by outflank-mailman (input) for mailman id 343854;
 Wed, 08 Jun 2022 14:47:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxy-0004T5-EM
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:26 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e57820ed-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwx-0066BG-Ju; Wed, 08 Jun 2022 14:46:24 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 90828302D84;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 4854A20C0F9B4; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e57820ed-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=VIZr4C1ndt5VeFEZaPlCUg+5Hz4QhdyspUVrjQe3s0Q=; b=kqq4L3g1QhZht/evh2l5s39qps
	jnBmYIZTu4x4BpY6GPBYny1gorePfmz6gw7CvGZUUJ45UgB+3ZNOnUmDTa3XgcsU+AliBRJfiL13t
	KoZXJ4YmKkyUiI8J1DUyBTQtQ6SVAlRBjd2lYrcQy3VMZs1753vYk1wEdi55HMURyDlMTLnIp2/DG
	MGr0nky4kAarfcUiOqeOkLaYDyaA2nmZrRDB481pz0z3AmobbTQyJi9vcmXREbRCThBWV4zvIbtlx
	7V9kx4NPTBy1ipdrxzrHOS+OsGpY03vNc7rSx1DlzzyZ2hAby7AX1FdAQjZ3DF1PF+uzX0/EndUAP
	yetT9Drg==;
Message-ID: <20220608144516.235041924@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:28 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 05/36] cpuidle: Move IRQ state validation
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Make cpuidle_enter_state() consistent with the s2idle variant and
verify that ->enter() always returns with interrupts disabled.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/cpuidle.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -234,7 +234,11 @@ int cpuidle_enter_state(struct cpuidle_d
 	stop_critical_timings();
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
 		rcu_idle_enter();
+
 	entered_state = target_state->enter(dev, drv, index);
+	if (WARN_ONCE(!irqs_disabled(), "%ps leaked IRQ state", target_state->enter))
+		raw_local_irq_disable();
+
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
 		rcu_idle_exit();
 	start_critical_timings();
@@ -246,12 +250,8 @@ int cpuidle_enter_state(struct cpuidle_d
 	/* The cpu is no longer idle or about to enter idle. */
 	sched_idle_set_state(NULL);
 
-	if (broadcast) {
-		if (WARN_ON_ONCE(!irqs_disabled()))
-			local_irq_disable();
-
+	if (broadcast)
 		tick_broadcast_exit();
-	}
 
 	if (!cpuidle_state_is_coupled(drv, index))
 		local_irq_enable();




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343855.569425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyl-0005hu-FY; Wed, 08 Jun 2022 14:48:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343855.569425; Wed, 08 Jun 2022 14:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyk-0005bO-Pc; Wed, 08 Jun 2022 14:48:14 +0000
Received: by outflank-mailman (input) for mailman id 343855;
 Wed, 08 Jun 2022 14:47:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxy-0004Sj-RX
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:26 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6e147db-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-0066Do-Kq; Wed, 08 Jun 2022 14:46:33 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0CE1C302F57;
 Wed,  8 Jun 2022 16:46:24 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id CB17620C119BB; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6e147db-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=lz2MUgDITA4yYb0U+PMz5bsqmEWyVbYbLXt1grP5wdc=; b=E4ma1UJky4lYLkF7WsmuwBiBAR
	ZNOCNP37EHcDipS0h5pGpc7Z83q2l0F9r1qoT2IwS4JaVORLNk9Y8kD1/G4PQ/WPrdROyQRafl+bw
	8kEHexR7OAoHvsHpcXrvfDNKNmCb7vZaRmG5jCMjvkiVnlyjOoCZYPiCN2YUEe0s+KCLCMKw1lusn
	c//k/ch+ZMfy68BfHEU9q/SBuHFmKkFeZQNGD+V4Ey9G51j7RCAyIzKlUBl1/EhiN9VFEuwMKKqHS
	exZwnsJXFjdzNoY07XCZXRHpvxrUSpp80Ad7Sk+AHygZ2w1HtBhHyFFImsUsvO1nkQsZm+y1UkgzW
	W9tgnQAA==;
Message-ID: <20220608144518.199614455@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:59 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 36/36] cpuidle,clk: Remove trace_.*_rcuidle()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

OMAP was the one and only user.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/clk/clk.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -978,12 +978,12 @@ static void clk_core_disable(struct clk_
 	if (--core->enable_count > 0)
 		return;
 
-	trace_clk_disable_rcuidle(core);
+	trace_clk_disable(core);
 
 	if (core->ops->disable)
 		core->ops->disable(core->hw);
 
-	trace_clk_disable_complete_rcuidle(core);
+	trace_clk_disable_complete(core);
 
 	clk_core_disable(core->parent);
 }
@@ -1037,12 +1037,12 @@ static int clk_core_enable(struct clk_co
 		if (ret)
 			return ret;
 
-		trace_clk_enable_rcuidle(core);
+		trace_clk_enable(core);
 
 		if (core->ops->enable)
 			ret = core->ops->enable(core->hw);
 
-		trace_clk_enable_complete_rcuidle(core);
+		trace_clk_enable_complete(core);
 
 		if (ret) {
 			clk_core_disable(core->parent);




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343859.569435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywym-0005yk-ET; Wed, 08 Jun 2022 14:48:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343859.569435; Wed, 08 Jun 2022 14:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyl-0005ud-Sf; Wed, 08 Jun 2022 14:48:15 +0000
Received: by outflank-mailman (input) for mailman id 343859;
 Wed, 08 Jun 2022 14:47:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxz-0004Sj-Rw
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:27 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6db6120-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-0066Da-9x; Wed, 08 Jun 2022 14:46:33 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0452B302F4F;
 Wed,  8 Jun 2022 16:46:24 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id C421820C119B8; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6db6120-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=T8wRwm6eLEEe+d6prOVCLJdHQ0FxyXLfs5ZoW8qxROY=; b=qz2/QcW+8oRBpx5oORnEf+GtM8
	q1YrQyf1tmyPcNQqEPrXFmduaozueI1j9rg8SJ6zBna7Uz8lvnEwB+B+wyKCf5pGe3lOvB4F+TeAD
	F9tyHNLVtk1V2IJAmR3uomZgujNpMa70W2bEyEOoERgoVaups+pxLymddoQIwXqrmFio0yWVivIGv
	IsqY6h6gyND3FQXvxxlKKrgtcZk2R0bnOmFFyBY0qQH0BnYniwDC01FuURzwOyZMFoXutRSidDScS
	SGPdVyuHTbrYBEgBLfjjxHu+bvwr+Z2Livrdq6BQDnc77vplqPh7z9gL7QwK+/3eqINZJiDgrTyxM
	zjRENbJA==;
Message-ID: <20220608144518.073801916@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:57 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 34/36] cpuidle,omap3: Push RCU-idle into omap_sram_idle()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

OMAP3 uses full SoC suspend modes as idle states, and as such it needs
the whole power-domain and clock-domain code in the idle path.

None of that code is suitable to run with RCU disabled, so push the
RCU-idle section deeper still, into omap_sram_idle() itself.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/cpuidle34xx.c |    4 +---
 arch/arm/mach-omap2/pm.h          |    2 +-
 arch/arm/mach-omap2/pm34xx.c      |   12 ++++++++++--
 3 files changed, 12 insertions(+), 6 deletions(-)

--- a/arch/arm/mach-omap2/cpuidle34xx.c
+++ b/arch/arm/mach-omap2/cpuidle34xx.c
@@ -133,9 +133,7 @@ static int omap3_enter_idle(struct cpuid
 	}
 
 	/* Execute ARM wfi */
-	cpuidle_rcu_enter();
-	omap_sram_idle();
-	cpuidle_rcu_exit();
+	omap_sram_idle(true);
 
 	/*
 	 * Call idle CPU PM enter notifier chain to restore
--- a/arch/arm/mach-omap2/pm.h
+++ b/arch/arm/mach-omap2/pm.h
@@ -29,7 +29,7 @@ static inline int omap4_idle_init(void)
 
 extern void *omap3_secure_ram_storage;
 extern void omap3_pm_off_mode_enable(int);
-extern void omap_sram_idle(void);
+extern void omap_sram_idle(bool rcuidle);
 extern int omap_pm_clkdms_setup(struct clockdomain *clkdm, void *unused);
 
 #if defined(CONFIG_PM_OPP)
--- a/arch/arm/mach-omap2/pm34xx.c
+++ b/arch/arm/mach-omap2/pm34xx.c
@@ -26,6 +26,7 @@
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/of.h>
+#include <linux/cpuidle.h>
 
 #include <trace/events/power.h>
 
@@ -174,7 +175,7 @@ static int omap34xx_do_sram_idle(unsigne
 	return 0;
 }
 
-void omap_sram_idle(void)
+void omap_sram_idle(bool rcuidle)
 {
 	/* Variable to tell what needs to be saved and restored
 	 * in omap_sram_idle*/
@@ -254,11 +255,18 @@ void omap_sram_idle(void)
 	 */
 	if (save_state)
 		omap34xx_save_context(omap3_arm_context);
+
+	if (rcuidle)
+		cpuidle_rcu_enter();
+
 	if (save_state == 1 || save_state == 3)
 		cpu_suspend(save_state, omap34xx_do_sram_idle);
 	else
 		omap34xx_do_sram_idle(save_state);
 
+	if (rcuidle)
+		cpuidle_rcu_exit();
+
 	/* Restore normal SDRC POWER settings */
 	if (cpu_is_omap3430() && omap_rev() >= OMAP3430_REV_ES3_0 &&
 	    (omap_type() == OMAP2_DEVICE_TYPE_EMU ||
@@ -316,7 +324,7 @@ static int omap3_pm_suspend(void)
 
 	omap3_intc_suspend();
 
-	omap_sram_idle();
+	omap_sram_idle(false);
 
 restore:
 	/* Restore next_pwrsts */




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343862.569447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyn-0006F0-BJ; Wed, 08 Jun 2022 14:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343862.569447; Wed, 08 Jun 2022 14:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywym-0006AL-LT; Wed, 08 Jun 2022 14:48:16 +0000
Received: by outflank-mailman (input) for mailman id 343862;
 Wed, 08 Jun 2022 14:47:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy0-0004Sj-Rc
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:28 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6ddfc5a-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-0066Ba-B6; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id B8DFD302DFF;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 625B720C10EA1; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6ddfc5a-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=F6PB1BepzhzXddHr1ut7n1kgQmvFEr/OgkB1cnk1WPs=; b=byVXHTfaXyhwXkVWU34OeN2xjr
	vTMEADr9AdMN4xHcWBep23kLnsiQVyLSj4HqWVhmsL20Q63VRuqgjppQfFat+lmh2Pbt6yuI05PVi
	B/vZ5gLXONWrBiMHoZyPpI31hW52oaOd5Q+Zgmha2YUC+d3qWpDjVfld8SE50G6vX3+fsUty05+g/
	PqJORkbJzW4cixlb8X2JmhdQCYbEoM3gipWREQgDoKJXQ7lx7sZENnMmNaekx+c7thX8e70ATDQAV
	ogxHUJrsDNEILk2KU0snekvGC5YIvCbq6+P7l9ozH4UrKBeyks8enYUtEgUO0YROi7waCAtUSg8Xh
	EwRQGdGA==;
Message-ID: <20220608144516.552202452@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:33 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 10/36] cpuidle,omap3: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is daft.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/cpuidle34xx.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

--- a/arch/arm/mach-omap2/cpuidle34xx.c
+++ b/arch/arm/mach-omap2/cpuidle34xx.c
@@ -133,7 +133,9 @@ static int omap3_enter_idle(struct cpuid
 	}
 
 	/* Execute ARM wfi */
+	rcu_idle_enter();
 	omap_sram_idle();
+	rcu_idle_exit();
 
 	/*
 	 * Call idle CPU PM enter notifier chain to restore
@@ -265,6 +267,7 @@ static struct cpuidle_driver omap3_idle_
 	.owner            = THIS_MODULE,
 	.states = {
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 2 + 2,
 			.target_residency = 5,
@@ -272,6 +275,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU ON + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 10 + 10,
 			.target_residency = 30,
@@ -279,6 +283,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU ON + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 50 + 50,
 			.target_residency = 300,
@@ -286,6 +291,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU RET + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 1500 + 1800,
 			.target_residency = 4000,
@@ -293,6 +299,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU OFF + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 2500 + 7500,
 			.target_residency = 12000,
@@ -300,6 +307,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU RET + CORE RET",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 3000 + 8500,
 			.target_residency = 15000,
@@ -307,6 +315,7 @@ static struct cpuidle_driver omap3_idle_
 			.desc		  = "MPU OFF + CORE RET",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 10000 + 30000,
 			.target_residency = 30000,
@@ -328,6 +337,7 @@ static struct cpuidle_driver omap3430_id
 	.owner            = THIS_MODULE,
 	.states = {
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 110 + 162,
 			.target_residency = 5,
@@ -335,6 +345,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU ON + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 106 + 180,
 			.target_residency = 309,
@@ -342,6 +353,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU ON + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 107 + 410,
 			.target_residency = 46057,
@@ -349,6 +361,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU RET + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 121 + 3374,
 			.target_residency = 46057,
@@ -356,6 +369,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU OFF + CORE ON",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 855 + 1146,
 			.target_residency = 46057,
@@ -363,6 +377,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU RET + CORE RET",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 7580 + 4134,
 			.target_residency = 484329,
@@ -370,6 +385,7 @@ static struct cpuidle_driver omap3430_id
 			.desc		  = "MPU OFF + CORE RET",
 		},
 		{
+			.flags		  = CPUIDLE_FLAG_RCU_IDLE,
 			.enter		  = omap3_enter_idle_bm,
 			.exit_latency	  = 7505 + 15274,
 			.target_residency = 484329,




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343863.569456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyo-0006VB-7K; Wed, 08 Jun 2022 14:48:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343863.569456; Wed, 08 Jun 2022 14:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyn-0006Px-Ld; Wed, 08 Jun 2022 14:48:17 +0000
Received: by outflank-mailman (input) for mailman id 343863;
 Wed, 08 Jun 2022 14:47:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy1-0004Sj-Rh
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:29 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6a4266d-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwy-00ChW2-V1; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9A5BD302D93;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 4EAD120C0F9B6; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6a4266d-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=cRgbn3QvpupGZaeUDgwKs7FEGzBWq3dUbK0A/YX8NGM=; b=PSLeO/yNb1FdwotZ74TIoxbYsI
	soiM9zNMO7t19/qdX6p+jhtYuw+jlgRDVggbwYw5y93wJ9B0AEFFNitZiOLiLqvrO6WYzTd3Ks6/N
	j6ONiLqH02pSUgCxo2FohIp7YpPnZGRJFJTWnKXkMEaG1p9PIY+g1hJZEWjA2DvWqfDzZ864CPgsH
	JZUmgyUVfTzWlN2taLJ537IYJBncAgv7L7CMo71dRetIsfM+fYr9nFmmPfnAie7VS2DskPrxSiIFO
	ifFfahGR54R38dhYOg0BWoewZIQRuCcVUmaPuAuM9Fgo4uTkuRaQcDupqqv1VST4cbPCmhIiOv9NB
	7YdXlpfg==;
Message-ID: <20220608144516.297980240@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:29 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 06/36] cpuidle,riscv: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/cpuidle-riscv-sbi.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
+++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
@@ -116,12 +116,12 @@ static int __sbi_enter_domain_idle_state
 		return -1;
 
 	/* Do runtime PM to manage a hierarchical CPU toplogy. */
-	rcu_irq_enter_irqson();
 	if (s2idle)
 		dev_pm_genpd_suspend(pd_dev);
 	else
 		pm_runtime_put_sync_suspend(pd_dev);
-	rcu_irq_exit_irqson();
+
+	rcu_idle_enter();
 
 	if (sbi_is_domain_state_available())
 		state = sbi_get_domain_state();
@@ -130,12 +130,12 @@ static int __sbi_enter_domain_idle_state
 
 	ret = sbi_suspend(state) ? -1 : idx;
 
-	rcu_irq_enter_irqson();
+	rcu_idle_exit();
+
 	if (s2idle)
 		dev_pm_genpd_resume(pd_dev);
 	else
 		pm_runtime_get_sync(pd_dev);
-	rcu_irq_exit_irqson();
 
 	cpu_pm_exit();
 
@@ -246,6 +246,7 @@ static int sbi_dt_cpu_init_topology(stru
 	 * of a shared state for the domain, assumes the domain states are all
 	 * deeper states.
 	 */
+	drv->states[state_count - 1].flags |= CPUIDLE_FLAG_RCU_IDLE;
 	drv->states[state_count - 1].enter = sbi_enter_domain_idle_state;
 	drv->states[state_count - 1].enter_s2idle =
 					sbi_enter_s2idle_domain_idle_state;




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343864.569465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyp-0006kF-9W; Wed, 08 Jun 2022 14:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343864.569465; Wed, 08 Jun 2022 14:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyo-0006i5-IM; Wed, 08 Jun 2022 14:48:18 +0000
Received: by outflank-mailman (input) for mailman id 343864;
 Wed, 08 Jun 2022 14:47:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy2-0004Sj-Ro
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:30 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6a419a2-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx4-00ChZ5-S9; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id CF56C302F2D;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id ACED020C119A6; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6a419a2-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=/sqr2RkpmWV4/QsFh9IpMWbDQxRkD/xb8LFsX6prmG0=; b=PMRi03XWoXclDviRUqbE/MtOjq
	BctQuAYPZxOMxsfOMiyem28bAW8AVWTyoJEScncE3ewsqlmC44gesqRRvD3B8d2x2o6Oa3YKdgS6F
	N/lpBgDl9rG8Y87zQ9aXv0X/K3/i3SOyjl709wqIh2P8wY30DT3oxZsmJJDGR2d7YpL64XTDbtm0N
	5k00Y596qRpvXjB0UkiaqAr3svE1BP55qX4/b4iH3/3A6gQNxBZUjoJBg6kVeAr1y4s/o8S9iW+rn
	f/uMfy6If6inoLAxZq0GFvTDrpfPEwMlmkj+swzSpz+tgCObUk3HyahRZMsXvA5vHVxaf8AvbHklN
	HHoVukvQ==;
Message-ID: <20220608144517.696962976@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:51 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 28/36] cpuidle,tdx: Make tdx noinstr clean
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: __halt+0x2c: call to hcall_func.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: __halt+0x3f: call to __tdx_hypercall() leaves .noinstr.text section
vmlinux.o: warning: objtool: __tdx_hypercall+0x66: call to __tdx_hypercall_failed() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/coco/tdx/tdcall.S |    2 ++
 arch/x86/coco/tdx/tdx.c    |    5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)

--- a/arch/x86/coco/tdx/tdcall.S
+++ b/arch/x86/coco/tdx/tdcall.S
@@ -31,6 +31,8 @@
 					  TDX_R12 | TDX_R13 | \
 					  TDX_R14 | TDX_R15 )
 
+.section .noinstr.text, "ax"
+
 /*
  * __tdx_module_call()  - Used by TDX guests to request services from
  * the TDX module (does not include VMM services) using TDCALL instruction.
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -53,8 +53,9 @@ static inline u64 _tdx_hypercall(u64 fn,
 }
 
 /* Called from __tdx_hypercall() for unrecoverable failure */
-void __tdx_hypercall_failed(void)
+noinstr void __tdx_hypercall_failed(void)
 {
+	instrumentation_begin();
 	panic("TDVMCALL failed. TDX module bug?");
 }
 
@@ -64,7 +65,7 @@ void __tdx_hypercall_failed(void)
  * Reusing the KVM EXIT_REASON macros makes it easier to connect the host and
  * guest sides of these calls.
  */
-static u64 hcall_func(u64 exit_reason)
+static __always_inline u64 hcall_func(u64 exit_reason)
 {
 	return exit_reason;
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343867.569478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyq-00077r-MZ; Wed, 08 Jun 2022 14:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343867.569478; Wed, 08 Jun 2022 14:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyq-00071R-0K; Wed, 08 Jun 2022 14:48:20 +0000
Received: by outflank-mailman (input) for mailman id 343867;
 Wed, 08 Jun 2022 14:47:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywxz-0004T5-EI
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:32 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5376313-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChW4-1y; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id AC095302D9E;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 598CC20C0FC8A; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5376313-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=y6I/0XZeW+SwdAeAR3t2m034incS08IQBnZfBcNKYzA=; b=pV/EkEgKRIpeKvq5HN3+CCEy73
	r8xSxfvRwX0Hp/RbmQEkGGqgtSQmt2V7wVKm/Dgh7gCxT8VN3Zd3ykv33oYJXDZhFrsOiZzgqwZwN
	mYfnoNbn4mkY4D0X7knpoC+PXyRY28u9ZWLLFNEC02wY4oKlFNPo5yScw7t5dA1Upkyps4KrRMpkG
	hdfHf70HVdGxmq1amZj2pji0fv+Uuwf5Bh2HdSJMzH66/m6rsKuia+FhR3TxjdKwxLuBcApt1D03J
	veIlHyJOTL69bf8GkRxQ0BEOGIuKD5pe0mKmy/Kj0wSR+O/rPyAwmL/tyBM+mm2nltdjhcArwd3n8
	F0v7I28w==;
Message-ID: <20220608144516.426117259@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:31 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 08/36] cpuidle,psci: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/cpuidle-psci.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -69,12 +69,12 @@ static int __psci_enter_domain_idle_stat
 		return -1;
 
 	/* Do runtime PM to manage a hierarchical CPU toplogy. */
-	rcu_irq_enter_irqson();
 	if (s2idle)
 		dev_pm_genpd_suspend(pd_dev);
 	else
 		pm_runtime_put_sync_suspend(pd_dev);
-	rcu_irq_exit_irqson();
+
+	rcu_idle_enter();
 
 	state = psci_get_domain_state();
 	if (!state)
@@ -82,12 +82,12 @@ static int __psci_enter_domain_idle_stat
 
 	ret = psci_cpu_suspend_enter(state) ? -1 : idx;
 
-	rcu_irq_enter_irqson();
+	rcu_idle_exit();
+
 	if (s2idle)
 		dev_pm_genpd_resume(pd_dev);
 	else
 		pm_runtime_get_sync(pd_dev);
-	rcu_irq_exit_irqson();
 
 	cpu_pm_exit();
 
@@ -240,6 +240,7 @@ static int psci_dt_cpu_init_topology(str
 	 * of a shared state for the domain, assumes the domain states are all
 	 * deeper states.
 	 */
+	drv->states[state_count - 1].flags |= CPUIDLE_FLAG_RCU_IDLE;
 	drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
 	drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state;
 	psci_cpuidle_use_cpuhp = true;




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343868.569487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyr-0007Mg-Qf; Wed, 08 Jun 2022 14:48:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343868.569487; Wed, 08 Jun 2022 14:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyq-0007Ij-W6; Wed, 08 Jun 2022 14:48:20 +0000
Received: by outflank-mailman (input) for mailman id 343868;
 Wed, 08 Jun 2022 14:47:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy4-0004T5-VI
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:33 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e53af897-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx3-00ChYK-L3; Wed, 08 Jun 2022 14:46:29 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A3BDD302EE7;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 981A220C10EDC; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e53af897-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=1usWiRWR4uttba39otUsN6OqB3IXu4/S2NHARc2LOVI=; b=FwGsX6HqAEWrIDIwqSsLl+2rn+
	jxtc/+7Q73QoS/5n4U2sAOL0VlfaYMGCWkilbboinFeB59SIcUo3UpH4a5CtYGWmPThbsscblAnH4
	PSS71rXa70D7UmJtRK8HWOujM8sLBbHEPGPYqLVKvfZP62p3cW80pTaY4XSX5yfYbfkdrHxiUG5Wz
	T0edzqMYifaS2QyjulPfkPqxRE0jrOsPGKKpMpJYlly7Zipb93HlY74okg/8Mb1f3Kbt7PF9GRF4I
	xfQiX+7/KU3X93vRF1S+bG52Qv3PZIAkNXLJYuCETihI1/nFsTHWt4NnKd/O4/6se7YsYY89QZ1rA
	qO/mDSKA==;
Message-ID: <20220608144517.444659212@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:47 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
tracepoint"), was printk usage from the cpuidle path where RCU was
already disabled.

Per the patches earlier in this series, this is no longer the case.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/printk/printk.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2238,7 +2238,7 @@ static u16 printk_sprint(char *text, u16
 		}
 	}
 
-	trace_console_rcuidle(text, text_len);
+	trace_console(text, text_len);
 
 	return text_len;
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343870.569497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyt-0007aZ-6F; Wed, 08 Jun 2022 14:48:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343870.569497; Wed, 08 Jun 2022 14:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywys-0007UU-2H; Wed, 08 Jun 2022 14:48:22 +0000
Received: by outflank-mailman (input) for mailman id 343870;
 Wed, 08 Jun 2022 14:47:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy5-0004T5-VK
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:34 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5c27f9e-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChWm-Uq; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 48773302E6B;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 812DB20C10ECA; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5c27f9e-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=4bele9ejzKzNd9pnJzXay06BR+7LKyo/8SDE7WF3xHo=; b=tPPYeAWPLLomHQyMVLJ5rPzsHQ
	RWGF7M0Mce61MT+ddmgb9LEMNk1WL5mudqg073VXtHlq00DAzt78HxCsBAEtKRxkcXavO/85/VZxk
	St2GqSY9qsP+Xr58sUTFba5HKXMa5app75CuJy9Gj6wZChoH6jBqlb60S6+uDOQ/9uA1CHpCagvMO
	PjB4e4NnxbHKE12LCbxTxFaNyJdX+xa1TwHy0IYtSD8fTskaneE8elW12I3edyD9ShjEHISc8iDZ4
	GRPIMTpQEWgfRfkgST/ySyn1wwEnoM5Cgrw7tfhziDkregbr3JBwLs34aQW7JRzuCY0Omo+qRiaQF
	AQ82XnZw==;
Message-ID: <20220608144517.061583457@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:41 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 18/36] cpuidle: Annotate poll_idle()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The __cpuidle functions will become a noinstr class; as such, they need
explicit annotations.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/poll_state.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -13,7 +13,10 @@
 static int __cpuidle poll_idle(struct cpuidle_device *dev,
 			       struct cpuidle_driver *drv, int index)
 {
-	u64 time_start = local_clock();
+	u64 time_start;
+
+	instrumentation_begin();
+	time_start = local_clock();
 
 	dev->poll_time_limit = false;
 
@@ -39,6 +42,7 @@ static int __cpuidle poll_idle(struct cp
 	raw_local_irq_disable();
 
 	current_clr_polling();
+	instrumentation_end();
 
 	return index;
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343871.569508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyu-0007xm-Q4; Wed, 08 Jun 2022 14:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343871.569508; Wed, 08 Jun 2022 14:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyt-0007pi-J0; Wed, 08 Jun 2022 14:48:23 +0000
Received: by outflank-mailman (input) for mailman id 343871;
 Wed, 08 Jun 2022 14:47:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy6-0004T5-Vb
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:35 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e748dfdc-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:22 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx4-00ChYj-7G; Wed, 08 Jun 2022 14:46:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id CC219302F27;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id A74D220C119A3; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e748dfdc-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=pxqjaMyXYMAIciLaWxKe0qnmRXZim+cyW3iOuZrk594=; b=npWDxkQdpsP/KSbB8UVk7cSTfR
	iAvH2EDMYqweSjq8GVWChqEcX/9BIHnt+/9Lh2o73ASUuV3YGe0N1gseF/G3NKGkJrIf+0It9dNyi
	FAvR8qXc2Wg9XRPiJtu0ItkxyceEVaw/05Yp7Ekb/y4g2aezSDA4pvjQ6wKPUVatsqaVeKHbB2bcq
	mZiZTyBSaEWIyApTNtwFIeiNSKagNK4BMp9uLK4X+ZDckF4tkdR4/twee7LOTtORPxdg1htlaoW41
	LiZVginb/4mTy1FluTeIqpvVDv2MTNtlL+Aa4yxsbdvio0znFSvZKUdNiAQUmElpo3cRSF9a3DUwX
	Ck0o5LKQ==;
Message-ID: <20220608144517.633293983@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:50 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 27/36] cpuidle,mwait: Make noinstr clean
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: intel_idle_s2idle+0x6e: call to __monitor.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0x8c: call to __monitor.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle+0x73: call to __monitor.constprop.0() leaves .noinstr.text section

vmlinux.o: warning: objtool: mwait_idle+0x88: call to clflush() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/mwait.h         |   12 ++++++------
 arch/x86/include/asm/special_insns.h |    2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -25,7 +25,7 @@
 #define TPAUSE_C01_STATE		1
 #define TPAUSE_C02_STATE		0
 
-static inline void __monitor(const void *eax, unsigned long ecx,
+static __always_inline void __monitor(const void *eax, unsigned long ecx,
 			     unsigned long edx)
 {
 	/* "monitor %eax, %ecx, %edx;" */
@@ -33,7 +33,7 @@ static inline void __monitor(const void
 		     :: "a" (eax), "c" (ecx), "d"(edx));
 }
 
-static inline void __monitorx(const void *eax, unsigned long ecx,
+static __always_inline void __monitorx(const void *eax, unsigned long ecx,
 			      unsigned long edx)
 {
 	/* "monitorx %eax, %ecx, %edx;" */
@@ -41,7 +41,7 @@ static inline void __monitorx(const void
 		     :: "a" (eax), "c" (ecx), "d"(edx));
 }
 
-static inline void __mwait(unsigned long eax, unsigned long ecx)
+static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
 {
 	mds_idle_clear_cpu_buffers();
 
@@ -76,8 +76,8 @@ static inline void __mwait(unsigned long
  * EAX                     (logical) address to monitor
  * ECX                     #GP if not zero
  */
-static inline void __mwaitx(unsigned long eax, unsigned long ebx,
-			    unsigned long ecx)
+static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
+				     unsigned long ecx)
 {
 	/* No MDS buffer clear as this is AMD/HYGON only */
 
@@ -86,7 +86,7 @@ static inline void __mwaitx(unsigned lon
 		     :: "a" (eax), "b" (ebx), "c" (ecx));
 }
 
-static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 {
 	mds_idle_clear_cpu_buffers();
 	/* "mwait %eax, %ecx;" */
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -196,7 +196,7 @@ static inline void load_gs_index(unsigne
 
 #endif /* CONFIG_PARAVIRT_XXL */
 
-static inline void clflush(volatile void *__p)
+static __always_inline void clflush(volatile void *__p)
 {
 	asm volatile("clflush %0" : "+m" (*(volatile char __force *)__p));
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343874.569518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyw-0008Ig-Jn; Wed, 08 Jun 2022 14:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343874.569518; Wed, 08 Jun 2022 14:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyv-0008DB-7Y; Wed, 08 Jun 2022 14:48:25 +0000
Received: by outflank-mailman (input) for mailman id 343874;
 Wed, 08 Jun 2022 14:47:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy7-0004T5-Vr
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:36 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e53ed9fe-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-0066Be-OR; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E0912302E3B;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 71E0D20C10EAA; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e53ed9fe-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=tnA3wPqNh+XrpwQVOnpJL5IWoR32uxOYSq9ZeZNPVI0=; b=E6cr+j5sSXZ8GGphavK9qnd7+N
	nSY0hCAsEVgofgOUTaPMjXeH9fBrlzJVRkmzWwePjKPAgQD+c7y/qDFJlPV1JWEjs5+fD3YShsNI1
	amqZHcD/4E577ioc5qBMYzKqejMRhgd3CALXdnfS65RmjQjuUT0GnljmbHD+8OcdekhOIlhyPeiiK
	EgjVG7WYEO4Ln23h4oHtnHs6IWQNkcQvoDLhNBYPXyCcxELae17cJVba2SVQa9lUVbBg+lwGZuS0G
	24S8FrujRWom0XP3cd9BaFyIXDHECD/BF/vy/d1gw8+LeZJSYxH1bc9JwiizKACgIho7RnJP8/1Ao
	tQpHp77w==;
Message-ID: <20220608144516.808451191@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:37 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The whole disable-RCU, enable-IRQs dance is very intricate, since changing
IRQ state is traced, and tracing itself depends on RCU.

Add two helpers for the cpuidle case that mirror the entry code.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-imx/cpuidle-imx6q.c    |    4 +--
 arch/arm/mach-imx/cpuidle-imx6sx.c   |    4 +--
 arch/arm/mach-omap2/cpuidle34xx.c    |    4 +--
 arch/arm/mach-omap2/cpuidle44xx.c    |    8 +++---
 drivers/acpi/processor_idle.c        |   18 ++++++++------
 drivers/cpuidle/cpuidle-big_little.c |    4 +--
 drivers/cpuidle/cpuidle-mvebu-v7.c   |    4 +--
 drivers/cpuidle/cpuidle-psci.c       |    4 +--
 drivers/cpuidle/cpuidle-riscv-sbi.c  |    4 +--
 drivers/cpuidle/cpuidle-tegra.c      |    8 +++---
 drivers/cpuidle/cpuidle.c            |   11 ++++----
 include/linux/cpuidle.h              |   37 +++++++++++++++++++++++++---
 kernel/sched/idle.c                  |   45 ++++++++++-------------------------
 kernel/time/tick-broadcast.c         |    6 +++-
 14 files changed, 90 insertions(+), 71 deletions(-)

--- a/arch/arm/mach-imx/cpuidle-imx6q.c
+++ b/arch/arm/mach-imx/cpuidle-imx6q.c
@@ -24,9 +24,9 @@ static int imx6q_enter_wait(struct cpuid
 		imx6_set_lpm(WAIT_UNCLOCKED);
 	raw_spin_unlock(&cpuidle_lock);
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 	cpu_do_idle();
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	raw_spin_lock(&cpuidle_lock);
 	if (num_idle_cpus-- == num_online_cpus())
--- a/arch/arm/mach-imx/cpuidle-imx6sx.c
+++ b/arch/arm/mach-imx/cpuidle-imx6sx.c
@@ -47,9 +47,9 @@ static int imx6sx_enter_wait(struct cpui
 		cpu_pm_enter();
 		cpu_cluster_pm_enter();
 
-		rcu_idle_enter();
+		cpuidle_rcu_enter();
 		cpu_suspend(0, imx6sx_idle_finish);
-		rcu_idle_exit();
+		cpuidle_rcu_exit();
 
 		cpu_cluster_pm_exit();
 		cpu_pm_exit();
--- a/arch/arm/mach-omap2/cpuidle34xx.c
+++ b/arch/arm/mach-omap2/cpuidle34xx.c
@@ -133,9 +133,9 @@ static int omap3_enter_idle(struct cpuid
 	}
 
 	/* Execute ARM wfi */
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 	omap_sram_idle();
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	/*
 	 * Call idle CPU PM enter notifier chain to restore
--- a/arch/arm/mach-omap2/cpuidle44xx.c
+++ b/arch/arm/mach-omap2/cpuidle44xx.c
@@ -105,9 +105,9 @@ static int omap_enter_idle_smp(struct cp
 	}
 	raw_spin_unlock_irqrestore(&mpu_lock, flag);
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	raw_spin_lock_irqsave(&mpu_lock, flag);
 	if (cx->mpu_state_vote == num_online_cpus())
@@ -186,10 +186,10 @@ static int omap_enter_idle_coupled(struc
 		}
 	}
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
 	cpu_done[dev->cpu] = true;
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -607,7 +607,7 @@ static DEFINE_RAW_SPINLOCK(c3_lock);
  * @cx: Target state context
  * @index: index of target state
  */
-static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
+static noinstr int acpi_idle_enter_bm(struct cpuidle_driver *drv,
 			       struct acpi_processor *pr,
 			       struct acpi_processor_cx *cx,
 			       int index)
@@ -626,6 +626,8 @@ static int acpi_idle_enter_bm(struct cpu
 	 */
 	bool dis_bm = pr->flags.bm_control;
 
+	instrumentation_begin();
+
 	/* If we can skip BM, demote to a safe state. */
 	if (!cx->bm_sts_skip && acpi_idle_bm_check()) {
 		dis_bm = false;
@@ -647,11 +649,11 @@ static int acpi_idle_enter_bm(struct cpu
 		raw_spin_unlock(&c3_lock);
 	}
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 
 	acpi_idle_do_entry(cx);
 
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	/* Re-enable bus master arbitration */
 	if (dis_bm) {
@@ -661,11 +663,13 @@ static int acpi_idle_enter_bm(struct cpu
 		raw_spin_unlock(&c3_lock);
 	}
 
+	instrumentation_end();
+
 	return index;
 }
 
-static int acpi_idle_enter(struct cpuidle_device *dev,
-			   struct cpuidle_driver *drv, int index)
+static noinstr int acpi_idle_enter(struct cpuidle_device *dev,
+				   struct cpuidle_driver *drv, int index)
 {
 	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
 	struct acpi_processor *pr;
@@ -693,8 +697,8 @@ static int acpi_idle_enter(struct cpuidl
 	return index;
 }
 
-static int acpi_idle_enter_s2idle(struct cpuidle_device *dev,
-				  struct cpuidle_driver *drv, int index)
+static noinstr int acpi_idle_enter_s2idle(struct cpuidle_device *dev,
+					  struct cpuidle_driver *drv, int index)
 {
 	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
 
--- a/drivers/cpuidle/cpuidle-big_little.c
+++ b/drivers/cpuidle/cpuidle-big_little.c
@@ -126,13 +126,13 @@ static int bl_enter_powerdown(struct cpu
 				struct cpuidle_driver *drv, int idx)
 {
 	cpu_pm_enter();
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 
 	cpu_suspend(0, bl_powerdown_finisher);
 
 	/* signals the MCPM core that CPU is out of low power state */
 	mcpm_cpu_powered_up();
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	cpu_pm_exit();
 
--- a/drivers/cpuidle/cpuidle-mvebu-v7.c
+++ b/drivers/cpuidle/cpuidle-mvebu-v7.c
@@ -36,9 +36,9 @@ static int mvebu_v7_enter_idle(struct cp
 	if (drv->states[index].flags & MVEBU_V7_FLAG_DEEP_IDLE)
 		deepidle = true;
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 	ret = mvebu_v7_cpu_suspend(deepidle);
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	cpu_pm_exit();
 
--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -74,7 +74,7 @@ static int __psci_enter_domain_idle_stat
 	else
 		pm_runtime_put_sync_suspend(pd_dev);
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 
 	state = psci_get_domain_state();
 	if (!state)
@@ -82,7 +82,7 @@ static int __psci_enter_domain_idle_stat
 
 	ret = psci_cpu_suspend_enter(state) ? -1 : idx;
 
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	if (s2idle)
 		dev_pm_genpd_resume(pd_dev);
--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
+++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
@@ -121,7 +121,7 @@ static int __sbi_enter_domain_idle_state
 	else
 		pm_runtime_put_sync_suspend(pd_dev);
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 
 	if (sbi_is_domain_state_available())
 		state = sbi_get_domain_state();
@@ -130,7 +130,7 @@ static int __sbi_enter_domain_idle_state
 
 	ret = sbi_suspend(state) ? -1 : idx;
 
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	if (s2idle)
 		dev_pm_genpd_resume(pd_dev);
--- a/drivers/cpuidle/cpuidle-tegra.c
+++ b/drivers/cpuidle/cpuidle-tegra.c
@@ -183,7 +183,7 @@ static int tegra_cpuidle_state_enter(str
 	tegra_pm_set_cpu_in_lp2();
 	cpu_pm_enter();
 
-	rcu_idle_enter();
+	cpuidle_rcu_enter();
 
 	switch (index) {
 	case TEGRA_C7:
@@ -199,7 +199,7 @@ static int tegra_cpuidle_state_enter(str
 		break;
 	}
 
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 
 	cpu_pm_exit();
 	tegra_pm_clear_cpu_in_lp2();
@@ -240,10 +240,10 @@ static int tegra_cpuidle_enter(struct cp
 
 	if (index == TEGRA_C1) {
 		if (do_rcu)
-			rcu_idle_enter();
+			cpuidle_rcu_enter();
 		ret = arm_cpuidle_simple_enter(dev, drv, index);
 		if (do_rcu)
-			rcu_idle_exit();
+			cpuidle_rcu_exit();
 	} else
 		ret = tegra_cpuidle_state_enter(dev, index, cpu);
 
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -13,6 +13,7 @@
 #include <linux/mutex.h>
 #include <linux/sched.h>
 #include <linux/sched/clock.h>
+#include <linux/sched/idle.h>
 #include <linux/notifier.h>
 #include <linux/pm_qos.h>
 #include <linux/cpu.h>
@@ -150,12 +151,12 @@ static void enter_s2idle_proper(struct c
 	 */
 	stop_critical_timings();
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
-		rcu_idle_enter();
+		cpuidle_rcu_enter();
 	target_state->enter_s2idle(dev, drv, index);
 	if (WARN_ON_ONCE(!irqs_disabled()))
-		local_irq_disable();
+		raw_local_irq_disable();
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
-		rcu_idle_exit();
+		cpuidle_rcu_exit();
 	tick_unfreeze();
 	start_critical_timings();
 
@@ -233,14 +234,14 @@ int cpuidle_enter_state(struct cpuidle_d
 
 	stop_critical_timings();
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
-		rcu_idle_enter();
+		cpuidle_rcu_enter();
 
 	entered_state = target_state->enter(dev, drv, index);
 	if (WARN_ONCE(!irqs_disabled(), "%ps leaked IRQ state", target_state->enter))
 		raw_local_irq_disable();
 
 	if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
-		rcu_idle_exit();
+		cpuidle_rcu_exit();
 	start_critical_timings();
 
 	sched_clock_idle_wakeup_event();
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -115,6 +115,35 @@ struct cpuidle_device {
 DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
 DECLARE_PER_CPU(struct cpuidle_device, cpuidle_dev);
 
+static __always_inline void cpuidle_rcu_enter(void)
+{
+	lockdep_assert_irqs_disabled();
+	/*
+	 * Idle is allowed to (temporarily) enable IRQs. It
+	 * will return with IRQs disabled.
+	 *
+	 * Trace IRQs enable here, then switch off RCU, and have
+	 * arch_cpu_idle() use raw_local_irq_enable(). Note that
+	 * rcu_idle_enter() relies on lockdep IRQ state, so switch that
+	 * last -- this is very similar to the entry code.
+	 */
+	trace_hardirqs_on_prepare();
+	lockdep_hardirqs_on_prepare();
+	instrumentation_end();
+	rcu_idle_enter();
+	lockdep_hardirqs_on(_THIS_IP_);
+}
+
+static __always_inline void cpuidle_rcu_exit(void)
+{
+	/*
+	 * Carefully undo the above.
+	 */
+	lockdep_hardirqs_off(_THIS_IP_);
+	rcu_idle_exit();
+	instrumentation_begin();
+}
+
 /****************************
  * CPUIDLE DRIVER INTERFACE *
  ****************************/
@@ -282,18 +311,18 @@ extern s64 cpuidle_governor_latency_req(
 	int __ret = 0;							\
 									\
 	if (!idx) {							\
-		rcu_idle_enter();					\
+		cpuidle_rcu_enter();					\
 		cpu_do_idle();						\
-		rcu_idle_exit();					\
+		cpuidle_rcu_exit();					\
 		return idx;						\
 	}								\
 									\
 	if (!is_retention)						\
 		__ret =  cpu_pm_enter();				\
 	if (!__ret) {							\
-		rcu_idle_enter();					\
+		cpuidle_rcu_enter();					\
 		__ret = low_level_idle_enter(state);			\
-		rcu_idle_exit();					\
+		cpuidle_rcu_exit();					\
 		if (!is_retention)					\
 			cpu_pm_exit();					\
 	}								\
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -51,18 +51,22 @@ __setup("hlt", cpu_idle_nopoll_setup);
 
 static noinline int __cpuidle cpu_idle_poll(void)
 {
+	instrumentation_begin();
 	trace_cpu_idle(0, smp_processor_id());
 	stop_critical_timings();
-	rcu_idle_enter();
-	local_irq_enable();
+	cpuidle_rcu_enter();
 
+	raw_local_irq_enable();
 	while (!tif_need_resched() &&
 	       (cpu_idle_force_poll || tick_check_broadcast_expired()))
 		cpu_relax();
+	raw_local_irq_disable();
 
-	rcu_idle_exit();
+	cpuidle_rcu_exit();
 	start_critical_timings();
 	trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
+	local_irq_enable();
+	instrumentation_end();
 
 	return 1;
 }
@@ -85,44 +89,21 @@ void __weak arch_cpu_idle(void)
  */
 void __cpuidle default_idle_call(void)
 {
-	if (current_clr_polling_and_test()) {
-		local_irq_enable();
-	} else {
-
+	instrumentation_begin();
+	if (!current_clr_polling_and_test()) {
 		trace_cpu_idle(1, smp_processor_id());
 		stop_critical_timings();
 
-		/*
-		 * arch_cpu_idle() is supposed to enable IRQs, however
-		 * we can't do that because of RCU and tracing.
-		 *
-		 * Trace IRQs enable here, then switch off RCU, and have
-		 * arch_cpu_idle() use raw_local_irq_enable(). Note that
-		 * rcu_idle_enter() relies on lockdep IRQ state, so switch that
-		 * last -- this is very similar to the entry code.
-		 */
-		trace_hardirqs_on_prepare();
-		lockdep_hardirqs_on_prepare();
-		rcu_idle_enter();
-		lockdep_hardirqs_on(_THIS_IP_);
-
+		cpuidle_rcu_enter();
 		arch_cpu_idle();
-
-		/*
-		 * OK, so IRQs are enabled here, but RCU needs them disabled to
-		 * turn itself back on.. funny thing is that disabling IRQs
-		 * will cause tracing, which needs RCU. Jump through hoops to
-		 * make it 'work'.
-		 */
 		raw_local_irq_disable();
-		lockdep_hardirqs_off(_THIS_IP_);
-		rcu_idle_exit();
-		lockdep_hardirqs_on(_THIS_IP_);
-		raw_local_irq_enable();
+		cpuidle_rcu_exit();
 
 		start_critical_timings();
 		trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
 	}
+	local_irq_enable();
+	instrumentation_end();
 }
 
 static int call_cpuidle_s2idle(struct cpuidle_driver *drv,
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
  * to avoid a deep idle transition as we are about to get the
  * broadcast IPI right away.
  */
-int tick_check_broadcast_expired(void)
+noinstr int tick_check_broadcast_expired(void)
 {
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
+#else
 	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
+#endif
 }
 
 /*




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343876.569529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyz-0000Hs-8j; Wed, 08 Jun 2022 14:48:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343876.569529; Wed, 08 Jun 2022 14:48:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyx-00009V-Fq; Wed, 08 Jun 2022 14:48:27 +0000
Received: by outflank-mailman (input) for mailman id 343876;
 Wed, 08 Jun 2022 14:47:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy8-0004T5-Vs
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:37 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e86f4822-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:24 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywww-00ChVo-B5; Wed, 08 Jun 2022 14:46:22 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 58CC930008D;
 Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 3237720C0F991; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e86f4822-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Subject:Cc:To:From:Date:Message-ID:
	Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=p3ITF8gI2GW93+lJzbbS6NFG2q6U7tMgsXybvfaCekw=; b=IZXaFH74aF36MwDA0OcR+zpVbA
	fK18DkOhaaN9C1eG5s1+L4J5X2rdyrZWtHWtryVrvRYiz9kbVuWHWIVzgsfJfxFZ8UgI4eMrcfFM5
	ly+cYznO6PhU6shFXLnvDd0b/G1PXNb75sy24LC6bprnB0FmxiDeLgilEva9+AdZsD25bPgSLsRpe
	GUlfX77jG3iIxErkRylfvAiLV5VrXlgN4oApJfGBzUS+eVMlY/iFt5TC7UePOXm4aRWTFo8vVwc5V
	N9prIiiaKfHEwyVF5x1V2xztOpfeiRRaQZej2V9pcRWXnMPtP3IswDsfj7t5zg4WvO1/IMs6PKgfh
	Vu2f1Ucw==;
Message-ID: <20220608142723.103523089@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:23 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 00/36] cpuidle,rcu: Cleanup the mess

Hi All! (omg so many)

These few patches mostly clear out the utter mess that is cpuidle vs rcuidle.

At the end of the ride there are only 2 real RCU_NONIDLE() users left:

  arch/arm64/kernel/suspend.c:            RCU_NONIDLE(__cpu_suspend_exit());
  drivers/perf/arm_pmu.c:                 RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
  kernel/cfi.c:   RCU_NONIDLE({

(the CFI one is likely dead in the kCFI rewrite) and there's only a handful
of trace_.*_rcuidle() calls left:

  kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
  kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
  kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
  kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
  kernel/trace/trace_preemptirq.c:                trace_preempt_enable_rcuidle(a0, a1);
  kernel/trace/trace_preemptirq.c:                trace_preempt_disable_rcuidle(a0, a1);

All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.

I've touched a _lot_ of code that I can't test and have likely broken some of it :/
In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
the absolute 'winner'.

I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
GENERIC_ENTRY. I've also got a note that says ARM64 can probably do a WFE based
idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.

---
 arch/alpha/kernel/process.c          |    1 
 arch/alpha/kernel/vmlinux.lds.S      |    1 
 arch/arc/kernel/process.c            |    3 +
 arch/arc/kernel/vmlinux.lds.S        |    1 
 arch/arm/include/asm/vmlinux.lds.h   |    1 
 arch/arm/kernel/process.c            |    1 
 arch/arm/kernel/smp.c                |    6 +--
 arch/arm/mach-gemini/board-dt.c      |    3 +
 arch/arm/mach-imx/cpuidle-imx6q.c    |    4 +-
 arch/arm/mach-imx/cpuidle-imx6sx.c   |    5 ++
 arch/arm/mach-omap2/cpuidle34xx.c    |   16 ++++++++
 arch/arm/mach-omap2/cpuidle44xx.c    |   29 +++++++++------
 arch/arm/mach-omap2/pm.h             |    2 -
 arch/arm/mach-omap2/pm34xx.c         |   14 +++++--
 arch/arm/mach-omap2/powerdomain.c    |   10 ++---
 arch/arm64/kernel/idle.c             |    1 
 arch/arm64/kernel/smp.c              |    4 +-
 arch/arm64/kernel/vmlinux.lds.S      |    1 
 arch/csky/kernel/process.c           |    1 
 arch/csky/kernel/smp.c               |    2 -
 arch/csky/kernel/vmlinux.lds.S       |    1 
 arch/hexagon/kernel/process.c        |    1 
 arch/hexagon/kernel/vmlinux.lds.S    |    1 
 arch/ia64/kernel/process.c           |    1 
 arch/ia64/kernel/vmlinux.lds.S       |    1 
 arch/loongarch/kernel/vmlinux.lds.S  |    1 
 arch/m68k/kernel/vmlinux-nommu.lds   |    1 
 arch/m68k/kernel/vmlinux-std.lds     |    1 
 arch/m68k/kernel/vmlinux-sun3.lds    |    1 
 arch/microblaze/kernel/process.c     |    1 
 arch/microblaze/kernel/vmlinux.lds.S |    1 
 arch/mips/kernel/idle.c              |    8 +---
 arch/mips/kernel/vmlinux.lds.S       |    1 
 arch/nios2/kernel/process.c          |    1 
 arch/nios2/kernel/vmlinux.lds.S      |    1 
 arch/openrisc/kernel/process.c       |    1 
 arch/openrisc/kernel/vmlinux.lds.S   |    1 
 arch/parisc/kernel/process.c         |    2 -
 arch/parisc/kernel/vmlinux.lds.S     |    1 
 arch/powerpc/kernel/idle.c           |    5 +-
 arch/powerpc/kernel/vmlinux.lds.S    |    1 
 arch/riscv/kernel/process.c          |    1 
 arch/riscv/kernel/vmlinux-xip.lds.S  |    1 
 arch/riscv/kernel/vmlinux.lds.S      |    1 
 arch/s390/kernel/idle.c              |    1 
 arch/s390/kernel/vmlinux.lds.S       |    1 
 arch/sh/kernel/idle.c                |    1 
 arch/sh/kernel/vmlinux.lds.S         |    1 
 arch/sparc/kernel/leon_pmc.c         |    4 ++
 arch/sparc/kernel/process_32.c       |    1 
 arch/sparc/kernel/process_64.c       |    3 +
 arch/sparc/kernel/vmlinux.lds.S      |    1 
 arch/um/kernel/dyn.lds.S             |    1 
 arch/um/kernel/process.c             |    1 
 arch/um/kernel/uml.lds.S             |    1 
 arch/x86/coco/tdx/tdcall.S           |   15 +-------
 arch/x86/coco/tdx/tdx.c              |   25 +++----------
 arch/x86/events/amd/brs.c            |   13 ++-----
 arch/x86/include/asm/irqflags.h      |   11 ++---
 arch/x86/include/asm/mwait.h         |   14 +++----
 arch/x86/include/asm/nospec-branch.h |    2 -
 arch/x86/include/asm/paravirt.h      |    6 ++-
 arch/x86/include/asm/perf_event.h    |    2 -
 arch/x86/include/asm/shared/io.h     |    4 +-
 arch/x86/include/asm/shared/tdx.h    |    1 
 arch/x86/include/asm/special_insns.h |    6 +--
 arch/x86/include/asm/xen/hypercall.h |    2 -
 arch/x86/kernel/paravirt.c           |   14 ++++++-
 arch/x86/kernel/process.c            |   65 +++++++++++++++++------------------
 arch/x86/kernel/vmlinux.lds.S        |    1 
 arch/x86/xen/enlighten_pv.c          |    2 -
 arch/x86/xen/irq.c                   |    2 -
 arch/xtensa/kernel/process.c         |    1 
 arch/xtensa/kernel/vmlinux.lds.S     |    1 
 drivers/acpi/processor_idle.c        |   46 ++++++++++++++----------
 drivers/base/power/runtime.c         |   24 ++++++------
 drivers/clk/clk.c                    |    8 ++--
 drivers/cpuidle/cpuidle-arm.c        |    1 
 drivers/cpuidle/cpuidle-big_little.c |    8 +++-
 drivers/cpuidle/cpuidle-mvebu-v7.c   |    7 +++
 drivers/cpuidle/cpuidle-psci.c       |   10 +++--
 drivers/cpuidle/cpuidle-qcom-spm.c   |    1 
 drivers/cpuidle/cpuidle-riscv-sbi.c  |   10 +++--
 drivers/cpuidle/cpuidle-tegra.c      |   21 ++++++++---
 drivers/cpuidle/cpuidle.c            |   21 +++++------
 drivers/cpuidle/dt_idle_states.c     |    2 -
 drivers/cpuidle/poll_state.c         |   10 ++++-
 drivers/idle/intel_idle.c            |   29 ++++++++++++---
 include/asm-generic/vmlinux.lds.h    |    9 +---
 include/linux/compiler_types.h       |    8 +++-
 include/linux/cpu.h                  |    3 -
 include/linux/cpuidle.h              |   33 +++++++++++++++++
 include/linux/cpumask.h              |    4 +-
 include/linux/sched/idle.h           |   40 ++++++++++++++++-----
 include/linux/thread_info.h          |   18 +++++++++
 include/linux/tracepoint.h           |   13 ++++++-
 kernel/cpu_pm.c                      |    9 ----
 kernel/printk/printk.c               |    2 -
 kernel/rcu/tree.c                    |    9 +---
 kernel/sched/idle.c                  |   47 +++++++------------------
 kernel/time/tick-broadcast-hrtimer.c |   29 ++++++---------
 kernel/time/tick-broadcast.c         |    6 ++-
 kernel/trace/trace.c                 |    3 +
 tools/objtool/check.c                |   15 +++++++-
 104 files changed, 449 insertions(+), 342 deletions(-)



From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343878.569540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywz1-0000gG-KR; Wed, 08 Jun 2022 14:48:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343878.569540; Wed, 08 Jun 2022 14:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywyz-0000aE-Nz; Wed, 08 Jun 2022 14:48:29 +0000
Received: by outflank-mailman (input) for mailman id 343878;
 Wed, 08 Jun 2022 14:47:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy9-0004T5-Vv
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:38 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea0e3a0e-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:27 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-00ChZS-6w; Wed, 08 Jun 2022 14:46:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E2347302F4B;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id C09F920C119B6; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea0e3a0e-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=72Sb3LM19KDQn37emviwOKsQWcEROOYn6vEWI6g4O0g=; b=EIStTNEvIWCOhCtVS6i/K7y72k
	Z2/y8QVivxbrQMzv70rV/xYI1TqoeKXswIk5VnUpw0+4RhgOORQAEqSaWWFcpGN/Ozze52Ug5i0sI
	TleRT/9YHz6wiGFbGNdRNWgltUsVvJQ/VjZwME/1FoaZpc7a2ndwOU/eD8GtMnigWvtYOtoQbNZbT
	YAEGoBE1YPDvgKLkfYqGdKYI7530FqsneWIVYclJ5CEXvPE38a9eC2TbKb0SOYjj+vs7v40/yoQE7
	k4NN68jbT7q4/PIwEPV2vzemhXOOO98Uy9H+Lq6ErhVJIdH1jDJG5a7zOOPhARBO9XvArNFyyMFwC
	hXw9kkBw==;
Message-ID: <20220608144518.010587032@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:56 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

arch_cpu_idle() is a very simple idle interface: it exposes only a
single idle state, and it is expected neither to require RCU nor to do
any tracing/instrumentation.

As such, omap_sram_idle() is not a valid implementation. Replace it
with the simple (shallow) omap3_do_wfi() call, leaving the more
complicated idle states to the cpuidle driver.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/pm34xx.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm/mach-omap2/pm34xx.c
+++ b/arch/arm/mach-omap2/pm34xx.c
@@ -294,7 +294,7 @@ static void omap3_pm_idle(void)
 	if (omap_irq_pending())
 		return;
 
-	omap_sram_idle();
+	omap3_do_wfi();
 }
 
 #ifdef CONFIG_SUSPEND




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:48:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343880.569548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywz3-00012z-Al; Wed, 08 Jun 2022 14:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343880.569548; Wed, 08 Jun 2022 14:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nywz1-0000wJ-D9; Wed, 08 Jun 2022 14:48:31 +0000
Received: by outflank-mailman (input) for mailman id 343880;
 Wed, 08 Jun 2022 14:47:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyB-0004T5-0D
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:39 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eac7d911-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:28 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwy-0066BS-W1; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A8FF730008D;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 5313720C0F9B9; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eac7d911-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=OtG2eksypqTPnibPh4fnaffiD2hY+EVznlbJ2kHLrow=; b=OVXFxuucL+euI7qJ0DHWP+HSd+
	K0Ja573cJSNgJwZfmTV/5FSX0IvQJrgOo9v/0uLAq5TdSmJQtXAZyIW9o/35yTdAOleYqjANFEm+E
	l56+S1viwHvjXO62dxEnDYPCdtMwVOYv/k+oP4A2fDyCb6sOHq4kRNEOvqcsJ/5x/Zoh6zkng8v4u
	/0d1tlL+++2dqJwM54DkEmGHC+DPLj6jQdLOk++gXXVkvZXV4HarSMkLWNk/RWR6rJjZ3hTgPD+7o
	fRFAUGoXS/CQX8DE/ix3aPY/jnZckpcWrGai1TDNvAnceUPXNMM/pOlZZ/Tt52+3oavm+6TFmtcmc
	EkhLqJ7w==;
Message-ID: <20220608144516.362668063@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:30 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 07/36] cpuidle,tegra: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again (at least twice) before actually going idle, is daft.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/cpuidle-tegra.c |   21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

--- a/drivers/cpuidle/cpuidle-tegra.c
+++ b/drivers/cpuidle/cpuidle-tegra.c
@@ -180,9 +180,11 @@ static int tegra_cpuidle_state_enter(str
 	}
 
 	local_fiq_disable();
-	RCU_NONIDLE(tegra_pm_set_cpu_in_lp2());
+	tegra_pm_set_cpu_in_lp2();
 	cpu_pm_enter();
 
+	rcu_idle_enter();
+
 	switch (index) {
 	case TEGRA_C7:
 		err = tegra_cpuidle_c7_enter();
@@ -197,8 +199,10 @@ static int tegra_cpuidle_state_enter(str
 		break;
 	}
 
+	rcu_idle_exit();
+
 	cpu_pm_exit();
-	RCU_NONIDLE(tegra_pm_clear_cpu_in_lp2());
+	tegra_pm_clear_cpu_in_lp2();
 	local_fiq_enable();
 
 	return err ?: index;
@@ -226,6 +230,7 @@ static int tegra_cpuidle_enter(struct cp
 			       struct cpuidle_driver *drv,
 			       int index)
 {
+	bool do_rcu = drv->states[index].flags & CPUIDLE_FLAG_RCU_IDLE;
 	unsigned int cpu = cpu_logical_map(dev->cpu);
 	int ret;
 
@@ -233,9 +238,13 @@ static int tegra_cpuidle_enter(struct cp
 	if (dev->states_usage[index].disable)
 		return -1;
 
-	if (index == TEGRA_C1)
+	if (index == TEGRA_C1) {
+		if (do_rcu)
+			rcu_idle_enter();
 		ret = arm_cpuidle_simple_enter(dev, drv, index);
-	else
+		if (do_rcu)
+			rcu_idle_exit();
+	} else
 		ret = tegra_cpuidle_state_enter(dev, index, cpu);
 
 	if (ret < 0) {
@@ -285,7 +294,8 @@ static struct cpuidle_driver tegra_idle_
 			.exit_latency		= 2000,
 			.target_residency	= 2200,
 			.power_usage		= 100,
-			.flags			= CPUIDLE_FLAG_TIMER_STOP,
+			.flags			= CPUIDLE_FLAG_TIMER_STOP |
+						  CPUIDLE_FLAG_RCU_IDLE,
 			.name			= "C7",
 			.desc			= "CPU core powered off",
 		},
@@ -295,6 +305,7 @@ static struct cpuidle_driver tegra_idle_
 			.target_residency	= 10000,
 			.power_usage		= 0,
 			.flags			= CPUIDLE_FLAG_TIMER_STOP |
+						  CPUIDLE_FLAG_RCU_IDLE   |
 						  CPUIDLE_FLAG_COUPLED,
 			.name			= "CC6",
 			.desc			= "CPU cluster powered off",




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:54:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343978.569601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4R-0000qx-6l; Wed, 08 Jun 2022 14:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343978.569601; Wed, 08 Jun 2022 14:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4R-0000pb-0S; Wed, 08 Jun 2022 14:54:07 +0000
Received: by outflank-mailman (input) for mailman id 343978;
 Wed, 08 Jun 2022 14:53:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyD-0004Sj-Ta
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:42 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f199674b-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:39 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-0066Bg-PM; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0F8EC302E4E;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 7A4C620C10EC5; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f199674b-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=Ae56F2/vHSxAMY+0KpLDtlvNNteu31Pblave+ocppBw=; b=K8Ynm/svRXkkOXUoDfBvEs9ZFK
	qWXYZy+/5CqUIjZ31FPJ7xPNOIntKDH8NSfrS8D9IkyLAoFrHvJMHJIL/IcnOIM7nqISgd0gcoDeE
	M8QwoOkYRBsSJ6LT9EQT7VUIsk+Z6r7wIXAFTOjV+qSGSdym/gNXacQGPlu5y6oCjnxUFjzOxnG6i
	Qnh3s+Cj3bWdVkKJ0fKWJQJeEH2PS1tbGo8HnFfis2/BavxNxwTBZTq7iTayiBsKwAc/kQ4fijj9Z
	RSxzjU7orl0A9t/vtgrMCy+RX4IT1VlY5omzQIH8HgAO6TFwqNP7RiyW7pf7sKmhryRMbaaQJQ5W8
	0pAFiEvA==;
Message-ID: <20220608144516.935970247@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:39 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 16/36] rcu: Fix rcu_idle_exit()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The current rcu_idle_exit() is terminally broken because it uses
local_irq_{save,restore}(), which are traced, and tracing itself uses
RCU.

However, now that all callers are guaranteed to have IRQs disabled, we
can remove these calls.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -659,7 +659,7 @@ static noinstr void rcu_eqs_enter(bool u
  * If you add or remove a call to rcu_idle_enter(), be sure to test with
  * CONFIG_RCU_EQS_DEBUG=y.
  */
-void rcu_idle_enter(void)
+void noinstr rcu_idle_enter(void)
 {
 	lockdep_assert_irqs_disabled();
 	rcu_eqs_enter(false);
@@ -896,13 +896,10 @@ static void noinstr rcu_eqs_exit(bool us
  * If you add or remove a call to rcu_idle_exit(), be sure to test with
  * CONFIG_RCU_EQS_DEBUG=y.
  */
-void rcu_idle_exit(void)
+void noinstr rcu_idle_exit(void)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
+	lockdep_assert_irqs_disabled();
 	rcu_eqs_exit(false);
-	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_exit);
 




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:54:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343965.569595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4Q-0000kK-S5; Wed, 08 Jun 2022 14:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343965.569595; Wed, 08 Jun 2022 14:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4Q-0000jD-LA; Wed, 08 Jun 2022 14:54:06 +0000
Received: by outflank-mailman (input) for mailman id 343965;
 Wed, 08 Jun 2022 14:53:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyI-0004T5-I5
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:46 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3e5c454-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:43 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-0066Bi-PV; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 03046302E46;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 75C7720C10EAF; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3e5c454-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=fn1U7MEbZz2QK5hbMetRGs1LaJ4dLQ5TDl+y9fGIqjs=; b=AHoLDlFIJpM2yJSN34mWYuYDyZ
	RrXBAbCHcbiLETtqHyostv6hzGyHst5r7VORCZGrERmGvvRqMr2NwXjLQsK1TYGzVvVJBLCgsCf3j
	LieJtCZHAdjEyntbLFkbTU93xwC08IjFigevejKo+3rMPJ833Xxh7xrrmgzcKasuf6t3frm6PkmxU
	vZovY0XqO2YqPGD7SFjfk9rslAidePmYi7ReTaMHUlcVRH/amspREeDyWUmyOeqnEP6bQYjeAjWHS
	JBInW3LvF6tscP+ZjodqILQopYhhA13ptDRMV4kNqtb9FKzkYAPZG/ay8CNlYM5sVD+3xA6vTTC0E
	vR1dCMqg==;
Message-ID: <20220608144516.871305980@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:38 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

All callers should still have RCU enabled.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/cpu_pm.c |    9 ---------
 1 file changed, 9 deletions(-)

--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
 {
 	int ret;
 
-	/*
-	 * This introduces a RCU read critical section, which could be
-	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
-	 * this.
-	 */
-	rcu_irq_enter_irqson();
 	rcu_read_lock();
 	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
 	rcu_read_unlock();
-	rcu_irq_exit_irqson();
 
 	return notifier_to_errno(ret);
 }
@@ -49,11 +42,9 @@ static int cpu_pm_notify_robust(enum cpu
 	unsigned long flags;
 	int ret;
 
-	rcu_irq_enter_irqson();
 	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
 	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
 	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
-	rcu_irq_exit_irqson();
 
 	return notifier_to_errno(ret);
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:54:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343987.569610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4R-00010Z-Lm; Wed, 08 Jun 2022 14:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343987.569610; Wed, 08 Jun 2022 14:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4R-0000yP-Eo; Wed, 08 Jun 2022 14:54:07 +0000
Received: by outflank-mailman (input) for mailman id 343987;
 Wed, 08 Jun 2022 14:53:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy9-0004Sj-Sz
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:38 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e83f51a0-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:24 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx2-00ChXM-Tb; Wed, 08 Jun 2022 14:46:29 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A06A8302E95;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 9441920C10EDA; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e83f51a0-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=RgIlqtYjjJdUiI+7wCmxTrS6x1kOK6pzGpl8Q80bmr4=; b=G36djvM6W5RR7vlHVG43A6Iudr
	2F5NZ1PvnNXbTO0zpliXNxkDUVY5zkHKbOaq03CD5t8mleRv7RSL0O0tO1HwRwQ/MOKZWZwXP94dO
	OyHTPEvggi7hUK+yhhVFLFdvi4GdvU2f2SC8Nsjn5PVc/OK6YfCwGyHnvXicXlc0Bl51oWzBO7qjr
	s4lmtFzyLrwIq1biy5jzW2hDxT+rMaooNCcN/ALUDKJ5MfHYSkr4rDMi0HCIPUt8IpmdPh8QLzULX
	1Pzez9sPHZCsctScaUazcc/wB8wvvc2KcGz1r+hnNxtZlyWfJIlX0d7HYAF/GjFQCHJgoGLPVpXS4
	rz4KAG3w==;
Message-ID: <20220608144517.380962958@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:46 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 23/36] arm64,smp: Remove trace_.*_rcuidle() usage
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
normal interrupts"), do_handle_IPI() has been called in regular IRQ
context, where RCU is watching, so the trace_.*_rcuidle() variants are
no longer needed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm64/kernel/smp.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -865,7 +865,7 @@ static void do_handle_IPI(int ipinr)
 	unsigned int cpu = smp_processor_id();
 
 	if ((unsigned)ipinr < NR_IPI)
-		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
+		trace_ipi_entry(ipi_types[ipinr]);
 
 	switch (ipinr) {
 	case IPI_RESCHEDULE:
@@ -914,7 +914,7 @@ static void do_handle_IPI(int ipinr)
 	}
 
 	if ((unsigned)ipinr < NR_IPI)
-		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
+		trace_ipi_exit(ipi_types[ipinr]);
 }
 
 static irqreturn_t ipi_handler(int irq, void *data)




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:54:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343949.569590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4Q-0000hW-Gq; Wed, 08 Jun 2022 14:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343949.569590; Wed, 08 Jun 2022 14:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4Q-0000hF-DK; Wed, 08 Jun 2022 14:54:06 +0000
Received: by outflank-mailman (input) for mailman id 343949;
 Wed, 08 Jun 2022 14:53:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy3-0004Sj-Rm
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:31 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e69bc49c-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChWI-8u; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id B3E87302DCE;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 5E0D320C0FC99; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e69bc49c-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=vvLwebALuO06Z1aX8J6kQrwqaL5iBz0X7E8LlI1b2uQ=; b=Q7F83ia8UIiOgZBYPdseLSfVcX
	fd1A47vBrZfxmsAiR/xl9gW4bE0gTf/YoXWA7i0L4s6hdeyInRbvJxphp2ct7rHMcAjOpumtTOXla
	kCHhvSeqbknKSzusfMS0cPVUddgycIDiAi/gMB72y87uoWeS9XNyFVoldQqHiGPSZ7TOmt0RkBrr8
	OZtMT+qFPLvTi4Atjpzwddq/pxu/gm+1vIS+xOsRB6WuFkQSrB/cB1hqLpGCTzp5XjujYYrJI8egq
	m1+E81UoNtWrbGhPnOm/e0s2DOPOjD3cR82rcmdV4yqY9TPecmPva7MxHVXdeWic0XTm2Kurl0SV6
	/dgVoRTQ==;
Message-ID: <20220608144516.489126887@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:32 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 09/36] cpuidle,imx6: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable RCU
again, at least twice, before actually going idle, is daft. Push the
RCU-idle transition into the driver itself, directly around
cpu_suspend().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-imx/cpuidle-imx6sx.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/arm/mach-imx/cpuidle-imx6sx.c
+++ b/arch/arm/mach-imx/cpuidle-imx6sx.c
@@ -47,7 +47,9 @@ static int imx6sx_enter_wait(struct cpui
 		cpu_pm_enter();
 		cpu_cluster_pm_enter();
 
+		rcu_idle_enter();
 		cpu_suspend(0, imx6sx_idle_finish);
+		rcu_idle_exit();
 
 		cpu_cluster_pm_exit();
 		cpu_pm_exit();
@@ -87,7 +89,8 @@ static struct cpuidle_driver imx6sx_cpui
 			 */
 			.exit_latency = 300,
 			.target_residency = 500,
-			.flags = CPUIDLE_FLAG_TIMER_STOP,
+			.flags = CPUIDLE_FLAG_TIMER_STOP |
+				 CPUIDLE_FLAG_RCU_IDLE,
 			.enter = imx6sx_enter_wait,
 			.name = "LOW-POWER-IDLE",
 			.desc = "ARM power off",




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:54:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.343998.569634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4W-0001pI-Ai; Wed, 08 Jun 2022 14:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 343998.569634; Wed, 08 Jun 2022 14:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx4W-0001p4-5b; Wed, 08 Jun 2022 14:54:12 +0000
Received: by outflank-mailman (input) for mailman id 343998;
 Wed, 08 Jun 2022 14:53:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyE-0004T5-0L
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:42 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0d78caf-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:38 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywww-0066AZ-G5; Wed, 08 Jun 2022 14:46:22 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A2545302D40;
 Wed,  8 Jun 2022 16:46:20 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 40FBE20C0F9AF; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0d78caf-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=5naerUzcbldjXRb10QYbs7+/pM4Ur6B2nPLePdbzHOg=; b=rhgD+2PcLe5JBDMlblYneku2pt
	sdXli9DhMYdDhGWlEZGfdH33pA1vxt2/I6b1j6EXLWZWbJKzwmGTw5dQ1MmGgZUXFXCjmqjyi7Cj0
	RDEeMRhRx462t65vhBmJxF3B2p6UkTTvXKi94yWM8bZ08fR0lDh+Y4pYwx1LnZArZ4GjMKpZDddEp
	x8xSvPeIQZbvtL3deY5Q/Os9xDxH0nwk7Kl+yc43Zl8sk4PIGyqEmr5VJo8LA2u3JPyOV7lFHyHMX
	QOljdOyxCUAilPhGGsAmgB+8n8H2M0DXrQgHdXFz3ZS+VW69bh5LQomhK8O36eqaPNPg9koIFunZF
	REK2rFOw==;
Message-ID: <20220608144516.109792837@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:26 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 03/36] cpuidle/poll: Ensure IRQ state is invariant
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

cpuidle_state::enter() methods should be IRQ invariant: they are
entered with interrupts disabled and must return with interrupts
disabled again, rather than leaking an enabled IRQ state back to the
caller.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/poll_state.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -17,7 +17,7 @@ static int __cpuidle poll_idle(struct cp
 
 	dev->poll_time_limit = false;
 
-	local_irq_enable();
+	raw_local_irq_enable();
 	if (!current_set_polling_and_test()) {
 		unsigned int loop_count = 0;
 		u64 limit;
@@ -36,6 +36,8 @@ static int __cpuidle poll_idle(struct cp
 			}
 		}
 	}
+	raw_local_irq_disable();
+
 	current_clr_polling();
 
 	return index;




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344050.569651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5d-0003yD-Tv; Wed, 08 Jun 2022 14:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344050.569651; Wed, 08 Jun 2022 14:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5d-0003x6-Mu; Wed, 08 Jun 2022 14:55:21 +0000
Received: by outflank-mailman (input) for mailman id 344050;
 Wed, 08 Jun 2022 14:54:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy6-0004Sj-SL
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:34 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6cd9b4e-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-00ChZr-Mr; Wed, 08 Jun 2022 14:46:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0C581302F55;
 Wed,  8 Jun 2022 16:46:24 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id C77CE20C119BA; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6cd9b4e-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=HzY3jBvKRSGcnvMJEPYny4/0n/oTUbqDiv/WYyIgET0=; b=sA+18VAK88SPXdLOpMza+rOEfP
	LIZ49BFC7l9BHr/9lFK8wXWo46KZqkiByud1HSMZeZ+BUYAV1ia+Yhjd7ltAYMQ5wWfxb2f6aW9XN
	LaYbqONFU5epqLIj3qUQtYBecrUdqqZByYp3TXTApbtEn9PG18xAyF+bW9M0QZ0Hl1P3HfTU4flDJ
	/60dHZ76Xt7rMxgX4MVdhQrPOIHu7jlHBXo6lM/0rE/s1QqIBAAOq4lIO8uNFmK6Mus0AzeHFEaFf
	95DVCFKqB4yKLaIMK8VXS/RFhvqO+ARWFtTz41dKNkpurhwu+bIx/YcQ0H56dkcHsKj/6fk/1/KoL
	4ucMtSnA==;
Message-ID: <20220608144518.136731332@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:58 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 35/36] cpuidle,powerdomain: Remove trace_.*_rcuidle()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

OMAP was the one and only user.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/powerdomain.c |   10 +++++-----
 drivers/base/power/runtime.c      |   24 ++++++++++++------------
 2 files changed, 17 insertions(+), 17 deletions(-)

--- a/arch/arm/mach-omap2/powerdomain.c
+++ b/arch/arm/mach-omap2/powerdomain.c
@@ -187,9 +187,9 @@ static int _pwrdm_state_switch(struct po
 			trace_state = (PWRDM_TRACE_STATES_FLAG |
 				       ((next & OMAP_POWERSTATE_MASK) << 8) |
 				       ((prev & OMAP_POWERSTATE_MASK) << 0));
-			trace_power_domain_target_rcuidle(pwrdm->name,
-							  trace_state,
-							  raw_smp_processor_id());
+			trace_power_domain_target(pwrdm->name,
+						  trace_state,
+						  raw_smp_processor_id());
 		}
 		break;
 	default:
@@ -541,8 +541,8 @@ int pwrdm_set_next_pwrst(struct powerdom
 
 	if (arch_pwrdm && arch_pwrdm->pwrdm_set_next_pwrst) {
 		/* Trace the pwrdm desired target state */
-		trace_power_domain_target_rcuidle(pwrdm->name, pwrst,
-						  raw_smp_processor_id());
+		trace_power_domain_target(pwrdm->name, pwrst,
+					  raw_smp_processor_id());
 		/* Program the pwrdm desired target state */
 		ret = arch_pwrdm->pwrdm_set_next_pwrst(pwrdm, pwrst);
 	}
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -442,7 +442,7 @@ static int rpm_idle(struct device *dev,
 	int (*callback)(struct device *);
 	int retval;
 
-	trace_rpm_idle_rcuidle(dev, rpmflags);
+	trace_rpm_idle(dev, rpmflags);
 	retval = rpm_check_suspend_allowed(dev);
 	if (retval < 0)
 		;	/* Conditions are wrong. */
@@ -481,7 +481,7 @@ static int rpm_idle(struct device *dev,
 			dev->power.request_pending = true;
 			queue_work(pm_wq, &dev->power.work);
 		}
-		trace_rpm_return_int_rcuidle(dev, _THIS_IP_, 0);
+		trace_rpm_return_int(dev, _THIS_IP_, 0);
 		return 0;
 	}
 
@@ -493,7 +493,7 @@ static int rpm_idle(struct device *dev,
 	wake_up_all(&dev->power.wait_queue);
 
  out:
-	trace_rpm_return_int_rcuidle(dev, _THIS_IP_, retval);
+	trace_rpm_return_int(dev, _THIS_IP_, retval);
 	return retval ? retval : rpm_suspend(dev, rpmflags | RPM_AUTO);
 }
 
@@ -557,7 +557,7 @@ static int rpm_suspend(struct device *de
 	struct device *parent = NULL;
 	int retval;
 
-	trace_rpm_suspend_rcuidle(dev, rpmflags);
+	trace_rpm_suspend(dev, rpmflags);
 
  repeat:
 	retval = rpm_check_suspend_allowed(dev);
@@ -708,7 +708,7 @@ static int rpm_suspend(struct device *de
 	}
 
  out:
-	trace_rpm_return_int_rcuidle(dev, _THIS_IP_, retval);
+	trace_rpm_return_int(dev, _THIS_IP_, retval);
 
 	return retval;
 
@@ -760,7 +760,7 @@ static int rpm_resume(struct device *dev
 	struct device *parent = NULL;
 	int retval = 0;
 
-	trace_rpm_resume_rcuidle(dev, rpmflags);
+	trace_rpm_resume(dev, rpmflags);
 
  repeat:
 	if (dev->power.runtime_error) {
@@ -925,7 +925,7 @@ static int rpm_resume(struct device *dev
 		spin_lock_irq(&dev->power.lock);
 	}
 
-	trace_rpm_return_int_rcuidle(dev, _THIS_IP_, retval);
+	trace_rpm_return_int(dev, _THIS_IP_, retval);
 
 	return retval;
 }
@@ -1081,7 +1081,7 @@ int __pm_runtime_idle(struct device *dev
 		if (retval < 0) {
 			return retval;
 		} else if (retval > 0) {
-			trace_rpm_usage_rcuidle(dev, rpmflags);
+			trace_rpm_usage(dev, rpmflags);
 			return 0;
 		}
 	}
@@ -1119,7 +1119,7 @@ int __pm_runtime_suspend(struct device *
 		if (retval < 0) {
 			return retval;
 		} else if (retval > 0) {
-			trace_rpm_usage_rcuidle(dev, rpmflags);
+			trace_rpm_usage(dev, rpmflags);
 			return 0;
 		}
 	}
@@ -1202,7 +1202,7 @@ int pm_runtime_get_if_active(struct devi
 	} else {
 		retval = atomic_inc_not_zero(&dev->power.usage_count);
 	}
-	trace_rpm_usage_rcuidle(dev, 0);
+	trace_rpm_usage(dev, 0);
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 
 	return retval;
@@ -1566,7 +1566,7 @@ void pm_runtime_allow(struct device *dev
 	if (ret == 0)
 		rpm_idle(dev, RPM_AUTO | RPM_ASYNC);
 	else if (ret > 0)
-		trace_rpm_usage_rcuidle(dev, RPM_AUTO | RPM_ASYNC);
+		trace_rpm_usage(dev, RPM_AUTO | RPM_ASYNC);
 
  out:
 	spin_unlock_irq(&dev->power.lock);
@@ -1635,7 +1635,7 @@ static void update_autosuspend(struct de
 			atomic_inc(&dev->power.usage_count);
 			rpm_resume(dev, 0);
 		} else {
-			trace_rpm_usage_rcuidle(dev, 0);
+			trace_rpm_usage(dev, 0);
 		}
 	}
 




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344059.569656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-00044y-7U; Wed, 08 Jun 2022 14:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344059.569656; Wed, 08 Jun 2022 14:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-000431-0q; Wed, 08 Jun 2022 14:55:22 +0000
Received: by outflank-mailman (input) for mailman id 344059;
 Wed, 08 Jun 2022 14:54:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyI-0004Sj-1b
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:46 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4e13d87-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:45 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx5-0066DR-V8; Wed, 08 Jun 2022 14:46:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D41E1302F3B;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B77DB20C119AE; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4e13d87-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=OtI5HbrqLesCtm4u4WFjeviiqkcvkrIAtN470m5UDAY=; b=YxAxQQTnVMjXt5F5qT80bkgPh5
	ixluP6Zz3hO7fx8vJIQrj4C5TqUSGKakaJJsFYPkJyX/quow9i3Q1RHxs4fPIHvQqAI4ItoUKSg9H
	Q96BC3qMYMoqP4qCHwfqrdYRb4hE4ZTb5URJUGq6ct2gnpSWNNz0/tb6QgN8X/3EG+8+Hn8P0Jpfj
	s1Mx2wUY/shYiCwGx373RWVVV8k86WJZX6RUBzcpDy2OG/HxZoSIcohcw28STfEH3IheSXP1uOHaG
	ZHJWiPNFWu4NLZ6eL7MRQvDj+sp0KsDvF/li+47ulaB6B0czYcAARGWueTqt7adXuGkZdkHCdgM5d
	rhVbSspQ==;
Message-ID: <20220608144517.885263942@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:54 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 31/36] cpuidle,acpi: Make noinstr clean
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: io_idle+0xc: call to __inb.isra.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_idle_enter+0xfe: call to num_online_cpus() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_idle_enter+0x115: call to acpi_idle_fallback_to_c1.isra.0() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/shared/io.h |    4 ++--
 drivers/acpi/processor_idle.c    |    2 +-
 include/linux/cpumask.h          |    4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

--- a/arch/x86/include/asm/shared/io.h
+++ b/arch/x86/include/asm/shared/io.h
@@ -5,13 +5,13 @@
 #include <linux/types.h>
 
 #define BUILDIO(bwl, bw, type)						\
-static inline void __out##bwl(type value, u16 port)			\
+static __always_inline void __out##bwl(type value, u16 port)		\
 {									\
 	asm volatile("out" #bwl " %" #bw "0, %w1"			\
 		     : : "a"(value), "Nd"(port));			\
 }									\
 									\
-static inline type __in##bwl(u16 port)					\
+static __always_inline type __in##bwl(u16 port)				\
 {									\
 	type value;							\
 	asm volatile("in" #bwl " %w1, %" #bw "0"			\
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -593,7 +593,7 @@ static int acpi_idle_play_dead(struct cp
 	return 0;
 }
 
-static bool acpi_idle_fallback_to_c1(struct acpi_processor *pr)
+static __always_inline bool acpi_idle_fallback_to_c1(struct acpi_processor *pr)
 {
 	return IS_ENABLED(CONFIG_HOTPLUG_CPU) && !pr->flags.has_cst &&
 		!(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED);
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -908,9 +908,9 @@ static inline const struct cpumask *get_
  * concurrent CPU hotplug operations unless invoked from a cpuhp_lock held
  * region.
  */
-static inline unsigned int num_online_cpus(void)
+static __always_inline unsigned int num_online_cpus(void)
 {
-	return atomic_read(&__num_online_cpus);
+	return arch_atomic_read(&__num_online_cpus);
 }
 #define num_possible_cpus()	cpumask_weight(cpu_possible_mask)
 #define num_present_cpus()	cpumask_weight(cpu_present_mask)




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344048.569644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5d-0003uj-IX; Wed, 08 Jun 2022 14:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344048.569644; Wed, 08 Jun 2022 14:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5d-0003uc-Fm; Wed, 08 Jun 2022 14:55:21 +0000
Received: by outflank-mailman (input) for mailman id 344048;
 Wed, 08 Jun 2022 14:54:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyC-0004T5-01
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:40 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ed05865c-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:32 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-0066Bb-Bb; Wed, 08 Jun 2022 14:46:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C3582302E0A;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 6647820C10EA3; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed05865c-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=9yd1uW/Ha4lFVJl9g3NG8xPj6tkgCPp8FbdRM3bEBHk=; b=fc+lbbvKwwN4ZJ7iUz0gOd6Kw1
	bWZHe3UXTKQ37VnD0JIa/nKeK2NWAC5dW50BJVMiH1O0g/NAQki27BvHuVbo9/MbJioA+l8bQlAeg
	l4w6/u19HStC/e7hYn9qn64X+GhFM03qFAT/CAxcvj/R6DIolTBBDV6RBWD9rGeBRWsCeD6c+DPTN
	hfdniSGhPewVHFKY3H9c5xdzdll1zcPQGSamYV1TPyhmdCITIhSKQrv2bNMKa90Crt7jnx4M6v4J0
	gzF0aqTx0qtLc4aL7xQtl8DpGTt8VZizoHoOYrXPnJORVcj1rNrY3g+XpMSApiCVfrx62z+2ZhBaT
	W5Yf5pVQ==;
Message-ID: <20220608144516.614797628@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:34 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 11/36] cpuidle,armada: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable RCU
again before going idle, is daft.


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/cpuidle/cpuidle-mvebu-v7.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/drivers/cpuidle/cpuidle-mvebu-v7.c
+++ b/drivers/cpuidle/cpuidle-mvebu-v7.c
@@ -36,7 +36,10 @@ static int mvebu_v7_enter_idle(struct cp
 	if (drv->states[index].flags & MVEBU_V7_FLAG_DEEP_IDLE)
 		deepidle = true;
 
+	rcu_idle_enter();
 	ret = mvebu_v7_cpu_suspend(deepidle);
+	rcu_idle_exit();
+
 	cpu_pm_exit();
 
 	if (ret)
@@ -49,6 +52,7 @@ static struct cpuidle_driver armadaxp_id
 	.name			= "armada_xp_idle",
 	.states[0]		= ARM_CPUIDLE_WFI_STATE,
 	.states[1]		= {
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.enter			= mvebu_v7_enter_idle,
 		.exit_latency		= 100,
 		.power_usage		= 50,
@@ -57,6 +61,7 @@ static struct cpuidle_driver armadaxp_id
 		.desc			= "CPU power down",
 	},
 	.states[2]		= {
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.enter			= mvebu_v7_enter_idle,
 		.exit_latency		= 1000,
 		.power_usage		= 5,
@@ -72,6 +77,7 @@ static struct cpuidle_driver armada370_i
 	.name			= "armada_370_idle",
 	.states[0]		= ARM_CPUIDLE_WFI_STATE,
 	.states[1]		= {
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.enter			= mvebu_v7_enter_idle,
 		.exit_latency		= 100,
 		.power_usage		= 5,
@@ -87,6 +93,7 @@ static struct cpuidle_driver armada38x_i
 	.name			= "armada_38x_idle",
 	.states[0]		= ARM_CPUIDLE_WFI_STATE,
 	.states[1]		= {
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.enter			= mvebu_v7_enter_idle,
 		.exit_latency		= 10,
 		.power_usage		= 5,




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344072.569663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-0004AN-ID; Wed, 08 Jun 2022 14:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344072.569663; Wed, 08 Jun 2022 14:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-00048u-At; Wed, 08 Jun 2022 14:55:22 +0000
Received: by outflank-mailman (input) for mailman id 344072;
 Wed, 08 Jun 2022 14:54:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy5-0004Sj-S2
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:33 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e69f6cbf-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywww-00ChVp-CO; Wed, 08 Jun 2022 14:46:22 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 5CD17300779;
 Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 36D1B20C0F9A1; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e69f6cbf-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=YOWU43M4IjsvcmJXQwOIFJh+WuCbnSytdcIdnv65DkY=; b=XCDVN9tj9Nd6cCt5yPaK9O3amI
	ci7YUUM2v+28cn8J99wm4KhjbAdAoCxm0ZkgOdpCkJChjzfZ+C735h+hlVrx0OpQ7Sz06hLL2cROZ
	Bv8WVo508YV6l6eZBA29iCGnbSDF9FtDcrv+aceuzuJTig8MZggSg4i4uwwpFk8A3wQMqX47uf/mT
	8zfnGc1VxoVB+WmWEDtxTYLuSF41PTkkxW16U+FTxE7204ryh1jVbHwBdUZ4quIaNuIHjqbsN+C7n
	+Iep3h1zsyXP0xbxfqaVi9r1Yzcival+fbwxlxbx1qrowfPg2m7YfJQdX7yj+Cn50NoW7JIOu6shq
	BsRPsiUA==;
Message-ID: <20220608144515.984413284@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:24 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 01/36] x86/perf/amd: Remove tracing from perf_lopwr_cb()
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

perf_lopwr_cb() is called from the idle routines; RCU is not watching
there, so we must not enter tracing.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/events/amd/brs.c         |   13 +++++--------
 arch/x86/include/asm/perf_event.h |    2 +-
 2 files changed, 6 insertions(+), 9 deletions(-)

--- a/arch/x86/events/amd/brs.c
+++ b/arch/x86/events/amd/brs.c
@@ -41,18 +41,15 @@ static inline unsigned int brs_to(int id
 	return MSR_AMD_SAMP_BR_FROM + 2 * idx + 1;
 }
 
-static inline void set_debug_extn_cfg(u64 val)
+static __always_inline void set_debug_extn_cfg(u64 val)
 {
 	/* bits[4:3] must always be set to 11b */
-	wrmsrl(MSR_AMD_DBG_EXTN_CFG, val | 3ULL << 3);
+	__wrmsr(MSR_AMD_DBG_EXTN_CFG, val | 3ULL << 3, val >> 32);
 }
 
-static inline u64 get_debug_extn_cfg(void)
+static __always_inline u64 get_debug_extn_cfg(void)
 {
-	u64 val;
-
-	rdmsrl(MSR_AMD_DBG_EXTN_CFG, val);
-	return val;
+	return __rdmsr(MSR_AMD_DBG_EXTN_CFG);
 }
 
 static bool __init amd_brs_detect(void)
@@ -338,7 +335,7 @@ void amd_pmu_brs_sched_task(struct perf_
  * called from ACPI processor_idle.c or acpi_pad.c
  * with interrupts disabled
  */
-void perf_amd_brs_lopwr_cb(bool lopwr_in)
+void noinstr perf_amd_brs_lopwr_cb(bool lopwr_in)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	union amd_debug_extn_cfg cfg;
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -554,7 +554,7 @@ extern void perf_amd_brs_lopwr_cb(bool l
 
 DECLARE_STATIC_CALL(perf_lopwr_cb, perf_amd_brs_lopwr_cb);
 
-static inline void perf_lopwr_cb(bool lopwr_in)
+static __always_inline void perf_lopwr_cb(bool lopwr_in)
 {
 	static_call_mod(perf_lopwr_cb)(lopwr_in);
 }




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344079.569668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-0004IK-U0; Wed, 08 Jun 2022 14:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344079.569668; Wed, 08 Jun 2022 14:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5e-0004FU-MT; Wed, 08 Jun 2022 14:55:22 +0000
Received: by outflank-mailman (input) for mailman id 344079;
 Wed, 08 Jun 2022 14:54:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy4-0004Sj-Rn
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:32 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e69bcd4f-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-00ChZl-GY; Wed, 08 Jun 2022 14:46:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D4501302F3F;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id BC41C20C119B1; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e69bcd4f-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=Gj+eSw46X5SDCGNw8cDv0ubviLBcGEVy5VW0lj+ATKg=; b=PcbQR+3su0P/C8pc1s793bBA2j
	ktQ235pkDlbAkwOAsWLsW7DM1mrfh1vhIzRbfNYR5F+8ain6CH9jgWYuoxYcl9uL+ocL0b+8XEOXt
	24PIItx96i9m0jKeLIwsBN7wi+qRRuwgHbXmsn9vkkOSl/hQ6c9MO4xk/n4kODjnZxuF/ldFEvFng
	RKNgnZ2b45WmOjCX7CJ/vNjd9ZJz2ATMfgbLbqDohT3ILrAVWpvPd1p1QwjA2MxqCCmR1M8FMqgQ/
	/DAneELoezzt6eDUpdwXCRtAQhyWretA4G2ZWAYAHTi4PqTwawLXUc26HjQgfO3qwapWqCjS2Q5FP
	3sSC5Q9g==;
Message-ID: <20220608144517.948600553@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:55 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 32/36] ftrace: WARN on rcuidle
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

CONFIG_GENERIC_ENTRY disallows any and all tracing when RCU isn't
watching.

XXX: if s390 (the only other GENERIC_ENTRY user as of this writing)
isn't comfortable with this, we could switch to
HAVE_NOINSTR_VALIDATION, which is x86_64-only at the moment.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/tracepoint.h |   13 ++++++++++++-
 kernel/trace/trace.c       |    3 +++
 2 files changed, 15 insertions(+), 1 deletion(-)

--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -178,6 +178,16 @@ static inline struct tracepoint *tracepo
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
 /*
+ * CONFIG_GENERIC_ENTRY archs are expected to have sanitized entry and idle
+ * code that disallow any/all tracing/instrumentation when RCU isn't watching.
+ */
+#ifdef CONFIG_GENERIC_ENTRY
+#define RCUIDLE_COND(rcuidle)	(rcuidle)
+#else
+#define RCUIDLE_COND(rcuidle)	(rcuidle && in_nmi())
+#endif
+
+/*
  * it_func[0] is never NULL because there is at least one element in the array
  * when the array itself is non NULL.
  */
@@ -189,7 +199,8 @@ static inline struct tracepoint *tracepo
 			return;						\
 									\
 		/* srcu can't be used from NMI */			\
-		WARN_ON_ONCE(rcuidle && in_nmi());			\
+		if (WARN_ON_ONCE(RCUIDLE_COND(rcuidle)))		\
+			return;						\
 									\
 		/* keep srcu and sched-rcu usage consistent */		\
 		preempt_disable_notrace();				\
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3104,6 +3104,9 @@ void __trace_stack(struct trace_array *t
 		return;
 	}
 
+	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_GENERIC_ENTRY)))
+		return;
+
 	/*
 	 * When an NMI triggers, RCU is enabled via rcu_nmi_enter(),
 	 * but if the above rcu_is_watching() failed, then the NMI




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344093.569680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5f-0004bR-U7; Wed, 08 Jun 2022 14:55:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344093.569680; Wed, 08 Jun 2022 14:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5f-0004Yz-LO; Wed, 08 Jun 2022 14:55:23 +0000
Received: by outflank-mailman (input) for mailman id 344093;
 Wed, 08 Jun 2022 14:54:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyA-0004Sj-T5
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:39 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8d1fd64-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:24 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-00ChZm-Gu; Wed, 08 Jun 2022 14:46:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D3A8C302F34;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B3C6F20C119AC; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8d1fd64-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=rYkYazBjkfDgerhAXoQmSQo0bqbhUZdR/pxZJBUI/ng=; b=vu07N0jdKUpflmoXw9ffNL6Z73
	erDS9h9Rs2elipxuuO4E6oiZH+Xt8Kduq8uIy0K8+Ab0UB0Ng85EAEgq7IePggGQkxci90i2DCS1y
	8hasm97zpwVR42qXyoEVcdAAXC0o1yjBEf2zXDXzDeV7u6KYIrksdlid9J/WVIYBdEcU6d6Gvdj7E
	u0DN3Ge62DOP6wTTVj2hf27+jjO+Ouhx1rAS3CxarHAIYXw+7tlexjGWTNw7nu0E3iu6Ok6OCplMs
	JoqeWWF7u27G3D9d8dvjFXenflW76lfCYePvRuxdlq/eC0M85xlj/iHefoDUVMnKfItqMCLHKrmNd
	qd303QFw==;
Message-ID: <20220608144517.822208471@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:53 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 30/36] cpuidle,nospec: Make noinstr clean
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: mwait_idle+0x47: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xa2: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle+0x91: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_s2idle+0x8c: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_irq+0xaa: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/nospec-branch.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -310,7 +310,7 @@ static __always_inline void mds_user_cle
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
-static inline void mds_idle_clear_cpu_buffers(void)
+static __always_inline void mds_idle_clear_cpu_buffers(void)
 {
 	if (static_branch_likely(&mds_idle_clear))
 		mds_clear_cpu_buffers();




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344095.569689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5g-0004ok-Ha; Wed, 08 Jun 2022 14:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344095.569689; Wed, 08 Jun 2022 14:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5g-0004lT-8t; Wed, 08 Jun 2022 14:55:24 +0000
Received: by outflank-mailman (input) for mailman id 344095;
 Wed, 08 Jun 2022 14:54:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyD-0004T5-02
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:41 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f03684e3-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:37 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx6-0066DZ-9s; Wed, 08 Jun 2022 14:46:33 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D3AE2302F39;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B05F720C119A8; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f03684e3-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=nZ3sIP+AfjwnKNI5o4cLuwY0yE/41XdF+GjWsz9OuTg=; b=rY0rNdiivLzvGTKxHIjKArJtGv
	JQMkawTJj8SQEqtEJ3xPCf08PRNkIpqEuLBTMQig+GuhSmxqEeLxY3bBYBSL1InjLeSRd7FuiBfFT
	ff8s0zfY7cn4ya1+C/p5LG6pommI01eYjXpKZ5a0O8ke3WwQmE3MEvmTU5SP1xjU/1b8dlkt8zdBB
	dfl1YfdLv5JzxR/V9IHBzV1JRXdilUsvPgqYGue8e4FFC4bu+f9F4ROYoUtE/n02AU4fyepgF1L3B
	VBo784ESh/4eySw45ErTG5Vm1/QFi4Ovz8KTGnWPg9jP5ZuBq92Wq48RzyoKTm6W2zDIfJ6TRHldf
	0pxIbpdQ==;
Message-ID: <20220608144517.759631860@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:52 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 29/36] cpuidle,xenpv: Make more PARAVIRT_XXL noinstr clean
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/paravirt.h      |    6 ++++--
 arch/x86/include/asm/special_insns.h |    4 ++--
 arch/x86/include/asm/xen/hypercall.h |    2 +-
 arch/x86/kernel/paravirt.c           |   14 ++++++++++++--
 arch/x86/xen/enlighten_pv.c          |    2 +-
 arch/x86/xen/irq.c                   |    2 +-
 6 files changed, 21 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -168,7 +168,7 @@ static inline void __write_cr4(unsigned
 	PVOP_VCALL1(cpu.write_cr4, x);
 }
 
-static inline void arch_safe_halt(void)
+static __always_inline void arch_safe_halt(void)
 {
 	PVOP_VCALL0(irq.safe_halt);
 }
@@ -178,7 +178,9 @@ static inline void halt(void)
 	PVOP_VCALL0(irq.halt);
 }
 
-static inline void wbinvd(void)
+extern noinstr void pv_native_wbinvd(void);
+
+static __always_inline void wbinvd(void)
 {
 	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
 }
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -115,7 +115,7 @@ static inline void wrpkru(u32 pkru)
 }
 #endif
 
-static inline void native_wbinvd(void)
+static __always_inline void native_wbinvd(void)
 {
 	asm volatile("wbinvd": : :"memory");
 }
@@ -179,7 +179,7 @@ static inline void __write_cr4(unsigned
 	native_write_cr4(x);
 }
 
-static inline void wbinvd(void)
+static __always_inline void wbinvd(void)
 {
 	native_wbinvd();
 }
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -382,7 +382,7 @@ MULTI_stack_switch(struct multicall_entr
 }
 #endif
 
-static inline int
+static __always_inline int
 HYPERVISOR_sched_op(int cmd, void *arg)
 {
 	return _hypercall2(int, sched_op, cmd, arg);
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -233,6 +233,11 @@ static noinstr void pv_native_set_debugr
 	native_set_debugreg(regno, val);
 }
 
+noinstr void pv_native_wbinvd(void)
+{
+	native_wbinvd();
+}
+
 static noinstr void pv_native_irq_enable(void)
 {
 	native_irq_enable();
@@ -242,6 +247,11 @@ static noinstr void pv_native_irq_disabl
 {
 	native_irq_disable();
 }
+
+static noinstr void pv_native_safe_halt(void)
+{
+	native_safe_halt();
+}
 #endif
 
 enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
@@ -273,7 +283,7 @@ struct paravirt_patch_template pv_ops =
 	.cpu.read_cr0		= native_read_cr0,
 	.cpu.write_cr0		= native_write_cr0,
 	.cpu.write_cr4		= native_write_cr4,
-	.cpu.wbinvd		= native_wbinvd,
+	.cpu.wbinvd		= pv_native_wbinvd,
 	.cpu.read_msr		= native_read_msr,
 	.cpu.write_msr		= native_write_msr,
 	.cpu.read_msr_safe	= native_read_msr_safe,
@@ -307,7 +317,7 @@ struct paravirt_patch_template pv_ops =
 	.irq.save_fl		= __PV_IS_CALLEE_SAVE(native_save_fl),
 	.irq.irq_disable	= __PV_IS_CALLEE_SAVE(pv_native_irq_disable),
 	.irq.irq_enable		= __PV_IS_CALLEE_SAVE(pv_native_irq_enable),
-	.irq.safe_halt		= native_safe_halt,
+	.irq.safe_halt		= pv_native_safe_halt,
 	.irq.halt		= native_halt,
 #endif /* CONFIG_PARAVIRT_XXL */
 
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1019,7 +1019,7 @@ static const typeof(pv_ops) xen_cpu_ops
 
 		.write_cr4 = xen_write_cr4,
 
-		.wbinvd = native_wbinvd,
+		.wbinvd = pv_native_wbinvd,
 
 		.read_msr = xen_read_msr,
 		.write_msr = xen_write_msr,
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -24,7 +24,7 @@ noinstr void xen_force_evtchn_callback(v
 	(void)HYPERVISOR_xen_version(0, NULL);
 }
 
-static void xen_safe_halt(void)
+static noinstr void xen_safe_halt(void)
 {
 	/* Blocking includes an implicit local_irq_enable(). */
 	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344101.569700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5h-00050I-Ag; Wed, 08 Jun 2022 14:55:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344101.569700; Wed, 08 Jun 2022 14:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5g-0004x8-Uv; Wed, 08 Jun 2022 14:55:24 +0000
Received: by outflank-mailman (input) for mailman id 344101;
 Wed, 08 Jun 2022 14:54:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy7-0004Sj-Sa
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:35 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7fd822c-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:23 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx3-00ChYC-JV; Wed, 08 Jun 2022 14:46:29 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A3BE9302EEA;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 9DB8020C10EDE; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7fd822c-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=+nBTmu8be4dn87X4h4i4v5hXqWi/FDQJ2e0V2G49JnI=; b=c0sBpB4EJtpUlMJmM1BtzQg1jI
	4PcdDgJzYOowDoMnmKAFYhNbpvRYzsV7kRFyLJf62+ZuPqPEFAxeBvm38jC9EIzinrmdPRpzRnIth
	ZjcGrFN3FC2OLDiPw1MQ9GiO/ueu64sxGeUIVMfNNGZ1I6IAzUSGlYS50yZYH/qpVkumUW9sz9ylw
	NiQ4O2XB8H8Hk/RXSmU/7i/042dpY1RsaqJxDbzJ3hgd+h+vW2daHvNujHxeLDb2O3c/EAvt6fmpH
	/CT3Hd5fJEQuF8uryywQG23Lx98/8viggcr1njgwvu2L7IHYIVv/w/rdZa7aTRE+XIf9ro7DYS6a3
	okJs1Mxg==;
Message-ID: <20220608144517.507286638@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:48 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 25/36] time/tick-broadcast: Remove RCU_NONIDLE usage
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

No callers are left that have already disabled RCU, so the RCU_NONIDLE() wrapper is no longer needed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/time/tick-broadcast-hrtimer.c |   29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

--- a/kernel/time/tick-broadcast-hrtimer.c
+++ b/kernel/time/tick-broadcast-hrtimer.c
@@ -56,25 +56,20 @@ static int bc_set_next(ktime_t expires,
 	 * hrtimer callback function is currently running, then
 	 * hrtimer_start() cannot move it and the timer stays on the CPU on
 	 * which it is assigned at the moment.
+	 */
+	hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
+	/*
+	 * The core tick broadcast mode expects bc->bound_on to be set
+	 * correctly to prevent a CPU which has the broadcast hrtimer
+	 * armed from going deep idle.
 	 *
-	 * As this can be called from idle code, the hrtimer_start()
-	 * invocation has to be wrapped with RCU_NONIDLE() as
-	 * hrtimer_start() can call into tracing.
+	 * As tick_broadcast_lock is held, nothing can change the cpu
+	 * base which was just established in hrtimer_start() above. So
+	 * the below access is safe even without holding the hrtimer
+	 * base lock.
 	 */
-	RCU_NONIDLE( {
-		hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
-		/*
-		 * The core tick broadcast mode expects bc->bound_on to be set
-		 * correctly to prevent a CPU which has the broadcast hrtimer
-		 * armed from going deep idle.
-		 *
-		 * As tick_broadcast_lock is held, nothing can change the cpu
-		 * base which was just established in hrtimer_start() above. So
-		 * the below access is safe even without holding the hrtimer
-		 * base lock.
-		 */
-		bc->bound_on = bctimer.base->cpu_base->cpu;
-	} );
+	bc->bound_on = bctimer.base->cpu_base->cpu;
+
 	return 0;
 }
 


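The patch above deletes an RCU_NONIDLE() wrapper that was only needed while callers could reach bc_set_next() with RCU disabled. As a rough userspace sketch (all names below are stand-ins, not the kernel implementations), RCU_NONIDLE() forces RCU to "watch" around code that may trace, which costs a transition pair; once every caller already has RCU watching, the direct call is safe and the transitions disappear:

```c
/* Userspace sketch, NOT kernel code: mock the RCU idle-transition helpers
 * to show what RCU_NONIDLE() buys and why it becomes unnecessary once no
 * caller runs with RCU disabled. All identifiers here are stubs. */
#include <assert.h>

static int rcu_watching;        /* 1: RCU usable (tracing ok); 0: idle/EQS */
static int nonidle_transitions; /* enter/exit pairs forced by the wrapper */
static int traces_emitted;

static void rcu_irq_enter_irqson(void) { rcu_watching = 1; nonidle_transitions++; }
static void rcu_irq_exit_irqson(void)  { rcu_watching = 0; }

/* The old pattern: force RCU to watch around code that may call tracing. */
#define RCU_NONIDLE(code)              \
	do {                           \
		rcu_irq_enter_irqson();\
		do { code; } while (0);\
		rcu_irq_exit_irqson(); \
	} while (0)

static void hrtimer_start_stub(void)
{
	assert(rcu_watching);  /* tracing requires RCU to be watching */
	traces_emitted++;
}

/* Exercise both patterns; returns the total number of traced calls. */
static int demo(void)
{
	rcu_watching = 0;
	RCU_NONIDLE(hrtimer_start_stub()); /* old: wrapper required in idle */

	rcu_watching = 1;                  /* after the series: RCU already on */
	hrtimer_start_stub();              /* direct call, no extra transitions */
	return traces_emitted;
}
```

The second call succeeds without the wrapper (and without the extra transition), which is exactly the situation the patch relies on.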


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344105.569713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5i-0005Gu-K5; Wed, 08 Jun 2022 14:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344105.569713; Wed, 08 Jun 2022 14:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5h-0005DO-Vg; Wed, 08 Jun 2022 14:55:25 +0000
Received: by outflank-mailman (input) for mailman id 344105;
 Wed, 08 Jun 2022 14:54:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywy8-0004Sj-Sv
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:37 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6a42499-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywx2-00ChXF-KJ; Wed, 08 Jun 2022 14:46:28 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9C476302E7F;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 84CF720C10ECC; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6a42499-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=aqLG9Lqq+GxkmTG8UwvHapEbGSsmV7F2cXGrE5BS5NY=; b=rPiUyMGWkK4HyA4MoXkM+UgqQ6
	VaXgt6RcOydOxoUnaRfCfMbAtBJ/29g86kj1c39TrjyrRRw5LNAXi6xAiwbQsca2Qg3+3equHWL8Q
	BhQmKdDrNUwB5m2NMymYGWNyhw4vbbfRaNwVesyIacBXyasPDLHjRfwj2fnza/c1itcSej3QP8np4
	N1THP4hm8m55mZuiz8nir3RlMX7ANrUAqVzLqiDWZ5C5oo3J/NLzmvKS2fSk9/Gc2Jc7QCKQQFsyN
	BKw9k4A5B+34GthUiYDWMzBwEJ7C7YClTrI/O4L3ITlq7VQuCJHusFMO2qIDniFPVC3uGQ2ZKKFb/
	QUHvCXdQ==;
Message-ID: <20220608144517.124597382@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:42 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 19/36] objtool/idle: Validate __cpuidle code as noinstr
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Idle code is much like entry code in that RCU isn't available. As such,
teach objtool to validate __cpuidle code the same way it validates noinstr code.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/alpha/kernel/vmlinux.lds.S      |    1 -
 arch/arc/kernel/vmlinux.lds.S        |    1 -
 arch/arm/include/asm/vmlinux.lds.h   |    1 -
 arch/arm64/kernel/vmlinux.lds.S      |    1 -
 arch/csky/kernel/vmlinux.lds.S       |    1 -
 arch/hexagon/kernel/vmlinux.lds.S    |    1 -
 arch/ia64/kernel/vmlinux.lds.S       |    1 -
 arch/loongarch/kernel/vmlinux.lds.S  |    1 -
 arch/m68k/kernel/vmlinux-nommu.lds   |    1 -
 arch/m68k/kernel/vmlinux-std.lds     |    1 -
 arch/m68k/kernel/vmlinux-sun3.lds    |    1 -
 arch/microblaze/kernel/vmlinux.lds.S |    1 -
 arch/mips/kernel/vmlinux.lds.S       |    1 -
 arch/nios2/kernel/vmlinux.lds.S      |    1 -
 arch/openrisc/kernel/vmlinux.lds.S   |    1 -
 arch/parisc/kernel/vmlinux.lds.S     |    1 -
 arch/powerpc/kernel/vmlinux.lds.S    |    1 -
 arch/riscv/kernel/vmlinux-xip.lds.S  |    1 -
 arch/riscv/kernel/vmlinux.lds.S      |    1 -
 arch/s390/kernel/vmlinux.lds.S       |    1 -
 arch/sh/kernel/vmlinux.lds.S         |    1 -
 arch/sparc/kernel/vmlinux.lds.S      |    1 -
 arch/um/kernel/dyn.lds.S             |    1 -
 arch/um/kernel/uml.lds.S             |    1 -
 arch/x86/include/asm/irqflags.h      |   11 ++++-------
 arch/x86/include/asm/mwait.h         |    2 +-
 arch/x86/kernel/vmlinux.lds.S        |    1 -
 arch/xtensa/kernel/vmlinux.lds.S     |    1 -
 include/asm-generic/vmlinux.lds.h    |    9 +++------
 include/linux/compiler_types.h       |    8 ++++++--
 include/linux/cpu.h                  |    3 ---
 kernel/module/main.c                 |    2 ++
 kernel/sched/idle.c                  |   15 +++++++++++++--
 tools/objtool/check.c                |   15 ++++++++++++++-
 34 files changed, 43 insertions(+), 48 deletions(-)

--- a/arch/alpha/kernel/vmlinux.lds.S
+++ b/arch/alpha/kernel/vmlinux.lds.S
@@ -27,7 +27,6 @@ SECTIONS
 		HEAD_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		*(.fixup)
 		*(.gnu.warning)
--- a/arch/arc/kernel/vmlinux.lds.S
+++ b/arch/arc/kernel/vmlinux.lds.S
@@ -85,7 +85,6 @@ SECTIONS
 		_stext = .;
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -96,7 +96,6 @@
 		SOFTIRQENTRY_TEXT					\
 		TEXT_TEXT						\
 		SCHED_TEXT						\
-		CPUIDLE_TEXT						\
 		LOCK_TEXT						\
 		KPROBES_TEXT						\
 		ARM_STUBS_TEXT						\
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -163,7 +163,6 @@ SECTIONS
 			ENTRY_TEXT
 			TEXT_TEXT
 			SCHED_TEXT
-			CPUIDLE_TEXT
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
--- a/arch/csky/kernel/vmlinux.lds.S
+++ b/arch/csky/kernel/vmlinux.lds.S
@@ -38,7 +38,6 @@ SECTIONS
 		SOFTIRQENTRY_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		*(.fixup)
--- a/arch/hexagon/kernel/vmlinux.lds.S
+++ b/arch/hexagon/kernel/vmlinux.lds.S
@@ -41,7 +41,6 @@ SECTIONS
 		IRQENTRY_TEXT
 		SOFTIRQENTRY_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		*(.fixup)
--- a/arch/ia64/kernel/vmlinux.lds.S
+++ b/arch/ia64/kernel/vmlinux.lds.S
@@ -51,7 +51,6 @@ SECTIONS {
 		__end_ivt_text = .;
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/loongarch/kernel/vmlinux.lds.S
+++ b/arch/loongarch/kernel/vmlinux.lds.S
@@ -40,7 +40,6 @@ SECTIONS
 	.text : {
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/m68k/kernel/vmlinux-nommu.lds
+++ b/arch/m68k/kernel/vmlinux-nommu.lds
@@ -48,7 +48,6 @@ SECTIONS {
 		IRQENTRY_TEXT
 		SOFTIRQENTRY_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		*(.fixup)
 		. = ALIGN(16);
--- a/arch/m68k/kernel/vmlinux-std.lds
+++ b/arch/m68k/kernel/vmlinux-std.lds
@@ -19,7 +19,6 @@ SECTIONS
 	IRQENTRY_TEXT
 	SOFTIRQENTRY_TEXT
 	SCHED_TEXT
-	CPUIDLE_TEXT
 	LOCK_TEXT
 	*(.fixup)
 	*(.gnu.warning)
--- a/arch/m68k/kernel/vmlinux-sun3.lds
+++ b/arch/m68k/kernel/vmlinux-sun3.lds
@@ -19,7 +19,6 @@ SECTIONS
 	IRQENTRY_TEXT
 	SOFTIRQENTRY_TEXT
 	SCHED_TEXT
-	CPUIDLE_TEXT
 	LOCK_TEXT
 	*(.fixup)
 	*(.gnu.warning)
--- a/arch/microblaze/kernel/vmlinux.lds.S
+++ b/arch/microblaze/kernel/vmlinux.lds.S
@@ -36,7 +36,6 @@ SECTIONS {
 		EXIT_TEXT
 		EXIT_CALL
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -61,7 +61,6 @@ SECTIONS
 	.text : {
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/nios2/kernel/vmlinux.lds.S
+++ b/arch/nios2/kernel/vmlinux.lds.S
@@ -24,7 +24,6 @@ SECTIONS
 	.text : {
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		IRQENTRY_TEXT
 		SOFTIRQENTRY_TEXT
--- a/arch/openrisc/kernel/vmlinux.lds.S
+++ b/arch/openrisc/kernel/vmlinux.lds.S
@@ -52,7 +52,6 @@ SECTIONS
           _stext = .;
 	  TEXT_TEXT
 	  SCHED_TEXT
-	  CPUIDLE_TEXT
 	  LOCK_TEXT
 	  KPROBES_TEXT
 	  IRQENTRY_TEXT
--- a/arch/parisc/kernel/vmlinux.lds.S
+++ b/arch/parisc/kernel/vmlinux.lds.S
@@ -86,7 +86,6 @@ SECTIONS
 		TEXT_TEXT
 		LOCK_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
 		SOFTIRQENTRY_TEXT
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -107,7 +107,6 @@ SECTIONS
 #endif
 		NOINSTR_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/riscv/kernel/vmlinux-xip.lds.S
+++ b/arch/riscv/kernel/vmlinux-xip.lds.S
@@ -39,7 +39,6 @@ SECTIONS
 		_stext = .;
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		ENTRY_TEXT
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -42,7 +42,6 @@ SECTIONS
 		_stext = .;
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		ENTRY_TEXT
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -42,7 +42,6 @@ SECTIONS
 		HEAD_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/sh/kernel/vmlinux.lds.S
+++ b/arch/sh/kernel/vmlinux.lds.S
@@ -29,7 +29,6 @@ SECTIONS
 		HEAD_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -50,7 +50,6 @@ SECTIONS
 		HEAD_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		IRQENTRY_TEXT
--- a/arch/um/kernel/dyn.lds.S
+++ b/arch/um/kernel/dyn.lds.S
@@ -74,7 +74,6 @@ SECTIONS
     _stext = .;
     TEXT_TEXT
     SCHED_TEXT
-    CPUIDLE_TEXT
     LOCK_TEXT
     IRQENTRY_TEXT
     SOFTIRQENTRY_TEXT
--- a/arch/um/kernel/uml.lds.S
+++ b/arch/um/kernel/uml.lds.S
@@ -35,7 +35,6 @@ SECTIONS
     _stext = .;
     TEXT_TEXT
     SCHED_TEXT
-    CPUIDLE_TEXT
     LOCK_TEXT
     IRQENTRY_TEXT
     SOFTIRQENTRY_TEXT
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -8,9 +8,6 @@
 
 #include <asm/nospec-branch.h>
 
-/* Provide __cpuidle; we can't safely include <linux/cpu.h> */
-#define __cpuidle __section(".cpuidle.text")
-
 /*
  * Interrupt control:
  */
@@ -45,13 +42,13 @@ static __always_inline void native_irq_e
 	asm volatile("sti": : :"memory");
 }
 
-static inline __cpuidle void native_safe_halt(void)
+static __always_inline void native_safe_halt(void)
 {
 	mds_idle_clear_cpu_buffers();
 	asm volatile("sti; hlt": : :"memory");
 }
 
-static inline __cpuidle void native_halt(void)
+static __always_inline void native_halt(void)
 {
 	mds_idle_clear_cpu_buffers();
 	asm volatile("hlt": : :"memory");
@@ -84,7 +81,7 @@ static __always_inline void arch_local_i
  * Used in the idle loop; sti takes one instruction cycle
  * to complete:
  */
-static inline __cpuidle void arch_safe_halt(void)
+static __always_inline void arch_safe_halt(void)
 {
 	native_safe_halt();
 }
@@ -93,7 +90,7 @@ static inline __cpuidle void arch_safe_h
  * Used when interrupts are already enabled or to
  * shutdown the processor:
  */
-static inline __cpuidle void halt(void)
+static __always_inline void halt(void)
 {
 	native_halt();
 }
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -104,7 +104,7 @@ static inline void __sti_mwait(unsigned
  * New with Core Duo processors, MWAIT can take some hints based on CPU
  * capability.
  */
-static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
 {
 	if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
 		if (static_cpu_has_bug(X86_BUG_CLFLUSH_MONITOR)) {
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -129,7 +129,6 @@ SECTIONS
 		HEAD_TEXT
 		TEXT_TEXT
 		SCHED_TEXT
-		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
 		ALIGN_ENTRY_TEXT_BEGIN
--- a/arch/xtensa/kernel/vmlinux.lds.S
+++ b/arch/xtensa/kernel/vmlinux.lds.S
@@ -125,7 +125,6 @@ SECTIONS
     ENTRY_TEXT
     TEXT_TEXT
     SCHED_TEXT
-    CPUIDLE_TEXT
     LOCK_TEXT
     *(.fixup)
   }
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -559,6 +559,9 @@
 		ALIGN_FUNCTION();					\
 		__noinstr_text_start = .;				\
 		*(.noinstr.text)					\
+		__cpuidle_text_start = .;				\
+		*(.cpuidle.text)					\
+		__cpuidle_text_end = .;					\
 		__noinstr_text_end = .;
 
 /*
@@ -600,12 +603,6 @@
 		*(.spinlock.text)					\
 		__lock_text_end = .;
 
-#define CPUIDLE_TEXT							\
-		ALIGN_FUNCTION();					\
-		__cpuidle_text_start = .;				\
-		*(.cpuidle.text)					\
-		__cpuidle_text_end = .;
-
 #define KPROBES_TEXT							\
 		ALIGN_FUNCTION();					\
 		__kprobes_text_start = .;				\
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -225,10 +225,14 @@ struct ftrace_likely_data {
 #endif
 
 /* Section for code which can't be instrumented at all */
-#define noinstr								\
-	noinline notrace __attribute((__section__(".noinstr.text")))	\
+#define __noinstr_section(section)					\
+	noinline notrace __attribute((__section__(section)))		\
 	__no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage
 
+#define noinstr __noinstr_section(".noinstr.text")
+
+#define __cpuidle __noinstr_section(".cpuidle.text")
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASSEMBLY__ */
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -171,9 +171,6 @@ void __noreturn cpu_startup_entry(enum c
 
 void cpu_idle_poll_ctrl(bool enable);
 
-/* Attach to any functions which should be considered cpuidle. */
-#define __cpuidle	__section(".cpuidle.text")
-
 bool cpu_in_idle(unsigned long pc);
 
 void arch_cpu_idle(void);
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -376,7 +376,8 @@ static int decode_instructions(struct ob
 			sec->text = true;
 
 		if (!strcmp(sec->name, ".noinstr.text") ||
-		    !strcmp(sec->name, ".entry.text"))
+		    !strcmp(sec->name, ".entry.text") ||
+		    !strcmp(sec->name, ".cpuidle.text"))
 			sec->noinstr = true;
 
 		for (offset = 0; offset < sec->sh.sh_size; offset += insn->len) {
@@ -3080,6 +3081,12 @@ static inline bool noinstr_call_dest(str
 		return true;
 
 	/*
+	 * If the symbol is a static_call trampoline, we can't tell.
+	 */
+	if (func->static_call_tramp)
+		return true;
+
+	/*
 	 * The __ubsan_handle_*() calls are like WARN(), they only happen when
 	 * something 'BAD' happened. At the risk of taking the machine down,
 	 * let them proceed to get the message out.
@@ -3648,6 +3655,12 @@ static int validate_noinstr_sections(str
 	if (sec) {
 		warnings += validate_section(file, sec);
 		warnings += validate_unwind_hints(file, sec);
+	}
+
+	sec = find_section_by_name(file->elf, ".cpuidle.text");
+	if (sec) {
+		warnings += validate_section(file, sec);
+		warnings += validate_unwind_hints(file, sec);
 	}
 
 	return warnings;


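The linker-script side of the patch above works by grouping all __cpuidle functions into one contiguous section so a tool can treat the [start, stop) address range as "no instrumentation allowed". A small userspace sketch of the mechanism, assuming GCC and GNU ld (the section name `cpuidle_text` is a stand-in; the kernel uses `.cpuidle.text` folded into the noinstr range, and ld only synthesizes `__start_`/`__stop_` symbols for orphan sections whose names are valid C identifiers):

```c
/* Sketch of section grouping: place a function in a named section and
 * use the linker-provided __start_/__stop_ symbols to test membership,
 * the way objtool bounds its noinstr validation. Userspace stand-in only. */
__attribute__((section("cpuidle_text"), noinline))
int fake_cpu_idle(int x)
{
	return x + 1;  /* stand-in for an idle routine */
}

/* GNU ld synthesizes these for orphan sections with identifier names. */
extern const char __start_cpuidle_text[];
extern const char __stop_cpuidle_text[];

/* A checker can now ask: does this address fall in the validated range? */
static int in_cpuidle_section(const void *addr)
{
	return (const char *)addr >= __start_cpuidle_text &&
	       (const char *)addr <  __stop_cpuidle_text;
}
```

In the kernel the same range check is done offline by objtool over `.cpuidle.text`, flagging any call to instrumentable code from within it.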


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344110.569716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5j-0005Y5-A1; Wed, 08 Jun 2022 14:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344110.569716; Wed, 08 Jun 2022 14:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5i-0005Sa-TO; Wed, 08 Jun 2022 14:55:26 +0000
Received: by outflank-mailman (input) for mailman id 344110;
 Wed, 08 Jun 2022 14:54:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyB-0004Sj-T9
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:40 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb4cf9c0-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:29 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChWi-PN; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2A2ED302E59;
 Wed,  8 Jun 2022 16:46:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 7DA0620C10EC7; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb4cf9c0-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=zScd4cADrYcj3rMhEyHUI7u7RAFkUNduwGg8qRRyi/k=; b=lMeZtUCMmLelJdq3AWi7ufvFy9
	zlr0j5q30DieJ7hrCz8do1dwuurdkRTVZkClv5qAsNNY34C5+J1MisafZfl5mqNbyVj29S2tZDxtK
	LEpTKomujvoW1TCLYxlDbZt43UH74WE9L7Lh3HKWRXe6UR3tN8vTT5ffsmmFmxd0VHkpTlLfEISj7
	1YiC/dMdRcaQcaLa3hWd6i2S9Ph6nB4hTbaK6IoI8FUNQvFYBexxYiq+T8SJikPxMDFN+B/sZWVMO
	b9LXaI8a06LKzZOZDbTdvz+wlt156y7IT5OCrJlji47mznjmSKwwwdv64p++NrqDV4Nf5RRQ6f3GK
	cjl+uyVg==;
Message-ID: <20220608144516.998681585@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:40 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 17/36] acpi_idle: Remove tracing
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

All the idle routines are called with RCU disabled; as such, there must
not be any tracing inside them.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 drivers/acpi/processor_idle.c |   24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -108,8 +108,8 @@ static const struct dmi_system_id proces
 static void __cpuidle acpi_safe_halt(void)
 {
 	if (!tif_need_resched()) {
-		safe_halt();
-		local_irq_disable();
+		raw_safe_halt();
+		raw_local_irq_disable();
 	}
 }
 
@@ -524,16 +524,21 @@ static int acpi_idle_bm_check(void)
 	return bm_status;
 }
 
-static void wait_for_freeze(void)
+static __cpuidle void io_idle(unsigned long addr)
 {
+	/* IO port based C-state */
+	inb(addr);
+
 #ifdef	CONFIG_X86
 	/* No delay is needed if we are in guest */
 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
 		return;
 #endif
-	/* Dummy wait op - must do something useless after P_LVL2 read
-	   because chipsets cannot guarantee that STPCLK# signal
-	   gets asserted in time to freeze execution properly. */
+	/*
+	 * Dummy wait op - must do something useless after P_LVL2 read
+	 * because chipsets cannot guarantee that STPCLK# signal
+	 * gets asserted in time to freeze execution properly.
+	 */
 	inl(acpi_gbl_FADT.xpm_timer_block.address);
 }
 
@@ -553,9 +558,7 @@ static void __cpuidle acpi_idle_do_entry
 	} else if (cx->entry_method == ACPI_CSTATE_HALT) {
 		acpi_safe_halt();
 	} else {
-		/* IO port based C-state */
-		inb(cx->address);
-		wait_for_freeze();
+		io_idle(cx->address);
 	}
 
 	perf_lopwr_cb(false);
@@ -577,8 +580,7 @@ static int acpi_idle_play_dead(struct cp
 		if (cx->entry_method == ACPI_CSTATE_HALT)
 			safe_halt();
 		else if (cx->entry_method == ACPI_CSTATE_SYSTEMIO) {
-			inb(cx->address);
-			wait_for_freeze();
+			io_idle(cx->address);
 		} else
 			return -ENODEV;
 




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344116.569733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5k-0005jy-NQ; Wed, 08 Jun 2022 14:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344116.569733; Wed, 08 Jun 2022 14:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5j-0005iC-QQ; Wed, 08 Jun 2022 14:55:27 +0000
Received: by outflank-mailman (input) for mailman id 344116;
 Wed, 08 Jun 2022 14:55:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyF-0004T5-ET
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:43 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1fb4114-e739-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 16:47:40 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChWK-Ba; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id CE1C8302E1D;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 6A05E20C10EA5; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1fb4114-e739-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=KHvAe13dC0TrHBuyrT47ebjbzLfzhz60TYCCkk8vbtw=; b=itZGmiVK2icgGpaxEUa4cF4lIi
	+0YlKT5EyDMJU/MWxEIYXD0gdpmN61uIirZw23P88WQR972g9551F5GQf2PqyIZiGqSM7Xb3E9Bfo
	5FJC2KAaPRvPAyYdtPgdvQXWbuqPmfD46eyn+WecUdijp92GxIoao/oz7pLSvWG+zLdB395Gr6udf
	SnHkd2/tnr0TU6wUvVXi0bK4I9GCbQFvJd8NnUWdzoAGBSqxHc+lDSifdza+NoDEIQlsu/cUajh/n
	135EoBmasvrh3QQ0nBdIwhHcrUsDu7hkMdqwZ9EbvT5fVJqK+jxBGdYTIkn8tf2ChYh88coEYmRqG
	IpB+K9xg==;
Message-ID: <20220608144516.677524509@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:35 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 12/36] cpuidle,omap2: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again, some *four* times, before going idle is daft.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/cpuidle44xx.c |   29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

--- a/arch/arm/mach-omap2/cpuidle44xx.c
+++ b/arch/arm/mach-omap2/cpuidle44xx.c
@@ -105,7 +105,9 @@ static int omap_enter_idle_smp(struct cp
 	}
 	raw_spin_unlock_irqrestore(&mpu_lock, flag);
 
+	rcu_idle_enter();
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
+	rcu_idle_exit();
 
 	raw_spin_lock_irqsave(&mpu_lock, flag);
 	if (cx->mpu_state_vote == num_online_cpus())
@@ -151,10 +153,10 @@ static int omap_enter_idle_coupled(struc
 				 (cx->mpu_logic_state == PWRDM_POWER_OFF);
 
 	/* Enter broadcast mode for periodic timers */
-	RCU_NONIDLE(tick_broadcast_enable());
+	tick_broadcast_enable();
 
 	/* Enter broadcast mode for one-shot timers */
-	RCU_NONIDLE(tick_broadcast_enter());
+	tick_broadcast_enter();
 
 	/*
 	 * Call idle CPU PM enter notifier chain so that
@@ -166,7 +168,7 @@ static int omap_enter_idle_coupled(struc
 
 	if (dev->cpu == 0) {
 		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
-		RCU_NONIDLE(omap_set_pwrdm_state(mpu_pd, cx->mpu_state));
+		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
 
 		/*
 		 * Call idle CPU cluster PM enter notifier chain
@@ -178,14 +180,16 @@ static int omap_enter_idle_coupled(struc
 				index = 0;
 				cx = state_ptr + index;
 				pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
-				RCU_NONIDLE(omap_set_pwrdm_state(mpu_pd, cx->mpu_state));
+				omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
 				mpuss_can_lose_context = 0;
 			}
 		}
 	}
 
+	rcu_idle_enter();
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
 	cpu_done[dev->cpu] = true;
+	rcu_idle_exit();
 
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
@@ -194,9 +198,9 @@ static int omap_enter_idle_coupled(struc
 		    mpuss_can_lose_context)
 			gic_dist_disable();
 
-		RCU_NONIDLE(clkdm_deny_idle(cpu_clkdm[1]));
-		RCU_NONIDLE(omap_set_pwrdm_state(cpu_pd[1], PWRDM_POWER_ON));
-		RCU_NONIDLE(clkdm_allow_idle(cpu_clkdm[1]));
+		clkdm_deny_idle(cpu_clkdm[1]);
+		omap_set_pwrdm_state(cpu_pd[1], PWRDM_POWER_ON);
+		clkdm_allow_idle(cpu_clkdm[1]);
 
 		if (IS_PM44XX_ERRATUM(PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD) &&
 		    mpuss_can_lose_context) {
@@ -222,7 +226,7 @@ static int omap_enter_idle_coupled(struc
 	cpu_pm_exit();
 
 cpu_pm_out:
-	RCU_NONIDLE(tick_broadcast_exit());
+	tick_broadcast_exit();
 
 fail:
 	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);
@@ -247,7 +251,8 @@ static struct cpuidle_driver omap4_idle_
 			/* C2 - CPU0 OFF + CPU1 OFF + MPU CSWR */
 			.exit_latency = 328 + 440,
 			.target_residency = 960,
-			.flags = CPUIDLE_FLAG_COUPLED,
+			.flags = CPUIDLE_FLAG_COUPLED |
+				 CPUIDLE_FLAG_RCU_IDLE,
 			.enter = omap_enter_idle_coupled,
 			.name = "C2",
 			.desc = "CPUx OFF, MPUSS CSWR",
@@ -256,7 +261,8 @@ static struct cpuidle_driver omap4_idle_
 			/* C3 - CPU0 OFF + CPU1 OFF + MPU OSWR */
 			.exit_latency = 460 + 518,
 			.target_residency = 1100,
-			.flags = CPUIDLE_FLAG_COUPLED,
+			.flags = CPUIDLE_FLAG_COUPLED |
+				 CPUIDLE_FLAG_RCU_IDLE,
 			.enter = omap_enter_idle_coupled,
 			.name = "C3",
 			.desc = "CPUx OFF, MPUSS OSWR",
@@ -282,7 +288,8 @@ static struct cpuidle_driver omap5_idle_
 			/* C2 - CPU0 RET + CPU1 RET + MPU CSWR */
 			.exit_latency = 48 + 60,
 			.target_residency = 100,
-			.flags = CPUIDLE_FLAG_TIMER_STOP,
+			.flags = CPUIDLE_FLAG_TIMER_STOP |
+				 CPUIDLE_FLAG_RCU_IDLE,
 			.enter = omap_enter_idle_smp,
 			.name = "C2",
 			.desc = "CPUx CSWR, MPUSS CSWR",




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 14:55:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 14:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344125.569740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5m-00068s-0V; Wed, 08 Jun 2022 14:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344125.569740; Wed, 08 Jun 2022 14:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyx5l-00062C-EF; Wed, 08 Jun 2022 14:55:29 +0000
Received: by outflank-mailman (input) for mailman id 344125;
 Wed, 08 Jun 2022 14:55:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nywyC-0004Sj-TS
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 14:47:41 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ee307618-e739-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 16:47:34 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nywwz-00ChWh-QV; Wed, 08 Jun 2022 14:46:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D9E26302E26;
 Wed,  8 Jun 2022 16:46:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id 6DFDD20C10EA7; Wed,  8 Jun 2022 16:46:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee307618-e739-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=ou5L3Xuucqh3BNNcA7ggTsb03AZP3YIS722c+bYuzBQ=; b=ZOVRlbbcCEg9f9+zDw+TPOkCoQ
	YJOgWW3oEH82SS8Z2FuffbgfBl054sk+t6H+Rq2XeLma5kdCxwidap5S0SZ5XeVfGdfVdMbLJNNYL
	YUDo3thSb7yxZHil5mYZUWfLcklav/WCfqY6rCjeC6qjU/nBlQDhK0klBgJnBPATP+T0eekHb57DV
	a3IWCjoXGGJg+fXRaT0EhRgSMGXmbiPtijG31E8mGH/ZwdblFIuXh+Nuag7Qs6bhdIVEIHdhVu5Sn
	PQkbZTtx/1Knxx/UHPMcT1MX5UDbBTRBJAIWXT2/zN4xQTriACukwHXtewsvKWBQBXC/Te1fEdWnC
	dD2IBCDQ==;
Message-ID: <20220608144516.743744529@infradead.org>
User-Agent: quilt/0.66
Date: Wed, 08 Jun 2022 16:27:36 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: peterz@infradead.org
Cc: rth@twiddle.net,
 ink@jurassic.park.msu.ru,
 mattst88@gmail.com,
 vgupta@kernel.org,
 linux@armlinux.org.uk,
 ulli.kroll@googlemail.com,
 linus.walleij@linaro.org,
 shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>,
 kernel@pengutronix.de,
 festevam@gmail.com,
 linux-imx@nxp.com,
 tony@atomide.com,
 khilman@kernel.org,
 catalin.marinas@arm.com,
 will@kernel.org,
 guoren@kernel.org,
 bcain@quicinc.com,
 chenhuacai@kernel.org,
 kernel@xen0n.name,
 geert@linux-m68k.org,
 sammy@sammy.net,
 monstr@monstr.eu,
 tsbogend@alpha.franken.de,
 dinguyen@kernel.org,
 jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi,
 shorne@gmail.com,
 James.Bottomley@HansenPartnership.com,
 deller@gmx.de,
 mpe@ellerman.id.au,
 benh@kernel.crashing.org,
 paulus@samba.org,
 paul.walmsley@sifive.com,
 palmer@dabbelt.com,
 aou@eecs.berkeley.edu,
 hca@linux.ibm.com,
 gor@linux.ibm.com,
 agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com,
 svens@linux.ibm.com,
 ysato@users.sourceforge.jp,
 dalias@libc.org,
 davem@davemloft.net,
 richard@nod.at,
 anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net,
 tglx@linutronix.de,
 mingo@redhat.com,
 bp@alien8.de,
 dave.hansen@linux.intel.com,
 x86@kernel.org,
 hpa@zytor.com,
 acme@kernel.org,
 mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com,
 jolsa@kernel.org,
 namhyung@kernel.org,
 jgross@suse.com,
 srivatsa@csail.mit.edu,
 amakhalov@vmware.com,
 pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com,
 chris@zankel.net,
 jcmvbkbc@gmail.com,
 rafael@kernel.org,
 lenb@kernel.org,
 pavel@ucw.cz,
 gregkh@linuxfoundation.org,
 mturquette@baylibre.com,
 sboyd@kernel.org,
 daniel.lezcano@linaro.org,
 lpieralisi@kernel.org,
 sudeep.holla@arm.com,
 agross@kernel.org,
 bjorn.andersson@linaro.org,
 anup@brainfault.org,
 thierry.reding@gmail.com,
 jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com,
 Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com,
 andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk,
 rostedt@goodmis.org,
 pmladek@suse.com,
 senozhatsky@chromium.org,
 john.ogness@linutronix.de,
 paulmck@kernel.org,
 frederic@kernel.org,
 quic_neeraju@quicinc.com,
 josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com,
 jiangshanlai@gmail.com,
 joel@joelfernandes.org,
 juri.lelli@redhat.com,
 vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com,
 bsegall@google.com,
 mgorman@suse.de,
 bristot@redhat.com,
 vschneid@redhat.com,
 jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org,
 linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org,
 linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org,
 rcu@vger.kernel.org
Subject: [PATCH 13/36] cpuidle,dt: Push RCU-idle into driver
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is daft.

Notably: this converts all dt_init_idle_driver() and
__CPU_PM_CPU_IDLE_ENTER() users, as they are inextricably intertwined.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/mach-omap2/cpuidle34xx.c    |    4 ++--
 drivers/acpi/processor_idle.c        |    2 ++
 drivers/cpuidle/cpuidle-arm.c        |    1 +
 drivers/cpuidle/cpuidle-big_little.c |    8 ++++++--
 drivers/cpuidle/cpuidle-psci.c       |    1 +
 drivers/cpuidle/cpuidle-qcom-spm.c   |    1 +
 drivers/cpuidle/cpuidle-riscv-sbi.c  |    1 +
 drivers/cpuidle/dt_idle_states.c     |    2 +-
 include/linux/cpuidle.h              |    4 ++++
 9 files changed, 19 insertions(+), 5 deletions(-)

--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -1200,6 +1200,8 @@ static int acpi_processor_setup_lpi_stat
 		state->target_residency = lpi->min_residency;
 		if (lpi->arch_flags)
 			state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+		if (lpi->entry_method == ACPI_CSTATE_FFH)
+			state->flags |= CPUIDLE_FLAG_RCU_IDLE;
 		state->enter = acpi_idle_lpi_enter;
 		drv->safe_state_index = i;
 	}
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -53,6 +53,7 @@ static struct cpuidle_driver arm_idle_dr
 	 * handler for idle state index 0.
 	 */
 	.states[0] = {
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.enter                  = arm_enter_idle_state,
 		.exit_latency           = 1,
 		.target_residency       = 1,
--- a/drivers/cpuidle/cpuidle-big_little.c
+++ b/drivers/cpuidle/cpuidle-big_little.c
@@ -64,7 +64,8 @@ static struct cpuidle_driver bl_idle_lit
 		.enter			= bl_enter_powerdown,
 		.exit_latency		= 700,
 		.target_residency	= 2500,
-		.flags			= CPUIDLE_FLAG_TIMER_STOP,
+		.flags			= CPUIDLE_FLAG_TIMER_STOP |
+					  CPUIDLE_FLAG_RCU_IDLE,
 		.name			= "C1",
 		.desc			= "ARM little-cluster power down",
 	},
@@ -85,7 +86,8 @@ static struct cpuidle_driver bl_idle_big
 		.enter			= bl_enter_powerdown,
 		.exit_latency		= 500,
 		.target_residency	= 2000,
-		.flags			= CPUIDLE_FLAG_TIMER_STOP,
+		.flags			= CPUIDLE_FLAG_TIMER_STOP |
+					  CPUIDLE_FLAG_RCU_IDLE,
 		.name			= "C1",
 		.desc			= "ARM big-cluster power down",
 	},
@@ -124,11 +126,13 @@ static int bl_enter_powerdown(struct cpu
 				struct cpuidle_driver *drv, int idx)
 {
 	cpu_pm_enter();
+	rcu_idle_enter();
 
 	cpu_suspend(0, bl_powerdown_finisher);
 
 	/* signals the MCPM core that CPU is out of low power state */
 	mcpm_cpu_powered_up();
+	rcu_idle_exit();
 
 	cpu_pm_exit();
 
--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -357,6 +357,7 @@ static int psci_idle_init_cpu(struct dev
 	 * PSCI idle states relies on architectural WFI to be represented as
 	 * state index 0.
 	 */
+	drv->states[0].flags = CPUIDLE_FLAG_RCU_IDLE;
 	drv->states[0].enter = psci_enter_idle_state;
 	drv->states[0].exit_latency = 1;
 	drv->states[0].target_residency = 1;
--- a/drivers/cpuidle/cpuidle-qcom-spm.c
+++ b/drivers/cpuidle/cpuidle-qcom-spm.c
@@ -72,6 +72,7 @@ static struct cpuidle_driver qcom_spm_id
 	.owner = THIS_MODULE,
 	.states[0] = {
 		.enter			= spm_enter_idle_state,
+		.flags			= CPUIDLE_FLAG_RCU_IDLE,
 		.exit_latency		= 1,
 		.target_residency	= 1,
 		.power_usage		= UINT_MAX,
--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
+++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
@@ -332,6 +332,7 @@ static int sbi_cpuidle_init_cpu(struct d
 	drv->cpumask = (struct cpumask *)cpumask_of(cpu);
 
 	/* RISC-V architectural WFI to be represented as state index 0. */
+	drv->states[0].flags = CPUIDLE_FLAG_RCU_IDLE;
 	drv->states[0].enter = sbi_cpuidle_enter_state;
 	drv->states[0].exit_latency = 1;
 	drv->states[0].target_residency = 1;
--- a/drivers/cpuidle/dt_idle_states.c
+++ b/drivers/cpuidle/dt_idle_states.c
@@ -77,7 +77,7 @@ static int init_state_node(struct cpuidl
 	if (err)
 		desc = state_node->name;
 
-	idle_state->flags = 0;
+	idle_state->flags = CPUIDLE_FLAG_RCU_IDLE;
 	if (of_property_read_bool(state_node, "local-timer-stop"))
 		idle_state->flags |= CPUIDLE_FLAG_TIMER_STOP;
 	/*
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -282,14 +282,18 @@ extern s64 cpuidle_governor_latency_req(
 	int __ret = 0;							\
 									\
 	if (!idx) {							\
+		rcu_idle_enter();					\
 		cpu_do_idle();						\
+		rcu_idle_exit();					\
 		return idx;						\
 	}								\
 									\
 	if (!is_retention)						\
 		__ret =  cpu_pm_enter();				\
 	if (!__ret) {							\
+		rcu_idle_enter();					\
 		__ret = low_level_idle_enter(state);			\
+		rcu_idle_exit();					\
 		if (!is_retention)					\
 			cpu_pm_exit();					\
 	}								\




From xen-devel-bounces@lists.xenproject.org Wed Jun 08 15:03:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 15:03:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344156.569777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxD0-0003X9-Uf; Wed, 08 Jun 2022 15:02:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344156.569777; Wed, 08 Jun 2022 15:02:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxD0-0003X2-QB; Wed, 08 Jun 2022 15:02:58 +0000
Received: by outflank-mailman (input) for mailman id 344156;
 Wed, 08 Jun 2022 15:01:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Gkb=WP=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1nyxBP-0003VW-6P
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 15:01:19 +0000
Received: from mail-yb1-f174.google.com (mail-yb1-f174.google.com
 [209.85.219.174]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9040b95-e73b-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 17:01:17 +0200 (CEST)
Received: by mail-yb1-f174.google.com with SMTP id p13so36971735ybm.1
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 08:01:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9040b95-e73b-11ec-bd2c-47488cf2e6aa
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=hna+hKCo48NcMl7Pqd/5P+UyLOQ1wF88gpvnodTJhxE=;
        b=T2Be4DrFv6G3qcJ++FB4ZSrMJP3DtSWE2mn74qgcsVbynX27tp2+q2yYz5Shgscnh9
         Pj61HINY1sy/yB98Pgxq+qT3s2stkVY73bee4qLw/4clqHgC0YdgtSR3NOPwrJ1wnfBM
         TO8t0cgDuti7rGQ20GARp98dvq2xmErG1BKBVxl3uPXQzRPLoZ8cDv2QcgmWbOCDpulC
         TljqbUJKMKjMJOp3wLlkTKZ2ROqhHkI5VUCNqI+AoVZNYfe0BsH3Kedt2QOQv96jeM31
         oFJELo0qzhjjzh21O7W/HZfpY3g2t+PhBShlnB/waW/ZKy8ZHcULoOECY7yBUwYvZDb8
         dXxw==
X-Gm-Message-State: AOAM530enMW7/EfLlbahrJjH/itaF6SAw0xDGjWcZO5LXIt44wisjEha
	yqeaSp6nJ0sXKiGaPYcBCHora5RjntRVMcZkHf0=
X-Google-Smtp-Source: ABdhPJwqalQMdxMy/pZcDZhBH1xedB7nhZhOlo0yW6lp9BXwx4h5qG3Bba3YMzpLPF7jDA8EKVOjRHlndcGghrEojUg=
X-Received: by 2002:a5b:4a:0:b0:663:7c5b:a5ba with SMTP id e10-20020a5b004a000000b006637c5ba5bamr16536948ybp.81.1654700476488;
 Wed, 08 Jun 2022 08:01:16 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144516.172460444@infradead.org>
In-Reply-To: <20220608144516.172460444@infradead.org>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Wed, 8 Jun 2022 17:01:05 +0200
Message-ID: <CAJZ5v0gW-zD8Mgghy70f3rFz0QoozCwZ9idyrqtFgA6SWHK5XQ@mail.gmail.com>
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, Russell King - ARM Linux <linux@armlinux.org.uk>, ulli.kroll@googlemail.com, 
	Linus Walleij <linus.walleij@linaro.org>, Shawn Guo <shawnguo@kernel.org>, 
	Sascha Hauer <s.hauer@pengutronix.de>, Sascha Hauer <kernel@pengutronix.de>, 
	Fabio Estevam <festevam@gmail.com>, dl-linux-imx <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>, 
	Kevin Hilman <khilman@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, 
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, bcain@quicinc.com, 
	Huacai Chen <chenhuacai@kernel.org>, kernel@xen0n.name, 
	Geert Uytterhoeven <geert@linux-m68k.org>, sammy@sammy.net, Michal Simek <monstr@monstr.eu>, 
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, Stafford Horne <shorne@gmail.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, 
	Paul Mackerras <paulus@samba.org>, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, 
	Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, 
	Alexander Gordeev <agordeev@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, 
	Sven Schnelle <svens@linux.ibm.com>, Yoshinori Sato <ysato@users.sourceforge.jp>, 
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>, 
	Richard Weinberger <richard@nod.at>, anton.ivanov@cambridgegreys.com, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "the arch/x86 maintainers" <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, acme@kernel.org, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, jolsa@kernel.org, namhyung@kernel.org, 
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, amakhalov@vmware.com, 
	pv-drivers@vmware.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net, 
	jcmvbkbc@gmail.com, "Rafael J. Wysocki" <rafael@kernel.org>, Len Brown <lenb@kernel.org>, 
	Pavel Machek <pavel@ucw.cz>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, 
	Michael Turquette <mturquette@baylibre.com>, Stephen Boyd <sboyd@kernel.org>, 
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org, 
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>, 
	Bjorn Andersson <bjorn.andersson@linaro.org>, Anup Patel <anup@brainfault.org>, 
	Thierry Reding <thierry.reding@gmail.com>, Jon Hunter <jonathanh@nvidia.com>, 
	Jacob Pan <jacob.jun.pan@linux.intel.com>, Arnd Bergmann <arnd@arndb.de>, 
	Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	Rasmus Villemoes <linux@rasmusvillemoes.dk>, Steven Rostedt <rostedt@goodmis.org>, 
	Petr Mladek <pmladek@suse.com>, senozhatsky@chromium.org, 
	John Ogness <john.ogness@linutronix.de>, "Paul E. McKenney" <paulmck@kernel.org>, 
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com, 
	Josh Triplett <josh@joshtriplett.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, 
	Lai Jiangshan <jiangshanlai@gmail.com>, Joel Fernandes <joel@joelfernandes.org>, 
	Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Dietmar Eggemann <dietmar.eggemann@arm.com>, Benjamin Segall <bsegall@google.com>, 
	Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, vschneid@redhat.com, 
	jpoimboe@kernel.org, linux-alpha@vger.kernel.org, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-snps-arc@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	Linux OMAP Mailing List <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k <linux-m68k@lists.linux-m68k.org>, 
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>, openrisc@lists.librecores.org, 
	Parisc List <linux-parisc@vger.kernel.org>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	linux-riscv <linux-riscv@lists.infradead.org>, linux-s390@vger.kernel.org, 
	Linux-sh list <linux-sh@vger.kernel.org>, sparclinux@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-xtensa@linux-xtensa.org, 
	ACPI Devel Mailing List <linux-acpi@vger.kernel.org>, Linux PM <linux-pm@vger.kernel.org>, 
	linux-clk <linux-clk@vger.kernel.org>, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	linux-tegra <linux-tegra@vger.kernel.org>, linux-arch <linux-arch@vger.kernel.org>, 
	rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 8, 2022 at 4:47 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> Xeons") wrecked intel_idle in two ways:
>
>  - must not have tracing in idle functions
>  - must return with IRQs disabled
>
> Additionally, it added a branch for no good reason.
>
> Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Am I correct in thinking that this can be applied without the rest of
the series?

> ---
>  drivers/idle/intel_idle.c |   48 +++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 37 insertions(+), 11 deletions(-)
>
> --- a/drivers/idle/intel_idle.c
> +++ b/drivers/idle/intel_idle.c
> @@ -129,21 +137,37 @@ static unsigned int mwait_substates __in
>   *
>   * Must be called under local_irq_disable().
>   */
> +
> -static __cpuidle int intel_idle(struct cpuidle_device *dev,
> -                               struct cpuidle_driver *drv, int index)
> +static __always_inline int __intel_idle(struct cpuidle_device *dev,
> +                                       struct cpuidle_driver *drv, int index)
>  {
>         struct cpuidle_state *state = &drv->states[index];
>         unsigned long eax = flg2MWAIT(state->flags);
>         unsigned long ecx = 1; /* break on interrupt flag */
>
> -       if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
> -               local_irq_enable();
> -
>         mwait_idle_with_hints(eax, ecx);
>
>         return index;
>  }
>
> +static __cpuidle int intel_idle(struct cpuidle_device *dev,
> +                               struct cpuidle_driver *drv, int index)
> +{
> +       return __intel_idle(dev, drv, index);
> +}
> +
> +static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
> +                                   struct cpuidle_driver *drv, int index)
> +{
> +       int ret;
> +
> +       raw_local_irq_enable();
> +       ret = __intel_idle(dev, drv, index);
> +       raw_local_irq_disable();
> +
> +       return ret;
> +}
> +
>  /**
>   * intel_idle_s2idle - Ask the processor to enter the given idle state.
>   * @dev: cpuidle device of the target CPU.
> @@ -1801,6 +1824,9 @@ static void __init intel_idle_init_cstat
>                 /* Structure copy. */
>                 drv->states[drv->state_count] = cpuidle_state_table[cstate];
>
> +               if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE)
> +                       drv->states[drv->state_count].enter = intel_idle_irq;
> +
>                 if ((disabled_states_mask & BIT(drv->state_count)) ||
>                     ((icpu->use_acpi || force_use_acpi) &&
>                      intel_idle_off_by_default(mwait_hint) &&
>
>
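[Editorial note: a minimal userspace sketch (not the kernel source) of the fix pattern in the quoted patch: one always-inlined core with no tracing and no branch, plus a thin IRQ-enabling wrapper that re-disables IRQs before returning. All names and the `irqs_enabled` flag are illustrative stand-ins, not the driver's real state.]

```c
#include <stdio.h>

static int irqs_enabled;                 /* models the CPU's IRQ flag */

static void raw_local_irq_enable(void)  { irqs_enabled = 1; }
static void raw_local_irq_disable(void) { irqs_enabled = 0; }

/* Analogue of __intel_idle(): the shared core, inlined into both
 * entry points so no traceable function call sits in the idle path. */
static inline int idle_core(int index)
{
	/* mwait_idle_with_hints(eax, ecx) would sit here in the driver */
	return index;
}

/* Analogue of intel_idle(): IRQs stay disabled throughout. */
static int idle_enter(int index)
{
	return idle_core(index);
}

/* Analogue of intel_idle_irq(): enable IRQs only around the core,
 * and always return with them disabled again, as cpuidle expects.
 * This replaces the per-call CPUIDLE_FLAG_IRQ_ENABLE branch: the
 * flag is checked once at init time when the .enter callback is
 * selected, not on every idle entry. */
static int idle_enter_irq(int index)
{
	int ret;

	raw_local_irq_enable();
	ret = idle_core(index);
	raw_local_irq_disable();
	return ret;
}
```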


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 15:07:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 15:07:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344202.569788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxHA-0004Ra-JY; Wed, 08 Jun 2022 15:07:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344202.569788; Wed, 08 Jun 2022 15:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxHA-0004RT-Ge; Wed, 08 Jun 2022 15:07:16 +0000
Received: by outflank-mailman (input) for mailman id 344202;
 Wed, 08 Jun 2022 15:05:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nyxFD-0003xi-Eu
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 15:05:15 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6682fdbf-e73c-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 17:05:14 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nyxEg-00CjUs-8y; Wed, 08 Jun 2022 15:04:42 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 227AC301BE7;
 Wed,  8 Jun 2022 17:04:41 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 089E520C0B5D8; Wed,  8 Jun 2022 17:04:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6682fdbf-e73c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:To:From:Date:Sender:Reply-To:Cc:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Ukp02Ivtbnt/qfD+Hr8nM4W5X9OZTZ1VV5qELeaRbZ8=; b=bbxN9JJ2Y40pQW61I7MOn5Xh9V
	S2c5QQ85ziHXD+1t+vjh+fhEPqK3iKxsEIjt7edJojnVzBl9gYwXvjRIeRNrN7ZibzX36WHLDVayV
	nInDIdKkQ3BbhSqYmNWsz/BWsnpgq0hnvQFNP0XQXQaQmMCitXKvbycTG4I5w7PN7aMWKy95vrTUX
	Uzl0vpf81V6OGBp/kwTpeykt3MVn9hantJe3I/I9BoxiPWvicF7AqjWicEUzWX/bGUDW8mfpyxbC/
	CHmKhktwPRwgpVTZHfY5ORW3ZQsK5N3aBEK/OQ7rkuVgMtsoWyCiQqRvNg8qTMNOWWW7Vr6eX+dTm
	f13zRk5g==;
Date: Wed, 8 Jun 2022 17:04:40 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 34/36] cpuidle,omap3: Push RCU-idle into omap_sram_idle()
Message-ID: <YqC6iJx4ygSmry0G@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.073801916@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144518.073801916@infradead.org>

On Wed, Jun 08, 2022 at 04:27:57PM +0200, Peter Zijlstra wrote:
> @@ -254,11 +255,18 @@ void omap_sram_idle(void)
>  	 */
>  	if (save_state)
>  		omap34xx_save_context(omap3_arm_context);
> +
> +	if (rcuidle)
> +		cpuidle_rcu_enter();
> +
>  	if (save_state == 1 || save_state == 3)
>  		cpu_suspend(save_state, omap34xx_do_sram_idle);
>  	else
>  		omap34xx_do_sram_idle(save_state);
>  
> +	if (rcuidle)
> +		rcuidle_rcu_exit();

*sigh* so much for this having been exposed to the robots for >2 days :/
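[Editorial note: a userspace sketch of the symmetric RCU-idle bracketing the quoted hunk intends; the message above flags that the exit call was misspelled `rcuidle_rcu_exit` in the posted patch. The names and state flags below are illustrative stand-ins, not the kernel API.]

```c
static int rcu_watching = 1;     /* models whether RCU watches this CPU */
static int sram_idle_calls;

static void cpuidle_rcu_enter(void) { rcu_watching = 0; }
static void cpuidle_rcu_exit(void)  { rcu_watching = 1; }

/* stand-in for cpu_suspend()/omap34xx_do_sram_idle() */
static void do_sram_idle(void) { sram_idle_calls++; }

/* Analogue of omap_sram_idle(): bracket only the low-level idle entry,
 * and only when the caller asked for RCU-idle handling, so that the
 * context save/restore around it still runs with RCU watching. */
static void sram_idle(int rcuidle)
{
	if (rcuidle)
		cpuidle_rcu_enter();    /* RCU stops watching before idle */

	do_sram_idle();

	if (rcuidle)
		cpuidle_rcu_exit();     /* must pair with cpuidle_rcu_enter() */
}
```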


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 15:52:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 15:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344288.569815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxyL-00026Y-51; Wed, 08 Jun 2022 15:51:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344288.569815; Wed, 08 Jun 2022 15:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyxyL-00026R-1i; Wed, 08 Jun 2022 15:51:53 +0000
Received: by outflank-mailman (input) for mailman id 344288;
 Wed, 08 Jun 2022 15:49:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OvtQ=WP=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nyxvh-0001IW-3y
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 15:49:10 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 876e443b-e742-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 17:49:07 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nyxuv-00ClXO-Bt; Wed, 08 Jun 2022 15:48:21 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 1781E301093;
 Wed,  8 Jun 2022 17:48:17 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id F027020C0D33A; Wed,  8 Jun 2022 17:48:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 876e443b-e742-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=wkrvwwo/ehTnPQ2GyODy/ZpUUMe6HOjJCITHLe45W7c=; b=PhHp+PJmr+2T6rnBijz+YHTCqo
	Q3sRs0kB+1HF2TzaTXkAa8+nJlCba8qwWkYHuYUvJIz6d+sUwN3QATDRrp6HkFJtxuDUGlK2esQFG
	B6FqhNDw4mzKWIndiKngqO5+uVumTrnZhLB4eRFSER0mlLEnLKmACyt/N0zRX6B8QnMdEd1jF93Qv
	+F3wXq8WpQgBGoI9TYubNiobTj3byuz+cOLmJDoCRAcPPkDwjHQiqUYBc/eVlZdt6y9xI16C13ivT
	9RycLHrYGa8XbNQes8S9gNmrNcQe8ImgWY0Ct9y735wKSNGr9BXQRTA4U6CQXwwbi/h2tLQ2WHupK
	g4TPoOhQ==;
Date: Wed, 8 Jun 2022 17:48:16 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, Russell King - ARM Linux <linux@armlinux.org.uk>,
	ulli.kroll@googlemail.com, Linus Walleij <linus.walleij@linaro.org>,
	Shawn Guo <shawnguo@kernel.org>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	Sascha Hauer <kernel@pengutronix.de>,
	Fabio Estevam <festevam@gmail.com>,
	dl-linux-imx <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>,
	Kevin Hilman <khilman@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	bcain@quicinc.com, Huacai Chen <chenhuacai@kernel.org>,
	kernel@xen0n.name, Geert Uytterhoeven <geert@linux-m68k.org>,
	sammy@sammy.net, Michal Simek <monstr@monstr.eu>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi,
	Stafford Horne <shorne@gmail.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	anton.ivanov@cambridgegreys.com,
	Johannes Berg <johannes@sipsolutions.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	the arch/x86 maintainers <x86@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>, acme@kernel.org,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	jolsa@kernel.org, namhyung@kernel.org,
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net,
	jcmvbkbc@gmail.com, Len Brown <lenb@kernel.org>,
	Pavel Machek <pavel@ucw.cz>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org,
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Anup Patel <anup@brainfault.org>,
	Thierry Reding <thierry.reding@gmail.com>,
	Jon Hunter <jonathanh@nvidia.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	Arnd Bergmann <arnd@arndb.de>, Yury Norov <yury.norov@gmail.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Steven Rostedt <rostedt@goodmis.org>,
	Petr Mladek <pmladek@suse.com>, senozhatsky@chromium.org,
	John Ogness <john.ogness@linutronix.de>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Benjamin Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-snps-arc@lists.infradead.org,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Linux OMAP Mailing List <linux-omap@vger.kernel.org>,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org,
	linux-m68k <linux-m68k@lists.linux-m68k.org>,
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>,
	openrisc@lists.librecores.org,
	Parisc List <linux-parisc@vger.kernel.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	linux-s390@vger.kernel.org,
	Linux-sh list <linux-sh@vger.kernel.org>,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	ACPI Devel Mailing List <linux-acpi@vger.kernel.org>,
	Linux PM <linux-pm@vger.kernel.org>,
	linux-clk <linux-clk@vger.kernel.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	linux-tegra <linux-tegra@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>, rcu@vger.kernel.org
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
Message-ID: <YqDEwMDSL1YXdHFH@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.172460444@infradead.org>
 <CAJZ5v0gW-zD8Mgghy70f3rFz0QoozCwZ9idyrqtFgA6SWHK5XQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAJZ5v0gW-zD8Mgghy70f3rFz0QoozCwZ9idyrqtFgA6SWHK5XQ@mail.gmail.com>

On Wed, Jun 08, 2022 at 05:01:05PM +0200, Rafael J. Wysocki wrote:
> On Wed, Jun 8, 2022 at 4:47 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> > Xeons") wrecked intel_idle in two ways:
> >
> >  - must not have tracing in idle functions
> >  - must return with IRQs disabled
> >
> > Additionally, it added a branch for no good reason.
> >
> > Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Am I correct in thinking that this can be applied without the rest of
> the series?

Yeah, I don't think this relies on any of the preceding patches. If you
want to route this through the pm/fixes tree that's fine.

Thanks!


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 17:36:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 17:36:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344304.569826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyzbf-0005bv-U6; Wed, 08 Jun 2022 17:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344304.569826; Wed, 08 Jun 2022 17:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyzbf-0005bo-Qr; Wed, 08 Jun 2022 17:36:35 +0000
Received: by outflank-mailman (input) for mailman id 344304;
 Wed, 08 Jun 2022 17:36:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nyzbe-0005bi-Km
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 17:36:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyzbd-0002SD-AI; Wed, 08 Jun 2022 17:36:33 +0000
Received: from [54.239.6.189] (helo=[192.168.10.106])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nyzbd-00028x-2W; Wed, 08 Jun 2022 17:36:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=a82r2Ro6l8h74jW9DtjQZO0KTBfnwv/++8Mmg07eTP8=; b=YB9GGqYQHWtr6gX/F94blAB2ek
	SGKk10DvojIQa783lbU0vFxMG+WSVWoeCWPrlinWMXYviLnmldqdLX959H7oqrukS835VMb/V5qBe
	+3XkmU3pG8hYkYyYYAoPeaa1ql4cLsYQq/Ed8Oq04X9VrgfQWQi8PA08bN7//a2bRSI4=;
Message-ID: <4c0d7d61-4c58-6aec-c653-997a2fb87282@xen.org>
Date: Wed, 8 Jun 2022 18:36:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v3] SUPPORT.md: extend security support for x86 hosts to
 12 TiB of memory
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <873890bc-e5fd-a9ed-77ff-7bd06d390ae9@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <873890bc-e5fd-a9ed-77ff-7bd06d390ae9@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 02/06/2022 09:43, Jan Beulich wrote:
> c49ee0329ff3 ("SUPPORT.md: limit security support for hosts with very
> much memory"), as a result of XSA-385, restricted security support to
> 8 TiB of host memory. While subsequently further restricted for Arm,
> extend this to 12 TiB on x86, putting in place a guest restriction to
> 8 TiB (or yet less for Arm) in exchange.
> 
> A 12 TiB x86 host was certified successfully for use with Xen 4.14 as
> per https://www.suse.com/nbswebapp/yesBulletin.jsp?bulletinNumber=150753.
> This in particular included running as many guests (2 TiB each) as
> possible in parallel, to actually prove that all the memory can be used
> like this. It may be relevant to note that the Optane memory there was
> used in memory-only mode, with DRAM acting as cache.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 17:49:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 17:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344313.569837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyzo6-0007JI-3u; Wed, 08 Jun 2022 17:49:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344313.569837; Wed, 08 Jun 2022 17:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nyzo6-0007JB-0g; Wed, 08 Jun 2022 17:49:26 +0000
Received: by outflank-mailman (input) for mailman id 344313;
 Wed, 08 Jun 2022 17:49:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyzo4-0007J1-It; Wed, 08 Jun 2022 17:49:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyzo4-0002fj-Es; Wed, 08 Jun 2022 17:49:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nyzo3-0002hv-Rd; Wed, 08 Jun 2022 17:49:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nyzo3-0006uD-Qz; Wed, 08 Jun 2022 17:49:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DUeshEdnVGPMB0Lhkr19eDuBfYBc41rW2oZRKZvkiUI=; b=1/iT/GbWnqS5+cA6W/ybdElQOt
	rmLxUW1nDO7ASumSzTc8lW3Gfef9U8c/MBMQ9pT1LY2QzngJzxu/TNmhry69LnJh1culig4zgzA0Z
	P9YOyg56Y380f+Eu0Y3FWAHeY5H6cF1yzJZtGAr9cinZaxbNF1j0olaSsw58gH7vc0Do=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170880-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170880: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9886142c7a2226439c1e3f7d9b69f9c7094c3ef6
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 17:49:23 +0000

flight 170880 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170880/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9886142c7a2226439c1e3f7d9b69f9c7094c3ef6
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   15 days
Failing since        170716  2022-05-24 11:12:06 Z   15 days   41 attempts
Testing same since   170880  2022-06-08 01:11:42 Z    0 days    1 attempts

------------------------------------------------------------
2276 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268586 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 18:12:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 18:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344327.569848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0AS-0002Y3-5J; Wed, 08 Jun 2022 18:12:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344327.569848; Wed, 08 Jun 2022 18:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0AS-0002Xw-1g; Wed, 08 Jun 2022 18:12:32 +0000
Received: by outflank-mailman (input) for mailman id 344327;
 Wed, 08 Jun 2022 18:12:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1KPy=WP=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nz0AP-0002Xq-R5
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 18:12:30 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d8710e6-e756-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 20:12:27 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 h62-20020a1c2141000000b0039aa4d054e2so13805160wmh.1
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 11:12:27 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 s3-20020a5d6a83000000b0020cfed0bb7fsm21866895wru.53.2022.06.08.11.12.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 08 Jun 2022 11:12:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d8710e6-e756-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=vs8aNdWJ8iRamP5ksY+ZLfOeMGxldxBLBkpCKBVJgx0=;
        b=NpHi/hmApW89ypZglQsthe0FNCSYUduJalRn0hoBp/sZoaE7V/zou7LM6qyj7Jos6s
         K3qyyISpbYEuR50b6lm4sREXXOkKp5DQ58NFECl880IGBBI5XSvpiAkGSGP73tDPs+r1
         ojKvHib9YN45gExt7ak3Fz4N2yT4hbrbf7I1LahGSfMZhUBm3iTVgHQ5Ml1wdGkzENrE
         rnjUrQTg/Mje5hvAEMB8L0e9UL8D1WLUI06e09RUtPUsVZqPoPOBrX0w7XQyAIbqArcw
         kkCtjsNIO99V3LUWPbqvPDJP7ngGKlltaZVN/DwrjzFehRuQi7xjXUnZFPKNWPJZSIcK
         D/EA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=vs8aNdWJ8iRamP5ksY+ZLfOeMGxldxBLBkpCKBVJgx0=;
        b=dsTANRYGdvYx5h0TsIojuasQ5xHOy6lkRIU9maNWPuU2mHxRX5kYtN7n0X/3VMwiuq
         NF+igMKHzoeG3zYwRqTh1aygqzv5KMUPRvLtLyrvIlj0h50w3UzXHYvcfhXfeiQzqBFI
         XG409SdZyDgdZcEdzMgednyuHn6H8KSlMuvxsf6tdD+puKjWgXnpJIak0VFBq7gKnrXv
         HZAb6bC2UZJhAPh2wZko+nji+OJZASyipKOPGYy50sDIBIAwIrLV7xufSluolPfVHcnP
         5eAgyS8yKTG6D4Cw3KsiuU/srYrCXNHCybyv6Y5VeZEuz5UI4k/a/obaleUytwZtPpt0
         iMHA==
X-Gm-Message-State: AOAM532BHJABm2S8Mh7dj4c06NPIBJPVoyDj2qdZZwzIF9cjOiwv4L69
	e7Tq6k2VR+bKYJAU9P9Djuw=
X-Google-Smtp-Source: ABdhPJz1i5kXRJ6WYRxpkMhq5FtRMZvrnoAPxdmuXKaqhaKa1neVILqztXvNAA6DJ6qS3iTMNBW89Q==
X-Received: by 2002:a7b:c205:0:b0:39c:506d:e294 with SMTP id x5-20020a7bc205000000b0039c506de294mr438462wmi.159.1654711946336;
        Wed, 08 Jun 2022 11:12:26 -0700 (PDT)
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA
 allocations
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com>
 <1652810658-27810-2-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
Date: Wed, 8 Jun 2022 21:12:24 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 04.06.22 00:52, Stefano Stabellini wrote:


Hello Stefano

Thank you for having a look and sorry for the late response.

> On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Add ability to allocate unpopulated DMAable (contiguous) pages
>> suitable for grant mapping into. This is going to be used by gnttab
>> code (see gnttab_dma_alloc_pages()).
>>
>> TODO: There is code duplication in fill_dma_pool(). Also pool
>> operations likely need to be protected by the lock.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>>   drivers/xen/unpopulated-alloc.c | 167 ++++++++++++++++++++++++++++++++++++++++
>>   include/xen/xen.h               |  15 ++++
>>   2 files changed, 182 insertions(+)
>>
>> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
>> index a39f2d3..bca0198 100644
>> --- a/drivers/xen/unpopulated-alloc.c
>> +++ b/drivers/xen/unpopulated-alloc.c
>> @@ -1,5 +1,6 @@
>>   // SPDX-License-Identifier: GPL-2.0
>>   #include <linux/errno.h>
>> +#include <linux/genalloc.h>
>>   #include <linux/gfp.h>
>>   #include <linux/kernel.h>
>>   #include <linux/mm.h>
>> @@ -16,6 +17,8 @@ static DEFINE_MUTEX(list_lock);
>>   static struct page *page_list;
>>   static unsigned int list_count;
>>   
>> +static struct gen_pool *dma_pool;
>> +
>>   static struct resource *target_resource;
>>   
>>   /*
>> @@ -230,6 +233,161 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>   }
>>   EXPORT_SYMBOL(xen_free_unpopulated_pages);
>>   
>> +static int fill_dma_pool(unsigned int nr_pages)
>> +{
> I think we shouldn't need to add this function at all as we should be
> able to reuse fill_list even for contiguous pages. fill_list could
> always call gen_pool_add_virt before returning.


First of all, I agree that fill_dma_pool() has a lot in common with 
fill_list(), so we can indeed avoid the code duplication (this was 
mentioned in the TODO).
I am not quite sure about "to always call gen_pool_add_virt before 
returning" (does this mean that the same pages would be in 
"page_list" and "dma_pool" simultaneously?),
but I completely agree that we can reuse fill_list() for contiguous 
pages as well, with slight updates.

Please see below.


>
>
>> +	struct dev_pagemap *pgmap;
>> +	struct resource *res, *tmp_res = NULL;
>> +	void *vaddr;
>> +	unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>> +	struct range mhp_range;
>> +	int ret;
>> +
>> +	res = kzalloc(sizeof(*res), GFP_KERNEL);
>> +	if (!res)
>> +		return -ENOMEM;
>> +
>> +	res->name = "Xen DMA pool";
>> +	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
>> +
>> +	mhp_range = mhp_get_pluggable_range(true);
>> +
>> +	ret = allocate_resource(target_resource, res,
>> +				alloc_pages * PAGE_SIZE, mhp_range.start, mhp_range.end,
>> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
>> +	if (ret < 0) {
>> +		pr_err("Cannot allocate new IOMEM resource\n");
>> +		goto err_resource;
>> +	}
>> +
>> +	/*
>> +	 * Reserve the region previously allocated from Xen resource to avoid
>> +	 * re-using it by someone else.
>> +	 */
>> +	if (target_resource != &iomem_resource) {
>> +		tmp_res = kzalloc(sizeof(*tmp_res), GFP_KERNEL);
>> +		if (!tmp_res) {
>> +			ret = -ENOMEM;
>> +			goto err_insert;
>> +		}
>> +
>> +		tmp_res->name = res->name;
>> +		tmp_res->start = res->start;
>> +		tmp_res->end = res->end;
>> +		tmp_res->flags = res->flags;
>> +
>> +		ret = request_resource(&iomem_resource, tmp_res);
>> +		if (ret < 0) {
>> +			pr_err("Cannot request resource %pR (%d)\n", tmp_res, ret);
>> +			kfree(tmp_res);
>> +			goto err_insert;
>> +		}
>> +	}
>> +
>> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
>> +	if (!pgmap) {
>> +		ret = -ENOMEM;
>> +		goto err_pgmap;
>> +	}
>> +
>> +	pgmap->type = MEMORY_DEVICE_GENERIC;
>> +	pgmap->range = (struct range) {
>> +		.start = res->start,
>> +		.end = res->end,
>> +	};
>> +	pgmap->nr_range = 1;
>> +	pgmap->owner = res;
>> +
>> +	vaddr = memremap_pages(pgmap, NUMA_NO_NODE);
>> +	if (IS_ERR(vaddr)) {
>> +		pr_err("Cannot remap memory range\n");
>> +		ret = PTR_ERR(vaddr);
>> +		goto err_memremap;
>> +	}
>> +
>> +	ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
>> +			alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
>> +	if (ret)
>> +		goto err_pool;
>> +
>> +	return 0;
>> +
>> +err_pool:
>> +	memunmap_pages(pgmap);
>> +err_memremap:
>> +	kfree(pgmap);
>> +err_pgmap:
>> +	if (tmp_res) {
>> +		release_resource(tmp_res);
>> +		kfree(tmp_res);
>> +	}
>> +err_insert:
>> +	release_resource(res);
>> +err_resource:
>> +	kfree(res);
>> +	return ret;
>> +}
>> +
>> +/**
>> + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
>> + * @dev: valid struct device pointer
>> + * @nr_pages: Number of pages
>> + * @pages: pages returned
>> + * @return 0 on success, error otherwise
>> + */
>> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>> +		struct page **pages)
>> +{
>> +	void *vaddr;
>> +	bool filled = false;
>> +	unsigned int i;
>> +	int ret;
>
> Also probably it might be better if xen_alloc_unpopulated_pages and
> xen_alloc_unpopulated_dma_pages shared the implementation. Something
> along these lines:
>
> int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> {
>      return _xen_alloc_unpopulated_pages(nr_pages, pages, false);
> }
>
> int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> 		struct page **pages)
> {
>      return _xen_alloc_unpopulated_pages(nr_pages, pages, true);
> }

I think this makes sense, although it depends on what the resulting 
_xen_alloc_unpopulated_pages() will look like. I'd 
s/_xen_alloc_unpopulated_pages/alloc_unpopulated_pages

Please see below.


>
> static int _xen_alloc_unpopulated_pages(unsigned int nr_pages,
>          struct page **pages, bool contiguous)
> {
> 	unsigned int i;
> 	int ret = 0;
>
>      if (contiguous && !xen_feature(XENFEAT_auto_translated_physmap))
>          return -EINVAL;
>
> 	/*
> 	 * Fallback to default behavior if we do not have any suitable resource
> 	 * to allocate required region from and as the result we won't be able to
> 	 * construct pages.
> 	 */
> 	if (!target_resource) {
>          if (contiguous)
>              return -EINVAL;
> 		return xen_alloc_ballooned_pages(nr_pages, pages);
>      }
>
> 	mutex_lock(&list_lock);
> 	if (list_count < nr_pages) {
> 		ret = fill_list(nr_pages - list_count);

As I understand it, this might not work if we need contiguous pages. The 
check is not precise, as "list_count" is the number of available 
pages in the list, which are not guaranteed to be contiguous; also, the 
number of pages to be added to the pool here, "nr_pages - list_count", 
might not be enough to allocate a contiguous region of "nr_pages" size.


> 		if (ret)
> 			goto out;
> 	}
>
>      if (contiguous) {
>          vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE);
>
>          for (i = 0; i < nr_pages; i++)
>              pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>      } else {
>          for (i = 0; i < nr_pages; i++) {
>              struct page *pg = page_list;
>
>              BUG_ON(!pg);
>              page_list = pg->zone_device_data;
>              list_count--;
>              pages[i] = pg;

I think that if we keep the same pages in "page_list" and "dma_pool" 
simultaneously, we will need to reflect that here, otherwise we might end 
up reusing already-allocated pages.
What I mean is: if we allocate pages from "dma_pool", we will need to 
remove them from "page_list" as well, and vice versa, so this might 
add complexity to the code. Or have I missed something?



Based on the suggestions, I see two options; both follow the suggestion 
that xen_alloc(free)_unpopulated_pages() and 
xen_alloc(free)_unpopulated_dma_pages() share the implementation.

1. Keep "page_list" and "dma_pool" separate, so they don't share 
pages. This is how the current patch implements it, but with both TODOs 
eliminated. It doesn't change the behavior for existing users of 
xen_alloc_unpopulated_pages().

Below is the diff for unpopulated-alloc.c. The patch is also available at:

https://github.com/otyshchenko1/linux/commit/1c629abc37478c108a5f4c37ae8076b766c4d5cc


diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index a39f2d3..c3a86c09 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -1,5 +1,7 @@
  // SPDX-License-Identifier: GPL-2.0
+#include <linux/dma-mapping.h>
  #include <linux/errno.h>
+#include <linux/genalloc.h>
  #include <linux/gfp.h>
  #include <linux/kernel.h>
  #include <linux/mm.h>
@@ -16,6 +18,8 @@ static DEFINE_MUTEX(list_lock);
  static struct page *page_list;
  static unsigned int list_count;

+static struct gen_pool *dma_pool;
+
  static struct resource *target_resource;

  /*
@@ -31,7 +35,7 @@ int __weak __init arch_xen_unpopulated_init(struct resource **res)
         return 0;
  }

-static int fill_list(unsigned int nr_pages)
+static int fill_list(unsigned int nr_pages, bool use_pool)
  {
         struct dev_pagemap *pgmap;
         struct resource *res, *tmp_res = NULL;
@@ -125,12 +129,21 @@ static int fill_list(unsigned int nr_pages)
                 goto err_memremap;
         }

-       for (i = 0; i < alloc_pages; i++) {
-               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
+       if (use_pool) {
+               ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
+                               alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
+               if (ret) {
+                       memunmap_pages(pgmap);
+                       goto err_memremap;
+               }
+       } else {
+               for (i = 0; i < alloc_pages; i++) {
+                       struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);

-               pg->zone_device_data = page_list;
-               page_list = pg;
-               list_count++;
+                       pg->zone_device_data = page_list;
+                       page_list = pg;
+                       list_count++;
+               }
         }

         return 0;
@@ -149,13 +162,8 @@ static int fill_list(unsigned int nr_pages)
         return ret;
  }

-/**
- * xen_alloc_unpopulated_pages - alloc unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages returned
- * @return 0 on success, error otherwise
- */
-int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+               bool contiguous)
  {
         unsigned int i;
         int ret = 0;
@@ -165,71 +173,167 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
          * to allocate required region from and as the result we won't be able to
          * construct pages.
          */
-       if (!target_resource)
+       if (!target_resource) {
+               if (contiguous)
+                       return -ENODEV;
+
                 return xen_alloc_ballooned_pages(nr_pages, pages);
+       }

         mutex_lock(&list_lock);
-       if (list_count < nr_pages) {
-               ret = fill_list(nr_pages - list_count);
-               if (ret)
-                       goto out;
-       }

-       for (i = 0; i < nr_pages; i++) {
-               struct page *pg = page_list;
+       if (contiguous) {
+               void *vaddr;
+               bool filled = false;
+
+               while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
+                       if (filled)
+                               ret = -ENOMEM;
+                       else {
+                               ret = fill_list(nr_pages, true);
+                               filled = true;
+                       }
+                       if (ret)
+                               goto out;
+               }

-               BUG_ON(!pg);
-               page_list = pg->zone_device_data;
-               list_count--;
-               pages[i] = pg;
+               for (i = 0; i < nr_pages; i++) {
+                       pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);

  #ifdef CONFIG_XEN_HAVE_PVMMU
-               if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
-                       if (ret < 0) {
-                               unsigned int j;
-
-                               for (j = 0; j <= i; j++) {
-                                       pages[j]->zone_device_data = page_list;
-                                       page_list = pages[j];
-                                       list_count++;
+                       if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+                               ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
+                               if (ret < 0) {
+                                       /* XXX Do we need to zero pages[i]? */
+                                       gen_pool_free(dma_pool, (unsigned long)vaddr,
+                                                       nr_pages * PAGE_SIZE);
+                                       goto out;
                                 }
-                               goto out;
                         }
+#endif
+               }
+       } else {
+               if (list_count < nr_pages) {
+                       ret = fill_list(nr_pages - list_count, false);
+                       if (ret)
+                               goto out;
                 }
+
+               for (i = 0; i < nr_pages; i++) {
+                       struct page *pg = page_list;
+
+                       BUG_ON(!pg);
+                       page_list = pg->zone_device_data;
+                       list_count--;
+                       pages[i] = pg;
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+                       if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+                               ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+                               if (ret < 0) {
+                                       unsigned int j;
+
+                                       for (j = 0; j <= i; j++) {
+                                               pages[j]->zone_device_data = page_list;
+                                               page_list = pages[j];
+                                               list_count++;
+                                       }
+                                       goto out;
+                               }
+                       }
  #endif
+               }
         }

  out:
         mutex_unlock(&list_lock);
         return ret;
  }
-EXPORT_SYMBOL(xen_alloc_unpopulated_pages);

-/**
- * xen_free_unpopulated_pages - return unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages to return
- */
-void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+               bool contiguous)
  {
-       unsigned int i;
-
         if (!target_resource) {
+               if (contiguous)
+                       return;
+
                 xen_free_ballooned_pages(nr_pages, pages);
                 return;
         }

         mutex_lock(&list_lock);
-       for (i = 0; i < nr_pages; i++) {
-               pages[i]->zone_device_data = page_list;
-               page_list = pages[i];
-               list_count++;
+
+       /* XXX Do we need to check the range (gen_pool_has_addr)? */
+       if (contiguous)
+               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
+                               nr_pages * PAGE_SIZE);
+       else {
+               unsigned int i;
+
+               for (i = 0; i < nr_pages; i++) {
+                       pages[i]->zone_device_data = page_list;
+                       page_list = pages[i];
+                       list_count++;
+               }
         }
+
         mutex_unlock(&list_lock);
  }
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+       return alloc_unpopulated_pages(nr_pages, pages, false);
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
+
+/**
+ * xen_free_unpopulated_pages - return unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+       free_unpopulated_pages(nr_pages, pages, false);
+}
  EXPORT_SYMBOL(xen_free_unpopulated_pages);

+/**
+ * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
+               struct page **pages)
+{
+       /* XXX Handle devices which support 64-bit DMA address only for now */
+       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
+               return -EINVAL;
+
+       return alloc_unpopulated_pages(nr_pages, pages, true);
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
+
+/**
+ * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
+               struct page **pages)
+{
+       free_unpopulated_pages(nr_pages, pages, true);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
+
  static int __init unpopulated_init(void)
  {
         int ret;
@@ -237,9 +341,19 @@ static int __init unpopulated_init(void)
         if (!xen_domain())
                 return -ENODEV;

+       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+       if (!dma_pool) {
+               pr_err("xen:unpopulated: Cannot create DMA pool\n");
+               return -ENOMEM;
+       }
+
+       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
+
         ret = arch_xen_unpopulated_init(&target_resource);
         if (ret) {
                pr_err("xen:unpopulated: Cannot initialize target resource\n");
+               gen_pool_destroy(dma_pool);
+               dma_pool = NULL;
                 target_resource = NULL;
         }

[snip]


2. Drop the "page_list" entirely and use "dma_pool" for all (contiguous 
and non-contiguous) allocations. After all, all pages are initially 
contiguous in fill_list(), as they are built from the resource. Note 
that this changes behavior for all users of xen_alloc_unpopulated_pages().

Below the diff for unpopulated-alloc.c. The patch is also available at:

https://github.com/otyshchenko1/linux/commit/7be569f113a4acbdc4bcb9b20cb3995b3151387a


diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index a39f2d3..ab5c7bd 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -1,5 +1,7 @@
  // SPDX-License-Identifier: GPL-2.0
+#include <linux/dma-mapping.h>
  #include <linux/errno.h>
+#include <linux/genalloc.h>
  #include <linux/gfp.h>
  #include <linux/kernel.h>
  #include <linux/mm.h>
@@ -13,8 +15,8 @@
  #include <xen/xen.h>

  static DEFINE_MUTEX(list_lock);
-static struct page *page_list;
-static unsigned int list_count;
+
+static struct gen_pool *dma_pool;

  static struct resource *target_resource;

@@ -36,7 +38,7 @@ static int fill_list(unsigned int nr_pages)
         struct dev_pagemap *pgmap;
         struct resource *res, *tmp_res = NULL;
         void *vaddr;
-       unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
+       unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
         struct range mhp_range;
         int ret;

@@ -106,6 +108,7 @@ static int fill_list(unsigned int nr_pages)
           * conflict with any devices.
           */
         if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+               unsigned int i;
                 xen_pfn_t pfn = PFN_DOWN(res->start);

                 for (i = 0; i < alloc_pages; i++) {
@@ -125,16 +128,17 @@ static int fill_list(unsigned int nr_pages)
                 goto err_memremap;
         }

-       for (i = 0; i < alloc_pages; i++) {
-               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
-
-               pg->zone_device_data = page_list;
-               page_list = pg;
-               list_count++;
+       ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
+                       alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
+       if (ret) {
+               pr_err("Cannot add memory range to the pool\n");
+               goto err_pool;
         }

         return 0;

+err_pool:
+       memunmap_pages(pgmap);
  err_memremap:
         kfree(pgmap);
  err_pgmap:
@@ -149,51 +153,49 @@ static int fill_list(unsigned int nr_pages)
         return ret;
  }

-/**
- * xen_alloc_unpopulated_pages - alloc unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages returned
- * @return 0 on success, error otherwise
- */
-int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+               bool contiguous)
  {
         unsigned int i;
         int ret = 0;
+       void *vaddr;
+       bool filled = false;

         /*
         * Fallback to default behavior if we do not have any suitable resource
         * to allocate required region from and as the result we won't be able to
          * construct pages.
          */
-       if (!target_resource)
+       if (!target_resource) {
+               if (contiguous)
+                       return -ENODEV;
+
                 return xen_alloc_ballooned_pages(nr_pages, pages);
+       }

         mutex_lock(&list_lock);
-       if (list_count < nr_pages) {
-               ret = fill_list(nr_pages - list_count);
+
+       while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
+               if (filled)
+                       ret = -ENOMEM;
+               else {
+                       ret = fill_list(nr_pages);
+                       filled = true;
+               }
                 if (ret)
                         goto out;
         }

         for (i = 0; i < nr_pages; i++) {
-               struct page *pg = page_list;
-
-               BUG_ON(!pg);
-               page_list = pg->zone_device_data;
-               list_count--;
-               pages[i] = pg;
+               pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);

  #ifdef CONFIG_XEN_HAVE_PVMMU
                 if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+                       ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
                         if (ret < 0) {
-                               unsigned int j;
-
-                               for (j = 0; j <= i; j++) {
-                                       pages[j]->zone_device_data = page_list;
-                                       page_list = pages[j];
-                                       list_count++;
-                               }
+                               /* XXX Do we need to zero pages[i]? */
+                               gen_pool_free(dma_pool, (unsigned long)vaddr,
+                                               nr_pages * PAGE_SIZE);
                                 goto out;
                         }
                 }
@@ -204,32 +206,89 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
         mutex_unlock(&list_lock);
         return ret;
  }
-EXPORT_SYMBOL(xen_alloc_unpopulated_pages);

-/**
- * xen_free_unpopulated_pages - return unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages to return
- */
-void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+               bool contiguous)
  {
-       unsigned int i;
-
         if (!target_resource) {
+               if (contiguous)
+                       return;
+
                 xen_free_ballooned_pages(nr_pages, pages);
                 return;
         }

         mutex_lock(&list_lock);
-       for (i = 0; i < nr_pages; i++) {
-               pages[i]->zone_device_data = page_list;
-               page_list = pages[i];
-               list_count++;
+
+       /* XXX Do we need to check the range (gen_pool_has_addr)? */
+       if (contiguous)
+               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
+                               nr_pages * PAGE_SIZE);
+       else {
+               unsigned int i;
+
+               for (i = 0; i < nr_pages; i++)
+                       gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
+                                       PAGE_SIZE);
         }
+
         mutex_unlock(&list_lock);
  }
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+       return alloc_unpopulated_pages(nr_pages, pages, false);
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
+
+/**
+ * xen_free_unpopulated_pages - return unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+       free_unpopulated_pages(nr_pages, pages, false);
+}
  EXPORT_SYMBOL(xen_free_unpopulated_pages);

+/**
+ * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
+               struct page **pages)
+{
+       /* XXX Handle devices which support 64-bit DMA address only for now */
+       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
+               return -EINVAL;
+
+       return alloc_unpopulated_pages(nr_pages, pages, true);
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
+
+/**
+ * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
+               struct page **pages)
+{
+       free_unpopulated_pages(nr_pages, pages, true);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
+
  static int __init unpopulated_init(void)
  {
         int ret;
@@ -237,9 +296,19 @@ static int __init unpopulated_init(void)
         if (!xen_domain())
                 return -ENODEV;

+       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+       if (!dma_pool) {
+               pr_err("xen:unpopulated: Cannot create DMA pool\n");
+               return -ENOMEM;
+       }
+
+       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
+
         ret = arch_xen_unpopulated_init(&target_resource);
         if (ret) {
                pr_err("xen:unpopulated: Cannot initialize target resource\n");
+               gen_pool_destroy(dma_pool);
+               dma_pool = NULL;
                 target_resource = NULL;
         }

[snip]
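The while loop in alloc_unpopulated_pages() above follows a try / fill-once / retry policy: attempt the pool allocation, refill the pool from the backing resource at most one time, and fail with -ENOMEM if the allocation still cannot be satisfied. A minimal userspace sketch of just that control flow (fake_pool_alloc() and fake_fill() are invented stand-ins for gen_pool_alloc() and fill_list(), not kernel APIs):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-ins for the pool and its refill path: a counter of available
 * pages that starts empty and gains a fixed batch on each refill. */
static unsigned int pool_avail;
static int fill_calls;

static void *fake_pool_alloc(unsigned int nr_pages)
{
	if (pool_avail < nr_pages)
		return NULL;
	pool_avail -= nr_pages;
	return (void *)1;		/* any non-NULL value stands for a vaddr */
}

static int fake_fill(unsigned int nr_pages)
{
	fill_calls++;
	pool_avail += 128;		/* pretend a section was hot-added */
	return 0;
}

/* The try / fill-once / retry shape of alloc_unpopulated_pages(). */
static int alloc_with_refill(unsigned int nr_pages, void **vaddr)
{
	int filled = 0;

	while (!(*vaddr = fake_pool_alloc(nr_pages))) {
		int ret;

		if (filled)
			return -ENOMEM;	/* second miss: the refill did not help */
		ret = fake_fill(nr_pages);
		if (ret)
			return ret;
		filled = 1;
	}
	return 0;
}
```

The "filled" flag is what bounds the loop: without it, a request larger than anything fill_list() can ever add would spin forever.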


I think that, depending on the approach, we would likely need to do some 
renaming of fill_list, page_list, list_lock, etc.


Both options work in my Arm64-based environment; I am not sure about x86.
Or do we have another option here?
I would be happy to go with any route. What do you think?


>
>      #ifdef CONFIG_XEN_HAVE_PVMMU
>              if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>                  ret = xen_alloc_p2m_entry(page_to_pfn(pg));
>                  if (ret < 0) {
>                      unsigned int j;
>
>                      for (j = 0; j <= i; j++) {
>                          pages[j]->zone_device_data = page_list;
>                          page_list = pages[j];
>                          list_count++;
>                      }
>                      goto out;
>                  }
>              }
>      #endif
>          }
>      }
>
> out:
> 	mutex_unlock(&list_lock);
> 	return ret;
> }
>
> 	
>
>> +	if (!dma_pool)
>> +		return -ENODEV;
>> +
>> +	/* XXX Handle devices which support 64-bit DMA address only for now */
>> +	if (dma_get_mask(dev) != DMA_BIT_MASK(64))
>> +		return -EINVAL;
>> +
>> +	while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>> +		if (filled)
>> +			return -ENOMEM;
>> +		else {
>> +			ret = fill_dma_pool(nr_pages);
>> +			if (ret)
>> +				return ret;
>> +
>> +			filled = true;
>> +		}
>> +	}
>> +
>> +	for (i = 0; i < nr_pages; i++)
>> +		pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
>> +
>> +/**
>> + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
>> + * @dev: valid struct device pointer
>> + * @nr_pages: Number of pages
>> + * @pages: pages to return
>> + */
>> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>> +		struct page **pages)
>> +{
>> +	void *vaddr;
>> +
>> +	if (!dma_pool)
>> +		return;
>> +
>> +	vaddr = page_to_virt(pages[0]);
>> +
>> +	gen_pool_free(dma_pool, (unsigned long)vaddr, nr_pages * PAGE_SIZE);
>> +}
>> +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
>> +
>>   static int __init unpopulated_init(void)
>>   {
>>   	int ret;
>> @@ -241,8 +399,17 @@ static int __init unpopulated_init(void)
>>   	if (ret) {
>>   		pr_err("xen:unpopulated: Cannot initialize target resource\n");
>>   		target_resource = NULL;
>> +		return ret;
>>   	}
>>   
>> +	dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
>> +	if (!dma_pool) {
>> +		pr_err("xen:unpopulated: Cannot create DMA pool\n");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
>> +
>>   	return ret;
>>   }
>>   early_initcall(unpopulated_init);
>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>> index a99bab8..a6a7a59 100644
>> --- a/include/xen/xen.h
>> +++ b/include/xen/xen.h
>> @@ -52,9 +52,15 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>>   extern u64 xen_saved_max_mem_size;
>>   #endif
>>   
>> +struct device;
>> +
>>   #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>   int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>>   void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>> +		struct page **pages);
>> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>> +		struct page **pages);
>>   #include <linux/ioport.h>
>>   int arch_xen_unpopulated_init(struct resource **res);
>>   #else
>> @@ -69,6 +75,15 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
>>   {
>>   	xen_free_ballooned_pages(nr_pages, pages);
>>   }
>> +static inline int xen_alloc_unpopulated_dma_pages(struct device *dev,
>> +		unsigned int nr_pages, struct page **pages)
>> +{
>> +	return -1;
>> +}
>> +static inline void xen_free_unpopulated_dma_pages(struct device *dev,
>> +		unsigned int nr_pages, struct page **pages)
>> +{
>> +}
>>   #endif
> Given that we have these stubs, maybe we don't need to #ifdef the so
> much code in the next patch

ok, I will analyze


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 08 18:30:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 18:30:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344339.569859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0Ry-0005Kg-Vc; Wed, 08 Jun 2022 18:30:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344339.569859; Wed, 08 Jun 2022 18:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0Ry-0005KZ-RY; Wed, 08 Jun 2022 18:30:38 +0000
Received: by outflank-mailman (input) for mailman id 344339;
 Wed, 08 Jun 2022 18:30:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz0Ry-0005KP-8e; Wed, 08 Jun 2022 18:30:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz0Ry-0003Ua-6j; Wed, 08 Jun 2022 18:30:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz0Rx-00042W-NN; Wed, 08 Jun 2022 18:30:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nz0Rx-0007rg-Mx; Wed, 08 Jun 2022 18:30:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jQ1idkjWhjCYQ11aUBKaLq8m/omyLvoznL201IHsJok=; b=nfHBpmULIRDDuLmUtCXLOMspaB
	aK0HdLp6VFAyVWKpOgA/D1j15kx5I7bFt+BxH3q9B12poYwDrW+LBILkvpMII2sPhwfCCQr8Rn1bu
	HSB1DEtmDjKl+djnSSqNV/p7eg5/++OFpN84SdRTeDn/W38WrRzvLcfBva5ttTUl8ehE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170886: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
X-Osstest-Versions-That:
    xen=5047cd1d5deaa734ce67b4d706ac59d9a258c3e1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 18:30:37 +0000

flight 170886 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170886/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
baseline version:
 xen                  5047cd1d5deaa734ce67b4d706ac59d9a258c3e1

Last test of basis   170883  2022-06-08 10:02:59 Z    0 days
Testing same since   170886  2022-06-08 14:02:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5047cd1d5d..7ac12e3634  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 18:56:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 18:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344299.569870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0r1-0008K4-2s; Wed, 08 Jun 2022 18:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344299.569870; Wed, 08 Jun 2022 18:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0r1-0008Jx-02; Wed, 08 Jun 2022 18:56:31 +0000
Received: by outflank-mailman (input) for mailman id 344299;
 Wed, 08 Jun 2022 16:08:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Gkb=WP=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1nyyEl-0004Vf-4N
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 16:08:51 +0000
Received: from mail-yb1-f178.google.com (mail-yb1-f178.google.com
 [209.85.219.178]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 485605a0-e745-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 18:08:50 +0200 (CEST)
Received: by mail-yb1-f178.google.com with SMTP id i39so9271045ybj.9
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 09:08:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 485605a0-e745-11ec-b605-df0040e90b76
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=kprufSUtg2c0Z0Z7k//OQq4qpvnNTKXHCWDLteZACkI=;
        b=Aeux3pBasL1UmchGiOjI/uDgStoe/Alw+PZv1BCotj2UuQGBurml1ekhKA5/3oR8pU
         OxqwR9KMqUP66bN0bc1bcKW+6NNbv7tCNCGhmIJvjp1lkH5OyHi7Y5FFvhjWmfucBw9U
         ujnUt+Y7YnfolHvCBb5zI3LbgDL8wasFne8Q8R5jozMR6pzJQFvPeu0sGbeHHlYrM3TP
         YhjR8GI0yOsqx4E+RUA9m0fiyBTKoqUctcc0rcI9UPD85Kc0zkm0C9NnGN8rZmu5/W9A
         blYsGf6eVq3TZaTAAFNDN9xUUEZ2fICNPHA12tYXQhq0X1ufCHrxP98lmehc8jXATprE
         jUfg==
X-Gm-Message-State: AOAM5322dgvE5HKib8d/NU7kztDKbYccYy3oodyiO6RoWSGkw2FgguF3
	9QpMxQ+BGS+z/ygz8xSmzShGou44AgTPUvmmcVg=
X-Google-Smtp-Source: ABdhPJyG4ZT1DjeY4ERw/W2efwDb4GS4TUltgXoK23+ZW/CgpHQ75Z+ngY7GcqUaxeuwjXh809zcJPBimQ/BOSJe/30=
X-Received: by 2002:a25:d98b:0:b0:65c:9dc9:7a8f with SMTP id
 q133-20020a25d98b000000b0065c9dc97a8fmr34529681ybg.622.1654704528580; Wed, 08
 Jun 2022 09:08:48 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144516.172460444@infradead.org>
 <CAJZ5v0gW-zD8Mgghy70f3rFz0QoozCwZ9idyrqtFgA6SWHK5XQ@mail.gmail.com> <YqDEwMDSL1YXdHFH@hirez.programming.kicks-ass.net>
In-Reply-To: <YqDEwMDSL1YXdHFH@hirez.programming.kicks-ass.net>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Wed, 8 Jun 2022 18:08:37 +0200
Message-ID: <CAJZ5v0hFfhxjp5cVNz+JSWcWx5ga1cDccmsqKAVgxp-JWs9upg@mail.gmail.com>
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
To: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>, rth@twiddle.net, ink@jurassic.park.msu.ru, 
	mattst88@gmail.com, vgupta@kernel.org, 
	Russell King - ARM Linux <linux@armlinux.org.uk>, ulli.kroll@googlemail.com, 
	Linus Walleij <linus.walleij@linaro.org>, Shawn Guo <shawnguo@kernel.org>, 
	Sascha Hauer <s.hauer@pengutronix.de>, Sascha Hauer <kernel@pengutronix.de>, 
	Fabio Estevam <festevam@gmail.com>, dl-linux-imx <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>, 
	Kevin Hilman <khilman@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, 
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, bcain@quicinc.com, 
	Huacai Chen <chenhuacai@kernel.org>, kernel@xen0n.name, 
	Geert Uytterhoeven <geert@linux-m68k.org>, sammy@sammy.net, Michal Simek <monstr@monstr.eu>, 
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, Stafford Horne <shorne@gmail.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, 
	Paul Mackerras <paulus@samba.org>, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, 
	Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, 
	Alexander Gordeev <agordeev@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, 
	Sven Schnelle <svens@linux.ibm.com>, Yoshinori Sato <ysato@users.sourceforge.jp>, 
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>, 
	Richard Weinberger <richard@nod.at>, anton.ivanov@cambridgegreys.com, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "the arch/x86 maintainers" <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, acme@kernel.org, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, jolsa@kernel.org, namhyung@kernel.org, 
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, amakhalov@vmware.com, 
	pv-drivers@vmware.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net, 
	jcmvbkbc@gmail.com, Len Brown <lenb@kernel.org>, Pavel Machek <pavel@ucw.cz>, 
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Michael Turquette <mturquette@baylibre.com>, 
	Stephen Boyd <sboyd@kernel.org>, Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org, 
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>, 
	Bjorn Andersson <bjorn.andersson@linaro.org>, Anup Patel <anup@brainfault.org>, 
	Thierry Reding <thierry.reding@gmail.com>, Jon Hunter <jonathanh@nvidia.com>, 
	Jacob Pan <jacob.jun.pan@linux.intel.com>, Arnd Bergmann <arnd@arndb.de>, 
	Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	Rasmus Villemoes <linux@rasmusvillemoes.dk>, Steven Rostedt <rostedt@goodmis.org>, 
	Petr Mladek <pmladek@suse.com>, senozhatsky@chromium.org, 
	John Ogness <john.ogness@linutronix.de>, "Paul E. McKenney" <paulmck@kernel.org>, 
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com, 
	Josh Triplett <josh@joshtriplett.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, 
	Lai Jiangshan <jiangshanlai@gmail.com>, Joel Fernandes <joel@joelfernandes.org>, 
	Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Dietmar Eggemann <dietmar.eggemann@arm.com>, Benjamin Segall <bsegall@google.com>, 
	Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, vschneid@redhat.com, 
	jpoimboe@kernel.org, linux-alpha@vger.kernel.org, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-snps-arc@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	Linux OMAP Mailing List <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k <linux-m68k@lists.linux-m68k.org>, 
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>, openrisc@lists.librecores.org, 
	Parisc List <linux-parisc@vger.kernel.org>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	linux-riscv <linux-riscv@lists.infradead.org>, linux-s390@vger.kernel.org, 
	Linux-sh list <linux-sh@vger.kernel.org>, sparclinux@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-xtensa@linux-xtensa.org, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, Linux PM <linux-pm@vger.kernel.org>, 
	linux-clk <linux-clk@vger.kernel.org>, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	linux-tegra <linux-tegra@vger.kernel.org>, linux-arch <linux-arch@vger.kernel.org>, 
	rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 8, 2022 at 5:48 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jun 08, 2022 at 05:01:05PM +0200, Rafael J. Wysocki wrote:
> > On Wed, Jun 8, 2022 at 4:47 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> > > Xeons") wrecked intel_idle in two ways:
> > >
> > >  - must not have tracing in idle functions
> > >  - must return with IRQs disabled
> > >
> > > Additionally, it added a branch for no good reason.
> > >
> > > Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >
> > Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >
> > And do I think correctly that this can be applied without the rest of
> > the series?
>
> Yeah, I don't think this relies on any of the preceding patches. If you
> want to route this through the pm/fixes tree that's fine.

OK, thanks, applied (and I moved the intel_idle() kerneldoc so it is
next to the function to avoid the docs build warning).


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 18:56:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 18:56:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344335.569874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0r1-0008NW-CE; Wed, 08 Jun 2022 18:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344335.569874; Wed, 08 Jun 2022 18:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz0r1-0008MW-8L; Wed, 08 Jun 2022 18:56:31 +0000
Received: by outflank-mailman (input) for mailman id 344335;
 Wed, 08 Jun 2022 18:13:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Gkb=WP=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1nz0BE-0002Xq-Qb
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 18:13:20 +0000
Received: from mail-yb1-f182.google.com (mail-yb1-f182.google.com
 [209.85.219.182]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acac7942-e756-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 20:13:19 +0200 (CEST)
Received: by mail-yb1-f182.google.com with SMTP id x187so10310615ybe.2
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 11:13:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acac7942-e756-11ec-bd2c-47488cf2e6aa
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ZQePx5uIM1fUidMEiJVM5nuB6ZzPVeEu38qEpc1UrpE=;
        b=mrMV3XaTw9M5veahVv7TjH6RnhOai37m5qBTBdlGVb2XLE/MlWrKSEl9yusFHm7W9h
         TPsGW58oMDC2mRARmhNJFZT3IukmWnDVB3uFZPEm/L2IaL9UVjmHDurHA37ARDot1rLj
         d8JhK8u5Q/comRUPCCa/1b/m8S+EtgpgcP8kemO17nCexM2rKOicIjL7YpFbeXqCGxzq
         8LFKQYnr/W10feKXiXbVS9G1Yab3mrMQlMsGHfiUgBdE2rLqbxNpcOz/5LuaTe4fd+x6
         sop/fjBqBnHk1LKAwUMVjGWTGBzE5s6Hdo9pdSZ+ob0LnlPlAt/N16C3UfwBxV0EUIYa
         11Ww==
X-Gm-Message-State: AOAM532+P7ZwPtWYlNdbA2vmZD7spYIkZBoiaeUiYXb8yHHBrx5L2Gfk
	3rAop5zoYN59cjcmeX8BJtzMUWopkzMynz+Goe4=
X-Google-Smtp-Source: ABdhPJyd1CrHem+vVK3IGyX/t5xH0MretwaFF4fsfRB0Td7tNKI/Gb3F1VH58WOF1ULS3yEBVcV2AFovGma6uhuJi5U=
X-Received: by 2002:a5b:4a:0:b0:663:7c5b:a5ba with SMTP id e10-20020a5b004a000000b006637c5ba5bamr17352131ybp.81.1654711998678;
 Wed, 08 Jun 2022 11:13:18 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144516.047149313@infradead.org>
In-Reply-To: <20220608144516.047149313@infradead.org>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Wed, 8 Jun 2022 20:13:07 +0200
Message-ID: <CAJZ5v0g=jcRkMhVoNkZzSaM0viK0C7WN6T5C1_3VQ5GGGQkSzQ@mail.gmail.com>
Subject: Re: [PATCH 02/36] x86/idle: Replace x86_idle with a static_call
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, Russell King - ARM Linux <linux@armlinux.org.uk>, ulli.kroll@googlemail.com, 
	Linus Walleij <linus.walleij@linaro.org>, Shawn Guo <shawnguo@kernel.org>, 
	Sascha Hauer <s.hauer@pengutronix.de>, Sascha Hauer <kernel@pengutronix.de>, 
	Fabio Estevam <festevam@gmail.com>, dl-linux-imx <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>, 
	Kevin Hilman <khilman@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, 
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, bcain@quicinc.com, 
	Huacai Chen <chenhuacai@kernel.org>, kernel@xen0n.name, 
	Geert Uytterhoeven <geert@linux-m68k.org>, sammy@sammy.net, Michal Simek <monstr@monstr.eu>, 
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, Stafford Horne <shorne@gmail.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, 
	Paul Mackerras <paulus@samba.org>, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, 
	Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, 
	Alexander Gordeev <agordeev@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, 
	Sven Schnelle <svens@linux.ibm.com>, Yoshinori Sato <ysato@users.sourceforge.jp>, 
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>, 
	Richard Weinberger <richard@nod.at>, anton.ivanov@cambridgegreys.com, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "the arch/x86 maintainers" <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, acme@kernel.org, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, jolsa@kernel.org, namhyung@kernel.org, 
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, amakhalov@vmware.com, 
	pv-drivers@vmware.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net, 
	jcmvbkbc@gmail.com, "Rafael J. Wysocki" <rafael@kernel.org>, Len Brown <lenb@kernel.org>, 
	Pavel Machek <pavel@ucw.cz>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, 
	Michael Turquette <mturquette@baylibre.com>, Stephen Boyd <sboyd@kernel.org>, 
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org, 
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>, 
	Bjorn Andersson <bjorn.andersson@linaro.org>, Anup Patel <anup@brainfault.org>, 
	Thierry Reding <thierry.reding@gmail.com>, Jon Hunter <jonathanh@nvidia.com>, 
	Jacob Pan <jacob.jun.pan@linux.intel.com>, Arnd Bergmann <arnd@arndb.de>, 
	Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	Rasmus Villemoes <linux@rasmusvillemoes.dk>, Steven Rostedt <rostedt@goodmis.org>, 
	Petr Mladek <pmladek@suse.com>, senozhatsky@chromium.org, 
	John Ogness <john.ogness@linutronix.de>, "Paul E. McKenney" <paulmck@kernel.org>, 
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com, 
	Josh Triplett <josh@joshtriplett.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, 
	Lai Jiangshan <jiangshanlai@gmail.com>, Joel Fernandes <joel@joelfernandes.org>, 
	Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Dietmar Eggemann <dietmar.eggemann@arm.com>, Benjamin Segall <bsegall@google.com>, 
	Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, vschneid@redhat.com, 
	jpoimboe@kernel.org, linux-alpha@vger.kernel.org, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-snps-arc@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	Linux OMAP Mailing List <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k <linux-m68k@lists.linux-m68k.org>, 
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>, openrisc@lists.librecores.org, 
	Parisc List <linux-parisc@vger.kernel.org>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	linux-riscv <linux-riscv@lists.infradead.org>, linux-s390@vger.kernel.org, 
	Linux-sh list <linux-sh@vger.kernel.org>, sparclinux@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
	linux-xtensa@linux-xtensa.org, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, Linux PM <linux-pm@vger.kernel.org>, 
	linux-clk <linux-clk@vger.kernel.org>, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	linux-tegra <linux-tegra@vger.kernel.org>, linux-arch <linux-arch@vger.kernel.org>, 
	rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 8, 2022 at 4:47 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Typical boot time setup; no need to suffer an indirect call for that.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

> ---
>  arch/x86/kernel/process.c |   50 +++++++++++++++++++++++++---------------------
>  1 file changed, 28 insertions(+), 22 deletions(-)
>
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -24,6 +24,7 @@
>  #include <linux/cpuidle.h>
>  #include <linux/acpi.h>
>  #include <linux/elf-randomize.h>
> +#include <linux/static_call.h>
>  #include <trace/events/power.h>
>  #include <linux/hw_breakpoint.h>
>  #include <asm/cpu.h>
> @@ -692,7 +693,23 @@ void __switch_to_xtra(struct task_struct
>  unsigned long boot_option_idle_override = IDLE_NO_OVERRIDE;
>  EXPORT_SYMBOL(boot_option_idle_override);
>
> -static void (*x86_idle)(void);
> +/*
> + * We use this if we don't have any better idle routine..
> + */
> +void __cpuidle default_idle(void)
> +{
> +       raw_safe_halt();
> +}
> +#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
> +EXPORT_SYMBOL(default_idle);
> +#endif
> +
> +DEFINE_STATIC_CALL_NULL(x86_idle, default_idle);
> +
> +static bool x86_idle_set(void)
> +{
> +       return !!static_call_query(x86_idle);
> +}
>
>  #ifndef CONFIG_SMP
>  static inline void play_dead(void)
> @@ -715,28 +732,17 @@ void arch_cpu_idle_dead(void)
>  /*
>   * Called from the generic idle code.
>   */
> -void arch_cpu_idle(void)
> -{
> -       x86_idle();
> -}
> -
> -/*
> - * We use this if we don't have any better idle routine..
> - */
> -void __cpuidle default_idle(void)
> +void __cpuidle arch_cpu_idle(void)
>  {
> -       raw_safe_halt();
> +       static_call(x86_idle)();
>  }
> -#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
> -EXPORT_SYMBOL(default_idle);
> -#endif
>
>  #ifdef CONFIG_XEN
>  bool xen_set_default_idle(void)
>  {
> -       bool ret = !!x86_idle;
> +       bool ret = x86_idle_set();
>
> -       x86_idle = default_idle;
> +       static_call_update(x86_idle, default_idle);
>
>         return ret;
>  }
> @@ -859,20 +865,20 @@ void select_idle_routine(const struct cp
>         if (boot_option_idle_override == IDLE_POLL && smp_num_siblings > 1)
>                 pr_warn_once("WARNING: polling idle and HT enabled, performance may degrade\n");
>  #endif
> -       if (x86_idle || boot_option_idle_override == IDLE_POLL)
> +       if (x86_idle_set() || boot_option_idle_override == IDLE_POLL)
>                 return;
>
>         if (boot_cpu_has_bug(X86_BUG_AMD_E400)) {
>                 pr_info("using AMD E400 aware idle routine\n");
> -               x86_idle = amd_e400_idle;
> +               static_call_update(x86_idle, amd_e400_idle);
>         } else if (prefer_mwait_c1_over_halt(c)) {
>                 pr_info("using mwait in idle threads\n");
> -               x86_idle = mwait_idle;
> +               static_call_update(x86_idle, mwait_idle);
>         } else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
>                 pr_info("using TDX aware idle routine\n");
> -               x86_idle = tdx_safe_halt;
> +               static_call_update(x86_idle, tdx_safe_halt);
>         } else
> -               x86_idle = default_idle;
> +               static_call_update(x86_idle, default_idle);
>  }
>
>  void amd_e400_c1e_apic_setup(void)
> @@ -925,7 +931,7 @@ static int __init idle_setup(char *str)
>                  * To continue to load the CPU idle driver, don't touch
>                  * the boot_option_idle_override.
>                  */
> -               x86_idle = default_idle;
> +               static_call_update(x86_idle, default_idle);
>                 boot_option_idle_override = IDLE_HALT;
>         } else if (!strcmp(str, "nomwait")) {
>                 /*
>
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 20:34:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 20:34:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344372.569899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz2NU-0003R6-Hb; Wed, 08 Jun 2022 20:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344372.569899; Wed, 08 Jun 2022 20:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz2NU-0003Qz-EG; Wed, 08 Jun 2022 20:34:08 +0000
Received: by outflank-mailman (input) for mailman id 344372;
 Wed, 08 Jun 2022 20:34:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz2NS-0003Qp-IB; Wed, 08 Jun 2022 20:34:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz2NS-0005yR-Eq; Wed, 08 Jun 2022 20:34:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz2NR-0001rv-TI; Wed, 08 Jun 2022 20:34:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nz2NR-0003JN-Sn; Wed, 08 Jun 2022 20:34:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I1Qg6itMTiZkW/w5Z54Aad9lvC5a/2FmiJ7ZuSZ4bD0=; b=YaidmTuHWtPmm63JhLuI1c/13e
	VOjZeI+cMLUi4HOxOIAfxWI0L+GNTPpkwDW26Xc8smkx7PirgEeOnrp6+7hKNXOCruDOf2FDzrY6P
	wLKlNGtXVbQr0g0jdXRCaENgl39Hs/VbqrSzprSJSrR5+faYsr2K5mjia7LO1t7nqrkU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170877: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
X-Osstest-Versions-That:
    xen=58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 20:34:05 +0000

flight 170877 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170877/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail in 170852 pass in 170877
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install         fail pass in 170852
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 170865

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair        11 xen-install/dst_host fail in 170852 like 170840
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170840
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170840
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170840
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170840
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170840
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170840
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170840
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170840
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170840
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106
baseline version:
 xen                  58ce5b6c33ecae76f2e9fc5213a56e98c3be4be5

Last test of basis   170840  2022-06-06 01:51:50 Z    2 days
Testing same since   170852  2022-06-06 22:09:25 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   58ce5b6c33..cea9ae0622  cea9ae06229577cd5b77019ce122f9cdd1568106 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 22:01:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 22:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344387.569916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz3jy-0004ze-LT; Wed, 08 Jun 2022 22:01:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344387.569916; Wed, 08 Jun 2022 22:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz3jy-0004yb-Cm; Wed, 08 Jun 2022 22:01:26 +0000
Received: by outflank-mailman (input) for mailman id 344387;
 Wed, 08 Jun 2022 21:55:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QjaB=WP=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1nz3eV-00044U-Fw
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 21:55:47 +0000
Received: from mout.kundenserver.de (mout.kundenserver.de [212.227.126.133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be8f22cc-e775-11ec-bd2c-47488cf2e6aa;
 Wed, 08 Jun 2022 23:55:46 +0200 (CEST)
Received: from mail-yb1-f173.google.com ([209.85.219.173]) by
 mrelayeu.kundenserver.de (mreue010 [213.165.67.97]) with ESMTPSA (Nemesis) id
 1MuUza-1niDDo45z8-00rar3 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun
 2022 23:55:43 +0200
Received: by mail-yb1-f173.google.com with SMTP id a30so20003739ybj.3
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 14:55:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be8f22cc-e775-11ec-bd2c-47488cf2e6aa
X-Gm-Message-State: AOAM531iVnSG872l62iKviwF9PrHO8lr/WeOht9qKnDF0w6sed6e+kZg
	NVK4tzOsBS5NyF1aj0Itt46paxGzZS9LjbWyQQI=
X-Google-Smtp-Source: ABdhPJzN5kbTwiXruPcKSpnmqojFIl1cGQUBuhNCCdRneCrpSwm0deMOGKvKLmavgOJ05DHkpAoCHNV4Fm5CQu/QOU4=
X-Received: by 2002:a0d:efc2:0:b0:2fe:d2b7:da8 with SMTP id
 y185-20020a0defc2000000b002fed2b70da8mr36982567ywe.42.1654705351589; Wed, 08
 Jun 2022 09:22:31 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144517.188449351@infradead.org>
In-Reply-To: <20220608144517.188449351@infradead.org>
From: Arnd Bergmann <arnd@arndb.de>
Date: Wed, 8 Jun 2022 18:22:12 +0200
X-Gmail-Original-Message-ID: <CAK8P3a2y5+nrQFzhjrTTZe+d57Ug261J3kwLNe8Mu8i2qxtG2w@mail.gmail.com>
Message-ID: <CAK8P3a2y5+nrQFzhjrTTZe+d57Ug261J3kwLNe8Mu8i2qxtG2w@mail.gmail.com>
Subject: Re: [PATCH 20/36] arch/idle: Change arch_cpu_idle() IRQ behaviour
To: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>, Ivan Kokshaysky <ink@jurassic.park.msu.ru>, 
	Matt Turner <mattst88@gmail.com>, Vineet Gupta <vgupta@kernel.org>, 
	Russell King - ARM Linux <linux@armlinux.org.uk>, Hans Ulli Kroll <ulli.kroll@googlemail.com>, 
	Linus Walleij <linus.walleij@linaro.org>, Shawn Guo <shawnguo@kernel.org>, 
	Sascha Hauer <s.hauer@pengutronix.de>, Sascha Hauer <kernel@pengutronix.de>, 
	Fabio Estevam <festevam@gmail.com>, NXP Linux Team <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>, 
	Kevin Hilman <khilman@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, 
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, bcain@quicinc.com, 
	Huacai Chen <chenhuacai@kernel.org>, Xuerui Wang <kernel@xen0n.name>, 
	Geert Uytterhoeven <geert@linux-m68k.org>, Sam Creasey <sammy@sammy.net>, Michal Simek <monstr@monstr.eu>, 
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>, Dinh Nguyen <dinguyen@kernel.org>, 
	Jonas Bonn <jonas@southpole.se>, 
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>, Stafford Horne <shorne@gmail.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, 
	Paul Mackerras <paulus@samba.org>, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, 
	Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, 
	Alexander Gordeev <agordeev@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, 
	Sven Schnelle <svens@linux.ibm.com>, Yoshinori Sato <ysato@users.sourceforge.jp>, 
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>, 
	Richard Weinberger <richard@nod.at>, Anton Ivanov <anton.ivanov@cambridgegreys.com>, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "the arch/x86 maintainers" <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, Arnaldo Carvalho de Melo <acme@kernel.org>, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, Jiri Olsa <jolsa@kernel.org>, 
	Namhyung Kim <namhyung@kernel.org>, Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, Pv-drivers <pv-drivers@vmware.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Chris Zankel <chris@zankel.net>, 
	Max Filippov <jcmvbkbc@gmail.com>, Rafael Wysocki <rafael@kernel.org>, Len Brown <lenb@kernel.org>, 
	Pavel Machek <pavel@ucw.cz>, gregkh <gregkh@linuxfoundation.org>, 
	Michael Turquette <mturquette@baylibre.com>, Stephen Boyd <sboyd@kernel.org>, 
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org, 
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>, 
	Bjorn Andersson <bjorn.andersson@linaro.org>, Anup Patel <anup@brainfault.org>, 
	Thierry Reding <thierry.reding@gmail.com>, Jonathan Hunter <jonathanh@nvidia.com>, 
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>, 
	Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	Rasmus Villemoes <linux@rasmusvillemoes.dk>, Steven Rostedt <rostedt@goodmis.org>, 
	Petr Mladek <pmladek@suse.com>, Sergey Senozhatsky <senozhatsky@chromium.org>, 
	John Ogness <john.ogness@linutronix.de>, "Paul E. McKenney" <paulmck@kernel.org>, 
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com, 
	Josh Triplett <josh@joshtriplett.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, 
	Lai Jiangshan <jiangshanlai@gmail.com>, Joel Fernandes <joel@joelfernandes.org>, 
	Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Dietmar Eggemann <dietmar.eggemann@arm.com>, Ben Segall <bsegall@google.com>, 
	Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, vschneid@redhat.com, 
	jpoimboe@kernel.org, alpha <linux-alpha@vger.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, 
	"open list:SYNOPSYS ARC ARCHITECTURE" <linux-snps-arc@lists.infradead.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-omap <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org, 
	"open list:QUALCOMM HEXAGON..." <linux-hexagon@vger.kernel.org>, 
	"open list:IA64 (Itanium) PLATFORM" <linux-ia64@vger.kernel.org>, linux-m68k <linux-m68k@lists.linux-m68k.org>, 
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>, Openrisc <openrisc@lists.librecores.org>, 
	Parisc List <linux-parisc@vger.kernel.org>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	linux-riscv <linux-riscv@lists.infradead.org>, linux-s390 <linux-s390@vger.kernel.org>, 
	Linux-sh list <linux-sh@vger.kernel.org>, sparclinux <sparclinux@vger.kernel.org>, 
	linux-um <linux-um@lists.infradead.org>, linux-perf-users@vger.kernel.org, 
	"open list:DRM DRIVER FOR QEMU'S CIRRUS DEVICE" <virtualization@lists.linux-foundation.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	"open list:TENSILICA XTENSA PORT (xtensa)" <linux-xtensa@linux-xtensa.org>, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, Linux PM list <linux-pm@vger.kernel.org>, 
	linux-clk <linux-clk@vger.kernel.org>, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	"open list:TEGRA ARCHITECTURE SUPPORT" <linux-tegra@vger.kernel.org>, linux-arch <linux-arch@vger.kernel.org>, 
	rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Provags-ID: V03:K1:m8R1Pu+UFzRU6eC4jzpDdb8g6pVLHLdmGO5KVQBK/pYk/k+ipCy
 zrXY5zy+jCD8GyZxTQKr1nfTPbl8VptQzV0ZLnZjAcVyxO1YtQMOF00P9nkKmyR2as2/JYY
 d+dS3QyIckg74iMvx9YNJEBkn6Q0Wa993OqVjB+g/Sy0h/7aQNeResdeL3awiaLBEyZaClT
 cFDlan7NH0tH7dpNRe1hA==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:1xrwEBKmde8=:q/TsCD1dQLxyvoyLkQBcVR
 YJ85mfuLrkJIz1Y3PnVowarRhaV8J5rY2cMYoZP6Shqz4N4pzeoO49gAJdfyXKD3a2iFQOcq9
 JNUbUG+d8+2TryKFp13hUtpcPr9688Z0S9wG7wWMthVEcSBa3zAhQI2oigQYAtNIQrdcrac/D
 c0CymXAeaU1ZnUA8i9ijTM+RrEAWOZacLiowS7jzC9H3dinzIJgMxmBuwlGS9fEk9y2vc/lTv
 +3CY5GiGxjKixJGzqw6zPTuWcHwK/I3Skdbe4d5A8ne2EsKBUE+D46vHJwDJ1WVazpaDpVdeA
 xS1ZE6/rHCrldwzqZwJ3tcpQsrFSE6Qf0jtgCpD2A5Xyqj5T2pyKY55/l9pM9ZmyhzvEtQ8ZK
 6dWczCmjHBFPmkiNWh+YuFr4000thhf7z6G/RlXh1NNZdca2TMPG5xBZYXAB+gipnSmw+IW81
 z0zks2bNN3twYpu2sVirKN6pa+iO34rheBsHZuRZV5IXLrRIvwVlibxUPNsO1PmfZasfsL+rm
 xhFbJV+iO/8jrrd4BnT4tGy/PrjZuJpYO8aSGiktsG1s1pblmr1cmcy9L4DaPb0lIjPDH2XlS
 cuin7mFRP2/P0yR4up2Chi1hAPuvf/pmTh8Fon3zbJ3hqGvvYj4i1e5jSeE0fiJXcpZZvapVR
 k8rac1q/u5zGNg6Vyj7T6m+O+81HqluFb7huzVTwLFOZP8Xf2R4dUIlhvlCz9nB746XG+eAUu
 T6zVgENFsViPDghtQ4sFdd+jFcO0OdIsTmNAglqd8iXuD7G9TCkmrUWAWkIWMafUK1Zsu6lOy
 XvfxmgE27E6EAHtFOypPV4B7QXQ2vn2jWt/olXB8HOKsQiU29rj1f6fwCVsMfTfS4RSDHKRTz
 NvZdczBCSEO2BRbafEAr9ELi6rNZ88WJWWMJGRH5oZKhKdpOhkN8M5Tgjd3EPfUgf82iCmvUL
 0WCOXRSwBoewOsg/0s6PDJxoPgvctlTQmB6SV+OkzuowYYLCamVg+

On Wed, Jun 8, 2022 at 4:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Currently, arch_cpu_idle() is called with IRQs disabled, but will return
> with IRQs enabled.
>
> However, the very first thing the generic code does after calling
> arch_cpu_idle() is raw_local_irq_disable(). This means that
> architectures that can idle with IRQs disabled end up doing a
> pointless 'enable-disable' dance.
>
> Therefore, push this IRQ disabling into the idle function, meaning
> that those architectures can avoid the pointless IRQ state flipping.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

I think you now need to add a raw_local_irq_disable() in loongarch
as well.

       Arnd


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 22:01:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 22:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344385.569910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz3jy-0004vy-87; Wed, 08 Jun 2022 22:01:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344385.569910; Wed, 08 Jun 2022 22:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz3jy-0004vr-4a; Wed, 08 Jun 2022 22:01:26 +0000
Received: by outflank-mailman (input) for mailman id 344385;
 Wed, 08 Jun 2022 21:43:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QjaB=WP=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1nz3SS-0002zq-7T
 for xen-devel@lists.xenproject.org; Wed, 08 Jun 2022 21:43:20 +0000
Received: from mout.kundenserver.de (mout.kundenserver.de [212.227.126.187])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 01f61925-e774-11ec-b605-df0040e90b76;
 Wed, 08 Jun 2022 23:43:17 +0200 (CEST)
Received: from mail-ua1-f47.google.com ([209.85.222.47]) by
 mrelayeu.kundenserver.de (mreue011 [213.165.67.97]) with ESMTPSA (Nemesis) id
 1MdNse-1nPzgj06cq-00ZQ1c for <xen-devel@lists.xenproject.org>; Wed, 08 Jun
 2022 23:43:17 +0200
Received: by mail-ua1-f47.google.com with SMTP id o8so7382278uap.6
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 14:43:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01f61925-e774-11ec-b605-df0040e90b76
X-Gm-Message-State: AOAM5314z4DzazvIHroPhIG82d+Ud50XyVYeVMqhFAsPWTaAbMXG4IjA
	JQoWwW8CRVZctX3NqzJyDMjIwri+VmJuQs6foWQ=
X-Google-Smtp-Source: ABdhPJzNVjvOg+I4TKoqtI9Mj5r7eGnKl4yZEYVHLzTr22uimte8bgKc1nTgREfDsecVqzQDeuxH8v8jGezndj85u8s=
X-Received: by 2002:a25:e64b:0:b0:663:ffad:eac5 with SMTP id
 d72-20020a25e64b000000b00663ffadeac5mr3789690ybh.550.1654705730388; Wed, 08
 Jun 2022 09:28:50 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144518.010587032@infradead.org>
In-Reply-To: <20220608144518.010587032@infradead.org>
From: Arnd Bergmann <arnd@arndb.de>
Date: Wed, 8 Jun 2022 18:28:33 +0200
X-Gmail-Original-Message-ID: <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>
Message-ID: <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>
Subject: Re: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
To: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>, Ivan Kokshaysky <ink@jurassic.park.msu.ru>, 
	Matt Turner <mattst88@gmail.com>, Vineet Gupta <vgupta@kernel.org>, 
	Russell King - ARM Linux <linux@armlinux.org.uk>, Hans Ulli Kroll <ulli.kroll@googlemail.com>, 
	Linus Walleij <linus.walleij@linaro.org>, Shawn Guo <shawnguo@kernel.org>, 
	Sascha Hauer <s.hauer@pengutronix.de>, Sascha Hauer <kernel@pengutronix.de>, 
	Fabio Estevam <festevam@gmail.com>, NXP Linux Team <linux-imx@nxp.com>, Tony Lindgren <tony@atomide.com>, 
	Kevin Hilman <khilman@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, 
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, bcain@quicinc.com, 
	Huacai Chen <chenhuacai@kernel.org>, Xuerui Wang <kernel@xen0n.name>, 
	Geert Uytterhoeven <geert@linux-m68k.org>, Sam Creasey <sammy@sammy.net>, Michal Simek <monstr@monstr.eu>, 
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>, Dinh Nguyen <dinguyen@kernel.org>, 
	Jonas Bonn <jonas@southpole.se>, 
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>, Stafford Horne <shorne@gmail.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, 
	Paul Mackerras <paulus@samba.org>, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, 
	Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, 
	Alexander Gordeev <agordeev@linux.ibm.com>, Christian Borntraeger <borntraeger@linux.ibm.com>, 
	Sven Schnelle <svens@linux.ibm.com>, Yoshinori Sato <ysato@users.sourceforge.jp>, 
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>, 
	Richard Weinberger <richard@nod.at>, Anton Ivanov <anton.ivanov@cambridgegreys.com>, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "the arch/x86 maintainers" <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, Arnaldo Carvalho de Melo <acme@kernel.org>, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, Jiri Olsa <jolsa@kernel.org>, 
	Namhyung Kim <namhyung@kernel.org>, Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, Pv-drivers <pv-drivers@vmware.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Chris Zankel <chris@zankel.net>, 
	Max Filippov <jcmvbkbc@gmail.com>, Rafael Wysocki <rafael@kernel.org>, Len Brown <lenb@kernel.org>, 
	Pavel Machek <pavel@ucw.cz>, gregkh <gregkh@linuxfoundation.org>, 
	Michael Turquette <mturquette@baylibre.com>, Stephen Boyd <sboyd@kernel.org>, 
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org, 
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>, 
	Bjorn Andersson <bjorn.andersson@linaro.org>, Anup Patel <anup@brainfault.org>, 
	Thierry Reding <thierry.reding@gmail.com>, Jonathan Hunter <jonathanh@nvidia.com>, 
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>, 
	Yury Norov <yury.norov@gmail.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	Rasmus Villemoes <linux@rasmusvillemoes.dk>, Steven Rostedt <rostedt@goodmis.org>, 
	Petr Mladek <pmladek@suse.com>, Sergey Senozhatsky <senozhatsky@chromium.org>, 
	John Ogness <john.ogness@linutronix.de>, "Paul E. McKenney" <paulmck@kernel.org>, 
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com, 
	Josh Triplett <josh@joshtriplett.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, 
	Lai Jiangshan <jiangshanlai@gmail.com>, Joel Fernandes <joel@joelfernandes.org>, 
	Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Dietmar Eggemann <dietmar.eggemann@arm.com>, Ben Segall <bsegall@google.com>, 
	Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>, vschneid@redhat.com, 
	jpoimboe@kernel.org, alpha <linux-alpha@vger.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, 
	"open list:SYNOPSYS ARC ARCHITECTURE" <linux-snps-arc@lists.infradead.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-omap <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org, 
	"open list:QUALCOMM HEXAGON..." <linux-hexagon@vger.kernel.org>, 
	"open list:IA64 (Itanium) PLATFORM" <linux-ia64@vger.kernel.org>, linux-m68k <linux-m68k@lists.linux-m68k.org>, 
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>, Openrisc <openrisc@lists.librecores.org>, 
	Parisc List <linux-parisc@vger.kernel.org>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	linux-riscv <linux-riscv@lists.infradead.org>, linux-s390 <linux-s390@vger.kernel.org>, 
	Linux-sh list <linux-sh@vger.kernel.org>, sparclinux <sparclinux@vger.kernel.org>, 
	linux-um <linux-um@lists.infradead.org>, linux-perf-users@vger.kernel.org, 
	"open list:DRM DRIVER FOR QEMU'S CIRRUS DEVICE" <virtualization@lists.linux-foundation.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	"open list:TENSILICA XTENSA PORT (xtensa)" <linux-xtensa@linux-xtensa.org>, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, Linux PM list <linux-pm@vger.kernel.org>, 
	linux-clk <linux-clk@vger.kernel.org>, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	"open list:TEGRA ARCHITECTURE SUPPORT" <linux-tegra@vger.kernel.org>, linux-arch <linux-arch@vger.kernel.org>, 
	rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Provags-ID: V03:K1:4jU4inRxbIK3YZV/EI0Jq7c1w+cBWepIpNcv7a5g9CAVNkikiKb
 xcIefAUSSDbZe9fnETUIBLrH/P/1DFCeasuj3P2CSClhs/obpVFFXyQ8KP+cH1mWwI9sxbt
 WyEm52osMQyq4rsYpXa5HZO05IFl2/JO42Q6KEqWu4F+TclkyQKN4EENOizX7E+Crg7A4Dd
 j+50z30Y0EdbHBe6S7m2Q==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:TZ/SzoOX1Aw=:2b4snShJDmaai2JDJp149R
 8FV8+Wr+seTO3BZs718tpDLR9SxzgG/RvKnywOfVdRL0BBgMkp+zbOY5Nj6O4l5qjeGmfZow9
 3rybvr47sXvN/HgjblHRbOR4p+lhwylZBbhruHwAAf88nVpIQeRDMWKacz64twYQAqEKZfNrb
 K393NZta2geV6RGJNbxBra2lfHi1KGdHeRV4a8RsdH3BYgWqjvXOLwfYLVKH34CLVhaVkgXbU
 wMkX07FWil2qBaS8McnhbF+ofHPWhuEbKw95RvkcPsCbhDc1fNMkrNxQl7tGCBqsWaJ7Yd94p
 foJ+9/TMnRZzd1j4OynDZGFC15qmstg1JJHJGFUEVTMGYcZHeFwv9Hjjbwsg9glq0pRVpwRsc
 h/jGPlLUKObZvQt+3o6kZgB4HwbEXLavhmksFPZxaV8wTVfnKB/VIXYI0dyN/yMPw+t8QGJTo
 GmbVjtDTPbZqaMfGctSszMaN77jA2n+em2fNTUcqeRYz7tV0J6U1AOWX2n3E3WK2na8WT4UUU
 VnFr5sbvX5viV4klFPqnEQqRLMKBCy8WI6NU1m9flzZC7bq9RG/ak9sbkCAMSrRofz9aGQqDk
 BL3NsJF6Al5ZowAlgDlfed5uWTR0QSRHkS7N5e7TpmhHk4OzGWgHDB3ZuAxAgbiSYB21Ua7rV
 YXKCnjukuL4yY9NBvCZnSyTIwraCLBuLmelrPNVXdbrGvIRfKBJf0vyVsnRyiaJdxwBbY2l5p
 Ejt0m1od1bLjojt5fBIbrXXU0rUHURjFMashSYFHVLJUxhnBZIKmmY9ChACgf+xeg53x2L/j4
 ytQFsOvUUaP1NfssOO1tWEPk6qB7+fRd9ev2jpMmSlsr/OHF0A5HASDdMY9cZGcB/tSn9dtz6
 MfLyYNKzUmi8Sk98iQgoPMyfugYuQZl7JQV4QDPG4EL+3VVn2URMoF6oStzqk0eSYf2Kd9mRT
 CuCzu01m6J+UAEaJxyauhf62VsSuGO2xNMcmdl2k7ci3a/M2j9MRj

On Wed, Jun 8, 2022 at 4:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> arch_cpu_idle() is a very simple idle interface and exposes only a
> single idle state and is expected to not require RCU and not do any
> tracing/instrumentation.
>
> As such, omap_sram_idle() is not a valid implementation. Replace it
> with the simple (shallow) omap3_do_wfi() call, leaving the more
> complicated idle states for the cpuidle driver.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

I see similar code in omap2:

omap2_pm_idle()
 -> omap2_enter_full_retention()
     -> omap2_sram_suspend()

Is that code path safe to use without RCU or does it need a similar change?

        Arnd


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 22:20:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 22:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344406.569932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz426-0008Cf-0h; Wed, 08 Jun 2022 22:20:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344406.569932; Wed, 08 Jun 2022 22:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz425-0008CY-U4; Wed, 08 Jun 2022 22:20:09 +0000
Received: by outflank-mailman (input) for mailman id 344406;
 Wed, 08 Jun 2022 22:20:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz424-0008CO-4m; Wed, 08 Jun 2022 22:20:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz424-0007r4-0n; Wed, 08 Jun 2022 22:20:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz423-0007Og-FJ; Wed, 08 Jun 2022 22:20:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nz423-00073L-Eu; Wed, 08 Jun 2022 22:20:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HC6TPpCqzDaOb3NBHppAYzgbOJ5Q74I9UlEySClzh/s=; b=p/VjdpcBZWGzJd199HR+8V6xNP
	P/o/LP8G7JYiX2FEk9Z52AM1Unh+Eid5DvCgBXOoezboNUtLl19wbHLob1a3GG1Rl5BJpo5Naf+ZN
	W1vU4fGNa39/Aq97iU4fZo9nPloim1eZ0wXxssTxapm49xmKO6owWEQIOT+lhdfDp/1g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170881: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=87257c76b9d58c1c52f3b30244ecfa20ce3eee72
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 22:20:07 +0000

flight 170881 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170881/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              87257c76b9d58c1c52f3b30244ecfa20ce3eee72
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  698 days
Failing since        151818  2020-07-11 04:18:52 Z  697 days  679 attempts
Testing same since   170881  2022-06-08 04:20:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112628 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 08 23:57:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jun 2022 23:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344418.569943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz5XQ-0001iT-7A; Wed, 08 Jun 2022 23:56:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344418.569943; Wed, 08 Jun 2022 23:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz5XQ-0001iM-3b; Wed, 08 Jun 2022 23:56:36 +0000
Received: by outflank-mailman (input) for mailman id 344418;
 Wed, 08 Jun 2022 23:56:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz5XO-0001iC-Sg; Wed, 08 Jun 2022 23:56:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz5XO-0001Cg-QB; Wed, 08 Jun 2022 23:56:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nz5XO-0003uu-GH; Wed, 08 Jun 2022 23:56:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nz5XO-00006d-Fk; Wed, 08 Jun 2022 23:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nCLJELJm0H7d4AATM8oBUKEAv05rxo5zIgXwyhdr+AQ=; b=wOgrv4CAimxGAaKgXftAx2HSVv
	pWzKNGCWNFSGr1Ret1Q+oTl12sDXpxPt8LAqxfI8KqxKoi2wV7GLbC9bBWNelTg3onN6Q4tL+lPIH
	ASyAUkYMzV0n9l5XHGwHUC8vrcdcBvk4QEfiJVmAtT0UJvVcXuyXpQdtvwanG/bp0JpI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170889: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f3185c165d28901c3222becfc8be547263c53745
X-Osstest-Versions-That:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jun 2022 23:56:34 +0000

flight 170889 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170889/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f3185c165d28901c3222becfc8be547263c53745
baseline version:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3

Last test of basis   170886  2022-06-08 14:02:52 Z    0 days
Testing same since   170889  2022-06-08 19:01:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7ac12e3634..f3185c165d  f3185c165d28901c3222becfc8be547263c53745 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:25:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 00:25:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344429.569954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz5zP-00062m-9M; Thu, 09 Jun 2022 00:25:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344429.569954; Thu, 09 Jun 2022 00:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz5zP-00062f-5r; Thu, 09 Jun 2022 00:25:31 +0000
Received: by outflank-mailman (input) for mailman id 344429;
 Thu, 09 Jun 2022 00:25:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oenM=WQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nz5zN-00062Z-Jp
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 00:25:29 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9a0e0f0-e78a-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 02:25:28 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 60CE2B82B8E;
 Thu,  9 Jun 2022 00:25:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6AAE9C34116;
 Thu,  9 Jun 2022 00:25:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9a0e0f0-e78a-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654734326;
	bh=24cm83TNnP6wGdoCnhsnsWwlJfgpn3O4hCfwVVooURg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=cksXGyTUAwv6aWK8GHioS4kfhYIwlIGb5rBSNmUv5m/ww3cSq36Eg5em0s2bKNqbf
	 QNnIP0sEf04aHvA3A/Gt9Ca6nRyy9nAFvvxoq+vateEl+l/cRnARFscCHQOyTffeSp
	 qpiSZM0AwuVZeXDbVHjaHfxT5UQrF4b2XeIEYMLqVyUqeWvm/Jj6TZEfQJSxzbVoDv
	 fzPTr9SJ6pNY9Ruz4V0jAfizCwhWZHuHRqmLu8maygroNWwoKyc6KCZklzvSVY1CIh
	 O+1u+d8QDWjfCm4a6L26WURKANVqcO/0mwWSMnZ4IRVTZEMeIET4WAGGbgf6azWnv3
	 hQ/CnEsLrXvhg==
Date: Wed, 8 Jun 2022 17:25:24 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Hongyan Xia <hongyxia@amazon.com>, Julien Grall <jgrall@amazon.com>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [PATCH 10/16] xen/arm: add Persistent Map (PMAP)
 infrastructure
In-Reply-To: <dec50384-5172-67b6-f4ac-a9c40d80a641@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206081725040.21215@ubuntu-linux-20-04-desktop>
References: <20220520120937.28925-1-julien@xen.org> <20220520120937.28925-11-julien@xen.org> <alpine.DEB.2.22.394.2206071806390.21215@ubuntu-linux-20-04-desktop> <dec50384-5172-67b6-f4ac-a9c40d80a641@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 8 Jun 2022, Julien Grall wrote:
> On 08/06/2022 02:08, Stefano Stabellini wrote:
> > > diff --git a/xen/arch/arm/include/asm/pmap.h
> > > b/xen/arch/arm/include/asm/pmap.h
> > > new file mode 100644
> > > index 000000000000..74398b4c4fe6
> > > --- /dev/null
> > > +++ b/xen/arch/arm/include/asm/pmap.h
> > > @@ -0,0 +1,32 @@
> > > +#ifndef __ASM_PMAP_H__
> > > +#define __ASM_PMAP_H__
> > > +
> > > +#include <xen/mm.h>
> > > +
> > > +#include <asm/fixmap.h>
> > > +
> > > +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> > > +{
> > > +    lpae_t *entry = &xen_fixmap[slot];
> > > +    lpae_t pte;
> > > +
> > > +    ASSERT(!lpae_is_valid(*entry));
> > > +
> > > +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> > > +    pte.pt.table = 1;
> > > +    write_pte(entry, pte);
> > 
> > Here we don't need a tlb flush because we never go from a valid mapping
> > to another valid mapping.
> 
> A TLB flush would not be sufficient here. You would need to follow the
> break-before-make sequence in order to replace a valid mapping with another
> valid mapping.
> 
> > We also go through arch_pmap_unmap which
> > clears the mapping and also flushes the tlb. Is that right?
> 
> The PMAP code is using a bitmap to know which entry is used. So when
> arch_pmap_map() is called, we also guarantee the entry will be invalid (hence
> the ASSERT(!lpae_is_valid())).
> 
> The bit in the bitmap will only be cleared by pmap_unmap(), which will result
> in a TLB flush.
> 
> > > +}
> > > +
> > > +static inline void arch_pmap_unmap(unsigned int slot)
> > > +{
> > > +    lpae_t pte = {};
> > > +
> > > +    write_pte(&xen_fixmap[slot], pte);
> > > +
> > > +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
> > > +}
> > > +
> > > +void arch_pmap_map_slot(unsigned int slot, mfn_t mfn);
> > > +void arch_pmap_clear_slot(void *ptr);
> > 
> > What are these two? They are not defined anywhere?
> 
> It is left-over. I will drop them.

With those two functions removed, you can add my 

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 00:59:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344441.569993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WQ-0001wy-2g; Thu, 09 Jun 2022 00:59:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344441.569993; Thu, 09 Jun 2022 00:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WP-0001w1-S2; Thu, 09 Jun 2022 00:59:37 +0000
Received: by outflank-mailman (input) for mailman id 344441;
 Thu, 09 Jun 2022 00:59:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RtT8=WQ=oracle.com=dongli.zhang@srs-se1.protection.inumbo.net>)
 id 1nz6WN-0001cl-VH
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 00:59:36 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6bdc2097-e78f-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 02:59:33 +0200 (CEST)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 258LWi9k005804;
 Thu, 9 Jun 2022 00:58:49 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gfyekhsey-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:49 +0000
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 2590u8Rw003540; Thu, 9 Jun 2022 00:58:48 GMT
Received: from nam12-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam12lp2169.outbound.protection.outlook.com [104.47.59.169])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com with ESMTP id
 3gfwua5eeu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:48 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by BYAPR10MB2488.namprd10.prod.outlook.com (2603:10b6:a02:b9::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 00:58:46 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9%3]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 00:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bdc2097-e78f-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : in-reply-to : references : content-type :
 mime-version; s=corp-2021-07-09;
 bh=fg5JBE/9uMN3vIcpEYAyJ0dIp33/fBmQIiC1iTsr1WI=;
 b=c8ecfA8I9303nu/0kGzmJIYtPOzcKd590kKlDyQrHDCr8JpfAvlj0+lybJYLbrC11QAC
 grlG6RfxSfiuxg4B0fQYmoZ714LR5cuJSDS+NUyoO55oAaD1Cy7q1xUiivKRTUaB4u6l
 1ychTytz6BbmcAFGk8ZOWsmWtn3EBkI07oH6tXSeOZlL6XMzOK5Lzyex+uZwT82ssde4
 4jGsQUYfHnqmBn0bRu/LpIGPN3/d/LwdrTdBCN//OKHMYBJ0PHsAkenQcXmX06gJ/zRB
 u/WGbSW0HjjoPhw2Jo3AkzV7S0cPJOWstCuWAKPhXyQ4l19GIae9YYGkOsv2Wn1aNZz0 sg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UaxlOS4FSWP+dY8KXuHtKJoZaZ0sy2JK0zEgDdp9/+1ipm/bPO3EQE5KEaHVAGeu5ICjvdNXVmm2w1gwcSRZgjduXewAOvQRhoNHFtLzpKupGdRgfruQskk5qhKKWM0Pj393MvVFWL2fL174lbIuYquGEKBZyqsfjSkWxfnM2KrrphzJ9QXlNRv8uKH4/GwQ6N6/0GYFkMLosxGVEfELcaQsP7Ia0LZE3n6tqVMSb3VBsseoChQenPSL0RLbKot3W2lG64/Cxefj+9fEA/hV3VPi/wJSHwB+c5FMbHS3wYFc3JYaO1InDbi7ncPjZQevSwcj7UNW1m+THaqmlIxyhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fg5JBE/9uMN3vIcpEYAyJ0dIp33/fBmQIiC1iTsr1WI=;
 b=fKLmA99xmJ0FTMfqsHuuH6sqfvQWQSLdbwiWtD1L+k9GKCP0OGwz0gEGIhcSGXTBBIuByTXArZg+cVyU+CxGQbXFriu+izrIb5xsItvafXQ2CarzDQPPAmopm7aCyNfSN423dLpQ3v5uS3/H3jZEqPyclLXKfqm2PWnQfHfCuW9Dsee6oOnk/QkWWzgH0eYyoDhJapegEnk3MG0p4zXwoGT6V7mCyttgWq7jfvv9Kg7fE1iA4NF4DVuhR3QfZSC1st/HpJfDfYnZ/sq8+osdxDSU4acsOwNC9eORu+yc7bJhT80vEH8njAiRRj9VQ0F7dSKcdqDmWYsGVdfbS/qF1w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fg5JBE/9uMN3vIcpEYAyJ0dIp33/fBmQIiC1iTsr1WI=;
 b=u+e8Yh1d4As1vyyiJemOOK4t02FrXNJv8mA4Y+RWupf5VkGizwVDoSIw+aH1Dq0SHnGuc+0/pp84az59LgrTKbErcBYQXq0VBJQQWlUHI5qZP1ri6uhTavFhVTkQeCnkRAnzf02vOZdR/BJ9EZM2e6U69vKCVrZTlNmpdU8RtSc=
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 6/7] virtio: use io_tlb_high_mem if it is active
Date: Wed,  8 Jun 2022 17:55:52 -0700
Message-Id: <20220609005553.30954-7-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
X-ClientProxiedBy: SJ0PR05CA0075.namprd05.prod.outlook.com
 (2603:10b6:a03:332::20) To BYAPR10MB2663.namprd10.prod.outlook.com
 (2603:10b6:a02:a9::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 68ae0a81-e49a-4ce9-8c6f-08da49b333e8
X-MS-TrafficTypeDiagnostic: BYAPR10MB2488:EE_
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB24882BFDD1E7717ABF485D1DF0A79@BYAPR10MB2488.namprd10.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	rF7S5TckleDeTqTJl6VVjAITpeCO04cKVpBXJL5No918B+UpPSWPDAvoDhIKUJj5kE4dkaJl1aBAB5nS3FwTdF4+pFTt1RedQ62bFE2qnEG73PuVKZT14ps7d0btJGHX+wTLbj8Rpdh4RBbDMQxo24TVWdyML2XjaHrSEcmJo4Kclqb3RBuOafbCOyJPWZ7aFz10aI3cEJ8x8bkZ1gbt95seAPYi2iXR+/dOQOiyJGeVSTRIlY9+c9tJ3l2r/ETWjl/+MAasU3caA3WuL8a1juyHyjoB9unljCMpsAhw9DSu0z0TUejKmao3vo7ivWCakw225d9iWlbQMB1jd/R2+xVW3XnQ0INVlVdx0yejRnH/GJPXAx/Cxg4OQ8xfmkXlyc+YEKuRXOe5CUy3CRPdWsr5m3o6m1XR7Ay0LjpoXArFH5v6yjHqgRBR+NnfBjs20p0Hin5YWDzA4ttFrHN25MnBSj/QRFT70dmd8haKEkSInWRki2Dg3KaAn5Nxmda2A5SrQpD+wwYjPsJRGStm033ClcU2iNGG6PoXD7rbhZIpCNM96V21pf80ZFYBZXqNj0pWRWTlaiB8GTngYPe/4QN8BN/i/GZoZsmqFxsKrsYVU3jX6Ckjh722kl+L/6ICZWGTjOtYenRUfcJD0E0dbbq1yWlIdbsRnTHPvBAC+LGFn3BiRnNcunWOc7iBHxEkJ9ZTzxLEC2E5yt3QzIBgGg==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2663.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(44832011)(36756003)(5660300002)(38100700002)(38350700002)(4326008)(66946007)(66556008)(66476007)(8676002)(316002)(6486002)(52116002)(26005)(6666004)(2906002)(107886003)(6512007)(86362001)(1076003)(186003)(83380400001)(7416002)(6506007)(8936002)(2616005)(508600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?OfE2mbk/d4ZjDpuW/3GlLsKLX4ZjEXJM2MW+bi0sMEAMQibcTwBr55VTBgcs?=
 =?us-ascii?Q?rS4Zdi4HO1HAhH4w7Yfm/Bq9ptWzNT81nqTPzVVTezeq3sc1WF90tEhqm/ew?=
 =?us-ascii?Q?xYE63zrApzRmTdqOtUb5cb0XjmMlhnBypX8zjlesPPGE5HzF8ecq89svW4RM?=
 =?us-ascii?Q?CUec66iyLeV+NA8fFf6eedhfnfcqwGimehfRxmjEQLWJaNsVvgMSbpuMHshp?=
 =?us-ascii?Q?Yiz+xTdyS1rv1DJCDPrfVcUtpnh/2yzmtMRKnA89HFng1iUmUxifk7E/gZzu?=
 =?us-ascii?Q?P2FagjQIbybNykoETQb4eKVQ8cyp6fOlu5enCExy5NtyaHodrJD9X2lav75l?=
 =?us-ascii?Q?944ZO5RfiiYcxlLaxxEGgfYCcOtnrYJs6H30b3q4Q+sExUSmbamRRl/2opB6?=
 =?us-ascii?Q?xP+Wea46t3Sz84QwQEi9y7QbgMrPE7ij5Wl4xMPZaFTiJiBb9AOfFQxiDjcP?=
 =?us-ascii?Q?aZJVuPZtPoAcIAfYxxlHswP8v145SpvnMUIWAxFo9ZR9U4WUTGH/fyZSqoc+?=
 =?us-ascii?Q?FRyx/0Xx/hQ5A8FBHH5XDJSpjgO5AgnEwzqAR1Bkc7d5d3xbADLuZwsXrTry?=
 =?us-ascii?Q?zo9wUdl3BvnyujTifsp8kdpmmV3adWsK0Wxvo8s3tcJHoxiRuXder16hWLqL?=
 =?us-ascii?Q?MI8XA8gmb0qsJvMdMMGmpiQGhmQv0uMR6M7W6qRiobFMS2x07H8vAumE//ST?=
 =?us-ascii?Q?+p04L0Z6/cK90jHduUzjDJfKw9WNfVS6YoC2ZoY8aHeUqMMpCKDmn+Bno1UW?=
 =?us-ascii?Q?tdK/hRbAXEQgOGs3VkfI/faPBiK9k77fyZ7N5Y5MCENxSPQA1qJlYMghAEMS?=
 =?us-ascii?Q?JiTxmufmPZoZCscb/I1wtszrZ3ClNSSGLUxKdIfxi4/6HyAyJ92LUSYui4C4?=
 =?us-ascii?Q?qtK1ID8U2XSs1ZVr7U7+XlwcpJgwdOd8opBaGNUC6NSipN5tiwRROKmb8+8T?=
 =?us-ascii?Q?vtEW83CbqnkTld3teVvmE9ah5mo98zVbcAGv4VF8jvQlV95jGNV4LdD0gdKG?=
 =?us-ascii?Q?k59TF1smnkFdWnt7uz+bNuOQkfcM+fFmREj9wdXt5AWM3UAwsv6mfVj0lUzo?=
 =?us-ascii?Q?98iLpw7PpAeH/QiuDMYVZK/e558FZcz3E9a+heD5ZoLN9dcJ+RWGqkXXsgEc?=
 =?us-ascii?Q?yq1c9aC0d4pHYV807nCgDw5WoLc+ElIE35AUyLuIkEQezbeMGy9cxnBSITVH?=
 =?us-ascii?Q?sizuYr5e6U1/ZYj5UIM8uo/YDoLVVy3f8FIlPcU8HefuYNvLr8TR0fPdXI4w?=
 =?us-ascii?Q?mLPkWGwHJt/3so8a39OCstGINse96PovSjkF2pwOfTCoYRAEeygoEVgmKv2k?=
 =?us-ascii?Q?d+3RP4qRqNJ/O2mgPAfdxLXIQXSdZt2HxhskT9tmpKN9IPE9nFaX3nixy7IE?=
 =?us-ascii?Q?QY69H7GNZupwMTlt3sQhEYiSfVPZj9bEyQzno8VCvKDaODqGjfOVtIj4e2y3?=
 =?us-ascii?Q?9yhWm/gVOlS1P+8X/RWt55ybWhBFsp6/bzFLvIGj3ieUfLrbzsjAJmYspR3v?=
 =?us-ascii?Q?cBYexcunEJw78dyU3vb+Z2L5DQXTzeEekYqGlhEwmVEBjIHuC2M8ULCJ7eut?=
 =?us-ascii?Q?u31PdLiMoX11iJ7hu5/wQfTYH8j46Iz+sDr67BHkZ+PjronOs6S7fpToYIPw?=
 =?us-ascii?Q?JloM9YYr0bxLYebQMqrSG+ACXMmxfaaWiTPPi5W3D7DoM7O+Kw/AZvkJxU+Y?=
 =?us-ascii?Q?dABYbuZL5llypjK3qk0PPvUegEZeJJd79r8wKGKZdVAPdAHPKxERyv9AEwER?=
 =?us-ascii?Q?yjcm2YFoZlxeTQlkEridJYLjIBaji9I=3D?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68ae0a81-e49a-4ce9-8c6f-08da49b333e8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2663.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 00:58:44.6448
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U8tCX3jI++97fFdKG73OhS/Bsx2w1/ExREvbONLR5S+W7c4dp+9Sa8GIf27TNsZEKgUGIUtn9y4XcoULFtUbpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB2488
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.517,18.0.874
 definitions=2022-06-08_04:2022-06-07,2022-06-08 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 mlxlogscore=999
 adultscore=0 bulkscore=0 mlxscore=0 suspectscore=0 malwarescore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2204290000 definitions=main-2206090001
X-Proofpoint-GUID: pu1VdrVJqh7XmXDZttnPoU3K2016-zGU
X-Proofpoint-ORIG-GUID: pu1VdrVJqh7XmXDZttnPoU3K2016-zGU

When swiotlb is enforced (e.g., when AMD SEV is involved), the virtio
driver is not able to use memory beyond 4GB. Therefore, have the virtio
driver use 'io_tlb_high_mem' as its swiotlb.

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/virtio/virtio.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index ef04a96942bf..d9ebe3940e2d 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -5,6 +5,8 @@
 #include <linux/module.h>
 #include <linux/idr.h>
 #include <linux/of.h>
+#include <linux/swiotlb.h>
+#include <linux/dma-mapping.h>
 #include <uapi/linux/virtio_ids.h>
 
 /* Unique numbering for virtio devices. */
@@ -241,6 +243,12 @@ static int virtio_dev_probe(struct device *_d)
 	u64 device_features;
 	u64 driver_features;
 	u64 driver_features_legacy;
+	struct device *parent = dev->dev.parent;
+	u64 dma_mask = min_not_zero(*parent->dma_mask,
+				    parent->bus_dma_limit);
+
+	if (dma_mask == DMA_BIT_MASK(64))
+		swiotlb_use_high(parent);
 
 	/* We have a driver! */
 	virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 00:59:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344443.570020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WT-0002ia-P0; Thu, 09 Jun 2022 00:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344443.570020; Thu, 09 Jun 2022 00:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WT-0002hV-Jk; Thu, 09 Jun 2022 00:59:41 +0000
Received: by outflank-mailman (input) for mailman id 344443;
 Thu, 09 Jun 2022 00:59:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RtT8=WQ=oracle.com=dongli.zhang@srs-se1.protection.inumbo.net>)
 id 1nz6WS-0001NE-3A
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 00:59:40 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f3dbcb8-e78f-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 02:59:38 +0200 (CEST)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 258LaE84005815;
 Thu, 9 Jun 2022 00:58:47 GMT
Received: from phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta02.appoci.oracle.com [147.154.114.232])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gfyekhsew-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:47 +0000
Received: from pps.filterd
 (phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 2590ub2w032517; Thu, 9 Jun 2022 00:58:46 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2049.outbound.protection.outlook.com [104.47.66.49])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com with ESMTP id
 3gfwu433k8-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:46 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by BN0PR10MB5126.namprd10.prod.outlook.com (2603:10b6:408:129::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 00:58:43 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9%3]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 00:58:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f3dbcb8-e78f-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : in-reply-to : references : content-type :
 mime-version; s=corp-2021-07-09;
 bh=XtvlHDdZh3gD4230vY7FcyRi2a6AYz+OtBd4ItiFTG4=;
 b=eJSPJ8HHx9J6N9zOTSz6dFfNp9L1IsrA4/WXY/tUAFTIngJnbmyZjPYbe9Z1hIWlJhBU
 53+G4q3kQcYmnbiZwTfQrvx8hiPzgCQYoZwxYRiGTIusN7MjsN0cWNmFHu2n1BpQRRlv
 GJ5L4tkoQ760fLOe6vRXzkHujsM4BmcavFzsKpqp9/5uqrpqsp3lDGtBt2xOphXQcQKU
 WmX/LxAY40IZi1+8EaWMdoohqvnk44Fysh1nF6CPyEswnMc9JrGBzzFAYmrV/cBP7Jvl
 S0fQjZQqvWRb9vIPE+MKiSCBJtUBD78h//w6hFgfgQxWajFqf1YIdl3VheqrDN57iVf0 1w== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=neEi6gW5dVlPDc8Abw2PHApm4mkg58z6NDgBPF6R0QJFJ4KpG96uVc6P8j7lLU76YTj+ci46Rd65Ba8hff6aE8YNonG9xWftFdwf9BmjA9UOEFO7NyUoFhsbkYzGcyq+nfzMJ9rTFCFguHdImE0qP/bMpSr1zMv1cGV2j8IpkUgRyxEwnCGnumcEIC7y9ufHhGTTIjkq4+w+7HhLCWVlbeVycX3434+9XDVrY5NPQnEDRmPWYSBBStm6NVQ0dl0Ypi4SMYB6HndHjJzn76oVRI3stuEghYb1tUBxB3xkaFw5HIjmkMK8OjGkIaonNZAQwFoqZdcvQVnG8UV05kD19w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XtvlHDdZh3gD4230vY7FcyRi2a6AYz+OtBd4ItiFTG4=;
 b=Xu6juY4on7MX5uideiigQaNrlT7hIJLVAB0ZYvALl4l/Dte2HDePMNm7Mi91KiMUPMEVzKFUC7nKnisLpiM46iHvusYWhgUE39PhMEzD8m6w6s4t0Y8HeHfCM1xNz/SLIKmtITanvczQ36Euex9KgLfbVHp6Prwv0jA8vnSimvEjH4qzYUdHUpezYMcGDQp6D/iJgvkQKdB1TxAVv6APayBs7Cme7XtCIcOKlwQ5v7re7A3TTQIA756UoqSkNSH3sUdoUeWsEm38NsiP6h2qwf/7RlWwISVZDhMRXlpM8mxHJf9UelpXGW0SrYqk2wW6Z23x2N1yyrzRD8+TarUdTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XtvlHDdZh3gD4230vY7FcyRi2a6AYz+OtBd4ItiFTG4=;
 b=UtNPpVY1ACRpgMmRrmRcjBu5VwUBRbxUqNex05R5ujyAi7KxnzvR9QVGglpNfXeHKxjtmC6ahJ+ykjHmFIPEYsAV4itK9mQbXfvyv7zysuiFqk973Q1fsRyjFg0TUVcKSHuPHGKi81xqZfWODnBfSjK5GrxMkggoDDzGmNztkx0=
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 1/7] swiotlb: introduce the highmem swiotlb buffer
Date: Wed,  8 Jun 2022 17:55:47 -0700
Message-Id: <20220609005553.30954-2-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
X-ClientProxiedBy: SJ0PR05CA0075.namprd05.prod.outlook.com
 (2603:10b6:a03:332::20) To BYAPR10MB2663.namprd10.prod.outlook.com
 (2603:10b6:a02:a9::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0283aaf6-5349-4a61-c529-08da49b332d8
X-MS-TrafficTypeDiagnostic: BN0PR10MB5126:EE_
X-Microsoft-Antispam-PRVS: 
	<BN0PR10MB5126CF1B1C76AC85950DD2C4F0A79@BN0PR10MB5126.namprd10.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	yPsJ0mcCuuSfLdYeZdr85LUR1IrndxSqKmXjvE/QCfV3Glv4mFXezwYdKQsj445Cz72sMs8oWf9USVgd7pxiIKmd+V/r7ROA+oMAUC5prZoBw0JfIcteSBEeU37bLC/2wWfhOPwIWs7RzefRrNQ2kaQyl2AnoteVqv/y5ZCuYkNpFHYBe0wsOkeYk3pR2M71eu7lcFdKaR6sbYsbthQdr9x3YE2IHX7Hs5xLdTKUiWN6hD3pX+ZM4nzNxz2JzxgjxtPt9/EYg4sm9xpgb8NwLJhpuw+HIUenzhdM/J06ImbJ19ipRMjewIGkHoOumAL5tMoWHOAE4bwkGKLUFn9o7MoNxUgFyQYj3TeMgX2HnL8xNi7oio4kTtbWY/JGams1cI/DZQoE59dr4ey6dszjwkSuxTxWXtq44Rq3NZWfyEFJQJbeYau4LS7Q0eny01ZljqSCYCEPcGZPPYBi4/0j+PFlWob+T9GW+EH9HGDGAqlXkXzKX5ciQM3Ej3zE2bYINdU2IeroynBKQlaUk4cOHRjyju0LjsVQZKcqzz34we+J3Rogj9rK6dBMOdXSdtILerV2pUt6u+6rUlXmMHqks3Hn1mCeGMhuVumjFoz/3HP1/whFOOG4jk7S3Zgx2mg345noWyszIbzBzzBzRrNuMCkCBVPCBcw+yZaAZrYLdgGsCu3THNL98nx/2vBcsNq75FBuEXb/GqaE1GjJWSuw6g==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2663.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(8936002)(6486002)(5660300002)(186003)(66556008)(508600001)(4326008)(66946007)(7416002)(66476007)(2616005)(8676002)(107886003)(1076003)(2906002)(38100700002)(38350700002)(26005)(316002)(6512007)(36756003)(86362001)(44832011)(6506007)(52116002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?OqkleaNu0ccMHkAYb5Iyd995KHh8covlVc3N71W+JHTtwXDk9T77taMWjGte?=
 =?us-ascii?Q?jkVl7Uu8L/egAwuuzNUMxBR8risBPQvWjmP1jwzkER42SNQK6SkMlVB7w9dU?=
 =?us-ascii?Q?8Umhl/3i1jNaqFp4s6TFNkBNQ+8spMfS3anOpFwuS1rZJ6k52KLgqxltuzGW?=
 =?us-ascii?Q?48MkHUxBhcsD91dfYJ+UjSNdj+bzcap7rJS/4DL42akjnCvEZLy9WbTh8i1/?=
 =?us-ascii?Q?V1KIY5fDzvfwYpbIo/E/08R2ghFHu4jQdYm9MGJowujN5DAyuH/QfAaPNrKD?=
 =?us-ascii?Q?AwuV/YIVu3jdXCRj+g/gNypKhzYVXNizqX/moA6m5of63SFGJgM9Y38WxSin?=
 =?us-ascii?Q?dQUuHvAHxi1Jqeon1VGwd0T9CCPjzh2mBUe46HIK6kjIZo2v1jbz+48ZGIg9?=
 =?us-ascii?Q?RMR3s7D4176RS6SOCdmQpRdCkhBbk4EwbMzfFutldKsv6nv63q0afDXw25te?=
 =?us-ascii?Q?rxQrA+LLWEdLfMnexiShlPr8jpCAVrpoEijJFwGXUSZ3ce/6uMckX+xHGw2f?=
 =?us-ascii?Q?sY3tE5MIPsWZ7dmFNoeJZ9GBqIyJQwleGpVr6wQxhmDmKV8NTGFWXv8jRg3H?=
 =?us-ascii?Q?WNG7PU3ueNRZBVyG6l8sovU5ksgRQM5iWDR+D8cYEO8qq39ffmLBaaWhdQz+?=
 =?us-ascii?Q?4rdiFotxbazqyrrEIwj3H4YvNLv8LumBMm8ml/U00CZLnysRzJYyofTvybOW?=
 =?us-ascii?Q?f3BS4P8oq4Vq0nVi4zRxE82kSigtjxe+pYj/yS289DOU2utHNpAvM9XpA5mt?=
 =?us-ascii?Q?/aAL8IWU2DTVKazq+MHY5y0sN3i4ReCr5V8iDo/i2AArDv0dIiIIsvS6f4pT?=
 =?us-ascii?Q?RHWgG98xVXrksreTymj0s0zUzF1vZTZ+pKoiLSFvePT5pfEg180oFlozn9IZ?=
 =?us-ascii?Q?RAsDVgtRdZy0T6CVZw3vPHd8i9U1vqpYfjLhyjZb4wbCzCE+5XwkQl/dgH6F?=
 =?us-ascii?Q?LYMAOt63A3HqMcKliCkIBm7z8n/OveU5c+tFR+BegYW85FkPMvgJgst1xb1l?=
 =?us-ascii?Q?wT7m2wQNDczMr3qQNpEFEWyBiwWatt7VnoQNXzmjZgiiJ/Q4p9z5PmmX4CWm?=
 =?us-ascii?Q?UT+R6mGgU7coMk7nxmxArlImO1SfQ6RVM/xaNvbxKwgOevUmCWhZJlPv4QDN?=
 =?us-ascii?Q?hHJenoFlUswDfwxxYoyydYI2jfaDTUInxKfMDGiHFMnQle42nyje3/1kjeE1?=
 =?us-ascii?Q?Wt2YiiZu6XFbkmPbLyw0XrpbLfM1CZWF7kp2PFuJVxKcobbaOTkdOi6LMFWL?=
 =?us-ascii?Q?R73nkEh7mRxCRThVmNVk7tkn3knij2dRiHeA4Xv9fdU7/SwJg5fVZCLRTnAY?=
 =?us-ascii?Q?J+J56HMPI6nRCDhZfouIoG2Sy8NcQt/2+6XCL70FuDAhauIsw28d9hb60ChE?=
 =?us-ascii?Q?+XK7HPKqDXU2agHClo2l1xi2BEhtl9DTTtwlrlLrFPoxwyb/kUheVp3vPHAg?=
 =?us-ascii?Q?Mrr3rj0htNQcfLT+9UVNrdrzxprl1yZekIzuXSwkmzqUFtmVp7K0Z00R9MHm?=
 =?us-ascii?Q?RjlV8ly5vjBlSkPxJKrjZ8zU6NigNF0/obFc1kHVa1Dh+wHaKw9oJ629f5/M?=
 =?us-ascii?Q?Jf2oZwICcxPMc5HOBVtiH4bxrHSJDg1lnWZTpfncgzUFfBU8sEgOTo4jhENJ?=
 =?us-ascii?Q?M9jTgp6rbnrciG7JcAcA6jyB+6v1PskzB1B0If5gQLFhgG3D36maN+vw+G0m?=
 =?us-ascii?Q?TlSrr2KVWiNMhvKCIi3gorpTPSm5rqaPjHQUqeK9TsJR4xTz2TwEt3jibXem?=
 =?us-ascii?Q?72E70f07mcHTcwtpDjfTPIQnYK4S0+Y=3D?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0283aaf6-5349-4a61-c529-08da49b332d8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2663.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 00:58:42.8637
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6qGMG3KJcVcpGVdNQeNdjoOh0/Ul5HGLy4Lj9qvJIJO+iV3Z8Cd3/6La03DSQPhWEx6oyqf4xUWaHv7lwO7/pg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN0PR10MB5126
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.517,18.0.874
 definitions=2022-06-08_04:2022-06-07,2022-06-08 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 spamscore=0
 adultscore=0 suspectscore=0 malwarescore=0 phishscore=0 bulkscore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2204290000 definitions=main-2206090001
X-Proofpoint-GUID: EzEjAVIO1vtQw7P_H2VxyecUKO4JRZzM
X-Proofpoint-ORIG-GUID: EzEjAVIO1vtQw7P_H2VxyecUKO4JRZzM

Currently, the virtio driver is not able to use memory beyond 4GB when
swiotlb is enforced, e.g., when AMD SEV is involved.

Fortunately, the SWIOTLB_ANY flag was introduced by
commit 8ba2ed1be90f ("swiotlb: add a SWIOTLB_ANY flag to lift the low
memory restriction") to allow allocating the swiotlb buffer from high
memory.

While the default swiotlb is 'io_tlb_default_mem', an extra
'io_tlb_high_mem' is introduced here, which future patches will allocate
with the SWIOTLB_ANY flag. E.g., the user may configure the extra highmem
swiotlb buffer via "swiotlb=327680,4194304" to allocate 8GB of memory for
it.

In the future, the driver will be able to decide whether to use
'io_tlb_default_mem' or 'io_tlb_high_mem'.

The highmem swiotlb is enabled by the user by specifying a second slab
count on the kernel command line, and it can be actively used once
swiotlb_high_active() returns true.

For example, the kernel command line "swiotlb=32768,3145728,force"
allocates 64MB for the default swiotlb and 6GB for the extra highmem
swiotlb.

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 include/linux/swiotlb.h |  2 ++
 kernel/dma/swiotlb.c    | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 7ed35dd3de6e..e67e605af2dd 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -109,6 +109,7 @@ struct io_tlb_mem {
 	} *slots;
 };
 extern struct io_tlb_mem io_tlb_default_mem;
+extern struct io_tlb_mem io_tlb_high_mem;
 
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
@@ -164,6 +165,7 @@ static inline void swiotlb_adjust_size(unsigned long size)
 }
 #endif /* CONFIG_SWIOTLB */
 
+extern bool swiotlb_high_active(void);
 extern void swiotlb_print_info(void);
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index cb50f8d38360..569bc30e7b7a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -66,10 +66,12 @@ static bool swiotlb_force_bounce;
 static bool swiotlb_force_disable;
 
 struct io_tlb_mem io_tlb_default_mem;
+struct io_tlb_mem io_tlb_high_mem;
 
 phys_addr_t swiotlb_unencrypted_base;
 
 static unsigned long default_nslabs = IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT;
+static unsigned long high_nslabs;
 
 static int __init
 setup_io_tlb_npages(char *str)
@@ -81,6 +83,15 @@ setup_io_tlb_npages(char *str)
 	}
 	if (*str == ',')
 		++str;
+
+	if (isdigit(*str)) {
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		high_nslabs =
+			ALIGN(simple_strtoul(str, &str, 0), IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+
 	if (!strcmp(str, "force"))
 		swiotlb_force_bounce = true;
 	else if (!strcmp(str, "noforce"))
@@ -90,6 +101,11 @@ setup_io_tlb_npages(char *str)
 }
 early_param("swiotlb", setup_io_tlb_npages);
 
+bool swiotlb_high_active(void)
+{
+	return high_nslabs && io_tlb_high_mem.nslabs;
+}
+
 unsigned int swiotlb_max_segment(void)
 {
 	if (!io_tlb_default_mem.nslabs)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 00:59:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344442.570009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WS-0002Rc-E9; Thu, 09 Jun 2022 00:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344442.570009; Thu, 09 Jun 2022 00:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz6WS-0002RO-9z; Thu, 09 Jun 2022 00:59:40 +0000
Received: by outflank-mailman (input) for mailman id 344442;
 Thu, 09 Jun 2022 00:59:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RtT8=WQ=oracle.com=dongli.zhang@srs-se1.protection.inumbo.net>)
 id 1nz6WR-0001NE-2t
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 00:59:39 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f3d29a1-e78f-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 02:59:38 +0200 (CEST)
Received: from pps.filterd (m0246632.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 258LSC3N017882;
 Thu, 9 Jun 2022 00:58:50 GMT
Received: from phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta02.appoci.oracle.com [147.154.114.232])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ghexefa63-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:50 +0000
Received: from pps.filterd
 (phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 2590ub33032517; Thu, 9 Jun 2022 00:58:49 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2049.outbound.protection.outlook.com [104.47.66.49])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com with ESMTP id
 3gfwu433k8-7
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 09 Jun 2022 00:58:49 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com (2603:10b6:a02:a9::20)
 by BN0PR10MB5126.namprd10.prod.outlook.com (2603:10b6:408:129::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 00:58:46 +0000
Received: from BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9]) by BYAPR10MB2663.namprd10.prod.outlook.com
 ([fe80::7081:e264:cc58:37b9%3]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 00:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f3d29a1-e78f-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc :
 subject : date : message-id : in-reply-to : references : content-type :
 mime-version; s=corp-2021-07-09;
 bh=6xdMcHGrZJvq1N3Xs9DS/AuUbrlxyN/JsBYOw/WhJRg=;
 b=t6xvKWaUO1Cb0ACv7O02JNBKWpGQyRAYaff/9fxmix1NDt/jq8NGujUtmuPavqtyrum3
 61g0bZzyTMIaO6JROS9hBKDXKiWJ89lItnvbssDhRA1WibklKPZ+xDd/uGbAQQq8nPgq
 LDnvShp5gIqP7Vs1eXsbyphCOE5Hslnk8GjOwUeOLRpnuIfBlBgo9jyAm0ksr3bBYIAc
 jVindzTJHJGCX2KArbxTZU6+GjOlC8eoyXJFRWv5QvsJnmFfNt7HPikFhYLDIZ5S1Bgk
 oENEGNVWEyoCic4H+Uk/AWeRDyXxOJh6plef058kgU1srT3p1UUIKPzyRuQZA2OsnIdj IQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=juVTivBgEKe7m9kue76MVZTdZz5I+lX5bC3X1Dwiev8pMdPfD/lPoPq7i70s+l2AiDQkGz4jutHa87Y98v2KjGfj2ICGGe2cZUN3KssJmb21PVOjoCxiMRyjXawcpQxN2M+eX5Jv6VYQzbC3xTmWXUMMrljq6QFxUbKEK1oBg1WbZ36hxudQ5hmNmJ5k01wRx3/zNQ7DEgud31mJFWTUtlMy1W5j5tA3WoqwaDFuL5ERuc8swc23L2YE+XVChddl31WEi1Kb8xlqZAUjWnhCLaHxXBaf64JXPHcn3SjE1/+dU41gZ7a+SpklOOZy44jh+P/YtaJgfWWHd1jRak2ukQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6xdMcHGrZJvq1N3Xs9DS/AuUbrlxyN/JsBYOw/WhJRg=;
 b=UOrm/8bzwY6vzTqms06n9LxCxUm0IvSQYGX2ohMKLoNGTkS8yIBhn99g9T/kEhZUL/+TeOTRi7vRKX1AMwXT2WN7HsbNEGEyEhx/gVMFok24s1PivrEdEISEgjOIYzadq9UtDy7lXf9X4XWgdVFha/4jMUof/EBgmOr+ABUopPGxZtWV0JIF+hfh/WZ3s3593D2PmASjJh5uAnFxm4DpMmWLtSIVDp28OoF3sMeZBaTARFIPKMrLdB5rTK6mc9jZtDdECKJwLxIbWOXJ6jlOs5Izh3WQKQm7sFKmxTRsluEWeIm9X90+3mzzFUWttyG7GIsmZeBWV0DedinpoJ/0rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6xdMcHGrZJvq1N3Xs9DS/AuUbrlxyN/JsBYOw/WhJRg=;
 b=Vfuabbv9gyNMxOOTQHPlq6zEnEPwT/NoDVlapjk4vEFzTw/JUnh7f1GLHNZaytGKdDZx8OAutG2dw4e6wW7apwMY9kJ2t2NGyQF+yyhzoTnNvYR0kAtBc06IYVVmAPOEqCs4TvGvcFEHnNiKYubHeKcYFue+ifvEXq9OhXGauig=
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 7/7] swiotlb: fix the slot_addr() overflow
Date: Wed,  8 Jun 2022 17:55:53 -0700
Message-Id: <20220609005553.30954-8-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

Since the swiotlb slot index is a signed integer, "((idx) << IO_TLB_SHIFT)"
is evaluated in (signed) int and overflows for large indexes. The negative
intermediate value is then sign-extended when added to 'start', so
slot_addr() returns a value smaller than the expected one.

E.g., the 'tlb_addr' computed by swiotlb_tbl_map_single() may be smaller
than the expected one. As a result, swiotlb_bounce() will access the wrong
swiotlb slot.

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 kernel/dma/swiotlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0dcdd25ea95d..c64e557de55c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -531,7 +531,8 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
 	}
 }
 
-#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+#define slot_addr(start, idx)	((start) + \
+				((unsigned long)(idx) << IO_TLB_SHIFT))
 
 /*
  * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:43 2022
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 5/7] swiotlb: add interface to set dev->dma_io_tlb_mem
Date: Wed,  8 Jun 2022 17:55:51 -0700
Message-Id: <20220609005553.30954-6-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

The new interface re-configures dev->dma_io_tlb_mem to point at
'io_tlb_high_mem', but only when the high (64-bit) swiotlb buffer is
active.

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 include/linux/swiotlb.h |  6 ++++++
 kernel/dma/swiotlb.c    | 10 ++++++++++
 2 files changed, 16 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8196bf961aab..78217d8bbee2 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -131,6 +131,7 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+bool swiotlb_use_high(struct device *dev);
 #else
 static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
 {
@@ -163,6 +164,11 @@ static inline bool is_swiotlb_active(struct device *dev)
 static inline void swiotlb_adjust_size(unsigned long size)
 {
 }
+
+static inline bool swiotlb_use_high(struct device *dev)
+{
+	return false;
+}
 #endif /* CONFIG_SWIOTLB */
 
 extern bool swiotlb_high_active(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ff82b281ce01..0dcdd25ea95d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -121,6 +121,16 @@ bool swiotlb_high_active(void)
 	return high_nslabs && io_tlb_high_mem.nslabs;
 }
 
+bool swiotlb_use_high(struct device *dev)
+{
+	if (!swiotlb_high_active())
+		return false;
+
+	dev->dma_io_tlb_mem = &io_tlb_high_mem;
+	return true;
+}
+EXPORT_SYMBOL_GPL(swiotlb_use_high);
+
 unsigned int swiotlb_max_segment(void)
 {
 	if (!io_tlb_default_mem.nslabs)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:43 2022
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 2/7] swiotlb: change the signature of remap function
Date: Wed,  8 Jun 2022 17:55:48 -0700
Message-Id: <20220609005553.30954-3-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

Add a new argument 'high' to the remap function, so that it can remap the
swiotlb buffer according to whether the buffer being set up is the 32-bit
(default) or the 64-bit (high) one.

Currently the only remap function is xen_swiotlb_fixup().

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 arch/x86/include/asm/xen/swiotlb-xen.h | 2 +-
 drivers/xen/swiotlb-xen.c              | 2 +-
 include/linux/swiotlb.h                | 4 ++--
 kernel/dma/swiotlb.c                   | 8 ++++----
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 77a2d19cc990..a54eae15605e 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -8,7 +8,7 @@ extern int pci_xen_swiotlb_init_late(void);
 static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
 #endif
 
-int xen_swiotlb_fixup(void *buf, unsigned long nslabs);
+int xen_swiotlb_fixup(void *buf, unsigned long nslabs, bool high);
 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				unsigned int address_bits,
 				dma_addr_t *dma_handle);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 67aa74d20162..339f46e21053 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -104,7 +104,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 }
 
 #ifdef CONFIG_X86
-int xen_swiotlb_fixup(void *buf, unsigned long nslabs)
+int xen_swiotlb_fixup(void *buf, unsigned long nslabs, bool high)
 {
 	int rc;
 	unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index e67e605af2dd..e61c074c55eb 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -36,9 +36,9 @@ struct scatterlist;
 
 unsigned long swiotlb_size_or_default(void);
 void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
-	int (*remap)(void *tlb, unsigned long nslabs));
+	int (*remap)(void *tlb, unsigned long nslabs, bool high));
 int swiotlb_init_late(size_t size, gfp_t gfp_mask,
-	int (*remap)(void *tlb, unsigned long nslabs));
+	int (*remap)(void *tlb, unsigned long nslabs, bool high));
 extern void __init swiotlb_update_mem_attributes(void);
 
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 569bc30e7b7a..7988883ca7f9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -245,7 +245,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
  * structures for the software IO TLB used to implement the DMA API.
  */
 void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
-		int (*remap)(void *tlb, unsigned long nslabs))
+		int (*remap)(void *tlb, unsigned long nslabs, bool high))
 {
 	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	unsigned long nslabs = default_nslabs;
@@ -274,7 +274,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		return;
 	}
 
-	if (remap && remap(tlb, nslabs) < 0) {
+	if (remap && remap(tlb, nslabs, false) < 0) {
 		memblock_free(tlb, PAGE_ALIGN(bytes));
 
 		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
@@ -307,7 +307,7 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags)
  * This should be just like above, but with some error catching.
  */
 int swiotlb_init_late(size_t size, gfp_t gfp_mask,
-		int (*remap)(void *tlb, unsigned long nslabs))
+		int (*remap)(void *tlb, unsigned long nslabs, bool high))
 {
 	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
@@ -337,7 +337,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 		return -ENOMEM;
 
 	if (remap)
-		rc = remap(vstart, nslabs);
+		rc = remap(vstart, nslabs, false);
 	if (rc) {
 		free_pages((unsigned long)vstart, order);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:44 2022
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 3/7] swiotlb-xen: support highmem for xen specific code
Date: Wed,  8 Jun 2022 17:55:49 -0700
Message-Id: <20220609005553.30954-4-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

While swiotlb-xen relies on the generic swiotlb API to initialize and
use the swiotlb most of the time, this patch adds highmem swiotlb
support to the swiotlb-xen specific code.

E.g., xen_swiotlb_fixup() may request that the hypervisor provide
64-bit memory pages as the swiotlb buffer.
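The widening retry loop in xen_swiotlb_fixup() can be sketched stand-alone as follows; fake_create_region() is a hypothetical stand-in for xen_create_contiguous_region(), mimicking a hypervisor that rejects requests narrower than the region actually needs:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_DMA32_BITS 32
#define MAX_DMA64_BITS 64

/* Hypothetical stand-in: succeed only once the requested address width
 * reaches 'needed_bits'. */
static int fake_create_region(unsigned int dma_bits, unsigned int needed_bits)
{
	return dma_bits >= needed_bits ? 0 : -1;
}

/* Mirrors the loop in xen_swiotlb_fixup(): for a high (64-bit) buffer
 * both the starting and the maximum address width are 64 bits;
 * otherwise widen from the order-derived width up to 32 bits. */
static int try_remap(unsigned int start_bits, bool high,
		     unsigned int needed_bits)
{
	unsigned int dma_bits = high ? MAX_DMA64_BITS : start_bits;
	unsigned int max_dma_bits = high ? MAX_DMA64_BITS : MAX_DMA32_BITS;
	int rc;

	do {
		rc = fake_create_region(dma_bits, needed_bits);
	} while (rc && dma_bits++ < max_dma_bits);

	return rc;
}
```

A low-memory buffer thus never asks the hypervisor for more than 32 address bits, while a high buffer goes straight to 64.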

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/xen/swiotlb-xen.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 339f46e21053..d15321e9f9db 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -38,7 +38,8 @@
 #include <asm/dma-mapping.h>
 
 #include <trace/events/swiotlb.h>
-#define MAX_DMA_BITS 32
+#define MAX_DMA32_BITS 32
+#define MAX_DMA64_BITS 64
 
 /*
  * Quick lookup value of the bus address of the IOTLB.
@@ -109,19 +110,25 @@ int xen_swiotlb_fixup(void *buf, unsigned long nslabs, bool high)
 	int rc;
 	unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
 	unsigned int i, dma_bits = order + PAGE_SHIFT;
+	unsigned int max_dma_bits = MAX_DMA32_BITS;
 	dma_addr_t dma_handle;
 	phys_addr_t p = virt_to_phys(buf);
 
 	BUILD_BUG_ON(IO_TLB_SEGSIZE & (IO_TLB_SEGSIZE - 1));
 	BUG_ON(nslabs % IO_TLB_SEGSIZE);
 
+	if (high) {
+		dma_bits = MAX_DMA64_BITS;
+		max_dma_bits = MAX_DMA64_BITS;
+	}
+
 	i = 0;
 	do {
 		do {
 			rc = xen_create_contiguous_region(
 				p + (i << IO_TLB_SHIFT), order,
 				dma_bits, &dma_handle);
-		} while (rc && dma_bits++ < MAX_DMA_BITS);
+		} while (rc && dma_bits++ < max_dma_bits);
 		if (rc)
 			return rc;
 
@@ -381,7 +388,8 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask;
+	return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask ||
+	       xen_phys_to_dma(hwdev, io_tlb_high_mem.end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 00:59:44 2022
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 4/7] swiotlb: to implement io_tlb_high_mem
Date: Wed,  8 Jun 2022 17:55:50 -0700
Message-Id: <20220609005553.30954-5-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220609005553.30954-1-dongli.zhang@oracle.com>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
Content-Type: text/plain
MIME-Version: 1.0

This patch implements the extra 'io_tlb_high_mem'. In the future,
device drivers may choose to use either 'io_tlb_default_mem' or
'io_tlb_high_mem' as dev->dma_io_tlb_mem.

The highmem buffer is regarded as active if
(high_nslabs && io_tlb_high_mem.nslabs) is true.
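The pool selection and the activity test can be modeled stand-alone (toy_io_tlb_mem is a stand-in for the kernel's struct io_tlb_mem):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel's io_tlb_default_mem / io_tlb_high_mem. */
struct toy_io_tlb_mem {
	unsigned long nslabs;
};

static struct toy_io_tlb_mem io_tlb_default_mem = { .nslabs = 1024 };
static struct toy_io_tlb_mem io_tlb_high_mem;
static unsigned long high_nslabs;

/* Mirrors io_tlb_mem_get() from the patch: pick the pool by the
 * 'high' flag. */
static struct toy_io_tlb_mem *io_tlb_mem_get(bool high)
{
	return high ? &io_tlb_high_mem : &io_tlb_default_mem;
}

/* Mirrors swiotlb_high_active(): the high buffer counts as active only
 * when it was both requested (high_nslabs set on the command line) and
 * actually allocated (io_tlb_high_mem.nslabs nonzero). */
static bool swiotlb_high_active(void)
{
	return high_nslabs && io_tlb_high_mem.nslabs;
}
```

Callers such as swiotlb_print_info(true) only touch the high pool after swiotlb_high_active() confirms both conditions hold.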

Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 arch/powerpc/kernel/dma-swiotlb.c |   8 ++-
 arch/x86/kernel/pci-dma.c         |   5 +-
 include/linux/swiotlb.h           |   2 +-
 kernel/dma/swiotlb.c              | 103 +++++++++++++++++++++---------
 4 files changed, 84 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c
index ba256c37bcc0..f18694881264 100644
--- a/arch/powerpc/kernel/dma-swiotlb.c
+++ b/arch/powerpc/kernel/dma-swiotlb.c
@@ -20,9 +20,11 @@ void __init swiotlb_detect_4g(void)
 
 static int __init check_swiotlb_enabled(void)
 {
-	if (ppc_swiotlb_enable)
-		swiotlb_print_info();
-	else
+	if (ppc_swiotlb_enable) {
+		swiotlb_print_info(false);
+		if (swiotlb_high_active())
+			swiotlb_print_info(true);
+	} else
 		swiotlb_exit();
 
 	return 0;
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 30bbe4abb5d6..1504b349b312 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -196,7 +196,10 @@ static int __init pci_iommu_init(void)
 	/* An IOMMU turned us off. */
 	if (x86_swiotlb_enable) {
 		pr_info("PCI-DMA: Using software bounce buffering for IO (SWIOTLB)\n");
-		swiotlb_print_info();
+
+		swiotlb_print_info(false);
+		if (swiotlb_high_active())
+			swiotlb_print_info(true);
 	} else {
 		swiotlb_exit();
 	}
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index e61c074c55eb..8196bf961aab 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -166,7 +166,7 @@ static inline void swiotlb_adjust_size(unsigned long size)
 #endif /* CONFIG_SWIOTLB */
 
 extern bool swiotlb_high_active(void);
-extern void swiotlb_print_info(void);
+extern void swiotlb_print_info(bool high);
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
 struct page *swiotlb_alloc(struct device *dev, size_t size);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7988883ca7f9..ff82b281ce01 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -101,6 +101,21 @@ setup_io_tlb_npages(char *str)
 }
 early_param("swiotlb", setup_io_tlb_npages);
 
+static struct io_tlb_mem *io_tlb_mem_get(bool high)
+{
+	return high ? &io_tlb_high_mem : &io_tlb_default_mem;
+}
+
+static unsigned long nslabs_get(bool high)
+{
+	return high ? high_nslabs : default_nslabs;
+}
+
+static char *swiotlb_name_get(bool high)
+{
+	return high ? "high" : "default";
+}
+
 bool swiotlb_high_active(void)
 {
 	return high_nslabs && io_tlb_high_mem.nslabs;
@@ -133,17 +148,18 @@ void __init swiotlb_adjust_size(unsigned long size)
 	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);
 }
 
-void swiotlb_print_info(void)
+void swiotlb_print_info(bool high)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_mem_get(high);
 
 	if (!mem->nslabs) {
 		pr_warn("No low mem\n");
 		return;
 	}
 
-	pr_info("mapped [mem %pa-%pa] (%luMB)\n", &mem->start, &mem->end,
-	       (mem->nslabs << IO_TLB_SHIFT) >> 20);
+	pr_info("%s mapped [mem %pa-%pa] (%luMB)\n",
+		swiotlb_name_get(high), &mem->start, &mem->end,
+		(mem->nslabs << IO_TLB_SHIFT) >> 20);
 }
 
 static inline unsigned long io_tlb_offset(unsigned long val)
@@ -184,15 +200,9 @@ static void *swiotlb_mem_remap(struct io_tlb_mem *mem, unsigned long bytes)
 }
 #endif
 
-/*
- * Early SWIOTLB allocation may be too early to allow an architecture to
- * perform the desired operations.  This function allows the architecture to
- * call SWIOTLB when the operations are possible.  It needs to be called
- * before the SWIOTLB memory is used.
- */
-void __init swiotlb_update_mem_attributes(void)
+static void __init __swiotlb_update_mem_attributes(bool high)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_mem_get(high);
 	void *vaddr;
 	unsigned long bytes;
 
@@ -207,6 +217,19 @@ void __init swiotlb_update_mem_attributes(void)
 		mem->vaddr = vaddr;
 }
 
+/*
+ * Early SWIOTLB allocation may be too early to allow an architecture to
+ * perform the desired operations.  This function allows the architecture to
+ * call SWIOTLB when the operations are possible.  It needs to be called
+ * before the SWIOTLB memory is used.
+ */
+void __init swiotlb_update_mem_attributes(void)
+{
+	__swiotlb_update_mem_attributes(false);
+	if (swiotlb_high_active())
+		__swiotlb_update_mem_attributes(true);
+}
+
 static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 		unsigned long nslabs, unsigned int flags, bool late_alloc)
 {
@@ -240,15 +263,13 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	return;
 }
 
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the DMA API.
- */
-void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
-		int (*remap)(void *tlb, unsigned long nslabs, bool high))
+static void __init
+__swiotlb_init_remap(bool addressing_limit, unsigned int flags,
+		     int (*remap)(void *tlb, unsigned long nslabs, bool high),
+		     bool high)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
-	unsigned long nslabs = default_nslabs;
+	struct io_tlb_mem *mem = io_tlb_mem_get(high);
+	unsigned long nslabs = nslabs_get(high);
 	size_t alloc_size;
 	size_t bytes;
 	void *tlb;
@@ -274,7 +295,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		return;
 	}
 
-	if (remap && remap(tlb, nslabs, false) < 0) {
+	if (remap && remap(tlb, nslabs, high) < 0) {
 		memblock_free(tlb, PAGE_ALIGN(bytes));
 
 		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
@@ -293,7 +314,20 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, flags, false);
 
 	if (flags & SWIOTLB_VERBOSE)
-		swiotlb_print_info();
+		swiotlb_print_info(high);
+}
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the DMA API.
+ */
+void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
+		int (*remap)(void *tlb, unsigned long nslabs, bool high))
+{
+	__swiotlb_init_remap(addressing_limit, flags, remap, false);
+	if (high_nslabs)
+		__swiotlb_init_remap(addressing_limit, flags | SWIOTLB_ANY,
+				     remap, true);
 }
 
 void __init swiotlb_init(bool addressing_limit, unsigned int flags)
@@ -364,23 +398,20 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 			     (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
 	swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, 0, true);
 
-	swiotlb_print_info();
+	swiotlb_print_info(false);
 	return 0;
 }
 
-void __init swiotlb_exit(void)
+static void __init __swiotlb_exit(bool high)
 {
-	struct io_tlb_mem *mem = &io_tlb_default_mem;
+	struct io_tlb_mem *mem = io_tlb_mem_get(high);
 	unsigned long tbl_vaddr;
 	size_t tbl_size, slots_size;
 
-	if (swiotlb_force_bounce)
-		return;
-
 	if (!mem->nslabs)
 		return;
 
-	pr_info("tearing down default memory pool\n");
+	pr_info("tearing down %s memory pool\n", swiotlb_name_get(high));
 	tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
 	tbl_size = PAGE_ALIGN(mem->end - mem->start);
 	slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
@@ -397,6 +428,16 @@ void __init swiotlb_exit(void)
 	memset(mem, 0, sizeof(*mem));
 }
 
+void __init swiotlb_exit(void)
+{
+	if (swiotlb_force_bounce)
+		return;
+
+	__swiotlb_exit(false);
+	if (swiotlb_high_active())
+		__swiotlb_exit(true);
+}
+
 /*
  * Return the offset into a iotlb slot required to keep the device happy.
  */
@@ -786,6 +827,10 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem,
 static int __init __maybe_unused swiotlb_create_default_debugfs(void)
 {
 	swiotlb_create_debugfs_files(&io_tlb_default_mem, "swiotlb");
+
+	if (swiotlb_high_active())
+		swiotlb_create_debugfs_files(&io_tlb_high_mem, "swiotlb-hi");
+
 	return 0;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 01:03:16 2022
From: Dongli Zhang <dongli.zhang@oracle.com>
To: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
Subject: [PATCH RFC v1 0/7] swiotlb: extra 64-bit buffer for dev->dma_io_tlb_mem
Date: Wed,  8 Jun 2022 17:55:46 -0700
Message-Id: <20220609005553.30954-1-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain

Hello,

I previously sent out a patchset for a 64-bit bounce buffer, and people
thought it was the same as Restricted DMA. However, a 64-bit buffer is
still not supported today.

https://lore.kernel.org/all/20210203233709.19819-1-dongli.zhang@oracle.com/

This RFC introduces an extra swiotlb buffer created with the SWIOTLB_ANY
flag, in order to support 64-bit swiotlb.

The core ideas are:

1. Create an extra io_tlb_mem with the SWIOTLB_ANY flag.

2. dev->dma_io_tlb_mem is set to either the default or the extra
   io_tlb_mem, depending on the device's DMA mask.
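
To illustrate idea 2, here is a minimal user-space sketch of the
selection policy. Only the pool names mirror the patchset; min_not_zero()
is modeled after the kernel macro, and the exact 32-bit cut-off is an
assumption of this sketch, not something the patchset fixes:

```c
#include <assert.h>
#include <stdint.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

struct io_tlb_mem { const char *name; };
static struct io_tlb_mem io_tlb_default_mem = { "default" };
static struct io_tlb_mem io_tlb_high_mem    = { "high" };

/* Modeled after the kernel's min_not_zero(): the smaller of the two
 * values, ignoring zeroes. */
static uint64_t min_not_zero(uint64_t a, uint64_t b)
{
	if (!a)
		return b;
	if (!b)
		return a;
	return a < b ? a : b;
}

/* A device whose effective mask covers more than 32 bits could be
 * pointed at the high (SWIOTLB_ANY) pool; others keep the default. */
static struct io_tlb_mem *pick_io_tlb_mem(uint64_t dma_mask,
					  uint64_t bus_dma_limit)
{
	uint64_t mask = min_not_zero(dma_mask, bus_dma_limit);

	return mask > DMA_BIT_MASK(32) ? &io_tlb_high_mem
				       : &io_tlb_default_mem;
}
```

Where this selection should live is exactly question 3 below.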


Would you please comment on the questions below?

- Is it fine to create the extra io_tlb_mem?

- Which is better: a separate variable for the extra io_tlb_mem, or an
  array of two io_tlb_mem?

- Should I set dev->dma_io_tlb_mem in each driver (e.g., the virtio
  driver as in this patchset), based on the value of
  min_not_zero(*dev->dma_mask, dev->bus_dma_limit), or at a higher
  level (e.g., after the PCI driver has probed the device)?


This patchset demonstrates that the idea works. Since this is just an
RFC, I have only tested virtio-blk on qemu-7.0 with swiotlb forced. It
has not been tested in an AMD SEV environment.

qemu-system-x86_64 -cpu host -name debug-threads=on \
-smp 8 -m 16G -machine q35,accel=kvm -vnc :5 -hda boot.img \
-kernel mainline-linux/arch/x86_64/boot/bzImage \
-append "root=/dev/sda1 init=/sbin/init text console=ttyS0 loglevel=7 swiotlb=327680,3145728,force" \
-device virtio-blk-pci,id=vblk0,num-queues=8,drive=drive0,disable-legacy=on,iommu_platform=true \
-drive file=test.raw,if=none,id=drive0,cache=none \
-net nic -net user,hostfwd=tcp::5025-:22 -serial stdio


The kernel command line "swiotlb=327680,3145728,force" requests 327680
slabs (640MB) for the default swiotlb and 3145728 slabs (6GB) for the
extra swiotlb.

[    2.826676] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    2.826693] software IO TLB: default mapped [mem 0x0000000037000000-0x000000005f000000] (640MB)
[    2.826697] software IO TLB: high mapped [mem 0x00000002edc80000-0x000000046dc80000] (6144MB)
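
The slab counts map onto the logged sizes as follows (IO_TLB_SHIFT is 11
in kernel/dma/swiotlb.c, i.e. each slab is 2KB):

```c
#include <assert.h>
#include <stdint.h>

/* Each swiotlb slab (slot) is 1 << IO_TLB_SHIFT = 2048 bytes. */
#define IO_TLB_SHIFT 11

static uint64_t slabs_to_mb(uint64_t nslabs)
{
	return (nslabs << IO_TLB_SHIFT) >> 20;
}

/* slabs_to_mb(327680)  == 640   -> "default mapped ... (640MB)"
 * slabs_to_mb(3145728) == 6144  -> "high mapped ... (6144MB)"    */
```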

The highmem swiotlb is being used by virtio-blk.

$ cat /sys/kernel/debug/swiotlb/swiotlb-hi/io_tlb_nslabs 
3145728
$ cat /sys/kernel/debug/swiotlb/swiotlb-hi/io_tlb_used 
8960


Dongli Zhang (7):
  swiotlb: introduce the highmem swiotlb buffer
  swiotlb: change the signature of remap function
  swiotlb-xen: support highmem for xen specific code
  swiotlb: to implement io_tlb_high_mem
  swiotlb: add interface to set dev->dma_io_tlb_mem
  virtio: use io_tlb_high_mem if it is active
  swiotlb: fix the slot_addr() overflow

arch/powerpc/kernel/dma-swiotlb.c      |   8 +-
arch/x86/include/asm/xen/swiotlb-xen.h |   2 +-
arch/x86/kernel/pci-dma.c              |   5 +-
drivers/virtio/virtio.c                |   8 ++
drivers/xen/swiotlb-xen.c              |  16 +++-
include/linux/swiotlb.h                |  14 ++-
kernel/dma/swiotlb.c                   | 136 +++++++++++++++++++++-------
7 files changed, 145 insertions(+), 44 deletions(-)

Thank you very much for your feedback and suggestions!

Dongli Zhang




From xen-devel-bounces@lists.xenproject.org Thu Jun 09 01:20:19 2022
Date: Wed, 8 Jun 2022 18:20:06 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Jan Beulich <jbeulich@suse.com>, George Dunlap <George.Dunlap@citrix.com>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Roger Pau Monne <roger.pau@citrix.com>, 
    Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com, 
    julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org, 
    roberto.bagnara@bugseng.com
Subject: MISRA C meeting tomorrow, was: MOVING COMMUNITY CALL Call for agenda
 items for 9 June Community Call @ 1500 UTC
In-Reply-To: <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2206081806020.21215@ubuntu-linux-20-04-desktop>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com> <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop> <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop> <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

Just a quick note asking everyone to have another look at the
spreadsheet before the meeting tomorrow. I cleared out the rules we have
already discussed, leaving only the ones still to be discussed.


A few rules are similar to the already-accepted Rule 5.1, with our
agreed 40-character limit for identifiers:
- Rule 5.2
- Rule 5.4


A few rules are about comparisons/operations between pointers to
different objects:
- Rule 18.1
- Rule 18.2
- Rule 18.3
In my opinion these rules are good in the general case. Things like _end
- _start and other "fake objects" should be deviations.
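
For reference, a minimal example (not from the Xen sources) of the
distinction these rules draw:

```c
#include <assert.h>
#include <stddef.h>

static int buf[8];

/* Fine under Rules 18.1-18.3: both operands point into the same array. */
static ptrdiff_t span(const int *lo, const int *hi)
{
	return hi - lo;
}

/* A violation (and undefined behaviour in ISO C) would be subtracting
 * or ordering pointers into *different* objects, e.g. with a second
 * array `int other[8]`:
 *
 *	ptrdiff_t d = &other[0] - &buf[0];
 *
 * which is the _end - _start pattern: useful, but best handled as a
 * deviation. */
```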


A few rules weren't clear:
- Rule 13.1: the example link was wrong; I updated it
- Rule 9.3: both { 0 } and { [2] = 1 } are allowed by the rule
- Rule 9.4: range initializers are not considered by the rule because
            they are a GNU extension
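
Concretely, the initializers in question (the range form needs GCC,
which is why Rule 9.4 leaves it out of scope):

```c
#include <assert.h>

/* Rule 9.3: both forms are standard C99 and allowed by the rule. */
static const int zeroed[4] = { 0 };       /* every element is 0       */
static const int sparse[4] = { [2] = 1 }; /* element 2 is 1, rest 0   */

/* Rule 9.4: a range designator such as
 *
 *	static const int ones[4] = { [0 ... 3] = 1 };
 *
 * is a GNU extension rather than ISO C, so the rule does not apply. */
```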


Finally, for Rule 13.2, I updated the link to ECLAIR's results. There
are a lot more violations than just 4, but I don't know if they are
valid or false positives.


Looking forward to our discussion tomorrow!

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 01:27:56 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170884: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
X-Osstest-Versions-That:
    qemuu=57c9363c452af64fe058aa946cc923eae7f7ad33
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 01:27:47 +0000

flight 170884 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170884/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 170874 pass in 170884
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail pass in 170874

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170849
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170849
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170849
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170849
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170849
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170849
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 170849
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170849
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170849
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
baseline version:
 qemuu                57c9363c452af64fe058aa946cc923eae7f7ad33

Last test of basis   170849  2022-06-06 17:39:05 Z    2 days
Testing same since   170858  2022-06-07 04:25:53 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Song Gao <gaosong@loongson.cn>
  Xiaojuan Yang <yangxiaojuan@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   57c9363c45..9b1f588549  9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 03:24:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 03:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344538.570078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz8mC-00053s-Qy; Thu, 09 Jun 2022 03:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344538.570078; Thu, 09 Jun 2022 03:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz8mC-00053l-NU; Thu, 09 Jun 2022 03:24:04 +0000
Received: by outflank-mailman (input) for mailman id 344538;
 Thu, 09 Jun 2022 03:24:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jl5s=WQ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nz8mB-00053e-Bn
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 03:24:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9aa77218-e7a3-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 05:24:00 +0200 (CEST)
Received: from DU2PR04CA0265.eurprd04.prod.outlook.com (2603:10a6:10:28e::30)
 by VI1PR08MB3568.eurprd08.prod.outlook.com (2603:10a6:803:7f::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 03:23:55 +0000
Received: from DBAEUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28e:cafe::40) by DU2PR04CA0265.outlook.office365.com
 (2603:10a6:10:28e::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Thu, 9 Jun 2022 03:23:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT009.mail.protection.outlook.com (100.127.143.21) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 03:23:54 +0000
Received: ("Tessian outbound ff2e13d26e0f:v120");
 Thu, 09 Jun 2022 03:23:54 +0000
Received: from 14fc1bb5f678.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 249C0593-EB57-4EE9-A209-DA65BC9E7D98.1; 
 Thu, 09 Jun 2022 03:23:43 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 14fc1bb5f678.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 03:23:43 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM6PR08MB5015.eurprd08.prod.outlook.com (2603:10a6:20b:e5::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 03:23:40 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%7]) with mapi id 15.20.5332.012; Thu, 9 Jun 2022
 03:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9aa77218-e7a3-11ec-b605-df0040e90b76
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=CEz+UchdGAP2t+h6D2FWuzyH1LAORqtrJiK0EGqpPtyvNz4nWzo8qAAeHnLGDJoB0KUudKWik4j/fJxIIILP/mvqpnH2hepfZFLomB1tqxDy5VP3VL1MqqE3IF+M/jygw5stvKO2MtSlmWa3cci0/yFsF0eWdQZxt1vSmh6ui5lIERhKICqOTjB09wgTUMX2IWg/YqGGuXwupEVd9bzfeMoQT+Q1bs+XKpScGfPn9GEYWEXtzvw6KqwPxdSUO197JnHKNl78l4upPQP9+W1WgaxJTw1qxiSVRGklp4rzr66bhsD3BvRbCTzwcsUAKpJ2OifgMxMNNzeBtuUzndSDDw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nUoK2vfRlr5hPn0qnnmWHphQJK115+ItmFJpMWV1aGg=;
 b=P0SakBKsGxTVP3EjeIz84yRYiKb8O/ochXITWkNxjaZzZM/ZLReAR7zudMA0zncjmLtHMDcs0qEZLpYzFwdjkFhHcHmeqYGRmVGpFr471FhVD5wXNDHf/Us8IPzwxaEm/xM0bZGgdjsiC9nb/0AEaEkWamvpJsHBdf3M5cLUujm29GwjHT+Fq5h3llxapytwWbZkP4V5DHy6r5IYiIJ1rELa6M1hOENi/0FEYHWnlFYG6060LQBj6Z+EZZMnuNQdy2CMwDpVNyEHsze55deV/UO5JPBiMXdVLn+1vuZyR8rwDVhnT44y45B+2LaEkLf8914w0UPzZ75iNtTlSXnEmg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nUoK2vfRlr5hPn0qnnmWHphQJK115+ItmFJpMWV1aGg=;
 b=GLmADv+0z+Apv1ljE7fGmagEzEVwvyizR/p+6kGvDVSIxC4l2asKB6BzEtQGiQIH2SqeeXY9q9J6BtncBd4yxHjxesmAdE9SThGJdDe8FQy9k8n469+PnE1RO4WWgHCCCu0WeAKMLkot0r7U2zPcEpYX2oLvmAExWi1cbC19QIo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZH+qxWGcyHpcxPGtwA+t8FkMNnAPWoplI9BkeG9Pbk70jIZ7iGT4q/jCBHupOuSePfscxYL527ZNPHZlerou47NB5sxHOINlZc/odDVVmiPw+7znvi+uF2janr2kfr11UJaKH2cNl6NabfSIapt8lHi2agG+pni5oBT7y0ERnmS907Q3g69rOsv20kauyAFF2FQMl/YuAvZRZH8SAUajxu0CfduBUfBCCMLBK2uabHI/xrRozRZTOfHBJCNN8fqOr9vCn8D4wdLOPuFNkXqjkvoDwL4/brZ6kdp4UUJexRLr30M6QFQ8SgL60lAEPfU/7Bhqxi7kIBopO4sFKc67rA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nUoK2vfRlr5hPn0qnnmWHphQJK115+ItmFJpMWV1aGg=;
 b=h/SsVdCKYYqhn9jHe+xQ40/SW2TZ3JUqIkNxvxfG3DmFk3fKcY6U20KeLwHJoLloFe4oTTxs7RfHcWIQKmTyaiHxKQRKogde9saxzwjv5P7CYY+1pXVa69dsIra7C6g7VoKMhzCsrdUDBClbXSQD3ZoB64WOesgiqoSyaCCdA1VOWZrP/rm+duirUgvtPn7I+fdjP9g6qVroZU0pma7FvAQ/Akete6P0xU2WNOfK8IkOWanBMyv7UgmqEJID/+VuD7+bNQ72KGMsvo/rzwIAgJ7Y2CHwOMihVbRufhtEY1i5DSNL+Nqsx/Gv07LQ8CPMXgephb/8gVly4g98Ae/Fgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: RE: [PATCH v6 7/9] xen/arm: unpopulate memory when domain is static
Thread-Topic: [PATCH v6 7/9] xen/arm: unpopulate memory when domain is static
Thread-Index: AQHYekCZzsveZhnerUWxbO0gDoOu8a1DqumAgAK/JFA=
Date: Thu, 9 Jun 2022 03:23:40 +0000
Message-ID:
 <DU2PR08MB7325DCCA62FCA3620E6FB600F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-8-Penny.Zheng@arm.com>
 <72bec2ab-13d7-8de9-6bb9-f1e4f9de6a3b@xen.org>
In-Reply-To: <72bec2ab-13d7-8de9-6bb9-f1e4f9de6a3b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 920E9DAC76B4FD428F28966B86889C37.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 27d1ee54-1d6d-4e6c-cc98-08da49c77b4c
x-ms-traffictypediagnostic:
	AM6PR08MB5015:EE_|DBAEUR03FT009:EE_|VI1PR08MB3568:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3568CC227B805DF49BE04146F7A79@VI1PR08MB3568.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 chV6tW7JbiNBaKeLqQarq8enXJ4plk6RH1YARHraMRkqRR7bkFO3oAZT947Cskaa3QWK9+W4npAUkkSEaPGsisLndfk3iNExyqJvh2bopDI22R6cn+g8UDS9EYf9TN//iN73uuNm5rD+muiw0MZmqcHi1muEmID+L1zyUFlql6mneYa/PaSO5joKxQPHbRhFtPDxLSn864GEVFv+bC0mj90AvP5oIp9+iwrMtQ72MJvMwNEESS8xTFMdk+vUk9GUilf+65Ilzoxutnu4mcFk2YTp1MbUSNJQcFEEf9O8i+/mT2WKsETALqEoAmwIuAoZ300a4LOCXkpSGnIcaPGoPAlnU1+YfyQwLkTPWQMqL0TFmGL1s9aqJAmKN7mzKpH7d97pxc/5LM+fmJrbV2X68at/iCSNh7dXTNd447yOz7K4NBQKCmFSweVu9OcFYmfvamcStEPVW4omPdwC1BL/TDhbYD2xzdU+umZOv/Ys7HnnAPsSXYEmWuIPPWQyFIvkQfiLvKTWSKxr+K19FKhdgBmi93x8DX0qk25mMD0sATgXS8H6Ja6VE2D+RL0J/+dHeZMQZPkrzVRv0w8O9+qZvFiyfyoh5UhHKNKDKKD4UF+tpAQxZHcxcmHkOKH78dCvuGoHpaUnInGaK+I9pV3ImY8Qa3xMFmuTZwcfBfMLhbz1/O9DLhwBSpNqoGPohCC6uK5Xezzj4tgdy6CuK1sSlg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(508600001)(55016003)(83380400001)(71200400001)(2906002)(33656002)(6506007)(53546011)(7696005)(9686003)(26005)(186003)(38070700005)(8936002)(122000001)(52536014)(54906003)(110136005)(316002)(64756008)(66946007)(76116006)(66476007)(8676002)(4326008)(66556008)(66446008)(86362001)(5660300002)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5015
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	33a79481-0afc-4693-845c-08da49c77309
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 03:23:54.2140
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 27d1ee54-1d6d-4e6c-cc98-08da49c77b4c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3568

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, June 7, 2022 5:20 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v6 7/9] xen/arm: unpopulate memory when domain is
> static
> 
> Hi Penny,
> 
> On 07/06/2022 08:30, Penny Zheng wrote:
> > Today when a domain unpopulates the memory on runtime, they will
> > always hand the memory back to the heap allocator. And it will be a
> > problem if domain is static.
> >
> > Pages as guest RAM for static domain shall be reserved to only this
> > domain and not be used for any other purposes, so they shall never go
> > back to heap allocator.
> >
> > This commit puts reserved pages on the new list resv_page_list only
> > after having taken them off the "normal" list, when the last ref dropped.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > v6 changes:
> > - refine in-code comment
> > - move PGC_static !CONFIG_STATIC_MEMORY definition to common header
> 
> I don't understand why this change is necessary for this patch. AFAICT, all the
> users of PGC_static will be protected by #ifdef CONFIG_STATIC_MEMORY and
> therefore PGC_static should always be defined.
> 

True, I notice that arch_free_heap_page has already been guarded with
#ifdef CONFIG_STATIC_MEMORY. I'll revert the change.

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 04:20:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 04:20:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344547.570088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz9ec-0003QK-1B; Thu, 09 Jun 2022 04:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344547.570088; Thu, 09 Jun 2022 04:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nz9eb-0003QD-Um; Thu, 09 Jun 2022 04:20:17 +0000
Received: by outflank-mailman (input) for mailman id 344547;
 Thu, 09 Jun 2022 04:20:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n95G=WQ=kernel.org=patchwork-bot+netdevbpf@srs-se1.protection.inumbo.net>)
 id 1nz9ea-0003Q7-Rp
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 04:20:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75a7bc0a-e7ab-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 06:20:15 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A018761CD0;
 Thu,  9 Jun 2022 04:20:13 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 81283C341C8;
 Thu,  9 Jun 2022 04:20:12 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 6D140E737FE; Thu,  9 Jun 2022 04:20:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a7bc0a-e7ab-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654748412;
	bh=dIUaXAb8JvdExVmLmSfZD2I3zXBIqIGH+YOHQiEOWhU=;
	h=Subject:From:Date:References:In-Reply-To:To:Cc:From;
	b=h2R27x8HPrjU1msxKqyvS+J400vNxerOUKREFoCnB56VtLkohU7c0fxvU7elGi3Qk
	 ix6DLQijmN2uYAeeW/TotCKhYdSxZHrgZx9pwYsYwrYbv5aUqCzsdgAbgGnfHkVZB5
	 AUtRrZDo7K7CHVEiHfZiXP9681ELjQnmE4m/IBhgMdWyY58Ookooj7K8Pe34QlxnyS
	 x2yn+VhZodnjxbrnsQ9kVMSBU2rrC0nyyJ+6F0/qhOzW3NMXNSLBO46T6OZ3dxk7uz
	 unE9J0e/oRI4Amct/G74g3xwhLtEGd72eADNlRKh6kBsMaQIXH67DSXcEQg+o4HxGR
	 hn9tQSq1qNZIw==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: Re: [PATCH Resend] xen/netback: do some code cleanup
From: patchwork-bot+netdevbpf@kernel.org
Message-Id: 
 <165474841244.6883.11725233073993264776.git-patchwork-notify@kernel.org>
Date: Thu, 09 Jun 2022 04:20:12 +0000
References: <20220608043726.9380-1-jgross@suse.com>
In-Reply-To: <20220608043726.9380-1-jgross@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, wei.liu@kernel.org, paul@xen.org,
 davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com

Hello:

This patch was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:

On Wed,  8 Jun 2022 06:37:26 +0200 you wrote:
> Remove some unused macros and functions, make local functions static.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Wei Liu <wei.liu@kernel.org>
> ---
>  drivers/net/xen-netback/common.h    | 12 ------------
>  drivers/net/xen-netback/interface.c | 16 +---------------
>  drivers/net/xen-netback/netback.c   |  4 +++-
>  drivers/net/xen-netback/rx.c        |  2 +-
>  4 files changed, 5 insertions(+), 29 deletions(-)

Here is the summary with links:
  - [Resend] xen/netback: do some code cleanup
    https://git.kernel.org/netdev/net-next/c/5834e72eda0b

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html

From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:05:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:05:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344557.570100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzALz-0000UL-Ci; Thu, 09 Jun 2022 05:05:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344557.570100; Thu, 09 Jun 2022 05:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzALz-0000UE-9p; Thu, 09 Jun 2022 05:05:07 +0000
Received: by outflank-mailman (input) for mailman id 344557;
 Thu, 09 Jun 2022 05:05:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzALx-0000U6-IG
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:05:06 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6c45287-e7b1-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 07:05:02 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzALp-00Gmug-RP; Thu, 09 Jun 2022 05:04:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6c45287-e7b1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=wu8aFObH2hHSMKYcBRx522Iw5QK4xZIvlYeZFEP3Kk8=; b=AGFJ5yCS6ccyDZaWrHTtIENtg2
	ewgk3wbTcgyJwqkGBn/ECO0YkpnGr1nhlzroXQ0e44S3K0rloI9v7JC83sD3Y+DkscyzxXtRq6seC
	5BE4mdWoEqX0Vgn4MxsbbkI6XMr8CcpNWB7qMagiEN2yOqELMm7CRGAWlXy+QVLDeK0tjLXFZUb9J
	JzBYTt0lYEem70Oabsxdet520DwMot9ysqcapl0IQqwDDK88mt0arOXL0CrgBrBZzbntk0nnGmXJG
	u1dvlEEXt4AiKYNPte6f6uIgPw4n1FjwNUbg8QzgaLNbbJ2FvoFqCi6bB/hwjUaqIjVN/4DtjsKn3
	waLCjuCA==;
Date: Wed, 8 Jun 2022 22:04:57 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 1/7] swiotlb: introduce the highmem swiotlb buffer
Message-ID: <YqF/eZE9eozDURWz@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-2-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-2-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 08, 2022 at 05:55:47PM -0700, Dongli Zhang wrote:
> @@ -109,6 +109,7 @@ struct io_tlb_mem {
>  	} *slots;
>  };
>  extern struct io_tlb_mem io_tlb_default_mem;
> +extern struct io_tlb_mem io_tlb_high_mem;

This should not be exposed.

> +extern bool swiotlb_high_active(void);

And this should not even exist.

> +static unsigned long high_nslabs;

And I don't think "high" is a good name here to start with.  That
suggests highmem, which we are not using here.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:06:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344564.570111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAMp-0000yp-Md; Thu, 09 Jun 2022 05:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344564.570111; Thu, 09 Jun 2022 05:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAMp-0000yi-JS; Thu, 09 Jun 2022 05:05:59 +0000
Received: by outflank-mailman (input) for mailman id 344564;
 Thu, 09 Jun 2022 05:05:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzAMo-0000U6-4z
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:05:58 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d8502d1d-e7b1-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 07:05:57 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzAMl-00Gn4g-0a; Thu, 09 Jun 2022 05:05:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8502d1d-e7b1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=lSjKiwGQP2bLdZmsJuNaLJfxZDGjC6fCq2g/NdyMPNA=; b=kyhL2c0fixHav+CSRIiyWMtFN7
	AtDNsnF/9FvWJ9TIx9fua/CrNv5rR8a9O7oji3ME60GbokDgdhgvD0ea2XdXEW/L1DOT//y9eaYzo
	JnuI00fO4ezPxNMBvHkQYYI5xIPoHzR9wnf4anfLqVEwyENKdzkr/7GIIOrh15zyrv+YMfLMdP9s5
	6Rvc7JdDtTpL7W2pcS0iqoBpYYs8PLlaD0HauJabPoVr7WpXOlJQlGsGN0JStdxSNMixCdQE7blhE
	RaUJDr2rfJW4kg5HofoJ8jqgC2HUgKr4MOCFxVgU1Mt6OJy/VvcugFl1p5U4yhYTwg9+HBWpohhZ5
	n20MR6mw==;
Date: Wed, 8 Jun 2022 22:05:54 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 4/7] swiotlb: to implement io_tlb_high_mem
Message-ID: <YqF/sphJj6n+22Si@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-5-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-5-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

All this really needs to be hidden under the hood.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:06:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:06:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344571.570122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzANH-0001Sk-0W; Thu, 09 Jun 2022 05:06:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344571.570122; Thu, 09 Jun 2022 05:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzANG-0001Sd-Sh; Thu, 09 Jun 2022 05:06:26 +0000
Received: by outflank-mailman (input) for mailman id 344571;
 Thu, 09 Jun 2022 05:06:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzANF-0000U6-6N
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:06:25 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e86f65ab-e7b1-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 07:06:24 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzANC-00GnAv-9y; Thu, 09 Jun 2022 05:06:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e86f65ab-e7b1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=d9gxlHADcAR3dZ8bJB1PHcB/bh00hLGGE8frKxME3zs=; b=iYQQqCAPXVCNNcBSd6/LN/ZjOV
	4Alol9GUeSmGyzdwdlE2wivpQEGbeGaFCez4O4B9XQgEFOhzNeD6JL12swx/rzZWvXefaKapSNw1m
	PlpQsdA2Rbx6KyzlL0eEyHocUVn+n7Msu0brZwHLrA+TYfR25AGMrAH+zJOkn4YHdPffVSr4AWcvr
	Ogo5wx4xPZPyn4DuIRkgBiWhIA1AUImP/tMbpPGYEGa8/IWmqJcC1Dvbs8BRthWMT3KabUBUsh/qY
	1CnWC/FK2QJFrR/WIBhNFqsuku35MV9OojHHBK1lVY21/f4WGxtvUD/SrGBFPoqGneEadMZp6sovO
	C4iNnF2w==;
Date: Wed, 8 Jun 2022 22:06:22 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 5/7] swiotlb: add interface to set
 dev->dma_io_tlb_mem
Message-ID: <YqF/zrqYtjPS9cvF@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-6-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-6-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

This should be handled under the hood without the driver even knowing.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:09:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344578.570133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAPk-0002FZ-ER; Thu, 09 Jun 2022 05:09:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344578.570133; Thu, 09 Jun 2022 05:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAPk-0002FS-B3; Thu, 09 Jun 2022 05:09:00 +0000
Received: by outflank-mailman (input) for mailman id 344578;
 Thu, 09 Jun 2022 05:08:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzAPi-0002FG-QN
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:08:58 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43ab486e-e7b2-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 07:08:57 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzAPe-00Gnin-I4; Thu, 09 Jun 2022 05:08:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43ab486e-e7b2-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=1370MvjFrg636nlGFXu8CpOfBg8+mRgqzhQ7YScZLE8=; b=z4YKvf9NkWQn8P4v8bwSUJFMzO
	wFYfxYFg0PkZgnD659U4rh7iitXqzKlNSVwWTA1C2nrNn5h07lgmlJhc0ReHis19xaonefaZB6sxZ
	+V+P9DVDmD/46UTQmN/JjdZTZZAQC5fMv8RMAMMUgf2fXLubCn2cMrYDqfdfylD9qDdZdNi8P/Svh
	J+9KZ2qoXLt20f3lL7nsIP15svEtGRlcPZ9rH5GjuL/37Zo1WS6wUeHHKMNr3CNoOja+2Q00FXJVs
	gHmBMVQKXvRtoRkoEQ8E3W/0xik7QAo//cz+ATtxhmw/3yPF5rc4z1d9FC8BroUGeyAeuXFwnjhyj
	BRV2y6Ug==;
Date: Wed, 8 Jun 2022 22:08:54 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 3/7] swiotlb-xen: support highmem for xen specific
 code
Message-ID: <YqGAZoG6/pVX9NqN@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-4-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-4-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 08, 2022 at 05:55:49PM -0700, Dongli Zhang wrote:
> @@ -109,19 +110,25 @@ int xen_swiotlb_fixup(void *buf, unsigned long nslabs, bool high)
>  	int rc;
>  	unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
>  	unsigned int i, dma_bits = order + PAGE_SHIFT;
> +	unsigned int max_dma_bits = MAX_DMA32_BITS;
>  	dma_addr_t dma_handle;
>  	phys_addr_t p = virt_to_phys(buf);
>  
>  	BUILD_BUG_ON(IO_TLB_SEGSIZE & (IO_TLB_SEGSIZE - 1));
>  	BUG_ON(nslabs % IO_TLB_SEGSIZE);
>  
> +	if (high) {
> +		dma_bits = MAX_DMA64_BITS;
> +		max_dma_bits = MAX_DMA64_BITS;
> +	}
> +

I think you really want to pass the addressing bits or mask to the
remap callback and not do magic with a 'high' flag here.
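The suggestion can be sketched as follows. This is hypothetical, not the actual patch: xen_swiotlb_fixup_sketch() and setup_pool() are stand-ins, and only MAX_DMA32_BITS/MAX_DMA64_BITS mirror names from the quoted diff. The idea is that the caller computes the addressing limit once and the callback receives it as a plain parameter, with no 'high' flag to decode.

```c
#include <assert.h>
#include <stdbool.h>

/* Names mirrored from the quoted patch; values are the usual widths. */
enum { MAX_DMA32_BITS = 32, MAX_DMA64_BITS = 64 };

/* Hypothetical remap callback: told exactly how far it may widen the
 * DMA mask, instead of inferring that from a boolean. */
static unsigned int xen_swiotlb_fixup_sketch(unsigned int dma_bits,
                                             unsigned int max_dma_bits)
{
    /* ... the per-IO_TLB_SEGSIZE remap loop would live here, retrying
     * with a wider mask until it reaches max_dma_bits ... */
    return dma_bits <= max_dma_bits ? max_dma_bits : dma_bits;
}

/* The caller, not the callback, encodes which pool is being set up. */
static unsigned int setup_pool(bool pool_is_64bit, unsigned int dma_bits)
{
    unsigned int max = pool_is_64bit ? MAX_DMA64_BITS : MAX_DMA32_BITS;
    return xen_swiotlb_fixup_sketch(pool_is_64bit ? MAX_DMA64_BITS : dma_bits,
                                    max);
}
```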


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:13:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344595.570144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzATr-0003lr-3M; Thu, 09 Jun 2022 05:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344595.570144; Thu, 09 Jun 2022 05:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzATr-0003lk-0a; Thu, 09 Jun 2022 05:13:15 +0000
Received: by outflank-mailman (input) for mailman id 344595;
 Thu, 09 Jun 2022 05:13:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzAOH-0000o0-2N
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:07:29 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e8e3cef-e7b2-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 07:07:28 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzAOE-00GnS5-8r; Thu, 09 Jun 2022 05:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e8e3cef-e7b2-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=acG/hWi7Fz9ui2xVxpMoGeBP7vrmuNMLdX3ejzV7aPc=; b=MZhm3nf6mKFSlcr8dK8ijoBLa2
	MBmpr98pFWNa8b6ut+XZouG4H5T08bSJ23S8eyxslIMTr4C/O2zuvnG2/qsMxG0x37tmaxvgijfIb
	lGXyW4Q+2Io60XdLfGcV4rp/RIywa2u5fO9NieV0COBKOJzrFJhvVfmdj+5pSGWt/lb4HHj77Z/qa
	w661jXNSO/l+LgOue5Ca0yCP7G8O+ptXdbp8dSD1Vzdtm86SSvktBkKXYdbPsB22/xE+l3YTcUINL
	9HMTg+plhTcoM0gRCPZ1oapj04vRnOV9EHBOaOpaoHqGEW+z8Mh7/21uIp1vsKiXFvwHo2LUpgWRB
	KVKRCt6A==;
Date: Wed, 8 Jun 2022 22:07:26 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 7/7] swiotlb: fix the slot_addr() overflow
Message-ID: <YqGADnHAP7HYPvRr@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-8-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-8-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 08, 2022 at 05:55:53PM -0700, Dongli Zhang wrote:
> +#define slot_addr(start, idx)	((start) + \
> +				(((unsigned long)idx) << IO_TLB_SHIFT))

Please just convert it to an inline function.
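The conversion being asked for looks roughly like this. In this sketch unsigned long stands in for the kernel's phys_addr_t; IO_TLB_SHIFT is 11 in current kernels (a 2 KiB slot). With a function, the widening happens via the parameter type, so the shift is always done in the wide type and the cast in the macro (the overflow the patch was fixing) disappears.

```c
#include <assert.h>

/* One swiotlb slot is 2 KiB. */
#define IO_TLB_SHIFT 11

/* The quoted macro rewritten as an inline function, as requested.
 * The idx parameter is already wide, so (idx << IO_TLB_SHIFT) cannot
 * overflow a 32-bit int for a large slot index. */
static inline unsigned long slot_addr(unsigned long start, unsigned long idx)
{
    return start + (idx << IO_TLB_SHIFT);
}
```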


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:13:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344596.570155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzATs-00045j-Bq; Thu, 09 Jun 2022 05:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344596.570155; Thu, 09 Jun 2022 05:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzATs-00045a-7n; Thu, 09 Jun 2022 05:13:16 +0000
Received: by outflank-mailman (input) for mailman id 344596;
 Thu, 09 Jun 2022 05:13:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UmT3=WQ=bombadil.srs.infradead.org=BATV+a39afe573ddbd6ff3389+6864+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1nzANy-0000o0-Iz
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:07:10 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 02aeebc7-e7b2-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 07:07:09 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1nzANu-00GnN3-B4; Thu, 09 Jun 2022 05:07:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02aeebc7-e7b2-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=TvPXHMQO6PU//nHDA0WgcCETKSRRfJH2DEgRCxwDbVs=; b=kYmHjR7gMPF1i+IrBOvuM8YJYv
	JS56zj46eCdXv6HizKLwWK6BBJX65b+pjfOUjohLpFK2UZWT0nc39riUnjJa9KdbSNswiMY1Wf8pW
	BeK204Hw7PCKtQiDfxkbHFRDDBh4d7SeK4+7H/ecJcy6NPU9wTuqwFvK+vowvACpLgMReGOnkxRTZ
	5cCR0G2QrtAuXf7Msmn6xuLzKhR7rk7bI0LOAHyZBtYiAE/U219aV9w9pAMoHOBXScvVJzJ1ITweb
	q41BtyqfX5T02Awsmk9AVmPe1SxBXJvV+k6GtG4ovM5N310b1iJCPRLX5pcqbpP2HDax2X2Zm/YmJ
	KajG5p+w==;
Date: Wed, 8 Jun 2022 22:07:06 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	m.szyprowski@samsung.com, jgross@suse.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	sstabellini@kernel.org, mpe@ellerman.id.au, konrad.wilk@oracle.com,
	mst@redhat.com, jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 6/7] virtio: use io_tlb_high_mem if it is active
Message-ID: <YqF/+tqMA7GtjfAY@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-7-dongli.zhang@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609005553.30954-7-dongli.zhang@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 08, 2022 at 05:55:52PM -0700, Dongli Zhang wrote:
>  /* Unique numbering for virtio devices. */
> @@ -241,6 +243,12 @@ static int virtio_dev_probe(struct device *_d)
>  	u64 device_features;
>  	u64 driver_features;
>  	u64 driver_features_legacy;
> +	struct device *parent = dev->dev.parent;
> +	u64 dma_mask = min_not_zero(*parent->dma_mask,
> +				    parent->bus_dma_limit);
> +
> +	if (dma_mask == DMA_BIT_MASK(64))
> +		swiotlb_use_high(parent);

The driver already very clearly communicated its addressing
requirements.  The underlying swiotlb code needs to transparently
pick the right pool.
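A toy illustration of that point: the swiotlb core inspects the device's DMA mask itself at mapping time and picks a pool, so drivers never call anything like swiotlb_use_high(). The names here (pick_pool, POOL_DEFAULT, POOL_WIDE) are hypothetical, not kernel API.

```c
#include <assert.h>
#include <stdint.h>

enum pool { POOL_DEFAULT, POOL_WIDE };

/* Hypothetical core-side selection: a device that can address all
 * 64 bits may bounce through the wide pool; everything else stays on
 * the default low pool. No driver involvement. */
static enum pool pick_pool(uint64_t dma_mask)
{
    return dma_mask == UINT64_MAX ? POOL_WIDE : POOL_DEFAULT;
}
```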


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:15:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:15:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344616.570166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAWQ-00056C-On; Thu, 09 Jun 2022 05:15:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344616.570166; Thu, 09 Jun 2022 05:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzAWQ-000565-Lt; Thu, 09 Jun 2022 05:15:54 +0000
Received: by outflank-mailman (input) for mailman id 344616;
 Thu, 09 Jun 2022 05:15:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzAWP-00055t-Ji; Thu, 09 Jun 2022 05:15:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzAWP-0006TM-G9; Thu, 09 Jun 2022 05:15:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzAWP-0001Sl-05; Thu, 09 Jun 2022 05:15:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzAWO-00055e-Vd; Thu, 09 Jun 2022 05:15:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1U+1IHho6D4wQfhClggTEFF6UGjCY+wKS8HXcz4eIvs=; b=pRodnBoYsr+HzzbLBIbBjPBsfn
	5zNGM8WFnIJ9S402F/MJtTYrjpEbKox7yjdtgWK+qFEF8yJBvEUitP1GjlrdD08JnRaP4iwWoHpQe
	7IgCZhKLUUTaka9k0AJ92Q9LHOBZJFAmWS57nWJK/FqylPhimX+RZU3K4Z4TkqONsczw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170888-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170888: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=34f4335c16a5f4bb7da6c8d2d5e780b6a163846a
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 05:15:52 +0000

flight 170888 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                34f4335c16a5f4bb7da6c8d2d5e780b6a163846a
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   16 days
Failing since        170716  2022-05-24 11:12:06 Z   15 days   42 attempts
Testing same since   170888  2022-06-08 17:52:29 Z    0 days    1 attempts

------------------------------------------------------------
2281 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268874 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:55:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344629.570177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzB89-0001L0-RA; Thu, 09 Jun 2022 05:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344629.570177; Thu, 09 Jun 2022 05:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzB89-0001Kt-MJ; Thu, 09 Jun 2022 05:54:53 +0000
Received: by outflank-mailman (input) for mailman id 344629;
 Thu, 09 Jun 2022 05:54:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jl5s=WQ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nzB87-0001Kn-VA
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:54:52 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abc4319c-e7b8-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 07:54:48 +0200 (CEST)
Received: from AM0P190CA0018.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::28)
 by AM0PR08MB4147.eurprd08.prod.outlook.com (2603:10a6:208:12c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Thu, 9 Jun
 2022 05:54:46 +0000
Received: from AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:190:cafe::7e) by AM0P190CA0018.outlook.office365.com
 (2603:10a6:208:190::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13 via Frontend
 Transport; Thu, 9 Jun 2022 05:54:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT050.mail.protection.outlook.com (10.152.17.47) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 05:54:45 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Thu, 09 Jun 2022 05:54:45 +0000
Received: from 932a8a0b6663.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8A9F0247-055C-483F-9E8D-09B5A7963282.1; 
 Thu, 09 Jun 2022 05:54:39 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 932a8a0b6663.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 05:54:39 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM9PR08MB6673.eurprd08.prod.outlook.com (2603:10a6:20b:307::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Thu, 9 Jun
 2022 05:54:37 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%7]) with mapi id 15.20.5332.012; Thu, 9 Jun 2022
 05:54:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abc4319c-e7b8-11ec-b605-df0040e90b76
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lV76d8EifUYxX2ogEEWUwF2kTd7ZXe4L5KM7qJxoibw=;
 b=SaGuyBj3inYcgeTuRp82Ql+9ltWaKmv0i7MxxyhoZiY89lXo/1sfVUSjdIceM0zljN1fbt3Ds5tiuhKrxq8jby7jk3Zjz1VJfRjc5tZSwf738eCQog/bS4k8YUWijJVba73PCqI7rY7Qw4LOCS+ZdDmjrGu0HhnuFwKEf6xbJ+Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: RE: [PATCH v6 2/9] xen: do not free reserved memory into heap
Thread-Topic: [PATCH v6 2/9] xen: do not free reserved memory into heap
Thread-Index: AQHYekCLapi6TfhNLECKo4lt4smsBq1DqSgAgALDbhA=
Date: Thu, 9 Jun 2022 05:54:37 +0000
Message-ID:
 <DU2PR08MB7325B2A677FCF2FBF905D588F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-3-Penny.Zheng@arm.com>
 <d43d2dbd-6b0e-fb0c-5e0a-d409db4e18e9@xen.org>
In-Reply-To: <d43d2dbd-6b0e-fb0c-5e0a-d409db4e18e9@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B41A76F11B9A594591B05042733E43B7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 29702f3b-87fa-4c16-f86d-08da49dc8e95
x-ms-traffictypediagnostic:
	AM9PR08MB6673:EE_|AM5EUR03FT050:EE_|AM0PR08MB4147:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB41474AA03839CB58A9F2535EF7A79@AM0PR08MB4147.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6673
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4ed2d0f6-24f0-49d7-d228-08da49dc89ae
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 05:54:45.9373
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 29702f3b-87fa-4c16-f86d-08da49dc8e95
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4147

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, June 7, 2022 5:13 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
>
> Hi Penny,
>

Hi Julien,

> On 07/06/2022 08:30, Penny Zheng wrote:
> > Pages used as guest RAM for a static domain shall be reserved to that
> > domain only. So if reserved pages get used for another purpose, users
> > shall not free them back to the heap, even when the last ref gets
> > dropped.
> >
> > free_staticmem_pages will be called by free_heap_pages at runtime when
> > a static domain frees memory, so let's drop the __init flag.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > v6 changes:
> > - adapt to PGC_static
> > - remove #ifdef around function declaration
> > ---
> > v5 changes:
> > - In order to avoid stub functions, we #define PGC_staticmem to
> > non-zero only when CONFIG_STATIC_MEMORY
> > - use "unlikely()" around pg->count_info & PGC_staticmem
> > - remove pointless "if", since mark_page_free() is going to set
> > count_info to PGC_state_free and by consequence clear PGC_staticmem
> > - move #define PGC_staticmem 0 to mm.h
> > ---
> > v4 changes:
> > - no changes
> > ---
> > v3 changes:
> > - fix possible racy issue in free_staticmem_pages()
> > - introduce a stub free_staticmem_pages() for the
> > !CONFIG_STATIC_MEMORY case
> > - move the change to free_heap_pages() to cover other potential call
> > sites
> > - fix the indentation
> > ---
> > v2 changes:
> > - new commit
> > ---
> >   xen/arch/arm/include/asm/mm.h |  4 +++-
> >   xen/common/page_alloc.c       | 12 +++++++++---
> >   xen/include/xen/mm.h          |  2 --
> >   3 files changed, 12 insertions(+), 6 deletions(-)
> >
> > diff --git a/xen/arch/arm/include/asm/mm.h
> > b/xen/arch/arm/include/asm/mm.h index fbff11c468..7442893e77 100644
> > --- a/xen/arch/arm/include/asm/mm.h
> > +++ b/xen/arch/arm/include/asm/mm.h
> > @@ -108,9 +108,11 @@ struct page_info
> >     /* Page is Xen heap? */
> >   #define _PGC_xen_heap     PG_shift(2)
> >   #define PGC_xen_heap      PG_mask(1, 2)
> > -  /* Page is static memory */
>
> NITpicking: You added this comment in patch #1 and are now removing the
> space. Any reason to drop the space?
>
> > +#ifdef CONFIG_STATIC_MEMORY
>
> I think this change ought to be explained in the commit message. AFAIU,
> this is necessary to allow the compiler to remove code and avoid linking
> issues. Is that correct?
>
> > +/* Page is static memory */
> >   #define _PGC_static    PG_shift(3)
> >   #define PGC_static     PG_mask(1, 3)
> > +#endif
> >   /* ... */
> >   /* Page is broken? */
> >   #define _PGC_broken       PG_shift(7)
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 9e5c757847..6876869fa6 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1443,6 +1443,13 @@ static void free_heap_pages(
> >
> >       ASSERT(order <= MAX_ORDER);
> >
> > +    if ( unlikely(pg->count_info & PGC_static) )
> > +    {
> > +        /* Pages of static memory shall not go back to the heap. */
> > +        free_staticmem_pages(pg, 1UL << order, need_scrub);
>
> I can't remember whether I asked this before (I couldn't find a thread).
>
> free_staticmem_pages() doesn't seem to be protected by any lock. So how do
> you prevent concurrent access to the page info with the acquire part?

True, last time you suggested that resv_page_list needs to be protected with
a spinlock (most likely d->page_alloc_lock). I hadn't thought it through;
sorry about that.
For the freeing part, I shall take the lock in arch_free_heap_page(), where
we insert the page into resv_page_list, and release it at the end of
free_staticmem_pages().
And for the acquiring part, I've already put the lock around
page = page_list_remove_head(&d->resv_page_list);

> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 05:59:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 05:59:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344639.570188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBC8-0002CE-EV; Thu, 09 Jun 2022 05:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344639.570188; Thu, 09 Jun 2022 05:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBC8-0002C7-BL; Thu, 09 Jun 2022 05:59:00 +0000
Received: by outflank-mailman (input) for mailman id 344639;
 Thu, 09 Jun 2022 05:58:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jl5s=WQ=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1nzBC7-0002Bx-34
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 05:58:59 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0626.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 40431253-e7b9-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 07:58:58 +0200 (CEST)
Received: from DB6PR0601CA0007.eurprd06.prod.outlook.com (2603:10a6:4:7b::17)
 by AM6PR08MB4641.eurprd08.prod.outlook.com (2603:10a6:20b:d1::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.17; Thu, 9 Jun
 2022 05:58:52 +0000
Received: from DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::ed) by DB6PR0601CA0007.outlook.office365.com
 (2603:10a6:4:7b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Thu, 9 Jun 2022 05:58:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT044.mail.protection.outlook.com (100.127.142.189) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 05:58:52 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Thu, 09 Jun 2022 05:58:52 +0000
Received: from b6fed9fb339c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4F22561D-E5E6-4725-90F3-B2351FC55E9A.1; 
 Thu, 09 Jun 2022 05:58:46 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b6fed9fb339c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 05:58:46 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM9PR08MB6673.eurprd08.prod.outlook.com (2603:10a6:20b:307::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Thu, 9 Jun
 2022 05:58:44 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%7]) with mapi id 15.20.5332.012; Thu, 9 Jun 2022
 05:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40431253-e7b9-11ec-bd2c-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 9/9] xen: retrieve reserved pages on populate_physmap
Thread-Topic: [PATCH v6 9/9] xen: retrieve reserved pages on populate_physmap
Thread-Index: AQHYekCd/J/B/CFgukmRW4QioUnzda1DlC2AgAMCXbA=
Date: Thu, 9 Jun 2022 05:58:44 +0000
Message-ID:
 <DU2PR08MB7325F42617DF4F6B36D1E777F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-10-Penny.Zheng@arm.com>
 <c0dd7fcf-28b5-d206-701c-8c3e62597eb6@suse.com>
In-Reply-To: <c0dd7fcf-28b5-d206-701c-8c3e62597eb6@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: arm.com

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, June 7, 2022 3:58 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>;
> xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v6 9/9] xen: retrieve reserved pages on populate_physmap
>
> On 07.06.2022 09:30, Penny Zheng wrote:
> > +/*
> > + * Acquire a page from reserved page list(resv_page_list), when
> > + * populating memory for static domain on runtime.
> > + */
> > +mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
> > +{
> > +    struct page_info *page;
> > +
> > +    spin_lock(&d->page_alloc_lock);
> > +    /* Acquire a page from reserved page list(resv_page_list). */
> > +    page = page_list_remove_head(&d->resv_page_list);
> > +    spin_unlock(&d->page_alloc_lock);
>
> With page removal done under lock, ...
>
> > +    if ( unlikely(!page) )
> > +        return INVALID_MFN;
> > +
> > +    if ( !prepare_staticmem_pages(page, 1, memflags) )
> > +        goto fail;
> > +
> > +    if ( assign_domstatic_pages(d, page, 1, memflags) )
> > +        goto fail;
> > +
> > +    return page_to_mfn(page);
> > +
> > + fail:
> > +    page_list_add_tail(page, &d->resv_page_list);
> > +    return INVALID_MFN;
>
> ... doesn't re-adding the page to the list also need to be done with the
> lock held?

True, sorry about that.
Like I said in the other thread with Julien, I'll add the missing half:
"
For the freeing part, I shall take the lock in arch_free_heap_page(), where
we insert the page into resv_page_list, and release it at the end of
free_staticmem_pages().
"

>
> Jan
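Jan's point about the failure path can be illustrated with a self-contained sketch. This is a mock (mutex for the spinlock, a hand-rolled list for page_list, head insertion instead of page_list_add_tail, an injected-failure flag standing in for the prepare/assign steps), not the real acquire_reserved_page(); the key property shown is that the fail path re-adds the page under the same lock that protected the removal.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's types; names are illustrative only. */
struct page_info {
    struct page_info *next;
};

struct domain {
    pthread_mutex_t page_alloc_lock;   /* stand-in for d->page_alloc_lock */
    struct page_info *resv_page_list;  /* stand-in for d->resv_page_list */
};

static struct page_info *acquire_reserved_page(struct domain *d,
                                               int prep_fails)
{
    struct page_info *page;

    /* Removal from the reserved list is done under the lock. */
    pthread_mutex_lock(&d->page_alloc_lock);
    page = d->resv_page_list;          /* page_list_remove_head() analogue */
    if ( page )
        d->resv_page_list = page->next;
    pthread_mutex_unlock(&d->page_alloc_lock);

    if ( page == NULL )
        return NULL;                   /* INVALID_MFN in the real code */

    if ( prep_fails )                  /* prepare/assign step failed */
        goto fail;

    return page;

 fail:
    /* Jan's point: the re-add must also happen with the lock held. */
    pthread_mutex_lock(&d->page_alloc_lock);
    page->next = d->resv_page_list;
    d->resv_page_list = page;
    pthread_mutex_unlock(&d->page_alloc_lock);
    return NULL;
}
```

Without the lock around the re-add, a concurrent acquire or free could observe the list mid-update; bracketing both list mutations with the same lock closes that window.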


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 06:09:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 06:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344648.570199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBMH-0003xG-Ey; Thu, 09 Jun 2022 06:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344648.570199; Thu, 09 Jun 2022 06:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBMH-0003x9-Ap; Thu, 09 Jun 2022 06:09:29 +0000
Received: by outflank-mailman (input) for mailman id 344648;
 Thu, 09 Jun 2022 06:09:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzBMG-0003wz-Th; Thu, 09 Jun 2022 06:09:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzBMG-0007dh-QD; Thu, 09 Jun 2022 06:09:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzBMG-0002za-5S; Thu, 09 Jun 2022 06:09:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzBMG-0007O4-4z; Thu, 09 Jun 2022 06:09:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170887-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170887: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 06:09:28 +0000

flight 170887 linux-5.4 real [real]
flight 170893 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170887/
http://logs.test-lab.xenproject.org/osstest/logs/170893/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail REGR. vs. 170736

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170736
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170736
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   14 days
Testing same since   170843  2022-06-06 06:44:17 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1081 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 06:18:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 06:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344661.570221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBUr-0005tn-PS; Thu, 09 Jun 2022 06:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344661.570221; Thu, 09 Jun 2022 06:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBUr-0005tg-Kz; Thu, 09 Jun 2022 06:18:21 +0000
Received: by outflank-mailman (input) for mailman id 344661;
 Thu, 09 Jun 2022 06:18:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ts8Y=WQ=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1nzBUp-0005eB-VB
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 06:18:20 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f48dfa39-e7bb-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 08:18:19 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id b12so14035871ljq.3
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 23:18:19 -0700 (PDT)
Received: from jade.urgonet (h-79-136-84-253.A175.priv.bahnhof.se.
 [79.136.84.253]) by smtp.gmail.com with ESMTPSA id
 u20-20020ac248b4000000b00478d24ad1basm4061130lfg.307.2022.06.08.23.18.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 08 Jun 2022 23:18:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f48dfa39-e7bb-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Ri0u6xOBjOiuO3HIUy3d1X4f5KfG9398JmyGMTYIkKk=;
        b=OZJUhsCXnnFB7y6quvN4Q282K/DVrYsoz2N4EAyYNX7gGu/glzLsKAUpoz3j1hr0LL
         ydCoSJRn7nm+TpyF7ukZm1S+4T/ahh0tC6GlrOl7Yx1HmPoxZIVPAG2QGDX1PYpaRHUD
         B5+8YPJPH6ckdHS0jwAhmCT0Ge/gQCR9AoDku847gv41zYzzH595DQ5yw46rF5KDGSxV
         AoQF3FlhQkh/ETCaiKrExDHBNFKrHQxWnnrNR2lQRGvT51lgNd8GRR8UBROZaHgLbpSL
         PHS7HpCpOoCsakA2SG/DT3I5JL0gp6PC913Tqr1eoIxQWzS0EiIVN2Rs4CPELWuGWZBS
         WNyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Ri0u6xOBjOiuO3HIUy3d1X4f5KfG9398JmyGMTYIkKk=;
        b=JyV28ytMCLFaPoB90sQbfdhLGJynwIm8eK5MmyBC980YDfost63ma7+on3ZwxcscPE
         OS7sDQ7qf9PoboJsOfIfmQc7tZT7+zknEoxc5K77cjSQ7I2jEotZ4U4m5GoCrkcX8Dzg
         HEgwJXv0l9jxZx7zGB2ZFTY4rFZlUYizDFMJyykcBeKUzz4g6F02OB+MNCBqTar4IfZp
         24b+/A0powVSywlMSBkCyqBJICzKtDkbsHi5rAzfA8RUeFhty3T3LIhlvvEcEmDJyMFt
         fMIb25fDaeFrK/ripOW5QocvLslweGr5pdkpJBYd45qbxialWxu5hzjnT1gvVMIldd/v
         tzQw==
X-Gm-Message-State: AOAM531N5CKZL8w/cD1UImuOxKb/tv2O1CGMIu4k3ZhC+uhpT9G8QZOZ
	E8Udc987sJIQYvQ5NXEV08mZnVIS4N9oNw==
X-Google-Smtp-Source: ABdhPJw540PSLNtEStMCasgsWgpyR8rfrBeo7Upyseh9LJipbBEAaC7sQVmQ5z103Y3w6Y9d7ZTmGw==
X-Received: by 2002:a05:651c:1a0d:b0:255:bf5a:3445 with SMTP id by13-20020a05651c1a0d00b00255bf5a3445mr4641115ljb.285.1654755498307;
        Wed, 08 Jun 2022 23:18:18 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended input/output registers
Date: Thu,  9 Jun 2022 08:18:11 +0200
Message-Id: <20220609061812.422130-2-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220609061812.422130-1-jens.wiklander@linaro.org>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SMCCC v1.2 AArch64 allows x0-x17 to be used as both parameter registers
and result registers for the SMC and HVC instructions.

The Arm Firmware Framework for Armv8-A specification makes use of x0-x7 as
parameter and result registers.

Let us add a new interface to support this extended set of input/output
registers.

This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
extended input/output registers") by Sudeep Holla from the Linux kernel.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/arm64/smc.S         | 43 ++++++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/smccc.h | 42 +++++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c              |  2 +-
 3 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
index 91bae62dd4d2..1570bc8eb9d4 100644
--- a/xen/arch/arm/arm64/smc.S
+++ b/xen/arch/arm/arm64/smc.S
@@ -27,3 +27,46 @@ ENTRY(__arm_smccc_1_0_smc)
         stp     x2, x3, [x4, #SMCCC_RES_a2]
 1:
         ret
+
+
+/*
+ * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+ *                        struct arm_smccc_1_2_regs *res)
+ */
+ENTRY(arm_smccc_1_2_smc)
+    /* Save `res` and free a GPR that won't be clobbered */
+    stp     x1, x19, [sp, #-16]!
+
+    /* Ensure `args` won't be clobbered while loading regs in next step */
+    mov	x19, x0
+
+    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
+    ldp	x0, x1, [x19, #0]
+    ldp	x2, x3, [x19, #16]
+    ldp	x4, x5, [x19, #32]
+    ldp	x6, x7, [x19, #48]
+    ldp	x8, x9, [x19, #64]
+    ldp	x10, x11, [x19, #80]
+    ldp	x12, x13, [x19, #96]
+    ldp	x14, x15, [x19, #112]
+    ldp	x16, x17, [x19, #128]
+
+    smc #0
+
+    /* Load the `res` from the stack */
+    ldr	x19, [sp]
+
+    /* Store the registers x0 - x17 into the result structure */
+    stp	x0, x1, [x19, #0]
+    stp	x2, x3, [x19, #16]
+    stp	x4, x5, [x19, #32]
+    stp	x6, x7, [x19, #48]
+    stp	x8, x9, [x19, #64]
+    stp	x10, x11, [x19, #80]
+    stp	x12, x13, [x19, #96]
+    stp	x14, x15, [x19, #112]
+    stp	x16, x17, [x19, #128]
+
+    /* Restore original x19 */
+    ldp     xzr, x19, [sp], #16
+    ret
diff --git a/xen/arch/arm/include/asm/smccc.h b/xen/arch/arm/include/asm/smccc.h
index b3dbeecc90ad..316adf968e74 100644
--- a/xen/arch/arm/include/asm/smccc.h
+++ b/xen/arch/arm/include/asm/smccc.h
@@ -33,6 +33,7 @@
 
 #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
 #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
+#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
 
 /*
  * This file provides common defines for ARM SMC Calling Convention as
@@ -217,6 +218,7 @@ struct arm_smccc_res {
 #ifdef CONFIG_ARM_32
 #define arm_smccc_1_0_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
 #define arm_smccc_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
+
 #else
 
 void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
@@ -265,8 +267,48 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
         else                                                    \
             arm_smccc_1_0_smc(__VA_ARGS__);                     \
     } while ( 0 )
+
+/**
+ * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
+ * @a0-a17 argument values from registers 0 to 17
+ */
+struct arm_smccc_1_2_regs {
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long a8;
+    unsigned long a9;
+    unsigned long a10;
+    unsigned long a11;
+    unsigned long a12;
+    unsigned long a13;
+    unsigned long a14;
+    unsigned long a15;
+    unsigned long a16;
+    unsigned long a17;
+};
 #endif /* CONFIG_ARM_64 */
 
+/**
+ * arm_smccc_1_2_smc() - make SMC calls
+ * @args: arguments passed via struct arm_smccc_1_2_regs
+ * @res: result values via struct arm_smccc_1_2_regs
+ *
+ * This function is used to make SMC calls following SMC Calling Convention
+ * v1.2 or above. The contents of the supplied args are copied from the
+ * structure to registers prior to the SMC instruction. The return values
+ * are updated with the content from registers on return from the SMC
+ * instruction.
+ */
+void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+                       struct arm_smccc_1_2_regs *res);
+
+
 #endif /* __ASSEMBLY__ */
 
 /*
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 676740ef1520..6f90c08a6304 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
     switch ( fid )
     {
     case ARM_SMCCC_VERSION_FID:
-        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
+        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
         return true;
 
     case ARM_SMCCC_ARCH_FEATURES_FID:
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 06:18:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 06:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344660.570210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBUq-0005eT-FX; Thu, 09 Jun 2022 06:18:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344660.570210; Thu, 09 Jun 2022 06:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzBUq-0005eM-C0; Thu, 09 Jun 2022 06:18:20 +0000
Received: by outflank-mailman (input) for mailman id 344660;
 Thu, 09 Jun 2022 06:18:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ts8Y=WQ=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1nzBUo-0005eB-V4
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 06:18:18 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3ce1fa8-e7bb-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 08:18:17 +0200 (CEST)
Received: by mail-lf1-x134.google.com with SMTP id t25so36381884lfg.7
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jun 2022 23:18:17 -0700 (PDT)
Received: from jade.urgonet (h-79-136-84-253.A175.priv.bahnhof.se.
 [79.136.84.253]) by smtp.gmail.com with ESMTPSA id
 u20-20020ac248b4000000b00478d24ad1basm4061130lfg.307.2022.06.08.23.18.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 08 Jun 2022 23:18:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3ce1fa8-e7bb-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HoQViV+2VWli6a10OFSyz3kotcrFDlBPUE1vKpSCf14=;
        b=YJsLxTLfipMxWfPYFRkJI6yrVU+dFUi83/nxDVHQor6gq1TlWEjgQ1JG5H6VkeOWgI
         1foVZKVpEfcWlWl5uOvQTXmrgU1+2iKRi2R304LUV4kYJrondNtdmtkLD05AT4QIfoeC
         aZAzB5LT4Bj8dtdBsE4ZRQMFhxJkXTBQAnAXK1yT+ucxgMglHbtNFFUgXc4kRoudj31w
         h0QAot5kIOGjUKVem0CbW3sc8abiL33e5V9UFC6or12plAs5BPY+4uWsHdN/pE38K3DH
         oaKIEVBcjEKMYuFLqDL61q7GMJhdqnChMuNQT+0K+ZtOE+PiSQLl4OLH0NDCpA5fUdzk
         10Tg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=HoQViV+2VWli6a10OFSyz3kotcrFDlBPUE1vKpSCf14=;
        b=sJkdBbuMAyb3gPSw/VY5CPjSIl7TqdZqv8FobW9ZysJekBJt8ZHTHHsG9d7nzr2/jl
         MhzJu+MVRe5yqlW7eSXXlKKNhgEmfoc8ePxZQMauu2MfusvQW4whSCFnmiLaoEZM2y84
         che/XQg3tVR9/kbtzkCXrA0rw7VbeGg6HUXXKpM39t/q/7aNLNCaGKp1X/qhIaSAnfsK
         sIa1Shy3KlsZq4s2SwGY2fsWKfFr+5VJXqhLDoVpObZzVOfui2NkHK1MqDqq4v1O8B4h
         CKvJPIqOGaHl8pk5nr4fLhpWy9nSs5W30CaYVgAT5fJ9gfikr45niB/LncfUN1Ne3HNr
         s/CA==
X-Gm-Message-State: AOAM53101RQJI9ewVhA4BSCisoFgPHTcYM4kgflWdQlAjsjFKcvRM1Bi
	QxgjoKeyoSUJFUsFitQSTuowD7YJat1VHw==
X-Google-Smtp-Source: ABdhPJwiVuGh8rWfkw2Vdz25vp18A0RSAUT86dv2EQRqfS57+ZjW6/bpaBZJCzgJrlbIpYta1AN9nA==
X-Received: by 2002:a05:6512:3b98:b0:479:1313:35ab with SMTP id g24-20020a0565123b9800b00479131335abmr21503278lfv.399.1654755497012;
        Wed, 08 Jun 2022 23:18:17 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v2 0/2] Xen FF-A mediator
Date: Thu,  9 Jun 2022 08:18:10 +0200
Message-Id: <20220609061812.422130-1-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch set adds an FF-A [1] mediator modeled after the TEE mediator
already present in Xen. The FF-A mediator implements the subset of the FF-A
1.1 specification needed to communicate with OP-TEE using FF-A as the
transport mechanism instead of SMC/HVC as with the TEE mediator. It allows
a design in OP-TEE similar to that of the TEE mediator, where OP-TEE
presents one virtual partition of itself to each guest in Xen.

The FF-A mediator is generic in the sense that it has nothing OP-TEE
specific, except that only the subset needed for OP-TEE is implemented so
far. The hooks needed to inform OP-TEE that a guest is created or destroyed
are part of the FF-A specification.

It should be possible to extend the FF-A mediator to implement a larger
portion of the FF-A 1.1 specification without breaking the way OP-TEE is
communicated with here. So it should be possible to support any TEE or
Secure Partition that uses FF-A as transport with this mediator.

[1] https://developer.arm.com/documentation/den0077/latest

Thanks,
Jens

v1->v2:
* Rebased on staging to resolve some merge conflicts as requested


Jens Wiklander (2):
  xen/arm: smccc: add support for SMCCCv1.2 extended input/output
    registers
  xen/arm: add FF-A mediator

 xen/arch/arm/Kconfig              |   11 +
 xen/arch/arm/Makefile             |    1 +
 xen/arch/arm/arm64/smc.S          |   43 +
 xen/arch/arm/domain.c             |   10 +
 xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h |    4 +
 xen/arch/arm/include/asm/ffa.h    |   71 ++
 xen/arch/arm/include/asm/smccc.h  |   42 +
 xen/arch/arm/vsmc.c               |   19 +-
 9 files changed, 1821 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/arch/arm/include/asm/ffa.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 06:18:26 2022
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v2 2/2] xen/arm: add FF-A mediator
Date: Thu,  9 Jun 2022 08:18:12 +0200
Message-Id: <20220609061812.422130-3-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220609061812.422130-1-jens.wiklander@linaro.org>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds an FF-A version 1.1 [1] mediator to communicate with a Secure
Partition in the secure world.

The implementation is the bare minimum to be able to communicate with
OP-TEE running as an SPMC at S-EL1.

This is loosely based on the TEE mediator framework and the OP-TEE
mediator.

[1] https://developer.arm.com/documentation/den0077/latest
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/Kconfig              |   11 +
 xen/arch/arm/Makefile             |    1 +
 xen/arch/arm/domain.c             |   10 +
 xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h |    4 +
 xen/arch/arm/include/asm/ffa.h    |   71 ++
 xen/arch/arm/vsmc.c               |   17 +-
 7 files changed, 1735 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/arch/arm/include/asm/ffa.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4d3..5b75067e2745 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -106,6 +106,17 @@ config TEE
 
 source "arch/arm/tee/Kconfig"
 
+config FFA
+	bool "Enable FF-A mediator support" if EXPERT
+	default n
+	depends on ARM_64
+	help
+	  This option enables a minimal FF-A mediator. The mediator is
+	  generic as it follows the FF-A specification [1], but it only
+	  implements a small subset of the specification.
+
+	  [1] https://developer.arm.com/documentation/den0077/latest
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 1d862351d111..dbf5e593a069 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -20,6 +20,7 @@ obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_FFA) += ffa.o
 obj-y += gic.o
 obj-y += gic-v2.o
 obj-$(CONFIG_GICV3) += gic-v3.o
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8110c1df8638..a93e6a9c4aef 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -27,6 +27,7 @@
 #include <asm/cpufeature.h>
 #include <asm/current.h>
 #include <asm/event.h>
+#include <asm/ffa.h>
 #include <asm/gic.h>
 #include <asm/guest_atomics.h>
 #include <asm/irq.h>
@@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
         goto fail;
 
+    if ( (rc = ffa_domain_init(d)) != 0 )
+        goto fail;
+
     update_domain_wallclock_time(d);
 
     /*
@@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
 enum {
     PROG_pci = 1,
     PROG_tee,
+    PROG_ffa,
     PROG_xen,
     PROG_page,
     PROG_mapping,
@@ -1046,6 +1051,11 @@ int domain_relinquish_resources(struct domain *d)
         if (ret )
             return ret;
 
+    PROGRESS(ffa):
+        ret = ffa_relinquish_resources(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
new file mode 100644
index 000000000000..9063b7f2b59e
--- /dev/null
+++ b/xen/arch/arm/ffa.c
@@ -0,0 +1,1624 @@
+/*
+ * xen/arch/arm/ffa.c
+ *
+ * Arm Firmware Framework for Armv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2021  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <xen/domain_page.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+#include <xen/sizes.h>
+#include <xen/bitops.h>
+
+#include <asm/smccc.h>
+#include <asm/event.h>
+#include <asm/ffa.h>
+#include <asm/regs.h>
+
+/* Error codes */
+#define FFA_RET_OK			0
+#define FFA_RET_NOT_SUPPORTED		-1
+#define FFA_RET_INVALID_PARAMETERS	-2
+#define FFA_RET_NO_MEMORY		-3
+#define FFA_RET_BUSY			-4
+#define FFA_RET_INTERRUPTED		-5
+#define FFA_RET_DENIED			-6
+#define FFA_RET_RETRY			-7
+#define FFA_RET_ABORTED			-8
+
+/* FFA_VERSION helpers */
+#define FFA_VERSION_MAJOR		_AC(1,U)
+#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
+#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
+#define FFA_VERSION_MINOR		_AC(1,U)
+#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
+#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
+#define MAKE_FFA_VERSION(major, minor)	\
+	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
+	 ((minor) & FFA_VERSION_MINOR_MASK))
+
+#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
+#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
+						 FFA_VERSION_MINOR)
+
+
+#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
+
+/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
+#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
+
+/* Memory access permissions: Read-write */
+#define FFA_MEM_ACC_RW			_AC(0x2,U)
+
+/* Clear memory before mapping in receiver */
+#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
+/* Relayer may time slice this operation */
+#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
+/* Clear memory after receiver relinquishes it */
+#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
+
+/* Share memory transaction */
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
+/* Relayer must choose the alignment boundary */
+#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
+
+#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
+
+/* Framework direct request/response */
+#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
+#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
+#define FFA_MSG_PSCI			_AC(0x0,U)
+#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
+#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
+#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
+#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
+
+/*
+ * Flags used for the FFA_PARTITION_INFO_GET return message:
+ * BIT(0): Supports receipt of direct requests
+ * BIT(1): Can send direct requests
+ * BIT(2): Can send and receive indirect messages
+ * BIT(3): Supports receipt of notifications
+ * BIT(4-5): Partition ID is a PE endpoint ID
+ */
+#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
+#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
+#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
+#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
+#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
+#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
+#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
+#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
+#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
+
+/* Function IDs */
+#define FFA_ERROR			_AC(0x84000060,U)
+#define FFA_SUCCESS_32			_AC(0x84000061,U)
+#define FFA_SUCCESS_64			_AC(0xC4000061,U)
+#define FFA_INTERRUPT			_AC(0x84000062,U)
+#define FFA_VERSION			_AC(0x84000063,U)
+#define FFA_FEATURES			_AC(0x84000064,U)
+#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
+#define FFA_RX_RELEASE			_AC(0x84000065,U)
+#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
+#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
+#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
+#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
+#define FFA_ID_GET			_AC(0x84000069,U)
+#define FFA_SPM_ID_GET			_AC(0x84000085,U)
+#define FFA_MSG_WAIT			_AC(0x8400006B,U)
+#define FFA_MSG_YIELD			_AC(0x8400006C,U)
+#define FFA_MSG_RUN			_AC(0x8400006D,U)
+#define FFA_MSG_SEND2			_AC(0x84000086,U)
+#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
+#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
+#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
+#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
+#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
+#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
+#define FFA_MEM_LEND_32			_AC(0x84000072,U)
+#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
+#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
+#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
+#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
+#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
+#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
+#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
+#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
+#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
+#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
+#define FFA_MSG_SEND			_AC(0x8400006E,U)
+#define FFA_MSG_POLL			_AC(0x8400006A,U)
+
+/* Partition information descriptor */
+struct ffa_partition_info_1_0 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+};
+
+struct ffa_partition_info_1_1 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+    uint8_t uuid[16];
+};
+
+/* Constituent memory region descriptor */
+struct ffa_address_range {
+    uint64_t address;
+    uint32_t page_count;
+    uint32_t reserved;
+};
+
+/* Composite memory region descriptor */
+struct ffa_mem_region {
+    uint32_t total_page_count;
+    uint32_t address_range_count;
+    uint64_t reserved;
+    struct ffa_address_range address_range_array[];
+};
+
+/* Memory access permissions descriptor */
+struct ffa_mem_access_perm {
+    uint16_t endpoint_id;
+    uint8_t perm;
+    uint8_t flags;
+};
+
+/* Endpoint memory access descriptor */
+struct ffa_mem_access {
+    struct ffa_mem_access_perm access_perm;
+    uint32_t region_offs;
+    uint64_t reserved;
+};
+
+/* Lend, donate or share memory transaction descriptor */
+struct ffa_mem_transaction_1_0 {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t reserved0;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t reserved1;
+    uint32_t mem_access_count;
+    struct ffa_mem_access mem_access_array[];
+};
+
+struct ffa_mem_transaction_1_1 {
+    uint16_t sender_id;
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t mem_access_size;
+    uint32_t mem_access_count;
+    uint32_t mem_access_offs;
+    uint8_t reserved[12];
+};
+
+/*
+ * The parts needed from struct ffa_mem_transaction_1_0 or struct
+ * ffa_mem_transaction_1_1, used to provide an abstraction of difference in
+ * data structures between version 1.0 and 1.1. This is just an internal
+ * interface and can be changed without changing any ABI.
+ */
+struct ffa_mem_transaction_x {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t flags;
+    uint8_t mem_access_size;
+    uint8_t mem_access_count;
+    uint16_t mem_access_offs;
+    uint64_t global_handle;
+    uint64_t tag;
+};
+
+/* Endpoint RX/TX descriptor */
+struct ffa_endpoint_rxtx_descriptor_1_0 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_range_count;
+    uint32_t tx_range_count;
+};
+
+struct ffa_endpoint_rxtx_descriptor_1_1 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_region_offs;
+    uint32_t tx_region_offs;
+};
+
+struct ffa_ctx {
+    void *rx;
+    void *tx;
+    struct page_info *rx_pg;
+    struct page_info *tx_pg;
+    unsigned int page_count;
+    uint32_t guest_vers;
+    bool tx_is_mine;
+    bool interrupted;
+};
+
+struct ffa_shm_mem {
+    struct list_head list;
+    uint16_t sender_id;
+    uint16_t ep_id;     /* endpoint, the one lending */
+    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
+    unsigned int page_count;
+    struct page_info *pages[];
+};
+
+struct mem_frag_state {
+    struct list_head list;
+    struct ffa_shm_mem *shm;
+    uint32_t range_count;
+    unsigned int current_page_idx;
+    unsigned int frag_offset;
+    unsigned int range_offset;
+    uint8_t *buf;
+    unsigned int buf_size;
+    struct ffa_address_range range;
+};
+
+/*
+ * Our rx/tx buffers shared with the SPMC
+ */
+static uint32_t ffa_version;
+static uint16_t *subsr_vm_created;
+static unsigned int subsr_vm_created_count;
+static uint16_t *subsr_vm_destroyed;
+static unsigned int subsr_vm_destroyed_count;
+static void *ffa_rx;
+static void *ffa_tx;
+static unsigned int ffa_page_count;
+static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;
+
+static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
+static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);
+static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;
+
+static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
+
+static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
+{
+    return (uint64_t)reg0 << 32 | reg1;
+}
+
+static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
+{
+    *reg0 = val >> 32;
+    *reg1 = val;
+}
+
+static bool ffa_get_version(uint32_t *vers)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
+    {
+        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
+        return false;
+    }
+
+    *vers = resp.a0;
+    return true;
+}
+
+static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
+                             uint32_t page_count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+#ifdef CONFIG_ARM_64
+        .a0 = FFA_RXTX_MAP_64,
+#endif
+#ifdef CONFIG_ARM_32
+        .a0 = FFA_RXTX_MAP_32,
+#endif
+        .a1 = tx_addr, .a2 = rx_addr,
+        .a3 = page_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                       uint32_t w4, uint32_t w5,
+                                       uint32_t *count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
+        .a5 = w5,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    *count = resp.a2;
+
+    return FFA_RET_OK;
+}
+
+static uint32_t ffa_rx_release(void)
+{
+    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
+                             register_t addr, uint32_t pg_count,
+                             uint64_t *handle)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
+        .a4 = pg_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    /*
+     * For arm64 we must use 64-bit calling convention if the buffer isn't
+     * passed in our tx buffer.
+     */
+    if ( sizeof(addr) > 4 && addr )
+        arg.a0 = FFA_MEM_SHARE_64;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        *handle = reg_pair_to_64(resp.a3, resp.a2);
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        *handle = reg_pair_to_64(resp.a2, resp.a1);
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
+                               uint16_t sender_id)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
+        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
+                                uint32_t flags)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    if ( resp.a0 == FFA_ERROR )
+    {
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    }
+
+    return FFA_RET_OK;
+}
+
+static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
+                                      uint8_t msg)
+{
+    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
+    int32_t res;
+
+    if ( msg != FFA_MSG_SEND_VM_CREATED && msg != FFA_MSG_SEND_VM_DESTROYED )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( msg == FFA_MSG_SEND_VM_CREATED )
+        exp_resp |= FFA_MSG_RESP_VM_CREATED;
+    else
+        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
+
+    do {
+        const struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
+            .a1 = sp_id,
+            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
+            .a5 = vm_id,
+        };
+        struct arm_smccc_1_2_regs resp;
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
+        {
+            /*
+             * This is an invalid response, likely due to some error in the
+             * implementation of the ABI.
+             */
+            return FFA_RET_INVALID_PARAMETERS;
+        }
+        res = resp.a3;
+    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
+
+    return res;
+}
+
+static uint16_t get_vm_id(struct domain *d)
+{
+    /* +1 since 0 is reserved for the hypervisor in FF-A */
+    return d->domain_id + 1;
+}
+
+static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
+                     register_t v2, register_t v3, register_t v4, register_t v5,
+                     register_t v6, register_t v7)
+{
+    set_user_reg(regs, 0, v0);
+    set_user_reg(regs, 1, v1);
+    set_user_reg(regs, 2, v2);
+    set_user_reg(regs, 3, v3);
+    set_user_reg(regs, 4, v4);
+    set_user_reg(regs, 5, v5);
+    set_user_reg(regs, 6, v6);
+    set_user_reg(regs, 7, v7);
+}
+
+static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
+{
+    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
+}
+
+static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
+                             uint32_t w3)
+{
+    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
+}
+
+static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
+                             uint32_t handle_hi, uint32_t frag_offset,
+                             uint16_t sender_id)
+{
+    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
+             (uint32_t)sender_id << 16, 0, 0, 0);
+}
+
+static void handle_version(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t vers = get_user_reg(regs, 1);
+
+    if ( vers < FFA_VERSION_1_1 )
+        vers = FFA_VERSION_1_0;
+    else
+        vers = FFA_VERSION_1_1;
+
+    ctx->guest_vers = vers;
+    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
+}
+
+static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                                register_t rx_addr, uint32_t page_count)
+{
+    uint32_t ret = FFA_RET_NOT_SUPPORTED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct page_info *tx_pg;
+    struct page_info *rx_pg;
+    p2m_type_t t;
+    void *rx;
+    void *tx;
+
+    if ( !smccc_is_conv_64(fid) )
+    {
+        tx_addr &= UINT32_MAX;
+        rx_addr &= UINT32_MAX;
+    }
+
+    /* For now to keep things simple, only deal with a single page */
+    if ( page_count != 1 )
+        return FFA_RET_NOT_SUPPORTED;
+
+    /* Already mapped */
+    if ( ctx->rx )
+        return FFA_RET_DENIED;
+
+    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
+    if ( !tx_pg )
+        return FFA_RET_NOT_SUPPORTED;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_tx_pg;
+
+    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
+    if ( !rx_pg )
+        goto err_put_tx_pg;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_rx_pg;
+
+    tx = __map_domain_page_global(tx_pg);
+    if ( !tx )
+        goto err_put_rx_pg;
+
+    rx = __map_domain_page_global(rx_pg);
+    if ( !rx )
+        goto err_unmap_tx;
+
+    ctx->rx = rx;
+    ctx->tx = tx;
+    ctx->rx_pg = rx_pg;
+    ctx->tx_pg = tx_pg;
+    ctx->page_count = 1;
+    ctx->tx_is_mine = true;
+    return FFA_RET_OK;
+
+err_unmap_tx:
+    unmap_domain_page_global(tx);
+err_put_rx_pg:
+    put_page(rx_pg);
+err_put_tx_pg:
+    put_page(tx_pg);
+    return ret;
+}
+
+static uint32_t handle_rxtx_unmap(void)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t ret;
+
+    if ( !ctx->rx )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    ret = ffa_rxtx_unmap(get_vm_id(d));
+    if ( ret )
+        return ret;
+
+    unmap_domain_page_global(ctx->rx);
+    unmap_domain_page_global(ctx->tx);
+    put_page(ctx->rx_pg);
+    put_page(ctx->tx_pg);
+    ctx->rx = NULL;
+    ctx->tx = NULL;
+    ctx->rx_pg = NULL;
+    ctx->tx_pg = NULL;
+    ctx->page_count = 0;
+    ctx->tx_is_mine = false;
+
+    return FFA_RET_OK;
+}
+
+static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                          uint32_t w4, uint32_t w5,
+                                          uint32_t *count)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    if ( !ffa_page_count )
+        return FFA_RET_DENIED;
+
+    spin_lock(&ffa_buffer_lock);
+    if ( !ctx->page_count || !ctx->tx_is_mine )
+        goto out;
+    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
+    if ( ret )
+        goto out;
+    if ( ctx->guest_vers == FFA_VERSION_1_0 )
+    {
+        size_t n;
+        struct ffa_partition_info_1_1 *src = ffa_rx;
+        struct ffa_partition_info_1_0 *dst = ctx->rx;
+
+        for ( n = 0; n < *count; n++ )
+        {
+            dst[n].id = src[n].id;
+            dst[n].execution_context = src[n].execution_context;
+            dst[n].partition_properties = src[n].partition_properties;
+        }
+    }
+    else
+    {
+        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
+
+        memcpy(ctx->rx, ffa_rx, sz);
+    }
+    ffa_rx_release();
+    ctx->tx_is_mine = false;
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    return ret;
+}
+
+static uint32_t handle_rx_release(void)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    spin_lock(&ffa_buffer_lock);
+    if ( !ctx->page_count || ctx->tx_is_mine )
+        goto out;
+    ret = FFA_RET_OK;
+    ctx->tx_is_mine = true;
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    return ret;
+}
+
+static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
+    struct arm_smccc_1_2_regs resp = { };
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t src_dst;
+    uint64_t mask;
+
+    if ( smccc_is_conv_64(fid) )
+        mask = 0xffffffffffffffff;
+    else
+        mask = 0xffffffff;
+
+    src_dst = get_user_reg(regs, 1);
+    if ( (src_dst >> 16) != get_vm_id(d) )
+    {
+        resp.a0 = FFA_ERROR;
+        resp.a2 = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    arg.a1 = src_dst;
+    arg.a2 = get_user_reg(regs, 2) & mask;
+    arg.a3 = get_user_reg(regs, 3) & mask;
+    arg.a4 = get_user_reg(regs, 4) & mask;
+    arg.a5 = get_user_reg(regs, 5) & mask;
+    arg.a6 = get_user_reg(regs, 6) & mask;
+    arg.a7 = get_user_reg(regs, 7) & mask;
+
+    while ( true ) {
+        arm_smccc_1_2_smc(&arg, &resp);
+
+        switch ( resp.a0 )
+        {
+        case FFA_INTERRUPT:
+            ctx->interrupted = true;
+            goto out;
+        case FFA_ERROR:
+        case FFA_SUCCESS_32:
+        case FFA_SUCCESS_64:
+        case FFA_MSG_SEND_DIRECT_RESP_32:
+        case FFA_MSG_SEND_DIRECT_RESP_64:
+            goto out;
+        default:
+            /* Bad fid, report back. */
+            memset(&arg, 0, sizeof(arg));
+            arg.a0 = FFA_ERROR;
+            arg.a1 = src_dst;
+            arg.a2 = FFA_RET_NOT_SUPPORTED;
+            continue;
+        }
+    }
+
+out:
+    set_user_reg(regs, 0, resp.a0);
+    set_user_reg(regs, 1, resp.a1 & mask);
+    set_user_reg(regs, 2, resp.a2 & mask);
+    set_user_reg(regs, 3, resp.a3 & mask);
+    set_user_reg(regs, 4, resp.a4 & mask);
+    set_user_reg(regs, 5, resp.a5 & mask);
+    set_user_reg(regs, 6, resp.a6 & mask);
+    set_user_reg(regs, 7, resp.a7 & mask);
+}
+
+static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
+                         struct ffa_address_range *range, uint32_t range_count,
+                         unsigned int start_page_idx,
+                         unsigned int *last_page_idx)
+{
+    unsigned int pg_idx = start_page_idx;
+    unsigned long gfn;
+    unsigned int n;
+    unsigned int m;
+    p2m_type_t t;
+    uint64_t addr;
+
+    for ( n = 0; n < range_count; n++ )
+    {
+        for ( m = 0; m < range[n].page_count; m++ )
+        {
+            if ( pg_idx >= shm->page_count )
+                return FFA_RET_INVALID_PARAMETERS;
+
+            addr = read_atomic(&range[n].address);
+            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
+            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
+            if ( !shm->pages[pg_idx] )
+                return FFA_RET_DENIED;
+            pg_idx++;
+            /* Only normal RAM for now */
+            if ( t != p2m_ram_rw )
+                return FFA_RET_DENIED;
+        }
+    }
+
+    *last_page_idx = pg_idx;
+
+    return FFA_RET_OK;
+}
+
+static void put_shm_pages(struct ffa_shm_mem *shm)
+{
+    unsigned int n;
+
+    /* The loop condition already stops at the first NULL entry */
+    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
+    {
+        put_page(shm->pages[n]);
+        shm->pages[n] = NULL;
+    }
+}
+
+static void init_range(struct ffa_address_range *addr_range,
+                       paddr_t pa)
+{
+    memset(addr_range, 0, sizeof(*addr_range));
+    addr_range->address = pa;
+    addr_range->page_count = 1;
+}
+
+static int share_shm(struct ffa_shm_mem *shm)
+{
+    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
+    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
+    struct ffa_mem_access *mem_access_array;
+    struct ffa_mem_region *region_descr;
+    struct ffa_address_range *addr_range;
+    paddr_t pa;
+    paddr_t last_pa;
+    unsigned int n;
+    uint32_t frag_len;
+    uint32_t tot_len;
+    int ret;
+    unsigned int range_count;
+    unsigned int range_base;
+    bool first;
+
+    memset(descr, 0, sizeof(*descr));
+    descr->sender_id = shm->sender_id;
+    descr->global_handle = shm->handle;
+    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
+    descr->mem_access_count = 1;
+    descr->mem_access_size = sizeof(*mem_access_array);
+    descr->mem_access_offs = sizeof(*descr);
+    mem_access_array = (void *)(descr + 1);
+    region_descr = (void *)(mem_access_array + 1);
+
+    memset(mem_access_array, 0, sizeof(*mem_access_array));
+    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
+    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
+    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
+
+    memset(region_descr, 0, sizeof(*region_descr));
+    region_descr->total_page_count = shm->page_count;
+
+    region_descr->address_range_count = 1;
+    last_pa = page_to_maddr(shm->pages[0]);
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            continue;
+        }
+        region_descr->address_range_count++;
+    }
+
+    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
+              sizeof(*region_descr) +
+              region_descr->address_range_count * sizeof(*addr_range);
+
+    addr_range = region_descr->address_range_array;
+    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
+    last_pa = page_to_maddr(shm->pages[0]);
+    init_range(addr_range, last_pa);
+    first = true;
+    range_count = 1;
+    range_base = 0;
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            addr_range->page_count++;
+            continue;
+        }
+
+        if ( frag_len == max_frag_len )
+        {
+            if ( first )
+            {
+                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+                first = false;
+            }
+            else
+            {
+                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+            }
+            if ( ret <= 0 )
+                return ret;
+            range_base = range_count;
+            range_count = 0;
+            frag_len = sizeof(*addr_range);
+            addr_range = ffa_tx;
+        }
+        else
+        {
+            frag_len += sizeof(*addr_range);
+            addr_range++;
+        }
+        init_range(addr_range, pa);
+        range_count++;
+    }
+
+    if ( first )
+        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    else
+        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+}
+
+static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
+                                struct ffa_mem_transaction_x *trans)
+{
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint32_t count;
+    uint32_t offs;
+    uint32_t size;
+
+    if ( ffa_vers >= FFA_VERSION_1_1 )
+    {
+        struct ffa_mem_transaction_1_1 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = read_atomic(&descr->mem_access_size);
+        offs = read_atomic(&descr->mem_access_offs);
+    }
+    else
+    {
+        struct ffa_mem_transaction_1_0 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = sizeof(struct ffa_mem_access);
+        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
+    }
+
+    if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
+         count > UINT8_MAX || offs > UINT16_MAX )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Check that the endpoint memory access descriptor array fits */
+    if ( size * count + offs > blen )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    trans->mem_reg_attr = mem_reg_attr;
+    trans->flags = flags;
+    trans->mem_access_size = size;
+    trans->mem_access_count = count;
+    trans->mem_access_offs = offs;
+    return 0;
+}
+
+static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
+                              unsigned int frag_len)
+{
+    struct domain *d = current->domain;
+    unsigned int o = offs;
+    unsigned int l;
+    int ret;
+
+    if ( frag_len < o )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Fill up the first struct ffa_address_range */
+    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
+    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
+    s->range_offset += l;
+    o += l;
+    if ( s->range_offset != sizeof(s->range) )
+        goto out;
+    s->range_offset = 0;
+
+    while ( true )
+    {
+        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
+                            &s->current_page_idx);
+        if ( ret )
+            return ret;
+        if ( s->range_count == 1 )
+            return 0;
+        s->range_count--;
+        if ( frag_len - o < sizeof(s->range) )
+            break;
+        memcpy(&s->range, s->buf + o, sizeof(s->range));
+        o += sizeof(s->range);
+    }
+
+    /* Collect any remaining bytes for the next struct ffa_address_range */
+    s->range_offset = frag_len - o;
+    memcpy(&s->range, s->buf + o, frag_len - o);
+out:
+    s->frag_offset += frag_len;
+    return s->frag_offset;
+}
+
+static void handle_mem_share(struct cpu_user_regs *regs)
+{
+    uint32_t tot_len = get_user_reg(regs, 1);
+    uint32_t frag_len = get_user_reg(regs, 2);
+    uint64_t addr = get_user_reg(regs, 3);
+    uint32_t page_count = get_user_reg(regs, 4);
+    struct ffa_mem_transaction_x trans;
+    struct ffa_mem_access *mem_access;
+    struct ffa_mem_region *region_descr;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct ffa_shm_mem *shm = NULL;
+    unsigned int last_page_idx = 0;
+    uint32_t range_count;
+    uint32_t region_offs;
+    int ret = FFA_RET_DENIED;
+    uint32_t handle_hi = 0;
+    uint32_t handle_lo = 0;
+
+    spin_lock(&ffa_buffer_lock);
+
+    /*
+     * We're only accepting memory transaction descriptors via the rx/tx
+     * buffer.
+     */
+    if ( addr )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* Check that the fragment length doesn't exceed the total length */
+    if ( frag_len > tot_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    if ( frag_len > ctx->page_count * PAGE_SIZE )
+        goto out_unlock;
+
+    if ( !ffa_page_count )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out_unlock;
+    }
+
+    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
+    if ( ret )
+        goto out_unlock;
+
+    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* We only support sharing with a single SP for now */
+    if ( trans.mem_access_count != 1 )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    if ( trans.sender_id != get_vm_id(d) )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* Check that it fits in the supplied data */
+    if ( trans.mem_access_offs + trans.mem_access_size > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
+    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_offs = read_atomic(&mem_access->region_offs);
+    if ( sizeof(*region_descr) + region_offs > frag_len )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
+    range_count = read_atomic(&region_descr->address_range_count);
+    page_count = read_atomic(&region_descr->total_page_count);
+
+    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
+    if ( !shm )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out;
+    }
+    shm->sender_id = trans.sender_id;
+    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+    shm->page_count = page_count;
+
+    if ( frag_len != tot_len )
+    {
+        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
+
+        if ( !s )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out;
+        }
+        s->shm = shm;
+        s->range_count = range_count;
+        s->buf = ctx->tx;
+        s->buf_size = ffa_page_count * PAGE_SIZE;
+        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
+                                 frag_len);
+        if ( ret <= 0 )
+        {
+            xfree(s);
+            if ( ret < 0 )
+                goto out;
+        }
+        else
+        {
+            shm->handle = next_handle++;
+            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+            spin_lock(&ffa_mem_list_lock);
+            list_add_tail(&s->list, &ffa_frag_list);
+            spin_unlock(&ffa_mem_list_lock);
+        }
+        goto out_unlock;
+    }
+
+    /* Check that the composite memory region descriptor fits */
+    if ( sizeof(*region_descr) + region_offs +
+         range_count * sizeof(struct ffa_address_range) > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
+                        0, &last_page_idx);
+    if ( ret )
+        goto out;
+    if ( last_page_idx != shm->page_count )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* Note that share_shm() uses our tx buffer */
+    ret = share_shm(shm);
+    if ( ret )
+        goto out;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_add_tail(&shm->list, &ffa_mem_list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+
+out:
+    if ( ret && shm )
+    {
+        put_shm_pages(shm);
+        xfree(shm);
+    }
+out_unlock:
+    spin_unlock(&ffa_buffer_lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static struct mem_frag_state *find_frag_state(uint64_t handle)
+{
+    struct mem_frag_state *s;
+
+    list_for_each_entry(s, &ffa_frag_list, list)
+        if ( s->shm->handle == handle )
+            return s;
+
+    return NULL;
+}
+
+static void handle_mem_frag_tx(struct cpu_user_regs *regs)
+{
+    uint32_t frag_len = get_user_reg(regs, 3);
+    uint32_t handle_lo = get_user_reg(regs, 1);
+    uint32_t handle_hi = get_user_reg(regs, 2);
+    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
+    struct mem_frag_state *s;
+    uint16_t sender_id = 0;
+    int ret;
+
+    spin_lock(&ffa_buffer_lock);
+    s = find_frag_state(handle);
+    if ( !s )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+    sender_id = s->shm->sender_id;
+
+    if ( frag_len > s->buf_size )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = add_mem_share_frag(s, 0, frag_len);
+    if ( ret == 0 )
+    {
+        /* Note that share_shm() uses our tx buffer */
+        ret = share_shm(s->shm);
+        if ( ret == 0 )
+        {
+            spin_lock(&ffa_mem_list_lock);
+            list_add_tail(&s->shm->list, &ffa_mem_list);
+            spin_unlock(&ffa_mem_list_lock);
+        }
+        else
+        {
+            put_shm_pages(s->shm);
+            xfree(s->shm);
+        }
+    }
+    else if ( ret < 0 )
+    {
+        put_shm_pages(s->shm);
+        xfree(s->shm);
+    }
+
+    if ( ret <= 0 )
+    {
+        /* The transaction is complete or failed, drop the fragment state */
+        spin_lock(&ffa_mem_list_lock);
+        list_del(&s->list);
+        spin_unlock(&ffa_mem_list_lock);
+        xfree(s);
+    }
+out:
+    spin_unlock(&ffa_buffer_lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
+{
+    struct ffa_shm_mem *shm;
+    uint32_t handle_hi;
+    uint32_t handle_lo;
+    int ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_for_each_entry(shm, &ffa_mem_list, list)
+    {
+        if ( shm->handle == handle )
+            goto found_it;
+    }
+    shm = NULL;
+found_it:
+    spin_unlock(&ffa_mem_list_lock);
+
+    if ( !shm )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    reg_pair_from_64(&handle_hi, &handle_lo, handle);
+    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
+    if ( ret )
+        return ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_del(&shm->list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    put_shm_pages(shm);
+    xfree(shm);
+
+    return ret;
+}
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t count;
+    uint32_t e;
+
+    if ( !ctx )
+        return false;
+
+    switch ( fid )
+    {
+    case FFA_VERSION:
+        handle_version(regs);
+        return true;
+    case FFA_ID_GET:
+        set_regs_success(regs, get_vm_id(d), 0);
+        return true;
+    case FFA_RXTX_MAP_32:
+#ifdef CONFIG_ARM_64
+    case FFA_RXTX_MAP_64:
+#endif
+        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
+                            get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_RXTX_UNMAP:
+        e = handle_rxtx_unmap();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_PARTITION_INFO_GET:
+        e = handle_partition_info_get(get_user_reg(regs, 1),
+                                      get_user_reg(regs, 2),
+                                      get_user_reg(regs, 3),
+                                      get_user_reg(regs, 4),
+                                      get_user_reg(regs, 5), &count);
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, count, 0);
+        return true;
+    case FFA_RX_RELEASE:
+        e = handle_rx_release();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MSG_SEND_DIRECT_REQ_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MSG_SEND_DIRECT_REQ_64:
+#endif
+        handle_msg_send_direct_req(regs, fid);
+        return true;
+    case FFA_MEM_SHARE_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MEM_SHARE_64:
+#endif
+        handle_mem_share(regs);
+        return true;
+    case FFA_MEM_RECLAIM:
+        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
+                                              get_user_reg(regs, 1)),
+                               get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MEM_FRAG_TX:
+        handle_mem_frag_tx(regs);
+        return true;
+
+    default:
+        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
+        return false;
+    }
+}
+
+int ffa_domain_init(struct domain *d)
+{
+    struct ffa_ctx *ctx;
+    unsigned int n;
+    unsigned int m;
+    unsigned int c_pos;
+    int32_t res;
+
+    if ( !ffa_version )
+        return 0;
+
+    ctx = xzalloc(struct ffa_ctx);
+    if ( !ctx )
+        return -ENOMEM;
+
+    for ( n = 0; n < subsr_vm_created_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_CREATED);
+        if ( res )
+        {
+            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_created[n], res);
+            c_pos = n;
+            goto err;
+        }
+    }
+
+    d->arch.ffa = ctx;
+
+    return 0;
+
+err:
+    /* Undo any already sent VM created messages */
+    for ( n = 0; n < c_pos; n++ )
+        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
+            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
+                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
+                                       FFA_MSG_SEND_VM_DESTROYED);
+
+    xfree(ctx);
+
+    return -ENOMEM;
+}
+
+int ffa_relinquish_resources(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.ffa;
+    unsigned int n;
+    int32_t res;
+
+    if ( !ctx )
+        return 0;
+
+    for ( n = 0; n < subsr_vm_destroyed_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_DESTROYED);
+
+        if ( res )
+            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_destroyed[n], res);
+    }
+
+    XFREE(d->arch.ffa);
+
+    return 0;
+}
+
+static bool __init init_subscribers(void)
+{
+    struct ffa_partition_info_1_1 *fpi;
+    bool ret = false;
+    uint32_t count;
+    uint32_t e;
+    uint32_t n;
+    uint32_t c_pos;
+    uint32_t d_pos;
+
+    if ( ffa_version < FFA_VERSION_1_1 )
+        return true;
+
+    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
+        goto out;
+    }
+
+    fpi = ffa_rx;
+    subsr_vm_created_count = 0;
+    subsr_vm_destroyed_count = 0;
+    for ( n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created_count++;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed_count++;
+    }
+
+    if ( subsr_vm_created_count )
+        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
+    if ( subsr_vm_destroyed_count )
+        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
+    if ( (subsr_vm_created_count && !subsr_vm_created) ||
+         (subsr_vm_destroyed_count && !subsr_vm_destroyed) )
+    {
+        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
+        subsr_vm_created_count = 0;
+        subsr_vm_destroyed_count = 0;
+        XFREE(subsr_vm_created);
+        XFREE(subsr_vm_destroyed);
+        goto out;
+    }
+
+    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created[c_pos++] = fpi[n].id;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed[d_pos++] = fpi[n].id;
+    }
+
+    ret = true;
+out:
+    ffa_rx_release();
+    return ret;
+}
+
+static int __init ffa_init(void)
+{
+    uint32_t vers;
+    uint32_t e;
+    unsigned int major_vers;
+    unsigned int minor_vers;
+
+    /*
+     * psci_init_smccc() updates this value with what's reported by EL-3
+     * or secure world.
+     */
+    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
+    {
+        printk(XENLOG_ERR
+               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
+               smccc_ver, ARM_SMCCC_VERSION_1_2);
+        return 0;
+    }
+
+    if ( !ffa_get_version(&vers) )
+        return 0;
+
+    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
+    {
+        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
+        return 0;
+    }
+
+    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
+    minor_vers = vers & FFA_VERSION_MINOR_MASK;
+    printk(XENLOG_INFO "ARM FF-A Mediator version %u.%u\n",
+           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
+    printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
+           major_vers, minor_vers);
+
+    ffa_rx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_rx )
+        return 0;
+
+    ffa_tx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_tx )
+        goto err_free_ffa_rx;
+
+    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
+        goto err_free_ffa_tx;
+    }
+    ffa_page_count = 1;
+    ffa_version = vers;
+
+    if ( !init_subscribers() )
+        goto err_free_ffa_tx;
+
+    return 0;
+
+err_free_ffa_tx:
+    free_xenheap_pages(ffa_tx, 0);
+    ffa_tx = NULL;
+err_free_ffa_rx:
+    free_xenheap_pages(ffa_rx, 0);
+    ffa_rx = NULL;
+    ffa_page_count = 0;
+    ffa_version = 0;
+    XFREE(subsr_vm_created);
+    subsr_vm_created_count = 0;
+    XFREE(subsr_vm_destroyed);
+    subsr_vm_destroyed_count = 0;
+    return 0;
+}
+
+__initcall(ffa_init);
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f91f..b3dee269bced 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -103,6 +103,10 @@ struct arch_domain
     void *tee;
 #endif
 
+#ifdef CONFIG_FFA
+    void *ffa;
+#endif
+
     bool directmap;
 }  __cacheline_aligned;
 
diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
new file mode 100644
index 000000000000..1c6ce6421294
--- /dev/null
+++ b/xen/arch/arm/include/asm/ffa.h
@@ -0,0 +1,71 @@
+/*
+ * xen/arch/arm/include/asm/ffa.h
+ *
+ * Arm Firmware Framework for Armv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2021  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __ASM_ARM_FFA_H__
+#define __ASM_ARM_FFA_H__
+
+#include <xen/const.h>
+
+#include <asm/smccc.h>
+#include <asm/types.h>
+
+#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
+#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
+
+static inline bool is_ffa_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
+}
+
+#ifdef CONFIG_FFA
+#define FFA_NR_FUNCS    11
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
+int ffa_domain_init(struct domain *d);
+int ffa_relinquish_resources(struct domain *d);
+#else
+#define FFA_NR_FUNCS    0
+
+static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    return false;
+}
+
+static inline int ffa_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+static inline int ffa_relinquish_resources(struct domain *d)
+{
+    return 0;
+}
+#endif
+
+#endif /*__ASM_ARM_FFA_H__*/
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 6f90c08a6304..34586025eff8 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -20,6 +20,7 @@
 #include <public/arch-arm/smccc.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
+#include <asm/ffa.h>
 #include <asm/monitor.h>
 #include <asm/regs.h>
 #include <asm/smccc.h>
@@ -32,7 +33,7 @@
 #define XEN_SMCCC_FUNCTION_COUNT 3
 
 /* Number of functions currently supported by Standard Service Service Calls. */
-#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
+#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
 
 static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
 {
@@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
     return do_vpsci_0_1_call(regs, fid);
 }
 
+static bool is_psci_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    /* fn is unsigned, so only the upper bound needs checking */
+    return fn <= 0x1fU;
+}
+
 /* PSCI 0.2 interface and other Standard Secure Calls */
 static bool handle_sssc(struct cpu_user_regs *regs)
 {
     uint32_t fid = (uint32_t)get_user_reg(regs, 0);
 
-    if ( do_vpsci_0_2_call(regs, fid) )
-        return true;
+    if ( is_psci_fid(fid) )
+        return do_vpsci_0_2_call(regs, fid);
+
+    if ( is_ffa_fid(fid) )
+        return ffa_handle_call(regs, fid);
 
     switch ( fid )
     {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 07:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 07:04:52 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Pau Monné, Roger" <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "Pau Monné, Roger" <roger.pau@citrix.com>, "Cooper,
 Andrew" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, "Beulich, Jan" <JBeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>, "Qiang, Chenyi"
	<chenyi.qiang@intel.com>, "Li, Xiaoyao" <xiaoyao.li@intel.com>
Subject: RE: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Thread-Topic: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Thread-Index: AQHYcPGLnpUujBwR5EWVeB9tTYBUAq1Gu3mA
Date: Thu, 9 Jun 2022 07:04:12 +0000
Message-ID: <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
In-Reply-To: <20220526111157.24479-4-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f36314a2-0bc0-4b2a-b910-08da49e6423f
x-ms-traffictypediagnostic: MWHPR11MB0078:EE_
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-microsoft-antispam-prvs: <MWHPR11MB0078D3A63292BD2C8F2407068CA79@MWHPR11MB0078.namprd11.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f36314a2-0bc0-4b2a-b910-08da49e6423f
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jun 2022 07:04:12.8967
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: L5WQtGEroeFB6tm5+lzCngV48A3/zuZ1ZasAFTevAJ5f/1fcZUdUKB0nbgW2tl4ocxoZ5i4KEsUvOyWFW5jheQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB0078
X-OriginatorOrg: intel.com

+Chenyi/Xiaoyao who worked on the KVM support. Presumably
similar opens have been discussed in KVM hence they have the
right background to comment here.

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Thursday, May 26, 2022 7:12 PM
> 
> Under certain conditions guests can get the CPU stuck in an unbounded
> loop without the possibility of an interrupt window to occur on
> instruction boundary.  This was the case with the scenarios described
> in XSA-156.
> 
> Make use of the Notify VM Exit mechanism, that will trigger a VM Exit
> if no interrupt window occurs for a specified amount of time.  Note
> that using the Notify VM Exit avoids having to trap #AC and #DB
> exceptions, as Xen is guaranteed to get a VM Exit even if the guest
> puts the CPU in a loop without an interrupt window, as such disable
> the intercepts if the feature is available and enabled.
> 
> Setting the notify VM exit window to 0 is safe because there's a
> threshold added by the hardware in order to have a sane window value.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Properly update debug state when using notify VM exit.
>  - Reword commit message.
> ---
> This change enables the notify VM exit by default, KVM however doesn't
> seem to enable it by default, and there's the following note in the
> commit message:
> 
> "- There's a possibility, however small, that a notify VM exit happens
>    with VM_CONTEXT_INVALID set in exit qualification. In this case, the
>    vcpu can no longer run. To avoid killing a well-behaved guest, set
>    notify window as -1 to disable this feature by default."
> 
> It's not obviously clear to me whether the comment was meant to be:
> "There's a possibility, however small, that a notify VM exit _wrongly_
> happens with VM_CONTEXT_INVALID".
> 
> It's also not clear whether such wrong hardware behavior only affects
> a specific set of hardware, in a way that we could avoid enabling
> notify VM exit there.
> 
> There's a discussion in one of the Linux patches that 128K might be
> the safer value in order to prevent false positives, but I have no
> formal confirmation about this.  Maybe our Intel maintainers can
> provide some more feedback on a suitable notify VM exit window
> value.
> 
> I've tested with 0 (the proposed default in the patch) and I don't
> seem to be able to trigger notify VM exits under normal guest
> operation.  Note that even in that case the guest won't be destroyed
> unless the context is corrupt.
> ---
>  docs/misc/xen-command-line.pandoc        | 11 +++++++++
>  xen/arch/x86/hvm/vmx/vmcs.c              | 19 +++++++++++++++
>  xen/arch/x86/hvm/vmx/vmx.c               | 32 ++++++++++++++++++++++--
>  xen/arch/x86/include/asm/hvm/vmx/vmcs.h  |  4 ++++
>  xen/arch/x86/include/asm/hvm/vmx/vmx.h   |  6 +++++
>  xen/arch/x86/include/asm/perfc_defn.h    |  3 ++-
>  6 files changed, 72 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 1dc7e1ca07..ccf8bf5806 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2544,6 +2544,17 @@ guest will notify Xen that it has failed to acquire a spinlock.
>  <major>, <minor> and <build> must be integers. The values will be
>  encoded in guest CPUID 0x40000002 if viridian enlightenments are enabled.
> 
> +### vm-notify-window (Intel)
> +> `= <integer>`
> +
> +> Default: `0`
> +
> +Specify the value of the VM Notify window used to detect locked VMs. Set to -1
> +to disable the feature.  Value is in units of crystal clock cycles.
> +
> +Note the hardware might add a threshold to the provided value in order to make
> +it safe, and hence using 0 is fine.
> +
>  ### vpid (Intel)
>  > `= <boolean>`
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index d388e6729c..6cb2c6c6b7 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -67,6 +67,9 @@ integer_param("ple_gap", ple_gap);
>  static unsigned int __read_mostly ple_window = 4096;
>  integer_param("ple_window", ple_window);
> 
> +static unsigned int __ro_after_init vm_notify_window;
> +integer_param("vm-notify-window", vm_notify_window);
> +
>  static bool __read_mostly opt_ept_pml = true;
>  static s8 __read_mostly opt_ept_ad = -1;
>  int8_t __read_mostly opt_ept_exec_sp = -1;
> @@ -210,6 +213,7 @@ static void __init vmx_display_features(void)
>      P(cpu_has_vmx_pml, "Page Modification Logging");
>      P(cpu_has_vmx_tsc_scaling, "TSC Scaling");
>      P(cpu_has_vmx_bus_lock_detection, "Bus Lock Detection");
> +    P(cpu_has_vmx_notify_vm_exiting, "Notify VM Exit");
>  #undef P
> 
>      if ( !printed )
> @@ -329,6 +333,8 @@ static int vmx_init_vmcs_config(bool bsp)
>              opt |= SECONDARY_EXEC_UNRESTRICTED_GUEST;
>          if ( opt_ept_pml )
>              opt |= SECONDARY_EXEC_ENABLE_PML;
> +        if ( vm_notify_window != ~0u )
> +            opt |= SECONDARY_EXEC_NOTIFY_VM_EXITING;
> 
>          /*
>           * "APIC Register Virtualization" and "Virtual Interrupt Delivery"
> @@ -1333,6 +1339,19 @@ static int construct_vmcs(struct vcpu *v)
>          rc = vmx_add_msr(v, MSR_FLUSH_CMD, FLUSH_CMD_L1D,
>                           VMX_MSR_GUEST_LOADONLY);
> 
> +    if ( cpu_has_vmx_notify_vm_exiting )
> +    {
> +        __vmwrite(NOTIFY_WINDOW, vm_notify_window);
> +        /*
> +         * Disable #AC and #DB interception: by using VM Notify Xen is
> +         * guaranteed to get a VM exit even if the guest manages to lock the
> +         * CPU.
> +         */
> +        v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
> +                                              (1U << TRAP_alignment_check));
> +        vmx_update_exception_bitmap(v);
> +    }
> +
>   out:
>      vmx_vmcs_exit(v);
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 69980c8e31..d3c1597b3e 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1419,10 +1419,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
> 
>  void vmx_update_debug_state(struct vcpu *v)
>  {
> +    unsigned int mask = 1u << TRAP_int3;
> +
> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> +        /*
> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> +         */
> +        mask |= 1u << TRAP_debug;
> +
>      if ( v->arch.hvm.debug_state_latch )
> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> +        v->arch.hvm.vmx.exception_bitmap |= mask;
>      else
> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
> 
>      vmx_vmcs_enter(v);
>      vmx_update_exception_bitmap(v);
> @@ -4155,6 +4164,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>          switch ( vector )
>          {
>          case TRAP_debug:
> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> +                goto exit_and_crash;
> +
>              /*
>               * Updates DR6 where debugger can peek (See 3B 23.2.1,
>               * Table 23-1, "Exit Qualification for Debug Exceptions").
> @@ -4593,6 +4605,22 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>           */
>          break;
> 
> +    case EXIT_REASON_NOTIFY:
> +        __vmread(EXIT_QUALIFICATION, &exit_qualification);
> +
> +        if ( exit_qualification & NOTIFY_VM_CONTEXT_INVALID )
> +        {
> +            perfc_incr(vmnotify_crash);
> +            gprintk(XENLOG_ERR, "invalid VM context after notify vmexit\n");
> +            domain_crash(v->domain);
> +            break;
> +        }
> +
> +        if ( unlikely(exit_qualification & INTR_INFO_NMI_UNBLOCKED_BY_IRET) )
> +            undo_nmis_unblocked_by_iret();
> +
> +        break;
> +
>      case EXIT_REASON_VMX_PREEMPTION_TIMER_EXPIRED:
>      case EXIT_REASON_INVPCID:
>      /* fall through */
> diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> index 5d3edc1642..0961eabf3f 100644
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> @@ -267,6 +267,7 @@ extern u32 vmx_vmentry_control;
>  #define SECONDARY_EXEC_XSAVES                   0x00100000
>  #define SECONDARY_EXEC_TSC_SCALING              0x02000000
>  #define SECONDARY_EXEC_BUS_LOCK_DETECTION       0x40000000
> +#define SECONDARY_EXEC_NOTIFY_VM_EXITING        0x80000000
>  extern u32 vmx_secondary_exec_control;
> 
>  #define VMX_EPT_EXEC_ONLY_SUPPORTED                         0x00000001
> @@ -348,6 +349,8 @@ extern u64 vmx_ept_vpid_cap;
>      (vmx_secondary_exec_control & SECONDARY_EXEC_TSC_SCALING)
>  #define cpu_has_vmx_bus_lock_detection \
>      (vmx_secondary_exec_control & SECONDARY_EXEC_BUS_LOCK_DETECTION)
> +#define cpu_has_vmx_notify_vm_exiting \
> +    (vmx_secondary_exec_control & SECONDARY_EXEC_NOTIFY_VM_EXITING)
> 
>  #define VMCS_RID_TYPE_MASK              0x80000000
> 
> @@ -455,6 +458,7 @@ enum vmcs_field {
>      SECONDARY_VM_EXEC_CONTROL       = 0x0000401e,
>      PLE_GAP                         = 0x00004020,
>      PLE_WINDOW                      = 0x00004022,
> +    NOTIFY_WINDOW                   = 0x00004024,
>      VM_INSTRUCTION_ERROR            = 0x00004400,
>      VM_EXIT_REASON                  = 0x00004402,
>      VM_EXIT_INTR_INFO               = 0x00004404,
> diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> index bc0caad6fb..e429de8541 100644
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -221,6 +221,7 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
>  #define EXIT_REASON_XSAVES              63
>  #define EXIT_REASON_XRSTORS             64
>  #define EXIT_REASON_BUS_LOCK            74
> +#define EXIT_REASON_NOTIFY              75
>  /* Remember to also update VMX_PERF_EXIT_REASON_SIZE! */
> 
>  /*
> @@ -236,6 +237,11 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
>  #define INTR_INFO_VALID_MASK            0x80000000      /* 31 */
>  #define INTR_INFO_RESVD_BITS_MASK       0x7ffff000
> 
> +/*
> + * Exit Qualifications for NOTIFY VM EXIT
> + */
> +#define NOTIFY_VM_CONTEXT_INVALID       1u
> +
>  /*
>   * Exit Qualifications for MOV for Control Register Access
>   */
> diff --git a/xen/arch/x86/include/asm/perfc_defn.h b/xen/arch/x86/include/asm/perfc_defn.h
> index d6eb661940..c6b601b729 100644
> --- a/xen/arch/x86/include/asm/perfc_defn.h
> +++ b/xen/arch/x86/include/asm/perfc_defn.h
> @@ -6,7 +6,7 @@ PERFCOUNTER_ARRAY(exceptions,           "exceptions", 32)
> 
>  #ifdef CONFIG_HVM
> 
> -#define VMX_PERF_EXIT_REASON_SIZE 75
> +#define VMX_PERF_EXIT_REASON_SIZE 76
>  #define VMEXIT_NPF_PERFC 143
>  #define SVM_PERF_EXIT_REASON_SIZE (VMEXIT_NPF_PERFC + 1)
>  PERFCOUNTER_ARRAY(vmexits,              "vmexits",
> @@ -126,5 +126,6 @@ PERFCOUNTER(realmode_exits,      "vmexits from realmode")
>  PERFCOUNTER(pauseloop_exits, "vmexits from Pause-Loop Detection")
> 
>  PERFCOUNTER(buslock, "Bus Locks Detected")
> +PERFCOUNTER(vmnotify_crash, "domains crashed by Notify VM Exit")
> 
>  /*#endif*/ /* __XEN_PERFC_DEFN_H__ */
> --
> 2.36.0


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 07:05:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 07:05:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344690.570257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzCE2-0004hD-1c; Thu, 09 Jun 2022 07:05:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344690.570257; Thu, 09 Jun 2022 07:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzCE1-0004h6-Rw; Thu, 09 Jun 2022 07:05:01 +0000
Received: by outflank-mailman (input) for mailman id 344690;
 Thu, 09 Jun 2022 07:05:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1nzCE0-0004ga-JW
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 07:05:00 +0000
Received: from de-smtp-delivery-102.mimecast.com
 (de-smtp-delivery-102.mimecast.com [194.104.111.102])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 792ece58-e7c2-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 09:04:58 +0200 (CEST)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-20-haorNSoXNG2QyJ6301i6hg-1; Thu, 09 Jun 2022 09:04:55 +0200
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB6PR0401MB2646.eurprd04.prod.outlook.com (2603:10a6:4:38::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 07:04:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 07:04:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 792ece58-e7c2-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1654758298;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Sh6OEi6dpaIq+cfESSujIAlMweqMyXandHtIlZg4miQ=;
	b=igZgN5+vBV/BivG2ijfWkIb4sS4MR5lj8P+Y3qxqMwoghYaUpO4ZEzZoeAfpH/7rPbdC79
	LV7oKM2OTvry1rf2GNKI9+QW5YR5JGO7swRLNy+ifR/LE3pwD3ajaDGw5ndgKBJPoold2N
	fe3uPgqtyq1uPdeCzG9r6yS432UHCBg=
X-MC-Unique: haorNSoXNG2QyJ6301i6hg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FLpbkylwj2IlsWgKG//12fL8lR21Ssm3UlhzO6owcOjkwfYOqA5hUFSRVcJRuZAyPYas7RTjVcYxtL/sFxzA/poy+yhnAnJpyRkL1EXF2TDJq67kiJotRCtTLZ8HpzdD4xtR26vJUAv3LS8XEtPZfIhEuylIXgJWKPziB/jcQv8weJXd0cmXvob4yB9ZpzfXDyOchI0poH84orXp1UTzP8uRc0lwsxMoU3DjFfedHaimc1gi8nFvLmp0xz7WGZyn6V+rwAgXy7yRr4BX4wyiBZbtk+STiA4RI4U6xtjOBn+iUka0/ZHe5RWxTkZBOTVwlCpQ63FMiwK0ENT78BQ0fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Sh6OEi6dpaIq+cfESSujIAlMweqMyXandHtIlZg4miQ=;
 b=ZlXXHSLDmWB+Qp5YKJ+7SQD+hthh2Dwq4K2H3ZGIMqefK1h0XBnifrS04uPJsH2xK56QUvK+a2YAdU5dlMHnT1NmToFh18vi6bqWPsLzxKgTsev7ANmUtdC6fM/t4hl7wZTw8xxE43bu48ExZ/QcXVBK1KpNO4UJE/ODl9NV6HyorSgpPNlEu3OOvghlmptLnEdAw+u/juznGypcGPsgjnMXDrmEitMRaJJCfF+B95C1HO2eUgu1DSgmcLvsUY44IZh9lNNQeAZUvd0Fa6lxuCr089nkfo5SqYERidptR6/g9/WOGCz8fMCePz071nL4PExm816Y3dpzQ9H/9H3quw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mysuse.onmicrosoft.com; s=selector1-mysuse-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Sh6OEi6dpaIq+cfESSujIAlMweqMyXandHtIlZg4miQ=;
 b=IWFVkstGqHWi7tGnke1UKN5YlYqQyTcVcL7V/Px6oXI+sOdeksUovAuJu8bity1ul3ILARK1wigaFMCeRt9HDeYnsKNxhm3lXYOZwNh8NywIbSIy+rIpJ50ZsOfl47t61KqAjwC4Wiqk+DcYbmcFlAYCD7aCotdvh6DNXp9QfAGpv63oTxHBhhS4st02GeF2Rm4x9ce9ybnRyaHwhJuwt1fjGlBj1ElE25ylQifRIqzwcSuUJOb9ZrSeinkXnsH7LmtYxsk4hgWLyOwZAzYtWF6FJKJ39Fo0psst6JLsgb9sgTbdEB1CXn6JRo3BcFnH0dcrwWh2vFG3k+NyLz/HOA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <880f3273-f92b-2b60-8de0-e69fefbada21@suse.com>
Date: Thu, 9 Jun 2022 09:04:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: MISRA C meeting tomorrow, was: MOVING COMMUNITY CALL Call for
 agenda items for 9 June Community Call @ 1500 UTC
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org,
 roberto.bagnara@bugseng.com
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206081806020.21215@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206081806020.21215@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0201CA0021.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::31) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e8a0de29-1511-4f84-3cae-08da49e65a0a
X-MS-TrafficTypeDiagnostic: DB6PR0401MB2646:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0401MB264609076977638275C20A18B3A79@DB6PR0401MB2646.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MVJvalQzYjZKUW45NVNyTlhJNFBGR3drUGhaZWthejJVWVNmQ0s5SlNzc3g0?=
 =?utf-8?B?SEFpOVdFSFZmWGZqNUFsWHZ4WG82bkdCYWpJYkVicHpVNkU1cC9iODd1amlz?=
 =?utf-8?B?M3JWQWlnMk50ZHRocTgvUXlTVHpWbm00a2NvVStTVDdwTlQrYTA0czlncTRL?=
 =?utf-8?B?MUFYaWF4eEZwV1l4M0JVMmRQb293cHExYjNSaDVZQ3NjWWYwSFZYVUdzemo2?=
 =?utf-8?B?NHlXN0JpN0VTMm1aeHpHTTJWSW1JdjdpZ1FRWWNQQ1ZXd3AvWVoyUzVsdnZX?=
 =?utf-8?B?Q0owSWpwVDYwa2hIYlphU29GZElrLy96MmU0d3BrTkYyK0pJVzFTTjVibHIr?=
 =?utf-8?B?ZG43SVBRR0Z0VFFaQUJxYkhhTytocTdnazJuTTN4WnRQRmQrZGJ3WTBzV2NP?=
 =?utf-8?B?WElWSDhQLy9jSGJ5czFPbkdZQmhuVUZvS2dQVmM4RzJLQ3ZlL3pCTm4zdTZa?=
 =?utf-8?B?WmY1amF3a3BrZTJXSHJCT3NEMlFoSmcxYjlSQnByZ3lKdUZOWFM1L0JTb1VN?=
 =?utf-8?B?aWVWNklWbFhrakhKK3JtNVdsclhGQVBEeVhOdmlKMEZkbUF6Q0lvOFZIY3dY?=
 =?utf-8?B?YzRBblFDYm9UcGtacnJOVzUzUzZpTHhkRmFBQmZnR1pKc0piMGhzTjl5SHg1?=
 =?utf-8?B?c1dkczgrWmVUM3FrajAzM2VnbTkrVEdaNkdndCs0cmRWbi9tN3hyblVNNTdU?=
 =?utf-8?B?WnhnNjFDSjBEL1VPR0VnVG5NMTBJNGNwWVpGQTZFL1dENDNPWWJEUGxrZkVK?=
 =?utf-8?B?ZFVNV0d2eGdTdnQzRXlVN2sxQmJhWFhYZ2VBTXBXZmRoNkk4dldWNWY3ZWV5?=
 =?utf-8?B?L2tkN2VrS2tVaXU3a2xGb1RKZlg4K2VpK3p0ckhDTEtQRUo2TkhmN2t2bm9h?=
 =?utf-8?B?STRHTkZFMkNHNE1HNElIOFd0a1plcHNrVFJLTWlIV21DUzh6ZnBIUDdUZzNt?=
 =?utf-8?B?ZkluYnFkVWRzbnl2WVRRUm42WTdDYXJFTGZwVDZuaFJ2SFpVU2lVbHhnemM4?=
 =?utf-8?B?eTJsVlEvY3prZEFuR3hEalZIK0ZaaFcxTUY4aE93V3YyYmhhM1BZMlZrMDYr?=
 =?utf-8?B?dTQyU0VPemZWKzJVSUNnNnBOQnE5bUpJc2V6YS9ya0k2RCtxYkk2ZDk0ajBJ?=
 =?utf-8?B?U3MyYXdHYjlhZkZiaks2UWhkK253aVZYZy93aHdMLzZNVXFPYzdaQlNaWUxn?=
 =?utf-8?B?c0t5OGZkVm1nTVBCaWpERGhjbld5U2EwY2VLUkh3emJnNjczT3VFN2RhZEFB?=
 =?utf-8?B?aVVOODNPbHpLQUoxVkt1SlZObytXeDNwYi9iOTg5ZGF6b1lEdGVWN3B6SWRI?=
 =?utf-8?B?UTFsK3JOa2RlR2oxZEw1ZjNaQ0ZpWGQ4SUliT0VGTVhaSUVtQmMzcDJwbHQ2?=
 =?utf-8?B?UWJFdlFpR2pabWtIR2ZRWll6cTVMNU5EVHArYThjczV3MkZlbVp5K1YrTE1J?=
 =?utf-8?B?dy9zb2Y1OHRDQllRQ1EwVnhMelllTEduR01udWxUWGNHRSt1Y21zZjkvNnM1?=
 =?utf-8?B?YW9DSTVDbEhWSkVNTWJLUFQvWkdvdHlNc3JMeUwrV1RXaWhnVTdrZWtZRzc2?=
 =?utf-8?B?OGx4SldBUXhuQmpHZHc4MUp0SjF1NHVmNFEyTGdzNjBpS1JOSXpRQjgwZHdX?=
 =?utf-8?B?WUJDNTdFcFVjTUx0cFBuK0tmQW84Um0rMXkxQVhqSDYzcHZQK3hsYmV2MENv?=
 =?utf-8?B?akFrdGtzVUxOWmdaRW9ZbXFIRkM4YTAwTjU1NTZLMlNZQXZldnB0Q1ByWVBU?=
 =?utf-8?B?akRsd0U3aS91OFl6M1FycXQ1TmljTVNqU2JiL2Ircklnc1lVUFR2V2NOcFhW?=
 =?utf-8?B?TlRiMUczY256ZmJpVGlwbEhNeG5Za3lHSWI4L29mOVEwUVVSU1h1SDgwUm1U?=
 =?utf-8?B?djNCZWFnejFkbDBWczR3RU9QbDg5UmRtWlpIclI0YXJHL0NwcC9NNUY0cWJq?=
 =?utf-8?B?SzFPSExzTHpobkhzWVVoZnV1c3hlU2taaEFINGV4cDJrT0VZN2RndVVTUU53?=
 =?utf-8?B?Unk2R1VjMXhrVDRYbng0c29rZHBxMk85RVFvODFTeGVkaDNSejQwOGsyZUZa?=
 =?utf-8?B?TlFxVStUbEd6NVlaWUd5bWZwRWRYM3BTY3dQcWJ4dWNWdHByMy9xcXFvWEM3?=
 =?utf-8?B?cHJUWWROYTZ3b2hhS1EvMlpiUytHa1lTTDhiRExya1RGUGxuVFJzOTBKL1pp?=
 =?utf-8?B?YTNFUVQwYkNCMlJLdFB6TU9WRzR6NXlmNXd0cGJ4WWQ0cWJqOWhOZXlBZHBW?=
 =?utf-8?B?czJxU0ovSG1uMWhIcEVJaUNIaTFVTTBiMWhnT25NTGMzSksza1BTL0dxRDFk?=
 =?utf-8?B?dzg5czVDMngzQmt0NVdJQk9pS3djUWxTUkVOSTBxeDIrNDFkTVI3dz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e8a0de29-1511-4f84-3cae-08da49e65a0a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 07:04:52.9678
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R8ExCTg7yJlFpfM8WlxXITW2LfY6TsDNZBIQ4I4XpyxDWDlrYfH2VB2GV4BYhUWiIzPaFtDxHY6MdNB5RLZ9Gg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0401MB2646

On 09.06.2022 03:20, Stefano Stabellini wrote:
> Finally, for Rule 13.2, I updated the link to ECLAIR's results. There
> are a lot more violations than just 4, but I don't know if they are
> valid or false positives.

I've picked just the one case in xen/common/efi/ebmalloc.c to check,
and it says "possibly". That's because evaluation of the function call
arguments involves calling (in this case two) further functions. If
those functions had side effects (which apparently the tool can't
determine), there would indeed be a problem.

The (Arm based) count of almost 10k violations is clearly a concern.
I don't consider it even remotely reasonable to add 10k comments, no
matter how brief, to cover all the false positives.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 07:49:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 07:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344710.570268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzCuZ-0001UQ-I8; Thu, 09 Jun 2022 07:48:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344710.570268; Thu, 09 Jun 2022 07:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzCuZ-0001UJ-F0; Thu, 09 Jun 2022 07:48:59 +0000
Received: by outflank-mailman (input) for mailman id 344710;
 Thu, 09 Jun 2022 07:40:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Zro=WQ=intel.com=xiaoyao.li@srs-se1.protection.inumbo.net>)
 id 1nzCmR-0001BI-8p
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 07:40:35 +0000
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f95386f-e7c7-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 09:40:32 +0200 (CEST)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Jun 2022 00:39:38 -0700
Received: from xiaoyaol-hp-g830.ccr.corp.intel.com (HELO [10.255.28.173])
 ([10.255.28.173])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Jun 2022 00:39:36 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f95386f-e7c7-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1654760432; x=1686296432;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=5n59FDDxmhbu2HSu9CH2sYVaxYeqlNolsszNS27NcpQ=;
  b=AOfJJlgt+jlvPWtQGPeKOGyCSDt92ltI8L2PR70k1wFwcJ3VL5f+mSUi
   /tyiEVmPHyC0nEyceVl+ywQrRKzXzJ1yC4YZxLxc7B7KXaER5yCeBUolB
   rQ8GPQ7zsg4QelBpQWJBRmohcOkLpuLFU77Jp3MHIkc7xtzceLO2vz4h6
   bqscNACjaq7NgaJCtEH3oNmq4FnkgOK3lYOH+YqZraeEvCSN7CoDPttyC
   H5OjCIr9GalQvPbAEgOi7HzJUdVd21tXUdQ6XaFLLLFdJpDoYxMQp5utU
   dHyKtNiiha7Ev5LsXoqmF4f7ouqlh/mfwn26o1m63ZaJoTIld96xERzu1
   Q==;
X-IronPort-AV: E=McAfee;i="6400,9594,10372"; a="363513210"
X-IronPort-AV: E=Sophos;i="5.91,287,1647327600"; 
   d="scan'208";a="363513210"
X-IronPort-AV: E=Sophos;i="5.91,287,1647327600"; 
   d="scan'208";a="908134993"
Message-ID: <4f2c4d5b-dab8-c9d2-f4c2-b8cd44011630@intel.com>
Date: Thu, 9 Jun 2022 15:39:33 +0800
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 Thunderbird/91.9.0
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Content-Language: en-US
To: "Tian, Kevin" <kevin.tian@intel.com>,
 =?UTF-8?Q?Pau_Monn=c3=a9=2c_Roger?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "Cooper, Andrew" <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Beulich, Jan"
 <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "Nakajima, Jun" <jun.nakajima@intel.com>,
 "Qiang, Chenyi" <chenyi.qiang@intel.com>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
From: Xiaoyao Li <xiaoyao.li@intel.com>
In-Reply-To: <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/9/2022 3:04 PM, Tian, Kevin wrote:
> +Chenyi/Xiaoyao who worked on the KVM support. Presumably
> similar opens have been discussed in KVM hence they have the
> right background to comment here.
> 
>> From: Roger Pau Monne <roger.pau@citrix.com>
>> Sent: Thursday, May 26, 2022 7:12 PM
>>
>> Under certain conditions guests can get the CPU stuck in an unbounded
>> loop without the possibility of an interrupt window to occur on
>> instruction boundary.  This was the case with the scenarios described
>> in XSA-156.
>>
>> Make use of the Notify VM Exit mechanism, which will trigger a VM Exit
>> if no interrupt window occurs for a specified amount of time.  Note
>> that using the Notify VM Exit avoids having to trap #AC and #DB
>> exceptions, as Xen is guaranteed to get a VM Exit even if the guest
>> puts the CPU in a loop without an interrupt window; as such, disable
>> the intercepts if the feature is available and enabled.
>>
>> Setting the notify VM exit window to 0 is safe because there's a
>> threshold added by the hardware in order to have a sane window value.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Changes since v1:
>>   - Properly update debug state when using notify VM exit.
>>   - Reword commit message.
>> ---
>> This change enables the notify VM exit by default, KVM however doesn't
>> seem to enable it by default, and there's the following note in the
>> commit message:
>>
>> "- There's a possibility, however small, that a notify VM exit happens
>>     with VM_CONTEXT_INVALID set in exit qualification. In this case, the
>>     vcpu can no longer run. To avoid killing a well-behaved guest, set
>>     notify window as -1 to disable this feature by default."
>>
>> It's not obviously clear to me whether the comment was meant to be:
>> "There's a possibility, however small, that a notify VM exit _wrongly_
>> happens with VM_CONTEXT_INVALID".
>>
>> It's also not clear whether such wrong hardware behavior only affects
>> a specific set of hardware, 

I'm not sure what you mean by a specific set of hardware.

We make it default off in KVM just in case future silicon wrongly sets
the VM_CONTEXT_INVALID bit, because our policy is that the VM cannot
continue running in that case.

In the worst case, if some future silicon happens to have this kind of
silly bug, every existing production kernel would risk having its VMs
killed because the feature defaults to on.

>> in a way that we could avoid enabling
>> notify VM exit there.
>>
>> There's a discussion in one of the Linux patches that 128K might be
>> the safer value in order to prevent false positives, but I have no
>> formal confirmation about this.  Maybe our Intel maintainers can
>> provide some more feedback on a suitable notify VM exit window
>> value.

128k is the internal threshold for SPR silicon. The internal threshold
is tuned by Intel for each silicon generation, to make sure it's big
enough to avoid false positives even when the user sets
vmcs.notify_window to 0.

However, it varies between processor generations.

What the suitable value is is hard to say; it depends on how soon the
VMM wants to intercept the VM. In any case, Intel ensures that even a
value of 0 is safe.

>>
>> I've tested with 0 (the proposed default in the patch) and I don't
>> seem to be able to trigger notify VM exits under normal guest
>> operation.  Note that even in that case the guest won't be destroyed
>> unless the context is corrupt.
>> ---




From xen-devel-bounces@lists.xenproject.org Thu Jun 09 07:57:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 07:57:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344720.570279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzD31-00037L-EI; Thu, 09 Jun 2022 07:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344720.570279; Thu, 09 Jun 2022 07:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzD31-00037E-A9; Thu, 09 Jun 2022 07:57:43 +0000
Received: by outflank-mailman (input) for mailman id 344720;
 Thu, 09 Jun 2022 07:56:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bepE=WQ=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1nzD28-00036P-1l
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 07:56:48 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id b4f1d3a2-e7c9-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 09:56:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id 88FEF80F1;
 Thu,  9 Jun 2022 07:34:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4f1d3a2-e7c9-11ec-b605-df0040e90b76
Date: Thu, 9 Jun 2022 10:39:22 +0300
From: Tony Lindgren <tony@atomide.com>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Richard Henderson <rth@twiddle.net>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>, Vineet Gupta <vgupta@kernel.org>,
	Russell King - ARM Linux <linux@armlinux.org.uk>,
	Hans Ulli Kroll <ulli.kroll@googlemail.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	Shawn Guo <shawnguo@kernel.org>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	Sascha Hauer <kernel@pengutronix.de>,
	Fabio Estevam <festevam@gmail.com>,
	NXP Linux Team <linux-imx@nxp.com>,
	Kevin Hilman <khilman@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	bcain@quicinc.com, Huacai Chen <chenhuacai@kernel.org>,
	Xuerui Wang <kernel@xen0n.name>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Sam Creasey <sammy@sammy.net>, Michal Simek <monstr@monstr.eu>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>, Jonas Bonn <jonas@southpole.se>,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Stafford Horne <shorne@gmail.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	Johannes Berg <johannes@sipsolutions.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	the arch/x86 maintainers <x86@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, Pv-drivers <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Chris Zankel <chris@zankel.net>, Max Filippov <jcmvbkbc@gmail.com>,
	Rafael Wysocki <rafael@kernel.org>, Len Brown <lenb@kernel.org>,
	Pavel Machek <pavel@ucw.cz>, gregkh <gregkh@linuxfoundation.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org,
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Anup Patel <anup@brainfault.org>,
	Thierry Reding <thierry.reding@gmail.com>,
	Jonathan Hunter <jonathanh@nvidia.com>,
	jacob.jun.pan@linux.intel.com, Yury Norov <yury.norov@gmail.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Steven Rostedt <rostedt@goodmis.org>,
	Petr Mladek <pmladek@suse.com>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	John Ogness <john.ogness@linutronix.de>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	vschneid@redhat.com, jpoimboe@kernel.org,
	alpha <linux-alpha@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"open list:SYNOPSYS ARC ARCHITECTURE" <linux-snps-arc@lists.infradead.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-omap <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org,
	"open list:QUALCOMM HEXAGON..." <linux-hexagon@vger.kernel.org>,
	"open list:IA64 (Itanium) PLATFORM" <linux-ia64@vger.kernel.org>,
	linux-m68k <linux-m68k@lists.linux-m68k.org>,
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>,
	Openrisc <openrisc@lists.librecores.org>,
	Parisc List <linux-parisc@vger.kernel.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	linux-s390 <linux-s390@vger.kernel.org>,
	Linux-sh list <linux-sh@vger.kernel.org>,
	sparclinux <sparclinux@vger.kernel.org>,
	linux-um <linux-um@lists.infradead.org>,
	linux-perf-users@vger.kernel.org,
	"open list:DRM DRIVER FOR QEMU'S CIRRUS DEVICE" <virtualization@lists.linux-foundation.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"open list:TENSILICA XTENSA PORT (xtensa)" <linux-xtensa@linux-xtensa.org>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Linux PM list <linux-pm@vger.kernel.org>,
	linux-clk <linux-clk@vger.kernel.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	"open list:TEGRA ARCHITECTURE SUPPORT" <linux-tegra@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>, rcu@vger.kernel.org
Subject: Re: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
Message-ID: <YqGjqgSrTRseJW6M@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.010587032@infradead.org>
 <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>

* Arnd Bergmann <arnd@arndb.de> [220608 18:18]:
> On Wed, Jun 8, 2022 at 4:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > arch_cpu_idle() is a very simple idle interface and exposes only a
> > single idle state and is expected to not require RCU and not do any
> > tracing/instrumentation.
> >
> > As such, omap_sram_idle() is not a valid implementation. Replace it
> > with the simple (shallow) omap3_do_wfi() call. Leaving the more
> > complicated idle states for the cpuidle driver.

Agreed it makes sense to limit deeper idle states to cpuidle. Hopefully
there is some informative splat for attempting to use arch_cpu_idle()
for deeper idle states :)

> I see similar code in omap2:
> 
> omap2_pm_idle()
>  -> omap2_enter_full_retention()
>      -> omap2_sram_suspend()
> 
> Is that code path safe to use without RCU or does it need a similar change?

Seems like a similar change should be done for omap2. Then anybody who
cares to implement a minimal cpuidle support can do so.

Regards,

Tony


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 08:15:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 08:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344750.570349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDKM-00073B-00; Thu, 09 Jun 2022 08:15:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344750.570349; Thu, 09 Jun 2022 08:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDKL-000734-Sp; Thu, 09 Jun 2022 08:15:37 +0000
Received: by outflank-mailman (input) for mailman id 344750;
 Thu, 09 Jun 2022 08:15:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzDKK-00072u-QN; Thu, 09 Jun 2022 08:15:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzDKK-00026b-Nt; Thu, 09 Jun 2022 08:15:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzDKK-0002cX-DB; Thu, 09 Jun 2022 08:15:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzDKK-0000cs-Ch; Thu, 09 Jun 2022 08:15:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7wgBKLfEm09eWIqGISS3zZYDCM1BMUurzO8xoqQ4it4=; b=LHNYL00HSWBS/7OBR0ZoPxc0pe
	eWz2kdR2glwTyNLdzVTDAAeyTJGCNsQ1FG/pCUuucQGG4cwcMrN0qTGFtcZ1B4mvWuTvHhYnGFhk8
	nc5qval9LdpTs0eqZZ8vu6P24BBZTXXURypkWspyaEC1xYp8hOZD9E5RYWYYv1midKWM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170890: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
X-Osstest-Versions-That:
    xen=cea9ae06229577cd5b77019ce122f9cdd1568106
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 08:15:36 +0000

flight 170890 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170890/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 170877
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170877
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170877
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170877
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170877
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170877
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170877
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170877
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170877
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170877
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170877
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170877
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170877
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
baseline version:
 xen                  cea9ae06229577cd5b77019ce122f9cdd1568106

Last test of basis   170877  2022-06-07 23:08:25 Z    1 days
Testing same since   170890  2022-06-08 20:37:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Vrabel <dvrabel@amazon.co.uk>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cea9ae0622..7ac12e3634  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 08:30:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 08:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344766.570384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ7-0001VV-9I; Thu, 09 Jun 2022 08:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344766.570384; Thu, 09 Jun 2022 08:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ7-0001V3-0U; Thu, 09 Jun 2022 08:30:53 +0000
Received: by outflank-mailman (input) for mailman id 344766;
 Thu, 09 Jun 2022 08:30:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzDZ5-0001Rg-W1
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 08:30:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ5-0002Mz-L8; Thu, 09 Jun 2022 08:30:51 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ5-0001qx-CA; Thu, 09 Jun 2022 08:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ZiWn/AllIOhVkpe3HghVI1xsTWWu2FCQf6ihSSFqwAc=; b=zCKC0A8Q9YSIyv516jc+2x413r
	mwGK88HMmrYWgDpQaYEs/Kmc/pbb2ezvngXhWiZVB1f61Yrjh3yn9jviiEAZ3L9Fz5WJBHdwN90dd
	MMecSlgPaaRfhWVMLfurI/8q1Dwgle9rgBqgSBGkZDkGavW+ZDcZqA2LpMAtU+AD3S7I=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Hongyan Xia <hongyxia@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH 2/2] xen/heap: pass order to free_heap_pages() in heap init
Date: Thu,  9 Jun 2022 09:30:39 +0100
Message-Id: <20220609083039.76667-3-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220609083039.76667-1-julien@xen.org>
References: <20220609083039.76667-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Hongyan Xia <hongyxia@amazon.com>

The idea is to split the range into multiple aligned power-of-2 regions,
each of which needs only a single call to free_heap_pages(). We check the
least significant set bit of the start address and use its bit index as
the order of this increment. This ensures that each increment is both a
power of two and properly aligned, so it can safely be passed to
free_heap_pages(). Of course, the order also needs to be sanity-checked
against the upper bound and MAX_ORDER.

Tested in a nested environment on c5.metal with various amounts
of RAM. Time for end_boot_allocator() to complete:
            Before         After
    - 90GB: 1426 ms        166 ms
    -  8GB:  124 ms         12 ms
    -  4GB:   60 ms          6 ms

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/page_alloc.c | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index a1938df1406c..bf852cfc11ea 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1779,16 +1779,28 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 
 /*
  * init_contig_heap_pages() is intended to only take pages from the same
- * NUMA node.
+ * NUMA node and zone.
+ *
+ * For the latter, it is always true for !CONFIG_SEPARATE_XENHEAP since
+ * free_heap_pages() can only take power-of-two ranges which never cross
+ * zone boundaries. But for separate xenheap which is manually defined,
+ * it is possible for a power-of-two range to cross zones, so we need to
+ * check that as well.
  */
-static bool is_contig_page(struct page_info *pg, unsigned int nid)
+static bool is_contig_page(struct page_info *pg, unsigned int nid,
+                           unsigned int zone)
 {
+#ifdef CONFIG_SEPARATE_XENHEAP
+    if ( zone != page_to_zone(pg) )
+        return false;
+#endif
+
     return (nid == (phys_to_nid(page_to_maddr(pg))));
 }
 
 /*
  * This function should only be called with valid pages from the same NUMA
- * node.
+ * node and the same zone.
  *
  * Callers should use is_contig_page() first to check if all the pages
  * in a range are contiguous.
@@ -1817,8 +1829,22 @@ static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
 
     while ( s < e )
     {
-        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
-        s += 1UL;
+        /*
+         * For s == 0, we simply use the largest increment by checking the
+         * index of the MSB set. For s != 0, we also need to ensure that the
+         * chunk is properly sized to end at power-of-two alignment. We do this
+         * by checking the LSB set and use its index as the increment. Both
+         * cases need to be guarded by MAX_ORDER.
+         *
+         * Note that the value of ffsl() and flsl() starts from 1 so we need
+         * to decrement it by 1.
+         */
+        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
+
+        if ( s )
+            inc_order = min(inc_order, ffsl(s) - 1);
+        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
+        s += (1UL << inc_order);
     }
 }
 
@@ -1856,12 +1882,13 @@ static void init_heap_pages(
     for ( i = 0; i < nr_pages; )
     {
         unsigned int nid = phys_to_nid(page_to_maddr(pg));
+        unsigned int zone = page_to_zone(pg);
         unsigned long left = nr_pages - i;
         unsigned long contig_pages;
 
         for ( contig_pages = 1; contig_pages < left; contig_pages++ )
         {
-            if ( !is_contig_page(pg + contig_pages, nid) )
+            if ( !is_contig_page(pg + contig_pages, nid, zone) )
                 break;
         }
 
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 08:30:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 08:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344764.570365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ4-0001CN-JV; Thu, 09 Jun 2022 08:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344764.570365; Thu, 09 Jun 2022 08:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ4-0001CG-Gm; Thu, 09 Jun 2022 08:30:50 +0000
Received: by outflank-mailman (input) for mailman id 344764;
 Thu, 09 Jun 2022 08:30:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzDZ3-0001CA-5J
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 08:30:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ2-0002Mb-SA; Thu, 09 Jun 2022 08:30:48 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ2-0001qx-HE; Thu, 09 Jun 2022 08:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=bP2JcB/LIDxgF1FeJGcHxrQnobjWYEcKaQQDMM+Bw+8=; b=3VdsdL
	hjxXUUKU0Yia/jxq61CgSoOrjEWSBbE122ilmqISlFIyOPx1+mqR/ISeEeSm8nAbRBxCYXerNQ5xY
	dHZZMN0HnKXwzOf15N08uTnOP3hcX3fA7xD4aNhqThQfdjfKSRdDIn0CodKApLjxSOODoYxgCiW0D
	q+jMVjyEGqE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] xen/mm: Optimize init_heap_pages()
Date: Thu,  9 Jun 2022 09:30:37 +0100
Message-Id: <20220609083039.76667-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

As part of the Live-Update work, we noticed that a large part of Xen
boot is spent adding pages to the heap. For instance, when running Xen
in a nested environment on a c5.metal, it takes ~1.5s.

This small series reworks init_heap_pages() to hand the pages to
free_heap_pages() in chunks rather than one by one.

With this approach, the time spent initializing the heap is down
to 166 ms in the setup mentioned above.

One more optimization is potentially possible that would further
reduce the time spent: the new approach accesses the page information
multiple times, in separate loops that can potentially be large.

Cheers,

Hongyan Xia (1):
  xen/heap: pass order to free_heap_pages() in heap init

Julien Grall (1):
  xen/heap: Split init_heap_pages() in two

 xen/common/page_alloc.c | 109 ++++++++++++++++++++++++++++++----------
 1 file changed, 82 insertions(+), 27 deletions(-)

-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 08:30:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 08:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344765.570377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ6-0001Rv-QK; Thu, 09 Jun 2022 08:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344765.570377; Thu, 09 Jun 2022 08:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzDZ6-0001Ro-NY; Thu, 09 Jun 2022 08:30:52 +0000
Received: by outflank-mailman (input) for mailman id 344765;
 Thu, 09 Jun 2022 08:30:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzDZ4-0001Dc-L7
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 08:30:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ4-0002Ml-5Y; Thu, 09 Jun 2022 08:30:50 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzDZ3-0001qx-T6; Thu, 09 Jun 2022 08:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=cD3CnzUA7uhX3bAfaHZoI05qWLImHHGhMASI4zN+Amk=; b=4/o0eJ/GjP6VQ26C8lKXFTH+Pd
	EJp6ON2ZJpKCDqXzyl2YzQUYGA3qiez5bEGgOE6UY1wN/M06ZHsHZo7pxomUPOAygBV4d1lNJ4U7s
	uQSh9XvDVrLHXS2aeT7ufiLK3HhKjLVUQoCrguLIr7+UKc3nGHQjOL7cxBqyjchHSAyQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/heap: Split init_heap_pages() in two
Date: Thu,  9 Jun 2022 09:30:38 +0100
Message-Id: <20220609083039.76667-2-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220609083039.76667-1-julien@xen.org>
References: <20220609083039.76667-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, init_heap_pages() will call free_heap_pages() page
by page. To reduce the time to initialize the heap, we will want
to pass multiple pages at a time.

init_heap_pages() is now split into two parts:
    - init_heap_pages(): breaks down the range into multiple sets
      of contiguous pages. For now, the criterion is that the pages
      should belong to the same NUMA node.
    - init_contig_heap_pages(): initializes a set of contiguous pages.
      For now the pages are still passed one by one to free_heap_pages().

Note that the comment before init_heap_pages() is heavily outdated and
does not reflect the current code, so update it.

This patch is a merge/rework of patches from David Woodhouse and
Hongyan Xia.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

Interestingly, I was expecting this patch to perform worse. However,
testing shows a small improvement in performance.

That said, I kept this as a separate patch so that the refactoring and
the optimization stay separated.
---
 xen/common/page_alloc.c | 82 +++++++++++++++++++++++++++--------------
 1 file changed, 55 insertions(+), 27 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 3e6504283f1e..a1938df1406c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 }
 
 /*
- * Hand the specified arbitrary page range to the specified heap zone
- * checking the node_id of the previous page.  If they differ and the
- * latter is not on a MAX_ORDER boundary, then we reserve the page by
- * not freeing it to the buddy allocator.
+ * init_contig_heap_pages() is intended to only take pages from the same
+ * NUMA node.
  */
+static bool is_contig_page(struct page_info *pg, unsigned int nid)
+{
+    return (nid == (phys_to_nid(page_to_maddr(pg))));
+}
+
+/*
+ * This function should only be called with valid pages from the same NUMA
+ * node.
+ *
+ * Callers should use is_contig_page() first to check if all the pages
+ * in a range are contiguous.
+ */
+static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
+                                   bool need_scrub)
+{
+    unsigned long s, e;
+    unsigned int nid = phys_to_nid(page_to_maddr(pg));
+
+    s = mfn_x(page_to_mfn(pg));
+    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
+    if ( unlikely(!avail[nid]) )
+    {
+        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&
+                        (find_first_set_bit(e) <= find_first_set_bit(s));
+        unsigned long n;
+
+        n = init_node_heap(nid, s, nr_pages, &use_tail);
+        BUG_ON(n > nr_pages);
+        if ( use_tail )
+            e -= n;
+        else
+            s += n;
+    }
+
+    while ( s < e )
+    {
+        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
+        s += 1UL;
+    }
+}
+
 static void init_heap_pages(
     struct page_info *pg, unsigned long nr_pages)
 {
     unsigned long i;
-    bool idle_scrub = false;
+    bool need_scrub = scrub_debug;
 
     /*
      * Keep MFN 0 away from the buddy allocator to avoid crossing zone
@@ -1812,35 +1851,24 @@ static void init_heap_pages(
     spin_unlock(&heap_lock);
 
     if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
-        idle_scrub = true;
+        need_scrub = true;
 
-    for ( i = 0; i < nr_pages; i++ )
+    for ( i = 0; i < nr_pages; )
     {
-        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
+        unsigned int nid = phys_to_nid(page_to_maddr(pg));
+        unsigned long left = nr_pages - i;
+        unsigned long contig_pages;
 
-        if ( unlikely(!avail[nid]) )
+        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
         {
-            unsigned long s = mfn_x(page_to_mfn(pg + i));
-            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
-            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
-                            !(s & ((1UL << MAX_ORDER) - 1)) &&
-                            (find_first_set_bit(e) <= find_first_set_bit(s));
-            unsigned long n;
-
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
-                               &use_tail);
-            BUG_ON(i + n > nr_pages);
-            if ( n && !use_tail )
-            {
-                i += n - 1;
-                continue;
-            }
-            if ( i + n == nr_pages )
+            if ( !is_contig_page(pg + contig_pages, nid) )
                 break;
-            nr_pages -= n;
         }
 
-        free_heap_pages(pg + i, 0, scrub_debug || idle_scrub);
+        init_contig_heap_pages(pg, contig_pages, need_scrub);
+
+        pg += contig_pages;
+        i += contig_pages;
     }
 }
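
For illustration only: the reworked loop above splits the incoming page range
into maximal runs of pages sharing a NUMA node, then hands each run to
init_contig_heap_pages(). A standalone sketch of that splitting, with a plain
list standing in for phys_to_nid(page_to_maddr(pg)) (the real code walks
struct page_info pointers, not indices):

```python
# Sketch of the run-splitting done by the reworked init_heap_pages().
# page_to_nid[i] plays the role of phys_to_nid() for page i.

def split_contig_runs(page_to_nid):
    """Split range(len(page_to_nid)) into maximal same-node runs."""
    runs = []
    i = 0
    nr_pages = len(page_to_nid)
    while i < nr_pages:
        nid = page_to_nid[i]
        contig = 1
        # Mirrors: for ( contig_pages = 1; contig_pages < left; ... )
        while i + contig < nr_pages and page_to_nid[i + contig] == nid:
            contig += 1
        # Each run would go to init_contig_heap_pages(pg + i, contig, ...)
        runs.append((i, contig, nid))
        i += contig
    return runs

# Two node-0 pages, three node-1 pages, one node-0 page again.
print(split_contig_runs([0, 0, 1, 1, 1, 0]))
# -> [(0, 2, 0), (2, 3, 1), (5, 1, 0)]
```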
 
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 09:21:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 09:21:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344805.570415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzEMG-00007n-7q; Thu, 09 Jun 2022 09:21:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344805.570415; Thu, 09 Jun 2022 09:21:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzEMG-00007g-4n; Thu, 09 Jun 2022 09:21:40 +0000
Received: by outflank-mailman (input) for mailman id 344805;
 Thu, 09 Jun 2022 09:21:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzEMF-00007a-1u
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 09:21:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzEME-0003JB-LJ; Thu, 09 Jun 2022 09:21:38 +0000
Received: from [54.239.6.190] (helo=[10.85.101.129])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzEME-0004w1-DE; Thu, 09 Jun 2022 09:21:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <dd9f8a18-23ce-d5f6-45ff-82376aaefaba@xen.org>
Date: Thu, 9 Jun 2022 10:21:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-3-Penny.Zheng@arm.com>
 <d43d2dbd-6b0e-fb0c-5e0a-d409db4e18e9@xen.org>
 <DU2PR08MB7325B2A677FCF2FBF905D588F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DU2PR08MB7325B2A677FCF2FBF905D588F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 09/06/2022 06:54, Penny Zheng wrote:
> 
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, June 7, 2022 5:13 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
>>
>> Hi Penny,
>>
> 
> Hi Julien
> 
>> On 07/06/2022 08:30, Penny Zheng wrote:
>>> Pages used as guest RAM for a static domain shall be reserved to that
>>> domain only.
>>> So if reserved pages are used for another purpose, they shall not be
>>> freed back to the heap, even when the last ref gets dropped.
>>>
>>> free_staticmem_pages will be called by free_heap_pages at runtime when a
>>> static domain frees a memory resource, so let's drop the __init flag.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>> v6 changes:
>>> - adapt to PGC_static
>>> - remove #ifdef aroud function declaration
>>> ---
>>> v5 changes:
>>> - In order to avoid stub functions, we #define PGC_staticmem to
>>> non-zero only when CONFIG_STATIC_MEMORY
>>> - use "unlikely()" around pg->count_info & PGC_staticmem
>>> - remove pointless "if", since mark_page_free() is going to set
>>> count_info to PGC_state_free and by consequence clear PGC_staticmem
>>> - move #define PGC_staticmem 0 to mm.h
>>> ---
>>> v4 changes:
>>> - no changes
>>> ---
>>> v3 changes:
>>> - fix possible racy issue in free_staticmem_pages()
>>> - introduce a stub free_staticmem_pages() for the
>>> !CONFIG_STATIC_MEMORY case
>>> - move the change to free_heap_pages() to cover other potential call
>>> sites
>>> - fix the indentation
>>> ---
>>> v2 changes:
>>> - new commit
>>> ---
>>>    xen/arch/arm/include/asm/mm.h |  4 +++-
>>>    xen/common/page_alloc.c       | 12 +++++++++---
>>>    xen/include/xen/mm.h          |  2 --
>>>    3 files changed, 12 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/include/asm/mm.h
>>> b/xen/arch/arm/include/asm/mm.h index fbff11c468..7442893e77 100644
>>> --- a/xen/arch/arm/include/asm/mm.h
>>> +++ b/xen/arch/arm/include/asm/mm.h
>>> @@ -108,9 +108,11 @@ struct page_info
>>>      /* Page is Xen heap? */
>>>    #define _PGC_xen_heap     PG_shift(2)
>>>    #define PGC_xen_heap      PG_mask(1, 2)
>>> -  /* Page is static memory */
>>
>> NITpicking: You added this comment in patch #1 and are now removing the space.
>> Any reason to drop the space?
>>
>>> +#ifdef CONFIG_STATIC_MEMORY
>>
>> I think this change ought to be explained in the commit message. AFAIU, this is
>> necessary to allow the compiler to remove code and avoid linking issues. Is
>> that correct?
>>
>>> +/* Page is static memory */
>>>    #define _PGC_static    PG_shift(3)
>>>    #define PGC_static     PG_mask(1, 3)
>>> +#endif
>>>    /* ... */
>>>    /* Page is broken? */
>>>    #define _PGC_broken       PG_shift(7)
>>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
>>> 9e5c757847..6876869fa6 100644
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1443,6 +1443,13 @@ static void free_heap_pages(
>>>
>>>        ASSERT(order <= MAX_ORDER);
>>>
>>> +    if ( unlikely(pg->count_info & PGC_static) )
>>> +    {
>>> +        /* Pages of static memory shall not go back to the heap. */
>>> +        free_staticmem_pages(pg, 1UL << order, need_scrub);
>> I can't remember whether I asked this before (I couldn't find a thread).
>>
>> free_staticmem_pages() doesn't seem to be protected by any lock. So how do
>> you prevent the concurrent access to the page info with the acquire part?
> 
> True, last time you suggested that rsv_page_list needs to be protected with a
> spinlock (most likely d->page_alloc_lock). I hadn't thought it through, sorry
> about that.
> So for the freeing part, I shall take the lock in arch_free_heap_page(), where
> we insert the page into the rsv_page_list, and release it at the end of
> free_staticmem_pages().

In general, a lock is best taken and released in the same function, 
because that is easier to verify. However, I am not sure that extending 
the locking scope of d->page_alloc_lock up past free_heap_pages() is right.

The first reason is that there are other callers of free_heap_pages(). 
All the callers of the helper would then need to know whether they have 
to hold d->page_alloc_lock.

Secondly, free_staticmem_pages() is meant to be the reverse of 
prepare_staticmem_pages(). We should prevent both of them from being 
called concurrently. It sounds strange to use d->page_alloc_lock to 
protect it (the page technically does not belong to a domain at this point).

To me it looks like we want to add the pages to the rsv_page_list 
*after* the pages have been freed. That way we know that all the pages on 
that list have been marked as freed (i.e. free_staticmem_pages() completed).

In addition to that, we would need the code in free_staticmem_pages() to 
be protected by the heap_lock (at least so that it matches 
prepare_staticmem_pages()).
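
As a rough illustration of the ordering being suggested (free the pages under
the heap_lock first, and only then publish them on rsv_page_list under a
separate lock) — this is a sketch with placeholder names and dictionaries
standing in for struct page_info and struct domain, not actual Xen code:

```python
import threading

heap_lock = threading.Lock()        # stands in for Xen's heap_lock
page_alloc_lock = threading.Lock()  # stands in for d->page_alloc_lock

def free_staticmem_pages(pages):
    # Mark the pages free under heap_lock, matching prepare_staticmem_pages().
    with heap_lock:
        for pg in pages:
            pg["state"] = "free"

def free_static_pages_to_domain(d, pages):  # hypothetical caller name
    # Free first, so every page that reaches rsv_page_list is already free...
    free_staticmem_pages(pages)
    # ...then publish the pages on the reserved list under its own lock.
    with page_alloc_lock:
        d["rsv_page_list"].extend(pages)

d = {"rsv_page_list": []}
pages = [{"state": "inuse"}, {"state": "inuse"}]
free_static_pages_to_domain(d, pages)
print(all(pg["state"] == "free" for pg in d["rsv_page_list"]))  # True
```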

Any thoughts?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 09:34:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 09:34:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344816.570426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzEYw-00021c-DR; Thu, 09 Jun 2022 09:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344816.570426; Thu, 09 Jun 2022 09:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzEYw-00021V-A4; Thu, 09 Jun 2022 09:34:46 +0000
Received: by outflank-mailman (input) for mailman id 344816;
 Thu, 09 Jun 2022 09:34:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6kiA=WQ=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1nzEYu-00021O-RU
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 09:34:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6427916a-e7d7-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 11:34:43 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 47EE412FC;
 Thu,  9 Jun 2022 02:34:42 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 741C33F766;
 Thu,  9 Jun 2022 02:34:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6427916a-e7d7-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen: Add MISRA support to cppcheck make rule
Date: Thu,  9 Jun 2022 10:34:29 +0100
Message-Id: <56d3deee8889d1372752db3105f3a1349ef4562e.1654767188.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The cppcheck MISRA addon can be used to check for non-compliance with
some of the MISRA standard rules.

Add a CPPCHECK_MISRA variable that can be set to "y" on the make command
line to generate a cppcheck report that includes the cppcheck MISRA checks.

When MISRA checking is enabled, a file with a text description suitable
for the cppcheck MISRA addon is generated from the Xen documentation
file which lists the rules followed by Xen (docs/misra/rules.rst).

By default MISRA checking is turned off.

While adding the cppcheck-misra files to gitignore, also fix the missing
'/' for the htmlreport gitignore entry.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v2:
- fix missing / for htmlreport
- use wildcard for cppcheck-misra remove and gitignore
- fix comment in makefile
- fix dependencies for generation of json and txt file
---
 .gitignore                     |   3 +-
 xen/Makefile                   |  29 ++++++-
 xen/tools/convert_misra_doc.py | 139 +++++++++++++++++++++++++++++++++
 3 files changed, 168 insertions(+), 3 deletions(-)
 create mode 100755 xen/tools/convert_misra_doc.py

diff --git a/.gitignore b/.gitignore
index 18ef56a780..b106caa7a9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -297,6 +297,7 @@ xen/.banner
 xen/.config
 xen/.config.old
 xen/.xen.elf32
+xen/cppcheck-misra.*
 xen/xen-cppcheck.xml
 xen/System.map
 xen/arch/x86/boot/mkelf32
@@ -318,7 +319,7 @@ xen/arch/*/efi/runtime.c
 xen/arch/*/include/asm/asm-offsets.h
 xen/common/config_data.S
 xen/common/config.gz
-xen/cppcheck-htmlreport
+xen/cppcheck-htmlreport/
 xen/include/headers*.chk
 xen/include/compat/*
 xen/include/config/
diff --git a/xen/Makefile b/xen/Makefile
index 82f5310b12..a4dce29efd 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -548,7 +548,7 @@ _clean:
 	rm -f include/asm $(TARGET) $(TARGET).gz $(TARGET).efi $(TARGET).efi.map $(TARGET)-syms $(TARGET)-syms.map
 	rm -f asm-offsets.s arch/*/include/asm/asm-offsets.h
 	rm -f .banner .allconfig.tmp include/xen/compile.h
-	rm -f xen-cppcheck.xml
+	rm -f cppcheck-misra.* xen-cppcheck.xml
 
 .PHONY: _distclean
 _distclean: clean
@@ -642,6 +642,10 @@ CPPCHECK_HTMLREPORT ?= cppcheck-htmlreport
 # build directory. This can be changed by giving a directory in this variable.
 CPPCHECK_HTMLREPORT_OUTDIR ?= cppcheck-htmlreport
 
+# By default we do not check MISRA rules; to enable the checks, pass
+# "CPPCHECK_MISRA=y" on the make command line.
+CPPCHECK_MISRA ?= n
+
 # Compile flags to pass to cppcheck:
 # - include directories and defines Xen Makefile is passing (from CFLAGS)
 # - include config.h as this is passed directly to the compiler.
@@ -666,6 +670,15 @@ CPPCHECKFILES := $(wildcard $(patsubst $(objtree)/%.o,$(srctree)/%.c, \
                  $(filter-out $(objtree)/tools/%, \
                  $(shell find $(objtree) -name "*.o"))))
 
+# Headers and files required to run cppcheck on a file
+CPPCHECKDEPS := $(objtree)/include/generated/autoconf.h \
+                $(objtree)/include/generated/compiler-def.h
+
+ifeq ($(CPPCHECK_MISRA),y)
+    CPPCHECKFLAGS += --addon=cppcheck-misra.json
+    CPPCHECKDEPS += cppcheck-misra.json
+endif
+
 quiet_cmd_cppcheck_xml = CPPCHECK $(patsubst $(srctree)/%,%,$<)
 cmd_cppcheck_xml = $(CPPCHECK) -v -q --xml $(CPPCHECKFLAGS) \
                    --output-file=$@ $<
@@ -690,7 +703,7 @@ ifeq ($(CPPCHECKFILES),)
 endif
 	$(call if_changed,merge_cppcheck_reports)
 
-$(objtree)/%.c.cppcheck: $(srctree)/%.c $(objtree)/include/generated/autoconf.h $(objtree)/include/generated/compiler-def.h | cppcheck-version
+$(objtree)/%.c.cppcheck: $(srctree)/%.c $(CPPCHECKDEPS) | cppcheck-version
 	$(call if_changed,cppcheck_xml)
 
 cppcheck-version:
@@ -703,6 +716,18 @@ cppcheck-version:
 		exit 1; \
 	fi
 
+# The list of MISRA rules to respect is documented in docs/misra/rules.rst.
+# In order to have some helpful text in the cppcheck output, generate a text
+# file containing each rule's identifier, classification and text from that
+# documentation file. Also generate a json file with the right arguments for
+# cppcheck, including the list of rules to ignore.
+#
+cppcheck-misra.txt: $(XEN_ROOT)/docs/misra/rules.rst $(srctree)/tools/convert_misra_doc.py
+	$(Q)$(srctree)/tools/convert_misra_doc.py -i $< -o $@ -j $(@:.txt=.json)
+
+# convert_misra_doc is generating both files.
+cppcheck-misra.json: cppcheck-misra.txt
+
 # Put this in generated headers this way it is cleaned by include/Makefile
 $(objtree)/include/generated/compiler-def.h:
 	$(Q)$(CC) -dM -E -o $@ - < /dev/null
diff --git a/xen/tools/convert_misra_doc.py b/xen/tools/convert_misra_doc.py
new file mode 100755
index 0000000000..47133a33a6
--- /dev/null
+++ b/xen/tools/convert_misra_doc.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python
+
+"""
+This script converts the MISRA documentation RST file into a text file
+that can be used as rule texts for cppcheck.
+Usage:
+    convert_misra_doc.py -i INPUT [-o OUTPUT] [-j JSON]
+
+    INPUT  - RST file containing the list of misra rules.
+    OUTPUT - file to store the text output to be used by cppcheck.
+             If not specified, the result will be printed to stdout.
+    JSON   - cppcheck json file to be created (optional).
+"""
+
+import sys, getopt, re
+
+def main(argv):
+    infile = ''
+    outfile = ''
+    outstr = sys.stdout
+    jsonfile = ''
+
+    try:
+        opts, args = getopt.getopt(argv,"hi:o:j:",["input=","output=","json="])
+    except getopt.GetoptError:
+        print('convert_misra_doc.py -i <input> [-o <output>] [-j <json>]')
+        sys.exit(2)
+    for opt, arg in opts:
+        if opt == '-h':
+        print('convert_misra_doc.py -i <input> [-o <output>] [-j <json>]')
+            print('  If output is not specified, print to stdout')
+            sys.exit(1)
+        elif opt in ("-i", "--input"):
+            infile = arg
+        elif opt in ("-o", "--output"):
+            outfile = arg
+        elif opt in ("-j", "--json"):
+            jsonfile = arg
+
+    try:
+        file_stream = open(infile, 'rt')
+    except:
+        print('Error opening ' + infile)
+        sys.exit(1)
+
+    if outfile:
+        try:
+            outstr = open(outfile, "w")
+        except:
+            print('Error creating ' + outfile)
+            sys.exit(1)
+
+    # Each rule starts with '- Rule: Dir' or '- Rule: Rule'
+    pattern_dir = re.compile(r'^- Rule: Dir ([0-9]+.[0-9]+).*$')
+    pattern_rule = re.compile(r'^- Rule: Rule ([0-9]+.[0-9]+).*$')
+    pattern_severity = re.compile(r'^  - Severity:  (.*)$')
+    pattern_text = re.compile(r'^  - Summary:  (.*)$')
+
+    rule_number = ''
+    rule_state  = 0
+    rule_list = []
+
+    # The cppcheck misra addon looks for this appendix header to start parsing
+    outstr.write('Appendix A Summary of guidelines\n')
+
+    for line in file_stream:
+
+        line = line.replace('\r', '').replace('\n', '')
+
+        if len(line) == 0:
+            continue
+
+        # New rule
+        if rule_state == 0:
+            res = pattern_rule.match(line)
+            if res:
+                rule_state = 1
+                rule_number = res.group(1)
+                rule_list.append(rule_number)
+                continue
+            res = pattern_dir.match(line)
+            if res:
+                rule_state = 1
+                rule_number = res.group(1)
+                rule_list.append(rule_number)
+            continue
+
+        # Severity
+        elif rule_state == 1:
+            res = pattern_severity.match(line)
+            if res:
+                outstr.write('Rule ' + rule_number + ' ' + res.group(1) + '\n')
+                rule_state = 2
+            continue
+
+        # Summary
+        elif rule_state == 2:
+            res = pattern_text.match(line)
+            if res:
+                outstr.write(res.group(1) + ' (Misra rule ' + rule_number
+                             + ')\n')
+                rule_state = 0
+                rule_number = ''
+            continue
+        else:
+            print('Error: unreachable parser state')
+
+    skip_list = []
+
+    # Search for missing rules and add a dummy text with the rule number
+    for i in list(range(1,22)):
+        for j in list(range(1,22)):
+            if str(i) + '.' + str(j) not in rule_list:
+                outstr.write('Rule ' + str(i) + '.' + str(j) + '\n')
+                outstr.write('No description for rule ' + str(i) + '.' + str(j)
+                             + '\n')
+                skip_list.append(str(i) + '.' + str(j))
+
+    # Make cppcheck happy by starting the appendix
+    outstr.write('Appendix B\n')
+    outstr.write('\n')
+    if outfile:
+        outstr.close()
+
+    if jsonfile:
+        with open(jsonfile, "w") as f:
+            f.write('{\n')
+            f.write('    "script": "misra.py",\n')
+            f.write('    "args": [\n')
+            if outfile:
+                f.write('      "--rule-texts=' + outfile + '",\n')
+
+            f.write('      "--suppress-rules=' + ",".join(skip_list) + '"\n')
+            f.write('    ]\n')
+            f.write('}\n')
+        f.close()
+
+if __name__ == "__main__":
+   main(sys.argv[1:])
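
For what it's worth, the three-state parse above can be exercised in isolation;
here is a sketch (plain Python, outside the patch) that mirrors the
Rule/Severity/Summary state machine on an invented fragment shaped like
docs/misra/rules.rst:

```python
import re

# Same patterns as convert_misra_doc.py above (dot escaped for strictness).
pattern_rule = re.compile(r'^- Rule: Rule ([0-9]+\.[0-9]+).*$')
pattern_severity = re.compile(r'^  - Severity:  (.*)$')
pattern_text = re.compile(r'^  - Summary:  (.*)$')

def parse(lines):
    out, state, number = [], 0, ''
    for line in lines:
        if not line:
            continue
        if state == 0:                       # looking for a rule header
            m = pattern_rule.match(line)
            if m:
                state, number = 1, m.group(1)
        elif state == 1:                     # looking for the severity
            m = pattern_severity.match(line)
            if m:
                out.append('Rule %s %s' % (number, m.group(1)))
                state = 2
        elif state == 2:                     # looking for the summary
            m = pattern_text.match(line)
            if m:
                out.append('%s (Misra rule %s)' % (m.group(1), number))
                state = 0
    return out

# Invented fragment, not the real rules.rst content.
snippet = [
    '- Rule: Rule 1.1',
    '  - Severity:  Required',
    '  - Summary:  The program shall not violate the standard.',
]
print(parse(snippet))
```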
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:09:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344834.570436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzF6d-00065K-BU; Thu, 09 Jun 2022 10:09:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344834.570436; Thu, 09 Jun 2022 10:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzF6d-00065D-8P; Thu, 09 Jun 2022 10:09:35 +0000
Received: by outflank-mailman (input) for mailman id 344834;
 Thu, 09 Jun 2022 10:09:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xsch=WQ=citrix.com=prvs=152c7a754=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nzF6c-000657-07
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:09:34 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 40fb8587-e7dc-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:09:32 +0200 (CEST)
Received: from mail-bn1nam07lp2046.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jun 2022 06:09:26 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CO1PR03MB5828.namprd03.prod.outlook.com (2603:10b6:303:91::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 10:09:23 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::5df3:95ce:4dfd:134e%4]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 10:09:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40fb8587-e7dc-11ec-bd2c-47488cf2e6aa
Date: Thu, 9 Jun 2022 12:09:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"Beulich, Jan" <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>,
	"Qiang, Chenyi" <chenyi.qiang@intel.com>
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Message-ID: <YqHGzuJ+D0WjaW+6@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
 <4f2c4d5b-dab8-c9d2-f4c2-b8cd44011630@intel.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4f2c4d5b-dab8-c9d2-f4c2-b8cd44011630@intel.com>
MIME-Version: 1.0

On Thu, Jun 09, 2022 at 03:39:33PM +0800, Xiaoyao Li wrote:
> On 6/9/2022 3:04 PM, Tian, Kevin wrote:
> > +Chenyi/Xiaoyao who worked on the KVM support. Presumably
> > similar opens have been discussed in KVM hence they have the
> > right background to comment here.
> > 
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: Thursday, May 26, 2022 7:12 PM
> > > 
> > > Under certain conditions guests can get the CPU stuck in an unbounded
> > > loop without the possibility of an interrupt window occurring on an
> > > instruction boundary.  This was the case with the scenarios described
> > > in XSA-156.
> > > 
> > > Make use of the Notify VM Exit mechanism, which will trigger a VM Exit
> > > if no interrupt window occurs for a specified amount of time.  Note
> > > that using the Notify VM Exit avoids having to trap #AC and #DB
> > > exceptions, as Xen is guaranteed to get a VM Exit even if the guest
> > > puts the CPU in a loop without an interrupt window; as such, disable
> > > the intercepts if the feature is available and enabled.
> > > 
> > > Setting the notify VM exit window to 0 is safe because there's a
> > > threshold added by the hardware in order to have a sane window value.
> > > 
> > > Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > ---
> > > Changes since v1:
> > >   - Properly update debug state when using notify VM exit.
> > >   - Reword commit message.
> > > ---
> > > This change enables the notify VM exit by default; KVM, however,
> > > doesn't seem to enable it by default, and there's the following note
> > > in the commit message:
> > > 
> > > "- There's a possibility, however small, that a notify VM exit happens
> > >     with VM_CONTEXT_INVALID set in exit qualification. In this case, the
> > >     vcpu can no longer run. To avoid killing a well-behaved guest, set
> > >     notify window as -1 to disable this feature by default."
> > > 
> > > It's not obviously clear to me whether the comment was meant to be:
> > > "There's a possibility, however small, that a notify VM exit _wrongly_
> > > happens with VM_CONTEXT_INVALID".
> > > 
> > > It's also not clear whether such wrong hardware behavior only affects
> > > a specific set of hardware,
> 
> I'm not sure what you mean for a specific set of hardware.
> 
> We make it off by default in KVM just in case future silicon wrongly
> sets the VM_CONTEXT_INVALID bit, because our policy is that the VM
> cannot continue running in that case.
> 
> In the worst case, if some future silicon happens to have this kind of
> silly bug, then all existing product kernels would suffer the
> possibility of their VMs being killed because the feature is on by
> default.

That's IMO a weird policy.  If there's such behavior on any hardware
platform I would assume Intel would issue an erratum, and then we would
just avoid using the feature on affected hardware (like we do with
other hardware features when they have errata).

If we applied the same logic to all new Intel features we wouldn't use
any of them.  At least in Xen there are already combinations of vmexit
conditions that will lead to the guest being killed.

> > > in a way that we could avoid enabling
> > > notify VM exit there.
> > > 
> > > There's a discussion in one of the Linux patches that 128K might be
> > > the safer value in order to prevent false positives, but I have no
> > > formal confirmation about this.  Maybe our Intel maintainers can
> > > provide some more feedback on a suitable notify VM exit window
> > > value.
> 
> The 128K value is the internal threshold for SPR silicon. The internal
> threshold is tuned by Intel for each silicon, to make sure it's big
> enough to avoid false positives even when the user sets
> vmcs.notify_window to 0.
> 
> However, it varies for different processor generations.
> 
> Which value is suitable is hard to say; it depends on how soon the VMM
> wants to intercept the VM. Anyway, Intel ensures that even the value 0
> is safe.

Ideally we need a fixed default value that's guaranteed to work on all
possible hardware that supports the feature, or alternatively a way to
calculate a sane default window based on the hardware platform.

Could we get some wording added to the ISE regarding 0 being a
suitable default value to use because hardware will add a threshold
internally to make the value safe?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:12:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:12:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344842.570448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzF9L-0007Q7-Pl; Thu, 09 Jun 2022 10:12:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344842.570448; Thu, 09 Jun 2022 10:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzF9L-0007Q0-Mf; Thu, 09 Jun 2022 10:12:23 +0000
Received: by outflank-mailman (input) for mailman id 344842;
 Thu, 09 Jun 2022 10:12:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzF9K-0007Pu-Fo
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:12:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0604.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a623cb7e-e7dc-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 12:12:21 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB5608.eurprd04.prod.outlook.com (2603:10a6:20b:a1::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:12:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:12:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a623cb7e-e7dc-11ec-b605-df0040e90b76
Message-ID: <cac6b820-31fe-10d2-50ea-7c7e14e00f06@suse.com>
Date: Thu, 9 Jun 2022 12:12:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2] xen: Add MISRA support to cppcheck make rule
Content-Language: en-US
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <56d3deee8889d1372752db3105f3a1349ef4562e.1654767188.git.bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <56d3deee8889d1372752db3105f3a1349ef4562e.1654767188.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09.06.2022 11:34, Bertrand Marquis wrote:
> The cppcheck MISRA addon can be used to check for non-compliance with
> some of the MISRA standard rules.
> 
> Add a CPPCHECK_MISRA variable that can be set to "y" on the make
> command line to generate a cppcheck report including the cppcheck
> MISRA checks.
> 
> When MISRA checking is enabled, a file with a text description suitable
> for the cppcheck MISRA addon is generated out of the Xen documentation
> file which lists the rules followed by Xen (docs/misra/rules.rst).
> 
> By default MISRA checking is turned off.
> 
> While adding the cppcheck-misra files to gitignore, also fix the
> missing / for the htmlreport gitignore entry.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in v2:
> - fix missing / for htmlreport
> - use wildcard for cppcheck-misra remove and gitignore
> - fix comment in makefile
> - fix dependencies for generation of json and txt file
> ---
>  .gitignore                     |   3 +-
>  xen/Makefile                   |  29 ++++++-
>  xen/tools/convert_misra_doc.py | 139 +++++++++++++++++++++++++++++++++
>  3 files changed, 168 insertions(+), 3 deletions(-)
>  create mode 100755 xen/tools/convert_misra_doc.py
> 
> diff --git a/.gitignore b/.gitignore
> index 18ef56a780..b106caa7a9 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -297,6 +297,7 @@ xen/.banner
>  xen/.config
>  xen/.config.old
>  xen/.xen.elf32
> +xen/cppcheck-misra.*

As said on v1, this wants to be added further down, while ...

>  xen/xen-cppcheck.xml

... this line wants moving down, either on this occasion or in a
separate change.

>  xen/System.map
>  xen/arch/x86/boot/mkelf32
> @@ -318,7 +319,7 @@ xen/arch/*/efi/runtime.c
>  xen/arch/*/include/asm/asm-offsets.h
>  xen/common/config_data.S
>  xen/common/config.gz
> -xen/cppcheck-htmlreport
> +xen/cppcheck-htmlreport/
>  xen/include/headers*.chk
>  xen/include/compat/*
>  xen/include/config/

xen/cppcheck-misra.* wants to go alongside the line you adjust, while
xen/xen-cppcheck.xml belongs yet further down.

Jan
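[Editorial note: for concreteness, the workflow the commit message above describes could be driven roughly as below. This is an illustrative sketch only: CPPCHECK_MISRA comes from the patch, while the exact target and internal invocations are assumptions based on the diffstat, since the Makefile hunks themselves aren't quoted here.]

```sh
# Hypothetical usage, per the commit message: enable MISRA checks when
# producing the cppcheck report (they are off by default).
make -C xen cppcheck CPPCHECK_MISRA=y

# Internally this is expected to roughly amount to:
#   xen/tools/convert_misra_doc.py   # docs/misra/rules.rst -> addon rule text
#   cppcheck --addon=misra ...       # run cppcheck with the generated rules
```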


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:15:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344854.570459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFCH-0008HT-BG; Thu, 09 Jun 2022 10:15:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344854.570459; Thu, 09 Jun 2022 10:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFCH-0008HM-7f; Thu, 09 Jun 2022 10:15:25 +0000
Received: by outflank-mailman (input) for mailman id 344854;
 Thu, 09 Jun 2022 10:15:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFCG-0008HG-4z
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:15:24 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0609.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12b43717-e7dd-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:15:23 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB5608.eurprd04.prod.outlook.com (2603:10a6:20b:a1::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:15:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:15:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12b43717-e7dd-11ec-bd2c-47488cf2e6aa
Message-ID: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Date: Thu, 9 Jun 2022 12:15:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v6 00/12] IOMMU: superpage support when not sharing pagetables
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

For a long time we've been rather inefficient with IOMMU page table
management when not sharing page tables, i.e. in particular for PV (and
more specifically for PV Dom0) and for AMD (where nowadays we never
share page tables). While up to about 3.5 years ago the AMD code had
logic to un-shatter page mappings, that logic was ripped out for being
buggy (XSA-275 plus follow-ons).

This series enables use of large pages in AMD and Intel (VT-d) code;
Arm is presently not in need of any enabling as pagetables are always
shared there. It also augments PV Dom0 creation with suitable explicit
IOMMU mapping calls to facilitate use of large pages there. Depending
on the amount of memory handed to Dom0, this improves boot time (the
latency until Dom0 actually starts) quite a bit; subsequent shattering
of some of the large pages may of course consume some of the saved time.

Known fallout has been spelled out here:
https://lists.xen.org/archives/html/xen-devel/2021-08/msg00781.html

I'm inclined to say "of course" there are also a few seemingly unrelated
changes included here, which along the way I came to consider necessary,
or at least desirable (in part because they had been in need of
adjustment for a long time).

See individual patches for details on the v6 changes.

01: IOMMU/x86: support freeing of pagetables
02: IOMMU/x86: new command line option to suppress use of superpage mappings
03: AMD/IOMMU: allow use of superpage mappings
04: VT-d: allow use of superpage mappings
05: x86: introduce helper for recording degree of contiguity in page tables
06: IOMMU/x86: prefill newly allocated page tables
07: AMD/IOMMU: free all-empty page tables
08: VT-d: free all-empty page tables
09: AMD/IOMMU: replace all-contiguous page tables by superpage mappings
10: VT-d: replace all-contiguous page tables by superpage mappings
11: IOMMU/x86: add perf counters for page table splitting / coalescing
12: VT-d: fold dma_pte_clear_one() into its only caller

While not directly related (except that making this mode work properly
here was a fair part of the overall work), on this occasion I'd also
like to renew my proposal to make "iommu=dom0-strict" the default going
forward. It is already not only the default, but the only possible mode
for PVH Dom0.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:16:46 2022
Message-ID: <24eb0b99-c2c4-08aa-740d-df94d2505599@suse.com>
Date: Thu, 9 Jun 2022 12:16:38 +0200
Subject: [PATCH v6 01/12] IOMMU/x86: support freeing of pagetables
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>

For vendor-specific code to support superpages we need to be able to
deal with a superpage mapping replacing an intermediate page table (or
a hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart
is needed to free individual page tables while a domain is still alive.
Since the freeing needs to be deferred until after a suitable IOTLB
flush has been performed, released page tables get queued for processing
by a tasklet.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was considering whether to use a softirq-tasklet instead. This would
have the benefit of avoiding extra scheduling operations, but would come
with the risk of the freeing happening prematurely because of a
process_pending_softirqs() somewhere.
---
v6: Extend comment on the use of process_pending_softirqs().
v5: Fix CPU_UP_PREPARE for BIGMEM. Schedule tasklet in CPU_DOWN_FAILED
    when list is not empty. Skip all processing in CPU_DEAD when list is
    empty.
v4: Change type of iommu_queue_free_pgtable()'s 1st parameter. Re-base.
v3: Call process_pending_softirqs() from free_queued_pgtables().

--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -147,6 +147,7 @@ void iommu_free_domid(domid_t domid, uns
 int __must_check iommu_free_pgtables(struct domain *d);
 struct domain_iommu;
 struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
+void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -12,6 +12,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/cpu.h>
 #include <xen/sched.h>
 #include <xen/iocap.h>
 #include <xen/iommu.h>
@@ -551,6 +552,103 @@ struct page_info *iommu_alloc_pgtable(st
     return pg;
 }
 
+/*
+ * Intermediate page tables which get replaced by large pages may only be
+ * freed after a suitable IOTLB flush. Hence such pages get queued on a
+ * per-CPU list, with a per-CPU tasklet processing the list on the assumption
+ * that the necessary IOTLB flush will have occurred by the time tasklets get
+ * to run. (List and tasklet being per-CPU has the benefit of accesses not
+ * requiring any locking.)
+ */
+static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
+static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
+
+static void free_queued_pgtables(void *arg)
+{
+    struct page_list_head *list = arg;
+    struct page_info *pg;
+    unsigned int done = 0;
+
+    while ( (pg = page_list_remove_head(list)) )
+    {
+        free_domheap_page(pg);
+
+        /*
+         * Just to be on the safe side, check for processing softirqs every
+         * once in a while.  Generally it is expected that parties queuing
+         * pages for freeing will find a need for preemption before too many
+         * pages can be queued.  Granularity of checking is somewhat arbitrary.
+         */
+        if ( !(++done & 0x1ff) )
+             process_pending_softirqs();
+    }
+}
+
+void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg)
+{
+    unsigned int cpu = smp_processor_id();
+
+    spin_lock(&hd->arch.pgtables.lock);
+    page_list_del(pg, &hd->arch.pgtables.list);
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
+
+    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+}
+
+static int cf_check cpu_callback(
+    struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+    unsigned int cpu = (unsigned long)hcpu;
+    struct page_list_head *list = &per_cpu(free_pgt_list, cpu);
+    struct tasklet *tasklet = &per_cpu(free_pgt_tasklet, cpu);
+
+    switch ( action )
+    {
+    case CPU_DOWN_PREPARE:
+        tasklet_kill(tasklet);
+        break;
+
+    case CPU_DEAD:
+        if ( !page_list_empty(list) )
+        {
+            page_list_splice(list, &this_cpu(free_pgt_list));
+            INIT_PAGE_LIST_HEAD(list);
+            tasklet_schedule(&this_cpu(free_pgt_tasklet));
+        }
+        break;
+
+    case CPU_UP_PREPARE:
+        INIT_PAGE_LIST_HEAD(list);
+        fallthrough;
+    case CPU_DOWN_FAILED:
+        tasklet_init(tasklet, free_queued_pgtables, list);
+        if ( !page_list_empty(list) )
+            tasklet_schedule(tasklet);
+        break;
+    }
+
+    return NOTIFY_DONE;
+}
+
+static struct notifier_block cpu_nfb = {
+    .notifier_call = cpu_callback,
+};
+
+static int __init cf_check bsp_init(void)
+{
+    if ( iommu_enabled )
+    {
+        cpu_callback(&cpu_nfb, CPU_UP_PREPARE,
+                     (void *)(unsigned long)smp_processor_id());
+        register_cpu_notifier(&cpu_nfb);
+    }
+
+    return 0;
+}
+presmp_initcall(bsp_init);
+
 bool arch_iommu_use_permitted(const struct domain *d)
 {
     /*



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:16:53 2022
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Date: Thu, 9 Jun 2022 10:16:37 +0000
Message-ID: <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
In-Reply-To: <20220601165909.46588-2-anthony.perard@citrix.com>

Hi Anthony,

> On 1 Jun 2022, at 17:59, Anthony PERARD <anthony.perard@citrix.com> wrote:
>=20
> Use "define" for the headers*_chk commands as otherwise the "#"
> is interpreted as a comment and make can't find the end of
> $(foreach,).
>=20
> Adding several .PRECIOUS as without them `make` deletes the
> intermediate targets. This is an issue because the macro $(if_changed,)
> check if the target exist in order to decide whether to recreate the
> target.
>=20
> Removing the call to `mkdir` from the commands. Those aren't needed
> anymore because a rune in Rules.mk creates the directory for each
> $(targets).
>=20
> Remove "export PYTHON" as it is already exported.

With this change, compiling for x86 now fails with:
CHK     include/headers99.chk
make[9]: execvp: /bin/sh: Argument list too long
make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127

I am not quite sure why yet, but I wanted to flag it early as others might be
impacted.

Arm and arm64 builds are not impacted.
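For anyone digging into the failure above: make hands each recipe to a single `/bin/sh -c` invocation, so `execvp: /bin/sh: Argument list too long` means the generated command string exceeded the kernel's exec(2) limits. The relevant limits on a given build host can be inspected with:

```shell
# Overall argv + environment budget for exec(2), in bytes (POSIX getconf).
getconf ARG_MAX
# On Linux, each single argument string (such as the one big 'sh -c'
# command make builds) is additionally capped at MAX_ARG_STRLEN,
# defined as 32 * PAGE_SIZE.
echo $(( 32 * $(getconf PAGE_SIZE) ))
```

With 4k pages the per-string cap is 128KiB, which a `$(foreach ...)` over every public header can plausibly exceed on x86 while staying under it on Arm.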

Cheers
Bertrand

>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> xen/include/Makefile | 108 ++++++++++++++++++++++++++++++-------------
> 1 file changed, 76 insertions(+), 32 deletions(-)
>
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 03baf10efb..6d9bcc19b0 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -45,38 +45,65 @@ public-$(CONFIG_ARM) := $(wildcard $(srcdir)/public/arch-arm/*.h $(srcdir)/publi
> .PHONY: all
> all: $(addprefix $(obj)/,$(headers-y))
>
> -$(obj)/compat/%.h: $(obj)/compat/%.i $(srcdir)/Makefile $(srctree)/tools/compat-build-header.py
> -	$(PYTHON) $(srctree)/tools/compat-build-header.py <$< $(patsubst $(obj)/%,%,$@) >>$@.new; \
> -	mv -f $@.new $@
> -
> -$(obj)/compat/%.i: $(obj)/compat/%.c $(srcdir)/Makefile
> -	$(CPP) $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $<
> -
> -$(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srcdir)/Makefile $(srctree)/tools/compat-build-source.py
> -	mkdir -p $(@D)
> -	$(PYTHON) $(srctree)/tools/compat-build-source.py $(srcdir)/xlat.lst <$< >$@.new
> -	mv -f $@.new $@
> -
> -$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh $(srcdir)/Makefile
> -	export PYTHON=$(PYTHON); \
> -	while read what name; do \
> -		$(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
> -	done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new
> -	mv -f $@.new $@
> +quiet_cmd_compat_h = GEN     $@
> +cmd_compat_h = \
> +    $(PYTHON) $(srctree)/tools/compat-build-header.py <$< $(patsubst $(obj)/%,%,$@) >>$@.new; \
> +    mv -f $@.new $@
> +
> +quiet_cmd_compat_i = CPP     $@
> +cmd_compat_i = $(CPP) $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $<
> +
> +quiet_cmd_compat_c = GEN     $@
> +cmd_compat_c = \
> +   $(PYTHON) $(srctree)/tools/compat-build-source.py $(srcdir)/xlat.lst <$< >$@.new; \
> +   mv -f $@.new $@
> +
> +quiet_cmd_xlat_headers = GEN     $@
> +cmd_xlat_headers = \
> +    while read what name; do \
> +        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
> +    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
> +    mv -f $@.new $@
> +
> +targets += $(headers-y)
> +$(obj)/compat/%.h: $(obj)/compat/%.i $(srctree)/tools/compat-build-header.py FORCE
> +	$(call if_changed,compat_h)
> +
> +.PRECIOUS: $(obj)/compat/%.i
> +targets += $(patsubst %.h, %.i, $(headers-y))
> +$(obj)/compat/%.i: $(obj)/compat/%.c FORCE
> +	$(call if_changed,compat_i)
> +
> +.PRECIOUS: $(obj)/compat/%.c
> +targets += $(patsubst %.h, %.c, $(headers-y))
> +$(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-build-source.py FORCE
> +	$(call if_changed,compat_c)
> +
> +targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
> +$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
> +	$(call if_changed,xlat_headers)
> +
> +quiet_cmd_xlat_lst = GEN     $@
> +cmd_xlat_lst = \
> +	grep -v '^[[:blank:]]*$(pound)' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' >$@.new; \
> +	$(call move-if-changed,$@.new,$@)
>
> .PRECIOUS: $(obj)/compat/.xlat/%.lst
> -$(obj)/compat/.xlat/%.lst: $(srcdir)/xlat.lst $(srcdir)/Makefile
> -	mkdir -p $(@D)
> -	grep -v '^[[:blank:]]*#' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' >$@.new
> -	$(call move-if-changed,$@.new,$@)
> +targets += $(patsubst compat/%.h, compat/.xlat/%.lst, $(headers-y))
> +$(obj)/compat/.xlat/%.lst: $(srcdir)/xlat.lst FORCE
> +	$(call if_changed,xlat_lst)
>
> xlat-y := $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,^[?!][[:blank:]]+[^[:blank:]]+[[:blank:]]+,,p' $(srcdir)/xlat.lst | uniq)
> xlat-y := $(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))
>
> -$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf $(srcdir)/Makefile
> -	cat $(filter %.h,$^) >$@.new
> +quiet_cmd_xlat_h = GEN     $@
> +cmd_xlat_h = \
> +	cat $(filter %.h,$^) >$@.new; \
> 	mv -f $@.new $@
>
> +$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf FORCE
> +	$(call if_changed,xlat_h)
> +
> ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
>
> all: $(obj)/headers.chk $(obj)/headers99.chk $(obj)/headers++.chk
> @@ -102,27 +129,31 @@ PUBLIC_C99_HEADERS := $(call public-filter-headers,public-c99-headers)
> $(src)/public/io/9pfs.h-prereq := string
> $(src)/public/io/pvcalls.h-prereq := string
>
> -$(obj)/headers.chk: $(PUBLIC_ANSI_HEADERS) $(srcdir)/Makefile
> +quiet_cmd_header_chk = CHK     $@
> +cmd_header_chk = \
> 	for i in $(filter %.h,$^); do \
> 	    $(CC) -x c -ansi -Wall -Werror -include stdint.h \
> 	          -S -o /dev/null $$i || exit 1; \
> 	    echo $$i; \
> -	done >$@.new
> +	done >$@.new; \
> 	mv $@.new $@
>
> -$(obj)/headers99.chk: $(PUBLIC_C99_HEADERS) $(srcdir)/Makefile
> -	rm -f $@.new
> +quiet_cmd_headers99_chk = CHK     $@
> +define cmd_headers99_chk
> +	rm -f $@.new; \
> 	$(foreach i, $(filter %.h,$^),                                        \
> 	    echo "#include "\"$(i)\"                                          \
> 	    | $(CC) -x c -std=c99 -Wall -Werror                               \
> 	      -include stdint.h                                               \
> 	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include $(j).h) \
> 	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;)
> +	    || exit $$?; echo $(i) >> $@.new;) \
> 	mv $@.new $@
> +endef
>
> -$(obj)/headers++.chk: $(PUBLIC_HEADERS) $(srcdir)/Makefile
> -	rm -f $@.new
> +quiet_cmd_headerscxx_chk = CHK     $@
> +define cmd_headerscxx_chk
> +	rm -f $@.new; \
> 	if ! $(CXX) -v >/dev/null 2>&1; then                                  \
> 	    touch $@.new;                                                     \
> 	    exit 0;                                                           \
> @@ -133,8 +164,21 @@ $(obj)/headers++.chk: $(PUBLIC_HEADERS) $(srcdir)/Makefile
> 	      -include stdint.h -include $(srcdir)/public/xen.h               \
> 	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> 	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;)
> +	    || exit $$?; echo $(i) >> $@.new;) \
> 	mv $@.new $@
> +endef
> +
> +targets += headers.chk
> +$(obj)/headers.chk: $(PUBLIC_ANSI_HEADERS) FORCE
> +	$(call if_changed,header_chk)
> +
> +targets += headers99.chk
> +$(obj)/headers99.chk: $(PUBLIC_C99_HEADERS) FORCE
> +	$(call if_changed,headers99_chk)
> +
> +targets += headers++.chk
> +$(obj)/headers++.chk: $(PUBLIC_HEADERS) FORCE
> +	$(call if_changed,headerscxx_chk)
>
> endif
>
> --
> Anthony PERARD
>
>



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:17:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344878.570492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFEg-0001eh-Gw; Thu, 09 Jun 2022 10:17:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344878.570492; Thu, 09 Jun 2022 10:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFEg-0001ea-Ds; Thu, 09 Jun 2022 10:17:54 +0000
Received: by outflank-mailman (input) for mailman id 344878;
 Thu, 09 Jun 2022 10:17:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFEf-0001eG-3a
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:17:53 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b31da17-e7dd-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:17:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB6297.eurprd04.prod.outlook.com (2603:10a6:10:cd::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 10:17:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:17:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b31da17-e7dd-11ec-bd2c-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ffe69943-1d65-9d04-f167-f0d61e04a9fb@suse.com>
Date: Thu, 9 Jun 2022 12:17:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 03/12] AMD/IOMMU: allow use of superpage mappings
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS8PR04CA0009.eurprd04.prod.outlook.com
 (2603:10a6:20b:310::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d7e8f5f5-7357-4601-53f6-08da4a014e56
X-MS-TrafficTypeDiagnostic: DBBPR04MB6297:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7e8f5f5-7357-4601-53f6-08da4a014e56
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 10:17:49.7143
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB6297

No separate feature flags exist which would control availability of
these; the only restriction is HATS (establishing the maximum number of
page table levels in general), and even that has a lower bound of 4.
Thus we can unconditionally announce 2M and 1G mappings. (Via non-
default page sizes the implementation in principle permits arbitrary
size mappings, but these require multiple identical leaf PTEs to be
written, which isn't all that different from having to write multiple
consecutive PTEs with increasing frame numbers. IMO that's therefore
beneficial only on hardware where suitable TLBs exist; I'm unaware of
such hardware.)

Note that in principle 512G and 256T mappings could also be supported
right away, but the freeing of page tables (to be introduced in
subsequent patches) when replacing a sufficiently populated tree with a
single huge page would need suitable preemption, which will require
extra work.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Drop PAGE_SIZE_512G. In amd_iommu_{,un}map_page() assert page order
    is supported.
v4: Change type of queue_free_pt()'s 1st parameter. Re-base.
v3: Rename queue_free_pt()'s last parameter. Replace "level > 1" checks
    where possible.
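As a quick cross-check of the order-to-level computation the patch introduces (level = order / PTE_PER_TABLE_SHIFT + 1, where AMD IOMMU page tables hold 512 PTEs and thus resolve 9 address bits per level), a throwaway sketch:

```shell
# A mapping of order N (log2 of the number of 4k pages) is installed at
# page-table level N/9 + 1: order 0 (4k) -> level 1, order 9 (2M) -> level 2,
# order 18 (1G) -> level 3.
level() { echo $(( $1 / 9 + 1 )); }
for order in 0 9 18; do level "$order"; done
```

This is also why 512G (order 27, level 4) would work arithmetically; it is only the page-table freeing preemption mentioned above that holds it back.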

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -32,12 +32,13 @@ static unsigned int pfn_to_pde_idx(unsig
 }
 
 static union amd_iommu_pte clear_iommu_pte_present(unsigned long l1_mfn,
-                                                   unsigned long dfn)
+                                                   unsigned long dfn,
+                                                   unsigned int level)
 {
     union amd_iommu_pte *table, *pte, old;
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = &table[pfn_to_pde_idx(dfn, 1)];
+    pte = &table[pfn_to_pde_idx(dfn, level)];
     old = *pte;
 
     write_atomic(&pte->raw, 0);
@@ -351,15 +352,39 @@ static int iommu_pde_from_dfn(struct dom
     return 0;
 }
 
+static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+{
+    if ( level > 1 )
+    {
+        union amd_iommu_pte *pt = map_domain_page(mfn);
+        unsigned int i;
+
+        for ( i = 0; i < PTE_PER_TABLE_SIZE; ++i )
+            if ( pt[i].pr && pt[i].next_level )
+            {
+                ASSERT(pt[i].next_level < level);
+                queue_free_pt(hd, _mfn(pt[i].mfn), pt[i].next_level);
+            }
+
+        unmap_domain_page(pt);
+    }
+
+    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+}
+
 int cf_check amd_iommu_map_page(
     struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
     unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    unsigned int level = (IOMMUF_order(flags) / PTE_PER_TABLE_SHIFT) + 1;
     int rc;
     unsigned long pt_mfn = 0;
     union amd_iommu_pte old;
 
+    ASSERT((hd->platform_ops->page_sizes >> IOMMUF_order(flags)) &
+           PAGE_SIZE_4K);
+
     spin_lock(&hd->arch.mapping_lock);
 
     /*
@@ -384,7 +409,7 @@ int cf_check amd_iommu_map_page(
         return rc;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, flush_flags, true) ||
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags, true) ||
          !pt_mfn )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -394,8 +419,8 @@ int cf_check amd_iommu_map_page(
         return -EFAULT;
     }
 
-    /* Install 4k mapping */
-    old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), 1,
+    /* Install mapping */
+    old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), level,
                                 (flags & IOMMUF_writable),
                                 (flags & IOMMUF_readable));
 
@@ -403,8 +428,13 @@ int cf_check amd_iommu_map_page(
 
     *flush_flags |= IOMMU_FLUSHF_added;
     if ( old.pr )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( IOMMUF_order(flags) && old.next_level )
+            queue_free_pt(hd, _mfn(old.mfn), old.next_level);
+    }
+
     return 0;
 }
 
@@ -413,8 +443,15 @@ int cf_check amd_iommu_unmap_page(
 {
     unsigned long pt_mfn = 0;
     struct domain_iommu *hd = dom_iommu(d);
+    unsigned int level = (order / PTE_PER_TABLE_SHIFT) + 1;
     union amd_iommu_pte old = {};
 
+    /*
+     * While really we could unmap at any granularity, for now we assume unmaps
+     * are issued by common code only at the same granularity as maps.
+     */
+    ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+
     spin_lock(&hd->arch.mapping_lock);
 
     if ( !hd->arch.amd.root_table )
@@ -423,7 +460,7 @@ int cf_check amd_iommu_unmap_page(
         return 0;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, flush_flags, false) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags, false) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_ERROR("invalid IO pagetable entry dfn = %"PRI_dfn"\n",
@@ -435,14 +472,19 @@ int cf_check amd_iommu_unmap_page(
     if ( pt_mfn )
     {
         /* Mark PTE as 'page not present'. */
-        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn));
+        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
 
     if ( old.pr )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( order && old.next_level )
+            queue_free_pt(hd, _mfn(old.mfn), old.next_level);
+    }
+
     return 0;
 }
 
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -747,7 +747,7 @@ static void cf_check amd_dump_page_table
 }
 
 static const struct iommu_ops __initconst_cf_clobber _iommu_ops = {
-    .page_sizes = PAGE_SIZE_4K,
+    .page_sizes = PAGE_SIZE_4K | PAGE_SIZE_2M | PAGE_SIZE_1G,
     .init = amd_iommu_domain_init,
     .hwdom_init = amd_iommu_hwdom_init,
     .quarantine_init = amd_iommu_quarantine_init,
--- a/xen/include/xen/page-defs.h
+++ b/xen/include/xen/page-defs.h
@@ -21,4 +21,14 @@
 #define PAGE_MASK_64K               PAGE_MASK_GRAN(64K)
 #define PAGE_ALIGN_64K(addr)        PAGE_ALIGN_GRAN(64K, addr)
 
+#define PAGE_SHIFT_2M               21
+#define PAGE_SIZE_2M                PAGE_SIZE_GRAN(2M)
+#define PAGE_MASK_2M                PAGE_MASK_GRAN(2M)
+#define PAGE_ALIGN_2M(addr)         PAGE_ALIGN_GRAN(2M, addr)
+
+#define PAGE_SHIFT_1G               30
+#define PAGE_SIZE_1G                PAGE_SIZE_GRAN(1G)
+#define PAGE_MASK_1G                PAGE_MASK_GRAN(1G)
+#define PAGE_ALIGN_1G(addr)         PAGE_ALIGN_GRAN(1G, addr)
+
 #endif /* __XEN_PAGE_DEFS_H__ */
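The new 2M/1G constants follow the same arithmetic as the existing 4k/16k/64k granularities: size = 1 << shift, mask = -size, and alignment rounds an address up to the next boundary. A quick sanity check of the values (plain numbers here, not the token-pasting PAGE_*_GRAN macros themselves):

```shell
# PAGE_SIZE_2M / PAGE_SIZE_1G as plain numbers, plus the round-up performed
# by the PAGE_ALIGN_* helpers: align(addr) = (addr + size - 1) & -size.
size_2m=$(( 1 << 21 ))
size_1g=$(( 1 << 30 ))
align() { echo $(( ( $1 + $2 - 1 ) & -$2 )); }
echo "$size_2m $size_1g"   # 2097152 1073741824
align 1 "$size_2m"         # rounds 1 up to 2097152
```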



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:18:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344881.570503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFF1-00027r-PK; Thu, 09 Jun 2022 10:18:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344881.570503; Thu, 09 Jun 2022 10:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFF1-00027h-MF; Thu, 09 Jun 2022 10:18:15 +0000
Received: by outflank-mailman (input) for mailman id 344881;
 Thu, 09 Jun 2022 10:18:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFF0-000264-9E
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:18:14 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77eaa28c-e7dd-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 12:18:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8790.eurprd04.prod.outlook.com (2603:10a6:10:2e1::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:18:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:18:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77eaa28c-e7dd-11ec-b605-df0040e90b76
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a2a3e240-115e-4c76-7fd9-61cdcebafd13@suse.com>
Date: Thu, 9 Jun 2022 12:18:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 04/12] VT-d: allow use of superpage mappings
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR04CA0131.eurprd04.prod.outlook.com
 (2603:10a6:20b:531::28) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ff67f553-29e7-400a-d708-08da4a015afb
X-MS-TrafficTypeDiagnostic: DU2PR04MB8790:EE_
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SFMwOEVtQUZLU3hlVXFHc3dya2ZocGEweDN1bWd5S3U4UFJNS092QXlVS0Fn?=
 =?utf-8?B?eWUvaE9mM1BETC9uM0swcGNnMEp2WmFRQ3l2c1BlWVVOSUY4Vk1sWDU0YjV2?=
 =?utf-8?B?SFhJcVBjUmlNUmRHa1VVanhaaHZTZG55VnpQOGtTMHpiWGliUFo5SWMyQlpj?=
 =?utf-8?B?ODJwNkxHSFloSlZvdGNKcVorbURkY080UC93d1h5Z0ZRTFUwendkcU5kZWwv?=
 =?utf-8?B?QzVaZS9HRkZWVU85SFhqUlh4U3BNTS9XbjhQU0drUWdEL252SnBocVZBT3hR?=
 =?utf-8?B?RURrK0VyRHlMMFpNclpoMVFnOEhLSjBCMDhQdUc2c1BaeDVoOWk2SnBEVll2?=
 =?utf-8?B?cURlTDFoakExTFBFdlZ0UnZvSHNMRFR3bUJNZ0VCeVJxblRxbHZ4SW9HZWwv?=
 =?utf-8?B?UGJrT21DbVUrdE8zTUwxZlRPdjRJZHZpc3dsNUFZbngxVDZlQTd5d1FWQUli?=
 =?utf-8?B?dVpFSzZobFBvOHNCY1QyYWVibVZ0d1N2MlVub2NVYTQwMUlsOGtBUFdLcktv?=
 =?utf-8?B?QkpNRk1Ic3MyUjduU2sxZkpoRVhqZnNqUFJqelBxSGR6QTIveE5tdWNZem0x?=
 =?utf-8?B?aFlFenRueWRsclcxTG9mYzRlYnAxR1puTXB6cFdrd2hRQTB1QVpLTy9BRHRM?=
 =?utf-8?B?MjhJSTN6b3AxRDhUVG82cXlMMzBxSkc0NGpwK2p2VTdzQlZMemtoWUh4OFRy?=
 =?utf-8?B?bUtPRGVGTVdocTJubG44aUZ3TEFGOCtIenhXdkhpcTVnMTlabm56NENnVE9F?=
 =?utf-8?B?YXFsOXNubklBSnE0Y055NUhJYjkrcG4raUQ3SklxWWFZRzNTeDNNRHdhcTN2?=
 =?utf-8?B?eTRJY1pUczNHTDJXcDRmbFZLQ2Y3WU9Ed2pqRkl5NFU3UTNCbUo5TURDMkxH?=
 =?utf-8?B?aXgwclVBeHArekNaK3A3WlU4empENkw3ZWs2enExYnAwb0V1ZUFXZVVQb0VB?=
 =?utf-8?B?bHFNU2pibnY0anBleTBnSkh3UEprby92L0VtU0hQaVc5cGhqby95S3hCamhi?=
 =?utf-8?B?enVOUWJmdUJiNkgzQTB6d2VsTFV6UWdHVHozaWJadGV6dWtKdWpOV1lDamhw?=
 =?utf-8?B?YVpwL3l6R3Z3SjIxd1ZyUFVTbktoTTlYR296S1NBSVdsNGF2UnZOdHgrc0FQ?=
 =?utf-8?B?RnZMNzRHWWFHSVIyUUE0SDRKN3ZVbmxHaFlJRU40YytieUZVQm5qUnl0TWRZ?=
 =?utf-8?B?VTZZQVNFaW1qckh2dk5mYzR4ZGt1eEVlcnZNaGsvdkVQR24rSnJuQmxGTmVa?=
 =?utf-8?B?VG1hT0JxQ21TSHVjMzZPN0tQU09tQ2QyemUwY29hNUdkRTBZVSsxRElVektW?=
 =?utf-8?B?b2pTazA4dVlNZWg0SUxEbTdCQ2lwaGhJTkNFQVJZSW1paGVWam9NV2lFV3Vx?=
 =?utf-8?B?RHNzcjA3elU0ZFJQS3dFenR5M1lMcmdpeFBHOWRmQzYycUVqLzdGUC9nZlAz?=
 =?utf-8?B?ekVNeC9oZmhFQXJJOTBsOHE3M05UUnJ2TkU2WWFlSVpHY3U4VXNmZXd5THNP?=
 =?utf-8?B?emFIdERlUDNLbDdIdnFOWW9la1VBVlJrTW51d1AvbmNzYmorbEx4ZXlNeVpk?=
 =?utf-8?B?VUgweFdwSkExQ2tGRXM2Q0JONEVleE1RcS80NlhuUXlQdnlHQ1hxOUxRanh6?=
 =?utf-8?B?WDZHaEp4a1E1dmEvbmhKbUhJYTNPZ1JxYmtBaVhQbmhDSytmd1BqNWtRb2w3?=
 =?utf-8?B?eEx4Z2s5UGNKZzlTVW9nU1dxcWhFT0NmZHNjMW5SWGc1bGVPRjJXaEx6WFlZ?=
 =?utf-8?B?TjNDUEk2M1lEZnF5NnBHSHYyUDRMWGtLMFJFdE80cEJKQUtaa2JWeXpOWU9o?=
 =?utf-8?B?MENBYUtFNWM3WFBJVjZrQlZvb0pWRHVFWnUxM2tZNEhEakVkNmtNSkhZMERW?=
 =?utf-8?B?SzBSdVVGVDJNS2JOWnZEU2N4MzFKZlBCZUNsNnB3dE5rT05LYUIxNk0zOEYz?=
 =?utf-8?B?d2JsU2Z5dE00YW1QTVVYcHdjUlczaW1EWHM4Ykc3VUQxQm9jVEdseENWMkJI?=
 =?utf-8?B?UVV0cHFkTEsybkxYc3QzVWxrd2VGcUtFWlZHeTlzUkpMbjhCYlhYSFhzQzdo?=
 =?utf-8?B?SVU2V21jNlZUZmwwa2dFZW93ZzlvSlpiRXczd0xaL1RlWktqeFhVMWswUWRq?=
 =?utf-8?B?a1NhMXlkakFGOHFMSE15N1RhSFlsVytJRDFwQnBLOGVmK0NlcERwa1JsSG1N?=
 =?utf-8?B?NkZVMlVxQ3UrUE5MVnp0UmthV3pRZHV6TGFLb1RHSDBhVzdJKzhzVFpMOTdI?=
 =?utf-8?B?WmpFWEhYK0hXUm9QSkhBV1dQc3AyKzlqT1RnUkJjWDBJVmVNWUxlaFNpVVBo?=
 =?utf-8?B?eHQ5VFc0bDN3aDdDQ1dCbHBjbzd4Z3ZYSlpqenlqcnFvNkFmaTdWdz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ff67f553-29e7-400a-d708-08da4a015afb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 10:18:10.9629
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8790

... depending on feature availability (and absence of quirks).

Also make the page table dumping function aware of superpages.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v6: Re-base over addition of "iommu=no-superpages" command line option.
v5: In intel_iommu_{,un}map_page() assert page order is supported.
v4: Change type of queue_free_pt()'s 1st parameter. Re-base.
v3: Rename queue_free_pt()'s last parameter. Replace "level > 1" checks
    where possible. Tighten assertion.
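As context for the hunks below, the order/level arithmetic can be sketched in isolation (a hypothetical plain-Python model, not Xen code: VT-d page tables have 512 entries per level, hence LEVEL_STRIDE == 9, and the ASSERT()s added to the map/unmap hooks check the supported-size bitmap this way):

```python
# Hypothetical model of the arithmetic used by dma_pte_clear_one() /
# intel_iommu_map_page(): a mapping of order 0, 9, or 18 corresponds to
# a 4K, 2M, or 1G mapping held at page-table level 1, 2, or 3.
LEVEL_STRIDE = 9           # log2 of the 512 entries per VT-d page table
PAGE_SIZE_4K = 1 << 12

def order_to_level(order):
    """Page-table level at which a PTE of the given mapping order lives."""
    return order // LEVEL_STRIDE + 1

def order_supported(page_sizes, order):
    """Model of the assertion: shifting the supported-size bitmap right
    by the order must leave the 4K bit set for the order to be usable."""
    return bool((page_sizes >> order) & PAGE_SIZE_4K)

# With 4K, 2M, and 1G all supported:
sizes = (1 << 12) | (1 << 21) | (1 << 30)
```

Orders in between (e.g. 5) fail the check, which is why only exact multiples of LEVEL_STRIDE are accepted.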

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -779,18 +779,37 @@ static int __must_check cf_check iommu_f
     return ret;
 }
 
+static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+{
+    if ( level > 1 )
+    {
+        struct dma_pte *pt = map_domain_page(mfn);
+        unsigned int i;
+
+        for ( i = 0; i < PTE_NUM; ++i )
+            if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) )
+                queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(pt[i])),
+                              level - 1);
+
+        unmap_domain_page(pt);
+    }
+
+    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+}
+
 /* clear one page's page table */
 static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
                              unsigned int order,
                              unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(domain);
-    struct dma_pte *page = NULL, *pte = NULL;
+    struct dma_pte *page = NULL, *pte = NULL, old;
     u64 pg_maddr;
+    unsigned int level = (order / LEVEL_STRIDE) + 1;
 
     spin_lock(&hd->arch.mapping_lock);
-    /* get last level pte */
-    pg_maddr = addr_to_dma_page_maddr(domain, addr, 1, flush_flags, false);
+    /* get target level pte */
+    pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags, false);
     if ( pg_maddr < PAGE_SIZE )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -798,7 +817,7 @@ static int dma_pte_clear_one(struct doma
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + address_level_offset(addr, 1);
+    pte = &page[address_level_offset(addr, level)];
 
     if ( !dma_pte_present(*pte) )
     {
@@ -807,14 +826,20 @@ static int dma_pte_clear_one(struct doma
         return 0;
     }
 
+    old = *pte;
     dma_clear_pte(*pte);
-    *flush_flags |= IOMMU_FLUSHF_modified;
 
     spin_unlock(&hd->arch.mapping_lock);
     iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
 
+    *flush_flags |= IOMMU_FLUSHF_modified;
+
+    if ( order && !dma_pte_superpage(old) )
+        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                      order / LEVEL_STRIDE);
+
     return 0;
 }
 
@@ -2092,8 +2117,12 @@ static int __must_check cf_check intel_i
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page, *pte, old, new = {};
     u64 pg_maddr;
+    unsigned int level = (IOMMUF_order(flags) / LEVEL_STRIDE) + 1;
     int rc = 0;
 
+    ASSERT((hd->platform_ops->page_sizes >> IOMMUF_order(flags)) &
+           PAGE_SIZE_4K);
+
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
         return 0;
@@ -2116,7 +2145,7 @@ static int __must_check cf_check intel_i
         return 0;
     }
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1, flush_flags,
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level, flush_flags,
                                       true);
     if ( pg_maddr < PAGE_SIZE )
     {
@@ -2125,13 +2154,15 @@ static int __must_check cf_check intel_i
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = &page[dfn_x(dfn) & LEVEL_MASK];
+    pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
     old = *pte;
 
     dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
                      ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+    if ( IOMMUF_order(flags) )
+        dma_set_pte_superpage(new);
 
     /* Set the SNP on leaf page table if Snoop Control available */
     if ( iommu_snoop )
@@ -2152,14 +2183,26 @@ static int __must_check cf_check intel_i
 
     *flush_flags |= IOMMU_FLUSHF_added;
     if ( dma_pte_present(old) )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( IOMMUF_order(flags) && !dma_pte_superpage(old) )
+            queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                          IOMMUF_order(flags) / LEVEL_STRIDE);
+    }
+
     return rc;
 }
 
 static int __must_check cf_check intel_iommu_unmap_page(
     struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
 {
+    /*
+     * While really we could unmap at any granularity, for now we assume unmaps
+     * are issued by common code only at the same granularity as maps.
+     */
+    ASSERT((dom_iommu(d)->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
         return 0;
@@ -2515,6 +2558,7 @@ static int __init cf_check vtd_setup(voi
 {
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
+    unsigned int large_sizes = iommu_superpages ? PAGE_SIZE_2M | PAGE_SIZE_1G : 0;
     int ret;
     bool reg_inval_supported = true;
 
@@ -2557,6 +2601,11 @@ static int __init cf_check vtd_setup(voi
                cap_sps_2mb(iommu->cap) ? ", 2MB" : "",
                cap_sps_1gb(iommu->cap) ? ", 1GB" : "");
 
+        if ( !cap_sps_2mb(iommu->cap) )
+            large_sizes &= ~PAGE_SIZE_2M;
+        if ( !cap_sps_1gb(iommu->cap) )
+            large_sizes &= ~PAGE_SIZE_1G;
+
 #ifndef iommu_snoop
         if ( iommu_snoop && !ecap_snp_ctl(iommu->ecap) )
             iommu_snoop = false;
@@ -2628,6 +2677,9 @@ static int __init cf_check vtd_setup(voi
     if ( ret )
         goto error;
 
+    ASSERT(iommu_ops.page_sizes == PAGE_SIZE_4K);
+    iommu_ops.page_sizes |= large_sizes;
+
     register_keyhandler('V', vtd_dump_iommu_info, "dump iommu info", 1);
 
     return 0;
@@ -2960,7 +3012,7 @@ static void vtd_dump_page_table_level(pa
             continue;
 
         address = gpa + offset_level_address(i, level);
-        if ( next_level >= 1 ) 
+        if ( next_level && !dma_pte_superpage(*pte) )
             vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
                                       address, indent + 1);
         else



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:18:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344895.570514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFFZ-0002qj-7k; Thu, 09 Jun 2022 10:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344895.570514; Thu, 09 Jun 2022 10:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFFZ-0002qc-40; Thu, 09 Jun 2022 10:18:49 +0000
Received: by outflank-mailman (input) for mailman id 344895;
 Thu, 09 Jun 2022 10:18:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFFX-000264-OF
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:18:47 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c3db64c-e7dd-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 12:18:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8790.eurprd04.prod.outlook.com (2603:10a6:10:2e1::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:18:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:18:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c3db64c-e7dd-11ec-b605-df0040e90b76
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KzbRGGGfZwiRpIwszqMeYG8MPArqTM5DUedH7Qsip/Lvz9fntrSo5qK6j3Y3+iNMm7flGN8k5VZWUcmNjAVHOMpFjYdxJNH9a3sfksECBRoGZzNOOUDvichbN0SdFfgcht3B+Jb3xVA8dM1h5C4ECSW9rKmJatknbn2g/uIsalNo65OSOQvSxPYcUpcRU7Eik3kEIJJEDzt12bgHFWCmg8J7hcIHy+z9VQvBWJgx7aHlcxedXqpuBj7D+Fnems3v20HzUtx5tOWT8t3ClO4s8zxuFlzLnRuPpkh6yySlTS0LHnIszx+TYIil4c7zpP8/DA+PB/MARAVrmIxAdxC1hQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nzH6EFMW+e3+Xs7xettJZg30tav9rjlAdUzEqGTjlFU=;
 b=WZq5xNk8k/aZ3WAC/pYR7TnmZ9xZMUSFIMmIaPf57q9RAoDl1q3HjOqTpa8QBgos0DCAKwF1OsEHzHpNIdDh1bDsLbJiAVmZMOUox8USRh5GUdERceYfuXJUbOxUss4dtE2OLJF0NVBHkO3jGlma14Kl9WmfNgNzUn4flyvGSc2bPhBYVA+H710l6I1boKDcJT9MctbhldV3gbiPyAkxWw9YgYLczhfZlQjKPHrJ0q5e80l55A32363DYcp9p0IjfUlZOpuSmuoQW4LpqLPXUqvGUSuPItzbeKdOU3qzm4CpMqF9XHfMLafTLzwC6eAL2eaHYyREMLyLLj9Tvcbpww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzH6EFMW+e3+Xs7xettJZg30tav9rjlAdUzEqGTjlFU=;
 b=4ZIAI3IbTXjtYrTEodr/B8hdhaiWqSTIbRjV4Z4RitEQjM5oW1qIzcdvimWGLFdUreD28VPzgpq8m6BSTjTgUY9VlYtFkny9QuQ1zQ+pwvWQagjc0LagpCDTP2l/CSJDnJcYPiR2R4Wlo4pZHblefDpvKMwJBNwCDH64AMOvJe6kWFJoRptRvz+cVo7XDMI3DM1XPStpgfFP3EQ3SJLsiOKDPP0GqVZHjeJvDd2+sCgo2KsL48r02hJvzFP3OgTt9JIZzCJ4Tmhvn29yYVyCpVEvZqCIl41QjWP683KmclSEmq2vdajNomjcnm41+Rcjmu9h2XNJ8xOFbAkopo0zAw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4932f8c0-26e5-4289-ef7b-6bba27a9216c@suse.com>
Date: Thu, 9 Jun 2022 12:18:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 05/12] x86: introduce helper for recording degree of
 contiguity in page tables
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM7PR02CA0029.eurprd02.prod.outlook.com
 (2603:10a6:20b:100::39) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e503443c-0bb0-4455-9fcf-08da4a016f84
X-MS-TrafficTypeDiagnostic: DU2PR04MB8790:EE_
X-Microsoft-Antispam-PRVS:
	<DU2PR04MB87904A21E836293CE8338C63B3A79@DU2PR04MB8790.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e503443c-0bb0-4455-9fcf-08da4a016f84
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 10:18:45.3981
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8790

This is a reusable helper (a kind of template) which is introduced without
any users, so that the individual subsequent patches introducing such
users can be committed independently of one another.

See the comment at the top of the new file. To demonstrate the effect,
if a page table had just 16 entries, this would be the set of markers
for a page table with fully contiguous mappings:

index  0 1 2 3 4 5 6 7 8 9 A B C D E F
marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0

"Contiguous" here means not only present entries with successively
increasing MFNs, each one suitably aligned for its slot, but also runs
consisting entirely of (the respective number of) non-present entries.
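As a cross-check, the marker pattern above can be reproduced by a small model (hypothetical plain Python, not the Xen helper itself): each entry's marker is the number of clear low bits in its index, with entry 0 instead holding log2 of the table size.

```python
def contiguous_markers(nr_entries):
    """Markers for a fully contiguous table of nr_entries (a power of 2):
    entry 0 gets log2(nr_entries); entry i > 0 gets the count of clear
    low bits in i, i.e. the maximum order that entry can attest to."""
    log2 = nr_entries.bit_length() - 1
    return [log2 if i == 0 else (i & -i).bit_length() - 1
            for i in range(nr_entries)]
```

For a 16-entry table this yields exactly the row of markers shown in the table above.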

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Bail early from step 1 if possible. Arrange for consumers who are
    just after CONTIG_{LEVEL_SHIFT,NR}. Extend comment.
v3: Rename function and header. Introduce IS_CONTIG().
v2: New.

--- /dev/null
+++ b/xen/arch/x86/include/asm/pt-contig-markers.h
@@ -0,0 +1,110 @@
+#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
+#define __ASM_X86_PT_CONTIG_MARKERS_H
+
+/*
+ * Short of having function templates in C, the function defined below is
+ * intended to be used by multiple parties interested in recording the
+ * degree of contiguity in mappings by a single page table.
+ *
+ * Scheme: Every entry records the order of contiguous successive entries,
+ * up to the maximum order covered by that entry (which is the number of
+ * clear low bits in its index, with entry 0 being the exception using
+ * the base-2 logarithm of the number of entries in a single page table).
+ * While a few entries need touching upon update, knowing whether the
+ * table is fully contiguous (and can hence be replaced by a higher level
+ * leaf entry) is then possible by simply looking at entry 0's marker.
+ *
+ * Prereqs:
+ * - CONTIG_MASK needs to be #define-d, to a value having at least 4
+ *   contiguous bits (ignored by hardware), before including this file (or
+ *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
+ * - page tables to be passed to the helper need to be initialized with
+ *   correct markers,
+ * - not-present entries need to be entirely clear except for the marker.
+ */
+
+/* This is the same for all anticipated users, so doesn't need passing in. */
+#define CONTIG_LEVEL_SHIFT 9
+#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
+
+#ifdef CONTIG_MASK
+
+#include <xen/bitops.h>
+#include <xen/lib.h>
+#include <xen/page-size.h>
+
+#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
+#define SET_MARKER(e, m) \
+    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
+
+#define IS_CONTIG(kind, pt, i, idx, shift, b) \
+    ((kind) == PTE_kind_leaf \
+     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
+     : !((pt)[i] & ~CONTIG_MASK))
+
+enum PTE_kind {
+    PTE_kind_null,
+    PTE_kind_leaf,
+    PTE_kind_table,
+};
+
+static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
+                                     unsigned int level, enum PTE_kind kind)
+{
+    unsigned int b, i = idx;
+    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
+
+    ASSERT(idx < CONTIG_NR);
+    ASSERT(!(pt[idx] & CONTIG_MASK));
+
+    /* Step 1: Reduce markers in lower numbered entries. */
+    while ( i )
+    {
+        b = find_first_set_bit(i);
+        i &= ~(1U << b);
+        if ( GET_MARKER(pt[i]) <= b )
+            break;
+        SET_MARKER(pt[i], b);
+    }
+
+    /* An intermediate table is never contiguous with anything. */
+    if ( kind == PTE_kind_table )
+        return false;
+
+    /*
+     * Present entries need in-sync index and address to be a candidate
+     * for being contiguous: What we're after is whether ultimately the
+     * intermediate table can be replaced by a superpage.
+     */
+    if ( kind != PTE_kind_null &&
+         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
+        return false;
+
+    /* Step 2: Check higher numbered entries for contiguity. */
+    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
+    {
+        i = idx | (1U << b);
+        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
+            break;
+    }
+
+    /* Step 3: Update markers in this and lower numbered entries. */
+    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
+    {
+        i = idx ^ (1U << b);
+        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
+            break;
+        idx &= ~(1U << b);
+    }
+
+    return b == CONTIG_LEVEL_SHIFT;
+}
+
+#undef IS_CONTIG
+#undef SET_MARKER
+#undef GET_MARKER
+#undef CONTIG_MASK
+
+#endif /* CONTIG_MASK */
+
+#endif /* __ASM_X86_PT_CONTIG_MARKERS_H */



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:19:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:19:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344904.570525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFGS-0003UR-GQ; Thu, 09 Jun 2022 10:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344904.570525; Thu, 09 Jun 2022 10:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFGS-0003UK-Di; Thu, 09 Jun 2022 10:19:44 +0000
Received: by outflank-mailman (input) for mailman id 344904;
 Thu, 09 Jun 2022 10:19:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFGR-0003U3-MK
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:19:43 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe09::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad4bb61e-e7dd-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:19:42 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8559.eurprd04.prod.outlook.com (2603:10a6:102:216::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:19:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:19:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad4bb61e-e7dd-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OSG56MYUeDTBjUZtKwynHQfPfGEbVYGWxTbP6fxJb2xBFGwpc8RPGlHipIOoRbTs2z+84O98tlYeDS2415adZqc9yrH9Q7JATFDiNQJTn+cgu67jCEQ6z8B2YbkGwi/sjFkXJOyhZmx518gTx5qOw7hdOHB2yEjHsW8If/Qdh4TaN0QToQNwIiWM5QY5qR/skCVaR19bxGYnbfGyvzEM/7Kn3I8rh0ptF0ZpNw9OLAWlY7HRVK/aMFD4agRFBbO8toP3iuG0x1nr/c5+kzM4ZPjxw4sEdnqapD6eTZdMq8kqkLTqqCfypLul0OXwEG4f/fpKR+StgEmfRHU7YESrfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=i1SRbMZ5vtUn4A75NysmPVCeWEjl1KBkL8Pu3NNkQeY=;
 b=QUtiBQexOROx22dYmxZBzyjk2js3wpx89FbfdPCf8rMCZtTY50PYyAyB+ifoyFLuhYDYXnkckF5gQkED1WsaZOJYRsL6abetr1Zy/GOkmCcFQA0dNXgctURr3uHKXRmWUuRRgXPYzr0g4uah9tMo3zuJdQZ7nfONHnORaffzeQ/qKm5BMJhnTsfSOmxKBXiJehseRrf5/abjQ5HdChlhGK7H+dFxJDoDbtV+jex4dg0Y2iqQ9w1pkKP92KgWRT7S25qBZqmhnmwHocEKw7dhF4XOqKAzjTTJYanip2LunAoT7xLQSGZpaYo07BFoA0HUztibU4pce5LTa23nycInhg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i1SRbMZ5vtUn4A75NysmPVCeWEjl1KBkL8Pu3NNkQeY=;
 b=4RD3TPjgOKvP/ILG99eZi6mlFo40A8sfUPofFnLfdgg/N5df/3FsMxA+6EzC80EDGAotxahVFTMwceuAUmGik0JsTQ59oTTbqMdQuf+7i1bdIZt3rv9nEnrMDDXAnjf4nrIX/gITLkwnrKBU/mO2UNlwbhulFc7FfgTQqe4Ky/Ns3PZGkaLRX0InBOvJFhbdzb6/vnD/tXDteyzM16UBGuIcElAHTjfAgHEw42sDbX50YR4EfIWk7KIa8+OgdEIgGTYyrjsD7gDBzE7xr9gpxYyzYddi7OR5hI49kBx8N5uwhaUbd090v2i2FP6zPn+DqIUFh22AahTa+1k+je9Z3w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <730648db-7a2b-ce0c-675d-cddca116474f@suse.com>
Date: Thu, 9 Jun 2022 12:19:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 07/12] AMD/IOMMU: free all-empty page tables
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM7PR03CA0028.eurprd03.prod.outlook.com
 (2603:10a6:20b:130::38) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

When a page table ends up with no present entries left, it can be
replaced by a non-present entry at the next higher level. The page table
itself can then be scheduled for freeing.

Note that, while its output isn't used there yet,
pt_update_contig_markers() needs to be called right away in all places
where entries get updated, not just where entries get cleared.
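
The collapse can be illustrated with a minimal, self-contained sketch
(hypothetical types and names, not the Xen implementation; the real
code detects emptiness via the contiguity markers rather than by
scanning all entries):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PTES_PER_TABLE 512

/* Hypothetical stand-in for one page-table page: 512 64-bit entries,
 * where a zero raw value means "not present". */
typedef struct {
    uint64_t raw[PTES_PER_TABLE];
} pt_page_t;

/* Return true if no entry in the table is present. */
static bool table_is_empty(const pt_page_t *tbl)
{
    for ( unsigned int i = 0; i < PTES_PER_TABLE; i++ )
        if ( tbl->raw[i] )
            return false;
    return true;
}

/*
 * Clear one entry. If the table thereby becomes all-empty, also write
 * a non-present entry into the referencing slot one level up and
 * report that the table page itself can be queued for freeing.
 */
static bool clear_and_maybe_collapse(pt_page_t *tbl, unsigned int idx,
                                     uint64_t *parent_slot)
{
    tbl->raw[idx] = 0;
    if ( !table_is_empty(tbl) )
        return false;
    *parent_slot = 0; /* non-present entry at the next higher level */
    return true;      /* caller queues the empty table for freeing */
}
```

Repeating this test at each successively higher level yields the
unmap-path loop in the hunk below: each all-empty table is freed and
the walk moves up until a level with remaining present entries (or the
top of the hierarchy) is reached.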

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Re-base over changes earlier in the series.
v4: Re-base over changes earlier in the series.
v3: Re-base over changes earlier in the series.
v2: New.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -21,6 +21,7 @@
 
 #include "iommu.h"
 
+#define CONTIG_MASK IOMMU_PTE_CONTIG_MASK
 #include <asm/pt-contig-markers.h>
 
 /* Given pfn and page table level, return pde index */
@@ -35,16 +36,20 @@ static unsigned int pfn_to_pde_idx(unsig
 
 static union amd_iommu_pte clear_iommu_pte_present(unsigned long l1_mfn,
                                                    unsigned long dfn,
-                                                   unsigned int level)
+                                                   unsigned int level,
+                                                   bool *free)
 {
     union amd_iommu_pte *table, *pte, old;
+    unsigned int idx = pfn_to_pde_idx(dfn, level);
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = &table[pfn_to_pde_idx(dfn, level)];
+    pte = &table[idx];
     old = *pte;
 
     write_atomic(&pte->raw, 0);
 
+    *free = pt_update_contig_markers(&table->raw, idx, level, PTE_kind_null);
+
     unmap_domain_page(table);
 
     return old;
@@ -87,7 +92,11 @@ static union amd_iommu_pte set_iommu_pte
     if ( !old.pr || old.next_level ||
          old.mfn != next_mfn ||
          old.iw != iw || old.ir != ir )
+    {
         set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
+        pt_update_contig_markers(&table->raw, pfn_to_pde_idx(dfn, level),
+                                 level, PTE_kind_leaf);
+    }
     else
         old.pr = false; /* signal "no change" to the caller */
 
@@ -326,6 +335,9 @@ static int iommu_pde_from_dfn(struct dom
             smp_wmb();
             set_iommu_pde_present(pde, next_table_mfn, next_level, true,
                                   true);
+            pt_update_contig_markers(&next_table_vaddr->raw,
+                                     pfn_to_pde_idx(dfn, level),
+                                     level, PTE_kind_table);
 
             *flush_flags |= IOMMU_FLUSHF_modified;
         }
@@ -351,6 +363,9 @@ static int iommu_pde_from_dfn(struct dom
                 next_table_mfn = mfn_x(page_to_mfn(table));
                 set_iommu_pde_present(pde, next_table_mfn, next_level, true,
                                       true);
+                pt_update_contig_markers(&next_table_vaddr->raw,
+                                         pfn_to_pde_idx(dfn, level),
+                                         level, PTE_kind_table);
             }
             else /* should never reach here */
             {
@@ -487,8 +502,24 @@ int cf_check amd_iommu_unmap_page(
 
     if ( pt_mfn )
     {
+        bool free;
+
         /* Mark PTE as 'page not present'. */
-        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level);
+        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
+
+        while ( unlikely(free) && ++level < hd->arch.amd.paging_mode )
+        {
+            struct page_info *pg = mfn_to_page(_mfn(pt_mfn));
+
+            if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn,
+                                    flush_flags, false) )
+                BUG();
+            BUG_ON(!pt_mfn);
+
+            clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
+            *flush_flags |= IOMMU_FLUSHF_all;
+            iommu_queue_free_pgtable(hd, pg);
+        }
     }
 
     spin_unlock(&hd->arch.mapping_lock);



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:20:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:20:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344912.570536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFH3-0004nt-PR; Thu, 09 Jun 2022 10:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344912.570536; Thu, 09 Jun 2022 10:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFH3-0004nm-Me; Thu, 09 Jun 2022 10:20:21 +0000
Received: by outflank-mailman (input) for mailman id 344912;
 Thu, 09 Jun 2022 10:20:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFH1-0003pZ-To
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:20:20 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03on061f.outbound.protection.outlook.com
 [2a01:111:f400:fe09::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c2e9ca3d-e7dd-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 12:20:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8559.eurprd04.prod.outlook.com (2603:10a6:102:216::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:20:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2e9ca3d-e7dd-11ec-b605-df0040e90b76
Message-ID: <ad82cb8a-5682-aec6-c274-0adeb1f48c64@suse.com>
Date: Thu, 9 Jun 2022 12:20:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 08/12] VT-d: free all-empty page tables
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0490.eurprd06.prod.outlook.com
 (2603:10a6:20b:49b::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

When a page table ends up with no present entries left, it can be
replaced by a non-present entry at the next higher level. The page table
itself can then be scheduled for freeing.

Note that, while its output isn't used there yet,
pt_update_contig_markers() needs to be called right away in all places
where entries get updated, not just where entries get cleared.

Note further that, while pt_update_contig_markers() may update several
PTEs within the table, these are changes to "avail" bits only, so I do
not think that cache flushing would be needed afterwards. Such cache
flushing (of entire pages, unless adding yet more logic to be more
selective) would be quite noticeable performance-wise (very prominent
during Dom0 boot).

Also note that the cache sync-ing is likely stricter than necessary.
This is both to be on the safe side and to maintain the pattern of all
updates of (potentially) live tables being accompanied by a flush (if
so needed).
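
To illustrate why only software-available bits change, here is a
hypothetical sketch under the assumption that a count of present
entries lives in otherwise-ignored PTE bits (the actual
pt-contig-markers logic encodes contiguity levels, not a plain count).
Since the IOMMU ignores these bits for address translation, updating
them changes nothing the hardware acts upon, yet an all-empty table can
be detected in O(1):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed layout: ignored bits 52..61 of entry 0 hold a count of
 * present entries in the table, so emptiness checks need no scan. */
#define AVAIL_SHIFT 52
#define AVAIL_MASK  (0x3ffULL << AVAIL_SHIFT)

static unsigned int get_count(const uint64_t *table)
{
    return (table[0] & AVAIL_MASK) >> AVAIL_SHIFT;
}

static void set_count(uint64_t *table, unsigned int count)
{
    table[0] = (table[0] & ~AVAIL_MASK) |
               ((uint64_t)count << AVAIL_SHIFT);
}

/* Record that one entry was cleared; return true once the whole
 * table has become empty and can be collapsed into a non-present
 * entry at the next higher level. */
static bool mark_cleared(uint64_t *table)
{
    unsigned int count = get_count(table) - 1;

    set_count(table, count);
    return count == 0;
}
```

Only bits the hardware ignores are rewritten here, which is the basis
for the argument that no cache flush is required beyond the one already
issued for the PTE clearing itself.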

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v4: Re-base over changes earlier in the series.
v3: Properly bound loop. Re-base over changes earlier in the series.
v2: New.
---
The hang during boot on my Latitude E6410 (see the respective code
comment) occurred pretty soon after iommu_enable_translation(). No
errors appeared and no watchdog would kick in; just sometimes the first
few pixel lines of the next log message's (XEN) prefix would have made
it out to the screen (and there's no serial there). It took a lot of
experimenting until I figured out the workaround (which I consider
ugly, but halfway acceptable). I've been trying hard to make sure the
workaround isn't masking a real issue, yet I'm still wary of it
possibly doing so ... My best guess at this point is that on these old
IOMMUs the ignored bits 52...61 aren't really ignored for present
entries, but also aren't "reserved" enough to trigger faults. This
guess comes from having tried to set other bits in this range
(unconditionally, and with the workaround here in place), which yielded
the same behavior.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -43,6 +43,9 @@
 #include "vtd.h"
 #include "../ats.h"
 
+#define CONTIG_MASK DMA_PTE_CONTIG_MASK
+#include <asm/pt-contig-markers.h>
+
 /* dom_io is used as a sentinel for quarantined devices */
 #define QUARANTINE_SKIP(d, pgd_maddr) ((d) == dom_io && !(pgd_maddr))
 #define DEVICE_DOMID(d, pdev) ((d) != dom_io ? (d)->domain_id \
@@ -405,6 +408,9 @@ static uint64_t addr_to_dma_page_maddr(s
 
             write_atomic(&pte->val, new_pte.val);
             iommu_sync_cache(pte, sizeof(struct dma_pte));
+            pt_update_contig_markers(&parent->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_table);
         }
 
         if ( --level == target )
@@ -829,9 +835,31 @@ static int dma_pte_clear_one(struct doma
 
     old = *pte;
     dma_clear_pte(*pte);
+    iommu_sync_cache(pte, sizeof(*pte));
+
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_null) &&
+            ++level < min_pt_levels )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
+                                          false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(addr, level)];
+        dma_clear_pte(*pte);
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
-    iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
 
@@ -2177,8 +2205,21 @@ static int __must_check cf_check intel_i
     }
 
     *pte = new;
-
     iommu_sync_cache(pte, sizeof(struct dma_pte));
+
+    /*
+     * While the (ab)use of PTE_kind_table here allows to save some work in
+     * the function, the main motivation for it is that it avoids a so far
+     * unexplained hang during boot (while preparing Dom0) on a Westmere
+     * based laptop.
+     */
+    pt_update_contig_markers(&page->val,
+                             address_level_offset(dfn_to_daddr(dfn), level),
+                             level,
+                             (hd->platform_ops->page_sizes &
+                              (1UL << level_to_offset_bits(level + 1))
+                              ? PTE_kind_leaf : PTE_kind_table));
+
     spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:20:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:20:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344919.570547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHS-0005LF-7E; Thu, 09 Jun 2022 10:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344919.570547; Thu, 09 Jun 2022 10:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHS-0005L7-3p; Thu, 09 Jun 2022 10:20:46 +0000
Received: by outflank-mailman (input) for mailman id 344919;
 Thu, 09 Jun 2022 10:20:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFHQ-0003U3-Om
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:20:44 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03on0618.outbound.protection.outlook.com
 [2a01:111:f400:fe09::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1b8863c-e7dd-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:20:44 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8559.eurprd04.prod.outlook.com (2603:10a6:102:216::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:20:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:20:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1b8863c-e7dd-11ec-bd2c-47488cf2e6aa
Message-ID: <31b3f7a0-c7ca-70e4-0ac3-7a2bfb65facc@suse.com>
Date: Thu, 9 Jun 2022 12:20:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 09/12] AMD/IOMMU: replace all-contiguous page tables by
 superpage mappings
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6P193CA0059.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:209:8e::36) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b75b3e96-f223-4843-24bc-08da4a01b56b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 10:20:42.6873
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED

When a page table ends up with all contiguous entries (including all
identical attributes), it can be replaced by a superpage entry at the
next higher level. The page table itself can then be scheduled for
freeing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Unlike the freeing of all-empty page tables, this causes quite a bit of
back and forth for PV domains, due to their mapping/unmapping of pages
when they get converted to/from being page tables. It may therefore be
worth considering delaying re-coalescing a little, to avoid doing so
when the superpage would otherwise be split again soon after. But I
think this would better be the subject of a separate change anyway.

Of course this could also be helped by more "aware" kernel-side
behavior: kernels could avoid immediately re-mapping freed page tables
writable, in anticipation of re-using the same page for another page
table elsewhere.
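As a standalone illustration of the condition being coalesced on (using a
hypothetical, simplified PTE layout and helper name — not Xen's actual
union amd_iommu_pte or the pt_update_contig_markers() machinery): a fully
populated table is equivalent to one superpage entry at the next level
when every slot maps the next frame in sequence with identical attributes,
and the base frame is suitably aligned.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PTES_PER_TABLE 512        /* 9 address bits per level */

/* Hypothetical, simplified PTE; the real union amd_iommu_pte differs. */
typedef struct {
    uint64_t mfn;                 /* machine frame the entry maps */
    bool present, writable, readable;
} pte_t;

/*
 * A table can be folded into a single superpage entry one level up iff
 * the base frame is aligned to the superpage size and every slot i maps
 * frame (base + i) with identical attributes.
 */
static bool table_is_contiguous(const pte_t t[PTES_PER_TABLE])
{
    if ( t[0].mfn & (PTES_PER_TABLE - 1) )
        return false;

    for ( size_t i = 0; i < PTES_PER_TABLE; i++ )
        if ( !t[i].present ||
             t[i].mfn != t[0].mfn + i ||
             t[i].writable != t[0].writable ||
             t[i].readable != t[0].readable )
            return false;

    return true;
}
```

In the real code this property is not re-scanned on every update; the
contiguity markers kept in ignored PTE bits let each map/unmap maintain it
incrementally, which is why the patches call pt_update_contig_markers()
instead of walking the whole table.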
---
v4: Re-base over changes earlier in the series.
v3: New.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -81,7 +81,8 @@ static union amd_iommu_pte set_iommu_pte
                                                  unsigned long dfn,
                                                  unsigned long next_mfn,
                                                  unsigned int level,
-                                                 bool iw, bool ir)
+                                                 bool iw, bool ir,
+                                                 bool *contig)
 {
     union amd_iommu_pte *table, *pde, old;
 
@@ -94,11 +95,15 @@ static union amd_iommu_pte set_iommu_pte
          old.iw != iw || old.ir != ir )
     {
         set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
-        pt_update_contig_markers(&table->raw, pfn_to_pde_idx(dfn, level),
-                                 level, PTE_kind_leaf);
+        *contig = pt_update_contig_markers(&table->raw,
+                                           pfn_to_pde_idx(dfn, level),
+                                           level, PTE_kind_leaf);
     }
     else
+    {
         old.pr = false; /* signal "no change" to the caller */
+        *contig = false;
+    }
 
     unmap_domain_page(table);
 
@@ -409,6 +414,7 @@ int cf_check amd_iommu_map_page(
 {
     struct domain_iommu *hd = dom_iommu(d);
     unsigned int level = (IOMMUF_order(flags) / PTE_PER_TABLE_SHIFT) + 1;
+    bool contig;
     int rc;
     unsigned long pt_mfn = 0;
     union amd_iommu_pte old;
@@ -452,8 +458,26 @@ int cf_check amd_iommu_map_page(
 
     /* Install mapping */
     old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), level,
-                                (flags & IOMMUF_writable),
-                                (flags & IOMMUF_readable));
+                                flags & IOMMUF_writable,
+                                flags & IOMMUF_readable, &contig);
+
+    while ( unlikely(contig) && ++level < hd->arch.amd.paging_mode )
+    {
+        struct page_info *pg = mfn_to_page(_mfn(pt_mfn));
+        unsigned long next_mfn;
+
+        if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags,
+                                false) )
+            BUG();
+        BUG_ON(!pt_mfn);
+
+        next_mfn = mfn_x(mfn) & (~0UL << (PTE_PER_TABLE_SHIFT * (level - 1)));
+        set_iommu_pte_present(pt_mfn, dfn_x(dfn), next_mfn, level,
+                              flags & IOMMUF_writable,
+                              flags & IOMMUF_readable, &contig);
+        *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
 

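The next_mfn computation in the hunk above clears the low
PTE_PER_TABLE_SHIFT * (level - 1) bits of the MFN, so the promoted entry
maps the base of the larger region. A minimal standalone sketch of that
truncation (assuming a 9-bit per-level stride, as in the AMD code; the
helper name is illustrative):

```c
#include <stdint.h>

#define PTE_PER_TABLE_SHIFT 9    /* 512 entries per table level */

/* MFN aligned down to the page size of a leaf entry at "level". */
static unsigned long superpage_base_mfn(unsigned long mfn, unsigned int level)
{
    return mfn & (~0UL << (PTE_PER_TABLE_SHIFT * (level - 1)));
}
```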


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:21:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344927.570558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHn-0005wI-GM; Thu, 09 Jun 2022 10:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344927.570558; Thu, 09 Jun 2022 10:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHn-0005w9-Ce; Thu, 09 Jun 2022 10:21:07 +0000
Received: by outflank-mailman (input) for mailman id 344927;
 Thu, 09 Jun 2022 10:21:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFHl-0003pZ-SD
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:21:05 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on061b.outbound.protection.outlook.com
 [2a01:111:f400:fe06::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id de436bff-e7dd-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 12:21:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8790.eurprd04.prod.outlook.com (2603:10a6:10:2e1::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 10:21:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:21:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de436bff-e7dd-11ec-b605-df0040e90b76
Message-ID: <0013f62e-395e-464c-30dc-e8e800b1a976@suse.com>
Date: Thu, 9 Jun 2022 12:21:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 10/12] VT-d: replace all-contiguous page tables by
 superpage mappings
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

When a page table ends up with all contiguous entries (including all
identical attributes), it can be replaced by a superpage entry at the
next higher level. The page table itself can then be scheduled for
freeing.

The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
for whenever we (and obviously hardware) start supporting 512G mappings.

Note that cache syncing is likely stricter than necessary. This is both
to be on the safe side and to maintain the pattern that all updates of
(potentially) live tables are accompanied by a flush (where needed).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Unlike the freeing of all-empty page tables, this causes quite a bit of
back and forth for PV domains, due to their mapping/unmapping of pages
when they get converted to/from being page tables. It may therefore be
worth considering delaying re-coalescing a little, to avoid doing so
when the superpage would otherwise be split again soon after. But I
think this would better be the subject of a separate change anyway.

Of course this could also be helped by more "aware" kernel-side
behavior: kernels could avoid immediately re-mapping freed page tables
writable, in anticipation of re-using the same page for another page
table elsewhere.
---
v4: Re-base over changes earlier in the series.
v3: New.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2211,14 +2211,35 @@ static int __must_check cf_check intel_i
      * While the (ab)use of PTE_kind_table here allows to save some work in
      * the function, the main motivation for it is that it avoids a so far
      * unexplained hang during boot (while preparing Dom0) on a Westmere
-     * based laptop.
+     * based laptop.  This also has the intended effect of terminating the
+     * loop when super pages aren't supported anymore at the next level.
      */
-    pt_update_contig_markers(&page->val,
-                             address_level_offset(dfn_to_daddr(dfn), level),
-                             level,
-                             (hd->platform_ops->page_sizes &
-                              (1UL << level_to_offset_bits(level + 1))
-                              ? PTE_kind_leaf : PTE_kind_table));
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(dfn_to_daddr(dfn), level),
+                                     level,
+                                     (hd->platform_ops->page_sizes &
+                                      (1UL << level_to_offset_bits(level + 1))
+                                       ? PTE_kind_leaf : PTE_kind_table)) )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        new.val &= ~(LEVEL_MASK << level_to_offset_bits(level));
+        dma_set_pte_superpage(new);
+
+        pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level,
+                                          flush_flags, false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
+        *pte = new;
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -232,7 +232,7 @@ struct context_entry {
 
 /* page table handling */
 #define LEVEL_STRIDE       (9)
-#define LEVEL_MASK         ((1 << LEVEL_STRIDE) - 1)
+#define LEVEL_MASK         (PTE_NUM - 1UL)
 #define PTE_NUM            (1 << LEVEL_STRIDE)
 #define level_to_agaw(val) ((val) - 2)
 #define agaw_to_level(val) ((val) + 2)

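The LEVEL_MASK change above is what makes expressions of the form
`LEVEL_MASK << level_to_offset_bits(level)` safe at the 512G level: the
shift count there reaches 39, more than an int's width, so the old
plain-int mask would invoke undefined behavior. A quick standalone check
(level_to_offset_bits() is reconstructed here as
12 + (level - 1) * LEVEL_STRIDE to match its use in the diff; treat that
as an assumption and verify against the actual header):

```c
#include <stdint.h>

#define LEVEL_STRIDE 9
#define PTE_NUM      (1 << LEVEL_STRIDE)
/* Old definition: ((1 << LEVEL_STRIDE) - 1) — a plain int, so shifting
 * it by 39 bits (the 512G level) would be undefined behavior. Deriving
 * the mask from 1UL makes it unsigned long throughout. */
#define LEVEL_MASK   (PTE_NUM - 1UL)

/* Assumed reconstruction: level 1 => 12 (4k page offset), +9 per level. */
static unsigned int level_to_offset_bits(unsigned int level)
{
    return 12 + (level - 1) * LEVEL_STRIDE;
}
```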


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:21:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344930.570569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHr-0006Fc-RA; Thu, 09 Jun 2022 10:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344930.570569; Thu, 09 Jun 2022 10:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFHr-0006FR-NK; Thu, 09 Jun 2022 10:21:11 +0000
Received: by outflank-mailman (input) for mailman id 344930;
 Thu, 09 Jun 2022 10:21:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6kiA=WQ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1nzFHq-0003U3-Ma
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:21:10 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on0612.outbound.protection.outlook.com
 [2a01:111:f400:fe1e::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1aa0b4a-e7dd-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:21:10 +0200 (CEST)
Received: from DU2P251CA0012.EURP251.PROD.OUTLOOK.COM (2603:10a6:10:230::8) by
 HE1PR0802MB2409.eurprd08.prod.outlook.com (2603:10a6:3:dc::23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.17; Thu, 9 Jun 2022 10:21:07 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:230:cafe::53) by DU2P251CA0012.outlook.office365.com
 (2603:10a6:10:230::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13 via Frontend
 Transport; Thu, 9 Jun 2022 10:21:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 10:21:07 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Thu, 09 Jun 2022 10:21:06 +0000
Received: from fa1dde82cdf8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3231B8EE-8D4D-467B-887C-F4D3DCF4CDC3.1; 
 Thu, 09 Jun 2022 10:20:55 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fa1dde82cdf8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 10:20:55 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by VI1PR08MB4240.eurprd08.prod.outlook.com (2603:10a6:803:102::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.14; Thu, 9 Jun
 2022 10:20:52 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 10:20:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1aa0b4a-e7dd-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2] xen: Add MISRA support to cppcheck make rule
Thread-Topic: [PATCH v2] xen: Add MISRA support to cppcheck make rule
Thread-Index: AQHYe+Q/CnV4rYCY7UiS7+9Bq/v4m61G2wEAgAACZwA=
Date: Thu, 9 Jun 2022 10:20:52 +0000
Message-ID: <4DE73CBB-1C78-4BF4-A50D-69A560C51728@arm.com>
References:
 <56d3deee8889d1372752db3105f3a1349ef4562e.1654767188.git.bertrand.marquis@arm.com>
 <cac6b820-31fe-10d2-50ea-7c7e14e00f06@suse.com>
In-Reply-To: <cac6b820-31fe-10d2-50ea-7c7e14e00f06@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3696.80.82.1.1)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CD96332016D6A647B208EE6ABFEF1CCB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Jan,

> On 9 Jun 2022, at 11:12, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 09.06.2022 11:34, Bertrand Marquis wrote:
>> The cppcheck MISRA addon can be used to check for non-compliance with
>> some of the MISRA standard rules.
>>
>> Add a CPPCHECK_MISRA variable that can be set to "y" on the make
>> command line to generate a cppcheck report that includes the cppcheck
>> MISRA checks.
>>
>> When MISRA checking is enabled, a file with a text description suitable
>> for the cppcheck MISRA addon is generated from the Xen documentation
>> file which lists the rules followed by Xen (docs/misra/rules.rst).
>>
>> By default, MISRA checking is turned off.
>>
>> While adding the cppcheck-misra files to .gitignore, also fix the
>> missing trailing / in the htmlreport .gitignore entry.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> Changes in v2:
>> - fix missing / for htmlreport
>> - use wildcard for cppcheck-misra remove and gitignore
>> - fix comment in makefile
>> - fix dependencies for generation of json and txt file
>> ---
>> .gitignore | 3 +-
>> xen/Makefile | 29 ++++++-
>> xen/tools/convert_misra_doc.py | 139 +++++++++++++++++++++++++++++++++
>> 3 files changed, 168 insertions(+), 3 deletions(-)
>> create mode 100755 xen/tools/convert_misra_doc.py
>>
>> diff --git a/.gitignore b/.gitignore
>> index 18ef56a780..b106caa7a9 100644
>> --- a/.gitignore
>> +++ b/.gitignore
>> @@ -297,6 +297,7 @@ xen/.banner
>> xen/.config
>> xen/.config.old
>> xen/.xen.elf32
>> +xen/cppcheck-misra.*
>
> As said on v1, this wants to be added further down, while ...
>
>> xen/xen-cppcheck.xml
>
> ... this line wants moving down at this occasion or in a separate
> change.
>
>> xen/System.map
>> xen/arch/x86/boot/mkelf32
>> @@ -318,7 +319,7 @@ xen/arch/*/efi/runtime.c
>> xen/arch/*/include/asm/asm-offsets.h
>> xen/common/config_data.S
>> xen/common/config.gz
>> -xen/cppcheck-htmlreport
>> +xen/cppcheck-htmlreport/
>> xen/include/headers*.chk
>> xen/include/compat/*
>> xen/include/config/
>
> xen/cppcheck-misra.* wants to go alongside the line you adjust, while
> xen/xen-cppcheck.xml belongs yet further down.

Sorry, I forgot that part in my v2.
I will do all the fixes, including the xen-cppcheck.xml one, in a v3 shortly.

Cheers
Bertrand
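The conversion step the patch describes — extracting the rule list from docs/misra/rules.rst into a plain-text file the cppcheck MISRA addon can consume — could be sketched roughly as below. The actual rst layout and output format handled by xen/tools/convert_misra_doc.py are not shown in this thread, so both the rule-line pattern and the emitted shape here are assumptions.

```python
import re

# Assumed rst layout: each documented rule appears on a line containing
# "Rule N.M", followed by its summary text on the next non-empty line.
RULE_RE = re.compile(r"Rule\s+(\d+\.\d+)")

def convert_rules(rst_text):
    """Emit a 'Rule N.M' header plus one description line per documented
    rule -- an approximation of the text format cppcheck's MISRA addon
    accepts via --rule-texts."""
    out = []
    current = None
    for line in rst_text.splitlines():
        m = RULE_RE.search(line)
        if m:
            current = m.group(1)
            out.append(f"Rule {current}")
        elif current and line.strip():
            out.append(line.strip())  # description line for the rule
            current = None
    return "\n".join(out) + "\n"

sample = """
Rule 1.3
   There shall be no occurrence of undefined or critical unspecified behaviour.

Rule 5.1
   External identifiers shall be distinct.
"""
print(convert_rules(sample))
```

The real script presumably also validates the document structure; this sketch only shows the extract-and-flatten idea.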



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:25:32 2022
Message-ID: <10a6eee9-099e-59c1-7dcc-60bdb9902703@suse.com>
Date: Thu, 9 Jun 2022 12:21:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 12/12] VT-d: fold dma_pte_clear_one() into its only caller
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This way intel_iommu_unmap_page() ends up quite a bit more similar to
intel_iommu_map_page().

No functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Re-base of changes earlier in the series.
v4: New.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -806,75 +806,6 @@ static void queue_free_pt(struct domain_
     iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
 }
 
-/* clear one page's page table */
-static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
-                             unsigned int order,
-                             unsigned int *flush_flags)
-{
-    struct domain_iommu *hd = dom_iommu(domain);
-    struct dma_pte *page = NULL, *pte = NULL, old;
-    u64 pg_maddr;
-    unsigned int level = (order / LEVEL_STRIDE) + 1;
-
-    spin_lock(&hd->arch.mapping_lock);
-    /* get target level pte */
-    pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags, false);
-    if ( pg_maddr < PAGE_SIZE )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        return pg_maddr ? -ENOMEM : 0;
-    }
-
-    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = &page[address_level_offset(addr, level)];
-
-    if ( !dma_pte_present(*pte) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        unmap_vtd_domain_page(page);
-        return 0;
-    }
-
-    old = *pte;
-    dma_clear_pte(*pte);
-    iommu_sync_cache(pte, sizeof(*pte));
-
-    while ( pt_update_contig_markers(&page->val,
-                                     address_level_offset(addr, level),
-                                     level, PTE_kind_null) &&
-            ++level < min_pt_levels )
-    {
-        struct page_info *pg = maddr_to_page(pg_maddr);
-
-        unmap_vtd_domain_page(page);
-
-        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
-                                          false);
-        BUG_ON(pg_maddr < PAGE_SIZE);
-
-        page = map_vtd_domain_page(pg_maddr);
-        pte = &page[address_level_offset(addr, level)];
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(*pte));
-
-        *flush_flags |= IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
-        perfc_incr(iommu_pt_coalesces);
-    }
-
-    spin_unlock(&hd->arch.mapping_lock);
-
-    unmap_vtd_domain_page(page);
-
-    *flush_flags |= IOMMU_FLUSHF_modified;
-
-    if ( order && !dma_pte_superpage(old) )
-        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
-                      order / LEVEL_STRIDE);
-
-    return 0;
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -2264,11 +2195,17 @@ static int __must_check cf_check intel_i
 static int __must_check cf_check intel_iommu_unmap_page(
     struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    daddr_t addr = dfn_to_daddr(dfn);
+    struct dma_pte *page = NULL, *pte = NULL, old;
+    uint64_t pg_maddr;
+    unsigned int level = (order / LEVEL_STRIDE) + 1;
+
     /*
      * While really we could unmap at any granularity, for now we assume unmaps
      * are issued by common code only at the same granularity as maps.
      */
-    ASSERT((dom_iommu(d)->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+    ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -2278,7 +2215,62 @@ static int __must_check cf_check intel_i
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, dfn_to_daddr(dfn), order, flush_flags);
+    spin_lock(&hd->arch.mapping_lock);
+    /* get target level pte */
+    pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+    if ( pg_maddr < PAGE_SIZE )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return pg_maddr ? -ENOMEM : 0;
+    }
+
+    page = map_vtd_domain_page(pg_maddr);
+    pte = &page[address_level_offset(addr, level)];
+
+    if ( !dma_pte_present(*pte) )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        unmap_vtd_domain_page(page);
+        return 0;
+    }
+
+    old = *pte;
+    dma_clear_pte(*pte);
+    iommu_sync_cache(pte, sizeof(*pte));
+
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_null) &&
+            ++level < min_pt_levels )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(addr, level)];
+        dma_clear_pte(*pte);
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
+    }
+
+    spin_unlock(&hd->arch.mapping_lock);
+
+    unmap_vtd_domain_page(page);
+
+    *flush_flags |= IOMMU_FLUSHF_modified;
+
+    if ( order && !dma_pte_superpage(old) )
+        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                      order / LEVEL_STRIDE);
+
+    return 0;
 }
 
 static int cf_check intel_iommu_lookup_page(
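The unmap path folded into intel_iommu_unmap_page() clears the target PTE and then walks upward, freeing each page table that pt_update_contig_markers() reports as having become entirely empty and clearing the referencing entry one level up. A simplified Python model of that upward-coalescing walk — ignoring locking, cache syncing, and the real PTE layout, all of which are simplifications away from the actual C code:

```python
ENTRIES = 512  # entries per page table

def unmap_and_coalesce(tables, offsets, min_levels=4):
    """tables[level] is the page table touched at that level (a list of
    ENTRIES slots); offsets[level] is the index of our entry in it.
    Clear the leaf entry, then free every table that becomes empty and
    clear the corresponding entry one level up -- the same shape as the
    loop around pt_update_contig_markers() in the patch.  Returns the
    list of levels whose tables were freed."""
    freed = []
    level = 1
    tables[level][offsets[level]] = None        # dma_clear_pte()
    # Walk up while the just-modified table holds no entries at all.
    while all(e is None for e in tables[level]) and level + 1 < min_levels:
        freed.append(level)                     # iommu_queue_free_pgtable()
        level += 1
        tables[level][offsets[level]] = None    # drop the reference above
    return freed

# One mapping whose removal empties the level-1 and level-2 tables:
tables = {1: [None] * ENTRIES, 2: [None] * ENTRIES, 3: [None] * ENTRIES}
offsets = {1: 7, 2: 3, 3: 0}
tables[1][7] = "pte"
tables[2][3] = "l1-table"
tables[3][0] = "l2-table"
tables[3][1] = "other-mapping"   # keeps the level-3 table alive
print(unmap_and_coalesce(tables, offsets))
```

The walk stops as soon as a table still holds another entry, which is why the level-3 table above survives while levels 1 and 2 are reclaimed.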



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:25:33 2022
Message-ID: <99086452-43a8-2d93-ab4d-0343a0259259@suse.com>
Date: Thu, 9 Jun 2022 12:17:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v6 02/12] IOMMU/x86: new command line option to suppress use
 of superpage mappings
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Kevin Tian <kevin.tian@intel.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Before actually enabling use of superpages, provide a means to suppress
it in case of problems. Note that using the option can also affect the
sharing of page tables in the VT-d / EPT combination: if EPT would use
large page mappings but the option is in effect, page table sharing is
suppressed (to properly fulfill the admin request).

Requested-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: New.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1405,7 +1405,7 @@ detection of systems known to misbehave
 
 ### iommu
     = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
-                sharept, intremap, intpost, crash-disable,
+                sharept, superpages, intremap, intpost, crash-disable,
                 snoop, qinval, igfx, amd-iommu-perdev-intremap,
                 dom0-{passthrough,strict} ]
 
@@ -1481,6 +1481,12 @@ boolean (e.g. `iommu=no`) can override t
 
     This option is ignored on ARM, and the pagetables are always shared.
 
+*   The `superpages` boolean controls whether superpage mappings may be used
+    in IOMMU page tables.  If using this option is necessary to fix an issue,
+    please report a bug.
+
+    This option is only valid on x86.
+
 *   The `intremap` boolean controls the Interrupt Remapping sub-feature, and
     is active by default on compatible hardware.  On x86 systems, the first
     generation of IOMMUs only supported DMA remapping, and Interrupt Remapping
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -132,7 +132,7 @@ extern bool untrusted_msi;
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
                    const uint8_t gvec);
 
-extern bool iommu_non_coherent;
+extern bool iommu_non_coherent, iommu_superpages;
 
 static inline void iommu_sync_cache(const void *addr, unsigned int size)
 {
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -88,6 +88,8 @@ static int __init cf_check parse_iommu_p
             iommu_igfx = val;
         else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
             iommu_qinval = val;
+        else if ( (val = parse_boolean("superpages", s, ss)) >= 0 )
+            iommu_superpages = val;
 #endif
         else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
             iommu_verbose = val;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2213,7 +2213,8 @@ static bool __init vtd_ept_page_compatib
     if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 ) 
         return false;
 
-    return (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
+    return iommu_superpages &&
+           (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -31,6 +31,7 @@
 const struct iommu_init_ops *__initdata iommu_init_ops;
 struct iommu_ops __ro_after_init iommu_ops;
 bool __read_mostly iommu_non_coherent;
+bool __initdata iommu_superpages = true;
 
 enum iommu_intremap __read_mostly iommu_intremap = iommu_intremap_full;
 
@@ -104,8 +105,13 @@ int __init iommu_hardware_setup(void)
         mask_IO_APIC_setup(ioapic_entries);
     }
 
+    if ( !iommu_superpages )
+        iommu_ops.page_sizes &= PAGE_SIZE_4K;
+
     rc = iommu_init_ops->setup();
 
+    ASSERT(iommu_superpages || iommu_ops.page_sizes == PAGE_SIZE_4K);
+
     if ( ioapic_entries )
     {
         restore_IO_APIC_setup(ioapic_entries, rc);



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:25:36 2022
Message-ID: <5569f3d3-7315-3fd1-3e73-0cb610eb3f43@suse.com>
Date: Thu, 9 Jun 2022 12:21:21 +0200
Subject: [PATCH v6 11/12] IOMMU/x86: add perf counters for page table
 splitting / coalescing
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v3: New.

--- a/xen/arch/x86/include/asm/perfc_defn.h
+++ b/xen/arch/x86/include/asm/perfc_defn.h
@@ -125,4 +125,7 @@ PERFCOUNTER(realmode_exits,      "vmexit
 
 PERFCOUNTER(pauseloop_exits, "vmexits from Pause-Loop Detection")
 
+PERFCOUNTER(iommu_pt_shatters,    "IOMMU page table shatters")
+PERFCOUNTER(iommu_pt_coalesces,   "IOMMU page table coalesces")
+
 /*#endif*/ /* __XEN_PERFC_DEFN_H__ */
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -345,6 +345,8 @@ static int iommu_pde_from_dfn(struct dom
                                      level, PTE_kind_table);
 
             *flush_flags |= IOMMU_FLUSHF_modified;
+
+            perfc_incr(iommu_pt_shatters);
         }
 
         /* Install lower level page table for non-present entries */
@@ -477,6 +479,7 @@ int cf_check amd_iommu_map_page(
                               flags & IOMMUF_readable, &contig);
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
@@ -543,6 +546,7 @@ int cf_check amd_iommu_unmap_page(
             clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
             *flush_flags |= IOMMU_FLUSHF_all;
             iommu_queue_free_pgtable(hd, pg);
+            perfc_incr(iommu_pt_coalesces);
         }
     }
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -404,6 +404,8 @@ static uint64_t addr_to_dma_page_maddr(s
 
                 if ( flush_flags )
                     *flush_flags |= IOMMU_FLUSHF_modified;
+
+                perfc_incr(iommu_pt_shatters);
             }
 
             write_atomic(&pte->val, new_pte.val);
@@ -857,6 +859,7 @@ static int dma_pte_clear_one(struct doma
 
         *flush_flags |= IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
@@ -2239,6 +2242,7 @@ static int __must_check cf_check intel_i
 
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:25:37 2022
Message-ID: <5e0d611e-b96c-2aae-1f0e-f5cafddde77a@suse.com>
Date: Thu, 9 Jun 2022 12:19:12 +0200
Subject: [PATCH v6 06/12] IOMMU/x86: prefill newly allocated page tables
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
In-Reply-To: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>

Page tables are used for two purposes after allocation: They either
start out all empty, or they are filled to replace a superpage.
Subsequently, to replace all empty or fully contiguous page tables,
contiguous sub-regions will be recorded within individual page tables.
Install the initial set of markers immediately after allocation. Make
sure to retain these markers when further populating a page table in
preparation for it to replace a superpage.

The markers are simply 4-bit fields holding the order value of
contiguous entries. To demonstrate this, if a page table had just 16
entries, this would be the initial (fully contiguous) set of markers:

index  0 1 2 3 4 5 6 7 8 9 A B C D E F
marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0

"Contiguous" here means not only present entries with successively
increasing MFNs, each one suitably aligned for its slot, and identical
attributes, but also a respective number of all non-present (zero except
for the markers) entries.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
An alternative to the ASSERT()s added to set_iommu_ptes_present() would
be to make the function less general-purpose; after all it's used in a
single place only (i.e. it might as well be folded into its only
caller).

While in VT-d's comment ahead of struct dma_pte I'm adjusting the
description of the high bits, I'd like to note that the description of
some of the lower bits isn't correct either. Yet I don't think adjusting
that belongs here.
---
v6: Use sizeof().
v5: Assert next_mfn is suitably aligned in set_iommu_ptes_present(). Use
    CONTIG_LEVEL_SHIFT in favor of PAGE_SHIFT-3.
v4: Add another comment referring to pt-contig-markers.h. Re-base.
v3: Add comments. Re-base.
v2: New.

--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -146,7 +146,8 @@ void iommu_free_domid(domid_t domid, uns
 
 int __must_check iommu_free_pgtables(struct domain *d);
 struct domain_iommu;
-struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
+struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd,
+                                                   uint64_t contig_mask);
 void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -446,11 +446,13 @@ union amd_iommu_x2apic_control {
 #define IOMMU_PAGE_TABLE_U32_PER_ENTRY	(IOMMU_PAGE_TABLE_ENTRY_SIZE / 4)
 #define IOMMU_PAGE_TABLE_ALIGNMENT	4096
 
+#define IOMMU_PTE_CONTIG_MASK           0x1e /* The ign0 field below. */
+
 union amd_iommu_pte {
     uint64_t raw;
     struct {
         bool pr:1;
-        unsigned int ign0:4;
+        unsigned int ign0:4; /* Covered by IOMMU_PTE_CONTIG_MASK. */
         bool a:1;
         bool d:1;
         unsigned int ign1:2;
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -21,6 +21,8 @@
 
 #include "iommu.h"
 
+#include <asm/pt-contig-markers.h>
+
 /* Given pfn and page table level, return pde index */
 static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
 {
@@ -113,9 +115,23 @@ static void set_iommu_ptes_present(unsig
         return;
     }
 
+    ASSERT(!(next_mfn & (page_sz - 1)));
+
     while ( nr_ptes-- )
     {
-        set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
+        ASSERT(!pde->next_level);
+        ASSERT(!pde->u);
+
+        if ( pde > table )
+            ASSERT(pde->ign0 == find_first_set_bit(pde - table));
+        else
+            ASSERT(pde->ign0 == CONTIG_LEVEL_SHIFT);
+
+        pde->iw = iw;
+        pde->ir = ir;
+        pde->fc = true; /* See set_iommu_pde_present(). */
+        pde->mfn = next_mfn;
+        pde->pr = true;
 
         ++pde;
         next_mfn += page_sz;
@@ -295,7 +311,7 @@ static int iommu_pde_from_dfn(struct dom
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
-            table = iommu_alloc_pgtable(hd);
+            table = iommu_alloc_pgtable(hd, IOMMU_PTE_CONTIG_MASK);
             if ( table == NULL )
             {
                 AMD_IOMMU_ERROR("cannot allocate I/O page table\n");
@@ -325,7 +341,7 @@ static int iommu_pde_from_dfn(struct dom
 
             if ( next_table_mfn == 0 )
             {
-                table = iommu_alloc_pgtable(hd);
+                table = iommu_alloc_pgtable(hd, IOMMU_PTE_CONTIG_MASK);
                 if ( table == NULL )
                 {
                     AMD_IOMMU_ERROR("cannot allocate I/O page table\n");
@@ -726,7 +742,7 @@ static int fill_qpt(union amd_iommu_pte
                  * page table pages, and the resulting allocations are always
                  * zeroed.
                  */
-                pgs[level] = iommu_alloc_pgtable(hd);
+                pgs[level] = iommu_alloc_pgtable(hd, 0);
                 if ( !pgs[level] )
                 {
                     rc = -ENOMEM;
@@ -784,7 +800,7 @@ int cf_check amd_iommu_quarantine_init(s
         return 0;
     }
 
-    pdev->arch.amd.root_table = iommu_alloc_pgtable(hd);
+    pdev->arch.amd.root_table = iommu_alloc_pgtable(hd, 0);
     if ( !pdev->arch.amd.root_table )
         return -ENOMEM;
 
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -342,7 +342,7 @@ int amd_iommu_alloc_root(struct domain *
 
     if ( unlikely(!hd->arch.amd.root_table) && d != dom_io )
     {
-        hd->arch.amd.root_table = iommu_alloc_pgtable(hd);
+        hd->arch.amd.root_table = iommu_alloc_pgtable(hd, 0);
         if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -334,7 +334,7 @@ static uint64_t addr_to_dma_page_maddr(s
             goto out;
 
         pte_maddr = level;
-        if ( !(pg = iommu_alloc_pgtable(hd)) )
+        if ( !(pg = iommu_alloc_pgtable(hd, 0)) )
             goto out;
 
         hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
@@ -376,7 +376,7 @@ static uint64_t addr_to_dma_page_maddr(s
             }
 
             pte_maddr = level - 1;
-            pg = iommu_alloc_pgtable(hd);
+            pg = iommu_alloc_pgtable(hd, DMA_PTE_CONTIG_MASK);
             if ( !pg )
                 break;
 
@@ -388,12 +388,13 @@ static uint64_t addr_to_dma_page_maddr(s
                 struct dma_pte *split = map_vtd_domain_page(pte_maddr);
                 unsigned long inc = 1UL << level_to_offset_bits(level - 1);
 
-                split[0].val = pte->val;
+                split[0].val |= pte->val & ~DMA_PTE_CONTIG_MASK;
                 if ( inc == PAGE_SIZE )
                     split[0].val &= ~DMA_PTE_SP;
 
                 for ( offset = 1; offset < PTE_NUM; ++offset )
-                    split[offset].val = split[offset - 1].val + inc;
+                    split[offset].val |=
+                        (split[offset - 1].val & ~DMA_PTE_CONTIG_MASK) + inc;
 
                 iommu_sync_cache(split, PAGE_SIZE);
                 unmap_vtd_domain_page(split);
@@ -2168,7 +2169,7 @@ static int __must_check cf_check intel_i
     if ( iommu_snoop )
         dma_set_pte_snp(new);
 
-    if ( old.val == new.val )
+    if ( !((old.val ^ new.val) & ~DMA_PTE_CONTIG_MASK) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
@@ -3057,7 +3058,7 @@ static int fill_qpt(struct dma_pte *this
                  * page table pages, and the resulting allocations are always
                  * zeroed.
                  */
-                pgs[level] = iommu_alloc_pgtable(hd);
+                pgs[level] = iommu_alloc_pgtable(hd, 0);
                 if ( !pgs[level] )
                 {
                     rc = -ENOMEM;
@@ -3114,7 +3115,7 @@ static int cf_check intel_iommu_quaranti
     if ( !drhd )
         return -ENODEV;
 
-    pg = iommu_alloc_pgtable(hd);
+    pg = iommu_alloc_pgtable(hd, 0);
     if ( !pg )
         return -ENOMEM;
 
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -253,7 +253,10 @@ struct context_entry {
  * 2-6: reserved
  * 7: super page
  * 8-11: available
- * 12-63: Host physcial address
+ * 12-51: Host physcial address
+ * 52-61: available (52-55 used for DMA_PTE_CONTIG_MASK)
+ * 62: reserved
+ * 63: available
  */
 struct dma_pte {
     u64 val;
@@ -263,6 +266,7 @@ struct dma_pte {
 #define DMA_PTE_PROT (DMA_PTE_READ | DMA_PTE_WRITE)
 #define DMA_PTE_SP   (1 << 7)
 #define DMA_PTE_SNP  (1 << 11)
+#define DMA_PTE_CONTIG_MASK  (0xfull << PADDR_BITS)
 #define dma_clear_pte(p)    do {(p).val = 0;} while(0)
 #define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while(0)
 #define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while(0)
@@ -276,7 +280,7 @@ struct dma_pte {
 #define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
 #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
 #define dma_set_pte_addr(p, addr) do {\
-            (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
+            (p).val |= ((addr) & PADDR_MASK & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & DMA_PTE_PROT) != 0)
 #define dma_pte_superpage(p) (((p).val & DMA_PTE_SP) != 0)
 
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/io.h>
 #include <asm/io_apic.h>
 #include <asm/mem_paging.h>
+#include <asm/pt-contig-markers.h>
 #include <asm/setup.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
@@ -529,11 +530,12 @@ int iommu_free_pgtables(struct domain *d
     return 0;
 }
 
-struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
+struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
+                                      uint64_t contig_mask)
 {
     unsigned int memflags = 0;
     struct page_info *pg;
-    void *p;
+    uint64_t *p;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -545,7 +547,29 @@ struct page_info *iommu_alloc_pgtable(st
         return NULL;
 
     p = __map_domain_page(pg);
-    clear_page(p);
+
+    if ( contig_mask )
+    {
+        /* See pt-contig-markers.h for a description of the marker scheme. */
+        unsigned int i, shift = find_first_set_bit(contig_mask);
+
+        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
+
+        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
+        p[1] = 0;
+        p[2] = 1ull << shift;
+        p[3] = 0;
+
+        for ( i = 4; i < PAGE_SIZE / sizeof(*p); i += 4 )
+        {
+            p[i + 0] = (find_first_set_bit(i) + 0ull) << shift;
+            p[i + 1] = 0;
+            p[i + 2] = 1ull << shift;
+            p[i + 3] = 0;
+        }
+    }
+    else
+        clear_page(p);
 
     iommu_sync_cache(p, PAGE_SIZE);
 


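Similarly, the superpage-split hunk in addr_to_dma_page_maddr() preserves the pre-seeded markers by OR-ing in only the non-marker bits. A minimal stand-alone model of that loop (constants mirror the patch; function and macro names are illustrative, not Xen's exact definitions):

```c
/* Hypothetical model of the VT-d superpage split above: one large PTE
 * becomes 512 entries of stride `inc`, and the contiguity markers
 * already present in the fresh table survive because each slot is
 * only OR'ed with non-marker bits of the source value. */
#include <assert.h>
#include <stdint.h>

#define PADDR_BITS           52
#define DMA_PTE_CONTIG_MASK  (0xFULL << PADDR_BITS)
#define DMA_PTE_SP           (1ULL << 7)
#define SPLIT_PTE_NUM        512u
#define SPLIT_PAGE_SIZE      4096ULL

static void split_superpage(uint64_t parent, uint64_t *split, uint64_t inc)
{
    unsigned int offset;

    split[0] |= parent & ~DMA_PTE_CONTIG_MASK;
    if (inc == SPLIT_PAGE_SIZE)        /* leaf entries lose the SP bit */
        split[0] &= ~DMA_PTE_SP;

    for (offset = 1; offset < SPLIT_PTE_NUM; offset++)
        split[offset] |= (split[offset - 1] & ~DMA_PTE_CONTIG_MASK) + inc;
}
```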

From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:27:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:27:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344990.570624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFNU-0001Wb-Sg; Thu, 09 Jun 2022 10:27:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344990.570624; Thu, 09 Jun 2022 10:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFNU-0001WU-ON; Thu, 09 Jun 2022 10:27:00 +0000
Received: by outflank-mailman (input) for mailman id 344990;
 Thu, 09 Jun 2022 10:26:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzFNT-0001WI-QO
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:26:59 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061a.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b12cd6e7-e7de-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:26:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9120.eurprd04.prod.outlook.com (2603:10a6:150:27::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Thu, 9 Jun
 2022 10:26:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 10:26:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b12cd6e7-e7de-11ec-bd2c-47488cf2e6aa
Message-ID: <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
Date: Thu, 9 Jun 2022 12:26:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0402CA0001.eurprd04.prod.outlook.com
 (2603:10a6:203:90::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 09.06.2022 12:16, Bertrand Marquis wrote:
>> On 1 Jun 2022, at 17:59, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>
>> Use "define" for the headers*_chk commands as otherwise the "#"
>> is interpreted as a comment and make can't find the end of
>> $(foreach,).
>>
>> Adding several .PRECIOUS as without them `make` deletes the
>> intermediate targets. This is an issue because the macro $(if_changed,)
>> check if the target exist in order to decide whether to recreate the
>> target.
>>
>> Removing the call to `mkdir` from the commands. Those aren't needed
>> anymore because a rune in Rules.mk creates the directory for each
>> $(targets).
>>
>> Remove "export PYTHON" as it is already exported.
> 
> With this change, compiling for x86 is now ending up in:
> CHK     include/headers99.chk
> make[9]: execvp: /bin/sh: Argument list too long
> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> 
> Not quite sure yet why, but I wanted to signal it early as others might be impacted.
> 
> Arm and arm64 builds are not impacted.

Hmm, that patch has passed the smoke push gate already, so there likely is
more to it than there being an unconditional issue. I did build-test this
before pushing, and I've just re-tested on a 2nd system without seeing an
issue.

Also please remember to trim your replies.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:28:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344999.570635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFOi-0002Ct-Ac; Thu, 09 Jun 2022 10:28:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344999.570635; Thu, 09 Jun 2022 10:28:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFOi-0002Cm-69; Thu, 09 Jun 2022 10:28:16 +0000
Received: by outflank-mailman (input) for mailman id 344999;
 Thu, 09 Jun 2022 10:28:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Nuio=WQ=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzFOf-0002CP-Va
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:28:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc4cbace-e7de-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:28:11 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-538-mZyfLHhaM-GD1Jv_Xb-K4Q-1; Thu, 09 Jun 2022 06:28:07 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C8EC529AA2E7;
 Thu,  9 Jun 2022 10:28:06 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 935C3492C3B;
 Thu,  9 Jun 2022 10:28:06 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 1B42E1800094; Thu,  9 Jun 2022 12:28:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc4cbace-e7de-11ec-bd2c-47488cf2e6aa
X-MC-Unique: mZyfLHhaM-GD1Jv_Xb-K4Q-1
Date: Thu, 9 Jun 2022 12:28:05 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 2/3] ui: Deliver refresh rate via QemuUIInfo
Message-ID: <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
References: <20220226115516.59830-1-akihiko.odaki@gmail.com>
 <20220226115516.59830-3-akihiko.odaki@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220226115516.59830-3-akihiko.odaki@gmail.com>
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

> --- a/include/ui/console.h
> +++ b/include/ui/console.h
> @@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
>      int       yoff;
>      uint32_t  width;
>      uint32_t  height;
> +    uint32_t  refresh_rate;
>  } QemuUIInfo;
>  
>  /* cursor data format is 32bit RGBA */
> @@ -426,7 +427,6 @@ typedef struct GraphicHwOps {
>      void (*gfx_update)(void *opaque);
>      bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
>      void (*text_update)(void *opaque, console_ch_t *text);
> -    void (*update_interval)(void *opaque, uint64_t interval);
>      void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
>      void (*gl_block)(void *opaque, bool block);
>  } GraphicHwOps;

So you are dropping update_interval, which isn't mentioned in the commit
message at all.  Also this patch is rather big.  I'd suggest:

(1) add refresh_rate
(2) update users one by one
(3) finally drop update_interval when no user is left.
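
Step (1) on its own might look roughly like this (a sketch based on the QemuUIInfo fields visible in the quoted hunk; xoff and the unit of refresh_rate are assumptions, not taken from the patch):

```c
/* Sketch of step (1) in isolation: add refresh_rate without touching
 * update_interval yet, so every existing user keeps compiling. Field
 * layout follows the QemuUIInfo quoted above; xoff and the mHz unit
 * are assumptions for illustration. */
#include <assert.h>
#include <stdint.h>

typedef struct QemuUIInfo {
    int      xoff;
    int      yoff;
    uint32_t width;
    uint32_t height;
    uint32_t refresh_rate;   /* new in step (1); assumed unit: mHz */
} QemuUIInfo;
```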

thanks,
  Gerd



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:47:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344802.570646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh8-000547-Qe; Thu, 09 Jun 2022 10:47:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344802.570646; Thu, 09 Jun 2022 10:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh8-000540-Ns; Thu, 09 Jun 2022 10:47:18 +0000
Received: by outflank-mailman (input) for mailman id 344802;
 Thu, 09 Jun 2022 09:16:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7WWr=WQ=suse.com=pmladek@srs-se1.protection.inumbo.net>)
 id 1nzEHe-0007jN-Gk
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 09:16:54 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e65cface-e7d4-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 11:16:53 +0200 (CEST)
Received: from relay2.suse.de (relay2.suse.de [149.44.160.134])
 by smtp-out1.suse.de (Postfix) with ESMTP id 317C521DE0;
 Thu,  9 Jun 2022 09:16:52 +0000 (UTC)
Received: from suse.cz (unknown [10.100.208.146])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by relay2.suse.de (Postfix) with ESMTPS id B45BB2C14F;
 Thu,  9 Jun 2022 09:16:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e65cface-e7d4-11ec-bd2c-47488cf2e6aa
Date: Thu, 9 Jun 2022 11:16:46 +0200
From: Petr Mladek <pmladek@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: ink@jurassic.park.msu.ru, mattst88@gmail.com, vgupta@kernel.org,
	linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <YqG6URbihTNCk9YR@alley>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.444659212@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144517.444659212@infradead.org>

On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> tracepoint"), was printk usage from the cpuidle path where RCU was
> already disabled.
> 
> Per the patches earlier in this series, this is no longer the case.

My understanding is that this series greatly reduces the amount
of code called with RCU disabled. As a result, the particular printk()
call mentioned by commit fc98c3c8c9dc ("printk: use rcuidle console
tracepoint") is now made with RCU enabled. Hence this particular
problem is now fixed in a better way.

But is this true in general?
Does this "prevent" calling printk() safely from code with
RCU disabled?

I am not sure if anyone cares. printk() is best-effort
functionality because of the consoles code anyway. Also, I wonder
whether anyone actually uses this trace_console().

Therefore, if this patch allows removing some tricky tracing
code, then it might be worth it. But if the trace_console_rcuidle()
variant is still going to be available, then I would keep using it.

Best Regards,
Petr

> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/printk/printk.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -2238,7 +2238,7 @@ static u16 printk_sprint(char *text, u16
>  		}
>  	}
>  
> -	trace_console_rcuidle(text, text_len);
> +	trace_console(text, text_len);
>  
>  	return text_len;
>  }
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:47:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344827.570651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-00056M-34; Thu, 09 Jun 2022 10:47:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344827.570651; Thu, 09 Jun 2022 10:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh8-00055v-Uu; Thu, 09 Jun 2022 10:47:18 +0000
Received: by outflank-mailman (input) for mailman id 344827;
 Thu, 09 Jun 2022 09:41:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5tdW=WQ=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nzEfb-0003Xg-Jf
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 09:41:40 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b691434-e7d8-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 11:41:37 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=worktop.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nzEen-006Kx5-AX; Thu, 09 Jun 2022 09:40:49 +0000
Received: by worktop.programming.kicks-ass.net (Postfix, from userid 1000)
 id B092F981287; Thu,  9 Jun 2022 11:40:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b691434-e7d8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=jxhFyRA7gcpqWtRQyvyD3abswc2PbVc4FSoswUV5Ukc=; b=eTVis02Li1gUMyvYNvhtQOfPjM
	7jPK+tlnMQu8YlONdKYuGGzg5f4OtA6n+DLuvaVtWLmmwBKEcjK3gJJFZyuUsx9s+WrgcdXDtEGo5
	n/0BitYa/9Tv9a5PHDy4FBaJ0usEOIIGYMMcR/SAKx38GPSteGLT2qjJKdgzgVt9Wb+/EeSMk/GCj
	o7MIldwwpKDLAKlNGDE/FUzTM2RKPEJfDUWHFxb549VYWV718uWBpBWrwfwVc4Y51nI2XmOQdPNke
	dbtUv13RdN1xprVJqbJ/lLZKhjUqu4j3NGqJXLsSZjtm/Jap7nXVhvEByTNVwWiNq35HcNs4n5id9
	tqC7XoNQ==;
Date: Thu, 9 Jun 2022 11:40:46 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Richard Henderson <rth@twiddle.net>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>, Vineet Gupta <vgupta@kernel.org>,
	Russell King - ARM Linux <linux@armlinux.org.uk>,
	Hans Ulli Kroll <ulli.kroll@googlemail.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	Shawn Guo <shawnguo@kernel.org>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	Sascha Hauer <kernel@pengutronix.de>,
	Fabio Estevam <festevam@gmail.com>,
	NXP Linux Team <linux-imx@nxp.com>,
	Tony Lindgren <tony@atomide.com>, Kevin Hilman <khilman@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	bcain@quicinc.com, Huacai Chen <chenhuacai@kernel.org>,
	Xuerui Wang <kernel@xen0n.name>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Sam Creasey <sammy@sammy.net>, Michal Simek <monstr@monstr.eu>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>, Jonas Bonn <jonas@southpole.se>,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Stafford Horne <shorne@gmail.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	Johannes Berg <johannes@sipsolutions.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	the arch/x86 maintainers <x86@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, Pv-drivers <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Chris Zankel <chris@zankel.net>, Max Filippov <jcmvbkbc@gmail.com>,
	Rafael Wysocki <rafael@kernel.org>, Len Brown <lenb@kernel.org>,
	Pavel Machek <pavel@ucw.cz>, gregkh <gregkh@linuxfoundation.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org,
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Anup Patel <anup@brainfault.org>,
	Thierry Reding <thierry.reding@gmail.com>,
	Jonathan Hunter <jonathanh@nvidia.com>,
	jacob.jun.pan@linux.intel.com, Yury Norov <yury.norov@gmail.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Steven Rostedt <rostedt@goodmis.org>,
	Petr Mladek <pmladek@suse.com>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	John Ogness <john.ogness@linutronix.de>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	vschneid@redhat.com, jpoimboe@kernel.org,
	alpha <linux-alpha@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"open list:SYNOPSYS ARC ARCHITECTURE" <linux-snps-arc@lists.infradead.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-omap <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org,
	"open list:QUALCOMM HEXAGON..." <linux-hexagon@vger.kernel.org>,
	"open list:IA64 (Itanium) PLATFORM" <linux-ia64@vger.kernel.org>,
	linux-m68k <linux-m68k@lists.linux-m68k.org>,
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>,
	Openrisc <openrisc@lists.librecores.org>,
	Parisc List <linux-parisc@vger.kernel.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	linux-s390 <linux-s390@vger.kernel.org>,
	Linux-sh list <linux-sh@vger.kernel.org>,
	sparclinux <sparclinux@vger.kernel.org>,
	linux-um <linux-um@lists.infradead.org>,
	linux-perf-users@vger.kernel.org,
	"open list:DRM DRIVER FOR QEMU'S CIRRUS DEVICE" <virtualization@lists.linux-foundation.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"open list:TENSILICA XTENSA PORT (xtensa)" <linux-xtensa@linux-xtensa.org>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Linux PM list <linux-pm@vger.kernel.org>,
	linux-clk <linux-clk@vger.kernel.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	"open list:TEGRA ARCHITECTURE SUPPORT" <linux-tegra@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>, rcu@vger.kernel.org
Subject: Re: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
Message-ID: <YqHAHpGVe10I8O1z@worktop.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.010587032@infradead.org>
 <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>

On Wed, Jun 08, 2022 at 06:28:33PM +0200, Arnd Bergmann wrote:
> On Wed, Jun 8, 2022 at 4:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > arch_cpu_idle() is a very simple idle interface and exposes only a
> > single idle state and is expected to not require RCU and not do any
> > tracing/instrumentation.
> >
> > As such, omap_sram_idle() is not a valid implementation. Replace it
> > with the simple (shallow) omap3_do_wfi() call, leaving the more
> > complicated idle states to the cpuidle driver.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> I see similar code in omap2:
> 
> omap2_pm_idle()
>  -> omap2_enter_full_retention()
>      -> omap2_sram_suspend()
> 
> Is that code path safe to use without RCU or does it need a similar change?

It needs a similar change; clearly I was running on fumes not to have
found that when grepping around the omap code :/


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:47:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344832.570664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-0005KW-Qw; Thu, 09 Jun 2022 10:47:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344832.570664; Thu, 09 Jun 2022 10:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-0005Hq-Iz; Thu, 09 Jun 2022 10:47:19 +0000
Received: by outflank-mailman (input) for mailman id 344832;
 Thu, 09 Jun 2022 10:02:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5tdW=WQ=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nzF01-0005mi-IY
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:02:45 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e3bea48-e7db-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:02:44 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=worktop.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nzEzM-006LJR-Mm; Thu, 09 Jun 2022 10:02:07 +0000
Received: by worktop.programming.kicks-ass.net (Postfix, from userid 1000)
 id 2E614981287; Thu,  9 Jun 2022 12:02:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e3bea48-e7db-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=UbbDgBiwip0YBZOTZVQugfugvsZnYOECFEm/sChB2KE=; b=bn7nWKrTByqCttAV7Tb+hk3S6t
	I/mkMmJ369JvRCOAATit3qrnLrVVApCU3MZIi6lIKiFAlbStRcA30X0UFUd1T8D12fVQnVKa7hYPR
	K7EYNQp37UzeZk78mCBcTmEsaKr3iz0rWwxCle8s1uG3rrOkr/TW+qX1Zj/6s0Og8NDJ1jJvyTCIJ
	X/UPiCnCV5l95zgrnKFHSTcoAhPMEQ49r4HzYtVzZmCksowLBW+PMICGN33p7tE1QEHeN+zrII/zd
	ec7l/iKVKEAoDbJibxSRhGbBYgET1Ue5jZm0b5HeYxPcLQDm5a8x7p7RxjPba9/By/81XUqzodwzS
	pNWm9E9w==;
Date: Thu, 9 Jun 2022 12:02:04 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Petr Mladek <pmladek@suse.com>
Cc: ink@jurassic.park.msu.ru, mattst88@gmail.com, vgupta@kernel.org,
	linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqG6URbihTNCk9YR@alley>

On Thu, Jun 09, 2022 at 11:16:46AM +0200, Petr Mladek wrote:
> On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> > The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> > tracepoint"), was printk usage from the cpuidle path where RCU was
> > already disabled.
> > 
> > Per the patches earlier in this series, this is no longer the case.
> 
> My understanding is that this series greatly reduces the amount
> of code called with RCU disabled. As a result, the particular printk()
> call mentioned by commit fc98c3c8c9dc ("printk: use rcuidle console
> tracepoint") is now made with RCU enabled. Hence this particular
> problem is now fixed in a better way.
> 
> But is this true in general?
> Does this "prevent" calling printk() safely from code with
> RCU disabled?

On x86_64, yes. Other architectures, less so.

Specifically, the objtool noinstr validation pass will warn at build
time (DEBUG_ENTRY=y) if any noinstr/cpuidle code makes a call to
non-vetted code such as printk().

At the same time, there are a few hacks that allow WARN to work, but
mostly if you hit WARN in entry/noinstr you get to keep the pieces in
any case.

On other architectures we'll need to rely on runtime coverage with
PROVE_RCU. That is, if a splat like the one in the above-mentioned
commit happens again, we'll need to fix it by adjusting the callchain,
not by mucking about with RCU state.

> I am not sure if anyone cares. printk() is best-effort
> functionality because of the consoles code anyway. Also, I wonder
> whether anyone actually uses this trace_console().

This is the tracepoint used to spool all of printk into ftrace; I
suspect there are users, but I haven't used it myself.
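
For anyone wanting to check for themselves, the spooling can be
observed from tracefs. A sketch of the configuration commands,
assuming tracefs is mounted at /sys/kernel/tracing and a root shell:

```shell
# enable the printk console tracepoint, then watch it fire
echo 1 > /sys/kernel/tracing/events/printk/console/enable
echo "hello via printk" > /dev/kmsg        # inject a kernel message
cat /sys/kernel/tracing/trace_pipe         # shows console: events
echo 0 > /sys/kernel/tracing/events/printk/console/enable
```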

> Therefore, if this patch allows removing some tricky tracing
> code, then it might be worth it. But if the trace_console_rcuidle()
> variant is still going to be available, then I would keep using it.

My ultimate goal is to delete trace_.*_rcuidle() and RCU_NONIDLE()
entirely. We're close, but not quite there yet.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:47:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344829.570659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-0005Ed-FT; Thu, 09 Jun 2022 10:47:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344829.570659; Thu, 09 Jun 2022 10:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-0005DK-8t; Thu, 09 Jun 2022 10:47:19 +0000
Received: by outflank-mailman (input) for mailman id 344829;
 Thu, 09 Jun 2022 09:48:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5tdW=WQ=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1nzEmW-0003qg-8Q
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 09:48:48 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b2d0c2b-e7d9-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 11:48:46 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=worktop.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1nzElj-00DR5R-4H; Thu, 09 Jun 2022 09:47:59 +0000
Received: by worktop.programming.kicks-ass.net (Postfix, from userid 1000)
 id 100CA981287; Thu,  9 Jun 2022 11:47:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b2d0c2b-e7d9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=9jA4mchnYQoJ5XaXo73/qgoc6iaW5C3P1k7XaYKsS1U=; b=cB9XFjQVxWqfDyzeN0qaxTZuLD
	YLcvVXkDGRGNOtMLtEgH+zL1QwGa+jRQrKiDu6o+KwcqUIios781clMIKdQSQtEcXojjpXyjmvj46
	euyjd4m+n/z+pXAWpiww5AHKVJISuO8dbwfn2kdK+vnbm4iQMh9u58KvobfYp6DKnGfTofnM78Yt0
	Jz76LJDB6p5dvl91aPCez/Dj+sFAkswEne+HjI0PWGc0G7Uh6QasF9s0y9AlvQ8RjyVaUQ4ilXQ62
	Z4qivKfe5pGVxrpCjV/g6X6ndctS6ICj5/6oYIabFd8D02kUsNuFDdK3DNDO+pifvTslSQOhwvIep
	NfI5oq7A==;
Date: Thu, 9 Jun 2022 11:47:57 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Tony Lindgren <tony@atomide.com>
Cc: Arnd Bergmann <arnd@arndb.de>, Richard Henderson <rth@twiddle.net>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>, Vineet Gupta <vgupta@kernel.org>,
	Russell King - ARM Linux <linux@armlinux.org.uk>,
	Hans Ulli Kroll <ulli.kroll@googlemail.com>,
	Linus Walleij <linus.walleij@linaro.org>,
	Shawn Guo <shawnguo@kernel.org>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	Sascha Hauer <kernel@pengutronix.de>,
	Fabio Estevam <festevam@gmail.com>,
	NXP Linux Team <linux-imx@nxp.com>,
	Kevin Hilman <khilman@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	bcain@quicinc.com, Huacai Chen <chenhuacai@kernel.org>,
	Xuerui Wang <kernel@xen0n.name>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Sam Creasey <sammy@sammy.net>, Michal Simek <monstr@monstr.eu>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>, Jonas Bonn <jonas@southpole.se>,
	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
	Stafford Horne <shorne@gmail.com>,
	James Bottomley <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>, David Miller <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	Johannes Berg <johannes@sipsolutions.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	the arch/x86 maintainers <x86@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, Pv-drivers <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Chris Zankel <chris@zankel.net>, Max Filippov <jcmvbkbc@gmail.com>,
	Rafael Wysocki <rafael@kernel.org>, Len Brown <lenb@kernel.org>,
	Pavel Machek <pavel@ucw.cz>, gregkh <gregkh@linuxfoundation.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>, lpieralisi@kernel.org,
	Sudeep Holla <sudeep.holla@arm.com>, Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Anup Patel <anup@brainfault.org>,
	Thierry Reding <thierry.reding@gmail.com>,
	Jonathan Hunter <jonathanh@nvidia.com>,
	jacob.jun.pan@linux.intel.com, Yury Norov <yury.norov@gmail.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Steven Rostedt <rostedt@goodmis.org>,
	Petr Mladek <pmladek@suse.com>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	John Ogness <john.ogness@linutronix.de>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>, quic_neeraju@quicinc.com,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	vschneid@redhat.com, jpoimboe@kernel.org,
	alpha <linux-alpha@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"open list:SYNOPSYS ARC ARCHITECTURE" <linux-snps-arc@lists.infradead.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-omap <linux-omap@vger.kernel.org>, linux-csky@vger.kernel.org,
	"open list:QUALCOMM HEXAGON..." <linux-hexagon@vger.kernel.org>,
	"open list:IA64 (Itanium) PLATFORM" <linux-ia64@vger.kernel.org>,
	linux-m68k <linux-m68k@lists.linux-m68k.org>,
	"open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>,
	Openrisc <openrisc@lists.librecores.org>,
	Parisc List <linux-parisc@vger.kernel.org>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	linux-s390 <linux-s390@vger.kernel.org>,
	Linux-sh list <linux-sh@vger.kernel.org>,
	sparclinux <sparclinux@vger.kernel.org>,
	linux-um <linux-um@lists.infradead.org>,
	linux-perf-users@vger.kernel.org,
	"open list:DRM DRIVER FOR QEMU'S CIRRUS DEVICE" <virtualization@lists.linux-foundation.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"open list:TENSILICA XTENSA PORT (xtensa)" <linux-xtensa@linux-xtensa.org>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Linux PM list <linux-pm@vger.kernel.org>,
	linux-clk <linux-clk@vger.kernel.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	"open list:TEGRA ARCHITECTURE SUPPORT" <linux-tegra@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>, rcu@vger.kernel.org
Subject: Re: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
Message-ID: <YqHBzbAiqaZeoipw@worktop.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.010587032@infradead.org>
 <CAK8P3a0g-fNu9=BUECSXcNeWT7XWHQMnSXZE-XYE+5eakHxKxA@mail.gmail.com>
 <YqGjqgSrTRseJW6M@atomide.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqGjqgSrTRseJW6M@atomide.com>

On Thu, Jun 09, 2022 at 10:39:22AM +0300, Tony Lindgren wrote:
> * Arnd Bergmann <arnd@arndb.de> [220608 18:18]:
> > On Wed, Jun 8, 2022 at 4:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > arch_cpu_idle() is a very simple idle interface and exposes only a
> > > single idle state and is expected to not require RCU and not do any
> > > tracing/instrumentation.
> > >
> > > As such, omap_sram_idle() is not a valid implementation. Replace it
> > > with the simple (shallow) omap3_do_wfi() call, leaving the more
> > > complicated idle states to the cpuidle driver.
> 
> Agreed, it makes sense to limit deeper idle states to cpuidle. Hopefully
> there is some informative splat for attempting to use arch_cpu_idle()
> for deeper idle states :)

The arch_cpu_idle() interface doesn't allow one to express a desire for
deeper states. I'm not sure how anyone could even attempt this.

But given what OMAP needs to do to go deeper, this would involve
things that require RCU; combine that with the follow-up patches that
rip out all the trace_.*_rcuidle() hackery from the power and clock
domain code, and PROVE_RCU should scream if anybody were to attempt it.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 10:47:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 10:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.344851.570673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFhA-0005TO-Ah; Thu, 09 Jun 2022 10:47:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 344851.570673; Thu, 09 Jun 2022 10:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzFh9-0005RZ-VC; Thu, 09 Jun 2022 10:47:19 +0000
Received: by outflank-mailman (input) for mailman id 344851;
 Thu, 09 Jun 2022 10:14:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7WWr=WQ=suse.com=pmladek@srs-se1.protection.inumbo.net>)
 id 1nzFBh-0008FU-B8
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 10:14:49 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fdf9167a-e7dc-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 12:14:48 +0200 (CEST)
Received: from relay2.suse.de (relay2.suse.de [149.44.160.134])
 by smtp-out1.suse.de (Postfix) with ESMTP id 8501C21E03;
 Thu,  9 Jun 2022 10:14:47 +0000 (UTC)
Received: from suse.cz (unknown [10.100.208.146])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by relay2.suse.de (Postfix) with ESMTPS id 311FB2C141;
 Thu,  9 Jun 2022 10:14:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdf9167a-e7dc-11ec-bd2c-47488cf2e6aa
Date: Thu, 9 Jun 2022 12:14:42 +0200
From: Petr Mladek <pmladek@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: ink@jurassic.park.msu.ru, mattst88@gmail.com, vgupta@kernel.org,
	linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <YqHIEthhhi5e+Mtb@alley>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.444659212@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144517.444659212@infradead.org>

Sending again. The previous attempt was rejected by several
recipients; it was caused by mail server changes on my side.

I am sorry for spamming those who already got the 1st mail.

On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> tracepoint"), was printk usage from the cpuidle path where RCU was
> already disabled.
> 
> Per the patches earlier in this series, this is no longer the case.

My understanding is that this series greatly reduces the amount
of code called with RCU disabled. As a result, the particular printk()
call mentioned by commit fc98c3c8c9dc ("printk: use rcuidle console
tracepoint") is now called with RCU enabled. Hence this particular
problem is fixed in a better way now.

But is this true in general?
Does this "prevent" calling printk() safely in code with
RCU disabled?

I am not sure if anyone cares. printk() is best-effort
functionality anyway because of the console code. Also, I wonder
whether anyone actually uses this trace_console().

Therefore, if this patch allows removing some tricky tracing
code, then it might be worth it. But if the trace_console_rcuidle()
variant is still going to be available, then I would keep using it.

Best Regards,
Petr

> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/printk/printk.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -2238,7 +2238,7 @@ static u16 printk_sprint(char *text, u16
>  		}
>  	}
>  
> -	trace_console_rcuidle(text, text_len);
> +	trace_console(text, text_len);
>  
>  	return text_len;
>  }
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:11:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:11:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345060.570701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzG4J-0002e4-Az; Thu, 09 Jun 2022 11:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345060.570701; Thu, 09 Jun 2022 11:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzG4J-0002dx-7p; Thu, 09 Jun 2022 11:11:15 +0000
Received: by outflank-mailman (input) for mailman id 345060;
 Thu, 09 Jun 2022 11:11:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/fG7=WQ=bugseng.com=roberto.bagnara@srs-se1.protection.inumbo.net>)
 id 1nzG4I-0002dm-5c
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:11:14 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de3ecbb3-e7e4-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 13:11:12 +0200 (CEST)
Received: from [192.168.1.137] (host-82-59-248-251.retail.telecomitalia.it
 [82.59.248.251])
 by support.bugseng.com (Postfix) with ESMTPSA id 7FDC44EE0CDD;
 Thu,  9 Jun 2022 13:11:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de3ecbb3-e7e4-11ec-bd2c-47488cf2e6aa
Message-ID: <45c4d8fa-06de-b4a2-5688-14b9cbe5b48c@bugseng.com>
Date: Thu, 9 Jun 2022 13:11:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
In-Reply-To: <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 07/06/22 04:17, Stefano Stabellini wrote:
 > # Rule 9.1 "The value of an object with automatic storage duration shall not be read before it has been set"
 >
 > The question is whether -Wuninitialised already covers this case or not.
 > I think it does.
 >
 > Eclair is reporting a few issues where variables are "possibly
 > uninitialized". We should ask Roberto about them, I don't think they are
 > actual errors? More like extra warnings?

No, -Wuninitialized is not reliable: it has plenty of (well-known)
false negatives.  This is typical of compilers, for which the generation
of warnings is only a secondary objective.  I wrote about that here:

   https://www.bugseng.com/blog/compiler-warnings-use-them-dont-trust-them

On the specifics:

$ cat p.c
int foo (int b)
{
     int a;

     if (b)
     {
         a = 1;
     }

     return a;
}

$ gcc -c -W -Wall -Wmaybe-uninitialized -O3 p.c
$ gcc -c -W -Wall -Wuninitialized -O3 p.c
$

Note that the example is less contrived than you might think.
See JF Bastien's talk at the 2019 LLVM Developers' Meeting:

   https://www.youtube.com/watch?v=I-XUHPimq3o
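For contrast, a Rule 9.1-compliant rewrite of p.c (a minimal sketch;
explicitly initializing `a` is just one of several possible fixes)
leaves nothing for either the compiler or the analyzer to question:

```c
int foo(int b)
{
    int a = 0;  /* explicit initialization: no read-before-set is possible */

    if (b)
    {
        a = 1;
    }

    return a;
}
```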

More generally, you can only embrace MISRA if you agree on
its preventive nature, which is radically different from
the "bug finding" approach.  The point is rather simple:

1) static analysis alone cannot guarantee correctness;
2) peer review is unavoidable;
3) testing is unavoidable.

In order to effectively conduct a peer review, you cannot
afford being distracted every minute by the thoughts
"is this initialized?  where is it initialized?  with which
value is it initialized?"
In a MISRA setting, you want the answer to such questions
to be immediately clear to anyone.
In contrast, if you embrace bug finding (that is, checkers with
false negatives like the ones implemented by compilers),
you will miss instances that testing may also miss
(testing a program with UB does not give reliable results);
and you will likely miss them in peer review as well, unless you
can spend a lot of time and resources on the activity.

The checker implemented by ECLAIR for Rule 9.1 embodies this
principle: if it says "violation", then it is a definite
violation; if it says "caution", then maybe there is no
UB, but a human will have to spend more than 30 seconds
to convince herself that there is no UB.

I understand this may sound frustrating to virtuoso programmers,
and there are many of them in the open source world.
But the truth is that virtuosity in programming is not a good
thing for safety-related development: for safety, you want
code that is simple and straightforward to reason about.
Kind regards,

    Roberto





From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:17:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345071.570712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGAh-0003Wu-3a; Thu, 09 Jun 2022 11:17:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345071.570712; Thu, 09 Jun 2022 11:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGAg-0003Wn-V6; Thu, 09 Jun 2022 11:17:50 +0000
Received: by outflank-mailman (input) for mailman id 345071;
 Thu, 09 Jun 2022 11:17:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/fG7=WQ=bugseng.com=roberto.bagnara@srs-se1.protection.inumbo.net>)
 id 1nzGAf-0003Wb-7i
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:17:49 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cac58179-e7e5-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 13:17:47 +0200 (CEST)
Received: from [192.168.1.137] (host-82-59-248-251.retail.telecomitalia.it
 [82.59.248.251])
 by support.bugseng.com (Postfix) with ESMTPSA id 9EA634EE0CDD;
 Thu,  9 Jun 2022 13:17:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cac58179-e7e5-11ec-bd2c-47488cf2e6aa
Message-ID: <26829bf4-bbcc-a610-ba3b-fa60aa296cf9@bugseng.com>
Date: Thu, 9 Jun 2022 13:17:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
Subject: Re: MISRA C meeting tomorrow, was: MOVING COMMUNITY CALL Call for
 agenda items for 9 June Community Call @ 1500 UTC
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org,
 roberto.bagnara@bugseng.com
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206081806020.21215@ubuntu-linux-20-04-desktop>
 <880f3273-f92b-2b60-8de0-e69fefbada21@suse.com>
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
In-Reply-To: <880f3273-f92b-2b60-8de0-e69fefbada21@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 09/06/22 09:04, Jan Beulich wrote:
> On 09.06.2022 03:20, Stefano Stabellini wrote:
>> Finally, for Rule 13.2, I updated the link to ECLAIR's results. There
>> are a lot more violations than just 4, but I don't know if they are
>> valid or false positives.
> 
> I've picked just the one case in xen/common/efi/ebmalloc.c to check,
> and it says "possibly". That's because evaluation of function call
> arguments involves calling (in this case two) further
> functions. If those functions had side effects (which apparently the
> tool can't figure out), there would indeed be a problem.
> 
> The (Arm based) count of almost 10k violations is clearly a concern.
> I don't consider it even remotely reasonable to add 10k comments, no
> matter how brief, to cover all the false positives.

Again, the MISRA approach is a preventive one.
If you have reasons to write

    f(g(), h());

then declare g() and h() as pure (or const, if they are const).
E.g.:

#if COMPILER_SUPPORTS_PURE
#define PURE __attribute__((pure))
#else
#define PURE
#endif

int g(void) PURE;
int h(void) PURE;

It's good documentation, it improves compiler diagnostics,
and it satisfies Rule 13.2.
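
As a self-contained sketch (f, g, and h are hypothetical stand-ins, and
the feature check below simply tests for GCC/Clang rather than a real
COMPILER_SUPPORTS_PURE macro), the pattern looks like this:

```c
#if defined(__GNUC__) || defined(__clang__)
#define PURE __attribute__((pure))
#else
#define PURE
#endif

/* Declaring g() and h() pure documents that they have no side effects,
 * so the unspecified evaluation order of f()'s arguments cannot matter,
 * which is exactly the concern behind MISRA C Rule 13.2. */
static int g(void) PURE;
static int h(void) PURE;

static int g(void) { return 2; }
static int h(void) { return 3; }

static int f(int x, int y)
{
    return x + y;
}
```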
Kind regards,

    Roberto




From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:20:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345082.570723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGCq-0004cJ-I2; Thu, 09 Jun 2022 11:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345082.570723; Thu, 09 Jun 2022 11:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGCq-0004bP-DJ; Thu, 09 Jun 2022 11:20:04 +0000
Received: by outflank-mailman (input) for mailman id 345082;
 Thu, 09 Jun 2022 11:20:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzGCp-0004Mo-7L
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:20:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a743aab-e7e6-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 13:20:01 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0402MB3354.eurprd04.prod.outlook.com (2603:10a6:7:85::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 11:19:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 11:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a743aab-e7e6-11ec-bd2c-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c1c36273-af87-07b8-9770-93e6be5e60e2@suse.com>
Date: Thu, 9 Jun 2022 13:19:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v5 7/8] xen/x86: add detection of memory interleaves for
 different nodes
Content-Language: en-US
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>, xen-devel@lists.xenproject.org
References: <20220606040916.122184-1-wei.chen@arm.com>
 <20220606040916.122184-8-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220606040916.122184-8-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0253.eurprd06.prod.outlook.com
 (2603:10a6:20b:45f::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f799e0a1-1fe8-404e-6925-08da4a09fcf9
X-MS-TrafficTypeDiagnostic: HE1PR0402MB3354:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f799e0a1-1fe8-404e-6925-08da4a09fcf9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 11:19:58.7117
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MtG+ZNWBdnXXyibxYw/7ka57q7GReaPTLrPEhWrVwx5cGZ1uSShK92EZ7FNBcvmeO/Fv3DQ8gv+Pkli7QTuY4w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0402MB3354

On 06.06.2022 06:09, Wei Chen wrote:
> v4 -> v5:
> 1. Remove "nd->end == end && nd->start == start" from
>    conflicting_memblks.
> 2. Use case NO_CONFLICT instead of "default".
> 3. Correct wrong "node" to "pxm" in print message.
> 4. Remove unnecessary "else" to reduce the indent depth.
> 5. Convert all ranges to proper mathematical interval
>    representation.

As to this:

> @@ -310,44 +343,74 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
>  		bad_srat();
>  		return;
>  	}
> +
> +	/*
> +	 * For a node that already has some memory blocks, we will
> +	 * expand the node memory range temporarily to check for memory
> +	 * interleaves with other nodes. We will not use this node's
> +	 * temporary memory range to check overlaps, because it would
> +	 * mask overlaps within the same node.
> +	 *
> +	 * A node with 0 bytes of memory doesn't need this expansion.
> +	 */
> +	nd_start = start;
> +	nd_end = end;
> +	nd = &nodes[node];
> +	if (nd->start != nd->end) {
> +		if (nd_start > nd->start)
> +			nd_start = nd->start;
> +
> +		if (nd_end < nd->end)
> +			nd_end = nd->end;
> +	}
> +
>  	/* It is fine to add this area to the nodes data it will be used later*/
> -	i = conflicting_memblks(start, end);
> -	if (i < 0)
> -		/* everything fine */;
> -	else if (memblk_nodeid[i] == node) {
> -		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
> -		                !test_bit(i, memblk_hotplug);
> -
> -		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
> -		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
> -		       node_memblk_range[i].start, node_memblk_range[i].end);
> -		if (mismatch) {
> -			bad_srat();
> -			return;
> +	switch (conflicting_memblks(node, start, end, nd_start, nd_end, &i)) {
> +	case OVERLAP:
> +		if (memblk_nodeid[i] == node) {
> +			bool mismatch = !(ma->flags &
> +					  ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
> +			                !test_bit(i, memblk_hotplug);
> +
> +			printk("%sSRAT: PXM %u [%"PRIpaddr"-%"PRIpaddr"] overlaps with itself [%"PRIpaddr"-%"PRIpaddr"]\n",

As said when discussing v4, the mathematical representation is [start,end].
Please use a comma instead of a dash here and below, plus ...

> +			       mismatch ? KERN_ERR : KERN_WARNING, pxm, start,
> +			       end - 1, node_memblk_range[i].start,
> +			       node_memblk_range[i].end - 1);
> +			if (mismatch) {
> +				bad_srat();
> +				return;
> +			}
> +			break;
>  		}
> -	} else {
> +
> +		printk(KERN_ERR
> +		       "SRAT: PXM %u [%"PRIpaddr"-%"PRIpaddr"] overlaps with PXM %u [%"PRIpaddr"-%"PRIpaddr"]\n",
> +		       pxm, start, end - 1, node_to_pxm(memblk_nodeid[i]),
> +		       node_memblk_range[i].start,
> +		       node_memblk_range[i].end - 1);
> +		bad_srat();
> +		return;
> +
> +	case INTERLEAVE:
>  		printk(KERN_ERR
> -		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
> -		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
> -		       node_memblk_range[i].start, node_memblk_range[i].end);
> +		       "SRAT: PXM %u: [%"PRIpaddr"-%"PRIpaddr"] interleaves with PXM %u memblk [%"PRIpaddr"-%"PRIpaddr"]\n",
> +		       pxm, nd_start, nd_end - 1, node_to_pxm(memblk_nodeid[i]),
> +		       node_memblk_range[i].start, node_memblk_range[i].end - 1);
>  		bad_srat();
>  		return;
> +
> +	case NO_CONFLICT:
> +		break;
>  	}
> +
>  	if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)) {
> -		struct node *nd = &nodes[node];
> -
> -		if (!node_test_and_set(node, memory_nodes_parsed)) {
> -			nd->start = start;
> -			nd->end = end;
> -		} else {
> -			if (start < nd->start)
> -				nd->start = start;
> -			if (nd->end < end)
> -				nd->end = end;
> -		}
> +		node_set(node, memory_nodes_parsed);
> +		nd->start = nd_start;
> +		nd->end = nd_end;
>  	}
> -	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIpaddr"-%"PRIpaddr"%s\n",
> -	       node, pxm, start, end,
> +
> +	printk(KERN_INFO "SRAT: Node %u PXM %u [%"PRIpaddr"-%"PRIpaddr"]%s\n",
> +	       node, pxm, start, end - 1,
>  	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
>  
>  	node_memblk_range[num_node_memblks].start = start;
> @@ -396,7 +459,7 @@ static int __init nodes_cover_memory(void)
>  
>  		if (start < end) {
>  			printk(KERN_ERR "SRAT: No PXM for e820 range: "
> -				"%"PRIpaddr" - %"PRIpaddr"\n", start, end);
> +				"[%"PRIpaddr" - %"PRIpaddr"]\n", start, end - 1);

... please be consistent with the use of blanks (personally I'd prefer
no blanks to be there, but I could see people preferring a blank after
the comma). Then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:24:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:24:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345094.570734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGGo-0005p8-9s; Thu, 09 Jun 2022 11:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345094.570734; Thu, 09 Jun 2022 11:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGGo-0005p1-6P; Thu, 09 Jun 2022 11:24:10 +0000
Received: by outflank-mailman (input) for mailman id 345094;
 Thu, 09 Jun 2022 11:24:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzGGm-0005oh-LC
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:24:08 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7d00::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac3b4a19-e7e6-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 13:24:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB4286.eurprd04.prod.outlook.com (2603:10a6:803:46::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 11:24:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 11:24:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac3b4a19-e7e6-11ec-b605-df0040e90b76
Message-ID: <a10972fc-0f25-4187-4386-e73b4f5563af@suse.com>
Date: Thu, 9 Jun 2022 13:24:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Content-Language: en-US
To: Roberto Bagnara <roberto.bagnara@bugseng.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
 <45c4d8fa-06de-b4a2-5688-14b9cbe5b48c@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <45c4d8fa-06de-b4a2-5688-14b9cbe5b48c@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR04CA0142.eurprd04.prod.outlook.com
 (2603:10a6:20b:127::27) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 09.06.2022 13:11, Roberto Bagnara wrote:
> On 07/06/22 04:17, Stefano Stabellini wrote:
>  > # Rule 9.1 "The value of an object with automatic storage duration shall not be read before it has been set"
>  >
>  > The question is whether -Wuninitialized already covers this case or not.
>  > I think it does.
>  >
>  > Eclair is reporting a few issues where variables are "possibly
>  > uninitialized". We should ask Roberto about them; I don't think they are
>  > actual errors? More like extra warnings?
> 
> No, -Wuninitialized is not reliable, as it has plenty of (well known)
> false negatives.  This is typical of compilers, for which the generation
> of warnings is only a secondary objective.  I wrote about that here:
> 
>    https://www.bugseng.com/blog/compiler-warnings-use-them-dont-trust-them
> 
> On the specifics:
> 
> $ cat p.c
> int foo (int b)
> {
>      int a;
> 
>      if (b)
>      {
>          a = 1;
>      }
> 
>      return a;
> }
> 
> $ gcc -c -W -Wall -Wmaybe-uninitialized -O3 p.c
> $ gcc -c -W -Wall -Wuninitialized -O3 p.c
> $
> 
> Note that the example is less contrived than you might think.
> See JF Bastien's talk at the 2019 LLVM Developers' Meeting:
> 
>    https://www.youtube.com/watch?v=I-XUHPimq3o
> 
> More generally, you can only embrace MISRA if you agree on
> its preventive nature, which is radically different from
> the "bug finding" approach.  The point is rather simple:
> 
> 1) static analysis alone cannot guarantee correctness;
> 2) peer review is unavoidable;
> 3) testing is unavoidable.
> 
> In order to effectively conduct a peer review, you cannot
> afford to be distracted every minute by the thought
> "is this initialized?  where is it initialized?  with which
> value is it initialized?"
> In a MISRA setting, you want the answer to such questions
> to be immediately clear to anyone.
> In contrast, if you embrace bug finding (that is, checkers with
> false negatives like the ones implemented by compilers),
> you will miss instances that you may also miss with testing
> (testing a program with UB does not give reliable results);
> and you will likely miss them with peer review, unless you
> can spend a lot of time and resources on the activity.
> 
> The checker implemented by ECLAIR for Rule 9.1 embodies this
> principle: if it says "violation", then it is a definite
> violation;  if it says "caution", then maybe there is no
> UB, but a human will have to spend more than 30 seconds
> in order to convince herself that there is no UB.
> 
> I understand this may sound frustrating to virtuoso programmers,
> and there are many of them in the open source world.
> But the truth is that virtuosity in programming is not a good
> thing for safety-related development.  For safety you want
> code that is simple and straightforward to reason about.

I understand what you're saying, yet I'd like to point out that adding
initializers "blindly" may give a false sense of code correctness.
Among other things it takes away the chance for tools to point out
possible issues. Plus some tools warn about stray initializers ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:33:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:33:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345106.570745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGPD-0007G0-8y; Thu, 09 Jun 2022 11:32:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345106.570745; Thu, 09 Jun 2022 11:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGPD-0007Ft-4x; Thu, 09 Jun 2022 11:32:51 +0000
Received: by outflank-mailman (input) for mailman id 345106;
 Thu, 09 Jun 2022 11:32:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzGPB-0007Fi-GL
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:32:49 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3881867-e7e7-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 13:32:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6949.eurprd04.prod.outlook.com (2603:10a6:20b:102::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 11:32:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 11:32:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3881867-e7e7-11ec-b605-df0040e90b76
Message-ID: <b9675dd3-5ad2-9caf-ab76-30bcba72019f@suse.com>
Date: Thu, 9 Jun 2022 13:32:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: MISRA C meeting tomorrow, was: MOVING COMMUNITY CALL Call for
 agenda items for 9 June Community Call @ 1500 UTC
Content-Language: en-US
To: Roberto Bagnara <roberto.bagnara@bugseng.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew.Cooper3@citrix.com,
 julien@xen.org, Bertrand.Marquis@arm.com, fusa-sig@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206081806020.21215@ubuntu-linux-20-04-desktop>
 <880f3273-f92b-2b60-8de0-e69fefbada21@suse.com>
 <26829bf4-bbcc-a610-ba3b-fa60aa296cf9@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <26829bf4-bbcc-a610-ba3b-fa60aa296cf9@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P195CA0001.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:81::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 09.06.2022 13:17, Roberto Bagnara wrote:
> On 09/06/22 09:04, Jan Beulich wrote:
>> On 09.06.2022 03:20, Stefano Stabellini wrote:
>>> Finally, for Rule 13.2, I updated the link to ECLAIR's results. There
>>> are a lot more violations than just 4, but I don't know if they are
>>> valid or false positives.
>>
>> I've picked just the one case in xen/common/efi/ebmalloc.c to check,
>> and it says "possibly". That's because evaluation of function call
>> arguments involves the calling of (in this case two) further
>> functions. If those functions had side effects (which apparently the
>> tool can't figure out), there would indeed be a problem.
>>
>> The (Arm based) count of almost 10k violations is clearly a concern.
>> I don't consider it even remotely reasonable to add 10k comments, no
>> matter how brief, to cover all the false positives.
> 
> Again, the MISRA approach is a preventive one.
> If you have reasons you want to write
> 
>     f(g(), h());
> 
> then declare g() and h() as pure (or const, if they are const).
> E.g.:
> 
> #if COMPILER_SUPPORTS_PURE
> #define PURE __attribute__((pure))
> #else
> #define PURE
> #endif
> 
> int g(void) PURE;
> int h(void) PURE;
> 
> It's good documentation, it improves compiler diagnostics,
> and it satisfies Rule 13.2.

But such attributes first of all should be correct. They wouldn't be
in the case I've looked at (involving two __virt_to_maddr() invocations),
as the underlying va_to_par() isn't pure. Still, in the normal case the
sequence of calls made is irrelevant to the overall result.

As to improving compiler diagnostics: It has been my experience that
pure and const are largely ignored when used on inline functions. The
compiler rather looks at the inline-expanded code to judge. (But it has
been a couple of years since I last checked, so things may have
changed since then.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:35:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345104.570756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGRO-00085W-LK; Thu, 09 Jun 2022 11:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345104.570756; Thu, 09 Jun 2022 11:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGRO-00085P-IT; Thu, 09 Jun 2022 11:35:06 +0000
Received: by outflank-mailman (input) for mailman id 345104;
 Thu, 09 Jun 2022 11:31:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pINC=WQ=chromium.org=senozhatsky@srs-se1.protection.inumbo.net>)
 id 1nzGNb-0007EE-W9
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:31:12 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a92481be-e7e7-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 13:31:10 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id o7so13563403eja.1
 for <xen-devel@lists.xenproject.org>; Thu, 09 Jun 2022 04:31:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a92481be-e7e7-11ec-b605-df0040e90b76
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley> <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
In-Reply-To: <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
From: Sergey Senozhatsky <senozhatsky@chromium.org>
Date: Thu, 9 Jun 2022 20:30:58 +0900
Message-ID: <CA+_sPaoJGrXhNPCs2dKf2J7u07y1xYrRFZBUtkKwzK9GqcHSuQ@mail.gmail.com>
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
To: Peter Zijlstra <peterz@infradead.org>
Cc: Petr Mladek <pmladek@suse.com>, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com, 
	linus.walleij@linaro.org, shawnguo@kernel.org, 
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com, 
	linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org, 
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org, 
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name, 
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu, 
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com, 
	James.Bottomley@hansenpartnership.com, deller@gmx.de, mpe@ellerman.id.au, 
	benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com, 
	palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com, 
	gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com, 
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org, 
	davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, acme@kernel.org, 
	mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, 
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, pv-drivers@vmware.com, boris.ostrovsky@oracle.com, 
	chris@zankel.net, jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org, 
	pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com, 
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org, 
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org, 
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com, 
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com, 
	andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk, 
	rostedt@goodmis.org, john.ogness@linutronix.de, paulmck@kernel.org, 
	frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org, 
	mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, 
	joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, 
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de, 
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org, 
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, 
	linux-omap@vger.kernel.org, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, 
	openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org, 
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

My emails are getting rejected... Let me try the web interface.

Kudos to Petr for the questions and thanks to PeterZ for the answers.

On Thu, Jun 9, 2022 at 7:02 PM Peter Zijlstra <peterz@infradead.org> wrote:
> This is the tracepoint used to spool all of printk into ftrace, I
> suspect there's users, but I haven't used it myself.

I'm somewhat curious whether we can actually remove that trace event.


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:45:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:45:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345126.570766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGbl-0001Ps-Q3; Thu, 09 Jun 2022 11:45:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345126.570766; Thu, 09 Jun 2022 11:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGbl-0001Pl-NJ; Thu, 09 Jun 2022 11:45:49 +0000
Received: by outflank-mailman (input) for mailman id 345126;
 Thu, 09 Jun 2022 11:45:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sdCg=WQ=gmail.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1nzGbk-0001PM-Gh
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:45:48 +0000
Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com
 [2607:f8b0:4864:20::102a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b3535908-e7e9-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 13:45:47 +0200 (CEST)
Received: by mail-pj1-x102a.google.com with SMTP id e24so21178586pjt.0
 for <xen-devel@lists.xenproject.org>; Thu, 09 Jun 2022 04:45:47 -0700 (PDT)
Received: from [192.168.66.3] (p912131-ipoe.ipoe.ocn.ne.jp. [153.243.13.130])
 by smtp.gmail.com with ESMTPSA id
 k14-20020aa792ce000000b0050dc76281b5sm17501351pfa.143.2022.06.09.04.45.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 09 Jun 2022 04:45:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3535908-e7e9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=1uWRq0qALlsYTQq3er3sD+PXamQFroMLzNzNZ5Baq/E=;
        b=nDYk9F2UkksG7vJXwzdocA1HGXnEFw3FhR+DLGD+lBRr8jWzyttCTlkhOU/M6RtFod
         AtQ7el+DjbiVebfgHf/rkjH+pSUcLBsGXJbLqR5fvzb6ML5W8QC8nA8Dhwbw3uL8SVke
         0hfFTKNtxfZzeospUYTn17XH33tx0FFHrA5ebXazRYhs3mf3nmMvLTiKJzzX+14wpB7o
         dzHgB+4+x2f++5WPxFhGhZeb/glqK+FJlGPWxTpDqsGqHmpyMs4MU+9VJsEvl+MzIpF2
         FgEvPSDwZDYe2ztAARR1NlDLaCM7lo8UtkbRcij8JLSD0Yd96Ngkkn2rQe4ESwX1S5jq
         /VYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=1uWRq0qALlsYTQq3er3sD+PXamQFroMLzNzNZ5Baq/E=;
        b=TfIaL/KPXOCUjEBCXaR04xOCdguzoHSc25a6BK2Ok6qV8Lzuk1ivT/W1lpG63VJsSe
         uqdhtCH7Ht9/CD7bvBCYvj9G82O+9Jgw8zHVD6m2uO9z5kurgBR4y3hDcFDAK63Tfnq/
         7GuQg43HcPzakmtPC8Rc4hD+E3RG4L0Dl18cchSlNcYBEpJ9xa6de20aaC9ssuc0phK0
         XqkxBVV46SU620mWDJ8gwSU4B9dAIH+yKWJoGZ6VAUCDBu2yRIdqRZnIvuzu7psq05w2
         IuSVHPoE2uc4r+PRLvF4Ko41G+evYtajItytCraPHDW8TWjj23DBJsLzgrn6d1KEFG5Q
         KTgQ==
X-Gm-Message-State: AOAM532N/GoIsPGNPyS/xkic/gSoa8+6uOEmARalUBH+kxRtlpl5gV1R
	kOtxv7qC+0CWEctPG6KXb0U=
X-Google-Smtp-Source: ABdhPJy6NBGODlL2qFbVm78WCpoq/0vnl33flGVIgU6549lot3LxsKE9L21JFV8c0odlHFpFhjRTgw==
X-Received: by 2002:a17:90a:e7c5:b0:1e3:3cf1:6325 with SMTP id kb5-20020a17090ae7c500b001e33cf16325mr3066860pjb.178.1654775145812;
        Thu, 09 Jun 2022 04:45:45 -0700 (PDT)
Message-ID: <19ae71a4-c988-3c9e-20d6-614098376524@gmail.com>
Date: Thu, 9 Jun 2022 20:45:41 +0900
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux aarch64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v3 2/3] ui: Deliver refresh rate via QemuUIInfo
Content-Language: en-US
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
 "Michael S . Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220226115516.59830-1-akihiko.odaki@gmail.com>
 <20220226115516.59830-3-akihiko.odaki@gmail.com>
 <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
In-Reply-To: <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2022/06/09 19:28, Gerd Hoffmann wrote:
>> --- a/include/ui/console.h
>> +++ b/include/ui/console.h
>> @@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
>>       int       yoff;
>>       uint32_t  width;
>>       uint32_t  height;
>> +    uint32_t  refresh_rate;
>>   } QemuUIInfo;
>>   
>>   /* cursor data format is 32bit RGBA */
>> @@ -426,7 +427,6 @@ typedef struct GraphicHwOps {
>>       void (*gfx_update)(void *opaque);
>>       bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
>>       void (*text_update)(void *opaque, console_ch_t *text);
>> -    void (*update_interval)(void *opaque, uint64_t interval);
>>       void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
>>       void (*gl_block)(void *opaque, bool block);
>>   } GraphicHwOps;
> 
> So you are dropping update_interval, which isn't mentioned in the commit
> message at all.  Also this patch is rather big.  I'd suggest:
> 
> (1) add refresh_rate
> (2) update users one by one
> (3) finally drop update_interval when no user is left.
> 
> thanks,
>    Gerd
> 

I think 1 and 3 would have to be done at once, since refresh_rate and
update_interval would otherwise interfere with each other. Does that
make sense?

Regards,
Akihiko Odaki


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 11:51:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 11:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345135.570778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGhM-0002oG-Do; Thu, 09 Jun 2022 11:51:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345135.570778; Thu, 09 Jun 2022 11:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGhM-0002o9-BC; Thu, 09 Jun 2022 11:51:36 +0000
Received: by outflank-mailman (input) for mailman id 345135;
 Thu, 09 Jun 2022 11:51:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6kiA=WQ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1nzGhK-0002o3-93
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 11:51:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 814e152b-e7ea-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 13:51:32 +0200 (CEST)
Received: from DB9PR06CA0002.eurprd06.prod.outlook.com (2603:10a6:10:1db::7)
 by PA4PR08MB5999.eurprd08.prod.outlook.com (2603:10a6:102:f2::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 11:51:29 +0000
Received: from DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1db:cafe::84) by DB9PR06CA0002.outlook.office365.com
 (2603:10a6:10:1db::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Thu, 9 Jun 2022 11:51:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT041.mail.protection.outlook.com (100.127.142.233) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 11:51:28 +0000
Received: ("Tessian outbound ff2e13d26e0f:v120");
 Thu, 09 Jun 2022 11:51:28 +0000
Received: from fb01a8e492f6.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FEE024F7-F983-4C49-823C-27D84901509B.1; 
 Thu, 09 Jun 2022 11:51:21 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fb01a8e492f6.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 11:51:21 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB6358.eurprd08.prod.outlook.com (2603:10a6:20b:337::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 11:51:20 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 11:51:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 814e152b-e7ea-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=gx3T+tkFoYFFuf0VjAVw4b6bFnCv4V04Z/VegoicZUHo+kylfeIWsMcIewDMyp0owBpKtVGG1KnKEC0zSKT95eUycjrdRvbs+EZPjul7i6Cxpk5Ien6TMRfvXazKRbZVTZQhuhnyR6u/+49Q5eH45L/T1Fr2rQ2maW7duBWxcKbW8eGl2URSwnvChnsvDQWt6UikGGV9PreCwqp8IhDftYAOl2imk2SgtcTh79hNMbyWOW9b9inA0S9zmcL2wJGszD6t8iCTRKqJk7Phc9rZ8y0Ke1f0cuEHTU+LTREiOOoyn/unNtpbJN6TubiD+Q0uPm5gDYj3rUuok1k/8EN75g==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uMZGG/HZy6KANbMxbw+WZ985zFopS3fJBwcjGW9i9p0=;
 b=FEtyioIFBw6MXboCbOoPRHVRZ7Zpr0NwC7YOxwhpbx114fuK/MXe++WSLITx7vG07SQzUd55fB75cYXlx3UvSfW8kuo8FN1SCJjjSHqb0qFLZ4Ybi3G93kK0INFrn9itP8zIJdgNmOVJHHfS5smdJFZJh2UXnwJty2b5s+KahCA0nB9ueneTndr58aP5TljXqDK+vOTM9cwCSQBORblvWoVVI29XZwxRoFoX9XshNSH9SfpC/TJEcfhi2+tLY2vMl7+kh7YP0R3eFs3jtA7W7pbnteQiyU7WT2XB2Be/ChjQtQiFrhiwc+fcaqnibGJLLypNR0jZ5/HOmzNWLwouAQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uMZGG/HZy6KANbMxbw+WZ985zFopS3fJBwcjGW9i9p0=;
 b=7Q0HuxW/QVjWm2d5ooP9g6K1ocSgQS9UE5JLyiipN9akmJK2tndZyLOLR+S2BQChgBVzjVjLNXz+qqzpe6Kzo/1ThrqR4SclfrJiAU9LIusekVfSFIz+Td62mb50Jyw6CA80fO+DbZ/8MdrpzGSNSytCBXiwHq+mDwjf0sJODd4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1afd5b2e6379cff2
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OQI2U9CXfv5YJ8LbidbZJwyaYhTu97+NB5KHrlAHGaOyo6GHUDA/AVT3EarEiwwe+Gyw3Lt2qz4ojpViOR1765YpO01t7CEHxji0oyQ6D7aqwVk3mVupTt7P6O3Bp5DgjvXeXSMCiRttQenc+7GS8J1LC2oZGVpqXVu1CevxPVpBfLR7D7fEYa8aUqR3EL9brT/jSefw8kpZ/tEDsJYy26cn9ZSznd6npr2ue7DORN3ibsnCGjYNxR8fvobjHo2XVzxAKztFzDLhrgEN2Z7gRfrzOlkN9FHrxMxpx5C3cIym+NCEnRwOKfE2FQSZAAGicu5kUF/8Z+xYY7MLmDOdPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uMZGG/HZy6KANbMxbw+WZ985zFopS3fJBwcjGW9i9p0=;
 b=KgRtP6qQZWkKJRj1eywqJlTu8nG/jpzt9dAeJSHkMJOZwY2qIz9t4ulaU+7Kgl9+z8cYEfKFGg9NWhPekqq9lNPSWgxxmjFyALy9bfDPMZpH+Elwe9/mHi3BM5DnfIp9Kzjb0EjVCKdCumPsOWqfgqG9Wh7f/NbE1fdWtr2PBl++bLfqzXLKrZt982Pm4Ucfwq/9Stkj1VF+Z6bndwGhe8lWl1T2m9wrqvtblfDJTGKRyQYBfESodXA8/hjlH0za8zjrIa+kStt1AysdR67SG0SPbmmjQBhe0F5UhqsvEe17aEKLyIKm2uPNwoi4rRkZisIrW/kiQSoky3i/vkvLhA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uMZGG/HZy6KANbMxbw+WZ985zFopS3fJBwcjGW9i9p0=;
 b=7Q0HuxW/QVjWm2d5ooP9g6K1ocSgQS9UE5JLyiipN9akmJK2tndZyLOLR+S2BQChgBVzjVjLNXz+qqzpe6Kzo/1ThrqR4SclfrJiAU9LIusekVfSFIz+Td62mb50Jyw6CA80fO+DbZ/8MdrpzGSNSytCBXiwHq+mDwjf0sJODd4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Topic: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Index: AQHYddkD4JsESJA0gkGnWWwykzQjSK1G6E+AgAAC3wCAABeVAA==
Date: Thu, 9 Jun 2022 11:51:20 +0000
Message-ID: <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
In-Reply-To: <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 8f392215-5713-4183-546b-08da4a0e6383
x-ms-traffictypediagnostic:
	AS8PR08MB6358:EE_|DBAEUR03FT041:EE_|PA4PR08MB5999:EE_
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB59990D9FE1BB9B6A0C637B099DA79@PA4PR08MB5999.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 K7+ZdpfZTsfkPNiJERqiBgSnsfeNNxotVaFvXz3p5bup6x0mNz//puehWo/wqw9+zNBJUWFKT6gMAeodglV9kIH3pgsVha/9kNFF3JJy50DLQKUNN6GAEmkiqz7VcqVXRU7X8FCfo1zN3TuHIFBT7vp4rPMccHCihCVVHP9D/PUaGh2h61JjZTLlgRTu4HfVJWpjr3mMAObIzpCzk0R9x0wkZZ44xmltsyycexROJoMb7ahEKLrt29v8SBvVLy7WzF5ac82rYnGgRv5CT6vYJ+6aAMaSstKXqdqXO3goA4qB7+i4CfG+t4jOJeuDVdnks4LvTmEKllNXSaqVyscu0UdK6n36KwYFtxf/4LBaeSXEd9wVVxsAClO/MZ910VUuIS7AyQnQUSrFZmvzgRGx15O7qlHsV5H0w1OOUwzp+EZgGA2B6KeUvwCPfjVnkweUAO632yGAqTJaTZrQQH0X1DASwG3ysdPshSnTykdHue6NCJgbWfrfl8JNuN4EhevQnUa8Q8/WwTJjHmIV7gQkN2omxN30mG3wEt8lFHPS3s6yyZi5lHk8lB3Ux358wUCGqVvGdGmUGWQ4khe6CCXjSluw5uzoOjrAf49Q/9YG9RYLYxheG2JrwuOvul88BvDi/UKL6Sx2c/9wld86lqIj97tfLaVZfVTkzY5PYmwMYK1GImHv7b3lBgRbOLGZNRWJhAZrU94FT1GfdBKasmWS6yfh8xAd3TA1YAFXExljB6oQnrwe9gnqZGbWV8RBukz+
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(366004)(53546011)(6506007)(6512007)(26005)(83380400001)(38070700005)(5660300002)(8936002)(6486002)(508600001)(2616005)(86362001)(33656002)(66556008)(76116006)(4326008)(66446008)(64756008)(66946007)(186003)(8676002)(66476007)(71200400001)(2906002)(91956017)(36756003)(6916009)(54906003)(316002)(38100700002)(122000001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <AA40CDA3478BB044BE1B8881C4153BE0@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6358
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3dcfac88-b764-4076-1020-08da4a0e5e9b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EDbwc1tGVqBR7oGQgZZWTJiOiXMkvvi95FWGenQTT+Ve9NY95uKaof1fNYRq6PoOyDxswgM1M7gI7IS2fkweER/lGOAT2RJ7P6bPvALVyFyswhBIMePntLpF6q5a/UYgNbzRxcyNJneOuNZDy6oxUDMg4yGe5MQGssdncZPKtWKgGUth6E3FNeFnyBMmJtZXtbjeboSSwweW+TaM2O1BlF/TGe4MQtkwwlECW4ZUy5sollxiuP6AGc0Y2PnoDnT6Jp92c0IAU3sO+Y+j6rjE66YFRN+5PvvNmwOQ24aZ769VrVrAJW12MFZeI8toOivENC9hoFG3+WXNeXjmXjjRbVr6GWJEUqTvRSPY1FuKO4rQ4xgH0joBwpcXj889jcS3QP5FJGhJzV4PYepQ+l8/R8NeNwUCxSm1DnfT7sO/Sv1Oe/MIs4jBwNdTmfZEK877gtzg/n8OF+p8MKgMEVIbC8OqoJXWJWA6rSpVKhsQRpbMt16L61Npetg9nh59m214hZNyBQh0qd5LUvFxX1trilvjPCWIh2RilErrWq5Q5fx0tG6QKbhHWhpHjhC5Uz+v84bMiWC5hJkOj/BVP3iQl0TXr6+tq0k9D/zk71BpQb1A8KbHJXrSKUEis2/VbuJO9e29VYo4Xfxh+pgEUUCTtnb83bnqB6MWHG5PY5xhoxhphOqPQ9GhE/giau5ljAUi
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(36840700001)(46966006)(356005)(26005)(6512007)(186003)(4326008)(316002)(6506007)(53546011)(8936002)(54906003)(8676002)(86362001)(6862004)(47076005)(6486002)(70206006)(70586007)(107886003)(40460700003)(83380400001)(81166007)(508600001)(336012)(36860700001)(33656002)(82310400005)(5660300002)(2616005)(2906002)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 11:51:28.5739
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f392215-5713-4183-546b-08da4a0e6383
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5999

Hi,

> On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 09.06.2022 12:16, Bertrand Marquis wrote:
>>> On 1 Jun 2022, at 17:59, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> 
>>> Use "define" for the headers*_chk commands as otherwise the "#"
>>> is interpreted as a comment and make can't find the end of
>>> $(foreach,).
>>> 
>>> Adding several .PRECIOUS as without them `make` deletes the
>>> intermediate targets. This is an issue because the macro $(if_changed,)
>>> check if the target exist in order to decide whether to recreate the
>>> target.
>>> 
>>> Removing the call to `mkdir` from the commands. Those aren't needed
>>> anymore because a rune in Rules.mk creates the directory for each
>>> $(targets).
>>> 
>>> Remove "export PYTHON" as it is already exported.
>> 
>> With this change, compiling for x86 now fails with:
>> CHK     include/headers99.chk
>> make[9]: execvp: /bin/sh: Argument list too long
>> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>> 
>> Not quite sure yet why, but I wanted to signal it early as others might be impacted.
>> 
>> Arm and arm64 builds are not impacted.
> 
> Hmm, that patch has passed the smoke push gate already, so there likely is
> more to it than there being an unconditional issue. I did build-test this
> before pushing, and I've just re-tested on a 2nd system without seeing an
> issue.

I only see the problem when building with Yocto; a normal build does not
hit the issue.

Doing a verbose compilation I get this (sorry for the long lines):

 for i in include/public/vcpu.h include/public/errno.h include/public/kexec.h include/public/argo.h include/public/xen.h include/public/nmi.h include/public/xencomm.h include/public/xenoprof.h include/public/device_tree_defs.h include/public/version.h include/public/memory.h include/public/features.h include/public/sched.h include/public/xen-compat.h include/public/callback.h include/public/vm_event.h include/public/grant_table.h include/public/physdev.h include/public/tmem.h include/public/hypfs.h include/public/platform.h include/public/pmu.h include/public/elfnote.h include/public/trace.h include/public/event_channel.h include/public/io/vscsiif.h include/public/io/kbdif.h include/public/io/protocols.h include/public/io/ring.h include/public/io/displif.h include/public/io/fsif.h include/public/io/blkif.h include/public/io/console.h include/public/io/sndif.h include/public/io/fbif.h include/public/io/libxenvchan.h include/public/io/netif.h include/public/io/usbif.h include/public/io/pciif.h include/public/io/tpmif.h include/public/io/xs_wire.h include/public/io/xenbus.h include/public/io/cameraif.h include/public/hvm/pvdrivers.h include/public/hvm/e820.h include/public/hvm/hvm_xs_strings.h include/public/hvm/dm_op.h include/public/hvm/ioreq.h include/public/hvm/hvm_info_table.h include/public/hvm/hvm_vcpu.h include/public/hvm/hvm_op.h include/public/hvm/params.h; do x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -ansi -Wall -Werror -include stdint.h -S -o /dev/null $i || exit 1; echo $i; done >include/headers.chk.new; mv include/headers.chk.new include/headers.chk
|       rm -f include/headers99.chk.new;  echo "#include "\"include/public/io/9pfs.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/9pfs.h >> include/headers99.chk.new;  echo "#include "\"include/public/io/pvcalls.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/pvcalls.h >> include/headers99.chk.new; mv include/headers99.chk.new include/headers99.chk
| make[9]: execvp: /bin/sh: Argument list too long
| make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
| make[9]: *** Waiting for unfinished jobs....

So the command passed to the sub shell by make is quite long.

No idea yet why this only shows up when building in Yocto, but I will dig a bit.
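The "Argument list too long" error is execve() failing with E2BIG: the one-shell-command recipe, plus the environment, exceeds the kernel's ARG_MAX limit. A quick way to inspect the numbers involved (a sketch; `getconf ARG_MAX` is POSIX, and the figures vary per system):

```shell
#!/bin/sh
# ARG_MAX is the kernel's ceiling on the combined size of argv[] and the
# environment passed to execve(); a make recipe expanded into a single
# shell command that exceeds it fails with E2BIG, which make reports as
# "execvp: /bin/sh: Argument list too long".
limit=$(getconf ARG_MAX)
echo "ARG_MAX: $limit bytes"

# The environment counts against the same limit, so a large build
# environment (as Yocto typically exports) leaves less room for the
# command line itself; the same recipe can then fail under Yocto while
# passing in a plain build.
env_size=$(env | wc -c)
echo "environment size: $env_size bytes"
```

Comparing the two figures on the failing Yocto build versus a plain build would confirm this explanation.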

> 
> Also please remember to trim your replies.
> 

Will do.

Bertrand
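The first point in the quoted commit message, that a "#" in a plain make variable assignment starts a comment unless the command is wrapped in define/endef, can be seen with a tiny standalone makefile (a hypothetical demo, not Xen's build code; the .RECIPEPREFIX trick just avoids literal tabs and needs GNU make 3.82+):

```shell
#!/bin/sh
# In a plain assignment an unescaped '#' begins a comment, truncating the
# value; inside define/endef every line is taken verbatim, so the '#'
# survives. This is why the headers*_chk commands were moved into define.
cat > /tmp/define-demo.mk <<'EOF'
.RECIPEPREFIX = >
bad = echo "#include <stdint.h>"

define good
echo "#include <stdint.h>"
endef

all:
>@printf 'bad: [%s]\n' '$(bad)'
>@$(good)
EOF
make -s -f /tmp/define-demo.mk
```

The first recipe line shows that $(bad) was cut off at the "#", while $(good) runs the full echo command intact.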



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:02:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345146.570789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGrl-0004YP-Mz; Thu, 09 Jun 2022 12:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345146.570789; Thu, 09 Jun 2022 12:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGrl-0004YI-Jr; Thu, 09 Jun 2022 12:02:21 +0000
Received: by outflank-mailman (input) for mailman id 345146;
 Thu, 09 Jun 2022 12:02:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Nuio=WQ=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzGrk-0004YC-Ba
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:02:20 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fd6a9197-e7eb-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 14:02:10 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-507-FrWQox30OgKOk83Mxg-NUA-1; Thu, 09 Jun 2022 08:02:16 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 029381C0F690;
 Thu,  9 Jun 2022 12:02:16 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id BF44E2166B26;
 Thu,  9 Jun 2022 12:02:15 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 7C2601800094; Thu,  9 Jun 2022 14:02:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd6a9197-e7eb-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654776138;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=KvQTvntB7sQwEjW9N35sI+wAlKg3Uf86xvIHaIVGPP0=;
	b=azRx1xJe8XBkBcE2Bqo0+3eOe0WcgoLmer+2bExYRw8p/yII5A5tpX2sm1fC8GPcl0ZZNN
	LwsoU9bqpm6dRSikLxI6oPFie+ZAuMZl70eevSISfE+hnIi9vA/wS/+8oz+bVS/fn3d+sK
	8FBsk4RB7nfxPeg+QF6ZOtIkGkPVHUY=
X-MC-Unique: FrWQox30OgKOk83Mxg-NUA-1
Date: Thu, 9 Jun 2022 14:02:14 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 2/3] ui: Deliver refresh rate via QemuUIInfo
Message-ID: <20220609120214.bay3cl24oays6x6d@sirius.home.kraxel.org>
References: <20220226115516.59830-1-akihiko.odaki@gmail.com>
 <20220226115516.59830-3-akihiko.odaki@gmail.com>
 <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
 <19ae71a4-c988-3c9e-20d6-614098376524@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <19ae71a4-c988-3c9e-20d6-614098376524@gmail.com>
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

On Thu, Jun 09, 2022 at 08:45:41PM +0900, Akihiko Odaki wrote:
> On 2022/06/09 19:28, Gerd Hoffmann wrote:
> > > --- a/include/ui/console.h
> > > +++ b/include/ui/console.h
> > > @@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
> > >       int       yoff;
> > >       uint32_t  width;
> > >       uint32_t  height;
> > > +    uint32_t  refresh_rate;
> > >   } QemuUIInfo;
> > >   /* cursor data format is 32bit RGBA */
> > > @@ -426,7 +427,6 @@ typedef struct GraphicHwOps {
> > >       void (*gfx_update)(void *opaque);
> > >       bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
> > >       void (*text_update)(void *opaque, console_ch_t *text);
> > > -    void (*update_interval)(void *opaque, uint64_t interval);
> > >       void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
> > >       void (*gl_block)(void *opaque, bool block);
> > >   } GraphicHwOps;
> > 
> > So you are dropping update_interval, which isn't mentioned in the commit
> > message at all.  Also this patch is rather big.  I'd suggest:
> > 
> > (1) add refresh_rate
> > (2) update users one by one
> > (3) finally drop update_interval when no user is left.
> > 
> > thanks,
> >    Gerd
> > 
> 
> I think 1 and 3 would have to be done at once, since refresh_rate and
> update_interval would interfere with each other otherwise.

Well, between 1 and 3 both the old and the new API are active.  Shouldn't be
much of a problem because each GraphicHwOps implementation uses only one or
the other.
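As a rough sketch of why the overlap is harmless, here is a toy model (the struct and field names follow the quoted diff; everything else, including the dispatch function and the rate-to-interval conversion, is invented for illustration and is not QEMU's actual code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of the transition period while both APIs exist. */
typedef struct QemuUIInfo {
    uint32_t width;
    uint32_t height;
    uint32_t refresh_rate;      /* new field from step (1) */
} QemuUIInfo;

typedef struct GraphicHwOps {
    void (*update_interval)(void *opaque, uint64_t interval);       /* old */
    void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info); /* new */
} GraphicHwOps;

/* Each implementation fills in only one of the two callbacks, so a caller
 * can service whichever is present without the APIs interfering. */
void notify_refresh(const GraphicHwOps *ops, void *opaque,
                    uint32_t refresh_rate)
{
    if (ops->ui_info) {
        QemuUIInfo info = { .refresh_rate = refresh_rate };
        ops->ui_info(opaque, 0, &info);
    } else if (ops->update_interval && refresh_rate != 0) {
        /* hypothetical conversion: rate (Hz) -> update interval (ms) */
        ops->update_interval(opaque, 1000u / refresh_rate);
    }
}

/* Minimal new-style consumer, handy for exercising the dispatch. */
uint32_t recorded_rate;
void record_ui_info(void *opaque, uint32_t head, QemuUIInfo *info)
{
    (void)opaque;
    (void)head;
    recorded_rate = info->refresh_rate;
}
```

A converted console sets only .ui_info; an unconverted one keeps only .update_interval, so steps (1) and (3) can land as separate patches.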

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:08:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:08:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345157.570806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGxP-0005SH-F8; Thu, 09 Jun 2022 12:08:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345157.570806; Thu, 09 Jun 2022 12:08:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzGxP-0005SA-Aj; Thu, 09 Jun 2022 12:08:11 +0000
Received: by outflank-mailman (input) for mailman id 345157;
 Thu, 09 Jun 2022 12:08:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=itVH=WQ=xenbits.xen.org=julieng@srs-se1.protection.inumbo.net>)
 id 1nzGxN-0005QK-FK
 for xen-devel@lists.xen.org; Thu, 09 Jun 2022 12:08:10 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1af2a55-e7ec-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:08:08 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1nzGx8-0006VL-BZ; Thu, 09 Jun 2022 12:07:54 +0000
Received: from julieng by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1nzGx8-0006Rg-9q; Thu, 09 Jun 2022 12:07:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1af2a55-e7ec-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=hu6YemG4RnHBYczTEhjBv8Ma4TZp7GOsfW6cHHGNGys=; b=RIfhVVmEhiZ1sXn98z8cKtHJ55
	vpcAFyhW/owQJxUpMG4A9X4s7PJ6mt+aTSMTnufm77fPl8hNB+zu73GHTyUdfj29JriSnmWVe9B7R
	tudISazLcLoYwEQM+2x8S6JhHXsQ1KamkzRWKxFOCh0fd5vH1vZySstTmp6OKrNWriIQ=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 401 v2 (CVE-2022-26362) - x86 pv: Race
 condition in typeref acquisition
Message-Id: <E1nzGx8-0006Rg-9q@xenbits.xenproject.org>
Date: Thu, 09 Jun 2022 12:07:54 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2022-26362 / XSA-401
                               version 2

             x86 pv: Race condition in typeref acquisition

UPDATES IN VERSION 2
====================

Update 4.16 and 4.15 baselines.

Public release.

ISSUE DESCRIPTION
=================

Xen maintains a type reference count for pages, in addition to a regular
reference count.  This scheme is used to maintain invariants required
for Xen's safety, e.g. PV guests may not have direct writeable access to
pagetables; updates need auditing by Xen.

Unfortunately, the logic for acquiring a type reference has a race
condition, whereby a safety TLB flush is issued too early and creates a
window where the guest can re-establish the read/write mapping before
writeability is prohibited.
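The attached patches fix this by deferring the safety action until after the compare-exchange has claimed the reference.  A minimal model of that pattern (all names and the bit layout are invented for illustration; the real logic lives in Xen's _get_page_type() and additionally handles count overflow, preemption, and PGT_partial, all omitted here):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

enum {
    COUNT_MASK    = 0x0f,   /* low bits: type reference count */
    TYPE_MASK     = 0xf0,   /* high bits: current page type */
    TYPE_WRITABLE = 0x10,
    TYPE_PGTABLE  = 0x20,
};

int demo_flushes;                 /* counts calls to the "safety flush" */
void demo_flush(void) { demo_flushes++; }

/* Acquire one typeref, changing the type if the count is currently zero.
 * The key point: the safety action (in Xen, the TLB flush) runs only AFTER
 * the compare-exchange has made the new typeref globally visible, closing
 * the window described above. */
bool get_type_ref(_Atomic unsigned *type_info, unsigned wanted_type,
                  void (*safety_flush)(void))
{
    unsigned x = atomic_load(type_info);

    for (;;) {
        unsigned nx = x + 1;                      /* bump the refcount */

        if ((x & COUNT_MASK) == 0)                /* type change permitted */
            nx = (nx & ~TYPE_MASK) | wanted_type;
        else if ((x & TYPE_MASK) != wanted_type)
            return false;                         /* live refs, other type */

        if (atomic_compare_exchange_weak(type_info, &x, nx)) {
            if ((x & COUNT_MASK) == 0 && (x & TYPE_MASK) != wanted_type)
                safety_flush();                   /* deferred until here */
            return true;
        }
        /* CAS failed: x now holds the fresh value; retry. */
    }
}
```

Flushing before the CAS, as the pre-patch code effectively did, leaves a window in which another CPU can recreate a writeable mapping that survives the type change.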

IMPACT
======

Malicious x86 PV guest administrators may be able to escalate privilege
so as to control the whole system.

VULNERABLE SYSTEMS
==================

All versions of Xen are vulnerable.

Only x86 PV guests can trigger this vulnerability.

To exploit the vulnerability, there needs to be an undue delay at just
the wrong moment in _get_page_type().  The degree to which an x86 PV
guest can practically control this race condition is unknown.

MITIGATION
==========

Not running x86 PV guests will avoid the vulnerability.

CREDITS
=======

This issue was discovered by Jann Horn of Google Project Zero.

RESOLUTION
==========

Applying the appropriate attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa401/xsa401-?.patch           xen-unstable
xsa401/xsa401-4.16-?.patch      Xen 4.16.x - Xen 4.14.x
xsa401/xsa401-4.13-?.patch      Xen 4.13.x

$ sha256sum xsa401* xsa401*/*
d442bc0946eaa4c325226fd0805ab81eba6a68b68cffb9b03d9552edea86b118  xsa401.meta
074b57204f828cbd004c2d024b02a41af5d5bf3547d407af27249dca95eca13a  xsa401/xsa401-1.patch
a095b39b203d501f9c9d4974638cd4d5e2d7a18daee7a7a61e2010dea477e212  xsa401/xsa401-2.patch
99af3efc91d2dbf4fd54cc9f454b87bd76edbc85abd1a20bdad0bd22acabf466  xsa401/xsa401-4.13-1.patch
bb997094052edbbbdd0dc9f3a0454508eb737556e2449ec6a0bc649deb921e4f  xsa401/xsa401-4.13-2.patch
d336b31cb91466942e4fb8b44783bb2f0be4995076e70e0e78cdf992147cf72a  xsa401/xsa401-4.16-1.patch
b380a76d67957b602ff3c9a3faaa4d9b6666422834d6ee3ab72432a6d07ddbc6  xsa401/xsa401-4.16-2.patch
$
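The attached patches use the usual a/ and b/ path prefixes, so from the top of a checked-out tree they apply with `patch -p1`.  A self-contained sketch of the mechanics, using a stand-in file and hunk rather than the advisory's actual content:

```shell
# Demonstrate how "patch -p1" strips the a/ and b/ prefixes; the file and
# the hunk below are stand-ins, not the real XSA-401 changes.
mkdir -p demo/xen/arch/x86
printf 'old line\n' > demo/xen/arch/x86/mm.c
cat > demo/fix.patch <<'EOF'
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1 +1 @@
-old line
+new line
EOF
(cd demo && patch -p1 < fix.patch)
cat demo/xen/arch/x86/mm.c
```

As noted above, the numbered patches are prepared against the tips of the stable branches, so update the checkout before applying them.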

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmKh4lsMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZcoAH/ijbKKkQet6frag9HVfDHZtcb6N7yIxMUioVOu9t
tNhg4LdJJnnrCqXmJdXygZTYwIZufQGQOxMR3b66+6MJyz0JIL7XExqnLJs6mDsO
GFcvsxoGLYSdsBTVtGQgLpEPxwgkblKUQuwokz3K3kdxcHJmJceZitvaDdrycw8M
kRZ22qHUbFWTSOKZNe5t9t0x/4xwdyM4dYElAmuN4Ej1cQhhXG/Gbl+acZexS+cz
TFEbIS5G/j6EgaCpBSP5XCoUn2LlyswRxBllGh0kpaLrJRH4CX3E/KHBSdPMkWoP
3HyQF3o+WYvpWUGXVaAREaR+WxlsAwmQJUxpO64O4Y4IUEY=
=UGgq
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa401.meta"
Content-Disposition: attachment; filename="xsa401.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiA0MDEsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNiIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIs
CiAgICAiNC4xMyIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwK
ICAiUmVjaXBlcyI6IHsKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJm
ZTk3MTMzYjVkZWVmNThiZDE0MjJmNGQ4NzgyMTEzMWM2NmIxZDBlIiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTQwMS94c2E0MDEtNC4xMy0/LnBhdGNoIgogICAg
ICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE0Ijog
ewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAg
ICAgIlN0YWJsZVJlZiI6ICIxNzg0OGRmZWQ0N2Y1MmI0NzljNGU3ZWI0MTI2
NzFhZWM1NzU3MzI5IiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAg
ICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQwMS94c2E0MDEt
NC4xNi0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI2NDI0OWFmZWI2
M2NmN2Q3MGI0ZmFmMDJlNzZkZjVlZWQ4MjM3MWY5IiwKICAgICAgICAgICJQ
cmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAg
ICAgInhzYTQwMS94c2E0MDEtNC4xNi0/LnBhdGNoIgogICAgICAgICAgXQog
ICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjE2IjogewogICAgICAi
UmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJs
ZVJlZiI6ICI4ZTExZWM4ZmJmNmY5MzNmODg1NGY0YmM1NDIyNjY1MzMxNjkw
M2YyIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0
Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQwMS94c2E0MDEtNC4xNi0/LnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjQ5ZGQ1MmZiMTMxMWRhZGFi
MjlmNjYzNGQwYmMxZjRjMDIyYzM1N2EiLAogICAgICAgICAgIlByZXJlcXMi
OiBbXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNh
NDAxL3hzYTQwMS0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-1.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBDbGVhbiB1cCBfZ2V0X3BhZ2VfdHlwZSgp
CgpWYXJpb3VzIGZpeGVzIGZvciBjbGFyaXR5LCBhaGVhZCBvZiBtYWtpbmcg
Y29tcGxpY2F0ZWQgY2hhbmdlcy4KCiAqIFNwbGl0IHRoZSBvdmVyZmxvdyBj
aGVjayBvdXQgb2YgdGhlIGlmL2Vsc2UgY2hhaW4gZm9yIHR5cGUgaGFuZGxp
bmcsIGFzCiAgIGl0J3Mgc29tZXdoYXQgdW5yZWxhdGVkLgogKiBDb21tZW50
IHRoZSBtYWluIGlmL2Vsc2UgY2hhaW4gdG8gZXhwbGFpbiB3aGF0IGlzIGdv
aW5nIG9uLiAgQWRqdXN0IG9uZQogICBBU1NFUlQoKSBhbmQgc3RhdGUgdGhl
IGJpdCBsYXlvdXQgZm9yIHZhbGlkYXRlLWxvY2tlZCBhbmQgcGFydGlhbCBz
dGF0ZXMuCiAqIENvcnJlY3QgdGhlIGNvbW1lbnQgYWJvdXQgVExCIGZsdXNo
aW5nLCBhcyBpdCdzIGJhY2t3YXJkcy4gIFRoZSBwcm9ibGVtCiAgIGNhc2Ug
aXMgd2hlbiB3cml0ZWFibGUgbWFwcGluZ3MgYXJlIHJldGFpbmVkIHRvIGEg
cGFnZSBiZWNvbWluZyByZWFkLW9ubHksCiAgIGFzIGl0IGFsbG93cyB0aGUg
Z3Vlc3QgdG8gYnlwYXNzIFhlbidzIHNhZmV0eSBjaGVja3MgZm9yIHVwZGF0
ZXMuCiAqIFJlZHVjZSB0aGUgc2NvcGUgb2YgJ3knLiAgSXQgaXMgYW4gYXJ0
ZWZhY3Qgb2YgdGhlIGNtcHhjaGcgbG9vcCBhbmQgbm90CiAgIHZhbGlkIGZv
ciB1c2UgYnkgc3Vic2VxdWVudCBsb2dpYy4gIFN3aXRjaCB0byB1c2luZyBB
Q0NFU1NfT05DRSgpIHRvIHRyZWF0CiAgIGFsbCByZWFkcyBhcyBleHBsaWNp
dGx5IHZvbGF0aWxlLiAgVGhlIG9ubHkgdGhpbmcgcHJldmVudGluZyB0aGUg
dmFsaWRhdGVkCiAgIHdhaXQtbG9vcCBiZWluZyBpbmZpbml0ZSBpcyB0aGUg
Y29tcGlsZXIgYmFycmllciBoaWRkZW4gaW4gY3B1X3JlbGF4KCkuCiAqIFJl
cGxhY2Ugb25lIHBhZ2VfZ2V0X293bmVyKHBhZ2UpIHdpdGggdGhlIGFscmVh
ZHktY2FsY3VsYXRlZCAnZCcgYWxyZWFkeSBpbgogICBzY29wZS4KCk5vIGZ1
bmN0aW9uYWwgY2hhbmdlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDEgLyBD
VkUtMjAyMi0yNjM2Mi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEdl
b3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAZXUuY2l0cml4LmNvbT4KUmV2
aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2
aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCmluZGV4IDA0ZDVlYzcwNWQ4ZS4uNjQzNDkwMGFhNzY3
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJj
aC94ODYvbW0uYwpAQCAtMjkzNSwxNiArMjkzNSwxNyBAQCBzdGF0aWMgaW50
IF9wdXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVuc2ln
bmVkIGludCBmbGFncywKIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3Ry
dWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICBib29sIHByZWVtcHRpYmxlKQogewot
ICAgIHVuc2lnbmVkIGxvbmcgbngsIHgsIHkgPSBwYWdlLT51LmludXNlLnR5
cGVfaW5mbzsKKyAgICB1bnNpZ25lZCBsb25nIG54LCB4OwogICAgIGludCBy
YyA9IDA7CiAKICAgICBBU1NFUlQoISh0eXBlICYgfihQR1RfdHlwZV9tYXNr
IHwgUEdUX3BhZV94ZW5fbDIpKSk7CiAgICAgQVNTRVJUKCFpbl9pcnEoKSk7
CiAKLSAgICBmb3IgKCA7IDsgKQorICAgIGZvciAoIHVuc2lnbmVkIGxvbmcg
eSA9IEFDQ0VTU19PTkNFKHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKTsgOyAp
CiAgICAgewogICAgICAgICB4ICA9IHk7CiAgICAgICAgIG54ID0geCArIDE7
CisKICAgICAgICAgaWYgKCB1bmxpa2VseSgobnggJiBQR1RfY291bnRfbWFz
aykgPT0gMCkgKQogICAgICAgICB7CiAgICAgICAgICAgICBnZHByaW50ayhY
RU5MT0dfV0FSTklORywKQEAgLTI5NTIsOCArMjk1MywxNSBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAgICAgICAgIG1mbl94
KHBhZ2VfdG9fbWZuKHBhZ2UpKSk7CiAgICAgICAgICAgICByZXR1cm4gLUVJ
TlZBTDsKICAgICAgICAgfQotICAgICAgICBlbHNlIGlmICggdW5saWtlbHko
KHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQorCisgICAgICAgIGlmICgg
dW5saWtlbHkoKHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQogICAgICAg
ICB7CisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICogVHlwZXJlZiAw
IC0+IDEuCisgICAgICAgICAgICAgKgorICAgICAgICAgICAgICogVHlwZSBj
aGFuZ2VzIGFyZSBwZXJtaXR0ZWQgd2hlbiB0aGUgdHlwZXJlZiBpcyAwLiAg
SWYgdGhlIHR5cGUKKyAgICAgICAgICAgICAqIGFjdHVhbGx5IGNoYW5nZXMs
IHRoZSBwYWdlIG5lZWRzIHJlLXZhbGlkYXRpbmcuCisgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIHN0cnVjdCBkb21haW4gKmQgPSBwYWdlX2dldF9v
d25lcihwYWdlKTsKIAogICAgICAgICAgICAgaWYgKCBkICYmIHNoYWRvd19t
b2RlX2VuYWJsZWQoZCkgKQpAQCAtMjk2NCw4ICsyOTcyLDggQEAgc3RhdGlj
IGludCBfZ2V0X3BhZ2VfdHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1
bnNpZ25lZCBsb25nIHR5cGUsCiAgICAgICAgICAgICB7CiAgICAgICAgICAg
ICAgICAgLyoKICAgICAgICAgICAgICAgICAgKiBPbiB0eXBlIGNoYW5nZSB3
ZSBjaGVjayB0byBmbHVzaCBzdGFsZSBUTEIgZW50cmllcy4gSXQgaXMKLSAg
ICAgICAgICAgICAgICAgKiB2aXRhbCB0aGF0IG5vIG90aGVyIENQVXMgYXJl
IGxlZnQgd2l0aCBtYXBwaW5ncyBvZiBhIGZyYW1lCi0gICAgICAgICAgICAg
ICAgICogd2hpY2ggaXMgYWJvdXQgdG8gYmVjb21lIHdyaXRlYWJsZSB0byB0
aGUgZ3Vlc3QuCisgICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBubyBv
dGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdzCisg
ICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRlbmRp
bmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KICAgICAgICAgICAgICAg
ICAgKi8KICAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0aGlz
X2NwdShzY3JhdGNoX2NwdW1hc2spOwogCkBAIC0yOTc3LDcgKzI5ODUsNyBA
QCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8g
KnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKIAogICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCiAgICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChwYWdlX2dldF9vd25lcihwYWdlKSkgfHwKKyAg
ICAgICAgICAgICAgICAgICAgICghc2hhZG93X21vZGVfZW5hYmxlZChkKSB8
fAogICAgICAgICAgICAgICAgICAgICAgICgobnggJiBQR1RfdHlwZV9tYXNr
KSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkpICkKICAgICAgICAgICAgICAgICB7
CiAgICAgICAgICAgICAgICAgICAgIHBlcmZjX2luY3IobmVlZF9mbHVzaF90
bGJfZmx1c2gpOwpAQCAtMzAwOCw3ICszMDE2LDE0IEBAIHN0YXRpYyBpbnQg
X2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWdu
ZWQgbG9uZyB0eXBlLAogICAgICAgICB9CiAgICAgICAgIGVsc2UgaWYgKCB1
bmxpa2VseSgoeCAmIChQR1RfdHlwZV9tYXNrfFBHVF9wYWVfeGVuX2wyKSkg
IT0gdHlwZSkgKQogICAgICAgICB7Ci0gICAgICAgICAgICAvKiBEb24ndCBs
b2cgZmFpbHVyZSBpZiBpdCBjb3VsZCBiZSBhIHJlY3Vyc2l2ZS1tYXBwaW5n
IGF0dGVtcHQuICovCisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICog
ZWxzZSwgd2UncmUgdHJ5aW5nIHRvIHRha2UgYSBuZXcgcmVmZXJlbmNlLCBv
ZiB0aGUgd3JvbmcgdHlwZS4KKyAgICAgICAgICAgICAqCisgICAgICAgICAg
ICAgKiBUaGlzIChiZWluZyBhYmxlIHRvIHByb2hpYml0IHVzZSBvZiB0aGUg
d3JvbmcgdHlwZSkgaXMgd2hhdCB0aGUKKyAgICAgICAgICAgICAqIHR5cGVy
ZWYgc3lzdGVtIGV4aXN0cyBmb3IsIGJ1dCBza2lwIHByaW50aW5nIHRoZSBm
YWlsdXJlIGlmIGl0CisgICAgICAgICAgICAgKiBsb29rcyBsaWtlIGEgcmVj
dXJzaXZlIG1hcHBpbmcsIGFzIHN1YnNlcXVlbnQgbG9naWMgbWlnaHQKKyAg
ICAgICAgICAgICAqIHVsdGltYXRlbHkgcGVybWl0IHRoZSBhdHRlbXB0Lgor
ICAgICAgICAgICAgICovCiAgICAgICAgICAgICBpZiAoICgoeCAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlKSAmJgogICAgICAgICAg
ICAgICAgICAodHlwZSA9PSBQR1RfbDFfcGFnZV90YWJsZSkgKQogICAgICAg
ICAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtMzAyNywxOCArMzA0Miw0
NiBAQCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2lu
Zm8gKnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgfQogICAg
ICAgICBlbHNlIGlmICggdW5saWtlbHkoISh4ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICAgICAgeworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAq
IGVsc2UsIHRoZSBjb3VudCBpcyBub24temVybywgYW5kIHdlJ3JlIGdyYWJi
aW5nIHRoZSByaWdodCB0eXBlOworICAgICAgICAgICAgICogYnV0IHRoZSBw
YWdlIGhhc24ndCBiZWVuIHZhbGlkYXRlZCB5ZXQuCisgICAgICAgICAgICAg
KgorICAgICAgICAgICAgICogVGhlIHBhZ2UgaXMgaW4gb25lIG9mIHR3byBz
dGF0ZXMgKGRlcGVuZGluZyBvbiBQR1RfcGFydGlhbCksCisgICAgICAgICAg
ICAgKiBhbmQgc2hvdWxkIGhhdmUgZXhhY3RseSBvbmUgcmVmZXJlbmNlLgor
ICAgICAgICAgICAgICovCisgICAgICAgICAgICBBU1NFUlQoKHggJiAoUEdU
X3R5cGVfbWFzayB8IFBHVF9jb3VudF9tYXNrKSkgPT0gKHR5cGUgfCAxKSk7
CisKICAgICAgICAgICAgIGlmICggISh4ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgICAgIHsKLSAgICAgICAgICAgICAgICAvKiBTb21lb25lIGVsc2Ug
aXMgdXBkYXRpbmcgdmFsaWRhdGlvbiBvZiB0aGlzIHBhZ2UuIFdhaXQuLi4g
Ki8KKyAgICAgICAgICAgICAgICAvKgorICAgICAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJ2YWxpZGF0ZSBsb2NrZWQi
IHN0YXRlCisgICAgICAgICAgICAgICAgICogKGkuZS4gUEdUX1t0eXBlXSB8
IDEpIHdoaWNoIG1lYW5zIHRoYXQgYSBjb25jdXJyZW50IGNhbGxlcgorICAg
ICAgICAgICAgICAgICAqIG9mIF9nZXRfcGFnZV90eXBlKCkgaXMgaW4gdGhl
IG1pZGRsZSBvZiB2YWxpZGF0aW9uLgorICAgICAgICAgICAgICAgICAqCisg
ICAgICAgICAgICAgICAgICogU3BpbiB3YWl0aW5nIGZvciB0aGUgY29uY3Vy
cmVudCB1c2VyIHRvIGNvbXBsZXRlIChwYXJ0aWFsCisgICAgICAgICAgICAg
ICAgICogb3IgZnVsbHkgdmFsaWRhdGVkKSwgdGhlbiByZXN0YXJ0IG91ciBh
dHRlbXB0IHRvIGFjcXVpcmUgYQorICAgICAgICAgICAgICAgICAqIHR5cGUg
cmVmZXJlbmNlLgorICAgICAgICAgICAgICAgICAqLwogICAgICAgICAgICAg
ICAgIGRvIHsKICAgICAgICAgICAgICAgICAgICAgaWYgKCBwcmVlbXB0aWJs
ZSAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKICAgICAgICAgICAg
ICAgICAgICAgICAgIHJldHVybiAtRUlOVFI7CiAgICAgICAgICAgICAgICAg
ICAgIGNwdV9yZWxheCgpOwotICAgICAgICAgICAgICAgIH0gd2hpbGUgKCAo
eSA9IHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKSA9PSB4ICk7CisgICAgICAg
ICAgICAgICAgfSB3aGlsZSAoICh5ID0gQUNDRVNTX09OQ0UocGFnZS0+dS5p
bnVzZS50eXBlX2luZm8pKSA9PSB4ICk7CiAgICAgICAgICAgICAgICAgY29u
dGludWU7CiAgICAgICAgICAgICB9Ci0gICAgICAgICAgICAvKiBUeXBlIHJl
ZiBjb3VudCB3YXMgbGVmdCBhdCAxIHdoZW4gUEdUX3BhcnRpYWwgZ290IHNl
dC4gKi8KLSAgICAgICAgICAgIEFTU0VSVCgoeCAmIFBHVF9jb3VudF9tYXNr
KSA9PSAxKTsKKworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJwYXJ0aWFsIiBzdGF0ZQor
ICAgICAgICAgICAgICogKGkuZS4sIFBHVF9bdHlwZV0gfCBQR1RfcGFydGlh
bCB8IDEpLgorICAgICAgICAgICAgICoKKyAgICAgICAgICAgICAqIFJhdGhl
ciB0aGFuIGJ1bXBpbmcgdGhlIHR5cGUgY291bnQsIHdlIG5lZWQgdG8gdHJ5
IHRvIGdyYWIgdGhlCisgICAgICAgICAgICAgKiB2YWxpZGF0aW9uIGxvY2s7
IGlmIHdlIHN1Y2NlZWQsIHdlIG5lZWQgdG8gdmFsaWRhdGUgdGhlIHBhZ2Us
CisgICAgICAgICAgICAgKiB0aGVuIGRyb3AgdGhlIGdlbmVyYWwgcmVmIGFz
c29jaWF0ZWQgd2l0aCB0aGUgUEdUX3BhcnRpYWwgYml0LgorICAgICAgICAg
ICAgICoKKyAgICAgICAgICAgICAqIFdlIGdyYWIgdGhlIHZhbGlkYXRpb24g
bG9jayBieSBzZXR0aW5nIG54IHRvIChQR1RfW3R5cGVdIHwgMSkKKyAgICAg
ICAgICAgICAqIChpLmUuLCBub24temVybyB0eXBlIGNvdW50LCBuZWl0aGVy
IFBHVF92YWxpZGF0ZWQgbm9yCisgICAgICAgICAgICAgKiBQR1RfcGFydGlh
bCBzZXQpLgorICAgICAgICAgICAgICovCiAgICAgICAgICAgICBueCA9IHgg
JiB+UEdUX3BhcnRpYWw7CiAgICAgICAgIH0KIApAQCAtMzA4Nyw2ICszMTMw
LDEzIEBAIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAgIH0KIAogIG91
dDoKKyAgICAvKgorICAgICAqIERpZCB3ZSBkcm9wIHRoZSBQR1RfcGFydGlh
bCBiaXQgd2hlbiBhY3F1aXJpbmcgdGhlIHR5cGVyZWY/ICBJZiBzbywKKyAg
ICAgKiBkcm9wIHRoZSBnZW5lcmFsIHJlZmVyZW5jZSB0aGF0IHdlbnQgYWxv
bmcgd2l0aCBpdC4KKyAgICAgKgorICAgICAqIE4uQi4gdmFsaWRhdGVfcGFn
ZSgpIG1heSBoYXZlIGhhdmUgcmUtc2V0IFBHVF9wYXJ0aWFsLCBub3QgcmVm
bGVjdGVkIGluCisgICAgICogbngsIGJ1dCB3aWxsIGhhdmUgdGFrZW4gYW4g
ZXh0cmEgcmVmIHdoZW4gZG9pbmcgc28uCisgICAgICovCiAgICAgaWYgKCAo
eCAmIFBHVF9wYXJ0aWFsKSAmJiAhKG54ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgcHV0X3BhZ2UocGFnZSk7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-2.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBGaXggQUJBQyBjbXB4Y2hnKCkgcmFjZSBp
biBfZ2V0X3BhZ2VfdHlwZSgpCgpfZ2V0X3BhZ2VfdHlwZSgpIHN1ZmZlcnMg
ZnJvbSBhIHJhY2UgY29uZGl0aW9uIHdoZXJlIGl0IGluY29ycmVjdGx5IGFz
c3VtZXMKdGhhdCBiZWNhdXNlICd4JyB3YXMgcmVhZCBhbmQgYSBzdWJzZXF1
ZW50IGEgY21weGNoZygpIHN1Y2NlZWRzLCB0aGUgdHlwZQpjYW5ub3QgaGF2
ZSBjaGFuZ2VkIGluLWJldHdlZW4uICBDb25zaWRlcjoKCkNQVSBBOgogIDEu
IENyZWF0ZXMgYW4gTDJlIHJlZmVyZW5jaW5nIHBnCiAgICAgYC0+IF9nZXRf
cGFnZV90eXBlKHBnLCBQR1RfbDFfcGFnZV90YWJsZSksIHNlZXMgY291bnQg
MCwgdHlwZSBQR1Rfd3JpdGFibGVfcGFnZQogIDIuICAgICBJc3N1ZXMgZmx1
c2hfdGxiX21hc2soKQpDUFUgQjoKICAzLiBDcmVhdGVzIGEgd3JpdGVhYmxl
IG1hcHBpbmcgb2YgcGcKICAgICBgLT4gX2dldF9wYWdlX3R5cGUocGcsIFBH
VF93cml0YWJsZV9wYWdlKSwgY291bnQgaW5jcmVhc2VzIHRvIDEKICA0LiBX
cml0ZXMgaW50byBuZXcgbWFwcGluZywgY3JlYXRpbmcgYSBUTEIgZW50cnkg
Zm9yIHBnCiAgNS4gUmVtb3ZlcyB0aGUgd3JpdGVhYmxlIG1hcHBpbmcgb2Yg
cGcKICAgICBgLT4gX3B1dF9wYWdlX3R5cGUocGcpLCBjb3VudCBnb2VzIGJh
Y2sgZG93biB0byAwCkNQVSBBOgogIDcuICAgICBJc3N1ZXMgY21weGNoZygp
LCBzZXR0aW5nIGNvdW50IDEsIHR5cGUgUEdUX2wxX3BhZ2VfdGFibGUKCkNQ
VSBCIG5vdyBoYXMgYSB3cml0ZWFibGUgbWFwcGluZyB0byBwZywgd2hpY2gg
WGVuIGJlbGlldmVzIGlzIGEgcGFnZXRhYmxlIGFuZApzdWl0YWJseSBwcm90
ZWN0ZWQgKGkuZS4gcmVhZC1vbmx5KS4gIFRoZSBUTEIgZmx1c2ggaW4gc3Rl
cCAyIG11c3QgYmUgZGVmZXJyZWQKdW50aWwgYWZ0ZXIgdGhlIGd1ZXN0IGlz
IHByb2hpYml0ZWQgZnJvbSBjcmVhdGluZyBuZXcgd3JpdGVhYmxlIG1hcHBp
bmdzLAp3aGljaCBpcyBhZnRlciBzdGVwIDcuCgpEZWZlciBhbGwgc2FmZXR5
IGFjdGlvbnMgdW50aWwgYWZ0ZXIgdGhlIGNtcHhjaGcoKSBoYXMgc3VjY2Vz
c2Z1bGx5IHRha2VuIHRoZQppbnRlbmRlZCB0eXBlcmVmLCBiZWNhdXNlIHRo
YXQgaXMgd2hhdCBwcmV2ZW50cyBjb25jdXJyZW50IHVzZXJzIGZyb20gdXNp
bmcKdGhlIG9sZCB0eXBlLgoKQWxzbyByZW1vdmUgdGhlIGVhcmx5IHZhbGlk
YXRpb24gZm9yIHdyaXRlYWJsZSBhbmQgc2hhcmVkIHBhZ2VzLiAgVGhpcyBy
ZW1vdmVzCnJhY2UgY29uZGl0aW9ucyB3aGVyZSBvbmUgaGFsZiBvZiBhIHBh
cmFsbGVsIG1hcHBpbmcgYXR0ZW1wdCBjYW4gcmV0dXJuCnN1Y2Nlc3NmdWxs
eSBiZWZvcmU6CiAqIFRoZSBJT01NVSBwYWdldGFibGVzIGFyZSBpbiBzeW5j
IHdpdGggdGhlIG5ldyBwYWdlIHR5cGUKICogV3JpdGVhYmxlIG1hcHBpbmdz
IHRvIHNoYXJlZCBwYWdlcyBoYXZlIGJlZW4gdG9ybiBkb3duCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTQwMSAvIENWRS0yMDIyLTI2MzYyLgoKUmVwb3J0ZWQt
Ynk6IEphbm4gSG9ybiA8amFubmhAZ29vZ2xlLmNvbT4KU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hl
bi9hcmNoL3g4Ni9tbS5jCmluZGV4IDY0MzQ5MDBhYTc2Ny4uMzRiYjlkZGRh
YjhkIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4v
YXJjaC94ODYvbW0uYwpAQCAtMjk2Miw1NiArMjk2MiwxMiBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAqIFR5cGUgY2hhbmdl
cyBhcmUgcGVybWl0dGVkIHdoZW4gdGhlIHR5cGVyZWYgaXMgMC4gIElmIHRo
ZSB0eXBlCiAgICAgICAgICAgICAgKiBhY3R1YWxseSBjaGFuZ2VzLCB0aGUg
cGFnZSBuZWVkcyByZS12YWxpZGF0aW5nLgogICAgICAgICAgICAgICovCi0g
ICAgICAgICAgICBzdHJ1Y3QgZG9tYWluICpkID0gcGFnZV9nZXRfb3duZXIo
cGFnZSk7Ci0KLSAgICAgICAgICAgIGlmICggZCAmJiBzaGFkb3dfbW9kZV9l
bmFibGVkKGQpICkKLSAgICAgICAgICAgICAgIHNoYWRvd19wcmVwYXJlX3Bh
Z2VfdHlwZV9jaGFuZ2UoZCwgcGFnZSwgdHlwZSk7CiAKICAgICAgICAgICAg
IEFTU0VSVCghKHggJiBQR1RfcGFlX3hlbl9sMikpOwogICAgICAgICAgICAg
aWYgKCAoeCAmIFBHVF90eXBlX21hc2spICE9IHR5cGUgKQogICAgICAgICAg
ICAgewotICAgICAgICAgICAgICAgIC8qCi0gICAgICAgICAgICAgICAgICog
T24gdHlwZSBjaGFuZ2Ugd2UgY2hlY2sgdG8gZmx1c2ggc3RhbGUgVExCIGVu
dHJpZXMuIEl0IGlzCi0gICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBu
byBvdGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdz
Ci0gICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRl
bmRpbmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KLSAgICAgICAgICAg
ICAgICAgKi8KLSAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0
aGlzX2NwdShzY3JhdGNoX2NwdW1hc2spOwotCi0gICAgICAgICAgICAgICAg
QlVHX09OKGluX2lycSgpKTsKLSAgICAgICAgICAgICAgICBjcHVtYXNrX2Nv
cHkobWFzaywgZC0+ZGlydHlfY3B1bWFzayk7Ci0KLSAgICAgICAgICAgICAg
ICAvKiBEb24ndCBmbHVzaCBpZiB0aGUgdGltZXN0YW1wIGlzIG9sZCBlbm91
Z2ggKi8KLSAgICAgICAgICAgICAgICB0bGJmbHVzaF9maWx0ZXIobWFzaywg
cGFnZS0+dGxiZmx1c2hfdGltZXN0YW1wKTsKLQotICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCi0gICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChkKSB8fAotICAgICAgICAgICAgICAgICAgICAg
ICgobnggJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkp
ICkKLSAgICAgICAgICAgICAgICB7Ci0gICAgICAgICAgICAgICAgICAgIHBl
cmZjX2luY3IobmVlZF9mbHVzaF90bGJfZmx1c2gpOwotICAgICAgICAgICAg
ICAgICAgICAvKgotICAgICAgICAgICAgICAgICAgICAgKiBJZiBwYWdlIHdh
cyBhIHBhZ2UgdGFibGUgbWFrZSBzdXJlIHRoZSBmbHVzaCBpcwotICAgICAg
ICAgICAgICAgICAgICAgKiBwZXJmb3JtZWQgdXNpbmcgYW4gSVBJIGluIG9y
ZGVyIHRvIGF2b2lkIGNoYW5naW5nIHRoZQotICAgICAgICAgICAgICAgICAg
ICAgKiB0eXBlIG9mIGEgcGFnZSB0YWJsZSBwYWdlIHVuZGVyIHRoZSBmZWV0
IG9mCi0gICAgICAgICAgICAgICAgICAgICAqIHNwdXJpb3VzX3BhZ2VfZmF1
bHQoKS4KLSAgICAgICAgICAgICAgICAgICAgICovCi0gICAgICAgICAgICAg
ICAgICAgIGZsdXNoX21hc2sobWFzaywKLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAoeCAmIFBHVF90eXBlX21hc2spICYmCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgKHggJiBQR1RfdHlwZV9tYXNrKSA8PSBQ
R1Rfcm9vdF9wYWdlX3RhYmxlCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgPyBGTFVTSF9UTEIgfCBGTFVTSF9OT19BU1NJU1QKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICA6IEZMVVNIX1RMQik7Ci0gICAgICAg
ICAgICAgICAgfQotCi0gICAgICAgICAgICAgICAgLyogV2UgbG9zZSBleGlz
dGluZyB0eXBlIGFuZCB2YWxpZGl0eS4gKi8KICAgICAgICAgICAgICAgICBu
eCAmPSB+KFBHVF90eXBlX21hc2sgfCBQR1RfdmFsaWRhdGVkKTsKICAgICAg
ICAgICAgICAgICBueCB8PSB0eXBlOwotCi0gICAgICAgICAgICAgICAgLyoK
LSAgICAgICAgICAgICAgICAgKiBObyBzcGVjaWFsIHZhbGlkYXRpb24gbmVl
ZGVkIGZvciB3cml0YWJsZSBwYWdlcy4KLSAgICAgICAgICAgICAgICAgKiBQ
YWdlIHRhYmxlcyBhbmQgR0RUL0xEVCBuZWVkIHRvIGJlIHNjYW5uZWQgZm9y
IHZhbGlkaXR5LgotICAgICAgICAgICAgICAgICAqLwotICAgICAgICAgICAg
ICAgIGlmICggdHlwZSA9PSBQR1Rfd3JpdGFibGVfcGFnZSB8fCB0eXBlID09
IFBHVF9zaGFyZWRfcGFnZSApCi0gICAgICAgICAgICAgICAgICAgIG54IHw9
IFBHVF92YWxpZGF0ZWQ7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0KICAg
ICAgICAgZWxzZSBpZiAoIHVubGlrZWx5KCh4ICYgKFBHVF90eXBlX21hc2t8
UEdUX3BhZV94ZW5fbDIpKSAhPSB0eXBlKSApCkBAIC0zMDkyLDYgKzMwNDgs
NTYgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2VfdHlwZShzdHJ1Y3QgcGFnZV9p
bmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5cGUsCiAgICAgICAgICAgICBy
ZXR1cm4gLUVJTlRSOwogICAgIH0KIAorICAgIC8qCisgICAgICogT25lIHR5
cGVyZWYgaGFzIGJlZW4gdGFrZW4gYW5kIGlzIG5vdyBnbG9iYWxseSB2aXNp
YmxlLgorICAgICAqCisgICAgICogVGhlIHBhZ2UgaXMgZWl0aGVyIGluIHRo
ZSAidmFsaWRhdGUgbG9ja2VkIiBzdGF0ZSAoUEdUX1t0eXBlXSB8IDEpIG9y
CisgICAgICogZnVsbHkgdmFsaWRhdGVkIChQR1RfW3R5cGVdIHwgUEdUX3Zh
bGlkYXRlZCB8ID4wKS4KKyAgICAgKi8KKworICAgIGlmICggdW5saWtlbHko
KHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQorICAgIHsKKyAgICAgICAg
c3RydWN0IGRvbWFpbiAqZCA9IHBhZ2VfZ2V0X293bmVyKHBhZ2UpOworCisg
ICAgICAgIGlmICggZCAmJiBzaGFkb3dfbW9kZV9lbmFibGVkKGQpICkKKyAg
ICAgICAgICAgIHNoYWRvd19wcmVwYXJlX3BhZ2VfdHlwZV9jaGFuZ2UoZCwg
cGFnZSwgdHlwZSk7CisKKyAgICAgICAgaWYgKCAoeCAmIFBHVF90eXBlX21h
c2spICE9IHR5cGUgKQorICAgICAgICB7CisgICAgICAgICAgICAvKgorICAg
ICAgICAgICAgICogT24gdHlwZSBjaGFuZ2Ugd2UgY2hlY2sgdG8gZmx1c2gg
c3RhbGUgVExCIGVudHJpZXMuIEl0IGlzCisgICAgICAgICAgICAgKiB2aXRh
bCB0aGF0IG5vIG90aGVyIENQVXMgYXJlIGxlZnQgd2l0aCB3cml0ZWFibGUg
bWFwcGluZ3MKKyAgICAgICAgICAgICAqIHRvIGEgZnJhbWUgd2hpY2ggaXMg
aW50ZW5kaW5nIHRvIGJlY29tZSBwZ3RhYmxlL3NlZ2Rlc2MuCisgICAgICAg
ICAgICAgKi8KKyAgICAgICAgICAgIGNwdW1hc2tfdCAqbWFzayA9IHRoaXNf
Y3B1KHNjcmF0Y2hfY3B1bWFzayk7CisKKyAgICAgICAgICAgIEJVR19PTihp
bl9pcnEoKSk7CisgICAgICAgICAgICBjcHVtYXNrX2NvcHkobWFzaywgZC0+
ZGlydHlfY3B1bWFzayk7CisKKyAgICAgICAgICAgIC8qIERvbid0IGZsdXNo
IGlmIHRoZSB0aW1lc3RhbXAgaXMgb2xkIGVub3VnaCAqLworICAgICAgICAg
ICAgdGxiZmx1c2hfZmlsdGVyKG1hc2ssIHBhZ2UtPnRsYmZsdXNoX3RpbWVz
dGFtcCk7CisKKyAgICAgICAgICAgIGlmICggdW5saWtlbHkoIWNwdW1hc2tf
ZW1wdHkobWFzaykpICYmCisgICAgICAgICAgICAgICAgIC8qIFNoYWRvdyBt
b2RlOiB0cmFjayBvbmx5IHdyaXRhYmxlIHBhZ2VzLiAqLworICAgICAgICAg
ICAgICAgICAoIXNoYWRvd19tb2RlX2VuYWJsZWQoZCkgfHwKKyAgICAgICAg
ICAgICAgICAgICgobnggJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rfd3JpdGFi
bGVfcGFnZSkpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBw
ZXJmY19pbmNyKG5lZWRfZmx1c2hfdGxiX2ZsdXNoKTsKKyAgICAgICAgICAg
ICAgICAvKgorICAgICAgICAgICAgICAgICAqIElmIHBhZ2Ugd2FzIGEgcGFn
ZSB0YWJsZSBtYWtlIHN1cmUgdGhlIGZsdXNoIGlzCisgICAgICAgICAgICAg
ICAgICogcGVyZm9ybWVkIHVzaW5nIGFuIElQSSBpbiBvcmRlciB0byBhdm9p
ZCBjaGFuZ2luZyB0aGUKKyAgICAgICAgICAgICAgICAgKiB0eXBlIG9mIGEg
cGFnZSB0YWJsZSBwYWdlIHVuZGVyIHRoZSBmZWV0IG9mCisgICAgICAgICAg
ICAgICAgICogc3B1cmlvdXNfcGFnZV9mYXVsdCgpLgorICAgICAgICAgICAg
ICAgICAqLworICAgICAgICAgICAgICAgIGZsdXNoX21hc2sobWFzaywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICh4ICYgUEdUX3R5cGVfbWFzaykg
JiYKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICh4ICYgUEdUX3R5cGVf
bWFzaykgPD0gUEdUX3Jvb3RfcGFnZV90YWJsZQorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgPyBGTFVTSF9UTEIgfCBGTFVTSF9OT19BU1NJU1QKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIDogRkxVU0hfVExCKTsKKyAgICAg
ICAgICAgIH0KKyAgICAgICAgfQorICAgIH0KKwogICAgIGlmICggdW5saWtl
bHkoKCh4ICYgUEdUX3R5cGVfbWFzaykgPT0gUEdUX3dyaXRhYmxlX3BhZ2Up
ICE9CiAgICAgICAgICAgICAgICAgICAodHlwZSA9PSBQR1Rfd3JpdGFibGVf
cGFnZSkpICkKICAgICB7CkBAIC0zMTIwLDEzICszMTI2LDI1IEBAIHN0YXRp
YyBpbnQgX2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwg
dW5zaWduZWQgbG9uZyB0eXBlLAogCiAgICAgaWYgKCB1bmxpa2VseSghKG54
ICYgUEdUX3ZhbGlkYXRlZCkpICkKICAgICB7Ci0gICAgICAgIGlmICggISh4
ICYgUEdUX3BhcnRpYWwpICkKKyAgICAgICAgLyoKKyAgICAgICAgICogTm8g
c3BlY2lhbCB2YWxpZGF0aW9uIG5lZWRlZCBmb3Igd3JpdGFibGUgb3Igc2hh
cmVkIHBhZ2VzLiAgUGFnZQorICAgICAgICAgKiB0YWJsZXMgYW5kIEdEVC9M
RFQgbmVlZCB0byBoYXZlIHRoZWlyIGNvbnRlbnRzIGF1ZGl0ZWQuCisgICAg
ICAgICAqCisgICAgICAgICAqIHBlciB2YWxpZGF0ZV9wYWdlKCksIG5vbi1h
dG9taWMgdXBkYXRlcyBhcmUgZmluZSBoZXJlLgorICAgICAgICAgKi8KKyAg
ICAgICAgaWYgKCB0eXBlID09IFBHVF93cml0YWJsZV9wYWdlIHx8IHR5cGUg
PT0gUEdUX3NoYXJlZF9wYWdlICkKKyAgICAgICAgICAgIHBhZ2UtPnUuaW51
c2UudHlwZV9pbmZvIHw9IFBHVF92YWxpZGF0ZWQ7CisgICAgICAgIGVsc2UK
ICAgICAgICAgewotICAgICAgICAgICAgcGFnZS0+bnJfdmFsaWRhdGVkX3B0
ZXMgPSAwOwotICAgICAgICAgICAgcGFnZS0+cGFydGlhbF9mbGFncyA9IDA7
Ci0gICAgICAgICAgICBwYWdlLT5saW5lYXJfcHRfY291bnQgPSAwOworICAg
ICAgICAgICAgaWYgKCAhKHggJiBQR1RfcGFydGlhbCkgKQorICAgICAgICAg
ICAgeworICAgICAgICAgICAgICAgIHBhZ2UtPm5yX3ZhbGlkYXRlZF9wdGVz
ID0gMDsKKyAgICAgICAgICAgICAgICBwYWdlLT5wYXJ0aWFsX2ZsYWdzID0g
MDsKKyAgICAgICAgICAgICAgICBwYWdlLT5saW5lYXJfcHRfY291bnQgPSAw
OworICAgICAgICAgICAgfQorCisgICAgICAgICAgICByYyA9IHZhbGlkYXRl
X3BhZ2UocGFnZSwgdHlwZSwgcHJlZW1wdGlibGUpOwogICAgICAgICB9Ci0g
ICAgICAgIHJjID0gdmFsaWRhdGVfcGFnZShwYWdlLCB0eXBlLCBwcmVlbXB0
aWJsZSk7CiAgICAgfQogCiAgb3V0Ogo=

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-4.13-1.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBDbGVhbiB1cCBfZ2V0X3BhZ2VfdHlwZSgp
CgpWYXJpb3VzIGZpeGVzIGZvciBjbGFyaXR5LCBhaGVhZCBvZiBtYWtpbmcg
Y29tcGxpY2F0ZWQgY2hhbmdlcy4KCiAqIFNwbGl0IHRoZSBvdmVyZmxvdyBj
aGVjayBvdXQgb2YgdGhlIGlmL2Vsc2UgY2hhaW4gZm9yIHR5cGUgaGFuZGxp
bmcsIGFzCiAgIGl0J3Mgc29tZXdoYXQgdW5yZWxhdGVkLgogKiBDb21tZW50
IHRoZSBtYWluIGlmL2Vsc2UgY2hhaW4gdG8gZXhwbGFpbiB3aGF0IGlzIGdv
aW5nIG9uLiAgQWRqdXN0IG9uZQogICBBU1NFUlQoKSBhbmQgc3RhdGUgdGhl
IGJpdCBsYXlvdXQgZm9yIHZhbGlkYXRlLWxvY2tlZCBhbmQgcGFydGlhbCBz
dGF0ZXMuCiAqIENvcnJlY3QgdGhlIGNvbW1lbnQgYWJvdXQgVExCIGZsdXNo
aW5nLCBhcyBpdCdzIGJhY2t3YXJkcy4gIFRoZSBwcm9ibGVtCiAgIGNhc2Ug
aXMgd2hlbiB3cml0ZWFibGUgbWFwcGluZ3MgYXJlIHJldGFpbmVkIHRvIGEg
cGFnZSBiZWNvbWluZyByZWFkLW9ubHksCiAgIGFzIGl0IGFsbG93cyB0aGUg
Z3Vlc3QgdG8gYnlwYXNzIFhlbidzIHNhZmV0eSBjaGVja3MgZm9yIHVwZGF0
ZXMuCiAqIFJlZHVjZSB0aGUgc2NvcGUgb2YgJ3knLiAgSXQgaXMgYW4gYXJ0
ZWZhY3Qgb2YgdGhlIGNtcHhjaGcgbG9vcCBhbmQgbm90CiAgIHZhbGlkIGZv
ciB1c2UgYnkgc3Vic2VxdWVudCBsb2dpYy4gIFN3aXRjaCB0byB1c2luZyBB
Q0NFU1NfT05DRSgpIHRvIHRyZWF0CiAgIGFsbCByZWFkcyBhcyBleHBsaWNp
dGx5IHZvbGF0aWxlLiAgVGhlIG9ubHkgdGhpbmcgcHJldmVudGluZyB0aGUg
dmFsaWRhdGVkCiAgIHdhaXQtbG9vcCBiZWluZyBpbmZpbml0ZSBpcyB0aGUg
Y29tcGlsZXIgYmFycmllciBoaWRkZW4gaW4gY3B1X3JlbGF4KCkuCiAqIFJl
cGxhY2Ugb25lIHBhZ2VfZ2V0X293bmVyKHBhZ2UpIHdpdGggdGhlIGFscmVh
ZHktY2FsY3VsYXRlZCAnZCcgYWxyZWFkeSBpbgogICBzY29wZS4KCk5vIGZ1
bmN0aW9uYWwgY2hhbmdlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDEgLyBD
VkUtMjAyMi0yNjM2Mi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEdl
b3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAZXUuY2l0cml4LmNvbT4KUmV2
aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2
aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCmluZGV4IGFkODliZmI0NWZmZi4uOTY3MzhiMDI3ODI3
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJj
aC94ODYvbW0uYwpAQCAtMjk3OCwxNiArMjk3OCwxNyBAQCBzdGF0aWMgaW50
IF9wdXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVuc2ln
bmVkIGludCBmbGFncywKIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3Ry
dWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICBib29sIHByZWVtcHRpYmxlKQogewot
ICAgIHVuc2lnbmVkIGxvbmcgbngsIHgsIHkgPSBwYWdlLT51LmludXNlLnR5
cGVfaW5mbzsKKyAgICB1bnNpZ25lZCBsb25nIG54LCB4OwogICAgIGludCBy
YyA9IDA7CiAKICAgICBBU1NFUlQoISh0eXBlICYgfihQR1RfdHlwZV9tYXNr
IHwgUEdUX3BhZV94ZW5fbDIpKSk7CiAgICAgQVNTRVJUKCFpbl9pcnEoKSk7
CiAKLSAgICBmb3IgKCA7IDsgKQorICAgIGZvciAoIHVuc2lnbmVkIGxvbmcg
eSA9IEFDQ0VTU19PTkNFKHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKTsgOyAp
CiAgICAgewogICAgICAgICB4ICA9IHk7CiAgICAgICAgIG54ID0geCArIDE7
CisKICAgICAgICAgaWYgKCB1bmxpa2VseSgobnggJiBQR1RfY291bnRfbWFz
aykgPT0gMCkgKQogICAgICAgICB7CiAgICAgICAgICAgICBnZHByaW50ayhY
RU5MT0dfV0FSTklORywKQEAgLTI5OTUsOCArMjk5NiwxNSBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAgICAgICAgIG1mbl94
KHBhZ2VfdG9fbWZuKHBhZ2UpKSk7CiAgICAgICAgICAgICByZXR1cm4gLUVJ
TlZBTDsKICAgICAgICAgfQotICAgICAgICBlbHNlIGlmICggdW5saWtlbHko
KHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQorCisgICAgICAgIGlmICgg
dW5saWtlbHkoKHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQogICAgICAg
ICB7CisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICogVHlwZXJlZiAw
IC0+IDEuCisgICAgICAgICAgICAgKgorICAgICAgICAgICAgICogVHlwZSBj
aGFuZ2VzIGFyZSBwZXJtaXR0ZWQgd2hlbiB0aGUgdHlwZXJlZiBpcyAwLiAg
SWYgdGhlIHR5cGUKKyAgICAgICAgICAgICAqIGFjdHVhbGx5IGNoYW5nZXMs
IHRoZSBwYWdlIG5lZWRzIHJlLXZhbGlkYXRpbmcuCisgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIHN0cnVjdCBkb21haW4gKmQgPSBwYWdlX2dldF9v
d25lcihwYWdlKTsKIAogICAgICAgICAgICAgaWYgKCBkICYmIHNoYWRvd19t
b2RlX2VuYWJsZWQoZCkgKQpAQCAtMzAwNyw4ICszMDE1LDggQEAgc3RhdGlj
IGludCBfZ2V0X3BhZ2VfdHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1
bnNpZ25lZCBsb25nIHR5cGUsCiAgICAgICAgICAgICB7CiAgICAgICAgICAg
ICAgICAgLyoKICAgICAgICAgICAgICAgICAgKiBPbiB0eXBlIGNoYW5nZSB3
ZSBjaGVjayB0byBmbHVzaCBzdGFsZSBUTEIgZW50cmllcy4gSXQgaXMKLSAg
ICAgICAgICAgICAgICAgKiB2aXRhbCB0aGF0IG5vIG90aGVyIENQVXMgYXJl
IGxlZnQgd2l0aCBtYXBwaW5ncyBvZiBhIGZyYW1lCi0gICAgICAgICAgICAg
ICAgICogd2hpY2ggaXMgYWJvdXQgdG8gYmVjb21lIHdyaXRlYWJsZSB0byB0
aGUgZ3Vlc3QuCisgICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBubyBv
dGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdzCisg
ICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRlbmRp
bmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KICAgICAgICAgICAgICAg
ICAgKi8KICAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0aGlz
X2NwdShzY3JhdGNoX2NwdW1hc2spOwogCkBAIC0zMDIwLDcgKzMwMjgsNyBA
QCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8g
KnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKIAogICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCiAgICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChwYWdlX2dldF9vd25lcihwYWdlKSkgfHwKKyAg
ICAgICAgICAgICAgICAgICAgICghc2hhZG93X21vZGVfZW5hYmxlZChkKSB8
fAogICAgICAgICAgICAgICAgICAgICAgICgobnggJiBQR1RfdHlwZV9tYXNr
KSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkpICkKICAgICAgICAgICAgICAgICB7
CiAgICAgICAgICAgICAgICAgICAgIHBlcmZjX2luY3IobmVlZF9mbHVzaF90
bGJfZmx1c2gpOwpAQCAtMzA0MSw3ICszMDQ5LDE0IEBAIHN0YXRpYyBpbnQg
X2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWdu
ZWQgbG9uZyB0eXBlLAogICAgICAgICB9CiAgICAgICAgIGVsc2UgaWYgKCB1
bmxpa2VseSgoeCAmIChQR1RfdHlwZV9tYXNrfFBHVF9wYWVfeGVuX2wyKSkg
IT0gdHlwZSkgKQogICAgICAgICB7Ci0gICAgICAgICAgICAvKiBEb24ndCBs
b2cgZmFpbHVyZSBpZiBpdCBjb3VsZCBiZSBhIHJlY3Vyc2l2ZS1tYXBwaW5n
IGF0dGVtcHQuICovCisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICog
ZWxzZSwgd2UncmUgdHJ5aW5nIHRvIHRha2UgYSBuZXcgcmVmZXJlbmNlLCBv
ZiB0aGUgd3JvbmcgdHlwZS4KKyAgICAgICAgICAgICAqCisgICAgICAgICAg
ICAgKiBUaGlzIChiZWluZyBhYmxlIHRvIHByb2hpYml0IHVzZSBvZiB0aGUg
d3JvbmcgdHlwZSkgaXMgd2hhdCB0aGUKKyAgICAgICAgICAgICAqIHR5cGVy
ZWYgc3lzdGVtIGV4aXN0cyBmb3IsIGJ1dCBza2lwIHByaW50aW5nIHRoZSBm
YWlsdXJlIGlmIGl0CisgICAgICAgICAgICAgKiBsb29rcyBsaWtlIGEgcmVj
dXJzaXZlIG1hcHBpbmcsIGFzIHN1YnNlcXVlbnQgbG9naWMgbWlnaHQKKyAg
ICAgICAgICAgICAqIHVsdGltYXRlbHkgcGVybWl0IHRoZSBhdHRlbXB0Lgor
ICAgICAgICAgICAgICovCiAgICAgICAgICAgICBpZiAoICgoeCAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlKSAmJgogICAgICAgICAg
ICAgICAgICAodHlwZSA9PSBQR1RfbDFfcGFnZV90YWJsZSkgKQogICAgICAg
ICAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtMzA2MCwxOCArMzA3NSw0
NiBAQCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2lu
Zm8gKnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgfQogICAg
ICAgICBlbHNlIGlmICggdW5saWtlbHkoISh4ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICAgICAgeworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAq
IGVsc2UsIHRoZSBjb3VudCBpcyBub24temVybywgYW5kIHdlJ3JlIGdyYWJi
aW5nIHRoZSByaWdodCB0eXBlOworICAgICAgICAgICAgICogYnV0IHRoZSBw
YWdlIGhhc24ndCBiZWVuIHZhbGlkYXRlZCB5ZXQuCisgICAgICAgICAgICAg
KgorICAgICAgICAgICAgICogVGhlIHBhZ2UgaXMgaW4gb25lIG9mIHR3byBz
dGF0ZXMgKGRlcGVuZGluZyBvbiBQR1RfcGFydGlhbCksCisgICAgICAgICAg
ICAgKiBhbmQgc2hvdWxkIGhhdmUgZXhhY3RseSBvbmUgcmVmZXJlbmNlLgor
ICAgICAgICAgICAgICovCisgICAgICAgICAgICBBU1NFUlQoKHggJiAoUEdU
X3R5cGVfbWFzayB8IFBHVF9jb3VudF9tYXNrKSkgPT0gKHR5cGUgfCAxKSk7
CisKICAgICAgICAgICAgIGlmICggISh4ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgICAgIHsKLSAgICAgICAgICAgICAgICAvKiBTb21lb25lIGVsc2Ug
aXMgdXBkYXRpbmcgdmFsaWRhdGlvbiBvZiB0aGlzIHBhZ2UuIFdhaXQuLi4g
Ki8KKyAgICAgICAgICAgICAgICAvKgorICAgICAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJ2YWxpZGF0ZSBsb2NrZWQi
IHN0YXRlCisgICAgICAgICAgICAgICAgICogKGkuZS4gUEdUX1t0eXBlXSB8
IDEpIHdoaWNoIG1lYW5zIHRoYXQgYSBjb25jdXJyZW50IGNhbGxlcgorICAg
ICAgICAgICAgICAgICAqIG9mIF9nZXRfcGFnZV90eXBlKCkgaXMgaW4gdGhl
IG1pZGRsZSBvZiB2YWxpZGF0aW9uLgorICAgICAgICAgICAgICAgICAqCisg
ICAgICAgICAgICAgICAgICogU3BpbiB3YWl0aW5nIGZvciB0aGUgY29uY3Vy
cmVudCB1c2VyIHRvIGNvbXBsZXRlIChwYXJ0aWFsCisgICAgICAgICAgICAg
ICAgICogb3IgZnVsbHkgdmFsaWRhdGVkKSwgdGhlbiByZXN0YXJ0IG91ciBh
dHRlbXB0IHRvIGFjcXVpcmUgYQorICAgICAgICAgICAgICAgICAqIHR5cGUg
cmVmZXJlbmNlLgorICAgICAgICAgICAgICAgICAqLwogICAgICAgICAgICAg
ICAgIGRvIHsKICAgICAgICAgICAgICAgICAgICAgaWYgKCBwcmVlbXB0aWJs
ZSAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKICAgICAgICAgICAg
ICAgICAgICAgICAgIHJldHVybiAtRUlOVFI7CiAgICAgICAgICAgICAgICAg
ICAgIGNwdV9yZWxheCgpOwotICAgICAgICAgICAgICAgIH0gd2hpbGUgKCAo
eSA9IHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKSA9PSB4ICk7CisgICAgICAg
ICAgICAgICAgfSB3aGlsZSAoICh5ID0gQUNDRVNTX09OQ0UocGFnZS0+dS5p
bnVzZS50eXBlX2luZm8pKSA9PSB4ICk7CiAgICAgICAgICAgICAgICAgY29u
dGludWU7CiAgICAgICAgICAgICB9Ci0gICAgICAgICAgICAvKiBUeXBlIHJl
ZiBjb3VudCB3YXMgbGVmdCBhdCAxIHdoZW4gUEdUX3BhcnRpYWwgZ290IHNl
dC4gKi8KLSAgICAgICAgICAgIEFTU0VSVCgoeCAmIFBHVF9jb3VudF9tYXNr
KSA9PSAxKTsKKworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJwYXJ0aWFsIiBzdGF0ZQor
ICAgICAgICAgICAgICogKGkuZS4sIFBHVF9bdHlwZV0gfCBQR1RfcGFydGlh
bCB8IDEpLgorICAgICAgICAgICAgICoKKyAgICAgICAgICAgICAqIFJhdGhl
ciB0aGFuIGJ1bXBpbmcgdGhlIHR5cGUgY291bnQsIHdlIG5lZWQgdG8gdHJ5
IHRvIGdyYWIgdGhlCisgICAgICAgICAgICAgKiB2YWxpZGF0aW9uIGxvY2s7
IGlmIHdlIHN1Y2NlZWQsIHdlIG5lZWQgdG8gdmFsaWRhdGUgdGhlIHBhZ2Us
CisgICAgICAgICAgICAgKiB0aGVuIGRyb3AgdGhlIGdlbmVyYWwgcmVmIGFz
c29jaWF0ZWQgd2l0aCB0aGUgUEdUX3BhcnRpYWwgYml0LgorICAgICAgICAg
ICAgICoKKyAgICAgICAgICAgICAqIFdlIGdyYWIgdGhlIHZhbGlkYXRpb24g
bG9jayBieSBzZXR0aW5nIG54IHRvIChQR1RfW3R5cGVdIHwgMSkKKyAgICAg
ICAgICAgICAqIChpLmUuLCBub24temVybyB0eXBlIGNvdW50LCBuZWl0aGVy
IFBHVF92YWxpZGF0ZWQgbm9yCisgICAgICAgICAgICAgKiBQR1RfcGFydGlh
bCBzZXQpLgorICAgICAgICAgICAgICovCiAgICAgICAgICAgICBueCA9IHgg
JiB+UEdUX3BhcnRpYWw7CiAgICAgICAgIH0KIApAQCAtMzExNiw2ICszMTU5
LDEzIEBAIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAgIH0KIAogIG91
dDoKKyAgICAvKgorICAgICAqIERpZCB3ZSBkcm9wIHRoZSBQR1RfcGFydGlh
bCBiaXQgd2hlbiBhY3F1aXJpbmcgdGhlIHR5cGVyZWY/ICBJZiBzbywKKyAg
ICAgKiBkcm9wIHRoZSBnZW5lcmFsIHJlZmVyZW5jZSB0aGF0IHdlbnQgYWxv
bmcgd2l0aCBpdC4KKyAgICAgKgorICAgICAqIE4uQi4gdmFsaWRhdGVfcGFn
ZSgpIG1heSBoYXZlIGhhdmUgcmUtc2V0IFBHVF9wYXJ0aWFsLCBub3QgcmVm
bGVjdGVkIGluCisgICAgICogbngsIGJ1dCB3aWxsIGhhdmUgdGFrZW4gYW4g
ZXh0cmEgcmVmIHdoZW4gZG9pbmcgc28uCisgICAgICovCiAgICAgaWYgKCAo
eCAmIFBHVF9wYXJ0aWFsKSAmJiAhKG54ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgcHV0X3BhZ2UocGFnZSk7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-4.13-2.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBGaXggQUJBQyBjbXB4Y2hnKCkgcmFjZSBp
biBfZ2V0X3BhZ2VfdHlwZSgpCgpfZ2V0X3BhZ2VfdHlwZSgpIHN1ZmZlcnMg
ZnJvbSBhIHJhY2UgY29uZGl0aW9uIHdoZXJlIGl0IGluY29ycmVjdGx5IGFz
c3VtZXMKdGhhdCBiZWNhdXNlICd4JyB3YXMgcmVhZCBhbmQgYSBzdWJzZXF1
ZW50IGEgY21weGNoZygpIHN1Y2NlZWRzLCB0aGUgdHlwZQpjYW5ub3QgaGF2
ZSBjaGFuZ2VkIGluLWJldHdlZW4uICBDb25zaWRlcjoKCkNQVSBBOgogIDEu
IENyZWF0ZXMgYW4gTDJlIHJlZmVyZW5jaW5nIHBnCiAgICAgYC0+IF9nZXRf
cGFnZV90eXBlKHBnLCBQR1RfbDFfcGFnZV90YWJsZSksIHNlZXMgY291bnQg
MCwgdHlwZSBQR1Rfd3JpdGFibGVfcGFnZQogIDIuICAgICBJc3N1ZXMgZmx1
c2hfdGxiX21hc2soKQpDUFUgQjoKICAzLiBDcmVhdGVzIGEgd3JpdGVhYmxl
IG1hcHBpbmcgb2YgcGcKICAgICBgLT4gX2dldF9wYWdlX3R5cGUocGcsIFBH
VF93cml0YWJsZV9wYWdlKSwgY291bnQgaW5jcmVhc2VzIHRvIDEKICA0LiBX
cml0ZXMgaW50byBuZXcgbWFwcGluZywgY3JlYXRpbmcgYSBUTEIgZW50cnkg
Zm9yIHBnCiAgNS4gUmVtb3ZlcyB0aGUgd3JpdGVhYmxlIG1hcHBpbmcgb2Yg
cGcKICAgICBgLT4gX3B1dF9wYWdlX3R5cGUocGcpLCBjb3VudCBnb2VzIGJh
Y2sgZG93biB0byAwCkNQVSBBOgogIDcuICAgICBJc3N1ZXMgY21weGNoZygp
LCBzZXR0aW5nIGNvdW50IDEsIHR5cGUgUEdUX2wxX3BhZ2VfdGFibGUKCkNQ
VSBCIG5vdyBoYXMgYSB3cml0ZWFibGUgbWFwcGluZyB0byBwZywgd2hpY2gg
WGVuIGJlbGlldmVzIGlzIGEgcGFnZXRhYmxlIGFuZApzdWl0YWJseSBwcm90
ZWN0ZWQgKGkuZS4gcmVhZC1vbmx5KS4gIFRoZSBUTEIgZmx1c2ggaW4gc3Rl
cCAyIG11c3QgYmUgZGVmZXJyZWQKdW50aWwgYWZ0ZXIgdGhlIGd1ZXN0IGlz
IHByb2hpYml0ZWQgZnJvbSBjcmVhdGluZyBuZXcgd3JpdGVhYmxlIG1hcHBp
bmdzLAp3aGljaCBpcyBhZnRlciBzdGVwIDcuCgpEZWZlciBhbGwgc2FmZXR5
IGFjdGlvbnMgdW50aWwgYWZ0ZXIgdGhlIGNtcHhjaGcoKSBoYXMgc3VjY2Vz
c2Z1bGx5IHRha2VuIHRoZQppbnRlbmRlZCB0eXBlcmVmLCBiZWNhdXNlIHRo
YXQgaXMgd2hhdCBwcmV2ZW50cyBjb25jdXJyZW50IHVzZXJzIGZyb20gdXNp
bmcKdGhlIG9sZCB0eXBlLgoKQWxzbyByZW1vdmUgdGhlIGVhcmx5IHZhbGlk
YXRpb24gZm9yIHdyaXRlYWJsZSBhbmQgc2hhcmVkIHBhZ2VzLiAgVGhpcyBy
ZW1vdmVzCnJhY2UgY29uZGl0aW9ucyB3aGVyZSBvbmUgaGFsZiBvZiBhIHBh
cmFsbGVsIG1hcHBpbmcgYXR0ZW1wdCBjYW4gcmV0dXJuCnN1Y2Nlc3NmdWxs
eSBiZWZvcmU6CiAqIFRoZSBJT01NVSBwYWdldGFibGVzIGFyZSBpbiBzeW5j
IHdpdGggdGhlIG5ldyBwYWdlIHR5cGUKICogV3JpdGVhYmxlIG1hcHBpbmdz
IHRvIHNoYXJlZCBwYWdlcyBoYXZlIGJlZW4gdG9ybiBkb3duCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTQwMSAvIENWRS0yMDIyLTI2MzYyLgoKUmVwb3J0ZWQt
Ynk6IEphbm4gSG9ybiA8amFubmhAZ29vZ2xlLmNvbT4KU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hl
bi9hcmNoL3g4Ni9tbS5jCmluZGV4IDk2NzM4YjAyNzgyNy4uZWU5MWM3ZmU1
ZjY5IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4v
YXJjaC94ODYvbW0uYwpAQCAtMzAwNSw0NiArMzAwNSwxMiBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAqIFR5cGUgY2hhbmdl
cyBhcmUgcGVybWl0dGVkIHdoZW4gdGhlIHR5cGVyZWYgaXMgMC4gIElmIHRo
ZSB0eXBlCiAgICAgICAgICAgICAgKiBhY3R1YWxseSBjaGFuZ2VzLCB0aGUg
cGFnZSBuZWVkcyByZS12YWxpZGF0aW5nLgogICAgICAgICAgICAgICovCi0g
ICAgICAgICAgICBzdHJ1Y3QgZG9tYWluICpkID0gcGFnZV9nZXRfb3duZXIo
cGFnZSk7Ci0KLSAgICAgICAgICAgIGlmICggZCAmJiBzaGFkb3dfbW9kZV9l
bmFibGVkKGQpICkKLSAgICAgICAgICAgICAgIHNoYWRvd19wcmVwYXJlX3Bh
Z2VfdHlwZV9jaGFuZ2UoZCwgcGFnZSwgdHlwZSk7CiAKICAgICAgICAgICAg
IEFTU0VSVCghKHggJiBQR1RfcGFlX3hlbl9sMikpOwogICAgICAgICAgICAg
aWYgKCAoeCAmIFBHVF90eXBlX21hc2spICE9IHR5cGUgKQogICAgICAgICAg
ICAgewotICAgICAgICAgICAgICAgIC8qCi0gICAgICAgICAgICAgICAgICog
T24gdHlwZSBjaGFuZ2Ugd2UgY2hlY2sgdG8gZmx1c2ggc3RhbGUgVExCIGVu
dHJpZXMuIEl0IGlzCi0gICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBu
byBvdGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdz
Ci0gICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRl
bmRpbmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KLSAgICAgICAgICAg
ICAgICAgKi8KLSAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0
aGlzX2NwdShzY3JhdGNoX2NwdW1hc2spOwotCi0gICAgICAgICAgICAgICAg
QlVHX09OKGluX2lycSgpKTsKLSAgICAgICAgICAgICAgICBjcHVtYXNrX2Nv
cHkobWFzaywgZC0+ZGlydHlfY3B1bWFzayk7Ci0KLSAgICAgICAgICAgICAg
ICAvKiBEb24ndCBmbHVzaCBpZiB0aGUgdGltZXN0YW1wIGlzIG9sZCBlbm91
Z2ggKi8KLSAgICAgICAgICAgICAgICB0bGJmbHVzaF9maWx0ZXIobWFzaywg
cGFnZS0+dGxiZmx1c2hfdGltZXN0YW1wKTsKLQotICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCi0gICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChkKSB8fAotICAgICAgICAgICAgICAgICAgICAg
ICgobnggJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkp
ICkKLSAgICAgICAgICAgICAgICB7Ci0gICAgICAgICAgICAgICAgICAgIHBl
cmZjX2luY3IobmVlZF9mbHVzaF90bGJfZmx1c2gpOwotICAgICAgICAgICAg
ICAgICAgICBmbHVzaF90bGJfbWFzayhtYXNrKTsKLSAgICAgICAgICAgICAg
ICB9Ci0KLSAgICAgICAgICAgICAgICAvKiBXZSBsb3NlIGV4aXN0aW5nIHR5
cGUgYW5kIHZhbGlkaXR5LiAqLwogICAgICAgICAgICAgICAgIG54ICY9IH4o
UEdUX3R5cGVfbWFzayB8IFBHVF92YWxpZGF0ZWQpOwogICAgICAgICAgICAg
ICAgIG54IHw9IHR5cGU7Ci0KLSAgICAgICAgICAgICAgICAvKgotICAgICAg
ICAgICAgICAgICAqIE5vIHNwZWNpYWwgdmFsaWRhdGlvbiBuZWVkZWQgZm9y
IHdyaXRhYmxlIHBhZ2VzLgotICAgICAgICAgICAgICAgICAqIFBhZ2UgdGFi
bGVzIGFuZCBHRFQvTERUIG5lZWQgdG8gYmUgc2Nhbm5lZCBmb3IgdmFsaWRp
dHkuCi0gICAgICAgICAgICAgICAgICovCi0gICAgICAgICAgICAgICAgaWYg
KCB0eXBlID09IFBHVF93cml0YWJsZV9wYWdlIHx8IHR5cGUgPT0gUEdUX3No
YXJlZF9wYWdlICkKLSAgICAgICAgICAgICAgICAgICAgbnggfD0gUEdUX3Zh
bGlkYXRlZDsKICAgICAgICAgICAgIH0KICAgICAgICAgfQogICAgICAgICBl
bHNlIGlmICggdW5saWtlbHkoKHggJiAoUEdUX3R5cGVfbWFza3xQR1RfcGFl
X3hlbl9sMikpICE9IHR5cGUpICkKQEAgLTMxMjUsNiArMzA5MSw0NiBAQCBz
dGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBh
Z2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgIHJldHVybiAt
RUlOVFI7CiAgICAgfQogCisgICAgLyoKKyAgICAgKiBPbmUgdHlwZXJlZiBo
YXMgYmVlbiB0YWtlbiBhbmQgaXMgbm93IGdsb2JhbGx5IHZpc2libGUuCisg
ICAgICoKKyAgICAgKiBUaGUgcGFnZSBpcyBlaXRoZXIgaW4gdGhlICJ2YWxp
ZGF0ZSBsb2NrZWQiIHN0YXRlIChQR1RfW3R5cGVdIHwgMSkgb3IKKyAgICAg
KiBmdWxseSB2YWxpZGF0ZWQgKFBHVF9bdHlwZV0gfCBQR1RfdmFsaWRhdGVk
IHwgPjApLgorICAgICAqLworCisgICAgaWYgKCB1bmxpa2VseSgoeCAmIFBH
VF9jb3VudF9tYXNrKSA9PSAwKSApCisgICAgeworICAgICAgICBzdHJ1Y3Qg
ZG9tYWluICpkID0gcGFnZV9nZXRfb3duZXIocGFnZSk7CisKKyAgICAgICAg
aWYgKCBkICYmIHNoYWRvd19tb2RlX2VuYWJsZWQoZCkgKQorICAgICAgICAg
ICAgc2hhZG93X3ByZXBhcmVfcGFnZV90eXBlX2NoYW5nZShkLCBwYWdlLCB0
eXBlKTsKKworICAgICAgICBpZiAoICh4ICYgUEdUX3R5cGVfbWFzaykgIT0g
dHlwZSApCisgICAgICAgIHsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAg
ICAgKiBPbiB0eXBlIGNoYW5nZSB3ZSBjaGVjayB0byBmbHVzaCBzdGFsZSBU
TEIgZW50cmllcy4gSXQgaXMKKyAgICAgICAgICAgICAqIHZpdGFsIHRoYXQg
bm8gb3RoZXIgQ1BVcyBhcmUgbGVmdCB3aXRoIHdyaXRlYWJsZSBtYXBwaW5n
cworICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRlbmRp
bmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KKyAgICAgICAgICAgICAq
LworICAgICAgICAgICAgY3B1bWFza190ICptYXNrID0gdGhpc19jcHUoc2Ny
YXRjaF9jcHVtYXNrKTsKKworICAgICAgICAgICAgQlVHX09OKGluX2lycSgp
KTsKKyAgICAgICAgICAgIGNwdW1hc2tfY29weShtYXNrLCBkLT5kaXJ0eV9j
cHVtYXNrKTsKKworICAgICAgICAgICAgLyogRG9uJ3QgZmx1c2ggaWYgdGhl
IHRpbWVzdGFtcCBpcyBvbGQgZW5vdWdoICovCisgICAgICAgICAgICB0bGJm
bHVzaF9maWx0ZXIobWFzaywgcGFnZS0+dGxiZmx1c2hfdGltZXN0YW1wKTsK
KworICAgICAgICAgICAgaWYgKCB1bmxpa2VseSghY3B1bWFza19lbXB0eSht
YXNrKSkgJiYKKyAgICAgICAgICAgICAgICAgLyogU2hhZG93IG1vZGU6IHRy
YWNrIG9ubHkgd3JpdGFibGUgcGFnZXMuICovCisgICAgICAgICAgICAgICAg
ICghc2hhZG93X21vZGVfZW5hYmxlZChkKSB8fAorICAgICAgICAgICAgICAg
ICAgKChueCAmIFBHVF90eXBlX21hc2spID09IFBHVF93cml0YWJsZV9wYWdl
KSkgKQorICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHBlcmZjX2lu
Y3IobmVlZF9mbHVzaF90bGJfZmx1c2gpOworICAgICAgICAgICAgICAgIGZs
dXNoX3RsYl9tYXNrKG1hc2spOworICAgICAgICAgICAgfQorICAgICAgICB9
CisgICAgfQorCiAgICAgaWYgKCB1bmxpa2VseSgoeCAmIFBHVF90eXBlX21h
c2spICE9IHR5cGUpICkKICAgICB7CiAgICAgICAgIC8qIFNwZWNpYWwgcGFn
ZXMgc2hvdWxkIG5vdCBiZSBhY2Nlc3NpYmxlIGZyb20gZGV2aWNlcy4gKi8K
QEAgLTMxNDksMTMgKzMxNTUsMjUgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2Vf
dHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5
cGUsCiAKICAgICBpZiAoIHVubGlrZWx5KCEobnggJiBQR1RfdmFsaWRhdGVk
KSkgKQogICAgIHsKLSAgICAgICAgaWYgKCAhKHggJiBQR1RfcGFydGlhbCkg
KQorICAgICAgICAvKgorICAgICAgICAgKiBObyBzcGVjaWFsIHZhbGlkYXRp
b24gbmVlZGVkIGZvciB3cml0YWJsZSBvciBzaGFyZWQgcGFnZXMuICBQYWdl
CisgICAgICAgICAqIHRhYmxlcyBhbmQgR0RUL0xEVCBuZWVkIHRvIGhhdmUg
dGhlaXIgY29udGVudHMgYXVkaXRlZC4KKyAgICAgICAgICoKKyAgICAgICAg
ICogcGVyIHZhbGlkYXRlX3BhZ2UoKSwgbm9uLWF0b21pYyB1cGRhdGVzIGFy
ZSBmaW5lIGhlcmUuCisgICAgICAgICAqLworICAgICAgICBpZiAoIHR5cGUg
PT0gUEdUX3dyaXRhYmxlX3BhZ2UgfHwgdHlwZSA9PSBQR1Rfc2hhcmVkX3Bh
Z2UgKQorICAgICAgICAgICAgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gfD0g
UEdUX3ZhbGlkYXRlZDsKKyAgICAgICAgZWxzZQogICAgICAgICB7Ci0gICAg
ICAgICAgICBwYWdlLT5ucl92YWxpZGF0ZWRfcHRlcyA9IDA7Ci0gICAgICAg
ICAgICBwYWdlLT5wYXJ0aWFsX2ZsYWdzID0gMDsKLSAgICAgICAgICAgIHBh
Z2UtPmxpbmVhcl9wdF9jb3VudCA9IDA7CisgICAgICAgICAgICBpZiAoICEo
eCAmIFBHVF9wYXJ0aWFsKSApCisgICAgICAgICAgICB7CisgICAgICAgICAg
ICAgICAgcGFnZS0+bnJfdmFsaWRhdGVkX3B0ZXMgPSAwOworICAgICAgICAg
ICAgICAgIHBhZ2UtPnBhcnRpYWxfZmxhZ3MgPSAwOworICAgICAgICAgICAg
ICAgIHBhZ2UtPmxpbmVhcl9wdF9jb3VudCA9IDA7CisgICAgICAgICAgICB9
CisKKyAgICAgICAgICAgIHJjID0gYWxsb2NfcGFnZV90eXBlKHBhZ2UsIHR5
cGUsIHByZWVtcHRpYmxlKTsKICAgICAgICAgfQotICAgICAgICByYyA9IGFs
bG9jX3BhZ2VfdHlwZShwYWdlLCB0eXBlLCBwcmVlbXB0aWJsZSk7CiAgICAg
fQogCiAgb3V0Ogo=

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-4.16-1.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-4.16-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBDbGVhbiB1cCBfZ2V0X3BhZ2VfdHlwZSgp
CgpWYXJpb3VzIGZpeGVzIGZvciBjbGFyaXR5LCBhaGVhZCBvZiBtYWtpbmcg
Y29tcGxpY2F0ZWQgY2hhbmdlcy4KCiAqIFNwbGl0IHRoZSBvdmVyZmxvdyBj
aGVjayBvdXQgb2YgdGhlIGlmL2Vsc2UgY2hhaW4gZm9yIHR5cGUgaGFuZGxp
bmcsIGFzCiAgIGl0J3Mgc29tZXdoYXQgdW5yZWxhdGVkLgogKiBDb21tZW50
IHRoZSBtYWluIGlmL2Vsc2UgY2hhaW4gdG8gZXhwbGFpbiB3aGF0IGlzIGdv
aW5nIG9uLiAgQWRqdXN0IG9uZQogICBBU1NFUlQoKSBhbmQgc3RhdGUgdGhl
IGJpdCBsYXlvdXQgZm9yIHZhbGlkYXRlLWxvY2tlZCBhbmQgcGFydGlhbCBz
dGF0ZXMuCiAqIENvcnJlY3QgdGhlIGNvbW1lbnQgYWJvdXQgVExCIGZsdXNo
aW5nLCBhcyBpdCdzIGJhY2t3YXJkcy4gIFRoZSBwcm9ibGVtCiAgIGNhc2Ug
aXMgd2hlbiB3cml0ZWFibGUgbWFwcGluZ3MgYXJlIHJldGFpbmVkIHRvIGEg
cGFnZSBiZWNvbWluZyByZWFkLW9ubHksCiAgIGFzIGl0IGFsbG93cyB0aGUg
Z3Vlc3QgdG8gYnlwYXNzIFhlbidzIHNhZmV0eSBjaGVja3MgZm9yIHVwZGF0
ZXMuCiAqIFJlZHVjZSB0aGUgc2NvcGUgb2YgJ3knLiAgSXQgaXMgYW4gYXJ0
ZWZhY3Qgb2YgdGhlIGNtcHhjaGcgbG9vcCBhbmQgbm90CiAgIHZhbGlkIGZv
ciB1c2UgYnkgc3Vic2VxdWVudCBsb2dpYy4gIFN3aXRjaCB0byB1c2luZyBB
Q0NFU1NfT05DRSgpIHRvIHRyZWF0CiAgIGFsbCByZWFkcyBhcyBleHBsaWNp
dGx5IHZvbGF0aWxlLiAgVGhlIG9ubHkgdGhpbmcgcHJldmVudGluZyB0aGUg
dmFsaWRhdGVkCiAgIHdhaXQtbG9vcCBiZWluZyBpbmZpbml0ZSBpcyB0aGUg
Y29tcGlsZXIgYmFycmllciBoaWRkZW4gaW4gY3B1X3JlbGF4KCkuCiAqIFJl
cGxhY2Ugb25lIHBhZ2VfZ2V0X293bmVyKHBhZ2UpIHdpdGggdGhlIGFscmVh
ZHktY2FsY3VsYXRlZCAnZCcgYWxyZWFkeSBpbgogICBzY29wZS4KCk5vIGZ1
bmN0aW9uYWwgY2hhbmdlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDEgLyBD
VkUtMjAyMi0yNjM2Mi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEdl
b3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAZXUuY2l0cml4LmNvbT4KUmV2
aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2
aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCmluZGV4IDc5NmZhY2E2NDEwMy4uZGRkMzJmODhjNzk4
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJj
aC94ODYvbW0uYwpAQCAtMjkzNSwxNiArMjkzNSwxNyBAQCBzdGF0aWMgaW50
IF9wdXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVuc2ln
bmVkIGludCBmbGFncywKIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3Ry
dWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICBib29sIHByZWVtcHRpYmxlKQogewot
ICAgIHVuc2lnbmVkIGxvbmcgbngsIHgsIHkgPSBwYWdlLT51LmludXNlLnR5
cGVfaW5mbzsKKyAgICB1bnNpZ25lZCBsb25nIG54LCB4OwogICAgIGludCBy
YyA9IDA7CiAKICAgICBBU1NFUlQoISh0eXBlICYgfihQR1RfdHlwZV9tYXNr
IHwgUEdUX3BhZV94ZW5fbDIpKSk7CiAgICAgQVNTRVJUKCFpbl9pcnEoKSk7
CiAKLSAgICBmb3IgKCA7IDsgKQorICAgIGZvciAoIHVuc2lnbmVkIGxvbmcg
eSA9IEFDQ0VTU19PTkNFKHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKTsgOyAp
CiAgICAgewogICAgICAgICB4ICA9IHk7CiAgICAgICAgIG54ID0geCArIDE7
CisKICAgICAgICAgaWYgKCB1bmxpa2VseSgobnggJiBQR1RfY291bnRfbWFz
aykgPT0gMCkgKQogICAgICAgICB7CiAgICAgICAgICAgICBnZHByaW50ayhY
RU5MT0dfV0FSTklORywKQEAgLTI5NTIsOCArMjk1MywxNSBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAgICAgICAgIG1mbl94
KHBhZ2VfdG9fbWZuKHBhZ2UpKSk7CiAgICAgICAgICAgICByZXR1cm4gLUVJ
TlZBTDsKICAgICAgICAgfQotICAgICAgICBlbHNlIGlmICggdW5saWtlbHko
KHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQorCisgICAgICAgIGlmICgg
dW5saWtlbHkoKHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQogICAgICAg
ICB7CisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICogVHlwZXJlZiAw
IC0+IDEuCisgICAgICAgICAgICAgKgorICAgICAgICAgICAgICogVHlwZSBj
aGFuZ2VzIGFyZSBwZXJtaXR0ZWQgd2hlbiB0aGUgdHlwZXJlZiBpcyAwLiAg
SWYgdGhlIHR5cGUKKyAgICAgICAgICAgICAqIGFjdHVhbGx5IGNoYW5nZXMs
IHRoZSBwYWdlIG5lZWRzIHJlLXZhbGlkYXRpbmcuCisgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIHN0cnVjdCBkb21haW4gKmQgPSBwYWdlX2dldF9v
d25lcihwYWdlKTsKIAogICAgICAgICAgICAgaWYgKCBkICYmIHNoYWRvd19t
b2RlX2VuYWJsZWQoZCkgKQpAQCAtMjk2NCw4ICsyOTcyLDggQEAgc3RhdGlj
IGludCBfZ2V0X3BhZ2VfdHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1
bnNpZ25lZCBsb25nIHR5cGUsCiAgICAgICAgICAgICB7CiAgICAgICAgICAg
ICAgICAgLyoKICAgICAgICAgICAgICAgICAgKiBPbiB0eXBlIGNoYW5nZSB3
ZSBjaGVjayB0byBmbHVzaCBzdGFsZSBUTEIgZW50cmllcy4gSXQgaXMKLSAg
ICAgICAgICAgICAgICAgKiB2aXRhbCB0aGF0IG5vIG90aGVyIENQVXMgYXJl
IGxlZnQgd2l0aCBtYXBwaW5ncyBvZiBhIGZyYW1lCi0gICAgICAgICAgICAg
ICAgICogd2hpY2ggaXMgYWJvdXQgdG8gYmVjb21lIHdyaXRlYWJsZSB0byB0
aGUgZ3Vlc3QuCisgICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBubyBv
dGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdzCisg
ICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRlbmRp
bmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KICAgICAgICAgICAgICAg
ICAgKi8KICAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0aGlz
X2NwdShzY3JhdGNoX2NwdW1hc2spOwogCkBAIC0yOTc3LDcgKzI5ODUsNyBA
QCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8g
KnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKIAogICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCiAgICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChwYWdlX2dldF9vd25lcihwYWdlKSkgfHwKKyAg
ICAgICAgICAgICAgICAgICAgICghc2hhZG93X21vZGVfZW5hYmxlZChkKSB8
fAogICAgICAgICAgICAgICAgICAgICAgICgobnggJiBQR1RfdHlwZV9tYXNr
KSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkpICkKICAgICAgICAgICAgICAgICB7
CiAgICAgICAgICAgICAgICAgICAgIHBlcmZjX2luY3IobmVlZF9mbHVzaF90
bGJfZmx1c2gpOwpAQCAtMzAwOCw3ICszMDE2LDE0IEBAIHN0YXRpYyBpbnQg
X2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWdu
ZWQgbG9uZyB0eXBlLAogICAgICAgICB9CiAgICAgICAgIGVsc2UgaWYgKCB1
bmxpa2VseSgoeCAmIChQR1RfdHlwZV9tYXNrfFBHVF9wYWVfeGVuX2wyKSkg
IT0gdHlwZSkgKQogICAgICAgICB7Ci0gICAgICAgICAgICAvKiBEb24ndCBs
b2cgZmFpbHVyZSBpZiBpdCBjb3VsZCBiZSBhIHJlY3Vyc2l2ZS1tYXBwaW5n
IGF0dGVtcHQuICovCisgICAgICAgICAgICAvKgorICAgICAgICAgICAgICog
ZWxzZSwgd2UncmUgdHJ5aW5nIHRvIHRha2UgYSBuZXcgcmVmZXJlbmNlLCBv
ZiB0aGUgd3JvbmcgdHlwZS4KKyAgICAgICAgICAgICAqCisgICAgICAgICAg
ICAgKiBUaGlzIChiZWluZyBhYmxlIHRvIHByb2hpYml0IHVzZSBvZiB0aGUg
d3JvbmcgdHlwZSkgaXMgd2hhdCB0aGUKKyAgICAgICAgICAgICAqIHR5cGVy
ZWYgc3lzdGVtIGV4aXN0cyBmb3IsIGJ1dCBza2lwIHByaW50aW5nIHRoZSBm
YWlsdXJlIGlmIGl0CisgICAgICAgICAgICAgKiBsb29rcyBsaWtlIGEgcmVj
dXJzaXZlIG1hcHBpbmcsIGFzIHN1YnNlcXVlbnQgbG9naWMgbWlnaHQKKyAg
ICAgICAgICAgICAqIHVsdGltYXRlbHkgcGVybWl0IHRoZSBhdHRlbXB0Lgor
ICAgICAgICAgICAgICovCiAgICAgICAgICAgICBpZiAoICgoeCAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlKSAmJgogICAgICAgICAg
ICAgICAgICAodHlwZSA9PSBQR1RfbDFfcGFnZV90YWJsZSkgKQogICAgICAg
ICAgICAgICAgIHJldHVybiAtRUlOVkFMOwpAQCAtMzAyNywxOCArMzA0Miw0
NiBAQCBzdGF0aWMgaW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2lu
Zm8gKnBhZ2UsIHVuc2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgfQogICAg
ICAgICBlbHNlIGlmICggdW5saWtlbHkoISh4ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICAgICAgeworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAq
IGVsc2UsIHRoZSBjb3VudCBpcyBub24temVybywgYW5kIHdlJ3JlIGdyYWJi
aW5nIHRoZSByaWdodCB0eXBlOworICAgICAgICAgICAgICogYnV0IHRoZSBw
YWdlIGhhc24ndCBiZWVuIHZhbGlkYXRlZCB5ZXQuCisgICAgICAgICAgICAg
KgorICAgICAgICAgICAgICogVGhlIHBhZ2UgaXMgaW4gb25lIG9mIHR3byBz
dGF0ZXMgKGRlcGVuZGluZyBvbiBQR1RfcGFydGlhbCksCisgICAgICAgICAg
ICAgKiBhbmQgc2hvdWxkIGhhdmUgZXhhY3RseSBvbmUgcmVmZXJlbmNlLgor
ICAgICAgICAgICAgICovCisgICAgICAgICAgICBBU1NFUlQoKHggJiAoUEdU
X3R5cGVfbWFzayB8IFBHVF9jb3VudF9tYXNrKSkgPT0gKHR5cGUgfCAxKSk7
CisKICAgICAgICAgICAgIGlmICggISh4ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgICAgIHsKLSAgICAgICAgICAgICAgICAvKiBTb21lb25lIGVsc2Ug
aXMgdXBkYXRpbmcgdmFsaWRhdGlvbiBvZiB0aGlzIHBhZ2UuIFdhaXQuLi4g
Ki8KKyAgICAgICAgICAgICAgICAvKgorICAgICAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJ2YWxpZGF0ZSBsb2NrZWQi
IHN0YXRlCisgICAgICAgICAgICAgICAgICogKGkuZS4gUEdUX1t0eXBlXSB8
IDEpIHdoaWNoIG1lYW5zIHRoYXQgYSBjb25jdXJyZW50IGNhbGxlcgorICAg
ICAgICAgICAgICAgICAqIG9mIF9nZXRfcGFnZV90eXBlKCkgaXMgaW4gdGhl
IG1pZGRsZSBvZiB2YWxpZGF0aW9uLgorICAgICAgICAgICAgICAgICAqCisg
ICAgICAgICAgICAgICAgICogU3BpbiB3YWl0aW5nIGZvciB0aGUgY29uY3Vy
cmVudCB1c2VyIHRvIGNvbXBsZXRlIChwYXJ0aWFsCisgICAgICAgICAgICAg
ICAgICogb3IgZnVsbHkgdmFsaWRhdGVkKSwgdGhlbiByZXN0YXJ0IG91ciBh
dHRlbXB0IHRvIGFjcXVpcmUgYQorICAgICAgICAgICAgICAgICAqIHR5cGUg
cmVmZXJlbmNlLgorICAgICAgICAgICAgICAgICAqLwogICAgICAgICAgICAg
ICAgIGRvIHsKICAgICAgICAgICAgICAgICAgICAgaWYgKCBwcmVlbXB0aWJs
ZSAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKICAgICAgICAgICAg
ICAgICAgICAgICAgIHJldHVybiAtRUlOVFI7CiAgICAgICAgICAgICAgICAg
ICAgIGNwdV9yZWxheCgpOwotICAgICAgICAgICAgICAgIH0gd2hpbGUgKCAo
eSA9IHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKSA9PSB4ICk7CisgICAgICAg
ICAgICAgICAgfSB3aGlsZSAoICh5ID0gQUNDRVNTX09OQ0UocGFnZS0+dS5p
bnVzZS50eXBlX2luZm8pKSA9PSB4ICk7CiAgICAgICAgICAgICAgICAgY29u
dGludWU7CiAgICAgICAgICAgICB9Ci0gICAgICAgICAgICAvKiBUeXBlIHJl
ZiBjb3VudCB3YXMgbGVmdCBhdCAxIHdoZW4gUEdUX3BhcnRpYWwgZ290IHNl
dC4gKi8KLSAgICAgICAgICAgIEFTU0VSVCgoeCAmIFBHVF9jb3VudF9tYXNr
KSA9PSAxKTsKKworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIFRo
ZSBwYWdlIGhhcyBiZWVuIGxlZnQgaW4gdGhlICJwYXJ0aWFsIiBzdGF0ZQor
ICAgICAgICAgICAgICogKGkuZS4sIFBHVF9bdHlwZV0gfCBQR1RfcGFydGlh
bCB8IDEpLgorICAgICAgICAgICAgICoKKyAgICAgICAgICAgICAqIFJhdGhl
ciB0aGFuIGJ1bXBpbmcgdGhlIHR5cGUgY291bnQsIHdlIG5lZWQgdG8gdHJ5
IHRvIGdyYWIgdGhlCisgICAgICAgICAgICAgKiB2YWxpZGF0aW9uIGxvY2s7
IGlmIHdlIHN1Y2NlZWQsIHdlIG5lZWQgdG8gdmFsaWRhdGUgdGhlIHBhZ2Us
CisgICAgICAgICAgICAgKiB0aGVuIGRyb3AgdGhlIGdlbmVyYWwgcmVmIGFz
c29jaWF0ZWQgd2l0aCB0aGUgUEdUX3BhcnRpYWwgYml0LgorICAgICAgICAg
ICAgICoKKyAgICAgICAgICAgICAqIFdlIGdyYWIgdGhlIHZhbGlkYXRpb24g
bG9jayBieSBzZXR0aW5nIG54IHRvIChQR1RfW3R5cGVdIHwgMSkKKyAgICAg
ICAgICAgICAqIChpLmUuLCBub24temVybyB0eXBlIGNvdW50LCBuZWl0aGVy
IFBHVF92YWxpZGF0ZWQgbm9yCisgICAgICAgICAgICAgKiBQR1RfcGFydGlh
bCBzZXQpLgorICAgICAgICAgICAgICovCiAgICAgICAgICAgICBueCA9IHgg
JiB+UEdUX3BhcnRpYWw7CiAgICAgICAgIH0KIApAQCAtMzA4Nyw2ICszMTMw
LDEzIEBAIHN0YXRpYyBpbnQgX2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSwgdW5zaWduZWQgbG9uZyB0eXBlLAogICAgIH0KIAogIG91
dDoKKyAgICAvKgorICAgICAqIERpZCB3ZSBkcm9wIHRoZSBQR1RfcGFydGlh
bCBiaXQgd2hlbiBhY3F1aXJpbmcgdGhlIHR5cGVyZWY/ICBJZiBzbywKKyAg
ICAgKiBkcm9wIHRoZSBnZW5lcmFsIHJlZmVyZW5jZSB0aGF0IHdlbnQgYWxv
bmcgd2l0aCBpdC4KKyAgICAgKgorICAgICAqIE4uQi4gdmFsaWRhdGVfcGFn
ZSgpIG1heSBoYXZlIGhhdmUgcmUtc2V0IFBHVF9wYXJ0aWFsLCBub3QgcmVm
bGVjdGVkIGluCisgICAgICogbngsIGJ1dCB3aWxsIGhhdmUgdGFrZW4gYW4g
ZXh0cmEgcmVmIHdoZW4gZG9pbmcgc28uCisgICAgICovCiAgICAgaWYgKCAo
eCAmIFBHVF9wYXJ0aWFsKSAmJiAhKG54ICYgUEdUX3BhcnRpYWwpICkKICAg
ICAgICAgcHV0X3BhZ2UocGFnZSk7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa401/xsa401-4.16-2.patch"
Content-Disposition: attachment; filename="xsa401/xsa401-4.16-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBGaXggQUJBQyBjbXB4Y2hnKCkgcmFjZSBp
biBfZ2V0X3BhZ2VfdHlwZSgpCgpfZ2V0X3BhZ2VfdHlwZSgpIHN1ZmZlcnMg
ZnJvbSBhIHJhY2UgY29uZGl0aW9uIHdoZXJlIGl0IGluY29ycmVjdGx5IGFz
c3VtZXMKdGhhdCBiZWNhdXNlICd4JyB3YXMgcmVhZCBhbmQgYSBzdWJzZXF1
ZW50IGEgY21weGNoZygpIHN1Y2NlZWRzLCB0aGUgdHlwZQpjYW5ub3QgaGF2
ZSBjaGFuZ2VkIGluLWJldHdlZW4uICBDb25zaWRlcjoKCkNQVSBBOgogIDEu
IENyZWF0ZXMgYW4gTDJlIHJlZmVyZW5jaW5nIHBnCiAgICAgYC0+IF9nZXRf
cGFnZV90eXBlKHBnLCBQR1RfbDFfcGFnZV90YWJsZSksIHNlZXMgY291bnQg
MCwgdHlwZSBQR1Rfd3JpdGFibGVfcGFnZQogIDIuICAgICBJc3N1ZXMgZmx1
c2hfdGxiX21hc2soKQpDUFUgQjoKICAzLiBDcmVhdGVzIGEgd3JpdGVhYmxl
IG1hcHBpbmcgb2YgcGcKICAgICBgLT4gX2dldF9wYWdlX3R5cGUocGcsIFBH
VF93cml0YWJsZV9wYWdlKSwgY291bnQgaW5jcmVhc2VzIHRvIDEKICA0LiBX
cml0ZXMgaW50byBuZXcgbWFwcGluZywgY3JlYXRpbmcgYSBUTEIgZW50cnkg
Zm9yIHBnCiAgNS4gUmVtb3ZlcyB0aGUgd3JpdGVhYmxlIG1hcHBpbmcgb2Yg
cGcKICAgICBgLT4gX3B1dF9wYWdlX3R5cGUocGcpLCBjb3VudCBnb2VzIGJh
Y2sgZG93biB0byAwCkNQVSBBOgogIDcuICAgICBJc3N1ZXMgY21weGNoZygp
LCBzZXR0aW5nIGNvdW50IDEsIHR5cGUgUEdUX2wxX3BhZ2VfdGFibGUKCkNQ
VSBCIG5vdyBoYXMgYSB3cml0ZWFibGUgbWFwcGluZyB0byBwZywgd2hpY2gg
WGVuIGJlbGlldmVzIGlzIGEgcGFnZXRhYmxlIGFuZApzdWl0YWJseSBwcm90
ZWN0ZWQgKGkuZS4gcmVhZC1vbmx5KS4gIFRoZSBUTEIgZmx1c2ggaW4gc3Rl
cCAyIG11c3QgYmUgZGVmZXJyZWQKdW50aWwgYWZ0ZXIgdGhlIGd1ZXN0IGlz
IHByb2hpYml0ZWQgZnJvbSBjcmVhdGluZyBuZXcgd3JpdGVhYmxlIG1hcHBp
bmdzLAp3aGljaCBpcyBhZnRlciBzdGVwIDcuCgpEZWZlciBhbGwgc2FmZXR5
IGFjdGlvbnMgdW50aWwgYWZ0ZXIgdGhlIGNtcHhjaGcoKSBoYXMgc3VjY2Vz
c2Z1bGx5IHRha2VuIHRoZQppbnRlbmRlZCB0eXBlcmVmLCBiZWNhdXNlIHRo
YXQgaXMgd2hhdCBwcmV2ZW50cyBjb25jdXJyZW50IHVzZXJzIGZyb20gdXNp
bmcKdGhlIG9sZCB0eXBlLgoKQWxzbyByZW1vdmUgdGhlIGVhcmx5IHZhbGlk
YXRpb24gZm9yIHdyaXRlYWJsZSBhbmQgc2hhcmVkIHBhZ2VzLiAgVGhpcyBy
ZW1vdmVzCnJhY2UgY29uZGl0aW9ucyB3aGVyZSBvbmUgaGFsZiBvZiBhIHBh
cmFsbGVsIG1hcHBpbmcgYXR0ZW1wdCBjYW4gcmV0dXJuCnN1Y2Nlc3NmdWxs
eSBiZWZvcmU6CiAqIFRoZSBJT01NVSBwYWdldGFibGVzIGFyZSBpbiBzeW5j
IHdpdGggdGhlIG5ldyBwYWdlIHR5cGUKICogV3JpdGVhYmxlIG1hcHBpbmdz
IHRvIHNoYXJlZCBwYWdlcyBoYXZlIGJlZW4gdG9ybiBkb3duCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTQwMSAvIENWRS0yMDIyLTI2MzYyLgoKUmVwb3J0ZWQt
Ynk6IEphbm4gSG9ybiA8amFubmhAZ29vZ2xlLmNvbT4KU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hl
bi9hcmNoL3g4Ni9tbS5jCmluZGV4IGRkZDMyZjg4Yzc5OC4uMTY5M2I1ODBi
MTUyIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4v
YXJjaC94ODYvbW0uYwpAQCAtMjk2Miw1NiArMjk2MiwxMiBAQCBzdGF0aWMg
aW50IF9nZXRfcGFnZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVu
c2lnbmVkIGxvbmcgdHlwZSwKICAgICAgICAgICAgICAqIFR5cGUgY2hhbmdl
cyBhcmUgcGVybWl0dGVkIHdoZW4gdGhlIHR5cGVyZWYgaXMgMC4gIElmIHRo
ZSB0eXBlCiAgICAgICAgICAgICAgKiBhY3R1YWxseSBjaGFuZ2VzLCB0aGUg
cGFnZSBuZWVkcyByZS12YWxpZGF0aW5nLgogICAgICAgICAgICAgICovCi0g
ICAgICAgICAgICBzdHJ1Y3QgZG9tYWluICpkID0gcGFnZV9nZXRfb3duZXIo
cGFnZSk7Ci0KLSAgICAgICAgICAgIGlmICggZCAmJiBzaGFkb3dfbW9kZV9l
bmFibGVkKGQpICkKLSAgICAgICAgICAgICAgIHNoYWRvd19wcmVwYXJlX3Bh
Z2VfdHlwZV9jaGFuZ2UoZCwgcGFnZSwgdHlwZSk7CiAKICAgICAgICAgICAg
IEFTU0VSVCghKHggJiBQR1RfcGFlX3hlbl9sMikpOwogICAgICAgICAgICAg
aWYgKCAoeCAmIFBHVF90eXBlX21hc2spICE9IHR5cGUgKQogICAgICAgICAg
ICAgewotICAgICAgICAgICAgICAgIC8qCi0gICAgICAgICAgICAgICAgICog
T24gdHlwZSBjaGFuZ2Ugd2UgY2hlY2sgdG8gZmx1c2ggc3RhbGUgVExCIGVu
dHJpZXMuIEl0IGlzCi0gICAgICAgICAgICAgICAgICogdml0YWwgdGhhdCBu
byBvdGhlciBDUFVzIGFyZSBsZWZ0IHdpdGggd3JpdGVhYmxlIG1hcHBpbmdz
Ci0gICAgICAgICAgICAgICAgICogdG8gYSBmcmFtZSB3aGljaCBpcyBpbnRl
bmRpbmcgdG8gYmVjb21lIHBndGFibGUvc2VnZGVzYy4KLSAgICAgICAgICAg
ICAgICAgKi8KLSAgICAgICAgICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSB0
aGlzX2NwdShzY3JhdGNoX2NwdW1hc2spOwotCi0gICAgICAgICAgICAgICAg
QlVHX09OKGluX2lycSgpKTsKLSAgICAgICAgICAgICAgICBjcHVtYXNrX2Nv
cHkobWFzaywgZC0+ZGlydHlfY3B1bWFzayk7Ci0KLSAgICAgICAgICAgICAg
ICAvKiBEb24ndCBmbHVzaCBpZiB0aGUgdGltZXN0YW1wIGlzIG9sZCBlbm91
Z2ggKi8KLSAgICAgICAgICAgICAgICB0bGJmbHVzaF9maWx0ZXIobWFzaywg
cGFnZS0+dGxiZmx1c2hfdGltZXN0YW1wKTsKLQotICAgICAgICAgICAgICAg
IGlmICggdW5saWtlbHkoIWNwdW1hc2tfZW1wdHkobWFzaykpICYmCi0gICAg
ICAgICAgICAgICAgICAgICAvKiBTaGFkb3cgbW9kZTogdHJhY2sgb25seSB3
cml0YWJsZSBwYWdlcy4gKi8KLSAgICAgICAgICAgICAgICAgICAgICghc2hh
ZG93X21vZGVfZW5hYmxlZChkKSB8fAotICAgICAgICAgICAgICAgICAgICAg
ICgobnggJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rfd3JpdGFibGVfcGFnZSkp
ICkKLSAgICAgICAgICAgICAgICB7Ci0gICAgICAgICAgICAgICAgICAgIHBl
cmZjX2luY3IobmVlZF9mbHVzaF90bGJfZmx1c2gpOwotICAgICAgICAgICAg
ICAgICAgICAvKgotICAgICAgICAgICAgICAgICAgICAgKiBJZiBwYWdlIHdh
cyBhIHBhZ2UgdGFibGUgbWFrZSBzdXJlIHRoZSBmbHVzaCBpcwotICAgICAg
ICAgICAgICAgICAgICAgKiBwZXJmb3JtZWQgdXNpbmcgYW4gSVBJIGluIG9y
ZGVyIHRvIGF2b2lkIGNoYW5naW5nIHRoZQotICAgICAgICAgICAgICAgICAg
ICAgKiB0eXBlIG9mIGEgcGFnZSB0YWJsZSBwYWdlIHVuZGVyIHRoZSBmZWV0
IG9mCi0gICAgICAgICAgICAgICAgICAgICAqIHNwdXJpb3VzX3BhZ2VfZmF1
bHQoKS4KLSAgICAgICAgICAgICAgICAgICAgICovCi0gICAgICAgICAgICAg
ICAgICAgIGZsdXNoX21hc2sobWFzaywKLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAoeCAmIFBHVF90eXBlX21hc2spICYmCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgKHggJiBQR1RfdHlwZV9tYXNrKSA8PSBQ
R1Rfcm9vdF9wYWdlX3RhYmxlCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgPyBGTFVTSF9UTEIgfCBGTFVTSF9GT1JDRV9JUEkKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICA6IEZMVVNIX1RMQik7Ci0gICAgICAg
ICAgICAgICAgfQotCi0gICAgICAgICAgICAgICAgLyogV2UgbG9zZSBleGlz
dGluZyB0eXBlIGFuZCB2YWxpZGl0eS4gKi8KICAgICAgICAgICAgICAgICBu
eCAmPSB+KFBHVF90eXBlX21hc2sgfCBQR1RfdmFsaWRhdGVkKTsKICAgICAg
ICAgICAgICAgICBueCB8PSB0eXBlOwotCi0gICAgICAgICAgICAgICAgLyoK
LSAgICAgICAgICAgICAgICAgKiBObyBzcGVjaWFsIHZhbGlkYXRpb24gbmVl
ZGVkIGZvciB3cml0YWJsZSBwYWdlcy4KLSAgICAgICAgICAgICAgICAgKiBQ
YWdlIHRhYmxlcyBhbmQgR0RUL0xEVCBuZWVkIHRvIGJlIHNjYW5uZWQgZm9y
IHZhbGlkaXR5LgotICAgICAgICAgICAgICAgICAqLwotICAgICAgICAgICAg
ICAgIGlmICggdHlwZSA9PSBQR1Rfd3JpdGFibGVfcGFnZSB8fCB0eXBlID09
IFBHVF9zaGFyZWRfcGFnZSApCi0gICAgICAgICAgICAgICAgICAgIG54IHw9
IFBHVF92YWxpZGF0ZWQ7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0KICAg
ICAgICAgZWxzZSBpZiAoIHVubGlrZWx5KCh4ICYgKFBHVF90eXBlX21hc2t8
UEdUX3BhZV94ZW5fbDIpKSAhPSB0eXBlKSApCkBAIC0zMDkyLDYgKzMwNDgs
NTYgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2VfdHlwZShzdHJ1Y3QgcGFnZV9p
bmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5cGUsCiAgICAgICAgICAgICBy
ZXR1cm4gLUVJTlRSOwogICAgIH0KIAorICAgIC8qCisgICAgICogT25lIHR5
cGVyZWYgaGFzIGJlZW4gdGFrZW4gYW5kIGlzIG5vdyBnbG9iYWxseSB2aXNp
YmxlLgorICAgICAqCisgICAgICogVGhlIHBhZ2UgaXMgZWl0aGVyIGluIHRo
ZSAidmFsaWRhdGUgbG9ja2VkIiBzdGF0ZSAoUEdUX1t0eXBlXSB8IDEpIG9y
CisgICAgICogZnVsbHkgdmFsaWRhdGVkIChQR1RfW3R5cGVdIHwgUEdUX3Zh
bGlkYXRlZCB8ID4wKS4KKyAgICAgKi8KKworICAgIGlmICggdW5saWtlbHko
KHggJiBQR1RfY291bnRfbWFzaykgPT0gMCkgKQorICAgIHsKKyAgICAgICAg
c3RydWN0IGRvbWFpbiAqZCA9IHBhZ2VfZ2V0X293bmVyKHBhZ2UpOworCisg
ICAgICAgIGlmICggZCAmJiBzaGFkb3dfbW9kZV9lbmFibGVkKGQpICkKKyAg
ICAgICAgICAgIHNoYWRvd19wcmVwYXJlX3BhZ2VfdHlwZV9jaGFuZ2UoZCwg
cGFnZSwgdHlwZSk7CisKKyAgICAgICAgaWYgKCAoeCAmIFBHVF90eXBlX21h
c2spICE9IHR5cGUgKQorICAgICAgICB7CisgICAgICAgICAgICAvKgorICAg
ICAgICAgICAgICogT24gdHlwZSBjaGFuZ2Ugd2UgY2hlY2sgdG8gZmx1c2gg
c3RhbGUgVExCIGVudHJpZXMuIEl0IGlzCisgICAgICAgICAgICAgKiB2aXRh
bCB0aGF0IG5vIG90aGVyIENQVXMgYXJlIGxlZnQgd2l0aCB3cml0ZWFibGUg
bWFwcGluZ3MKKyAgICAgICAgICAgICAqIHRvIGEgZnJhbWUgd2hpY2ggaXMg
aW50ZW5kaW5nIHRvIGJlY29tZSBwZ3RhYmxlL3NlZ2Rlc2MuCisgICAgICAg
ICAgICAgKi8KKyAgICAgICAgICAgIGNwdW1hc2tfdCAqbWFzayA9IHRoaXNf
Y3B1KHNjcmF0Y2hfY3B1bWFzayk7CisKKyAgICAgICAgICAgIEJVR19PTihp
bl9pcnEoKSk7CisgICAgICAgICAgICBjcHVtYXNrX2NvcHkobWFzaywgZC0+
ZGlydHlfY3B1bWFzayk7CisKKyAgICAgICAgICAgIC8qIERvbid0IGZsdXNo
IGlmIHRoZSB0aW1lc3RhbXAgaXMgb2xkIGVub3VnaCAqLworICAgICAgICAg
ICAgdGxiZmx1c2hfZmlsdGVyKG1hc2ssIHBhZ2UtPnRsYmZsdXNoX3RpbWVz
dGFtcCk7CisKKyAgICAgICAgICAgIGlmICggdW5saWtlbHkoIWNwdW1hc2tf
ZW1wdHkobWFzaykpICYmCisgICAgICAgICAgICAgICAgIC8qIFNoYWRvdyBt
b2RlOiB0cmFjayBvbmx5IHdyaXRhYmxlIHBhZ2VzLiAqLworICAgICAgICAg
ICAgICAgICAoIXNoYWRvd19tb2RlX2VuYWJsZWQoZCkgfHwKKyAgICAgICAg
ICAgICAgICAgICgobnggJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rfd3JpdGFi
bGVfcGFnZSkpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBw
ZXJmY19pbmNyKG5lZWRfZmx1c2hfdGxiX2ZsdXNoKTsKKyAgICAgICAgICAg
ICAgICAvKgorICAgICAgICAgICAgICAgICAqIElmIHBhZ2Ugd2FzIGEgcGFn
ZSB0YWJsZSBtYWtlIHN1cmUgdGhlIGZsdXNoIGlzCisgICAgICAgICAgICAg
ICAgICogcGVyZm9ybWVkIHVzaW5nIGFuIElQSSBpbiBvcmRlciB0byBhdm9p
ZCBjaGFuZ2luZyB0aGUKKyAgICAgICAgICAgICAgICAgKiB0eXBlIG9mIGEg
cGFnZSB0YWJsZSBwYWdlIHVuZGVyIHRoZSBmZWV0IG9mCisgICAgICAgICAg
ICAgICAgICogc3B1cmlvdXNfcGFnZV9mYXVsdCgpLgorICAgICAgICAgICAg
ICAgICAqLworICAgICAgICAgICAgICAgIGZsdXNoX21hc2sobWFzaywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICh4ICYgUEdUX3R5cGVfbWFzaykg
JiYKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICh4ICYgUEdUX3R5cGVf
bWFzaykgPD0gUEdUX3Jvb3RfcGFnZV90YWJsZQorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgPyBGTFVTSF9UTEIgfCBGTFVTSF9GT1JDRV9JUEkKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIDogRkxVU0hfVExCKTsKKyAgICAg
ICAgICAgIH0KKyAgICAgICAgfQorICAgIH0KKwogICAgIGlmICggdW5saWtl
bHkoKCh4ICYgUEdUX3R5cGVfbWFzaykgPT0gUEdUX3dyaXRhYmxlX3BhZ2Up
ICE9CiAgICAgICAgICAgICAgICAgICAodHlwZSA9PSBQR1Rfd3JpdGFibGVf
cGFnZSkpICkKICAgICB7CkBAIC0zMTIwLDEzICszMTI2LDI1IEBAIHN0YXRp
YyBpbnQgX2dldF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwg
dW5zaWduZWQgbG9uZyB0eXBlLAogCiAgICAgaWYgKCB1bmxpa2VseSghKG54
ICYgUEdUX3ZhbGlkYXRlZCkpICkKICAgICB7Ci0gICAgICAgIGlmICggISh4
ICYgUEdUX3BhcnRpYWwpICkKKyAgICAgICAgLyoKKyAgICAgICAgICogTm8g
c3BlY2lhbCB2YWxpZGF0aW9uIG5lZWRlZCBmb3Igd3JpdGFibGUgb3Igc2hh
cmVkIHBhZ2VzLiAgUGFnZQorICAgICAgICAgKiB0YWJsZXMgYW5kIEdEVC9M
RFQgbmVlZCB0byBoYXZlIHRoZWlyIGNvbnRlbnRzIGF1ZGl0ZWQuCisgICAg
ICAgICAqCisgICAgICAgICAqIHBlciB2YWxpZGF0ZV9wYWdlKCksIG5vbi1h
dG9taWMgdXBkYXRlcyBhcmUgZmluZSBoZXJlLgorICAgICAgICAgKi8KKyAg
ICAgICAgaWYgKCB0eXBlID09IFBHVF93cml0YWJsZV9wYWdlIHx8IHR5cGUg
PT0gUEdUX3NoYXJlZF9wYWdlICkKKyAgICAgICAgICAgIHBhZ2UtPnUuaW51
c2UudHlwZV9pbmZvIHw9IFBHVF92YWxpZGF0ZWQ7CisgICAgICAgIGVsc2UK
ICAgICAgICAgewotICAgICAgICAgICAgcGFnZS0+bnJfdmFsaWRhdGVkX3B0
ZXMgPSAwOwotICAgICAgICAgICAgcGFnZS0+cGFydGlhbF9mbGFncyA9IDA7
Ci0gICAgICAgICAgICBwYWdlLT5saW5lYXJfcHRfY291bnQgPSAwOworICAg
ICAgICAgICAgaWYgKCAhKHggJiBQR1RfcGFydGlhbCkgKQorICAgICAgICAg
ICAgeworICAgICAgICAgICAgICAgIHBhZ2UtPm5yX3ZhbGlkYXRlZF9wdGVz
ID0gMDsKKyAgICAgICAgICAgICAgICBwYWdlLT5wYXJ0aWFsX2ZsYWdzID0g
MDsKKyAgICAgICAgICAgICAgICBwYWdlLT5saW5lYXJfcHRfY291bnQgPSAw
OworICAgICAgICAgICAgfQorCisgICAgICAgICAgICByYyA9IHZhbGlkYXRl
X3BhZ2UocGFnZSwgdHlwZSwgcHJlZW1wdGlibGUpOwogICAgICAgICB9Ci0g
ICAgICAgIHJjID0gdmFsaWRhdGVfcGFnZShwYWdlLCB0eXBlLCBwcmVlbXB0
aWJsZSk7CiAgICAgfQogCiAgb3V0Ogo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:13:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345240.570895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2Y-0001uE-TA; Thu, 09 Jun 2022 12:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345240.570895; Thu, 09 Jun 2022 12:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2Y-0001u7-Oc; Thu, 09 Jun 2022 12:13:30 +0000
Received: by outflank-mailman (input) for mailman id 345240;
 Thu, 09 Jun 2022 12:13:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzGyy-00071T-5T
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:09:48 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0dce25b0-e7ed-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:09:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR0402MB3400.eurprd04.prod.outlook.com (2603:10a6:209:3::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 12:09:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 12:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dce25b0-e7ed-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GBFGrSZeGtSvVB9Kkwwc1gcbx26zXq/Rt17WuaJcmBDIbwNBpU1+2Ev/5YorVBCYVM85H5np5VF3WcsxtLqShe8bE/2O7fByCfKEGpdNYLLcsiOm0ZmOKd7HiAusXDV11g7LdLfkx/oRCsNzHKziE0vCWD2KIVHrcET3R7biqoGIMxkEvg590fyTgBtlgNlaGk0JNjeIRaD2oR8RNrdo3viLSFkfg2oUoh1dLR8ZjaR7nJHIW17aktl+C1/3PtJUsnDDk+q8PdgmGRe3gajJrs6RI08oBs42MXHRklT/tK9RFMuS7fgiWSikDGKHlWDjtBj2dj4DD8ivEBfIuWA0gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YWd0Ppr0HmI+sfytjX9c5Kw53gfy43AairUd+Ma993Q=;
 b=iKtQFjh91SknX4ncOuNL83SZLf4uTuER5H6UbCr2W3qBAxgWs2ti4gyNm7AYk+jliUylqBYzupcuFG9PuOksEKbbm2Yk3aPc0Vjl48BW3/t55M8LdBlifGrmcQzQQqxOEFJzjcqMX5sQrnNqymiWLxnGi/a4WHc/0WgenwLbpKAkC3RZlawemhxp2kH0eO69kmIq9ogghWf3Ewr+bbzjemA3UyiJfPtki6HWbNvpgJknmHmvH2j03aOLpkj72XYl7VoD1bdj/dp2OUV8WA39HVncafX6Hk0ccLejEtpwo/BYrS3jSlYBdfwDYxz0c5NjGOCtjXXu2JwuiOuue2dKZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YWd0Ppr0HmI+sfytjX9c5Kw53gfy43AairUd+Ma993Q=;
 b=P49FKdBkblcvqXncOnTfXjbduOO2p+h2ns18ef0OCMu38h8DyyHAS1JuIXamO5mOuIBQL/SQJBVYSQ3WT18mRN6A2mnWHsfVZVJ/zKSCSHysCLZYLGS9Bfwp5E+d+52shkXPAtRp2DiezmArj2eFuhyq6Ojht4OL0W4XriCZ8sHjAdFv2OiNdFjW8bq3J6w4lQHy3QLR0V3TnnZc1H/AhZZRP4n2TRebB+DY8tgoR6LyiOoecU0tvCa9bKoweaPXhvgM6FJeU+6b3zAkP71Ra1KEeuardyn/pqIRz54sfQoqllpDBE13mhhNnCd6GY1nd3tgGkQs2DD6uD6TRCwB+A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <23552ac7-7548-9dad-fe41-7dc581c78585@suse.com>
Date: Thu, 9 Jun 2022 14:09:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] xen/heap: Split init_heap_pages() in two
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220609083039.76667-2-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR04CA0031.eurprd04.prod.outlook.com
 (2603:10a6:20b:312::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2e6366d9-7f99-40c6-d8af-08da4a10f0a7
X-MS-TrafficTypeDiagnostic: AM6PR0402MB3400:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR0402MB34007EC47B59BE0FA3B95104B3A79@AM6PR0402MB3400.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	B13dEGlOaHjmwwbEO7yj/dtjbpxqreAAYp6zvfjJe2Cgt7Fi3ut7TKOcxjRnBO+9tGkPTKqWHcmk89MW7LhJ/DZApuAf1piRRP9I7jx6pXCvNyYW8wfRaX7dnN5eumn2QDmQX9rHxKyZZKo+mGDDNDbmiNE7D0tU0KczReL/LgshQUjdNtI4IIuoAPoPDV6vXKakNUp8E0VBTRzESxqc2SFUi/wSZKGHFI0Ys9PwYN/dgXAFTmCZW8rru7a2TU9Kon/Q3/XSFkpKZc0IOVHTTiAtNr0SZ4nQAEklbNOkgXfS86lxwhiuDBkaWFnHtyAI4turiFDyMjkrnc62ozKmB6vo5xomJBTM7wzxAv+uFpg3372nT5I14ZYXYSQWp78VhVLE8RPHZ+pYLFdhD4DGqdlplUv5dT5PNC+NBM63Sfkh68qIMeIfqo1iGCJAWfGtZwHf6xAlY0NZS/vXZFGaPVNP/8B9E6FYDTyZRLrJK/Ez4vMs1n5wA5j/pJ1dwIPs7PuKKqevlXMskmaBGUbJ64Fb7OgYL/cR0qRE3mDfHhc6SsuxQuMJ+o6Hcth1afQihZVMXwL/1R16IiaL90ecsBzO6eLrme6k0jJFgjMb5+Hs3V8IcXXE2LNKyKh4SLbF3+Sw3LL6NKE9ESnp/y+D36w4PIjZKQc8ll8Uevq7hJW2hMT+HgzDXBWKKubmNNd3TY7QjhUe4EEpj2jjjgP6Xpl+iWVzgm6cuTtkIdoIQQE=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(6916009)(186003)(31686004)(2616005)(508600001)(54906003)(8936002)(38100700002)(66946007)(66556008)(66476007)(8676002)(4326008)(5660300002)(2906002)(316002)(83380400001)(6486002)(31696002)(26005)(6512007)(6506007)(86362001)(53546011)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TWEwYjQwUlB2OXEwWjNaYlhyQTg4Z09mSndocHJuUTgwSzJuRFhhRUwwNUV0?=
 =?utf-8?B?amY3UUZBRlBQK1I5VTlTRmxYMFkzSFNaY2tQWk8xOFVwU0x6K3h3THpEbTZK?=
 =?utf-8?B?akY5bGd1Sy9QbTI0VXZUdWZVamxXblZUb3Z3MklNNEJVdDROOUs3VXg3WEZC?=
 =?utf-8?B?c2F4dU9QcXRXT05jcEZMRkMrUkNOMGFwbmtEU2g3U09DOHlxR1dmaEtxM2JF?=
 =?utf-8?B?N3I3K2pidERCYTh4Z1FvdzhORWJ5eGlwek1EV2w3TzBHd3lkQ0w5bGxZWDMr?=
 =?utf-8?B?SVZlVEhaWXR1VWZzdVQxeEZqK1h4Y1dybHByek9VOTUvcXhFTkhZTTNsL0Fs?=
 =?utf-8?B?VUtNczI5WHBxWDR2TEl2TnhYYWFkekxRN2VUQVF0c2VqWmcvOVZyZWw0Wmps?=
 =?utf-8?B?MjlTekE3emg4RWpOckRBOWU0NmF6KzNBVEQrMEVGRjlMYnpQSTBXWUxIdER1?=
 =?utf-8?B?cTJVc3JGSFRMV1JYZ3VXU2locXdtSVF5N1dxdkt3cXp6eEVtRXYzUzdNUkxu?=
 =?utf-8?B?VGROdnBweWlWVlNXaWF5UkxLTHd5c0pOVGRqODZZbjZBOFovWU9CQ1U4SFNp?=
 =?utf-8?B?Y1dobXU4eElQMEFlUWptTXo3anZVdTFrZllKS2M1OEtscmR3Z0s3bVYzS1VJ?=
 =?utf-8?B?SzFCdm10UEM0Vk1WbVdqaEM5L0xMVXNVSTFuQmhwSDhLUzNNS3dSbVBKM0Fk?=
 =?utf-8?B?K0NrN0RmRjJUSHBjM2xNVkRXQVhRdy9vSk05TWFnMkRWUjlHQk5zMXZOZVZi?=
 =?utf-8?B?NG9kaUxFSFV5NzVxTmsva2lsVlh3UVZZdWNCUkQ4VWdydmZrTElQNExjTlpl?=
 =?utf-8?B?NC9FOGNoMERzL1p5U3dPTjkwb2NCbFRVbUl1T0dJaDhMR3BtYWI5eS9QeS80?=
 =?utf-8?B?OFA3V2RyQjVGN1FoUGtBSlowd3FLZ2NiZ1NhMzVSdVlYOW1uSGhRUTJwQkxC?=
 =?utf-8?B?WW82ZGxINGdqNEVidHowR1diakgrVWl0U1kvSndocVFLOUZZSFZuMElMd0FC?=
 =?utf-8?B?UXlscFRDS1BuVHcvdUs0VnFpT2srZXgwMlJpdnhOTWdhYklodFlkT3dwcmFt?=
 =?utf-8?B?UjlTUjBtT0dUMUF6K2orUWhTQ3hPa1QyK2c1VDVHNUYxbnU1MWZBb3E5dGl1?=
 =?utf-8?B?UkNnR0VDeEpweGRBTWJDMGhwbzE0d3lGb2VuUXVRU3l2TFhUOEJpZ1oyei8w?=
 =?utf-8?B?U0ZpZXNacFYvU05yUUJrZjRhOXYyN3RGRXpGVlUvTGZuaG03eEJYZkV3c3Z2?=
 =?utf-8?B?YldrREw3aWlTcDVBR01FcXd4RGhCR1lzdkNhT1BCeVVUNzYwR1RKdHc3emlK?=
 =?utf-8?B?TVAxTVBsRnIzV0Q2ckVnaEMyMHAvUS9tOEZZdExFUGxsUjRYV1A2SytLcmxP?=
 =?utf-8?B?bzR1U0ltTnJMQmlVb2hKcWsreS9BdzlhTU5rKzdUVDlFQk1VdUUyaGJKbGht?=
 =?utf-8?B?S2tNRWpkT1A4dUdrUGhpZ3pDVDhoQXpJcjRGZkp2RDBaOG5GMEtQdDFCd0NF?=
 =?utf-8?B?bHJreFhKMGQxMEhWOWJTWENoRC9ZMnN0OENGbXpNbENBVEE2SStrbXBhdVps?=
 =?utf-8?B?WWxJT3h2U21IZkc5V0x5ZGFDR2M3QXR1TW4zY0YvRnRQTE5EWHRMUllyNFpF?=
 =?utf-8?B?ZnVKNGUrLyttWWhqNnFnZFpRNHE5aEFXWUxMUnNIdFFrdDhJUnZnQllXbmxM?=
 =?utf-8?B?N1JUM3FCa3dGVjUxbzZxKzFobDJMVm9UTHhvM1cwdTM1RDc1NUgrMWs1a1Y2?=
 =?utf-8?B?UXdXMDR6SU9wY0pFN1lQTmVJN3hXdTdITVZHdXc3QnVMeHVHbWpSeWRHSmRW?=
 =?utf-8?B?c0RHU1VpWXNSK1FvU3ZCV084WlZ4OWpwN0dmNEFnMkhyYzNhQk5SWkNpSGRP?=
 =?utf-8?B?aVZrMFVxbDJaOGt6ZmplWWlWeVlEZUZrUXN4ZldXcW9vbTFjV296MTBkRm9i?=
 =?utf-8?B?QS9seUpudHl5cHE0alA2WXVmbURNaUhRb2tHMGRpenkvQUsrZ01MdnNmVzVG?=
 =?utf-8?B?TjBCZGxvOUYxc2phQ2NsMDZlb3ZOVWVsODBodzMycWVGWEhXckNBUkFFcktw?=
 =?utf-8?B?MkhDVFFjdWhHT3lQb25leXNOV0FHK0xYQ3hFMFM5MlZsSWV6WTQ0VmllSnow?=
 =?utf-8?B?NGdKN3dTS0dodXZhNFUyZUVOQThsWnVsVG1DYnFNVHVQbldkTVJZRURhRWxM?=
 =?utf-8?B?SktUNUt1Yk1raVRWelc1d0VSdGNVR0tEM1NyVCtxRmYvUTdvVERXVGhpWW5v?=
 =?utf-8?B?VTA2ZUs1bC9GOWk4MWZlYWZjbk1ycnBQZ2sxQTR6dUh2YWE1ZlduNno1Nzdu?=
 =?utf-8?B?WFMrd3Q5azcxQkZDOE5CZUt5WGFER3RDM2tPTmFGeDArRjhiWm5Odz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2e6366d9-7f99-40c6-d8af-08da4a10f0a7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 12:09:44.5060
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sC26MOG2SCPFS+dvMWC+OqngvUSICd3PIg6uNfwMEmTvXSUK57RiQS/QB4LwQIIzO5m/KXXiuTsa00KvKWreGg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR0402MB3400

On 09.06.2022 10:30, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, init_heap_pages() will call free_heap_pages() page
> by page. To reduce the time to initialize the heap, we will want
> to provide multiple pages at the same time.
> 
> init_heap_pages() is now split in two parts:
>     - init_heap_pages(): will break down the range in multiple set
>       of contiguous pages. For now, the criteria is the pages should
>       belong to the same NUMA node.
>     - init_contig_pages(): will initialize a set of contiguous pages.
>       For now the pages are still passed one by one to free_heap_pages().

Hmm, the common use of "contiguous" is to describe address correlation.
Therefore I'm afraid I'd like to see "contig" avoided when you mean
"same node". Perhaps init_node_pages()?

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>  }
>  
>  /*
> - * Hand the specified arbitrary page range to the specified heap zone
> - * checking the node_id of the previous page.  If they differ and the
> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
> - * not freeing it to the buddy allocator.
> + * init_contig_heap_pages() is intended to only take pages from the same
> + * NUMA node.
>   */
> +static bool is_contig_page(struct page_info *pg, unsigned int nid)
> +{
> +    return (nid == (phys_to_nid(page_to_maddr(pg))));
> +}

If such a helper is indeed needed, then I think it absolutely wants
pg to be pointer-to-const. And imo it would also help readability if
the extra pair of parentheses around the nested function calls was
omitted. Given the naming concern, though, I wonder whether this
wouldn't better be open-coded in the one place it is used. Actually
naming is quite odd here beyond what I'd said further up: "Is this
page contiguous?" Such a question requires two pages, i.e. "Are these
two pages contiguous?" What you want to know is "Is this page on the
given node?"

> +/*
> + * This function should only be called with valid pages from the same NUMA
> + * node.
> + *
> + * Callers should use is_contig_page() first to check if all the pages
> + * in a range are contiguous.
> + */
> +static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,

const again?

> +                                   bool need_scrub)
> +{
> +    unsigned long s, e;
> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +
> +    s = mfn_x(page_to_mfn(pg));
> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> +    if ( unlikely(!avail[nid]) )
> +    {
> +        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&

IS_ALIGNED(s, 1UL << MAX_ORDER) to "describe" what's meant?

> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
> +        unsigned long n;
> +
> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
> +        BUG_ON(n > nr_pages);
> +        if ( use_tail )
> +            e -= n;
> +        else
> +            s += n;
> +    }
> +
> +    while ( s < e )
> +    {
> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> +        s += 1UL;

Nit (I realize the next patch will replace this anyway): Just ++s? Or
at least a plain 1 without UL suffix?

> @@ -1812,35 +1851,24 @@ static void init_heap_pages(
>      spin_unlock(&heap_lock);
>  
>      if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
> -        idle_scrub = true;
> +        need_scrub = true;
>  
> -    for ( i = 0; i < nr_pages; i++ )
> +    for ( i = 0; i < nr_pages; )
>      {
> -        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
> +        unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +        unsigned long left = nr_pages - i;
> +        unsigned long contig_pages;
>  
> -        if ( unlikely(!avail[nid]) )
> +        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>          {
> -            unsigned long s = mfn_x(page_to_mfn(pg + i));
> -            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> -            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
> -                            !(s & ((1UL << MAX_ORDER) - 1)) &&
> -                            (find_first_set_bit(e) <= find_first_set_bit(s));
> -            unsigned long n;
> -
> -            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
> -                               &use_tail);
> -            BUG_ON(i + n > nr_pages);
> -            if ( n && !use_tail )
> -            {
> -                i += n - 1;
> -                continue;
> -            }
> -            if ( i + n == nr_pages )
> +            if ( !is_contig_page(pg + contig_pages, nid) )
>                  break;
> -            nr_pages -= n;
>          }

Isn't doing this page by page in a loop quite inefficient? Can't you
simply obtain the end of the node's range covering the first page, and
then adjust "left" accordingly? I even wonder whether the admittedly
lax original check's assumption couldn't be leveraged alternatively,
by effectively bisecting to the end address on the node of interest
(where the assumption is that nodes aren't interleaved - see Wei's
NUMA series dealing with precisely that situation).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:13:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345228.570873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2M-00012Y-2g; Thu, 09 Jun 2022 12:13:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345228.570873; Thu, 09 Jun 2022 12:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2L-00010M-TT; Thu, 09 Jun 2022 12:13:17 +0000
Received: by outflank-mailman (input) for mailman id 345228;
 Thu, 09 Jun 2022 12:13:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=itVH=WQ=xenbits.xen.org=julieng@srs-se1.protection.inumbo.net>)
 id 1nzGxq-0005QK-2G
 for xen-devel@lists.xen.org; Thu, 09 Jun 2022 12:08:38 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e37b1a8e-e7ec-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:08:36 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1nzGxe-0006XB-CE; Thu, 09 Jun 2022 12:08:26 +0000
Received: from julieng by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1nzGxe-0008Qh-Ab; Thu, 09 Jun 2022 12:08:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e37b1a8e-e7ec-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=8WEdhmHumiBG946Qt0l3I48Pomj+rCfHG1wWQI0sjEk=; b=Lq3Zs4gZDhoRcT9Ru5m/U1eLv0
	F4AMakdPJcL8erOr5l7CK0ekNjLBrjSla5IF+bPQdprQOHVLNw7oOAv1eyZyAiis7BBAqX5vdYVLT
	6pvtB4mqD/0mXv0CDq+8lpWosu46FSOUBRt0KtRY2MEorPiimTCcuSSI2QSWQIUmJXKI=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 402 v4 (CVE-2022-26363,CVE-2022-26364) -
 x86 pv: Insufficient care with non-coherent mappings
Message-Id: <E1nzGxe-0008Qh-Ab@xenbits.xenproject.org>
Date: Thu, 09 Jun 2022 12:08:26 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

     Xen Security Advisory CVE-2022-26363,CVE-2022-26364 / XSA-402
                               version 4

         x86 pv: Insufficient care with non-coherent mappings

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Xen maintains a type reference count for pages, in addition to a regular
reference count.  This scheme is used to maintain invariants required
for Xen's safety, e.g. PV guests may not have direct writeable access to
pagetables; updates need auditing by Xen.

Unfortunately, Xen's safety logic doesn't account for CPU-induced cache
non-coherency; cases where the CPU can cause the contents of the cache to
be different from the contents in main memory.  In such cases, Xen's
safety logic can incorrectly conclude that the contents of a page are safe.

IMPACT
======

Malicious x86 PV guest administrators can escalate privilege so as to
control the whole system.

VULNERABLE SYSTEMS
==================

All versions of Xen are vulnerable.

Only x86 PV guests can trigger this vulnerability.

Only x86 PV guests configured with access to devices (e.g. PCI
Passthrough) can trigger the vulnerability.

Only CPUs which can issue non-coherent memory accesses are impacted.
CPUs which enumerate the SelfSnoop feature are not impacted, except as
noted in errata.  Therefore, we believe that Xen running on Intel
IvyBridge or later CPUs is not impacted by the vulnerability.

MITIGATION
==========

Not passing devices through to untrusted x86 PV guests will avoid the
vulnerability.

CREDITS
=======

This issue was discovered by Jann Horn of Google Project Zero.

RESOLUTION
==========

Applying the appropriate attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

Furthermore, the XSA-402 patches depend logically on the XSA-401
patches, and will not function safely without XSA-401 in place first.

xsa402/xsa402-?.patch           xen-unstable
xsa402/xsa402-4.16-?.patch      Xen 4.16.x
xsa402/xsa402-4.15-?.patch      Xen 4.15.x
xsa402/xsa402-4.14-?.patch      Xen 4.14.x
xsa402/xsa402-4.13-?.patch      Xen 4.13.x

$ sha256sum xsa402* xsa402*/*
3572a7bf70f372707705eec7e24ec6737d41dde906d82b4197597480df557b0c  xsa402.meta
ce956e3b24b34b10034d6cc219f616e96e5b7b3a391f6d9a97d96694579e86b3  xsa402/xsa402-1.patch
8faaae88b7d88a3ef66ebd9db7d5fbfa600ab1216b38954a7a8b44822a32b87e  xsa402/xsa402-2.patch
344c76e842e830ef209427359d2b566d6b54f8862c16662ca628c459680614d7  xsa402/xsa402-3.patch
210e8312f351f1b26b58e5f24479381371ddfbae4b1d3b7233a9ed909a3d08cd  xsa402/xsa402-4.13-1.patch
5d9e6cb667d47f58f1b85c02844510673f9bfa5a94a74847bfe641bc9722dc67  xsa402/xsa402-4.13-2.patch
176bbd997d163cfb17811065e084ee118ba272e02302c0237dcfeca7d261943d  xsa402/xsa402-4.13-3.patch
0ee1adac14c185c3b928f8384c6f5749ecf1c028eb65e17ad54de5be0773b40b  xsa402/xsa402-4.13-4.patch
366a79734861535818c54e3d831c7349de11fbd761ee04ced712590e50a149ed  xsa402/xsa402-4.13-5.patch
487227003630a70a640e434c6b0125f73c8d7affc9c90297de737a29a0cf0c7e  xsa402/xsa402-4.14-1.patch
328dd4090ecb6bd13696a9a69d098476d14ccde4d78e0127c2512569c73aa01a  xsa402/xsa402-4.14-2.patch
739263e622620e95c03118d3ea9d4f96ea3ce83d17ae6d06ca596cbe3d7c6035  xsa402/xsa402-4.14-3.patch
f35a7c0282efe0271517fe6407f2d36f97455710041fa3bce72a61bc3733b556  xsa402/xsa402-4.14-4.patch
ba4b84e95fbad023c1db21b677b166e09a4a2c0c4346ecb4612a32ee97f37efe  xsa402/xsa402-4.14-5.patch
ccdcbebcef9b84dce82c95f6faaa85f73f137c47c54aa891ee350e90cf1e8ceb  xsa402/xsa402-4.15-1.patch
51d6875b097ba9913620e827cc1d634e6d3506fb6ab8ab7ac763e46634d7b67c  xsa402/xsa402-4.15-2.patch
58b02bd665366c235534c58bc0f040863f3b1083551541c2b6de090c5d0caf6c  xsa402/xsa402-4.15-3.patch
457cb2be1425948589cd0b7084087f6b995df29af289c10f9e9011df6f704cc0  xsa402/xsa402-4.15-4.patch
808cb71f43ab64ac6e992ffc081790292e014b7476304502caaee0f2d8e92b6d  xsa402/xsa402-4.15-5.patch
d90732dd1ef85c6d33471f83a707245e4bff3b737110ba4b8533c549b06175cc  xsa402/xsa402-4.16-1.patch
f68ad7dd8f68f688bb2f42664af8c7eeecc4888b84afe8e102e96518c22ceb2c  xsa402/xsa402-4.16-2.patch
96f0c356281c59cc90894c0160121469096c3076cc4e1b52e81a521da10e9d76  xsa402/xsa402-4.16-3.patch
27aeb50651bfde461b84c98a897062e261b9ac84b6e07e9afbaae4c20c61a963  xsa402/xsa402-4.16-4.patch
d65c84f2cf1f75d96c1853ffeeb8eed793e6382d21795af04871ead07f6b330c  xsa402/xsa402-4.16-5.patch
5b472eda637ff59b0b7dff85a7869d7197f728b581581ce97b1617c2dcb62397  xsa402/xsa402-4.patch
15741042b7e6c04deff2edcbd21f1c4649d58790919fa18d4b131b53d68a124f  xsa402/xsa402-5.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

But: deployment of the mitigation (i.e., switching guests from
passed-through devices to virtual devices) is *not* permitted during
the embargo, as it could be seen by an attacker and potentially give
them a hint about the nature of the vulnerability.

Furthermore, distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmKh4lkMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZWi0H/3qjj6TwK57IN/QXxMxnPf/Z8w1C4J64dHXXksXz
epUV7NAUMhZMiL1TDRXORLCcUEC9ErwBb3xdz+rSy/3oyqVNL2vERu7LtXKriIgi
WZYvk/19QzBNVTrGUbXmLFER/0hGo6r3wW3VPhziAoTc71f2PW4wIWbvGOzvHpSU
PuRhScXNMdJsu6dh5mNahqQE2nxRSOY/B9D8KDZTCJ4GwMKqZGuwRu5FuoHhXDa/
iOy4kUt6SOJ46L7Za1ULdYe6wdYWzJJtVaoojgjU/gqwtT3uXLa3eqsUqXjGynxj
iGGOMFTypAMhMXqEgKzUEbJOYvvaLmC/D/bbVZ7U80Nya18=
=bGG4
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa402.meta"
Content-Disposition: attachment; filename="xsa402.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiA0MDIsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNiIsCiAgICAiNC4xNSIsCiAgICAiNC4xNCIs
CiAgICAiNC4xMyIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwK
ICAiUmVjaXBlcyI6IHsKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJm
ZTk3MTMzYjVkZWVmNThiZDE0MjJmNGQ4NzgyMTEzMWM2NmIxZDBlIiwKICAg
ICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICA0MDEKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQw
Mi94c2E0MDItNC4xMy0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIx
Nzg0OGRmZWQ0N2Y1MmI0NzljNGU3ZWI0MTI2NzFhZWM1NzU3MzI5IiwKICAg
ICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICA0MDEKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQw
Mi94c2E0MDItNC4xNC0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjE1IjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI2
NDI0OWFmZWI2M2NmN2Q3MGI0ZmFmMDJlNzZkZjVlZWQ4MjM3MWY5IiwKICAg
ICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICA0MDEKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQw
Mi94c2E0MDItNC4xNS0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjE2IjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI4
ZTExZWM4ZmJmNmY5MzNmODg1NGY0YmM1NDIyNjY1MzMxNjkwM2YyIiwKICAg
ICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICA0MDEKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTQw
Mi94c2E0MDItNC4xNi0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVz
IjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjog
IjQ5ZGQ1MmZiMTMxMWRhZGFiMjlmNjYzNGQwYmMxZjRjMDIyYzM1N2EiLAog
ICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDQwMQogICAgICAg
ICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNh
NDAyL3hzYTQwMi0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-1.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3BhZ2U6IEludHJvZHVjZSBfUEFHRV8qIGNvbnN0
YW50cyBmb3IgbWVtb3J5IHR5cGVzCgouLi4gcmF0aGVyIHRoYW4gb3BlbmNv
ZGluZyB0aGUgUEFUL1BDRC9QV1QgYXR0cmlidXRlcyBpbiBfX1BBR0VfSFlQ
RVJWSVNPUl8qCmNvbnN0YW50cy4gIFRoZXNlIGFyZSBnb2luZyB0byBiZSBu
ZWVkZWQgYnkgZm9ydGhjb21pbmcgbG9naWMuCgpObyBmdW5jdGlvbmFsIGNo
YW5nZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vcGFnZS5o
IGIveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL3BhZ2UuaAppbmRleCA0NDc0
OTFjZDA5ZDYuLmI1ODUyMzVkMDY0YSAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L2luY2x1ZGUvYXNtL3BhZ2UuaAorKysgYi94ZW4vYXJjaC94ODYvaW5j
bHVkZS9hc20vcGFnZS5oCkBAIC0zMzEsNiArMzMxLDE0IEBAIHZvaWQgZWZp
X3VwZGF0ZV9sNF9wZ3RhYmxlKHVuc2lnbmVkIGludCBsNGlkeCwgbDRfcGdl
bnRyeV90KTsKIAogI2RlZmluZSBQQUdFX0NBQ0hFX0FUVFJTIChfUEFHRV9Q
QVQgfCBfUEFHRV9QQ0QgfCBfUEFHRV9QV1QpCiAKKy8qIE1lbW9yeSB0eXBl
cywgZW5jb2RlZCB1bmRlciBYZW4ncyBjaG9pY2Ugb2YgTVNSX1BBVC4gKi8K
KyNkZWZpbmUgX1BBR0VfV0IgICAgICAgICAoICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAwKQorI2RlZmluZSBfUEFHRV9XVCAgICAgICAgICgg
ICAgICAgICAgICAgICAgICAgICAgICBfUEFHRV9QV1QpCisjZGVmaW5lIF9Q
QUdFX1VDTSAgICAgICAgKCAgICAgICAgICAgIF9QQUdFX1BDRCAgICAgICAg
ICAgICkKKyNkZWZpbmUgX1BBR0VfVUMgICAgICAgICAoICAgICAgICAgICAg
X1BBR0VfUENEIHwgX1BBR0VfUFdUKQorI2RlZmluZSBfUEFHRV9XQyAgICAg
ICAgIChfUEFHRV9QQVQgICAgICAgICAgICAgICAgICAgICAgICApCisjZGVm
aW5lIF9QQUdFX1dQICAgICAgICAgKF9QQUdFX1BBVCB8ICAgICAgICAgICAg
IF9QQUdFX1BXVCkKKwogLyoKICAqIERlYnVnIG9wdGlvbjogRW5zdXJlIHRo
YXQgZ3JhbnRlZCBtYXBwaW5ncyBhcmUgbm90IGltcGxpY2l0bHkgdW5tYXBw
ZWQuCiAgKiBXQVJOSU5HOiBUaGlzIHdpbGwgbmVlZCB0byBiZSBkaXNhYmxl
ZCB0byBydW4gT1NlcyB0aGF0IHVzZSB0aGUgc3BhcmUgUFRFCkBAIC0zNDks
MTAgKzM1NywxMCBAQCB2b2lkIGVmaV91cGRhdGVfbDRfcGd0YWJsZSh1bnNp
Z25lZCBpbnQgbDRpZHgsIGw0X3BnZW50cnlfdCk7CiAjZGVmaW5lIF9fUEFH
RV9IWVBFUlZJU09SX1JYICAgICAgKF9QQUdFX1BSRVNFTlQgfCBfUEFHRV9B
Q0NFU1NFRCkKICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1IgICAgICAgICAo
X19QQUdFX0hZUEVSVklTT1JfUlggfCBcCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIF9QQUdFX0RJUlRZIHwgX1BBR0VfUlcpCi0jZGVm
aW5lIF9fUEFHRV9IWVBFUlZJU09SX1dUICAgICAgKF9fUEFHRV9IWVBFUlZJ
U09SIHwgX1BBR0VfUFdUKQotI2RlZmluZSBfX1BBR0VfSFlQRVJWSVNPUl9V
Q01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdFX1BDRCkKLSNkZWZp
bmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19QQUdFX0hZUEVSVklT
T1IgfCBfUEFHRV9QQ0QgfCBfUEFHRV9QV1QpCi0jZGVmaW5lIF9fUEFHRV9I
WVBFUlZJU09SX1dDICAgICAgKF9fUEFHRV9IWVBFUlZJU09SIHwgX1BBR0Vf
UEFUKQorI2RlZmluZSBfX1BBR0VfSFlQRVJWSVNPUl9XVCAgICAgIChfX1BB
R0VfSFlQRVJWSVNPUiB8IF9QQUdFX1dUKQorI2RlZmluZSBfX1BBR0VfSFlQ
RVJWSVNPUl9VQ01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdFX1VD
TSkKKyNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19QQUdF
X0hZUEVSVklTT1IgfCBfUEFHRV9VQykKKyNkZWZpbmUgX19QQUdFX0hZUEVS
VklTT1JfV0MgICAgICAoX19QQUdFX0hZUEVSVklTT1IgfCBfUEFHRV9XQykK
ICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfU0hTVEsgICAoX19QQUdFX0hZ
UEVSVklTT1JfUk8gfCBfUEFHRV9ESVJUWSkKIAogI2RlZmluZSBNQVBfU01B
TExfUEFHRVMgX1BBR0VfQVZBSUwwIC8qIGRvbid0IHVzZSBzdXBlcnBhZ2Vz
IG1hcHBpbmdzICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-2.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBEb24ndCBjaGFuZ2UgdGhlIGNhY2hlYWJpbGl0
eSBvZiB0aGUgZGlyZWN0bWFwCgpDaGFuZ2VzZXQgNTVmOTdmNDliN2NlICgi
eDg2OiBDaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gMToxIHBhZ2Ug
bWFwcGluZ3MKaW4gcmVzcG9uc2UgdG8gZ3Vlc3QgbWFwcGluZyByZXF1ZXN0
cyIpIGF0dGVtcHRlZCB0byBrZWVwIHRoZSBjYWNoZWFiaWxpdHkKY29uc2lz
dGVudCBiZXR3ZWVuIGRpZmZlcmVudCBtYXBwaW5ncyBvZiB0aGUgc2FtZSBw
YWdlLgoKVGhlIHJlYXNvbiB3YXNuJ3QgZGVzY3JpYmVkIGluIHRoZSBjaGFu
Z2Vsb2csIGJ1dCBpdCBpcyB1bmRlcnN0b29kIHRvIGJlIGluCnJlZ2FyZHMg
dG8gYSBjb25jZXJuIG92ZXIgbWFjaGluZSBjaGVjayBleGNlcHRpb25zLCBv
d2luZyB0byBlcnJhdGEgd2hlbiB1c2luZwptaXhlZCBjYWNoZWFiaWxpdGll
cy4gIEl0IGRpZCB0aGlzIHByaW1hcmlseSBieSB1cGRhdGluZyBYZW4ncyBt
YXBwaW5nIG9mIHRoZQpwYWdlIGluIHRoZSBkaXJlY3QgbWFwIHdoZW4gdGhl
IGd1ZXN0IG1hcHBlZCBhIHBhZ2Ugd2l0aCByZWR1Y2VkIGNhY2hlYWJpbGl0
eS4KClVuZm9ydHVuYXRlbHksIHRoZSBsb2dpYyBkaWRuJ3QgYWN0dWFsbHkg
cHJldmVudCBtaXhlZCBjYWNoZWFiaWxpdHkgZnJvbQpvY2N1cnJpbmc6CiAq
IEEgZ3Vlc3QgY291bGQgbWFwIGEgcGFnZSBub3JtYWxseSwgYW5kIHRoZW4g
bWFwIHRoZSBzYW1lIHBhZ2Ugd2l0aAogICBkaWZmZXJlbnQgY2FjaGVhYmls
aXR5OyBub3RoaW5nIHByZXZlbnRlZCB0aGlzLgogKiBUaGUgY2FjaGVhYmls
aXR5IG9mIHRoZSBkaXJlY3RtYXAgd2FzIGFsd2F5cyBsYXRlc3QtdGFrZXMt
cHJlY2VkZW5jZSBpbgogICB0ZXJtcyBvZiBndWVzdCByZXF1ZXN0cy4KICog
R3JhbnQtbWFwcGVkIGZyYW1lcyB3aXRoIGxlc3NlciBjYWNoZWFiaWxpdHkg
ZGlkbid0IGFkanVzdCB0aGUgcGFnZSdzCiAgIGNhY2hlYXR0ciBzZXR0aW5n
cy4KICogVGhlIG1hcF9kb21haW5fcGFnZSgpIGZ1bmN0aW9uIHN0aWxsIHVu
Y29uZGl0aW9uYWxseSBjcmVhdGVkIFdCIG1hcHBpbmdzLAogICBpcnJlc3Bl
Y3RpdmUgb2YgdGhlIHBhZ2UncyBjYWNoZWF0dHIgc2V0dGluZ3MuCgpBZGRp
dGlvbmFsbHksIHVwZGF0ZV94ZW5fbWFwcGluZ3MoKSBoYWQgYSBidWcgd2hl
cmUgdGhlIGFsaWFzIGNhbGN1bGF0aW9uIHdhcwp3cm9uZyBmb3IgbWZuJ3Mg
d2hpY2ggd2VyZSAuaW5pdCBjb250ZW50LCB3aGljaCBzaG91bGQgaGF2ZSBi
ZWVuIHRyZWF0ZWQgYXMKZnVsbHkgZ3Vlc3QgcGFnZXMsIG5vdCBYZW4gcGFn
ZXMuCgpXb3JzZSB5ZXQsIHRoZSBsb2dpYyBpbnRyb2R1Y2VkIGEgdnVsbmVy
YWJpbGl0eSB3aGVyZWJ5IG5lY2Vzc2FyeQpwYWdldGFibGUvc2VnZGVzYyBh
ZGp1c3RtZW50cyBtYWRlIGJ5IFhlbiBpbiB0aGUgdmFsaWRhdGlvbiBsb2dp
YyBjb3VsZCBiZWNvbWUKbm9uLWNvaGVyZW50IGJldHdlZW4gdGhlIGNhY2hl
IGFuZCBtYWluIG1lbW9yeS4gIFRoZSBDUFUgY291bGQgc3Vic2VxdWVudGx5
Cm9wZXJhdGUgb24gdGhlIHN0YWxlIHZhbHVlIGluIHRoZSBjYWNoZSwgcmF0
aGVyIHRoYW4gdGhlIHNhZmUgdmFsdWUgaW4gbWFpbgptZW1vcnkuCgpUaGUg
ZGlyZWN0bWFwIGNvbnRhaW5zIHByaW1hcmlseSBtYXBwaW5ncyBvZiBSQU0u
ICBQQVQvTVRSUiBjb25mbGljdApyZXNvbHV0aW9uIGlzIGFzeW1tZXRyaWMs
IGFuZCBnZW5lcmFsbHkgZm9yIE1UUlI9V0IgcmFuZ2VzLCBQQVQgb2YgbGVz
c2VyCmNhY2hlYWJpbGl0eSByZXNvbHZlcyB0byBiZWluZyBjb2hlcmVudC4g
IFRoZSBzcGVjaWFsIGNhc2UgaXMgV0MgbWFwcGluZ3MsCndoaWNoIGFyZSBu
b24tY29oZXJlbnQgYWdhaW5zdCBNVFJSPVdCIHJlZ2lvbnMgKGV4Y2VwdCBm
b3IgZnVsbHktY29oZXJlbnQKQ1BVcykuCgpYZW4gbXVzdCBub3QgaGF2ZSBh
bnkgV0MgY2FjaGVhYmlsaXR5IGluIHRoZSBkaXJlY3RtYXAsIHRvIHByZXZl
bnQgWGVuJ3MKYWN0aW9ucyBmcm9tIGNyZWF0aW5nIG5vbi1jb2hlcmVuY3ku
ICAoR3Vlc3QgYWN0aW9ucyBjcmVhdGluZyBub24tY29oZXJlbmN5IGlzCmRl
YWx0IHdpdGggaW4gc3Vic2VxdWVudCBwYXRjaGVzLikgIEFzIGFsbCBtZW1v
cnkgdHlwZXMgZm9yIE1UUlI9V0IgcmFuZ2VzCmludGVyLW9wZXJhdGUgY29o
ZXJlbnRseSwgc28gbGVhdmUgWGVuJ3MgZGlyZWN0bWFwIG1hcHBpbmdzIGFz
IFdCLgoKT25seSBQViBndWVzdHMgd2l0aCBhY2Nlc3MgdG8gZGV2aWNlcyBj
YW4gdXNlIHJlZHVjZWQtY2FjaGVhYmlsaXR5IG1hcHBpbmdzIHRvCmJlZ2lu
IHdpdGgsIGFuZCB0aGV5J3JlIHRydXN0ZWQgbm90IHRvIG1vdW50IERvU3Mg
YWdhaW5zdCB0aGUgc3lzdGVtIGFueXdheS4KCkRyb3AgUEdDX2NhY2hlYXR0
cl97YmFzZSxtYXNrfSBlbnRpcmVseSwgYW5kIHRoZSBsb2dpYyB0byBtYW5p
cHVsYXRlIHRoZW0uClNoaWZ0IHRoZSBsYXRlciBQR0NfKiBjb25zdGFudHMg
dXAsIHRvIGdhaW4gMyBleHRyYSBiaXRzIGluIHRoZSBtYWluIHJlZmVyZW5j
ZQpjb3VudC4gIFJldGFpbiB0aGUgY2hlY2sgaW4gZ2V0X3BhZ2VfZnJvbV9s
MWUoKSBmb3Igc3BlY2lhbF9wYWdlcygpIGJlY2F1c2UgYQpndWVzdCBoYXMg
bm8gYnVzaW5lc3MgdXNpbmcgcmVkdWNlZCBjYWNoZWFiaWxpdHkgb24gdGhl
c2UuCgpUaGlzIHJldmVydHMgY2hhbmdlc2V0IDU1Zjk3ZjQ5YjdjZTZjMzUy
MGM1NTVkMTljYWFjNmNmM2Y5YTVkZjAKClRoaXMgaXMgQ1ZFLTIwMjItMjYz
NjMsIHBhcnQgb2YgWFNBLTQwMi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Cgpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL21tLmggYi94
ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbW0uaAppbmRleCBmMmY3YjY5MDJj
ZTQuLjYwNWMxMDE1MjgwNSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL21tLmgKKysrIGIveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNt
L21tLmgKQEAgLTY5LDI1ICs2OSwyMiBAQAogIC8qIFNldCB3aGVuIGlzIHVz
aW5nIGEgcGFnZSBhcyBhIHBhZ2UgdGFibGUgKi8KICNkZWZpbmUgX1BHQ19w
YWdlX3RhYmxlICAgUEdfc2hpZnQoMykKICNkZWZpbmUgUEdDX3BhZ2VfdGFi
bGUgICAgUEdfbWFzaygxLCAzKQotIC8qIDMtYml0IFBBVC9QQ0QvUFdUIGNh
Y2hlLWF0dHJpYnV0ZSBoaW50LiAqLwotI2RlZmluZSBQR0NfY2FjaGVhdHRy
X2Jhc2UgUEdfc2hpZnQoNikKLSNkZWZpbmUgUEdDX2NhY2hlYXR0cl9tYXNr
IFBHX21hc2soNywgNikKICAvKiBQYWdlIGlzIGJyb2tlbj8gKi8KLSNkZWZp
bmUgX1BHQ19icm9rZW4gICAgICAgUEdfc2hpZnQoNykKLSNkZWZpbmUgUEdD
X2Jyb2tlbiAgICAgICAgUEdfbWFzaygxLCA3KQorI2RlZmluZSBfUEdDX2Jy
b2tlbiAgICAgICBQR19zaGlmdCg0KQorI2RlZmluZSBQR0NfYnJva2VuICAg
ICAgICBQR19tYXNrKDEsIDQpCiAgLyogTXV0dWFsbHktZXhjbHVzaXZlIHBh
Z2Ugc3RhdGVzOiB7IGludXNlLCBvZmZsaW5pbmcsIG9mZmxpbmVkLCBmcmVl
IH0uICovCi0jZGVmaW5lIFBHQ19zdGF0ZSAgICAgICAgIFBHX21hc2soMywg
OSkKLSNkZWZpbmUgUEdDX3N0YXRlX2ludXNlICAgUEdfbWFzaygwLCA5KQot
I2RlZmluZSBQR0Nfc3RhdGVfb2ZmbGluaW5nIFBHX21hc2soMSwgOSkKLSNk
ZWZpbmUgUEdDX3N0YXRlX29mZmxpbmVkIFBHX21hc2soMiwgOSkKLSNkZWZp
bmUgUEdDX3N0YXRlX2ZyZWUgICAgUEdfbWFzaygzLCA5KQorI2RlZmluZSBQ
R0Nfc3RhdGUgICAgICAgICAgIFBHX21hc2soMywgNikKKyNkZWZpbmUgUEdD
X3N0YXRlX2ludXNlICAgICBQR19tYXNrKDAsIDYpCisjZGVmaW5lIFBHQ19z
dGF0ZV9vZmZsaW5pbmcgUEdfbWFzaygxLCA2KQorI2RlZmluZSBQR0Nfc3Rh
dGVfb2ZmbGluZWQgIFBHX21hc2soMiwgNikKKyNkZWZpbmUgUEdDX3N0YXRl
X2ZyZWUgICAgICBQR19tYXNrKDMsIDYpCiAjZGVmaW5lIHBhZ2Vfc3RhdGVf
aXMocGcsIHN0KSAoKChwZyktPmNvdW50X2luZm8mUEdDX3N0YXRlKSA9PSBQ
R0Nfc3RhdGVfIyNzdCkKIC8qIFBhZ2UgaXMgbm90IHJlZmVyZW5jZSBjb3Vu
dGVkIChzZWUgYmVsb3cgZm9yIGNhdmVhdHMpICovCi0jZGVmaW5lIF9QR0Nf
ZXh0cmEgICAgICAgIFBHX3NoaWZ0KDEwKQotI2RlZmluZSBQR0NfZXh0cmEg
ICAgICAgICBQR19tYXNrKDEsIDEwKQorI2RlZmluZSBfUEdDX2V4dHJhICAg
ICAgICBQR19zaGlmdCg3KQorI2RlZmluZSBQR0NfZXh0cmEgICAgICAgICBQ
R19tYXNrKDEsIDcpCiAKIC8qIENvdW50IG9mIHJlZmVyZW5jZXMgdG8gdGhp
cyBmcmFtZS4gKi8KLSNkZWZpbmUgUEdDX2NvdW50X3dpZHRoICAgUEdfc2hp
ZnQoMTApCisjZGVmaW5lIFBHQ19jb3VudF93aWR0aCAgIFBHX3NoaWZ0KDcp
CiAjZGVmaW5lIFBHQ19jb3VudF9tYXNrICAgICgoMVVMPDxQR0NfY291bnRf
d2lkdGgpLTEpCiAKIC8qCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0u
YyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDM0YmI5ZGRkYWI4ZC4uMmI1
ZjViNTUzZDk0IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtODA4LDI5ICs4MDgsNiBAQCBib29s
IGlzX21lbW9yeV9ob2xlKG1mbl90IHN0YXJ0LCBtZm5fdCBlbmQpCiAgICAg
cmV0dXJuIHRydWU7CiB9CiAKLXN0YXRpYyBpbnQgdXBkYXRlX3hlbl9tYXBw
aW5ncyh1bnNpZ25lZCBsb25nIG1mbiwgdW5zaWduZWQgaW50IGNhY2hlYXR0
cikKLXsKLSAgICBpbnQgZXJyID0gMDsKLSAgICBib29sIGFsaWFzID0gbWZu
ID49IFBGTl9ET1dOKHhlbl9waHlzX3N0YXJ0KSAmJgotICAgICAgICAgICAg
ICAgICBtZm4gPCAgUEZOX1VQKHhlbl9waHlzX3N0YXJ0ICsgKHVuc2lnbmVk
IGxvbmcpX18yTV9yd2RhdGFfZW5kIC0KLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBYRU5fVklSVF9TVEFSVCk7Ci0gICAgdW5zaWduZWQgbG9u
ZyB4ZW5fdmEgPQotICAgICAgICBYRU5fVklSVF9TVEFSVCArICgobWZuIC0g
UEZOX0RPV04oeGVuX3BoeXNfc3RhcnQpKSA8PCBQQUdFX1NISUZUKTsKLQot
ICAgIGlmICggYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJFX1hFTl9TRUxGU05P
T1ApICkKLSAgICAgICAgcmV0dXJuIDA7Ci0KLSAgICBpZiAoIHVubGlrZWx5
KGFsaWFzKSAmJiBjYWNoZWF0dHIgKQotICAgICAgICBlcnIgPSBtYXBfcGFn
ZXNfdG9feGVuKHhlbl92YSwgX21mbihtZm4pLCAxLCAwKTsKLSAgICBpZiAo
ICFlcnIgKQotICAgICAgICBlcnIgPSBtYXBfcGFnZXNfdG9feGVuKCh1bnNp
Z25lZCBsb25nKW1mbl90b192aXJ0KG1mbiksIF9tZm4obWZuKSwgMSwKLSAg
ICAgICAgICAgICAgICAgICAgIFBBR0VfSFlQRVJWSVNPUiB8IGNhY2hlYXR0
cl90b19wdGVfZmxhZ3MoY2FjaGVhdHRyKSk7Ci0gICAgaWYgKCB1bmxpa2Vs
eShhbGlhcykgJiYgIWNhY2hlYXR0ciAmJiAhZXJyICkKLSAgICAgICAgZXJy
ID0gbWFwX3BhZ2VzX3RvX3hlbih4ZW5fdmEsIF9tZm4obWZuKSwgMSwgUEFH
RV9IWVBFUlZJU09SKTsKLQotICAgIHJldHVybiBlcnI7Ci19Ci0KICNpZm5k
ZWYgTkRFQlVHCiBzdHJ1Y3QgbW1pb19lbXVsX3JhbmdlX2N0eHQgewogICAg
IGNvbnN0IHN0cnVjdCBkb21haW4gKmQ7CkBAIC0xMDM4LDQ3ICsxMDE1LDE0
IEBAIGdldF9wYWdlX2Zyb21fbDFlKAogICAgICAgICBnb3RvIGNvdWxkX25v
dF9waW47CiAgICAgfQogCi0gICAgaWYgKCBwdGVfZmxhZ3NfdG9fY2FjaGVh
dHRyKGwxZikgIT0KLSAgICAgICAgICgocGFnZS0+Y291bnRfaW5mbyAmIFBH
Q19jYWNoZWF0dHJfbWFzaykgPj4gUEdDX2NhY2hlYXR0cl9iYXNlKSApCisg
ICAgaWYgKCAobDFmICYgUEFHRV9DQUNIRV9BVFRSUykgIT0gX1BBR0VfV0Ig
JiYgaXNfc3BlY2lhbF9wYWdlKHBhZ2UpICkKICAgICB7Ci0gICAgICAgIHVu
c2lnbmVkIGxvbmcgeCwgbngsIHkgPSBwYWdlLT5jb3VudF9pbmZvOwotICAg
ICAgICB1bnNpZ25lZCBsb25nIGNhY2hlYXR0ciA9IHB0ZV9mbGFnc190b19j
YWNoZWF0dHIobDFmKTsKLSAgICAgICAgaW50IGVycjsKLQotICAgICAgICBp
ZiAoIGlzX3NwZWNpYWxfcGFnZShwYWdlKSApCi0gICAgICAgIHsKLSAgICAg
ICAgICAgIGlmICggd3JpdGUgKQotICAgICAgICAgICAgICAgIHB1dF9wYWdl
X3R5cGUocGFnZSk7Ci0gICAgICAgICAgICBwdXRfcGFnZShwYWdlKTsKLSAg
ICAgICAgICAgIGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLAotICAgICAgICAg
ICAgICAgICAgICAgIkF0dGVtcHQgdG8gY2hhbmdlIGNhY2hlIGF0dHJpYnV0
ZXMgb2YgWGVuIGhlYXAgcGFnZVxuIik7Ci0gICAgICAgICAgICByZXR1cm4g
LUVBQ0NFUzsKLSAgICAgICAgfQotCi0gICAgICAgIGRvIHsKLSAgICAgICAg
ICAgIHggID0geTsKLSAgICAgICAgICAgIG54ID0gKHggJiB+UEdDX2NhY2hl
YXR0cl9tYXNrKSB8IChjYWNoZWF0dHIgPDwgUEdDX2NhY2hlYXR0cl9iYXNl
KTsKLSAgICAgICAgfSB3aGlsZSAoICh5ID0gY21weGNoZygmcGFnZS0+Y291
bnRfaW5mbywgeCwgbngpKSAhPSB4ICk7Ci0KLSAgICAgICAgZXJyID0gdXBk
YXRlX3hlbl9tYXBwaW5ncyhtZm4sIGNhY2hlYXR0cik7Ci0gICAgICAgIGlm
ICggdW5saWtlbHkoZXJyKSApCi0gICAgICAgIHsKLSAgICAgICAgICAgIGNh
Y2hlYXR0ciA9IHkgJiBQR0NfY2FjaGVhdHRyX21hc2s7Ci0gICAgICAgICAg
ICBkbyB7Ci0gICAgICAgICAgICAgICAgeCAgPSB5OwotICAgICAgICAgICAg
ICAgIG54ID0gKHggJiB+UEdDX2NhY2hlYXR0cl9tYXNrKSB8IGNhY2hlYXR0
cjsKLSAgICAgICAgICAgIH0gd2hpbGUgKCAoeSA9IGNtcHhjaGcoJnBhZ2Ut
PmNvdW50X2luZm8sIHgsIG54KSkgIT0geCApOwotCi0gICAgICAgICAgICBp
ZiAoIHdyaXRlICkKLSAgICAgICAgICAgICAgICBwdXRfcGFnZV90eXBlKHBh
Z2UpOwotICAgICAgICAgICAgcHV0X3BhZ2UocGFnZSk7Ci0KLSAgICAgICAg
ICAgIGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLCAiRXJyb3IgdXBkYXRpbmcg
bWFwcGluZ3MgZm9yIG1mbiAlIiBQUklfbWZuCi0gICAgICAgICAgICAgICAg
ICAgICAiIChwZm4gJSIgUFJJX3BmbiAiLCBmcm9tIEwxIGVudHJ5ICUiIFBS
SXB0ZSAiKSBmb3IgZCVkXG4iLAotICAgICAgICAgICAgICAgICAgICAgbWZu
LCBnZXRfZ3Bmbl9mcm9tX21mbihtZm4pLAotICAgICAgICAgICAgICAgICAg
ICAgbDFlX2dldF9pbnRwdGUobDFlKSwgbDFlX293bmVyLT5kb21haW5faWQp
OwotICAgICAgICAgICAgcmV0dXJuIGVycjsKLSAgICAgICAgfQorICAgICAg
ICBpZiAoIHdyaXRlICkKKyAgICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFn
ZSk7CisgICAgICAgIHB1dF9wYWdlKHBhZ2UpOworICAgICAgICBnZHByaW50
ayhYRU5MT0dfV0FSTklORywKKyAgICAgICAgICAgICAgICAgIkF0dGVtcHQg
dG8gY2hhbmdlIGNhY2hlIGF0dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFnZVxu
Iik7CisgICAgICAgIHJldHVybiAtRUFDQ0VTOwogICAgIH0KIAogICAgIHJl
dHVybiAwOwpAQCAtMjQ5NiwyNSArMjQ0MCwxMCBAQCBzdGF0aWMgaW50IG1v
ZF9sNF9lbnRyeShsNF9wZ2VudHJ5X3QgKnBsNGUsCiAgKi8KIHN0YXRpYyBp
bnQgY2xlYW51cF9wYWdlX21hcHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBh
Z2UpCiB7Ci0gICAgdW5zaWduZWQgaW50IGNhY2hlYXR0ciA9Ci0gICAgICAg
IChwYWdlLT5jb3VudF9pbmZvICYgUEdDX2NhY2hlYXR0cl9tYXNrKSA+PiBQ
R0NfY2FjaGVhdHRyX2Jhc2U7CiAgICAgaW50IHJjID0gMDsKICAgICB1bnNp
Z25lZCBsb25nIG1mbiA9IG1mbl94KHBhZ2VfdG9fbWZuKHBhZ2UpKTsKIAog
ICAgIC8qCi0gICAgICogSWYgd2UndmUgbW9kaWZpZWQgeGVuIG1hcHBpbmdz
IGFzIGEgcmVzdWx0IG9mIGd1ZXN0IGNhY2hlCi0gICAgICogYXR0cmlidXRl
cywgcmVzdG9yZSB0aGVtIHRvIHRoZSAibm9ybWFsIiBzdGF0ZS4KLSAgICAg
Ki8KLSAgICBpZiAoIHVubGlrZWx5KGNhY2hlYXR0cikgKQotICAgIHsKLSAg
ICAgICAgcGFnZS0+Y291bnRfaW5mbyAmPSB+UEdDX2NhY2hlYXR0cl9tYXNr
OwotCi0gICAgICAgIEJVR19PTihpc19zcGVjaWFsX3BhZ2UocGFnZSkpOwot
Ci0gICAgICAgIHJjID0gdXBkYXRlX3hlbl9tYXBwaW5ncyhtZm4sIDApOwot
ICAgIH0KLQotICAgIC8qCiAgICAgICogSWYgdGhpcyBtYXkgYmUgaW4gYSBQ
ViBkb21haW4ncyBJT01NVSwgcmVtb3ZlIGl0LgogICAgICAqCiAgICAgICog
TkIgdGhhdCB3cml0YWJsZSB4ZW5oZWFwIHBhZ2VzIGhhdmUgdGhlaXIgdHlw
ZSBzZXQgYW5kIGNsZWFyZWQgYnkK

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-3.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBTcGxpdCBjYWNoZV9mbHVzaCgpIG91dCBvZiBj
YWNoZV93cml0ZWJhY2soKQoKU3Vic2VxdWVudCBjaGFuZ2VzIHdpbGwgd2Fu
dCBhIGZ1bGx5IGZsdXNoaW5nIHZlcnNpb24uCgpVc2UgdGhlIG5ldyBoZWxw
ZXIgcmF0aGVyIHRoYW4gb3BlbmNvZGluZyBpdCBpbiBmbHVzaF9hcmVhX2xv
Y2FsKCkuICBUaGlzCnJlc29sdmVzIGFuIG91dHN0YW5kaW5nIGlzc3VlIHdo
ZXJlIHRoZSBjb25kaXRpb25hbCBzZmVuY2UgaXMgb24gdGhlIHdyb25nCnNp
ZGUgb2YgdGhlIGNsZmx1c2hvcHQgbG9vcC4gIGNsZmx1c2hvcHQgaXMgb3Jk
ZXJlZCB3aXRoIHJlc3BlY3QgdG8gb2xkZXIKc3RvcmVzLCBub3QgdG8geW91
bmdlciBzdG9yZXMuCgpSZW5hbWUgZ250dGFiX2NhY2hlX2ZsdXNoKCkncyBo
ZWxwZXIgdG8gYXZvaWQgY29sbGlkaW5nIGluIG5hbWUuCmdyYW50X3RhYmxl
LmMgY2FuIHNlZSB0aGUgcHJvdG90eXBlIGZyb20gY2FjaGUuaCBzbyB0aGUg
YnVpbGQgZmFpbHMKb3RoZXJ3aXNlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00
MDIuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9m
bHVzaHRsYi5jIGIveGVuL2FyY2gveDg2L2ZsdXNodGxiLmMKaW5kZXggMGM1
YTFkZTQ0MzJhLi40NzFiM2UzMWM0NmMgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9mbHVzaHRsYi5jCisrKyBiL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5j
CkBAIC0yMzUsNyArMjM1LDcgQEAgdW5zaWduZWQgaW50IGZsdXNoX2FyZWFf
bG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFncykKICAg
ICBpZiAoIGZsYWdzICYgRkxVU0hfQ0FDSEUgKQogICAgIHsKICAgICAgICAg
Y29uc3Qgc3RydWN0IGNwdWluZm9feDg2ICpjID0gJmN1cnJlbnRfY3B1X2Rh
dGE7Ci0gICAgICAgIHVuc2lnbmVkIGxvbmcgaSwgc3ogPSAwOworICAgICAg
ICB1bnNpZ25lZCBsb25nIHN6ID0gMDsKIAogICAgICAgICBpZiAoIG9yZGVy
IDwgKEJJVFNfUEVSX0xPTkcgLSBQQUdFX1NISUZUKSApCiAgICAgICAgICAg
ICBzeiA9IDFVTCA8PCAob3JkZXIgKyBQQUdFX1NISUZUKTsKQEAgLTI0NSwx
MiArMjQ1LDcgQEAgdW5zaWduZWQgaW50IGZsdXNoX2FyZWFfbG9jYWwoY29u
c3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFncykKICAgICAgICAgICAg
ICBjLT54ODZfY2xmbHVzaF9zaXplICYmIGMtPng4Nl9jYWNoZV9zaXplICYm
IHN6ICYmCiAgICAgICAgICAgICAgKChzeiA+PiAxMCkgPCBjLT54ODZfY2Fj
aGVfc2l6ZSkgKQogICAgICAgICB7Ci0gICAgICAgICAgICBhbHRlcm5hdGl2
ZSgiIiwgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQpOwotICAg
ICAgICAgICAgZm9yICggaSA9IDA7IGkgPCBzejsgaSArPSBjLT54ODZfY2xm
bHVzaF9zaXplICkKLSAgICAgICAgICAgICAgICBhbHRlcm5hdGl2ZV9pbnB1
dCgiZHM7IGNsZmx1c2ggJTAiLAotICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICJkYXRhMTYgY2xmbHVzaCAlMCIsICAgICAgLyogY2xmbHVz
aG9wdCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFg4
Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIm0iICgoKGNvbnN0IGNoYXIgKil2YSlbaV0pKTsKKyAg
ICAgICAgICAgIGNhY2hlX2ZsdXNoKHZhLCBzeik7CiAgICAgICAgICAgICBm
bGFncyAmPSB+RkxVU0hfQ0FDSEU7CiAgICAgICAgIH0KICAgICAgICAgZWxz
ZQpAQCAtMjY1LDcgKzI2MCw3IEBAIHVuc2lnbmVkIGludCBmbHVzaF9hcmVh
X2xvY2FsKGNvbnN0IHZvaWQgKnZhLCB1bnNpZ25lZCBpbnQgZmxhZ3MpCiAg
ICAgcmV0dXJuIGZsYWdzOwogfQogCi12b2lkIGNhY2hlX3dyaXRlYmFjayhj
b25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKK3ZvaWQgY2Fj
aGVfZmx1c2goY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUp
CiB7CiAgICAgLyoKICAgICAgKiBUaGlzIGZ1bmN0aW9uIG1heSBiZSBjYWxs
ZWQgYmVmb3JlIGN1cnJlbnRfY3B1X2RhdGEgaXMgZXN0YWJsaXNoZWQuCkBA
IC0yNzcsNiArMjcyLDM4IEBAIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0
IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogICAgIGFkZHIgLT0g
KHVuc2lnbmVkIGxvbmcpYWRkciAmIChjbGZsdXNoX3NpemUgLSAxKTsKICAg
ICBmb3IgKCA7IGFkZHIgPCBlbmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkK
ICAgICB7CisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVnYXJkaW5n
IHRoZSAiZHMiIHByZWZpeCB1c2U6IGl0J3MgZmFzdGVyIHRvIGRvIGEgY2xm
bHVzaAorICAgICAgICAgKiArIHByZWZpeCB0aGFuIGEgY2xmbHVzaCArIG5v
cCwgYW5kIGhlbmNlIHRoZSBwcmVmaXggaXMgYWRkZWQgaW5zdGVhZAorICAg
ICAgICAgKiBvZiBsZXR0aW5nIHRoZSBhbHRlcm5hdGl2ZSBmcmFtZXdvcmsg
ZmlsbCB0aGUgZ2FwIGJ5IGFwcGVuZGluZyBub3BzLgorICAgICAgICAgKi8K
KyAgICAgICAgYWx0ZXJuYXRpdmVfaW8oImRzOyBjbGZsdXNoICVbcF0iLAor
ICAgICAgICAgICAgICAgICAgICAgICAiZGF0YTE2IGNsZmx1c2ggJVtwXSIs
IC8qIGNsZmx1c2hvcHQgKi8KKyAgICAgICAgICAgICAgICAgICAgICAgWDg2
X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAgICAgICAg
Lyogbm8gb3V0cHV0cyAqLywKKyAgICAgICAgICAgICAgICAgICAgICAgW3Bd
ICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CisgICAgfQorCisgICAg
YWx0ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNI
T1BUKTsKK30KKwordm9pZCBjYWNoZV93cml0ZWJhY2soY29uc3Qgdm9pZCAq
YWRkciwgdW5zaWduZWQgaW50IHNpemUpCit7CisgICAgdW5zaWduZWQgaW50
IGNsZmx1c2hfc2l6ZTsKKyAgICBjb25zdCB2b2lkICplbmQgPSBhZGRyICsg
c2l6ZTsKKworICAgIC8qIEZhbGwgYmFjayB0byBDTEZMVVNIeyxPUFR9IHdo
ZW4gQ0xXQiBpc24ndCBhdmFpbGFibGUuICovCisgICAgaWYgKCAhYm9vdF9j
cHVfaGFzKFg4Nl9GRUFUVVJFX0NMV0IpICkKKyAgICAgICAgcmV0dXJuIGNh
Y2hlX2ZsdXNoKGFkZHIsIHNpemUpOworCisgICAgLyoKKyAgICAgKiBUaGlz
IGZ1bmN0aW9uIG1heSBiZSBjYWxsZWQgYmVmb3JlIGN1cnJlbnRfY3B1X2Rh
dGEgaXMgZXN0YWJsaXNoZWQuCisgICAgICogSGVuY2UgYSBmYWxsYmFjayBp
cyBuZWVkZWQgdG8gcHJldmVudCB0aGUgbG9vcCBiZWxvdyBiZWNvbWluZyBp
bmZpbml0ZS4KKyAgICAgKi8KKyAgICBjbGZsdXNoX3NpemUgPSBjdXJyZW50
X2NwdV9kYXRhLng4Nl9jbGZsdXNoX3NpemUgPzogMTY7CisgICAgYWRkciAt
PSAodW5zaWduZWQgbG9uZylhZGRyICYgKGNsZmx1c2hfc2l6ZSAtIDEpOwor
ICAgIGZvciAoIDsgYWRkciA8IGVuZDsgYWRkciArPSBjbGZsdXNoX3NpemUg
KQorICAgIHsKIC8qCiAgKiBUaGUgYXJndW1lbnRzIHRvIGEgbWFjcm8gbXVz
dCBub3QgaW5jbHVkZSBwcmVwcm9jZXNzb3IgZGlyZWN0aXZlcy4gRG9pbmcg
c28KICAqIHJlc3VsdHMgaW4gdW5kZWZpbmVkIGJlaGF2aW9yLCBzbyB3ZSBo
YXZlIHRvIGNyZWF0ZSBzb21lIGRlZmluZXMgaGVyZSBpbgpAQCAtMjk2LDI0
ICszMjMsMTUgQEAgdm9pZCBjYWNoZV93cml0ZWJhY2soY29uc3Qgdm9pZCAq
YWRkciwgdW5zaWduZWQgaW50IHNpemUpCiAjZWxzZQogIyBkZWZpbmUgSU5Q
VVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChhZGRyKQogI2VuZGlm
Ci0gICAgICAgIC8qCi0gICAgICAgICAqIE5vdGUgcmVnYXJkaW5nIHRoZSAi
ZHMiIHByZWZpeCB1c2U6IGl0J3MgZmFzdGVyIHRvIGRvIGEgY2xmbHVzaAot
ICAgICAgICAgKiArIHByZWZpeCB0aGFuIGEgY2xmbHVzaCArIG5vcCwgYW5k
IGhlbmNlIHRoZSBwcmVmaXggaXMgYWRkZWQgaW5zdGVhZAotICAgICAgICAg
KiBvZiBsZXR0aW5nIHRoZSBhbHRlcm5hdGl2ZSBmcmFtZXdvcmsgZmlsbCB0
aGUgZ2FwIGJ5IGFwcGVuZGluZyBub3BzLgotICAgICAgICAgKi8KLSAgICAg
ICAgYWx0ZXJuYXRpdmVfaW9fMigiZHM7IGNsZmx1c2ggJVtwXSIsCi0gICAg
ICAgICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAv
KiBjbGZsdXNob3B0ICovCi0gICAgICAgICAgICAgICAgICAgICAgICAgWDg2
X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKLSAgICAgICAgICAgICAgICAgICAgICAg
ICBDTFdCX0VOQ09ESU5HLAotICAgICAgICAgICAgICAgICAgICAgICAgIFg4
Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgSU5QVVQoYWRkcikpOworCisgICAgICAgIGFzbSB2
b2xhdGlsZSAoQ0xXQl9FTkNPRElORyA6OiBJTlBVVChhZGRyKSk7CisKICN1
bmRlZiBJTlBVVAogI3VuZGVmIEJBU0VfSU5QVVQKICN1bmRlZiBDTFdCX0VO
Q09ESU5HCiAgICAgfQogCi0gICAgYWx0ZXJuYXRpdmVfMigiIiwgInNmZW5j
ZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAg
ICAgICAgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMV0IpOworICAgIGFzbSB2
b2xhdGlsZSAoInNmZW5jZSIgOjo6ICJtZW1vcnkiKTsKIH0KIAogdW5zaWdu
ZWQgaW50IGd1ZXN0X2ZsdXNoX3RsYl9mbGFncyhjb25zdCBzdHJ1Y3QgZG9t
YWluICpkKQpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNt
L2NhY2hlLmggYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vY2FjaGUuaApp
bmRleCA0MjRkYzViN2I5OTkuLmU0NzcwZWZiMjJiOSAxMDA2NDQKLS0tIGEv
eGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2NhY2hlLmgKKysrIGIveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL2NhY2hlLmgKQEAgLTEzLDYgKzEzLDcgQEAK
IAogI2lmbmRlZiBfX0FTU0VNQkxZX18KIAordm9pZCBjYWNoZV9mbHVzaChj
b25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSk7CiB2b2lkIGNh
Y2hlX3dyaXRlYmFjayhjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQg
c2l6ZSk7CiAKICNlbmRpZgpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9ncmFu
dF90YWJsZS5jIGIveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCmluZGV4IGZl
YmJlMTJlYWI5OC4uMzkxOGU2ZGU2YjZhIDEwMDY0NAotLS0gYS94ZW4vY29t
bW9uL2dyYW50X3RhYmxlLmMKKysrIGIveGVuL2NvbW1vbi9ncmFudF90YWJs
ZS5jCkBAIC0zNDQ3LDcgKzM0NDcsNyBAQCBnbnR0YWJfc3dhcF9ncmFudF9y
ZWYoWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShnbnR0YWJfc3dhcF9ncmFudF9y
ZWZfdCkgdW9wLAogICAgIHJldHVybiAwOwogfQogCi1zdGF0aWMgaW50IGNh
Y2hlX2ZsdXNoKGNvbnN0IGdudHRhYl9jYWNoZV9mbHVzaF90ICpjZmx1c2gs
IGdyYW50X3JlZl90ICpjdXJfcmVmKQorc3RhdGljIGludCBfY2FjaGVfZmx1
c2goY29uc3QgZ250dGFiX2NhY2hlX2ZsdXNoX3QgKmNmbHVzaCwgZ3JhbnRf
cmVmX3QgKmN1cl9yZWYpCiB7CiAgICAgc3RydWN0IGRvbWFpbiAqZCwgKm93
bmVyOwogICAgIHN0cnVjdCBwYWdlX2luZm8gKnBhZ2U7CkBAIC0zNTQxLDcg
KzM1NDEsNyBAQCBnbnR0YWJfY2FjaGVfZmx1c2goWEVOX0dVRVNUX0hBTkRM
RV9QQVJBTShnbnR0YWJfY2FjaGVfZmx1c2hfdCkgdW9wLAogICAgICAgICAg
ICAgcmV0dXJuIC1FRkFVTFQ7CiAgICAgICAgIGZvciAoIDsgOyApCiAgICAg
ICAgIHsKLSAgICAgICAgICAgIGludCByZXQgPSBjYWNoZV9mbHVzaCgmb3As
IGN1cl9yZWYpOworICAgICAgICAgICAgaW50IHJldCA9IF9jYWNoZV9mbHVz
aCgmb3AsIGN1cl9yZWYpOwogCiAgICAgICAgICAgICBpZiAoIHJldCA8IDAg
KQogICAgICAgICAgICAgICAgIHJldHVybiByZXQ7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.13-1.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3BhZ2U6IEludHJvZHVjZSBfUEFHRV8qIGNvbnN0
YW50cyBmb3IgbWVtb3J5IHR5cGVzCgouLi4gcmF0aGVyIHRoYW4gb3BlbmNv
ZGluZyB0aGUgUEFUL1BDRC9QV1QgYXR0cmlidXRlcyBpbiBfX1BBR0VfSFlQ
RVJWSVNPUl8qCmNvbnN0YW50cy4gIFRoZXNlIGFyZSBnb2luZyB0byBiZSBu
ZWVkZWQgYnkgZm9ydGhjb21pbmcgbG9naWMuCgpObyBmdW5jdGlvbmFsIGNo
YW5nZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvcGFnZS5oCmluZGV4IGMxZTkyOTM3YzA3My4u
NzI2OWFlODliODgwIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2
L3BhZ2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAt
MzIwLDYgKzMyMCwxNCBAQCB2b2lkIGVmaV91cGRhdGVfbDRfcGd0YWJsZSh1
bnNpZ25lZCBpbnQgbDRpZHgsIGw0X3BnZW50cnlfdCk7CiAKICNkZWZpbmUg
UEFHRV9DQUNIRV9BVFRSUyAoX1BBR0VfUEFUIHwgX1BBR0VfUENEIHwgX1BB
R0VfUFdUKQogCisvKiBNZW1vcnkgdHlwZXMsIGVuY29kZWQgdW5kZXIgWGVu
J3MgY2hvaWNlIG9mIE1TUl9QQVQuICovCisjZGVmaW5lIF9QQUdFX1dCICAg
ICAgICAgKCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMCkKKyNk
ZWZpbmUgX1BBR0VfV1QgICAgICAgICAoICAgICAgICAgICAgICAgICAgICAg
ICAgX1BBR0VfUFdUKQorI2RlZmluZSBfUEFHRV9VQ00gICAgICAgICggICAg
ICAgICAgICBfUEFHRV9QQ0QgICAgICAgICAgICApCisjZGVmaW5lIF9QQUdF
X1VDICAgICAgICAgKCAgICAgICAgICAgIF9QQUdFX1BDRCB8IF9QQUdFX1BX
VCkKKyNkZWZpbmUgX1BBR0VfV0MgICAgICAgICAoX1BBR0VfUEFUICAgICAg
ICAgICAgICAgICAgICAgICAgKQorI2RlZmluZSBfUEFHRV9XUCAgICAgICAg
IChfUEFHRV9QQVQgfCAgICAgICAgICAgICBfUEFHRV9QV1QpCisKIC8qCiAg
KiBEZWJ1ZyBvcHRpb246IEVuc3VyZSB0aGF0IGdyYW50ZWQgbWFwcGluZ3Mg
YXJlIG5vdCBpbXBsaWNpdGx5IHVubWFwcGVkLgogICogV0FSTklORzogVGhp
cyB3aWxsIG5lZWQgdG8gYmUgZGlzYWJsZWQgdG8gcnVuIE9TZXMgdGhhdCB1
c2UgdGhlIHNwYXJlIFBURQpAQCAtMzM4LDggKzM0Niw4IEBAIHZvaWQgZWZp
X3VwZGF0ZV9sNF9wZ3RhYmxlKHVuc2lnbmVkIGludCBsNGlkeCwgbDRfcGdl
bnRyeV90KTsKICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfUlggICAgICAo
X1BBR0VfUFJFU0VOVCB8IF9QQUdFX0FDQ0VTU0VEKQogI2RlZmluZSBfX1BB
R0VfSFlQRVJWSVNPUiAgICAgICAgIChfX1BBR0VfSFlQRVJWSVNPUl9SWCB8
IFwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgX1BBR0Vf
RElSVFkgfCBfUEFHRV9SVykKLSNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1Jf
VUNNSU5VUyAoX19QQUdFX0hZUEVSVklTT1IgfCBfUEFHRV9QQ0QpCi0jZGVm
aW5lIF9fUEFHRV9IWVBFUlZJU09SX1VDICAgICAgKF9fUEFHRV9IWVBFUlZJ
U09SIHwgX1BBR0VfUENEIHwgX1BBR0VfUFdUKQorI2RlZmluZSBfX1BBR0Vf
SFlQRVJWSVNPUl9VQ01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdF
X1VDTSkKKyNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19Q
QUdFX0hZUEVSVklTT1IgfCBfUEFHRV9VQykKIAogI2RlZmluZSBNQVBfU01B
TExfUEFHRVMgX1BBR0VfQVZBSUwwIC8qIGRvbid0IHVzZSBzdXBlcnBhZ2Vz
IG1hcHBpbmdzICovCiAK

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.13-2.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBEb24ndCBjaGFuZ2UgdGhlIGNhY2hlYWJpbGl0
eSBvZiB0aGUgZGlyZWN0bWFwCgpDaGFuZ2VzZXQgNTVmOTdmNDliN2NlICgi
eDg2OiBDaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gMToxIHBhZ2Ug
bWFwcGluZ3MKaW4gcmVzcG9uc2UgdG8gZ3Vlc3QgbWFwcGluZyByZXF1ZXN0
cyIpIGF0dGVtcHRlZCB0byBrZWVwIHRoZSBjYWNoZWFiaWxpdHkKY29uc2lz
dGVudCBiZXR3ZWVuIGRpZmZlcmVudCBtYXBwaW5ncyBvZiB0aGUgc2FtZSBw
YWdlLgoKVGhlIHJlYXNvbiB3YXNuJ3QgZGVzY3JpYmVkIGluIHRoZSBjaGFu
Z2Vsb2csIGJ1dCBpdCBpcyB1bmRlcnN0b29kIHRvIGJlIGluCnJlZ2FyZHMg
dG8gYSBjb25jZXJuIG92ZXIgbWFjaGluZSBjaGVjayBleGNlcHRpb25zLCBv
d2luZyB0byBlcnJhdGEgd2hlbiB1c2luZwptaXhlZCBjYWNoZWFiaWxpdGll
cy4gIEl0IGRpZCB0aGlzIHByaW1hcmlseSBieSB1cGRhdGluZyBYZW4ncyBt
YXBwaW5nIG9mIHRoZQpwYWdlIGluIHRoZSBkaXJlY3QgbWFwIHdoZW4gdGhl
IGd1ZXN0IG1hcHBlZCBhIHBhZ2Ugd2l0aCByZWR1Y2VkIGNhY2hlYWJpbGl0
eS4KClVuZm9ydHVuYXRlbHksIHRoZSBsb2dpYyBkaWRuJ3QgYWN0dWFsbHkg
cHJldmVudCBtaXhlZCBjYWNoZWFiaWxpdHkgZnJvbQpvY2N1cnJpbmc6CiAq
IEEgZ3Vlc3QgY291bGQgbWFwIGEgcGFnZSBub3JtYWxseSwgYW5kIHRoZW4g
bWFwIHRoZSBzYW1lIHBhZ2Ugd2l0aAogICBkaWZmZXJlbnQgY2FjaGVhYmls
aXR5OyBub3RoaW5nIHByZXZlbnRlZCB0aGlzLgogKiBUaGUgY2FjaGVhYmls
aXR5IG9mIHRoZSBkaXJlY3RtYXAgd2FzIGFsd2F5cyBsYXRlc3QtdGFrZXMt
cHJlY2VkZW5jZSBpbgogICB0ZXJtcyBvZiBndWVzdCByZXF1ZXN0cy4KICog
R3JhbnQtbWFwcGVkIGZyYW1lcyB3aXRoIGxlc3NlciBjYWNoZWFiaWxpdHkg
ZGlkbid0IGFkanVzdCB0aGUgcGFnZSdzCiAgIGNhY2hlYXR0ciBzZXR0aW5n
cy4KICogVGhlIG1hcF9kb21haW5fcGFnZSgpIGZ1bmN0aW9uIHN0aWxsIHVu
Y29uZGl0aW9uYWxseSBjcmVhdGVkIFdCIG1hcHBpbmdzLAogICBpcnJlc3Bl
Y3RpdmUgb2YgdGhlIHBhZ2UncyBjYWNoZWF0dHIgc2V0dGluZ3MuCgpBZGRp
dGlvbmFsbHksIHVwZGF0ZV94ZW5fbWFwcGluZ3MoKSBoYWQgYSBidWcgd2hl
cmUgdGhlIGFsaWFzIGNhbGN1bGF0aW9uIHdhcwp3cm9uZyBmb3IgbWZuJ3Mg
d2hpY2ggd2VyZSAuaW5pdCBjb250ZW50LCB3aGljaCBzaG91bGQgaGF2ZSBi
ZWVuIHRyZWF0ZWQgYXMKZnVsbHkgZ3Vlc3QgcGFnZXMsIG5vdCBYZW4gcGFn
ZXMuCgpXb3JzZSB5ZXQsIHRoZSBsb2dpYyBpbnRyb2R1Y2VkIGEgdnVsbmVy
YWJpbGl0eSB3aGVyZWJ5IG5lY2Vzc2FyeQpwYWdldGFibGUvc2VnZGVzYyBh
ZGp1c3RtZW50cyBtYWRlIGJ5IFhlbiBpbiB0aGUgdmFsaWRhdGlvbiBsb2dp
YyBjb3VsZCBiZWNvbWUKbm9uLWNvaGVyZW50IGJldHdlZW4gdGhlIGNhY2hl
IGFuZCBtYWluIG1lbW9yeS4gIFRoZSBDUFUgY291bGQgc3Vic2VxdWVudGx5
Cm9wZXJhdGUgb24gdGhlIHN0YWxlIHZhbHVlIGluIHRoZSBjYWNoZSwgcmF0
aGVyIHRoYW4gdGhlIHNhZmUgdmFsdWUgaW4gbWFpbgptZW1vcnkuCgpUaGUg
ZGlyZWN0bWFwIGNvbnRhaW5zIHByaW1hcmlseSBtYXBwaW5ncyBvZiBSQU0u
ICBQQVQvTVRSUiBjb25mbGljdApyZXNvbHV0aW9uIGlzIGFzeW1tZXRyaWMs
IGFuZCBnZW5lcmFsbHkgZm9yIE1UUlI9V0IgcmFuZ2VzLCBQQVQgb2YgbGVz
c2VyCmNhY2hlYWJpbGl0eSByZXNvbHZlcyB0byBiZWluZyBjb2hlcmVudC4g
IFRoZSBzcGVjaWFsIGNhc2UgaXMgV0MgbWFwcGluZ3MsCndoaWNoIGFyZSBu
b24tY29oZXJlbnQgYWdhaW5zdCBNVFJSPVdCIHJlZ2lvbnMgKGV4Y2VwdCBm
b3IgZnVsbHktY29oZXJlbnQKQ1BVcykuCgpYZW4gbXVzdCBub3QgaGF2ZSBh
bnkgV0MgY2FjaGVhYmlsaXR5IGluIHRoZSBkaXJlY3RtYXAsIHRvIHByZXZl
bnQgWGVuJ3MKYWN0aW9ucyBmcm9tIGNyZWF0aW5nIG5vbi1jb2hlcmVuY3ku
ICAoR3Vlc3QgYWN0aW9ucyBjcmVhdGluZyBub24tY29oZXJlbmN5IGlzCmRl
YWx0IHdpdGggaW4gc3Vic2VxdWVudCBwYXRjaGVzLikgIEFzIGFsbCBtZW1v
cnkgdHlwZXMgZm9yIE1UUlI9V0IgcmFuZ2VzCmludGVyLW9wZXJhdGUgY29o
ZXJlbnRseSwgc28gbGVhdmUgWGVuJ3MgZGlyZWN0bWFwIG1hcHBpbmdzIGFz
IFdCLgoKT25seSBQViBndWVzdHMgd2l0aCBhY2Nlc3MgdG8gZGV2aWNlcyBj
YW4gdXNlIHJlZHVjZWQtY2FjaGVhYmlsaXR5IG1hcHBpbmdzIHRvCmJlZ2lu
IHdpdGgsIGFuZCB0aGV5J3JlIHRydXN0ZWQgbm90IHRvIG1vdW50IERvU3Mg
YWdhaW5zdCB0aGUgc3lzdGVtIGFueXdheS4KCkRyb3AgUEdDX2NhY2hlYXR0
cl97YmFzZSxtYXNrfSBlbnRpcmVseSwgYW5kIHRoZSBsb2dpYyB0byBtYW5p
cHVsYXRlIHRoZW0uClNoaWZ0IHRoZSBsYXRlciBQR0NfKiBjb25zdGFudHMg
dXAsIHRvIGdhaW4gMyBleHRyYSBiaXRzIGluIHRoZSBtYWluIHJlZmVyZW5j
ZQpjb3VudC4gIFJldGFpbiB0aGUgY2hlY2sgaW4gZ2V0X3BhZ2VfZnJvbV9s
MWUoKSBmb3Igc3BlY2lhbF9wYWdlcygpIGJlY2F1c2UgYQpndWVzdCBoYXMg
bm8gYnVzaW5lc3MgdXNpbmcgcmVkdWNlZCBjYWNoZWFiaWxpdHkgb24gdGhl
c2UuCgpUaGlzIHJldmVydHMgY2hhbmdlc2V0IDU1Zjk3ZjQ5YjdjZTZjMzUy
MGM1NTVkMTljYWFjNmNmM2Y5YTVkZjAKClRoaXMgaXMgQ1ZFLTIwMjItMjYz
NjMsIHBhcnQgb2YgWFNBLTQwMi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Cgpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94ODYv
bW0uYwppbmRleCBlZTkxYzdmZTVmNjkuLjg1OTY0NmI2NzBhOCAxMDA2NDQK
LS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2FyY2gveDg2L21t
LmMKQEAgLTc4NiwyNCArNzg2LDYgQEAgYm9vbCBpc19pb21lbV9wYWdlKG1m
bl90IG1mbikKICAgICByZXR1cm4gKHBhZ2VfZ2V0X293bmVyKHBhZ2UpID09
IGRvbV9pbyk7CiB9CiAKLXN0YXRpYyBpbnQgdXBkYXRlX3hlbl9tYXBwaW5n
cyh1bnNpZ25lZCBsb25nIG1mbiwgdW5zaWduZWQgaW50IGNhY2hlYXR0cikK
LXsKLSAgICBpbnQgZXJyID0gMDsKLSAgICBib29sIGFsaWFzID0gbWZuID49
IFBGTl9ET1dOKHhlbl9waHlzX3N0YXJ0KSAmJgotICAgICAgICAgbWZuIDwg
UEZOX1VQKHhlbl9waHlzX3N0YXJ0ICsgeGVuX3ZpcnRfZW5kIC0gWEVOX1ZJ
UlRfU1RBUlQpOwotICAgIHVuc2lnbmVkIGxvbmcgeGVuX3ZhID0KLSAgICAg
ICAgWEVOX1ZJUlRfU1RBUlQgKyAoKG1mbiAtIFBGTl9ET1dOKHhlbl9waHlz
X3N0YXJ0KSkgPDwgUEFHRV9TSElGVCk7Ci0KLSAgICBpZiAoIHVubGlrZWx5
KGFsaWFzKSAmJiBjYWNoZWF0dHIgKQotICAgICAgICBlcnIgPSBtYXBfcGFn
ZXNfdG9feGVuKHhlbl92YSwgX21mbihtZm4pLCAxLCAwKTsKLSAgICBpZiAo
ICFlcnIgKQotICAgICAgICBlcnIgPSBtYXBfcGFnZXNfdG9feGVuKCh1bnNp
Z25lZCBsb25nKW1mbl90b192aXJ0KG1mbiksIF9tZm4obWZuKSwgMSwKLSAg
ICAgICAgICAgICAgICAgICAgIFBBR0VfSFlQRVJWSVNPUiB8IGNhY2hlYXR0
cl90b19wdGVfZmxhZ3MoY2FjaGVhdHRyKSk7Ci0gICAgaWYgKCB1bmxpa2Vs
eShhbGlhcykgJiYgIWNhY2hlYXR0ciAmJiAhZXJyICkKLSAgICAgICAgZXJy
ID0gbWFwX3BhZ2VzX3RvX3hlbih4ZW5fdmEsIF9tZm4obWZuKSwgMSwgUEFH
RV9IWVBFUlZJU09SKTsKLSAgICByZXR1cm4gZXJyOwotfQotCiAjaWZuZGVm
IE5ERUJVRwogc3RydWN0IG1taW9fZW11bF9yYW5nZV9jdHh0IHsKICAgICBj
b25zdCBzdHJ1Y3QgZG9tYWluICpkOwpAQCAtMTAwOCw0NyArOTkwLDE0IEBA
IGdldF9wYWdlX2Zyb21fbDFlKAogICAgICAgICBnb3RvIGNvdWxkX25vdF9w
aW47CiAgICAgfQogCi0gICAgaWYgKCBwdGVfZmxhZ3NfdG9fY2FjaGVhdHRy
KGwxZikgIT0KLSAgICAgICAgICgocGFnZS0+Y291bnRfaW5mbyAmIFBHQ19j
YWNoZWF0dHJfbWFzaykgPj4gUEdDX2NhY2hlYXR0cl9iYXNlKSApCisgICAg
aWYgKCAobDFmICYgUEFHRV9DQUNIRV9BVFRSUykgIT0gX1BBR0VfV0IgJiYg
aXNfeGVuX2hlYXBfcGFnZShwYWdlKSApCiAgICAgewotICAgICAgICB1bnNp
Z25lZCBsb25nIHgsIG54LCB5ID0gcGFnZS0+Y291bnRfaW5mbzsKLSAgICAg
ICAgdW5zaWduZWQgbG9uZyBjYWNoZWF0dHIgPSBwdGVfZmxhZ3NfdG9fY2Fj
aGVhdHRyKGwxZik7Ci0gICAgICAgIGludCBlcnI7Ci0KLSAgICAgICAgaWYg
KCBpc194ZW5faGVhcF9wYWdlKHBhZ2UpICkKLSAgICAgICAgewotICAgICAg
ICAgICAgaWYgKCB3cml0ZSApCi0gICAgICAgICAgICAgICAgcHV0X3BhZ2Vf
dHlwZShwYWdlKTsKLSAgICAgICAgICAgIHB1dF9wYWdlKHBhZ2UpOwotICAg
ICAgICAgICAgZ2RwcmludGsoWEVOTE9HX1dBUk5JTkcsCi0gICAgICAgICAg
ICAgICAgICAgICAiQXR0ZW1wdCB0byBjaGFuZ2UgY2FjaGUgYXR0cmlidXRl
cyBvZiBYZW4gaGVhcCBwYWdlXG4iKTsKLSAgICAgICAgICAgIHJldHVybiAt
RUFDQ0VTOwotICAgICAgICB9Ci0KLSAgICAgICAgZG8gewotICAgICAgICAg
ICAgeCAgPSB5OwotICAgICAgICAgICAgbnggPSAoeCAmIH5QR0NfY2FjaGVh
dHRyX21hc2spIHwgKGNhY2hlYXR0ciA8PCBQR0NfY2FjaGVhdHRyX2Jhc2Up
OwotICAgICAgICB9IHdoaWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT5jb3Vu
dF9pbmZvLCB4LCBueCkpICE9IHggKTsKLQotICAgICAgICBlcnIgPSB1cGRh
dGVfeGVuX21hcHBpbmdzKG1mbiwgY2FjaGVhdHRyKTsKLSAgICAgICAgaWYg
KCB1bmxpa2VseShlcnIpICkKLSAgICAgICAgewotICAgICAgICAgICAgY2Fj
aGVhdHRyID0geSAmIFBHQ19jYWNoZWF0dHJfbWFzazsKLSAgICAgICAgICAg
IGRvIHsKLSAgICAgICAgICAgICAgICB4ICA9IHk7Ci0gICAgICAgICAgICAg
ICAgbnggPSAoeCAmIH5QR0NfY2FjaGVhdHRyX21hc2spIHwgY2FjaGVhdHRy
OwotICAgICAgICAgICAgfSB3aGlsZSAoICh5ID0gY21weGNoZygmcGFnZS0+
Y291bnRfaW5mbywgeCwgbngpKSAhPSB4ICk7Ci0KLSAgICAgICAgICAgIGlm
ICggd3JpdGUgKQotICAgICAgICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFn
ZSk7Ci0gICAgICAgICAgICBwdXRfcGFnZShwYWdlKTsKLQotICAgICAgICAg
ICAgZ2RwcmludGsoWEVOTE9HX1dBUk5JTkcsICJFcnJvciB1cGRhdGluZyBt
YXBwaW5ncyBmb3IgbWZuICUiIFBSSV9tZm4KLSAgICAgICAgICAgICAgICAg
ICAgICIgKHBmbiAlIiBQUklfcGZuICIsIGZyb20gTDEgZW50cnkgJSIgUFJJ
cHRlICIpIGZvciBkJWRcbiIsCi0gICAgICAgICAgICAgICAgICAgICBtZm4s
IGdldF9ncGZuX2Zyb21fbWZuKG1mbiksCi0gICAgICAgICAgICAgICAgICAg
ICBsMWVfZ2V0X2ludHB0ZShsMWUpLCBsMWVfb3duZXItPmRvbWFpbl9pZCk7
Ci0gICAgICAgICAgICByZXR1cm4gZXJyOwotICAgICAgICB9CisgICAgICAg
IGlmICggd3JpdGUgKQorICAgICAgICAgICAgcHV0X3BhZ2VfdHlwZShwYWdl
KTsKKyAgICAgICAgcHV0X3BhZ2UocGFnZSk7CisgICAgICAgIGdkcHJpbnRr
KFhFTkxPR19XQVJOSU5HLAorICAgICAgICAgICAgICAgICAiQXR0ZW1wdCB0
byBjaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gaGVhcCBwYWdlXG4i
KTsKKyAgICAgICAgcmV0dXJuIC1FQUNDRVM7CiAgICAgfQogCiAgICAgcmV0
dXJuIDA7CkBAIC0yNTQxLDI1ICsyNDkwLDEwIEBAIHN0YXRpYyBpbnQgbW9k
X2w0X2VudHJ5KGw0X3BnZW50cnlfdCAqcGw0ZSwKICAqLwogc3RhdGljIGlu
dCBjbGVhbnVwX3BhZ2VfbWFwcGluZ3Moc3RydWN0IHBhZ2VfaW5mbyAqcGFn
ZSkKIHsKLSAgICB1bnNpZ25lZCBpbnQgY2FjaGVhdHRyID0KLSAgICAgICAg
KHBhZ2UtPmNvdW50X2luZm8gJiBQR0NfY2FjaGVhdHRyX21hc2spID4+IFBH
Q19jYWNoZWF0dHJfYmFzZTsKICAgICBpbnQgcmMgPSAwOwogICAgIHVuc2ln
bmVkIGxvbmcgbWZuID0gbWZuX3gocGFnZV90b19tZm4ocGFnZSkpOwogCiAg
ICAgLyoKLSAgICAgKiBJZiB3ZSd2ZSBtb2RpZmllZCB4ZW4gbWFwcGluZ3Mg
YXMgYSByZXN1bHQgb2YgZ3Vlc3QgY2FjaGUKLSAgICAgKiBhdHRyaWJ1dGVz
LCByZXN0b3JlIHRoZW0gdG8gdGhlICJub3JtYWwiIHN0YXRlLgotICAgICAq
LwotICAgIGlmICggdW5saWtlbHkoY2FjaGVhdHRyKSApCi0gICAgewotICAg
ICAgICBwYWdlLT5jb3VudF9pbmZvICY9IH5QR0NfY2FjaGVhdHRyX21hc2s7
Ci0KLSAgICAgICAgQlVHX09OKGlzX3hlbl9oZWFwX3BhZ2UocGFnZSkpOwot
Ci0gICAgICAgIHJjID0gdXBkYXRlX3hlbl9tYXBwaW5ncyhtZm4sIDApOwot
ICAgIH0KLQotICAgIC8qCiAgICAgICogSWYgdGhpcyBtYXkgYmUgaW4gYSBQ
ViBkb21haW4ncyBJT01NVSwgcmVtb3ZlIGl0LgogICAgICAqCiAgICAgICog
TkIgdGhhdCB3cml0YWJsZSB4ZW5oZWFwIHBhZ2VzIGhhdmUgdGhlaXIgdHlw
ZSBzZXQgYW5kIGNsZWFyZWQgYnkKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRl
L2FzbS14ODYvbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaAppbmRl
eCAzMjBjNmNkMTk2NjkuLmRiMDk4NDlmNzNmOCAxMDA2NDQKLS0tIGEveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tbS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14
ODYvbW0uaApAQCAtNjQsMjIgKzY0LDE5IEBACiAgLyogU2V0IHdoZW4gaXMg
dXNpbmcgYSBwYWdlIGFzIGEgcGFnZSB0YWJsZSAqLwogI2RlZmluZSBfUEdD
X3BhZ2VfdGFibGUgICBQR19zaGlmdCgzKQogI2RlZmluZSBQR0NfcGFnZV90
YWJsZSAgICBQR19tYXNrKDEsIDMpCi0gLyogMy1iaXQgUEFUL1BDRC9QV1Qg
Y2FjaGUtYXR0cmlidXRlIGhpbnQuICovCi0jZGVmaW5lIFBHQ19jYWNoZWF0
dHJfYmFzZSBQR19zaGlmdCg2KQotI2RlZmluZSBQR0NfY2FjaGVhdHRyX21h
c2sgUEdfbWFzayg3LCA2KQogIC8qIFBhZ2UgaXMgYnJva2VuPyAqLwotI2Rl
ZmluZSBfUEdDX2Jyb2tlbiAgICAgICBQR19zaGlmdCg3KQotI2RlZmluZSBQ
R0NfYnJva2VuICAgICAgICBQR19tYXNrKDEsIDcpCisjZGVmaW5lIF9QR0Nf
YnJva2VuICAgICAgIFBHX3NoaWZ0KDQpCisjZGVmaW5lIFBHQ19icm9rZW4g
ICAgICAgIFBHX21hc2soMSwgNCkKICAvKiBNdXR1YWxseS1leGNsdXNpdmUg
cGFnZSBzdGF0ZXM6IHsgaW51c2UsIG9mZmxpbmluZywgb2ZmbGluZWQsIGZy
ZWUgfS4gKi8KLSNkZWZpbmUgUEdDX3N0YXRlICAgICAgICAgUEdfbWFzaygz
LCA5KQotI2RlZmluZSBQR0Nfc3RhdGVfaW51c2UgICBQR19tYXNrKDAsIDkp
Ci0jZGVmaW5lIFBHQ19zdGF0ZV9vZmZsaW5pbmcgUEdfbWFzaygxLCA5KQot
I2RlZmluZSBQR0Nfc3RhdGVfb2ZmbGluZWQgUEdfbWFzaygyLCA5KQotI2Rl
ZmluZSBQR0Nfc3RhdGVfZnJlZSAgICBQR19tYXNrKDMsIDkpCisjZGVmaW5l
IFBHQ19zdGF0ZSAgICAgICAgICAgUEdfbWFzaygzLCA2KQorI2RlZmluZSBQ
R0Nfc3RhdGVfaW51c2UgICAgIFBHX21hc2soMCwgNikKKyNkZWZpbmUgUEdD
X3N0YXRlX29mZmxpbmluZyBQR19tYXNrKDEsIDYpCisjZGVmaW5lIFBHQ19z
dGF0ZV9vZmZsaW5lZCAgUEdfbWFzaygyLCA2KQorI2RlZmluZSBQR0Nfc3Rh
dGVfZnJlZSAgICAgIFBHX21hc2soMywgNikKICNkZWZpbmUgcGFnZV9zdGF0
ZV9pcyhwZywgc3QpICgoKHBnKS0+Y291bnRfaW5mbyZQR0Nfc3RhdGUpID09
IFBHQ19zdGF0ZV8jI3N0KQogCiAgLyogQ291bnQgb2YgcmVmZXJlbmNlcyB0
byB0aGlzIGZyYW1lLiAqLwotI2RlZmluZSBQR0NfY291bnRfd2lkdGggICBQ
R19zaGlmdCg5KQorI2RlZmluZSBQR0NfY291bnRfd2lkdGggICBQR19zaGlm
dCg2KQogI2RlZmluZSBQR0NfY291bnRfbWFzayAgICAoKDFVTDw8UEdDX2Nv
dW50X3dpZHRoKS0xKQogCiAvKgo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.13-3.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBTcGxpdCBjYWNoZV9mbHVzaCgpIG91dCBvZiBj
YWNoZV93cml0ZWJhY2soKQoKU3Vic2VxdWVudCBjaGFuZ2VzIHdpbGwgd2Fu
dCBhIGZ1bGx5IGZsdXNoaW5nIHZlcnNpb24uCgpVc2UgdGhlIG5ldyBoZWxw
ZXIgcmF0aGVyIHRoYW4gb3BlbmNvZGluZyBpdCBpbiBmbHVzaF9hcmVhX2xv
Y2FsKCkuICBUaGlzCnJlc29sdmVzIGFuIG91dHN0YW5kaW5nIGlzc3VlIHdo
ZXJlIHRoZSBjb25kaXRpb25hbCBzZmVuY2UgaXMgb24gdGhlIHdyb25nCnNp
ZGUgb2YgdGhlIGNsZmx1c2hvcHQgbG9vcC4gIGNsZmx1c2hvcHQgaXMgb3Jk
ZXJlZCB3aXRoIHJlc3BlY3QgdG8gb2xkZXIKc3RvcmVzLCBub3QgdG8geW91
bmdlciBzdG9yZXMuCgpSZW5hbWUgZ250dGFiX2NhY2hlX2ZsdXNoKCkncyBo
ZWxwZXIgdG8gYXZvaWQgY29sbGlkaW5nIGluIG5hbWUuCmdyYW50X3RhYmxl
LmMgY2FuIHNlZSB0aGUgcHJvdG90eXBlIGZyb20gY2FjaGUuaCBzbyB0aGUg
YnVpbGQgZmFpbHMKb3RoZXJ3aXNlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00
MDIuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKWGVuIDQuMTYgYW5kIGVhcmxpZXI6CiAqIEFs
c28gYmFja3BvcnQgaGFsZiBvZiBjL3MgMzMzMDAxM2U2NzM5NiAiVlQtZCAv
IHg4NjogcmUtYXJyYW5nZSBjYWNoZQogICBzeW5jaW5nIiB0byBzcGxpdCBj
YWNoZV93cml0ZWJhY2soKSBvdXQgb2YgdGhlIElPTU1VIGxvZ2ljLCBidXQg
d2l0aG91dCB0aGUKICAgYXNzb2NpYXRlZCBob29rcyBjaGFuZ2VzLgoKZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jIGIveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMKaW5kZXggMDNmOTJjMjNkY2FmLi44NTY4NDkxYzdl
YTkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCkBAIC0yMjQsNyArMjI0LDcgQEAg
dW5zaWduZWQgaW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEs
IHVuc2lnbmVkIGludCBmbGFncykKICAgICBpZiAoIGZsYWdzICYgRkxVU0hf
Q0FDSEUgKQogICAgIHsKICAgICAgICAgY29uc3Qgc3RydWN0IGNwdWluZm9f
eDg2ICpjID0gJmN1cnJlbnRfY3B1X2RhdGE7Ci0gICAgICAgIHVuc2lnbmVk
IGxvbmcgaSwgc3ogPSAwOworICAgICAgICB1bnNpZ25lZCBsb25nIHN6ID0g
MDsKIAogICAgICAgICBpZiAoIG9yZGVyIDwgKEJJVFNfUEVSX0xPTkcgLSBQ
QUdFX1NISUZUKSApCiAgICAgICAgICAgICBzeiA9IDFVTCA8PCAob3JkZXIg
KyBQQUdFX1NISUZUKTsKQEAgLTIzNCwxMyArMjM0LDcgQEAgdW5zaWduZWQg
aW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVk
IGludCBmbGFncykKICAgICAgICAgICAgICBjLT54ODZfY2xmbHVzaF9zaXpl
ICYmIGMtPng4Nl9jYWNoZV9zaXplICYmIHN6ICYmCiAgICAgICAgICAgICAg
KChzeiA+PiAxMCkgPCBjLT54ODZfY2FjaGVfc2l6ZSkgKQogICAgICAgICB7
Ci0gICAgICAgICAgICBhbHRlcm5hdGl2ZSgiIiwgInNmZW5jZSIsIFg4Nl9G
RUFUVVJFX0NMRkxVU0hPUFQpOwotICAgICAgICAgICAgZm9yICggaSA9IDA7
IGkgPCBzejsgaSArPSBjLT54ODZfY2xmbHVzaF9zaXplICkKLSAgICAgICAg
ICAgICAgICBhbHRlcm5hdGl2ZV9pbnB1dCgiLmJ5dGUgIiBfX3N0cmluZ2lm
eShOT1BfRFNfUFJFRklYKSAiOyIKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAiIGNsZmx1c2ggJTAiLAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJkYXRhMTYgY2xmbHVzaCAlMCIsICAgICAgLyog
Y2xmbHVzaG9wdCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIm0iICgoKGNvbnN0IGNoYXIgKil2YSlbaV0p
KTsKKyAgICAgICAgICAgIGNhY2hlX2ZsdXNoKHZhLCBzeik7CiAgICAgICAg
ICAgICBmbGFncyAmPSB+RkxVU0hfQ0FDSEU7CiAgICAgICAgIH0KICAgICAg
ICAgZWxzZQpAQCAtMjU0LDMgKzI0OCw3NyBAQCB1bnNpZ25lZCBpbnQgZmx1
c2hfYXJlYV9sb2NhbChjb25zdCB2b2lkICp2YSwgdW5zaWduZWQgaW50IGZs
YWdzKQogCiAgICAgcmV0dXJuIGZsYWdzOwogfQorCit2b2lkIGNhY2hlX2Zs
dXNoKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQorewor
ICAgIC8qCisgICAgICogVGhpcyBmdW5jdGlvbiBtYXkgYmUgY2FsbGVkIGJl
Zm9yZSBjdXJyZW50X2NwdV9kYXRhIGlzIGVzdGFibGlzaGVkLgorICAgICAq
IEhlbmNlIGEgZmFsbGJhY2sgaXMgbmVlZGVkIHRvIHByZXZlbnQgdGhlIGxv
b3AgYmVsb3cgYmVjb21pbmcgaW5maW5pdGUuCisgICAgICovCisgICAgdW5z
aWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IGN1cnJlbnRfY3B1X2RhdGEueDg2
X2NsZmx1c2hfc2l6ZSA/OiAxNjsKKyAgICBjb25zdCB2b2lkICplbmQgPSBh
ZGRyICsgc2l6ZTsKKworICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRk
ciAmIChjbGZsdXNoX3NpemUgLSAxKTsKKyAgICBmb3IgKCA7IGFkZHIgPCBl
bmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKKyAgICB7CisgICAgICAgIC8q
CisgICAgICAgICAqIE5vdGUgcmVnYXJkaW5nIHRoZSAiZHMiIHByZWZpeCB1
c2U6IGl0J3MgZmFzdGVyIHRvIGRvIGEgY2xmbHVzaAorICAgICAgICAgKiAr
IHByZWZpeCB0aGFuIGEgY2xmbHVzaCArIG5vcCwgYW5kIGhlbmNlIHRoZSBw
cmVmaXggaXMgYWRkZWQgaW5zdGVhZAorICAgICAgICAgKiBvZiBsZXR0aW5n
IHRoZSBhbHRlcm5hdGl2ZSBmcmFtZXdvcmsgZmlsbCB0aGUgZ2FwIGJ5IGFw
cGVuZGluZyBub3BzLgorICAgICAgICAgKi8KKyAgICAgICAgYWx0ZXJuYXRp
dmVfaW8oImRzOyBjbGZsdXNoICVbcF0iLAorICAgICAgICAgICAgICAgICAg
ICAgICAiZGF0YTE2IGNsZmx1c2ggJVtwXSIsIC8qIGNsZmx1c2hvcHQgKi8K
KyAgICAgICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9Q
VCwKKyAgICAgICAgICAgICAgICAgICAgICAgLyogbm8gb3V0cHV0cyAqLywK
KyAgICAgICAgICAgICAgICAgICAgICAgW3BdICJtIiAoKihjb25zdCBjaGFy
ICopKGFkZHIpKSk7CisgICAgfQorCisgICAgYWx0ZXJuYXRpdmUoIiIsICJz
ZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BUKTsKK30KKwordm9pZCBj
YWNoZV93cml0ZWJhY2soY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50
IHNpemUpCit7CisgICAgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZTsKKyAg
ICBjb25zdCB2b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKKworICAgIC8qIEZh
bGwgYmFjayB0byBDTEZMVVNIeyxPUFR9IHdoZW4gQ0xXQiBpc24ndCBhdmFp
bGFibGUuICovCisgICAgaWYgKCAhYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJF
X0NMV0IpICkKKyAgICAgICAgcmV0dXJuIGNhY2hlX2ZsdXNoKGFkZHIsIHNp
emUpOworCisgICAgLyoKKyAgICAgKiBUaGlzIGZ1bmN0aW9uIG1heSBiZSBj
YWxsZWQgYmVmb3JlIGN1cnJlbnRfY3B1X2RhdGEgaXMgZXN0YWJsaXNoZWQu
CisgICAgICogSGVuY2UgYSBmYWxsYmFjayBpcyBuZWVkZWQgdG8gcHJldmVu
dCB0aGUgbG9vcCBiZWxvdyBiZWNvbWluZyBpbmZpbml0ZS4KKyAgICAgKi8K
KyAgICBjbGZsdXNoX3NpemUgPSBjdXJyZW50X2NwdV9kYXRhLng4Nl9jbGZs
dXNoX3NpemUgPzogMTY7CisgICAgYWRkciAtPSAodW5zaWduZWQgbG9uZylh
ZGRyICYgKGNsZmx1c2hfc2l6ZSAtIDEpOworICAgIGZvciAoIDsgYWRkciA8
IGVuZDsgYWRkciArPSBjbGZsdXNoX3NpemUgKQorICAgIHsKKy8qCisgKiBU
aGUgYXJndW1lbnRzIHRvIGEgbWFjcm8gbXVzdCBub3QgaW5jbHVkZSBwcmVw
cm9jZXNzb3IgZGlyZWN0aXZlcy4gRG9pbmcgc28KKyAqIHJlc3VsdHMgaW4g
dW5kZWZpbmVkIGJlaGF2aW9yLCBzbyB3ZSBoYXZlIHRvIGNyZWF0ZSBzb21l
IGRlZmluZXMgaGVyZSBpbgorICogb3JkZXIgdG8gYXZvaWQgaXQuCisgKi8K
KyNpZiBkZWZpbmVkKEhBVkVfQVNfQ0xXQikKKyMgZGVmaW5lIENMV0JfRU5D
T0RJTkcgImNsd2IgJVtwXSIKKyNlbGlmIGRlZmluZWQoSEFWRV9BU19YU0FW
RU9QVCkKKyMgZGVmaW5lIENMV0JfRU5DT0RJTkcgImRhdGExNiB4c2F2ZW9w
dCAlW3BdIiAvKiBjbHdiICovCisjZWxzZQorIyBkZWZpbmUgQ0xXQl9FTkNP
RElORyAiLmJ5dGUgMHg2NiwgMHgwZiwgMHhhZSwgMHgzMCIgLyogY2x3YiAo
JSVyYXgpICovCisjZW5kaWYKKworI2RlZmluZSBCQVNFX0lOUFVUKGFkZHIp
IFtwXSAibSIgKCooY29uc3QgY2hhciAqKShhZGRyKSkKKyNpZiBkZWZpbmVk
KEhBVkVfQVNfQ0xXQikgfHwgZGVmaW5lZChIQVZFX0FTX1hTQVZFT1BUKQor
IyBkZWZpbmUgSU5QVVQgQkFTRV9JTlBVVAorI2Vsc2UKKyMgZGVmaW5lIElO
UFVUKGFkZHIpICJhIiAoYWRkciksIEJBU0VfSU5QVVQoYWRkcikKKyNlbmRp
ZgorCisgICAgICAgIGFzbSB2b2xhdGlsZSAoQ0xXQl9FTkNPRElORyA6OiBJ
TlBVVChhZGRyKSk7CisKKyN1bmRlZiBJTlBVVAorI3VuZGVmIEJBU0VfSU5Q
VVQKKyN1bmRlZiBDTFdCX0VOQ09ESU5HCisgICAgfQorCisgICAgYXNtIHZv
bGF0aWxlICgic2ZlbmNlIiA6OjogIm1lbW9yeSIpOworfQpkaWZmIC0tZ2l0
IGEveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jIGIveGVuL2NvbW1vbi9ncmFu
dF90YWJsZS5jCmluZGV4IGNiYjJjZTE3YzAwMS4uNzA5NTA5ZTBmYzllIDEw
MDY0NAotLS0gYS94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKKysrIGIveGVu
L2NvbW1vbi9ncmFudF90YWJsZS5jCkBAIC0zNDA3LDcgKzM0MDcsNyBAQCBn
bnR0YWJfc3dhcF9ncmFudF9yZWYoWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShn
bnR0YWJfc3dhcF9ncmFudF9yZWZfdCkgdW9wLAogICAgIHJldHVybiAwOwog
fQogCi1zdGF0aWMgaW50IGNhY2hlX2ZsdXNoKGNvbnN0IGdudHRhYl9jYWNo
ZV9mbHVzaF90ICpjZmx1c2gsIGdyYW50X3JlZl90ICpjdXJfcmVmKQorc3Rh
dGljIGludCBfY2FjaGVfZmx1c2goY29uc3QgZ250dGFiX2NhY2hlX2ZsdXNo
X3QgKmNmbHVzaCwgZ3JhbnRfcmVmX3QgKmN1cl9yZWYpCiB7CiAgICAgc3Ry
dWN0IGRvbWFpbiAqZCwgKm93bmVyOwogICAgIHN0cnVjdCBwYWdlX2luZm8g
KnBhZ2U7CkBAIC0zNTAxLDcgKzM1MDEsNyBAQCBnbnR0YWJfY2FjaGVfZmx1
c2goWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShnbnR0YWJfY2FjaGVfZmx1c2hf
dCkgdW9wLAogICAgICAgICAgICAgcmV0dXJuIC1FRkFVTFQ7CiAgICAgICAg
IGZvciAoIDsgOyApCiAgICAgICAgIHsKLSAgICAgICAgICAgIGludCByZXQg
PSBjYWNoZV9mbHVzaCgmb3AsIGN1cl9yZWYpOworICAgICAgICAgICAgaW50
IHJldCA9IF9jYWNoZV9mbHVzaCgmb3AsIGN1cl9yZWYpOwogCiAgICAgICAg
ICAgICBpZiAoIHJldCA8IDAgKQogICAgICAgICAgICAgICAgIHJldHVybiBy
ZXQ7CmRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmggYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZXh0ZXJu
LmgKaW5kZXggZmJlOTUxYjJmYWQwLi4zZGVmZTk2NzdmMDYgMTAwNjQ0Ci0t
LSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9leHRlcm4uaAorKysg
Yi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZXh0ZXJuLmgKQEAgLTc3
LDcgKzc3LDYgQEAgaW50IF9fbXVzdF9jaGVjayBxaW52YWxfZGV2aWNlX2lv
dGxiX3N5bmMoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUsCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgcGNpX2Rl
diAqcGRldiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHUxNiBkaWQsIHUxNiBzaXplLCB1NjQgYWRkcik7CiAKLXVuc2ln
bmVkIGludCBnZXRfY2FjaGVfbGluZV9zaXplKHZvaWQpOwogdm9pZCBmbHVz
aF9hbGxfY2FjaGUodm9pZCk7CiAKIHVpbnQ2NF90IGFsbG9jX3BndGFibGVf
bWFkZHIodW5zaWduZWQgbG9uZyBucGFnZXMsIG5vZGVpZF90IG5vZGUpOwpk
aWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11
LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwppbmRl
eCBmMDUxYTU1NzY0YjkuLjJiZjVmMDJjMDhkZSAxMDA2NDQKLS0tIGEveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKKysrIGIveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMKQEAgLTMxLDYgKzMxLDcg
QEAKICNpbmNsdWRlIDx4ZW4vcGNpLmg+CiAjaW5jbHVkZSA8eGVuL3BjaV9y
ZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KKyNpbmNsdWRl
IDxhc20vY2FjaGUuaD4KICNpbmNsdWRlIDxhc20vbXNpLmg+CiAjaW5jbHVk
ZSA8YXNtL25vcHMuaD4KICNpbmNsdWRlIDxhc20vaXJxLmg+CkBAIC0yMDEs
NTMgKzIwMiwxMCBAQCBzdGF0aWMgaW50IGlvbW11c19pbmNvaGVyZW50Owog
CiBzdGF0aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKQogewotICAgIHN0YXRpYyB1bnNpZ25lZCBsb25n
IGNsZmx1c2hfc2l6ZSA9IDA7Ci0gICAgY29uc3Qgdm9pZCAqZW5kID0gYWRk
ciArIHNpemU7Ci0KICAgICBpZiAoICFpb21tdXNfaW5jb2hlcmVudCApCiAg
ICAgICAgIHJldHVybjsKIAotICAgIGlmICggY2xmbHVzaF9zaXplID09IDAg
KQotICAgICAgICBjbGZsdXNoX3NpemUgPSBnZXRfY2FjaGVfbGluZV9zaXpl
KCk7Ci0KLSAgICBhZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xm
bHVzaF9zaXplIC0gMSk7Ci0gICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRy
ICs9IGNsZmx1c2hfc2l6ZSApCi0vKgotICogVGhlIGFyZ3VtZW50cyB0byBh
IG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJvY2Vzc29yIGRpcmVjdGl2
ZXMuIERvaW5nIHNvCi0gKiByZXN1bHRzIGluIHVuZGVmaW5lZCBiZWhhdmlv
ciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBkZWZpbmVzIGhlcmUgaW4K
LSAqIG9yZGVyIHRvIGF2b2lkIGl0LgotICovCi0jaWYgZGVmaW5lZChIQVZF
X0FTX0NMV0IpCi0jIGRlZmluZSBDTFdCX0VOQ09ESU5HICJjbHdiICVbcF0i
Ci0jZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVPUFQpCi0jIGRlZmluZSBD
TFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQgJVtwXSIgLyogY2x3YiAq
LwotI2Vsc2UKLSMgZGVmaW5lIENMV0JfRU5DT0RJTkcgIi5ieXRlIDB4NjYs
IDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUlcmF4KSAqLwotI2VuZGlm
Ci0KLSNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBbcF0gIm0iICgqKGNvbnN0
IGNoYXIgKikoYWRkcikpCi0jaWYgZGVmaW5lZChIQVZFX0FTX0NMV0IpIHx8
IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKLSMgZGVmaW5lIElOUFVUIEJB
U0VfSU5QVVQKLSNlbHNlCi0jIGRlZmluZSBJTlBVVChhZGRyKSAiYSIgKGFk
ZHIpLCBCQVNFX0lOUFVUKGFkZHIpCi0jZW5kaWYKLSAgICAgICAgLyoKLSAg
ICAgICAgICogTm90ZSByZWdhcmRpbmcgdGhlIHVzZSBvZiBOT1BfRFNfUFJF
RklYOiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1c2gKLSAgICAgICAgICog
KyBwcmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3AsIGFuZCBoZW5jZSB0aGUg
cHJlZml4IGlzIGFkZGVkIGluc3RlYWQKLSAgICAgICAgICogb2YgbGV0dGlu
ZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZpbGwgdGhlIGdhcCBieSBh
cHBlbmRpbmcgbm9wcy4KLSAgICAgICAgICovCi0gICAgICAgIGFsdGVybmF0
aXZlX2lvXzIoIi5ieXRlICIgX19zdHJpbmdpZnkoTk9QX0RTX1BSRUZJWCkg
IjsgY2xmbHVzaCAlW3BdIiwKLSAgICAgICAgICAgICAgICAgICAgICAgICAi
ZGF0YTE2IGNsZmx1c2ggJVtwXSIsIC8qIGNsZmx1c2hvcHQgKi8KLSAgICAg
ICAgICAgICAgICAgICAgICAgICBYODZfRkVBVFVSRV9DTEZMVVNIT1BULAot
ICAgICAgICAgICAgICAgICAgICAgICAgIENMV0JfRU5DT0RJTkcsCi0gICAg
ICAgICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRVUkVfQ0xXQiwgLyogbm8g
b3V0cHV0cyAqLywKLSAgICAgICAgICAgICAgICAgICAgICAgICBJTlBVVChh
ZGRyKSk7Ci0jdW5kZWYgSU5QVVQKLSN1bmRlZiBCQVNFX0lOUFVUCi0jdW5k
ZWYgQ0xXQl9FTkNPRElORwotCi0gICAgYWx0ZXJuYXRpdmVfMigiIiwgInNm
ZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAg
ICAgICAgICAgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMV0IpOworICAgIGNh
Y2hlX3dyaXRlYmFjayhhZGRyLCBzaXplKTsKIH0KIAogLyogQWxsb2NhdGUg
cGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KZGlm
ZiAtLWdpdCBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYvdnRk
LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5jCmlu
ZGV4IDIyOTkzOGYzYTgxMi4uMmExOGI3NmU4MDBkIDEwMDY0NAotLS0gYS94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5jCisrKyBiL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYvdnRkLmMKQEAgLTQ2LDEx
ICs0Niw2IEBAIHZvaWQgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHZvaWQgKnZh
KQogICAgIHVubWFwX2RvbWFpbl9wYWdlKHZhKTsKIH0KIAotdW5zaWduZWQg
aW50IGdldF9jYWNoZV9saW5lX3NpemUodm9pZCkKLXsKLSAgICByZXR1cm4g
KChjcHVpZF9lYngoMSkgPj4gOCkgJiAweGZmKSAqIDg7Ci19Ci0KIHZvaWQg
Zmx1c2hfYWxsX2NhY2hlKCkKIHsKICAgICB3YmludmQoKTsKZGlmZiAtLWdp
dCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvY2FjaGUuaCBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY2FjaGUuaAppbmRleCAxZjcxNzNkOGM3MmMuLmU0NzcwZWZi
MjJiOSAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9jYWNoZS5o
CisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY2FjaGUuaApAQCAtMTEsNCAr
MTEsMTEgQEAKIAogI2RlZmluZSBfX3JlYWRfbW9zdGx5IF9fc2VjdGlvbigi
LmRhdGEucmVhZF9tb3N0bHkiKQogCisjaWZuZGVmIF9fQVNTRU1CTFlfXwor
Cit2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVk
IGludCBzaXplKTsKK3ZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKKworI2VuZGlmCisKICNlbmRp
Zgo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.13-4.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.13-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L2FtZDogV29yayBhcm91bmQgQ0xGTFVTSCBvcmRl
cmluZyBvbiBvbGRlciBwYXJ0cwoKT24gcHJlLUNMRkxVU0hPUFQgQU1EIENQ
VXMsIENMRkxVU0ggaXMgd2Vha2VseSBvcmRlcmVkIHdpdGggZXZlcnl0aGlu
ZywKaW5jbHVkaW5nIHJlYWRzIGFuZCB3cml0ZXMgdG8gdGhlIGFkZHJlc3Ms
IGFuZCBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KClRoaXMgY3JlYXRl
cyBhIG11bHRpdHVkZSBvZiBwcm9ibGVtYXRpYyBjb3JuZXIgY2FzZXMsIGxh
aWQgb3V0IGluIHRoZSBtYW51YWwuCkFycmFuZ2UgdG8gdXNlIE1GRU5DRSBv
biBib3RoIHNpZGVzIG9mIHRoZSBDTEZMVVNIIHRvIGZvcmNlIHByb3BlciBv
cmRlcmluZy4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9m
Zi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCBiNzdmYTE5Mjk3MzMuLmFhMWI5
ZDBkZGE2YiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC02MzksNiArNjM5LDE0
IEBAIHN0YXRpYyB2b2lkIGluaXRfYW1kKHN0cnVjdCBjcHVpbmZvX3g4NiAq
YykKIAlpZiAoIWNwdV9oYXNfbGZlbmNlX2Rpc3BhdGNoKQogCQlfX3NldF9i
aXQoWDg2X0ZFQVRVUkVfTUZFTkNFX1JEVFNDLCBjLT54ODZfY2FwYWJpbGl0
eSk7CiAKKwkvKgorCSAqIE9uIHByZS1DTEZMVVNIT1BUIEFNRCBDUFVzLCBD
TEZMVVNIIGlzIHdlYWtseSBvcmRlcmVkIHdpdGgKKwkgKiBldmVyeXRoaW5n
LCBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRlcyB0byBhZGRyZXNzLCBhbmQK
KwkgKiBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KKwkgKi8KKwlpZiAo
IWNwdV9oYXNfY2xmbHVzaG9wdCkKKwkJc2V0dXBfZm9yY2VfY3B1X2NhcChY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFKTsKKwogCXN3aXRjaChjLT54ODYpCiAJ
ewogCWNhc2UgMHhmIC4uLiAweDExOgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMgYi94ZW4vYXJjaC94ODYvZmx1c2h0bGIuYwppbmRl
eCA4NTY4NDkxYzdlYTkuLjZmM2Y1YWIxYTNjNCAxMDA2NDQKLS0tIGEveGVu
L2FyY2gveDg2L2ZsdXNodGxiLmMKKysrIGIveGVuL2FyY2gveDg2L2ZsdXNo
dGxiLmMKQEAgLTI0OSw2ICsyNDksMTMgQEAgdW5zaWduZWQgaW50IGZsdXNo
X2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFn
cykKICAgICByZXR1cm4gZmxhZ3M7CiB9CiAKKy8qCisgKiBPbiBwcmUtQ0xG
TFVTSE9QVCBBTUQgQ1BVcywgQ0xGTFVTSCBpcyB3ZWFrbHkgb3JkZXJlZCB3
aXRoIGV2ZXJ5dGhpbmcsCisgKiBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRl
cyB0byBhZGRyZXNzLCBhbmQgTEZFTkNFL1NGRU5DRSBpbnN0cnVjdGlvbnMu
CisgKgorICogVGhpcyBmdW5jdGlvbiBvbmx5IHdvcmtzIHNhZmVseSBhZnRl
ciBhbHRlcm5hdGl2ZXMgaGF2ZSBydW4uICBMdWNraWx5LCBhdAorICogdGhl
IHRpbWUgb2Ygd3JpdGluZywgd2UgZG9uJ3QgZmx1c2ggdGhlIGNhY2hlcyB0
aGF0IGVhcmx5LgorICovCiB2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIC8qCkBAIC0yNTgs
NiArMjY1LDggQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICB1bnNpZ25lZCBpbnQgY2xmbHVz
aF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVzaF9zaXplID86
IDE2OwogICAgIGNvbnN0IHZvaWQgKmVuZCA9IGFkZHIgKyBzaXplOwogCisg
ICAgYWx0ZXJuYXRpdmUoIiIsICJtZmVuY2UiLCBYODZfQlVHX0NMRkxVU0hf
TUZFTkNFKTsKKwogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRkciAm
IChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFkZHIgPCBlbmQ7
IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKICAgICB7CkBAIC0yNzMsNyArMjgy
LDkgQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRyLCB1bnNp
Z25lZCBpbnQgc2l6ZSkKICAgICAgICAgICAgICAgICAgICAgICAgW3BdICJt
IiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CiAgICAgfQogCi0gICAgYWx0
ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BU
KTsKKyAgICBhbHRlcm5hdGl2ZV8yKCIiLAorICAgICAgICAgICAgICAgICAg
InNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisgICAgICAgICAg
ICAgICAgICAibWZlbmNlIiwgWDg2X0JVR19DTEZMVVNIX01GRU5DRSk7CiB9
CiAKIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKQpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9jcHVmZWF0dXJlcy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVm
ZWF0dXJlcy5oCmluZGV4IGI5ZDNjYWM5NzUzOC4uYTgyMjJlOTc4Y2Q5IDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NwdWZlYXR1cmVzLmgK
KysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCkBAIC00
NCw2ICs0NCw3IEBAIFhFTl9DUFVGRUFUVVJFKFNDX1ZFUldfSURMRSwgICAg
ICBYODZfU1lOVEgoMjUpKSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBpZGxl
ICovCiAjZGVmaW5lIFg4Nl9CVUcoeCkgKChGU0NBUElOVFMgKyBYODZfTlJf
U1lOVEgpICogMzIgKyAoeCkpCiAKICNkZWZpbmUgWDg2X0JVR19GUFVfUFRS
UyAgICAgICAgICBYODZfQlVHKCAwKSAvKiAoRilYe1NBVkUsUlNUT1J9IGRv
ZXNuJ3Qgc2F2ZS9yZXN0b3JlIEZPUC9GSVAvRkRQLiAqLworI2RlZmluZSBY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFICAgIFg4Nl9CVUcoIDIpIC8qIE1GRU5D
RSBuZWVkZWQgdG8gc2VyaWFsaXNlIENMRkxVU0ggKi8KIAogLyogVG90YWwg
bnVtYmVyIG9mIGNhcGFiaWxpdHkgd29yZHMsIGluYyBzeW50aCBhbmQgYnVn
IHdvcmRzLiAqLwogI2RlZmluZSBOQ0FQSU5UUyAoRlNDQVBJTlRTICsgWDg2
X05SX1NZTlRIICsgWDg2X05SX0JVRykgLyogTiAzMi1iaXQgd29yZHMgd29y
dGggb2YgaW5mbyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.13-5.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.13-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBUcmFjayBhbmQgZmx1c2ggbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mIFJBTQoKVGhlcmUgYXJlIGxlZ2l0aW1hdGUgdXNl
cyBvZiBXQyBtYXBwaW5ncyBvZiBSQU0sIGUuZy4gZm9yIERNQSBidWZmZXJz
IHdpdGgKZGV2aWNlcyB0aGF0IG1ha2Ugbm9uLWNvaGVyZW50IHdyaXRlcy4g
IFRoZSBMaW51eCBzb3VuZCBzdWJzeXN0ZW0gbWFrZXMKZXh0ZW5zaXZlIHVz
ZSBvZiB0aGlzIHRlY2huaXF1ZS4KCkZvciBzdWNoIHVzZWNhc2VzLCB0aGUg
Z3Vlc3QncyBETUEgYnVmZmVyIGlzIG1hcHBlZCBhbmQgY29uc2lzdGVudGx5
IHVzZWQgYXMKV0MsIGFuZCBYZW4gZG9lc24ndCBpbnRlcmFjdCB3aXRoIHRo
ZSBidWZmZXIuCgpIb3dldmVyLCBhIG1pc2NoZXZpb3VzIGd1ZXN0IGNhbiB1
c2UgV0MgbWFwcGluZ3MgdG8gZGVsaWJlcmF0ZWx5IGNyZWF0ZQpub24tY29o
ZXJlbmN5IGJldHdlZW4gdGhlIGNhY2hlIGFuZCBSQU0sIGFuZCB1c2UgdGhp
cyB0byB0cmljayBYZW4gaW50bwp2YWxpZGF0aW5nIGEgcGFnZXRhYmxlIHdo
aWNoIGlzbid0IGFjdHVhbGx5IHNhZmUuCgpBbGxvY2F0ZSBhIG5ldyBQR1Rf
bm9uX2NvaGVyZW50IHRvIHRyYWNrIHRoZSBub24tY29oZXJlbmN5IG9mIG1h
cHBpbmdzLiAgU2V0Cml0IHdoZW5ldmVyIGEgbm9uLWNvaGVyZW50IHdyaXRl
YWJsZSBtYXBwaW5nIGlzIGNyZWF0ZWQuICBJZiB0aGUgcGFnZSBpcyB1c2Vk
CmFzIGFueXRoaW5nIG90aGVyIHRoYW4gUEdUX3dyaXRhYmxlX3BhZ2UsIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlCnZhbGlkYXRpb24uICBBbHNvIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlIHRoZSBwYWdlIGlzIHJldHVybmVk
IHRvIHRoZSBoZWFwLgoKVGhpcyBpcyBDVkUtMjAyMi0yNjM2NCwgcGFydCBv
ZiBYU0EtNDAyLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3
LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDg1
OTY0NmI2NzBhOC4uZjVlZWRkY2U1ODY3IDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTAwMCw2
ICsxMDAwLDE1IEBAIGdldF9wYWdlX2Zyb21fbDFlKAogICAgICAgICByZXR1
cm4gLUVBQ0NFUzsKICAgICB9CiAKKyAgICAvKgorICAgICAqIFRyYWNrIHdy
aXRlYWJsZSBub24tY29oZXJlbnQgbWFwcGluZ3MgdG8gUkFNIHBhZ2VzLCB0
byB0cmlnZ2VyIGEgY2FjaGUKKyAgICAgKiBmbHVzaCBsYXRlciBpZiB0aGUg
dGFyZ2V0IGlzIHVzZWQgYXMgYW55dGhpbmcgYnV0IGEgUEdUX3dyaXRlYWJs
ZSBwYWdlLgorICAgICAqIFdlIGNhcmUgYWJvdXQgYWxsIHdyaXRlYWJsZSBt
YXBwaW5ncywgaW5jbHVkaW5nIGZvcmVpZ24gbWFwcGluZ3MuCisgICAgICov
CisgICAgaWYgKCAhYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJFX1hFTl9TRUxG
U05PT1ApICYmCisgICAgICAgICAobDFmICYgKFBBR0VfQ0FDSEVfQVRUUlMg
fCBfUEFHRV9SVykpID09IChfUEFHRV9XQyB8IF9QQUdFX1JXKSApCisgICAg
ICAgIHNldF9iaXQoX1BHVF9ub25fY29oZXJlbnQsICZwYWdlLT51LmludXNl
LnR5cGVfaW5mbyk7CisKICAgICByZXR1cm4gMDsKIAogIGNvdWxkX25vdF9w
aW46CkBAIC0yNTMyLDYgKzI1NDEsMTkgQEAgc3RhdGljIGludCBjbGVhbnVw
X3BhZ2VfbWFwcGluZ3Moc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSkKICAgICAg
ICAgfQogICAgIH0KIAorICAgIC8qCisgICAgICogRmx1c2ggdGhlIGNhY2hl
IGlmIHRoZXJlIHdlcmUgcHJldmlvdXNseSBub24tY29oZXJlbnQgd3JpdGVh
YmxlCisgICAgICogbWFwcGluZ3Mgb2YgdGhpcyBwYWdlLiAgVGhpcyBmb3Jj
ZXMgdGhlIHBhZ2UgdG8gYmUgY29oZXJlbnQgYmVmb3JlIGl0CisgICAgICog
aXMgZnJlZWQgYmFjayB0byB0aGUgaGVhcC4KKyAgICAgKi8KKyAgICBpZiAo
IF9fdGVzdF9hbmRfY2xlYXJfYml0KF9QR1Rfbm9uX2NvaGVyZW50LCAmcGFn
ZS0+dS5pbnVzZS50eXBlX2luZm8pICkKKyAgICB7CisgICAgICAgIHZvaWQg
KmFkZHIgPSBfX21hcF9kb21haW5fcGFnZShwYWdlKTsKKworICAgICAgICBj
YWNoZV9mbHVzaChhZGRyLCBQQUdFX1NJWkUpOworICAgICAgICB1bm1hcF9k
b21haW5fcGFnZShhZGRyKTsKKyAgICB9CisKICAgICByZXR1cm4gcmM7CiB9
CiAKQEAgLTMwOTAsNiArMzExMiwyMiBAQCBzdGF0aWMgaW50IF9nZXRfcGFn
ZV90eXBlKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UsIHVuc2lnbmVkIGxvbmcg
dHlwZSwKICAgICBpZiAoIHVubGlrZWx5KCEobnggJiBQR1RfdmFsaWRhdGVk
KSkgKQogICAgIHsKICAgICAgICAgLyoKKyAgICAgICAgICogRmx1c2ggdGhl
IGNhY2hlIGlmIHRoZXJlIHdlcmUgcHJldmlvdXNseSBub24tY29oZXJlbnQg
bWFwcGluZ3Mgb2YKKyAgICAgICAgICogdGhpcyBwYWdlLCBhbmQgd2UncmUg
dHJ5aW5nIHRvIHVzZSBpdCBhcyBhbnl0aGluZyBvdGhlciB0aGFuIGEKKyAg
ICAgICAgICogd3JpdGVhYmxlIHBhZ2UuICBUaGlzIGZvcmNlcyB0aGUgcGFn
ZSB0byBiZSBjb2hlcmVudCBiZWZvcmUgd2UKKyAgICAgICAgICogdmFsaWRh
dGUgaXRzIGNvbnRlbnRzIGZvciBzYWZldHkuCisgICAgICAgICAqLworICAg
ICAgICBpZiAoIChueCAmIFBHVF9ub25fY29oZXJlbnQpICYmIHR5cGUgIT0g
UEdUX3dyaXRhYmxlX3BhZ2UgKQorICAgICAgICB7CisgICAgICAgICAgICB2
b2lkICphZGRyID0gX19tYXBfZG9tYWluX3BhZ2UocGFnZSk7CisKKyAgICAg
ICAgICAgIGNhY2hlX2ZsdXNoKGFkZHIsIFBBR0VfU0laRSk7CisgICAgICAg
ICAgICB1bm1hcF9kb21haW5fcGFnZShhZGRyKTsKKworICAgICAgICAgICAg
cGFnZS0+dS5pbnVzZS50eXBlX2luZm8gJj0gflBHVF9ub25fY29oZXJlbnQ7
CisgICAgICAgIH0KKworICAgICAgICAvKgogICAgICAgICAgKiBObyBzcGVj
aWFsIHZhbGlkYXRpb24gbmVlZGVkIGZvciB3cml0YWJsZSBvciBzaGFyZWQg
cGFnZXMuICBQYWdlCiAgICAgICAgICAqIHRhYmxlcyBhbmQgR0RUL0xEVCBu
ZWVkIHRvIGhhdmUgdGhlaXIgY29udGVudHMgYXVkaXRlZC4KICAgICAgICAg
ICoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9ncmFudF90YWJsZS5j
IGIveGVuL2FyY2gveDg2L3B2L2dyYW50X3RhYmxlLmMKaW5kZXggMDMyNTYx
OGM5ODgzLi44MWM3MmU2MWVkNTUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9wdi9ncmFudF90YWJsZS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9wdi9ncmFu
dF90YWJsZS5jCkBAIC0xMDksNyArMTA5LDE3IEBAIGludCBjcmVhdGVfZ3Jh
bnRfcHZfbWFwcGluZyh1aW50NjRfdCBhZGRyLCBtZm5fdCBmcmFtZSwKIAog
ICAgIG9sMWUgPSAqcGwxZTsKICAgICBpZiAoIFVQREFURV9FTlRSWShsMSwg
cGwxZSwgb2wxZSwgbmwxZSwgZ2wxbWZuLCBjdXJyLCAwKSApCisgICAgewor
ICAgICAgICAvKgorICAgICAgICAgKiBXZSBhbHdheXMgY3JlYXRlIG1hcHBp
bmdzIGluIHRoaXMgcGF0aC4gIEhvd2V2ZXIsIG91ciBjYWxsZXIsCisgICAg
ICAgICAqIG1hcF9ncmFudF9yZWYoKSwgb25seSBwYXNzZXMgcG90ZW50aWFs
bHkgbm9uLXplcm8gY2FjaGVfZmxhZ3MgZm9yCisgICAgICAgICAqIE1NSU8g
ZnJhbWVzLCBzbyB0aGlzIHBhdGggZG9lc24ndCBjcmVhdGUgbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mCisgICAgICAgICAqIFJBTSBmcmFtZXMgYW5kIHRo
ZXJlJ3Mgbm8gbmVlZCB0byBjYWxjdWxhdGUgUEdUX25vbl9jb2hlcmVudC4K
KyAgICAgICAgICovCisgICAgICAgIEFTU0VSVCghY2FjaGVfZmxhZ3MgfHwg
aXNfaW9tZW1fcGFnZShmcmFtZSkpOworCiAgICAgICAgIHJjID0gR05UU1Rf
b2theTsKKyAgICB9CiAKICBvdXRfdW5sb2NrOgogICAgIHBhZ2VfdW5sb2Nr
KHBhZ2UpOwpAQCAtMjk0LDcgKzMwNCwxOCBAQCBpbnQgcmVwbGFjZV9ncmFu
dF9wdl9tYXBwaW5nKHVpbnQ2NF90IGFkZHIsIG1mbl90IGZyYW1lLAogICAg
ICAgICAgICAgICAgICBsMWVfZ2V0X2ZsYWdzKG9sMWUpLCBhZGRyLCBncmFu
dF9wdGVfZmxhZ3MpOwogCiAgICAgaWYgKCBVUERBVEVfRU5UUlkobDEsIHBs
MWUsIG9sMWUsIG5sMWUsIGdsMW1mbiwgY3VyciwgMCkgKQorICAgIHsKKyAg
ICAgICAgLyoKKyAgICAgICAgICogR2VuZXJhbGx5LCByZXBsYWNlX2dyYW50
X3B2X21hcHBpbmcoKSBpcyB1c2VkIHRvIGRlc3Ryb3kgbWFwcGluZ3MKKyAg
ICAgICAgICogKG4xbGUgPSBsMWVfZW1wdHkoKSksIGJ1dCBpdCBjYW4gYmUg
YSBwcmVzZW50IG1hcHBpbmcgb24gdGhlCisgICAgICAgICAqIEdOVEFCT1Bf
dW5tYXBfYW5kX3JlcGxhY2UgcGF0aC4KKyAgICAgICAgICoKKyAgICAgICAg
ICogSW4gc3VjaCBjYXNlcywgdGhlIFBURSBpcyBmdWxseSB0cmFuc3BsYW50
ZWQgZnJvbSBpdHMgb2xkIGxvY2F0aW9uCisgICAgICAgICAqIHZpYSBzdGVh
bF9saW5lYXJfYWRkcigpLCBzbyB3ZSBuZWVkIG5vdCBwZXJmb3JtIFBHVF9u
b25fY29oZXJlbnQKKyAgICAgICAgICogY2hlY2tpbmcgaGVyZS4KKyAgICAg
ICAgICovCiAgICAgICAgIHJjID0gR05UU1Rfb2theTsKKyAgICB9CiAKICBv
dXRfdW5sb2NrOgogICAgIHBhZ2VfdW5sb2NrKHBhZ2UpOwpkaWZmIC0tZ2l0
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oIGIveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tbS5oCmluZGV4IGRiMDk4NDlmNzNmOC4uODJkMGZkNjEwNGEyIDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC00OCw4ICs0OCwxMiBAQAogI2Rl
ZmluZSBfUEdUX3BhcnRpYWwgICAgICBQR19zaGlmdCg4KQogI2RlZmluZSBQ
R1RfcGFydGlhbCAgICAgICBQR19tYXNrKDEsIDgpCiAKKy8qIEhhcyB0aGlz
IHBhZ2UgYmVlbiBtYXBwZWQgd3JpdGVhYmxlIHdpdGggYSBub24tY29oZXJl
bnQgbWVtb3J5IHR5cGU/ICovCisjZGVmaW5lIF9QR1Rfbm9uX2NvaGVyZW50
IFBHX3NoaWZ0KDkpCisjZGVmaW5lIFBHVF9ub25fY29oZXJlbnQgIFBHX21h
c2soMSwgOSkKKwogIC8qIENvdW50IG9mIHVzZXMgb2YgdGhpcyBmcmFtZSBh
cyBpdHMgY3VycmVudCB0eXBlLiAqLwotI2RlZmluZSBQR1RfY291bnRfd2lk
dGggICBQR19zaGlmdCg4KQorI2RlZmluZSBQR1RfY291bnRfd2lkdGggICBQ
R19zaGlmdCg5KQogI2RlZmluZSBQR1RfY291bnRfbWFzayAgICAoKDFVTDw8
UEdUX2NvdW50X3dpZHRoKS0xKQogCiAvKiBBcmUgdGhlICd0eXBlIG1hc2sn
IGJpdHMgaWRlbnRpY2FsPyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.14-1.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3BhZ2U6IEludHJvZHVjZSBfUEFHRV8qIGNvbnN0
YW50cyBmb3IgbWVtb3J5IHR5cGVzCgouLi4gcmF0aGVyIHRoYW4gb3BlbmNv
ZGluZyB0aGUgUEFUL1BDRC9QV1QgYXR0cmlidXRlcyBpbiBfX1BBR0VfSFlQ
RVJWSVNPUl8qCmNvbnN0YW50cy4gIFRoZXNlIGFyZSBnb2luZyB0byBiZSBu
ZWVkZWQgYnkgZm9ydGhjb21pbmcgbG9naWMuCgpObyBmdW5jdGlvbmFsIGNo
YW5nZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvcGFnZS5oCmluZGV4IGY2MzJhZmZhZWY2OC4u
NTI1NTE1MzVhOTkxIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2
L3BhZ2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAt
MzQ0LDYgKzM0NCwxNCBAQCB2b2lkIGVmaV91cGRhdGVfbDRfcGd0YWJsZSh1
bnNpZ25lZCBpbnQgbDRpZHgsIGw0X3BnZW50cnlfdCk7CiAKICNkZWZpbmUg
UEFHRV9DQUNIRV9BVFRSUyAoX1BBR0VfUEFUIHwgX1BBR0VfUENEIHwgX1BB
R0VfUFdUKQogCisvKiBNZW1vcnkgdHlwZXMsIGVuY29kZWQgdW5kZXIgWGVu
J3MgY2hvaWNlIG9mIE1TUl9QQVQuICovCisjZGVmaW5lIF9QQUdFX1dCICAg
ICAgICAgKCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMCkKKyNk
ZWZpbmUgX1BBR0VfV1QgICAgICAgICAoICAgICAgICAgICAgICAgICAgICAg
ICAgX1BBR0VfUFdUKQorI2RlZmluZSBfUEFHRV9VQ00gICAgICAgICggICAg
ICAgICAgICBfUEFHRV9QQ0QgICAgICAgICAgICApCisjZGVmaW5lIF9QQUdF
X1VDICAgICAgICAgKCAgICAgICAgICAgIF9QQUdFX1BDRCB8IF9QQUdFX1BX
VCkKKyNkZWZpbmUgX1BBR0VfV0MgICAgICAgICAoX1BBR0VfUEFUICAgICAg
ICAgICAgICAgICAgICAgICAgKQorI2RlZmluZSBfUEFHRV9XUCAgICAgICAg
IChfUEFHRV9QQVQgfCAgICAgICAgICAgICBfUEFHRV9QV1QpCisKIC8qCiAg
KiBEZWJ1ZyBvcHRpb246IEVuc3VyZSB0aGF0IGdyYW50ZWQgbWFwcGluZ3Mg
YXJlIG5vdCBpbXBsaWNpdGx5IHVubWFwcGVkLgogICogV0FSTklORzogVGhp
cyB3aWxsIG5lZWQgdG8gYmUgZGlzYWJsZWQgdG8gcnVuIE9TZXMgdGhhdCB1
c2UgdGhlIHNwYXJlIFBURQpAQCAtMzYyLDggKzM3MCw4IEBAIHZvaWQgZWZp
X3VwZGF0ZV9sNF9wZ3RhYmxlKHVuc2lnbmVkIGludCBsNGlkeCwgbDRfcGdl
bnRyeV90KTsKICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfUlggICAgICAo
X1BBR0VfUFJFU0VOVCB8IF9QQUdFX0FDQ0VTU0VEKQogI2RlZmluZSBfX1BB
R0VfSFlQRVJWSVNPUiAgICAgICAgIChfX1BBR0VfSFlQRVJWSVNPUl9SWCB8
IFwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgX1BBR0Vf
RElSVFkgfCBfUEFHRV9SVykKLSNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1Jf
VUNNSU5VUyAoX19QQUdFX0hZUEVSVklTT1IgfCBfUEFHRV9QQ0QpCi0jZGVm
aW5lIF9fUEFHRV9IWVBFUlZJU09SX1VDICAgICAgKF9fUEFHRV9IWVBFUlZJ
U09SIHwgX1BBR0VfUENEIHwgX1BBR0VfUFdUKQorI2RlZmluZSBfX1BBR0Vf
SFlQRVJWSVNPUl9VQ01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdF
X1VDTSkKKyNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19Q
QUdFX0hZUEVSVklTT1IgfCBfUEFHRV9VQykKICNkZWZpbmUgX19QQUdFX0hZ
UEVSVklTT1JfU0hTVEsgICAoX19QQUdFX0hZUEVSVklTT1JfUk8gfCBfUEFH
RV9ESVJUWSkKIAogI2RlZmluZSBNQVBfU01BTExfUEFHRVMgX1BBR0VfQVZB
SUwwIC8qIGRvbid0IHVzZSBzdXBlcnBhZ2VzIG1hcHBpbmdzICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.14-2.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBEb24ndCBjaGFuZ2UgdGhlIGNhY2hlYWJpbGl0
eSBvZiB0aGUgZGlyZWN0bWFwCgpDaGFuZ2VzZXQgNTVmOTdmNDliN2NlICgi
eDg2OiBDaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gMToxIHBhZ2Ug
bWFwcGluZ3MKaW4gcmVzcG9uc2UgdG8gZ3Vlc3QgbWFwcGluZyByZXF1ZXN0
cyIpIGF0dGVtcHRlZCB0byBrZWVwIHRoZSBjYWNoZWFiaWxpdHkKY29uc2lz
dGVudCBiZXR3ZWVuIGRpZmZlcmVudCBtYXBwaW5ncyBvZiB0aGUgc2FtZSBw
YWdlLgoKVGhlIHJlYXNvbiB3YXNuJ3QgZGVzY3JpYmVkIGluIHRoZSBjaGFu
Z2Vsb2csIGJ1dCBpdCBpcyB1bmRlcnN0b29kIHRvIGJlIGluCnJlZ2FyZHMg
dG8gYSBjb25jZXJuIG92ZXIgbWFjaGluZSBjaGVjayBleGNlcHRpb25zLCBv
d2luZyB0byBlcnJhdGEgd2hlbiB1c2luZwptaXhlZCBjYWNoZWFiaWxpdGll
cy4gIEl0IGRpZCB0aGlzIHByaW1hcmlseSBieSB1cGRhdGluZyBYZW4ncyBt
YXBwaW5nIG9mIHRoZQpwYWdlIGluIHRoZSBkaXJlY3QgbWFwIHdoZW4gdGhl
IGd1ZXN0IG1hcHBlZCBhIHBhZ2Ugd2l0aCByZWR1Y2VkIGNhY2hlYWJpbGl0
eS4KClVuZm9ydHVuYXRlbHksIHRoZSBsb2dpYyBkaWRuJ3QgYWN0dWFsbHkg
cHJldmVudCBtaXhlZCBjYWNoZWFiaWxpdHkgZnJvbQpvY2N1cnJpbmc6CiAq
IEEgZ3Vlc3QgY291bGQgbWFwIGEgcGFnZSBub3JtYWxseSwgYW5kIHRoZW4g
bWFwIHRoZSBzYW1lIHBhZ2Ugd2l0aAogICBkaWZmZXJlbnQgY2FjaGVhYmls
aXR5OyBub3RoaW5nIHByZXZlbnRlZCB0aGlzLgogKiBUaGUgY2FjaGVhYmls
aXR5IG9mIHRoZSBkaXJlY3RtYXAgd2FzIGFsd2F5cyBsYXRlc3QtdGFrZXMt
cHJlY2VkZW5jZSBpbgogICB0ZXJtcyBvZiBndWVzdCByZXF1ZXN0cy4KICog
R3JhbnQtbWFwcGVkIGZyYW1lcyB3aXRoIGxlc3NlciBjYWNoZWFiaWxpdHkg
ZGlkbid0IGFkanVzdCB0aGUgcGFnZSdzCiAgIGNhY2hlYXR0ciBzZXR0aW5n
cy4KICogVGhlIG1hcF9kb21haW5fcGFnZSgpIGZ1bmN0aW9uIHN0aWxsIHVu
Y29uZGl0aW9uYWxseSBjcmVhdGVkIFdCIG1hcHBpbmdzLAogICBpcnJlc3Bl
Y3RpdmUgb2YgdGhlIHBhZ2UncyBjYWNoZWF0dHIgc2V0dGluZ3MuCgpBZGRp
dGlvbmFsbHksIHVwZGF0ZV94ZW5fbWFwcGluZ3MoKSBoYWQgYSBidWcgd2hl
cmUgdGhlIGFsaWFzIGNhbGN1bGF0aW9uIHdhcwp3cm9uZyBmb3IgbWZuJ3Mg
d2hpY2ggd2VyZSAuaW5pdCBjb250ZW50LCB3aGljaCBzaG91bGQgaGF2ZSBi
ZWVuIHRyZWF0ZWQgYXMKZnVsbHkgZ3Vlc3QgcGFnZXMsIG5vdCBYZW4gcGFn
ZXMuCgpXb3JzZSB5ZXQsIHRoZSBsb2dpYyBpbnRyb2R1Y2VkIGEgdnVsbmVy
YWJpbGl0eSB3aGVyZWJ5IG5lY2Vzc2FyeQpwYWdldGFibGUvc2VnZGVzYyBh
ZGp1c3RtZW50cyBtYWRlIGJ5IFhlbiBpbiB0aGUgdmFsaWRhdGlvbiBsb2dp
YyBjb3VsZCBiZWNvbWUKbm9uLWNvaGVyZW50IGJldHdlZW4gdGhlIGNhY2hl
IGFuZCBtYWluIG1lbW9yeS4gIFRoZSBDUFUgY291bGQgc3Vic2VxdWVudGx5
Cm9wZXJhdGUgb24gdGhlIHN0YWxlIHZhbHVlIGluIHRoZSBjYWNoZSwgcmF0
aGVyIHRoYW4gdGhlIHNhZmUgdmFsdWUgaW4gbWFpbgptZW1vcnkuCgpUaGUg
ZGlyZWN0bWFwIGNvbnRhaW5zIHByaW1hcmlseSBtYXBwaW5ncyBvZiBSQU0u
ICBQQVQvTVRSUiBjb25mbGljdApyZXNvbHV0aW9uIGlzIGFzeW1tZXRyaWMs
IGFuZCBnZW5lcmFsbHkgZm9yIE1UUlI9V0IgcmFuZ2VzLCBQQVQgb2YgbGVz
c2VyCmNhY2hlYWJpbGl0eSByZXNvbHZlcyB0byBiZWluZyBjb2hlcmVudC4g
IFRoZSBzcGVjaWFsIGNhc2UgaXMgV0MgbWFwcGluZ3MsCndoaWNoIGFyZSBu
b24tY29oZXJlbnQgYWdhaW5zdCBNVFJSPVdCIHJlZ2lvbnMgKGV4Y2VwdCBm
b3IgZnVsbHktY29oZXJlbnQKQ1BVcykuCgpYZW4gbXVzdCBub3QgaGF2ZSBh
bnkgV0MgY2FjaGVhYmlsaXR5IGluIHRoZSBkaXJlY3RtYXAsIHRvIHByZXZl
bnQgWGVuJ3MKYWN0aW9ucyBmcm9tIGNyZWF0aW5nIG5vbi1jb2hlcmVuY3ku
ICAoR3Vlc3QgYWN0aW9ucyBjcmVhdGluZyBub24tY29oZXJlbmN5IGlzCmRl
YWx0IHdpdGggaW4gc3Vic2VxdWVudCBwYXRjaGVzLikgIEFzIGFsbCBtZW1v
cnkgdHlwZXMgZm9yIE1UUlI9V0IgcmFuZ2VzCmludGVyLW9wZXJhdGUgY29o
ZXJlbnRseSwgc28gbGVhdmUgWGVuJ3MgZGlyZWN0bWFwIG1hcHBpbmdzIGFz
IFdCLgoKT25seSBQViBndWVzdHMgd2l0aCBhY2Nlc3MgdG8gZGV2aWNlcyBj
YW4gdXNlIHJlZHVjZWQtY2FjaGVhYmlsaXR5IG1hcHBpbmdzIHRvCmJlZ2lu
IHdpdGgsIGFuZCB0aGV5J3JlIHRydXN0ZWQgbm90IHRvIG1vdW50IERvU3Mg
YWdhaW5zdCB0aGUgc3lzdGVtIGFueXdheS4KCkRyb3AgUEdDX2NhY2hlYXR0
cl97YmFzZSxtYXNrfSBlbnRpcmVseSwgYW5kIHRoZSBsb2dpYyB0byBtYW5p
cHVsYXRlIHRoZW0uClNoaWZ0IHRoZSBsYXRlciBQR0NfKiBjb25zdGFudHMg
dXAsIHRvIGdhaW4gMyBleHRyYSBiaXRzIGluIHRoZSBtYWluIHJlZmVyZW5j
ZQpjb3VudC4gIFJldGFpbiB0aGUgY2hlY2sgaW4gZ2V0X3BhZ2VfZnJvbV9s
MWUoKSBmb3Igc3BlY2lhbF9wYWdlcygpIGJlY2F1c2UgYQpndWVzdCBoYXMg
bm8gYnVzaW5lc3MgdXNpbmcgcmVkdWNlZCBjYWNoZWFiaWxpdHkgb24gdGhl
c2UuCgpUaGlzIHJldmVydHMgY2hhbmdlc2V0IDU1Zjk3ZjQ5YjdjZTZjMzUy
MGM1NTVkMTljYWFjNmNmM2Y5YTVkZjAKClRoaXMgaXMgQ1ZFLTIwMjItMjYz
NjMsIHBhcnQgb2YgWFNBLTQwMi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Cgpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94ODYv
bW0uYwppbmRleCAwYjc1YjYzNzFkNGIuLjdkM2QxODZlZGJkNSAxMDA2NDQK
LS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2FyY2gveDg2L21t
LmMKQEAgLTc4NSwyNCArNzg1LDYgQEAgYm9vbCBpc19pb21lbV9wYWdlKG1m
bl90IG1mbikKICAgICByZXR1cm4gKHBhZ2VfZ2V0X293bmVyKHBhZ2UpID09
IGRvbV9pbyk7CiB9CiAKLXN0YXRpYyBpbnQgdXBkYXRlX3hlbl9tYXBwaW5n
cyh1bnNpZ25lZCBsb25nIG1mbiwgdW5zaWduZWQgaW50IGNhY2hlYXR0cikK
LXsKLSAgICBpbnQgZXJyID0gMDsKLSAgICBib29sIGFsaWFzID0gbWZuID49
IFBGTl9ET1dOKHhlbl9waHlzX3N0YXJ0KSAmJgotICAgICAgICAgbWZuIDwg
UEZOX1VQKHhlbl9waHlzX3N0YXJ0ICsgeGVuX3ZpcnRfZW5kIC0gWEVOX1ZJ
UlRfU1RBUlQpOwotICAgIHVuc2lnbmVkIGxvbmcgeGVuX3ZhID0KLSAgICAg
ICAgWEVOX1ZJUlRfU1RBUlQgKyAoKG1mbiAtIFBGTl9ET1dOKHhlbl9waHlz
X3N0YXJ0KSkgPDwgUEFHRV9TSElGVCk7Ci0KLSAgICBpZiAoIHVubGlrZWx5
KGFsaWFzKSAmJiBjYWNoZWF0dHIgKQotICAgICAgICBlcnIgPSBtYXBfcGFn
ZXNfdG9feGVuKHhlbl92YSwgX21mbihtZm4pLCAxLCAwKTsKLSAgICBpZiAo
ICFlcnIgKQotICAgICAgICBlcnIgPSBtYXBfcGFnZXNfdG9feGVuKCh1bnNp
Z25lZCBsb25nKW1mbl90b192aXJ0KG1mbiksIF9tZm4obWZuKSwgMSwKLSAg
ICAgICAgICAgICAgICAgICAgIFBBR0VfSFlQRVJWSVNPUiB8IGNhY2hlYXR0
cl90b19wdGVfZmxhZ3MoY2FjaGVhdHRyKSk7Ci0gICAgaWYgKCB1bmxpa2Vs
eShhbGlhcykgJiYgIWNhY2hlYXR0ciAmJiAhZXJyICkKLSAgICAgICAgZXJy
ID0gbWFwX3BhZ2VzX3RvX3hlbih4ZW5fdmEsIF9tZm4obWZuKSwgMSwgUEFH
RV9IWVBFUlZJU09SKTsKLSAgICByZXR1cm4gZXJyOwotfQotCiAjaWZuZGVm
IE5ERUJVRwogc3RydWN0IG1taW9fZW11bF9yYW5nZV9jdHh0IHsKICAgICBj
b25zdCBzdHJ1Y3QgZG9tYWluICpkOwpAQCAtMTAwNyw0NyArOTg5LDE0IEBA
IGdldF9wYWdlX2Zyb21fbDFlKAogICAgICAgICBnb3RvIGNvdWxkX25vdF9w
aW47CiAgICAgfQogCi0gICAgaWYgKCBwdGVfZmxhZ3NfdG9fY2FjaGVhdHRy
KGwxZikgIT0KLSAgICAgICAgICgocGFnZS0+Y291bnRfaW5mbyAmIFBHQ19j
YWNoZWF0dHJfbWFzaykgPj4gUEdDX2NhY2hlYXR0cl9iYXNlKSApCisgICAg
aWYgKCAobDFmICYgUEFHRV9DQUNIRV9BVFRSUykgIT0gX1BBR0VfV0IgJiYg
aXNfc3BlY2lhbF9wYWdlKHBhZ2UpICkKICAgICB7Ci0gICAgICAgIHVuc2ln
bmVkIGxvbmcgeCwgbngsIHkgPSBwYWdlLT5jb3VudF9pbmZvOwotICAgICAg
ICB1bnNpZ25lZCBsb25nIGNhY2hlYXR0ciA9IHB0ZV9mbGFnc190b19jYWNo
ZWF0dHIobDFmKTsKLSAgICAgICAgaW50IGVycjsKLQotICAgICAgICBpZiAo
IGlzX3NwZWNpYWxfcGFnZShwYWdlKSApCi0gICAgICAgIHsKLSAgICAgICAg
ICAgIGlmICggd3JpdGUgKQotICAgICAgICAgICAgICAgIHB1dF9wYWdlX3R5
cGUocGFnZSk7Ci0gICAgICAgICAgICBwdXRfcGFnZShwYWdlKTsKLSAgICAg
ICAgICAgIGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLAotICAgICAgICAgICAg
ICAgICAgICAgIkF0dGVtcHQgdG8gY2hhbmdlIGNhY2hlIGF0dHJpYnV0ZXMg
b2YgWGVuIGhlYXAgcGFnZVxuIik7Ci0gICAgICAgICAgICByZXR1cm4gLUVB
Q0NFUzsKLSAgICAgICAgfQotCi0gICAgICAgIGRvIHsKLSAgICAgICAgICAg
IHggID0geTsKLSAgICAgICAgICAgIG54ID0gKHggJiB+UEdDX2NhY2hlYXR0
cl9tYXNrKSB8IChjYWNoZWF0dHIgPDwgUEdDX2NhY2hlYXR0cl9iYXNlKTsK
LSAgICAgICAgfSB3aGlsZSAoICh5ID0gY21weGNoZygmcGFnZS0+Y291bnRf
aW5mbywgeCwgbngpKSAhPSB4ICk7Ci0KLSAgICAgICAgZXJyID0gdXBkYXRl
X3hlbl9tYXBwaW5ncyhtZm4sIGNhY2hlYXR0cik7Ci0gICAgICAgIGlmICgg
dW5saWtlbHkoZXJyKSApCi0gICAgICAgIHsKLSAgICAgICAgICAgIGNhY2hl
YXR0ciA9IHkgJiBQR0NfY2FjaGVhdHRyX21hc2s7Ci0gICAgICAgICAgICBk
byB7Ci0gICAgICAgICAgICAgICAgeCAgPSB5OwotICAgICAgICAgICAgICAg
IG54ID0gKHggJiB+UEdDX2NhY2hlYXR0cl9tYXNrKSB8IGNhY2hlYXR0cjsK
LSAgICAgICAgICAgIH0gd2hpbGUgKCAoeSA9IGNtcHhjaGcoJnBhZ2UtPmNv
dW50X2luZm8sIHgsIG54KSkgIT0geCApOwotCi0gICAgICAgICAgICBpZiAo
IHdyaXRlICkKLSAgICAgICAgICAgICAgICBwdXRfcGFnZV90eXBlKHBhZ2Up
OwotICAgICAgICAgICAgcHV0X3BhZ2UocGFnZSk7Ci0KLSAgICAgICAgICAg
IGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLCAiRXJyb3IgdXBkYXRpbmcgbWFw
cGluZ3MgZm9yIG1mbiAlIiBQUklfbWZuCi0gICAgICAgICAgICAgICAgICAg
ICAiIChwZm4gJSIgUFJJX3BmbiAiLCBmcm9tIEwxIGVudHJ5ICUiIFBSSXB0
ZSAiKSBmb3IgZCVkXG4iLAotICAgICAgICAgICAgICAgICAgICAgbWZuLCBn
ZXRfZ3Bmbl9mcm9tX21mbihtZm4pLAotICAgICAgICAgICAgICAgICAgICAg
bDFlX2dldF9pbnRwdGUobDFlKSwgbDFlX293bmVyLT5kb21haW5faWQpOwot
ICAgICAgICAgICAgcmV0dXJuIGVycjsKLSAgICAgICAgfQorICAgICAgICBp
ZiAoIHdyaXRlICkKKyAgICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFnZSk7
CisgICAgICAgIHB1dF9wYWdlKHBhZ2UpOworICAgICAgICBnZHByaW50ayhY
RU5MT0dfV0FSTklORywKKyAgICAgICAgICAgICAgICAgIkF0dGVtcHQgdG8g
Y2hhbmdlIGNhY2hlIGF0dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFnZVxuIik7
CisgICAgICAgIHJldHVybiAtRUFDQ0VTOwogICAgIH0KIAogICAgIHJldHVy
biAwOwpAQCAtMjQ1MywyNSArMjQwMiwxMCBAQCBzdGF0aWMgaW50IG1vZF9s
NF9lbnRyeShsNF9wZ2VudHJ5X3QgKnBsNGUsCiAgKi8KIHN0YXRpYyBpbnQg
Y2xlYW51cF9wYWdlX21hcHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2Up
CiB7Ci0gICAgdW5zaWduZWQgaW50IGNhY2hlYXR0ciA9Ci0gICAgICAgIChw
YWdlLT5jb3VudF9pbmZvICYgUEdDX2NhY2hlYXR0cl9tYXNrKSA+PiBQR0Nf
Y2FjaGVhdHRyX2Jhc2U7CiAgICAgaW50IHJjID0gMDsKICAgICB1bnNpZ25l
ZCBsb25nIG1mbiA9IG1mbl94KHBhZ2VfdG9fbWZuKHBhZ2UpKTsKIAogICAg
IC8qCi0gICAgICogSWYgd2UndmUgbW9kaWZpZWQgeGVuIG1hcHBpbmdzIGFz
IGEgcmVzdWx0IG9mIGd1ZXN0IGNhY2hlCi0gICAgICogYXR0cmlidXRlcywg
cmVzdG9yZSB0aGVtIHRvIHRoZSAibm9ybWFsIiBzdGF0ZS4KLSAgICAgKi8K
LSAgICBpZiAoIHVubGlrZWx5KGNhY2hlYXR0cikgKQotICAgIHsKLSAgICAg
ICAgcGFnZS0+Y291bnRfaW5mbyAmPSB+UEdDX2NhY2hlYXR0cl9tYXNrOwot
Ci0gICAgICAgIEJVR19PTihpc19zcGVjaWFsX3BhZ2UocGFnZSkpOwotCi0g
ICAgICAgIHJjID0gdXBkYXRlX3hlbl9tYXBwaW5ncyhtZm4sIDApOwotICAg
IH0KLQotICAgIC8qCiAgICAgICogSWYgdGhpcyBtYXkgYmUgaW4gYSBQViBk
b21haW4ncyBJT01NVSwgcmVtb3ZlIGl0LgogICAgICAqCiAgICAgICogTkIg
dGhhdCB3cml0YWJsZSB4ZW5oZWFwIHBhZ2VzIGhhdmUgdGhlaXIgdHlwZSBz
ZXQgYW5kIGNsZWFyZWQgYnkKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaAppbmRleCA3
ZTc0OTk2MDUzYjAuLjdhMjA5M2RhNTk3NyAxMDA2NDQKLS0tIGEveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tbS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYv
bW0uaApAQCAtNjQsMjUgKzY0LDIyIEBACiAgLyogU2V0IHdoZW4gaXMgdXNp
bmcgYSBwYWdlIGFzIGEgcGFnZSB0YWJsZSAqLwogI2RlZmluZSBfUEdDX3Bh
Z2VfdGFibGUgICBQR19zaGlmdCgzKQogI2RlZmluZSBQR0NfcGFnZV90YWJs
ZSAgICBQR19tYXNrKDEsIDMpCi0gLyogMy1iaXQgUEFUL1BDRC9QV1QgY2Fj
aGUtYXR0cmlidXRlIGhpbnQuICovCi0jZGVmaW5lIFBHQ19jYWNoZWF0dHJf
YmFzZSBQR19zaGlmdCg2KQotI2RlZmluZSBQR0NfY2FjaGVhdHRyX21hc2sg
UEdfbWFzayg3LCA2KQogIC8qIFBhZ2UgaXMgYnJva2VuPyAqLwotI2RlZmlu
ZSBfUEdDX2Jyb2tlbiAgICAgICBQR19zaGlmdCg3KQotI2RlZmluZSBQR0Nf
YnJva2VuICAgICAgICBQR19tYXNrKDEsIDcpCisjZGVmaW5lIF9QR0NfYnJv
a2VuICAgICAgIFBHX3NoaWZ0KDQpCisjZGVmaW5lIFBHQ19icm9rZW4gICAg
ICAgIFBHX21hc2soMSwgNCkKICAvKiBNdXR1YWxseS1leGNsdXNpdmUgcGFn
ZSBzdGF0ZXM6IHsgaW51c2UsIG9mZmxpbmluZywgb2ZmbGluZWQsIGZyZWUg
fS4gKi8KLSNkZWZpbmUgUEdDX3N0YXRlICAgICAgICAgUEdfbWFzaygzLCA5
KQotI2RlZmluZSBQR0Nfc3RhdGVfaW51c2UgICBQR19tYXNrKDAsIDkpCi0j
ZGVmaW5lIFBHQ19zdGF0ZV9vZmZsaW5pbmcgUEdfbWFzaygxLCA5KQotI2Rl
ZmluZSBQR0Nfc3RhdGVfb2ZmbGluZWQgUEdfbWFzaygyLCA5KQotI2RlZmlu
ZSBQR0Nfc3RhdGVfZnJlZSAgICBQR19tYXNrKDMsIDkpCisjZGVmaW5lIFBH
Q19zdGF0ZSAgICAgICAgICAgUEdfbWFzaygzLCA2KQorI2RlZmluZSBQR0Nf
c3RhdGVfaW51c2UgICAgIFBHX21hc2soMCwgNikKKyNkZWZpbmUgUEdDX3N0
YXRlX29mZmxpbmluZyBQR19tYXNrKDEsIDYpCisjZGVmaW5lIFBHQ19zdGF0
ZV9vZmZsaW5lZCAgUEdfbWFzaygyLCA2KQorI2RlZmluZSBQR0Nfc3RhdGVf
ZnJlZSAgICAgIFBHX21hc2soMywgNikKICNkZWZpbmUgcGFnZV9zdGF0ZV9p
cyhwZywgc3QpICgoKHBnKS0+Y291bnRfaW5mbyZQR0Nfc3RhdGUpID09IFBH
Q19zdGF0ZV8jI3N0KQogLyogUGFnZSBpcyBub3QgcmVmZXJlbmNlIGNvdW50
ZWQgKi8KLSNkZWZpbmUgX1BHQ19leHRyYSAgICAgICAgUEdfc2hpZnQoMTAp
Ci0jZGVmaW5lIFBHQ19leHRyYSAgICAgICAgIFBHX21hc2soMSwgMTApCisj
ZGVmaW5lIF9QR0NfZXh0cmEgICAgICAgIFBHX3NoaWZ0KDcpCisjZGVmaW5l
IFBHQ19leHRyYSAgICAgICAgIFBHX21hc2soMSwgNykKIAogLyogQ291bnQg
b2YgcmVmZXJlbmNlcyB0byB0aGlzIGZyYW1lLiAqLwotI2RlZmluZSBQR0Nf
Y291bnRfd2lkdGggICBQR19zaGlmdCgxMCkKKyNkZWZpbmUgUEdDX2NvdW50
X3dpZHRoICAgUEdfc2hpZnQoNykKICNkZWZpbmUgUEdDX2NvdW50X21hc2sg
ICAgKCgxVUw8PFBHQ19jb3VudF93aWR0aCktMSkKIAogLyoK

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.14-3.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.14-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBTcGxpdCBjYWNoZV9mbHVzaCgpIG91dCBvZiBj
YWNoZV93cml0ZWJhY2soKQoKU3Vic2VxdWVudCBjaGFuZ2VzIHdpbGwgd2Fu
dCBhIGZ1bGx5IGZsdXNoaW5nIHZlcnNpb24uCgpVc2UgdGhlIG5ldyBoZWxw
ZXIgcmF0aGVyIHRoYW4gb3BlbmNvZGluZyBpdCBpbiBmbHVzaF9hcmVhX2xv
Y2FsKCkuICBUaGlzCnJlc29sdmVzIGFuIG91dHN0YW5kaW5nIGlzc3VlIHdo
ZXJlIHRoZSBjb25kaXRpb25hbCBzZmVuY2UgaXMgb24gdGhlIHdyb25nCnNp
ZGUgb2YgdGhlIGNsZmx1c2hvcHQgbG9vcC4gIGNsZmx1c2hvcHQgaXMgb3Jk
ZXJlZCB3aXRoIHJlc3BlY3QgdG8gb2xkZXIKc3RvcmVzLCBub3QgdG8geW91
bmdlciBzdG9yZXMuCgpSZW5hbWUgZ250dGFiX2NhY2hlX2ZsdXNoKCkncyBo
ZWxwZXIgdG8gYXZvaWQgY29sbGlkaW5nIGluIG5hbWUuCmdyYW50X3RhYmxl
LmMgY2FuIHNlZSB0aGUgcHJvdG90eXBlIGZyb20gY2FjaGUuaCBzbyB0aGUg
YnVpbGQgZmFpbHMKb3RoZXJ3aXNlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00
MDIuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKWGVuIDQuMTYgYW5kIGVhcmxpZXI6CiAqIEFs
c28gYmFja3BvcnQgaGFsZiBvZiBjL3MgMzMzMDAxM2U2NzM5NiAiVlQtZCAv
IHg4NjogcmUtYXJyYW5nZSBjYWNoZQogICBzeW5jaW5nIiB0byBzcGxpdCBj
YWNoZV93cml0ZWJhY2soKSBvdXQgb2YgdGhlIElPTU1VIGxvZ2ljLCBidXQg
d2l0aG91dCB0aGUKICAgYXNzb2NpYXRlZCBob29rcyBjaGFuZ2VzLgoKZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jIGIveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMKaW5kZXggMjU3OThkZjUwZjU0Li4wYzkxMmI4NjY5
ZjggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCkBAIC0yMzQsNyArMjM0LDcgQEAg
dW5zaWduZWQgaW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEs
IHVuc2lnbmVkIGludCBmbGFncykKICAgICBpZiAoIGZsYWdzICYgRkxVU0hf
Q0FDSEUgKQogICAgIHsKICAgICAgICAgY29uc3Qgc3RydWN0IGNwdWluZm9f
eDg2ICpjID0gJmN1cnJlbnRfY3B1X2RhdGE7Ci0gICAgICAgIHVuc2lnbmVk
IGxvbmcgaSwgc3ogPSAwOworICAgICAgICB1bnNpZ25lZCBsb25nIHN6ID0g
MDsKIAogICAgICAgICBpZiAoIG9yZGVyIDwgKEJJVFNfUEVSX0xPTkcgLSBQ
QUdFX1NISUZUKSApCiAgICAgICAgICAgICBzeiA9IDFVTCA8PCAob3JkZXIg
KyBQQUdFX1NISUZUKTsKQEAgLTI0NCwxMyArMjQ0LDcgQEAgdW5zaWduZWQg
aW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVk
IGludCBmbGFncykKICAgICAgICAgICAgICBjLT54ODZfY2xmbHVzaF9zaXpl
ICYmIGMtPng4Nl9jYWNoZV9zaXplICYmIHN6ICYmCiAgICAgICAgICAgICAg
KChzeiA+PiAxMCkgPCBjLT54ODZfY2FjaGVfc2l6ZSkgKQogICAgICAgICB7
Ci0gICAgICAgICAgICBhbHRlcm5hdGl2ZSgiIiwgInNmZW5jZSIsIFg4Nl9G
RUFUVVJFX0NMRkxVU0hPUFQpOwotICAgICAgICAgICAgZm9yICggaSA9IDA7
IGkgPCBzejsgaSArPSBjLT54ODZfY2xmbHVzaF9zaXplICkKLSAgICAgICAg
ICAgICAgICBhbHRlcm5hdGl2ZV9pbnB1dCgiLmJ5dGUgIiBfX3N0cmluZ2lm
eShOT1BfRFNfUFJFRklYKSAiOyIKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAiIGNsZmx1c2ggJTAiLAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJkYXRhMTYgY2xmbHVzaCAlMCIsICAgICAgLyog
Y2xmbHVzaG9wdCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIm0iICgoKGNvbnN0IGNoYXIgKil2YSlbaV0p
KTsKKyAgICAgICAgICAgIGNhY2hlX2ZsdXNoKHZhLCBzeik7CiAgICAgICAg
ICAgICBmbGFncyAmPSB+RkxVU0hfQ0FDSEU7CiAgICAgICAgIH0KICAgICAg
ICAgZWxzZQpAQCAtMjY1LDYgKzI1OSw4MCBAQCB1bnNpZ25lZCBpbnQgZmx1
c2hfYXJlYV9sb2NhbChjb25zdCB2b2lkICp2YSwgdW5zaWduZWQgaW50IGZs
YWdzKQogICAgIHJldHVybiBmbGFnczsKIH0KIAordm9pZCBjYWNoZV9mbHVz
aChjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKK3sKKyAg
ICAvKgorICAgICAqIFRoaXMgZnVuY3Rpb24gbWF5IGJlIGNhbGxlZCBiZWZv
cmUgY3VycmVudF9jcHVfZGF0YSBpcyBlc3RhYmxpc2hlZC4KKyAgICAgKiBI
ZW5jZSBhIGZhbGxiYWNrIGlzIG5lZWRlZCB0byBwcmV2ZW50IHRoZSBsb29w
IGJlbG93IGJlY29taW5nIGluZmluaXRlLgorICAgICAqLworICAgIHVuc2ln
bmVkIGludCBjbGZsdXNoX3NpemUgPSBjdXJyZW50X2NwdV9kYXRhLng4Nl9j
bGZsdXNoX3NpemUgPzogMTY7CisgICAgY29uc3Qgdm9pZCAqZW5kID0gYWRk
ciArIHNpemU7CisKKyAgICBhZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIg
JiAoY2xmbHVzaF9zaXplIC0gMSk7CisgICAgZm9yICggOyBhZGRyIDwgZW5k
OyBhZGRyICs9IGNsZmx1c2hfc2l6ZSApCisgICAgeworICAgICAgICAvKgor
ICAgICAgICAgKiBOb3RlIHJlZ2FyZGluZyB0aGUgImRzIiBwcmVmaXggdXNl
OiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1c2gKKyAgICAgICAgICogKyBw
cmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3AsIGFuZCBoZW5jZSB0aGUgcHJl
Zml4IGlzIGFkZGVkIGluc3RlYWQKKyAgICAgICAgICogb2YgbGV0dGluZyB0
aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZpbGwgdGhlIGdhcCBieSBhcHBl
bmRpbmcgbm9wcy4KKyAgICAgICAgICovCisgICAgICAgIGFsdGVybmF0aXZl
X2lvKCJkczsgY2xmbHVzaCAlW3BdIiwKKyAgICAgICAgICAgICAgICAgICAg
ICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAvKiBjbGZsdXNob3B0ICovCisg
ICAgICAgICAgICAgICAgICAgICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQs
CisgICAgICAgICAgICAgICAgICAgICAgIC8qIG5vIG91dHB1dHMgKi8sCisg
ICAgICAgICAgICAgICAgICAgICAgIFtwXSAibSIgKCooY29uc3QgY2hhciAq
KShhZGRyKSkpOworICAgIH0KKworICAgIGFsdGVybmF0aXZlKCIiLCAic2Zl
bmNlIiwgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCk7Cit9CisKK3ZvaWQgY2Fj
aGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBz
aXplKQoreworICAgIHVuc2lnbmVkIGludCBjbGZsdXNoX3NpemU7CisgICAg
Y29uc3Qgdm9pZCAqZW5kID0gYWRkciArIHNpemU7CisKKyAgICAvKiBGYWxs
IGJhY2sgdG8gQ0xGTFVTSHssT1BUfSB3aGVuIENMV0IgaXNuJ3QgYXZhaWxh
YmxlLiAqLworICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9D
TFdCKSApCisgICAgICAgIHJldHVybiBjYWNoZV9mbHVzaChhZGRyLCBzaXpl
KTsKKworICAgIC8qCisgICAgICogVGhpcyBmdW5jdGlvbiBtYXkgYmUgY2Fs
bGVkIGJlZm9yZSBjdXJyZW50X2NwdV9kYXRhIGlzIGVzdGFibGlzaGVkLgor
ICAgICAqIEhlbmNlIGEgZmFsbGJhY2sgaXMgbmVlZGVkIHRvIHByZXZlbnQg
dGhlIGxvb3AgYmVsb3cgYmVjb21pbmcgaW5maW5pdGUuCisgICAgICovCisg
ICAgY2xmbHVzaF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVz
aF9zaXplID86IDE2OworICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRk
ciAmIChjbGZsdXNoX3NpemUgLSAxKTsKKyAgICBmb3IgKCA7IGFkZHIgPCBl
bmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKKyAgICB7CisvKgorICogVGhl
IGFyZ3VtZW50cyB0byBhIG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJv
Y2Vzc29yIGRpcmVjdGl2ZXMuIERvaW5nIHNvCisgKiByZXN1bHRzIGluIHVu
ZGVmaW5lZCBiZWhhdmlvciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBk
ZWZpbmVzIGhlcmUgaW4KKyAqIG9yZGVyIHRvIGF2b2lkIGl0LgorICovCisj
aWYgZGVmaW5lZChIQVZFX0FTX0NMV0IpCisjIGRlZmluZSBDTFdCX0VOQ09E
SU5HICJjbHdiICVbcF0iCisjZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVP
UFQpCisjIGRlZmluZSBDTFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQg
JVtwXSIgLyogY2x3YiAqLworI2Vsc2UKKyMgZGVmaW5lIENMV0JfRU5DT0RJ
TkcgIi5ieXRlIDB4NjYsIDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUl
cmF4KSAqLworI2VuZGlmCisKKyNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBb
cF0gIm0iICgqKGNvbnN0IGNoYXIgKikoYWRkcikpCisjaWYgZGVmaW5lZChI
QVZFX0FTX0NMV0IpIHx8IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKKyMg
ZGVmaW5lIElOUFVUIEJBU0VfSU5QVVQKKyNlbHNlCisjIGRlZmluZSBJTlBV
VChhZGRyKSAiYSIgKGFkZHIpLCBCQVNFX0lOUFVUKGFkZHIpCisjZW5kaWYK
KworICAgICAgICBhc20gdm9sYXRpbGUgKENMV0JfRU5DT0RJTkcgOjogSU5Q
VVQoYWRkcikpOworCisjdW5kZWYgSU5QVVQKKyN1bmRlZiBCQVNFX0lOUFVU
CisjdW5kZWYgQ0xXQl9FTkNPRElORworICAgIH0KKworICAgIGFzbSB2b2xh
dGlsZSAoInNmZW5jZSIgOjo6ICJtZW1vcnkiKTsKK30KKwogdW5zaWduZWQg
aW50IGd1ZXN0X2ZsdXNoX3RsYl9mbGFncyhjb25zdCBzdHJ1Y3QgZG9tYWlu
ICpkKQogewogICAgIGJvb2wgc2hhZG93ID0gcGFnaW5nX21vZGVfc2hhZG93
KGQpOwpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jIGIv
eGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCmluZGV4IDcxZWU1YzZlYzUxMS4u
MzQ0OThkNDY1Mjg1IDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL2dyYW50X3Rh
YmxlLmMKKysrIGIveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCkBAIC0zNDQw
LDcgKzM0NDAsNyBAQCBnbnR0YWJfc3dhcF9ncmFudF9yZWYoWEVOX0dVRVNU
X0hBTkRMRV9QQVJBTShnbnR0YWJfc3dhcF9ncmFudF9yZWZfdCkgdW9wLAog
ICAgIHJldHVybiAwOwogfQogCi1zdGF0aWMgaW50IGNhY2hlX2ZsdXNoKGNv
bnN0IGdudHRhYl9jYWNoZV9mbHVzaF90ICpjZmx1c2gsIGdyYW50X3JlZl90
ICpjdXJfcmVmKQorc3RhdGljIGludCBfY2FjaGVfZmx1c2goY29uc3QgZ250
dGFiX2NhY2hlX2ZsdXNoX3QgKmNmbHVzaCwgZ3JhbnRfcmVmX3QgKmN1cl9y
ZWYpCiB7CiAgICAgc3RydWN0IGRvbWFpbiAqZCwgKm93bmVyOwogICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBhZ2U7CkBAIC0zNTM0LDcgKzM1MzQsNyBAQCBn
bnR0YWJfY2FjaGVfZmx1c2goWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShnbnR0
YWJfY2FjaGVfZmx1c2hfdCkgdW9wLAogICAgICAgICAgICAgcmV0dXJuIC1F
RkFVTFQ7CiAgICAgICAgIGZvciAoIDsgOyApCiAgICAgICAgIHsKLSAgICAg
ICAgICAgIGludCByZXQgPSBjYWNoZV9mbHVzaCgmb3AsIGN1cl9yZWYpOwor
ICAgICAgICAgICAgaW50IHJldCA9IF9jYWNoZV9mbHVzaCgmb3AsIGN1cl9y
ZWYpOwogCiAgICAgICAgICAgICBpZiAoIHJldCA8IDAgKQogICAgICAgICAg
ICAgICAgIHJldHVybiByZXQ7CmRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvZXh0ZXJuLmggYi94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKaW5kZXggZmJlOTUxYjJmYWQwLi4zZGVmZTk2
NzdmMDYgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTc3LDcgKzc3LDYgQEAgaW50IF9fbXVzdF9jaGVjayBx
aW52YWxfZGV2aWNlX2lvdGxiX3N5bmMoc3RydWN0IHZ0ZF9pb21tdSAqaW9t
bXUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBzdHJ1Y3QgcGNpX2RldiAqcGRldiwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHUxNiBkaWQsIHUxNiBzaXplLCB1NjQg
YWRkcik7CiAKLXVuc2lnbmVkIGludCBnZXRfY2FjaGVfbGluZV9zaXplKHZv
aWQpOwogdm9pZCBmbHVzaF9hbGxfY2FjaGUodm9pZCk7CiAKIHVpbnQ2NF90
IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQgbG9uZyBucGFnZXMsIG5v
ZGVpZF90IG5vZGUpOwpkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvdnRkL2lvbW11LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvaW9tbXUuYwppbmRleCBjYzA4OGNkOWZmMjAuLjNiZDE3YTRhMjRhMiAx
MDA2NDQKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11
LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMK
QEAgLTMxLDYgKzMxLDcgQEAKICNpbmNsdWRlIDx4ZW4vcGNpLmg+CiAjaW5j
bHVkZSA8eGVuL3BjaV9yZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRs
ZXIuaD4KKyNpbmNsdWRlIDxhc20vY2FjaGUuaD4KICNpbmNsdWRlIDxhc20v
bXNpLmg+CiAjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRlIDxhc20v
aXJxLmg+CkBAIC0yMDcsNTMgKzIwOCwxMCBAQCBzdGF0aWMgaW50IGlvbW11
c19pbmNvaGVyZW50OwogCiBzdGF0aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0
IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewotICAgIHN0YXRp
YyB1bnNpZ25lZCBsb25nIGNsZmx1c2hfc2l6ZSA9IDA7Ci0gICAgY29uc3Qg
dm9pZCAqZW5kID0gYWRkciArIHNpemU7Ci0KICAgICBpZiAoICFpb21tdXNf
aW5jb2hlcmVudCApCiAgICAgICAgIHJldHVybjsKIAotICAgIGlmICggY2xm
bHVzaF9zaXplID09IDAgKQotICAgICAgICBjbGZsdXNoX3NpemUgPSBnZXRf
Y2FjaGVfbGluZV9zaXplKCk7Ci0KLSAgICBhZGRyIC09ICh1bnNpZ25lZCBs
b25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7Ci0gICAgZm9yICggOyBh
ZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6ZSApCi0vKgotICogVGhl
IGFyZ3VtZW50cyB0byBhIG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJv
Y2Vzc29yIGRpcmVjdGl2ZXMuIERvaW5nIHNvCi0gKiByZXN1bHRzIGluIHVu
ZGVmaW5lZCBiZWhhdmlvciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBk
ZWZpbmVzIGhlcmUgaW4KLSAqIG9yZGVyIHRvIGF2b2lkIGl0LgotICovCi0j
aWYgZGVmaW5lZChIQVZFX0FTX0NMV0IpCi0jIGRlZmluZSBDTFdCX0VOQ09E
SU5HICJjbHdiICVbcF0iCi0jZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVP
UFQpCi0jIGRlZmluZSBDTFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQg
JVtwXSIgLyogY2x3YiAqLwotI2Vsc2UKLSMgZGVmaW5lIENMV0JfRU5DT0RJ
TkcgIi5ieXRlIDB4NjYsIDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUl
cmF4KSAqLwotI2VuZGlmCi0KLSNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBb
cF0gIm0iICgqKGNvbnN0IGNoYXIgKikoYWRkcikpCi0jaWYgZGVmaW5lZChI
QVZFX0FTX0NMV0IpIHx8IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKLSMg
ZGVmaW5lIElOUFVUIEJBU0VfSU5QVVQKLSNlbHNlCi0jIGRlZmluZSBJTlBV
VChhZGRyKSAiYSIgKGFkZHIpLCBCQVNFX0lOUFVUKGFkZHIpCi0jZW5kaWYK
LSAgICAgICAgLyoKLSAgICAgICAgICogTm90ZSByZWdhcmRpbmcgdGhlIHVz
ZSBvZiBOT1BfRFNfUFJFRklYOiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1
c2gKLSAgICAgICAgICogKyBwcmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3As
IGFuZCBoZW5jZSB0aGUgcHJlZml4IGlzIGFkZGVkIGluc3RlYWQKLSAgICAg
ICAgICogb2YgbGV0dGluZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZp
bGwgdGhlIGdhcCBieSBhcHBlbmRpbmcgbm9wcy4KLSAgICAgICAgICovCi0g
ICAgICAgIGFsdGVybmF0aXZlX2lvXzIoIi5ieXRlICIgX19zdHJpbmdpZnko
Tk9QX0RTX1BSRUZJWCkgIjsgY2xmbHVzaCAlW3BdIiwKLSAgICAgICAgICAg
ICAgICAgICAgICAgICAiZGF0YTE2IGNsZmx1c2ggJVtwXSIsIC8qIGNsZmx1
c2hvcHQgKi8KLSAgICAgICAgICAgICAgICAgICAgICAgICBYODZfRkVBVFVS
RV9DTEZMVVNIT1BULAotICAgICAgICAgICAgICAgICAgICAgICAgIENMV0Jf
RU5DT0RJTkcsCi0gICAgICAgICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRV
UkVfQ0xXQiwgLyogbm8gb3V0cHV0cyAqLywKLSAgICAgICAgICAgICAgICAg
ICAgICAgICBJTlBVVChhZGRyKSk7Ci0jdW5kZWYgSU5QVVQKLSN1bmRlZiBC
QVNFX0lOUFVUCi0jdW5kZWYgQ0xXQl9FTkNPRElORwotCi0gICAgYWx0ZXJu
YXRpdmVfMigiIiwgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQs
Ci0gICAgICAgICAgICAgICAgICAgICAgInNmZW5jZSIsIFg4Nl9GRUFUVVJF
X0NMV0IpOworICAgIGNhY2hlX3dyaXRlYmFjayhhZGRyLCBzaXplKTsKIH0K
IAogLyogQWxsb2NhdGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5l
IGFkZHJlc3MgKi8KZGlmZiAtLWdpdCBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL3Z0ZC94ODYvdnRkLmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQveDg2L3Z0ZC5jCmluZGV4IGJiZTM1OGRjMzZjNy4uYmIwOGE1NWUyOTRh
IDEwMDY0NAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2
L3Z0ZC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYv
dnRkLmMKQEAgLTQ3LDExICs0Nyw2IEBAIHZvaWQgdW5tYXBfdnRkX2RvbWFp
bl9wYWdlKHZvaWQgKnZhKQogICAgIHVubWFwX2RvbWFpbl9wYWdlKHZhKTsK
IH0KIAotdW5zaWduZWQgaW50IGdldF9jYWNoZV9saW5lX3NpemUodm9pZCkK
LXsKLSAgICByZXR1cm4gKChjcHVpZF9lYngoMSkgPj4gOCkgJiAweGZmKSAq
IDg7Ci19Ci0KIHZvaWQgZmx1c2hfYWxsX2NhY2hlKCkKIHsKICAgICB3Ymlu
dmQoKTsKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvY2FjaGUu
aCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY2FjaGUuaAppbmRleCAxZjcxNzNk
OGM3MmMuLmU0NzcwZWZiMjJiOSAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUv
YXNtLXg4Ni9jYWNoZS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY2Fj
aGUuaApAQCAtMTEsNCArMTEsMTEgQEAKIAogI2RlZmluZSBfX3JlYWRfbW9z
dGx5IF9fc2VjdGlvbigiLmRhdGEucmVhZF9tb3N0bHkiKQogCisjaWZuZGVm
IF9fQVNTRU1CTFlfXworCit2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKK3ZvaWQgY2FjaGVfd3JpdGVi
YWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKKwor
I2VuZGlmCisKICNlbmRpZgo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.14-4.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.14-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L2FtZDogV29yayBhcm91bmQgQ0xGTFVTSCBvcmRl
cmluZyBvbiBvbGRlciBwYXJ0cwoKT24gcHJlLUNMRkxVU0hPUFQgQU1EIENQ
VXMsIENMRkxVU0ggaXMgd2Vha2VseSBvcmRlcmVkIHdpdGggZXZlcnl0aGlu
ZywKaW5jbHVkaW5nIHJlYWRzIGFuZCB3cml0ZXMgdG8gdGhlIGFkZHJlc3Ms
IGFuZCBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KClRoaXMgY3JlYXRl
cyBhIG11bHRpdHVkZSBvZiBwcm9ibGVtYXRpYyBjb3JuZXIgY2FzZXMsIGxh
aWQgb3V0IGluIHRoZSBtYW51YWwuCkFycmFuZ2UgdG8gdXNlIE1GRU5DRSBv
biBib3RoIHNpZGVzIG9mIHRoZSBDTEZMVVNIIHRvIGZvcmNlIHByb3BlciBv
cmRlcmluZy4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9m
Zi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCAyZWY1OWUyMmRjMzEuLjE0MmYz
NGFmNWY3MCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC03ODcsNiArNzg3LDE0
IEBAIHN0YXRpYyB2b2lkIGluaXRfYW1kKHN0cnVjdCBjcHVpbmZvX3g4NiAq
YykKIAlpZiAoIWNwdV9oYXNfbGZlbmNlX2Rpc3BhdGNoKQogCQlfX3NldF9i
aXQoWDg2X0ZFQVRVUkVfTUZFTkNFX1JEVFNDLCBjLT54ODZfY2FwYWJpbGl0
eSk7CiAKKwkvKgorCSAqIE9uIHByZS1DTEZMVVNIT1BUIEFNRCBDUFVzLCBD
TEZMVVNIIGlzIHdlYWtseSBvcmRlcmVkIHdpdGgKKwkgKiBldmVyeXRoaW5n
LCBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRlcyB0byBhZGRyZXNzLCBhbmQK
KwkgKiBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KKwkgKi8KKwlpZiAo
IWNwdV9oYXNfY2xmbHVzaG9wdCkKKwkJc2V0dXBfZm9yY2VfY3B1X2NhcChY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFKTsKKwogCXN3aXRjaChjLT54ODYpCiAJ
ewogCWNhc2UgMHhmIC4uLiAweDExOgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMgYi94ZW4vYXJjaC94ODYvZmx1c2h0bGIuYwppbmRl
eCAwYzkxMmI4NjY5ZjguLmRjYmI0MDY0MDEyZSAxMDA2NDQKLS0tIGEveGVu
L2FyY2gveDg2L2ZsdXNodGxiLmMKKysrIGIveGVuL2FyY2gveDg2L2ZsdXNo
dGxiLmMKQEAgLTI1OSw2ICsyNTksMTMgQEAgdW5zaWduZWQgaW50IGZsdXNo
X2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFn
cykKICAgICByZXR1cm4gZmxhZ3M7CiB9CiAKKy8qCisgKiBPbiBwcmUtQ0xG
TFVTSE9QVCBBTUQgQ1BVcywgQ0xGTFVTSCBpcyB3ZWFrbHkgb3JkZXJlZCB3
aXRoIGV2ZXJ5dGhpbmcsCisgKiBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRl
cyB0byBhZGRyZXNzLCBhbmQgTEZFTkNFL1NGRU5DRSBpbnN0cnVjdGlvbnMu
CisgKgorICogVGhpcyBmdW5jdGlvbiBvbmx5IHdvcmtzIHNhZmVseSBhZnRl
ciBhbHRlcm5hdGl2ZXMgaGF2ZSBydW4uICBMdWNraWx5LCBhdAorICogdGhl
IHRpbWUgb2Ygd3JpdGluZywgd2UgZG9uJ3QgZmx1c2ggdGhlIGNhY2hlcyB0
aGF0IGVhcmx5LgorICovCiB2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIC8qCkBAIC0yNjgs
NiArMjc1LDggQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICB1bnNpZ25lZCBpbnQgY2xmbHVz
aF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVzaF9zaXplID86
IDE2OwogICAgIGNvbnN0IHZvaWQgKmVuZCA9IGFkZHIgKyBzaXplOwogCisg
ICAgYWx0ZXJuYXRpdmUoIiIsICJtZmVuY2UiLCBYODZfQlVHX0NMRkxVU0hf
TUZFTkNFKTsKKwogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRkciAm
IChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFkZHIgPCBlbmQ7
IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKICAgICB7CkBAIC0yODMsNyArMjky
LDkgQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRyLCB1bnNp
Z25lZCBpbnQgc2l6ZSkKICAgICAgICAgICAgICAgICAgICAgICAgW3BdICJt
IiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CiAgICAgfQogCi0gICAgYWx0
ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BU
KTsKKyAgICBhbHRlcm5hdGl2ZV8yKCIiLAorICAgICAgICAgICAgICAgICAg
InNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisgICAgICAgICAg
ICAgICAgICAibWZlbmNlIiwgWDg2X0JVR19DTEZMVVNIX01GRU5DRSk7CiB9
CiAKIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKQpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9jcHVmZWF0dXJlcy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVm
ZWF0dXJlcy5oCmluZGV4IGZlMmY5NzM1NGZiNi4uMDlmNjE5NDU5YmM3IDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NwdWZlYXR1cmVzLmgK
KysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCkBAIC00
Niw2ICs0Niw3IEBAIFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAg
ICBYODZfU1lOVEgoMjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJh
bmNoCiAjZGVmaW5lIFg4Nl9CVUcoeCkgKChGU0NBUElOVFMgKyBYODZfTlJf
U1lOVEgpICogMzIgKyAoeCkpCiAKICNkZWZpbmUgWDg2X0JVR19GUFVfUFRS
UyAgICAgICAgICBYODZfQlVHKCAwKSAvKiAoRilYe1NBVkUsUlNUT1J9IGRv
ZXNuJ3Qgc2F2ZS9yZXN0b3JlIEZPUC9GSVAvRkRQLiAqLworI2RlZmluZSBY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFICAgIFg4Nl9CVUcoIDIpIC8qIE1GRU5D
RSBuZWVkZWQgdG8gc2VyaWFsaXNlIENMRkxVU0ggKi8KIAogLyogVG90YWwg
bnVtYmVyIG9mIGNhcGFiaWxpdHkgd29yZHMsIGluYyBzeW50aCBhbmQgYnVn
IHdvcmRzLiAqLwogI2RlZmluZSBOQ0FQSU5UUyAoRlNDQVBJTlRTICsgWDg2
X05SX1NZTlRIICsgWDg2X05SX0JVRykgLyogTiAzMi1iaXQgd29yZHMgd29y
dGggb2YgaW5mbyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.14-5.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.14-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBUcmFjayBhbmQgZmx1c2ggbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mIFJBTQoKVGhlcmUgYXJlIGxlZ2l0aW1hdGUgdXNl
cyBvZiBXQyBtYXBwaW5ncyBvZiBSQU0sIGUuZy4gZm9yIERNQSBidWZmZXJz
IHdpdGgKZGV2aWNlcyB0aGF0IG1ha2Ugbm9uLWNvaGVyZW50IHdyaXRlcy4g
IFRoZSBMaW51eCBzb3VuZCBzdWJzeXN0ZW0gbWFrZXMKZXh0ZW5zaXZlIHVz
ZSBvZiB0aGlzIHRlY2huaXF1ZS4KCkZvciBzdWNoIHVzZWNhc2VzLCB0aGUg
Z3Vlc3QncyBETUEgYnVmZmVyIGlzIG1hcHBlZCBhbmQgY29uc2lzdGVudGx5
IHVzZWQgYXMKV0MsIGFuZCBYZW4gZG9lc24ndCBpbnRlcmFjdCB3aXRoIHRo
ZSBidWZmZXIuCgpIb3dldmVyLCBhIG1pc2NoZXZpb3VzIGd1ZXN0IGNhbiB1
c2UgV0MgbWFwcGluZ3MgdG8gZGVsaWJlcmF0ZWx5IGNyZWF0ZQpub24tY29o
ZXJlbmN5IGJldHdlZW4gdGhlIGNhY2hlIGFuZCBSQU0sIGFuZCB1c2UgdGhp
cyB0byB0cmljayBYZW4gaW50bwp2YWxpZGF0aW5nIGEgcGFnZXRhYmxlIHdo
aWNoIGlzbid0IGFjdHVhbGx5IHNhZmUuCgpBbGxvY2F0ZSBhIG5ldyBQR1Rf
bm9uX2NvaGVyZW50IHRvIHRyYWNrIHRoZSBub24tY29oZXJlbmN5IG9mIG1h
cHBpbmdzLiAgU2V0Cml0IHdoZW5ldmVyIGEgbm9uLWNvaGVyZW50IHdyaXRl
YWJsZSBtYXBwaW5nIGlzIGNyZWF0ZWQuICBJZiB0aGUgcGFnZSBpcyB1c2Vk
CmFzIGFueXRoaW5nIG90aGVyIHRoYW4gUEdUX3dyaXRhYmxlX3BhZ2UsIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlCnZhbGlkYXRpb24uICBBbHNvIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlIHRoZSBwYWdlIGlzIHJldHVybmVk
IHRvIHRoZSBoZWFwLgoKVGhpcyBpcyBDVkUtMjAyMi0yNjM2NCwgcGFydCBv
ZiBYU0EtNDAyLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3
LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDdk
M2QxODZlZGJkNS4uNGQ2YjA0YzFjZjMxIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtOTk5LDYg
Kzk5OSwxNSBAQCBnZXRfcGFnZV9mcm9tX2wxZSgKICAgICAgICAgcmV0dXJu
IC1FQUNDRVM7CiAgICAgfQogCisgICAgLyoKKyAgICAgKiBUcmFjayB3cml0
ZWFibGUgbm9uLWNvaGVyZW50IG1hcHBpbmdzIHRvIFJBTSBwYWdlcywgdG8g
dHJpZ2dlciBhIGNhY2hlCisgICAgICogZmx1c2ggbGF0ZXIgaWYgdGhlIHRh
cmdldCBpcyB1c2VkIGFzIGFueXRoaW5nIGJ1dCBhIFBHVF93cml0ZWFibGUg
cGFnZS4KKyAgICAgKiBXZSBjYXJlIGFib3V0IGFsbCB3cml0ZWFibGUgbWFw
cGluZ3MsIGluY2x1ZGluZyBmb3JlaWduIG1hcHBpbmdzLgorICAgICAqLwor
ICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9YRU5fU0VMRlNO
T09QKSAmJgorICAgICAgICAgKGwxZiAmIChQQUdFX0NBQ0hFX0FUVFJTIHwg
X1BBR0VfUlcpKSA9PSAoX1BBR0VfV0MgfCBfUEFHRV9SVykgKQorICAgICAg
ICBzZXRfYml0KF9QR1Rfbm9uX2NvaGVyZW50LCAmcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pOworCiAgICAgcmV0dXJuIDA7CiAKICBjb3VsZF9ub3RfcGlu
OgpAQCAtMjQ0NCw2ICsyNDUzLDE5IEBAIHN0YXRpYyBpbnQgY2xlYW51cF9w
YWdlX21hcHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UpCiAgICAgICAg
IH0KICAgICB9CiAKKyAgICAvKgorICAgICAqIEZsdXNoIHRoZSBjYWNoZSBp
ZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IHdyaXRlYWJs
ZQorICAgICAqIG1hcHBpbmdzIG9mIHRoaXMgcGFnZS4gIFRoaXMgZm9yY2Vz
IHRoZSBwYWdlIHRvIGJlIGNvaGVyZW50IGJlZm9yZSBpdAorICAgICAqIGlz
IGZyZWVkIGJhY2sgdG8gdGhlIGhlYXAuCisgICAgICovCisgICAgaWYgKCBf
X3Rlc3RfYW5kX2NsZWFyX2JpdChfUEdUX25vbl9jb2hlcmVudCwgJnBhZ2Ut
PnUuaW51c2UudHlwZV9pbmZvKSApCisgICAgeworICAgICAgICB2b2lkICph
ZGRyID0gX19tYXBfZG9tYWluX3BhZ2UocGFnZSk7CisKKyAgICAgICAgY2Fj
aGVfZmx1c2goYWRkciwgUEFHRV9TSVpFKTsKKyAgICAgICAgdW5tYXBfZG9t
YWluX3BhZ2UoYWRkcik7CisgICAgfQorCiAgICAgcmV0dXJuIHJjOwogfQog
CkBAIC0zMDE2LDYgKzMwMzgsMjIgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2Vf
dHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5
cGUsCiAgICAgaWYgKCB1bmxpa2VseSghKG54ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICB7CiAgICAgICAgIC8qCisgICAgICAgICAqIEZsdXNoIHRoZSBj
YWNoZSBpZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IG1h
cHBpbmdzIG9mCisgICAgICAgICAqIHRoaXMgcGFnZSwgYW5kIHdlJ3JlIHRy
eWluZyB0byB1c2UgaXQgYXMgYW55dGhpbmcgb3RoZXIgdGhhbiBhCisgICAg
ICAgICAqIHdyaXRlYWJsZSBwYWdlLiAgVGhpcyBmb3JjZXMgdGhlIHBhZ2Ug
dG8gYmUgY29oZXJlbnQgYmVmb3JlIHdlCisgICAgICAgICAqIHZhbGlkYXRl
IGl0cyBjb250ZW50cyBmb3Igc2FmZXR5LgorICAgICAgICAgKi8KKyAgICAg
ICAgaWYgKCAobnggJiBQR1Rfbm9uX2NvaGVyZW50KSAmJiB0eXBlICE9IFBH
VF93cml0YWJsZV9wYWdlICkKKyAgICAgICAgeworICAgICAgICAgICAgdm9p
ZCAqYWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKHBhZ2UpOworCisgICAgICAg
ICAgICBjYWNoZV9mbHVzaChhZGRyLCBQQUdFX1NJWkUpOworICAgICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UoYWRkcik7CisKKyAgICAgICAgICAgIHBh
Z2UtPnUuaW51c2UudHlwZV9pbmZvICY9IH5QR1Rfbm9uX2NvaGVyZW50Owor
ICAgICAgICB9CisKKyAgICAgICAgLyoKICAgICAgICAgICogTm8gc3BlY2lh
bCB2YWxpZGF0aW9uIG5lZWRlZCBmb3Igd3JpdGFibGUgb3Igc2hhcmVkIHBh
Z2VzLiAgUGFnZQogICAgICAgICAgKiB0YWJsZXMgYW5kIEdEVC9MRFQgbmVl
ZCB0byBoYXZlIHRoZWlyIGNvbnRlbnRzIGF1ZGl0ZWQuCiAgICAgICAgICAq
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvcHYvZ3JhbnRfdGFibGUuYyBi
L3hlbi9hcmNoL3g4Ni9wdi9ncmFudF90YWJsZS5jCmluZGV4IDAzMjU2MThj
OTg4My4uODFjNzJlNjFlZDU1IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
cHYvZ3JhbnRfdGFibGUuYworKysgYi94ZW4vYXJjaC94ODYvcHYvZ3JhbnRf
dGFibGUuYwpAQCAtMTA5LDcgKzEwOSwxNyBAQCBpbnQgY3JlYXRlX2dyYW50
X3B2X21hcHBpbmcodWludDY0X3QgYWRkciwgbWZuX3QgZnJhbWUsCiAKICAg
ICBvbDFlID0gKnBsMWU7CiAgICAgaWYgKCBVUERBVEVfRU5UUlkobDEsIHBs
MWUsIG9sMWUsIG5sMWUsIGdsMW1mbiwgY3VyciwgMCkgKQorICAgIHsKKyAg
ICAgICAgLyoKKyAgICAgICAgICogV2UgYWx3YXlzIGNyZWF0ZSBtYXBwaW5n
cyBpbiB0aGlzIHBhdGguICBIb3dldmVyLCBvdXIgY2FsbGVyLAorICAgICAg
ICAgKiBtYXBfZ3JhbnRfcmVmKCksIG9ubHkgcGFzc2VzIHBvdGVudGlhbGx5
IG5vbi16ZXJvIGNhY2hlX2ZsYWdzIGZvcgorICAgICAgICAgKiBNTUlPIGZy
YW1lcywgc28gdGhpcyBwYXRoIGRvZXNuJ3QgY3JlYXRlIG5vbi1jb2hlcmVu
dCBtYXBwaW5ncyBvZgorICAgICAgICAgKiBSQU0gZnJhbWVzIGFuZCB0aGVy
ZSdzIG5vIG5lZWQgdG8gY2FsY3VsYXRlIFBHVF9ub25fY29oZXJlbnQuCisg
ICAgICAgICAqLworICAgICAgICBBU1NFUlQoIWNhY2hlX2ZsYWdzIHx8IGlz
X2lvbWVtX3BhZ2UoZnJhbWUpKTsKKwogICAgICAgICByYyA9IEdOVFNUX29r
YXk7CisgICAgfQogCiAgb3V0X3VubG9jazoKICAgICBwYWdlX3VubG9jayhw
YWdlKTsKQEAgLTI5NCw3ICszMDQsMTggQEAgaW50IHJlcGxhY2VfZ3JhbnRf
cHZfbWFwcGluZyh1aW50NjRfdCBhZGRyLCBtZm5fdCBmcmFtZSwKICAgICAg
ICAgICAgICAgICAgbDFlX2dldF9mbGFncyhvbDFlKSwgYWRkciwgZ3JhbnRf
cHRlX2ZsYWdzKTsKIAogICAgIGlmICggVVBEQVRFX0VOVFJZKGwxLCBwbDFl
LCBvbDFlLCBubDFlLCBnbDFtZm4sIGN1cnIsIDApICkKKyAgICB7CisgICAg
ICAgIC8qCisgICAgICAgICAqIEdlbmVyYWxseSwgcmVwbGFjZV9ncmFudF9w
dl9tYXBwaW5nKCkgaXMgdXNlZCB0byBkZXN0cm95IG1hcHBpbmdzCisgICAg
ICAgICAqIChuMWxlID0gbDFlX2VtcHR5KCkpLCBidXQgaXQgY2FuIGJlIGEg
cHJlc2VudCBtYXBwaW5nIG9uIHRoZQorICAgICAgICAgKiBHTlRBQk9QX3Vu
bWFwX2FuZF9yZXBsYWNlIHBhdGguCisgICAgICAgICAqCisgICAgICAgICAq
IEluIHN1Y2ggY2FzZXMsIHRoZSBQVEUgaXMgZnVsbHkgdHJhbnNwbGFudGVk
IGZyb20gaXRzIG9sZCBsb2NhdGlvbgorICAgICAgICAgKiB2aWEgc3RlYWxf
bGluZWFyX2FkZHIoKSwgc28gd2UgbmVlZCBub3QgcGVyZm9ybSBQR1Rfbm9u
X2NvaGVyZW50CisgICAgICAgICAqIGNoZWNraW5nIGhlcmUuCisgICAgICAg
ICAqLwogICAgICAgICByYyA9IEdOVFNUX29rYXk7CisgICAgfQogCiAgb3V0
X3VubG9jazoKICAgICBwYWdlX3VubG9jayhwYWdlKTsKZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS14
ODYvbW0uaAppbmRleCA3YTIwOTNkYTU5NzcuLjRjODE0YWJhYTAyOCAxMDA2
NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCisrKyBiL3hlbi9p
bmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNDgsOCArNDgsMTIgQEAKICNkZWZp
bmUgX1BHVF9wYXJ0aWFsICAgICAgUEdfc2hpZnQoOCkKICNkZWZpbmUgUEdU
X3BhcnRpYWwgICAgICAgUEdfbWFzaygxLCA4KQogCisvKiBIYXMgdGhpcyBw
YWdlIGJlZW4gbWFwcGVkIHdyaXRlYWJsZSB3aXRoIGEgbm9uLWNvaGVyZW50
IG1lbW9yeSB0eXBlPyAqLworI2RlZmluZSBfUEdUX25vbl9jb2hlcmVudCBQ
R19zaGlmdCg5KQorI2RlZmluZSBQR1Rfbm9uX2NvaGVyZW50ICBQR19tYXNr
KDEsIDkpCisKICAvKiBDb3VudCBvZiB1c2VzIG9mIHRoaXMgZnJhbWUgYXMg
aXRzIGN1cnJlbnQgdHlwZS4gKi8KLSNkZWZpbmUgUEdUX2NvdW50X3dpZHRo
ICAgUEdfc2hpZnQoOCkKKyNkZWZpbmUgUEdUX2NvdW50X3dpZHRoICAgUEdf
c2hpZnQoOSkKICNkZWZpbmUgUEdUX2NvdW50X21hc2sgICAgKCgxVUw8PFBH
VF9jb3VudF93aWR0aCktMSkKIAogLyogQXJlIHRoZSAndHlwZSBtYXNrJyBi
aXRzIGlkZW50aWNhbD8gKi8K

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.15-1.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.15-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3BhZ2U6IEludHJvZHVjZSBfUEFHRV8qIGNvbnN0
YW50cyBmb3IgbWVtb3J5IHR5cGVzCgouLi4gcmF0aGVyIHRoYW4gb3BlbmNv
ZGluZyB0aGUgUEFUL1BDRC9QV1QgYXR0cmlidXRlcyBpbiBfX1BBR0VfSFlQ
RVJWSVNPUl8qCmNvbnN0YW50cy4gIFRoZXNlIGFyZSBnb2luZyB0byBiZSBu
ZWVkZWQgYnkgZm9ydGhjb21pbmcgbG9naWMuCgpObyBmdW5jdGlvbmFsIGNo
YW5nZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvcGFnZS5oCmluZGV4IDRjN2YyY2I3MGM2OS4u
NTM0YmMxZjQwM2IzIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2
L3BhZ2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAt
MzM2LDYgKzMzNiwxNCBAQCB2b2lkIGVmaV91cGRhdGVfbDRfcGd0YWJsZSh1
bnNpZ25lZCBpbnQgbDRpZHgsIGw0X3BnZW50cnlfdCk7CiAKICNkZWZpbmUg
UEFHRV9DQUNIRV9BVFRSUyAoX1BBR0VfUEFUIHwgX1BBR0VfUENEIHwgX1BB
R0VfUFdUKQogCisvKiBNZW1vcnkgdHlwZXMsIGVuY29kZWQgdW5kZXIgWGVu
J3MgY2hvaWNlIG9mIE1TUl9QQVQuICovCisjZGVmaW5lIF9QQUdFX1dCICAg
ICAgICAgKCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMCkKKyNk
ZWZpbmUgX1BBR0VfV1QgICAgICAgICAoICAgICAgICAgICAgICAgICAgICAg
ICAgX1BBR0VfUFdUKQorI2RlZmluZSBfUEFHRV9VQ00gICAgICAgICggICAg
ICAgICAgICBfUEFHRV9QQ0QgICAgICAgICAgICApCisjZGVmaW5lIF9QQUdF
X1VDICAgICAgICAgKCAgICAgICAgICAgIF9QQUdFX1BDRCB8IF9QQUdFX1BX
VCkKKyNkZWZpbmUgX1BBR0VfV0MgICAgICAgICAoX1BBR0VfUEFUICAgICAg
ICAgICAgICAgICAgICAgICAgKQorI2RlZmluZSBfUEFHRV9XUCAgICAgICAg
IChfUEFHRV9QQVQgfCAgICAgICAgICAgICBfUEFHRV9QV1QpCisKIC8qCiAg
KiBEZWJ1ZyBvcHRpb246IEVuc3VyZSB0aGF0IGdyYW50ZWQgbWFwcGluZ3Mg
YXJlIG5vdCBpbXBsaWNpdGx5IHVubWFwcGVkLgogICogV0FSTklORzogVGhp
cyB3aWxsIG5lZWQgdG8gYmUgZGlzYWJsZWQgdG8gcnVuIE9TZXMgdGhhdCB1
c2UgdGhlIHNwYXJlIFBURQpAQCAtMzU0LDggKzM2Miw4IEBAIHZvaWQgZWZp
X3VwZGF0ZV9sNF9wZ3RhYmxlKHVuc2lnbmVkIGludCBsNGlkeCwgbDRfcGdl
bnRyeV90KTsKICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfUlggICAgICAo
X1BBR0VfUFJFU0VOVCB8IF9QQUdFX0FDQ0VTU0VEKQogI2RlZmluZSBfX1BB
R0VfSFlQRVJWSVNPUiAgICAgICAgIChfX1BBR0VfSFlQRVJWSVNPUl9SWCB8
IFwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgX1BBR0Vf
RElSVFkgfCBfUEFHRV9SVykKLSNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1Jf
VUNNSU5VUyAoX19QQUdFX0hZUEVSVklTT1IgfCBfUEFHRV9QQ0QpCi0jZGVm
aW5lIF9fUEFHRV9IWVBFUlZJU09SX1VDICAgICAgKF9fUEFHRV9IWVBFUlZJ
U09SIHwgX1BBR0VfUENEIHwgX1BBR0VfUFdUKQorI2RlZmluZSBfX1BBR0Vf
SFlQRVJWSVNPUl9VQ01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdF
X1VDTSkKKyNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19Q
QUdFX0hZUEVSVklTT1IgfCBfUEFHRV9VQykKICNkZWZpbmUgX19QQUdFX0hZ
UEVSVklTT1JfU0hTVEsgICAoX19QQUdFX0hZUEVSVklTT1JfUk8gfCBfUEFH
RV9ESVJUWSkKIAogI2RlZmluZSBNQVBfU01BTExfUEFHRVMgX1BBR0VfQVZB
SUwwIC8qIGRvbid0IHVzZSBzdXBlcnBhZ2VzIG1hcHBpbmdzICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.15-2.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.15-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBEb24ndCBjaGFuZ2UgdGhlIGNhY2hlYWJpbGl0
eSBvZiB0aGUgZGlyZWN0bWFwCgpDaGFuZ2VzZXQgNTVmOTdmNDliN2NlICgi
eDg2OiBDaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gMToxIHBhZ2Ug
bWFwcGluZ3MKaW4gcmVzcG9uc2UgdG8gZ3Vlc3QgbWFwcGluZyByZXF1ZXN0
cyIpIGF0dGVtcHRlZCB0byBrZWVwIHRoZSBjYWNoZWFiaWxpdHkKY29uc2lz
dGVudCBiZXR3ZWVuIGRpZmZlcmVudCBtYXBwaW5ncyBvZiB0aGUgc2FtZSBw
YWdlLgoKVGhlIHJlYXNvbiB3YXNuJ3QgZGVzY3JpYmVkIGluIHRoZSBjaGFu
Z2Vsb2csIGJ1dCBpdCBpcyB1bmRlcnN0b29kIHRvIGJlIGluCnJlZ2FyZHMg
dG8gYSBjb25jZXJuIG92ZXIgbWFjaGluZSBjaGVjayBleGNlcHRpb25zLCBv
d2luZyB0byBlcnJhdGEgd2hlbiB1c2luZwptaXhlZCBjYWNoZWFiaWxpdGll
cy4gIEl0IGRpZCB0aGlzIHByaW1hcmlseSBieSB1cGRhdGluZyBYZW4ncyBt
YXBwaW5nIG9mIHRoZQpwYWdlIGluIHRoZSBkaXJlY3QgbWFwIHdoZW4gdGhl
IGd1ZXN0IG1hcHBlZCBhIHBhZ2Ugd2l0aCByZWR1Y2VkIGNhY2hlYWJpbGl0
eS4KClVuZm9ydHVuYXRlbHksIHRoZSBsb2dpYyBkaWRuJ3QgYWN0dWFsbHkg
cHJldmVudCBtaXhlZCBjYWNoZWFiaWxpdHkgZnJvbQpvY2N1cnJpbmc6CiAq
IEEgZ3Vlc3QgY291bGQgbWFwIGEgcGFnZSBub3JtYWxseSwgYW5kIHRoZW4g
bWFwIHRoZSBzYW1lIHBhZ2Ugd2l0aAogICBkaWZmZXJlbnQgY2FjaGVhYmls
aXR5OyBub3RoaW5nIHByZXZlbnRlZCB0aGlzLgogKiBUaGUgY2FjaGVhYmls
aXR5IG9mIHRoZSBkaXJlY3RtYXAgd2FzIGFsd2F5cyBsYXRlc3QtdGFrZXMt
cHJlY2VkZW5jZSBpbgogICB0ZXJtcyBvZiBndWVzdCByZXF1ZXN0cy4KICog
R3JhbnQtbWFwcGVkIGZyYW1lcyB3aXRoIGxlc3NlciBjYWNoZWFiaWxpdHkg
ZGlkbid0IGFkanVzdCB0aGUgcGFnZSdzCiAgIGNhY2hlYXR0ciBzZXR0aW5n
cy4KICogVGhlIG1hcF9kb21haW5fcGFnZSgpIGZ1bmN0aW9uIHN0aWxsIHVu
Y29uZGl0aW9uYWxseSBjcmVhdGVkIFdCIG1hcHBpbmdzLAogICBpcnJlc3Bl
Y3RpdmUgb2YgdGhlIHBhZ2UncyBjYWNoZWF0dHIgc2V0dGluZ3MuCgpBZGRp
dGlvbmFsbHksIHVwZGF0ZV94ZW5fbWFwcGluZ3MoKSBoYWQgYSBidWcgd2hl
cmUgdGhlIGFsaWFzIGNhbGN1bGF0aW9uIHdhcwp3cm9uZyBmb3IgbWZuJ3Mg
d2hpY2ggd2VyZSAuaW5pdCBjb250ZW50LCB3aGljaCBzaG91bGQgaGF2ZSBi
ZWVuIHRyZWF0ZWQgYXMKZnVsbHkgZ3Vlc3QgcGFnZXMsIG5vdCBYZW4gcGFn
ZXMuCgpXb3JzZSB5ZXQsIHRoZSBsb2dpYyBpbnRyb2R1Y2VkIGEgdnVsbmVy
YWJpbGl0eSB3aGVyZWJ5IG5lY2Vzc2FyeQpwYWdldGFibGUvc2VnZGVzYyBh
ZGp1c3RtZW50cyBtYWRlIGJ5IFhlbiBpbiB0aGUgdmFsaWRhdGlvbiBsb2dp
YyBjb3VsZCBiZWNvbWUKbm9uLWNvaGVyZW50IGJldHdlZW4gdGhlIGNhY2hl
IGFuZCBtYWluIG1lbW9yeS4gIFRoZSBDUFUgY291bGQgc3Vic2VxdWVudGx5
Cm9wZXJhdGUgb24gdGhlIHN0YWxlIHZhbHVlIGluIHRoZSBjYWNoZSwgcmF0
aGVyIHRoYW4gdGhlIHNhZmUgdmFsdWUgaW4gbWFpbgptZW1vcnkuCgpUaGUg
ZGlyZWN0bWFwIGNvbnRhaW5zIHByaW1hcmlseSBtYXBwaW5ncyBvZiBSQU0u
ICBQQVQvTVRSUiBjb25mbGljdApyZXNvbHV0aW9uIGlzIGFzeW1tZXRyaWMs
IGFuZCBnZW5lcmFsbHkgZm9yIE1UUlI9V0IgcmFuZ2VzLCBQQVQgb2YgbGVz
c2VyCmNhY2hlYWJpbGl0eSByZXNvbHZlcyB0byBiZWluZyBjb2hlcmVudC4g
IFRoZSBzcGVjaWFsIGNhc2UgaXMgV0MgbWFwcGluZ3MsCndoaWNoIGFyZSBu
b24tY29oZXJlbnQgYWdhaW5zdCBNVFJSPVdCIHJlZ2lvbnMgKGV4Y2VwdCBm
b3IgZnVsbHktY29oZXJlbnQKQ1BVcykuCgpYZW4gbXVzdCBub3QgaGF2ZSBh
bnkgV0MgY2FjaGVhYmlsaXR5IGluIHRoZSBkaXJlY3RtYXAsIHRvIHByZXZl
bnQgWGVuJ3MKYWN0aW9ucyBmcm9tIGNyZWF0aW5nIG5vbi1jb2hlcmVuY3ku
ICAoR3Vlc3QgYWN0aW9ucyBjcmVhdGluZyBub24tY29oZXJlbmN5IGlzCmRl
YWx0IHdpdGggaW4gc3Vic2VxdWVudCBwYXRjaGVzLikgIEFzIGFsbCBtZW1v
cnkgdHlwZXMgZm9yIE1UUlI9V0IgcmFuZ2VzCmludGVyLW9wZXJhdGUgY29o
ZXJlbnRseSwgc28gbGVhdmUgWGVuJ3MgZGlyZWN0bWFwIG1hcHBpbmdzIGFz
IFdCLgoKT25seSBQViBndWVzdHMgd2l0aCBhY2Nlc3MgdG8gZGV2aWNlcyBj
YW4gdXNlIHJlZHVjZWQtY2FjaGVhYmlsaXR5IG1hcHBpbmdzIHRvCmJlZ2lu
IHdpdGgsIGFuZCB0aGV5J3JlIHRydXN0ZWQgbm90IHRvIG1vdW50IERvU3Mg
YWdhaW5zdCB0aGUgc3lzdGVtIGFueXdheS4KCkRyb3AgUEdDX2NhY2hlYXR0
cl97YmFzZSxtYXNrfSBlbnRpcmVseSwgYW5kIHRoZSBsb2dpYyB0byBtYW5p
cHVsYXRlIHRoZW0uClNoaWZ0IHRoZSBsYXRlciBQR0NfKiBjb25zdGFudHMg
dXAsIHRvIGdhaW4gMyBleHRyYSBiaXRzIGluIHRoZSBtYWluIHJlZmVyZW5j
ZQpjb3VudC4gIFJldGFpbiB0aGUgY2hlY2sgaW4gZ2V0X3BhZ2VfZnJvbV9s
MWUoKSBmb3Igc3BlY2lhbF9wYWdlcygpIGJlY2F1c2UgYQpndWVzdCBoYXMg
bm8gYnVzaW5lc3MgdXNpbmcgcmVkdWNlZCBjYWNoZWFiaWxpdHkgb24gdGhl
c2UuCgpUaGlzIHJldmVydHMgY2hhbmdlc2V0IDU1Zjk3ZjQ5YjdjZTZjMzUy
MGM1NTVkMTljYWFjNmNmM2Y5YTVkZjAKClRoaXMgaXMgQ1ZFLTIwMjItMjYz
NjMsIHBhcnQgb2YgWFNBLTQwMi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Cgpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94ODYv
bW0uYwppbmRleCAyNjQ0YjlmMDMzN2MuLjZjZThjMTlkY2VjYyAxMDA2NDQK
LS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2FyY2gveDg2L21t
LmMKQEAgLTc4MywyOCArNzgzLDYgQEAgYm9vbCBpc19pb21lbV9wYWdlKG1m
bl90IG1mbikKICAgICByZXR1cm4gKHBhZ2VfZ2V0X293bmVyKHBhZ2UpID09
IGRvbV9pbyk7CiB9CiAKLXN0YXRpYyBpbnQgdXBkYXRlX3hlbl9tYXBwaW5n
cyh1bnNpZ25lZCBsb25nIG1mbiwgdW5zaWduZWQgaW50IGNhY2hlYXR0cikK
LXsKLSAgICBpbnQgZXJyID0gMDsKLSAgICBib29sIGFsaWFzID0gbWZuID49
IFBGTl9ET1dOKHhlbl9waHlzX3N0YXJ0KSAmJgotICAgICAgICAgbWZuIDwg
UEZOX1VQKHhlbl9waHlzX3N0YXJ0ICsgeGVuX3ZpcnRfZW5kIC0gWEVOX1ZJ
UlRfU1RBUlQpOwotICAgIHVuc2lnbmVkIGxvbmcgeGVuX3ZhID0KLSAgICAg
ICAgWEVOX1ZJUlRfU1RBUlQgKyAoKG1mbiAtIFBGTl9ET1dOKHhlbl9waHlz
X3N0YXJ0KSkgPDwgUEFHRV9TSElGVCk7Ci0KLSAgICBpZiAoIGJvb3RfY3B1
X2hhcyhYODZfRkVBVFVSRV9YRU5fU0VMRlNOT09QKSApCi0gICAgICAgIHJl
dHVybiAwOwotCi0gICAgaWYgKCB1bmxpa2VseShhbGlhcykgJiYgY2FjaGVh
dHRyICkKLSAgICAgICAgZXJyID0gbWFwX3BhZ2VzX3RvX3hlbih4ZW5fdmEs
IF9tZm4obWZuKSwgMSwgMCk7Ci0gICAgaWYgKCAhZXJyICkKLSAgICAgICAg
ZXJyID0gbWFwX3BhZ2VzX3RvX3hlbigodW5zaWduZWQgbG9uZyltZm5fdG9f
dmlydChtZm4pLCBfbWZuKG1mbiksIDEsCi0gICAgICAgICAgICAgICAgICAg
ICBQQUdFX0hZUEVSVklTT1IgfCBjYWNoZWF0dHJfdG9fcHRlX2ZsYWdzKGNh
Y2hlYXR0cikpOwotICAgIGlmICggdW5saWtlbHkoYWxpYXMpICYmICFjYWNo
ZWF0dHIgJiYgIWVyciApCi0gICAgICAgIGVyciA9IG1hcF9wYWdlc190b194
ZW4oeGVuX3ZhLCBfbWZuKG1mbiksIDEsIFBBR0VfSFlQRVJWSVNPUik7Ci0K
LSAgICByZXR1cm4gZXJyOwotfQotCiAjaWZuZGVmIE5ERUJVRwogc3RydWN0
IG1taW9fZW11bF9yYW5nZV9jdHh0IHsKICAgICBjb25zdCBzdHJ1Y3QgZG9t
YWluICpkOwpAQCAtMTAwOSw0NyArOTg3LDE0IEBAIGdldF9wYWdlX2Zyb21f
bDFlKAogICAgICAgICBnb3RvIGNvdWxkX25vdF9waW47CiAgICAgfQogCi0g
ICAgaWYgKCBwdGVfZmxhZ3NfdG9fY2FjaGVhdHRyKGwxZikgIT0KLSAgICAg
ICAgICgocGFnZS0+Y291bnRfaW5mbyAmIFBHQ19jYWNoZWF0dHJfbWFzaykg
Pj4gUEdDX2NhY2hlYXR0cl9iYXNlKSApCisgICAgaWYgKCAobDFmICYgUEFH
RV9DQUNIRV9BVFRSUykgIT0gX1BBR0VfV0IgJiYgaXNfc3BlY2lhbF9wYWdl
KHBhZ2UpICkKICAgICB7Ci0gICAgICAgIHVuc2lnbmVkIGxvbmcgeCwgbngs
IHkgPSBwYWdlLT5jb3VudF9pbmZvOwotICAgICAgICB1bnNpZ25lZCBsb25n
IGNhY2hlYXR0ciA9IHB0ZV9mbGFnc190b19jYWNoZWF0dHIobDFmKTsKLSAg
ICAgICAgaW50IGVycjsKLQotICAgICAgICBpZiAoIGlzX3NwZWNpYWxfcGFn
ZShwYWdlKSApCi0gICAgICAgIHsKLSAgICAgICAgICAgIGlmICggd3JpdGUg
KQotICAgICAgICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFnZSk7Ci0gICAg
ICAgICAgICBwdXRfcGFnZShwYWdlKTsKLSAgICAgICAgICAgIGdkcHJpbnRr
KFhFTkxPR19XQVJOSU5HLAotICAgICAgICAgICAgICAgICAgICAgIkF0dGVt
cHQgdG8gY2hhbmdlIGNhY2hlIGF0dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFn
ZVxuIik7Ci0gICAgICAgICAgICByZXR1cm4gLUVBQ0NFUzsKLSAgICAgICAg
fQotCi0gICAgICAgIGRvIHsKLSAgICAgICAgICAgIHggID0geTsKLSAgICAg
ICAgICAgIG54ID0gKHggJiB+UEdDX2NhY2hlYXR0cl9tYXNrKSB8IChjYWNo
ZWF0dHIgPDwgUEdDX2NhY2hlYXR0cl9iYXNlKTsKLSAgICAgICAgfSB3aGls
ZSAoICh5ID0gY21weGNoZygmcGFnZS0+Y291bnRfaW5mbywgeCwgbngpKSAh
PSB4ICk7Ci0KLSAgICAgICAgZXJyID0gdXBkYXRlX3hlbl9tYXBwaW5ncyht
Zm4sIGNhY2hlYXR0cik7Ci0gICAgICAgIGlmICggdW5saWtlbHkoZXJyKSAp
Ci0gICAgICAgIHsKLSAgICAgICAgICAgIGNhY2hlYXR0ciA9IHkgJiBQR0Nf
Y2FjaGVhdHRyX21hc2s7Ci0gICAgICAgICAgICBkbyB7Ci0gICAgICAgICAg
ICAgICAgeCAgPSB5OwotICAgICAgICAgICAgICAgIG54ID0gKHggJiB+UEdD
X2NhY2hlYXR0cl9tYXNrKSB8IGNhY2hlYXR0cjsKLSAgICAgICAgICAgIH0g
d2hpbGUgKCAoeSA9IGNtcHhjaGcoJnBhZ2UtPmNvdW50X2luZm8sIHgsIG54
KSkgIT0geCApOwotCi0gICAgICAgICAgICBpZiAoIHdyaXRlICkKLSAgICAg
ICAgICAgICAgICBwdXRfcGFnZV90eXBlKHBhZ2UpOwotICAgICAgICAgICAg
cHV0X3BhZ2UocGFnZSk7Ci0KLSAgICAgICAgICAgIGdkcHJpbnRrKFhFTkxP
R19XQVJOSU5HLCAiRXJyb3IgdXBkYXRpbmcgbWFwcGluZ3MgZm9yIG1mbiAl
IiBQUklfbWZuCi0gICAgICAgICAgICAgICAgICAgICAiIChwZm4gJSIgUFJJ
X3BmbiAiLCBmcm9tIEwxIGVudHJ5ICUiIFBSSXB0ZSAiKSBmb3IgZCVkXG4i
LAotICAgICAgICAgICAgICAgICAgICAgbWZuLCBnZXRfZ3Bmbl9mcm9tX21m
bihtZm4pLAotICAgICAgICAgICAgICAgICAgICAgbDFlX2dldF9pbnRwdGUo
bDFlKSwgbDFlX293bmVyLT5kb21haW5faWQpOwotICAgICAgICAgICAgcmV0
dXJuIGVycjsKLSAgICAgICAgfQorICAgICAgICBpZiAoIHdyaXRlICkKKyAg
ICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFnZSk7CisgICAgICAgIHB1dF9w
YWdlKHBhZ2UpOworICAgICAgICBnZHByaW50ayhYRU5MT0dfV0FSTklORywK
KyAgICAgICAgICAgICAgICAgIkF0dGVtcHQgdG8gY2hhbmdlIGNhY2hlIGF0
dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFnZVxuIik7CisgICAgICAgIHJldHVy
biAtRUFDQ0VTOwogICAgIH0KIAogICAgIHJldHVybiAwOwpAQCAtMjQ1NSwy
NSArMjQwMCwxMCBAQCBzdGF0aWMgaW50IG1vZF9sNF9lbnRyeShsNF9wZ2Vu
dHJ5X3QgKnBsNGUsCiAgKi8KIHN0YXRpYyBpbnQgY2xlYW51cF9wYWdlX21h
cHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UpCiB7Ci0gICAgdW5zaWdu
ZWQgaW50IGNhY2hlYXR0ciA9Ci0gICAgICAgIChwYWdlLT5jb3VudF9pbmZv
ICYgUEdDX2NhY2hlYXR0cl9tYXNrKSA+PiBQR0NfY2FjaGVhdHRyX2Jhc2U7
CiAgICAgaW50IHJjID0gMDsKICAgICB1bnNpZ25lZCBsb25nIG1mbiA9IG1m
bl94KHBhZ2VfdG9fbWZuKHBhZ2UpKTsKIAogICAgIC8qCi0gICAgICogSWYg
d2UndmUgbW9kaWZpZWQgeGVuIG1hcHBpbmdzIGFzIGEgcmVzdWx0IG9mIGd1
ZXN0IGNhY2hlCi0gICAgICogYXR0cmlidXRlcywgcmVzdG9yZSB0aGVtIHRv
IHRoZSAibm9ybWFsIiBzdGF0ZS4KLSAgICAgKi8KLSAgICBpZiAoIHVubGlr
ZWx5KGNhY2hlYXR0cikgKQotICAgIHsKLSAgICAgICAgcGFnZS0+Y291bnRf
aW5mbyAmPSB+UEdDX2NhY2hlYXR0cl9tYXNrOwotCi0gICAgICAgIEJVR19P
Tihpc19zcGVjaWFsX3BhZ2UocGFnZSkpOwotCi0gICAgICAgIHJjID0gdXBk
YXRlX3hlbl9tYXBwaW5ncyhtZm4sIDApOwotICAgIH0KLQotICAgIC8qCiAg
ICAgICogSWYgdGhpcyBtYXkgYmUgaW4gYSBQViBkb21haW4ncyBJT01NVSwg
cmVtb3ZlIGl0LgogICAgICAqCiAgICAgICogTkIgdGhhdCB3cml0YWJsZSB4
ZW5oZWFwIHBhZ2VzIGhhdmUgdGhlaXIgdHlwZSBzZXQgYW5kIGNsZWFyZWQg
YnkKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvbW0uaAppbmRleCAwNDFjMTU4ZjAzZjYuLmY1
Yjg4NjJiODM3NCAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
bS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNjksMjUg
KzY5LDIyIEBACiAgLyogU2V0IHdoZW4gaXMgdXNpbmcgYSBwYWdlIGFzIGEg
cGFnZSB0YWJsZSAqLwogI2RlZmluZSBfUEdDX3BhZ2VfdGFibGUgICBQR19z
aGlmdCgzKQogI2RlZmluZSBQR0NfcGFnZV90YWJsZSAgICBQR19tYXNrKDEs
IDMpCi0gLyogMy1iaXQgUEFUL1BDRC9QV1QgY2FjaGUtYXR0cmlidXRlIGhp
bnQuICovCi0jZGVmaW5lIFBHQ19jYWNoZWF0dHJfYmFzZSBQR19zaGlmdCg2
KQotI2RlZmluZSBQR0NfY2FjaGVhdHRyX21hc2sgUEdfbWFzayg3LCA2KQog
IC8qIFBhZ2UgaXMgYnJva2VuPyAqLwotI2RlZmluZSBfUEdDX2Jyb2tlbiAg
ICAgICBQR19zaGlmdCg3KQotI2RlZmluZSBQR0NfYnJva2VuICAgICAgICBQ
R19tYXNrKDEsIDcpCisjZGVmaW5lIF9QR0NfYnJva2VuICAgICAgIFBHX3No
aWZ0KDQpCisjZGVmaW5lIFBHQ19icm9rZW4gICAgICAgIFBHX21hc2soMSwg
NCkKICAvKiBNdXR1YWxseS1leGNsdXNpdmUgcGFnZSBzdGF0ZXM6IHsgaW51
c2UsIG9mZmxpbmluZywgb2ZmbGluZWQsIGZyZWUgfS4gKi8KLSNkZWZpbmUg
UEdDX3N0YXRlICAgICAgICAgUEdfbWFzaygzLCA5KQotI2RlZmluZSBQR0Nf
c3RhdGVfaW51c2UgICBQR19tYXNrKDAsIDkpCi0jZGVmaW5lIFBHQ19zdGF0
ZV9vZmZsaW5pbmcgUEdfbWFzaygxLCA5KQotI2RlZmluZSBQR0Nfc3RhdGVf
b2ZmbGluZWQgUEdfbWFzaygyLCA5KQotI2RlZmluZSBQR0Nfc3RhdGVfZnJl
ZSAgICBQR19tYXNrKDMsIDkpCisjZGVmaW5lIFBHQ19zdGF0ZSAgICAgICAg
ICAgUEdfbWFzaygzLCA2KQorI2RlZmluZSBQR0Nfc3RhdGVfaW51c2UgICAg
IFBHX21hc2soMCwgNikKKyNkZWZpbmUgUEdDX3N0YXRlX29mZmxpbmluZyBQ
R19tYXNrKDEsIDYpCisjZGVmaW5lIFBHQ19zdGF0ZV9vZmZsaW5lZCAgUEdf
bWFzaygyLCA2KQorI2RlZmluZSBQR0Nfc3RhdGVfZnJlZSAgICAgIFBHX21h
c2soMywgNikKICNkZWZpbmUgcGFnZV9zdGF0ZV9pcyhwZywgc3QpICgoKHBn
KS0+Y291bnRfaW5mbyZQR0Nfc3RhdGUpID09IFBHQ19zdGF0ZV8jI3N0KQog
LyogUGFnZSBpcyBub3QgcmVmZXJlbmNlIGNvdW50ZWQgKi8KLSNkZWZpbmUg
X1BHQ19leHRyYSAgICAgICAgUEdfc2hpZnQoMTApCi0jZGVmaW5lIFBHQ19l
eHRyYSAgICAgICAgIFBHX21hc2soMSwgMTApCisjZGVmaW5lIF9QR0NfZXh0
cmEgICAgICAgIFBHX3NoaWZ0KDcpCisjZGVmaW5lIFBHQ19leHRyYSAgICAg
ICAgIFBHX21hc2soMSwgNykKIAogLyogQ291bnQgb2YgcmVmZXJlbmNlcyB0
byB0aGlzIGZyYW1lLiAqLwotI2RlZmluZSBQR0NfY291bnRfd2lkdGggICBQ
R19zaGlmdCgxMCkKKyNkZWZpbmUgUEdDX2NvdW50X3dpZHRoICAgUEdfc2hp
ZnQoNykKICNkZWZpbmUgUEdDX2NvdW50X21hc2sgICAgKCgxVUw8PFBHQ19j
b3VudF93aWR0aCktMSkKIAogLyoK

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.15-3.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.15-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBTcGxpdCBjYWNoZV9mbHVzaCgpIG91dCBvZiBj
YWNoZV93cml0ZWJhY2soKQoKU3Vic2VxdWVudCBjaGFuZ2VzIHdpbGwgd2Fu
dCBhIGZ1bGx5IGZsdXNoaW5nIHZlcnNpb24uCgpVc2UgdGhlIG5ldyBoZWxw
ZXIgcmF0aGVyIHRoYW4gb3BlbmNvZGluZyBpdCBpbiBmbHVzaF9hcmVhX2xv
Y2FsKCkuICBUaGlzCnJlc29sdmVzIGFuIG91dHN0YW5kaW5nIGlzc3VlIHdo
ZXJlIHRoZSBjb25kaXRpb25hbCBzZmVuY2UgaXMgb24gdGhlIHdyb25nCnNp
ZGUgb2YgdGhlIGNsZmx1c2hvcHQgbG9vcC4gIGNsZmx1c2hvcHQgaXMgb3Jk
ZXJlZCB3aXRoIHJlc3BlY3QgdG8gb2xkZXIKc3RvcmVzLCBub3QgdG8geW91
bmdlciBzdG9yZXMuCgpSZW5hbWUgZ250dGFiX2NhY2hlX2ZsdXNoKCkncyBo
ZWxwZXIgdG8gYXZvaWQgY29sbGlkaW5nIGluIG5hbWUuCmdyYW50X3RhYmxl
LmMgY2FuIHNlZSB0aGUgcHJvdG90eXBlIGZyb20gY2FjaGUuaCBzbyB0aGUg
YnVpbGQgZmFpbHMKb3RoZXJ3aXNlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00
MDIuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKWGVuIDQuMTYgYW5kIGVhcmxpZXI6CiAqIEFs
c28gYmFja3BvcnQgaGFsZiBvZiBjL3MgMzMzMDAxM2U2NzM5NiAiVlQtZCAv
IHg4NjogcmUtYXJyYW5nZSBjYWNoZQogICBzeW5jaW5nIiB0byBzcGxpdCBj
YWNoZV93cml0ZWJhY2soKSBvdXQgb2YgdGhlIElPTU1VIGxvZ2ljLCBidXQg
d2l0aG91dCB0aGUKICAgYXNzb2NpYXRlZCBob29rcyBjaGFuZ2VzLgoKZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jIGIveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMKaW5kZXggMjU3OThkZjUwZjU0Li4wYzkxMmI4NjY5
ZjggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCkBAIC0yMzQsNyArMjM0LDcgQEAg
dW5zaWduZWQgaW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEs
IHVuc2lnbmVkIGludCBmbGFncykKICAgICBpZiAoIGZsYWdzICYgRkxVU0hf
Q0FDSEUgKQogICAgIHsKICAgICAgICAgY29uc3Qgc3RydWN0IGNwdWluZm9f
eDg2ICpjID0gJmN1cnJlbnRfY3B1X2RhdGE7Ci0gICAgICAgIHVuc2lnbmVk
IGxvbmcgaSwgc3ogPSAwOworICAgICAgICB1bnNpZ25lZCBsb25nIHN6ID0g
MDsKIAogICAgICAgICBpZiAoIG9yZGVyIDwgKEJJVFNfUEVSX0xPTkcgLSBQ
QUdFX1NISUZUKSApCiAgICAgICAgICAgICBzeiA9IDFVTCA8PCAob3JkZXIg
KyBQQUdFX1NISUZUKTsKQEAgLTI0NCwxMyArMjQ0LDcgQEAgdW5zaWduZWQg
aW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVk
IGludCBmbGFncykKICAgICAgICAgICAgICBjLT54ODZfY2xmbHVzaF9zaXpl
ICYmIGMtPng4Nl9jYWNoZV9zaXplICYmIHN6ICYmCiAgICAgICAgICAgICAg
KChzeiA+PiAxMCkgPCBjLT54ODZfY2FjaGVfc2l6ZSkgKQogICAgICAgICB7
Ci0gICAgICAgICAgICBhbHRlcm5hdGl2ZSgiIiwgInNmZW5jZSIsIFg4Nl9G
RUFUVVJFX0NMRkxVU0hPUFQpOwotICAgICAgICAgICAgZm9yICggaSA9IDA7
IGkgPCBzejsgaSArPSBjLT54ODZfY2xmbHVzaF9zaXplICkKLSAgICAgICAg
ICAgICAgICBhbHRlcm5hdGl2ZV9pbnB1dCgiLmJ5dGUgIiBfX3N0cmluZ2lm
eShOT1BfRFNfUFJFRklYKSAiOyIKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAiIGNsZmx1c2ggJTAiLAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJkYXRhMTYgY2xmbHVzaCAlMCIsICAgICAgLyog
Y2xmbHVzaG9wdCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIm0iICgoKGNvbnN0IGNoYXIgKil2YSlbaV0p
KTsKKyAgICAgICAgICAgIGNhY2hlX2ZsdXNoKHZhLCBzeik7CiAgICAgICAg
ICAgICBmbGFncyAmPSB+RkxVU0hfQ0FDSEU7CiAgICAgICAgIH0KICAgICAg
ICAgZWxzZQpAQCAtMjY1LDYgKzI1OSw4MCBAQCB1bnNpZ25lZCBpbnQgZmx1
c2hfYXJlYV9sb2NhbChjb25zdCB2b2lkICp2YSwgdW5zaWduZWQgaW50IGZs
YWdzKQogICAgIHJldHVybiBmbGFnczsKIH0KIAordm9pZCBjYWNoZV9mbHVz
aChjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKK3sKKyAg
ICAvKgorICAgICAqIFRoaXMgZnVuY3Rpb24gbWF5IGJlIGNhbGxlZCBiZWZv
cmUgY3VycmVudF9jcHVfZGF0YSBpcyBlc3RhYmxpc2hlZC4KKyAgICAgKiBI
ZW5jZSBhIGZhbGxiYWNrIGlzIG5lZWRlZCB0byBwcmV2ZW50IHRoZSBsb29w
IGJlbG93IGJlY29taW5nIGluZmluaXRlLgorICAgICAqLworICAgIHVuc2ln
bmVkIGludCBjbGZsdXNoX3NpemUgPSBjdXJyZW50X2NwdV9kYXRhLng4Nl9j
bGZsdXNoX3NpemUgPzogMTY7CisgICAgY29uc3Qgdm9pZCAqZW5kID0gYWRk
ciArIHNpemU7CisKKyAgICBhZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIg
JiAoY2xmbHVzaF9zaXplIC0gMSk7CisgICAgZm9yICggOyBhZGRyIDwgZW5k
OyBhZGRyICs9IGNsZmx1c2hfc2l6ZSApCisgICAgeworICAgICAgICAvKgor
ICAgICAgICAgKiBOb3RlIHJlZ2FyZGluZyB0aGUgImRzIiBwcmVmaXggdXNl
OiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1c2gKKyAgICAgICAgICogKyBw
cmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3AsIGFuZCBoZW5jZSB0aGUgcHJl
Zml4IGlzIGFkZGVkIGluc3RlYWQKKyAgICAgICAgICogb2YgbGV0dGluZyB0
aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZpbGwgdGhlIGdhcCBieSBhcHBl
bmRpbmcgbm9wcy4KKyAgICAgICAgICovCisgICAgICAgIGFsdGVybmF0aXZl
X2lvKCJkczsgY2xmbHVzaCAlW3BdIiwKKyAgICAgICAgICAgICAgICAgICAg
ICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAvKiBjbGZsdXNob3B0ICovCisg
ICAgICAgICAgICAgICAgICAgICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQs
CisgICAgICAgICAgICAgICAgICAgICAgIC8qIG5vIG91dHB1dHMgKi8sCisg
ICAgICAgICAgICAgICAgICAgICAgIFtwXSAibSIgKCooY29uc3QgY2hhciAq
KShhZGRyKSkpOworICAgIH0KKworICAgIGFsdGVybmF0aXZlKCIiLCAic2Zl
bmNlIiwgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCk7Cit9CisKK3ZvaWQgY2Fj
aGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBz
aXplKQoreworICAgIHVuc2lnbmVkIGludCBjbGZsdXNoX3NpemU7CisgICAg
Y29uc3Qgdm9pZCAqZW5kID0gYWRkciArIHNpemU7CisKKyAgICAvKiBGYWxs
IGJhY2sgdG8gQ0xGTFVTSHssT1BUfSB3aGVuIENMV0IgaXNuJ3QgYXZhaWxh
YmxlLiAqLworICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9D
TFdCKSApCisgICAgICAgIHJldHVybiBjYWNoZV9mbHVzaChhZGRyLCBzaXpl
KTsKKworICAgIC8qCisgICAgICogVGhpcyBmdW5jdGlvbiBtYXkgYmUgY2Fs
bGVkIGJlZm9yZSBjdXJyZW50X2NwdV9kYXRhIGlzIGVzdGFibGlzaGVkLgor
ICAgICAqIEhlbmNlIGEgZmFsbGJhY2sgaXMgbmVlZGVkIHRvIHByZXZlbnQg
dGhlIGxvb3AgYmVsb3cgYmVjb21pbmcgaW5maW5pdGUuCisgICAgICovCisg
ICAgY2xmbHVzaF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVz
aF9zaXplID86IDE2OworICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRk
ciAmIChjbGZsdXNoX3NpemUgLSAxKTsKKyAgICBmb3IgKCA7IGFkZHIgPCBl
bmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKKyAgICB7CisvKgorICogVGhl
IGFyZ3VtZW50cyB0byBhIG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJv
Y2Vzc29yIGRpcmVjdGl2ZXMuIERvaW5nIHNvCisgKiByZXN1bHRzIGluIHVu
ZGVmaW5lZCBiZWhhdmlvciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBk
ZWZpbmVzIGhlcmUgaW4KKyAqIG9yZGVyIHRvIGF2b2lkIGl0LgorICovCisj
aWYgZGVmaW5lZChIQVZFX0FTX0NMV0IpCisjIGRlZmluZSBDTFdCX0VOQ09E
SU5HICJjbHdiICVbcF0iCisjZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVP
UFQpCisjIGRlZmluZSBDTFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQg
JVtwXSIgLyogY2x3YiAqLworI2Vsc2UKKyMgZGVmaW5lIENMV0JfRU5DT0RJ
TkcgIi5ieXRlIDB4NjYsIDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUl
cmF4KSAqLworI2VuZGlmCisKKyNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBb
cF0gIm0iICgqKGNvbnN0IGNoYXIgKikoYWRkcikpCisjaWYgZGVmaW5lZChI
QVZFX0FTX0NMV0IpIHx8IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKKyMg
ZGVmaW5lIElOUFVUIEJBU0VfSU5QVVQKKyNlbHNlCisjIGRlZmluZSBJTlBV
VChhZGRyKSAiYSIgKGFkZHIpLCBCQVNFX0lOUFVUKGFkZHIpCisjZW5kaWYK
KworICAgICAgICBhc20gdm9sYXRpbGUgKENMV0JfRU5DT0RJTkcgOjogSU5Q
VVQoYWRkcikpOworCisjdW5kZWYgSU5QVVQKKyN1bmRlZiBCQVNFX0lOUFVU
CisjdW5kZWYgQ0xXQl9FTkNPRElORworICAgIH0KKworICAgIGFzbSB2b2xh
dGlsZSAoInNmZW5jZSIgOjo6ICJtZW1vcnkiKTsKK30KKwogdW5zaWduZWQg
aW50IGd1ZXN0X2ZsdXNoX3RsYl9mbGFncyhjb25zdCBzdHJ1Y3QgZG9tYWlu
ICpkKQogewogICAgIGJvb2wgc2hhZG93ID0gcGFnaW5nX21vZGVfc2hhZG93
KGQpOwpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jIGIv
eGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCmluZGV4IDQ3YjAxOWM3NTAxNy4u
NzdiYmE5ODA2OTM3IDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL2dyYW50X3Rh
YmxlLmMKKysrIGIveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCkBAIC0zNDIz
LDcgKzM0MjMsNyBAQCBnbnR0YWJfc3dhcF9ncmFudF9yZWYoWEVOX0dVRVNU
X0hBTkRMRV9QQVJBTShnbnR0YWJfc3dhcF9ncmFudF9yZWZfdCkgdW9wLAog
ICAgIHJldHVybiAwOwogfQogCi1zdGF0aWMgaW50IGNhY2hlX2ZsdXNoKGNv
bnN0IGdudHRhYl9jYWNoZV9mbHVzaF90ICpjZmx1c2gsIGdyYW50X3JlZl90
ICpjdXJfcmVmKQorc3RhdGljIGludCBfY2FjaGVfZmx1c2goY29uc3QgZ250
dGFiX2NhY2hlX2ZsdXNoX3QgKmNmbHVzaCwgZ3JhbnRfcmVmX3QgKmN1cl9y
ZWYpCiB7CiAgICAgc3RydWN0IGRvbWFpbiAqZCwgKm93bmVyOwogICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBhZ2U7CkBAIC0zNTE3LDcgKzM1MTcsNyBAQCBn
bnR0YWJfY2FjaGVfZmx1c2goWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShnbnR0
YWJfY2FjaGVfZmx1c2hfdCkgdW9wLAogICAgICAgICAgICAgcmV0dXJuIC1F
RkFVTFQ7CiAgICAgICAgIGZvciAoIDsgOyApCiAgICAgICAgIHsKLSAgICAg
ICAgICAgIGludCByZXQgPSBjYWNoZV9mbHVzaCgmb3AsIGN1cl9yZWYpOwor
ICAgICAgICAgICAgaW50IHJldCA9IF9jYWNoZV9mbHVzaCgmb3AsIGN1cl9y
ZWYpOwogCiAgICAgICAgICAgICBpZiAoIHJldCA8IDAgKQogICAgICAgICAg
ICAgICAgIHJldHVybiByZXQ7CmRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvZXh0ZXJuLmggYi94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKaW5kZXggY2Y0ZDIyMThmYThiLi44ZjcwYWU3
MjdiODYgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTc2LDcgKzc2LDYgQEAgaW50IF9fbXVzdF9jaGVjayBx
aW52YWxfZGV2aWNlX2lvdGxiX3N5bmMoc3RydWN0IHZ0ZF9pb21tdSAqaW9t
bXUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBzdHJ1Y3QgcGNpX2RldiAqcGRldiwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHUxNiBkaWQsIHUxNiBzaXplLCB1NjQg
YWRkcik7CiAKLXVuc2lnbmVkIGludCBnZXRfY2FjaGVfbGluZV9zaXplKHZv
aWQpOwogdm9pZCBmbHVzaF9hbGxfY2FjaGUodm9pZCk7CiAKIHVpbnQ2NF90
IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQgbG9uZyBucGFnZXMsIG5v
ZGVpZF90IG5vZGUpOwpkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvdnRkL2lvbW11LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvaW9tbXUuYwppbmRleCBhMDYzNDYyY2ZmNWEuLjY4YTY1ODkzMGE2YSAx
MDA2NDQKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11
LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMK
QEAgLTMxLDYgKzMxLDcgQEAKICNpbmNsdWRlIDx4ZW4vcGNpLmg+CiAjaW5j
bHVkZSA8eGVuL3BjaV9yZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRs
ZXIuaD4KKyNpbmNsdWRlIDxhc20vY2FjaGUuaD4KICNpbmNsdWRlIDxhc20v
bXNpLmg+CiAjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRlIDxhc20v
aXJxLmg+CkBAIC0yMDQsNTQgKzIwNSw2IEBAIHN0YXRpYyB2b2lkIGNoZWNr
X2NsZWFudXBfZG9taWRfbWFwKHN0cnVjdCBkb21haW4gKmQsCiAgICAgfQog
fQogCi1zdGF0aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIs
IHVuc2lnbmVkIGludCBzaXplKQotewotICAgIHN0YXRpYyB1bnNpZ25lZCBs
b25nIGNsZmx1c2hfc2l6ZSA9IDA7Ci0gICAgY29uc3Qgdm9pZCAqZW5kID0g
YWRkciArIHNpemU7Ci0KLSAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkK
LSAgICAgICAgY2xmbHVzaF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgp
OwotCi0gICAgYWRkciAtPSAodW5zaWduZWQgbG9uZylhZGRyICYgKGNsZmx1
c2hfc2l6ZSAtIDEpOwotICAgIGZvciAoIDsgYWRkciA8IGVuZDsgYWRkciAr
PSBjbGZsdXNoX3NpemUgKQotLyoKLSAqIFRoZSBhcmd1bWVudHMgdG8gYSBt
YWNybyBtdXN0IG5vdCBpbmNsdWRlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVz
LiBEb2luZyBzbwotICogcmVzdWx0cyBpbiB1bmRlZmluZWQgYmVoYXZpb3Is
IHNvIHdlIGhhdmUgdG8gY3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJlIGluCi0g
KiBvcmRlciB0byBhdm9pZCBpdC4KLSAqLwotI2lmIGRlZmluZWQoSEFWRV9B
U19DTFdCKQotIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiY2x3YiAlW3BdIgot
I2VsaWYgZGVmaW5lZChIQVZFX0FTX1hTQVZFT1BUKQotIyBkZWZpbmUgQ0xX
Ql9FTkNPRElORyAiZGF0YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNsd2IgKi8K
LSNlbHNlCi0jIGRlZmluZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAweDY2LCAw
eDBmLCAweGFlLCAweDMwIiAvKiBjbHdiICglJXJheCkgKi8KLSNlbmRpZgot
Ci0jZGVmaW5lIEJBU0VfSU5QVVQoYWRkcikgW3BdICJtIiAoKihjb25zdCBj
aGFyICopKGFkZHIpKQotI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKSB8fCBk
ZWZpbmVkKEhBVkVfQVNfWFNBVkVPUFQpCi0jIGRlZmluZSBJTlBVVCBCQVNF
X0lOUFVUCi0jZWxzZQotIyBkZWZpbmUgSU5QVVQoYWRkcikgImEiIChhZGRy
KSwgQkFTRV9JTlBVVChhZGRyKQotI2VuZGlmCi0gICAgICAgIC8qCi0gICAg
ICAgICAqIE5vdGUgcmVnYXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RTX1BSRUZJ
WDogaXQncyBmYXN0ZXIgdG8gZG8gYSBjbGZsdXNoCi0gICAgICAgICAqICsg
cHJlZml4IHRoYW4gYSBjbGZsdXNoICsgbm9wLCBhbmQgaGVuY2UgdGhlIHBy
ZWZpeCBpcyBhZGRlZCBpbnN0ZWFkCi0gICAgICAgICAqIG9mIGxldHRpbmcg
dGhlIGFsdGVybmF0aXZlIGZyYW1ld29yayBmaWxsIHRoZSBnYXAgYnkgYXBw
ZW5kaW5nIG5vcHMuCi0gICAgICAgICAqLwotICAgICAgICBhbHRlcm5hdGl2
ZV9pb18yKCIuYnl0ZSAiIF9fc3RyaW5naWZ5KE5PUF9EU19QUkVGSVgpICI7
IGNsZmx1c2ggJVtwXSIsCi0gICAgICAgICAgICAgICAgICAgICAgICAgImRh
dGExNiBjbGZsdXNoICVbcF0iLCAvKiBjbGZsdXNob3B0ICovCi0gICAgICAg
ICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKLSAg
ICAgICAgICAgICAgICAgICAgICAgICBDTFdCX0VOQ09ESU5HLAotICAgICAg
ICAgICAgICAgICAgICAgICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91
dHB1dHMgKi8sCi0gICAgICAgICAgICAgICAgICAgICAgICAgSU5QVVQoYWRk
cikpOwotI3VuZGVmIElOUFVUCi0jdW5kZWYgQkFTRV9JTlBVVAotI3VuZGVm
IENMV0JfRU5DT0RJTkcKLQotICAgIGFsdGVybmF0aXZlXzIoIiIsICJzZmVu
Y2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BULAotICAgICAgICAgICAgICAg
ICAgICAgICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTFdCKTsKLX0KLQogLyog
QWxsb2NhdGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJl
c3MgKi8KIHVpbnQ2NF90IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQg
bG9uZyBucGFnZXMsIG5vZGVpZF90IG5vZGUpCiB7CkBAIC0yNzEsNyArMjI0
LDcgQEAgdWludDY0X3QgYWxsb2NfcGd0YWJsZV9tYWRkcih1bnNpZ25lZCBs
b25nIG5wYWdlcywgbm9kZWlkX3Qgbm9kZSkKICAgICAgICAgY2xlYXJfcGFn
ZSh2YWRkcik7CiAKICAgICAgICAgaWYgKCAoaW9tbXVfb3BzLmluaXQgPyAm
aW9tbXVfb3BzIDogJnZ0ZF9vcHMpLT5zeW5jX2NhY2hlICkKLSAgICAgICAg
ICAgIHN5bmNfY2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgICAg
ICBjYWNoZV93cml0ZWJhY2sodmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAg
IHVubWFwX2RvbWFpbl9wYWdlKHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7
CiAgICAgfQpAQCAtMTMwMiw3ICsxMjU1LDcgQEAgaW50IF9faW5pdCBpb21t
dV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpCiAgICAgaW9t
bXUtPm5yX3B0X2xldmVscyA9IGFnYXdfdG9fbGV2ZWwoYWdhdyk7CiAKICAg
ICBpZiAoICFlY2FwX2NvaGVyZW50KGlvbW11LT5lY2FwKSApCi0gICAgICAg
IHZ0ZF9vcHMuc3luY19jYWNoZSA9IHN5bmNfY2FjaGU7CisgICAgICAgIHZ0
ZF9vcHMuc3luY19jYWNoZSA9IGNhY2hlX3dyaXRlYmFjazsKIAogICAgIC8q
IGFsbG9jYXRlIGRvbWFpbiBpZCBiaXRtYXAgKi8KICAgICBpb21tdS0+ZG9t
aWRfYml0bWFwID0geHphbGxvY19hcnJheSh1bnNpZ25lZCBsb25nLCBCSVRT
X1RPX0xPTkdTKG5yX2RvbSkpOwpkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQuYyBiL3hlbi9kcml2ZXJzL3Bhc3N0
aHJvdWdoL3Z0ZC94ODYvdnRkLmMKaW5kZXggNjY4MWRjY2Q2OTcwLi41NWYw
ZmFhNTIxY2IgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC94ODYvdnRkLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gv
dnRkL3g4Ni92dGQuYwpAQCAtNDcsMTEgKzQ3LDYgQEAgdm9pZCB1bm1hcF92
dGRfZG9tYWluX3BhZ2UoY29uc3Qgdm9pZCAqdmEpCiAgICAgdW5tYXBfZG9t
YWluX3BhZ2UodmEpOwogfQogCi11bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xp
bmVfc2l6ZSh2b2lkKQotewotICAgIHJldHVybiAoKGNwdWlkX2VieCgxKSA+
PiA4KSAmIDB4ZmYpICogODsKLX0KLQogdm9pZCBmbHVzaF9hbGxfY2FjaGUo
KQogewogICAgIHdiaW52ZCgpOwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUv
YXNtLXg4Ni9jYWNoZS5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jYWNoZS5o
CmluZGV4IDFmNzE3M2Q4YzcyYy4uZTQ3NzBlZmIyMmI5IDEwMDY0NAotLS0g
YS94ZW4vaW5jbHVkZS9hc20teDg2L2NhY2hlLmgKKysrIGIveGVuL2luY2x1
ZGUvYXNtLXg4Ni9jYWNoZS5oCkBAIC0xMSw0ICsxMSwxMSBAQAogCiAjZGVm
aW5lIF9fcmVhZF9tb3N0bHkgX19zZWN0aW9uKCIuZGF0YS5yZWFkX21vc3Rs
eSIpCiAKKyNpZm5kZWYgX19BU1NFTUJMWV9fCisKK3ZvaWQgY2FjaGVfZmx1
c2goY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpOwordm9p
ZCBjYWNoZV93cml0ZWJhY2soY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQg
aW50IHNpemUpOworCisjZW5kaWYKKwogI2VuZGlmCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.15-4.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.15-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L2FtZDogV29yayBhcm91bmQgQ0xGTFVTSCBvcmRl
cmluZyBvbiBvbGRlciBwYXJ0cwoKT24gcHJlLUNMRkxVU0hPUFQgQU1EIENQ
VXMsIENMRkxVU0ggaXMgd2Vha2VseSBvcmRlcmVkIHdpdGggZXZlcnl0aGlu
ZywKaW5jbHVkaW5nIHJlYWRzIGFuZCB3cml0ZXMgdG8gdGhlIGFkZHJlc3Ms
IGFuZCBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KClRoaXMgY3JlYXRl
cyBhIG11bHRpdHVkZSBvZiBwcm9ibGVtYXRpYyBjb3JuZXIgY2FzZXMsIGxh
aWQgb3V0IGluIHRoZSBtYW51YWwuCkFycmFuZ2UgdG8gdXNlIE1GRU5DRSBv
biBib3RoIHNpZGVzIG9mIHRoZSBDTEZMVVNIIHRvIGZvcmNlIHByb3BlciBv
cmRlcmluZy4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9m
Zi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCAxZWU2ODdkMGQyMjQuLjk4NjY3
MmEwNzJiNyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC03ODcsNiArNzg3LDE0
IEBAIHN0YXRpYyB2b2lkIGluaXRfYW1kKHN0cnVjdCBjcHVpbmZvX3g4NiAq
YykKIAlpZiAoIWNwdV9oYXNfbGZlbmNlX2Rpc3BhdGNoKQogCQlfX3NldF9i
aXQoWDg2X0ZFQVRVUkVfTUZFTkNFX1JEVFNDLCBjLT54ODZfY2FwYWJpbGl0
eSk7CiAKKwkvKgorCSAqIE9uIHByZS1DTEZMVVNIT1BUIEFNRCBDUFVzLCBD
TEZMVVNIIGlzIHdlYWtseSBvcmRlcmVkIHdpdGgKKwkgKiBldmVyeXRoaW5n
LCBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRlcyB0byBhZGRyZXNzLCBhbmQK
KwkgKiBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KKwkgKi8KKwlpZiAo
IWNwdV9oYXNfY2xmbHVzaG9wdCkKKwkJc2V0dXBfZm9yY2VfY3B1X2NhcChY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFKTsKKwogCXN3aXRjaChjLT54ODYpCiAJ
ewogCWNhc2UgMHhmIC4uLiAweDExOgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMgYi94ZW4vYXJjaC94ODYvZmx1c2h0bGIuYwppbmRl
eCAwYzkxMmI4NjY5ZjguLmRjYmI0MDY0MDEyZSAxMDA2NDQKLS0tIGEveGVu
L2FyY2gveDg2L2ZsdXNodGxiLmMKKysrIGIveGVuL2FyY2gveDg2L2ZsdXNo
dGxiLmMKQEAgLTI1OSw2ICsyNTksMTMgQEAgdW5zaWduZWQgaW50IGZsdXNo
X2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFn
cykKICAgICByZXR1cm4gZmxhZ3M7CiB9CiAKKy8qCisgKiBPbiBwcmUtQ0xG
TFVTSE9QVCBBTUQgQ1BVcywgQ0xGTFVTSCBpcyB3ZWFrbHkgb3JkZXJlZCB3
aXRoIGV2ZXJ5dGhpbmcsCisgKiBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRl
cyB0byBhZGRyZXNzLCBhbmQgTEZFTkNFL1NGRU5DRSBpbnN0cnVjdGlvbnMu
CisgKgorICogVGhpcyBmdW5jdGlvbiBvbmx5IHdvcmtzIHNhZmVseSBhZnRl
ciBhbHRlcm5hdGl2ZXMgaGF2ZSBydW4uICBMdWNraWx5LCBhdAorICogdGhl
IHRpbWUgb2Ygd3JpdGluZywgd2UgZG9uJ3QgZmx1c2ggdGhlIGNhY2hlcyB0
aGF0IGVhcmx5LgorICovCiB2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIC8qCkBAIC0yNjgs
NiArMjc1LDggQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICB1bnNpZ25lZCBpbnQgY2xmbHVz
aF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVzaF9zaXplID86
IDE2OwogICAgIGNvbnN0IHZvaWQgKmVuZCA9IGFkZHIgKyBzaXplOwogCisg
ICAgYWx0ZXJuYXRpdmUoIiIsICJtZmVuY2UiLCBYODZfQlVHX0NMRkxVU0hf
TUZFTkNFKTsKKwogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRkciAm
IChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFkZHIgPCBlbmQ7
IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKICAgICB7CkBAIC0yODMsNyArMjky
LDkgQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRyLCB1bnNp
Z25lZCBpbnQgc2l6ZSkKICAgICAgICAgICAgICAgICAgICAgICAgW3BdICJt
IiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CiAgICAgfQogCi0gICAgYWx0
ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BU
KTsKKyAgICBhbHRlcm5hdGl2ZV8yKCIiLAorICAgICAgICAgICAgICAgICAg
InNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisgICAgICAgICAg
ICAgICAgICAibWZlbmNlIiwgWDg2X0JVR19DTEZMVVNIX01GRU5DRSk7CiB9
CiAKIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKQpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9jcHVmZWF0dXJlcy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVm
ZWF0dXJlcy5oCmluZGV4IGZlMmY5NzM1NGZiNi4uMDlmNjE5NDU5YmM3IDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NwdWZlYXR1cmVzLmgK
KysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCkBAIC00
Niw2ICs0Niw3IEBAIFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAg
ICBYODZfU1lOVEgoMjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJh
bmNoCiAjZGVmaW5lIFg4Nl9CVUcoeCkgKChGU0NBUElOVFMgKyBYODZfTlJf
U1lOVEgpICogMzIgKyAoeCkpCiAKICNkZWZpbmUgWDg2X0JVR19GUFVfUFRS
UyAgICAgICAgICBYODZfQlVHKCAwKSAvKiAoRilYe1NBVkUsUlNUT1J9IGRv
ZXNuJ3Qgc2F2ZS9yZXN0b3JlIEZPUC9GSVAvRkRQLiAqLworI2RlZmluZSBY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFICAgIFg4Nl9CVUcoIDIpIC8qIE1GRU5D
RSBuZWVkZWQgdG8gc2VyaWFsaXNlIENMRkxVU0ggKi8KIAogLyogVG90YWwg
bnVtYmVyIG9mIGNhcGFiaWxpdHkgd29yZHMsIGluYyBzeW50aCBhbmQgYnVn
IHdvcmRzLiAqLwogI2RlZmluZSBOQ0FQSU5UUyAoRlNDQVBJTlRTICsgWDg2
X05SX1NZTlRIICsgWDg2X05SX0JVRykgLyogTiAzMi1iaXQgd29yZHMgd29y
dGggb2YgaW5mbyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.15-5.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.15-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBUcmFjayBhbmQgZmx1c2ggbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mIFJBTQoKVGhlcmUgYXJlIGxlZ2l0aW1hdGUgdXNl
cyBvZiBXQyBtYXBwaW5ncyBvZiBSQU0sIGUuZy4gZm9yIERNQSBidWZmZXJz
IHdpdGgKZGV2aWNlcyB0aGF0IG1ha2Ugbm9uLWNvaGVyZW50IHdyaXRlcy4g
IFRoZSBMaW51eCBzb3VuZCBzdWJzeXN0ZW0gbWFrZXMKZXh0ZW5zaXZlIHVz
ZSBvZiB0aGlzIHRlY2huaXF1ZS4KCkZvciBzdWNoIHVzZWNhc2VzLCB0aGUg
Z3Vlc3QncyBETUEgYnVmZmVyIGlzIG1hcHBlZCBhbmQgY29uc2lzdGVudGx5
IHVzZWQgYXMKV0MsIGFuZCBYZW4gZG9lc24ndCBpbnRlcmFjdCB3aXRoIHRo
ZSBidWZmZXIuCgpIb3dldmVyLCBhIG1pc2NoZXZpb3VzIGd1ZXN0IGNhbiB1
c2UgV0MgbWFwcGluZ3MgdG8gZGVsaWJlcmF0ZWx5IGNyZWF0ZQpub24tY29o
ZXJlbmN5IGJldHdlZW4gdGhlIGNhY2hlIGFuZCBSQU0sIGFuZCB1c2UgdGhp
cyB0byB0cmljayBYZW4gaW50bwp2YWxpZGF0aW5nIGEgcGFnZXRhYmxlIHdo
aWNoIGlzbid0IGFjdHVhbGx5IHNhZmUuCgpBbGxvY2F0ZSBhIG5ldyBQR1Rf
bm9uX2NvaGVyZW50IHRvIHRyYWNrIHRoZSBub24tY29oZXJlbmN5IG9mIG1h
cHBpbmdzLiAgU2V0Cml0IHdoZW5ldmVyIGEgbm9uLWNvaGVyZW50IHdyaXRl
YWJsZSBtYXBwaW5nIGlzIGNyZWF0ZWQuICBJZiB0aGUgcGFnZSBpcyB1c2Vk
CmFzIGFueXRoaW5nIG90aGVyIHRoYW4gUEdUX3dyaXRhYmxlX3BhZ2UsIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlCnZhbGlkYXRpb24uICBBbHNvIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlIHRoZSBwYWdlIGlzIHJldHVybmVk
IHRvIHRoZSBoZWFwLgoKVGhpcyBpcyBDVkUtMjAyMi0yNjM2NCwgcGFydCBv
ZiBYU0EtNDAyLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3
LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDZj
ZThjMTlkY2VjYy4uMTc1OWI4NGJhOTdjIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtOTk3LDYg
Kzk5NywxNSBAQCBnZXRfcGFnZV9mcm9tX2wxZSgKICAgICAgICAgcmV0dXJu
IC1FQUNDRVM7CiAgICAgfQogCisgICAgLyoKKyAgICAgKiBUcmFjayB3cml0
ZWFibGUgbm9uLWNvaGVyZW50IG1hcHBpbmdzIHRvIFJBTSBwYWdlcywgdG8g
dHJpZ2dlciBhIGNhY2hlCisgICAgICogZmx1c2ggbGF0ZXIgaWYgdGhlIHRh
cmdldCBpcyB1c2VkIGFzIGFueXRoaW5nIGJ1dCBhIFBHVF93cml0ZWFibGUg
cGFnZS4KKyAgICAgKiBXZSBjYXJlIGFib3V0IGFsbCB3cml0ZWFibGUgbWFw
cGluZ3MsIGluY2x1ZGluZyBmb3JlaWduIG1hcHBpbmdzLgorICAgICAqLwor
ICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9YRU5fU0VMRlNO
T09QKSAmJgorICAgICAgICAgKGwxZiAmIChQQUdFX0NBQ0hFX0FUVFJTIHwg
X1BBR0VfUlcpKSA9PSAoX1BBR0VfV0MgfCBfUEFHRV9SVykgKQorICAgICAg
ICBzZXRfYml0KF9QR1Rfbm9uX2NvaGVyZW50LCAmcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pOworCiAgICAgcmV0dXJuIDA7CiAKICBjb3VsZF9ub3RfcGlu
OgpAQCAtMjQ0Miw2ICsyNDUxLDE5IEBAIHN0YXRpYyBpbnQgY2xlYW51cF9w
YWdlX21hcHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UpCiAgICAgICAg
IH0KICAgICB9CiAKKyAgICAvKgorICAgICAqIEZsdXNoIHRoZSBjYWNoZSBp
ZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IHdyaXRlYWJs
ZQorICAgICAqIG1hcHBpbmdzIG9mIHRoaXMgcGFnZS4gIFRoaXMgZm9yY2Vz
IHRoZSBwYWdlIHRvIGJlIGNvaGVyZW50IGJlZm9yZSBpdAorICAgICAqIGlz
IGZyZWVkIGJhY2sgdG8gdGhlIGhlYXAuCisgICAgICovCisgICAgaWYgKCBf
X3Rlc3RfYW5kX2NsZWFyX2JpdChfUEdUX25vbl9jb2hlcmVudCwgJnBhZ2Ut
PnUuaW51c2UudHlwZV9pbmZvKSApCisgICAgeworICAgICAgICB2b2lkICph
ZGRyID0gX19tYXBfZG9tYWluX3BhZ2UocGFnZSk7CisKKyAgICAgICAgY2Fj
aGVfZmx1c2goYWRkciwgUEFHRV9TSVpFKTsKKyAgICAgICAgdW5tYXBfZG9t
YWluX3BhZ2UoYWRkcik7CisgICAgfQorCiAgICAgcmV0dXJuIHJjOwogfQog
CkBAIC0zMDE2LDYgKzMwMzgsMjIgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2Vf
dHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5
cGUsCiAgICAgaWYgKCB1bmxpa2VseSghKG54ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICB7CiAgICAgICAgIC8qCisgICAgICAgICAqIEZsdXNoIHRoZSBj
YWNoZSBpZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IG1h
cHBpbmdzIG9mCisgICAgICAgICAqIHRoaXMgcGFnZSwgYW5kIHdlJ3JlIHRy
eWluZyB0byB1c2UgaXQgYXMgYW55dGhpbmcgb3RoZXIgdGhhbiBhCisgICAg
ICAgICAqIHdyaXRlYWJsZSBwYWdlLiAgVGhpcyBmb3JjZXMgdGhlIHBhZ2Ug
dG8gYmUgY29oZXJlbnQgYmVmb3JlIHdlCisgICAgICAgICAqIHZhbGlkYXRl
IGl0cyBjb250ZW50cyBmb3Igc2FmZXR5LgorICAgICAgICAgKi8KKyAgICAg
ICAgaWYgKCAobnggJiBQR1Rfbm9uX2NvaGVyZW50KSAmJiB0eXBlICE9IFBH
VF93cml0YWJsZV9wYWdlICkKKyAgICAgICAgeworICAgICAgICAgICAgdm9p
ZCAqYWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKHBhZ2UpOworCisgICAgICAg
ICAgICBjYWNoZV9mbHVzaChhZGRyLCBQQUdFX1NJWkUpOworICAgICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UoYWRkcik7CisKKyAgICAgICAgICAgIHBh
Z2UtPnUuaW51c2UudHlwZV9pbmZvICY9IH5QR1Rfbm9uX2NvaGVyZW50Owor
ICAgICAgICB9CisKKyAgICAgICAgLyoKICAgICAgICAgICogTm8gc3BlY2lh
bCB2YWxpZGF0aW9uIG5lZWRlZCBmb3Igd3JpdGFibGUgb3Igc2hhcmVkIHBh
Z2VzLiAgUGFnZQogICAgICAgICAgKiB0YWJsZXMgYW5kIEdEVC9MRFQgbmVl
ZCB0byBoYXZlIHRoZWlyIGNvbnRlbnRzIGF1ZGl0ZWQuCiAgICAgICAgICAq
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvcHYvZ3JhbnRfdGFibGUuYyBi
L3hlbi9hcmNoL3g4Ni9wdi9ncmFudF90YWJsZS5jCmluZGV4IDAzMjU2MThj
OTg4My4uODFjNzJlNjFlZDU1IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
cHYvZ3JhbnRfdGFibGUuYworKysgYi94ZW4vYXJjaC94ODYvcHYvZ3JhbnRf
dGFibGUuYwpAQCAtMTA5LDcgKzEwOSwxNyBAQCBpbnQgY3JlYXRlX2dyYW50
X3B2X21hcHBpbmcodWludDY0X3QgYWRkciwgbWZuX3QgZnJhbWUsCiAKICAg
ICBvbDFlID0gKnBsMWU7CiAgICAgaWYgKCBVUERBVEVfRU5UUlkobDEsIHBs
MWUsIG9sMWUsIG5sMWUsIGdsMW1mbiwgY3VyciwgMCkgKQorICAgIHsKKyAg
ICAgICAgLyoKKyAgICAgICAgICogV2UgYWx3YXlzIGNyZWF0ZSBtYXBwaW5n
cyBpbiB0aGlzIHBhdGguICBIb3dldmVyLCBvdXIgY2FsbGVyLAorICAgICAg
ICAgKiBtYXBfZ3JhbnRfcmVmKCksIG9ubHkgcGFzc2VzIHBvdGVudGlhbGx5
IG5vbi16ZXJvIGNhY2hlX2ZsYWdzIGZvcgorICAgICAgICAgKiBNTUlPIGZy
YW1lcywgc28gdGhpcyBwYXRoIGRvZXNuJ3QgY3JlYXRlIG5vbi1jb2hlcmVu
dCBtYXBwaW5ncyBvZgorICAgICAgICAgKiBSQU0gZnJhbWVzIGFuZCB0aGVy
ZSdzIG5vIG5lZWQgdG8gY2FsY3VsYXRlIFBHVF9ub25fY29oZXJlbnQuCisg
ICAgICAgICAqLworICAgICAgICBBU1NFUlQoIWNhY2hlX2ZsYWdzIHx8IGlz
X2lvbWVtX3BhZ2UoZnJhbWUpKTsKKwogICAgICAgICByYyA9IEdOVFNUX29r
YXk7CisgICAgfQogCiAgb3V0X3VubG9jazoKICAgICBwYWdlX3VubG9jayhw
YWdlKTsKQEAgLTI5NCw3ICszMDQsMTggQEAgaW50IHJlcGxhY2VfZ3JhbnRf
cHZfbWFwcGluZyh1aW50NjRfdCBhZGRyLCBtZm5fdCBmcmFtZSwKICAgICAg
ICAgICAgICAgICAgbDFlX2dldF9mbGFncyhvbDFlKSwgYWRkciwgZ3JhbnRf
cHRlX2ZsYWdzKTsKIAogICAgIGlmICggVVBEQVRFX0VOVFJZKGwxLCBwbDFl
LCBvbDFlLCBubDFlLCBnbDFtZm4sIGN1cnIsIDApICkKKyAgICB7CisgICAg
ICAgIC8qCisgICAgICAgICAqIEdlbmVyYWxseSwgcmVwbGFjZV9ncmFudF9w
dl9tYXBwaW5nKCkgaXMgdXNlZCB0byBkZXN0cm95IG1hcHBpbmdzCisgICAg
ICAgICAqIChuMWxlID0gbDFlX2VtcHR5KCkpLCBidXQgaXQgY2FuIGJlIGEg
cHJlc2VudCBtYXBwaW5nIG9uIHRoZQorICAgICAgICAgKiBHTlRBQk9QX3Vu
bWFwX2FuZF9yZXBsYWNlIHBhdGguCisgICAgICAgICAqCisgICAgICAgICAq
IEluIHN1Y2ggY2FzZXMsIHRoZSBQVEUgaXMgZnVsbHkgdHJhbnNwbGFudGVk
IGZyb20gaXRzIG9sZCBsb2NhdGlvbgorICAgICAgICAgKiB2aWEgc3RlYWxf
bGluZWFyX2FkZHIoKSwgc28gd2UgbmVlZCBub3QgcGVyZm9ybSBQR1Rfbm9u
X2NvaGVyZW50CisgICAgICAgICAqIGNoZWNraW5nIGhlcmUuCisgICAgICAg
ICAqLwogICAgICAgICByYyA9IEdOVFNUX29rYXk7CisgICAgfQogCiAgb3V0
X3VubG9jazoKICAgICBwYWdlX3VubG9jayhwYWdlKTsKZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS14
ODYvbW0uaAppbmRleCBmNWI4ODYyYjgzNzQuLjVjMTliNzFlY2E3MCAxMDA2
NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCisrKyBiL3hlbi9p
bmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNTMsOCArNTMsMTIgQEAKICNkZWZp
bmUgX1BHVF9wYXJ0aWFsICAgICAgUEdfc2hpZnQoOCkKICNkZWZpbmUgUEdU
X3BhcnRpYWwgICAgICAgUEdfbWFzaygxLCA4KQogCisvKiBIYXMgdGhpcyBw
YWdlIGJlZW4gbWFwcGVkIHdyaXRlYWJsZSB3aXRoIGEgbm9uLWNvaGVyZW50
IG1lbW9yeSB0eXBlPyAqLworI2RlZmluZSBfUEdUX25vbl9jb2hlcmVudCBQ
R19zaGlmdCg5KQorI2RlZmluZSBQR1Rfbm9uX2NvaGVyZW50ICBQR19tYXNr
KDEsIDkpCisKICAvKiBDb3VudCBvZiB1c2VzIG9mIHRoaXMgZnJhbWUgYXMg
aXRzIGN1cnJlbnQgdHlwZS4gKi8KLSNkZWZpbmUgUEdUX2NvdW50X3dpZHRo
ICAgUEdfc2hpZnQoOCkKKyNkZWZpbmUgUEdUX2NvdW50X3dpZHRoICAgUEdf
c2hpZnQoOSkKICNkZWZpbmUgUEdUX2NvdW50X21hc2sgICAgKCgxVUw8PFBH
VF9jb3VudF93aWR0aCktMSkKIAogLyogQXJlIHRoZSAndHlwZSBtYXNrJyBi
aXRzIGlkZW50aWNhbD8gKi8K

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.16-1.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.16-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3BhZ2U6IEludHJvZHVjZSBfUEFHRV8qIGNvbnN0
YW50cyBmb3IgbWVtb3J5IHR5cGVzCgouLi4gcmF0aGVyIHRoYW4gb3BlbmNv
ZGluZyB0aGUgUEFUL1BDRC9QV1QgYXR0cmlidXRlcyBpbiBfX1BBR0VfSFlQ
RVJWSVNPUl8qCmNvbnN0YW50cy4gIFRoZXNlIGFyZSBnb2luZyB0byBiZSBu
ZWVkZWQgYnkgZm9ydGhjb21pbmcgbG9naWMuCgpObyBmdW5jdGlvbmFsIGNo
YW5nZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9mZi1i
eTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4K
UmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvcGFnZS5oCmluZGV4IDFkMDgwY2ZmYmU4NC4u
MmU1NDIwNTBmNjVhIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2
L3BhZ2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAt
MzMxLDYgKzMzMSwxNCBAQCB2b2lkIGVmaV91cGRhdGVfbDRfcGd0YWJsZSh1
bnNpZ25lZCBpbnQgbDRpZHgsIGw0X3BnZW50cnlfdCk7CiAKICNkZWZpbmUg
UEFHRV9DQUNIRV9BVFRSUyAoX1BBR0VfUEFUIHwgX1BBR0VfUENEIHwgX1BB
R0VfUFdUKQogCisvKiBNZW1vcnkgdHlwZXMsIGVuY29kZWQgdW5kZXIgWGVu
J3MgY2hvaWNlIG9mIE1TUl9QQVQuICovCisjZGVmaW5lIF9QQUdFX1dCICAg
ICAgICAgKCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMCkKKyNk
ZWZpbmUgX1BBR0VfV1QgICAgICAgICAoICAgICAgICAgICAgICAgICAgICAg
ICAgX1BBR0VfUFdUKQorI2RlZmluZSBfUEFHRV9VQ00gICAgICAgICggICAg
ICAgICAgICBfUEFHRV9QQ0QgICAgICAgICAgICApCisjZGVmaW5lIF9QQUdF
X1VDICAgICAgICAgKCAgICAgICAgICAgIF9QQUdFX1BDRCB8IF9QQUdFX1BX
VCkKKyNkZWZpbmUgX1BBR0VfV0MgICAgICAgICAoX1BBR0VfUEFUICAgICAg
ICAgICAgICAgICAgICAgICAgKQorI2RlZmluZSBfUEFHRV9XUCAgICAgICAg
IChfUEFHRV9QQVQgfCAgICAgICAgICAgICBfUEFHRV9QV1QpCisKIC8qCiAg
KiBEZWJ1ZyBvcHRpb246IEVuc3VyZSB0aGF0IGdyYW50ZWQgbWFwcGluZ3Mg
YXJlIG5vdCBpbXBsaWNpdGx5IHVubWFwcGVkLgogICogV0FSTklORzogVGhp
cyB3aWxsIG5lZWQgdG8gYmUgZGlzYWJsZWQgdG8gcnVuIE9TZXMgdGhhdCB1
c2UgdGhlIHNwYXJlIFBURQpAQCAtMzQ5LDggKzM1Nyw4IEBAIHZvaWQgZWZp
X3VwZGF0ZV9sNF9wZ3RhYmxlKHVuc2lnbmVkIGludCBsNGlkeCwgbDRfcGdl
bnRyeV90KTsKICNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfUlggICAgICAo
X1BBR0VfUFJFU0VOVCB8IF9QQUdFX0FDQ0VTU0VEKQogI2RlZmluZSBfX1BB
R0VfSFlQRVJWSVNPUiAgICAgICAgIChfX1BBR0VfSFlQRVJWSVNPUl9SWCB8
IFwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgX1BBR0Vf
RElSVFkgfCBfUEFHRV9SVykKLSNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1Jf
VUNNSU5VUyAoX19QQUdFX0hZUEVSVklTT1IgfCBfUEFHRV9QQ0QpCi0jZGVm
aW5lIF9fUEFHRV9IWVBFUlZJU09SX1VDICAgICAgKF9fUEFHRV9IWVBFUlZJ
U09SIHwgX1BBR0VfUENEIHwgX1BBR0VfUFdUKQorI2RlZmluZSBfX1BBR0Vf
SFlQRVJWSVNPUl9VQ01JTlVTIChfX1BBR0VfSFlQRVJWSVNPUiB8IF9QQUdF
X1VDTSkKKyNkZWZpbmUgX19QQUdFX0hZUEVSVklTT1JfVUMgICAgICAoX19Q
QUdFX0hZUEVSVklTT1IgfCBfUEFHRV9VQykKICNkZWZpbmUgX19QQUdFX0hZ
UEVSVklTT1JfU0hTVEsgICAoX19QQUdFX0hZUEVSVklTT1JfUk8gfCBfUEFH
RV9ESVJUWSkKIAogI2RlZmluZSBNQVBfU01BTExfUEFHRVMgX1BBR0VfQVZB
SUwwIC8qIGRvbid0IHVzZSBzdXBlcnBhZ2VzIG1hcHBpbmdzICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.16-2.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.16-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBEb24ndCBjaGFuZ2UgdGhlIGNhY2hlYWJpbGl0
eSBvZiB0aGUgZGlyZWN0bWFwCgpDaGFuZ2VzZXQgNTVmOTdmNDliN2NlICgi
eDg2OiBDaGFuZ2UgY2FjaGUgYXR0cmlidXRlcyBvZiBYZW4gMToxIHBhZ2Ug
bWFwcGluZ3MKaW4gcmVzcG9uc2UgdG8gZ3Vlc3QgbWFwcGluZyByZXF1ZXN0
cyIpIGF0dGVtcHRlZCB0byBrZWVwIHRoZSBjYWNoZWFiaWxpdHkKY29uc2lz
dGVudCBiZXR3ZWVuIGRpZmZlcmVudCBtYXBwaW5ncyBvZiB0aGUgc2FtZSBw
YWdlLgoKVGhlIHJlYXNvbiB3YXNuJ3QgZGVzY3JpYmVkIGluIHRoZSBjaGFu
Z2Vsb2csIGJ1dCBpdCBpcyB1bmRlcnN0b29kIHRvIGJlIGluCnJlZ2FyZHMg
dG8gYSBjb25jZXJuIG92ZXIgbWFjaGluZSBjaGVjayBleGNlcHRpb25zLCBv
d2luZyB0byBlcnJhdGEgd2hlbiB1c2luZwptaXhlZCBjYWNoZWFiaWxpdGll
cy4gIEl0IGRpZCB0aGlzIHByaW1hcmlseSBieSB1cGRhdGluZyBYZW4ncyBt
YXBwaW5nIG9mIHRoZQpwYWdlIGluIHRoZSBkaXJlY3QgbWFwIHdoZW4gdGhl
IGd1ZXN0IG1hcHBlZCBhIHBhZ2Ugd2l0aCByZWR1Y2VkIGNhY2hlYWJpbGl0
eS4KClVuZm9ydHVuYXRlbHksIHRoZSBsb2dpYyBkaWRuJ3QgYWN0dWFsbHkg
cHJldmVudCBtaXhlZCBjYWNoZWFiaWxpdHkgZnJvbQpvY2N1cnJpbmc6CiAq
IEEgZ3Vlc3QgY291bGQgbWFwIGEgcGFnZSBub3JtYWxseSwgYW5kIHRoZW4g
bWFwIHRoZSBzYW1lIHBhZ2Ugd2l0aAogICBkaWZmZXJlbnQgY2FjaGVhYmls
aXR5OyBub3RoaW5nIHByZXZlbnRlZCB0aGlzLgogKiBUaGUgY2FjaGVhYmls
aXR5IG9mIHRoZSBkaXJlY3RtYXAgd2FzIGFsd2F5cyBsYXRlc3QtdGFrZXMt
cHJlY2VkZW5jZSBpbgogICB0ZXJtcyBvZiBndWVzdCByZXF1ZXN0cy4KICog
R3JhbnQtbWFwcGVkIGZyYW1lcyB3aXRoIGxlc3NlciBjYWNoZWFiaWxpdHkg
ZGlkbid0IGFkanVzdCB0aGUgcGFnZSdzCiAgIGNhY2hlYXR0ciBzZXR0aW5n
cy4KICogVGhlIG1hcF9kb21haW5fcGFnZSgpIGZ1bmN0aW9uIHN0aWxsIHVu
Y29uZGl0aW9uYWxseSBjcmVhdGVkIFdCIG1hcHBpbmdzLAogICBpcnJlc3Bl
Y3RpdmUgb2YgdGhlIHBhZ2UncyBjYWNoZWF0dHIgc2V0dGluZ3MuCgpBZGRp
dGlvbmFsbHksIHVwZGF0ZV94ZW5fbWFwcGluZ3MoKSBoYWQgYSBidWcgd2hl
cmUgdGhlIGFsaWFzIGNhbGN1bGF0aW9uIHdhcwp3cm9uZyBmb3IgbWZuJ3Mg
d2hpY2ggd2VyZSAuaW5pdCBjb250ZW50LCB3aGljaCBzaG91bGQgaGF2ZSBi
ZWVuIHRyZWF0ZWQgYXMKZnVsbHkgZ3Vlc3QgcGFnZXMsIG5vdCBYZW4gcGFn
ZXMuCgpXb3JzZSB5ZXQsIHRoZSBsb2dpYyBpbnRyb2R1Y2VkIGEgdnVsbmVy
YWJpbGl0eSB3aGVyZWJ5IG5lY2Vzc2FyeQpwYWdldGFibGUvc2VnZGVzYyBh
ZGp1c3RtZW50cyBtYWRlIGJ5IFhlbiBpbiB0aGUgdmFsaWRhdGlvbiBsb2dp
YyBjb3VsZCBiZWNvbWUKbm9uLWNvaGVyZW50IGJldHdlZW4gdGhlIGNhY2hl
IGFuZCBtYWluIG1lbW9yeS4gIFRoZSBDUFUgY291bGQgc3Vic2VxdWVudGx5
Cm9wZXJhdGUgb24gdGhlIHN0YWxlIHZhbHVlIGluIHRoZSBjYWNoZSwgcmF0
aGVyIHRoYW4gdGhlIHNhZmUgdmFsdWUgaW4gbWFpbgptZW1vcnkuCgpUaGUg
ZGlyZWN0bWFwIGNvbnRhaW5zIHByaW1hcmlseSBtYXBwaW5ncyBvZiBSQU0u
ICBQQVQvTVRSUiBjb25mbGljdApyZXNvbHV0aW9uIGlzIGFzeW1tZXRyaWMs
IGFuZCBnZW5lcmFsbHkgZm9yIE1UUlI9V0IgcmFuZ2VzLCBQQVQgb2YgbGVz
c2VyCmNhY2hlYWJpbGl0eSByZXNvbHZlcyB0byBiZWluZyBjb2hlcmVudC4g
IFRoZSBzcGVjaWFsIGNhc2UgaXMgV0MgbWFwcGluZ3MsCndoaWNoIGFyZSBu
b24tY29oZXJlbnQgYWdhaW5zdCBNVFJSPVdCIHJlZ2lvbnMgKGV4Y2VwdCBm
b3IgZnVsbHktY29oZXJlbnQKQ1BVcykuCgpYZW4gbXVzdCBub3QgaGF2ZSBh
bnkgV0MgY2FjaGVhYmlsaXR5IGluIHRoZSBkaXJlY3RtYXAsIHRvIHByZXZl
bnQgWGVuJ3MKYWN0aW9ucyBmcm9tIGNyZWF0aW5nIG5vbi1jb2hlcmVuY3ku
ICAoR3Vlc3QgYWN0aW9ucyBjcmVhdGluZyBub24tY29oZXJlbmN5IGlzCmRl
YWx0IHdpdGggaW4gc3Vic2VxdWVudCBwYXRjaGVzLikgIEFzIGFsbCBtZW1v
cnkgdHlwZXMgZm9yIE1UUlI9V0IgcmFuZ2VzCmludGVyLW9wZXJhdGUgY29o
ZXJlbnRseSwgc28gbGVhdmUgWGVuJ3MgZGlyZWN0bWFwIG1hcHBpbmdzIGFz
IFdCLgoKT25seSBQViBndWVzdHMgd2l0aCBhY2Nlc3MgdG8gZGV2aWNlcyBj
YW4gdXNlIHJlZHVjZWQtY2FjaGVhYmlsaXR5IG1hcHBpbmdzIHRvCmJlZ2lu
IHdpdGgsIGFuZCB0aGV5J3JlIHRydXN0ZWQgbm90IHRvIG1vdW50IERvU3Mg
YWdhaW5zdCB0aGUgc3lzdGVtIGFueXdheS4KCkRyb3AgUEdDX2NhY2hlYXR0
cl97YmFzZSxtYXNrfSBlbnRpcmVseSwgYW5kIHRoZSBsb2dpYyB0byBtYW5p
cHVsYXRlIHRoZW0uClNoaWZ0IHRoZSBsYXRlciBQR0NfKiBjb25zdGFudHMg
dXAsIHRvIGdhaW4gMyBleHRyYSBiaXRzIGluIHRoZSBtYWluIHJlZmVyZW5j
ZQpjb3VudC4gIFJldGFpbiB0aGUgY2hlY2sgaW4gZ2V0X3BhZ2VfZnJvbV9s
MWUoKSBmb3Igc3BlY2lhbF9wYWdlcygpIGJlY2F1c2UgYQpndWVzdCBoYXMg
bm8gYnVzaW5lc3MgdXNpbmcgcmVkdWNlZCBjYWNoZWFiaWxpdHkgb24gdGhl
c2UuCgpUaGlzIHJldmVydHMgY2hhbmdlc2V0IDU1Zjk3ZjQ5YjdjZTZjMzUy
MGM1NTVkMTljYWFjNmNmM2Y5YTVkZjAKClRoaXMgaXMgQ1ZFLTIwMjItMjYz
NjMsIHBhcnQgb2YgWFNBLTQwMi4KClNpZ25lZC1vZmYtYnk6IEFuZHJldyBD
b29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Cgpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94ODYv
bW0uYwppbmRleCBjNjQyOWIwZjc0OWEuLmFiMzJkMTNhMWEwZCAxMDA2NDQK
LS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2FyY2gveDg2L21t
LmMKQEAgLTc4MywyOCArNzgzLDYgQEAgYm9vbCBpc19pb21lbV9wYWdlKG1m
bl90IG1mbikKICAgICByZXR1cm4gKHBhZ2VfZ2V0X293bmVyKHBhZ2UpID09
IGRvbV9pbyk7CiB9CiAKLXN0YXRpYyBpbnQgdXBkYXRlX3hlbl9tYXBwaW5n
cyh1bnNpZ25lZCBsb25nIG1mbiwgdW5zaWduZWQgaW50IGNhY2hlYXR0cikK
LXsKLSAgICBpbnQgZXJyID0gMDsKLSAgICBib29sIGFsaWFzID0gbWZuID49
IFBGTl9ET1dOKHhlbl9waHlzX3N0YXJ0KSAmJgotICAgICAgICAgbWZuIDwg
UEZOX1VQKHhlbl9waHlzX3N0YXJ0ICsgeGVuX3ZpcnRfZW5kIC0gWEVOX1ZJ
UlRfU1RBUlQpOwotICAgIHVuc2lnbmVkIGxvbmcgeGVuX3ZhID0KLSAgICAg
ICAgWEVOX1ZJUlRfU1RBUlQgKyAoKG1mbiAtIFBGTl9ET1dOKHhlbl9waHlz
X3N0YXJ0KSkgPDwgUEFHRV9TSElGVCk7Ci0KLSAgICBpZiAoIGJvb3RfY3B1
X2hhcyhYODZfRkVBVFVSRV9YRU5fU0VMRlNOT09QKSApCi0gICAgICAgIHJl
dHVybiAwOwotCi0gICAgaWYgKCB1bmxpa2VseShhbGlhcykgJiYgY2FjaGVh
dHRyICkKLSAgICAgICAgZXJyID0gbWFwX3BhZ2VzX3RvX3hlbih4ZW5fdmEs
IF9tZm4obWZuKSwgMSwgMCk7Ci0gICAgaWYgKCAhZXJyICkKLSAgICAgICAg
ZXJyID0gbWFwX3BhZ2VzX3RvX3hlbigodW5zaWduZWQgbG9uZyltZm5fdG9f
dmlydChtZm4pLCBfbWZuKG1mbiksIDEsCi0gICAgICAgICAgICAgICAgICAg
ICBQQUdFX0hZUEVSVklTT1IgfCBjYWNoZWF0dHJfdG9fcHRlX2ZsYWdzKGNh
Y2hlYXR0cikpOwotICAgIGlmICggdW5saWtlbHkoYWxpYXMpICYmICFjYWNo
ZWF0dHIgJiYgIWVyciApCi0gICAgICAgIGVyciA9IG1hcF9wYWdlc190b194
ZW4oeGVuX3ZhLCBfbWZuKG1mbiksIDEsIFBBR0VfSFlQRVJWSVNPUik7Ci0K
LSAgICByZXR1cm4gZXJyOwotfQotCiAjaWZuZGVmIE5ERUJVRwogc3RydWN0
IG1taW9fZW11bF9yYW5nZV9jdHh0IHsKICAgICBjb25zdCBzdHJ1Y3QgZG9t
YWluICpkOwpAQCAtMTAwOSw0NyArOTg3LDE0IEBAIGdldF9wYWdlX2Zyb21f
bDFlKAogICAgICAgICBnb3RvIGNvdWxkX25vdF9waW47CiAgICAgfQogCi0g
ICAgaWYgKCBwdGVfZmxhZ3NfdG9fY2FjaGVhdHRyKGwxZikgIT0KLSAgICAg
ICAgICgocGFnZS0+Y291bnRfaW5mbyAmIFBHQ19jYWNoZWF0dHJfbWFzaykg
Pj4gUEdDX2NhY2hlYXR0cl9iYXNlKSApCisgICAgaWYgKCAobDFmICYgUEFH
RV9DQUNIRV9BVFRSUykgIT0gX1BBR0VfV0IgJiYgaXNfc3BlY2lhbF9wYWdl
KHBhZ2UpICkKICAgICB7Ci0gICAgICAgIHVuc2lnbmVkIGxvbmcgeCwgbngs
IHkgPSBwYWdlLT5jb3VudF9pbmZvOwotICAgICAgICB1bnNpZ25lZCBsb25n
IGNhY2hlYXR0ciA9IHB0ZV9mbGFnc190b19jYWNoZWF0dHIobDFmKTsKLSAg
ICAgICAgaW50IGVycjsKLQotICAgICAgICBpZiAoIGlzX3NwZWNpYWxfcGFn
ZShwYWdlKSApCi0gICAgICAgIHsKLSAgICAgICAgICAgIGlmICggd3JpdGUg
KQotICAgICAgICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFnZSk7Ci0gICAg
ICAgICAgICBwdXRfcGFnZShwYWdlKTsKLSAgICAgICAgICAgIGdkcHJpbnRr
KFhFTkxPR19XQVJOSU5HLAotICAgICAgICAgICAgICAgICAgICAgIkF0dGVt
cHQgdG8gY2hhbmdlIGNhY2hlIGF0dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFn
ZVxuIik7Ci0gICAgICAgICAgICByZXR1cm4gLUVBQ0NFUzsKLSAgICAgICAg
fQotCi0gICAgICAgIGRvIHsKLSAgICAgICAgICAgIHggID0geTsKLSAgICAg
ICAgICAgIG54ID0gKHggJiB+UEdDX2NhY2hlYXR0cl9tYXNrKSB8IChjYWNo
ZWF0dHIgPDwgUEdDX2NhY2hlYXR0cl9iYXNlKTsKLSAgICAgICAgfSB3aGls
ZSAoICh5ID0gY21weGNoZygmcGFnZS0+Y291bnRfaW5mbywgeCwgbngpKSAh
PSB4ICk7Ci0KLSAgICAgICAgZXJyID0gdXBkYXRlX3hlbl9tYXBwaW5ncyht
Zm4sIGNhY2hlYXR0cik7Ci0gICAgICAgIGlmICggdW5saWtlbHkoZXJyKSAp
Ci0gICAgICAgIHsKLSAgICAgICAgICAgIGNhY2hlYXR0ciA9IHkgJiBQR0Nf
Y2FjaGVhdHRyX21hc2s7Ci0gICAgICAgICAgICBkbyB7Ci0gICAgICAgICAg
ICAgICAgeCAgPSB5OwotICAgICAgICAgICAgICAgIG54ID0gKHggJiB+UEdD
X2NhY2hlYXR0cl9tYXNrKSB8IGNhY2hlYXR0cjsKLSAgICAgICAgICAgIH0g
d2hpbGUgKCAoeSA9IGNtcHhjaGcoJnBhZ2UtPmNvdW50X2luZm8sIHgsIG54
KSkgIT0geCApOwotCi0gICAgICAgICAgICBpZiAoIHdyaXRlICkKLSAgICAg
ICAgICAgICAgICBwdXRfcGFnZV90eXBlKHBhZ2UpOwotICAgICAgICAgICAg
cHV0X3BhZ2UocGFnZSk7Ci0KLSAgICAgICAgICAgIGdkcHJpbnRrKFhFTkxP
R19XQVJOSU5HLCAiRXJyb3IgdXBkYXRpbmcgbWFwcGluZ3MgZm9yIG1mbiAl
IiBQUklfbWZuCi0gICAgICAgICAgICAgICAgICAgICAiIChwZm4gJSIgUFJJ
X3BmbiAiLCBmcm9tIEwxIGVudHJ5ICUiIFBSSXB0ZSAiKSBmb3IgZCVkXG4i
LAotICAgICAgICAgICAgICAgICAgICAgbWZuLCBnZXRfZ3Bmbl9mcm9tX21m
bihtZm4pLAotICAgICAgICAgICAgICAgICAgICAgbDFlX2dldF9pbnRwdGUo
bDFlKSwgbDFlX293bmVyLT5kb21haW5faWQpOwotICAgICAgICAgICAgcmV0
dXJuIGVycjsKLSAgICAgICAgfQorICAgICAgICBpZiAoIHdyaXRlICkKKyAg
ICAgICAgICAgIHB1dF9wYWdlX3R5cGUocGFnZSk7CisgICAgICAgIHB1dF9w
YWdlKHBhZ2UpOworICAgICAgICBnZHByaW50ayhYRU5MT0dfV0FSTklORywK
KyAgICAgICAgICAgICAgICAgIkF0dGVtcHQgdG8gY2hhbmdlIGNhY2hlIGF0
dHJpYnV0ZXMgb2YgWGVuIGhlYXAgcGFnZVxuIik7CisgICAgICAgIHJldHVy
biAtRUFDQ0VTOwogICAgIH0KIAogICAgIHJldHVybiAwOwpAQCAtMjQ2Nywy
NSArMjQxMiwxMCBAQCBzdGF0aWMgaW50IG1vZF9sNF9lbnRyeShsNF9wZ2Vu
dHJ5X3QgKnBsNGUsCiAgKi8KIHN0YXRpYyBpbnQgY2xlYW51cF9wYWdlX21h
cHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UpCiB7Ci0gICAgdW5zaWdu
ZWQgaW50IGNhY2hlYXR0ciA9Ci0gICAgICAgIChwYWdlLT5jb3VudF9pbmZv
ICYgUEdDX2NhY2hlYXR0cl9tYXNrKSA+PiBQR0NfY2FjaGVhdHRyX2Jhc2U7
CiAgICAgaW50IHJjID0gMDsKICAgICB1bnNpZ25lZCBsb25nIG1mbiA9IG1m
bl94KHBhZ2VfdG9fbWZuKHBhZ2UpKTsKIAogICAgIC8qCi0gICAgICogSWYg
d2UndmUgbW9kaWZpZWQgeGVuIG1hcHBpbmdzIGFzIGEgcmVzdWx0IG9mIGd1
ZXN0IGNhY2hlCi0gICAgICogYXR0cmlidXRlcywgcmVzdG9yZSB0aGVtIHRv
IHRoZSAibm9ybWFsIiBzdGF0ZS4KLSAgICAgKi8KLSAgICBpZiAoIHVubGlr
ZWx5KGNhY2hlYXR0cikgKQotICAgIHsKLSAgICAgICAgcGFnZS0+Y291bnRf
aW5mbyAmPSB+UEdDX2NhY2hlYXR0cl9tYXNrOwotCi0gICAgICAgIEJVR19P
Tihpc19zcGVjaWFsX3BhZ2UocGFnZSkpOwotCi0gICAgICAgIHJjID0gdXBk
YXRlX3hlbl9tYXBwaW5ncyhtZm4sIDApOwotICAgIH0KLQotICAgIC8qCiAg
ICAgICogSWYgdGhpcyBtYXkgYmUgaW4gYSBQViBkb21haW4ncyBJT01NVSwg
cmVtb3ZlIGl0LgogICAgICAqCiAgICAgICogTkIgdGhhdCB3cml0YWJsZSB4
ZW5oZWFwIHBhZ2VzIGhhdmUgdGhlaXIgdHlwZSBzZXQgYW5kIGNsZWFyZWQg
YnkKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hl
bi9pbmNsdWRlL2FzbS14ODYvbW0uaAppbmRleCBjYjkwNTI3NDk5NjMuLjhh
OWE0M2JiMGE5ZCAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
bS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNjksMjUg
KzY5LDIyIEBACiAgLyogU2V0IHdoZW4gaXMgdXNpbmcgYSBwYWdlIGFzIGEg
cGFnZSB0YWJsZSAqLwogI2RlZmluZSBfUEdDX3BhZ2VfdGFibGUgICBQR19z
aGlmdCgzKQogI2RlZmluZSBQR0NfcGFnZV90YWJsZSAgICBQR19tYXNrKDEs
IDMpCi0gLyogMy1iaXQgUEFUL1BDRC9QV1QgY2FjaGUtYXR0cmlidXRlIGhp
bnQuICovCi0jZGVmaW5lIFBHQ19jYWNoZWF0dHJfYmFzZSBQR19zaGlmdCg2
KQotI2RlZmluZSBQR0NfY2FjaGVhdHRyX21hc2sgUEdfbWFzayg3LCA2KQog
IC8qIFBhZ2UgaXMgYnJva2VuPyAqLwotI2RlZmluZSBfUEdDX2Jyb2tlbiAg
ICAgICBQR19zaGlmdCg3KQotI2RlZmluZSBQR0NfYnJva2VuICAgICAgICBQ
R19tYXNrKDEsIDcpCisjZGVmaW5lIF9QR0NfYnJva2VuICAgICAgIFBHX3No
aWZ0KDQpCisjZGVmaW5lIFBHQ19icm9rZW4gICAgICAgIFBHX21hc2soMSwg
NCkKICAvKiBNdXR1YWxseS1leGNsdXNpdmUgcGFnZSBzdGF0ZXM6IHsgaW51
c2UsIG9mZmxpbmluZywgb2ZmbGluZWQsIGZyZWUgfS4gKi8KLSNkZWZpbmUg
UEdDX3N0YXRlICAgICAgICAgUEdfbWFzaygzLCA5KQotI2RlZmluZSBQR0Nf
c3RhdGVfaW51c2UgICBQR19tYXNrKDAsIDkpCi0jZGVmaW5lIFBHQ19zdGF0
ZV9vZmZsaW5pbmcgUEdfbWFzaygxLCA5KQotI2RlZmluZSBQR0Nfc3RhdGVf
b2ZmbGluZWQgUEdfbWFzaygyLCA5KQotI2RlZmluZSBQR0Nfc3RhdGVfZnJl
ZSAgICBQR19tYXNrKDMsIDkpCisjZGVmaW5lIFBHQ19zdGF0ZSAgICAgICAg
ICAgUEdfbWFzaygzLCA2KQorI2RlZmluZSBQR0Nfc3RhdGVfaW51c2UgICAg
IFBHX21hc2soMCwgNikKKyNkZWZpbmUgUEdDX3N0YXRlX29mZmxpbmluZyBQ
R19tYXNrKDEsIDYpCisjZGVmaW5lIFBHQ19zdGF0ZV9vZmZsaW5lZCAgUEdf
bWFzaygyLCA2KQorI2RlZmluZSBQR0Nfc3RhdGVfZnJlZSAgICAgIFBHX21h
c2soMywgNikKICNkZWZpbmUgcGFnZV9zdGF0ZV9pcyhwZywgc3QpICgoKHBn
KS0+Y291bnRfaW5mbyZQR0Nfc3RhdGUpID09IFBHQ19zdGF0ZV8jI3N0KQog
LyogUGFnZSBpcyBub3QgcmVmZXJlbmNlIGNvdW50ZWQgKHNlZSBiZWxvdyBm
b3IgY2F2ZWF0cykgKi8KLSNkZWZpbmUgX1BHQ19leHRyYSAgICAgICAgUEdf
c2hpZnQoMTApCi0jZGVmaW5lIFBHQ19leHRyYSAgICAgICAgIFBHX21hc2so
MSwgMTApCisjZGVmaW5lIF9QR0NfZXh0cmEgICAgICAgIFBHX3NoaWZ0KDcp
CisjZGVmaW5lIFBHQ19leHRyYSAgICAgICAgIFBHX21hc2soMSwgNykKIAog
LyogQ291bnQgb2YgcmVmZXJlbmNlcyB0byB0aGlzIGZyYW1lLiAqLwotI2Rl
ZmluZSBQR0NfY291bnRfd2lkdGggICBQR19zaGlmdCgxMCkKKyNkZWZpbmUg
UEdDX2NvdW50X3dpZHRoICAgUEdfc2hpZnQoNykKICNkZWZpbmUgUEdDX2Nv
dW50X21hc2sgICAgKCgxVUw8PFBHQ19jb3VudF93aWR0aCktMSkKIAogLyoK

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.16-3.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.16-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2OiBTcGxpdCBjYWNoZV9mbHVzaCgpIG91dCBvZiBj
YWNoZV93cml0ZWJhY2soKQoKU3Vic2VxdWVudCBjaGFuZ2VzIHdpbGwgd2Fu
dCBhIGZ1bGx5IGZsdXNoaW5nIHZlcnNpb24uCgpVc2UgdGhlIG5ldyBoZWxw
ZXIgcmF0aGVyIHRoYW4gb3BlbmNvZGluZyBpdCBpbiBmbHVzaF9hcmVhX2xv
Y2FsKCkuICBUaGlzCnJlc29sdmVzIGFuIG91dHN0YW5kaW5nIGlzc3VlIHdo
ZXJlIHRoZSBjb25kaXRpb25hbCBzZmVuY2UgaXMgb24gdGhlIHdyb25nCnNp
ZGUgb2YgdGhlIGNsZmx1c2hvcHQgbG9vcC4gIGNsZmx1c2hvcHQgaXMgb3Jk
ZXJlZCB3aXRoIHJlc3BlY3QgdG8gb2xkZXIKc3RvcmVzLCBub3QgdG8geW91
bmdlciBzdG9yZXMuCgpSZW5hbWUgZ250dGFiX2NhY2hlX2ZsdXNoKCkncyBo
ZWxwZXIgdG8gYXZvaWQgY29sbGlkaW5nIGluIG5hbWUuCmdyYW50X3RhYmxl
LmMgY2FuIHNlZSB0aGUgcHJvdG90eXBlIGZyb20gY2FjaGUuaCBzbyB0aGUg
YnVpbGQgZmFpbHMKb3RoZXJ3aXNlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00
MDIuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKWGVuIDQuMTYgYW5kIGVhcmxpZXI6CiAqIEFs
c28gYmFja3BvcnQgaGFsZiBvZiBjL3MgMzMzMDAxM2U2NzM5NiAiVlQtZCAv
IHg4NjogcmUtYXJyYW5nZSBjYWNoZQogICBzeW5jaW5nIiB0byBzcGxpdCBj
YWNoZV93cml0ZWJhY2soKSBvdXQgb2YgdGhlIElPTU1VIGxvZ2ljLCBidXQg
d2l0aG91dCB0aGUKICAgYXNzb2NpYXRlZCBob29rcyBjaGFuZ2VzLgoKZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jIGIveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMKaW5kZXggMjU3OThkZjUwZjU0Li4wYzkxMmI4NjY5
ZjggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9mbHVzaHRsYi5jCkBAIC0yMzQsNyArMjM0LDcgQEAg
dW5zaWduZWQgaW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEs
IHVuc2lnbmVkIGludCBmbGFncykKICAgICBpZiAoIGZsYWdzICYgRkxVU0hf
Q0FDSEUgKQogICAgIHsKICAgICAgICAgY29uc3Qgc3RydWN0IGNwdWluZm9f
eDg2ICpjID0gJmN1cnJlbnRfY3B1X2RhdGE7Ci0gICAgICAgIHVuc2lnbmVk
IGxvbmcgaSwgc3ogPSAwOworICAgICAgICB1bnNpZ25lZCBsb25nIHN6ID0g
MDsKIAogICAgICAgICBpZiAoIG9yZGVyIDwgKEJJVFNfUEVSX0xPTkcgLSBQ
QUdFX1NISUZUKSApCiAgICAgICAgICAgICBzeiA9IDFVTCA8PCAob3JkZXIg
KyBQQUdFX1NISUZUKTsKQEAgLTI0NCwxMyArMjQ0LDcgQEAgdW5zaWduZWQg
aW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVk
IGludCBmbGFncykKICAgICAgICAgICAgICBjLT54ODZfY2xmbHVzaF9zaXpl
ICYmIGMtPng4Nl9jYWNoZV9zaXplICYmIHN6ICYmCiAgICAgICAgICAgICAg
KChzeiA+PiAxMCkgPCBjLT54ODZfY2FjaGVfc2l6ZSkgKQogICAgICAgICB7
Ci0gICAgICAgICAgICBhbHRlcm5hdGl2ZSgiIiwgInNmZW5jZSIsIFg4Nl9G
RUFUVVJFX0NMRkxVU0hPUFQpOwotICAgICAgICAgICAgZm9yICggaSA9IDA7
IGkgPCBzejsgaSArPSBjLT54ODZfY2xmbHVzaF9zaXplICkKLSAgICAgICAg
ICAgICAgICBhbHRlcm5hdGl2ZV9pbnB1dCgiLmJ5dGUgIiBfX3N0cmluZ2lm
eShOT1BfRFNfUFJFRklYKSAiOyIKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAiIGNsZmx1c2ggJTAiLAotICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJkYXRhMTYgY2xmbHVzaCAlMCIsICAgICAgLyog
Y2xmbHVzaG9wdCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCi0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIm0iICgoKGNvbnN0IGNoYXIgKil2YSlbaV0p
KTsKKyAgICAgICAgICAgIGNhY2hlX2ZsdXNoKHZhLCBzeik7CiAgICAgICAg
ICAgICBmbGFncyAmPSB+RkxVU0hfQ0FDSEU7CiAgICAgICAgIH0KICAgICAg
ICAgZWxzZQpAQCAtMjY1LDYgKzI1OSw4MCBAQCB1bnNpZ25lZCBpbnQgZmx1
c2hfYXJlYV9sb2NhbChjb25zdCB2b2lkICp2YSwgdW5zaWduZWQgaW50IGZs
YWdzKQogICAgIHJldHVybiBmbGFnczsKIH0KIAordm9pZCBjYWNoZV9mbHVz
aChjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKK3sKKyAg
ICAvKgorICAgICAqIFRoaXMgZnVuY3Rpb24gbWF5IGJlIGNhbGxlZCBiZWZv
cmUgY3VycmVudF9jcHVfZGF0YSBpcyBlc3RhYmxpc2hlZC4KKyAgICAgKiBI
ZW5jZSBhIGZhbGxiYWNrIGlzIG5lZWRlZCB0byBwcmV2ZW50IHRoZSBsb29w
IGJlbG93IGJlY29taW5nIGluZmluaXRlLgorICAgICAqLworICAgIHVuc2ln
bmVkIGludCBjbGZsdXNoX3NpemUgPSBjdXJyZW50X2NwdV9kYXRhLng4Nl9j
bGZsdXNoX3NpemUgPzogMTY7CisgICAgY29uc3Qgdm9pZCAqZW5kID0gYWRk
ciArIHNpemU7CisKKyAgICBhZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIg
JiAoY2xmbHVzaF9zaXplIC0gMSk7CisgICAgZm9yICggOyBhZGRyIDwgZW5k
OyBhZGRyICs9IGNsZmx1c2hfc2l6ZSApCisgICAgeworICAgICAgICAvKgor
ICAgICAgICAgKiBOb3RlIHJlZ2FyZGluZyB0aGUgImRzIiBwcmVmaXggdXNl
OiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1c2gKKyAgICAgICAgICogKyBw
cmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3AsIGFuZCBoZW5jZSB0aGUgcHJl
Zml4IGlzIGFkZGVkIGluc3RlYWQKKyAgICAgICAgICogb2YgbGV0dGluZyB0
aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZpbGwgdGhlIGdhcCBieSBhcHBl
bmRpbmcgbm9wcy4KKyAgICAgICAgICovCisgICAgICAgIGFsdGVybmF0aXZl
X2lvKCJkczsgY2xmbHVzaCAlW3BdIiwKKyAgICAgICAgICAgICAgICAgICAg
ICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAvKiBjbGZsdXNob3B0ICovCisg
ICAgICAgICAgICAgICAgICAgICAgIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQs
CisgICAgICAgICAgICAgICAgICAgICAgIC8qIG5vIG91dHB1dHMgKi8sCisg
ICAgICAgICAgICAgICAgICAgICAgIFtwXSAibSIgKCooY29uc3QgY2hhciAq
KShhZGRyKSkpOworICAgIH0KKworICAgIGFsdGVybmF0aXZlKCIiLCAic2Zl
bmNlIiwgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCk7Cit9CisKK3ZvaWQgY2Fj
aGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBz
aXplKQoreworICAgIHVuc2lnbmVkIGludCBjbGZsdXNoX3NpemU7CisgICAg
Y29uc3Qgdm9pZCAqZW5kID0gYWRkciArIHNpemU7CisKKyAgICAvKiBGYWxs
IGJhY2sgdG8gQ0xGTFVTSHssT1BUfSB3aGVuIENMV0IgaXNuJ3QgYXZhaWxh
YmxlLiAqLworICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9D
TFdCKSApCisgICAgICAgIHJldHVybiBjYWNoZV9mbHVzaChhZGRyLCBzaXpl
KTsKKworICAgIC8qCisgICAgICogVGhpcyBmdW5jdGlvbiBtYXkgYmUgY2Fs
bGVkIGJlZm9yZSBjdXJyZW50X2NwdV9kYXRhIGlzIGVzdGFibGlzaGVkLgor
ICAgICAqIEhlbmNlIGEgZmFsbGJhY2sgaXMgbmVlZGVkIHRvIHByZXZlbnQg
dGhlIGxvb3AgYmVsb3cgYmVjb21pbmcgaW5maW5pdGUuCisgICAgICovCisg
ICAgY2xmbHVzaF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVz
aF9zaXplID86IDE2OworICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRk
ciAmIChjbGZsdXNoX3NpemUgLSAxKTsKKyAgICBmb3IgKCA7IGFkZHIgPCBl
bmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKKyAgICB7CisvKgorICogVGhl
IGFyZ3VtZW50cyB0byBhIG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJv
Y2Vzc29yIGRpcmVjdGl2ZXMuIERvaW5nIHNvCisgKiByZXN1bHRzIGluIHVu
ZGVmaW5lZCBiZWhhdmlvciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBk
ZWZpbmVzIGhlcmUgaW4KKyAqIG9yZGVyIHRvIGF2b2lkIGl0LgorICovCisj
aWYgZGVmaW5lZChIQVZFX0FTX0NMV0IpCisjIGRlZmluZSBDTFdCX0VOQ09E
SU5HICJjbHdiICVbcF0iCisjZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVP
UFQpCisjIGRlZmluZSBDTFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQg
JVtwXSIgLyogY2x3YiAqLworI2Vsc2UKKyMgZGVmaW5lIENMV0JfRU5DT0RJ
TkcgIi5ieXRlIDB4NjYsIDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUl
cmF4KSAqLworI2VuZGlmCisKKyNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBb
cF0gIm0iICgqKGNvbnN0IGNoYXIgKikoYWRkcikpCisjaWYgZGVmaW5lZChI
QVZFX0FTX0NMV0IpIHx8IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKKyMg
ZGVmaW5lIElOUFVUIEJBU0VfSU5QVVQKKyNlbHNlCisjIGRlZmluZSBJTlBV
VChhZGRyKSAiYSIgKGFkZHIpLCBCQVNFX0lOUFVUKGFkZHIpCisjZW5kaWYK
KworICAgICAgICBhc20gdm9sYXRpbGUgKENMV0JfRU5DT0RJTkcgOjogSU5Q
VVQoYWRkcikpOworCisjdW5kZWYgSU5QVVQKKyN1bmRlZiBCQVNFX0lOUFVU
CisjdW5kZWYgQ0xXQl9FTkNPRElORworICAgIH0KKworICAgIGFzbSB2b2xh
dGlsZSAoInNmZW5jZSIgOjo6ICJtZW1vcnkiKTsKK30KKwogdW5zaWduZWQg
aW50IGd1ZXN0X2ZsdXNoX3RsYl9mbGFncyhjb25zdCBzdHJ1Y3QgZG9tYWlu
ICpkKQogewogICAgIGJvb2wgc2hhZG93ID0gcGFnaW5nX21vZGVfc2hhZG93
KGQpOwpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jIGIv
eGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCmluZGV4IDY2ZjhjZTcxNzQxYy4u
NGM3NDJjZDhmZTgxIDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL2dyYW50X3Rh
YmxlLmMKKysrIGIveGVuL2NvbW1vbi9ncmFudF90YWJsZS5jCkBAIC0zNDMx
LDcgKzM0MzEsNyBAQCBnbnR0YWJfc3dhcF9ncmFudF9yZWYoWEVOX0dVRVNU
X0hBTkRMRV9QQVJBTShnbnR0YWJfc3dhcF9ncmFudF9yZWZfdCkgdW9wLAog
ICAgIHJldHVybiAwOwogfQogCi1zdGF0aWMgaW50IGNhY2hlX2ZsdXNoKGNv
bnN0IGdudHRhYl9jYWNoZV9mbHVzaF90ICpjZmx1c2gsIGdyYW50X3JlZl90
ICpjdXJfcmVmKQorc3RhdGljIGludCBfY2FjaGVfZmx1c2goY29uc3QgZ250
dGFiX2NhY2hlX2ZsdXNoX3QgKmNmbHVzaCwgZ3JhbnRfcmVmX3QgKmN1cl9y
ZWYpCiB7CiAgICAgc3RydWN0IGRvbWFpbiAqZCwgKm93bmVyOwogICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBhZ2U7CkBAIC0zNTI1LDcgKzM1MjUsNyBAQCBn
bnR0YWJfY2FjaGVfZmx1c2goWEVOX0dVRVNUX0hBTkRMRV9QQVJBTShnbnR0
YWJfY2FjaGVfZmx1c2hfdCkgdW9wLAogICAgICAgICAgICAgcmV0dXJuIC1F
RkFVTFQ7CiAgICAgICAgIGZvciAoIDsgOyApCiAgICAgICAgIHsKLSAgICAg
ICAgICAgIGludCByZXQgPSBjYWNoZV9mbHVzaCgmb3AsIGN1cl9yZWYpOwor
ICAgICAgICAgICAgaW50IHJldCA9IF9jYWNoZV9mbHVzaCgmb3AsIGN1cl9y
ZWYpOwogCiAgICAgICAgICAgICBpZiAoIHJldCA8IDAgKQogICAgICAgICAg
ICAgICAgIHJldHVybiByZXQ7CmRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvZXh0ZXJuLmggYi94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKaW5kZXggMDFlMDEwYTEwZDYxLi40MDEwNzky
OTk3MjUgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTc2LDcgKzc2LDYgQEAgaW50IF9fbXVzdF9jaGVjayBx
aW52YWxfZGV2aWNlX2lvdGxiX3N5bmMoc3RydWN0IHZ0ZF9pb21tdSAqaW9t
bXUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBzdHJ1Y3QgcGNpX2RldiAqcGRldiwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHUxNiBkaWQsIHUxNiBzaXplLCB1NjQg
YWRkcik7CiAKLXVuc2lnbmVkIGludCBnZXRfY2FjaGVfbGluZV9zaXplKHZv
aWQpOwogdm9pZCBmbHVzaF9hbGxfY2FjaGUodm9pZCk7CiAKIHVpbnQ2NF90
IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQgbG9uZyBucGFnZXMsIG5v
ZGVpZF90IG5vZGUpOwpkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvdnRkL2lvbW11LmMgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvaW9tbXUuYwppbmRleCA4OTc1YzFkZTYxYmMuLmJjMzc3YzliY2ZhNCAx
MDA2NDQKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11
LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMK
QEAgLTMxLDYgKzMxLDcgQEAKICNpbmNsdWRlIDx4ZW4vcGNpLmg+CiAjaW5j
bHVkZSA8eGVuL3BjaV9yZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRs
ZXIuaD4KKyNpbmNsdWRlIDxhc20vY2FjaGUuaD4KICNpbmNsdWRlIDxhc20v
bXNpLmg+CiAjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRlIDxhc20v
aXJxLmg+CkBAIC0yMDYsNTQgKzIwNyw2IEBAIHN0YXRpYyB2b2lkIGNoZWNr
X2NsZWFudXBfZG9taWRfbWFwKGNvbnN0IHN0cnVjdCBkb21haW4gKmQsCiAg
ICAgfQogfQogCi1zdGF0aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQotewotICAgIHN0YXRpYyB1bnNp
Z25lZCBsb25nIGNsZmx1c2hfc2l6ZSA9IDA7Ci0gICAgY29uc3Qgdm9pZCAq
ZW5kID0gYWRkciArIHNpemU7Ci0KLSAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9
PSAwICkKLSAgICAgICAgY2xmbHVzaF9zaXplID0gZ2V0X2NhY2hlX2xpbmVf
c2l6ZSgpOwotCi0gICAgYWRkciAtPSAodW5zaWduZWQgbG9uZylhZGRyICYg
KGNsZmx1c2hfc2l6ZSAtIDEpOwotICAgIGZvciAoIDsgYWRkciA8IGVuZDsg
YWRkciArPSBjbGZsdXNoX3NpemUgKQotLyoKLSAqIFRoZSBhcmd1bWVudHMg
dG8gYSBtYWNybyBtdXN0IG5vdCBpbmNsdWRlIHByZXByb2Nlc3NvciBkaXJl
Y3RpdmVzLiBEb2luZyBzbwotICogcmVzdWx0cyBpbiB1bmRlZmluZWQgYmVo
YXZpb3IsIHNvIHdlIGhhdmUgdG8gY3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJl
IGluCi0gKiBvcmRlciB0byBhdm9pZCBpdC4KLSAqLwotI2lmIGRlZmluZWQo
SEFWRV9BU19DTFdCKQotIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiY2x3YiAl
W3BdIgotI2VsaWYgZGVmaW5lZChIQVZFX0FTX1hTQVZFT1BUKQotIyBkZWZp
bmUgQ0xXQl9FTkNPRElORyAiZGF0YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNs
d2IgKi8KLSNlbHNlCi0jIGRlZmluZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAw
eDY2LCAweDBmLCAweGFlLCAweDMwIiAvKiBjbHdiICglJXJheCkgKi8KLSNl
bmRpZgotCi0jZGVmaW5lIEJBU0VfSU5QVVQoYWRkcikgW3BdICJtIiAoKihj
b25zdCBjaGFyICopKGFkZHIpKQotI2lmIGRlZmluZWQoSEFWRV9BU19DTFdC
KSB8fCBkZWZpbmVkKEhBVkVfQVNfWFNBVkVPUFQpCi0jIGRlZmluZSBJTlBV
VCBCQVNFX0lOUFVUCi0jZWxzZQotIyBkZWZpbmUgSU5QVVQoYWRkcikgImEi
IChhZGRyKSwgQkFTRV9JTlBVVChhZGRyKQotI2VuZGlmCi0gICAgICAgIC8q
Ci0gICAgICAgICAqIE5vdGUgcmVnYXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RT
X1BSRUZJWDogaXQncyBmYXN0ZXIgdG8gZG8gYSBjbGZsdXNoCi0gICAgICAg
ICAqICsgcHJlZml4IHRoYW4gYSBjbGZsdXNoICsgbm9wLCBhbmQgaGVuY2Ug
dGhlIHByZWZpeCBpcyBhZGRlZCBpbnN0ZWFkCi0gICAgICAgICAqIG9mIGxl
dHRpbmcgdGhlIGFsdGVybmF0aXZlIGZyYW1ld29yayBmaWxsIHRoZSBnYXAg
YnkgYXBwZW5kaW5nIG5vcHMuCi0gICAgICAgICAqLwotICAgICAgICBhbHRl
cm5hdGl2ZV9pb18yKCIuYnl0ZSAiIF9fc3RyaW5naWZ5KE5PUF9EU19QUkVG
SVgpICI7IGNsZmx1c2ggJVtwXSIsCi0gICAgICAgICAgICAgICAgICAgICAg
ICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAvKiBjbGZsdXNob3B0ICovCi0g
ICAgICAgICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9Q
VCwKLSAgICAgICAgICAgICAgICAgICAgICAgICBDTFdCX0VOQ09ESU5HLAot
ICAgICAgICAgICAgICAgICAgICAgICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8q
IG5vIG91dHB1dHMgKi8sCi0gICAgICAgICAgICAgICAgICAgICAgICAgSU5Q
VVQoYWRkcikpOwotI3VuZGVmIElOUFVUCi0jdW5kZWYgQkFTRV9JTlBVVAot
I3VuZGVmIENMV0JfRU5DT0RJTkcKLQotICAgIGFsdGVybmF0aXZlXzIoIiIs
ICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BULAotICAgICAgICAg
ICAgICAgICAgICAgICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTFdCKTsKLX0K
LQogLyogQWxsb2NhdGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5l
IGFkZHJlc3MgKi8KIHVpbnQ2NF90IGFsbG9jX3BndGFibGVfbWFkZHIodW5z
aWduZWQgbG9uZyBucGFnZXMsIG5vZGVpZF90IG5vZGUpCiB7CkBAIC0yNzMs
NyArMjI2LDcgQEAgdWludDY0X3QgYWxsb2NfcGd0YWJsZV9tYWRkcih1bnNp
Z25lZCBsb25nIG5wYWdlcywgbm9kZWlkX3Qgbm9kZSkKICAgICAgICAgY2xl
YXJfcGFnZSh2YWRkcik7CiAKICAgICAgICAgaWYgKCAoaW9tbXVfb3BzLmlu
aXQgPyAmaW9tbXVfb3BzIDogJnZ0ZF9vcHMpLT5zeW5jX2NhY2hlICkKLSAg
ICAgICAgICAgIHN5bmNfY2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAg
ICAgICAgICBjYWNoZV93cml0ZWJhY2sodmFkZHIsIFBBR0VfU0laRSk7CiAg
ICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKHZhZGRyKTsKICAgICAgICAgY3Vy
X3BnKys7CiAgICAgfQpAQCAtMTMwNSw3ICsxMjU4LDcgQEAgaW50IF9faW5p
dCBpb21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpCiAg
ICAgaW9tbXUtPm5yX3B0X2xldmVscyA9IGFnYXdfdG9fbGV2ZWwoYWdhdyk7
CiAKICAgICBpZiAoICFlY2FwX2NvaGVyZW50KGlvbW11LT5lY2FwKSApCi0g
ICAgICAgIHZ0ZF9vcHMuc3luY19jYWNoZSA9IHN5bmNfY2FjaGU7CisgICAg
ICAgIHZ0ZF9vcHMuc3luY19jYWNoZSA9IGNhY2hlX3dyaXRlYmFjazsKIAog
ICAgIC8qIGFsbG9jYXRlIGRvbWFpbiBpZCBiaXRtYXAgKi8KICAgICBpb21t
dS0+ZG9taWRfYml0bWFwID0geHphbGxvY19hcnJheSh1bnNpZ25lZCBsb25n
LCBCSVRTX1RPX0xPTkdTKG5yX2RvbSkpOwpkaWZmIC0tZ2l0IGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQuYyBiL3hlbi9kcml2ZXJz
L3Bhc3N0aHJvdWdoL3Z0ZC94ODYvdnRkLmMKaW5kZXggNjY4MWRjY2Q2OTcw
Li41NWYwZmFhNTIxY2IgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0
aHJvdWdoL3Z0ZC94ODYvdnRkLmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvdnRkL3g4Ni92dGQuYwpAQCAtNDcsMTEgKzQ3LDYgQEAgdm9pZCB1
bm1hcF92dGRfZG9tYWluX3BhZ2UoY29uc3Qgdm9pZCAqdmEpCiAgICAgdW5t
YXBfZG9tYWluX3BhZ2UodmEpOwogfQogCi11bnNpZ25lZCBpbnQgZ2V0X2Nh
Y2hlX2xpbmVfc2l6ZSh2b2lkKQotewotICAgIHJldHVybiAoKGNwdWlkX2Vi
eCgxKSA+PiA4KSAmIDB4ZmYpICogODsKLX0KLQogdm9pZCBmbHVzaF9hbGxf
Y2FjaGUoKQogewogICAgIHdiaW52ZCgpOwpkaWZmIC0tZ2l0IGEveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9jYWNoZS5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9j
YWNoZS5oCmluZGV4IDFmNzE3M2Q4YzcyYy4uZTQ3NzBlZmIyMmI5IDEwMDY0
NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NhY2hlLmgKKysrIGIveGVu
L2luY2x1ZGUvYXNtLXg4Ni9jYWNoZS5oCkBAIC0xMSw0ICsxMSwxMSBAQAog
CiAjZGVmaW5lIF9fcmVhZF9tb3N0bHkgX19zZWN0aW9uKCIuZGF0YS5yZWFk
X21vc3RseSIpCiAKKyNpZm5kZWYgX19BU1NFTUJMWV9fCisKK3ZvaWQgY2Fj
aGVfZmx1c2goY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUp
Owordm9pZCBjYWNoZV93cml0ZWJhY2soY29uc3Qgdm9pZCAqYWRkciwgdW5z
aWduZWQgaW50IHNpemUpOworCisjZW5kaWYKKwogI2VuZGlmCg==

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.16-4.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.16-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L2FtZDogV29yayBhcm91bmQgQ0xGTFVTSCBvcmRl
cmluZyBvbiBvbGRlciBwYXJ0cwoKT24gcHJlLUNMRkxVU0hPUFQgQU1EIENQ
VXMsIENMRkxVU0ggaXMgd2Vha2VseSBvcmRlcmVkIHdpdGggZXZlcnl0aGlu
ZywKaW5jbHVkaW5nIHJlYWRzIGFuZCB3cml0ZXMgdG8gdGhlIGFkZHJlc3Ms
IGFuZCBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KClRoaXMgY3JlYXRl
cyBhIG11bHRpdHVkZSBvZiBwcm9ibGVtYXRpYyBjb3JuZXIgY2FzZXMsIGxh
aWQgb3V0IGluIHRoZSBtYW51YWwuCkFycmFuZ2UgdG8gdXNlIE1GRU5DRSBv
biBib3RoIHNpZGVzIG9mIHRoZSBDTEZMVVNIIHRvIGZvcmNlIHByb3BlciBv
cmRlcmluZy4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9m
Zi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCBhOGUzN2RiYjFmNWMuLmIzYjlh
MGRmNWZlZCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC04MTIsNiArODEyLDE0
IEBAIHN0YXRpYyB2b2lkIGluaXRfYW1kKHN0cnVjdCBjcHVpbmZvX3g4NiAq
YykKIAlpZiAoIWNwdV9oYXNfbGZlbmNlX2Rpc3BhdGNoKQogCQlfX3NldF9i
aXQoWDg2X0ZFQVRVUkVfTUZFTkNFX1JEVFNDLCBjLT54ODZfY2FwYWJpbGl0
eSk7CiAKKwkvKgorCSAqIE9uIHByZS1DTEZMVVNIT1BUIEFNRCBDUFVzLCBD
TEZMVVNIIGlzIHdlYWtseSBvcmRlcmVkIHdpdGgKKwkgKiBldmVyeXRoaW5n
LCBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRlcyB0byBhZGRyZXNzLCBhbmQK
KwkgKiBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KKwkgKi8KKwlpZiAo
IWNwdV9oYXNfY2xmbHVzaG9wdCkKKwkJc2V0dXBfZm9yY2VfY3B1X2NhcChY
ODZfQlVHX0NMRkxVU0hfTUZFTkNFKTsKKwogCXN3aXRjaChjLT54ODYpCiAJ
ewogCWNhc2UgMHhmIC4uLiAweDExOgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMgYi94ZW4vYXJjaC94ODYvZmx1c2h0bGIuYwppbmRl
eCAwYzkxMmI4NjY5ZjguLmRjYmI0MDY0MDEyZSAxMDA2NDQKLS0tIGEveGVu
L2FyY2gveDg2L2ZsdXNodGxiLmMKKysrIGIveGVuL2FyY2gveDg2L2ZsdXNo
dGxiLmMKQEAgLTI1OSw2ICsyNTksMTMgQEAgdW5zaWduZWQgaW50IGZsdXNo
X2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVkIGludCBmbGFn
cykKICAgICByZXR1cm4gZmxhZ3M7CiB9CiAKKy8qCisgKiBPbiBwcmUtQ0xG
TFVTSE9QVCBBTUQgQ1BVcywgQ0xGTFVTSCBpcyB3ZWFrbHkgb3JkZXJlZCB3
aXRoIGV2ZXJ5dGhpbmcsCisgKiBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRl
cyB0byBhZGRyZXNzLCBhbmQgTEZFTkNFL1NGRU5DRSBpbnN0cnVjdGlvbnMu
CisgKgorICogVGhpcyBmdW5jdGlvbiBvbmx5IHdvcmtzIHNhZmVseSBhZnRl
ciBhbHRlcm5hdGl2ZXMgaGF2ZSBydW4uICBMdWNraWx5LCBhdAorICogdGhl
IHRpbWUgb2Ygd3JpdGluZywgd2UgZG9uJ3QgZmx1c2ggdGhlIGNhY2hlcyB0
aGF0IGVhcmx5LgorICovCiB2b2lkIGNhY2hlX2ZsdXNoKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIC8qCkBAIC0yNjgs
NiArMjc1LDggQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICB1bnNpZ25lZCBpbnQgY2xmbHVz
aF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVzaF9zaXplID86
IDE2OwogICAgIGNvbnN0IHZvaWQgKmVuZCA9IGFkZHIgKyBzaXplOwogCisg
ICAgYWx0ZXJuYXRpdmUoIiIsICJtZmVuY2UiLCBYODZfQlVHX0NMRkxVU0hf
TUZFTkNFKTsKKwogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxvbmcpYWRkciAm
IChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFkZHIgPCBlbmQ7
IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKICAgICB7CkBAIC0yODMsNyArMjky
LDkgQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICphZGRyLCB1bnNp
Z25lZCBpbnQgc2l6ZSkKICAgICAgICAgICAgICAgICAgICAgICAgW3BdICJt
IiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CiAgICAgfQogCi0gICAgYWx0
ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNIT1BU
KTsKKyAgICBhbHRlcm5hdGl2ZV8yKCIiLAorICAgICAgICAgICAgICAgICAg
InNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisgICAgICAgICAg
ICAgICAgICAibWZlbmNlIiwgWDg2X0JVR19DTEZMVVNIX01GRU5DRSk7CiB9
CiAKIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKQpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9jcHVmZWF0dXJlcy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVm
ZWF0dXJlcy5oCmluZGV4IDc0MTNmZWJkN2FkOC4uZmYzMTU3ZDUyZDEzIDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NwdWZlYXR1cmVzLmgK
KysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCkBAIC00
Nyw2ICs0Nyw3IEBAIFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAg
ICBYODZfU1lOVEgoMjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJh
bmNoCiAKICNkZWZpbmUgWDg2X0JVR19GUFVfUFRSUyAgICAgICAgICBYODZf
QlVHKCAwKSAvKiAoRilYe1NBVkUsUlNUT1J9IGRvZXNuJ3Qgc2F2ZS9yZXN0
b3JlIEZPUC9GSVAvRkRQLiAqLwogI2RlZmluZSBYODZfQlVHX05VTExfU0VH
ICAgICAgICAgIFg4Nl9CVUcoIDEpIC8qIE5VTEwtaW5nIGEgc2VsZWN0b3Ig
cHJlc2VydmVzIHRoZSBiYXNlIGFuZCBsaW1pdC4gKi8KKyNkZWZpbmUgWDg2
X0JVR19DTEZMVVNIX01GRU5DRSAgICBYODZfQlVHKCAyKSAvKiBNRkVOQ0Ug
bmVlZGVkIHRvIHNlcmlhbGlzZSBDTEZMVVNIICovCiAKIC8qIFRvdGFsIG51
bWJlciBvZiBjYXBhYmlsaXR5IHdvcmRzLCBpbmMgc3ludGggYW5kIGJ1ZyB3
b3Jkcy4gKi8KICNkZWZpbmUgTkNBUElOVFMgKEZTQ0FQSU5UUyArIFg4Nl9O
Ul9TWU5USCArIFg4Nl9OUl9CVUcpIC8qIE4gMzItYml0IHdvcmRzIHdvcnRo
IG9mIGluZm8gKi8K

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.16-5.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.16-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBUcmFjayBhbmQgZmx1c2ggbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mIFJBTQoKVGhlcmUgYXJlIGxlZ2l0aW1hdGUgdXNl
cyBvZiBXQyBtYXBwaW5ncyBvZiBSQU0sIGUuZy4gZm9yIERNQSBidWZmZXJz
IHdpdGgKZGV2aWNlcyB0aGF0IG1ha2Ugbm9uLWNvaGVyZW50IHdyaXRlcy4g
IFRoZSBMaW51eCBzb3VuZCBzdWJzeXN0ZW0gbWFrZXMKZXh0ZW5zaXZlIHVz
ZSBvZiB0aGlzIHRlY2huaXF1ZS4KCkZvciBzdWNoIHVzZWNhc2VzLCB0aGUg
Z3Vlc3QncyBETUEgYnVmZmVyIGlzIG1hcHBlZCBhbmQgY29uc2lzdGVudGx5
IHVzZWQgYXMKV0MsIGFuZCBYZW4gZG9lc24ndCBpbnRlcmFjdCB3aXRoIHRo
ZSBidWZmZXIuCgpIb3dldmVyLCBhIG1pc2NoZXZpb3VzIGd1ZXN0IGNhbiB1
c2UgV0MgbWFwcGluZ3MgdG8gZGVsaWJlcmF0ZWx5IGNyZWF0ZQpub24tY29o
ZXJlbmN5IGJldHdlZW4gdGhlIGNhY2hlIGFuZCBSQU0sIGFuZCB1c2UgdGhp
cyB0byB0cmljayBYZW4gaW50bwp2YWxpZGF0aW5nIGEgcGFnZXRhYmxlIHdo
aWNoIGlzbid0IGFjdHVhbGx5IHNhZmUuCgpBbGxvY2F0ZSBhIG5ldyBQR1Rf
bm9uX2NvaGVyZW50IHRvIHRyYWNrIHRoZSBub24tY29oZXJlbmN5IG9mIG1h
cHBpbmdzLiAgU2V0Cml0IHdoZW5ldmVyIGEgbm9uLWNvaGVyZW50IHdyaXRl
YWJsZSBtYXBwaW5nIGlzIGNyZWF0ZWQuICBJZiB0aGUgcGFnZSBpcyB1c2Vk
CmFzIGFueXRoaW5nIG90aGVyIHRoYW4gUEdUX3dyaXRhYmxlX3BhZ2UsIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlCnZhbGlkYXRpb24uICBBbHNvIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlIHRoZSBwYWdlIGlzIHJldHVybmVk
IHRvIHRoZSBoZWFwLgoKVGhpcyBpcyBDVkUtMjAyMi0yNjM2NCwgcGFydCBv
ZiBYU0EtNDAyLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3
LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IGFi
MzJkMTNhMWEwZC4uYmFiOTYyNGZhYmI3IDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtOTk3LDYg
Kzk5NywxNSBAQCBnZXRfcGFnZV9mcm9tX2wxZSgKICAgICAgICAgcmV0dXJu
IC1FQUNDRVM7CiAgICAgfQogCisgICAgLyoKKyAgICAgKiBUcmFjayB3cml0
ZWFibGUgbm9uLWNvaGVyZW50IG1hcHBpbmdzIHRvIFJBTSBwYWdlcywgdG8g
dHJpZ2dlciBhIGNhY2hlCisgICAgICogZmx1c2ggbGF0ZXIgaWYgdGhlIHRh
cmdldCBpcyB1c2VkIGFzIGFueXRoaW5nIGJ1dCBhIFBHVF93cml0ZWFibGUg
cGFnZS4KKyAgICAgKiBXZSBjYXJlIGFib3V0IGFsbCB3cml0ZWFibGUgbWFw
cGluZ3MsIGluY2x1ZGluZyBmb3JlaWduIG1hcHBpbmdzLgorICAgICAqLwor
ICAgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9YRU5fU0VMRlNO
T09QKSAmJgorICAgICAgICAgKGwxZiAmIChQQUdFX0NBQ0hFX0FUVFJTIHwg
X1BBR0VfUlcpKSA9PSAoX1BBR0VfV0MgfCBfUEFHRV9SVykgKQorICAgICAg
ICBzZXRfYml0KF9QR1Rfbm9uX2NvaGVyZW50LCAmcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pOworCiAgICAgcmV0dXJuIDA7CiAKICBjb3VsZF9ub3RfcGlu
OgpAQCAtMjQ1NCw2ICsyNDYzLDE5IEBAIHN0YXRpYyBpbnQgY2xlYW51cF9w
YWdlX21hcHBpbmdzKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2UpCiAgICAgICAg
IH0KICAgICB9CiAKKyAgICAvKgorICAgICAqIEZsdXNoIHRoZSBjYWNoZSBp
ZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IHdyaXRlYWJs
ZQorICAgICAqIG1hcHBpbmdzIG9mIHRoaXMgcGFnZS4gIFRoaXMgZm9yY2Vz
IHRoZSBwYWdlIHRvIGJlIGNvaGVyZW50IGJlZm9yZSBpdAorICAgICAqIGlz
IGZyZWVkIGJhY2sgdG8gdGhlIGhlYXAuCisgICAgICovCisgICAgaWYgKCBf
X3Rlc3RfYW5kX2NsZWFyX2JpdChfUEdUX25vbl9jb2hlcmVudCwgJnBhZ2Ut
PnUuaW51c2UudHlwZV9pbmZvKSApCisgICAgeworICAgICAgICB2b2lkICph
ZGRyID0gX19tYXBfZG9tYWluX3BhZ2UocGFnZSk7CisKKyAgICAgICAgY2Fj
aGVfZmx1c2goYWRkciwgUEFHRV9TSVpFKTsKKyAgICAgICAgdW5tYXBfZG9t
YWluX3BhZ2UoYWRkcik7CisgICAgfQorCiAgICAgcmV0dXJuIHJjOwogfQog
CkBAIC0zMDI4LDYgKzMwNTAsMjIgQEAgc3RhdGljIGludCBfZ2V0X3BhZ2Vf
dHlwZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlLCB1bnNpZ25lZCBsb25nIHR5
cGUsCiAgICAgaWYgKCB1bmxpa2VseSghKG54ICYgUEdUX3ZhbGlkYXRlZCkp
ICkKICAgICB7CiAgICAgICAgIC8qCisgICAgICAgICAqIEZsdXNoIHRoZSBj
YWNoZSBpZiB0aGVyZSB3ZXJlIHByZXZpb3VzbHkgbm9uLWNvaGVyZW50IG1h
cHBpbmdzIG9mCisgICAgICAgICAqIHRoaXMgcGFnZSwgYW5kIHdlJ3JlIHRy
eWluZyB0byB1c2UgaXQgYXMgYW55dGhpbmcgb3RoZXIgdGhhbiBhCisgICAg
ICAgICAqIHdyaXRlYWJsZSBwYWdlLiAgVGhpcyBmb3JjZXMgdGhlIHBhZ2Ug
dG8gYmUgY29oZXJlbnQgYmVmb3JlIHdlCisgICAgICAgICAqIHZhbGlkYXRl
IGl0cyBjb250ZW50cyBmb3Igc2FmZXR5LgorICAgICAgICAgKi8KKyAgICAg
ICAgaWYgKCAobnggJiBQR1Rfbm9uX2NvaGVyZW50KSAmJiB0eXBlICE9IFBH
VF93cml0YWJsZV9wYWdlICkKKyAgICAgICAgeworICAgICAgICAgICAgdm9p
ZCAqYWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKHBhZ2UpOworCisgICAgICAg
ICAgICBjYWNoZV9mbHVzaChhZGRyLCBQQUdFX1NJWkUpOworICAgICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UoYWRkcik7CisKKyAgICAgICAgICAgIHBh
Z2UtPnUuaW51c2UudHlwZV9pbmZvICY9IH5QR1Rfbm9uX2NvaGVyZW50Owor
ICAgICAgICB9CisKKyAgICAgICAgLyoKICAgICAgICAgICogTm8gc3BlY2lh
bCB2YWxpZGF0aW9uIG5lZWRlZCBmb3Igd3JpdGFibGUgb3Igc2hhcmVkIHBh
Z2VzLiAgUGFnZQogICAgICAgICAgKiB0YWJsZXMgYW5kIEdEVC9MRFQgbmVl
ZCB0byBoYXZlIHRoZWlyIGNvbnRlbnRzIGF1ZGl0ZWQuCiAgICAgICAgICAq
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvcHYvZ3JhbnRfdGFibGUuYyBi
L3hlbi9hcmNoL3g4Ni9wdi9ncmFudF90YWJsZS5jCmluZGV4IDAzMjU2MThj
OTg4My4uODFjNzJlNjFlZDU1IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
cHYvZ3JhbnRfdGFibGUuYworKysgYi94ZW4vYXJjaC94ODYvcHYvZ3JhbnRf
dGFibGUuYwpAQCAtMTA5LDcgKzEwOSwxNyBAQCBpbnQgY3JlYXRlX2dyYW50
X3B2X21hcHBpbmcodWludDY0X3QgYWRkciwgbWZuX3QgZnJhbWUsCiAKICAg
ICBvbDFlID0gKnBsMWU7CiAgICAgaWYgKCBVUERBVEVfRU5UUlkobDEsIHBs
MWUsIG9sMWUsIG5sMWUsIGdsMW1mbiwgY3VyciwgMCkgKQorICAgIHsKKyAg
ICAgICAgLyoKKyAgICAgICAgICogV2UgYWx3YXlzIGNyZWF0ZSBtYXBwaW5n
cyBpbiB0aGlzIHBhdGguICBIb3dldmVyLCBvdXIgY2FsbGVyLAorICAgICAg
ICAgKiBtYXBfZ3JhbnRfcmVmKCksIG9ubHkgcGFzc2VzIHBvdGVudGlhbGx5
IG5vbi16ZXJvIGNhY2hlX2ZsYWdzIGZvcgorICAgICAgICAgKiBNTUlPIGZy
YW1lcywgc28gdGhpcyBwYXRoIGRvZXNuJ3QgY3JlYXRlIG5vbi1jb2hlcmVu
dCBtYXBwaW5ncyBvZgorICAgICAgICAgKiBSQU0gZnJhbWVzIGFuZCB0aGVy
ZSdzIG5vIG5lZWQgdG8gY2FsY3VsYXRlIFBHVF9ub25fY29oZXJlbnQuCisg
ICAgICAgICAqLworICAgICAgICBBU1NFUlQoIWNhY2hlX2ZsYWdzIHx8IGlz
X2lvbWVtX3BhZ2UoZnJhbWUpKTsKKwogICAgICAgICByYyA9IEdOVFNUX29r
YXk7CisgICAgfQogCiAgb3V0X3VubG9jazoKICAgICBwYWdlX3VubG9jayhw
YWdlKTsKQEAgLTI5NCw3ICszMDQsMTggQEAgaW50IHJlcGxhY2VfZ3JhbnRf
cHZfbWFwcGluZyh1aW50NjRfdCBhZGRyLCBtZm5fdCBmcmFtZSwKICAgICAg
ICAgICAgICAgICAgbDFlX2dldF9mbGFncyhvbDFlKSwgYWRkciwgZ3JhbnRf
cHRlX2ZsYWdzKTsKIAogICAgIGlmICggVVBEQVRFX0VOVFJZKGwxLCBwbDFl
LCBvbDFlLCBubDFlLCBnbDFtZm4sIGN1cnIsIDApICkKKyAgICB7CisgICAg
ICAgIC8qCisgICAgICAgICAqIEdlbmVyYWxseSwgcmVwbGFjZV9ncmFudF9w
dl9tYXBwaW5nKCkgaXMgdXNlZCB0byBkZXN0cm95IG1hcHBpbmdzCisgICAg
ICAgICAqIChuMWxlID0gbDFlX2VtcHR5KCkpLCBidXQgaXQgY2FuIGJlIGEg
cHJlc2VudCBtYXBwaW5nIG9uIHRoZQorICAgICAgICAgKiBHTlRBQk9QX3Vu
bWFwX2FuZF9yZXBsYWNlIHBhdGguCisgICAgICAgICAqCisgICAgICAgICAq
IEluIHN1Y2ggY2FzZXMsIHRoZSBQVEUgaXMgZnVsbHkgdHJhbnNwbGFudGVk
IGZyb20gaXRzIG9sZCBsb2NhdGlvbgorICAgICAgICAgKiB2aWEgc3RlYWxf
bGluZWFyX2FkZHIoKSwgc28gd2UgbmVlZCBub3QgcGVyZm9ybSBQR1Rfbm9u
X2NvaGVyZW50CisgICAgICAgICAqIGNoZWNraW5nIGhlcmUuCisgICAgICAg
ICAqLwogICAgICAgICByYyA9IEdOVFNUX29rYXk7CisgICAgfQogCiAgb3V0
X3VubG9jazoKICAgICBwYWdlX3VubG9jayhwYWdlKTsKZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS14
ODYvbW0uaAppbmRleCA4YTlhNDNiYjBhOWQuLjc0NjQxNjdhZTE5MiAxMDA2
NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCisrKyBiL3hlbi9p
bmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNTMsOCArNTMsMTIgQEAKICNkZWZp
bmUgX1BHVF9wYXJ0aWFsICAgICAgUEdfc2hpZnQoOCkKICNkZWZpbmUgUEdU
X3BhcnRpYWwgICAgICAgUEdfbWFzaygxLCA4KQogCisvKiBIYXMgdGhpcyBw
YWdlIGJlZW4gbWFwcGVkIHdyaXRlYWJsZSB3aXRoIGEgbm9uLWNvaGVyZW50
IG1lbW9yeSB0eXBlPyAqLworI2RlZmluZSBfUEdUX25vbl9jb2hlcmVudCBQ
R19zaGlmdCg5KQorI2RlZmluZSBQR1Rfbm9uX2NvaGVyZW50ICBQR19tYXNr
KDEsIDkpCisKICAvKiBDb3VudCBvZiB1c2VzIG9mIHRoaXMgZnJhbWUgYXMg
aXRzIGN1cnJlbnQgdHlwZS4gKi8KLSNkZWZpbmUgUEdUX2NvdW50X3dpZHRo
ICAgUEdfc2hpZnQoOCkKKyNkZWZpbmUgUEdUX2NvdW50X3dpZHRoICAgUEdf
c2hpZnQoOSkKICNkZWZpbmUgUEdUX2NvdW50X21hc2sgICAgKCgxVUw8PFBH
VF9jb3VudF93aWR0aCktMSkKIAogLyogQXJlIHRoZSAndHlwZSBtYXNrJyBi
aXRzIGlkZW50aWNhbD8gKi8K

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-4.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L2FtZDogV29yayBhcm91bmQgQ0xGTFVTSCBvcmRl
cmluZyBvbiBvbGRlciBwYXJ0cwoKT24gcHJlLUNMRkxVU0hPUFQgQU1EIENQ
VXMsIENMRkxVU0ggaXMgd2Vha2VseSBvcmRlcmVkIHdpdGggZXZlcnl0aGlu
ZywKaW5jbHVkaW5nIHJlYWRzIGFuZCB3cml0ZXMgdG8gdGhlIGFkZHJlc3Ms
IGFuZCBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KClRoaXMgY3JlYXRl
cyBhIG11bHRpdHVkZSBvZiBwcm9ibGVtYXRpYyBjb3JuZXIgY2FzZXMsIGxh
aWQgb3V0IGluIHRoZSBtYW51YWwuCkFycmFuZ2UgdG8gdXNlIE1GRU5DRSBv
biBib3RoIHNpZGVzIG9mIHRoZSBDTEZMVVNIIHRvIGZvcmNlIHByb3BlciBv
cmRlcmluZy4KClRoaXMgaXMgcGFydCBvZiBYU0EtNDAyLgoKU2lnbmVkLW9m
Zi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCA0OTk5ZjhiZTJiMTEuLjk0Yjll
MzEwMTYxZiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC04MTIsNiArODEyLDE0
IEBAIHN0YXRpYyB2b2lkIGNmX2NoZWNrIGluaXRfYW1kKHN0cnVjdCBjcHVp
bmZvX3g4NiAqYykKIAlpZiAoIWNwdV9oYXNfbGZlbmNlX2Rpc3BhdGNoKQog
CQlfX3NldF9iaXQoWDg2X0ZFQVRVUkVfTUZFTkNFX1JEVFNDLCBjLT54ODZf
Y2FwYWJpbGl0eSk7CiAKKwkvKgorCSAqIE9uIHByZS1DTEZMVVNIT1BUIEFN
RCBDUFVzLCBDTEZMVVNIIGlzIHdlYWtseSBvcmRlcmVkIHdpdGgKKwkgKiBl
dmVyeXRoaW5nLCBpbmNsdWRpbmcgcmVhZHMgYW5kIHdyaXRlcyB0byBhZGRy
ZXNzLCBhbmQKKwkgKiBMRkVOQ0UvU0ZFTkNFIGluc3RydWN0aW9ucy4KKwkg
Ki8KKwlpZiAoIWNwdV9oYXNfY2xmbHVzaG9wdCkKKwkJc2V0dXBfZm9yY2Vf
Y3B1X2NhcChYODZfQlVHX0NMRkxVU0hfTUZFTkNFKTsKKwogCXN3aXRjaChj
LT54ODYpCiAJewogCWNhc2UgMHhmIC4uLiAweDExOgpkaWZmIC0tZ2l0IGEv
eGVuL2FyY2gveDg2L2ZsdXNodGxiLmMgYi94ZW4vYXJjaC94ODYvZmx1c2h0
bGIuYwppbmRleCA0NzFiM2UzMWM0NmMuLjE4NzQ4YjJiYzgwNSAxMDA2NDQK
LS0tIGEveGVuL2FyY2gveDg2L2ZsdXNodGxiLmMKKysrIGIveGVuL2FyY2gv
eDg2L2ZsdXNodGxiLmMKQEAgLTI2MCw2ICsyNjAsMTMgQEAgdW5zaWduZWQg
aW50IGZsdXNoX2FyZWFfbG9jYWwoY29uc3Qgdm9pZCAqdmEsIHVuc2lnbmVk
IGludCBmbGFncykKICAgICByZXR1cm4gZmxhZ3M7CiB9CiAKKy8qCisgKiBP
biBwcmUtQ0xGTFVTSE9QVCBBTUQgQ1BVcywgQ0xGTFVTSCBpcyB3ZWFrbHkg
b3JkZXJlZCB3aXRoIGV2ZXJ5dGhpbmcsCisgKiBpbmNsdWRpbmcgcmVhZHMg
YW5kIHdyaXRlcyB0byBhZGRyZXNzLCBhbmQgTEZFTkNFL1NGRU5DRSBpbnN0
cnVjdGlvbnMuCisgKgorICogVGhpcyBmdW5jdGlvbiBvbmx5IHdvcmtzIHNh
ZmVseSBhZnRlciBhbHRlcm5hdGl2ZXMgaGF2ZSBydW4uICBMdWNraWx5LCBh
dAorICogdGhlIHRpbWUgb2Ygd3JpdGluZywgd2UgZG9uJ3QgZmx1c2ggdGhl
IGNhY2hlcyB0aGF0IGVhcmx5LgorICovCiB2b2lkIGNhY2hlX2ZsdXNoKGNv
bnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIC8q
CkBAIC0yNjksNiArMjc2LDggQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2
b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICB1bnNpZ25lZCBp
bnQgY2xmbHVzaF9zaXplID0gY3VycmVudF9jcHVfZGF0YS54ODZfY2xmbHVz
aF9zaXplID86IDE2OwogICAgIGNvbnN0IHZvaWQgKmVuZCA9IGFkZHIgKyBz
aXplOwogCisgICAgYWx0ZXJuYXRpdmUoIiIsICJtZmVuY2UiLCBYODZfQlVH
X0NMRkxVU0hfTUZFTkNFKTsKKwogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxv
bmcpYWRkciAmIChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFk
ZHIgPCBlbmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKICAgICB7CkBAIC0y
ODQsNyArMjkzLDkgQEAgdm9pZCBjYWNoZV9mbHVzaChjb25zdCB2b2lkICph
ZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKICAgICAgICAgICAgICAgICAgICAg
ICAgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKSk7CiAgICAgfQog
Ci0gICAgYWx0ZXJuYXRpdmUoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9D
TEZMVVNIT1BUKTsKKyAgICBhbHRlcm5hdGl2ZV8yKCIiLAorICAgICAgICAg
ICAgICAgICAgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisg
ICAgICAgICAgICAgICAgICAibWZlbmNlIiwgWDg2X0JVR19DTEZMVVNIX01G
RU5DRSk7CiB9CiAKIHZvaWQgY2FjaGVfd3JpdGViYWNrKGNvbnN0IHZvaWQg
KmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQpkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmggYi94ZW4vYXJjaC94
ODYvaW5jbHVkZS9hc20vY3B1ZmVhdHVyZXMuaAppbmRleCA3NDEzZmViZDdh
ZDguLmZmMzE1N2Q1MmQxMyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKKysrIGIveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKQEAgLTQ3LDYgKzQ3LDcgQEAgWEVO
X0NQVUZFQVRVUkUoWEVOX0lCVCwgICAgICAgICAgIFg4Nl9TWU5USCgyNykp
IC8qIFhlbiB1c2VzIENFVCBJbmRpcmVjdCBCcmFuY2gKIAogI2RlZmluZSBY
ODZfQlVHX0ZQVV9QVFJTICAgICAgICAgIFg4Nl9CVUcoIDApIC8qIChGKVh7
U0FWRSxSU1RPUn0gZG9lc24ndCBzYXZlL3Jlc3RvcmUgRk9QL0ZJUC9GRFAu
ICovCiAjZGVmaW5lIFg4Nl9CVUdfTlVMTF9TRUcgICAgICAgICAgWDg2X0JV
RyggMSkgLyogTlVMTC1pbmcgYSBzZWxlY3RvciBwcmVzZXJ2ZXMgdGhlIGJh
c2UgYW5kIGxpbWl0LiAqLworI2RlZmluZSBYODZfQlVHX0NMRkxVU0hfTUZF
TkNFICAgIFg4Nl9CVUcoIDIpIC8qIE1GRU5DRSBuZWVkZWQgdG8gc2VyaWFs
aXNlIENMRkxVU0ggKi8KIAogLyogVG90YWwgbnVtYmVyIG9mIGNhcGFiaWxp
dHkgd29yZHMsIGluYyBzeW50aCBhbmQgYnVnIHdvcmRzLiAqLwogI2RlZmlu
ZSBOQ0FQSU5UUyAoRlNDQVBJTlRTICsgWDg2X05SX1NZTlRIICsgWDg2X05S
X0JVRykgLyogTiAzMi1iaXQgd29yZHMgd29ydGggb2YgaW5mbyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa402/xsa402-5.patch"
Content-Disposition: attachment; filename="xsa402/xsa402-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3B2OiBUcmFjayBhbmQgZmx1c2ggbm9uLWNvaGVy
ZW50IG1hcHBpbmdzIG9mIFJBTQoKVGhlcmUgYXJlIGxlZ2l0aW1hdGUgdXNl
cyBvZiBXQyBtYXBwaW5ncyBvZiBSQU0sIGUuZy4gZm9yIERNQSBidWZmZXJz
IHdpdGgKZGV2aWNlcyB0aGF0IG1ha2Ugbm9uLWNvaGVyZW50IHdyaXRlcy4g
IFRoZSBMaW51eCBzb3VuZCBzdWJzeXN0ZW0gbWFrZXMKZXh0ZW5zaXZlIHVz
ZSBvZiB0aGlzIHRlY2huaXF1ZS4KCkZvciBzdWNoIHVzZWNhc2VzLCB0aGUg
Z3Vlc3QncyBETUEgYnVmZmVyIGlzIG1hcHBlZCBhbmQgY29uc2lzdGVudGx5
IHVzZWQgYXMKV0MsIGFuZCBYZW4gZG9lc24ndCBpbnRlcmFjdCB3aXRoIHRo
ZSBidWZmZXIuCgpIb3dldmVyLCBhIG1pc2NoZXZpb3VzIGd1ZXN0IGNhbiB1
c2UgV0MgbWFwcGluZ3MgdG8gZGVsaWJlcmF0ZWx5IGNyZWF0ZQpub24tY29o
ZXJlbmN5IGJldHdlZW4gdGhlIGNhY2hlIGFuZCBSQU0sIGFuZCB1c2UgdGhp
cyB0byB0cmljayBYZW4gaW50bwp2YWxpZGF0aW5nIGEgcGFnZXRhYmxlIHdo
aWNoIGlzbid0IGFjdHVhbGx5IHNhZmUuCgpBbGxvY2F0ZSBhIG5ldyBQR1Rf
bm9uX2NvaGVyZW50IHRvIHRyYWNrIHRoZSBub24tY29oZXJlbmN5IG9mIG1h
cHBpbmdzLiAgU2V0Cml0IHdoZW5ldmVyIGEgbm9uLWNvaGVyZW50IHdyaXRl
YWJsZSBtYXBwaW5nIGlzIGNyZWF0ZWQuICBJZiB0aGUgcGFnZSBpcyB1c2Vk
CmFzIGFueXRoaW5nIG90aGVyIHRoYW4gUEdUX3dyaXRhYmxlX3BhZ2UsIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlCnZhbGlkYXRpb24uICBBbHNvIGZv
cmNlIGEgY2FjaGUgZmx1c2ggYmVmb3JlIHRoZSBwYWdlIGlzIHJldHVybmVk
IHRvIHRoZSBoZWFwLgoKVGhpcyBpcyBDVkUtMjAyMi0yNjM2NCwgcGFydCBv
ZiBYU0EtNDAyLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3
LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbW0uaCBiL3hlbi9hcmNoL3g4Ni9p
bmNsdWRlL2FzbS9tbS5oCmluZGV4IDYwNWMxMDE1MjgwNS4uMDdiNTljOTgy
YjM5IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbW0u
aAorKysgYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbW0uaApAQCAtNTMs
OCArNTMsMTIgQEAKICNkZWZpbmUgX1BHVF9wYXJ0aWFsICAgICAgUEdfc2hp
ZnQoOCkKICNkZWZpbmUgUEdUX3BhcnRpYWwgICAgICAgUEdfbWFzaygxLCA4
KQogCisvKiBIYXMgdGhpcyBwYWdlIGJlZW4gbWFwcGVkIHdyaXRlYWJsZSB3
aXRoIGEgbm9uLWNvaGVyZW50IG1lbW9yeSB0eXBlPyAqLworI2RlZmluZSBf
UEdUX25vbl9jb2hlcmVudCBQR19zaGlmdCg5KQorI2RlZmluZSBQR1Rfbm9u
X2NvaGVyZW50ICBQR19tYXNrKDEsIDkpCisKICAvKiBDb3VudCBvZiB1c2Vz
IG9mIHRoaXMgZnJhbWUgYXMgaXRzIGN1cnJlbnQgdHlwZS4gKi8KLSNkZWZp
bmUgUEdUX2NvdW50X3dpZHRoICAgUEdfc2hpZnQoOCkKKyNkZWZpbmUgUEdU
X2NvdW50X3dpZHRoICAgUEdfc2hpZnQoOSkKICNkZWZpbmUgUEdUX2NvdW50
X21hc2sgICAgKCgxVUw8PFBHVF9jb3VudF93aWR0aCktMSkKIAogLyogQXJl
IHRoZSAndHlwZSBtYXNrJyBiaXRzIGlkZW50aWNhbD8gKi8KZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9tbS5jIGIveGVuL2FyY2gveDg2L21tLmMKaW5k
ZXggMmI1ZjViNTUzZDk0Li45NGY2YWU5YWU3NDIgMTAwNjQ0Ci0tLSBhL3hl
bi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC0x
MDI1LDYgKzEwMjUsMTUgQEAgZ2V0X3BhZ2VfZnJvbV9sMWUoCiAgICAgICAg
IHJldHVybiAtRUFDQ0VTOwogICAgIH0KIAorICAgIC8qCisgICAgICogVHJh
Y2sgd3JpdGVhYmxlIG5vbi1jb2hlcmVudCBtYXBwaW5ncyB0byBSQU0gcGFn
ZXMsIHRvIHRyaWdnZXIgYSBjYWNoZQorICAgICAqIGZsdXNoIGxhdGVyIGlm
IHRoZSB0YXJnZXQgaXMgdXNlZCBhcyBhbnl0aGluZyBidXQgYSBQR1Rfd3Jp
dGVhYmxlIHBhZ2UuCisgICAgICogV2UgY2FyZSBhYm91dCBhbGwgd3JpdGVh
YmxlIG1hcHBpbmdzLCBpbmNsdWRpbmcgZm9yZWlnbiBtYXBwaW5ncy4KKyAg
ICAgKi8KKyAgICBpZiAoICFib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfWEVO
X1NFTEZTTk9PUCkgJiYKKyAgICAgICAgIChsMWYgJiAoUEFHRV9DQUNIRV9B
VFRSUyB8IF9QQUdFX1JXKSkgPT0gKF9QQUdFX1dDIHwgX1BBR0VfUlcpICkK
KyAgICAgICAgc2V0X2JpdChfUEdUX25vbl9jb2hlcmVudCwgJnBhZ2UtPnUu
aW51c2UudHlwZV9pbmZvKTsKKwogICAgIHJldHVybiAwOwogCiAgY291bGRf
bm90X3BpbjoKQEAgLTI0ODIsNiArMjQ5MSwxOSBAQCBzdGF0aWMgaW50IGNs
ZWFudXBfcGFnZV9tYXBwaW5ncyhzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlKQog
ICAgICAgICB9CiAgICAgfQogCisgICAgLyoKKyAgICAgKiBGbHVzaCB0aGUg
Y2FjaGUgaWYgdGhlcmUgd2VyZSBwcmV2aW91c2x5IG5vbi1jb2hlcmVudCB3
cml0ZWFibGUKKyAgICAgKiBtYXBwaW5ncyBvZiB0aGlzIHBhZ2UuICBUaGlz
IGZvcmNlcyB0aGUgcGFnZSB0byBiZSBjb2hlcmVudCBiZWZvcmUgaXQKKyAg
ICAgKiBpcyBmcmVlZCBiYWNrIHRvIHRoZSBoZWFwLgorICAgICAqLworICAg
IGlmICggX190ZXN0X2FuZF9jbGVhcl9iaXQoX1BHVF9ub25fY29oZXJlbnQs
ICZwYWdlLT51LmludXNlLnR5cGVfaW5mbykgKQorICAgIHsKKyAgICAgICAg
dm9pZCAqYWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKHBhZ2UpOworCisgICAg
ICAgIGNhY2hlX2ZsdXNoKGFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGFkZHIpOworICAgIH0KKwogICAgIHJldHVybiBy
YzsKIH0KIApAQCAtMzA1Niw2ICszMDc4LDIyIEBAIHN0YXRpYyBpbnQgX2dl
dF9wYWdlX3R5cGUoc3RydWN0IHBhZ2VfaW5mbyAqcGFnZSwgdW5zaWduZWQg
bG9uZyB0eXBlLAogICAgIGlmICggdW5saWtlbHkoIShueCAmIFBHVF92YWxp
ZGF0ZWQpKSApCiAgICAgewogICAgICAgICAvKgorICAgICAgICAgKiBGbHVz
aCB0aGUgY2FjaGUgaWYgdGhlcmUgd2VyZSBwcmV2aW91c2x5IG5vbi1jb2hl
cmVudCBtYXBwaW5ncyBvZgorICAgICAgICAgKiB0aGlzIHBhZ2UsIGFuZCB3
ZSdyZSB0cnlpbmcgdG8gdXNlIGl0IGFzIGFueXRoaW5nIG90aGVyIHRoYW4g
YQorICAgICAgICAgKiB3cml0ZWFibGUgcGFnZS4gIFRoaXMgZm9yY2VzIHRo
ZSBwYWdlIHRvIGJlIGNvaGVyZW50IGJlZm9yZSB3ZQorICAgICAgICAgKiB2
YWxpZGF0ZSBpdHMgY29udGVudHMgZm9yIHNhZmV0eS4KKyAgICAgICAgICov
CisgICAgICAgIGlmICggKG54ICYgUEdUX25vbl9jb2hlcmVudCkgJiYgdHlw
ZSAhPSBQR1Rfd3JpdGFibGVfcGFnZSApCisgICAgICAgIHsKKyAgICAgICAg
ICAgIHZvaWQgKmFkZHIgPSBfX21hcF9kb21haW5fcGFnZShwYWdlKTsKKwor
ICAgICAgICAgICAgY2FjaGVfZmx1c2goYWRkciwgUEFHRV9TSVpFKTsKKyAg
ICAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGFkZHIpOworCisgICAgICAg
ICAgICBwYWdlLT51LmludXNlLnR5cGVfaW5mbyAmPSB+UEdUX25vbl9jb2hl
cmVudDsKKyAgICAgICAgfQorCisgICAgICAgIC8qCiAgICAgICAgICAqIE5v
IHNwZWNpYWwgdmFsaWRhdGlvbiBuZWVkZWQgZm9yIHdyaXRhYmxlIG9yIHNo
YXJlZCBwYWdlcy4gIFBhZ2UKICAgICAgICAgICogdGFibGVzIGFuZCBHRFQv
TERUIG5lZWQgdG8gaGF2ZSB0aGVpciBjb250ZW50cyBhdWRpdGVkLgogICAg
ICAgICAgKgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L2dyYW50X3Rh
YmxlLmMgYi94ZW4vYXJjaC94ODYvcHYvZ3JhbnRfdGFibGUuYwppbmRleCAw
MzI1NjE4Yzk4ODMuLjgxYzcyZTYxZWQ1NSAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L3B2L2dyYW50X3RhYmxlLmMKKysrIGIveGVuL2FyY2gveDg2L3B2
L2dyYW50X3RhYmxlLmMKQEAgLTEwOSw3ICsxMDksMTcgQEAgaW50IGNyZWF0
ZV9ncmFudF9wdl9tYXBwaW5nKHVpbnQ2NF90IGFkZHIsIG1mbl90IGZyYW1l
LAogCiAgICAgb2wxZSA9ICpwbDFlOwogICAgIGlmICggVVBEQVRFX0VOVFJZ
KGwxLCBwbDFlLCBvbDFlLCBubDFlLCBnbDFtZm4sIGN1cnIsIDApICkKKyAg
ICB7CisgICAgICAgIC8qCisgICAgICAgICAqIFdlIGFsd2F5cyBjcmVhdGUg
bWFwcGluZ3MgaW4gdGhpcyBwYXRoLiAgSG93ZXZlciwgb3VyIGNhbGxlciwK
KyAgICAgICAgICogbWFwX2dyYW50X3JlZigpLCBvbmx5IHBhc3NlcyBwb3Rl
bnRpYWxseSBub24temVybyBjYWNoZV9mbGFncyBmb3IKKyAgICAgICAgICog
TU1JTyBmcmFtZXMsIHNvIHRoaXMgcGF0aCBkb2Vzbid0IGNyZWF0ZSBub24t
Y29oZXJlbnQgbWFwcGluZ3Mgb2YKKyAgICAgICAgICogUkFNIGZyYW1lcyBh
bmQgdGhlcmUncyBubyBuZWVkIHRvIGNhbGN1bGF0ZSBQR1Rfbm9uX2NvaGVy
ZW50LgorICAgICAgICAgKi8KKyAgICAgICAgQVNTRVJUKCFjYWNoZV9mbGFn
cyB8fCBpc19pb21lbV9wYWdlKGZyYW1lKSk7CisKICAgICAgICAgcmMgPSBH
TlRTVF9va2F5OworICAgIH0KIAogIG91dF91bmxvY2s6CiAgICAgcGFnZV91
bmxvY2socGFnZSk7CkBAIC0yOTQsNyArMzA0LDE4IEBAIGludCByZXBsYWNl
X2dyYW50X3B2X21hcHBpbmcodWludDY0X3QgYWRkciwgbWZuX3QgZnJhbWUs
CiAgICAgICAgICAgICAgICAgIGwxZV9nZXRfZmxhZ3Mob2wxZSksIGFkZHIs
IGdyYW50X3B0ZV9mbGFncyk7CiAKICAgICBpZiAoIFVQREFURV9FTlRSWShs
MSwgcGwxZSwgb2wxZSwgbmwxZSwgZ2wxbWZuLCBjdXJyLCAwKSApCisgICAg
eworICAgICAgICAvKgorICAgICAgICAgKiBHZW5lcmFsbHksIHJlcGxhY2Vf
Z3JhbnRfcHZfbWFwcGluZygpIGlzIHVzZWQgdG8gZGVzdHJveSBtYXBwaW5n
cworICAgICAgICAgKiAobjFsZSA9IGwxZV9lbXB0eSgpKSwgYnV0IGl0IGNh
biBiZSBhIHByZXNlbnQgbWFwcGluZyBvbiB0aGUKKyAgICAgICAgICogR05U
QUJPUF91bm1hcF9hbmRfcmVwbGFjZSBwYXRoLgorICAgICAgICAgKgorICAg
ICAgICAgKiBJbiBzdWNoIGNhc2VzLCB0aGUgUFRFIGlzIGZ1bGx5IHRyYW5z
cGxhbnRlZCBmcm9tIGl0cyBvbGQgbG9jYXRpb24KKyAgICAgICAgICogdmlh
IHN0ZWFsX2xpbmVhcl9hZGRyKCksIHNvIHdlIG5lZWQgbm90IHBlcmZvcm0g
UEdUX25vbl9jb2hlcmVudAorICAgICAgICAgKiBjaGVja2luZyBoZXJlLgor
ICAgICAgICAgKi8KICAgICAgICAgcmMgPSBHTlRTVF9va2F5OworICAgIH0K
IAogIG91dF91bmxvY2s6CiAgICAgcGFnZV91bmxvY2socGFnZSk7Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:13:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345248.570906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2i-0002Ro-5a; Thu, 09 Jun 2022 12:13:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345248.570906; Thu, 09 Jun 2022 12:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH2i-0002Rd-1j; Thu, 09 Jun 2022 12:13:40 +0000
Received: by outflank-mailman (input) for mailman id 345248;
 Thu, 09 Jun 2022 12:13:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sdCg=WQ=gmail.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1nzH29-0006fn-Sv
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:13:06 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e6fb2e4-e7ed-11ec-b605-df0040e90b76;
 Thu, 09 Jun 2022 14:12:56 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 gc3-20020a17090b310300b001e33092c737so20863806pjb.3
 for <xen-devel@lists.xenproject.org>; Thu, 09 Jun 2022 05:13:05 -0700 (PDT)
Received: from [192.168.66.3] (p912131-ipoe.ipoe.ocn.ne.jp. [153.243.13.130])
 by smtp.gmail.com with ESMTPSA id
 g23-20020a170902d5d700b00166423df3cdsm14594081plh.209.2022.06.09.05.13.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 09 Jun 2022 05:13:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e6fb2e4-e7ed-11ec-b605-df0040e90b76
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=4OEpwKfRhk+/Bw+ajXIlaom3ggZwPPaPsXOJxTS2d7g=;
        b=YePZDPXQlHwHra1M8oaueRHnzF0+1wwx47W6shZG9D5RX6o4W641iQqi41eqckK2Ze
         2OeonslVAfH8l7bYS+Wa7mQM/QQ1KzGzpWeI3GQGAEPOJG+1soa40qQ2qwEI/yPRb6bM
         sPm13NdTMu9IZh4xRkejAKcDCC1HaxLedergL0HkYC5T9hk1PVpp7/vmVAr23rU+jwm7
         00CK+LDaqrS9BqV6miBVfQvc19UagmlQq0M7qoeSSgBw3vIPUJJ8fq4RabRYGX44Rq5L
         dAPEA20CTXkeEghzUucB5RvM0j0BBWOSE8nqxDhxEe6ops21mZcWBT5AFYcyORzghpWY
         3EJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=4OEpwKfRhk+/Bw+ajXIlaom3ggZwPPaPsXOJxTS2d7g=;
        b=v0zukAiQ24QjLsJ5/PFCEvHEF3olVXuXJYcuCTQUg7UYgj7EoKfR7RQuaOcaLo5ExN
         Xl75l33XCafqEz1F1WB7kEhhcF8s6XTGkMzvg9U0W6QQiC/lpeL26Lhn1AYKu0WTkY4G
         GrxAaHzhDmkISOL6O0wGcld02PeLLHS89zaeo9X2QXfvTyaq+3nHcPsQC4PJSD4k43rq
         MqiW/7xfx61cpjGPVDIBVmoL8L1+YtbxMbMTTqeAv5PdDSbz6mXx7gYP/d4mT4BZicl8
         +Nm/LGMTN6XTTON364LS+p3Fesmo/9hVcLnEDV3wYzNMZPCoahPcEdo3qsmqnYwbm4U8
         HFgg==
X-Gm-Message-State: AOAM5316/VKWmxrWY+bayxB/4dWGZvSjMHHtGfANqTYYip2fGwcAOhWG
	cKC8XXnG4dpmbesEl/Bj3bc=
X-Google-Smtp-Source: ABdhPJxolsVGMF3LD135fEKYtts558HztJx0DtQ/nXL2nAEKTVv629O5Bqlcj2rcS1Ve97CgGhad9A==
X-Received: by 2002:a17:903:246:b0:153:857c:a1f6 with SMTP id j6-20020a170903024600b00153857ca1f6mr39078667plh.153.1654776783535;
        Thu, 09 Jun 2022 05:13:03 -0700 (PDT)
Message-ID: <c0c610b8-df0c-7e2a-385f-bcf70c987182@gmail.com>
Date: Thu, 9 Jun 2022 21:12:59 +0900
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux aarch64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v3 2/3] ui: Deliver refresh rate via QemuUIInfo
Content-Language: en-US
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
 "Michael S . Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220226115516.59830-1-akihiko.odaki@gmail.com>
 <20220226115516.59830-3-akihiko.odaki@gmail.com>
 <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
 <19ae71a4-c988-3c9e-20d6-614098376524@gmail.com>
 <20220609120214.bay3cl24oays6x6d@sirius.home.kraxel.org>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
In-Reply-To: <20220609120214.bay3cl24oays6x6d@sirius.home.kraxel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2022/06/09 21:02, Gerd Hoffmann wrote:
> On Thu, Jun 09, 2022 at 08:45:41PM +0900, Akihiko Odaki wrote:
>> On 2022/06/09 19:28, Gerd Hoffmann wrote:
>>>> --- a/include/ui/console.h
>>>> +++ b/include/ui/console.h
>>>> @@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
>>>>        int       yoff;
>>>>        uint32_t  width;
>>>>        uint32_t  height;
>>>> +    uint32_t  refresh_rate;
>>>>    } QemuUIInfo;
>>>>    /* cursor data format is 32bit RGBA */
>>>> @@ -426,7 +427,6 @@ typedef struct GraphicHwOps {
>>>>        void (*gfx_update)(void *opaque);
>>>>        bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
>>>>        void (*text_update)(void *opaque, console_ch_t *text);
>>>> -    void (*update_interval)(void *opaque, uint64_t interval);
>>>>        void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
>>>>        void (*gl_block)(void *opaque, bool block);
>>>>    } GraphicHwOps;
>>>
>>> So you are dropping update_interval, which isn't mentioned in the commit
>>> message at all.  Also this patch is rather big.  I'd suggest:
>>>
>>> (1) add refresh_rate
>>> (2) update users one by one
>>> (3) finally drop update_interval when no user is left.
>>>
>>> thanks,
>>>     Gerd
>>>
>>
>> I think 1 and 3 would have to be done at once, since refresh_rate and
>> update_interval would interfere with each other otherwise.
> 
> Well, between 1 and 3 both old and new API are active.  Shouldn't be
> much of a problem because the GraphicHwOps implementations are using
> only the one or the other.
> 
> take care,
>    Gerd
> 

The only GraphicHwOps implementation updated with this change is xenfb. 
xenfb can be switched to use refresh_rate in step 1 or 3.

Switching to use refresh_rate in step 1 would break the refresh rate 
propagation until all host displays are updated to set refresh_rate 
instead of calling update_interval.

Switching to use refresh_rate in step 3 would break the refresh rate 
propagation whenever a host display has already been updated to set 
refresh_rate instead of calling update_interval, while xenfb does not 
yet use refresh_rate.

Regards,
Akihiko Odaki


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:16:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:16:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345294.570917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH5V-0004is-Lp; Thu, 09 Jun 2022 12:16:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345294.570917; Thu, 09 Jun 2022 12:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH5V-0004il-Ir; Thu, 09 Jun 2022 12:16:33 +0000
Received: by outflank-mailman (input) for mailman id 345294;
 Thu, 09 Jun 2022 12:16:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzH5U-0004iH-MS
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:16:32 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03on060e.outbound.protection.outlook.com
 [2a01:111:f400:fe09::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fec863bf-e7ed-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:16:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6132.eurprd04.prod.outlook.com (2603:10a6:208:13c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 12:16:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 12:16:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fec863bf-e7ed-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N/6mSSvtvjd7U04wTW92EsuoaDac5Wu4em6APdNwMDhzE7H/HApRTkOkB1v6QJGPSzNzSv2+IfUG/Kdig8MKCSPVlIhaU5gLbv1RfcC59jbhCu0seBZKzw8aHLYH30EpDtlaIzGHC2JIC9Nv6uc4080YtbmEod6wSMx7I+psJOZplRFZ/UDBv74BwQJKHkCcQkKamCfUqNrgcvXYL+LAASe2Xc2DgRdbv5clZRax7IYbix0aLSeiovd4cA8TVaMDXDMp4htNFm+4l52yPEEGCFVGVWzI4m8VP6eOtS5eqAdBpAPw4HRs3Tzu6RqOABZ5SlM1ANGosrFIayBn+ck3dw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=B3R5uzim8M6DYJNBfv5SQTaEjZKPTIOqo2PXp59J748=;
 b=fbBYJBskB6WjgefJgEI0/9X/LzQFnZYc4oC2Vxk2ejN48N97BPQ5eS9aiCq8DLSN6+4eY/amZcfnPwIXNkHilypSJG2VNK6JAqtOORc0tXiO6VDtShOZyHLAF/YjqOi0O/uQxTfP2nbqrfgMOW3UUycc9yCu3tIL+Ak0iwrXuDNsExRoWydRAnIf1EzzWaMTRPKKGARTOkP3PlVYMT+1/saksgJVY+uMUihL5G53n1zqHjmJgAurtJe71+Pzz7TDDAHUXC3oxctnofnIB2CKpBG6EloR2Pexwbimnpsev4rEyPgotFreJdXrO3CViKmCBgMg4BmjHohupjbFyREYdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B3R5uzim8M6DYJNBfv5SQTaEjZKPTIOqo2PXp59J748=;
 b=Scow91BuAHLGpTCgEhZzdM6QZbEqYHOtjqEvz2z5ZraJWhzy7gWGaRTbvDuJkxOecco02FcXN6P0Lii0bok3JWrn4AZn0DzkmEj4J8BPpUluaoK83awOU5a9PxHU2LBEDcT0BFqvjnJL4UzkHGdhhgQYrDv1LeS31Cq12E/hKlr0qIO5NBwjvHB93Kp0CFBy/HZfriHk1ZYUHFtHq6d/o4VkIbhrs9ys1DRZbFzPE5p/p+FE6HH1Y9+QVmRcD9cW5ea6tJDA4hvjPVbHpIFya0sH4txiESfL/GrXsJLkf82SKwyWEIsZYFETc7ITl6FiwjWUDPgR0CY3Shk57Svd0A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5195b659-ae2f-be1f-eb5a-bbe3e4b5d9fb@suse.com>
Date: Thu, 9 Jun 2022 14:16:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
 <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AS9PR04CA0086.eurprd04.prod.outlook.com
 (2603:10a6:20b:50e::24) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4eff899b-6fb4-4463-f074-08da4a11e215
X-MS-TrafficTypeDiagnostic: AM0PR04MB6132:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB61326DDE08FB6AC258BE352BB3A79@AM0PR04MB6132.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GCcNzt6pmt7PHuU2rb0avcgJIt8LCOg2GDcaQqXxE87e+QMKEH9tYGK3Ffkavgr5Ligq/fbd5TlA7eSlsQXx9ZLmGlUHZW3Gpskufs7L1xtCcShtYYo8OXb3kN8yIDHDHe9cRid9ezexgZMkpqMPtikAQtmYpweo3QDwELiSn+EczcgzKgqPdwYxAIN1d+7hGJ/K5bmRfI/JtcbKF0sD4h9+Bnf7BCTHeifRSRDfrJZvUh90Pfae/KEiS0/EuHJUBXrNnUsRcwyHVN2EVmXeKaRu1Pii6NWCU00Bq2Gia9SV4fLPQzaFjzcbOlN+ZuW6/LLwhQ4ZBmBtfVGic7vdTeovhIRDB4GNXjHLFFHrAFzBvm58EAMSzn8fO5RfDLy0pyv/slWsK50cbW1E8w2yeqouxQMYL9JcEb7p9b5zCcwq1emQOlnUrdVlVhAdTObO075eMy+RXmbF0PbBp3i/deKxw4zRO4eQk/rlxKpNkHasxSJ1t9RescoUMG0ItsfPnjd0JUeuSfK+sQlu9c8z+4lhOyX3W4cH/SszDNOCMuBlq0dNC/BGty3JJH8CKXHyAMPOuKCmJAGmWDmX/4dgo/BeT4x0KwZtaiwVTzAkR5V83bjDFZOMCuZg6zvz1VMOHcyfUAhNO0ZqUCKO+G1+v6CMDLru991k0hB4tWfNsNniDv9Dcjo5df+wCQtmll2SKG0+0tpuSKoomKL/jI2cXKP6qnvZaQwcgticqkXg32rTqOgMrx1BMMkm5zJJRwgsfOZlYB3+c0rswV8X1uZBKMil2+kvFIkRR8Ah5BWx5CM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(4326008)(54906003)(66946007)(66476007)(31686004)(6916009)(66556008)(6506007)(2616005)(26005)(36756003)(6512007)(83380400001)(186003)(53546011)(8676002)(31696002)(38100700002)(86362001)(6486002)(2906002)(316002)(508600001)(5660300002)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y2d6NjJjU0dralgzbElmdDkvUkJnSDlYM2U3YkVYeUlwMUxGN3hVUkpIWXd0?=
 =?utf-8?B?KzFJWjJSSFk3S0pBNWlYUkJXd2lKNk0xempmVUFORWhwclVNT2hRSWs0Wnh3?=
 =?utf-8?B?elhRNlRzUGZxdk02eWFpUVl1ZmRnNjNobE84dWdaeU5vZkZwUmxnK0lxcjh6?=
 =?utf-8?B?OENCMU81VlU0V2RlQzladjBqYmEvS2tOdW4vb1I3a3RvYUloWitXMEVtSHlR?=
 =?utf-8?B?dm1jUHNwejhPakZjRWZlWnJ2MEthYjlFa0FDTkxzWWJSelRoa3pOV2Q2WWEv?=
 =?utf-8?B?a0oydCtOcU5RNXdId2dHNFNUcVhNWVdtTnpIVkZ3bGkrK3dtcVpzUTVWbUlI?=
 =?utf-8?B?MTh3c3J3aGxRQzl6eHRvMVBrbDdvUGFDVk1DQVNLTmRkbGpNS1VGZEFlRnVW?=
 =?utf-8?B?Yi9aQ1BFOWV4dWx4VlVIUHNGZ2RkUndjdlgvdE1LWUV4L2FCSXZTYmxsMmZB?=
 =?utf-8?B?bkh1TUpydEd6OVJaUWVZY245bDVuV3NCT3dmNUNPZ09oelpPa2NQKytleG9v?=
 =?utf-8?B?ZWJwVzR4ODV6bDc1cmF6VHJZSDZRZG1NZnJGY1V2WEt3YnUrWWNtcUFOREZT?=
 =?utf-8?B?MlVNdUJBeS9hU3Z1Y0RwZHFPNlV0VmdXQTAyU2FuZXpncDlsSXRGbHhTK0tB?=
 =?utf-8?B?aVVhS3FaZUhBL2tRcXJ1ai95RmFpMC9GTExzMHBTRTlwbkdqbUV5TkhzdWlw?=
 =?utf-8?B?ZXdpQ0syZ0dzdDhaMzErcEgwTTFPbmpGTlBuL0djbjFiQ0JHT2hJUUhpMG14?=
 =?utf-8?B?L2d3ZVlxK0k4cFdPc1RtOGpnTlVxcDZnNG85ZENWWjM4ODNxV2dsVTVjOUdn?=
 =?utf-8?B?NHI3STNhOC9vSnNJVkkrL3YwalBNT0Frc0o2aTV1TXBmM1I0UjNiQ3BKaDRu?=
 =?utf-8?B?MmlTaVllWE1MWUo3d1gzZmZ6ZzVNTDB4d051Nmhsd3dCMVBXaC85d3k5Wklx?=
 =?utf-8?B?MVZmTG5HQ3J4ZnVES1RFNDFrVS9kczZ5ZHg3VDQyYThjdHdhN3ZrQkIydFNP?=
 =?utf-8?B?TFFiVFZteG5veDFuMGFRV1NXNnRYNWRjemxhc3lydmVlRVU1blJ4SjgzTXRU?=
 =?utf-8?B?L1NDNjQzY2Z6RnVQcWpMRE95MlB1MmZOTzVOZmVibW9JN2dNNG1qQ3ZIamVM?=
 =?utf-8?B?NzJtN3ViVGkvUWlsVjRPUysyWkRIL09mOTVjMi9zdnozK0VGNzg4emN6Y2Yy?=
 =?utf-8?B?ZHhUUUNRbVRoQ2F0emtVM01wWHptQ0FESnkvTjdqeXdPYnNnY3RWYmRBdHU0?=
 =?utf-8?B?eWNpRzJPQ3J4QlZPN2hLaCtoOFoxVzVrZlBsL0JUUWsvVHlKekc3S3lNbE1N?=
 =?utf-8?B?eWZhc2RHSkpHaElpem1QWUZZbWdvOHFZSlpkcjZLWHRaS0VZVEU0OXJXcXIy?=
 =?utf-8?B?b3dRRGJYME45L21QQlA3V2hBdnJYSE54Q3RxQVNyU0JHanJhajZDeHlXK0Vw?=
 =?utf-8?B?Y25pU1JDbHhSZWVMWjZLTEtyWmpKSTRFWVRDZUxkOERuNHJPMFRSZ25VUTJi?=
 =?utf-8?B?bHNtaW9EUkdSdU9YVEdVTTgxcFBNVW11Wng4WUcvMml0cVR0ZnVFNzNpeFda?=
 =?utf-8?B?eVVuMVE0bXhIVkc1bU9CQUZEQmhPU0UrbW0rdlB3RGlPL1k5bzIxeFd4ZHpx?=
 =?utf-8?B?SkRmbEZkT3R4cjFTTFFpZUNHWjRNSVZqWFNjM01mLzBtZ3B3SUVFdjVJN244?=
 =?utf-8?B?QlZoNU5adjRZazFseFZ0NGhsa3crbE1mNXRsOXpWVlJBdkJORHd5WmRvS0ht?=
 =?utf-8?B?SGoya2ZsaTFkbW9HKzFhSk10NXJJQmVybjRkMkRkZyttSUdSTTlVQnJ0TUNP?=
 =?utf-8?B?UGk3R3I5aEJONDl2OFh6d1Z1eWdFTXZ1aVpvZUtvTVh1eGZ3U2dIR1hLcDM1?=
 =?utf-8?B?NkZ0MENVVjVJUkRLbGtTZGMydXQ4dXBaU0tLZnRzNUE1TG05cE0wbGpNQzRq?=
 =?utf-8?B?c3lnbzNMb0dGUkdvajVBcGZ0a2lLV3U3YldPalliMWl6dzZZWTJZdC91azNV?=
 =?utf-8?B?bDJKYW03OUFEeEVaOWlRS0EvRVFUS3VRQlVsSHZQVzlmOCtrNXg1N1htVnZJ?=
 =?utf-8?B?SGJjUUQ4bWwzQzRQMW5BUEl0WC9hYWVDVkF5TXlyQnRtTVFNSTdZMEVtRnJq?=
 =?utf-8?B?akg1YitxYXlxbGt2RUFCZlRGZisxSDFTTURMelhTKzRvVkVqcFBwV1ZxY0h6?=
 =?utf-8?B?aXlUSzArV2p5cWVVT1NIYTNYdytjOC9TSlpHcm5odjVubXpEZEpldHVqY3RX?=
 =?utf-8?B?S3lGT2VZeTRjYWVqWkZxcTJHbUFhazZCZUVtN0RWWGQraDl3QXhtZGVSZGxi?=
 =?utf-8?B?ck1BRVNqcXl0bGdhY2dYbzhKWDVld0JHZHpkMzlIcU9YcnhUTGVuZz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4eff899b-6fb4-4463-f074-08da4a11e215
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 12:16:29.5584
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3O+zRMgmfXiQcwlQFqSXzIeMBQi3jG18M07tPSZL31bF+gdT6oqHmkgoHEJ8vY/8pb4A9zh2bA+8ch+inFYslw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6132

On 09.06.2022 13:51, Bertrand Marquis wrote:
>> On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>> On 09.06.2022 12:16, Bertrand Marquis wrote:
>>>> On 1 Jun 2022, at 17:59, Anthony PERARD <anthony.perard@citrix.com> wr=
ote:
>>>>
>>>> Use "define" for the headers*_chk commands as otherwise the "#"
>>>> is interpreted as a comment and make can't find the end of
>>>> $(foreach,).
>>>>
>>>> Adding several .PRECIOUS as without them `make` deletes the
>>>> intermediate targets. This is an issue because the macro $(if_changed,=
)
>>>> check if the target exist in order to decide whether to recreate the
>>>> target.
>>>>
>>>> Removing the call to `mkdir` from the commands. Those aren't needed
>>>> anymore because a rune in Rules.mk creates the directory for each
>>>> $(targets).
>>>>
>>>> Remove "export PYTHON" as it is already exported.
>>>
>>> With this change, compiling for x86 is now ending up in:
>>> CHK     include/headers99.chk
>>> make[9]: execvp: /bin/sh: Argument list too long
>>> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>>>
>>> Not quite sure yet why but I wanted to signal it early as other might b=
e impacted.
>>>
>>> Arm and arm64 builds are not impacted.
>>
>> Hmm, that patch has passed the smoke push gate already, so there likely =
is
>> more to it than there being an unconditional issue. I did build-test thi=
s
>> before pushing, and I've just re-tested on a 2nd system without seeing a=
n
>> issue.
>=20
> I have the problem only when building using Yocto, I did a normal build a=
nd the
> issue is not coming.
>=20
> Doing a verbose compilation I have this (sorry for the long lines):
>=20
>  for i in include/public/vcpu.h include/public/errno.h include/public/kexec.h include/public/argo.h include/public/xen.h include/public/nmi.h include/public/xencomm.h include/public/xenoprof.h include/public/device_tree_defs.h include/public/version.h include/public/memory.h include/public/features.h include/public/sched.h include/public/xen-compat.h include/public/callback.h include/public/vm_event.h include/public/grant_table.h include/public/physdev.h include/public/tmem.h include/public/hypfs.h include/public/platform.h include/public/pmu.h include/public/elfnote.h include/public/trace.h include/public/event_channel.h include/public/io/vscsiif.h include/public/io/kbdif.h include/public/io/protocols.h include/public/io/ring.h include/public/io/displif.h include/public/io/fsif.h include/public/io/blkif.h include/public/io/console.h include/public/io/sndif.h include/public/io/fbif.h include/public/io/libxenvchan.h include/public/io/netif.h include/public/io/usbif.h include/public/io/pciif.h include/public/io/tpmif.h include/public/io/xs_wire.h include/public/io/xenbus.h include/public/io/cameraif.h include/public/hvm/pvdrivers.h include/public/hvm/e820.h include/public/hvm/hvm_xs_strings.h include/public/hvm/dm_op.h include/public/hvm/ioreq.h include/public/hvm/hvm_info_table.h include/public/hvm/hvm_vcpu.h include/public/hvm/hvm_op.h include/public/hvm/params.h; do x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -ansi -Wall -Werror -include stdint.h -S -o /dev/null $i || exit 1; echo $i; done >include/headers.chk.new; mv include/headers.chk.new include/headers.chk
> |       rm -f include/headers99.chk.new;  echo "#include "\"include/public/io/9pfs.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/9pfs.h >> include/headers99.chk.new;  echo "#include "\"include/public/io/pvcalls.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/pvcalls.h >> include/headers99.chk.new; mv include/headers99.chk.new include/headers99.chk
> | make[9]: execvp: /bin/sh: Argument list too long
> | make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> | make[9]: *** Waiting for unfinished jobs....
>
> So the command passed to the sub-shell by make is quite long.
>
> No idea why this comes out only when building in Yocto, but I will dig a
> bit.

Maybe Yocto has an unusually low limit on command arguments' total size?
The whole thing is just over 2,500 characters, which isn't unusually long
for Unix-like environments.
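One way to sanity-check that hypothesis inside the failing environment is to compare the kernel's argument-size limit with the length of the generated recipe. A hedged sketch (assumes a POSIX shell; `getconf ARG_MAX` is POSIX, and note the limit covers the environment plus all arguments passed to execve(), while build wrappers such as Yocto's can impose lower effective limits of their own):

```shell
# Query the per-exec argument/environment size limit and compare it with
# the (approximate) length of the headers*_chk command quoted above.
limit=$(getconf ARG_MAX)
cmdlen=2500   # rough length of the generated recipe, per the thread
echo "ARG_MAX=$limit"
if [ "$cmdlen" -lt "$limit" ]; then
    echo "recipe fits within ARG_MAX"
else
    echo "recipe exceeds ARG_MAX"
fi
```

Running the same check from within the Yocto build (e.g. inside its pseudo/fakeroot wrapper) would show whether the limit seen by make's sub-shell really differs.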

Jan
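P.S. The "define" point from the quoted commit message can be sketched as follows (a hedged example, assuming GNU make; the file name and header path are illustrative only). In a one-line `cmd = ...` assignment a bare `#` starts a make comment and truncates the value, whereas inside a define block it is kept literally, which the generated headers*_chk commands rely on:

```shell
# Build a tiny makefile whose command contains a literal '#', wrapped in
# a define block so make does not treat it as a comment.
{
  echo 'define cmd_chk'
  echo "echo '#include \"public/xen.h\"'"
  echo 'endef'
  echo ''
  echo 'check:'
  printf '\t@$(cmd_chk)\n'
} > /tmp/define_demo.mk
make -f /tmp/define_demo.mk check
```

With a plain `cmd_chk = echo '#include ...'` assignment instead, make would strip everything from the `#` onward and hand the shell an unterminated quote.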


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:17:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:17:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345310.570928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH64-0005Kh-6a; Thu, 09 Jun 2022 12:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345310.570928; Thu, 09 Jun 2022 12:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzH64-0005KY-1v; Thu, 09 Jun 2022 12:17:08 +0000
Received: by outflank-mailman (input) for mailman id 345310;
 Thu, 09 Jun 2022 12:17:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t7Sx=WQ=citrix.com=prvs=15236f833=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1nzH62-0004iH-FY
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:17:06 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1103407c-e7ee-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:17:03 +0200 (CEST)
Received: from mail-bn8nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jun 2022 08:16:59 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by SN4PR03MB6784.namprd03.prod.outlook.com (2603:10b6:806:217::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.14; Thu, 9 Jun
 2022 12:16:57 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308%4]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 12:16:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1103407c-e7ee-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654777023;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=SFHZGe/NyJm4UgOLlTeQkgHrC66QfJCVmXFEpeixUB4=;
  b=IqfO7Nbb7oszHpOxX5ErzzrdwJxX6CSdeZNAna89U4Ty7NboRAuG5sBv
   ZmSdGeDHeBccPgNQDiuQeTxengzi4c8Wi3CnLfJbZgbu3NCIIB5fJSjTF
   TRttk9SiiIlk4tsyAQD24sCqt45HKAddmu8NT03xgANMR5pv59XyZvqp3
   U=;
X-IronPort-RemoteIP: 104.47.55.170
X-IronPort-MID: 73221298
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,287,1647316800"; 
   d="asc'?scan'208,217";a="73221298"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roberto Bagnara <roberto.bagnara@bugseng.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Roger Pau Monne <roger.pau@citrix.com>,
	Artem Mygaiev <Artem_Mygaiev@epam.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "fusa-sig@lists.xenproject.org"
	<fusa-sig@lists.xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Thread-Topic: MOVING COMMUNITY CALL Call for agenda items for 9 June Community
 Call @ 1500 UTC
Thread-Index:
 AQHYdZ38SXqYMiOAsUySyh3MMtjDRK06wjeAgAA+pgCAAMtvgIAHcfwAgAO5uACAAAOYAIAADsYA
Date: Thu, 9 Jun 2022 12:16:57 +0000
Message-ID: <A59C0FF8-649D-4B82-AC56-2B8089872FA2@citrix.com>
References: <CC75A251-2695-4E9E-95A7-043874B22F32@citrix.com>
 <alpine.DEB.2.22.394.2206010942010.1905099@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2206011324400.1905099@ubuntu-linux-20-04-desktop>
 <ebe4b409-318f-6b2c-0e05-fe9256528b32@suse.com>
 <alpine.DEB.2.22.394.2206061731421.277622@ubuntu-linux-20-04-desktop>
 <45c4d8fa-06de-b4a2-5688-14b9cbe5b48c@bugseng.com>
 <a10972fc-0f25-4187-4386-e73b4f5563af@suse.com>
In-Reply-To: <a10972fc-0f25-4187-4386-e73b4f5563af@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
Content-Type: multipart/signed;
	boundary="Apple-Mail=_E84EEBFA-12A4-4474-989F-9003B89B2E08";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 263902de-a979-40c9-0490-08da4a11f2b2
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jun 2022 12:16:57.2373
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: +K5m30vCVW6LWSoz5ODzGYKnVOOVxauU8GDGWOmRj3gnSlq29KX90o7r+4hOOwIfzJhFKcwvIdhtBgB5jJaM5ahS9piZCKFAjxMKKBRY158=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN4PR03MB6784

--Apple-Mail=_E84EEBFA-12A4-4474-989F-9003B89B2E08
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_6AA87F33-84D6-45EB-9BE4-89565F3D7D43"


--Apple-Mail=_6AA87F33-84D6-45EB-9BE4-89565F3D7D43
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8



> On 9 Jun 2022, at 12:24, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 09.06.2022 13:11, Roberto Bagnara wrote:
>> On 07/06/22 04:17, Stefano Stabellini wrote:
>>> # Rule 9.1 "The value of an object with automatic storage duration
>>> shall not be read before it has been set"
>>>
>>> The question is whether -Wuninitialized already covers this case or not.
>>> I think it does.
>>>
>>> Eclair is reporting a few issues where variables are "possibly
>>> uninitialized". We should ask Roberto about them; I don't think they are
>>> actual errors? More like extra warnings?
>>
>> No, -Wuninitialized is not reliable, as it has plenty of (well-known)
>> false negatives. This is typical of compilers, for which the generation
>> of warnings is only a secondary objective. I wrote about that here:
>>
>> https://www.bugseng.com/blog/compiler-warnings-use-them-dont-trust-them
>>
>> On the specifics:
>>
>> $ cat p.c
>> int foo (int b)
>> {
>>     int a;
>>
>>     if (b)
>>     {
>>         a = 1;
>>     }
>>
>>     return a;
>> }
>>

> I understand what you're saying, yet I'd like to point out that adding
> initializers "blindly" may give a false sense of code correctness.
> Among other things it takes away the chance for tools to point out
> possible issues. Plus some tools warn about stray initializers ...

Right — if you always set "int a=0;", then you're getting a known value;
but if your algorithm relies on it being something specific (and not zero),
then it's not clear the resulting software is actually more reliable. If
you don't initialise it, there's at least a chance the compiler will be
able to tell you that you made a mistake; if you explicitly initialise it,
then it's all on you.

 -George

--Apple-Mail=_6AA87F33-84D6-45EB-9BE4-89565F3D7D43--

--Apple-Mail=_E84EEBFA-12A4-4474-989F-9003B89B2E08--


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:32:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:32:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345406.570938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHKa-0008Af-HD; Thu, 09 Jun 2022 12:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345406.570938; Thu, 09 Jun 2022 12:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHKa-0008AY-EB; Thu, 09 Jun 2022 12:32:08 +0000
Received: by outflank-mailman (input) for mailman id 345406;
 Thu, 09 Jun 2022 12:32:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzHKZ-0008AO-4a; Thu, 09 Jun 2022 12:32:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzHKY-00071e-Tw; Thu, 09 Jun 2022 12:32:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzHKY-0001xj-Fj; Thu, 09 Jun 2022 12:32:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzHKY-0004zD-FH; Thu, 09 Jun 2022 12:32:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g/aB293c9JUvLqH3fSaRM4fdtkxexLOYnc/X9sP/QWs=; b=BF7C2lR7ge0TZ62zBRGhZvIOh6
	YHDZ/rXfXMZk2ybHgCuiTW5q6WMeb24NoLQZ6aKw/gL1Im1bK+TCNuS479dC/dddZVmIIsH1RG7Or
	bQkSQuLIUYrSRaD/fTygDrzDwNbANglKH/m84RVvjaSmEWHaBA9xtIBrcoincArb+0Hk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170899-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170899: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
X-Osstest-Versions-That:
    xen=f3185c165d28901c3222becfc8be547263c53745
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 12:32:06 +0000

flight 170899 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170899/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
baseline version:
 xen                  f3185c165d28901c3222becfc8be547263c53745

Last test of basis   170889  2022-06-08 19:01:56 Z    0 days
Testing same since   170899  2022-06-09 09:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <jgrall@amazon.com> # Arm

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f3185c165d..59fbdf8a36  59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:33:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:33:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345418.570958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHLk-0000Oj-V3; Thu, 09 Jun 2022 12:33:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345418.570958; Thu, 09 Jun 2022 12:33:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHLk-0000Oc-RO; Thu, 09 Jun 2022 12:33:20 +0000
Received: by outflank-mailman (input) for mailman id 345418;
 Thu, 09 Jun 2022 12:33:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzHLi-0000Nv-P4
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:33:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzHLc-00072a-Sm; Thu, 09 Jun 2022 12:33:12 +0000
Received: from [54.239.6.190] (helo=[10.85.101.129])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzHLc-0003VY-KB; Thu, 09 Jun 2022 12:33:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=46GEvas4BzI7vJ445tP8xsphorxZ6Ll7LYD/ExAhkaU=; b=yS8sWkQDtTzvVPe/d8ToMkxnbU
	tds5CGZgqKhRCOoiuPBAc61n4V5Wps4NlpBvTtppD8seqlX4/LrzLmjV1dcx28XTAVfsHCvb7GwGS
	YVKHMxIgF7vlY6K51U9BxkBlTI+lwy8GTciLBO6HuJJtJBrL9njAfy5k8mDhSR5WHpqc=;
Message-ID: <b3d82607-a7ae-52b3-a9e4-d50780d35b9c@xen.org>
Date: Thu, 9 Jun 2022 13:33:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] xen/heap: Split init_heap_pages() in two
To: Jan Beulich <jbeulich@suse.com>
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-2-julien@xen.org>
 <23552ac7-7548-9dad-fe41-7dc581c78585@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <23552ac7-7548-9dad-fe41-7dc581c78585@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 09/06/2022 13:09, Jan Beulich wrote:
> On 09.06.2022 10:30, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, init_heap_pages() will call free_heap_pages() page
>> by page. To reduce the time to initialize the heap, we will want
>> to provide multiple pages at the same time.
>>
>> init_heap_pages() is now split in two parts:
>>      - init_heap_pages(): will break down the range in multiple set
>>        of contiguous pages. For now, the criteria is the pages should
>>        belong to the same NUMA node.
>>      - init_contig_pages(): will initialize a set of contiguous pages.
>>        For now the pages are still passed one by one to free_heap_pages().
> 
> Hmm, the common use of "contiguous" is to describe address correlation.
> Therefore I'm afraid I'd like to see "contig" avoided when you mean
> "same node". Perhaps init_node_pages()?

After the next patch, it will not only be the same node; it will also be
the same zone, at least. Also, in the future, I would like to
re-submit David Woodhouse's patch to exclude broken pages (see [1]).

Therefore, I think the name init_node_pages() would not be suitable. 
Please suggest a different name.

> 
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>   }
>>   
>>   /*
>> - * Hand the specified arbitrary page range to the specified heap zone
>> - * checking the node_id of the previous page.  If they differ and the
>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>> - * not freeing it to the buddy allocator.
>> + * init_contig_heap_pages() is intended to only take pages from the same
>> + * NUMA node.
>>    */
>> +static bool is_contig_page(struct page_info *pg, unsigned int nid)
>> +{
>> +    return (nid == (phys_to_nid(page_to_maddr(pg))));
>> +}
> 
> If such a helper is indeed needed, then I think it absolutely wants
> pg to be pointer-to-const. And imo it would also help readability if
> the extra pair of parentheses around the nested function calls was
> omitted. Given the naming concern, though, I wonder whether this
> wouldn't better be open-coded in the one place it is used. Actually
> naming is quite odd here beyond what I'd said further up: "Is this
> page contiguous?" Such a question requires two pages, i.e. "Are these
> two pages contiguous?" What you want to know is "Is this page on the
> given node?"

There will be more checks in the future (see the next patch). I created a
helper because it reduces the size of the loop in init_heap_pages(). I
would be OK with folding it if you strongly prefer that.
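For illustration only, here is the shape the helper could take with Jan's points applied (pointer-to-const, no extra parentheses, and a name answering "is this page on the given node?"). The struct below is a hypothetical stand-in for Xen's struct page_info; in Xen the node would come from phys_to_nid(page_to_maddr(pg)):

```c
#include <stdbool.h>

/* Hypothetical stand-in for Xen's struct page_info, for this sketch only;
 * in Xen the node is derived via phys_to_nid(page_to_maddr(pg)). */
struct page_info {
    unsigned int nid;
};

/* Pointer-to-const, no nested parentheses, and a name matching the
 * question actually being asked: "is this page on the given node?" */
static bool is_page_on_node(const struct page_info *pg, unsigned int nid)
{
    return pg->nid == nid;
}
```

The name is only one possibility; the thread has not settled on one.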

> 
>> +/*
>> + * This function should only be called with valid pages from the same NUMA
>> + * node.
>> + *
>> + * Callers should use is_contig_page() first to check if all the pages
>> + * in a range are contiguous.
>> + */
>> +static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
> 
> const again?

I will have a look.

> 
>> +                                   bool need_scrub)
>> +{
>> +    unsigned long s, e;
>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>> +
>> +    s = mfn_x(page_to_mfn(pg));
>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>> +    if ( unlikely(!avail[nid]) )
>> +    {
>> +        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&
> 
> IS_ALIGNED(s, 1UL << MAX_ORDER) to "describe" what's meant?

This is existing code and it is quite complex, so I would prefer not to
simplify and move the code in the same patch. I would be happy
to write a follow-up patch to switch to IS_ALIGNED().
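For reference, the two spellings are equivalent for power-of-two alignments. A standalone check (IS_ALIGNED is defined locally here to match Xen's semantics, and the MAX_ORDER value is picked arbitrarily for the demo):

```c
#include <stdbool.h>

#define MAX_ORDER 18  /* arbitrary demo value; Xen's is configuration-dependent */
/* Local macro matching the semantics of Xen's IS_ALIGNED(). */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* The open-coded alignment test as it appears in init_heap_pages(). */
static bool aligned_open_coded(unsigned long s)
{
    return !(s & ((1UL << MAX_ORDER) - 1));
}

/* The suggested, self-describing spelling. */
static bool aligned_macro(unsigned long s)
{
    return IS_ALIGNED(s, 1UL << MAX_ORDER);
}
```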

> 
>> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
>> +        unsigned long n;
>> +
>> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
>> +        BUG_ON(n > nr_pages);
>> +        if ( use_tail )
>> +            e -= n;
>> +        else
>> +            s += n;
>> +    }
>> +
>> +    while ( s < e )
>> +    {
>> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>> +        s += 1UL;
> 
> Nit (I realize the next patch will replace this anyway): Just ++s? Or
> at least a plain 1 without UL suffix?

I will switch to s++.

> 
>> @@ -1812,35 +1851,24 @@ static void init_heap_pages(
>>       spin_unlock(&heap_lock);
>>   
>>       if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
>> -        idle_scrub = true;
>> +        need_scrub = true;
>>   
>> -    for ( i = 0; i < nr_pages; i++ )
>> +    for ( i = 0; i < nr_pages; )
>>       {
>> -        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
>> +        unsigned int nid = phys_to_nid(page_to_maddr(pg));
>> +        unsigned long left = nr_pages - i;
>> +        unsigned long contig_pages;
>>   
>> -        if ( unlikely(!avail[nid]) )
>> +        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>>           {
>> -            unsigned long s = mfn_x(page_to_mfn(pg + i));
>> -            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>> -            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
>> -                            !(s & ((1UL << MAX_ORDER) - 1)) &&
>> -                            (find_first_set_bit(e) <= find_first_set_bit(s));
>> -            unsigned long n;
>> -
>> -            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
>> -                               &use_tail);
>> -            BUG_ON(i + n > nr_pages);
>> -            if ( n && !use_tail )
>> -            {
>> -                i += n - 1;
>> -                continue;
>> -            }
>> -            if ( i + n == nr_pages )
>> +            if ( !is_contig_page(pg + contig_pages, nid) )
>>                   break;
>> -            nr_pages -= n;
>>           }
> 
> Isn't doing this page by page in a loop quite inefficient? Can't you
> simply obtain the end of the node's range covering the first page, and
> then adjust "left" accordingly?

The page-by-page walk is necessary because we may need to exclude some
pages (see [1]) or the range may cross multiple zones (see [2]).
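Reduced to a standalone sketch, the page-by-page grouping has this shape (an array of node IDs stands in for phys_to_nid(); illustrative only):

```c
#include <stddef.h>

/* Length of the leading run of entries sharing nid[0] -- the same shape
 * as the inner contig_pages loop in the reworked init_heap_pages(). */
static size_t leading_same_node_run(const unsigned int *nid, size_t nr_pages)
{
    size_t contig_pages;

    if ( nr_pages == 0 )
        return 0;

    for ( contig_pages = 1; contig_pages < nr_pages; contig_pages++ )
        if ( nid[contig_pages] != nid[0] )
            break;

    return contig_pages;
}
```

The caller would hand each run to the per-node initialisation and advance by the run length.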

> I even wonder whether the admittedly
> lax original check's assumption couldn't be leveraged alternatively,
> by effectively bisecting to the end address on the node of interest
> (where the assumption is that nodes aren't interleaved - see Wei's
> NUMA series dealing with precisely that situation).
See above. We went this way because some pages need to be excluded.
The immediate use case is broken pages, but in the future we may also
need to exclude pages that contain guest content after Live-Update.
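Jan's bisection alternative, as a standalone sketch: binary-search for the last page on the same node as the first. Note it is only valid under the stated assumption that node ranges are not interleaved, and it cannot skip individual excluded (e.g. broken) pages, which is the objection raised above. Again, an array of node IDs stands in for phys_to_nid():

```c
#include <stddef.h>

/* Binary search for the length of the leading same-node run, assuming
 * the array is partitioned: nid[0..k-1] == nid[0], nid[k..] != nid[0].
 * Invalid if node ranges are interleaved or pages must be excluded. */
static size_t leading_run_bisect(const unsigned int *nid, size_t nr_pages)
{
    size_t lo = 0, hi = nr_pages;

    if ( nr_pages == 0 )
        return 0;

    while ( lo + 1 < hi )
    {
        size_t mid = lo + (hi - lo) / 2;

        if ( nid[mid] == nid[0] )
            lo = mid;   /* run extends at least to mid */
        else
            hi = mid;   /* run ends before mid */
    }

    return lo + 1;      /* nid[lo] is the last matching entry */
}
```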

I also plan to get rid of the loop in free_heap_pages() that marks each
page free. This would mean that pages would only be accessed once in
init_heap_pages() (I am still cleaning up that patch; it is a much
more controversial change).

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20200201003303.2363081-2-dwmw2@infradead.org/

> 
> Jan

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:49:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 12:49:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345436.570989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHb1-0002Wo-NJ; Thu, 09 Jun 2022 12:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345436.570989; Thu, 09 Jun 2022 12:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzHb1-0002Wh-Jv; Thu, 09 Jun 2022 12:49:07 +0000
Received: by outflank-mailman (input) for mailman id 345436;
 Thu, 09 Jun 2022 12:49:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6kiA=WQ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1nzHaz-0002Wb-Q9
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 12:49:06 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b35935d-e7f2-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 14:49:05 +0200 (CEST)
Received: from DB6P193CA0014.EURP193.PROD.OUTLOOK.COM (2603:10a6:6:29::24) by
 PR3PR08MB5851.eurprd08.prod.outlook.com (2603:10a6:102:85::23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5314.14; Thu, 9 Jun 2022 12:49:02 +0000
Received: from DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:29:cafe::3d) by DB6P193CA0014.outlook.office365.com
 (2603:10a6:6:29::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Thu, 9 Jun 2022 12:49:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT027.mail.protection.outlook.com (100.127.142.237) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 12:49:01 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Thu, 09 Jun 2022 12:49:01 +0000
Received: from 9678aac2f36e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EAECB23A-DD3A-4636-9495-CFA4C3E491A7.1; 
 Thu, 09 Jun 2022 12:48:55 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9678aac2f36e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 12:48:54 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM6PR08MB4439.eurprd08.prod.outlook.com (2603:10a6:20b:be::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 12:48:51 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 12:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b35935d-e7f2-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Topic: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Index: AQHYddkD4JsESJA0gkGnWWwykzQjSK1G6E+AgAAC3wCAABeVAIAABweAgAAJDAA=
Date: Thu, 9 Jun 2022 12:48:51 +0000
Message-ID: <09F76BE7-6BC4-4D95-B79D-F60FB375D074@arm.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
 <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
 <5195b659-ae2f-be1f-eb5a-bbe3e4b5d9fb@suse.com>
In-Reply-To: <5195b659-ae2f-be1f-eb5a-bbe3e4b5d9fb@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 64648533-773b-451a-b415-08da4a166dc2
x-ms-traffictypediagnostic:
	AM6PR08MB4439:EE_|DBAEUR03FT027:EE_|PR3PR08MB5851:EE_
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB5851A9BEFB8ECE9148F598C29DA79@PR3PR08MB5851.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <87E4BB5D84CC294A8F9694E12A1A9B17@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4439
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5416a52a-7f29-456c-1818-08da4a16679a
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 12:49:01.7559
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 64648533-773b-451a-b415-08da4a166dc2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5851

Hi,

> On 9 Jun 2022, at 13:16, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 09.06.2022 13:51, Bertrand Marquis wrote:
>>> On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 09.06.2022 12:16, Bertrand Marquis wrote:
>>>>> On 1 Jun 2022, at 17:59, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>>>>
>>>>> Use "define" for the headers*_chk commands as otherwise the "#"
>>>>> is interpreted as a comment and make can't find the end of
>>>>> $(foreach,).
>>>>>
>>>>> Adding several .PRECIOUS as without them `make` deletes the
>>>>> intermediate targets. This is an issue because the macro $(if_changed,)
>>>>> check if the target exist in order to decide whether to recreate the
>>>>> target.
>>>>>
>>>>> Removing the call to `mkdir` from the commands. Those aren't needed
>>>>> anymore because a rune in Rules.mk creates the directory for each
>>>>> $(targets).
>>>>>
>>>>> Remove "export PYTHON" as it is already exported.
>>>>
>>>> With this change, compiling for x86 is now ending up in:
>>>> CHK     include/headers99.chk
>>>> make[9]: execvp: /bin/sh: Argument list too long
>>>> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>>>>
>>>> Not quite sure yet why but I wanted to signal it early as other might be impacted.
>>>>
>>>> Arm and arm64 builds are not impacted.
>>>
>>> Hmm, that patch has passed the smoke push gate already, so there likely is
>>> more to it than there being an unconditional issue. I did build-test this
>>> before pushing, and I've just re-tested on a 2nd system without seeing an
>>> issue.
>>
>> I have the problem only when building using Yocto, I did a normal build and the
>> issue is not coming.
>>
>> Doing a verbose compilation I have this (sorry for the long lines):
>>
>> for i in include/public/vcpu.h include/public/errno.h include/public/kexec.h include/public/argo.h include/public/xen.h include/public/nmi.h include/public/xencomm.h include/public/xenoprof.h include/public/device_tree_defs.h include/public/version.h include/public/memory.h include/public/features.h include/public/sched.h include/public/xen-compat.h include/public/callback.h include/public/vm_event.h include/public/grant_table.h include/public/physdev.h include/public/tmem.h include/public/hypfs.h include/public/platform.h include/public/pmu.h include/public/elfnote.h include/public/trace.h include/public/event_channel.h include/public/io/vscsiif.h include/public/io/kbdif.h include/public/io/protocols.h include/public/io/ring.h include/public/io/displif.h include/public/io/fsif.h include/public/io/blkif.h include/public/io/console.h include/public/io/sndif.h include/public/io/fbif.h include/public/io/libxenvchan.h include/public/io/netif.h include/public/io/usbif.h include/public/io/pciif.h include/public/io/tpmif.h include/public/io/xs_wire.h include/public/io/xenbus.h include/public/io/cameraif.h include/public/hvm/pvdrivers.h include/public/hvm/e820.h include/public/hvm/hvm_xs_strings.h include/public/hvm/dm_op.h include/public/hvm/ioreq.h include/public/hvm/hvm_info_table.h include/public/hvm/hvm_vcpu.h include/public/hvm/hvm_op.h include/public/hvm/params.h; do x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -ansi -Wall -Werror -include stdint.h -S -o /dev/null $i || exit 1; echo $i; done >include/headers.chk.new; mv include/headers.chk.new include/headers.chk
>> |       rm -f include/headers99.chk.new;  echo "#include "\"include/public/io/9pfs.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/9pfs.h >> include/headers99.chk.new;  echo "#include "\"include/public/io/pvcalls.h\" | x86_64-poky-linux-gcc  --sysroot=/home/bermar01/Development/xen-dev/build/profile-qemu-x86_64.prj/tmp/work/core2-64-poky-linux/xen/4.17+git1-r0/recipe-sysroot  -x c -std=c99 -Wall -Werror -include stdint.h  -include string.h -S -o /dev/null - || exit $?; echo include/public/io/pvcalls.h >> include/headers99.chk.new; mv include/headers99.chk.new include/headers99.chk
>> | make[9]: execvp: /bin/sh: Argument list too long
>> | make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>> | make[9]: *** Waiting for unfinished jobs....
>>
>> So the command passed to the sub shell by make is quite long.
>>
>> No idea why this comes out only when building in Yocto but I will dig a bit.
>
> Maybe Yocto has an unusually low limit on command arguments' total size?
> The whole thing is just over 2500 chars, which doesn't look to be unusually
> long for Unix-like environments.
>

Actually the command to generate headers++.chk is 15294 characters when I build xen normally.
In Yocto it becomes a lot bigger as CC includes a sysroot parameter with a path.
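For comparison against the system limit, a minimal sketch of how to query it (note the effective execve() limit also counts the environment and, on Linux, depends on the stack limit, so a constrained build container can fail well before the nominal value):

```c
#include <unistd.h>

/* The execve() argument+environment size limit, per POSIX sysconf().
 * POSIX guarantees at least _POSIX_ARG_MAX (4096) bytes. */
static long query_arg_max(void)
{
    return sysconf(_SC_ARG_MAX);
}
```

The shell equivalent is `getconf ARG_MAX`; on a typical Linux host this reports a couple of megabytes.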

Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 12:55:48 2022
Date: Thu, 9 Jun 2022 13:55:25 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Message-ID: <YqHtvZPQJOAFt/8K@perard.uk.xensource.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
 <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>

On Thu, Jun 09, 2022 at 11:51:20AM +0000, Bertrand Marquis wrote:
> Hi,
> 
> > On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 09.06.2022 12:16, Bertrand Marquis wrote:
> >> With this change, compiling for x86 is now ending up in:
> >> CHK     include/headers99.chk
> >> make[9]: execvp: /bin/sh: Argument list too long
> >> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> >> 
> >> Not quite sure yet why, but I wanted to signal it early as others might be impacted.
> >> 
> >> Arm and arm64 builds are not impacted.
> > 
> > Hmm, that patch has passed the smoke push gate already, so there likely is
> > more to it than there being an unconditional issue. I did build-test this
> > before pushing, and I've just re-tested on a 2nd system without seeing an
> > issue.
> 
> I have the problem only when building with Yocto; with a normal build the
> issue does not occur.
> 

Will the following patch help?


>From 0f32f749304b233c0d5574dc6b14f66e8709feba Mon Sep 17 00:00:00 2001
From: Anthony PERARD <anthony.perard@citrix.com>
Date: Thu, 9 Jun 2022 13:42:52 +0100
Subject: [XEN PATCH] build,include: rework shell script for headers++.chk

The command line generated by make for headers++.chk is quite long,
and in some environments it is too long. This issue has been seen in
Yocto build environments.

Error messages:
    make[9]: execvp: /bin/sh: Argument list too long
    make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127

Rework so that the foreach loop is done in shell rather than make,
which reduces the command line size by a lot. We also need a way to
get header prerequisites for some public headers, so we use a shell
"case" statement to do some simple pattern matching. Plain variables
in POSIX shell cannot express associative arrays or variable names
containing "/".

Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 6d9bcc19b0..ca5e868f38 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -158,13 +158,22 @@ define cmd_headerscxx_chk
 	    touch $@.new;                                                     \
 	    exit 0;                                                           \
 	fi;                                                                   \
-	$(foreach i, $(filter %.h,$^),                                        \
-	    echo "#include "\"$(i)\"                                          \
+	get_prereq() {                                                        \
+	    case $$1 in                                                       \
+	    $(foreach i, $(filter %.h,$^),                                    \
+	    $(if $($(patsubst $(srctree)/%,%,$i)-prereq),                     \
+		$(patsubst $(srctree)/%,%,$i)$(close)                         \
+		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq),   \
+			-include c$(j))";;))                                  \
+	    esac;                                                             \
+	};                                                                    \
+	for i in $(filter %.h,$^); do                                         \
+	    echo "#include "\"$$i\"                                           \
 	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
 	      -include stdint.h -include $(srcdir)/public/xen.h               \
-	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
+	      `get_prereq $$i`                                                \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;) \
+	    || exit $$?; echo $$i >> $@.new; done;                            \
 	mv $@.new $@
 endef
 



-- 
Anthony PERARD
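[Editorial sketch, not part of the patch: the size effect of the rework can be simulated standalone. The paths and header names below are hypothetical. $(foreach ...) in make expands to one full compiler command per header before /bin/sh ever runs, while the shell loop carries a single copy of the command text.]

```shell
# Simulate both strategies for 100 headers and compare the text lengths.
CC_CMD='x86_64-poky-linux-gcc --sysroot=/hypothetical/yocto/recipe-sysroot -x c -std=c99 -Wall -Werror -S -o /dev/null -'

# What make's $(foreach ...) expansion hands to the shell: N repeated pipelines.
expanded=''
i=1
while [ "$i" -le 100 ]; do
    expanded="$expanded echo '#include \"hdr$i.h\"' | $CC_CMD;"
    i=$((i + 1))
done

# The reworked recipe: one loop, one copy of the compiler command.
loop="for h in \$HDRS; do echo \"#include \\\"\$h\\\"\" | $CC_CMD; done"

echo "foreach-style length: ${#expanded}"
echo "shell-loop length:    ${#loop}"
```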

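[Editorial sketch: the get_prereq() trick in the patch, a "case" statement standing in for an associative array, can be tried in isolation. The header names and flags below are hypothetical; the real recipe derives its patterns from make variables.]

```shell
# Map a public header to extra -include flags using "case" pattern
# matching, since POSIX sh has no associative arrays.
get_prereq() {
    case $1 in
    public/io/ring.h)  echo "-include public/xen.h";;        # hypothetical prereq
    public/hvm/*.h)    echo "-include public/hvm/hvm_op.h";; # hypothetical prereq
    esac
}

get_prereq public/io/ring.h      # prints its extra flags
get_prereq public/grant_table.h  # no match: prints nothing
```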

From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:09:18 2022
Date: Thu, 9 Jun 2022 15:02:20 +0200
From: Petr Mladek <pmladek@suse.com>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>, ink@jurassic.park.msu.ru,
	mattst88@gmail.com, vgupta@kernel.org, linux@armlinux.org.uk,
	ulli.kroll@googlemail.com, linus.walleij@linaro.org,
	shawnguo@kernel.org, Sascha Hauer <s.hauer@pengutronix.de>,
	kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com,
	tony@atomide.com, khilman@kernel.org, catalin.marinas@arm.com,
	will@kernel.org, guoren@kernel.org, bcain@quicinc.com,
	chenhuacai@kernel.org, kernel@xen0n.name, geert@linux-m68k.org,
	sammy@sammy.net, monstr@monstr.eu, tsbogend@alpha.franken.de,
	dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org,
	john.ogness@linutronix.de, paulmck@kernel.org, frederic@kernel.org,
	quic_neeraju@quicinc.com, josh@joshtriplett.org,
	mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
	joel@joelfernandes.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
	vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <YqHvXFdIJfvUDI6e@alley>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley>
 <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
 <CA+_sPaoJGrXhNPCs2dKf2J7u07y1xYrRFZBUtkKwzK9GqcHSuQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CA+_sPaoJGrXhNPCs2dKf2J7u07y1xYrRFZBUtkKwzK9GqcHSuQ@mail.gmail.com>

On Thu 2022-06-09 20:30:58, Sergey Senozhatsky wrote:
> My emails are getting rejected... Let me try web-interface

Bad day for mail sending. I have problems as well ;-)

> Kudos to Petr for the questions and thanks to PeterZ for the answers.
> 
> On Thu, Jun 9, 2022 at 7:02 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > This is the tracepoint used to spool all of printk into ftrace, I
> > suspect there's users, but I haven't used it myself.
> 
> I'm somewhat curious whether we can actually remove that trace event.

Good question.

Well, I think that it might be useful. It allows one to see trace and
printk messages together.

It was ugly when it was in the console code. The new location in
vprintk_store() even allows it to be sorted "correctly" (by timestamp)
against other tracing messages.

Best Regards,
Petr


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:09:18 2022
Date: Thu, 9 Jun 2022 15:06:00 +0200
From: Petr Mladek <pmladek@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: ink@jurassic.park.msu.ru, mattst88@gmail.com, vgupta@kernel.org,
	linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <YqHwOFg/WlMqe8/Z@alley>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley>
 <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>

On Thu 2022-06-09 12:02:04, Peter Zijlstra wrote:
> On Thu, Jun 09, 2022 at 11:16:46AM +0200, Petr Mladek wrote:
> > On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> > > The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> > > tracepoint"), was printk usage from the cpuidle path where RCU was
> > > already disabled.
> > > 
> > Does this "prevent" calling printk() in a safe way in code with
> > RCU disabled?
> 
> On x86_64, yes. Other architectures, less so.
> 
> Specifically, the objtool noinstr validation pass will warn at build
> time (DEBUG_ENTRY=y) if any noinstr/cpuidle code does a call to
> non-vetted code like printk().
> 
> At the same time, there are a few hacks that allow WARN to work, but
> mostly if you hit WARN in entry/noinstr you get to keep the pieces in
> any case.
> 
> On other architectures we'll need to rely on runtime coverage with
> PROVE_RCU. That is, if a splat like in the above mentioned commit
> happens again, we'll need to fix it by adjusting the callchain, not by
> mucking about with RCU state.

Makes sense. Feel free to use for this patch:

Acked-by: Petr Mladek <pmladek@suse.com>

> > Therefore if this patch allows removing some tricky tracing
> > code then it might be worth it. But if the trace_console_rcuidle()
> > variant is still going to be available then I would keep using it.
> 
> My ultimate goal is to delete trace_.*_rcuidle() and RCU_NONIDLE()
> entirely. We're close, but not quite there yet.

I keep my fingers crossed.

Best Regards,
Petr


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:12:52 2022
Message-ID: <f4e6c1a2-639e-d9e0-5cac-c34cf5f148d7@suse.com>
Date: Thu, 9 Jun 2022 15:12:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] xen/heap: Split init_heap_pages() in two
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-2-julien@xen.org>
 <23552ac7-7548-9dad-fe41-7dc581c78585@suse.com>
 <b3d82607-a7ae-52b3-a9e4-d50780d35b9c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b3d82607-a7ae-52b3-a9e4-d50780d35b9c@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09.06.2022 14:33, Julien Grall wrote:
> On 09/06/2022 13:09, Jan Beulich wrote:
>> On 09.06.2022 10:30, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> At the moment, init_heap_pages() will call free_heap_pages() page
>>> by page. To reduce the time to initialize the heap, we will want
>>> to provide multiple pages at the same time.
>>>
>>> init_heap_pages() is now split into two parts:
>>>      - init_heap_pages(): will break down the range into multiple sets
>>>        of contiguous pages. For now, the criterion is that the pages
>>>        should belong to the same NUMA node.
>>>      - init_contig_pages(): will initialize a set of contiguous pages.
>>>        For now the pages are still passed one by one to free_heap_pages().
>>
>> Hmm, the common use of "contiguous" is to describe address correlation.
>> Therefore I'm afraid I'd like to see "contig" avoided when you mean
>> "same node". Perhaps init_node_pages()?
> 
> After the next patch, it will not only be the same node, it will also be 
> at least the same zone. Also, in the future, I would like to 
> re-submit David Woodhouse's patch to exclude broken pages (see [1]).
> 
> Therefore, I think the name init_node_pages() would not be suitable. 
> Please suggest a different name.

_init_heap_pages() then, as a helper of init_heap_pages()?

>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>>   }
>>>   
>>>   /*
>>> - * Hand the specified arbitrary page range to the specified heap zone
>>> - * checking the node_id of the previous page.  If they differ and the
>>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>>> - * not freeing it to the buddy allocator.
>>> + * init_contig_heap_pages() is intended to only take pages from the same
>>> + * NUMA node.
>>>    */
>>> +static bool is_contig_page(struct page_info *pg, unsigned int nid)
>>> +{
>>> +    return (nid == (phys_to_nid(page_to_maddr(pg))));
>>> +}
>>
>> If such a helper is indeed needed, then I think it absolutely wants
>> pg to be pointer-to-const. And imo it would also help readability if
>> the extra pair of parentheses around the nested function calls was
>> omitted. Given the naming concern, though, I wonder whether this
>> wouldn't better be open-coded in the one place it is used. Actually
>> naming is quite odd here beyond what I'd said further up: "Is this
>> page contiguous?" Such a question requires two pages, i.e. "Are these
>> two pages contiguous?" What you want to know is "Is this page on the
>> given node?"
> 
> There will be more checks in the future (see next patch). I created a 
> helper because it reduces the size of the loop in init_heap_pages(). I 
> would be OK with folding it if you strongly prefer that.

I don't "strongly" prefer that; I'd also be okay with a suitably named
helper. It's just that I can't seem to come up with any good name.

>>> +/*
>>> + * This function should only be called with valid pages from the same NUMA
>>> + * node.
>>> + *
>>> + * Callers should use is_contig_page() first to check if all the pages
>>> + * in a range are contiguous.
>>> + */
>>> +static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
>>> +                                   bool need_scrub)
>>> +{
>>> +    unsigned long s, e;
>>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>> +
>>> +    s = mfn_x(page_to_mfn(pg));
>>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>> +    if ( unlikely(!avail[nid]) )
>>> +    {
>>> +        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&
>>
>> IS_ALIGNED(s, 1UL << MAX_ORDER) to "describe" what's meant?
> 
> This is existing code and it is quite complex. So I would prefer to 
> avoid simplifying and moving the code in the same patch. I would be happy 
> to write a follow-up patch to switch to IS_ALIGNED().

I do realize it's code you move, but I can accept your desire to merely
move the code without any cleanup. Personally I think that rather than a
follow-up patch (which doesn't help the reviewing of this one) such an
adjustment would better be a prereq one.

>>> @@ -1812,35 +1851,24 @@ static void init_heap_pages(
>>>       spin_unlock(&heap_lock);
>>>   
>>>       if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
>>> -        idle_scrub = true;
>>> +        need_scrub = true;
>>>   
>>> -    for ( i = 0; i < nr_pages; i++ )
>>> +    for ( i = 0; i < nr_pages; )
>>>       {
>>> -        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
>>> +        unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>> +        unsigned long left = nr_pages - i;
>>> +        unsigned long contig_pages;
>>>   
>>> -        if ( unlikely(!avail[nid]) )
>>> +        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>>>           {
>>> -            unsigned long s = mfn_x(page_to_mfn(pg + i));
>>> -            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>> -            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
>>> -                            !(s & ((1UL << MAX_ORDER) - 1)) &&
>>> -                            (find_first_set_bit(e) <= find_first_set_bit(s));
>>> -            unsigned long n;
>>> -
>>> -            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
>>> -                               &use_tail);
>>> -            BUG_ON(i + n > nr_pages);
>>> -            if ( n && !use_tail )
>>> -            {
>>> -                i += n - 1;
>>> -                continue;
>>> -            }
>>> -            if ( i + n == nr_pages )
>>> +            if ( !is_contig_page(pg + contig_pages, nid) )
>>>                   break;
>>> -            nr_pages -= n;
>>>           }
>>
>> Isn't doing this page by page in a loop quite inefficient? Can't you
>> simply obtain the end of the node's range covering the first page, and
>> then adjust "left" accordingly?
> 
> The page-by-page check is necessary because we may need to exclude some 
> pages (see [1]) or the range may cross multiple zones (see [2]).

If you want/need to do this for "future" reasons (aiui [1] was never
committed, and you forgot to supply [2], but yes, splitting at zone
boundaries is of course necessary), then I think this wants making quite
clear in the description.

Jan

>> I even wonder whether the admittedly
>> lax original check's assumption couldn't be leveraged alternatively,
>> by effectively bisecting to the end address on the node of interest
>> (where the assumption is that nodes aren't interleaved - see Wei's
>> NUMA series dealing with precisely that situation).
> See above. We went this way because there are some pages to be excluded. 
> The immediate use case is broken pages, but in the future we may need to 
> also exclude pages that contain guest content after Live-Update.
> 
> I also plan to get rid of the loop in free_heap_pages() to mark each 
> page free. This would mean that pages would only be accessed once in 
> init_heap_pages() (I am still cleaning up that patch and it is a much 
> more controversial change).
> 
> Cheers,
> 
> [1] 
> https://lore.kernel.org/xen-devel/20200201003303.2363081-2-dwmw2@infradead.org/
> 
>>
>> Jan
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:18:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 13:18:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345483.571044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzI3U-0000Q1-Cq; Thu, 09 Jun 2022 13:18:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345483.571044; Thu, 09 Jun 2022 13:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzI3U-0000Pu-A3; Thu, 09 Jun 2022 13:18:32 +0000
Received: by outflank-mailman (input) for mailman id 345483;
 Thu, 09 Jun 2022 13:18:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzI3T-0000Pn-5a
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 13:18:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzI3N-0007uV-JX; Thu, 09 Jun 2022 13:18:25 +0000
Received: from [54.239.6.190] (helo=[10.85.101.129])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzI3N-0006MY-CO; Thu, 09 Jun 2022 13:18:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <d18ddf8e-7829-4e88-afaf-684b47d8f180@xen.org>
Date: Thu, 9 Jun 2022 14:18:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] xen/heap: Split init_heap_pages() in two
To: Jan Beulich <jbeulich@suse.com>
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-2-julien@xen.org>
 <23552ac7-7548-9dad-fe41-7dc581c78585@suse.com>
 <b3d82607-a7ae-52b3-a9e4-d50780d35b9c@xen.org>
 <f4e6c1a2-639e-d9e0-5cac-c34cf5f148d7@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <f4e6c1a2-639e-d9e0-5cac-c34cf5f148d7@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 09/06/2022 14:12, Jan Beulich wrote:
> On 09.06.2022 14:33, Julien Grall wrote:
>> On 09/06/2022 13:09, Jan Beulich wrote:
>>> On 09.06.2022 10:30, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> At the moment, init_heap_pages() will call free_heap_pages() page
>>>> by page. To reduce the time to initialize the heap, we will want
>>>> to provide multiple pages at the same time.
>>>>
>>>> init_heap_pages() is now split into two parts:
>>>>       - init_heap_pages(): will break down the range into multiple sets
>>>>         of contiguous pages. For now, the criterion is that the pages
>>>>         should belong to the same NUMA node.
>>>>       - init_contig_pages(): will initialize a set of contiguous pages.
>>>>         For now the pages are still passed one by one to free_heap_pages().
>>>
>>> Hmm, the common use of "contiguous" is to describe address correlation.
>>> Therefore I'm afraid I'd like to see "contig" avoided when you mean
>>> "same node". Perhaps init_node_pages()?
>>
>> After the next patch, it will not only be the same node, it will also be
>> at least the same zone. Also, in the future, I would like to
>> re-submit David Woodhouse's patch to exclude broken pages (see [1]).
>>
>> Therefore, I think the name init_node_pages() would not be suitable.
>> Please suggest a different name.
> 
> _init_heap_pages() then, as a helper of init_heap_pages()?

I am fine with your proposed name. That said...

> 
>>>> --- a/xen/common/page_alloc.c
>>>> +++ b/xen/common/page_alloc.c
>>>> @@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>>>    }
>>>>    
>>>>    /*
>>>> - * Hand the specified arbitrary page range to the specified heap zone
>>>> - * checking the node_id of the previous page.  If they differ and the
>>>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>>>> - * not freeing it to the buddy allocator.
>>>> + * init_contig_heap_pages() is intended to only take pages from the same
>>>> + * NUMA node.
>>>>     */
>>>> +static bool is_contig_page(struct page_info *pg, unsigned int nid)
>>>> +{
>>>> +    return (nid == (phys_to_nid(page_to_maddr(pg))));
>>>> +}
>>>
>>> If such a helper is indeed needed, then I think it absolutely wants
>>> pg to be pointer-to-const. And imo it would also help readability if
>>> the extra pair of parentheses around the nested function calls was
>>> omitted. Given the naming concern, though, I wonder whether this
>>> wouldn't better be open-coded in the one place it is used. Actually
>>> naming is quite odd here beyond what I'd said further up: "Is this
>>> page contiguous?" Such a question requires two pages, i.e. "Are these
>>> two pages contiguous?" What you want to know is "Is this page on the
>>> given node?"
>>
>> There will be more checks in the future (see next patch). I created a
>> helper because it reduces the size of the loop in init_heap_pages(). I
>> would be OK with folding it if you strongly prefer that.
> 
> I don't "strongly" prefer that; I'd also be okay with a suitably named
> helper. It's just that I can't seem to come up with any good name.

... I am not sure what could be a suitable name for this helper. I will 
have a look at how bad the folded version looks.

> 
>>>> +/*
>>>> + * This function should only be called with valid pages from the same NUMA
>>>> + * node.
>>>> + *
>>>> + * Callers should use is_contig_page() first to check if all the pages
>>>> + * in a range are contiguous.
>>>> + */
>>>> +static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
>>>> +                                   bool need_scrub)
>>>> +{
>>>> +    unsigned long s, e;
>>>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>>> +
>>>> +    s = mfn_x(page_to_mfn(pg));
>>>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>>> +    if ( unlikely(!avail[nid]) )
>>>> +    {
>>>> +        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&
>>>
>>> IS_ALIGNED(s, 1UL << MAX_ORDER) to "describe" what's meant?
>>
>> This is existing code and it is quite complex. So I would prefer to
>> avoid simplifying and moving the code in the same patch. I would be happy
>> to write a follow-up patch to switch to IS_ALIGNED().
> 
> I do realize it's code you move, but I can accept your desire to merely
> move the code without any cleanup. Personally I think that rather than a
> follow-up patch (which doesn't help the reviewing of this one) such an
> adjustment would better be a prereq one.

I will look for a prereq.

> 
>>>> @@ -1812,35 +1851,24 @@ static void init_heap_pages(
>>>>        spin_unlock(&heap_lock);
>>>>    
>>>>        if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
>>>> -        idle_scrub = true;
>>>> +        need_scrub = true;
>>>>    
>>>> -    for ( i = 0; i < nr_pages; i++ )
>>>> +    for ( i = 0; i < nr_pages; )
>>>>        {
>>>> -        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
>>>> +        unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>>> +        unsigned long left = nr_pages - i;
>>>> +        unsigned long contig_pages;
>>>>    
>>>> -        if ( unlikely(!avail[nid]) )
>>>> +        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>>>>            {
>>>> -            unsigned long s = mfn_x(page_to_mfn(pg + i));
>>>> -            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>>> -            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
>>>> -                            !(s & ((1UL << MAX_ORDER) - 1)) &&
>>>> -                            (find_first_set_bit(e) <= find_first_set_bit(s));
>>>> -            unsigned long n;
>>>> -
>>>> -            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
>>>> -                               &use_tail);
>>>> -            BUG_ON(i + n > nr_pages);
>>>> -            if ( n && !use_tail )
>>>> -            {
>>>> -                i += n - 1;
>>>> -                continue;
>>>> -            }
>>>> -            if ( i + n == nr_pages )
>>>> +            if ( !is_contig_page(pg + contig_pages, nid) )
>>>>                    break;
>>>> -            nr_pages -= n;
>>>>            }
>>>
>>> Isn't doing this page by page in a loop quite inefficient? Can't you
>>> simply obtain the end of the node's range covering the first page, and
>>> then adjust "left" accordingly?
>>
>> The page-by-page check is necessary because we may need to exclude some pages
>> (see [1]) or the range may cross multiple zones (see [2]).
> 
> If you want/need to do this for "future" reasons (aiui [1] was never
> committed

You are correct. I would like to revive it at some point.

> , and you forgot to supply [2], but yes, splitting at zone
> boundaries is of course necessary)

Sorry, I meant to write patch #2:

20220609083039.76667-3-julien@xen.org


> , then I think this wants making quite
> clear in the description.

Good point. I will update the commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:22:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 13:22:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345494.571054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzI7V-0002B3-1b; Thu, 09 Jun 2022 13:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345494.571054; Thu, 09 Jun 2022 13:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzI7U-0002Aw-V8; Thu, 09 Jun 2022 13:22:40 +0000
Received: by outflank-mailman (input) for mailman id 345494;
 Thu, 09 Jun 2022 13:22:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzI7T-0002Aq-P9
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 13:22:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b61de5b-e7f7-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 15:22:38 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9578.eurprd04.prod.outlook.com (2603:10a6:10:305::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 13:22:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 13:22:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b61de5b-e7f7-11ec-bd2c-47488cf2e6aa
Message-ID: <4462bfdf-fc24-12df-0f16-7c09d0618d0f@suse.com>
Date: Thu, 9 Jun 2022 15:22:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] xen/heap: pass order to free_heap_pages() in heap
 init
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-3-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220609083039.76667-3-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09.06.2022 10:30, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> The idea is to split the range into multiple aligned power-of-2 regions,
> each requiring only one call to free_heap_pages(). We check the least
> significant set bit of the start address and use its bit index as the
> order of this increment. This makes sure that each increment is both
> power-of-2 and properly aligned, which can be safely passed to
> free_heap_pages(). Of course, the order also needs to be sanity checked
> against the upper bound and MAX_ORDER.
> 
> Testing was done in a nested environment on c5.metal with various
> amounts of RAM. Time for end_boot_allocator() to complete:
>             Before         After
>     - 90GB: 1426 ms        166 ms
>     -  8GB:  124 ms         12 ms
>     -  4GB:   60 ms          6 ms
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/common/page_alloc.c | 39 +++++++++++++++++++++++++++++++++------
>  1 file changed, 33 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index a1938df1406c..bf852cfc11ea 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1779,16 +1779,28 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>  
>  /*
>   * init_contig_heap_pages() is intended to only take pages from the same
> - * NUMA node.
> + * NUMA node and zone.
> + *
> + * For the latter, it is always true for !CONFIG_SEPARATE_XENHEAP since
> + * free_heap_pages() can only take power-of-two ranges which never cross
> + * zone boundaries. But for separate xenheap which is manually defined,
> + * it is possible for a power-of-two range to cross zones, so we need to
> + * check that as well.
>   */
> -static bool is_contig_page(struct page_info *pg, unsigned int nid)
> +static bool is_contig_page(struct page_info *pg, unsigned int nid,
> +                           unsigned int zone)
>  {
> +#ifdef CONFIG_SEPARATE_XENHEAP
> +    if ( zone != page_to_zone(pg) )
> +        return false;
> +#endif
> +
>      return (nid == (phys_to_nid(page_to_maddr(pg))));
>  }
>  
>  /*
>   * This function should only be called with valid pages from the same NUMA
> - * node.
> + * node and the same zone.
>   *
>   * Callers should use is_contig_page() first to check if all the pages
>   * in a range are contiguous.
> @@ -1817,8 +1829,22 @@ static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
>  
>      while ( s < e )
>      {
> -        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> -        s += 1UL;
> +        /*
> +         * For s == 0, we simply use the largest increment by checking the
> +         * index of the MSB set. For s != 0, we also need to ensure that the

"The MSB" reads as if it were not in line with the code; at least I would,
short of it being spelled out, assume it can only mean the page's address
(as is the case for the LSB below). But it's actually the MSB of the
range's size.

> +         * chunk is properly sized to end at power-of-two alignment. We do this
> +         * by checking the LSB set and use its index as the increment. Both
> +         * cases need to be guarded by MAX_ORDER.
> +         *
> +         * Note that the value of ffsl() and flsl() starts from 1 so we need
> +         * to decrement it by 1.
> +         */
> +        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
> +
> +        if ( s )
> +            inc_order = min(inc_order, ffsl(s) - 1);
> +        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
> +        s += (1UL << inc_order);
>      }
>  }
>  
> @@ -1856,12 +1882,13 @@ static void init_heap_pages(
>      for ( i = 0; i < nr_pages; )
>      {
>          unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +        unsigned int zone = page_to_zone(pg);
>          unsigned long left = nr_pages - i;
>          unsigned long contig_pages;
>  
>          for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>          {
> -            if ( !is_contig_page(pg + contig_pages, nid) )
> +            if ( !is_contig_page(pg + contig_pages, nid, zone) )
>                  break;
>          }

Coming back to your reply to my comment on patch 1: I think this
addition of the zone check is actually an argument for inlining the
function's body here (and then dropping the function). That way the
separate-Xen-heap aspect is visible at the place where it matters,
rather than requiring an indirection via looking at the helper
function (and leaving a dead parameter in the opposite case). But as
said - I'm not going to insist, as long as the helper function has a
suitable name (one representing what it does, rather than misguiding
anyone with the common "contig"-means-addresses implication).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:34:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 13:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345510.571093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIIy-0004K0-HL; Thu, 09 Jun 2022 13:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345510.571093; Thu, 09 Jun 2022 13:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIIy-0004Jt-EP; Thu, 09 Jun 2022 13:34:32 +0000
Received: by outflank-mailman (input) for mailman id 345510;
 Thu, 09 Jun 2022 13:34:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Nuio=WQ=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzIIx-0004Jn-7k
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 13:34:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2c694a7-e7f8-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 15:34:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-28-0J5iAZZSO5Cz0A05vvpusQ-1; Thu, 09 Jun 2022 09:34:25 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8822E294EDE1;
 Thu,  9 Jun 2022 13:34:24 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 37038492C3B;
 Thu,  9 Jun 2022 13:34:24 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 9A06F1800094; Thu,  9 Jun 2022 15:34:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2c694a7-e7f8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654781668;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GlQ8FBlP/drXQ8Lb3KbrrEPrF8f85P+9QjKCvTgg4Bg=;
	b=W9NpdQzLTSS9Bd0uCAmrtGcGvSXJUMA2jvOocF/4eo4xS5dvI+z/Ta/fhUGWBqvezbevOp
	pS7QlnF2C5d8Tnok+V14lQgBPJdz+iHF4wHAMlsrhf8BSliiJnCC8KmjlQUHz1OiJEAFY+
	lUrn56nJWMJbIDYPI+DjbkgU+O1uHTg=
X-MC-Unique: 0J5iAZZSO5Cz0A05vvpusQ-1
Date: Thu, 9 Jun 2022 15:34:11 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Akihiko Odaki <akihiko.odaki@gmail.com>
Cc: qemu Developers <qemu-devel@nongnu.org>, xen-devel@lists.xenproject.org,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 2/3] ui: Deliver refresh rate via QemuUIInfo
Message-ID: <20220609133411.uneuazwmcczzap6i@sirius.home.kraxel.org>
References: <20220226115516.59830-1-akihiko.odaki@gmail.com>
 <20220226115516.59830-3-akihiko.odaki@gmail.com>
 <20220609102805.qz2xrnd6ms6cigir@sirius.home.kraxel.org>
 <19ae71a4-c988-3c9e-20d6-614098376524@gmail.com>
 <20220609120214.bay3cl24oays6x6d@sirius.home.kraxel.org>
 <c0c610b8-df0c-7e2a-385f-bcf70c987182@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c0c610b8-df0c-7e2a-385f-bcf70c987182@gmail.com>
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

  Hi,

> > > > (1) add refresh_rate
> > > > (2) update users one by one
> > > > (3) finally drop update_interval when no user is left.
> > > > 
> > > > thanks,
> > > >     Gerd
> > > > 
> > > 
> > > I think 1 and 3 would have to be done at once, since refresh_rate and
> > > update_interval would otherwise interfere with each other.
> > 
> > Well, between 1 and 3 both old and new API are active.  Shouldn't be
> > much of a problem because the GraphicHwOps implementations are using
> > only the one or the other.
> > 
> > take care,
> >    Gerd
> > 
> 
> The only GraphicHwOps implementation updated with this change is xenfb.
> xenfb can be switched to use refresh_rate in step 1 or 3.
> 
> Switching to use refresh_rate in step 1 would break the refresh rate
> propagation until all host displays are updated to set refresh_rate instead
> of calling update_interval.

Well, the host display update would need splitting into two pieces too:
first add refresh_rate, then later drop update_interval, to make the
update scheme work without temporary breakage.

That sounds increasingly like over-engineering it, though; I guess I'll
just queue up the patches as-is.

thanks,
  Gerd



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 13:43:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 13:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345519.571105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIRr-00065L-EA; Thu, 09 Jun 2022 13:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345519.571105; Thu, 09 Jun 2022 13:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIRr-00065E-Au; Thu, 09 Jun 2022 13:43:43 +0000
Received: by outflank-mailman (input) for mailman id 345519;
 Thu, 09 Jun 2022 13:43:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzIRq-000654-23; Thu, 09 Jun 2022 13:43:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzIRp-0008LM-VY; Thu, 09 Jun 2022 13:43:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzIRp-0005bK-Cy; Thu, 09 Jun 2022 13:43:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzIRp-0005jx-CW; Thu, 09 Jun 2022 13:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0F0c5JS+IzeyYKrSfvk5lXcEwoNnfn+o29UdI42b+5k=; b=GbKS7JMQxe6vPEdzoTNz+S6vhs
	z8VIGkjj0ojGEw5+EF04TpKn+aVE86Mf9GPuuexZCDjvwH00ZJk8vFcmxjYFMWWd/89FyeD7YEq0S
	rnShQyCMAP/7TPBkwT/Hgd0WNGu+noCnpQDYM10S0rO2B0Xijv/2JtLPBfcepW8DnVQc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170891-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170891: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:host-install(5):broken:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d940eff4734bcb40b1a25f62d7cec5a396f994a
X-Osstest-Versions-That:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 13:43:41 +0000

flight 170891 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170891/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail REGR. vs. 170884
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 170884

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      5 host-install(5)        broken REGR. vs. 170884

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170884
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170884
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170884
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170884
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170884
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170884
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170884
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170884
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                6d940eff4734bcb40b1a25f62d7cec5a396f994a
baseline version:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5

Last test of basis   170884  2022-06-08 10:42:45 Z    1 days
Testing same since   170891  2022-06-09 01:39:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-rtds broken
broken-step test-amd64-amd64-xl-rtds host-install(5)

Not pushing.

------------------------------------------------------------
commit 6d940eff4734bcb40b1a25f62d7cec5a396f994a
Merge: 9b1f588549 e37a0ef460
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jun 7 19:22:18 2022 -0700

    Merge tag 'pull-tpm-2022-06-07-1' of https://github.com/stefanberger/qemu-tpm into staging
    
    Merge tpm 2022/06/07 v1
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEuBi5yt+QicLVzsZrda1lgCoLQhEFAmKf8HgACgkQda1lgCoL
    # QhHx8Qf/QB2z+0B1xKKn8NqrWbZ+FaVlnPu/3hX4kraCY5zAYV9e64kdWhuIKRbM
    # 74/KARGMpkme6Y8rUSK6mVeiY+ul+egfVMnKyfhsM1jhAQT/DzSlht/XZzbn3Mg+
    # FFXQBMqcvcNWH53q9zi9GJYqH4tcxUku3ejgodU4+SO2wB5S59pS/tD+i5H06Vy5
    # Iw1kW6I11gYhJGETxVgb6F2Jfyu6uPWFhIg7eN06XwNExFc45E8GjrpIs2rO78GN
    # OzMBjwAG+C+/PU+UZDOd5Zhq5qv+8DcvDQuPXyqksxPcFvouvLghQvQL/h7neMlM
    # jOwHS153ay0EAT/t2lZafsBwqKQxvQ==
    # =b9Qe
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Tue 07 Jun 2022 05:42:32 PM PDT
    # gpg:                using RSA key B818B9CADF9089C2D5CEC66B75AD65802A0B4211
    # gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: B818 B9CA DF90 89C2 D5CE  C66B 75AD 6580 2A0B 4211
    
    * tag 'pull-tpm-2022-06-07-1' of https://github.com/stefanberger/qemu-tpm:
      tpm_crb: mark command buffer as dirty on request completion
      hw/tpm/tpm_tis_common.c: Assert that locty is in range
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit e37a0ef4605e5d2041785ff3fc89ca6021faf7a0
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Mon Apr 11 15:47:49 2022 +0100

    tpm_crb: mark command buffer as dirty on request completion
    
    At the moment, there doesn't seem to be any way to know that QEMU
    made modifications to the command buffer. This is potentially an
    issue on Xen while migrating a guest, as modifications to the buffer
    after the migration has started could be ignored and not transferred
    to the destination.
    
    Mark the memory region of the command buffer as dirty once a request
    is completed.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
    Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
    Message-id: 20220411144749.47185-1-anthony.perard@citrix.com

commit 4d84bb6c8b42cc781a02e1ac6648875966abc877
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Wed May 25 08:59:04 2022 -0400

    hw/tpm/tpm_tis_common.c: Assert that locty is in range
    
    In tpm_tis_mmio_read(), tpm_tis_mmio_write() and
    tpm_tis_dump_state(), we calculate a locality index with
    tpm_tis_locality_from_addr() and then use it as an index into the
    s->loc[] array.  In all these cases, the array index can't overflow
    because the MemoryRegion is sized to be TPM_TIS_NUM_LOCALITIES <<
    TPM_TIS_LOCALITY_SHIFT bytes.  However, Coverity can't see that, and
    it complains (CID 1487138, 1487180, 1487188, 1487198, 1487240).
    
    Add an assertion to tpm_tis_locality_from_addr() that the calculated
    locality index is valid, which will help Coverity and also catch any
    potential future bug where the MemoryRegion isn't sized exactly.
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
    Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
    Message-id: 20220525125904.483075-1-stefanb@linux.ibm.com


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 14:02:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 14:02:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345547.571186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIkM-0001JT-6n; Thu, 09 Jun 2022 14:02:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345547.571186; Thu, 09 Jun 2022 14:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzIkM-0001JM-3u; Thu, 09 Jun 2022 14:02:50 +0000
Received: by outflank-mailman (input) for mailman id 345547;
 Thu, 09 Jun 2022 14:02:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pGOA=WQ=citrix.com=prvs=1525abdf4=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nzIkK-0001J0-DE
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 14:02:48 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5cae84f-e7fc-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 16:02:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5cae84f-e7fc-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654783366;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=2U3jUoNslZi3eqK5NFra4Ihi53xzZqdKWsALmkjx8cU=;
  b=NuPnBJQTgBdyxhbZInEKjXdjsqqHls7O/OspTlhqigMK9vJtSbOke0ri
   j1BPhAA74O0kFO6iCuB12i7AIb+vaEZoha+JJN+goLBRErb6ySCgiiahk
   9PP4myjiRZKVzPNNol8AyhUGKzdTHF+R1SBWGFX9/J2TGospXM2TSLMpi
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72584691
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,287,1647316800"; 
   d="scan'208";a="72584691"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <qemu-devel@nongnu.org>
CC: Bernhard Beschow <shentey@gmail.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, John Snow <jsnow@redhat.com>,
	<xen-devel@lists.xenproject.org>, <qemu-block@nongnu.org>
Subject: [PULL 3/3] include/hw/ide: Unexport pci_piix3_xen_ide_unplug()
Date: Thu, 9 Jun 2022 15:02:02 +0100
Message-ID: <20220609140202.45227-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220609140202.45227-1-anthony.perard@citrix.com>
References: <20220609140202.45227-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Bernhard Beschow <shentey@gmail.com>

This function was declared in a generic, public header but implemented
in a device-specific source file, and it is only used in xen_platform.
Given its 'aux' parameter, the function is more Xen-specific than
PIIX-specific. Also, the hardcoded magic constants appear to be generic,
relating to PCIIDEState and IDEBus rather than to PIIX.

Therefore, move this function to xen_platform, unexport it, and drop
"piix3" from the function name as well.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220513180957.90514-4-shentey@gmail.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/i386/xen/xen_platform.c | 48 +++++++++++++++++++++++++++++++++++++-
 hw/ide/piix.c              | 46 ------------------------------------
 include/hw/ide.h           |  3 ---
 3 files changed, 47 insertions(+), 50 deletions(-)

diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 72028449ba..a64265cca0 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -26,6 +26,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "hw/ide.h"
+#include "hw/ide/pci.h"
 #include "hw/pci/pci.h"
 #include "hw/xen/xen_common.h"
 #include "migration/vmstate.h"
@@ -134,6 +135,51 @@ static void pci_unplug_nics(PCIBus *bus)
     pci_for_each_device(bus, 0, unplug_nic, NULL);
 }
 
+/*
+ * The Xen HVM unplug protocol [1] specifies a mechanism to allow guests to
+ * request unplug of 'aux' disks (which is stated to mean all IDE disks,
+ * except the primary master).
+ *
+ * NOTE: The semantics of what happens if unplug of all disks and 'aux' disks
+ *       is simultaneously requested is not clear. The implementation assumes
+ *       that an 'all' request overrides an 'aux' request.
+ *
+ * [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/misc/hvm-emulated-unplug.pandoc
+ */
+static void pci_xen_ide_unplug(DeviceState *dev, bool aux)
+{
+    PCIIDEState *pci_ide;
+    int i;
+    IDEDevice *idedev;
+    IDEBus *idebus;
+    BlockBackend *blk;
+
+    pci_ide = PCI_IDE(dev);
+
+    for (i = aux ? 1 : 0; i < 4; i++) {
+        idebus = &pci_ide->bus[i / 2];
+        blk = idebus->ifs[i % 2].blk;
+
+        if (blk && idebus->ifs[i % 2].drive_kind != IDE_CD) {
+            if (!(i % 2)) {
+                idedev = idebus->master;
+            } else {
+                idedev = idebus->slave;
+            }
+
+            blk_drain(blk);
+            blk_flush(blk);
+
+            blk_detach_dev(blk, DEVICE(idedev));
+            idebus->ifs[i % 2].blk = NULL;
+            idedev->conf.blk = NULL;
+            monitor_remove_blk(blk);
+            blk_unref(blk);
+        }
+    }
+    qdev_reset_all(dev);
+}
+
 static void unplug_disks(PCIBus *b, PCIDevice *d, void *opaque)
 {
     uint32_t flags = *(uint32_t *)opaque;
@@ -147,7 +193,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *opaque)
 
     switch (pci_get_word(d->config + PCI_CLASS_DEVICE)) {
     case PCI_CLASS_STORAGE_IDE:
-        pci_piix3_xen_ide_unplug(DEVICE(d), aux);
+        pci_xen_ide_unplug(DEVICE(d), aux);
         break;
 
     case PCI_CLASS_STORAGE_SCSI:
diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index bc1b37512a..9a9b28078e 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -173,52 +173,6 @@ static void pci_piix_ide_realize(PCIDevice *dev, Error **errp)
     }
 }
 
-/*
- * The Xen HVM unplug protocol [1] specifies a mechanism to allow guests to
- * request unplug of 'aux' disks (which is stated to mean all IDE disks,
- * except the primary master).
- *
- * NOTE: The semantics of what happens if unplug of all disks and 'aux' disks
- *       is simultaneously requested is not clear. The implementation assumes
- *       that an 'all' request overrides an 'aux' request.
- *
- * [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/misc/hvm-emulated-unplug.pandoc
- */
-int pci_piix3_xen_ide_unplug(DeviceState *dev, bool aux)
-{
-    PCIIDEState *pci_ide;
-    int i;
-    IDEDevice *idedev;
-    IDEBus *idebus;
-    BlockBackend *blk;
-
-    pci_ide = PCI_IDE(dev);
-
-    for (i = aux ? 1 : 0; i < 4; i++) {
-        idebus = &pci_ide->bus[i / 2];
-        blk = idebus->ifs[i % 2].blk;
-
-        if (blk && idebus->ifs[i % 2].drive_kind != IDE_CD) {
-            if (!(i % 2)) {
-                idedev = idebus->master;
-            } else {
-                idedev = idebus->slave;
-            }
-
-            blk_drain(blk);
-            blk_flush(blk);
-
-            blk_detach_dev(blk, DEVICE(idedev));
-            idebus->ifs[i % 2].blk = NULL;
-            idedev->conf.blk = NULL;
-            monitor_remove_blk(blk);
-            blk_unref(blk);
-        }
-    }
-    qdev_reset_all(dev);
-    return 0;
-}
-
 static void pci_piix_ide_exitfn(PCIDevice *dev)
 {
     PCIIDEState *d = PCI_IDE(dev);
diff --git a/include/hw/ide.h b/include/hw/ide.h
index c5ce5da4f4..60f1f4f714 100644
--- a/include/hw/ide.h
+++ b/include/hw/ide.h
@@ -8,9 +8,6 @@
 ISADevice *isa_ide_init(ISABus *bus, int iobase, int iobase2, int isairq,
                         DriveInfo *hd0, DriveInfo *hd1);
 
-/* ide-pci.c */
-int pci_piix3_xen_ide_unplug(DeviceState *dev, bool aux);
-
 /* ide-mmio.c */
 void mmio_ide_init_drives(DeviceState *dev, DriveInfo *hd0, DriveInfo *hd1);
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 14:27:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 14:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345556.571197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzJ7x-0004Pr-7b; Thu, 09 Jun 2022 14:27:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345556.571197; Thu, 09 Jun 2022 14:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzJ7x-0004Pk-4o; Thu, 09 Jun 2022 14:27:13 +0000
Received: by outflank-mailman (input) for mailman id 345556;
 Thu, 09 Jun 2022 14:27:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6kiA=WQ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1nzJ7w-0004Pe-3k
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 14:27:12 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0602.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ed8ef23-e800-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 16:27:10 +0200 (CEST)
Received: from AM5PR0201CA0014.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::24) by PA4PR08MB6080.eurprd08.prod.outlook.com
 (2603:10a6:102:ec::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Thu, 9 Jun
 2022 14:27:07 +0000
Received: from AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::2f) by AM5PR0201CA0014.outlook.office365.com
 (2603:10a6:203:3d::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13 via Frontend
 Transport; Thu, 9 Jun 2022 14:27:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT018.mail.protection.outlook.com (10.152.16.114) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Thu, 9 Jun 2022 14:27:06 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Thu, 09 Jun 2022 14:27:05 +0000
Received: from 0714082b2b4c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DFB08579-FE97-4829-9A20-09A747D570F9.1; 
 Thu, 09 Jun 2022 14:26:58 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0714082b2b4c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jun 2022 14:26:58 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by VE1PR08MB4957.eurprd08.prod.outlook.com (2603:10a6:803:109::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Thu, 9 Jun
 2022 14:26:47 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 14:26:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ed8ef23-e800-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=mOwsHeif3aKvUaDwji3/QC6pDlancOfMUAHwggmM0oqW8WOE5uZnNacynr47WPA/fdvz0UWXNxYSlYzjWV+2iYabIal53lOUvpPJC0Mz6QTXTDIPy8XTPkV80kqVYv3Zvr6JfBleWsgvXDT4gaM/Gq6nHDNlX61XNFf/LZ5fmmXYM4Jl6cspw7WlmdV1pqXwARFAY/KF9IQInIE7oJ2XIqjgjLyv3/E73BuLdYJQhL4pCdn0cjthG0PsevoebTRwHVZehYQwCg/+e/TMnb/BqU26U1KR0x0wkuWb/vjXnLOby17oL8XxbXEsMtZ1lliTkhBbc9zWA4tbZeBWxiA9Fg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KXQt8AAD0BbyAzVfLu9P2t3Skg4ZAY9an+vYNmZO0fk=;
 b=CwsC5zey0zUm4/N7fyALZrr9Hutv8+7OreDBbmKshrMQLpSpAQavk1QVs280fFAtMyCSfjA6vOgXOFV4fki06Fp5hIFM+44LL7Nv9xBiqdg7o37mbAYhvHJTtFyI9Nv5px38FQlRou4C8qidhVjo2sgxuiq6Df8O9XDiO8EpY3gqA5lOkmzwMB1k5oZ8PvmZu03OI867D6s3zFcdmNX5vXFUA1qi+9vp8rCAyMeP4mxJLA23te3yZC9YhxQeTipJDYCjmpyb/nbNznjtjDDg3KWaSWBOCQbBlQLVlqeee8E1nwVUvLj9hr3uCV8Y35Z73cmNOPcjindUiUscB00IYg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KXQt8AAD0BbyAzVfLu9P2t3Skg4ZAY9an+vYNmZO0fk=;
 b=Rp+9TpW1jUbf1KI7BgWsiaYc2OC3xKAT2V4rgCKb6OfNFazx6IkV8u4m82USCWulXFbrtblegdXA62Xidr7/ImVXKDtRHb5QV+s8rUygpjvOJmPkyh87Rnd20s6MK4mxWLkKKCpBi+hj48GPSRbMBrdub+fmbzu8BNI83xa8ms4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 324674f756b96692
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kfEfhstPjtJgFhUKCVhyWFm6rCg6FGoCSuOIfEbjxUHt7mF7pyS56toIKQ1wNr/rLy1qdQOo6qkxZz7ZL/EFBwAU1CNBDKJ/djiI0mA2vUnlHYrBtBVchWdks2PZC4PnIvOgoSCUfoGLYf6lFR//Zkn1pBnCd72DmbvSmGdNs6OE9kj1Q35cBERbUP2hoQxa5xEQq5SbOQg5g23H9nrbsz8YFUSfvEguf5fsPSpQ7ZEhBxW1tfMur0b11PHNQdXjA/AjhMksJnlpafsJE/UftBPEs0y/sxNEQ09WLawZn/l7GWZPlSIaTvrXSoCOqmli/sC6D4RYfwrm0oWMXTGsdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KXQt8AAD0BbyAzVfLu9P2t3Skg4ZAY9an+vYNmZO0fk=;
 b=A8wDWEb2HmEOViAq68eRfz/CMdmB0KSniwPE9+nIojKrKJYEzYPc9+X859/Puf2dtFnZRkpfnOCuFmR5646VQcR4M5vHkegZ3nNlGsTHMfjoJ4qRSXM+FBuUUFyWqkzQeYhYAo27P1R6fyqKOLhVP97mVH1ODNUzc4FCcgKzxxwJ1GBwxljQR71eHdcRVSwc8hfp9tF4PixE6D3tYXGl/J1zSFa09S6KutiCHUyyR2zLPAcXt3PrUrkm1n+f54prNiK/MUPHTZyLLDEK704RI6FK017ZDXzKv/uvyWQOmeexnLie2WkBuYK0BinuGuF+EwovH5k5uVjJ8r4qSLNKPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KXQt8AAD0BbyAzVfLu9P2t3Skg4ZAY9an+vYNmZO0fk=;
 b=Rp+9TpW1jUbf1KI7BgWsiaYc2OC3xKAT2V4rgCKb6OfNFazx6IkV8u4m82USCWulXFbrtblegdXA62Xidr7/ImVXKDtRHb5QV+s8rUygpjvOJmPkyh87Rnd20s6MK4mxWLkKKCpBi+hj48GPSRbMBrdub+fmbzu8BNI83xa8ms4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Topic: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Index: AQHYddkD4JsESJA0gkGnWWwykzQjSK1G6E+AgAAC3wCAABeVAIAAEeqAgAAZh4A=
Date: Thu, 9 Jun 2022 14:26:47 +0000
Message-ID: <F2278406-E9CC-4672-9669-10B4895CA854@arm.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
 <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
 <YqHtvZPQJOAFt/8K@perard.uk.xensource.com>
In-Reply-To: <YqHtvZPQJOAFt/8K@perard.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 126c83ef-852f-4ad0-4b96-08da4a242119
x-ms-traffictypediagnostic:
	VE1PR08MB4957:EE_|AM5EUR03FT018:EE_|PA4PR08MB6080:EE_
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB60806B2BF95542D2A69A83B79DA79@PA4PR08MB6080.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <52AC22F6A40552438030B30563131149@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4957
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	702a3087-05fe-4f2f-e2c9-08da4a24161a
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 14:27:06.0149
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 126c83ef-852f-4ad0-4b96-08da4a242119
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6080

Hi Anthony,

> On 9 Jun 2022, at 13:55, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> On Thu, Jun 09, 2022 at 11:51:20AM +0000, Bertrand Marquis wrote:
>> Hi,
>>
>>> On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 09.06.2022 12:16, Bertrand Marquis wrote:
>>>> With this change, compiling for x86 now fails with:
>>>> CHK     include/headers99.chk
>>>> make[9]: execvp: /bin/sh: Argument list too long
>>>> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>>>>
>>>> Not quite sure yet why, but I wanted to signal it early as others
>>>> might be impacted.
>>>>
>>>> Arm and arm64 builds are not impacted.
>>>
>>> Hmm, that patch has passed the smoke push gate already, so there likely is
>>> more to it than there being an unconditional issue. I did build-test this
>>> before pushing, and I've just re-tested on a 2nd system without seeing an
>>> issue.
>>
>> I only have the problem when building with Yocto; with a normal build the
>> issue does not occur.
>>
>
> Will the following patch help?

Yes it does, thanks a lot.

You can add my:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
>
> From 0f32f749304b233c0d5574dc6b14f66e8709feba Mon Sep 17 00:00:00 2001
> From: Anthony PERARD <anthony.perard@citrix.com>
> Date: Thu, 9 Jun 2022 13:42:52 +0100
> Subject: [XEN PATCH] build,include: rework shell script for headers++.chk
>
> The command line generated for headers++.chk by make is quite long,
> and in some environments it is too long. This issue has been seen in
> a Yocto build environment.
>
> Error messages:
>    make[9]: execvp: /bin/sh: Argument list too long
>    make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>
> Rework the rule so that the foreach loop runs in shell rather than in
> make, which reduces the command line size by a lot. We also need a way
> to get the prerequisite headers of some public headers, so we use a
> shell "case" statement for simple pattern matching, since POSIX shell
> variables alone don't provide associative arrays or names containing
> "/".
>
> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> xen/include/Makefile | 17 +++++++++++++----
> 1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 6d9bcc19b0..ca5e868f38 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -158,13 +158,22 @@ define cmd_headerscxx_chk
> 	    touch $@.new;                                                     \
> 	    exit 0;                                                           \
> 	fi;                                                                   \
> -	$(foreach i, $(filter %.h,$^),                                        \
> -	    echo "#include "\"$(i)\"                                          \
> +	get_prereq() {                                                        \
> +	    case $$1 in                                                       \
> +	    $(foreach i, $(filter %.h,$^),                                    \
> +	    $(if $($(patsubst $(srctree)/%,%,$i)-prereq),                     \
> +		$(patsubst $(srctree)/%,%,$i)$(close)                         \
> +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq),   \
> +			-include c$(j))";;))                                  \
> +	    esac;                                                             \
> +	};                                                                    \
> +	for i in $(filter %.h,$^); do                                         \
> +	    echo "#include "\"$$i\"                                           \
> 	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__       \
> 	      -include stdint.h -include $(srcdir)/public/xen.h               \
> -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> +	      `get_prereq $$i`                                                \
> 	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;) \
> +	    || exit $$?; echo $$i >> $@.new; done;                            \
> 	mv $@.new $@
> endef
>
> --
> Anthony PERARD
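The "case"-based prerequisite lookup the patch introduces can be sketched in isolation as follows; the header names and flags here are made up for illustration, not taken from the actual xen/include/Makefile:

```shell
# Minimal sketch of the technique: POSIX shell has no associative
# arrays, and header paths contain "/" which is invalid in variable
# names, so a case statement maps each header to its extra flags.
# Header names and prerequisites below are hypothetical.
get_prereq() {
    case $1 in
    public/hvm/save.h)
        echo "-include chdr.h";;
    public/arch-x86/hvm/save.h)
        echo "-include chdr.h -include cother.h";;
    esac
}

# Looping in shell (not via make's $(foreach)) keeps the generated
# command line short regardless of how many headers are checked.
for i in public/hvm/save.h public/xen.h; do
    printf 'checking %s with: %s\n' "$i" "`get_prereq $i`"
done
```

Headers without an entry simply fall through the case statement, so `get_prereq` prints nothing for them and the compile command gains no extra flags.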



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:33:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 15:33:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345573.571236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzK9t-0004XJ-H8; Thu, 09 Jun 2022 15:33:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345573.571236; Thu, 09 Jun 2022 15:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzK9t-0004XC-DW; Thu, 09 Jun 2022 15:33:17 +0000
Received: by outflank-mailman (input) for mailman id 345573;
 Thu, 09 Jun 2022 15:33:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzK9r-0004WI-FZ
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 15:33:15 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02on060e.outbound.protection.outlook.com
 [2a01:111:f400:fe07::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79833a59-e809-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 17:33:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB3964.eurprd04.prod.outlook.com (2603:10a6:5:17::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 15:33:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 15:33:11 +0000
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79833a59-e809-11ec-bd2c-47488cf2e6aa
Message-ID: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
Date: Thu, 9 Jun 2022 17:33:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] MAINTAINERS: drop XSM maintainer
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR0502CA0045.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8aaac52b-b51e-4655-5d01-08da4a2d5ca7
X-MS-TrafficTypeDiagnostic: DB7PR04MB3964:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8aaac52b-b51e-4655-5d01-08da4a2d5ca7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 15:33:11.6208
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB3964

While mail hasn't been bouncing, Daniel has not been responding to patch
submissions or otherwise interacting with the community for several
years. Move maintainership to THE REST in a somewhat unusual way, with
the goal of avoiding
- orphaning the component,
- repeating all THE REST members here,
- removing the entry altogether.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
We hope this will be transient, with a new maintainer to be established
sooner rather than later.

I realize the way I'm expressing this may upset scripts/*_maintainer*.pl,
so I'd welcome suggestions for a better alternative.

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -648,7 +648,7 @@ F:	xen/common/trace.c
 F:	xen/include/xen/trace.h
 
 XSM/FLASK
-M:	Daniel De Graaf <dgdegra@tycho.nsa.gov>
+M:	THE REST (see below)
 R:	Daniel P. Smith <dpsmith@apertussolutions.com>
 S:	Supported
 F:	tools/flask/


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:36:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 15:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345582.571248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKCZ-0005IE-4s; Thu, 09 Jun 2022 15:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345582.571248; Thu, 09 Jun 2022 15:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKCZ-0005I7-0i; Thu, 09 Jun 2022 15:36:03 +0000
Received: by outflank-mailman (input) for mailman id 345582;
 Thu, 09 Jun 2022 15:36:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzKCX-0005I0-HV
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 15:36:01 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02on061b.outbound.protection.outlook.com
 [2a01:111:f400:fe07::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dcf580c2-e809-11ec-8b38-e96605d6a9a5;
 Thu, 09 Jun 2022 17:36:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB3964.eurprd04.prod.outlook.com (2603:10a6:5:17::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 15:35:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 15:35:58 +0000
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcf580c2-e809-11ec-8b38-e96605d6a9a5
Message-ID: <e90402d6-0ea9-b977-1c1f-b3ace4e8f107@suse.com>
Date: Thu, 9 Jun 2022 17:35:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86emul/test: encourage compiler to use more embedded
 broadcast
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0347.eurprd06.prod.outlook.com
 (2603:10a6:20b:466::33) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 84995aac-7a46-49bd-d5a7-08da4a2dc009
X-MS-TrafficTypeDiagnostic: DB7PR04MB3964:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 84995aac-7a46-49bd-d5a7-08da4a2dc009
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 15:35:58.3762
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB3964

For one, it was an oversight to leave dup_{hi,lo}() undefined for the
512-bit vector size. And then in FMA testing we can also arrange for the
compiler to (hopefully) recognize broadcasting potential. Plus we can
replace the broadcast(1) use in the addsub() surrogate with inline
assembly explicitly using embedded broadcast (even gcc 12 still doesn't
support embedded broadcast for any of the addsub/subadd builtins).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Also alter addsub() surrogate.

--- a/tools/tests/x86_emulator/simd.c
+++ b/tools/tests/x86_emulator/simd.c
@@ -912,6 +912,13 @@ static inline vec_t movlhps(vec_t x, vec
 })
 #  endif
 # endif
+#elif VEC_SIZE == 64
+# if FLOAT_SIZE == 4
+#  define dup_hi(x) B(movshdup, _mask, x, undef(), ~0)
+#  define dup_lo(x) B(movsldup, _mask, x, undef(), ~0)
+# elif FLOAT_SIZE == 8
+#  define dup_lo(x) B(movddup, _mask, x, undef(), ~0)
+# endif
 #endif
 #if VEC_SIZE == 16 && defined(__SSSE3__) && !defined(__AVX512VL__)
 # if INT_SIZE == 1
--- a/tools/tests/x86_emulator/simd.h
+++ b/tools/tests/x86_emulator/simd.h
@@ -49,8 +49,10 @@ float
 # define ELEM_SIZE FLOAT_SIZE
 # if FLOAT_SIZE == 4
 #  define MODE SF
+#  define ELEM_SFX "s"
 # elif FLOAT_SIZE == 8
 #  define MODE DF
+#  define ELEM_SFX "d"
 # endif
 #endif
 #ifndef VEC_SIZE
--- a/tools/tests/x86_emulator/simd-fma.c
+++ b/tools/tests/x86_emulator/simd-fma.c
@@ -56,13 +56,27 @@ ENTRY(fma_test);
 #endif
 
 #if defined(fmaddsub) && !defined(addsub)
-# define addsub(x, y) fmaddsub(x, broadcast(1), y)
+# ifdef __AVX512F__
+#  define addsub(x, y) ({ \
+    vec_t t_; \
+    typeof(t_[0]) one_ = 1; \
+    asm ( "vfmaddsub231p" ELEM_SFX " %2%{1to%c4%}, %1, %0" \
+          : "=v" (t_) \
+          : "v" (x), "m" (one_), "0" (y), "i" (ELEM_COUNT) ); \
+    t_; \
+})
+# else
+#  define addsub(x, y) fmaddsub(x, broadcast(1), y)
+# endif
 #endif
 
 int fma_test(void)
 {
     unsigned int i;
     vec_t x, y, z, src, inv, one;
+#ifdef __AVX512F__
+    typeof(one[0]) one_ = 1;
+#endif
 
     for ( i = 0; i < ELEM_COUNT; ++i )
     {
@@ -71,6 +85,10 @@ int fma_test(void)
         one[i] = 1;
     }
 
+#ifdef __AVX512F__
+# define one one_
+#endif
+
     x = (src + one) * inv;
     y = (src - one) * inv;
     touch(src);
@@ -93,22 +111,28 @@ int fma_test(void)
     x = src + inv;
     y = src - inv;
     touch(inv);
+    touch(one);
     z = src * one + inv;
     if ( !eq(x, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = -src * one - inv;
     if ( !eq(-x, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = src * one - inv;
     if ( !eq(y, z) ) return __LINE__;
 
     touch(inv);
+    touch(one);
     z = -src * one + inv;
     if ( !eq(-y, z) ) return __LINE__;
     touch(inv);
 
+#undef one
+
 #if defined(addsub) && defined(fmaddsub)
     x = addsub(src * inv, one);
     y = addsub(src * inv, -one);


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:39:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 15:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345592.571259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKFx-00064P-Kk; Thu, 09 Jun 2022 15:39:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345592.571259; Thu, 09 Jun 2022 15:39:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKFx-00064I-Hw; Thu, 09 Jun 2022 15:39:33 +0000
Received: by outflank-mailman (input) for mailman id 345592;
 Thu, 09 Jun 2022 15:39:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzKFw-00064C-0r
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 15:39:32 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02on061f.outbound.protection.outlook.com
 [2a01:111:f400:fe07::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a801793-e80a-11ec-8b38-e96605d6a9a5;
 Thu, 09 Jun 2022 17:39:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB3964.eurprd04.prod.outlook.com (2603:10a6:5:17::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 15:39:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 15:39:30 +0000
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a801793-e80a-11ec-8b38-e96605d6a9a5
Message-ID: <56d16bba-05db-cef1-0746-1591a6744afc@suse.com>
Date: Thu, 9 Jun 2022 17:39:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: further simplify cleanup_page_mappings()
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9P251CA0012.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:50f::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3ef5e3fc-31d7-4ebc-f616-08da4a2e3e2f
X-MS-TrafficTypeDiagnostic: DB7PR04MB3964:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR04MB3964797BE6BB90AD19366CA7B3A79@DB7PR04MB3964.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ef5e3fc-31d7-4ebc-f616-08da4a2e3e2f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 15:39:29.9878
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: svo3/M00rfo6WCkSkWcfSaWcwp6pgHn92ZIYGWWalOrktFIbOkDH0d3MfPX3CqelVushbLsIQfuik/F7LMkAXg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB3964

With the removal of update_xen_mappings(), there's no longer a need for a
second error code variable to transiently hold the IOMMU unmap return
value.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I have to admit that I was tempted to get rid of PAGE_ORDER_4K on this
occasion, as it feels awkward to me to have such a constant in clearly
x86-only code.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2470,12 +2470,7 @@ static int cleanup_page_mappings(struct
         struct domain *d = page_get_owner(page);
 
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
-        {
-            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
-
-            if ( !rc )
-                rc = rc2;
-        }
+            rc = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
 
         if ( likely(!is_special_page(page)) )
         {


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:51:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 15:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345601.571269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKRI-0000LR-Mp; Thu, 09 Jun 2022 15:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345601.571269; Thu, 09 Jun 2022 15:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKRI-0000LK-Jh; Thu, 09 Jun 2022 15:51:16 +0000
Received: by outflank-mailman (input) for mailman id 345601;
 Thu, 09 Jun 2022 15:51:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+nUK=WQ=citrix.com=prvs=15254dc06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nzKRH-0000LE-AG
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 15:51:15 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb6bf6b0-e80b-11ec-8b38-e96605d6a9a5;
 Thu, 09 Jun 2022 17:51:11 +0200 (CEST)
Received: from mail-dm6nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jun 2022 11:50:28 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6066.namprd03.prod.outlook.com (2603:10b6:610:bd::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 15:50:26 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Thu, 9 Jun 2022
 15:50:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb6bf6b0-e80b-11ec-8b38-e96605d6a9a5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654789871;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=yuceVizFDn5qDctsHHjXyyVu1GYqojQxEMbWf4ogXdE=;
  b=IGZCiLhPTT/DyRr/rW94O2xpcGtjYfyD+WWsJD42D1V+QqLjWwrrO/7x
   ar8WO7KaUomnKSeGw4sshLov/j1lGb8JYCVkC1MRbxTYOck+TJ1/t7d4i
   AjUB4P925Wc/SokvTeTRx1bVtvuhX1KA33mC/nuToyAubphln0FLrPe6p
   Q=;
X-IronPort-RemoteIP: 104.47.59.176
X-IronPort-MID: 75800183
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,287,1647316800"; 
   d="scan'208";a="75800183"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=guHFeR5unr6zjrTZf0lW0yJv2rfzN/Duf071u7uF6EG0Q6vo/RLQEYcwd1UJVdheihvF0V/F4LvOLsCet5mnS+RUQwWigUwK5gAMwvLh4DNXu9reGamwC9NEd+iCxchbcxE0q0XtfgcJCzyUn3XSLra/Uwk5vncE9Tfks2B8zjuGQhoR5o31GEr2vBFX8SvcxcT+TmQYCioT63Ib76882RXdqtaIpJr2DrBoMv9zvIABBjg0AC8F0J07B4WCken1DVMj7WEDzLUCN6t1Zxad0kMUmdUaEecRDTeEciLOUzhACAGooR+hnhMBBBhm+lrSX4DdgEtpN9Zfu5fdFhQvkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yuceVizFDn5qDctsHHjXyyVu1GYqojQxEMbWf4ogXdE=;
 b=WQ6RxRiYdZjZeJIa8IYnp7D3JlOajXswfDVgCkB7McYUOxftA1ZhIGk2o1kr2G210EYhsDaaKgdMFAkPzynEaiB3D15Zffn8nfeK4f0B9YH6v2IgaF63UeTpeGavJEa9hLJhEaZ6Wf+MKgwoa0khhrxsy1RaB/cPV2D2rKzxoBxSx19a59QLhEhHRIMlAXq2rvALD/xiCf8oA+ogql4kY8iaf2OnHbmFgFjpXiWQ2DormMTC4zBIG2pNGGm0Y6UpmXEtMV576PvCQ9/MddjPgAGm5owkHn6dUditOYbdKlqqNcZdgEuZ0bcytOyeOrvlO3hYAeJlFGqrRn59Bc4fxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yuceVizFDn5qDctsHHjXyyVu1GYqojQxEMbWf4ogXdE=;
 b=KKMahrG1AINqUdSIHTUqDswmGuFemiLo4AUOght2GRj8C3JBSsSXoTmnMrRKNTiNcEyhW7DZt37zLezyjw+R3WfK8teo2wQ/7W/4Tuau0wRsilzy8GSBh0GN/C9psHB33ekaTav9oMplEVW72dFCLyOJBt5k3O8losC04ZL6qIA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/mm: further simplify cleanup_page_mappings()
Thread-Topic: [PATCH] x86/mm: further simplify cleanup_page_mappings()
Thread-Index: AQHYfBcxlHJGb8s4Mk+MGPtEwJgQia1HORaA
Date: Thu, 9 Jun 2022 15:50:26 +0000
Message-ID: <756741e4-e736-fe66-6653-775e16114059@citrix.com>
References: <56d16bba-05db-cef1-0746-1591a6744afc@suse.com>
In-Reply-To: <56d16bba-05db-cef1-0746-1591a6744afc@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3690bc74-d46d-4d47-4022-08da4a2fc55d
x-ms-traffictypediagnostic: CH0PR03MB6066:EE_
x-microsoft-antispam-prvs:
 <CH0PR03MB606627327C44998719272175BAA79@CH0PR03MB6066.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <793C09DFB12A724689E9DF59C568292C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3690bc74-d46d-4d47-4022-08da4a2fc55d
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jun 2022 15:50:26.1175
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: LgPri/ZWAb4KcZqJsEmCEPYhhxGeQzZY1ANby4RofbUDfxja/z6btLBuadHcxEqiLG0AYZ44jSAtN4Nj4y0edIReRLjFZ6EM34aPVCwuRDY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6066

On 09/06/2022 16:39, Jan Beulich wrote:
> With the removal of update_xen_mappings() there's no need anymore for a
> 2nd error code variable to transiently hold the IOMMU unmap return
> value.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Oh - I'd not even spotted that simplification.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> I have to admit that I was tempted to get rid of PAGE_ORDER_4K at this
> occasion, as it feels awkward to me to have such in clearly x86-only
> code.

Happy for that to go too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:52:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 15:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345610.571280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKSq-0000xy-4j; Thu, 09 Jun 2022 15:52:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345610.571280; Thu, 09 Jun 2022 15:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzKSq-0000xr-26; Thu, 09 Jun 2022 15:52:52 +0000
Received: by outflank-mailman (input) for mailman id 345610;
 Thu, 09 Jun 2022 15:52:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jWvP=WQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzKSo-0000xE-Kq
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 15:52:50 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on062d.outbound.protection.outlook.com
 [2a01:111:f400:fe06::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3690995b-e80c-11ec-8b38-e96605d6a9a5;
 Thu, 09 Jun 2022 17:52:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8545.eurprd04.prod.outlook.com (2603:10a6:20b:420::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Thu, 9 Jun
 2022 15:52:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Thu, 9 Jun 2022
 15:52:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3690995b-e80c-11ec-8b38-e96605d6a9a5
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jzpsjknwbU40iC/+l8GagzCTs0OXrg/jVbCkp2LMq6fM5NJ/mhcKheii8WBweNuRh3ccr6qfsf3M1LR5CEP99x95Wriy/PsStWqo5vEH7imrMeiSKz+sWIQEfbky49BFoA6gFYXcJ++e4gvQ5+Ct2SF1lJ8Gr6OyeS7O6Ra9T7R2m0StS3on9+gNkcdY0Rqlrd23HOVV895IpOczYke1MY392bFi7FCxGp+CYTwTEjJXORNXfwIkA0LO9E87xnew0Rwx2yC1QTPSvYDFzRj2fGGkgSfyL4p9iG5gEpxKrBTSWMt3VmJY0Pjkq5bVmvmk3LWfoAjhGfNJfU3SLSHIrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W0KfPugo3KQU69jKeUIwn3lcDuM0nZSDmJTaW1CYFB4=;
 b=iRKpcjbG2JA2IXIdgXHnHb5m/nKCaNfKSxdRALS4FfPO2QyRhVQmqWiF/MPOTbFQQsTNHSydCsB8hUs+xLDWk7w2l9DaKQHZ6C5iC73r/CK40r3xm7T806Gk4dG/9CoF9Ohe75Bbs/88fmV5HJTDRUKOAumCTpreRTjqI3T9Pvpi4r/C7cGz4QjMkwHQ4OEwN9Bp6RJrW/YzEcuzy9tfrKUiTTl8mg82KFJBX+hqYvziUjKbG6vmAZyoUBsiHS63Wsi0Z4rPdb68D4ykGHK0RlSXpo3MBwCwu0fJx3ANnQoZHdKwwYr6e+75Y+bfRCODO9P6TfWc8h7sB+VeRpDtwQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0KfPugo3KQU69jKeUIwn3lcDuM0nZSDmJTaW1CYFB4=;
 b=z/5J8jlgVTLTrwN31eyScP09K+y5f/Ls56zYnlLYTGVpH4l4DupMCCbt5njMdDQDbwHt1AGPSt08H59ZYnbxG33bHaC6fFDvMMDFjyavTtFnnps1w+D6OZ1MCNlYs5Npq5In1t8fTWyOH4886/UOddFQDLnerRSTrbXafle+u91vwfw63cU9+SomY1RhefP32pGIlBeazz2gA73+z0XtLTafvKOA2yUYZu38b3b6NCFhUmWXTBUQK/CHfVDcf3oQ0gZovSc765BKxosPFS8Eczi8yT6fYLb9N8kIkYV80Am4c8IxVJtAruTpSFBnjq418EFvBL7gygcgViG1GxJyIg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <79ebbde2-4be8-d393-476d-25326a2aa327@suse.com>
Date: Thu, 9 Jun 2022 17:52:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] EFI: strip xen.efi when putting it on the EFI partition
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0288.eurprd06.prod.outlook.com
 (2603:10a6:20b:45a::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8a5cad77-d870-43cd-d60f-08da4a301985
X-MS-TrafficTypeDiagnostic: AS8PR04MB8545:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB85450C349A57423204154293B3A79@AS8PR04MB8545.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a5cad77-d870-43cd-d60f-08da4a301985
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jun 2022 15:52:47.4683
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ee477lgsmE3EHTMqg5K2/ymklB01xSawyFrRvYQLLkd5RVKzy5O5AdjwDQ3IVL2Y7sid6UfPNexQoX2xeHUSAw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8545

With debug info retained, xen.efi can be quite large. Unlike for xen.gz,
there's no intermediate step (mkelf32 in that case) involved which would
strip debug info as a side effect. While installing xen.efi on the EFI
partition is an optional step (intended as a courtesy to the developer),
adjust it also to document what distros would be expected to do during
boot loader configuration (which is what would normally put xen.efi onto
the EFI partition).

Model the control over stripping after Linux's module installation,
except that the stripped executable is constructed in the build area
instead of in the destination location. This conserves space in the
destination - EFI partitions tend to be only a few hundred MB in size.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
GNU strip 2.38 appears to have issues when acting on a PE binary:
- file name symbols are also stripped; while there is a separate
  --keep-file-symbols option (which I would have thought to be on by
  default anyway), its use so far makes no difference,
- the string table grows in size, when one would expect it to retain its
  size (or shrink),
- the linker version in the header is changed and the timestamp is
  zapped.
Older GNU strip (observed with 2.35.1) doesn't work at all ("Data
Directory size (1c) exceeds space left in section (8)").

Future GNU strip is going to honor --keep-file-symbols (and will also
have the other issues fixed). The question is whether we should use that
option (so that the symbol table as a whole makes sense), or whether we
should instead strip the symbol table as well by default.

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -465,6 +465,22 @@ endif
 .PHONY: _build
 _build: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
 
+# Strip
+#
+# INSTALL_EFI_STRIP, if defined, will cause xen.efi to be stripped before it
+# is installed. If INSTALL_EFI_STRIP is '1', then the default option
+# --strip-debug will be used. Otherwise, the INSTALL_EFI_STRIP value will be
+# used as the option(s) to the strip command.
+ifdef INSTALL_EFI_STRIP
+
+ifeq ($(INSTALL_EFI_STRIP),1)
+efi-strip-opt := --strip-debug
+else
+efi-strip-opt := $(INSTALL_EFI_STRIP)
+endif
+
+endif
+
 .PHONY: _install
 _install: D=$(DESTDIR)
 _install: T=$(notdir $(TARGET))
@@ -489,6 +505,9 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T).efi; \
 		if [ -n '$(EFI_MOUNTPOINT)' -a -n '$(EFI_VENDOR)' ]; then \
+			$(if $(efi-strip-opt), \
+			     $(STRIP) $(efi-strip-opt) -p -o $(TARGET).efi.stripped $(TARGET).efi && \
+			     $(INSTALL_DATA) $(TARGET).efi.stripped $(D)$(EFI_MOUNTPOINT)/efi/$(EFI_VENDOR)/$(T)-$(XEN_FULLVERSION).efi ||) \
 			$(INSTALL_DATA) $(TARGET).efi $(D)$(EFI_MOUNTPOINT)/efi/$(EFI_VENDOR)/$(T)-$(XEN_FULLVERSION).efi; \
 		elif [ "$(D)" = "$(patsubst $(shell cd $(XEN_ROOT) && pwd)/%,%,$(D))" ]; then \
 			echo 'EFI installation only partially done (EFI_VENDOR not set)' >&2; \


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 15:54:59 2022
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v2] x86emul/test: encourage compiler to use more embedded
 broadcast
Thread-Topic: [PATCH v2] x86emul/test: encourage compiler to use more embedded
 broadcast
Thread-Index: AQHYfBai5wHQ+WsWjkyECYCA9O3V9a1HOlMA
Date: Thu, 9 Jun 2022 15:54:51 +0000
Message-ID: <ff5a8ac7-95d4-be81-6434-5020a54ef727@citrix.com>
References: <e90402d6-0ea9-b977-1c1f-b3ace4e8f107@suse.com>
In-Reply-To: <e90402d6-0ea9-b977-1c1f-b3ace4e8f107@suse.com>

On 09/06/2022 16:35, Jan Beulich wrote:
> For one it was an oversight to leave dup_{hi,lo}() undefined for 512-bit
> vector size. And then in FMA testing we can also arrange for the
> compiler to (hopefully) recognize broadcasting potential. Plus we can
> replace the broadcast(1) use in the addsub() surrogate with inline
> assembly explicitly using embedded broadcast (even gcc12 still doesn't
> support broadcast for any of the addsub/subadd builtins).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 16:19:22 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170892: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=2177de7b6e35499584731e6f4869903aa553022b
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 16:19:13 +0000

flight 170892 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2177de7b6e35499584731e6f4869903aa553022b
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  699 days
Failing since        151818  2020-07-11 04:18:52 Z  698 days  680 attempts
Testing same since   170892  2022-06-09 04:20:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112660 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 16:52:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 16:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345641.571314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzLO7-0001rW-7i; Thu, 09 Jun 2022 16:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345641.571314; Thu, 09 Jun 2022 16:52:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzLO7-0001rP-3s; Thu, 09 Jun 2022 16:52:03 +0000
Received: by outflank-mailman (input) for mailman id 345641;
 Thu, 09 Jun 2022 16:52:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzLO5-0001rD-MH; Thu, 09 Jun 2022 16:52:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzLO5-0003ro-KK; Thu, 09 Jun 2022 16:52:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzLO5-0005Co-5r; Thu, 09 Jun 2022 16:52:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzLO5-0006EY-5N; Thu, 09 Jun 2022 16:52:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kiPRYUSAhxKK+W8UdnpXbEMbeDuFRM0GyJbbqk4sIDM=; b=F+eGQFGxM4sdqLAh9LmWm0t9Xp
	ds1CusFmxzyADBMJgCEG6ysAhg3dVLqFk4MYAZ4nKAtRS4zVIfuVrlSXH+FsleNaVFY7dwtKaIjKo
	UW3479mSgUJBU1t6j7Bsp4PxhEKJfCd699TKvzxxizmwvcfwCPTpdnHZufEqnwPwWNhI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170900: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c1c9cae3a9633054b177c5de21ad7268162b2f2c
X-Osstest-Versions-That:
    xen=59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 16:52:01 +0000

flight 170900 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170900/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c1c9cae3a9633054b177c5de21ad7268162b2f2c
baseline version:
 xen                  59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8

Last test of basis   170899  2022-06-09 09:00:28 Z    0 days
Testing same since   170900  2022-06-09 13:01:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   59fbdf8a36..c1c9cae3a9  c1c9cae3a9633054b177c5de21ad7268162b2f2c -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 18:39:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 18:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345652.571325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzN4G-0005sD-7v; Thu, 09 Jun 2022 18:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345652.571325; Thu, 09 Jun 2022 18:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzN4G-0005s6-3g; Thu, 09 Jun 2022 18:39:40 +0000
Received: by outflank-mailman (input) for mailman id 345652;
 Thu, 09 Jun 2022 18:39:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3t4G=WQ=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nzN4E-0005s0-K3
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 18:39:38 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 83319a8d-e823-11ec-8b38-e96605d6a9a5;
 Thu, 09 Jun 2022 20:39:36 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id a15so39315801lfb.9
 for <xen-devel@lists.xenproject.org>; Thu, 09 Jun 2022 11:39:36 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 l28-20020a2e571c000000b0024f3d1dae7csm3796073ljb.4.2022.06.09.11.39.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 09 Jun 2022 11:39:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83319a8d-e823-11ec-8b38-e96605d6a9a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=de7CZ8jeEcERGED2Q96ugWzLITXNhs+e7DuD9EZQpkI=;
        b=Z3UGsAv2k8V7+UhtCeOEPgRUU8g3Gd5ZsoDNWGXDXBx63L2yq4A5L1yAa4y4PlXPC/
         s6sJVvfhSSdKcaZZOPvZNolhW1/I5Ywy0r8A68dG826aaekyPcC+lndkpzjfFIt+IWC0
         CtGS1RgpLSe+QcQ0ORHIYlA9QNWL8a7o8/tjm/U0x5eJ8oYBkPuPuOoiW5W6GnACdHWI
         oXgZ8B40x/z5lyBx4qYrXb7HBFvX6aoHpXuVV3iJBNJnNiz4x8Vd5XdIMJdlPGt0NYM6
         r08cTDfZkgJe4PuW/t6tNa8w1UaVthWe6f+XYpLyZnkQQo7a1qybjxDKPsoUe3pbK8F8
         6XJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=de7CZ8jeEcERGED2Q96ugWzLITXNhs+e7DuD9EZQpkI=;
        b=78z83A6AOZ1HFV0kjo/2ntWlViTz+9aEvDjFB8ZOHC6uMSDU4XwCdqVnltsDF7tIQ/
         cJGBywZ92Tp0ZzyoBwaDnhqxMoWqRSfO2VgSXlbbx7X1bBZlUudv16E5YC+NN+1VeN2t
         4kTdBaviJnFiqfCudwk01qJIE/FAhsbMv1JTG90NUrx2OmnIxdkWE13EF8/LLBkDvxi9
         BG5L8FoqN3bqD6w5aSSL/N0Mf3nSZF9cHeMGBj9ucrw5tWH2eFXfI6sNkQbXgc4JDdD4
         eWVQaSnW5qruwjCt+e51Z7zXpjz8WeyIcPFPFBYWlChfs1XW6iR9cD2dCqg2t2mvXrCH
         XMag==
X-Gm-Message-State: AOAM531Zb3aI3OV69yOWvDAT/IYwwVAX38DwH66G/vPADz97q4yiw9GX
	gNwdvMMubSQtcxzdXVoxWWU=
X-Google-Smtp-Source: ABdhPJyxgn8y703PIdZdT+iSaqkj7xP8yQhxWw4Bbr2lxRzCyaF5hcAgLMHi7vf3aYveYvKsHF5wpg==
X-Received: by 2002:a05:6512:606:b0:478:fdce:eef8 with SMTP id b6-20020a056512060600b00478fdceeef8mr29927481lfe.461.1654799975798;
        Thu, 09 Jun 2022 11:39:35 -0700 (PDT)
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com>
 <1652810658-27810-3-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7f886dfb-2b42-bc70-d55f-14ecd8144e3e@gmail.com>
Date: Thu, 9 Jun 2022 21:39:34 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 04.06.22 00:19, Stefano Stabellini wrote:


Hello Stefano

Thank you for having a look and sorry for the late response.

> On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled then unpopulated
>> DMAable (contiguous) pages will be allocated for grant mapping into
>> instead of ballooning out real RAM pages.
>>
>> TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
>> fails.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>>   drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
>>   1 file changed, 27 insertions(+)
>>
>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>> index 8ccccac..2bb4392 100644
>> --- a/drivers/xen/grant-table.c
>> +++ b/drivers/xen/grant-table.c
>> @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
>>    */
>>   int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>   {
>> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>> +	int ret;
> This is an alternative implementation of the same function.

Currently, yes.


>   If we are
> going to use #ifdef, then I would #ifdef the entire function, rather
> than just the body. Otherwise within the function body we can use
> IS_ENABLED.


Good point. Note that there is one thing still missing from the current
patch, which is described in the TODO:

"Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
fails." So I will likely use IS_ENABLED within the function body.

If CONFIG_XEN_UNPOPULATED_ALLOC is enabled, gnttab_dma_alloc_pages()
will try xen_alloc_unpopulated_dma_pages() first and, if that fails,
fall back to allocating real RAM pages and ballooning them out.

One point is not entirely clear to me. If we use a fallback in
gnttab_dma_alloc_pages(), then we must mirror it in
gnttab_dma_free_pages() as well, since we cannot use
xen_free_unpopulated_dma_pages() for real RAM pages. The question is
how to pass this information to gnttab_dma_free_pages(). The first
idea that comes to mind is to add a flag to struct gnttab_dma_alloc_args...


>
>> +	ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
>> +			args->pages);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
>> +	if (ret < 0) {
>> +		gnttab_dma_free_pages(args);
> it should xen_free_unpopulated_dma_pages ?

Besides calling xen_free_unpopulated_dma_pages(), we also need to call
gnttab_pages_clear_private() here; this is what gnttab_dma_free_pages()
does.

I can change it to call both functions instead:

     gnttab_pages_clear_private(args->nr_pages, args->pages);
     xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);

Shall I?


>
>
>> +		return ret;
>> +	}
>> +
>> +	args->vaddr = page_to_virt(args->pages[0]);
>> +	args->dev_bus_addr = page_to_phys(args->pages[0]);
> There are two things to note here.
>
> The first thing to note is that normally we would call pfn_to_bfn to
> retrieve the dev_bus_addr of a page because pfn_to_bfn takes into
> account foreign mappings. However, these are freshly allocated pages
> without foreign mappings, so page_to_phys/dma should be sufficient.

agree


>
>
> The second has to do with physical addresses and DMA addresses. The
> functions are called gnttab_dma_alloc_pages and
> xen_alloc_unpopulated_dma_pages which make you think we are retrieving a
> DMA address here. However, to get a DMA address we need to call
> page_to_dma rather than page_to_phys.
>
> page_to_dma takes into account special offsets that some devices have
> when accessing memory. There are real cases on ARM where the physical
> address != DMA address, e.g. RPi4.

I see. Now I am unsure whether it would be better to name the API:

xen_alloc_unpopulated_cma_pages()

or

xen_alloc_unpopulated_contiguous_pages()

What do you think?


>
> However, to call page_to_dma you need to specify as first argument the
> DMA-capable device that is expected to use those pages for DMA (e.g. an
> ethernet device or a MMC controller.) While the args->dev we have in
> gnttab_dma_alloc_pages is the gntdev_miscdev.

agree

As I understand it, at this point it is not known exactly which device
these pages will eventually be used with.

For now, it is only known that these pages are to be used by a
userspace PV backend for grant mappings.


>
> So this interface cannot actually be used to allocate memory that is
> supposed to be DMA-able by a DMA-capable device, such as an ethernet
> device.

agree


>
> But I think that should be fine because the memory is meant to be used
> by a userspace PV backend for grant mappings. If any of those mappings
> end up being used for actual DMA in the kernel they should go through the
> drivers/xen/swiotlb-xen.c and xen_phys_to_dma should be called, which
> ends up calling page_to_dma as appropriate.
>
> It would be good to double-check that the above is correct and, if so,
> maybe add a short in-code comment about it:
>
> /*
>   * These are not actually DMA addresses but regular physical addresses.
>   * If these pages end up being used in a DMA operation then the
>   * swiotlb-xen functions are called and xen_phys_to_dma takes care of
>   * the address translations:
>   *
>   * - from gfn to bfn in case of foreign mappings
>   * - from physical to DMA addresses in case the two are different for a
>   *   given DMA-mastering device
>   */

I agree this needs to be re-checked. But there is one wrinkle here: if
the userspace PV backend runs in a domain other than Dom0 (a
non-1:1-mapped domain), then xen-swiotlb does not seem to be in use.
What should happen in that case?


>
>
>
>> +	return ret;
>> +#else
>>   	unsigned long pfn, start_pfn;
>>   	size_t size;
>>   	int i, ret;
>> @@ -910,6 +929,7 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>   fail:
>>   	gnttab_dma_free_pages(args);
>>   	return ret;
>> +#endif
>>   }
>>   EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>>   
>> @@ -919,6 +939,12 @@ EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>>    */
>>   int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>>   {
>> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>> +	gnttab_pages_clear_private(args->nr_pages, args->pages);
>> +	xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);
>> +
>> +	return 0;
>> +#else
>>   	size_t size;
>>   	int i, ret;
>>   
>> @@ -946,6 +972,7 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>>   		dma_free_wc(args->dev, size,
>>   			    args->vaddr, args->dev_bus_addr);
>>   	return ret;
>> +#endif
>>   }
>>   EXPORT_SYMBOL_GPL(gnttab_dma_free_pages);
>>   #endif
>> -- 
>> 2.7.4
>>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 18:50:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 18:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345661.571336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzNEn-0008Mf-8e; Thu, 09 Jun 2022 18:50:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345661.571336; Thu, 09 Jun 2022 18:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzNEn-0008MY-4d; Thu, 09 Jun 2022 18:50:33 +0000
Received: by outflank-mailman (input) for mailman id 345661;
 Thu, 09 Jun 2022 18:50:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jU2H=WQ=efficios.com=mathieu.desnoyers@srs-se1.protection.inumbo.net>)
 id 1nzNEm-0008MS-7d
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 18:50:32 +0000
Received: from mail.efficios.com (mail.efficios.com [167.114.26.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08143099-e825-11ec-bd2c-47488cf2e6aa;
 Thu, 09 Jun 2022 20:50:30 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id 5A8574128F6;
 Thu,  9 Jun 2022 14:50:28 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Cdaco1Q1VkzD; Thu,  9 Jun 2022 14:50:28 -0400 (EDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id E265B412B3B;
 Thu,  9 Jun 2022 14:50:27 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 0jctropM8IS9; Thu,  9 Jun 2022 14:50:27 -0400 (EDT)
Received: from thinkos.internal.efficios.com (192-222-180-24.qc.cable.ebox.net
 [192.222.180.24])
 by mail.efficios.com (Postfix) with ESMTPSA id 7D691412C30;
 Thu,  9 Jun 2022 14:50:27 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08143099-e825-11ec-bd2c-47488cf2e6aa
DKIM-Filter: OpenDKIM Filter v2.10.3 mail.efficios.com E265B412B3B
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=efficios.com;
	s=default; t=1654800627;
	bh=X19fc2ZJzlK+2LLBPb6sjkiFfHHu0f+h2vgW4rhOtaM=;
	h=From:To:Date:Message-Id:MIME-Version;
	b=ZTchHqWwc67DZj2lqEkw7mDhWZCF5dI334JQ9WQlbx3E8zey19t5KaAxuWvWeRxvB
	 paszcqumr+nQbv2QiVrZDFIRsvAl7TdGarogQ7JV+GkgLu413bQ/JookNa38SP4q65
	 AQXmXUbk/9VotSgg26bOXp7CmkZhlUAydI3UAG65rZMb8pwO06F9osklhFUYTtRHaZ
	 pKF+Z/h+6BZAzm+jT2nJ3a8dxlnS/LDn+QHNJqP8cUvYd5PsxukDe+nuM8Ptd6S1XD
	 GB2KVT1dKcnxe2ZPxfDcyTlNMQMYFeLKPDsLmxwpGxnAq88Xoh2QnA18vshJu3N/bI
	 bDda1Q51GzK4w==
X-Virus-Scanned: amavisd-new at efficios.com
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: grub-devel@gnu.org,
	Daniel Kiper <dkiper@net-space.pl>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v5 2/5] grub-mkconfig linux_xen: Fix quadratic algorithm for sorting menu items
Date: Thu,  9 Jun 2022 14:50:21 -0400
Message-Id: <20220609185024.447922-3-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220609185024.447922-1-mathieu.desnoyers@efficios.com>
References: <20220609185024.447922-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

The current implementation of the 20_linux_xen script implements its
menu items sorting in bash with a quadratic algorithm, calling "sed",
"sort", "head", and "grep" to compare versions between individual lines,
which is annoyingly slow for kernel developers who can easily end up
with 50-100 kernels in their boot partition.

This fix is ported from the 10_linux script, which has a similar
quadratic code pattern.

[ Note: this is untested. I would be grateful if anyone with a Xen
  environment could test it before it is merged. ]

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: xen-devel@lists.xenproject.org
---
Changes since v4:
- Combine sed -e '...' -e '...' into sed -e '...; ...'
---
 util/grub.d/20_linux_xen.in | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/util/grub.d/20_linux_xen.in b/util/grub.d/20_linux_xen.in
index 51a983926..4382303c1 100644
--- a/util/grub.d/20_linux_xen.in
+++ b/util/grub.d/20_linux_xen.in
@@ -237,11 +237,17 @@ esac
 # yet, so it's empty. In a submenu it will be equal to '\t' (one tab).
 submenu_indentation=""
 
+# Perform a reverse version sort on the entire xen_list and linux_list.
+# Temporarily replace the '.old' suffix by ' 1' and append ' 2' for all
+# other files to order the '.old' files after their non-old counterpart
+# in reverse-sorted order.
+
+reverse_sorted_xen_list=$(echo ${xen_list} | tr ' ' '\n' | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' | version_sort -r | sed -e 's/ 1$/.old/; s/ 2$//')
+reverse_sorted_linux_list=$(echo ${linux_list} | tr ' ' '\n' | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' | version_sort -r | sed -e 's/ 1$/.old/; s/ 2$//')
+
 is_top_level=true
 
-while [ "x${xen_list}" != "x" ] ; do
-    list="${linux_list}"
-    current_xen=`version_find_latest $xen_list`
+for current_xen in ${reverse_sorted_xen_list}; do
     xen_basename=`basename ${current_xen}`
     xen_dirname=`dirname ${current_xen}`
     rel_xen_dirname=`make_system_path_relative_to_its_root $xen_dirname`
@@ -273,8 +279,7 @@ while [ "x${xen_list}" != "x" ] ; do
        fi
     done
 
-    while [ "x$list" != "x" ] ; do
-	linux=`version_find_latest $list`
+    for linux in ${reverse_sorted_linux_list}; do
 	gettext_printf "Found linux image: %s\n" "$linux" >&2
 	basename=`basename $linux`
 	dirname=`dirname $linux`
@@ -351,13 +356,10 @@ while [ "x${xen_list}" != "x" ] ; do
 	    linux_entry "${OS}" "${version}" "${xen_version}" recovery \
 		"${GRUB_CMDLINE_LINUX_RECOVERY} ${GRUB_CMDLINE_LINUX}" "${GRUB_CMDLINE_XEN}"
 	fi
-
-	list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
     done
     if [ x"$is_top_level" != xtrue ]; then
 	echo '	}'
     fi
-    xen_list=`echo $xen_list | tr ' ' '\n' | fgrep -vx "$current_xen" | tr '\n' ' '`
 done
 
 # If at least one kernel was found, then we need to
-- 
2.30.2
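[Editorial note: the '.old' suffix trick in the sed pipelines above can be
illustrated in isolation. The following is a hypothetical standalone sketch,
not part of the patch: a sample file list stands in for the real
xen_list/linux_list, and GNU `sort -rV` stands in for grub-mkconfig's
`version_sort -r` helper, which it is assumed to match.]

```shell
#!/bin/sh
# Hypothetical demo of the '.old' ordering trick used in the patch above.
# Assumption: GNU 'sort -rV' orders versions like version_sort -r does.

linux_list="vmlinuz-5.10 vmlinuz-5.10.old vmlinuz-5.4 vmlinuz-5.15"

# Tag '.old' entries with ' 1' and everything else with ' 2', so that in
# a *reverse* version sort each '.old' file lands directly after its
# non-old counterpart; then strip the tags again.
reverse_sorted=$(echo ${linux_list} | tr ' ' '\n' \
    | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' \
    | sort -rV \
    | sed -e 's/ 1$/.old/; s/ 2$//')

echo "${reverse_sorted}"
```

Running it prints vmlinuz-5.15, vmlinuz-5.10, vmlinuz-5.10.old,
vmlinuz-5.4, one per line: a single sort pass replaces the repeated
version_find_latest/fgrep filtering of the old quadratic loop.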



From xen-devel-bounces@lists.xenproject.org Thu Jun 09 21:29:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 21:29:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345671.571347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzPil-0001LU-M8; Thu, 09 Jun 2022 21:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345671.571347; Thu, 09 Jun 2022 21:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzPil-0001LN-Id; Thu, 09 Jun 2022 21:29:39 +0000
Received: by outflank-mailman (input) for mailman id 345671;
 Thu, 09 Jun 2022 21:29:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzPik-0001LD-93; Thu, 09 Jun 2022 21:29:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzPij-0000Qj-Tf; Thu, 09 Jun 2022 21:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzPii-0004oE-In; Thu, 09 Jun 2022 21:29:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzPii-000102-G4; Thu, 09 Jun 2022 21:29:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+zMXoulSK7TgVNVdhOHXoMc/Ea1FIA/rZd5VIh9tQnI=; b=xvvIsbrg6txlnbXdWeTIPjIo1f
	IPEDkM5xWjc6gb4Gektpv/S19c9JPz6TsUv+uJPv5pxl5RWox2HR/xi1hTm2HPJv4EjmmWwP/WlMD
	VgyG7ccGm/gbEw1RfdcT5LXd4O+LpcPfJC14fR/IVR2TiTD93wt4VDiFLWIbRvHN5rSE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170894: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6bfb56e93bcef41859c2d5ab234ffd80b691be35
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 21:29:36 +0000

flight 170894 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170894/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                6bfb56e93bcef41859c2d5ab234ffd80b691be35
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   16 days
Failing since        170716  2022-05-24 11:12:06 Z   16 days   43 attempts
Testing same since   170894  2022-06-09 05:18:39 Z    0 days    1 attempts

------------------------------------------------------------
2281 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 268886 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 09 22:42:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jun 2022 22:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345683.571358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzQr6-0002QQ-74; Thu, 09 Jun 2022 22:42:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345683.571358; Thu, 09 Jun 2022 22:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzQr6-0002QJ-3R; Thu, 09 Jun 2022 22:42:20 +0000
Received: by outflank-mailman (input) for mailman id 345683;
 Thu, 09 Jun 2022 22:42:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzQr5-0002Q9-0y; Thu, 09 Jun 2022 22:42:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzQr4-00020v-Tq; Thu, 09 Jun 2022 22:42:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzQr4-0008Co-AT; Thu, 09 Jun 2022 22:42:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzQr4-0002JG-9c; Thu, 09 Jun 2022 22:42:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jQ2EyHbwM/pRSb/g6rCkH4tGkM8apwsBAvllnVbG1YM=; b=DgeMIar8AYH9oBO09Va29Ge05x
	zE3LQJ9MHWqvZMxfu/6mCe4FQkQO+mFEEe+u20tXdwIzX0eZgeStJbBBXJSHg0kZs5hicWDI1DTQe
	+0D7uvEz+m5R76CShaMpMOaEPElkNwcR1qope753Mi61BsR8TRg0q6k9zlBVzrBC6b/g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 170895: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
X-Osstest-Versions-That:
    linux=04b092e4a01a3488e762897e2d29f85eda2c6a60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jun 2022 22:42:18 +0000

flight 170895 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170895/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 170887 pass in 170895
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 170887

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 170887 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 170887 never pass
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 170724
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170724
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170736
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170736
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170736
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170736
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170736
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170736
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
baseline version:
 linux                04b092e4a01a3488e762897e2d29f85eda2c6a60

Last test of basis   170736  2022-05-25 18:40:38 Z   15 days
Testing same since   170843  2022-06-06 06:44:17 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akira Yokosawa <akiyks@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariadne Conill <ariadne@dereferenced.org>
  Aristeu Rozanski <aris@redhat.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Christian Brauner <brauner@kernel.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Efremov (Oracle) <efremov@linux.com>
  Dmitry Mastykin <dmastykin@astralinux.ru>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  IotaHydrae <writeforever@foxmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Corbet <corbet@lwn.net>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marek Maslanka <mm@semihalf.com>
  Marek Maślanka <mm@semihalf.com>
  Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Minchan Kim <minchan@kernel.org>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Noah Meyerhans <nmeyerha@amazon.com>
  Noah Meyerhans <noahm@debian.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Piyush Malgujar <pmalgujar@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sarthak Kukreti <sarthakkukreti@google.com>
  Sasha Levin <sashal@kernel.org>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Stefan Ghinea <stefan.ghinea@windriver.com>
  Stefan Mahnke-Hartmann <stefan.mahnke-hartmann@infineon.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Szymon Balcerak <sbalcerak@marvell.com>
  Thomas Bartschies <thomas.bartschies@cvk.de>
  Thomas Gleixner <tglx@linutronix.de>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Veronika Kabatova <vkabatov@redhat.com>
  Vitaly Chikunov <vt@altlinux.org>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xiu Jianfeng <xiujianfeng@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   04b092e4a01a..35c6471fd2c1  35c6471fd2c181f6e5e0b292dc759b49dbd95d6a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 00:49:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 00:49:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345697.571368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzSpP-00013V-4z; Fri, 10 Jun 2022 00:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345697.571368; Fri, 10 Jun 2022 00:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzSpP-00013O-2F; Fri, 10 Jun 2022 00:48:43 +0000
Received: by outflank-mailman (input) for mailman id 345697;
 Fri, 10 Jun 2022 00:48:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tsmg=WR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzSpN-00013I-6D
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 00:48:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f61a36d-e857-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 02:48:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 68757615D1;
 Fri, 10 Jun 2022 00:48:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 56752C34114;
 Fri, 10 Jun 2022 00:48:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f61a36d-e857-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654822114;
	bh=IDgNoo+BA5lAS7UKqsa5Qr/m/XsUGO4spdDI9oW0io0=;
	h=Date:From:To:cc:Subject:From;
	b=r+GfQOEe+TGzWbFJfMwnH4fYrjWZZQRVhyAva4Y+/OiPh1E4SSqDeTjpnIW7pvvxX
	 bXur0ZnDs3qZwwQGg+dGctG9VrseaxY30kzkBnNZeVEpg15sAa6Gu5gC4D+TRLbw2i
	 iI3VhQhQjnCO05mLKRy3eKg6EmkAYLO8zmWxq5ovRCyXS9RKoHxpw+TT8Vw2OhoZ3D
	 LdcGN4mP+4T8p+xp5eoCDbclHmmHcX7rkM/szvd25BrG0J+XnMNLcEhIJ48Xnakrlh
	 K01o6bsYJ3pSeyRHlloTrTzpvXJtTnJyytS1kTVYY7jGWZeMsR9Xe6hohQsxNZ4MEm
	 W1Qxd6zjGpadw==
Date: Thu, 9 Jun 2022 17:48:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, jbeulich@suse.com, George.Dunlap@citrix.com, 
    roger.pau@citrix.com, Artem_Mygaiev@epam.com, Andrew.Cooper3@citrix.com, 
    julien@xen.org, Bertrand.Marquis@arm.com
Subject: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Message-ID: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Add the new MISRA C rules agreed by the MISRA C working group to
docs/misra/rules.rst.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

---

I added the rules that we agreed upon this morning together with all the
notes we discussed, in particular:

- macros as macro parameters at invocation time for Rule 5.3
- the clarification of Rule 9.1
- gnu_inline exception for Rule 8.10


diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 6ccff07765..5c28836bc8 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -89,6 +89,28 @@ existing codebase are work-in-progress.
        (xen/include/public/) are allowed to retain longer identifiers
        for backward compatibility.
 
+   * - `Rule 5.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_02.c>`_
+     - Required
+     - Identifiers declared in the same scope and name space shall be
+       distinct
+     - The Xen character limit for identifiers is 40. Public headers
+       (xen/include/public/) are allowed to retain longer identifiers
+       for backward compatibility.
+
+   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
+     - Required
+     - An identifier declared in an inner scope shall not hide an
+       identifier declared in an outer scope
+     - Using macros as macro parameters at invocation time is allowed,
+       e.g. MAX(var0, MIN(var1, var2))
+
+   * - `Rule 5.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_04.c>`_
+     - Required
+     - Macro identifiers shall be distinct
+     - The Xen character limit for macro identifiers is 40. Public
+       headers (xen/include/public/) are allowed to retain longer
+       identifiers for backward compatibility.
+
    * - `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
      - Required
      - Single-bit named bit fields shall not be of a signed type
@@ -123,8 +145,75 @@ existing codebase are work-in-progress.
        declarations of objects and functions that have internal linkage
      -
 
+   * - `Rule 8.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_10.c>`_
+     - Required
+     - An inline function shall be declared with the static storage class
+     - gnu_inline (without static) is allowed.
+
    * - `Rule 8.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_12.c>`_
      - Required
      - Within an enumerator list the value of an implicitly-specified
        enumeration constant shall be unique
      -
+
+   * - `Rule 9.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_01.c>`_
+     - Mandatory
+     - The value of an object with automatic storage duration shall not
+       be read before it has been set
+     - Rule clarification: do not use variables before they are
+       initialized. An explicit initializer is not necessarily required.
+       Try reducing the scope of the variable. If an explicit
+       initializer is added, consider initializing the variable to a
+       poison value.
+
+   * - `Rule 9.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_02.c>`_
+     - Required
+     - The initializer for an aggregate or union shall be enclosed in
+       braces
+     -
+
+   * - `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
+     - Mandatory
+     - The operand of the sizeof operator shall not contain any
+       expression which has potential side effects
+     -
+
+   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
+     - Required
+     - A loop counter shall not have essentially floating type
+     -
+
+   * - `Rule 16.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_07.c>`_
+     - Required
+     - A switch-expression shall not have essentially Boolean type
+     -
+
+   * - `Rule 17.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_03.c>`_
+     - Mandatory
+     - A function shall not be declared implicitly
+     -
+
+   * - `Rule 17.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_04.c>`_
+     - Mandatory
+     - All exit paths from a function with non-void return type shall
+       have an explicit return statement with an expression
+     -
+
+   * - `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
+     - Required
+     - Expressions resulting from the expansion of macro parameters
+       shall be enclosed in parentheses
+     -
+
+   * - `Rule 20.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_13.c>`_
+     - Required
+     - A line whose first token is # shall be a valid preprocessing
+       directive
+     -
+
+   * - `Rule 20.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_14.c>`_
+     - Required
+     - All #else #elif and #endif preprocessor directives shall reside
+       in the same file as the #if #ifdef or #ifndef directive to which
+       they are related
+     -


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 01:36:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 01:36:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345707.571380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzTZ5-00066t-Rs; Fri, 10 Jun 2022 01:35:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345707.571380; Fri, 10 Jun 2022 01:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzTZ5-00066m-OI; Fri, 10 Jun 2022 01:35:55 +0000
Received: by outflank-mailman (input) for mailman id 345707;
 Fri, 10 Jun 2022 01:35:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzTZ5-00066c-0s; Fri, 10 Jun 2022 01:35:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzTZ4-0003yp-SW; Fri, 10 Jun 2022 01:35:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzTZ4-0001N5-BU; Fri, 10 Jun 2022 01:35:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzTZ4-00056r-B2; Fri, 10 Jun 2022 01:35:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rCR3E/VNSjIGSCh0U+KoCvKozTm7/X3iqF4JtJdTIng=; b=tmON/qempogdZpLPj29+vFuMH7
	reqq7IzGmBCqwh1UXpUNVlu8EP5Z3UQ0ARmfnURpYdSxhqrwKHdpXE+AwyHCFa1zqYFmg8+ylhKoK
	vl6QVXqVkpXjGmfCmoiKdgAku7IQJpFezUH4Xe/DMBoyl4yR1CvgU4BTEhk7NC1EMogk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170897: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f3185c165d28901c3222becfc8be547263c53745
X-Osstest-Versions-That:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 01:35:54 +0000

flight 170897 xen-unstable real [real]
flight 170907 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170897/
http://logs.test-lab.xenproject.org/osstest/logs/170907/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170890
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170890

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 170907-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170890
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170890
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170890
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170890
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170890
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170890
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170890
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170890
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170890
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170890
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170890
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170890
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  f3185c165d28901c3222becfc8be547263c53745
baseline version:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3

Last test of basis   170890  2022-06-08 20:37:45 Z    1 days
Testing same since   170897  2022-06-09 08:20:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f3185c165d28901c3222becfc8be547263c53745
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 8 17:03:32 2022 +0200

    IOMMU/x86: perform PV Dom0 mappings in batches
    
    For large page mappings to be easily usable (i.e. in particular without
    un-shattering of smaller page mappings) and for mapping operations to
    then also be more efficient, pass batches of Dom0 memory to iommu_map().
    In dom0_construct_pv() and its helpers (covering strict mode) this
    additionally requires establishing the type of those pages (albeit with
    zero type references).
    
    The earlier establishing of PGT_writable_page | PGT_validated requires
    the existing places where this gets done (through get_page_and_type())
    to be updated: For pages which actually have a mapping, the type
    refcount needs to be 1.
    
    There is actually a related bug that gets fixed here as a side effect:
    Typically the last L1 table would get marked as such only after
    get_page_and_type(..., PGT_writable_page). While this is fine as far as
    refcounting goes, the page did remain mapped in the IOMMU in this case
    (when "iommu=dom0-strict").
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 7158e80c887d8b451c8525b7fe32049742814e69
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 8 17:02:19 2022 +0200

    IOMMU/x86: restrict IO-APIC mappings for PV Dom0
    
    While already the case for PVH, there's no reason to treat PV
    differently here, though of course the addresses get taken from another
    source in this case. Except that, to match CPU side mappings, by default
    we permit r/o ones. This then also means we now deal consistently with
    IO-APICs whose MMIO is or is not covered by E820 reserved regions.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 28e13c7f4382f5dce6b2fb2ccef2098f22c04694
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Jun 8 17:00:29 2022 +0200

    build: xen/include: use if_changed
    
    Use "define" for the headers*_chk commands. That allows us to keep
    writing "#include" in the Makefile without having to replace it with
    "$(pound)include", which would make the command line's purpose a bit
    less obvious.
    
    Adding several .PRECIOUS, as without them `make` deletes the
    intermediate targets. This is an issue because the macro $(if_changed,)
    checks whether the target exists in order to decide whether to
    recreate it.
    
    Removing the call to `mkdir` from the commands. Those aren't needed
    anymore because a rune in Rules.mk creates the directory for each
    $(targets).
    
    Remove "export PYTHON" as it is already exported.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit be464973e4565fd9b4999a6eb9db9f469616f07b
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Wed Jun 8 16:59:55 2022 +0200

    tools/libxl: optimize domain creation skipping domain cpupool move
    
    Commit 92ea9c54fc81 ("arm/dom0less: assign dom0less guests to cpupools")
    introduced a way to start a domain directly on a certain cpupool,
    adding a "cpupool_id" member to struct xen_domctl_createdomain.
    
    This was done to be able to start dom0less guests in pools other than
    cpupool0, but the toolstack benefits as well: it can now use the struct
    member directly instead of creating the guest in cpupool0 and then
    moving it to the target cpupool.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 04:29:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 04:29:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345695.571400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzWGd-0001zl-8B; Fri, 10 Jun 2022 04:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345695.571400; Fri, 10 Jun 2022 04:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzWGd-0001ze-5H; Fri, 10 Jun 2022 04:29:03 +0000
Received: by outflank-mailman (input) for mailman id 345695;
 Thu, 09 Jun 2022 23:45:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ErD5=WQ=linux.intel.com=jacob.jun.pan@srs-se1.protection.inumbo.net>)
 id 1nzRqC-0001nr-9W
 for xen-devel@lists.xenproject.org; Thu, 09 Jun 2022 23:45:29 +0000
Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 393e3350-e84e-11ec-8b38-e96605d6a9a5;
 Fri, 10 Jun 2022 01:45:23 +0200 (CEST)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Jun 2022 16:45:19 -0700
Received: from jacob-builder.jf.intel.com (HELO jacob-builder) ([10.7.198.157])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Jun 2022 16:45:19 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 393e3350-e84e-11ec-8b38-e96605d6a9a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1654818323; x=1686354323;
  h=date:from:to:cc:subject:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=/qix89+4Yux2UyS3QMBWmKLCne+VvkXES21cCGFIhLs=;
  b=Xj4Fc0OJVYzuUeZrcVGf3jxWS1bHTpTkbcTYITPk4zcyBdoQlDuGPsIO
   n2AZde1Tic6vi7AzizkwDmFb/9sQ57cUW+ZkK6+W0vllSRyw7VJxIHE+s
   +640i10/b19rTIEWHlrIV+wDFy8mFgLMwbo889os9WkmNhmlEO4apGPO3
   7+C68gosy55h5dwYzXjiZqQSxM8ZUpcw9PhuBPbWCW4lRd0ttaVQiiaU+
   x7FgtviR9yIoJF4YtkJz5GxPwpr5DgCEDdWKzec0itfRYO+J1E6LmXk9Y
   zm8nUDmuRxlFs2SD7Syq3Lng0t3SuH0jAWW8Ryz90PFoCeHD1u3SB+xzJ
   w==;
X-IronPort-AV: E=McAfee;i="6400,9594,10373"; a="339212527"
X-IronPort-AV: E=Sophos;i="5.91,288,1647327600"; 
   d="scan'208";a="339212527"
X-IronPort-AV: E=Sophos;i="5.91,288,1647327600"; 
   d="scan'208";a="610480359"
Date: Thu, 9 Jun 2022 16:49:21 -0700
From: Jacob Pan <jacob.jun.pan@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
 vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
 linus.walleij@linaro.org, shawnguo@kernel.org, Sascha Hauer
 <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com,
 linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org,
 catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
 bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
 geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
 tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
 James.Bottomley@HansenPartnership.com, deller@gmx.de, mpe@ellerman.id.au,
 benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com,
 palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
 davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
 jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
 srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
 rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
 gregkh@linuxfoundation.org, mturquette@baylibre.com, sboyd@kernel.org,
 daniel.lezcano@linaro.org, lpieralisi@kernel.org, sudeep.holla@arm.com,
 agross@kernel.org, bjorn.andersson@linaro.org, anup@brainfault.org,
 thierry.reding@gmail.com, jonathanh@nvidia.com, Arnd Bergmann
 <arnd@arndb.de>, yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
 senozhatsky@chromium.org, john.ogness@linutronix.de, paulmck@kernel.org,
 frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
 joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
 bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org, linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org, linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org, linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org, rcu@vger.kernel.org,
 jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
Message-ID: <20220609164921.5e61711d@jacob-builder>
In-Reply-To: <20220608144516.172460444@infradead.org>
References: <20220608142723.103523089@infradead.org>
	<20220608144516.172460444@infradead.org>
Organization: OTC
X-Mailer: Claws Mail 3.17.5 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Hi Peter,

On Wed, 08 Jun 2022 16:27:27 +0200, Peter Zijlstra <peterz@infradead.org>
wrote:

> Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> Xeons") wrecked intel_idle in two ways:
> 
>  - must not have tracing in idle functions
>  - must return with IRQs disabled
> 
> Additionally, it added a branch for no good reason.
> 
> Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  drivers/idle/intel_idle.c |   48 +++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 37 insertions(+), 11 deletions(-)
> 
> --- a/drivers/idle/intel_idle.c
> +++ b/drivers/idle/intel_idle.c
> @@ -129,21 +137,37 @@ static unsigned int mwait_substates __in
>   *
>   * Must be called under local_irq_disable().
>   */
nit: this comment is no longer true, right?

> +
> -static __cpuidle int intel_idle(struct cpuidle_device *dev,
> -				struct cpuidle_driver *drv, int index)
> +static __always_inline int __intel_idle(struct cpuidle_device *dev,
> +					struct cpuidle_driver *drv, int index)
>  {
>  	struct cpuidle_state *state = &drv->states[index];
>  	unsigned long eax = flg2MWAIT(state->flags);
>  	unsigned long ecx = 1; /* break on interrupt flag */
>  
> -	if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
> -		local_irq_enable();
> -
>  	mwait_idle_with_hints(eax, ecx);
>  
>  	return index;
>  }
>  
> +static __cpuidle int intel_idle(struct cpuidle_device *dev,
> +				struct cpuidle_driver *drv, int index)
> +{
> +	return __intel_idle(dev, drv, index);
> +}
> +
> +static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
> +				    struct cpuidle_driver *drv, int index)
> +{
> +	int ret;
> +
> +	raw_local_irq_enable();
> +	ret = __intel_idle(dev, drv, index);
> +	raw_local_irq_disable();
> +
> +	return ret;
> +}
> +
>  /**
>   * intel_idle_s2idle - Ask the processor to enter the given idle state.
>   * @dev: cpuidle device of the target CPU.
> @@ -1801,6 +1824,9 @@ static void __init intel_idle_init_cstat
>  		/* Structure copy. */
>  		drv->states[drv->state_count] = cpuidle_state_table[cstate];
> 
> +		if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE)
> +			drv->states[drv->state_count].enter = intel_idle_irq;
> +
>  		if ((disabled_states_mask & BIT(drv->state_count)) ||
>  		    ((icpu->use_acpi || force_use_acpi) &&
>  		     intel_idle_off_by_default(mwait_hint) &&
> 
> 


Thanks,

Jacob


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 04:49:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 04:49:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345754.571411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzWZy-00053Z-3t; Fri, 10 Jun 2022 04:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345754.571411; Fri, 10 Jun 2022 04:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzWZx-00053S-V7; Fri, 10 Jun 2022 04:49:01 +0000
Received: by outflank-mailman (input) for mailman id 345754;
 Fri, 10 Jun 2022 04:49:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FoyK=WR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1nzWZw-00053M-N2
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 04:49:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a43afb82-e878-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 06:48:59 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id BD6661FDA6;
 Fri, 10 Jun 2022 04:48:58 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 96DBB139ED;
 Fri, 10 Jun 2022 04:48:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KAqTIzrNomKDAgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 10 Jun 2022 04:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a43afb82-e878-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1654836538; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=ehPBDIeGo/IVyVU/00SU2xxg0fPE5snMj1Ycn6dtS/k=;
	b=R9D4hN0hV9wkzyPOFBVw6H33koGOOe8nQL8by+vWmodVHt1cwdnqwKgJMvBPtpyMt2weuH
	Tmrxam4DoC651QO5Z7jHZfyoqmIA+bd7iy5UW0w5FLz/awX12Toc5X3HC+ZZ37Gx2oBQ1u
	zQ0ABu39T1ar2udtXkSd+2lSP7WX/2I=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v5.19-rc2
Date: Fri, 10 Jun 2022 06:48:58 +0200
Message-Id: <20220610044858.30822-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc2-tag

xen: branch for v5.19-rc2

It contains:
- a small cleanup removing "export" of an __init function
- a small series adding a new infrastructure for platform flags
- a series adding generic virtio support for Xen guests (frontend side)

Thanks.

Juergen

 .../devicetree/bindings/iommu/xen,grant-dma.yaml   |  39 +++
 MAINTAINERS                                        |   8 +
 arch/arm/include/asm/xen/xen-ops.h                 |   2 +
 arch/arm/mm/dma-mapping.c                          |   7 +-
 arch/arm/xen/enlighten.c                           |   2 +
 arch/arm64/include/asm/xen/xen-ops.h               |   2 +
 arch/arm64/mm/dma-mapping.c                        |   7 +-
 arch/s390/Kconfig                                  |   1 -
 arch/s390/mm/init.c                                |  13 +-
 arch/x86/Kconfig                                   |   1 -
 arch/x86/mm/mem_encrypt.c                          |   7 -
 arch/x86/mm/mem_encrypt_amd.c                      |   4 +
 arch/x86/xen/enlighten_hvm.c                       |   2 +
 arch/x86/xen/enlighten_pv.c                        |   2 +
 drivers/virtio/Kconfig                             |   6 -
 drivers/virtio/virtio.c                            |   5 +-
 drivers/xen/Kconfig                                |  20 ++
 drivers/xen/Makefile                               |   2 +
 drivers/xen/grant-dma-iommu.c                      |  78 +++++
 drivers/xen/grant-dma-ops.c                        | 346 +++++++++++++++++++++
 drivers/xen/grant-table.c                          | 251 ++++++++++++---
 drivers/xen/xlate_mmu.c                            |   1 -
 include/asm-generic/Kbuild                         |   1 +
 include/asm-generic/platform-feature.h             |   8 +
 include/linux/platform-feature.h                   |  19 ++
 include/linux/virtio_config.h                      |   9 -
 include/xen/arm/xen-ops.h                          |  18 ++
 include/xen/grant_table.h                          |   4 +
 include/xen/xen-ops.h                              |  13 +
 include/xen/xen.h                                  |   8 +
 kernel/Makefile                                    |   2 +-
 kernel/platform-feature.c                          |  27 ++
 32 files changed, 830 insertions(+), 85 deletions(-)

Juergen Gross (5):
      kernel: add platform_has() infrastructure
      virtio: replace arch_has_restricted_virtio_memory_access()
      xen/grants: support allocating consecutive grants
      xen/grant-dma-ops: Add option to restrict memory access under Xen
      xen/virtio: Enable restricted memory access using Xen grant mappings

Masahiro Yamada (1):
      xen: unexport __init-annotated xen_xlate_map_ballooned_pages()

Oleksandr Tyshchenko (5):
      arm/xen: Introduce xen_setup_dma_ops()
      dt-bindings: Add xen,grant-dma IOMMU description for xen-grant DMA ops
      xen/grant-dma-iommu: Introduce stub IOMMU driver
      xen/grant-dma-ops: Retrieve the ID of backend's domain for DT devices
      arm/xen: Assign xen-grant DMA ops for xen-grant DMA devices


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:20:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345766.571428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzX4F-0001h7-L5; Fri, 10 Jun 2022 05:20:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345766.571428; Fri, 10 Jun 2022 05:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzX4F-0001h0-I3; Fri, 10 Jun 2022 05:20:19 +0000
Received: by outflank-mailman (input) for mailman id 345766;
 Fri, 10 Jun 2022 05:20:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX4E-0001gq-Nt; Fri, 10 Jun 2022 05:20:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX4E-0000In-Gu; Fri, 10 Jun 2022 05:20:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX4D-0002ai-Uz; Fri, 10 Jun 2022 05:20:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX4D-0000Jj-UV; Fri, 10 Jun 2022 05:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=73WnD29Ex1VfqElNtEK+1qEitWNNw9G3K2MFQmk09dU=; b=FoNqMX8KEgkHKp8E2BfWY0kzVQ
	qV6kq4XLosxKRlyWczRmDFE1G4Gt9BGL7TJISTjPzJrCNrIhohitC166ryKFyQ+f95NsqnkTK3SZm
	mLuJJ0JkObeSAd90Z2QRMvBnsSBsW6H8NKcpM49jbVjjebjqwVtbZiRHTSIcT+YIyBVA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 170901: regressions - FAIL
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-4.16-testing:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.16-testing:test-xtf-amd64-amd64-2:xtf/test-pv32pae-xsa-188:fail:regression
    xen-4.16-testing:test-xtf-amd64-amd64-2:leak-check/check:fail:regression
    xen-4.16-testing:test-armhf-armhf-xl-credit1:host-ping-check-xen:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-vhd:xen-boot:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:xen-boot:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
X-Osstest-Versions-That:
    xen=8e11ec8fbf6f933f8854f4bc54226653316903f2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 05:20:17 +0000

flight 170901 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170871
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 170871
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170871
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 170871
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170871
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 170871
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 170871
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170871
 test-xtf-amd64-amd64-2       87 xtf/test-pv32pae-xsa-188 fail REGR. vs. 170871
 test-xtf-amd64-amd64-2       88 leak-check/check         fail REGR. vs. 170871
 test-armhf-armhf-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 170871

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd        8 xen-boot                fail blocked in 170871
 test-amd64-i386-libvirt-raw   8 xen-boot                fail blocked in 170871
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170871
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170871
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170871
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170871
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
baseline version:
 xen                  8e11ec8fbf6f933f8854f4bc54226653316903f2

Last test of basis   170871  2022-06-07 12:36:55 Z    2 days
Testing same since   170901  2022-06-09 13:36:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       fail    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:29:38 2022 +0200

    x86/pv: Track and flush non-coherent mappings of RAM
    
    There are legitimate uses of WC mappings of RAM, e.g. for DMA buffers with
    devices that make non-coherent writes.  The Linux sound subsystem makes
    extensive use of this technique.
    
    For such use cases, the guest's DMA buffer is mapped and consistently used as
    WC, and Xen doesn't interact with the buffer.
    
    However, a mischievous guest can use WC mappings to deliberately create
    non-coherency between the cache and RAM, and use this to trick Xen into
    validating a pagetable which isn't actually safe.
    
    Allocate a new PGT_non_coherent to track the non-coherency of mappings.  Set
    it whenever a non-coherent writeable mapping is created.  If the page is used
    as anything other than PGT_writable_page, force a cache flush before
    validation.  Also force a cache flush before the page is returned to the heap.
    
    This is CVE-2022-26364, part of XSA-402.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c1c9cae3a9633054b177c5de21ad7268162b2f2c
    master date: 2022-06-09 14:23:37 +0200

commit c4815be949aae6583a9a22897beb96b095b4f1a2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:29:13 2022 +0200

    x86/amd: Work around CLFLUSH ordering on older parts
    
    On pre-CLFLUSHOPT AMD CPUs, CLFLUSH is weakly ordered with everything,
    including reads and writes to the address, and LFENCE/SFENCE instructions.
    
    This creates a multitude of problematic corner cases, laid out in the manual.
    Arrange to use MFENCE on both sides of the CLFLUSH to force proper ordering.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 062868a5a8b428b85db589fa9a6d6e43969ffeb9
    master date: 2022-06-09 14:23:07 +0200
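
A minimal user-space sketch of the workaround described above, using SSE2
compiler intrinsics (the function name is illustrative; Xen's actual helper
differs and uses its own fencing infrastructure):

```c
#include <emmintrin.h>   /* _mm_mfence(), _mm_clflush() -- SSE2 */

/* Flush one cache line with explicit ordering, for pre-CLFLUSHOPT AMD
 * parts where CLFLUSH alone is weakly ordered with everything,
 * including accesses to the line itself and LFENCE/SFENCE. */
static void flush_line_ordered(const volatile void *addr)
{
    _mm_mfence();                     /* Order older accesses before the flush. */
    _mm_clflush((const void *)addr);
    _mm_mfence();                     /* Order the flush before younger accesses. */
}
```

On CPUs with CLFLUSHOPT the heavyweight MFENCEs are unnecessary; this is
strictly the compatibility path for older parts.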

commit 8eafa2d871ae51d461256e4a14175e24df330c70
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:28:48 2022 +0200

    x86: Split cache_flush() out of cache_writeback()
    
    Subsequent changes will want a fully flushing version.
    
    Use the new helper rather than opencoding it in flush_area_local().  This
    resolves an outstanding issue where the conditional sfence is on the wrong
    side of the clflushopt loop.  clflushopt is ordered with respect to older
    stores, not to younger stores.
    
    Rename gnttab_cache_flush()'s helper to avoid colliding in name.
    grant_table.c can see the prototype from cache.h so the build fails
    otherwise.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 9a67ffee3371506e1cbfdfff5b90658d4828f6a2
    master date: 2022-06-09 14:22:38 +0200

commit 74193f4292d9cfc2874866e941d9939d8f33fcef
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:28:23 2022 +0200

    x86: Don't change the cacheability of the directmap
    
    Changeset 55f97f49b7ce ("x86: Change cache attributes of Xen 1:1 page mappings
    in response to guest mapping requests") attempted to keep the cacheability
    consistent between different mappings of the same page.
    
    The reason wasn't described in the changelog, but it is understood to be in
    regards to a concern over machine check exceptions, owing to errata when using
    mixed cacheabilities.  It did this primarily by updating Xen's mapping of the
    page in the direct map when the guest mapped a page with reduced cacheability.
    
    Unfortunately, the logic didn't actually prevent mixed cacheability from
    occurring:
     * A guest could map a page normally, and then map the same page with
       different cacheability; nothing prevented this.
     * The cacheability of the directmap was always latest-takes-precedence in
       terms of guest requests.
     * Grant-mapped frames with lesser cacheability didn't adjust the page's
       cacheattr settings.
     * The map_domain_page() function still unconditionally created WB mappings,
       irrespective of the page's cacheattr settings.
    
    Additionally, update_xen_mappings() had a bug where the alias calculation was
    wrong for mfns which were .init content, which should have been treated as
    fully guest pages, not Xen pages.
    
    Worse yet, the logic introduced a vulnerability whereby necessary
    pagetable/segdesc adjustments made by Xen in the validation logic could become
    non-coherent between the cache and main memory.  The CPU could subsequently
    operate on the stale value in the cache, rather than the safe value in main
    memory.
    
    The directmap contains primarily mappings of RAM.  PAT/MTRR conflict
    resolution is asymmetric, and generally for MTRR=WB ranges, PAT of lesser
    cacheability resolves to being coherent.  The special case is WC mappings,
    which are non-coherent against MTRR=WB regions (except for fully-coherent
    CPUs).
    
    Xen must not have any WC cacheability in the directmap, to prevent Xen's
    actions from creating non-coherency.  (Guest actions creating non-coherency are
    dealt with in subsequent patches.)  As all memory types for MTRR=WB ranges
    inter-operate coherently, leave Xen's directmap mappings as WB.
    
    Only PV guests with access to devices can use reduced-cacheability mappings to
    begin with, and they're trusted not to mount DoSs against the system anyway.
    
    Drop PGC_cacheattr_{base,mask} entirely, and the logic to manipulate them.
    Shift the later PGC_* constants up, to gain 3 extra bits in the main reference
    count.  Retain the check in get_page_from_l1e() for special_pages() because a
    guest has no business using reduced cacheability on these.
    
    This reverts changeset 55f97f49b7ce6c3520c555d19caac6cf3f9a5df0
    
    This is CVE-2022-26363, part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: ae09597da34aee6bc5b76475c5eea6994457e854
    master date: 2022-06-09 14:22:08 +0200

commit 9cfd796ae05421ded8e4f70b2c55352491cfa841
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:53 2022 +0200

    x86/page: Introduce _PAGE_* constants for memory types
    
    ... rather than opencoding the PAT/PCD/PWT attributes in __PAGE_HYPERVISOR_*
    constants.  These are going to be needed by forthcoming logic.
    
    No functional change.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1be8707c75bf4ba68447c74e1618b521dd432499
    master date: 2022-06-09 14:21:38 +0200

commit 8dab3f79b122e69cbcdebca72cdc14f004ee2193
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:37 2022 +0200

    x86/pv: Fix ABAC cmpxchg() race in _get_page_type()
    
    _get_page_type() suffers from a race condition where it incorrectly assumes
    that because 'x' was read and a subsequent cmpxchg() succeeds, the type
    cannot have changed in-between.  Consider:
    
    CPU A:
      1. Creates an L2e referencing pg
         `-> _get_page_type(pg, PGT_l1_page_table), sees count 0, type PGT_writable_page
      2.     Issues flush_tlb_mask()
    CPU B:
      3. Creates a writeable mapping of pg
         `-> _get_page_type(pg, PGT_writable_page), count increases to 1
      4. Writes into new mapping, creating a TLB entry for pg
      5. Removes the writeable mapping of pg
         `-> _put_page_type(pg), count goes back down to 0
    CPU A:
      6.     Issues cmpxchg(), setting count 1, type PGT_l1_page_table
    
    CPU B now has a writeable mapping to pg, which Xen believes is a pagetable and
    suitably protected (i.e. read-only).  The TLB flush in step 2 must be deferred
    until after the guest is prohibited from creating new writeable mappings,
    which is after step 6.
    
    Defer all safety actions until after the cmpxchg() has successfully taken the
    intended typeref, because that is what prevents concurrent users from using
    the old type.
    
    Also remove the early validation for writeable and shared pages.  This removes
    race conditions where one half of a parallel mapping attempt can return
    successfully before:
     * The IOMMU pagetables are in sync with the new page type
     * Writeable mappings to shared pages have been torn down
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 8cc5036bc385112a82f1faff27a0970e6440dfed
    master date: 2022-06-09 14:21:04 +0200

commit b152dfbc3ad71a788996440b18174d995c3bffc9
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:19 2022 +0200

    x86/pv: Clean up _get_page_type()
    
    Various fixes for clarity, ahead of making complicated changes.
    
     * Split the overflow check out of the if/else chain for type handling, as
       it's somewhat unrelated.
     * Comment the main if/else chain to explain what is going on.  Adjust one
       ASSERT() and state the bit layout for validate-locked and partial states.
     * Correct the comment about TLB flushing, as it's backwards.  The problem
       case is when writeable mappings are retained while a page becomes
       read-only, as that allows the guest to bypass Xen's safety checks for
       updates.
     * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
       valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
       all reads as explicitly volatile.  The only thing preventing the validated
       wait-loop being infinite is the compiler barrier hidden in cpu_relax().
     * Replace one page_get_owner(page) with the 'd' already in scope.
    
    No functional change.
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
    master date: 2022-06-09 14:20:36 +0200
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:24:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:24:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345778.571439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzX8I-0002PF-ET; Fri, 10 Jun 2022 05:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345778.571439; Fri, 10 Jun 2022 05:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzX8I-0002P8-A4; Fri, 10 Jun 2022 05:24:30 +0000
Received: by outflank-mailman (input) for mailman id 345778;
 Fri, 10 Jun 2022 05:24:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX8H-0002Oy-Hu; Fri, 10 Jun 2022 05:24:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX8H-0000NJ-FK; Fri, 10 Jun 2022 05:24:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX8H-0002ss-3T; Fri, 10 Jun 2022 05:24:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzX8H-0003rz-31; Fri, 10 Jun 2022 05:24:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j9Ub6tfday/VbGweocQvGhPoP6gaeHolRHIbUB+IXVU=; b=dzSHTCksB8tXMsJCOous/mgaJi
	iLzzdHxT/Q70d/lr+baEzfR9VTnSsq6Q5x/fk273X14p0UHZ4JvIIZ8070Wi4X7dZ2PguCvsvMIMB
	ue13jVr8p3eftur3MJBLQHNCNrDpG9D7mHkE0nbhy/+thXZCKXaIOGIKen1qX0zTf7lA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170902: FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d940eff4734bcb40b1a25f62d7cec5a396f994a
X-Osstest-Versions-That:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 05:24:29 +0000

flight 170902 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170902/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 170891

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 170891 pass in 170902
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail in 170891 pass in 170902
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 170891 pass in 170902
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 170891

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170884
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170884
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170884
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170884
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170884
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170884
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170884
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170884
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                6d940eff4734bcb40b1a25f62d7cec5a396f994a
baseline version:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5

Last test of basis   170884  2022-06-08 10:42:45 Z    1 days
Testing same since   170891  2022-06-09 01:39:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-rtds broken

Not pushing.

------------------------------------------------------------
commit 6d940eff4734bcb40b1a25f62d7cec5a396f994a
Merge: 9b1f588549 e37a0ef460
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jun 7 19:22:18 2022 -0700

    Merge tag 'pull-tpm-2022-06-07-1' of https://github.com/stefanberger/qemu-tpm into staging
    
    Merge tpm 2022/06/07 v1
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCAAdFiEEuBi5yt+QicLVzsZrda1lgCoLQhEFAmKf8HgACgkQda1lgCoL
    # QhHx8Qf/QB2z+0B1xKKn8NqrWbZ+FaVlnPu/3hX4kraCY5zAYV9e64kdWhuIKRbM
    # 74/KARGMpkme6Y8rUSK6mVeiY+ul+egfVMnKyfhsM1jhAQT/DzSlht/XZzbn3Mg+
    # FFXQBMqcvcNWH53q9zi9GJYqH4tcxUku3ejgodU4+SO2wB5S59pS/tD+i5H06Vy5
    # Iw1kW6I11gYhJGETxVgb6F2Jfyu6uPWFhIg7eN06XwNExFc45E8GjrpIs2rO78GN
    # OzMBjwAG+C+/PU+UZDOd5Zhq5qv+8DcvDQuPXyqksxPcFvouvLghQvQL/h7neMlM
    # jOwHS153ay0EAT/t2lZafsBwqKQxvQ==
    # =b9Qe
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Tue 07 Jun 2022 05:42:32 PM PDT
    # gpg:                using RSA key B818B9CADF9089C2D5CEC66B75AD65802A0B4211
    # gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: B818 B9CA DF90 89C2 D5CE  C66B 75AD 6580 2A0B 4211
    
    * tag 'pull-tpm-2022-06-07-1' of https://github.com/stefanberger/qemu-tpm:
      tpm_crb: mark command buffer as dirty on request completion
      hw/tpm/tpm_tis_common.c: Assert that locty is in range
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit e37a0ef4605e5d2041785ff3fc89ca6021faf7a0
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Mon Apr 11 15:47:49 2022 +0100

    tpm_crb: mark command buffer as dirty on request completion
    
    At the moment, there doesn't seem to be any way to know that QEMU
    made modifications to the command buffer. This is potentially an issue
    on Xen while migrating a guest, as modifications to the buffer after
    the migration has started could be ignored and not transferred to the
    destination.
    
    Mark the memory region of the command buffer as dirty once a request
    is completed.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
    Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
    Message-id: 20220411144749.47185-1-anthony.perard@citrix.com

commit 4d84bb6c8b42cc781a02e1ac6648875966abc877
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Wed May 25 08:59:04 2022 -0400

    hw/tpm/tpm_tis_common.c: Assert that locty is in range
    
    In tpm_tis_mmio_read(), tpm_tis_mmio_write() and
    tpm_tis_dump_state(), we calculate a locality index with
    tpm_tis_locality_from_addr() and then use it as an index into the
    s->loc[] array.  In all these cases, the array index can't overflow
    because the MemoryRegion is sized to be TPM_TIS_NUM_LOCALITIES <<
    TPM_TIS_LOCALITY_SHIFT bytes.  However, Coverity can't see that, and
    it complains (CID 1487138, 1487180, 1487188, 1487198, 1487240).
    
    Add an assertion to tpm_tis_locality_from_addr() that the calculated
    locality index is valid, which will help Coverity and also catch any
    potential future bug where the MemoryRegion isn't sized exactly.
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
    Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
    Message-id: 20220525125904.483075-1-stefanb@linux.ibm.com


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345790.571452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaX-0006c4-Ov; Fri, 10 Jun 2022 05:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345790.571452; Fri, 10 Jun 2022 05:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaX-0006bx-MH; Fri, 10 Jun 2022 05:53:41 +0000
Received: by outflank-mailman (input) for mailman id 345790;
 Fri, 10 Jun 2022 05:53:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXaW-0006br-D7
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:40 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe08::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac076644-e881-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 07:53:38 +0200 (CEST)
Received: from AS8PR07CA0006.eurprd07.prod.outlook.com (2603:10a6:20b:451::27)
 by HE1PR0801MB1801.eurprd08.prod.outlook.com (2603:10a6:3:7f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:34 +0000
Received: from VE1EUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:451:cafe::40) by AS8PR07CA0006.outlook.office365.com
 (2603:10a6:20b:451::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.6 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT038.mail.protection.outlook.com (10.152.19.112) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:33 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Fri, 10 Jun 2022 05:53:33 +0000
Received: from 4d8bd2406cca.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9E9EAD5D-B003-4483-B4F1-2C988237D92A.1; 
 Fri, 10 Jun 2022 05:53:26 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4d8bd2406cca.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:26 +0000
Received: from AS9PR06CA0541.eurprd06.prod.outlook.com (2603:10a6:20b:485::13)
 by VE1PR08MB5758.eurprd08.prod.outlook.com (2603:10a6:800:1a0::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 05:53:23 +0000
Received: from AM5EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:485:cafe::3f) by AS9PR06CA0541.outlook.office365.com
 (2603:10a6:20b:485::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:23 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT030.mail.protection.outlook.com (10.152.16.117) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:22 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:21 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac076644-e881-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=OEa/BZaXGbLaZNjmpjNHaAMZqrvVjyMfucfnqcJtrCOEoF0UdDEVvsCtmkYumvqaDHQo5NhTvLAv1M1Q0Em9kVyZrYL/DleVGGAOHk5/HYfYRXHRdWjHkLQ74E9hhG2S9elAoH0LjJ3YYdF+JmKIs309rXjokWwEPGJk1ZHZAmL4x0QcQmTQoiJtiilOR5wsywacTToXwrT9T1n5brPdIGGR3nZFLJ0FJmLLDsbetAfRbHwLyr8MBLVLLzgFnkMxS0+6E5f18BEMF0O4pR//J2nDTT2fkhKDLto1lSCw4U7/IWk+eLQOyVG/rDvGTzIDVX7FfMFkKut/PM0t6Gl8Mg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9GqWYxjtFiGnz9l9IDUqaGsYEe/uijkzfmw4+eeG2J8=;
 b=NtL6gi7H35TCXgBCjsISFn7QCZXQ1kYI0XwErZFGbT8BKzq6o9iKl4yxivX1ZtrgekaJhXcHDg1wROyIcY1dMumSg0mnzX+/kT/oGHkxNvEkZBairHxCnzGJcudt+GWKzK99oOOF4exktVWcjZRF9EuhulFKW77AopxbP8QcBn6BH4NnsTLfc372PpOGfNRrxnmYNenne5osJ8Kf2Q5/+dRCRUd4egIqpwjF8rM+tMj92L2VR+T/Jb8RH/qFDNkgUZrAcZnO3RdOCZUWid6Vo0kpiHSd5uyQe8DfwQ/GRpXQdAEZk9F623eF/RH55dHGbr1W+1kwwGcvDdev73c0WQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9GqWYxjtFiGnz9l9IDUqaGsYEe/uijkzfmw4+eeG2J8=;
 b=H7kDaQnHuelvkONYGWxw/xGuuOd66aMCchzS/JQEvCIzMGscefQB+jR+v1hg/6cqG0SMG3wlpLovXW3o9mbf0U76q13fyVrT1kqwuB3VGkEcE9dcZ+sFR2ZzKLN8znXl3ytszVwB6APxaQCivbfdVofHq36XHUk08FBceE4akPk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: da7725255b4d626e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jdynS8saXlbP1IlhBhosd01ghIVnt5ETRt1D11PDb/TmGq6ae+qT0QDug38l0mu9G4FGdiNIInMeGRQC/uRbuRjGZLWn6YbvfO+OsRBnbo/VA+IUdGHnhLHxqNr524jzsLKP/aSY/bWDsu//Dc1GssPceJS6shg/LwZ3sieGg8evXFrk5GHSthQHXfARKXScwyaXg0Musr0ptN0P2nDcyiAVjnP/07RzUJIlQO91AcN9CCvKawkeg5iAnV8/DqSH5thuxgRzeXa0d7x4aDaq2rM48eGarQ5QE4lwPPGoT8Fcu7yEBGnvLq8rJ4WM658u2YCRX3KXFJJqYq03vKYmYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9GqWYxjtFiGnz9l9IDUqaGsYEe/uijkzfmw4+eeG2J8=;
 b=lrdY3Scj63RpTgQTQnjcSrSbSFUHaMG7r8fMuftJuuvdu/m7dShtVCyEqQEY4uYHj82oo8dcC15ZQtm2Qo9sJ7pYMhSKevN1VsKLy5LoR4LYNUkWcbUIV9SLW7rbzUCM7O/4X/cgQbtyLDUcvEtDb7i5JYMx/XOKz15DTdD8QbHWW76ooK8ddOxfpIiwjKXJ0MSOI5Ygewj4nWETbbnE834DGfRHWz4NuBSs/RTGA8zcTca07CDmn5BTy+yTpgsy3mECU5Zw+dR40qOW59JD7PNJl9VoOGEN0/ykdtD99ZzGraMhUrIdtUA5jC0PMBDgDGwMbR5wnyHC6i3Np1cspA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9GqWYxjtFiGnz9l9IDUqaGsYEe/uijkzfmw4+eeG2J8=;
 b=H7kDaQnHuelvkONYGWxw/xGuuOd66aMCchzS/JQEvCIzMGscefQB+jR+v1hg/6cqG0SMG3wlpLovXW3o9mbf0U76q13fyVrT1kqwuB3VGkEcE9dcZ+sFR2ZzKLN8znXl3ytszVwB6APxaQCivbfdVofHq36XHUk08FBceE4akPk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v6 0/8] Device tree based NUMA support for Arm - Part#1 
Date: Fri, 10 Jun 2022 13:53:08 +0800
Message-ID: <20220610055316.2197571-1-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 654e81c0-f079-4865-cf77-08da4aa58dc3
X-MS-TrafficTypeDiagnostic:
	VE1PR08MB5758:EE_|VE1EUR03FT038:EE_|HE1PR0801MB1801:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB1801EEB56418BEB2C252A16F9EA69@HE1PR0801MB1801.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5uBIWaJohpCQqQ/wBpLiLxBfKVmL2N/SMPv9Z+Buy5VEy7zJTxRxlYZOYDh93Fbdi49HqtfWhge0JxCQzSsWKruNk53LNOCrkGbXA8c6YvnlElGIcxvSo/QLp5qvpunRmUDwfIoYIiBrcxyGTyuTedE4GhGziTCRbrytdA9aTnaiGY0jBO40kWOJMY4kqoAGWWBr8B1EJ+9KqFJ0B8ebozdgnZ2r2aPREKoyLYZQ9ATrfZBl7xjtMdOA/DOO0WDMKnVrtSgqHAG1F5VaUKIxaqRUrO14WkXJuCIJB0SSbuOZSt2i+R8I1MBusp1vc4fiGsFNKGMgdLEyvf/fVqhJ6RFfQAU+kXnU+v7KLOe/AhWgRImuGmpsgfV8C30oWQ+tz+qSKDCJHY+l8qO/dapqsujDQS3T25iyfAEy1+56OlTk/n3pfBp07qVE15rj5cUc91KGkeDTjCb78z+5EDsymRz8qU/NJCTnzBpN/MaHGulcnrFtvEH5MD+mwVg8u7FVXbbGCLjwDOSs/+kmXaXrIkL1s/SGeN0KIAEKuSCZTQa5piyIMcIxIz9s6Xxu4nb7tdeoA+IIggFJXTlaOXjf9dEhs1oDziol5Wo0mIV3fk3U/ov9Axd79dm/xN/z99sIQBrHL317Fc6GsfirMrZZbPFzj612h5u4heUrRp4xs7ArZ4Yj1O+WhgCutOlZBQmKsududutPFROcXwcRERUuJVC7Y6OZH+P2JYkWljLM3PRvUmyLWln7iyp2semNEt8ZWNULV5wrh7iM8HouRQsoMg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(40460700003)(6666004)(6916009)(47076005)(4743002)(26005)(2616005)(7696005)(82310400005)(86362001)(1076003)(36756003)(316002)(54906003)(36860700001)(4326008)(70206006)(8676002)(2906002)(44832011)(70586007)(508600001)(426003)(186003)(336012)(83380400001)(81166007)(8936002)(356005)(5660300002)(17413003)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5758
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	46ff949d-8338-417e-ba4e-08da4aa58772
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9jpg0ZRzCEZui8NKyIJdHsTQrLsVWANXgFSAmNyNAryAhJPIyCKzNiaky8TI1lBfPcHSJllXQPOap2+e3R0taBpNPpy5eMmV+ylyUNES20/Vyi7a2R19QDZ1sksWdxLYq68z+Qs152U1iwMUHR+rbJLcdf8IzKTqck9TtRd0sAnoZKt/crQcVPhn1O2eGSAfa9IdfXlufevMKEeQY5XuNrO8tHh5IELOW9F+7FEVbXf+k5RQ+pAP6XmVxc2niPOE0n0r09fIGaTI1RKWKLxv3o3hoN8+DAPs/NqiH0wYvGoMRO+A6cD6Zn98TTvhTPr95z21CUieytFiR1j3HeP6qrehs26aNe4n3Te5gmnULDML3mo/pXLVMQC0lXK+eJ/5UwGv5t0/4pF8SmtE+UECBTI6VpkU8Jvg88pDmWjt5Lj0HGSmyhnSY9f/vEMyH5kzgq8mn7yE2eGjm6Hrezw7P0v4zORvJGqpFiRe50f87QRXH3hP15bJJ0SaFWeVyB+mCCntvIEv1MYbkw7uQm4L15pNyIXcf4lmE1DiDy/1YdJM5LdZfYmnj3uSN1bbHZkAFIMhx/0XnpOHJ5OOs1O4wnjdGo064h8un/KdIAdkRNXF/oNZC8a724MeMqTRJGtDE0P68QI3psJ/sjQppWF2FcJAtHTVRywSsYoRDnXi/b+e9yxlEmswCfAzjiZ/aFi5YvpLCGT21XvjKIaSv5Xt5SBnYuzfRjUWiMqb9dB3TbU=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(40470700004)(83380400001)(508600001)(316002)(54906003)(2906002)(70206006)(70586007)(40460700003)(6916009)(26005)(7696005)(86362001)(44832011)(5660300002)(6666004)(36756003)(107886003)(4743002)(82310400005)(47076005)(36860700001)(8936002)(81166007)(186003)(336012)(8676002)(426003)(4326008)(2616005)(1076003)(21314003)(17413003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:33.3732
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 654e81c0-f079-4865-cf77-08da4aa58dc3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1801

(The Arm device tree based NUMA support patch set contains 35
patches. To make review easier, I have split them into 3 parts:
1. Preparation: I have re-sorted the patch series and moved
   independent patches to the head of the series.
2. Move generically usable code from x86 to common code.
3. Add new code to support Arm.

This series contains only the first part of the patches.)

Xen's memory allocation and scheduler modules are NUMA-aware,
but only x86 has implemented the architecture APIs to support
NUMA. Arm has been providing a set of fake architecture APIs
to stay compatible with the NUMA-aware memory allocator and
scheduler.

Arm systems worked well as single-node NUMA systems with these
fake APIs, because multi-node NUMA systems did not exist on
Arm. In recent years, however, more and more Arm devices have
shipped with multiple NUMA nodes.

So now we have a new problem: when Xen runs on these Arm
devices, it still treats them as single-node SMP systems. The
NUMA affinity capability of Xen's memory allocator and
scheduler becomes meaningless, because they rely on input data
that does not reflect the real NUMA layout.

Xen still assumes the access time for all memory is the same
for all CPUs. However, Xen may allocate memory to a VM from
different NUMA nodes with different access speeds. This
difference can be amplified by workloads inside the VM, causing
performance instability and timeouts.

So in this patch series, we implement a set of NUMA APIs that
use the device tree to describe the NUMA layout. We reuse most
of the x86 NUMA code to create and maintain the mapping between
memory and CPUs, and to build the distance matrix between any
two NUMA nodes. Except for ACPI and some x86-specific code, we
have moved the rest to common code. In a later stage, when we
implement ACPI-based NUMA for Arm64, we may move the ACPI NUMA
code to common as well, but for now we keep it x86-only.

This patch series has been tested and boots well on one Arm64
NUMA machine and one HPE x86 NUMA machine.

---
Part1 v5->v6:
1. Use comma to replace dash for "[start, end]".
Part1 v4->v5:
1. Remove "nd->end == end && nd->start == start" from
   conflicting_memblks.
2. Use case NO_CONFLICT instead of "default".
3. Correct wrong "node" to "pxm" in print message.
4. Remove unnecessary "else" to reduce the indent depth.
5. Convert all ranges to proper mathematical interval
   representation.
6. Fix code-style comments.
Part1 v3->v4:
1. Add indentation so that ln and test are aligned in the
   common EFI makefile.
2. Drop "ERR" prefix for node conflict check enumeration,
   and remove init value.
3. Use "switch case" for enumeration, and add "default:"
4. Use "PXM" in log messages.
5. Use unsigned int for node memory block id.
6. Fix some code-style comments.
7. Use "nd->end" in node range expansion check.
Part1 v2->v3:
1. Rework EFI stub patch:
   1.1. Add an existing-file check: if a regular stub file
        exists, the link to the common stub file is ignored.
   1.2. Keep stub.c in x86/efi to include common/efi/stub.c
   1.3. Restore efi_compat_xxx stub functions to x86/efi.c.
        Other architectures will not use efi_compat_xxx.
   1.4. Remove ARM_EFI dependency from ARM_64.
   1.5. Add comment for adding stub.o to EFIOBJ-y.
   1.6. Merge patch#2 and patch#3 to one patch.
2. Rename arch_have_default_dmazone to arch_want_default_dmazone
3. Use uint64_t for size in acpi_scan_nodes to keep it
   consistent with numa_emulation.
4. Merge the interleaves checking code from a separate function
   to conflicting_memblks.
5. Use INFO level for the log message about nodes without memory.
6. Move "xen/x86: Use ASSERT instead of VIRTUAL_BUG_ON for
   phys_to_nid" to part#2.
Part1 v1->v2:
1. Move independent patches from later to early of this series.
2. Drop the copy of EFI stub.c from Arm. Share the common code
   of the x86 EFI stub with Arm.
3. Use CONFIG_ARM_EFI to replace CONFIG_EFI and remove help text
   and make CONFIG_ARM_EFI invisible.
4. Use ASSERT to replace VIRTUAL_BUG_ON in phys_to_nid.
5. Move MAX_NUMNODES from xen/numa.h to asm/numa.h for x86.
6. Extend the description of Arm's workaround for reserved DMA
   allocations to avoid repeating the same discussion every
   time for arch_have_default_dmazone.
7. Update commit messages.

Wei Chen (8):
  xen: reuse x86 EFI stub functions for Arm
  xen/arm: Keep memory nodes in device tree when Xen boots from EFI
  xen: introduce an arch helper for default dma zone status
  xen: decouple NUMA from ACPI in Kconfig
  xen/arm: use !CONFIG_NUMA to keep fake NUMA API
  xen/x86: use paddr_t for addresses in NUMA node structure
  xen/x86: add detection of memory interleaves for different nodes
  xen/x86: use INFO level for node's without memory log message

 xen/arch/arm/Kconfig              |   4 +
 xen/arch/arm/Makefile             |   2 +-
 xen/arch/arm/bootfdt.c            |   8 +-
 xen/arch/arm/efi/Makefile         |   8 ++
 xen/arch/arm/efi/efi-boot.h       |  25 -----
 xen/arch/arm/include/asm/numa.h   |   6 ++
 xen/arch/x86/Kconfig              |   2 +-
 xen/arch/x86/efi/stub.c           |  32 +-----
 xen/arch/x86/include/asm/config.h |   1 -
 xen/arch/x86/include/asm/numa.h   |   9 +-
 xen/arch/x86/numa.c               |  32 +++---
 xen/arch/x86/srat.c               | 157 +++++++++++++++++++++---------
 xen/common/Kconfig                |   3 +
 xen/common/efi/efi-common.mk      |   3 +-
 xen/common/efi/stub.c             |  32 ++++++
 xen/common/page_alloc.c           |   2 +-
 xen/drivers/acpi/Kconfig          |   3 +-
 xen/drivers/acpi/Makefile         |   2 +-
 18 files changed, 199 insertions(+), 132 deletions(-)
 create mode 100644 xen/common/efi/stub.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345792.571475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaj-0007BS-Hw; Fri, 10 Jun 2022 05:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345792.571475; Fri, 10 Jun 2022 05:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaj-0007BF-Ej; Fri, 10 Jun 2022 05:53:53 +0000
Received: by outflank-mailman (input) for mailman id 345792;
 Fri, 10 Jun 2022 05:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXah-00078y-Sq
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:52 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b252cbcc-e881-11ec-8b38-e96605d6a9a5;
 Fri, 10 Jun 2022 07:53:48 +0200 (CEST)
Received: from AS9PR05CA0002.eurprd05.prod.outlook.com (2603:10a6:20b:488::35)
 by AM0PR08MB5155.eurprd08.prod.outlook.com (2603:10a6:208:15f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Fri, 10 Jun
 2022 05:53:45 +0000
Received: from VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:488:cafe::90) by AS9PR05CA0002.outlook.office365.com
 (2603:10a6:20b:488::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT051.mail.protection.outlook.com (10.152.19.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 10 Jun 2022 05:53:44 +0000
Received: from 68f3d0fdfe0a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F780FB04-C8D7-46D0-98EC-33D5B5BAAEBE.1; 
 Fri, 10 Jun 2022 05:53:37 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 68f3d0fdfe0a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:37 +0000
Received: from AS9PR05CA0008.eurprd05.prod.outlook.com (2603:10a6:20b:488::28)
 by AM8PR08MB6435.eurprd08.prod.outlook.com (2603:10a6:20b:317::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Fri, 10 Jun
 2022 05:53:32 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:488:cafe::e1) by AS9PR05CA0008.outlook.office365.com
 (2603:10a6:20b:488::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:32 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:31 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:25 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b252cbcc-e881-11ec-8b38-e96605d6a9a5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=F+UezYvx7ynEFWPzZKMVLYnkY8ho1BbsxYOeU6jnqWZs5ijTljwbBIG6/pcOsCqiG6SJhjcGOauQBTXmAPsD8ircrngWcHTCzDz3srgSv9DBSKNTKvZopCqrJc7Hth7igpnwxWDEjiRSBlJ/NKQP2fTFcjnO4qob/DhSQ9PLL/K6wsDTwsg+0U9iY6a2qulHihEvvlQ0ptMEHu4AxCX+x8sBAxnQ9EyRG+3GkArbSBU+Kv5M1SHNAsi1EiE5chr4rsIlnrb393tgDAebtTE8BggZ2jmajo/QP9355KfZKZ27so1JS8GJdPYCr07eSIDL5JcjETk5jhVuwvCvNqb24A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h1Mk+96NdZCuYCNVMfuYJAf8YYMD5xhqAKxBbA5l6eU=;
 b=Box0C1tXqYBo77Cwteykhnycv/SiTPkA9Dh2UzBSvky7La1DcVdW0o+bBQh5Gg8W67o7JpXdhN9IKogHKSlGp0simasq3ncplDhv5t9emZxkSyYfQpvKiQzXcUg1WPHB/jMaPBEkkMXuCIkHe6NSvRh35IKjK1cIa2csyrt0Eymx1QrdPOzlXwC6on62tzzqCLFpp6xJKJpTSY5XXcBGhj4i8CzP0uFZjfxPXcAkEVZJ/pl75eGEWi7uFNVNgha/iX0HIC+8UBEc6YwZEJe85fuoDsKZn6Jmk3IMGuOGOWTRJ5BmMo9YOi0cI0iH59L/RY0NBLCvqEXrq0BaA2gBsg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h1Mk+96NdZCuYCNVMfuYJAf8YYMD5xhqAKxBbA5l6eU=;
 b=hk0LL7BZPrbco/PiUfd8AJb4426z/uNgEwO7nVPPMn8OhFeESDnlHhXRYIija5zeYVf/ZO+0OFQSiadJpW+X1H9R8FFmEniomwfuaa83FSSKQjW8ryE8c893vPWw/gwWehmf6fBAnvEpQ31l1t4UErPV5WcszWLGTzd4u2ks2Xs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6fb21e0bb163feb8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uh9EFywxyJKcGYkOb28UY55GopgAUmI/G8CPP/Bl0mzJgvnzvxAC3YsvH2ow7A/XYJTMIA+e2hSW/WVrCupGNzPJihlakDQhW4ib8WGiHfoL9h3HuCnOq95Dc04eTD8D/qaUPpPbPl8nnJY7xec0z4R9zkeyqGhFpAk1PjB4osoNSAmMAxHIxwrmGXQGjYtjzPejW/87Tr3J2RiV0xJgLLtOYnkJmCNJi9KuPjwsaPQNooAXFw52TRKOVdizHmIbJZJW6eHHilMI89BfnziONzLjoZV2pSbNb5jGWgtuYF2fUBcXBpiUc6yBL1+363w02fcNaMAverKnFF5+7W0RJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h1Mk+96NdZCuYCNVMfuYJAf8YYMD5xhqAKxBbA5l6eU=;
 b=f5wVGbRczca9b7iqsquDTyMIjP81AZeNsOa8VciyIxwDojEV6QwlkmQE6QHI9zF8mnT2kWSlazSR39RSmrrEbgfaOMt3mbHuNuTYORqT4t38lFr+RV8EZG5Laxm/FNvSlzO58v+JYXt0xDeVezEaKHEotlVMD11c4YjhXikUEwrCPa0yjmA6/8V+uL/55TFPGY2xmID0u6yeXQ72pnTd9A2uJtuf1N1z++a3CyqF0mHu2Kdgxm0Ed3DkGhAq4X/MHKU0sV3Pxg3mRtCo3ru7pIj4//eeqRnxNKThXmp/kTdttnnaBU0qWRFBZHWZHCh2ZjUMp/uB3Fb3WbAU7KF8LA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h1Mk+96NdZCuYCNVMfuYJAf8YYMD5xhqAKxBbA5l6eU=;
 b=hk0LL7BZPrbco/PiUfd8AJb4426z/uNgEwO7nVPPMn8OhFeESDnlHhXRYIija5zeYVf/ZO+0OFQSiadJpW+X1H9R8FFmEniomwfuaa83FSSKQjW8ryE8c893vPWw/gwWehmf6fBAnvEpQ31l1t4UErPV5WcszWLGTzd4u2ks2Xs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Date: Fri, 10 Jun 2022 13:53:09 +0800
Message-ID: <20220610055316.2197571-2-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 463e935d-7839-4355-975e-08da4aa5948a
X-MS-TrafficTypeDiagnostic:
	AM8PR08MB6435:EE_|VE1EUR03FT051:EE_|AM0PR08MB5155:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB515506C237728CA19B50837F9EA69@AM0PR08MB5155.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 X1PKdMnFiLCqt1sXEdQazsDfx4yVraZ/BRTkf7nOFH6HkP+9Vox0tmc9JrfstOa209dXiZWt8zQjW6hKHPPXfEtfYQ467dvejzN9hBEyCt36zwD2SdUkkEO4YzudOUZo9Ax7N/hUcbfgwJaymFmWVPmK/O+mqvmm4JgMfW9APAcuLZr3lNWOreAiSSzpyEWlF2aBahKmj2mOMcqAcgWf2rKY7B4tXSpdL5u0M7azPbbKn7bSfo0RoQsNfR7uwj4ZRsvnix63vVRMiCTfY1+9QxylWVOHN+M+1VdHcr+NDjtrDfYr5fBSFgW3IMm/wHu+iZ6WLv3SY7Qb8H7JJxxEs0KOHTxxxlNW5FxIecaXpGMcQo8M57JuQxkqMytXNrcVAmc0cIcFnxkRUrij3HWNbxPW+PzfIOWOu4Q9pg4JyP9a/WI0/w3D6kLnRfkYt7Sj9OqYUpgqmqTutBJpnTHQ8XLjAVH+/c+UhMCV32O6zAgGW3aIJ/gVU0rhZ3fB+2shDphQUw2/5Jq7Q8Z+5r46IekNtPwQWVS6GTyMIt8Mx15j6TNWScOVXOQXy+dlesbkTDdQktGRaqu0XmlyMUTFhKh+tnnazDzAFeroux9/QxB0SEl0lI8A/AY7buwNvNx8DnuHnaNqcQHjdXVwn/X4u2RSrNLzKzDQHOZXT4OeKJJLmcSuV1F0WhkzaRqkt9FHJroLmdlESRUW+4/vmpJYeIEvJona0y8X+lyTilUDWSs=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(316002)(1076003)(44832011)(7696005)(82310400005)(40460700003)(8676002)(4326008)(36860700001)(70206006)(70586007)(26005)(2616005)(54906003)(6916009)(2906002)(426003)(8936002)(47076005)(83380400001)(81166007)(36756003)(86362001)(336012)(356005)(5660300002)(508600001)(186003)(6666004)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6435
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4b9e15fb-4f6f-482d-cf6c-08da4aa58cca
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AwXtMinKcluzfivA0UI87TetRo+Tw2Pr5KsJsLYP/KvcJOjpkVIo1L4vWLbCzMzsi+okAnQBl/zHWaoH/xaqVR7gblYSYVasJlouhi+rd9DJfU2EyA87pX01u7/DZknv8eGhJmQ3xk6PRl+9iDmgwqUG64pPKNjtl6syLDDru11IYopG/Pn260eMAGNfMEFBhCH0z3S70W6Mnv18u324FNrKtYT7sJ+N5QOcOcWQHmlna0N+VFsCySf1ZOzrGWnwlg8f3owiKhyWGKAXvunKF9sh3HtECuKN+TY5wKoyigCCy9/Emx9IQkRcdju5/LVsUeTI38qaNOGSB+e9rhybBfb6nWvx9+mmAFhQoxYb/f+Yk1syvaXar5dX2MpIwWHoAE35cweRPiQF6cjNUenY4t/J4+aF7Y6nb40IzBtAv7UhPUONQTTs/AzpFyW0CaFcucEqvvwg0oBTObkVCoePQBcMogUqTfc6hxb2eAMW4g4KxBg0FqV4MXeFzfJ+Lr5/zt7PT4XSvn8J64Sahbm50Oz6Xg57TaJT1ak3sJ6UPsU47SRTsp71V2OiP9F7dY+ej6g5P917zjn5N5Yrxvx229okPZahuTnEpz7LWB+AwTJl8QEvCXHdQ9LWESOrAn6Kx2CG2rBoce/5kjtq+f3Lkb9kCafY+o4khVpotWScygmV2hquKg59GuYobI6kLhwKnapSyHCKSGBf3QqjY2xMBw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(40470700004)(70206006)(44832011)(26005)(316002)(82310400005)(70586007)(83380400001)(81166007)(86362001)(2616005)(40460700003)(2906002)(1076003)(426003)(186003)(336012)(47076005)(5660300002)(8676002)(36756003)(6666004)(4326008)(508600001)(8936002)(36860700001)(54906003)(7696005)(6916009)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:44.7422
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 463e935d-7839-4355-975e-08da4aa5948a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5155

x86 uses compiler feature testing to decide whether the EFI
build is enabled. When the EFI build is disabled, x86 uses an
efi/stub.c file in place of efi/runtime.c for the build objects.
Following this idea, we introduce a stub file for Arm, but use
CONFIG_ARM_EFI to decide whether the EFI build is enabled.

Most functions in the x86 EFI stub.c can be reused by other
architectures, such as Arm, so we move them to common code and
keep the x86-specific functions in x86/efi/stub.c.

To avoid symbolic-link conflict errors when linking the common
stub files into x86/efi, we add a regular-file check to the EFI
stub files' link rule. Based on this check, we can skip
creating links where regular stub files already exist in
x86/efi.

As there are no Arm-specific EFI stub functions at this stage,
Arm can still use the existing symbolic-link method for EFI
stub files.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v4 -> v5:
1. Add acked-by.
v3 -> v4:
1. Add indentation so that the ln and test invocations are aligned.
v2 -> v3:
1. Add an existing-file check: if a regular stub file already
   exists, the link of the common stub file is skipped.
2. Keep stub.c in x86/efi to include common/efi/stub.c
3. Restore the efi_compat_xxx stub functions to x86/efi/stub.c.
   Other architectures will not use efi_compat_xxx.
4. Remove ARM_EFI dependency from ARM_64.
5. Add comment for adding stub.o to EFIOBJ-y.
6. Merge patch#2 and patch#3 to one patch.
v1 -> v2:
1. Drop the copy of stub.c from Arm EFI.
2. Share common codes of x86 EFI stub for other architectures.
3. Use CONFIG_ARM_EFI to replace CONFIG_EFI
4. Remove help text and make CONFIG_ARM_EFI invisible.
5. Merge one following patch:
   xen/arm: introduce a stub file for non-EFI architectures
6. Use the common stub.c instead of creating a new one.
---
 xen/arch/arm/Kconfig         |  4 ++++
 xen/arch/arm/Makefile        |  2 +-
 xen/arch/arm/efi/Makefile    |  8 ++++++++
 xen/arch/x86/efi/stub.c      | 32 +-------------------------------
 xen/common/efi/efi-common.mk |  3 ++-
 xen/common/efi/stub.c        | 32 ++++++++++++++++++++++++++++++++
 6 files changed, 48 insertions(+), 33 deletions(-)
 create mode 100644 xen/common/efi/stub.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4..8a16d43bd5 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -6,6 +6,7 @@ config ARM_64
 	def_bool y
 	depends on !ARM_32
 	select 64BIT
+	select ARM_EFI
 	select HAS_FAST_MULTIPLY
 
 config ARM
@@ -33,6 +34,9 @@ config ACPI
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
 	  an alternative to device tree on ARM64.
 
+config ARM_EFI
+	bool
+
 config GICV3
 	bool "GICv3 driver"
 	depends on ARM_64 && !NEW_VGIC
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 1d862351d1..bb7a6151c1 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,6 +1,5 @@
 obj-$(CONFIG_ARM_32) += arm32/
 obj-$(CONFIG_ARM_64) += arm64/
-obj-$(CONFIG_ARM_64) += efi/
 obj-$(CONFIG_ACPI) += acpi/
 obj-$(CONFIG_HAS_PCI) += pci/
 ifneq ($(CONFIG_NO_PLAT),y)
@@ -20,6 +19,7 @@ obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-y += efi/
 obj-y += gic.o
 obj-y += gic-v2.o
 obj-$(CONFIG_GICV3) += gic-v3.o
diff --git a/xen/arch/arm/efi/Makefile b/xen/arch/arm/efi/Makefile
index 4313c39066..dffe72e589 100644
--- a/xen/arch/arm/efi/Makefile
+++ b/xen/arch/arm/efi/Makefile
@@ -1,4 +1,12 @@
 include $(srctree)/common/efi/efi-common.mk
 
+ifeq ($(CONFIG_ARM_EFI),y)
 obj-y += $(EFIOBJ-y)
 obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
+else
+# Add stub.o to EFIOBJ-y to re-use the clean-files in
+# efi-common.mk. Otherwise the link of stub.c in arm/efi
+# will not be cleaned in "make clean".
+EFIOBJ-y += stub.o
+obj-y += stub.o
+endif
diff --git a/xen/arch/x86/efi/stub.c b/xen/arch/x86/efi/stub.c
index 9984932626..f2365bc041 100644
--- a/xen/arch/x86/efi/stub.c
+++ b/xen/arch/x86/efi/stub.c
@@ -1,7 +1,5 @@
 #include <xen/efi.h>
-#include <xen/errno.h>
 #include <xen/init.h>
-#include <xen/lib.h>
 #include <asm/asm_defns.h>
 #include <asm/efibind.h>
 #include <asm/page.h>
@@ -10,6 +8,7 @@
 #include <efi/eficon.h>
 #include <efi/efidevp.h>
 #include <efi/efiapi.h>
+#include "../../../common/efi/stub.c"
 
 /*
  * Here we are in EFI stub. EFI calls are not supported due to lack
@@ -45,11 +44,6 @@ void __init noreturn efi_multiboot2(EFI_HANDLE ImageHandle,
     unreachable();
 }
 
-bool efi_enabled(unsigned int feature)
-{
-    return false;
-}
-
 void __init efi_init_memory(void) { }
 
 bool efi_boot_mem_unused(unsigned long *start, unsigned long *end)
@@ -62,32 +56,8 @@ bool efi_boot_mem_unused(unsigned long *start, unsigned long *end)
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e) { }
 
-bool efi_rs_using_pgtables(void)
-{
-    return false;
-}
-
-unsigned long efi_get_time(void)
-{
-    BUG();
-    return 0;
-}
-
-void efi_halt_system(void) { }
-void efi_reset_system(bool warm) { }
-
-int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
-{
-    return -ENOSYS;
-}
-
 int efi_compat_get_info(uint32_t idx, union compat_pf_efi_info *)
     __attribute__((__alias__("efi_get_info")));
 
-int efi_runtime_call(struct xenpf_efi_runtime_call *op)
-{
-    return -ENOSYS;
-}
-
 int efi_compat_runtime_call(struct compat_pf_efi_runtime_call *)
     __attribute__((__alias__("efi_runtime_call")));
diff --git a/xen/common/efi/efi-common.mk b/xen/common/efi/efi-common.mk
index 4298ceaee7..ec2c34f198 100644
--- a/xen/common/efi/efi-common.mk
+++ b/xen/common/efi/efi-common.mk
@@ -9,7 +9,8 @@ CFLAGS-y += -iquote $(srcdir)
 # e.g.: It transforms "dir/foo/bar" into successively
 #       "dir foo bar", ".. .. ..", "../../.."
 $(obj)/%.c: $(srctree)/common/efi/%.c FORCE
-	$(Q)ln -nfs $(subst $(space),/,$(patsubst %,..,$(subst /, ,$(obj))))/source/common/efi/$(<F) $@
+	$(Q)test -f $@ || \
+	    ln -nfs $(subst $(space),/,$(patsubst %,..,$(subst /, ,$(obj))))/source/common/efi/$(<F) $@
 
 clean-files += $(patsubst %.o, %.c, $(EFIOBJ-y:.init.o=.o) $(EFIOBJ-))
 
diff --git a/xen/common/efi/stub.c b/xen/common/efi/stub.c
new file mode 100644
index 0000000000..15694632c2
--- /dev/null
+++ b/xen/common/efi/stub.c
@@ -0,0 +1,32 @@
+#include <xen/efi.h>
+#include <xen/errno.h>
+#include <xen/lib.h>
+
+bool efi_enabled(unsigned int feature)
+{
+    return false;
+}
+
+bool efi_rs_using_pgtables(void)
+{
+    return false;
+}
+
+unsigned long efi_get_time(void)
+{
+    BUG();
+    return 0;
+}
+
+void efi_halt_system(void) { }
+void efi_reset_system(bool warm) { }
+
+int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
+{
+    return -ENOSYS;
+}
+
+int efi_runtime_call(struct xenpf_efi_runtime_call *op)
+{
+    return -ENOSYS;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345791.571463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaf-0006t3-6z; Fri, 10 Jun 2022 05:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345791.571463; Fri, 10 Jun 2022 05:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaf-0006sw-3h; Fri, 10 Jun 2022 05:53:49 +0000
Received: by outflank-mailman (input) for mailman id 345791;
 Fri, 10 Jun 2022 05:53:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXad-0006br-LE
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:47 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20630.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b12a7e39-e881-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 07:53:46 +0200 (CEST)
Received: from AM6P193CA0110.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::15)
 by AS8PR08MB6840.eurprd08.prod.outlook.com (2603:10a6:20b:399::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 10 Jun
 2022 05:53:44 +0000
Received: from AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::88) by AM6P193CA0110.outlook.office365.com
 (2603:10a6:209:85::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT005.mail.protection.outlook.com (10.152.16.146) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Fri, 10 Jun 2022 05:53:43 +0000
Received: from 76bfde828f41.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3459B700-23A7-461F-9F45-6105FD6E087E.1; 
 Fri, 10 Jun 2022 05:53:37 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 76bfde828f41.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:37 +0000
Received: from AS9PR05CA0022.eurprd05.prod.outlook.com (2603:10a6:20b:488::16)
 by AS4PR08MB7504.eurprd08.prod.outlook.com (2603:10a6:20b:4e8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:35 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:488:cafe::53) by AS9PR05CA0022.outlook.office365.com
 (2603:10a6:20b:488::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:35 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:34 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:32 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b12a7e39-e881-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=ZhPrZZpN7382oTc0J/fANBBTf0rsSLYY9Z/cyCcrRI931oxcAqKfxPhhsXuuBtRRloHNgZqFD2weJmHKqE7FyEJbNRpuew+qrRzTCYHLRfgLHTUPZzRdbeAWWITj/aoZdpO2GV3iIfOhCxvMv2bbfPED+l511su9A+OAvfRT2GL4k7GdRFwxrYJ6Gq5kC58VpnzNZ6lO3LT/jUU7hMAu3wFGG1lrKcNHefl5xuGPiHbaALUDy4ISjsLFlg0UC2vG9Qy6Wfbz4+IU/a5ZAgixuxV2rdb3I3FzB+vRY+j0OjZehBK4UcvR0+0O2iBuvtGvX+R9MhBpxi6mTlxJ9eIs6g==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z8kc1oeWJd8eJovTz7Jo1ZUCorNAUDwG0WrccA7BRjY=;
 b=hVtXnTU0ZVjnXiv7br87lV2lbiCjQ08X47hmuLncT9dKXuNmcNwqAGS9WoxMeY6sKgnkerUWQ7Kow8G9LM3WeJnl04igeJpOR3pdzUEwbqAr79UjoOC+nQ/5m5BMBK/TlesCBfDJiJYaL+6aoluNzA4tIb+xyUymNxvLge2QgsxAmOo4bwf7HttOiScqdtCqSDossytHggT/iOwME+C7tN2i9g61XCr742/Jr8762GqDO2y0+WdnKZ5WBb8D0ip5U8NEsXyRzOhXlB6CZeez9ZN41QiqTk5HzULL62+2ViKTrti6P+c+TC11JmGoWLq3WXeDc8fGSfRoONMLg/lkOA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z8kc1oeWJd8eJovTz7Jo1ZUCorNAUDwG0WrccA7BRjY=;
 b=0+KJEYJZO+wU4CSdXMR/nXQcvXRWrf5EdRNZdpOETzGMHaueKQZV3aMn7x9lrilHf6clCl5WEnbtCgFz/bULJt8jnnfIa2cjoE1P7tQdHuhsolAUMNkI956uZ7I7W6tQOfiGmZI2GfMMHJNM7/Y8sI1esBMWD5ObGu2YR97qrMA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 12ae1ba1e1c0f8a6
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OdWSr5ELu63mkerJEd/A09nMapB1GsTJ2v+SGHQfZeIR4urHPQf46WeH0q/zzyUOMYjpbePDt+2F/YkjCHwWIPYyXRH4pdu+u89RV2XWnmRR43MwBqYogeMUSYtD+qEW71U6YzHxOXvHbLRA3+MXMXJg1xm2i2uY6ITlvupIkitze/qVpIoxH34pF/hs2h54Aqaqufhq+GiN2E3nPknUwKzqxz+sHLO5I8duDIfOH6DoV4Nn2YUoRutVCuo09D9K7cS+UxzAfU6Q0HIR+TZxZjreENDpryQF8DsXQldclo7/JBRKtijNmCQuXJPYVi2PnERzXGC8Tlm1K0kxNi0npA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z8kc1oeWJd8eJovTz7Jo1ZUCorNAUDwG0WrccA7BRjY=;
 b=JDTfiuuTku9KXyoAJhp5kfeEoG364Idovu6rNyX/bef7ejYDZqy4cj/ZidA5r1g4E8ZoHjJbbLX5l4v50GuzzucljG9lhGhGeY9Uf4TDgKrbfoCLGL18UZAQ2cbOe8/6i4oOpUmUDeEHtdxrz449FiJWK2P7wZA6v7mv3+6Q6oZ1cru+3J64srHLOHy8jhwAFe0b7JZgCRgtBhY+fQJbGH2qavfbh/4mFpyA8ucIUMLfOpmxtSQlxYco5M+GJENU48p0NBHY63zJ03b1CaCZ/2QhqNdLcDT4N+74ZP0xR3cXjwfqGyP9Xl3Y/bYGZvwgycYhXut+gZca8qSEFo8RBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z8kc1oeWJd8eJovTz7Jo1ZUCorNAUDwG0WrccA7BRjY=;
 b=0+KJEYJZO+wU4CSdXMR/nXQcvXRWrf5EdRNZdpOETzGMHaueKQZV3aMn7x9lrilHf6clCl5WEnbtCgFz/bULJt8jnnfIa2cjoE1P7tQdHuhsolAUMNkI956uZ7I7W6tQOfiGmZI2GfMMHJNM7/Y8sI1esBMWD5ObGu2YR97qrMA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 3/8] xen: introduce an arch helper for default dma zone status
Date: Fri, 10 Jun 2022 13:53:11 +0800
Message-ID: <20220610055316.2197571-4-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 47c9265c-cf71-4ae8-95dc-08da4aa59435
X-MS-TrafficTypeDiagnostic:
	AS4PR08MB7504:EE_|AM5EUR03FT005:EE_|AS8PR08MB6840:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB6840747E142CCB16140BD8D29EA69@AS8PR08MB6840.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fbiTxNa6aM5OvhUis7UUUpVekG4Q6/YHB9uFTPfq+9v9TngHu8CzqjixKf6e2Gk1MiGUD1P8/0gcpt9UB9Fmi8VWtV/opVc+HzFy6mO28Q0GxYgw2P8fjSQXCk6WTro1rOedxXDYI/rpLvE+TVLozpSZBNf5dHEdxWIC/IqV7Xu2fsEWdtk0XjJ+ik0ZDoltBo/ZhV6/h0Xl0geWBvMSvwCo0ahzNO5ruBLo73EXo/dFJLDQX0bi/NW9dkJxOgvptjorcaXjyxK80Kzm1IMMVZLjvd9Q9FJVC/TzWgUPZ58rQBzu6kk8MTRZZPduyY5jVbduZxXEIuO6N0nvMeSAbIj/+OgW8l41ALFGJwrIQ2oDd8QUXPDNOAmf4QGFNm669LALeJVtO6ZX4CCnQv4RV49NwcXlc3VvEOa6GwIA8BxQuekZjlkkzsKq3QwyZ51OAXXVvsXwSTui8JpkKTv+4uv08tPMAQMP8Z7yiEweGCsbCEd0tEFEVQlgyNkaSml4DJDVatEcp+k6aMWp1nA59Zw4WUL330tFpuY88+efbp16zcKEQ1QYlIWhYHfFh9tBEXATqY7MnJvBA2LV+jHw1/A8MZ4i2spJNsLhfk6u29bd63PI1P/yyVNJONEzOQxC+cX5PFaqR4q9Ze6KQgFOcifr9j+8mTgTb9ft4hqUN/qGXOCmnhb/Sj20z63aXZKUEorIk4GkSf6iULHAk0n+vw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(316002)(83380400001)(70206006)(2906002)(8676002)(81166007)(70586007)(5660300002)(4326008)(44832011)(36756003)(6666004)(26005)(356005)(7696005)(86362001)(82310400005)(186003)(426003)(336012)(2616005)(47076005)(508600001)(1076003)(6916009)(40460700003)(36860700001)(8936002)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7504
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2ec99269-d152-4322-c201-08da4aa58eb2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	k+lmuKGde+qM+fa4M0Wf+4XTO95RzCAithceKtq/pl6bKJnM1oLLQuSMU2T8YqBy5zzfMAxDe5bXP9YY3ACUxgs8oKrNKsnRPBdvyKPK089rFR/dIxcJCp1RQnQfHzOnGyhGR24Zz/3ryP3gMtymK3c2+IR5EHCjNPLD5Lu3DDTjW5eOSjAXSf+zMHBNVLH5tmQvx6vCpbjoAHxIyMI+erkmPhUBLnOIjp0Uf7SeK5b0z1LPeWSC4Kj2gOABHftvIx8TebDaCRUHhzhk+fHk3kCngVjpxAI7iltXOq4rTvIDOolQRU4r5dw5Ej/mCzKVzCCPLwhizFysDp71SpmztvHmR7JxfK7peZyrbQFlgNdsXExFcMNy41dsbApHNid0FqMLjnjDVpL3/7tvlWoSpoYwGpfo9FSLurgw5Xw0juOCNmaX0XmPQXKw0QKVGDOwIV3XYiuOvLsXXpzj0Qg72V1WY8ieRe0n2CJUWVOCx0kpWm9NGPzuOmXl3aT8XJT/6HrkhhfeFHjasDIGB2PDeDtO9bioR1KhPhzXRn4Kq7FbahAGmiX43S010aCphXok+th363E5hmFKTXDqGvnxelpE4wK3/fuF5jrtkTIC2+A38Qeh69mxE8fxGDH1AvNCqT/YadJ+yjERn9ys3UyoExH3UahRgTz0PercIgvZus52WZxMjVCjLhBD/05Jb3hu
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(82310400005)(83380400001)(2906002)(36860700001)(40460700003)(81166007)(6666004)(26005)(7696005)(36756003)(4326008)(54906003)(508600001)(316002)(6916009)(5660300002)(8936002)(426003)(336012)(186003)(8676002)(2616005)(1076003)(70206006)(70586007)(47076005)(86362001)(44832011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:44.2320
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 47c9265c-cf71-4ae8-95dc-08da4aa59435
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6840

In the current code, when Xen runs on a multi-node NUMA
system, it sets dma_bitsize in end_boot_allocator to reserve
some low-address memory as a DMA zone.

The implementation has some x86 implications, because on x86
memory starts from 0. On a multi-node NUMA system, if a single
node contains the majority or all of the DMA-capable memory,
x86 prefers to hand out memory from non-local allocations
rather than exhausting the DMA memory ranges. Hence x86 uses
dma_bitsize to set aside a largely arbitrary amount of memory
as a DMA zone. Allocations from the DMA zone happen only after
all other nodes' memory is exhausted.

But these implications are not shared by all architectures.
For example, Arm cannot guarantee the availability of memory
below a certain boundary for DMA-limited devices either, yet
currently Arm doesn't need a reserved DMA zone in Xen, because
Xen itself has no DMA devices. As for guests, Xen on Arm only
allows Dom0 to perform DMA operations without an IOMMU, and at
boot time Xen tries to allocate Dom0's memory below 4GB, or
below the boundary set by dma_bitsize. For DomU, even though
Xen can pass devices through to a DomU without an IOMMU, Xen
on Arm doesn't guarantee their DMA operations. So Xen on Arm
doesn't need a reserved DMA zone to provide DMA memory for
guests.

In this patch, we introduce an arch_want_default_dmazone
helper that lets each architecture determine whether it needs
to set dma_bitsize for DMA zone reservation.

At the same time, x86 Xen built with CONFIG_PV=n could
probably leverage this new helper to avoid triggering DMA
zone reservation.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Add Acked-by.
v2 -> v3:
1. Add Tested-by.
2. Rename arch_have_default_dmazone to arch_want_default_dmazone.
v1 -> v2:
1. Extend the description of Arm's workaround for reserved DMA
   allocations, to avoid having the same discussion every time.
2. Use a macro to define arch_have_default_dmazone, because
   it's a little hard to make the x86 version static inline.
   Using a macro also avoids adding __init to this function.
3. Change arch_have_default_dmazone's return value from
   unsigned int to bool.
4. Un-addressed comment: make x86's arch_have_default_dmazone
   static inline. If we moved arch_have_default_dmazone to
   x86/asm/numa.h, it would depend on nodemask.h to provide
   num_online_nodes, but nodemask.h needs numa.h to provide
   MAX_NUMNODES, causing a circular dependency. And this
   function is only used in end_boot_allocator, during Xen
   initialization. So compared to the changes required to
   inline it, it doesn't gain much.
---
 xen/arch/arm/include/asm/numa.h | 1 +
 xen/arch/x86/include/asm/numa.h | 1 +
 xen/common/page_alloc.c         | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 31a6de4e23..e4c4d89192 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -24,6 +24,7 @@ extern mfn_t first_valid_mfn;
 #define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
+#define arch_want_default_dmazone() (false)
 
 #endif /* __ARCH_ARM_NUMA_H */
 /*
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index bada2c0bb9..5d8385f2e1 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -74,6 +74,7 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 #define node_spanned_pages(nid)	(NODE_DATA(nid)->node_spanned_pages)
 #define node_end_pfn(nid)       (NODE_DATA(nid)->node_start_pfn + \
 				 NODE_DATA(nid)->node_spanned_pages)
+#define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 extern int valid_numa_range(u64 start, u64 end, nodeid_t node);
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ea59cd1a4a..000ae6b972 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1889,7 +1889,7 @@ void __init end_boot_allocator(void)
     }
     nr_bootmem_regions = 0;
 
-    if ( !dma_bitsize && (num_online_nodes() > 1) )
+    if ( !dma_bitsize && arch_want_default_dmazone() )
         dma_bitsize = arch_get_dma_bitsize();
 
     printk("Domain heap initialised");
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345793.571480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaj-0007ET-UN; Fri, 10 Jun 2022 05:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345793.571480; Fri, 10 Jun 2022 05:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXaj-0007E0-OW; Fri, 10 Jun 2022 05:53:53 +0000
Received: by outflank-mailman (input) for mailman id 345793;
 Fri, 10 Jun 2022 05:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXai-00078y-J7
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:52 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe05::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4239899-e881-11ec-8b38-e96605d6a9a5;
 Fri, 10 Jun 2022 07:53:51 +0200 (CEST)
Received: from AM6PR0502CA0070.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::47) by VI1PR0801MB1646.eurprd08.prod.outlook.com
 (2603:10a6:800:4f::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:49 +0000
Received: from VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::ee) by AM6PR0502CA0070.outlook.office365.com
 (2603:10a6:20b:56::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT044.mail.protection.outlook.com (10.152.19.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:48 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 10 Jun 2022 05:53:48 +0000
Received: from 804bed6f1c78.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 975C0AF7-5E0E-4CDA-8934-36219FB31A39.1; 
 Fri, 10 Jun 2022 05:53:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 804bed6f1c78.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:41 +0000
Received: from AS9PR06CA0062.eurprd06.prod.outlook.com (2603:10a6:20b:464::12)
 by AM0PR08MB5377.eurprd08.prod.outlook.com (2603:10a6:208:181::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 10 Jun
 2022 05:53:39 +0000
Received: from VE1EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:464:cafe::c6) by AS9PR06CA0062.outlook.office365.com
 (2603:10a6:20b:464::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:38 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT027.mail.protection.outlook.com (10.152.18.154) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:38 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:37 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Fri, 10
 Jun 2022 05:53:36 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4239899-e881-11ec-8b38-e96605d6a9a5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Qkw7hl/cegNtumNBTnTQZtKSLnHzVtsTz9jcwCVvSIZ8l4zSwSl94lWGuFoIS5FyjHEOeG4ZJoSrWdzWGNZ5BNI87NZyLmS6tjw2oQS3QeBR8M+op/x0FJQV6uEvHo0RhzD0c2pDFobH5ynMxkwdckU+ANJeVU+ASCDZK9tdo04lHYQrZFbtFePWWNNi77Pq7cXCmP5l+ABq4UTeHA/IOGaCMt8uREZaLK9lckFvuRql/gBPed8jyIeSMX9HotKdAmnrphYTVAJEmiIgFoRY6xBJBRRcXAQbzk77YXWVKr7O8uoPX+Z0XryurcmBQsvC6VGn4fagLis4VGEEBO+7gQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=DDH4CSW3KpTePMHY66Qk0u8QK8dgjmQ23fNrn+xE+aXgW8gXGSqbL9BHrzpkNynnqwGG2GiQAOd1Z1/IpN20hHTw9RkUmUXZFAbcB8xXtjsd2MRokoGwClUYv1+QV8DvDiUjTgqvC/omPAtTSUYjgabBOu6c7TsAitjMj2+5lKOaWnVDV+xVVelpMQHqMRxLsjCOSrhiIjV5AWdo7p3aQJOiKmqs9A4jlMHpwvGKuymx2lDnmvtqS3sFWFDqnUn0TcnPqiizitzZnG+SK4OjwyAsbzn0fxePuFomiwV4Nbu/OTdLFMUk9iZCzPO9zXzsY4Ve041mKqznQmiCEduPNw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=Qv4apFVaZkv+VSmHCPKx5/+7nhDz+GteI+zxPZD5xyqtLUNmP2qtOVjosfjmmeSqLbnD72wvcLt34JJbW9gEkJArYuKVmcgjFXHA6xqhS+kWTTyBGzlGWMNqiYgBD3QF5Z2nZsZfUDz+9xtxNUeqbG3PR3quBPcPY6lPIZ3imgU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b7530664715039b2
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oDZuDwLte6/pZpfGuvFpUbNpggKC5G/TRJlJO1W+ZwTKbZaREaIr2mGaSFVC2CznA/84gBTUuMNmSnJGwLUe/5RoDYBGIjn2WK/eWelUJar91XUti8G/inXYC1rgQeyMITO3uxOjSe+SGUm4ehVwwBXUAwOHSbz08awa/PIzibD5AA1MC+gRh7x/S90OnZTZSkl/U1eQvxE++30cVBZF/4Pu8ylfMApWWQtj2vsfOlYVW48O6bkcsFjsAR34N6asaBcoHvYWMA0BG8p+HZPhAb8BhLvfFPW0MP5G6wFy6AjYVDsIk6mVdsd0u1RRsXix3T9LZURRqsoieIRwZOAqmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=i50GP0jxKwPGNu5rc1ItaH2BYqD9L32lcaCWE7uBxv7s1XKpdG0RZ7CK+KfaduRnVRnRFj5JkhNoMOlwcwLi1DaUWhXr76lKUYTRg6yH56P024LO2/u0BGflpALLpDDBNkgQB+1C/UCdrn+Bft9JQOIaTYS4TRtd2CSMSKmA6VV7cAXwweO5dJ6EJkTioOlVblNnuGCao+cTHeAyQ3vsi9CHTQ//eRsFqNjfqGUEvhoqIQqkgFloUYbhknbGNZ1FY558XVs47hx7BkBn6jQ2OzzK52hdiHfdi3RLbmVZFDNdQ7HvjZ6uJVwEph3kTO4AtUoQlz58pBr5gcEZd5wNtA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V9wUoq3XTl/aE5RFIP3KDTc2vdcTUcKiAanOp0Qrrhc=;
 b=Qv4apFVaZkv+VSmHCPKx5/+7nhDz+GteI+zxPZD5xyqtLUNmP2qtOVjosfjmmeSqLbnD72wvcLt34JJbW9gEkJArYuKVmcgjFXHA6xqhS+kWTTyBGzlGWMNqiYgBD3QF5Z2nZsZfUDz+9xtxNUeqbG3PR3quBPcPY6lPIZ3imgU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Jiamei Xie
	<jiamei.xie@arm.com>
Subject: [PATCH v6 4/8] xen: decouple NUMA from ACPI in Kconfig
Date: Fri, 10 Jun 2022 13:53:12 +0800
Message-ID: <20220610055316.2197571-5-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: c2be363b-4279-4ca3-4005-08da4aa596d0
X-MS-TrafficTypeDiagnostic:
	AM0PR08MB5377:EE_|VE1EUR03FT044:EE_|VI1PR0801MB1646:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB1646C093D1E21E43FBDF14ED9EA69@VI1PR0801MB1646.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8ZnsLLGNhQt1a+6E/sppjADgD2EnCramADmjMwGQi5sbZHizMgD+9WibPkeRz8tpH2mqqK07A/1adKuK4uhjDuJTlEX1Q9Ib2dAXV/ck0nrrvpztUohb4mSovuCivW9H66NM2xCNuW0Qhou03VZ7OME0XG7P9fCr8YiNFWcVY/qRn+ml2svNoKDLDL248eKqUomQvLQ2gJZNXO7ed/n6gOrOvZFK3H9Rj4sO0sXmMeD0wOIDb2sibsaetaxQGvicRurY4nOtSCL0gmiPtadkYFUoCR2i6PAhTDNtEZkIv9P+Lrlj8IiInlpd7XkEzaw1ROrrb6Q8Mwgz2YajSrojK75KMlxW3Hgo5mdVX45/rRveMk73k6nOsiDGGZeCQKQNURip0Tv8BPgd1H9KjMfSkDBbYksZh02aDdztKsIzLAjGRHjim0kRmrFeCWZxfJujtpYBClNU3dUoxFXmUPoLgzAbJRG8+ip9ynSWD3566yLya5rriv6gsHVZYJk/s/cLebaXAqvrFC6rrjBE0HpKbq4MP5sL0DzHBBsYcbgpJ8ojrONA5G6dTvU9grICSXd17IwtMdV45CaaO6TDlUKnYzGqnohaRsESATfDmcxaRsoCkhk+irhjG/8lTV04H7Fm4qZgl9F9F4kczSGI82YduPx3oiY1OZ0xWQtvFMNam+Y4Q7u11KRrpRP6sIN2rZbMy9KPjorH5JqE6U3+ZHJ0tA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(2906002)(47076005)(316002)(36860700001)(8676002)(5660300002)(36756003)(4326008)(81166007)(82310400005)(356005)(70206006)(70586007)(8936002)(6666004)(508600001)(44832011)(83380400001)(426003)(86362001)(54906003)(26005)(1076003)(7696005)(186003)(6916009)(40460700003)(2616005)(336012)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5377
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7c48b166-d269-48eb-b6bf-08da4aa590b4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	twtASVrJGitsdj1N1hpRc0RY20BcGr3VmpD/uujJSGgLv47CvJAXxtsfvgKn5qRpOsM3uHamm9VpskIUmQ+IaR6UUfzSJ2BXM4ssBP+/3Rr01Qlv5knbFf/45R1NjyOB/Ft/pKQ1A5J1uEDncO37fvnbccguLFvCgy4iUVdLN0O2Cd2zACDvo5Iw7gEBJvN1uS0qE8x6WQJfigNRSF7gtfGeItA6yyjijMe5ppXbgzsFr5SnspWka+W421X49+I+Ad1fkQkxYB9kypK2i7kmyVfF9LtwF9baPrhJUtl1qp8y9rh+LleDNUz0EVsvquDe3dR4c9qeb8PuJxQi74fXf2azJ81t/eU0pWBvUqSmdPPwuoba3cyn01tiDc0iW4E13d7IWP4g2huklbxhx6JR1Pg2i0WeHu9iGx/0gpYPCq5eFDn2ZiQO3kEs0SCHIG3xTyyk3g4H7tJVvj7kxEZ1jfM+9ZlCDwqHZBzLwhH5rUkQ1qU5r+SnEPVBJuc8cZao7U5EejtdMsxwcWpOGT4yWaSrv3PqXXuTo0B9Xkuzy/qIOmYADUFk9kINpwHSq9pBW9QusNsdEtxQ+ELPHc+vRiLhhRBnVdJmyRW2YW8HHz/Dk4Fj+9xOBUAPJ7909ZP84VSMd7Uq0YgnHyLr3tpwlvfQPm3Y4lfbyfv70Il80BGCqTeS9XNJWs20n77Ykq3G
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(86362001)(508600001)(8936002)(2906002)(4326008)(81166007)(8676002)(70206006)(82310400005)(70586007)(40460700003)(44832011)(36860700001)(5660300002)(54906003)(316002)(6916009)(26005)(83380400001)(2616005)(7696005)(47076005)(6666004)(1076003)(36756003)(426003)(336012)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:48.5542
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c2be363b-4279-4ca3-4005-08da4aa596d0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1646

The current Xen code only implements x86 ACPI-based NUMA support,
so in the Xen Kconfig system NUMA is equivalent to ACPI_NUMA: x86
selects NUMA by default, and CONFIG_ACPI_NUMA is hardcoded in
config.h.

In a follow-up patch, we will introduce support for NUMA using the
device tree. That means we will have two NUMA implementations, so
in this patch we decouple NUMA from ACPI-based NUMA in Kconfig.
NUMA becomes a common option that device-tree-based NUMA can also
select.
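
The resulting Kconfig relationship can be sketched as follows. This is a simplified fragment mirroring the hunks in this patch; the DT_NUMA symbol is hypothetical, shown only to illustrate how a second implementation could select the common NUMA option:

```kconfig
# Common feature, selectable by any NUMA implementation.
config NUMA
	bool

# ACPI-based implementation; x86 selects this.
config ACPI_NUMA
	bool
	select NUMA

# Hypothetical device-tree-based implementation (follow-up patches).
config DT_NUMA
	bool
	select NUMA
```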

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
No change.
v2 -> v3:
Add Tested-by.
v1 -> v2:
No change.
---
 xen/arch/x86/Kconfig              | 2 +-
 xen/arch/x86/include/asm/config.h | 1 -
 xen/common/Kconfig                | 3 +++
 xen/drivers/acpi/Kconfig          | 3 ++-
 xen/drivers/acpi/Makefile         | 2 +-
 5 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 06d6fbc864..1e31edc99f 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -6,6 +6,7 @@ config X86
 	def_bool y
 	select ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP
+	select ACPI_NUMA
 	select ALTERNATIVE_CALL
 	select ARCH_SUPPORTS_INT128
 	select CORE_PARKING
@@ -26,7 +27,6 @@ config X86
 	select HAS_UBSAN
 	select HAS_VPCI if HVM
 	select NEEDS_LIBELF
-	select NUMA
 
 config ARCH_DEFCONFIG
 	string
diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index de20642524..07bcd15831 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -31,7 +31,6 @@
 /* Intel P4 currently has largest cache line (L2 line size is 128 bytes). */
 #define CONFIG_X86_L1_CACHE_SHIFT 7
 
-#define CONFIG_ACPI_NUMA 1
 #define CONFIG_ACPI_SRAT 1
 #define CONFIG_ACPI_CSTATE 1
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index d921c74d61..d65add3fc6 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -70,6 +70,9 @@ config MEM_ACCESS
 config NEEDS_LIBELF
 	bool
 
+config NUMA
+	bool
+
 config STATIC_MEMORY
 	bool "Static Allocation Support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM
diff --git a/xen/drivers/acpi/Kconfig b/xen/drivers/acpi/Kconfig
index b64d3731fb..e3f3d8f4b1 100644
--- a/xen/drivers/acpi/Kconfig
+++ b/xen/drivers/acpi/Kconfig
@@ -5,5 +5,6 @@ config ACPI
 config ACPI_LEGACY_TABLES_LOOKUP
 	bool
 
-config NUMA
+config ACPI_NUMA
 	bool
+	select NUMA
diff --git a/xen/drivers/acpi/Makefile b/xen/drivers/acpi/Makefile
index 4f8e97228e..2fc5230253 100644
--- a/xen/drivers/acpi/Makefile
+++ b/xen/drivers/acpi/Makefile
@@ -3,7 +3,7 @@ obj-y += utilities/
 obj-$(CONFIG_X86) += apei/
 
 obj-bin-y += tables.init.o
-obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_ACPI_NUMA) += numa.o
 obj-y += osl.o
 obj-$(CONFIG_HAS_CPUFREQ) += pmstat.o
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345794.571496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXal-0007ht-EJ; Fri, 10 Jun 2022 05:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345794.571496; Fri, 10 Jun 2022 05:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXal-0007hL-8n; Fri, 10 Jun 2022 05:53:55 +0000
Received: by outflank-mailman (input) for mailman id 345794;
 Fri, 10 Jun 2022 05:53:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXak-0006br-Ka
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:54 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0626.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b55134a2-e881-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 07:53:53 +0200 (CEST)
Received: from AM6PR04CA0016.eurprd04.prod.outlook.com (2603:10a6:20b:92::29)
 by AM8PR08MB5730.eurprd08.prod.outlook.com (2603:10a6:20b:1d5::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Fri, 10 Jun
 2022 05:53:51 +0000
Received: from AM5EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::bf) by AM6PR04CA0016.outlook.office365.com
 (2603:10a6:20b:92::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT028.mail.protection.outlook.com (10.152.16.118) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:50 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 10 Jun 2022 05:53:50 +0000
Received: from a3ca2fe92f07.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9CA918F4-8782-4CE1-908E-F754DF5B9AED.1; 
 Fri, 10 Jun 2022 05:53:44 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a3ca2fe92f07.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:44 +0000
Received: from AM7PR02CA0024.eurprd02.prod.outlook.com (2603:10a6:20b:100::34)
 by AM0PR08MB3540.eurprd08.prod.outlook.com (2603:10a6:208:e3::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 05:53:42 +0000
Received: from AM5EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::eb) by AM7PR02CA0024.outlook.office365.com
 (2603:10a6:20b:100::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:41 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT044.mail.protection.outlook.com (10.152.17.56) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:41 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:39 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b55134a2-e881-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=kSfAkbNk81eQzNWPE3QDbQm3oD1YG9O7wL9/6vrLucxKacuXNpdQPqricAnJzqf2udy4AFeRioiFJ/DpSkYVaQJAvFQCM5oVqeuJyDbQt3l5q30XpwUyA1NGWD2xSUn2JmhL0+LNWTEgCyeZUKtPGt1FpuIi+V5D5YaVG6MVHOHn7t5hnkD3ZF0Q61dj46DzvuVHxxu8RMDc2+vGXAsY1mDJhXbcEyi9ofSrnUFSa1XO+YdUWqm8WPCGZfMK4673TsO+7J9WTMOdRjcmYxV8fvJFeEWvuAOi+on36+rBXbNNkYdZ2g9ZgvTIVTdM9tXnvn8LE5HUmZ+y1Mu7BQEdnw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=XkgdtfFf2aMCdZ90IIcIwVVr/SMQNPiwMZLkfOJA+7HMON5bnQejnwvj3/QXmCzf6Du9ITs8/rWaiWSl6VnUBLxxG6d+dbl2RijRpWXks3DhhdoLQHnyn58Np6wg/pO3sN0bzsWdR7WjSPRAQsFhLiQ1RHrN783w14DJMXG/2xPesLuWgoR1KHfy+bAQBQ0a44OYJsYftBWdPVKgwqT01ZmYrFNwnuEK4lCkEP2SOurXERn2pka46co0CDEd2GPJjKa3S6D+k76QC7SyjY6Xk8j5TaPpILUueUMZi5glbOjUgsCles9LY4snfKw4HhIjcMZy/kUqP1MwZ5bRpkwLFQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=VVl79PuYH21t3JY8WZTzMoQDBh/zAOTbv1S7FUWXZEtF9GypSYnL9urhaTpAmvfz6rQkXxsOmNtzSHSHRAYHlNfmCnCW5BPr9+JhYGs4OMiR3NdibBKHyBlY0md73mnm94NJH6DptvfyIEl5EJJcgZNpaG0xLjNXTpGk5ruxQSs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: dcb9a448a5a7c00a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m2HwBh7mrxuzD6NjMg95fVqfZIxZh82MZmLDvR1ciqt+sNplv+z0Hbx/QxDikDn/ZkEHwKjgt7Kw4AO2DXbpLohMmcZGrd2rwnaoBdiTMZEpCwTU0JQ779bhmlAITRRi8BT+Hbzn5QobEHlD7DjDDrHDa7uT5EY5WIsDgMiyNkdW7sy03fMpZzuQ87yCiUlUl/uQDNreQdejqFcqQb6YkG3Dccc1aUOoEJK7LxavixAurBkbM+VUFNBFfYu/biUIH71ll0g8SpwcKQeg6tZ8H23fTr4MhvFBbl4RoCxw8AteE8yMLpo5FepQnlnGexLF/wFDGFhs3Aga+x/Rv/TUwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=h9fezbv4JRBSUPUgAnUQaCnD0ncE9Od4WDOgW8IaGgWUkBEYIf3/KQVV8UU1ZzVPFKdQYM7tN3xPaNKAr2DCCSzg6rXKVnqZIOshrRI1to1ArAVC34svEYqxEWQWx1fJv57HNM81tv440KHhdYbubqyD9ByXphanxAFRYMIq6cJQGtW1mJsDWygqiQB19haSo/d9TX17pUA2K84+NmC60BRFz4nWYKUkdK7aiZPw+KWz4C8BGHrc2rv47BJodkhiTi1Ig2Gi5fYFa3wsb+Sn79zx0mKoQkRao/gqC1iaKIv6ArAgakZYBaVJrRPPltVqD69t5wTcDhW30RLtFDqe0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mmWNLdXhXdHZRodNywRZ0AluYyoEUalbdzqUNWlHEjU=;
 b=VVl79PuYH21t3JY8WZTzMoQDBh/zAOTbv1S7FUWXZEtF9GypSYnL9urhaTpAmvfz6rQkXxsOmNtzSHSHRAYHlNfmCnCW5BPr9+JhYGs4OMiR3NdibBKHyBlY0md73mnm94NJH6DptvfyIEl5EJJcgZNpaG0xLjNXTpGk5ruxQSs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 5/8] xen/arm: use !CONFIG_NUMA to keep fake NUMA API
Date: Fri, 10 Jun 2022 13:53:13 +0800
Message-ID: <20220610055316.2197571-6-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 468dd8fa-9af7-4ef8-07ea-08da4aa59806
X-MS-TrafficTypeDiagnostic:
	AM0PR08MB3540:EE_|AM5EUR03FT028:EE_|AM8PR08MB5730:EE_
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB573018800AAD6DAE5A91F4D19EA69@AM8PR08MB5730.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 GqdDHs7kS1W4EK09IJzUlGaJEjGEYKAXaxvNTRwGVQoMfusEqpbQ2lQ2RrvDD0hR6hHpYIlkdY+c03ukSt2rCMCcafUMKqmOVpM8UaNK0LqWSoNjuUOYFoTz8V/cTBN/obx+Lx27kWyY6XDZ9tIcPHOuXxfwIc081+UMFeODRQAKzuyeY/wZox/hicKmjt1FREO5ofd8OgmUiNL78Wa0hjJx+8rM+XMT23gDQeh53uy2eWNwHcy2VxvcNQ9cw8w0BfDe5gX01ei45EFnQVNuGCMAeFHpFCmFQBpUnU5BUVurMQJc2usVsiw+L66+r35vqpSQSSRklbuSKi8K7FsKSuQaCFrn3C0Nkap4kMPtXtG4fl6wrHBiNmMWDFwBvs2y2Z4CcB41q0j8FeNyILztMOdn+CFKKoBdYw11s+FOK+eJmtoNfdM9X2qn8oGE/+F14o7ENOf9IRjEcEeXGUqh2szUeDXs7VDNR0cBUGqKrTZGFy5+wZZLFWOVXnhK0upTJW5JEEuI+a6aDCXCudy28iNvdSKTdKI3JOrx9TiQjKUpthK2PiV6O0V+OP7yZjkCRbrAEO/xhJO6JKxYFMpB3zBDO0lgnvKDckfzX/wWyE1pwTEN4Jq3LFDOD3xjHYzCgrwYgSB3B8JGHTc/NLCc0f1gpeYyRbrsl2YPB8gR/RgLvEnxCxrI5snLZNiUBImYoOKix+pzUCOAZ2xNWStCnA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(40470700004)(83380400001)(1076003)(47076005)(186003)(81166007)(70586007)(356005)(40460700003)(82310400005)(2616005)(70206006)(26005)(426003)(336012)(6916009)(4326008)(7696005)(54906003)(36756003)(316002)(508600001)(86362001)(8936002)(44832011)(2906002)(6666004)(36860700001)(5660300002)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3540
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	668805e6-5774-4489-fcdf-08da4aa59298
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CVDLQox4ssdRtS/wvPRIvPKKJMo/wgB93k46mTq9eiKqvaAG8SkZ3RFFk9GzDcNIKaSCQ53BO2emN+SHWRwRETyi+A83CuDUb/Bb3sVj7RkhtGMU4M2feOX+MyMx4lK2vxhLjeQx0CgMX0fU8q7Ve9jfyb9Poqi8vZeJLREN55m6uVSX4uGVMZQO2QS6xmGKpnDEK9IGhuwIlQduME9fBNxrEmDR3dvDtByV05CjWe9NG6IyMFx2mPzVhDlrvz46UfxJi0E9N9Un4A/JvEGFqZeYEVB6yCc0UWNRK626hVKj3Yyy4sZwpL6ZrSqPQnieaUF/A/ZMAkuCgsVkHR1CI/+pEMTXmy/eSoI792BnOZdrhARBjtRdgGEVilKBsI0ktX9TCISCy4F1rQO+/gZVTwQecHYFzBvR2v0DHQwR7o2y+ufn3E0qi25b9djln4fModFzwCL9VqUnMcRg9wbdXdTVUnTs66Yk/PYCaeCQElvD1hO/Q2qLQPbJpfRS2eVcn0b5JeMAytdX4Ugr3l94Rw2nsOHCPpeP45GeR65BocC/XwXm5IG8mO78vaWIBzZ+hfedEJUB/EOrU79o0UXeAmefmjssgFqr9PwAlRe3y+6PUWCFtRVff5BPrGCaQ7MqcO1K6VIFPj+d6daiiCNyYriemtuVXSiZ9y050FJkU0vB1Cr2Rmpg4P6JPZ6isZX6
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(36840700001)(46966006)(1076003)(6916009)(7696005)(36756003)(86362001)(81166007)(47076005)(2616005)(316002)(82310400005)(4326008)(336012)(8676002)(36860700001)(70206006)(70586007)(426003)(40460700003)(54906003)(6666004)(8936002)(5660300002)(26005)(508600001)(83380400001)(186003)(44832011)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:50.6312
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 468dd8fa-9af7-4ef8-07ea-08da4aa59806
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5730

We introduced CONFIG_NUMA in a previous patch, and at the current
stage this option is enabled only on x86. In a follow-up patch, we
will enable it for Arm as well, but we still want users to be able
to disable CONFIG_NUMA via Kconfig. In that case, keeping the fake
NUMA API lets the Arm code continue to work with the NUMA-aware
memory allocator and scheduler.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v3 -> v4:
No change.
v2 -> v3:
Add Tested-by.
v1 -> v2:
No change.
---
 xen/arch/arm/include/asm/numa.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e4c4d89192..268a9db055 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -5,6 +5,8 @@
 
 typedef u8 nodeid_t;
 
+#ifndef CONFIG_NUMA
+
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
@@ -24,6 +26,9 @@ extern mfn_t first_valid_mfn;
 #define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
+
+#endif
+
 #define arch_want_default_dmazone() (false)
 
 #endif /* __ARCH_ARM_NUMA_H */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:53:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345795.571508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXao-000853-Sn; Fri, 10 Jun 2022 05:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345795.571508; Fri, 10 Jun 2022 05:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXao-00084u-P3; Fri, 10 Jun 2022 05:53:58 +0000
Received: by outflank-mailman (input) for mailman id 345795;
 Fri, 10 Jun 2022 05:53:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXan-0006br-4D
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:53:57 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2062d.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6dd9b8d-e881-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 07:53:56 +0200 (CEST)
Received: from AS8PR05CA0001.eurprd05.prod.outlook.com (2603:10a6:20b:311::6)
 by AM4PR0802MB2275.eurprd08.prod.outlook.com (2603:10a6:200:63::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 05:53:45 +0000
Received: from VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:311:cafe::20) by AS8PR05CA0001.outlook.office365.com
 (2603:10a6:20b:311::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT026.mail.protection.outlook.com (10.152.18.148) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Fri, 10 Jun 2022 05:53:44 +0000
Received: from c852e8c72b8f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 99BE106C-8132-402C-B53A-215209236EB4.1; 
 Fri, 10 Jun 2022 05:53:38 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c852e8c72b8f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:38 +0000
Received: from AS9PR05CA0005.eurprd05.prod.outlook.com (2603:10a6:20b:488::12)
 by PAXPR08MB7623.eurprd08.prod.outlook.com (2603:10a6:102:241::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 10 Jun
 2022 05:53:34 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:488:cafe::bb) by AS9PR05CA0005.outlook.office365.com
 (2603:10a6:20b:488::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:34 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:34 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:29 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Fri, 10
 Jun 2022 05:53:28 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6dd9b8d-e881-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=fG43HVGRzqgv268RgH4ZlslKXSyLWt+AjAAzjwzu/88XreU0HjoZ1HiIQY43wgnEOaZOG2jG+d1l+2u8+MN9ts6LohKPgslEfYhhOa0ofVKx5OrM9CMXgbkvqeImRr8PzAMyvX+Ric4wid8e/Joq9LlSPr2dzHe0K3Q1QiuM6sxOGZdJCc4oFjcFSrg61nb+KSzJibK1WIZ1QHXXSWJsOvT6oUZ6UpZqXNBRykB05MK7uEdahhv0hsZAoZRQJcPA9CEl+nSGfqrYPhX7jHr70Oigh8OLQKLMG64QN6OareLiicHheuPoe8fuHhWPWR98mz7aVzXsaxWakR0jzNiBYg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=OlC9r1/ueG6jm8gcw95bdK8Z7Zd2N7RdOjjsuu+vs69MqQhcGfJSQG93zqv2oAcco5DquPDRVtOW73i+YD7F5Uv7DhbFR5XtspbG4mZ0lsfY/VkwYzBJ4pnmra1+CTCEhD6NfKTLMmguuAAuBf+TkT9DVHSY/9afynxohCgRIxuyoJWHmOa7V7DDRxebJ2FmsSDWyyaO3YzGiWIbxMkJCK/gdAYxlv3eDuCt+G8AmtNW4VI196GKWj7ph2mq+OVRQahfxVY9ywE8O9ifxs5hnz2cm1be2vePFaYbb7bQndPSIen30BpKNpiAprqNMWG9MOzWQHBCdCbliCh/DNaUhQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=xL6oTVx1ND7vOKqNFhfazXVx4p1w9pO0FxHvDrc2vwwn1OcFsBdCMMHdH5rveBqbiHirUl900js6GHSFfwUnOIYdiswN0vWogY9lqYXTOH5KEXStixjodpxTIDGGMYZBz3BzFiGSo+Z3qwGNGRJI/8UNobRHVJ04aPoLOcPL8Yc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f2fc4f47abb0d7fd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ddq0wSrEF0nEyvtRkGTdBTBFasFvFEJ/2RpAiQy4FzzM4Diz8aixV+ow9uRVMqPqg5O7TqkPmF0dgN7++UToo+qhuJ2oqpZU7W3Nmdj16+ACDzKtUoS7k6zU14kjFSXkBDD54RR8yn1y309NG28C8sGsiWhVYNxAwwr/LZxH0yl3025yXHGneY1vsnFGamesDIaNkKFLHSxnSLk15+67MOCFeZj1YUj6/X9W3jNAfSm2MSeW6HhEJO30Yj+K+5SuOuuMgpKjBZEo/Cvcji3E+eRKEgTqHprI7pNd+P+S0TKku44DVReQq2FLdZk5D4O0lFZGTKm11ctH7mCJg8xryg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=ne7FhMXp4O3E3hp7VmY8xSKeyWXP310ReNPJ1i4bflH7pbDw5sAxO8omrbvxrDFse1OkiRYovzW44cNlXKU1weIY2oa8ad0XR55R9EEL4yzHOsgVD8QsRRBDvO36RU20dtvHPj1VQdeswUbMtjDCeWyvhSAoEGrRWLoXOCazA/T0vTdUV5TxPCf2NzzZpjlrqvkQfieJGYFUJH+nBgao03lq4LLkgMOJF27JdecFuNba4ElSwoVuCIQG54YATvzj3OAeHOYksDERzw+QfsMfErkoSyhrFZ71Qie68B/xhX7Ih0P4K7rucKowWQFNGXD1JsZeGIVh3PbK57+tRIAXdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fe/7SCgP8Par+1oz7XY3pgQuiN9XdtDg0NAdMEO3/oI=;
 b=xL6oTVx1ND7vOKqNFhfazXVx4p1w9pO0FxHvDrc2vwwn1OcFsBdCMMHdH5rveBqbiHirUl900js6GHSFfwUnOIYdiswN0vWogY9lqYXTOH5KEXStixjodpxTIDGGMYZBz3BzFiGSo+Z3qwGNGRJI/8UNobRHVJ04aPoLOcPL8Yc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 2/8] xen/arm: Keep memory nodes in device tree when Xen boots from EFI
Date: Fri, 10 Jun 2022 13:53:10 +0800
Message-ID: <20220610055316.2197571-3-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 171d5972-bd2d-4ed6-b68f-08da4aa59463
X-MS-TrafficTypeDiagnostic:
	PAXPR08MB7623:EE_|VE1EUR03FT026:EE_|AM4PR0802MB2275:EE_
X-Microsoft-Antispam-PRVS:
	<AM4PR0802MB2275290B3803D7A71E4136049EA69@AM4PR0802MB2275.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xqbBZG49TMPTCKs6rgn64JU+WscbTrWspYtSoyrw6puiiJ3/GI9+WB2DIxc/rlXK9rMdhkUtzf6bDk5CEnQZh1pKHozyVMPi2ItA3AkpCCWPhKatOOtIf6Zc5ac8zpx11QQvvbzsY+CbW6DZyyQHcNMmwse/gdYcyMxO6o5O1Ex80TZjsty3pa9rf6TT1YIxJ42EyC8D887KweUa7MuSeTumxmUgXalyZ5pkksBsbLmJH8GPU2xmf6mozAc30X5/GURz7UCgUrNDM0OAj0Wg+6SRxqXsoNnIRniJsWHCAB4pF6h51JepdDX0V9YplDxxktwXQwaGxaX0bOU4ukGjrBl7kaqiQN2Lif7wfgbBviAeFIhGzZpZNa2JYNi5pmhAMV4eD1pkDdONuzTgdw9cbFmSZJUeDro7vOPjnV6rGX/GdsIqXGn/XkgsV1WjcT4/eiEOs72jOTeTTdKaeD+ivZDsgt5wFGTkphVcNJTcMb5gDPKBtuIzSAhzZIke1f/7JpvlDxSh+MydCxJ3nBXqWiFvvleK4TAQ9Uvkf4SV4T0vqeV8FzdtzycliGgp7wPrR2DMXyeSA8Ihy5JrB+xFwrUf2+UH02tgcyCubM2g+JLXkLLlrCG5mBamgTuSsnKLLMnjve5x2RMeJLN22Oi7jP0IzY9nfjmjF+OiBSFfn8AbND1+nmyCKml12o0S55EAWiOVK81A7DIVs2xVvL+smA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(46966006)(40470700004)(36840700001)(26005)(7696005)(86362001)(6666004)(2616005)(426003)(47076005)(336012)(2906002)(6916009)(54906003)(8676002)(4326008)(70586007)(36756003)(316002)(82310400005)(70206006)(81166007)(44832011)(36860700001)(83380400001)(40460700003)(1076003)(186003)(356005)(8936002)(5660300002)(508600001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7623
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5c68f653-54d1-4ece-5c64-08da4aa58e67
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	rRmdD7QZJH6a0xhH7O+JB7iGrlvMeHvQy7k2UW5H3VVWDnSgL4GBGWuQc7nIlaxrtcZwoTVNK5EYglknivZNJAiG06jjKRxKQ5SbiAuY2TwJP+bmTtl8HlDqc/BJyrqSO1DsxrpjpeIIUR7ocOOVVxySITmu7hPf35I9qp/H+cA3G5VBv6iMmLofG8UouIDAKI6NqXQi+kkohqtRG7HKgzWx6Vl5KkcYCQuIbQtW4pJW1ukkeHpuqHHoEGeH7K/o1fLK1WmBh83+mSeFsI1Y1YP6QiXbF9soUGCuRHxZN2Ubsv/sAiKzVWNG/RQeuXCbzUzrW7Oix65kbe5xVEasjHhPsS7mp2Bh4sB++IIv4hFtBpY56id8a1/JuKA2EJxwY47nsUQd0ZesU8GypUA/lpYGtlMQJ1OWiCHK3LB7PydYhmjmwEl8ZseSa1hvea+AypTfcRgYsXuaUV5tsTYEY1PIE5y2qILeQ7CJeuTuPVJMGC7lEnfjG/fb3n0ptiwki9UtU0+U+XBLF9WKN668Zc9p2sYyeflwnY49QM0Jffznro0j9jYltodZs423rLFn9wvVjy2nJFn2DwoKKUdni54e7FmR5QlWowzmF4QB9cwDsHFmPT9u560IH7FItjkhfIGmuzehFzSRrZhY0Y+fLCtZ/0Z4xqRPaQGRGIpnef2wcetX/seVBjO0nT9rC/3T
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(40470700004)(5660300002)(508600001)(82310400005)(44832011)(47076005)(6666004)(426003)(336012)(2906002)(83380400001)(40460700003)(36860700001)(54906003)(6916009)(8936002)(4326008)(2616005)(81166007)(316002)(7696005)(186003)(70206006)(70586007)(8676002)(86362001)(36756003)(1076003)(26005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:44.4874
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 171d5972-bd2d-4ed6-b68f-08da4aa59463
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2275

In the current code, when Xen boots from EFI, it deletes all memory
nodes from the device tree. This works for now, because Xen can get
the memory map from the EFI system table. However, the EFI system
table cannot completely replace the device tree memory nodes: it does
not contain memory NUMA information, and Xen depends on ACPI SRAT or
device tree memory nodes to parse the NUMA mapping of memory blocks.
So in an EFI + DTB boot, Xen has no way to get the numa-node-id of
memory blocks, which makes device-tree-based NUMA support impossible
in that configuration.

So in this patch, we keep the memory nodes in the device tree so that
the NUMA code can parse their numa-node-id properties later.

As a side effect, if early_scan_node still parsed the boot memory
information, bootinfo.mem would accumulate the memory ranges from the
memory nodes twice. So we have to prevent early_scan_node from parsing
memory nodes in an EFI boot.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. Add Rb.
v1 -> v2:
1. Move this patch from later to early of this series.
2. Refine commit message.
---
 xen/arch/arm/bootfdt.c      |  8 +++++++-
 xen/arch/arm/efi/efi-boot.h | 25 -------------------------
 2 files changed, 7 insertions(+), 26 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 29671c8df0..ec81a45de9 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,6 +11,7 @@
 #include <xen/lib.h>
 #include <xen/kernel.h>
 #include <xen/init.h>
+#include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/sort.h>
@@ -367,7 +368,12 @@ static int __init early_scan_node(const void *fdt,
 {
     int rc = 0;
 
-    if ( device_tree_node_matches(fdt, node, "memory") )
+    /*
+     * If Xen has been booted via UEFI, the memory banks are already
+     * populated, so we should skip the parsing here.
+     */
+    if ( !efi_enabled(EFI_BOOT) &&
+         device_tree_node_matches(fdt, node, "memory") )
         rc = process_memory_node(fdt, node, name, depth,
                                  address_cells, size_cells, &bootinfo.mem);
     else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )
diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index e452b687d8..59d93c24a1 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -231,33 +231,8 @@ EFI_STATUS __init fdt_add_uefi_nodes(EFI_SYSTEM_TABLE *sys_table,
     int status;
     u32 fdt_val32;
     u64 fdt_val64;
-    int prev;
     int num_rsv;
 
-    /*
-     * Delete any memory nodes present.  The EFI memory map is the only
-     * memory description provided to Xen.
-     */
-    prev = 0;
-    for (;;)
-    {
-        const char *type;
-        int len;
-
-        node = fdt_next_node(fdt, prev, NULL);
-        if ( node < 0 )
-            break;
-
-        type = fdt_getprop(fdt, node, "device_type", &len);
-        if ( type && strncmp(type, "memory", len) == 0 )
-        {
-            fdt_del_node(fdt, node);
-            continue;
-        }
-
-        prev = node;
-    }
-
    /*
     * Delete all memory reserve map entries. When booting via UEFI,
     * kernel will use the UEFI memory map to find reserved regions.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:54:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345797.571519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXar-0008Qh-C1; Fri, 10 Jun 2022 05:54:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345797.571519; Fri, 10 Jun 2022 05:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXar-0008QN-4l; Fri, 10 Jun 2022 05:54:01 +0000
Received: by outflank-mailman (input) for mailman id 345797;
 Fri, 10 Jun 2022 05:54:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXap-00078y-PJ
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:54:00 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b845b981-e881-11ec-8b38-e96605d6a9a5;
 Fri, 10 Jun 2022 07:53:58 +0200 (CEST)
Received: from AS8P250CA0005.EURP250.PROD.OUTLOOK.COM (2603:10a6:20b:330::10)
 by VI1PR08MB4605.eurprd08.prod.outlook.com (2603:10a6:803:b6::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:55 +0000
Received: from VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:330:cafe::57) by AS8P250CA0005.outlook.office365.com
 (2603:10a6:20b:330::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT006.mail.protection.outlook.com (10.152.18.116) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:54 +0000
Received: ("Tessian outbound 01afcf8ccfad:v120");
 Fri, 10 Jun 2022 05:53:54 +0000
Received: from 1999e13fadf9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9987A261-D355-4830-9504-45351C5B4B42.1; 
 Fri, 10 Jun 2022 05:53:49 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1999e13fadf9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:49 +0000
Received: from AS8P189CA0046.EURP189.PROD.OUTLOOK.COM (2603:10a6:20b:458::18)
 by AS4PR08MB8189.eurprd08.prod.outlook.com (2603:10a6:20b:58c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:47 +0000
Received: from VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:458:cafe::b4) by AS8P189CA0046.outlook.office365.com
 (2603:10a6:20b:458::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:47 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT009.mail.protection.outlook.com (10.152.18.92) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:46 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:45 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.27; Fri, 10
 Jun 2022 05:53:45 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b845b981-e881-11ec-8b38-e96605d6a9a5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=RSVIpQUx1tl/m9VdJg0XDqCj+cd4tn+v1vWc2aYLBXa6/ry1po+IZjsRUb4vugi56uBDPM97cnaYAPoWYUiRC5DtL2wZY4y6iaKkKGhARjq4LY7exhbXT7eB4l+2cXT3cJKIOPgeSEw1oAFuMZpw6uBZaJwQNPLhgxeHXcHhHJxj53hj6YVAj4vfqRwCr4iybCOgkZgTduxQbC9Xv3QcYIxvhUmo28SkidtMfaZry241YHiy715T6if/Jb/sqJb2GtW281SqYFUf94kcmY7EKS4KObXNgqZs2pdp4emedLYQpYG5nyeDbnvAqgFiXziH9QdSmJvUyapfTTcYPUhtHg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=icfZOJNrOUgyvQFQOa5zeweNoNaUqNft+ccqz7fu4Hk=;
 b=i6oV6X1KQUDMbeGvH/6/rEv5ZEhgVJtA8CUaf3xpVNcwDfHEAbx8RyBn23UtHmzPG5sl/g7wKheHxiCBwLVrGWtUczVdJWiFKHkbbVORJWn/iakZD/pf9jPgn5rgjdkuTWTbcPiX+ZgUAesnx5Wi7OPxjc6vz5T8FDCi5jyPRwJz/95CAwYBRDUKbwAZ3TTj0JBMuln4l16SCnD27zEYAH5F2NL+PCld2Cmvb6zfxLdO2Oe79kx12dH7PjzTN3P0f7pGCKhv49J17HLLIdLF7jsN1BGuk+FciiwMn9rnf2lDLQLb9He5fcaNMm9zg/GzpsdaFMSzKraqacIZbjZ3cQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=icfZOJNrOUgyvQFQOa5zeweNoNaUqNft+ccqz7fu4Hk=;
 b=IXHWE6tgM+8wMbmyXFlMd495zvSMJSpPu2j3WLNA9KNWGATxprI8379rlxOVMImimKRQjP3JgBB0Z0KoXp7qXyDfnILgy5zOiOa/tKVZtAbMmmcxKx+Vi9b/6rwenjzKZ6v2PJEaEhlh00So/9zz1LbBiWJL0eIUYxC5T31XvtU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 145405ea45d947d3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kj0p+OqyJaBp7/y8TKdRAQlMUXvIOKL22h76y0Vfz9WPJK9oaBEevWYb4FlfPQctlNcNHkxVAfi4cEZpsuHi0tK5ALNMElR3zemG47cb/yG4KgvNgOwC7RWY4F1NXuIZLaeD9EtALxC+l5VA9pBdtda6rtwr+CIoTjEG7npwOhc4BGJ9IJC0dd0lUXV4h0PoNe7iobYK/3oZ8vDOlABHEiuCzYEBHQovF8OuMbqFuvMiHZDjQ0IysPJWvWsuRG+MJO5nKMgifo7i1BZtvA5huTMYmiEks93k5VBARmS/qmqwy/KQuPMIQ5leIHzgWrm30W5RvKnXmeqBfrmwb6lirQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=icfZOJNrOUgyvQFQOa5zeweNoNaUqNft+ccqz7fu4Hk=;
 b=GnINzBG0yJVGck0YXndGJiklYSOB2WoHwc//vjE7sznUID/AfH1WYPOJeMWRpQFHp8JB/xmdF4q6R+zH8EsIbgHYSEvD+fEpQrI0pkHyhb2oOFi9fS3N0AzS32fXO36lAKPYv7r6L/vTG0twwolbIEuLHux0+XEH4SMVHIMD0fOh72GczuNenhLRmQ0v1M98zXfqBjDnmA/dF+6+GLxuquiD+11u5F7Y3Bwy5UaTEM6e5fxFlHFJJ7NeatNL7VWAsIefXFhlVWPX6txv+bRA9q0pBuUcuUFKB9/uL0YqO70rKCw6tRjd4IVYHubxDCv9Dbo5NqiFdQZsAEIp76qS7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=icfZOJNrOUgyvQFQOa5zeweNoNaUqNft+ccqz7fu4Hk=;
 b=IXHWE6tgM+8wMbmyXFlMd495zvSMJSpPu2j3WLNA9KNWGATxprI8379rlxOVMImimKRQjP3JgBB0Z0KoXp7qXyDfnILgy5zOiOa/tKVZtAbMmmcxKx+Vi9b/6rwenjzKZ6v2PJEaEhlh00So/9zz1LbBiWJL0eIUYxC5T31XvtU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 7/8] xen/x86: add detection of memory interleaves for different nodes
Date: Fri, 10 Jun 2022 13:53:15 +0800
Message-ID: <20220610055316.2197571-8-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 567ad5cc-a726-446f-5c48-08da4aa59aac
X-MS-TrafficTypeDiagnostic:
	AS4PR08MB8189:EE_|VE1EUR03FT006:EE_|VI1PR08MB4605:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB460594C6E68E76CD7761A6B69EA69@VI1PR08MB4605.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 eCmcb8RlSzoSzOWYuBdV/hRNswM3Bl82ssDPaZ367co89FIZDxaCEqRNxYuxJsGlnzy6zPYs727fjel4LmbiqWIdjhHZwawj1SR7z0Fo0Kx/BDFPUIiFF2mg9S2GJWtOF4WsuD5pl3SxbKWaJcWWQNkG3RuKiOHM9EMGasFgn7KceQoFHJccJUo74PEXNVwyoEbQvi3urUdr0OIvYoPCOC55FbXiOVLgptImo8932I3q9v/BDLaqsIuEwb4VUU8q3PtBm6TGXfoZA3qvVu9zeyTOjioIaYOlN+5fW4w7aQBQTi8upu04jcShckgETcW9XKv7iEH+tX1t3eeUcEsLuXWBp5ZL4WfAKLmSz3t0lJfsFPHK8jyvGbgKaCmj8UqRJUckFbb/QYGUHIr/0a1wuD68B2fiJ7g7KZPHzPV6K6wkzRywLLI0wR/vzQiHXx7s4QgHCdbBBCE2Ej09I+v6RGgiKfSsiDf/N1gADBwVrmmJ376X7J1xXocYRqP9PoSrbnh7eEu+luN9nf90oORUBNe9xtALmNZxYqifJ1CbSTnKg1VMyzFOtLj8cnTQSMs3JOydw0+73Saa0vSSwwuzfnaU0cTEY+iBYhjBZC2QJA4Eci4IMan0s1K/NAcmjdD33LpoGK4/O+ODfcFzAdFh/LfcB4NmW26mC+VJh7BidnZn2H40Qiflkal5MH9qujn+gxVMau+o4KZt1RwASDW7Vg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(36840700001)(46966006)(47076005)(70586007)(2906002)(83380400001)(70206006)(4326008)(7696005)(8676002)(6666004)(8936002)(336012)(316002)(426003)(54906003)(6916009)(356005)(44832011)(81166007)(36756003)(2616005)(508600001)(86362001)(1076003)(186003)(82310400005)(5660300002)(40460700003)(36860700001)(26005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8189
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	30777c25-eee9-41d7-5437-08da4aa595c6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2WsY2wNBMaejaDQW3xuqkc1miFDjgA14kpnzbCQdciLZM+4rlYBErH0HA5sZnr99bXZ7pCiE6aiFP94x69NVpc99WR+FIN1+DZkmbPyLYORNyXOIZvWUPQr6nsJJYK0RNcT+vBEen2hqnLbMo/5XY60pkWIuJlfuJVSXIGTDuXwTkURHGhBm8rx5IfNDZbvKSNn+tp84wWtffjCTirPS25oa/dxUbeIwL7KhtJHV8KwoDC2YT2OqAklg2kPnrnnVigHiTQiHTqJcs5guMJKpZfbO7J1T8ZMCpLx9/+qVTOs9XzjTjMQcChv8fDQSh4yZf9dDRqr0L/uE8YyVhuj0mXCKjPjY/Pn2xdasQ6JjykC2ymeddS+H3yE4F0bVDP3WfUcxzCdwvHieXAwfkdWXczTTOyxPsuSB4OGKuU2Z83fNqmGmIC76zeZbg2xw3XZRL20Q0q7iSpIh+C1eAB71CMzzKI5rwKY1B9Ow+HeimFP4Wxqfab6ZHUw69UeB6YKOsPtQ8l1bHejiiV9uuK1GJ+S9GapSeH/MF9dLNWDLv0Okpo7qHTcGOVUO7HiEXJhKp1ClqQiK5/Va28LZYx23zYkeYsX/x13VJAJukDuKJeGfgp0XK0M6JNw/9UkaFIuiNTXTxpfJxrFuihP4Iqamb2NiPLNRluGes1BuM1ivdaZcJRqIeB6KqojeXryjuF93
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(40470700004)(46966006)(36840700001)(70206006)(8676002)(70586007)(8936002)(26005)(4326008)(81166007)(508600001)(5660300002)(44832011)(36860700001)(40460700003)(47076005)(426003)(336012)(2616005)(2906002)(316002)(83380400001)(36756003)(82310400005)(186003)(1076003)(6916009)(54906003)(6666004)(86362001)(7696005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:54.9986
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 567ad5cc-a726-446f-5c48-08da4aa59aac
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4605

One NUMA node may contain several memory blocks. In the current code,
Xen maintains a memory range for each node that covers all of its
memory blocks. The problem is that, in the gap between two of a node's
memory blocks, there may be memory blocks that do not belong to this
node (remote memory blocks); the node's memory range is then expanded
to cover these remote memory blocks as well.

A node's memory range that contains other nodes' memory is obviously
unreasonable. It means the current NUMA code can only support nodes
without interleaved memory blocks. However, on a physical machine, the
address ranges of multiple nodes can be interleaved.

So in this patch, we add code to detect memory interleaving between
different nodes. NUMA initialization fails and error messages are
printed when Xen detects such a hardware configuration.

As we have already checked the node's range before, for a non-empty
node the "nd->end == end && nd->start == start" check is unnecessary,
so we remove it from conflicting_memblks as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v5 -> v6:
1. Use comma to replace dash for [start, end].
2. Add Rb.
v4 -> v5:
1. Remove "nd->end == end && nd->start == start" from
   conflicting_memblks.
2. Use case NO_CONFLICT instead of "default".
3. Correct wrong "node" to "pxm" in print message.
4. Remove unnecessary "else" to remove the indent depth.
5. Convert all ranges to proper mathematical interval
   representation.
6. Fix code-style comments.
v3 -> v4:
1. Drop "ERR" prefix for enumeration, and remove init value.
2. Use "switch case" for enumeration, and add "default:"
3. Use "PXM" in log messages.
4. Use unsigned int for node memory block id.
5. Fix some code-style comments.
6. Use "nd->end" in node range expansion check.
v2 -> v3:
1. Merge the check code from a separate function into
   conflicting_memblks. This reduces the number of passes
   over the node memory blocks.
2. Use an enumeration to indicate conflict check status.
3. Use a pointer to get conflict memory block id.
v1 -> v2:
1. Update the description to say that what we are after is no
   memory interleaving between different nodes.
2. Only update node range when it passes the interleave check.
3. Don't use full upper-case for "node".
---
 xen/arch/x86/srat.c | 139 ++++++++++++++++++++++++++++++++------------
 1 file changed, 101 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 8ffe43bdfe..3d02520a5a 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -42,6 +42,12 @@ static struct node node_memblk_range[NR_NODE_MEMBLKS];
 static nodeid_t memblk_nodeid[NR_NODE_MEMBLKS];
 static __initdata DECLARE_BITMAP(memblk_hotplug, NR_NODE_MEMBLKS);
 
+enum conflicts {
+	NO_CONFLICT,
+	OVERLAP,
+	INTERLEAVE,
+};
+
 static inline bool node_found(unsigned idx, unsigned pxm)
 {
 	return ((pxm2node[idx].pxm == pxm) &&
@@ -119,20 +125,45 @@ int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
 	return 0;
 }
 
-static __init int conflicting_memblks(paddr_t start, paddr_t end)
+static
+enum conflicts __init conflicting_memblks(nodeid_t nid, paddr_t start,
+					  paddr_t end, paddr_t nd_start,
+					  paddr_t nd_end, unsigned int *mblkid)
 {
-	int i;
+	unsigned int i;
 
+	/*
+	 * Scan all recorded nodes' memory blocks to check conflicts:
+	 * Overlap or interleave.
+	 */
 	for (i = 0; i < num_node_memblks; i++) {
 		struct node *nd = &node_memblk_range[i];
+
+		*mblkid = i;
+
+		/* Skip 0 bytes node memory block. */
 		if (nd->start == nd->end)
 			continue;
+		/*
+		 * Use the memblk range to check for memblk overlaps, including
+		 * the self-overlap case. As nd's range is non-empty, this also
+		 * covers the special case "nd->end == end && nd->start == start".
+		 */
 		if (nd->end > start && nd->start < end)
-			return i;
-		if (nd->end == end && nd->start == start)
-			return i;
+			return OVERLAP;
+
+		/*
+		 * Use the node memory range to check whether the new range
+		 * contains memory from other nodes (the interleave check).
+		 * We only need to check the full-containment case, because
+		 * overlaps have been checked above.
+		 */
+		if (nid != memblk_nodeid[i] &&
+		    nd->start >= nd_start && nd->end <= nd_end)
+			return INTERLEAVE;
 	}
-	return -1;
+
+	return NO_CONFLICT;
 }
 
 static __init void cutoff_node(int i, paddr_t start, paddr_t end)
@@ -275,10 +306,12 @@ acpi_numa_processor_affinity_init(const struct acpi_srat_cpu_affinity *pa)
 void __init
 acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 {
+	struct node *nd;
+	paddr_t nd_start, nd_end;
 	paddr_t start, end;
 	unsigned pxm;
 	nodeid_t node;
-	int i;
+	unsigned int i;
 
 	if (srat_disabled())
 		return;
@@ -310,44 +343,74 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		bad_srat();
 		return;
 	}
+
+	/*
+	 * For a node that already has some memory blocks, we temporarily
+	 * expand the node memory range to check for memory interleaving
+	 * with other nodes. We do not use this temporary node memory
+	 * range to check overlaps, because it would mask overlaps within
+	 * the same node.
+	 *
+	 * A node with 0 bytes of memory doesn't need this expansion.
+	 */
+	nd_start = start;
+	nd_end = end;
+	nd = &nodes[node];
+	if (nd->start != nd->end) {
+		if (nd_start > nd->start)
+			nd_start = nd->start;
+
+		if (nd_end < nd->end)
+			nd_end = nd->end;
+	}
+
 	/* It is fine to add this area to the nodes data it will be used later*/
-	i = conflicting_memblks(start, end);
-	if (i < 0)
-		/* everything fine */;
-	else if (memblk_nodeid[i] == node) {
-		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
-		                !test_bit(i, memblk_hotplug);
-
-		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
-		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
-		       node_memblk_range[i].start, node_memblk_range[i].end);
-		if (mismatch) {
-			bad_srat();
-			return;
+	switch (conflicting_memblks(node, start, end, nd_start, nd_end, &i)) {
+	case OVERLAP:
+		if (memblk_nodeid[i] == node) {
+			bool mismatch = !(ma->flags &
+					  ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
+			                !test_bit(i, memblk_hotplug);
+
+			printk("%sSRAT: PXM %u [%"PRIpaddr", %"PRIpaddr"] overlaps with itself [%"PRIpaddr", %"PRIpaddr"]\n",
+			       mismatch ? KERN_ERR : KERN_WARNING, pxm, start,
+			       end - 1, node_memblk_range[i].start,
+			       node_memblk_range[i].end - 1);
+			if (mismatch) {
+				bad_srat();
+				return;
+			}
+			break;
 		}
-	} else {
+
+		printk(KERN_ERR
+		       "SRAT: PXM %u [%"PRIpaddr", %"PRIpaddr"] overlaps with PXM %u [%"PRIpaddr", %"PRIpaddr"]\n",
+		       pxm, start, end - 1, node_to_pxm(memblk_nodeid[i]),
+		       node_memblk_range[i].start,
+		       node_memblk_range[i].end - 1);
+		bad_srat();
+		return;
+
+	case INTERLEAVE:
 		printk(KERN_ERR
-		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
-		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
-		       node_memblk_range[i].start, node_memblk_range[i].end);
+		       "SRAT: PXM %u [%"PRIpaddr", %"PRIpaddr"] interleaves with PXM %u memblk [%"PRIpaddr", %"PRIpaddr"]\n",
+		       pxm, nd_start, nd_end - 1, node_to_pxm(memblk_nodeid[i]),
+		       node_memblk_range[i].start, node_memblk_range[i].end - 1);
 		bad_srat();
 		return;
+
+	case NO_CONFLICT:
+		break;
 	}
+
 	if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)) {
-		struct node *nd = &nodes[node];
-
-		if (!node_test_and_set(node, memory_nodes_parsed)) {
-			nd->start = start;
-			nd->end = end;
-		} else {
-			if (start < nd->start)
-				nd->start = start;
-			if (nd->end < end)
-				nd->end = end;
-		}
+		node_set(node, memory_nodes_parsed);
+		nd->start = nd_start;
+		nd->end = nd_end;
 	}
-	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIpaddr"-%"PRIpaddr"%s\n",
-	       node, pxm, start, end,
+
+	printk(KERN_INFO "SRAT: Node %u PXM %u [%"PRIpaddr", %"PRIpaddr"]%s\n",
+	       node, pxm, start, end - 1,
 	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
 
 	node_memblk_range[num_node_memblks].start = start;
@@ -396,7 +459,7 @@ static int __init nodes_cover_memory(void)
 
 		if (start < end) {
 			printk(KERN_ERR "SRAT: No PXM for e820 range: "
-				"%"PRIpaddr" - %"PRIpaddr"\n", start, end);
+				"[%"PRIpaddr", %"PRIpaddr"]\n", start, end - 1);
 			return 0;
 		}
 	}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:54:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345799.571525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXas-00006f-Db; Fri, 10 Jun 2022 05:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345799.571525; Fri, 10 Jun 2022 05:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXas-0008WN-3w; Fri, 10 Jun 2022 05:54:02 +0000
Received: by outflank-mailman (input) for mailman id 345799;
 Fri, 10 Jun 2022 05:54:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXaq-00078y-PR
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:54:01 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0618.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b86ad248-e881-11ec-8b38-e96605d6a9a5;
 Fri, 10 Jun 2022 07:53:58 +0200 (CEST)
Received: from AS9PR06CA0366.eurprd06.prod.outlook.com (2603:10a6:20b:460::6)
 by VI1PR08MB5422.eurprd08.prod.outlook.com (2603:10a6:803:12e::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 05:53:54 +0000
Received: from VE1EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:460:cafe::6f) by AS9PR06CA0366.outlook.office365.com
 (2603:10a6:20b:460::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT041.mail.protection.outlook.com (10.152.19.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:53 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Fri, 10 Jun 2022 05:53:53 +0000
Received: from 2aee9bf77758.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FF9551B2-4BA4-46DC-8346-2B400D2B4580.1; 
 Fri, 10 Jun 2022 05:53:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2aee9bf77758.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:47 +0000
Received: from AM7PR02CA0019.eurprd02.prod.outlook.com (2603:10a6:20b:100::29)
 by AM6PR08MB3864.eurprd08.prod.outlook.com (2603:10a6:20b:8e::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 10 Jun
 2022 05:53:44 +0000
Received: from AM5EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::30) by AM7PR02CA0019.outlook.office365.com
 (2603:10a6:20b:100::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT044.mail.protection.outlook.com (10.152.17.56) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:44 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:42 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b86ad248-e881-11ec-8b38-e96605d6a9a5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=NKIKsuhwg6NHQyPfd4H1aAMUNMkPpGJgXSyLolAjD7UD/cFaKUGuSX8X+nFxJ15GJIsGyhxVmuZ9/yr16yCaqY17xXlMpb6hR3mvv5BRRQjdRF/ByaBOC4pXDS/BII/A9pF7D64XlSSab64kFu0WOEQ8LAgOD0QYVpy5h4DqsKxYgyl/PZxHgxQm8sAafpqtNAgddzeCkU5EQtv+Bi+lknjiLPIJ08puo3kZbcbXf4JfO2lw3dRDQTYfY+XWd8gXHdfbpQjCNkWRa4gE54Ms25+ia8DPQ1rD8/OhWWuA9k7rA4S84LJxh5JLl58zbP+e2qUBQmlYee/bwHjJHkSQkA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YtTv1hIwjELwhmX84zR7R+LRlUkqarX1yb6jS7tn1R4=;
 b=VTBWJLjseepp3Ixtv0PBBS2+n19xx/0a88//HtySbawr38rSF2eHss+T1YoagMF8EE+QMKshPfJ7ABGP6Tsw9FcF86KEgYY+bEIBqNAjKbLQ8jMgwRds2wbMGmAfD7CRbLxkIEOdL+MUBGHZE75ED5irJkPvWl3klonf6x5/98Ma7VNF4ro+bufL9npyfG/wfLA6KE9YDDXiOMQWTRqyPb7kDcZAgcyYR53hsG0+NAgLyTzu0d9y4jloPbcT6xUBX7Vc9vJy4tjZhQvvyulT+wQPqvcQWkUOfn/8o9qO2KCIV40UXVnxDxNdAjlPWgaXwoTwO0LBzvw5467lv7cSjQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YtTv1hIwjELwhmX84zR7R+LRlUkqarX1yb6jS7tn1R4=;
 b=ERyGcs6zLVtmtlfUFnraYd+RnmZbeQcreQug1uJGVq/ZmmkMaC/n7OqGtI4UOwEyYbjHPRmr6PXn3MdLbl4nT6MODLjGuFpcFIpGc3isWfy5HNjpZIza15TPYu3YgEQy9xyOM/bswZ47O6qwHboqqjC4HLctcGPEzfqeIuy+3n4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 90f9115dc96b2a2f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cXtUBV30FTrjgt+2vmyRNVGDJ9boZ2xnuAlcGcYdSDWPUV1lLYerpeMMzz3202Omq5TswlcWfEuJ4YUIrApUs0qj/QuBWvp/31jrNJ4pBOXyn1p0TsQhVczxTl4LhHkewXgJxgpb2ZscUYp5c3X4avyL13PjgyrSOBKYudCu8RKqASNyKYC3JyPimsB14f3dTiCx0BjBs4GhIssuWVNGUwhW/rghN9ukB+j8QCzONaWpoQcSYU2QfDFkUr/+b4iAZhArgiQKuuK3s2g8SGTGHX7e/Hrotna1W71/q8yi8Q8cOhWpStE/u+3BdMAcobSTIg55nxpSJcreteM0t7+xog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YtTv1hIwjELwhmX84zR7R+LRlUkqarX1yb6jS7tn1R4=;
 b=jiCbxPZPInsleCVWOtE3Wjum31vNC5YCKu8OZpzv1gNcNGf2gAJcH2fJXCepV315YvpPA29CrGrHMH0v4y/+tEAZsSeZ7KK/FpxpfAMfHkEYpQ3lSBZzfi/BXWLW6nD8IGKPC/8zbg+C2YbwA4tBs4E4/PZD9UBdeeEmRthpQUFZckyexiw4yhC3zFJBnSsdDlp0nRfexNlP9JzRjzMUQ5wQ8zQB/MOXj3JTMHpKvEVbyznekTaSKE2qdj5FpVcG+CUeAB2x5wjHMd1MUCpg2dywcJRMTTpVayxjYSPj0Fq0GnX2ki2N2d8Pz86x8TUc/3evgYY1djdb9LiP0b6MIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YtTv1hIwjELwhmX84zR7R+LRlUkqarX1yb6jS7tn1R4=;
 b=ERyGcs6zLVtmtlfUFnraYd+RnmZbeQcreQug1uJGVq/ZmmkMaC/n7OqGtI4UOwEyYbjHPRmr6PXn3MdLbl4nT6MODLjGuFpcFIpGc3isWfy5HNjpZIza15TPYu3YgEQy9xyOM/bswZ47O6qwHboqqjC4HLctcGPEzfqeIuy+3n4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>
Subject: [PATCH v6 6/8] xen/x86: use paddr_t for addresses in NUMA node structure
Date: Fri, 10 Jun 2022 13:53:14 +0800
Message-ID: <20220610055316.2197571-7-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 0d80f136-8adc-4d32-46b7-08da4aa599ec
X-MS-TrafficTypeDiagnostic:
	AM6PR08MB3864:EE_|VE1EUR03FT041:EE_|VI1PR08MB5422:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB54225928E0831DE193C75CB59EA69@VI1PR08MB5422.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 keccJe/YM9mtW1kZVaz/GVHp0cOMLOVwGXe0iUbMR8vTYch6V2NpidGo6mXg9Yob09r3nO5rRPPd4qPXR9XqCn3Z05PVrHeDwm76iI1Ae3ZcovWZVY4SeTG8HtiOZvc94iG9ixyMv1x/SdIwN+7koUirbsCLPmpCw+Ae7js6DZWChtBPsgSK1T+mZ3fFLWImUVO/yHre/68eycmz3Gf1lgGD/IoeWzd8KWebOXoboynfYiT0dOMntTXjG5Y6XnuFrj6WgyEJ+nPtr0ylPh9TV8eMTMNHGaEHXnI53f3qP65Ib527HB8GvPEpnkkAKBMAB/V+YdeDOXsUUw6lSA7by+Ggk+VBzElBanERyLjApR5H11vrpU6W+4BF6v24lSH2vSqEy7uZRNA6t1+FM+S69dK1Q51xBQbgOII4YiqvtOhfsghoM3EgoMczgQz8QsFMjQxmdr40qQgNrR2TupRtPQzCOjLBqWkCdboKhGaLxQIRTT5GJam2ti5CpSHSH5edX4DVKChDfJ53xEa/jTdSP9DD44f3YUs7QtYHJhQ4Ms0raOLIydx+6Wbyh3q0bLHuokg2mMy4PpqH+V08nrr6mFOtvvt7j2Uh4IfWok/E6fl3Nd0cmRiVK4tAxlErEE6IE+BlCDv7tMuYv6DhnNjkwIvXTYaIr8sprKuKVaegzaFnyAQcHmgX6B0DitdPrsg7Hg5WK5kz/SnvVdNNT4/AtA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(46966006)(36840700001)(40470700004)(36860700001)(40460700003)(508600001)(36756003)(83380400001)(47076005)(426003)(82310400005)(336012)(2906002)(44832011)(5660300002)(186003)(6916009)(70206006)(8676002)(54906003)(316002)(2616005)(1076003)(8936002)(70586007)(86362001)(4326008)(81166007)(6666004)(7696005)(356005)(26005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3864
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	98df04af-478d-4930-b939-08da4aa59481
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EIbmVwp9cvP9KVpFDOfm6bbD6gOx0Un++wSivO4Issb4qiAJ10NR5SoCgrOgGA/7GI0noX5LPCODQyq47kCdR7o1r78RiCEM+T+n6pKRnyYsXkEkc0pnWvrsK3+Xrqgf7XMhpzsv1zhwVs0KieY4RAONo3zwLoSy0m9ZDbTTocp4CwuIMlFqIXRzoKShKT8SWRn1+8Fn/uXCRApq6wxRwhmG3Mh2YtcaCl09haBlK0dEkVo4njRW/92cRpU5MEh3QqhiLxYcnqiPPA6LEfMyTCr2EZW61N1C/9jrZp76XgjnbvxCfhyGsGTiowNz5w50GU+ncHTKhtci5zSsrVw7A9CzDNHAWi/rg48IZ/nhxXrg/n0zXrUMe9LzZU/zr4zHm9pN4BQCqtHXGSjSMmAXdgyux5TC1shPa8iB53IjY5+En5l1s/tI1Le5flMQ0UM6Wv8tux5Kqgcle2rPan6sXw4LtyS7r6Btr3gQeOn8/92+NZ2iUYyeUKFih11UM9YMUdztVcD6ll/8qY+cuEACR78Jfg6s/gEHsqo23XJsYBhlmFOeyQVKudemp/cDWJ742m0Bgj7ptI9EPGoBSvwIdtgq0ilQ2NT4mWke2jNokosFjw2eOCI3o8y9Azk0wd6Aip7p4zehc07i94oYsofNstma7hW7ixqWgxuSjJEySxRlZBJTNaczZY03nEDu3Y9o
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(47076005)(426003)(7696005)(186003)(336012)(1076003)(4326008)(36756003)(508600001)(26005)(2616005)(6916009)(54906003)(2906002)(6666004)(8676002)(36860700001)(8936002)(82310400005)(44832011)(316002)(5660300002)(40460700003)(83380400001)(81166007)(86362001)(70206006)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:53:53.7252
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0d80f136-8adc-4d32-46b7-08da4aa599ec
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5422

The NUMA node structure "struct node" uses u64 for node memory
ranges. To allow other architectures to reuse this NUMA code,
replace u64 with paddr_t, and use pfn_to_paddr and paddr_to_pfn
in place of explicit shift operations. The related PRIx64
specifiers in print messages are replaced by PRIpaddr at the
same time, and some being-phased-out types like u64 on the
lines touched are also converted to uint64_t or unsigned long.

Tested-by: Jiamei Xie <jiamei.xie@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Add Tested-by.
v2 -> v3:
1. Use uint64_t for size in acpi_scan_nodes, to keep it
   consistent with numa_emulation.
2. Add Tested-by.
v1 -> v2:
1. Drop useless cast.
2. Use initializers of the variables.
3. Replace u64 by uint64_t.
4. Use unsigned long for start_pfn and end_pfn.
---
 xen/arch/x86/include/asm/numa.h |  8 ++++----
 xen/arch/x86/numa.c             | 32 +++++++++++++++-----------------
 xen/arch/x86/srat.c             | 25 +++++++++++++------------
 3 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 5d8385f2e1..c32ccffde3 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -18,7 +18,7 @@ extern cpumask_t     node_to_cpumask[];
 #define node_to_cpumask(node)    (node_to_cpumask[node])
 
 struct node { 
-	u64 start,end; 
+	paddr_t start, end;
 };
 
 extern int compute_hash_shift(struct node *nodes, int numnodes,
@@ -38,7 +38,7 @@ extern void numa_set_node(int cpu, nodeid_t node);
 extern nodeid_t setup_node(unsigned int pxm);
 extern void srat_detect_node(int cpu);
 
-extern void setup_node_bootmem(nodeid_t nodeid, u64 start, u64 end);
+extern void setup_node_bootmem(nodeid_t nodeid, paddr_t start, paddr_t end);
 extern nodeid_t apicid_to_node[];
 extern void init_cpu_to_node(void);
 
@@ -76,9 +76,9 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 				 NODE_DATA(nid)->node_spanned_pages)
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
-extern int valid_numa_range(u64 start, u64 end, nodeid_t node);
+extern int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node);
 
-void srat_parse_regions(u64 addr);
+void srat_parse_regions(paddr_t addr);
 extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 680b7d9002..627ae8aa95 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -162,12 +162,10 @@ int __init compute_hash_shift(struct node *nodes, int numnodes,
     return shift;
 }
 /* initialize NODE_DATA given nodeid and start/end */
-void __init setup_node_bootmem(nodeid_t nodeid, u64 start, u64 end)
-{ 
-    unsigned long start_pfn, end_pfn;
-
-    start_pfn = start >> PAGE_SHIFT;
-    end_pfn = end >> PAGE_SHIFT;
+void __init setup_node_bootmem(nodeid_t nodeid, paddr_t start, paddr_t end)
+{
+    unsigned long start_pfn = paddr_to_pfn(start);
+    unsigned long end_pfn = paddr_to_pfn(end);
 
     NODE_DATA(nodeid)->node_start_pfn = start_pfn;
     NODE_DATA(nodeid)->node_spanned_pages = end_pfn - start_pfn;
@@ -198,11 +196,12 @@ void __init numa_init_array(void)
 static int numa_fake __initdata = 0;
 
 /* Numa emulation */
-static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
+static int __init numa_emulation(unsigned long start_pfn,
+                                 unsigned long end_pfn)
 {
     int i;
     struct node nodes[MAX_NUMNODES];
-    u64 sz = ((end_pfn - start_pfn)<<PAGE_SHIFT) / numa_fake;
+    uint64_t sz = pfn_to_paddr(end_pfn - start_pfn) / numa_fake;
 
     /* Kludge needed for the hash function */
     if ( hweight64(sz) > 1 )
@@ -218,9 +217,9 @@ static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
     memset(&nodes,0,sizeof(nodes));
     for ( i = 0; i < numa_fake; i++ )
     {
-        nodes[i].start = (start_pfn<<PAGE_SHIFT) + i*sz;
+        nodes[i].start = pfn_to_paddr(start_pfn) + i * sz;
         if ( i == numa_fake - 1 )
-            sz = (end_pfn<<PAGE_SHIFT) - nodes[i].start;
+            sz = pfn_to_paddr(end_pfn) - nodes[i].start;
         nodes[i].end = nodes[i].start + sz;
         printk(KERN_INFO "Faking node %d at %"PRIx64"-%"PRIx64" (%"PRIu64"MB)\n",
                i,
@@ -246,6 +245,8 @@ static int __init numa_emulation(u64 start_pfn, u64 end_pfn)
 void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
 { 
     int i;
+    paddr_t start = pfn_to_paddr(start_pfn);
+    paddr_t end = pfn_to_paddr(end_pfn);
 
 #ifdef CONFIG_NUMA_EMU
     if ( numa_fake && !numa_emulation(start_pfn, end_pfn) )
@@ -253,17 +254,15 @@ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
 #endif
 
 #ifdef CONFIG_ACPI_NUMA
-    if ( !numa_off && !acpi_scan_nodes((u64)start_pfn << PAGE_SHIFT,
-         (u64)end_pfn << PAGE_SHIFT) )
+    if ( !numa_off && !acpi_scan_nodes(start, end) )
         return;
 #endif
 
     printk(KERN_INFO "%s\n",
            numa_off ? "NUMA turned off" : "No NUMA configuration found");
 
-    printk(KERN_INFO "Faking a node at %016"PRIx64"-%016"PRIx64"\n",
-           (u64)start_pfn << PAGE_SHIFT,
-           (u64)end_pfn << PAGE_SHIFT);
+    printk(KERN_INFO "Faking a node at %"PRIpaddr"-%"PRIpaddr"\n",
+           start, end);
     /* setup dummy node covering all memory */
     memnode_shift = BITS_PER_LONG - 1;
     memnodemap = _memnodemap;
@@ -276,8 +275,7 @@ void __init numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn)
     for ( i = 0; i < nr_cpu_ids; i++ )
         numa_set_node(i, 0);
     cpumask_copy(&node_to_cpumask[0], cpumask_of(0));
-    setup_node_bootmem(0, (u64)start_pfn << PAGE_SHIFT,
-                    (u64)end_pfn << PAGE_SHIFT);
+    setup_node_bootmem(0, start, end);
 }
 
 void numa_add_cpu(int cpu)
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index cfe24c7e78..8ffe43bdfe 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -104,7 +104,7 @@ nodeid_t setup_node(unsigned pxm)
 	return node;
 }
 
-int valid_numa_range(u64 start, u64 end, nodeid_t node)
+int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
 {
 	int i;
 
@@ -119,7 +119,7 @@ int valid_numa_range(u64 start, u64 end, nodeid_t node)
 	return 0;
 }
 
-static __init int conflicting_memblks(u64 start, u64 end)
+static __init int conflicting_memblks(paddr_t start, paddr_t end)
 {
 	int i;
 
@@ -135,7 +135,7 @@ static __init int conflicting_memblks(u64 start, u64 end)
 	return -1;
 }
 
-static __init void cutoff_node(int i, u64 start, u64 end)
+static __init void cutoff_node(int i, paddr_t start, paddr_t end)
 {
 	struct node *nd = &nodes[i];
 	if (nd->start < start) {
@@ -275,7 +275,7 @@ acpi_numa_processor_affinity_init(const struct acpi_srat_cpu_affinity *pa)
 void __init
 acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 {
-	u64 start, end;
+	paddr_t start, end;
 	unsigned pxm;
 	nodeid_t node;
 	int i;
@@ -318,7 +318,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		bool mismatch = !(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) !=
 		                !test_bit(i, memblk_hotplug);
 
-		printk("%sSRAT: PXM %u (%"PRIx64"-%"PRIx64") overlaps with itself (%"PRIx64"-%"PRIx64")\n",
+		printk("%sSRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with itself (%"PRIpaddr"-%"PRIpaddr")\n",
 		       mismatch ? KERN_ERR : KERN_WARNING, pxm, start, end,
 		       node_memblk_range[i].start, node_memblk_range[i].end);
 		if (mismatch) {
@@ -327,7 +327,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 		}
 	} else {
 		printk(KERN_ERR
-		       "SRAT: PXM %u (%"PRIx64"-%"PRIx64") overlaps with PXM %u (%"PRIx64"-%"PRIx64")\n",
+		       "SRAT: PXM %u (%"PRIpaddr"-%"PRIpaddr") overlaps with PXM %u (%"PRIpaddr"-%"PRIpaddr")\n",
 		       pxm, start, end, node_to_pxm(memblk_nodeid[i]),
 		       node_memblk_range[i].start, node_memblk_range[i].end);
 		bad_srat();
@@ -346,7 +346,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 				nd->end = end;
 		}
 	}
-	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIx64"-%"PRIx64"%s\n",
+	printk(KERN_INFO "SRAT: Node %u PXM %u %"PRIpaddr"-%"PRIpaddr"%s\n",
 	       node, pxm, start, end,
 	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
 
@@ -369,7 +369,7 @@ static int __init nodes_cover_memory(void)
 
 	for (i = 0; i < e820.nr_map; i++) {
 		int j, found;
-		unsigned long long start, end;
+		paddr_t start, end;
 
 		if (e820.map[i].type != E820_RAM) {
 			continue;
@@ -396,7 +396,7 @@ static int __init nodes_cover_memory(void)
 
 		if (start < end) {
 			printk(KERN_ERR "SRAT: No PXM for e820 range: "
-				"%016Lx - %016Lx\n", start, end);
+				"%"PRIpaddr" - %"PRIpaddr"\n", start, end);
 			return 0;
 		}
 	}
@@ -432,7 +432,7 @@ static int __init cf_check srat_parse_region(
 	return 0;
 }
 
-void __init srat_parse_regions(u64 addr)
+void __init srat_parse_regions(paddr_t addr)
 {
 	u64 mask;
 	unsigned int i;
@@ -457,7 +457,7 @@ void __init srat_parse_regions(u64 addr)
 }
 
 /* Use the information discovered above to actually set up the nodes. */
-int __init acpi_scan_nodes(u64 start, u64 end)
+int __init acpi_scan_nodes(paddr_t start, paddr_t end)
 {
 	int i;
 	nodemask_t all_nodes_parsed;
@@ -489,7 +489,8 @@ int __init acpi_scan_nodes(u64 start, u64 end)
 	/* Finally register nodes */
 	for_each_node_mask(i, all_nodes_parsed)
 	{
-		u64 size = nodes[i].end - nodes[i].start;
+		uint64_t size = nodes[i].end - nodes[i].start;
+
 		if ( size == 0 )
 			printk(KERN_WARNING "SRAT: Node %u has no memory. "
 			       "BIOS Bug or mis-configured hardware?\n", i);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 05:54:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 05:54:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345806.571540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXb0-000105-Oq; Fri, 10 Jun 2022 05:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345806.571540; Fri, 10 Jun 2022 05:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzXb0-0000z9-JD; Fri, 10 Jun 2022 05:54:10 +0000
Received: by outflank-mailman (input) for mailman id 345806;
 Fri, 10 Jun 2022 05:54:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B4Vh=WR=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1nzXay-0006br-M0
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 05:54:08 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd6b350f-e881-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 07:54:07 +0200 (CEST)
Received: from DU2PR04CA0328.eurprd04.prod.outlook.com (2603:10a6:10:2b5::33)
 by AM6PR08MB3957.eurprd08.prod.outlook.com (2603:10a6:20b:a2::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 10 Jun
 2022 05:54:05 +0000
Received: from DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::5b) by DU2PR04CA0328.outlook.office365.com
 (2603:10a6:10:2b5::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14 via Frontend
 Transport; Fri, 10 Jun 2022 05:54:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT010.mail.protection.outlook.com (100.127.142.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:54:04 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Fri, 10 Jun 2022 05:54:04 +0000
Received: from 33806bc2eda2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DB52A1E6-B8CC-4081-A99C-0E5B26116A6C.1; 
 Fri, 10 Jun 2022 05:53:59 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 33806bc2eda2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 05:53:59 +0000
Received: from AM6PR08CA0001.eurprd08.prod.outlook.com (2603:10a6:20b:b2::13)
 by AM9PR08MB6802.eurprd08.prod.outlook.com (2603:10a6:20b:308::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 05:53:51 +0000
Received: from AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::e) by AM6PR08CA0001.outlook.office365.com
 (2603:10a6:20b:b2::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:51 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT033.mail.protection.outlook.com (10.152.16.99) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 05:53:50 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Fri, 10 Jun
 2022 05:53:47 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Fri, 10 Jun 2022 05:53:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd6b350f-e881-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=AIm+ukrKRp7SiLzo2UB3Zb/xaPUU667ynDB8ZLNrai8wIWQNDIVNr3gYCsJPdBS9V2FUZfPmLV74fHbkbyuyOkl99zotxER3BEMUeQO0Agb006c2JulISKh6Kmi+/qZcYOzPNv5G5K9HTOdaWL6lGBxhVZMRiQGcNf7ZWN49+4Q7PosTShTiIM9mYaNBOXQVs+2To50WP68M+IW6dL8aOszNLB6p4QCABNQd0wPkWUJ71ZnZ7d154DL4IcqwB+LA5Bt46QXJBb2bcvqyGguWTUeyOFS67cbR7KevY4SAmQQ74KeGUYC0fwibz+4S6w/0jaDwa6cRkAMdWGA2BoJ11Q==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=71g1qCFmdU3XbQ0p2R4qHlQkvDUCjtYpciVZO/CP2Pc=;
 b=TPXs2x77o2i4gqfSDTeR5wGsglCCVxcKVf+bZpcMfLFsmURj9by4kRicw/evM0cbwxz8HEQT4IDQJA3/sACu/jOdNp+zotwgHWDKVeu0rRnOe8D408hr5CEYsg1FDquSMy0lCIgc96Zf2ZnHDrHRm4H3w4V1sNffi9+hSBGlM+hiNrs49HlYczbPTpXcL5mm52sOqvDZ5nvLgeCi+GBvZMy7l3E1lmQpjpvwWvXe7pTUsNwoGHxcTwWXOVWc3CVO/c3SXGBotI0KIH0JZOL1S2ILQLG5G1gTNFleB0HJx/gf52j02yAy5j5JBCxsdsr7/iMuOz3z63qEligsnebPjA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71g1qCFmdU3XbQ0p2R4qHlQkvDUCjtYpciVZO/CP2Pc=;
 b=CzDjPGqwDAetOF4tqxe2rp1E0JF5d9FVZ6UrTB33XBroHuNYSibv5bA+MYQv0VmUHNNJjVIbwoofi0X3PENEXrSByQ2YNB1+O1zcpEFiVtyH2WboyZO2idW2Y11FuW+G4w6JA1XOQ3RT3c7UVgeQJN1NSp0CFjGrFj3T8SgFoDc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8ab16ee7f36652e9
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=knwkhk5mwSdPpW9ZYo9K2GBneoFdNpKV8oWCqDJpMD55Anm69xPpP5mroImx/z2MjyoPfa3d/+HOaziC8DczfVU1L/Ksf5BMQYSTjV7Z3UdPXpD9xKqtxd4rpmLu57GHdDUuQpZeXXfFwD5XWOH2EnyyOmHXH9dBKnj4PDStxfAEhBT/37W2XfIXteniGcDnmDUWveHia34l+GQ8O2ZVHFN74mkrzlZodr3B1T6TLOqUP9vQApzLuP4VBVzKSoX6Po6KkVMRPf4hO0Yyw0NZBJwsYZRN4xnqXg3193pqaRZsMKfE9cD0P6SQBZg96PR3jjzwykVf/+AUF7oIOjeCEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=71g1qCFmdU3XbQ0p2R4qHlQkvDUCjtYpciVZO/CP2Pc=;
 b=Cj0tu/b8DLYdt1Te8BL4jeyFIgVA86FoLfq1winHpUGOxbBYLjXIdd7VxPSOj3F3V5gdTgRwRIemROqBjWBvi4uE8IASBPuWwwo67DS/StlfBjShUR/HtifNR2NEfFNQekXTOKg88E4rucfpozJ+9ZQ3PZF4t0Ywi7sUl04JIjQTfulka5WFj7zvSN62NzoFK69xId+dGhgEmCAV/RTJZEXyny28c2LeDjkdyVwDte26IrRAe23id1wtT3XEl/3AMv+6jhf+QnBTZb0VdRAkS+KwVydAShuqQOCVezIgOh3nm0jauLNft8s/DsSI+0MfDei8FUzchwE/sUi08ECeew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71g1qCFmdU3XbQ0p2R4qHlQkvDUCjtYpciVZO/CP2Pc=;
 b=CzDjPGqwDAetOF4tqxe2rp1E0JF5d9FVZ6UrTB33XBroHuNYSibv5bA+MYQv0VmUHNNJjVIbwoofi0X3PENEXrSByQ2YNB1+O1zcpEFiVtyH2WboyZO2idW2Y11FuW+G4w6JA1XOQ3RT3c7UVgeQJN1NSp0CFjGrFj3T8SgFoDc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v6 8/8] xen/x86: use INFO level for nodes-without-memory log message
Date: Fri, 10 Jun 2022 13:53:16 +0800
Message-ID: <20220610055316.2197571-9-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: 3c8a299a-9af0-4388-c5b3-08da4aa5a060
X-MS-TrafficTypeDiagnostic:
	AM9PR08MB6802:EE_|DBAEUR03FT010:EE_|AM6PR08MB3957:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB395794CA5A2E41D1F0E1276D9EA69@AM6PR08MB3957.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nRBxxvS6+3OYtOYc2d6VW0I6xKS1Pide9bBnoRZhr4lwO14UI1e+c9m5n0twcqiI65PsFxHnuCcUl70cc8U1UfacYyMpNh7Lc5c2scpXeLaWpTvXLBfm1TuVot/Eme6v31x8MX3wXkA4KsWw3pq+6IufXE7dkbUU1Hv59nMh3ll69cKAxEGZ7qghs26X6Al/+Q6f/NqV3iwuDOZUsjzTrhBQRu19/h+9ep0IurPEvGeTUOyl1udJgJO6KCJvjDYlwbn9azP4FHhCG1LF4NvkimxD4qGx/tcFyrVUXYpOQiJJB8XUtk5JQbT3DBUR3M232n8QplWRfxu7rFV24MhFGGRaYx9G4f7AB6XNfAfQ3AblvajACVmXGhB3Z8qqnauoL/MlKeYx/WSKlh6dW8h0zqFvXW2aS1B4sFgEHfyz9+3rJjAuOjaTdBGqhKcDtDA0i1AaerA7PnKGUKpVI/34FBVRv40bGJ/vWSH8BIw1CEG7J8iPFTqtaO0gWM3GbhxYYfCymuanRU+g45hlBjLKV/kGFnUYgB40QHEu6i+bt4yW5Tyn2KDuDIQR3p2ZMpgMkuRGdNTM11gPmh3JCkv3Sb8ffIkisgGJC+lx8HYpOovD+lXTKgDwG+2WGRdlN6HCmd0nu6mMvFetsqwgSOYBdHiMtcfw/htfHRmX5ahS8PjRQckOY5BvbKp381RJmCRfe/vhB5BiE9vkAo6/R1wBHw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(46966006)(40470700004)(426003)(508600001)(336012)(44832011)(6916009)(70586007)(81166007)(54906003)(70206006)(316002)(40460700003)(15650500001)(82310400005)(8936002)(4326008)(6666004)(5660300002)(7696005)(8676002)(26005)(2906002)(36756003)(1076003)(36860700001)(47076005)(83380400001)(86362001)(186003)(356005)(2616005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6802
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4da6defc-fd78-455b-ac90-08da4aa59816
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9ZB4quyO7AnNEQelgEiMev4cj4bPpb4H8nB9OiD2Blk/jGkpjkR4ZG5A8GaPnCrN66tpHmTaCTqRel7AuNE/4U7xdDzyY4aiJPH0oMxMOrOAaB5izCamcRRHoaT16+VMjsWWEG9TSO6HvXWRL5rkSLKHnU8azNjOHb0+1aPDz6/54kg/OG6HXduG6r6+/Qyv8NKrWyqmr30/yFRh1KsYkKpb5i3k63JlI0ZhLQHgSyosyqeVh8XCItMI1LHpbaC85cr1P/sjusc26F2SFcVd/ympxMwrcXJkVrOznC2pZ4J7361gsmrohh/q3cRQkFaD66bZDWp4ZLV765EF1Fg7ocnKyO2I7JYNvPctDETuCnMIUuvv718hYjjohPRH6vcYs+QlmlGtbPo5QbxgNsY6iscH+ZGe4uIKuFE0T1ghJMOtf5XP15gb2i+atUW+0fEPpa2R4/3amHNmQlJmvdayDne5V3YizQHf5k4l6NsnFHTkuw+R24G3lSXCU2zUXt+MPhqopMND+xGU/rCKbfAUkK4Ezhe/VfzXX4BIEhTL8G9LRTw+VqzXmpBMAduKz5D1yDxTwrC3jaxMltflKNbVk0ixoqhk/ciwW0mKtf3EfilBH7psX7AeiScrSEuNcFdT5e8mjIdRqL7tXRoyKcNrqi6VC4t4K+1U4qvcb4SZa1ljFF+Z9M8IAhsA/YPu0UuW
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230001)(4636009)(36840700001)(40470700004)(46966006)(70586007)(5660300002)(44832011)(4326008)(81166007)(36860700001)(86362001)(1076003)(47076005)(40460700003)(83380400001)(6916009)(186003)(15650500001)(82310400005)(8676002)(426003)(26005)(336012)(2616005)(70206006)(8936002)(36756003)(508600001)(7696005)(6666004)(2906002)(54906003)(316002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 05:54:04.6946
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c8a299a-9af0-4388-c5b3-08da4aa5a060
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3957

Previously, Xen used KERN_WARNING for the log message emitted when
it found a node without memory, suggesting that this may be a BIOS
bug or mis-configured hardware. But this warning is bogus: in a
NUMA setting, a node can legitimately contain only processors and
0 bytes of memory. So it is unreasonable to warn about a BIOS bug
or hardware corruption merely because a node with 0 bytes of
memory was detected.

So in this patch, we drop the warning and keep only an info-level
message to inform users that the system has one or more nodes with
0 bytes of memory.
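The resulting behavior can be sketched in isolation as follows
(log_node_memory() and the message prefix are hypothetical stand-ins
for the patched srat.c logic, not Xen's real interfaces; only nodes
with zero bytes of memory produce a message, and it is informational
rather than a warning):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the patched SRAT check: a node with
 * 0 bytes of memory is ordinary under NUMA (CPU-only nodes), so it
 * only merits an INFO message; nodes with memory log nothing. */
static void log_node_memory(unsigned int node, uint64_t size,
                            char *buf, size_t len)
{
    if (size == 0)
        snprintf(buf, len, "INFO: SRAT: node %u has no memory", node);
    else
        buf[0] = '\0';          /* nothing to report */
}
```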

Signed-off-by: Wei Chen <wei.chen@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Remove full stop and use lower-case for node.
2. Add Rb.
v2 -> v3:
new commit.
---
 xen/arch/x86/srat.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 3d02520a5a..b62a152911 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -555,8 +555,7 @@ int __init acpi_scan_nodes(paddr_t start, paddr_t end)
 		uint64_t size = nodes[i].end - nodes[i].start;
 
 		if ( size == 0 )
-			printk(KERN_WARNING "SRAT: Node %u has no memory. "
-			       "BIOS Bug or mis-configured hardware?\n", i);
+			printk(KERN_INFO "SRAT: node %u has no memory\n", i);
 
 		setup_node_bootmem(i, nodes[i].start, nodes[i].end);
 	}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 07:21:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 07:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345890.571564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzYxQ-0005PM-KJ; Fri, 10 Jun 2022 07:21:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345890.571564; Fri, 10 Jun 2022 07:21:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzYxQ-0005PF-Gf; Fri, 10 Jun 2022 07:21:24 +0000
Received: by outflank-mailman (input) for mailman id 345890;
 Fri, 10 Jun 2022 07:21:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0XP=WR=citrix.com=prvs=1532263ae=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nzYxO-0005P9-Um
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 07:21:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec696e4b-e88d-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 09:21:21 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 03:20:11 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by PH0PR03MB6786.namprd03.prod.outlook.com (2603:10b6:510:122::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 10 Jun
 2022 07:20:10 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.014; Fri, 10 Jun 2022
 07:20:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec696e4b-e88d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654845681;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=dSu4CdkFOrXCCianLZv8vazJy4XB+5BY/fOt7Qv0ROc=;
  b=c0juvJt+Pv3XK5bcrguiY3i3h1LKgBVrmxdqzG2411wERe3nuOgXfvKa
   rxi1dmpPkVJES8OOGYVD0p/o9+iKrV6/eIYWm8Xg/7IZ9S/Yo3/xa95pp
   OWWQNXF50xrQ2NW8xBuxA/kLIBVNkCiron417nxtS435w3R+WaSVQiSMx
   s=;
X-IronPort-RemoteIP: 104.47.70.106
X-IronPort-MID: 73305796
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:fnSW2qLrqVDZ+5wOFE+RoJQlxSXFcZb7ZxGr2PjKsXjdYENShjJTn
 GAXCzzTbPrfazD9ft1wbti/8E4FusXdydQyTFRlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA149IMsdoUg7wbRh3Ncw2YHR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 OQRp43ucl0SBfLFgb05D0ZpNwxFPJQTrdcrIVDn2SCS52vvViK0ht9IUwQxN4Be/ftrC2ZT8
 /BeMCoKch2Im+OxxvS8V/VogcMgasLsOevzuFk5lW2fUalgHsiFGv2UjTNb9G5YasRmB/HRa
 tBfcTNyRB/BfwdOKhEcD5dWcOKA2SKkK2AH+Qz9Sawf50/Q7QhX4JjUKZmMVIPWZ/cI2Wi3u
 TeTl4j+KlRAXDCF8hKH+H+xgu7EnQvgRZkfUra/85ZCkFCVg2AeFhASfV+6uuWizF6zXcpFL
 E4Z8TZoqrI9nGSzR8T5dw21pjiDpBF0ZjZLO+gz6QXIwKyL5Q+cXzAAVmQYMIJgs9IqTzs30
 FPPh8nuGTFkrLySTzSa66uQqjSxfyMSKAfueBM5cOfM2PG7yKlbs/4FZo8yeEJpprUZwQ3N/
 g0=
IronPort-HdrOrdr: A9a23:Hm9nGa0fMV/kyRMbYVtAfwqjBS5yeYIsimQD101hICG9Lfb0qy
 n+pp4mPEHP4wr5OEtOpTlPAtjkfZr5z+8M3WBxB8baYOCCggeVxe5ZjbcKrweQeBEWs9Qtrp
 uIEJIOdOEYb2IK6voSiTPQe7hA/DDEytHPuQ639QYRcegAUdAF0+4WMHf4LqUgLzM2f6bRWa
 DskPZvln6FQzA6f867Dn4KU6zqoMDKrovvZVojCwQ84AeDoDu04PqieiLolis2Yndq+/MP4G
 LFmwv26uGKtOy68AbV0yv2445NkNXs59NfDIini9QTKB/rlgG0Db4REoGqjXQQmqWC+VwqmN
 7Dr1MJONly0WrYeiWPrR7ky2DboUMTwk6n7WXdrWrooMT/Sj5/IdFGn5hlfhzQ7FdllM1g0Y
 pQtljp+6Z/PFflpmDQ9tLIXxZlmg6funw5i9MeiHRZTM83dKJRl4oC50lYea1wUR4S0LpXXt
 WGMfuspcq/KTihHjDkVyhUsZaRt00Ib1i7qhNogL3X79BU9EoJvXfwivZv3Evoz6hNNaWs19
 60TZiAq4s+P/P+TZgNcNvpEvHHfVAkf3r3QRKvCGWiMp07EFTwjLOyyIkJxYiRCe81Jd0J6d
 /8bG8=
X-IronPort-AV: E=Sophos;i="5.91,288,1647316800"; 
   d="scan'208";a="73305796"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oCW+STYwHvQ2mjLrWF/fC6A7g/vSCurLkdf+Rj7P5fWZFuI1etUZfTGLqOUQQ5OxDzrbyXv2JsXaU+nvgeCrhjSYkIT9ShQJOPpvy/bzCZhhPh9pKRKdT42uw3nLVcnVJ5ktV9FxKQuKxiJ4WeZgRxj7+ew9mBIzZ1CMSyEE4E+5lLCRDAX2ppOEjgJF+n5+j+lDSZWJyUsELhIP5QQLTTi2YXKRSUBxtzNFXutAXn+GZSD3nD3mqskXam+C0et1kMwIgH4YYpLTWLb2I3XWDw/ExWEJxs3Ol28Orp2LkBW+gk1rdbEv7YHgv6kOSwvp9twI3Jn0o4NRsfktorq7XQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lEA6IMl8YlMCwvCPZ2+wucaeGxQUlLP1gmCEgyAOOs4=;
 b=O4KY6+ftGbD5jU8CbB1kGhLc0sEZoF6cb7lakB4xzX4VMw5rNbFpN6iiPjwi/heva8Mp1trxTjARLZKzmvZSc7/Q5ulho0unsGqZ1PZ9ejR1gJX6kpFenesO4Ee1yy3UeJgnbVkjkGIxyWJS27M8xRkZdQXbNjF7SIYaRGo362CuLsdiH+h3JDhqNgpdhi0PqZp6XoE5AhyvUtmtWRdzR1BLcoA8T+M+dJbKPVRG0Z2YvxORG/abDqWm08zhg2pjzwjU365dUfUdoxjK95WuDGbTpTmVZSQpqreyU/UY9HGK5E7r3ZZS2kRXjErx8YPaHCohCsxPWvLUhYhWQy9kqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lEA6IMl8YlMCwvCPZ2+wucaeGxQUlLP1gmCEgyAOOs4=;
 b=CXdwBcdFrUShdPNZCchuKZQh2QYuYzYvnkkglk1VeqfqRQIca5uf8OgzTUWoUZdwzRel4gn19dnuENQy1f+HZQ04QBkI6Tj01obG20xkFZmhYyKBCjQMxlAPq6ZyDSIXP6tATTgC7UaKcAwilb9dB8Iw6braOFE0vF0VJqlgStQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 10 Jun 2022 09:20:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] IOMMU/x86: work around bogus gcc12 warning in
 hvm_gsi_eoi()
Message-ID: <YqLwpGXxCHy5HJpg@Air-de-Roger>
References: <52090c8d-fa21-6f53-c33b-776c12338f62@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <52090c8d-fa21-6f53-c33b-776c12338f62@suse.com>
X-ClientProxiedBy: LO4P123CA0119.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d8285029-363a-4e65-9e44-08da4ab1a6fc
X-MS-TrafficTypeDiagnostic: PH0PR03MB6786:EE_
X-Microsoft-Antispam-PRVS:
	<PH0PR03MB6786C0D2979AC1D2C64BBF0A8FA69@PH0PR03MB6786.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MBgY7qJ59RMJy5eqOv7V/oOliOCwTkzU1x98QVdYw8UhL9hF4GCGuqf+qivUoLzVhNmMI6NJb17Z374rJg5i2gMlo2SmaAflUZ1+UH4zK+99rbCs/BMim11HzDKEeJIzWFf8Z+is5pKosP2DrdaEcMpQE0YgL35/UsuLt+AQBmwnTJIpJxtBNbvbh0Lg1WPOiSt3urlnMPea4B2B/WCuEdnijKAynCVh2fbS9vYzaAISKJRSM930mubN3Msmiiq0mebRRW7guSq0QMY70DHh1/98dAOT65PwGqL9Wu33ZJcNqqZk7oFURYoWxLhI7G3v97nH7RntSbS9TXtO16FjcQ2KSQ89e6aHuVGdVlNlgMCgmB7skbcTIFNJ4Ah9cqdVdjtdRzZrR+CIrGm9KnmfkVphS1WyW/Ok13GI2UD66Nu8CqMxbE9D4WDX+FErO/mWN3H2d/6wT/toIam450TEFwXdDcLoVgk+23pKIgnMdclEZcdZpy/aHjPRRw4s7Mt4dWetzZ6HcdO/Xb5qdsJHrXUmdR4z4tCFxM3Xpl7BeS6s0NKEs/JbBmAMUVeaWGu+BWlnr6xDXXNgDk5An9n9yNZnk5fG/hA0HVbLKcfoTYEdwyQ0OwKDzI+mUtoid84eqJS9knc7Bu1q/GUsSDQ0VYBdeK80DsyJxyvNfSIA5QIaaubZCvGIF3N7KsN+22by+MYFaZV2cBuWNy5bBRvN0CgmalbYVk52z9RlaLBYz60=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(4636009)(7916004)(366004)(8936002)(82960400001)(5660300002)(6486002)(966005)(86362001)(54906003)(6916009)(316002)(508600001)(26005)(6512007)(38100700002)(66476007)(66946007)(66556008)(9686003)(6666004)(8676002)(4326008)(83380400001)(186003)(6506007)(85182001)(33716001)(107886003)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OFhrMUZoa3RaNFBhdFNITncyQTlmN2x2NEFPeUhxV3NCSCs0OUIxWXY3U0VO?=
 =?utf-8?B?Q2R5SUd3c1BWb21idk1wSGlUNVZLT2IxTlpJNHduRzhJT0sxVEg1eHNOSjNO?=
 =?utf-8?B?aGlhVmErYmh0cjI1Q3RoaVFXOUs0MldXRmhXN1VRaWtpUWNIN2lzekNlY0Vi?=
 =?utf-8?B?MVJ3RTBSTndMSTFaR2N5VEFKOUttWkFqczBwZC9oQ3F3Zk1NTUtjQkhjK1B3?=
 =?utf-8?B?SjJBZUZjY0V6TTRtdnArWktZeGpTSjlTTXczQlBzMVVKS3NxdjhsOVBlRmVO?=
 =?utf-8?B?aGhmT3lZTkZOZk5jT3hFUlBwT0U2ajlLQnYyc0VUUUh6VGkwOXFTcnVhVkFo?=
 =?utf-8?B?ZC8yMzVUOVptRHZQSUZHTkFmc3E0VXA5b0hJaUh3N0dBTVdGSG16cDFNdzBE?=
 =?utf-8?B?VjBFakk4TE1iS05aMy9uYVNtUE5PSU1pT2tVTmNBRXJXTmNuVE5CSTlNQ3FZ?=
 =?utf-8?B?M05NTWNGcmpNVHBHS25rN3lrdDRmYUNKU3pNOGpYYmlLcWI1UDl6dTVPcTdJ?=
 =?utf-8?B?SWNjZC9sZ09ZNXIwUit6K1Y1cmh3ejlLaXkyWTRveFgzUzA1WmZaS3FGWWsr?=
 =?utf-8?B?NmttR296N0pWa1o3THJwS01LRFNxcjR3dzlQRVBLaEhtUGh5ZEhCZVR5eURI?=
 =?utf-8?B?UEtsckNYWmU3ZUpjRHFxZUNTdGlVYlV0QlNORmg0ZlpMVXBwSWMxMGh4K0JF?=
 =?utf-8?B?bmJheG9sUmdEeVFRdUYwRG9aNFZqZ1c3Yit5Y1hzZ1pxRmI0YXBlbENENG9o?=
 =?utf-8?B?V1pLeU5nZGNXeXgzRjFDVmpjU3psVC9IM1JzN1d2Mm9iQnE2eDIrclFWV0p5?=
 =?utf-8?B?WWRtNjQyUU1tVSt2TzhDa2NKVlhqY01GMWVtdTdUK2Y3dXVZeDVmbXZNTWhB?=
 =?utf-8?B?cCtFdkNNSFdMMzVyanpjV1VFNWdsWlBBS0lLbnZsbVMyemZoVFJqRzU0SStW?=
 =?utf-8?B?Tkx0SVE0TUJhRkhRQ0VLdVlHaW56TndFY3pwMHVBMkFBZHdRdzVZd2xmb0ly?=
 =?utf-8?B?aXZZN1IrUVZISmhKNXppK0V2Q1BqSisvemtBWEJrWmtscVo3Umd0bU5GQWNh?=
 =?utf-8?B?Smw3K2VRT21pRU9KalJ6RWVid3VoYzR2NytzMlA0eUZMYWwzV1d5NndFcjc2?=
 =?utf-8?B?Q0ZMc1l3L2kwbEd1VW5TNSs3K1dJaWF2azRmVm9NcXBsNzZDblREY2VqM21w?=
 =?utf-8?B?cjJIMzdRRGR6TVNnU1lrdjdzc0RjaThvRUhnck0xNG5lYW14NkhvM0plOTZK?=
 =?utf-8?B?SHcyTXJSZEhDK3M1UEZEalZVb0IxanJXVjg2MXpIUXlZTE9mdWNXYVVHMzhh?=
 =?utf-8?B?UlNnL0F2OVoxbllTTENNdnFsSGlGSVZJS1Y4bitvejVBaGVrTmExWVhJaDAw?=
 =?utf-8?B?czVCZURON3R4S1V3c0ZsQ1lpT0xRR2VKVmUwVXhFYldNcW1XajRqTFQyM0ZY?=
 =?utf-8?B?WjRmUEFPdzdBMzZsNzVDSXhBbERLVzZrSWxldGEwTDYrUkl4ZXloMHVKSEha?=
 =?utf-8?B?RXJaSkZIMENvY3BpSWY5NHN0TW5GZ1ZqS29GcHpKNzRrWlU0SHNyZkpNaGVx?=
 =?utf-8?B?djNUYU1ydG5Nb0EyR0ZhZnMxeVU4MU1BNW1XcHlrN3JKSmZsTFkzdXFhbTMw?=
 =?utf-8?B?WkxhZGdWWjBBUVhoRHRvWEkrU1UrV0hyVWhLcGsvSHl5SkR0ci8zRFNhZytN?=
 =?utf-8?B?QmFCK1Rrb0NlNzhpbEdoNm1Senp1bkZwSjg1b3NqY2FBRlFuWi94VmRZcGhT?=
 =?utf-8?B?RkhFbDBKT3pWbGtXSmhHREFOT0VwZlF0OTU4S2YrLzZnUDNVSjBYN284S3dZ?=
 =?utf-8?B?WmtaVkVWYWY2ZnREUmdwOHFyM1RhNTlaZGhEdGc1c0JqSTRxTHJacm45Z1gy?=
 =?utf-8?B?TjJVME16N2VWaTZIQ00wcUxWOURxOEo5UnhuaHYveGYzeGMvdWRBNnROVFcz?=
 =?utf-8?B?YzgwNjVzZktyNXhvS3haNVVNWkd0bWZ0c1ZoeXRQY3Ztd2JPQ1RIUkhCVFg5?=
 =?utf-8?B?RVlZM2dYb2NhclFPeU42clBCUmJpSTdlTUZPMmI1RGlVL1hIdk9Gcno2MFZm?=
 =?utf-8?B?bzN6V3NUbVFOZk4rSElsRkwwOGRhM0FwRU1KRnlReHpHNU5uWlRNQWdTUlB1?=
 =?utf-8?B?ZUJTQXUzQUpDTG9idVNPZTFJWU1vNHUvWldhbEl3Y2J5STJiWGRCYnVOSGRO?=
 =?utf-8?B?aU1ibEtNQ0dzTnkwbm81WXFkeXc1MzF1dlJCSGNyUGRYaUY5T2ZvYUpJTkRV?=
 =?utf-8?B?ejlRUjMxRFp6YmN0bnhPV2l5cUgwMEhnR1NtMDdScTNsSUVQR2ZuZkVBdVRD?=
 =?utf-8?B?N3lTeHdoaHdNL2cyc200dURoSXFJaEE1NTJsY0ZnSVFuc0RtWUR0b2gxc0lS?=
 =?utf-8?Q?HjOBO7gc2SlzU+/M=3D?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8285029-363a-4e65-9e44-08da4ab1a6fc
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 07:20:10.0086
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YLkxC+KFZaIHsdR8CPV9ALefcVV2yWctBgyAn896g+0RbSGyYHi/X9P7nDEJM8s1o40HqMlYvHgawPeX08xzwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6786

On Fri, May 27, 2022 at 12:37:19PM +0200, Jan Beulich wrote:
> As per [1] the expansion of the pirq_dpci() macro causes a -Waddress
> controlled warning (enabled implicitly in our builds, if not by default)
> tying the middle part of the involved conditional expression to the
> surrounding boolean context. Work around this by introducing a local
> inline function in the affected source file.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102967
> ---
> This is intended to replace an earlier patch by Andrew [2], open-coding
> and then simplifying the macro in the one problematic place.
> 
> Note that, with pirq_dpci() presently used solely in the one file being
> changed here, we could in principle also remove the #define and use just
> an inline(?) function in this file. But then the macro would need
> reinstating as soon as a use elsewhere would become necessary.
> 
> As to the inline - I think it's warranted here, but it goes against our
> general policy of using inline only in header files. Hence I'd be okay
> to drop it to avoid controversy.
> 
> [2] https://lists.xen.org/archives/html/xen-devel/2021-10/msg01635.html
> 
> --- a/xen/drivers/passthrough/x86/hvm.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -25,6 +25,18 @@
>  #include <asm/hvm/support.h>
>  #include <asm/io_apic.h>
>  
> +/*
> + * Gcc12 takes issue with pirq_dpci() being used in boolean context (see gcc
> + * bug 102967). While we can't replace the macro definition in the header by an
> + * inline function, we can do so here.
> + */
> +static inline struct hvm_pirq_dpci *_pirq_dpci(struct pirq *pirq)
> +{
> +    return pirq_dpci(pirq);
> +}
> +#undef pirq_dpci
> +#define pirq_dpci(pirq) _pirq_dpci(pirq)

That's fairly ugly.  Seeing as pirq_dpci is only used in hvm.c, would
it make sense to just convert the macro to be a static inline in that
file? (and remove pirq_dpci() from irq.h).

Thanks, Roger.
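For reference, the wrapper pattern under discussion can be sketched
in isolation like this (struct pirq and the macro body here are
simplified stand-ins, not Xen's real definitions): the macro is
redefined in terms of a static inline function, so uses of the
result in boolean context no longer expand to the raw conditional
expression that trips gcc12's -Waddress.

```c
#include <stddef.h>

struct pirq { void *arch_priv; };

/* Simplified stand-in for the pirq_dpci() macro from irq.h. */
#define pirq_dpci(p) ((p) ? (p)->arch_priv : NULL)

/* The workaround: wrap the macro in a static inline, then redefine
 * the macro to call the function.  Callers are unchanged, but the
 * compiler now sees a function call instead of the macro body. */
static inline void *_pirq_dpci(struct pirq *p)
{
    return pirq_dpci(p);
}
#undef pirq_dpci
#define pirq_dpci(p) _pirq_dpci(p)
```

Converting the macro to a plain static inline in hvm.c, as suggested
above, would achieve the same effect without the #undef/#define pair.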


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 07:26:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 07:26:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345899.571574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZ2i-0006J7-7c; Fri, 10 Jun 2022 07:26:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345899.571574; Fri, 10 Jun 2022 07:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZ2i-0006J0-4t; Fri, 10 Jun 2022 07:26:52 +0000
Received: by outflank-mailman (input) for mailman id 345899;
 Fri, 10 Jun 2022 07:26:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzZ2g-0006Iu-EN
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 07:26:50 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe06::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0ae3f70-e88e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 09:26:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8464.eurprd04.prod.outlook.com (2603:10a6:10:2c1::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 07:26:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 07:26:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0ae3f70-e88e-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BqXZusYFeC2YU9uxhk7Mndq1NH99RvYYZKpRuyAe+fYj8UkXBJcH/SjDSmzygqPayYTb5UeaWhb/vL6Wdgusoh4Muinc7HhesY20x+1N8OBV4s+YFnlzKThSrT+fSo50JbNMjVwL5khU82mUn/1xfjqcbX3hfjAK1rJsIZSA+b2++zLpu2bDyvlNicNwZ0WK59yiTsetUoX6a1q1citI12AkqG+NlPjTZtBYP0VOby1mBhFn46W+T4yp0eJjz30mryRpsDzeKg/gi7pinPINuYzAC/dLzy9jMLosEeUvTeHfMyLz+lPbJ/xOh/2MkzpqVcBFXwNnS3No3sMuy7eWWw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CUi+/aZOqk6Pk2CluuFgLnlx0BJ86/N14ldIJ05RjQA=;
 b=EblOnEHW6lgnDKjgHeZraMEttlhHcwFJhQ+nezDLTjk5HonLYdEwOHZtxApEchHJApVLLjPKfBBCKArF1mxOpKEsdMfpCcloWWVx187NKsyrPkTckx0jr9Fj2Ldp9KnlascCNeNtsenB+Misqxc3DMDJuYfFZaC7N1Vo5WBt0MPPulczLIl9rRVKACyy5gXmSJvLVASYDERzo+5Kh1pxzL7gG6Wd8mGRfyoKd11g3PmFUuLoCJ+G2bVR/bK6SH9UW7FDAPC0ycKG7ut9Vl57Rg+6ZEmUCPwEF7FJLIKrTRJUh5e7xiPGmYCCKdp+067kpr/dFdS5CP21cLOt6w5CJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CUi+/aZOqk6Pk2CluuFgLnlx0BJ86/N14ldIJ05RjQA=;
 b=ORtCneGjBpi12ZfIzIMVfhJ2i2vXE4eRo0ZIaDAmaaCyKidlzQYNvRBj4F3gX/FrGuNRR8EMkW6AYfTUEJ8ky8WO48+036bSGPpX9nvys1d915cfhIW9KR3Ri7c3SgzadhIjsjMNYiddhdcCTtwTfnbg/BPh/1n6wD+jhYJMG0cJ4SCPz6rZhrLXJSMSgEYxsIR0OjFnvrgXjwDsRkoqnPBOl61XroBiYAvMQa1o+ZhEEM8zqfGCQoTRUgSSKKeIdX60uw+ZD7QAJtm2KZbH45Gx2Ud0+me1P5lt2u9yKdRHJbxaLW1RIQet04I3o2s74RiNg2JHL/1OBxc4G4gOEg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ee7e7e1f-8d3c-0d8f-24ef-c281b09faa25@suse.com>
Date: Fri, 10 Jun 2022 09:26:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: CWLP123CA0243.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:400:19e::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a0f6b4aa-7c79-4c14-42af-08da4ab293c9
X-MS-TrafficTypeDiagnostic: DB9PR04MB8464:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR04MB84640B8C9A697623DB24E317B3A69@DB9PR04MB8464.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	m1Qylz/mImGzH9trS9b872tXjKIM3yKG3G9x1YB1Y2Nf+ILAzNExOYrMCesFEKfG9/DhRQbshaZHjLagqzv0No0+qN8wLsfbWuzlqn5RvwWq6t3oou3lVqTTpDfcQhhVF8pfLfdypR8dzjkBN+CT3rEn9Ud/FJvLy/YisP2xtuJcqXKY5p2IUYuTxTM9edA914BnKM85c90ElWvt4miIF6TfMXEHvyRYIFS/hTkuHSYG8y4L1wW6FbWKkWQEW+sxR9F66RlOHMGUi/J1tr3JTKV7fuMEq25MYCIPFX0ezYGBVfbKv1Bh9v5M/xPLq0HoFsZ30gg8zgBhz8Cxi8L2yVksYMY7ol/DnmZvntvT7Z3kCLgvuoskm6hpsDzDbtDrmG4wez3Jo9vBaRh1760hlAx5bv+whFyKA/2kZSLJdf+QexyCQDDz2eXDzMf/kYFtn5jDBYzRNqqyp+M1vmZLMOfgBtG7AG6ttUdKfGxt1hSfuIW/x5o5D6ODFqlr+ue8jJmvS59gkC2LqNGUZDA9b5Ct/kVKYGN415JhWxwYUrO8clAkddmFbif+06kJxj8YHFNNhgx657FzR5OH/z15AidN/Zs3Jbp3ihjktAqfybrq7NuI4Nq4GgkIMfER8CkPz2PtoqeC0SIbKTsAN4oFwADntS9Gf4qeiH3KnfUemUCG7RhjOX4A7GUQGgHUNnQWfLlRQniOZ+9EPXoPSWTlL9pESon9s7Mx8PfCqcH+Trs=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(54906003)(186003)(316002)(6916009)(66556008)(4744005)(4326008)(8676002)(66946007)(66476007)(31686004)(6512007)(5660300002)(8936002)(2616005)(26005)(6506007)(31696002)(86362001)(508600001)(38100700002)(2906002)(36756003)(6486002)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0f6b4aa-7c79-4c14-42af-08da4ab293c9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 07:26:47.2452
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8464

While PGT_pae_xen_l2 will be zapped once the type refcount of an L2 page
reaches zero, it'll be retained as long as the type refcount is non-
zero. Hence any checking against the requested type needs to either zap
the bit from the type or include it in the used mask.

Fixes: 9186e96b199e ("x86/pv: Clean up _get_page_type()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The check around the TLB flush which was moved for XSA-401 also looks to
needlessly trigger a flush when "type" has the bit set (while "x"
wouldn't). That's no different from original behavior, but still looks
inefficient.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2956,7 +2956,8 @@ static int _get_page_type(struct page_in
              * The page is in one of two states (depending on PGT_partial),
              * and should have exactly one reference.
              */
-            ASSERT((x & (PGT_type_mask | PGT_count_mask)) == (type | 1));
+            ASSERT((x & (PGT_type_mask | PGT_pae_xen_l2 | PGT_count_mask)) ==
+                   (type | 1));
 
             if ( !(x & PGT_partial) )
             {


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 07:29:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 07:29:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345907.571585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZ5a-0006vU-M1; Fri, 10 Jun 2022 07:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345907.571585; Fri, 10 Jun 2022 07:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZ5a-0006vN-J9; Fri, 10 Jun 2022 07:29:50 +0000
Received: by outflank-mailman (input) for mailman id 345907;
 Fri, 10 Jun 2022 07:29:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzZ5Z-0006vF-Jz
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 07:29:49 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0618.outbound.protection.outlook.com
 [2a01:111:f400:fe05::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b935de4-e88f-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 09:29:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7759.eurprd04.prod.outlook.com (2603:10a6:102:c6::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.13; Fri, 10 Jun
 2022 07:29:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 07:29:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b935de4-e88f-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iUnzLSENuXk3WIqnRZ1TXTwPbNe+gJnMmER4Yd3/0+rIBVVu4sQQ1st7d/7uDxHugDAmhaT2sUh3rr9iLma72ymQU7ZlHZRnrW7M3bHrJKNiXSvskpG/oJMNECTGs2jQPd053B7RomuF6LB8wdAAIO6Mu8N1YEEvefo8nERw2Ujb2p/lBzw7pJg4yK2R6lDTqbBXnu0Zx3boue+3x7ieJhpZ9NJJ1zE85jz88Z9OCWLRuc1Lx7IuchyRVcauwibbsUFKhFBgFrruGOk4CE4RIzwHGdBMyB/9sZuFbYv0a9OI/qXdMMR7FEO0xl+qcG5aXWcOIVDqLxuNTN+Tz8G5sQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OFg+SRh2mEXaz+2IS2BU3ZXUtqSbZI+ynce61REkwpI=;
 b=O75O9EkNcSa89GGrsTunjUL2OB9jgXjjdAYjTRqEupBWvjo2Bn7yfmubHELVHmblOKOu3hwdDKheDLt6SfFxfDHVxKheOer9NgHdriUnMe8TsvFP3XGwq8RAF1v1emvgcveTzjnlSKKlD7gpF1/bVe54qo9tvz4NXOPd+t2YTt9oSsfvV9jVPJCOCZ798a7lft1Lda9ah3TAvJ8oRk8V+M0gg9o4HQtuqnO17yi50fdMcdZ7uiOGxTOyvIB03QcJqDtAKJlJjkXTihHmMqUli1l38cykfR9y7LGNbtoyPUtkW3v3FQyOE/pL61MVOdHNw4oWqBWvkpp5QW9blwo64A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OFg+SRh2mEXaz+2IS2BU3ZXUtqSbZI+ynce61REkwpI=;
 b=wntGm+yCCZTV1mb04e+34RCDeBMs4Lf3pc3TQEqALradhVRrxUAT3RMJgL9/OhS8YQ+r1g2j0MssaiTxBR+y2KLCB3KQP/WhGscpbRzhSXz8HeCMCNuYAaeLGnaqIZrFOH8SbtgGBetK8xP4f+3rvCE6xGFKezwcnhHE/kF27C0a+XypL3r/d4wuQ35HKVcK9lfXlRNNxvHAKT5yhgZ+EeC4a3Pg45yKg7AcOKm0NcvMNhhTEIl7oNfCXLuAgg5AeVQom8oxO/ZjSIArKiTQ8Jka0ThUecu3WTRSuATfBnKvj6aoA5LBL80WB10tYaefzH43h7IxPOxQKOFV0kVigg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <26f619a4-d0a8-7895-2017-cda17526b48f@suse.com>
Date: Fri, 10 Jun 2022 09:29:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] IOMMU/x86: work around bogus gcc12 warning in
 hvm_gsi_eoi()
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <52090c8d-fa21-6f53-c33b-776c12338f62@suse.com>
 <YqLwpGXxCHy5HJpg@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqLwpGXxCHy5HJpg@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: PR1P264CA0045.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:2cb::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6583c1d2-5cb7-45f8-30a8-08da4ab2fe9a
X-MS-TrafficTypeDiagnostic: PA4PR04MB7759:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6583c1d2-5cb7-45f8-30a8-08da4ab2fe9a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 07:29:46.3592
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7759

On 10.06.2022 09:20, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 12:37:19PM +0200, Jan Beulich wrote:
>> As per [1] the expansion of the pirq_dpci() macro causes a -Waddress
>> controlled warning (enabled implicitly in our builds, if not by default)
>> tying the middle part of the involved conditional expression to the
>> surrounding boolean context. Work around this by introducing a local
>> inline function in the affected source file.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102967
>> ---
>> This is intended to replace an earlier patch by Andrew [2], open-coding
>> and then simplifying the macro in the one problematic place.
>>
>> Note that, with pirq_dpci() presently used solely in the one file being
>> changed here, we could in principle also remove the #define and use just
>> an inline(?) function in this file. But then the macro would need
>> reinstating as soon as a use elsewhere would become necessary.

Did you read this before ...

>> As to the inline - I think it's warranted here, but it goes against our
>> general policy of using inline only in header files. Hence I'd be okay
>> to drop it to avoid controversy.
>>
>> [2] https://lists.xen.org/archives/html/xen-devel/2021-10/msg01635.html
>>
>> --- a/xen/drivers/passthrough/x86/hvm.c
>> +++ b/xen/drivers/passthrough/x86/hvm.c
>> @@ -25,6 +25,18 @@
>>  #include <asm/hvm/support.h>
>>  #include <asm/io_apic.h>
>>  
>> +/*
>> + * Gcc12 takes issue with pirq_dpci() being used in boolean context (see gcc
>> + * bug 102967). While we can't replace the macro definition in the header by an
>> + * inline function, we can do so here.
>> + */
>> +static inline struct hvm_pirq_dpci *_pirq_dpci(struct pirq *pirq)
>> +{
>> +    return pirq_dpci(pirq);
>> +}
>> +#undef pirq_dpci
>> +#define pirq_dpci(pirq) _pirq_dpci(pirq)
> 
> That's fairly ugly.  Seeing as pirq_dpci is only used in hvm.c, would
> it make sense to just convert the macro to be a static inline in that
> file? (and remove pirq_dpci() from irq.h).

... saying so? IOW I'm not entirely opposed, but I'm a little afraid we might
be setting ourselves up for later trouble.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:13:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:13:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345923.571603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZlE-00053P-Ee; Fri, 10 Jun 2022 08:12:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345923.571603; Fri, 10 Jun 2022 08:12:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZlE-00053I-BU; Fri, 10 Jun 2022 08:12:52 +0000
Received: by outflank-mailman (input) for mailman id 345923;
 Fri, 10 Jun 2022 08:12:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdog=WR=citrix.com=prvs=1535499d8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nzZlD-00053C-08
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:12:51 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d428037-e895-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:12:49 +0200 (CEST)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 04:12:45 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6615.namprd03.prod.outlook.com (2603:10b6:a03:388::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 10 Jun
 2022 08:12:43 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Fri, 10 Jun 2022
 08:12:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d428037-e895-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654848769;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=bmMrXtdP7KrwxUTAv7qdDLC9sk211vTy7IMy+N9vuRg=;
  b=SdJUgmfkNW8Y7Q1ynMaOJ7UmybwJvhH3NjVwWXyP1vZUzKL5zf5GLdFH
   rLTzb3bvg4Jg4F2+fv5HhyXKcVv8PdrQwFrJ+N20FN1tayYV3vPY2bUhg
   UIsiM8/mc0jiwqIDbQSXpOrEzv/uM3zEO2Nz2Mx3531TmOKw0N3ImwqWw
   4=;
X-IronPort-RemoteIP: 104.47.70.107
X-IronPort-MID: 72646867
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,288,1647316800"; 
   d="scan'208";a="72646867"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ljlthibyRFjM1Kjnx5yuzGeGBA3dQpkOZ2Cly2Zm05tKV2lM0WVUo64MssIP+BCBwQTyk2F8g4TQ/MKm1gX+Juceh0BFLFwqPaufh5qowTYIMdfYlZJp1T91CgmpK4xw8bx5BlCeKIr3kw4hhGtmEsJkwGQyoyV9Co6MZyxjarblHHXxhJUNAuciEYS4f3b3O641lrkrzOdcYPPIKesoF0xjbE6MvuD6jjpDLrgehJ5euOlutsBkRqFkVg99WHkioyQDIu5j931TMgmnZlDzGmUY3zZrSztnyHtZMUv+4wrYSsDmcng+PfbkZo5E7NkIQKzxOURArlffyhBc+t4CPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bmMrXtdP7KrwxUTAv7qdDLC9sk211vTy7IMy+N9vuRg=;
 b=kzU5K9yJtoLNwxrv3M/1yQrMXws7o0iIopiWgSG2vFFExFe/EQFkFy4qbBJaBuU/par6Ssj09V6ZY0R+TOI/PXCTCPVLXCRVrn52RW8G+stLE9m5sAmsUGXkuMHeAWxh8nM9B7pnkPl4bj+25vUqCcMQiJ4hqmdfxe4VjX4yfoVkSHAvV6lk4vCwmxVEWbBljYdte2Ft5JymwCKUY+vVMSrgULZxWL95EWJh9NUqtjp/txD0PVEyQSVJP/2R2+HqnJSPpLArb8I7YyPzBhoit/J47SvEm2nEx+E486EeywLI9BduRBrjVD8sIyIQdPbhIdVOQ1td70kMb8uN7wvi6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bmMrXtdP7KrwxUTAv7qdDLC9sk211vTy7IMy+N9vuRg=;
 b=nf1XqEzVduzaUI2KF040VpEjvzOkMTy5w4noAVCEigf+AS5kx4Z16W3KRYxUwr3R2ob3b+W6fn/wOQ0Psg7OGCXi1nDgLDW+UiyWq5OS2uCNi8/H7X8WKG1M/E/An/pVtTLucZgUicN6HjdEsEiqigMn4gsqJzUorKQRb40h350=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Thread-Topic: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Thread-Index: AQHYfJt5YVvowoL3VkeMPGEizKI7eK1ISn8A
Date: Fri, 10 Jun 2022 08:12:42 +0000
Message-ID: <93283a6c-0a5b-aa73-7632-21cc9b4cae62@citrix.com>
References: <ee7e7e1f-8d3c-0d8f-24ef-c281b09faa25@suse.com>
In-Reply-To: <ee7e7e1f-8d3c-0d8f-24ef-c281b09faa25@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8792d92a-eaff-47a1-a1a6-08da4ab8fe44
x-ms-traffictypediagnostic: SJ0PR03MB6615:EE_
Content-Type: text/plain; charset="utf-8"
Content-ID: <6628AAAE3B825F408DE2372EC4B54B48@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8792d92a-eaff-47a1-a1a6-08da4ab8fe44
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2022 08:12:42.6070
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6615

On 10/06/2022 08:26, Jan Beulich wrote:
> While PGT_pae_xen_l2 will be zapped once the type refcount of an L2 page
> reaches zero, it'll be retained as long as the type refcount is non-
> zero. Hence any checking against the requested type needs to either zap
> the bit from the type or include it in the used mask.
>
> Fixes: 9186e96b199e ("x86/pv: Clean up _get_page_type()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

pae_xen_l2 being outside of the typemask is deeply confusing to work
with.  It also renders all of the comments trying to explain the
structure of this logic wrong.

I'm a little concerned with type usage in the non-coherent path too.
It's safe, but is (along side the IOMMU path) a misleading example to
surrounding code.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

I can't think of anything better to do in the short term.

> ---
> The check around the TLB flush which was moved for XSA-401 also looks to
> needlessly trigger a flush when "type" has the bit set (while "x"
> wouldn't). That's no different from original behavior, but still looks
> inefficient.

It's not the only inefficiency here.  Still plenty of improvements to be
had in _get_page_type().

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:17:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345933.571617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZpx-0005xw-3B; Fri, 10 Jun 2022 08:17:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345933.571617; Fri, 10 Jun 2022 08:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzZpx-0005xp-02; Fri, 10 Jun 2022 08:17:45 +0000
Received: by outflank-mailman (input) for mailman id 345933;
 Fri, 10 Jun 2022 08:17:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzZpw-0005xj-4D
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:17:44 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cba9d665-e895-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 10:17:41 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB5471.eurprd04.prod.outlook.com (2603:10a6:803:d0::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 08:17:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 08:17:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cba9d665-e895-11ec-8179-c7c2a468b362
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <661a2177-2352-a33d-3e75-39eacaff1b13@suse.com>
Date: Fri, 10 Jun 2022 10:17:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <ee7e7e1f-8d3c-0d8f-24ef-c281b09faa25@suse.com>
 <93283a6c-0a5b-aa73-7632-21cc9b4cae62@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <93283a6c-0a5b-aa73-7632-21cc9b4cae62@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AUXP273CA0073.AREP273.PROD.OUTLOOK.COM
 (2603:1086:200:44::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f2bd925-4f2c-4b5a-8893-08da4ab9af85
X-MS-TrafficTypeDiagnostic: VI1PR04MB5471:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB5471E25BB563F23A0061DC63B3A69@VI1PR04MB5471.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f2bd925-4f2c-4b5a-8893-08da4ab9af85
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 08:17:40.3015
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CKQniY2Vn5vbMsnFIg3WN426DTMxzxWLUMi9fYDtFCJc1h/B1zyoXIj0/ZMmvk2T9GnkfGPoy6bFlrQvnAEcLQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5471

On 10.06.2022 10:12, Andrew Cooper wrote:
> On 10/06/2022 08:26, Jan Beulich wrote:
>> While PGT_pae_xen_l2 will be zapped once the type refcount of an L2 page
>> reaches zero, it'll be retained as long as the type refcount is non-
>> zero. Hence any checking against the requested type needs to either zap
>> the bit from the type or include it in the used mask.
>>
>> Fixes: 9186e96b199e ("x86/pv: Clean up _get_page_type()")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> pae_xen_l2 being outside of the typemask is deeply confusing to work
> with.  It also renders all of the comments trying to explain the
> structure of this logic wrong.
> 
> I'm a little concerned with type usage in the non-coherent path too. 
> It's safe, but is (along side the IOMMU path) a misleading example to
> surrounding code.
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> I can't think of anything better to do in the short term.
> 
>> ---
>> The check around the TLB flush which was moved for XSA-401 also looks to
>> needlessly trigger a flush when "type" has the bit set (while "x"
>> wouldn't). That's no different from original behavior, but still looks
>> inefficient.
> 
> It's not the only inefficiency here.  Still plenty of improvements to be
> had in _get_page_type().

You did say you have some follow-up changes pending. It wasn't clear to
me whether this particular aspect was among them. If not, I can make
a(nother) patch ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:33:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345945.571639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza56-0000RX-Ku; Fri, 10 Jun 2022 08:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345945.571639; Fri, 10 Jun 2022 08:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza56-0000RO-HW; Fri, 10 Jun 2022 08:33:24 +0000
Received: by outflank-mailman (input) for mailman id 345945;
 Fri, 10 Jun 2022 08:33:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0XP=WR=citrix.com=prvs=1532263ae=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nza54-0000RA-Hw
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:33:22 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb0fb2e4-e897-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:33:20 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 04:33:12 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by PH0PR03MB5767.namprd03.prod.outlook.com (2603:10b6:510:42::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 08:33:08 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.014; Fri, 10 Jun 2022
 08:33:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb0fb2e4-e897-11ec-bd2c-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 73147832
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] iommu: add preemption support to iommu_{un,}map()
Date: Fri, 10 Jun 2022 10:32:48 +0200
Message-Id: <20220610083248.25800-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR04CA0142.eurprd04.prod.outlook.com
 (2603:10a6:20b:48a::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 29fecd27-c0d8-458c-46ef-08da4abbd8aa
X-MS-TrafficTypeDiagnostic: PH0PR03MB5767:EE_
X-Microsoft-Antispam-PRVS:
	<PH0PR03MB5767CB5E03D24072B08F96668FA69@PH0PR03MB5767.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 29fecd27-c0d8-458c-46ef-08da4abbd8aa
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 08:33:08.2755
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pTbKX5BJ65j8M6l1oJv+pdfzuqBA67WOqnQ5ps934Jd5OVAQNRSV2ag7vUQAqyRTcHjLFmtNpjDsnK03Ii1R3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5767

The loop in iommu_{un,}map() can be arbitrarily large, and as such it
needs to handle preemption.  Introduce a new parameter that allows
returning the number of pages that have been processed, and whose
presence also signals whether the function should do preemption
checks.

Note that the cleanup done in iommu_map() can now be incomplete if
preemption has happened, and hence callers would need to take care of
unmapping the whole range (i.e. ranges already mapped by previously
preempted calls).  So far none of the callers care about having those
ranges unmapped, so error handling in iommu_memory_setup() and
arch_iommu_hwdom_init() can be kept as-is.

Note that iommu_legacy_{un,}map() is left without preemption handling:
callers of those interfaces are not modified to pass bigger chunks, and
the functions themselves are legacy interfaces that should be replaced
by iommu_{un,}map() where preemption handling is required.

Fixes: f3185c165d ('IOMMU/x86: perform PV Dom0 mappings in batches')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/pv/dom0_build.c        | 15 ++++++++++++---
 xen/drivers/passthrough/iommu.c     | 26 +++++++++++++++++++-------
 xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++--
 xen/include/xen/iommu.h             |  4 ++--
 4 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 04a4ea3c18..e5a42870ec 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -77,7 +77,8 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
          * iommu_memory_setup() ended up mapping them.
          */
         if ( need_iommu_pt_sync(d) &&
-             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, flush_flags) )
+             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, flush_flags,
+                         NULL) )
             BUG();
 
         /* Read-only mapping + PGC_allocated + page-table page. */
@@ -121,13 +122,21 @@ static void __init iommu_memory_setup(struct domain *d, const char *what,
                                       unsigned int *flush_flags)
 {
     int rc;
+    unsigned long done;
     mfn_t mfn = page_to_mfn(page);
 
     if ( !need_iommu_pt_sync(d) )
         return;
 
-    rc = iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr,
-                   IOMMUF_readable | IOMMUF_writable, flush_flags);
+    while ( (rc = iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr,
+                            IOMMUF_readable | IOMMUF_writable,
+                            flush_flags, &done)) == -ERESTART )
+    {
+        mfn = mfn_add(mfn, done);
+        nr -= done;
+        process_pending_softirqs();
+    }
+
     if ( rc )
     {
         printk(XENLOG_ERR "pre-mapping %s MFN [%lx,%lx) into IOMMU failed: %d\n",
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 75df3aa8dd..5c2a341112 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -310,11 +310,11 @@ static unsigned int mapping_order(const struct domain_iommu *hd,
 
 int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
               unsigned long page_count, unsigned int flags,
-              unsigned int *flush_flags)
+              unsigned int *flush_flags, unsigned long *done)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
-    unsigned int order;
+    unsigned int order, j = 0;
     int rc = 0;
 
     if ( !is_iommu_enabled(d) )
@@ -327,6 +327,12 @@ int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
         dfn_t dfn = dfn_add(dfn0, i);
         mfn_t mfn = mfn_add(mfn0, i);
 
+        if ( done && !(++j & 0xfffff) && general_preempt_check() )
+        {
+            *done = i;
+            return -ERESTART;
+        }
+
         order = mapping_order(hd, dfn, mfn, page_count - i);
 
         rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
@@ -341,7 +347,7 @@ int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
                    d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
 
         /* while statement to satisfy __must_check */
-        while ( iommu_unmap(d, dfn0, i, flush_flags) )
+        while ( iommu_unmap(d, dfn0, i, flush_flags, NULL) )
             break;
 
         if ( !is_hardware_domain(d) )
@@ -365,7 +371,7 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                      unsigned long page_count, unsigned int flags)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags);
+    int rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags, NULL);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
         rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
@@ -374,11 +380,11 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 }
 
 int iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
-                unsigned int *flush_flags)
+                unsigned int *flush_flags, unsigned long *done)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
-    unsigned int order;
+    unsigned int order, j = 0;
     int rc = 0;
 
     if ( !is_iommu_enabled(d) )
@@ -389,6 +395,12 @@ int iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
         dfn_t dfn = dfn_add(dfn0, i);
         int err;
 
+        if ( done && !(++j & 0xfffff) && general_preempt_check() )
+        {
+            *done = i;
+            return -ERESTART;
+        }
+
         order = mapping_order(hd, dfn, _mfn(0), page_count - i);
         err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
                          order, flush_flags);
@@ -425,7 +437,7 @@ int iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
 int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_count, &flush_flags);
+    int rc = iommu_unmap(d, dfn, page_count, &flush_flags, NULL);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
         rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 11a4f244e4..546e6dbe2a 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -403,9 +403,18 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         }
         else if ( pfn != start + count || perms != start_perms )
         {
+            unsigned long done;
+
         commit:
-            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
-                           &flush_flags);
+            while ( (rc = iommu_map(d, _dfn(start), _mfn(start), count,
+                                    start_perms, &flush_flags,
+                                    &done)) == -ERESTART )
+            {
+                start += done;
+                count -= done;
+                process_pending_softirqs();
+            }
+
             if ( rc )
                 printk(XENLOG_WARNING
                        "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 79529adf1f..e6643bcc1c 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,10 +155,10 @@ enum
 
 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                            unsigned long page_count, unsigned int flags,
-                           unsigned int *flush_flags);
+                           unsigned int *flush_flags, unsigned long *done);
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned long page_count,
-                             unsigned int *flush_flags);
+                             unsigned int *flush_flags, unsigned long *done);
 
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                                   unsigned long page_count,
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:34:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345957.571674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza5z-0001J5-BP; Fri, 10 Jun 2022 08:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345957.571674; Fri, 10 Jun 2022 08:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza5z-0001Iw-79; Fri, 10 Jun 2022 08:34:19 +0000
Received: by outflank-mailman (input) for mailman id 345957;
 Fri, 10 Jun 2022 08:34:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7j0Q=WR=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1nza5y-0001Ib-DE
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:34:18 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1cde20e8-e898-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:34:16 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7F7A912FC;
 Fri, 10 Jun 2022 01:34:15 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.4.71])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 331933F73B;
 Fri, 10 Jun 2022 01:34:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cde20e8-e898-11ec-bd2c-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/3] small fixes
Date: Fri, 10 Jun 2022 10:33:55 +0200
Message-Id: <20220610083358.101412-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Small independent fixes.

Michal Orzel (3):
  xen/arm: traps: Fix reference to invalid erratum ID
  xen/arm: gicv2: Rename gicv2_map_hwdown_extra_mappings
  xen/console: Fix incorrect format tags for struct tm members

 xen/arch/arm/gic-v2.c      | 4 ++--
 xen/arch/arm/traps.c       | 2 +-
 xen/drivers/char/console.c | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:34:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:34:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345958.571685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza60-0001ZI-Is; Fri, 10 Jun 2022 08:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345958.571685; Fri, 10 Jun 2022 08:34:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza60-0001Z7-Fr; Fri, 10 Jun 2022 08:34:20 +0000
Received: by outflank-mailman (input) for mailman id 345958;
 Fri, 10 Jun 2022 08:34:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7j0Q=WR=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1nza5z-0001Ib-5j
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:34:19 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1dee1c74-e898-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:34:17 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4D6B41576;
 Fri, 10 Jun 2022 01:34:17 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.4.71])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B28C03F73B;
 Fri, 10 Jun 2022 01:34:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dee1c74-e898-11ec-bd2c-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/3] xen/arm: traps: Fix reference to invalid erratum ID
Date: Fri, 10 Jun 2022 10:33:56 +0200
Message-Id: <20220610083358.101412-2-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610083358.101412-1-michal.orzel@arm.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The comment refers to erratum #8342220, which does not exist. The
correct erratum ID is 834220.

Fixes: 0a7ba2936457 ("xen/arm: arm64: Add Cortex-A57 erratum 834220 workaround")
Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 43f30747cf..e989e742fd 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1856,7 +1856,7 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
      *  1. the stage 2 fault happen during a stage 1 page table walk
      *  (the bit ESR_EL2.S1PTW is set)
      *  2. the fault was due to a translation fault and the processor
-     *  does not carry erratum #8342220
+     *  does not carry erratum #834220
      *
      * Note that technically HPFAR is valid for other cases, but they
      * are currently not supported by Xen.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:34:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345959.571695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza61-0001q1-SN; Fri, 10 Jun 2022 08:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345959.571695; Fri, 10 Jun 2022 08:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza61-0001ps-OP; Fri, 10 Jun 2022 08:34:21 +0000
Received: by outflank-mailman (input) for mailman id 345959;
 Fri, 10 Jun 2022 08:34:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7j0Q=WR=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1nza60-0001Ib-5o
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:34:20 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1ed5fd43-e898-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:34:19 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DD28115BF;
 Fri, 10 Jun 2022 01:34:18 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.4.71])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 95D1D3F73B;
 Fri, 10 Jun 2022 01:34:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ed5fd43-e898-11ec-bd2c-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/3] xen/arm: gicv2: Rename gicv2_map_hwdown_extra_mappings
Date: Fri, 10 Jun 2022 10:33:57 +0200
Message-Id: <20220610083358.101412-3-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610083358.101412-1-michal.orzel@arm.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

... to gicv2_map_hwdom_extra_mappings, as the former clearly contains
a typo ("hwdown" instead of "hwdom").

Fixes: 86b93e00c0b6 ("xen/arm: gicv2: Export GICv2m register frames to Dom0 by device tree")
Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/arch/arm/gic-v2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 2cc2f6bc18..bd773bcc67 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -679,7 +679,7 @@ static void gicv2_irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_m
     spin_unlock(&gicv2.lock);
 }
 
-static int gicv2_map_hwdown_extra_mappings(struct domain *d)
+static int gicv2_map_hwdom_extra_mappings(struct domain *d)
 {
     const struct v2m_data *v2m_data;
 
@@ -1352,7 +1352,7 @@ const static struct gic_hw_operations gicv2_ops = {
     .make_hwdom_madt     = gicv2_make_hwdom_madt,
     .get_hwdom_extra_madt_size = gicv2_get_hwdom_extra_madt_size,
 #endif
-    .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
+    .map_hwdom_extra_mappings = gicv2_map_hwdom_extra_mappings,
     .iomem_deny_access   = gicv2_iomem_deny_access,
     .do_LPI              = gicv2_do_LPI,
 };
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:34:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345961.571707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza67-0002Ec-5f; Fri, 10 Jun 2022 08:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345961.571707; Fri, 10 Jun 2022 08:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza67-0002EU-1y; Fri, 10 Jun 2022 08:34:27 +0000
Received: by outflank-mailman (input) for mailman id 345961;
 Fri, 10 Jun 2022 08:34:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7j0Q=WR=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1nza65-0000uL-7j
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:34:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 131201b3-e898-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 10:33:59 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4743212FC;
 Fri, 10 Jun 2022 01:34:21 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.4.71])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 251CD3F73B;
 Fri, 10 Jun 2022 01:34:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 131201b3-e898-11ec-8179-c7c2a468b362
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm members
Date: Fri, 10 Jun 2022 10:33:58 +0200
Message-Id: <20220610083358.101412-4-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220610083358.101412-1-michal.orzel@arm.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

All the members of struct tm are defined as plain int, but the format
tags used in the console driver's snprintf() calls wrongly expect
unsigned values. Fix the tags to expect signed integers.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/drivers/char/console.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index f9937c5134..beb44fe06f 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -844,7 +844,7 @@ static void printk_start_of_line(const char *prefix)
             /* nothing */;
         else if ( mode == TSM_DATE )
         {
-            snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
+            snprintf(tstr, sizeof(tstr), "[%04d-%02d-%02d %02d:%02d:%02d] ",
                      1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
                      tm.tm_hour, tm.tm_min, tm.tm_sec);
             break;
@@ -852,7 +852,7 @@ static void printk_start_of_line(const char *prefix)
         else
         {
             snprintf(tstr, sizeof(tstr),
-                     "[%04u-%02u-%02u %02u:%02u:%02u.%03"PRIu64"] ",
+                     "[%04d-%02d-%02d %02d:%02d:%02d.%03"PRIu64"] ",
                      1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
                      tm.tm_hour, tm.tm_min, tm.tm_sec, nsec / 1000000);
             break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:37:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:37:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345983.571718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza8e-0003hF-LV; Fri, 10 Jun 2022 08:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345983.571718; Fri, 10 Jun 2022 08:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nza8e-0003h6-I1; Fri, 10 Jun 2022 08:37:04 +0000
Received: by outflank-mailman (input) for mailman id 345983;
 Fri, 10 Jun 2022 08:37:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdog=WR=citrix.com=prvs=1535499d8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nza8d-0003gq-LU
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 08:37:03 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f1c3aa6-e898-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 10:37:02 +0200 (CEST)
Received: from mail-dm6nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 04:36:59 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5393.namprd03.prod.outlook.com (2603:10b6:208:291::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 08:36:57 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Fri, 10 Jun 2022
 08:36:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f1c3aa6-e898-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654850222;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=yYt0UB3eArcM0dqg+8VTE3BsLtm9fT8MM3lYITbAtps=;
  b=QhTqQBqVeCINalDlScvD3WWwcumSEfL0Gz3M+/tPWkL5BMG4MqdA7e+C
   w49YDh2c7q6udi9ccJeROKJAAsrU/G/gQ8OXojbLXkUqAJR5LexrhKBNZ
   C7eVf2SFyXQZfncYXxphVfLoCPJkhIi0mVrPYecJ9wSvg1th8yn1TtzxK
   A=;
X-IronPort-RemoteIP: 104.47.57.171
X-IronPort-MID: 73713353
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,288,1647316800"; 
   d="scan'208";a="73713353"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SMFJCpC5bIDddmNwseEDOsulTmkT1MV5tjq0L8pUxeB4C0KRxn986Jgd+KpDKNaW/fgqxCalrPzp098LdBJgKkX/Cb7AHbrsCfYdUbL2b/YSmBVUwslguZWjL9zDxlBQOAceNY26duHFyFTePgcLWLkC+rj3ehHZXZlbxUfuHRtM1yr7lGDXvgR0KIit+u/PDXAEq5XHUUQklS33ConH5XcPAW9uhFv9FnDpNHIAvHfr2F3+Ywx6Eu7qwnpkHpbr9TC658mqsxzvyERpGp5B9+lQQxL5wyAt+bOO1O/7yeGPa5zS9JctYQQoXRYq+je6IQ70bWZveLlV/cwFzUBOuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yYt0UB3eArcM0dqg+8VTE3BsLtm9fT8MM3lYITbAtps=;
 b=TfMtKKNQGs/0EMDeC507JS+GqghSNUuhCC/aQPoqnSEaPOHTtfmK+Esz2zgmQfoEJTIB6LdUIdDUZfGtd2sJJTKa05II/1JCytGSe7Dld2I2FGCynDrVr30xH3obRhVQj61FbBvHgwIWADacdfFbPhbK0TLQOM7JrvFhFvAiPX0KEdtZM7ZjGRvVT5ob3nEno+uZHPBKq1LFeNMdZYQ62gwIzr7gxh1JU7gbrs/EGbpv8eaXthIZi0pGrmaspR+7+mHYPG9uJ5Ei6ScWk8WTjeyrD+Tf6tT3Y8cq28UvOCC+YmkBLRQpil7XYwGl12ewAtP5FZTEWBnITO4BEE286w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yYt0UB3eArcM0dqg+8VTE3BsLtm9fT8MM3lYITbAtps=;
 b=VE8J+6zHypFZzwYDEnJScA8kdn5i66m5K0CQcQgnQe2ZvtaMuWqLRyPKDwLQQ2D5xSvFqvT3YXwRCRwMfPyh5/ycYHZ83Lq9N6SvQF6Vsi/fiHM5BSVCI0DZTaMkDFL34O9Xj4bDbzw51wEkyBoVHx1IokF7RPqJP+GvKe3XbWA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Thread-Topic: [PATCH] x86/mm: account for PGT_pae_xen_l2 in recently added
 assertion
Thread-Index: AQHYfJt5YVvowoL3VkeMPGEizKI7eK1ISn8AgAABW4CAAAVsgA==
Date: Fri, 10 Jun 2022 08:36:57 +0000
Message-ID: <225cf643-56d8-dd09-8313-628d16e3f9ef@citrix.com>
References: <ee7e7e1f-8d3c-0d8f-24ef-c281b09faa25@suse.com>
 <93283a6c-0a5b-aa73-7632-21cc9b4cae62@citrix.com>
 <661a2177-2352-a33d-3e75-39eacaff1b13@suse.com>
In-Reply-To: <661a2177-2352-a33d-3e75-39eacaff1b13@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: fde0f242-f655-40f9-24c7-08da4abc619a
x-ms-traffictypediagnostic: BLAPR03MB5393:EE_
x-microsoft-antispam-prvs:
 <BLAPR03MB53938F26C811F2FC0E9806C2BAA69@BLAPR03MB5393.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <746DF271CF1B354297233DCAD992575A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fde0f242-f655-40f9-24c7-08da4abc619a
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2022 08:36:57.7676
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 4irZ0TfRxenITXAnN87nsexybg3IMd2UTyhNEAu/yZ9zS2AN6DD/nbi/SbVkPABlUe4tMEOQH7zkgt5KCydPze2Z86HreAm9Jat+UF/wLO8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5393

On 10/06/2022 09:17, Jan Beulich wrote:
> On 10.06.2022 10:12, Andrew Cooper wrote:
>> On 10/06/2022 08:26, Jan Beulich wrote:
>>> ---
>>> The check around the TLB flush which was moved for XSA-401 also looks to
>>> needlessly trigger a flush when "type" has the bit set (while "x"
>>> wouldn't). That's no different from original behavior, but still looks
>>> inefficient.
>> It's not the only inefficiency here.  Still plenty of improvements to be
>> had in _get_page_type().
> You did say you have some follow-up changes pending. It wasn't clear to
> me whether this particular aspect was among them. If not, I can make
> a(nother) patch ...

At this point, it's probably more accurate to say that I've got a pile
of plans about making improvements, rather than a pile of patches.

The major improvement is the early exit for PGT_validated, making the
second half of the function exclusively for the validate-locked state.

Other improvements (off the top of my head) are shuffling the TLB flush
setup logic to not even do the tlb filter calculations if we're going to
skip the flush call anyway, deduping the page_get_owner()
calls/variables, sorting out PGC_page_table naming/semantics, reducing
the number of redundant ways we've got of expressing the same logic
(there are a lot of invariants between x, nx and type), better
explanation of the iommu behaviour, and better explanation of the tlb
flushing safety requirements.

If you think some of that's easy enough to do, then feel free.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:38:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 08:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.345996.571729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaAF-0004Js-1q; Fri, 10 Jun 2022 08:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 345996.571729; Fri, 10 Jun 2022 08:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaAE-0004Jl-UT; Fri, 10 Jun 2022 08:38:42 +0000
Received: by outflank-mailman (input) for mailman id 345996;
 Fri, 10 Jun 2022 08:38:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzaAD-0004Jb-Ke; Fri, 10 Jun 2022 08:38:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzaAD-0004Fu-Fc; Fri, 10 Jun 2022 08:38:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzaAC-0003S8-UP; Fri, 10 Jun 2022 08:38:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzaAC-0002Wo-Ty; Fri, 10 Jun 2022 08:38:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Egf5LbuLoqo7ukc/GwPpmVLluRufeNEpfen+Nbg7Ygk=; b=srv5QBqFGlWPTZaEUc46WLqKqG
	ujXvIj2wuDMO/yq63xLOKryNU0FxlZHnDBHESOvUaljWyIJZ5SNPFajhCwSS3lFvKk+NZ8Gn1qmFf
	MuCb7KRb461XnjiKEY5u63IfS60OvSS0GtbYFe6qDI2adIsdZy3Nv+hVdJOlWgBseYFw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170904-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 170904: regressions - FAIL
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt-raw:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-4.15-testing:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-4.15-testing:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-vhd:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-4.15-testing:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-4.15-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-3:xtf/test-pv32pae-pv-iopl~hypercall:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-3:leak-check/check:fail:regression
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-1:xtf/test-pv32pae-pv-iopl~vmassist:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-1:leak-check/check:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-5:xtf/test-pv32pae-xsa-188:fail:regression
    xen-4.15-testing:test-xtf-amd64-amd64-5:leak-check/check:fail:regression
    xen-4.15-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:regression
    xen-4.15-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a851dbce6876cd2c5689f912787c15cc507cbb39
X-Osstest-Versions-That:
    xen=64249afeb63cf7d70b4faf02e76df5eed82371f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 08:38:40 +0000

flight 170904 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170870
 test-amd64-i386-libvirt-raw   8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170870
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 170870
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 170870
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 170870
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170870
 test-amd64-i386-xl-vhd        8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 170870
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 170870
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170870
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170870
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 170870
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 170870
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170870
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 170870
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 170870
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170870
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170870
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 170870
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 170870
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 170870
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170870
 test-xtf-amd64-amd64-3 82 xtf/test-pv32pae-pv-iopl~hypercall fail REGR. vs. 170870
 test-xtf-amd64-amd64-3       83 leak-check/check         fail REGR. vs. 170870
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170870
 test-xtf-amd64-amd64-1 83 xtf/test-pv32pae-pv-iopl~vmassist fail REGR. vs. 170870
 test-xtf-amd64-amd64-1       84 leak-check/check         fail REGR. vs. 170870
 test-xtf-amd64-amd64-5       87 xtf/test-pv32pae-xsa-188 fail REGR. vs. 170870
 test-xtf-amd64-amd64-5       88 leak-check/check         fail REGR. vs. 170870
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail REGR. vs. 170870

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 170870

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170870
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170870
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170870
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170870
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170870
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170870
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170870
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170870
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170870
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a851dbce6876cd2c5689f912787c15cc507cbb39
baseline version:
 xen                  64249afeb63cf7d70b4faf02e76df5eed82371f9

Last test of basis   170870  2022-06-07 12:36:54 Z    2 days
Testing same since   170904  2022-06-09 14:07:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       fail    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       fail    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a851dbce6876cd2c5689f912787c15cc507cbb39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:37:52 2022 +0200

    x86/pv: Track and flush non-coherent mappings of RAM
    
    There are legitimate uses of WC mappings of RAM, e.g. for DMA buffers with
    devices that make non-coherent writes.  The Linux sound subsystem makes
    extensive use of this technique.
    
    For such use cases, the guest's DMA buffer is mapped and consistently used as
    WC, and Xen doesn't interact with the buffer.
    
    However, a mischievous guest can use WC mappings to deliberately create
    non-coherency between the cache and RAM, and use this to trick Xen into
    validating a pagetable which isn't actually safe.
    
    Allocate a new PGT_non_coherent to track the non-coherency of mappings.  Set
    it whenever a non-coherent writeable mapping is created.  If the page is used
    as anything other than PGT_writable_page, force a cache flush before
    validation.  Also force a cache flush before the page is returned to the heap.
    
    This is CVE-2022-26364, part of XSA-402.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c1c9cae3a9633054b177c5de21ad7268162b2f2c
    master date: 2022-06-09 14:23:37 +0200
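
[Editorial note: the tracking scheme this commit message describes can be sketched as below. All names and bit values here are invented stand-ins (the real PGT_* flags live in Xen's mm headers), and the cache flush is stubbed to a counter so the control flow is visible.]

```c
#include <stdint.h>

/* Hypothetical flag values modelled on the commit message. */
#define PGT_writable_page 0x1u
#define PGT_l1_page_table 0x2u
#define PGT_non_coherent  0x80u   /* page may have dirty WC-written lines */

struct page { uint32_t type_info; };

static unsigned int flushes;                      /* stand-in for the real flush */
static void cache_flush_page(struct page *pg) { (void)pg; ++flushes; }

/* Called whenever a non-coherent (e.g. WC) writeable mapping is created. */
static void note_non_coherent_mapping(struct page *pg)
{
    pg->type_info |= PGT_non_coherent;
}

/* Before the page is used as anything other than plain writeable RAM,
 * force the cache contents out so validation sees what RAM really holds. */
static void validate_as(struct page *pg, uint32_t new_type)
{
    if ( new_type != PGT_writable_page && (pg->type_info & PGT_non_coherent) )
    {
        cache_flush_page(pg);
        pg->type_info &= ~PGT_non_coherent;
    }
    pg->type_info = new_type | (pg->type_info & PGT_non_coherent);
}
```

The same flush-before-reuse rule applies when the page goes back to the heap, for the same reason: stale non-coherent lines must not survive into the next user.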

commit 890efc0d2e64c92f708bf0e5acd0257181dcbacf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:37:07 2022 +0200

    x86/amd: Work around CLFLUSH ordering on older parts
    
    On pre-CLFLUSHOPT AMD CPUs, CLFLUSH is weakly ordered with everything,
    including reads and writes to the address, and LFENCE/SFENCE instructions.
    
    This creates a multitude of problematic corner cases, laid out in the manual.
    Arrange to use MFENCE on both sides of the CLFLUSH to force proper ordering.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 062868a5a8b428b85db589fa9a6d6e43969ffeb9
    master date: 2022-06-09 14:23:07 +0200

commit 78fd76e1881d8fac3bec5f8a2ddd53d8a3de7762
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:36:40 2022 +0200

    x86: Split cache_flush() out of cache_writeback()
    
    Subsequent changes will want a fully flushing version.
    
    Use the new helper rather than opencoding it in flush_area_local().  This
    resolves an outstanding issue where the conditional sfence is on the wrong
    side of the clflushopt loop.  clflushopt is ordered with respect to older
    stores, not to younger stores.
    
    Rename gnttab_cache_flush()'s helper to avoid colliding in name.
    grant_table.c can see the prototype from cache.h so the build fails
    otherwise.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 9a67ffee3371506e1cbfdfff5b90658d4828f6a2
    master date: 2022-06-09 14:22:38 +0200

commit 9b1e1e74a6c23ffad4c6a78973995957db2d4cc7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:36:15 2022 +0200

    x86: Don't change the cacheability of the directmap
    
    Changeset 55f97f49b7ce ("x86: Change cache attributes of Xen 1:1 page mappings
    in response to guest mapping requests") attempted to keep the cacheability
    consistent between different mappings of the same page.
    
    The reason wasn't described in the changelog, but it is understood to be in
    regards to a concern over machine check exceptions, owing to errata when using
    mixed cacheabilities.  It did this primarily by updating Xen's mapping of the
    page in the direct map when the guest mapped a page with reduced cacheability.
    
    Unfortunately, the logic didn't actually prevent mixed cacheability from
    occurring:
     * A guest could map a page normally, and then map the same page with
       different cacheability; nothing prevented this.
     * The cacheability of the directmap was always latest-takes-precedence in
       terms of guest requests.
     * Grant-mapped frames with lesser cacheability didn't adjust the page's
       cacheattr settings.
     * The map_domain_page() function still unconditionally created WB mappings,
       irrespective of the page's cacheattr settings.
    
    Additionally, update_xen_mappings() had a bug where the alias calculation
    was wrong for MFNs which were .init content, which should have been treated
    as fully guest pages, not Xen pages.
    
    Worse yet, the logic introduced a vulnerability whereby necessary
    pagetable/segdesc adjustments made by Xen in the validation logic could become
    non-coherent between the cache and main memory.  The CPU could subsequently
    operate on the stale value in the cache, rather than the safe value in main
    memory.
    
    The directmap contains primarily mappings of RAM.  PAT/MTRR conflict
    resolution is asymmetric, and generally for MTRR=WB ranges, PAT of lesser
    cacheability resolves to being coherent.  The special case is WC mappings,
    which are non-coherent against MTRR=WB regions (except for fully-coherent
    CPUs).
    
    Xen must not have any WC cacheability in the directmap, to prevent Xen's
    actions from creating non-coherency.  (Guest actions creating non-coherency
    are dealt with in subsequent patches.)  As all memory types for MTRR=WB
    ranges inter-operate coherently, Xen's directmap mappings are left as WB.
    
    Only PV guests with access to devices can use reduced-cacheability mappings to
    begin with, and they're trusted not to mount DoSs against the system anyway.
    
    Drop PGC_cacheattr_{base,mask} entirely, and the logic to manipulate them.
    Shift the later PGC_* constants up, to gain 3 extra bits in the main reference
    count.  Retain the check in get_page_from_l1e() for special_pages() because a
    guest has no business using reduced cacheability on these.
    
    This reverts changeset 55f97f49b7ce6c3520c555d19caac6cf3f9a5df0
    
    This is CVE-2022-26363, part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: ae09597da34aee6bc5b76475c5eea6994457e854
    master date: 2022-06-09 14:22:08 +0200
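The "3 extra bits in the main reference count" gained by dropping PGC_cacheattr can be checked with back-of-envelope arithmetic. The bit counts below are purely illustrative, not Xen's actual count_info layout:

```c
#include <assert.h>
#include <stdint.h>

/* If a 3-bit cacheattr field is dropped from a 64-bit count_info word,
 * the flag block shrinks by 3 bits and the reference count grows by 3
 * bits, i.e. its maximum value grows 8-fold. */
static uint64_t max_refcount(unsigned int flag_bits)
{
    return (UINT64_C(1) << (64 - flag_bits)) - 1;
}
```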

commit 887b5ff2938ae256aa14df751816d830b5152d49
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:35:40 2022 +0200

    x86/page: Introduce _PAGE_* constants for memory types
    
    ... rather than opencoding the PAT/PCD/PWT attributes in __PAGE_HYPERVISOR_*
    constants.  These are going to be needed by forthcoming logic.
    
    No functional change.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1be8707c75bf4ba68447c74e1618b521dd432499
    master date: 2022-06-09 14:21:38 +0200
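For context, the architectural PTE attribute bits involved are PWT (bit 3), PCD (bit 4), and PAT (bit 7, for 4K mappings); together they form a 3-bit index into the PAT MSR. Which memory type each index selects depends on how the hypervisor programs that MSR, so this sketch only shows the index computation, not Xen's actual _PAGE_* values:

```c
#include <assert.h>
#include <stdint.h>

/* Architectural x86 PTE attribute bit positions (4K mappings). */
#define _PAGE_PWT (1u << 3)
#define _PAGE_PCD (1u << 4)
#define _PAGE_PAT (1u << 7)

/* PAT index = PAT:PCD:PWT, selecting one of 8 PAT MSR entries. */
static unsigned int pat_index(uint64_t pte)
{
    return !!(pte & _PAGE_PWT) |
           (!!(pte & _PAGE_PCD) << 1) |
           (!!(pte & _PAGE_PAT) << 2);
}
```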

commit 82a94a179cae9fca3ecebe6c26868072088b8e3c
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:35:23 2022 +0200

    x86/pv: Fix ABAC cmpxchg() race in _get_page_type()
    
    _get_page_type() suffers from a race condition where it incorrectly assumes
    that because 'x' was read and a subsequent cmpxchg() succeeds, the type
    cannot have changed in-between.  Consider:
    
    CPU A:
      1. Creates an L2e referencing pg
         `-> _get_page_type(pg, PGT_l1_page_table), sees count 0, type PGT_writable_page
      2.     Issues flush_tlb_mask()
    CPU B:
      3. Creates a writeable mapping of pg
         `-> _get_page_type(pg, PGT_writable_page), count increases to 1
      4. Writes into new mapping, creating a TLB entry for pg
      5. Removes the writeable mapping of pg
         `-> _put_page_type(pg), count goes back down to 0
    CPU A:
      7.     Issues cmpxchg(), setting count 1, type PGT_l1_page_table
    
    CPU B now has a writeable mapping to pg, which Xen believes is a pagetable and
    suitably protected (i.e. read-only).  The TLB flush in step 2 must be deferred
    until after the guest is prohibited from creating new writeable mappings,
    which is after step 7.
    
    Defer all safety actions until after the cmpxchg() has successfully taken the
    intended typeref, because that is what prevents concurrent users from using
    the old type.
    
    Also remove the early validation for writeable and shared pages.  This removes
    race conditions where one half of a parallel mapping attempt can return
    successfully before:
     * The IOMMU pagetables are in sync with the new page type
     * Writeable mappings to shared pages have been torn down
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 8cc5036bc385112a82f1faff27a0970e6440dfed
    master date: 2022-06-09 14:21:04 +0200
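The corrected ordering can be sketched with C11 atomics. The field layout and names below are illustrative, not Xen's real PGT_* encoding: the point is that the cmpxchg() taking the typeref happens first, and only then are the safety actions (such as the TLB flush) performed, because the held typeref is what blocks a concurrent writeable mapping:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative layout: type in the top nibble, refcount below. */
#define PGT_type_mask      0xf0000000u
#define PGT_writable_page  0x10000000u
#define PGT_l1_page_table  0x20000000u
#define PGT_count_mask     0x0fffffffu

static int get_page_type(_Atomic uint32_t *type_info, uint32_t type,
                         int *tlb_flushed)
{
    uint32_t x = atomic_load(type_info);

    for ( ; ; )
    {
        if ( (x & PGT_count_mask) && (x & PGT_type_mask) != type )
            return -1;                    /* held with a conflicting type */

        uint32_t nx = ((x & ~PGT_type_mask) | type) + 1;  /* take typeref */

        if ( atomic_compare_exchange_weak(type_info, &x, nx) )
            break;                        /* on failure, x is re-read */
    }

    /* Safe only now: the held typeref blocks new writeable mappings. */
    *tlb_flushed = 1;                     /* stand-in for flush_tlb_mask() */
    return 0;
}
```

In the buggy ordering, the flush in step 2 happened before the cmpxchg in step 7, leaving the window in which CPU B creates and caches a writeable TLB entry.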

commit cc74ff882364317e3667c6251cc79bc76fa20ae4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:35:00 2022 +0200

    x86/pv: Clean up _get_page_type()
    
    Various fixes for clarity, ahead of making complicated changes.
    
     * Split the overflow check out of the if/else chain for type handling, as
       it's somewhat unrelated.
     * Comment the main if/else chain to explain what is going on.  Adjust one
       ASSERT() and state the bit layout for validate-locked and partial states.
     * Correct the comment about TLB flushing, as it's backwards.  The problem
       case is when writeable mappings are retained to a page becoming read-only,
       as it allows the guest to bypass Xen's safety checks for updates.
     * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
       valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
       all reads as explicitly volatile.  The only thing preventing the validated
       wait-loop being infinite is the compiler barrier hidden in cpu_relax().
     * Replace one page_get_owner(page) with the already-calculated 'd' in
       scope.
    
    No functional change.
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
    master date: 2022-06-09 14:20:36 +0200
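The ACCESS_ONCE() idiom mentioned above (as used in Xen and Linux, GCC/Clang `__typeof__`) routes the access through a volatile-qualified lvalue, so the compiler must emit exactly one load or store per use and a wait-loop re-reads memory on every iteration instead of caching the value:

```c
#include <assert.h>
#include <stdint.h>

/* Force a single, non-cached load or store of x per evaluation. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))
```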
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 08:59:58 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170903-mainreport@xen.org>
Subject: [xen-4.14-testing test] 170903: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 08:59:51 +0000

flight 170903 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170903/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 169330
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 169330
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 169330
 test-amd64-i386-xl-vhd        8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 169330
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 169330
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 169330
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 169330
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 169330
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 169330
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 169330
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 169330
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 169330
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 169330
 test-amd64-i386-libvirt-raw   7 xen-install              fail REGR. vs. 169330
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 169330
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 169330
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 169330
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 169330
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 169330
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 169330
 test-xtf-amd64-amd64-2       89 xtf/test-pv32pae-xsa-213 fail REGR. vs. 169330
 test-xtf-amd64-amd64-2       90 leak-check/check         fail REGR. vs. 169330
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 169330
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 169330

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169330
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169330
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169330
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169330
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169330
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169330
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169330
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169330
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  82ba97ec6b24f3e92fd1749962154cedf2addc5d
baseline version:
 xen                  17848dfed47f52b479c4e7eb412671aec5757329

Last test of basis   169330  2022-04-12 10:36:29 Z   58 days
Testing same since   170903  2022-06-09 14:07:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       fail    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 82ba97ec6b24f3e92fd1749962154cedf2addc5d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:52:11 2022 +0200

    x86/pv: Track and flush non-coherent mappings of RAM
    
    There are legitimate uses of WC mappings of RAM, e.g. for DMA buffers with
    devices that make non-coherent writes.  The Linux sound subsystem makes
    extensive use of this technique.
    
    For such use cases, the guest's DMA buffer is mapped and consistently used as
    WC, and Xen doesn't interact with the buffer.
    
    However, a mischievous guest can use WC mappings to deliberately create
    non-coherency between the cache and RAM, and use this to trick Xen into
    validating a pagetable which isn't actually safe.
    
    Allocate a new PGT_non_coherent to track the non-coherency of mappings.  Set
    it whenever a non-coherent writeable mapping is created.  If the page is used
    as anything other than PGT_writable_page, force a cache flush before
    validation.  Also force a cache flush before the page is returned to the heap.
    
    This is CVE-2022-26364, part of XSA-402.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c1c9cae3a9633054b177c5de21ad7268162b2f2c
    master date: 2022-06-09 14:23:37 +0200

commit 25c7adeefa7538d1f88bab1859ce77f8b46f229e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:51:47 2022 +0200

    x86/amd: Work around CLFLUSH ordering on older parts
    
    On pre-CLFLUSHOPT AMD CPUs, CLFLUSH is weakly ordered with everything,
    including reads and writes to the address, and LFENCE/SFENCE instructions.
    
    This creates a multitude of problematic corner cases, laid out in the manual.
    Arrange to use MFENCE on both sides of the CLFLUSH to force proper ordering.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 062868a5a8b428b85db589fa9a6d6e43969ffeb9
    master date: 2022-06-09 14:23:07 +0200

commit 204d4f16506334a0398649c714c19349145589be
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:51:23 2022 +0200

    x86: Split cache_flush() out of cache_writeback()
    
    Subsequent changes will want a fully flushing version.
    
    Use the new helper rather than opencoding it in flush_area_local().  This
    resolves an outstanding issue where the conditional sfence is on the wrong
    side of the clflushopt loop.  clflushopt is ordered with respect to older
    stores, not to younger stores.
    
    Rename gnttab_cache_flush()'s helper to avoid colliding in name.
    grant_table.c can see the prototype from cache.h so the build fails
    otherwise.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 9a67ffee3371506e1cbfdfff5b90658d4828f6a2
    master date: 2022-06-09 14:22:38 +0200

commit 07fbed87582c117262541a9a0903848a36adcc79
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:50:59 2022 +0200

    x86: Don't change the cacheability of the directmap
    
    Changeset 55f97f49b7ce ("x86: Change cache attributes of Xen 1:1 page mappings
    in response to guest mapping requests") attempted to keep the cacheability
    consistent between different mappings of the same page.
    
    The reason wasn't described in the changelog, but it is understood to be in
    regard to a concern over machine check exceptions, owing to errata when using
    mixed cacheabilities.  It did this primarily by updating Xen's mapping of the
    page in the direct map when the guest mapped a page with reduced cacheability.
    
    Unfortunately, the logic didn't actually prevent mixed cacheability from
    occurring:
     * A guest could map a page normally, and then map the same page with
       different cacheability; nothing prevented this.
     * The cacheability of the directmap was always latest-takes-precedence in
       terms of guest requests.
     * Grant-mapped frames with lesser cacheability didn't adjust the page's
       cacheattr settings.
     * The map_domain_page() function still unconditionally created WB mappings,
       irrespective of the page's cacheattr settings.
    
    Additionally, update_xen_mappings() had a bug where the alias calculation was
    wrong for MFNs which were .init content, which should have been treated as
    fully guest pages, not Xen pages.
    
    Worse yet, the logic introduced a vulnerability whereby necessary
    pagetable/segdesc adjustments made by Xen in the validation logic could become
    non-coherent between the cache and main memory.  The CPU could subsequently
    operate on the stale value in the cache, rather than the safe value in main
    memory.
    
    The directmap contains primarily mappings of RAM.  PAT/MTRR conflict
    resolution is asymmetric, and generally for MTRR=WB ranges, PAT of lesser
    cacheability resolves to being coherent.  The special case is WC mappings,
    which are non-coherent against MTRR=WB regions (except for fully-coherent
    CPUs).
    
    Xen must not have any WC cacheability in the directmap, to prevent Xen's
    actions from creating non-coherency.  (Guest actions creating non-coherency are
    dealt with in subsequent patches.)  All memory types for MTRR=WB ranges
    inter-operate coherently, so leave Xen's directmap mappings as WB.
    
    Only PV guests with access to devices can use reduced-cacheability mappings to
    begin with, and they're trusted not to mount DoSs against the system anyway.
    
    Drop PGC_cacheattr_{base,mask} entirely, and the logic to manipulate them.
    Shift the later PGC_* constants up, to gain 3 extra bits in the main reference
    count.  Retain the check in get_page_from_l1e() for special_pages() because a
    guest has no business using reduced cacheability on these.
    
    This reverts changeset 55f97f49b7ce6c3520c555d19caac6cf3f9a5df0
    
    This is CVE-2022-26363, part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: ae09597da34aee6bc5b76475c5eea6994457e854
    master date: 2022-06-09 14:22:08 +0200

commit a72146db9e9719f16bf2cab2fc9ac7a0d8d7ee3f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:50:32 2022 +0200

    x86/page: Introduce _PAGE_* constants for memory types
    
    ... rather than opencoding the PAT/PCD/PWT attributes in __PAGE_HYPERVISOR_*
    constants.  These are going to be needed by forthcoming logic.
    
    No functional change.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1be8707c75bf4ba68447c74e1618b521dd432499
    master date: 2022-06-09 14:21:38 +0200

commit 758f40d7fa7e98ef2d2772ef8f0f57eabde028bd
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:50:16 2022 +0200

    x86/pv: Fix ABAC cmpxchg() race in _get_page_type()
    
    _get_page_type() suffers from a race condition where it incorrectly assumes
    that because 'x' was read and a subsequent cmpxchg() succeeds, the type
    cannot have changed in-between.  Consider:
    
    CPU A:
      1. Creates an L2e referencing pg
         `-> _get_page_type(pg, PGT_l1_page_table), sees count 0, type PGT_writable_page
      2.     Issues flush_tlb_mask()
    CPU B:
      3. Creates a writeable mapping of pg
         `-> _get_page_type(pg, PGT_writable_page), count increases to 1
      4. Writes into new mapping, creating a TLB entry for pg
      5. Removes the writeable mapping of pg
         `-> _put_page_type(pg), count goes back down to 0
    CPU A:
      6.     Issues cmpxchg(), setting count 1, type PGT_l1_page_table
    
    CPU B now has a writeable mapping to pg, which Xen believes is a pagetable and
    suitably protected (i.e. read-only).  The TLB flush in step 2 must be deferred
    until after the guest is prohibited from creating new writeable mappings,
    which is after step 6.
    
    Defer all safety actions until after the cmpxchg() has successfully taken the
    intended typeref, because that is what prevents concurrent users from using
    the old type.
    
    Also remove the early validation for writeable and shared pages.  This removes
    race conditions where one half of a parallel mapping attempt can return
    successfully before:
     * The IOMMU pagetables are in sync with the new page type
     * Writeable mappings to shared pages have been torn down
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 8cc5036bc385112a82f1faff27a0970e6440dfed
    master date: 2022-06-09 14:21:04 +0200

commit c70071eb6c6d43f96d0d9e2f2446de491c8ed527
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:49:50 2022 +0200

    x86/pv: Clean up _get_page_type()
    
    Various fixes for clarity, ahead of making complicated changes.
    
     * Split the overflow check out of the if/else chain for type handling, as
       it's somewhat unrelated.
     * Comment the main if/else chain to explain what is going on.  Adjust one
       ASSERT() and state the bit layout for validate-locked and partial states.
     * Correct the comment about TLB flushing, as it's backwards.  The problem
       case is when writeable mappings are retained to a page becoming read-only,
       as it allows the guest to bypass Xen's safety checks for updates.
     * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
       valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
       all reads as explicitly volatile.  The only thing preventing the validated
       wait-loop being infinite is the compiler barrier hidden in cpu_relax().
     * Replace one page_get_owner(page) with the already-calculated 'd' in
       scope.
    
    No functional change.
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
    master date: 2022-06-09 14:20:36 +0200
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:07:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346036.571753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzabr-0000ee-Tm; Fri, 10 Jun 2022 09:07:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346036.571753; Fri, 10 Jun 2022 09:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzabr-0000eX-R9; Fri, 10 Jun 2022 09:07:15 +0000
Received: by outflank-mailman (input) for mailman id 346036;
 Fri, 10 Jun 2022 09:07:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzabp-0000eR-Ub
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:07:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzabp-0004lV-IP; Fri, 10 Jun 2022 09:07:13 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzabp-0000BA-CX; Fri, 10 Jun 2022 09:07:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=POEapIdviqEu6mQ7zmme4MuYDSiX9aOswANa/EfbrKw=; b=FHqml65o37HW6VYXiWZVgn6rrt
	1AZV8OuvZkRA8PzIW/pYny4q/aWThnaqiHyUH8feKhl+gSEWabz8x0aLB43gUdo21fzlQBYWjiW3m
	V2zK0fWy2zbaUvaQWwOHCKgDCrk44khLY1rxujREpdf7U7u6TAjDgmgRd6W/JiuEqYZU=;
Message-ID: <ef705e00-17da-a8df-9a0f-27eb7ef686ed@xen.org>
Date: Fri, 10 Jun 2022 10:07:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/3] xen/arm: gicv2: Rename
 gicv2_map_hwdown_extra_mappings
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-3-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220610083358.101412-3-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 10/06/2022 09:33, Michal Orzel wrote:
> ... to gicv2_map_hwdom_extra_mappings as the former clearly contains
> a typo.
> 
> Fixes: 86b93e00c0b6 ("xen/arm: gicv2: Export GICv2m register frames to Dom0 by device tree")

NIT: In general, Fixes tags are used for bugs (i.e. Xen would not function 
properly without the fix, and it likely needs backporting). Even if the 
name is incorrect, there is no bug here. So my preference is to drop this tag.

Other than that:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:10:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:10:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346047.571765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaeV-0001Hi-Ds; Fri, 10 Jun 2022 09:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346047.571765; Fri, 10 Jun 2022 09:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaeV-0001Hb-B3; Fri, 10 Jun 2022 09:09:59 +0000
Received: by outflank-mailman (input) for mailman id 346047;
 Fri, 10 Jun 2022 09:09:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzaeU-0001HV-Sl
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:09:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzaeU-0004na-L9; Fri, 10 Jun 2022 09:09:58 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzaeU-0000PT-FR; Fri, 10 Jun 2022 09:09:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2Jd1FXCCivjU8G/FVgbQacl2ETTk0v/5XHWSeEw9jBw=; b=lFB+jCriQJUvmItGGFdbQvasWw
	yIcQb/ecYpvKABJ7lr8ctAkfobTyh02/PqEE+EZrdcWyo4Qf6a2v8PmK9HMhzRm35ipD/MESllwUp
	GCbykdKNF3l25N5ikplKL3Wgj/qAM66W0GDmjl5ouZm8P8n5ROKLtBSe7Zt7XOTh1g4E=;
Message-ID: <79b286dd-8d79-093b-2bad-e12d237bc1f6@xen.org>
Date: Fri, 10 Jun 2022 10:09:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/3] xen/arm: traps: Fix reference to invalid erratum ID
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-2-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220610083358.101412-2-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 10/06/2022 09:33, Michal Orzel wrote:
> The correct erratum ID should be 834220.
> 
> Fixes: 0a7ba2936457 ("xen/arm: arm64: Add Cortex-A57 erratum 834220 workaround")

Despite my answer on patch #2, this is one of the exceptions where I would 
keep the Fixes tag, because we should try to keep the documentation correct 
even in stable trees (this is more important than a typo in a name).

> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:13:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346057.571776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzai7-0002xZ-0H; Fri, 10 Jun 2022 09:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346057.571776; Fri, 10 Jun 2022 09:13:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzai6-0002xS-SF; Fri, 10 Jun 2022 09:13:42 +0000
Received: by outflank-mailman (input) for mailman id 346057;
 Fri, 10 Jun 2022 09:13:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LeXp=WR=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1nzai6-0002xM-3c
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:13:42 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 9e6ed50d-e89d-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:13:41 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7E19C12FC;
 Fri, 10 Jun 2022 02:13:40 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2D84F3F73B;
 Fri, 10 Jun 2022 02:13:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e6ed50d-e89d-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3] xen: Add MISRA support to cppcheck make rule
Date: Fri, 10 Jun 2022 10:13:31 +0100
Message-Id: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The cppcheck MISRA addon can be used to check for non-compliance with some
of the MISRA standard rules.

Add a CPPCHECK_MISRA variable that can be set to "y" on the make command
line to generate a cppcheck report that includes the cppcheck MISRA checks.

When MISRA checking is enabled, a text-description file suitable for the
cppcheck MISRA addon is generated from the Xen documentation file that
lists the rules followed by Xen (docs/misra/rules.rst).

By default MISRA checking is turned off.
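A minimal sketch of the rules.rst parsing that the conversion script performs (the sample input row below is made up for illustration; the real format is defined by docs/misra/rules.rst and the regexes in the patch):

```python
import re

# The script matches list-table rows of the form "   * - `Rule N.M ...`"
# (and the equivalent "Dir" form) and collects the rule numbers.
pattern_rule = re.compile(r'^   \* - `Rule ([0-9]+\.[0-9]+).*$')

sample = [
    "   * - `Rule 1.3 <https://example.invalid/rule_01_03>`_",  # made-up row
    "     - Required",
]

rules = []
for line in sample:
    res = pattern_rule.match(line)
    if res:
        rules.append(res.group(1))

print(rules)
```

Per the description above, the resulting files are then consumed by cppcheck when CPPCHECK_MISRA=y is passed on the make command line.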

While adding the cppcheck-misra files to .gitignore, also fix the missing
trailing '/' in the htmlreport .gitignore entry.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v3:
- fix .gitignore order for cppcheck entries
- adapt python script to new rules.rst format
Changes in v2:
- fix missing / for htmlreport
- use wildcard for cppcheck-misra remove and gitignore
- fix comment in makefile
- fix dependencies for generation of json and txt file
---
 .gitignore                     |   5 +-
 xen/Makefile                   |  29 +++++-
 xen/tools/convert_misra_doc.py | 173 +++++++++++++++++++++++++++++++++
 3 files changed, 203 insertions(+), 4 deletions(-)
 create mode 100755 xen/tools/convert_misra_doc.py

diff --git a/.gitignore b/.gitignore
index 18ef56a780..c9951063c3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -297,7 +297,6 @@ xen/.banner
 xen/.config
 xen/.config.old
 xen/.xen.elf32
-xen/xen-cppcheck.xml
 xen/System.map
 xen/arch/x86/boot/mkelf32
 xen/arch/x86/boot/cmdline.S
@@ -318,7 +317,8 @@ xen/arch/*/efi/runtime.c
 xen/arch/*/include/asm/asm-offsets.h
 xen/common/config_data.S
 xen/common/config.gz
-xen/cppcheck-htmlreport
+xen/cppcheck-htmlreport/
+xen/cppcheck-misra.*
 xen/include/headers*.chk
 xen/include/compat/*
 xen/include/config/
@@ -347,6 +347,7 @@ xen/xsm/flask/xenpolicy-*
 tools/flask/policy/policy.conf
 tools/flask/policy/xenpolicy-*
 xen/xen
+xen/xen-cppcheck.xml
 xen/xen-syms
 xen/xen-syms.map
 xen/xen.*
diff --git a/xen/Makefile b/xen/Makefile
index 82f5310b12..a4dce29efd 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -548,7 +548,7 @@ _clean:
 	rm -f include/asm $(TARGET) $(TARGET).gz $(TARGET).efi $(TARGET).efi.map $(TARGET)-syms $(TARGET)-syms.map
 	rm -f asm-offsets.s arch/*/include/asm/asm-offsets.h
 	rm -f .banner .allconfig.tmp include/xen/compile.h
-	rm -f xen-cppcheck.xml
+	rm -f cppcheck-misra.* xen-cppcheck.xml
 
 .PHONY: _distclean
 _distclean: clean
@@ -642,6 +642,10 @@ CPPCHECK_HTMLREPORT ?= cppcheck-htmlreport
 # build directory. This can be changed by giving a directory in this variable.
 CPPCHECK_HTMLREPORT_OUTDIR ?= cppcheck-htmlreport
 
+# By default we do not check MISRA rules; to enable them, pass
+# "CPPCHECK_MISRA=y" on the make command line.
+CPPCHECK_MISRA ?= n
+
 # Compile flags to pass to cppcheck:
 # - include directories and defines Xen Makefile is passing (from CFLAGS)
 # - include config.h as this is passed directly to the compiler.
@@ -666,6 +670,15 @@ CPPCHECKFILES := $(wildcard $(patsubst $(objtree)/%.o,$(srctree)/%.c, \
                  $(filter-out $(objtree)/tools/%, \
                  $(shell find $(objtree) -name "*.o"))))
 
+# Headers and files required to run cppcheck on a file
+CPPCHECKDEPS := $(objtree)/include/generated/autoconf.h \
+                $(objtree)/include/generated/compiler-def.h
+
+ifeq ($(CPPCHECK_MISRA),y)
+    CPPCHECKFLAGS += --addon=cppcheck-misra.json
+    CPPCHECKDEPS += cppcheck-misra.json
+endif
+
 quiet_cmd_cppcheck_xml = CPPCHECK $(patsubst $(srctree)/%,%,$<)
 cmd_cppcheck_xml = $(CPPCHECK) -v -q --xml $(CPPCHECKFLAGS) \
                    --output-file=$@ $<
@@ -690,7 +703,7 @@ ifeq ($(CPPCHECKFILES),)
 endif
 	$(call if_changed,merge_cppcheck_reports)
 
-$(objtree)/%.c.cppcheck: $(srctree)/%.c $(objtree)/include/generated/autoconf.h $(objtree)/include/generated/compiler-def.h | cppcheck-version
+$(objtree)/%.c.cppcheck: $(srctree)/%.c $(CPPCHECKDEPS) | cppcheck-version
 	$(call if_changed,cppcheck_xml)
 
 cppcheck-version:
@@ -703,6 +716,18 @@ cppcheck-version:
 		exit 1; \
 	fi
 
+# The list of MISRA rules to respect is documented in docs/misra/rules.rst.
+# In order to have some helpful text in the cppcheck output, generate a text
+# file containing the rules identifier, classification and text from the Xen
+# documentation file. Also generate a json file with the right arguments for
+# cppcheck in json format including the list of rules to ignore.
+#
+cppcheck-misra.txt: $(XEN_ROOT)/docs/misra/rules.rst $(srctree)/tools/convert_misra_doc.py
+	$(Q)$(srctree)/tools/convert_misra_doc.py -i $< -o $@ -j $(@:.txt=.json)
+
+# convert_misra_doc is generating both files.
+cppcheck-misra.json: cppcheck-misra.txt
+
 # Put this in generated headers this way it is cleaned by include/Makefile
 $(objtree)/include/generated/compiler-def.h:
 	$(Q)$(CC) -dM -E -o $@ - < /dev/null
diff --git a/xen/tools/convert_misra_doc.py b/xen/tools/convert_misra_doc.py
new file mode 100755
index 0000000000..caa4487f64
--- /dev/null
+++ b/xen/tools/convert_misra_doc.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python
+
+"""
+This script converts the MISRA documentation RST file into a text file
+that can be used as text-rules for cppcheck.
+Usage:
+    convert_misra_doc.py -i INPUT [-o OUTPUT] [-j JSON]
+
+    INPUT  - RST file containing the list of misra rules.
+    OUTPUT - file to store the text output to be used by cppcheck.
+             If not specified, the result will be printed to stdout.
+    JSON   - cppcheck json file to be created (optional).
+"""
+
+import sys, getopt, re
+
+def main(argv):
+    infile = ''
+    outfile = ''
+    outstr = sys.stdout
+    jsonfile = ''
+
+    try:
+        opts, args = getopt.getopt(argv,"hi:o:j:",["input=","output=","json="])
+    except getopt.GetoptError:
+        print('convert_misra_doc.py -i <input> [-o <output>] [-j <json>]')
+        sys.exit(2)
+    for opt, arg in opts:
+        if opt == '-h':
+            print('convert_misra_doc.py -i <input> [-o <output>] [-j <json>]')
+            print('  If output is not specified, print to stdout')
+            sys.exit(1)
+        elif opt in ("-i", "--input"):
+            infile = arg
+        elif opt in ("-o", "--output"):
+            outfile = arg
+        elif opt in ("-j", "--json"):
+            jsonfile = arg
+
+    try:
+        file_stream = open(infile, 'rt')
+    except OSError:
+        print('Error opening ' + infile)
+        sys.exit(1)
+
+    if outfile:
+        try:
+            outstr = open(outfile, "w")
+        except OSError:
+            print('Error creating ' + outfile)
+            sys.exit(1)
+
+    # Each rule starts with '   * - `[Dir|Rule]' and is followed by the
+    # severity, the summary and then the notes.
+    # Only the summary can span multiple lines.
+    pattern_dir = re.compile(r'^   \* - `Dir ([0-9]+\.[0-9]+).*$')
+    pattern_rule = re.compile(r'^   \* - `Rule ([0-9]+\.[0-9]+).*$')
+    pattern_col = re.compile(r'^     - (.*)$')
+    # allow empty notes
+    pattern_notes = re.compile(r'^     -.*$')
+    pattern_cont = re.compile(r'^      (.*)$')
+
+    rule_number = ''
+    rule_severity = ''
+    rule_summary = ''
+    rule_state = 0
+    rule_list = []
+
+    # Header line that cppcheck's MISRA addon searches for
+    outstr.write('Appendix A Summary of guidelines\n')
+
+    for line in file_stream:
+
+        line = line.replace('\r', '').replace('\n', '')
+
+        if len(line) == 0:
+            continue
+
+        # New Rule or Directive
+        if rule_state == 0:
+            # new Rule
+            res = pattern_rule.match(line)
+            if res:
+                rule_number = res.group(1)
+                rule_list.append(rule_number)
+                rule_state = 1
+                continue
+
+            # new Directive
+            res = pattern_dir.match(line)
+            if res:
+                rule_number = res.group(1)
+                rule_list.append(rule_number)
+                rule_state = 1
+                continue
+            continue
+
+        # Severity
+        elif rule_state == 1:
+            res = pattern_col.match(line)
+            if res:
+                rule_severity = res.group(1)
+                rule_state = 2
+                continue
+
+            print('No severity for rule ' + rule_number)
+            sys.exit(1)
+
+        # Summary
+        elif rule_state == 2:
+            res = pattern_col.match(line)
+            if res:
+                rule_summary = res.group(1)
+                rule_state = 3
+                continue
+
+            print('No summary for rule ' + rule_number)
+            sys.exit(1)
+
+        # Notes or summary continuation
+        elif rule_state == 3:
+            res = pattern_cont.match(line)
+            if res:
+                rule_summary += res.group(1)
+                continue
+            res = pattern_notes.match(line)
+            if res:
+                outstr.write('Rule ' + rule_number + ' ' + rule_severity
+                             + '\n')
+                outstr.write(rule_summary + ' (Misra rule ' + rule_number
+                             + ')\n')
+                rule_state = 0
+                rule_number = ''
+                continue
+            print('No notes for rule ' + rule_number)
+            sys.exit(1)
+
+        else:
+            print('Impossible case in state machine')
+            sys.exit(1)
+
+    skip_list = []
+
+    # Search for missing rules and add a dummy text with the rule number
+    for i in range(1, 22):
+        for j in range(1, 22):
+            if str(i) + '.' + str(j) not in rule_list:
+                outstr.write('Rule ' + str(i) + '.' + str(j) + '\n')
+                outstr.write('No description for rule ' + str(i) + '.' + str(j)
+                             + '\n')
+                skip_list.append(str(i) + '.' + str(j))
+
+    # Make cppcheck happy by starting the appendix
+    outstr.write('Appendix B\n')
+    outstr.write('\n')
+    if outfile:
+        outstr.close()
+
+    if jsonfile:
+        with open(jsonfile, "w") as f:
+            f.write('{\n')
+            f.write('    "script": "misra.py",\n')
+            f.write('    "args": [\n')
+            if outfile:
+                f.write('      "--rule-texts=' + outfile + '",\n')
+
+            f.write('      "--suppress-rules=' + ",".join(skip_list) + '"\n')
+            f.write('    ]\n')
+            f.write('}\n')
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
-- 
2.25.1


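For reference (not part of the patch), the JSON block emitted with hand-written f.write() calls in convert_misra_doc.py could also be produced with Python's json module, which guarantees valid quoting and escaping. This is a hedged sketch of that alternative, not the submitted implementation; the file name and example rule list are made up for illustration:

```python
import json
import os
import tempfile

def write_cppcheck_json(jsonfile, outfile, skip_list):
    # Build the same structure as the script's hand-written f.write()
    # calls, but let the json module handle quoting and escaping.
    args = []
    if outfile:
        args.append('--rule-texts=' + outfile)
    args.append('--suppress-rules=' + ','.join(skip_list))
    with open(jsonfile, 'w') as f:
        json.dump({'script': 'misra.py', 'args': args}, f, indent=4)
        f.write('\n')

# Example inputs, similar to what the script passes after parsing rules.rst.
path = os.path.join(tempfile.gettempdir(), 'cppcheck-misra.json')
write_cppcheck_json(path, 'cppcheck-misra.txt', ['1.3', '2.4'])
with open(path) as f:
    print(json.load(f)['args'])
```

The output parses back cleanly with json.load(), which the hand-written version only achieves as long as no argument ever contains a quote or backslash.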

From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:19:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346067.571787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzanw-0003bP-NV; Fri, 10 Jun 2022 09:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346067.571787; Fri, 10 Jun 2022 09:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzanw-0003bI-Jt; Fri, 10 Jun 2022 09:19:44 +0000
Received: by outflank-mailman (input) for mailman id 346067;
 Fri, 10 Jun 2022 09:19:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LeXp=WR=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1nzanv-0003bC-2H
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:19:43 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur02on0631.outbound.protection.outlook.com
 [2a01:111:f400:fe07::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7575bca2-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:19:42 +0200 (CEST)
Received: from DB6PR0801CA0061.eurprd08.prod.outlook.com (2603:10a6:4:2b::29)
 by AM6PR08MB4835.eurprd08.prod.outlook.com (2603:10a6:20b:c3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 09:19:30 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::7d) by DB6PR0801CA0061.outlook.office365.com
 (2603:10a6:4:2b::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Fri, 10 Jun 2022 09:19:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Fri, 10 Jun 2022 09:19:29 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 10 Jun 2022 09:19:29 +0000
Received: from a94b9373e714.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1E282DEC-438A-4BCD-AA82-97477A2DD2C6.1; 
 Fri, 10 Jun 2022 09:19:22 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a94b9373e714.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jun 2022 09:19:22 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DBBPR08MB5963.eurprd08.prod.outlook.com (2603:10a6:10:205::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 09:19:14 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5314.019; Fri, 10 Jun 2022
 09:19:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7575bca2-e89e-11ec-8179-c7c2a468b362
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yk/DcCv2eqW0SI4rL19b4iBDp92382m05OE5/TZHT58=;
 b=kKQWRWoYIw30Nk16dmzjZ/Hv9cPU1Wb4lzKr1qPtJ+6R+ztFAxCECvuX/6/cup4ddFRLidRVuD0FgkV0e0uuRtm5qzsQCMlqPZWLzTaniy5PKBMXm8uhJTYRgcufPXQmzTD96KOnFMbyCCGSPosr/iBEG6P/8fCULPDyz16M3u0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c33fd2db30905f59
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yk/DcCv2eqW0SI4rL19b4iBDp92382m05OE5/TZHT58=;
 b=kKQWRWoYIw30Nk16dmzjZ/Hv9cPU1Wb4lzKr1qPtJ+6R+ztFAxCECvuX/6/cup4ddFRLidRVuD0FgkV0e0uuRtm5qzsQCMlqPZWLzTaniy5PKBMXm8uhJTYRgcufPXQmzTD96KOnFMbyCCGSPosr/iBEG6P/8fCULPDyz16M3u0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>, "Artem_Mygaiev@epam.com"
	<Artem_Mygaiev@epam.com>, "Andrew.Cooper3@citrix.com"
	<Andrew.Cooper3@citrix.com>, "julien@xen.org" <julien@xen.org>
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Thread-Topic: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Thread-Index: AQHYfGPcMFsu69NoY0eiE0tgRprVc61IXYSA
Date: Fri, 10 Jun 2022 09:19:13 +0000
Message-ID: <451CA8F9-EBE9-43AA-8121-D1B4775D6EDB@arm.com>
References:
 <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: d66da8f9-8f3a-44d2-52b2-08da4ac252ba
x-ms-traffictypediagnostic:
	DBBPR08MB5963:EE_|DBAEUR03FT026:EE_|AM6PR08MB4835:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB48353FE97BA6E041A01CC7CF9DA69@AM6PR08MB4835.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DE5D2E9AAFD0DC4CAEA861ABB12C37E4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5963
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9d3001fe-bced-4e5c-8f9e-08da4ac2494c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 09:19:29.8399
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d66da8f9-8f3a-44d2-52b2-08da4ac252ba
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4835

Hi Stefano,

> On 10 Jun 2022, at 01:48, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> Add the new MISRA C rules agreed by the MISRA C working group to
> docs/misra/rules.rst.

The notes are now used to give more explanation or to document deviations.
We might need a proper entry in the table for that at some point, but I
think it should be part of a bigger effort to handle deviations, so this
can wait.

>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346076.571798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapL-00050W-88; Fri, 10 Jun 2022 09:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346076.571798; Fri, 10 Jun 2022 09:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapL-00050P-4m; Fri, 10 Jun 2022 09:21:11 +0000
Received: by outflank-mailman (input) for mailman id 346076;
 Fri, 10 Jun 2022 09:21:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapK-00050F-J2
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:10 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a8200d9a-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:07 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-617-nzXNfmnLMbOoxz3AGOvaGg-1; Fri, 10 Jun 2022 05:21:03 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 42C611C05AB3;
 Fri, 10 Jun 2022 09:21:02 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id D13B42026D64;
 Fri, 10 Jun 2022 09:21:00 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id CC0541800395; Fri, 10 Jun 2022 11:20:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8200d9a-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852866;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h/Rq0T4qHdj5P4fsrjuVBaKf49W5WppINvxKA/pkbYI=;
	b=QvyYpERN/p0ov2jUXygGbS2X6GGDJaIh6HnnOVP2DnkcqqzU3kSyjASboULlUq/rZ6/szm
	Z8095fEqgAJ1NYI5MEvksbTZitOHFeu3zU3df7BbLkrs1xyGcUIosFdBnvdPImdE1X1BdO
	57ev0atq/XWWtjaZoPleX/ahy5nCKVo=
X-MC-Unique: nzXNfmnLMbOoxz3AGOvaGg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 01/17] ui/gtk-gl-area: implement GL context destruction
Date: Fri, 10 Jun 2022 11:20:27 +0200
Message-Id: <20220610092043.1874654-2-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: Volker Rümelin <vr_qemu@t-online.de>

The counterpart function for gd_gl_area_create_context() is
currently empty. Implement the gd_gl_area_destroy_context()
function to avoid GL context leaks.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-1-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 8 +++++++-
 ui/trace-events  | 1 +
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index fc5a082eb846..0e20ea031d34 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -201,7 +201,13 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
 
 void gd_gl_area_destroy_context(DisplayGLCtx *dgc, QEMUGLContext ctx)
 {
-    /* FIXME */
+    GdkGLContext *current_ctx = gdk_gl_context_get_current();
+
+    trace_gd_gl_area_destroy_context(ctx, current_ctx);
+    if (ctx == current_ctx) {
+        gdk_gl_context_clear_current();
+    }
+    g_clear_object(&ctx);
 }
 
 void gd_gl_area_scanout_texture(DisplayChangeListener *dcl,
diff --git a/ui/trace-events b/ui/trace-events
index f78b5e66061f..1040ba0f88c7 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
 # vnc-auth-vencrypt.c
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346077.571809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapP-0005HS-Gc; Fri, 10 Jun 2022 09:21:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346077.571809; Fri, 10 Jun 2022 09:21:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapP-0005HJ-D1; Fri, 10 Jun 2022 09:21:15 +0000
Received: by outflank-mailman (input) for mailman id 346077;
 Fri, 10 Jun 2022 09:21:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapO-00050F-GR
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aba16481-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:13 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-300-grsxZKmzNhO4Vag9lm3zCQ-1; Fri, 10 Jun 2022 05:21:08 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id DF665185A7A4;
 Fri, 10 Jun 2022 09:21:07 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id D3CEA2166B26;
 Fri, 10 Jun 2022 09:21:00 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id B1AA518000AA; Fri, 10 Jun 2022 11:20:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aba16481-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852872;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=+EfyblCNeI5we03E+ZlHLbMHg2X2becflcf1jvFQlcY=;
	b=Af7r2CAj1w2Mf+oiZXsTIyRZfC78XzXJu1d3oUEnffVYI/Ca8IGJThd9Y4Cal4oCt5ViKt
	PhqYwTj+Cih0AIGK6eAUK5xjkDhltmPHD1mDWRiHa7bSkZ6xrSJ6v+mZ6MZGG4DBqK2qzo
	K4JutH5LFzvyxYs3lc8sjxPS2sz3+KU=
X-MC-Unique: grsxZKmzNhO4Vag9lm3zCQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 00/17] Kraxel 20220610 patches
Date: Fri, 10 Jun 2022 11:20:26 +0200
Message-Id: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

The following changes since commit 9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387:

  Merge tag 'pull-xen-20220609' of https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging (2022-06-09 08:25:17 -0700)

are available in the Git repository at:

  git://git.kraxel.org/qemu tags/kraxel-20220610-pull-request

for you to fetch changes up to 02319a4d67d3f19039127b8dc9ca9478b6d6ccd8:

  virtio-gpu: Respect UI refresh rate for EDID (2022-06-10 11:11:44 +0200)

----------------------------------------------------------------
usb: add CanoKey device, fixes for ehci + redir
ui: fixes for gtk and cocoa, move keymaps, rework refresh rate
virtio-gpu: scanout flush fix

----------------------------------------------------------------

Akihiko Odaki (4):
  ui/cocoa: Fix poweroff request code
  ui/console: Do not return a value with ui_info
  ui: Deliver refresh rate via QemuUIInfo
  virtio-gpu: Respect UI refresh rate for EDID

Arnout Engelen (1):
  hw/usb/hcd-ehci: fix writeback order

Bernhard Beschow (1):
  hw/audio/cs4231a: Const'ify global tables

Daniel P. Berrangé (1):
  ui: move 'pc-bios/keymaps' to 'ui/keymaps'

Dongwon Kim (1):
  virtio-gpu: update done only on the scanout associated with rect

Hongren (Zenithal) Zheng (6):
  hw/usb: Add CanoKey Implementation
  hw/usb/canokey: Add trace events
  meson: Add CanoKey
  docs: Add CanoKey documentation
  docs/system/devices/usb: Add CanoKey to USB devices examples
  MAINTAINERS: add myself as CanoKey maintainer

Joelle van Dyne (1):
  usbredir: avoid queuing hello packet on snapshot restore

Volker Rümelin (2):
  ui/gtk-gl-area: implement GL context destruction
  ui/gtk-gl-area: create the requested GL context version

 meson_options.txt                   |   2 +
 hw/usb/canokey.h                    |  69 ++++++
 include/hw/virtio/virtio-gpu.h      |   1 +
 include/ui/console.h                |   4 +-
 include/ui/gtk.h                    |   2 +-
 hw/audio/cs4231a.c                  |   8 +-
 hw/display/virtio-gpu-base.c        |   7 +-
 hw/display/virtio-gpu.c             |   4 +
 hw/display/virtio-vga.c             |   5 +-
 hw/display/xenfb.c                  |  14 +-
 hw/usb/canokey.c                    | 313 ++++++++++++++++++++++++++++
 hw/usb/hcd-ehci.c                   |   5 +-
 hw/usb/redirect.c                   |   3 +-
 hw/vfio/display.c                   |   8 +-
 ui/console.c                        |   6 -
 ui/gtk-egl.c                        |   4 +-
 ui/gtk-gl-area.c                    |  42 +++-
 ui/gtk.c                            |  45 ++--
 MAINTAINERS                         |   8 +
 docs/system/device-emulation.rst    |   1 +
 docs/system/devices/canokey.rst     | 168 +++++++++++++++
 docs/system/devices/usb.rst         |   4 +
 hw/usb/Kconfig                      |   5 +
 hw/usb/meson.build                  |   5 +
 hw/usb/trace-events                 |  16 ++
 meson.build                         |   6 +
 pc-bios/meson.build                 |   1 -
 scripts/meson-buildoptions.sh       |   3 +
 ui/cocoa.m                          |   6 +-
 {pc-bios => ui}/keymaps/ar          |   0
 {pc-bios => ui}/keymaps/bepo        |   0
 {pc-bios => ui}/keymaps/cz          |   0
 {pc-bios => ui}/keymaps/da          |   0
 {pc-bios => ui}/keymaps/de          |   0
 {pc-bios => ui}/keymaps/de-ch       |   0
 {pc-bios => ui}/keymaps/en-gb       |   0
 {pc-bios => ui}/keymaps/en-us       |   0
 {pc-bios => ui}/keymaps/es          |   0
 {pc-bios => ui}/keymaps/et          |   0
 {pc-bios => ui}/keymaps/fi          |   0
 {pc-bios => ui}/keymaps/fo          |   0
 {pc-bios => ui}/keymaps/fr          |   0
 {pc-bios => ui}/keymaps/fr-be       |   0
 {pc-bios => ui}/keymaps/fr-ca       |   0
 {pc-bios => ui}/keymaps/fr-ch       |   0
 {pc-bios => ui}/keymaps/hr          |   0
 {pc-bios => ui}/keymaps/hu          |   0
 {pc-bios => ui}/keymaps/is          |   0
 {pc-bios => ui}/keymaps/it          |   0
 {pc-bios => ui}/keymaps/ja          |   0
 {pc-bios => ui}/keymaps/lt          |   0
 {pc-bios => ui}/keymaps/lv          |   0
 {pc-bios => ui}/keymaps/meson.build |   0
 {pc-bios => ui}/keymaps/mk          |   0
 {pc-bios => ui}/keymaps/nl          |   0
 {pc-bios => ui}/keymaps/no          |   0
 {pc-bios => ui}/keymaps/pl          |   0
 {pc-bios => ui}/keymaps/pt          |   0
 {pc-bios => ui}/keymaps/pt-br       |   0
 {pc-bios => ui}/keymaps/ru          |   0
 {pc-bios => ui}/keymaps/sl          |   0
 {pc-bios => ui}/keymaps/sv          |   0
 {pc-bios => ui}/keymaps/th          |   0
 {pc-bios => ui}/keymaps/tr          |   0
 ui/meson.build                      |   1 +
 ui/trace-events                     |   2 +
 66 files changed, 712 insertions(+), 56 deletions(-)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c
 create mode 100644 docs/system/devices/canokey.rst
 rename {pc-bios => ui}/keymaps/ar (100%)
 rename {pc-bios => ui}/keymaps/bepo (100%)
 rename {pc-bios => ui}/keymaps/cz (100%)
 rename {pc-bios => ui}/keymaps/da (100%)
 rename {pc-bios => ui}/keymaps/de (100%)
 rename {pc-bios => ui}/keymaps/de-ch (100%)
 rename {pc-bios => ui}/keymaps/en-gb (100%)
 rename {pc-bios => ui}/keymaps/en-us (100%)
 rename {pc-bios => ui}/keymaps/es (100%)
 rename {pc-bios => ui}/keymaps/et (100%)
 rename {pc-bios => ui}/keymaps/fi (100%)
 rename {pc-bios => ui}/keymaps/fo (100%)
 rename {pc-bios => ui}/keymaps/fr (100%)
 rename {pc-bios => ui}/keymaps/fr-be (100%)
 rename {pc-bios => ui}/keymaps/fr-ca (100%)
 rename {pc-bios => ui}/keymaps/fr-ch (100%)
 rename {pc-bios => ui}/keymaps/hr (100%)
 rename {pc-bios => ui}/keymaps/hu (100%)
 rename {pc-bios => ui}/keymaps/is (100%)
 rename {pc-bios => ui}/keymaps/it (100%)
 rename {pc-bios => ui}/keymaps/ja (100%)
 rename {pc-bios => ui}/keymaps/lt (100%)
 rename {pc-bios => ui}/keymaps/lv (100%)
 rename {pc-bios => ui}/keymaps/meson.build (100%)
 rename {pc-bios => ui}/keymaps/mk (100%)
 rename {pc-bios => ui}/keymaps/nl (100%)
 rename {pc-bios => ui}/keymaps/no (100%)
 rename {pc-bios => ui}/keymaps/pl (100%)
 rename {pc-bios => ui}/keymaps/pt (100%)
 rename {pc-bios => ui}/keymaps/pt-br (100%)
 rename {pc-bios => ui}/keymaps/ru (100%)
 rename {pc-bios => ui}/keymaps/sl (100%)
 rename {pc-bios => ui}/keymaps/sv (100%)
 rename {pc-bios => ui}/keymaps/th (100%)
 rename {pc-bios => ui}/keymaps/tr (100%)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346078.571820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapQ-0005Y7-RG; Fri, 10 Jun 2022 09:21:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346078.571820; Fri, 10 Jun 2022 09:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapQ-0005Y0-Ms; Fri, 10 Jun 2022 09:21:16 +0000
Received: by outflank-mailman (input) for mailman id 346078;
 Fri, 10 Jun 2022 09:21:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapP-0005HE-NT
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id abd153a6-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:14 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-326-7e0wnkN_M06CRWN5m6J7UQ-1; Fri, 10 Jun 2022 05:21:09 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 59C6438149A2;
 Fri, 10 Jun 2022 09:21:08 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0F6E52026D64;
 Fri, 10 Jun 2022 09:21:08 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 08DD818003BA; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abd153a6-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852872;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qgE0u/psF+5HbFaMEFEBF+1+y1Gn5UXWEs7HICOeN3Q=;
	b=Bzpi+YnN2GH1XcdOhcem911qfpf67Z0BoCv+aJl0L6atMbT614KQxxsSyYPc1XQCXo+U36
	TJyi0t50p5zDw5MLY0yCfhrIqUfPgGl6MNrYOajH01TeSdzzNkJ2xKpgyfJspLqHYRiQJx
	JV/6ZOL9jM/cDFxEETOzjBpb3VTeXUY=
X-MC-Unique: 7e0wnkN_M06CRWN5m6J7UQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 03/17] ui/cocoa: Fix poweroff request code
Date: Fri, 10 Jun 2022 11:20:29 +0200
Message-Id: <20220610092043.1874654-4-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220529082508.89097-1-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/cocoa.m | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/ui/cocoa.m b/ui/cocoa.m
index 09a62817f2a9..84c84e98fc5e 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -35,6 +35,7 @@
 #include "ui/kbd-state.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/runstate.h"
+#include "sysemu/runstate-action.h"
 #include "sysemu/cpu-throttle.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block.h"
@@ -1290,7 +1291,10 @@ - (void)applicationWillTerminate:(NSNotification *)aNotification
 {
     COCOA_DEBUG("QemuCocoaAppController: applicationWillTerminate\n");
 
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    with_iothread_lock(^{
+        shutdown_action = SHUTDOWN_ACTION_POWEROFF;
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    });
 
     /*
      * Sleep here, because returning will cause OSX to kill us
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346079.571831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapV-0005u8-52; Fri, 10 Jun 2022 09:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346079.571831; Fri, 10 Jun 2022 09:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapV-0005u1-1H; Fri, 10 Jun 2022 09:21:21 +0000
Received: by outflank-mailman (input) for mailman id 346079;
 Fri, 10 Jun 2022 09:21:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapT-00050F-18
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:19 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae8c6d76-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:18 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-498--HdTtoCQPHeme0T9yXjfvg-1; Fri, 10 Jun 2022 05:21:14 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 54D40811E75;
 Fri, 10 Jun 2022 09:21:13 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 13CD1404E4B5;
 Fri, 10 Jun 2022 09:21:12 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 1FC3F180060A; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae8c6d76-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852877;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ufj4JscCAW57Vs14x1NDo2rKzUHuAo6KJO/7yvjLaQA=;
	b=hRVA5tBZNrI28pxvTX90+JpQH7jccCsXIFoJhe3Vj2k9PEqbm+JAl6+tQsBadF0QemOp1C
	AIeeoh1H8DOOY+2onA/KjU+FipsLUTguWCeJrvr2+IdRNzGH3SM8i/s5RnlC69DH0jD0eZ
	VzJQdRAMjY4LGUVH2Xc+dHngw/374gM=
X-MC-Unique: -HdTtoCQPHeme0T9yXjfvg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PULL 04/17] hw/audio/cs4231a: Const'ify global tables
Date: Fri, 10 Jun 2022 11:20:30 +0200
Message-Id: <20220610092043.1874654-5-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: Bernhard Beschow <shentey@gmail.com>

The tables contain specifically crafted constants for the algorithms, so make
them immutable.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220520180109.8224-3-shentey@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/audio/cs4231a.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/audio/cs4231a.c b/hw/audio/cs4231a.c
index 0723e3943044..7f17a72a9cb7 100644
--- a/hw/audio/cs4231a.c
+++ b/hw/audio/cs4231a.c
@@ -84,7 +84,7 @@ struct CSState {
     int transferred;
     int aci_counter;
     SWVoiceOut *voice;
-    int16_t *tab;
+    const int16_t *tab;
 };
 
 #define MODE2 (1 << 6)
@@ -142,13 +142,13 @@ enum {
     Capture_Lower_Base_Count
 };
 
-static int freqs[2][8] = {
+static const int freqs[2][8] = {
     { 8000, 16000, 27420, 32000,    -1,    -1, 48000, 9000 },
     { 5510, 11025, 18900, 22050, 37800, 44100, 33075, 6620 }
 };
 
 /* Tables courtesy http://hazelware.luggle.com/tutorials/mulawcompression.html */
-static int16_t MuLawDecompressTable[256] =
+static const int16_t MuLawDecompressTable[256] =
 {
      -32124,-31100,-30076,-29052,-28028,-27004,-25980,-24956,
      -23932,-22908,-21884,-20860,-19836,-18812,-17788,-16764,
@@ -184,7 +184,7 @@ static int16_t MuLawDecompressTable[256] =
          56,    48,    40,    32,    24,    16,     8,     0
 };
 
-static int16_t ALawDecompressTable[256] =
+static const int16_t ALawDecompressTable[256] =
 {
      -5504, -5248, -6016, -5760, -4480, -4224, -4992, -4736,
      -7552, -7296, -8064, -7808, -6528, -6272, -7040, -6784,
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346080.571836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapV-0005y7-K5; Fri, 10 Jun 2022 09:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346080.571836; Fri, 10 Jun 2022 09:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapV-0005x6-DH; Fri, 10 Jun 2022 09:21:21 +0000
Received: by outflank-mailman (input) for mailman id 346080;
 Fri, 10 Jun 2022 09:21:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapT-0005HE-Ht
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:19 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aec5ad2e-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:18 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-422-L28HogynNE2lkWLrb52jaw-1; Fri, 10 Jun 2022 05:21:05 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BF36B85A583;
 Fri, 10 Jun 2022 09:21:04 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 47D442026D64;
 Fri, 10 Jun 2022 09:21:04 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id E61E518003A8; Fri, 10 Jun 2022 11:20:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aec5ad2e-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852877;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wx3J1U/WRMVytLzUKnVNwgCHbbzsbHu+DjsV7mqclmI=;
	b=GAVWfEdkWHAjpcz0EjlFoBuvBbBkCQtb82r5ClL0gqZI+Fvr6RAA3clr1veUQ9/Kq7dWwm
	wzfrRxHF72Ty9oC0EiESB3kUs7EhT6SaA2bps+Z/xvtwxzqXT34hD8Z/FEW4JWn/lbpno4
	HjxAC0NPxlPuzfzzAM8nhig+1S15wo0=
X-MC-Unique: L28HogynNE2lkWLrb52jaw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 02/17] ui/gtk-gl-area: create the requested GL context version
Date: Fri, 10 Jun 2022 11:20:28 +0200
Message-Id: <20220610092043.1874654-3-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: Volker Rümelin <vr_qemu@t-online.de>

Since about 2018 virglrenderer (commit fa835b0f88 "vrend: don't
hardcode context version") tries to open the highest available GL
context version. This is done by creating the known GL context
versions from the highest to the lowest until (*create_gl_context)
returns a context != NULL.

This does not work properly with the current QEMU
gd_gl_area_create_context() function, because
gdk_gl_context_realize() on Wayland creates a version 3.0 legacy
context if the requested GL context version can't be created.

In order for virglrenderer to find the highest available GL
context version, return NULL if the created context version is
lower than the requested version.
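The probing strategy described above can be sketched as follows. This is a hedged illustration, not virglrenderer's actual code: the version table, the `gl_ctx` struct, and the `create_gl_context` callback are hypothetical stand-ins that only show the contract the commit message relies on — the platform hook must return NULL for an unsupported version instead of silently handing back a lower legacy context.

```c
/* Hedged sketch (not virglrenderer's real code): probe for the highest
 * usable GL context version by trying known versions from highest to
 * lowest until the platform hook succeeds. */
#include <assert.h>
#include <stddef.h>

struct gl_ctx { int major, minor; };   /* hypothetical context handle */

/* Hypothetical platform hook: must return NULL when the exact version
 * cannot be created (the property this patch restores for GTK). */
typedef struct gl_ctx *(*create_gl_context_fn)(int major, int minor);

static struct gl_ctx *probe_highest_context(create_gl_context_fn create)
{
    /* Known GL versions, highest first (illustrative subset). */
    static const int versions[][2] = {
        {4, 6}, {4, 5}, {4, 3}, {4, 0}, {3, 3}, {3, 0}
    };

    for (size_t i = 0; i < sizeof(versions) / sizeof(versions[0]); i++) {
        struct gl_ctx *ctx = create(versions[i][0], versions[i][1]);
        if (ctx != NULL) {
            return ctx;   /* first hit is the highest supported version */
        }
    }
    return NULL;
}
```

If the hook instead "succeeds" with a downgraded 3.0 context on the first attempt, the loop stops at 4.6 and the renderer believes it has features the context lacks — which is exactly the black-window failure mode fixed here.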

This fixes the following error when QEMU is started with
-device virtio-vga-gl -display gtk,gl=on: under Wayland, the guest
window remains black and the following information can be seen on
the host.

gl_version 30 - compat profile
(qemu:5978): Gdk-WARNING **: 16:19:01.533:
  gdk_gl_context_set_required_version
  - GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.537:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.554:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.
vrend_renderer_fill_caps: Entering with stale GL error: 1282

To reproduce this error, the host needs an OpenGL driver that
doesn't fully implement the latest OpenGL extensions. One example
is the Intel i965 driver on a Haswell processor.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-2-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 31 ++++++++++++++++++++++++++++++-
 ui/trace-events  |  1 +
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 0e20ea031d34..2e0129c28cd4 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -170,6 +170,23 @@ void gd_gl_area_switch(DisplayChangeListener *dcl,
     }
 }
 
+static int gd_cmp_gl_context_version(int major, int minor, QEMUGLParams *params)
+{
+    if (major > params->major_ver) {
+        return 1;
+    }
+    if (major < params->major_ver) {
+        return -1;
+    }
+    if (minor > params->minor_ver) {
+        return 1;
+    }
+    if (minor < params->minor_ver) {
+        return -1;
+    }
+    return 0;
+}
+
 QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
                                         QEMUGLParams *params)
 {
@@ -177,8 +194,8 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
     GdkWindow *window;
     GdkGLContext *ctx;
     GError *err = NULL;
+    int major, minor;
 
-    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
     window = gtk_widget_get_window(vc->gfx.drawing_area);
     ctx = gdk_window_create_gl_context(window, &err);
     if (err) {
@@ -196,6 +213,18 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
         g_clear_object(&ctx);
         return NULL;
     }
+
+    gdk_gl_context_make_current(ctx);
+    gdk_gl_context_get_version(ctx, &major, &minor);
+    gdk_gl_context_clear_current();
+    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
+
+    if (gd_cmp_gl_context_version(major, minor, params) == -1) {
+        /* created ctx version < requested version */
+        g_clear_object(&ctx);
+    }
+
+    trace_gd_gl_area_create_context(ctx, params->major_ver, params->minor_ver);
     return ctx;
 }
 
diff --git a/ui/trace-events b/ui/trace-events
index 1040ba0f88c7..a922f00e10b4 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_create_context(void *ctx, int major, int minor) "ctx=%p, major=%d, minor=%d"
 gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346081.571853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapY-0006W0-7w; Fri, 10 Jun 2022 09:21:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346081.571853; Fri, 10 Jun 2022 09:21:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapY-0006Vc-1t; Fri, 10 Jun 2022 09:21:24 +0000
Received: by outflank-mailman (input) for mailman id 346081;
 Fri, 10 Jun 2022 09:21:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapW-0005HE-MT
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:22 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0546c15-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-653-x5pcDao3M1ak-miX3-1pIA-1; Fri, 10 Jun 2022 05:21:16 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 16D8B1815CFA;
 Fri, 10 Jun 2022 09:21:16 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 804FD492C3B;
 Fri, 10 Jun 2022 09:21:15 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3F7DA180060E; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0546c15-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=70clork2yZyYjdUXKnsu9VZIWlpSX2dH9Nrdwr2C7Z4=;
	b=f0xRjmLBpVbhJzur+gat9/hwX6oaX4+HTX754cDHFS2UsQInDzOIG6M4eSNRdJcaV0n/Xj
	AGpcEYA3iyfldUcmnzwwfeyuaSTwPuUzbqCla5+GH4CVP+N1je80PBvo7B1GEaTfZpBc0Z
	6BSP4Q2HFlhG3s/cgzAyO3SvSKuQKPM=
X-MC-Unique: x5pcDao3M1ak-miX3-1pIA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 05/17] hw/usb: Add CanoKey Implementation
Date: Fri, 10 Jun 2022 11:20:31 +0200
Message-Id: <20220610092043.1874654-6-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

This commit adds a new emulated device called CanoKey to QEMU.

CanoKey implements platform independent features in canokey-core
https://github.com/canokeys/canokey-core, and leaves the USB implementation
to the platform.

In this commit the USB part is implemented using QEMU's USB APIs, so
the emulated CanoKey can communicate with the guest OS over USB.
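The core/glue split described above can be illustrated with a minimal sketch. None of these names are the real libcanokey-qemu API; they are hypothetical stand-ins showing the pattern: a platform-independent core consumes and produces raw endpoint data, while the platform side (QEMU's USB device model here) supplies the transport.

```c
/* Hedged sketch of the core/glue split (hypothetical names, not the
 * libcanokey-qemu interface): the "core" handles protocol data, the
 * "glue" moves bytes between host transport and core buffers. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* --- platform-independent "core" side (stand-in) --- */
static uint8_t core_in_buf[64];
static size_t  core_in_len;

static void core_receive(const uint8_t *data, size_t len)
{
    /* A real core would parse protocol traffic here; this stub echoes. */
    core_in_len = len < sizeof(core_in_buf) ? len : sizeof(core_in_buf);
    memcpy(core_in_buf, data, core_in_len);
}

/* --- platform "glue" side: roughly what a USB handle_data does --- */
static size_t glue_transfer(const uint8_t *out, size_t out_len,
                            uint8_t *in, size_t in_cap)
{
    core_receive(out, out_len);                        /* OUT: host -> device */
    size_t n = core_in_len < in_cap ? core_in_len : in_cap;
    memcpy(in, core_in_buf, n);                        /* IN: device -> host  */
    return n;
}
```

The real device additionally tracks per-endpoint buffers and state (see ep_in_size/ep_in_pos in the header below) because a single core response can span several USB packets.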

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6Mgph6f6Hc/zI@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.h |  69 +++++++++++
 hw/usb/canokey.c | 300 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 369 insertions(+)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c

diff --git a/hw/usb/canokey.h b/hw/usb/canokey.h
new file mode 100644
index 000000000000..24cf30420346
--- /dev/null
+++ b/hw/usb/canokey.h
@@ -0,0 +1,69 @@
+/*
+ * CanoKey QEMU device header.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache-2.0.
+ */
+
+#ifndef CANOKEY_H
+#define CANOKEY_H
+
+#include "hw/qdev-core.h"
+
+#define TYPE_CANOKEY "canokey"
+#define CANOKEY(obj) \
+    OBJECT_CHECK(CanoKeyState, (obj), TYPE_CANOKEY)
+
+/*
+ * State of Canokey (i.e. hw/canokey.c)
+ */
+
+/* CTRL INTR BULK */
+#define CANOKEY_EP_NUM 3
+/* BULK/INTR IN can be up to 1352 bytes, e.g. get key info */
+#define CANOKEY_EP_IN_BUFFER_SIZE 2048
+/* BULK OUT can be up to 270 bytes, e.g. PIV import cert */
+#define CANOKEY_EP_OUT_BUFFER_SIZE 512
+
+typedef enum {
+    CANOKEY_EP_IN_WAIT,
+    CANOKEY_EP_IN_READY,
+    CANOKEY_EP_IN_STALL
+} CanoKeyEPState;
+
+typedef struct CanoKeyState {
+    USBDevice dev;
+
+    /* IN packets from canokey device loop */
+    uint8_t ep_in[CANOKEY_EP_NUM][CANOKEY_EP_IN_BUFFER_SIZE];
+    /*
+     * See canokey_emu_transmit
+     *
+     * For large INTR IN, receive multiple data from canokey device loop
+     * in this case ep_in_size would increase with every call
+     */
+    uint32_t ep_in_size[CANOKEY_EP_NUM];
+    /*
+     * Used in canokey_handle_data
+     * for IN larger than p->iov.size, we would do multiple handle_data()
+     *
+     * The difference between ep_in_pos and ep_in_size:
+     * We first increase ep_in_size to fill ep_in buffer in device_loop,
+     * then use ep_in_pos to submit data from ep_in buffer in handle_data
+     */
+    uint32_t ep_in_pos[CANOKEY_EP_NUM];
+    CanoKeyEPState ep_in_state[CANOKEY_EP_NUM];
+
+    /* OUT pointer to canokey recv buffer */
+    uint8_t *ep_out[CANOKEY_EP_NUM];
+    uint32_t ep_out_size[CANOKEY_EP_NUM];
+    /* For large BULK OUT, multiple write to ep_out is needed */
+    uint8_t ep_out_buffer[CANOKEY_EP_NUM][CANOKEY_EP_OUT_BUFFER_SIZE];
+
+    /* Properties */
+    char *file; /* canokey-file */
+} CanoKeyState;
+
+#endif /* CANOKEY_H */
diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
new file mode 100644
index 000000000000..6cb8b7cdb089
--- /dev/null
+++ b/hw/usb/canokey.c
@@ -0,0 +1,300 @@
+/*
+ * CanoKey QEMU device implementation.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache-2.0.
+ */
+
+#include "qemu/osdep.h"
+#include <canokey-qemu.h>
+
+#include "qemu/module.h"
+#include "qapi/error.h"
+#include "hw/usb.h"
+#include "hw/qdev-properties.h"
+#include "desc.h"
+#include "canokey.h"
+
+#define CANOKEY_EP_IN(ep) ((ep) & 0x7F)
+
+#define CANOKEY_VENDOR_NUM     0x20a0
+#define CANOKEY_PRODUCT_NUM    0x42d2
+
+/*
+ * Placeholder: canokey-qemu implements its own USB descriptors,
+ * i.e. we do not use usb_desc_handle_control()
+ */
+enum {
+    STR_MANUFACTURER = 1,
+    STR_PRODUCT,
+    STR_SERIALNUMBER
+};
+
+static const USBDescStrings desc_strings = {
+    [STR_MANUFACTURER]     = "canokeys.org",
+    [STR_PRODUCT]          = "CanoKey QEMU",
+    [STR_SERIALNUMBER]     = "0"
+};
+
+static const USBDescDevice desc_device_canokey = {
+    .bcdUSB                        = 0x0,
+    .bMaxPacketSize0               = 16,
+    .bNumConfigurations            = 0,
+    .confs = NULL,
+};
+
+static const USBDesc desc_canokey = {
+    .id = {
+        .idVendor          = CANOKEY_VENDOR_NUM,
+        .idProduct         = CANOKEY_PRODUCT_NUM,
+        .bcdDevice         = 0x0100,
+        .iManufacturer     = STR_MANUFACTURER,
+        .iProduct          = STR_PRODUCT,
+        .iSerialNumber     = STR_SERIALNUMBER,
+    },
+    .full = &desc_device_canokey,
+    .high = &desc_device_canokey,
+    .str  = desc_strings,
+};
+
+
+/*
+ * libcanokey-qemu.so side functions
+ * All functions are called from canokey_emu_device_loop
+ */
+int canokey_emu_stall_ep(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    key->ep_in_size[ep_in] = 0;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_STALL;
+    return 0;
+}
+
+int canokey_emu_set_address(void *base, uint8_t addr)
+{
+    CanoKeyState *key = base;
+    key->dev.addr = addr;
+    return 0;
+}
+
+int canokey_emu_prepare_receive(
+        void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    key->ep_out[ep] = pbuf;
+    key->ep_out_size[ep] = size;
+    return 0;
+}
+
+int canokey_emu_transmit(
+        void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
+            pbuf, size);
+    key->ep_in_size[ep_in] += size;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_READY;
+    /*
+     * ready for more data in device loop
+     *
+     * Note: this is a quirk for CanoKey CTAPHID,
+     * because it calls emu_transmit multiple times in one device_loop,
+     * but without data_in it would get stuck in device_loop.
+     * This has no side effect for CCID, as CCID is strictly an
+     * OUT-then-IN transfer.
+     * However, it does have a side effect for control transfers
+     */
+    if (ep_in != 0) {
+        canokey_emu_data_in(ep_in);
+    }
+    return 0;
+}
+
+uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    return key->ep_out_size[ep];
+}
+
+/*
+ * QEMU side functions
+ */
+static void canokey_handle_reset(USBDevice *dev)
+{
+    CanoKeyState *key = CANOKEY(dev);
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_pos[i] = 0;
+        key->ep_in_size[i] = 0;
+    }
+    canokey_emu_reset();
+}
+
+static void canokey_handle_control(USBDevice *dev, USBPacket *p,
+               int request, int value, int index, int length, uint8_t *data)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    canokey_emu_setup(request, value, index, length);
+
+    uint32_t dir_in = request & DeviceRequest;
+    if (!dir_in) {
+        /* OUT */
+        if (key->ep_out[0] != NULL) {
+            memcpy(key->ep_out[0], data, length);
+        }
+        canokey_emu_data_out(p->ep->nr, data);
+    }
+
+    canokey_emu_device_loop();
+
+    /* IN */
+    switch (key->ep_in_state[0]) {
+    case CANOKEY_EP_IN_WAIT:
+        p->status = USB_RET_NAK;
+        break;
+    case CANOKEY_EP_IN_STALL:
+        p->status = USB_RET_STALL;
+        break;
+    case CANOKEY_EP_IN_READY:
+        memcpy(data, key->ep_in[0], key->ep_in_size[0]);
+        p->actual_length = key->ep_in_size[0];
+        /* reset state */
+        key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[0] = 0;
+        key->ep_in_pos[0] = 0;
+        break;
+    }
+}
+
+static void canokey_handle_data(USBDevice *dev, USBPacket *p)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    uint8_t ep_in = CANOKEY_EP_IN(p->ep->nr);
+    uint8_t ep_out = p->ep->nr;
+    uint32_t in_len;
+    uint32_t out_pos;
+    uint32_t out_len;
+    switch (p->pid) {
+    case USB_TOKEN_OUT:
+        usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
+        out_pos = 0;
+        while (out_pos != p->iov.size) {
+            /*
+             * key->ep_out[ep_out] set by prepare_receive
+             * to be a buffer inside libcanokey-qemu.so
+             * key->ep_out_size[ep_out] set by prepare_receive
+             * to be the buffer length
+             */
+            out_len = MIN(p->iov.size - out_pos, key->ep_out_size[ep_out]);
+            memcpy(key->ep_out[ep_out],
+                    key->ep_out_buffer[ep_out] + out_pos, out_len);
+            out_pos += out_len;
+            /* update ep_out_size to actual len */
+            key->ep_out_size[ep_out] = out_len;
+            canokey_emu_data_out(ep_out, NULL);
+        }
+        break;
+    case USB_TOKEN_IN:
+        if (key->ep_in_pos[ep_in] == 0) { /* first time IN */
+            canokey_emu_data_in(ep_in);
+            canokey_emu_device_loop(); /* may call transmit multiple times */
+        }
+        switch (key->ep_in_state[ep_in]) {
+        case CANOKEY_EP_IN_WAIT:
+            /* NAK for early INTR IN */
+            p->status = USB_RET_NAK;
+            break;
+        case CANOKEY_EP_IN_STALL:
+            p->status = USB_RET_STALL;
+            break;
+        case CANOKEY_EP_IN_READY:
+            /* submit part of ep_in buffer to USBPacket */
+            in_len = MIN(key->ep_in_size[ep_in] - key->ep_in_pos[ep_in],
+                    p->iov.size);
+            usb_packet_copy(p,
+                    key->ep_in[ep_in] + key->ep_in_pos[ep_in], in_len);
+            key->ep_in_pos[ep_in] += in_len;
+            /* reset state if all data submitted */
+            if (key->ep_in_pos[ep_in] == key->ep_in_size[ep_in]) {
+                key->ep_in_state[ep_in] = CANOKEY_EP_IN_WAIT;
+                key->ep_in_size[ep_in] = 0;
+                key->ep_in_pos[ep_in] = 0;
+            }
+            break;
+        }
+        break;
+    default:
+        p->status = USB_RET_STALL;
+        break;
+    }
+}
+
+static void canokey_realize(USBDevice *base, Error **errp)
+{
+    CanoKeyState *key = CANOKEY(base);
+
+    if (key->file == NULL) {
+        error_setg(errp, "You must provide file=/path/to/canokey-file");
+        return;
+    }
+
+    usb_desc_init(base);
+
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[i] = 0;
+        key->ep_in_pos[i] = 0;
+    }
+
+    if (canokey_emu_init(key, key->file)) {
+        error_setg(errp, "canokey cannot create or read %s", key->file);
+        return;
+    }
+}
+
+static void canokey_unrealize(USBDevice *base)
+{
+}
+
+static Property canokey_properties[] = {
+    DEFINE_PROP_STRING("file", CanoKeyState, file),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void canokey_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    USBDeviceClass *uc = USB_DEVICE_CLASS(klass);
+
+    uc->product_desc   = "CanoKey QEMU";
+    uc->usb_desc       = &desc_canokey;
+    uc->handle_reset   = canokey_handle_reset;
+    uc->handle_control = canokey_handle_control;
+    uc->handle_data    = canokey_handle_data;
+    uc->handle_attach  = usb_desc_attach;
+    uc->realize        = canokey_realize;
+    uc->unrealize      = canokey_unrealize;
+    dc->desc           = "CanoKey QEMU";
+    device_class_set_props(dc, canokey_properties);
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static const TypeInfo canokey_info = {
+    .name = TYPE_CANOKEY,
+    .parent = TYPE_USB_DEVICE,
+    .instance_size = sizeof(CanoKeyState),
+    .class_init = canokey_class_init
+};
+
+static void canokey_register_types(void)
+{
+    type_register_static(&canokey_info);
+}
+
+type_init(canokey_register_types)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346082.571858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapY-0006Zu-Qw; Fri, 10 Jun 2022 09:21:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346082.571858; Fri, 10 Jun 2022 09:21:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapY-0006ZG-GY; Fri, 10 Jun 2022 09:21:24 +0000
Received: by outflank-mailman (input) for mailman id 346082;
 Fri, 10 Jun 2022 09:21:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapW-00050F-Vf
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0d4ffb3-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-478-kRjVILAqOIK_LUG1oE5DDg-1; Fri, 10 Jun 2022 05:21:17 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 406B1185A79C;
 Fri, 10 Jun 2022 09:21:17 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 66EEC2026D64;
 Fri, 10 Jun 2022 09:21:16 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 5BF6C1800620; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0d4ffb3-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XMFb51C69E0Si128czVwJqlqgwkrMEmsc4JCjQtiyLU=;
	b=Xfq+KvCTOPaByo/w/CUyDPG4fOiQYWX6IJ61go8qwKeHWBFs6lKXeSqbizmS0OY2KnQB9g
	II2PNx8DJNY7EWGg1MxyPlJfZBAtgPVuaWJl9yONFAANc5DMKYyodY3zotMrFCAx60EOvn
	1S9sdwhRbFhcbJzo12dRQgyNFz40/OA=
X-MC-Unique: kRjVILAqOIK_LUG1oE5DDg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 06/17] hw/usb/canokey: Add trace events
Date: Fri, 10 Jun 2022 11:20:32 +0200
Message-Id: <20220610092043.1874654-7-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6RoDKQIxSkFwL@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.c    | 13 +++++++++++++
 hw/usb/trace-events | 16 ++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
index 6cb8b7cdb089..4a08b1cbd776 100644
--- a/hw/usb/canokey.c
+++ b/hw/usb/canokey.c
@@ -14,6 +14,7 @@
 #include "qapi/error.h"
 #include "hw/usb.h"
 #include "hw/qdev-properties.h"
+#include "trace.h"
 #include "desc.h"
 #include "canokey.h"
 
@@ -66,6 +67,7 @@ static const USBDesc desc_canokey = {
  */
 int canokey_emu_stall_ep(void *base, uint8_t ep)
 {
+    trace_canokey_emu_stall_ep(ep);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     key->ep_in_size[ep_in] = 0;
@@ -75,6 +77,7 @@ int canokey_emu_stall_ep(void *base, uint8_t ep)
 
 int canokey_emu_set_address(void *base, uint8_t addr)
 {
+    trace_canokey_emu_set_address(addr);
     CanoKeyState *key = base;
     key->dev.addr = addr;
     return 0;
@@ -83,6 +86,7 @@ int canokey_emu_set_address(void *base, uint8_t addr)
 int canokey_emu_prepare_receive(
         void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_prepare_receive(ep, size);
     CanoKeyState *key = base;
     key->ep_out[ep] = pbuf;
     key->ep_out_size[ep] = size;
@@ -92,6 +96,7 @@ int canokey_emu_prepare_receive(
 int canokey_emu_transmit(
         void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_transmit(ep, size);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
@@ -125,6 +130,7 @@ uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
  */
 static void canokey_handle_reset(USBDevice *dev)
 {
+    trace_canokey_handle_reset();
     CanoKeyState *key = CANOKEY(dev);
     for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
         key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
@@ -137,6 +143,7 @@ static void canokey_handle_reset(USBDevice *dev)
 static void canokey_handle_control(USBDevice *dev, USBPacket *p,
                int request, int value, int index, int length, uint8_t *data)
 {
+    trace_canokey_handle_control_setup(request, value, index, length);
     CanoKeyState *key = CANOKEY(dev);
 
     canokey_emu_setup(request, value, index, length);
@@ -144,6 +151,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     uint32_t dir_in = request & DeviceRequest;
     if (!dir_in) {
         /* OUT */
+        trace_canokey_handle_control_out();
         if (key->ep_out[0] != NULL) {
             memcpy(key->ep_out[0], data, length);
         }
@@ -163,6 +171,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     case CANOKEY_EP_IN_READY:
         memcpy(data, key->ep_in[0], key->ep_in_size[0]);
         p->actual_length = key->ep_in_size[0];
+        trace_canokey_handle_control_in(p->actual_length);
         /* reset state */
         key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
         key->ep_in_size[0] = 0;
@@ -182,6 +191,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
     uint32_t out_len;
     switch (p->pid) {
     case USB_TOKEN_OUT:
+        trace_canokey_handle_data_out(ep_out, p->iov.size);
         usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
         out_pos = 0;
         while (out_pos != p->iov.size) {
@@ -226,6 +236,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
                 key->ep_in_size[ep_in] = 0;
                 key->ep_in_pos[ep_in] = 0;
             }
+            trace_canokey_handle_data_in(ep_in, in_len);
             break;
         }
         break;
@@ -237,6 +248,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
 
 static void canokey_realize(USBDevice *base, Error **errp)
 {
+    trace_canokey_realize();
     CanoKeyState *key = CANOKEY(base);
 
     if (key->file == NULL) {
@@ -260,6 +272,7 @@ static void canokey_realize(USBDevice *base, Error **errp)
 
 static void canokey_unrealize(USBDevice *base)
 {
+    trace_canokey_unrealize();
 }
 
 static Property canokey_properties[] = {
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
index 9773cb53300d..914ca7166829 100644
--- a/hw/usb/trace-events
+++ b/hw/usb/trace-events
@@ -345,3 +345,19 @@ usb_serial_set_baud(int bus, int addr, int baud) "dev %d:%u baud rate %d"
 usb_serial_set_data(int bus, int addr, int parity, int data, int stop) "dev %d:%u parity %c, data bits %d, stop bits %d"
 usb_serial_set_flow_control(int bus, int addr, int index) "dev %d:%u flow control %d"
 usb_serial_set_xonxoff(int bus, int addr, uint8_t xon, uint8_t xoff) "dev %d:%u xon 0x%x xoff 0x%x"
+
+# canokey.c
+canokey_emu_stall_ep(uint8_t ep) "ep %d"
+canokey_emu_set_address(uint8_t addr) "addr %d"
+canokey_emu_prepare_receive(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_emu_transmit(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_thread_start(void)
+canokey_thread_stop(void)
+canokey_handle_reset(void)
+canokey_handle_control_setup(int request, int value, int index, int length) "request 0x%04X value 0x%04X index 0x%04X length 0x%04X"
+canokey_handle_control_out(void)
+canokey_handle_control_in(int actual_len) "len %d"
+canokey_handle_data_out(uint8_t ep_out, uint32_t out_len) "ep %d len %d"
+canokey_handle_data_in(uint8_t ep_in, uint32_t in_len) "ep %d len %d"
+canokey_realize(void)
+canokey_unrealize(void)
-- 
2.36.1
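[Editorial note: the trace points added above can be enabled like any other QEMU trace event, e.g. with a glob pattern on the command line. A sketch — the canokey-file path is illustrative:]

```shell
# Enable all canokey trace points at startup; QEMU's -trace option
# accepts glob patterns matching names from hw/usb/trace-events.
qemu-system-x86_64 -usb \
    -device canokey,file=$HOME/.canokey-file \
    -trace "canokey_*"
```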



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346089.571875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapd-0007N4-9O; Fri, 10 Jun 2022 09:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346089.571875; Fri, 10 Jun 2022 09:21:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapd-0007Me-2J; Fri, 10 Jun 2022 09:21:29 +0000
Received: by outflank-mailman (input) for mailman id 346089;
 Fri, 10 Jun 2022 09:21:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapc-00050F-48
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b3e62ad3-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:27 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-671-nr6FMJL-N2GlKoamkhxmQQ-1; Fri, 10 Jun 2022 05:21:22 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7FC30101E985;
 Fri, 10 Jun 2022 09:21:21 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 3A282C53360;
 Fri, 10 Jun 2022 09:21:21 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 83F931800622; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3e62ad3-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HZ4OtpGl92/4vTO64NR8x1+3AJHChGb6d9Bq7oyIeKQ=;
	b=FRv0ac14hFJpcIVS3ef2wvDJBMweX5F+F+XiWM1bUQMZAB1z9Znf3r7YlSLoWt/0K6XWFJ
	61GoaQMvTPElRSIknGgSZv6u9XFHP9dJYa1ATNS88SKNXIR4KGlD0pLSGrXIvAt/rOijaP
	dtTNBhKy1D/F7JAEtKUy0+699V1eAJk=
X-MC-Unique: nr6FMJL-N2GlKoamkhxmQQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 07/17] meson: Add CanoKey
Date: Fri, 10 Jun 2022 11:20:33 +0200
Message-Id: <20220610092043.1874654-8-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6YRD6cxH21mms@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 meson_options.txt             | 2 ++
 hw/usb/Kconfig                | 5 +++++
 hw/usb/meson.build            | 5 +++++
 meson.build                   | 6 ++++++
 scripts/meson-buildoptions.sh | 3 +++
 5 files changed, 21 insertions(+)

diff --git a/meson_options.txt b/meson_options.txt
index 2de94af03712..0e8197386b99 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -189,6 +189,8 @@ option('spice_protocol', type : 'feature', value : 'auto',
        description: 'Spice protocol support')
 option('u2f', type : 'feature', value : 'auto',
        description: 'U2F emulation support')
+option('canokey', type : 'feature', value : 'auto',
+       description: 'CanoKey support')
 option('usb_redir', type : 'feature', value : 'auto',
        description: 'libusbredir support')
 option('l2tpv3', type : 'feature', value : 'auto',
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
index 53f8283ffdc1..ce4f4339763e 100644
--- a/hw/usb/Kconfig
+++ b/hw/usb/Kconfig
@@ -119,6 +119,11 @@ config USB_U2F
     default y
     depends on USB
 
+config USB_CANOKEY
+    bool
+    default y
+    depends on USB
+
 config IMX_USBPHY
     bool
     default y
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
index de853d780dd8..793df42e2127 100644
--- a/hw/usb/meson.build
+++ b/hw/usb/meson.build
@@ -63,6 +63,11 @@ if u2f.found()
   softmmu_ss.add(when: 'CONFIG_USB_U2F', if_true: [u2f, files('u2f-emulated.c')])
 endif
 
+# CanoKey
+if canokey.found()
+  softmmu_ss.add(when: 'CONFIG_USB_CANOKEY', if_true: [canokey, files('canokey.c')])
+endif
+
 # usb redirect
 if usbredir.found()
   usbredir_ss = ss.source_set()
diff --git a/meson.build b/meson.build
index 21cd949082dc..0c2e11ff0715 100644
--- a/meson.build
+++ b/meson.build
@@ -1408,6 +1408,12 @@ if have_system
                    method: 'pkg-config',
                    kwargs: static_kwargs)
 endif
+canokey = not_found
+if have_system
+  canokey = dependency('canokey-qemu', required: get_option('canokey'),
+                   method: 'pkg-config',
+                   kwargs: static_kwargs)
+endif
 usbredir = not_found
 if not get_option('usb_redir').auto() or have_system
   usbredir = dependency('libusbredirparser-0.5', required: get_option('usb_redir'),
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 00ea4d8cd169..1fc1d2e2c362 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -73,6 +73,7 @@ meson_options_help() {
   printf "%s\n" '  bpf             eBPF support'
   printf "%s\n" '  brlapi          brlapi character device driver'
   printf "%s\n" '  bzip2           bzip2 support for DMG images'
+  printf "%s\n" '  canokey         CanoKey support'
   printf "%s\n" '  cap-ng          cap_ng support'
   printf "%s\n" '  capstone        Whether and how to find the capstone library'
   printf "%s\n" '  cloop           cloop image format support'
@@ -204,6 +205,8 @@ _meson_option_parse() {
     --disable-brlapi) printf "%s" -Dbrlapi=disabled ;;
     --enable-bzip2) printf "%s" -Dbzip2=enabled ;;
     --disable-bzip2) printf "%s" -Dbzip2=disabled ;;
+    --enable-canokey) printf "%s" -Dcanokey=enabled ;;
+    --disable-canokey) printf "%s" -Dcanokey=disabled ;;
     --enable-cap-ng) printf "%s" -Dcap_ng=enabled ;;
     --disable-cap-ng) printf "%s" -Dcap_ng=disabled ;;
     --enable-capstone) printf "%s" -Dcapstone=enabled ;;
-- 
2.36.1
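[Editorial note: with the build-system glue above, the feature follows QEMU's usual configure pattern. A sketch, assuming libcanokey-qemu is installed where pkg-config can find it:]

```shell
# meson.build probes for the library via pkg-config; verify it resolves:
pkg-config --exists canokey-qemu && echo "canokey-qemu found"

# configure maps --enable-canokey to -Dcanokey=enabled
# (see the scripts/meson-buildoptions.sh hunk above):
./configure --enable-canokey
```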



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:21:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:21:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346100.571885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapj-0008Bk-Tz; Fri, 10 Jun 2022 09:21:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346100.571885; Fri, 10 Jun 2022 09:21:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzapj-0008BP-Pj; Fri, 10 Jun 2022 09:21:35 +0000
Received: by outflank-mailman (input) for mailman id 346100;
 Fri, 10 Jun 2022 09:21:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapi-00050F-Sr
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:35 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7d14d57-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-199-ie5LO4fzPN-Uj95gsVZ9kw-1; Fri, 10 Jun 2022 05:21:29 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E493938149A2;
 Fri, 10 Jun 2022 09:21:28 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6E551C53360;
 Fri, 10 Jun 2022 09:21:28 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id A169F180062F; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7d14d57-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852892;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Iypp4lhP0vK8zaOIYv4YGS15C6hr5eRaL+NEXu7qlj4=;
	b=FhB15AM2+IRbBqhTlkRbxzl+jI7PYkf+c5YybuLkZ4roa7T0rkisLF7ZTXEktPDty0Njfi
	PX2UQIBmGN5Vbbo7rRX0M5gn3hSunDjgQVD59qMYSZeul5OhbK4yvziAeHZO1GgdOmr1jI
	+qgmHLjv9cYdSFvOitX8qDUv1nkhF50=
X-MC-Unique: ie5LO4fzPN-Uj95gsVZ9kw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 08/17] docs: Add CanoKey documentation
Date: Fri, 10 Jun 2022 11:20:34 +0200
Message-Id: <20220610092043.1874654-9-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6ilQimrK+l5NN@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/canokey.rst  | 168 +++++++++++++++++++++++++++++++
 2 files changed, 169 insertions(+)
 create mode 100644 docs/system/devices/canokey.rst

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 3b729b920d7c..05060060563f 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -92,3 +92,4 @@ Emulated Devices
    devices/vhost-user.rst
    devices/virtio-pmem.rst
    devices/vhost-user-rng.rst
+   devices/canokey.rst
diff --git a/docs/system/devices/canokey.rst b/docs/system/devices/canokey.rst
new file mode 100644
index 000000000000..169f99b8eb82
--- /dev/null
+++ b/docs/system/devices/canokey.rst
@@ -0,0 +1,168 @@
+.. _canokey:
+
+CanoKey QEMU
+------------
+
+CanoKey [1]_ is an open-source secure key that supports:
+
+* U2F / FIDO2 with Ed25519 and HMAC-secret
+* OpenPGP Card V3.4 with RSA4096, Ed25519 and more [2]_
+* PIV (NIST SP 800-73-4)
+* HOTP / TOTP
+* NDEF
+
+All these platform-independent features are in canokey-core [3]_.
+
+For different platforms, CanoKey has different implementations,
+including both hardware implementations and virtual cards:
+
+* CanoKey STM32 [4]_
+* CanoKey Pigeon [5]_
+* (virt-card) CanoKey USB/IP
+* (virt-card) CanoKey FunctionFS
+
+In QEMU, yet another CanoKey virt-card is implemented.
+CanoKey QEMU exposes itself as a USB device to the guest OS.
+
+With the same software configuration as a hardware key,
+the guest OS can use all the functionalities of a secure key as if
+a real hardware key were plugged in.
+
+CanoKey QEMU makes debugging much more convenient:
+
+* libcanokey-qemu supports debug output, so developers can
+  inspect what happens inside a secure key
+* CanoKey QEMU supports trace events, so key events can be traced
+* the QEMU USB stack supports pcap, so USB packets between the guest
+  and the key can be captured and analysed
+
+In particular:
+
+* developers working on software with secure key support (e.g. FIDO2,
+  OpenPGP) can see what happens inside the secure key
+* secure key developers can easily capture and analyse USB packets
+  between the guest OS and CanoKey
+
+Also, since this is a virtual card, it can easily be used in CI to test
+code that works with secure keys.
+
+Building
+========
+
+libcanokey-qemu is required to use CanoKey QEMU.
+
+.. code-block:: shell
+
+    git clone https://github.com/canokeys/canokey-qemu
+    mkdir canokey-qemu/build
+    pushd canokey-qemu/build
+
+If you want to install libcanokey-qemu in a different place,
+add ``-DCMAKE_INSTALL_PREFIX=/path/to/your/place`` to the cmake
+invocation below.
+
+.. code-block:: shell
+
+    cmake ..
+    make
+    make install # may need sudo
+    popd
+
+Then configure and build QEMU:
+
+.. code-block:: shell
+
+    # depending on your env, lib/pkgconfig can be lib64/pkgconfig
+    export PKG_CONFIG_PATH=/path/to/your/place/lib/pkgconfig:$PKG_CONFIG_PATH
+    ./configure --enable-canokey && make
+
+Using CanoKey QEMU
+==================
+
+CanoKey QEMU stores all its data in a file on the host, specified by the
+``file`` argument when invoking QEMU.
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file
+
+Note: keep this file safe, as it may contain your private keys!
+
+The first time the file is used, it is created and initialized by
+CanoKey; afterwards CanoKey QEMU simply reads this file.
+
+After the guest OS boots, you can check that the USB device is present.
+
+For example, if the guest OS is a Linux machine, you can invoke ``lsusb``
+and find CanoKey QEMU there:
+
+.. code-block:: shell
+
+    $ lsusb
+    Bus 001 Device 002: ID 20a0:42d4 Clay Logic CanoKey QEMU
+
+You can set up the key following the guide at [6]_. The console for the key is at [7]_.
+
+Debugging
+=========
+
+CanoKey QEMU consists of two parts, ``libcanokey-qemu.so`` and ``canokey.c``,
+the latter of which resides in QEMU. The former provides core functionality
+of a secure key while the latter provides platform-dependent functions:
+USB packet handling.
+
+If you want to trace what happens inside the secure key, add
+``-DQEMU_DEBUG_OUTPUT=ON`` to the cmake command line when compiling
+libcanokey-qemu:
+
+.. code-block:: shell
+
+    cmake .. -DQEMU_DEBUG_OUTPUT=ON
+
+If you want to trace events in ``canokey.c``, use
+
+.. parsed-literal::
+
+    |qemu_system| --trace "canokey_*" \\
+        -usb -device canokey,file=$HOME/.canokey-file
+
+If you want to capture USB packets between the guest and the host, use:
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file,pcap=key.pcap
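The file written via ``pcap=`` is a standard pcap capture, so any pcap-aware
tool can read it. A small sanity-check sketch (``tshark`` and ``wireshark``
are assumptions about locally installed tooling, not something QEMU ships;
``od`` merely inspects the capture file header):

```shell
# Inspect the capture with a pcap-aware tool, e.g. (if installed):
#   tshark -r key.pcap
#   wireshark key.pcap
# Without those tools, a quick check that the file is a pcap capture:
# the first four bytes are the pcap magic number, a1b2c3d4 or d4c3b2a1
# depending on byte order.
od -A x -N 4 -t x1 key.pcap
```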
+
+Limitations
+===========
+
+Currently ``libcanokey-qemu.so`` has dozens of global variables, as it
+was originally designed for embedded systems. Thus one QEMU instance
+cannot run multiple CanoKey QEMU devices, i.e. you cannot use
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file \\
+         -device canokey,file=$HOME/.canokey-file2
+
+Also, there is no lock on the canokey-file, so two CanoKey QEMU instances
+cannot use the same canokey-file at the same time.
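A possible workaround sketch, not a CanoKey QEMU feature: serialize access
to the state file yourself with ``flock(1)`` from util-linux, so a second
instance fails fast instead of sharing the file (the helper name and the
lock-file naming are our own invention):

```shell
# Run a command while holding an exclusive lock derived from the
# canokey state file; a concurrent second invocation fails immediately.
canokey_locked() {
    statefile="$1"; shift
    # "$@" is the command to run while the lock is held; in real use
    # this would be, e.g.:
    #   qemu-system-x86_64 -usb -device canokey,file="$statefile"
    flock --nonblock "$statefile.lock" "$@" \
        || { echo "canokey file busy: $statefile" >&2; return 1; }
}
```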
+
+Another limitation is that this device is not compatible with
+``qemu-xhci``: the device hangs when there are FIDO2 packets (traffic
+on interrupt endpoints). If you do not use FIDO2 it works as intended,
+but for full functionality you should use the older UHCI/EHCI bus and
+attach canokey to it, for example:
+
+.. parsed-literal::
+
+   |qemu_system| -device piix3-usb-uhci,id=uhci -device canokey,bus=uhci.0
+
+References
+==========
+
+.. [1] `<https://canokeys.org>`_
+.. [2] `<https://docs.canokeys.org/userguide/openpgp/#supported-algorithm>`_
+.. [3] `<https://github.com/canokeys/canokey-core>`_
+.. [4] `<https://github.com/canokeys/canokey-stm32>`_
+.. [5] `<https://github.com/canokeys/canokey-pigeon>`_
+.. [6] `<https://docs.canokeys.org/>`_
+.. [7] `<https://console.canokeys.org/>`_
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:22:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346113.571897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaq9-0001RE-AL; Fri, 10 Jun 2022 09:22:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346113.571897; Fri, 10 Jun 2022 09:22:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzaq9-0001R3-5Q; Fri, 10 Jun 2022 09:22:01 +0000
Received: by outflank-mailman (input) for mailman id 346113;
 Fri, 10 Jun 2022 09:22:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzaq8-0001QI-0K
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:22:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzaq7-00052V-Gs; Fri, 10 Jun 2022 09:21:59 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzaq7-0001By-Aq; Fri, 10 Jun 2022 09:21:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=1n1OGo/fhKvqbYvMMZubM5NgMCq1Vy19RgCUFKHE4IE=; b=mZn1NQO4rHVOv0qImiCOwyhW9J
	mftMA7SrWQu8wCATy1Xla0GBbQicqcYswUpgqqxkJELW8chP8ghi65mnqefB5PatQyr5KJ87cYGHV
	CzbojqXk4BYzFfGZ9BV0K0lsKMzWavT2WVSX9NqY/2wPy74o78RsnwGkRHPHfpyBo+qE=;
Message-ID: <e1369888-4135-c2f6-727a-a018d78a4d3a@xen.org>
Date: Fri, 10 Jun 2022 10:21:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, George.Dunlap@citrix.com, roger.pau@citrix.com,
 Artem_Mygaiev@epam.com, Andrew.Cooper3@citrix.com, Bertrand.Marquis@arm.com
References: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 10/06/2022 01:48, Stefano Stabellini wrote:
> Add the new MISRA C rules agreed by the MISRA C working group to
> docs/misra/rules.rst.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> 
> ---
> 
> I added the rules that we agreed upon this morning together with all the
> notes we discussed, in particular:
> 
> - macros as macro parameters at invocation time for Rule 5.3
> - the clarification of Rule 9.1
> - gnu_inline exception for Rule 8.10
> 
> 
> diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
> index 6ccff07765..5c28836bc8 100644
> --- a/docs/misra/rules.rst
> +++ b/docs/misra/rules.rst
> @@ -89,6 +89,28 @@ existing codebase are work-in-progress.
>          (xen/include/public/) are allowed to retain longer identifiers
>          for backward compatibility.
>   
> +   * - `Rule 5.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_02.c>`_
> +     - Required
> +     - Identifiers declared in the same scope and name space shall be
> +       distinct
> +     - The Xen characters limit for identifiers is 40. Public headers
> +       (xen/include/public/) are allowed to retain longer identifiers
> +       for backward compatibility.
> +
> +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> +     - Required
> +     - An identifier declared in an inner scope shall not hide an
> +       identifier declared in an outer scope
> +     - Using macros as macro parameters at invocation time is allowed,
> +       e.g. MAX(var0, MIN(var1, var2))
> +
> +   * - `Rule 5.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_04.c>`_
> +     - Required
> +     - Macro identifiers shall be distinct
> +     - The Xen characters limit for macro identifiers is 40. Public
> +       headers (xen/include/public/) are allowed to retain longer
> +       identifiers for backward compatibility.
> +
>      * - `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
>        - Required
>        - Single-bit named bit fields shall not be of a signed type
> @@ -123,8 +145,75 @@ existing codebase are work-in-progress.
>          declarations of objects and functions that have internal linkage
>        -
>   
> +   * - `Rule 8.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_10.c>`_
> +     - Required
> +     - An inline function shall be declared with the static storage class
> +     - gnu_inline (without static) is allowed.
> +
>      * - `Rule 8.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_12.c>`_
>        - Required
>        - Within an enumerator list the value of an implicitly-specified
>          enumeration constant shall be unique
>        -
> +
> +   * - `Rule 9.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_01.c>`_
> +     - Mandatory
> +     - The value of an object with automatic storage duration shall not
> +       be read before it has been set
> +     - Rule clarification: do not use variables before they are
> +       initialized. An explicit initializer is not necessarily required.
> +       Try reducing the scope of the variable. If an explicit
> +       initializer is added, consider initializing the variable to a
> +       poison value.
> +
> +   * - `Rule 9.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_02.c>`_
> +     - Required
> +     - The initializer for an aggregate or union shall be enclosed in
> +       braces
> +     -
> +
> +   * - `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
> +     - Mandatory
> +     - The operand of the sizeof operator shall not contain any
> +       expression which has potential side effects
> +     -
> +
> +   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
> +     - Required
> +     - A loop counter shall not have essentially floating type
> +     -
> +
> +   * - `Rule 16.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_07.c>`_
> +     - Required
> +     - A switch-expression shall not have essentially Boolean type
> +     -
> +
> +   * - `Rule 17.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_03.c>`_
> +     - Mandatory
> +     - A function shall not be declared implicitly
> +     -
> +
> +   * - `Rule 17.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_04.c>`_
> +     - Mandatory
> +     - All exit paths from a function with non-void return type shall
> +       have an explicit return statement with an expression
> +     -
> +
> +   * - `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
> +     - Required
> +     - Expressions resulting from the expansion of macro parameters
> +       shall be enclosed in parentheses
> +     -
> +
> +   * - `Rule 20.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_13.c>`_
> +     - Required
> +     - A line whose first token is # shall be a valid preprocessing
> +       directive
> +     -
> +
> +   * - `Rule 20.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_14.c>`_
> +     - Required
> +     - All #else #elif and #endif preprocessor directives shall reside
> +       in the same file as the #if #ifdef or #ifndef directive to which
> +       they are related
> +     -

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346119.571908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarI-0002PU-Lu; Fri, 10 Jun 2022 09:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346119.571908; Fri, 10 Jun 2022 09:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarI-0002PN-Ii; Fri, 10 Jun 2022 09:23:12 +0000
Received: by outflank-mailman (input) for mailman id 346119;
 Fri, 10 Jun 2022 09:23:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapn-0005HE-9H
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba6716f7-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:38 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-271-OrbZa1CaOsm0-icMqDwwwA-1; Fri, 10 Jun 2022 05:21:31 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 17259101AA47;
 Fri, 10 Jun 2022 09:21:31 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C6582C53360;
 Fri, 10 Jun 2022 09:21:30 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id CCF041800634; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba6716f7-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852896;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o6vOmc4r9ui1udWdegwNJ5alAMO2M7wcYKQqptdB7KA=;
	b=ggc3Vb20CnJEwmYZyMXpt+RhDFcZ7y7166LHQNnfbJ9/0rWwye2O+KStzjqyqQBSZfx7nB
	Nb3U4oZoeBcIVhC0+AbaR+QT1My13biDuE0CY/7KE/e9ypIxtFIl3DwUt20dWLc06Jo6hb
	fk/Mg/qFAEHf9tC+WLe7vBeJFYDq/5A=
X-MC-Unique: OrbZa1CaOsm0-icMqDwwwA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 10/17] MAINTAINERS: add myself as CanoKey maintainer
Date: Fri, 10 Jun 2022 11:20:36 +0200
Message-Id: <20220610092043.1874654-11-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY61xI0IcFT1fOP@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5580a36b68e1..4ae9d707d5b0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2425,6 +2425,14 @@ F: hw/intc/s390_flic*.c
 F: include/hw/s390x/s390_flic.h
 L: qemu-s390x@nongnu.org
 
+CanoKey
+M: Hongren (Zenithal) Zheng <i@zenithal.me>
+S: Maintained
+R: Canokeys.org <contact@canokeys.org>
+F: hw/usb/canokey.c
+F: hw/usb/canokey.h
+F: docs/system/devices/canokey.rst
+
 Subsystems
 ----------
 Overall Audio backends
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346123.571919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarP-0002nE-VQ; Fri, 10 Jun 2022 09:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346123.571919; Fri, 10 Jun 2022 09:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarP-0002n5-SI; Fri, 10 Jun 2022 09:23:19 +0000
Received: by outflank-mailman (input) for mailman id 346123;
 Fri, 10 Jun 2022 09:23:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapk-00050F-4w
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7e64093-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-588-LsraNzfaMNKg3TlTAuhYSQ-1; Fri, 10 Jun 2022 05:21:27 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 012A12949BC0;
 Fri, 10 Jun 2022 09:21:27 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A00221121314;
 Fri, 10 Jun 2022 09:21:26 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id B63451800630; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7e64093-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852892;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TqkIe5B5GmtLHYDEQNksBwY0pKybp1zEh7xnID9ieAU=;
	b=OQXk/Vm0GieqbuUFeqMSD773bVNA63dESqRmao+niPl3IWghseXVloRfZ8tpv2EHIpwjYK
	Dhe/nSX3aVrHUYMTKWynglW18pnPYj5idswfdwGLzIaTlg3f4cW2M2bnun8sH9eQkzflTw
	5Glpkh/Lrh6ti2hvMGUScnSTLaUvpec=
X-MC-Unique: LsraNzfaMNKg3TlTAuhYSQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 09/17] docs/system/devices/usb: Add CanoKey to USB devices examples
Date: Fri, 10 Jun 2022 11:20:35 +0200
Message-Id: <20220610092043.1874654-10-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6o+QFhzA7VHcZ@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/devices/usb.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/system/devices/usb.rst b/docs/system/devices/usb.rst
index afb7d6c2268d..872d9167589b 100644
--- a/docs/system/devices/usb.rst
+++ b/docs/system/devices/usb.rst
@@ -199,6 +199,10 @@ option or the ``device_add`` monitor command. Available devices are:
 ``u2f-{emulated,passthru}``
    Universal Second Factor device
 
+``canokey``
+   An Open-source Secure Key implementing FIDO2, OpenPGP, PIV and more.
+   For more information, see :ref:`canokey`.
+
 Physical port addressing
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346125.571923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarQ-0002qw-Ca; Fri, 10 Jun 2022 09:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346125.571923; Fri, 10 Jun 2022 09:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarQ-0002pl-4x; Fri, 10 Jun 2022 09:23:20 +0000
Received: by outflank-mailman (input) for mailman id 346125;
 Fri, 10 Jun 2022 09:23:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapp-00050F-C5
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:41 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bac577b5-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:39 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-635-IJrpUXNAP_GdSGR8Vldk4g-1; Fri, 10 Jun 2022 05:21:32 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id DFB9A811E75;
 Fri, 10 Jun 2022 09:21:30 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8D1FC1121314;
 Fri, 10 Jun 2022 09:21:30 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id E3E72180063C; Fri, 10 Jun 2022 11:20:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bac577b5-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852897;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=obSiKgR2g439N1NCymTkGuD5ZtC+AWQDUPf1CsA55V4=;
	b=A8KHt3jE/UCiWkmdPhylJnug3wxHG0GMdarfFZU0I97BTVe7IGgTZ+sEiBMtrSrwiFt9gm
	caMH7vKaomn22N2eDz69fU+QY1oNchh11G6RuXuGVAMymwZ7f9aoCov9XqSH/1xLx/zAvp
	LXAU/dFRIDCCGr+kQICBCYOSXi+kXFE=
X-MC-Unique: IJrpUXNAP_GdSGR8Vldk4g-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Arnout Engelen <arnout@bzzt.net>
Subject: [PULL 11/17] hw/usb/hcd-ehci: fix writeback order
Date: Fri, 10 Jun 2022 11:20:37 +0200
Message-Id: <20220610092043.1874654-12-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: Arnout Engelen <arnout@bzzt.net>

The 'active' bit passes control over a qTD between the guest and the
controller: the guest sets it to '1' to enable execution by the controller,
and the controller sets it to '0' to hand back control to the guest.

ehci_state_writeback writes two dwords to main memory using DMA:
the third dword of the qTD (containing dt, total bytes to transfer,
cpage, cerr and status) and the fourth dword of the qTD (containing
the offset).

This commit makes sure the fourth dword is written before the third,
avoiding a race condition where a new offset, written into the qTD by
the guest after it observed the status go to '0', gets overwritten by
a 'late' DMA writeback of the previous offset.

This race condition could lead to 'cpage out of range (5)' errors,
and can be reproduced by:

./qemu-system-x86_64 -enable-kvm -bios $SEABIOS/bios.bin -m 4096 -device usb-ehci -blockdev driver=file,read-only=on,filename=/home/aengelen/Downloads/openSUSE-Tumbleweed-DVD-i586-Snapshot20220428-Media.iso,node-name=iso -device usb-storage,drive=iso,bootindex=0 -chardev pipe,id=shell,path=/tmp/pipe -device virtio-serial -device virtconsole,chardev=shell -device virtio-rng-pci -serial mon:stdio -nographic

(press a key, select 'Installation' (2), and accept the default
values. On my machine the 'cpage out of range' error is reproduced
while loading the Linux kernel about once every 7 attempts. With the
fix in this commit it no longer fails.)

This problem was previously reported as a seabios problem in
https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/OUTHT5ISSQJGXPNTUPY3O5E5EPZJCHM3/
and as a nixos CI build failure in
https://github.com/NixOS/nixpkgs/issues/170803

Signed-off-by: Arnout Engelen <arnout@bzzt.net>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/hcd-ehci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index 33a8a377bd95..d4da8dcb8d15 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2011,7 +2011,10 @@ static int ehci_state_writeback(EHCIQueue *q)
     ehci_trace_qtd(q, NLPTR_GET(p->qtdaddr), (EHCIqtd *) &q->qh.next_qtd);
     qtd = (uint32_t *) &q->qh.next_qtd;
     addr = NLPTR_GET(p->qtdaddr);
-    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 2);
+    /* First write back the offset */
+    put_dwords(q->ehci, addr + 3 * sizeof(uint32_t), qtd + 3, 1);
+    /* Then write back the token, clearing the 'active' bit */
+    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 1);
     ehci_free_packet(p);
 
     /*
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346136.571940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarc-0003mm-Nm; Fri, 10 Jun 2022 09:23:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346136.571940; Fri, 10 Jun 2022 09:23:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarc-0003mS-Kw; Fri, 10 Jun 2022 09:23:32 +0000
Received: by outflank-mailman (input) for mailman id 346136;
 Fri, 10 Jun 2022 09:23:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzaps-00050F-Cg
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd0a61dc-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-640-igR9nvgvPSaAq0gbeel8tQ-1; Fri, 10 Jun 2022 05:21:37 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E3CE0811E75;
 Fri, 10 Jun 2022 09:21:36 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 97F71492C3B;
 Fri, 10 Jun 2022 09:21:36 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 07514180079D; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd0a61dc-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852901;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZStqjDDMtLMpDsz58F8O29JWAJ6F3RkNGXfWxICk7zU=;
	b=QJzN87mIxhMgyH8BfBH31cgaeo22BmpSuZjKyl3DusL2/J6KxwPzWgGPorFuFjtKvlz3f8
	LTUnERHgV9+l2GTRwZDh64dlwK6KvqykrGskATpBewReBq5KPfHyEs6Qn2pwJXB/DEk+0k
	vUgtw44FHaBT/9yboROB2MyOnoiA1sk=
X-MC-Unique: igR9nvgvPSaAq0gbeel8tQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Joelle van Dyne <j@getutm.app>
Subject: [PULL 12/17] usbredir: avoid queuing hello packet on snapshot restore
Date: Fri, 10 Jun 2022 11:20:38 +0200
Message-Id: <20220610092043.1874654-13-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9

From: Joelle van Dyne <j@getutm.app>

When launching QEMU with "-loadvm", usbredir_create_parser() should avoid
setting up the hello packet (just as with "-incoming"). On the latest
version of libusbredir, usbredirparser_unserialize() will return an error
if the parser is not "pristine."

Signed-off-by: Joelle van Dyne <j@getutm.app>
Message-Id: <20220507041850.98716-1-j@getutm.app>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/redirect.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc0b..1bd30efc3ef0 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1280,7 +1280,8 @@ static void usbredir_create_parser(USBRedirDevice *dev)
     }
 #endif
 
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
+    if (runstate_check(RUN_STATE_INMIGRATE) ||
+        runstate_check(RUN_STATE_PRELAUNCH)) {
         flags |= usbredirparser_fl_no_hello;
     }
     usbredirparser_init(dev->parser, VERSION, caps, USB_REDIR_CAPS_SIZE,
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346152.571952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarh-0004LS-3G; Fri, 10 Jun 2022 09:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346152.571952; Fri, 10 Jun 2022 09:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarg-0004LE-VF; Fri, 10 Jun 2022 09:23:36 +0000
Received: by outflank-mailman (input) for mailman id 346152;
 Fri, 10 Jun 2022 09:23:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapw-00050F-5x
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:48 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfb29c08-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:46 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-654-_2Xh9ZlrMw-WSYMohO052Q-1; Fri, 10 Jun 2022 05:21:42 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2981A811E75;
 Fri, 10 Jun 2022 09:21:42 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9F36440466A9;
 Fri, 10 Jun 2022 09:21:41 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 9273F1800916; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfb29c08-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852905;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ebiRccH2awWxGwAppvEzoFJ/M1x++CURc9+zFzjg06Y=;
	b=Tg3pgGK+yzR4n3AgVD3t94NHF1h1EnLDBiRXAb9GjAxdVz0CvzVyQjCQzv7bduAJRvRUoS
	c0zLzYYnX90WmPufmBGYJ/9H7+k1pon2sJg/Le4067gJ0WM2gWgx602x5muJmi8wb6rIl9
	peQ3AH4XmS7FImIcv/jWYGzDQhw56Co=
X-MC-Unique: _2Xh9ZlrMw-WSYMohO052Q-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 16/17] ui: Deliver refresh rate via QemuUIInfo
Date: Fri, 10 Jun 2022 11:20:42 +0200
Message-Id: <20220610092043.1874654-17-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1

From: Akihiko Odaki <akihiko.odaki@gmail.com>

This change adds a new member, refresh_rate, to QemuUIInfo in
include/ui/console.h. It represents the refresh rate of the
physical display backend, and is more appropriate than the
GUI update interval as the refresh rate the emulated device
reports:
- sdl may set the GUI update interval shorter than the refresh rate
  of the physical display to respond to user-generated events.
- sdl and vnc aggressively change the GUI update interval, but
  a guest is typically not designed to respond to frequent
  refresh rate changes, or frequent "display mode" changes in
  general. The frequency of refresh rate changes of the physical
  display backend better matches the guest's expectations.

QemuUIInfo also has other members representing the "display mode",
which makes it a suitable place to carry the refresh rate. It
throttles update notifications, which prevents frequent changes
of the display mode.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-3-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h |  2 +-
 include/ui/gtk.h     |  2 +-
 hw/display/xenfb.c   | 14 +++++++++++---
 ui/console.c         |  6 ------
 ui/gtk-egl.c         |  4 ++--
 ui/gtk-gl-area.c     |  3 +--
 ui/gtk.c             | 45 +++++++++++++++++++++++++-------------------
 7 files changed, 42 insertions(+), 34 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index 642d6f5248cf..b64d82436097 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
     int       yoff;
     uint32_t  width;
     uint32_t  height;
+    uint32_t  refresh_rate;
 } QemuUIInfo;
 
 /* cursor data format is 32bit RGBA */
@@ -431,7 +432,6 @@ typedef struct GraphicHwOps {
     void (*gfx_update)(void *opaque);
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
-    void (*update_interval)(void *opaque, uint64_t interval);
     void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
diff --git a/include/ui/gtk.h b/include/ui/gtk.h
index 101b147d1b98..ae0f53740d19 100644
--- a/include/ui/gtk.h
+++ b/include/ui/gtk.h
@@ -155,7 +155,7 @@ extern bool gtk_use_gl_area;
 
 /* ui/gtk.c */
 void gd_update_windowsize(VirtualConsole *vc);
-int gd_monitor_update_interval(GtkWidget *widget);
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget);
 void gd_hw_gl_flushed(void *vc);
 
 /* ui/gtk-egl.c */
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index cea10fe3c780..50857cd97a0b 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -777,16 +777,24 @@ static void xenfb_update(void *opaque)
     xenfb->up_fullscreen = 0;
 }
 
-static void xenfb_update_interval(void *opaque, uint64_t interval)
+static void xenfb_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     struct XenFB *xenfb = opaque;
+    uint32_t refresh_rate;
 
     if (xenfb->feature_update) {
 #ifdef XENFB_TYPE_REFRESH_PERIOD
         if (xenfb_queue_full(xenfb)) {
             return;
         }
-        xenfb_send_refresh_period(xenfb, interval);
+
+        refresh_rate = info->refresh_rate;
+        if (!refresh_rate) {
+            refresh_rate = 75;
+        }
+
+        /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+        xenfb_send_refresh_period(xenfb, 1000 * 1000 / refresh_rate);
 #endif
     }
 }
@@ -983,5 +991,5 @@ struct XenDevOps xen_framebuffer_ops = {
 static const GraphicHwOps xenfb_ops = {
     .invalidate  = xenfb_invalidate,
     .gfx_update  = xenfb_update,
-    .update_interval = xenfb_update_interval,
+    .ui_info     = xenfb_ui_info,
 };
diff --git a/ui/console.c b/ui/console.c
index 36c80cd1de85..9331b85203a0 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -160,7 +160,6 @@ static void gui_update(void *opaque)
     uint64_t dcl_interval;
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl;
-    QemuConsole *con;
 
     ds->refreshing = true;
     dpy_refresh(ds);
@@ -175,11 +174,6 @@ static void gui_update(void *opaque)
     }
     if (ds->update_interval != interval) {
         ds->update_interval = interval;
-        QTAILQ_FOREACH(con, &consoles, next) {
-            if (con->hw_ops->update_interval) {
-                con->hw_ops->update_interval(con->hw, interval);
-            }
-        }
         trace_console_refresh(interval);
     }
     ds->last_update = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
diff --git a/ui/gtk-egl.c b/ui/gtk-egl.c
index e3bd4bc27431..b5bffbab2522 100644
--- a/ui/gtk-egl.c
+++ b/ui/gtk-egl.c
@@ -140,8 +140,8 @@ void gd_egl_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(
+            vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.esurface) {
         gd_egl_init(vc);
diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 2e0129c28cd4..682638a197d2 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -121,8 +121,7 @@ void gd_gl_area_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.gls) {
         if (!gtk_widget_get_realized(vc->gfx.drawing_area)) {
diff --git a/ui/gtk.c b/ui/gtk.c
index c57c36749e0e..2a791dd2aa04 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -710,11 +710,20 @@ static gboolean gd_window_close(GtkWidget *widget, GdkEvent *event,
     return TRUE;
 }
 
-static void gd_set_ui_info(VirtualConsole *vc, gint width, gint height)
+static void gd_set_ui_refresh_rate(VirtualConsole *vc, int refresh_rate)
 {
     QemuUIInfo info;
 
-    memset(&info, 0, sizeof(info));
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
+    info.refresh_rate = refresh_rate;
+    dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
+}
+
+static void gd_set_ui_size(VirtualConsole *vc, gint width, gint height)
+{
+    QemuUIInfo info;
+
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
     info.width = width;
     info.height = height;
     dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
@@ -738,33 +747,32 @@ static void gd_resize_event(GtkGLArea *area,
 {
     VirtualConsole *vc = (void *)opaque;
 
-    gd_set_ui_info(vc, width, height);
+    gd_set_ui_size(vc, width, height);
 }
 
 #endif
 
-/*
- * If available, return the update interval of the monitor in ms,
- * else return 0 (the default update interval).
- */
-int gd_monitor_update_interval(GtkWidget *widget)
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget)
 {
 #ifdef GDK_VERSION_3_22
     GdkWindow *win = gtk_widget_get_window(widget);
+    int refresh_rate;
 
     if (win) {
         GdkDisplay *dpy = gtk_widget_get_display(widget);
         GdkMonitor *monitor = gdk_display_get_monitor_at_window(dpy, win);
-        int refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
-
-        if (refresh_rate) {
-            /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
-            return MIN(1000 * 1000 / refresh_rate,
-                       GUI_REFRESH_INTERVAL_DEFAULT);
-        }
+        refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
+    } else {
+        refresh_rate = 0;
     }
+
+    gd_set_ui_refresh_rate(vc, refresh_rate);
+
+    /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+    vc->gfx.dcl.update_interval = refresh_rate ?
+        MIN(1000 * 1000 / refresh_rate, GUI_REFRESH_INTERVAL_DEFAULT) :
+        GUI_REFRESH_INTERVAL_DEFAULT;
 #endif
-    return 0;
 }
 
 static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
@@ -801,8 +809,7 @@ static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
         return FALSE;
     }
 
-    vc->gfx.dcl.update_interval =
-        gd_monitor_update_interval(vc->window ? vc->window : s->window);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : s->window);
 
     fbw = surface_width(vc->gfx.ds);
     fbh = surface_height(vc->gfx.ds);
@@ -1691,7 +1698,7 @@ static gboolean gd_configure(GtkWidget *widget,
 {
     VirtualConsole *vc = opaque;
 
-    gd_set_ui_info(vc, cfg->width, cfg->height);
+    gd_set_ui_size(vc, cfg->width, cfg->height);
     return FALSE;
 }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346156.571963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarj-0004iN-Ch; Fri, 10 Jun 2022 09:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346156.571963; Fri, 10 Jun 2022 09:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarj-0004i1-8c; Fri, 10 Jun 2022 09:23:39 +0000
Received: by outflank-mailman (input) for mailman id 346156;
 Fri, 10 Jun 2022 09:23:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapx-0005HE-V4
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0bd9420-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-353-U1-fgaTYNJC2qVYGl5-reA-1; Fri, 10 Jun 2022 05:21:44 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id DCDE0800971;
 Fri, 10 Jun 2022 09:21:43 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9C9F22166B29;
 Fri, 10 Jun 2022 09:21:43 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id B26AF180091C; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0bd9420-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852907;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fjkT/ghvgIP+ASWCtRqN8cFZ3mDw+gAHZxFgoi4zHGM=;
	b=cG5Ik6AJ6C1McVqi6U77m0ezhnMX6gSZgJZoaWoceLI8BZYMTv+AVIOr9g3mF/F0qMvHJR
	eliTcOI2TOUFQl7j5YgdxkE14LAZkKdx4zml/5QwpJ6WP6Y7Vpvt/YY4n4LTGAY48pRZsq
	BgNrMrJicC6DKOoSvPK/kwS9gY50CsI=
X-MC-Unique: U1-fgaTYNJC2qVYGl5-reA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 17/17] virtio-gpu: Respect UI refresh rate for EDID
Date: Fri, 10 Jun 2022 11:20:43 +0200
Message-Id: <20220610092043.1874654-18-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-4-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/hw/virtio/virtio-gpu.h | 1 +
 hw/display/virtio-gpu-base.c   | 1 +
 hw/display/virtio-gpu.c        | 1 +
 3 files changed, 3 insertions(+)

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index afff9e158e31..2e28507efe21 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -80,6 +80,7 @@ struct virtio_gpu_scanout {
 struct virtio_gpu_requested_state {
     uint16_t width_mm, height_mm;
     uint32_t width, height;
+    uint32_t refresh_rate;
     int x, y;
 };
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index b21d6e5b0be8..a29f191aa82e 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -79,6 +79,7 @@ static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     g->req_state[idx].x = info->xoff;
     g->req_state[idx].y = info->yoff;
+    g->req_state[idx].refresh_rate = info->refresh_rate;
     g->req_state[idx].width = info->width;
     g->req_state[idx].height = info->height;
     g->req_state[idx].width_mm = info->width_mm;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 55c6dd576318..20cc703dcc6e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -217,6 +217,7 @@ virtio_gpu_generate_edid(VirtIOGPU *g, int scanout,
         .height_mm = b->req_state[scanout].height_mm,
         .prefx = b->req_state[scanout].width,
         .prefy = b->req_state[scanout].height,
+        .refresh_rate = b->req_state[scanout].refresh_rate,
     };
 
     edid->size = cpu_to_le32(sizeof(edid->edid));
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346168.571974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarm-0005Cb-OX; Fri, 10 Jun 2022 09:23:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346168.571974; Fri, 10 Jun 2022 09:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarm-0005B6-Je; Fri, 10 Jun 2022 09:23:42 +0000
Received: by outflank-mailman (input) for mailman id 346168;
 Fri, 10 Jun 2022 09:23:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapu-0005HE-Do
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be6c05f9-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:44 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-593-4WEDaO-dNm6StV-sVKdKFw-1; Fri, 10 Jun 2022 05:21:40 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B364880159B;
 Fri, 10 Jun 2022 09:21:39 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id CCC85492C3B;
 Fri, 10 Jun 2022 09:21:38 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3CE881800864; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be6c05f9-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852903;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fwmfDIy30DgQ0R3WnGybG8E0iUm2x/In8Zs4SMv8ftc=;
	b=T0cXmCTHF4/eaV9iQFk01HlN/RT98TqSJJqrKtAqHJcSR7/Fuwq4IEWOlgQR8E2PEfyK9z
	x9xfLF2KxYACrgZVBPcS+y5j94aB+Co+z7QVXgfj7C7D4X2FgRerU6cWfd7o9wUUOefeAw
	Qi3dlmfhfAIMYa386XeedYpFuzNlvGI=
X-MC-Unique: 4WEDaO-dNm6StV-sVKdKFw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>
Subject: [PULL 14/17] ui: move 'pc-bios/keymaps' to 'ui/keymaps'
Date: Fri, 10 Jun 2022 11:20:40 +0200
Message-Id: <20220610092043.1874654-15-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9

From: Daniel P. Berrangé <berrange@redhat.com>

The 'keymaps' directory contents have nothing to do with the firmware
blobs. The 'pc-bios/keymaps' directory appears to have been used
previously as a convenience for getting the files installed into
a subdir of the firmware install dir. This install-time arrangement
does not need to be reflected in the source tree arrangement. These
keymaps logically belong with the UI code.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220426160150.812530-1-berrange@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 pc-bios/meson.build                 | 1 -
 {pc-bios => ui}/keymaps/ar          | 0
 {pc-bios => ui}/keymaps/bepo        | 0
 {pc-bios => ui}/keymaps/cz          | 0
 {pc-bios => ui}/keymaps/da          | 0
 {pc-bios => ui}/keymaps/de          | 0
 {pc-bios => ui}/keymaps/de-ch       | 0
 {pc-bios => ui}/keymaps/en-gb       | 0
 {pc-bios => ui}/keymaps/en-us       | 0
 {pc-bios => ui}/keymaps/es          | 0
 {pc-bios => ui}/keymaps/et          | 0
 {pc-bios => ui}/keymaps/fi          | 0
 {pc-bios => ui}/keymaps/fo          | 0
 {pc-bios => ui}/keymaps/fr          | 0
 {pc-bios => ui}/keymaps/fr-be       | 0
 {pc-bios => ui}/keymaps/fr-ca       | 0
 {pc-bios => ui}/keymaps/fr-ch       | 0
 {pc-bios => ui}/keymaps/hr          | 0
 {pc-bios => ui}/keymaps/hu          | 0
 {pc-bios => ui}/keymaps/is          | 0
 {pc-bios => ui}/keymaps/it          | 0
 {pc-bios => ui}/keymaps/ja          | 0
 {pc-bios => ui}/keymaps/lt          | 0
 {pc-bios => ui}/keymaps/lv          | 0
 {pc-bios => ui}/keymaps/meson.build | 0
 {pc-bios => ui}/keymaps/mk          | 0
 {pc-bios => ui}/keymaps/nl          | 0
 {pc-bios => ui}/keymaps/no          | 0
 {pc-bios => ui}/keymaps/pl          | 0
 {pc-bios => ui}/keymaps/pt          | 0
 {pc-bios => ui}/keymaps/pt-br       | 0
 {pc-bios => ui}/keymaps/ru          | 0
 {pc-bios => ui}/keymaps/sl          | 0
 {pc-bios => ui}/keymaps/sv          | 0
 {pc-bios => ui}/keymaps/th          | 0
 {pc-bios => ui}/keymaps/tr          | 0
 ui/meson.build                      | 1 +
 37 files changed, 1 insertion(+), 1 deletion(-)
 rename {pc-bios => ui}/keymaps/ar (100%)
 rename {pc-bios => ui}/keymaps/bepo (100%)
 rename {pc-bios => ui}/keymaps/cz (100%)
 rename {pc-bios => ui}/keymaps/da (100%)
 rename {pc-bios => ui}/keymaps/de (100%)
 rename {pc-bios => ui}/keymaps/de-ch (100%)
 rename {pc-bios => ui}/keymaps/en-gb (100%)
 rename {pc-bios => ui}/keymaps/en-us (100%)
 rename {pc-bios => ui}/keymaps/es (100%)
 rename {pc-bios => ui}/keymaps/et (100%)
 rename {pc-bios => ui}/keymaps/fi (100%)
 rename {pc-bios => ui}/keymaps/fo (100%)
 rename {pc-bios => ui}/keymaps/fr (100%)
 rename {pc-bios => ui}/keymaps/fr-be (100%)
 rename {pc-bios => ui}/keymaps/fr-ca (100%)
 rename {pc-bios => ui}/keymaps/fr-ch (100%)
 rename {pc-bios => ui}/keymaps/hr (100%)
 rename {pc-bios => ui}/keymaps/hu (100%)
 rename {pc-bios => ui}/keymaps/is (100%)
 rename {pc-bios => ui}/keymaps/it (100%)
 rename {pc-bios => ui}/keymaps/ja (100%)
 rename {pc-bios => ui}/keymaps/lt (100%)
 rename {pc-bios => ui}/keymaps/lv (100%)
 rename {pc-bios => ui}/keymaps/meson.build (100%)
 rename {pc-bios => ui}/keymaps/mk (100%)
 rename {pc-bios => ui}/keymaps/nl (100%)
 rename {pc-bios => ui}/keymaps/no (100%)
 rename {pc-bios => ui}/keymaps/pl (100%)
 rename {pc-bios => ui}/keymaps/pt (100%)
 rename {pc-bios => ui}/keymaps/pt-br (100%)
 rename {pc-bios => ui}/keymaps/ru (100%)
 rename {pc-bios => ui}/keymaps/sl (100%)
 rename {pc-bios => ui}/keymaps/sv (100%)
 rename {pc-bios => ui}/keymaps/th (100%)
 rename {pc-bios => ui}/keymaps/tr (100%)

diff --git a/pc-bios/meson.build b/pc-bios/meson.build
index 41ba1c0ec7ba..e49c0e5f56de 100644
--- a/pc-bios/meson.build
+++ b/pc-bios/meson.build
@@ -97,4 +97,3 @@ foreach f : blobs
 endforeach
 
 subdir('descriptors')
-subdir('keymaps')
diff --git a/pc-bios/keymaps/ar b/ui/keymaps/ar
similarity index 100%
rename from pc-bios/keymaps/ar
rename to ui/keymaps/ar
diff --git a/pc-bios/keymaps/bepo b/ui/keymaps/bepo
similarity index 100%
rename from pc-bios/keymaps/bepo
rename to ui/keymaps/bepo
diff --git a/pc-bios/keymaps/cz b/ui/keymaps/cz
similarity index 100%
rename from pc-bios/keymaps/cz
rename to ui/keymaps/cz
diff --git a/pc-bios/keymaps/da b/ui/keymaps/da
similarity index 100%
rename from pc-bios/keymaps/da
rename to ui/keymaps/da
diff --git a/pc-bios/keymaps/de b/ui/keymaps/de
similarity index 100%
rename from pc-bios/keymaps/de
rename to ui/keymaps/de
diff --git a/pc-bios/keymaps/de-ch b/ui/keymaps/de-ch
similarity index 100%
rename from pc-bios/keymaps/de-ch
rename to ui/keymaps/de-ch
diff --git a/pc-bios/keymaps/en-gb b/ui/keymaps/en-gb
similarity index 100%
rename from pc-bios/keymaps/en-gb
rename to ui/keymaps/en-gb
diff --git a/pc-bios/keymaps/en-us b/ui/keymaps/en-us
similarity index 100%
rename from pc-bios/keymaps/en-us
rename to ui/keymaps/en-us
diff --git a/pc-bios/keymaps/es b/ui/keymaps/es
similarity index 100%
rename from pc-bios/keymaps/es
rename to ui/keymaps/es
diff --git a/pc-bios/keymaps/et b/ui/keymaps/et
similarity index 100%
rename from pc-bios/keymaps/et
rename to ui/keymaps/et
diff --git a/pc-bios/keymaps/fi b/ui/keymaps/fi
similarity index 100%
rename from pc-bios/keymaps/fi
rename to ui/keymaps/fi
diff --git a/pc-bios/keymaps/fo b/ui/keymaps/fo
similarity index 100%
rename from pc-bios/keymaps/fo
rename to ui/keymaps/fo
diff --git a/pc-bios/keymaps/fr b/ui/keymaps/fr
similarity index 100%
rename from pc-bios/keymaps/fr
rename to ui/keymaps/fr
diff --git a/pc-bios/keymaps/fr-be b/ui/keymaps/fr-be
similarity index 100%
rename from pc-bios/keymaps/fr-be
rename to ui/keymaps/fr-be
diff --git a/pc-bios/keymaps/fr-ca b/ui/keymaps/fr-ca
similarity index 100%
rename from pc-bios/keymaps/fr-ca
rename to ui/keymaps/fr-ca
diff --git a/pc-bios/keymaps/fr-ch b/ui/keymaps/fr-ch
similarity index 100%
rename from pc-bios/keymaps/fr-ch
rename to ui/keymaps/fr-ch
diff --git a/pc-bios/keymaps/hr b/ui/keymaps/hr
similarity index 100%
rename from pc-bios/keymaps/hr
rename to ui/keymaps/hr
diff --git a/pc-bios/keymaps/hu b/ui/keymaps/hu
similarity index 100%
rename from pc-bios/keymaps/hu
rename to ui/keymaps/hu
diff --git a/pc-bios/keymaps/is b/ui/keymaps/is
similarity index 100%
rename from pc-bios/keymaps/is
rename to ui/keymaps/is
diff --git a/pc-bios/keymaps/it b/ui/keymaps/it
similarity index 100%
rename from pc-bios/keymaps/it
rename to ui/keymaps/it
diff --git a/pc-bios/keymaps/ja b/ui/keymaps/ja
similarity index 100%
rename from pc-bios/keymaps/ja
rename to ui/keymaps/ja
diff --git a/pc-bios/keymaps/lt b/ui/keymaps/lt
similarity index 100%
rename from pc-bios/keymaps/lt
rename to ui/keymaps/lt
diff --git a/pc-bios/keymaps/lv b/ui/keymaps/lv
similarity index 100%
rename from pc-bios/keymaps/lv
rename to ui/keymaps/lv
diff --git a/pc-bios/keymaps/meson.build b/ui/keymaps/meson.build
similarity index 100%
rename from pc-bios/keymaps/meson.build
rename to ui/keymaps/meson.build
diff --git a/pc-bios/keymaps/mk b/ui/keymaps/mk
similarity index 100%
rename from pc-bios/keymaps/mk
rename to ui/keymaps/mk
diff --git a/pc-bios/keymaps/nl b/ui/keymaps/nl
similarity index 100%
rename from pc-bios/keymaps/nl
rename to ui/keymaps/nl
diff --git a/pc-bios/keymaps/no b/ui/keymaps/no
similarity index 100%
rename from pc-bios/keymaps/no
rename to ui/keymaps/no
diff --git a/pc-bios/keymaps/pl b/ui/keymaps/pl
similarity index 100%
rename from pc-bios/keymaps/pl
rename to ui/keymaps/pl
diff --git a/pc-bios/keymaps/pt b/ui/keymaps/pt
similarity index 100%
rename from pc-bios/keymaps/pt
rename to ui/keymaps/pt
diff --git a/pc-bios/keymaps/pt-br b/ui/keymaps/pt-br
similarity index 100%
rename from pc-bios/keymaps/pt-br
rename to ui/keymaps/pt-br
diff --git a/pc-bios/keymaps/ru b/ui/keymaps/ru
similarity index 100%
rename from pc-bios/keymaps/ru
rename to ui/keymaps/ru
diff --git a/pc-bios/keymaps/sl b/ui/keymaps/sl
similarity index 100%
rename from pc-bios/keymaps/sl
rename to ui/keymaps/sl
diff --git a/pc-bios/keymaps/sv b/ui/keymaps/sv
similarity index 100%
rename from pc-bios/keymaps/sv
rename to ui/keymaps/sv
diff --git a/pc-bios/keymaps/th b/ui/keymaps/th
similarity index 100%
rename from pc-bios/keymaps/th
rename to ui/keymaps/th
diff --git a/pc-bios/keymaps/tr b/ui/keymaps/tr
similarity index 100%
rename from pc-bios/keymaps/tr
rename to ui/keymaps/tr
diff --git a/ui/meson.build b/ui/meson.build
index e9f48c531588..25c9a5ff8cd9 100644
--- a/ui/meson.build
+++ b/ui/meson.build
@@ -170,6 +170,7 @@ if have_system or xkbcommon.found()
 endif
 
 subdir('shader')
+subdir('keymaps')
 
 if have_system
   subdir('icons')
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346171.571979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarn-0005Hc-AG; Fri, 10 Jun 2022 09:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346171.571979; Fri, 10 Jun 2022 09:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarn-0005Fh-1e; Fri, 10 Jun 2022 09:23:43 +0000
Received: by outflank-mailman (input) for mailman id 346171;
 Fri, 10 Jun 2022 09:23:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapo-00050F-C0
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ba55ec05-e89e-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 11:21:38 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-227-ruaRDZJFNMeO8ESM-paIQA-1; Fri, 10 Jun 2022 05:21:33 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 21ADC101E169;
 Fri, 10 Jun 2022 09:21:33 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id B77D42026985;
 Fri, 10 Jun 2022 09:21:32 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 1F29218007A4; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba55ec05-e89e-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852896;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nK8ui8MGXo28dreQTzjS6bbhtNnN1TEyDDyCd/Q1Dgo=;
	b=BSTLxGAglSRdxtqPn9q3bMZgN5k6DfpdBu7pyPvmGZilsNFAJPa2OZ/ZHO6oVMj1aUgpow
	UXh6SFnXWS9ohLC3BX63qahD1K5iSmS4oq0DiGr4BHzrhXaYQvHtLLseF3vIQGYhb2GuWT
	8Dm4cdaoEVz/DZ4jKuLALFZ+L/kRgEs=
X-MC-Unique: ruaRDZJFNMeO8ESM-paIQA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Dongwon Kim <dongwon.kim@intel.com>,
	Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: [PULL 13/17] virtio-gpu: update done only on the scanout associated with rect
Date: Fri, 10 Jun 2022 11:20:39 +0200
Message-Id: <20220610092043.1874654-14-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: Dongwon Kim <dongwon.kim@intel.com>

Only the scanouts containing the rect area that comes with the
guest's resource-flush request need to be updated.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Message-Id: <20220505214030.4261-1-dongwon.kim@intel.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/display/virtio-gpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index cd4a56056fd9..55c6dd576318 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -514,6 +514,9 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
         for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
             scanout = &g->parent_obj.scanout[i];
             if (scanout->resource_id == res->resource_id &&
+                rf.r.x >= scanout->x && rf.r.y >= scanout->y &&
+                rf.r.x + rf.r.width <= scanout->x + scanout->width &&
+                rf.r.y + rf.r.height <= scanout->y + scanout->height &&
                 console_has_gl(scanout->con)) {
                 dpy_gl_update(scanout->con, 0, 0, scanout->width,
                               scanout->height);
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:23:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:23:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346185.571996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarr-00067I-U6; Fri, 10 Jun 2022 09:23:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346185.571996; Fri, 10 Jun 2022 09:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzarr-00066b-Q5; Fri, 10 Jun 2022 09:23:47 +0000
Received: by outflank-mailman (input) for mailman id 346185;
 Fri, 10 Jun 2022 09:23:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HR3=WR=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1nzapz-0005HE-TA
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:21:51 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0e4520b-e89e-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:21:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-299-qygipjx1NYupOP8hE8i2Jw-1; Fri, 10 Jun 2022 05:21:42 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id F31F01C05AB7;
 Fri, 10 Jun 2022 09:21:41 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A49F140D2827;
 Fri, 10 Jun 2022 09:21:40 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 6241F1800865; Fri, 10 Jun 2022 11:20:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0e4520b-e89e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1654852907;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hTTJvnegxxvfJsWseZ7+SmUjba5GRnY9ME/e3uOvaP0=;
	b=Qt6NfMk14JEmdOand/QTiTIIuXfF1Jgs0EqHsxUREWwDsniQpfSG6OQEjFI+ifaq3WHkac
	X7HdwbzHDBC2DB+gObn6Fu/DqU2tNY6WOcbmrwG24fm1V+K4H1tBIxYD5WMjs8tEDqUi2F
	jUe7oGV4MNQFjhs6gmA3LO4Wrc/uHko=
X-MC-Unique: qygipjx1NYupOP8hE8i2Jw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: [PULL 15/17] ui/console: Do not return a value with ui_info
Date: Fri, 10 Jun 2022 11:20:41 +0200
Message-Id: <20220610092043.1874654-16-kraxel@redhat.com>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: Akihiko Odaki <akihiko.odaki@gmail.com>

The returned value is not used and is misleading.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-2-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h         | 2 +-
 hw/display/virtio-gpu-base.c | 6 +++---
 hw/display/virtio-vga.c      | 5 ++---
 hw/vfio/display.c            | 8 +++-----
 4 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index c44b28a972ca..642d6f5248cf 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -432,7 +432,7 @@ typedef struct GraphicHwOps {
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
     void (*update_interval)(void *opaque, uint64_t interval);
-    int (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
+    void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 790cec333c8c..b21d6e5b0be8 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -69,12 +69,12 @@ static void virtio_gpu_notify_event(VirtIOGPUBase *g, uint32_t event_type)
     virtio_notify_config(&g->parent_obj);
 }
 
-static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOGPUBase *g = opaque;
 
     if (idx >= g->conf.max_outputs) {
-        return -1;
+        return;
     }
 
     g->req_state[idx].x = info->xoff;
@@ -92,7 +92,7 @@ static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     /* send event to guest */
     virtio_gpu_notify_event(g, VIRTIO_GPU_EVENT_DISPLAY);
-    return 0;
+    return;
 }
 
 static void
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index c206b5da384b..4dcb34c4a740 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -47,15 +47,14 @@ static void virtio_vga_base_text_update(void *opaque, console_ch_t *chardata)
     }
 }
 
-static int virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOVGABase *vvga = opaque;
     VirtIOGPUBase *g = vvga->vgpu;
 
     if (g->hw_ops->ui_info) {
-        return g->hw_ops->ui_info(g, idx, info);
+        g->hw_ops->ui_info(g, idx, info);
     }
-    return -1;
 }
 
 static void virtio_vga_base_gl_block(void *opaque, bool block)
diff --git a/hw/vfio/display.c b/hw/vfio/display.c
index 89bc90508fb8..78f4d82c1c35 100644
--- a/hw/vfio/display.c
+++ b/hw/vfio/display.c
@@ -106,14 +106,14 @@ err:
     return;
 }
 
-static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
-                                     QemuUIInfo *info)
+static void vfio_display_edid_ui_info(void *opaque, uint32_t idx,
+                                      QemuUIInfo *info)
 {
     VFIOPCIDevice *vdev = opaque;
     VFIODisplay *dpy = vdev->dpy;
 
     if (!dpy->edid_regs) {
-        return 0;
+        return;
     }
 
     if (info->width && info->height) {
@@ -121,8 +121,6 @@ static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
     } else {
         vfio_display_edid_update(vdev, false, 0, 0);
     }
-
-    return 0;
 }
 
 static void vfio_display_edid_init(VFIOPCIDevice *vdev)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:34:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346225.572013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb24-0000lU-9H; Fri, 10 Jun 2022 09:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346225.572013; Fri, 10 Jun 2022 09:34:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb24-0000lN-54; Fri, 10 Jun 2022 09:34:20 +0000
Received: by outflank-mailman (input) for mailman id 346225;
 Fri, 10 Jun 2022 09:31:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=asYd=WR=samsung.com=boyoun.park@srs-se1.protection.inumbo.net>)
 id 1nzazG-0000ZZ-0w
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:31:26 +0000
Received: from mailout2.samsung.com (mailout2.samsung.com [203.254.224.25])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 14dffb19-e8a0-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:31:22 +0200 (CEST)
Received: from epcas2p3.samsung.com (unknown [182.195.41.55])
 by mailout2.samsung.com (KnoxPortal) with ESMTP id
 20220610093117epoutp029dbc489e32173c7965a08d8d46cffa19~3OHhUl7zx1307913079epoutp02E
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 09:31:17 +0000 (GMT)
Received: from epsnrtp3.localdomain (unknown [182.195.42.164]) by
 epcas2p4.samsung.com (KnoxPortal) with ESMTP id
 20220610093116epcas2p44b9ac87a218e6a9be03e7f032987b4b5~3OHgRgU6J3259232592epcas2p4N;
 Fri, 10 Jun 2022 09:31:16 +0000 (GMT)
Received: from epsmges2p4.samsung.com (unknown [182.195.36.92]) by
 epsnrtp3.localdomain (Postfix) with ESMTP id 4LKG1H6Wd7z4x9Q0; Fri, 10 Jun
 2022 09:31:15 +0000 (GMT)
Received: from epcas2p3.samsung.com ( [182.195.41.55]) by
 epsmges2p4.samsung.com (Symantec Messaging Gateway) with SMTP id
 3B.C3.09694.36F03A26; Fri, 10 Jun 2022 18:31:15 +0900 (KST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 14dffb19-e8a0-11ec-bd2c-47488cf2e6aa
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.samsung.com 20220610093117epoutp029dbc489e32173c7965a08d8d46cffa19~3OHhUl7zx1307913079epoutp02E
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
	s=mail20170921; t=1654853477;
	bh=y4E6wR9slsQEJICiCAvb9vCCulQm65NDndLcLiIF6C4=;
	h=Subject:Reply-To:From:To:Date:References:From;
	b=qOVaLan4Z6ah/YVBnzIESxHGyoJKQgLkqJ62YN+MPz+f3XFQWv0En9nDLJgARDFMh
	 4ty1YZEE+G0mcKVOcJEAdsKBelF0BZ/Ahh0or/zN5n2ManOF18i6GNLiK1q4m2Ct8g
	 xs2j75bS/DXp6gm76fHVGFPwRK8swXuSDRrt71aE=
X-AuditID: b6c32a48-47fff700000025de-6b-62a30f63e042
Mime-Version: 1.0
Subject: [Question] Alternative Method for git send-email
Reply-To: boyoun.park@samsung.com
Sender: =?UTF-8?B?67CV67O07Jyk?= <boyoun.park@samsung.com>
From: =?UTF-8?B?67CV67O07Jyk?= <boyoun.park@samsung.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	=?UTF-8?B?67CV67O07Jyk?= <boyoun.park@samsung.com>
X-Priority: 3
X-Content-Kind-Code: NORMAL
X-CPGS-Detection: blocking_info_exchange
X-Drm-Type: N,general
X-Msg-Generator: Mail
X-Msg-Type: PERSONAL
X-Reply-Demand: N
Message-ID: <20220610093115epcms2p3017e473201e08cec7f3d8108d17199de@epcms2p3>
Date: Fri, 10 Jun 2022 18:31:15 +0900
X-CMS-MailID: 20220610093115epcms2p3017e473201e08cec7f3d8108d17199de
Content-Type: multipart/related;
	boundary="----ejgysZT_jCDz6-dBlD0fDNVgQyjTmtEDPX8gV1hbFULNT-md=_dcfe1_"
X-Sendblock-Type: AUTO_CONFIDENTIAL
CMS-TYPE: 102P
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFvrFKsWRmVeSWpSXmKPExsWy7bCmuW4y/+Ikg+0fNCxe3f7LZPF9y2Qm
	ByaPwx+usHj0bVnFGMAUlW2TkZqYklqkkJqXnJ+SmZduq+QdHO8cb2pmYKhraGlhrqSQl5ib
	aqvk4hOg65aZAzRfSaEsMacUKBSQWFyspG9nU5RfWpKqkJFfXGKrlFqQklNgXqBXnJhbXJqX
	rpeXWmJlaGBgZApUmJCdcfL9XbaCns2MFf82PGBtYFy1jrGLkZNDQsBE4lnnfbYuRi4OIYEd
	jBIPJjcAORwcvAKCEn93CIPUCAtYSjQ9m80MYgsJKEq0nlnIDhG3kjja08AKYrMJWEjsb1oA
	NlNEoFqid+4DVoj5vBIz2p+yQNjSEtuXb4XaqyHxY1kvM4QtKnFz9Vt2GPv9sflQNSISrffO
	QtUISjz4uRsqLiXR+OkQVH2xxJy+9ewg90sINDBK3Lx4kQkioS8xpWUO2BG8Ar4S5x80gjWw
	CKhKzH7yB2qoi0Tnj8dgNcwCWRIXF30H+11CQFniyC0WiDCfRMfhv+wwv+yY9wRqvKrEr6YX
	jDB/7TnbDmV7SOxcv4sZZIyQQKDE53d1ExjlZiECdBaSXRC2hkTrnLnsELaixJTuh1C2uMT+
	K23MmOLqEqf2LGFewMi+ilEstaA4Nz212KjABB7tyfm5mxjBiVDLYwfj7Lcf9A4xMnEwHmKU
	4GBWEuENuL0oSYg3JbGyKrUoP76oNCe1+BCjKTBYJjJLiSbnA1NxXkm8oYmlgYmZmaG5kamB
	uZI4r1fKhkQhgfTEktTs1NSC1CKYPiYOTqkGJhm2vcvVJi3KPnRrr/ee6Cov4TkHcyyK1wrW
	fnonlNVyIpF9LWf3rcLH5z9dtj51vLKFXX65Hs89yZSFWSKGV8s5fHp0A/cbbBRuSt54eHHy
	q/an53vMJ+bziCls523uShJ+Gr5w/uGASXObZIwPcorfaG7LCTySI7Eotoxj0bNncbrNt3dv
	WfN6o3vPzc+CIseOVzr475e++PH0nU3a+0QncfW7vm9mbZU+knrl76G1c2+U6idGKSglix2d
	6DSxfl1v8bmeT+onpkUYCevY7V64jS2y5lHpTouNs97yL0taPk+NNYxjd93EaVds/V3YlTzX
	RLQb/dx/4WVqus2CZBa1E4/zzmaXas38tPlLlRJLcUaioRZzUXEiAADjbWcNBAAA
DLP-Filter: Pass
X-CFilter-Loop: Reflected
X-CMS-RootMailID: 20220610093115epcms2p3017e473201e08cec7f3d8108d17199de
References: <CGME20220610093115epcms2p3017e473201e08cec7f3d8108d17199de@epcms2p3>

------ejgysZT_jCDz6-dBlD0fDNVgQyjTmtEDPX8gV1hbFULNT-md=_dcfe1_
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: base64

PCFET0NUWVBFIGh0bWw+DQo8aHRtbD4NCjxoZWFkPg0KPG1ldGEgY2xhc3M9ImN1aS1jb250ZW50
LWRlZmF1bHQiIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIgY29udGVudD0idGV4dC9odG1sOyBj
aGFyc2V0PVVURi04Ij4NCjxzdHlsZSBjbGFzcz0iY3VpLWNvbnRlbnQtZGVmYXVsdCIgZGF0YS1j
YWZlLWRlZmF1bHQ9InRydWUiPi8qISBjYWZlIG5vdGUgdjIuMi4yNC4xIHwgQ29weXJpZ2h0IDIw
MTQsIFMtQ29yZSwgSW5jLiBBbGwgUmlnaHQgUmVzZXJ2ZWQuICovDQpAY2hhcnNldCAiVVRGLTgi
O2JvZHl7ZGlzcGxheTpibG9jazttYXJnaW46MTBweDt9Ym9keSBvbCxib2R5IHVse21hcmdpbjow
O3BhZGRpbmctbGVmdDo0MHB4O31ib2R5IHAsYm9keSBsaXtsaW5lLWhlaWdodDoxLjk7bWFyZ2lu
OjAgYXV0bzt9dGFibGUuY3VpLWRpdnt3aWR0aDoxMDAlO2Rpc3BsYXk6YmxvY2s7fXRhYmxlLmN1
aS1kaXYgPiB0Ym9keXtkaXNwbGF5OmJsb2NrO310YWJsZS5jdWktZGl2ID4gdGJvZHkgPiB0cntk
aXNwbGF5OmJsb2NrO310YWJsZS5jdWktZGl2ID4gdGJvZHkgPiB0ciA+IHRkLHRhYmxlLmN1aS1k
aXYgPiB0Ym9keSA+IHRyID4gdGh7ZGlzcGxheTpibG9jazt9dGFibGUuY3VpLXBhc3RlZC10YWJs
ZSB0aCx0YWJsZS5jdWktcGFzdGVkLXRhYmxlIHRkLHRhYmxlLmN1aS1wYXN0ZWQtdGFibGUgcCx0
YWJsZS5jdWktcGFzdGVkLXRhYmxlIGgxLHRhYmxlLmN1aS1wYXN0ZWQtdGFibGUgaDIsdGFibGUu
Y3VpLXBhc3RlZC10YWJsZSBoMyx0YWJsZS5jdWktcGFzdGVkLXRhYmxlIGg0LHRhYmxlLmN1aS1w
YXN0ZWQtdGFibGUgaDUsdGFibGUuY3VpLXBhc3RlZC10YWJsZSBoNix0YWJsZS5jdWktcGFzdGVk
LXRhYmxlIGxpe2xpbmUtaGVpZ2h0Om5vcm1hbDt9aW1nW2RhdGEtY3VpLWFsdC1pbWFnZV0sZGl2
W2RhdGEtY3VpLWFsdC1pbWFnZV17YmFja2dyb3VuZDp1cmwoImRhdGE6aW1hZ2UvcG5nO2Jhc2U2
NCxpVkJPUncwS0dnb0FBQUFOU1VoRVVnQUFBQllBQUFBVUNBWUFBQUNKZk0wd0FBQUJIMGxFUVZR
NGpiWFUyNnFFSUJRR1lGK3d0eExNZ2c0UWRHZEJSRVVoRkVGQkJ4L3RuNnNHcGNQZXpUVENBaS8w
UTExclNRZ2hoREdHSjRQOEFuM2oyMFFJZ1N6THZvb2tTZmF3VWdycnVrSXA5VlZjd2tJSWNNN2gr
ejY2cm5zR2JwckdlQ3ZmOTUrQjh6dzNZTTc1TS9BNGpuQWNCM3BTOVkzelBLUHYrMHZZS0RmOWpj
ZHhSSjduYUpyRzJEUk5FOEl3QkdNTVpWa2V3bVFiT253Vnk3SWdpcUwzVFd6YlJ0dTJ4K2dWbktZ
cFhOZEYyN1pZMXhWeEhPK2F3SFZkRE1NQUthV0puc0ZabGhrbjh6enZ0TU9DSUFDbDFObzhTcW0x
UzU1U0NrVlIzR3JkSTNRSDEzVjkvN1BSeG1HNVNTbGgyL2JIS0dNTVZWWHRZYzc1YlpSU2F1blgx
dzkyKzl2VVQ2bWp1M1dmSnVxdnYvemY4Rm1aWHE3L0Jmb0NBMVZSc3ZLNEFmZ0FBQUFBU1VWT1JL
NUNZSUk9Iikgbm8tcmVwZWF0IGNlbnRlciAjYzFjMWMxO31fOi1tcy1sYW5nKHgpLHRhYmxlIHRy
Om5vdCg6Zmlyc3QtY2hpbGQpIHRkW2NvbHNwYW5de2JvcmRlci10b3Atc3R5bGU6bm9uZVw5ICFp
bXBvcnRhbnQ7fV86LW1zLWxhbmcoeCksdGFibGUgdHI6bm90KDpmaXJzdC1jaGlsZCkgdGhbY29s
c3Bhbl17Ym9yZGVyLXRvcC1zdHlsZTpub25lXDkgIWltcG9ydGFudDt9Xzo6c2VsZWN0aW9uLGJv
ZHksYm9keSAqe3dvcmQtd3JhcDpicmVhay13b3JkXDA7fTwvc3R5bGU+DQo8c3R5bGUgY2xhc3M9
ImN1aS1jb250ZW50LWRlZmF1bHQiIGRhdGEtdXNlci1jb25maWc9InRydWUiPmJvZHkge21hcmdp
bjogMTBweDsgZm9udC1zaXplOiAxMHB0OyBmb250LWZhbWlseTon66eR7J2AIOqzoOuUlSc7IGxp
Dear Xen Project Developers,

Hi, I want to contribute to the Xen Project, so I tried to follow the
contribution guidelines from the Xen Wiki:
https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches

However, I found that I cannot use an SMTP server directly from Linux,
due to a security policy at work. Therefore, I cannot send patches
through git send-email. May I ask whether there is an alternative
method to git send-email?

Best Regards,
Bo Youn Park.


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:36:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:36:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346265.572027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb3k-0001dC-Qx; Fri, 10 Jun 2022 09:36:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346265.572027; Fri, 10 Jun 2022 09:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb3k-0001d5-Nm; Fri, 10 Jun 2022 09:36:04 +0000
Received: by outflank-mailman (input) for mailman id 346265;
 Fri, 10 Jun 2022 09:36:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzb3j-0001cz-Og
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:36:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzb3j-0005JX-DM; Fri, 10 Jun 2022 09:36:03 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzb3j-0002BA-6q; Fri, 10 Jun 2022 09:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=s6BfmEU8864JGM4/VFVea1/LQptm4Ee9ClsMSi9Q/s8=; b=ajCsNIzqY9lSlw+s9QtqAv8EsR
	ok524mac7uTXy8sV7gGfVmCKDdY/P30vBlIMdZKjDy+v28WYR9mm8rAj5BN9aClshGfqc84c/uXoR
	xIoXYPlZxVjV3mk8pQpvOBQ5gzI6yZzs35pfOrBLyID2w6Y/XNd9pJs2oDp4Y/oGxTmc=;
Message-ID: <015d87aa-936b-94d4-2500-c438814c5d71@xen.org>
Date: Fri, 10 Jun 2022 10:36:01 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 0/2] xen/mm: Optimize init_heap_pages()
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220609083039.76667-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220609083039.76667-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 09/06/2022 09:30, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> As part of the Live-Update work, we noticed that a big part of Xen boot
> is spent adding pages to the heap. For instance, when running Xen
> in a nested environment on a c5.metal, it takes ~1.5s.

On IRC, Bertrand asked me how I measured the time taken here. I will 
share it on xen-devel so everyone can use it. Note the patch is 
x86-specific, but it could easily be adapted for Arm.

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 53a73010e029..d99b9f3abf5e 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -615,10 +615,16 @@ static inline bool using_2M_mapping(void)
            !l1_table_offset((unsigned long)__2M_rwdata_end);
 }
 
+extern uint64_t myticks;
+
 static void noreturn init_done(void)
 {
     void *va;
     unsigned long start, end;
+    uint64_t elapsed = tsc_ticks2ns(myticks);
+
+    printk("elapsed %lu ms %lu ns\n", elapsed / MILLISECS(1),
+           elapsed % MILLISECS(1));
 
     system_state = SYS_STATE_active;
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ea59cd1a4aba..3e6504283f1e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1865,9 +1865,12 @@ static unsigned long avail_heap_pages(
     return free_pages;
 }
 
+uint64_t myticks;
+
 void __init end_boot_allocator(void)
 {
     unsigned int i;
+    uint64_t stsc = rdtsc_ordered();
 
     /* Pages that are free now go to the domain sub-allocator. */
     for ( i = 0; i < nr_bootmem_regions; i++ )
@@ -1892,6 +1895,8 @@ void __init end_boot_allocator(void)
     if ( !dma_bitsize && (num_online_nodes() > 1) )
         dma_bitsize = arch_get_dma_bitsize();
 
+    myticks = rdtsc_ordered() - stsc;
+
     printk("Domain heap initialised");
     if ( dma_bitsize )
         printk(" DMA width %u bits", dma_bitsize);

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:40:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346276.572039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb7V-0002JN-Ef; Fri, 10 Jun 2022 09:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346276.572039; Fri, 10 Jun 2022 09:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzb7V-0002JG-9R; Fri, 10 Jun 2022 09:39:57 +0000
Received: by outflank-mailman (input) for mailman id 346276;
 Fri, 10 Jun 2022 09:39:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzb7T-0002JA-Fs
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:39:55 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 48014e42-e8a1-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:39:54 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB4166.eurprd04.prod.outlook.com (2603:10a6:209:4b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.19; Fri, 10 Jun
 2022 09:39:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 09:39:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48014e42-e8a1-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vkh4tr1/plfJqy3LdliUnxO2tTqttDlxwcCVi4YxlLKptDAULlumZKuTUvirU+93c9FsK6abvrtn1flloYm5KTtzGieErYxILNfcP8CiNcrXLzOf0NTd3imEKY74MB3xlZLjFMNUvCQ/fIFG5Be2SH8Nd2cpwUBgyg99oZUw4XAIf/t/d4Cf4tW+gaaw55Onuqurl1uV5ZIfcs+P0yBCSeL7tV0i/gs4SxGtvv+G9P+TNQYTQWmpOxz9dITFgHdseip7LOm34/TStn4aNnPmDgA1WggbW2io/8trZz0v3RLUKuDiKtrcPv+d3n1A9sgr53MmHIomlV8i/okIjDCueA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=irl7u4k7TRAFph339AjNMwuRM+W3vbYjB+GXmhTMj/Y=;
 b=BPAC29OZQ11WKAtPjgGYW517AUNUK12iZxA3sUt7L9f46DZjtmG/DqQtvRq6Pj3vtdUVdbkiSZLXB9y7/H7ma41jHY//ts7G/dJIc/FygKyPALuW8e1ih14rEVsJuEHAhxB65fm5R9O/WN8tjgi8QFY8XX4vDjDuHigCbEfMjHi9xVOhj3UN5zpegddloiiBWgSc4mMT2Ho1Mm1gZKRjV3MLgftWMPirIMdYSjA7Wd9UaKqJaa9YIOS6rRZbvDT6kBOy+ND/8IsBKkuGvRz4afhQofCDWAYR8Q+WdKps7+Xnfy3zLAXDJVINcI6iDhGkpKSeNMIK0SVRyW0yCd2tRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=irl7u4k7TRAFph339AjNMwuRM+W3vbYjB+GXmhTMj/Y=;
 b=mGPIqhv1zF5COMXoE3Kl8Oq4421rNil9l1S6iR+Ddu0+xs2m+Tl3+IxhF2kjA6L3IUQPkqJ0xEGvbnJuhryNVZB81OYK676tphDG+Y/BS+i0X3A4u3eG/k6b+3cX+7l3y4xxEqIod1mMCTXJV9gMOFZ6pt6c8cUAlUvQJtigCstUoKhQb6naTzqg1xE5zhRmF+45ASD0ZTtOt1bb2KzCFEtGPBKvajcvfokaLCofPtd5pBg/OZ7dIgcOe/uJ2bKjBnT/4tDNCxJaJF9H9GIO6LBn+L4ITlUfZspVx7JnJBgQOweDMUbuvMPMsOmNpybGBFK7itZx684ynnbgNPSxPQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e021461f-c21e-667f-d3a0-970b028be42c@suse.com>
Date: Fri, 10 Jun 2022 11:39:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [Question] Alternative Method for git send-email
Content-Language: en-US
To: boyoun.park@samsung.com
References: <CGME20220610093115epcms2p3017e473201e08cec7f3d8108d17199de@epcms2p3>
 <20220610093115epcms2p3017e473201e08cec7f3d8108d17199de@epcms2p3>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220610093115epcms2p3017e473201e08cec7f3d8108d17199de@epcms2p3>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0273.eurprd06.prod.outlook.com
 (2603:10a6:20b:45a::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7e27656c-08da-4b0f-4cdf-08da4ac52abf
X-MS-TrafficTypeDiagnostic: AM6PR04MB4166:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e27656c-08da-4b0f-4cdf-08da4ac52abf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 09:39:51.3789
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CNXzyNk8BouNmzIaBcpgVRmhWEKfbLLs0v18os3L5mjGTLeqGLjgO06KuTxJGTJihtiS02Bdv7FlTMQYlqyuLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB4166

On 10.06.2022 11:31, 박보윤 wrote:
> Hi, I want to contribute to Xen Project.
> 
> So I tried to follow the contribution guidelines from Xen Wiki.
> 
> https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
> 
> But I found that I cannot use an SMTP server directly from Linux due to a
> security policy at work.
> 
> Therefore, I cannot send patches through git send-email.
> 
> Can I ask whether there is an alternative to git send-email?

With a properly configured client (and the server side not fiddling with
mail content in undue ways) you can simply send patches manually. I do
so all the time. If you absolutely can't configure things such that
patches wouldn't be corrupted, you might need to resort to attaching
patches to your mails (but for the sake of people wanting to comment,
please nevertheless also inline the patch in such a case, pointing out
clearly that a well-formed patch is attached).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:41:38 2022
Message-ID: <d740022d-5e96-bd82-765a-9d5c517ad54c@arm.com>
Date: Fri, 10 Jun 2022 11:41:20 +0200
Subject: Re: [PATCH 2/3] xen/arm: gicv2: Rename
 gicv2_map_hwdown_extra_mappings
From: Michal Orzel <michal.orzel@arm.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
In-Reply-To: <ef705e00-17da-a8df-9a0f-27eb7ef686ed@xen.org>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-3-michal.orzel@arm.com>
 <ef705e00-17da-a8df-9a0f-27eb7ef686ed@xen.org>

Hi Julien,

On 10.06.2022 11:07, Julien Grall wrote:
> Hi,
> 
> On 10/06/2022 09:33, Michal Orzel wrote:
>> ... to gicv2_map_hwdom_extra_mappings as the former clearly contains
>> a typo.
>>
>> Fixes: 86b93e00c0b6 ("xen/arm: gicv2: Export GICv2m register frames to Dom0 by device tree")
> 
> NIT: In general, Fixes tags are used for bugs (i.e. Xen would not function properly without the change, which would then likely need backporting). Even if the name is incorrect, there is no bug here. So my preference is to drop this tag.
> 
Thanks, I will keep it in mind.

> Other than that:
> 
> Acked-by; Julien Grall <jgrall@amazon.com>
NIT: s/;/:

> 
> Cheers,
> 

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:44:36 2022
Message-ID: <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
Date: Fri, 10 Jun 2022 11:44:29 +0200
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
From: Jan Beulich <jbeulich@suse.com>
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
In-Reply-To: <20220610083358.101412-4-michal.orzel@arm.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>

On 10.06.2022 10:33, Michal Orzel wrote:
> All the members of struct tm are defined as integers but the format tags
> used in console driver for snprintf wrongly expect unsigned values. Fix
> the tags to expect integers.

Perhaps do things the other way around - convert field types to unsigned
unless negative values can be stored there? This would match our general
aim of using unsigned types when only non-negative values can be held in
variables / parameters / fields.

Jan

> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -844,7 +844,7 @@ static void printk_start_of_line(const char *prefix)
>              /* nothing */;
>          else if ( mode == TSM_DATE )
>          {
> -            snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
> +            snprintf(tstr, sizeof(tstr), "[%04d-%02d-%02d %02d:%02d:%02d] ",
>                       1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
>                       tm.tm_hour, tm.tm_min, tm.tm_sec);
>              break;
> @@ -852,7 +852,7 @@ static void printk_start_of_line(const char *prefix)
>          else
>          {
>              snprintf(tstr, sizeof(tstr),
> -                     "[%04u-%02u-%02u %02u:%02u:%02u.%03"PRIu64"] ",
> +                     "[%04d-%02d-%02d %02d:%02d:%02d.%03"PRIu64"] ",
>                       1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
>                       tm.tm_hour, tm.tm_min, tm.tm_sec, nsec / 1000000);
>              break;



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:50:34 2022
Message-ID: <322b83f3-830c-f330-5656-59d21da965c6@arm.com>
Date: Fri, 10 Jun 2022 11:50:13 +0200
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
From: Michal Orzel <michal.orzel@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
In-Reply-To: <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>
 <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>

Hi Jan,

On 10.06.2022 11:44, Jan Beulich wrote:
> On 10.06.2022 10:33, Michal Orzel wrote:
>> All the members of struct tm are defined as integers but the format tags
>> used in console driver for snprintf wrongly expect unsigned values. Fix
>> the tags to expect integers.
> 
> Perhaps do things the other way around - convert field types to unsigned
> unless negative values can be stored there? This would match our general
> aim of using unsigned types when only non-negative values can be held in
> variables / parameters / fields.
> 

The reason why I did not do this is to stay coherent with the Linux kernel (include/linux/time.h).
I believe the time.h code in Xen comes from Linux.

Given that the Linux time.h defines these fields as integers and uses the proper %d format specifiers (unlike Xen), I think my solution is better.

> Jan
Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:51:20 2022
Message-ID: <2ccd52a7-a5b2-c221-b847-ed0c9de2effd@suse.com>
Date: Fri, 10 Jun 2022 11:51:15 +0200
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
In-Reply-To: <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>
 <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>

On 10.06.22 11:44, Jan Beulich wrote:
> On 10.06.2022 10:33, Michal Orzel wrote:
>> All the members of struct tm are defined as integers but the format tags
>> used in console driver for snprintf wrongly expect unsigned values. Fix
>> the tags to expect integers.
> 
> Perhaps do things the other way around - convert field types to unsigned
> unless negative values can be stored there? This would match our general
> aim of using unsigned types when only non-negative values can be held in
> variables / parameters / fields.

Don't you think keeping struct tm in sync with the Posix definition should
be preferred here?


Juergen

--------------I7VTX00ZdxBSnJ0zsHMFowbi
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable


--------------I7VTX00ZdxBSnJ0zsHMFowbi--

--------------Qyj0ckMhSP7Xdp0pKbt0MfOr--

--------------JKDhM0E2Smrq7Y9s5N200KHP
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"


--------------JKDhM0E2Smrq7Y9s5N200KHP--


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:55:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346327.572099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbMN-0007by-TT; Fri, 10 Jun 2022 09:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346327.572099; Fri, 10 Jun 2022 09:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbMN-0007br-Qn; Fri, 10 Jun 2022 09:55:19 +0000
Received: by outflank-mailman (input) for mailman id 346327;
 Fri, 10 Jun 2022 09:55:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzbMM-0007bl-Ni
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:55:18 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e0b68c0-e8a3-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:55:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8379.eurprd04.prod.outlook.com (2603:10a6:10:241::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 10 Jun
 2022 09:55:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 09:55:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e0b68c0-e8a3-11ec-bd2c-47488cf2e6aa
Message-ID: <295e9c7e-e0de-bbd3-eec4-0864cb2ef086@suse.com>
Date: Fri, 10 Jun 2022 11:55:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>
 <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
 <2ccd52a7-a5b2-c221-b847-ed0c9de2effd@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2ccd52a7-a5b2-c221-b847-ed0c9de2effd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 10.06.2022 11:51, Juergen Gross wrote:
> On 10.06.22 11:44, Jan Beulich wrote:
>> On 10.06.2022 10:33, Michal Orzel wrote:
>>> All the members of struct tm are defined as integers but the format tags
>>> used in console driver for snprintf wrongly expect unsigned values. Fix
>>> the tags to expect integers.
>>
>> Perhaps do things the other way around - convert field types to unsigned
>> unless negative values can be stored there? This would match our general
>> aim of using unsigned types when only non-negative values can be held in
>> variables / parameters / fields.
> 
> Don't you think keeping struct tm in sync with the Posix definition should
> be preferred here?

Not necessarily, no. It's not just POSIX which has the (imo bad) habit of
using plain "int" even for values which can never go negative.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:56:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346335.572110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbND-00089I-6w; Fri, 10 Jun 2022 09:56:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346335.572110; Fri, 10 Jun 2022 09:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbND-00089B-3x; Fri, 10 Jun 2022 09:56:11 +0000
Received: by outflank-mailman (input) for mailman id 346335;
 Fri, 10 Jun 2022 09:56:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzbNB-0007bl-Lm
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:56:09 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061a.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8cef401a-e8a3-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:56:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8379.eurprd04.prod.outlook.com (2603:10a6:10:241::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 10 Jun
 2022 09:56:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 09:56:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cef401a-e8a3-11ec-bd2c-47488cf2e6aa
Message-ID: <99b1d19b-3704-5efa-9073-819327205a12@suse.com>
Date: Fri, 10 Jun 2022 11:56:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>
 <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
 <322b83f3-830c-f330-5656-59d21da965c6@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <322b83f3-830c-f330-5656-59d21da965c6@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 10.06.2022 11:50, Michal Orzel wrote:
> On 10.06.2022 11:44, Jan Beulich wrote:
>> On 10.06.2022 10:33, Michal Orzel wrote:
>>> All the members of struct tm are defined as integers but the format tags
>>> used in console driver for snprintf wrongly expect unsigned values. Fix
>>> the tags to expect integers.
>>
>> Perhaps do things the other way around - convert field types to unsigned
>> unless negative values can be stored there? This would match our general
>> aim of using unsigned types when only non-negative values can be held in
>> variables / parameters / fields.
>>
> 
> The reason why I did not do this is to stay coherent with the Linux kernel (include/linux/time.h).
> I believe the time.h code in Xen comes from Linux.
> 
> So given that the Linux time.h defines these fields as integers and uses the proper %d format specifiers (unlike Xen),
> I think my solution is better.

One can view it that way, sure. I don't. But I also won't nak this change.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 09:57:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 09:57:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346344.572121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbOQ-0000O2-OC; Fri, 10 Jun 2022 09:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346344.572121; Fri, 10 Jun 2022 09:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbOQ-0000Nv-L4; Fri, 10 Jun 2022 09:57:26 +0000
Received: by outflank-mailman (input) for mailman id 346344;
 Fri, 10 Jun 2022 09:57:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0XP=WR=citrix.com=prvs=1532263ae=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nzbOP-0000Nl-N8
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 09:57:25 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8cdd1ce-e8a3-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 11:57:23 +0200 (CEST)
Received: from mail-co1nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 05:57:19 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5532.namprd03.prod.outlook.com (2603:10b6:806:bf::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 09:57:18 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.014; Fri, 10 Jun 2022
 09:57:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8cdd1ce-e8a3-11ec-bd2c-47488cf2e6aa
Date: Fri, 10 Jun 2022 11:57:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] IOMMU/x86: work around bogus gcc12 warning in
 hvm_gsi_eoi()
Message-ID: <YqMVev4YlUiRTxc0@Air-de-Roger>
References: <52090c8d-fa21-6f53-c33b-776c12338f62@suse.com>
 <YqLwpGXxCHy5HJpg@Air-de-Roger>
 <26f619a4-d0a8-7895-2017-cda17526b48f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <26f619a4-d0a8-7895-2017-cda17526b48f@suse.com>
MIME-Version: 1.0

On Fri, Jun 10, 2022 at 09:29:44AM +0200, Jan Beulich wrote:
> On 10.06.2022 09:20, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 12:37:19PM +0200, Jan Beulich wrote:
> >> As per [1] the expansion of the pirq_dpci() macro causes a -Waddress
> >> controlled warning (enabled implicitly in our builds, if not by default)
> >> tying the middle part of the involved conditional expression to the
> >> surrounding boolean context. Work around this by introducing a local
> >> inline function in the affected source file.
> >>
> >> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102967
> >> ---
> >> This is intended to replace an earlier patch by Andrew [2], open-coding
> >> and then simplifying the macro in the one problematic place.
> >>
> >> Note that, with pirq_dpci() presently used solely in the one file being
> >> changed here, we could in principle also remove the #define and use just
> >> an inline(?) function in this file. But then the macro would need
> >> reinstating as soon as a use elsewhere would become necessary.
> 
> Did you read this before ...
> 
> >> As to the inline - I think it's warranted here, but it goes against our
> >> general policy of using inline only in header files. Hence I'd be okay
> >> to drop it to avoid controversy.
> >>
> >> [2] https://lists.xen.org/archives/html/xen-devel/2021-10/msg01635.html
> >>
> >> --- a/xen/drivers/passthrough/x86/hvm.c
> >> +++ b/xen/drivers/passthrough/x86/hvm.c
> >> @@ -25,6 +25,18 @@
> >>  #include <asm/hvm/support.h>
> >>  #include <asm/io_apic.h>
> >>  
> >> +/*
> >> + * Gcc12 takes issue with pirq_dpci() being used in boolean context (see gcc
> >> + * bug 102967). While we can't replace the macro definition in the header by an
> >> + * inline function, we can do so here.
> >> + */
> >> +static inline struct hvm_pirq_dpci *_pirq_dpci(struct pirq *pirq)
> >> +{
> >> +    return pirq_dpci(pirq);
> >> +}
> >> +#undef pirq_dpci
> >> +#define pirq_dpci(pirq) _pirq_dpci(pirq)
> > 
> > That's fairly ugly.  Seeing as pirq_dpci is only used in hvm.c, would
> > it make sense to just convert the macro to be a static inline in that
> > file? (and remove pirq_dpci() from irq.h).
> 
> ... saying so? IOW I'm not entirely opposed, but I'm a little afraid we might
> be setting ourselves up for later trouble. 

Sorry, I started replying yesterday but had to leave and left the reply
open.  When I came back this morning I only read the code, not the
commit message.

Hm, ideally we would then also move dpci_pirq() to hvm.c to match the
move of pirq_dpci(), but that's not possible because that helper has
callers outside of hvm.c.

We could always export the function from hvm.c if we gained outside
callers.  In any case, I don't want to block you further on this, so:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 10:23:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 10:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346355.572132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbnQ-0004HY-UH; Fri, 10 Jun 2022 10:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346355.572132; Fri, 10 Jun 2022 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzbnQ-0004HP-OS; Fri, 10 Jun 2022 10:23:16 +0000
Received: by outflank-mailman (input) for mailman id 346355;
 Fri, 10 Jun 2022 10:23:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LSau=WR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1nzbnP-0004Fi-2h
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 10:23:15 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5583b2bb-e8a7-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 12:23:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8397.eurprd04.prod.outlook.com (2603:10a6:20b:3b5::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 10:23:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.013; Fri, 10 Jun 2022
 10:23:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5583b2bb-e8a7-11ec-bd2c-47488cf2e6aa
Message-ID: <c61f607b-bfdd-3162-7b26-b4681b4cce59@suse.com>
Date: Fri, 10 Jun 2022 12:23:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George.Dunlap@citrix.com, roger.pau@citrix.com, Artem_Mygaiev@epam.com,
 Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com,
 xen-devel@lists.xenproject.org
References: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 10.06.2022 02:48, Stefano Stabellini wrote:
> +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> +     - Required
> +     - An identifier declared in an inner scope shall not hide an
> +       identifier declared in an outer scope
> +     - Using macros as macro parameters at invocation time is allowed,
> +       e.g. MAX(var0, MIN(var1, var2))

I think the connection between the example and the rule could be made
clearer, e.g. by adding "... even if both macros use identically named
local variables".

> +   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
> +     - Required
> +     - A loop counter shall not have essentially floating type

This looks to be missing "point"?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 10:49:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 10:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346367.572149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcCh-0007ZM-1O; Fri, 10 Jun 2022 10:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346367.572149; Fri, 10 Jun 2022 10:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcCg-0007ZF-UT; Fri, 10 Jun 2022 10:49:22 +0000
Received: by outflank-mailman (input) for mailman id 346367;
 Fri, 10 Jun 2022 10:49:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzcCg-0007Z5-Dd; Fri, 10 Jun 2022 10:49:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzcCg-0006hD-9L; Fri, 10 Jun 2022 10:49:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzcCf-0003sG-OE; Fri, 10 Jun 2022 10:49:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzcCf-0003Vm-Nk; Fri, 10 Jun 2022 10:49:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170905-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 170905: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-xtf-amd64-amd64-5:xtf/test-pv32pae-xsa-259:fail:regression
    xen-4.13-testing:test-xtf-amd64-amd64-5:leak-check/check:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-4.13-testing:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-raw:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-vhd:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-4.13-testing:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f9ae12fc103c7429d8ce6bc57b8cbcefdd71cd45
X-Osstest-Versions-That:
    xen=fe97133b5deef58bd1422f4d87821131c66b1d0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 10:49:21 +0000

flight 170905 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170905/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5       90 xtf/test-pv32pae-xsa-259 fail REGR. vs. 169240
 test-xtf-amd64-amd64-5       91 leak-check/check         fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 169240
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 169240
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 169240
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 169240
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 169240
 test-amd64-i386-libvirt-raw   8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-vhd        8 xen-boot                 fail REGR. vs. 169240
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 169240
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 169240
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 169240
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 169240
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 169240
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 169240
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 169240
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 169240
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 169240
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 169240
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 169240
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 169240
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 169240
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 169240
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 169240
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 169240
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 169240

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169240
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169240
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169240
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169240
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169240
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169240
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169240
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169240
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  f9ae12fc103c7429d8ce6bc57b8cbcefdd71cd45
baseline version:
 xen                  fe97133b5deef58bd1422f4d87821131c66b1d0e

Last test of basis   169240  2022-04-08 13:36:34 Z   62 days
Testing same since   170905  2022-06-09 15:37:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f9ae12fc103c7429d8ce6bc57b8cbcefdd71cd45
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:16:06 2022 +0200

    x86/pv: Track and flush non-coherent mappings of RAM
    
    There are legitimate uses of WC mappings of RAM, e.g. for DMA buffers with
    devices that make non-coherent writes.  The Linux sound subsystem makes
    extensive use of this technique.
    
    For such use cases, the guest's DMA buffer is mapped and consistently used as
    WC, and Xen doesn't interact with the buffer.
    
    However, a mischievous guest can use WC mappings to deliberately create
    non-coherency between the cache and RAM, and use this to trick Xen into
    validating a pagetable which isn't actually safe.
    
    Allocate a new PGT_non_coherent to track the non-coherency of mappings.  Set
    it whenever a non-coherent writeable mapping is created.  If the page is used
    as anything other than PGT_writable_page, force a cache flush before
    validation.  Also force a cache flush before the page is returned to the heap.
    
    This is CVE-2022-26364, part of XSA-402.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c1c9cae3a9633054b177c5de21ad7268162b2f2c
    master date: 2022-06-09 14:23:37 +0200
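    The track-and-flush scheme above can be sketched in miniature. Everything
    here (the bit name, the struct layout, a flush counter standing in for a
    real cache flush) is illustrative only and does not match Xen's actual
    page_info or PGT_* definitions:

```c
#include <stdbool.h>

#define PGT_writable_page  0x1ul
#define PGT_l1_page_table  0x2ul
#define PGT_non_coherent   0x100ul  /* illustrative bit, not Xen's layout */

struct page_info {
    unsigned long type;
    unsigned flushes;   /* stands in for a real cache flush of the page */
};

/* Record that a non-coherent (e.g. WC) writeable mapping was created. */
static void note_non_coherent_mapping(struct page_info *pg)
{
    pg->type |= PGT_non_coherent;
}

/* Before the page is used as anything other than PGT_writable_page,
 * force a cache flush so validation operates on what is really in RAM. */
static void validate_as(struct page_info *pg, unsigned long new_type)
{
    if ((pg->type & PGT_non_coherent) && new_type != PGT_writable_page)
        pg->flushes++;          /* cache flush before validation */
    pg->type = new_type;        /* the flag is dropped with the old type */
}
```

    The same flush-before-reuse rule would also apply when the page is freed
    back to the heap, per the commit message above.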

commit e8c04e468312713b5ad737e905494616f87f339f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:15:39 2022 +0200

    x86/amd: Work around CLFLUSH ordering on older parts
    
    On pre-CLFLUSHOPT AMD CPUs, CLFLUSH is weakly ordered with everything,
    including reads and writes to the address, and LFENCE/SFENCE instructions.
    
    This creates a multitude of problematic corner cases, laid out in the manual.
    Arrange to use MFENCE on both sides of the CLFLUSH to force proper ordering.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 062868a5a8b428b85db589fa9a6d6e43969ffeb9
    master date: 2022-06-09 14:23:07 +0200
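    As a rough illustration of the workaround, bracketing the flush with
    MFENCE on both sides might look like the following. This is a sketch
    using SSE2 intrinsics, not Xen's implementation; the checksum helper
    exists only to make the (data-preserving) effect observable:

```c
#include <stddef.h>
#include <stdint.h>
#if defined(__x86_64__) || defined(__i386__)
#include <emmintrin.h>   /* _mm_mfence(), _mm_clflush() */
#endif

/* CLFLUSH bracketed by MFENCE on both sides: on pre-CLFLUSHOPT AMD
 * parts, CLFLUSH alone is weakly ordered even against accesses to the
 * line being flushed, so fences are needed for a reliable writeback. */
static void flush_line_ordered(const void *p)
{
#if defined(__x86_64__) || defined(__i386__)
    _mm_mfence();       /* order the flush after all earlier accesses */
    _mm_clflush(p);     /* write back and evict the line containing p */
    _mm_mfence();       /* order all later accesses after the flush */
#else
    (void)p;            /* no-op on non-x86; this is only a sketch */
#endif
}

/* Fill a buffer, flush it line by line, and checksum it: flushing
 * changes only cache residency, never the data itself. */
static uint32_t fill_flush_sum(uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)i;
    for (size_t i = 0; i < len; i += 64)    /* assume 64-byte lines */
        flush_line_ordered(buf + i);
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
```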

commit 8d9f36132afd7c95c47809ee0e8e5c678b31f828
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:15:11 2022 +0200

    x86: Split cache_flush() out of cache_writeback()
    
    Subsequent changes will want a fully flushing version.
    
    Use the new helper rather than opencoding it in flush_area_local().  This
    resolves an outstanding issue where the conditional sfence is on the wrong
    side of the clflushopt loop.  clflushopt is ordered with respect to older
    stores, not to younger stores.
    
    Rename gnttab_cache_flush()'s helper to avoid colliding in name.
    grant_table.c can see the prototype from cache.h so the build fails
    otherwise.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 9a67ffee3371506e1cbfdfff5b90658d4828f6a2
    master date: 2022-06-09 14:22:38 +0200

commit fce392fa36c31f88be4f6b5f5821d244549cc825
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:14:39 2022 +0200

    x86: Don't change the cacheability of the directmap
    
    Changeset 55f97f49b7ce ("x86: Change cache attributes of Xen 1:1 page mappings
    in response to guest mapping requests") attempted to keep the cacheability
    consistent between different mappings of the same page.
    
    The reason wasn't described in the changelog, but it is understood to relate
    to a concern over machine check exceptions, owing to errata when using
    mixed cacheabilities.  It did this primarily by updating Xen's mapping of the
    page in the direct map when the guest mapped a page with reduced cacheability.
    
    Unfortunately, the logic didn't actually prevent mixed cacheability from
    occurring:
     * A guest could map a page normally, and then map the same page with
       different cacheability; nothing prevented this.
     * The cacheability of the directmap was always latest-takes-precedence in
       terms of guest requests.
     * Grant-mapped frames with lesser cacheability didn't adjust the page's
       cacheattr settings.
     * The map_domain_page() function still unconditionally created WB mappings,
       irrespective of the page's cacheattr settings.
    
    Additionally, update_xen_mappings() had a bug where the alias calculation was
    wrong for mfn's which were .init content, which should have been treated as
    fully guest pages, not Xen pages.
    
    Worse yet, the logic introduced a vulnerability whereby necessary
    pagetable/segdesc adjustments made by Xen in the validation logic could become
    non-coherent between the cache and main memory.  The CPU could subsequently
    operate on the stale value in the cache, rather than the safe value in main
    memory.
    
    The directmap contains primarily mappings of RAM.  PAT/MTRR conflict
    resolution is asymmetric, and generally for MTRR=WB ranges, PAT of lesser
    cacheability resolves to being coherent.  The special case is WC mappings,
    which are non-coherent against MTRR=WB regions (except for fully-coherent
    CPUs).
    
    Xen must not have any WC cacheability in the directmap, to prevent Xen's
    actions from creating non-coherency.  (Guest actions creating non-coherency is
    dealt with in subsequent patches.)  As all memory types for MTRR=WB ranges
    inter-operate coherently, leave Xen's directmap mappings as WB.
    
    Only PV guests with access to devices can use reduced-cacheability mappings to
    begin with, and they're trusted not to mount DoSs against the system anyway.
    
    Drop PGC_cacheattr_{base,mask} entirely, and the logic to manipulate them.
    Shift the later PGC_* constants up, to gain 3 extra bits in the main reference
    count.  Retain the check in get_page_from_l1e() for special_pages() because a
    guest has no business using reduced cacheability on these.
    
    This reverts changeset 55f97f49b7ce6c3520c555d19caac6cf3f9a5df0
    
    This is CVE-2022-26363, part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: ae09597da34aee6bc5b76475c5eea6994457e854
    master date: 2022-06-09 14:22:08 +0200

commit c7da430b21e0f6c48d81874a465f94221163beba
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:14:12 2022 +0200

    x86/page: Introduce _PAGE_* constants for memory types
    
    ... rather than opencoding the PAT/PCD/PWT attributes in __PAGE_HYPERVISOR_*
    constants.  These are going to be needed by forthcoming logic.
    
    No functional change.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1be8707c75bf4ba68447c74e1618b521dd432499
    master date: 2022-06-09 14:21:38 +0200

commit 7669737d7da5790e66ffb670576175c2465ab09c
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:13:54 2022 +0200

    x86/pv: Fix ABAC cmpxchg() race in _get_page_type()
    
    _get_page_type() suffers from a race condition where it incorrectly assumes
    that because 'x' was read and a subsequent cmpxchg() succeeds, the type
    cannot have changed in-between.  Consider:
    
    CPU A:
      1. Creates an L2e referencing pg
         `-> _get_page_type(pg, PGT_l1_page_table), sees count 0, type PGT_writable_page
      2.     Issues flush_tlb_mask()
    CPU B:
      3. Creates a writeable mapping of pg
         `-> _get_page_type(pg, PGT_writable_page), count increases to 1
      4. Writes into new mapping, creating a TLB entry for pg
      5. Removes the writeable mapping of pg
         `-> _put_page_type(pg), count goes back down to 0
    CPU A:
      7.     Issues cmpxchg(), setting count 1, type PGT_l1_page_table
    
    CPU B now has a writeable mapping to pg, which Xen believes is a pagetable and
    suitably protected (i.e. read-only).  The TLB flush in step 2 must be deferred
    until after the guest is prohibited from creating new writeable mappings,
    which is after step 7.
    
    Defer all safety actions until after the cmpxchg() has successfully taken the
    intended typeref, because that is what prevents concurrent users from using
    the old type.
    
    Also remove the early validation for writeable and shared pages.  This removes
    race conditions where one half of a parallel mapping attempt can return
    successfully before:
     * The IOMMU pagetables are in sync with the new page type
     * Writeable mappings to shared pages have been torn down
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 8cc5036bc385112a82f1faff27a0970e6440dfed
    master date: 2022-06-09 14:21:04 +0200
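    The fixed ordering can be caricatured with C11 atomics: take the typeref
    with the cmpxchg first, and only then perform the dependent safety
    action. The names and bit layout here (take_type, tlb_flushes, a 4-bit
    count) are invented for the sketch and do not match _get_page_type():

```c
#include <stdatomic.h>
#include <stdbool.h>

#define COUNT_MASK  0x0fu          /* invented layout: low 4 bits count */
#define TYPE_MASK   0xf0u          /* high bits: the page's type */

static unsigned tlb_flushes;       /* stands in for flush_tlb_mask() */

static bool take_type(atomic_uint *typeinfo, unsigned wanted_type)
{
    unsigned x = atomic_load(typeinfo);

    for (;;) {
        unsigned nx;

        if ((x & COUNT_MASK) == 0)
            nx = wanted_type | 1;       /* count is 0: switch the type */
        else if ((x & TYPE_MASK) != wanted_type)
            return false;               /* busy with a different type */
        else
            nx = x + 1;                 /* same type: bump the count */

        /* It is the successful cmpxchg that excludes concurrent users
         * of the old type, so nothing relying on that exclusion (such
         * as the TLB flush) may be done before this point. */
        if (atomic_compare_exchange_weak(typeinfo, &x, nx)) {
            if ((x & TYPE_MASK) != wanted_type)
                tlb_flushes++;  /* flush stale writeable TLB entries */
            return true;
        }
        /* cmpxchg failed and reloaded x; retry with the fresh value. */
    }
}
```

    In the buggy version, the equivalent of the tlb_flushes step happened
    before the cmpxchg, which is exactly the window steps 3-5 above exploit.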

commit 3826ba5f8271676c1569588abb32d960e5882e54
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 17:13:30 2022 +0200

    x86/pv: Clean up _get_page_type()
    
    Various fixes for clarity, ahead of making complicated changes.
    
     * Split the overflow check out of the if/else chain for type handling, as
       it's somewhat unrelated.
     * Comment the main if/else chain to explain what is going on.  Adjust one
       ASSERT() and state the bit layout for validate-locked and partial states.
     * Correct the comment about TLB flushing, as it's backwards.  The problem
       case is when writeable mappings are retained to a page becoming read-only,
       as it allows the guest to bypass Xen's safety checks for updates.
     * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
       valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
       all reads as explicitly volatile.  The only thing preventing the validated
       wait-loop being infinite is the compiler barrier hidden in cpu_relax().
     * Replace one page_get_owner(page) with the already-calculated 'd' in
       scope.
    
    No functional change.
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
    master date: 2022-06-09 14:20:36 +0200
(qemu changes not included)
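For reference, the ACCESS_ONCE() idiom mentioned in the last commit forces
the compiler to emit exactly one memory access per use, so a wait loop
re-reads memory on every iteration. A minimal rendition in the common
Linux/Xen style (shown here purely as illustration):

```c
/* Force a single, non-elidable read or write of an object by viewing
 * it through a volatile-qualified pointer (GNU C __typeof__). */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static int read_flag(const int *flag)
{
    /* Without a volatile access, the compiler may hoist this read out
     * of a spin loop and keep testing a stale register value forever;
     * in Xen only the barrier hidden in cpu_relax() otherwise
     * prevents that. */
    return ACCESS_ONCE(*flag);
}
```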


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 11:01:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 11:01:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346380.572163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcOc-0001lV-BL; Fri, 10 Jun 2022 11:01:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346380.572163; Fri, 10 Jun 2022 11:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcOc-0001lO-7K; Fri, 10 Jun 2022 11:01:42 +0000
Received: by outflank-mailman (input) for mailman id 346380;
 Fri, 10 Jun 2022 11:01:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d6gm=WR=citrix.com=prvs=15368b7f5=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1nzcOb-0001lI-AO
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 11:01:41 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b2b89d59-e8ac-11ec-8179-c7c2a468b362;
 Fri, 10 Jun 2022 13:01:39 +0200 (CEST)
Received: from mail-dm6nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 07:01:31 -0400
Received: from DM5PR03MB3386.namprd03.prod.outlook.com (2603:10b6:4:46::36) by
 PH7PR03MB6942.namprd03.prod.outlook.com (2603:10b6:510:157::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 11:01:29 +0000
Received: from DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::a932:ea60:fb82:12b7]) by DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::a932:ea60:fb82:12b7%7]) with mapi id 15.20.5332.015; Fri, 10 Jun 2022
 11:01:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2b89d59-e8ac-11ec-8179-c7c2a468b362
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654858899;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=0PpqRd62Ww4ABzLBXizCPsL0GJo+mfrFrxck1m5u3Ew=;
  b=Tnr1Ko8xk4q71Kk1n0mR8ZKMf0i5iETqL9ZenkUT6wrZp336XyaYvRpX
   gSeJuN7Wh9vUTAnuj+d+btEKicnDXokKeeRiFnLe330zgFEfai/ni+8ak
   z1nM5H7XmPG86ZX/QiBsjRz1DR0cJGhUzbRuYOvDb5/85JW4jeaDyugXL
   c=;
X-IronPort-RemoteIP: 104.47.58.101
X-IronPort-MID: 72655863
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,290,1647316800"; 
   d="scan'208";a="72655863"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mnJWOGppM/8HnHM2TIz0IVluyRafwnjXKSw0pE480zomsqcA8X+j6KseuLyf/U3xwBDFGZYUL0PEJ160bCVJQTmOlj5F8xiSCmqaauGCUPlQkmxeSB7/TpsXcWp9tu/BlINsokBFrh+2fDY0uOz78zwCFeQNhO2eva4I6ovTfPkaZQTs5r5xbAheVrBgjrktEvfiz5ipACJToJ48fcu3xJQ1eB7PPhRu4HBKL28KX4zeXFvajMpNcG4pB0ukxbEvrAVYX32HZiWleYclrvczbFKvuv/obICUtRyTt8IuYyC8mF5cpoQfUFW7eU2nZR2qxsft5Qwuy4WnXtMVVSvAKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0PpqRd62Ww4ABzLBXizCPsL0GJo+mfrFrxck1m5u3Ew=;
 b=Ry+DhEFypzGw/UWCKq2rx5nKjtMYdsOCURPhOI6qvPs6HvDOT3zY3usXGeuSHQRXAHBmDav0O2taUG2sXKgKVyPNnBePgQ3VXprvd7JcaP8qXGbXVcMiAeHwOMyYYsRwykEZbeT6L+Vc7Hul/vYVK1aaCXacPO5F1dDRv2ACLbArSVEIuCT+uMVx0l5BJib0fadbdcC4qQthGV095STlPfUw39U9dOakQKAI30SL6qnA/4C/Agn5rgB0Vra0hN6mJCr+sagS3KB1HpWxBVa0tXnTkWUBok+dxCOtLgw9BT0dfr/kQGf/IS1dbQO0M4VcXxsHXs1JJbyN/EP+s1sZ7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0PpqRd62Ww4ABzLBXizCPsL0GJo+mfrFrxck1m5u3Ew=;
 b=fgc0JoyKXsEY2UF6L0zgyT+1PtyLacS+2P3mgGOtgzURz8UN7u7/NiqzJraFhaMeEzU3zbsAR7cBjarHtA8l2MQxg0qRA19a0lhhSm+Qsy99Dm7doUeIW6eyuqEwP9v+RUOXXIKPrPxhojointn3I/pSZQfVtB0RGgIWB5IUylw=
From: Jane Malalane <Jane.Malalane@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3] x86/hvm: Widen condition for is_hvm_pv_evtchn_domain()
 and report fix in CPUID
Thread-Topic: [PATCH v3] x86/hvm: Widen condition for
 is_hvm_pv_evtchn_domain() and report fix in CPUID
Thread-Index: AQHYarsTV0eGhfiv2kicWrZcofLe1a0uLGIAgBpxAoA=
Date: Fri, 10 Jun 2022 11:01:29 +0000
Message-ID: <6f038c14-fdc4-4271-5a46-bd676d3dda02@citrix.com>
References: <20220518132714.5557-1-jane.malalane@citrix.com>
 <27a9ae9e-07fa-8300-d5b9-f9a88e4a1754@suse.com>
In-Reply-To: <27a9ae9e-07fa-8300-d5b9-f9a88e4a1754@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0bf07972-93c2-471f-d6d0-08da4ad0924e
x-ms-traffictypediagnostic: PH7PR03MB6942:EE_
Content-Type: text/plain; charset="utf-8"
Content-ID: <25658EDF9923ED41A019F7DA828C501A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM5PR03MB3386.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0bf07972-93c2-471f-d6d0-08da4ad0924e
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2022 11:01:29.4327
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Wdn6wHWRyioI4vUu3akkqAougZaWroCHTqk1YMet0ibGHoopHr4AvWAYKKEZepMLrWT6GMgG/fbxjfCBUUUwl9FrH4hCZfoHJp37ufK+3f0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB6942

On 24/05/2022 16:14, Jan Beulich wrote:
> On 18.05.2022 15:27, Jane Malalane wrote:
>> --- a/xen/arch/x86/include/asm/domain.h
>> +++ b/xen/arch/x86/include/asm/domain.h
>> @@ -14,8 +14,14 @@
>>   
>>   #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
>>   
>> +/*
>> + * Set to true if either the global vector-type callback or per-vCPU
>> + * LAPIC vectors are used. Assume all vCPUs will use
>> + * HVMOP_set_evtchn_upcall_vector as long as the initial vCPU does.
>> + */
>>   #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
>> -        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
>> +        ((d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector || \
>> +         (d)->vcpu[0]->arch.hvm.evtchn_upcall_vector))
>>   #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
> 
> I continue to think that with the vCPU0 dependency added to
> is_hvm_pv_evtchn_domain(), is_hvm_pv_evtchn_vcpu() should either
> be adjusted as well (to check the correct vCPU's field) or be
> deleted (and the sole caller be replaced).
> 
> Jan

I will replace it in a newer version of the patch.

Thank you.

Jane.


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 11:07:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 11:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346389.572174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcUM-0002hw-0T; Fri, 10 Jun 2022 11:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346389.572174; Fri, 10 Jun 2022 11:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcUL-0002hp-Th; Fri, 10 Jun 2022 11:07:37 +0000
Received: by outflank-mailman (input) for mailman id 346389;
 Fri, 10 Jun 2022 11:07:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d6gm=WR=citrix.com=prvs=15368b7f5=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1nzcUK-0002hi-P0
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 11:07:36 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 87618fd6-e8ad-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 13:07:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87618fd6-e8ad-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654859255;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=TIIlT2YCZV5kt/tlpkQ71ZA41ADB8x6cU93JUDAjEZA=;
  b=UXIOypdIC269byQrQZ0czCzXxbgNeT5YlTJCzUZb0KHQ5Shl8cwG+zd/
   R5W7h8+VF6ly7jfXvA7MHtHG+ANZPSEAp9p2GAgN8pbizMGHq/cARH6lx
   QWerTWZ4SaWCuspx0yQ3JP4/s71P8czRSxFjXjOWRO6RduQaWAAxuAJju
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 75863028
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,290,1647316800"; 
   d="scan'208";a="75863028"
From: Jane Malalane <jane.malalane@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jane Malalane <jane.malalane@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4] x86/hvm: Widen condition for is_hvm_pv_evtchn_domain() and report fix in CPUID
Date: Fri, 10 Jun 2022 12:07:04 +0100
Message-ID: <20220610110704.29039-1-jane.malalane@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Have is_hvm_pv_evtchn_domain() return true for vector callbacks for
evtchn delivery set up on a per-vCPU basis via
HVMOP_set_evtchn_upcall_vector.

Assume that if vCPU0 uses HVMOP_set_evtchn_upcall_vector, all
remaining vCPUs will too; thus remove is_hvm_pv_evtchn_vcpu() and
replace its sole caller with is_hvm_pv_evtchn_domain().

is_hvm_pv_evtchn_domain() returning true is a prerequisite for setting
up physical-IRQ-to-event-channel mappings. Therefore, also add a CPUID
bit so that guests know whether the check in is_hvm_pv_evtchn_domain()
will fail when only HVMOP_set_evtchn_upcall_vector is used. This
matters for guests that route PIRQs over event channels, since
is_hvm_pv_evtchn_domain() is a condition in physdev_map_pirq().

The CPUID bit is named quite generically, referring only to upcall
support being available, so that the define name doesn't become
overly long.

A guest that doesn't care about physical interrupts routed over event
channels can just test for the availability of the hypercall directly
(HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.
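
For illustration, the guest-side check could be sketched as below. This
is only a sketch: the helper name is hypothetical, and only the
XEN_HVM_CPUID_UPCALL_VECTOR bit and its placement in EAX of the
HVM-specific leaf come from this patch.

```c
#include <stdint.h>

/* Bit added by this patch to EAX of the HVM-specific Xen CPUID leaf. */
#define XEN_HVM_CPUID_UPCALL_VECTOR    (1u << 6)

/*
 * Hypothetical helper: given the EAX value of the HVM-specific Xen
 * CPUID leaf (base + 4), report whether per-vCPU upcall vectors are
 * known to work with PIRQs routed over event channels.
 */
static int xen_upcall_vector_works_with_pirqs(uint32_t hvm_leaf_eax)
{
    return (hvm_leaf_eax & XEN_HVM_CPUID_UPCALL_VECTOR) != 0;
}
```

A guest that routes PIRQs over event channels would feed this the EAX
value read from leaf base + 4 before relying on
HVMOP_set_evtchn_upcall_vector; other guests can skip the CPUID check
entirely, as described above.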

Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
---
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v4:
 * Remove is_hvm_pv_evtchn_vcpu and replace sole caller.

v3:
 * Improve commit message and title.

v2:
 * Since the naming of the CPUID bit is quite generic, better explain
   when it should be checked for, in code comments and commit message.
---
 xen/arch/x86/hvm/irq.c              | 2 +-
 xen/arch/x86/include/asm/domain.h   | 9 +++++++--
 xen/arch/x86/traps.c                | 6 ++++++
 xen/include/public/arch-x86/cpuid.h | 5 +++++
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 5a7f39b54f..19252448cb 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -325,7 +325,7 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
 
         vlapic_set_irq(vcpu_vlapic(v), vector, 0);
     }
-    else if ( is_hvm_pv_evtchn_vcpu(v) )
+    else if ( is_hvm_pv_evtchn_domain(v->domain) )
         vcpu_kick(v);
     else if ( v->vcpu_id == 0 )
         hvm_set_callback_irq_level(v);
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 35898d725f..dcd221cc6f 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -14,9 +14,14 @@
 
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 
+/*
+ * Set to true if either the global vector-type callback or per-vCPU
+ * LAPIC vectors are used. Assume all vCPUs will use
+ * HVMOP_set_evtchn_upcall_vector as long as the initial vCPU does.
+ */
 #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
-        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
-#define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
+        ((d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector || \
+         (d)->vcpu[0]->arch.hvm.evtchn_upcall_vector))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
 #define VCPU_TRAP_NONE         0
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 25bffe47d7..1a7f9df067 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1152,6 +1152,12 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         res->a |= XEN_HVM_CPUID_DOMID_PRESENT;
         res->c = d->domain_id;
 
+        /*
+         * Per-vCPU event channel upcalls are implemented and work
+         * correctly with PIRQs routed over event channels.
+         */
+        res->a |= XEN_HVM_CPUID_UPCALL_VECTOR;
+
         break;
 
     case 5: /* PV-specific parameters */
diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index f2b2b3632c..c49eefeaf8 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -109,6 +109,11 @@
  * field from 8 to 15 bits, allowing to target APIC IDs up 32768.
  */
 #define XEN_HVM_CPUID_EXT_DEST_ID      (1u << 5)
+/*
+ * Per-vCPU event channel upcalls work correctly with physical IRQs
+ * bound to event channels.
+ */
+#define XEN_HVM_CPUID_UPCALL_VECTOR    (1u << 6)
 
 /*
  * Leaf 6 (0x40000x05)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 11:33:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 11:33:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346402.572197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcth-0006bx-9F; Fri, 10 Jun 2022 11:33:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346402.572197; Fri, 10 Jun 2022 11:33:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzcth-0006bq-6A; Fri, 10 Jun 2022 11:33:49 +0000
Received: by outflank-mailman (input) for mailman id 346402;
 Fri, 10 Jun 2022 11:33:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzctf-0006bg-UJ; Fri, 10 Jun 2022 11:33:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzctf-0007Ru-T8; Fri, 10 Jun 2022 11:33:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzctf-0005WK-GX; Fri, 10 Jun 2022 11:33:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzctf-0005iL-G8; Fri, 10 Jun 2022 11:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=plAp/u0VwfbrD95IS7LQV7k3C/QvbxkL00N3KWbN8Og=; b=1MW7XHSfdZ+HTgL30wEdUqu32J
	zG1hXrBF/+X5NMyGr791KOsbuY4EKoQ6VuoSgTYXlrCHE8hzbxIZM8EGU2r1M6BCQJnNL9fGK+1Qq
	gYlv5l2DF7g6hKB/j5SMiSWFJ+BRsmmdcQSq7uv4UyUg/WmWaJJYbYNKaNSUvu4a5H8E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170919-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170919: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e7abb94d1fb8a0e7725b983bbf5ab1334afe7ed1
X-Osstest-Versions-That:
    ovmf=ff36b2550f94dc5fac838cf298ae5a23cfddf204
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 11:33:47 +0000

flight 170919 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170919/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e7abb94d1fb8a0e7725b983bbf5ab1334afe7ed1
baseline version:
 ovmf                 ff36b2550f94dc5fac838cf298ae5a23cfddf204

Last test of basis   170885  2022-06-08 12:11:46 Z    1 days
Testing same since   170919  2022-06-10 08:10:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Dong <eric.dong@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ff36b2550f..e7abb94d1f  e7abb94d1fb8a0e7725b983bbf5ab1334afe7ed1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 13:36:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 13:36:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346454.572250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzeoO-00055q-HH; Fri, 10 Jun 2022 13:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346454.572250; Fri, 10 Jun 2022 13:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzeoO-00055j-EM; Fri, 10 Jun 2022 13:36:28 +0000
Received: by outflank-mailman (input) for mailman id 346454;
 Fri, 10 Jun 2022 13:36:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzeoN-00055Z-PJ; Fri, 10 Jun 2022 13:36:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzeoN-00016A-Me; Fri, 10 Jun 2022 13:36:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzeoN-00033A-Fd; Fri, 10 Jun 2022 13:36:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzeoN-0001Ay-F8; Fri, 10 Jun 2022 13:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UHqWA3YhI95TFMu2yZhOTT6T5q+mMdD8flTVjql5ncU=; b=p/zOlUM0mvL7Fdto/5vmLkbnzW
	mzWjw1JUARjE4ZdvqlpMNhOf6fiQxVfpa4S9JgSULUuKQH+9oYDFLh0KhC04hcT4XwZ8RQvT/1+xo
	/mV8/IAvy1DGKdGuFWsYETBE/LW/pSHuNrEOJJnuEXhGUxtx4/xc2boqUgYSgKqZY4wA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170924-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170924: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b8bc4588b32e8a40354defac29ceb9c90e570af8
X-Osstest-Versions-That:
    xen=c1c9cae3a9633054b177c5de21ad7268162b2f2c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 13:36:27 +0000

flight 170924 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170924/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b8bc4588b32e8a40354defac29ceb9c90e570af8
baseline version:
 xen                  c1c9cae3a9633054b177c5de21ad7268162b2f2c

Last test of basis   170900  2022-06-09 13:01:48 Z    1 days
Testing same since   170924  2022-06-10 09:10:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c1c9cae3a9..b8bc4588b3  b8bc4588b32e8a40354defac29ceb9c90e570af8 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 14:01:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 14:01:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346471.572267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzfCN-0000WV-Jx; Fri, 10 Jun 2022 14:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346471.572267; Fri, 10 Jun 2022 14:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzfCN-0000WO-H2; Fri, 10 Jun 2022 14:01:15 +0000
Received: by outflank-mailman (input) for mailman id 346471;
 Fri, 10 Jun 2022 14:01:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FaZi=WR=citrix.com=prvs=15310cb4b=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nzfCM-0000WI-5K
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 14:01:14 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c8733a92-e8c5-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 16:01:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8733a92-e8c5-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654869672;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=3c6CS2ZBnzD1lIFKytgX/aAl7KGBQiyYw4coSz1P0mQ=;
  b=gXZB/XYyp/+0j3W35VdDy4sRLQy2xmpE7tftNVALEXrtZP8gcfmrBFIH
   v+hvZpa5fP+u5wcfT51VbBfGlO+DQ9c1bB1Yp8b57cQourndlslPwCGGD
   ouht/BsGze4ILxdT0+yHswFbrdP0W9Bw8/qbzBieaP06VLDlKgUXfJqAb
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73168632
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
Date: Fri, 10 Jun 2022 15:00:58 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/3] tools/xl: Allow specifying JSON for domain
 configuration file format
Message-ID: <YqNOmkeap+zzg3C5@perard.uk.xensource.com>
References: <cover.1651285313.git.ehem+xen@m5p.com>
 <9aa6160b2664a52ff778fad67c366d67d3a0f8ab.1651285313.git.ehem+xen@m5p.com>
 <Yoeh3nMNW0AfcHr/@perard.uk.xensource.com>
 <Ypa/+X7FQT2WaX12@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <Ypa/+X7FQT2WaX12@mattapan.m5p.com>

On Tue, May 31, 2022 at 06:25:13PM -0700, Elliott Mitchell wrote:
> On Fri, May 20, 2022 at 03:12:46PM +0100, Anthony PERARD wrote:
> > On Tue, Apr 19, 2022 at 06:23:41PM -0700, Elliott Mitchell wrote:
> > > JSON is currently used when saving domains to mass storage.  Being able
> > > to use JSON as the normal input to `xl create` has potential to be
> > > valuable.  Add the functionality.
> > > 
> > > Move the memset() earlier so as to allow use of the structure sooner.
> > > 
> > > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> > 
> > So, I gave this a try, creating a guest from a json config, and it
> > fails very early with "Unknown guest type".
> > 
> > Have you actually tried to create a guest from config file written in
> > json?
> > 
> > Also, this would need documentation about the new option and about
> > the format. The man page needs to be edited.
> > 
> > An example of a config file written in json would be nice as well.
> 
> I'll be trying for these at some point, but no timeframe yet.  This was
> an idea which occurred to me when looking at things.  I'm wavering on
> whether this is the way to go...
> 
> The real goal is that I would like to generate a replacement for the
> `xendomains` init script.  While functional, the script is woefully
> inadequate for anything other than the tiniest installation.
> 
> Notably there can be ordering constraints for start/shutdown (worse,
> those could be distinct).  One might also wish different strategies for
> different domains (some get saved to disk on reboot, some might get
> shutdown/restarted).

Is this something that `libvirt` or some other toolstack could help
with? Or maybe you are looking for something that has a small footprint
on the system and just runs once at boot and at shutdown.

> For some of the configuration for this, adding to domain.cfg files makes
> sense.  This, though, raises the issue of what the extra data should
> look like.

Maybe `xl` could be taught to ignore options that have a prefix like
"xendomain_", although I think at the moment it already ignores
everything it doesn't know.
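
As an illustration of that idea -- the prefixed keys below are entirely
hypothetical, not an existing xl feature -- a domain.cfg could carry the
extra data alongside the standard options:

```
# Standard xl options
name = "guest1"
memory = 2048

# Hypothetical prefixed options for a xendomains replacement; xl would
# be taught to skip anything starting with "xendomain_".
xendomain_start_order = 10
xendomain_shutdown_action = "save"
```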

> I'm oscillating between adding a section in something libxl's parser
> takes as a comment, versus adding a configuration option to domain.cfg

A comment section could work too, I guess; there's one for the `sysv`
init system which describes dependencies as comments.
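
For comparison, the `sysv`/LSB convention mentioned here encodes
dependencies in a structured comment block at the top of the init
script (values illustrative, loosely modelled on typical xendomains
scripts):

```
### BEGIN INIT INFO
# Provides:          xendomains
# Required-Start:    $syslog $remote_fs xencommons
# Required-Stop:     $syslog $remote_fs xencommons
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start/stop secondary Xen domains
### END INIT INFO
```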

> (libxl's parser ignores unknown sections, which is not entirely good!).
> JSON's structure would be good for an addition, but JSON comes with its
> own downsides.
> 
> Most likely such a thing would be implemented in Python.  It needs a
> bit more math than shell is good for.

If you plan to convert the `xendomains` init script to Python, I don't
think that would be a good idea, as it is probably better not to add a
dependency to an init script that has been a shell script for a while.
But introducing a new utility in Python or another language might be fine.

Cheers,

-- 
Anthony PERARD
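
(For reference, the JSON format libxl already uses when saving domains
-- the same shape `xl list --long` prints -- looks roughly like the
fragment below; all field values here are illustrative only:)

```
{
    "c_info": {
        "type": "pv",
        "name": "guest1"
    },
    "b_info": {
        "max_vcpus": 2,
        "max_memkb": 2097152
    }
}
```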


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 15:07:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 15:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346493.572305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzgE8-000084-V2; Fri, 10 Jun 2022 15:07:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346493.572305; Fri, 10 Jun 2022 15:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzgE8-00007x-RI; Fri, 10 Jun 2022 15:07:08 +0000
Received: by outflank-mailman (input) for mailman id 346493;
 Fri, 10 Jun 2022 15:07:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0XP=WR=citrix.com=prvs=1532263ae=roger.pau@srs-se1.protection.inumbo.net>)
 id 1nzgE7-00007q-Nb
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 15:07:08 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc08f6b6-e8ce-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 17:07:05 +0200 (CEST)
Received: from mail-bn7nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 11:07:02 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by BL1PR03MB5990.namprd03.prod.outlook.com (2603:10b6:208:313::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Fri, 10 Jun
 2022 15:06:59 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.014; Fri, 10 Jun 2022
 15:06:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc08f6b6-e8ce-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654873625;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=gLjesH+T+vskWvkJfj6lxRgqCjz0WxAY6hopuCRcj3Q=;
  b=FRGkTXG698kcW1zeKubHvEJRM+JVlja17SD46WCcurahR7oFSPmMGTsg
   BUMq2HQEVujW6AhI5Ga/uXK4NjfeqZuIhpOunUs7IjEok5BzWdG1Slo60
   6SJHsTCYRAsWEUG+ZS18JAExgggIL/xMEOFshrGQXFxv54XPMlGJcIArL
   E=;
X-IronPort-RemoteIP: 104.47.70.103
X-IronPort-MID: 73336691
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wDxbl4piy7/vppSjjPgZ0j1j3gPTXxcYrvotx+cUcJA=;
 b=gvm7D6vXLJqqZenfA4FJULdAgSALwOpKBWcJAGqTdUeuPvST78UjqRuPxoxn62WQNqyfHPDp6Zr7+3Q1u6xZDHuI8Wz5aRFuoGUIOJH6QKV02/7N7eyQ9Tk8zZHDzsO10nsBgBr8JOwsAVQs/Ts+Q03FvlZmcf0oq7LR53nVlT8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/console: do not drop serial output from the hardware domain
Date: Fri, 10 Jun 2022 17:06:51 +0200
Message-Id: <20220610150651.29933-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0043.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ac::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 42daa9e5-950b-487f-a280-08da4af2dd9b
X-MS-TrafficTypeDiagnostic: BL1PR03MB5990:EE_
X-Microsoft-Antispam-PRVS:
	<BL1PR03MB5990F978482594853C9137638FA69@BL1PR03MB5990.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 42daa9e5-950b-487f-a280-08da4af2dd9b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 15:06:58.9195
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aDRA+3F3NB2LWckew0C76ptc2IsOsIqPY26vCrNNHrS0xhPerO9TUYgh7OEWqweqqadSxAyppEnqK6i9e5Y3nQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB5990

Prevent dropping console output from the hardware domain, since it's
likely important to have all of its output if the boot fails, without
having to resort to sync_console (which also affects the output from
other guests).

Do so by pairing console_serial_puts() with
serial_{start,end}_log_everything(), so that no output is dropped.

Note that these calls are placed inside a section already protected by
console_lock, so there are no concurrent callers that could abuse the
log-everything setting.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/char/console.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index f9937c5134..13207f4d88 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -614,7 +614,10 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
             /* Use direct console output as it could be interactive */
             spin_lock_irq(&console_lock);
 
+            serial_start_log_everything(sercon_handle);
             console_serial_puts(kbuf, kcount);
+            serial_end_log_everything(sercon_handle);
+
             video_puts(kbuf, kcount);
 
 #ifdef CONFIG_X86
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 15:12:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 15:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346502.572316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzgJF-0001Xl-Ik; Fri, 10 Jun 2022 15:12:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346502.572316; Fri, 10 Jun 2022 15:12:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzgJF-0001Xe-Ev; Fri, 10 Jun 2022 15:12:25 +0000
Received: by outflank-mailman (input) for mailman id 346502;
 Fri, 10 Jun 2022 15:12:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FaZi=WR=citrix.com=prvs=15310cb4b=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1nzgJD-0001XY-V0
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 15:12:23 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b898bafb-e8cf-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 17:12:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b898bafb-e8cf-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654873942;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Z5G5XG2PyEjAN59F561TtSE7MIpzTyd/w3HSrsxR6tg=;
  b=DIXpr4IvmdtvB+tSLzpLYALoHcqaUkeLD27nOhhk+DO8/wyY7Gj2ewG8
   IksaNJPrn4oNBePubKBznRMGEW/HZa2fmcMjnYY7+zOms73TaD6M+E9kc
   NrzxSQqRc6lU8Ja/dwSyAGWw+VNXOdfhUUlWsIXiGXo/QVzyRgHZzNiqC
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73740176
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
Date: Fri, 10 Jun 2022 16:12:11 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Julien Grall" <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V9 1/2] libxl: Add support for Virtio disk configuration
Message-ID: <YqNbwtWrIdYWRG9c@perard.uk.xensource.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
 <1654106261-28044-2-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1654106261-28044-2-git-send-email-olekstysh@gmail.com>

On Wed, Jun 01, 2022 at 08:57:40PM +0300, Oleksandr Tyshchenko wrote:
> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
> index a5ca778..e90bc25 100644
> --- a/tools/libs/light/libxl_disk.c
> +++ b/tools/libs/light/libxl_disk.c
> @@ -163,6 +163,25 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
>      rc = libxl__resolve_domid(gc, disk->backend_domname, &disk->backend_domid);
>      if (rc < 0) return rc;
>  
> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
> +        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
> +
> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
> +        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
> +        LOGD(ERROR, domid, "Transport is only supported for specification virtio");
> +        return ERROR_FAIL;

Could you return ERROR_INVAL instead?

> +    }
> +
> +    /* Force transport mmio for specification virtio for now */
> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
> +        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
> +              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
> +            LOGD(ERROR, domid, "Unsupported transport for specification virtio");
> +            return ERROR_FAIL;

Same here.

> +        }
> +        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
> +    }
> +
>      /* Force Qdisk backend for CDROM devices of guests with a device model. */
>      if (disk->is_cdrom != 0 &&
>          libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
> @@ -575,6 +660,41 @@ cleanup:
>      return rc;
>  }
>  
> +static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
> +                                       char **path)
> +{
> +    const char *dir;
> +    int rc;
> +
> +    /*
> +     * As we don't know exactly what device kind to be used here, guess it
> +     * by checking the presence of the corresponding path in Xenstore.
> +     * First, try to read path for vbd device (default) and if not exists
> +     * read path for virtio_disk device. This will work as long as both Xen PV
> +     * and Virtio disk devices are not assigned to the same guest.
> +     */
> +    *path = GCSPRINTF("%s/device/%s",
> +                      libxl__xs_libxl_path(gc, domid),
> +                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
> +
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
> +    if (rc)
> +        return rc;
> +
> +    if (dir)
> +        return 0;
> +
> +    *path = GCSPRINTF("%s/device/%s",
> +                      libxl__xs_libxl_path(gc, domid),
> +                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
> +
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
> +    if (rc)
> +        return rc;
> +
> +    return 0;

I still don't like this implementation of get_path(), which returns a
different answer depending on the environment, and that can change from
one call to the next. I think get_path() was introduced for when the
path for a kind of device didn't correspond to the common path which
other kinds of devices use. And where get_path() is implemented, it
always returns the same answer (see libxl_pci.c for the only
implementation).

I don't really know how to deal with a type of device that has two
different frontend kinds at the moment. (Maybe there's something in
libxl_usb.c which could be useful as a potential example, but there one
of the kinds doesn't use xenstore, so it is probably easier to deal
with.) So I guess we are stuck with an implementation of get_path()
which may return a different answer depending on things outside of
libxl's full control.

So, could you at least make it much harder to get libxl's view of a
guest into a weird state? I mean:
- always check both xenstore paths:
  -> if both exist, return an error;
  -> if only one exists, return that one;
  -> otherwise, default to the "vbd" kind.

That would be better than the current implementation, which returns the
"virtio" path by default.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 15:12:54 2022
Date: Fri, 10 Jun 2022 16:12:35 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <julien.grall@arm.com>,
	"Wei Liu" <wl@xen.org>, Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH V9 2/2] libxl: Introduce basic virtio-mmio support on Arm
Message-ID: <YqNcE9j1r9+d7ekF@perard.uk.xensource.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
 <1654106261-28044-3-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1654106261-28044-3-git-send-email-olekstysh@gmail.com>

On Wed, Jun 01, 2022 at 08:57:41PM +0300, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> This patch introduces helpers to allocate Virtio MMIO params
> (IRQ and memory region) and create a specific device node in
> the Guest device-tree with the allocated params. In order to deal
> with multiple Virtio devices, reserve corresponding ranges.
> For now, we reserve 1MB for memory regions and 10 SPIs.
> 
> As these helpers should be used for every Virtio device attached
> to the Guest, call them for Virtio disk(s).
> 
> Please note, with statically allocated Virtio IRQs there is
> a risk of a clash with the physical IRQs of passthrough devices.
> For the first version, it's fine, but we should consider allocating
> the Virtio IRQs automatically. Thankfully, we know in advance which
> IRQs will be used for passthrough, so we are able to choose
> non-clashing ones.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
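The static reservation scheme the commit message describes could be
sketched like this (every base/size/IRQ value and name below is a
made-up placeholder for illustration, not the values or helpers the
patch actually uses):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative carving of per-device virtio-mmio slots out of
 * statically reserved memory and SPI ranges. */
#define VIRTIO_MMIO_BASE   0x02000000ULL  /* assumed reserved base */
#define VIRTIO_MMIO_SIZE   0x200ULL       /* assumed size of one slot */
#define VIRTIO_IRQ_BASE    33u            /* assumed first reserved SPI */
#define VIRTIO_MAX_DEVICES 10u            /* matches the 10 reserved SPIs */

static int alloc_virtio_mmio(unsigned idx, uint64_t *base, uint32_t *irq)
{
    if (idx >= VIRTIO_MAX_DEVICES)
        return -1;                  /* ran out of reserved slots */
    *base = VIRTIO_MMIO_BASE + (uint64_t)idx * VIRTIO_MMIO_SIZE;
    *irq  = VIRTIO_IRQ_BASE + idx;  /* statically assigned SPI */
    return 0;
}
```

As the message notes, the static SPI assignment is what risks clashing
with passthrough IRQs; an automatic allocator would skip known
passthrough IRQs instead of handing out a fixed base + index.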

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 15:49:37 2022
Date: Fri, 10 Jun 2022 17:49:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jane Malalane <jane.malalane@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4] x86/hvm: Widen condition for
 is_hvm_pv_evtchn_domain() and report fix in CPUID
Message-ID: <YqNn/hVUYZyrN8o2@Air-de-Roger>
References: <20220610110704.29039-1-jane.malalane@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220610110704.29039-1-jane.malalane@citrix.com>

On Fri, Jun 10, 2022 at 12:07:04PM +0100, Jane Malalane wrote:
> Have is_hvm_pv_evtchn_domain() return true for vector callbacks for
> evtchn delivery set up on a per-vCPU basis via
> HVMOP_set_evtchn_upcall_vector.
> 
> Assume that if vCPU0 uses HVMOP_set_evtchn_upcall_vector, all
> remaining vCPUs will too; thus remove is_hvm_pv_evtchn_vcpu() and
> replace its sole caller with is_hvm_pv_evtchn_domain().
> 
> is_hvm_pv_evtchn_domain() returning true is a condition for setting up
> physical IRQ to event channel mappings. Therefore, also add a CPUID
> bit so that guests know whether the check in is_hvm_pv_evtchn_domain()
> will fail when using HVMOP_set_evtchn_upcall_vector. This matters for
> guests that route PIRQs over event channels since
> is_hvm_pv_evtchn_domain() is a condition in physdev_map_pirq().
> 
> The naming of the CPUID bit is quite generic about upcall support
> being available. That's done so that the define name doesn't become
> overly long.
> 
> A guest that doesn't care about physical interrupts routed over event
> channels can just test for the availability of the hypercall directly
> (HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.
> 
> Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
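A rough illustration of the guest-side decision the commit message
describes (the predicate names here are hypothetical stand-ins, not
real Xen interfaces or CPUID leaf definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: whether a guest can rely on per-vCPU upcall vectors.
 * hvmop_upcall_available models "the hypercall exists";
 * has_upcall_cpuid_bit models the new CPUID bit advertising that
 * is_hvm_pv_evtchn_domain() will succeed for this setup. */
static bool use_evtchn_upcall(bool needs_pirq_routing,
                              bool hvmop_upcall_available,
                              bool has_upcall_cpuid_bit)
{
    if (!hvmop_upcall_available)
        return false;                /* hypercall itself is missing */
    if (needs_pirq_routing)
        return has_upcall_cpuid_bit; /* PIRQ->evtchn needs the fixed check */
    return true;                     /* plain upcalls: hypercall is enough */
}
```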

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 15:55:10 2022
Date: Fri, 10 Jun 2022 17:54:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, jbeulich@suse.com,
	George.Dunlap@citrix.com, Artem_Mygaiev@epam.com,
	Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Message-ID: <YqNpS94THUjnY7Vx@Air-de-Roger>
References: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>

On Thu, Jun 09, 2022 at 05:48:26PM -0700, Stefano Stabellini wrote:
> Add the new MISRA C rules agreed by the MISRA C working group to
> docs/misra/rules.rst.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> ---
> 
> I added the rules that we agreed upon this morning together with all the
> notes we discussed, in particular:
> 
> - macros as macro parameters at invocation time for Rule 5.3
> - the clarification of Rule 9.1
> - gnu_inline exception for Rule 8.10
> 
> 
> diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
> index 6ccff07765..5c28836bc8 100644
> --- a/docs/misra/rules.rst
> +++ b/docs/misra/rules.rst
> @@ -89,6 +89,28 @@ existing codebase are work-in-progress.
>         (xen/include/public/) are allowed to retain longer identifiers
>         for backward compatibility.
>  
> +   * - `Rule 5.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_02.c>`_
> +     - Required
> +     - Identifiers declared in the same scope and name space shall be
> +       distinct
> +     - The Xen characters limit for identifiers is 40. Public headers
> +       (xen/include/public/) are allowed to retain longer identifiers
> +       for backward compatibility.
> +
> +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> +     - Required
> +     - An identifier declared in an inner scope shall not hide an
> +       identifier declared in an outer scope
> +     - Using macros as macro parameters at invocation time is allowed,
> +       e.g. MAX(var0, MIN(var1, var2))

I think you want to use the {min,max}_t macros as examples, because
those do define local variables.

The rest LGTM:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 16:01:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 16:01:41 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Date: Fri, 10 Jun 2022 17:00:50 +0100
Message-ID: <20220610160050.24221-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/data-operand-independent-timing-isa-guidance.html
https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

The SDM also lists

  #define  ARCH_CAPS_OVERCLOCKING_STATUS      (_AC(1, ULL) << 23)

but I've got no idea what this is, nor the index of MSR_OVERCLOCKING_STATUS
which is the thing allegedly enumerated by this.
---
 xen/arch/x86/include/asm/msr-index.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 6c250bfcadad..781584953654 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -51,6 +51,9 @@
 #define  PPIN_ENABLE                        (_AC(1, ULL) <<  1)
 #define MSR_PPIN                            0x0000004f
 
+#define MSR_MISC_PACKAGE_CTRL               0x000000bc
+#define  PKG_CTRL_ENERGY_FILTER_EN          (_AC(1, ULL) <<  0)
+
 #define MSR_CORE_CAPABILITIES               0x000000cf
 #define  CORE_CAPS_SPLITLOCK_DETECT         (_AC(1, ULL) <<  5)
 
@@ -71,6 +74,9 @@
 #define  ARCH_CAPS_IF_PSCHANGE_MC_NO        (_AC(1, ULL) <<  6)
 #define  ARCH_CAPS_TSX_CTRL                 (_AC(1, ULL) <<  7)
 #define  ARCH_CAPS_TAA_NO                   (_AC(1, ULL) <<  8)
+#define  ARCH_CAPS_MISC_PACKAGE_CTRL        (_AC(1, ULL) << 10)
+#define  ARCH_CAPS_ENERGY_FILTERING         (_AC(1, ULL) << 11)
+#define  ARCH_CAPS_DOITM                    (_AC(1, ULL) << 12)
 #define  ARCH_CAPS_RRSBA                    (_AC(1, ULL) << 19)
 #define  ARCH_CAPS_BHI_NO                   (_AC(1, ULL) << 20)
 
@@ -149,6 +155,9 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_UARCH_MISC_CTRL                 0x00001b01
+#define  UARCH_CTRL_DOITM                   (_AC(1, ULL) <<  0)
+
 #define MSR_EFER                            0xc0000080 /* Extended Feature Enable Register */
 #define  EFER_SCE                           (_AC(1, ULL) <<  0) /* SYSCALL Enable */
 #define  EFER_LME                           (_AC(1, ULL) <<  8) /* Long Mode Enable */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 16:22:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 16:22:01 +0000
Date: Fri, 10 Jun 2022 17:21:27 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
Message-ID: <YqNvh4bprAh7W/c1@perard.uk.xensource.com>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>

On Tue, May 31, 2022 at 12:06:53AM +0300, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Reuse generic IOMMU device tree bindings to communicate Xen specific
> information for the virtio devices for which the restricted memory
> access using Xen grant mappings need to be enabled.
> 
> Insert "iommus" property pointed to the IOMMU node with "xen,grant-dma"
> compatible to all virtio devices which backends are going to run in
> non-hardware domains (which are non-trusted by default).
> 
> Based on device-tree binding from Linux:
> Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
> 
> The example of generated nodes:
> 
> xen_iommu {
>     compatible = "xen,grant-dma";
>     #iommu-cells = <0x01>;
>     phandle = <0xfde9>;
> };
> 
> virtio@2000000 {
>     compatible = "virtio,mmio";
>     reg = <0x00 0x2000000 0x00 0x200>;
>     interrupts = <0x00 0x01 0xf01>;
>     interrupt-parent = <0xfde8>;
>     dma-coherent;
>     iommus = <0xfde9 0x01>;
> };
> 
> virtio@2000200 {
>     compatible = "virtio,mmio";
>     reg = <0x00 0x2000200 0x00 0x200>;
>     interrupts = <0x00 0x02 0xf01>;
>     interrupt-parent = <0xfde8>;
>     dma-coherent;
>     iommus = <0xfde9 0x01>;
> };
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The patch looks fine.

> ---
> !!! This patch is based on non upstreamed yet “Virtio support for toolstack
> on Arm” V8 series which is on review now:
> https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/

With the patch added to the series it depends on: Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 16:31:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 16:31:23 +0000
Subject: Re: [PATCH V2] libxl/arm: Create specific IOMMU node to be referred
 by virtio-mmio device
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <1653944813-17970-1-git-send-email-olekstysh@gmail.com>
 <YqNvh4bprAh7W/c1@perard.uk.xensource.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <562bd256-2dcd-33fe-f537-e3186023ffa5@gmail.com>
Date: Fri, 10 Jun 2022 19:31:16 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YqNvh4bprAh7W/c1@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 10.06.22 19:21, Anthony PERARD wrote:

Hello Anthony

> On Tue, May 31, 2022 at 12:06:53AM +0300, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Reuse generic IOMMU device tree bindings to communicate Xen specific
>> information for the virtio devices for which the restricted memory
>> access using Xen grant mappings need to be enabled.
>>
>> Insert "iommus" property pointed to the IOMMU node with "xen,grant-dma"
>> compatible to all virtio devices which backends are going to run in
>> non-hardware domains (which are non-trusted by default).
>>
>> Based on device-tree binding from Linux:
>> Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
>>
>> The example of generated nodes:
>>
>> xen_iommu {
>>      compatible = "xen,grant-dma";
>>      #iommu-cells = <0x01>;
>>      phandle = <0xfde9>;
>> };
>>
>> virtio@2000000 {
>>      compatible = "virtio,mmio";
>>      reg = <0x00 0x2000000 0x00 0x200>;
>>      interrupts = <0x00 0x01 0xf01>;
>>      interrupt-parent = <0xfde8>;
>>      dma-coherent;
>>      iommus = <0xfde9 0x01>;
>> };
>>
>> virtio@2000200 {
>>      compatible = "virtio,mmio";
>>      reg = <0x00 0x2000200 0x00 0x200>;
>>      interrupts = <0x00 0x02 0xf01>;
>>      interrupt-parent = <0xfde8>;
>>      dma-coherent;
>>      iommus = <0xfde9 0x01>;
>> };
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> The patch looks fine.
>
>> ---
>> !!! This patch is based on non upstreamed yet “Virtio support for toolstack
>> on Arm” V8 series which is on review now:
>> https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
> With the patch added to the series it depends on: Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks!


I will send V3 anyway (there is a comment from Julien here). But I will 
send it together with the related series:

https://lore.kernel.org/xen-devel/1654106261-28044-1-git-send-email-olekstysh@gmail.com/


>
> Thanks,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 17:09:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 17:09:03 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170906: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:host-install:broken:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3d9f55c57bc3659f986acc421eac431ff6edcc83
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 17:08:55 +0000

flight 170906 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170906/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      5 host-install           broken REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3d9f55c57bc3659f986acc421eac431ff6edcc83
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   17 days
Failing since        170716  2022-05-24 11:12:06 Z   17 days   44 attempts
Testing same since   170906  2022-06-09 21:43:09 Z    0 days    1 attempts

------------------------------------------------------------
2296 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-arm64-arm64-examine host-install

Not pushing.

(No revision log; it would be 270827 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 17:14:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 17:14:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346597.572446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nziCt-0004MR-8B; Fri, 10 Jun 2022 17:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346597.572446; Fri, 10 Jun 2022 17:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nziCt-0004MK-4l; Fri, 10 Jun 2022 17:13:59 +0000
Received: by outflank-mailman (input) for mailman id 346597;
 Fri, 10 Jun 2022 17:13:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdog=WR=citrix.com=prvs=1535499d8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1nziCs-0004ME-Kv
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 17:13:58 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b551127b-e8e0-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 19:13:57 +0200 (CEST)
Received: from mail-dm6nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jun 2022 13:13:37 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6700.namprd03.prod.outlook.com (2603:10b6:a03:403::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 10 Jun
 2022 17:13:35 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5314.019; Fri, 10 Jun 2022
 17:13:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b551127b-e8e0-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1654881237;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=kaXn05QMKBXI/ngI0e27T2Wi0MpAUBPr6A+3QMcdK7A=;
  b=WlgOv3nqK1ZHnN7KePM3PdOFZGPMuzqR7Kf4HHwUzAo4baUYh1Lf7IVv
   4S+rFPBFAM/AGs8BYctP4w5tuVmpLoq68t929DjCR3CFEC04YPmVe69wV
   6RpnvZ/fBCoGxSJWNCMZKckfHjEJpHePDiTH2mR+r4WvnGi8D20mDNvnC
   U=;
X-IronPort-RemoteIP: 104.47.58.107
X-IronPort-MID: 75891161
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,291,1647316800"; 
   d="scan'208";a="75891161"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mPi9rIWJoFbIu4Gh27RHPKdUcyuCNDqpyfHWCuqLiRhxc9AnnkEfO4EEscB7vvo2qOZjF2uUF3LBUQ0E91HgzIYBYI9e+ztuZg76W1jv1iN3PBGc6KyUQ720dX47IKjgLbFtpUKUKqYz5iidjknMj+u9yHaXwx1v5A2FYsfaXTDHeMih7pmt1vUH1D3m7oVMopW/3xhfNU64mZ9i43EJQBKSB3axo3r3wNvEu00hEvYKi9h/vtXFWdt1ORN+ZZS2rso6ioRoipywbAXZevh6s+ZZ+indOtzGSbkYz2bBc8V4zfxTl2jt3ehlrv+EbN5rsa6DJwYL0msNgx6AWpyR0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kaXn05QMKBXI/ngI0e27T2Wi0MpAUBPr6A+3QMcdK7A=;
 b=KSZmZHeztKBDdjvnLXGSyFLkIp6LQUt3K2PufKu25l8dYhEZNl8EiF3ccXJ2R1/Kzar/6J02alc4YbQZ5OOkmvdt8Kc59TtUMTPYmkqsrtNjAq/qo0IShdR7tieVQUjccFg29ZKKFZP6dv3T4n2sGm8k7wL4QUi/R5djPBciEfnpUkthqCBmcIVvkuYmSZkWy78pNrPOe6B8WkV0QdNj974o3g08VTjKw6j6+L/2KYb0+lJW8sekKb++L2rXzIvj92QIo/NMXqClnqzhZcZYqyhvdT+x7iTlwy2FFga7cclkgwRp/57Whi4MiKCrIXHpT0jCV1jmiUhj2VNaSlw6og==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kaXn05QMKBXI/ngI0e27T2Wi0MpAUBPr6A+3QMcdK7A=;
 b=eLdDidxVwdNQOe9F5j9w+N8XE83ZyhSF9BVlG9JJ3Ce+AXPfHlLdDmJZVlO41v1/1y/+5YK4i4y7ao4WNQc5oEU/0vttuqxE7r/8pG/5eZx/MySNeqnWcmrcrIcy2ImzkZktNqlFFBu+As689Uqmcvmxqw5zoQjEImHKaLRcOkM=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Thread-Topic: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Thread-Index: AQHYfONZpKeg248Rt0um2v0bci+Cz61I4Q+A
Date: Fri, 10 Jun 2022 17:13:35 +0000
Message-ID: <46e8251f-6a76-a45a-54db-c15a39b2ff68@citrix.com>
References: <20220610160050.24221-1-andrew.cooper3@citrix.com>
In-Reply-To: <20220610160050.24221-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bebd0669-cdcf-440a-9e11-08da4b048daf
x-ms-traffictypediagnostic: SJ0PR03MB6700:EE_
x-microsoft-antispam-prvs:
 <SJ0PR03MB670054367BC12513B2078371BAA69@SJ0PR03MB6700.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <C8E50B8EF1BA974E87742C216155CA0D@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bebd0669-cdcf-440a-9e11-08da4b048daf
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jun 2022 17:13:35.5239
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: vVsME6Q3ZK82mntx/l+j/A1NCzK75DfMm89xX9/gmYda5KHquWoJwHGaSLf690gB4Igr5RNaAgTA1lBNFpkciUfGcmEcSjymXRC8cr1jrl4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6700

On 10/06/2022 17:00, Andrew Cooper wrote:
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/data-operand-independent-timing-isa-guidance.html
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> The SDM also lists
>
>   #define  ARCH_CAPS_OVERCLOCKING_STATUS      (_AC(1, ULL) << 23)
>
> but I've got no idea what this is, nor the index of MSR_OVERCLOCKING_STATUS
> which is the thing allegedly enumerated by this.


Found it.  There's an OVER{C}CLOCKING typo in the SDM.  It's MSR 0x195
and new in AlderLake it seems.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 17:19:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 17:19:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346608.572457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nziHl-0005Hp-T1; Fri, 10 Jun 2022 17:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346608.572457; Fri, 10 Jun 2022 17:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nziHl-0005Hi-PB; Fri, 10 Jun 2022 17:19:01 +0000
Received: by outflank-mailman (input) for mailman id 346608;
 Fri, 10 Jun 2022 17:19:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yiYo=WR=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nziHk-0005Hc-D4
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 17:19:00 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a649eb2-e8e1-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 19:18:59 +0200 (CEST)
Received: by mail-lf1-x12e.google.com with SMTP id t25so43810743lfg.7
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 10:18:59 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 j22-20020a056512109600b0047255d210e7sm4797634lfg.22.2022.06.10.10.18.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 10 Jun 2022 10:18:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a649eb2-e8e1-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=afV9wUg31EG03L2a2kkqqSnmRWnpT+KqkYxw5XyGGz0=;
        b=pFhj/d+stsV9G5X4+UW9nTOrrJrxbZQQwArXKeVBrD0i575Y7V7JJw46H7h6/4JaxE
         iXKPTFxnOErPH/X5+b4c/QYFydSnhH412AXukyyWiQuuSU59JPJgbUU3jZTZmHcrwav6
         wOMv63j+GE53xEXoTt3RkUn78M0hK6ZPbFXKToGPhxrgB8FqKCNhiGNH4WBSKiLlKjfD
         BiFEYkVbVBpNifbPSza1UVcJ23nV86mHOOU+CHNmt3am+kwdqy0FtRgjiHBj+RdGv7yP
         MjPYryl4bHLejJel0rcZguOXhsUTy9OaGCjoaEj+p2bZlpTQ6jd07uZ45fd8qSY6zPtH
         bwDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=afV9wUg31EG03L2a2kkqqSnmRWnpT+KqkYxw5XyGGz0=;
        b=gEHXuengsHFbKCFuVsZFuvzRfNYJyahdUXh8NdLR+5R9hPdzgAAibqs30/7WPfAk9h
         VA60diizfRo/CuD9LwFlLAt1CNEaOt3bGehJqHjrSSy9H9y+Qd3hNvoXAFuJ6U1odgEt
         Yq/NPCeTzAjYGAZUWS+CYMaN0+N/sX2wrWX7mVw3yYGlC3ls3UZUcJoND5ffKhHrsz00
         ro628W9nXzqvBHXV6QzEmDTAZ5F76PbQZTBvkL/MhQAB5KKZj+C8Ff1trIb28dqrU9tf
         7zOnwEH/RKot3FrF3QV1ZVRrCJt12Qh5PVCCL5gG6kz3A8Zmm40cGhk+qWyQrG543THG
         04FQ==
X-Gm-Message-State: AOAM531aAK846rSZzS/0/lR8roLWrAQNCxfOygD7ITk+LBsZVQ99sPSa
	Q8JT1LBqedSgpYj7bl79TLtiy+cGZvQSIg==
X-Google-Smtp-Source: ABdhPJxRoQugrS1fWvfLnhwTmgvw+qj8hXVKnGbxFhEKRi7phPKNSX8ZiMn8Kg9XxQjT/nDFgCtf4g==
X-Received: by 2002:a05:6512:515:b0:479:11a0:8132 with SMTP id o21-20020a056512051500b0047911a08132mr27112856lfb.344.1654881538779;
        Fri, 10 Jun 2022 10:18:58 -0700 (PDT)
Subject: Re: [PATCH V9 1/2] libxl: Add support for Virtio disk configuration
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
 <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
 <1654106261-28044-2-git-send-email-olekstysh@gmail.com>
 <YqNbwtWrIdYWRG9c@perard.uk.xensource.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <b653ebad-fc56-2a9e-36bb-59ab19da5db1@gmail.com>
Date: Fri, 10 Jun 2022 20:18:57 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YqNbwtWrIdYWRG9c@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10.06.22 18:12, Anthony PERARD wrote:

Hello Anthony

> On Wed, Jun 01, 2022 at 08:57:40PM +0300, Oleksandr Tyshchenko wrote:
>> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
>> index a5ca778..e90bc25 100644
>> --- a/tools/libs/light/libxl_disk.c
>> +++ b/tools/libs/light/libxl_disk.c
>> @@ -163,6 +163,25 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
>>       rc = libxl__resolve_domid(gc, disk->backend_domname, &disk->backend_domid);
>>       if (rc < 0) return rc;
>>   
>> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
>> +        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
>> +
>> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
>> +        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
>> +        LOGD(ERROR, domid, "Transport is only supported for specification virtio");
>> +        return ERROR_FAIL;
> Could you return ERROR_INVAL instead?

yes


>
>> +    }
>> +
>> +    /* Force transport mmio for specification virtio for now */
>> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
>> +        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
>> +              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
>> +            LOGD(ERROR, domid, "Unsupported transport for specification virtio");
>> +            return ERROR_FAIL;
> Same here.

yes


>
>> +        }
>> +        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
>> +    }
>> +
>>       /* Force Qdisk backend for CDROM devices of guests with a device model. */
>>       if (disk->is_cdrom != 0 &&
>>           libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
>> @@ -575,6 +660,41 @@ cleanup:
>>       return rc;
>>   }
>>   
>> +static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
>> +                                       char **path)
>> +{
>> +    const char *dir;
>> +    int rc;
>> +
>> +    /*
>> +     * As we don't know exactly what device kind to be used here, guess it
>> +     * by checking the presence of the corresponding path in Xenstore.
>> +     * First, try to read path for vbd device (default) and if not exists
>> +     * read path for virtio_disk device. This will work as long as both Xen PV
>> +     * and Virtio disk devices are not assigned to the same guest.
>> +     */
>> +    *path = GCSPRINTF("%s/device/%s",
>> +                      libxl__xs_libxl_path(gc, domid),
>> +                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
>> +
>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
>> +    if (rc)
>> +        return rc;
>> +
>> +    if (dir)
>> +        return 0;
>> +
>> +    *path = GCSPRINTF("%s/device/%s",
>> +                      libxl__xs_libxl_path(gc, domid),
>> +                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
>> +
>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
>> +    if (rc)
>> +        return rc;
>> +
>> +    return 0;
> I still don't like this implementation of get_path() which return a
> different answer depending on the environment which can change from one
> call to the next. I think get_path() was introduced when the path for
> the kind of device didn't correspond to the common path which other kind
> of devices uses. And when get_path() is implemented, it always returns
> the same answer (see libxl_pci.c for the only implementation).
>
> I don't really know how to deal with a type of device that have two
> different frontend kind at the moment. (But maybe there's something in
> libxl_usb.c which could be useful as a potential example, but one of the
> kind doesn't use xenstore so is probably easier to deal with.). So I
> guess we are stuck with an implementation of get_path() which may return
> a different answer depending on thing outside of libxl's full control.
>
> So, could you at least make it much harder to have libxl's view of a
> guest in a weird state? I mean:
> - always check both xenstore paths
>    -> if both exist, return an error
>    -> if only one exist, return that one.
>    -> default to the "vbd" kind, otherwise.

I think I got your idea; I will try to do it this way.


>
> That would be better than the current implementation which returns the
> "virtio" path by default.

agree,

making virtio the default wasn't the original intention, it was entirely 
unintentional.


>
> Thanks,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 17:57:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 17:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346619.572474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzisp-000275-Rw; Fri, 10 Jun 2022 17:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346619.572474; Fri, 10 Jun 2022 17:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzisp-00026y-OD; Fri, 10 Jun 2022 17:57:19 +0000
Received: by outflank-mailman (input) for mailman id 346619;
 Fri, 10 Jun 2022 17:57:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yiYo=WR=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1nziso-00026s-4v
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 17:57:18 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3a64354-e8e6-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 19:57:16 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id g25so30358029ljm.2
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 10:57:16 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 m1-20020a056512014100b0047255d211bbsm4808954lfo.234.2022.06.10.10.57.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 10 Jun 2022 10:57:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3a64354-e8e6-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=rRWAJcb2Uy8fqgTN/ziwtxpwnfNM4EOEYQHcP0bo1pk=;
        b=gw0+tHuu16wy6+K7ouk9bxSUWhzgUcBuNMmb5Luk6q1vx1AurBND1TmG6dow59+rQn
         4m131lnc19rJUZZaky73f5bQkWfsMJkJN2d1egtG5SoAjGAnAPIhqPfyMCV++dL2L1fi
         d4OWGG2xSp9DHVA0qVD+xg0t++niWb//FJBCEnwiiEeVpVpVRpRyT3qwxL+Wx1KEpwiH
         zDmsxLoCf9zlxP/A67Rf2lPoLDBVjxU3RDJ1zwkLbujbmpeYa0P9UD6lcx8IID191CuD
         Z+Reff/fmzlC8fow/ce8ZUvbh4K6Yn2EOSt7CLM+ojVirh+YSRrLEMmu+RBnQqRL98Qh
         1n3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=rRWAJcb2Uy8fqgTN/ziwtxpwnfNM4EOEYQHcP0bo1pk=;
        b=bF/CdUa0z7JhFf9fKWf4012cTTm4qPrpG7CB4Q264ioN/PAUmf8RwpJrfaQzTsFymH
         n8c+ysW7pFeKk6gJugJbGh8toNM7TQ2bASU7jBpptkC1AGLnw/rBsU+4Br7y3SmzZQRB
         hKgX56vv4oEGlpqH8PzWwgP9x3Wq7juuru2qyGO+psce4GPMZ4jkAWVlTKIsjaPMsUOr
         cpzFoII6JxtcGUPXZSKD8wy54nPndsWw3cvvJtTz9xgxnhcZKSWYQoES2iHgm74inLr6
         C8BflJvfnZwQjQepQHipoKok2OlWNT1QQD7zpZh6i1DEeOjdcDOOhp9/C+Jl+QDaDsDj
         m56g==
X-Gm-Message-State: AOAM531TupMY3QZPqwPaex/PJHZlP5AtEH48ByOaKTNectWnmn2iIIQ4
	YDFyFM/E5WMYdWmDnuOwlLQ=
X-Google-Smtp-Source: ABdhPJxrCkrxRQJuZNW/yt2ICylNXpULVp78oyAB+qOtKd9cdFumUoeQcpDh6MquMjALKUzW7U4PwQ==
X-Received: by 2002:a2e:9b4a:0:b0:258:e01f:bcdf with SMTP id o10-20020a2e9b4a000000b00258e01fbcdfmr4213712ljj.87.1654883836066;
        Fri, 10 Jun 2022 10:57:16 -0700 (PDT)
Subject: Re: [PATCH V9 1/2] libxl: Add support for Virtio disk configuration
From: Oleksandr <olekstysh@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
 <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <1654106261-28044-1-git-send-email-olekstysh@gmail.com>
 <1654106261-28044-2-git-send-email-olekstysh@gmail.com>
 <YqNbwtWrIdYWRG9c@perard.uk.xensource.com>
 <b653ebad-fc56-2a9e-36bb-59ab19da5db1@gmail.com>
Message-ID: <7e5b42a1-d781-6987-2cf8-35bdd701fff4@gmail.com>
Date: Fri, 10 Jun 2022 20:57:14 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b653ebad-fc56-2a9e-36bb-59ab19da5db1@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 10.06.22 20:18, Oleksandr wrote:

Hello Anthony

>
> On 10.06.22 18:12, Anthony PERARD wrote:
>
> Hello Anthony
>
>> On Wed, Jun 01, 2022 at 08:57:40PM +0300, Oleksandr Tyshchenko wrote:
>>> diff --git a/tools/libs/light/libxl_disk.c 
>>> b/tools/libs/light/libxl_disk.c
>>> index a5ca778..e90bc25 100644
>>> --- a/tools/libs/light/libxl_disk.c
>>> +++ b/tools/libs/light/libxl_disk.c
>>> @@ -163,6 +163,25 @@ static int 
>>> libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
>>>       rc = libxl__resolve_domid(gc, disk->backend_domname, 
>>> &disk->backend_domid);
>>>       if (rc < 0) return rc;
>>>   +    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
>>> +        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
>>> +
>>> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
>>> +        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
>>> +        LOGD(ERROR, domid, "Transport is only supported for 
>>> specification virtio");
>>> +        return ERROR_FAIL;
>> Could you return ERROR_INVAL instead?
>
> yes
>
>
>>
>>> +    }
>>> +
>>> +    /* Force transport mmio for specification virtio for now */
>>> +    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
>>> +        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
>>> +              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
>>> +            LOGD(ERROR, domid, "Unsupported transport for 
>>> specification virtio");
>>> +            return ERROR_FAIL;
>> Same here.
>
> yes
>
>
>>
>>> +        }
>>> +        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
>>> +    }
>>> +
>>>       /* Force Qdisk backend for CDROM devices of guests with a 
>>> device model. */
>>>       if (disk->is_cdrom != 0 &&
>>>           libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
>>> @@ -575,6 +660,41 @@ cleanup:
>>>       return rc;
>>>   }
>>>   +static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t 
>>> domid,
>>> +                                       char **path)
>>> +{
>>> +    const char *dir;
>>> +    int rc;
>>> +
>>> +    /*
>>> +     * As we don't know exactly what device kind to be used here, 
>>> guess it
>>> +     * by checking the presence of the corresponding path in Xenstore.
>>> +     * First, try to read path for vbd device (default) and if not 
>>> exists
>>> +     * read path for virtio_disk device. This will work as long as 
>>> both Xen PV
>>> +     * and Virtio disk devices are not assigned to the same guest.
>>> +     */
>>> +    *path = GCSPRINTF("%s/device/%s",
>>> +                      libxl__xs_libxl_path(gc, domid),
>>> + libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
>>> +
>>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
>>> +    if (rc)
>>> +        return rc;
>>> +
>>> +    if (dir)
>>> +        return 0;
>>> +
>>> +    *path = GCSPRINTF("%s/device/%s",
>>> +                      libxl__xs_libxl_path(gc, domid),
>>> + libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
>>> +
>>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
>>> +    if (rc)
>>> +        return rc;
>>> +
>>> +    return 0;
>> I still don't like this implementation of get_path() which return a
>> different answer depending on the environment which can change from one
>> call to the next. I think get_path() was introduced when the path for
>> the kind of device didn't correspond to the common path which other kind
>> of devices uses. And when get_path() is implemented, it always returns
>> the same answer (see libxl_pci.c for the only implementation).
>>
>> I don't really know how to deal with a type of device that have two
>> different frontend kind at the moment. (But maybe there's something in
>> libxl_usb.c which could be useful as a potential example, but one of the
>> kind doesn't use xenstore so is probably easier to deal with.). So I
>> guess we are stuck with an implementation of get_path() which may return
>> a different answer depending on thing outside of libxl's full control.
>>
>> So, could you at least make it much harder to have libxl's view of a
>> guest in a weird state? I mean:
>> - always check both xenstore paths
>>    -> if both exist, return an error
>>    -> if only one exist, return that one.
>>    -> default to the "vbd" kind, otherwise.
>
> I think I got your idea; I will try to do it this way.

I hope that this function (it seems to work) follows your suggestion. If 
there are no objections, I will use it for V10.


static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
                                       char **path)
{
    const char *xen_dir, *virtio_dir;
    char *xen_path, *virtio_path;
    int rc;

    xen_path = GCSPRINTF("%s/device/%s",
                         libxl__xs_libxl_path(gc, domid),
                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));

    rc = libxl__xs_read_checked(gc, XBT_NULL, xen_path, &xen_dir);
    if (rc)
        return rc;

    virtio_path = GCSPRINTF("%s/device/%s",
                            libxl__xs_libxl_path(gc, domid),
                            libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));

    rc = libxl__xs_read_checked(gc, XBT_NULL, virtio_path, &virtio_dir);
    if (rc)
        return rc;

    if (xen_dir && virtio_dir) {
        LOGD(ERROR, domid,
             "Invalid configuration, both xen and virtio paths are present");
        return ERROR_INVAL;
    } else if (virtio_dir) {
        *path = virtio_path;
    } else {
        *path = xen_path;
    }

    return 0;
}


>
>
>>
>> That would be better than the current implementation which returns the
>> "virtio" path by default.
>
> agree,
>
> making virtio the default wasn't the original intention, it was 
> entirely unintentional.
>
>
>>
>> Thanks,
>>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 18:14:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 18:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346632.572493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzj9J-0005BO-Hd; Fri, 10 Jun 2022 18:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346632.572493; Fri, 10 Jun 2022 18:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzj9J-0005BH-El; Fri, 10 Jun 2022 18:14:21 +0000
Received: by outflank-mailman (input) for mailman id 346632;
 Fri, 10 Jun 2022 18:14:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzj9J-0005BB-29
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 18:14:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzj9I-0006TO-PZ; Fri, 10 Jun 2022 18:14:20 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzj9I-0005mk-J5; Fri, 10 Jun 2022 18:14:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jIwjcn9zGIQ+tqdvoZgb0VPlNVDBugB/lnUB0C0dc8M=; b=ZhDqxXCsw40kaI27wZ92wouo5q
	r670+hg/XddMfuBIkp2gXSWo/77R9GTxP0rp4sCR2lEf3Iu3Pl3kiYMtPrzB21KuQcLnioxBluP8a
	36+i4IrTLPaSBncqic9l4FOdRmfE4dBPyBC+UtUDWtWsEsG0KwpYNgYSNQ5tdHanPKkg=;
Message-ID: <e1db3f9f-eeea-eeed-7633-1e63187cfa8c@xen.org>
Date: Fri, 10 Jun 2022 19:14:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 4/4] arm: Define kconfig symbols used by arm64
 cpufeatures
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1653993431.git.bertrand.marquis@arm.com>
 <be6be3d433a6cd5737e2d4ebb3494fcc99672df4.1653993431.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <be6be3d433a6cd5737e2d4ebb3494fcc99672df4.1653993431.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 31/05/2022 11:43, Bertrand Marquis wrote:
> Define kconfig symbols which are used by arm64 cpufeatures to prevent
> using undefined symbols and rely on IS_ENABLED returning false.
> All the features related to those symbols are unsupported by Xen:
> - pointer authentication
> - sve
> - memory tagging
> - branch target identification
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 18:20:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 18:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346642.572508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzjFe-0006bm-AE; Fri, 10 Jun 2022 18:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346642.572508; Fri, 10 Jun 2022 18:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzjFe-0006bf-7I; Fri, 10 Jun 2022 18:20:54 +0000
Received: by outflank-mailman (input) for mailman id 346642;
 Fri, 10 Jun 2022 18:20:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzjFd-0006bZ-8p
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 18:20:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzjFd-0006eP-09; Fri, 10 Jun 2022 18:20:53 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.23.251]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzjFc-0005xn-Ob; Fri, 10 Jun 2022 18:20:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=E/e75di51FqReb6gjA5OOPtz2t8eNRiBDZ4982t2t+k=; b=u23rpDxm+W4EYQYRyucBZw4Iu/
	zcnY4CWk5a+5pIxEZsD3hoh0+0D+dRoKT2KyJrNEfpC6tVklv9RkUswGHtR2EfISEms6zVS8UKeJT
	ib0nXWJrUWBAGKJ9ScJ+J5oXah9SK/Bzvw7XYN9tJ5c56AfZEdwQQsblzdbblP4ISlQo=;
Message-ID: <25cfe471-8e82-97cd-7a47-b5e85c849eae@xen.org>
Date: Fri, 10 Jun 2022 19:20:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/4] xen/arm: Add sb instruction support
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1653993431.git.bertrand.marquis@arm.com>
 <efc2f01da9f9dfc0f678eaf7d8fe81f9b3d0cbc3.1653993431.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <efc2f01da9f9dfc0f678eaf7d8fe81f9b3d0cbc3.1653993431.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 31/05/2022 11:43, Bertrand Marquis wrote:
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index f7368766c0..9649a7afee 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -67,8 +67,9 @@
>   #define ARM_WORKAROUND_BHB_LOOP_24 13
>   #define ARM_WORKAROUND_BHB_LOOP_32 14
>   #define ARM_WORKAROUND_BHB_SMCC_3 15
> +#define ARM64_HAS_SB 16

The feature exists for both 32-bit and 64-bit, so I would prefer it to be 
called ARM_HAS_SB.

>   
> -#define ARM_NCAPS           16
> +#define ARM_NCAPS           17
>   
>   #ifndef __ASSEMBLY__
>   
> @@ -78,6 +79,9 @@
>   
>   extern DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>   
> +void check_local_cpu_features(void);
> +void enable_cpu_features(void);
> +
>   static inline bool cpus_have_cap(unsigned int num)
>   {
>       if ( num >= ARM_NCAPS )
> diff --git a/xen/arch/arm/include/asm/macros.h b/xen/arch/arm/include/asm/macros.h
> index 1aa373760f..33e863d982 100644
> --- a/xen/arch/arm/include/asm/macros.h
> +++ b/xen/arch/arm/include/asm/macros.h
> @@ -5,14 +5,7 @@
>   # error "This file should only be included in assembly file"
>   #endif
>   
> -    /*
> -     * Speculative barrier
> -     * XXX: Add support for the 'sb' instruction
> -     */
> -    .macro sb
> -    dsb nsh
> -    isb
> -    .endm

Looking at the patch bcab2ac84931 "xen/arm64: Place a speculation 
barrier following an ret instruction", the macro was defined before 
including <asm/arm*/macros.h> so that 'sb' could be used in the macros 
defined by those headers.

I can't remember whether I chose that order because I had a failure on 
some compilers. However, I couldn't find anything in the assembler 
documentation suggesting that a macro A could use a macro B before B is 
defined.

So I would rather not move the macro unless there is a strong argument 
for it.
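
One data point, for what it's worth: GNU as appears to expand a macro body 
only when the macro is invoked, not when it is defined, so a body can 
reference a macro that is defined later in the file, provided the 
definition appears before the first invocation. A minimal illustration (I 
haven't checked this against other assemblers):

```asm
    /* GNU as expands a macro body at invocation time, not at
     * definition time, so 'outer' may reference 'inner' here ... */
    .macro outer
    inner
    .endm

    /* ... as long as 'inner' is defined before 'outer' is invoked. */
    .macro inner
    nop
    .endm

    outer       /* expands to: nop */
```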

> +#include <asm/alternative.h>
>   
>   #if defined (CONFIG_ARM_32)
>   # include <asm/arm32/macros.h>
> @@ -29,4 +22,28 @@
>       .endr
>       .endm
>   
> +    /*
> +     * Speculative barrier
> +     */
> +    .macro sb
> +alternative_if_not ARM64_HAS_SB
> +    dsb nsh
> +    isb
> +alternative_else
> +    /*
> +     * SB encoding in hexadecimal to prevent recursive macro.
> +     * extra nop is required to keep same number of instructions on both sides
> +     * of the alternative.
> +     */
> +#if defined(CONFIG_ARM_32)
> +    .inst 0xf57ff070
> +#elif defined(CONFIG_ARM_64)
> +    .inst 0xd50330ff
> +#else
> +#   error "missing sb encoding for ARM variant"
> +#endif
> +    nop
> +alternative_endif
> +    .endm
> +
>   #endif /* __ASM_ARM_MACROS_H */
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index ea1f5ee3d3..b44494c9a9 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -961,6 +961,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>        */
>       check_local_cpu_errata();
>   
> +    check_local_cpu_features();
> +
>       init_xen_time();
>   
>       gic_init();
> @@ -1030,6 +1032,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>        */
>       apply_alternatives_all();
>       enable_errata_workarounds();
> +    enable_cpu_features();
>   
>       /* Create initial domain 0. */
>       if ( !is_dom0less_mode() )
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 9bb32a301a..fb7cc43a93 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -389,6 +389,7 @@ void start_secondary(void)
>       local_abort_enable();
>   
>       check_local_cpu_errata();
> +    check_local_cpu_features();
>   
>       printk(XENLOG_DEBUG "CPU %u booted.\n", smp_processor_id());
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 18:26:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 18:26:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346651.572519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzjKg-0007R7-U1; Fri, 10 Jun 2022 18:26:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346651.572519; Fri, 10 Jun 2022 18:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzjKg-0007R0-R4; Fri, 10 Jun 2022 18:26:06 +0000
Received: by outflank-mailman (input) for mailman id 346651;
 Fri, 10 Jun 2022 18:26:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iHn/=WR=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1nzjKf-0007Qu-LD
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 18:26:05 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7c832e6-e8ea-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 20:26:02 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9B917621EA;
 Fri, 10 Jun 2022 18:26:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 0C1A9C34114;
 Fri, 10 Jun 2022 18:26:00 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 E8375E737EA; Fri, 10 Jun 2022 18:25:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7c832e6-e8ea-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654885560;
	bh=hnJGEJ6cGM5h+26mHdcmFFEsRgKuweQknb6iEEtMA3E=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=LE4Se7DCHVSvtvi3x7Cx7yZmxbN4gLsxqeZ3lEUZhlffWdVuMXR7GkYvAv3VCSRmq
	 M6uZe5QV+TxYmmoN2O9JVLFdSWl7kc4SD/eB+pPC58aqKHTHcL1PQA4rop8H7ErDnT
	 3zZUNKeErrRLfxaiaFoSirpJceI3WMlFz0k2s6kkZDm20BrpkA64E6OmCOCEys3IpN
	 qNXL9mSl1k5tGf3FSSI+v2Uq07kCYjZOKVT0PFH1gtI7RvOqpl1RdUKFU2deGJdN8n
	 klwyEuNEai5z0pTPty7OJxmZow2rmsR30Fg7iaq48nAe1PqAQrCvefgJeQ9A3rDv2j
	 8OUqi5DmmOO1A==
Subject: Re: [GIT PULL] xen: branch for v5.19-rc2
From: pr-tracker-bot@kernel.org
In-Reply-To: <20220610044858.30822-1-jgross@suse.com>
References: <20220610044858.30822-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20220610044858.30822-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc2-tag
X-PR-Tracked-Commit-Id: dbac14a5a05ff8e1ce7c0da0e1f520ce39ec62ea
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: f2ecc964b9414a407828f9bef242b77483e95a6d
Message-Id: <165488555994.32117.5799921241915272315.pr-tracker-bot@kernel.org>
Date: Fri, 10 Jun 2022 18:25:59 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Fri, 10 Jun 2022 06:48:58 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc2-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/f2ecc964b9414a407828f9bef242b77483e95a6d

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 20:00:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 20:00:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346674.572559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzknt-0003AH-8n; Fri, 10 Jun 2022 20:00:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346674.572559; Fri, 10 Jun 2022 20:00:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzknt-0003AA-5L; Fri, 10 Jun 2022 20:00:21 +0000
Received: by outflank-mailman (input) for mailman id 346674;
 Fri, 10 Jun 2022 20:00:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O8JV=WR=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1nzkns-0003A4-9N
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 20:00:20 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3a6e30a-e8f7-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 22:00:18 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id d19so230838lji.10
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 13:00:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a6e30a-e8f7-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ujDhRtw1GqfaFOiy5pTKfU1xZmqAMfX+H4865dSGPY8=;
        b=ltEWxWvQ76Z/Dzx7En5hEUnJGimFZUBC5s9Ypm9tybGMYOLaBSniFfytEh39DdHoQG
         vDpqC7RnudNBDoqFCJ9Cl1TlLhfrkLEHY6fMYf41Spp175HYS5unGJ6JvFy/GHFo2bHt
         s2CLbTakEIlv1HtDe8YiNV+IzLRxZt5xrj24JwRDI9t8GexF4V2dwGtzMfSi/6/4kDuN
         IDfzFxYvVtwVdQN5do5Hf28HQVPN2NVTP5uiXro6CLu8e6p4xJz4udOA9MXGAZdHpN1q
         0tjcZBN4/ANmbrfcfJrzhnP9f3CAiYy6CBJMCm3glz/nN9B3VJfV6WO8pFgj0OkMc+Yc
         dhag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ujDhRtw1GqfaFOiy5pTKfU1xZmqAMfX+H4865dSGPY8=;
        b=xcApGfj8on4RhTCcYFMLUomOxgzwWWE3aL4JJcVqhxSjCfoW94Ex17iPNBWNZtg0SM
         uaTUGIvgBX94TbY57i/XqE0CVfvtKtNeKnkuxPgl2DX815ktbWmkwTeT1oKxqpWvTNj5
         nzLaCmV9WC2FdaKcakFDjCNv13I4xczjyXj9xWW1Dsb8fgKbgiQsoJStAMPpxz1bMlzU
         Ak++Yrtmy6gB/x1GCoyIkPikK4gm+qIuW7hTGT36jr5K6a5f9jEL63/K4kpAfUOjou4p
         2mQeP6BO+TKPrGVEEoeuX8wy38fE0H2Gn2vHcY9+PAPK11sYBzxNQgnWPiVoQjT54JQ3
         HjPA==
X-Gm-Message-State: AOAM533hX7a87zkTKe9r3dsgIkUOyF09Sn4NRn2t8N/V/VRgbyhRmNfD
	FjM70ISBbdaAg/P0TOP/3FhGj912og5kXP26A68=
X-Google-Smtp-Source: ABdhPJyt8BzadCeh3ZrTJo8xv/NZrEx4+QfjHpzhm6AsGAVJ2HZDL0B/RgKQtDVn9mixT2hWRo/NShjw69wmGQoYzd4=
X-Received: by 2002:a2e:9f52:0:b0:255:7897:58be with SMTP id
 v18-20020a2e9f52000000b00255789758bemr22117827ljk.15.1654891217926; Fri, 10
 Jun 2022 13:00:17 -0700 (PDT)
MIME-Version: 1.0
References: <20220609185024.447922-1-mathieu.desnoyers@efficios.com> <20220609185024.447922-3-mathieu.desnoyers@efficios.com>
In-Reply-To: <20220609185024.447922-3-mathieu.desnoyers@efficios.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 10 Jun 2022 16:00:06 -0400
Message-ID: <CAKf6xpv3aBrzB=ds5jSd2MbFr=VePOMfJygos6E4cLegaizU0w@mail.gmail.com>
Subject: Re: [PATCH v5 2/5] grub-mkconfig linux_xen: Fix quadratic algorithm
 for sorting menu items
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: grub-devel@gnu.org, Daniel Kiper <dkiper@net-space.pl>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 9, 2022 at 2:50 PM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> The current implementation of the 20_linux_xen script implements its
> menu items sorting in bash with a quadratic algorithm, calling "sed",
> "sort", "head", and "grep" to compare versions between individual lines,
> which is annoyingly slow for kernel developers who can easily end up
> with 50-100 kernels in their boot partition.
>
> This fix is ported from the 10_linux script, which has a similar
> quadratic code pattern.
>
> [ Note: this is untested. I would be grateful if anyone with a Xen
>   environment could test it before it is merged. ]
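[ Editor's note: the quadratic pattern described in the quoted commit message, comparing entries pairwise with external sed/sort/head/grep invocations, can be replaced by a single version sort over the whole list. A minimal sketch with made-up version strings, not the actual 20_linux_xen code: ]

```shell
#!/bin/sh
# Hypothetical illustration: the slow pattern re-invokes sort/head
# once per pair of menu entries (O(n^2) external commands); a single
# `sort -rV` over the whole list needs one invocation total.
versions="5.18.3
5.19.0-rc2
5.4.191
5.18.0"

# One pass, newest kernel first, as a boot menu would want.
printf '%s\n' "$versions" | sort -rV
```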

Hi, Mathieu,

I tested by manually applying patch 2/5 on top of Fedora 36's
installed /etc/grub.d/20_linux_xen, and manually applying patch 1/5 to
/usr/share/grub/grub-mkconfig_lib.  It seems to generate the grub.cfg
menuentry items in the correct order.

Note for patch 1/5: it's best practice to use "$@" with the double
quotes to prevent word splitting of arguments.  It doesn't really
matter for that function at this time, though.
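[ Editor's note: a quick demonstration of the quoting point above; quoted "$@" forwards each argument intact, while unquoted $@ re-splits arguments that contain whitespace: ]

```shell
#!/bin/sh
# count_args reports how many arguments it received.
count_args() {
  echo "$#"
}

unquoted() {
  count_args $@    # unquoted: arguments with spaces get word-split
}

quoted() {
  count_args "$@"  # quoted: each argument is preserved exactly
}

unquoted "one arg" "two"   # prints 3 -- "one arg" was split in two
quoted "one arg" "two"     # prints 2 -- arguments forwarded intact
```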

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 20:09:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 20:09:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346683.572571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzkww-00049S-6l; Fri, 10 Jun 2022 20:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346683.572571; Fri, 10 Jun 2022 20:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzkww-00049L-3B; Fri, 10 Jun 2022 20:09:42 +0000
Received: by outflank-mailman (input) for mailman id 346683;
 Fri, 10 Jun 2022 20:09:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzkwv-00049B-7h; Fri, 10 Jun 2022 20:09:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzkwv-0008TU-3r; Fri, 10 Jun 2022 20:09:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzkwu-0006V1-QF; Fri, 10 Jun 2022 20:09:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzkwu-0001B2-Pj; Fri, 10 Jun 2022 20:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NRofbKSL7gGROpxUWfGHpXKRA4KxeU4E+cbEvtePrIA=; b=iwh+lRDAO4DHTPR4X+N7R1zDsp
	2TuDH5p8mMHM/+WAbJH0jtPdSn3Bqrqn5aWFrg4ak376EeuW1f+IobIynmoNuK/X2P/KBuWvlj7sh
	0Dd+pDeKhIxN0WVoDlhlGuoYZUrSf9Drw9M8IPFLXaqhftLaUPuc3oi/Nj6a6/E3S6JQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170911-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170911: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ae2dabe5ed880e31b2e8093a247026c45aa98337
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 20:09:40 +0000

flight 170911 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170911/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ae2dabe5ed880e31b2e8093a247026c45aa98337
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  700 days
Failing since        151818  2020-07-11 04:18:52 Z  699 days  681 attempts
Testing same since   170911  2022-06-10 04:20:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112707 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 20:16:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 20:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346695.572582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzl3Q-0005vi-1y; Fri, 10 Jun 2022 20:16:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346695.572582; Fri, 10 Jun 2022 20:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzl3P-0005vb-VU; Fri, 10 Jun 2022 20:16:23 +0000
Received: by outflank-mailman (input) for mailman id 346695;
 Fri, 10 Jun 2022 20:16:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/G3E=WR=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1nzl3O-0005vV-2m
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 20:16:22 +0000
Received: from mail-pg1-x52a.google.com (mail-pg1-x52a.google.com
 [2607:f8b0:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30169666-e8fa-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 22:16:19 +0200 (CEST)
Received: by mail-pg1-x52a.google.com with SMTP id 184so130770pga.12
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 13:16:19 -0700 (PDT)
Received: from [172.21.2.253] ([50.208.55.229])
 by smtp.gmail.com with ESMTPSA id
 b18-20020a62a112000000b0051c49fb62b7sm7044478pff.165.2022.06.10.13.16.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 10 Jun 2022 13:16:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30169666-e8fa-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=dVctl4IRQkd1fURU+QcpGbC8x5JnhZO/zEKOCc/5Ty0=;
        b=EY+ItrPHt5Ye3nt8CxjZltnAx1rCtMg/lqcfxI3EB6SBDqixhySMFpr69GGHmhjDI/
         IR6A/48AT/DMp6xs2SxY+nWoyKfm+UjNvwReUJTB+78VWdllV/18YeDjIqkjxB1L+nQa
         IF6zmUEb/+UswLnzEmJQrdSuTsNQGcHf5CsOgKgYmmn52p+Po4CM4v+T00Ebtz8qXpql
         JradafFOVN7S0tDPnB1LyVdXSFC1XgU6Hhb0NBkRv+9pVjRzEvTYEVOsq1yZFgdHw344
         d6R5tDaiMBwriqTJyECVBeUfbVyDcJ82t2rnk33isaac2hIdhY0GjGrBq2ya5g5PChJP
         wWog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=dVctl4IRQkd1fURU+QcpGbC8x5JnhZO/zEKOCc/5Ty0=;
        b=qxDRJrZlNVl5DQ4C542YA6bRDQI7pQwP8PCPRqSMcuZjuIFRSS0KE/OTzB+tEOx1Lc
         21iK3fDJ2IOW0Je5OfAr83LzxUpgXkUU6u7MynSXTRnEGx3kOIJTQJisol/VAwShQRyo
         AtptAJY7Ec6bdfHOWTPnENVNxHXaYv4J0EQL9215PK5SIM2vAIUBo9DvYqm+tsTQr+1j
         L6y2spHW1/97pH3rvVfADOm86xAAyu+uq4mdwHUbI/2JL+rTlqlBAfVct6+MFvoCVSRV
         KX09XHy3cb48U6f8SAzalqyhLxxXjh/ZZlRSKS/VUXUaXtQW9FOejO7NIips3vKvgCQw
         lccQ==
X-Gm-Message-State: AOAM531pUwtKr2OImK8ycKzTKkM1NcEuwzpoZywtC+Ad1UgsyTy2Dov4
	euMRWaVTWfjA9ZOQ4lntm1SaRA==
X-Google-Smtp-Source: ABdhPJxrGdO6Q0Zts48COU1P/9MJo/gokFe2MqT/BAV89cuQwKmYGsK+pVSRknZBtTOtWKGvYTgFPg==
X-Received: by 2002:a05:6a00:cd5:b0:51c:19f1:4657 with SMTP id b21-20020a056a000cd500b0051c19f14657mr28909933pfv.67.1654892178207;
        Fri, 10 Jun 2022 13:16:18 -0700 (PDT)
Message-ID: <adec1cff-54f1-e2bf-8092-945601aeb912@linaro.org>
Date: Fri, 10 Jun 2022 13:16:14 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PULL 00/17] Kraxel 20220610 patches
Content-Language: en-US
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>, "Michael S. Tsirkin"
 <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Akihiko Odaki <akihiko.odaki@gmail.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20220610092043.1874654-1-kraxel@redhat.com>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20220610092043.1874654-1-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/22 02:20, Gerd Hoffmann wrote:
> The following changes since commit 9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387:
> 
>    Merge tag 'pull-xen-20220609' of https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging (2022-06-09 08:25:17 -0700)
> 
> are available in the Git repository at:
> 
>    git://git.kraxel.org/qemu tags/kraxel-20220610-pull-request
> 
> for you to fetch changes up to 02319a4d67d3f19039127b8dc9ca9478b6d6ccd8:
> 
>    virtio-gpu: Respect UI refresh rate for EDID (2022-06-10 11:11:44 +0200)
> 
> ----------------------------------------------------------------
> usb: add CanoKey device, fixes for ehci + redir
> ui: fixes for gtk and cocoa, move keymaps, rework refresh rate
> virtio-gpu: scanout flush fix

This introduces regressions:

https://gitlab.com/qemu-project/qemu/-/jobs/2576157660
https://gitlab.com/qemu-project/qemu/-/jobs/2576151565
https://gitlab.com/qemu-project/qemu/-/jobs/2576154539
https://gitlab.com/qemu-project/qemu/-/jobs/2575867208


  (27/43) tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password:  ERROR: 
ConnectError: Failed to establish session: EOFError\n	Exit code: 1\n	Command: 
./qemu-system-x86_64 -display none -vga none -chardev 
socket,id=mon,path=/var/tmp/avo_qemu_sock_4nrz0r37/qemu-2912538-7f732e94e0f0-monitor.sock 
-mon chardev=mon,mode=control -node... (0.09 s)
  (28/43) tests/avocado/vnc.py:Vnc.test_change_password:  ERROR: ConnectError: Failed to 
establish session: EOFError\n	Exit code: 1\n	Command: ./qemu-system-x86_64 -display none 
-vga none -chardev 
socket,id=mon,path=/var/tmp/avo_qemu_sock_yhpzy5c3/qemu-2912543-7f732e94b438-monitor.sock 
-mon chardev=mon,mode=control -node... (0.09 s)
  (29/43) tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password:  ERROR: 
ConnectError: Failed to establish session: EOFError\n	Exit code: 1\n	Command: 
./qemu-system-x86_64 -display none -vga none -chardev 
socket,id=mon,path=/var/tmp/avo_qemu_sock_tk3pfmt2/qemu-2912548-7f732e93d7b8-monitor.sock 
-mon chardev=mon,mode=control -node... (0.09 s)


r~


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 20:24:48 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170935-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170935: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ccc269756f773d35aab67ccb935fa9548f30cff3
X-Osstest-Versions-That:
    ovmf=e7abb94d1fb8a0e7725b983bbf5ab1334afe7ed1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 20:24:43 +0000

flight 170935 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170935/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ccc269756f773d35aab67ccb935fa9548f30cff3
baseline version:
 ovmf                 e7abb94d1fb8a0e7725b983bbf5ab1334afe7ed1

Last test of basis   170919  2022-06-10 08:10:33 Z    0 days
Testing same since   170935  2022-06-10 12:40:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ray Ni <ray.ni@intel.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e7abb94d1f..ccc269756f  ccc269756f773d35aab67ccb935fa9548f30cff3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 21:13:09 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170908: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    xen-unstable:test-amd64-i386-examine-uefi:reboot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-libvirt-raw:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:reboot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-xsa-286:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:xtf/test-pv32pae-cpuid-faulting:fail:regression
    xen-unstable:test-xtf-amd64-amd64-4:leak-check/check:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-5:leak-check/check:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-unstable:test-amd64-i386-examine:reboot:fail:regression
    xen-unstable:test-amd64-amd64-examine:reboot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c1c9cae3a9633054b177c5de21ad7268162b2f2c
X-Osstest-Versions-That:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 21:12:53 +0000

flight 170908 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170908/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 170890
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170890
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170890
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170890
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170890
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 170890
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 170890
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 170890
 test-amd64-i386-examine-uefi  8 reboot                   fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170890
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 170890
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 170890
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 170890
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 170890
 test-amd64-i386-libvirt-raw   8 xen-boot                 fail REGR. vs. 170890
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 170890
 test-amd64-i386-examine-bios  8 reboot                   fail REGR. vs. 170890
 test-xtf-amd64-amd64-5       92 xtf/test-pv32pae-xsa-286 fail REGR. vs. 170890
 test-xtf-amd64-amd64-4 79 xtf/test-pv32pae-cpuid-faulting fail REGR. vs. 170890
 test-xtf-amd64-amd64-4       80 leak-check/check         fail REGR. vs. 170890
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 170890
 test-xtf-amd64-amd64-5       93 leak-check/check         fail REGR. vs. 170890
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 170890
 test-amd64-i386-xl-vhd        8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 170890
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 170890
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 170890
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170890
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 170890
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170890
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170890
 test-amd64-i386-examine       8 reboot                   fail REGR. vs. 170890
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170890
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 170890

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 170890
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170890
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170890
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170890
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170890
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170890
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170890
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170890
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170890
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  c1c9cae3a9633054b177c5de21ad7268162b2f2c
baseline version:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3

Last test of basis   170890  2022-06-08 20:37:45 Z    2 days
Failing since        170897  2022-06-09 08:20:47 Z    1 days    2 attempts
Testing same since   170908  2022-06-10 01:40:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <jgrall@amazon.com> # Arm
  Luca Fancellu <luca.fancellu@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       fail    
 test-xtf-amd64-amd64-5                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 372 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 21:25:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 21:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346739.572641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzm7r-0007dc-40; Fri, 10 Jun 2022 21:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346739.572641; Fri, 10 Jun 2022 21:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzm7r-0007dV-13; Fri, 10 Jun 2022 21:25:03 +0000
Received: by outflank-mailman (input) for mailman id 346739;
 Fri, 10 Jun 2022 21:25:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tsmg=WR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzm7p-0007dM-RN
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 21:25:01 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c8498ad1-e903-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 23:25:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 49949B83755;
 Fri, 10 Jun 2022 21:24:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EE6A4C34114;
 Fri, 10 Jun 2022 21:24:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8498ad1-e903-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654896298;
	bh=8vEeMnpQVQgn+XTr3cOzHqPBLBaipdXvRgKvW91D8h0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=C3+UbP9yiqyz+XDayfuDgvITF2N4I36PMINj/jO6GWC8MfX2i04juB6Z1Q4b1nbCh
	 Dx6LkNeFBwGpXLwR0bISrz9RmAVBQ14zBeKXLaKMkIktYv8K/36SvF01lC04BCAyeB
	 aTWxBbJD6XplS0E5mnxKSooHGOCDxkX9h9ZzRw5ncGdciisxQHaCIeCLHf2xcqWKIG
	 Yoxmtuu+X4qNlNyTkKF+Jgj0TGjhz17B5RI80+tV4quO8olZqCnCzgJcVTYeEybhCa
	 5Ru1eq3yGYspXW/m69oPoc204MlBRHgvW1pFVglyEvefc/d9ENI6U+2BDHz0u32nZs
	 rO2W9KMeOnnyw==
Date: Fri, 10 Jun 2022 14:24:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, George.Dunlap@citrix.com, 
    roger.pau@citrix.com, Artem_Mygaiev@epam.com, Andrew.Cooper3@citrix.com, 
    julien@xen.org, Bertrand.Marquis@arm.com, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
In-Reply-To: <c61f607b-bfdd-3162-7b26-b4681b4cce59@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206101420290.756493@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop> <c61f607b-bfdd-3162-7b26-b4681b4cce59@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 10 Jun 2022, Jan Beulich wrote:
> On 10.06.2022 02:48, Stefano Stabellini wrote:
> > +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> > +     - Required
> > +     - An identifier declared in an inner scope shall not hide an
> > +       identifier declared in an outer scope
> > +     - Using macros as macro parameters at invocation time is allowed,
> > +       e.g. MAX(var0, MIN(var1, var2))
> 
> I think the connection between the example and the rule could be made more
> clear, e.g. by adding "... even if both macros use identically named local
> variables".

Yep, I'll add


> > +   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
> > +     - Required
> > +     - A loop counter shall not have essentially floating type
> 
> This looks to be missing "point"?

I am not sure what you mean. Do you mean "floating-point" instead of
"floating"?

This is the actual headline for Rule 14.1. MISRA defines "Essential
types" (8.10.2), so in this case it is referring to the type
"essentially floating", which includes float, double and long double.

If you meant something different, I'll address it in the next version of
the patch.

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 21:28:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 21:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346747.572652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzmAj-0008E8-Kj; Fri, 10 Jun 2022 21:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346747.572652; Fri, 10 Jun 2022 21:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzmAj-0008E1-HY; Fri, 10 Jun 2022 21:28:01 +0000
Received: by outflank-mailman (input) for mailman id 346747;
 Fri, 10 Jun 2022 21:28:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tsmg=WR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzmAi-0008Dv-CH
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 21:28:00 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 334508b0-e904-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 23:27:59 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id BDE50B83765;
 Fri, 10 Jun 2022 21:27:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CD2D9C34114;
 Fri, 10 Jun 2022 21:27:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 334508b0-e904-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654896477;
	bh=KBP82+EJ08ywiXHELyDwBuHDDTpTUfvp7+EqgZWDwuE=;
	h=From:To:Cc:Subject:Date:From;
	b=V9pX9DwVjbvtYEz06hJ4GIQaq0IVH/unouwpz8f4YnuYJscUR510Wn5BndKvBABGB
	 CDHDOLy8tSJ13msOeaxsIpK7X5d7cWK2z05MEqtzTg5xHVWq6qWOoiE7ZcGaLYc5Cp
	 uZClXCJFWWiCYyT6tehZnf5cMSHYivtHYz8Bt56JDUJZHkrRVMx8a/HlbLH1yEV1XH
	 wbCsYwNpLmM1RuTdohEe7ILyfPfZ8VjIxTN3HydDmY/4m1AibkCCd5JTuvpQncdVwV
	 ETQIetSIXu0QA6pBoUAHsc7JwWGpcVRwG02+BVDTbtr7e8Pc8oLAxOonwYv6PtNwjr
	 SZGdHdUO5F9Tw==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	George.Dunlap@citrix.com,
	roger.pau@citrix.com,
	Artem_Mygaiev@epam.com,
	Andrew.Cooper3@citrix.com,
	julien@xen.org,
	Bertrand.Marquis@arm.com,
	jbeulich@suse.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2] add more MISRA C rules to docs/misra/rules.rst
Date: Fri, 10 Jun 2022 14:27:55 -0700
Message-Id: <20220610212755.1051640-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add the new MISRA C rules agreed by the MISRA C working group to
docs/misra/rules.rst.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
Given the minimal/trivial changes I kept the acked/reviewed-by.

Changes in v2:
- use max_t/min_t instead of MAX/MIN in the Rule 5.3 example
- improve wording for the note of Rule 5.3
---
 docs/misra/rules.rst | 90 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 6ccff07765..c0bdc75987 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -89,6 +89,29 @@ existing codebase are work-in-progress.
        (xen/include/public/) are allowed to retain longer identifiers
        for backward compatibility.
 
+   * - `Rule 5.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_02.c>`_
+     - Required
+     - Identifiers declared in the same scope and name space shall be
+       distinct
+     - The Xen character limit for identifiers is 40. Public headers
+       (xen/include/public/) are allowed to retain longer identifiers
+       for backward compatibility.
+
+   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
+     - Required
+     - An identifier declared in an inner scope shall not hide an
+       identifier declared in an outer scope
+     - Using macros as macro parameters at invocation time is allowed
+       even if both macros use identically named local variables, e.g.
+       max_t(var0, min_t(var1, var2))
+
+   * - `Rule 5.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_04.c>`_
+     - Required
+     - Macro identifiers shall be distinct
+     - The Xen character limit for macro identifiers is 40. Public
+       headers (xen/include/public/) are allowed to retain longer
+       identifiers for backward compatibility.
+
    * - `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
      - Required
      - Single-bit named bit fields shall not be of a signed type
@@ -123,8 +146,75 @@ existing codebase are work-in-progress.
        declarations of objects and functions that have internal linkage
      -
 
+   * - `Rule 8.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_10.c>`_
+     - Required
+     - An inline function shall be declared with the static storage class
+     - gnu_inline (without static) is allowed.
+
    * - `Rule 8.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_12.c>`_
      - Required
      - Within an enumerator list the value of an implicitly-specified
        enumeration constant shall be unique
      -
+
+   * - `Rule 9.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_01.c>`_
+     - Mandatory
+     - The value of an object with automatic storage duration shall not
+       be read before it has been set
+     - Rule clarification: do not use variables before they are
+       initialized. An explicit initializer is not necessarily required.
+       Try reducing the scope of the variable. If an explicit
+       initializer is added, consider initializing the variable to a
+       poison value.
+
+   * - `Rule 9.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_02.c>`_
+     - Required
+     - The initializer for an aggregate or union shall be enclosed in
+       braces
+     -
+
+   * - `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
+     - Mandatory
+     - The operand of the sizeof operator shall not contain any
+       expression which has potential side effects
+     -
+
+   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
+     - Required
+     - A loop counter shall not have essentially floating type
+     -
+
+   * - `Rule 16.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_07.c>`_
+     - Required
+     - A switch-expression shall not have essentially Boolean type
+     -
+
+   * - `Rule 17.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_03.c>`_
+     - Mandatory
+     - A function shall not be declared implicitly
+     -
+
+   * - `Rule 17.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_04.c>`_
+     - Mandatory
+     - All exit paths from a function with non-void return type shall
+       have an explicit return statement with an expression
+     -
+
+   * - `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
+     - Required
+     - Expressions resulting from the expansion of macro parameters
+       shall be enclosed in parentheses
+     -
+
+   * - `Rule 20.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_13.c>`_
+     - Required
+     - A line whose first token is # shall be a valid preprocessing
+       directive
+     -
+
+   * - `Rule 20.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_14.c>`_
+     - Required
+     - All #else #elif and #endif preprocessor directives shall reside
+       in the same file as the #if #ifdef or #ifndef directive to which
+       they are related
+     -
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 21:57:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 21:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346760.572673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzmcw-0003vt-0k; Fri, 10 Jun 2022 21:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346760.572673; Fri, 10 Jun 2022 21:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzmcv-0003vm-Td; Fri, 10 Jun 2022 21:57:09 +0000
Received: by outflank-mailman (input) for mailman id 346760;
 Fri, 10 Jun 2022 21:57:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZtTF=WR=oracle.com=dongli.zhang@srs-se1.protection.inumbo.net>)
 id 1nzmcu-0003vg-3N
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 21:57:09 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 437d4bbf-e908-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 23:57:05 +0200 (CEST)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25AK4PA3013538;
 Fri, 10 Jun 2022 21:56:16 GMT
Received: from phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta03.appoci.oracle.com [138.1.37.129])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gfydqxjwx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 10 Jun 2022 21:56:16 +0000
Received: from pps.filterd
 (phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 25ALURow035728; Fri, 10 Jun 2022 21:56:15 GMT
Received: from nam10-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam10lp2105.outbound.protection.outlook.com [104.47.55.105])
 by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com with ESMTP id
 3gfwu70apy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 10 Jun 2022 21:56:15 +0000
Received: from BN7PR10MB2659.namprd10.prod.outlook.com (2603:10b6:406:c5::18)
 by MWHPR10MB1887.namprd10.prod.outlook.com (2603:10b6:300:10b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5314.15; Fri, 10 Jun
 2022 21:56:13 +0000
Received: from BN7PR10MB2659.namprd10.prod.outlook.com
 ([fe80::54fd:3174:8ce4:985a]) by BN7PR10MB2659.namprd10.prod.outlook.com
 ([fe80::54fd:3174:8ce4:985a%6]) with mapi id 15.20.5332.014; Fri, 10 Jun 2022
 21:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 437d4bbf-e908-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2021-07-09;
 bh=Ah1q/mlyzuwQjVr+0PaIA+PZqMDHC3wQFrogkSXgZ1o=;
 b=twvFhPE2hMl+9jDxQPaZ6hPSr5g5f+sVWYa5EKYaReCYaHZyme6fCvCluyt94gT1VK5L
 E9J76yJ+BXNaV/VKV3nAHOQ3ORCX8Br1cvV1hhLWsfKWWbX0J++lXtUsR69SjNbtMmbE
 KZ7sKY4oqn97s2kRqE+4s6GZZPXVHu0OqBlMKIi9TTp88xTu/mk5Vy5fWLsqVUGPqX+r
 W2fu/pjUhCvFW273l44LRRA2MZcuOqt+6bwqUP/+CCppi9/neIRopn4lMMi87fC3RuMv
 EON3GXFbE9TSWrJ7s/vAmBxrpink4+MGopueDV/BxmWUZ57u+JJD/O9BuzvqV3DiVa0m 7Q== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RkfIyM7RiKKEhZlKQMn5AG0Stv1daxuZRc6CgGjA+f5/gjHT0P8s6KWDVLOsMckVeVEiE4/RynRYK1ROhHRtthZRBOCGdE1td4UYabgcVMK1JpXCPmam3ZCvHOoJDOFfAPiqt54mBUIGSjPE5U41zmxgrqhciNJuMREXtdp97q3Jq/1HJmVIxJf2iwJLMzHBPGn+jp1oWEPAvQX6JMdaSpcSzmn7WrlC5/xU7sUTSA+AyPgH+E71WZIS2SQN/ds2QrAbrX9QtrNnt8/F0ZlanceoaDc05cebgoR9EcYTI+deTYIW5Kby+k4bNVEPRVKpN6utlz33RKxLSnamt7jz8g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ah1q/mlyzuwQjVr+0PaIA+PZqMDHC3wQFrogkSXgZ1o=;
 b=S+Q8TeJNBI8rDNDuZau8dDQmxhwc9ZWNJ6GrnYaRm62bwrNZrEZj3PWkfjPB9cwxy5UiE6ejFlZvC5HnrhCwd5XrYzA8acGcQzryw9M/yOsfeMuApuRhgORis8X2Kmvk5H7ZemSNummNlr8+7+ivL+kxEFFQUCGS9i5amuLKmhSj6HgUlc3Xd8yADQeQqseYdhYMTOmwlrkVa1Ww+OLZGimhNTt3/hJvI13ZkRqRadZPkpTf6Itc/tAP08Vm5glt8VCc1YAl9OB66AX8B7HBtPiovRPleJQTD/LYts3SQ/ycTDhQl5gsUX0coMhAK43TajmoUNymNM9jXgSRWMFdqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ah1q/mlyzuwQjVr+0PaIA+PZqMDHC3wQFrogkSXgZ1o=;
 b=AxKzW2CfhOl6Ha43E4IrDl+dLlbthxL9dgpYr2bNksa3w7cQlzNTKhxZils/pyA3y9drnQNIvMXsB0oxCOEG8ujj2BqZjh0Uj0H/OaazJ65ismvf66UXJKY6qRM8AdVhgXG2CfserLzhmNPCaHwSp1ycGfIRvAl5J6EZyhrJXpw=
Subject: Re: [PATCH RFC v1 4/7] swiotlb: to implement io_tlb_high_mem
To: Christoph Hellwig <hch@infradead.org>
Cc: iommu@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
        x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
        virtualization@lists.linux-foundation.org,
        linux-kernel@vger.kernel.org, m.szyprowski@samsung.com,
        jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
        dave.hansen@linux.intel.com, sstabellini@kernel.org,
        mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
        jasowang@redhat.com, joe.jin@oracle.com
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-5-dongli.zhang@oracle.com>
 <YqF/sphJj6n+22Si@infradead.org>
From: Dongli Zhang <dongli.zhang@oracle.com>
Message-ID: <e6345c27-78fd-be72-9551-1d1fd5db37a4@oracle.com>
Date: Fri, 10 Jun 2022 14:56:08 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
In-Reply-To: <YqF/sphJj6n+22Si@infradead.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: DM6PR04CA0023.namprd04.prod.outlook.com
 (2603:10b6:5:334::28) To BN7PR10MB2659.namprd10.prod.outlook.com
 (2603:10b6:406:c5::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3d1c90ed-5e46-4fa9-ef24-08da4b2c0924
X-MS-TrafficTypeDiagnostic: MWHPR10MB1887:EE_
X-Microsoft-Antispam-PRVS: 
	<MWHPR10MB18874953901C1646FDF821D9F0A69@MWHPR10MB1887.namprd10.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d1c90ed-5e46-4fa9-ef24-08da4b2c0924
X-MS-Exchange-CrossTenant-AuthSource: BN7PR10MB2659.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2022 21:56:13.1656
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FxU1E+KjBkwdSLtMS5Eqqir0vGhxNcn33SnePkFruX5zkUnvfuLqNGXAqJlDVkrZiPFY7PcwSg6MgUl9Xy7U4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR10MB1887
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.517,18.0.874
 definitions=2022-06-10_09:2022-06-09,2022-06-10 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 bulkscore=0
 malwarescore=0 phishscore=0 adultscore=0 suspectscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2204290000
 definitions=main-2206100082
X-Proofpoint-GUID: X_VrRHmM9J-mUPYgf3I4FFxvCnOFl0Og
X-Proofpoint-ORIG-GUID: X_VrRHmM9J-mUPYgf3I4FFxvCnOFl0Og

Hi Christoph,

On 6/8/22 10:05 PM, Christoph Hellwig wrote:
> All this really needs to be hidden under the hood.
> 

Since this patch file has 200+ lines, would you please clarify what
'this' indicates?

The idea of this patch:

1. Convert the functions that initialize swiotlb into callees, e.g.,
swiotlb_init_remap() into __swiotlb_init_remap(). The callee accepts 'true' or
'false' as an argument to indicate whether it is for the default or the new
swiotlb buffer.

2. The caller side decides whether to call the callee to create the extra
buffer.

Do you mean the callee is still *too high level* and all the
decision/allocation/initialization should be done at *lower level functions*?

E.g., if I re-work the "struct io_tlb_mem" to make the 64-bit buffer as the 2nd
array of io_tlb_mem->slots[32_or_64][index], the user will only notice it is the
default 'io_tlb_default_mem', but will not be able to know if it is allocated
from 32-bit or 64-bit buffer.

Thank you very much for the feedback.

Dongli Zhang


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:02:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:02:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346779.572707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzndt-00047M-2j; Fri, 10 Jun 2022 23:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346779.572707; Fri, 10 Jun 2022 23:02:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nznds-00047F-Vm; Fri, 10 Jun 2022 23:02:12 +0000
Received: by outflank-mailman (input) for mailman id 346779;
 Fri, 10 Jun 2022 23:02:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzndr-00046m-BT; Fri, 10 Jun 2022 23:02:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzndr-0002xA-9H; Fri, 10 Jun 2022 23:02:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzndq-0005ny-Nj; Fri, 10 Jun 2022 23:02:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzndq-0002F9-NK; Fri, 10 Jun 2022 23:02:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RHOURpietLyYxmHAzvUwbWmPaRookgzmJwaspQYZDWw=; b=jiF2VIP9PaQ7IYh8hZYkdlcNij
	NMda+MuoadckrrCqI8MzTR63s3Ub0LQDKkXFlCHku9Cp8Wp2lzFyV7ukpp1C4jzDsGA5RyBHDqokl
	/gLmuKGvp4g0FKYsFQTUEakegmJo+bdMgy5dfnMHGVvHCQMHlSErkKjFeewaFxX9Hzdk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170915-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170915: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-xsm:debian-fixup:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387
X-Osstest-Versions-That:
    qemuu=9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 23:02:10 +0000

flight 170915 qemu-mainline real [real]
flight 170968 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170915/
http://logs.test-lab.xenproject.org/osstest/logs/170968/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-xsm       13 debian-fixup        fail pass in 170968-retest
 test-amd64-i386-xl-vhd       22 guest-start.2       fail pass in 170968-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170884
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170884
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170884
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170884
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170884
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170884
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170884
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170884
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387
baseline version:
 qemuu                9b1f58854959c5a9bdb347e3e04c252ab7fc9ef5

Last test of basis   170884  2022-06-08 10:42:45 Z    2 days
Failing since        170891  2022-06-09 01:39:44 Z    1 days    3 attempts
Testing same since   170915  2022-06-10 05:30:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Williamson <alex.williamson@redhat.com>
  Alistair Francis <alistair.francis@wdc.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bernhard Beschow <shentey@gmail.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eric Auger <eric.auger@redhat.com>
  Frederic Konrad <fkonrad@amd.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sai Pavan Boddu <saipava@xilinx.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9b1f588549..9cc1bf1ebc  9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:02:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:02:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346782.572719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzne2-0004P0-Kh; Fri, 10 Jun 2022 23:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346782.572719; Fri, 10 Jun 2022 23:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzne2-0004Ot-FY; Fri, 10 Jun 2022 23:02:22 +0000
Received: by outflank-mailman (input) for mailman id 346782;
 Fri, 10 Jun 2022 23:02:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzne0-0004OH-OV; Fri, 10 Jun 2022 23:02:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzne0-0002xJ-M0; Fri, 10 Jun 2022 23:02:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzne0-0005oC-FC; Fri, 10 Jun 2022 23:02:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzne0-0002M0-Ej; Fri, 10 Jun 2022 23:02:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=LB9hrXRXMMWriNeo2FEZuQNEQaG4lYX6XEtSeMbn7Vk=; b=jDdwWmscfcBJsjgpAvTYrUWtmH
	3g6lJfhnBCYCy6sjGoTGk9vmPNn4gCyHuTIEYznDE+2+1SFO6fosAjVO/ZSxn0IlXkDbWOrEMiCxA
	4//0ejCIx73bz5OL+qlW1Fcn/god9RYhGQ1rY5CW1IPd6YJ+Uufs2RCtOtB1KRH1giuE=;
To: xen-devel@lists.xenproject.org
Subject: [xen-4.16-testing bisection] complete test-amd64-i386-xl-shadow
Message-Id: <E1nzne0-0002M0-Ej@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 23:02:20 +0000

branch xen-4.16-testing
xenbranch xen-4.16-testing
job test-amd64-i386-xl-shadow
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  b152dfbc3ad71a788996440b18174d995c3bffc9
  Bug not present: 8e11ec8fbf6f933f8854f4bc54226653316903f2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/170971/


  commit b152dfbc3ad71a788996440b18174d995c3bffc9
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 15:27:19 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>
      master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
      master date: 2022-06-09 14:20:36 +0200


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.16-testing/test-amd64-i386-xl-shadow.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.16-testing/test-amd64-i386-xl-shadow.xen-boot --summary-out=tmp/170971.bisection-summary --basis-template=170871 --blessings=real,real-bisect,real-retry xen-4.16-testing test-amd64-i386-xl-shadow xen-boot
Searching for failure / basis pass:
 170901 fail [host=nobling1] / 170871 [host=albana1] 169333 [host=debina1] 169252 [host=chardonnay0] 169238 [host=albana0] 169194 [host=nobling0] 169179 [host=debina0] 169119 [host=huxelrebe0] 169086 ok.
Failure / basis pass flights: 170901 / 169086
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 d239552ce7220e448ae81f41515138f7b9e3c4db e34c16cc6ee029fa75c35bd21f75103d5502ea30
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#b1b89f9009f2390652e0061bd7b24fc40732bc70-ff36b2550f94dc5fac838cf298ae5a23cfddf204 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#107951211a8d17658e1aaa0c23a8cf29f8806ad8-107951211a8d17658e1aaa0c23a8cf29f8806ad8 git://xenbits.xen.org/osstest/seabios.git#d239552ce7220e448ae81f41515138f7b9e3c4db-dc88f9b72df52b22c35b127b80c487e0b6fca4af git://xenbits.xen.org/xen.git#e34c16cc6ee029fa75c35bd21f75103d5502ea30-dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Loaded 7994 nodes in revision graph
Searching for test results:
 169086 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 d239552ce7220e448ae81f41515138f7b9e3c4db e34c16cc6ee029fa75c35bd21f75103d5502ea30
 169119 [host=huxelrebe0]
 169179 [host=debina0]
 169194 [host=nobling0]
 169238 [host=albana0]
 169252 [host=chardonnay0]
 169333 [host=debina1]
 170871 [host=albana1]
 170901 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
 170914 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 d239552ce7220e448ae81f41515138f7b9e3c4db e34c16cc6ee029fa75c35bd21f75103d5502ea30
 170917 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
 170920 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a298a84478053872ed9da660a75f182ce81b8ddc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 2c026fe1f159494b3ec05f19ddfb3d39ff901296
 170926 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a332ffb6efb57c338087551325cc4fffb0c81210 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2
 170928 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b57911c84c10e9e374c1b3d30164648f72afdd57 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2
 170936 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 6c6bbfdff9374ef41f84c4ebed7b8a7a40767ef6
 170943 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 88b653f73928117461dc250acd1e830a47a14c2b
 170946 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 170950 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 9cfd796ae05421ded8e4f70b2c55352491cfa841
 170955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
 170960 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 170961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
 170969 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 170971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
Searching for interesting versions
 Result found: flight 169086 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2, results HASH(0x55b1c4c497f0) HASH(0x55b1c4d117a8) HASH(0x55b1c4d196c8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 88b653f73928117461dc250acd1e830a47a14c2b, results HASH(0x55b1c4cf1e40) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c\
 23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 6c6bbfdff9374ef41f84c4ebed7b8a7a40767ef6, results HASH(0x55b1c4c49df0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b57911c84c10e9e374c1b3d30164648f72afdd57 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2, results HASH(0x55b1c4c42bb0) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a332ffb6efb57c338087551325cc4fffb0c81210 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2, results HASH(0x55b1c4c23898) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a298a84478053872ed9d\
 a660a75f182ce81b8ddc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 2c026fe1f159494b3ec05f19ddfb3d39ff901296, results HASH(0x55b1c4c4b7f8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 d239552ce7220e448ae81f41515138f7b9e\
 3c4db e34c16cc6ee029fa75c35bd21f75103d5502ea30, results HASH(0x55b1c4c37158) HASH(0x55b1c4c483e8) Result found: flight 170901 (fail), for basis failure (at ancestor ~646)
 Repro found: flight 170914 (pass), for basis pass
 Repro found: flight 170917 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
No revisions left to test, checking graph state.
 Result found: flight 170946 (pass), for last pass
 Result found: flight 170955 (fail), for first failure
 Repro found: flight 170960 (pass), for last pass
 Repro found: flight 170961 (fail), for first failure
 Repro found: flight 170969 (pass), for last pass
 Repro found: flight 170971 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  b152dfbc3ad71a788996440b18174d995c3bffc9
  Bug not present: 8e11ec8fbf6f933f8854f4bc54226653316903f2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/170971/


  commit b152dfbc3ad71a788996440b18174d995c3bffc9
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 15:27:19 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>
      master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
      master date: 2022-06-09 14:20:36 +0200

pnmtopng: 103 colors found
Revision graph left in /home/logs/results/bisect/xen-4.16-testing/test-amd64-i386-xl-shadow.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
170971: tolerable ALL FAIL

flight 170971 xen-4.16-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/170971/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-shadow     8 xen-boot                fail baseline untested


jobs:
 test-amd64-i386-xl-shadow                                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:15:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:15:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346804.572736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nznqH-0006tX-QA; Fri, 10 Jun 2022 23:15:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346804.572736; Fri, 10 Jun 2022 23:15:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nznqH-0006tQ-Mb; Fri, 10 Jun 2022 23:15:01 +0000
Received: by outflank-mailman (input) for mailman id 346804;
 Fri, 10 Jun 2022 23:15:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznqG-0006tE-Bi; Fri, 10 Jun 2022 23:15:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznqG-00039M-Ap; Fri, 10 Jun 2022 23:15:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznqF-0006Mn-UE; Fri, 10 Jun 2022 23:15:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nznqF-0001ZF-Tl; Fri, 10 Jun 2022 23:14:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MTQ6DKka5zfJc5HVJ1eWcOOwIRxe+EX9RoJzrYQnRt4=; b=Ub6d5scYNShEdY3jd89uw0uEPt
	NIX+PkZAe57/TLyxZ3nQtHZXmDkWjIddaHyrJvsUejKhqjD1Lc098rBub0AcYE/xkt7WCvzEgggzd
	+aQKbbH9WDKTmWdP/2gqpeeiyNpk3tNLw8+hB16U7IasyRDgjdkw0D996n2XKlbWBpgk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170964-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 170964: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f0b97e165e0af79ac9eb3c6ac7697f9183afc7cb
X-Osstest-Versions-That:
    ovmf=ccc269756f773d35aab67ccb935fa9548f30cff3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 23:14:59 +0000

flight 170964 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170964/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f0b97e165e0af79ac9eb3c6ac7697f9183afc7cb
baseline version:
 ovmf                 ccc269756f773d35aab67ccb935fa9548f30cff3

Last test of basis   170935  2022-06-10 12:40:30 Z    0 days
Testing same since   170964  2022-06-10 20:46:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ccc269756f..f0b97e165e  f0b97e165e0af79ac9eb3c6ac7697f9183afc7cb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:19:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:19:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346813.572746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nznuk-0007Wu-Bd; Fri, 10 Jun 2022 23:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346813.572746; Fri, 10 Jun 2022 23:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nznuk-0007Wn-8l; Fri, 10 Jun 2022 23:19:38 +0000
Received: by outflank-mailman (input) for mailman id 346813;
 Fri, 10 Jun 2022 23:19:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznui-0007Wd-Pq; Fri, 10 Jun 2022 23:19:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznui-0003Ei-L1; Fri, 10 Jun 2022 23:19:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nznui-0006Xx-8y; Fri, 10 Jun 2022 23:19:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nznui-0005IW-8Q; Fri, 10 Jun 2022 23:19:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=AO9OyvM75a6VG3Z3Wk2by7qqxAEF73GL7HYEwr8ax70=; b=e7vjn7sPzit7rWW8M4IxzSCqXD
	7ig5+Nhwn9RDTkeY5FVkRu36y3P15HjFg0GbhlqrxkTuMyLgMOlQ6vYgQfHf04iYTWUHcH6Gx+er0
	aLiis8ruSDbzWtpGoVAzPOvp0wKRxkLE7n5I7PCbIf7OAFstR0oBBcRfbjtjMgBs0KF8=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-xl-qemut-debianhvm-i386-xsm
Message-Id: <E1nznui-0005IW-8Q@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jun 2022 23:19:36 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f3185c165d28901c3222becfc8be547263c53745
  Bug not present: 7158e80c887d8b451c8525b7fe32049742814e69
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/170958/


  commit f3185c165d28901c3222becfc8be547263c53745
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Wed Jun 8 17:03:32 2022 +0200
  
      IOMMU/x86: perform PV Dom0 mappings in batches
      
      For large page mappings to be easily usable (i.e. in particular without
      un-shattering of smaller page mappings) and for mapping operations to
      then also be more efficient, pass batches of Dom0 memory to iommu_map().
      In dom0_construct_pv() and its helpers (covering strict mode) this
      additionally requires establishing the type of those pages (albeit with
      zero type references).
      
      The earlier establishing of PGT_writable_page | PGT_validated requires
      the existing places where this gets done (through get_page_and_type())
      to be updated: For pages which actually have a mapping, the type
      refcount needs to be 1.
      
      There is actually a related bug that gets fixed here as a side effect:
      Typically the last L1 table would get marked as such only after
      get_page_and_type(..., PGT_writable_page). While this is fine as far as
      refcounting goes, the page did remain mapped in the IOMMU in this case
      (when "iommu=dom0-strict").
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Roger Pau Monné <roger.pau@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.xen-boot --summary-out=tmp/170967.bisection-summary --basis-template=170890 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-amd64-xl-qemut-debianhvm-i386-xsm xen-boot
Searching for failure / basis pass:
 170908 fail [host=himrod0] / 170890 [host=nocera0] 170877 [host=huxelrebe0] 170865 [host=nobling1] 170852 [host=nobling0] 170840 [host=chardonnay0] 170823 [host=albana0] 170813 [host=debina1] 170806 [host=fiano1] 170801 [host=godello0] 170797 [host=albana1] 170792 [host=pinot0] 170780 [host=chardonnay1] 170772 [host=italia0] 170766 [host=italia1] 170758 [host=elbling0] 170751 [host=debina0] 170747 [host=sabro1] 170740 [host=fiano0] 170726 [host=nobling1] 170720 ok.
Failure / basis pass flights: 170908 / 170720
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 ec0cce125b8b9fccde3fa825b8ee963083b5de3b
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#a68d6d311c2d1fd9d2fa9a0768ea235\
 3e8a79b42-a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 git://xenbits.xen.org/xen.git#ec0cce125b8b9fccde3fa825b8ee963083b5de3b-c1c9cae3a9633054b177c5de21ad7268162b2f2c
Loaded 5001 nodes in revision graph
Searching for test results:
 170606 [host=nocera0]
 170647 [host=huxelrebe0]
 170657 [host=nocera1]
 170712 [host=sabro0]
 170720 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 ec0cce125b8b9fccde3fa825b8ee963083b5de3b
 170726 [host=nobling1]
 170740 [host=fiano0]
 170747 [host=sabro1]
 170751 [host=debina0]
 170758 [host=elbling0]
 170766 [host=italia1]
 170772 [host=italia0]
 170780 [host=chardonnay1]
 170792 [host=pinot0]
 170797 [host=albana1]
 170801 [host=godello0]
 170806 [host=fiano1]
 170813 [host=debina1]
 170823 [host=albana0]
 170840 [host=chardonnay0]
 170852 [host=nobling0]
 170865 [host=nobling1]
 170877 [host=huxelrebe0]
 170890 [host=nocera0]
 170897 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170909 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 ec0cce125b8b9fccde3fa825b8ee963083b5de3b
 170910 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170912 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 cea9ae06229577cd5b77019ce122f9cdd1568106
 170916 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f8c818848fa64b1957411faea7cee22d677cebcc
 170918 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 e7f144f80839168e632ea4405ad114e991beecdf
 170927 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 be464973e4565fd9b4999a6eb9db9f469616f07b
 170931 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 28e13c7f4382f5dce6b2fb2ccef2098f22c04694
 170934 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 7158e80c887d8b451c8525b7fe32049742814e69
 170939 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170945 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 7158e80c887d8b451c8525b7fe32049742814e69
 170947 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170954 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 7158e80c887d8b451c8525b7fe32049742814e69
 170908 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
 170958 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
Searching for interesting versions
 Result found: flight 170720 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 7158e80c887d8b451c8525b7fe32049742814e69, results HASH(0x55e6d96a0958) HASH(0x55e6d96ad8e0) HASH(0x55e6d96a0748) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311\
 c2d1fd9d2fa9a0768ea2353e8a79b42 28e13c7f4382f5dce6b2fb2ccef2098f22c04694, results HASH(0x55e6d9c89990) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 be464973e4565fd9b4999a6eb9db9f469616f07b, results HASH(0x55e6d9c88768) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 e7f144f80839168e632ea4405ad114e991beecdf, results HASH(0x55e6d9c0c518) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f8c818848fa64b1957411faea7cee22d677cebcc, results HASH(0x55e6d9c0a510) For basis failure, parent search stopping at c3038e718a19\
 fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 cea9ae06229577cd5b77019ce122f9cdd1568106, results HASH(0x55e6d9c092e8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 ec0cce125b8b9fccde3fa825b8ee963083b5de3b, results HASH(0x55e6d9bfac6\
 0) HASH(0x55e6d9c044d0) Result found: flight 170897 (fail), for basis failure (at ancestor ~164)
 Repro found: flight 170909 (pass), for basis pass
 Repro found: flight 170967 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 7158e80c887d8b451c8525b7fe32049742814e69
No revisions left to test, checking graph state.
 Result found: flight 170934 (pass), for last pass
 Result found: flight 170939 (fail), for first failure
 Repro found: flight 170945 (pass), for last pass
 Repro found: flight 170947 (fail), for first failure
 Repro found: flight 170954 (pass), for last pass
 Repro found: flight 170958 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f3185c165d28901c3222becfc8be547263c53745
  Bug not present: 7158e80c887d8b451c8525b7fe32049742814e69
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/170958/


  commit f3185c165d28901c3222becfc8be547263c53745
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Wed Jun 8 17:03:32 2022 +0200
  
      IOMMU/x86: perform PV Dom0 mappings in batches
      
      For large page mappings to be easily usable (i.e. in particular without
      un-shattering of smaller page mappings) and for mapping operations to
      then also be more efficient, pass batches of Dom0 memory to iommu_map().
      In dom0_construct_pv() and its helpers (covering strict mode) this
      additionally requires establishing the type of those pages (albeit with
      zero type references).
      
      The earlier establishing of PGT_writable_page | PGT_validated requires
      the existing places where this gets done (through get_page_and_type())
      to be updated: For pages which actually have a mapping, the type
      refcount needs to be 1.
      
      There is actually a related bug that gets fixed here as a side effect:
      Typically the last L1 table would get marked as such only after
      get_page_and_type(..., PGT_writable_page). While this is fine as far as
      refcounting goes, the page did remain mapped in the IOMMU in this case
      (when "iommu=dom0-strict").
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
170967: tolerable FAIL

flight 170967 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/170967/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 8 xen-boot fail baseline untested


jobs:
 build-amd64-xsm                                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:35:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:35:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346828.572761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzoAQ-00024O-Um; Fri, 10 Jun 2022 23:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346828.572761; Fri, 10 Jun 2022 23:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzoAQ-00024H-RB; Fri, 10 Jun 2022 23:35:50 +0000
Received: by outflank-mailman (input) for mailman id 346828;
 Fri, 10 Jun 2022 23:35:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tsmg=WR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzoAQ-00024B-3z
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 23:35:50 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e3920e6-e916-11ec-bd2c-47488cf2e6aa;
 Sat, 11 Jun 2022 01:35:48 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 5FD86B837EF;
 Fri, 10 Jun 2022 23:35:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6F63BC36B0C;
 Fri, 10 Jun 2022 23:35:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e3920e6-e916-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654904145;
	bh=BuqirxjS9JtGtMwszroJhztNGlaevYs4Gdo6iCFAnZM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=E99QUel3G9pDHo9Q8vDeRwaVja1I5FRHOpS/4YJnsprWcSbnWAbuGVw/DW+L8rzxA
	 dQC9sakJ7gxxFecoQG1aqgvHwKynaL0V5TSUIuu6JpCmY5Qi+Ll6qzkwsypvBBB1x0
	 F6DfpUy1EkIvfi0oDFRJEUIGsC0+2OtGiZTE3UZ3EPbV1yR4CF9oi3mOLhC6k1vWCl
	 0Id6GkNxxRSEmVyx5CTHmm+T6yXhzsFXwVn4YHCtLJ8OHvt0jFYS40iNEVqWUaXniM
	 lg/9qRLFSkUIuVSpkMfL5ln/k4TFCi2OuNmpGwCJ17PjC1y6ddyaZWxwbRskWrPque
	 N41EbPT5oxGzg==
Date: Fri, 10 Jun 2022 16:35:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Juergen Gross <jgross@suse.com>, Michal Orzel <michal.orzel@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct
 tm members
In-Reply-To: <295e9c7e-e0de-bbd3-eec4-0864cb2ef086@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206101630520.756493@ubuntu-linux-20-04-desktop>
References: <20220610083358.101412-1-michal.orzel@arm.com> <20220610083358.101412-4-michal.orzel@arm.com> <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com> <2ccd52a7-a5b2-c221-b847-ed0c9de2effd@suse.com> <295e9c7e-e0de-bbd3-eec4-0864cb2ef086@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 10 Jun 2022, Jan Beulich wrote:
> On 10.06.2022 11:51, Juergen Gross wrote:
> > On 10.06.22 11:44, Jan Beulich wrote:
> >> On 10.06.2022 10:33, Michal Orzel wrote:
> >>> All the members of struct tm are defined as integers but the format tags
> >>> used in console driver for snprintf wrongly expect unsigned values. Fix
> >>> the tags to expect integers.
> >>
> >> Perhaps do things the other way around - convert field types to unsigned
> >> unless negative values can be stored there? This would match our general
> >> aim of using unsigned types when only non-negative values can be held in
> >> variables / parameters / fields.
> > 
> > Don't you think keeping struct tm in sync with the Posix definition should
> > be preferred here?
> 
> Not necessarily, no. It's not just POSIX which has a (imo bad) habit of
> using plain "int" even for values which can never go negative.

I committed the other two patches in the series because they were
already acked and straightforward.

In this case, I think the straightforward/mechanical fix is the one
Michal suggested in this patch: fixing %u to be %d. We could of course
consider changing the definition of tm, and there are valid reasons to
do that as Jan pointed out, but I think this patch is valid as-is
anyway.

So I am happy to give my reviewed-by for this version of the patch, and
we can still consider changing tm to use unsigned if someone feels like
proposing a patch for that.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Fri Jun 10 23:56:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jun 2022 23:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346841.572783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzoTu-0004tM-Pt; Fri, 10 Jun 2022 23:55:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346841.572783; Fri, 10 Jun 2022 23:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzoTu-0004tF-Mg; Fri, 10 Jun 2022 23:55:58 +0000
Received: by outflank-mailman (input) for mailman id 346841;
 Fri, 10 Jun 2022 23:55:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tsmg=WR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzoTt-0004t9-75
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 23:55:57 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ddcd0059-e918-11ec-8901-93a377f238d6;
 Sat, 11 Jun 2022 01:55:55 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 631FBB82675;
 Fri, 10 Jun 2022 23:55:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A53EBC34114;
 Fri, 10 Jun 2022 23:55:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddcd0059-e918-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654905353;
	bh=PlxS3We0GdjvIOX2k5kRle3oBLQPYE2copSxEUBjiCE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=e9MFev+ELzkkT08dn92q8sS/Kd47fC9pAvCBRCz9e26GbRXhrfwisf8908zziERxw
	 KHcjGSXKKNVJ40jcxbxKerTv3ucB8IOGqtNlDHAosZ7pcWvFIHbNq1ekks8PIURuGz
	 dt3k3JBCaehqBlEIqtkssdBmQnCJbL3I5cS0FHVxos+Q4t7kaNCdKgDdu+lQwFxOjI
	 jaUjfLTme3PKgNddTPXufGtJcmq/ftmgTTf7UHGFNlSNYQrZoCzFKXM/ORyu9VrCke
	 X8b/EJewUfytquvcmxaFxH3jyTVPsX8YZjKxym772t2SsAyD+wHiNCVX8OH9o8U0G4
	 S8cEXBze6uuKA==
Date: Fri, 10 Jun 2022 16:55:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
In-Reply-To: <7f886dfb-2b42-bc70-d55f-14ecd8144e3e@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206101644210.756493@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-3-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop> <7f886dfb-2b42-bc70-d55f-14ecd8144e3e@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1025708553-1654904664=:756493"
Content-ID: <alpine.DEB.2.22.394.2206101644500.756493@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1025708553-1654904664=:756493
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2206101644501.756493@ubuntu-linux-20-04-desktop>

On Thu, 9 Jun 2022, Oleksandr wrote:
> On 04.06.22 00:19, Stefano Stabellini wrote:
> Hello Stefano
> 
> Thank you for having a look and sorry for the late response.
> 
> > On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
> > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > 
> > > Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled then unpopulated
> > > DMAable (contiguous) pages will be allocated for grant mapping into
> > > instead of ballooning out real RAM pages.
> > > 
> > > TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
> > > fails.
> > > 
> > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > ---
> > >   drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
> > >   1 file changed, 27 insertions(+)
> > > 
> > > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > > index 8ccccac..2bb4392 100644
> > > --- a/drivers/xen/grant-table.c
> > > +++ b/drivers/xen/grant-table.c
> > > @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
> > >    */
> > >   int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
> > >   {
> > > +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
> > > +	int ret;
> > This is an alternative implementation of the same function.
> 
> Currently, yes.
> 
> 
> >   If we are
> > going to use #ifdef, then I would #ifdef the entire function, rather
> > than just the body. Otherwise within the function body we can use
> > IS_ENABLED.
> 
> 
> Good point. Note, there is one missing thing in current patch which is
> described in TODO.
> 
> "Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages() fails."  So I
> will likely use IS_ENABLED within the function body.
> 
> If CONFIG_XEN_UNPOPULATED_ALLOC is enabled then gnttab_dma_alloc_pages() will
> try to call xen_alloc_unpopulated_dma_pages() the first and if fails then
> fallback to allocate RAM pages and balloon them out.
> 
> One moment is not entirely clear to me. If we use fallback in
> gnttab_dma_alloc_pages() then we must use fallback in gnttab_dma_free_pages()
> as well, we cannot use xen_free_unpopulated_dma_pages() for real RAM pages.
> The question is how to pass this information to the gnttab_dma_free_pages()?
> The first idea which comes to mind is to add a flag to struct
> gnttab_dma_alloc_args...
 
You can check whether the page is within the mhp_range or part of
iomem_resource. If not, you can free it as a normal page.

If we do this, then the fallback is better implemented in
unpopulated-alloc.c, because that is the code that is aware of
page addresses.

 
 
> > > +	ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
> > > +			args->pages);
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
> > > +	if (ret < 0) {
> > > +		gnttab_dma_free_pages(args);
> > it should xen_free_unpopulated_dma_pages ?
> 
> Besides calling the xen_free_unpopulated_dma_pages(), we also need to call
> gnttab_pages_clear_private() here, this is what gnttab_dma_free_pages() is
> doing.
> 
> I can change to call both function instead:
> 
>     gnttab_pages_clear_private(args->nr_pages, args->pages);
>     xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);
> 
> Shall I?

No, leave it as is. I didn't realize that gnttab_pages_set_private can
fail half-way through.

 
> > 
> > 
> > > +		return ret;
> > > +	}
> > > +
> > > +	args->vaddr = page_to_virt(args->pages[0]);
> > > +	args->dev_bus_addr = page_to_phys(args->pages[0]);
> > There are two things to note here.
> > 
> > The first thing to note is that normally we would call pfn_to_bfn to
> > retrieve the dev_bus_addr of a page because pfn_to_bfn takes into
> > account foreign mappings. However, these are freshly allocated pages
> > without foreign mappings, so page_to_phys/dma should be sufficient.
> 
> agree
> 
> 
> > 
> > 
> > The second has to do with physical addresses and DMA addresses. The
> > functions are called gnttab_dma_alloc_pages and
> > xen_alloc_unpopulated_dma_pages which make you think we are retrieving a
> > DMA address here. However, to get a DMA address we need to call
> > page_to_dma rather than page_to_phys.
> > 
> > page_to_dma takes into account special offsets that some devices have
> > when accessing memory. There are real cases on ARM where the physical
> > address != DMA address, e.g. RPi4.
> 
> I got it. Now I am in doubt whether it would be better to name the API:
> 
> xen_alloc_unpopulated_cma_pages()
> 
> or
> 
> xen_alloc_unpopulated_contiguous_pages()
> 
> What do you think?

Yeah, actually I think it is better to stay away from "dma" in the name.
I like xen_alloc_unpopulated_contiguous_pages().
 
 
> > However, to call page_to_dma you need to specify as first argument the
> > DMA-capable device that is expected to use those pages for DMA (e.g. an
> > ethernet device or a MMC controller.) While the args->dev we have in
> > gnttab_dma_alloc_pages is the gntdev_miscdev.
> 
> agree
> 
> As I understand, at this time it is unknown for what exactly device these
> pages are supposed to be used at the end.
> 
> For now, it is only known that these pages to be used by userspace PV backend
> for grant mappings.

Yeah
 

> > So this interface cannot actually be used to allocate memory that is
> > supposed to be DMA-able by a DMA-capable device, such as an ethernet
> > device.
> 
> agree
> 
> 
> > 
> > But I think that should be fine because the memory is meant to be used
> > by a userspace PV backend for grant mappings. If any of those mappings
> > end up being used for actual DMA in the kernel they should go through the
> > drivers/xen/swiotlb-xen.c and xen_phys_to_dma should be called, which
> > ends up calling page_to_dma as appropriate.
> > 
> > It would be good to double-check that the above is correct and, if so,
> > maybe add a short in-code comment about it:
> > 
> > /*
> >   * These are not actually DMA addresses but regular physical addresses.
> >   * If these pages end up being used in a DMA operation then the
> >   * swiotlb-xen functions are called and xen_phys_to_dma takes care of
> >   * the address translations:
> >   *
> >   * - from gfn to bfn in case of foreign mappings
> >   * - from physical to DMA addresses in case the two are different for a
> >   *   given DMA-mastering device
> >   */
> 
> I agree this needs to be re-checked. But, there is one moment here, if
> userspace PV backend runs in other than Dom0 domain (non 1:1 mapped domain),
> the xen-swiotlb seems not to be in use then? How to be in this case?
 
In that case, an IOMMU is required. If the IOMMU is set up correctly,
then the gfn->bfn translation is not necessary because it is done
automatically by the IOMMU. That is because when the foreign page is
mapped in the domain, the mapping also applies to the IOMMU pagetable.

So the device is going to do DMA to "gfn" and the IOMMU will translate
it to the right "mfn", the one corresponding to "bfn".

The physical-to-DMA address translation should be done automatically by
the default (non-swiotlb-xen) dma_ops in Linux. E.g.
kernel/dma/direct.c:dma_direct_map_sg correctly calls
dma_direct_map_page, which calls phys_to_dma.
 
 
 
> > > +	return ret;
> > > +#else
> > >   	unsigned long pfn, start_pfn;
> > >   	size_t size;
> > >   	int i, ret;
> > > @@ -910,6 +929,7 @@ int gnttab_dma_alloc_pages(struct
> > > gnttab_dma_alloc_args *args)
> > >   fail:
> > >   	gnttab_dma_free_pages(args);
> > >   	return ret;
> > > +#endif
> > >   }
> > >   EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
> > >   @@ -919,6 +939,12 @@ EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
> > >    */
> > >   int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
> > >   {
> > > +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
> > > +	gnttab_pages_clear_private(args->nr_pages, args->pages);
> > > +	xen_free_unpopulated_dma_pages(args->dev, args->nr_pages,
> > > args->pages);
> > > +
> > > +	return 0;
> > > +#else
> > >   	size_t size;
> > >   	int i, ret;
> > >   @@ -946,6 +972,7 @@ int gnttab_dma_free_pages(struct
> > > gnttab_dma_alloc_args *args)
> > >   		dma_free_wc(args->dev, size,
> > >   			    args->vaddr, args->dev_bus_addr);
> > >   	return ret;
> > > +#endif
> > >   }
> > >   EXPORT_SYMBOL_GPL(gnttab_dma_free_pages);
> > >   #endif
> > > -- 
> > > 2.7.4
> > > 
--8323329-1025708553-1654904664=:756493--


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 00:12:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 00:12:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346851.572794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzojq-0008Jg-Bf; Sat, 11 Jun 2022 00:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346851.572794; Sat, 11 Jun 2022 00:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzojq-0008JZ-92; Sat, 11 Jun 2022 00:12:26 +0000
Received: by outflank-mailman (input) for mailman id 346851;
 Sat, 11 Jun 2022 00:12:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXwC=WS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzojo-0008JT-EQ
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 00:12:24 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28364f92-e91b-11ec-8901-93a377f238d6;
 Sat, 11 Jun 2022 02:12:20 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id EBD3BCE3966;
 Sat, 11 Jun 2022 00:12:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3E62C34114;
 Sat, 11 Jun 2022 00:12:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28364f92-e91b-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654906333;
	bh=d0rHiyYopNhDPEkoTUlu2dhdd5i24Zc5+OYebXPp9fs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EysdNDGWc160A8pcOAhbfITlnI82gSWEEDNoxThGJwZg46GWtrdsubuWsVhkJ3gYJ
	 tgnPMkdS+qyJQymTcFB9amOmwKcf4+gQhs7FRbcHFsKorgy4xZJ4/p6U/ISYlhNvmx
	 8WVapP4YOoOCLPxMTjxozlv4AaFIiz69BQ7dyGSK1vt3vuiftNVjnA7JQXJxp0jOSy
	 NKtnHFObTgQ6LmuloVlleF5fJlv4fIbAvCeGPDYYthV02aWkw1Km7VPy9fpTHvpiTA
	 Wcuot43iEpDJhTjWnompiQYaW1ZZjP7r8mq18dZQik94DIXCFwZ6jZdj3mozKu8ICs
	 yAdha3akPYsWA==
Date: Fri, 10 Jun 2022 17:12:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for
 DMA allocations
In-Reply-To: <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206101709210.756493@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-2-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop> <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1810477358-1654906333=:756493"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1810477358-1654906333=:756493
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 8 Jun 2022, Oleksandr wrote:
> 2. Drop the "page_list" entirely and use "dma_pool" for all (contiguous and
> non-contiguous) allocations. After all, all pages are initially contiguous in
> fill_list() as they are built from the resource. This changes behavior for all
> users of xen_alloc_unpopulated_pages()
> 
> Below the diff for unpopulated-alloc.c. The patch is also available at:
> 
> https://github.com/otyshchenko1/linux/commit/7be569f113a4acbdc4bcb9b20cb3995b3151387a
> 
> 
> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
> index a39f2d3..ab5c7bd 100644
> --- a/drivers/xen/unpopulated-alloc.c
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -1,5 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
> +#include <linux/dma-mapping.h>
>  #include <linux/errno.h>
> +#include <linux/genalloc.h>
>  #include <linux/gfp.h>
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
> @@ -13,8 +15,8 @@
>  #include <xen/xen.h>
> 
>  static DEFINE_MUTEX(list_lock);
> -static struct page *page_list;
> -static unsigned int list_count;
> +
> +static struct gen_pool *dma_pool;
> 
>  static struct resource *target_resource;
> 
> @@ -36,7 +38,7 @@ static int fill_list(unsigned int nr_pages)
>         struct dev_pagemap *pgmap;
>         struct resource *res, *tmp_res = NULL;
>         void *vaddr;
> -       unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> +       unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>         struct range mhp_range;
>         int ret;
> 
> @@ -106,6 +108,7 @@ static int fill_list(unsigned int nr_pages)
>           * conflict with any devices.
>           */
>         if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +               unsigned int i;
>                 xen_pfn_t pfn = PFN_DOWN(res->start);
> 
>                 for (i = 0; i < alloc_pages; i++) {
> @@ -125,16 +128,17 @@ static int fill_list(unsigned int nr_pages)
>                 goto err_memremap;
>         }
> 
> -       for (i = 0; i < alloc_pages; i++) {
> -               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> -
> -               pg->zone_device_data = page_list;
> -               page_list = pg;
> -               list_count++;
> +       ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
> +                       alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
> +       if (ret) {
> +               pr_err("Cannot add memory range to the pool\n");
> +               goto err_pool;
>         }
> 
>         return 0;
> 
> +err_pool:
> +       memunmap_pages(pgmap);
>  err_memremap:
>         kfree(pgmap);
>  err_pgmap:
> @@ -149,51 +153,49 @@ static int fill_list(unsigned int nr_pages)
>         return ret;
>  }
> 
> -/**
> - * xen_alloc_unpopulated_pages - alloc unpopulated pages
> - * @nr_pages: Number of pages
> - * @pages: pages returned
> - * @return 0 on success, error otherwise
> - */
> -int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +static int alloc_unpopulated_pages(unsigned int nr_pages, struct page
> **pages,
> +               bool contiguous)
>  {
>         unsigned int i;
>         int ret = 0;
> +       void *vaddr;
> +       bool filled = false;
> 
>         /*
>          * Fallback to default behavior if we do not have any suitable
> resource
>          * to allocate required region from and as the result we won't be able
> to
>          * construct pages.
>          */
> -       if (!target_resource)
> +       if (!target_resource) {
> +               if (contiguous)
> +                       return -ENODEV;
> +
>                 return xen_alloc_ballooned_pages(nr_pages, pages);
> +       }
> 
>         mutex_lock(&list_lock);
> -       if (list_count < nr_pages) {
> -               ret = fill_list(nr_pages - list_count);
> +
> +       while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages *
> PAGE_SIZE))) {
> +               if (filled)
> +                       ret = -ENOMEM;
> +               else {
> +                       ret = fill_list(nr_pages);
> +                       filled = true;
> +               }
>                 if (ret)
>                         goto out;
>         }
> 
>         for (i = 0; i < nr_pages; i++) {
> -               struct page *pg = page_list;
> -
> -               BUG_ON(!pg);
> -               page_list = pg->zone_device_data;
> -               list_count--;
> -               pages[i] = pg;
> +               pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
> 
>  #ifdef CONFIG_XEN_HAVE_PVMMU
>                 if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
> +                       ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
>                         if (ret < 0) {
> -                               unsigned int j;
> -
> -                               for (j = 0; j <= i; j++) {
> - pages[j]->zone_device_data = page_list;
> -                                       page_list = pages[j];
> -                                       list_count++;
> -                               }
> +                               /* XXX Do we need to zeroed pages[i]? */
> +                               gen_pool_free(dma_pool, (unsigned long)vaddr,
> +                                               nr_pages * PAGE_SIZE);
>                                 goto out;
>                         }
>                 }
> @@ -204,32 +206,89 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages,
> struct page **pages)
>         mutex_unlock(&list_lock);
>         return ret;
>  }
> -EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> 
> -/**
> - * xen_free_unpopulated_pages - return unpopulated pages
> - * @nr_pages: Number of pages
> - * @pages: pages to return
> - */
> -void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +static void free_unpopulated_pages(unsigned int nr_pages, struct page
> **pages,
> +               bool contiguous)
>  {
> -       unsigned int i;
> -
>         if (!target_resource) {
> +               if (contiguous)
> +                       return;
> +
>                 xen_free_ballooned_pages(nr_pages, pages);
>                 return;
>         }
> 
>         mutex_lock(&list_lock);
> -       for (i = 0; i < nr_pages; i++) {
> -               pages[i]->zone_device_data = page_list;
> -               page_list = pages[i];
> -               list_count++;
> +
> +       /* XXX Do we need to check the range (gen_pool_has_addr)? */
> +       if (contiguous)
> +               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
> +                               nr_pages * PAGE_SIZE);
> +       else {
> +               unsigned int i;
> +
> +               for (i = 0; i < nr_pages; i++)
> +                       gen_pool_free(dma_pool, (unsigned
> long)page_to_virt(pages[i]),
> +                                       PAGE_SIZE);
>         }
> +
>         mutex_unlock(&list_lock);
>  }
> +
> +/**
> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +       return alloc_unpopulated_pages(nr_pages, pages, false);
> +}
> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> +
> +/**
> + * xen_free_unpopulated_pages - return unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +       free_unpopulated_pages(nr_pages, pages, false);
> +}
>  EXPORT_SYMBOL(xen_free_unpopulated_pages);
> 
> +/**
> + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int
> nr_pages,
> +               struct page **pages)
> +{
> +       /* XXX Handle devices which support 64-bit DMA address only for now */
> +       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
> +               return -EINVAL;
> +
> +       return alloc_unpopulated_pages(nr_pages, pages, true);
> +}
> +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
> +
> +/**
> + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int
> nr_pages,
> +               struct page **pages)
> +{
> +       free_unpopulated_pages(nr_pages, pages, true);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
> +
>  static int __init unpopulated_init(void)
>  {
>         int ret;
> @@ -237,9 +296,19 @@ static int __init unpopulated_init(void)
>         if (!xen_domain())
>                 return -ENODEV;
> 
> +       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> +       if (!dma_pool) {
> +               pr_err("xen:unpopulated: Cannot create DMA pool\n");
> +               return -ENOMEM;
> +       }
> +
> +       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
> +
>         ret = arch_xen_unpopulated_init(&target_resource);
>         if (ret) {
>                 pr_err("xen:unpopulated: Cannot initialize target
> resource\n");
> +               gen_pool_destroy(dma_pool);
> +               dma_pool = NULL;
>                 target_resource = NULL;
>         }
> 
> [snip]
> 
> 
> I think, depending on the chosen approach, we would likely need to do some
> renaming for fill_list, page_list, list_lock, etc.
> 
> 
> Both options work in my Arm64-based environment; I am not sure about x86.
> Or do we have another option here?
> I would be happy to go either route. What do you think?

The second option (use "dma_pool" for all) looks great, thank you for
looking into it!
--8323329-1810477358-1654906333=:756493--


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 00:23:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 00:23:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346861.572805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzouP-00022a-Ih; Sat, 11 Jun 2022 00:23:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346861.572805; Sat, 11 Jun 2022 00:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzouP-00022T-Fl; Sat, 11 Jun 2022 00:23:21 +0000
Received: by outflank-mailman (input) for mailman id 346861;
 Sat, 11 Jun 2022 00:23:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXwC=WS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzouO-00022N-HF
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 00:23:20 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b11d0596-e91c-11ec-bd2c-47488cf2e6aa;
 Sat, 11 Jun 2022 02:23:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id DA6D1B837C6;
 Sat, 11 Jun 2022 00:23:17 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 235DBC3411E;
 Sat, 11 Jun 2022 00:23:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b11d0596-e91c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654906996;
	bh=hgePJFBVCqf4ZZNjTfnhdlCC4xdsn6oPrkWViA35yk0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AcMGNVGQKZQVJjW5/BrtonNXz1EGLitf9bCgQL1w3aaPF74d82gmEuXpcCJ61R1Tz
	 xT4l0WplwRd2GnW0ahhqcn7OIBE8LP1GFZ+7ATm5bDa/0X4kA2PZKS87X9j8OuKKr9
	 zAWmkC+5VRlHc7MZvnYqIqFs0KhSwo/uDxKk/77rBGDzglZS7pczIkjrXJF1eCuqK7
	 s8d4fLgrNuvBBlDn4pyATDh+fiiJtkMcb7WmARptZpE1t828tFBFx/u1jmlB4mktDy
	 4vpfZHGFZnl78zKEU6kKDPAnT7dPWYHEiSKvPSxh8sns1CZefWK/sBAHMmux+z+r9e
	 qU5Un0+bUIhqg==
Date: Fri, 10 Jun 2022 17:23:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] xen/evtchn: Add design for static event channel
 signaling
In-Reply-To: <bb77b88185e26010d0502ce38940d2d5f7d28464.1652452306.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206101721580.756493@ubuntu-linux-20-04-desktop>
References: <bb77b88185e26010d0502ce38940d2d5f7d28464.1652452306.git.rahul.singh@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1948795101-1654906997=:756493"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1948795101-1654906997=:756493
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 13 May 2022, Rahul Singh wrote:
> This patch introduces a new feature to support the signaling between
> two domains in dom0less system.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
> v3 changes:
> - add dt node example for dom0 and domU.
> - add info about "xen,enhanced" property to enable the event-channel interface
>   for domU guests.
> 
> v2 changes:
> - switch to the one-subnode-per-evtchn under xen,domain" compatible node.
> - Add more detail about event-channel.
> ---
>  docs/designs/dom0less-evtchn.md | 144 ++++++++++++++++++++++++++++++++
>  1 file changed, 144 insertions(+)
>  create mode 100644 docs/designs/dom0less-evtchn.md
> 
> diff --git a/docs/designs/dom0less-evtchn.md b/docs/designs/dom0less-evtchn.md
> new file mode 100644
> index 0000000000..3c89a9fb7d
> --- /dev/null
> +++ b/docs/designs/dom0less-evtchn.md
> @@ -0,0 +1,144 @@
> +# Signaling support between two domUs on dom0less system
> +
> +## Current state: Draft version
> +
> +## Proposer(s): Rahul Singh, Bertrand Marquis
> +
> +## Problem Statement:
> +
> +Dom0less guests would benefit from a statically defined memory sharing and
> +signaling system for communication, one that would be immediately available
> +at boot without any need for dynamic configuration.
> +
> +In embedded systems a great variety of guest operating system kernels exist, many of
> +which don't have support for xenstore, grant table, or other complex drivers.
> +Some of them are small kernel-space applications (often called "baremetal",
> +not to be confused with the term "baremetal" used in the data center which
> +means "without hypervisors") or RTOSes. Additionally, for safety reasons, users
> +often need to be able to configure the full system statically so that it can
> +be verified statically.
> +
> +Event channels are very simple and can be added even to baremetal applications.
> +This proposal introduces a way to define them statically to make them suitable
> +for dom0less embedded deployments.
> +
> +## Proposal:
> +
> +Event channels are the basic primitive provided by Xen for event notifications.
> +An event channel is a logical connection between two domains (more specifically
> +between dom1/port1 and dom2/port2). Each event has a pending and a masked bit.
> +The pending bit indicates the event has been raised. The masked bit is used by
> +the domain to prevent the delivery of that specific event. Xen only performs a
> +0 -> 1 transition on the pending bits and does not touch the masked bit. The
> +domain may toggle masked bits in the masked bit field and should clear the
> +pending bit when an event has been processed.
> +
> +Events are received by a domain via an interrupt from Xen to the domain,
> +indicating when an event arrives (setting the bit). Further notifications are
> +blocked until the bit is cleared again. Events are delivered asynchronously to
> +a domain and are enqueued when the domain is not running. Xen supports two
> +different ABIs for event channels: FIFO and 2L.
> +
> +The event channel communication will be established statically between two
> +domains (including dom0 and a domU) before the domains are unpaused after
> +domain creation. Event channel connection information will be passed to Xen
> +via device tree nodes. The event channel will be created and established in
> +Xen before the domain is started, so the domain does not need to do anything
> +to establish the connection. The domain only needs the EVTCHNOP_send(local
> +port) hypercall to send notifications to the remote guest.
> +
> +There is no need to describe the static event channel info in the domU device
> +tree. Static event channels are only useful in fully static configurations,
> +and in those configurations the domU device tree dynamically generated by Xen
> +is not needed.
> +
> +To enable the event-channel interface for domU guests, include the
> +"xen,enhanced" property with an empty string (or with the value
> +"enabled" or "evtchn") in the domU Xen device tree node.
> +
> +Under the "xen,domain" compatible node, there need to be sub-nodes with
> +compatible "xen,evtchn" that describe the event channel connections between
> +two domains (including between dom0 and a domU).
> +
> +The event channel sub-node has the following properties:
> +
> +- compatible
> +
> +    "xen,evtchn"
> +
> +- xen,evtchn
> +
> +    The property is a tuple of two numbers
> +    (local-evtchn link-to-foreign-evtchn) where:
> +
> +    local-evtchn is an integer value that will be used to allocate a local
> +    port for a domain to send and receive event notifications to/from the
> +    remote domain. The maximum supported value is 2^17 for the FIFO ABI and
> +    4096 for the 2L ABI.
> +
> +    link-to-foreign-evtchn is a single phandle to a foreign evtchn to which
> +    local-evtchn will be connected.
> +
> +
> +Example:
> +
> +    chosen {
> +        ....
> +
> +        module@0 {
> +            compatible = "multiboot,kernel", "multiboot,module";
> +            xen,uefi-binary = "...";
> +            bootargs = "...";
> +
> +            /* one sub-node per local event channel */
> +            ec1: evtchn@1 {
> +                compatible = "xen,evtchn-v1";
> +                /* local-evtchn link-to-foreign-evtchn */
> +                xen,evtchn = <0xa &ec2>;
> +            };
> +        };

Great that you added a dom0 example. I wish we had a dom0 node for dom0
but what you have done here is the easiest thing we can do and the least
disruptive for the existing bindings.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> +        domU1: domU1 {
> +            compatible = "xen,domain";
> +
> +            /* one sub-node per local event channel */
> +            ec2: evtchn@2 {
> +                compatible = "xen,evtchn-v1";
> +                /* local-evtchn link-to-foreign-evtchn */
> +                xen,evtchn = <0xa &ec1>;
> +            };
> +
> +            ec3: evtchn@3 {
> +                compatible = "xen,evtchn-v1";
> +                xen,evtchn = <0xb &ec5>;
> +            };
> +
> +            ec4: evtchn@4 {
> +                compatible = "xen,evtchn-v1";
> +                xen,evtchn = <0xc &ec6>;
> +            };
> +            ....
> +        };
> +
> +        domU2: domU2 {
> +            compatible = "xen,domain";
> +
> +            /* one sub-node per local event channel */
> +            ec5: evtchn@5 {
> +                compatible = "xen,evtchn-v1";
> +                /* local-evtchn link-to-foreign-evtchn */
> +                xen,evtchn = <0xa &ec3>;
> +            };
> +
> +            ec6: evtchn@6 {
> +                compatible = "xen,evtchn-v1";
> +                xen,evtchn = <0xb &ec4>;
> +            };
> +            ....
> +        };
> +    };
> +
> +In the above example, three event channel connections will be established:
> +
> +    dom0  (port 0xa) <-----------------> domU1 (port 0xa)
> +    domU1 (port 0xb) <-----------------> domU2 (port 0xa)
> +    domU1 (port 0xc) <-----------------> domU2 (port 0xb)
> -- 
> 2.25.1
> 
--8323329-1948795101-1654906997=:756493--


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 00:41:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 00:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346873.572822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzpC8-0004eb-4R; Sat, 11 Jun 2022 00:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346873.572822; Sat, 11 Jun 2022 00:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzpC8-0004eU-1V; Sat, 11 Jun 2022 00:41:40 +0000
Received: by outflank-mailman (input) for mailman id 346873;
 Sat, 11 Jun 2022 00:41:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXwC=WS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzpC6-0004eO-Cj
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 00:41:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f36f193-e91f-11ec-bd2c-47488cf2e6aa;
 Sat, 11 Jun 2022 02:41:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0E1F161F39;
 Sat, 11 Jun 2022 00:41:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 346B8C34114;
 Sat, 11 Jun 2022 00:41:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f36f193-e91f-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654908094;
	bh=8bwDMRcbBGmeqBFM+Fcec2ie1Pqtr9LL2LyK4fTPmMo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uxpentRgK5SxC7MPrX5SSZnDN7gdu1S1TKaaEtvU49n40SHnLgh0GvmVPEmjIfMq0
	 mGb8sQCvaEIkMMOD1K822xVYfDgezoNy8QCORkTCRCxF/8KTAkaRbeHPZ5T88OVT6D
	 UyMyzdh39g4KtyATm73z7S2byluU4Kfd8wf27Yu7Rhz+oyGrKKhR5Nlutw92WD+Lsx
	 Dbxe4cJIDF3uB9xTlPB8H8BlF4a2i0PjMFaoBj7f4+uUC2YDS5hm2kzNcyTDAxtytQ
	 ibjBDwJYlrbQ8+5nudham4gr8On1ouHe8X53T3ssG3Ymhcy4Y4PCnBVvuKcmffwAGz
	 zdzLrtzBS5weg==
Date: Fri, 10 Jun 2022 17:41:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jens Wiklander <jens.wiklander@linaro.org>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
In-Reply-To: <20220609061812.422130-2-jens.wiklander@linaro.org>
Message-ID: <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
References: <20220609061812.422130-1-jens.wiklander@linaro.org> <20220609061812.422130-2-jens.wiklander@linaro.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 9 Jun 2022, Jens Wiklander wrote:
> SMCCC v1.2 AArch64 allows x0-x17 to be used as both parameter registers
> and result registers for the SMC and HVC instructions.
> 
> Arm Firmware Framework for Armv8-A specification makes use of x0-x7 as
> parameter and result registers.
> 
> Let us add new interface to support this extended set of input/output
> registers.
> 
> This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
> extended input/output registers") by Sudeep Holla from the Linux kernel
> 
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/arm64/smc.S         | 43 ++++++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/smccc.h | 42 +++++++++++++++++++++++++++++++
>  xen/arch/arm/vsmc.c              |  2 +-
>  3 files changed, 86 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
> index 91bae62dd4d2..1570bc8eb9d4 100644
> --- a/xen/arch/arm/arm64/smc.S
> +++ b/xen/arch/arm/arm64/smc.S
> @@ -27,3 +27,46 @@ ENTRY(__arm_smccc_1_0_smc)
>          stp     x2, x3, [x4, #SMCCC_RES_a2]
>  1:
>          ret
> +
> +
> +/*
> + * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
> + *                        struct arm_smccc_1_2_regs *res)
> + */
> +ENTRY(arm_smccc_1_2_smc)
> +    /* Save `res` and free a GPR that won't be clobbered */
> +    stp     x1, x19, [sp, #-16]!
> +
> +    /* Ensure `args` won't be clobbered while loading regs in next step */
> +    mov	x19, x0
> +
> +    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
> +    ldp	x0, x1, [x19, #0]
> +    ldp	x2, x3, [x19, #16]
> +    ldp	x4, x5, [x19, #32]
> +    ldp	x6, x7, [x19, #48]
> +    ldp	x8, x9, [x19, #64]
> +    ldp	x10, x11, [x19, #80]
> +    ldp	x12, x13, [x19, #96]
> +    ldp	x14, x15, [x19, #112]
> +    ldp	x16, x17, [x19, #128]
> +
> +    smc #0
> +
> +    /* Load the `res` from the stack */
> +    ldr	x19, [sp]
> +
> +    /* Store the registers x0 - x17 into the result structure */
> +    stp	x0, x1, [x19, #0]
> +    stp	x2, x3, [x19, #16]
> +    stp	x4, x5, [x19, #32]
> +    stp	x6, x7, [x19, #48]
> +    stp	x8, x9, [x19, #64]
> +    stp	x10, x11, [x19, #80]
> +    stp	x12, x13, [x19, #96]
> +    stp	x14, x15, [x19, #112]
> +    stp	x16, x17, [x19, #128]

I noticed that in the original commit the offsets are declared as
ARM_SMCCC_1_2_REGS_X0_OFFS, etc. In Xen we could add them to
xen/arch/arm/arm64/asm-offsets.c given that they are only used in asm.

That said, there isn't a huge value in declaring them given that they
are always read and written in order and there is nothing else in the
struct, so I am fine either way.

I am also happy to have them declared if other maintainers prefer it
that way.


> +    /* Restore original x19 */
> +    ldp     xzr, x19, [sp], #16
> +    ret
> diff --git a/xen/arch/arm/include/asm/smccc.h b/xen/arch/arm/include/asm/smccc.h
> index b3dbeecc90ad..316adf968e74 100644
> --- a/xen/arch/arm/include/asm/smccc.h
> +++ b/xen/arch/arm/include/asm/smccc.h
> @@ -33,6 +33,7 @@
>  
>  #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
>  #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
> +#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
>  
>  /*
>   * This file provides common defines for ARM SMC Calling Convention as
> @@ -217,6 +218,7 @@ struct arm_smccc_res {
>  #ifdef CONFIG_ARM_32
>  #define arm_smccc_1_0_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
>  #define arm_smccc_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
> +
>  #else

Spurious change


>  void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
> @@ -265,8 +267,48 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
>          else                                                    \
>              arm_smccc_1_0_smc(__VA_ARGS__);                     \
>      } while ( 0 )
> +
> +/**
> + * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
> + * @a0-a17 argument values from registers 0 to 17
> + */
> +struct arm_smccc_1_2_regs {
> +    unsigned long a0;
> +    unsigned long a1;
> +    unsigned long a2;
> +    unsigned long a3;
> +    unsigned long a4;
> +    unsigned long a5;
> +    unsigned long a6;
> +    unsigned long a7;
> +    unsigned long a8;
> +    unsigned long a9;
> +    unsigned long a10;
> +    unsigned long a11;
> +    unsigned long a12;
> +    unsigned long a13;
> +    unsigned long a14;
> +    unsigned long a15;
> +    unsigned long a16;
> +    unsigned long a17;
> +};
>  #endif /* CONFIG_ARM_64 */
>  
> +/**
> + * arm_smccc_1_2_smc() - make SMC calls
> + * @args: arguments passed via struct arm_smccc_1_2_regs
> + * @res: result values via struct arm_smccc_1_2_regs
> + *
> + * This function is used to make SMC calls following SMC Calling Convention
> + * v1.2 or above. The content of the supplied param are copied from the
> + * structure to registers prior to the SMC instruction. The return values
> + * are updated with the content from registers on return from the SMC
> + * instruction.
> + */
> +void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
> +                       struct arm_smccc_1_2_regs *res);
> +

As arm_smccc_1_2_smc is not implemented on ARM32, it is better to place
the declaration inside the #ifdef CONFIG_ARM_64.


>  #endif /* __ASSEMBLY__ */
>  
>  /*
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index 676740ef1520..6f90c08a6304 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
>      switch ( fid )
>      {
>      case ARM_SMCCC_VERSION_FID:
> -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
> +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
>          return true;
  
This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
is unimplemented on ARM32. If there is an ARM32 implementation in Linux
for ARM_SMCCC_VERSION_1_2 you might as well import it too.

Otherwise we'll have to abstract it away, e.g.:

#ifdef CONFIG_ARM_64
#define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
#else
#define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
#endif

>      case ARM_SMCCC_ARCH_FEATURES_FID:
> -- 
> 2.31.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 01:24:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 01:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346886.572844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzpr7-00009F-Hb; Sat, 11 Jun 2022 01:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346886.572844; Sat, 11 Jun 2022 01:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzpr7-000098-EE; Sat, 11 Jun 2022 01:24:01 +0000
Received: by outflank-mailman (input) for mailman id 346886;
 Sat, 11 Jun 2022 01:24:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gXwC=WS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1nzpr6-000092-4I
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 01:24:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28493810-e925-11ec-8901-93a377f238d6;
 Sat, 11 Jun 2022 03:23:55 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7D75261FBA;
 Sat, 11 Jun 2022 01:23:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7A67DC34114;
 Sat, 11 Jun 2022 01:23:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28493810-e925-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1654910632;
	bh=qlQhxTGJ2cFmJQ6PlTzzwW4WRSpuCsiHNIg2UOlDPJI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aKSzmIFrV92LdmIj+ukWODJWsY4d6lQIOyhzkDlbke5IXKF1wB34G8ncP1LXRSWP6
	 k9UgctZfVYWKdAK9d0BlZvBtcmFLW9D51UP+b/d1qUHWyhVNTnHiqJ/QkY1vUWaXtj
	 zHEdtgSFF5YBaip/0wN+9hj4QsZaMtFOiU21dxr3+9WwKHHZEhcu+dHiJ19A+qbT7h
	 GLuRLal/2KBtCHcPJZddUDOIpyG8OBP4bVv8CuY6/KXprLSQqbjYeLqRzXMm2WSY8+
	 c54n/tSw8V5Q3Pc4xt7DQ7vwi3JJsdVeqy3qUGcKghbMdZrB/sxwUc2oQM2S2pQSfH
	 9+1TDlpkyTFIA==
Date: Fri, 10 Jun 2022 18:23:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jens Wiklander <jens.wiklander@linaro.org>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand.Marquis@arm.com
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
In-Reply-To: <20220609061812.422130-3-jens.wiklander@linaro.org>
Message-ID: <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop>
References: <20220609061812.422130-1-jens.wiklander@linaro.org> <20220609061812.422130-3-jens.wiklander@linaro.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 9 Jun 2022, Jens Wiklander wrote:
> Adds a FF-A version 1.1 [1] mediator to communicate with a Secure
> Partition in secure world.
> 
> The implementation is the bare minimum to be able to communicate with
> OP-TEE running as an SPMC at S-EL1.
> 
> This is loosely based on the TEE mediator framework and the OP-TEE
> mediator.
> 
> [1] https://developer.arm.com/documentation/den0077/latest
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

Hi Jens, thanks for rebasing. This is not a full review because I ran
out of time, but here are some initial comments.


> ---
>  xen/arch/arm/Kconfig              |   11 +
>  xen/arch/arm/Makefile             |    1 +
>  xen/arch/arm/domain.c             |   10 +
>  xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/domain.h |    4 +
>  xen/arch/arm/include/asm/ffa.h    |   71 ++
>  xen/arch/arm/vsmc.c               |   17 +-
>  7 files changed, 1735 insertions(+), 3 deletions(-)
>  create mode 100644 xen/arch/arm/ffa.c
>  create mode 100644 xen/arch/arm/include/asm/ffa.h
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ecfa6822e4d3..5b75067e2745 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -106,6 +106,17 @@ config TEE
>  
>  source "arch/arm/tee/Kconfig"
>  
> +config FFA
> +	bool "Enable FF-A mediator support" if EXPERT
> +	default n
> +	depends on ARM_64
> +	help
> +	  This option enables a minimal FF-A mediator. The mediator is
> +	  generic as it follows the FF-A specification [1], but it only
> +	  implements a small subset of the specification.
> +
> +	  [1] https://developer.arm.com/documentation/den0077/latest
> +
>  endmenu
>  
>  menu "ARM errata workaround via the alternative framework"
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 1d862351d111..dbf5e593a069 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -20,6 +20,7 @@ obj-y += domain.o
>  obj-y += domain_build.init.o
>  obj-y += domctl.o
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> +obj-$(CONFIG_FFA) += ffa.o
>  obj-y += gic.o
>  obj-y += gic-v2.o
>  obj-$(CONFIG_GICV3) += gic-v3.o
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 8110c1df8638..a93e6a9c4aef 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -27,6 +27,7 @@
>  #include <asm/cpufeature.h>
>  #include <asm/current.h>
>  #include <asm/event.h>
> +#include <asm/ffa.h>
>  #include <asm/gic.h>
>  #include <asm/guest_atomics.h>
>  #include <asm/irq.h>
> @@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
>      if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
>          goto fail;
>  
> +    if ( (rc = ffa_domain_init(d)) != 0 )
> +        goto fail;
> +
>      update_domain_wallclock_time(d);
>  
>      /*
> @@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
>  enum {
>      PROG_pci = 1,
>      PROG_tee,
> +    PROG_ffa,
>      PROG_xen,
>      PROG_page,
>      PROG_mapping,
> @@ -1046,6 +1051,11 @@ int domain_relinquish_resources(struct domain *d)
>          if (ret )
>              return ret;
>  
> +    PROGRESS(ffa):
> +        ret = ffa_relinquish_resources(d);
> +        if (ret )
> +            return ret;
> +
>      PROGRESS(xen):
>          ret = relinquish_memory(d, &d->xenpage_list);
>          if ( ret )
> diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
> new file mode 100644
> index 000000000000..9063b7f2b59e
> --- /dev/null
> +++ b/xen/arch/arm/ffa.c
> @@ -0,0 +1,1624 @@
> +/*
> + * xen/arch/arm/ffa.c
> + *
> + * Arm Firmware Framework for Armv8-A (FFA) mediator
> + *
> + * Copyright (C) 2021  Linaro Limited
> + *
> + * Permission is hereby granted, free of charge, to any person
> + * obtaining a copy of this software and associated documentation
> + * files (the "Software"), to deal in the Software without restriction,
> + * including without limitation the rights to use, copy, modify, merge,
> + * publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so,
> + * subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be
> + * included in all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include <xen/domain_page.h>
> +#include <xen/errno.h>
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +#include <xen/sched.h>
> +#include <xen/types.h>
> +#include <xen/sizes.h>
> +#include <xen/bitops.h>
> +
> +#include <asm/smccc.h>
> +#include <asm/event.h>
> +#include <asm/ffa.h>
> +#include <asm/regs.h>
> +
> +/* Error codes */
> +#define FFA_RET_OK			0
> +#define FFA_RET_NOT_SUPPORTED		-1
> +#define FFA_RET_INVALID_PARAMETERS	-2
> +#define FFA_RET_NO_MEMORY		-3
> +#define FFA_RET_BUSY			-4
> +#define FFA_RET_INTERRUPTED		-5
> +#define FFA_RET_DENIED			-6
> +#define FFA_RET_RETRY			-7
> +#define FFA_RET_ABORTED			-8
> +
> +/* FFA_VERSION helpers */
> +#define FFA_VERSION_MAJOR		_AC(1,U)
> +#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
> +#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
> +#define FFA_VERSION_MINOR		_AC(1,U)
> +#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
> +#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
> +#define MAKE_FFA_VERSION(major, minor)	\
> +	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
> +	 ((minor) & FFA_VERSION_MINOR_MASK))
> +
> +#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
> +#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
> +#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
> +#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
> +						 FFA_VERSION_MINOR)
> +
> +
> +#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
> +
> +/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
> +#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
> +
> +/* Memory access permissions: Read-write */
> +#define FFA_MEM_ACC_RW			_AC(0x2,U)
> +
> +/* Clear memory before mapping in receiver */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
> +/* Relayer may time slice this operation */
> +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
> +/* Clear memory after receiver relinquishes it */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
> +
> +/* Share memory transaction */
> +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
> +/* Relayer must choose the alignment boundary */
> +#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
> +
> +#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
> +
> +/* Framework direct request/response */
> +#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
> +#define FFA_MSG_TYPE_MASK		_AC(0xFF,U);
> +#define FFA_MSG_PSCI			_AC(0x0,U)
> +#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
> +#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
> +#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
> +#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
> +
> +/*
> + * Flags used for the FFA_PARTITION_INFO_GET return message:
> + * BIT(0): Supports receipt of direct requests
> + * BIT(1): Can send direct requests
> + * BIT(2): Can send and receive indirect messages
> + * BIT(3): Supports receipt of notifications
> + * BIT(4-5): Partition ID is a PE endpoint ID
> + */
> +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
> +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
> +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
> +#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
> +#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
> +#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
> +#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
> +#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
> +#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
> +#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
> +#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
> +
> +/* Function IDs */
> +#define FFA_ERROR			_AC(0x84000060,U)
> +#define FFA_SUCCESS_32			_AC(0x84000061,U)
> +#define FFA_SUCCESS_64			_AC(0xC4000061,U)
> +#define FFA_INTERRUPT			_AC(0x84000062,U)
> +#define FFA_VERSION			_AC(0x84000063,U)
> +#define FFA_FEATURES			_AC(0x84000064,U)
> +#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
> +#define FFA_RX_RELEASE			_AC(0x84000065,U)
> +#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
> +#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
> +#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
> +#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
> +#define FFA_ID_GET			_AC(0x84000069,U)
> +#define FFA_SPM_ID_GET			_AC(0x84000085,U)
> +#define FFA_MSG_WAIT			_AC(0x8400006B,U)
> +#define FFA_MSG_YIELD			_AC(0x8400006C,U)
> +#define FFA_MSG_RUN			_AC(0x8400006D,U)
> +#define FFA_MSG_SEND2			_AC(0x84000086,U)
> +#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
> +#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
> +#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
> +#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
> +#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
> +#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
> +#define FFA_MEM_LEND_32			_AC(0x84000072,U)
> +#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
> +#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
> +#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
> +#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
> +#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
> +#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
> +#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
> +#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
> +#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
> +#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
> +#define FFA_MSG_SEND			_AC(0x8400006E,U)
> +#define FFA_MSG_POLL			_AC(0x8400006A,U)
> +
> +/* Partition information descriptor */
> +struct ffa_partition_info_1_0 {
> +    uint16_t id;
> +    uint16_t execution_context;
> +    uint32_t partition_properties;
> +};
> +
> +struct ffa_partition_info_1_1 {
> +    uint16_t id;
> +    uint16_t execution_context;
> +    uint32_t partition_properties;
> +    uint8_t uuid[16];
> +};
> +
> +/* Constituent memory region descriptor */
> +struct ffa_address_range {
> +    uint64_t address;
> +    uint32_t page_count;
> +    uint32_t reserved;
> +};
> +
> +/* Composite memory region descriptor */
> +struct ffa_mem_region {
> +    uint32_t total_page_count;
> +    uint32_t address_range_count;
> +    uint64_t reserved;
> +    struct ffa_address_range address_range_array[];
> +};
> +
> +/* Memory access permissions descriptor */
> +struct ffa_mem_access_perm {
> +    uint16_t endpoint_id;
> +    uint8_t perm;
> +    uint8_t flags;
> +};
> +
> +/* Endpoint memory access descriptor */
> +struct ffa_mem_access {
> +    struct ffa_mem_access_perm access_perm;
> +    uint32_t region_offs;
> +    uint64_t reserved;
> +};
> +
> +/* Lend, donate or share memory transaction descriptor */
> +struct ffa_mem_transaction_1_0 {
> +    uint16_t sender_id;
> +    uint8_t mem_reg_attr;
> +    uint8_t reserved0;
> +    uint32_t flags;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +    uint32_t reserved1;
> +    uint32_t mem_access_count;
> +    struct ffa_mem_access mem_access_array[];
> +};
> +
> +struct ffa_mem_transaction_1_1 {
> +    uint16_t sender_id;
> +    uint16_t mem_reg_attr;
> +    uint32_t flags;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +    uint32_t mem_access_size;
> +    uint32_t mem_access_count;
> +    uint32_t mem_access_offs;
> +    uint8_t reserved[12];
> +};
> +
> +/*
> + * The parts needed from struct ffa_mem_transaction_1_0 or struct
> + * ffa_mem_transaction_1_1, used to provide an abstraction of difference in
> + * data structures between version 1.0 and 1.1. This is just an internal
> + * interface and can be changed without changing any ABI.
> + */
> +struct ffa_mem_transaction_x {
> +    uint16_t sender_id;
> +    uint8_t mem_reg_attr;
> +    uint8_t flags;
> +    uint8_t mem_access_size;
> +    uint8_t mem_access_count;
> +    uint16_t mem_access_offs;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +};
> +
> +/* Endpoint RX/TX descriptor */
> +struct ffa_endpoint_rxtx_descriptor_1_0 {
> +    uint16_t sender_id;
> +    uint16_t reserved;
> +    uint32_t rx_range_count;
> +    uint32_t tx_range_count;
> +};
> +
> +struct ffa_endpoint_rxtx_descriptor_1_1 {
> +    uint16_t sender_id;
> +    uint16_t reserved;
> +    uint32_t rx_region_offs;
> +    uint32_t tx_region_offs;
> +};
> +
> +struct ffa_ctx {
> +    void *rx;
> +    void *tx;
> +    struct page_info *rx_pg;
> +    struct page_info *tx_pg;
> +    unsigned int page_count;
> +    uint32_t guest_vers;
> +    bool tx_is_mine;
> +    bool interrupted;
> +};
> +
> +struct ffa_shm_mem {
> +    struct list_head list;
> +    uint16_t sender_id;
> +    uint16_t ep_id;     /* endpoint, the one lending */
> +    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
> +    unsigned int page_count;
> +    struct page_info *pages[];
> +};
> +
> +struct mem_frag_state {
> +    struct list_head list;
> +    struct ffa_shm_mem *shm;
> +    uint32_t range_count;
> +    unsigned int current_page_idx;
> +    unsigned int frag_offset;
> +    unsigned int range_offset;
> +    uint8_t *buf;
> +    unsigned int buf_size;
> +    struct ffa_address_range range;
> +};
> +
> +/*
> + * Our rx/tx buffer shared with the SPMC
> + */
> +static uint32_t ffa_version;
> +static uint16_t *subsr_vm_created;
> +static unsigned int subsr_vm_created_count;
> +static uint16_t *subsr_vm_destroyed;
> +static unsigned int subsr_vm_destroyed_count;
> +static void *ffa_rx;
> +static void *ffa_tx;
> +static unsigned int ffa_page_count;
> +static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;

Please use DEFINE_SPINLOCK here. But actually, shouldn't these spinlocks be
per-domain? It looks like at least some of them protect per-domain state
and don't need to be global.
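
Something like this (untested sketch, assuming the buffer lock only needs to
protect per-domain state such as the rx/tx buffers):

    /* in struct ffa_ctx, protecting that domain's rx/tx buffers */
    spinlock_t lock;

    /* only for state that really is global */
    static DEFINE_SPINLOCK(ffa_buffer_lock);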


> +static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
> +static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);

These can use the shorter LIST_HEAD initializer:

LIST_HEAD(ffa_mem_list);
LIST_HEAD(ffa_frag_list);

> +static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;

DEFINE_SPINLOCK here as well.
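
i.e.:

    static DEFINE_SPINLOCK(ffa_mem_list_lock);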


> +static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
> +
> +static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
> +{
> +    return (uint64_t)reg0 << 32 | reg1;
> +}
> +
> +static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
> +{
> +    *reg0 = val >> 32;
> +    *reg1 = val;
> +}

I think these two should be static inline


> +static bool ffa_get_version(uint32_t *vers)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
> +    {
> +        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
> +        return false;
> +    }
> +
> +    *vers = resp.a0;
> +    return true;
> +}
> +
> +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
> +                             uint32_t page_count)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +#ifdef CONFIG_ARM_64
> +        .a0 = FFA_RXTX_MAP_64,
> +#endif

This ifdef is unnecessary: FFA depends on ARM64, and SMCCCv1.2 is only
implemented on ARM64. The same applies to all the other ifdefs in this
file; you can remove the code under #ifdef CONFIG_ARM_32.
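
i.e. the initializer could simply become (this also fixes the hard tab on
the .a1 line):

    const struct arm_smccc_1_2_regs arg = {
        .a0 = FFA_RXTX_MAP_64,
        .a1 = tx_addr, .a2 = rx_addr,
        .a3 = page_count,
    };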


> +#ifdef CONFIG_ARM_32
> +        .a0 = FFA_RXTX_MAP_32,
> +#endif
> +	.a1 = tx_addr, .a2 = rx_addr,
> +        .a3 = page_count,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> +                                       uint32_t w4, uint32_t w5,
> +                                       uint32_t *count)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
> +        .a5 = w5,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    *count = resp.a2;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_rx_release(void)
> +{
> +    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
> +                             register_t addr, uint32_t pg_count,
> +                             uint64_t *handle)
> +{
> +    struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
> +        .a4 = pg_count,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    /*
> +     * For arm64 we must use 64-bit calling convention if the buffer isn't
> +     * passed in our tx buffer.
> +     */
> +    if (sizeof(addr) > 4 && addr)
> +        arg.a0 = FFA_MEM_SHARE_64;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    switch ( resp.a0 ) {
> +    case FFA_ERROR:
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +        *handle = reg_pair_to_64(resp.a3, resp.a2);
> +        return FFA_RET_OK;
> +    case FFA_MEM_FRAG_RX:
> +        *handle = reg_pair_to_64(resp.a2, resp.a1);
> +        return resp.a3;
> +    default:
> +            return FFA_RET_NOT_SUPPORTED;

Coding style: the indentation of this return is off; it should align with
the other case bodies.
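
i.e.:

    default:
        return FFA_RET_NOT_SUPPORTED;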


> +    }
> +}
> +
> +static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
> +                               uint16_t sender_id)
> +{
> +    struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
> +        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    switch ( resp.a0 ) {
> +    case FFA_ERROR:
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +        return FFA_RET_OK;
> +    case FFA_MEM_FRAG_RX:
> +        return resp.a3;
> +    default:
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +}
> +
> +static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
> +                                uint32_t flags)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> +                                      uint8_t msg)
> +{
> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> +    int32_t res;
> +
> +    if ( msg != FFA_MSG_SEND_VM_CREATED && msg !=FFA_MSG_SEND_VM_DESTROYED )
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> +    else
> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> +
> +    do {
> +        const struct arm_smccc_1_2_regs arg = {
> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> +            .a1 = sp_id,
> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> +            .a5 = vm_id,
> +        };
> +        struct arm_smccc_1_2_regs resp;
> +
> +        arm_smccc_1_2_smc(&arg, &resp);
> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) {
> +            /*
> +             * This is an invalid response, likely due to some error in the
> +             * implementation of the ABI.
> +             */
> +            return FFA_RET_INVALID_PARAMETERS;
> +        }
> +        res = resp.a3;
> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
> +
> +    return res;
> +}
> +
> +static u16 get_vm_id(struct domain *d)
> +{
> +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> +    return d->domain_id + 1;
> +}
> +
> +static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
> +                     register_t v2, register_t v3, register_t v4, register_t v5,
> +                     register_t v6, register_t v7)
> +{
> +        set_user_reg(regs, 0, v0);
> +        set_user_reg(regs, 1, v1);
> +        set_user_reg(regs, 2, v2);
> +        set_user_reg(regs, 3, v3);
> +        set_user_reg(regs, 4, v4);
> +        set_user_reg(regs, 5, v5);
> +        set_user_reg(regs, 6, v6);
> +        set_user_reg(regs, 7, v7);
> +}
> +
> +static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
> +{
> +    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
> +}
> +
> +static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
> +                             uint32_t w3)
> +{
> +    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
> +}
> +
> +static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
> +                             uint32_t handle_hi, uint32_t frag_offset,
> +                             uint16_t sender_id)
> +{
> +    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
> +             (uint32_t)sender_id << 16, 0, 0, 0);
> +}
> +
> +static void handle_version(struct cpu_user_regs *regs)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t vers = get_user_reg(regs, 1);
> +
> +    if ( vers < FFA_VERSION_1_1 )
> +        vers = FFA_VERSION_1_0;
> +    else
> +        vers = FFA_VERSION_1_1;
> +
> +    ctx->guest_vers = vers;
> +    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
> +}
> +
> +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> +                                register_t rx_addr, uint32_t page_count)
> +{
> +    uint32_t ret = FFA_RET_NOT_SUPPORTED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    struct page_info *tx_pg;
> +    struct page_info *rx_pg;
> +    p2m_type_t t;
> +    void *rx;
> +    void *tx;
> +
> +    if ( !smccc_is_conv_64(fid) )
> +    {
> +        tx_addr &= UINT32_MAX;
> +        rx_addr &= UINT32_MAX;
> +    }
> +
> +    /* For now to keep things simple, only deal with a single page */
> +    if ( page_count != 1 )
> +        return FFA_RET_NOT_SUPPORTED;
> +
> +    /* Already mapped */
> +    if ( ctx->rx )
> +        return FFA_RET_DENIED;
> +
> +    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
> +    if ( !tx_pg )
> +        return FFA_RET_NOT_SUPPORTED;

It looks like this should be a different error: if get_page_from_gfn fails,
it is probably because the provided address is invalid, so we should return
FFA_RET_INVALID_PARAMETERS?
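
i.e.:

    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
    if ( !tx_pg )
        return FFA_RET_INVALID_PARAMETERS;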


> +    /* Only normal RAM for now */
> +    if (t != p2m_ram_rw)
> +        goto err_put_tx_pg;
> +
> +    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
> +    if ( !tx_pg )
> +        goto err_put_tx_pg;

Same here? Note also that this re-checks tx_pg instead of rx_pg, which
looks like a copy-paste error.
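
Something like (with the copy-paste fixed):

    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
    if ( !rx_pg )
        goto err_put_tx_pg;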


> +    /* Only normal RAM for now */
> +    if ( t != p2m_ram_rw )
> +        goto err_put_rx_pg;
> +
> +    tx = __map_domain_page_global(tx_pg);
> +    if ( !tx )
> +        goto err_put_rx_pg;
> +
> +    rx = __map_domain_page_global(rx_pg);
> +    if ( !rx )
> +        goto err_unmap_tx;
> +
> +    ctx->rx = rx;
> +    ctx->tx = tx;
> +    ctx->rx_pg = rx_pg;
> +    ctx->tx_pg = tx_pg;
> +    ctx->page_count = 1;
> +    ctx->tx_is_mine = true;
> +    return FFA_RET_OK;
> +
> +err_unmap_tx:
> +    unmap_domain_page_global(tx);
> +err_put_rx_pg:
> +    put_page(rx_pg);
> +err_put_tx_pg:
> +    put_page(tx_pg);
> +    return ret;
> +}
> +
> +static uint32_t handle_rxtx_unmap(void)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t ret;
> +
> +    if ( !ctx-> rx )
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    ret = ffa_rxtx_unmap(get_vm_id(d));
> +    if ( ret )
> +        return ret;
> +
> +    unmap_domain_page_global(ctx->rx);
> +    unmap_domain_page_global(ctx->tx);
> +    put_page(ctx->rx_pg);
> +    put_page(ctx->tx_pg);
> +    ctx->rx = NULL;
> +    ctx->tx = NULL;
> +    ctx->rx_pg = NULL;
> +    ctx->tx_pg = NULL;
> +    ctx->page_count = 0;
> +    ctx->tx_is_mine = false;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> +                                          uint32_t w4, uint32_t w5,
> +                                          uint32_t *count)
> +{
> +    uint32_t ret = FFA_RET_DENIED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +
> +    if ( !ffa_page_count )
> +        return FFA_RET_DENIED;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    if ( !ctx->page_count || !ctx->tx_is_mine )
> +        goto out;
> +    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
> +    if ( ret )
> +        goto out;
> +    if ( ctx->guest_vers == FFA_VERSION_1_0 ) {
> +        size_t n;
> +        struct ffa_partition_info_1_1 *src = ffa_rx;
> +        struct ffa_partition_info_1_0 *dst = ctx->rx;
> +
> +        for ( n = 0; n < *count; n++ ) {
> +            dst[n].id = src[n].id;
> +            dst[n].execution_context = src[n].execution_context;
> +            dst[n].partition_properties = src[n].partition_properties;
> +        }
> +    } else {
> +        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
> +
> +        memcpy(ctx->rx, ffa_rx, sz);
> +    }
> +    ffa_rx_release();
> +    ctx->tx_is_mine = false;
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    return ret;
> +}
> +
> +static uint32_t handle_rx_release(void)
> +{
> +    uint32_t ret = FFA_RET_DENIED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    if ( !ctx->page_count || ctx->tx_is_mine )
> +        goto out;
> +    ret = FFA_RET_OK;
> +    ctx->tx_is_mine = true;
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    return ret;
> +}
> +
> +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> +    struct arm_smccc_1_2_regs resp = { };
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t src_dst;
> +    uint64_t mask;
> +
> +    if ( smccc_is_conv_64(fid) )
> +        mask = 0xffffffffffffffff;
> +    else
> +        mask = 0xffffffff;
> +
> +    src_dst = get_user_reg(regs, 1);
> +    if ( (src_dst >> 16) != get_vm_id(d) )
> +    {
> +        resp.a0 = FFA_ERROR;
> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    arg.a1 = src_dst;
> +    arg.a2 = get_user_reg(regs, 2) & mask;
> +    arg.a3 = get_user_reg(regs, 3) & mask;
> +    arg.a4 = get_user_reg(regs, 4) & mask;
> +    arg.a5 = get_user_reg(regs, 5) & mask;
> +    arg.a6 = get_user_reg(regs, 6) & mask;
> +    arg.a7 = get_user_reg(regs, 7) & mask;
> +
> +    while ( true ) {
> +        arm_smccc_1_2_smc(&arg, &resp);
> +
> +        switch ( resp.a0 )
> +        {
> +        case FFA_INTERRUPT:
> +            ctx->interrupted = true;
> +            goto out;
> +        case FFA_ERROR:
> +        case FFA_SUCCESS_32:
> +        case FFA_SUCCESS_64:
> +        case FFA_MSG_SEND_DIRECT_RESP_32:
> +        case FFA_MSG_SEND_DIRECT_RESP_64:
> +            goto out;
> +        default:
> +            /* Bad fid, report back. */
> +            memset(&arg, 0, sizeof(arg));
> +            arg.a0 = FFA_ERROR;
> +            arg.a1 = src_dst;
> +            arg.a2 = FFA_RET_NOT_SUPPORTED;
> +            continue;
> +        }
> +    }
> +
> +out:
> +    set_user_reg(regs, 0, resp.a0);
> +    set_user_reg(regs, 2, resp.a2 & mask);
> +    set_user_reg(regs, 1, resp.a1 & mask);
> +    set_user_reg(regs, 3, resp.a3 & mask);
> +    set_user_reg(regs, 4, resp.a4 & mask);
> +    set_user_reg(regs, 5, resp.a5 & mask);
> +    set_user_reg(regs, 6, resp.a6 & mask);
> +    set_user_reg(regs, 7, resp.a7 & mask);
> +}
> +
> +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> +                         struct ffa_address_range *range, uint32_t range_count,
> +                         unsigned int start_page_idx,
> +                         unsigned int *last_page_idx)
> +{
> +    unsigned int pg_idx = start_page_idx;
> +    unsigned long gfn;
> +    unsigned int n;
> +    unsigned int m;
> +    p2m_type_t t;
> +    uint64_t addr;
> +
> +    for ( n = 0; n < range_count; n++ ) {
> +        for ( m = 0; m < range[n].page_count; m++ ) {
> +            if ( pg_idx >= shm->page_count )
> +                return FFA_RET_INVALID_PARAMETERS;
> +
> +            addr = read_atomic(&range[n].address);
> +            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
> +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
> +            if ( !shm->pages[pg_idx] )
> +                return FFA_RET_DENIED;
> +            pg_idx++;
> +            /* Only normal RAM for now */
> +            if ( t != p2m_ram_rw )
> +                return FFA_RET_DENIED;
> +        }
> +    }
> +
> +    *last_page_idx = pg_idx;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static void put_shm_pages(struct ffa_shm_mem *shm)
> +{
> +    unsigned int n;
> +
> +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> +    {
> +        if ( shm->pages[n] ) {
> +            put_page(shm->pages[n]);
> +            shm->pages[n] = NULL;
> +        }
> +    }
> +}
> +
> +static void init_range(struct ffa_address_range *addr_range,
> +                       paddr_t pa)
> +{
> +    memset(addr_range, 0, sizeof(*addr_range));
> +    addr_range->address = pa;
> +    addr_range->page_count = 1;
> +}
> +
> +static int share_shm(struct ffa_shm_mem *shm)
> +{
> +    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
> +    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
> +    struct ffa_mem_access *mem_access_array;
> +    struct ffa_mem_region *region_descr;
> +    struct ffa_address_range *addr_range;
> +    paddr_t pa;
> +    paddr_t last_pa;
> +    unsigned int n;
> +    uint32_t frag_len;
> +    uint32_t tot_len;
> +    int ret;
> +    unsigned int range_count;
> +    unsigned int range_base;
> +    bool first;
> +
> +    memset(descr, 0, sizeof(*descr));
> +    descr->sender_id = shm->sender_id;
> +    descr->global_handle = shm->handle;
> +    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
> +    descr->mem_access_count = 1;
> +    descr->mem_access_size = sizeof(*mem_access_array);
> +    descr->mem_access_offs = sizeof(*descr);
> +    mem_access_array = (void *)(descr + 1);
> +    region_descr = (void *)(mem_access_array + 1);
> +
> +    memset(mem_access_array, 0, sizeof(*mem_access_array));
> +    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
> +    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
> +    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
> +
> +    memset(region_descr, 0, sizeof(*region_descr));
> +    region_descr->total_page_count = shm->page_count;
> +
> +    region_descr->address_range_count = 1;
> +    last_pa = page_to_maddr(shm->pages[0]);
> +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> +    {
> +        pa = page_to_maddr(shm->pages[n]);
> +        if ( last_pa + PAGE_SIZE == pa )
> +        {
> +            continue;
> +        }
> +        region_descr->address_range_count++;
> +    }
> +
> +    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
> +              sizeof(*region_descr) +
> +              region_descr->address_range_count * sizeof(*addr_range);
> +
> +    addr_range = region_descr->address_range_array;
> +    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
> +    last_pa = page_to_maddr(shm->pages[0]);
> +    init_range(addr_range, last_pa);
> +    first = true;
> +    range_count = 1;
> +    range_base = 0;
> +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> +    {
> +        pa = page_to_maddr(shm->pages[n]);
> +        if ( last_pa + PAGE_SIZE == pa )
> +        {
> +            addr_range->page_count++;
> +            continue;
> +        }
> +
> +        if (frag_len == max_frag_len) {
> +            if (first)
> +            {
> +                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> +                first = false;
> +            }
> +            else
> +            {
> +                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> +            }
> +            if (ret <= 0)
> +                return ret;
> +            range_base = range_count;
> +            range_count = 0;
> +            frag_len = sizeof(*addr_range);
> +            addr_range = ffa_tx;
> +        } else {
> +            frag_len += sizeof(*addr_range);
> +            addr_range++;
> +        }
> +        init_range(addr_range, pa);
> +        range_count++;
> +    }
> +
> +    if (first)
> +        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> +    else
> +        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> +}
> +
> +static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
> +                                struct ffa_mem_transaction_x *trans)
> +{
> +    uint16_t mem_reg_attr;
> +    uint32_t flags;
> +    uint32_t count;
> +    uint32_t offs;
> +    uint32_t size;
> +
> +    if (ffa_vers >= FFA_VERSION_1_1) {
> +        struct ffa_mem_transaction_1_1 *descr;
> +
> +        if (blen < sizeof(*descr))
> +            return FFA_RET_INVALID_PARAMETERS;
> +
> +        descr = buf;
> +        trans->sender_id = read_atomic(&descr->sender_id);
> +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> +        flags = read_atomic(&descr->flags);
> +        trans->global_handle = read_atomic(&descr->global_handle);
> +        trans->tag = read_atomic(&descr->tag);
> +
> +        count = read_atomic(&descr->mem_access_count);
> +        size = read_atomic(&descr->mem_access_size);
> +        offs = read_atomic(&descr->mem_access_offs);
> +    } else {
> +        struct ffa_mem_transaction_1_0 *descr;
> +
> +        if (blen < sizeof(*descr))
> +            return FFA_RET_INVALID_PARAMETERS;
> +
> +        descr = buf;
> +        trans->sender_id = read_atomic(&descr->sender_id);
> +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> +        flags = read_atomic(&descr->flags);
> +        trans->global_handle = read_atomic(&descr->global_handle);
> +        trans->tag = read_atomic(&descr->tag);
> +
> +        count = read_atomic(&descr->mem_access_count);
> +        size = sizeof(struct ffa_mem_access);
> +        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
> +    }
> +
> +    if (mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
> +        count > UINT8_MAX || offs > UINT16_MAX)
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    /* Check that the endpoint memory access descriptor array fits */
> +    if (size * count + offs > blen)
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    trans->mem_reg_attr = mem_reg_attr;
> +    trans->flags = flags;
> +    trans->mem_access_size = size;
> +    trans->mem_access_count = count;
> +    trans->mem_access_offs = offs;
> +    return 0;
> +}
> +
> +static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
> +                              unsigned int frag_len)
> +{
> +    struct domain *d = current->domain;
> +    unsigned int o = offs;
> +    unsigned int l;
> +    int ret;
> +
> +    if (frag_len < o)
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    /* Fill up the first struct ffa_address_range */
> +    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
> +    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
> +    s->range_offset += l;
> +    o += l;
> +    if (s->range_offset != sizeof(s->range))
> +        goto out;
> +    s->range_offset = 0;
> +
> +    while (true) {
> +        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
> +                            &s->current_page_idx);
> +        if (ret)
> +            return ret;
> +        if (s->range_count == 1)
> +            return 0;
> +        s->range_count--;
> +        if (frag_len - o < sizeof(s->range))
> +            break;
> +        memcpy(&s->range, s->buf + o, sizeof(s->range));
> +        o += sizeof(s->range);
> +    }
> +
> +    /* Collect any remaining bytes for the next struct ffa_address_range */
> +    s->range_offset = frag_len - o;
> +    memcpy(&s->range, s->buf + o, frag_len - o);
> +out:
> +    s->frag_offset += frag_len;
> +    return s->frag_offset;
> +}
> +
> +static void handle_mem_share(struct cpu_user_regs *regs)
> +{
> +    uint32_t tot_len = get_user_reg(regs, 1);
> +    uint32_t frag_len = get_user_reg(regs, 2);
> +    uint64_t addr = get_user_reg(regs, 3);
> +    uint32_t page_count = get_user_reg(regs, 4);
> +    struct ffa_mem_transaction_x trans;
> +    struct ffa_mem_access *mem_access;
> +    struct ffa_mem_region *region_descr;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    struct ffa_shm_mem *shm = NULL;
> +    unsigned int last_page_idx = 0;
> +    uint32_t range_count;
> +    uint32_t region_offs;
> +    int ret = FFA_RET_DENIED;
> +    uint32_t handle_hi = 0;
> +    uint32_t handle_lo = 0;
> +
> +    /*
> +     * We're only accepting memory transaction descriptors via the rx/tx
> +     * buffer.
> +     */
> +    if ( addr ) {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    /* Check that fragment length doesn't exceed total length */
> +    if (frag_len > tot_len) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out_unlock;
> +    }
> +
> +    spin_lock(&ffa_buffer_lock);
> +
> +    if ( frag_len > ctx->page_count * PAGE_SIZE )
> +        goto out_unlock;
> +
> +    if ( !ffa_page_count ) {
> +        ret = FFA_RET_NO_MEMORY;
> +        goto out_unlock;
> +    }
> +
> +    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
> +    if (ret)
> +        goto out_unlock;
> +
> +    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out;
> +    }
> +
> +    /* Only supports sharing it with one SP for now */
> +    if ( trans.mem_access_count != 1 )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    if ( trans.sender_id != get_vm_id(d) )
> +    {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out_unlock;
> +    }
> +
> +    /* Check that it fits in the supplied data */
> +    if ( trans.mem_access_offs + trans.mem_access_size > frag_len)
> +        goto out_unlock;
> +
> +    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
> +    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    region_offs = read_atomic(&mem_access->region_offs);
> +    if (sizeof(*region_descr) + region_offs > frag_len) {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
> +    range_count = read_atomic(&region_descr->address_range_count);
> +    page_count = read_atomic(&region_descr->total_page_count);
> +
> +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> +    if ( !shm )
> +    {
> +        ret = FFA_RET_NO_MEMORY;
> +        goto out;
> +    }
> +    shm->sender_id = trans.sender_id;
> +    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
> +    shm->page_count = page_count;
> +
> +    if (frag_len != tot_len) {
> +        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
> +
> +        if (!s) {
> +            ret = FFA_RET_NO_MEMORY;
> +            goto out;
> +        }
> +        s->shm = shm;
> +        s->range_count = range_count;
> +        s->buf = ctx->tx;
> +        s->buf_size = ffa_page_count * PAGE_SIZE;
> +        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
> +                                 frag_len);
> +        if (ret <= 0) {
> +            xfree(s);
> +            if (ret < 0)
> +                goto out;
> +        } else {
> +            shm->handle = next_handle++;
> +            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> +            spin_lock(&ffa_mem_list_lock);
> +            list_add_tail(&s->list, &ffa_frag_list);
> +            spin_unlock(&ffa_mem_list_lock);
> +        }
> +        goto out_unlock;
> +    }
> +
> +    /*
> +     * Check that the Composite memory region descriptor fits.
> +     */
> +    if ( sizeof(*region_descr) + region_offs +
> +         range_count * sizeof(struct ffa_address_range) > frag_len) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
> +                        0, &last_page_idx);
> +    if ( ret )
> +        goto out;
> +    if (last_page_idx != shm->page_count) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    /* Note that share_shm() uses our tx buffer */
> +    ret = share_shm(shm);
> +    if ( ret )
> +        goto out;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_add_tail(&shm->list, &ffa_mem_list);
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> +
> +out:
> +    if ( ret && shm )
> +    {
> +        put_shm_pages(shm);
> +        xfree(shm);
> +    }
> +out_unlock:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    if ( ret > 0 )
> +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
> +    else if ( ret == 0)
> +            set_regs_success(regs, handle_lo, handle_hi);
> +    else
> +            set_regs_error(regs, ret);
> +}
> +
> +static struct mem_frag_state *find_frag_state(uint64_t handle)
> +{
> +    struct mem_frag_state *s;
> +
> +    list_for_each_entry(s, &ffa_frag_list, list)
> +        if ( s->shm->handle == handle)
> +            return s;
> +
> +    return NULL;
> +}
> +
> +static void handle_mem_frag_tx(struct cpu_user_regs *regs)
> +{
> +    uint32_t frag_len = get_user_reg(regs, 3);
> +    uint32_t handle_lo = get_user_reg(regs, 1);
> +    uint32_t handle_hi = get_user_reg(regs, 2);
> +    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
> +    struct mem_frag_state *s;
> +    uint16_t sender_id = 0;
> +    int ret;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    s = find_frag_state(handle);
> +    if (!s) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +    sender_id = s->shm->sender_id;
> +
> +    if (frag_len > s->buf_size) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    ret = add_mem_share_frag(s, 0, frag_len);
> +    if (ret == 0) {
> +        /* Note that share_shm() uses our tx buffer */
> +        ret = share_shm(s->shm);
> +        if (ret == 0) {
> +            spin_lock(&ffa_mem_list_lock);
> +            list_add_tail(&s->shm->list, &ffa_mem_list);
> +            spin_unlock(&ffa_mem_list_lock);
> +        } else {
> +            put_shm_pages(s->shm);
> +            xfree(s->shm);
> +        }
> +        spin_lock(&ffa_mem_list_lock);
> +        list_del(&s->list);
> +        spin_unlock(&ffa_mem_list_lock);
> +        xfree(s);
> +    } else if (ret < 0) {
> +        put_shm_pages(s->shm);
> +        xfree(s->shm);
> +        spin_lock(&ffa_mem_list_lock);
> +        list_del(&s->list);
> +        spin_unlock(&ffa_mem_list_lock);
> +        xfree(s);
> +    }
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    if ( ret > 0 )
> +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
> +    else if ( ret == 0)
> +            set_regs_success(regs, handle_lo, handle_hi);
> +    else
> +            set_regs_error(regs, ret);
> +}
> +
> +static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
> +{
> +    struct ffa_shm_mem *shm;
> +    uint32_t handle_hi;
> +    uint32_t handle_lo;
> +    int ret;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_for_each_entry(shm, &ffa_mem_list, list) {
> +        if ( shm->handle == handle )
> +            goto found_it;
> +    }
> +    shm = NULL;
> +found_it:
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    if ( !shm )
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    reg_pair_from_64(&handle_hi, &handle_lo, handle);
> +    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
> +    if ( ret )
> +        return ret;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_del(&shm->list);
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    put_shm_pages(shm);
> +    xfree(shm);
> +
> +    return ret;
> +}
> +
> +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t count;
> +    uint32_t e;
> +
> +    if ( !ctx )
> +        return false;
> +
> +    switch ( fid )
> +    {
> +    case FFA_VERSION:
> +        handle_version(regs);
> +        return true;
> +    case FFA_ID_GET:
> +        set_regs_success(regs, get_vm_id(d), 0);
> +        return true;
> +    case FFA_RXTX_MAP_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_RXTX_MAP_64:
> +#endif
> +        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
> +                            get_user_reg(regs, 3));
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_RXTX_UNMAP:
> +        e = handle_rxtx_unmap();
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_PARTITION_INFO_GET:
> +        e = handle_partition_info_get(get_user_reg(regs, 1),
> +                                      get_user_reg(regs, 2),
> +                                      get_user_reg(regs, 3),
> +                                      get_user_reg(regs, 4),
> +                                      get_user_reg(regs, 5), &count);
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, count, 0);
> +        return true;
> +    case FFA_RX_RELEASE:
> +        e = handle_rx_release();
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_MSG_SEND_DIRECT_REQ_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_MSG_SEND_DIRECT_REQ_64:
> +#endif
> +        handle_msg_send_direct_req(regs, fid);
> +        return true;
> +    case FFA_MEM_SHARE_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_MEM_SHARE_64:
> +#endif
> +        handle_mem_share(regs);
> +        return true;
> +    case FFA_MEM_RECLAIM:
> +        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
> +                                              get_user_reg(regs, 1)),
> +                               get_user_reg(regs, 3));
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_MEM_FRAG_TX:
> +        handle_mem_frag_tx(regs);
> +        return true;
> +
> +    default:
> +        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
> +        return false;
> +    }
> +}
> +
> +int ffa_domain_init(struct domain *d)
> +{
> +    struct ffa_ctx *ctx;
> +    unsigned int n;
> +    unsigned int m;
> +    unsigned int c_pos;
> +    int32_t res;
> +
> +    if ( !ffa_version )
> +        return 0;
> +
> +    ctx = xzalloc(struct ffa_ctx);
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    for ( n = 0; n < subsr_vm_created_count; n++ ) {
> +        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
> +                                     FFA_MSG_SEND_VM_CREATED);
> +        if ( res ) {
> +            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
> +                   get_vm_id(d), subsr_vm_created[n], res);
> +            c_pos = n;
> +            goto err;
> +        }
> +    }
> +
> +    d->arch.ffa = ctx;
> +
> +    return 0;
> +
> +err:
> +    /* Undo any already sent VM created messages */
> +    for ( n = 0; n < c_pos; n++ )
> +        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
> +            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
> +                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
> +                                       FFA_MSG_SEND_VM_DESTROYED);
> +    return -ENOMEM;
> +}
> +
> +int ffa_relinquish_resources(struct domain *d)
> +{
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    unsigned int n;
> +    int32_t res;
> +
> +    if ( !ctx )
> +        return 0;
> +
> +    for ( n = 0; n < subsr_vm_destroyed_count; n++ ) {
> +        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
> +                                     FFA_MSG_SEND_VM_DESTROYED);
> +
> +        if ( res )
> +            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
> +                   get_vm_id(d), subsr_vm_destroyed[n], res);
> +    }
> +
> +    XFREE(d->arch.ffa);
> +
> +    return 0;
> +}
> +
> +static bool __init init_subscribers(void)
> +{
> +    struct ffa_partition_info_1_1 *fpi;
> +    bool ret = false;
> +    uint32_t count;
> +    uint32_t e;
> +    uint32_t n;
> +    uint32_t c_pos;
> +    uint32_t d_pos;
> +
> +    if ( ffa_version < FFA_VERSION_1_1 )
> +        return true;
> +
> +    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
> +    ffa_rx_release();
> +    if ( e ) {
> +        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
> +        goto out;
> +    }
> +
> +    fpi = ffa_rx;
> +    subsr_vm_created_count = 0;
> +    subsr_vm_destroyed_count = 0;
> +    for ( n = 0; n < count; n++ ) {
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> +            subsr_vm_created_count++;
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> +            subsr_vm_destroyed_count++;
> +    }
> +
> +    if ( subsr_vm_created_count )
> +        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
> +    if ( subsr_vm_destroyed_count )
> +        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
> +    if ( (subsr_vm_created_count && !subsr_vm_created) ||
> +        (subsr_vm_destroyed_count && !subsr_vm_destroyed) ) {
> +        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
> +        subsr_vm_created_count = 0;
> +        subsr_vm_destroyed_count = 0;
> +        XFREE(subsr_vm_created);
> +        XFREE(subsr_vm_destroyed);
> +        goto out;
> +    }
> +
> +    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) {
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> +            subsr_vm_created[c_pos++] = fpi[n].id;
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> +            subsr_vm_destroyed[d_pos++] = fpi[n].id;
> +    }
> +
> +    ret = true;
> +out:
> +    ffa_rx_release();
> +    return ret;
> +}
> +
> +static int __init ffa_init(void)
> +{
> +    uint32_t vers;
> +    uint32_t e;
> +    unsigned int major_vers;
> +    unsigned int minor_vers;
> +
> +    /*
> +     * psci_init_smccc() updates this value with what's reported by EL-3
> +     * or secure world.
> +     */
> +    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
> +    {
> +        printk(XENLOG_ERR
> +               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
> +               smccc_ver, ARM_SMCCC_VERSION_1_2);
> +        return 0;
> +    }
> +
> +    if ( !ffa_get_version(&vers) )
> +        return 0;
> +
> +    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
> +    {
> +        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
> +        return 0;
> +    }
> +
> +    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
> +    minor_vers = vers & FFA_VERSION_MINOR_MASK;
> +    printk(XENLOG_ERR "ARM FF-A Mediator version %u.%u\n",
> +           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
> +    printk(XENLOG_ERR "ARM FF-A Firmware version %u.%u\n",
> +           major_vers, minor_vers);

These two printk()s should use XENLOG_INFO rather than XENLOG_ERR, since they
report normal initialization progress, not errors.

> +    ffa_rx = alloc_xenheap_pages(0, 0);
> +    if ( !ffa_rx )
> +        return 0;
> +
> +    ffa_tx = alloc_xenheap_pages(0, 0);
> +    if ( !ffa_tx )
> +        goto err_free_ffa_rx;
> +
> +    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
> +    if ( e )
> +    {
> +        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
> +        goto err_free_ffa_tx;
> +    }
> +    ffa_page_count = 1;
> +    ffa_version = vers;
> +
> +    if ( !init_subscribers() )
> +        goto err_free_ffa_tx;
> +
> +    return 0;
> +
> +err_free_ffa_tx:
> +    free_xenheap_pages(ffa_tx, 0);
> +    ffa_tx = NULL;
> +err_free_ffa_rx:
> +    free_xenheap_pages(ffa_rx, 0);
> +    ffa_rx = NULL;
> +    ffa_page_count = 0;
> +    ffa_version = 0;
> +    XFREE(subsr_vm_created);
> +    subsr_vm_created_count = 0;
> +    XFREE(subsr_vm_destroyed);
> +    subsr_vm_destroyed_count = 0;
> +    return 0;
> +}
> +
> +__initcall(ffa_init);
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index ed63c2b6f91f..b3dee269bced 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -103,6 +103,10 @@ struct arch_domain
>      void *tee;
>  #endif
>  
> +#ifdef CONFIG_FFA
> +    void *ffa;
> +#endif
> +
>      bool directmap;
>  }  __cacheline_aligned;
>  
> diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
> new file mode 100644
> index 000000000000..1c6ce6421294
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/ffa.h
> @@ -0,0 +1,71 @@
> +/*
> + * xen/arch/arm/include/asm/ffa.h
> + *
> + * Arm Firmware Framework for Armv8-A (FF-A) mediator
> + *
> + * Copyright (C) 2021  Linaro Limited
> + *
> + * Permission is hereby granted, free of charge, to any person
> + * obtaining a copy of this software and associated documentation
> + * files (the "Software"), to deal in the Software without restriction,
> + * including without limitation the rights to use, copy, modify, merge,
> + * publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so,
> + * subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be
> + * included in all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __ASM_ARM_FFA_H__
> +#define __ASM_ARM_FFA_H__
> +
> +#include <xen/const.h>
> +
> +#include <asm/smccc.h>
> +#include <asm/types.h>
> +
> +#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
> +#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
> +
> +static inline bool is_ffa_fid(uint32_t fid)
> +{
> +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> +
> +    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
> +}
> +
> +#ifdef CONFIG_FFA
> +#define FFA_NR_FUNCS    11
> +
> +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
> +int ffa_domain_init(struct domain *d);
> +int ffa_relinquish_resources(struct domain *d);
> +#else
> +#define FFA_NR_FUNCS    0
> +
> +static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    return false;
> +}
> +
> +static inline int ffa_domain_init(struct domain *d)
> +{
> +    return 0;
> +}
> +
> +static inline int ffa_relinquish_resources(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif
> +
> +#endif /*__ASM_ARM_FFA_H__*/
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index 6f90c08a6304..34586025eff8 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -20,6 +20,7 @@
>  #include <public/arch-arm/smccc.h>
>  #include <asm/cpuerrata.h>
>  #include <asm/cpufeature.h>
> +#include <asm/ffa.h>
>  #include <asm/monitor.h>
>  #include <asm/regs.h>
>  #include <asm/smccc.h>
> @@ -32,7 +33,7 @@
>  #define XEN_SMCCC_FUNCTION_COUNT 3
>  
>  /* Number of functions currently supported by Standard Service Service Calls. */
> -#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
> +#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
>  
>  static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
>  {
> @@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
>      return do_vpsci_0_1_call(regs, fid);
>  }
>  
> +static bool is_psci_fid(uint32_t fid)
> +{
> +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> +
> +    /* fn is unsigned, so fn >= 0 is always true */
> +    return fn <= 0x1fU;
> +}
> +
>  /* PSCI 0.2 interface and other Standard Secure Calls */
>  static bool handle_sssc(struct cpu_user_regs *regs)
>  {
>      uint32_t fid = (uint32_t)get_user_reg(regs, 0);
>  
> -    if ( do_vpsci_0_2_call(regs, fid) )
> -        return true;
> +    if ( is_psci_fid(fid) )
> +        return do_vpsci_0_2_call(regs, fid);
> +
> +    if ( is_ffa_fid(fid) )
> +        return ffa_handle_call(regs, fid);
>  
>      switch ( fid )
>      {
> -- 
> 2.31.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 02:17:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 02:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346900.572868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzqgI-00077f-RD; Sat, 11 Jun 2022 02:16:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346900.572868; Sat, 11 Jun 2022 02:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzqgI-00077Y-NQ; Sat, 11 Jun 2022 02:16:54 +0000
Received: by outflank-mailman (input) for mailman id 346900;
 Sat, 11 Jun 2022 02:16:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzqgH-00077O-T4; Sat, 11 Jun 2022 02:16:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzqgH-0005Mi-P4; Sat, 11 Jun 2022 02:16:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzqgH-0000mu-C8; Sat, 11 Jun 2022 02:16:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzqgH-0006Jf-Bg; Sat, 11 Jun 2022 02:16:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7hg9FEZqqCCc78eDfDlVKHntVjjlYP8EDDTWqBHE6bI=; b=4h7qjjXxM5TB9V/gp8CRE3Pcdp
	N1TBDmqSAz7xi+Q2V/8KmczF53gGJ4sSltapWtcBKKYL86e7vVT4GGjY2m5+0+uXhxMss4aUZPAs2
	d/wJpqzGTs3Sne1fw9Vh1pnc6BCV2nQvHJXxhqwvqrFcu72rFCQ9s7St2sctAJUzolBk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170913-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 170913: regressions - FAIL
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-xl-shadow:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-coresched-i386-xl:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-pair:xen-boot/src_host:fail:regression
    xen-4.16-testing:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-livepatch:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemut-rhel6hvm-intel:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-migrupgrade:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-pair:xen-boot/src_host:fail:regression
    xen-4.16-testing:test-amd64-i386-libvirt-pair:xen-boot/dst_host:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-freebsd10-amd64:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-boot:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    xen-4.16-testing:test-xtf-amd64-amd64-2:xtf/test-pv32pae-xsa-188:fail:heisenbug
    xen-4.16-testing:test-xtf-amd64-amd64-2:leak-check/check:fail:heisenbug
    xen-4.16-testing:test-armhf-armhf-xl-credit1:host-ping-check-xen:fail:heisenbug
    xen-4.16-testing:test-xtf-amd64-amd64-5:xtf/test-pv32pae-pv-iopl~hypercall:fail:heisenbug
    xen-4.16-testing:test-xtf-amd64-amd64-5:leak-check/check:fail:heisenbug
    xen-4.16-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-4.16-testing:test-amd64-i386-xl-vhd:xen-boot:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:xen-boot:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
X-Osstest-Versions-That:
    xen=8e11ec8fbf6f933f8854f4bc54226653316903f2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 02:16:53 +0000

flight 170913 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170913/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-shadow     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-coresched-i386-xl  8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170871
 test-amd64-i386-xl-xsm        8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-pair         12 xen-boot/src_host        fail REGR. vs. 170871
 test-amd64-i386-pair         13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170871
 test-amd64-i386-freebsd10-i386  8 xen-boot               fail REGR. vs. 170871
 test-amd64-i386-libvirt       8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-livepatch     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-qemuu-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-qemut-rhel6hvm-intel  8 xen-boot         fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-win7-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  8 xen-boot  fail REGR. vs. 170871
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-libvirt-xsm   8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-debianhvm-amd64  8 xen-boot     fail REGR. vs. 170871
 test-amd64-i386-migrupgrade  13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-qemut-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170871
 test-amd64-i386-xl-pvshim     8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-libvirt-pair 12 xen-boot/src_host        fail REGR. vs. 170871
 test-amd64-i386-libvirt-pair 13 xen-boot/dst_host        fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170871
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-xl            8 xen-boot                 fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-ovmf-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-ws16-amd64  8 xen-boot          fail REGR. vs. 170871
 test-amd64-i386-freebsd10-amd64  8 xen-boot              fail REGR. vs. 170871
 test-amd64-i386-qemuu-rhel6hvm-amd  8 xen-boot           fail REGR. vs. 170871
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 170871

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-2 87 xtf/test-pv32pae-xsa-188 fail in 170901 pass in 170913
 test-xtf-amd64-amd64-2       88 leak-check/check fail in 170901 pass in 170913
 test-armhf-armhf-xl-credit1 10 host-ping-check-xen fail in 170901 pass in 170913
 test-xtf-amd64-amd64-5 82 xtf/test-pv32pae-pv-iopl~hypercall fail pass in 170901
 test-xtf-amd64-amd64-5       83 leak-check/check           fail pass in 170901
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 170901

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd        8 xen-boot                fail blocked in 170871
 test-amd64-i386-libvirt-raw   8 xen-boot                fail blocked in 170871
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170871
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170871
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170871
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170871
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
baseline version:
 xen                  8e11ec8fbf6f933f8854f4bc54226653316903f2

Last test of basis   170871  2022-06-07 12:36:55 Z    3 days
Testing same since   170901  2022-06-09 13:36:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:29:38 2022 +0200

    x86/pv: Track and flush non-coherent mappings of RAM
    
    There are legitimate uses of WC mappings of RAM, e.g. for DMA buffers with
    devices that make non-coherent writes.  The Linux sound subsystem makes
    extensive use of this technique.
    
    For such usecases, the guest's DMA buffer is mapped and consistently used as
    WC, and Xen doesn't interact with the buffer.
    
    However, a mischievous guest can use WC mappings to deliberately create
    non-coherency between the cache and RAM, and use this to trick Xen into
    validating a pagetable which isn't actually safe.
    
    Allocate a new PGT_non_coherent to track the non-coherency of mappings.  Set
    it whenever a non-coherent writeable mapping is created.  If the page is used
    as anything other than PGT_writable_page, force a cache flush before
    validation.  Also force a cache flush before the page is returned to the heap.
    
    This is CVE-2022-26364, part of XSA-402.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c1c9cae3a9633054b177c5de21ad7268162b2f2c
    master date: 2022-06-09 14:23:37 +0200

commit c4815be949aae6583a9a22897beb96b095b4f1a2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:29:13 2022 +0200

    x86/amd: Work around CLFLUSH ordering on older parts
    
    On pre-CLFLUSHOPT AMD CPUs, CLFLUSH is weakly ordered with everything,
    including reads and writes to the address, and LFENCE/SFENCE instructions.
    
    This creates a multitude of problematic corner cases, laid out in the manual.
    Arrange to use MFENCE on both sides of the CLFLUSH to force proper ordering.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 062868a5a8b428b85db589fa9a6d6e43969ffeb9
    master date: 2022-06-09 14:23:07 +0200
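
    [Editor's sketch, not Xen's actual code.] The workaround described above
    amounts to bracketing each CLFLUSH with MFENCE, since on the affected
    parts CLFLUSH is not ordered against loads, stores, or LFENCE/SFENCE.
    A minimal x86-64 illustration (the helper name is hypothetical):

    ```c
    /*
     * Illustrative only: force ordering of CLFLUSH on pre-CLFLUSHOPT AMD
     * parts by issuing MFENCE on both sides.  The "m" operand tells the
     * compiler which memory is referenced; the "memory" clobber prevents
     * reordering of surrounding accesses across the flush.  x86-64 only.
     */
    static inline void clflush_fenced(const volatile void *addr)
    {
        asm volatile ("mfence; clflush %0; mfence"
                      :: "m" (*(const volatile char *)addr)
                      : "memory");
    }
    ```

    Flushing the line does not alter its contents; it only writes back and
    invalidates the cacheline containing the address.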

commit 8eafa2d871ae51d461256e4a14175e24df330c70
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:28:48 2022 +0200

    x86: Split cache_flush() out of cache_writeback()
    
    Subsequent changes will want a fully flushing version.
    
    Use the new helper rather than opencoding it in flush_area_local().  This
    resolves an outstanding issue where the conditional sfence is on the wrong
    side of the clflushopt loop.  clflushopt is ordered with respect to older
    stores, not to younger stores.
    
    Rename gnttab_cache_flush()'s helper to avoid colliding in name.
    grant_table.c can see the prototype from cache.h so the build fails
    otherwise.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 9a67ffee3371506e1cbfdfff5b90658d4828f6a2
    master date: 2022-06-09 14:22:38 +0200

commit 74193f4292d9cfc2874866e941d9939d8f33fcef
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:28:23 2022 +0200

    x86: Don't change the cacheability of the directmap
    
    Changeset 55f97f49b7ce ("x86: Change cache attributes of Xen 1:1 page mappings
    in response to guest mapping requests") attempted to keep the cacheability
    consistent between different mappings of the same page.
    
    The reason wasn't described in the changelog, but it is understood to be in
    regards to a concern over machine check exceptions, owing to errata when using
    mixed cacheabilities.  It did this primarily by updating Xen's mapping of the
    page in the direct map when the guest mapped a page with reduced cacheability.
    
    Unfortunately, the logic didn't actually prevent mixed cacheability from
    occurring:
     * A guest could map a page normally, and then map the same page with
       different cacheability; nothing prevented this.
     * The cacheability of the directmap was always latest-takes-precedence in
       terms of guest requests.
     * Grant-mapped frames with lesser cacheability didn't adjust the page's
       cacheattr settings.
     * The map_domain_page() function still unconditionally created WB mappings,
       irrespective of the page's cacheattr settings.
    
    Additionally, update_xen_mappings() had a bug where the alias calculation was
    wrong for mfn's which were .init content, which should have been treated as
    fully guest pages, not Xen pages.
    
    Worse yet, the logic introduced a vulnerability whereby necessary
    pagetable/segdesc adjustments made by Xen in the validation logic could become
    non-coherent between the cache and main memory.  The CPU could subsequently
    operate on the stale value in the cache, rather than the safe value in main
    memory.
    
    The directmap contains primarily mappings of RAM.  PAT/MTRR conflict
    resolution is asymmetric, and generally for MTRR=WB ranges, PAT of lesser
    cacheability resolves to being coherent.  The special case is WC mappings,
    which are non-coherent against MTRR=WB regions (except for fully-coherent
    CPUs).
    
    Xen must not have any WC cacheability in the directmap, to prevent Xen's
    actions from creating non-coherency.  (Guest actions creating non-coherency is
    dealt with in subsequent patches.)  As all memory types for MTRR=WB ranges
    inter-operate coherently, leave Xen's directmap mappings as WB.
    
    Only PV guests with access to devices can use reduced-cacheability mappings to
    begin with, and they're trusted not to mount DoSs against the system anyway.
    
    Drop PGC_cacheattr_{base,mask} entirely, and the logic to manipulate them.
    Shift the later PGC_* constants up, to gain 3 extra bits in the main reference
    count.  Retain the check in get_page_from_l1e() for special_pages() because a
    guest has no business using reduced cacheability on these.
    
    This reverts changeset 55f97f49b7ce6c3520c555d19caac6cf3f9a5df0
    
    This is CVE-2022-26363, part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: ae09597da34aee6bc5b76475c5eea6994457e854
    master date: 2022-06-09 14:22:08 +0200

commit 9cfd796ae05421ded8e4f70b2c55352491cfa841
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:53 2022 +0200

    x86/page: Introduce _PAGE_* constants for memory types
    
    ... rather than opencoding the PAT/PCD/PWT attributes in __PAGE_HYPERVISOR_*
    constants.  These are going to be needed by forthcoming logic.
    
    No functional change.
    
    This is part of XSA-402.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 1be8707c75bf4ba68447c74e1618b521dd432499
    master date: 2022-06-09 14:21:38 +0200

commit 8dab3f79b122e69cbcdebca72cdc14f004ee2193
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:37 2022 +0200

    x86/pv: Fix ABAC cmpxchg() race in _get_page_type()
    
    _get_page_type() suffers from a race condition where it incorrectly assumes
    that because 'x' was read and a subsequent cmpxchg() succeeds, the type
    cannot have changed in-between.  Consider:
    
    CPU A:
      1. Creates an L2e referencing pg
         `-> _get_page_type(pg, PGT_l1_page_table), sees count 0, type PGT_writable_page
      2.     Issues flush_tlb_mask()
    CPU B:
      3. Creates a writeable mapping of pg
         `-> _get_page_type(pg, PGT_writable_page), count increases to 1
      4. Writes into new mapping, creating a TLB entry for pg
      5. Removes the writeable mapping of pg
         `-> _put_page_type(pg), count goes back down to 0
    CPU A:
      7.     Issues cmpxchg(), setting count 1, type PGT_l1_page_table
    
    CPU B now has a writeable mapping to pg, which Xen believes is a pagetable and
    suitably protected (i.e. read-only).  The TLB flush in step 2 must be deferred
    until after the guest is prohibited from creating new writeable mappings,
    which is after step 7.
    
    Defer all safety actions until after the cmpxchg() has successfully taken the
    intended typeref, because that is what prevents concurrent users from using
    the old type.
    
    Also remove the early validation for writeable and shared pages.  This removes
    race conditions where one half of a parallel mapping attempt can return
    successfully before:
     * The IOMMU pagetables are in sync with the new page type
     * Writeable mappings to shared pages have been torn down
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Reported-by: Jann Horn <jannh@google.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 8cc5036bc385112a82f1faff27a0970e6440dfed
    master date: 2022-06-09 14:21:04 +0200
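
    [Editor's sketch, not Xen's actual code.] The fix described above boils
    down to: take the typeref with the cmpxchg first, and only on success
    perform the safety actions, because only a successful cmpxchg prevents
    concurrent users from re-acquiring the old type.  A minimal C11 sketch
    (type_info_t, the helper names, and the flag values are stand-ins for
    Xen's real definitions):

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef unsigned long type_info_t;   /* stand-in for the type_info word */

    /* Hypothetical stand-in for flush_tlb_mask() and friends. */
    static void flush_stale_mappings(void) { /* ... */ }

    /*
     * Nothing irreversible happens until the cmpxchg has actually taken
     * the typeref; a concurrent change of the word makes us bail out
     * rather than proceed on a stale observation of 'x'.
     */
    static bool get_page_type_sketch(_Atomic type_info_t *type_info,
                                     type_info_t expected, type_info_t wanted)
    {
        type_info_t x = atomic_load(type_info);

        do {
            if ( x != expected )
                return false;        /* type/count changed under our feet */
            /* On failure, x is reloaded with the current value. */
        } while ( !atomic_compare_exchange_weak(type_info, &x, wanted) );

        /* Typeref held: the guest can no longer create new writeable
         * mappings, so the deferred TLB flush is now meaningful. */
        flush_stale_mappings();
        return true;
    }
    ```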

commit b152dfbc3ad71a788996440b18174d995c3bffc9
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jun 9 15:27:19 2022 +0200

    x86/pv: Clean up _get_page_type()
    
    Various fixes for clarity, ahead of making complicated changes.
    
     * Split the overflow check out of the if/else chain for type handling, as
       it's somewhat unrelated.
     * Comment the main if/else chain to explain what is going on.  Adjust one
       ASSERT() and state the bit layout for validate-locked and partial states.
     * Correct the comment about TLB flushing, as it's backwards.  The problem
       case is when writeable mappings are retained to a page becoming read-only,
       as it allows the guest to bypass Xen's safety checks for updates.
     * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
       valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
       all reads as explicitly volatile.  The only thing preventing the validated
       wait-loop being infinite is the compiler barrier hidden in cpu_relax().
     * Replace one page_get_owner(page) with the already-calculated 'd' in
       scope.
    
    No functional change.
    
    This is part of XSA-401 / CVE-2022-26362.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
    master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
    master date: 2022-06-09 14:20:36 +0200
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 02:49:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 02:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346929.572923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzrBT-0003La-8Z; Sat, 11 Jun 2022 02:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346929.572923; Sat, 11 Jun 2022 02:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzrBT-0003LT-59; Sat, 11 Jun 2022 02:49:07 +0000
Received: by outflank-mailman (input) for mailman id 346929;
 Sat, 11 Jun 2022 02:49:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzrBS-0003LJ-FT; Sat, 11 Jun 2022 02:49:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzrBS-0005ti-AY; Sat, 11 Jun 2022 02:49:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzrBR-0002GF-SS; Sat, 11 Jun 2022 02:49:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzrBR-0006XI-Ry; Sat, 11 Jun 2022 02:49:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AfOn3rb74dJ5LFAg3ls7gkeLUO5/I72D3+S/yxdaB7U=; b=M48LFbllJQDhFpZ1FNE4KNpr+y
	bN064jRQ5HUtHMcNejQn42CHEPOHtPzsbN+RCDdCQFt0ER0HYeCv1KKskypHDiUzkYsSVyvB28AOB
	nj/oQIlMthqlnT9MxslqOMbBQGBnVNHCXWY33yUOEpBmwa85euLTXjgOkNdepMpHfgYE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 170922: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0d12261727410d13c4a59d94e34937d6d91ba641
X-Osstest-Versions-That:
    xen=64249afeb63cf7d70b4faf02e76df5eed82371f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 02:49:05 +0000

flight 170922 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170922/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170870
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170870
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170870
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170870
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170870
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170870
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170870
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170870
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170870
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170870
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170870
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170870
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170870
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0d12261727410d13c4a59d94e34937d6d91ba641
baseline version:
 xen                  64249afeb63cf7d70b4faf02e76df5eed82371f9

Last test of basis   170870  2022-06-07 12:36:54 Z    3 days
Failing since        170904  2022-06-09 14:07:51 Z    1 days    2 attempts
Testing same since   170922  2022-06-10 08:54:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   64249afeb6..0d12261727  0d12261727410d13c4a59d94e34937d6d91ba641 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 04:10:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 04:10:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346947.572951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzsRi-00056J-DZ; Sat, 11 Jun 2022 04:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346947.572951; Sat, 11 Jun 2022 04:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzsRi-00056C-Az; Sat, 11 Jun 2022 04:09:58 +0000
Received: by outflank-mailman (input) for mailman id 346947;
 Sat, 11 Jun 2022 04:09:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzsRh-000562-7v; Sat, 11 Jun 2022 04:09:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzsRh-0007Jr-6D; Sat, 11 Jun 2022 04:09:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzsRg-0006oX-Sr; Sat, 11 Jun 2022 04:09:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzsRg-0007x7-SP; Sat, 11 Jun 2022 04:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m2oDbuvSEJ/mGNmaAqDi6Nv/ZAEWKHvplh00NBnuVG4=; b=arvXtxkCdONgIIl8+5Cd0QPrHo
	/j4xU1lnPvbydTfPoY5oABwfu514EzqzojqCKs9269gg79oioo61VGZhh26B9Yb1Zr5ns1qw52/8s
	dTs9TCF4iY7sQnFLas3dQULS43MHKlrqs/2+ebBcZtQHoj85J/GyIm9AJ6i6yveSp+RI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170978-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 170978: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e051b5cd1043cc1aad506faace824e6aacc887bf
X-Osstest-Versions-That:
    xen=b8bc4588b32e8a40354defac29ceb9c90e570af8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 04:09:56 +0000

flight 170978 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170978/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e051b5cd1043cc1aad506faace824e6aacc887bf
baseline version:
 xen                  b8bc4588b32e8a40354defac29ceb9c90e570af8

Last test of basis   170924  2022-06-10 09:10:47 Z    0 days
Testing same since   170978  2022-06-11 00:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b8bc4588b3..e051b5cd10  e051b5cd1043cc1aad506faace824e6aacc887bf -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 04:42:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 04:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346665.572977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001f0-BP; Sat, 11 Jun 2022 04:42:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346665.572977; Sat, 11 Jun 2022 04:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001dq-6R; Sat, 11 Jun 2022 04:42:06 +0000
Received: by outflank-mailman (input) for mailman id 346665;
 Fri, 10 Jun 2022 19:11:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z8as=WR=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1nzk2O-0005Fb-Qj
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 19:11:16 +0000
Received: from sonic313-20.consmr.mail.gq1.yahoo.com
 (sonic313-20.consmr.mail.gq1.yahoo.com [98.137.65.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 174cbca7-e8f1-11ec-8901-93a377f238d6;
 Fri, 10 Jun 2022 21:11:13 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Fri, 10 Jun 2022 19:11:10 +0000
Received: by hermes--canary-production-bf1-856dbf94db-29hkj (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID 9d6f995788e29b021bb644e07662bfe3; 
 Fri, 10 Jun 2022 19:11:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 174cbca7-e8f1-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1654888270; bh=szQQXgwdpNsXgDZ8HmI7zcWajUugI4/mSs4RjBC4WHA=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=duqtH943Dlyju92svX3AFoc3OSDY/43X3Hgu2OgwUAV0o+1z0A5EukqnBjeacRY6CLlgAEetJn4gQc57vjpNXDdF4txnBu+EkAUFQw4Pg4DV4isbM+GXCTJGs1fwCLhOm7RpsZI2ZiJ12lkkmo9DP+hxl8PsE8hDoPBPCKFzsvRjCO4/nL5IwAy9DbGDVTwJdifKJY9hixEpLa/pf4gq73ZpU6XlDFQeoRUmUKMVecE+sQEgyMk/xIn/pK9ACj+FeuePV2YGFSImdBDnTXjekyDJvXDY6ab/rfWQthP6+vaw0xARRtXhR42JVuxK1hQJgfwjmfmOx2JaiKM6r9h2vA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1654888270; bh=58rsgOgWzfRp11yAmimyUtsHGChA3r+y5aL6LGS3EoG=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=aeh+IS/kWViQ2K5fNMH2tFb1Xsamjmi/unryJA+DqFktY+iS3OAfA3Bd7sDl5lS+ftuIdVDlZf0nwfzvooY5wRkBF3GO34wmyEyoR2usPoGPrxCPpaKpoEn+mIw2XEQuzGI98ELB7Cyday4ifW7Q9CEUzIXlFFZDTz0NGVrKTlAb5vhYc7l1Bi/JgsaoDUP3EzS1g7JynGmhbdXQ0hliH3ghi/u5/kl2uiW/G5XQzMlunK59RNHbsLw/nrsmYd7FN7j+5n12cH5Q9lTLtR9m9e9Jq9yZfIZUJwuyKD5TLlSfm/xDWSAmbo6K10qKm5gPNeCSG71TKbg4niuQcIIJfg==
X-YMail-OSG: EcVWYf0VM1kaBprv3wOo91pcd4NE_dHn3XGuYnQ..1kuU45apy5Yx23HAR0AzY4
 0T7DMF6i9Hm2C_mcD8O_MDD4NWhPEzl3i2ddvOIeiJaT3qLIS3q4y6TerHf1Rpa9V62vCf5NuMfa
 F115JdgVAKEaN37_LDTh_5QgSSpezwcE_PdynM0GT7kapehBh7pYiq5RkfQIALWHaCm03sIIXnhD
 oCGJrgRhbRiy57RoNAhXvKnQtXyeQMWURWuYKN9mGIqqcgr8vGGmvDDHLIRvR_xBBoXvT7khIW5p
 rjH7_.T85f.1VsL_M6eo15tibVK4qorxPNTD5Lf9X_4EXNY4w4dXA_KfG2sdUhrwurJ8wy8_8QLk
 zvXFcw4IjeU4iQudpeZJcXw1es2K5XD6Eph5fKjKyprZtU.lLefx8cZ0.5CspbP_FsPT71KG_GHq
 gY3RHGQWzlPPVFnukSxdV0SHSkPKnyD4sXMj9vZlBMNHUg6z9Y2_p6UzoCVK478QFEpX9uypFBSz
 q69j3aS_4MfAEVmqClfyViiQ2jDSj5XthjcO8_9nNDuDqmkjUXGsqXCDdJuq4x5R9YEeYOdvrmXg
 Jh1rkwUusOMyS6pwvvRvA2p_9q3FkVVHR1BvWARpN0cFkYdjgxWiJWkKJXWndN6Xwqpv7ppL2qka
 HfrXyV9BpTS5bzGLwqIXCVshdVw89zIqV4pMhj.JawjxOw6R0hmcstkUlkj88vqPaOS9LGwA7wlH
 OoL9fumSgHkeDAmFbiT47Ry841dSkcRNgrep_39R_oQm2FzYiugg.AyVqT2mlKfYJNCQ.Pt59Oal
 SrSjYjXCp24BqOfjTrDwkKfm6bQB163jeyWizp9mzr6Q.HC7RsZW18JHyMb9N9KW_of0nvmbdbPk
 nwm7i4HkQf69Sl7lg8UFab_KkWwFnWuC04lM9yHlnON1KpnXswFOCGwf.vuzda4pyXZTbU0KiE22
 LyJiJ2p91MJAoLQTtoQAenOJmMmn2j8CfbDfdLyT6zr0yF3VdWXtLGQpUilnWxn5B2Wwy0DOW_qi
 uA.PutkPF5GwM13dH4aE.gC99aZP7pxbKJs7YAkKaa9P1nleLFcewG0TN0qaoeHefSkZEKf7kWS8
 ihRVOtDVK77urFrAgS8RnOSBwGThlggi02WkL1hgbYd1OLz3LBPBhkaphEJrGzYFuOoo8qdu7dER
 tvwUYUHYJvdQJ_dYXS.NlyT2SExP3BoV0aD0BRqjhoSmRih0f9anAq8.wlxX0PRx.4_O_5yNiCPG
 7ttCfatci_w3JXsPISfqIaEuZYrHGn7iKPOuY04UeycF2dnCsGhBhbUguOiXXKEoGEw0pLK4f.1p
 ECGiJcN6oxvDOOPBBKNFa_q2p4G4miimjKaAHFgUhzRokzUTdGXpF9Vy3ruQ38rEILnm2YPCEH42
 bONsSEj9i0Zgn3nt7TjifWlsL6aS2pQH52G8EFY22nwvoJFsFURc6B.li1PKPNjUq_EX9e8Quy_O
 _d_yPnnX8dHL5svrLy1U..eGJZd8zCI51qSRzHFj_wXo2Ctl8O9NYSb0nXHf3gZ31xACtl6qMh.I
 jqVFpZzaBIWL9qMN5yC8yBNNHFIFOJRjbSWB7qNwHkD37acDf0h.6te18Y0M.8plWok4OBoAxfHY
 DB3N1x3SmjpVjcMD7WpJY4uecreA1JycUyVjViLhIG4Bj8ir6VIKXC.PxPiIa_Ca2hr.NvIpr8m_
 xox5itagkVz4J6DuRUhGdMh_6wtRG5IULyO1qDGUazlDcTjEnG9JYYuF.snlyPFi5Kl6GGvdZbEi
 08V49YJznU1A00.zbAlbIKufMrdqep42D4Z970sqHp21saXOU3iDUSLdLGCa9.G714UZWArDp8b2
 tFr1M6fWbhJWq8fNaJPjCtf5u8uzVJfPsrz8gGnKFCAzUvtIaa1HjFqRgR8dvgJq.WNVmcxAhMD5
 Hx8Q94hJBEdLkJtUNj5m5swa.rDxqRQD1ATPa5ZIXpyO_pDKEu2eSYidd_UkeeumRHBYilKJYnLp
 2_x_3Txkbjd7srUvW1WLeKlxERFb8ltEfKBeFh6Dt7e4kdiXQRZ0jKo.1SkAE4jJ1ryPeZFfSUTd
 avbEJf8z42bOiGp7EdmQXLevDoyIPVyH0n0H0zyef.3W.bRXoma5TJrhIgHdiUz_JRt.WG72rtuQ
 K24FJxKD6cOkXlTxESN2UTG3Xw2BQBoGwz1.2.1XHo2KgmT7yKlykGVeJ9vXEgDQE100yUo61rve
 Iy8jeSs4hwzLSoA--
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] xen/pass-through: merge emulated bits correctly
Date: Fri, 10 Jun 2022 15:10:57 -0400
Message-Id: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654887756.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654887756.git.brchuckz.ref@aol.com>
Content-Length: 2391

In xen_pt_config_reg_init(), there is an error in the merging of the
emulated data with the host value. With the current QEMU, instead of
merging the emulated bits with the host bits as defined by emu_mask,
the inverse of the emulated bits is merged with the host bits. In some
cases, depending on the data in the registers and the way the registers
are set up, the end result is that the register is initialized with
the wrong value.

To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
the merge is done correctly.

This correction is needed to resolve Qemu project issue #1061, which
describes the failure of Xen HVM Linux guests to boot in certain
configurations with passed-through PCI devices. In those configurations,
this error disables instead of enables the PCI_STATUS_CAP_LIST bit of
the PCI_STATUS register of a passed-through PCI device, which in turn
disables the MSI-X capability of the device in Linux guests, with the
end result that the Linux guest never completes the boot process.

Fixes: 2e87512eccf3
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
 hw/xen/xen_pt_config_init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index ffd915654c..bc0383f7fb 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1966,10 +1966,10 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
         if ((data & host_mask) != (val & host_mask)) {
             uint32_t new_val;
 
-            /* Mask out host (including past size). */
-            new_val = val & host_mask;
-            /* Merge emulated ones (excluding the non-emulated ones). */
-            new_val |= data & host_mask;
+            /* Merge the emulated bits (data) with the host bits (val)
+             * and mask out the bits past size to enable restoration
+             * of the proper value for logging below. */
+            new_val = XEN_PT_MERGE_VALUE(val, data, host_mask) & size_mask;
             /* Leave intact host and emulated values past the size - even though
              * we do not care as we write per reg->size granularity, but for the
              * logging below lets have the proper value. */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 04:42:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 04:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346914.572986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001mf-OW; Sat, 11 Jun 2022 04:42:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346914.572986; Sat, 11 Jun 2022 04:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001lg-H8; Sat, 11 Jun 2022 04:42:06 +0000
Received: by outflank-mailman (input) for mailman id 346914;
 Sat, 11 Jun 2022 02:23:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=78w8=WS=chromium.org=senozhatsky@srs-se1.protection.inumbo.net>)
 id 1nzqmR-0000Qp-IZ
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 02:23:15 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 71b8077b-e92d-11ec-bd2c-47488cf2e6aa;
 Sat, 11 Jun 2022 04:23:14 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id w27so992236edl.7
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 19:23:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71b8077b-e92d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=HvaEucNyttJ72r4dgkt1sbWCJsvBPyIh5zMt0LNXalY=;
        b=D62jI1p8sOcmW+dGG367yCKI6736qfwlRYdRbtg+pQI3VQ/mZvhMkSh4muEy4yFzXi
         1pifN6keFBPADAFAutLs2f+vKDRy3jOy1SmannYfmHY7tjbpoLbP8600CMqIboz22zit
         FmdCeKC+IDyY3i3YwKCGDz5In1oYgHbIW3q58=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=HvaEucNyttJ72r4dgkt1sbWCJsvBPyIh5zMt0LNXalY=;
        b=v5gbq2Ez8HV30+emo0S0Pq+3J3jcK8zPkN3NeOhAm/FNLCRWCybec1msrm1uQVWza1
         ka1W2JlWqS62vQlxxxCP3ppUYUcSMZd1W1qIWLbkdIi6FytgAcyP4jk7KoMyHRnRzyOT
         MgkY/LG/3+ldBk/20JZaaHD65DCSLlTZGnc+8GXLx5RMw2KtSmwaUComXht/rY+XuEnc
         RLtV0SwXmcGLhhnacLp3YWza9qtNZQynmpj8kFmvZ4+imPLHz2OLMLEbSSMBvfLVg4Oq
         1MuazHtiMWDNZu/gHjZu2AAa/KNt3ildtE8TXWUVArL/9VFEmZebfewywvAahumnh8mP
         qPpw==
X-Gm-Message-State: AOAM530CrOR88nZwsPolDhLv4dT0YKvflYA7vcMud0k9BNLSkzO9GQKW
	8KZm1COqzmvFwfAJDW16tiiw59N0xH91vUx6OVlrNQ==
X-Google-Smtp-Source: ABdhPJz16+9kQdS9QDTB0tjL4FsLbUEsFaIS0ykQfnx5qpyFYyBQCn1i8W4mcCkA/y3viUzYkXAgYYv9JpSOTwgMdpQ=
X-Received: by 2002:a50:eb91:0:b0:42d:c1d8:616a with SMTP id
 y17-20020a50eb91000000b0042dc1d8616amr54940771edr.219.1654914192936; Fri, 10
 Jun 2022 19:23:12 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley> <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net> <YqHwOFg/WlMqe8/Z@alley>
In-Reply-To: <YqHwOFg/WlMqe8/Z@alley>
From: Sergey Senozhatsky <senozhatsky@chromium.org>
Date: Sat, 11 Jun 2022 11:23:02 +0900
Message-ID: <CA+_sPaq_47C2PWnGU7WfGXMc03E1Nz+1=F-wZe0B2+ymqdm3Fg@mail.gmail.com>
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
To: Petr Mladek <pmladek@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com, 
	linus.walleij@linaro.org, shawnguo@kernel.org, 
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com, 
	linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org, 
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org, 
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name, 
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu, 
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com, 
	James.Bottomley@hansenpartnership.com, deller@gmx.de, mpe@ellerman.id.au, 
	benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com, 
	palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com, 
	gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com, 
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org, 
	davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, acme@kernel.org, 
	mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, 
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, pv-drivers@vmware.com, boris.ostrovsky@oracle.com, 
	chris@zankel.net, jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org, 
	pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com, 
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org, 
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org, 
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com, 
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com, 
	andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk, 
	rostedt@goodmis.org, john.ogness@linutronix.de, paulmck@kernel.org, 
	frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org, 
	mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, 
	joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, 
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de, 
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org, 
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, 
	linux-omap@vger.kernel.org, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, 
	openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org, 
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 9, 2022 at 10:06 PM Petr Mladek <pmladek@suse.com> wrote:
>
> Makes sense. Feel free to use for this patch:
>
> Acked-by: Petr Mladek <pmladek@suse.com>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 04:42:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 04:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346918.572991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswp-0001sX-3L; Sat, 11 Jun 2022 04:42:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346918.572991; Sat, 11 Jun 2022 04:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001qd-QH; Sat, 11 Jun 2022 04:42:06 +0000
Received: by outflank-mailman (input) for mailman id 346918;
 Sat, 11 Jun 2022 02:33:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=78w8=WS=chromium.org=senozhatsky@srs-se1.protection.inumbo.net>)
 id 1nzqwA-0001hN-NX
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 02:33:18 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d9376ec8-e92e-11ec-8901-93a377f238d6;
 Sat, 11 Jun 2022 04:33:16 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id h19so1080306edj.0
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jun 2022 19:33:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9376ec8-e92e-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=rgKkPquDB/3ylB22sdoq44O3nuMRqOGPmfYPJEM3DpI=;
        b=JsXVqmPM/FghNi+FuyecZKJimRfQRZRaZMN7LbU4AElduixKN8TVkmhpZTWnrsGkbZ
         5LwE/p8/n+cIXvQ1YTcLTi0yyURcD/DMLBUnB+eqSpwlWIAygkMHM4b3P17Zp5erCg7D
         knTgcyr8j5ARmG7z3CGlckVUGoumxDAZyb+n8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=rgKkPquDB/3ylB22sdoq44O3nuMRqOGPmfYPJEM3DpI=;
        b=GlXtMrPVtUkvOblYq+6YtGAcN5p21m1TXaW/0c/VPiWFO4bUKNwmuwH7T4xOz0WqGg
         t4ghJmkdnrxPylGGDBPvQ74I92jSLIOmt5qz1aYqeyXWEMoj1YNPIEBZAerkazJyfXao
         OxAfHKOzvRVS6H3QWBDOFjT5DHQuF6bJziEA5ZTumNeJ06QGP9wcz0oRbP5llSLcQnnD
         9qFKlNC0hGMEQ8GNbSckNxGkC4rBcUo32GX6AknCfnUUevLEfyhycXr8Jm00rcno9Kp6
         0IlH8D564xlHTFfYXULSE9F/jUCFMtevUy81DtesE8RON2Qxz9YWz2tLB5elLhy8EsYZ
         Qj+w==
X-Gm-Message-State: AOAM531DtBcqsbYwTPvhNHX6Arv3PUCXhTbtU0tEKNb70NcSol9VfBax
	6rwnjyGM0M9FoPdwiNOE4JLmbFcXAeahr90WDVY2Kg==
X-Google-Smtp-Source: ABdhPJzFbgEeXrjhdaEFlL5CPKYiFgrRM2XGTpISMDDKOxh3YJg+nd70yb1Qcu1fnLsZwhWAweXqTRUn04uTYMjQVNU=
X-Received: by 2002:aa7:c604:0:b0:42d:cffb:f4dc with SMTP id
 h4-20020aa7c604000000b0042dcffbf4dcmr55022482edq.270.1654914796079; Fri, 10
 Jun 2022 19:33:16 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144517.444659212@infradead.org>
 <YqG6URbihTNCk9YR@alley> <YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
 <CA+_sPaoJGrXhNPCs2dKf2J7u07y1xYrRFZBUtkKwzK9GqcHSuQ@mail.gmail.com> <YqHvXFdIJfvUDI6e@alley>
In-Reply-To: <YqHvXFdIJfvUDI6e@alley>
From: Sergey Senozhatsky <senozhatsky@chromium.org>
Date: Sat, 11 Jun 2022 11:33:05 +0900
Message-ID: <CA+_sPaq1ez7jah0bibAdeA__Yp92K_VA7E-NZ9knoUmOW9itJg@mail.gmail.com>
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
To: Petr Mladek <pmladek@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com, 
	linus.walleij@linaro.org, shawnguo@kernel.org, 
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com, 
	linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org, 
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org, 
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name, 
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu, 
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com, 
	James.Bottomley@hansenpartnership.com, deller@gmx.de, mpe@ellerman.id.au, 
	benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com, 
	palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com, 
	gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com, 
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org, 
	davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, acme@kernel.org, 
	mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, 
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, pv-drivers@vmware.com, boris.ostrovsky@oracle.com, 
	chris@zankel.net, jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org, 
	pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com, 
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org, 
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org, 
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com, 
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com, 
	andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk, 
	rostedt@goodmis.org, john.ogness@linutronix.de, paulmck@kernel.org, 
	frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org, 
	mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, 
	joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, 
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de, 
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org, 
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, 
	linux-omap@vger.kernel.org, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, 
	openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org, 
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 9, 2022 at 10:02 PM Petr Mladek <pmladek@suse.com> wrote:
>
> On Thu 2022-06-09 20:30:58, Sergey Senozhatsky wrote:
> > My emails are getting rejected... Let me try web-interface
>
> Bad day for mail sending. I have problems as well ;-)

For me the problem is still there and apparently it's a "too many
recipients" error.

> > I'm somewhat curious whether we can actually remove that trace event.
>
> Good question.
>
> Well, I think that it might be useful. It allows to see trace and
> printk messages together.

Fair enough. Seems that back in 2011 people were pretty happy with it
https://lore.kernel.org/all/1322161388.5366.54.camel@jlt3.sipsolutions.net/T/#m7bf6416f469119372191f22a6ecf653c5f7331d2

but... reportedly, one of the folks who Ack-ed it (*cough cough*
PeterZ) has never used it.

> It was ugly when it was in the console code. The new location
> in vprintk_store() allows to have it even "correctly" sorted
> (timestamp) against other tracing messages.

That's true.


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 04:42:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 04:42:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.346564.572971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswo-0001bY-1t; Sat, 11 Jun 2022 04:42:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 346564.572971; Sat, 11 Jun 2022 04:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzswn-0001bR-VM; Sat, 11 Jun 2022 04:42:05 +0000
Received: by outflank-mailman (input) for mailman id 346564;
 Fri, 10 Jun 2022 16:23:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z8as=WR=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1nzhQK-0005qE-D6
 for xen-devel@lists.xenproject.org; Fri, 10 Jun 2022 16:23:48 +0000
Received: from sonic313-19.consmr.mail.gq1.yahoo.com
 (sonic313-19.consmr.mail.gq1.yahoo.com [98.137.65.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b3505873-e8d9-11ec-bd2c-47488cf2e6aa;
 Fri, 10 Jun 2022 18:23:47 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Fri, 10 Jun 2022 16:23:44 +0000
Received: by hermes--canary-production-ne1-799d7bd497-8kdwl (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID aae43f62744bd19876dbcf4383062a9e; 
 Fri, 10 Jun 2022 16:23:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3505873-e8d9-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1654878224; bh=wS7MFNjt6E7X8hDRkOvXUxjKzW3TZyG13P25/uoa8VE=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=iJyYVdX8pEq2c4Rho6ObrkSfOoL3zgjWe35ZSA+4U/mAMXUd7ZKihTJdwfsqZkLtBQcljlGCA79zWb5Zzr0I8JXIZN9g+v5UzWKzqbig+9kqOeYUpLDx0akoPR2G1A++X20zWc2g6g0mNIiOI3Y4iGPDDqwP7utDemsRmFetNQx1EONkOrMM7bvRi6JfjoXRp9CheQVoAoEJgBXI+DzZ4y7K1RYVcc6SdsfoVokFlo0K42+2+ed5dOro7giV2uKlCKLem3PBh7DoJrdtDNHIDEfNwrWbA0CBjwCK6fkAIrjFXJ3Jvi/AIXypnBH67B8ICdvalPtjer8LIB3ugX17Gw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1654878224; bh=5KetwCUyhbFVbPpH/7B8lhexTvQy/KMyhUEW5YXG8Sb=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=VoTHlo3lPvyiI8z6OpCwQGhGJ8FONqMumR7NsMWP4fEnjB8q6Tq7yJzQH33XLg3fP7hK7bNWo8fdWYL3YwwHaifmn4T4LtU19vpdchsr8YUJDaS1zj3XQeA4pnrM1HSuyMN5+bbpPfjmsDFkrEnrGWye7Ws0eV2Ifb2mT8VbNKsR32QzjQkgTL1T/2p+oaPs+7xPRGWl38NnKUBrwnOJ5a+mC13gYD9yf6v0QbsFz7mcKhsHW2oNAWiFPWRm5W4x4CQXby6C+XsPI5M2ww76Krcw5+EOMuFiGA9d6H6WK1+fL9sqWXgNT6I2SqLt22izTw48II2v8bU1kMDuoUVZYg==
X-YMail-OSG: SF7Gx.IVM1nzIDj9FsvDkDhBuMJq7cxJJ.v3Ht.10RQO2mQD2R0qMVIU51j6ckx
 uYdm9wJdx6ff1jsOcqSaqaUsW1PVsSeE5j_nJ3vX6P4Ahbq88EqlquPqNU_akIaP.6XUAXNdBj6b
 8ys123AmQ5Ulrbn.U3hzPQc4xgbhzPmzL_QMfn.BwUTbEWTycIV8VKnK0RTb.WSAQ2im1sqiSWV.
 w9Dbmo8d53OC5mG76gmp2N6L92eyV2QDnkaxqZB6KwOK8LzLJPogFSwS9yaAHxYbbrt6W8E0Y98.
 ipAEFcmuzPnq9wjcgMuq2cEr_EF0ehob.27cQzlTaSDxAeqBy.FYwD0WZCTrm.2omBO1gAhuQIKE
 rvebIHvleV1hnc.kGki9Ed8a.M12E1zn.KIICdCe82EqmAA.lmHgEIOlyqHPemNpyd99VjRmerF2
 spc5Zzw2GJqrp_.N00yfdaEvWq_xgBlhOucKvs7iQoniZ.SGhOUnGCRldO3LI1mzxiQS8paASg5p
 2fnaFF_Q1OROuvUiSvVnMgmLEo5nXucRw7tsFihlyGk0Dzo4RVytPEvJHQmhqFQ8q_YDKaNXpwHd
 XN817oETXOSOKnyn7MHX2JYZA8C1xL6mXgYDvDC2VgLgd48tU5.kC8umi46653DcO7j9BMWT4Vaw
 mfWrCSFyj0fSKokrCof6ioqmDtu0LMh19iFgxUsyBR.nrVZNF6PnRBeHxIuNWJjv5bmwqDDfW3w.
 lPsqH2dl8EYsVw3DOwNeBlW58gpxpoTsovGWMC0WDyOQ2WZctfbA3K.d32bfvDn6gkNo187.mKIi
 gLvvnjF2hTrMjMLZDpl5jUDuL6yb16y6e.PfiJosdaPAbLmaS5MasRVeNTrA3MkG9ReDe2_p3jiN
 .oOyq5t4PdB15uX6o2fuOyoZ1vMz7kXyyR2.IXy0oa6eSignGnwtrSQUwC8RaOfbukS4KeO4cnRz
 xlxf7yUXEjXSGlbFkjJPevXKize8SUVPM81rS6flsOwxi.BvsdMPWq.x3f.Yq6NOwbo7KwRpjPLR
 X9TMLbHTxvFoVWpgHJ74d6Rn8SVgidvkr7c7q_9NsoETl2119_AITZSNRPg7rTnDTRCS5JkNoq1o
 nn9ozPwMp_1zh4UpnKmvnxPNfDpv_naVMt4VWuVkjk4EnM80DklkgBi1V4_Mgrx3BwGkYs7vBsTf
 VmVbJSlmnsTJc5r2LsL57dpOz0odKv0Ipap_EyUXiVO_TWuUl94.NuZWreYAfF.w9ILpSNvJ27ue
 RKKoAlyO9zlZy2.zuX_N9.Tg2XBUqEBnLdGzukneH.3tJH_E0kz6k2ESgjHXJqe_1QG6NkcLbs2y
 HS6A0fOzQ76kfGGLfYjGZUUC1OxrNIOQ2Pl9wcRxBNcFaoQX2ZY2hcZPdna4Di3Kz2PetFHJ8xER
 CF8ha0YI1E1GtVhIRC9I8jOCEwVLvepyICww8blZ7T6HLzCC.eJlC_yEgGjK0llaickfL5W4KSnQ
 XR0bfg47LAoTtZnlnR9V26_qMGLwOqsIyXvAeN93KC4M7CI7ckq8NAcyQUuZ2zWLE8SZEM_I7rPr
 P8uv32qrblqN9hpdoRwr_QGyUqWBfZe.Wuf1dqaalIQ4TtSdp6O4VdZauMrOsdgbxmtcAXnDcG4Q
 MpMqNkJTj7oAFpC7zfhlaYWf0QB7Li72yjMhv2QWPu82QW07anHMwYUqCCxJZk2gFpcNs9PoxR1h
 zHi11YEFzuL2ANLOQBbELnw1K7YggRUlcpnzdqR3SLScUghOFTDKJ3YE3w6YxfQJcveXVPPSG.eY
 QHaT_rrpzYSya6ALvPaQjdUUKWiIbviWViWqxpzf6zOyr.FlzPAkWi8TpMgJkZ8y8znhHMGddpVR
 5.NJlqR1lvldx2MnHKMIMHPitmgVDvEbtogIBNYbEGZsx8vz8IfooRsl.dPZ9BYROLnyOmsqJhKR
 Axp71u8j.Z4qltk65k0OXNBC6qEQKKelErUhFhmPrjVqa1SNFqJ07ORbCiek7X.5sKTcS9SdYgV6
 6z63szCvqZj1JiyvmNO3nw88.gO1iMr0WdUUgHrvuuMkavILzaY.hAQVhfTfy9PLJkW_tNSz0FJt
 aNOdBmeV_SM0Jh2g8f4Qr5387iLEErQE5O_wXdmt.0.gLQPcefwHVbiVlE5HC9lpc2vcHPIBn4wf
 YOjQtED4HrzhCEi16aq6pyjASAx7ee2IhY89ho7zbqa.c6jLDlkx3_SHnploST0zsh33yvADPr6M
 M0wS_
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] xen/pass-through: don't create needless register group
Date: Fri, 10 Jun 2022 12:23:35 -0400
Message-Id: <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz.ref@aol.com>
Content-Length: 1574

Currently we are creating a register group for the Intel IGD OpRegion
for every device we pass through, but the XEN_PCI_INTEL_OPREGION
register group is only valid for an Intel IGD. Before creating the
register group, check that the device is an Intel IGD and that the
administrator has enabled gfx_passthru in the xl domain configuration.
Use the existing is_igd_vga_passthrough() function to check that the
device is a graphics device and that gfx_passthru is enabled, and
further require that the vendor be Intel, because only Intel IGD
devices have an Intel OpRegion. These are the same checks hvmloader
and libxl do to determine whether the Intel OpRegion needs to be
mapped into the guest's memory.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
 hw/xen/xen_pt_config_init.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index c5c4e943a8..ffd915654c 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -2037,6 +2037,10 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
          * therefore the size should be 0xff.
          */
         if (xen_pt_emu_reg_grps[i].grp_id == XEN_PCI_INTEL_OPREGION) {
+            if (!is_igd_vga_passthrough(&s->real_device) ||
+                s->real_device.vendor_id != PCI_VENDOR_ID_INTEL) {
+                continue;
+            }
             reg_grp_offset = XEN_PCI_INTEL_OPREGION;
         }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 09:08:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 09:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347083.573184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzx6c-00035c-6S; Sat, 11 Jun 2022 09:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347083.573184; Sat, 11 Jun 2022 09:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzx6c-00035V-3F; Sat, 11 Jun 2022 09:08:30 +0000
Received: by outflank-mailman (input) for mailman id 347083;
 Sat, 11 Jun 2022 09:08:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzx6a-00035P-1G
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 09:08:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzx6Z-0004vu-5f; Sat, 11 Jun 2022 09:08:27 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzx6Y-00081I-Vg; Sat, 11 Jun 2022 09:08:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EIq7MleFNgmsiSmj4MQas0eeu0D7BwA55LmsAQ6iBDs=; b=0i7l1/T9Drn3cNolEt9yi2Fpwc
	cEiLFcJciS8hmch59bf5fOcvHWe2VxPQr4iU5W7Wut4AOAWfmj4xnhsD1QCORcbCUs/oq8DlPW11q
	zQ9AOl1ggEodtu2NXLVCdaSjOIWSfv52dF2AroFTkfdHXlRp21wa4OuqvfuUoV14w+Wg=;
Message-ID: <f664f35b-2fec-38e4-3382-1fd14ba66e8d@xen.org>
Date: Sat, 11 Jun 2022 10:08:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jens Wiklander <jens.wiklander@linaro.org>
Cc: xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand.Marquis@arm.com
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 11/06/2022 02:23, Stefano Stabellini wrote:
>> +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
>> +                             uint32_t page_count)
>> +{
>> +    const struct arm_smccc_1_2_regs arg = {
>> +#ifdef CONFIG_ARM_64
>> +        .a0 = FFA_RXTX_MAP_64,
>> +#endif
> 
> This ifdef is unnecessary given that FFA depends on ARM64 and SMCCCv1.2
> is only implemented on ARM64. It also applies to all the other ifdefs in
> this file. You can remove the code under #ifdef CONFIG_ARM_32.

To me the #ifdef indicates that it would be possible to use FFA on
arm32. So I think it is better to keep them rather than having to
retrofit them in the future.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 09:17:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 09:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347092.573194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzxFB-0004nM-2j; Sat, 11 Jun 2022 09:17:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347092.573194; Sat, 11 Jun 2022 09:17:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzxFA-0004nF-W6; Sat, 11 Jun 2022 09:17:20 +0000
Received: by outflank-mailman (input) for mailman id 347092;
 Sat, 11 Jun 2022 09:17:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzxFA-0004n5-5N; Sat, 11 Jun 2022 09:17:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzxFA-00056T-0S; Sat, 11 Jun 2022 09:17:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzxF9-0006F5-Ou; Sat, 11 Jun 2022 09:17:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzxF9-0006e2-OT; Sat, 11 Jun 2022 09:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gJCh2l/sSufFpzWeFHQ2c9lX8fMgrclWl7lPCOOPWWY=; b=RuFkKFQIpk13JVj+XCMTxR+9ib
	rVVIW5ytTQBlwMAOKBBxN4HrCptDwzP6Sb7RWCIy7nO3ydkBOIkICtaqOb1oYecUxB1cYFLCX/IPH
	XVP6XyhTIcnShJYRjqXyc2xx0exkkz2rZaAUBDrIW4XwlmhF5XzZgBIfrQrnzUGsLfY8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170925-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 170925: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d7ebe3dfe3b2385bef10014549c36e0f73d39b52
X-Osstest-Versions-That:
    xen=17848dfed47f52b479c4e7eb412671aec5757329
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 09:17:19 +0000

flight 170925 xen-4.14-testing real [real]
flight 170999 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170925/
http://logs.test-lab.xenproject.org/osstest/logs/170999/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 170999-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 169330

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 169330
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169330
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169330
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169330
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 169330
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169330
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169330
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 169330
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169330
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169330
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 169330
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169330
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  d7ebe3dfe3b2385bef10014549c36e0f73d39b52
baseline version:
 xen                  17848dfed47f52b479c4e7eb412671aec5757329

Last test of basis   169330  2022-04-12 10:36:29 Z   59 days
Failing since        170903  2022-06-09 14:07:51 Z    1 days    2 attempts
Testing same since   170925  2022-06-10 09:10:47 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   17848dfed4..d7ebe3dfe3  d7ebe3dfe3b2385bef10014549c36e0f73d39b52 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 09:41:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 09:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347118.573250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzxcn-0000Zu-Kw; Sat, 11 Jun 2022 09:41:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347118.573250; Sat, 11 Jun 2022 09:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzxcn-0000Zn-Gw; Sat, 11 Jun 2022 09:41:45 +0000
Received: by outflank-mailman (input) for mailman id 347118;
 Sat, 11 Jun 2022 09:41:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzxcm-0000Zh-BC
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 09:41:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzxck-0005W0-LY; Sat, 11 Jun 2022 09:41:42 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzxck-0001VZ-Fm; Sat, 11 Jun 2022 09:41:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=0XL7oyQnVXn+mEg0//1CB9ap+KpDRbvC/oKn1rv5qNk=; b=BHzTF+6ztO3YrRjy7X4T2zPXWf
	VimisPDu5KTAzNiYueOjG5yvTGsc3s/avVp+ID0v2fc0eB3QQVpa9DpUTJ+SwVq97GNvU7XDHOMcL
	sp0vAFOEwweKnJ5Okas9ry4tMfhjOLhrsWbKP1WmoviN8xtqKfrILaEbAOkZglXD6tlg=;
Message-ID: <569a6d37-157a-d237-3ef9-d77fae32d002@xen.org>
Date: Sat, 11 Jun 2022 10:41:40 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jens Wiklander <jens.wiklander@linaro.org>
Cc: xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
In-Reply-To: <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11/06/2022 01:41, Stefano Stabellini wrote:
> On Thu, 9 Jun 2022, Jens Wiklander wrote:
>> +    /* Store the registers x0 - x17 into the result structure */
>> +    stp	x0, x1, [x19, #0]
>> +    stp	x2, x3, [x19, #16]
>> +    stp	x4, x5, [x19, #32]
>> +    stp	x6, x7, [x19, #48]
>> +    stp	x8, x9, [x19, #64]
>> +    stp	x10, x11, [x19, #80]
>> +    stp	x12, x13, [x19, #96]
>> +    stp	x14, x15, [x19, #112]
>> +    stp	x16, x17, [x19, #128]
> 
> I noticed that in the original commit the offsets are declared as
> ARM_SMCCC_1_2_REGS_X0_OFFS, etc. In Xen we could add them to
> xen/arch/arm/arm64/asm-offsets.c given that they are only used in asm.
> 
> That said, there isn't a huge value in declaring them given that they
> are always read and written in order and there is nothing else in the
> struct, so I am fine either way.

While we don't support big-endian in Xen (yet?), the offsets would be 
inverted there.

Furthermore, I prefer to avoid open-coded values (in particular when 
they are related to offsets). They are unlikely to change, but at least 
we have the compiler compute them for us (so less chance of a problem).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 10:59:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 10:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347182.573413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzypj-00027H-0D; Sat, 11 Jun 2022 10:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347182.573413; Sat, 11 Jun 2022 10:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzypi-00027A-TQ; Sat, 11 Jun 2022 10:59:10 +0000
Received: by outflank-mailman (input) for mailman id 347182;
 Sat, 11 Jun 2022 10:59:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzypi-000270-5b; Sat, 11 Jun 2022 10:59:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzypi-0006tY-3B; Sat, 11 Jun 2022 10:59:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzyph-0000ne-OK; Sat, 11 Jun 2022 10:59:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzyph-0006kh-Nr; Sat, 11 Jun 2022 10:59:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I7qesQMaApjgRX1SNcNxLc5ROZVzDEfF3yR0x4Tacz8=; b=1Q5RT1uYoDQylOf7Sp1ZRP45Bk
	IpSKvM7Pmlphbb18YU1XZLNIwgPH7OrwHyowvpfuEVdzfkkBO/ine2moe0ywCm3SFlmCUAniVDn8+
	UBihzepj5EtPMZFp8L0bnAyBneUNnQ3T+v+RFhY3CdY7AytLaMoRE6iDYCVGp0F54leY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 170929: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start.2:fail:heisenbug
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1575075b2e3ac93e9bb2271f4c26a2fb7d947ade
X-Osstest-Versions-That:
    xen=fe97133b5deef58bd1422f4d87821131c66b1d0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 10:59:09 +0000

flight 170929 xen-4.13-testing real [real]
flight 171040 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170929/
http://logs.test-lab.xenproject.org/osstest/logs/171040/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail pass in 171040-retest
 test-amd64-amd64-qemuu-freebsd12-amd64 22 guest-start.2 fail pass in 171040-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 169240
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 169240
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 169240
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 169240
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 169240
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169240
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 169240
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 169240
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169240
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 169240
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 169240
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 169240
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1575075b2e3ac93e9bb2271f4c26a2fb7d947ade
baseline version:
 xen                  fe97133b5deef58bd1422f4d87821131c66b1d0e

Last test of basis   169240  2022-04-08 13:36:34 Z   63 days
Failing since        170905  2022-06-09 15:37:34 Z    1 days    2 attempts
Testing same since   170929  2022-06-10 10:52:06 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fe97133b5d..1575075b2e  1575075b2e3ac93e9bb2271f4c26a2fb7d947ade -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 11:12:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 11:12:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347193.573423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzz2V-0004fj-77; Sat, 11 Jun 2022 11:12:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347193.573423; Sat, 11 Jun 2022 11:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzz2V-0004fc-4Q; Sat, 11 Jun 2022 11:12:23 +0000
Received: by outflank-mailman (input) for mailman id 347193;
 Sat, 11 Jun 2022 11:12:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzz2U-0004fS-2O; Sat, 11 Jun 2022 11:12:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzz2T-00079p-Ur; Sat, 11 Jun 2022 11:12:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1nzz2T-00018w-Nr; Sat, 11 Jun 2022 11:12:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1nzz2T-0001aS-NO; Sat, 11 Jun 2022 11:12:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=7zXcmSKxaE1C0BB4aDmZFJirbJFwUJyaVxAMKYG3ql0=; b=PHbcQ8DEcfI4WHEjzFwe46uUOG
	PJOClf0bmJyHbCkfSv85EwcdmpbKq9SfyX2l37DCmen0QCneGa/J2IxOz7l1UdTfYhKtJIv8SWJ15
	jszumq6HEYJuq1GARfOCf2T0fxXOaSQ7OnL4htT65S296CdgJwsuwga+GsUVWiHhosts=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-qemut-ws16-amd64
Message-Id: <E1nzz2T-0001aS-NO@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 11:12:21 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-ws16-amd64
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9186e96b199e4f7e52e033b238f9fe869afb69c7
  Bug not present: 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/171076/


  commit 9186e96b199e4f7e52e033b238f9fe869afb69c7
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 14:20:36 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot --summary-out=tmp/171076.bisection-summary --basis-template=170890 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-qemut-ws16-amd64 xen-boot
Searching for failure / basis pass:
 170908 fail [host=huxelrebe1] / 170897 ok.
Failure / basis pass flights: 170908 / 170897
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#a68d6d311c2d1fd9d2fa9a0768ea235\
 3e8a79b42-a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 git://xenbits.xen.org/xen.git#f3185c165d28901c3222becfc8be547263c53745-c1c9cae3a9633054b177c5de21ad7268162b2f2c
From git://cache:9419/git://xenbits.xen.org/xen
   fe97133b5d..1575075b2e  stable-4.13 -> origin/stable-4.13
Loaded 5001 nodes in revision graph
Searching for test results:
 170890 pass irrelevant
 170897 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170908 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
 170977 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745
 170982 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 c1c9cae3a9633054b177c5de21ad7268162b2f2c
 170988 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 8cc5036bc385112a82f1faff27a0970e6440dfed
 170992 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 79faa321f2f31d7794604007a290fb6c8fc05035
 170994 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
 171000 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 9186e96b199e4f7e52e033b238f9fe869afb69c7
 171037 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
 171038 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 9186e96b199e4f7e52e033b238f9fe869afb69c7
 171075 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
 171076 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 9186e96b199e4f7e52e033b238f9fe869afb69c7
Searching for interesting versions
 Result found: flight 170897 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8, results HASH(0x56336caaef50) HASH(0x56336caaaf18) HASH(0x56336b7c7ef0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311\
 c2d1fd9d2fa9a0768ea2353e8a79b42 79faa321f2f31d7794604007a290fb6c8fc05035, results HASH(0x56336caac4a0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 f3185c165d28901c3222becfc8be547263c53745, results HASH(0x56336ca961e8) HASH(0x56336ca98218) Result found: flight 170908 (fail), for basis failure (at ancestor ~164)
 Repro found: flight 170977 (pass), for basis pass
 Repro found: flight 170982 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
No revisions left to test, checking graph state.
 Result found: flight 170994 (pass), for last pass
 Result found: flight 171000 (fail), for first failure
 Repro found: flight 171037 (pass), for last pass
 Repro found: flight 171038 (fail), for first failure
 Repro found: flight 171075 (pass), for last pass
 Repro found: flight 171076 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9186e96b199e4f7e52e033b238f9fe869afb69c7
  Bug not present: 59fbdf8a3667ce42c1cf70c94c3bcd0451afd4d8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/171076/


  commit 9186e96b199e4f7e52e033b238f9fe869afb69c7
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 14:20:36 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-ws16-amd64.xen-boot.{dot,ps,png,html,svg}.
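The cmpxchg retry loop and ACCESS_ONCE() behaviour described in the commit
message above can be sketched, purely for illustration, with portable C11
atomics. The names PGT_count_mask and get_type_ref() below are hypothetical
stand-ins, not Xen's actual definitions; the real _get_page_type() is far
more involved.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for Xen's type-count field mask (illustrative value). */
#define PGT_count_mask 0x0fffffffu

/*
 * Minimal sketch of a cmpxchg retry loop.  The old value 'y' is declared
 * inside the loop, mirroring the "reduce the scope of 'y'" point above:
 * it is an artefact of the loop and not valid for use by later logic.
 */
static bool get_type_ref(_Atomic uint32_t *type_info)
{
    for ( ;; )
    {
        /* Re-read explicitly on every iteration (the ACCESS_ONCE() idea). */
        uint32_t y = atomic_load_explicit(type_info, memory_order_relaxed);
        uint32_t x = y + 1;

        if ( (x & PGT_count_mask) == 0 )
            return false;   /* the reference count would overflow */

        /* Only one racing caller's cmpxchg succeeds; the others retry. */
        if ( atomic_compare_exchange_weak(type_info, &y, x) )
            return true;
    }
}
```

Because each iteration recomputes from a fresh atomic load, the compiler
cannot hoist the read out of the loop, which is the property the commit
obtains in Xen via ACCESS_ONCE() and the barrier hidden in cpu_relax().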
----------------------------------------
171076: tolerable ALL FAIL

flight 171076 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/171076/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot         fail baseline untested


jobs:
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 11:30:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 11:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347207.573440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzzKH-0007g5-0E; Sat, 11 Jun 2022 11:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347207.573440; Sat, 11 Jun 2022 11:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1nzzKG-0007fy-Tj; Sat, 11 Jun 2022 11:30:44 +0000
Received: by outflank-mailman (input) for mailman id 347207;
 Sat, 11 Jun 2022 11:30:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1nzzKF-0007fs-Rn
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 11:30:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzzKF-0007Sr-BE; Sat, 11 Jun 2022 11:30:43 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1nzzKF-0008Hx-38; Sat, 11 Jun 2022 11:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=FkwsPuoMcgaQ4ZKoqYdKp5NtlaahcWnaU77d8O+qBEM=; b=fzteqHrUne4DbkbeFl1fXkSAXP
	uD+p2k7EKd/YCbshjZmnjZa30PYO9zhhTDZCh4r2Tk1k2QqZLIpv9mDs5kfHkGVj0mkc0okWJHiHf
	Ad2+nf9NjkDK+JBx50e/THiuSgRHLbbt0Kthw/bvTAvyFZBHROSZxBvEN+xdBC7KrcRY=;
Message-ID: <4f9a6d8b-7613-5030-2653-e7da6f69618d@xen.org>
Date: Sat, 11 Jun 2022 12:30:40 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH 00/16] xen/arm: mm: Remove open-coding mappings
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20220520120937.28925-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220520120937.28925-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 20/05/2022 13:09, Julien Grall wrote:
> Julien Grall (15):
>    xen/arm: mm: Allow other mapping size in xen_pt_update_entry()
>    xen/arm: mm: Add support for the contiguous bit
>    xen/arm: mm: Avoid flushing the TLBs when mapping are inserted
>    xen/arm: mm: Don't open-code Xen PT update in remove_early_mappings()
>    xen/arm: mm: Re-implement early_fdt_map() using map_pages_to_xen()
>    xen/arm32: mm: Re-implement setup_xenheap_mappings() using
>      map_pages_to_xen()
>    xen/arm: mm: Allocate xen page tables in domheap rather than xenheap
>    xen/arm: mm: Allow page-table allocation from the boot allocator
>    xen/arm: Move fixmap definitions in a separate header
>    xen/arm: mm: Clean-up the includes and order them
>    xen/arm: mm: Use the PMAP helpers in xen_{,un}map_table()
>    xen/arm32: setup: Move out the code to populate the boot allocator
>    xen/arm64: mm: Add memory to the boot allocator first
>    xen/arm: mm: Rework setup_xenheap_mappings()
>    xen/arm: mm: Re-implement setup_frame_table_mappings() with
>      map_pages_to_xen()
> 
> Wei Liu (1):
>    xen/arm: add Persistent Map (PMAP) infrastructure

I have now committed the full series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 13:07:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 13:07:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347239.573501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o00pQ-0002jq-0p; Sat, 11 Jun 2022 13:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347239.573501; Sat, 11 Jun 2022 13:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o00pP-0002jj-Ti; Sat, 11 Jun 2022 13:06:59 +0000
Received: by outflank-mailman (input) for mailman id 347239;
 Sat, 11 Jun 2022 13:06:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o00pO-0002jZ-Eb; Sat, 11 Jun 2022 13:06:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o00pO-0000kr-Cx; Sat, 11 Jun 2022 13:06:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o00pO-0004Kv-6c; Sat, 11 Jun 2022 13:06:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o00pO-0007FF-68; Sat, 11 Jun 2022 13:06:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PjWm+tGGMCB2y8TwQkEmwayXzr3uIEJIip30AS89XxE=; b=wnCWWQaSAuumsuu8o60R2+CTUS
	LAieyypI3K13DpbuzwVntcWrb94IH3cmhfiOFnytzcrAMIBxrnrQzM08wX7I1CGjsKrwKqG7pSCyN
	FOyKa44S0gnoWa+dTq73/5oho995ik3bTFR11UgCAXLg68gDsTdG7s6NPuf8yBo1FIII=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 170952: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:build-arm64-xsm:xen-build:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=874c8ca1e60b2c564a48f7e7acc40d328d5c8733
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 13:06:58 +0000

flight 170952 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 build-arm64-xsm               6 xen-build                fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                874c8ca1e60b2c564a48f7e7acc40d328d5c8733
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   18 days
Failing since        170716  2022-05-24 11:12:06 Z   18 days   45 attempts
Testing same since   170952  2022-06-10 17:12:26 Z    0 days    1 attempts

------------------------------------------------------------
2296 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------


Not pushing.

(No revision log; it would be 270916 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 15:34:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 15:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347278.573529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o037n-0003sP-Ry; Sat, 11 Jun 2022 15:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347278.573529; Sat, 11 Jun 2022 15:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o037n-0003sI-P5; Sat, 11 Jun 2022 15:34:07 +0000
Received: by outflank-mailman (input) for mailman id 347278;
 Sat, 11 Jun 2022 15:34:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o037m-0003s8-Re; Sat, 11 Jun 2022 15:34:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o037m-0003Gw-OM; Sat, 11 Jun 2022 15:34:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o037m-0002bz-Cv; Sat, 11 Jun 2022 15:34:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o037m-000841-CQ; Sat, 11 Jun 2022 15:34:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jw6SepiDsxKhMkGYiVlyGP/mjEb1lxGJwBJd5+B0WB0=; b=i/dHEVDokoThqjbJEZl1XfZtsq
	BFQiEFN0QHKdlOj5B7lAbOHL6UZsgVD/3YjaqbpF/qxHvBqe3+me7RIXvpsM6j/dkwDL+yttRjyBu
	GtCAjkzE4Ov5guMptJepCV656L6jWnGpzvYkqIxA/BGZxgbXv/ey4oJW+JWCeEEkuDVg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171079-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171079: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
X-Osstest-Versions-That:
    xen=e051b5cd1043cc1aad506faace824e6aacc887bf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 15:34:06 +0000

flight 171079 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171079/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  e051b5cd1043cc1aad506faace824e6aacc887bf

Last test of basis   170978  2022-06-11 00:00:28 Z    0 days
Testing same since   171079  2022-06-11 12:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Hongyan Xia <hongyxia@amazon.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e051b5cd10..c9a707df83  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 16:44:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 16:44:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347300.573567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o04DG-0004eU-F6; Sat, 11 Jun 2022 16:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347300.573567; Sat, 11 Jun 2022 16:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o04DG-0004eN-CL; Sat, 11 Jun 2022 16:43:50 +0000
Received: by outflank-mailman (input) for mailman id 347300;
 Sat, 11 Jun 2022 16:43:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lT1A=WS=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1o04DE-0004eH-Kt
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 16:43:49 +0000
Received: from sonic304-24.consmr.mail.gq1.yahoo.com
 (sonic304-24.consmr.mail.gq1.yahoo.com [98.137.68.205])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8656465-e9a5-11ec-bd2c-47488cf2e6aa;
 Sat, 11 Jun 2022 18:43:46 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic304.consmr.mail.gq1.yahoo.com with HTTP; Sat, 11 Jun 2022 16:43:43 +0000
Received: by hermes--canary-production-bf1-856dbf94db-ld7kl (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID 10a8310a1deef4c241e871970cd315d5; 
 Sat, 11 Jun 2022 16:43:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8656465-e9a5-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1654965823; bh=uKipQ9GESCapGnwnZNCK+G59mD+JZ/aP2507Xtz69BU=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=rj3EKPf8cHONK1o3O/aL8eLuOd7jPEM5O7se7HquQi0H1Ar0Udg9WyciIQJ630X+hgudHPynMV+haJAl3umomxdUtOiCumbxuNY3Bi5C6fjMB7pOuTuHgZ+hKgIBK2i4JYcWNpSdDknwcxKOMqYgwq56hw/VSgyEr0+9e9xIxJGkkE4uuWcFzwgMDmIlc8dufD26cLbr0lCaD0HS81aOPQ33uAvjtmSuijLPNlcShceo8zntuboRhXfBMsOzF7LQPi88i+zT78NiEJdronIuh58zqeWXK4JDcR3Bp69UbkS1AZ9uRs5REvjtrNBBUZTcsXPIyC92gmeLN9bj3O7vyg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1654965823; bh=DA2vKnKf+zEJ+BS8cQu11up+Ie3P63di7K7wlyd/Bal=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=Fbgz+GJjm1oytjNO04YnchJYs1BpSV9dnHS5hE5XKg1rCbrfwBqU/GS+WJZFDDW6hHMhENR5VkyWZE6/uz/JOvLHUTC+kYzho3uhGov8Dshvz2sXwHjE6DzUkBDINbUT7l/jJxrdlGr8bqBgCzQNIrPpFgBbBa0FeR6zw70JHBltxqIK456RVBnfzLiirbWqrVZ5g1o/WzUAMJZAjyNQ3j4RznkY6q+lQdBoLHxlreUvVUNDhuXrPNaXLQl+Q0WvPi9MbEwjlrUOE0znssLT30pA30ING4RpFSjPZH/metJt5Izr5cYJ7zCaWQyhLlTMZumaoFKeMtG+D0knK5vWFg==
X-YMail-OSG: AWXn12oVM1k3G32cr49lu1hpWFcCdIsYTwluI2HkZElXlWe7QdEhcypMisJ6FQS
 AdSXl0fE6Sfs9J4xbP7_82mUhW6okyP9SUwtYPp64ltydNqMLcCKBV1s899BZdNW6R3V5sN6U3.1
 2bRRsK7iflyhEmurOCSiu4PRS87F.KpPtcVrFuoXaWJj03_frsyXavXprPEgGzuPgXVHwl6elgov
 6ViFiypp9HzuOoBqHXn5q2HQPnIjGLFef7njwHBU9e_4k3VXjtLIQ5p_sTAs62AtRrijoT2bPxtk
 PoHB0_ZPLkCdpP4FpR9RMXupNNcJDpiRCAYkuT8AxZKkI7q7EcI28nTVcoT6CMdtiFn8MljvXHe7
 j7S3KxpE68GwC7gdJSqcnoeZLJhRBmmaVeGERX6c14cHhoyDC6yCjY5fJ5mWYmTlFH9BBsnaFYEk
 8fBjhnwo7b6AJ4KlLGukcJLJPiytH.Djk0qpV9COJ9KYYl2wTI5w1HVed3VXyYRFYVS_q2r0udbj
 sEqLzzx2v77RlTrCGuZGI9MDt7jlSNqUtkQ4t8N6ZfMnDn3hKip9DoN5RoYPdEa2SGw6FgJpCISM
 PVk4iZx7g4PkWfh7wj7G152g1ahzk2fdSkQZpls1CKBNw4fCBzAI3kpd7fDqMNPJDJnx25Fkd68N
 HcecqmFdF1EI6qCUUdd3rmN1oQq7RSKCLJcQ7k3jR4wMhDavDho1UfwpOAoKawBWPki93k0XXvkx
 B5S0ttbVYCqyMnRpOkg0Wnn3TO36O4VA6qNuv1lUBBO477QXnCjSysaqUT98kkc0tUYMKdNjz377
 6I2r2_qtNWZ7_DHJERNeCYjaVbRA4P83ZfyChqyLD17xkZHWb0rz2U8uWgE_fCmmEh46RhmSifia
 G8XkWFq5LHdR7KCV062WCGcHzwdPZgSc62.0Ch2Jz3ZGI7t_D1ziBnbioTTMuCF2YGEFmvI.AAGd
 focsrp4yPPexGc5JKsrk1bSUbO1jgLdCF23vqwU1EKcsJP2R_hYYywp4kegorjU3PJJx_53.ft_U
 3jv4QREFycQJokXCioYh3fHXf2L185VE_vBs5OuiX4TQMVfvAFeRf6rK4.Ac1naVJF2YtpAwZ27B
 VqXvXZ08ahvOdH20A_cTH.9lJlZAVnbTxSCQzXbddkJNm_8klfIzQWsMu5SJRwJUY8mwGU8rWdX_
 qgkukAG3nj2MiYSdgVDVI.jApGRrg4s4mNqFwy_VjUWUnxbP8WxQr_9GG9MpL8KZ2np_YMAFAgKb
 yD0dQMdVYjW0u7rsMa6QcQyV3aWeiYNGy_tR2TQPwf86.DPr6FjfxUpvmg3ElaGM2uv94slr2LzS
 cJT690MraQcpIO5e7pi5jMlSi.uFDT61Fhj7iwRW57wnwLOoaKpphAkylEtWBwjVM9xHVDTVoDBJ
 De58T8MNWqMAjJqVY_Cn76B5TrgRlbkKZCJ3ysKI_pP576JEWkY6WIYqGy2fbUyGKldv0iFgBnnZ
 IQkwOp.0oe84ayCNA1En0QznZHc7NxeQIcmRE.DgSYghQxL49.T2ngZWVqlX9RnvgWTYp06_Fclr
 x1JPozstV7.Dt_3RL35DUdxqVBBayyzICxtWhCblv__YduvNV.iW9lv.Uxq8PsV4c6AHcxGmMw4W
 lfq7Km26b2u52WUGJO.uemorpxZa1h9.yV1I0bdMHvnnhSU7C6_cl7Qo_BSf7tiLMqVwgUj0B2NA
 IUzJJs0fN0VWhnYbdvKhfwfns4wrQCBgqhaBDZA1Bc2EAXce5zo5jGkRSMrG1Bg.ZuwOFca1k0nE
 6HQ9hhDRGyII8v1tejFKdmSPbU0zup7QZdiy6.LsqJASBW20PCVPXr1yrPZBRmVEg.EMj_BCOZtd
 N8Ev3dLHu2kwc1RbdIBjOUWHyUseFSk6QuwntoAOK0cKyRV3Asxqb2UeWcwSZtReE8tKnq6stP0z
 2PhOcmTQ.oNiAT35AE9tBr7USWIdhpQZrmY70BmJDr5cA3yx0JUssXePar4qj988aWn9GGWnftZd
 51uTNgKeNjVLb0CMrYyFSv4xJLOgwkBq4CxjQ_LOrert9sKtRRRVKD1YZyPVDQmSCo3JJKTJ6LCh
 uXAvH16Ou5_C0Rfs7Uqn.GXNfsR04_JPmjze4wQzbXxKlPjtyaTPZdcHi1PFepiUfCDs5f33toD1
 EQKbQ7OwlF.Ib6uiTpHBh4JJvJzFsri84545KQDcaj.lDRUZkSfWtaRKq3GPj_V4DNY7lFGD2gKl
 2nnkz8aiL8JNRIlQ-
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2] xen/pass-through: merge emulated bits correctly
Date: Sat, 11 Jun 2022 12:43:29 -0400
Message-Id: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654961918.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654961918.git.brchuckz.ref@aol.com>
Content-Length: 2550

In xen_pt_config_reg_init(), there is an error in the merging of the
emulated data with the host value. With the current QEMU, the emulated
bits are merged with the host bits as defined by the inverse of
emu_mask, rather than by emu_mask as intended. Depending on the data in
the registers on the host, the way the registers are set up, and the
initial values of the emulated bits, the end result can be that the
register is initialized with the wrong value.

To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
the merge is done correctly.

This correction is needed to resolve QEMU project issue #1061, which
describes the failure of Xen HVM Linux guests to boot in certain
configurations with passed-through PCI devices. In those configurations,
this error disables, instead of enabling, the PCI_STATUS_CAP_LIST bit
of the PCI_STATUS register of a passed-through PCI device. That in turn
disables the device's MSI-X capability in Linux guests, with the end
result that the Linux guest never completes the boot process.

Fixes: 2e87512eccf3
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
v2: Edit the commit message to more accurately describe the cause
of the error.

 hw/xen/xen_pt_config_init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index ffd915654c..bc0383f7fb 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1966,10 +1966,10 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
         if ((data & host_mask) != (val & host_mask)) {
             uint32_t new_val;
 
-            /* Mask out host (including past size). */
-            new_val = val & host_mask;
-            /* Merge emulated ones (excluding the non-emulated ones). */
-            new_val |= data & host_mask;
+            /* Merge the emulated bits (data) with the host bits (val),
+             * and mask out the bits past size so that the logging
+             * below sees the proper value. */
+            new_val = XEN_PT_MERGE_VALUE(val, data, host_mask) & size_mask;
             /* Leave intact host and emulated values past the size - even though
              * we do not care as we write per reg->size granularity, but for the
              * logging below lets have the proper value. */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 16:47:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 16:47:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347296.573579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o04Gq-0005Iw-4r; Sat, 11 Jun 2022 16:47:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347296.573579; Sat, 11 Jun 2022 16:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o04Gq-0005Ip-1R; Sat, 11 Jun 2022 16:47:32 +0000
Received: by outflank-mailman (input) for mailman id 347296;
 Sat, 11 Jun 2022 16:35:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2a3a=WS=t-online.de=vr_qemu@srs-se1.protection.inumbo.net>)
 id 1o0455-0003UM-0f
 for xen-devel@lists.xenproject.org; Sat, 11 Jun 2022 16:35:23 +0000
Received: from mailout06.t-online.de (mailout06.t-online.de [194.25.134.19])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ad7a9cc-e9a4-11ec-8901-93a377f238d6;
 Sat, 11 Jun 2022 18:35:21 +0200 (CEST)
Received: from fwd73.dcpf.telekom.de (fwd73.aul.t-online.de [10.223.144.99])
 by mailout06.t-online.de (Postfix) with SMTP id 6B4A110102;
 Sat, 11 Jun 2022 18:34:40 +0200 (CEST)
Received: from [192.168.211.200] ([84.175.233.154]) by fwd73.t-online.de
 with (TLSv1.2:ECDHE-RSA-AES256-GCM-SHA384 encrypted)
 esmtp id 1o044C-06EtoP0; Sat, 11 Jun 2022 18:34:29 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ad7a9cc-e9a4-11ec-8901-93a377f238d6
Message-ID: <60c72935-85ce-4e24-43a5-119f6428b916@t-online.de>
Date: Sat, 11 Jun 2022 18:34:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: =?UTF-8?Q?Volker_R=c3=bcmelin?= <vr_qemu@t-online.de>
Subject: Re: [PULL 00/17] Kraxel 20220610 patches
To: Richard Henderson <richard.henderson@linaro.org>,
 Gerd Hoffmann <kraxel@redhat.com>, =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?=
 <berrange@redhat.com>, qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>, "Michael S. Tsirkin"
 <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Akihiko Odaki <akihiko.odaki@gmail.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20220610092043.1874654-1-kraxel@redhat.com>
 <adec1cff-54f1-e2bf-8092-945601aeb912@linaro.org>
Content-Language: en-US
In-Reply-To: <adec1cff-54f1-e2bf-8092-945601aeb912@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-TOI-EXPURGATEID: 150726::1654965269-01432E2F-9B3FCAFB/0/0 CLEAN NORMAL
X-TOI-MSGID: 97fb026e-e35c-459b-a063-121af80287a2

Am 10.06.22 um 22:16 schrieb Richard Henderson:
> On 6/10/22 02:20, Gerd Hoffmann wrote:
>> The following changes since commit 
>> 9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387:
>>
>>    Merge tag 'pull-xen-20220609' of 
>> https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging 
>> (2022-06-09 08:25:17 -0700)
>>
>> are available in the Git repository at:
>>
>>    git://git.kraxel.org/qemu tags/kraxel-20220610-pull-request
>>
>> for you to fetch changes up to 02319a4d67d3f19039127b8dc9ca9478b6d6ccd8:
>>
>>    virtio-gpu: Respect UI refresh rate for EDID (2022-06-10 11:11:44 
>> +0200)
>>
>> ----------------------------------------------------------------
>> usb: add CanoKey device, fixes for ehci + redir
>> ui: fixes for gtk and cocoa, move keymaps, rework refresh rate
>> virtio-gpu: scanout flush fix
>
> This introduces regressions:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/2576157660
> https://gitlab.com/qemu-project/qemu/-/jobs/2576151565
> https://gitlab.com/qemu-project/qemu/-/jobs/2576154539
> https://gitlab.com/qemu-project/qemu/-/jobs/2575867208
>
>
>  (27/43) 
> tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password: 
> ERROR: ConnectError: Failed to establish session: EOFError\n Exit 
> code: 1\n    Command: ./qemu-system-x86_64 -display none -vga none 
> -chardev 
> socket,id=mon,path=/var/tmp/avo_qemu_sock_4nrz0r37/qemu-2912538-7f732e94e0f0-monitor.sock 
> -mon chardev=mon,mode=control -node... (0.09 s)
>  (28/43) tests/avocado/vnc.py:Vnc.test_change_password:  ERROR: 
> ConnectError: Failed to establish session: EOFError\n    Exit code: 
> 1\n    Command: ./qemu-system-x86_64 -display none -vga none -chardev 
> socket,id=mon,path=/var/tmp/avo_qemu_sock_yhpzy5c3/qemu-2912543-7f732e94b438-monitor.sock 
> -mon chardev=mon,mode=control -node... (0.09 s)
>  (29/43) 
> tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password: 
> ERROR: ConnectError: Failed to establish session: EOFError\n Exit 
> code: 1\n    Command: ./qemu-system-x86_64 -display none -vga none 
> -chardev 
> socket,id=mon,path=/var/tmp/avo_qemu_sock_tk3pfmt2/qemu-2912548-7f732e93d7b8-monitor.sock 
> -mon chardev=mon,mode=control -node... (0.09 s)
>
>
> r~
>

This is caused by [PATCH 14/17] ui: move 'pc-bios/keymaps' to 
'ui/keymaps'. After this patch, QEMU no longer finds its keymaps when 
started directly from the build directory.

With best regards,
Volker



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 17:36:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 17:36:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347365.573728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0520-0003fP-08; Sat, 11 Jun 2022 17:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347365.573728; Sat, 11 Jun 2022 17:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o051z-0003fI-Sj; Sat, 11 Jun 2022 17:36:15 +0000
Received: by outflank-mailman (input) for mailman id 347365;
 Sat, 11 Jun 2022 17:36:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o051x-0003f8-WF; Sat, 11 Jun 2022 17:36:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o051x-00068c-RY; Sat, 11 Jun 2022 17:36:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o051x-0005nP-H9; Sat, 11 Jun 2022 17:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o051x-0006bY-Ge; Sat, 11 Jun 2022 17:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=530TqWbQQKRz7dbyHRAwqBZtyOFQZjttmWAJoG60/iI=; b=KXZewSOxB9S4pDshyt06LNuDSG
	Y1mERXir/0rXcuYmW5+Q/23Z7MhbB2fYH+clJhyjnKo+UvejMILP+ei89y3T3wOqEvwuKbjJu8kqf
	uUSn5n7uLmSbeA1LmTFLhLBt+NyEfSTsDg+FXhWvaLl6W4P8fQBrsoKP3BK7yNpWZ4aA=;
To: xen-devel@lists.xenproject.org
Subject: [xen-4.16-testing bisection] complete test-amd64-coresched-i386-xl
Message-Id: <E1o051x-0006bY-Ge@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 17:36:13 +0000

branch xen-4.16-testing
xenbranch xen-4.16-testing
job test-amd64-coresched-i386-xl
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  b152dfbc3ad71a788996440b18174d995c3bffc9
  Bug not present: 8e11ec8fbf6f933f8854f4bc54226653316903f2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/171092/


  commit b152dfbc3ad71a788996440b18174d995c3bffc9
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 15:27:19 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>
      master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
      master date: 2022-06-09 14:20:36 +0200


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.16-testing/test-amd64-coresched-i386-xl.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.16-testing/test-amd64-coresched-i386-xl.xen-boot --summary-out=tmp/171092.bisection-summary --basis-template=170871 --blessings=real,real-bisect,real-retry xen-4.16-testing test-amd64-coresched-i386-xl xen-boot
Searching for failure / basis pass:
 170913 fail [host=nobling0] / 170871 [host=debina1] 169333 [host=pinot0] 169252 ok.
Failure / basis pass flights: 170913 / 169252
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#b1b89f9009f2390652e0061bd7b24fc40732bc70-ff36b2550f94dc5fac838cf298ae5a23cfddf204 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#107951211a8d17658e1aaa0c23a8cf29f8806ad8-107951211a8d17658e1aaa0c23a8cf29f8806ad8 git://xenbits.xen.org/osstest/seabios.git#01774004c7f7fdc9c1e8f1715f70d3b913f8d491-dc88f9b72df52b22c35b127b80c487e0b6fca4af git://xenbits.xen.org/xen.git#b953760d0b564478e232e7e64823d2a1506e92b5-dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
Loaded 7994 nodes in revision graph
Searching for test results:
 169238 [host=nobling1]
 169252 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5
 169333 [host=pinot0]
 170871 [host=debina1]
 170901 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
 170974 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5
 170979 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
 170913 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af dc020d8d1ba420e2dd0e7a40f5045db897f3c4f4
 170981 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dab96cf02e3be378310dd1bce119b0fac6fac958 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5
 170986 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 17702186b56209842e002235c29ffec5ed69745a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2
 170989 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 632574ced10fe184d5665b73c62c959109c39961 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2
 170995 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 7c003ab4a398ff4ddd54d15d4158cffb463134cc
 170998 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 982a314bd3000a16c3128afadb36a8ff41029adc
 171001 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
 171039 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f1be0b62a03b90a40a03e21f965e4cbb89809bb1
 171074 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 171078 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 171082 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
 171085 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 171088 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
 171090 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
 171092 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af b152dfbc3ad71a788996440b18174d995c3bffc9
Searching for interesting versions
 Result found: flight 169252 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2, results HASH(0x55af84629cb0) HASH(0x55af846fe8a8) HASH(0x55af846302f0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2, results HASH(0x55af84629088) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c\
 23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f1be0b62a03b90a40a03e21f965e4cbb89809bb1, results HASH(0x55af846f7988) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 982a314bd3000a16c3128afadb36a8ff41029adc, results HASH(0x55af8464ee00) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a81a650da1dc40ec2b2825d1878cdf2778b4be14 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 7c003ab4a398ff4ddd54d15d4158cffb463134cc, results HASH(0x55af8464c7f8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 632574ced10fe184d566\
 5b73c62c959109c39961 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af f26544492298cb82d66f9bf36e29d2f75b3133f2, results HASH(0x55af846299b0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 17702186b56209842e002235c29ffec5ed69745a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6f\
 ca4af f26544492298cb82d66f9bf36e29d2f75b3133f2, results HASH(0x55af82ffe038) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dab96cf02e3be378310dd1bce119b0fac6fac958 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5, results HASH(0x55af84644f90) For basis failure, parent search stopping at c3038e718a19fc59\
 6f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b1b89f9009f2390652e0061bd7b24fc40732bc70 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 01774004c7f7fdc9c1e8f1715f70d3b913f8d491 b953760d0b564478e232e7e64823d2a1506e92b5, results HASH(0x55af84634c00) HASH(0x55af8462a5b0) Result found: flight 170901 (fail), for basis failure (at ancestor ~644)
 Repro found: flight 170974 (pass), for basis pass
 Repro found: flight 170979 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ff36b2550f94dc5fac838cf298ae5a23cfddf204 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 107951211a8d17658e1aaa0c23a8cf29f8806ad8 dc88f9b72df52b22c35b127b80c487e0b6fca4af 8e11ec8fbf6f933f8854f4bc54226653316903f2
No revisions left to test, checking graph state.
 Result found: flight 171078 (pass), for last pass
 Result found: flight 171082 (fail), for first failure
 Repro found: flight 171085 (pass), for last pass
 Repro found: flight 171088 (fail), for first failure
 Repro found: flight 171090 (pass), for last pass
 Repro found: flight 171092 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  b152dfbc3ad71a788996440b18174d995c3bffc9
  Bug not present: 8e11ec8fbf6f933f8854f4bc54226653316903f2
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/171092/


  commit b152dfbc3ad71a788996440b18174d995c3bffc9
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Thu Jun 9 15:27:19 2022 +0200
  
      x86/pv: Clean up _get_page_type()
      
      Various fixes for clarity, ahead of making complicated changes.
      
       * Split the overflow check out of the if/else chain for type handling, as
         it's somewhat unrelated.
       * Comment the main if/else chain to explain what is going on.  Adjust one
         ASSERT() and state the bit layout for validate-locked and partial states.
       * Correct the comment about TLB flushing, as it's backwards.  The problem
         case is when writeable mappings are retained to a page becoming read-only,
         as it allows the guest to bypass Xen's safety checks for updates.
       * Reduce the scope of 'y'.  It is an artefact of the cmpxchg loop and not
         valid for use by subsequent logic.  Switch to using ACCESS_ONCE() to treat
         all reads as explicitly volatile.  The only thing preventing the validated
         wait-loop being infinite is the compiler barrier hidden in cpu_relax().
       * Replace one page_get_owner(page) with the already-calculated 'd' already in
         scope.
      
      No functional change.
      
      This is part of XSA-401 / CVE-2022-26362.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: George Dunlap <george.dunlap@citrix.com>
      master commit: 9186e96b199e4f7e52e033b238f9fe869afb69c7
      master date: 2022-06-09 14:20:36 +0200

pnmtopng: 115 colors found
Revision graph left in /home/logs/results/bisect/xen-4.16-testing/test-amd64-coresched-i386-xl.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
171092: tolerable ALL FAIL

flight 171092 xen-4.16-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/171092/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-coresched-i386-xl  8 xen-boot                fail baseline untested


jobs:
 test-amd64-coresched-i386-xl                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Jun 11 17:59:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 17:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347395.573814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o05O6-0007TZ-Pu; Sat, 11 Jun 2022 17:59:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347395.573814; Sat, 11 Jun 2022 17:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o05O6-0007TS-Md; Sat, 11 Jun 2022 17:59:06 +0000
Received: by outflank-mailman (input) for mailman id 347395;
 Sat, 11 Jun 2022 17:59:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o05O5-0007TI-3J; Sat, 11 Jun 2022 17:59:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o05O5-0006Wr-0s; Sat, 11 Jun 2022 17:59:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o05O4-0006Kn-OD; Sat, 11 Jun 2022 17:59:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o05O4-0002zq-Nn; Sat, 11 Jun 2022 17:59:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AZlxogDMuLPyTWhLPWBpEEObacaLGuqwr72HC32rtEg=; b=XEQffsXL57XW0L/w1lrAyTwD4h
	SdATgz6R/B1gw0s/1TOdP24u4lpjPu4CdSGf4edAAgYJT3E2BThIB/nTcI3V89cMeB8R+x5Qn2GQz
	B9GxtX+V6p2j7YTqvSmW+uHgwKzSFvhAPtJPxGPoKq3CyvEgTYsjz8KDQIUVYGpjCNW0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170966-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 170966: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b8bc4588b32e8a40354defac29ceb9c90e570af8
X-Osstest-Versions-That:
    xen=7ac12e3634cc3ed9234de03e48149e7f5fbf73c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 17:59:04 +0000

flight 170966 xen-unstable real [real]
flight 171093 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170966/
http://logs.test-lab.xenproject.org/osstest/logs/171093/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 171093-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 171093 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 171093 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170890
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170890
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170890
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170890
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170890
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170890
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170890
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170890
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170890
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170890
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170890
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170890
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  b8bc4588b32e8a40354defac29ceb9c90e570af8
baseline version:
 xen                  7ac12e3634cc3ed9234de03e48149e7f5fbf73c3

Last test of basis   170890  2022-06-08 20:37:45 Z    2 days
Failing since        170897  2022-06-09 08:20:47 Z    2 days    3 attempts
Testing same since   170966  2022-06-10 21:15:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <jgrall@amazon.com> # Arm
  Luca Fancellu <luca.fancellu@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7ac12e3634..b8bc4588b3  b8bc4588b32e8a40354defac29ceb9c90e570af8 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 19:24:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 19:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347420.573862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o06i6-00023z-OY; Sat, 11 Jun 2022 19:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347420.573862; Sat, 11 Jun 2022 19:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o06i6-00023s-Lo; Sat, 11 Jun 2022 19:23:50 +0000
Received: by outflank-mailman (input) for mailman id 347420;
 Sat, 11 Jun 2022 19:23:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o06i5-00023i-Fx; Sat, 11 Jun 2022 19:23:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o06i5-000849-DU; Sat, 11 Jun 2022 19:23:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o06i5-0000C5-3r; Sat, 11 Jun 2022 19:23:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o06i5-0003Pe-36; Sat, 11 Jun 2022 19:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uTu0olDUIdwXDzwRndk+CwAoYtY5Ea30InqM0im744I=; b=M+kX4TwFYcRSe9QDAEO9vH1Lqc
	SRuPJ9dT17CU8EBVgOELy9hanYUwbmcmV3sRvvSAGzvYk8KLFGgcIwtMjOXTUFSRINtdcCbda+sD1
	yS9qxasJDW00VduxxpFBxQY/HlTq9CBlJHXSQcQbBNHnxGdPocMj2Qc7MkuHxPjfqpKQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170984-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 170984: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate/x10:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    xen-4.16-testing:test-amd64-i386-xl-vhd:xen-install:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0b4e62847c5af1a59eea8d17093feccd550d1c26
X-Osstest-Versions-That:
    xen=8e11ec8fbf6f933f8854f4bc54226653316903f2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 19:23:49 +0000

flight 170984 xen-4.16-testing real [real]
flight 171139 xen-4.16-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170984/
http://logs.test-lab.xenproject.org/osstest/logs/171139/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-dom0pvh-xl-intel 20 guest-localmigrate/x10 fail pass in 171139-retest

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 170871

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd        7 xen-install                  fail  like 170871
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170871
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170871
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170871
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170871
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170871
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170871
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170871
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170871
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170871
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170871
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0b4e62847c5af1a59eea8d17093feccd550d1c26
baseline version:
 xen                  8e11ec8fbf6f933f8854f4bc54226653316903f2

Last test of basis   170871  2022-06-07 12:36:55 Z    4 days
Failing since        170901  2022-06-09 13:36:44 Z    2 days    3 attempts
Testing same since   170984  2022-06-11 02:19:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8e11ec8fbf..0b4e62847c  0b4e62847c5af1a59eea8d17093feccd550d1c26 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 22:33:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 22:33:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347435.573877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o09fG-0007PD-B0; Sat, 11 Jun 2022 22:33:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347435.573877; Sat, 11 Jun 2022 22:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o09fG-0007P6-7m; Sat, 11 Jun 2022 22:33:06 +0000
Received: by outflank-mailman (input) for mailman id 347435;
 Sat, 11 Jun 2022 22:33:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09fE-0007Ow-Om; Sat, 11 Jun 2022 22:33:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09fE-0002pj-ID; Sat, 11 Jun 2022 22:33:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09fE-0002sa-4K; Sat, 11 Jun 2022 22:33:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o09fE-0004lD-3t; Sat, 11 Jun 2022 22:33:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4KDElDgqt9p+MtSVcy5zH4r/QKt+NnBwqFPdT/RplOU=; b=Bmx5I7FIVibuCCzoz0wgxdaGw1
	4VLTVqRK4ixGwxgaUr+LwmSWZYW5UYKhzk3jLsm+0NQDcgX3vuTVWgaMdyTZ1mKYhKUwu7nZyvYXL
	7OS7j/L2OsMEsPSB1pzDIz19sOzjP33bG6J4ho/8UUbgRno4Bdp9vvlTZGSAHu1E3w6g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170990-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 170990: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a7d2272e59db058962fd5ef84737d7a741fd516a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 22:33:04 +0000

flight 170990 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/170990/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a7d2272e59db058962fd5ef84737d7a741fd516a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  701 days
Failing since        151818  2020-07-11 04:18:52 Z  700 days  682 attempts
Testing same since   170990  2022-06-11 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112934 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 11 22:51:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jun 2022 22:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347447.573888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o09x0-0001uI-07; Sat, 11 Jun 2022 22:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347447.573888; Sat, 11 Jun 2022 22:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o09wz-0001uB-Tb; Sat, 11 Jun 2022 22:51:25 +0000
Received: by outflank-mailman (input) for mailman id 347447;
 Sat, 11 Jun 2022 22:51:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09wz-0001u1-B3; Sat, 11 Jun 2022 22:51:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09wz-00038Z-8c; Sat, 11 Jun 2022 22:51:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o09wy-0003kR-SP; Sat, 11 Jun 2022 22:51:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o09wy-0000PH-Rz; Sat, 11 Jun 2022 22:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xTwR1G7z+J35Pz5jm6b7RcsdrgSKNiWsysuFnWmvBYs=; b=H9qkxaXXV+5rwfKZseeGNMvoew
	9oXMrtd6jEk2uT3ctSSXY3Y0FDhIJvJk/ouu7byTy9+IxUbvxu4OiYpTTqphCFRdNh5E5LAlZ9UgT
	Wc7CG9cwBgvhpLBLhp+3wvFXbNTk5pG3nxnOWG4f8ejKiQ/g7IQqDi9aG7TouriTKjRs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-170975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 170975: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b3cd3b5a66f0dddfe3d5ba2bef13cd4f5b89cde9
X-Osstest-Versions-That:
    qemuu=9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jun 2022 22:51:24 +0000

flight 170975 qemu-mainline real [real]
flight 171140 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/170975/
http://logs.test-lab.xenproject.org/osstest/logs/171140/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 171140-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170915
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170915
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170915
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170915
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170915
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170915
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170915
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170915
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                b3cd3b5a66f0dddfe3d5ba2bef13cd4f5b89cde9
baseline version:
 qemuu                9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387

Last test of basis   170915  2022-06-10 05:30:32 Z    1 days
Testing same since   170975  2022-06-10 23:13:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Atish Patra <atishp@rivosinc.com>
  eop Chen <eop.chen@sifive.com>
  eopXD <eop.chen@sifive.com>
  eopXD <yueh.ting.chen@gmail.com>
  Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
  Jamie Iles <jamie@nuviainc.com>
  Junqiang Wang <wangjunqiang@iscas.ac.cn>
  Richard Henderson <richard.henderson@linaro.org>
  Weiwei Li <liweiwei@iscas.ac.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9cc1bf1ebc..b3cd3b5a66  b3cd3b5a66f0dddfe3d5ba2bef13cd4f5b89cde9 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 01:10:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 01:10:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347462.573900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0C7Z-0000hg-82; Sun, 12 Jun 2022 01:10:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347462.573900; Sun, 12 Jun 2022 01:10:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0C7Z-0000hZ-2b; Sun, 12 Jun 2022 01:10:29 +0000
Received: by outflank-mailman (input) for mailman id 347462;
 Sun, 12 Jun 2022 01:10:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tCbL=WT=gmail.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1o0C7W-0000hA-Vd
 for xen-devel@lists.xenproject.org; Sun, 12 Jun 2022 01:10:27 +0000
Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com
 [2607:f8b0:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 704a5757-e9ec-11ec-bd2c-47488cf2e6aa;
 Sun, 12 Jun 2022 03:10:25 +0200 (CEST)
Received: by mail-pf1-x431.google.com with SMTP id 15so2731844pfy.3
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jun 2022 18:10:25 -0700 (PDT)
Received: from [192.168.66.3] (p912131-ipoe.ipoe.ocn.ne.jp. [153.243.13.130])
 by smtp.gmail.com with ESMTPSA id
 kw19-20020a17090b221300b001dd11e4b927sm4212883pjb.39.2022.06.11.18.10.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 11 Jun 2022 18:10:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 704a5757-e9ec-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=ed2L5Xukb0x530A4eAXxNU5Y2kMwxFVSJ7UPqpwGeqo=;
        b=abxM082Dlwxi65fThPNfZemww/5/56Sdae15mJ/9NowCfcVuO1BqCEcCjPX/iKj5gP
         jzaz/vgCrovBaYKU7U9+DArZtxg20Oi8wzCFth72tIjPW7GrJrqg1NPST0hoDFw6f/qk
         KzfbM+Y7SMTmAzFFC9NVDfjYWyESskIrdWmM0HcE2uxttenUuAlpWUxGkkL8Z+PxT7g2
         zsSo3NIcs/Mt+V5YdgVNm+7bCWhbJiumqSQq/nrIyEnWKoyK1N1zQeuqdiRE+kVdQeoh
         4WUNfAexkeSJ50lHbZ85+JiVZmFwsazFUmHl76hZ4kpL73VbBMMIXogi0mc5zzct/5Zl
         Ruew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=ed2L5Xukb0x530A4eAXxNU5Y2kMwxFVSJ7UPqpwGeqo=;
        b=hnSBlLf8RR+7reclzgQATVKgSsUxrMJVbJUe9Vmgdlwi85NSvnmd442nm/39lFpgBB
         /Fl+ctVvgIRdFEsIlsu2ryLzl96bHPHtajokQLCtd5WkH3nzDqPcJleVKt63Q5fD+iNe
         zCVA19bGvqh/bLJlsvsWccqx5w6VPCVPOKcNO25p5KLMsIlzQbyqUwhdnKTmrcbFzCpv
         8xyKRvp83B475MjJ8/mfpedgADb/1fd/XF4kb47pSm2L6jqF4zMWvQlSAQnawlji6wem
         3FZ8ftOQv7bAzEw2UCG+ICD0TkSh/R+wuD35nx1MvhKluV0iuH8oAIpBZZEtuQZNnbJL
         BagQ==
X-Gm-Message-State: AOAM530RE1OhWPxVJQOM3xVF3kQmpyJ/Qn69fem4h7VAjbS+V2pTIHRy
	Gk8oqAAFzaM6S94xRI+5ID4=
X-Google-Smtp-Source: ABdhPJx6nOH3fmkp/Vrk+8u78czzsqal/W8wFxPqKLD2ytrgYjBiiB/mtGp+2rlz16bQrrPZgqSQ7w==
X-Received: by 2002:a05:6a00:194d:b0:51b:eb84:49b1 with SMTP id s13-20020a056a00194d00b0051beb8449b1mr43072278pfk.77.1654996224101;
        Sat, 11 Jun 2022 18:10:24 -0700 (PDT)
Message-ID: <77ae4462-6cd6-b612-a36d-68f196d6a0cf@gmail.com>
Date: Sun, 12 Jun 2022 10:10:18 +0900
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux aarch64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PULL 00/17] Kraxel 20220610 patches
Content-Language: en-US
To: =?UTF-8?Q?Volker_R=c3=bcmelin?= <vr_qemu@t-online.de>,
 Richard Henderson <richard.henderson@linaro.org>,
 Gerd Hoffmann <kraxel@redhat.com>, =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?=
 <berrange@redhat.com>, qemu-devel@nongnu.org
Cc: "Canokeys.org" <contact@canokeys.org>, "Michael S. Tsirkin"
 <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Peter Maydell <peter.maydell@linaro.org>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20220610092043.1874654-1-kraxel@redhat.com>
 <adec1cff-54f1-e2bf-8092-945601aeb912@linaro.org>
 <60c72935-85ce-4e24-43a5-119f6428b916@t-online.de>
From: Akihiko Odaki <akihiko.odaki@gmail.com>
In-Reply-To: <60c72935-85ce-4e24-43a5-119f6428b916@t-online.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2022/06/12 1:34, Volker Rümelin wrote:
> Am 10.06.22 um 22:16 schrieb Richard Henderson:
>> On 6/10/22 02:20, Gerd Hoffmann wrote:
>>> The following changes since commit 
>>> 9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387:
>>>
>>>    Merge tag 'pull-xen-20220609' of 
>>> https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging 
>>> (2022-06-09 08:25:17 -0700)
>>>
>>> are available in the Git repository at:
>>>
>>>    git://git.kraxel.org/qemu tags/kraxel-20220610-pull-request
>>>
>>> for you to fetch changes up to 02319a4d67d3f19039127b8dc9ca9478b6d6ccd8:
>>>
>>>    virtio-gpu: Respect UI refresh rate for EDID (2022-06-10 11:11:44 
>>> +0200)
>>>
>>> ----------------------------------------------------------------
>>> usb: add CanoKey device, fixes for ehci + redir
>>> ui: fixes for gtk and cocoa, move keymaps, rework refresh rate
>>> virtio-gpu: scanout flush fix
>>
>> This introduces regressions:
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/2576157660
>> https://gitlab.com/qemu-project/qemu/-/jobs/2576151565
>> https://gitlab.com/qemu-project/qemu/-/jobs/2576154539
>> https://gitlab.com/qemu-project/qemu/-/jobs/2575867208
>>
>>
>>  (27/43) 
>> tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password: 
>> ERROR: ConnectError: Failed to establish session: EOFError\n Exit 
>> code: 1\n    Command: ./qemu-system-x86_64 -display none -vga none 
>> -chardev 
>> socket,id=mon,path=/var/tmp/avo_qemu_sock_4nrz0r37/qemu-2912538-7f732e94e0f0-monitor.sock 
>> -mon chardev=mon,mode=control -node... (0.09 s)
>>  (28/43) tests/avocado/vnc.py:Vnc.test_change_password:  ERROR: 
>> ConnectError: Failed to establish session: EOFError\n    Exit code: 
>> 1\n    Command: ./qemu-system-x86_64 -display none -vga none -chardev 
>> socket,id=mon,path=/var/tmp/avo_qemu_sock_yhpzy5c3/qemu-2912543-7f732e94b438-monitor.sock 
>> -mon chardev=mon,mode=control -node... (0.09 s)
>>  (29/43) 
>> tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password: 
>> ERROR: ConnectError: Failed to establish session: EOFError\n Exit 
>> code: 1\n    Command: ./qemu-system-x86_64 -display none -vga none 
>> -chardev 
>> socket,id=mon,path=/var/tmp/avo_qemu_sock_tk3pfmt2/qemu-2912548-7f732e93d7b8-monitor.sock 
>> -mon chardev=mon,mode=control -node... (0.09 s)
>>
>>
>> r~
>>
> 
> This is caused by [PATCH 14/17] ui: move 'pc-bios/keymaps' to 
> 'ui/keymaps'. After this patch QEMU no longer finds its keymaps if 
> started directly from the build directory.
> 
> With best regards,
> Volker
> 

I have a patch series which allows QEMU to find files outside the 
pc-bios directory even when started directly from the build directory:
https://patchew.org/QEMU/20220228005710.10442-1-akihiko.odaki@gmail.com/

Regards,
Akihiko Odaki


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 02:55:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 02:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347474.573909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Dkg-00051H-QX; Sun, 12 Jun 2022 02:54:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347474.573909; Sun, 12 Jun 2022 02:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Dkg-00051A-NC; Sun, 12 Jun 2022 02:54:58 +0000
Received: by outflank-mailman (input) for mailman id 347474;
 Sun, 12 Jun 2022 02:54:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Dkf-000510-Os; Sun, 12 Jun 2022 02:54:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Dkf-0005z7-LU; Sun, 12 Jun 2022 02:54:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Dke-0006KR-Hg; Sun, 12 Jun 2022 02:54:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Dke-00071Q-HG; Sun, 12 Jun 2022 02:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aaioXoNvePIGXlHZdM6BrQZVsGaAub/8SZw13Hn4uxA=; b=1Sa4VHdSzpwetJrkiUv+/gGQ/F
	Fo+2uNAAdzbxd9qrUjt/TJhOOFa6szHIxMdIvds9o7kNtyQzdGlBZ7vqXpdTK/kCCev6IDF9/D2pI
	Jm2GTJLP82LPuQ5K1BzWiuFaJJTNaiKxIAxXTGXJpIF2Sqzg6EhNjRCE6+vGYBgC2MUA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171083-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171083: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:build-i386-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0885eacdc81f920c3e0554d5615e69a66504a28d
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 02:54:56 +0000

flight 171083 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171083/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 build-i386-pvops              6 kernel-build             fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0885eacdc81f920c3e0554d5615e69a66504a28d
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   18 days
Failing since        170716  2022-05-24 11:12:06 Z   18 days   46 attempts
Testing same since   171083  2022-06-11 13:10:06 Z    0 days    1 attempts

------------------------------------------------------------
2330 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 274364 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 09:12:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 09:12:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347508.573957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Jdw-0008Q4-AS; Sun, 12 Jun 2022 09:12:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347508.573957; Sun, 12 Jun 2022 09:12:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Jdw-0008Pt-2u; Sun, 12 Jun 2022 09:12:24 +0000
Received: by outflank-mailman (input) for mailman id 347508;
 Sun, 12 Jun 2022 09:12:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Jdv-0008PQ-2k; Sun, 12 Jun 2022 09:12:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Jdv-0005Ho-0P; Sun, 12 Jun 2022 09:12:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Jdu-0001Oa-EQ; Sun, 12 Jun 2022 09:12:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Jdu-0006bA-Dw; Sun, 12 Jun 2022 09:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DBPu8Gejb5EVCcJtym6iF6J3AbtO5iHJ724DI0YoY8c=; b=Fc2qtjBoITB+Nx1ntnY4WKOlcS
	RfTWhH1xWFU5rYKbTaXb67VZvu32esG5LEDusGh6zO3GGv/UOD6iYe13r/o8YcB8SmHNN/hLWqg0d
	Su2RlhBQHFN/c3vQlhv7Tp9oteIyEoy3Uue0wFq3gOOGjlK1WHK7FdpyefCq1lK8dXaA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171137-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171137: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
X-Osstest-Versions-That:
    xen=b8bc4588b32e8a40354defac29ceb9c90e570af8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 09:12:22 +0000

flight 171137 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171137/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170966
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170966
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170966
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170966
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170966
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170966
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170966
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170966
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170966
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170966
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170966
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170966
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  b8bc4588b32e8a40354defac29ceb9c90e570af8

Last test of basis   170966  2022-06-10 21:15:10 Z    1 days
Testing same since   171137  2022-06-11 18:01:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Hongyan Xia <hongyxia@amazon.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b8bc4588b3..c9a707df83  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b -> master


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 14:06:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 14:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347525.573968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0OEC-0000iM-9t; Sun, 12 Jun 2022 14:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347525.573968; Sun, 12 Jun 2022 14:06:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0OEC-0000iF-5n; Sun, 12 Jun 2022 14:06:08 +0000
Received: by outflank-mailman (input) for mailman id 347525;
 Sun, 12 Jun 2022 14:06:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0OEA-0000i5-RX; Sun, 12 Jun 2022 14:06:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0OEA-0001qu-Pe; Sun, 12 Jun 2022 14:06:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0OEA-0005xL-BR; Sun, 12 Jun 2022 14:06:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0OEA-00038Z-Ax; Sun, 12 Jun 2022 14:06:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oC9JA3ZCh7y8VyKrIUu9XxogB8Pq7Wzj/jOiofmoSfE=; b=Nr1jJxjyYIED3DpNq/uP9n1rBG
	NLDQ6SLT8/m75fPtg2ykprPrFQJlYFISSAj0dAdEyXXRCRjZ6GqxqyvvdfwYxL5P+Yu4kxQJ8yDk+
	NKEfCgEO7PAS1+NzAxLpX5XgfgXe44n954cTTp4LcYVsBMAnB/kcrwD4oBYjgcW93nKQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171141-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171141: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=30796f556790631c86c733ab06756981be0e1def
X-Osstest-Versions-That:
    qemuu=b3cd3b5a66f0dddfe3d5ba2bef13cd4f5b89cde9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 14:06:06 +0000

flight 171141 qemu-mainline real [real]
flight 171145 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171141/
http://logs.test-lab.xenproject.org/osstest/logs/171145/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 171145-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 171145-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170975
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170975
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170975
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170975
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170975
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170975
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170975
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170975
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                30796f556790631c86c733ab06756981be0e1def
baseline version:
 qemuu                b3cd3b5a66f0dddfe3d5ba2bef13cd4f5b89cde9

Last test of basis   170975  2022-06-10 23:13:00 Z    1 days
Testing same since   171141  2022-06-11 22:53:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ani Sinha <ani@anisinha.ca>
  Changpeng Liu <changpeng.liu@intel.com>
  Claudio Fontana <cfontana@suse.de>
  Gerd Hoffmann <kraxel@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Igor Mammedov <imammedo@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Michael S. Tsirkin <mst@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   b3cd3b5a66..30796f5567  30796f556790631c86c733ab06756981be0e1def -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 17:38:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 17:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347618.573979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0RXH-0008V5-LN; Sun, 12 Jun 2022 17:38:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347618.573979; Sun, 12 Jun 2022 17:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0RXH-0008Uy-Fq; Sun, 12 Jun 2022 17:38:03 +0000
Received: by outflank-mailman (input) for mailman id 347618;
 Sun, 12 Jun 2022 17:38:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0RXG-0008Uo-G7; Sun, 12 Jun 2022 17:38:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0RXG-0005q0-Bx; Sun, 12 Jun 2022 17:38:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0RXF-0000VF-PX; Sun, 12 Jun 2022 17:38:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0RXF-0006CG-P4; Sun, 12 Jun 2022 17:38:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BiX5+uwS6jBtTtaRTgszqy6TCS/Gg8gHzeXAxI7FFEc=; b=EbekjY/lZIxiCDEDkGxhEkX2RL
	S+cOrZ5zcJPOopwyn+mhcCkPuXSc9jqJYeeGKTItC6fM+3M9FoyZUQlgbgLmVQDzh4Xvffkfb4PRx
	WE/Ajzu0J6teASR7u9EBRKYm0J79d0VlR/iVe1UuAtFS7Pr30TBsaA9CoQLFAy8FWBcA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171143: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a7d2272e59db058962fd5ef84737d7a741fd516a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 17:38:01 +0000

flight 171143 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a7d2272e59db058962fd5ef84737d7a741fd516a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  702 days
Failing since        151818  2020-07-11 04:18:52 Z  701 days  683 attempts
Testing same since   170990  2022-06-11 04:18:55 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112934 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 18:05:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 18:05:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347632.573989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Ry4-00044z-0V; Sun, 12 Jun 2022 18:05:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347632.573989; Sun, 12 Jun 2022 18:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Ry3-00044s-U7; Sun, 12 Jun 2022 18:05:43 +0000
Received: by outflank-mailman (input) for mailman id 347632;
 Sun, 12 Jun 2022 18:05:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Ry2-00044i-Ir; Sun, 12 Jun 2022 18:05:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Ry2-0006Nx-Dt; Sun, 12 Jun 2022 18:05:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Ry1-0001Dj-Ra; Sun, 12 Jun 2022 18:05:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Ry1-000296-R6; Sun, 12 Jun 2022 18:05:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ezEwqjZxesI0ZCE8h55HMW+hQrBPsMloS3giZYD7H/E=; b=q8sfspsb0S98kpPQX4qWjG17uo
	UbwHf4hu78j79Iu77oHsg0E8RW/HNoeADe3WTrkaUP75vYQhs9UfLE3sXUiKjnFb3GJKq3YGbeOMQ
	n6cGbMBAvMoDv1xGuM3laIQMlPkeYftp48c73ffWP40rFqVY2cyGfHwByPFi7X37h8NI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171142-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171142: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7a68065eb9cd194cf03f135c9211eeb2d5c4c0a0
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 18:05:41 +0000

flight 171142 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171142/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                7a68065eb9cd194cf03f135c9211eeb2d5c4c0a0
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   19 days
Failing since        170716  2022-05-24 11:12:06 Z   19 days   47 attempts
Testing same since   171142  2022-06-12 02:58:57 Z    0 days    1 attempts

------------------------------------------------------------
2336 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 275185 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 12 20:10:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jun 2022 20:10:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347649.574001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Tuy-0001qN-1V; Sun, 12 Jun 2022 20:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347649.574001; Sun, 12 Jun 2022 20:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0Tux-0001qG-US; Sun, 12 Jun 2022 20:10:39 +0000
Received: by outflank-mailman (input) for mailman id 347649;
 Sun, 12 Jun 2022 20:10:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Tuv-0001q6-V3; Sun, 12 Jun 2022 20:10:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Tuv-0008TA-Q7; Sun, 12 Jun 2022 20:10:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Tuv-000669-7j; Sun, 12 Jun 2022 20:10:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Tuv-0003cB-7J; Sun, 12 Jun 2022 20:10:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b0uMf8qDaowMMC3BiIeYQAr+AqWmeEE3dXt0BeJrwbo=; b=C9LqREQmXUV1BOaIXv54QlNS/W
	RPvb9lsnwXZ+jUKS6d9KMbkcJBbALB4tAvQ6kXTt51L5QJ4Y+j1okGA0WQ2zAPlXeG5h3mNgPVEbM
	2aGrAL+rD59d3grOvHteNPfr39eDgh7EyKAEp/iAzQrXIHI1N2Sq7VKdM/c8XXSFeyvI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171144-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171144: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
X-Osstest-Versions-That:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jun 2022 20:10:37 +0000

flight 171144 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171144/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 171137
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 171137

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 171137 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171137
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171137
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171137
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171137
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171137
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171137
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171137
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171137
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171137
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171137
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171137
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171137
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171144  2022-06-12 09:14:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 00:22:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 00:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347668.574012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0XqB-000502-0g; Mon, 13 Jun 2022 00:21:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347668.574012; Mon, 13 Jun 2022 00:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0XqA-0004zv-U8; Mon, 13 Jun 2022 00:21:58 +0000
Received: by outflank-mailman (input) for mailman id 347668;
 Mon, 13 Jun 2022 00:21:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Xq9-0004zl-7q; Mon, 13 Jun 2022 00:21:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Xq9-0004oQ-5B; Mon, 13 Jun 2022 00:21:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Xq8-0004Y4-M1; Mon, 13 Jun 2022 00:21:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0Xq8-0003oX-LZ; Mon, 13 Jun 2022 00:21:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V7VCm97CcO2M/HHN/9erkYxlzScZ6UIn+3mOU2iIUqQ=; b=fzTrj2oyb1PaJ3aHKyyaYGbPmz
	E4CZE7uNR8jjuVhmHtSGI/q15Bzx2JnSTRjmlBd6dcDdCsu6Hi+qWzaB/hRvrGbtlBpsaLKb/M/8M
	3NnXrGoGEAa7hWyn+c4wr5cPc7V5KrvtSFMjsdsW2iGDD/XCQH2Bpiwf8YTvUjD5Evfk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171146: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt-raw:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b871cc83d69a4463ac641286eef7d773ad5b5aaa
X-Osstest-Versions-That:
    qemuu=30796f556790631c86c733ab06756981be0e1def
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 00:21:56 +0000

flight 171146 qemu-mainline real [real]
flight 171148 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171146/
http://logs.test-lab.xenproject.org/osstest/logs/171148/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw  7 xen-install         fail pass in 171148-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 171141

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 171148 like 171141
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 171148 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171141
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171141
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171141
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171141
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171141
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171141
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171141
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                b871cc83d69a4463ac641286eef7d773ad5b5aaa
baseline version:
 qemuu                30796f556790631c86c733ab06756981be0e1def

Last test of basis   171141  2022-06-11 22:53:29 Z    1 days
Testing same since   171146  2022-06-12 14:08:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bernhard Beschow <shentey@gmail.com>
  Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
  Kyle Evans <kevans@FreeBSD.org>
  Marcin Nowakowski <marcin.nowakowski@fungible.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Michael S. Tsirkin <mst@redhat.com>
  Ni Hui <shuizhuyuanluo@126.com>
  nihui <shuizhuyuanluo@126.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@fungible.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stacey Son <sson@FreeBSD.org>
  Stefan Pejic <stefan.pejic@syrmia.com>
  Warner Losh <imp@bsdimp.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   30796f5567..b871cc83d6  b871cc83d69a4463ac641286eef7d773ad5b5aaa -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 03:28:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 03:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347691.574023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0akU-0007TN-CW; Mon, 13 Jun 2022 03:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347691.574023; Mon, 13 Jun 2022 03:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0akU-0007TG-8H; Mon, 13 Jun 2022 03:28:18 +0000
Received: by outflank-mailman (input) for mailman id 347691;
 Mon, 13 Jun 2022 03:28:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5ko=WU=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o0akT-0007Rd-0r
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 03:28:17 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db2ba1c5-eac8-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 05:28:14 +0200 (CEST)
Received: from AS8PR04CA0001.eurprd04.prod.outlook.com (2603:10a6:20b:310::6)
 by VI1PR08MB2927.eurprd08.prod.outlook.com (2603:10a6:802:21::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.17; Mon, 13 Jun
 2022 03:28:08 +0000
Received: from AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:310:cafe::17) by AS8PR04CA0001.outlook.office365.com
 (2603:10a6:20b:310::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20 via Frontend
 Transport; Mon, 13 Jun 2022 03:28:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT034.mail.protection.outlook.com (10.152.16.81) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Mon, 13 Jun 2022 03:28:07 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Mon, 13 Jun 2022 03:28:07 +0000
Received: from 616c64f372da.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D5F8E906-9A1F-4D38-B081-8557E32F28C6.1; 
 Mon, 13 Jun 2022 03:28:00 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 616c64f372da.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 13 Jun 2022 03:28:00 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AS8PR08MB6887.eurprd08.prod.outlook.com (2603:10a6:20b:38e::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Mon, 13 Jun
 2022 03:27:57 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%7]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 03:27:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db2ba1c5-eac8-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=M7arhBqeI2hOebPQZZuHAQNO28/O78dmIetscEJFYSr0YcianbCkvd07a6kQC8uMO/xYVzKo+kdXq4TWfWlGJVDjUwkiPrUC95onh8eU7T1nCptKUkm1h9muqaFbmyzdXCRdNUC54zmRYTfREN6Kais56DNQEOWsz/L2kiP9IXCKeAsX7GHuR0sNnD696DeNV/TjTNKigkhopCfeWJcZyslvEZ/0IWJ5F4IRRWT7JiXI5/lP5qYCs382qCXbFpIKXHCgDR7rZu8F4DTlXARNoHTdN+LQwkEMUpxMlXSQamWcvVUsxPam0MFT02S23yeVUk6cLm4gpmhFD+5Wz0bhdw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HfKWvJ0OYjmx4LqA7vYDLNLOfUmHgMZfQzfCSQzUGHQ=;
 b=EnH9mu6gCVt0IY9PopWguCyBdlsaMSJOzz8aijiTAI7IcZipmFjFlyeZgUxAvFBnBrB0yym4wZ4YFbaFKLHo++BA1cIWcC0HbfYP9DNI74pllt2VDRgpGJGdq3Qibct1IesKXV7SaoQCjxY6uxZsNkFXriAslLpoRBwxXDHkrEjeidGF9NvFpkWR1Zb8qlVlRFza6R4JzK//C4qwGvyBHsH9YyYlvvhQgUSWro/0+0rqSVsKLq5vLT1az3Y9WfqdejzpYtzWJ74rSgc0/uJ+UeGqWAC/hxdTZSJeN4r0DfHZJUioNCsUr48GOdEjr0mLQLbn3KPtPUfQoMDfF2JW+Q==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HfKWvJ0OYjmx4LqA7vYDLNLOfUmHgMZfQzfCSQzUGHQ=;
 b=4Na4MS+/MdB7wizw9glMzpaU+Mo3GBpV11U0u0gbZDCLA0BuhXy4H61rFNfk9IqGRX4usyp+ok+Y5jN2Yisadb5XoMW+PCMbhgooBSeWLnx2TJUB0a3S73v9FeKFztIad9tPM8yancd35G9/8wpiYEOYuj/JqLI/Xe8QifMAUQ8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aIuKUulgzn/k6srSXYG8VqAHyOW3zxnWsxZXsWeluuKgwFIkzlw/DPYqvLB/Ht/IiObJopSF8u/W2rHmpZUl1esmBc4p4d8mXibhdd8W3nclsMEmhsEpftnwinobYy1Efim84xrmYmKswdrFGPCQh30Fv2pRV5zxog32GQxgMsDmFpJGzczYDIbUarhHb/OHzStVqszWsnVfOomeOrRBToLdMtJ3oQMSQOny/5D+ElpMoROSvJjYo4jBDR6sYjF3oQ7HHukVSJtY2TKqfgJRZ/Y1G+qLCkyllv8oqjCysH9a6MQv864RN+osDJiDaR/TA70rmT4PMaaDsuJgchQ7Sg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HfKWvJ0OYjmx4LqA7vYDLNLOfUmHgMZfQzfCSQzUGHQ=;
 b=FgRR1Mm8prYS09aAaKFYnl5MlJw9sDpTw99rhZli/9gp6LWFhag+Nx1of7aJ+SqGt+xupQVEl+YBGolW0Fx1mJvYCcQBl6okE7SeqK1+Lnm3FKr4HhlWCJzRbHJzX9i554otCPiQPfsif7fATeptVjyjBcLb2AKwRwmYaSXAU/QPnIdOVlce3M7+abQxghHPXx6BsrBVXO4EWNs/G0Vg1R5YYe5B5/brGjwKi9FMAc+cv/Re9AkEnUCu3yhDaMOZybW70cQNJvDNxek7WF1Y6wRQsY+72ZEE9NkO2enFn09+AZPpbVhXR0G/x3oEMkx8iUvmW3q/K/QBxdlXNGWOzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HfKWvJ0OYjmx4LqA7vYDLNLOfUmHgMZfQzfCSQzUGHQ=;
 b=4Na4MS+/MdB7wizw9glMzpaU+Mo3GBpV11U0u0gbZDCLA0BuhXy4H61rFNfk9IqGRX4usyp+ok+Y5jN2Yisadb5XoMW+PCMbhgooBSeWLnx2TJUB0a3S73v9FeKFztIad9tPM8yancd35G9/8wpiYEOYuj/JqLI/Xe8QifMAUQ8=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: RE: [PATCH v6 2/9] xen: do not free reserved memory into heap
Thread-Topic: [PATCH v6 2/9] xen: do not free reserved memory into heap
Thread-Index: AQHYekCLapi6TfhNLECKo4lt4smsBq1DqSgAgALDbhCAAGOJgIAAFrZg
Date: Mon, 13 Jun 2022 03:27:56 +0000
Message-ID:
 <DU2PR08MB7325ABFF156C8AB1C8F76BC0F7AB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220607073031.722174-1-Penny.Zheng@arm.com>
 <20220607073031.722174-3-Penny.Zheng@arm.com>
 <d43d2dbd-6b0e-fb0c-5e0a-d409db4e18e9@xen.org>
 <DU2PR08MB7325B2A677FCF2FBF905D588F7A79@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <dd9f8a18-23ce-d5f6-45ff-82376aaefaba@xen.org>
In-Reply-To: <dd9f8a18-23ce-d5f6-45ff-82376aaefaba@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: BCD83B6BCF2D7340BA75DD5ECF96EAA7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 97d4741f-1297-4ef8-dab7-08da4cecbbd2
x-ms-traffictypediagnostic:
	AS8PR08MB6887:EE_|AM5EUR03FT034:EE_|VI1PR08MB2927:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB2927AA344588B07A629807B7F7AB9@VI1PR08MB2927.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 N9H/YlBN7/WUWvbKsn4nqKbZN3rPyO4SAKenijYd/xMu6+xo3j08jKqXoqtrwkplYb/LYEPsTTMLZX+qroZV85tEgub+0yLFSI4tpjs0v7VAiXsm7kMr8+GgRQcYrRZPQC0Z94DYjOcvevokCEkmxKJfYKa+ZH+TUTNDRdmLh4xGamf4CJJePDcjp1vCQ3MiYnJ8tJxjG6xF+3SsAwRfEkmro5MGJglJLNZiRR2AkWnIH6xc17a1s1lT7WyA08mtJo4NLyHeX584zie7TQXWBju8ICJeFsi6uyeza4oGEerSMsrwdOr8LyhSDlOBo/DPkVNl1+MJbcL07qTotinHPapqTvLeD/BUWWeOjlaOqVPcmJ3TL7l6C22nQ9Ta3Cez8M1mg+C4dfedO4uWWe6PcZonPma5igWMJd6KJGBxIzbWZMW5c18YFb5L+H8ugEJxHQQQC5bvSiblhMs9GhOg0BET0SEd/mGhDbxhdV3sLkcUsqvriDF7i/xrCCw3t1wEmGOoNRLyudyXYtowKH14GlXqBB+/QkEByNblhQMu+PCkuP4F+wgMKJQy7rD1X+xdfYBCeJAN6FRyPiNkBaalPOhPq6INW803WgGVIstiIO93eRN2Jj/gPQCKIQ2uh0nhnj0h6mrPax1xLtMoZcGenaYj5JYPn62laGzUUahHfSpTcFrkpyNtWGX4uz1uPt5PQxQoUwsMkbajYx7bRvlDub0Cq4aw6lj841ytAAkkFHI=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(64756008)(55016003)(66446008)(508600001)(33656002)(122000001)(316002)(110136005)(66556008)(186003)(66476007)(66946007)(38070700005)(2906002)(7696005)(71200400001)(5660300002)(6506007)(86362001)(53546011)(26005)(9686003)(52536014)(8676002)(38100700002)(4326008)(76116006)(83380400001)(54906003)(8936002)(156664002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6887
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2f4b2b7d-2026-44ab-2cf4-08da4cecb5b3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IPCYJDFv7pjcMUR7hzc4SLYyk30g3E+cH46v8cL+nKDlsISpFYN2BCfvfGRwr06ssewQBuxWXvvQIs0GXg2KXzLxUHIESl8bi4jQol2K6KLsJLsWax+pCdd/NWHRAYzYFlcAFyFyazvArwmSgz3Ue7obWiEGYz4iTWAp43dEIKBpurEAQbBaPjUeTyhhv4de4sOF3qh1SyhGm58ShP7G4RkjSlmsnfGdLpsNlC2c6rtEJD8JnKj+TlED5kwwf4r+I3X+Iyz/9YQf5ULdegEClkKGDdDY2hZB1mq36ryR/T2x6nN2mPy49cSblb+pm9lx8hSMKeINzdgldUJQ2AwoHzZDAAETNPT992JZ1PyYJTI41Xj+fVX1l4yL3gXF3d42POPs74+YWP/P2vJGkMmT1T8jPNL1sm+ciPnYCC5WxXy469ZYtYcwfvpEGocopMHM1cgeg1mzbLIRW/wXF7weZpgSGUX/RGO8+bA2uQdbXlSJxdHyHUL4wr624dOHpw8EYAA8Cjz4GUHZezNIq2/WVuzKkfkBsfmNgQxpAqccQnGOYCNnMqb4+W2nc6qdkxWhecEEf5X9g4Xxieh5aFJ1yMGMZfe3lKkT/hgp2jTFau2a3VGNGxJ3S4aHvFrABXdgkeu5C336nmTlKFiNCz199iveGnkX0FXjRPgGF0woPsV/Uod59H6ZY7uSqXn2OD0A
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(36840700001)(46966006)(6506007)(7696005)(53546011)(47076005)(9686003)(186003)(26005)(81166007)(356005)(83380400001)(336012)(52536014)(5660300002)(36860700001)(86362001)(33656002)(82310400005)(8676002)(4326008)(55016003)(8936002)(2906002)(316002)(110136005)(70206006)(54906003)(508600001)(70586007)(156664002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 03:28:07.2716
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 97d4741f-1297-4ef8-dab7-08da4cecbbd2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2927

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, June 9, 2022 5:22 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into heap
> 
> Hi,
> 
> On 09/06/2022 06:54, Penny Zheng wrote:
> >
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, June 7, 2022 5:13 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Bertrand Marquis
> >> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> >> <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Wei Liu
> >> <wl@xen.org>
> >> Subject: Re: [PATCH v6 2/9] xen: do not free reserved memory into
> >> heap
> >>
> >> Hi Penny,
> >>
> >
> > Hi Julien
> >
> >> On 07/06/2022 08:30, Penny Zheng wrote:
> >>> Pages used as guest RAM for static domain, shall be reserved to this
> >>> domain only.
> >>> So in case reserved pages being used for other purpose, users shall
> >>> not free them back to heap, even when last ref gets dropped.
> >>>
> >>> free_staticmem_pages will be called by free_heap_pages in runtime
> >>> for static domain freeing memory resource, so let's drop the __init flag.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>> v6 changes:
> >>> - adapt to PGC_static
> >>> - remove #ifdef aroud function declaration
> >>> ---
> >>> v5 changes:
> >>> - In order to avoid stub functions, we #define PGC_staticmem to
> >>> non-zero only when CONFIG_STATIC_MEMORY
> >>> - use "unlikely()" around pg->count_info & PGC_staticmem
> >>> - remove pointless "if", since mark_page_free() is going to set
> >>> count_info to PGC_state_free and by consequence clear PGC_staticmem
> >>> - move #define PGC_staticmem 0 to mm.h
> >>> ---
> >>> v4 changes:
> >>> - no changes
> >>> ---
> >>> v3 changes:
> >>> - fix possible racy issue in free_staticmem_pages()
> >>> - introduce a stub free_staticmem_pages() for the
> >>> !CONFIG_STATIC_MEMORY case
> >>> - move the change to free_heap_pages() to cover other potential call
> >>> sites
> >>> - fix the indentation
> >>> ---
> >>> v2 changes:
> >>> - new commit
> >>> ---
> >>>    xen/arch/arm/include/asm/mm.h |  4 +++-
> >>>    xen/common/page_alloc.c       | 12 +++++++++---
> >>>    xen/include/xen/mm.h          |  2 --
> >>>    3 files changed, 12 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/include/asm/mm.h
> >>> b/xen/arch/arm/include/asm/mm.h index fbff11c468..7442893e77 100644
> >>> --- a/xen/arch/arm/include/asm/mm.h
> >>> +++ b/xen/arch/arm/include/asm/mm.h
> >>> @@ -108,9 +108,11 @@ struct page_info
> >>>      /* Page is Xen heap? */
> >>>    #define _PGC_xen_heap     PG_shift(2)
> >>>    #define PGC_xen_heap      PG_mask(1, 2)
> >>> -  /* Page is static memory */
> >>
> >> NITpicking: You added this comment in patch #1 and now removing the
> space.
> >> Any reason to drop the space?
> >>
> >>> +#ifdef CONFIG_STATIC_MEMORY
> >>
> >> I think this change ought to be explained in the commit message.
> >> AFAIU, this is necessary to allow the compiler to remove code and
> >> avoid linking issues. Is that correct?
> >>
> >>> +/* Page is static memory */
> >>>    #define _PGC_static    PG_shift(3)
> >>>    #define PGC_static     PG_mask(1, 3)
> >>> +#endif
> >>>    /* ... */
> >>>    /* Page is broken? */
> >>>    #define _PGC_broken       PG_shift(7)
> >>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> >>> 9e5c757847..6876869fa6 100644
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -1443,6 +1443,13 @@ static void free_heap_pages(
> >>>
> >>>        ASSERT(order <= MAX_ORDER);
> >>>
> >>> +    if ( unlikely(pg->count_info & PGC_static) )
> >>> +    {
> >>> +        /* Pages of static memory shall not go back to the heap. */
> >>> +        free_staticmem_pages(pg, 1UL << order, need_scrub);
> >> I can't remember whether I asked this before (I couldn't find a thread).
> >>
> >> free_staticmem_pages() doesn't seem to be protected by any lock. So
> >> how do you prevent the concurrent access to the page info with the acquire
> part?
> >
> > True, last time you suggested that rsv_page_list needs to be protected
> > with a spinlock (mostly like d->page_alloc_lock). I haven't thought it
> > thoroughly, sorry about that.
> > So for freeing part, I shall get the lock at arch_free_heap_page(),
> > where we insert the page to the rsv_page_list, and release the lock at the
> end of the free_staticmem_page.
> 
> In general, a lock is better to be lock/unlock in the same function because it is
> easier to verify. However, I am not sure that extending the locking from d-
> >page_alloc_lock up after free_heap_pages() is right.
> 
> The first reason being that they are other callers of free_heap_pages().
> So now all the callers of the helpers would need to know whether they need to
> help d->page_alloc_lock.
> 
> Secondly, free_staticmem_pages() is meant to be the reverse of
> prepare_staticmem_pages(). We should prevent both of them to be called
> concurrently. It sounds strange to use the d->page_alloc_lock to protect it (a
> page is technically not belonging to a domain at this point).
> 
> To me it looks like we want to add the pages on the rsv_page_list
> *after* the pages have been freed. So we know that all the pages on that list
> have been marked as freed (i.e. free_staticmem_pages() completed).
> 

Yes! That fixes everything!
If we add the pages on the rsv_page_list *after* the pages have been freed, we
can make sure that a page acquired from rsv_page_list has been totally freed.
Hmmm, so in that case, do we still need to add a lock here?
We can already make sure that a page acquired from rsv_page_list must be totally
freed, without the lock.
Or in fact, do we use the lock to let prepare_staticmem_pages() not fail if
free_staticmem_pages() happens concurrently?
Trying to understand the intent of the lock here more clearly, hope it's not
a dumb question~

> In addition to that, we would need the code in free_staticmem_pages() to be
> protected by the heap_lock (at least so it is match
> prepare_staticmem_pages()).
> 
> Any thoughts?
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 03:50:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 03:50:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347701.574033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0b69-0002cE-AB; Mon, 13 Jun 2022 03:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347701.574033; Mon, 13 Jun 2022 03:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0b69-0002c7-72; Mon, 13 Jun 2022 03:50:41 +0000
Received: by outflank-mailman (input) for mailman id 347701;
 Mon, 13 Jun 2022 03:50:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0b67-0002bx-MC; Mon, 13 Jun 2022 03:50:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0b67-00071q-Dj; Mon, 13 Jun 2022 03:50:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0b66-0004VX-Vn; Mon, 13 Jun 2022 03:50:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0b66-0004mT-V4; Mon, 13 Jun 2022 03:50:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wxWMDLSpHIECUCq2baWJSEVx7d318XhJOjU0q6g82sQ=; b=3TOGplvHVpd5pLo3Z0lWMO+G/R
	NsLC05vMAP1qS/wMy/2xqiUxECh06wIanxSzLwXektJntmNFt0w0hZzQfX+W4YirpPdk+uffEL0Bj
	6ud/sn3yWa782Ihoy2X3zCZF4LhebrwJdLXfmItHjUvQDE+W2fQQwoDL0FcVXvndDV/k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171150-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171150: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b09ada6edc7f3f28d3b4f2ef09852ebd39f17920
X-Osstest-Versions-That:
    ovmf=f0b97e165e0af79ac9eb3c6ac7697f9183afc7cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 03:50:38 +0000

flight 171150 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171150/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b09ada6edc7f3f28d3b4f2ef09852ebd39f17920
baseline version:
 ovmf                 f0b97e165e0af79ac9eb3c6ac7697f9183afc7cb

Last test of basis   170964  2022-06-10 20:46:17 Z    2 days
Testing same since   171150  2022-06-13 01:41:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhihao Li <zhihao.li@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f0b97e165e..b09ada6edc  b09ada6edc7f3f28d3b4f2ef09852ebd39f17920 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 04:34:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 04:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347713.574044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0blw-0007yM-IB; Mon, 13 Jun 2022 04:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347713.574044; Mon, 13 Jun 2022 04:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0blw-0007yF-Fc; Mon, 13 Jun 2022 04:33:52 +0000
Received: by outflank-mailman (input) for mailman id 347713;
 Mon, 13 Jun 2022 04:33:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0blu-0007y5-US; Mon, 13 Jun 2022 04:33:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0blu-0007vQ-R3; Mon, 13 Jun 2022 04:33:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0blu-0007eD-7I; Mon, 13 Jun 2022 04:33:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0blu-0001x0-6q; Mon, 13 Jun 2022 04:33:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gK7i30D0n9tQaGb3k/W23wsQytydW779nLwtUU+0FlY=; b=OpAGYn+oD7U1EdfHYY9PxEVlYH
	0YSKfmsYh3WPyrZ0OUoNoH4ztvCPE1tGEfaQEBjxvaLlL8eqs/SzoIYtaOPHXYsk+izo39Y1hnpWr
	rtgw0JUZ0bZ+6JcXHXRTeCymBWIr8AGzdAge1bGbjnO6qCeUIDVltKdt49A58MPrMakw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171147-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171147: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7a68065eb9cd194cf03f135c9211eeb2d5c4c0a0
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 04:33:50 +0000

flight 171147 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171147/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 171142 pass in 171147
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail pass in 171142
 test-amd64-amd64-xl-credit2   8 xen-boot                   fail pass in 171142

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                7a68065eb9cd194cf03f135c9211eeb2d5c4c0a0
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   20 days
Failing since        170716  2022-05-24 11:12:06 Z   19 days   48 attempts
Testing same since   171142  2022-06-12 02:58:57 Z    1 days    2 attempts

------------------------------------------------------------
2336 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 275185 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 06:05:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 06:05:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347733.574055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dC6-0001jQ-Al; Mon, 13 Jun 2022 06:04:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347733.574055; Mon, 13 Jun 2022 06:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dC6-0001jJ-7t; Mon, 13 Jun 2022 06:04:58 +0000
Received: by outflank-mailman (input) for mailman id 347733;
 Mon, 13 Jun 2022 06:04:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSO8=WU=bombadil.srs.infradead.org=BATV+6ef1cfbcd5439e194ca7+6868+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1o0dC4-0001jD-CX
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 06:04:57 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd865bcc-eade-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 08:04:53 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1o0dBu-001aZ9-At; Mon, 13 Jun 2022 06:04:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd865bcc-eade-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Transfer-Encoding
	:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=b1PoY1uKlcNqy2edTiTsVL/coyRDE9FCOo38ncps/ZM=; b=asqORLv9fFtRmH0mbdAsX62PEY
	2oM2wjfo4NwjYVsp3GvIcXnsJO9DKRj8XCXW1E4suTxpr2g8LP73vdRMEKnbzqlgoyXVfnbMS54bD
	FBTwd3vKQ1myxZku0vIkaOTi9LwMJKMefz6tfktugd6A88h7XFJ8zVDM64ozYM6Q5rABwX6AKb2fa
	8zxSLwNzq9CLLB7mjw/mVXMIKzM5UZnMYu4GyUGLrYpMbVB2c2wQRdURbri0g+CmKZJ4dWe95IH7X
	ApNaap56yK/FeNrP1odSM3lUI36xBVvmY3EYL/xIb/6e2D8T1L0tLE8e0+gIuJwvSLdsxahG1pSYz
	SAgdlhuA==;
Date: Sun, 12 Jun 2022 23:04:46 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, m.szyprowski@samsung.com,
	jgross@suse.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, sstabellini@kernel.org,
	mpe@ellerman.id.au, konrad.wilk@oracle.com, mst@redhat.com,
	jasowang@redhat.com, joe.jin@oracle.com
Subject: Re: [PATCH RFC v1 4/7] swiotlb: to implement io_tlb_high_mem
Message-ID: <YqbTfi/h2P24ynQZ@infradead.org>
References: <20220609005553.30954-1-dongli.zhang@oracle.com>
 <20220609005553.30954-5-dongli.zhang@oracle.com>
 <YqF/sphJj6n+22Si@infradead.org>
 <e6345c27-78fd-be72-9551-1d1fd5db37a4@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e6345c27-78fd-be72-9551-1d1fd5db37a4@oracle.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Fri, Jun 10, 2022 at 02:56:08PM -0700, Dongli Zhang wrote:
> Since this patch file has 200+ lines, would you please help clarify what does
> 'this' indicate?

This indicates that any choice of a different swiotlb pool needs to
be hidden inside of swiotlb.  The DMA mapping API already provides
swiotlb with the addressability requirement for the device.  Similarly,
we already have a SWIOTLB_ANY flag that switches to a 64-bit buffer
by default, which we can change, or replace with a flag that
allocates an additional buffer that is not addressing-limited.
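A minimal sketch of the idea, with made-up names (the enum and function below are invented for illustration; the kernel's actual swiotlb code is structured differently): the caller never names a pool, and the swiotlb layer derives the pool from the device's DMA mask, which the DMA mapping API already supplies.

```c
#include <stdint.h>

/* Illustrative only: POOL_* and swiotlb_pick_pool() are hypothetical.
 * The point is that pool selection is a function of the device's
 * addressing limit, so callers never need to choose a pool. */
enum swiotlb_pool { POOL_DEFAULT, POOL_HIGH };

static enum swiotlb_pool swiotlb_pick_pool(uint64_t dma_mask)
{
    /* A device that can address all 64 bits may use the buffer that is
     * not addressing-limited; everything else uses the default pool. */
    return dma_mask == UINT64_MAX ? POOL_HIGH : POOL_DEFAULT;
}
```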


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 06:49:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 06:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347747.574067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dtR-0006mN-LV; Mon, 13 Jun 2022 06:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347747.574067; Mon, 13 Jun 2022 06:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dtR-0006mG-Im; Mon, 13 Jun 2022 06:49:45 +0000
Received: by outflank-mailman (input) for mailman id 347747;
 Mon, 13 Jun 2022 06:49:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BxIL=WU=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o0dtR-0006mA-4F
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 06:49:45 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 009fb490-eae5-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 08:49:44 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CC6D0D6E;
 Sun, 12 Jun 2022 23:49:41 -0700 (PDT)
Received: from [10.57.9.245] (unknown [10.57.9.245])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A8B893F792;
 Sun, 12 Jun 2022 23:49:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 009fb490-eae5-11ec-8901-93a377f238d6
Message-ID: <b93bd7c0-0bb4-1837-b004-d687f1296db2@arm.com>
Date: Mon, 13 Jun 2022 08:49:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 3/3] xen/console: Fix incorrect format tags for struct tm
 members
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610083358.101412-1-michal.orzel@arm.com>
 <20220610083358.101412-4-michal.orzel@arm.com>
 <b84abd29-2856-a173-55b4-4e642d8a6ee5@suse.com>
 <2ccd52a7-a5b2-c221-b847-ed0c9de2effd@suse.com>
 <295e9c7e-e0de-bbd3-eec4-0864cb2ef086@suse.com>
 <alpine.DEB.2.22.394.2206101630520.756493@ubuntu-linux-20-04-desktop>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <alpine.DEB.2.22.394.2206101630520.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11.06.2022 01:35, Stefano Stabellini wrote:
> On Fri, 10 Jun 2022, Jan Beulich wrote:
>> On 10.06.2022 11:51, Juergen Gross wrote:
>>> On 10.06.22 11:44, Jan Beulich wrote:
>>>> On 10.06.2022 10:33, Michal Orzel wrote:
>>>>> All the members of struct tm are defined as integers but the format tags
>>>>> used in console driver for snprintf wrongly expect unsigned values. Fix
>>>>> the tags to expect integers.
>>>>
>>>> Perhaps do things the other way around - convert field types to unsigned
>>>> unless negative values can be stored there? This would match our general
>>>> aim of using unsigned types when only non-negative values can be held in
>>>> variables / parameters / fields.
>>>
>>> Don't you think keeping struct tm in sync with the Posix definition should
>>> be preferred here?
>>
>> Not necessarily, no. It's not just POSIX which has a (imo bad) habit of
>> using plain "int" even for values which can never go negative.
> 
> I committed the other two patches in the series because already acked
> and straightforward.
> 
> In this case, I think the straightforward/mechanical fix is the one
> Michal suggested in this patch: fixing %u to be %d. We could of course
> consider changing the definition of tm, and there are valid reasons to
> do that as Jan pointed out, but I think this patch is valid as-in
> anyway.
> 
> So I am happy to give my reviewed-by for this version of the patch, and
> we can still consider changing tm to use unsigned if someone feels like
> proposing a patch for that.

It is not true that the members of struct tm cannot go negative.
Examples:
1) tm_year stores the number of years elapsed since 1900.
To express years before 1900, this value must be negative.
2) tm_isdst indicates whether DST is in effect. A negative value (-1)
means that no information is available.

The above rules are taken into account in the gmtime definition (common/time.c).

The signed/unsigned debate also applies to the calendar time definition.
Negative values are used to express times before the epoch.

Xen uses signed s_time_t for system time all over the codebase without
any valid reason other than to be coherent with the POSIX definition of time_t.
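The two cases above can be checked with a small standalone C sketch (the helper below is hypothetical, not the Xen console code): printing the int-typed struct tm members with the signed %d tag handles a negative tm_isdst correctly, whereas %u would print a huge wrapped value for -1.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical helper, not the Xen console code: format a struct tm
 * with signed %d tags, matching the int type of its members.  With %u,
 * tm_isdst == -1 would print as 4294967295 on 32-bit int. */
static void format_tm(char *buf, size_t len, const struct tm *t)
{
    snprintf(buf, len, "%04d-%02d-%02d %02d:%02d:%02d isdst=%d",
             1900 + t->tm_year, t->tm_mon + 1, t->tm_mday,
             t->tm_hour, t->tm_min, t->tm_sec, t->tm_isdst);
}
```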

> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Cheers,
> 
> Stefano

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 06:56:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 06:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347758.574078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dzg-0008RY-Fq; Mon, 13 Jun 2022 06:56:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347758.574078; Mon, 13 Jun 2022 06:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0dzg-0008RR-Cy; Mon, 13 Jun 2022 06:56:12 +0000
Received: by outflank-mailman (input) for mailman id 347758;
 Mon, 13 Jun 2022 06:56:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0dzf-0008RH-GL; Mon, 13 Jun 2022 06:56:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0dzf-0002Gu-AF; Mon, 13 Jun 2022 06:56:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0dzf-0000Nw-3b; Mon, 13 Jun 2022 06:56:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0dzf-0000en-39; Mon, 13 Jun 2022 06:56:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0UIUiQ4IcW1DwodLugj/MbH8YE6t2ZiuYoGzp5O6boo=; b=a7F6NtCiETc++nhvzmemuuph1S
	Doz5uXx8Vot+nie4Au7q0fl1Pq8fVlkZu8EVz19jXdftC91krwzDVj0sCSeX12KnqOsaH7pnvGF5x
	RBC6U2CbPu/NYL0w8luqmZehejIfuWrKRAFdHCKDrhBKnQEJB0acYHsMkCzSjORQTYsU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171152: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=92288f433485e84863047fae698614c6785869d1
X-Osstest-Versions-That:
    ovmf=b09ada6edc7f3f28d3b4f2ef09852ebd39f17920
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 06:56:11 +0000

flight 171152 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171152/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 92288f433485e84863047fae698614c6785869d1
baseline version:
 ovmf                 b09ada6edc7f3f28d3b4f2ef09852ebd39f17920

Last test of basis   171150  2022-06-13 01:41:51 Z    0 days
Testing same since   171152  2022-06-13 03:52:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pedro Falcato <pedro.falcato@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b09ada6edc..92288f4334  92288f433485e84863047fae698614c6785869d1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 07:04:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 07:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347769.574089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0e7h-0001ht-BU; Mon, 13 Jun 2022 07:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347769.574089; Mon, 13 Jun 2022 07:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0e7h-0001hm-73; Mon, 13 Jun 2022 07:04:29 +0000
Received: by outflank-mailman (input) for mailman id 347769;
 Mon, 13 Jun 2022 07:04:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T645=WU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o0e7f-0001hg-NU
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 07:04:27 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on060d.outbound.protection.outlook.com
 [2a01:111:f400:fe05::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fbedd62-eae7-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 09:04:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0402MB3611.eurprd04.prod.outlook.com (2603:10a6:7:87::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.19; Mon, 13 Jun
 2022 07:04:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 07:04:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fbedd62-eae7-11ec-8901-93a377f238d6
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mSUjFW+Dgl7GdLDG/L95oXLVr5abz9yD6y7oiiwXr+jar74F5U5nJldN0fLWah5S2rOMuxe5ghbAwBChpg9IXTxVh7/acZO8mhfBmQhHqXWgaiLnvkMJHFvWymG2FjZJ/uMvAjGUgyt2UQh0eUjdGM0J8B8IZwf5+1ZTDbdaB0jGb8axK35mrcGDIydtOi69FOx800mWYjm7g5eyi9tfn9q2FJAsKj2AMUWUbMVNCMxwY4feS1vV7JLG9ij2rYHZX4QWbcmFI8qMBSCWCqZWcXvGHHB7XWq/m0HmGxmX7+xK78L69r24gi0rIsd525XFHYgM9yQILi4oNPyEHYETUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=szkmJEdVyBE6hyeBpHT+9SbE+OaQlTpnUNFdqDV7kXM=;
 b=OsqdZvp7VTNfy4nN9EGbqueEJ/NnVj+AvMsQB+cUXSA7Vlt7ll494ddpiteuX7+or3agTD4vXFMPSWTFCYjQAjXkCT99aHc/kBH+3BqMKILmkMYUPU+DNBFJR3WTS1b6NUVAj7C1mmN35FHzi9JXI7dz/3gMNQDfQY3JzLMtHMwM3wmuJVUvqPkjnVzd7Abxd0OOA2D8Iu60/3qL5lEqeFYQ2f7aTUjToH+NNKEcZzZIFnIMFl3xQ4CRqMUWHFXAV77VV9bF1Q2QY9sf2LsC+TpwuOMWCAAinbDtqbSt/sNtH/CeWlHAC6hUjg/GKmg9DajEL6t5m26rNZ/wEhAJyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=szkmJEdVyBE6hyeBpHT+9SbE+OaQlTpnUNFdqDV7kXM=;
 b=qxIXJOwe3WrntSeLfjivvwsGEFM6Wa3g43SQOdVY3kDjUp8yNT+OQ1fri/1eksjlJxGm7oSrtdS+d/dgLpL15vppE+HhduVCZJMuN0TP8/9N5PFPUKGQ3UiFuWGlOUO/Asm6zNPBe/Ky9TGMxuipbK83GI2B5+toQI0AyrzkbuvtZtE4KDRtYgKXAoU+VAplozDC0CN2qBlSVAnzqabltOr21gO+9jGqBqLexF0EMqpuv7fdKk+3ixuSS2BpEkyhJ7llkyUjtAcknECaHolwjv4OCsvuhQijWSeIlf0FeObZdXbs1Fcp3IsleGhwcz1dEAE2IlqBqj0h9lwJnvrYjw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <406ff587-8faf-f233-c06b-b581b0fe9161@suse.com>
Date: Mon, 13 Jun 2022 09:04:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] add more MISRA C rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George.Dunlap@citrix.com, roger.pau@citrix.com, Artem_Mygaiev@epam.com,
 Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com,
 xen-devel@lists.xenproject.org
References: <alpine.DEB.2.22.394.2206091748210.756493@ubuntu-linux-20-04-desktop>
 <c61f607b-bfdd-3162-7b26-b4681b4cce59@suse.com>
 <alpine.DEB.2.22.394.2206101420290.756493@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206101420290.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR05CA0011.eurprd05.prod.outlook.com
 (2603:10a6:20b:311::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 46feb95a-2c39-4a21-4db1-08da4d0af1f2
X-MS-TrafficTypeDiagnostic: HE1PR0402MB3611:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0402MB3611042F44882476F313D7E8B3AB9@HE1PR0402MB3611.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cZu3mkIIzQoOX4G5vIFR4Dfi10Fkcr8eR4lu3LH9b6QZzpzFy7a5QO9lbaJNNLr1kWeV+r54+B4t6VEetS7A9SwLtsRJipzZQSTJB23Hv/tT5J6OwJU7pwRI2CBGDV4ntO0Z543cFg/Qq6Z8NGVJu0Gi4hCvP6E1Cd3zSZSTpfFjzwL3f3E65izgXRUP3NsPpmOvI7E1VVal2y094Zh79allxSNDtpsvAXsdJqU48kVgPgeRIro10xpM+Bnsp5ulxuTBx8lMqQZtAEU+aPRDnlpaf2a70ldjrFyB/Be1D8xHGoTP5KdhSGjg1QrapPnd/7W2kCNZP87XAXUR82La5nw/C9rQM0ELo/hdjsLc16s+A3g7iMpyX3ZC0o9Z4uXPY+oncbGMnF0owJ1rdSTByT4h8VhhcZRy93eOqyhHIqoIoKRIgxd0nlUxZV6qkjwaNNlDFWMiYmPfo8k5Ais+SutyXlNjsF3YjM30uzX5Pygw+TxRFdQhNMZ7dnjkbQDT4HiT5qf/J+DpogY+v6dCt8n2yYqIQLlxjuRG/M1GqAjDZD3WxPU0vcXryBHHHoe2/2RKzXAUateZmOIuz8EE849kdnQ56D+mTOEjadSB10aXrNASiroQoQsA4h+3MFYgWeHYH14L/1wp9uNWqIdLyEfEA9ys/ZFfiWeehH/pedApUWsQiVfKhienwlE+d3NP6pgutpWXRJOa3RXOcXOQkX1344uTvL7ABiNwk+etjdp76DZ8/z620c4SCADkGUTp8oHWgBup4sN99o2Zw1BAWFgAwbue/JLY03coJk8HkpZ8GwtKgw2VcZ97I8lLEsa0
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(38100700002)(6512007)(26005)(6916009)(53546011)(86362001)(186003)(2616005)(8936002)(66556008)(83380400001)(66476007)(5660300002)(66946007)(316002)(31696002)(4326008)(31686004)(8676002)(36756003)(2906002)(4744005)(508600001)(6506007)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UC8rSWQrTyt6OE1laWFKYTRmbGpnRHJ4UXBHMFVkMW1MUmtrczJ6TjFDRU9w?=
 =?utf-8?B?RmVqK1JqWXpzdWZtTjJXeitReGFlSFNwOFJrZFhycDNoRU54Q2lyNjJOZVRJ?=
 =?utf-8?B?T2JmOVZyckhsSHQrV2V3S0RHYWdTNDhGT1E1ZEtya3FYTzZoVVlOTEFsTTlT?=
 =?utf-8?B?VGdicGlXcmxkZDd5blZvbmVxb0FnWTRIamxsNFZvdUxKOHMvbUxrWmdMY2Vv?=
 =?utf-8?B?MHdESGFhYy9OS1BkeWsvSjdlSjZHbHd2NHk1YU9SSU5uUGFFMmRuRk83eTQz?=
 =?utf-8?B?dEJRd0g5VVUxbWhJamRVeUY1YnZFM1doL1BPMlUyck1CSiswTVdGcldSeXBz?=
 =?utf-8?B?dGxVQm5OdHJZc1hVQnRsYkRoYzNlQnY2VEdUaldaaTBqWGNDTk5pODBGbG1Y?=
 =?utf-8?B?aWR5UVQ1L2VMYlR3akEvSkY5U01xdmNVMkh4emRpMkNXZDVJRGEwV2xyNTRj?=
 =?utf-8?B?cHpleGJCRVlpM3ZuVFNWbVJFeVJqbGZsTDJVZnMxbThnaUlCaEROMElJVWpC?=
 =?utf-8?B?L1NwNEtOU3hNS2g3VDIrSW9LOHBuaWVjL3lyNllSQms1M2NWSFlWNGpXZTd5?=
 =?utf-8?B?bkx4L3YzRUhsMDZPQkRJaDZHM3NpdHUxMWYyRFAxSjdDRzNpUk1nNVhqQWVS?=
 =?utf-8?B?dUx5U1BmUExGS25hRFFEd0NSODVzK2x4T0F6YWxPYmJFNUdrUU1xZVkrS2Ir?=
 =?utf-8?B?VnArdTBsUGdlTHozTmdkSEhPajYrREN3MC9MdHJjK29wNW5XdzExRFVsOFUr?=
 =?utf-8?B?MG1ndTlORlFIUjROZ2ZBUENMSWcyOGpGNlBHOFVvM0pYNU1DTWVndW0vbWRz?=
 =?utf-8?B?WnFxa2owVWlQejJ5UVV4OUtDNTQrWFNDRkQyeGVUcXpONzh3VTcyeVZpcUpE?=
 =?utf-8?B?M3hvUnVrNWpkNnJnVHh2bFJHVllCb2xVem5SYnVsRFluL1F6blVHUGlzK3NM?=
 =?utf-8?B?UEVEWGFsdkNpNVRQNk1OTXhLcDExMi85NW1RZTBqMjlLdHhKWGZNTi9DK2lL?=
 =?utf-8?B?ODJPeVJ4K01vQTdhelV3MFc3QVB2UjNDN0pmay9YR2JHNC9ieS9OWHFqaXo1?=
 =?utf-8?B?SDhsU1RDeUVFTlFzTjFhQ0tMNnJ5SU1tOW16ck5HWGtGTHB6NVRXNGszTXBZ?=
 =?utf-8?B?MURYSXpSQ09GR0Z2a2JoOGI5MWFEVDgyZnZYbG1oRHI5ZisvQVdyWkluVktE?=
 =?utf-8?B?cElyLzJjY21Ka1Y0U0NOeGR1VWRKN3cvbXRpamxoVDZyY0VGWEErMlpZcnZr?=
 =?utf-8?B?WDNPSW9VNm1hdFg5QitoQXVhaUQzUGRMZm4reXVUN0w5OGc5TjRUbnFHTks5?=
 =?utf-8?B?aUcxRVJ4TTJVWmpHdzk4bVBNQzlMOWhzZ01XUjFXL1lGMlBHWmo3R3NhZCsv?=
 =?utf-8?B?Q1RRUkRSOWZEMVZQdkNmQVNray84cHZlV0w0eS9mQ2xmUzlKU3M0RHFwWStL?=
 =?utf-8?B?NVRaZUdtM1o0VllrWm5UVGx1RWxFazAwY2xKcTN4V0Qxc2JkSTJyK3dDK1dP?=
 =?utf-8?B?dHE4QUJEMjltRHVQdmRiOXNqZm9kNWZldk8vdnRMTUp3MEg0SW5lTjdzM3Vp?=
 =?utf-8?B?cU13dWVQMzVrY1N2NldYbHVCRzNLazRuSStYK09WOFJsOWNPWUZ3eVRzV2lO?=
 =?utf-8?B?RDUxOUpDZytkOXBTYnpKTmYzbHdDaGtTMEVhOXRqdXdhSngzeWluY2pGeDlu?=
 =?utf-8?B?eU1EdENJaDk0WE8wR0MxMnpUSWZJTkV0Q3FpMmppZk9wQUR5TmtlUlRNRUUz?=
 =?utf-8?B?eXdVMGZ3SldCQXBkd2g5VldRUm9VMjdnR0p2b3FtM0FtMHN4SU1Ecy9rRHlY?=
 =?utf-8?B?ekU3REtYWjhUeGxOQ2V4SFcvSWpTemU1bTk1Q2haNC9SMU55MXRZc2N6QUVh?=
 =?utf-8?B?NDFlVWRFTTJsNllYbzlvUXNIRlhzRWFJejVjM1Z4RkJLblZ6QmkyeTNZaHFM?=
 =?utf-8?B?RUUrdXE4UWRyOUtWWnJvb1Q0bTAvTEhVTnR1ZFFBWmNSc20wbHptRGkxQytm?=
 =?utf-8?B?SHk3b1QxQkFZZEVNZXFoREh6dEs2WDl2b3dveXE0TzkzN0NUbHo1WFZlL1hy?=
 =?utf-8?B?dzE1SThpSUxPNTVyOE5XVVhoRzRsTDI4SG5lQTZzTG4rdWtTL3JFZE05RFF3?=
 =?utf-8?B?VXEzeUI4NVA2OUZYWHBxRHZyOFpaTGhxcmxUVEUvdEgzbWxSenBwam94ekpx?=
 =?utf-8?B?bS80WXBGMHJTZ3J4dUU2M2xnQ0tPbFVFRElRWFlxM3JhQjB0REF6TnVGYXBE?=
 =?utf-8?B?VkROY1FPMHZZbm92QmJvN2IvRUpXNjJvRTBLOUl5NkZFdWJGQ3BYdkJtR1Yx?=
 =?utf-8?B?d2ovUzUvR0xTVVpNNVVUdUlVM0ViUVgyUFNCNTNXdGJwQms4T3U3QT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 46feb95a-2c39-4a21-4db1-08da4d0af1f2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 07:04:23.2421
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +TQ/HnVj4KknzsR8nCJCnHUtjE918+mF3/RvYFVPpdu9VImears5XyOvxb+BUmfCwPtxvNdk2Kxb27ZhFBdXmQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0402MB3611

On 10.06.2022 23:24, Stefano Stabellini wrote:
> On Fri, 10 Jun 2022, Jan Beulich wrote:
>> On 10.06.2022 02:48, Stefano Stabellini wrote:
>>> +   * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
>>> +     - Required
>>> +     - A loop counter shall not have essentially floating type
>>
>> This looks to be missing "point"?
> 
> I am not sure what you mean. Do you mean "floating-point" instead of
> "floating" ?
> 
> This is the actual headline for Rule 14.1. MISRA defines "Essential
> types" (8.10.2), so in this case it is referring to the type
> "essentially floating", which includes float, double and long double.

Yes, I mean "floating-point". But now that I look more closely I actually
notice that the C standard also uses the term "floating type" in a number
of instances. So perhaps it's just me who considers this odd.
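For anyone unfamiliar with the rule, here is a minimal sketch (mine, not from the patch) of what Rule 14.1 forbids and why. The function names are made up for illustration:

```c
#include <assert.h>

/* Non-compliant under Rule 14.1: the loop counter has essentially
 * floating type.  0.1 is not exactly representable in binary floating
 * point, so the accumulated counter never compares equal to 1.0f and
 * the trip count is not the intuitively expected 10 (the n < 100 term
 * is only there to bound the runaway loop). */
static int float_counter(void)
{
    int n = 0;

    for ( float f = 0.0f; f != 1.0f && n < 100; f += 0.1f )
        n++;

    return n;
}

/* Compliant: integer loop counter; any floating value is derived
 * from it inside the body instead. */
static int int_counter(void)
{
    int n = 0;

    for ( int i = 0; i < 10; i++ )
    {
        float f = i * 0.1f; /* use f inside the body as needed */

        (void)f;
        n++;
    }

    return n;
}
```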

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 07:30:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 07:30:26 +0000
Message-ID: <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
Date: Mon, 13 Jun 2022 09:30:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220610150651.29933-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2022 17:06, Roger Pau Monne wrote:
> Prevent dropping console output from the hardware domain, since it's
> likely important to have all the output if the boot fails without
> having to resort to sync_console (which also affects the output from
> other guests).
> 
> Do so by pairing the console_serial_puts() with
> serial_{start,end}_log_everything(), so that no output is dropped.

While I can see the goal, why would Dom0 output be (effectively) more
important than Xen's own (which isn't "forced")? And since this aims at
boot output only, wouldn't you want to stop the overriding once boot
has completed (for which, if I'm not mistaken, we don't really get any
signal from Dom0)? And even during boot I'm not convinced we'd want to
let everything through, but perhaps just Dom0's kernel messages?

I'm also a little puzzled by the sync_console argument: If boot
fails, other guests aren't really involved yet, are they?

Finally, what about (if so configured) output from a Xenstore
domain? That's fairly important as well, I'd say.
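For readers without the patch in front of them, the pairing the
description refers to can be sketched roughly like this. These are
simplified stand-ins, not Xen's actual implementation; only the
function names come from the patch description:

```c
#include <assert.h>

/* Simplified stand-ins: serial_{start,end}_log_everything() bracket a
 * write so that the rate limiting which would normally drop output is
 * bypassed for that write.  The depth counter allows nesting. */
static int log_everything_depth;
static unsigned int chars_dropped;

static void serial_start_log_everything(void)
{
    log_everything_depth++;
}

static void serial_end_log_everything(void)
{
    log_everything_depth--;
}

/* Returns 1 if the character was emitted, 0 if it was dropped. */
static int serial_putc(char c, int tx_buffer_full)
{
    if ( tx_buffer_full && !log_everything_depth )
    {
        chars_dropped++;
        return 0;
    }

    /* Real code would queue the character or synchronously flush. */
    (void)c;
    return 1;
}
```

With the bracketing in place, output that would otherwise be dropped
when the transmit buffer is full is let through instead.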

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 07:36:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 07:36:24 +0000
Message-ID: <2c4b41e4-7381-7424-de72-43f55c448665@suse.com>
Date: Mon, 13 Jun 2022 09:36:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2] add more MISRA C rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George.Dunlap@citrix.com, roger.pau@citrix.com, Artem_Mygaiev@epam.com,
 Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20220610212755.1051640-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220610212755.1051640-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2022 23:27, Stefano Stabellini wrote:
> +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> +     - Required
> +     - An identifier declared in an inner scope shall not hide an
> +       identifier declared in an outer scope
> +     - Using macros as macro parameters at invocation time is allowed
> +       even if both macros use identically named local variables, e.g.
> +       max_t(var0, min_t(var1, var2))

Nit: I would have been okay with the prior use of MIN() and MAX() in this
example, but now that you have switched to min_t() / max_t() I think the
example also wants to match our macros of those names. Hence I'd suggest
that either you switch to using min() / max() (which also use local
variables), or you add the missing "type" arguments in both macro
invocations.
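As an illustration of the point, here are sketch definitions in the
style of the Linux/Xen min_t()/max_t() macros (not the actual ones;
they use GCC's statement-expression extension as the real macros do).
Each expansion declares identically named locals, so nesting one
invocation inside another hides the outer locals, which is exactly the
Rule 5.3 deviation being documented:

```c
#include <assert.h>

/* Sketch definitions: each expansion declares local variables with the
 * same names (x_, y_), so the inner expansion's locals hide the outer
 * expansion's when the macros are nested. */
#define min_t(type, x, y) ({ \
    type x_ = (x);           \
    type y_ = (y);           \
    x_ < y_ ? x_ : y_;       \
})

#define max_t(type, x, y) ({ \
    type x_ = (x);           \
    type y_ = (y);           \
    x_ > y_ ? x_ : y_;       \
})
```

Nested use with the "type" arguments present, e.g.
max_t(unsigned int, a, min_t(unsigned int, b, c)), still evaluates
correctly despite the shadowing, because each x_/y_ pair lives in its
own statement-expression scope.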

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 07:40:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 07:40:27 +0000
Date: Mon, 13 Jun 2022 09:40:09 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Message-ID: <Yqbp2Ud/aNlkLhby@Air-de-Roger>
References: <20220610160050.24221-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220610160050.24221-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR1P264CA0036.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:2f::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5aebb0be-b870-4d23-2a9a-08da4d0ff43b
X-MS-TrafficTypeDiagnostic: PH0PR03MB6573:EE_
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5aebb0be-b870-4d23-2a9a-08da4d0ff43b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 07:40:14.6224
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6573

On Fri, Jun 10, 2022 at 05:00:50PM +0100, Andrew Cooper wrote:
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/data-operand-independent-timing-isa-guidance.html
> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> The SDM also lists
> 
>   #define  ARCH_CAPS_OVERCLOCKING_STATUS      (_AC(1, ULL) << 23)
> 
> but I've got no idea what this is, nor the index of MSR_OVERCLOCKING_STATUS
> which is the thing allegedly enumerated by this.
> ---
>  xen/arch/x86/include/asm/msr-index.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
> index 6c250bfcadad..781584953654 100644
> --- a/xen/arch/x86/include/asm/msr-index.h
> +++ b/xen/arch/x86/include/asm/msr-index.h
> @@ -51,6 +51,9 @@
>  #define  PPIN_ENABLE                        (_AC(1, ULL) <<  1)
>  #define MSR_PPIN                            0x0000004f
>  
> +#define MSR_MISC_PACKAGE_CTRL               0x000000bc

Not sure it's worth it, but Intel names this MISC_PACKAGE_CTLS rather
than CTRL, and the same applies to the matching bit below in ARCH_CAPABILITIES.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 07:46:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 07:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347808.574133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0emP-0008I1-FB; Mon, 13 Jun 2022 07:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347808.574133; Mon, 13 Jun 2022 07:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0emP-0008Hu-C7; Mon, 13 Jun 2022 07:46:33 +0000
Received: by outflank-mailman (input) for mailman id 347808;
 Mon, 13 Jun 2022 07:46:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T645=WU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o0emO-0008Ho-Kw
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 07:46:32 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f077e578-eaec-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 09:46:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB6359.eurprd04.prod.outlook.com (2603:10a6:20b:fc::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.19; Mon, 13 Jun
 2022 07:46:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 07:46:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f077e578-eaec-11ec-8901-93a377f238d6
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6f1b9d99-290a-dc29-c953-3b38516995b9@suse.com>
Date: Mon, 13 Jun 2022 09:46:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20220610160050.24221-1-andrew.cooper3@citrix.com>
 <46e8251f-6a76-a45a-54db-c15a39b2ff68@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <46e8251f-6a76-a45a-54db-c15a39b2ff68@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6P194CA0006.EURP194.PROD.OUTLOOK.COM
 (2603:10a6:209:90::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3f9c97b8-243b-40d8-a751-08da4d10d331
X-MS-TrafficTypeDiagnostic: AM6PR04MB6359:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f9c97b8-243b-40d8-a751-08da4d10d331
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 07:46:28.5971
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB6359

On 10.06.2022 19:13, Andrew Cooper wrote:
> On 10/06/2022 17:00, Andrew Cooper wrote:
>> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/data-operand-independent-timing-isa-guidance.html
>> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> The SDM also lists
>>
>>   #define  ARCH_CAPS_OVERCLOCKING_STATUS      (_AC(1, ULL) << 23)
>>
>> but I've got no idea what this is, nor the index of MSR_OVERCLOCKING_STATUS
>> which is the thing allegedly enumerated by this.
> 
> 
> Found it.  There's an OVER{C}CLOCKING typo in the SDM.  It's MSR 0x195
> and new in AlderLake it seems.

With or without bits for it added:
Reviewed-by: Jan Beulich <jbeulich@suse.com>
I'd like to note though that I can't spot such a spelling mistake in version
077 of the SDM (vol 4).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:03:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347822.574144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0f2u-0003CL-9L; Mon, 13 Jun 2022 08:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347822.574144; Mon, 13 Jun 2022 08:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0f2u-0003CE-5i; Mon, 13 Jun 2022 08:03:36 +0000
Received: by outflank-mailman (input) for mailman id 347822;
 Mon, 13 Jun 2022 08:03:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1H1N=WU=citrix.com=prvs=1561b7a3e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o0f2s-0003C8-DN
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:03:34 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50994206-eaef-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 10:03:32 +0200 (CEST)
Received: from mail-dm6nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Jun 2022 04:03:26 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM8PR03MB6216.namprd03.prod.outlook.com (2603:10b6:8:27::8) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.13; Mon, 13 Jun 2022 08:03:24 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 08:03:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50994206-eaef-11ec-bd2c-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.73.41
X-IronPort-MID: 73284651
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,296,1647316800"; 
   d="scan'208";a="73284651"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Thread-Topic: [PATCH] x86/spec-ctrl: More MSR_ARCH_CAPS enumerations
Thread-Index: AQHYfONZpKeg248Rt0um2v0bci+Cz61I4Q+AgAQYjICAAAS6AA==
Date: Mon, 13 Jun 2022 08:03:24 +0000
Message-ID: <e61ec318-c61d-37d0-7bf6-3fe804a1f9ba@citrix.com>
References: <20220610160050.24221-1-andrew.cooper3@citrix.com>
 <46e8251f-6a76-a45a-54db-c15a39b2ff68@citrix.com>
 <6f1b9d99-290a-dc29-c953-3b38516995b9@suse.com>
In-Reply-To: <6f1b9d99-290a-dc29-c953-3b38516995b9@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 609953d3-4eb3-4841-fa54-08da4d1330c9
x-ms-traffictypediagnostic: DM8PR03MB6216:EE_
Content-Type: text/plain; charset="utf-8"
Content-ID: <731679637338244FA7D91F5CBA01AF8B@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 609953d3-4eb3-4841-fa54-08da4d1330c9
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Jun 2022 08:03:24.4450
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: necZpJBv6mhPw92YtBiIRU/eyGJtpLCv6fHiOn7JKSZLDqi9+nXWQIs73dT8VUfPnwkx9F7lQ7uJgrXtxAlI8LaWCl6UNQoWtw262oxCbJQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6216

On 13/06/2022 08:46, Jan Beulich wrote:
> On 10.06.2022 19:13, Andrew Cooper wrote:
>> On 10/06/2022 17:00, Andrew Cooper wrote:
>>> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/data-operand-independent-timing-isa-guidance.html
>>> https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>>
>>> The SDM also lists
>>>
>>>   #define  ARCH_CAPS_OVERCLOCKING_STATUS      (_AC(1, ULL) << 23)
>>>
>>> but I've got no idea what this is, nor the index of MSR_OVERCLOCKING_STATUS
>>> which is the thing allegedly enumerated by this.
>>
>> Found it.  There's an OVER{C}CLOCKING typo in the SDM.  It's MSR 0x195
>> and new in AlderLake it seems.
> With or without bits for it added
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> I'd like to note though that I can't spot such a spelling mistake in version
> 077 of the SDM (vol 4).

That's because it's surprisingly hard to deliberately make a typo...

It was OVER LOCKING i.e. no c's rather than 2.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:22:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347834.574155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fKP-0002OW-RL; Mon, 13 Jun 2022 08:21:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347834.574155; Mon, 13 Jun 2022 08:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fKP-0002O6-LB; Mon, 13 Jun 2022 08:21:41 +0000
Received: by outflank-mailman (input) for mailman id 347834;
 Mon, 13 Jun 2022 08:21:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i6zz=WU=citrix.com=prvs=156a1e8c4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o0fKN-0002O0-Kj
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:21:39 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7a28907-eaf1-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 10:21:38 +0200 (CEST)
Received: from mail-dm6nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Jun 2022 04:21:35 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5583.namprd03.prod.outlook.com (2603:10b6:a03:28e::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Mon, 13 Jun
 2022 08:21:34 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 08:21:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7a28907-eaf1-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655108498;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=R9fY8n18h6g6SC38J+5bz83yfpSa/d201CTWahKP9Y8=;
  b=BLBbOO35NZaDpAZZI4XNjJpcrZ9JfI9JO+PJxogWiqZb+imw1LYblYuD
   5E6lls4N+AnGv9JT9NgAFeez1pMJ4eHNcXteRxhfBEx091P2KcJlCuCl3
   UUmwHArl1ZoMCcNkY6Ms3YO9dvgNWIaMGyveLKwFsFGoAQAlIgLVBn7qY
   Q=;
X-IronPort-RemoteIP: 104.47.57.171
X-IronPort-MID: 73286110
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,296,1647316800"; 
   d="scan'208";a="73286110"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IjaUUD3DZ18yWJteyM281O8OMxrAqjG0T+AIv7VQocfn9dwBcdatAp/UWMCFfTV+FS5/aNpbfubNnuEP74G1S6RXn18ITFoo/6jpzQBU1iNk3AS3P0zdB5wbh+X3qCaZ4A1TlMTZMWzA4RhzMuqZpPE6mDVKA4MadIx0Qug6sFJWvXbwPg+muAp3Ae2aC6BHWGD9olhYhCLmsJ1wCehRiHY9NFxdt3AD17MSPDpmYxorBZYb5yPD7kBUURduFVnPKRL6T1e5pnjHFsQNNrAcElWyS61NCWgxwwQnarAbGV7op2r6xQGD+Y4mbdhAcEN5WFziIBMf2vjDoFf5LlXB0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6oKQVHWQuTJ1uGEX+rVQcqENNtypqDEfE7TBGxyKO3I=;
 b=RyUQshTsIryXXWhqHnNJs1NdS2I4VxIMPmzFU9EDSFrhqVgxV3EXfo+fqI+073ccHTXOkuJkyi4X7NBaOGh9YxmE3alQ8ynmF9SFcjCnDKXNKNBke9TP1Ed8bmLXPHxod54an9q3PbRiONMwAujSLs7vwUuvVcl/rkxjpJJe4GDld2242Hz7dUiFWQxP0zz2q9kOGhJLbCfS3ClS2pthM2e4cC0YSBMIj6bccl4liIBkNyt9o4KZo/B5+7SS0ItOec8ZymF0vfML06qTgC3CdPVhGYRUx+2ysV1gKd1acopFn9Nfx5vKeh7ybc59bJ6l8/Y7WircJqSY7lnJcCJIGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6oKQVHWQuTJ1uGEX+rVQcqENNtypqDEfE7TBGxyKO3I=;
 b=oj4RsBNEfAK15mDo4/0QOu5euX7OIArmmjKajE+L9/qYkEb5JUG8/j6E5MRA4zu0eZF817KqhJguWeDcmc8T2U7AbDoFpvWaIwMki2pcom9LyvQBBYo6fmjTVODIGeVNDhPmkb3EKQakYM9PwLY8GONDucK5XyKwTS4dcnGM6tI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 13 Jun 2022 10:21:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqbziQGizoNX7YFr@Air-de-Roger>
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
X-ClientProxiedBy: PR0P264CA0153.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1b::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e01d4043-bb09-46fd-ffff-08da4d15ba1a
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5583:EE_
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e01d4043-bb09-46fd-ffff-08da4d15ba1a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 08:21:34.0517
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BkudsgFTz0+OBeqEMwxanKkpTVIxPsAWK1lTBDZbai+uZ6qPUfOy/bdDxxnRw9xHg0EKokT/C73efhm53l/hpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5583

On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> On 10.06.2022 17:06, Roger Pau Monne wrote:
> > Prevent dropping console output from the hardware domain, since it's
> > likely important to have all the output if the boot fails without
> > having to resort to sync_console (which also affects the output from
> > other guests).
> > 
> > Do so by pairing the console_serial_puts() with
> > serial_{start,end}_log_everything(), so that no output is dropped.
> 
> While I can see the goal, why would Dom0 output be (effectively) more
> important than Xen's own one (which isn't "forced")? And with this
> aiming at boot output only, wouldn't you want to stop the overriding
> once boot has completed (of which, if I'm not mistaken, we don't
> really have any signal coming from Dom0)? And even during boot I'm
> not convinced we'd want to let through everything, but perhaps just
> Dom0's kernel messages?

I normally use sync_console on all the boxes I'm doing dev work on, so
this request is something that came up internally.

Didn't realize Xen output wasn't forced; since we already have rate
limiting based on log levels, I was assuming that non-ratelimited
messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
triggered) output shouldn't be rate limited either.

> I'm also a little puzzled by the sync_console argument: If boot
> fails, other guests aren't really involved yet, are they?

No, but it would be useful to get all relevant info without having to
ask users to use sync_console.

> Finally, what about (if such configured) output from a Xenstore
> domain? That's kind of importantish as well, I'd say.

I would be less inclined to do so.  Xenstore domains can use a regular
PV console, which shouldn't be affected by the rate limiting applied to
the serial console.  Also, that would give the Xenstore domain a way to
trigger DoS attacks.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:29:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:29:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347845.574166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fSF-0003HF-L7; Mon, 13 Jun 2022 08:29:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347845.574166; Mon, 13 Jun 2022 08:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fSF-0003H8-Hp; Mon, 13 Jun 2022 08:29:47 +0000
Received: by outflank-mailman (input) for mailman id 347845;
 Mon, 13 Jun 2022 08:29:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T645=WU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o0fSE-0003H2-4g
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:29:46 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0604.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa4d6235-eaf2-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 10:29:45 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB4679.eurprd04.prod.outlook.com (2603:10a6:20b:15::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.19; Mon, 13 Jun
 2022 08:29:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 08:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa4d6235-eaf2-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JOHILgFexGHd8iy68NGMdUWKN3mFrPbVPk080WBSKW53X85fQZFXZLhQoIvwX48T0BIRlT1nl7snIGV5E1CHN3+KBt/tGj40jfvRRhNlCy1UXJadRsI8w+/9VGr0SXyyLvLGMyAPVSJLcicTC/2GFCaYxURO1TC+lYwGYY4pfM/cFHhjrclBTrRd26z/MUFQ3tAGjZsPuCk5KnNYH1L1Rx2UDPMC5q2QeHhacCDD1l8Zv5SBrvzqocrsBUgEkbRLdGpLa7arQb2MOF/Wt4eD/bE03S5WCVMUQsA+F/+vGHyCyKnO9icISMr3xbpTUFJgT5DWVOUHWsfllOTb/BSgsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AYcrEut4kluYSceCBLqOPkSd1e7yrXpHrrbnXH3Oh2k=;
 b=MybAA4aKdZRWXnvomGruCDoCfftwudYQN/SRaWZdPJiGFS+msbYGFNq9Mo9La/skdVMZMwmopUiyjNTzAsLHhZlcpsoPOd0lfaKGerhb45piTlp4g4uvlLMr6KrNov2aoj9TEvk/sl6uHmXjHv3knHS6AjsR9Y0ktm5IVVyJCvEXug2GOT2x32GVfHBsF885gEX79oaO42Cq/zmOPPcUA8tg/EjIoWMEdKnzzDvTc2IXTnyLJMQwP87bAR0xSuRhoZyvsufeT/fgOxkoSUr3n5jNanZ+lUXIBHN908NSkyyKhP5dc+n3YfJ4aNsb/YlpeIcguHas1DSBHLxftwdqDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AYcrEut4kluYSceCBLqOPkSd1e7yrXpHrrbnXH3Oh2k=;
 b=pYxaUwTHOYEOg9og+0aElXDtEaWeXi6SjIL5eYvgx064ohxX6GOr0N6z2ZnBoA4N5Ce+aqKe4AwH+MpCm+mTo7JWgibRLY7mJS9YsDOINBPa3I/ptoLp4DACr7Yvy7lM76xCQR57PKJZIkyJu1TfGETYBVdOhj0jyzb2xApnI5FY+wXjEg1sCT766g5nnimR/1JkEIbr0XIrpB//e+/wCMGNCchh9sMnKqC3lVmCfN+GDFLB6O39A/NyWDPHndTEola0I71A9j46Atnrt7SSQ3iWqQ/akVR0QL9CDqNoYUB+0PDoNPTz/Ej62jYVQx8qSY0nh7FzWmWzFDxoiKskrg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
Date: Mon, 13 Jun 2022 10:29:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqbziQGizoNX7YFr@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM5PR0201CA0016.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eb343808-4d43-4a31-3e48-08da4d16dd04
X-MS-TrafficTypeDiagnostic: AM6PR04MB4679:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eb343808-4d43-4a31-3e48-08da4d16dd04
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 08:29:42.0414
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HDfVAtiKgAeisMEjjXGWKEyLq4fB2B3PMc5oXZMhQemH+eNOb58fLdP1vhKN/fVx5En02UFPVJD7Opag1mBNnw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB4679

On 13.06.2022 10:21, Roger Pau Monné wrote:
> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>> Prevent dropping console output from the hardware domain, since it's
>>> likely important to have all the output if the boot fails without
>>> having to resort to sync_console (which also affects the output from
>>> other guests).
>>>
>>> Do so by pairing the console_serial_puts() with
>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>
>> While I can see the goal, why would Dom0 output be (effectively) more
>> important than Xen's own one (which isn't "forced")? And with this
>> aiming at boot output only, wouldn't you want to stop the overriding
>> once boot has completed (of which, if I'm not mistaken, we don't
>> really have any signal coming from Dom0)? And even during boot I'm
>> not convinced we'd want to let through everything, but perhaps just
>> Dom0's kernel messages?
> 
> I normally use sync_console on all the boxes I'm doing dev work on, so
> this request is something that came up internally.
> 
> Didn't realize Xen output wasn't forced; since we already have rate
> limiting based on log levels, I was assuming that non-ratelimited
> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> triggered) output shouldn't be rate limited either.

Which would raise the question of why we have log levels for non-guest
messages.

>> Finally, what about (if such configured) output from a Xenstore
>> domain? That's kind of importantish as well, I'd say.
> 
> I would be less inclined to do so.  Xenstore domains can use a regular
> PV console, which shouldn't be affected by the rate limiting applied to
> the serial console.

Fair point.

>  Also, that would give the Xenstore domain a way to trigger
> DoS attacks.

I guess a Xenstore domain can do so anyway, by simply refusing to
fulfill its job.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:46:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347861.574176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fi7-0005mH-56; Mon, 13 Jun 2022 08:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347861.574176; Mon, 13 Jun 2022 08:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fi7-0005mA-2M; Mon, 13 Jun 2022 08:46:11 +0000
Received: by outflank-mailman (input) for mailman id 347861;
 Mon, 13 Jun 2022 08:46:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xNfa=WU=redhat.com=berrange@srs-se1.protection.inumbo.net>)
 id 1o0fi5-0005m4-Vk
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:46:10 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42a4a149-eaf5-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 10:46:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-634-9IpHJqSjOBatvPQxcG8MFA-1; Mon, 13 Jun 2022 04:46:00 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0207D3801FE1;
 Mon, 13 Jun 2022 08:45:41 +0000 (UTC)
Received: from redhat.com (unknown [10.33.36.124])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9B8F5475005;
 Mon, 13 Jun 2022 08:45:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42a4a149-eaf5-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655109964;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OsmIb8JBjamYO7HSXD7lvz9oKqmWR9je1VCbD+9sYo8=;
	b=PUsT9hswdjnCMad4zVSgK+RQvxWv30of/H2q7cuQsVr7INAXwLISt8J1bhSUoRtzMiQBl8
	wC4Uif4O3bv58YYFmbC8nBJAHRQJRS3ScH7SI1X6tzMsx493r/LX5RcaQdQ2ustghqPTeo
	CHxiZVxs0/TO2MxmqnZct/Ku0gH1Ccc=
X-MC-Unique: 9IpHJqSjOBatvPQxcG8MFA-1
Date: Mon, 13 Jun 2022 09:45:35 +0100
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Volker =?utf-8?Q?R=C3=BCmelin?= <vr_qemu@t-online.de>
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org,
	"Canokeys.org" <contact@canokeys.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	xen-devel@lists.xenproject.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PULL 00/17] Kraxel 20220610 patches
Message-ID: <Yqb5L31cG/0cVM5B@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20220610092043.1874654-1-kraxel@redhat.com>
 <adec1cff-54f1-e2bf-8092-945601aeb912@linaro.org>
 <60c72935-85ce-4e24-43a5-119f6428b916@t-online.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <60c72935-85ce-4e24-43a5-119f6428b916@t-online.de>
User-Agent: Mutt/2.2.1 (2022-02-19)
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

On Sat, Jun 11, 2022 at 06:34:28PM +0200, Volker Rümelin wrote:
> Am 10.06.22 um 22:16 schrieb Richard Henderson:
> > On 6/10/22 02:20, Gerd Hoffmann wrote:
> > > The following changes since commit
> > > 9cc1bf1ebca550f8d90f967ccd2b6d2e00e81387:
> > > 
> > >    Merge tag 'pull-xen-20220609' of
> > > https://xenbits.xen.org/git-http/people/aperard/qemu-dm into staging
> > > (2022-06-09 08:25:17 -0700)
> > > 
> > > are available in the Git repository at:
> > > 
> > >    git://git.kraxel.org/qemu tags/kraxel-20220610-pull-request
> > > 
> > > for you to fetch changes up to 02319a4d67d3f19039127b8dc9ca9478b6d6ccd8:
> > > 
> > >    virtio-gpu: Respect UI refresh rate for EDID (2022-06-10 11:11:44
> > > +0200)
> > > 
> > > ----------------------------------------------------------------
> > > usb: add CanoKey device, fixes for ehci + redir
> > > ui: fixes for gtk and cocoa, move keymaps, rework refresh rate
> > > virtio-gpu: scanout flush fix
> > 
> > This introduces regressions:
> > 
> > https://gitlab.com/qemu-project/qemu/-/jobs/2576157660
> > https://gitlab.com/qemu-project/qemu/-/jobs/2576151565
> > https://gitlab.com/qemu-project/qemu/-/jobs/2576154539
> > https://gitlab.com/qemu-project/qemu/-/jobs/2575867208
> > 
> > 
> >  (27/43)
> > tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password:
> > ERROR: ConnectError: Failed to establish session: EOFError\n Exit code:
> > 1\n    Command: ./qemu-system-x86_64 -display none -vga none -chardev socket,id=mon,path=/var/tmp/avo_qemu_sock_4nrz0r37/qemu-2912538-7f732e94e0f0-monitor.sock
> > -mon chardev=mon,mode=control -node... (0.09 s)
> >  (28/43) tests/avocado/vnc.py:Vnc.test_change_password:  ERROR:
> > ConnectError: Failed to establish session: EOFError\n    Exit code:
> > 1\n    Command: ./qemu-system-x86_64 -display none -vga none -chardev socket,id=mon,path=/var/tmp/avo_qemu_sock_yhpzy5c3/qemu-2912543-7f732e94b438-monitor.sock
> > -mon chardev=mon,mode=control -node... (0.09 s)
> >  (29/43)
> > tests/avocado/vnc.py:Vnc.test_change_password_requires_a_password:
> > ERROR: ConnectError: Failed to establish session: EOFError\n Exit code:
> > 1\n    Command: ./qemu-system-x86_64 -display none -vga none -chardev socket,id=mon,path=/var/tmp/avo_qemu_sock_tk3pfmt2/qemu-2912548-7f732e93d7b8-monitor.sock
> > -mon chardev=mon,mode=control -node... (0.09 s)
> > 
> > 
> > r~
> > 
> 
> This is caused by [PATCH 14/17] ui: move 'pc-bios/keymaps' to 'ui/keymaps'.
> After this patch QEMU no longer finds its keymaps if started directly from
> the build directory.

I just sent Gerd an updated version which adds a symlink from the source
tree to the build dir to solve this problem, along with an updated commit
message to reflect this need.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:56:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:56:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347857.574195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007Xf-HW; Mon, 13 Jun 2022 08:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347857.574195; Mon, 13 Jun 2022 08:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007XG-Am; Mon, 13 Jun 2022 08:56:23 +0000
Received: by outflank-mailman (input) for mailman id 347857;
 Mon, 13 Jun 2022 08:42:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f0g7=WU=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o0fed-0005hb-MP
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:42:37 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c4447aa3-eaf4-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 10:42:34 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o0fdb-00Gf1z-43; Mon, 13 Jun 2022 08:41:31 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 600BB300472;
 Mon, 13 Jun 2022 10:41:27 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 42FA02849859B; Mon, 13 Jun 2022 10:41:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4447aa3-eaf4-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=TBu1hM1Fbte1hrb329bc3ivTNLPSxYZoE9gYu5eRiHg=; b=AYS6grJVD8NMmpEvSXoRmqR/m5
	NEo85wVpnYAfEDlJuO/EUebdXwmtGi7E1hfXWnzD4xJRKZ83szCruTMmPOs7xBMJ6Aw1UFgllIdA2
	DrNCgkjyxwSlCkaYvmrnpeyVHjsekjBUo+W+kPzFmB2XaZh0Vpc97VCHNs1UCeL3PGYHI9qxJxVu/
	DaFylVxmC0fZNsBkd3ExUz+u7t35uoUT8AamFGd6WJIWdTb36E6JQrKM1QWtIzoeKjuNFK4QcGaGj
	jl+nLLuCaTmbWLVhpIZJqu3xqI2+WRMTvXP48+Wg/el92i2X48vydvI85ds5VF4NfA6ZUW+2+coel
	hUXUv0FQ==;
Date: Mon, 13 Jun 2022 10:41:27 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>, ink@jurassic.park.msu.ru,
	mattst88@gmail.com, vgupta@kernel.org, linux@armlinux.org.uk,
	ulli.kroll@googlemail.com, linus.walleij@linaro.org,
	shawnguo@kernel.org, Sascha Hauer <s.hauer@pengutronix.de>,
	kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com,
	tony@atomide.com, khilman@kernel.org, catalin.marinas@arm.com,
	Will Deacon <will@kernel.org>, guoren@kernel.org, bcain@quicinc.com,
	Huacai Chen <chenhuacai@kernel.org>, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	Michael Ellerman <mpe@ellerman.id.au>, benh@kernel.crashing.org,
	paulus@samba.org, Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>, hca@linux.ibm.com,
	gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	Richard Weinberger <richard@nod.at>,
	anton.ivanov@cambridgegreys.com,
	Johannes Berg <johannes@sipsolutions.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, X86 ML <x86@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>, acme <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	jolsa@kernel.org, Namhyung Kim <namhyung@kernel.org>,
	Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, VMware Inc <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net,
	jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org,
	pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	Anup Patel <anup@brainfault.org>, thierry.reding@gmail.com,
	jonathanh@nvidia.com, jacob.jun.pan@linux.intel.com,
	Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	joel@joelfernandes.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
	vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org, Isaku Yamahata <isaku.yamahata@gmail.com>,
	kirill.shutemov@linux.intel.com
Subject: Re: [PATCH 21/36] x86/tdx: Remove TDX_HCALL_ISSUE_STI
Message-ID: <Yqb4N3iwh1X7378o@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.251109029@infradead.org>
 <CAJhGHyCnu_BsKf5STMMJKMWm0NVZ8qXT8Qh=BhhCjSSgwchL3Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAJhGHyCnu_BsKf5STMMJKMWm0NVZ8qXT8Qh=BhhCjSSgwchL3Q@mail.gmail.com>

On Mon, Jun 13, 2022 at 04:26:01PM +0800, Lai Jiangshan wrote:
> On Wed, Jun 8, 2022 at 10:48 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Now that arch_cpu_idle() is expected to return with IRQs disabled,
> > avoid the useless STI/CLI dance.
> >
> > Per the specs this is supposed to work, but nobody has yet relied upon
> > this behaviour so broken implementations are possible.
> 
> I'm a total newbie here.
> 
> The point of safe_halt() is that STI must be used, and used directly
> before HLT, to enable IRQs during the halt and to stop the halt as
> soon as any IRQ arrives.

Correct; on real hardware. But this is virt...

> In the TDX case, STI must be used directly before the hypercall.
> Otherwise, no IRQ can come in and the vCPU would be stalled forever.
> 
> The hypercall does have an "irq_disabled" argument, but the
> hypervisor doesn't (and can't) touch the IRQ flag no matter what that
> argument is.  IRQs are not enabled during the halt if they are
> disabled before the hypercall, even with irq_disabled=false.

All we need the VMM to do is wake the vCPU, and it can do that,
irrespective of the guest's IF.

So the VMM can (and does) know if there's an interrupt pending, and
that's all that's needed to wake from this hypercall. Once the vCPU is
back up and running again, we'll eventually set IF again and the pending
interrupt will get delivered and all's well.

Think of this like MWAIT with ECX[0] set if you will.
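
In toy-model form (every name below is invented for illustration; none of
this is a real TDX, KVM, or kernel API), the wake-vs-deliver split looks
like this: the halt hypercall returns whenever the VMM sees a pending
interrupt, regardless of the guest's IF, and the interrupt is only
delivered once the guest sets IF again:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the contract described above: the VMM can always wake
 * the vCPU; the guest's IF only gates *delivery*, not wakeup. */
struct vcpu {
    bool if_flag;       /* guest interrupt flag (IF) */
    bool irq_pending;   /* interrupt the VMM knows is pending */
    int  delivered;     /* interrupts actually delivered to the guest */
};

/* The halt hypercall returns (the vCPU wakes) as soon as an IRQ is
 * pending, even with IF clear -- like MWAIT with ECX[0] set. */
static void hcall_halt(struct vcpu *v)
{
    while (!v->irq_pending)
        ;               /* a real VMM would block the vCPU here */
}

/* Delivery only happens once the guest enables interrupts again. */
static void guest_sti(struct vcpu *v)
{
    v->if_flag = true;
    if (v->irq_pending) {
        v->irq_pending = false;
        v->delivered++;
    }
}
```

With IF clear and an interrupt pending, hcall_halt() returns at once;
the interrupt is then delivered by guest_sti(), i.e. "wake now, deliver
later".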


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:56:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:56:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347859.574199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007fk-Qe; Mon, 13 Jun 2022 08:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347859.574199; Mon, 13 Jun 2022 08:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007ee-Kl; Mon, 13 Jun 2022 08:56:23 +0000
Received: by outflank-mailman (input) for mailman id 347859;
 Mon, 13 Jun 2022 08:45:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f0g7=WU=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o0fhH-0005l2-Op
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:45:20 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 267d3740-eaf5-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 10:45:18 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o0fgN-007Vf8-Ah; Mon, 13 Jun 2022 08:44:23 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D73DB302D9E;
 Mon, 13 Jun 2022 10:44:22 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id BB85B200C72F2; Mon, 13 Jun 2022 10:44:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 267d3740-eaf5-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=pN3d4KEFcv2ptnSv0mvCj6MvUML0g63O/kngZ8T7P5U=; b=jHY41velXm5OfTDUZVWOgGoGBS
	3XcfJnIs2Kw4MnQzhXHXXp0NuTvYgWNNcy9H8ZMc64VSaMQLttlBqVnSkycnBSdvSyy0Jak2Oc/oh
	vvGayo0C9yY99bqhrLdUZmtWJXcMrqnljzCxzdeOwv7I5/kT19cRmZ3SrWtebHTgXMFdZAZxKh/LD
	FLvZqYIpmAvjq1lnM+gSi7696QvqdD2nf0hqYc6h4USx92PA9LVEoUSL3WQM5ig7zhrj1+/gECYeb
	R1xWNHwwSLm1F21Q9Cu7wJn7J1PofxmN5bqxHgiOokEGCrOE72TMA7ta7eHf7Pxf1c+Q+Jc6wUzzF
	Why4PJmw==;
Date: Mon, 13 Jun 2022 10:44:22 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com,
	andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk,
	rostedt@goodmis.org, pmladek@suse.com, senozhatsky@chromium.org,
	john.ogness@linutronix.de, paulmck@kernel.org, frederic@kernel.org,
	quic_neeraju@quicinc.com, josh@joshtriplett.org,
	mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
	joel@joelfernandes.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
	vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
Message-ID: <Yqb45vclY2KVL0wZ@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.172460444@infradead.org>
 <20220609164921.5e61711d@jacob-builder>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220609164921.5e61711d@jacob-builder>

On Thu, Jun 09, 2022 at 04:49:21PM -0700, Jacob Pan wrote:
> Hi Peter,
> 
> On Wed, 08 Jun 2022 16:27:27 +0200, Peter Zijlstra <peterz@infradead.org>
> wrote:
> 
> > Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> > Xeons") wrecked intel_idle in two ways:
> > 
> >  - must not have tracing in idle functions
> >  - must return with IRQs disabled
> > 
> > Additionally, it added a branch for no good reason.
> > 
> > Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on Xeons")
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  drivers/idle/intel_idle.c |   48
> > +++++++++++++++++++++++++++++++++++----------- 1 file changed, 37
> > insertions(+), 11 deletions(-)
> > 
> > --- a/drivers/idle/intel_idle.c
> > +++ b/drivers/idle/intel_idle.c
> > @@ -129,21 +137,37 @@ static unsigned int mwait_substates __in
> >   *
> >   * Must be called under local_irq_disable().
> >   */
> nit: this comment is no longer true, right?

It still is: all the idle routines are called with interrupts disabled,
and must also exit with interrupts disabled.

If the idle method requires interrupts to be enabled, it must be sure to
disable them again before returning. Given all the RCU/tracing concerns
it must use raw_local_irq_*() for this though.
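
Roughly, the shape is this (with stand-in helpers that only track a
flag, instead of the real raw_local_irq_*() primitives, which are
kernel-internal):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for raw_local_irq_enable()/raw_local_irq_disable(): here
 * they only track a flag; the real raw_ variants skip the tracing/RCU
 * hooks, which is why the idle path must use them. */
static bool irqs_enabled;

static void raw_irq_enable(void)  { irqs_enabled = true;  }
static void raw_irq_disable(void) { irqs_enabled = false; }

/* Shape of an idle method that needs IRQs on while waiting: it is
 * entered with IRQs disabled and must return with IRQs disabled. */
static void idle_method_sketch(void)
{
    assert(!irqs_enabled);   /* called under local_irq_disable() */

    raw_irq_enable();
    /* ...halt/mwait here until an interrupt wakes us... */
    raw_irq_disable();       /* restore the contract before returning */
}
```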


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 08:56:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 08:56:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347843.574189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007UB-7u; Mon, 13 Jun 2022 08:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347843.574189; Mon, 13 Jun 2022 08:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0frz-0007U4-2d; Mon, 13 Jun 2022 08:56:23 +0000
Received: by outflank-mailman (input) for mailman id 347843;
 Mon, 13 Jun 2022 08:26:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rOj1=WU=gmail.com=jiangshanlai@srs-se1.protection.inumbo.net>)
 id 1o0fOo-00031i-0d
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 08:26:14 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c40d223-eaf2-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 10:26:13 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id q15so2508417wmj.2
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 01:26:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c40d223-eaf2-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=3JSggHLDbkXyuZCKaU+uQ7H5KVpBjE3ZWWl/F0t3JyI=;
        b=hln/7KIUboOODQa9/Iv7zWAXFI8kFLhiLL+AWnA7ZL0itAJaB93eguKFQupB8Lv94a
         mnMuLqlfVsMy6aSBFmNweaacWKXWw7aT6TScazfBGhspJqqH/PMOPWdnnnaxNLaf842d
         D+3NPCattkXMe8VyippSy6AVlz1Q90Igxi/lQII1yr2hkIK0YZRpsj0TKs2Px7tXlVgY
         915w6r1r+N9IHNGMPf7O4C6rOlcjLzR4xSEqc3Zfq8tMbY0ayFsU19Cug+0HkT25Saig
         zkxTdXbzsbpEfsV6HhW0WtRME3JJZVdyKx57BtFZ0WE2gh0TSNH76QSkgUTVIoEuHNxT
         f/gA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=3JSggHLDbkXyuZCKaU+uQ7H5KVpBjE3ZWWl/F0t3JyI=;
        b=g8c4W/xTVMLeZGTKZ2bIoZCCkRMeYxUhKZ1GQo6i+ixBOACstvkagJme2Qv1gGzEum
         3nuMr6ffzNdoSbFu1VwfuOcebvTy2Q/fwq86A57e4MoRb0fOF5oqckhm3ezesL6rOpdI
         EtKm2sS3cusl6+jsr8xPkCWuAYBH11xLurtCiqBF8eMTpVreK5hxls6yTtpUW+aopq0X
         XfmdjPjvNTj96kFMuiGhfTbuC7I4Dgo6r1JTXUS3j9fNkHcgbLeoYhI73F2s+8rscvH5
         PvDk69JeLAiS/0KOCiFuFwCN0MJzd1kbPxczJ71qLlw1vezDZzxu7pUadA/I4Qsm8/Fs
         waEA==
X-Gm-Message-State: AOAM531hgUz3xgF8Ss01ic41l098ztHpdj6/ie6Aht/XBBoynC1UWS7t
	0GNl6lhsghxzNxDYefi6/L9w69CueZw63H0gO3s=
X-Google-Smtp-Source: ABdhPJxgJ3qlsWHXb4q5Q9LeHOXh1hQWbFNdZBk/6atfvyd1YyWRPGtuhT6/ZUDmb0vG1CLuyzfEgHlG2xi/itwptM8=
X-Received: by 2002:a05:600c:1c9a:b0:39c:7db4:90c3 with SMTP id
 k26-20020a05600c1c9a00b0039c7db490c3mr13053942wms.161.1655108772491; Mon, 13
 Jun 2022 01:26:12 -0700 (PDT)
MIME-Version: 1.0
References: <20220608142723.103523089@infradead.org> <20220608144517.251109029@infradead.org>
In-Reply-To: <20220608144517.251109029@infradead.org>
From: Lai Jiangshan <jiangshanlai@gmail.com>
Date: Mon, 13 Jun 2022 16:26:01 +0800
Message-ID: <CAJhGHyCnu_BsKf5STMMJKMWm0NVZ8qXT8Qh=BhhCjSSgwchL3Q@mail.gmail.com>
Subject: Re: [PATCH 21/36] x86/tdx: Remove TDX_HCALL_ISSUE_STI
To: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>, ink@jurassic.park.msu.ru, mattst88@gmail.com, 
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com, 
	linus.walleij@linaro.org, shawnguo@kernel.org, 
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com, 
	linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org, 
	catalin.marinas@arm.com, Will Deacon <will@kernel.org>, guoren@kernel.org, 
	bcain@quicinc.com, Huacai Chen <chenhuacai@kernel.org>, kernel@xen0n.name, 
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu, 
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se, 
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com, 
	James.Bottomley@hansenpartnership.com, deller@gmx.de, 
	Michael Ellerman <mpe@ellerman.id.au>, benh@kernel.crashing.org, paulus@samba.org, 
	Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, 
	Albert Ou <aou@eecs.berkeley.edu>, hca@linux.ibm.com, gor@linux.ibm.com, 
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com, 
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net, 
	Richard Weinberger <richard@nod.at>, anton.ivanov@cambridgegreys.com, 
	Johannes Berg <johannes@sipsolutions.net>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, X86 ML <x86@kernel.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, acme <acme@kernel.org>, Mark Rutland <mark.rutland@arm.com>, 
	Alexander Shishkin <alexander.shishkin@linux.intel.com>, jolsa@kernel.org, 
	Namhyung Kim <namhyung@kernel.org>, Juergen Gross <jgross@suse.com>, srivatsa@csail.mit.edu, 
	amakhalov@vmware.com, VMware Inc <pv-drivers@vmware.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, chris@zankel.net, jcmvbkbc@gmail.com, 
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz, gregkh@linuxfoundation.org, 
	mturquette@baylibre.com, sboyd@kernel.org, daniel.lezcano@linaro.org, 
	lpieralisi@kernel.org, sudeep.holla@arm.com, agross@kernel.org, 
	bjorn.andersson@linaro.org, Anup Patel <anup@brainfault.org>, thierry.reding@gmail.com, 
	jonathanh@nvidia.com, jacob.jun.pan@linux.intel.com, 
	Arnd Bergmann <arnd@arndb.de>, yury.norov@gmail.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, linux@rasmusvillemoes.dk, 
	rostedt@goodmis.org, pmladek@suse.com, senozhatsky@chromium.org, 
	john.ogness@linutronix.de, paulmck@kernel.org, frederic@kernel.org, 
	quic_neeraju@quicinc.com, josh@joshtriplett.org, 
	mathieu.desnoyers@efficios.com, joel@joelfernandes.org, juri.lelli@redhat.com, 
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com, bsegall@google.com, 
	mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org, 
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, 
	linux-omap@vger.kernel.org, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, 
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, 
	openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org, 
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org, 
	Isaku Yamahata <isaku.yamahata@gmail.com>, kirill.shutemov@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 8, 2022 at 10:48 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Now that arch_cpu_idle() is expected to return with IRQs disabled,
> avoid the useless STI/CLI dance.
>
> Per the specs this is supposed to work, but nobody has yet relied upon
> this behaviour, so broken implementations are possible.

I'm a total newbie here.

The point of safe_halt() is that STI must be used, and used
directly before HLT, so that IRQs are enabled during the halt and
any pending IRQ terminates the halt.

In the TDX case, STI must be used directly before the hypercall.
Otherwise, no IRQ can arrive and the vCPU would stall forever.

Although the hypercall has an "irq_disabled" argument, the
hypervisor doesn't (and can't) touch the guest's IRQ flag no matter
what that argument says.  IRQs stay disabled during the halt if they
were disabled before the hypercall, even with irq_disabled=false.

The "irq_disabled" argument is used as a workaround:
https://lore.kernel.org/kvm/c020ee0b90c424a7010e979c9b32a28e9c488a51.1651774251.git.isaku.yamahata@intel.com/

Hope my immature/incorrect reply elicits a real response from
others.

Thanks
Lai


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 09:04:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 09:04:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347897.574221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fzb-0001tu-FG; Mon, 13 Jun 2022 09:04:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347897.574221; Mon, 13 Jun 2022 09:04:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0fzb-0001tn-CN; Mon, 13 Jun 2022 09:04:15 +0000
Received: by outflank-mailman (input) for mailman id 347897;
 Mon, 13 Jun 2022 09:04:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i6zz=WU=citrix.com=prvs=156a1e8c4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o0fzZ-0001th-TC
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 09:04:13 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c9981a13-eaf7-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 11:04:12 +0200 (CEST)
Received: from mail-dm6nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Jun 2022 05:04:09 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB6240.namprd03.prod.outlook.com (2603:10b6:a03:3ae::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Mon, 13 Jun
 2022 09:04:06 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 09:04:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9981a13-eaf7-11ec-bd2c-47488cf2e6aa
Date: Mon, 13 Jun 2022 11:04:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <Yqb9gKUMokLAots7@Air-de-Roger>
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
X-ClientProxiedBy: LO4P123CA0543.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> On 13.06.2022 10:21, Roger Pau Monné wrote:
> > On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>> Prevent dropping console output from the hardware domain, since it's
> >>> likely important to have all the output if the boot fails without
> >>> having to resort to sync_console (which also affects the output from
> >>> other guests).
> >>>
> >>> Do so by pairing the console_serial_puts() with
> >>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>
> >> While I can see the goal, why would Dom0 output be (effectively) more
> >> important than Xen's own one (which isn't "forced")? And with this
> >> aiming at boot output only, wouldn't you want to stop the overriding
> >> once boot has completed (of which, if I'm not mistaken, we don't
> >> really have any signal coming from Dom0)? And even during boot I'm
> >> not convinced we'd want to let through everything, but perhaps just
> >> Dom0's kernel messages?
> > 
> > I normally use sync_console on all the boxes I'm doing dev work on, so
> > this request is something that came up internally.
> > 
> > Didn't realize Xen output wasn't forced, since we already have rate
> > limiting based on log levels I was assuming that non-ratelimited
> > messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> > triggered) output shouldn't be rate limited either.
> 
> Which would raise the question of why we have log levels for non-guest
> messages.

Hm, maybe I'm confused, but I don't see a direct relation between log
levels and rate limiting.  If I set the log level to WARNING I would
expect not to lose _any_ non-guest log messages with level WARNING or
above.  It's still useful to have log levels for non-guest messages,
since a user might want to filter out DEBUG non-guest messages, for
example.

> >  Also that would give the xenstore domain a way to trigger
> > DoS attacks.
> 
> I guess a Xenstore domain can do so anyway, by simply refusing to
> fulfill its job.

Right, but that's IMO a DoS strictly related to the purpose of the
domain.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 09:18:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 09:18:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347907.574231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0gDl-0003rI-QZ; Mon, 13 Jun 2022 09:18:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347907.574231; Mon, 13 Jun 2022 09:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0gDl-0003rB-Nn; Mon, 13 Jun 2022 09:18:53 +0000
Received: by outflank-mailman (input) for mailman id 347907;
 Mon, 13 Jun 2022 09:18:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T645=WU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o0gDk-0003r5-H8
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 09:18:52 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060c.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5b128c7-eaf9-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 11:18:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7824.eurprd04.prod.outlook.com (2603:10a6:102:cd::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Mon, 13 Jun
 2022 09:18:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 09:18:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5b128c7-eaf9-11ec-8901-93a377f238d6
Message-ID: <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
Date: Mon, 13 Jun 2022 11:18:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yqb9gKUMokLAots7@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR07CA0020.eurprd07.prod.outlook.com
 (2603:10a6:20b:46c::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 13.06.2022 11:04, Roger Pau Monné wrote:
> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>> likely important to have all the output if the boot fails without
>>>>> having to resort to sync_console (which also affects the output from
>>>>> other guests).
>>>>>
>>>>> Do so by pairing the console_serial_puts() with
>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>
>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>> important than Xen's own one (which isn't "forced")? And with this
>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>> not convinced we'd want to let through everything, but perhaps just
>>>> Dom0's kernel messages?
>>>
>>> I normally use sync_console on all the boxes I'm doing dev work on, so
>>> this request is something that came up internally.
>>>
>>> Didn't realize Xen output wasn't forced, since we already have rate
>>> limiting based on log levels I was assuming that non-ratelimited
>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>> triggered) output shouldn't be rate limited either.
>>
>> Which would raise the question of why we have log levels for non-guest
>> messages.
> 
> Hm, maybe I'm confused, but I don't see a direct relation between log
> levels and rate limiting.  If I set the log level to WARNING I would
> expect not to lose _any_ non-guest log messages with level WARNING or
> above.  It's still useful to have log levels for non-guest messages,
> since a user might want to filter out DEBUG non-guest messages, for
> example.

It was me who was confused, because of the two log-everything variants
we have (console and serial). You're right that your change is unrelated
to log levels. However, when there are e.g. many warnings or when an
admin has lowered the log level, what you (would) do is effectively
force sync_console mode transiently (for a subset of messages, but
that's secondary, especially because the "forced" output would still
be waiting for earlier output to make it out). We strongly advise against
use of sync_console in production environments, so I'm afraid I have
trouble seeing how using this mode transiently can be safe. This is quite
different from forcing all output to appear when e.g. we're about to
crash Xen.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 09:21:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 09:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347915.574243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0gGf-0005CQ-8b; Mon, 13 Jun 2022 09:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347915.574243; Mon, 13 Jun 2022 09:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0gGf-0005CJ-5m; Mon, 13 Jun 2022 09:21:53 +0000
Received: by outflank-mailman (input) for mailman id 347915;
 Mon, 13 Jun 2022 09:21:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o0gGd-0005CD-Bo
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 09:21:51 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20618.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41816b39-eafa-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 11:21:50 +0200 (CEST)
Received: from AS8PR04CA0040.eurprd04.prod.outlook.com (2603:10a6:20b:312::15)
 by AM5PR0802MB2483.eurprd08.prod.outlook.com (2603:10a6:203:9b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Mon, 13 Jun
 2022 09:21:48 +0000
Received: from VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::f8) by AS8PR04CA0040.outlook.office365.com
 (2603:10a6:20b:312::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20 via Frontend
 Transport; Mon, 13 Jun 2022 09:21:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT009.mail.protection.outlook.com (10.152.18.92) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Mon, 13 Jun 2022 09:21:46 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Mon, 13 Jun 2022 09:21:44 +0000
Received: from 195176ecd504.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2DBC8566-744C-4808-B0A3-2FEDD99A28D3.1; 
 Mon, 13 Jun 2022 09:21:37 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 195176ecd504.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 13 Jun 2022 09:21:37 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM0PR08MB5252.eurprd08.prod.outlook.com (2603:10a6:208:15a::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.11; Mon, 13 Jun
 2022 09:21:35 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 09:21:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41816b39-eafa-11ec-8901-93a377f238d6
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=U6Nn1Kh72CbT3ihVGJKs+OE0bLsQwInt8jlJu8BCud2uGmkJnHJN/E8xv4KzKfnxFfo4VRaFMYIJb4pBV+XlQQ14MNCquSJYm+0ioaktkCOdu1RofmeqVzuQySQFgbGwxARvw0yUmD0W0V7Gpk8YENDlGq4lf276yD26xAv8kXEWMKOgs31uHyG4kqzXISBgZW/UJXPZeKUrAH71R/MEAgoYu0nnc11ZaoLmdzvb4Q1I+LWzyZUUQwvdwIUExb33Tz8jpCrp2lqCmNNAoFC5WrDqo9hgFKl3SfosFown+oTykw3OoqeYOJNzgIHbXw1zQMhDdZ9xHNLyLWzGR5ygbQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HSP+kWOvCT2oPppvmrxVsk7nj8Q3bIMzMKru9D1AXAU=;
 b=IiNUn1WLnkuL7arK36EkfhQou3g2Z6sbS5Y9gTXTVqWxhBG5s126Qw/uv+JTs1v6Z8LueL6GrnVTyfXWYSm4lt2cUw4OL/Hpt/mQa4i/yY9MUbH8gsb/+nrbYFib7HyLAtxZIj8c9MBi7wk5qsqarhyy61E+Bjgi8XSNeRikFSmdj+NaLYVp6Fu+g753jAkuZH9aI1Wjy74cmDtxJHcHYzNOusy1gCEsHhI0Moz7Z6VUDA8rH7zuaeHShkGFFdyZIVMPio2Yubw3DCNq0e55qn5p7WE6ElQ1eDgoRucfzGFY6t4a2Ozv36JFzJd5ZZ3PEMgYaJqcBK5LDYEbJ9zhqQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HSP+kWOvCT2oPppvmrxVsk7nj8Q3bIMzMKru9D1AXAU=;
 b=MF71QuxmALXWbhDNy9RE9PPg2Irg735/kGcw/D2SREDoSBzsisjIyQ2XvbxKNQJtoWUSo/V60nDfWuzfA7MLrZk/5f19W2B8rD7PTue7M40mGi2T1mLJ6uulcEEs5scMioogE5Z5jrg93+1z3QPYjO5bSu/ZZxkapyLqpwRBh+A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c4bc66be626ea6af
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bUEJt2STraIB/UD/OsElEcfYfng6/lDuZ/RuZdABkdDkwCWLoqwGUiDY9+JvqSYm8hSW9kwRfy2aHImKcGzhoteT8X2x386bFlacGsLyOaMJctSf2zYgCaH/ZPlFBpNh0pYKvDfjLj8X1ZfMv1Tq/LuiJrhQJ9Ep4HafYNLV7YpVSh1qBL0Y5DXasS61/ir2KNkylAz1WWF7LS9YxS02rGmiFvkJg/epQH+VB0pvlfbpzv5lca9wnJthpzj6ivf++oYzzDESqwrcDl7+7gtucLPYGdfmSFQvYZLvwAreNmMCQXrGiSUjwBYGZX5kXjB+xmTAO3Y6++E0hpayKf+SNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HSP+kWOvCT2oPppvmrxVsk7nj8Q3bIMzMKru9D1AXAU=;
 b=AJ+uPkXxM/s7czpzwOu60bmKRqghpNn+5YS8ECe5GsKnTS8MhPv9e7sj11NrCyAKnhJerLb8oLdb21ii5QzPIi4+JBu0TEVUDjz/+Aumg3dpc3NRgn/iY6QPpn1BOgvsfjK6xRuo0ToS9jvkHTCujVgHFqRXn5DKDFRLL1tfOWLCCx5jaXbLx93K4HPEupfEpo7d4NZMvaaHgQdsjOy11UpyumzEffofEUE8Y10iax53VIUIjcST30VZRwv11Wi81roy6JBdLUPsszmNnAQC6nudDGPsU2cAMekaLXRbLJ3z/XNGfJLvrfcgYk8Mm/AYGxmvqQrNUuEGIrbyomoLdQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HSP+kWOvCT2oPppvmrxVsk7nj8Q3bIMzMKru9D1AXAU=;
 b=MF71QuxmALXWbhDNy9RE9PPg2Irg735/kGcw/D2SREDoSBzsisjIyQ2XvbxKNQJtoWUSo/V60nDfWuzfA7MLrZk/5f19W2B8rD7PTue7M40mGi2T1mLJ6uulcEEs5scMioogE5Z5jrg93+1z3QPYjO5bSu/ZZxkapyLqpwRBh+A=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 2/4] xen/arm: Add sb instruction support
Thread-Topic: [PATCH v2 2/4] xen/arm: Add sb instruction support
Thread-Index: AQHYdNtJOQ4Z/Aud7ESpo3y4E38gSK1JA+qAgAQgUgA=
Date: Mon, 13 Jun 2022 09:21:35 +0000
Message-ID: <E3444FA6-7CDD-4E7F-8454-80D279C21CB1@arm.com>
References: <cover.1653993431.git.bertrand.marquis@arm.com>
 <efc2f01da9f9dfc0f678eaf7d8fe81f9b3d0cbc3.1653993431.git.bertrand.marquis@arm.com>
 <25cfe471-8e82-97cd-7a47-b5e85c849eae@xen.org>
In-Reply-To: <25cfe471-8e82-97cd-7a47-b5e85c849eae@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: eb19188a-98ad-42f4-2f60-08da4d1e238a
x-ms-traffictypediagnostic:
	AM0PR08MB5252:EE_|VE1EUR03FT009:EE_|AM5PR0802MB2483:EE_
X-Microsoft-Antispam-PRVS:
	<AM5PR0802MB2483BA95581370335F7B65EC9DAB9@AM5PR0802MB2483.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 eVRxiaWuM2FPxBd/ew2ncAi8pqs3j8TJvqwMWxb0lR4HrdjWA7W8+GY4FWZnBF/mnrVBQCOzI/4uX78yyGdVCUVCWttDQ2JnMe4PSnwvTHRYXD+eimrG93cIv7xacagT9E2kxrDTehMfp4ByGF1j1y9vr/IXR7dtsq0KmC5viHBYDv4L77TUiGmE71ZmoXo6fOfm9XVRxvyarw1/rCVXJ0RG1IXF1GGReAlc4z+lzKI/VsgISWeeoa6xTNWPdb9Ur7RHzowgVm6qqJZyQJVr0srlsyQn28B62Yqn4YUNyIYyhImYSTjRBJ6VTYlGqsRr+8abSLtbUPpFzKhxCrpVm5FItigttXmF1/sE4ZU80gRo7jl3weY8P7Ekb19m5jXnBUudFyxXgxZZtkF86DBlj6Hmi9K8h4k404pSlaJmoIonp9e/RveqKDw3++CQ9USrMozABc1WFDXxyRr16IrDx4iOcCbjsF7zz1BIbQYItSWAvEDuxwmHhqETqjuL0SUwumNxIt0npzuKcYS1YgIn4C/zdLkDycz1fdmJM/lFouSiKagMVW6dKpFO+rE433CV3PPxJvyMsFMYsuV0+s68vaUSVvCCP3BpJb9CBRwDPG/NYPqzoW7/+MD22/d9C7OW19na1q0i8lIjgAKYZ0n7EuJoiVuOgY0IRPZKe6B4i1pSFGH9P74TQXjmTguMmIWpAtNbEJOGmBveozBPBPHRTmLonZwDj9KnKtdsLhq4aUpeevVFCACSpOmUHGeDkdHXPrCXj5pDDwng1nuhlrN12g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(8936002)(54906003)(66446008)(4326008)(91956017)(71200400001)(8676002)(76116006)(2906002)(33656002)(36756003)(86362001)(316002)(38100700002)(64756008)(5660300002)(508600001)(6506007)(6512007)(26005)(2616005)(6916009)(53546011)(66946007)(186003)(122000001)(66476007)(6486002)(83380400001)(38070700005)(66556008)(21314003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <733CB81908BBBF4196672301BCEB31C8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5252
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d65b4e15-d6bc-476d-b9d9-08da4d1e1cd6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	w6CVhg83i4dgqDMLgmfFGLtB8qOtAzxaTd5rbcIvWIylimGmy55fjacZHVDfed+2t1QDpLKM2Rfd2IwJFh1BQS4wV2ufAahGb3KvClYd1tXt5ntk2JYm3MjLV1ctrmYL0cVROLNxmkmFZmpvtgrIWI4XPkCILZ0YGgRswKCOkU8Rndo5OYRDURtK0eY5ZBilwySRlEUkm384dIaJOONOvqyN4ifuUNatFBivkAPS7VqIBdNyDew7AdQgbvjUKQnKf5O3pcwWmnI7/xMKZwk9cmneU2nvWAMoA1hyAfFhVp10pJw/H9/GuAmgUODBwaGYvAedmdxsPoSdwz9Cblc2c0YBorNfyh5JGMrk17JcfUm1FS7t0FhfQEREF1DX62IsKkKq8MQwtTFV8Lk5TlNzVHM19mCGH4sfPWo6o4U/FpSznfpO4S3pP1TGNCXhcac/fONGxJdryfDR2P6j8CtW22ykhSVkUQZcGH7t8v+IlKFMc8KXFIrqo3f/N6134RfE3x0pi4ehnjmPdFM6umLYNn9hDLxCfV9E7+JMIpOAHq2IadQt1PAPjvTfA4Y6H8bPmsrgcw8pNroYkPxm78N0fI7cKXUHJLEK/VN0AOHbMmmUHMBDLNDNNIV3zGoBEx21JUandX9zG8dpmFRCzMufjgDzljKmzlsx4JkxEmx8q3cnVzQ6Hzivgh29Iz7WkndnXN/eNh/N9AC0o+z5qxLMgw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(36840700001)(40470700004)(46966006)(36756003)(82310400005)(316002)(81166007)(54906003)(4326008)(83380400001)(40460700003)(8676002)(36860700001)(336012)(47076005)(86362001)(186003)(107886003)(2906002)(33656002)(356005)(70586007)(70206006)(2616005)(6486002)(508600001)(8936002)(6862004)(6506007)(26005)(53546011)(5660300002)(6512007)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 09:21:46.5400
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eb19188a-98ad-42f4-2f60-08da4d1e238a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2483

Hi Julien,

> On 10 Jun 2022, at 19:20, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 31/05/2022 11:43, Bertrand Marquis wrote:
>> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
>> index f7368766c0..9649a7afee 100644
>> --- a/xen/arch/arm/include/asm/cpufeature.h
>> +++ b/xen/arch/arm/include/asm/cpufeature.h
>> @@ -67,8 +67,9 @@
>>  #define ARM_WORKAROUND_BHB_LOOP_24 13
>>  #define ARM_WORKAROUND_BHB_LOOP_32 14
>>  #define ARM_WORKAROUND_BHB_SMCC_3 15
>> +#define ARM64_HAS_SB 16
>
> The feature is for both 32-bit and 64-bit. So I would prefer if it is called ARM_HAS_SB.

Right, that makes sense.

>
>>  -#define ARM_NCAPS           16
>> +#define ARM_NCAPS           17
>>    #ifndef __ASSEMBLY__
>>  @@ -78,6 +79,9 @@
>>    extern DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>  +void check_local_cpu_features(void);
>> +void enable_cpu_features(void);
>> +
>>  static inline bool cpus_have_cap(unsigned int num)
>>  {
>>      if ( num >= ARM_NCAPS )
>> diff --git a/xen/arch/arm/include/asm/macros.h b/xen/arch/arm/include/asm/macros.h
>> index 1aa373760f..33e863d982 100644
>> --- a/xen/arch/arm/include/asm/macros.h
>> +++ b/xen/arch/arm/include/asm/macros.h
>> @@ -5,14 +5,7 @@
>>  # error "This file should only be included in assembly file"
>>  #endif
>>  -    /*
>> -     * Speculative barrier
>> -     * XXX: Add support for the 'sb' instruction
>> -     */
>> -    .macro sb
>> -    dsb nsh
>> -    isb
>> -    .endm
>
> Looking at the patch bcab2ac84931 "xen/arm64: Place a speculation barrier following an ret instruction", the macro was defined before including <asm/arm*/macros.h> so 'sb' could be used in macros defined by the headers.
>
> I can't remember whether I chose the order because I had a failure on some compilers. However, I couldn't find anything in the assembler documentation suggesting that a macro A could use a macro B before B is defined.
>
> So I would rather avoid moving the macro if there is no strong argument for it.

Sure, I will put it back where it was.
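
For reference, the dsb+isb pair being restored is the architecturally safe fallback for parts without FEAT_SB; a purely hypothetical sketch of how the final macro might select the real SB instruction at runtime, assuming a Linux-style alternatives framework (the alternative_* macro names here are an assumption, not the syntax used in this patch series):

    /*
     * Speculative barrier -- hypothetical sketch only.
     * Without FEAT_SB, fall back to the heavier "dsb nsh; isb"
     * sequence; with FEAT_SB, a single SB instruction suffices.
     * The alternative_* selection mechanism is assumed, not taken
     * from the posted patch.
     */
    .macro sb
    alternative_if_not ARM_HAS_SB
    dsb nsh
    isb
    alternative_else
    .inst 0xd50330ff    /* SB encoding, for assemblers lacking FEAT_SB support */
    nop
    alternative_endif
    .endm

The raw `.inst` encoding is what Linux uses so the macro assembles even when the toolchain does not know the SB mnemonic.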

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 10:25:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 10:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347933.574254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0hFY-0003wT-0l; Mon, 13 Jun 2022 10:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347933.574254; Mon, 13 Jun 2022 10:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0hFX-0003wM-SG; Mon, 13 Jun 2022 10:24:47 +0000
Received: by outflank-mailman (input) for mailman id 347933;
 Mon, 13 Jun 2022 10:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0hFW-0003wC-OR; Mon, 13 Jun 2022 10:24:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0hFW-0002iB-KR; Mon, 13 Jun 2022 10:24:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0hFW-0000gr-8o; Mon, 13 Jun 2022 10:24:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0hFW-0004Op-8N; Mon, 13 Jun 2022 10:24:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cZXQ2PlVNbfgmIq5g1rBHWjrRD4haHsnaGGLnrD8t1Y=; b=0bpKzY2aPZKJF470czyR3hdnSx
	VHPPm6/kGT9n1EpXLM6kIMzAXntwYjfsiYdkTv9UTiZ+d5mh8hSf3ujIjLdGIES12Bv0MyA7vtdX1
	lOdoR4/Ck5OyoeboKj/ZOkSt9TYClBZwyoIkLJy+CzsfO6oNj6sEKCMT+LzLR69QZokI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171153: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a7d2272e59db058962fd5ef84737d7a741fd516a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 10:24:46 +0000

flight 171153 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a7d2272e59db058962fd5ef84737d7a741fd516a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  703 days
Failing since        151818  2020-07-11 04:18:52 Z  702 days  684 attempts
Testing same since   170990  2022-06-11 04:18:55 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 112934 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347977.574335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNe-0005BX-7M; Mon, 13 Jun 2022 11:37:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347977.574335; Mon, 13 Jun 2022 11:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNd-00058P-Ou; Mon, 13 Jun 2022 11:37:13 +0000
Received: by outflank-mailman (input) for mailman id 347977;
 Mon, 13 Jun 2022 11:37:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNb-0003eY-6R
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:11 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27c6e11e-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:08 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-98-ZPHpoTR3PRSnuCVXjGyOLQ-1; Mon, 13 Jun 2022 07:37:04 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A600E811E76;
 Mon, 13 Jun 2022 11:37:03 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 4D42E1402406;
 Mon, 13 Jun 2022 11:37:03 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 4E9211800932; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27c6e11e-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120227;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Iypp4lhP0vK8zaOIYv4YGS15C6hr5eRaL+NEXu7qlj4=;
	b=hNX5Bo7TS7CWIu7Y8+W0CDfPyTCXaJQt+NAk9Khus7Hp6l2Qn2Pa/NQl2KKJk3cf9o/jw7
	uzRFzmpn5Fv61A7W2iFNmeGLFrPUJ0jOSUBE7xxFTxWguuSvOH6lf6xcvawFoGUowsP2b2
	9y+bbok4EMBqf2X9y5NGr57Sl4e3TdM=
X-MC-Unique: ZPHpoTR3PRSnuCVXjGyOLQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 07/16] docs: Add CanoKey documentation
Date: Mon, 13 Jun 2022 13:36:46 +0200
Message-Id: <20220613113655.3693872-8-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6ilQimrK+l5NN@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/canokey.rst  | 168 +++++++++++++++++++++++++++++++
 2 files changed, 169 insertions(+)
 create mode 100644 docs/system/devices/canokey.rst

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 3b729b920d7c..05060060563f 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -92,3 +92,4 @@ Emulated Devices
    devices/vhost-user.rst
    devices/virtio-pmem.rst
    devices/vhost-user-rng.rst
+   devices/canokey.rst
diff --git a/docs/system/devices/canokey.rst b/docs/system/devices/canokey.rst
new file mode 100644
index 000000000000..169f99b8eb82
--- /dev/null
+++ b/docs/system/devices/canokey.rst
@@ -0,0 +1,168 @@
+.. _canokey:
+
+CanoKey QEMU
+------------
+
+CanoKey [1]_ is an open-source secure key that supports:
+
+* U2F / FIDO2 with Ed25519 and HMAC-secret
+* OpenPGP Card V3.4 with RSA4096, Ed25519 and more [2]_
+* PIV (NIST SP 800-73-4)
+* HOTP / TOTP
+* NDEF
+
+All these platform-independent features are in canokey-core [3]_.
+
+CanoKey has different implementations for different platforms,
+including both hardware implementations and virtual cards:
+
+* CanoKey STM32 [4]_
+* CanoKey Pigeon [5]_
+* (virt-card) CanoKey USB/IP
+* (virt-card) CanoKey FunctionFS
+
+In QEMU, yet another CanoKey virt-card is implemented.
+CanoKey QEMU exposes itself as a USB device to the guest OS.
+
+With the same software configuration as a hardware key,
+the guest OS can use all the functionality of a secure key as if
+an actual hardware key were plugged in.
+
+CanoKey QEMU is convenient for debugging:
+
+* libcanokey-qemu supports debugging output, so developers can
+  inspect what happens inside a secure key
+* CanoKey QEMU supports trace events, so device events can be traced
+* the QEMU USB stack supports pcap, so USB packets between the guest
+  and the key can be captured and analysed
+
+This means that:
+
+* developers of software with secure key support (e.g. FIDO2, OpenPGP)
+  can see what happens inside the secure key
+* secure key developers can easily capture and analyse the USB packets
+  exchanged between the guest OS and CanoKey
+
+Also, since this is a virtual card, it can easily be used in CI to test
+code that works with secure keys.
+
+Building
+========
+
+libcanokey-qemu is required to use CanoKey QEMU.
+
+.. code-block:: shell
+
+    git clone https://github.com/canokeys/canokey-qemu
+    mkdir canokey-qemu/build
+    pushd canokey-qemu/build
+
+If you want to install libcanokey-qemu in a different place,
+add ``-DCMAKE_INSTALL_PREFIX=/path/to/your/place`` to the cmake invocation below.
+
+.. code-block:: shell
+
+    cmake ..
+    make
+    make install # may need sudo
+    popd
+
+Then configure and build QEMU:
+
+.. code-block:: shell
+
+    # depending on your env, lib/pkgconfig can be lib64/pkgconfig
+    export PKG_CONFIG_PATH=/path/to/your/place/lib/pkgconfig:$PKG_CONFIG_PATH
+    ./configure --enable-canokey && make
+
+Using CanoKey QEMU
+==================
+
+CanoKey QEMU stores all its data in a file on the host, specified by the
+``file`` argument when invoking QEMU:
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file
+
+Note: guard this file carefully, as it may contain your private keys!
+
+The first time the file is used, it is created and initialized by CanoKey;
+afterwards CanoKey QEMU simply reads from it.
+
+After the guest OS boots, you can check that there is a USB device.
+
+For example, if the guest OS is Linux, you may invoke ``lsusb``
+and find CanoKey QEMU there:
+
+.. code-block:: shell
+
+    $ lsusb
+    Bus 001 Device 002: ID 20a0:42d4 Clay Logic CanoKey QEMU
+
+You may set up the key as guided in [6]_. The console for the key is at [7]_.
+
+Debugging
+=========
+
+CanoKey QEMU consists of two parts, ``libcanokey-qemu.so`` and ``canokey.c``,
+the latter of which resides in QEMU. The former provides the core functionality
+of a secure key, while the latter provides the platform-dependent part:
+USB packet handling.
+
+If you want to trace what happens inside the secure key, add
+``-DQEMU_DEBUG_OUTPUT=ON`` to the cmake command line when compiling
+libcanokey-qemu:
+
+.. code-block:: shell
+
+    cmake .. -DQEMU_DEBUG_OUTPUT=ON
+
+If you want to trace events in ``canokey.c``, use
+
+.. parsed-literal::
+
+    |qemu_system| --trace "canokey_*" \\
+        -usb -device canokey,file=$HOME/.canokey-file
+
+If you want to capture USB packets between the guest and the host, you can:
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file,pcap=key.pcap
+
+Limitations
+===========
+
+Currently libcanokey-qemu.so has dozens of global variables, as it was
+originally designed for embedded systems. Thus one QEMU instance cannot
+run multiple CanoKey QEMU devices; that is, you cannot use
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file \\
+         -device canokey,file=$HOME/.canokey-file2
+
+Also, there is no lock on the canokey-file, so two CanoKey QEMU instances
+cannot read the same canokey-file at the same time.
+
+Another limitation is that this device is not compatible with ``qemu-xhci``:
+the device hangs when there are FIDO2 packets (traffic on interrupt
+endpoints). If you do not use FIDO2, it works as intended, but for full
+functionality you should use the older UHCI/EHCI buses and attach CanoKey
+to one of them, for example
+
+.. parsed-literal::
+
+   |qemu_system| -device piix3-usb-uhci,id=uhci -device canokey,bus=uhci.0
+
+References
+==========
+
+.. [1] `<https://canokeys.org>`_
+.. [2] `<https://docs.canokeys.org/userguide/openpgp/#supported-algorithm>`_
+.. [3] `<https://github.com/canokeys/canokey-core>`_
+.. [4] `<https://github.com/canokeys/canokey-stm32>`_
+.. [5] `<https://github.com/canokeys/canokey-pigeon>`_
+.. [6] `<https://docs.canokeys.org/>`_
+.. [7] `<https://console.canokeys.org/>`_
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347978.574349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNf-0005Qy-IU; Mon, 13 Jun 2022 11:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347978.574349; Mon, 13 Jun 2022 11:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNf-0005Oo-0a; Mon, 13 Jun 2022 11:37:15 +0000
Received: by outflank-mailman (input) for mailman id 347978;
 Mon, 13 Jun 2022 11:37:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNc-0003eY-6M
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:12 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28770cea-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:09 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-220-y95P-MdnMjCHDvW2Dm1Vhw-1; Mon, 13 Jun 2022 07:37:05 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0241429AA3B9;
 Mon, 13 Jun 2022 11:37:05 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C162540C1247;
 Mon, 13 Jun 2022 11:37:04 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 5A37E180093F; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28770cea-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120228;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TqkIe5B5GmtLHYDEQNksBwY0pKybp1zEh7xnID9ieAU=;
	b=IT0vB0yzUqYvDyhlJfhFrOZz2LlmKix7r3rGVOhHQFf9hYCCHPEgxXbqn5EwO+tFxnlBnk
	VfaLBuQD+jorOCPfrlWvqfKEJGfSzOa2777M8340ezc86DqrShbbKGYWjJqNf0K5uld0xo
	abJPDHKYhmTmqQDiUApUnAsXYBGV5t4=
X-MC-Unique: y95P-MdnMjCHDvW2Dm1Vhw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 08/16] docs/system/devices/usb: Add CanoKey to USB devices examples
Date: Mon, 13 Jun 2022 13:36:47 +0200
Message-Id: <20220613113655.3693872-9-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6o+QFhzA7VHcZ@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/devices/usb.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/system/devices/usb.rst b/docs/system/devices/usb.rst
index afb7d6c2268d..872d9167589b 100644
--- a/docs/system/devices/usb.rst
+++ b/docs/system/devices/usb.rst
@@ -199,6 +199,10 @@ option or the ``device_add`` monitor command. Available devices are:
 ``u2f-{emulated,passthru}``
    Universal Second Factor device
 
+``canokey``
+   An open-source secure key implementing FIDO2, OpenPGP, PIV and more.
+   For more information, see :ref:`canokey`.
+
 Physical port addressing
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347976.574318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNc-0004gJ-Ma; Mon, 13 Jun 2022 11:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347976.574318; Mon, 13 Jun 2022 11:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNc-0004c8-7d; Mon, 13 Jun 2022 11:37:12 +0000
Received: by outflank-mailman (input) for mailman id 347976;
 Mon, 13 Jun 2022 11:37:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNa-0003eX-Jw
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:10 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28a296ef-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:09 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-389-p75XZMsjNVew4lyNOVT2PA-1; Mon, 13 Jun 2022 07:37:05 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 10CE6801E80;
 Mon, 13 Jun 2022 11:37:05 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id D41A21121319;
 Mon, 13 Jun 2022 11:37:04 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 658871800981; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28a296ef-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120228;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PVva2fRC3vGqvRDPoZrUChjHwOIX197Reg0ezh6aFRw=;
	b=Xe3iMIllfTIis8k3Evid8xsMXzTTEXvinvpJ0HJ9wJlVBnVLSzQiVkv2+OESUREcaSdkir
	oB6mQz3//D+NBz+eM00Dn8Zvu3X/1RiJVrHBZFh6U5P8PwUTAGf8IhLhoj+2DZox+6n5P0
	k9JbUlK70pW/mqOvtlbFXhOPfOZEHJU=
X-MC-Unique: p75XZMsjNVew4lyNOVT2PA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 09/16] MAINTAINERS: add myself as CanoKey maintainer
Date: Mon, 13 Jun 2022 13:36:48 +0200
Message-Id: <20220613113655.3693872-10-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY61xI0IcFT1fOP@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0df25ed4b0a3..4cf6174f9f37 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2427,6 +2427,14 @@ F: hw/intc/s390_flic*.c
 F: include/hw/s390x/s390_flic.h
 L: qemu-s390x@nongnu.org
 
+CanoKey
+M: Hongren (Zenithal) Zheng <i@zenithal.me>
+S: Maintained
+R: Canokeys.org <contact@canokeys.org>
+F: hw/usb/canokey.c
+F: hw/usb/canokey.h
+F: docs/system/devices/canokey.rst
+
 Subsystems
 ----------
 Overall Audio backends
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347979.574354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNg-0005dh-CU; Mon, 13 Jun 2022 11:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347979.574354; Mon, 13 Jun 2022 11:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNf-0005bA-QJ; Mon, 13 Jun 2022 11:37:15 +0000
Received: by outflank-mailman (input) for mailman id 347979;
 Mon, 13 Jun 2022 11:37:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNd-0003eY-6O
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:13 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2961440f-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:11 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-501-KvVginEiNHufd-nXx3l_YA-1; Mon, 13 Jun 2022 07:37:07 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B0C0380A0B0;
 Mon, 13 Jun 2022 11:37:06 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6788810725;
 Mon, 13 Jun 2022 11:37:06 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 715551800989; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2961440f-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=obSiKgR2g439N1NCymTkGuD5ZtC+AWQDUPf1CsA55V4=;
	b=HVbCChnxBj2IzVd6Ktjgn2FMM5amz7rlRjBP10bXGzPdsB6GvA/0oJ+3xn++XzockA+x1k
	EiT2NZ8yKv4sZ/dNDuV8mP9CEpswinrVVJv40Pf414HQI5qZ+OJ8E/7ZxAwMwTjeB3O92F
	a0TsSHj9uYvA/Q9/QZCNzBToMWCufyI=
X-MC-Unique: KvVginEiNHufd-nXx3l_YA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Arnout Engelen <arnout@bzzt.net>
Subject: [PULL 10/16] hw/usb/hcd-ehci: fix writeback order
Date: Mon, 13 Jun 2022 13:36:49 +0200
Message-Id: <20220613113655.3693872-11-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5

From: Arnout Engelen <arnout@bzzt.net>

The 'active' bit passes control over a qTD between the guest and the
controller: it is set to 1 by the guest to enable execution by the
controller, and the controller sets it to '0' to hand control back to
the guest.

ehci_state_writeback writes two dwords to main memory using DMA:
the third dword of the qTD (containing dt, total bytes to transfer,
cpage, cerr and status) and the fourth dword of the qTD (containing
the offset).

This commit makes sure the fourth dword is written before the third,
avoiding a race condition where a new offset, written into the qTD
by the guest after it observed the status going to '0', gets
overwritten by a 'late' DMA writeback of the previous offset.

This race condition could lead to 'cpage out of range (5)' errors,
and can be reproduced by:

./qemu-system-x86_64 -enable-kvm -bios $SEABIOS/bios.bin -m 4096 -device usb-ehci -blockdev driver=file,read-only=on,filename=/home/aengelen/Downloads/openSUSE-Tumbleweed-DVD-i586-Snapshot20220428-Media.iso,node-name=iso -device usb-storage,drive=iso,bootindex=0 -chardev pipe,id=shell,path=/tmp/pipe -device virtio-serial -device virtconsole,chardev=shell -device virtio-rng-pci -serial mon:stdio -nographic

(press a key, select 'Installation' (2), and accept the default
values. On my machine the 'cpage out of range' error is reproduced while
loading the Linux kernel about once in 7 attempts. With the fix in
this commit it no longer fails)

This problem was previously reported as a seabios problem in
https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/OUTHT5ISSQJGXPNTUPY3O5E5EPZJCHM3/
and as a nixos CI build failure in
https://github.com/NixOS/nixpkgs/issues/170803

Signed-off-by: Arnout Engelen <arnout@bzzt.net>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/hcd-ehci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index 33a8a377bd95..d4da8dcb8d15 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2011,7 +2011,10 @@ static int ehci_state_writeback(EHCIQueue *q)
     ehci_trace_qtd(q, NLPTR_GET(p->qtdaddr), (EHCIqtd *) &q->qh.next_qtd);
     qtd = (uint32_t *) &q->qh.next_qtd;
     addr = NLPTR_GET(p->qtdaddr);
-    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 2);
+    /* First write back the offset */
+    put_dwords(q->ehci, addr + 3 * sizeof(uint32_t), qtd + 3, 1);
+    /* Then write back the token, clearing the 'active' bit */
+    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 1);
     ehci_free_packet(p);
 
     /*
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347974.574299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNb-0004L5-Cm; Mon, 13 Jun 2022 11:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347974.574299; Mon, 13 Jun 2022 11:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNb-0004I5-5d; Mon, 13 Jun 2022 11:37:11 +0000
Received: by outflank-mailman (input) for mailman id 347974;
 Mon, 13 Jun 2022 11:37:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNZ-0003eY-5m
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:09 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26bc0433-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:06 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-294-d-L92S3vMnanKezqCR8Mgg-1; Mon, 13 Jun 2022 07:37:00 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5F65929AA3B9;
 Mon, 13 Jun 2022 11:37:00 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 2A9D9404E4C3;
 Mon, 13 Jun 2022 11:37:00 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 031B8180091C; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26bc0433-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120225;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qgE0u/psF+5HbFaMEFEBF+1+y1Gn5UXWEs7HICOeN3Q=;
	b=h+S36l/LCxfuEc3Y4TeKx8GaNlE/NcuWNEUhCygKkUvXQ5I3aYCIueI8GOWwh7WPCKO8ft
	04Tje2bRNd/fCvXfM/zocp0T7KQfZ+pvR9yPD+9rjK8QXfAH9OXZoO5Ma3HDd94ehI23A8
	3O871IA9ZCOfSsBcBN5J9TmDOT+s7wQ=
X-MC-Unique: d-L92S3vMnanKezqCR8Mgg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 03/16] ui/cocoa: Fix poweroff request code
Date: Mon, 13 Jun 2022 13:36:42 +0200
Message-Id: <20220613113655.3693872-4-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220529082508.89097-1-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/cocoa.m | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/ui/cocoa.m b/ui/cocoa.m
index 09a62817f2a9..84c84e98fc5e 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -35,6 +35,7 @@
 #include "ui/kbd-state.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/runstate.h"
+#include "sysemu/runstate-action.h"
 #include "sysemu/cpu-throttle.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block.h"
@@ -1290,7 +1291,10 @@ - (void)applicationWillTerminate:(NSNotification *)aNotification
 {
     COCOA_DEBUG("QemuCocoaAppController: applicationWillTerminate\n");
 
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    with_iothread_lock(^{
+        shutdown_action = SHUTDOWN_ACTION_POWEROFF;
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    });
 
     /*
      * Sleep here, because returning will cause OSX to kill us
-- 
2.36.1
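The intent of the hunk above: qemu_system_shutdown_request() honors the global shutdown_action (which options such as -no-shutdown / -action shutdown=pause can set to "pause"), so the Cocoa quit handler first forces it back to poweroff — the application is terminating either way. A minimal standalone sketch of that dispatch, with simplified, hypothetical names standing in for QEMU's runstate-action machinery:

```c
/* Hypothetical, simplified model of QEMU's shutdown_action dispatch;
 * the real logic lives in softmmu/runstate.c and uses QAPI enums. */
typedef enum {
    SHUTDOWN_ACTION_POWEROFF,
    SHUTDOWN_ACTION_PAUSE
} ShutdownAction;

/* e.g. the user started QEMU with -no-shutdown */
static ShutdownAction shutdown_action = SHUTDOWN_ACTION_PAUSE;

/* Returns 1 if a shutdown request actually powers the VM off,
 * 0 if it would merely pause the guest. */
static int shutdown_request_powers_off(void)
{
    return shutdown_action == SHUTDOWN_ACTION_POWEROFF;
}

/* Mirrors what the patched applicationWillTerminate: handler does
 * before calling qemu_system_shutdown_request(). */
static void cocoa_will_terminate(void)
{
    shutdown_action = SHUTDOWN_ACTION_POWEROFF;
}
```

In the real code this assignment happens under the iothread lock (the `with_iothread_lock` block in the diff), since shutdown_action is shared with the main loop.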



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347971.574281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNa-0003xy-5l; Mon, 13 Jun 2022 11:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347971.574281; Mon, 13 Jun 2022 11:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNZ-0003x2-Vt; Mon, 13 Jun 2022 11:37:09 +0000
Received: by outflank-mailman (input) for mailman id 347971;
 Mon, 13 Jun 2022 11:37:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNX-0003eY-Qs
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26358c7f-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-84-XcL-NPu1MMSUI68sguCZOw-1; Mon, 13 Jun 2022 07:37:04 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8D8598027EE;
 Mon, 13 Jun 2022 11:37:03 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 4BF6E1121314;
 Mon, 13 Jun 2022 11:37:03 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3F255180092E; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26358c7f-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120225;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HZ4OtpGl92/4vTO64NR8x1+3AJHChGb6d9Bq7oyIeKQ=;
	b=EK8l8Bjjx7WZAII0BCiPxAPZcq+tnQJ+XaUCEjEiMnhpqGSDMdQMnoUuEdo+xiOaXkFjXf
	scd/A0poM22sqtuOkTfIRoYHWi6wm0sP4wEGG++i40Qw/qMoBx8ZMahpOEYZr6Z0ZkSRon
	A8dy5z+IPO7ZA6a18vNMp2n/sfzby9U=
X-MC-Unique: XcL-NPu1MMSUI68sguCZOw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 06/16] meson: Add CanoKey
Date: Mon, 13 Jun 2022 13:36:45 +0200
Message-Id: <20220613113655.3693872-7-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6YRD6cxH21mms@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 meson_options.txt             | 2 ++
 hw/usb/Kconfig                | 5 +++++
 hw/usb/meson.build            | 5 +++++
 meson.build                   | 6 ++++++
 scripts/meson-buildoptions.sh | 3 +++
 5 files changed, 21 insertions(+)

diff --git a/meson_options.txt b/meson_options.txt
index 2de94af03712..0e8197386b99 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -189,6 +189,8 @@ option('spice_protocol', type : 'feature', value : 'auto',
        description: 'Spice protocol support')
 option('u2f', type : 'feature', value : 'auto',
        description: 'U2F emulation support')
+option('canokey', type : 'feature', value : 'auto',
+       description: 'CanoKey support')
 option('usb_redir', type : 'feature', value : 'auto',
        description: 'libusbredir support')
 option('l2tpv3', type : 'feature', value : 'auto',
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
index 53f8283ffdc1..ce4f4339763e 100644
--- a/hw/usb/Kconfig
+++ b/hw/usb/Kconfig
@@ -119,6 +119,11 @@ config USB_U2F
     default y
     depends on USB
 
+config USB_CANOKEY
+    bool
+    default y
+    depends on USB
+
 config IMX_USBPHY
     bool
     default y
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
index de853d780dd8..793df42e2127 100644
--- a/hw/usb/meson.build
+++ b/hw/usb/meson.build
@@ -63,6 +63,11 @@ if u2f.found()
   softmmu_ss.add(when: 'CONFIG_USB_U2F', if_true: [u2f, files('u2f-emulated.c')])
 endif
 
+# CanoKey
+if canokey.found()
+  softmmu_ss.add(when: 'CONFIG_USB_CANOKEY', if_true: [canokey, files('canokey.c')])
+endif
+
 # usb redirect
 if usbredir.found()
   usbredir_ss = ss.source_set()
diff --git a/meson.build b/meson.build
index 21cd949082dc..0c2e11ff0715 100644
--- a/meson.build
+++ b/meson.build
@@ -1408,6 +1408,12 @@ if have_system
                    method: 'pkg-config',
                    kwargs: static_kwargs)
 endif
+canokey = not_found
+if have_system
+  canokey = dependency('canokey-qemu', required: get_option('canokey'),
+                   method: 'pkg-config',
+                   kwargs: static_kwargs)
+endif
 usbredir = not_found
 if not get_option('usb_redir').auto() or have_system
   usbredir = dependency('libusbredirparser-0.5', required: get_option('usb_redir'),
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 00ea4d8cd169..1fc1d2e2c362 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -73,6 +73,7 @@ meson_options_help() {
   printf "%s\n" '  bpf             eBPF support'
   printf "%s\n" '  brlapi          brlapi character device driver'
   printf "%s\n" '  bzip2           bzip2 support for DMG images'
+  printf "%s\n" '  canokey         CanoKey support'
   printf "%s\n" '  cap-ng          cap_ng support'
   printf "%s\n" '  capstone        Whether and how to find the capstone library'
   printf "%s\n" '  cloop           cloop image format support'
@@ -204,6 +205,8 @@ _meson_option_parse() {
     --disable-brlapi) printf "%s" -Dbrlapi=disabled ;;
     --enable-bzip2) printf "%s" -Dbzip2=enabled ;;
     --disable-bzip2) printf "%s" -Dbzip2=disabled ;;
+    --enable-canokey) printf "%s" -Dcanokey=enabled ;;
+    --disable-canokey) printf "%s" -Dcanokey=disabled ;;
     --enable-cap-ng) printf "%s" -Dcap_ng=enabled ;;
     --disable-cap-ng) printf "%s" -Dcap_ng=disabled ;;
     --enable-capstone) printf "%s" -Dcapstone=enabled ;;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347975.574309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNc-0004Tu-0v; Mon, 13 Jun 2022 11:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347975.574309; Mon, 13 Jun 2022 11:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNb-0004RS-Lp; Mon, 13 Jun 2022 11:37:11 +0000
Received: by outflank-mailman (input) for mailman id 347975;
 Mon, 13 Jun 2022 11:37:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNa-0003eY-66
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:10 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27558830-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:07 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-189-0fjs8QgvNs-urwr1APxO5Q-1; Mon, 13 Jun 2022 07:37:02 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1FD5380B90D;
 Mon, 13 Jun 2022 11:37:02 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id AEE1B1410F34;
 Mon, 13 Jun 2022 11:37:01 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 23AC31800925; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27558830-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120226;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XMFb51C69E0Si128czVwJqlqgwkrMEmsc4JCjQtiyLU=;
	b=SZcXq/grJ5Bjkjgyeo1d+NkA8ZH5NUPw9Iy0UmL9f0oIIRJqsq/o/mGvW+nJnjlmCvvI71
	FyRBitZXPk7jHqabIKJmJ8NFAu0C4bu+3ffhL5hy6K02wYnRCsmMz68mCgPdGfa4FR/eBG
	Z9sGMozsUSMcHkeNgeEI7Vh3MgwdTU8=
X-MC-Unique: 0fjs8QgvNs-urwr1APxO5Q-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 05/16] hw/usb/canokey: Add trace events
Date: Mon, 13 Jun 2022 13:36:44 +0200
Message-Id: <20220613113655.3693872-6-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6RoDKQIxSkFwL@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.c    | 13 +++++++++++++
 hw/usb/trace-events | 16 ++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
index 6cb8b7cdb089..4a08b1cbd776 100644
--- a/hw/usb/canokey.c
+++ b/hw/usb/canokey.c
@@ -14,6 +14,7 @@
 #include "qapi/error.h"
 #include "hw/usb.h"
 #include "hw/qdev-properties.h"
+#include "trace.h"
 #include "desc.h"
 #include "canokey.h"
 
@@ -66,6 +67,7 @@ static const USBDesc desc_canokey = {
  */
 int canokey_emu_stall_ep(void *base, uint8_t ep)
 {
+    trace_canokey_emu_stall_ep(ep);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     key->ep_in_size[ep_in] = 0;
@@ -75,6 +77,7 @@ int canokey_emu_stall_ep(void *base, uint8_t ep)
 
 int canokey_emu_set_address(void *base, uint8_t addr)
 {
+    trace_canokey_emu_set_address(addr);
     CanoKeyState *key = base;
     key->dev.addr = addr;
     return 0;
@@ -83,6 +86,7 @@ int canokey_emu_set_address(void *base, uint8_t addr)
 int canokey_emu_prepare_receive(
         void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_prepare_receive(ep, size);
     CanoKeyState *key = base;
     key->ep_out[ep] = pbuf;
     key->ep_out_size[ep] = size;
@@ -92,6 +96,7 @@ int canokey_emu_prepare_receive(
 int canokey_emu_transmit(
         void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_transmit(ep, size);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
@@ -125,6 +130,7 @@ uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
  */
 static void canokey_handle_reset(USBDevice *dev)
 {
+    trace_canokey_handle_reset();
     CanoKeyState *key = CANOKEY(dev);
     for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
         key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
@@ -137,6 +143,7 @@ static void canokey_handle_reset(USBDevice *dev)
 static void canokey_handle_control(USBDevice *dev, USBPacket *p,
                int request, int value, int index, int length, uint8_t *data)
 {
+    trace_canokey_handle_control_setup(request, value, index, length);
     CanoKeyState *key = CANOKEY(dev);
 
     canokey_emu_setup(request, value, index, length);
@@ -144,6 +151,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     uint32_t dir_in = request & DeviceRequest;
     if (!dir_in) {
         /* OUT */
+        trace_canokey_handle_control_out();
         if (key->ep_out[0] != NULL) {
             memcpy(key->ep_out[0], data, length);
         }
@@ -163,6 +171,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     case CANOKEY_EP_IN_READY:
         memcpy(data, key->ep_in[0], key->ep_in_size[0]);
         p->actual_length = key->ep_in_size[0];
+        trace_canokey_handle_control_in(p->actual_length);
         /* reset state */
         key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
         key->ep_in_size[0] = 0;
@@ -182,6 +191,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
     uint32_t out_len;
     switch (p->pid) {
     case USB_TOKEN_OUT:
+        trace_canokey_handle_data_out(ep_out, p->iov.size);
         usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
         out_pos = 0;
         while (out_pos != p->iov.size) {
@@ -226,6 +236,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
                 key->ep_in_size[ep_in] = 0;
                 key->ep_in_pos[ep_in] = 0;
             }
+            trace_canokey_handle_data_in(ep_in, in_len);
             break;
         }
         break;
@@ -237,6 +248,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
 
 static void canokey_realize(USBDevice *base, Error **errp)
 {
+    trace_canokey_realize();
     CanoKeyState *key = CANOKEY(base);
 
     if (key->file == NULL) {
@@ -260,6 +272,7 @@ static void canokey_realize(USBDevice *base, Error **errp)
 
 static void canokey_unrealize(USBDevice *base)
 {
+    trace_canokey_unrealize();
 }
 
 static Property canokey_properties[] = {
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
index 9773cb53300d..914ca7166829 100644
--- a/hw/usb/trace-events
+++ b/hw/usb/trace-events
@@ -345,3 +345,19 @@ usb_serial_set_baud(int bus, int addr, int baud) "dev %d:%u baud rate %d"
 usb_serial_set_data(int bus, int addr, int parity, int data, int stop) "dev %d:%u parity %c, data bits %d, stop bits %d"
 usb_serial_set_flow_control(int bus, int addr, int index) "dev %d:%u flow control %d"
 usb_serial_set_xonxoff(int bus, int addr, uint8_t xon, uint8_t xoff) "dev %d:%u xon 0x%x xoff 0x%x"
+
+# canokey.c
+canokey_emu_stall_ep(uint8_t ep) "ep %d"
+canokey_emu_set_address(uint8_t addr) "addr %d"
+canokey_emu_prepare_receive(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_emu_transmit(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_thread_start(void)
+canokey_thread_stop(void)
+canokey_handle_reset(void)
+canokey_handle_control_setup(int request, int value, int index, int length) "request 0x%04X value 0x%04X index 0x%04X length 0x%04X"
+canokey_handle_control_out(void)
+canokey_handle_control_in(int actual_len) "len %d"
+canokey_handle_data_out(uint8_t ep_out, uint32_t out_len) "ep %d len %d"
+canokey_handle_data_in(uint8_t ep_in, uint32_t in_len) "ep %d len %d"
+canokey_realize(void)
+canokey_unrealize(void)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347973.574292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNa-000472-RF; Mon, 13 Jun 2022 11:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347973.574292; Mon, 13 Jun 2022 11:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNa-00044H-Ft; Mon, 13 Jun 2022 11:37:10 +0000
Received: by outflank-mailman (input) for mailman id 347973;
 Mon, 13 Jun 2022 11:37:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNY-0003eY-Cc
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:08 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 25fc9ad4-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-517-IuligG2ANNOtbllnN2TiEQ-1; Mon, 13 Jun 2022 07:37:01 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 78A86802E5B;
 Mon, 13 Jun 2022 11:37:00 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 16446C27E97;
 Mon, 13 Jun 2022 11:37:00 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id EB21C1800908; Mon, 13 Jun 2022 13:36:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25fc9ad4-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120224;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wx3J1U/WRMVytLzUKnVNwgCHbbzsbHu+DjsV7mqclmI=;
	b=amrHM4ipEeiRTQLE9kMWKYMXVzaROc7yI1/4/+A6EqSylCcko55lbJSHYI8kyQ9BjYtXeU
	64UpcqI3w5yPfLO/r8zunBOxsdbgWEce2gns8fH7ZmHhuYhTPAAkKCwqawPe5w+++O5aXx
	JRZnhV9qlkPwQxhPXdTgvfd8zH0mKSc=
X-MC-Unique: IuligG2ANNOtbllnN2TiEQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 02/16] ui/gtk-gl-area: create the requested GL context version
Date: Mon, 13 Jun 2022 13:36:41 +0200
Message-Id: <20220613113655.3693872-3-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8

From: Volker Rümelin <vr_qemu@t-online.de>

Since about 2018 virglrenderer (commit fa835b0f88 "vrend: don't
hardcode context version") tries to open the highest available GL
context version. This is done by creating the known GL context
versions from the highest to the lowest until (*create_gl_context)
returns a context != NULL.

This does not work properly with
the current QEMU gd_gl_area_create_context() function, because
gdk_gl_context_realize() on Wayland creates a version 3.0 legacy
context if the requested GL context version can't be created.

In order for virglrenderer to find the highest available GL
context version, return NULL if the created context version is
lower than the requested version.

This fixes the following error:
QEMU started with -device virtio-vga-gl -display gtk,gl=on.
Under Wayland, the guest window remains black and the following
information can be seen on the host.

gl_version 30 - compat profile
(qemu:5978): Gdk-WARNING **: 16:19:01.533:
  gdk_gl_context_set_required_version
  - GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.537:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.554:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.
vrend_renderer_fill_caps: Entering with stale GL error: 1282

To reproduce this error, an OpenGL driver is required on the host
that doesn't have the latest OpenGL extensions fully implemented.
An example for this is the Intel i965 driver on a Haswell processor.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-2-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 31 ++++++++++++++++++++++++++++++-
 ui/trace-events  |  1 +
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 0e20ea031d34..2e0129c28cd4 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -170,6 +170,23 @@ void gd_gl_area_switch(DisplayChangeListener *dcl,
     }
 }
 
+static int gd_cmp_gl_context_version(int major, int minor, QEMUGLParams *params)
+{
+    if (major > params->major_ver) {
+        return 1;
+    }
+    if (major < params->major_ver) {
+        return -1;
+    }
+    if (minor > params->minor_ver) {
+        return 1;
+    }
+    if (minor < params->minor_ver) {
+        return -1;
+    }
+    return 0;
+}
+
 QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
                                         QEMUGLParams *params)
 {
@@ -177,8 +194,8 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
     GdkWindow *window;
     GdkGLContext *ctx;
     GError *err = NULL;
+    int major, minor;
 
-    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
     window = gtk_widget_get_window(vc->gfx.drawing_area);
     ctx = gdk_window_create_gl_context(window, &err);
     if (err) {
@@ -196,6 +213,18 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
         g_clear_object(&ctx);
         return NULL;
     }
+
+    gdk_gl_context_make_current(ctx);
+    gdk_gl_context_get_version(ctx, &major, &minor);
+    gdk_gl_context_clear_current();
+    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
+
+    if (gd_cmp_gl_context_version(major, minor, params) == -1) {
+        /* created ctx version < requested version */
+        g_clear_object(&ctx);
+    }
+
+    trace_gd_gl_area_create_context(ctx, params->major_ver, params->minor_ver);
     return ctx;
 }
 
diff --git a/ui/trace-events b/ui/trace-events
index 1040ba0f88c7..a922f00e10b4 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_create_context(void *ctx, int major, int minor) "ctx=%p, major=%d, minor=%d"
 gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
-- 
2.36.1
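The check added in gd_gl_area_create_context() above is an ordinary lexicographic (major, minor) comparison: the context is discarded only when the version GDK actually created is strictly lower than the version virglrenderer asked for. A minimal standalone sketch of the same logic (the GLParams struct name here is hypothetical, standing in for QEMU's QEMUGLParams):

```c
/* Stand-in for QEMUGLParams: the GL version the caller requested. */
typedef struct {
    int major_ver;
    int minor_ver;
} GLParams;

/* Lexicographic compare of (major, minor) against the requested
 * version: >0 if newer, <0 if older, 0 if an exact match. */
static int cmp_gl_context_version(int major, int minor, const GLParams *p)
{
    if (major != p->major_ver) {
        return major > p->major_ver ? 1 : -1;
    }
    if (minor != p->minor_ver) {
        return minor > p->minor_ver ? 1 : -1;
    }
    return 0;
}
```

A result of -1 is the case the patch cares about: Wayland's GDK fell back to a lower (3.0 legacy) context, so returning NULL lets virglrenderer continue probing downward from the highest version itself.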



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347970.574275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNZ-0003uj-QX; Mon, 13 Jun 2022 11:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347970.574275; Mon, 13 Jun 2022 11:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNZ-0003uc-NY; Mon, 13 Jun 2022 11:37:09 +0000
Received: by outflank-mailman (input) for mailman id 347970;
 Mon, 13 Jun 2022 11:37:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNX-0003eX-MF
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 267835d0-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:06 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-617-fRcmcX6VO6ac2iBNsirumA-1; Mon, 13 Jun 2022 07:36:59 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CA5A7100BAC0;
 Mon, 13 Jun 2022 11:36:58 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 42140492C3B;
 Mon, 13 Jun 2022 11:36:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id DCEDF18007B8; Mon, 13 Jun 2022 13:36:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 267835d0-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120225;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h/Rq0T4qHdj5P4fsrjuVBaKf49W5WppINvxKA/pkbYI=;
	b=AjtSITVHGveVtwQ83/na/jnjS+JNhAN/KOBxdJC2iMyNPVeBkKffB+jdeIDxeCCt/Os3cZ
	RIMlW7i+q+TuIJI8xwovoJ6LMUUPVZm3zIW7kXF5D3k8uSLBr3LnD4f+ob1TeC7JGGE0iz
	FvD1f3jEy3MZwObx+Vq8u/+v238I6Ho=
X-MC-Unique: fRcmcX6VO6ac2iBNsirumA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 01/16] ui/gtk-gl-area: implement GL context destruction
Date: Mon, 13 Jun 2022 13:36:40 +0200
Message-Id: <20220613113655.3693872-2-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

From: Volker Rümelin <vr_qemu@t-online.de>

The counterpart function for gd_gl_area_create_context() is
currently empty. Implement the gd_gl_area_destroy_context()
function to avoid GL context leaks.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-1-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 8 +++++++-
 ui/trace-events  | 1 +
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index fc5a082eb846..0e20ea031d34 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -201,7 +201,13 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
 
 void gd_gl_area_destroy_context(DisplayGLCtx *dgc, QEMUGLContext ctx)
 {
-    /* FIXME */
+    GdkGLContext *current_ctx = gdk_gl_context_get_current();
+
+    trace_gd_gl_area_destroy_context(ctx, current_ctx);
+    if (ctx == current_ctx) {
+        gdk_gl_context_clear_current();
+    }
+    g_clear_object(&ctx);
 }
 
 void gd_gl_area_scanout_texture(DisplayChangeListener *dcl,
diff --git a/ui/trace-events b/ui/trace-events
index f78b5e66061f..1040ba0f88c7 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
 # vnc-auth-vencrypt.c
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347972.574287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNa-000424-I7; Mon, 13 Jun 2022 11:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347972.574287; Mon, 13 Jun 2022 11:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNa-0003z8-78; Mon, 13 Jun 2022 11:37:10 +0000
Received: by outflank-mailman (input) for mailman id 347972;
 Mon, 13 Jun 2022 11:37:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNY-0003eX-6p
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:08 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26cf7725-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:06 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-541-38gwYqZlPOWMLeH1dIGXsA-1; Mon, 13 Jun 2022 07:37:02 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1DE043C10226;
 Mon, 13 Jun 2022 11:37:02 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id AE3471410DDB;
 Mon, 13 Jun 2022 11:37:01 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 1412E180091D; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26cf7725-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120225;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=70clork2yZyYjdUXKnsu9VZIWlpSX2dH9Nrdwr2C7Z4=;
	b=ht4pV8TG1aq9WsPR6cHCuvKjxaIAZea4KAc27RNujDs1wkTGlfb4WqGGD6F7/7UPkYAoh4
	SN3KF+2yvwuD0lcP731wOfQFiMRl4mtvUtruYAKfGHOiIhy1nb6iSZCKLJQ+2a/J0M7FyJ
	I2QWbXxfdZpEjwddEMM6fM9KGmx8Bxg=
X-MC-Unique: 38gwYqZlPOWMLeH1dIGXsA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 04/16] hw/usb: Add CanoKey Implementation
Date: Mon, 13 Jun 2022 13:36:43 +0200
Message-Id: <20220613113655.3693872-5-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

This commit adds a new emulated device, CanoKey, to QEMU.

CanoKey implements its platform-independent features in canokey-core
(https://github.com/canokeys/canokey-core) and leaves the USB
implementation to the platform.

In this commit the USB part is implemented with QEMU's USB APIs, so the
emulated CanoKey can communicate with the guest OS over USB.

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6Mgph6f6Hc/zI@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.h |  69 +++++++++++
 hw/usb/canokey.c | 300 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 369 insertions(+)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c

diff --git a/hw/usb/canokey.h b/hw/usb/canokey.h
new file mode 100644
index 000000000000..24cf30420346
--- /dev/null
+++ b/hw/usb/canokey.h
@@ -0,0 +1,69 @@
+/*
+ * CanoKey QEMU device header.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache License, Version 2.0.
+ */
+
+#ifndef CANOKEY_H
+#define CANOKEY_H
+
+#include "hw/qdev-core.h"
+
+#define TYPE_CANOKEY "canokey"
+#define CANOKEY(obj) \
+    OBJECT_CHECK(CanoKeyState, (obj), TYPE_CANOKEY)
+
+/*
+ * State of CanoKey (i.e. hw/usb/canokey.c)
+ */
+
+/* CTRL INTR BULK */
+#define CANOKEY_EP_NUM 3
+/* BULK/INTR IN can be up to 1352 bytes, e.g. get key info */
+#define CANOKEY_EP_IN_BUFFER_SIZE 2048
+/* BULK OUT can be up to 270 bytes, e.g. PIV import cert */
+#define CANOKEY_EP_OUT_BUFFER_SIZE 512
+
+typedef enum {
+    CANOKEY_EP_IN_WAIT,
+    CANOKEY_EP_IN_READY,
+    CANOKEY_EP_IN_STALL
+} CanoKeyEPState;
+
+typedef struct CanoKeyState {
+    USBDevice dev;
+
+    /* IN packets from canokey device loop */
+    uint8_t ep_in[CANOKEY_EP_NUM][CANOKEY_EP_IN_BUFFER_SIZE];
+    /*
+     * See canokey_emu_transmit
+     *
+     * For large INTR IN transfers, multiple chunks of data are received
+     * from the canokey device loop; ep_in_size grows with each call
+     */
+    uint32_t ep_in_size[CANOKEY_EP_NUM];
+    /*
+     * Used in canokey_handle_data:
+     * for IN transfers larger than p->iov.size, handle_data() runs multiple times
+     *
+     * The difference between ep_in_pos and ep_in_size:
+     * We first increase ep_in_size to fill ep_in buffer in device_loop,
+     * then use ep_in_pos to submit data from ep_in buffer in handle_data
+     */
+    uint32_t ep_in_pos[CANOKEY_EP_NUM];
+    CanoKeyEPState ep_in_state[CANOKEY_EP_NUM];
+
+    /* OUT pointer to canokey recv buffer */
+    uint8_t *ep_out[CANOKEY_EP_NUM];
+    uint32_t ep_out_size[CANOKEY_EP_NUM];
+    /* For large BULK OUT, multiple writes to ep_out are needed */
+    uint8_t ep_out_buffer[CANOKEY_EP_NUM][CANOKEY_EP_OUT_BUFFER_SIZE];
+
+    /* Properties */
+    char *file; /* canokey-file */
+} CanoKeyState;
+
+#endif /* CANOKEY_H */
diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
new file mode 100644
index 000000000000..6cb8b7cdb089
--- /dev/null
+++ b/hw/usb/canokey.c
@@ -0,0 +1,300 @@
+/*
+ * CanoKey QEMU device implementation.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache License, Version 2.0.
+ */
+
+#include "qemu/osdep.h"
+#include <canokey-qemu.h>
+
+#include "qemu/module.h"
+#include "qapi/error.h"
+#include "hw/usb.h"
+#include "hw/qdev-properties.h"
+#include "desc.h"
+#include "canokey.h"
+
+#define CANOKEY_EP_IN(ep) ((ep) & 0x7F)
+
+#define CANOKEY_VENDOR_NUM     0x20a0
+#define CANOKEY_PRODUCT_NUM    0x42d2
+
+/*
+ * Placeholder: canokey-qemu implements its own USB descriptors,
+ * so we do not use usb_desc_handle_control()
+ */
+enum {
+    STR_MANUFACTURER = 1,
+    STR_PRODUCT,
+    STR_SERIALNUMBER
+};
+
+static const USBDescStrings desc_strings = {
+    [STR_MANUFACTURER]     = "canokeys.org",
+    [STR_PRODUCT]          = "CanoKey QEMU",
+    [STR_SERIALNUMBER]     = "0"
+};
+
+static const USBDescDevice desc_device_canokey = {
+    .bcdUSB                        = 0x0,
+    .bMaxPacketSize0               = 16,
+    .bNumConfigurations            = 0,
+    .confs = NULL,
+};
+
+static const USBDesc desc_canokey = {
+    .id = {
+        .idVendor          = CANOKEY_VENDOR_NUM,
+        .idProduct         = CANOKEY_PRODUCT_NUM,
+        .bcdDevice         = 0x0100,
+        .iManufacturer     = STR_MANUFACTURER,
+        .iProduct          = STR_PRODUCT,
+        .iSerialNumber     = STR_SERIALNUMBER,
+    },
+    .full = &desc_device_canokey,
+    .high = &desc_device_canokey,
+    .str  = desc_strings,
+};
+
+
+/*
+ * libcanokey-qemu.so side functions
+ * All functions are called from canokey_emu_device_loop
+ */
+int canokey_emu_stall_ep(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    key->ep_in_size[ep_in] = 0;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_STALL;
+    return 0;
+}
+
+int canokey_emu_set_address(void *base, uint8_t addr)
+{
+    CanoKeyState *key = base;
+    key->dev.addr = addr;
+    return 0;
+}
+
+int canokey_emu_prepare_receive(
+        void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    key->ep_out[ep] = pbuf;
+    key->ep_out_size[ep] = size;
+    return 0;
+}
+
+int canokey_emu_transmit(
+        void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
+            pbuf, size);
+    key->ep_in_size[ep_in] += size;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_READY;
+    /*
+     * ready for more data in device loop
+     *
+     * Note: this is a quirk for CanoKey CTAPHID,
+     * which calls emu_transmit multiple times in one device_loop;
+     * without a data_in call it would get stuck in device_loop.
+     * This has no side effect for CCID, which is strictly an
+     * OUT-then-IN transfer, but it does affect Control transfers.
+     */
+    if (ep_in != 0) {
+        canokey_emu_data_in(ep_in);
+    }
+    return 0;
+}
+
+uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    return key->ep_out_size[ep];
+}
+
+/*
+ * QEMU side functions
+ */
+static void canokey_handle_reset(USBDevice *dev)
+{
+    CanoKeyState *key = CANOKEY(dev);
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_pos[i] = 0;
+        key->ep_in_size[i] = 0;
+    }
+    canokey_emu_reset();
+}
+
+static void canokey_handle_control(USBDevice *dev, USBPacket *p,
+               int request, int value, int index, int length, uint8_t *data)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    canokey_emu_setup(request, value, index, length);
+
+    uint32_t dir_in = request & DeviceRequest;
+    if (!dir_in) {
+        /* OUT */
+        if (key->ep_out[0] != NULL) {
+            memcpy(key->ep_out[0], data, length);
+        }
+        canokey_emu_data_out(p->ep->nr, data);
+    }
+
+    canokey_emu_device_loop();
+
+    /* IN */
+    switch (key->ep_in_state[0]) {
+    case CANOKEY_EP_IN_WAIT:
+        p->status = USB_RET_NAK;
+        break;
+    case CANOKEY_EP_IN_STALL:
+        p->status = USB_RET_STALL;
+        break;
+    case CANOKEY_EP_IN_READY:
+        memcpy(data, key->ep_in[0], key->ep_in_size[0]);
+        p->actual_length = key->ep_in_size[0];
+        /* reset state */
+        key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[0] = 0;
+        key->ep_in_pos[0] = 0;
+        break;
+    }
+}
+
+static void canokey_handle_data(USBDevice *dev, USBPacket *p)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    uint8_t ep_in = CANOKEY_EP_IN(p->ep->nr);
+    uint8_t ep_out = p->ep->nr;
+    uint32_t in_len;
+    uint32_t out_pos;
+    uint32_t out_len;
+    switch (p->pid) {
+    case USB_TOKEN_OUT:
+        usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
+        out_pos = 0;
+        while (out_pos != p->iov.size) {
+            /*
+             * key->ep_out[ep_out] set by prepare_receive
+             * to be a buffer inside libcanokey-qemu.so
+             * key->ep_out_size[ep_out] set by prepare_receive
+             * to be the buffer length
+             */
+            out_len = MIN(p->iov.size - out_pos, key->ep_out_size[ep_out]);
+            memcpy(key->ep_out[ep_out],
+                    key->ep_out_buffer[ep_out] + out_pos, out_len);
+            out_pos += out_len;
+            /* update ep_out_size to actual len */
+            key->ep_out_size[ep_out] = out_len;
+            canokey_emu_data_out(ep_out, NULL);
+        }
+        break;
+    case USB_TOKEN_IN:
+        if (key->ep_in_pos[ep_in] == 0) { /* first time IN */
+            canokey_emu_data_in(ep_in);
+            canokey_emu_device_loop(); /* may call transmit multiple times */
+        }
+        switch (key->ep_in_state[ep_in]) {
+        case CANOKEY_EP_IN_WAIT:
+            /* NAK for early INTR IN */
+            p->status = USB_RET_NAK;
+            break;
+        case CANOKEY_EP_IN_STALL:
+            p->status = USB_RET_STALL;
+            break;
+        case CANOKEY_EP_IN_READY:
+            /* submit part of ep_in buffer to USBPacket */
+            in_len = MIN(key->ep_in_size[ep_in] - key->ep_in_pos[ep_in],
+                    p->iov.size);
+            usb_packet_copy(p,
+                    key->ep_in[ep_in] + key->ep_in_pos[ep_in], in_len);
+            key->ep_in_pos[ep_in] += in_len;
+            /* reset state if all data submitted */
+            if (key->ep_in_pos[ep_in] == key->ep_in_size[ep_in]) {
+                key->ep_in_state[ep_in] = CANOKEY_EP_IN_WAIT;
+                key->ep_in_size[ep_in] = 0;
+                key->ep_in_pos[ep_in] = 0;
+            }
+            break;
+        }
+        break;
+    default:
+        p->status = USB_RET_STALL;
+        break;
+    }
+}
+
+static void canokey_realize(USBDevice *base, Error **errp)
+{
+    CanoKeyState *key = CANOKEY(base);
+
+    if (key->file == NULL) {
+        error_setg(errp, "You must provide file=/path/to/canokey-file");
+        return;
+    }
+
+    usb_desc_init(base);
+
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[i] = 0;
+        key->ep_in_pos[i] = 0;
+    }
+
+    if (canokey_emu_init(key, key->file)) {
+        error_setg(errp, "canokey cannot create or read %s", key->file);
+        return;
+    }
+}
+
+static void canokey_unrealize(USBDevice *base)
+{
+}
+
+static Property canokey_properties[] = {
+    DEFINE_PROP_STRING("file", CanoKeyState, file),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void canokey_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    USBDeviceClass *uc = USB_DEVICE_CLASS(klass);
+
+    uc->product_desc   = "CanoKey QEMU";
+    uc->usb_desc       = &desc_canokey;
+    uc->handle_reset   = canokey_handle_reset;
+    uc->handle_control = canokey_handle_control;
+    uc->handle_data    = canokey_handle_data;
+    uc->handle_attach  = usb_desc_attach;
+    uc->realize        = canokey_realize;
+    uc->unrealize      = canokey_unrealize;
+    dc->desc           = "CanoKey QEMU";
+    device_class_set_props(dc, canokey_properties);
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static const TypeInfo canokey_info = {
+    .name = TYPE_CANOKEY,
+    .parent = TYPE_USB_DEVICE,
+    .instance_size = sizeof(CanoKeyState),
+    .class_init = canokey_class_init
+};
+
+static void canokey_register_types(void)
+{
+    type_register_static(&canokey_info);
+}
+
+type_init(canokey_register_types)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347969.574265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNY-0003f4-HX; Mon, 13 Jun 2022 11:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347969.574265; Mon, 13 Jun 2022 11:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNY-0003ew-Ea; Mon, 13 Jun 2022 11:37:08 +0000
Received: by outflank-mailman (input) for mailman id 347969;
 Mon, 13 Jun 2022 11:37:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNW-0003eX-U2
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24fd8cdd-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:04 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-458-ot4G2UMAMfGhoPZBJ64vxA-1; Mon, 13 Jun 2022 07:36:59 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EA9AA802804;
 Mon, 13 Jun 2022 11:36:58 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 703A92166B26;
 Mon, 13 Jun 2022 11:36:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id CFF951800626; Mon, 13 Jun 2022 13:36:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24fd8cdd-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120222;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=duvBwvox7nwiHgrFheu6u9TsgJfSo6ljmtps4Jfms7E=;
	b=WMEy6dN3zN2T0aeIPWSQeO6016dN+N3igYiKGACl5uQp54ZG9Mwkv0tmAMaXh0LmJQWX/2
	R9JI6tkSSrfCzSKusw6f/YFUkxHeNL6YJ+04aLSta6jGQLv1H1K/YmQRakrUZLi9Z+JVSI
	eH02yfCz7L6ZaT41ekdHbO1VkRi5XAg=
X-MC-Unique: ot4G2UMAMfGhoPZBJ64vxA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 00/16] Kraxel 20220613 patches
Date: Mon, 13 Jun 2022 13:36:39 +0200
Message-Id: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:

  Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 21:13:27 -0700)

are available in the Git repository at:

  git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request

for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:

  ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)

----------------------------------------------------------------
usb: add CanoKey device, fixes for ehci + redir
ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
virtio-gpu: scanout flush fix

----------------------------------------------------------------

Akihiko Odaki (4):
  ui/cocoa: Fix poweroff request code
  ui/console: Do not return a value with ui_info
  ui: Deliver refresh rate via QemuUIInfo
  virtio-gpu: Respect UI refresh rate for EDID

Arnout Engelen (1):
  hw/usb/hcd-ehci: fix writeback order

Daniel P. Berrangé (1):
  ui: move 'pc-bios/keymaps' to 'ui/keymaps'

Dongwon Kim (1):
  virtio-gpu: update done only on the scanout associated with rect

Hongren (Zenithal) Zheng (6):
  hw/usb: Add CanoKey Implementation
  hw/usb/canokey: Add trace events
  meson: Add CanoKey
  docs: Add CanoKey documentation
  docs/system/devices/usb: Add CanoKey to USB devices examples
  MAINTAINERS: add myself as CanoKey maintainer

Joelle van Dyne (1):
  usbredir: avoid queuing hello packet on snapshot restore

Volker Rümelin (2):
  ui/gtk-gl-area: implement GL context destruction
  ui/gtk-gl-area: create the requested GL context version

 configure                           |   4 +
 meson_options.txt                   |   2 +
 hw/usb/canokey.h                    |  69 ++++++
 include/hw/virtio/virtio-gpu.h      |   1 +
 include/ui/console.h                |   4 +-
 include/ui/gtk.h                    |   2 +-
 hw/display/virtio-gpu-base.c        |   7 +-
 hw/display/virtio-gpu.c             |   4 +
 hw/display/virtio-vga.c             |   5 +-
 hw/display/xenfb.c                  |  14 +-
 hw/usb/canokey.c                    | 313 ++++++++++++++++++++++++++++
 hw/usb/hcd-ehci.c                   |   5 +-
 hw/usb/redirect.c                   |   3 +-
 hw/vfio/display.c                   |   8 +-
 ui/console.c                        |   6 -
 ui/gtk-egl.c                        |   4 +-
 ui/gtk-gl-area.c                    |  42 +++-
 ui/gtk.c                            |  45 ++--
 MAINTAINERS                         |   8 +
 docs/system/device-emulation.rst    |   1 +
 docs/system/devices/canokey.rst     | 168 +++++++++++++++
 docs/system/devices/usb.rst         |   4 +
 hw/usb/Kconfig                      |   5 +
 hw/usb/meson.build                  |   5 +
 hw/usb/trace-events                 |  16 ++
 meson.build                         |   6 +
 pc-bios/meson.build                 |   1 -
 scripts/meson-buildoptions.sh       |   3 +
 ui/cocoa.m                          |   6 +-
 {pc-bios => ui}/keymaps/ar          |   0
 {pc-bios => ui}/keymaps/bepo        |   0
 {pc-bios => ui}/keymaps/cz          |   0
 {pc-bios => ui}/keymaps/da          |   0
 {pc-bios => ui}/keymaps/de          |   0
 {pc-bios => ui}/keymaps/de-ch       |   0
 {pc-bios => ui}/keymaps/en-gb       |   0
 {pc-bios => ui}/keymaps/en-us       |   0
 {pc-bios => ui}/keymaps/es          |   0
 {pc-bios => ui}/keymaps/et          |   0
 {pc-bios => ui}/keymaps/fi          |   0
 {pc-bios => ui}/keymaps/fo          |   0
 {pc-bios => ui}/keymaps/fr          |   0
 {pc-bios => ui}/keymaps/fr-be       |   0
 {pc-bios => ui}/keymaps/fr-ca       |   0
 {pc-bios => ui}/keymaps/fr-ch       |   0
 {pc-bios => ui}/keymaps/hr          |   0
 {pc-bios => ui}/keymaps/hu          |   0
 {pc-bios => ui}/keymaps/is          |   0
 {pc-bios => ui}/keymaps/it          |   0
 {pc-bios => ui}/keymaps/ja          |   0
 {pc-bios => ui}/keymaps/lt          |   0
 {pc-bios => ui}/keymaps/lv          |   0
 {pc-bios => ui}/keymaps/meson.build |   0
 {pc-bios => ui}/keymaps/mk          |   0
 {pc-bios => ui}/keymaps/nl          |   0
 {pc-bios => ui}/keymaps/no          |   0
 {pc-bios => ui}/keymaps/pl          |   0
 {pc-bios => ui}/keymaps/pt          |   0
 {pc-bios => ui}/keymaps/pt-br       |   0
 {pc-bios => ui}/keymaps/ru          |   0
 {pc-bios => ui}/keymaps/sl          |   0
 {pc-bios => ui}/keymaps/sv          |   0
 {pc-bios => ui}/keymaps/th          |   0
 {pc-bios => ui}/keymaps/tr          |   0
 ui/meson.build                      |   1 +
 ui/trace-events                     |   2 +
 66 files changed, 712 insertions(+), 52 deletions(-)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c
 create mode 100644 docs/system/devices/canokey.rst
 rename {pc-bios => ui}/keymaps/ar (100%)
 rename {pc-bios => ui}/keymaps/bepo (100%)
 rename {pc-bios => ui}/keymaps/cz (100%)
 rename {pc-bios => ui}/keymaps/da (100%)
 rename {pc-bios => ui}/keymaps/de (100%)
 rename {pc-bios => ui}/keymaps/de-ch (100%)
 rename {pc-bios => ui}/keymaps/en-gb (100%)
 rename {pc-bios => ui}/keymaps/en-us (100%)
 rename {pc-bios => ui}/keymaps/es (100%)
 rename {pc-bios => ui}/keymaps/et (100%)
 rename {pc-bios => ui}/keymaps/fi (100%)
 rename {pc-bios => ui}/keymaps/fo (100%)
 rename {pc-bios => ui}/keymaps/fr (100%)
 rename {pc-bios => ui}/keymaps/fr-be (100%)
 rename {pc-bios => ui}/keymaps/fr-ca (100%)
 rename {pc-bios => ui}/keymaps/fr-ch (100%)
 rename {pc-bios => ui}/keymaps/hr (100%)
 rename {pc-bios => ui}/keymaps/hu (100%)
 rename {pc-bios => ui}/keymaps/is (100%)
 rename {pc-bios => ui}/keymaps/it (100%)
 rename {pc-bios => ui}/keymaps/ja (100%)
 rename {pc-bios => ui}/keymaps/lt (100%)
 rename {pc-bios => ui}/keymaps/lv (100%)
 rename {pc-bios => ui}/keymaps/meson.build (100%)
 rename {pc-bios => ui}/keymaps/mk (100%)
 rename {pc-bios => ui}/keymaps/nl (100%)
 rename {pc-bios => ui}/keymaps/no (100%)
 rename {pc-bios => ui}/keymaps/pl (100%)
 rename {pc-bios => ui}/keymaps/pt (100%)
 rename {pc-bios => ui}/keymaps/pt-br (100%)
 rename {pc-bios => ui}/keymaps/ru (100%)
 rename {pc-bios => ui}/keymaps/sl (100%)
 rename {pc-bios => ui}/keymaps/sv (100%)
 rename {pc-bios => ui}/keymaps/th (100%)
 rename {pc-bios => ui}/keymaps/tr (100%)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347980.574365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNh-0005rh-I0; Mon, 13 Jun 2022 11:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347980.574365; Mon, 13 Jun 2022 11:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNg-0005mi-P8; Mon, 13 Jun 2022 11:37:16 +0000
Received: by outflank-mailman (input) for mailman id 347980;
 Mon, 13 Jun 2022 11:37:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNd-0003eX-T4
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a94b216-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:13 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-546-czJqyPJ_Ov-1NAke0vpC9w-1; Mon, 13 Jun 2022 07:37:11 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4C54C833975;
 Mon, 13 Jun 2022 11:37:10 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 190C740C1247;
 Mon, 13 Jun 2022 11:37:10 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id C60EF1800999; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a94b216-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120232;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fjkT/ghvgIP+ASWCtRqN8cFZ3mDw+gAHZxFgoi4zHGM=;
	b=Ww6QgDXtpfiElI2nnlXuCkIzBD8thxW+HH/0px5Ug+/ps4Fx4sPn78QMlXWH5DxlihiojO
	aA5/1SAwE5beBZYRINs6KlyrnSBxXzjOfRwiY9hvYfBAbZDQLM+OoUWzzNQoA/WTEFiZ41
	3oIvmRMBf+9zXkFKVU1oFXxI9OLxPJM=
X-MC-Unique: czJqyPJ_Ov-1NAke0vpC9w-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 15/16] virtio-gpu: Respect UI refresh rate for EDID
Date: Mon, 13 Jun 2022 13:36:54 +0200
Message-Id: <20220613113655.3693872-16-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-4-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/hw/virtio/virtio-gpu.h | 1 +
 hw/display/virtio-gpu-base.c   | 1 +
 hw/display/virtio-gpu.c        | 1 +
 3 files changed, 3 insertions(+)

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index afff9e158e31..2e28507efe21 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -80,6 +80,7 @@ struct virtio_gpu_scanout {
 struct virtio_gpu_requested_state {
     uint16_t width_mm, height_mm;
     uint32_t width, height;
+    uint32_t refresh_rate;
     int x, y;
 };
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index b21d6e5b0be8..a29f191aa82e 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -79,6 +79,7 @@ static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     g->req_state[idx].x = info->xoff;
     g->req_state[idx].y = info->yoff;
+    g->req_state[idx].refresh_rate = info->refresh_rate;
     g->req_state[idx].width = info->width;
     g->req_state[idx].height = info->height;
     g->req_state[idx].width_mm = info->width_mm;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 55c6dd576318..20cc703dcc6e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -217,6 +217,7 @@ virtio_gpu_generate_edid(VirtIOGPU *g, int scanout,
         .height_mm = b->req_state[scanout].height_mm,
         .prefx = b->req_state[scanout].width,
         .prefy = b->req_state[scanout].height,
+        .refresh_rate = b->req_state[scanout].refresh_rate,
     };
 
     edid->size = cpu_to_le32(sizeof(edid->edid));
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347981.574375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNi-00068b-S3; Mon, 13 Jun 2022 11:37:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347981.574375; Mon, 13 Jun 2022 11:37:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNh-00063G-SZ; Mon, 13 Jun 2022 11:37:17 +0000
Received: by outflank-mailman (input) for mailman id 347981;
 Mon, 13 Jun 2022 11:37:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNe-0003eY-8x
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a964cce-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:13 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-382-FkGXzRJ0PEWejaqlg6YRjg-1; Mon, 13 Jun 2022 07:37:09 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D7C5685A580;
 Mon, 13 Jun 2022 11:37:08 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9D66940CF8EB;
 Mon, 13 Jun 2022 11:37:08 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 9C5BB1800995; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a964cce-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120232;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hTTJvnegxxvfJsWseZ7+SmUjba5GRnY9ME/e3uOvaP0=;
	b=BrBYKwRauwgZI0QtOUWV3bf9up6SdDUFYcu6ACMNpqH0sxvYuYXeDfcat53CMkSb6evpcE
	Ii1xzDZVNPAlJckIr0cdg1pcD2fzbyEWasenRbKOrDGYJVkE+G2z0i/F8uBJvO7Ky9RvKG
	gYTy+PA9UOGi+yZuLybZKuW7W0D6GH4=
X-MC-Unique: FkGXzRJ0PEWejaqlg6YRjg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 13/16] ui/console: Do not return a value with ui_info
Date: Mon, 13 Jun 2022 13:36:52 +0200
Message-Id: <20220613113655.3693872-14-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1

From: Akihiko Odaki <akihiko.odaki@gmail.com>

The returned value is not used and is misleading.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-2-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h         | 2 +-
 hw/display/virtio-gpu-base.c | 6 +++---
 hw/display/virtio-vga.c      | 5 ++---
 hw/vfio/display.c            | 8 +++-----
 4 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index c44b28a972ca..642d6f5248cf 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -432,7 +432,7 @@ typedef struct GraphicHwOps {
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
     void (*update_interval)(void *opaque, uint64_t interval);
-    int (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
+    void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 790cec333c8c..b21d6e5b0be8 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -69,12 +69,12 @@ static void virtio_gpu_notify_event(VirtIOGPUBase *g, uint32_t event_type)
     virtio_notify_config(&g->parent_obj);
 }
 
-static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOGPUBase *g = opaque;
 
     if (idx >= g->conf.max_outputs) {
-        return -1;
+        return;
     }
 
     g->req_state[idx].x = info->xoff;
@@ -92,7 +92,7 @@ static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     /* send event to guest */
     virtio_gpu_notify_event(g, VIRTIO_GPU_EVENT_DISPLAY);
-    return 0;
+    return;
 }
 
 static void
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index c206b5da384b..4dcb34c4a740 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -47,15 +47,14 @@ static void virtio_vga_base_text_update(void *opaque, console_ch_t *chardata)
     }
 }
 
-static int virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOVGABase *vvga = opaque;
     VirtIOGPUBase *g = vvga->vgpu;
 
     if (g->hw_ops->ui_info) {
-        return g->hw_ops->ui_info(g, idx, info);
+        g->hw_ops->ui_info(g, idx, info);
     }
-    return -1;
 }
 
 static void virtio_vga_base_gl_block(void *opaque, bool block)
diff --git a/hw/vfio/display.c b/hw/vfio/display.c
index 89bc90508fb8..78f4d82c1c35 100644
--- a/hw/vfio/display.c
+++ b/hw/vfio/display.c
@@ -106,14 +106,14 @@ err:
     return;
 }
 
-static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
-                                     QemuUIInfo *info)
+static void vfio_display_edid_ui_info(void *opaque, uint32_t idx,
+                                      QemuUIInfo *info)
 {
     VFIOPCIDevice *vdev = opaque;
     VFIODisplay *dpy = vdev->dpy;
 
     if (!dpy->edid_regs) {
-        return 0;
+        return;
     }
 
     if (info->width && info->height) {
@@ -121,8 +121,6 @@ static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
     } else {
         vfio_display_edid_update(vdev, false, 0, 0);
     }
-
-    return 0;
 }
 
 static void vfio_display_edid_init(VFIOPCIDevice *vdev)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347982.574384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNk-0006R9-98; Mon, 13 Jun 2022 11:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347982.574384; Mon, 13 Jun 2022 11:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNj-0006IS-2w; Mon, 13 Jun 2022 11:37:19 +0000
Received: by outflank-mailman (input) for mailman id 347982;
 Mon, 13 Jun 2022 11:37:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNf-0003eY-76
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2afc526e-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:13 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-453-ZA2PRSFFPTCRRQFx6USuKA-1; Mon, 13 Jun 2022 07:37:09 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4C1BB1C0F682;
 Mon, 13 Jun 2022 11:37:08 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 183B32166B29;
 Mon, 13 Jun 2022 11:37:08 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 898F11800993; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2afc526e-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120233;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nK8ui8MGXo28dreQTzjS6bbhtNnN1TEyDDyCd/Q1Dgo=;
	b=ShSwsLEc6+lolBaSVZna0NxTqKJHDFKc/jukYgpMVVQZWdGD6iqvIfqIvABDWOz5XyD3Qu
	v5zRmkuknRcRsIkOX+UjXdCs0cXfOiCjz+EUwtyrquPY1UzHAW4nuRGgD47o9x0GRHMW5P
	2IF7yoSQzXoKQN0u3Kw5RKdTn2cnYtA=
X-MC-Unique: ZA2PRSFFPTCRRQFx6USuKA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Dongwon Kim <dongwon.kim@intel.com>,
	Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: [PULL 12/16] virtio-gpu: update done only on the scanout associated with rect
Date: Mon, 13 Jun 2022 13:36:51 +0200
Message-Id: <20220613113655.3693872-13-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

From: Dongwon Kim <dongwon.kim@intel.com>

Only the scanouts containing the rect area that comes with the
resource-flush request from the guest need to be updated.
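The containment test added by this patch can be sketched as a small
standalone predicate. This is an illustrative helper, not QEMU's code;
the parameter names are assumptions, but the comparisons mirror the
four conditions added to virtio_gpu_resource_flush():

```c
#include <stdbool.h>

/* Hypothetical sketch of the check this patch adds: a flush rectangle
 * (rx, ry, rw, rh) should only trigger an update on a scanout whose
 * region (sx, sy, sw, sh) fully contains it. */
static bool rect_inside_scanout(int rx, int ry, int rw, int rh,
                                int sx, int sy, int sw, int sh)
{
    return rx >= sx && ry >= sy &&
           rx + rw <= sx + sw &&
           ry + rh <= sy + sh;
}
```

A flush of a 100x100 region at (10, 10) falls inside a 1920x1080
scanout at the origin, while the same region at (1900, 0) crosses the
right edge and is skipped.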

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Message-Id: <20220505214030.4261-1-dongwon.kim@intel.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/display/virtio-gpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index cd4a56056fd9..55c6dd576318 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -514,6 +514,9 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
         for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
             scanout = &g->parent_obj.scanout[i];
             if (scanout->resource_id == res->resource_id &&
+                rf.r.x >= scanout->x && rf.r.y >= scanout->y &&
+                rf.r.x + rf.r.width <= scanout->x + scanout->width &&
+                rf.r.y + rf.r.height <= scanout->y + scanout->height &&
                 console_has_gl(scanout->con)) {
                 dpy_gl_update(scanout->con, 0, 0, scanout->width,
                               scanout->height);
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347983.574393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNl-0006iQ-EW; Mon, 13 Jun 2022 11:37:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347983.574393; Mon, 13 Jun 2022 11:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNk-0006cd-B5; Mon, 13 Jun 2022 11:37:20 +0000
Received: by outflank-mailman (input) for mailman id 347983;
 Mon, 13 Jun 2022 11:37:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNf-0003eX-7l
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b61a9a0-eb0d-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 13:37:14 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-393-QhGqS3mJPgWJlaJrwV9AAQ-1; Mon, 13 Jun 2022 07:37:08 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6085A29ABA0C;
 Mon, 13 Jun 2022 11:37:07 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 69AC61415103;
 Mon, 13 Jun 2022 11:37:06 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 7D149180098C; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b61a9a0-eb0d-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120233;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZStqjDDMtLMpDsz58F8O29JWAJ6F3RkNGXfWxICk7zU=;
	b=fCm3a/3IykoUiHzE8p7cym22MSTvpjCjvtz9Jm8n7LzbXOuxETVo4e1dvHul2bGb6VkeKS
	vM5wQ6r7aoIMCCPxArUzwz3bgagEE+F9OQIwW7LTh6Bz2GVTBGQO48Krlg3rDJZ1WpVlnj
	Xg61MGlpmZJKDvOfzO4O8yGv7AcF0l0=
X-MC-Unique: QhGqS3mJPgWJlaJrwV9AAQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Joelle van Dyne <j@getutm.app>
Subject: [PULL 11/16] usbredir: avoid queuing hello packet on snapshot restore
Date: Mon, 13 Jun 2022 13:36:50 +0200
Message-Id: <20220613113655.3693872-12-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: Joelle van Dyne <j@getutm.app>

When launching QEMU with "-loadvm", usbredir_create_parser() should avoid
setting up the hello packet (just as with "-incoming"). On the latest
version of libusbredir, usbredirparser_unserialize() will return an error
if the parser is not "pristine."
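The flag selection this patch changes can be sketched in isolation.
The enum values and the flag constant below are placeholders, not the
real QEMU/libusbredir definitions; the point is only that snapshot
restore (which runs in the prelaunch state) now suppresses the hello
packet the same way incoming migration does:

```c
/* Placeholder run states standing in for QEMU's RunState enum. */
enum RunState { RUN_STATE_PRELAUNCH, RUN_STATE_INMIGRATE, RUN_STATE_RUNNING };

/* Placeholder value for usbredirparser's no-hello flag. */
#define PARSER_FL_NO_HELLO 0x01

/* Sketch of the patched logic: suppress the hello packet both for live
 * migration (-incoming) and for snapshot restore (-loadvm). */
static int parser_flags(enum RunState s)
{
    int flags = 0;
    if (s == RUN_STATE_INMIGRATE || s == RUN_STATE_PRELAUNCH) {
        flags |= PARSER_FL_NO_HELLO;
    }
    return flags;
}
```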

Signed-off-by: Joelle van Dyne <j@getutm.app>
Message-Id: <20220507041850.98716-1-j@getutm.app>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/redirect.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc0b..1bd30efc3ef0 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1280,7 +1280,8 @@ static void usbredir_create_parser(USBRedirDevice *dev)
     }
 #endif
 
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
+    if (runstate_check(RUN_STATE_INMIGRATE) ||
+        runstate_check(RUN_STATE_PRELAUNCH)) {
         flags |= usbredirparser_fl_no_hello;
     }
     usbredirparser_init(dev->parser, VERSION, caps, USB_REDIR_CAPS_SIZE,
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:37:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.347984.574406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNn-0007HI-ME; Mon, 13 Jun 2022 11:37:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 347984.574406; Mon, 13 Jun 2022 11:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iNm-0007AS-Q8; Mon, 13 Jun 2022 11:37:22 +0000
Received: by outflank-mailman (input) for mailman id 347984;
 Mon, 13 Jun 2022 11:37:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNg-0003eY-72
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:16 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b4ec38a-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:14 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-538-4jw5iIGhO3-Xtypzh9OvfQ-1; Mon, 13 Jun 2022 07:37:10 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id DC26B3C10221;
 Mon, 13 Jun 2022 11:37:09 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 82CD718EA1;
 Mon, 13 Jun 2022 11:37:09 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id B578C1800996; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b4ec38a-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120233;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ebiRccH2awWxGwAppvEzoFJ/M1x++CURc9+zFzjg06Y=;
	b=ULmjhGTFSRW37t2Ny9X73SCFOf/8UX60iYuYYH3LmLQSgiYfEv/gO01wYueT4GZXh16mLU
	Ko+C2VvzAjSHEe0L8YNnPUtirAZCSh+WvSPs0j4HK5LkO65OvvU5Wv6EOBfo+BRsGHR4fN
	Yy5HWliroLuWhHShg09cHghbCD6wHfQ=
X-MC-Unique: 4jw5iIGhO3-Xtypzh9OvfQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 14/16] ui: Deliver refresh rate via QemuUIInfo
Date: Mon, 13 Jun 2022 13:36:53 +0200
Message-Id: <20220613113655.3693872-15-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5

From: Akihiko Odaki <akihiko.odaki@gmail.com>

This change adds a new member, refresh_rate, to QemuUIInfo in
include/ui/console.h. It represents the refresh rate of the
physical display backend, and it is a better source than the
GUI update interval for the refresh rate which the emulated device
reports:
- sdl may set a GUI update interval shorter than the refresh rate
  of the physical display to respond to user-generated events.
- sdl and vnc aggressively change the GUI update interval, but
  a guest is typically not designed to respond to frequent
  refresh rate changes, or frequent "display mode" changes in
  general. The frequency of refresh rate changes of the physical
  display backend better matches the guest's expectations.

QemuUIInfo also has other members representing the "display mode",
which makes it a suitable place for the refresh rate. Its update
notifications are throttled, which prevents frequent changes
of the display mode.
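The unit conversion performed in the xenfb hunk below can be shown in
isolation. Assuming the refresh rate is carried in millihertz, as the
"[ms*mHz]" comment in the patch suggests, the refresh period in
milliseconds follows from T = 1/f; the function name is illustrative:

```c
#include <stdint.h>

/* Sketch of the conversion in the patch's xenfb hunk:
 * T[ms] = 1000 * 1000 / f[mHz]. */
static uint32_t millihz_to_period_ms(uint32_t refresh_rate_mhz)
{
    return 1000 * 1000 / refresh_rate_mhz;
}
```

For example, a 60 Hz display (60000 mHz) yields a 16 ms period, and a
50 Hz display yields 20 ms.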

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-3-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h |  2 +-
 include/ui/gtk.h     |  2 +-
 hw/display/xenfb.c   | 14 +++++++++++---
 ui/console.c         |  6 ------
 ui/gtk-egl.c         |  4 ++--
 ui/gtk-gl-area.c     |  3 +--
 ui/gtk.c             | 45 +++++++++++++++++++++++++-------------------
 7 files changed, 42 insertions(+), 34 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index 642d6f5248cf..b64d82436097 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
     int       yoff;
     uint32_t  width;
     uint32_t  height;
+    uint32_t  refresh_rate;
 } QemuUIInfo;
 
 /* cursor data format is 32bit RGBA */
@@ -431,7 +432,6 @@ typedef struct GraphicHwOps {
     void (*gfx_update)(void *opaque);
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
-    void (*update_interval)(void *opaque, uint64_t interval);
     void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
diff --git a/include/ui/gtk.h b/include/ui/gtk.h
index 101b147d1b98..ae0f53740d19 100644
--- a/include/ui/gtk.h
+++ b/include/ui/gtk.h
@@ -155,7 +155,7 @@ extern bool gtk_use_gl_area;
 
 /* ui/gtk.c */
 void gd_update_windowsize(VirtualConsole *vc);
-int gd_monitor_update_interval(GtkWidget *widget);
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget);
 void gd_hw_gl_flushed(void *vc);
 
 /* ui/gtk-egl.c */
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index cea10fe3c780..50857cd97a0b 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -777,16 +777,24 @@ static void xenfb_update(void *opaque)
     xenfb->up_fullscreen = 0;
 }
 
-static void xenfb_update_interval(void *opaque, uint64_t interval)
+static void xenfb_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     struct XenFB *xenfb = opaque;
+    uint32_t refresh_rate;
 
     if (xenfb->feature_update) {
 #ifdef XENFB_TYPE_REFRESH_PERIOD
         if (xenfb_queue_full(xenfb)) {
             return;
         }
-        xenfb_send_refresh_period(xenfb, interval);
+
+        refresh_rate = info->refresh_rate;
+        if (!refresh_rate) {
+            refresh_rate = 75;
+        }
+
+        /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+        xenfb_send_refresh_period(xenfb, 1000 * 1000 / refresh_rate);
 #endif
     }
 }
@@ -983,5 +991,5 @@ struct XenDevOps xen_framebuffer_ops = {
 static const GraphicHwOps xenfb_ops = {
     .invalidate  = xenfb_invalidate,
     .gfx_update  = xenfb_update,
-    .update_interval = xenfb_update_interval,
+    .ui_info     = xenfb_ui_info,
 };
diff --git a/ui/console.c b/ui/console.c
index 36c80cd1de85..9331b85203a0 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -160,7 +160,6 @@ static void gui_update(void *opaque)
     uint64_t dcl_interval;
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl;
-    QemuConsole *con;
 
     ds->refreshing = true;
     dpy_refresh(ds);
@@ -175,11 +174,6 @@ static void gui_update(void *opaque)
     }
     if (ds->update_interval != interval) {
         ds->update_interval = interval;
-        QTAILQ_FOREACH(con, &consoles, next) {
-            if (con->hw_ops->update_interval) {
-                con->hw_ops->update_interval(con->hw, interval);
-            }
-        }
         trace_console_refresh(interval);
     }
     ds->last_update = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
diff --git a/ui/gtk-egl.c b/ui/gtk-egl.c
index e3bd4bc27431..b5bffbab2522 100644
--- a/ui/gtk-egl.c
+++ b/ui/gtk-egl.c
@@ -140,8 +140,8 @@ void gd_egl_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(
+            vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.esurface) {
         gd_egl_init(vc);
diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 2e0129c28cd4..682638a197d2 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -121,8 +121,7 @@ void gd_gl_area_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.gls) {
         if (!gtk_widget_get_realized(vc->gfx.drawing_area)) {
diff --git a/ui/gtk.c b/ui/gtk.c
index c57c36749e0e..2a791dd2aa04 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -710,11 +710,20 @@ static gboolean gd_window_close(GtkWidget *widget, GdkEvent *event,
     return TRUE;
 }
 
-static void gd_set_ui_info(VirtualConsole *vc, gint width, gint height)
+static void gd_set_ui_refresh_rate(VirtualConsole *vc, int refresh_rate)
 {
     QemuUIInfo info;
 
-    memset(&info, 0, sizeof(info));
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
+    info.refresh_rate = refresh_rate;
+    dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
+}
+
+static void gd_set_ui_size(VirtualConsole *vc, gint width, gint height)
+{
+    QemuUIInfo info;
+
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
     info.width = width;
     info.height = height;
     dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
@@ -738,33 +747,32 @@ static void gd_resize_event(GtkGLArea *area,
 {
     VirtualConsole *vc = (void *)opaque;
 
-    gd_set_ui_info(vc, width, height);
+    gd_set_ui_size(vc, width, height);
 }
 
 #endif
 
-/*
- * If available, return the update interval of the monitor in ms,
- * else return 0 (the default update interval).
- */
-int gd_monitor_update_interval(GtkWidget *widget)
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget)
 {
 #ifdef GDK_VERSION_3_22
     GdkWindow *win = gtk_widget_get_window(widget);
+    int refresh_rate;
 
     if (win) {
         GdkDisplay *dpy = gtk_widget_get_display(widget);
         GdkMonitor *monitor = gdk_display_get_monitor_at_window(dpy, win);
-        int refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
-
-        if (refresh_rate) {
-            /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
-            return MIN(1000 * 1000 / refresh_rate,
-                       GUI_REFRESH_INTERVAL_DEFAULT);
-        }
+        refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
+    } else {
+        refresh_rate = 0;
     }
+
+    gd_set_ui_refresh_rate(vc, refresh_rate);
+
+    /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+    vc->gfx.dcl.update_interval = refresh_rate ?
+        MIN(1000 * 1000 / refresh_rate, GUI_REFRESH_INTERVAL_DEFAULT) :
+        GUI_REFRESH_INTERVAL_DEFAULT;
 #endif
-    return 0;
 }
 
 static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
@@ -801,8 +809,7 @@ static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
         return FALSE;
     }
 
-    vc->gfx.dcl.update_interval =
-        gd_monitor_update_interval(vc->window ? vc->window : s->window);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : s->window);
 
     fbw = surface_width(vc->gfx.ds);
     fbh = surface_height(vc->gfx.ds);
@@ -1691,7 +1698,7 @@ static gboolean gd_configure(GtkWidget *widget,
 {
     VirtualConsole *vc = opaque;
 
-    gd_set_ui_info(vc, cfg->width, cfg->height);
+    gd_set_ui_size(vc, cfg->width, cfg->height);
     return FALSE;
 }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:38:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348022.574441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iOr-00049c-6N; Mon, 13 Jun 2022 11:38:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348022.574441; Mon, 13 Jun 2022 11:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iOr-00049T-2p; Mon, 13 Jun 2022 11:38:29 +0000
Received: by outflank-mailman (input) for mailman id 348022;
 Mon, 13 Jun 2022 11:38:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJWo=WU=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o0iNh-0003eY-8B
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 11:37:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2c35fb1a-eb0d-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 13:37:15 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-263-5G3LNvV0Mv2PeFSjsuQA_Q-1; Mon, 13 Jun 2022 07:37:12 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 78034100BADB;
 Mon, 13 Jun 2022 11:37:11 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 1812F2166B26;
 Mon, 13 Jun 2022 11:37:11 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id D761818009AC; Mon, 13 Jun 2022 13:36:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c35fb1a-eb0d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655120235;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TCo33ax1Q4GDO812+rq/YD1R1Ny1sNrWx6I6aptpWo8=;
	b=OwdDXeOKhnSy0uao4RqpTaBfiLfIApt0tgW+fkB8TAMdg9UT4t2uGNClnqm5S38eyhPvKb
	TP6RoO9H6VnGr5dYg+906gUZr3X2gN44gUk9tv20CqEa3lLVKu+KZrOsLLjqVxTZPMiK16
	jBD7MDI2RdMRNGL3fqal6jFcixaJkuc=
X-MC-Unique: 5G3LNvV0Mv2PeFSjsuQA_Q-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>
Subject: [PULL 16/16] ui: move 'pc-bios/keymaps' to 'ui/keymaps'
Date: Mon, 13 Jun 2022 13:36:55 +0200
Message-Id: <20220613113655.3693872-17-kraxel@redhat.com>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

From: Daniel P. Berrangé <berrange@redhat.com>

The contents of the 'keymaps' directory have nothing to do with the
firmware blobs. The 'pc-bios/keymaps' directory appears to have been
used previously as a convenience for getting the files installed into
a subdirectory of the firmware install dir, as well as to make it
easier to launch QEMU directly from the build tree. Neither
requirement needs to be reflected in the source tree layout. The
keymaps logically belong with the UI code, and meson can install
them into the right place. For in-tree execution, we merely need
a suitable symlink from the build tree to the source tree.
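The in-tree execution trick added to configure amounts to the
following (a rough illustration only; the temporary directories here
are stand-ins for the real source and build trees, not QEMU's actual
layout):

```shell
set -e
src=$(mktemp -d)    # stands in for $source_path/ui/keymaps
build=$(mktemp -d)  # stands in for the build directory
echo "map contents" > "$src/en-us"

# Link the source keymaps into the build tree under the path that
# mimics the installed data-dir layout (pc-bios/keymaps).
mkdir -p "$build/pc-bios"
ln -sfn "$src" "$build/pc-bios/keymaps"

# A binary run from the build tree now finds the keymaps in place.
cat "$build/pc-bios/keymaps/en-us"
```

The symlink lives in the build tree and points at the source tree, so
no files are copied and edits to the keymaps are picked up directly.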

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220613084456.470068-1-berrange@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 configure                           | 4 ++++
 pc-bios/meson.build                 | 1 -
 {pc-bios => ui}/keymaps/ar          | 0
 {pc-bios => ui}/keymaps/bepo        | 0
 {pc-bios => ui}/keymaps/cz          | 0
 {pc-bios => ui}/keymaps/da          | 0
 {pc-bios => ui}/keymaps/de          | 0
 {pc-bios => ui}/keymaps/de-ch       | 0
 {pc-bios => ui}/keymaps/en-gb       | 0
 {pc-bios => ui}/keymaps/en-us       | 0
 {pc-bios => ui}/keymaps/es          | 0
 {pc-bios => ui}/keymaps/et          | 0
 {pc-bios => ui}/keymaps/fi          | 0
 {pc-bios => ui}/keymaps/fo          | 0
 {pc-bios => ui}/keymaps/fr          | 0
 {pc-bios => ui}/keymaps/fr-be       | 0
 {pc-bios => ui}/keymaps/fr-ca       | 0
 {pc-bios => ui}/keymaps/fr-ch       | 0
 {pc-bios => ui}/keymaps/hr          | 0
 {pc-bios => ui}/keymaps/hu          | 0
 {pc-bios => ui}/keymaps/is          | 0
 {pc-bios => ui}/keymaps/it          | 0
 {pc-bios => ui}/keymaps/ja          | 0
 {pc-bios => ui}/keymaps/lt          | 0
 {pc-bios => ui}/keymaps/lv          | 0
 {pc-bios => ui}/keymaps/meson.build | 0
 {pc-bios => ui}/keymaps/mk          | 0
 {pc-bios => ui}/keymaps/nl          | 0
 {pc-bios => ui}/keymaps/no          | 0
 {pc-bios => ui}/keymaps/pl          | 0
 {pc-bios => ui}/keymaps/pt          | 0
 {pc-bios => ui}/keymaps/pt-br       | 0
 {pc-bios => ui}/keymaps/ru          | 0
 {pc-bios => ui}/keymaps/sl          | 0
 {pc-bios => ui}/keymaps/sv          | 0
 {pc-bios => ui}/keymaps/th          | 0
 {pc-bios => ui}/keymaps/tr          | 0
 ui/meson.build                      | 1 +
 38 files changed, 5 insertions(+), 1 deletion(-)
 rename {pc-bios => ui}/keymaps/ar (100%)
 rename {pc-bios => ui}/keymaps/bepo (100%)
 rename {pc-bios => ui}/keymaps/cz (100%)
 rename {pc-bios => ui}/keymaps/da (100%)
 rename {pc-bios => ui}/keymaps/de (100%)
 rename {pc-bios => ui}/keymaps/de-ch (100%)
 rename {pc-bios => ui}/keymaps/en-gb (100%)
 rename {pc-bios => ui}/keymaps/en-us (100%)
 rename {pc-bios => ui}/keymaps/es (100%)
 rename {pc-bios => ui}/keymaps/et (100%)
 rename {pc-bios => ui}/keymaps/fi (100%)
 rename {pc-bios => ui}/keymaps/fo (100%)
 rename {pc-bios => ui}/keymaps/fr (100%)
 rename {pc-bios => ui}/keymaps/fr-be (100%)
 rename {pc-bios => ui}/keymaps/fr-ca (100%)
 rename {pc-bios => ui}/keymaps/fr-ch (100%)
 rename {pc-bios => ui}/keymaps/hr (100%)
 rename {pc-bios => ui}/keymaps/hu (100%)
 rename {pc-bios => ui}/keymaps/is (100%)
 rename {pc-bios => ui}/keymaps/it (100%)
 rename {pc-bios => ui}/keymaps/ja (100%)
 rename {pc-bios => ui}/keymaps/lt (100%)
 rename {pc-bios => ui}/keymaps/lv (100%)
 rename {pc-bios => ui}/keymaps/meson.build (100%)
 rename {pc-bios => ui}/keymaps/mk (100%)
 rename {pc-bios => ui}/keymaps/nl (100%)
 rename {pc-bios => ui}/keymaps/no (100%)
 rename {pc-bios => ui}/keymaps/pl (100%)
 rename {pc-bios => ui}/keymaps/pt (100%)
 rename {pc-bios => ui}/keymaps/pt-br (100%)
 rename {pc-bios => ui}/keymaps/ru (100%)
 rename {pc-bios => ui}/keymaps/sl (100%)
 rename {pc-bios => ui}/keymaps/sv (100%)
 rename {pc-bios => ui}/keymaps/th (100%)
 rename {pc-bios => ui}/keymaps/tr (100%)

diff --git a/configure b/configure
index e69537c7566c..77c68642514b 100755
--- a/configure
+++ b/configure
@@ -2226,6 +2226,10 @@ for f in $LINKS ; do
     fi
 done
 
+# Keymaps needs slightly different location in build
+# dir as we are simulating the installed data dir layout
+symlink "$source_path/ui/keymaps" "pc-bios/keymaps"
+
 # Mac OS X ships with a broken assembler
 roms=
 probe_target_compilers i386 x86_64
diff --git a/pc-bios/meson.build b/pc-bios/meson.build
index 41ba1c0ec7ba..e49c0e5f56de 100644
--- a/pc-bios/meson.build
+++ b/pc-bios/meson.build
@@ -97,4 +97,3 @@ foreach f : blobs
 endforeach
 
 subdir('descriptors')
-subdir('keymaps')
diff --git a/pc-bios/keymaps/ar b/ui/keymaps/ar
similarity index 100%
rename from pc-bios/keymaps/ar
rename to ui/keymaps/ar
diff --git a/pc-bios/keymaps/bepo b/ui/keymaps/bepo
similarity index 100%
rename from pc-bios/keymaps/bepo
rename to ui/keymaps/bepo
diff --git a/pc-bios/keymaps/cz b/ui/keymaps/cz
similarity index 100%
rename from pc-bios/keymaps/cz
rename to ui/keymaps/cz
diff --git a/pc-bios/keymaps/da b/ui/keymaps/da
similarity index 100%
rename from pc-bios/keymaps/da
rename to ui/keymaps/da
diff --git a/pc-bios/keymaps/de b/ui/keymaps/de
similarity index 100%
rename from pc-bios/keymaps/de
rename to ui/keymaps/de
diff --git a/pc-bios/keymaps/de-ch b/ui/keymaps/de-ch
similarity index 100%
rename from pc-bios/keymaps/de-ch
rename to ui/keymaps/de-ch
diff --git a/pc-bios/keymaps/en-gb b/ui/keymaps/en-gb
similarity index 100%
rename from pc-bios/keymaps/en-gb
rename to ui/keymaps/en-gb
diff --git a/pc-bios/keymaps/en-us b/ui/keymaps/en-us
similarity index 100%
rename from pc-bios/keymaps/en-us
rename to ui/keymaps/en-us
diff --git a/pc-bios/keymaps/es b/ui/keymaps/es
similarity index 100%
rename from pc-bios/keymaps/es
rename to ui/keymaps/es
diff --git a/pc-bios/keymaps/et b/ui/keymaps/et
similarity index 100%
rename from pc-bios/keymaps/et
rename to ui/keymaps/et
diff --git a/pc-bios/keymaps/fi b/ui/keymaps/fi
similarity index 100%
rename from pc-bios/keymaps/fi
rename to ui/keymaps/fi
diff --git a/pc-bios/keymaps/fo b/ui/keymaps/fo
similarity index 100%
rename from pc-bios/keymaps/fo
rename to ui/keymaps/fo
diff --git a/pc-bios/keymaps/fr b/ui/keymaps/fr
similarity index 100%
rename from pc-bios/keymaps/fr
rename to ui/keymaps/fr
diff --git a/pc-bios/keymaps/fr-be b/ui/keymaps/fr-be
similarity index 100%
rename from pc-bios/keymaps/fr-be
rename to ui/keymaps/fr-be
diff --git a/pc-bios/keymaps/fr-ca b/ui/keymaps/fr-ca
similarity index 100%
rename from pc-bios/keymaps/fr-ca
rename to ui/keymaps/fr-ca
diff --git a/pc-bios/keymaps/fr-ch b/ui/keymaps/fr-ch
similarity index 100%
rename from pc-bios/keymaps/fr-ch
rename to ui/keymaps/fr-ch
diff --git a/pc-bios/keymaps/hr b/ui/keymaps/hr
similarity index 100%
rename from pc-bios/keymaps/hr
rename to ui/keymaps/hr
diff --git a/pc-bios/keymaps/hu b/ui/keymaps/hu
similarity index 100%
rename from pc-bios/keymaps/hu
rename to ui/keymaps/hu
diff --git a/pc-bios/keymaps/is b/ui/keymaps/is
similarity index 100%
rename from pc-bios/keymaps/is
rename to ui/keymaps/is
diff --git a/pc-bios/keymaps/it b/ui/keymaps/it
similarity index 100%
rename from pc-bios/keymaps/it
rename to ui/keymaps/it
diff --git a/pc-bios/keymaps/ja b/ui/keymaps/ja
similarity index 100%
rename from pc-bios/keymaps/ja
rename to ui/keymaps/ja
diff --git a/pc-bios/keymaps/lt b/ui/keymaps/lt
similarity index 100%
rename from pc-bios/keymaps/lt
rename to ui/keymaps/lt
diff --git a/pc-bios/keymaps/lv b/ui/keymaps/lv
similarity index 100%
rename from pc-bios/keymaps/lv
rename to ui/keymaps/lv
diff --git a/pc-bios/keymaps/meson.build b/ui/keymaps/meson.build
similarity index 100%
rename from pc-bios/keymaps/meson.build
rename to ui/keymaps/meson.build
diff --git a/pc-bios/keymaps/mk b/ui/keymaps/mk
similarity index 100%
rename from pc-bios/keymaps/mk
rename to ui/keymaps/mk
diff --git a/pc-bios/keymaps/nl b/ui/keymaps/nl
similarity index 100%
rename from pc-bios/keymaps/nl
rename to ui/keymaps/nl
diff --git a/pc-bios/keymaps/no b/ui/keymaps/no
similarity index 100%
rename from pc-bios/keymaps/no
rename to ui/keymaps/no
diff --git a/pc-bios/keymaps/pl b/ui/keymaps/pl
similarity index 100%
rename from pc-bios/keymaps/pl
rename to ui/keymaps/pl
diff --git a/pc-bios/keymaps/pt b/ui/keymaps/pt
similarity index 100%
rename from pc-bios/keymaps/pt
rename to ui/keymaps/pt
diff --git a/pc-bios/keymaps/pt-br b/ui/keymaps/pt-br
similarity index 100%
rename from pc-bios/keymaps/pt-br
rename to ui/keymaps/pt-br
diff --git a/pc-bios/keymaps/ru b/ui/keymaps/ru
similarity index 100%
rename from pc-bios/keymaps/ru
rename to ui/keymaps/ru
diff --git a/pc-bios/keymaps/sl b/ui/keymaps/sl
similarity index 100%
rename from pc-bios/keymaps/sl
rename to ui/keymaps/sl
diff --git a/pc-bios/keymaps/sv b/ui/keymaps/sv
similarity index 100%
rename from pc-bios/keymaps/sv
rename to ui/keymaps/sv
diff --git a/pc-bios/keymaps/th b/ui/keymaps/th
similarity index 100%
rename from pc-bios/keymaps/th
rename to ui/keymaps/th
diff --git a/pc-bios/keymaps/tr b/ui/keymaps/tr
similarity index 100%
rename from pc-bios/keymaps/tr
rename to ui/keymaps/tr
diff --git a/ui/meson.build b/ui/meson.build
index e9f48c531588..25c9a5ff8cd9 100644
--- a/ui/meson.build
+++ b/ui/meson.build
@@ -170,6 +170,7 @@ if have_system or xkbcommon.found()
 endif
 
 subdir('shader')
+subdir('keymaps')
 
 if have_system
   subdir('icons')
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 11:46:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 11:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348127.574452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iWQ-0006O1-W5; Mon, 13 Jun 2022 11:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348127.574452; Mon, 13 Jun 2022 11:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0iWQ-0006Nu-T1; Mon, 13 Jun 2022 11:46:18 +0000
Received: by outflank-mailman (input) for mailman id 348127;
 Mon, 13 Jun 2022 11:46:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0iWP-0006Nk-Si; Mon, 13 Jun 2022 11:46:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0iWP-0004Cq-OL; Mon, 13 Jun 2022 11:46:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0iWP-0002m0-EK; Mon, 13 Jun 2022 11:46:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0iWP-00018B-Ds; Mon, 13 Jun 2022 11:46:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=do7A4L+pLkuNnEw7UsEUz3lQaplBAz4YKa45Q60mAkM=; b=1CrFEtexoditodutp4qiJrXEs9
	S72HlJPXA9g2qBzQyUkU1CfWM+OM1ApkL212SWPQarDxrsFv+df3yT6BJNamGN1ev96SUoBqFlabI
	R3d4ifcdQpzU8K1+Hv7NmtgHmNYHkQBvcIQKoA2yBEFXuVry4lWwx50Ep7YAnvJLoxFM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171149-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171149: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dcb40541ebca7ec98a14d461593b3cd7282b4fac
X-Osstest-Versions-That:
    qemuu=b871cc83d69a4463ac641286eef7d773ad5b5aaa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 11:46:17 +0000

flight 171149 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171149/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail blocked in 171146
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail blocked in 171146
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171146
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171146
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171146
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171146
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171146
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171146
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171146
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                dcb40541ebca7ec98a14d461593b3cd7282b4fac
baseline version:
 qemuu                b871cc83d69a4463ac641286eef7d773ad5b5aaa

Last test of basis   171146  2022-06-12 14:08:17 Z    0 days
Testing same since   171149  2022-06-13 00:38:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bernhard Beschow <shentey@gmail.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   b871cc83d6..dcb40541eb  dcb40541ebca7ec98a14d461593b3cd7282b4fac -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:32:35 2022
Date: Mon, 13 Jun 2022 14:32:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqcuTUJUgXcO3iYE@Air-de-Roger>
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
MIME-Version: 1.0

On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> On 13.06.2022 11:04, Roger Pau Monné wrote:
> > On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>> likely important to have all the output if the boot fails without
> >>>>> having to resort to sync_console (which also affects the output from
> >>>>> other guests).
> >>>>>
> >>>>> Do so by pairing the console_serial_puts() with
> >>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>
> >>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>> important than Xen's own one (which isn't "forced")? And with this
> >>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>> not convinced we'd want to let through everything, but perhaps just
> >>>> Dom0's kernel messages?
> >>>
> >>> I normally use sync_console on all the boxes I'm doing dev work on,
> >>> so this request is something that came up internally.
> >>>
> >>> Didn't realize Xen output wasn't forced; since we already have rate
> >>> limiting based on log levels, I was assuming that non-rate-limited
> >>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>> triggered) output shouldn't be rate limited either.
> >>
> >> Which would raise the question of why we have log levels for non-guest
> >> messages.
> > 
> > Hm, maybe I'm confused, but I don't see a direct relation between log
> > levels and rate limiting.  If I set the log level to WARNING I would
> > expect not to lose _any_ non-guest log messages with level WARNING or
> > above.  It's still useful to have log levels for non-guest messages,
> > since a user might want to filter out DEBUG non-guest messages for
> > example.
> 
> It was me who was confused, because of the two log-everything variants
> we have (console and serial). You're right that your change is unrelated
> to log levels. However, when there are e.g. many warnings or when an
> admin has lowered the log level, what you (would) do is effectively
> force sync_console mode transiently (for a subset of messages, but
> that's secondary, especially because the "forced" output would still
> be waiting for earlier output to make it out).

Right, it would have to wait for any previous output in the buffer to
go out first.  In any case we can guarantee that no more output will
be added to the buffer while Xen waits for it to be flushed.

So for the hardware domain it might make sense to wait for the TX
buffers to be half empty (the current tx_quench logic) by preempting
the hypercall.  That however could cause issues if guests manage to
keep filling the buffer while the hardware domain is being preempted.

Alternatively we could always reserve half of the buffer for the
hardware domain, and allow it to be preempted while waiting for space
(since it's guaranteed that non-hardware domains won't be able to
steal the allocation from the hardware domain).
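The reserved-allocation idea can be sketched as a TX ring whose admission
limit depends on the writer.  This is an illustrative model only: TXBUF_SIZE,
struct txbuf and txbuf_put() are hypothetical names, not Xen's actual serial
txbuf code.

```c
#include <stdbool.h>

/* Illustrative only: ordinary guests may fill at most half the ring's
 * capacity, while the hardware domain may use all of it, so guests can
 * never starve the hardware domain of buffer space. */
#define TXBUF_SIZE 4096u

struct txbuf {
    char data[TXBUF_SIZE];
    unsigned int prod, cons;    /* free-running producer/consumer indices */
};

static unsigned int txbuf_used(const struct txbuf *b)
{
    return b->prod - b->cons;
}

/* Returns false when the caller's quota is exhausted; a non-hwdom caller
 * hitting its limit would drop the character (or be preempted and retry). */
static bool txbuf_put(struct txbuf *b, char c, bool is_hwdom)
{
    unsigned int limit = is_hwdom ? TXBUF_SIZE : TXBUF_SIZE / 2;

    if ( txbuf_used(b) >= limit )
        return false;

    b->data[b->prod++ % TXBUF_SIZE] = c;
    return true;
}
```

With this shape, a guest that fills its half simply starts seeing
txbuf_put() fail, while the hardware domain can still make progress on
the reserved half.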

For Xen it's not trivial to prevent messages from being dropped.  At
least during Xen boot (system_state < SYS_STATE_active) we could also
activate the sync mode and make the spin wait in __serial_putc process
softirqs.
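The boot-time idea, i.e. spin synchronously but keep servicing softirqs,
could look roughly like the sketch below.  The UART and softirq hooks are
stubbed out here: serial_tx_ready(), serial_tx_char() and
process_pending_softirqs() merely stand in for the real Xen primitives.

```c
#include <stdbool.h>

/* Stubs standing in for the real hardware/hypervisor hooks: this fake
 * UART only accepts a character after three polls, and we count how
 * often softirqs got serviced while spinning. */
static int uart_ready_after = 3;
static int softirqs_serviced;

static bool serial_tx_ready(void) { return uart_ready_after-- <= 0; }
static void serial_tx_char(char c) { (void)c; }
static void process_pending_softirqs(void) { softirqs_serviced++; }

/* Sketch: while booting (system_state < SYS_STATE_active in Xen), emit
 * each character synchronously instead of buffering it, but process
 * pending softirqs during the spin so the wait cannot wedge the CPU. */
static void sync_putc(char c)
{
    while ( !serial_tx_ready() )
        process_pending_softirqs();
    serial_tx_char(c);
}
```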

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:33:47 2022
Date: Mon, 13 Jun 2022 15:33:10 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 10/36] cpuidle,omap3: Push RCU-idle into driver
Message-ID: <YqcuhiPVqktEpZxy@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.552202452@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.552202452@infradead.org>

* Peter Zijlstra <peterz@infradead.org> [220608 14:42]:
> Doing RCU-idle outside the driver, only to then temporarily enable it
> again before going idle is daft.

Reviewed-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:34:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348182.574485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jHE-0005AY-A7; Mon, 13 Jun 2022 12:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348182.574485; Mon, 13 Jun 2022 12:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jHE-0005AR-75; Mon, 13 Jun 2022 12:34:40 +0000
Received: by outflank-mailman (input) for mailman id 348182;
 Mon, 13 Jun 2022 12:33:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+EV=WU=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1o0jGX-0004ua-SO
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:33:57 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1795d304-eb15-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 14:33:56 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id 07F638191;
 Mon, 13 Jun 2022 12:29:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1795d304-eb15-11ec-bd2c-47488cf2e6aa
Date: Mon, 13 Jun 2022 15:33:54 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 12/36] cpuidle,omap2: Push RCU-idle into driver
Message-ID: <YqcusjKzpN/d0qFf@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.677524509@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.677524509@infradead.org>

* Peter Zijlstra <peterz@infradead.org> [220608 14:42]:
> Doing RCU-idle outside the driver, only to then temporarily enable it
> again, some *four* times, before going idle is daft.

Maybe update the subject line with s/omap2/omap4/, other than that:

Reviewed-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:36:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348202.574502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jJB-0005ty-4h; Mon, 13 Jun 2022 12:36:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348202.574502; Mon, 13 Jun 2022 12:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jJA-0005tW-UU; Mon, 13 Jun 2022 12:36:40 +0000
Received: by outflank-mailman (input) for mailman id 348202;
 Mon, 13 Jun 2022 12:36:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+EV=WU=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1o0jIl-0005pV-CX
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:36:15 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 69911010-eb15-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 14:36:14 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id 87B6681D6;
 Mon, 13 Jun 2022 12:31:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69911010-eb15-11ec-bd2c-47488cf2e6aa
Date: Mon, 13 Jun 2022 15:36:11 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 33/36] cpuidle,omap3: Use WFI for omap3_pm_idle()
Message-ID: <YqcvO0xSmlEVMef3@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.010587032@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144518.010587032@infradead.org>

* Peter Zijlstra <peterz@infradead.org> [220608 14:52]:
> arch_cpu_idle() is a very simple idle interface: it exposes only a
> single idle state, and is expected to not require RCU and not do any
> tracing/instrumentation.
> 
> As such, omap_sram_idle() is not a valid implementation. Replace it
> with the simple (shallow) omap3_do_wfi() call, leaving the more
> complicated idle states to the cpuidle driver.

Acked-by: Tony Lindgren <tony@atomide.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:36:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348199.574495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jJA-0005rI-Qh; Mon, 13 Jun 2022 12:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348199.574495; Mon, 13 Jun 2022 12:36:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jJA-0005rB-Nh; Mon, 13 Jun 2022 12:36:40 +0000
Received: by outflank-mailman (input) for mailman id 348199;
 Mon, 13 Jun 2022 12:35:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+EV=WU=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1o0jHs-00054l-9C
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:35:20 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 44b8897c-eb15-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 14:35:12 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id 5285681CC;
 Mon, 13 Jun 2022 12:30:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44b8897c-eb15-11ec-8901-93a377f238d6
Date: Mon, 13 Jun 2022 15:35:14 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 34/36] cpuidle,omap3: Push RCU-idle into omap_sram_idle()
Message-ID: <YqcvAmiDSOAOAdA9@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.073801916@infradead.org>
 <YqC6iJx4ygSmry0G@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqC6iJx4ygSmry0G@hirez.programming.kicks-ass.net>

* Peter Zijlstra <peterz@infradead.org> [220608 15:00]:
> On Wed, Jun 08, 2022 at 04:27:57PM +0200, Peter Zijlstra wrote:
> > @@ -254,11 +255,18 @@ void omap_sram_idle(void)
> >  	 */
> >  	if (save_state)
> >  		omap34xx_save_context(omap3_arm_context);
> > +
> > +	if (rcuidle)
> > +		cpuidle_rcu_enter();
> > +
> >  	if (save_state == 1 || save_state == 3)
> >  		cpu_suspend(save_state, omap34xx_do_sram_idle);
> >  	else
> >  		omap34xx_do_sram_idle(save_state);
> >  
> > +	if (rcuidle)
> > +		rcuidle_rcu_exit();
> 
> *sigh* so much for this having been exposed to the robots for >2 days :/

I tested your git branch of these patches, so:

Reviewed-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:45:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348230.574518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jRZ-00087O-2b; Mon, 13 Jun 2022 12:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348230.574518; Mon, 13 Jun 2022 12:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jRY-00087H-UP; Mon, 13 Jun 2022 12:45:20 +0000
Received: by outflank-mailman (input) for mailman id 348230;
 Mon, 13 Jun 2022 12:39:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+EV=WU=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1o0jLZ-0007Gg-EB
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:39:09 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id d103bdb5-eb15-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 14:39:07 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id AC04B82CC;
 Mon, 13 Jun 2022 12:34:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d103bdb5-eb15-11ec-bd2c-47488cf2e6aa
Date: Mon, 13 Jun 2022 15:39:05 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: [PATCH 34.5/36] cpuidle,omap4: Push RCU-idle into
 omap4_enter_lowpower()
Message-ID: <Yqcv6crSNKuSWoTu@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.073801916@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144518.073801916@infradead.org>

OMAP4 uses full SoC suspend modes as idle states, so it needs the
whole power-domain and clock-domain code in the idle path.

None of that code is suitable to run with RCU disabled, so push
RCU-idle deeper still.

Signed-off-by: Tony Lindgren <tony@atomide.com>
---

Peter, here's one more for your series; it looks like this is needed to
avoid warnings, similar to what you did for omap3.

---
 arch/arm/mach-omap2/common.h              |  6 ++++--
 arch/arm/mach-omap2/cpuidle44xx.c         |  8 ++------
 arch/arm/mach-omap2/omap-mpuss-lowpower.c | 12 +++++++++++-
 arch/arm/mach-omap2/pm44xx.c              |  2 +-
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm/mach-omap2/common.h b/arch/arm/mach-omap2/common.h
--- a/arch/arm/mach-omap2/common.h
+++ b/arch/arm/mach-omap2/common.h
@@ -284,11 +284,13 @@ extern u32 omap4_get_cpu1_ns_pa_addr(void);
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PM)
 extern int omap4_mpuss_init(void);
-extern int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state);
+extern int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state,
+				bool rcuidle);
 extern int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state);
 #else
 static inline int omap4_enter_lowpower(unsigned int cpu,
-					unsigned int power_state)
+					unsigned int power_state,
+					bool rcuidle)
 {
 	cpu_do_idle();
 	return 0;
diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c
--- a/arch/arm/mach-omap2/cpuidle44xx.c
+++ b/arch/arm/mach-omap2/cpuidle44xx.c
@@ -105,9 +105,7 @@ static int omap_enter_idle_smp(struct cpuidle_device *dev,
 	}
 	raw_spin_unlock_irqrestore(&mpu_lock, flag);
 
-	cpuidle_rcu_enter();
-	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
-	cpuidle_rcu_exit();
+	omap4_enter_lowpower(dev->cpu, cx->cpu_state, true);
 
 	raw_spin_lock_irqsave(&mpu_lock, flag);
 	if (cx->mpu_state_vote == num_online_cpus())
@@ -186,10 +184,8 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev,
 		}
 	}
 
-	cpuidle_rcu_enter();
-	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
+	omap4_enter_lowpower(dev->cpu, cx->cpu_state, true);
 	cpu_done[dev->cpu] = true;
-	cpuidle_rcu_exit();
 
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c
--- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c
+++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c
@@ -33,6 +33,7 @@
  * and first to wake-up when MPUSS low power states are excercised
  */
 
+#include <linux/cpuidle.h>
 #include <linux/kernel.h>
 #include <linux/io.h>
 #include <linux/errno.h>
@@ -214,6 +215,7 @@ static void __init save_l2x0_context(void)
  * of OMAP4 MPUSS subsystem
  * @cpu : CPU ID
  * @power_state: Low power state.
+ * @rcuidle: RCU needs to be idled
  *
  * MPUSS states for the context save:
  * save_state =
@@ -222,7 +224,8 @@ static void __init save_l2x0_context(void)
  *	2 - CPUx L1 and logic lost + GIC lost: MPUSS OSWR
  *	3 - CPUx L1 and logic lost + GIC + L2 lost: DEVICE OFF
  */
-int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
+int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state,
+			 bool rcuidle)
 {
 	struct omap4_cpu_pm_info *pm_info = &per_cpu(omap4_pm_info, cpu);
 	unsigned int save_state = 0, cpu_logic_state = PWRDM_POWER_RET;
@@ -268,6 +271,10 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
 	cpu_clear_prev_logic_pwrst(cpu);
 	pwrdm_set_next_pwrst(pm_info->pwrdm, power_state);
 	pwrdm_set_logic_retst(pm_info->pwrdm, cpu_logic_state);
+
+	if (rcuidle)
+		cpuidle_rcu_enter();
+
 	set_cpu_wakeup_addr(cpu, __pa_symbol(omap_pm_ops.resume));
 	omap_pm_ops.scu_prepare(cpu, power_state);
 	l2x0_pwrst_prepare(cpu, save_state);
@@ -283,6 +290,9 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
 	if (IS_PM44XX_ERRATUM(PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD) && cpu)
 		gic_dist_enable();
 
+	if (rcuidle)
+		cpuidle_rcu_exit();
+
 	/*
 	 * Restore the CPUx power state to ON otherwise CPUx
 	 * power domain can transitions to programmed low power
diff --git a/arch/arm/mach-omap2/pm44xx.c b/arch/arm/mach-omap2/pm44xx.c
--- a/arch/arm/mach-omap2/pm44xx.c
+++ b/arch/arm/mach-omap2/pm44xx.c
@@ -76,7 +76,7 @@ static int omap4_pm_suspend(void)
 	 * domain CSWR is not supported by hardware.
 	 * More details can be found in OMAP4430 TRM section 4.3.4.2.
 	 */
-	omap4_enter_lowpower(cpu_id, cpu_suspend_state);
+	omap4_enter_lowpower(cpu_id, cpu_suspend_state, false);
 
 	/* Restore next powerdomain state */
 	list_for_each_entry(pwrst, &pwrst_list, node) {
-- 
2.36.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:53:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348247.574568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZR-00028r-CT; Mon, 13 Jun 2022 12:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348247.574568; Mon, 13 Jun 2022 12:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZR-00027z-4e; Mon, 13 Jun 2022 12:53:29 +0000
Received: by outflank-mailman (input) for mailman id 348247;
 Mon, 13 Jun 2022 12:53:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1o0jZP-0001JY-Qd
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:53:27 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d128e809-eb17-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 14:53:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7764A1042;
 Mon, 13 Jun 2022 05:53:26 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 93EEA3F792;
 Mon, 13 Jun 2022 05:53:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d128e809-eb17-11ec-8901-93a377f238d6
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 4/4] arm: Define kconfig symbols used by arm64 cpufeatures
Date: Mon, 13 Jun 2022 13:53:14 +0100
Message-Id: <e9f3e4c520f9d78223ff89cf1d8ef14348933a7c.1655124548.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655124548.git.bertrand.marquis@arm.com>
References: <cover.1655124548.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Define the Kconfig symbols used by the arm64 cpufeature code so that it
does not reference undefined symbols and instead relies on IS_ENABLED()
returning false for them.
All the features related to those symbols are unsupported by Xen:
- pointer authentication
- sve
- memory tagging
- branch target identification

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes in v3:
- add Julien acked by
- no changes
Changes in v2:
- patch introduced
---
 xen/arch/arm/Kconfig | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index a89a67802a..5900aa2efe 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -98,6 +98,34 @@ config HARDEN_BRANCH_PREDICTOR
 
 	  If unsure, say Y.
 
+config ARM64_PTR_AUTH
+	def_bool n
+	depends on ARM64
+	help
+	  Pointer authentication support.
+	  This feature is not supported in Xen.
+
+config ARM64_SVE
+	def_bool n
+	depends on ARM64
+	help
+	  Scalar Vector Extension support.
+	  This feature is not supported in Xen.
+
+config ARM64_MTE
+	def_bool n
+	depends on ARM64
+	help
+	  Memory Tagging Extension support.
+	  This feature is not supported in Xen.
+
+config ARM64_BTI
+	def_bool n
+	depends on ARM64
+	help
+	  Branch Target Identification support.
+	  This feature is not supported in Xen.
+
 config TEE
 	bool "Enable TEE mediators support (UNSUPPORTED)" if UNSUPPORTED
 	default n
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:53:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348246.574562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZQ-00023d-Ub; Mon, 13 Jun 2022 12:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348246.574562; Mon, 13 Jun 2022 12:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZQ-00021k-Pk; Mon, 13 Jun 2022 12:53:28 +0000
Received: by outflank-mailman (input) for mailman id 348246;
 Mon, 13 Jun 2022 12:53:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1o0jZO-0001JY-QZ
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:53:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d08b5895-eb17-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 14:53:26 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 74B1BD6E;
 Mon, 13 Jun 2022 05:53:25 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AF3F83F792;
 Mon, 13 Jun 2022 05:53:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d08b5895-eb17-11ec-8901-93a377f238d6
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 3/4] arm: add ISAR2, MMFR0 and MMFR1 fields in cpufeature
Date: Mon, 13 Jun 2022 13:53:13 +0100
Message-Id: <0d1bce834cc1d9949c9d77acfeb650f2f4c02601.1655124548.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655124548.git.bertrand.marquis@arm.com>
References: <cover.1655124548.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Complete AA64ISAR2 and AA64MMFR[0-1] with more fields.
While there, add comments for the MMFR bitfields in the cpuinfo
structure definition, as is done for the other registers.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v3:
- fix tgranule_4k_2
- add Stefano r-b
Changes in v2:
- patch introduced to isolate changes in cpufeature.h
- complete MMFR0 and ISAR2 to sync with sysregs.h status
---
 xen/arch/arm/include/asm/cpufeature.h | 28 ++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index 24c01d2b9d..c86a2e7f29 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -234,6 +234,7 @@ struct cpuinfo_arm {
     union {
         register_t bits[3];
         struct {
+            /* MMFR0 */
             unsigned long pa_range:4;
             unsigned long asid_bits:4;
             unsigned long bigend:4;
@@ -242,18 +243,31 @@ struct cpuinfo_arm {
             unsigned long tgranule_16K:4;
             unsigned long tgranule_64K:4;
             unsigned long tgranule_4K:4;
-            unsigned long __res0:32;
-
+            unsigned long tgranule_16k_2:4;
+            unsigned long tgranule_64k_2:4;
+            unsigned long tgranule_4k_2:4;
+            unsigned long exs:4;
+            unsigned long __res0:8;
+            unsigned long fgt:4;
+            unsigned long ecv:4;
+
+            /* MMFR1 */
             unsigned long hafdbs:4;
             unsigned long vmid_bits:4;
             unsigned long vh:4;
             unsigned long hpds:4;
             unsigned long lo:4;
             unsigned long pan:4;
-            unsigned long __res1:8;
-            unsigned long __res2:28;
+            unsigned long specsei:4;
+            unsigned long xnx:4;
+            unsigned long twed:4;
+            unsigned long ets:4;
+            unsigned long __res1:4;
+            unsigned long afp:4;
+            unsigned long __res2:12;
             unsigned long ecbhb:4;
 
+            /* MMFR2 */
             unsigned long __res3:64;
         };
     } mm64;
@@ -297,7 +311,11 @@ struct cpuinfo_arm {
             unsigned long __res2:8;
 
             /* ISAR2 */
-            unsigned long __res3:28;
+            unsigned long wfxt:4;
+            unsigned long rpres:4;
+            unsigned long gpa3:4;
+            unsigned long apa3:4;
+            unsigned long __res3:12;
             unsigned long clearbhb:4;
 
             unsigned long __res4:32;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:53:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348244.574535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZP-0001NI-7f; Mon, 13 Jun 2022 12:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348244.574535; Mon, 13 Jun 2022 12:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZP-0001M8-1C; Mon, 13 Jun 2022 12:53:27 +0000
Received: by outflank-mailman (input) for mailman id 348244;
 Mon, 13 Jun 2022 12:53:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1o0jZO-0001JY-13
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:53:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d008da88-eb17-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 14:53:25 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 903221480;
 Mon, 13 Jun 2022 05:53:24 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C9FB73F792;
 Mon, 13 Jun 2022 05:53:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d008da88-eb17-11ec-8901-93a377f238d6
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 2/4] xen/arm: Add sb instruction support
Date: Mon, 13 Jun 2022 13:53:12 +0100
Message-Id: <7fa3bf8fc27bac120e28092c8c4081d1e58f0b79.1655124548.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655124548.git.bertrand.marquis@arm.com>
References: <cover.1655124548.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for the sb instruction on arm64 when the CPU provides it.
A new CPU feature capability system is introduced to enable alternative
code using the sb instruction when the processor supports it. This is
decided based on the isa64 system register value and uses a new
hardware capability, ARM_HAS_SB.

The sb instruction is encoded using its hexadecimal value to avoid
macro recursion and to support older compilers that do not know the sb
instruction.

Arm32 instruction support is added but not enabled for now due to the
lack of hardware supporting it.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v3:
- rename ARM64_HAS_SB to ARM_HAS_SB
- define sb before including per arch macros
Changes in v2:
- fix commit message
- add comment to explain the extra nop
- add support for arm32 and move macro back to arm generic header
- fix macro comment indentation
- introduce cpu feature system instead of using errata
---
 xen/arch/arm/cpufeature.c             | 28 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/cpufeature.h |  6 +++++-
 xen/arch/arm/include/asm/macros.h     | 19 +++++++++++++++++-
 xen/arch/arm/setup.c                  |  3 +++
 xen/arch/arm/smpboot.c                |  1 +
 5 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index a58965f7b9..62d5e1770a 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -26,6 +26,24 @@ DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
 struct cpuinfo_arm __read_mostly guest_cpuinfo;
 
+#ifdef CONFIG_ARM_64
+static bool has_sb_instruction(const struct arm_cpu_capabilities *entry)
+{
+    return system_cpuinfo.isa64.sb;
+}
+#endif
+
+static const struct arm_cpu_capabilities arm_features[] = {
+#ifdef CONFIG_ARM_64
+    {
+        .desc = "Speculation barrier instruction (SB)",
+        .capability = ARM_HAS_SB,
+        .matches = has_sb_instruction,
+    },
+#endif
+    {},
+};
+
 void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
                              const char *info)
 {
@@ -70,6 +88,16 @@ void __init enable_cpu_capabilities(const struct arm_cpu_capabilities *caps)
     }
 }
 
+void check_local_cpu_features(void)
+{
+    update_cpu_capabilities(arm_features, "enabled support for");
+}
+
+void __init enable_cpu_features(void)
+{
+    enable_cpu_capabilities(arm_features);
+}
+
 /*
  * Run through the enabled capabilities and enable() them on the calling CPU.
  * If enabling of any capability fails the error is returned. After enabling a
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index f7368766c0..24c01d2b9d 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -67,8 +67,9 @@
 #define ARM_WORKAROUND_BHB_LOOP_24 13
 #define ARM_WORKAROUND_BHB_LOOP_32 14
 #define ARM_WORKAROUND_BHB_SMCC_3 15
+#define ARM_HAS_SB 16
 
-#define ARM_NCAPS           16
+#define ARM_NCAPS           17
 
 #ifndef __ASSEMBLY__
 
@@ -78,6 +79,9 @@
 
 extern DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
+void check_local_cpu_features(void);
+void enable_cpu_features(void);
+
 static inline bool cpus_have_cap(unsigned int num)
 {
     if ( num >= ARM_NCAPS )
diff --git a/xen/arch/arm/include/asm/macros.h b/xen/arch/arm/include/asm/macros.h
index 1aa373760f..dc791245df 100644
--- a/xen/arch/arm/include/asm/macros.h
+++ b/xen/arch/arm/include/asm/macros.h
@@ -5,13 +5,30 @@
 # error "This file should only be included in assembly file"
 #endif
 
+#include <asm/alternative.h>
+
     /*
      * Speculative barrier
-     * XXX: Add support for the 'sb' instruction
      */
     .macro sb
+alternative_if_not ARM_HAS_SB
     dsb nsh
     isb
+alternative_else
+    /*
+     * SB encoding in hexadecimal to prevent recursive macro.
+     * extra nop is required to keep same number of instructions on both sides
+     * of the alternative.
+     */
+#if defined(CONFIG_ARM_32)
+    .inst 0xf57ff070
+#elif defined(CONFIG_ARM_64)
+    .inst 0xd50330ff
+#else
+#   error "missing sb encoding for ARM variant"
+#endif
+    nop
+alternative_endif
     .endm
 
 #if defined (CONFIG_ARM_32)
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6016471d37..577c54e6fb 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -964,6 +964,8 @@ void __init start_xen(unsigned long boot_phys_offset,
      */
     check_local_cpu_errata();
 
+    check_local_cpu_features();
+
     init_xen_time();
 
     gic_init();
@@ -1033,6 +1035,7 @@ void __init start_xen(unsigned long boot_phys_offset,
      */
     apply_alternatives_all();
     enable_errata_workarounds();
+    enable_cpu_features();
 
     /* Create initial domain 0. */
     if ( !is_dom0less_mode() )
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 22fede6600..3f62f3a44f 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -395,6 +395,7 @@ void start_secondary(void)
     local_abort_enable();
 
     check_local_cpu_errata();
+    check_local_cpu_features();
 
     printk(XENLOG_DEBUG "CPU %u booted.\n", smp_processor_id());
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:53:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348243.574529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZO-0001Js-UV; Mon, 13 Jun 2022 12:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348243.574529; Mon, 13 Jun 2022 12:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZO-0001Jl-PR; Mon, 13 Jun 2022 12:53:26 +0000
Received: by outflank-mailman (input) for mailman id 348243;
 Mon, 13 Jun 2022 12:53:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1o0jZN-0001I4-6p
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:53:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cece51e8-eb17-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 14:53:23 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C2DDBD6E;
 Mon, 13 Jun 2022 05:53:22 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 046DE3F792;
 Mon, 13 Jun 2022 05:53:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cece51e8-eb17-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 0/4] Spectre BHB follow up
Date: Mon, 13 Jun 2022 13:53:10 +0100
Message-Id: <cover.1655124548.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Following up on the handling of Spectre BHB on Arm (XSA-398), this
series contains several changes which were not needed in the XSA
patches but should be done in Xen:
- Sync sysregs and cpuinfo with the latest version of Linux (5.18-rc3).
- Add new fields inside cpufeature.
- Add sb instruction support. Some newer generations of CPUs
  (e.g. Neoverse-N2) support the instruction, so add support for it in
  Xen.
- Create hidden Kconfig entries for the CONFIG_ values actually used in
  the arm64 cpufeature code.

Changes in v3:
- add R-b and A-b on patches
- fixes in sb support patch
Changes in v2:
- remove patch which was merged (workaround 1 when workaround 3 is done)
- split sync with linux and update of cpufeatures
- add patch to define kconfig entries used by arm64 cpufeature

Bertrand Marquis (4):
  xen/arm: Sync sysregs and cpuinfo with Linux 5.18-rc3
  xen/arm: Add sb instruction support
  arm: add ISAR2, MMFR0 and MMFR1 fields in cpufeature
  arm: Define kconfig symbols used by arm64 cpufeatures

 xen/arch/arm/Kconfig                     | 28 +++++++++
 xen/arch/arm/arm64/cpufeature.c          | 18 +++++-
 xen/arch/arm/cpufeature.c                | 28 +++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h | 76 ++++++++++++++++++++----
 xen/arch/arm/include/asm/cpufeature.h    | 34 +++++++++--
 xen/arch/arm/include/asm/macros.h        | 19 +++++-
 xen/arch/arm/setup.c                     |  3 +
 xen/arch/arm/smpboot.c                   |  1 +
 8 files changed, 186 insertions(+), 21 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 12:53:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 12:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348245.574541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZP-0001Uh-HB; Mon, 13 Jun 2022 12:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348245.574541; Mon, 13 Jun 2022 12:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0jZP-0001SY-BC; Mon, 13 Jun 2022 12:53:27 +0000
Received: by outflank-mailman (input) for mailman id 348245;
 Mon, 13 Jun 2022 12:53:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=bertrand.marquis@srs-se1.protection.inumbo.net>)
 id 1o0jZN-0001I4-Uy
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 12:53:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cf612d85-eb17-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 14:53:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AA11A1042;
 Mon, 13 Jun 2022 05:53:23 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E523A3F792;
 Mon, 13 Jun 2022 05:53:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf612d85-eb17-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 1/4] xen/arm: Sync sysregs and cpuinfo with Linux 5.18-rc3
Date: Mon, 13 Jun 2022 13:53:11 +0100
Message-Id: <b7b2a48eb1f8ed7f1a00e87189160accfc0d9625.1655124548.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655124548.git.bertrand.marquis@arm.com>
References: <cover.1655124548.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Sync the existing ID register sanitization with the status of Linux
kernel version 5.18-rc3 and add sanitization of the ISAR2 register.

Sync the sysregs.h bit shift definitions with the status of Linux
kernel version 5.18-rc3.

These changes are split across a number of patches in the Linux kernel
and, as the previous synchronisation point was not clear, they are done
here in one patch, leaving a state that can easily be compared by
diffing the Xen files against the Linux kernel files.

Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git b2d229d4ddb1
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v3
- add Stefano r-b
Changes in v2
- move changes in cpufeature.h in an independent patch
- add proper origin tag in the commit
- rework the commit message
---
 xen/arch/arm/arm64/cpufeature.c          | 18 +++++-
 xen/arch/arm/include/asm/arm64/sysregs.h | 76 ++++++++++++++++++++----
 2 files changed, 80 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index 6e5d30dc7b..d9039d37b2 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -143,6 +143,16 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR2_APA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_GPA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
@@ -158,8 +168,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
@@ -197,7 +207,7 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
 	/*
@@ -243,6 +253,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_AFP_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
@@ -588,6 +599,7 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(isa64, 0, aa64isar0);
 	SANITIZE_ID_REG(isa64, 1, aa64isar1);
+	SANITIZE_ID_REG(isa64, 2, aa64isar2);
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index eac08ed33f..54670084c3 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -144,6 +144,30 @@
 
 /* id_aa64isar2 */
 #define ID_AA64ISAR2_CLEARBHB_SHIFT 28
+#define ID_AA64ISAR2_APA3_SHIFT     12
+#define ID_AA64ISAR2_GPA3_SHIFT     8
+#define ID_AA64ISAR2_RPRES_SHIFT    4
+#define ID_AA64ISAR2_WFXT_SHIFT     0
+
+#define ID_AA64ISAR2_RPRES_8BIT     0x0
+#define ID_AA64ISAR2_RPRES_12BIT    0x1
+/*
+ * Value 0x1 has been removed from the architecture, and is
+ * reserved, but has not yet been removed from the ARM ARM
+ * as of ARM DDI 0487G.b.
+ */
+#define ID_AA64ISAR2_WFXT_NI        0x0
+#define ID_AA64ISAR2_WFXT_SUPPORTED 0x2
+
+#define ID_AA64ISAR2_APA3_NI                  0x0
+#define ID_AA64ISAR2_APA3_ARCHITECTED         0x1
+#define ID_AA64ISAR2_APA3_ARCH_EPAC           0x2
+#define ID_AA64ISAR2_APA3_ARCH_EPAC2          0x3
+#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC     0x4
+#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC_CMB 0x5
+
+#define ID_AA64ISAR2_GPA3_NI             0x0
+#define ID_AA64ISAR2_GPA3_ARCHITECTED    0x1
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT       60
@@ -165,14 +189,13 @@
 #define ID_AA64PFR0_AMU              0x1
 #define ID_AA64PFR0_SVE              0x1
 #define ID_AA64PFR0_RAS_V1           0x1
+#define ID_AA64PFR0_RAS_V1P1         0x2
 #define ID_AA64PFR0_FP_NI            0xf
 #define ID_AA64PFR0_FP_SUPPORTED     0x0
 #define ID_AA64PFR0_ASIMD_NI         0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED  0x0
-#define ID_AA64PFR0_EL1_64BIT_ONLY   0x1
-#define ID_AA64PFR0_EL1_32BIT_64BIT  0x2
-#define ID_AA64PFR0_EL0_64BIT_ONLY   0x1
-#define ID_AA64PFR0_EL0_32BIT_64BIT  0x2
+#define ID_AA64PFR0_ELx_64BIT_ONLY   0x1
+#define ID_AA64PFR0_ELx_32BIT_64BIT  0x2
 
 /* id_aa64pfr1 */
 #define ID_AA64PFR1_MPAMFRAC_SHIFT   16
@@ -189,6 +212,7 @@
 #define ID_AA64PFR1_MTE_NI           0x0
 #define ID_AA64PFR1_MTE_EL0          0x1
 #define ID_AA64PFR1_MTE              0x2
+#define ID_AA64PFR1_MTE_ASYMM        0x3
 
 /* id_aa64zfr0 */
 #define ID_AA64ZFR0_F64MM_SHIFT      56
@@ -228,17 +252,37 @@
 #define ID_AA64MMFR0_ASID_SHIFT      4
 #define ID_AA64MMFR0_PARANGE_SHIFT   0
 
-#define ID_AA64MMFR0_TGRAN4_NI         0xf
-#define ID_AA64MMFR0_TGRAN4_SUPPORTED  0x0
-#define ID_AA64MMFR0_TGRAN64_NI        0xf
-#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
-#define ID_AA64MMFR0_TGRAN16_NI        0x0
-#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
+#define ID_AA64MMFR0_ASID_8          0x0
+#define ID_AA64MMFR0_ASID_16         0x2
+
+#define ID_AA64MMFR0_TGRAN4_NI             0xf
+#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN  0x0
+#define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX  0x7
+#define ID_AA64MMFR0_TGRAN64_NI            0xf
+#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0
+#define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7
+#define ID_AA64MMFR0_TGRAN16_NI            0x0
+#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1
+#define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf
+
+#define ID_AA64MMFR0_PARANGE_32        0x0
+#define ID_AA64MMFR0_PARANGE_36        0x1
+#define ID_AA64MMFR0_PARANGE_40        0x2
+#define ID_AA64MMFR0_PARANGE_42        0x3
+#define ID_AA64MMFR0_PARANGE_44        0x4
 #define ID_AA64MMFR0_PARANGE_48        0x5
 #define ID_AA64MMFR0_PARANGE_52        0x6
 
+#define ARM64_MIN_PARANGE_BITS     32
+
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE    0x1
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN     0x2
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX     0x7
+
 /* id_aa64mmfr1 */
 #define ID_AA64MMFR1_ECBHB_SHIFT     60
+#define ID_AA64MMFR1_AFP_SHIFT       44
 #define ID_AA64MMFR1_ETS_SHIFT       36
 #define ID_AA64MMFR1_TWED_SHIFT      32
 #define ID_AA64MMFR1_XNX_SHIFT       28
@@ -271,6 +315,9 @@
 #define ID_AA64MMFR2_CNP_SHIFT       0
 
 /* id_aa64dfr0 */
+#define ID_AA64DFR0_MTPMU_SHIFT      48
+#define ID_AA64DFR0_TRBE_SHIFT       44
+#define ID_AA64DFR0_TRACE_FILT_SHIFT 40
 #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
 #define ID_AA64DFR0_PMSVER_SHIFT     32
 #define ID_AA64DFR0_CTX_CMPS_SHIFT   28
@@ -284,11 +331,18 @@
 #define ID_AA64DFR0_PMUVER_8_1       0x4
 #define ID_AA64DFR0_PMUVER_8_4       0x5
 #define ID_AA64DFR0_PMUVER_8_5       0x6
+#define ID_AA64DFR0_PMUVER_8_7       0x7
 #define ID_AA64DFR0_PMUVER_IMP_DEF   0xf
 
+#define ID_AA64DFR0_PMSVER_8_2      0x1
+#define ID_AA64DFR0_PMSVER_8_3      0x2
+
 #define ID_DFR0_PERFMON_SHIFT        24
 
-#define ID_DFR0_PERFMON_8_1          0x4
+#define ID_DFR0_PERFMON_8_0         0x3
+#define ID_DFR0_PERFMON_8_1         0x4
+#define ID_DFR0_PERFMON_8_4         0x5
+#define ID_DFR0_PERFMON_8_5         0x6
 
 #define ID_ISAR4_SWP_FRAC_SHIFT        28
 #define ID_ISAR4_PSR_M_SHIFT           24
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 13:00:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 13:00:18 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171151-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171151: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 13:00:13 +0000

flight 171151 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171151/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 171144 pass in 171151
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install          fail pass in 171144

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 171144
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171144
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171144
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171144
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171144
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171144
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171144
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171144
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171144
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171144
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171144
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171144
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171144
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171151  2022-06-13 01:52:09 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 13:57:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 13:57:21 +0000
Message-ID: <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
Date: Mon, 13 Jun 2022 15:56:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqcuTUJUgXcO3iYE@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cbd4aee8-7e2d-4d90-7ede-08da4d449412
X-MS-TrafficTypeDiagnostic: DB7PR04MB4553:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR04MB45536B4030D5282C6F7E26C1B3AB9@DB7PR04MB4553.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	S/fnFVoVGJC9nY2JF8GHd/WxHz7BZm2sLHU1tTf1BviynnsSshcTRytOOnrbzlDZlr07WpV+HR6/kCakugcGEL4zCJoYCrFHCGNsSyNVXHzQy93ImF3f8zh3un304Rwp8eStOa5hPGCQWRM8NRvG3Y+SNi0C6+ErHP5g4sv0onPJFTFM7ldzV4GE8hNFu3jBmiTdqINbvs1Nj0//KCj1fiERaCvfrILfjfHFXyWKddAZJc8XZFZNEKEyRZh+Hd7cHIi3n1fam1UNG/DSa8aZs/V+5664sshX3J3F3dk9lYnugHDfVF/m4SjkEDD3gyBJbHFv6+VFamSbc1Z/jA08Xbp8kRd6PevIB9BEN4a3IUPhgCk3nF9i7YI6DYyBNsZDb9KS8I+7ECKScD3JdE8Jz+BmxQyTG+f2Csh9rqbNd84nBG6mDyWx8DBUbpMhnwF11qaKDQdrxxSNOM8wTNVzzmSQ8WJ1fjwVqDgDkA1GOetwWxAdh9Fs2rp4Um4SCRxguE0/R9pDdYpREGMNhKP91pgRX6hRw/Hnq0D9yMB281pJzA14/PwKcAwAB1Wev48Hgxnx3F5mkxE7m/ndDHZQaOAlsvzvGezJt0tyixPMRPqURWt91wraXXlgkquHVcqsLSRvrSBZL34dOV8PnWS3e4lInwoDBzATdeCuIl9GlM01xc3UEtufLR5ARxyrii/6mGhKnCobRdp9wQlX6yeg4M2a+pUTnvKZvfCd2UhIHWvDvZ+FXocBLJI7cWZj59Pg
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(83380400001)(54906003)(5660300002)(86362001)(31696002)(4326008)(2906002)(508600001)(53546011)(186003)(26005)(6512007)(6506007)(38100700002)(8676002)(31686004)(6916009)(36756003)(316002)(2616005)(66476007)(8936002)(66556008)(66946007)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cbd4aee8-7e2d-4d90-7ede-08da4d449412
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 13:56:56.5263
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0ocXOG51dJ8BMgaJGk9TIPEY2SNhwsYUF8OWbOgdXXoE/jstWEA5BiU4N2W3l/fIe6BeqL7ObH6yc2UKNw3GtA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB4553

On 13.06.2022 14:32, Roger Pau Monné wrote:
> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>> likely important to have all the output if the boot fails without
>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>> other guests).
>>>>>>>
>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>
>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>> Dom0's kernel messages?
>>>>>
>>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
>>>>> this request is something that came up internally.
>>>>>
>>>>> I didn't realize Xen output wasn't forced; since we already have rate
>>>>> limiting based on log levels, I was assuming that non-rate-limited
>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>>>> triggered) output shouldn't be rate limited either.
>>>>
>>>> Which would raise the question of why we have log levels for non-guest
>>>> messages.
>>>
>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>> levels and rate limiting.  If I set the log level to WARNING I would
>>> expect not to lose _any_ non-guest log messages with level WARNING or
>>> above.  It's still useful to have log levels for non-guest messages,
>>> since a user might want to filter out DEBUG non-guest messages, for
>>> example.
>>
>> It was me who was confused, because of the two log-everything variants
>> we have (console and serial). You're right that your change is unrelated
>> to log levels. However, when there are e.g. many warnings or when an
>> admin has lowered the log level, what you (would) do is effectively
>> force sync_console mode transiently (for a subset of messages, but
>> that's secondary, especially because the "forced" output would still
>> be waiting for earlier output to make it out).
> 
> Right, it would have to wait for any previous output on the buffer to
> go out first.  In any case we can guarantee that no more output will
> be added to the buffer while Xen waits for it to be flushed.
> 
> So for the hardware domain it might make sense to wait for the TX
> buffers to be half empty (the current tx_quench logic) by preempting
> the hypercall.  That however could cause issues if guests manage to
> keep filling the buffer while the hardware domain is being preempted.
> 
> Alternatively we could always reserve half of the buffer for the
> hardware domain, and allow it to be preempted while waiting for space
> (since it's guaranteed non hardware domains won't be able to steal the
> allocation from the hardware domain).

Getting complicated it seems. I have to admit that I wonder whether we
wouldn't be better off leaving the current logic as is.

> For Xen it's not trivial to prevent messages from being dropped. At
> least during Xen boot (system_state < SYS_STATE_active) we could also
> activate the sync mode and make the spin wait in __serial_putc process
> softirqs.

Yeah, that would seem doable _and_ safe (enough).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 13:58:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 13:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348325.574606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kaF-0003dO-Qz; Mon, 13 Jun 2022 13:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348325.574606; Mon, 13 Jun 2022 13:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kaF-0003dH-O0; Mon, 13 Jun 2022 13:58:23 +0000
Received: by outflank-mailman (input) for mailman id 348325;
 Mon, 13 Jun 2022 13:58:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E7/M=WU=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o0kaF-0003d7-5x
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 13:58:23 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e24ef0b8-eb20-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 15:58:21 +0200 (CEST)
Received: from AM7PR03CA0023.eurprd03.prod.outlook.com (2603:10a6:20b:130::33)
 by DBAPR08MB5749.eurprd08.prod.outlook.com (2603:10a6:10:1af::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Mon, 13 Jun
 2022 13:58:18 +0000
Received: from AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::93) by AM7PR03CA0023.outlook.office365.com
 (2603:10a6:20b:130::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20 via Frontend
 Transport; Mon, 13 Jun 2022 13:58:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT051.mail.protection.outlook.com (10.152.16.246) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Mon, 13 Jun 2022 13:58:17 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Mon, 13 Jun 2022 13:58:17 +0000
Received: from 930ff3f4b965.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 35CA8873-821A-45ED-8499-1C6BE7118ADC.1; 
 Mon, 13 Jun 2022 13:58:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 930ff3f4b965.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 13 Jun 2022 13:58:10 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by HE1PR08MB2633.eurprd08.prod.outlook.com (2603:10a6:7:37::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.15; Mon, 13 Jun
 2022 13:58:08 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5332.020; Mon, 13 Jun 2022
 13:58:08 +0000
X-Inumbo-ID: e24ef0b8-eb20-11ec-bd2c-47488cf2e6aa
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Topic: [XEN PATCH 1/4] build: xen/include: use if_changed
Thread-Index: AQHYddkD4JsESJA0gkGnWWwykzQjSK1G6E+AgAAC3wCAABeVAIAAEeqAgAZa2QA=
Date: Mon, 13 Jun 2022 13:58:08 +0000
Message-ID: <882A9C24-AB9A-42FF-9500-7F4D3FFF598B@arm.com>
References: <20220601165909.46588-1-anthony.perard@citrix.com>
 <20220601165909.46588-2-anthony.perard@citrix.com>
 <6EE2C13C-7218-4063-8C73-88695C6BF4CE@arm.com>
 <0d85ad23-a232-eac3-416f-fff4d5ec1a93@suse.com>
 <258D1BE1-8E77-4748-A64C-6F080B9C1539@arm.com>
 <YqHtvZPQJOAFt/8K@perard.uk.xensource.com>
In-Reply-To: <YqHtvZPQJOAFt/8K@perard.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 7758f3ba-4ad1-4772-1aa2-08da4d44c4ab
x-ms-traffictypediagnostic:
	HE1PR08MB2633:EE_|AM5EUR03FT051:EE_|DBAPR08MB5749:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <94A7B4F1B2B99943A4C30A99E5F60F04@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR08MB2633
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1112749b-ad02-4d76-64da-08da4d44bf3e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jun 2022 13:58:17.8227
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7758f3ba-4ad1-4772-1aa2-08da4d44c4ab
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5749

Hi Anthony,

> On 9 Jun 2022, at 13:55, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Thu, Jun 09, 2022 at 11:51:20AM +0000, Bertrand Marquis wrote:
>> Hi,
>> 
>>> On 9 Jun 2022, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 09.06.2022 12:16, Bertrand Marquis wrote:
>>>> With this change, compiling for x86 is now ending up in:
>>>> CHK     include/headers99.chk
>>>> make[9]: execvp: /bin/sh: Argument list too long
>>>> make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>>>> 
>>>> Not quite sure yet why, but I wanted to signal it early as others might be
>>>> impacted.
>>>> 
>>>> Arm and arm64 builds are not impacted.
>>> 
>>> Hmm, that patch has passed the smoke push gate already, so there likely is
>>> more to it than there being an unconditional issue. I did build-test this
>>> before pushing, and I've just re-tested on a 2nd system without seeing an
>>> issue.
>> 
>> I have the problem only when building using Yocto; I did a normal build and
>> the issue does not show up.
>> 
> 
> Will the following patch help?
> 
> 
> From 0f32f749304b233c0d5574dc6b14f66e8709feba Mon Sep 17 00:00:00 2001
> From: Anthony PERARD <anthony.perard@citrix.com>
> Date: Thu, 9 Jun 2022 13:42:52 +0100
> Subject: [XEN PATCH] build,include: rework shell script for headers++.chk
> 
> The command line generated for headers++.chk by make is quite long,
> and in some environments it is too long. This issue has been seen in a
> Yocto build environment.
> 
> Error messages:
>    make[9]: execvp: /bin/sh: Argument list too long
>    make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> 
> Rework so that the foreach loop is done in shell rather than in make, to
> reduce the command line size by a lot. We also need a way to get the
> header prerequisites for some public headers, so we use a shell "case"
> switch to do some simple pattern matching. Plain variables in POSIX
> shell don't support associative arrays or variable names containing "/".
> 
> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> xen/include/Makefile | 17 +++++++++++++----
> 1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 6d9bcc19b0..ca5e868f38 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -158,13 +158,22 @@ define cmd_headerscxx_chk
> 	    touch $@.new;                                                     \
> 	    exit 0;                                                           \
> 	fi;                                                                   \
> -	$(foreach i, $(filter %.h,$^),                                        \
> -	    echo "#include "\"$(i)\"                                          \
> +	get_prereq() {                                                        \
> +	    case $$1 in                                                       \
> +	    $(foreach i, $(filter %.h,$^),                                    \
> +	    $(if $($(patsubst $(srctree)/%,%,$i)-prereq),                     \
> +		$(patsubst $(srctree)/%,%,$i)$(close)                         \
> +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq),   \
> +			-include c$(j))";;))                                  \
> +	    esac;                                                             \
> +	};                                                                    \
> +	for i in $(filter %.h,$^); do                                         \
> +	    echo "#include "\"$$i\"                                           \
> 	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
> 	      -include stdint.h -include $(srcdir)/public/xen.h               \
> -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> +	      `get_prereq $$i`                                                \
> 	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;) \
> +	    || exit $$?; echo $$i >> $@.new; done;                            \
> 	mv $@.new $@
> endef
> 
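The shell-side lookup the patch builds via $(foreach) can be illustrated standalone. This is a hand-written sketch (hypothetical header names, not the generated Makefile code) of the idiom: a "case" statement acts as a small lookup table in POSIX shell, mapping a header path to its extra compiler flags, so the per-header data no longer has to be expanded into make's command line for every header:

```shell
#!/bin/sh
# Sketch of the "case"-as-lookup-table idiom from the patch.
# Header names and flags below are made-up examples.
get_prereq() {
    case $1 in
    public/arch-x86/xen.h)
        echo "-include public/xen.h";;
    *)
        echo "";;
    esac
}

# The loop now runs in shell, so the command line passed to /bin/sh
# stays short regardless of how many headers are checked.
for i in public/xen.h public/arch-x86/xen.h; do
    printf '%s -> %s\n' "$i" "$(get_prereq "$i")"
done
```

Because the function body is emitted once and consulted per header at run time, the total argv size handed to execvp() grows far more slowly than the old per-header $(foreach) expansion.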

Just a small reminder that you need to push this patch officially :-)

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 14:00:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 14:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348334.574616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kc2-00059R-Ah; Mon, 13 Jun 2022 14:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348334.574616; Mon, 13 Jun 2022 14:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kc2-00059K-7j; Mon, 13 Jun 2022 14:00:14 +0000
Received: by outflank-mailman (input) for mailman id 348334;
 Mon, 13 Jun 2022 14:00:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ypXD=WU=efficios.com=compudj@srs-se1.protection.inumbo.net>)
 id 1o0kc0-00059E-NU
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 14:00:13 +0000
Received: from mail.efficios.com (mail.efficios.com [167.114.26.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 230fcd97-eb21-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 16:00:11 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id 2294D3B10F9;
 Mon, 13 Jun 2022 10:00:09 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id ytcVYjVUi3F7; Mon, 13 Jun 2022 10:00:08 -0400 (EDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id A9A433B14C9;
 Mon, 13 Jun 2022 10:00:08 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id yoCRG8Enc8Nh; Mon, 13 Jun 2022 10:00:08 -0400 (EDT)
Received: from mail03.efficios.com (mail03.efficios.com [167.114.26.124])
 by mail.efficios.com (Postfix) with ESMTP id 9CA723B155A;
 Mon, 13 Jun 2022 10:00:08 -0400 (EDT)
X-Inumbo-ID: 230fcd97-eb21-11ec-bd2c-47488cf2e6aa
X-Virus-Scanned: amavisd-new at efficios.com
Date: Mon, 13 Jun 2022 10:00:08 -0400 (EDT)
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: grub-devel <grub-devel@gnu.org>, Daniel Kiper <dkiper@net-space.pl>, 
	xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <1164797797.55925.1655128808531.JavaMail.zimbra@efficios.com>
In-Reply-To: <CAKf6xpv3aBrzB=ds5jSd2MbFr=VePOMfJygos6E4cLegaizU0w@mail.gmail.com>
References: <20220609185024.447922-1-mathieu.desnoyers@efficios.com> <20220609185024.447922-3-mathieu.desnoyers@efficios.com> <CAKf6xpv3aBrzB=ds5jSd2MbFr=VePOMfJygos6E4cLegaizU0w@mail.gmail.com>
Subject: Re: [PATCH v5 2/5] grub-mkconfig linux_xen: Fix quadratic algorithm
 for sorting menu items
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Originating-IP: [167.114.26.124]
X-Mailer: Zimbra 8.8.15_GA_4272 (ZimbraWebClient - FF100 (Linux)/8.8.15_GA_4257)
Thread-Topic: grub-mkconfig linux_xen: Fix quadratic algorithm for sorting menu items
Thread-Index: UGSf0pzUBdTgYTvAqJYK7vc4LOjpcA==

----- On Jun 10, 2022, at 4:00 PM, Jason Andryuk jandryuk@gmail.com wrote:

> On Thu, Jun 9, 2022 at 2:50 PM Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
>>
>> The current implementation of the 20_linux_xen script implements its
>> menu items sorting in bash with a quadratic algorithm, calling "sed",
>> "sort", "head", and "grep" to compare versions between individual lines,
>> which is annoyingly slow for kernel developers who can easily end up
>> with 50-100 kernels in their boot partition.
>>
>> This fix is ported from the 10_linux script, which has a similar
>> quadratic code pattern.
>>
>> [ Note: this is untested. I would be grateful if anyone with a Xen
>>   environment could test it before it is merged. ]
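The core of the fix described above (ported from the 10_linux script) is to order the kernel list once instead of repeatedly rescanning the remaining list for its maximum. A minimal sketch of that idea, using GNU sort -V for version ordering (file names are made-up examples; the real scripts go through helpers in grub-mkconfig_lib):

```shell
#!/bin/sh
# Illustrative: sorting the list once is O(n log n), versus the old
# pattern of finding the "latest" entry of the remaining list on every
# menu item, which is quadratic in the number of kernels.
list="/boot/vmlinuz-5.4.1 /boot/vmlinuz-5.17.3 /boot/vmlinuz-5.17.12"

# sort -V compares dotted numeric components, so 5.17.12 > 5.17.3.
reverse_sorted=$(printf '%s\n' $list | sort -rV)
printf '%s\n' "$reverse_sorted"
```

With 50-100 kernels in /boot, the difference between one sort and n rescans (each spawning sed/sort/head/grep) is what makes grub-mkconfig noticeably faster.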
> 
> Hi, Mathieu,
> 
> I tested by manually applying patch 2/5 on top of Fedora 36's
> installed /etc/grub.d/20_linux_xen, and manually applying patch 1/5 to
> /usr/share/grub/grub-mkconfig_lib.  It seems to generate grub.cfg
> menuentry-ies in the correct order.

Noted. Added your Tested-by to patch 2/5. Thanks!

> 
> Note for patch 1/5, it's best practice to use "$@" with the double
> quotes to prevent word splitting of arguments.  Doesn't really matter
> for that function at this time though.

I'll update patch 1/5 with this change. It's best practice indeed. I notice
that there are quite a few other places in grub-mkconfig_lib.in that do not
follow this best practice though.
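The quoting point is easy to demonstrate with a minimal example (hypothetical helper functions, not code from the grub scripts): "$@" forwards each argument as one word, while bare $@ re-splits arguments on whitespace:

```shell
#!/bin/sh
# Illustrative only: compare how many arguments a callee receives.
count_args() { echo "$#"; }

forward_quoted()   { count_args "$@"; }  # preserves argument boundaries
forward_unquoted() { count_args $@; }    # word-splits on whitespace

forward_quoted   "one" "two words"   # prints 2
forward_unquoted "one" "two words"   # prints 3
```

For a path argument containing a space, the unquoted form would hand the callee two broken fragments instead of one file name, which is why "$@" is the safe default.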

Thanks,

Mathieu

> 
> Regards,
> Jason

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 14:08:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 14:08:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348346.574628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kjy-00061h-5t; Mon, 13 Jun 2022 14:08:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348346.574628; Mon, 13 Jun 2022 14:08:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0kjy-00061a-2W; Mon, 13 Jun 2022 14:08:26 +0000
Received: by outflank-mailman (input) for mailman id 348346;
 Mon, 13 Jun 2022 14:08:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VFpo=WU=efficios.com=mathieu.desnoyers@srs-se1.protection.inumbo.net>)
 id 1o0kjw-00061U-G1
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 14:08:24 +0000
Received: from mail.efficios.com (mail.efficios.com [167.114.26.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 48efdc0b-eb22-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 16:08:23 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id 49B723B192F;
 Mon, 13 Jun 2022 10:08:22 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 1QIeFPky47dL; Mon, 13 Jun 2022 10:08:22 -0400 (EDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.efficios.com (Postfix) with ESMTP id D79103B1917;
 Mon, 13 Jun 2022 10:08:21 -0400 (EDT)
Received: from mail.efficios.com ([127.0.0.1])
 by localhost (mail03.efficios.com [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id TsoPpi0KGJXB; Mon, 13 Jun 2022 10:08:21 -0400 (EDT)
Received: from thinkos.internal.efficios.com (192-222-180-24.qc.cable.ebox.net
 [192.222.180.24])
 by mail.efficios.com (Postfix) with ESMTPSA id 991733B157E;
 Mon, 13 Jun 2022 10:08:21 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48efdc0b-eb22-11ec-bd2c-47488cf2e6aa
DKIM-Filter: OpenDKIM Filter v2.10.3 mail.efficios.com D79103B1917
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=efficios.com;
	s=default; t=1655129301;
	bh=6PhM3UYenJCPijS1HSQZt5FpUV3quhOUIlo9RAf1E/Q=;
	h=From:To:Date:Message-Id:MIME-Version;
	b=NEY1W6XLdvjCfizHEE972rv5aAiOd5T32YwP2j6DwPJK26tSZJEbtSjV0gGQkDMfL
	 tBBkPexh+2ASFHWsU6u8wo8SN1msTlHMKIpThfV1gVuEaQJzqsNq4zmFRVWDhmoAa1
	 J6AawhReSY0topnwAMSFldeVhxRqRl/5ha3d2MGITCzH7FTl3p3x8z99f+QryzMx8o
	 wE8VV2IghotAKwytN842o/wlrytOD4vYOEe1jt0ET9ZsAJ2aewBrg5Q/9MTT6IuTIy
	 /jnMIYhlTZY9mXEJMms0k8yp+DVzpfixVQr7jBS8dis03t6y3vhFVSUr+h8znPCIdo
	 nGTys1/7SAlBQ==
X-Virus-Scanned: amavisd-new at efficios.com
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: grub-devel@gnu.org,
	Daniel Kiper <dkiper@net-space.pl>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Jason Andryuk <jandryuk@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v6 2/5] grub-mkconfig linux_xen: Fix quadratic algorithm for sorting menu items
Date: Mon, 13 Jun 2022 10:08:23 -0400
Message-Id: <20220613140826.571036-3-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220613140826.571036-1-mathieu.desnoyers@efficios.com>
References: <20220613140826.571036-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

The current implementation of the 20_linux_xen script implements its
menu items sorting in bash with a quadratic algorithm, calling "sed",
"sort", "head", and "grep" to compare versions between individual lines,
which is annoyingly slow for kernel developers who can easily end up
with 50-100 kernels in their boot partition.

This fix is ported from the 10_linux script, which has a similar
quadratic code pattern.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org
---
Changes since v4:
- Combine sed -e '...' -e '...' into sed -e '...; ...'
---
 util/grub.d/20_linux_xen.in | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/util/grub.d/20_linux_xen.in b/util/grub.d/20_linux_xen.in
index 51a983926..4382303c1 100644
--- a/util/grub.d/20_linux_xen.in
+++ b/util/grub.d/20_linux_xen.in
@@ -237,11 +237,17 @@ esac
 # yet, so it's empty. In a submenu it will be equal to '\t' (one tab).
 submenu_indentation=""
 
+# Perform a reverse version sort on the entire xen_list and linux_list.
+# Temporarily replace the '.old' suffix by ' 1' and append ' 2' for all
+# other files to order the '.old' files after their non-old counterpart
+# in reverse-sorted order.
+
+reverse_sorted_xen_list=$(echo ${xen_list} | tr ' ' '\n' | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' | version_sort -r | sed -e 's/ 1$/.old/; s/ 2$//')
+reverse_sorted_linux_list=$(echo ${linux_list} | tr ' ' '\n' | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' | version_sort -r | sed -e 's/ 1$/.old/; s/ 2$//')
+
 is_top_level=true
 
-while [ "x${xen_list}" != "x" ] ; do
-    list="${linux_list}"
-    current_xen=`version_find_latest $xen_list`
+for current_xen in ${reverse_sorted_xen_list}; do
     xen_basename=`basename ${current_xen}`
     xen_dirname=`dirname ${current_xen}`
     rel_xen_dirname=`make_system_path_relative_to_its_root $xen_dirname`
@@ -273,8 +279,7 @@ while [ "x${xen_list}" != "x" ] ; do
        fi
     done
 
-    while [ "x$list" != "x" ] ; do
-	linux=`version_find_latest $list`
+    for linux in ${reverse_sorted_linux_list}; do
 	gettext_printf "Found linux image: %s\n" "$linux" >&2
 	basename=`basename $linux`
 	dirname=`dirname $linux`
@@ -351,13 +356,10 @@ while [ "x${xen_list}" != "x" ] ; do
 	    linux_entry "${OS}" "${version}" "${xen_version}" recovery \
 		"${GRUB_CMDLINE_LINUX_RECOVERY} ${GRUB_CMDLINE_LINUX}" "${GRUB_CMDLINE_XEN}"
 	fi
-
-	list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
     done
     if [ x"$is_top_level" != xtrue ]; then
 	echo '	}'
     fi
-    xen_list=`echo $xen_list | tr ' ' '\n' | fgrep -vx "$current_xen" | tr '\n' ' '`
 done
 
 # If at least one kernel was found, then we need to
-- 
2.30.2
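
The '.old' reordering trick used in the patch above can be demonstrated
standalone (my sketch, not part of the patch): '.old' is temporarily
replaced by ' 1' and every other entry gets ' 2' appended, so after a
reverse version sort each foo.old lands immediately after foo. GNU
`sort -V` stands in here for grub-mkconfig's version_sort helper.

```shell
# Hypothetical kernel list; in the real script this is ${xen_list} or
# ${linux_list} built by grub-mkconfig.
list="/boot/vmlinuz-5.10 /boot/vmlinuz-5.10.old /boot/vmlinuz-5.18"

echo ${list} | tr ' ' '\n' \
    | sed -e 's/\.old$/ 1/; / 1$/! s/$/ 2/' \
    | sort -Vr \
    | sed -e 's/ 1$/.old/; s/ 2$//'
# /boot/vmlinuz-5.18
# /boot/vmlinuz-5.10
# /boot/vmlinuz-5.10.old
```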



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 15:39:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 15:39:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348399.574642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mAD-0007fe-P6; Mon, 13 Jun 2022 15:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348399.574642; Mon, 13 Jun 2022 15:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mAD-0007fX-Kh; Mon, 13 Jun 2022 15:39:37 +0000
Received: by outflank-mailman (input) for mailman id 348399;
 Mon, 13 Jun 2022 15:39:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0mAB-0007fN-Us; Mon, 13 Jun 2022 15:39:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0mAB-0000KL-QU; Mon, 13 Jun 2022 15:39:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0mAB-000892-8V; Mon, 13 Jun 2022 15:39:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0mAB-0006OY-7x; Mon, 13 Jun 2022 15:39:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vNFt2hQvCM/H57KLjTH8XZbM+N5MF9ijuQRMmlO1VqM=; b=cEE2ki58UDrezLSQdzNp0wSdAt
	GZ/V5EZTPsKLPP+M8+rItsdGcu73U8tHBcf9auRWFy/CB4xivDSReCgqnqMKMzlPcMZ12uOGBnQ+u
	KiFtY3GZtJ0OdyocP+0ArgUUr9wL2nBj5p/yO03TrjiuD9wSy3xHLQv8+oT1hInCpRG0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171154-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171154: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 15:39:35 +0000

flight 171154 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171154/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   20 days
Failing since        170716  2022-05-24 11:12:06 Z   20 days   49 attempts
Testing same since   171154  2022-06-13 04:37:11 Z    0 days    1 attempts

------------------------------------------------------------
2346 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 276082 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 15:52:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 15:52:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348411.574653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mMe-0001g9-Tp; Mon, 13 Jun 2022 15:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348411.574653; Mon, 13 Jun 2022 15:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mMe-0001g2-Qr; Mon, 13 Jun 2022 15:52:28 +0000
Received: by outflank-mailman (input) for mailman id 348411;
 Mon, 13 Jun 2022 15:52:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7V0V=WU=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1o0mMd-0001fw-8U
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 15:52:27 +0000
Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com
 [2607:f8b0:4864:20::1036])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1a83659-eb30-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 17:52:26 +0200 (CEST)
Received: by mail-pj1-x1036.google.com with SMTP id a10so5990329pju.3
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 08:52:25 -0700 (PDT)
Received: from [172.21.2.253] ([50.208.55.229])
 by smtp.gmail.com with ESMTPSA id
 e11-20020a170902ed8b00b00163f8ddf160sm5320656plj.161.2022.06.13.08.52.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 13 Jun 2022 08:52:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1a83659-eb30-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=5DgRlsCxoN9bkalUqnw2erGL7IQHDQQ4iObLzReHoSI=;
        b=nkJAiEZtArgp2MoZqgKrUMmhIFGzFLs+Y6TQNOubKl3j0zpto3lXz60GoxoxmNave6
         EK3i9ue27BnOinFxBW6T1f1WD2qdK/g5EFIObDyf5+pjTFKs2fBfIa6Aez6DJqD1cv4M
         aI6FBE5ZtryGyeMGQTX8ELQT5UKfgSkg70XYTP9a3dVHDZjVBjNHOGDlbKIr/I74qxGg
         ppfotO6lwT6XGY84hgmUr/ME2aCRp9k41CPyf0NxNolzH7HQeOispfnBtMV+5e5THOKj
         VcAYNllYsg3Fpy6MtYnd3ZYbhlS5IxGZfzu123JIKb0Hs5Cq3vH6bHWe2R0FJfmZs2wa
         poHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=5DgRlsCxoN9bkalUqnw2erGL7IQHDQQ4iObLzReHoSI=;
        b=g1fjqLO3M7B35U+YGdMwg187h5GarOxzni6aoOm1DZkh9fPkenRoWRUGapXMue3wDz
         Pk1YrkOV2U3pvSqpfVmx2j55ITrwscYaZytwsNDCzR46jkG/9SG5deTkTED76yMcopF9
         sdNexW++qaKItJNzc52RH7BLHIG9IVNKCdO5JGbg/H2U7NUxEGQs00dK7dXRFsHGTERc
         gYse71A8o2hvE2B48E4jWS5Lh5rXTDp9XdZpN4Gk0ovCHt135+aJUHY4if8FbANzgNeR
         9L+LoZQfpMM+HO9eKnRx6mn2U7Kj0gA+IE/d+IeBST0MSdyu6nU8GqGqCLSX7TPMrkI+
         a+tQ==
X-Gm-Message-State: AJIora9r8wAMF5VtXDvNqzuLa3r9Z0Bo0gLygot/IBpY+FKV6ovzawnB
	mJpWKeS9rYp+YDHx1UbnXIzBbg==
X-Google-Smtp-Source: AGRyM1ubha0bGe1oEKe7F5ycVr8z5V2V2OEkRKUkD/yVVmiKMsFJwOMNv0AHYZzze4taNq5ayMAWEg==
X-Received: by 2002:a17:902:b683:b0:163:4ef2:3c40 with SMTP id c3-20020a170902b68300b001634ef23c40mr268488pls.123.1655135544401;
        Mon, 13 Jun 2022 08:52:24 -0700 (PDT)
Message-ID: <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
Date: Mon, 13 Jun 2022 08:52:21 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Content-Language: en-US
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>, xen-devel@lists.xenproject.org,
 Akihiko Odaki <akihiko.odaki@gmail.com>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>,
 Peter Maydell <peter.maydell@linaro.org>,
 Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "Canokeys.org" <contact@canokeys.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20220613113655.3693872-1-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/13/22 04:36, Gerd Hoffmann wrote:
> The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:
> 
>    Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 21:13:27 -0700)
> 
> are available in the Git repository at:
> 
>    git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request
> 
> for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:
> 
>    ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)
> 
> ----------------------------------------------------------------
> usb: add CanoKey device, fixes for ehci + redir
> ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
> virtio-gpu: scanout flush fix

This doesn't even configure:

../src/ui/keymaps/meson.build:55:4: ERROR: File ar does not exist.




r~


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 15:54:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 15:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348420.574664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mOX-0002JZ-FF; Mon, 13 Jun 2022 15:54:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348420.574664; Mon, 13 Jun 2022 15:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0mOX-0002JS-Bc; Mon, 13 Jun 2022 15:54:25 +0000
Received: by outflank-mailman (input) for mailman id 348420;
 Mon, 13 Jun 2022 15:54:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7V0V=WU=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1o0mOW-0002JM-BQ
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 15:54:24 +0000
Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com
 [2607:f8b0:4864:20::1032])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 177d8b71-eb31-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 17:54:23 +0200 (CEST)
Received: by mail-pj1-x1032.google.com with SMTP id cx11so6009475pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 08:54:23 -0700 (PDT)
Received: from [172.21.2.253] ([50.208.55.229])
 by smtp.gmail.com with ESMTPSA id
 f2-20020a170902ab8200b001616b71e5e3sm5283754plr.171.2022.06.13.08.54.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 13 Jun 2022 08:54:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 177d8b71-eb31-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=message-id:date:mime-version:user-agent:subject:content-language
         :from:to:cc:references:in-reply-to:content-transfer-encoding;
        bh=nHJVhnE9ITPoEkRn2jJu6Akvnw4FhB/RBzEVAwrZNYY=;
        b=jLOnW4vvy+7rnedI3UvibpOr7H4Lp9pwo78dNdJXRa85A0cHE+bIjtZRisvPrJLMYX
         kJieV1hk8UBfaylVOAhQgJKGURC7wg41xl1QL1bWdI9yuHWrRSCESDvr8bEJ3DXptdzw
         XpjdzOQaCEmRihXCk7xa+OwNraykTnLQD3SATeus4wMt9eFfIDUHUPMhPEiU13JrWn0g
         i7bAmDbwUT8gYdjptcSRjH6aiItfFV88Kc9s6CC6psxnXUFVtSE/P4PO4FabxmXmXN0B
         W2z8/vZOYtxe/UDLcGTTvq8s2PZmQKeHk1jgH3Tt7cAMstsn8QiZ3Ffaaw05tCvOwLWO
         mfLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:from:to:cc:references:in-reply-to
         :content-transfer-encoding;
        bh=nHJVhnE9ITPoEkRn2jJu6Akvnw4FhB/RBzEVAwrZNYY=;
        b=TPasyaMkHIZ3zHRaryneQFK/eTrCsSAFtqiHIpdm8KpoI/qt64An+TQhl7beZ9cdoI
         hlpRs6PeZri7bzrpyAmRyvpD+8Y43zw/iYVzEtDpUcodZLG6X5uH6h0+jQ+MIbf2xV7G
         qAspoMcZQnL/dVZNahf67qLZlMplYqAyIpP3y1Cz6y14JCb075sD4L2Ia5cZHcW3JouP
         RAMqfZnEinIrm0H8TYQZYe/szLIjnNZUmeIP1Slxt93qA7fXnQ7ihnykzo+XZTlx9OUW
         +sf1udO8Pbkojqy4AI/agNEaSb2s7297AijEBK3GN1JJT9KgqWuUb5Ve4D+6azU3rW2y
         /UcQ==
X-Gm-Message-State: AJIora8qz2TvVNRq//lsJbL4FzIcWQkj2aPdp0dcdw8I6QIAr2/8UJKX
	naq2qnA7h6uSFbzuJs+QqvgwOg==
X-Google-Smtp-Source: AGRyM1sD3tyn8G3zosvdRQwgK8NEK7opcw/arRNKm27jqBsG5w/THdyvq7J9m433qMbuCAwQHvwxJg==
X-Received: by 2002:a17:902:ce83:b0:166:42de:29d5 with SMTP id f3-20020a170902ce8300b0016642de29d5mr287593plg.123.1655135661614;
        Mon, 13 Jun 2022 08:54:21 -0700 (PDT)
Message-ID: <ba8e6d19-f49a-a367-918a-2f05c8793864@linaro.org>
Date: Mon, 13 Jun 2022 08:54:18 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Content-Language: en-US
From: Richard Henderson <richard.henderson@linaro.org>
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>, xen-devel@lists.xenproject.org,
 Akihiko Odaki <akihiko.odaki@gmail.com>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>,
 Peter Maydell <peter.maydell@linaro.org>,
 Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "Canokeys.org" <contact@canokeys.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
 <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
In-Reply-To: <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/13/22 08:52, Richard Henderson wrote:
> On 6/13/22 04:36, Gerd Hoffmann wrote:
>> The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:
>>
>>    Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 
>> 21:13:27 -0700)
>>
>> are available in the Git repository at:
>>
>>    git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request
>>
>> for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:
>>
>>    ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)
>>
>> ----------------------------------------------------------------
>> usb: add CanoKey device, fixes for ehci + redir
>> ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
>> virtio-gpu: scanout flush fix
> 
> This doesn't even configure:
> 
> ../src/ui/keymaps/meson.build:55:4: ERROR: File ar does not exist.

... or, rather, corrupts the source tree on the first configure, so that any retry fails:

	deleted:    ui/keymaps/ar
	deleted:    ui/keymaps/bepo
	deleted:    ui/keymaps/cz
	deleted:    ui/keymaps/da
	deleted:    ui/keymaps/de
	deleted:    ui/keymaps/de-ch
	deleted:    ui/keymaps/en-gb
	deleted:    ui/keymaps/en-us
	deleted:    ui/keymaps/es
	deleted:    ui/keymaps/et
	deleted:    ui/keymaps/fi
	deleted:    ui/keymaps/fo
	deleted:    ui/keymaps/fr
	deleted:    ui/keymaps/fr-be
	deleted:    ui/keymaps/fr-ca
	deleted:    ui/keymaps/fr-ch
	deleted:    ui/keymaps/hr
	deleted:    ui/keymaps/hu
	deleted:    ui/keymaps/is
	deleted:    ui/keymaps/it
	deleted:    ui/keymaps/ja
	deleted:    ui/keymaps/lt
	deleted:    ui/keymaps/lv
	deleted:    ui/keymaps/mk
	deleted:    ui/keymaps/nl
	deleted:    ui/keymaps/no
	deleted:    ui/keymaps/pl
	deleted:    ui/keymaps/pt
	deleted:    ui/keymaps/pt-br
	deleted:    ui/keymaps/ru
	deleted:    ui/keymaps/th
	deleted:    ui/keymaps/tr



r~


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 18:06:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 18:06:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348445.574689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRj-0000f9-Ub; Mon, 13 Jun 2022 18:05:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348445.574689; Mon, 13 Jun 2022 18:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRj-0000f2-Rb; Mon, 13 Jun 2022 18:05:51 +0000
Received: by outflank-mailman (input) for mailman id 348445;
 Mon, 13 Jun 2022 18:05:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WHVA=WU=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o0oRi-0000PT-Az
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 18:05:50 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 748dca5f-eb43-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 20:05:49 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id g25so7095280ljm.2
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 11:05:49 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 x18-20020a2e7c12000000b00253ceefb668sm1038104ljc.60.2022.06.13.11.05.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jun 2022 11:05:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 748dca5f-eb43-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=8/no4G5povAlYKWryYqoB/uPXMjWMWIBagnDt/np9cE=;
        b=R2rMeqb4WT9b1A8q0cO0SmAaBv4lW1i950DDICNzrseqBRtGrFmk3WtJa+I68ATIvG
         BhvZhig8lDoO9j5S5QapC3nEtyf4BTJDY65n8fltE0tamlHC9XcWI4hSvh+yUtnJVV9C
         exq7o+RspBLP0tWXv6yaotLCclVK+TUNp1qxW4NL9dHKM0wMIxjLcuKq71DF2i9GSvw0
         kPNsVv1mwDM3hMAomApIrloIXSskIZbCVSF9UwJ/vhLFEC6LFLWOYJ0APNqXwqMp9mzO
         cJgbmBuoLWxBiJXxr4jMKviq512TiFHXcTLvRp4gblvgY93tFyig7eEESpw1+QMGqVKN
         rTHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=8/no4G5povAlYKWryYqoB/uPXMjWMWIBagnDt/np9cE=;
        b=F0LpOe+cVHreI95XbUKjZSMH1G9Gh7KN/kuPGXxQ0Lcly5t4dxnacgi5gDVM7qA4Uc
         trdG6+Yn9EKKvSMVRwf17vFCevXo+rLQAhY3t3hGzo/9nzI96CoWIQus/JziJIwN+CWm
         rXm1x9nW83dDCe+J6KNi/ll+gUA1nIP0qlq3pErlAw9lqYf/aXfGPR8ty1ggSKDXUwo/
         RhJZLjTjYZRLO1Qba2W88MttECwmgmabBHahYaVil7g0IlCvHuUZw+bEeeT0azU1zZxY
         Uv8pB5v77tper7ZABkHrajMOVYFIJTCaMM9bPlZiEhhry4f6CtbXPUdroh3kUUa7mTsN
         5fuQ==
X-Gm-Message-State: AJIora+5uzSXpuX0OCxzDF57UfWj+M98hh/6Ii17Hp27gwZCLkOhODOJ
	UFZLyHOLl58Xizck7CqurLZGQne04Z0=
X-Google-Smtp-Source: AGRyM1vJyps2rHLh2ex6qoGbpQqG17M8OB/GkpE3XgkGPtMShLtj/QjdZiVmmyT0AHYj/desaT4erQ==
X-Received: by 2002:a2e:81c3:0:b0:255:7dee:862e with SMTP id s3-20020a2e81c3000000b002557dee862emr413459ljg.424.1655143548520;
        Mon, 13 Jun 2022 11:05:48 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V10 2/3] libxl: Introduce basic virtio-mmio support on Arm
Date: Mon, 13 Jun 2022 21:05:21 +0300
Message-Id: <1655143522-14356-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <julien.grall@arm.com>

This patch introduces helpers to allocate Virtio MMIO params
(IRQ and memory region) and to create the corresponding device
node in the guest device-tree with the allocated params. In order
to deal with multiple Virtio devices, reserve corresponding ranges.
For now, we reserve 1MB for memory regions and 10 SPIs.

As these helpers should be used for every Virtio device attached
to the guest, call them for the Virtio disk(s).

Please note, with statically allocated Virtio IRQs there is
a risk of a clash with the physical IRQs of passthrough devices.
For the first version this is fine, but we should consider allocating
the Virtio IRQs automatically. Thankfully, we know in advance which
IRQs will be used for passthrough, so we are able to choose
non-clashing ones.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - drop an extra "virtio" configuration option
   - update patch description
   - use CONTAINER_OF instead of own implementation
   - reserve ranges for Virtio MMIO params and put them
     in correct location
   - create helpers to allocate Virtio MMIO params, add
     corresponding sanity-checks
   - add comment why MMIO size 0x200 is chosen
   - update debug print
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging

Changes V6 -> V7:
   - rebase on current staging
   - add T-b and R-b tags
   - update according to the recent changes to
     "libxl: Add support for Virtio disk configuration"

Changes V7 -> V8:
   - drop T-b and R-b tags
   - make virtio_mmio_base/irq global variables to be local in
     libxl__arch_domain_prepare_config() and initialize them at
     the beginning of the function, then rework alloc_virtio_mmio_base/irq()
     to take a pointer to virtio_mmio_base/irq variables as an argument
   - update according to the recent changes to
     "libxl: Add support for Virtio disk configuration"

Changes V8 -> V9:
   - Stefano already gave his Reviewed-by, I dropped it due to the changes
   - remove the second set of parentheses for check in alloc_virtio_mmio_base()
   - clarify the updating of "nr_spis" right after num_disks loop in
     libxl__arch_domain_prepare_config() and add a comment on top of it
   - use GCSPRINTF() instead of using a buffer of a static size
     calculated by hand in make_virtio_mmio_node()

Changes V9 -> V10:
   - add Stefano's and Anthony's R-b
---
 tools/libs/light/libxl_arm.c  | 121 +++++++++++++++++++++++++++++++++++++++++-
 xen/include/public/arch-arm.h |   7 +++
 2 files changed, 126 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0..9be9b2a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,46 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+/*
+ * There are no clear requirements for the total size of a Virtio MMIO region.
+ * The size of the control registers is 0x100 and the device-specific
+ * configuration registers start at offset 0x100; however, their size depends
+ * on the device and the driver. Pick the biggest known size at the moment to
+ * cover most of the devices (also consider allowing the user to configure the
+ * size via the config file for devices not conforming to the proposed value).
+ */
+#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
+
+static uint64_t alloc_virtio_mmio_base(libxl__gc *gc, uint64_t *virtio_mmio_base)
+{
+    uint64_t base = *virtio_mmio_base;
+
+    /* Make sure we have enough reserved resources */
+    if (base + VIRTIO_MMIO_DEV_SIZE >
+        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
+            base);
+        return 0;
+    }
+    *virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
+
+    return base;
+}
+
+static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc, uint32_t *virtio_mmio_irq)
+{
+    uint32_t irq = *virtio_mmio_irq;
+
+    /* Make sure we have enough reserved resources */
+    if (irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n", irq);
+        return 0;
+    }
+    (*virtio_mmio_irq)++;
+
+    return irq;
+}
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -26,8 +66,10 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq = 0;
+    bool vuart_enabled = false, virtio_enabled = false;
+    uint64_t virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
+    uint32_t virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +81,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    for (i = 0; i < d_config->num_disks; i++) {
+        libxl_device_disk *disk = &d_config->disks[i];
+
+        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+            disk->base = alloc_virtio_mmio_base(gc, &virtio_mmio_base);
+            if (!disk->base)
+                return ERROR_FAIL;
+
+            disk->irq = alloc_virtio_mmio_irq(gc, &virtio_mmio_irq);
+            if (!disk->irq)
+                return ERROR_FAIL;
+
+            if (virtio_irq < disk->irq)
+                virtio_irq = disk->irq;
+            virtio_enabled = true;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
+                disk->vdev, disk->irq, disk->base);
+        }
+    }
+
+    /*
+     * Every virtio-mmio device uses one emulated SPI. If Virtio devices are
+     * present, make sure that we allocate enough SPIs for them.
+     * The resulting "nr_spis" needs to cover the highest possible SPI.
+     */
+    if (virtio_enabled)
+        nr_spis = max(nr_spis, virtio_irq - 32 + 1);
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +129,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -787,6 +865,37 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    const char *name = GCSPRINTF("virtio@%"PRIx64, base);
+
+    res = fdt_begin_node(fdt, name);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, VIRTIO_MMIO_DEV_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -988,6 +1097,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
     size_t fdt_size = 0;
     int pfdt_size = 0;
     libxl_domain_build_info *const info = &d_config->b_info;
+    unsigned int i;
 
     const libxl_version_info *vers;
     const struct arch_info *ainfo;
@@ -1094,6 +1204,13 @@ next_resize:
         if (d_config->num_pcidevs)
             FDT( make_vpci_node(gc, fdt, ainfo, dom) );
 
+        for (i = 0; i < d_config->num_disks; i++) {
+            libxl_device_disk *disk = &d_config->disks[i];
+
+            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+        }
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index ab05fe1..c8b6058 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -407,6 +407,10 @@ typedef uint64_t xen_callback_t;
 
 /* Physical Address Space */
 
+/* Virtio MMIO mappings */
+#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
+
 /*
  * vGIC mappings: Only one set of mapping is used by the guest.
  * Therefore they can overlap.
@@ -493,6 +497,9 @@ typedef uint64_t xen_callback_t;
 
 #define GUEST_VPL011_SPI        32
 
+#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
+#define GUEST_VIRTIO_MMIO_SPI_LAST    43
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
-- 
2.7.4
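
[Editorial note: the allocation scheme in the patch above can be sketched as
a standalone C fragment. The constants mirror the values added to
xen/include/public/arch-arm.h; the helper names alloc_base()/alloc_irq() and
the omission of libxl's gc/logging machinery are simplifications for
illustration, not the actual libxl code.]

```c
#include <assert.h>
#include <stdint.h>

/* Reserved guest window for virtio-mmio devices (mirrors arch-arm.h). */
#define VIRTIO_MMIO_DEV_SIZE        0x200ULL
#define GUEST_VIRTIO_MMIO_BASE      0x02000000ULL
#define GUEST_VIRTIO_MMIO_SIZE      0x00100000ULL
#define GUEST_VIRTIO_MMIO_SPI_FIRST 33
#define GUEST_VIRTIO_MMIO_SPI_LAST  43

/* Hand out the next 0x200-byte MMIO slot, or 0 when the 1MB window is full. */
static uint64_t alloc_base(uint64_t *next)
{
    uint64_t base = *next;

    if (base + VIRTIO_MMIO_DEV_SIZE >
        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE)
        return 0;
    *next += VIRTIO_MMIO_DEV_SIZE;
    return base;
}

/* Hand out the next reserved SPI, or 0 when the range is exhausted. */
static uint32_t alloc_irq(uint32_t *next)
{
    uint32_t irq = *next;

    if (irq > GUEST_VIRTIO_MMIO_SPI_LAST)
        return 0;
    (*next)++;
    return irq;
}
```

Two disks would thus get bases 0x02000000 and 0x02000200 with IRQs 33 and 34;
since SPI IDs start at 32, covering IRQ 34 needs nr_spis >= 34 - 32 + 1 = 3,
which matches the max() computation in the patch.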



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 18:06:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 18:06:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348447.574711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRn-0001Bf-MB; Mon, 13 Jun 2022 18:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348447.574711; Mon, 13 Jun 2022 18:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRn-0001BQ-HW; Mon, 13 Jun 2022 18:05:55 +0000
Received: by outflank-mailman (input) for mailman id 348447;
 Mon, 13 Jun 2022 18:05:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WHVA=WU=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o0oRl-0000pL-DO
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 18:05:53 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74d32ea6-eb43-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 20:05:49 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id g2so7071905ljk.5
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 11:05:49 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 x18-20020a2e7c12000000b00253ceefb668sm1038104ljc.60.2022.06.13.11.05.46
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jun 2022 11:05:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74d32ea6-eb43-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=RxMohS9AUX/qzNXgVuGSUUpTrrPlrIo78/o0B7JUlf0=;
        b=JIYpVPNUxidp5RaIe0RdHxGbtnONOpAGkfoLaLea5VhNWhy66mcGHq0uJPIw6Lym3O
         nHrxCyDdDC3JYEG8qi/VSdxXsua3vo8Lf+9y3II/iMPVXNo4bLlCAUsS/Sfl4BZEb3Rp
         g3M5sCGhYy2thnBFDmtPRG484ak6JFWgBen5MV2W+GgZ1sfJZOgLI3bvoOuCsnSpLULE
         UZ5HcTptsRl0ktFmKitLkyjKBSzvbPXEQ0VYaNOeIqRD3PCY13RXJGBuKZwMTPchLBzZ
         Fm2xaC7B7uAWqEgquRomOtL26PsqxMVtBDU6SpeRGavMn/2J9/N8OMTR1G3M1rwom15x
         xqlw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=RxMohS9AUX/qzNXgVuGSUUpTrrPlrIo78/o0B7JUlf0=;
        b=M65cbJtdGkbtpyvNCwAL8VjYoMGe61CwcbEpIfiQwjjFzVvoUUshlkOT0TWad/oXBb
         ILLCpB7GAVVmc4RhYIYu1pAhjVS81kdIV6d0OIzWNNs396cTL2T3nPiBZAn9PVKBZ7B/
         KW6MWFb6HYNbptT5fSJXSo1uszg5KK1lJ8Y0nGWns9huDlsebQvdWhRQYeJuXJ/G1NQ+
         1yZc6qxpHqBsQSG1g0BPB97swWohp0s8szsa10cV3CyH9pH5xUTruDjPgB3mMtmQ2Rwm
         gyy/qw+WcIeZxrrutoc/RYxsYb6YwhmoBER374O9rg9/Z3yjDHkiTQbUL8bpJNeoHPee
         GXgA==
X-Gm-Message-State: AJIora81Z+Z/4Ywfvd1V9qUgGx4YDGmKCYkZR93FmgwaXzoRMSzIPFFt
	ZFV6VeeYOUlxUy/q7Siwx9wUA6tXZl0=
X-Google-Smtp-Source: AGRyM1utnJ3GidJ6pfqHB05RHZIqkjAw98UjWZOkb+rbuN0ZhggMmduZMnoHEBCxSc0/EZY3bSuOwQ==
X-Received: by 2002:a05:651c:19a9:b0:255:7bb5:2cb with SMTP id bx41-20020a05651c19a900b002557bb502cbmr377827ljb.69.1655143547416;
        Mon, 13 Jun 2022 11:05:47 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
Date: Mon, 13 Jun 2022 21:05:20 +0300
Message-Id: <1655143522-14356-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting a
virtio-mmio based virtio-disk backend (emulator), which is intended
to run outside of QEMU and can run in any domain.
Although the Virtio block device is quite different from the
traditional Xen PV block device (vbd) from the toolstack's point of
view:
 - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
   written to Xenstore is fetched by the frontend currently ("vdev"
   is not passed to the frontend). But this might need to be revised
   in the future, so frontend data might be written to Xenstore in
   order to support hotplugging virtio devices or passing the backend
   domain id on arches where the device-tree is not available.
 - the ring-ref/event-channel are not used for the backend<->frontend
   communication; the proposed IPC for Virtio is IOREQ/DM,
it is still a "block device" and ought to be integrated into the
existing "disk" handling. So, re-use (and adapt) the "disk"
parsing/configuration logic to deal with Virtio devices as well.

For the immediate purpose, and to be able to extend that support to
other use-cases in the future (QEMU, virtio-pci, etc.), perform the
following actions:
- Add a new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
  that in the configuration
- Introduce new disk "specification" and "transport" fields in struct
  libxl_device_disk. Both are written to Xenstore. The transport
  field is only used for the "virtio" specification and only takes
  the value "mmio" for now.
- Introduce a new "specification" option, with the "xen" communication
  protocol being the default value.
- Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
  current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
  model.

An example of domain configuration for Virtio disk:
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']
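
[Editorial note: the mapping from such a "specification=..." keyword to the
new enum field can be sketched as below. This is a hypothetical,
simplified illustration — the real libxl parser is a flex-generated lexer
(libxlu_disk_l.l) and uses libxl_disk_specification_from_string(); the
type and function names here are invented for the sketch.]

```c
#include <string.h>

/* Illustrative stand-in for libxl's disk specification enum. */
typedef enum {
    DISK_SPEC_UNKNOWN = 0,
    DISK_SPEC_XEN,     /* default: traditional Xen PV block (vbd) */
    DISK_SPEC_VIRTIO,  /* virtio-blk, currently over virtio-mmio */
} disk_specification;

/* Map the config keyword to the enum; absence means the "xen" default. */
static disk_specification spec_from_string(const char *s)
{
    if (!s || !strcmp(s, "xen"))
        return DISK_SPEC_XEN;
    if (!strcmp(s, "virtio"))
        return DISK_SPEC_VIRTIO;
    return DISK_SPEC_UNKNOWN;  /* caller should reject the config */
}
```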

Nothing has changed for default Xen disk configuration.

Please note, this patch is not enough for virtio-disk to work
on Xen (Arm), as for every Virtio device (including a disk) we need
to allocate Virtio MMIO params (IRQ and memory region), pass
them to the backend, and also update the guest device-tree. The
subsequent patch will add these missing bits. For the current patch,
the default "irq" and "base" are just written to Xenstore.
This is not an ideal split, but this way we avoid breaking
bisectability.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - update patch description
   - don't introduce new "vdisk" configuration option with own parsing logic,
     re-use Xen PV block "disk" parsing/configuration logic for the virtio-disk
   - introduce "virtio" flag and document it's usage
   - add LIBXL_HAVE_DEVICE_DISK_VIRTIO
   - update libxlu_disk_l.[ch]
   - drop num_disks variable/MAX_VIRTIO_DISKS
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging
   - use "%"PRIu64 instead of %lu for disk->base in device_disk_add()
   - update *.gen.go files

Changes V6 -> V7:
   - rebase on current staging
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description
   - rework significantly to support more flexible configuration
     and to have a more generic base implementation that can be extended
     to other use-cases (virtio-pci, qemu, etc).

Changes V7 -> V8:
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description and comments in the code
   - use "specification" config option instead of "protocol"
   - update libxl_types.idl and code according to new fields
     in libxl_device_disk

Changes V8 -> V9:
   - update (and harden) checks in libxl__device_disk_setdefault(),
     return error in case of incorrect settings of specification
     and transport
   - remove both asserts in device_disk_add()
   - update virtio related code in libxl__disk_from_xenstore(),
     do not fail if specification node is absent, replace
     open-coded checks of fetched specification and transport by
     libxl_disk_specification_from_string() and libxl_disk_transport_from_string()
     respectively
   - s/libxl_device_disk_get_path/libxl__device_disk_get_path
   - add a comment for virtio-mmio parameters in struct libxl_device_disk

Changes V9 -> V10:
   - s/ERROR_FAIL/ERROR_INVAL in both places in libxl__device_disk_setdefault()
   - rework libxl__device_disk_get_path()
---
 docs/man/xl-disk-configuration.5.pod.in   |  38 +-
 tools/golang/xenlight/helpers.gen.go      |   8 +
 tools/golang/xenlight/types.gen.go        |  18 +
 tools/include/libxl.h                     |   7 +
 tools/libs/light/libxl_device.c           |  62 +-
 tools/libs/light/libxl_disk.c             | 140 ++++-
 tools/libs/light/libxl_internal.h         |   2 +
 tools/libs/light/libxl_types.idl          |  18 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 959 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   9 +
 tools/xl/xl_block.c                       |  11 +
 14 files changed, 797 insertions(+), 480 deletions(-)

diff --git a/docs/man/xl-disk-configuration.5.pod.in b/docs/man/xl-disk-configuration.5.pod.in
index 71d0e86..487ffef 100644
--- a/docs/man/xl-disk-configuration.5.pod.in
+++ b/docs/man/xl-disk-configuration.5.pod.in
@@ -232,7 +232,7 @@ Specifies the backend implementation to use
 
 =item Supported values
 
-phy, qdisk
+phy, qdisk, other
 
 =item Mandatory
 
@@ -244,11 +244,13 @@ Automatically determine which backend to use.
 
 =back
 
-This does not affect the guest's view of the device.  It controls
-which software implementation of the Xen backend driver is used.
+It controls which software implementation of the backend driver is used.
+Depending on the "specification" option, this may affect the guest's view
+of the device.
 
 Not all backend drivers support all combinations of other options.
-For example, "phy" does not support formats other than "raw".
+For example, "phy" and "other" do not support formats other than "raw",
+and "other" does not support specifications other than "virtio".
 Normally this option should not be specified, in which case libxl will
 automatically determine the most suitable backend.
 
@@ -344,8 +346,36 @@ can be used to disable "hole punching" for file based backends which
 were intentionally created non-sparse to avoid fragmentation of the
 file.
 
+=item B<specification>=I<SPECIFICATION>
+
+=over 4
+
+=item Description
+
+Specifies the communication protocol (specification) to use for the chosen
+"backendtype" option
+
+=item Supported values
+
+xen, virtio
+
+=item Mandatory
+
+No
+
+=item Default value
+
+xen
+
 =back
 
+Besides forcing the toolstack to use a specific backend implementation, this
+also affects the guest's view of the device. For example, "virtio" requires
+the Virtio frontend driver (virtio-blk) to be used. Please note that the
+virtual device (vdev) is not passed to the guest in that case, but it still
+must be specified for internal purposes.
+
+=back
 
 =head1 COLO Parameters
 
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index b746ff1..00f10b9 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1751,6 +1751,10 @@ x.DirectIoSafe = bool(xc.direct_io_safe)
 if err := x.DiscardEnable.fromC(&xc.discard_enable);err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+x.Specification = DiskSpecification(xc.specification)
+x.Transport = DiskTransport(xc.transport)
+x.Irq = uint32(xc.irq)
+x.Base = uint64(xc.base)
 if err := x.ColoEnable.fromC(&xc.colo_enable);err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
@@ -1788,6 +1792,10 @@ xc.direct_io_safe = C.bool(x.DirectIoSafe)
 if err := x.DiscardEnable.toC(&xc.discard_enable); err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+xc.specification = C.libxl_disk_specification(x.Specification)
+xc.transport = C.libxl_disk_transport(x.Transport)
+xc.irq = C.uint32_t(x.Irq)
+xc.base = C.uint64_t(x.Base)
 if err := x.ColoEnable.toC(&xc.colo_enable); err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b1e84d5..cc52936 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -99,6 +99,20 @@ DiskBackendUnknown DiskBackend = 0
 DiskBackendPhy DiskBackend = 1
 DiskBackendTap DiskBackend = 2
 DiskBackendQdisk DiskBackend = 3
+DiskBackendOther DiskBackend = 4
+)
+
+type DiskSpecification int
+const(
+DiskSpecificationUnknown DiskSpecification = 0
+DiskSpecificationXen DiskSpecification = 1
+DiskSpecificationVirtio DiskSpecification = 2
+)
+
+type DiskTransport int
+const(
+DiskTransportUnknown DiskTransport = 0
+DiskTransportMmio DiskTransport = 1
 )
 
 type NicType int
@@ -643,6 +657,10 @@ Readwrite int
 IsCdrom int
 DirectIoSafe bool
 DiscardEnable Defbool
+Specification DiskSpecification
+Transport DiskTransport
+Irq uint32
+Base uint64
 ColoEnable Defbool
 ColoRestoreEnable Defbool
 ColoHost string
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 51a9b6c..cd8067b 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -528,6 +528,13 @@
 #define LIBXL_HAVE_MAX_GRANT_VERSION 1
 
 /*
+ * LIBXL_HAVE_DEVICE_DISK_SPECIFICATION indicates that 'specification' and
+ * 'transport' fields (of libxl_disk_specification and libxl_disk_transport
+ * types respectively) are present in libxl_device_disk.
+ */
+#define LIBXL_HAVE_DEVICE_DISK_SPECIFICATION 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e6025d1..a38d2e2 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -289,9 +289,16 @@ static int disk_try_backend(disk_try_backend_args *a,
                             libxl_disk_backend backend)
  {
     libxl__gc *gc = a->gc;
+    libxl_disk_specification specification = a->disk->specification;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
 
+    if ((specification == LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend != LIBXL_DISK_BACKEND_OTHER) ||
+        (specification != LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend == LIBXL_DISK_BACKEND_OTHER))
+        goto bad_specification;
+
     switch (backend) {
     case LIBXL_DISK_BACKEND_PHY:
         if (a->disk->format != LIBXL_DISK_FORMAT_RAW) {
@@ -329,6 +336,29 @@ static int disk_try_backend(disk_try_backend_args *a,
         if (a->disk->script) goto bad_script;
         return backend;
 
+    case LIBXL_DISK_BACKEND_OTHER:
+        if (a->disk->format != LIBXL_DISK_FORMAT_RAW)
+            goto bad_format;
+
+        if (a->disk->script)
+            goto bad_script;
+
+        if (libxl_defbool_val(a->disk->colo_enable))
+            goto bad_colo;
+
+        if (a->disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(DEBUG, "Disk vdev=%s, is using a storage driver domain, "
+                       "skipping physical device check", a->disk->vdev);
+            return backend;
+        }
+
+        if (libxl__try_phy_backend(a->stab.st_mode))
+            return backend;
+
+        LOG(DEBUG, "Disk vdev=%s, backend other unsuitable as phys path not a "
+                   "block device", a->disk->vdev);
+        return 0;
+
     default:
         LOG(DEBUG, "Disk vdev=%s, backend %d unknown", a->disk->vdev, backend);
         return 0;
@@ -352,6 +382,12 @@ static int disk_try_backend(disk_try_backend_args *a,
     LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with colo",
         a->disk->vdev, libxl_disk_backend_to_string(backend));
     return 0;
+
+ bad_specification:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with specification %s",
+        a->disk->vdev, libxl_disk_backend_to_string(backend),
+        libxl_disk_specification_to_string(specification));
+    return 0;
 }
 
 int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
@@ -376,8 +412,9 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
     a.gc = gc;
     a.disk = disk;
 
-    LOG(DEBUG, "Disk vdev=%s spec.backend=%s", disk->vdev,
-               libxl_disk_backend_to_string(disk->backend));
+    LOG(DEBUG, "Disk vdev=%s spec.backend=%s specification=%s", disk->vdev,
+               libxl_disk_backend_to_string(disk->backend),
+               libxl_disk_specification_to_string(disk->specification));
 
     if (disk->format == LIBXL_DISK_FORMAT_EMPTY) {
         if (!disk->is_cdrom) {
@@ -392,7 +429,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         }
         memset(&a.stab, 0, sizeof(a.stab));
     } else if ((disk->backend == LIBXL_DISK_BACKEND_UNKNOWN ||
-                disk->backend == LIBXL_DISK_BACKEND_PHY) &&
+                disk->backend == LIBXL_DISK_BACKEND_PHY ||
+                disk->backend == LIBXL_DISK_BACKEND_OTHER) &&
                disk->backend_domid == LIBXL_TOOLSTACK_DOMID &&
                !disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
@@ -408,7 +446,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         ok=
             disk_try_backend(&a, LIBXL_DISK_BACKEND_PHY) ?:
             disk_try_backend(&a, LIBXL_DISK_BACKEND_QDISK) ?:
-            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP);
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP) ?:
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_OTHER);
         if (ok)
             LOG(DEBUG, "Disk vdev=%s, using backend %s",
                        disk->vdev,
@@ -441,10 +480,25 @@ char *libxl__device_disk_string_of_backend(libxl_disk_backend backend)
         case LIBXL_DISK_BACKEND_QDISK: return "qdisk";
         case LIBXL_DISK_BACKEND_TAP: return "phy";
         case LIBXL_DISK_BACKEND_PHY: return "phy";
+        case LIBXL_DISK_BACKEND_OTHER: return "other";
+        default: return NULL;
+    }
+}
+
+char *libxl__device_disk_string_of_specification(libxl_disk_specification specification)
+{
+    switch (specification) {
+        case LIBXL_DISK_SPECIFICATION_XEN: return "xen";
+        case LIBXL_DISK_SPECIFICATION_VIRTIO: return "virtio";
         default: return NULL;
     }
 }
 
+char *libxl__device_disk_string_of_transport(libxl_disk_transport transport)
+{
+    return (transport == LIBXL_DISK_TRANSPORT_MMIO ? "mmio" : NULL);
+}
+
 const char *libxl__qemu_disk_format_string(libxl_disk_format format)
 {
     switch (format) {
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index a5ca778..673b0d6 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -163,6 +163,25 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
     rc = libxl__resolve_domid(gc, disk->backend_domname, &disk->backend_domid);
     if (rc < 0) return rc;
 
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
+        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
+        LOGD(ERROR, domid, "Transport is only supported for specification virtio");
+        return ERROR_INVAL;
+    }
+
+    /* Force transport mmio for specification virtio for now */
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
+              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
+            LOGD(ERROR, domid, "Unsupported transport for specification virtio");
+            return ERROR_INVAL;
+        }
+        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
+    }
+
     /* Force Qdisk backend for CDROM devices of guests with a device model. */
     if (disk->is_cdrom != 0 &&
         libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
@@ -204,6 +223,9 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         case LIBXL_DISK_BACKEND_QDISK:
             device->backend_kind = LIBXL__DEVICE_KIND_QDISK;
             break;
+        case LIBXL_DISK_BACKEND_OTHER:
+            device->backend_kind = LIBXL__DEVICE_KIND_VIRTIO_DISK;
+            break;
         default:
             LOGD(ERROR, domid, "Unrecognized disk backend type: %d",
                  disk->backend);
@@ -212,7 +234,8 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
 
     device->domid = domid;
     device->devid = devid;
-    device->kind  = LIBXL__DEVICE_KIND_VBD;
+    device->kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
 
     return 0;
 }
@@ -330,7 +353,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+            case LIBXL_DISK_BACKEND_OTHER:
+                dev = disk->pdev_path;
 
+                flexarray_append(back, "params");
+                flexarray_append(back, dev);
+
+                assert(device->backend_kind == LIBXL__DEVICE_KIND_VIRTIO_DISK);
+                break;
             case LIBXL_DISK_BACKEND_TAP:
                 LOG(ERROR, "blktap is not supported");
                 rc = ERROR_FAIL;
@@ -386,6 +416,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append_pair(back, "discard-enable",
                               libxl_defbool_val(disk->discard_enable) ?
                               "1" : "0");
+        flexarray_append(back, "specification");
+        flexarray_append(back, libxl__device_disk_string_of_specification(disk->specification));
+        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+            flexarray_append(back, "transport");
+            flexarray_append(back, libxl__device_disk_string_of_transport(disk->transport));
+            flexarray_append_pair(back, "base", GCSPRINTF("%"PRIu64, disk->base));
+            flexarray_append_pair(back, "irq", GCSPRINTF("%u", disk->irq));
+        }
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, GCSPRINTF("%d", disk->backend_domid));
@@ -532,6 +570,53 @@ static int libxl__disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
     }
     libxl_string_to_backend(ctx, tmp, &(disk->backend));
 
+    tmp = libxl__xs_read(gc, XBT_NULL,
+                         GCSPRINTF("%s/specification", libxl_path));
+    if (!tmp) {
+        LOG(DEBUG, "Missing xenstore node %s/specification, assuming specification xen", libxl_path);
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+    } else {
+        rc = libxl_disk_specification_from_string(tmp, &disk->specification);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/specification", libxl_path);
+            goto cleanup;
+        }
+    }
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/transport", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        rc = libxl_disk_transport_from_string(tmp, &disk->transport);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        if (disk->transport != LIBXL_DISK_TRANSPORT_MMIO) {
+            LOG(ERROR, "Only transport mmio is expected for specification virtio");
+            goto cleanup;
+        }
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/base", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/base", libxl_path);
+            goto cleanup;
+        }
+        disk->base = strtoull(tmp, NULL, 10); /* base is uint64 */
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/irq", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/irq", libxl_path);
+            goto cleanup;
+        }
+        disk->irq = strtoul(tmp, NULL, 10);
+    }
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          GCSPRINTF("%s/dev", libxl_path), &len);
     if (!disk->vdev) {
@@ -575,6 +660,41 @@ cleanup:
     return rc;
 }
 
+static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
+                                       char **path)
+{
+    const char *xen_dir, *virtio_dir;
+    char *xen_path, *virtio_path;
+    int rc;
+
+    /* default path */
+    xen_path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, xen_path, &xen_dir);
+    if (rc)
+        return rc;
+
+    virtio_path = GCSPRINTF("%s/device/%s",
+                            libxl__xs_libxl_path(gc, domid),
+                            libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, virtio_path, &virtio_dir);
+    if (rc)
+        return rc;
+
+    if (xen_dir && virtio_dir) {
+        LOGD(ERROR, domid, "Invalid configuration, both xen and virtio paths are present");
+        return ERROR_INVAL;
+    } else if (virtio_dir)
+        *path = virtio_path;
+    else
+        *path = xen_path;
+
+    return 0;
+}
+
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
                               const char *vdev, libxl_device_disk *disk)
 {
@@ -588,10 +708,12 @@ int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
 
     libxl_device_disk_init(disk);
 
-    libxl_path = libxl__domain_device_libxl_path(gc, domid, devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+    rc = libxl__device_disk_get_path(gc, domid, &libxl_path);
+    if (rc)
+        return rc;
 
-    rc = libxl__disk_from_xenstore(gc, libxl_path, devid, disk);
+    rc = libxl__disk_from_xenstore(gc, GCSPRINTF("%s/%d", libxl_path, devid),
+                                   devid, disk);
 
     GC_FREE;
     return rc;
@@ -605,16 +727,19 @@ int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
     char *fe_path, *libxl_path;
     char *val;
     int rc;
+    libxl__device_kind kind;
 
     diskinfo->backend = NULL;
 
     diskinfo->devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
 
-    /* tap devices entries in xenstore are written as vbd devices. */
+    /* tap device entries in xenstore are written as vbd/virtio_disk devices. */
+    kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
     fe_path = libxl__domain_device_frontend_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     libxl_path = libxl__domain_device_libxl_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     diskinfo->backend = xs_read(ctx->xsh, XBT_NULL,
                                 GCSPRINTF("%s/backend", libxl_path), NULL);
     if (!diskinfo->backend) {
@@ -1418,6 +1543,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 #define libxl__device_disk_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
+    .get_path    = libxl__device_disk_get_path,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index bdef5a6..cb9e8b3 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1493,6 +1493,8 @@ _hidden char * libxl__domain_pvcontrol_read(libxl__gc *gc,
 
 /* from xl_device */
 _hidden char *libxl__device_disk_string_of_backend(libxl_disk_backend backend);
+_hidden char *libxl__device_disk_string_of_specification(libxl_disk_specification specification);
+_hidden char *libxl__device_disk_string_of_transport(libxl_disk_transport transport);
 _hidden char *libxl__device_disk_string_of_format(libxl_disk_format format);
 _hidden const char *libxl__qemu_disk_format_string(libxl_disk_format format);
 _hidden int libxl__device_disk_set_backend(libxl__gc*, libxl_device_disk*);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2a42da2..858e32b 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -130,6 +130,18 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (1, "PHY"),
     (2, "TAP"),
     (3, "QDISK"),
+    (4, "OTHER"),
+    ])
+
+libxl_disk_specification = Enumeration("disk_specification", [
+    (0, "UNKNOWN"),
+    (1, "XEN"),
+    (2, "VIRTIO"),
+    ])
+
+libxl_disk_transport = Enumeration("disk_transport", [
+    (0, "UNKNOWN"),
+    (1, "MMIO"),
     ])
 
 libxl_nic_type = Enumeration("nic_type", [
@@ -704,6 +716,12 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("specification", libxl_disk_specification),
+    ("transport", libxl_disk_transport),
+    # Note that virtio-mmio parameters (irq and base) are for internal use
+    # by libxl and can't be modified.
+    ("irq", uint32),
+    ("base", uint64),
     # Note that the COLO configuration settings should be considered unstable.
     # They may change incompatibly in future versions of Xen.
     ("colo_enable", libxl_defbool),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index e5e6b2d..f55915e 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -297,6 +297,8 @@ int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend
         *backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(s, "qdisk")) {
         *backend = LIBXL_DISK_BACKEND_QDISK;
+    } else if (!strcmp(s, "other")) {
+        *backend = LIBXL_DISK_BACKEND_OTHER;
     } else if (!strcmp(s, "tap")) {
         p = strchr(s, ':');
         if (!p) {
diff --git a/tools/libs/util/libxlu_disk_l.c b/tools/libs/util/libxlu_disk_l.c
index 32d4b74..bb1337c 100644
--- a/tools/libs/util/libxlu_disk_l.c
+++ b/tools/libs/util/libxlu_disk_l.c
@@ -549,8 +549,8 @@ static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-#define YY_NUM_RULES 36
-#define YY_END_OF_BUFFER 37
+#define YY_NUM_RULES 37
+#define YY_END_OF_BUFFER 38
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -558,74 +558,77 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static const flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[594] =
     {   0,
-       35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
-    16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   33,   34,
-       36,   33,   34,   36,   33,   34,   36,   33,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   35,   36,
-       36,   33,   33, 8193,   33, 8193,   33,16385, 8193,   33,
-     8193,   33,   33, 8224,   33,16416,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   35,
-     8193,   33, 8193,   33, 8193, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   28, 8224,   33,16416,   33,   33,   15,
-       33,   33,   33,   33,   33,   33,   33,   33,   33, 8217,
-     8224,   33,16409,16416,   33,   33,   31, 8224,   33,16416,
-       33, 8216, 8224,   33,16408,16416,   33,   33, 8219, 8224,
-       33,16411,16416,   33,   33,   33,   33,   33,   28, 8224,
-
-       33,   28, 8224,   33,   28,   33,   28, 8224,   33,    3,
-       33,   15,   33,   33,   33,   33,   33,   30, 8224,   33,
-    16416,   33,   33,   33, 8217, 8224,   33, 8217, 8224,   33,
-     8217,   33, 8217, 8224,   33,   33,   31, 8224,   33,   31,
-     8224,   33,   31,   33,   31, 8224, 8216, 8224,   33, 8216,
-     8224,   33, 8216,   33, 8216, 8224,   33, 8219, 8224,   33,
-     8219, 8224,   33, 8219,   33, 8219, 8224,   33,   33,   10,
-       33,   33,   28, 8224,   33,   28, 8224,   33,   28, 8224,
-       28,   33,   28,   33,    3,   33,   33,   33,   33,   33,
-       33,   33,   30, 8224,   33,   30, 8224,   33,   30,   33,
-
-       30, 8224,   33,   33,   29, 8224,   33,16416, 8217, 8224,
-       33, 8217, 8224,   33, 8217, 8224, 8217,   33, 8217,   33,
-       33,   31, 8224,   33,   31, 8224,   33,   31, 8224,   31,
-       33,   31, 8216, 8224,   33, 8216, 8224,   33, 8216, 8224,
-     8216,   33, 8216,   33, 8219, 8224,   33, 8219, 8224,   33,
-     8219, 8224, 8219,   33, 8219,   33,   33,   10,   23,   10,
-        7,   33,   33,   33,   33,   33,   33,   33,   13,   33,
-       30, 8224,   33,   30, 8224,   33,   30, 8224,   30,   33,
-       30,    2,   33,   29, 8224,   33,   29, 8224,   33,   29,
-       33,   29, 8224,   16,   33,   33,   11,   33,   22,   10,
-
-       10,   23,    7,   23,    7,   33,    8,   33,   33,   33,
-       33,    6,   33,   13,   33,    2,   23,    2,   33,   29,
-     8224,   33,   29, 8224,   33,   29, 8224,   29,   33,   29,
-       16,   33,   33,   11,   23,   11,   26, 8224,   33,16416,
-       22,   23,   22,    7,    7,   23,   33,    8,   23,    8,
-       33,   33,   33,   33,    6,   23,    6,    6,   23,    6,
-       23,   33,    2,    2,   23,   33,   33,   11,   11,   23,
-       26, 8224,   33,   26, 8224,   33,   26,   33,   26, 8224,
-       22,   23,   33,    8,    8,   23,   33,   33,   17,   18,
-        6,    6,   23,    6,    6,   33,   33,   14,   33,   26,
-
-     8224,   33,   26, 8224,   33,   26, 8224,   26,   33,   26,
-       33,   33,   33,   17,   23,   17,   18,   23,   18,    6,
-        6,   33,   33,   14,   33,   20,    9,   19,   17,   17,
-       23,   18,   18,   23,    6,    5,    6,   33,   21,   20,
-       23,   20,    9,   23,    9,   19,   23,   19,    4,    6,
-        5,    6,   33,   21,   23,   21,   20,   20,   23,    9,
-        9,   23,   19,   19,   23,    4,    6,   12,   33,   21,
-       21,   23,   12,   33
+       36,   36,   38,   34,   35,   37, 8193,   34,   35,   37,
+    16385, 8193,   34,   37,16385,   34,   35,   37,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   34,   35,
+       37,   34,   35,   37,   34,   35,   37,   34,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   36,   37,
+       37,   34,   34, 8193,   34, 8193,   34,16385, 8193,   34,
+     8193,   34,   34, 8225,   34,16417,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       36, 8193,   34, 8193,   34, 8193, 8225,   34, 8225,   34,
+     8225,   24,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34, 8225,   34, 8225,
+       34, 8225,   24,   34,   34,   29, 8225,   34,16417,   34,
+       34,   16,   34,   34,   34,   34,   34,   34,   34,   34,
+       34, 8218, 8225,   34,16410,16417,   34,   34,   32, 8225,
+       34,16417,   34, 8217, 8225,   34,16409,16417,   34,   34,
+       34, 8220, 8225,   34,16412,16417,   34,   34,   34,   34,
+
+       34,   29, 8225,   34,   29, 8225,   34,   29,   34,   29,
+     8225,   34,    3,   34,   16,   34,   34,   34,   34,   34,
+       31, 8225,   34,16417,   34,   34,   34, 8218, 8225,   34,
+     8218, 8225,   34, 8218,   34, 8218, 8225,   34,   34,   32,
+     8225,   34,   32, 8225,   34,   32,   34,   32, 8225, 8217,
+     8225,   34, 8217, 8225,   34, 8217,   34, 8217, 8225,   34,
+       34, 8220, 8225,   34, 8220, 8225,   34, 8220,   34, 8220,
+     8225,   34,   34,   11,   34,   34,   29, 8225,   34,   29,
+     8225,   34,   29, 8225,   29,   34,   29,   34,    3,   34,
+       34,   34,   34,   34,   34,   34,   31, 8225,   34,   31,
+
+     8225,   34,   31,   34,   31, 8225,   34,   34,   30, 8225,
+       34,16417, 8218, 8225,   34, 8218, 8225,   34, 8218, 8225,
+     8218,   34, 8218,   34,   34,   32, 8225,   34,   32, 8225,
+       34,   32, 8225,   32,   34,   32, 8217, 8225,   34, 8217,
+     8225,   34, 8217, 8225, 8217,   34, 8217,   34,   34, 8220,
+     8225,   34, 8220, 8225,   34, 8220, 8225, 8220,   34, 8220,
+       34,   34,   11,   24,   11,    7,   34,   34,   34,   34,
+       34,   34,   34,   14,   34,   31, 8225,   34,   31, 8225,
+       34,   31, 8225,   31,   34,   31,    2,   34,   30, 8225,
+       34,   30, 8225,   34,   30,   34,   30, 8225,   17,   34,
+
+       34,   12,   34,   34,   23,   11,   11,   24,    7,   24,
+        7,   34,    8,   34,   34,   34,   34,    6,   34,   14,
+       34,    2,   24,    2,   34,   30, 8225,   34,   30, 8225,
+       34,   30, 8225,   30,   34,   30,   17,   34,   34,   12,
+       24,   12,   34,   27, 8225,   34,16417,   23,   24,   23,
+        7,    7,   24,   34,    8,   24,    8,   34,   34,   34,
+       34,    6,   24,    6,    6,   24,    6,   24,   34,    2,
+        2,   24,   34,   34,   12,   12,   24,   34,   27, 8225,
+       34,   27, 8225,   34,   27,   34,   27, 8225,   23,   24,
+       34,    8,    8,   24,   34,   34,   18,   19,    6,    6,
+
+       24,    6,    6,   34,   34,   15,   34,   34,   27, 8225,
+       34,   27, 8225,   34,   27, 8225,   27,   34,   27,   34,
+       34,   34,   18,   24,   18,   19,   24,   19,    6,    6,
+       34,   34,   15,   34,   34,   21,    9,   20,   18,   18,
+       24,   19,   19,   24,    6,    5,    6,   34,   22,   34,
+       21,   24,   21,    9,   24,    9,   20,   24,   20,    4,
+        6,    5,    6,   34,   22,   24,   22,   34,   21,   21,
+       24,    9,    9,   24,   20,   20,   24,    4,    6,   13,
+       34,   22,   22,   24,   10,   13,   34,   10,   24,   10,
+       10,   10,   24
+
     } ;
 
-static const flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[373] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -633,39 +636,41 @@ static const flex_int16_t yy_accept[356] =
        74,   76,   79,   81,   82,   83,   84,   87,   87,   88,
        89,   90,   91,   92,   93,   94,   95,   96,   97,   98,
        99,  100,  101,  102,  103,  104,  105,  106,  107,  108,
-      109,  110,  111,  113,  115,  116,  118,  120,  121,  122,
+      109,  110,  111,  112,  114,  116,  117,  119,  121,  122,
       123,  124,  125,  126,  127,  128,  129,  130,  131,  132,
       133,  134,  135,  136,  137,  138,  139,  140,  141,  142,
-      143,  144,  145,  146,  148,  150,  151,  152,  153,  154,
-
-      158,  159,  160,  162,  163,  164,  165,  166,  167,  168,
-      169,  170,  175,  176,  177,  181,  182,  187,  188,  189,
-      194,  195,  196,  197,  198,  199,  202,  205,  207,  209,
-      210,  212,  214,  215,  216,  217,  218,  222,  223,  224,
-      225,  228,  231,  233,  235,  236,  237,  240,  243,  245,
-      247,  250,  253,  255,  257,  258,  261,  264,  266,  268,
-      269,  270,  271,  272,  273,  276,  279,  281,  283,  284,
-      285,  287,  288,  289,  290,  291,  292,  293,  296,  299,
-      301,  303,  304,  305,  309,  312,  315,  317,  319,  320,
-      321,  322,  325,  328,  330,  332,  333,  336,  339,  341,
-
-      343,  344,  345,  348,  351,  353,  355,  356,  357,  358,
-      360,  361,  362,  363,  364,  365,  366,  367,  368,  369,
-      371,  374,  377,  379,  381,  382,  383,  384,  387,  390,
-      392,  394,  396,  397,  398,  399,  400,  401,  403,  405,
-      406,  407,  408,  409,  410,  411,  412,  413,  414,  416,
-      418,  419,  420,  423,  426,  428,  430,  431,  433,  434,
-      436,  437,  441,  443,  444,  445,  447,  448,  450,  451,
-      452,  453,  454,  455,  457,  458,  460,  462,  463,  464,
-      466,  467,  468,  469,  471,  474,  477,  479,  481,  483,
-      484,  485,  487,  488,  489,  490,  491,  492,  494,  495,
-
-      496,  497,  498,  500,  503,  506,  508,  510,  511,  512,
-      513,  514,  516,  517,  519,  520,  521,  522,  523,  524,
-      526,  527,  528,  529,  530,  532,  533,  535,  536,  538,
-      539,  540,  542,  543,  545,  546,  548,  549,  551,  553,
-      554,  556,  557,  558,  560,  561,  563,  564,  566,  568,
-      570,  571,  573,  575,  575
+      143,  144,  145,  146,  147,  148,  150,  152,  153,  154,
+
+      155,  156,  160,  161,  162,  164,  165,  166,  167,  168,
+      169,  170,  171,  172,  177,  178,  179,  183,  184,  189,
+      190,  191,  192,  197,  198,  199,  200,  201,  202,  205,
+      208,  210,  212,  213,  215,  217,  218,  219,  220,  221,
+      225,  226,  227,  228,  231,  234,  236,  238,  239,  240,
+      243,  246,  248,  250,  253,  256,  258,  260,  261,  262,
+      265,  268,  270,  272,  273,  274,  275,  276,  277,  280,
+      283,  285,  287,  288,  289,  291,  292,  293,  294,  295,
+      296,  297,  300,  303,  305,  307,  308,  309,  313,  316,
+      319,  321,  323,  324,  325,  326,  329,  332,  334,  336,
+
+      337,  340,  343,  345,  347,  348,  349,  350,  353,  356,
+      358,  360,  361,  362,  363,  365,  366,  367,  368,  369,
+      370,  371,  372,  373,  374,  376,  379,  382,  384,  386,
+      387,  388,  389,  392,  395,  397,  399,  401,  402,  403,
+      404,  405,  406,  407,  409,  411,  412,  413,  414,  415,
+      416,  417,  418,  419,  420,  422,  424,  425,  426,  429,
+      432,  434,  436,  437,  439,  440,  442,  443,  444,  448,
+      450,  451,  452,  454,  455,  457,  458,  459,  460,  461,
+      462,  464,  465,  467,  469,  470,  471,  473,  474,  475,
+      476,  478,  479,  482,  485,  487,  489,  491,  492,  493,
+
+      495,  496,  497,  498,  499,  500,  502,  503,  504,  505,
+      506,  508,  509,  512,  515,  517,  519,  520,  521,  522,
+      523,  525,  526,  528,  529,  530,  531,  532,  533,  535,
+      536,  537,  538,  539,  540,  542,  543,  545,  546,  548,
+      549,  550,  551,  553,  554,  556,  557,  559,  560,  562,
+      564,  565,  567,  568,  569,  570,  572,  573,  575,  576,
+      578,  580,  582,  583,  585,  586,  588,  590,  591,  592,
+      594,  594
     } ;
 
 static const YY_CHAR yy_ec[256] =
@@ -708,216 +713,224 @@ static const YY_CHAR yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static const flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[443] =
     {   0,
-        0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
-       45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
-       69,   70,  889,   71,  888,   75,    0,  905,  893,  905,
-       91,   94,    0,    0,  103,  886,  112,    0,   89,   98,
-      113,   92,  114,   99,  100,   48,  121,  116,  119,   74,
-      124,  129,  123,  135,  132,  133,  137,  134,  138,  139,
-      141,    0,  155,    0,    0,  164,    0,    0,  849,  142,
-      152,  164,  140,  161,  165,  166,  167,  168,  169,  173,
-      174,  178,  176,  180,  184,  208,  189,  183,  192,  195,
-      215,  191,  193,  223,    0,    0,  905,  208,  204,  236,
-
-      219,  209,  238,  196,  237,  831,  242,  815,  241,  224,
-      243,  261,  244,  259,  277,  266,  286,  250,  288,  298,
-      249,  283,  274,  282,  294,  308,    0,  310,    0,  295,
-      305,  905,  308,  306,  313,  314,  342,  319,  316,  320,
-      331,    0,  349,    0,  342,  344,  356,    0,  358,    0,
-      365,    0,  367,    0,  354,  375,    0,  377,    0,  363,
-      356,  809,  327,  322,  384,    0,    0,    0,    0,  379,
-      905,  382,  384,  386,  390,  372,  392,  403,    0,  410,
-        0,  407,  413,  423,  426,    0,    0,    0,    0,  409,
-      424,  435,    0,    0,    0,    0,  437,    0,    0,    0,
-
-        0,  433,  444,    0,    0,    0,    0,  391,  440,  781,
-      905,  769,  439,  445,  444,  447,  449,  454,  453,  399,
-      464,    0,    0,    0,    0,  757,  465,  476,    0,  478,
-        0,  479,  476,  753,  462,  490,  749,  905,  745,  905,
-      483,  737,  424,  485,  487,  490,  500,  493,  905,  729,
-      905,  502,  518,    0,    0,    0,    0,  905,  498,  721,
-      905,  527,  713,    0,  705,  905,  495,  697,  905,  365,
-      521,  528,  530,  685,  905,  534,  540,  540,  657,  905,
-      537,  542,  650,  905,  553,    0,  557,    0,    0,  551,
-      641,  905,  558,  557,  633,  614,  613,  905,  547,  555,
-
-      563,  565,  569,  584,    0,    0,    0,    0,  583,  570,
-      585,  612,  905,  601,  905,  522,  580,  589,  594,  905,
-      600,  585,  563,  520,  905,  514,  905,  586,  486,  597,
-      480,  441,  905,  416,  905,  345,  905,  334,  905,  601,
-      254,  905,  242,  905,  200,  905,  151,  905,  905,  607,
-       86,  905,  905,  905,  620,  624,  627,  631,  635,  639,
-      643,  647,  651,  655,  659,  663,  667,  671,  675,  679,
-      683,  687,  691,  695,  699,  703,  707,  711,  715,  719,
-      723,  727,  731,  735,  739,  743,  747,  751,  755,  759,
-      763,  767,  771,  775,  779,  783,  787,  791,  795,  799,
-
-      803,  807,  811,  815,  819,  823,  827,  831,  835,  839,
-      843,  847,  851,  855,  859,  863,  867,  871,  875,  879,
-      883,  887,  891
+        0,    0,  936,  935,  937,  932,   33,   36,  940,  940,
+       45,   63,   31,   42,   51,   52,  925,   33,   65,   67,
+       69,   70,  924,   71,  923,   75,    0,  940,  928,  940,
+       91,   95,    0,    0,  104,  921,  113,    0,   91,   99,
+      114,   92,  115,   80,  100,   48,  119,  121,  122,   74,
+      123,  128,  131,  129,  125,  133,  135,  136,  137,  143,
+      138,  145,    0,  157,    0,    0,  168,    0,    0,  926,
+      140,  146,  165,  159,  152,  164,  155,  168,  171,  176,
+      177,  170,  180,  175,  184,  188,  212,  191,  185,  192,
+      193,  194,  219,  212,  199,  230,    0,    0,  940,  195,
+
+      200,  239,  235,  197,  246,  225,  226,  919,  244,  918,
+      243,  236,  245,  266,  248,  264,  282,  271,  291,  248,
+      270,  254,  300,  279,  296,  302,  288,  303,  311,    0,
+      315,    0,  311,  318,  940,  313,  319,  208,  313,  344,
+      321,  331,  325,  333,    0,  352,    0,  345,  347,  359,
+        0,  361,    0,  368,    0,  370,    0,  322,  366,  379,
+        0,  381,    0,  359,  357,  923,  382,  384,  392,    0,
+        0,    0,    0,  387,  940,  386,  390,  392,  329,  401,
+      397,  409,    0,  417,    0,  399,  412,  426,  429,    0,
+        0,    0,    0,  412,  427,  438,    0,    0,    0,    0,
+
+      440,    0,    0,    0,    0,  436,  405,  447,    0,    0,
+        0,    0,  438,  443,  922,  940,  921,  442,  450,  449,
+      452,  454,  459,  458,  453,  469,    0,    0,    0,    0,
+      920,  470,  481,    0,  483,    0,  484,  481,  919,  368,
+      467,  495,  918,  940,  917,  940,  488,  916,  479,  490,
+      492,  495,  505,  498,  940,  915,  940,  507,  523,    0,
+        0,    0,    0,  940,  503,  864,  940,  846,  532,  836,
+        0,  824,  940,  516,  796,  940,  513,  530,  536,  538,
+      784,  940,  542,  535,  547,  772,  940,  549,  551,  768,
+      940,  502,  562,    0,  564,    0,    0,  562,  764,  940,
+
+      544,  557,  760,  752,  744,  940,  552,  568,  571,  568,
+      581,  577,  588,    0,    0,    0,    0,  589,  580,  591,
+      736,  940,  728,  940,  601,  602,  597,  599,  940,  603,
+      720,  712,  700,  672,  940,  665,  940,  610,  656,  603,
+      648,  607,  629,  940,  627,  940,  625,  940,  624,  940,
+      607,  574,  940,  614,  572,  940,  491,  940,  433,  940,
+      940,  622,  389,  940,  303,  940,  261,  940,  204,  940,
+      940,  635,  639,  642,  646,  650,  654,  658,  662,  666,
+      670,  674,  678,  682,  686,  690,  694,  698,  702,  706,
+      710,  714,  718,  722,  726,  730,  734,  738,  742,  746,
+
+      750,  754,  758,  762,  766,  770,  774,  778,  782,  786,
+      790,  794,  798,  802,  806,  810,  814,  818,  822,  826,
+      830,  834,  838,  842,  846,  850,  854,  858,  862,  866,
+      870,  874,  878,  882,  886,  890,  894,  898,  902,  906,
+      910,  914
     } ;
 
-static const flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[443] =
     {   0,
-      354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
-      358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  359,  354,  356,  354,
-      360,  357,  361,  361,  362,   12,  356,  363,   12,   12,
-       12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
+      371,    1,  372,  372,  371,  373,  374,  374,  371,  371,
+      375,  375,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  376,  371,  373,  371,
+      377,  374,  378,  378,  379,   12,  373,  380,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  359,  360,  361,  361,  364,  365,  365,  354,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  362,   12,   12,   12,   12,
-       12,   12,   12,  364,  365,  365,  354,   12,   12,  366,
-
+       12,   12,  376,  377,  378,  378,  381,  382,  382,  371,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  367,   86,   86,  368,   12,  369,   12,   12,  370,
-       12,   12,   12,   12,   12,  371,  372,  366,  372,   12,
-       12,  354,   86,   12,   12,   12,  373,   12,   12,   12,
-      374,  375,  367,  375,   86,   86,  376,  377,  368,  377,
-      378,  379,  369,  379,   12,  380,  381,  370,  381,   12,
-       12,  382,   12,   12,  371,  372,  372,  383,  383,   12,
-      354,   86,   86,   86,   12,   12,   12,  384,  385,  373,
-      385,   12,   12,  386,  374,  375,  375,  387,  387,   86,
-       86,  376,  377,  377,  388,  388,  378,  379,  379,  389,
-
-      389,   12,  380,  381,  381,  390,  390,   12,   12,  391,
-      354,  392,   86,   12,   86,   86,   86,   12,   86,   12,
-      384,  385,  385,  393,  393,  394,   86,  395,  396,  386,
-      396,   86,   86,  397,   12,  398,  391,  354,  399,  354,
-       86,  400,   12,   86,   86,   86,  401,   86,  354,  402,
-      354,   86,  395,  396,  396,  403,  403,  354,   86,  404,
-      354,  405,  406,  406,  399,  354,   86,  407,  354,   12,
-       86,   86,   86,  408,  354,  408,  408,   86,  402,  354,
-       86,   86,  404,  354,  409,  410,  405,  410,  406,   86,
-      407,  354,   12,   86,  411,  412,  408,  354,  408,  408,
-
-       86,   86,   86,  409,  410,  410,  413,  413,   86,   12,
-       86,  414,  354,  415,  354,  408,  408,   86,   86,  354,
-      416,  417,  418,  414,  354,  415,  354,  408,  408,   86,
-      419,  420,  354,  421,  354,  422,  354,  408,  354,   86,
-      423,  354,  420,  354,  421,  354,  422,  354,  354,   86,
-      423,  354,  354,    0,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354
+       12,   12,   12,   12,   12,   12,  379,   12,   12,   12,
+       12,   12,   12,   12,   12,  381,  382,  382,  371,   12,
+
+       12,  383,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,  384,   87,   87,  385,   12,  386,   12,
+       12,   12,  387,   12,   12,   12,   12,   12,  388,  389,
+      383,  389,   12,   12,  371,   87,   12,   12,   12,  390,
+       12,   12,   12,  391,  392,  384,  392,   87,   87,  393,
+      394,  385,  394,  395,  396,  386,  396,   12,   12,  397,
+      398,  387,  398,   12,   12,  399,   12,   12,  388,  389,
+      389,  400,  400,   12,  371,   87,   87,   87,   12,   12,
+       12,  401,  402,  390,  402,   12,   12,  403,  391,  392,
+      392,  404,  404,   87,   87,  393,  394,  394,  405,  405,
+
+      395,  396,  396,  406,  406,   12,   12,  397,  398,  398,
+      407,  407,   12,   12,  408,  371,  409,   87,   12,   87,
+       87,   87,   12,   87,   12,  401,  402,  402,  410,  410,
+      411,   87,  412,  413,  403,  413,   87,   87,  414,   12,
+       12,  415,  408,  371,  416,  371,   87,  417,   12,   87,
+       87,   87,  418,   87,  371,  419,  371,   87,  412,  413,
+      413,  420,  420,  371,   87,  421,  371,   12,  422,  423,
+      423,  416,  371,   87,  424,  371,   12,   87,   87,   87,
+      425,  371,  425,  425,   87,  419,  371,   87,   87,  421,
+      371,   12,  426,  427,  422,  427,  423,   87,  424,  371,
+
+       12,   87,  428,  429,  425,  371,  425,  425,   87,   87,
+       87,   12,  426,  427,  427,  430,  430,   87,   12,   87,
+      431,  371,  432,  371,  425,  425,   87,   87,  371,   12,
+      433,  434,  435,  431,  371,  432,  371,  425,  425,   87,
+      436,   12,  437,  371,  438,  371,  439,  371,  425,  371,
+       87,  440,  371,   12,  437,  371,  438,  371,  439,  371,
+      371,   87,  440,  371,  441,  371,  442,  371,  442,  371,
+        0,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371
     } ;
 
-static const flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[975] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
        17,   17,   20,   17,   21,   22,   23,   24,   25,   17,
        26,   17,   17,   17,   32,   32,   33,   32,   32,   33,
        36,   34,   36,   42,   34,   29,   29,   29,   30,   35,
-       50,   36,   37,   38,   43,   44,   39,   36,   79,   45,
+       50,   36,   37,   38,   43,   44,   39,   36,   80,   45,
        36,   36,   40,   29,   29,   29,   30,   35,   46,   48,
        37,   38,   41,   47,   36,   49,   36,   53,   36,   36,
-       36,   56,   58,   36,   36,   55,   82,   60,   51,  342,
-       54,   61,   52,   29,   64,   32,   32,   33,   36,   65,
-
-       70,   36,   34,   29,   29,   29,   30,   36,   36,   36,
-       29,   38,   66,   66,   66,   67,   66,   71,   74,   66,
-       68,   72,   36,   36,   73,   36,   77,   78,   36,   76,
-       36,   53,   36,   36,   75,   85,   80,   83,   36,   86,
-       84,   36,   36,   36,   36,   81,   36,   36,   36,   36,
-       36,   36,   93,   89,  337,   98,   88,   29,   64,  101,
-       90,   36,   91,   65,   92,   87,   29,   95,   89,   99,
-       36,  100,   96,   36,   36,   36,   36,   36,   36,  106,
-      105,   85,   36,   36,  102,   36,  107,   36,  103,   36,
-      109,  112,   36,   36,  104,  108,  115,  110,   36,  117,
-
-       36,   36,   36,  335,   36,   36,  122,  111,   29,   29,
-       29,   30,  118,   36,  116,   29,   38,   36,   36,  113,
-      114,  119,  120,  123,   36,   29,   95,  121,   36,  134,
-      131,   96,  130,   36,  125,  124,  126,  126,   66,  127,
-      126,  132,  133,  126,  129,  333,   36,   36,  135,  137,
-       36,   36,   36,  140,  139,   35,   35,  352,   36,   36,
-       85,  141,  141,   66,  142,  141,  160,  145,  141,  144,
-       35,   35,   89,  117,  155,   36,  146,  147,  147,   66,
-      148,  147,  162,   36,  147,  150,  151,  151,   66,  152,
-      151,   36,   36,  151,  154,  120,  161,   36,  156,  156,
-
-       66,  157,  156,   36,   36,  156,  159,  164,  171,  163,
-       29,  166,   29,  168,   36,   36,  167,  170,  169,   35,
-       35,  172,   36,   36,  173,   36,  213,  184,   36,   36,
-      175,   36,  174,   29,  186,  212,   36,  349,  183,  187,
-      177,  176,  178,  178,   66,  179,  178,  182,  348,  178,
-      181,   29,  188,   35,   35,   35,   35,  189,   29,  193,
-       29,  195,  190,   36,  194,   36,  196,   29,  198,   29,
-      200,  191,   36,  199,   36,  201,  219,   29,  204,   29,
-      206,   36,  202,  205,  209,  207,   29,  166,   36,  293,
-      208,  214,  167,   35,   35,   35,   35,   35,   35,   36,
-
-       36,   36,  249,  218,  220,   29,  222,  216,   36,  217,
-      235,  223,   29,  224,  215,  226,   36,  227,  225,  346,
-       35,   35,   36,  228,  228,   66,  229,  228,   29,  186,
-      228,  231,  232,   36,  187,  233,   35,   29,  193,   29,
-      198,  234,   36,  194,  344,  199,   29,  204,  236,   36,
-       35,  241,  205,  242,   36,   35,   35,  270,   35,   35,
-       35,   35,  247,   36,   35,   35,   29,  222,  244,  262,
-      248,   36,  223,  243,  245,  246,   35,  252,   29,  254,
-       29,  256,  258,  342,  255,  259,  257,   35,   35,  339,
-       35,   35,   69,  264,   35,   35,   35,   35,   35,   35,
-
-      267,   35,   35,  275,   35,   35,   35,   35,  271,   35,
-       35,  276,  277,   35,   35,  272,  278,  315,  273,  281,
-       29,  254,  290,  313,  282,  275,  255,  285,  285,   66,
-      286,  285,   35,   35,  285,  288,  295,  298,  296,   35,
-       35,   35,   35,  298,  301,  328,  299,  294,   35,   35,
-      275,   35,   35,   35,  303,   29,  305,  300,  275,   29,
-      307,  306,   35,   35,  302,  308,  337,   36,   35,   35,
-      309,  310,  320,  316,   35,   35,   35,   35,  322,   36,
-       35,   35,  317,  275,  319,  311,   29,  305,  335,  275,
-      318,  321,  306,  323,   35,   35,   35,   35,  330,  329,
-
-       35,   35,  331,  333,  327,   35,   35,  338,   35,   35,
-      353,  340,   35,   35,  350,  325,  275,  315,   35,   35,
-       27,   27,   27,   27,   29,   29,   29,   31,   31,   31,
-       31,   36,   36,   36,   36,   62,  313,   62,   62,   63,
-       63,   63,   63,   65,  269,   65,   65,   35,   35,   35,
-       35,   69,   69,  261,   69,   94,   94,   94,   94,   96,
-      251,   96,   96,  128,  128,  128,  128,  143,  143,  143,
-      143,  149,  149,  149,  149,  153,  153,  153,  153,  158,
-      158,  158,  158,  165,  165,  165,  165,  167,  298,  167,
-      167,  180,  180,  180,  180,  185,  185,  185,  185,  187,
-
-      292,  187,  187,  192,  192,  192,  192,  194,  240,  194,
-      194,  197,  197,  197,  197,  199,  289,  199,  199,  203,
-      203,  203,  203,  205,  284,  205,  205,  210,  210,  210,
-      210,  169,  280,  169,  169,  221,  221,  221,  221,  223,
-      269,  223,  223,  230,  230,  230,  230,  189,  266,  189,
-      189,  196,  211,  196,  196,  201,  261,  201,  201,  207,
-      251,  207,  207,  237,  237,  237,  237,  239,  239,  239,
-      239,  225,  240,  225,  225,  250,  250,  250,  250,  253,
-      253,  253,  253,  255,  238,  255,  255,  260,  260,  260,
-      260,  263,  263,  263,  263,  265,  265,  265,  265,  268,
-
-      268,  268,  268,  274,  274,  274,  274,  279,  279,  279,
-      279,  257,  211,  257,  257,  283,  283,  283,  283,  287,
-      287,  287,  287,  264,  138,  264,  264,  291,  291,  291,
-      291,  297,  297,  297,  297,  304,  304,  304,  304,  306,
-      136,  306,  306,  312,  312,  312,  312,  314,  314,  314,
-      314,  308,   97,  308,  308,  324,  324,  324,  324,  326,
-      326,  326,  326,  332,  332,  332,  332,  334,  334,  334,
-      334,  336,  336,  336,  336,  341,  341,  341,  341,  343,
-      343,  343,  343,  345,  345,  345,  345,  347,  347,  347,
-      347,  351,  351,  351,  351,   36,   30,   59,   57,   36,
-
-       30,  354,   28,   28,    5,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       36,   56,   58,   36,   36,   55,   83,   61,   51,   36,
+       54,   62,   52,   29,   65,   59,   32,   32,   33,   66,
+
+       36,   36,   71,   34,   29,   29,   29,   30,   36,   36,
+       77,   29,   38,   67,   67,   67,   68,   67,   75,   72,
+       67,   69,   73,   36,   36,   74,   78,   79,   36,   53,
+       36,   36,   36,   87,   36,   76,   84,   36,   36,   85,
+       36,   81,   36,   86,   36,   36,   36,   36,   82,   36,
+       92,   95,   36,  100,   36,   36,   89,   90,   88,   29,
+       65,   36,   91,  101,   36,   66,   90,   93,   36,   94,
+       29,   97,  102,   36,   36,  104,   98,   36,  103,   36,
+       36,  107,  108,  106,   36,   36,   36,  105,   86,   36,
+      109,  110,  111,   36,   36,  114,  112,   36,  117,  119,
+
+       36,   36,   36,   36,   36,  121,   36,  368,   36,   36,
+      120,  113,   29,   29,   29,   30,  118,   36,  134,   29,
+       38,   36,  127,  115,  116,  122,  123,  125,   36,  126,
+      128,  124,   29,   97,   36,   36,  180,  138,   98,  129,
+      129,   67,  130,  129,   36,   36,  129,  132,  133,  135,
+      136,  140,   36,   36,   36,   36,  142,   36,  137,   35,
+       35,  123,   86,   36,  370,  143,  144,  144,   67,  145,
+      144,  148,  158,  144,  147,   35,   35,   90,  119,   36,
+       36,  149,  150,  150,   67,  151,  150,  159,   36,  150,
+      153,  154,  154,   67,  155,  154,  164,   36,  154,  157,
+
+      160,  160,   67,  161,  160,   36,  368,  160,  163,  165,
+      166,   36,   36,   29,  170,  167,  168,   29,  172,  171,
+       36,  175,   36,  173,   35,   35,  176,   36,   36,  177,
+       36,   36,  188,  174,   36,   29,  190,  178,   36,  181,
+       36,  191,  223,  179,  182,  182,   67,  183,  182,  186,
+      206,  182,  185,  187,   29,  192,   35,   35,   35,   35,
+      193,   29,  197,   29,  199,  194,   36,  198,   36,  200,
+       29,  202,   29,  204,  195,   36,  203,   36,  205,  268,
+      207,   29,  209,   29,  211,  214,  213,  210,  218,  212,
+      217,   36,  353,   36,   29,  170,   36,   35,   35,  219,
+
+      171,   35,   35,   35,   35,  224,   36,  231,   36,  225,
+       36,   29,  227,  221,   36,  222,  232,  228,  220,   29,
+      229,   36,  240,   35,   35,  230,  233,  233,   67,  234,
+      233,   29,  190,  233,  236,  237,  348,  191,  238,   35,
+       29,  197,   29,  202,  239,   36,  198,   36,  203,   29,
+      209,  242,   36,   35,  247,  210,  255,  241,  248,   36,
+       35,   35,   36,   35,   35,   35,   35,  253,   36,   35,
+       35,   29,  227,  250,  269,  254,   36,  228,  249,  251,
+      252,   35,  258,   29,  260,   29,  262,  264,   36,  261,
+      265,  263,   35,   35,  346,   35,   35,   70,  271,   35,
+
+       35,   35,   35,   35,   35,  274,   35,   35,  282,   35,
+       35,   36,  277,  278,   35,   35,  283,  284,   35,   35,
+      279,  285,   36,  280,  288,   29,  260,   35,   35,  289,
+      312,  261,  293,  293,   67,  294,  293,  301,  306,  293,
+      296,   35,   35,  298,  303,  306,  304,   35,   35,   35,
+       35,  309,  308,   36,  307,  282,  302,  319,   35,   35,
+       35,   35,   35,  311,   29,  314,   29,  316,   35,   35,
+      315,  282,  317,   35,   35,  344,  310,  364,  325,   35,
+       35,  318,   35,   35,  329,  320,   36,  328,  332,   36,
+       29,  314,   35,   35,  330,  326,  315,  331,  327,  333,
+
+       35,   35,   35,   35,  282,  282,  340,  341,   35,   35,
+       35,   35,   36,  282,   35,   35,   36,  351,   35,   35,
+      362,  339,  365,   36,  338,  366,  342,  361,  360,  354,
+      358,  349,  356,   35,   35,   27,   27,   27,   27,   29,
+       29,   29,   31,   31,   31,   31,   36,   36,   36,   36,
+       63,  353,   63,   63,   64,   64,   64,   64,   66,  350,
+       66,   66,   35,   35,   35,   35,   70,   70,  324,   70,
+       96,   96,   96,   96,   98,  322,   98,   98,  131,  131,
+      131,  131,  146,  146,  146,  146,  152,  152,  152,  152,
+      156,  156,  156,  156,  162,  162,  162,  162,  169,  169,
+
+      169,  169,  171,  348,  171,  171,  184,  184,  184,  184,
+      189,  189,  189,  189,  191,  346,  191,  191,  196,  196,
+      196,  196,  198,  344,  198,  198,  201,  201,  201,  201,
+      203,  337,  203,  203,  208,  208,  208,  208,  210,  335,
+      210,  210,  215,  215,  215,  215,  173,  282,  173,  173,
+      226,  226,  226,  226,  228,  324,  228,  228,  235,  235,
+      235,  235,  193,  322,  193,  193,  200,  276,  200,  200,
+      205,  267,  205,  205,  212,  257,  212,  212,  243,  243,
+      243,  243,  245,  245,  245,  245,  230,  306,  230,  230,
+      256,  256,  256,  256,  259,  259,  259,  259,  261,  300,
+
+      261,  261,  266,  266,  266,  266,  270,  270,  270,  270,
+      272,  272,  272,  272,  275,  275,  275,  275,  281,  281,
+      281,  281,  286,  286,  286,  286,  263,  246,  263,  263,
+      290,  290,  290,  290,  295,  295,  295,  295,  271,  297,
+      271,  271,  299,  299,  299,  299,  305,  305,  305,  305,
+      313,  313,  313,  313,  315,  292,  315,  315,  321,  321,
+      321,  321,  323,  323,  323,  323,  317,  291,  317,  317,
+      334,  334,  334,  334,  336,  336,  336,  336,  343,  343,
+      343,  343,  345,  345,  345,  345,  347,  347,  347,  347,
+      352,  352,  352,  352,  355,  355,  355,  355,  357,  357,
+
+      357,  357,  359,  359,  359,  359,  363,  363,  363,  363,
+      367,  367,  367,  367,  369,  369,  369,  369,  287,  276,
+      273,  216,  267,  257,  246,  244,  216,  141,  139,   99,
+       36,   30,   60,   57,   36,   30,  371,   28,   28,    5,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
-static const flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[975] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -927,101 +940,105 @@ static const flex_int16_t yy_chk[940] =
        18,   14,   11,   11,   13,   14,   11,   46,   46,   14,
        15,   16,   11,   12,   12,   12,   12,   12,   14,   16,
        12,   12,   12,   15,   19,   16,   20,   20,   21,   22,
-       24,   22,   24,   50,   26,   21,   50,   26,   19,  351,
-       20,   26,   19,   31,   31,   32,   32,   32,   39,   31,
-
-       39,   42,   32,   35,   35,   35,   35,   40,   44,   45,
-       35,   35,   37,   37,   37,   37,   37,   39,   42,   37,
-       37,   40,   41,   43,   41,   48,   45,   45,   49,   44,
-       47,   47,   53,   51,   43,   53,   48,   51,   52,   54,
-       52,   55,   56,   58,   54,   49,   57,   59,   60,   73,
-       61,   70,   60,   61,  347,   70,   56,   63,   63,   73,
-       58,   71,   59,   63,   59,   55,   66,   66,   57,   71,
-       74,   72,   66,   72,   75,   76,   77,   78,   79,   78,
-       77,   79,   80,   81,   74,   83,   80,   82,   75,   84,
-       82,   85,   88,   85,   76,   81,   87,   83,   87,   89,
-
-       92,   89,   93,  345,   90,  104,   92,   84,   86,   86,
-       86,   86,   90,   99,   88,   86,   86,   98,  102,   86,
-       86,   91,   91,   93,   91,   94,   94,   91,  101,  104,
-      102,   94,  101,  110,   99,   98,  100,  100,  100,  100,
-      100,  103,  103,  100,  100,  343,  105,  103,  105,  107,
-      109,  107,  111,  110,  109,  113,  113,  341,  121,  118,
-      111,  112,  112,  112,  112,  112,  121,  113,  112,  112,
-      114,  114,  116,  116,  118,  116,  114,  115,  115,  115,
-      115,  115,  123,  123,  115,  115,  117,  117,  117,  117,
-      117,  124,  122,  117,  117,  119,  122,  119,  120,  120,
-
-      120,  120,  120,  125,  130,  120,  120,  125,  131,  124,
-      126,  126,  128,  128,  131,  134,  126,  130,  128,  133,
-      133,  133,  135,  136,  133,  139,  164,  140,  138,  140,
-      134,  164,  133,  141,  141,  163,  163,  338,  139,  141,
-      136,  135,  137,  137,  137,  137,  137,  138,  336,  137,
-      137,  143,  143,  145,  145,  146,  146,  143,  147,  147,
-      149,  149,  145,  155,  147,  161,  149,  151,  151,  153,
-      153,  146,  160,  151,  270,  153,  176,  156,  156,  158,
-      158,  176,  155,  156,  161,  158,  165,  165,  170,  270,
-      160,  170,  165,  172,  172,  173,  173,  174,  174,  175,
-
-      208,  177,  220,  175,  177,  178,  178,  173,  220,  174,
-      208,  178,  180,  180,  172,  182,  182,  183,  180,  334,
-      190,  190,  183,  184,  184,  184,  184,  184,  185,  185,
-      184,  184,  190,  243,  185,  191,  191,  192,  192,  197,
-      197,  202,  202,  192,  332,  197,  203,  203,  209,  209,
-      213,  213,  203,  214,  214,  215,  215,  243,  216,  216,
-      217,  217,  218,  218,  219,  219,  221,  221,  215,  235,
-      219,  235,  221,  214,  216,  217,  227,  227,  228,  228,
-      230,  230,  232,  331,  228,  233,  230,  233,  233,  329,
-      232,  232,  236,  236,  241,  241,  244,  244,  245,  245,
-
-      241,  246,  246,  247,  248,  248,  267,  267,  244,  259,
-      259,  247,  247,  252,  252,  245,  248,  326,  246,  252,
-      253,  253,  267,  324,  259,  316,  253,  262,  262,  262,
-      262,  262,  271,  271,  262,  262,  272,  276,  273,  272,
-      272,  273,  273,  277,  278,  316,  276,  271,  281,  281,
-      299,  278,  278,  282,  282,  285,  285,  277,  300,  287,
-      287,  285,  290,  290,  281,  287,  323,  293,  294,  294,
-      290,  293,  303,  299,  301,  301,  302,  302,  310,  310,
-      303,  303,  300,  317,  302,  294,  304,  304,  322,  328,
-      301,  309,  304,  311,  309,  309,  311,  311,  318,  317,
-
-      318,  318,  319,  321,  314,  319,  319,  328,  330,  330,
-      350,  330,  340,  340,  340,  312,  297,  296,  350,  350,
-      355,  355,  355,  355,  356,  356,  356,  357,  357,  357,
-      357,  358,  358,  358,  358,  359,  295,  359,  359,  360,
-      360,  360,  360,  361,  291,  361,  361,  362,  362,  362,
-      362,  363,  363,  283,  363,  364,  364,  364,  364,  365,
-      279,  365,  365,  366,  366,  366,  366,  367,  367,  367,
-      367,  368,  368,  368,  368,  369,  369,  369,  369,  370,
-      370,  370,  370,  371,  371,  371,  371,  372,  274,  372,
-      372,  373,  373,  373,  373,  374,  374,  374,  374,  375,
-
-      268,  375,  375,  376,  376,  376,  376,  377,  265,  377,
-      377,  378,  378,  378,  378,  379,  263,  379,  379,  380,
-      380,  380,  380,  381,  260,  381,  381,  382,  382,  382,
-      382,  383,  250,  383,  383,  384,  384,  384,  384,  385,
-      242,  385,  385,  386,  386,  386,  386,  387,  239,  387,
-      387,  388,  237,  388,  388,  389,  234,  389,  389,  390,
-      226,  390,  390,  391,  391,  391,  391,  392,  392,  392,
-      392,  393,  212,  393,  393,  394,  394,  394,  394,  395,
-      395,  395,  395,  396,  210,  396,  396,  397,  397,  397,
-      397,  398,  398,  398,  398,  399,  399,  399,  399,  400,
-
-      400,  400,  400,  401,  401,  401,  401,  402,  402,  402,
-      402,  403,  162,  403,  403,  404,  404,  404,  404,  405,
-      405,  405,  405,  406,  108,  406,  406,  407,  407,  407,
-      407,  408,  408,  408,  408,  409,  409,  409,  409,  410,
-      106,  410,  410,  411,  411,  411,  411,  412,  412,  412,
-      412,  413,   69,  413,  413,  414,  414,  414,  414,  415,
-      415,  415,  415,  416,  416,  416,  416,  417,  417,  417,
-      417,  418,  418,  418,  418,  419,  419,  419,  419,  420,
-      420,  420,  420,  421,  421,  421,  421,  422,  422,  422,
-      422,  423,  423,  423,  423,   36,   29,   25,   23,   17,
-
-        6,    5,    4,    3,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       24,   22,   24,   50,   26,   21,   50,   26,   19,   44,
+       20,   26,   19,   31,   31,   24,   32,   32,   32,   31,
+
+       39,   42,   39,   32,   35,   35,   35,   35,   40,   45,
+       44,   35,   35,   37,   37,   37,   37,   37,   42,   39,
+       37,   37,   40,   41,   43,   41,   45,   45,   47,   47,
+       48,   49,   51,   54,   55,   43,   51,   52,   54,   52,
+       53,   48,   56,   53,   57,   58,   59,   61,   49,   71,
+       59,   61,   60,   71,   62,   72,   56,   62,   55,   64,
+       64,   75,   58,   72,   77,   64,   57,   60,   74,   60,
+       67,   67,   73,   76,   73,   75,   67,   78,   74,   82,
+       79,   78,   79,   77,   84,   80,   81,   76,   80,   83,
+       81,   82,   83,   85,   89,   86,   84,   86,   88,   90,
+
+       88,   90,   91,   92,  100,   92,  104,  369,   95,  101,
+       91,   85,   87,   87,   87,   87,   89,  138,  104,   87,
+       87,   94,  100,   87,   87,   93,   93,   94,   93,   95,
+      101,   93,   96,   96,  106,  107,  138,  107,   96,  102,
+      102,  102,  102,  102,  103,  112,  102,  102,  103,  105,
+      105,  109,  111,  109,  113,  105,  111,  120,  106,  115,
+      115,  122,  113,  122,  367,  112,  114,  114,  114,  114,
+      114,  115,  120,  114,  114,  116,  116,  118,  118,  121,
+      118,  116,  117,  117,  117,  117,  117,  121,  124,  117,
+      117,  119,  119,  119,  119,  119,  124,  127,  119,  119,
+
+      123,  123,  123,  123,  123,  125,  365,  123,  123,  125,
+      126,  126,  128,  129,  129,  127,  128,  131,  131,  129,
+      133,  134,  139,  131,  136,  136,  136,  134,  137,  136,
+      141,  158,  143,  133,  143,  144,  144,  136,  179,  139,
+      142,  144,  179,  137,  140,  140,  140,  140,  140,  141,
+      158,  140,  140,  142,  146,  146,  148,  148,  149,  149,
+      146,  150,  150,  152,  152,  148,  165,  150,  164,  152,
+      154,  154,  156,  156,  149,  159,  154,  240,  156,  240,
+      159,  160,  160,  162,  162,  165,  164,  160,  168,  162,
+      167,  167,  363,  168,  169,  169,  174,  176,  176,  174,
+
+      169,  177,  177,  178,  178,  180,  181,  186,  186,  181,
+      180,  182,  182,  177,  207,  178,  187,  182,  176,  184,
+      184,  187,  207,  194,  194,  184,  188,  188,  188,  188,
+      188,  189,  189,  188,  188,  194,  359,  189,  195,  195,
+      196,  196,  201,  201,  206,  206,  196,  213,  201,  208,
+      208,  214,  214,  218,  218,  208,  225,  213,  219,  219,
+      220,  220,  225,  221,  221,  222,  222,  223,  223,  224,
+      224,  226,  226,  220,  241,  224,  241,  226,  219,  221,
+      222,  232,  232,  233,  233,  235,  235,  237,  249,  233,
+      238,  235,  238,  238,  357,  237,  237,  242,  242,  247,
+
+      247,  250,  250,  251,  251,  247,  252,  252,  253,  254,
+      254,  292,  249,  250,  265,  265,  253,  253,  258,  258,
+      251,  254,  277,  252,  258,  259,  259,  274,  274,  265,
+      292,  259,  269,  269,  269,  269,  269,  277,  284,  269,
+      269,  278,  278,  274,  279,  283,  280,  279,  279,  280,
+      280,  285,  284,  301,  283,  307,  278,  301,  285,  285,
+      288,  288,  289,  289,  293,  293,  295,  295,  302,  302,
+      293,  308,  295,  298,  298,  355,  288,  352,  307,  310,
+      310,  298,  309,  309,  311,  302,  312,  310,  319,  319,
+      313,  313,  311,  311,  312,  308,  313,  318,  309,  320,
+
+      318,  318,  320,  320,  325,  326,  327,  328,  327,  327,
+      328,  328,  330,  338,  340,  340,  342,  340,  351,  351,
+      351,  326,  354,  354,  325,  362,  330,  349,  347,  342,
+      345,  338,  343,  362,  362,  372,  372,  372,  372,  373,
+      373,  373,  374,  374,  374,  374,  375,  375,  375,  375,
+      376,  341,  376,  376,  377,  377,  377,  377,  378,  339,
+      378,  378,  379,  379,  379,  379,  380,  380,  336,  380,
+      381,  381,  381,  381,  382,  334,  382,  382,  383,  383,
+      383,  383,  384,  384,  384,  384,  385,  385,  385,  385,
+      386,  386,  386,  386,  387,  387,  387,  387,  388,  388,
+
+      388,  388,  389,  333,  389,  389,  390,  390,  390,  390,
+      391,  391,  391,  391,  392,  332,  392,  392,  393,  393,
+      393,  393,  394,  331,  394,  394,  395,  395,  395,  395,
+      396,  323,  396,  396,  397,  397,  397,  397,  398,  321,
+      398,  398,  399,  399,  399,  399,  400,  305,  400,  400,
+      401,  401,  401,  401,  402,  304,  402,  402,  403,  403,
+      403,  403,  404,  303,  404,  404,  405,  299,  405,  405,
+      406,  290,  406,  406,  407,  286,  407,  407,  408,  408,
+      408,  408,  409,  409,  409,  409,  410,  281,  410,  410,
+      411,  411,  411,  411,  412,  412,  412,  412,  413,  275,
+
+      413,  413,  414,  414,  414,  414,  415,  415,  415,  415,
+      416,  416,  416,  416,  417,  417,  417,  417,  418,  418,
+      418,  418,  419,  419,  419,  419,  420,  272,  420,  420,
+      421,  421,  421,  421,  422,  422,  422,  422,  423,  270,
+      423,  423,  424,  424,  424,  424,  425,  425,  425,  425,
+      426,  426,  426,  426,  427,  268,  427,  427,  428,  428,
+      428,  428,  429,  429,  429,  429,  430,  266,  430,  430,
+      431,  431,  431,  431,  432,  432,  432,  432,  433,  433,
+      433,  433,  434,  434,  434,  434,  435,  435,  435,  435,
+      436,  436,  436,  436,  437,  437,  437,  437,  438,  438,
+
+      438,  438,  439,  439,  439,  439,  440,  440,  440,  440,
+      441,  441,  441,  441,  442,  442,  442,  442,  256,  248,
+      245,  243,  239,  231,  217,  215,  166,  110,  108,   70,
+       36,   29,   25,   23,   17,    6,    5,    4,    3,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -1160,9 +1177,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -1199,9 +1224,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
-#line 1202 "libxlu_disk_l.c"
+#line 1227 "libxlu_disk_l.c"
 
-#line 1204 "libxlu_disk_l.c"
+#line 1229 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1477,13 +1502,13 @@ YY_DECL
 		}
 
 	{
-#line 177 "libxlu_disk_l.l"
+#line 185 "libxlu_disk_l.l"
 
 
-#line 180 "libxlu_disk_l.l"
+#line 188 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1486 "libxlu_disk_l.c"
+#line 1511 "libxlu_disk_l.c"
 
 	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
@@ -1515,14 +1540,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 355 )
+				if ( yy_current_state >= 372 )
 					yy_c = yy_meta[yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 354 );
+		while ( yy_current_state != 371 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1572,152 +1597,158 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 186 "libxlu_disk_l.l"
+#line 194 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 188 "libxlu_disk_l.l"
+#line 196 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 190 "libxlu_disk_l.l"
+#line 198 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 199 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 193 "libxlu_disk_l.l"
+#line 201 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 194 "libxlu_disk_l.l"
+#line 202 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+#line 204 "libxlu_disk_l.l"
+{ STRIP(','); setspecification(DPC,FROMEQUALS); }
 	YY_BREAK
 case 11:
 /* rule 11 can match eol */
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+#line 206 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 12:
+/* rule 12 can match eol */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
-{ DPC->disk->direct_io_safe = 1; }
+#line 207 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 case 13:
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+#line 208 "libxlu_disk_l.l"
+{ DPC->disk->direct_io_safe = 1; }
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+#line 209 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
 	YY_BREAK
-/* Note that the COLO configuration settings should be considered unstable.
-  * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
+#line 210 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
+/* Note that the COLO configuration settings should be considered unstable.
+  * They may change incompatibly in future versions of Xen. */
 case 16:
 YY_RULE_SETUP
-#line 205 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
+#line 213 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 17:
-/* rule 17 can match eol */
 YY_RULE_SETUP
-#line 206 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
+#line 214 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
-#line 207 "libxlu_disk_l.l"
-{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
+#line 215 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
-#line 208 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
+#line 216 "libxlu_disk_l.l"
+{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
-#line 209 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+#line 217 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 210 "libxlu_disk_l.l"
+#line 218 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+	YY_BREAK
+case 22:
+/* rule 22 can match eol */
+YY_RULE_SETUP
+#line 219 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 22:
+case 23:
 YY_RULE_SETUP
-#line 214 "libxlu_disk_l.l"
+#line 223 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 23:
-/* rule 23 can match eol */
+case 24:
+/* rule 24 can match eol */
 YY_RULE_SETUP
-#line 218 "libxlu_disk_l.l"
+#line 227 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 225 "libxlu_disk_l.l"
+#line 234 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 231 "libxlu_disk_l.l"
+#line 240 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1731,65 +1762,65 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 26:
+case 27:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 253 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 27:
+case 28:
 YY_RULE_SETUP
-#line 245 "libxlu_disk_l.l"
+#line 254 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 28:
+case 29:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 246 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 29:
+case 30:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 247 "libxlu_disk_l.l"
+#line 256 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 30:
+case 31:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 248 "libxlu_disk_l.l"
+#line 257 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 31:
+case 32:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 249 "libxlu_disk_l.l"
+#line 258 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 32:
-/* rule 32 can match eol */
+case 33:
+/* rule 33 can match eol */
 YY_RULE_SETUP
-#line 251 "libxlu_disk_l.l"
+#line 260 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 33:
-/* rule 33 can match eol */
+case 34:
+/* rule 34 can match eol */
 YY_RULE_SETUP
-#line 258 "libxlu_disk_l.l"
+#line 267 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1816,27 +1847,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 34:
+case 35:
 YY_RULE_SETUP
-#line 284 "libxlu_disk_l.l"
+#line 293 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 35:
+case 36:
 YY_RULE_SETUP
-#line 288 "libxlu_disk_l.l"
+#line 297 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 36:
+case 37:
 YY_RULE_SETUP
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1839 "libxlu_disk_l.c"
+#line 1870 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -2104,7 +2135,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 355 )
+			if ( yy_current_state >= 372 )
 				yy_c = yy_meta[yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
@@ -2128,11 +2159,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 355 )
+		if ( yy_current_state >= 372 )
 			yy_c = yy_meta[yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
-	yy_is_jam = (yy_current_state == 354);
+	yy_is_jam = (yy_current_state == 371);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2941,4 +2972,4 @@ void yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
diff --git a/tools/libs/util/libxlu_disk_l.h b/tools/libs/util/libxlu_disk_l.h
index 6abeecf..509aad6 100644
--- a/tools/libs/util/libxlu_disk_l.h
+++ b/tools/libs/util/libxlu_disk_l.h
@@ -694,7 +694,7 @@ extern int yylex (yyscan_t yyscanner);
 #undef yyTABLES_NAME
 #endif
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 
 #line 699 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff --git a/tools/libs/util/libxlu_disk_l.l b/tools/libs/util/libxlu_disk_l.l
index 3bd639a..47b8ee0 100644
--- a/tools/libs/util/libxlu_disk_l.l
+++ b/tools/libs/util/libxlu_disk_l.l
@@ -122,9 +122,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -192,6 +200,7 @@ devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
 backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
+specification=[^,]*,? { STRIP(','); setspecification(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..f2b0ff5 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
         return 0;
     }
 
+    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        fprintf(stderr, "block-attach is only supported for specification xen\n");
+        return 1;
+    }
+
     if (libxl_device_disk_add(ctx, fe_domid, &disk, 0)) {
         fprintf(stderr, "libxl_device_disk_add failed.\n");
         return 1;
@@ -119,6 +124,12 @@ int main_blockdetach(int argc, char **argv)
         fprintf(stderr, "Error: Device %s not connected.\n", argv[optind+1]);
         return 1;
     }
+
+    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        fprintf(stderr, "block-detach is only supported for specification xen\n");
+        return 1;
+    }
+
     rc = !force ? libxl_device_disk_safe_remove(ctx, domid, &disk, 0) :
         libxl_device_disk_destroy(ctx, domid, &disk, 0);
     if (rc) {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 18:06:03 2022
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>,
	Henry Wang <Henry.Wang@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V10 0/3] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Mon, 13 Jun 2022 21:05:19 +0300
Message-Id: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add the missing virtio-mmio bits to the Xen toolstack on Arm.
Virtio support for the toolstack [1] was postponed because the main target was to upstream IOREQ/DM
support on Arm first. Now that IOREQ support is in, we can resume the Virtio enabling work. You can
find the previous discussions at [2].

The patch series [3] is based on the recent "staging" branch
(c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b xen/arm: mm: Re-implement setup_frame_table_mappings() with map_pages_to_xen())
and was tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with the virtio-mmio based virtio-disk backend [4]
running in Dom0 (or a driver domain) and an unmodified Linux guest using the existing virtio-blk frontend driver.
No issues were observed; guest domain reboot/destroy use-cases work properly.
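
For readers who want to exercise the new parser path, a guest disk stanza along the following
lines should do it. This is illustrative only: the `specification=` key (values `xen`/`virtio`)
and the `other` backendtype come from this series, while the domain name, vdev and target path
below are made up, and whether `backendtype=other` is needed depends on the backend in use:

```
# xl domain config fragment (hypothetical names/paths)
disk = [ 'backend=DomD, vdev=xvda, access=rw, specification=virtio, backendtype=other, target=/dev/vg0/guest-disk' ]
```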

!!! Please note that for V10 I included the commit "libxl/arm: Create specific IOMMU node to be referred by virtio-mmio
devices", which depends on the series at [5]. All patches except "libxl: Add support for Virtio disk
configuration" carry Stefano's and Anthony's R-b tags.

Any feedback/help would be highly appreciated.

[1]
https://lore.kernel.org/xen-devel/1610488352-18494-24-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1610488352-18494-25-git-send-email-olekstysh@gmail.com/
[2]
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02536.html
https://lore.kernel.org/xen-devel/1621626361-29076-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1638982784-14390-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1649442065-8332-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1654106261-28044-1-git-send-email-olekstysh@gmail.com/

[3] https://github.com/otyshchenko1/xen/commits/libxl_virtio_next3
[4] https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant

[5] https://lore.kernel.org/xen-devel/1653944813-17970-1-git-send-email-olekstysh@gmail.com/
    https://lore.kernel.org/xen-devel/1654197833-25362-1-git-send-email-olekstysh@gmail.com/

Julien Grall (1):
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (2):
  libxl: Add support for Virtio disk configuration
  libxl/arm: Create specific IOMMU node to be referred by virtio-mmio
    device

 docs/man/xl-disk-configuration.5.pod.in   |  38 +-
 tools/golang/xenlight/helpers.gen.go      |   8 +
 tools/golang/xenlight/types.gen.go        |  18 +
 tools/include/libxl.h                     |   7 +
 tools/libs/light/libxl_arm.c              | 164 ++++-
 tools/libs/light/libxl_device.c           |  62 +-
 tools/libs/light/libxl_disk.c             | 140 ++++-
 tools/libs/light/libxl_internal.h         |   2 +
 tools/libs/light/libxl_types.idl          |  18 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 959 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   9 +
 tools/xl/xl_block.c                       |  11 +
 xen/include/public/arch-arm.h             |   7 +
 xen/include/public/device_tree_defs.h     |   3 +-
 17 files changed, 968 insertions(+), 483 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 18:06:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 18:06:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348446.574701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRl-0000ua-7V; Mon, 13 Jun 2022 18:05:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348446.574701; Mon, 13 Jun 2022 18:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0oRl-0000uT-2B; Mon, 13 Jun 2022 18:05:53 +0000
Received: by outflank-mailman (input) for mailman id 348446;
 Mon, 13 Jun 2022 18:05:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WHVA=WU=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o0oRj-0000PT-FW
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 18:05:51 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7555356c-eb43-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 20:05:50 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id c30so7046082ljr.9
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jun 2022 11:05:50 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 x18-20020a2e7c12000000b00253ceefb668sm1038104ljc.60.2022.06.13.11.05.48
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jun 2022 11:05:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7555356c-eb43-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=yLAAy4NtQ91FFDfJQHo6/8ROrRbfUrwl4KDHY/J0yS8=;
        b=CdjoAwNQMtcDWFyxiNV0beiF1x000yRfThIFE0Ouu+LedTLkc6GApwBVLB0w0Z2g/N
         bfaYh5SaOH7HRe4fSO84K9NuWhKLtuFdUzUX7cZsOxyAYBVlp1oIaWsQuTAMh0AKGT5E
         qMoefgfRgCyYGZ/VruPktAXgFYRDi1fS8VLxd8peq8uVTLGpgzAHBBqjJSYYlcVxDfE3
         ZxWgk8Qc7JJROioJ5FPsRehkKEJj+nVz4UIVVW0PJ0KTmieho3UP+ngNkysZPJFdpi7+
         ARAo+1p725yjO7k6+ZQCp/GqKtz/H4timUVlIId42dSSabO0lCpHF/wgMNqEQPVfCBa6
         CTOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=yLAAy4NtQ91FFDfJQHo6/8ROrRbfUrwl4KDHY/J0yS8=;
        b=GSFk1dKZKWiSiH1lZGXH39eKI1if8jp2P57cNsGcitMjRyZjO/6PyGCsHKLzIb4B7W
         0m7SoLqBBztQuQ9NuVIORvemA2ZLzUSXxeD88+9h8M500Slg2FlFCDllkjEfThVe/igi
         OqsLRNSI8+QrLELCk2eTAH/l6QkO6RMqvup0zS+WXzcKIvxEW6yqPXrHrsq5IcBtWfjl
         k0libONQIybKUwBSuqZ0DpjeDBWopbw8lyoR9i9dp42FvCdW7EIiTNCxHgMdC+S6SKwB
         qRQz75jMraWTgSAglqBXHF0nCgN5zmYWxX1g8YtQLLGHzCR4Cx6WoTcu3ynN2XLAEFja
         mAJQ==
X-Gm-Message-State: AJIora9RRXGBVX0pv55un8i3BWGaGcEB17SnwefFaKGOmXtM32s02Vb0
	haYPPychY3lqAoWJZM7a6Td8sylJEao=
X-Google-Smtp-Source: AGRyM1tubub6aWVX97nIYrtXZ2SdUGC23ArpBDutDXvLFrImrQ8SMLeWSBBB00BlcRCHNnppDKdJaQ==
X-Received: by 2002:a2e:7203:0:b0:255:56e1:68bc with SMTP id n3-20020a2e7203000000b0025556e168bcmr397790ljc.30.1655143549889;
        Mon, 13 Jun 2022 11:05:49 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH V10 3/3] libxl/arm: Create specific IOMMU node to be referred by virtio-mmio device
Date: Mon, 13 Jun 2022 21:05:22 +0300
Message-Id: <1655143522-14356-4-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reuse the generic IOMMU device tree bindings to communicate Xen-specific
information for the virtio devices for which restricted memory access
using Xen grant mappings needs to be enabled.

Insert an "iommus" property pointing to the IOMMU node with the
"xen,grant-dma" compatible into all virtio device nodes whose backends
are going to run in non-hardware domains (which are untrusted by
default).

Based on device-tree binding from Linux:
Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml

An example of the generated nodes:

xen_iommu {
    compatible = "xen,grant-dma";
    #iommu-cells = <0x01>;
    phandle = <0xfde9>;
};

virtio@2000000 {
    compatible = "virtio,mmio";
    reg = <0x00 0x2000000 0x00 0x200>;
    interrupts = <0x00 0x01 0xf01>;
    interrupt-parent = <0xfde8>;
    dma-coherent;
    iommus = <0xfde9 0x01>;
};

virtio@2000200 {
    compatible = "virtio,mmio";
    reg = <0x00 0x2000200 0x00 0x200>;
    interrupts = <0x00 0x02 0xf01>;
    interrupt-parent = <0xfde8>;
    dma-coherent;
    iommus = <0xfde9 0x01>;
};

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
The related series https://lore.kernel.org/xen-devel/1654197833-25362-1-git-send-email-olekstysh@gmail.com/
is already in mainline Linux.

Changes RFC -> V1:
   - update commit description
   - rebase according to the recent changes to
     "libxl: Introduce basic virtio-mmio support on Arm"

Changes V1 -> V2:
   - Henry had already given his Reviewed-by; I dropped it due to the changes
   - use generic IOMMU device tree bindings instead of custom property
     "xen,dev-domid"
   - change commit subject and description, was
     "libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device node"

Changes V2 -> V3:
   - add Stefano's and Anthony's R-b
   - send together with the series it depends on and update post commit
     remark that described dependencies
   - update a comment on top of #define GUEST_PHANDLE_GIC
---
 tools/libs/light/libxl_arm.c          | 49 ++++++++++++++++++++++++++++++++---
 xen/include/public/device_tree_defs.h |  3 ++-
 2 files changed, 48 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 9be9b2a..72da3b1 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -865,9 +865,32 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
+{
+    int res;
+
+    /* See Linux Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml */
+    res = fdt_begin_node(fdt, "xen_iommu");
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "xen,grant-dma");
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#iommu-cells", 1);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_IOMMU);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
 
 static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
-                                 uint64_t base, uint32_t irq)
+                                 uint64_t base, uint32_t irq,
+                                 uint32_t backend_domid)
 {
     int res;
     gic_interrupt intr;
@@ -890,6 +913,16 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
     res = fdt_property(fdt, "dma-coherent", NULL, 0);
     if (res) return res;
 
+    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
+        uint32_t iommus_prop[2];
+
+        iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
+        iommus_prop[1] = cpu_to_fdt32(backend_domid);
+
+        res = fdt_property(fdt, "iommus", iommus_prop, sizeof(iommus_prop));
+        if (res) return res;
+    }
+
     res = fdt_end_node(fdt);
     if (res) return res;
 
@@ -1097,6 +1130,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
     size_t fdt_size = 0;
     int pfdt_size = 0;
     libxl_domain_build_info *const info = &d_config->b_info;
+    bool iommu_created;
     unsigned int i;
 
     const libxl_version_info *vers;
@@ -1204,11 +1238,20 @@ next_resize:
         if (d_config->num_pcidevs)
             FDT( make_vpci_node(gc, fdt, ainfo, dom) );
 
+        iommu_created = false;
         for (i = 0; i < d_config->num_disks; i++) {
             libxl_device_disk *disk = &d_config->disks[i];
 
-            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO)
-                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+            if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+                if (disk->backend_domid != LIBXL_TOOLSTACK_DOMID &&
+                    !iommu_created) {
+                    FDT( make_xen_iommu_node(gc, fdt) );
+                    iommu_created = true;
+                }
+
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
+                                           disk->backend_domid) );
+            }
         }
 
         if (pfdt)
diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h
index 209d43d..228daaf 100644
--- a/xen/include/public/device_tree_defs.h
+++ b/xen/include/public/device_tree_defs.h
@@ -4,9 +4,10 @@
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 /*
  * The device tree compiler (DTC) is allocating the phandle from 1 to
- * onwards. Reserve a high value for the GIC phandle.
+ * onwards. Reserve high values for the specific phandles.
  */
 #define GUEST_PHANDLE_GIC (65000)
+#define GUEST_PHANDLE_IOMMU (GUEST_PHANDLE_GIC + 1)
 
 #define GUEST_ROOT_ADDRESS_CELLS 2
 #define GUEST_ROOT_SIZE_CELLS 2
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 13 23:18:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 23:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348494.574723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tK5-0002WD-1V; Mon, 13 Jun 2022 23:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348494.574723; Mon, 13 Jun 2022 23:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tK4-0002W6-UP; Mon, 13 Jun 2022 23:18:16 +0000
Received: by outflank-mailman (input) for mailman id 348494;
 Mon, 13 Jun 2022 23:18:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YWck=WU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o0tK3-0002Vh-61
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 23:18:15 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18a78e49-eb6f-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 01:18:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 421F561560;
 Mon, 13 Jun 2022 23:18:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 32884C34114;
 Mon, 13 Jun 2022 23:18:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18a78e49-eb6f-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655162291;
	bh=AvC5Jris24qA8kxAHIl2kaIWHhvJWc4uyIKV4+afHj0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GdeQXtdG2EhmeIVb5oDCUb9py30bzXo7on3NHu4y2PQzuuGw5EWgs+wVoXlxFSUhz
	 Jc5mbaU07t5+Uz7P4Z/37cRgLQlC90roP+cd4UvjiEAQVrhfS0j7IkTFL2YqwSZwHN
	 PgKoOaDYYKJcEuJZK8fAX87cD62ZS942sMpYBnASTwTbzk8Ze1A3Dl2f+id6lHwzp5
	 GPgQ4MjegmW+BZZrfZv3gToly5Ekw9buCDmgQejS75NVedpOn3EbHGz/vStNM9oJxh
	 khy5EMf+Xvqn+CrFVWW7ypqP2BYinJTAFEWxRnwZn4lSylraQyPTjWYFp0W7vKjtvb
	 D8A08ruu/cm0g==
Date: Mon, 13 Jun 2022 16:18:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand.Marquis@arm.com
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
In-Reply-To: <f664f35b-2fec-38e4-3382-1fd14ba66e8d@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206131617490.1837490@ubuntu-linux-20-04-desktop>
References: <20220609061812.422130-1-jens.wiklander@linaro.org> <20220609061812.422130-3-jens.wiklander@linaro.org> <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop> <f664f35b-2fec-38e4-3382-1fd14ba66e8d@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 11 Jun 2022, Julien Grall wrote:
> On 11/06/2022 02:23, Stefano Stabellini wrote:
> > > +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
> > > +                             uint32_t page_count)
> > > +{
> > > +    const struct arm_smccc_1_2_regs arg = {
> > > +#ifdef CONFIG_ARM_64
> > > +        .a0 = FFA_RXTX_MAP_64,
> > > +#endif
> > 
> > This ifdef is unnecessary given that FFA depends on ARM64 and SMCCCv1.2
> > is only implemented on ARM64. It also applies to all the other ifdefs in
> > this file. You can remove the code under #ifdef CONFIG_ARM_32.
> 
> To me the #ifdef indicates that it would be possible to use FFA on arm32. So I
> think it is better to keep them rather than having to retrofit them in the
> future.

OK, fair enough


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 23:19:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 23:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348501.574734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tKn-00030X-Ai; Mon, 13 Jun 2022 23:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348501.574734; Mon, 13 Jun 2022 23:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tKn-00030Q-7K; Mon, 13 Jun 2022 23:19:01 +0000
Received: by outflank-mailman (input) for mailman id 348501;
 Mon, 13 Jun 2022 23:19:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YWck=WU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o0tKm-0002Vh-0p
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 23:19:00 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 342682d0-eb6f-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 01:18:59 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 903DBB8113A;
 Mon, 13 Jun 2022 23:18:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E22CCC34114;
 Mon, 13 Jun 2022 23:18:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 342682d0-eb6f-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655162337;
	bh=n7XgE+j94TbHRvuizJ0bzXRSmHEAXiTemoGzgyt3cqw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GjXXUPje6LHJbFRU8cf6PAeLywslA6Woopn7t37ft458+bn/idfe0IJubSMtAkkYd
	 +0RvCaBHhI5KzGYaECNLZ3WWiJfDEsCyPmbZdtA3Jpnt9aGDlWb1D+6VBJComGBeeb
	 JcjzucvgAa63F/fg0Va9z4wtJi9X3P5MxtCVknI/uq2iynxZVol98yP9Aoivn6a2o+
	 0xAG5SpUP5MgJj9We2dRohQ2BmEUTtOvnnd5vJiSJlTfqAluYusDwjYnEErbJG8IZ0
	 Prd5ro7Yyqf2IO8Uq98DZpRo1HH6jdslkZXE0Jf/08iS0ZkpbBlv4opfSuP64mSS/G
	 RYakYxlkqfjtA==
Date: Mon, 13 Jun 2022 16:18:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
In-Reply-To: <569a6d37-157a-d237-3ef9-d77fae32d002@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206131618300.1837490@ubuntu-linux-20-04-desktop>
References: <20220609061812.422130-1-jens.wiklander@linaro.org> <20220609061812.422130-2-jens.wiklander@linaro.org> <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop> <569a6d37-157a-d237-3ef9-d77fae32d002@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 11 Jun 2022, Julien Grall wrote:
> Hi Stefano,
> 
> On 11/06/2022 01:41, Stefano Stabellini wrote:
> > On Thu, 9 Jun 2022, Jens Wiklander wrote:
> > > +    /* Store the registers x0 - x17 into the result structure */
> > > +    stp	x0, x1, [x19, #0]
> > > +    stp	x2, x3, [x19, #16]
> > > +    stp	x4, x5, [x19, #32]
> > > +    stp	x6, x7, [x19, #48]
> > > +    stp	x8, x9, [x19, #64]
> > > +    stp	x10, x11, [x19, #80]
> > > +    stp	x12, x13, [x19, #96]
> > > +    stp	x14, x15, [x19, #112]
> > > +    stp	x16, x17, [x19, #128]
> > 
> > I noticed that in the original commit the offsets are declared as
> > ARM_SMCCC_1_2_REGS_X0_OFFS, etc. In Xen we could add them to
> > xen/arch/arm/arm64/asm-offsets.c given that they are only used in asm.
> > 
> > That said, there isn't a huge value in declaring them given that they
> > are always read and written in order and there is nothing else in the
> > struct, so I am fine either way.
> 
> While we don't support big-endian in Xen (yet?), the offsets would be inverted.
> 
> Furthermore, I prefer to avoid open-coded values (in particular when they are
> related to offsets). They are unlikely to change, but at least we have the
> compiler compute them for us (so less chance of a problem).

I am OK with that


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 23:29:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 23:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348512.574744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tUl-0004if-8s; Mon, 13 Jun 2022 23:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348512.574744; Mon, 13 Jun 2022 23:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tUl-0004iY-64; Mon, 13 Jun 2022 23:29:19 +0000
Received: by outflank-mailman (input) for mailman id 348512;
 Mon, 13 Jun 2022 23:29:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YWck=WU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o0tUk-0004iR-Bx
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 23:29:18 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a3d2ec50-eb70-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 01:29:16 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 87D65B81123;
 Mon, 13 Jun 2022 23:29:15 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA6D6C34114;
 Mon, 13 Jun 2022 23:29:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3d2ec50-eb70-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655162954;
	bh=ctOhg685xuEJy2ublrMljlYEr1+w8lY7lbA7BJBiVnE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QXEfZu031fBPKpFJGk4bSkcGIkwVFPWYa8XqdE8bfV5K4I4K7EfKsjDiZP6TiY5QL
	 Wj5ZGh/zTs6DyrAfwGFJPEudBFn23AdHOx4j9y8byFsjkupvxBi1/oPKS2tbvj8NWz
	 37EUXcHbzj44nI0une26YasAyG1gLpPzCQJVUqZP7tjmVnr5PUBiFnputYH8dy6YrV
	 PCAOPZAi+xT4m9PO7NtundBlBjGGSW4htHzFZAc9YtEqnA+Rfj6WG6Dr9KS+82P3wX
	 SI6tEJHz8FMrjBGm8c11iXVDwxWUpHrMNP8KQapbGUWFXDOcelyf5IHB88iXfAUOhi
	 jKMGaotq83LtQ==
Date: Mon, 13 Jun 2022 16:29:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, George.Dunlap@citrix.com, 
    roger.pau@citrix.com, Artem_Mygaiev@epam.com, Andrew.Cooper3@citrix.com, 
    julien@xen.org, Bertrand.Marquis@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] add more MISRA C rules to docs/misra/rules.rst
In-Reply-To: <2c4b41e4-7381-7424-de72-43f55c448665@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206131627050.1837490@ubuntu-linux-20-04-desktop>
References: <20220610212755.1051640-1-sstabellini@kernel.org> <2c4b41e4-7381-7424-de72-43f55c448665@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 13 Jun 2022, Jan Beulich wrote:
> On 10.06.2022 23:27, Stefano Stabellini wrote:
> > +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
> > +     - Required
> > +     - An identifier declared in an inner scope shall not hide an
> > +       identifier declared in an outer scope
> > +     - Using macros as macro parameters at invocation time is allowed
> > +       even if both macros use identically named local variables, e.g.
> > +       max_t(var0, min_t(var1, var2))
> 
> Nit: I would have been okay with the prior use of MIN() and MAX() in this
> example, but now that you have switched to min_t() / max_t() I think the
> example also wants to match our macros of these names. Hence I'd like to
> suggest that either you switch to using min() / max() (which also use
> local variables), or you add the missing "type" arguments in both macro
> invocations.

I see your point. I'll use min/max as follows:

max(var0, min(var1, var2))

If you are OK with that and there are no other suggestions, this tiny
change could be done on commit.


From xen-devel-bounces@lists.xenproject.org Mon Jun 13 23:54:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jun 2022 23:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348521.574756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tsj-0008CD-59; Mon, 13 Jun 2022 23:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348521.574756; Mon, 13 Jun 2022 23:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0tsj-0008C6-1a; Mon, 13 Jun 2022 23:54:05 +0000
Received: by outflank-mailman (input) for mailman id 348521;
 Mon, 13 Jun 2022 23:54:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0tsh-0008Bw-GB; Mon, 13 Jun 2022 23:54:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0tsh-0001ob-Cg; Mon, 13 Jun 2022 23:54:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0tsg-00056v-RZ; Mon, 13 Jun 2022 23:54:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0tsg-0003Ae-R7; Mon, 13 Jun 2022 23:54:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6o/yy6j/Okg1mQqUFvlJmgVJ04Op2ftsacz9nPgoEIg=; b=Y2pAsv8paEWgZdlg5TnWVH7isF
	mAiGxxjE4UYBgawcK/Y/4DXE4KgZCbUCIDGi7riggIXV70h1rCzabzu7t5pfI+Aztn0zXXLwOP6RN
	v0tlOnsfgrm4QW1E4johSHhUj9NzTOZzKrrcBnNdtXhRJZeSYIiixCRcTwJjlB/eLneU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171155-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171155: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jun 2022 23:54:02 +0000

flight 171155 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171155/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-vhd       8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot           fail pass in 171154

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   20 days
Failing since        170716  2022-05-24 11:12:06 Z   20 days   50 attempts
Testing same since   171154  2022-06-13 04:37:11 Z    0 days    2 attempts

------------------------------------------------------------
2346 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 276082 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 04:45:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 04:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348483.574767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0yQc-0004fj-CC; Tue, 14 Jun 2022 04:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348483.574767; Tue, 14 Jun 2022 04:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0yQc-0004fb-6E; Tue, 14 Jun 2022 04:45:22 +0000
Received: by outflank-mailman (input) for mailman id 348483;
 Mon, 13 Jun 2022 18:50:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mtOv=WU=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1o0p8a-00080G-Om
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 18:50:08 +0000
Received: from outgoing-stata.csail.mit.edu (outgoing-stata.csail.mit.edu
 [128.30.2.210]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a3dc1bd1-eb49-11ec-bd2c-47488cf2e6aa;
 Mon, 13 Jun 2022 20:50:06 +0200 (CEST)
Received: from [77.23.249.31] (helo=srivatsab-a02.vmware.com)
 by outgoing-stata.csail.mit.edu with esmtpsa (TLS1.2:RSA_AES_128_CBC_SHA1:128)
 (Exim 4.82) (envelope-from <srivatsa@csail.mit.edu>)
 id 1o0p7j-0002lx-0j; Mon, 13 Jun 2022 14:49:15 -0400
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3dc1bd1-eb49-11ec-bd2c-47488cf2e6aa
Subject: Re: [PATCH 29/36] cpuidle,xenpv: Make more PARAVIRT_XXL noinstr clean
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
 vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
 linus.walleij@linaro.org, shawnguo@kernel.org,
 Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
 festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org,
 catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
 bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
 geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
 tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
 James.Bottomley@HansenPartnership.com, deller@gmx.de, mpe@ellerman.id.au,
 benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com,
 palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
 davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
 jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
 amakhalov@vmware.com, pv-drivers@vmware.com, boris.ostrovsky@oracle.com,
 chris@zankel.net, jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org,
 pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com,
 sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
 sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
 anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
 senozhatsky@chromium.org, john.ogness@linutronix.de, paulmck@kernel.org,
 frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
 joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
 bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org,
 xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
 linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
 linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org
References: <20220608142723.103523089@infradead.org>
 <20220608144517.759631860@infradead.org>
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Message-ID: <510b9b68-7d53-7d4d-5a05-37fbd199eb4b@csail.mit.edu>
Date: Mon, 13 Jun 2022 20:48:49 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <20220608144517.759631860@infradead.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 6/8/22 4:27 PM, Peter Zijlstra wrote:
> vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
> vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
> vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>


Regards,
Srivatsa
VMware Photon OS

> ---
>  arch/x86/include/asm/paravirt.h      |    6 ++++--
>  arch/x86/include/asm/special_insns.h |    4 ++--
>  arch/x86/include/asm/xen/hypercall.h |    2 +-
>  arch/x86/kernel/paravirt.c           |   14 ++++++++++++--
>  arch/x86/xen/enlighten_pv.c          |    2 +-
>  arch/x86/xen/irq.c                   |    2 +-
>  6 files changed, 21 insertions(+), 9 deletions(-)
> 
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -168,7 +168,7 @@ static inline void __write_cr4(unsigned
>  	PVOP_VCALL1(cpu.write_cr4, x);
>  }
>  
> -static inline void arch_safe_halt(void)
> +static __always_inline void arch_safe_halt(void)
>  {
>  	PVOP_VCALL0(irq.safe_halt);
>  }
> @@ -178,7 +178,9 @@ static inline void halt(void)
>  	PVOP_VCALL0(irq.halt);
>  }
>  
> -static inline void wbinvd(void)
> +extern noinstr void pv_native_wbinvd(void);
> +
> +static __always_inline void wbinvd(void)
>  {
>  	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
>  }
> --- a/arch/x86/include/asm/special_insns.h
> +++ b/arch/x86/include/asm/special_insns.h
> @@ -115,7 +115,7 @@ static inline void wrpkru(u32 pkru)
>  }
>  #endif
>  
> -static inline void native_wbinvd(void)
> +static __always_inline void native_wbinvd(void)
>  {
>  	asm volatile("wbinvd": : :"memory");
>  }
> @@ -179,7 +179,7 @@ static inline void __write_cr4(unsigned
>  	native_write_cr4(x);
>  }
>  
> -static inline void wbinvd(void)
> +static __always_inline void wbinvd(void)
>  {
>  	native_wbinvd();
>  }
> --- a/arch/x86/include/asm/xen/hypercall.h
> +++ b/arch/x86/include/asm/xen/hypercall.h
> @@ -382,7 +382,7 @@ MULTI_stack_switch(struct multicall_entr
>  }
>  #endif
>  
> -static inline int
> +static __always_inline int
>  HYPERVISOR_sched_op(int cmd, void *arg)
>  {
>  	return _hypercall2(int, sched_op, cmd, arg);
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -233,6 +233,11 @@ static noinstr void pv_native_set_debugr
>  	native_set_debugreg(regno, val);
>  }
>  
> +noinstr void pv_native_wbinvd(void)
> +{
> +	native_wbinvd();
> +}
> +
>  static noinstr void pv_native_irq_enable(void)
>  {
>  	native_irq_enable();
> @@ -242,6 +247,11 @@ static noinstr void pv_native_irq_disabl
>  {
>  	native_irq_disable();
>  }
> +
> +static noinstr void pv_native_safe_halt(void)
> +{
> +	native_safe_halt();
> +}
>  #endif
>  
>  enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
> @@ -273,7 +283,7 @@ struct paravirt_patch_template pv_ops =
>  	.cpu.read_cr0		= native_read_cr0,
>  	.cpu.write_cr0		= native_write_cr0,
>  	.cpu.write_cr4		= native_write_cr4,
> -	.cpu.wbinvd		= native_wbinvd,
> +	.cpu.wbinvd		= pv_native_wbinvd,
>  	.cpu.read_msr		= native_read_msr,
>  	.cpu.write_msr		= native_write_msr,
>  	.cpu.read_msr_safe	= native_read_msr_safe,
> @@ -307,7 +317,7 @@ struct paravirt_patch_template pv_ops =
>  	.irq.save_fl		= __PV_IS_CALLEE_SAVE(native_save_fl),
>  	.irq.irq_disable	= __PV_IS_CALLEE_SAVE(pv_native_irq_disable),
>  	.irq.irq_enable		= __PV_IS_CALLEE_SAVE(pv_native_irq_enable),
> -	.irq.safe_halt		= native_safe_halt,
> +	.irq.safe_halt		= pv_native_safe_halt,
>  	.irq.halt		= native_halt,
>  #endif /* CONFIG_PARAVIRT_XXL */
>  
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -1019,7 +1019,7 @@ static const typeof(pv_ops) xen_cpu_ops
>  
>  		.write_cr4 = xen_write_cr4,
>  
> -		.wbinvd = native_wbinvd,
> +		.wbinvd = pv_native_wbinvd,
>  
>  		.read_msr = xen_read_msr,
>  		.write_msr = xen_write_msr,
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -24,7 +24,7 @@ noinstr void xen_force_evtchn_callback(v
>  	(void)HYPERVISOR_xen_version(0, NULL);
>  }
>  
> -static void xen_safe_halt(void)
> +static noinstr void xen_safe_halt(void)
>  {
>  	/* Blocking includes an implicit local_irq_enable(). */
>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 04:45:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 04:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348488.574770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0yQc-0004hQ-K8; Tue, 14 Jun 2022 04:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348488.574770; Tue, 14 Jun 2022 04:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0yQc-0004hG-G3; Tue, 14 Jun 2022 04:45:22 +0000
Received: by outflank-mailman (input) for mailman id 348488;
 Mon, 13 Jun 2022 19:23:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ij9c=WU=vmware.com=namit@srs-se1.protection.inumbo.net>)
 id 1o0peo-0002om-DD
 for xen-devel@lists.xenproject.org; Mon, 13 Jun 2022 19:23:26 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12hn20300.outbound.protection.outlook.com
 [2a01:111:f400:fe59::300])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49f20504-eb4e-11ec-8901-93a377f238d6;
 Mon, 13 Jun 2022 21:23:23 +0200 (CEST)
Received: from BY3PR05MB8531.namprd05.prod.outlook.com (2603:10b6:a03:3ce::6)
 by MN2PR05MB6047.namprd05.prod.outlook.com (2603:10b6:208:c3::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.6; Mon, 13 Jun
 2022 19:23:14 +0000
Received: from BY3PR05MB8531.namprd05.prod.outlook.com
 ([fe80::a4f8:718a:b2a0:977f]) by BY3PR05MB8531.namprd05.prod.outlook.com
 ([fe80::a4f8:718a:b2a0:977f%5]) with mapi id 15.20.5353.011; Mon, 13 Jun 2022
 19:23:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49f20504-eb4e-11ec-8901-93a377f238d6
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jXrIH/OSA16b80hUqhrCPzx9ooxqHICWOnzyJgRziq7m3YoykFExXzJTDCJOrheatzG4DTGPUzqBlAs0xAnuY0B/Xjwbzad/iIly5JouPOLaC4HI6lGNE4XkFAaGhzAtYLBTSzQpvimB+HvngjGNO3eZ06wIb3FWU8TRuZQkb1cQOhamLp4i77CwXh9cJbxJ9FehsAy4rP0kteh8Kuex8O6oueP6iw05c6Y8FQdf0bq3IdK2HsYimw825OmRwh6neIXDSWBWBh/qhYoXJsJUbk/lcMZsK3Zk+rYayAc8BpvOvx2VDAIrqAO6mv70Y5I80qA5jNJA3ZCMVF3tnvjilQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ejRHp+UcTOOouITaszW4Xrm66IuBZfWvbS1+dB+rQfw=;
 b=Gh1jpB78qVCtk0d6Eo9ifPxPfIlR7rrRqgqauYqa5Ps6fcZ8N6KjLWCM36rT7z69nlPcW8YkXsEAtQ7vDc3WUmK8suYPzDQgAIIiYWKGxi+P8fvcK8xzafYISsvVzAN43cST5zHOYgzlFe5AFOgWV4jxTGsiXbRdOc0k8rtYLNY5XUWqmZSwZRWL8opY5IZey6LS3Gm+gLCKniJj571cuLoT8Tx9x02PBqX+4VenER09nAM5iVjz7KrllM6L4k971ifjVV/X1x1zqldzhZIhLAjyLK6+Kbeuxsm5FkyBB+wnDH4+VgfacOQNRvhZ7hl7+r7ZHlTpZ5ZliENOwav4sQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=vmware.com; dmarc=pass action=none header.from=vmware.com;
 dkim=pass header.d=vmware.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vmware.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ejRHp+UcTOOouITaszW4Xrm66IuBZfWvbS1+dB+rQfw=;
 b=GN7f2JaCeyN018ZVg8mIF1IsJ0XiFtX+hl1IknUK6qH7KFCN4gTa9x1TUyVNT+DrZmbbDfa6497/96T9/OHJIg3QTuaBxDScN5x0TY6TyL3E9tO4UsN9C7BOYxZ3obiKYusMzp5m2nOyio7MHH/sqhIIFx4ddj/daXcNeHSSMA8=
From: Nadav Amit <namit@vmware.com>
To: "srivatsa@csail.mit.edu" <srivatsa@csail.mit.edu>
CC: Peter Zijlstra <peterz@infradead.org>, "juri.lelli@redhat.com"
	<juri.lelli@redhat.com>, "rafael@kernel.org" <rafael@kernel.org>, Benjamin
 Herrenschmidt <benh@kernel.crashing.org>, "linus.walleij@linaro.org"
	<linus.walleij@linaro.org>, "bsegall@google.com" <bsegall@google.com>,
	"guoren@kernel.org" <guoren@kernel.org>, "pavel@ucw.cz" <pavel@ucw.cz>,
	"agordeev@linux.ibm.com" <agordeev@linux.ibm.com>,
	"linux-clk@vger.kernel.org" <linux-clk@vger.kernel.org>, linux-arch
	<linux-arch@vger.kernel.org>, "vincent.guittot@linaro.org"
	<vincent.guittot@linaro.org>, "mpe@ellerman.id.au" <mpe@ellerman.id.au>,
	"chenhuacai@kernel.org" <chenhuacai@kernel.org>, "linux-acpi@vger.kernel.org"
	<linux-acpi@vger.kernel.org>, "agross@kernel.org" <agross@kernel.org>,
	"geert@linux-m68k.org" <geert@linux-m68k.org>, "linux-imx@nxp.com"
	<linux-imx@nxp.com>, Catalin Marinas <catalin.marinas@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"mattst88@gmail.com" <mattst88@gmail.com>, "borntraeger@linux.ibm.com"
	<borntraeger@linux.ibm.com>, "mturquette@baylibre.com"
	<mturquette@baylibre.com>, "sammy@sammy.net" <sammy@sammy.net>,
	"pmladek@suse.com" <pmladek@suse.com>, "linux-pm@vger.kernel.org"
	<linux-pm@vger.kernel.org>, "jiangshanlai@gmail.com"
	<jiangshanlai@gmail.com>, Sascha Hauer <s.hauer@pengutronix.de>,
	"linux-um@lists.infradead.org" <linux-um@lists.infradead.org>,
	"acme@kernel.org" <acme@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	"linux-omap@vger.kernel.org" <linux-omap@vger.kernel.org>,
	"dietmar.eggemann@arm.com" <dietmar.eggemann@arm.com>, "rth@twiddle.net"
	<rth@twiddle.net>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, LKML
	<linux-kernel@vger.kernel.org>, "linux-perf-users@vger.kernel.org"
	<linux-perf-users@vger.kernel.org>, "senozhatsky@chromium.org"
	<senozhatsky@chromium.org>, "svens@linux.ibm.com" <svens@linux.ibm.com>,
	"jolsa@kernel.org" <jolsa@kernel.org>, "paulus@samba.org" <paulus@samba.org>,
	"mark.rutland@arm.com" <mark.rutland@arm.com>, "linux-ia64@vger.kernel.org"
	<linux-ia64@vger.kernel.org>, Dave Hansen <dave.hansen@linux.intel.com>,
	Linux Virtualization <virtualization@lists.linux-foundation.org>,
	"James.Bottomley@hansenpartnership.com"
	<James.Bottomley@HansenPartnership.com>, "jcmvbkbc@gmail.com"
	<jcmvbkbc@gmail.com>, "thierry.reding@gmail.com" <thierry.reding@gmail.com>,
	"kernel@xen0n.name" <kernel@xen0n.name>, "quic_neeraju@quicinc.com"
	<quic_neeraju@quicinc.com>, linux-s390 <linux-s390@vger.kernel.org>,
	"vschneid@redhat.com" <vschneid@redhat.com>, "john.ogness@linutronix.de"
	<john.ogness@linutronix.de>, "ysato@users.sourceforge.jp"
	<ysato@users.sourceforge.jp>, "linux-sh@vger.kernel.org"
	<linux-sh@vger.kernel.org>, "festevam@gmail.com" <festevam@gmail.com>,
	"deller@gmx.de" <deller@gmx.de>, "daniel.lezcano@linaro.org"
	<daniel.lezcano@linaro.org>, "jonathanh@nvidia.com" <jonathanh@nvidia.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, "frederic@kernel.org"
	<frederic@kernel.org>, "lenb@kernel.org" <lenb@kernel.org>,
	"linux-xtensa@linux-xtensa.org" <linux-xtensa@linux-xtensa.org>,
	"kernel@pengutronix.de" <kernel@pengutronix.de>, "gor@linux.ibm.com"
	<gor@linux.ibm.com>, "linux-arm-msm@vger.kernel.org"
	<linux-arm-msm@vger.kernel.org>, "linux-alpha@vger.kernel.org"
	<linux-alpha@vger.kernel.org>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "shorne@gmail.com" <shorne@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, "chris@zankel.net"
	<chris@zankel.net>, "sboyd@kernel.org" <sboyd@kernel.org>,
	"dinguyen@kernel.org" <dinguyen@kernel.org>, "bristot@redhat.com"
	<bristot@redhat.com>, "alexander.shishkin@linux.intel.com"
	<alexander.shishkin@linux.intel.com>, "lpieralisi@kernel.org"
	<lpieralisi@kernel.org>, "linux@rasmusvillemoes.dk"
	<linux@rasmusvillemoes.dk>, "joel@joelfernandes.org"
	<joel@joelfernandes.org>, Will Deacon <will@kernel.org>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, "khilman@kernel.org" <khilman@kernel.org>,
	"linux-csky@vger.kernel.org" <linux-csky@vger.kernel.org>, Pv-drivers
	<Pv-drivers@vmware.com>, "linux-snps-arc@lists.infradead.org"
	<linux-snps-arc@lists.infradead.org>, Mel Gorman <mgorman@suse.de>,
	"jacob.jun.pan@linux.intel.com" <jacob.jun.pan@linux.intel.com>, Arnd
 Bergmann <arnd@arndb.de>, "ulli.kroll@googlemail.com"
	<ulli.kroll@googlemail.com>, "vgupta@kernel.org" <vgupta@kernel.org>,
	"josh@joshtriplett.org" <josh@joshtriplett.org>, Steven Rostedt
	<rostedt@goodmis.org>, "rcu@vger.kernel.org" <rcu@vger.kernel.org>, Borislav
 Petkov <bp@alien8.de>, "bcain@quicinc.com" <bcain@quicinc.com>,
	"tsbogend@alpha.franken.de" <tsbogend@alpha.franken.de>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>,
	"sudeep.holla@arm.com" <sudeep.holla@arm.com>, "shawnguo@kernel.org"
	<shawnguo@kernel.org>, "davem@davemloft.net" <davem@davemloft.net>,
	"dalias@libc.org" <dalias@libc.org>, "tony@atomide.com" <tony@atomide.com>,
	"bjorn.andersson@linaro.org" <bjorn.andersson@linaro.org>, "H. Peter Anvin"
	<hpa@zytor.com>, "sparclinux@vger.kernel.org" <sparclinux@vger.kernel.org>,
	"linux-hexagon@vger.kernel.org" <linux-hexagon@vger.kernel.org>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Anton
 Ivanov <anton.ivanov@cambridgegreys.com>, "jonas@southpole.se"
	<jonas@southpole.se>, "yury.norov@gmail.com" <yury.norov@gmail.com>,
	"richard@nod.at" <richard@nod.at>, X86 ML <x86@kernel.org>,
	"linux@armlinux.org.uk" <linux@armlinux.org.uk>, Ingo Molnar
	<mingo@redhat.com>, "aou@eecs.berkeley.edu" <aou@eecs.berkeley.edu>,
	"paulmck@kernel.org" <paulmck@kernel.org>, "hca@linux.ibm.com"
	<hca@linux.ibm.com>, "stefan.kristiansson@saunalahti.fi"
	<stefan.kristiansson@saunalahti.fi>, "openrisc@lists.librecores.org"
	<openrisc@lists.librecores.org>, "paul.walmsley@sifive.com"
	<paul.walmsley@sifive.com>, "linux-tegra@vger.kernel.org"
	<linux-tegra@vger.kernel.org>, "namhyung@kernel.org" <namhyung@kernel.org>,
	"andriy.shevchenko@linux.intel.com" <andriy.shevchenko@linux.intel.com>,
	"jpoimboe@kernel.org" <jpoimboe@kernel.org>, Juergen Gross <jgross@suse.com>,
	"monstr@monstr.eu" <monstr@monstr.eu>, "linux-mips@vger.kernel.org"
	<linux-mips@vger.kernel.org>, "palmer@dabbelt.com" <palmer@dabbelt.com>,
	"anup@brainfault.org" <anup@brainfault.org>, "ink@jurassic.park.msu.ru"
	<ink@jurassic.park.msu.ru>, "johannes@sipsolutions.net"
	<johannes@sipsolutions.net>, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Subject: Re: [Pv-drivers] [PATCH 29/36] cpuidle,	xenpv: Make more PARAVIRT_XXL
 noinstr clean
Thread-Topic: [Pv-drivers] [PATCH 29/36] cpuidle,	xenpv: Make more
 PARAVIRT_XXL noinstr clean
Thread-Index: AQHYf1aeUEK+VymBJkmsIscwW/p0J61Nt1yA
Date: Mon, 13 Jun 2022 19:23:13 +0000
Message-ID: <BAE566A0-AEA3-493E-8AC5-912C795BF1DE@vmware.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.759631860@infradead.org>
 <510b9b68-7d53-7d4d-5a05-37fbd199eb4b@csail.mit.edu>
In-Reply-To: <510b9b68-7d53-7d4d-5a05-37fbd199eb4b@csail.mit.edu>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=vmware.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3f126017-af19-49d5-1021-08da4d7228f3
x-ms-traffictypediagnostic: MN2PR05MB6047:EE_
x-ld-processed: b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0,ExtAddr
x-microsoft-antispam-prvs:
 <MN2PR05MB6047A96E0623FE64DB9AF34BD0AB9@MN2PR05MB6047.namprd05.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <342EB95D035E904BB3A2D5C393810F2D@namprd05.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: vmware.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BY3PR05MB8531.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f126017-af19-49d5-1021-08da4d7228f3
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Jun 2022 19:23:13.4350
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b39138ca-3cee-4b4a-a4d6-cd83d9dd62f0
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: eMjx1KcJtUvjtNRexJ5MAHdBZ4v4OLNTsROICFkNXlOqc+7yT2GH0KrnUxW/fkEbqTS1GHk3VRFzVY9Z4RmShQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR05MB6047

On Jun 13, 2022, at 11:48 AM, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:

> ⚠ External Email
> 
> On 6/8/22 4:27 PM, Peter Zijlstra wrote:
>> vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
>> vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
>> vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section
>> 
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
> 
>> 
>> -static inline void wbinvd(void)
>> +extern noinstr void pv_native_wbinvd(void);
>> +
>> +static __always_inline void wbinvd(void)
>> {
>>      PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
>> }

I guess it is yet another instance of wrong accounting of GCC for
the assembly blocks’ weight. I guess it is not a solution for older
GCCs, but presumably ____PVOP_ALT_CALL() and friends should have
used asm_inline or some new “asm_volatile_inline” variant.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 06:21:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 06:21:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348569.574789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0zvA-0008A6-K5; Tue, 14 Jun 2022 06:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348569.574789; Tue, 14 Jun 2022 06:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o0zvA-00089z-Go; Tue, 14 Jun 2022 06:21:00 +0000
Received: by outflank-mailman (input) for mailman id 348569;
 Tue, 14 Jun 2022 06:20:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0zv9-00089o-5Z; Tue, 14 Jun 2022 06:20:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0zv9-0007z0-04; Tue, 14 Jun 2022 06:20:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o0zv8-0004eO-JT; Tue, 14 Jun 2022 06:20:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o0zv8-0008Cl-J1; Tue, 14 Jun 2022 06:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Efo23eMZB3DScKobreYfRDit/yWQ0oaccL/RbanI3k=; b=XMjUSK9LzCmHvBBAN+wvLtknN2
	tBo1LOTdivvQGWhtDl4vzRH8QEYLl9NJGe5Y1lToMFCWn2EKN/yxNyRDPewNWPIF5qlUIDyC++EBX
	1mLQ5SiV3qC8hHpOQNY/wE1+MB7QuQs8EDYT7zhI+dVm/EAcGXDO/OxnTIj0jUlQUx+I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171158-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171158: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6676162f64ad39949ed44f17ce40e5c49ab33e31
X-Osstest-Versions-That:
    ovmf=92288f433485e84863047fae698614c6785869d1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 06:20:58 +0000

flight 171158 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171158/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6676162f64ad39949ed44f17ce40e5c49ab33e31
baseline version:
 ovmf                 92288f433485e84863047fae698614c6785869d1

Last test of basis   171152  2022-06-13 03:52:29 Z    1 days
Testing same since   171158  2022-06-14 03:11:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   92288f4334..6676162f64  6676162f64ad39949ed44f17ce40e5c49ab33e31 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 06:53:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 06:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348581.574799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o10Q1-0003bW-9n; Tue, 14 Jun 2022 06:52:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348581.574799; Tue, 14 Jun 2022 06:52:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o10Q1-0003bP-71; Tue, 14 Jun 2022 06:52:53 +0000
Received: by outflank-mailman (input) for mailman id 348581;
 Tue, 14 Jun 2022 06:52:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i1me=WV=citrix.com=prvs=1570496fe=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o10Pz-0003bJ-WF
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 06:52:52 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 998cad5c-ebae-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 08:52:50 +0200 (CEST)
Received: from mail-dm6nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Jun 2022 02:52:46 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA1PR03MB6484.namprd03.prod.outlook.com (2603:10b6:806:1c3::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Tue, 14 Jun
 2022 06:52:44 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.022; Tue, 14 Jun 2022
 06:52:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 998cad5c-ebae-11ec-8901-93a377f238d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655189570;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=99rwY7P/BOwRjRx1dI22FroEI9lY+3IeTpziGVT1+ho=;
  b=iINrzshOhXkw0YK4Un91ycI+fs4GoMXGqbVCI/g1GTWljUd0Md1UvFb7
   z6UcO4tvCk7MhF0jcxtsz0YKqmPC1qKaBf7AsPlJXAur/orgWfAg3bPyD
   FK1lKvzGc3AlGRrL59duR33G+aOCS8VbDluHwJqNQqls4F+ofpZoUgysB
   I=;
X-IronPort-RemoteIP: 104.47.57.168
X-IronPort-MID: 73530969
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,299,1647316800"; 
   d="scan'208";a="73530969"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JK70YvNml8eu4djKKfuzLLrHQra1dTjH5tjcjGm1AOagXpNHtR/nJDE4YBgX3qNHEMRqdFFADxuxuDmJO6iF39XGtvIKWpQ9cAGcWBxog3LJlOVriYcMrn/fBJ9lFR8sNK2qKkThGlRjqDmz+STds1aaNoZCncN9/mpYSTL/amcVJhqSkHinkAeVuHHLyUL4uAMTty+WFToUPyeQbvtdPDBHMX4F8PuWPa/jLJ+fVzMAq+lYSedV2Rm/AKmAIFk/DjgOoovvunsKXwSF0+hSPsZ5ucDdj7BL46rPTni91E/zEPW1utlbXEx+tm86SR3Zt+mCYeETOyjLANbEOo4nbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=K6/zceBK2aOiZs2VKBxjvIiclJwdvrigCbkhfSLPj0I=;
 b=cSniaMzZ/7EELz79AJx3aaPcGmc+excu6dWRCgcNS0Y0EedmtPeBjrLbFR7f3LATIr1e+mVThgfgzRS9nvZW5eSeJqrbgyaQX980a+nMffHXUrQe6Mv1WQxH1xLJDopBkNHrnHrJgqBAmEY4VBQ2l6KScaiZUK5AAJOe9wZo+L/6k7YTmfoC5y7Vrg66W/Zt064REp+TcVdbjXgMPvCIVmH33S4o4rPJfjXc87r0fXRm6BrCsuKrW6BmolTcVQaGkSfmw84l4KQp9hgQNsCWjoIZRZRNZwRVQPG9kwwoZ3at4ETPM+H4rKQ4ddv82yXlaR/dxGsnruOSdeHiBRRvQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K6/zceBK2aOiZs2VKBxjvIiclJwdvrigCbkhfSLPj0I=;
 b=FHfSKfuTT3A0RMVPuod9GeApzv24zhevWGS8MseIXi8c/NMbKHQgqTe1HGS1CVQOiRwLq9HRz22J6FSaKyuMeAgskm8A1a9cDwVzngQbXUIYCA1uIdi3fmmInMRz88RYgSX+coBTc0mc8+6Gjdmu9VK4wy1E4s4h23foi8AYvfc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 14 Jun 2022 08:52:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqgwNu3QSpPcZjnU@Air-de-Roger>
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
X-ClientProxiedBy: MR2P264CA0140.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 02088108-9050-4026-c091-08da4dd27bd2
X-MS-TrafficTypeDiagnostic: SA1PR03MB6484:EE_
X-Microsoft-Antispam-PRVS:
	<SA1PR03MB6484520C135719A5A870A8D28FAA9@SA1PR03MB6484.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02088108-9050-4026-c091-08da4dd27bd2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 06:52:44.3476
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DcNN94ob2YX1yGiaHZ+iOqR2bKcFxiw8ZRqf1N5eDgzPYWELDwpUorVeydPz31xKfK7MdLpER99OWJnle33S+w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6484

On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> On 13.06.2022 14:32, Roger Pau Monné wrote:
> > On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>> likely important to have all the output if the boot fails without
> >>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>> other guests).
> >>>>>>>
> >>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>
> >>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>> Dom0's kernel messages?
> >>>>>
> >>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
> >>>>> this request is something that came up internally.
> >>>>>
> >>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>> limiting based on log levels, I was assuming that non-rate-limited
> >>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>> triggered) output shouldn't be rate limited either.
> >>>>
> >>>> Which would raise the question of why we have log levels for non-guest
> >>>> messages.
> >>>
> >>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>> levels and rate limiting.  If I set log level to WARNING I would
> >>> expect to not lose _any_ non-guest log messages with level WARNING or
> >>> above.  It's still useful to have log levels for non-guest messages,
> >>> since users might want to filter out DEBUG non-guest messages for
> >>> example.
> >>
> >> It was me who was confused, because of the two log-everything variants
> >> we have (console and serial). You're right that your change is unrelated
> >> to log levels. However, when there are e.g. many warnings or when an
> >> admin has lowered the log level, what you (would) do is effectively
> >> force sync_console mode transiently (for a subset of messages, but
> >> that's secondary, especially because the "forced" output would still
> >> be waiting for earlier output to make it out).
> > 
> > Right, it would have to wait for any previous output on the buffer to
> > go out first.  In any case we can guarantee that no more output will
> > be added to the buffer while Xen waits for it to be flushed.
> > 
> > So for the hardware domain it might make sense to wait for the TX
> > buffers to be half empty (the current tx_quench logic) by preempting
> > the hypercall.  That however could cause issues if guests manage to
> > keep filling the buffer while the hardware domain is being preempted.
> > 
> > Alternatively we could always reserve half of the buffer for the
> > hardware domain, and allow it to be preempted while waiting for space
> > (since it's guaranteed non hardware domains won't be able to steal the
> > allocation from the hardware domain).
> 
> Getting complicated it seems. I have to admit that I wonder whether we
> wouldn't be better off leaving the current logic as is.

Another possible solution (more of a band aid) is to increase the
buffer size from 4 pages to 8 or 16.  That would likely allow Xen to
cope with the high throughput of boot messages.

Thanks, Roger.
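
The mechanism under discussion — pairing a forced write with
serial_{start,end}_log_everything() so the transmit rate limiter is
bypassed — can be sketched roughly as below.  The buffer size, names and
drain behaviour here are simplified stand-ins for illustration, not
Xen's actual serial code:

```c
#include <assert.h>

#define TX_BUF_SZ 16            /* tiny stand-in for Xen's 4-page buffer */

static char tx_buf[TX_BUF_SZ];
static unsigned int tx_used;
static unsigned int tx_log_everything;  /* forced-output nesting counter */

static void start_log_everything(void) { tx_log_everything++; }
static void end_log_everything(void)   { tx_log_everything--; }

/*
 * Returns the number of characters actually buffered.  On overflow the
 * remainder is dropped, unless "log everything" is in effect, in which
 * case the real code would wait for the UART to drain (modelled here by
 * simply recycling the buffer).
 */
static unsigned int tx_puts(const char *s)
{
    unsigned int n = 0;

    for ( ; *s; s++ )
    {
        if ( tx_used == TX_BUF_SZ )
        {
            if ( !tx_log_everything )
                break;              /* rate limit: drop the rest */
            tx_used = 0;            /* pretend the UART drained the buffer */
        }
        tx_buf[tx_used++] = *s;
        n++;
    }
    return n;
}
```

With the counter at zero, output beyond the buffer capacity is silently
dropped; inside a start/end pair nothing is lost, at the cost of
(effectively) synchronous behaviour — which is the trade-off Jan points
out above.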


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 06:59:14 2022
Message-ID: <9f3b4e63-8cc6-1b34-4bf7-2c065b7a0cf6@suse.com>
Date: Tue, 14 Jun 2022 08:59:02 +0200
Subject: Re: [PATCH v2] add more MISRA C rules to docs/misra/rules.rst
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: George.Dunlap@citrix.com, roger.pau@citrix.com, Artem_Mygaiev@epam.com,
 Andrew.Cooper3@citrix.com, julien@xen.org, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20220610212755.1051640-1-sstabellini@kernel.org>
 <2c4b41e4-7381-7424-de72-43f55c448665@suse.com>
 <alpine.DEB.2.22.394.2206131627050.1837490@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206131627050.1837490@ubuntu-linux-20-04-desktop>

On 14.06.2022 01:29, Stefano Stabellini wrote:
> On Mon, 13 Jun 2022, Jan Beulich wrote:
>> On 10.06.2022 23:27, Stefano Stabellini wrote:
>>> +   * - `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
>>> +     - Required
>>> +     - An identifier declared in an inner scope shall not hide an
>>> +       identifier declared in an outer scope
>>> +     - Using macros as macro parameters at invocation time is allowed
>>> +       even if both macros use identically named local variables, e.g.
>>> +       max_t(var0, min_t(var1, var2))
>>
>> Nit: I would have been okay with the prior use of MIN() and MAX() in this
>> example, but now that you have switched to min_t() / max_t() I think the
>> example also wants to match our macros of these names. Hence I'd like to
>> suggest that either you switch to using min() / max() (which also use
>> local variables), or you add the missing "type" arguments in both macro
>> invocations.
> 
> I see your point. I'll use min/max as follows:
> 
> max(var0, min(var1, var2))
> 
> If you are OK with that and there are no other suggestions this tiny
> change could be done on commit.

Yes, that's fine with me.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 08:10:30 2022
Message-ID: <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
Date: Tue, 14 Jun 2022 10:10:03 +0200
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqgwNu3QSpPcZjnU@Air-de-Roger>

On 14.06.2022 08:52, Roger Pau Monné wrote:
> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
>> On 13.06.2022 14:32, Roger Pau Monné wrote:
>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>>>> likely important to have all the output if the boot fails without
>>>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>>>> other guests).
>>>>>>>>>
>>>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>>>
>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>>>> Dom0's kernel messages?
>>>>>>>
> >>>>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
> >>>>>>> this request is something that came up internally.
>>>>>>>
> >>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>>>> limiting based on log levels, I was assuming that non-rate-limited
> >>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>> triggered) output shouldn't be rate limited either.
>>>>>>
>>>>>> Which would raise the question of why we have log levels for non-guest
>>>>>> messages.
>>>>>
>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>>>> levels and rate limiting.  If I set log level to WARNING I would
> >>>>> expect to not lose _any_ non-guest log messages with level WARNING or
>>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>> since users might want to filter out DEBUG non-guest messages for
>>>>> example.
>>>>
>>>> It was me who was confused, because of the two log-everything variants
>>>> we have (console and serial). You're right that your change is unrelated
>>>> to log levels. However, when there are e.g. many warnings or when an
>>>> admin has lowered the log level, what you (would) do is effectively
>>>> force sync_console mode transiently (for a subset of messages, but
>>>> that's secondary, especially because the "forced" output would still
>>>> be waiting for earlier output to make it out).
>>>
>>> Right, it would have to wait for any previous output on the buffer to
>>> go out first.  In any case we can guarantee that no more output will
>>> be added to the buffer while Xen waits for it to be flushed.
>>>
>>> So for the hardware domain it might make sense to wait for the TX
>>> buffers to be half empty (the current tx_quench logic) by preempting
>>> the hypercall.  That however could cause issues if guests manage to
>>> keep filling the buffer while the hardware domain is being preempted.
>>>
>>> Alternatively we could always reserve half of the buffer for the
>>> hardware domain, and allow it to be preempted while waiting for space
>>> (since it's guaranteed non hardware domains won't be able to steal the
>>> allocation from the hardware domain).
>>
>> Getting complicated it seems. I have to admit that I wonder whether we
>> wouldn't be better off leaving the current logic as is.
> 
> Another possible solution (more of a band aid) is to increase the
> buffer size from 4 pages to 8 or 16.  That would likely allow Xen to
> cope with the high throughput of boot messages.

You mean the buffer whose size is controlled by serial_tx_buffer? On
large systems one may want to simply make use of the command line
option then; I don't think the built-in default needs changing. Or
if so, then perhaps not statically at build time, but taking into
account system properties (like CPU count).

Jan
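
Jan's suggestion of sizing the buffer from system properties rather than
a static build-time constant could look something like the sketch below.
All names and the scaling factor are hypothetical illustrations of the
idea, not an actual Xen proposal:

```c
#include <assert.h>

#define PAGE_SIZE        4096u
#define DEFAULT_TX_PAGES 4u       /* current built-in default */
#define MAX_TX_PAGES     64u      /* arbitrary illustrative cap */

/* Round up to the next power of two (v != 0, fits in 32 bits). */
static unsigned int roundup_pow2(unsigned int v)
{
    v--;
    v |= v >> 1; v |= v >> 2; v |= v >> 4;
    v |= v >> 8; v |= v >> 16;
    return v + 1;
}

/*
 * Hypothetical sizing: keep the 4-page default on small boxes, scale
 * with CPU count on large ones (one page per 8 CPUs here), and clamp
 * to a sane maximum so huge systems don't eat unbounded memory.
 */
static unsigned int tx_buf_size(unsigned int num_cpus)
{
    unsigned int pages = roundup_pow2(num_cpus) / 8;

    if ( pages < DEFAULT_TX_PAGES )
        pages = DEFAULT_TX_PAGES;
    if ( pages > MAX_TX_PAGES )
        pages = MAX_TX_PAGES;
    return pages * PAGE_SIZE;
}
```

A command-line override (as with the existing serial_tx_buffer option)
would still take precedence over any such computed default.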


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 08:33:22 2022
Date: Tue, 14 Jun 2022 10:32:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqhHtetipYTG8tuc@Air-de-Roger>
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>

On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> On 14.06.2022 08:52, Roger Pau Monné wrote:
> > On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>>>> likely important to have all the output if the boot fails without
> >>>>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>>>> other guests).
> >>>>>>>>>
> >>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>>>
> >>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>>>> Dom0's kernel messages?
> >>>>>>>
> >>>>>>> I normally use sync_console on all the boxes I'm doing dev work on,
> >>>>>>> so this request is something that came up internally.
> >>>>>>>
> >>>>>>> I didn't realize Xen output wasn't forced; since we already have
> >>>>>>> rate limiting based on log levels, I was assuming that non-ratelimited
> >>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>> triggered) output shouldn't be rate limited either.
> >>>>>>
> >>>>>> Which would raise the question of why we have log levels for non-guest
> >>>>>> messages.
> >>>>>
> >>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>>>> levels and rate limiting.  If I set the log level to WARNING, I would
> >>>>> expect not to lose _any_ non-guest log messages with level WARNING or
> >>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>> since a user might want to filter out DEBUG non-guest messages, for
> >>>>> example.
> >>>>
> >>>> It was me who was confused, because of the two log-everything variants
> >>>> we have (console and serial). You're right that your change is unrelated
> >>>> to log levels. However, when there are e.g. many warnings or when an
> >>>> admin has lowered the log level, what you (would) do is effectively
> >>>> force sync_console mode transiently (for a subset of messages, but
> >>>> that's secondary, especially because the "forced" output would still
> >>>> be waiting for earlier output to make it out).
> >>>
> >>> Right, it would have to wait for any previous output on the buffer to
> >>> go out first.  In any case we can guarantee that no more output will
> >>> be added to the buffer while Xen waits for it to be flushed.
> >>>
> >>> So for the hardware domain it might make sense to wait for the TX
> >>> buffers to be half empty (the current tx_quench logic) by preempting
> >>> the hypercall.  That however could cause issues if guests manage to
> >>> keep filling the buffer while the hardware domain is being preempted.
> >>>
> >>> Alternatively we could always reserve half of the buffer for the
> >>> hardware domain, and allow it to be preempted while waiting for space
> >>> (since it's guaranteed that non-hardware domains won't be able to steal
> >>> the allocation from the hardware domain).
> >>
> >> Getting complicated it seems. I have to admit that I wonder whether we
> >> wouldn't be better off leaving the current logic as is.
> > 
> > Another possible solution (more like a band-aid) is to increase the
> > buffer size from 4 pages to 8 or 16.  That would likely allow Xen to
> > cope with the high throughput of boot messages.
> 
> You mean the buffer whose size is controlled by serial_tx_buffer?

Yes.

> On
> large systems one may want to simply make use of the command line
> option then; I don't think the built-in default needs changing. Or
> if so, then perhaps not statically at build time, but taking into
> account system properties (like CPU count).

So how about we use:

min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))

Maybe we should also take CPU frequency into account, but that seems
too complex for the purpose.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 08:41:58 2022
Date: Tue, 14 Jun 2022 10:41:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqhJsOn0KpCwQjeQ@Air-de-Roger>
References: <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YqhHtetipYTG8tuc@Air-de-Roger>

On Tue, Jun 14, 2022 at 10:32:53AM +0200, Roger Pau Monné wrote:
> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> > On 14.06.2022 08:52, Roger Pau Monné wrote:
> > > On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> > >> On 13.06.2022 14:32, Roger Pau Monné wrote:
> > >>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> > >>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> > >>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> > >>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> > >>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> > >>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> > >>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> > >>>>>>>>> likely important to have all the output if the boot fails without
> > >>>>>>>>> having to resort to sync_console (which also affects the output from
> > >>>>>>>>> other guests).
> > >>>>>>>>>
> > >>>>>>>>> Do so by pairing the console_serial_puts() with
> > >>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> > >>>>>>>>
> > >>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> > >>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> > >>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> > >>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> > >>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> > >>>>>>>> not convinced we'd want to let through everything, but perhaps just
> > >>>>>>>> Dom0's kernel messages?
> > >>>>>>>
> > >>>>>>> I normally use sync_console on all the boxes I'm doing dev work on,
> > >>>>>>> so this request is something that came up internally.
> > >>>>>>>
> > >>>>>>> I didn't realize Xen output wasn't forced; since we already have
> > >>>>>>> rate limiting based on log levels, I was assuming that non-ratelimited
> > >>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> > >>>>>>> triggered) output shouldn't be rate limited either.
> > >>>>>>
> > >>>>>> Which would raise the question of why we have log levels for non-guest
> > >>>>>> messages.
> > >>>>>
> > >>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> > >>>>> levels and rate limiting.  If I set the log level to WARNING I would
> > >>>>> expect to not lose _any_ non-guest log messages with level WARNING or
> > >>>>> above.  It's still useful to have log levels for non-guest messages,
> > >>>>> since a user might want to filter out DEBUG non-guest messages for
> > >>>>> example.
> > >>>>
> > >>>> It was me who was confused, because of the two log-everything variants
> > >>>> we have (console and serial). You're right that your change is unrelated
> > >>>> to log levels. However, when there are e.g. many warnings or when an
> > >>>> admin has lowered the log level, what you (would) do is effectively
> > >>>> force sync_console mode transiently (for a subset of messages, but
> > >>>> that's secondary, especially because the "forced" output would still
> > >>>> be waiting for earlier output to make it out).
> > >>>
> > >>> Right, it would have to wait for any previous output on the buffer to
> > >>> go out first.  In any case we can guarantee that no more output will
> > >>> be added to the buffer while Xen waits for it to be flushed.
> > >>>
> > >>> So for the hardware domain it might make sense to wait for the TX
> > >>> buffers to be half empty (the current tx_quench logic) by preempting
> > >>> the hypercall.  That however could cause issues if guests manage to
> > >>> keep filling the buffer while the hardware domain is being preempted.
> > >>>
> > >>> Alternatively we could always reserve half of the buffer for the
> > >>> hardware domain, and allow it to be preempted while waiting for space
> > >>> (since it's guaranteed non-hardware domains won't be able to steal the
> > >>> allocation from the hardware domain).
> > >>
> > >> Getting complicated it seems. I have to admit that I wonder whether we
> > >> wouldn't be better off leaving the current logic as is.
> > > 
> > > Another possible solution (more like a band-aid) is to increase the
> > > buffer size from 4 pages to 8 or 16.  That would likely allow it to
> > > cope fine with the high throughput of boot messages.
> > 
> > You mean the buffer whose size is controlled by serial_tx_buffer?
> 
> Yes.
> 
> > On
> > large systems one may want to simply make use of the command line
> > option then; I don't think the built-in default needs changing. Or
> > if so, then perhaps not statically at build time, but taking into
> > account system properties (like CPU count).
> 
> So how about we use:
> 
> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))

Er, sorry, that should be max(...) instead.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 08:46:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 08:46:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348646.574854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12CA-0001tY-CB; Tue, 14 Jun 2022 08:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348646.574854; Tue, 14 Jun 2022 08:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12CA-0001tR-9G; Tue, 14 Jun 2022 08:46:42 +0000
Received: by outflank-mailman (input) for mailman id 348646;
 Tue, 14 Jun 2022 08:46:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12C8-0001tH-FY; Tue, 14 Jun 2022 08:46:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12C8-0002vj-BX; Tue, 14 Jun 2022 08:46:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12C7-0002cr-Q9; Tue, 14 Jun 2022 08:46:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o12C7-00048W-Pj; Tue, 14 Jun 2022 08:46:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Uo9flm+86gbchSsL/usFVG49ckqaaXx4vxnYbJZQ+RE=; b=vrE0e20siIhshdteL+eya55nFr
	R27XrUJpZHVAVVlE/qoGsiV+2JC5C2/GwElYhRgQuPGLCr3Dxh4l8MO0zpxf9zsLmExfsAr1BQP9f
	yAnR0zIoVO4n4pVc0I+Tro55R6DGYQRoSWBZIlkthPYQe15Wh3EKmMMnx3iAT+fyYbjg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171159: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=31b5ad06e315a8e9625a7fec56dc14af15895c89
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 08:46:39 +0000

flight 171159 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171159/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              31b5ad06e315a8e9625a7fec56dc14af15895c89
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  704 days
Failing since        151818  2020-07-11 04:18:52 Z  703 days  685 attempts
Testing same since   171159  2022-06-14 04:18:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113053 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 08:49:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 08:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348657.574866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12ER-0002ci-RO; Tue, 14 Jun 2022 08:49:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348657.574866; Tue, 14 Jun 2022 08:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12ER-0002cb-Mv; Tue, 14 Jun 2022 08:49:03 +0000
Received: by outflank-mailman (input) for mailman id 348657;
 Tue, 14 Jun 2022 08:49:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12EQ-0002cR-Hz; Tue, 14 Jun 2022 08:49:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12EQ-0002xm-GD; Tue, 14 Jun 2022 08:49:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o12EQ-0002jG-4d; Tue, 14 Jun 2022 08:49:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o12EQ-0004Yd-4E; Tue, 14 Jun 2022 08:49:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=awlRGvnW4rNeAguV2TmT/09LvrrCKKQNADOYUI0DjBo=; b=V9/Rwm0CvHqNiAR00P5DzECKQ/
	yJOHGOYbr5lGfGX26rG9r7B+o2XJ0IKYR36u0YU5nxLbJz8G95EEnmn/bpYgOA7T1dAAy3O+eiOGE
	2CcO1pcUhUfZL5NQKxz3OTLSUDA2dtTvln388txS7b2H+bgBNZvyrMkZwH6RQZ9b86XU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171156: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 08:49:02 +0000

flight 171156 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171156/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-vhd       8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot           fail pass in 171154
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot   fail pass in 171155
 test-amd64-amd64-pair        12 xen-boot/src_host          fail pass in 171155
 test-amd64-amd64-pair        13 xen-boot/dst_host          fail pass in 171155
 test-amd64-amd64-xl-credit1   8 xen-boot                   fail pass in 171155
 test-amd64-amd64-xl-xsm       8 xen-boot                   fail pass in 171155

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   21 days
Failing since        170716  2022-05-24 11:12:06 Z   20 days   51 attempts
Testing same since   171154  2022-06-13 04:37:11 Z    1 days    3 attempts

------------------------------------------------------------
2346 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 276082 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:13:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 09:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348680.574881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12br-0006Eg-0q; Tue, 14 Jun 2022 09:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348680.574881; Tue, 14 Jun 2022 09:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o12bq-0006EZ-Ty; Tue, 14 Jun 2022 09:13:14 +0000
Received: by outflank-mailman (input) for mailman id 348680;
 Tue, 14 Jun 2022 09:13:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o12bq-0006ES-0l
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 09:13:14 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0612.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35de25b6-ebc2-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 11:13:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8945.eurprd04.prod.outlook.com (2603:10a6:20b:42c::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Tue, 14 Jun
 2022 09:13:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 09:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35de25b6-ebc2-11ec-8901-93a377f238d6
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YKPJtp9m2u/GrwXECYQbRvTxcJh5a/yd3/VDoz3sjBNlHKDwhrbME7zJwNQyqdNe+LZS8LcHcb1wK92eTR5LAv1nDYQxZ9wZIfrMG/UJi9ODWt7KEEAAyus5zJGWbfKCNOkwqxYy8GNTpxQbkEXhz/vaUc5kIh2pJF7MyFi+YT3cJBXNhcPzaLg0DlKb62OJ+EkcZG5wn+tHMYeyEY95UFnAKg2FGspjq13v8E0s9Mk/2qXnlNzZd4Tnl5ykEw06pQYBb3A58rrX2afQkFjxwj63a4Kox+ahBsHbxk666gRr2QOy7y3VnDVuVdPH4A+jIN6FBca258VNQ7mqTUYGlg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RncGR/Hqn7SBvBFY1V9vTjTOQTwupfQNhG79ifnuEYM=;
 b=mN5idc2tNmhckacyyrVjtsgb7BndiJn6dr5byb7WSEJ5Q8Fw1m5F21Jlijw14eDigD8bEzKQIqA8ViwP+53zixYw8W5FgfZVrz88Oz+F41DgaULliFhyaPeQOeygi0PXQmB8/dCFMkup0+Jhzf87WzafGpf6H02hyZUfjxk6ZIDefOv1pQyngkvc2p1Wrc+KpNtyitpAgLg9BI4YUWqVWCmaXoS84WCbH1nF6fysSzM3SnkgmpIl7VK3CSdQ5ry4egQzJGOY1Wf62sCo0+yg2xyhw/nib9puwq40uVmirxS0khAMseCTE+BeDesjB7nhXeuNqHCxxnSs6IvlvrGa9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RncGR/Hqn7SBvBFY1V9vTjTOQTwupfQNhG79ifnuEYM=;
 b=OPGOJ1iTQM1Qtzv2CbVvK9kj0mrAurCeAkPxzJWcX1I68iiZ+D8CIhD9UfJaHIFabh2eRMHeM9BFMSko2ztwOHuyOo58eFg9NUeSxDFFjCJ04XVgvzv2q334XQAnz9Fsh0q3+KUWbiTS0f0TqlXVb9idH69RaGKO2hDDiojDScDriTTLYPPh20eEnDyAoOHYS0dLGwY21hsPKx/66KiuQzSl8Hya+3xAgwqbAdSGyERERnO3YtjZfOkNeZaWDLD4oUZKJmj76eaLKiy+E4TXwSvvkfR9B9grDd0wZpHYyclXGtUFjZmr5UC75EKeI5p5pGjSxq8I9C/9tUjYIXUnwg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
Date: Tue, 14 Jun 2022 11:13:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220610150651.29933-1-roger.pau@citrix.com>
 <3a462021-1802-4764-3547-6d0a02cd092f@suse.com>
 <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqhHtetipYTG8tuc@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: edf5095c-871d-4e08-ec1f-08da4de61910
X-MS-TrafficTypeDiagnostic: AS8PR04MB8945:EE_
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB894578F280FB95C9586A36ADB3AA9@AS8PR04MB8945.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	44xWMbSoTmgNkqtR3YGDZj/WC3j435hKCHczfWI2M9wCd+XcqawOjmkh0RNdua8Wb1a3IbHk4V9TVrVwQbyNlyFPqWy4oHWX367Dmhh2FUGWNkJ1M5s5LvNt+jGYAng1x1h6QEvxwQCzKL+Pw/X35hOwIk3XanuZMFWhAgvzX9WXj9U3Guyn+Cva+RYaokEyzTDdIvxWrqIpMJ0cvvpAtBjEBrAgbKboJ/Ng1dOcB75uv7p9KKNyDzQpJWna3nTBsk4YFaGNt0vCyWPmf9CP2XD3iAu6Vb6PgHChhOcyMpuH0JJM+XBYxvSqRUaG0AKPHuT9KCTtnx/kaRdaKDuAuYUgiiMlxPRzRT5f3Hh1EqPib4j26xOBkbhh6zshPrzQRBZZx0VwqK9ZgBlZpnL/ip33Lw6J1OQz5FZXxumg8Ckei2wMNSz0OGZav4vAREnFFQSMZGaB1Wwg0C623NgIKTxJR+Pj9VtJJMfNzruVYJoCMTh3qlydqnNTJfq/zf5N/81B70w8W/s6MdDoteBhLQptljW+KZxEB0bX3Qh0aA5uF9qPvpkKCvL7EnolsbZSPKS1vayPHQFGSTR6D5Wiz0ja/v+qfOSSVhrpy9YM4eZpUNUqdqQnbabR92Dv25GWUwA1fiAvio8snHcKbsdIXFHQIRQ4kCRjgghzzflLkdPHLVxzi4MUIDq2bjQxXFnjBQJi+IuUAunkGW745QXXevB7vbdh5Q86hiIbcMSKh/w0n0kg86Hd8d+m498gkEiD
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(26005)(2616005)(53546011)(31686004)(54906003)(2906002)(5660300002)(6506007)(6916009)(36756003)(83380400001)(38100700002)(6512007)(31696002)(316002)(66946007)(66556008)(8936002)(86362001)(186003)(8676002)(508600001)(4326008)(66476007)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YTVONHhNNVFkSnBFUG1JdDRZdldoV3hJaEFMdVBkdkwvSzc5eEdibHBkb3h2?=
 =?utf-8?B?V1JqV2EwbWo3Nmx4RzcvZGNIbHpXeHBZRXBrblBHUE80b2Rqckc5ZUNQRXpS?=
 =?utf-8?B?Um8xSC9mNWdoQnZYMjBybjhnTVJhcHJ4VnJ6Q1d1Tys1QkxoaHdrTXZVMVlu?=
 =?utf-8?B?TTM5NFFjTmRidVMrMno0TG5naEpMR3hFTW8xd21icGs5S1pDRVMyVWJ4cWwz?=
 =?utf-8?B?ckhLS015ZUdzZTdpSy96NDJzTjZ3bDRNdCtNK0pjbEdzQzZObTBEd3ZpRmtr?=
 =?utf-8?B?M0M0eFQxMCtYUEphblEzSDNBZVZtWDhjdlUzYnI4MTRVTlFjeEVIZVl6b2RY?=
 =?utf-8?B?VlRYZGdXQk9JdHJ6NG80WE9xd1FuaXdObTN2WnFpT0JpaDJFZVhwZ3d0ODdR?=
 =?utf-8?B?aXN6cWd1NUdNQmw2a0lpQ1l4M2E2YlR2SGVRY1liZWEzTzZYN1N1ZmZXNHQ3?=
 =?utf-8?B?ZjhUMFNZZXNBdDZ0TXQ4UFRFcGNEOGplZHAydXNXN0NVWUNmZEJIZ3EveHBB?=
 =?utf-8?B?aHpsZ1FZOVVhUXFzOGh1Uy9BVjZYWTI4OHo4dFpTcVBHRjVpalZIVFNOdysv?=
 =?utf-8?B?MHNuVkphTm5RRTkwMVVGb0h1MWF3VUh1ODhWSXBzWkczaWN3dERzK1R4VGtt?=
 =?utf-8?B?cU42S1dNUlNyd3VrWHVYeWYzUjd5NndjV1ZvcmFlbDZEKzlXNEJqQ3hkT09X?=
 =?utf-8?B?ZGdmY0lCRFhHOUZLNkNKbU41dkQ5QnlNK0tpTmRiMDNXczd2cFpqQzdQaVF5?=
 =?utf-8?B?OTU0NWlRbDZvYzVQeEd5clREUmF4eFBHTUJqNGZLWmhiUFVoYitPaFk2VE1l?=
 =?utf-8?B?SXQwVm1zZlZmWExERk9rQk5mcWZlME1HVVVEbVhHNzFQdHJXZklDSVJXa0NV?=
 =?utf-8?B?OGN6VktvMlhVTGVycW8xdWdWRFcrVlhTUkJpZ1R5aVVTWWRNcFRNQktYL0c1?=
 =?utf-8?B?VE12am81Qk4xYmUwK1BWVTRVS2FPZ2E4QlFwUnhxWUVUSDJxb1ZrRjdQSksv?=
 =?utf-8?B?NnJYenF3Y0NNclcrS0Fkc2UxdExZZkYvb3piOGd2VnlKV1hMbG9DbzVPaFpu?=
 =?utf-8?B?V3JMaVhOVnlDanNqWUFycW9vdlN4aVV1QnlYdk8zdStJSnZ5R1VncFdjQTBu?=
 =?utf-8?B?QWdXRzlqbUJ0bVNBTmQ5bGtxN2JtakZGNC8wdzRMTnA2OTU5WmFXYVdBcE1H?=
 =?utf-8?B?MnNNZlFUZWdzQW5IZU12Qm9zanBxZWI0V1NyRi9TRDJzZUlwWGo1ZEU1YUox?=
 =?utf-8?B?SmZQSncxaGJNRTNEdkRVNDFYaHZRdkpVeWswNm1NVjZtc1FzOVJ6YVRQVVRr?=
 =?utf-8?B?MnVnM202QkpvQnUrZzZBWWhBMDY2MWVFZ0c3bUpuQjRicXhqaVlVcDBOeVBL?=
 =?utf-8?B?Rkk3bU4xMW1kbHc4ekowVzRqakdmOUE1RUJkYmZEVEhmYUtVVm9jbVVGd2FZ?=
 =?utf-8?B?MExUOWpBa1Bnb0xZWEladkhNVXp3bEhONTFaN2owT1NURkV3cUMyWm1Pdzhz?=
 =?utf-8?B?VUxpbFhuSzA1UzJWWTJnTHRxZHZkdVFqRkw0b01HNHBtMDQ0bG9qeThNYlZR?=
 =?utf-8?B?NHBzcHVMeEg2bkMvSHNCdkR4M0NHOGtZbUl3eDMzR20wdlJwMWlSa2JPRFg5?=
 =?utf-8?B?ck9PQ0xwOFJnRHJJMVVFemppOXFKMWxoK2FKMTQ3MmF3R0lPN2NIOWszQlVq?=
 =?utf-8?B?ZHAycUoxSmZKdlVmeEFaSmR6Zmx3blR6eExzSzROdjNjZVh5djJ3bFpmQ2F6?=
 =?utf-8?B?KytIVCtSQXlISHNsMENES2pubmVBM1RnZFc5SXFPWDFCS1MxTm5Qa09ZZXpC?=
 =?utf-8?B?Z2lKUkMvTDFMdU9SNTViTWJRSkRHeW9WY01MR1BPZGE2bklCbzBURzdDelEr?=
 =?utf-8?B?YUozRTBacXlYVDJybjFEY0lKV0tFRnozVFhpdEJxK05nSXZhd3Vac3RZbnl6?=
 =?utf-8?B?NzRxNVRNUGkzU1N1NUd1SksrWGhrUEJ6YjdPWUVvVUVReExlcHFJcWhGYXZn?=
 =?utf-8?B?WHZhM0Z0KzJDam9SckJXMjNSK244aENod2I1QkRQM2lQZ0lzRkEyYWlNODRp?=
 =?utf-8?B?OVIrMnc3bC9nZEZCS3ZZWXo2dDMzcEJNV1RXdjFCdno2bDBCMlMwVXN6OXg4?=
 =?utf-8?B?VU5GemppSGRic0hnV3V5dzVMNnZlMm9pQU16WGJCRHZtWXhDMldocit4Nzdn?=
 =?utf-8?B?ZDFHTzlUbXoxOXk2T1VBdjhaZWU0cWxvMzJzelBJd2dyeVlkTjF2MkRVdWpT?=
 =?utf-8?B?TlMwY0lTc1BZMCtSNkxQVXVmbXZXam1kVTAyc3EwYkI4cDlXdE1vOVhTaXZE?=
 =?utf-8?B?aU92b0ZCd1F0NjFwVEhRRjMwY2E4b0VCanVVWGRWUmx4dWpXNVVTdz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: edf5095c-871d-4e08-ec1f-08da4de61910
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 09:13:08.5796
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DB0FbTrr2zhed0qFZJM7GETzB+AYVu/eEYBHHHO3YqPOYfSREvOXMwoH/3uk3bR/u7scVqxf6OT+LB0lDaG1SQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8945

On 14.06.2022 10:32, Roger Pau Monné wrote:
> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
>> On 14.06.2022 08:52, Roger Pau Monné wrote:
>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>>>>>> likely important to have all the output if the boot fails without
>>>>>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>>>>>> other guests).
>>>>>>>>>>>
>>>>>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>>>>>
>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>>>>>> Dom0's kernel messages?
>>>>>>>>>
>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
>>>>>>>>> this request is something that came up internally.
>>>>>>>>>
>>>>>>>>> Didn't realize Xen output wasn't forced, since we already have rate
>>>>>>>>> limiting based on log levels I was assuming that non-ratelimited
>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>>>>>>>> triggered) output shouldn't be rate limited either.
>>>>>>>>
>>>>>>>> Which would raise the question of why we have log levels for non-guest
>>>>>>>> messages.
>>>>>>>
>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>>>>>> levels and rate limiting.  If I set the log level to WARNING I would
>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
>>>>>>> above.  It's still useful to have log levels for non-guest messages,
>>>>>>> since a user might want to filter out DEBUG non-guest messages, for
>>>>>>> example.
>>>>>>
>>>>>> It was me who was confused, because of the two log-everything variants
>>>>>> we have (console and serial). You're right that your change is unrelated
>>>>>> to log levels. However, when there are e.g. many warnings or when an
>>>>>> admin has lowered the log level, what you (would) do is effectively
>>>>>> force sync_console mode transiently (for a subset of messages, but
>>>>>> that's secondary, especially because the "forced" output would still
>>>>>> be waiting for earlier output to make it out).
>>>>>
>>>>> Right, it would have to wait for any previous output on the buffer to
>>>>> go out first.  In any case we can guarantee that no more output will
>>>>> be added to the buffer while Xen waits for it to be flushed.
>>>>>
>>>>> So for the hardware domain it might make sense to wait for the TX
>>>>> buffers to be half empty (the current tx_quench logic) by preempting
>>>>> the hypercall.  That however could cause issues if guests manage to
>>>>> keep filling the buffer while the hardware domain is being preempted.
>>>>>
>>>>> Alternatively we could always reserve half of the buffer for the
>>>>> hardware domain, and allow it to be preempted while waiting for space
>>>>> (since it's guaranteed that non-hardware domains won't be able to
>>>>> steal the allocation from the hardware domain).
>>>>
>>>> Getting complicated it seems. I have to admit that I wonder whether we
>>>> wouldn't be better off leaving the current logic as is.
>>>
>>> Another possible solution (more like a band-aid) is to increase the
>>> buffer size from 4 pages to 8 or 16.  That would likely allow us to
>>> cope with the high throughput of boot messages.
>>
>> You mean the buffer whose size is controlled by serial_tx_buffer?
> 
> Yes.
> 
>> On
>> large systems one may want to simply make use of the command line
>> option then; I don't think the built-in default needs changing. Or
>> if so, then perhaps not statically at build time, but taking into
>> account system properties (like CPU count).
> 
> So how about we use:
> 
> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))

That would _reduce_ size on small systems, wouldn't it? Originally
you were after increasing the default size. But if you had meant
max(), then I'd fear on very large systems this may grow a little
too large.

> Maybe we should also take CPU frequency into account, but that seems
> too complex for the purpose.

Why would frequency matter? Other aspects I could see mattering are
node count and maybe memory size.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:38:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 09:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348713.574892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o130T-0000eK-2H; Tue, 14 Jun 2022 09:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348713.574892; Tue, 14 Jun 2022 09:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o130S-0000eD-U9; Tue, 14 Jun 2022 09:38:40 +0000
Received: by outflank-mailman (input) for mailman id 348713;
 Tue, 14 Jun 2022 09:38:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i1me=WV=citrix.com=prvs=1570496fe=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o130R-0000e7-PZ
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 09:38:39 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3aa8cac-ebc5-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 11:38:38 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Jun 2022 05:38:35 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CH2PR03MB5288.namprd03.prod.outlook.com (2603:10b6:610:9b::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12; Tue, 14 Jun
 2022 09:38:34 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%6]) with mapi id 15.20.5332.022; Tue, 14 Jun 2022
 09:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3aa8cac-ebc5-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655199518;
Date: Tue, 14 Jun 2022 11:38:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqhXFKMlIvkQzVoT@Air-de-Roger>
References: <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
MIME-Version: 1.0

On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
> On 14.06.2022 10:32, Roger Pau Monné wrote:
> > On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> >> On 14.06.2022 08:52, Roger Pau Monné wrote:
> >>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>>>>>> likely important to have all the output if the boot fails without
> >>>>>>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>>>>>> other guests).
> >>>>>>>>>>>
> >>>>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>>>>>
> >>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>>>>>> Dom0's kernel messages?
> >>>>>>>>>
> >>>>>>>>> I normally use sync_console on all the boxes I do dev work on, so
> >>>>>>>>> this request is something that came up internally.
> >>>>>>>>>
> >>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
> >>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>>>> triggered) output shouldn't be rate limited either.
> >>>>>>>>
> >>>>>>>> Which would raise the question of why we have log levels for non-guest
> >>>>>>>> messages.
> >>>>>>>
> >>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>>>>>> levels and rate limiting.  If I set log level to WARNING I would
> >>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
> >>>>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>>>> since a user might want to filter out DEBUG non-guest messages for
> >>>>>>> example.
> >>>>>>
> >>>>>> It was me who was confused, because of the two log-everything variants
> >>>>>> we have (console and serial). You're right that your change is unrelated
> >>>>>> to log levels. However, when there are e.g. many warnings or when an
> >>>>>> admin has lowered the log level, what you (would) do is effectively
> >>>>>> force sync_console mode transiently (for a subset of messages, but
> >>>>>> that's secondary, especially because the "forced" output would still
> >>>>>> be waiting for earlier output to make it out).
> >>>>>
> >>>>> Right, it would have to wait for any previous output on the buffer to
> >>>>> go out first.  In any case we can guarantee that no more output will
> >>>>> be added to the buffer while Xen waits for it to be flushed.
> >>>>>
> >>>>> So for the hardware domain it might make sense to wait for the TX
> >>>>> buffers to be half empty (the current tx_quench logic) by preempting
> >>>>> the hypercall.  That however could cause issues if guests manage to
> >>>>> keep filling the buffer while the hardware domain is being preempted.
> >>>>>
> >>>>> Alternatively we could always reserve half of the buffer for the
> >>>>> hardware domain, and allow it to be preempted while waiting for space
> >>>>> (since it's guaranteed non hardware domains won't be able to steal the
> >>>>> allocation from the hardware domain).
> >>>>
> >>>> Getting complicated it seems. I have to admit that I wonder whether we
> >>>> wouldn't be better off leaving the current logic as is.
> >>>
> >>> Another possible solution (more of a band-aid) is to increase the
> >>> buffer size from 4 pages to 8 or 16.  That would likely allow us to
> >>> cope fine with the high throughput of boot messages.
> >>
> >> You mean the buffer whose size is controlled by serial_tx_buffer?
> > 
> > Yes.
> > 
> >> On
> >> large systems one may want to simply make use of the command line
> >> option then; I don't think the built-in default needs changing. Or
> >> if so, then perhaps not statically at build time, but taking into
> >> account system properties (like CPU count).
> > 
> > So how about we use:
> > 
> > min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
> 
> That would _reduce_ size on small systems, wouldn't it? Originally
> you were after increasing the default size. But if you had meant
> max(), then I'd fear on very large systems this may grow a little
> too large.

See previous followup about my mistake of using min() instead of
max().

On a system with 512 CPUs that would be 512KB; I don't think that's a
lot of memory, especially taking into account that a system with 512
CPUs can be expected to have a matching amount of memory.

It's true however that I very much doubt we would fill a 512K buffer,
so limiting to 64K might be a sensible starting point?
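For concreteness, the sizing heuristic under discussion (max() rather
than the mistaken min(), with a 64K cap) could be sketched as below;
the helper name, the local ROUNDUP definition, and the exact cap are
illustrative assumptions, not actual Xen code:

```c
#include <assert.h>

/* Hypothetical sketch of a default serial TX buffer size: scale with
 * the possible CPU count, round up to a 4KiB page, never go below the
 * current 16KiB default, and cap at 64KiB. */
#define PAGE_SIZE_4K 4096UL
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

static unsigned long default_serial_tx_bufsz(unsigned long num_possible_cpus)
{
    unsigned long sz = ROUNDUP(1024UL * num_possible_cpus, PAGE_SIZE_4K);

    if ( sz < 16384UL )   /* never shrink below the historical default */
        sz = 16384UL;
    if ( sz > 65536UL )   /* cap very large systems at 64K */
        sz = 65536UL;

    return sz;
}
```

With this shape a 4-CPU box keeps the 16K default, while a 512-CPU box
gets the 64K cap rather than 512K.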

> > Maybe we should also take CPU frequency into account, but that seems
> > too complex for the purpose.
> 
> Why would frequency matter? Other aspects I could see mattering is
> node count and maybe memory size.

Higher frequency likely means a faster boot, and hence a faster buffer
fill, while the baudrate of the console stays constant.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:40:51 2022
Date: Tue, 14 Jun 2022 11:40:38 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>, berrange@redhat.com
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Message-ID: <20220614094038.g2g6lzeviypcnqrb@sirius.home.kraxel.org>
References: <20220613113655.3693872-1-kraxel@redhat.com>
 <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>

On Mon, Jun 13, 2022 at 08:52:21AM -0700, Richard Henderson wrote:
> On 6/13/22 04:36, Gerd Hoffmann wrote:
> > The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:
> > 
> >    Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 21:13:27 -0700)
> > 
> > are available in the Git repository at:
> > 
> >    git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request
> > 
> > for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:
> > 
> >    ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)
> > 
> > ----------------------------------------------------------------
> > usb: add CanoKey device, fixes for ehci + redir
> > ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
> > virtio-gpu: scanout flush fix
> 
> This doesn't even configure:
> 
> ../src/ui/keymaps/meson.build:55:4: ERROR: File ar does not exist.

Hmm, build worked here and CI passed too.

I think this is one of those cases where the build directory must be
deleted because one subdirectory is replaced by a compatibility
symlink.

Or we drop the symlink idea and update the keymap loading code to check
both the old and the new location.  Daniel?
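The "check both locations" fallback could look roughly like this;
`pick_keymap_dir()` and the way existence checks are passed in are
illustrative assumptions to keep the sketch self-contained, not QEMU's
actual loader code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of a keymap lookup that tries the new directory
 * first and falls back to the old one. */
static const char *pick_keymap_dir(int new_exists, int old_exists)
{
    if (new_exists)
        return "ui/keymaps";      /* new location */
    if (old_exists)
        return "pc-bios/keymaps"; /* old location */
    return NULL;                  /* neither present: fail loudly */
}
```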

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:41:25 2022
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps while preparing the CPU
Date: Tue, 14 Jun 2022 10:41:19 +0100
Message-Id: <20220614094119.94720-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
xmalloc()" extended the checks in _xmalloc() to catch any use of the
helpers from context with interrupts disabled.

Unfortunately, the rule is not followed when allocating the CPU
sibling/core maps.

(XEN) Xen call trace:
(XEN)    [<00238a5c>] _xmalloc+0xfc/0x314 (PC)
(XEN)    [<00000000>] 00000000 (LR)
(XEN)    [<00238c8c>] _xzalloc+0x18/0x4c
(XEN)    [<00288cb4>] smpboot.c#setup_cpu_sibling_map+0x38/0x138
(XEN)    [<00289024>] start_secondary+0x1b4/0x270
(XEN)    [<40010170>] 40010170
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 2:
(XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
(XEN) ****************************************

This is happening because zalloc_cpumask_var() may allocate memory
if NR_CPUS > 2 * BITS_PER_LONG.
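That condition can be sketched as follows; `cpumask_needs_alloc()` is a
hypothetical stand-in for the real `cpumask_var_t` machinery, where the
mask is embedded in the variable only while NR_CPUS fits in two
unsigned longs (i.e. 2 * BITS_PER_LONG bits):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for Xen's cpumask_var_t behaviour: the mask is
 * embedded (no allocation) only while nr_cpus fits in two machine
 * words; beyond that, zalloc_cpumask_var() must allocate the mask.
 * The helper name is an assumption, not the real API. */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static int cpumask_needs_alloc(unsigned long nr_cpus)
{
    return nr_cpus > 2 * BITS_PER_LONG;
}
```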

Avoid the problem by allocating the CPU sibling/core maps while
preparing the CPU.

This also has the benefit of removing a panic() from the secondary CPU
boot code.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/smpboot.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4888bcd78a5a..2b0c92cd369b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -79,15 +79,17 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 static bool __read_mostly opt_hmp_unsafe = false;
 boolean_param("hmp-unsafe", opt_hmp_unsafe);
 
-static void setup_cpu_sibling_map(int cpu)
+static int setup_cpu_sibling_map(int cpu)
 {
     if ( !zalloc_cpumask_var(&per_cpu(cpu_sibling_mask, cpu)) ||
          !zalloc_cpumask_var(&per_cpu(cpu_core_mask, cpu)) )
-        panic("No memory for CPU sibling/core maps\n");
+        return -ENOMEM;
 
     /* A CPU is a sibling with itself and is always on its own core. */
     cpumask_set_cpu(cpu, per_cpu(cpu_sibling_mask, cpu));
     cpumask_set_cpu(cpu, per_cpu(cpu_core_mask, cpu));
+
+    return 0;
 }
 
 static void remove_cpu_sibling_map(int cpu)
@@ -292,9 +294,14 @@ smp_get_max_cpus (void)
 void __init
 smp_prepare_cpus(void)
 {
+    int rc;
+
     cpumask_copy(&cpu_present_map, &cpu_possible_map);
 
-    setup_cpu_sibling_map(0);
+    rc = setup_cpu_sibling_map(0);
+    if ( rc )
+        panic("Unable to allocate CPU sibling/core maps\n");
+
 }
 
 /* Boot the current CPU */
@@ -361,8 +368,6 @@ void start_secondary(void)
 
     set_current(idle_vcpu[cpuid]);
 
-    setup_cpu_sibling_map(cpuid);
-
     /* Run local notifiers */
     notify_cpu_starting(cpuid);
     /*
@@ -530,9 +535,19 @@ static int cpu_smpboot_callback(struct notifier_block *nfb,
                                 void *hcpu)
 {
     unsigned int cpu = (unsigned long)hcpu;
+    int rc = 0;
 
     switch ( action )
     {
+    case CPU_UP_PREPARE:
+        rc = setup_cpu_sibling_map(cpu);
+        if ( rc )
+            printk(XENLOG_ERR
+                   "Unable to allocate CPU sibling/core maps for CPU%u\n",
+                   cpu);
+
+        break;
+
     case CPU_DEAD:
         remove_cpu_sibling_map(cpu);
         break;
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:42:02 2022
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing the CPU
Date: Tue, 14 Jun 2022 10:41:57 +0100
Message-Id: <20220614094157.95631-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
xmalloc()" extended the checks in _xmalloc() to catch any use of the
helpers from context with interrupts disabled.

Unfortunately, the rule is not followed when initializing the per-CPU
IRQs:

(XEN) Xen call trace:
(XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
(XEN)    [<00000000>] 00000000 (LR)
(XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
(XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
(XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
(XEN)    [<00288fa4>] start_secondary+0x194/0x274
(XEN)    [<40010170>] 40010170
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 2:
(XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
(XEN) ****************************************

This is happening because init_one_irq_desc() will allocate memory
when initializing each per-CPU IRQ descriptor.

Avoid the problem by initializing the per-CPU IRQs while preparing the
CPU.

This also has the benefit of removing a BUG_ON() from the secondary CPU
boot code.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/include/asm/irq.h |  1 -
 xen/arch/arm/irq.c             | 35 +++++++++++++++++++++++++++-------
 xen/arch/arm/smpboot.c         |  2 --
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/include/asm/irq.h b/xen/arch/arm/include/asm/irq.h
index e45d57459899..245f49dcbac5 100644
--- a/xen/arch/arm/include/asm/irq.h
+++ b/xen/arch/arm/include/asm/irq.h
@@ -73,7 +73,6 @@ static inline bool is_lpi(unsigned int irq)
 bool is_assignable_irq(unsigned int irq);
 
 void init_IRQ(void);
-void init_secondary_IRQ(void);
 
 int route_irq_to_guest(struct domain *d, unsigned int virq,
                        unsigned int irq, const char *devname);
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index b761d90c4063..56bdcb95335d 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -17,6 +17,7 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/cpu.h>
 #include <xen/lib.h>
 #include <xen/spinlock.h>
 #include <xen/irq.h>
@@ -100,7 +101,7 @@ static int __init init_irq_data(void)
     return 0;
 }
 
-static int init_local_irq_data(void)
+static int init_local_irq_data(unsigned int cpu)
 {
     int irq;
 
@@ -108,7 +109,7 @@ static int init_local_irq_data(void)
 
     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
     {
-        struct irq_desc *desc = irq_to_desc(irq);
+        struct irq_desc *desc = &per_cpu(local_irq_desc, cpu)[irq];
         int rc = init_one_irq_desc(desc);
 
         if ( rc )
@@ -131,6 +132,29 @@ static int init_local_irq_data(void)
     return 0;
 }
 
+static int cpu_callback(struct notifier_block *nfb, unsigned long action,
+                        void *hcpu)
+{
+    unsigned long cpu = (unsigned long)hcpu;
+    int rc = 0;
+
+    switch ( action )
+    {
+    case CPU_UP_PREPARE:
+        rc = init_local_irq_data(cpu);
+        if ( rc )
+            printk(XENLOG_ERR "Unable to allocate local IRQ for CPU%lu\n",
+                   cpu);
+        break;
+    }
+
+    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
+}
+
+static struct notifier_block cpu_nfb = {
+    .notifier_call = cpu_callback,
+};
+
 void __init init_IRQ(void)
 {
     int irq;
@@ -140,13 +164,10 @@ void __init init_IRQ(void)
         local_irqs_type[irq] = IRQ_TYPE_INVALID;
     spin_unlock(&local_irqs_type_lock);
 
-    BUG_ON(init_local_irq_data() < 0);
+    BUG_ON(init_local_irq_data(smp_processor_id()) < 0);
     BUG_ON(init_irq_data() < 0);
-}
 
-void init_secondary_IRQ(void)
-{
-    BUG_ON(init_local_irq_data() < 0);
+    register_cpu_notifier(&cpu_nfb);
 }
 
 static inline struct irq_guest *irq_get_guest_info(struct irq_desc *desc)
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 9bb32a301a70..4888bcd78a5a 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -359,8 +359,6 @@ void start_secondary(void)
 
     gic_init_secondary_cpu();
 
-    init_secondary_IRQ();
-
     set_current(idle_vcpu[cpuid]);
 
     setup_cpu_sibling_map(cpuid);
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:46:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 09:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348750.574936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o137Z-0003rw-QU; Tue, 14 Jun 2022 09:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348750.574936; Tue, 14 Jun 2022 09:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o137Z-0003rp-Mb; Tue, 14 Jun 2022 09:46:01 +0000
Received: by outflank-mailman (input) for mailman id 348750;
 Tue, 14 Jun 2022 09:46:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o137Y-0003rj-TI
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 09:46:01 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb6e352c-ebc6-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 11:45:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM5PR0401MB2609.eurprd04.prod.outlook.com (2603:10a6:203:38::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Tue, 14 Jun
 2022 09:45:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 09:45:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb6e352c-ebc6-11ec-bd2c-47488cf2e6aa
Message-ID: <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
Date: Tue, 14 Jun 2022 11:45:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <YqbziQGizoNX7YFr@Air-de-Roger>
 <3d0d74d8-55a9-cdb6-0c5e-616ddd47bbc0@suse.com>
 <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqhXFKMlIvkQzVoT@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 14.06.2022 11:38, Roger Pau Monné wrote:
> On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
>> On 14.06.2022 10:32, Roger Pau Monné wrote:
>>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
>>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
>>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
>>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
>>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>>>>>>>> likely important to have all the output if the boot fails without
>>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>>>>>>>> other guests).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>>>>>>>
>>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>>>>>>>> Dom0's kernel messages?
>>>>>>>>>>>
>>>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work, so
>>>>>>>>>>> this request is something that came up internally.
>>>>>>>>>>>
>>>>>>>>>>> Didn't realize Xen output wasn't forced, since we already have rate
>>>>>>>>>>> limiting based on log levels I was assuming that non-ratelimited
>>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>>>>>>>>>> triggered) output shouldn't be rate limited either.
>>>>>>>>>>
>>>>>>>>>> Which would raise the question of why we have log levels for non-guest
>>>>>>>>>> messages.
>>>>>>>>>
>>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
>>>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
>>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
>>>>>>>>> since user might want to filter out DEBUG non-guest messages for
>>>>>>>>> example.
>>>>>>>>
>>>>>>>> It was me who was confused, because of the two log-everything variants
>>>>>>>> we have (console and serial). You're right that your change is unrelated
>>>>>>>> to log levels. However, when there are e.g. many warnings or when an
>>>>>>>> admin has lowered the log level, what you (would) do is effectively
>>>>>>>> force sync_console mode transiently (for a subset of messages, but
>>>>>>>> that's secondary, especially because the "forced" output would still
>>>>>>>> be waiting for earlier output to make it out).
>>>>>>>
>>>>>>> Right, it would have to wait for any previous output on the buffer to
>>>>>>> go out first.  In any case we can guarantee that no more output will
>>>>>>> be added to the buffer while Xen waits for it to be flushed.
>>>>>>>
>>>>>>> So for the hardware domain it might make sense to wait for the TX
>>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
>>>>>>> the hypercall.  That however could cause issues if guests manage to
>>>>>>> keep filling the buffer while the hardware domain is being preempted.
>>>>>>>
>>>>>>> Alternatively we could always reserve half of the buffer for the
>>>>>>> hardware domain, and allow it to be preempted while waiting for space
>>>>>>> (since it's guaranteed non hardware domains won't be able to steal the
>>>>>>> allocation from the hardware domain).
>>>>>>
>>>>>> Getting complicated it seems. I have to admit that I wonder whether we
>>>>>> wouldn't be better off leaving the current logic as is.
>>>>>
>>>>> Another possible solution (more like a band aid) is to increase the
>>>>> buffer size from 4 pages to 8 or 16.  That would likely allow to cope
>>>>> fine with the high throughput of boot messages.
>>>>
>>>> You mean the buffer whose size is controlled by serial_tx_buffer?
>>>
>>> Yes.
>>>
>>>> On
>>>> large systems one may want to simply make use of the command line
>>>> option then; I don't think the built-in default needs changing. Or
>>>> if so, then perhaps not statically at build time, but taking into
>>>> account system properties (like CPU count).
>>>
>>> So how about we use:
>>>
>>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
>>
>> That would _reduce_ size on small systems, wouldn't it? Originally
>> you were after increasing the default size. But if you had meant
>> max(), then I'd fear on very large systems this may grow a little
>> too large.
> 
> See previous followup about my mistake of using min() instead of
> max().
> 
> On a system with 512 CPUs that would be 512KB, I don't think that's a
> lot of memory, especially taking into account that a system with 512
> CPUs should have a matching amount of memory I would expect.
> 
> It's true however that I very much doubt we would fill a 512K buffer,
> so limiting to 64K might be a sensible starting point?

Yeah, 64k could be a value to compromise on. What total amount of
output did you observe that prompted this patch? Xen alone doesn't
even manage to fill 16k on most of my systems ...

>>> Maybe we should also take CPU frequency into account, but that seems
>>> too complex for the purpose.
>>
>> Why would frequency matter? Other aspects I could see mattering is
>> node count and maybe memory size.
> 
> Higher frequency likely means faster boot, and faster buffer fill,
> because the baudrate of the console is constant.

Hmm, yes. But remember there are many serializing actions. Bringing
up the APs, for example, is relatively slow _and_ doesn't produce a
lot of output by default. As to the baudrate: modern chipsets allow
for rates higher than 115200, so we may want to modernize our drivers
to know about such vendor-specific aspects.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 09:51:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 09:51:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348763.574947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o13DA-0005Ue-Ki; Tue, 14 Jun 2022 09:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348763.574947; Tue, 14 Jun 2022 09:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o13DA-0005UX-I4; Tue, 14 Jun 2022 09:51:48 +0000
Received: by outflank-mailman (input) for mailman id 348763;
 Tue, 14 Jun 2022 09:51:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mIWA=WV=redhat.com=berrange@srs-se1.protection.inumbo.net>)
 id 1o13D9-0005UR-6U
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 09:51:47 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 996bea62-ebc7-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 11:51:45 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-426-pSbqAgPoOEKx6fcCh6x3dg-1; Tue, 14 Jun 2022 05:51:43 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A45431C0513E;
 Tue, 14 Jun 2022 09:51:42 +0000 (UTC)
Received: from redhat.com (unknown [10.33.36.137])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9568D1415103;
 Tue, 14 Jun 2022 09:51:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 996bea62-ebc7-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 10:51:38 +0100
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>, qemu-devel@nongnu.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Message-ID: <YqhaKi8K2EATpAlN@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20220613113655.3693872-1-kraxel@redhat.com>
 <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
 <20220614094038.g2g6lzeviypcnqrb@sirius.home.kraxel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220614094038.g2g6lzeviypcnqrb@sirius.home.kraxel.org>
User-Agent: Mutt/2.2.1 (2022-02-19)

On Tue, Jun 14, 2022 at 11:40:38AM +0200, Gerd Hoffmann wrote:
> On Mon, Jun 13, 2022 at 08:52:21AM -0700, Richard Henderson wrote:
> > On 6/13/22 04:36, Gerd Hoffmann wrote:
> > > The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:
> > > 
> > >    Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 21:13:27 -0700)
> > > 
> > > are available in the Git repository at:
> > > 
> > >    git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request
> > > 
> > > for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:
> > > 
> > >    ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)
> > > 
> > > ----------------------------------------------------------------
> > > usb: add CanoKey device, fixes for ehci + redir
> > > ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
> > > virtio-gpu: scanout flush fix
> > 
> > This doesn't even configure:
> > 
> > ../src/ui/keymaps/meson.build:55:4: ERROR: File ar does not exist.
> 
> Hmm, build worked here and CI passed too.
> 
> I think this is one of those cases where the build directory must be
> deleted because one subdirectory is replaced by a compatibility
> symlink.

Except 'configure' deals with that, as it explicitly rm -rf's the
symlink target:

symlink() {
  rm -rf "$2"
  mkdir -p "$(dirname "$2")"
  ln -s "$1" "$2"
}


So I'm still pretty confused as to what's going wrong here.


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:03:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 11:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348781.574957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o14Jo-0004mQ-MD; Tue, 14 Jun 2022 11:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348781.574957; Tue, 14 Jun 2022 11:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o14Jo-0004mJ-JM; Tue, 14 Jun 2022 11:02:44 +0000
Received: by outflank-mailman (input) for mailman id 348781;
 Tue, 14 Jun 2022 11:02:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5YSJ=WV=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o14Jn-0004mD-TP
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 11:02:43 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 824bd0ca-ebd1-11ec-8901-93a377f238d6;
 Tue, 14 Jun 2022 13:02:41 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8C13A1424;
 Tue, 14 Jun 2022 04:02:40 -0700 (PDT)
Received: from [10.57.11.30] (unknown [10.57.11.30])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 30E6F3F73B;
 Tue, 14 Jun 2022 04:02:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 824bd0ca-ebd1-11ec-8901-93a377f238d6
Message-ID: <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com>
Date: Tue, 14 Jun 2022 13:02:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps
 while preparing the CPU
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094119.94720-1-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220614094119.94720-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 14.06.2022 11:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
> xmalloc()" extended the checks in _xmalloc() to catch any use of the
> helpers from context with interrupts disabled.
> 
> Unfortunately, the rule is not followed when allocating the CPU
> sibling/core maps.
> 
> (XEN) Xen call trace:
> (XEN)    [<00238a5c>] _xmalloc+0xfc/0x314 (PC)
> (XEN)    [<00000000>] 00000000 (LR)
> (XEN)    [<00238c8c>] _xzalloc+0x18/0x4c
> (XEN)    [<00288cb4>] smpboot.c#setup_cpu_sibling_map+0x38/0x138
> (XEN)    [<00289024>] start_secondary+0x1b4/0x270
> (XEN)    [<40010170>] 40010170
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 2:
> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
> (XEN) ****************************************
> 
> This is happening because zalloc_cpumask_var() may allocate memory
> if NR_CPUS is > 2 * sizeof(unsigned long).
> 
> Avoid the problem by allocate the per-CPU IRQs while preparing the
> CPU.
Shouldn't this be "by allocating the CPU sibling/core maps while ..."
to reflect the commit title and to distinguish between this change and the IRQ one?

> 
> This also has the benefit to remove a panic() in the secondary CPU
> code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/smpboot.c | 25 ++++++++++++++++++++-----
>  1 file changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 4888bcd78a5a..2b0c92cd369b 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -79,15 +79,17 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
>  static bool __read_mostly opt_hmp_unsafe = false;
>  boolean_param("hmp-unsafe", opt_hmp_unsafe);
>  
> -static void setup_cpu_sibling_map(int cpu)
> +static int setup_cpu_sibling_map(int cpu)
>  {
>      if ( !zalloc_cpumask_var(&per_cpu(cpu_sibling_mask, cpu)) ||
>           !zalloc_cpumask_var(&per_cpu(cpu_core_mask, cpu)) )
> -        panic("No memory for CPU sibling/core maps\n");
> +        return -ENOMEM;
>  
>      /* A CPU is a sibling with itself and is always on its own core. */
>      cpumask_set_cpu(cpu, per_cpu(cpu_sibling_mask, cpu));
>      cpumask_set_cpu(cpu, per_cpu(cpu_core_mask, cpu));
> +
> +    return 0;
>  }
>  
>  static void remove_cpu_sibling_map(int cpu)
> @@ -292,9 +294,14 @@ smp_get_max_cpus (void)
>  void __init
>  smp_prepare_cpus(void)
>  {
> +    int rc;
Here you are leaving rc uninitialized (which is ok) but ...

> +
>      cpumask_copy(&cpu_present_map, &cpu_possible_map);
>  
> -    setup_cpu_sibling_map(0);
> +    rc = setup_cpu_sibling_map(0);
> +    if ( rc )
> +        panic("Unable to allocate CPU sibling/core maps\n");
> +
>  }
>  
>  /* Boot the current CPU */
> @@ -361,8 +368,6 @@ void start_secondary(void)
>  
>      set_current(idle_vcpu[cpuid]);
>  
> -    setup_cpu_sibling_map(cpuid);
> -
>      /* Run local notifiers */
>      notify_cpu_starting(cpuid);
>      /*
> @@ -530,9 +535,19 @@ static int cpu_smpboot_callback(struct notifier_block *nfb,
>                                  void *hcpu)
>  {
>      unsigned int cpu = (unsigned long)hcpu;
> +    unsigned int rc = 0;
... here you are setting rc to 0 even though it will be reassigned.
Furthermore, if rc is used only in the case of CPU_UP_PREPARE, why not move the definition there?

>  
>      switch ( action )
>      {
> +    case CPU_UP_PREPARE:
> +        rc = setup_cpu_sibling_map(cpu);
> +        if ( rc )
> +            printk(XENLOG_ERR
> +                   "Unable to allocate CPU sibling/core map  for CPU%u\n",
Too many spaces between 'map' and 'for'.

> +                   cpu);
> +
> +        break;
> +
>      case CPU_DEAD:
>          remove_cpu_sibling_map(cpu);
>          break;

Cheers,
Michal
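The allocation condition quoted in the commit message ("zalloc_cpumask_var() may allocate memory if NR_CPUS is > 2 * sizeof(unsigned long)") can be sketched in userspace. This is an illustration only, not Xen's code: the helper name cpumask_needs_allocation() is made up for this sketch, and in Xen's cpumask.h the threshold is expressed in bits (2 * BITS_PER_LONG) rather than bytes.

```c
/* Userspace sketch, NOT Xen's actual code: a cpumask_var_t can live
 * inline (no allocation) while NR_CPUS fits in a couple of unsigned
 * longs; beyond that it becomes a pointer and zalloc_cpumask_var()
 * must call the allocator.  That allocation is what the
 * ASSERT_ALLOC_CONTEXT check rejects when IRQs are disabled on a
 * secondary CPU. */

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

/* Hypothetical helper for illustration: does a mask of nr_cpus bits
 * exceed the inline capacity of two unsigned longs? */
static int cpumask_needs_allocation(int nr_cpus)
{
    return nr_cpus > 2 * BITS_PER_LONG;
}
```

On a 64-bit build the cutover is at 128 CPUs, which is why the secondary-CPU path only trips the assertion on configurations with a large NR_CPUS.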


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:06:09 2022
Message-ID: <998ad0b5-1835-025c-9329-e80695247033@arm.com>
Date: Tue, 14 Jun 2022 13:05:45 +0200
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094157.95631-1-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220614094157.95631-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 14.06.2022 11:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
> xmalloc()" extended the checks in _xmalloc() to catch any use of the
> helpers from context with interrupts disabled.
> 
> Unfortunately, the rule is not followed when initializing the per-CPU
> IRQs:
> 
> (XEN) Xen call trace:
> (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
> (XEN)    [<00000000>] 00000000 (LR)
> (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
> (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
> (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
> (XEN)    [<00288fa4>] start_secondary+0x194/0x274
> (XEN)    [<40010170>] 40010170
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 2:
> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
> (XEN) ****************************************
> 
> This is happening because zalloc_cpumask_var() may allocate memory
> if NR_CPUS is > 2 * sizeof(unsigned long).
> 
> Avoid the problem by allocate the per-CPU IRQs while preparing the
> CPU.
Shouldn't this be "by initializing the per-CPU IRQs while ..."?
Either way this text is the same like in the previous patch so I think this is not correct.

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@arm.com>
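For context, the assertion quoted in both commit messages can be restated as a simple predicate. This is a standalone model, not Xen's source; the function name alloc_context_ok() is made up for illustration.

```c
#include <stdbool.h>

/* Model of the check behind the panic at common/xmalloc_tlsf.c:601:
 * allocating is allowed only outside IRQ context, and only with
 * interrupts enabled unless at most one CPU is online (early boot). */
static bool alloc_context_ok(bool in_irq, bool irqs_enabled,
                             unsigned int num_online_cpus)
{
    return !in_irq && (irqs_enabled || num_online_cpus <= 1);
}
```

start_secondary() runs with interrupts disabled while several CPUs are already online, so the predicate fails there; a CPU_UP_PREPARE notifier runs on the boot CPU with interrupts enabled, which is why moving the allocation there avoids the panic.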



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:08:35 2022
Message-ID: <3ed8e44f-293d-958f-c144-466e16d034e2@xen.org>
Date: Tue, 14 Jun 2022 12:08:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps
 while preparing the CPU
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094119.94720-1-julien@xen.org>
 <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 14/06/2022 12:02, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 14.06.2022 11:41, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>>
>> Unfortunately, the rule is not followed when allocating the CPU
>> sibling/core maps.
>>
>> (XEN) Xen call trace:
>> (XEN)    [<00238a5c>] _xmalloc+0xfc/0x314 (PC)
>> (XEN)    [<00000000>] 00000000 (LR)
>> (XEN)    [<00238c8c>] _xzalloc+0x18/0x4c
>> (XEN)    [<00288cb4>] smpboot.c#setup_cpu_sibling_map+0x38/0x138
>> (XEN)    [<00289024>] start_secondary+0x1b4/0x270
>> (XEN)    [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>>
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
>>
>> Avoid the problem by allocate the per-CPU IRQs while preparing the
>> CPU.
> Shouldn't this be "by allocating the CPU sibling/core maps while ..."
> to reflect the commit title and to distinguish between this change and the IRQ one?

Yes. I will update it.

[...]

>>   static void remove_cpu_sibling_map(int cpu)
>> @@ -292,9 +294,14 @@ smp_get_max_cpus (void)
>>   void __init
>>   smp_prepare_cpus(void)
>>   {
>> +    int rc;
> Here you are leaving rc uninitialized (which is ok) but ...
> 
>> +
>>       cpumask_copy(&cpu_present_map, &cpu_possible_map);
>>   
>> -    setup_cpu_sibling_map(0);
>> +    rc = setup_cpu_sibling_map(0);
>> +    if ( rc )
>> +        panic("Unable to allocate CPU sibling/core maps\n");
>> +
>>   }
>>   
>>   /* Boot the current CPU */
>> @@ -361,8 +368,6 @@ void start_secondary(void)
>>   
>>       set_current(idle_vcpu[cpuid]);
>>   
>> -    setup_cpu_sibling_map(cpuid);
>> -
>>       /* Run local notifiers */
>>       notify_cpu_starting(cpuid);
>>       /*
>> @@ -530,9 +535,19 @@ static int cpu_smpboot_callback(struct notifier_block *nfb,
>>                                   void *hcpu)
>>   {
>>       unsigned int cpu = (unsigned long)hcpu;
>> +    unsigned int rc = 0;
> ... here you are setting rc to 0 even though it will be reassigned.
> Furthermore, if rc is used only in the case of CPU_UP_PREPARE, why not move the definition there?

Because I forgot to replace "return NOTIFY_DONE;" with :

return !rc ? NOTIFY_DONE : notifier_from_errno(rc);

In this case, we would need to initialize rc to 0 so it doesn't get used 
uninitialized.

Cheers,

-- 
Julien Grall
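The fixed-up callback Julien describes would look roughly like the sketch below. The NOTIFY_* values and notifier_from_errno() are simplified stand-ins modelled on the Linux-style notifier convention (the real definitions live in Xen's headers and may differ in detail), and the failing setup_cpu_sibling_map() stub is purely hypothetical.

```c
#include <errno.h>

/* Illustrative constants and helper modelled on the notifier-chain
 * convention; not copied from Xen. */
#define NOTIFY_DONE      0x0000
#define NOTIFY_STOP_MASK 0x8000
#define NOTIFY_BAD       (NOTIFY_STOP_MASK | 0x0002)

static int notifier_from_errno(int err)
{
    /* Encode a (negative) errno into a "stop" notifier return value. */
    return err ? (NOTIFY_STOP_MASK | (NOTIFY_BAD - err)) : NOTIFY_DONE;
}

/* Hypothetical stub standing in for the real allocation: pretend the
 * sibling/core map allocation fails for CPU 3. */
static int setup_cpu_sibling_map(unsigned int cpu)
{
    return cpu == 3 ? -ENOMEM : 0;
}

/* The pattern discussed above: propagate an allocation failure to the
 * notifier core instead of unconditionally returning NOTIFY_DONE. */
static int cpu_smpboot_callback(unsigned int cpu)
{
    int rc = setup_cpu_sibling_map(cpu);

    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
}
```

In the real callback, where only the CPU_UP_PREPARE case assigns rc, the `rc = 0` initializer is what keeps the common return well-defined for the other actions.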


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:10:42 2022
Message-ID: <c1642dbd-2a0e-10eb-0842-32667b574a73@xen.org>
Date: Tue, 14 Jun 2022 12:10:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094157.95631-1-julien@xen.org>
 <998ad0b5-1835-025c-9329-e80695247033@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <998ad0b5-1835-025c-9329-e80695247033@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 14/06/2022 12:05, Michal Orzel wrote:
> Hi Julien,
> 
> On 14.06.2022 11:41, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>>
>> Unfortunately, the rule is not followed when initializing the per-CPU
>> IRQs:
>>
>> (XEN) Xen call trace:
>> (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
>> (XEN)    [<00000000>] 00000000 (LR)
>> (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
>> (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
>> (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
>> (XEN)    [<00288fa4>] start_secondary+0x194/0x274
>> (XEN)    [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>>
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
>>
>> Avoid the problem by allocate the per-CPU IRQs while preparing the
>> CPU.
> Shouldn't this be "by initializing the per-CPU IRQs while ..."?

I am fine with using "initializing" rather than "allocating".

> Either way this text is the same like in the previous patch so I think this is not correct.

I can't quite parse this.

> Other than that:
> Reviewed-by: Michal Orzel <michal.orzel@arm.com>

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:13:34 2022
Message-ID: <55f45337-2da1-fe8f-b7a5-272577ed4d50@arm.com>
Date: Tue, 14 Jun 2022 13:13:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps
 while preparing the CPU
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094119.94720-1-julien@xen.org>
 <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com>
 <3ed8e44f-293d-958f-c144-466e16d034e2@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <3ed8e44f-293d-958f-c144-466e16d034e2@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit



On 14.06.2022 13:08, Julien Grall wrote:
>>> +    unsigned int rc = 0;
>> ... here you are setting rc to 0 even though it will be reassigned.
>> Furthermore, if rc is used only in the case of CPU_UP_PREPARE, why not move the definition there?
> 
> Because I forgot to replace "return NOTIFY_DONE;" with :
> 
> return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
That is what I thought.
With these fixes you can add my Rb.

> 
> In this case, we would need to initialize rc to 0 so it doesn't get used uninitialized.
> 
> Cheers,
> 

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:25:48 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171157-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171157: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 11:25:38 +0000

flight 171157 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171157/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 171151

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 171151
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171151
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171151
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171151
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171151
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171151
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171151
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171151
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171151
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171151
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171151
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171151
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171151
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171157  2022-06-14 01:53:24 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 11:53:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 11:53:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348830.575027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o156w-0004yA-Tv; Tue, 14 Jun 2022 11:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348830.575027; Tue, 14 Jun 2022 11:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o156w-0004y3-Qg; Tue, 14 Jun 2022 11:53:30 +0000
Received: by outflank-mailman (input) for mailman id 348830;
 Tue, 14 Jun 2022 11:20:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o14aY-0000T9-8J
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 11:20:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ee2672d3-ebd3-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 13:20:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 223061516;
 Tue, 14 Jun 2022 04:20:00 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 47D093F73B;
 Tue, 14 Jun 2022 04:19:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee2672d3-ebd3-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 12:19:29 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
Message-ID: <YqhuwQjmZyOVSiLI@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608142723.103523089@infradead.org>

On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> Hi All! (omg so many)

Hi Peter,

Sorry for the delay; my plate has also been rather full recently. I'm beginning
to page this in now.

> These here few patches mostly clear out the utter mess that is cpuidle vs rcuidle.
> 
> At the end of the ride there's only 2 real RCU_NONIDLE() users left
> 
>   arch/arm64/kernel/suspend.c:            RCU_NONIDLE(__cpu_suspend_exit());
>   drivers/perf/arm_pmu.c:                 RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));

The latter of these is necessary because apparently PM notifiers are called
with RCU not watching. Is that still the case today (or at the end of this
series)? If so, that feels like fertile land for more issues (yaey...). If not,
we should be able to drop this.

I can go dig into that some more.

>   kernel/cfi.c:   RCU_NONIDLE({
> 
> (the CFI one is likely dead in the kCFI rewrite) and there's only a hand full
> of trace_.*_rcuidle() left:
> 
>   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
>   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
>   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
>   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
>   kernel/trace/trace_preemptirq.c:                trace_preempt_enable_rcuidle(a0, a1);
>   kernel/trace/trace_preemptirq.c:                trace_preempt_disable_rcuidle(a0, a1);
> 
> All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.

I think those are also unused on arm64?

If not, I can go attack that.

> I've touched a _lot_ of code that I can't test and likely broken some of it :/
> In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
> the absolute 'winner'.
> 
> I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
> GENERIC_ENTRY.

Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
(refactoring both arm64 and the generic portion to be more amenable to each
other), but we can certainly move closer to that for the bits that matter here.

Maybe we want a STRICT_ENTRY option, selectable regardless of GENERIC_ENTRY, to
get rid of all the deprecated stuff and make that easier.

> I've also got a note that says ARM64 can probably do a WFE based
> idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.

Possibly; I'm not sure how much of a win that'll be given that by default we'll
have a ~10KHz WFE wakeup from the timer, but we could take a peek.

Thanks,
Mark.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:14:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:14:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348855.575038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15RH-0007ex-Oi; Tue, 14 Jun 2022 12:14:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348855.575038; Tue, 14 Jun 2022 12:14:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15RH-0007eq-KK; Tue, 14 Jun 2022 12:14:31 +0000
Received: by outflank-mailman (input) for mailman id 348855;
 Tue, 14 Jun 2022 12:14:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15RF-0007eM-RN
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:14:30 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 896031b8-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:14:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-487--g9QEbfoMZ26_nSdNrXxLQ-1; Tue, 14 Jun 2022 08:14:26 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4282A3C025CE;
 Tue, 14 Jun 2022 12:14:25 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id CE0072026D64;
 Tue, 14 Jun 2022 12:14:24 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 37DAE1800084; Tue, 14 Jun 2022 14:14:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 896031b8-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208867;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jTV1MdFQUjOD6SaouLkpusNhrO6aKybutsMy/w9e6iI=;
	b=FdAToRcC51j+sNtlzjkIZTuluoexWhd1sHwUtvsovLsaRV0AdA5bvfAsztQcoYnnwVsXlU
	APJeBnolannAWJ4JeMfrVBUwqzLT/n9FRrhjWdHlIk9AVcesXexNDV1zGojkywwCkLjOKn
	RlQfznwAh8HITgzOfnDj02B0ch09EfM=
X-MC-Unique: -g9QEbfoMZ26_nSdNrXxLQ-1
Date: Tue, 14 Jun 2022 14:14:23 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>, qemu-devel@nongnu.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Message-ID: <20220614121423.2hykgv7dn2qfo2dz@sirius.home.kraxel.org>
References: <20220613113655.3693872-1-kraxel@redhat.com>
 <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
 <20220614094038.g2g6lzeviypcnqrb@sirius.home.kraxel.org>
 <YqhaKi8K2EATpAlN@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqhaKi8K2EATpAlN@redhat.com>
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

> > Hmm, build worked here and CI passed too.
> > 
> > I think this is one of those cases where the build directory must be
> > deleted because one subdirectory is replaced by a compatibility
> > symlink.
> 
> Except 'configure' deals with that, as it explicitly rm -rf's the
> symlink target:
> 
> symlink() {
>   rm -rf "$2"
>   mkdir -p "$(dirname "$2")"
>   ln -s "$1" "$2"
> }
> 
> so I'm pretty confused as to what's still going wrong here

What's not working is 'git rebase -x ./make.sh master queue/kraxel' (where
make.sh is a script that effectively does 'make -C build/$name' for multiple
build trees with different configurations).

'git status' lists ui/keymaps/* as deleted.
'git reset --hard' fixes it.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348864.575054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T3-0008IB-Df; Tue, 14 Jun 2022 12:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348864.575054; Tue, 14 Jun 2022 12:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T3-0008Gt-87; Tue, 14 Jun 2022 12:16:21 +0000
Received: by outflank-mailman (input) for mailman id 348864;
 Tue, 14 Jun 2022 12:16:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T1-0008Ek-Ul
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:20 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca8d1326-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:18 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-577-zNJqENUwOG2iqDFpqj-GJA-1; Tue, 14 Jun 2022 08:16:13 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id ADF9E1C0514E;
 Tue, 14 Jun 2022 12:16:12 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 4089418EA5;
 Tue, 14 Jun 2022 12:16:12 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id EEC8E18000B3; Tue, 14 Jun 2022 14:16:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca8d1326-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208976;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h/Rq0T4qHdj5P4fsrjuVBaKf49W5WppINvxKA/pkbYI=;
	b=VRCtOH04hwZfrb/TggkWASGePToizBoas/O3n1C94GglalKMjOBx0+GEEZaE6MRTlLTbYm
	3abrJpKreCgSMZa1Bu5FKVZSCBEaO7WYBRKmmO+RVhy+mFjVZsPO/nz5eWjB8sbKKFTB4h
	+yfIHHSj/7OCAQzUxINUz86SgIBVrLo=
X-MC-Unique: zNJqENUwOG2iqDFpqj-GJA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 01/15] ui/gtk-gl-area: implement GL context destruction
Date: Tue, 14 Jun 2022 14:15:56 +0200
Message-Id: <20220614121610.508356-2-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5

From: Volker Rümelin <vr_qemu@t-online.de>

The counterpart function for gd_gl_area_create_context() is
currently empty. Implement the gd_gl_area_destroy_context()
function to avoid GL context leaks.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-1-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 8 +++++++-
 ui/trace-events  | 1 +
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index fc5a082eb846..0e20ea031d34 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -201,7 +201,13 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
 
 void gd_gl_area_destroy_context(DisplayGLCtx *dgc, QEMUGLContext ctx)
 {
-    /* FIXME */
+    GdkGLContext *current_ctx = gdk_gl_context_get_current();
+
+    trace_gd_gl_area_destroy_context(ctx, current_ctx);
+    if (ctx == current_ctx) {
+        gdk_gl_context_clear_current();
+    }
+    g_clear_object(&ctx);
 }
 
 void gd_gl_area_scanout_texture(DisplayChangeListener *dcl,
diff --git a/ui/trace-events b/ui/trace-events
index f78b5e66061f..1040ba0f88c7 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
 # vnc-auth-vencrypt.c
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348863.575049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T3-0008F9-4r; Tue, 14 Jun 2022 12:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348863.575049; Tue, 14 Jun 2022 12:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T2-0008F2-Vu; Tue, 14 Jun 2022 12:16:20 +0000
Received: by outflank-mailman (input) for mailman id 348863;
 Tue, 14 Jun 2022 12:16:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T1-0008Ek-4g
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:19 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca390d91-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:17 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-597-L15e5m9TNUyMxihfL8XLtA-1; Tue, 14 Jun 2022 08:16:13 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 80F2D1C0514D;
 Tue, 14 Jun 2022 12:16:12 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 44AB1492C3B;
 Tue, 14 Jun 2022 12:16:12 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id E1A291800084; Tue, 14 Jun 2022 14:16:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca390d91-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208976;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=6LLN3PRRdurbe5WDvmvbhPsA1qOqsjYEdVF0UsvlhBQ=;
	b=ONZhyRikfdnThs445SmVSytwMaLt9vjU0OW37LoO0Ir0qzoWP4cFwLP0RDTv7VzdBm+aw+
	9tVQeSq0sjK7MAQDtRYysyt+wq8rimLzgJ92uVv/MWYGFmTp+fqG3JL67e0awAiDdbH2lD
	tVPob/Ffoot1Jt/9DSqBOVOSQTCbJqY=
X-MC-Unique: L15e5m9TNUyMxihfL8XLtA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 00/15] Kraxel 20220614 patches
Date: Tue, 14 Jun 2022 14:15:55 +0200
Message-Id: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

The following changes since commit debd0753663bc89c86f5462a53268f2e3f680f60:

  Merge tag 'pull-testing-next-140622-1' of https://github.com/stsquad/qemu into staging (2022-06-13 21:10:57 -0700)

are available in the Git repository at:

  git://git.kraxel.org/qemu tags/kraxel-20220614-pull-request

for you to fetch changes up to b95b56311a0890da0c9f7fc624529c3d7f8dbce0:

  virtio-gpu: Respect UI refresh rate for EDID (2022-06-14 10:34:37 +0200)

----------------------------------------------------------------
usb: add CanoKey device, fixes for ehci + redir
ui: fixes for gtk and cocoa, rework refresh rate
virtio-gpu: scanout flush fix

----------------------------------------------------------------

Akihiko Odaki (4):
  ui/cocoa: Fix poweroff request code
  ui/console: Do not return a value with ui_info
  ui: Deliver refresh rate via QemuUIInfo
  virtio-gpu: Respect UI refresh rate for EDID

Arnout Engelen (1):
  hw/usb/hcd-ehci: fix writeback order

Dongwon Kim (1):
  virtio-gpu: update done only on the scanout associated with rect

Hongren (Zenithal) Zheng (6):
  hw/usb: Add CanoKey Implementation
  hw/usb/canokey: Add trace events
  meson: Add CanoKey
  docs: Add CanoKey documentation
  docs/system/devices/usb: Add CanoKey to USB devices examples
  MAINTAINERS: add myself as CanoKey maintainer

Joelle van Dyne (1):
  usbredir: avoid queuing hello packet on snapshot restore

Volker Rümelin (2):
  ui/gtk-gl-area: implement GL context destruction
  ui/gtk-gl-area: create the requested GL context version

 meson_options.txt                |   2 +
 hw/usb/canokey.h                 |  69 +++++++
 include/hw/virtio/virtio-gpu.h   |   1 +
 include/ui/console.h             |   4 +-
 include/ui/gtk.h                 |   2 +-
 hw/display/virtio-gpu-base.c     |   7 +-
 hw/display/virtio-gpu.c          |   4 +
 hw/display/virtio-vga.c          |   5 +-
 hw/display/xenfb.c               |  14 +-
 hw/usb/canokey.c                 | 313 +++++++++++++++++++++++++++++++
 hw/usb/hcd-ehci.c                |   5 +-
 hw/usb/redirect.c                |   3 +-
 hw/vfio/display.c                |   8 +-
 ui/console.c                     |   6 -
 ui/gtk-egl.c                     |   4 +-
 ui/gtk-gl-area.c                 |  42 ++++-
 ui/gtk.c                         |  45 +++--
 MAINTAINERS                      |   8 +
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/canokey.rst  | 168 +++++++++++++++++
 docs/system/devices/usb.rst      |   4 +
 hw/usb/Kconfig                   |   5 +
 hw/usb/meson.build               |   5 +
 hw/usb/trace-events              |  16 ++
 meson.build                      |   6 +
 scripts/meson-buildoptions.sh    |   3 +
 ui/cocoa.m                       |   6 +-
 ui/trace-events                  |   2 +
 28 files changed, 707 insertions(+), 51 deletions(-)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c
 create mode 100644 docs/system/devices/canokey.rst

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348865.575071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T4-0000Jy-NA; Tue, 14 Jun 2022 12:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348865.575071; Tue, 14 Jun 2022 12:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T4-0000It-I3; Tue, 14 Jun 2022 12:16:22 +0000
Received: by outflank-mailman (input) for mailman id 348865;
 Tue, 14 Jun 2022 12:16:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T2-0008Ek-Tv
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:20 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb66a6fa-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:19 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-650-pVihZslYPXmGc_VmJgPZPg-1; Tue, 14 Jun 2022 08:16:14 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 417BF85A58E;
 Tue, 14 Jun 2022 12:16:14 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0829E400F3FF;
 Tue, 14 Jun 2022 12:16:14 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 14D011800395; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb66a6fa-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208978;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qgE0u/psF+5HbFaMEFEBF+1+y1Gn5UXWEs7HICOeN3Q=;
	b=ioqB0tc4u6bdrtzSqbNjGjgG6E+z6UAYTY0C0CnUoyry+J2n3rnG0n4G8B3xRoKtnANif3
	YXETPnTTAzlwdZ87ikgU14unKr+R+xeYVj7UYrvgphMTiyjbadJ5YHW+3xAqzHoe08NNru
	PB85iVBUZLNE7eF9ppzXvSaUngxRHQI=
X-MC-Unique: pVihZslYPXmGc_VmJgPZPg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 03/15] ui/cocoa: Fix poweroff request code
Date: Tue, 14 Jun 2022 14:15:58 +0200
Message-Id: <20220614121610.508356-4-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220529082508.89097-1-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/cocoa.m | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/ui/cocoa.m b/ui/cocoa.m
index 09a62817f2a9..84c84e98fc5e 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -35,6 +35,7 @@
 #include "ui/kbd-state.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/runstate.h"
+#include "sysemu/runstate-action.h"
 #include "sysemu/cpu-throttle.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block.h"
@@ -1290,7 +1291,10 @@ - (void)applicationWillTerminate:(NSNotification *)aNotification
 {
     COCOA_DEBUG("QemuCocoaAppController: applicationWillTerminate\n");
 
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    with_iothread_lock(^{
+        shutdown_action = SHUTDOWN_ACTION_POWEROFF;
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_UI);
+    });
 
     /*
      * Sleep here, because returning will cause OSX to kill us
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348866.575082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T5-0000bL-WC; Tue, 14 Jun 2022 12:16:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348866.575082; Tue, 14 Jun 2022 12:16:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T5-0000Zv-SQ; Tue, 14 Jun 2022 12:16:23 +0000
Received: by outflank-mailman (input) for mailman id 348866;
 Tue, 14 Jun 2022 12:16:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T4-0008Ek-0H
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:22 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cc47154d-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:20 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-651-wv2AtMAeNLyZQMhe_ogqMQ-1; Tue, 14 Jun 2022 08:16:14 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3C4E4101E9B9;
 Tue, 14 Jun 2022 12:16:14 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C0FD140CF8E5;
 Tue, 14 Jun 2022 12:16:13 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 08BE21800394; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc47154d-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208978;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wx3J1U/WRMVytLzUKnVNwgCHbbzsbHu+DjsV7mqclmI=;
	b=OirqH9wYqTGhJQiCjAbwtwaUkjAo8xKxw8kPD1mzrqlfNMAydjysS+HWU3jO+efIAvFfEz
	/DySSivZ6/TK8G0n9sftWtfUp+MH3mBrm+9czpuwyt4nda0M/aUVizH3Lq5ckVtD6j8fW2
	YVGtXTZWmtG5n63SXsNVI1T5MhejAzk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208979;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wx3J1U/WRMVytLzUKnVNwgCHbbzsbHu+DjsV7mqclmI=;
	b=P9m/I5ISyZXS9xCLwC6D2Z0CeqwpX22GL6IEkIxouMY7AtlBvSd0H0NA+wdykE3pNrav6W
	kKkC7+d6U4duaFBKusweOtuf9g1SM45bm3soviRWn4T1IPTGetJd14HhXEOYLX0LyX/JVN
	Cvdz9zlYuEfySweP/9NY7aY7btTvkE4=
X-MC-Unique: wv2AtMAeNLyZQMhe_ogqMQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Volker=20R=C3=BCmelin?= <vr_qemu@t-online.de>
Subject: [PULL 02/15] ui/gtk-gl-area: create the requested GL context version
Date: Tue, 14 Jun 2022 14:15:57 +0200
Message-Id: <20220614121610.508356-3-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.11.54.1

From: Volker Rümelin <vr_qemu@t-online.de>

Since about 2018 virglrenderer (commit fa835b0f88 "vrend: don't
hardcode context version") tries to open the highest available GL
context version. This is done by creating the known GL context
versions from the highest to the lowest until (*create_gl_context)
returns a context != NULL.
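[Editor's note: the probing loop described above can be sketched as follows. This is a hypothetical illustration of the technique, not virglrenderer's actual code; the function and type names are invented for this sketch.]

```c
/* Hypothetical sketch of the probing described above: try known GL
 * versions from highest to lowest until the platform callback
 * returns a context. Names are illustrative, not virglrenderer's. */
#include <stddef.h>

struct gl_version { int major, minor; };

typedef void *(*create_gl_context_fn)(int major, int minor);

static void *probe_highest_context(create_gl_context_fn create)
{
    static const struct gl_version known[] = {
        {4, 6}, {4, 5}, {4, 4}, {4, 3}, {4, 2}, {4, 1}, {4, 0},
        {3, 3}, {3, 2}, {3, 1}, {3, 0},
    };

    for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); i++) {
        /* Stops at the first version the driver accepts, so a driver
         * that silently falls back to a lower version (instead of
         * returning NULL) defeats the search -- the bug fixed here. */
        void *ctx = create(known[i].major, known[i].minor);
        if (ctx != NULL) {
            return ctx;
        }
    }
    return NULL;
}
```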

This does not work properly with
the current QEMU gd_gl_area_create_context() function, because
gdk_gl_context_realize() on Wayland creates a version 3.0 legacy
context if the requested GL context version can't be created.

In order for virglrenderer to find the highest available GL
context version, return NULL if the created context version is
lower than the requested version.

This fixes the following error:
QEMU started with -device virtio-vga-gl -display gtk,gl=on.
Under Wayland, the guest window remains black and the following
information can be seen on the host.

gl_version 30 - compat profile
(qemu:5978): Gdk-WARNING **: 16:19:01.533:
  gdk_gl_context_set_required_version
  - GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.537:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.

(qemu:5978): Gdk-WARNING **: 16:19:01.554:
  gdk_gl_context_set_required_version -
  GL context versions less than 3.2 are not supported.
vrend_renderer_fill_caps: Entering with stale GL error: 1282

To reproduce this error, the host needs an OpenGL driver that doesn't
fully implement the latest OpenGL extensions. One example is the
Intel i965 driver on a Haswell processor.

Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20220605085131.7711-2-vr_qemu@t-online.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 ui/gtk-gl-area.c | 31 ++++++++++++++++++++++++++++++-
 ui/trace-events  |  1 +
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 0e20ea031d34..2e0129c28cd4 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -170,6 +170,23 @@ void gd_gl_area_switch(DisplayChangeListener *dcl,
     }
 }
 
+static int gd_cmp_gl_context_version(int major, int minor, QEMUGLParams *params)
+{
+    if (major > params->major_ver) {
+        return 1;
+    }
+    if (major < params->major_ver) {
+        return -1;
+    }
+    if (minor > params->minor_ver) {
+        return 1;
+    }
+    if (minor < params->minor_ver) {
+        return -1;
+    }
+    return 0;
+}
+
 QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
                                         QEMUGLParams *params)
 {
@@ -177,8 +194,8 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
     GdkWindow *window;
     GdkGLContext *ctx;
     GError *err = NULL;
+    int major, minor;
 
-    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
     window = gtk_widget_get_window(vc->gfx.drawing_area);
     ctx = gdk_window_create_gl_context(window, &err);
     if (err) {
@@ -196,6 +213,18 @@ QEMUGLContext gd_gl_area_create_context(DisplayGLCtx *dgc,
         g_clear_object(&ctx);
         return NULL;
     }
+
+    gdk_gl_context_make_current(ctx);
+    gdk_gl_context_get_version(ctx, &major, &minor);
+    gdk_gl_context_clear_current();
+    gtk_gl_area_make_current(GTK_GL_AREA(vc->gfx.drawing_area));
+
+    if (gd_cmp_gl_context_version(major, minor, params) == -1) {
+        /* created ctx version < requested version */
+        g_clear_object(&ctx);
+    }
+
+    trace_gd_gl_area_create_context(ctx, params->major_ver, params->minor_ver);
     return ctx;
 }
 
diff --git a/ui/trace-events b/ui/trace-events
index 1040ba0f88c7..a922f00e10b4 100644
--- a/ui/trace-events
+++ b/ui/trace-events
@@ -26,6 +26,7 @@ gd_key_event(const char *tab, int gdk_keycode, int qkeycode, const char *action)
 gd_grab(const char *tab, const char *device, const char *reason) "tab=%s, dev=%s, reason=%s"
 gd_ungrab(const char *tab, const char *device) "tab=%s, dev=%s"
 gd_keymap_windowing(const char *name) "backend=%s"
+gd_gl_area_create_context(void *ctx, int major, int minor) "ctx=%p, major=%d, minor=%d"
 gd_gl_area_destroy_context(void *ctx, void *current_ctx) "ctx=%p, current_ctx=%p"
 
 # vnc-auth-sasl.c
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348867.575087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T6-0000jZ-KK; Tue, 14 Jun 2022 12:16:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348867.575087; Tue, 14 Jun 2022 12:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T6-0000hv-F2; Tue, 14 Jun 2022 12:16:24 +0000
Received: by outflank-mailman (input) for mailman id 348867;
 Tue, 14 Jun 2022 12:16:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T5-0008Ek-4h
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cc6274b5-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-434-1DFfCUIxMnaPAZGvCK7x2A-1; Tue, 14 Jun 2022 08:16:16 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E6342805F72;
 Tue, 14 Jun 2022 12:16:15 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 7CDD61415103;
 Tue, 14 Jun 2022 12:16:15 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 25C2718003AA; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc6274b5-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208979;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=70clork2yZyYjdUXKnsu9VZIWlpSX2dH9Nrdwr2C7Z4=;
	b=e5powEpN/ZJtDWNV9sS2q+bY940oRXIgLewyDEVU0rzsjfbwE7WRopzTqHgLSPNOvDFY0f
	K3kjEM2b/TRsl6B4AbPLph+OealtBZMFzz3M6kYt02CPWOZITfeeGH99Hl+IwBpBcD5nss
	xWIEso4AfBMZEEkkAfgcMNRdrSlWBs8=
X-MC-Unique: 1DFfCUIxMnaPAZGvCK7x2A-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 04/15] hw/usb: Add CanoKey Implementation
Date: Tue, 14 Jun 2022 14:15:59 +0200
Message-Id: <20220614121610.508356-5-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

This commit adds a new emulated device, called CanoKey, to QEMU.

CanoKey implements platform independent features in canokey-core
https://github.com/canokeys/canokey-core, and leaves the USB implementation
to the platform.

In this commit the USB part is implemented using QEMU's USB APIs, so
the emulated CanoKey can communicate with the guest OS over USB.
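[Editor's note: the core/platform split described above can be illustrated with a small sketch. This is hypothetical and not the real canokey-qemu interface; the ops table and function names are invented for illustration.]

```c
/* Hypothetical sketch of a core/platform split like the one described:
 * the platform-independent core drives USB traffic through callbacks
 * the platform (here, QEMU) supplies. Names are illustrative only. */
#include <stdint.h>

struct usb_platform_ops {
    int (*transmit)(void *base, uint8_t ep, const uint8_t *buf, uint16_t len);
    int (*stall_ep)(void *base, uint8_t ep);
};

/* Core-side helper: platform-agnostic, touches USB only via ops. */
static int core_send_status(const struct usb_platform_ops *ops, void *base)
{
    /* 0x90 0x00 is the ISO 7816 "success" status word. */
    static const uint8_t ok[] = { 0x90, 0x00 };
    return ops->transmit(base, 1, ok, sizeof(ok));
}
```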

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6Mgph6f6Hc/zI@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.h |  69 +++++++++++
 hw/usb/canokey.c | 300 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 369 insertions(+)
 create mode 100644 hw/usb/canokey.h
 create mode 100644 hw/usb/canokey.c

diff --git a/hw/usb/canokey.h b/hw/usb/canokey.h
new file mode 100644
index 000000000000..24cf30420346
--- /dev/null
+++ b/hw/usb/canokey.h
@@ -0,0 +1,69 @@
+/*
+ * CanoKey QEMU device header.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache-2.0.
+ */
+
+#ifndef CANOKEY_H
+#define CANOKEY_H
+
+#include "hw/qdev-core.h"
+
+#define TYPE_CANOKEY "canokey"
+#define CANOKEY(obj) \
+    OBJECT_CHECK(CanoKeyState, (obj), TYPE_CANOKEY)
+
+/*
+ * State of Canokey (i.e. hw/canokey.c)
+ */
+
+/* CTRL INTR BULK */
+#define CANOKEY_EP_NUM 3
+/* BULK/INTR IN can be up to 1352 bytes, e.g. get key info */
+#define CANOKEY_EP_IN_BUFFER_SIZE 2048
+/* BULK OUT can be up to 270 bytes, e.g. PIV import cert */
+#define CANOKEY_EP_OUT_BUFFER_SIZE 512
+
+typedef enum {
+    CANOKEY_EP_IN_WAIT,
+    CANOKEY_EP_IN_READY,
+    CANOKEY_EP_IN_STALL
+} CanoKeyEPState;
+
+typedef struct CanoKeyState {
+    USBDevice dev;
+
+    /* IN packets from canokey device loop */
+    uint8_t ep_in[CANOKEY_EP_NUM][CANOKEY_EP_IN_BUFFER_SIZE];
+    /*
+     * See canokey_emu_transmit
+     *
+     * For large INTR IN, receive multiple data from canokey device loop
+     * in this case ep_in_size would increase with every call
+     */
+    uint32_t ep_in_size[CANOKEY_EP_NUM];
+    /*
+     * Used in canokey_handle_data
+     * for IN larger than p->iov.size, we would do multiple handle_data()
+     *
+     * The difference between ep_in_pos and ep_in_size:
+     * We first increase ep_in_size to fill ep_in buffer in device_loop,
+     * then use ep_in_pos to submit data from ep_in buffer in handle_data
+     */
+    uint32_t ep_in_pos[CANOKEY_EP_NUM];
+    CanoKeyEPState ep_in_state[CANOKEY_EP_NUM];
+
+    /* OUT pointer to canokey recv buffer */
+    uint8_t *ep_out[CANOKEY_EP_NUM];
+    uint32_t ep_out_size[CANOKEY_EP_NUM];
+    /* For large BULK OUT, multiple write to ep_out is needed */
+    uint8_t ep_out_buffer[CANOKEY_EP_NUM][CANOKEY_EP_OUT_BUFFER_SIZE];
+
+    /* Properties */
+    char *file; /* canokey-file */
+} CanoKeyState;
+
+#endif /* CANOKEY_H */
diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
new file mode 100644
index 000000000000..6cb8b7cdb089
--- /dev/null
+++ b/hw/usb/canokey.c
@@ -0,0 +1,300 @@
+/*
+ * CanoKey QEMU device implementation.
+ *
+ * Copyright (c) 2021-2022 Canokeys.org <contact@canokeys.org>
+ * Written by Hongren (Zenithal) Zheng <i@zenithal.me>
+ *
+ * This code is licensed under the Apache-2.0.
+ */
+
+#include "qemu/osdep.h"
+#include <canokey-qemu.h>
+
+#include "qemu/module.h"
+#include "qapi/error.h"
+#include "hw/usb.h"
+#include "hw/qdev-properties.h"
+#include "desc.h"
+#include "canokey.h"
+
+#define CANOKEY_EP_IN(ep) ((ep) & 0x7F)
+
+#define CANOKEY_VENDOR_NUM     0x20a0
+#define CANOKEY_PRODUCT_NUM    0x42d2
+
+/*
+ * Placeholder; canokey-qemu implements its own USB descriptors.
+ * Namely, we do not use usb_desc_handle_control.
+ */
+enum {
+    STR_MANUFACTURER = 1,
+    STR_PRODUCT,
+    STR_SERIALNUMBER
+};
+
+static const USBDescStrings desc_strings = {
+    [STR_MANUFACTURER]     = "canokeys.org",
+    [STR_PRODUCT]          = "CanoKey QEMU",
+    [STR_SERIALNUMBER]     = "0"
+};
+
+static const USBDescDevice desc_device_canokey = {
+    .bcdUSB                        = 0x0,
+    .bMaxPacketSize0               = 16,
+    .bNumConfigurations            = 0,
+    .confs = NULL,
+};
+
+static const USBDesc desc_canokey = {
+    .id = {
+        .idVendor          = CANOKEY_VENDOR_NUM,
+        .idProduct         = CANOKEY_PRODUCT_NUM,
+        .bcdDevice         = 0x0100,
+        .iManufacturer     = STR_MANUFACTURER,
+        .iProduct          = STR_PRODUCT,
+        .iSerialNumber     = STR_SERIALNUMBER,
+    },
+    .full = &desc_device_canokey,
+    .high = &desc_device_canokey,
+    .str  = desc_strings,
+};
+
+
+/*
+ * libcanokey-qemu.so side functions
+ * All functions are called from canokey_emu_device_loop
+ */
+int canokey_emu_stall_ep(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    key->ep_in_size[ep_in] = 0;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_STALL;
+    return 0;
+}
+
+int canokey_emu_set_address(void *base, uint8_t addr)
+{
+    CanoKeyState *key = base;
+    key->dev.addr = addr;
+    return 0;
+}
+
+int canokey_emu_prepare_receive(
+        void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    key->ep_out[ep] = pbuf;
+    key->ep_out_size[ep] = size;
+    return 0;
+}
+
+int canokey_emu_transmit(
+        void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
+{
+    CanoKeyState *key = base;
+    uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
+    memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
+            pbuf, size);
+    key->ep_in_size[ep_in] += size;
+    key->ep_in_state[ep_in] = CANOKEY_EP_IN_READY;
+    /*
+     * Ready for more data in the device loop.
+     *
+     * Note: this is a quirk for CanoKey CTAPHID,
+     * which calls emu_transmit multiple times in one device_loop,
+     * but without data_in it would get stuck in device_loop.
+     * This has no side effect for CCID, as that is strictly
+     * an OUT-then-IN transfer.
+     * However, it does have a side effect for control transfers,
+     * hence the ep_in != 0 check below.
+     */
+    if (ep_in != 0) {
+        canokey_emu_data_in(ep_in);
+    }
+    return 0;
+}
+
+uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
+{
+    CanoKeyState *key = base;
+    return key->ep_out_size[ep];
+}
+
+/*
+ * QEMU side functions
+ */
+static void canokey_handle_reset(USBDevice *dev)
+{
+    CanoKeyState *key = CANOKEY(dev);
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_pos[i] = 0;
+        key->ep_in_size[i] = 0;
+    }
+    canokey_emu_reset();
+}
+
+static void canokey_handle_control(USBDevice *dev, USBPacket *p,
+               int request, int value, int index, int length, uint8_t *data)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    canokey_emu_setup(request, value, index, length);
+
+    uint32_t dir_in = request & DeviceRequest;
+    if (!dir_in) {
+        /* OUT */
+        if (key->ep_out[0] != NULL) {
+            memcpy(key->ep_out[0], data, length);
+        }
+        canokey_emu_data_out(p->ep->nr, data);
+    }
+
+    canokey_emu_device_loop();
+
+    /* IN */
+    switch (key->ep_in_state[0]) {
+    case CANOKEY_EP_IN_WAIT:
+        p->status = USB_RET_NAK;
+        break;
+    case CANOKEY_EP_IN_STALL:
+        p->status = USB_RET_STALL;
+        break;
+    case CANOKEY_EP_IN_READY:
+        memcpy(data, key->ep_in[0], key->ep_in_size[0]);
+        p->actual_length = key->ep_in_size[0];
+        /* reset state */
+        key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[0] = 0;
+        key->ep_in_pos[0] = 0;
+        break;
+    }
+}
+
+static void canokey_handle_data(USBDevice *dev, USBPacket *p)
+{
+    CanoKeyState *key = CANOKEY(dev);
+
+    uint8_t ep_in = CANOKEY_EP_IN(p->ep->nr);
+    uint8_t ep_out = p->ep->nr;
+    uint32_t in_len;
+    uint32_t out_pos;
+    uint32_t out_len;
+    switch (p->pid) {
+    case USB_TOKEN_OUT:
+        usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
+        out_pos = 0;
+        while (out_pos != p->iov.size) {
+            /*
+             * key->ep_out[ep_out] set by prepare_receive
+             * to be a buffer inside libcanokey-qemu.so
+             * key->ep_out_size[ep_out] set by prepare_receive
+             * to be the buffer length
+             */
+            out_len = MIN(p->iov.size - out_pos, key->ep_out_size[ep_out]);
+            memcpy(key->ep_out[ep_out],
+                    key->ep_out_buffer[ep_out] + out_pos, out_len);
+            out_pos += out_len;
+            /* update ep_out_size to actual len */
+            key->ep_out_size[ep_out] = out_len;
+            canokey_emu_data_out(ep_out, NULL);
+        }
+        break;
+    case USB_TOKEN_IN:
+        if (key->ep_in_pos[ep_in] == 0) { /* first time IN */
+            canokey_emu_data_in(ep_in);
+            canokey_emu_device_loop(); /* may call transmit multiple times */
+        }
+        switch (key->ep_in_state[ep_in]) {
+        case CANOKEY_EP_IN_WAIT:
+            /* NAK for early INTR IN */
+            p->status = USB_RET_NAK;
+            break;
+        case CANOKEY_EP_IN_STALL:
+            p->status = USB_RET_STALL;
+            break;
+        case CANOKEY_EP_IN_READY:
+            /* submit part of ep_in buffer to USBPacket */
+            in_len = MIN(key->ep_in_size[ep_in] - key->ep_in_pos[ep_in],
+                    p->iov.size);
+            usb_packet_copy(p,
+                    key->ep_in[ep_in] + key->ep_in_pos[ep_in], in_len);
+            key->ep_in_pos[ep_in] += in_len;
+            /* reset state if all data submitted */
+            if (key->ep_in_pos[ep_in] == key->ep_in_size[ep_in]) {
+                key->ep_in_state[ep_in] = CANOKEY_EP_IN_WAIT;
+                key->ep_in_size[ep_in] = 0;
+                key->ep_in_pos[ep_in] = 0;
+            }
+            break;
+        }
+        break;
+    default:
+        p->status = USB_RET_STALL;
+        break;
+    }
+}
+
+static void canokey_realize(USBDevice *base, Error **errp)
+{
+    CanoKeyState *key = CANOKEY(base);
+
+    if (key->file == NULL) {
+        error_setg(errp, "You must provide file=/path/to/canokey-file");
+        return;
+    }
+
+    usb_desc_init(base);
+
+    for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
+        key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
+        key->ep_in_size[i] = 0;
+        key->ep_in_pos[i] = 0;
+    }
+
+    if (canokey_emu_init(key, key->file)) {
+        error_setg(errp, "canokey cannot create or read %s", key->file);
+        return;
+    }
+}
+
+static void canokey_unrealize(USBDevice *base)
+{
+}
+
+static Property canokey_properties[] = {
+    DEFINE_PROP_STRING("file", CanoKeyState, file),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void canokey_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    USBDeviceClass *uc = USB_DEVICE_CLASS(klass);
+
+    uc->product_desc   = "CanoKey QEMU";
+    uc->usb_desc       = &desc_canokey;
+    uc->handle_reset   = canokey_handle_reset;
+    uc->handle_control = canokey_handle_control;
+    uc->handle_data    = canokey_handle_data;
+    uc->handle_attach  = usb_desc_attach;
+    uc->realize        = canokey_realize;
+    uc->unrealize      = canokey_unrealize;
+    dc->desc           = "CanoKey QEMU";
+    device_class_set_props(dc, canokey_properties);
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+}
+
+static const TypeInfo canokey_info = {
+    .name = TYPE_CANOKEY,
+    .parent = TYPE_USB_DEVICE,
+    .instance_size = sizeof(CanoKeyState),
+    .class_init = canokey_class_init
+};
+
+static void canokey_register_types(void)
+{
+    type_register_static(&canokey_info);
+}
+
+type_init(canokey_register_types)
-- 
2.36.1
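The IN-endpoint handling in the patch above is a small three-state machine (WAIT, READY, STALL): the device side appends data and marks the endpoint READY, and the host side drains it in packet-sized chunks, resetting it to WAIT when empty. A standalone sketch of the same logic, outside QEMU, with hypothetical names (`ep_transmit`, `ep_token_in` are illustrative, not the patch's API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirror of the per-endpoint IN states used in canokey.c */
enum ep_in_state { EP_IN_WAIT, EP_IN_READY, EP_IN_STALL };

struct ep_in {
    enum ep_in_state state;
    uint8_t buf[512];
    uint32_t size;   /* bytes produced by the device so far */
    uint32_t pos;    /* bytes already handed to the host */
};

/* Device side: append data and mark the endpoint ready
 * (corresponds to canokey_emu_transmit) */
static void ep_transmit(struct ep_in *ep, const uint8_t *p, uint32_t n)
{
    memcpy(ep->buf + ep->size, p, n);
    ep->size += n;
    ep->state = EP_IN_READY;
}

/* Host side: take up to max bytes; reset to WAIT once drained
 * (corresponds to the USB_TOKEN_IN branch of canokey_handle_data).
 * Returns bytes copied, or -1 to signal NAK (STALL omitted here). */
static int ep_token_in(struct ep_in *ep, uint8_t *out, uint32_t max)
{
    if (ep->state != EP_IN_READY) {
        return -1; /* WAIT -> NAK for an early IN */
    }
    uint32_t n = ep->size - ep->pos;
    if (n > max) {
        n = max;
    }
    memcpy(out, ep->buf + ep->pos, n);
    ep->pos += n;
    if (ep->pos == ep->size) { /* all data submitted: reset state */
        ep->state = EP_IN_WAIT;
        ep->size = 0;
        ep->pos = 0;
    }
    return (int)n;
}
```

An IN token before any transmit NAKs; after a 6-byte transmit, two reads of at most 4 bytes return 4 and then 2, after which the endpoint is back in WAIT.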



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348868.575103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T8-00017A-0p; Tue, 14 Jun 2022 12:16:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348868.575103; Tue, 14 Jun 2022 12:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T7-000162-RC; Tue, 14 Jun 2022 12:16:25 +0000
Received: by outflank-mailman (input) for mailman id 348868;
 Tue, 14 Jun 2022 12:16:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T5-0008Ek-V5
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd46dad5-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:22 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-562-gh_vAfIkM3yhrdpogkCCfQ-1; Tue, 14 Jun 2022 08:16:19 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 37F61299E75F;
 Tue, 14 Jun 2022 12:16:19 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id E95EC1121319;
 Tue, 14 Jun 2022 12:16:18 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 63FB01800626; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd46dad5-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TqkIe5B5GmtLHYDEQNksBwY0pKybp1zEh7xnID9ieAU=;
	b=L045pJYrpxdM5PdDnsGhNf3P9JpUqJUrDw6AFPdhTGlauVkdHzGPrA/q/rppWixEspWze+
	Ivz4vU0qCCrXLWr9mJ2tQ+PrwpA6mwpZBjl4L0BT8RTgUOY7TX7+BPvr3cwcKG9BpCWXbK
	nHQoS0ylcrYhH6zswyhaHhJoEjEg12E=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TqkIe5B5GmtLHYDEQNksBwY0pKybp1zEh7xnID9ieAU=;
	b=L045pJYrpxdM5PdDnsGhNf3P9JpUqJUrDw6AFPdhTGlauVkdHzGPrA/q/rppWixEspWze+
	Ivz4vU0qCCrXLWr9mJ2tQ+PrwpA6mwpZBjl4L0BT8RTgUOY7TX7+BPvr3cwcKG9BpCWXbK
	nHQoS0ylcrYhH6zswyhaHhJoEjEg12E=
X-MC-Unique: gh_vAfIkM3yhrdpogkCCfQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 08/15] docs/system/devices/usb: Add CanoKey to USB devices examples
Date: Tue, 14 Jun 2022 14:16:03 +0200
Message-Id: <20220614121610.508356-9-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6o+QFhzA7VHcZ@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/devices/usb.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/system/devices/usb.rst b/docs/system/devices/usb.rst
index afb7d6c2268d..872d9167589b 100644
--- a/docs/system/devices/usb.rst
+++ b/docs/system/devices/usb.rst
@@ -199,6 +199,10 @@ option or the ``device_add`` monitor command. Available devices are:
 ``u2f-{emulated,passthru}``
    Universal Second Factor device
 
+``canokey``
+   An open-source secure key implementing FIDO2, OpenPGP, PIV and more.
+   For more information, see :ref:`canokey`.
+
 Physical port addressing
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348869.575114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T9-0001SW-KJ; Tue, 14 Jun 2022 12:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348869.575114; Tue, 14 Jun 2022 12:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15T9-0001Ru-CF; Tue, 14 Jun 2022 12:16:27 +0000
Received: by outflank-mailman (input) for mailman id 348869;
 Tue, 14 Jun 2022 12:16:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T7-0008Ek-0Q
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:25 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cda789d3-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:23 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-669-uS3o44O8OS2SpFNP5fHGSA-1; Tue, 14 Jun 2022 08:16:18 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3AE9C1C05149;
 Tue, 14 Jun 2022 12:16:18 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id B4E17416164;
 Tue, 14 Jun 2022 12:16:17 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 5832B1800622; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cda789d3-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Iypp4lhP0vK8zaOIYv4YGS15C6hr5eRaL+NEXu7qlj4=;
	b=ByYhOQoGo5lpU9WjWTlMduyw18WYF4e8z/qeOlNrQSXqGj0pdYvao7zwETV0qNFZqJAapL
	El5go6ddDPML3GUGREhmEgSfjiTV6HNhmgh46JQoopiMjoGwIY1Pu8305Mtf1vZbF65r7j
	a2oqAjOjNSM9TP9uBU2aOYy4rIfggsQ=
X-MC-Unique: uS3o44O8OS2SpFNP5fHGSA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 07/15] docs: Add CanoKey documentation
Date: Tue, 14 Jun 2022 14:16:02 +0200
Message-Id: <20220614121610.508356-8-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.10

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6ilQimrK+l5NN@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 docs/system/device-emulation.rst |   1 +
 docs/system/devices/canokey.rst  | 168 +++++++++++++++++++++++++++++++
 2 files changed, 169 insertions(+)
 create mode 100644 docs/system/devices/canokey.rst

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 3b729b920d7c..05060060563f 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -92,3 +92,4 @@ Emulated Devices
    devices/vhost-user.rst
    devices/virtio-pmem.rst
    devices/vhost-user-rng.rst
+   devices/canokey.rst
diff --git a/docs/system/devices/canokey.rst b/docs/system/devices/canokey.rst
new file mode 100644
index 000000000000..169f99b8eb82
--- /dev/null
+++ b/docs/system/devices/canokey.rst
@@ -0,0 +1,168 @@
+.. _canokey:
+
+CanoKey QEMU
+------------
+
+CanoKey [1]_ is an open-source secure key with support for:
+
+* U2F / FIDO2 with Ed25519 and HMAC-secret
+* OpenPGP Card V3.4 with RSA4096, Ed25519 and more [2]_
+* PIV (NIST SP 800-73-4)
+* HOTP / TOTP
+* NDEF
+
+All these platform-independent features are in canokey-core [3]_.
+
+For different platforms, CanoKey has different implementations,
+including both hardware implementations and virtual cards:
+
+* CanoKey STM32 [4]_
+* CanoKey Pigeon [5]_
+* (virt-card) CanoKey USB/IP
+* (virt-card) CanoKey FunctionFS
+
+In QEMU, yet another CanoKey virt-card is implemented.
+CanoKey QEMU exposes itself as a USB device to the guest OS.
+
+With the same software configuration as a hardware key,
+the guest OS can use all the functionality of a secure key as if
+a hardware key were actually plugged in.
+
+CanoKey QEMU provides much convenience for debugging:
+
+* libcanokey-qemu supports debug output, so developers can
+  inspect what happens inside a secure key
+* CanoKey QEMU supports trace events, so events in the device
+  model can be traced
+* the QEMU USB stack supports pcap, so USB packets between the guest
+  and the key can be captured and analysed
+
+This means, for developers:
+
+* developers of software with secure key support (e.g. FIDO2, OpenPGP)
+  can see what happens inside the secure key
+* secure key developers can easily capture and analyse the USB packets
+  between the guest OS and CanoKey
+
+Also, since this is a virtual card, it can easily be used in CI for
+testing code that works with secure keys.
+
+Building
+========
+
+libcanokey-qemu is required to use CanoKey QEMU.
+
+.. code-block:: shell
+
+    git clone https://github.com/canokeys/canokey-qemu
+    mkdir canokey-qemu/build
+    pushd canokey-qemu/build
+
+If you want to install libcanokey-qemu in a different place,
+add ``-DCMAKE_INSTALL_PREFIX=/path/to/your/place`` to the cmake invocation below.
+
+.. code-block:: shell
+
+    cmake ..
+    make
+    make install # may need sudo
+    popd
+
+Then configure and build QEMU:
+
+.. code-block:: shell
+
+    # depending on your env, lib/pkgconfig can be lib64/pkgconfig
+    export PKG_CONFIG_PATH=/path/to/your/place/lib/pkgconfig:$PKG_CONFIG_PATH
+    ./configure --enable-canokey && make
+
+Using CanoKey QEMU
+==================
+
+CanoKey QEMU stores all its data in a file on the host, specified by the
+``file`` argument when invoking QEMU.
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file
+
+Note: keep this file safe, as it may contain your private keys!
+
+The first time the file is used, CanoKey creates and initializes it;
+afterwards CanoKey QEMU simply reads this file.
+
+After the guest OS boots, you can check that there is a USB device.
+
+For example, if the guest OS is a Linux machine, you may invoke ``lsusb``
+and find CanoKey QEMU there:
+
+.. code-block:: shell
+
+    $ lsusb
+    Bus 001 Device 002: ID 20a0:42d4 Clay Logic CanoKey QEMU
+
+You may set up the key as guided in [6]_. The console for the key is at [7]_.
+
+Debugging
+=========
+
+CanoKey QEMU consists of two parts: ``libcanokey-qemu.so`` and ``canokey.c``,
+the latter of which resides in QEMU. The former provides the core functionality
+of a secure key, while the latter provides the platform-dependent part,
+namely USB packet handling.
+
+If you want to trace what happens inside the secure key, add
+``-DQEMU_DEBUG_OUTPUT=ON`` to the cmake command line when compiling
+libcanokey-qemu:
+
+.. code-block:: shell
+
+    cmake .. -DQEMU_DEBUG_OUTPUT=ON
+
+If you want to trace events that happen in canokey.c, use
+
+.. parsed-literal::
+
+    |qemu_system| --trace "canokey_*" \\
+        -usb -device canokey,file=$HOME/.canokey-file
+
+If you want to capture USB packets between the guest and the host, you can:
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file,pcap=key.pcap
+
+Limitations
+===========
+
+Currently libcanokey-qemu.so has dozens of global variables, as it was
+originally designed for embedded systems. Thus one QEMU instance cannot
+run multiple CanoKey QEMU devices, namely you cannot
+
+.. parsed-literal::
+
+    |qemu_system| -usb -device canokey,file=$HOME/.canokey-file \\
+         -device canokey,file=$HOME/.canokey-file2
+
+Also, there is no lock on the canokey-file, so two CanoKey QEMU instances
+cannot read one canokey-file at the same time.
+
+Another limitation is that this device is not compatible with ``qemu-xhci``:
+the device hangs when there are FIDO2 packets (traffic on interrupt
+endpoints). If you do not use FIDO2, it works as intended, but for full
+functionality you should use the older UHCI/EHCI bus and attach the canokey
+device to it, for example
+
+.. parsed-literal::
+
+   |qemu_system| -device piix3-usb-uhci,id=uhci -device canokey,bus=uhci.0
+
+References
+==========
+
+.. [1] `<https://canokeys.org>`_
+.. [2] `<https://docs.canokeys.org/userguide/openpgp/#supported-algorithm>`_
+.. [3] `<https://github.com/canokeys/canokey-core>`_
+.. [4] `<https://github.com/canokeys/canokey-stm32>`_
+.. [5] `<https://github.com/canokeys/canokey-pigeon>`_
+.. [6] `<https://docs.canokeys.org/>`_
+.. [7] `<https://console.canokeys.org/>`_
-- 
2.36.1
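The "no lock on canokey-file" limitation in the documentation above could be mitigated outside QEMU: a hypothetical launcher could take an advisory `flock(2)` lock on the canokey-file before starting QEMU, so a second launcher pointed at the same file fails fast instead of corrupting it. A minimal sketch, assuming a POSIX/Linux host; `lock_canokey_file` is an illustrative name, not part of the patch series:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Try to take an exclusive, non-blocking advisory lock on path.
 * Returns the held fd on success, or -1 if the file cannot be opened
 * or another open file description already holds the lock. */
static int lock_canokey_file(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        return -1;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        close(fd); /* already locked elsewhere: refuse to proceed */
        return -1;
    }
    /* Keep fd open for QEMU's lifetime; the lock is released
     * automatically when the fd is closed or the process exits. */
    return fd;
}
```

Since `flock` locks belong to the open file description, even a second `open()` of the same path within one process fails to acquire the lock while the first descriptor is still held.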



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348870.575120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TA-0001Zu-FO; Tue, 14 Jun 2022 12:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348870.575120; Tue, 14 Jun 2022 12:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TA-0001YW-4a; Tue, 14 Jun 2022 12:16:28 +0000
Received: by outflank-mailman (input) for mailman id 348870;
 Tue, 14 Jun 2022 12:16:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T7-0008Ek-V0
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd53b940-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:22 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-480-77cUT6WpNIy3Zzv5R7gyFA-1; Tue, 14 Jun 2022 08:16:17 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4840938041D8;
 Tue, 14 Jun 2022 12:16:16 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id BE7331415132;
 Tue, 14 Jun 2022 12:16:15 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 34EE718003BA; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd53b940-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XMFb51C69E0Si128czVwJqlqgwkrMEmsc4JCjQtiyLU=;
	b=SrDegPrayZibgORa6z+ku62SoFinTM7GGDaWiJ11yogpuDLugoaibao2TbR5TCkOz8xS99
	jksoLesj3n5R43C2cm+0U2//Lrn7vyyxV9GZXokstABtotRpXKnTPJMqtMoD7cBEml8K6T
	jRag5S+Px3KMVBIei3+MrNK3iHxAFg4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XMFb51C69E0Si128czVwJqlqgwkrMEmsc4JCjQtiyLU=;
	b=SrDegPrayZibgORa6z+ku62SoFinTM7GGDaWiJ11yogpuDLugoaibao2TbR5TCkOz8xS99
	jksoLesj3n5R43C2cm+0U2//Lrn7vyyxV9GZXokstABtotRpXKnTPJMqtMoD7cBEml8K6T
	jRag5S+Px3KMVBIei3+MrNK3iHxAFg4=
X-MC-Unique: 77cUT6WpNIy3Zzv5R7gyFA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 05/15] hw/usb/canokey: Add trace events
Date: Tue, 14 Jun 2022 14:16:00 +0200
Message-Id: <20220614121610.508356-6-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6RoDKQIxSkFwL@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/canokey.c    | 13 +++++++++++++
 hw/usb/trace-events | 16 ++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/hw/usb/canokey.c b/hw/usb/canokey.c
index 6cb8b7cdb089..4a08b1cbd776 100644
--- a/hw/usb/canokey.c
+++ b/hw/usb/canokey.c
@@ -14,6 +14,7 @@
 #include "qapi/error.h"
 #include "hw/usb.h"
 #include "hw/qdev-properties.h"
+#include "trace.h"
 #include "desc.h"
 #include "canokey.h"
 
@@ -66,6 +67,7 @@ static const USBDesc desc_canokey = {
  */
 int canokey_emu_stall_ep(void *base, uint8_t ep)
 {
+    trace_canokey_emu_stall_ep(ep);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     key->ep_in_size[ep_in] = 0;
@@ -75,6 +77,7 @@ int canokey_emu_stall_ep(void *base, uint8_t ep)
 
 int canokey_emu_set_address(void *base, uint8_t addr)
 {
+    trace_canokey_emu_set_address(addr);
     CanoKeyState *key = base;
     key->dev.addr = addr;
     return 0;
@@ -83,6 +86,7 @@ int canokey_emu_set_address(void *base, uint8_t addr)
 int canokey_emu_prepare_receive(
         void *base, uint8_t ep, uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_prepare_receive(ep, size);
     CanoKeyState *key = base;
     key->ep_out[ep] = pbuf;
     key->ep_out_size[ep] = size;
@@ -92,6 +96,7 @@ int canokey_emu_prepare_receive(
 int canokey_emu_transmit(
         void *base, uint8_t ep, const uint8_t *pbuf, uint16_t size)
 {
+    trace_canokey_emu_transmit(ep, size);
     CanoKeyState *key = base;
     uint8_t ep_in = CANOKEY_EP_IN(ep); /* INTR IN has ep 129 */
     memcpy(key->ep_in[ep_in] + key->ep_in_size[ep_in],
@@ -125,6 +130,7 @@ uint32_t canokey_emu_get_rx_data_size(void *base, uint8_t ep)
  */
 static void canokey_handle_reset(USBDevice *dev)
 {
+    trace_canokey_handle_reset();
     CanoKeyState *key = CANOKEY(dev);
     for (int i = 0; i != CANOKEY_EP_NUM; ++i) {
         key->ep_in_state[i] = CANOKEY_EP_IN_WAIT;
@@ -137,6 +143,7 @@ static void canokey_handle_reset(USBDevice *dev)
 static void canokey_handle_control(USBDevice *dev, USBPacket *p,
                int request, int value, int index, int length, uint8_t *data)
 {
+    trace_canokey_handle_control_setup(request, value, index, length);
     CanoKeyState *key = CANOKEY(dev);
 
     canokey_emu_setup(request, value, index, length);
@@ -144,6 +151,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     uint32_t dir_in = request & DeviceRequest;
     if (!dir_in) {
         /* OUT */
+        trace_canokey_handle_control_out();
         if (key->ep_out[0] != NULL) {
             memcpy(key->ep_out[0], data, length);
         }
@@ -163,6 +171,7 @@ static void canokey_handle_control(USBDevice *dev, USBPacket *p,
     case CANOKEY_EP_IN_READY:
         memcpy(data, key->ep_in[0], key->ep_in_size[0]);
         p->actual_length = key->ep_in_size[0];
+        trace_canokey_handle_control_in(p->actual_length);
         /* reset state */
         key->ep_in_state[0] = CANOKEY_EP_IN_WAIT;
         key->ep_in_size[0] = 0;
@@ -182,6 +191,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
     uint32_t out_len;
     switch (p->pid) {
     case USB_TOKEN_OUT:
+        trace_canokey_handle_data_out(ep_out, p->iov.size);
         usb_packet_copy(p, key->ep_out_buffer[ep_out], p->iov.size);
         out_pos = 0;
         while (out_pos != p->iov.size) {
@@ -226,6 +236,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
                 key->ep_in_size[ep_in] = 0;
                 key->ep_in_pos[ep_in] = 0;
             }
+            trace_canokey_handle_data_in(ep_in, in_len);
             break;
         }
         break;
@@ -237,6 +248,7 @@ static void canokey_handle_data(USBDevice *dev, USBPacket *p)
 
 static void canokey_realize(USBDevice *base, Error **errp)
 {
+    trace_canokey_realize();
     CanoKeyState *key = CANOKEY(base);
 
     if (key->file == NULL) {
@@ -260,6 +272,7 @@ static void canokey_realize(USBDevice *base, Error **errp)
 
 static void canokey_unrealize(USBDevice *base)
 {
+    trace_canokey_unrealize();
 }
 
 static Property canokey_properties[] = {
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
index 9773cb53300d..914ca7166829 100644
--- a/hw/usb/trace-events
+++ b/hw/usb/trace-events
@@ -345,3 +345,19 @@ usb_serial_set_baud(int bus, int addr, int baud) "dev %d:%u baud rate %d"
 usb_serial_set_data(int bus, int addr, int parity, int data, int stop) "dev %d:%u parity %c, data bits %d, stop bits %d"
 usb_serial_set_flow_control(int bus, int addr, int index) "dev %d:%u flow control %d"
 usb_serial_set_xonxoff(int bus, int addr, uint8_t xon, uint8_t xoff) "dev %d:%u xon 0x%x xoff 0x%x"
+
+# canokey.c
+canokey_emu_stall_ep(uint8_t ep) "ep %d"
+canokey_emu_set_address(uint8_t addr) "addr %d"
+canokey_emu_prepare_receive(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_emu_transmit(uint8_t ep, uint16_t size) "ep %d size %d"
+canokey_thread_start(void)
+canokey_thread_stop(void)
+canokey_handle_reset(void)
+canokey_handle_control_setup(int request, int value, int index, int length) "request 0x%04X value 0x%04X index 0x%04X length 0x%04X"
+canokey_handle_control_out(void)
+canokey_handle_control_in(int actual_len) "len %d"
+canokey_handle_data_out(uint8_t ep_out, uint32_t out_len) "ep %d len %d"
+canokey_handle_data_in(uint8_t ep_in, uint32_t in_len) "ep %d len %d"
+canokey_realize(void)
+canokey_unrealize(void)
-- 
2.36.1
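[Archive note] The trace-events entries added in the patch above follow QEMU's one-line declaration shape: an event name, a C-style argument list, and an optional printf-style format string (absent for the `(void)` events). A minimal sketch of a parser for that shape — the regex below is an assumption modelled on the lines in this patch, not QEMU's own tracetool:

```python
import re

# Assumed grammar, modelled on the trace-events lines above:
#   name(arg-list) ["format string"]
# The quoted format string may be absent for events without arguments.
LINE_RE = re.compile(r'^(?P<name>\w+)\((?P<args>[^)]*)\)(?:\s+"(?P<fmt>[^"]*)")?\s*$')

def parse_trace_event(line):
    """Split one trace-events line into (name, args, format)."""
    m = LINE_RE.match(line.strip())
    if m is None:
        raise ValueError(f"malformed trace-events line: {line!r}")
    return m.group("name"), m.group("args"), m.group("fmt") or ""

print(parse_trace_event('canokey_emu_transmit(uint8_t ep, uint16_t size) "ep %d size %d"'))
print(parse_trace_event('canokey_handle_reset(void)'))
```

At runtime these events can then be enabled by name pattern (for example a `canokey_*` glob) through QEMU's tracing options.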



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348871.575126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TB-0001g4-3m; Tue, 14 Jun 2022 12:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348871.575126; Tue, 14 Jun 2022 12:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TA-0001e8-MQ; Tue, 14 Jun 2022 12:16:28 +0000
Received: by outflank-mailman (input) for mailman id 348871;
 Tue, 14 Jun 2022 12:16:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T8-0008Ek-Vy
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:27 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce949b87-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:24 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-459-D6qPa_PhO0qFlLmH-rztdw-1; Tue, 14 Jun 2022 08:16:18 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7CD253C025CE;
 Tue, 14 Jun 2022 12:16:17 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 43577C53360;
 Tue, 14 Jun 2022 12:16:17 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 48BBC1800620; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce949b87-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208983;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HZ4OtpGl92/4vTO64NR8x1+3AJHChGb6d9Bq7oyIeKQ=;
	b=AEZGA9QmHFlvx96HFAhkp4w+MXKeuVWO/cWRX1D9/uUfwUdL6iuXNbIhL/vyA8kYzENQ+v
	F7lNqB0NKc/SZe8h5h5sNc7I6RE34HSxJgJDFN7rBm4h2AeKqw+rCV9feDunPxryvBYinA
	/+2JN/7NhRd1iVg6TJq/SAFwariEVHY=
X-MC-Unique: D6qPa_PhO0qFlLmH-rztdw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 06/15] meson: Add CanoKey
Date: Tue, 14 Jun 2022 14:16:01 +0200
Message-Id: <20220614121610.508356-7-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.8

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY6YRD6cxH21mms@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 meson_options.txt             | 2 ++
 hw/usb/Kconfig                | 5 +++++
 hw/usb/meson.build            | 5 +++++
 meson.build                   | 6 ++++++
 scripts/meson-buildoptions.sh | 3 +++
 5 files changed, 21 insertions(+)

diff --git a/meson_options.txt b/meson_options.txt
index 2de94af03712..0e8197386b99 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -189,6 +189,8 @@ option('spice_protocol', type : 'feature', value : 'auto',
        description: 'Spice protocol support')
 option('u2f', type : 'feature', value : 'auto',
        description: 'U2F emulation support')
+option('canokey', type : 'feature', value : 'auto',
+       description: 'CanoKey support')
 option('usb_redir', type : 'feature', value : 'auto',
        description: 'libusbredir support')
 option('l2tpv3', type : 'feature', value : 'auto',
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
index 53f8283ffdc1..ce4f4339763e 100644
--- a/hw/usb/Kconfig
+++ b/hw/usb/Kconfig
@@ -119,6 +119,11 @@ config USB_U2F
     default y
     depends on USB
 
+config USB_CANOKEY
+    bool
+    default y
+    depends on USB
+
 config IMX_USBPHY
     bool
     default y
diff --git a/hw/usb/meson.build b/hw/usb/meson.build
index de853d780dd8..793df42e2127 100644
--- a/hw/usb/meson.build
+++ b/hw/usb/meson.build
@@ -63,6 +63,11 @@ if u2f.found()
   softmmu_ss.add(when: 'CONFIG_USB_U2F', if_true: [u2f, files('u2f-emulated.c')])
 endif
 
+# CanoKey
+if canokey.found()
+  softmmu_ss.add(when: 'CONFIG_USB_CANOKEY', if_true: [canokey, files('canokey.c')])
+endif
+
 # usb redirect
 if usbredir.found()
   usbredir_ss = ss.source_set()
diff --git a/meson.build b/meson.build
index 21cd949082dc..0c2e11ff0715 100644
--- a/meson.build
+++ b/meson.build
@@ -1408,6 +1408,12 @@ if have_system
                    method: 'pkg-config',
                    kwargs: static_kwargs)
 endif
+canokey = not_found
+if have_system
+  canokey = dependency('canokey-qemu', required: get_option('canokey'),
+                   method: 'pkg-config',
+                   kwargs: static_kwargs)
+endif
 usbredir = not_found
 if not get_option('usb_redir').auto() or have_system
   usbredir = dependency('libusbredirparser-0.5', required: get_option('usb_redir'),
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 00ea4d8cd169..1fc1d2e2c362 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -73,6 +73,7 @@ meson_options_help() {
   printf "%s\n" '  bpf             eBPF support'
   printf "%s\n" '  brlapi          brlapi character device driver'
   printf "%s\n" '  bzip2           bzip2 support for DMG images'
+  printf "%s\n" '  canokey         CanoKey support'
   printf "%s\n" '  cap-ng          cap_ng support'
   printf "%s\n" '  capstone        Whether and how to find the capstone library'
   printf "%s\n" '  cloop           cloop image format support'
@@ -204,6 +205,8 @@ _meson_option_parse() {
     --disable-brlapi) printf "%s" -Dbrlapi=disabled ;;
     --enable-bzip2) printf "%s" -Dbzip2=enabled ;;
     --disable-bzip2) printf "%s" -Dbzip2=disabled ;;
+    --enable-canokey) printf "%s" -Dcanokey=enabled ;;
+    --disable-canokey) printf "%s" -Dcanokey=disabled ;;
     --enable-cap-ng) printf "%s" -Dcap_ng=enabled ;;
     --disable-cap-ng) printf "%s" -Dcap_ng=disabled ;;
     --enable-capstone) printf "%s" -Dcapstone=enabled ;;
-- 
2.36.1
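[Archive note] The scripts/meson-buildoptions.sh hunk above maps configure-style `--enable-canokey`/`--disable-canokey` flags onto meson `-D` options. The same mapping can be sketched in a few lines of Python — a toy illustration of the generated case arms, not QEMU's build machinery (and note the real generated script also rewrites dashes to underscores for options such as `cap-ng`):

```python
def meson_option_for(flag):
    """Map a configure-style --enable/--disable flag to a meson -D option,
    mirroring the generated case arms in scripts/meson-buildoptions.sh."""
    if flag.startswith("--enable-"):
        return f"-D{flag[len('--enable-'):]}=enabled"
    if flag.startswith("--disable-"):
        return f"-D{flag[len('--disable-'):]}=disabled"
    raise ValueError(f"unrecognized flag: {flag}")

print(meson_option_for("--enable-canokey"))   # -Dcanokey=enabled
print(meson_option_for("--disable-canokey"))  # -Dcanokey=disabled
```

With the feature left at its default `auto` value, meson.build probes the `canokey-qemu` library via pkg-config and only then compiles hw/usb/canokey.c.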



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:16:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348872.575135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TC-0001zb-AW; Tue, 14 Jun 2022 12:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348872.575135; Tue, 14 Jun 2022 12:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15TB-0001vn-TM; Tue, 14 Jun 2022 12:16:29 +0000
Received: by outflank-mailman (input) for mailman id 348872;
 Tue, 14 Jun 2022 12:16:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15T9-0008Ek-VW
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cecb6f1b-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:25 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-35-j3ECH_JHNIaqhgvfOeUaig-1; Tue, 14 Jun 2022 08:16:20 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A513085A580;
 Tue, 14 Jun 2022 12:16:19 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6CA1A492CA2;
 Tue, 14 Jun 2022 12:16:19 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 6F585180062D; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cecb6f1b-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208983;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PVva2fRC3vGqvRDPoZrUChjHwOIX197Reg0ezh6aFRw=;
	b=hGTCtB7IjZbhA0aoXp8rLCQNRfd8XQ90ulnxua1YeNj3lyu1RUt/ESBjnwxL73yo3NsDXL
	wN5Q3YlYxOHwVwMLNczsQI1/gxmXB+fJEoG8o/Iy4pnQnqoWY+uWE40DyRmzDOI+idAGph
	bRAN8e341aQij1VMGxRlb1oQVRsXeO8=
X-MC-Unique: j3ECH_JHNIaqhgvfOeUaig-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 09/15] MAINTAINERS: add myself as CanoKey maintainer
Date: Tue, 14 Jun 2022 14:16:04 +0200
Message-Id: <20220614121610.508356-10-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9

From: "Hongren (Zenithal) Zheng" <i@zenithal.me>

Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Message-Id: <YoY61xI0IcFT1fOP@Sun>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0df25ed4b0a3..4cf6174f9f37 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2427,6 +2427,14 @@ F: hw/intc/s390_flic*.c
 F: include/hw/s390x/s390_flic.h
 L: qemu-s390x@nongnu.org
 
+CanoKey
+M: Hongren (Zenithal) Zheng <i@zenithal.me>
+S: Maintained
+R: Canokeys.org <contact@canokeys.org>
+F: hw/usb/canokey.c
+F: hw/usb/canokey.h
+F: docs/system/devices/canokey.rst
+
 Subsystems
 ----------
 Overall Audio backends
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348900.575158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V2-0005dG-Mf; Tue, 14 Jun 2022 12:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348900.575158; Tue, 14 Jun 2022 12:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V2-0005d9-Jl; Tue, 14 Jun 2022 12:18:24 +0000
Received: by outflank-mailman (input) for mailman id 348900;
 Tue, 14 Jun 2022 12:18:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TB-0008Ek-VX
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:30 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d067031f-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:27 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-253-9-V6lrRoPN-wePOBAR4HAQ-1; Tue, 14 Jun 2022 08:16:25 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4430B185A7B2;
 Tue, 14 Jun 2022 12:16:24 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 009C72166B26;
 Tue, 14 Jun 2022 12:16:24 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id D094A180079D; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d067031f-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208986;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fjkT/ghvgIP+ASWCtRqN8cFZ3mDw+gAHZxFgoi4zHGM=;
	b=VgxUziE8uhLC+rQ0gEU5G9yqj1A8dNwFl4K/3PynYs4kaW2OCp8aV4XjFUNKHTY6uEMYsr
	fAP+U0U5nwu6aBES20zOKJ1h1R8tHVLMoy2WZ3JnlNlPznFWCb9nE59LqUEZ8Mb8U7lFm3
	d/g7dIlu7FlYxru1ina1FXhhYSDP35M=
X-MC-Unique: 9-V6lrRoPN-wePOBAR4HAQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 15/15] virtio-gpu: Respect UI refresh rate for EDID
Date: Tue, 14 Jun 2022 14:16:10 +0200
Message-Id: <20220614121610.508356-16-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.6

From: Akihiko Odaki <akihiko.odaki@gmail.com>

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-4-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/hw/virtio/virtio-gpu.h | 1 +
 hw/display/virtio-gpu-base.c   | 1 +
 hw/display/virtio-gpu.c        | 1 +
 3 files changed, 3 insertions(+)

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index afff9e158e31..2e28507efe21 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -80,6 +80,7 @@ struct virtio_gpu_scanout {
 struct virtio_gpu_requested_state {
     uint16_t width_mm, height_mm;
     uint32_t width, height;
+    uint32_t refresh_rate;
     int x, y;
 };
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index b21d6e5b0be8..a29f191aa82e 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -79,6 +79,7 @@ static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     g->req_state[idx].x = info->xoff;
     g->req_state[idx].y = info->yoff;
+    g->req_state[idx].refresh_rate = info->refresh_rate;
     g->req_state[idx].width = info->width;
     g->req_state[idx].height = info->height;
     g->req_state[idx].width_mm = info->width_mm;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 55c6dd576318..20cc703dcc6e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -217,6 +217,7 @@ virtio_gpu_generate_edid(VirtIOGPU *g, int scanout,
         .height_mm = b->req_state[scanout].height_mm,
         .prefx = b->req_state[scanout].width,
         .prefy = b->req_state[scanout].height,
+        .refresh_rate = b->req_state[scanout].refresh_rate,
     };
 
     edid->size = cpu_to_le32(sizeof(edid->edid));
-- 
2.36.1
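[Archive note] The three one-line hunks above thread the UI's refresh rate from QemuUIInfo through req_state into the EDID parameters, so the generated EDID's preferred mode reflects the host display instead of a fixed default. The data flow, with simplified stand-in structures (all names below are illustrative, not the QEMU types):

```python
from dataclasses import dataclass

@dataclass
class UIInfo:            # stand-in for QemuUIInfo
    width: int
    height: int
    refresh_rate: int

@dataclass
class ReqState:          # stand-in for virtio_gpu_requested_state
    width: int = 0
    height: int = 0
    refresh_rate: int = 0   # the field added by this patch

def ui_info_to_req_state(info: UIInfo, rs: ReqState) -> None:
    # Mirrors virtio_gpu_ui_info(): copy the new refresh_rate field too.
    rs.width = info.width
    rs.height = info.height
    rs.refresh_rate = info.refresh_rate

def edid_params(rs: ReqState) -> dict:
    # Mirrors virtio_gpu_generate_edid(): the preferred mode now carries
    # the requested refresh rate instead of an implicit default.
    return {"prefx": rs.width, "prefy": rs.height,
            "refresh_rate": rs.refresh_rate}

rs = ReqState()
ui_info_to_req_state(UIInfo(1920, 1080, 144), rs)
print(edid_params(rs))
```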



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348902.575170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V5-0005vU-Vl; Tue, 14 Jun 2022 12:18:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348902.575170; Tue, 14 Jun 2022 12:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V5-0005vN-SG; Tue, 14 Jun 2022 12:18:27 +0000
Received: by outflank-mailman (input) for mailman id 348902;
 Tue, 14 Jun 2022 12:18:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TE-0008Ek-0O
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1a756ad-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-638-SPez3mJKOCOnjtiYtyhytA-1; Tue, 14 Jun 2022 08:16:23 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C031F811E75;
 Tue, 14 Jun 2022 12:16:22 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 87A64492CA2;
 Tue, 14 Jun 2022 12:16:22 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id A63A51800795; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1a756ad-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208988;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hTTJvnegxxvfJsWseZ7+SmUjba5GRnY9ME/e3uOvaP0=;
	b=fM0UcOn3HF8EuPfT/s9sr9Cy2VW8NGbn3nnyQMVRu0zkJ/xs3WY01nkNh5ZEOvng9YPeHy
	O+3ry2BTSPrfRkdfnV76RPVl+66eXydPINuOucAw13NtKxcwPhXbgCA+0GH4FdJTgu417+
	DP8T0sXmlxEjh/dcHpmOM33rCe+9qhI=
X-MC-Unique: SPez3mJKOCOnjtiYtyhytA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 13/15] ui/console: Do not return a value with ui_info
Date: Tue, 14 Jun 2022 14:16:08 +0200
Message-Id: <20220614121610.508356-14-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9

From: Akihiko Odaki <akihiko.odaki@gmail.com>

The returned value is not used and is misleading.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-2-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h         | 2 +-
 hw/display/virtio-gpu-base.c | 6 +++---
 hw/display/virtio-vga.c      | 5 ++---
 hw/vfio/display.c            | 8 +++-----
 4 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index c44b28a972ca..642d6f5248cf 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -432,7 +432,7 @@ typedef struct GraphicHwOps {
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
     void (*update_interval)(void *opaque, uint64_t interval);
-    int (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
+    void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
 
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 790cec333c8c..b21d6e5b0be8 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -69,12 +69,12 @@ static void virtio_gpu_notify_event(VirtIOGPUBase *g, uint32_t event_type)
     virtio_notify_config(&g->parent_obj);
 }
 
-static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOGPUBase *g = opaque;
 
     if (idx >= g->conf.max_outputs) {
-        return -1;
+        return;
     }
 
     g->req_state[idx].x = info->xoff;
@@ -92,7 +92,7 @@ static int virtio_gpu_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 
     /* send event to guest */
     virtio_gpu_notify_event(g, VIRTIO_GPU_EVENT_DISPLAY);
-    return 0;
+    return;
 }
 
 static void
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index c206b5da384b..4dcb34c4a740 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -47,15 +47,14 @@ static void virtio_vga_base_text_update(void *opaque, console_ch_t *chardata)
     }
 }
 
-static int virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
+static void virtio_vga_base_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     VirtIOVGABase *vvga = opaque;
     VirtIOGPUBase *g = vvga->vgpu;
 
     if (g->hw_ops->ui_info) {
-        return g->hw_ops->ui_info(g, idx, info);
+        g->hw_ops->ui_info(g, idx, info);
     }
-    return -1;
 }
 
 static void virtio_vga_base_gl_block(void *opaque, bool block)
diff --git a/hw/vfio/display.c b/hw/vfio/display.c
index 89bc90508fb8..78f4d82c1c35 100644
--- a/hw/vfio/display.c
+++ b/hw/vfio/display.c
@@ -106,14 +106,14 @@ err:
     return;
 }
 
-static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
-                                     QemuUIInfo *info)
+static void vfio_display_edid_ui_info(void *opaque, uint32_t idx,
+                                      QemuUIInfo *info)
 {
     VFIOPCIDevice *vdev = opaque;
     VFIODisplay *dpy = vdev->dpy;
 
     if (!dpy->edid_regs) {
-        return 0;
+        return;
     }
 
     if (info->width && info->height) {
@@ -121,8 +121,6 @@ static int vfio_display_edid_ui_info(void *opaque, uint32_t idx,
     } else {
         vfio_display_edid_update(vdev, false, 0, 0);
     }
-
-    return 0;
 }
 
 static void vfio_display_edid_init(VFIOPCIDevice *vdev)
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348907.575181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V7-0006Cu-8R; Tue, 14 Jun 2022 12:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348907.575181; Tue, 14 Jun 2022 12:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V7-0006Cn-4D; Tue, 14 Jun 2022 12:18:29 +0000
Received: by outflank-mailman (input) for mailman id 348907;
 Tue, 14 Jun 2022 12:18:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TA-0008Ek-VC
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:29 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf5a055f-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:26 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-526-ShlgDFknNeqfm4GcSjjXPw-1; Tue, 14 Jun 2022 08:16:21 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AF5B0185A7B2;
 Tue, 14 Jun 2022 12:16:20 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 737951121319;
 Tue, 14 Jun 2022 12:16:20 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 7B3B1180062F; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf5a055f-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208984;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=obSiKgR2g439N1NCymTkGuD5ZtC+AWQDUPf1CsA55V4=;
	b=QiDiqnXQIkJSVpc0dFJ8BUa3AsWf4q0hc1vNipk51ZxvPoiRIOsRJRGjp/Bi9wu0jCAqWE
	Mt9FOo6nf3ebpyILAmKPa13btdAPHrus2vDeIqoIILmuV4oe51iMqUhBa84SXu3xGKdYdl
	NTyUQF788Fn744Sqp3V8x5qrpwPtsus=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208984;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=obSiKgR2g439N1NCymTkGuD5ZtC+AWQDUPf1CsA55V4=;
	b=QiDiqnXQIkJSVpc0dFJ8BUa3AsWf4q0hc1vNipk51ZxvPoiRIOsRJRGjp/Bi9wu0jCAqWE
	Mt9FOo6nf3ebpyILAmKPa13btdAPHrus2vDeIqoIILmuV4oe51iMqUhBa84SXu3xGKdYdl
	NTyUQF788Fn744Sqp3V8x5qrpwPtsus=
X-MC-Unique: ShlgDFknNeqfm4GcSjjXPw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Arnout Engelen <arnout@bzzt.net>
Subject: [PULL 10/15] hw/usb/hcd-ehci: fix writeback order
Date: Tue, 14 Jun 2022 14:16:05 +0200
Message-Id: <20220614121610.508356-11-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3

From: Arnout Engelen <arnout@bzzt.net>

The 'active' bit passes control over a qTD between the guest and the
controller: the guest sets it to '1' to enable execution by the
controller, and the controller sets it to '0' to hand control back to
the guest.

ehci_state_writeback() writes two dwords back to guest memory using
DMA: the third dword of the qTD (containing dt, total bytes to
transfer, cpage, cerr and status) and the fourth dword of the qTD
(containing the offset).

This commit makes sure the fourth dword is written before the third,
avoiding a race condition where a new offset, written into the qTD by
the guest after it observed the status going to '0', gets overwritten
by a 'late' DMA writeback of the previous offset.

This race condition could lead to 'cpage out of range (5)' errors,
and can be reproduced with:

./qemu-system-x86_64 -enable-kvm -bios $SEABIOS/bios.bin -m 4096 -device usb-ehci -blockdev driver=file,read-only=on,filename=/home/aengelen/Downloads/openSUSE-Tumbleweed-DVD-i586-Snapshot20220428-Media.iso,node-name=iso -device usb-storage,drive=iso,bootindex=0 -chardev pipe,id=shell,path=/tmp/pipe -device virtio-serial -device virtconsole,chardev=shell -device virtio-rng-pci -serial mon:stdio -nographic

(press a key, select 'Installation' (2), and accept the default
values. On my machine the 'cpage out of range' error is reproduced
while loading the Linux kernel about once every 7 attempts. With the
fix in this commit it no longer fails.)

This problem was previously reported as a seabios problem in
https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/OUTHT5ISSQJGXPNTUPY3O5E5EPZJCHM3/
and as a nixos CI build failure in
https://github.com/NixOS/nixpkgs/issues/170803

Signed-off-by: Arnout Engelen <arnout@bzzt.net>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/hcd-ehci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index 33a8a377bd95..d4da8dcb8d15 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2011,7 +2011,10 @@ static int ehci_state_writeback(EHCIQueue *q)
     ehci_trace_qtd(q, NLPTR_GET(p->qtdaddr), (EHCIqtd *) &q->qh.next_qtd);
     qtd = (uint32_t *) &q->qh.next_qtd;
     addr = NLPTR_GET(p->qtdaddr);
-    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 2);
+    /* First write back the offset */
+    put_dwords(q->ehci, addr + 3 * sizeof(uint32_t), qtd + 3, 1);
+    /* Then write back the token, clearing the 'active' bit */
+    put_dwords(q->ehci, addr + 2 * sizeof(uint32_t), qtd + 2, 1);
     ehci_free_packet(p);
 
     /*
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348912.575192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V9-0006Xi-P1; Tue, 14 Jun 2022 12:18:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348912.575192; Tue, 14 Jun 2022 12:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15V9-0006XQ-Lh; Tue, 14 Jun 2022 12:18:31 +0000
Received: by outflank-mailman (input) for mailman id 348912;
 Tue, 14 Jun 2022 12:18:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15Tw-0008Ek-G8
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:17:16 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2e5e89e-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:58 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-211-09Nq8XZROquC41wzMebuLw-1; Tue, 14 Jun 2022 08:16:54 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 961E02999B20;
 Tue, 14 Jun 2022 12:16:53 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 614741415103;
 Tue, 14 Jun 2022 12:16:53 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 282B41800084; Tue, 14 Jun 2022 14:16:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2e5e89e-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655209017;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nXD25YF84dW7uBlFN8SkZqBu0jvpmi2aaYUiV48m/S4=;
	b=Rk2hD5gvCICUCQZEBspg0O4SFwR6aSrjU9kVRqjjBUlZ3cvM1VbUC8yti1tNWQAVgwX8Un
	P8oP3kh4LJW/98O1Qp5xzUS9zQOMo+iPHmXeW96E9fxKf5lvqcTxrU2PsoTPEmjXDMRXwZ
	SELKFBnVtHPcmaK1ZquNstDIsqyG8hk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655209017;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nXD25YF84dW7uBlFN8SkZqBu0jvpmi2aaYUiV48m/S4=;
	b=Rk2hD5gvCICUCQZEBspg0O4SFwR6aSrjU9kVRqjjBUlZ3cvM1VbUC8yti1tNWQAVgwX8Un
	P8oP3kh4LJW/98O1Qp5xzUS9zQOMo+iPHmXeW96E9fxKf5lvqcTxrU2PsoTPEmjXDMRXwZ
	SELKFBnVtHPcmaK1ZquNstDIsqyG8hk=
X-MC-Unique: 09Nq8XZROquC41wzMebuLw-1
Date: Tue, 14 Jun 2022 14:16:52 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@gmail.com>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Canokeys.org" <contact@canokeys.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PULL 00/16] Kraxel 20220613 patches
Message-ID: <20220614121652.6rzwet6cvhupamkv@sirius.home.kraxel.org>
References: <20220613113655.3693872-1-kraxel@redhat.com>
 <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <37f8f623-bb1c-899b-5801-79acd6185c6d@linaro.org>
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

On Mon, Jun 13, 2022 at 08:52:21AM -0700, Richard Henderson wrote:
> On 6/13/22 04:36, Gerd Hoffmann wrote:
> > The following changes since commit dcb40541ebca7ec98a14d461593b3cd7282b4fac:
> > 
> >    Merge tag 'mips-20220611' of https://github.com/philmd/qemu into staging (2022-06-11 21:13:27 -0700)
> > 
> > are available in the Git repository at:
> > 
> >    git://git.kraxel.org/qemu tags/kraxel-20220613-pull-request
> > 
> > for you to fetch changes up to 23b87f7a3a13e93e248eef8a4b7257548855a620:
> > 
> >    ui: move 'pc-bios/keymaps' to 'ui/keymaps' (2022-06-13 10:59:25 +0200)
> > 
> > ----------------------------------------------------------------
> > usb: add CanoKey device, fixes for ehci + redir
> > ui: fixes for gtk and cocoa, move keymaps (v2), rework refresh rate
> > virtio-gpu: scanout flush fix
> 
> This doesn't even configure:
> 
> ../src/ui/keymaps/meson.build:55:4: ERROR: File ar does not exist.

dropped keymaps patch for now, new version sent.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348936.575203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15VO-0007LJ-23; Tue, 14 Jun 2022 12:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348936.575203; Tue, 14 Jun 2022 12:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15VN-0007Kz-Uc; Tue, 14 Jun 2022 12:18:45 +0000
Received: by outflank-mailman (input) for mailman id 348936;
 Tue, 14 Jun 2022 12:18:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TD-0008Ek-24
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d08ed4dc-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-604-wdmex2yGOJ-VOeDajZfdxQ-1; Tue, 14 Jun 2022 08:16:22 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 28842801E6B;
 Tue, 14 Jun 2022 12:16:22 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id DA58A1402403;
 Tue, 14 Jun 2022 12:16:21 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 936FD1800634; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d08ed4dc-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208986;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nK8ui8MGXo28dreQTzjS6bbhtNnN1TEyDDyCd/Q1Dgo=;
	b=QaGyse8WQpqV+Fb1sk4omhhfdkGDM2MjkpCujnwXgfqDaF53IoaW1Y3ohMRe9lkb03JjV7
	P1QmkJ+sn3HZzAfAEd+LKFv2NL86MuF7f00rPmSNlLrG7bvPWWeusZ24dlmqjKsdnxRYnm
	HXaRAIHc/wuvMMnap7J+T0Jw1YTou+A=
X-MC-Unique: wdmex2yGOJ-VOeDajZfdxQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Dongwon Kim <dongwon.kim@intel.com>,
	Vivek Kasireddy <vivek.kasireddy@intel.com>
Subject: [PULL 12/15] virtio-gpu: update done only on the scanout associated with rect
Date: Tue, 14 Jun 2022 14:16:07 +0200
Message-Id: <20220614121610.508356-13-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7

From: Dongwon Kim <dongwon.kim@intel.com>

Only the scanouts whose area fully contains the rect coming with the
guest's resource-flush request need to be updated.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Message-Id: <20220505214030.4261-1-dongwon.kim@intel.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/display/virtio-gpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index cd4a56056fd9..55c6dd576318 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -514,6 +514,9 @@ static void virtio_gpu_resource_flush(VirtIOGPU *g,
         for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
             scanout = &g->parent_obj.scanout[i];
             if (scanout->resource_id == res->resource_id &&
+                rf.r.x >= scanout->x && rf.r.y >= scanout->y &&
+                rf.r.x + rf.r.width <= scanout->x + scanout->width &&
+                rf.r.y + rf.r.height <= scanout->y + scanout->height &&
                 console_has_gl(scanout->con)) {
                 dpy_gl_update(scanout->con, 0, 0, scanout->width,
                               scanout->height);
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:18:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:18:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348940.575214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15VP-0007fV-F4; Tue, 14 Jun 2022 12:18:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348940.575214; Tue, 14 Jun 2022 12:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15VP-0007eh-8j; Tue, 14 Jun 2022 12:18:47 +0000
Received: by outflank-mailman (input) for mailman id 348940;
 Tue, 14 Jun 2022 12:18:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TF-0008Ek-27
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d18fc4cf-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:30 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-223-Wfr5vr_lM5eHXKXnMWOPLw-1; Tue, 14 Jun 2022 08:16:25 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CE1AD299E753;
 Tue, 14 Jun 2022 12:16:23 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 5DC97202699A;
 Tue, 14 Jun 2022 12:16:23 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id BEF1F1800799; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d18fc4cf-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208988;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ebiRccH2awWxGwAppvEzoFJ/M1x++CURc9+zFzjg06Y=;
	b=cdEHgiUlJr/ftFnmDkEMFMJRp+OODbF4jE9u4pjJ8J+ZIA36MShHXUu+voydYdBn2q1IZ3
	Rulq2uCMbSivnMhdgfTp2ELon7At9OMtlXu7L+80cj6C89e0NRMnQA6+kHuxEo9PruVIXU
	Gjs1MDEiq81/yNRAtQdgvjnSwQot/Sg=
X-MC-Unique: Wfr5vr_lM5eHXKXnMWOPLw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [PULL 14/15] ui: Deliver refresh rate via QemuUIInfo
Date: Tue, 14 Jun 2022 14:16:09 +0200
Message-Id: <20220614121610.508356-15-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4

From: Akihiko Odaki <akihiko.odaki@gmail.com>

This change adds a new member, refresh_rate, to QemuUIInfo in
include/ui/console.h. It represents the refresh rate of the
physical display backend, which is a more appropriate value for the
refresh rate reported by the emulated device than the GUI update
interval:
- sdl may set the GUI update interval shorter than the refresh rate
  of the physical display to respond to user-generated events.
- sdl and vnc aggressively change the GUI update interval, but
  a guest is typically not designed to respond to frequent
  refresh rate changes, or to frequent "display mode" changes in
  general. The frequency of refresh rate changes of the physical
  display backend is a better match for the guest's expectations.

QemuUIInfo also has other members representing the "display mode",
which makes it a natural place to carry the refresh rate. Its update
notifications are throttled, which prevents frequent changes of the
display mode.

Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220226115516.59830-3-akihiko.odaki@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/console.h |  2 +-
 include/ui/gtk.h     |  2 +-
 hw/display/xenfb.c   | 14 +++++++++++---
 ui/console.c         |  6 ------
 ui/gtk-egl.c         |  4 ++--
 ui/gtk-gl-area.c     |  3 +--
 ui/gtk.c             | 45 +++++++++++++++++++++++++-------------------
 7 files changed, 42 insertions(+), 34 deletions(-)

diff --git a/include/ui/console.h b/include/ui/console.h
index 642d6f5248cf..b64d82436097 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -139,6 +139,7 @@ typedef struct QemuUIInfo {
     int       yoff;
     uint32_t  width;
     uint32_t  height;
+    uint32_t  refresh_rate;
 } QemuUIInfo;
 
 /* cursor data format is 32bit RGBA */
@@ -431,7 +432,6 @@ typedef struct GraphicHwOps {
     void (*gfx_update)(void *opaque);
     bool gfx_update_async; /* if true, calls graphic_hw_update_done() */
     void (*text_update)(void *opaque, console_ch_t *text);
-    void (*update_interval)(void *opaque, uint64_t interval);
     void (*ui_info)(void *opaque, uint32_t head, QemuUIInfo *info);
     void (*gl_block)(void *opaque, bool block);
 } GraphicHwOps;
diff --git a/include/ui/gtk.h b/include/ui/gtk.h
index 101b147d1b98..ae0f53740d19 100644
--- a/include/ui/gtk.h
+++ b/include/ui/gtk.h
@@ -155,7 +155,7 @@ extern bool gtk_use_gl_area;
 
 /* ui/gtk.c */
 void gd_update_windowsize(VirtualConsole *vc);
-int gd_monitor_update_interval(GtkWidget *widget);
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget);
 void gd_hw_gl_flushed(void *vc);
 
 /* ui/gtk-egl.c */
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index cea10fe3c780..50857cd97a0b 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -777,16 +777,24 @@ static void xenfb_update(void *opaque)
     xenfb->up_fullscreen = 0;
 }
 
-static void xenfb_update_interval(void *opaque, uint64_t interval)
+static void xenfb_ui_info(void *opaque, uint32_t idx, QemuUIInfo *info)
 {
     struct XenFB *xenfb = opaque;
+    uint32_t refresh_rate;
 
     if (xenfb->feature_update) {
 #ifdef XENFB_TYPE_REFRESH_PERIOD
         if (xenfb_queue_full(xenfb)) {
             return;
         }
-        xenfb_send_refresh_period(xenfb, interval);
+
+        refresh_rate = info->refresh_rate;
+        if (!refresh_rate) {
+            refresh_rate = 75;
+        }
+
+        /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+        xenfb_send_refresh_period(xenfb, 1000 * 1000 / refresh_rate);
 #endif
     }
 }
@@ -983,5 +991,5 @@ struct XenDevOps xen_framebuffer_ops = {
 static const GraphicHwOps xenfb_ops = {
     .invalidate  = xenfb_invalidate,
     .gfx_update  = xenfb_update,
-    .update_interval = xenfb_update_interval,
+    .ui_info     = xenfb_ui_info,
 };
diff --git a/ui/console.c b/ui/console.c
index 36c80cd1de85..9331b85203a0 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -160,7 +160,6 @@ static void gui_update(void *opaque)
     uint64_t dcl_interval;
     DisplayState *ds = opaque;
     DisplayChangeListener *dcl;
-    QemuConsole *con;
 
     ds->refreshing = true;
     dpy_refresh(ds);
@@ -175,11 +174,6 @@ static void gui_update(void *opaque)
     }
     if (ds->update_interval != interval) {
         ds->update_interval = interval;
-        QTAILQ_FOREACH(con, &consoles, next) {
-            if (con->hw_ops->update_interval) {
-                con->hw_ops->update_interval(con->hw, interval);
-            }
-        }
         trace_console_refresh(interval);
     }
     ds->last_update = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
diff --git a/ui/gtk-egl.c b/ui/gtk-egl.c
index e3bd4bc27431..b5bffbab2522 100644
--- a/ui/gtk-egl.c
+++ b/ui/gtk-egl.c
@@ -140,8 +140,8 @@ void gd_egl_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(
+            vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.esurface) {
         gd_egl_init(vc);
diff --git a/ui/gtk-gl-area.c b/ui/gtk-gl-area.c
index 2e0129c28cd4..682638a197d2 100644
--- a/ui/gtk-gl-area.c
+++ b/ui/gtk-gl-area.c
@@ -121,8 +121,7 @@ void gd_gl_area_refresh(DisplayChangeListener *dcl)
 {
     VirtualConsole *vc = container_of(dcl, VirtualConsole, gfx.dcl);
 
-    vc->gfx.dcl.update_interval = gd_monitor_update_interval(
-            vc->window ? vc->window : vc->gfx.drawing_area);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : vc->gfx.drawing_area);
 
     if (!vc->gfx.gls) {
         if (!gtk_widget_get_realized(vc->gfx.drawing_area)) {
diff --git a/ui/gtk.c b/ui/gtk.c
index c57c36749e0e..2a791dd2aa04 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -710,11 +710,20 @@ static gboolean gd_window_close(GtkWidget *widget, GdkEvent *event,
     return TRUE;
 }
 
-static void gd_set_ui_info(VirtualConsole *vc, gint width, gint height)
+static void gd_set_ui_refresh_rate(VirtualConsole *vc, int refresh_rate)
 {
     QemuUIInfo info;
 
-    memset(&info, 0, sizeof(info));
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
+    info.refresh_rate = refresh_rate;
+    dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
+}
+
+static void gd_set_ui_size(VirtualConsole *vc, gint width, gint height)
+{
+    QemuUIInfo info;
+
+    info = *dpy_get_ui_info(vc->gfx.dcl.con);
     info.width = width;
     info.height = height;
     dpy_set_ui_info(vc->gfx.dcl.con, &info, true);
@@ -738,33 +747,32 @@ static void gd_resize_event(GtkGLArea *area,
 {
     VirtualConsole *vc = (void *)opaque;
 
-    gd_set_ui_info(vc, width, height);
+    gd_set_ui_size(vc, width, height);
 }
 
 #endif
 
-/*
- * If available, return the update interval of the monitor in ms,
- * else return 0 (the default update interval).
- */
-int gd_monitor_update_interval(GtkWidget *widget)
+void gd_update_monitor_refresh_rate(VirtualConsole *vc, GtkWidget *widget)
 {
 #ifdef GDK_VERSION_3_22
     GdkWindow *win = gtk_widget_get_window(widget);
+    int refresh_rate;
 
     if (win) {
         GdkDisplay *dpy = gtk_widget_get_display(widget);
         GdkMonitor *monitor = gdk_display_get_monitor_at_window(dpy, win);
-        int refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
-
-        if (refresh_rate) {
-            /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
-            return MIN(1000 * 1000 / refresh_rate,
-                       GUI_REFRESH_INTERVAL_DEFAULT);
-        }
+        refresh_rate = gdk_monitor_get_refresh_rate(monitor); /* [mHz] */
+    } else {
+        refresh_rate = 0;
     }
+
+    gd_set_ui_refresh_rate(vc, refresh_rate);
+
+    /* T = 1 / f = 1 [s*Hz] / f = 1000*1000 [ms*mHz] / f */
+    vc->gfx.dcl.update_interval = refresh_rate ?
+        MIN(1000 * 1000 / refresh_rate, GUI_REFRESH_INTERVAL_DEFAULT) :
+        GUI_REFRESH_INTERVAL_DEFAULT;
 #endif
-    return 0;
 }
 
 static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
@@ -801,8 +809,7 @@ static gboolean gd_draw_event(GtkWidget *widget, cairo_t *cr, void *opaque)
         return FALSE;
     }
 
-    vc->gfx.dcl.update_interval =
-        gd_monitor_update_interval(vc->window ? vc->window : s->window);
+    gd_update_monitor_refresh_rate(vc, vc->window ? vc->window : s->window);
 
     fbw = surface_width(vc->gfx.ds);
     fbh = surface_height(vc->gfx.ds);
@@ -1691,7 +1698,7 @@ static gboolean gd_configure(GtkWidget *widget,
 {
     VirtualConsole *vc = opaque;
 
-    gd_set_ui_info(vc, cfg->width, cfg->height);
+    gd_set_ui_size(vc, cfg->width, cfg->height);
     return FALSE;
 }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:19:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.348956.575224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15Vc-00008W-N2; Tue, 14 Jun 2022 12:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 348956.575224; Tue, 14 Jun 2022 12:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15Vc-000088-KF; Tue, 14 Jun 2022 12:19:00 +0000
Received: by outflank-mailman (input) for mailman id 348956;
 Tue, 14 Jun 2022 12:18:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hf0K=WV=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1o15TK-0008Ek-4I
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:16:38 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d533a08d-ebdb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:16:35 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-277-iSDuPmZxN-GLRmezrfI0Zg-1; Tue, 14 Jun 2022 08:16:21 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 25D9B3C025C4;
 Tue, 14 Jun 2022 12:16:21 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.40])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id DD2C09D7F;
 Tue, 14 Jun 2022 12:16:20 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 870F21800630; Tue, 14 Jun 2022 14:16:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d533a08d-ebdb-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208994;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZStqjDDMtLMpDsz58F8O29JWAJ6F3RkNGXfWxICk7zU=;
	b=RpJLr7OezkVjRLe+lbEbgutREiOb2/wEzx21PzzEhJFGnnCkIYQORXDLX15IdUpWkdDG0e
	1mI5ho30CUVUeU+u+xmMBF3NQS2+WRrqcn2Ga05FO3isvkyvKVL4theqblcu/TB1fMpt/e
	yX0CD2kC5AWFnp2efzAYtSIUg+Kvcuk=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1655208994;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZStqjDDMtLMpDsz58F8O29JWAJ6F3RkNGXfWxICk7zU=;
	b=RpJLr7OezkVjRLe+lbEbgutREiOb2/wEzx21PzzEhJFGnnCkIYQORXDLX15IdUpWkdDG0e
	1mI5ho30CUVUeU+u+xmMBF3NQS2+WRrqcn2Ga05FO3isvkyvKVL4theqblcu/TB1fMpt/e
	yX0CD2kC5AWFnp2efzAYtSIUg+Kvcuk=
X-MC-Unique: iSDuPmZxN-GLRmezrfI0Zg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Hongren (Zenithal) Zheng" <i@zenithal.me>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Canokeys.org" <contact@canokeys.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Joelle van Dyne <j@getutm.app>
Subject: [PULL 11/15] usbredir: avoid queuing hello packet on snapshot restore
Date: Tue, 14 Jun 2022 14:16:06 +0200
Message-Id: <20220614121610.508356-12-kraxel@redhat.com>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
References: <20220614121610.508356-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5

From: Joelle van Dyne <j@getutm.app>

When launching QEMU with "-loadvm", usbredir_create_parser() should avoid
setting up the hello packet (just as with "-incoming"). On the latest version
of libusbredir, usbredirparser_unserialize() will return an error if the
parser is not "pristine."

Signed-off-by: Joelle van Dyne <j@getutm.app>
Message-Id: <20220507041850.98716-1-j@getutm.app>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/usb/redirect.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc0b..1bd30efc3ef0 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1280,7 +1280,8 @@ static void usbredir_create_parser(USBRedirDevice *dev)
     }
 #endif
 
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
+    if (runstate_check(RUN_STATE_INMIGRATE) ||
+        runstate_check(RUN_STATE_PRELAUNCH)) {
         flags |= usbredirparser_fl_no_hello;
     }
     usbredirparser_init(dev->parser, VERSION, caps, USB_REDIR_CAPS_SIZE,
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 12:42:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 12:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349021.575236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15sD-0004aF-OS; Tue, 14 Jun 2022 12:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349021.575236; Tue, 14 Jun 2022 12:42:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o15sD-0004a8-Kk; Tue, 14 Jun 2022 12:42:21 +0000
Received: by outflank-mailman (input) for mailman id 349021;
 Tue, 14 Jun 2022 12:41:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o15rW-0004ZM-Vs
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 12:41:38 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 54555853-ebdf-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 14:41:37 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 78E001650;
 Tue, 14 Jun 2022 05:41:35 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9CDCF3F73B;
 Tue, 14 Jun 2022 05:41:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54555853-ebdf-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 13:41:13 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
Message-ID: <YqiB6YpVqq4wuDtO@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.808451191@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.808451191@infradead.org>

On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
>   * to avoid a deep idle transition as we are about to get the
>   * broadcast IPI right away.
>   */
> -int tick_check_broadcast_expired(void)
> +noinstr int tick_check_broadcast_expired(void)
>  {
> +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> +#else
>  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> +#endif
>  }

This is somewhat not-ideal. :/

Could we unconditionally do the arch_test_bit() variant, with a comment, or
does that not exist in some cases?

Thanks,
Mark.


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 14:36:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 14:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349039.575247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17eM-00087S-67; Tue, 14 Jun 2022 14:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349039.575247; Tue, 14 Jun 2022 14:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17eM-00087L-2T; Tue, 14 Jun 2022 14:36:10 +0000
Received: by outflank-mailman (input) for mailman id 349039;
 Tue, 14 Jun 2022 14:36:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PEcw=WV=suse.com=pmladek@srs-se1.protection.inumbo.net>)
 id 1o17eK-00087F-N0
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 14:36:08 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 519220ca-ebef-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 16:36:06 +0200 (CEST)
Received: from relay2.suse.de (relay2.suse.de [149.44.160.134])
 by smtp-out1.suse.de (Postfix) with ESMTP id 8286121B97;
 Tue, 14 Jun 2022 14:36:03 +0000 (UTC)
Received: from suse.cz (unknown [10.100.201.202])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by relay2.suse.de (Postfix) with ESMTPS id 0C0ED2C142;
 Tue, 14 Jun 2022 14:36:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 519220ca-ebef-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655217363; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7h6hH0gYTond1cgiS/uMW2G4mVmL8aIozW74ouv+gK4=;
	b=B9wTA4hA6Qxez4dhaQq2jaLjcRQqv8/C1d0oKbtJxHWYl3duIpfM3QjZ2J2ZIG1jByzjIk
	ObTWFZbyZcppMylTomPCNwhynw9TlYQGEMrmiuv4k8IcEezBoIE3JYy9Oiijt3mtdocbSR
	YntR36rSYs/Otfz0SDXm/BOtVHJZFAE=
Date: Tue, 14 Jun 2022 16:36:01 +0200
From: Petr Mladek <pmladek@suse.com>
To: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
Cc: bhe@redhat.com, d.hatayama@jp.fujitsu.com,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Mark Rutland <mark.rutland@arm.com>, mikelley@microsoft.com,
	vkuznets@redhat.com, akpm@linux-foundation.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	bcm-kernel-feedback-list@broadcom.com,
	linuxppc-dev@lists.ozlabs.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org,
	linux-hyperv@vger.kernel.org, linux-leds@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-pm@vger.kernel.org, linux-remoteproc@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org,
	linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org,
	netdev@vger.kernel.org, openipmi-developer@lists.sourceforge.net,
	rcu@vger.kernel.org, sparclinux@vger.kernel.org,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	kernel-dev@igalia.com, kernel@gpiccoli.net, halves@canonical.com,
	fabiomirmar@gmail.com, alejandro.j.jimenez@oracle.com,
	andriy.shevchenko@linux.intel.com, arnd@arndb.de, bp@alien8.de,
	corbet@lwn.net, dave.hansen@linux.intel.com, dyoung@redhat.com,
	feng.tang@intel.com, gregkh@linuxfoundation.org,
	hidehiro.kawai.ez@hitachi.com, jgross@suse.com,
	john.ogness@linutronix.de, keescook@chromium.org, luto@kernel.org,
	mhiramat@kernel.org, mingo@redhat.com, paulmck@kernel.org,
	peterz@infradead.org, rostedt@goodmis.org, senozhatsky@chromium.org,
	stern@rowland.harvard.edu, tglx@linutronix.de, vgoyal@redhat.com,
	will@kernel.org
Subject: Re: [PATCH 24/30] panic: Refactor the panic path
Message-ID: <Yqic0R8/UFqTbbMD@alley>
References: <20220427224924.592546-1-gpiccoli@igalia.com>
 <20220427224924.592546-25-gpiccoli@igalia.com>
 <87fskzuh11.fsf@email.froward.int.ebiederm.org>
 <0d084eed-4781-c815-29c7-ac62c498e216@igalia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0d084eed-4781-c815-29c7-ac62c498e216@igalia.com>

On Thu 2022-05-26 13:25:57, Guilherme G. Piccoli wrote:
> OK, so it seems we have some points in which agreement exists, and some
> points that there is no agreement and instead, we have antagonistic /
> opposite views and needs. Let's start with the easier part heh
>
> It seems everybody agrees that *we shouldn't over-engineer things*, and
> as per Eric good words: making the panic path more feature-full or
> increasing flexibility isn't a good idea. So, as a "corollary": the
> panic level approach I'm proposing is not a good fit, I'll drop it and
> let's go with something simpler.

Makes sense.

> Another point of agreement seems to be that _notifier lists in the panic
> path are dangerous_, for *2 different reasons*:
> 
> (a) We cannot guarantee that people won't add crazy callbacks there, we
> can plan and document things the best as possible - it'll never be
> enough, somebody eventually would slip a nonsense callback that would
> break things and defeat the planned purpose of such a list;

It is true that notifier lists might make it easier to add crazy
stuff without proper review. Things added into the core
code would most likely get better review.

But nothing is error-proof. And bugs will happen with any approach.


> (b) As per Eric point, in a panic/crash situation we might have memory
> corruption exactly in the list code / pointers, etc, so the notifier
> lists are, by nature, a bit fragile. But I think we shouldn't consider
> it completely "bollocks", since this approach has been used for a while
> with a good success rate. So, lists aren't perfect at all, but at the
> same time, they aren't completely useless.

I am not able to judge this. Of course, any extra step increases
the risk. I am just not sure how much more complicated it would
be to hardcode the calls. Most of them are architecture
and/or feature specific. And such code is often hard to
review and maintain.

> To avoid using a 4th list,

4th or 5th? We already have "hypervisor", "info", "pre-reboot", and "pre-loop".
The 5th might be pre-crash-exec.

> especially given the list nature is a bit
> fragile, I'd suggest one of the 3 following approaches - I *really
> appreciate feedbacks* on that so I can implement the best solution and
> avoid wasting time in some poor/disliked solution:

Honestly, I am not able to decide what might be better without seeing
the code.

Most things fit pretty well into the 4 proposed lists:
"hypervisor", "info", "pre-reboot", and "pre-loop". IMHO, the
only open question is the code that always needs to be called
even before crash_dump.

I suggest that you solve the crash_dump callbacks the way that
looks best to you. Ideally do it in a separate patch so it can be
reviewed and reworked more easily.

I believe that fresh code with an updated split and simplified
logic would help us move forward.

Best Regards,
Petr


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 14:37:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 14:37:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349047.575257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17fE-0000CT-Fn; Tue, 14 Jun 2022 14:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349047.575257; Tue, 14 Jun 2022 14:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17fE-0000CM-Cb; Tue, 14 Jun 2022 14:37:04 +0000
Received: by outflank-mailman (input) for mailman id 349047;
 Tue, 14 Jun 2022 14:37:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o17fC-0000Ak-7K; Tue, 14 Jun 2022 14:37:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o17fC-0000fU-3B; Tue, 14 Jun 2022 14:37:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o17fB-0001Gz-Fz; Tue, 14 Jun 2022 14:37:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o17fB-0004Ol-FV; Tue, 14 Jun 2022 14:37:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZWC2XlZ6XGQltpFDf+gINKA777VMpDgomZfcfY2mRdA=; b=VlaTP2v/RHYdSxCXn5XzMh7RL3
	ElZy4UPqEIHHGKFsXHlPICopaf/h8jH4URryRYhq8CguK5NdATNu++5yCDWZ5YEj2ezIW6XLCImA/
	LgM/MQNyvEQ1g1haTMSWZZQpL4hF3PvNyf/+l2QuucfObyXnC2U5lfDYjDYtFaV8p9lI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171160: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start.2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=debd0753663bc89c86f5462a53268f2e3f680f60
X-Osstest-Versions-That:
    qemuu=dcb40541ebca7ec98a14d461593b3cd7282b4fac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 14:37:01 +0000

flight 171160 qemu-mainline real [real]
flight 171163 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171160/
http://logs.test-lab.xenproject.org/osstest/logs/171163/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    22 guest-start.2            fail REGR. vs. 171149

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 171163 pass in 171160
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 171163-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 171149
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171149
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171149
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171149
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171149
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171149
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171149
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171149
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171149
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                debd0753663bc89c86f5462a53268f2e3f680f60
baseline version:
 qemuu                dcb40541ebca7ec98a14d461593b3cd7282b4fac

Last test of basis   171149  2022-06-13 00:38:22 Z    1 days
Testing same since   171160  2022-06-14 06:39:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit debd0753663bc89c86f5462a53268f2e3f680f60
Merge: dcb40541eb b56d1ee951
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon Jun 13 21:10:57 2022 -0700

    Merge tag 'pull-testing-next-140622-1' of https://github.com/stsquad/qemu into staging
    
    Various testing fixes:
    
      - fix compiler abi for test-armv6m-undef
      - fix isns suffixes for i386 tcg tests
      - fix gitlab cfi jobs
      - fix makefile docker invocation
      - don't enable xtensa-linux-user builds with system compiler
      - fix CIRRUS_nn var checking
      - don't spam the aarch64/32 runners with too many jobs at once
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmKnxfUACgkQ+9DbCVqe
    # KkSksAf/eXGL/k0zUU4RmxmQGWatCPPfbdxWj/pRDJrZl4cvegYK4cqXcfblDkiJ
    # f5kdB4FsSHgJUKic7K7sBSke2xgoi2bvBMeqNIknLo89b+xXZLJSzTE7XCi0W+hm
    # ll3GtHNJMEPjrIhSIAsqiSMFloL5xi7uz+ylBAB49skGF6F3rkCMv4fl7TDFKqaH
    # y5fRzLZMJg+FhlHNwGO0H8O32ZU7FlyqLGQT3JWZywR0n241kQ+gXLykQjQ7//nd
    # 9EbtppXiSOuusbggGCbmUQrEprW93TAEkgxUcuUuQYiAwDp89s66Q0gcwo1qMtcx
    # mORfc+018/WJpBwFF904hBPPjgO08w==
    # =PfzM
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Mon 13 Jun 2022 04:19:17 PM PDT
    # gpg:                using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
    # gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [undefined]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: 6685 AE99 E751 67BC AFC8  DF35 FBD0 DB09 5A9E 2A44
    
    * tag 'pull-testing-next-140622-1' of https://github.com/stsquad/qemu:
      .gitlab: use less aggressive nproc on our aarch64/32 runners
      gitlab: compare CIRRUS_nn vars against 'null' not ""
      tests/tcg: disable xtensa-linux-user again
      tests/docker: fix the IMAGE for build invocation
      gitlab-ci: Fix the build-cfi-aarch64 and build-cfi-ppc64-s390x jobs
      tests/tcg/i386: Use explicit suffix on fist insns
      test/tcg/arm: Use -mfloat-abi=soft for test-armv6m-undef
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b56d1ee9514be227854a589b4e11551bed4448a0
Author: Alex Bennée <alex.bennee@linaro.org>
Date:   Mon Jun 13 18:12:58 2022 +0100

    .gitlab: use less aggressive nproc on our aarch64/32 runners
    
    Running on all 80 cores of our aarch64 runner does occasionally
    trigger a race condition which fails the build. However, the CI system
    is not the time and place to chase such heisenbugs, so turn down
    the nproc to "only" use 40 cores in the build.
    
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Tested-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-Id: <20220613171258.1905715-8-alex.bennee@linaro.org>
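
    The core-capping described above can be sketched in shell
    (halve_nproc is a hypothetical helper for illustration, not the
    actual .gitlab-ci.d change):

    ```shell
    #!/bin/sh
    # Cap build parallelism at half the detected cores, so an 80-core
    # aarch64 runner builds with -j40 instead of -j80.
    halve_nproc() {
        n=$1
        j=$(( n / 2 ))
        [ "$j" -lt 1 ] && j=1    # never drop below one job
        echo "$j"
    }

    echo "make -j$(halve_nproc "$(nproc)")"
    ```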

commit 34776d80f312f36c8cbdf632337dc087e724b568
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Mon Jun 13 18:12:57 2022 +0100

    gitlab: compare CIRRUS_nn vars against 'null' not ""
    
    The GitLab variable comparisons don't have shell-like semantics, where
    an unset variable compares equal to the empty string. We need to
    explicitly test against 'null' to detect an unset variable.
    
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    Tested-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Thomas Huth <thuth@redhat.com>
    Message-Id: <20220608160651.248781-1-berrange@redhat.com>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20220613171258.1905715-7-alex.bennee@linaro.org>
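
    The comparison the commit describes can be sketched as a GitLab CI
    rule (the variable and job names here are illustrative, not the
    actual QEMU job definitions):

    ```yaml
    # In GitLab CI rules an unset variable does not compare equal to "",
    # so the empty-string check never detects the unset case.
    check-cirrus:
      script: echo "Cirrus variables are configured"
      rules:
        # wrong: true even when the variable was never set
        # - if: '$CIRRUS_GITHUB_REPO != ""'
        # right: compare against 'null' to detect an unset variable
        - if: '$CIRRUS_GITHUB_REPO != "null"'
    ```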

commit c48a5c4741d663a77ab3a2b0c1df3a58de6ee340
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Mon Jun 13 18:12:56 2022 +0100

    tests/tcg: disable xtensa-linux-user again
    
    The move from tests/tcg/configure.sh started enabling the container image
    for xtensa-linux-user, which fails because the compiler does not have
    the full set of headers.  The cause is the "xtensa*-softmmu)" case
    in tests/tcg/configure.sh which became just "xtensa*)" in the new
    probe_target_compiler shell function.  Look out for xtensa*-linux-user
    and do not configure it.
    
    Reported-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <20220608135727.1341946-1-pbonzini@redhat.com>
    Fixes: cd362defbb ("tests/tcg: merge configure.sh back into main configure script")
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-Id: <20220613171258.1905715-6-alex.bennee@linaro.org>
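
    The case-pattern pitfall the commit fixes can be sketched in shell
    (probe_target and the target names are illustrative, not the actual
    configure code):

    ```shell
    #!/bin/sh
    # A bare "xtensa*)" arm also matches xtensa*-linux-user targets, so
    # the linux-user case must be excluded with an explicit earlier arm.
    probe_target() {
        case "$1" in
        xtensa*-linux-user)
            echo skip ;;      # compiler lacks the full linux-user headers
        xtensa*)
            echo enable ;;    # softmmu targets are fine
        *)
            echo other ;;
        esac
    }

    probe_target xtensa-softmmu
    probe_target xtensaeb-linux-user
    ```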

commit ab698a4d8b32be717a675880935c774be66f0d03
Author: Alex Bennée <alex.bennee@linaro.org>
Date:   Mon Jun 13 18:12:55 2022 +0100

    tests/docker: fix the IMAGE for build invocation
    
    We inadvertently broke the ability to run local builds when the code
    was re-factored. The result was the run stanza failing to find the
    docker image with its qemu/ prefix.
    
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Fixes: d39eaa2266 ("tests/docker: simplify docker-TEST@IMAGE targets")
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220613171258.1905715-5-alex.bennee@linaro.org>

commit 72ec89bfc5291b8e322a1a60d96d7f43c0375f16
Author: Thomas Huth <thuth@redhat.com>
Date:   Mon Jun 13 18:12:54 2022 +0100

    gitlab-ci: Fix the build-cfi-aarch64 and build-cfi-ppc64-s390x jobs
    
    The job definitions recently got a second "variables:" section by
    accident and thus are failing now if one tries to run them. Merge
    the two sections into one again to fix the issue.
    
    And while we're at it, bump the timeout here (70 minutes are currently
    not enough for the aarch64 job). The jobs are marked as manual anyway,
    so if the user starts them, they want to see their result for sure, and
    then it's annoying if the job times out too early.
    
    Fixes: e312d1fdbb ("gitlab: convert build/container jobs to .base_job_template")
    Signed-off-by: Thomas Huth <thuth@redhat.com>
    Acked-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220603124809.70794-1-thuth@redhat.com>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220613171258.1905715-4-alex.bennee@linaro.org>
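
    The duplicate-section mistake can be sketched in YAML (job name,
    variable names, and the 90m timeout are illustrative guesses, not the
    actual QEMU job definitions):

    ```yaml
    # broken: the accidental second "variables:" key shadows the first
    # (and stricter parsers reject the duplicate mapping key outright)
    #
    # build-cfi-aarch64:
    #   variables:
    #     IMAGE: fedora
    #   variables:                  # accidental duplicate
    #     EXTRA_CONFIGURE_OPTS: --enable-cfi

    # fixed: one merged section, plus a longer timeout
    build-cfi-aarch64:
      timeout: 90m
      variables:
        IMAGE: fedora
        EXTRA_CONFIGURE_OPTS: --enable-cfi
    ```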

commit 6012d96379873825ab20d629b828e833023feb9d
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon Jun 13 18:12:53 2022 +0100

    tests/tcg/i386: Use explicit suffix on fist insns
    
    Fixes a number of assembler warnings of the form:
    
    test-i386.c: Assembler messages:
    test-i386.c:869: Warning: no instruction mnemonic suffix given
      and no register operands; using default for `fist'
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-Id: <20220527171143.168276-1-richard.henderson@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20220613171258.1905715-3-alex.bennee@linaro.org>

commit b2df786170a4954d6346c284b8351ca79265d190
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon Jun 13 18:12:52 2022 +0100

    test/tcg/arm: Use -mfloat-abi=soft for test-armv6m-undef
    
    GCC11 from crossbuild-essential-armhf from ubuntu 22.04 errors:
    cc1: error: ‘-mfloat-abi=hard’: selected architecture lacks an FPU
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Thomas Huth <thuth@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-Id: <20220604032713.174976-1-richard.henderson@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20220613171258.1905715-2-alex.bennee@linaro.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 14:53:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 14:53:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349057.575300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17uj-0003UF-Gk; Tue, 14 Jun 2022 14:53:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349057.575300; Tue, 14 Jun 2022 14:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o17uj-0003U8-Cv; Tue, 14 Jun 2022 14:53:05 +0000
Received: by outflank-mailman (input) for mailman id 349057;
 Tue, 14 Jun 2022 14:37:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=70GO=WV=goodmis.org=rostedt@kernel.org>)
 id 1o17fz-0000lT-VZ
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 14:37:52 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9015bdba-ebef-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 16:37:49 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 2E406B81865;
 Tue, 14 Jun 2022 14:37:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1D478C3411C;
 Tue, 14 Jun 2022 14:37:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9015bdba-ebef-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 10:37:32 -0400
From: Steven Rostedt <rostedt@goodmis.org>
To: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>, Peter Zijlstra
 <peterz@infradead.org>, ink@jurassic.park.msu.ru, mattst88@gmail.com,
 vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
 linus.walleij@linaro.org, shawnguo@kernel.org, Sascha Hauer
 <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com,
 linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org,
 catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
 bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
 geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
 tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
 James.Bottomley@hansenpartnership.com, deller@gmx.de, mpe@ellerman.id.au,
 benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com,
 palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
 davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
 jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
 srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
 rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
 gregkh@linuxfoundation.org, mturquette@baylibre.com, sboyd@kernel.org,
 daniel.lezcano@linaro.org, lpieralisi@kernel.org, sudeep.holla@arm.com,
 agross@kernel.org, bjorn.andersson@linaro.org, anup@brainfault.org,
 thierry.reding@gmail.com, jonathanh@nvidia.com,
 jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
 yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk, john.ogness@linutronix.de, paulmck@kernel.org,
 frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
 joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
 bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org, linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org, linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org, linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org, rcu@vger.kernel.org
Subject: Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
Message-ID: <20220614103732.489ba62b@gandalf.local.home>
In-Reply-To: <YqHvXFdIJfvUDI6e@alley>
References: <20220608142723.103523089@infradead.org>
	<20220608144517.444659212@infradead.org>
	<YqG6URbihTNCk9YR@alley>
	<YqHFHB6qqv5wiR8t@worktop.programming.kicks-ass.net>
	<CA+_sPaoJGrXhNPCs2dKf2J7u07y1xYrRFZBUtkKwzK9GqcHSuQ@mail.gmail.com>
	<YqHvXFdIJfvUDI6e@alley>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu, 9 Jun 2022 15:02:20 +0200
Petr Mladek <pmladek@suse.com> wrote:

> > I'm somewhat curious whether we can actually remove that trace event.  
> 
> Good question.
> 
> Well, I think that it might be useful. It allows to see trace and
> printk messages together.

Yes, people still use it. I was just asked about it at Kernel Recipes. That
is, someone wanted printk mixed in with the tracing, and I told them about
this event (which they didn't know about but were happy to hear that it
existed).

-- Steve


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:10:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349132.575311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18B5-0005Ft-VJ; Tue, 14 Jun 2022 15:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349132.575311; Tue, 14 Jun 2022 15:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18B5-0005Fm-S6; Tue, 14 Jun 2022 15:09:59 +0000
Received: by outflank-mailman (input) for mailman id 349132;
 Tue, 14 Jun 2022 15:09:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qNKJ=WV=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o18B4-0005Fg-IE
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:09:58 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d529e29-ebf4-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 17:09:57 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A6C4E21B69;
 Tue, 14 Jun 2022 15:09:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4E0A01361C;
 Tue, 14 Jun 2022 15:09:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id n/z8EMSkqGJGVwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 14 Jun 2022 15:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d529e29-ebf4-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655219396; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TBuTSA8mQ0jvxTP5qnDkR0bwrd8FxfdNziVlcfJYYno=;
	b=O1UBCoRPk4dvMZPflj5RmgChdsLFUHirjfqT7qIfr/YxFKzsZpYH4JaoGpv5LJ5mOZM1Ti
	I5vSvZ1F4ZcEZc4FfE8HVPbaB9G5b2ytQIbMXelR34pxj1mB6jbSTZ7TdwDPM4rGMxQk1l
	SHKq3LZprlSAA9UF0cqWaDNc7cIfrZs=
Message-ID: <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
Date: Tue, 14 Jun 2022 17:09:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Content-Language: en-US
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220503132207.17234-2-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------eun63RK740ZvvNDN34swS1W9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------eun63RK740ZvvNDN34swS1W9
Content-Type: multipart/mixed; boundary="------------yoJ6DkuojrwCMWyBgW7a1PaZ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
In-Reply-To: <20220503132207.17234-2-jgross@suse.com>

--------------yoJ6DkuojrwCMWyBgW7a1PaZ
Content-Type: multipart/mixed; boundary="------------fRcA0gQMkBVZfOGxtlWXbzS3"

--------------fRcA0gQMkBVZfOGxtlWXbzS3
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.22 15:22, Juergen Gross wrote:
> x86_has_pat_wp() is using a wrong test, as it relies on the normal
> PAT configuration used by the kernel. In case the PAT MSR has been
> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
> false even if the PAT configuration is allowing WP mappings.
> 
> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   arch/x86/mm/init.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index d8cfce221275..71e182ebced3 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>   /* Check that the write-protect PAT entry is set for write-protect */
>   bool x86_has_pat_wp(void)
>   {
> -	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
> +	return __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
> +	       _PAGE_CACHE_MODE_WP;
>   }
>   
>   enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)

x86 maintainers, please consider taking this patch, as it is fixing
a real bug. Patch 2 of this series can be dropped IMO.


Juergen
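
[Editor's note: to illustrate why the changed test in this patch matters, here is a standalone sketch with made-up, hypothetical table contents. The names mirror, but are not, the kernel's __cachemode2pte_tbl/__pte2cachemode_tbl; the chosen PAT layout simply stands in for one programmed by firmware or a hypervisor rather than by the kernel.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's PAT translation tables, with a
 * firmware-chosen (non-default) PAT layout: illustrative values only. */
enum cachemode { WB, WC, UC_MINUS, UC, WT, WP, NUM_MODES };

/* PAT slot (PTE bit combination 0..7) chosen for each cache mode ... */
static const uint8_t cachemode2pte_tbl[NUM_MODES] = {
	[WB] = 0, [WC] = 5, [UC_MINUS] = 2, [UC] = 3, [WT] = 4, [WP] = 1,
};
/* ... and the cache mode each PAT slot resolves back to. */
static const uint8_t pte2cachemode_tbl[8] = {
	[0] = WB, [1] = WP, [2] = UC_MINUS, [3] = UC,
	[4] = WT, [5] = WC, [6] = WB, [7] = WB,
};

/* Old test: indexes the slot->mode table with a *mode* value (WP == 5),
 * which only works when the tables happen to be an identity mapping. */
static bool has_pat_wp_old(void)
{
	return pte2cachemode_tbl[WP] == WP;
}

/* Fixed test: round-trip mode -> slot -> mode, as the patch does. */
static bool has_pat_wp_new(void)
{
	return pte2cachemode_tbl[cachemode2pte_tbl[WP]] == WP;
}
```

With this layout WP is available (in slot 1), yet the old check looks at slot 5 and reports it missing; the round-trip check gets it right.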
--------------fRcA0gQMkBVZfOGxtlWXbzS3--

--------------yoJ6DkuojrwCMWyBgW7a1PaZ--

--------------eun63RK740ZvvNDN34swS1W9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKopMMFAwAAAAAACgkQsN6d1ii/Ey/V
awf/awBnKJ5Bfc9Aiile6ImR6gl2yoG9pBgK3XGsRukjf+iVwjGGMEKxsWINeQ1SJctM+m8+BtQ6
XgipkOTUXLmGpFJztKUr+RyFj47wxb27YZGIAg2hxE+WK2IGNaGFDY0ksb351HHaTTDt6+SrvemO
YsEQ84/QsrzqWeC3+YgF3/rQZ/FdI3uZ3CzZOxC+xBGnFydhrPFMWRIjO3BqW5TTupTEgjFq0Irb
HvMi+RmoKrODIIzeeBYO379U11HQ+aEq5Dv9RqgVFrSINnPobAZCaB3n6tZl2LedsD4+dx27oD37
A9uzcajGcLtIMSxdvfd+9veNCh3ABcjWuUHWnyIwUQ==
=jzA7
-----END PGP SIGNATURE-----

--------------eun63RK740ZvvNDN34swS1W9--


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:12:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349141.575323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18DS-0006iB-Dg; Tue, 14 Jun 2022 15:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349141.575323; Tue, 14 Jun 2022 15:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18DS-0006i4-9C; Tue, 14 Jun 2022 15:12:26 +0000
Received: by outflank-mailman (input) for mailman id 349141;
 Tue, 14 Jun 2022 15:12:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Dja=WV=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1o18DQ-0006hy-AY
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:12:24 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 639a4456-ebf4-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 17:12:22 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 gc3-20020a17090b310300b001e33092c737so9318983pjb.3
 for <xen-devel@lists.xenproject.org>; Tue, 14 Jun 2022 08:12:22 -0700 (PDT)
Received: from [192.168.159.227] ([50.208.55.229])
 by smtp.gmail.com with ESMTPSA id
 y9-20020a170902864900b00168c1668a49sm7368707plt.85.2022.06.14.08.11.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 14 Jun 2022 08:12:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 639a4456-ebf4-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=YM2YngzMkaGIWHpkf6M7ADa9V1hMmADYOG1MXNhlzWE=;
        b=Hua16gKf/ZBdAKXYPXFTnxhFrivxJk4e10lo8ixeYNmksvupoHrmXJ2RGnzwEJq9iv
         kBa31EaF9odSqcB9dv9APd67sxav2GhXvynfpLM6KVmzCrzPlM35sYCDPOfdKsc8+d/D
         my1JQ0/AMOCSnpUjDjjYrWiNTuNXuxbGtaEpanQNHqAsSfl/85olI3ai83sD21BSd0xn
         NV14E/MWURYOtWYbmQPtZyQXRECtXIv+zkG5/YC6EVzSXvEHSjw0Hn7kyRCtBDmDkn5F
         e/I+I/ofG/P/cmMt2Ekj5q0ZZSMAlnh1BBUogdRk3TRiDOpWzu2Dsh8lpQGinhxeit/C
         s2mw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=YM2YngzMkaGIWHpkf6M7ADa9V1hMmADYOG1MXNhlzWE=;
        b=ZmkMAVOQpDYuwm24hZWpwCzagdsIAqCDmZvJPTJXcHetHIm19N4J8ZklO5IJiAtHlz
         WEu7xdLHLpK3dl0f9wmIeQOwCkInBKxKUsOVRQjrb/k8SLvUEon7Xtz91GaKeRzVz+3l
         lT0ajpIHa3XstwphGMqW//m4PXrlKT/F69wesuueiS1DDuWtkznNNhE0D8a2+yb7qt4o
         rtpWg4s4R4WPu079Syb3S3vlv4zkSV0WX+w9XSyG8C38iR4XgBUDKTsy79+OcMqdTBPi
         wGmA+Xr6yOFnKEjrBXgXwsh02GekijVK/g08/7DkD6vbWLh3o2TPKyUa105h1wNJlstb
         AU9w==
X-Gm-Message-State: AJIora//g5TT0oYX0l734U/QwxSyr950j6BJT9uEkc3GK25ljj3f2MWZ
	y7gSOEz5bt0xiN/gSdXU0O1Nrw==
X-Google-Smtp-Source: AGRyM1tjRZAqa5I7tdwTBMIp6PN2CGs2N7UY/5qPMrgEvGZ2f/pTTPIaidozXKlXn7LAd1/g8WZySA==
X-Received: by 2002:a17:90a:6849:b0:1ea:d05a:223a with SMTP id e9-20020a17090a684900b001ead05a223amr823789pjm.173.1655219541080;
        Tue, 14 Jun 2022 08:12:21 -0700 (PDT)
Message-ID: <ba496d86-3883-c7e2-9e06-76b62e111aa5@linaro.org>
Date: Tue, 14 Jun 2022 08:11:43 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PULL 00/15] Kraxel 20220614 patches
Content-Language: en-US
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@gmail.com>,
 Peter Maydell <peter.maydell@linaro.org>,
 Alex Williamson <alex.williamson@redhat.com>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 "Hongren (Zenithal) Zheng" <i@zenithal.me>,
 "Michael S. Tsirkin" <mst@redhat.com>, "Canokeys.org"
 <contact@canokeys.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20220614121610.508356-1-kraxel@redhat.com>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20220614121610.508356-1-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/14/22 05:15, Gerd Hoffmann wrote:
> The following changes since commit debd0753663bc89c86f5462a53268f2e3f680f60:
> 
>    Merge tag 'pull-testing-next-140622-1' of https://github.com/stsquad/qemu into staging (2022-06-13 21:10:57 -0700)
> 
> are available in the Git repository at:
> 
>    git://git.kraxel.org/qemu tags/kraxel-20220614-pull-request
> 
> for you to fetch changes up to b95b56311a0890da0c9f7fc624529c3d7f8dbce0:
> 
>    virtio-gpu: Respect UI refresh rate for EDID (2022-06-14 10:34:37 +0200)
> 
> ----------------------------------------------------------------
> usb: add CanoKey device, fixes for ehci + redir
> ui: fixes for gtk and cocoa, rework refresh rate
> virtio-gpu: scanout flush fix

Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/7.1 as appropriate.


r~


> 
> ----------------------------------------------------------------
> 
> Akihiko Odaki (4):
>    ui/cocoa: Fix poweroff request code
>    ui/console: Do not return a value with ui_info
>    ui: Deliver refresh rate via QemuUIInfo
>    virtio-gpu: Respect UI refresh rate for EDID
> 
> Arnout Engelen (1):
>    hw/usb/hcd-ehci: fix writeback order
> 
> Dongwon Kim (1):
>    virtio-gpu: update done only on the scanout associated with rect
> 
> Hongren (Zenithal) Zheng (6):
>    hw/usb: Add CanoKey Implementation
>    hw/usb/canokey: Add trace events
>    meson: Add CanoKey
>    docs: Add CanoKey documentation
>    docs/system/devices/usb: Add CanoKey to USB devices examples
>    MAINTAINERS: add myself as CanoKey maintainer
> 
> Joelle van Dyne (1):
>    usbredir: avoid queuing hello packet on snapshot restore
> 
> Volker Rümelin (2):
>    ui/gtk-gl-area: implement GL context destruction
>    ui/gtk-gl-area: create the requested GL context version
> 
>   meson_options.txt                |   2 +
>   hw/usb/canokey.h                 |  69 +++++++
>   include/hw/virtio/virtio-gpu.h   |   1 +
>   include/ui/console.h             |   4 +-
>   include/ui/gtk.h                 |   2 +-
>   hw/display/virtio-gpu-base.c     |   7 +-
>   hw/display/virtio-gpu.c          |   4 +
>   hw/display/virtio-vga.c          |   5 +-
>   hw/display/xenfb.c               |  14 +-
>   hw/usb/canokey.c                 | 313 +++++++++++++++++++++++++++++++
>   hw/usb/hcd-ehci.c                |   5 +-
>   hw/usb/redirect.c                |   3 +-
>   hw/vfio/display.c                |   8 +-
>   ui/console.c                     |   6 -
>   ui/gtk-egl.c                     |   4 +-
>   ui/gtk-gl-area.c                 |  42 ++++-
>   ui/gtk.c                         |  45 +++--
>   MAINTAINERS                      |   8 +
>   docs/system/device-emulation.rst |   1 +
>   docs/system/devices/canokey.rst  | 168 +++++++++++++++++
>   docs/system/devices/usb.rst      |   4 +
>   hw/usb/Kconfig                   |   5 +
>   hw/usb/meson.build               |   5 +
>   hw/usb/trace-events              |  16 ++
>   meson.build                      |   6 +
>   scripts/meson-buildoptions.sh    |   3 +
>   ui/cocoa.m                       |   6 +-
>   ui/trace-events                  |   2 +
>   28 files changed, 707 insertions(+), 51 deletions(-)
>   create mode 100644 hw/usb/canokey.h
>   create mode 100644 hw/usb/canokey.c
>   create mode 100644 docs/system/devices/canokey.rst
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:32:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:32:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349153.575333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18WL-0000vc-WA; Tue, 14 Jun 2022 15:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349153.575333; Tue, 14 Jun 2022 15:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18WL-0000vV-T6; Tue, 14 Jun 2022 15:31:57 +0000
Received: by outflank-mailman (input) for mailman id 349153;
 Tue, 14 Jun 2022 15:31:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qNKJ=WV=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o18WL-0000vP-32
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:31:57 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f3d3e40-ebf7-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 17:31:55 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 53FF821B9F;
 Tue, 14 Jun 2022 15:31:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 21B411361C;
 Tue, 14 Jun 2022 15:31:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ipcIBuupqGKrYQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 14 Jun 2022 15:31:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f3d3e40-ebf7-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655220715; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=v3nOr0/TQ7TPym3sBGUMzU6nbzZrs0YvMJej6BkqOj0=;
	b=EG9+QAEDInNklGzH5J+3OEpiM7Vn53KhK5AKoNZNj7+rB+bo4Yu5gwlSCtiws34BfFYxWL
	P+WI1H1dspbtwututv0Tih0CgsGF3LRUssxiaZdL3OvzcxrPH/jTQojEPZNPQk7iw72iGk
	mmEjGX/s5zNgpnibKfs8LxUKk7cg3z0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/xenstore: simplify loop handling connection I/O
Date: Tue, 14 Jun 2022 17:31:52 +0200
Message-Id: <20220614153152.25919-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The loop handling input and output for xenstored's connections is
open-coding list_for_each_entry_safe() in an unnecessarily complicated
way.

Use list_for_each_entry_safe() instead, making it much clearer how
the code works.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 6e4022e5da..fa733e714e 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2368,16 +2368,8 @@ int main(int argc, char *argv[])
 			}
 		}
 
-		next = list_entry(connections.next, typeof(*conn), list);
-		if (&next->list != &connections)
-			talloc_increase_ref_count(next);
-		while (&next->list != &connections) {
-			conn = next;
-
-			next = list_entry(conn->list.next,
-					  typeof(*conn), list);
-			if (&next->list != &connections)
-				talloc_increase_ref_count(next);
+		list_for_each_entry_safe(conn, next, &connections, list) {
+			talloc_increase_ref_count(conn);
 
 			if (conn_can_read(conn))
 				handle_input(conn);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:33:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:33:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349161.575343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18XX-0001UW-AN; Tue, 14 Jun 2022 15:33:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349161.575343; Tue, 14 Jun 2022 15:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18XX-0001UP-7P; Tue, 14 Jun 2022 15:33:11 +0000
Received: by outflank-mailman (input) for mailman id 349161;
 Tue, 14 Jun 2022 15:33:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o18XW-0001UJ-CV
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:33:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o18XV-0001qN-R4; Tue, 14 Jun 2022 15:33:09 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.23.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o18XV-0000dZ-Io; Tue, 14 Jun 2022 15:33:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OfPHCdnZMQ6ZJuBl/dZXc34n9FyO1XG/uU1BNTxl4zs=; b=mAmg+3WoSRr6r2hXAnJzuccEye
	ghvDwxKyKGLauWfaiRFw5et/KS4ScvUcrPMHEnoKulnUuVOLfUbBVg0r7ns7PU/ehrc0Jr9Nq/jLo
	pfa8sr7xf7sdxq+qBcp8MbVGwvTuM/IwyLBSuJei6i+zZxcdqEsmVHSLBisReCOFQyco=;
Message-ID: <31fcaef6-668f-9923-ea22-8520e70d7d74@xen.org>
Date: Tue, 14 Jun 2022 16:33:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstore: simplify loop handling connection I/O
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20220614153152.25919-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220614153152.25919-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/06/2022 16:31, Juergen Gross wrote:
> The loop handling input and output for xenstored's connections is
> open-coding list_for_each_entry_safe() in an unnecessarily complicated
> way.
> 
> Use list_for_each_entry_safe() instead, making it much clearer how
> the code works.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:40:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349172.575355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18eg-00039w-6r; Tue, 14 Jun 2022 15:40:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349172.575355; Tue, 14 Jun 2022 15:40:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18eg-00039p-3h; Tue, 14 Jun 2022 15:40:34 +0000
Received: by outflank-mailman (input) for mailman id 349172;
 Tue, 14 Jun 2022 15:40:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o18ee-00039i-3U
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:40:32 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 521e92fb-ebf8-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 17:40:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6791.eurprd04.prod.outlook.com (2603:10a6:20b:103::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Tue, 14 Jun
 2022 15:40:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 15:40:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 521e92fb-ebf8-11ec-a26a-b96bd03d9e80
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oWwyGwcuL+wCJ6bbRq+lDvbH9TnOgvfV13rH1Rc8vUX77OlK72oTOoPjdp3gtqzMCeuQVOg6Tqhkjo1eUPl30LPpCW8++SUtuZqXWlCp1ba3/msGJANngmHBc8W52DmDO0lm+UHr+G0FNRmOEki/MnqAeQWoOlJsFZ/y6WUBfNjPSu2Q+6iehpevMFGhj2x7hb24q/NaDmhDnCLdIo0AESohgz9H4N7Zapx7rXZpfi98qIYFHAeM3FxaZrsEBnY4MAiLkN/SrOGLO42XWEdlgPnBkSv88e/T6Fbk9DygES3NDO9pyZJMaQJsV2fkFH+ZdcKdeQwM6j9cULQe9j2bug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5rPrl1eBGcU7xfrrmUE+53DCIfx8v3Nqjcf5+80E71Q=;
 b=K9FeqAxjxVzfiY96Vax5PdD07ycbB183RKmoRZydyRiqA/VsZGCwQmrxlPykedc2gOaOnVALfWkNt2KtLDXYUOHwdVw+BoveglmgqYHPjfeyHBQk/GhrYqP7ZNdMbtdR6NR3O2u1Ji6r3VPkBAusMaj9mbZYZrpBXOJnStLhAr14jVXMta4x2YP1BkdZfZrBkwAsv+w1kXPHi0N3HKXNcg4V4BwNvY161A6AD1elebFgsTyGd+IlpjJsw/AxEP7hKnr/UThLPRgRRjm3hHEt/BE3gWSB20VditiQmSYjcle6iMwKsYH4Wg64qOsPc8nNtKB060Kc56z9WtU0vElz6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5rPrl1eBGcU7xfrrmUE+53DCIfx8v3Nqjcf5+80E71Q=;
 b=oupRs+ysWQ1ohI8IxXrG6APGCMpPX+FvMOeurdLvlOVtB+wwA5j/fDg6qttF4k+8EMnvTXKz3oFycFNFZW5T/htth1vsfr5f/Jxupg+1EjPzoHutGUk1ca1nmzZ+7jdbsmj1qM+3UJ/7dLs1fW4FjNs3sgnACPWt2MUdgeta1+O7rz7F2ABZkFrV+J9alxTlfGt5s9VvX6abyKpTnWgOSNBBT9UQl5JZz2QQzVqfO2/Bc6MSUBWXWA+rk10x8dWMjSwsmYV2TXVLJrKv68nH0yNIPocPsVmy5bpUZLk6ZEHUeqm3Mj/d1gWueCq2wfFJ38qa8LH5hnzvo20cigrhoA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>
Date: Tue, 14 Jun 2022 17:40:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] build: fix exporting for make 3.82
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0337.eurprd06.prod.outlook.com
 (2603:10a6:20b:466::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 78303dcf-1990-45e1-e293-08da4e1c358a
X-MS-TrafficTypeDiagnostic: AM7PR04MB6791:EE_
X-Microsoft-Antispam-PRVS:
	<AM7PR04MB6791FB8B864AAFF78F14E56CB3AA9@AM7PR04MB6791.eurprd04.prod.outlook.com>
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 78303dcf-1990-45e1-e293-08da4e1c358a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 15:40:29.1941
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AJBUvXsHy5shnf9w+sfEPYOXntYtgX9CoSl2t6ghT9NF47x+Sf5iKSkxljEwsN1AiK5V8cQEInm5uf6fAgz8eQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6791

GNU make 3.82 apparently has a quirk where exporting an undefined
variable prevents its value from subsequently being updated. This
situation can arise due to our adding of -rR to MAKEFLAGS, which also
takes effect when make simply re-invokes itself. Once these flags are in
effect, CC (in particular) is empty (undefined), and would be defined
only via Config.mk including StdGNU.mk or the like. With the quirk, CC
remains empty, yet with an empty CC the compiler minimum version check
fails, breaking the build.

Move the exporting of the various tool stack component variables past
where they gain their (final) values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There may be further factors playing into the described quirk, as I've
also observed that simply running make a 2nd time would lead to
successful building of xen/.

While this wasn't a problem until several weeks back, I've not been able
to identify which exact commit would have caused the breakage. Hence no
Fixes: tag.

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -44,8 +44,6 @@ export ARCH SRCARCH
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
 
-export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
-
 export TARGET := xen
 
 .PHONY: dist
@@ -244,6 +242,7 @@ export TARGET_ARCH     := $(shell echo $
                                 -e s'/riscv.*/riscv/g')
 
 export CONFIG_SHELL := $(SHELL)
+export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
 export YACC = $(if $(BISON),$(BISON),bison)
 export LEX = $(if $(FLEX),$(FLEX),flex)
 
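For illustration only (not part of the patch): a minimal sketch of the quirk
being worked around, assuming make 3.82 semantics and a Config.mk that supplies
CC via StdGNU.mk, as the commit message describes:

```make
MAKEFLAGS += -rR    # disable built-in rules/variables; CC starts out undefined,
                    # and the flags also apply when make re-invokes itself

export CC           # quirk: with make 3.82, exporting CC while it is still
                    # undefined can prevent later assignments from taking effect

include Config.mk   # indirectly includes StdGNU.mk, which does e.g.: CC ?= gcc

# Under the quirk $(CC) remains empty at this point, so the compiler minimum
# version check (which needs to invoke the compiler) fails and breaks the build.
# Moving "export CC" below the include, as the patch does, avoids this.
```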


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 15:52:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 15:52:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349183.575365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18pi-0004o5-9X; Tue, 14 Jun 2022 15:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349183.575365; Tue, 14 Jun 2022 15:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18pi-0004ny-6n; Tue, 14 Jun 2022 15:51:58 +0000
Received: by outflank-mailman (input) for mailman id 349183;
 Tue, 14 Jun 2022 15:51:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o18pg-0004ns-Oj
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 15:51:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061d.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea5459e2-ebf9-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 17:51:55 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6895.eurprd04.prod.outlook.com (2603:10a6:803:13b::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Tue, 14 Jun
 2022 15:51:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 15:51:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea5459e2-ebf9-11ec-bd2c-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f1f645ee-9af8-add8-a242-b78f69412353@suse.com>
Date: Tue, 14 Jun 2022 17:51:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] build: fix exporting for make 3.82
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>
In-Reply-To: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR08CA0017.eurprd08.prod.outlook.com
 (2603:10a6:20b:b2::29) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b5319514-2c5a-4f4a-de8c-08da4e1dcd7d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 15:51:53.6349
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: O2qMZC7AvMP5S143kTpxmro9u7lvY2yWGqmZNZkSPgMUT28FYjPgndSoL9xt0aV/yf4+Qu46qYueJJh/f74FZg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6895

On 14.06.2022 17:40, Jan Beulich wrote:
> GNU make 3.82 apparently has a quirk where exporting an undefined
> variable prevents its value from subsequently being updated. This
> situation can arise due to our adding of -rR to MAKEFLAGS, which also
> takes effect when make simply re-invokes itself. Once these flags are in
> effect, CC (in particular) is empty (undefined), and would be defined
> only via Config.mk including StdGNU.mk or the like. With the quirk, CC
> remains empty, yet with an empty CC the compiler minimum version check
> fails, breaking the build.
> 
> Move the exporting of the various tool stack component variables past
> where they gain their (final) values.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> There may be further factors playing into the described quirk, as I've
> also observed that simply running make a 2nd time would lead to
> successful building of xen/.

Albeit perhaps that's simply because then no re-invocation of make is
involved, as auto.conf and auto.conf.cmd then already exist (and are
up-to-date).

Jan

> While this wasn't a problem until several weeks back, I've not been able
> to identify which exact commit would have caused the breakage. Hence no
> Fixes: tag.
> 
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -44,8 +44,6 @@ export ARCH SRCARCH
>  # Allow someone to change their config file
>  export KCONFIG_CONFIG ?= .config
>  
> -export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
> -
>  export TARGET := xen
>  
>  .PHONY: dist
> @@ -244,6 +242,7 @@ export TARGET_ARCH     := $(shell echo $
>                                  -e s'/riscv.*/riscv/g')
>  
>  export CONFIG_SHELL := $(SHELL)
> +export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
>  export YACC = $(if $(BISON),$(BISON),bison)
>  export LEX = $(if $(FLEX),$(FLEX),flex)
>  
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:00:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349193.575376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18yO-0006vE-4Q; Tue, 14 Jun 2022 16:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349193.575376; Tue, 14 Jun 2022 16:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o18yO-0006v7-1H; Tue, 14 Jun 2022 16:00:56 +0000
Received: by outflank-mailman (input) for mailman id 349193;
 Tue, 14 Jun 2022 16:00:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o18yM-0006v1-Ve
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:00:54 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29ce0d41-ebfb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:00:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29ce0d41-ebfb-11ec-bd2c-47488cf2e6aa
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
Date: Tue, 14 Jun 2022 17:00:46 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] build: fix exporting for make 3.82
Message-ID: <Yqiwrt7j1gzYr72n@perard.uk.xensource.com>
References: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>

On Tue, Jun 14, 2022 at 05:40:27PM +0200, Jan Beulich wrote:
> GNU make 3.82 apparently has a quirk where exporting an undefined
> variable prevents its value from subsequently being updated. This
> situation can arise due to our adding of -rR to MAKEFLAGS, which also
> takes effect when make simply re-invokes itself. Once these flags are in
> effect, CC (in particular) is empty (undefined), and would be defined
> only via Config.mk including StdGNU.mk or the like. With the quirk, CC
> remains empty, yet with an empty CC the compiler minimum version check
> fails, breaking the build.
> 
> Move the exporting of the various tool stack component variables past
> where they gain their (final) values.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> There may be further factors playing into the described quirk, as I've
> also observed that simply running make a 2nd time would lead to
> successful building of xen/.
> 
> While this wasn't a problem until several weeks back, I've not been able
> to identify which exact commit would have caused the breakage. Hence no
> Fixes: tag.

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Looks like this happened before: be63d9d47f ("build: tweak variable exporting for make 3.82").
So maybe the issue started again with 15a0578ca4 ("build: shuffle
main Makefile"), which moved the include of Config.mk even later.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:03:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349200.575388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o190r-0007XF-IP; Tue, 14 Jun 2022 16:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349200.575388; Tue, 14 Jun 2022 16:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o190r-0007X8-FJ; Tue, 14 Jun 2022 16:03:29 +0000
Received: by outflank-mailman (input) for mailman id 349200;
 Tue, 14 Jun 2022 16:03:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o190p-0007Wj-VD
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:03:27 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0602.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 866b0389-ebfb-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:03:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8719.eurprd04.prod.outlook.com (2603:10a6:102:21e::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Tue, 14 Jun
 2022 16:03:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 16:03:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 866b0389-ebfb-11ec-bd2c-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ce7d7acf-9ed2-9dcc-34ad-f9b1e3f77d4f@suse.com>
Date: Tue, 14 Jun 2022 18:03:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul/test: improve failure location identification for FMA
 sub-test
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR07CA0008.eurprd07.prod.outlook.com
 (2603:10a6:20b:451::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c616f06d-2b40-4073-6076-08da4e1f6955
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 16:03:24.5908
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NmGM7UZfmHtfPJCs1P+XczIPjHnPa/s/EyYH0pjluFdzoT+AtZz1WJ5Ln3QLqy8zA7zVqry721E7o0L8+wR/rg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8719

When an FMA insn set is included in the base instruction set (XOP,
AVX512F, and AVX512-FP16 at present), simd_test() simply invokes
fma_test(), negating its return value. In case of a failure this yields
a value close to 4G, which doesn't lend itself to easy identification
of the failing test case. Recognize this case in simd_check_regs() and
emit alternative output identifying FMA.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -259,7 +259,10 @@ static bool simd_check_regs(const struct
 {
     if ( !regs->eax )
         return true;
-    printf("[line %u] ", (unsigned int)regs->eax);
+    if ( (int)regs->eax > 0 )
+        printf("[line %u] ", (unsigned int)regs->eax);
+    else
+        printf("[FMA line %u] ", (unsigned int)-regs->eax);
     return false;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:07:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349212.575399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1954-0008Ep-7N; Tue, 14 Jun 2022 16:07:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349212.575399; Tue, 14 Jun 2022 16:07:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1954-0008Ei-44; Tue, 14 Jun 2022 16:07:50 +0000
Received: by outflank-mailman (input) for mailman id 349212;
 Tue, 14 Jun 2022 16:07:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lWYu=WV=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1952-0008Ec-SA
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:07:48 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 218ef254-ebfc-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:07:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB5505.eurprd04.prod.outlook.com (2603:10a6:208:111::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Tue, 14 Jun
 2022 16:07:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 16:07:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 218ef254-ebfc-11ec-a26a-b96bd03d9e80
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O80ui5SagyU/xHSt9lcVMRHyzuBjIHUTRJWxtCS9rMhrWhn+eqBkh2ovxxOBrNyMX6bBlymhFJeFu7sA/stInW4hRdhh6Xn03iFuPyT1kSiZZCjqAFHQAv7m96uQSJK/OKNync8XhoMhVCvcBxnDtI1MW1gg25i7D5xqWe3n3ztf58Felv311zWYhk+2rEIdO2bFVbRA1iXlPgsoFlWZSkggQkkTJM+VdFgD/W3RPF7c4+ne3MX/WFNbG/he+ffSOvR8K6SFynBIRTbxVLL43YokrFgdMM4/JRXtLxGiAxxg8fi0/VpTnqhGrjRw5Dxi3qT9A0EPCRF6nwPKl5mduA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LJK9baG7B8jYSsbe5MfHd8EJ7l5huFSjrZfddjWe46U=;
 b=FtnaRyTokFr7jW6P1nyiavYKvh3h06Vrj+Yd5Zp62SqkC+nd5KF1GrgSaaZOc4J/DYhw4fZjac/I5jav+fVMFS/BBOGzfk0pI7zb/ojnA0iDo24BGvF0tNkB4hrlDQgx30xTDhuLBwgqMUNsVSqtDXqYLxAjNFrbpTlwTRKhhGHKvfwxqkslw+S+yJteTlxnIhP2DtTvA7II0EA/V/w6C3BHHSjhwE+Sa4cNK9zgcxcB9Ygyx+OA5317bM3h9cfOqteaF7OX+4dcbyBG6QI6dPHoYIeCjsPsZsP4O8tPbeQWUSS3I8LGs37BI3Ai+nNFbTKSwghJqV/BqmayO/wXGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LJK9baG7B8jYSsbe5MfHd8EJ7l5huFSjrZfddjWe46U=;
 b=5i4FCR9N4mnzuDGY1Fp8vgYa/mHUn9H+6/ir2Z0QqM18K0Lgfc5x9yMcJ4xGjRoYAcF1mLd0tdEEAxlMce3oq+OPvUJwPlGEjy/A8QSKRZNosUlXmhpSCnT0vl4BdkXZKuVVcbYcIVrlZqfmVgqGp30O5sGeqkqwKJKKMB2/FGZUrA+AXuy8yQ0F9vBM52ed7ul7n5F6zobHjtdthGF/hxloTkHTnDk608C4y+Q+vNTONhxucOwUiEbiBt+nyybyxJgwJsPhR1tyeoLCazcy+oTChpBW/zQorPEfFdAzAH0Lo+/jFT8yj/jKXCXTmskQQJcYIM9/sl2sC8QvhLEV9w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ae030a84-924c-bc1a-5069-f7b92896670e@suse.com>
Date: Tue, 14 Jun 2022 18:07:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] build: fix exporting for make 3.82
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <9282329e-ad08-b6e7-ec9b-7e827a8b66ba@suse.com>
 <Yqiwrt7j1gzYr72n@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yqiwrt7j1gzYr72n@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0301CA0013.eurprd03.prod.outlook.com
 (2603:10a6:206:14::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 16a65e80-1f2a-46de-3917-08da4e200493
X-MS-TrafficTypeDiagnostic: AM0PR04MB5505:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 16a65e80-1f2a-46de-3917-08da4e200493
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2022 16:07:45.0429
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NKf68+QrZmVvlytue+2EeZRejcB/RTvArh1tn1QpUTe93NpX8WT8IcL9LKzjqjH/DgwelnrtSm4QrVZWiiIzcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5505

On 14.06.2022 18:00, Anthony PERARD wrote:
> On Tue, Jun 14, 2022 at 05:40:27PM +0200, Jan Beulich wrote:
>> GNU make 3.82 apparently has a quirk where exporting an undefined
>> variable prevents its value from subsequently being updated. This
>> situation can arise due to our adding of -rR to MAKEFLAGS, which takes
>> effect also on make simply re-invoking itself. Once these flags are in
>> effect, CC (in particular) is empty (undefined), and would be defined
>> only via Config.mk including StdGNU.mk or alike. With the quirk, CC
>> remains empty, yet with an empty CC the compiler minimum version check
>> fails, breaking the build.
>>
>> Move the exporting of the various tool stack component variables past
>> where they gain their (final) values.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> There may be further factors playing into the described quirk, as I've
>> also observed that simply running make a 2nd time would lead to
>> successful building of xen/.
>>
>> While this wasn't a problem until several weeks back, I've not been able
>> to identify which exact commit would have caused the breakage. Hence no
>> Fixes: tag.
> 
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks.

> Looks like this happened before: be63d9d47f ("build: tweak variable exporting for make 3.82")

Ah yes. I did think I had dealt with that before, but I checked patches
only back to early 2021. It's somewhat worse than described there: it's
not just the origin which changes, but (as explained) it actually
prevents the variable from further changing its value.

> So, maybe the issue started again with 15a0578ca4 ("build: shuffle
> main Makefile"), which moves the include of Config.mk even later.

Yes, that's certainly it. Thanks for spotting - I'll add a Fixes: tag.

Jan
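The ordering requirement behind the fix can be sketched as a minimal Makefile fragment (hypothetical, illustrating the description above; this is not the actual xen/Makefile):

```make
# With the quirk, exporting a still-undefined variable can freeze its
# (empty) value, so a later include defining it has no effect:
#
#     export CC              # CC undefined here -> quirk triggers
#     include Config.mk      # CC := gcc ... silently ignored by make 3.82
#
# The patch's approach: let the variable gain its final value first,
# then export it.
include Config.mk            # defines CC (hypothetical include)
export CC                    # safe: CC already has its final value
```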


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:23:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349229.575413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jh-0002T0-OT; Tue, 14 Jun 2022 16:22:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349229.575413; Tue, 14 Jun 2022 16:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jh-0002St-LO; Tue, 14 Jun 2022 16:22:57 +0000
Received: by outflank-mailman (input) for mailman id 349229;
 Tue, 14 Jun 2022 16:22:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o19Jg-0002Ro-Bb
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:22:56 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d77f2ba-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:22:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d77f2ba-ebfe-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655223774;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=4i0rdmeUpTiIobrUgWp6yae7mM3fyKCUFwFlF40zIzo=;
  b=UoLGxROMbvl+YiRfQmbEX52p6n0rP8Pyk+pg2OwY12rfxDV3nEOR+Gcc
   vKkyg4P1TxQUG8jMknqCCH9S5CP6J6lIg9RRND5RZUSUX7NUTdvTPvi/P
   jrY37GaPVRMv97qDf+KiuLBj9YvElk4LYGlYpoIyBEyrgfnIyZ/kQIJTL
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73581200
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,300,1647316800"; 
   d="scan'208";a="73581200"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 0/4] xen: rework compat headers generation
Date: Tue, 14 Jun 2022 17:22:44 +0100
Message-ID: <20220614162248.40278-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v2

v2:
- new patch [1/4] to fix an issue with a command line that can be way too long
- other small changes, and reorder patches

Hi,

This patch series is about two improvements. The first is to use
$(if_changed, ) in "include/Makefile" to make the generation of the
compat headers less verbose and to make the command line part of the
decision to rebuild the headers. The second is to replace a slow script
with a much faster one, saving time when generating the headers.

There are some numbers here:
    https://lore.kernel.org/xen-devel/Yp3%2F%2Fc%2FCAcwLHCvi@perard.uk.xensource.com/

    On my machine when doing a full build [of the hypervisor], with `ccache`
    enabled, it saves about 1.17 seconds (out of ~17s), and without ccache, it
    saves about 2.0 seconds (out of ~37s).

Thanks.

Anthony PERARD (4):
  build,include: rework shell script for headers++.chk
  build: remove auto.conf prerequisite from compat/xlat.h target
  build: set PERL
  build: replace get-fields.sh by a perl script

 xen/Makefile                    |   1 +
 xen/include/Makefile            |  25 +-
 README                          |   1 +
 xen/tools/compat-xlat-header.pl | 539 ++++++++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 528 -------------------------------
 5 files changed, 557 insertions(+), 537 deletions(-)
 create mode 100755 xen/tools/compat-xlat-header.pl
 delete mode 100644 xen/tools/get-fields.sh

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:23:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349230.575424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jx-0002mq-0d; Tue, 14 Jun 2022 16:23:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349230.575424; Tue, 14 Jun 2022 16:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jw-0002mj-Tw; Tue, 14 Jun 2022 16:23:12 +0000
Received: by outflank-mailman (input) for mailman id 349230;
 Tue, 14 Jun 2022 16:23:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o19Jv-0002Ro-Ue
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:23:12 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 474bd579-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:23:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 474bd579-ebfe-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655223790;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=TqPwIHKluWSn+BpKllnZiPLlU5cGc7TXXVi/MIH8tIg=;
  b=VELFbLG4CJwQcX/iSrn2eH242SE52ZrNVfGX1O3rOKBFeiT1Psljw/zP
   a+A9TmubKE9tF5KPTcoDEYIbrMv+dFF4BCOQ9qreuagx8d+qnFzFYdIRk
   PqdGIhaNzvm5AeNJ0b7PmDU4rpa4XmoaV7rVayt+RV+hGML6mH/eSFu49
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76142775
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,300,1647316800"; 
   d="scan'208";a="76142775"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH v2 1/4] build,include: rework shell script for headers++.chk
Date: Tue, 14 Jun 2022 17:22:45 +0100
Message-ID: <20220614162248.40278-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220614162248.40278-1-anthony.perard@citrix.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The command line generated for headers++.chk by make is quite long,
and in some environments it is too long. This issue has been seen in a
Yocto build environment.

Error messages:
    make[9]: execvp: /bin/sh: Argument list too long
    make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127

Rework so that we do the foreach loop in shell rather than make, to
reduce the command line size by a lot. We also need a way to get the
header prerequisites of some public headers, so we use a shell "case"
statement to be able to do some simple pattern matching. Plain
variables in POSIX shell don't allow working with associative arrays
or with variable names containing "/".
Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---

Notes:
    v2:
    - fix typo in commit message
    - fix out-of-tree build
    
    v1:
    - was sent as a reply to v1 of the series

 xen/include/Makefile | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 6d9bcc19b0..49c75a78f9 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -158,13 +158,22 @@ define cmd_headerscxx_chk
 	    touch $@.new;                                                     \
 	    exit 0;                                                           \
 	fi;                                                                   \
-	$(foreach i, $(filter %.h,$^),                                        \
-	    echo "#include "\"$(i)\"                                          \
+	get_prereq() {                                                        \
+	    case $$1 in                                                       \
+	    $(foreach i, $(filter %.h,$^),                                    \
+	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
+		$(i)$(close)                                                  \
+		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
+			-include c$(j))";;))                                  \
+	    esac;                                                             \
+	};                                                                    \
+	for i in $(filter %.h,$^); do                                         \
+	    echo "#include "\"$$i\"                                           \
 	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
 	      -include stdint.h -include $(srcdir)/public/xen.h               \
-	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
+	      `get_prereq $$i`                                                \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;) \
+	    || exit $$?; echo $$i >> $@.new; done;                            \
 	mv $@.new $@
 endef
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:23:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349231.575435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jy-00033C-AK; Tue, 14 Jun 2022 16:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349231.575435; Tue, 14 Jun 2022 16:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jy-000335-5k; Tue, 14 Jun 2022 16:23:14 +0000
Received: by outflank-mailman (input) for mailman id 349231;
 Tue, 14 Jun 2022 16:23:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o19Jx-0002Ro-4B
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:23:13 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48cb3387-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:23:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48cb3387-ebfe-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655223791;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=LmV86saEUcknXXo+WrjWMC73fwG5cbzc7dPP7bJAl9Q=;
  b=Lb54RinZUkl2Lbl6irlUNla/gAMonL7CCCvKBO7aTjio3bTe370lyVeE
   T/MTRxtjOZuUhz4Yq0SJrW+92DNmemoAtq2PJJeWAcmmoLIi3XInD1GfJ
   Tujzo6fpfJFfKdpgns+OqhtUjKREEZ7NU5eBGQWjcvevFRTIVG6flOrrb
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76142776
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 2/4] build: remove auto.conf prerequisite from compat/xlat.h target
Date: Tue, 14 Jun 2022 17:22:46 +0100
Message-ID: <20220614162248.40278-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220614162248.40278-1-anthony.perard@citrix.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Now that the command line generating "xlat.h" is checked on rebuild, the
header will be regenerated whenever the list of xlat headers changes
due to a change in ".config". We no longer need to force a regeneration
on every change to ".config".
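(Background, not part of the patch: this works because Kbuild-style `if_changed` records the command line used to build a target in a hidden `.cmd` file and re-runs the command when the saved line differs from the current one. A rough shell model of that check, with all names invented:)

```shell
# Rough model of if_changed: rebuild only when the recorded command
# line for the target differs from the one we are about to run.
workdir=/tmp/ifchanged_demo
mkdir -p "$workdir"; cd "$workdir"
printf 'one\ntwo\n' > xlat.lst
cmd='sort xlat.lst'            # the "command line" for the target
if [ "$cmd" != "$(cat .xlat.h.cmd 2>/dev/null)" ]; then
    eval "$cmd" > xlat.h       # regenerate the target
    printf '%s' "$cmd" > .xlat.h.cmd   # remember how it was built
    echo "GEN xlat.h"
else
    echo "xlat.h is up to date"
fi
```

Since a change to the xlat header list changes the recorded command line, the target is regenerated exactly when needed, without depending on auto.conf.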

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/include/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 49c75a78f9..0d3e3d66e0 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -101,7 +101,7 @@ cmd_xlat_h = \
 	cat $(filter %.h,$^) >$@.new; \
 	mv -f $@.new $@
 
-$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) $(obj)/config/auto.conf FORCE
+$(obj)/compat/xlat.h: $(addprefix $(obj)/compat/.xlat/,$(xlat-y)) FORCE
 	$(call if_changed,xlat_h)
 
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:23:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:23:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349232.575446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jz-0003LI-JJ; Tue, 14 Jun 2022 16:23:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349232.575446; Tue, 14 Jun 2022 16:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19Jz-0003L9-Fr; Tue, 14 Jun 2022 16:23:15 +0000
Received: by outflank-mailman (input) for mailman id 349232;
 Tue, 14 Jun 2022 16:23:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o19Jy-0002Ro-Gl
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:23:14 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 497f4024-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:23:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 497f4024-ebfe-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655223793;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=9MlmA0ylP6rG0RiM/rbCen5PZYrP6dJUfb3R+KkbnFk=;
  b=UGh32qkUesmrdcOVwJOrEMapL11c8pHSR028XetLQnrf7zZGCjzdqQJY
   ZwhkkLw920G1/RbKd9HnJh3q2gCXoIvMh7GE61C/xlJRKc/5dlYpAZNWA
   R32vG2X9YzBDiqIvNsa5QRMWoF13Sc8s4kA3ZE31j7A5905nzSvt0Rgra
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76142780
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 3/4] build: set PERL
Date: Tue, 14 Jun 2022 17:22:47 +0100
Message-ID: <20220614162248.40278-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220614162248.40278-1-anthony.perard@citrix.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We are going to use it in a moment.

Also update the README to list Perl as a build requirement.
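(For reference: make's `?=` only assigns when the variable is not already set, e.g. in the environment, so `export PERL ?= perl` lets a caller override the interpreter. The shell default-value idiom sketched below behaves the same way; the example values are invented.)

```shell
# Shell analogue of make's 'PERL ?= perl': keep an existing value,
# otherwise fall back to the default.
unset PERL
: "${PERL:=perl}"              # no value set -> take the default
echo "default: $PERL"

PERL=/usr/local/bin/perl       # caller-provided value wins
: "${PERL:=perl}"
echo "kept:    $PERL"
```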

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - update ./README

 xen/Makefile | 1 +
 README       | 1 +
 2 files changed, 2 insertions(+)

diff --git a/xen/Makefile b/xen/Makefile
index 82f5310b12..a6650a2acc 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
 export PYTHON		?= $(PYTHON_INTERPRETER)
 
 export CHECKPOLICY	?= checkpolicy
+export PERL		?= perl
 
 $(if $(filter __%, $(MAKECMDGOALS)), \
     $(error targets prefixed with '__' are only for internal use))
diff --git a/README b/README
index 5e55047ffd..c1c18de7e0 100644
--- a/README
+++ b/README
@@ -64,6 +64,7 @@ provided by your OS distributor:
     * iproute package (/sbin/ip)
     * GNU bison and GNU flex
     * ACPI ASL compiler (iasl)
+    * Perl
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 16:23:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 16:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349234.575457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19K3-0003iz-1g; Tue, 14 Jun 2022 16:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349234.575457; Tue, 14 Jun 2022 16:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o19K2-0003im-UJ; Tue, 14 Jun 2022 16:23:18 +0000
Received: by outflank-mailman (input) for mailman id 349234;
 Tue, 14 Jun 2022 16:23:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cShD=WV=citrix.com=prvs=157bf7d09=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o19K0-0002Ro-TH
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:23:17 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 494718f0-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:23:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 494718f0-ebfe-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655223793;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Mqw3sdm5n8MfKc9nInueGkXdH2kX+c2NXuRNB6zKpwQ=;
  b=RV15zRRmlJyy7X/mzQSb0aKqr234k40LJoDsb0b9LXT6auEh9Ancg5I2
   LH34BNsZLNwDY52FckHuejK8viSEJp38VqRNzF0F6FpgqlBK0/s+3NQ5B
   2lv6FAxreoS/OsaWMnF5Dbvw+iLT94PwY9eUMp+VjMtnEUNowiIzC1BTC
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 72930165
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 4/4] build: replace get-fields.sh by a perl script
Date: Tue, 14 Jun 2022 17:22:48 +0100
Message-ID: <20220614162248.40278-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220614162248.40278-1-anthony.perard@citrix.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The get-fields.sh script, which generates all the include/compat/.xlat/*.h
headers, is quite slow. For example, it takes nearly 3 seconds to
generate platform.h on a recent machine, and 2.3 seconds for memory.h.

Since it is only text processing, rewriting the mix of shell/sed/python
into a single Perl script makes the generation of those files a lot
faster.

I tried to keep the Perl code structured like the shell code, to ease
review, so some of the Perl might look odd or could be written better.

No functional change: the headers generated are identical.
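(As a rough illustration of the text processing involved — not taken from the patch — the script's first pass spaces out the listed punctuation so each header line can be split into tokens. A toy shell equivalent, with an invented input line:)

```shell
# Mirror of the tokenizing pass: pad the punctuation characters with
# spaces, then split on whitespace, printing one token per line.
printf 'struct foo { int a[2]; };\n' \
  | sed -E 's/([][,;:{}])/ \1 /g' \
  | tr -s ' ' '\n'
```

The downstream passes (get_fields, build_enums, build_body, build_check) then walk this token stream, tracking brace depth, instead of re-parsing the header text for every field.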

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - Add .pl extension to the perl script
    - remove "-w" from the shebang as it duplicates "use warnings;"
    - Add a note in the commit message that the "headers generated are identical".

 xen/include/Makefile            |   6 +-
 xen/tools/compat-xlat-header.pl | 539 ++++++++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 528 -------------------------------
 3 files changed, 541 insertions(+), 532 deletions(-)
 create mode 100755 xen/tools/compat-xlat-header.pl
 delete mode 100644 xen/tools/get-fields.sh

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 0d3e3d66e0..31b0be14bc 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -60,9 +60,7 @@ cmd_compat_c = \
 
 quiet_cmd_xlat_headers = GEN     $@
 cmd_xlat_headers = \
-    while read what name; do \
-        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
-    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
+    $(PERL) $(srctree)/tools/compat-xlat-header.pl $< $(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst > $@.new; \
     mv -f $@.new $@
 
 targets += $(headers-y)
@@ -80,7 +78,7 @@ $(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-
 	$(call if_changed,compat_c)
 
 targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
-$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
+$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header.pl FORCE
 	$(call if_changed,xlat_headers)
 
 quiet_cmd_xlat_lst = GEN     $@
diff --git a/xen/tools/compat-xlat-header.pl b/xen/tools/compat-xlat-header.pl
new file mode 100755
index 0000000000..791230591c
--- /dev/null
+++ b/xen/tools/compat-xlat-header.pl
@@ -0,0 +1,539 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+open COMPAT_LIST, "<$ARGV[1]" or die "can't open $ARGV[1], $!";
+open HEADER, "<$ARGV[0]" or die "can't open $ARGV[0], $!";
+
+my @typedefs;
+
+my @header_tokens;
+while (<HEADER>) {
+    next if m/^\s*#.*/;
+    s/([\]\[,;:{}])/ $1 /g;
+    s/^\s+//;
+    push(@header_tokens, split(/\s+/));
+}
+
+sub get_fields {
+    my ($looking_for) = @_;
+    my $level = 1;
+    my $aggr = 0;
+    my ($name, @fields);
+
+    foreach (@header_tokens) {
+        if (/^(struct|union)$/) {
+            unless ($level != 1) {
+                $aggr = 1;
+                @fields = ();
+                $name = '';
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+        } elsif ($_ eq '}') {
+            $level--;
+            if ($level == 1 and $name eq $looking_for) {
+                push (@fields, $_);
+                return @fields;
+            }
+        } elsif (/^[a-zA-Z_].*/) {
+            unless ($aggr == 0 or $name ne "") {
+                $name = $_;
+            }
+        }
+        unless ($aggr == 0) {
+            push (@fields, $_);
+        }
+    }
+    return ();
+}
+
+sub get_typedefs {
+    my $level = 1;
+    my $state = 0;
+    my @typedefs;
+    foreach (@_) {
+        if ($_ eq 'typedef') {
+            unless ($level != 1) {
+                $state = 1;
+            }
+        } elsif (m/^COMPAT_HANDLE\(.*\)$/) {
+            unless ($level != 1 or $state != 1) {
+                $state = 2;
+            }
+        } elsif (m/^[\[\{]$/) {
+            $level++;
+        } elsif (m/^[\]\}]$/) {
+            $level--;
+        } elsif ($_ eq ';') {
+            unless  ($level != 1) {
+                $state = 0;
+            }
+        } elsif (m/^[a-zA-Z_].*$/) {
+            unless ($level != 1 or $state != 2) {
+                push (@typedefs, $_);
+            }
+        }
+    }
+    return @typedefs;
+}
+
+sub build_enums {
+    my ($name, @tokens) = @_;
+
+    my $level = 1;
+    my $kind = '';
+    my $named = '';
+    my (@fields, @members, $id);
+
+    foreach (@tokens) {
+        if (m/^(struct|union)$/) {
+            unless ($level != 2) {
+                @fields = ('');
+            }
+            $kind="$_;$kind";
+        } elsif ($_ eq '{') {
+            $level++;
+        } elsif ($_ eq '}') {
+            $level--;
+            if ($level == 1) {
+                my $subkind = $kind;
+                $subkind =~ s/;.*//;
+                if ($subkind eq 'union') {
+                    print "\nenum XLAT_$name {\n";
+                    foreach (@members) {
+                        print "    XLAT_${name}_$_,\n";
+                    }
+                    print "};\n";
+                }
+                return;
+            } elsif ($level == 2) {
+                $named = '?';
+            }
+        } elsif (/^[a-zA-Z_].*$/) {
+            $id = $_;
+            my $k = $kind;
+            $k =~ s/.*?;//;
+            if ($named ne '' and $k ne '') {
+                shift @fields if @fields > 0 and $fields[0] eq '';
+                build_enums("${name}_$_", @fields);
+                $named = '!';
+            }
+        } elsif ($_ eq ',') {
+            unless ($level != 2) {
+                push (@members, $id);
+            }
+        } elsif ($_ eq ';') {
+            unless ($level != 2) {
+                push (@members, $id);
+            }
+            unless ($named eq '') {
+                $kind =~ s/.*?;//;
+            }
+            $named = '';
+        }
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+}
+
+sub handle_field {
+    my ($prefix, $name, $id, $type, @fields) = @_;
+
+    if (@fields == 0) {
+        print " \\\n";
+        if ($type eq '') {
+            print "$prefix\(_d_)->$id = (_s_)->$id;"
+        } else {
+            my $k = $id;
+            $k =~ s/\./_/g;
+            print "${prefix}XLAT_${name}_HNDL_${k}(_d_, _s_);";
+        }
+    } elsif ("@fields" !~ m/[{}]/) {
+        my $tag = "@fields";
+        $tag =~ s/\s*(struct|union)\s+(compat_)?(\w+)\s.*/$3/;
+        print " \\\n";
+        print "${prefix}XLAT_$tag(&(_d_)->$id, &(_s_)->$id);"
+    } else {
+        my $func_id = $id;
+        my @func_tokens = @fields;
+        my $kind = '';
+        my $array = "";
+        my $level = 1;
+        my $arrlvl = 1;
+        my $array_type = '';
+        my $id = '';
+        my $type = '';
+        @fields = ();
+        foreach (@func_tokens) {
+            if (/^(struct|union)$/) {
+                unless ($level != 2) {
+                    @fields = ('');
+                }
+                if ($level == 1) {
+                    $kind = $_;
+                    if ($kind eq 'union') {
+                        my $tmp = $func_id;
+                        $tmp =~ s/\./_/g;
+                        print " \\\n";
+                        print  "${prefix}switch ($tmp) {"
+                    }
+                }
+            } elsif ($_ eq '{') {
+                $level++;
+                $id = '';
+            } elsif ($_ eq '}') {
+                $level--;
+                $id = '';
+                if ($level == 1 and $kind eq 'union') {
+                    print " \\\n";
+                    print "$prefix}"
+                }
+            } elsif ($_ eq '[') {
+                if ($level != 2 or $arrlvl != 1) {
+                    # skip
+                } elsif ($array eq '') {
+                    $array = ' ';
+                } else {
+                    $array = "$array;";
+                }
+                $arrlvl++;
+            } elsif ($_ eq ']') {
+                $arrlvl--;
+            } elsif (m/^COMPAT_HANDLE\((.*)\)$/) {
+                if ($level == 2 and $id eq '') {
+                    $type = $1;
+                    $type =~ s/^compat_//;
+                }
+            } elsif ($_ eq "compat_domain_handle_t") {
+                if ($level == 2 and $id eq '') {
+                    $array_type = $_;
+                }
+            } elsif (m/^[a-zA-Z_].*$/) {
+                if ($id eq '' and $type eq '' and $array_type eq '') {
+                    foreach $id (@typedefs) {
+                        unless ($id ne $_) {
+                            $type = $id;
+                        }
+                    }
+                    if ($type eq '') {
+                        $id = $_;
+                    } else {
+                        $id = '';
+                    }
+                } else {
+                    $id = $_;
+                }
+            } elsif (m/^[,;]$/) {
+                if ($level == 2 and $id !~ /^_pad\d*$/) {
+                    if ($kind eq 'union') {
+                        my $tmp = "$func_id.$id";
+                        $tmp =~ s/\./_/g;
+                        print " \\\n";
+                        print  "${prefix}case XLAT_${name}_$tmp:";
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_field("$prefix    ", $name,  "$func_id.$id", $type, @fields);
+                    } elsif ($array eq '' and $array_type eq '') {
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_field($prefix, $name, "$func_id.$id", $type, @fields);
+                    } elsif ($array eq '') {
+                        copy_array("    ", "$func_id.$id");
+                    } else {
+                        $array =~ s/^.*?;//;
+                        shift @fields if @fields > 0 and $fields[0] eq '';
+                        handle_array($prefix, $name, "$func_id.$id", $array, $type, @fields);
+                    }
+                    unless ($_ ne ';') {
+                        @fields = ();
+                        $id = '';
+                        $type = '';
+                    }
+                    $array = '';
+                    if ($kind eq 'union') {
+                        print " \\\n";
+                        print "$prefix    break;";
+                    }
+                }
+            } else {
+                if ($array ne '') {
+                    $array = "$array $_";
+                }
+            }
+            unless (@fields == 0) {
+                push (@fields, $_);
+            }
+        }
+    }
+}
+
+sub copy_array {
+    my ($prefix, $id) = @_;
+
+    print " \\\n";
+    print  "${prefix}if ((_d_)->$id != (_s_)->$id) \\\n";
+    print  "$prefix    memcpy((_d_)->$id, (_s_)->$id, sizeof((_d_)->$id));";
+}
+
+sub handle_array {
+    my ($prefix, $name, $id, $array, $type, @fields) = @_;
+
+    my $i = $array;
+    $i =~ s/[^;]//g;
+    $i = length($i);
+    $i = "i$i";
+
+    print " \\\n";
+    print "$prefix\{ \\\n";
+    print "$prefix    unsigned int $i; \\\n";
+    my $tmp = $array;
+    $tmp =~ s/;.*$//;
+    $tmp =~ s/^\s*(.*)\s*$/$1/;
+    print "$prefix    for ($i = 0; $i < $tmp; ++$i) {";
+    if ($array !~ m/^.*?;/) {
+        handle_field("$prefix        ", $name, "$id\[$i]", $type, @fields);
+    } else {
+        handle_array("$prefix        " ,$name, "$id\[$i]", $', $type, @fields);
+    }
+    print " \\\n";
+    print "$prefix    } \\\n";
+    print "$prefix\}";
+}
+
+sub build_body {
+    my ($name, @tokens) = @_;
+    my $level = 1;
+    my $id = '';
+    my $array = '';
+    my $arrlvl = 1;
+    my $array_type = '';
+    my $type = '';
+    my @fields;
+
+    printf "\n#define XLAT_$name(_d_, _s_) do {";
+
+    foreach (@tokens) {
+        if (/^(struct|union)$/) {
+            unless ($level != 2) {
+                @fields = ('');
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+            $id = '';
+        } elsif ($_ eq '}') {
+            $level--;
+            $id = '';
+        } elsif ($_ eq '[') {
+            if ($level != 2 or $arrlvl != 1) {
+                # skip
+            } elsif ($array eq '') {
+                $array = ' ';
+            } else {
+                $array = "$array;";
+            }
+            $arrlvl++;
+        } elsif ($_ eq ']') {
+            $arrlvl--;
+        } elsif (m/^COMPAT_HANDLE\((.*)\)$/) {
+            if ($level == 2 and $id eq '') {
+                $type = $1;
+                $type =~ s/^compat_//;
+            }
+        } elsif ($_ eq "compat_domain_handle_t") {
+            if ($level == 2 and $id eq '') {
+                $array_type = $_;
+            }
+        } elsif (m/^[a-zA-Z_].*$/) {
+            if ($array ne '') {
+                $array = "$array $_";
+            } elsif ($id eq '' and $type eq '' and $array_type eq '') {
+                foreach $id (@typedefs) {
+                    unless ($id eq $_) {
+                        $type = $id;
+                    }
+                }
+                if ($type eq '') {
+                    $id = $_;
+                } else {
+                    $id = '';
+                }
+            } else {
+                $id = $_;
+            }
+        } elsif (m/^[,;]$/) {
+            if ($level == 2 and $id !~ /^_pad\d*$/) {
+                if ($array eq '' and $array_type eq '') {
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    handle_field("    ", $name, $id, $type, @fields);
+                } elsif ($array eq '') {
+                    copy_array("    ", $id);
+                } else {
+                    my $tmp = $array;
+                    $tmp =~ s/^.*?;//;
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    handle_array("    ", $name, $id, $tmp, $type, @fields);
+                }
+                unless ($_ ne ';') {
+                    @fields = ();
+                    $id = '';
+                    $type = '';
+                }
+                $array = '';
+            }
+        } else {
+            if ($array ne '') {
+                $array = "$array $_";
+            }
+        }
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+    print " \\\n} while (0)\n";
+}
+
+sub check_field {
+    my ($kind, $name, $field, @extrafields) = @_;
+
+    if ("@extrafields" !~ m/[{}]/) {
+        print "; \\\n";
+        if (@extrafields != 0) {
+            foreach (@extrafields) {
+                if (m/^(struct|union)$/) {
+                    # skip
+                } elsif (m/^[a-zA-Z_].*/) {
+                    s/^xen_//;
+                    print "    CHECK_$_";
+                    last;
+                } else {
+                    die "Malformed compound declaration: '$_'";
+                }
+            }
+        } elsif ($field !~ m/\./) {
+            print "    CHECK_FIELD_($kind, $name, $field)";
+        } else {
+            my $n = $field =~ s/\./, /g;
+            print  "    CHECK_SUBFIELD_${n}_($kind, $name, $field)";
+        }
+    } else {
+        my $level = 1;
+        my @fields = ();
+        my $id = '';
+
+        foreach (@extrafields) {
+            if (m/^(struct|union)$/) {
+                unless ($level != 2) {
+                    @fields = ('');
+                }
+            } elsif ($_ eq '{') {
+                $level++;
+                $id = '';
+            } elsif ($_ eq '}') {
+                $level--;
+                $id = '';
+            } elsif (/^compat_.*_t$/) {
+                if ($level == 2) {
+                    @fields = ('');
+                    s/_t$//;
+                    s/^compat_//;
+                }
+            } elsif (/^evtchn_.*_compat_t$/) {
+                if ($level == 2 and $_ ne "evtchn_port_compat_t") {
+                    @fields = ('');
+                    s/_compat_t$//;
+                }
+            } elsif (/^[a-zA-Z_].*$/) {
+                $id = $_;
+            } elsif (/^[,;]$/) {
+                if ($level == 2 and $id !~ /^_pad\d*$/) {
+                    shift @fields if @fields > 0 and $fields[0] eq '';
+                    check_field($kind, $name, "$field.$id", @fields);
+                    if ($_ eq ";") {
+                        @fields = ();
+                        $id = '';
+                    }
+                }
+            }
+            unless (@fields == 0) {
+                push (@fields, $_);
+            }
+        }
+    }
+}
+
+sub build_check {
+    my ($name, @tokens) = @_;
+    my $level = 1;
+    my (@fields, $kind, $id);
+    my $arrlvl = 1;
+
+    print "\n";
+    print "#define CHECK_$name \\\n";
+
+    foreach (@tokens) {
+        if (/^(struct|union)$/) {
+            if ($level == 1) {
+                $kind = $_;
+                print "    CHECK_SIZE_($kind, $name)";
+            } elsif ($level == 2) {
+                @fields = ('');
+            }
+        } elsif ($_ eq '{') {
+            $level++;
+            $id = '';
+        } elsif ($_ eq '}') {
+            $level--;
+            $id = '';
+        } elsif ($_ eq '[') {
+            $arrlvl++;
+        } elsif ($_ eq ']') {
+            $arrlvl--;
+        } elsif (/^compat_.*_t$/) {
+            if ($level == 2 and $_ ne "compat_argo_port_t") {
+                @fields = ('');
+                s/_t$//;
+                s/^compat_//;
+            }
+        } elsif (/^[a-zA-Z_].*$/) {
+            if ($level == 2 and $arrlvl == 1) {
+                $id = $_;
+            }
+        } elsif (/^[,;]$/) {
+            if ($level == 2 and $id !~ /^_pad\d*$/) {
+                shift @fields if @fields > 0 and $fields[0] eq '';
+                check_field($kind, $name, $id, @fields);
+                if ($_ eq ";") {
+                    @fields = ();
+                    $id = '';
+                }
+            }
+        }
+
+        unless (@fields == 0) {
+            push (@fields, $_);
+        }
+    }
+    print "\n";
+}
+
+@typedefs = get_typedefs(@header_tokens);
+
+while (<COMPAT_LIST>) {
+    my ($what, $name) = split(/\s+/, $_);
+    $name =~ s/^xen//;
+
+    my @fields = get_fields("compat_$name");
+    if (@fields == 0) {
+        die "Fields of 'compat_$name' not found in '$ARGV[1]'";
+    }
+
+    if ($what eq "!") {
+        build_enums($name, @fields);
+        build_body($name, @fields);
+    } elsif ($what eq "?") {
+        build_check($name, @fields);
+    } else {
+        die "Invalid translation indicator: '$what'";
+    }
+}
diff --git a/xen/tools/get-fields.sh b/xen/tools/get-fields.sh
deleted file mode 100644
index 002db2093f..0000000000
--- a/xen/tools/get-fields.sh
+++ /dev/null
@@ -1,528 +0,0 @@
-test -n "$1" -a -n "$2" -a -n "$3"
-set -ef
-
-SED=sed
-if test -x /usr/xpg4/bin/sed; then
-	SED=/usr/xpg4/bin/sed
-fi
-if test -z ${PYTHON}; then
-	PYTHON=`/usr/bin/env python`
-fi
-if test -z ${PYTHON}; then
-	echo "Python not found"
-	exit 1
-fi
-
-get_fields ()
-{
-	local level=1 aggr=0 name= fields=
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 1 || aggr=1 fields= name=
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 -a $name = $1 ]
-			then
-				echo "$fields }"
-				return 0
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $aggr = 0 -o -n "$name" || name="$token"
-			;;
-		esac
-		test $aggr = 0 || fields="$fields $token"
-	done
-}
-
-get_typedefs ()
-{
-	local level=1 state=
-	for token in $1
-	do
-		case "$token" in
-		typedef)
-			test $level != 1 || state=1
-			;;
-		COMPAT_HANDLE\(*\))
-			test $level != 1 -o "$state" != 1 || state=2
-			;;
-		[\{\[])
-			level=$(expr $level + 1)
-			;;
-		[\}\]])
-			level=$(expr $level - 1)
-			;;
-		";")
-			test $level != 1 || state=
-			;;
-		[a-zA-Z_]*)
-			test $level != 1 -o "$state" != 2 || echo "$token"
-			;;
-		esac
-	done
-}
-
-build_enums ()
-{
-	local level=1 kind= fields= members= named= id= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			kind="$token;$kind"
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 ]
-			then
-				if [ "${kind%%;*}" = union ]
-				then
-					echo
-					echo "enum XLAT_$1 {"
-					for m in $members
-					do
-						echo "    XLAT_${1}_$m,"
-					done
-					echo "};"
-				fi
-				return 0
-			elif [ $level = 2 ]
-			then
-				named='?'
-			fi
-			;;
-		[a-zA-Z]*)
-			id=$token
-			if [ -n "$named" -a -n "${kind#*;}" ]
-			then
-				build_enums ${1}_$token "$fields"
-				named='!'
-			fi
-			;;
-		",")
-			test $level != 2 || members="$members $id"
-			;;
-		";")
-			test $level != 2 || members="$members $id"
-			test -z "$named" || kind=${kind#*;}
-			named=
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-}
-
-handle_field ()
-{
-	if [ -z "$5" ]
-	then
-		echo " \\"
-		if [ -z "$4" ]
-		then
-			printf %s "$1(_d_)->$3 = (_s_)->$3;"
-		else
-			printf %s "$1XLAT_${2}_HNDL_$(echo $3 | $SED 's,\.,_,g')(_d_, _s_);"
-		fi
-	elif [ -z "$(echo "$5" | $SED 's,[^{}],,g')" ]
-	then
-		local tag=$(echo "$5" | ${PYTHON} -c '
-import re,sys
-for line in sys.stdin.readlines():
-    sys.stdout.write(re.subn(r"\s*(struct|union)\s+(compat_)?(\w+)\s.*", r"\3", line)[0].rstrip() + "\n")
-')
-		echo " \\"
-		printf %s "${1}XLAT_$tag(&(_d_)->$3, &(_s_)->$3);"
-	else
-		local level=1 kind= fields= id= array= arrlvl=1 array_type= type= token
-		for token in $5
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				if [ $level = 1 ]
-				then
-					kind=$token
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}switch ($(echo $3 | $SED 's,\.,_,g')) {"
-					fi
-				fi
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				if [ $level = 1 -a $kind = union ]
-				then
-					echo " \\"
-					printf %s "$1}"
-				fi
-				;;
-			"[")
-				if [ $level != 2 -o $arrlvl != 1 ]
-				then
-					:
-				elif [ -z "$array" ]
-				then
-					array=" "
-				else
-					array="$array;"
-				fi
-				arrlvl=$(expr $arrlvl + 1)
-				;;
-			"]")
-				arrlvl=$(expr $arrlvl - 1)
-				;;
-			COMPAT_HANDLE\(*\))
-				if [ $level = 2 -a -z "$id" ]
-				then
-					type=${token#COMPAT_HANDLE?}
-					type=${type%?}
-					type=${type#compat_}
-				fi
-				;;
-			compat_domain_handle_t)
-				if [ $level = 2 -a -z "$id" ]
-				then
-					array_type=$token
-				fi
-				;;
-			[a-zA-Z]*)
-				if [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-				then
-					for id in $typedefs
-					do
-						test $id != "$token" || type=$id
-					done
-					if [ -z "$type" ]
-					then
-						id=$token
-					else
-						id=
-					fi
-				else
-					id=$token
-				fi
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}case XLAT_${2}_$(echo $3.$id | $SED 's,\.,_,g'):"
-						handle_field "$1    " $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" -a -z "$array_type" ]
-					then
-						handle_field "$1" $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" ]
-					then
-						copy_array "    " $3.$id
-					else
-						handle_array "$1" $2 $3.$id "${array#*;}" "$type" "$fields"
-					fi
-					test "$token" != ";" || fields= id= type=
-					array=
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "$1    break;"
-					fi
-				fi
-				;;
-			*)
-				if [ -n "$array" ]
-				then
-					array="$array $token"
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-copy_array ()
-{
-	echo " \\"
-	echo "${1}if ((_d_)->$2 != (_s_)->$2) \\"
-	printf %s "$1    memcpy((_d_)->$2, (_s_)->$2, sizeof((_d_)->$2));"
-}
-
-handle_array ()
-{
-	local i="i$(echo $4 | $SED 's,[^;], ,g' | wc -w | $SED 's,[[:space:]]*,,g')"
-	echo " \\"
-	echo "$1{ \\"
-	echo "$1    unsigned int $i; \\"
-	printf %s "$1    for ($i = 0; $i < "${4%%;*}"; ++$i) {"
-	if [ "$4" = "${4#*;}" ]
-	then
-		handle_field "$1        " $2 $3[$i] "$5" "$6"
-	else
-		handle_array "$1        " $2 $3[$i] "${4#*;}" "$5" "$6"
-	fi
-	echo " \\"
-	echo "$1    } \\"
-	printf %s "$1}"
-}
-
-build_body ()
-{
-	echo
-	printf %s "#define XLAT_$1(_d_, _s_) do {"
-	local level=1 fields= id= array= arrlvl=1 array_type= type= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			if [ $level != 2 -o $arrlvl != 1 ]
-			then
-				:
-			elif [ -z "$array" ]
-			then
-				array=" "
-			else
-				array="$array;"
-			fi
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		COMPAT_HANDLE\(*\))
-			if [ $level = 2 -a -z "$id" ]
-			then
-				type=${token#COMPAT_HANDLE?}
-				type=${type%?}
-				type=${type#compat_}
-			fi
-			;;
-		compat_domain_handle_t)
-			if [ $level = 2 -a -z "$id" ]
-			then
-				array_type=$token
-			fi
-			;;
-		[a-zA-Z_]*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			elif [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-			then
-				for id in $typedefs
-				do
-					test $id != "$token" || type=$id
-				done
-				if [ -z "$type" ]
-				then
-					id=$token
-				else
-					id=
-				fi
-			else
-				id=$token
-			fi
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				if [ -z "$array" -a -z "$array_type" ]
-				then
-					handle_field "    " $1 $id "$type" "$fields"
-				elif [ -z "$array" ]
-				then
-					copy_array "    " $id
-				else
-					handle_array "    " $1 $id "${array#*;}" "$type" "$fields"
-				fi
-				test "$token" != ";" || fields= id= type=
-				array=
-			fi
-			;;
-		*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo " \\"
-	echo "} while (0)"
-}
-
-check_field ()
-{
-	if [ -z "$(echo "$4" | $SED 's,[^{}],,g')" ]
-	then
-		echo "; \\"
-		local n=$(echo $3 | $SED 's,[^.], ,g' | wc -w | $SED 's,[[:space:]]*,,g')
-		if [ -n "$4" ]
-		then
-			for n in $4
-			do
-				case $n in
-				struct|union)
-					;;
-				[a-zA-Z_]*)
-					printf %s "    CHECK_${n#xen_}"
-					break
-					;;
-				*)
-					echo "Malformed compound declaration: '$n'" >&2
-					exit 1
-					;;
-				esac
-			done
-		elif [ $n = 0 ]
-		then
-			printf %s "    CHECK_FIELD_($1, $2, $3)"
-		else
-			printf %s "    CHECK_SUBFIELD_${n}_($1, $2, $(echo $3 | $SED 's!\.!, !g'))"
-		fi
-	else
-		local level=1 fields= id= token
-		for token in $4
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				;;
-			compat_*_t)
-				if [ $level = 2 ]
-				then
-					fields=" "
-					token="${token%_t}"
-					token="${token#compat_}"
-				fi
-				;;
-			evtchn_*_compat_t)
-				if [ $level = 2 -a $token != evtchn_port_compat_t ]
-				then
-					fields=" "
-					token="${token%_compat_t}"
-				fi
-				;;
-			[a-zA-Z]*)
-				id=$token
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					check_field $1 $2 $3.$id "$fields"
-					test "$token" != ";" || fields= id=
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-build_check ()
-{
-	echo
-	echo "#define CHECK_$1 \\"
-	local level=1 fields= kind= id= arrlvl=1 token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			if [ $level = 1 ]
-			then
-				kind=$token
-				printf %s "    CHECK_SIZE_($kind, $1)"
-			elif [ $level = 2 ]
-			then
-				fields=" "
-			fi
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		compat_*_t)
-			if [ $level = 2 -a $token != compat_argo_port_t ]
-			then
-				fields=" "
-				token="${token%_t}"
-				token="${token#compat_}"
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $level != 2 -o $arrlvl != 1 || id=$token
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				check_field $kind $1 $id "$fields"
-				test "$token" != ";" || fields= id=
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo ""
-}
-
-list="$($SED -e 's,^[[:space:]]#.*,,' -e 's!\([]\[,;:{}]\)! \1 !g' $3)"
-fields="$(get_fields $(echo $2 | $SED 's,^compat_xen,compat_,') "$list")"
-if [ -z "$fields" ]
-then
-	echo "Fields of '$2' not found in '$3'" >&2
-	exit 1
-fi
-name=${2#compat_}
-name=${name#xen}
-case "$1" in
-"!")
-	typedefs="$(get_typedefs "$list")"
-	build_enums $name "$fields"
-	build_body $name "$fields"
-	;;
-"?")
-	build_check $name "$fields"
-	;;
-*)
-	echo "Invalid translation indicator: '$1'" >&2
-	exit 1
-	;;
-esac
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 17:37:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 17:37:36 +0000
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA
 allocations
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com>
 <1652810658-27810-2-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop>
 <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
 <alpine.DEB.2.22.394.2206101709210.756493@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a51dec23-c543-b571-8047-59f39abb0bee@gmail.com>
Date: Tue, 14 Jun 2022 20:37:12 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.22.394.2206101709210.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11.06.22 03:12, Stefano Stabellini wrote:


Hello Stefano


> On Wed, 8 Jun 2022, Oleksandr wrote:
>> 2. Drop the "page_list" entirely and use "dma_pool" for all (contiguous and
>> non-contiguous) allocations. After all, all pages are initially contiguous in
>> fill_list() as they are built from the resource. This changes behavior for all
>> users of xen_alloc_unpopulated_pages()
>>
>> Below the diff for unpopulated-alloc.c. The patch is also available at:
>>
>> https://github.com/otyshchenko1/linux/commit/7be569f113a4acbdc4bcb9b20cb3995b3151387a
>>
>>
>> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
>> index a39f2d3..ab5c7bd 100644
>> --- a/drivers/xen/unpopulated-alloc.c
>> +++ b/drivers/xen/unpopulated-alloc.c
>> @@ -1,5 +1,7 @@
>>   // SPDX-License-Identifier: GPL-2.0
>> +#include <linux/dma-mapping.h>
>>   #include <linux/errno.h>
>> +#include <linux/genalloc.h>
>>   #include <linux/gfp.h>
>>   #include <linux/kernel.h>
>>   #include <linux/mm.h>
>> @@ -13,8 +15,8 @@
>>   #include <xen/xen.h>
>>
>>   static DEFINE_MUTEX(list_lock);
>> -static struct page *page_list;
>> -static unsigned int list_count;
>> +
>> +static struct gen_pool *dma_pool;
>>
>>   static struct resource *target_resource;
>>
>> @@ -36,7 +38,7 @@ static int fill_list(unsigned int nr_pages)
>>          struct dev_pagemap *pgmap;
>>          struct resource *res, *tmp_res = NULL;
>>          void *vaddr;
>> -       unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>> +       unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>>          struct range mhp_range;
>>          int ret;
>>
>> @@ -106,6 +108,7 @@ static int fill_list(unsigned int nr_pages)
>>            * conflict with any devices.
>>            */
>>          if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>> +               unsigned int i;
>>                  xen_pfn_t pfn = PFN_DOWN(res->start);
>>
>>                  for (i = 0; i < alloc_pages; i++) {
>> @@ -125,16 +128,17 @@ static int fill_list(unsigned int nr_pages)
>>                  goto err_memremap;
>>          }
>>
>> -       for (i = 0; i < alloc_pages; i++) {
>> -               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
>> -
>> -               pg->zone_device_data = page_list;
>> -               page_list = pg;
>> -               list_count++;
>> +       ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
>> +                       alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
>> +       if (ret) {
>> +               pr_err("Cannot add memory range to the pool\n");
>> +               goto err_pool;
>>          }
>>
>>          return 0;
>>
>> +err_pool:
>> +       memunmap_pages(pgmap);
>>   err_memremap:
>>          kfree(pgmap);
>>   err_pgmap:
>> @@ -149,51 +153,49 @@ static int fill_list(unsigned int nr_pages)
>>          return ret;
>>   }
>>
>> -/**
>> - * xen_alloc_unpopulated_pages - alloc unpopulated pages
>> - * @nr_pages: Number of pages
>> - * @pages: pages returned
>> - * @return 0 on success, error otherwise
>> - */
>> -int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>> +static int alloc_unpopulated_pages(unsigned int nr_pages, struct page
>> **pages,
>> +               bool contiguous)
>>   {
>>          unsigned int i;
>>          int ret = 0;
>> +       void *vaddr;
>> +       bool filled = false;
>>
>>          /*
>>           * Fallback to default behavior if we do not have any suitable
>> resource
>>           * to allocate required region from and as the result we won't be able
>> to
>>           * construct pages.
>>           */
>> -       if (!target_resource)
>> +       if (!target_resource) {
>> +               if (contiguous)
>> +                       return -ENODEV;
>> +
>>                  return xen_alloc_ballooned_pages(nr_pages, pages);
>> +       }
>>
>>          mutex_lock(&list_lock);
>> -       if (list_count < nr_pages) {
>> -               ret = fill_list(nr_pages - list_count);
>> +
>> +       while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages *
>> PAGE_SIZE))) {
>> +               if (filled)
>> +                       ret = -ENOMEM;
>> +               else {
>> +                       ret = fill_list(nr_pages);
>> +                       filled = true;
>> +               }
>>                  if (ret)
>>                          goto out;
>>          }
>>
>>          for (i = 0; i < nr_pages; i++) {
>> -               struct page *pg = page_list;
>> -
>> -               BUG_ON(!pg);
>> -               page_list = pg->zone_device_data;
>> -               list_count--;
>> -               pages[i] = pg;
>> +               pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>>
>>   #ifdef CONFIG_XEN_HAVE_PVMMU
>>                  if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>> -                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
>> +                       ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
>>                          if (ret < 0) {
>> -                               unsigned int j;
>> -
>> -                               for (j = 0; j <= i; j++) {
>> - pages[j]->zone_device_data = page_list;
>> -                                       page_list = pages[j];
>> -                                       list_count++;
>> -                               }
>> +                               /* XXX Do we need to zeroed pages[i]? */
>> +                               gen_pool_free(dma_pool, (unsigned long)vaddr,
>> +                                               nr_pages * PAGE_SIZE);
>>                                  goto out;
>>                          }
>>                  }
>> @@ -204,32 +206,89 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages,
>> struct page **pages)
>>          mutex_unlock(&list_lock);
>>          return ret;
>>   }
>> -EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>>
>> -/**
>> - * xen_free_unpopulated_pages - return unpopulated pages
>> - * @nr_pages: Number of pages
>> - * @pages: pages to return
>> - */
>> -void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>> +static void free_unpopulated_pages(unsigned int nr_pages, struct page
>> **pages,
>> +               bool contiguous)
>>   {
>> -       unsigned int i;
>> -
>>          if (!target_resource) {
>> +               if (contiguous)
>> +                       return;
>> +
>>                  xen_free_ballooned_pages(nr_pages, pages);
>>                  return;
>>          }
>>
>>          mutex_lock(&list_lock);
>> -       for (i = 0; i < nr_pages; i++) {
>> -               pages[i]->zone_device_data = page_list;
>> -               page_list = pages[i];
>> -               list_count++;
>> +
>> +       /* XXX Do we need to check the range (gen_pool_has_addr)? */
>> +       if (contiguous)
>> +               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
>> +                               nr_pages * PAGE_SIZE);
>> +       else {
>> +               unsigned int i;
>> +
>> +               for (i = 0; i < nr_pages; i++)
>> +                       gen_pool_free(dma_pool, (unsigned
>> long)page_to_virt(pages[i]),
>> +                                       PAGE_SIZE);
>>          }
>> +
>>          mutex_unlock(&list_lock);
>>   }
>> +
>> +/**
>> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
>> + * @nr_pages: Number of pages
>> + * @pages: pages returned
>> + * @return 0 on success, error otherwise
>> + */
>> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>> +{
>> +       return alloc_unpopulated_pages(nr_pages, pages, false);
>> +}
>> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>> +
>> +/**
>> + * xen_free_unpopulated_pages - return unpopulated pages
>> + * @nr_pages: Number of pages
>> + * @pages: pages to return
>> + */
>> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>> +{
>> +       free_unpopulated_pages(nr_pages, pages, false);
>> +}
>>   EXPORT_SYMBOL(xen_free_unpopulated_pages);
>>
>> +/**
>> + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
>> + * @dev: valid struct device pointer
>> + * @nr_pages: Number of pages
>> + * @pages: pages returned
>> + * @return 0 on success, error otherwise
>> + */
>> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int
>> nr_pages,
>> +               struct page **pages)
>> +{
>> +       /* XXX Handle devices which support 64-bit DMA address only for now */
>> +       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
>> +               return -EINVAL;
>> +
>> +       return alloc_unpopulated_pages(nr_pages, pages, true);
>> +}
>> +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
>> +
>> +/**
>> + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
>> + * @dev: valid struct device pointer
>> + * @nr_pages: Number of pages
>> + * @pages: pages to return
>> + */
>> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int
>> nr_pages,
>> +               struct page **pages)
>> +{
>> +       free_unpopulated_pages(nr_pages, pages, true);
>> +}
>> +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
>> +
>>   static int __init unpopulated_init(void)
>>   {
>>          int ret;
>> @@ -237,9 +296,19 @@ static int __init unpopulated_init(void)
>>          if (!xen_domain())
>>                  return -ENODEV;
>>
>> +       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
>> +       if (!dma_pool) {
>> +               pr_err("xen:unpopulated: Cannot create DMA pool\n");
>> +               return -ENOMEM;
>> +       }
>> +
>> +       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
>> +
>>          ret = arch_xen_unpopulated_init(&target_resource);
>>          if (ret) {
>>                  pr_err("xen:unpopulated: Cannot initialize target
>> resource\n");
>> +               gen_pool_destroy(dma_pool);
>> +               dma_pool = NULL;
>>                  target_resource = NULL;
>>          }
>>
>> [snip]
>>
>>
>> I think, regarding on the approach we would likely need to do some renaming
>> for fill_list, page_list, list_lock, etc.
>>
>>
>> Both options work in my Arm64 based environment, not sure regarding x86.
>> Or do we have another option here?
>> I would be happy to go any route. What do you think?
> The second option (use "dma_pool" for all) looks great, thank you for
> looking into it!


ok, great


May I please clarify a few points before I start preparing the non-RFC 
version:


1. According to the discussion at "[RFC PATCH 2/2] xen/grant-table: Use 
unpopulated DMAable pages instead of real RAM ones" we decided to stay 
away from "dma" in the names. Also, the second option (using "dma_pool" 
for everything) implies dropping "page_list" entirely, so I am going to 
do some renaming:

- s/xen_alloc_unpopulated_dma_pages()/xen_alloc_unpopulated_contiguous_pages()
- s/dma_pool/unpopulated_pool
- s/list_lock/pool_lock
- s/fill_list()/fill_pool()

Any objections?


2. I don't much like the fact that in free_unpopulated_pages() we have 
to free page by page when contiguous is false, but unfortunately we 
cannot avoid doing that.
I noticed that many users of unpopulated pages retain the initially 
allocated pages[] array, so it is passed here unmodified since being 
allocated, but there is code (for example, gnttab_page_cache_shrink() 
in grant-table.c) that can pass a pages[] array containing arbitrary 
pages.

static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
        bool contiguous)
{

[snip]

     /* XXX Do we need to check the range (gen_pool_has_addr)? */
     if (contiguous)
         gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
                 nr_pages * PAGE_SIZE);
     else {
         unsigned int i;

         for (i = 0; i < nr_pages; i++)
             gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
                     PAGE_SIZE);
     }

[snip]

}

I think it wouldn't be a big deal for small allocations, but for big 
allocations it might not be optimal for speed.

What do you think about updating some places that always require big 
allocations to allocate (and free) contiguous pages instead?
A possible candidate is 
gem_create()/xen_drm_front_gem_free_object_unlocked() in 
drivers/gpu/drm/xen/xen_drm_front_gem.c.
OTOH I realize this might be an inefficient use of resources. Or is it 
better not to?


3. alloc_unpopulated_pages() might be optimized for non-contiguous 
allocations; currently we always try to allocate a single chunk even 
when contiguous is false.

static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
        bool contiguous)
{

[snip]

     /* XXX: Optimize for non-contiguous allocations */
    while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
         if (filled)
             ret = -ENOMEM;
         else {
             ret = fill_list(nr_pages);
             filled = true;
         }
         if (ret)
             goto out;
     }

[snip]

}


But we can allocate page by page for non-contiguous allocations; it 
might not be optimal for speed, but it would be optimal for resource 
usage. What do you think?

static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
        bool contiguous)
{

[snip]

     if (contiguous) {
        while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
             if (filled)
                 ret = -ENOMEM;
             else {
                 ret = fill_list(nr_pages);
                 filled = true;
             }
             if (ret)
                 goto out;
         }

         for (i = 0; i < nr_pages; i++)
             pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
     } else {
         if (gen_pool_avail(dma_pool) < nr_pages) {
             ret = fill_list(nr_pages - gen_pool_avail(dma_pool));
             if (ret)
                 goto out;
         }

         for (i = 0; i < nr_pages; i++) {
             vaddr = (void *)gen_pool_alloc(dma_pool, PAGE_SIZE);
             if (!vaddr) {
                 while (i--)
                     gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
                             PAGE_SIZE);

                 ret = -ENOMEM;
                 goto out;
             }

             pages[i] = virt_to_page(vaddr);
         }
     }

[snip]

}
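
The page-by-page rollback above can be sketched in plain userspace C. This is only an illustration of the pattern, not kernel code: pool_alloc()/pool_free() are invented stand-ins for gen_pool_alloc()/gen_pool_free(), and each "page" is just a slot in a tiny fixed-size pool.

```c
#include <stdbool.h>
#include <stddef.h>

#define POOL_CAP 4

/* Toy fixed-size pool standing in for gen_pool; each slot is one "page". */
static bool slot_used[POOL_CAP];

static void *pool_alloc(void)
{
    for (int i = 0; i < POOL_CAP; i++) {
        if (!slot_used[i]) {
            slot_used[i] = true;
            return &slot_used[i];   /* any unique address will do here */
        }
    }
    return NULL;                    /* pool exhausted */
}

static void pool_free(void *p)
{
    *(bool *)p = false;
}

/*
 * Allocate nr "pages" one by one; if any allocation fails, free the
 * pages already taken and fail the whole request -- the same rollback
 * as the proposed non-contiguous path in alloc_unpopulated_pages().
 */
static int alloc_pages_one_by_one(unsigned int nr, void **pages)
{
    for (unsigned int i = 0; i < nr; i++) {
        pages[i] = pool_alloc();
        if (!pages[i]) {
            while (i--)
                pool_free(pages[i]);
            return -1;              /* stand-in for -ENOMEM */
        }
    }
    return 0;
}
```

The important property is that a failed request leaves the pool exactly as it found it, so a later, smaller request can still succeed.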


Thank you.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 17:53:11 2022
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com>
 <1652810658-27810-3-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop>
 <7f886dfb-2b42-bc70-d55f-14ecd8144e3e@gmail.com>
 <alpine.DEB.2.22.394.2206101644210.756493@ubuntu-linux-20-04-desktop>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <1266f8cb-bbd6-d952-3108-89665ce76fec@gmail.com>
Date: Tue, 14 Jun 2022 20:53:00 +0300
In-Reply-To: <alpine.DEB.2.22.394.2206101644210.756493@ubuntu-linux-20-04-desktop>


On 11.06.22 02:55, Stefano Stabellini wrote:

Hello Stefano

> On Thu, 9 Jun 2022, Oleksandr wrote:
>> On 04.06.22 00:19, Stefano Stabellini wrote:
>> Hello Stefano
>>
>> Thank you for having a look and sorry for the late response.
>>
>>> On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled then unpopulated
>>>> DMAable (contiguous) pages will be allocated for grant mapping into
>>>> instead of ballooning out real RAM pages.
>>>>
>>>> TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
>>>> fails.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> ---
>>>>    drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
>>>>    1 file changed, 27 insertions(+)
>>>>
>>>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>>>> index 8ccccac..2bb4392 100644
>>>> --- a/drivers/xen/grant-table.c
>>>> +++ b/drivers/xen/grant-table.c
>>>> @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
>>>>     */
>>>>    int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>>>    {
>>>> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>>> +	int ret;
>>> This is an alternative implementation of the same function.
>> Currently, yes.
>>
>>
>>>    If we are
>>> going to use #ifdef, then I would #ifdef the entire function, rather
>>> than just the body. Otherwise within the function body we can use
>>> IS_ENABLED.
>>
>> Good point. Note, there is one missing thing in current patch which is
>> described in TODO.
>>
>> "Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages() fails."  So I
>> will likely use IS_ENABLED within the function body.
>>
>> If CONFIG_XEN_UNPOPULATED_ALLOC is enabled then gnttab_dma_alloc_pages() will
>> try to call xen_alloc_unpopulated_dma_pages() the first and if fails then
>> fallback to allocate RAM pages and balloon them out.
>>
>> One moment is not entirely clear to me. If we use fallback in
>> gnttab_dma_alloc_pages() then we must use fallback in gnttab_dma_free_pages()
>> as well, we cannot use xen_free_unpopulated_dma_pages() for real RAM pages.
>> The question is how to pass this information to the gnttab_dma_free_pages()?
>> The first idea which comes to mind is to add a flag to struct
>> gnttab_dma_alloc_args...
>   
> You can check if the page is within the mhp_range range or part of
> iomem_resource? If not, you can free it as a normal page.
>
> If we do this, then the fallback is better implemented in
> unpopulated-alloc.c because that is the one that is aware about
> page addresses.


I got your idea and agree it can work technically. Alternatively, if we finally 
decide to use the second option (use "dma_pool" for everything) in the first patch
"[RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA 
allocations", then we will likely be able to check whether the page in question
is within the "dma_pool" using gen_pool_has_addr().

I am still wondering whether we can avoid the fallback implementation in 
unpopulated-alloc.c.
For that purpose we would need to pull more code into 
unpopulated-alloc.c (to be more precise, almost everything that 
gnttab_dma_free_pages() already has except gnttab_pages_clear_private()) 
and pass more arguments to xen_free_unpopulated_dma_pages(). Also, I 
might be mistaken, but having a fallback split between grant-table.c (to 
allocate RAM pages in gnttab_dma_alloc_pages()) and unpopulated-alloc.c 
(to free RAM pages in xen_free_unpopulated_dma_pages()) would look a bit 
weird.

I see two possible options for the fallback implementation in grant-table.c:
1. (less preferable) introduce a new flag in struct gnttab_dma_alloc_args
2. (more preferable) guess whether a page is unpopulated (not real RAM) using 
is_zone_device_page(), etc.


For example, with the second option the resulting code looks quite 
simple (only build-tested):

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 738029d..3bda71f 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1047,6 +1047,23 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
         size_t size;
         int i, ret;

+       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) {
+               ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
+                               args->pages);
+               if (ret < 0)
+                       goto fallback;
+
+               ret = gnttab_pages_set_private(args->nr_pages, args->pages);
+               if (ret < 0)
+                       goto fail;
+
+               args->vaddr = page_to_virt(args->pages[0]);
+               args->dev_bus_addr = page_to_phys(args->pages[0]);
+
+               return ret;
+       }
+
+fallback:
         size = args->nr_pages << PAGE_SHIFT;
         if (args->coherent)
                 args->vaddr = dma_alloc_coherent(args->dev, size,
@@ -1103,6 +1120,12 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)

         gnttab_pages_clear_private(args->nr_pages, args->pages);

+       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC) &&
+                       is_zone_device_page(args->pages[0])) {
+               xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);
+               return 0;
+       }
+
         for (i = 0; i < args->nr_pages; i++)
                 args->frames[i] = page_to_xen_pfn(args->pages[i]);


What do you think?
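
For what it's worth, the "guess at free time" idea can also be illustrated in plain userspace C. Everything here is invented for the sketch (buf_alloc(), from_pool(), the pool itself); from_pool() plays the role that gen_pool_has_addr() or is_zone_device_page() would play in the kernel: the free path dispatches on where the address came from, so no flag has to be carried from alloc to free.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy "special" region standing in for the unpopulated/dma_pool pages. */
static unsigned char special_pool[4096];
static size_t pool_off;
static bool force_fallback;         /* lets the demo exercise both paths */

static void *buf_alloc(size_t n)
{
    if (!force_fallback && pool_off + n <= sizeof(special_pool)) {
        void *p = special_pool + pool_off;
        pool_off += n;
        return p;                   /* preferred, "unpopulated" path */
    }
    return malloc(n);               /* fallback: ordinary memory */
}

/* Userspace analogue of gen_pool_has_addr()/is_zone_device_page(). */
static bool from_pool(const void *p)
{
    const unsigned char *c = p;
    return c >= special_pool && c < special_pool + sizeof(special_pool);
}

static void buf_free(void *p)
{
    if (from_pool(p)) {
        /* would be returned to the pool; this bump allocator cannot */
    } else {
        free(p);
    }
}
```

(Comparing an unrelated malloc'd pointer against the pool bounds is technically UB in standard C, but it conveys the dispatch idea; the kernel-side checks are well-defined.)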


>
>   
>   
>>>> +	ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
>>>> +			args->pages);
>>>> +	if (ret < 0)
>>>> +		return ret;
>>>> +
>>>> +	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
>>>> +	if (ret < 0) {
>>>> +		gnttab_dma_free_pages(args);
>>> it should xen_free_unpopulated_dma_pages ?
>> Besides calling the xen_free_unpopulated_dma_pages(), we also need to call
>> gnttab_pages_clear_private() here, this is what gnttab_dma_free_pages() is
>> doing.
>>
>> I can change to call both function instead:
>>
>>      gnttab_pages_clear_private(args->nr_pages, args->pages);
>>      xen_free_unpopulated_dma_pages(args->dev, args->nr_pages, args->pages);
>>
>> Shall I?
> No, leave it as is. I didn't realize that gnttab_pages_set_private can
> fail half-way through.


ok, thank you for the confirmation.


>
>   
>>>
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	args->vaddr = page_to_virt(args->pages[0]);
>>>> +	args->dev_bus_addr = page_to_phys(args->pages[0]);
>>> There are two things to note here.
>>>
>>> The first thing to note is that normally we would call pfn_to_bfn to
>>> retrieve the dev_bus_addr of a page because pfn_to_bfn takes into
>>> account foreign mappings. However, these are freshly allocated pages
>>> without foreign mappings, so page_to_phys/dma should be sufficient.
>> agree
>>
>>
>>>
>>> The second has to do with physical addresses and DMA addresses. The
>>> functions are called gnttab_dma_alloc_pages and
>>> xen_alloc_unpopulated_dma_pages which make you think we are retrieving a
>>> DMA address here. However, to get a DMA address we need to call
>>> page_to_dma rather than page_to_phys.
>>>
>>> page_to_dma takes into account special offsets that some devices have
>>> when accessing memory. There are real cases on ARM where the physical
>>> address != DMA address, e.g. RPi4.
>> I got it. Now I am in doubt whether it would be better to name the API:
>>
>> xen_alloc_unpopulated_cma_pages()
>>
>> or
>>
>> xen_alloc_unpopulated_contiguous_pages()
>>
>> What do you think?
> Yeah actually I think it is better to stay away from "dma" in the name.
> I like xen_alloc_unpopulated_contiguous_pages().


perfect, I will rename then, thank you for the confirmation.


>   
>   
>>> However, to call page_to_dma you need to specify as first argument the
>>> DMA-capable device that is expected to use those pages for DMA (e.g. an
>>> ethernet device or a MMC controller.) While the args->dev we have in
>>> gnttab_dma_alloc_pages is the gntdev_miscdev.
>> agree
>>
>> As I understand, at this time it is unknown for what exactly device these
>> pages are supposed to be used at the end.
>>
>> For now, it is only known that these pages to be used by userspace PV backend
>> for grant mappings.
> Yeah
>   
>
>>> So this interface cannot actually be used to allocate memory that is
>>> supposed to be DMA-able by a DMA-capable device, such as an ethernet
>>> device.
>> agree
>>
>>
>>> But I think that should be fine because the memory is meant to be used
>>> by a userspace PV backend for grant mappings. If any of those mappings
>>> end up being used for actual DMA in the kernel they should go through the
>>> drivers/xen/swiotlb-xen.c and xen_phys_to_dma should be called, which
>>> ends up calling page_to_dma as appropriate.
>>>
>>> It would be good to double-check that the above is correct and, if so,
>>> maybe add a short in-code comment about it:
>>>
>>> /*
>>>    * These are not actually DMA addresses but regular physical addresses.
>>>    * If these pages end up being used in a DMA operation then the
>>>    * swiotlb-xen functions are called and xen_phys_to_dma takes care of
>>>    * the address translations:
>>>    *
>>>    * - from gfn to bfn in case of foreign mappings
>>>    * - from physical to DMA addresses in case the two are different for a
>>>    *   given DMA-mastering device
>>>    */
>> I agree this needs to be re-checked. But, there is one moment here, if
>> userspace PV backend runs in other than Dom0 domain (non 1:1 mapped domain),
>> the xen-swiotlb seems not to be in use then? How to be in this case?
>   
> In that case, an IOMMU is required. If an IOMMU is setup correct, then
> the gfn->bfn translation is not necessary because it is done
> automatically by the IOMMU. That is because when the foreign page is
> mapped in the domain, the mapping also applies to the IOMMU pagetable.
>
> So the device is going to do DMA to "gfn" and the IOMMU will translate
> it to the right "mfn", the one corresponding to "bfn".
>
> The physical to DMA address should be done automatically by the default
> (non-swiotlb_xen) dma_ops in Linux. E.g.
> kernel/dma/direct.c:dma_direct_map_sg correctly calls
> dma_direct_map_page, which calls phys_to_dma.


Thank you for the explanation.
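
As a side note, in the simple dma-ranges case the phys-to-DMA translation Stefano mentions is just a constant per-device offset. A rough model (a simplification of what the direct dma_ops do; the struct, function name, and offset value below are invented for the example):

```c
#include <stdint.h>

typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

/*
 * Simplified model of the phys->DMA translation done by the direct
 * dma_ops: the bus sees RAM shifted by a per-device constant offset.
 */
struct dev_dma_info {
    int64_t dma_offset;             /* bus address minus physical address */
};

static dma_addr_t example_phys_to_dma(const struct dev_dma_info *dev,
                                      phys_addr_t paddr)
{
    return (dma_addr_t)((int64_t)paddr + dev->dma_offset);
}
```

On platforms like RPi4 this offset is non-zero for some masters, which is exactly why page_to_phys() alone is not a DMA address in general.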


>   
>   
>   
>>>> +	return ret;
>>>> +#else
>>>>    	unsigned long pfn, start_pfn;
>>>>    	size_t size;
>>>>    	int i, ret;
>>>>    @@ -910,6 +929,7 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>>>    fail:
>>>>    	gnttab_dma_free_pages(args);
>>>>    	return ret;
>>>> +#endif
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>>>>    @@ -919,6 +939,12 @@ EXPORT_SYMBOL_GPL(gnttab_dma_alloc_pages);
>>>>     */
>>>>    int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>>>>    {
>>>> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>>> +	gnttab_pages_clear_private(args->nr_pages, args->pages);
>>>> +	xen_free_unpopulated_dma_pages(args->dev, args->nr_pages,
>>>> args->pages);
>>>> +
>>>> +	return 0;
>>>> +#else
>>>>    	size_t size;
>>>>    	int i, ret;
>>>>    @@ -946,6 +972,7 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>>>>    		dma_free_wc(args->dev, size,
>>>>    			    args->vaddr, args->dev_bus_addr);
>>>>    	return ret;
>>>> +#endif
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(gnttab_dma_free_pages);
>>>>    #endif
>>>> -- 
>>>> 2.7.4

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jun 14 18:27:00 2022
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 404 v1 (CVE-2022-21123,CVE-2022-21124,CVE-2022-21166)
 - x86: MMIO Stale Data vulnerabilities
Message-Id: <E1o1BFN-0008DW-A3@xenbits.xenproject.org>
Date: Tue, 14 Jun 2022 18:26:37 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

 Xen Security Advisory CVE-2022-21123,CVE-2022-21124,CVE-2022-21166 / XSA-404

                 x86: MMIO Stale Data vulnerabilities

ISSUE DESCRIPTION
=================

This issue is related to the SRBDS, TAA and MDS vulnerabilities.  Please
see:

  https://xenbits.xen.org/xsa/advisory-320.html (SRBDS)
  https://xenbits.xen.org/xsa/advisory-305.html (TAA)
  https://xenbits.xen.org/xsa/advisory-297.html (MDS)

Please see Intel's whitepaper:

  https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/processor-mmio-stale-data-vulnerabilities.html

IMPACT
======

An attacker might be able to directly read or infer data from other
security contexts in the system.  This can include data belonging to
other VMs, or to Xen itself.  The degree to which an attacker can obtain
data depends on the CPU, and the system configuration.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Only x86 processors are vulnerable.  Processors from other manufacturers
(e.g. ARM) are not believed to be vulnerable.

Only Intel based processors are affected.  Processors from other x86
manufacturers (e.g. AMD) are not believed to be vulnerable.

Please consult the Intel Security Advisory for details on the affected
processors and configurations.

Per Xen's support statement, PCI passthrough should be to trusted
domains because the overall system security depends on factors outside
of Xen's control.

As such, Xen, in a supported configuration, is not vulnerable to
DRPW/SBDR.

MITIGATION
==========

All mitigations depend on functionality added in the IPU 2022.1 (May
2022) microcode release from Intel.  Consult your dom0 OS vendor.

To the best of the security team's understanding, the summary is as
follows:

Server CPUs (Xeon EP/EX, Scalable, and some Atom servers), excluding
Xeon E3 (which use the client CPU design), are potentially vulnerable to
DRPW (CVE-2022-21166).

Client CPUs (inc Xeon E3) are, furthermore, potentially vulnerable to
SBDR (CVE-2022-21123) and SBDS (CVE-2022-21125).

SBDS only affects CPUs vulnerable to MDS.  On these CPUs, there are
previously undiscovered leakage channels.  There is no change to the
existing MDS mitigations.

DRPW and SBDR only affects configurations where less privileged domains
have MMIO mappings of buggy endpoints.  Consult your hardware vendor.

In configurations where less privileged domains have MMIO access to
buggy endpoints, `spec-ctrl=unpriv-mmio` can be enabled which will cause
Xen to mitigate cross-domain fill buffer leakage, and extend SRBDS
protections to protect RNG data from leakage.
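
For illustration only, on a GRUB-booted system the option would typically be appended to the hypervisor command line; the file path and the other options shown are examples, not part of this advisory:

```shell
# /etc/default/grub: add the option to the Xen (hypervisor) command line,
# then regenerate grub.cfg (e.g. with update-grub) and reboot.
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M spec-ctrl=unpriv-mmio"
```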

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

The patches are still under review.  An update will be sent once they
are reviewed and the backports are done.

xsa404/xsa404-?.patch           xen-unstable

$ sha256sum xsa404*/*
18b307c2cbbd08d568e9dcb2447901d94e22ff1e3945c3436173aa693f6456fb  xsa404/xsa404-1.patch
d6f193ad963396285e983aa1c18539f67222582711fc62105c21b71b3b53a97d  xsa404/xsa404-2.patch
d2c123ccdf5eb9f862d6e9cb0e59045ae18799a07db149c7d90e301ca20436aa  xsa404/xsa404-3.patch
$

NOTE CONCERNING CVE-2022-21127 / Update to SRBDS
================================================

An issue was discovered with the SRBDS microcode mitigation.  A
microcode update was released as part of Intel's IPU 2022.1 in May 2022.

Updating microcode is sufficient to fix the issue, with no extra actions
required on Xen's behalf.  Consult your dom0 OS vendor or OEM for
updated microcode.

NOTE CONCERNING CVE-2022-21180 / Undefined MMIO Hang
====================================================

A related issue was discovered.  See:

  https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/undefined-mmio-hang.html

Xen is not vulnerable to UMH in supported configurations.

The only mitigation is to avoid passing impacted devices through to
untrusted guests.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmKo0Z0MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZc8cH/RFgxQ4L8OewWMxsuowpgLg8NVyYGFMBgttscBh+
ANpjRTnV4yQGpt9nNFDAcXT1c/fvWhypOiwadEtczRl5k/Q96JOKFdiAc1QR35Oj
vmbCLgO20jQ/GdTzaqKUaGBwi8GLShJvH1zMPJ2KuXk5w5uFDhj2gEiB6Kdv9+9O
4FBxQkpDzll0gs5v16ien8btKhEuZj9lNtzXZw5j4+DJD69MvQqsRPVdEt+M17Ox
XGYcpfpLeGUaIUPFTPZDcFIJnMvqPBQyt+2eaeR2ezW2ouNpxepCSPsEDlAmSZ/K
uZA0ShyJD3pfCxjc8eztyF/4zajY5EvuEtWdUZC/3zVaUec=
=4EdA
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCB0aGUgZXhpc3Rp
bmcgbWQtY2xlYXI9IG9wdGlvbnMgY29udHJvbCBWRVJXIGZsdXNoaW5nIGlu
IHRoZSBzYW1lIHdheQphcyBiZWZvcmUsIGJ1dCBsYXRlciBwYXRjaGVzIHdp
bGwgYWRqdXN0IGl0IG9uIGEgcGVyLWRvbWFpbiBiYXNpcy4KCk5vIGNoYW5n
ZSBpbiBiZWhhdmlvdXIuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTQwNC4KClNp
Z25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNp
dHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jIGIvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFu
ZG9jCmluZGV4IDBkMWQ5OGQ3MTViMC4uNDc2OTQyYTQ4YmEwIDEwMDY0NAot
LS0gYS9kb2NzL21pc2MveGVuLWNvbW1hbmQtbGluZS5wYW5kb2MKKysrIGIv
ZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCkBAIC0yMjgxLDEw
ICsyMjgxLDEwIEBAIGluIHBsYWNlIGZvciBndWVzdHMgdG8gdXNlLgogCiBV
c2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZhbHVlIGZvciBlaXRoZXIgb2Yg
dGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgogCi1UaGUgYm9vbGVhbnMgYHB2
PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNiPWAgYW5kIGBtZC1jbGVhcj1g
IG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJvbCBvdmVyIHRoZSBhbHRlcm5h
dGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MK
LWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5
IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzCi10byB1c2UuCitU
aGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNiPWAg
b2ZmZXIgZmluZSBncmFpbmVkIGNvbnRyb2wgb3ZlcgordGhlIGFsdGVybmF0
aXZlIGJsb2NrcyB1c2VkIGJ5IFhlbi4gIChgbWQtY2xlYXI9YCBpcyBubyBs
b25nZXIgYW4gYWx0ZXJuYXRpdmUKK2Jsb2NrLCBhbmQganVzdCBhIG1pdGln
YXRpb24gc2V0dGluZy4pICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
byBwcm90ZWN0CitpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IGE3MmNjOTU1MmFkNi4uOWVkZGVhYTIwYmQ1IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC04NjQsNiArODY0LDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZC0+YXJjaC5t
c3JfcmVsYXhlZCA9IGNvbmZpZy0+YXJjaC5taXNjX2ZsYWdzICYgWEVOX1g4
Nl9NU1JfUkVMQVhFRDsKIAorICAgIHNwZWNfY3RybF9pbml0X2RvbWFpbihk
KTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTIwMTgsMTQgKzIw
MjAsMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2b2lkKQog
dm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0
IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNtcF9w
cm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmluZm8gPSBn
ZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpwcmV2
ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWluOwogICAg
IHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSByZWFkX2F0b21pYygmbmV4dC0+
ZGlydHlfY3B1KTsKIAogICAgIEFTU0VSVChwcmV2ICE9IG5leHQpOwogICAg
IEFTU0VSVChsb2NhbF9pcnFfaXNfZW5hYmxlZCgpKTsKIAotICAgIGdldF9j
cHVfaW5mbygpLT51c2VfcHZfY3IzID0gZmFsc2U7Ci0gICAgZ2V0X2NwdV9p
bmZvKCktPnhlbl9jcjMgPSAwOworICAgIGluZm8tPnVzZV9wdl9jcjMgPSBm
YWxzZTsKKyAgICBpbmZvLT54ZW5fY3IzID0gMDsKIAogICAgIGlmICggdW5s
aWtlbHkoZGlydHlfY3B1ICE9IGNwdSkgJiYgZGlydHlfY3B1ICE9IFZDUFVf
Q1BVX0NMRUFOICkKICAgICB7CkBAIC0yMDg5LDYgKzIwOTIsMTEgQEAgdm9p
ZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0IHZj
cHUgKm5leHQpCiAgICAgICAgICAgICAgICAgKmxhc3RfaWQgPSBuZXh0X2lk
OwogICAgICAgICAgICAgfQogICAgICAgICB9CisKKyAgICAgICAgLyogVXBk
YXRlIHRoZSB0b3Atb2Ytc3RhY2sgYmxvY2sgd2l0aCB0aGUgVkVSVyBkaXNw
b3NpdGlvbi4gKi8KKyAgICAgICAgaW5mby0+c3BlY19jdHJsX2ZsYWdzICY9
IH5TQ0ZfdmVydzsKKyAgICAgICAgaWYgKCBuZXh0ZC0+YXJjaC52ZXJ3ICkK
KyAgICAgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyB8PSBTQ0ZfdmVy
dzsKICAgICB9CiAKICAgICBzY2hlZF9jb250ZXh0X3N3aXRjaGVkKHByZXYs
IG5leHQpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS92bXgvZW50
cnkuUyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKaW5kZXggNDk2
NTFmM2M0MzVhLi41ZjVkZTQ1YTEzMDkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9odm0vdm14L2VudHJ5LlMKKysrIGIveGVuL2FyY2gveDg2L2h2bS92
bXgvZW50cnkuUwpAQCAtODcsNyArODcsNyBAQCBVTkxJS0VMWV9FTkQocmVh
bG1vZGUpCiAKICAgICAgICAgLyogV0FSTklORyEgYHJldGAsIGBjYWxsICpg
LCBgam1wICpgIG5vdCBzYWZlIGJleW9uZCB0aGlzIHBvaW50LiAqLwogICAg
ICAgICAvKiBTUEVDX0NUUkxfRVhJVF9UT19WTVggICBSZXE6ICVyc3A9cmVn
cy9jcHVpbmZvICAgICAgICAgICAgICBDbG9iOiAgICAqLwotICAgICAgICBB
TFRFUk5BVElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndf
c2VsKCVyc3ApKSwgWDg2X0ZFQVRVUkVfU0NfVkVSV19IVk0KKyAgICAgICAg
RE9fU1BFQ19DVFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAgVkNQVV9o
dm1fZ3Vlc3RfY3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmggYi94ZW4vYXJjaC94
ODYvaW5jbHVkZS9hc20vY3B1ZmVhdHVyZXMuaAppbmRleCBmZjMxNTdkNTJk
MTMuLmJkNDVhMTQ0ZWU3OCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKKysrIGIveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKQEAgLTM1LDggKzM1LDcgQEAgWEVO
X0NQVUZFQVRVUkUoU0NfUlNCX0hWTSwgICAgICAgIFg4Nl9TWU5USCgxOSkp
IC8qIFJTQiBvdmVyd3JpdGUgbmVlZGVkIGZvciBIVk0KIFhFTl9DUFVGRUFU
VVJFKFhFTl9TRUxGU05PT1AsICAgICBYODZfU1lOVEgoMjApKSAvKiBTRUxG
U05PT1AgZ2V0cyB1c2VkIGJ5IFhlbiBpdHNlbGYgKi8KIFhFTl9DUFVGRUFU
VVJFKFNDX01TUl9JRExFLCAgICAgICBYODZfU1lOVEgoMjEpKSAvKiAoU0Nf
TVNSX1BWIHx8IFNDX01TUl9IVk0pICYmIGRlZmF1bHRfeGVuX3NwZWNfY3Ry
bCAqLwogWEVOX0NQVUZFQVRVUkUoWEVOX0xCUiwgICAgICAgICAgIFg4Nl9T
WU5USCgyMikpIC8qIFhlbiB1c2VzIE1TUl9ERUJVR0NUTC5MQlIgKi8KLVhF
Tl9DUFVGRUFUVVJFKFNDX1ZFUldfUFYsICAgICAgICBYODZfU1lOVEgoMjMp
KSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBQViAqLwotWEVOX0NQVUZFQVRV
UkUoU0NfVkVSV19IVk0sICAgICAgIFg4Nl9TWU5USCgyNCkpIC8qIFZFUlcg
dXNlZCBieSBYZW4gZm9yIEhWTSAqLworLyogQml0cyAyMywyNCB1bnVzZWQu
ICovCiBYRU5fQ1BVRkVBVFVSRShTQ19WRVJXX0lETEUsICAgICAgWDg2X1NZ
TlRIKDI1KSkgLyogVkVSVyB1c2VkIGJ5IFhlbiBmb3IgaWRsZSAqLwogWEVO
X0NQVUZFQVRVUkUoWEVOX1NIU1RLLCAgICAgICAgIFg4Nl9TWU5USCgyNikp
IC8qIFhlbiB1c2VzIENFVCBTaGFkb3cgU3RhY2tzICovCiBYRU5fQ1BVRkVB
VFVSRShYRU5fSUJULCAgICAgICAgICAgWDg2X1NZTlRIKDI3KSkgLyogWGVu
IHVzZXMgQ0VUIEluZGlyZWN0IEJyYW5jaCBUcmFja2luZyAqLwpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2RvbWFpbi5oIGIveGVu
L2FyY2gveDg2L2luY2x1ZGUvYXNtL2RvbWFpbi5oCmluZGV4IDNhYTA5MTlm
YTZiOC4uN2VmNGNiNzA2ZGFkIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
aW5jbHVkZS9hc20vZG9tYWluLmgKKysrIGIveGVuL2FyY2gveDg2L2luY2x1
ZGUvYXNtL2RvbWFpbi5oCkBAIC0zMTksNiArMzE5LDkgQEAgc3RydWN0IGFy
Y2hfZG9tYWluCiAgICAgdWludDMyX3QgcGNpX2NmODsKICAgICB1aW50OF90
IGNtb3NfaWR4OwogCisgICAgLyogVXNlIFZFUlcgb24gcmV0dXJuLXRvLWd1
ZXN0IGZvciBpdHMgZmx1c2hpbmcgc2lkZSBlZmZlY3QuICovCisgICAgYm9v
bCB2ZXJ3OworCiAgICAgdW5pb24gewogICAgICAgICBzdHJ1Y3QgcHZfZG9t
YWluIHB2OwogICAgICAgICBzdHJ1Y3QgaHZtX2RvbWFpbiBodm07CmRpZmYg
LS1naXQgYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgg
Yi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKaW5kZXgg
Zjc2MDI5NTIzNjEwLi43NTEzNTVmNDcxZjQgMTAwNjQ0Ci0tLSBhL3hlbi9h
cmNoL3g4Ni9pbmNsdWRlL2FzbS9zcGVjX2N0cmwuaAorKysgYi94ZW4vYXJj
aC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKQEAgLTI0LDYgKzI0LDcg
QEAKICNkZWZpbmUgU0NGX3VzZV9zaGFkb3cgKDEgPDwgMCkKICNkZWZpbmUg
U0NGX2lzdF93cm1zciAgKDEgPDwgMSkKICNkZWZpbmUgU0NGX2lzdF9yc2Ig
ICAgKDEgPDwgMikKKyNkZWZpbmUgU0NGX3ZlcncgICAgICAgKDEgPDwgMykK
IAogI2lmbmRlZiBfX0FTU0VNQkxZX18KIApAQCAtMzIsNiArMzMsNyBAQAog
I2luY2x1ZGUgPGFzbS9tc3ItaW5kZXguaD4KIAogdm9pZCBpbml0X3NwZWN1
bGF0aW9uX21pdGlnYXRpb25zKHZvaWQpOwordm9pZCBzcGVjX2N0cmxfaW5p
dF9kb21haW4oc3RydWN0IGRvbWFpbiAqZCk7CiAKIGV4dGVybiBib29sIG9w
dF9pYnBiOwogZXh0ZXJuIGJvb2wgb3B0X3NzYmQ7CmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsX2FzbS5oIGIveGVu
L2FyY2gveDg2L2luY2x1ZGUvYXNtL3NwZWNfY3RybF9hc20uaAppbmRleCAw
MmIzYjE4Y2U2OWYuLjYyMjA2MjE3OTdmMiAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL3NwZWNfY3RybF9hc20uaAorKysgYi94ZW4v
YXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsX2FzbS5oCkBAIC0xMzYs
NiArMTM2LDE4IEBACiAjZW5kaWYKIC5lbmRtCiAKKy5tYWNybyBET19TUEVD
X0NUUkxfQ09ORF9WRVJXCisvKgorICogUmVxdWlyZXMgJXJzcD1jcHVpbmZv
CisgKgorICogSXNzdWUgYSBWRVJXIGZvciBpdHMgZmx1c2hpbmcgc2lkZSBl
ZmZlY3QgaWYgaW5kaWNhdGVkLgorICovCisgICAgdGVzdGIgJFNDRl92ZXJ3
LCBDUFVJTkZPX3NwZWNfY3RybF9mbGFncyglcnNwKQorICAgIGp6IC5MXEBf
dmVyd19za2lwCisgICAgdmVydyBDUFVJTkZPX3Zlcndfc2VsKCVyc3ApCisu
TFxAX3Zlcndfc2tpcDoKKy5lbmRtCisKIC5tYWNybyBET19TUEVDX0NUUkxf
RU5UUlkgbWF5YmV4ZW46cmVxCiAvKgogICogUmVxdWlyZXMgJXJzcD1yZWdz
IChhbHNvIGNwdWluZm8gaWYgIW1heWJleGVuKQpAQCAtMjMxLDggKzI0Myw3
IEBACiAjZGVmaW5lIFNQRUNfQ1RSTF9FWElUX1RPX1BWICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgQUxURVJO
QVRJVkUgIiIsICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCiAgICAgICAgIERPX1NQRUNfQ1RSTF9FWElU
X1RPX0dVRVNULCBYODZfRkVBVFVSRV9TQ19NU1JfUFY7ICAgICAgICAgICAg
ICBcCi0gICAgQUxURVJOQVRJVkUgIiIsIF9fc3RyaW5naWZ5KHZlcncgQ1BV
SU5GT192ZXJ3X3NlbCglcnNwKSksICAgICAgICAgICBcCi0gICAgICAgIFg4
Nl9GRUFUVVJFX1NDX1ZFUldfUFYKKyAgICBET19TUEVDX0NUUkxfQ09ORF9W
RVJXCiAKIC8qCiAgKiBVc2UgaW4gSVNUIGludGVycnVwdC9leGNlcHRpb24g
Y29udGV4dC4gIE1heSBpbnRlcnJ1cHQgWGVuIG9yIFBWIGNvbnRleHQuCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4vYXJj
aC94ODYvc3BlY19jdHJsLmMKaW5kZXggMTQwOGU0YzdhYmQwLi41ZDUwZWM3
ZWVlYmEgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwor
KysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTM2LDggKzM2LDgg
QEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBvcHRfbXNyX3NjX3B2ID0gdHJ1
ZTsKIHN0YXRpYyBib29sIF9faW5pdGRhdGEgb3B0X21zcl9zY19odm0gPSB0
cnVlOwogc3RhdGljIGludDhfdCBfX2luaXRkYXRhIG9wdF9yc2JfcHYgPSAt
MTsKIHN0YXRpYyBib29sIF9faW5pdGRhdGEgb3B0X3JzYl9odm0gPSB0cnVl
Owotc3RhdGljIGludDhfdCBfX2luaXRkYXRhIG9wdF9tZF9jbGVhcl9wdiA9
IC0xOwotc3RhdGljIGludDhfdCBfX2luaXRkYXRhIG9wdF9tZF9jbGVhcl9o
dm0gPSAtMTsKK3N0YXRpYyBpbnQ4X3QgX19yb19hZnRlcl9pbml0IG9wdF9t
ZF9jbGVhcl9wdiA9IC0xOworc3RhdGljIGludDhfdCBfX3JvX2FmdGVyX2lu
aXQgb3B0X21kX2NsZWFyX2h2bSA9IC0xOwogCiAvKiBDbWRsaW5lIGNvbnRy
b2xzIGZvciBYZW4ncyBzcGVjdWxhdGl2ZSBzZXR0aW5ncy4gKi8KIHN0YXRp
YyBlbnVtIGluZF90aHVuayB7CkBAIC05MzMsNiArOTMzLDEzIEBAIHN0YXRp
YyBfX2luaXQgdm9pZCBtZHNfY2FsY3VsYXRpb25zKHVpbnQ2NF90IGNhcHMp
CiAgICAgfQogfQogCit2b2lkIHNwZWNfY3RybF9pbml0X2RvbWFpbihzdHJ1
Y3QgZG9tYWluICpkKQoreworICAgIGJvb2wgcHYgPSBpc19wdl9kb21haW4o
ZCk7CisKKyAgICBkLT5hcmNoLnZlcncgPSBwdiA/IG9wdF9tZF9jbGVhcl9w
diA6IG9wdF9tZF9jbGVhcl9odm07Cit9CisKIHZvaWQgX19pbml0IGluaXRf
c3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKIHsKICAgICBlbnVtIGlu
ZF90aHVuayB0aHVuayA9IFRIVU5LX0RFRkFVTFQ7CkBAIC0xMTk3LDIxICsx
MjA0LDIwIEBAIHZvaWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdh
dGlvbnModm9pZCkKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib290
X2NwdV9oYXMoWDg2X0ZFQVRVUkVfTURfQ0xFQVIpKTsKIAogICAgIC8qCi0g
ICAgICogRW5hYmxlIE1EUyBkZWZlbmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhl
IFBWIGJsb2NrcyBuZWVkIHVzaW5nIGFsbCB0aGUKLSAgICAgKiB0aW1lLCBh
bmQgdGhlIElkbGUgYmxvY2tzIG5lZWQgdXNpbmcgaWYgZWl0aGVyIFBWIG9y
IEhWTSBkZWZlbmNlcyBhcmUKLSAgICAgKiB1c2VkLgorICAgICAqIEVuYWJs
ZSBNRFMgZGVmZW5jZXMgYXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2Nr
cyBuZWVkIHVzaW5nIGlmCisgICAgICogZWl0aGVyIFBWIG9yIEhWTSBkZWZl
bmNlcyBhcmUgdXNlZC4KICAgICAgKgogICAgICAqIEhWTSBpcyBtb3JlIGNv
bXBsaWNhdGVkLiAgVGhlIE1EX0NMRUFSIG1pY3JvY29kZSBleHRlbmRzIEwx
RF9GTFVTSCB3aXRoCiAgICAgICogZXF1aXZlbGVudCBzZW1hbnRpY3MgdG8g
YXZvaWQgbmVlZGluZyB0byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUK
LSAgICAgKiBIVk0gcGF0aC4gIFRoZSBIVk0gYmxvY2tzIGRvbid0IG5lZWQg
YWN0aXZhdGluZyBpZiBvdXIgaHlwZXJ2aXNvciB0b2xkCi0gICAgICogdXMg
aXQgd2FzIGhhbmRsaW5nIEwxRF9GTFVTSCwgb3Igd2UgYXJlIHVzaW5nIEwx
RF9GTFVTSCBvdXJzZWx2ZXMuCisgICAgICogSFZNIHBhdGguICBUaGVyZWZv
cmUsIHdlIGRvbid0IG5lZWQgVkVSVyBpbiBhZGRpdGlvbiB0byBMMURfRkxV
U0guCisgICAgICoKKyAgICAgKiBBZnRlciBjYWxjdWxhdGluZyB0aGUgYXBw
cm9wcmlhdGUgaWRsZSBzZXR0aW5nLCBzaW1wbGlmeQorICAgICAqIG9wdF9t
ZF9jbGVhcl9odm0gdG8gbWVhbiBqdXN0ICJzaG91bGQgd2UgVkVSVyBvbiB0
aGUgd2F5IGludG8gSFZNCisgICAgICogZ3Vlc3RzIiwgc28gc3BlY19jdHJs
X2luaXRfZG9tYWluKCkgY2FuIGNhbGN1bGF0ZSBzdWl0YWJsZSBzZXR0aW5n
cy4KICAgICAgKi8KLSAgICBpZiAoIG9wdF9tZF9jbGVhcl9wdiApCi0gICAg
ICAgIHNldHVwX2ZvcmNlX2NwdV9jYXAoWDg2X0ZFQVRVUkVfU0NfVkVSV19Q
Vik7CiAgICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFy
X2h2bSApCiAgICAgICAgIHNldHVwX2ZvcmNlX2NwdV9jYXAoWDg2X0ZFQVRV
UkVfU0NfVkVSV19JRExFKTsKLSAgICBpZiAoIG9wdF9tZF9jbGVhcl9odm0g
JiYgIShjYXBzICYgQVJDSF9DQVBTX1NLSVBfTDFERkwpICYmICFvcHRfbDFk
X2ZsdXNoICkKLSAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVB
VFVSRV9TQ19WRVJXX0hWTSk7CisgICAgb3B0X21kX2NsZWFyX2h2bSAmPSAh
KGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9MMURGTCkgJiYgIW9wdF9sMWRfZmx1
c2g7CiAKICAgICAvKgogICAgICAqIFdhcm4gdGhlIHVzZXIgaWYgdGhleSBh
cmUgb24gTUxQRFMvTUZCRFMtdnVsbmVyYWJsZSBoYXJkd2FyZSB3aXRoIEhU
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwpNSU1FLVZlcnNpb246IDEuMApDb250
ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9VVRGLTgKQ29udGVudC1U
cmFuc2Zlci1FbmNvZGluZzogOGJpdAoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaCBiL3hl
bi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaAppbmRleCA2YzI1
MGJmY2FkYWQuLmVhNDdmNjhkMDU1OCAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L2luY2x1ZGUvYXNtL21zci1pbmRleC5oCisrKyBiL3hlbi9hcmNoL3g4
Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaApAQCAtNzEsNiArNzEsMTEgQEAK
ICNkZWZpbmUgIEFSQ0hfQ0FQU19JRl9QU0NIQU5HRV9NQ19OTyAgICAgICAg
KF9BQygxLCBVTEwpIDw8ICA2KQogI2RlZmluZSAgQVJDSF9DQVBTX1RTWF9D
VFJMICAgICAgICAgICAgICAgICAoX0FDKDEsIFVMTCkgPDwgIDcpCiAjZGVm
aW5lICBBUkNIX0NBUFNfVEFBX05PICAgICAgICAgICAgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgOCkKKyNkZWZpbmUgIEFSQ0hfQ0FQU19TQkRSX1NTRFBf
Tk8gICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8IDEzKQorI2RlZmluZSAg
QVJDSF9DQVBTX0ZCU0RQX05PICAgICAgICAgICAgICAgICAoX0FDKDEsIFVM
TCkgPDwgMTQpCisjZGVmaW5lICBBUkNIX0NBUFNfUFNEUF9OTyAgICAgICAg
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAxNSkKKyNkZWZpbmUgIEFSQ0hf
Q0FQU19GQl9DTEVBUiAgICAgICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8
IDE3KQorI2RlZmluZSAgQVJDSF9DQVBTX0ZCX0NMRUFSX0NUUkwgICAgICAg
ICAgICAoX0FDKDEsIFVMTCkgPDwgMTgpCiAjZGVmaW5lICBBUkNIX0NBUFNf
UlJTQkEgICAgICAgICAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAxOSkK
ICNkZWZpbmUgIEFSQ0hfQ0FQU19CSElfTk8gICAgICAgICAgICAgICAgICAg
KF9BQygxLCBVTEwpIDw8IDIwKQogCkBAIC05MCw2ICs5NSw3IEBACiAjZGVm
aW5lICBNQ1VfT1BUX0NUUkxfUk5HRFNfTUlUR19ESVMgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgMCkKICNkZWZpbmUgIE1DVV9PUFRfQ1RSTF9SVE1fQUxM
T1cgICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8ICAxKQogI2RlZmluZSAg
TUNVX09QVF9DVFJMX1JUTV9MT0NLRUQgICAgICAgICAgICAoX0FDKDEsIFVM
TCkgPDwgIDIpCisjZGVmaW5lICBNQ1VfT1BUX0NUUkxfRkJfQ0xFQVJfRElT
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAgMykKIAogI2RlZmluZSBNU1Jf
UlRJVF9PVVRQVVRfQkFTRSAgICAgICAgICAgICAgICAweDAwMDAwNTYwCiAj
ZGVmaW5lIE1TUl9SVElUX09VVFBVVF9NQVNLICAgICAgICAgICAgICAgIDB4
MDAwMDA1NjEKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YyBiL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwppbmRleCA1ZDUwZWM3ZWVl
YmEuLmRlZDMzZjVkMGYyZSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3Nw
ZWNfY3RybC5jCisrKyBiL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAt
MzIzLDcgKzMyMyw3IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRh
aWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAg
ICAqIEhhcmR3YXJlIHJlYWQtb25seSBpbmZvcm1hdGlvbiwgc3RhdGluZyBp
bW11bml0eSB0byBjZXJ0YWluIGlzc3Vlcywgb3IKICAgICAgKiBzdWdnZXN0
aW9ucyBvZiB3aGljaCBtaXRpZ2F0aW9uIHRvIHVzZS4KICAgICAgKi8KLSAg
ICBwcmludGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVz
JXMlc1xuIiwKKyAgICBwcmludGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVz
JXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKICAgICAgICAgICAgKGNhcHMg
JiBBUkNIX0NBUFNfUkRDTF9OTykgICAgICAgICAgICAgICAgICAgICAgICA/
ICIgUkRDTF9OTyIgICAgICAgIDogIiIsCiAgICAgICAgICAgIChjYXBzICYg
QVJDSF9DQVBTX0lCUlNfQUxMKSAgICAgICAgICAgICAgICAgICAgICAgPyAi
IElCUlNfQUxMIiAgICAgICA6ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19SU0JBKSAgICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBS
U0JBIiAgICAgICAgICAgOiAiIiwKQEAgLTMzMiwxMyArMzMyLDE2IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19TU0JfTk8pICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBT
U0JfTk8iICAgICAgICAgOiAiIiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNI
X0NBUFNfTURTX05PKSAgICAgICAgICAgICAgICAgICAgICAgICA/ICIgTURT
X05PIiAgICAgICAgIDogIiIsCiAgICAgICAgICAgIChjYXBzICYgQVJDSF9D
QVBTX1RBQV9OTykgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIFRBQV9O
TyIgICAgICAgICA6ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQ
U19TQkRSX1NTRFBfTk8pICAgICAgICAgICAgICAgICAgID8gIiBTQkRSX1NT
RFBfTk8iICAgOiAiIiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNf
RkJTRFBfTk8pICAgICAgICAgICAgICAgICAgICAgICA/ICIgRkJTRFBfTk8i
ICAgICAgIDogIiIsCisgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1BT
RFBfTk8pICAgICAgICAgICAgICAgICAgICAgICAgPyAiIFBTRFBfTk8iICAg
ICAgICA6ICIiLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhY
ODZfRkVBVFVSRV9JQlJTX0FMV0FZUykpICAgID8gIiBJQlJTX0FMV0FZUyIg
ICAgOiAiIiwKICAgICAgICAgICAgKGU4YiAgJiBjcHVmZWF0X21hc2soWDg2
X0ZFQVRVUkVfU1RJQlBfQUxXQVlTKSkgICA/ICIgU1RJQlBfQUxXQVlTIiAg
IDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9G
RUFUVVJFX0lCUlNfRkFTVCkpICAgICAgPyAiIElCUlNfRkFTVCIgICAgICA6
ICIiLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVB
VFVSRV9JQlJTX1NBTUVfTU9ERSkpID8gIiBJQlJTX1NBTUVfTU9ERSIgOiAi
Iik7CiAKICAgICAvKiBIYXJkd2FyZSBmZWF0dXJlcyB3aGljaCBuZWVkIGRy
aXZpbmcgdG8gbWl0aWdhdGUgaXNzdWVzLiAqLwotICAgIHByaW50aygiICBI
YXJkd2FyZSBmZWF0dXJlczolcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAg
ICBwcmludGsoIiAgSGFyZHdhcmUgZmVhdHVyZXM6JXMlcyVzJXMlcyVzJXMl
cyVzJXMlcyVzXG4iLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFz
ayhYODZfRkVBVFVSRV9JQlBCKSkgfHwKICAgICAgICAgICAgKF83ZDAgJiBj
cHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfSUJSU0IpKSAgICAgICAgICA/ICIg
SUJQQiIgICAgICAgICAgIDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1
ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlMpKSB8fApAQCAtMzUzLDcgKzM1
Niw5IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0g
aW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAgICAo
XzdkMCAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9NRF9DTEVBUikpICAg
ICAgID8gIiBNRF9DTEVBUiIgICAgICAgOiAiIiwKICAgICAgICAgICAgKF83
ZDAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1JCRFNfQ1RSTCkpICAg
ICA/ICIgU1JCRFNfQ1RSTCIgICAgIDogIiIsCiAgICAgICAgICAgIChlOGIg
ICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX1ZJUlRfU1NCRCkpICAgICAg
PyAiIFZJUlRfU1NCRCIgICAgICA6ICIiLAotICAgICAgICAgICAoY2FwcyAm
IEFSQ0hfQ0FQU19UU1hfQ1RSTCkgICAgICAgICAgICAgICAgICAgICAgID8g
IiBUU1hfQ1RSTCIgICAgICAgOiAiIik7CisgICAgICAgICAgIChjYXBzICYg
QVJDSF9DQVBTX1RTWF9DVFJMKSAgICAgICAgICAgICAgICAgICAgICAgPyAi
IFRTWF9DVFJMIiAgICAgICA6ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19GQl9DTEVBUikgICAgICAgICAgICAgICAgICAgICAgID8gIiBG
Ql9DTEVBUiIgICAgICAgOiAiIiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNI
X0NBUFNfRkJfQ0xFQVJfQ1RSTCkgICAgICAgICAgICAgICAgICA/ICIgRkJf
Q0xFQVJfQ1RSTCIgIDogIiIpOwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3Vw
cG9ydCB3aGljaCBwZXJ0YWlucyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBp
ZiAoIElTX0VOQUJMRUQoQ09ORklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19F
TkFCTEVEKENPTkZJR19TSEFET1dfUEFHSU5HKSApCg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpXaGVuIGVuYWJsZWQsIGFuZCB3
aGVuIHJ1bm5pbmcgd2l0aCBvbiBpbXBhY3RlZCBzeXN0ZW1zIHdpdGggdGhl
IEludGVsIE1heQoyMDIyIG1pY3JvY29kZSwgdGhpcyBvcHRpb24gbWl0aWdh
dGVzIGNyb3NzLWRvbWFpbiBmaWxsIGJ1ZmZlciBsZWFrYWdlIGJ5CmZsdXNo
aW5nIG9uIGVudHJ5IHRvIGFueSBkb21haW4gd2l0aCBNTUlPIGFjY2Vzcywg
YW5kIGVuZ2FnZXMgdGhlIGV4aXN0aW5nCnNyYi1sb2NrIHRvIHByb3RlY3Qg
Uk5HIGRhdGEuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTQwNC4KClNpZ25lZC1v
ZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5j
b20+CgpkaWZmIC0tZ2l0IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUu
cGFuZG9jIGIvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmlu
ZGV4IDQ3Njk0MmE0OGJhMC4uYWRmZDRhZDM5OTQxIDEwMDY0NAotLS0gYS9k
b2NzL21pc2MveGVuLWNvbW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCkBAIC0yMjU5LDcgKzIyNTks
NyBAQCBCeSBkZWZhdWx0IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVu
dGltZSAoaS5lIGBzc2JkPXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4
NikKID4gYD0gTGlzdCBvZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2
bSxtc3Itc2MscnNiLG1kLWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAg
ICBidGktdGh1bms9cmV0cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIs
c3NiZCxlYWdlci1mcHUsCi0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAg
ICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1p
b309PGJvb2w+IF1gCiAKIENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVj
dXRpb24gc2lkZWNoYW5uZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBY
ZW4KIHdpbGwgcGljayB0aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9u
cyBiYXNlZCBvbiBjb21waWxlZCBpbiBzdXBwb3J0LApAQCAtMjMzOSw4ICsy
MzM5LDE4IEBAIFhlbiB3aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBP
biBoYXJkd2FyZSBzdXBwb3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxv
Y2s9YCBvcHRpb24gY2FuIGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQg
WGVuIGZyb20gcHJvdGVjdCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIg
ZnJvbSBsZWFraW5nIHN0YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2ls
bCBlbmFibGUgdGhpcyBtaXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hl
cmUgTURTCi1pcyBmaXhlZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAo
aW4gd2hpY2ggY2FzZSwgdGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdh
eSBmb3IgYW4gYXR0YWNrZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4K
K2lzIGZpeGVkIGFuZCBUQUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVy
ZSBhcmUgbm8gdW5wcml2aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGlj
aCBjYXNlLCB0aGVyZSBpcyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFu
IGF0dGFja2VyIHRvCitvYnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5w
cml2LW1taW89YCBib29sZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0
ZW0gaGFzIChvciB3aWxsIGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmls
ZWdlZCBkb21haW5zIHdpdGggYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIFBl
ciBYZW4ncworc3VwcG9ydCBzdGF0ZW1lbnQgUENJIFBhc3N0aHJvdWdoIHNo
b3VsZCBiZSB0byB0cnVzdGVkIGd1ZXN0cyBvbmx5LCBidXQgdXNlcnMKK3do
byBoYXZlIHJpc2stYXNzZXNzZWQgZnVydGhlciBtaWdodCBiZSBoYXBweSB0
byB0b2xlcmF0ZSB0aGUgcmlzayBvZiBEb1MsIGJ1dAorbm90IG9mIGRhdGEg
bGVha2FnZS4gIFN1Y2ggdXNlciBzaG91bGQgZW5hYmxlIHRoaXMgb3B0aW9u
LiAgV2hlbiBlbmFibGVkLCB0aGlzCitvcHRpb24gdXNlcyBgRkJfQ0xFQVJg
IGFuZC9vciBgU1JCRFNfQ1RSTGAgZnVuY3Rpb25hbGl0eSBhdmFpbGFibGUg
aW4gdGhlCitJbnRlbCBNYXkgMjAyMiBtaWNyb2NvZGUgcmVsZWFzZSB0byBt
aXRpZ2F0ZSBjcm9zcy1kb21haW4gbGVha2FnZSBvZiBmaWxsCitidWZmZXJz
IHZpYSB0aGUgTU1JTyBTdGFsZSBEYXRhIHZ1bG5lcmFiaWxpdGllcy4KIAog
IyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5gCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4vYXJjaC94ODYvc3Bl
Y19jdHJsLmMKaW5kZXggZGVkMzNmNWQwZjJlLi5jMTllODJlMDcxNDMgMTAw
NjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYworKysgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3LDggQEAgc3RhdGlj
IGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tc2Jkc19vbmx5OyAvKiA9
PiBtaW5pbWFsIEhUIGltcGFjdC4gKi8KIHN0YXRpYyBib29sIF9faW5pdGRh
dGEgY3B1X2hhc19idWdfbWRzOyAvKiBBbnkgb3RoZXIgTXtMUCxTQixGQn1E
UyBjb21iaW5hdGlvbi4gKi8KIAogc3RhdGljIGludDhfdCBfX2luaXRkYXRh
IG9wdF9zcmJfbG9jayA9IC0xOworc3RhdGljIGJvb2wgX19yb19hZnRlcl9p
bml0IG9wdF91bnByaXZfbW1pbzsKK3N0YXRpYyBib29sIF9fcm9fYWZ0ZXJf
aW5pdCBvcHRfZmJfY2xlYXJfbW1pbzsKIAogc3RhdGljIGludCBfX2luaXQg
Y2ZfY2hlY2sgcGFyc2Vfc3BlY19jdHJsKGNvbnN0IGNoYXIgKnMpCiB7CkBA
IC0xODQsNiArMTg2LDggQEAgc3RhdGljIGludCBfX2luaXQgY2ZfY2hlY2sg
cGFyc2Vfc3BlY19jdHJsKGNvbnN0IGNoYXIgKnMpCiAgICAgICAgICAgICBv
cHRfYnJhbmNoX2hhcmRlbiA9IHZhbDsKICAgICAgICAgZWxzZSBpZiAoICh2
YWwgPSBwYXJzZV9ib29sZWFuKCJzcmItbG9jayIsIHMsIHNzKSkgPj0gMCAp
CiAgICAgICAgICAgICBvcHRfc3JiX2xvY2sgPSB2YWw7CisgICAgICAgIGVs
c2UgaWYgKCAodmFsID0gcGFyc2VfYm9vbGVhbigidW5wcml2LW1taW8iLCBz
LCBzcykpID49IDAgKQorICAgICAgICAgICAgb3B0X3VucHJpdl9tbWlvID0g
dmFsOwogICAgICAgICBlbHNlCiAgICAgICAgICAgICByYyA9IC1FSU5WQUw7
CiAKQEAgLTM5Miw3ICszOTYsOCBAQCBzdGF0aWMgdm9pZCBfX2luaXQgcHJp
bnRfZGV0YWlscyhlbnVtIGluZF90aHVuayB0aHVuaywgdWludDY0X3QgY2Fw
cykKICAgICAgICAgICAgb3B0X3NyYl9sb2NrICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgPyAiIFNSQl9MT0NLKyIgOiAiIFNSQl9MT0NLLSIsCiAg
ICAgICAgICAgIG9wdF9pYnBiICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgID8gIiBJQlBCIiAgOiAiIiwKICAgICAgICAgICAgb3B0X2wxZF9m
bHVzaCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIEwxRF9GTFVT
SCIgOiAiIiwKLSAgICAgICAgICAgb3B0X21kX2NsZWFyX3B2IHx8IG9wdF9t
ZF9jbGVhcl9odm0gICAgICAgPyAiIFZFUlciICA6ICIiLAorICAgICAgICAg
ICBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2bSB8fAorICAg
ICAgICAgICBvcHRfZmJfY2xlYXJfbW1pbyAgICAgICAgICAgICAgICAgICAg
ICAgICA/ICIgVkVSVyIgIDogIiIsCiAgICAgICAgICAgIG9wdF9icmFuY2hf
aGFyZGVuICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBCUkFOQ0hfSEFS
REVOIiA6ICIiKTsKIAogICAgIC8qIEwxVEYgZGlhZ25vc3RpY3MsIHByaW50
ZWQgaWYgdnVsbmVyYWJsZSBvciBQViBzaGFkb3dpbmcgaXMgaW4gdXNlLiAq
LwpAQCAtOTQyLDcgKzk0Nyw5IEBAIHZvaWQgc3BlY19jdHJsX2luaXRfZG9t
YWluKHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAgYm9vbCBwdiA9IGlzX3B2
X2RvbWFpbihkKTsKIAotICAgIGQtPmFyY2gudmVydyA9IHB2ID8gb3B0X21k
X2NsZWFyX3B2IDogb3B0X21kX2NsZWFyX2h2bTsKKyAgICBkLT5hcmNoLnZl
cncgPQorICAgICAgICAocHYgPyBvcHRfbWRfY2xlYXJfcHYgOiBvcHRfbWRf
Y2xlYXJfaHZtKSB8fAorICAgICAgICAob3B0X2ZiX2NsZWFyX21taW8gJiYg
aXNfaW9tbXVfZW5hYmxlZChkKSk7CiB9CiAKIHZvaWQgX19pbml0IGluaXRf
c3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKQEAgLTExOTcsNiArMTIw
NCwxOCBAQCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRp
b25zKHZvaWQpCiAgICAgbWRzX2NhbGN1bGF0aW9ucyhjYXBzKTsKIAogICAg
IC8qCisgICAgICogUGFydHMgd2hpY2ggZW51bWVyYXRlIEZCX0NMRUFSIGFy
ZSB0aG9zZSB3aGljaCBoYXZlIHJlaW50cm9kdWNlZCB0aGUKKyAgICAgKiBW
RVJXIGZsdXNoaW5nIHNpZGUgYmVjYXVzZSBvZiBhIHN1Y2VwdGFiaWxpdHkg
dG8gRkJTRFAuCisgICAgICoKKyAgICAgKiBJZiB1bnByaXZpbGVnZWQgZ3Vl
c3RzIGhhdmUgKG9yIHdpbGwgaGF2ZSkgTU1JTyBtYXBwaW5ncywgd2UgY2Fu
CisgICAgICogbWl0aWdhdGUgY3Jvc3MtZG9tYWluIGxlYWthZ2Ugb2YgZmls
bCBidWZmZXIgZGF0YSBieSBpc3N1aW5nIFZFUlcgb24KKyAgICAgKiB0aGUg
cmV0dXJuLXRvLWd1ZXN0IHBhdGguICBBbGwgcHJpdmlsZWdlcyBvZiBzb2Z0
d2FyZSByZXNwb25zaWJsZSBmb3IKKyAgICAgKiBub3QgbGVha2luZyB0aGVp
ciBvd24gc2VjcmV0cyB3aGVuIHVzaW5nIE1NSU8uCisgICAgICovCisgICAg
aWYgKCBvcHRfdW5wcml2X21taW8gKQorICAgICAgICBvcHRfZmJfY2xlYXJf
bW1pbyA9IGNhcHMgJiBBUkNIX0NBUFNfRkJfQ0xFQVI7CisKKyAgICAvKgog
ICAgICAqIEJ5IGRlZmF1bHQsIGVuYWJsZSBQViBhbmQgSFZNIG1pdGlnYXRp
b25zIG9uIE1EUy12dWxuZXJhYmxlIGhhcmR3YXJlLgogICAgICAqIFRoaXMg
d2lsbCBvbmx5IGJlIGEgdG9rZW4gZWZmb3J0IGZvciBNTFBEUy9NRkJEUyB3
aGVuIEhUIGlzIGVuYWJsZWQsCiAgICAgICogYnV0IGl0IGlzIHNvbWV3aGF0
IGJldHRlciB0aGFuIG5vdGhpbmcuCkBAIC0xMjA5LDggKzEyMjgsOCBAQCB2
b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25zKHZvaWQp
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vdF9jcHVfaGFzKFg4
Nl9GRUFUVVJFX01EX0NMRUFSKSk7CiAKICAgICAvKgotICAgICAqIEVuYWJs
ZSBNRFMgZGVmZW5jZXMgYXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2Nr
cyBuZWVkIHVzaW5nIGlmCi0gICAgICogZWl0aGVyIFBWIG9yIEhWTSBkZWZl
bmNlcyBhcmUgdXNlZC4KKyAgICAgKiBFbmFibGUgTURTL01NSU8gZGVmZW5j
ZXMgYXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5n
IGlmCisgICAgICogZWl0aGVyIFBWIG9yIEhWTSwgb3IgaWYgd2UgZ2l2ZSBN
TUlPIGFjY2VzcyB0byB1bnRydXN0ZWQgZ3Vlc3RzLgogICAgICAqCiAgICAg
ICogSFZNIGlzIG1vcmUgY29tcGxpY2F0ZWQuICBUaGUgTURfQ0xFQVIgbWlj
cm9jb2RlIGV4dGVuZHMgTDFEX0ZMVVNIIHdpdGgKICAgICAgKiBlcXVpdmVs
ZW50IHNlbWFudGljcyB0byBhdm9pZCBuZWVkaW5nIHRvIHBlcmZvcm0gYm90
aCBmbHVzaGVzIG9uIHRoZQpAQCAtMTIyMCw3ICsxMjM5LDcgQEAgdm9pZCBf
X2luaXQgaW5pdF9zcGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKQogICAg
ICAqIG9wdF9tZF9jbGVhcl9odm0gdG8gbWVhbiBqdXN0ICJzaG91bGQgd2Ug
VkVSVyBvbiB0aGUgd2F5IGludG8gSFZNCiAgICAgICogZ3Vlc3RzIiwgc28g
c3BlY19jdHJsX2luaXRfZG9tYWluKCkgY2FuIGNhbGN1bGF0ZSBzdWl0YWJs
ZSBzZXR0aW5ncy4KICAgICAgKi8KLSAgICBpZiAoIG9wdF9tZF9jbGVhcl9w
diB8fCBvcHRfbWRfY2xlYXJfaHZtICkKKyAgICBpZiAoIG9wdF9tZF9jbGVh
cl9wdiB8fCBvcHRfbWRfY2xlYXJfaHZtIHx8IG9wdF9mYl9jbGVhcl9tbWlv
ICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVBVFVSRV9T
Q19WRVJXX0lETEUpOwogICAgIG9wdF9tZF9jbGVhcl9odm0gJj0gIShjYXBz
ICYgQVJDSF9DQVBTX1NLSVBfTDFERkwpICYmICFvcHRfbDFkX2ZsdXNoOwog
CkBAIC0xMjg1LDE0ICsxMzA0LDE5IEBAIHZvaWQgX19pbml0IGluaXRfc3Bl
Y3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAgKiBPbiBzb21lIFNS
QkRTLWFmZmVjdGVkIGhhcmR3YXJlLCBpdCBtYXkgYmUgc2FmZSB0byByZWxh
eCBzcmItbG9jayBieQogICAgICAqIGRlZmF1bHQuCiAgICAgICoKLSAgICAg
KiBPbiBwYXJ0cyB3aGljaCBlbnVtZXJhdGUgTURTX05PIGFuZCBub3QgVEFB
X05PLCBUU1ggaXMgdGhlIG9ubHkga25vd24KLSAgICAgKiB3YXkgdG8gYWNj
ZXNzIHRoZSBGaWxsIEJ1ZmZlci4gIElmIFRTWCBpc24ndCBhdmFpbGFibGUg
KGluYy4gU0tVCi0gICAgICogcmVhc29ucyBvbiBzb21lIG1vZGVscyksIG9y
IFRTWCBpcyBleHBsaWNpdGx5IGRpc2FibGVkLCB0aGVuIHRoZXJlIGlzCi0g
ICAgICogbm8gbmVlZCBmb3IgdGhlIGV4dHJhIG92ZXJoZWFkIHRvIHByb3Rl
Y3QgUkRSQU5EL1JEU0VFRC4KKyAgICAgKiBBbGwgcGFydHMgd2l0aCBTUkJE
U19DVFJMIHN1ZmZlciBTU0RQLCB0aGUgbWVjaGFuaXNtIGJ5IHdoaWNoIHN0
YWxlIFJORworICAgICAqIGRhdGEgYmVjb21lcyBhdmFpbGFibGUgdG8gb3Ro
ZXIgY29udGV4dHMuICBUbyByZWNvdmVyIHRoZSBkYXRhLCBhbgorICAgICAq
IGF0dGFrZXIgbmVlZHMgdG8gdXNlOgorICAgICAqICAtIFNCRFMgKE1EUyBv
ciBUQUEgdG8gc2FtcGxlIHRoZSBjb3JlcyBmaWxsIGJ1ZmZlcikKKyAgICAg
KiAgLSBTQkRSIChBcmNoaXRlY3RydWFsbHkgcmV0cmlldmUgc3RhbGUgdHJh
bnNhY3Rpb24gYnVmZmVyIGNvbnRlbnRzKQorICAgICAqICAtIERSUFcgKEFy
Y2hpdGVjdHJ1YWxseSBsYXRjaCBzdGFsZSBmaWxsIGJ1ZmZlciBkYXRhKQor
ICAgICAqCisgICAgICogVGhlcmVmb3JlLCBvbiBNRFNfTk8gcGFydHMsIGFu
ZCBUQUFfTk8gb3IgVFNYIHVuYXZhaWxhYmxlL2Rpc2FibGVkLCBhbmQKKyAg
ICAgKiBubyB1bnByaXZpbGVnZWQgTU1JTyBhY2Nlc3MsIHRoZSBSTkcgZGF0
YSBkb2Vzbid0IG5lZWQgcHJvdGVjdGluZy4KICAgICAgKi8KICAgICBpZiAo
IGNwdV9oYXNfc3JiZHNfY3RybCApCiAgICAgewotICAgICAgICBpZiAoIG9w
dF9zcmJfbG9jayA9PSAtMSAmJgorICAgICAgICBpZiAoIG9wdF9zcmJfbG9j
ayA9PSAtMSAmJiAhb3B0X3VucHJpdl9tbWlvICYmCiAgICAgICAgICAgICAg
KGNhcHMgJiAoQVJDSF9DQVBTX01EU19OT3xBUkNIX0NBUFNfVEFBX05PKSkg
PT0gQVJDSF9DQVBTX01EU19OTyAmJgogICAgICAgICAgICAgICghY3B1X2hh
c19obGUgfHwgKChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJMKSAmJiBydG1f
ZGlzYWJsZWQpKSApCiAgICAgICAgICAgICBvcHRfc3JiX2xvY2sgPSAwOwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 19:48:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jun 2022 19:48:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349430.575541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1CVt-0005wt-LF; Tue, 14 Jun 2022 19:47:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349430.575541; Tue, 14 Jun 2022 19:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1CVt-0005wm-IO; Tue, 14 Jun 2022 19:47:45 +0000
Received: by outflank-mailman (input) for mailman id 349430;
 Tue, 14 Jun 2022 19:47:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hGAl=WV=epam.com=prvs=8164eafd99=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1o1CVr-0005wg-4z
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 19:47:43 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d67cf6e8-ec1a-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 21:47:36 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25EF8Nvw031235;
 Tue, 14 Jun 2022 19:47:24 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2168.outbound.protection.outlook.com [104.47.17.168])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3gpspd1ydt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 14 Jun 2022 19:47:23 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by GVXPR03MB8377.eurprd03.prod.outlook.com (2603:10a6:150:69::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Tue, 14 Jun
 2022 19:47:19 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed%6]) with mapi id 15.20.5332.020; Tue, 14 Jun 2022
 19:47:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d67cf6e8-ec1a-11ec-a26a-b96bd03d9e80
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eBxRDvLFH+eg/KdyDP4wnirhXOtPhfTUuea04yxfi5Sz7KCC9ybCxkCOkvEDt33lTqwlfIO5stwDbHyvcHqVRnLBjepUFEeXDyNumLIu0HtmIWA6C1BuXXmJeQqCv54oNdbTe8p8tcLMS6M5zMvDulmrmV0qUPXowW235vMCv8hVh49HwlfnctrQicgwzWPtptx65WQ+houZ66IrbQUkVUbCx5Wme3K53XYaNXH4C5TFjV5xUF/OAFtJOskwG9N9XHkHgYDByIVns8jJK/vn2fgBfaIex8MXJHMU8vMPQhRb8mVdxC1p8Z+eYKcuct02h7zVvZH5RjUnotSme5PneQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M/VZeNmCI+t75rH8SSSwgbgIgw40gnJQFwui79ueasc=;
 b=PkQeOA9nGW0cPZp48ns1/JUe026ZF6Z0VzSF3RC2KnAv0jCrrqTQIDcDXX+yEZcC0EPNlXBNEPz43tDSmgS4mn4gNR2dauM4eYDJb46lTmFxJl8/dfuAeuflsndgvvTGnbMT9vgiEP5L4deQ5hoUp6m6ykExHaUduXBwQzY/H3/q9QzCQOdzt5PlhSjgDlKLcZJwxV+Vu/zkDnhFGZz89PippptW5+hMfXa3CyoaOhThTFjmi3V75F1fUoOgeXj338FEZH+ioRKDXLxhlCDYyam/qOt/hXgzO2ScpvLvn4BQl8BmtG9yzkwujsClyNitEAy/PHG6joGdBRKPmatpIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M/VZeNmCI+t75rH8SSSwgbgIgw40gnJQFwui79ueasc=;
 b=SHj2xIk0qhgacPd58lJMI6R13VtyzMGjinRXGGDhprqc3kTm8/Xf/Q+3oRZEQcq0EYvUGyAYCVMy+b8tjsr8is8KauaN1U/k5tThzD8xWY4wo8VLPXkZ4aUY6xlBeLd+TV2eGWiX8W3AnysOEs5Iz+/ZiFRBpg0am0CIgMBE5F1eplk8nrYhnIIbEcJeLRbgSHvWWITwjyy6NGRQgo/a7j9x8ASgQk3cPVHDoWEe2h0jubsNDYzDU/9M9N/BrDKxPWA0DT5jEf54OxO56Cxyqpd7pXuLuFR6kqYgyveWK5MMyComV5UzP9Vd4t1lvAMlV0PSUxAn7o6d7pKE/WSZhA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
Thread-Topic: [PATCH v2 2/2] xen/arm: add FF-A mediator
Thread-Index: AQHYe8i7JOdBXrpIHUqTA6m5YZWy9q1PRQQA
Date: Tue, 14 Jun 2022 19:47:18 +0000
Message-ID: <874k0nhvsq.fsf@epam.com>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org>
In-Reply-To: <20220609061812.422130-3-jens.wiklander@linaro.org>


Hello Jens,

Sorry for the late review; I was busy with internal projects.

This is a preliminary review. I gave up at the scatter-gather operations;
I need more time to review them properly.

One thing that bothers me is that Xen is non-preemptive, and there are
plenty of potentially long-running operations here.

Jens Wiklander <jens.wiklander@linaro.org> writes:

> Adds a FF-A version 1.1 [1] mediator to communicate with a Secure
> Partition in secure world.
>
> The implementation is the bare minimum to be able to communicate with
> OP-TEE running as an SPMC at S-EL1.
>
> This is loosely based on the TEE mediator framework and the OP-TEE
> mediator.
>
> [1] https://developer.arm.com/documentation/den0077/latest
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/Kconfig              |   11 +
>  xen/arch/arm/Makefile             |    1 +
>  xen/arch/arm/domain.c             |   10 +
>  xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/domain.h |    4 +
>  xen/arch/arm/include/asm/ffa.h    |   71 ++
>  xen/arch/arm/vsmc.c               |   17 +-
>  7 files changed, 1735 insertions(+), 3 deletions(-)
>  create mode 100644 xen/arch/arm/ffa.c
>  create mode 100644 xen/arch/arm/include/asm/ffa.h
>
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ecfa6822e4d3..5b75067e2745 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -106,6 +106,17 @@ config TEE
> 
>  source "arch/arm/tee/Kconfig"
> 
> +config FFA
> +	bool "Enable FF-A mediator support" if EXPERT
> +	default n
> +	depends on ARM_64
> +	help
> +	  This option enables a minimal FF-A mediator. The mediator is
> +	  generic as it follows the FF-A specification [1], but it only
> +	  implements a small subset of the specification.
> +
> +	  [1] https://developer.arm.com/documentation/den0077/latest
> +
>  endmenu
> 
>  menu "ARM errata workaround via the alternative framework"
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 1d862351d111..dbf5e593a069 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -20,6 +20,7 @@ obj-y += domain.o
>  obj-y += domain_build.init.o
>  obj-y += domctl.o
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> +obj-$(CONFIG_FFA) += ffa.o
>  obj-y += gic.o
>  obj-y += gic-v2.o
>  obj-$(CONFIG_GICV3) += gic-v3.o
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 8110c1df8638..a93e6a9c4aef 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -27,6 +27,7 @@
>  #include <asm/cpufeature.h>
>  #include <asm/current.h>
>  #include <asm/event.h>
> +#include <asm/ffa.h>
>  #include <asm/gic.h>
>  #include <asm/guest_atomics.h>
>  #include <asm/irq.h>
> @@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
>      if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
>          goto fail;
> 
> +    if ( (rc = ffa_domain_init(d)) != 0 )

So, FF-A support will be enabled for each domain? I think that this is
fine for an experimental feature, but I would like to hear the maintainers' opinion.

> +        goto fail;
> +
>      update_domain_wallclock_time(d);
> 
>      /*
> @@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
>  enum {
>      PROG_pci = 1,
>      PROG_tee,
> +    PROG_ffa,
>      PROG_xen,
>      PROG_page,
>      PROG_mapping,
> @@ -1046,6 +1051,11 @@ int domain_relinquish_resources(struct domain *d)
>          if (ret )
>              return ret;
> 
> +    PROGRESS(ffa):
> +        ret = ffa_relinquish_resources(d);
> +        if (ret )

Coding style: if ( ret )

> +            return ret;
> +
>      PROGRESS(xen):
>          ret = relinquish_memory(d, &d->xenpage_list);
>          if ( ret )
> diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
> new file mode 100644
> index 000000000000..9063b7f2b59e
> --- /dev/null
> +++ b/xen/arch/arm/ffa.c
> @@ -0,0 +1,1624 @@
> +/*
> + * xen/arch/arm/ffa.c
> + *
> + * Arm Firmware Framework for ARMv8-A (FF-A) mediator
> + *
> + * Copyright (C) 2021  Linaro Limited

It is 2022 already :)

> + *
> + * Permission is hereby granted, free of charge, to any person
> + * obtaining a copy of this software and associated documentation
> + * files (the "Software"), to deal in the Software without restriction,
> + * including without limitation the rights to use, copy, modify, merge,
> + * publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so,
> + * subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be
> + * included in all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include <xen/domain_page.h>
> +#include <xen/errno.h>
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +#include <xen/sched.h>
> +#include <xen/types.h>
> +#include <xen/sizes.h>
> +#include <xen/bitops.h>
> +
> +#include <asm/smccc.h>
> +#include <asm/event.h>
> +#include <asm/ffa.h>
> +#include <asm/regs.h>
> +
> +/* Error codes */
> +#define FFA_RET_OK			0
> +#define FFA_RET_NOT_SUPPORTED		-1
> +#define FFA_RET_INVALID_PARAMETERS	-2
> +#define FFA_RET_NO_MEMORY		-3
> +#define FFA_RET_BUSY			-4
> +#define FFA_RET_INTERRUPTED		-5
> +#define FFA_RET_DENIED			-6
> +#define FFA_RET_RETRY			-7
> +#define FFA_RET_ABORTED			-8
> +
> +/* FFA_VERSION helpers */
> +#define FFA_VERSION_MAJOR		_AC(1,U)
> +#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
> +#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
> +#define FFA_VERSION_MINOR		_AC(1,U)
> +#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
> +#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
> +#define MAKE_FFA_VERSION(major, minor)	\
> +	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
> +	 ((minor) & FFA_VERSION_MINOR_MASK))
> +
> +#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
> +#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
> +#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
> +#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
> +						 FFA_VERSION_MINOR)
> +
> +
> +#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
> +
> +/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
> +#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
> +
> +/* Memory access permissions: Read-write */
> +#define FFA_MEM_ACC_RW			_AC(0x2,U)
> +
> +/* Clear memory before mapping in receiver */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
> +/* Relayer may time slice this operation */
> +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
> +/* Clear memory after receiver relinquishes it */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
> +
> +/* Share memory transaction */
> +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
> +/* Relayer must choose the alignment boundary */
> +#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
BIT(0, U)?

> +
> +#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
> +
> +/* Framework direct request/response */
> +#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
> +#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
> +#define FFA_MSG_PSCI			_AC(0x0,U)
> +#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
> +#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
> +#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
> +#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
> +
> +/*
> + * Flags used for the FFA_PARTITION_INFO_GET return message:
> + * BIT(0): Supports receipt of direct requests
> + * BIT(1): Can send direct requests
> + * BIT(2): Can send and receive indirect messages
> + * BIT(3): Supports receipt of notifications
> + * BIT(4-5): Partition ID is a PE endpoint ID
> + */
> +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
> +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
> +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
> +#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
> +#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
> +#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
> +#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
> +#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
> +#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
> +#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
> +#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
> +
> +/* Function IDs */
> +#define FFA_ERROR			_AC(0x84000060,U)
> +#define FFA_SUCCESS_32			_AC(0x84000061,U)
> +#define FFA_SUCCESS_64			_AC(0xC4000061,U)
> +#define FFA_INTERRUPT			_AC(0x84000062,U)
> +#define FFA_VERSION			_AC(0x84000063,U)
> +#define FFA_FEATURES			_AC(0x84000064,U)
> +#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
> +#define FFA_RX_RELEASE			_AC(0x84000065,U)
> +#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
> +#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
> +#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
> +#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
> +#define FFA_ID_GET			_AC(0x84000069,U)
> +#define FFA_SPM_ID_GET			_AC(0x84000085,U)
> +#define FFA_MSG_WAIT			_AC(0x8400006B,U)
> +#define FFA_MSG_YIELD			_AC(0x8400006C,U)
> +#define FFA_MSG_RUN			_AC(0x8400006D,U)
> +#define FFA_MSG_SEND2			_AC(0x84000086,U)
> +#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
> +#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
> +#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
> +#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
> +#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
> +#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
> +#define FFA_MEM_LEND_32			_AC(0x84000072,U)
> +#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
> +#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
> +#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
> +#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
> +#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
> +#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
> +#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
> +#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
> +#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
> +#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
> +#define FFA_MSG_SEND			_AC(0x8400006E,U)
> +#define FFA_MSG_POLL			_AC(0x8400006A,U)
> +
> +/* Partition information descriptor */
> +struct ffa_partition_info_1_0 {
> +    uint16_t id;
> +    uint16_t execution_context;
> +    uint32_t partition_properties;
> +};
> +
> +struct ffa_partition_info_1_1 {
> +    uint16_t id;
> +    uint16_t execution_context;
> +    uint32_t partition_properties;
> +    uint8_t uuid[16];
> +};
> +
> +/* Constituent memory region descriptor */
> +struct ffa_address_range {
> +    uint64_t address;
> +    uint32_t page_count;
> +    uint32_t reserved;
> +};
> +
> +/* Composite memory region descriptor */
> +struct ffa_mem_region {
> +    uint32_t total_page_count;
> +    uint32_t address_range_count;
> +    uint64_t reserved;
> +    struct ffa_address_range address_range_array[];
> +};
> +
> +/* Memory access permissions descriptor */
> +struct ffa_mem_access_perm {
> +    uint16_t endpoint_id;
> +    uint8_t perm;
> +    uint8_t flags;
> +};
> +
> +/* Endpoint memory access descriptor */
> +struct ffa_mem_access {
> +    struct ffa_mem_access_perm access_perm;
> +    uint32_t region_offs;
> +    uint64_t reserved;
> +};
> +
> +/* Lend, donate or share memory transaction descriptor */
> +struct ffa_mem_transaction_1_0 {
> +    uint16_t sender_id;
> +    uint8_t mem_reg_attr;
> +    uint8_t reserved0;
> +    uint32_t flags;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +    uint32_t reserved1;
> +    uint32_t mem_access_count;
> +    struct ffa_mem_access mem_access_array[];
> +};
> +
> +struct ffa_mem_transaction_1_1 {
> +    uint16_t sender_id;
> +    uint16_t mem_reg_attr;
> +    uint32_t flags;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +    uint32_t mem_access_size;
> +    uint32_t mem_access_count;
> +    uint32_t mem_access_offs;
> +    uint8_t reserved[12];
> +};
> +
> +/*
> + * The parts needed from struct ffa_mem_transaction_1_0 or struct
> + * ffa_mem_transaction_1_1, used to provide an abstraction of the differences in
> + * data structures between version 1.0 and 1.1. This is just an internal
> + * interface and can be changed without changing any ABI.
> + */
> +struct ffa_mem_transaction_x {
> +    uint16_t sender_id;
> +    uint8_t mem_reg_attr;
> +    uint8_t flags;
> +    uint8_t mem_access_size;
> +    uint8_t mem_access_count;
> +    uint16_t mem_access_offs;
> +    uint64_t global_handle;
> +    uint64_t tag;
> +};
> +
> +/* Endpoint RX/TX descriptor */
> +struct ffa_endpoint_rxtx_descriptor_1_0 {
> +    uint16_t sender_id;
> +    uint16_t reserved;
> +    uint32_t rx_range_count;
> +    uint32_t tx_range_count;
> +};
> +
> +struct ffa_endpoint_rxtx_descriptor_1_1 {
> +    uint16_t sender_id;
> +    uint16_t reserved;
> +    uint32_t rx_region_offs;
> +    uint32_t tx_region_offs;
> +};
> +
> +struct ffa_ctx {
> +    void *rx;
> +    void *tx;
> +    struct page_info *rx_pg;
> +    struct page_info *tx_pg;
> +    unsigned int page_count;
> +    uint32_t guest_vers;
> +    bool tx_is_mine;
> +    bool interrupted;
> +};
> +
> +struct ffa_shm_mem {
> +    struct list_head list;
> +    uint16_t sender_id;
> +    uint16_t ep_id;     /* endpoint, the one lending */
> +    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
> +    unsigned int page_count;
> +    struct page_info *pages[];
> +};
> +
> +struct mem_frag_state {
> +    struct list_head list;
> +    struct ffa_shm_mem *shm;
> +    uint32_t range_count;
> +    unsigned int current_page_idx;
> +    unsigned int frag_offset;
> +    unsigned int range_offset;
> +    uint8_t *buf;
> +    unsigned int buf_size;
> +    struct ffa_address_range range;
> +};
> +
> +/*
> + * Our rx/tx buffers shared with the SPMC
> + */
> +static uint32_t ffa_version;
> +static uint16_t *subsr_vm_created;
> +static unsigned int subsr_vm_created_count;
> +static uint16_t *subsr_vm_destroyed;
> +static unsigned int subsr_vm_destroyed_count;
> +static void *ffa_rx;
> +static void *ffa_tx;
> +static unsigned int ffa_page_count;
> +static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;
> +
> +static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
> +static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);
> +static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;
> +
> +static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
> +
> +static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
> +{
> +    return (uint64_t)reg0 << 32 | reg1;
> +}
> +
> +static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
> +{
> +    *reg0 = val >> 32;
> +    *reg1 = val;
> +}
> +
> +static bool ffa_get_version(uint32_t *vers)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
> +    {
> +        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
> +        return false;
> +    }
> +
> +    *vers = resp.a0;
> +    return true;
> +}
> +
> +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
> +                             uint32_t page_count)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +#ifdef CONFIG_ARM_64
> +        .a0 = FFA_RXTX_MAP_64,
> +#endif
> +#ifdef CONFIG_ARM_32
> +        .a0 = FFA_RXTX_MAP_32,
> +#endif
> +	.a1 = tx_addr, .a2 = rx_addr,
> +        .a3 = page_count,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )

What if we get SMCCC_NOT_SUPPORTED there?

> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {

The same question. I believe it is better to test against FFA_SUCCESS.
Also, it looks like this code could be extracted into a helper function,
because the pattern repeats again and again.
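
Something along these lines, perhaps. This is only a sketch compiled outside
the Xen tree, so struct ffa_resp and the helper name are made up on my side;
in the real code it would take the patch's struct arm_smccc_1_2_regs:

```c
#include <stdint.h>

/* Stand-in for struct arm_smccc_1_2_regs; only a0/a2 matter here. */
struct ffa_resp {
    uint64_t a0;
    uint64_t a2;
};

/* Constants as in the "Error codes" / "Function IDs" blocks of the patch. */
#define FFA_ERROR             UINT64_C(0x84000060)
#define FFA_RET_OK            0
#define FFA_RET_NOT_SUPPORTED (-1)

/*
 * Map an SMC response to an FF-A return code, exactly like the repeated
 * "if ( resp.a0 == FFA_ERROR ) ..." blocks in the patch.
 */
static int32_t ffa_simple_ret(const struct ffa_resp *resp)
{
    if ( resp->a0 == FFA_ERROR )
    {
        if ( resp->a2 )
            return (int32_t)resp->a2;
        return FFA_RET_NOT_SUPPORTED;
    }

    return FFA_RET_OK;
}
```

Then ffa_rxtx_unmap(), ffa_rx_release() and friends shrink to the SMC call
followed by `return ffa_simple_ret(&resp);`.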

> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> +                                       uint32_t w4, uint32_t w5,
> +                                       uint32_t *count)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
> +        .a5 = w5,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    *count = resp.a2;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t ffa_rx_release(void)
> +{
> +    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
> +                             register_t addr, uint32_t pg_count,
> +                             uint64_t *handle)
> +{
> +    struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
> +        .a4 = pg_count,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    /*
> +     * For arm64 we must use 64-bit calling convention if the buffer isn't
> +     * passed in our tx buffer.
> +     */
> +    if (sizeof(addr) > 4 && addr)
> +        arg.a0 = FFA_MEM_SHARE_64;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    switch ( resp.a0 ) {
> +    case FFA_ERROR:
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +        *handle = reg_pair_to_64(resp.a3, resp.a2);
> +        return FFA_RET_OK;
> +    case FFA_MEM_FRAG_RX:
> +        *handle = reg_pair_to_64(resp.a2, resp.a1);
> +        return resp.a3;
> +    default:
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +}
> +
> +static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
> +                               uint16_t sender_id)
> +{
> +    struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
> +        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    switch ( resp.a0 ) {
> +    case FFA_ERROR:
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +        return FFA_RET_OK;
> +    case FFA_MEM_FRAG_RX:
> +        return resp.a3;
> +    default:
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +}
> +
> +static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
> +                                uint32_t flags)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    if ( resp.a0 == FFA_ERROR )
> +    {
> +        if ( resp.a2 )
> +            return resp.a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    return FFA_RET_OK;
> +}
> +
> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> +                                      uint8_t msg)
> +{
> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> +    int32_t res;
> +
> +    if ( msg != FFA_MSG_SEND_VM_CREATED && msg != FFA_MSG_SEND_VM_DESTROYED )
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> +    else
> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> +
> +    do {
> +        const struct arm_smccc_1_2_regs arg = {
> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> +            .a1 = sp_id,
> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> +            .a5 = vm_id,
> +        };
> +        struct arm_smccc_1_2_regs resp;
> +
> +        arm_smccc_1_2_smc(&arg, &resp);
> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) {
> +            /*
> +             * This is an invalid response, likely due to some error in the
> +             * implementation of the ABI.
> +             */
> +            return FFA_RET_INVALID_PARAMETERS;
> +        }
> +        res = resp.a3;
> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );

How long can this loop run? Xen is not preemptive, unfortunately. So
maybe we need some way to restart this call.
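
Even a crude bound would already help. E.g. something like the sketch below
(made-up names, compiled outside the Xen tree; do_send() stands in for one
FFA_MSG_SEND_DIRECT_REQ SMC, and both the bound and the idea of handing the
last status back so the operation can be restarted later are my suggestion,
not something the patch implements):

```c
#include <stdint.h>

#define FFA_RET_INTERRUPTED (-5)
#define FFA_RET_RETRY       (-7)

/*
 * Cap the number of retries instead of looping unboundedly in
 * non-preemptible hypervisor context.  If the last attempt still
 * returned INTERRUPTED/RETRY, that status is propagated to the
 * caller, which can then decide to restart the whole operation.
 */
static int32_t send_bounded(int32_t (*do_send)(void *cookie), void *cookie,
                            unsigned int max_tries)
{
    int32_t res = FFA_RET_RETRY;

    while ( max_tries-- )
    {
        res = do_send(cookie);
        if ( res != FFA_RET_INTERRUPTED && res != FFA_RET_RETRY )
            break;
    }

    return res;
}
```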

> +
> +    return res;
> +}
> +
> +static u16 get_vm_id(struct domain *d)
> +{
> +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> +    return d->domain_id + 1;
> +}
> +
> +static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
> +                     register_t v2, register_t v3, register_t v4, register_t v5,
> +                     register_t v6, register_t v7)
> +{
> +        set_user_reg(regs, 0, v0);
> +        set_user_reg(regs, 1, v1);
> +        set_user_reg(regs, 2, v2);
> +        set_user_reg(regs, 3, v3);
> +        set_user_reg(regs, 4, v4);
> +        set_user_reg(regs, 5, v5);
> +        set_user_reg(regs, 6, v6);
> +        set_user_reg(regs, 7, v7);
> +}
> +
> +static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
> +{
> +    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
> +}
> +
> +static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
> +                             uint32_t w3)
> +{
> +    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
> +}
> +
> +static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
> +                             uint32_t handle_hi, uint32_t frag_offset,
> +                             uint16_t sender_id)
> +{
> +    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
> +             (uint32_t)sender_id << 16, 0, 0, 0);
> +}
> +
> +static void handle_version(struct cpu_user_regs *regs)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t vers = get_user_reg(regs, 1);
> +
> +    if ( vers < FFA_VERSION_1_1 )
> +        vers = FFA_VERSION_1_0;
> +    else
> +        vers = FFA_VERSION_1_1;
> +
> +    ctx->guest_vers = vers;
> +    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
> +}
> +
> +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> +                                register_t rx_addr, uint32_t page_count)
> +{
> +    uint32_t ret = FFA_RET_NOT_SUPPORTED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    struct page_info *tx_pg;
> +    struct page_info *rx_pg;
> +    p2m_type_t t;
> +    void *rx;
> +    void *tx;
> +
> +    if ( !smccc_is_conv_64(fid) )
> +    {
> +        tx_addr &= UINT32_MAX;
> +        rx_addr &= UINT32_MAX;
> +    }
> +
> +    /* For now to keep things simple, only deal with a single page */
> +    if ( page_count != 1 )
> +        return FFA_RET_NOT_SUPPORTED;
> +
> +    /* Already mapped */
> +    if ( ctx->rx )
> +        return FFA_RET_DENIED;
> +
> +    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
> +    if ( !tx_pg )
> +        return FFA_RET_NOT_SUPPORTED;
> +    /* Only normal RAM for now */
> +    if (t != p2m_ram_rw)
> +        goto err_put_tx_pg;
> +
> +    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
> +    if ( !rx_pg )
> +        goto err_put_tx_pg;
> +    /* Only normal RAM for now */
> +    if ( t != p2m_ram_rw )
> +        goto err_put_rx_pg;
> +
> +    tx = __map_domain_page_global(tx_pg);
> +    if ( !tx )
> +        goto err_put_rx_pg;
> +
> +    rx = __map_domain_page_global(rx_pg);
> +    if ( !rx )
> +        goto err_unmap_tx;
> +
> +    ctx->rx = rx;
> +    ctx->tx = tx;
> +    ctx->rx_pg = rx_pg;
> +    ctx->tx_pg = tx_pg;
> +    ctx->page_count = 1;
> +    ctx->tx_is_mine = true;
> +    return FFA_RET_OK;
> +
> +err_unmap_tx:
> +    unmap_domain_page_global(tx);
> +err_put_rx_pg:
> +    put_page(rx_pg);
> +err_put_tx_pg:
> +    put_page(tx_pg);
> +    return ret;
> +}
> +
> +static uint32_t handle_rxtx_unmap(void)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t ret;
> +
> +    if ( !ctx-> rx )

coding style: ctx->rx

> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    ret = ffa_rxtx_unmap(get_vm_id(d));
> +    if ( ret )
> +        return ret;
> +
> +    unmap_domain_page_global(ctx->rx);
> +    unmap_domain_page_global(ctx->tx);
> +    put_page(ctx->rx_pg);
> +    put_page(ctx->tx_pg);
> +    ctx->rx = NULL;
> +    ctx->tx = NULL;
> +    ctx->rx_pg = NULL;
> +    ctx->tx_pg = NULL;
> +    ctx->page_count = 0;
> +    ctx->tx_is_mine = false;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> +                                          uint32_t w4, uint32_t w5,
> +                                          uint32_t *count)
> +{
> +    uint32_t ret = FFA_RET_DENIED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +
> +    if ( !ffa_page_count )
> +        return FFA_RET_DENIED;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    if ( !ctx->page_count || !ctx->tx_is_mine )
> +        goto out;
> +    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
> +    if ( ret )
> +        goto out;
> +    if ( ctx->guest_vers == FFA_VERSION_1_0 ) {
> +        size_t n;
> +        struct ffa_partition_info_1_1 *src = ffa_rx;
> +        struct ffa_partition_info_1_0 *dst = ctx->rx;
> +
> +        for ( n = 0; n < *count; n++ ) {
> +            dst[n].id = src[n].id;
> +            dst[n].execution_context = src[n].execution_context;
> +            dst[n].partition_properties = src[n].partition_properties;
> +        }
> +    } else {

Maybe it is worth checking the version in this branch? Or at least
adding ASSERT(ctx->guest_vers == FFA_VERSION_1_1).

> +        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
> +
> +        memcpy(ctx->rx, ffa_rx, sz);
> +    }
> +    ffa_rx_release();
> +    ctx->tx_is_mine = false;
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    return ret;
> +}
> +
> +static uint32_t handle_rx_release(void)
> +{
> +    uint32_t ret = FFA_RET_DENIED;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    if ( !ctx->page_count || ctx->tx_is_mine )
> +        goto out;
> +    ret = FFA_RET_OK;
> +    ctx->tx_is_mine = true;
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    return ret;
> +}
> +
> +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> +    struct arm_smccc_1_2_regs resp = { };
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t src_dst;
> +    uint64_t mask;
> +
> +    if ( smccc_is_conv_64(fid) )
> +        mask = 0xffffffffffffffff;
> +    else
> +        mask = 0xffffffff;
> +
> +    src_dst = get_user_reg(regs, 1);

Should you apply the mask here?

> +    if ( (src_dst >> 16) != get_vm_id(d) )
> +    {
> +        resp.a0 = FFA_ERROR;
> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    arg.a1 = src_dst;
... or here, when assigning arg.a1?


> +    arg.a2 = get_user_reg(regs, 2) & mask;
> +    arg.a3 = get_user_reg(regs, 3) & mask;
> +    arg.a4 = get_user_reg(regs, 4) & mask;
> +    arg.a5 = get_user_reg(regs, 5) & mask;
> +    arg.a6 = get_user_reg(regs, 6) & mask;
> +    arg.a7 = get_user_reg(regs, 7) & mask;
> +
> +    while ( true ) {
> +        arm_smccc_1_2_smc(&arg, &resp);
> +
> +        switch ( resp.a0 )
> +        {
> +        case FFA_INTERRUPT:
> +            ctx->interrupted = true;
> +            goto out;
> +        case FFA_ERROR:
> +        case FFA_SUCCESS_32:
> +        case FFA_SUCCESS_64:
> +        case FFA_MSG_SEND_DIRECT_RESP_32:
> +        case FFA_MSG_SEND_DIRECT_RESP_64:
> +            goto out;
> +        default:
> +            /* Bad fid, report back. */
> +            memset(&arg, 0, sizeof(arg));
> +            arg.a0 = FFA_ERROR;
> +            arg.a1 = src_dst;
> +            arg.a2 = FFA_RET_NOT_SUPPORTED;
> +            continue;
> +        }
> +    }
> +
> +out:
> +    set_user_reg(regs, 0, resp.a0);
> +    set_user_reg(regs, 2, resp.a2 & mask);
> +    set_user_reg(regs, 1, resp.a1 & mask);
Looks like you need to swap the two lines above.

> +    set_user_reg(regs, 3, resp.a3 & mask);
> +    set_user_reg(regs, 4, resp.a4 & mask);
> +    set_user_reg(regs, 5, resp.a5 & mask);
> +    set_user_reg(regs, 6, resp.a6 & mask);
> +    set_user_reg(regs, 7, resp.a7 & mask);
> +}
> +
> +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> +                         struct ffa_address_range *range, uint32_t range_count,
> +                         unsigned int start_page_idx,
> +                         unsigned int *last_page_idx)
> +{
> +    unsigned int pg_idx = start_page_idx;
> +    unsigned long gfn;
> +    unsigned int n;
> +    unsigned int m;
> +    p2m_type_t t;
> +    uint64_t addr;
> +
> +    for ( n = 0; n < range_count; n++ ) {
> +        for ( m = 0; m < range[n].page_count; m++ ) {
> +            if ( pg_idx >= shm->page_count )
> +                return FFA_RET_INVALID_PARAMETERS;
> +
> +            addr = read_atomic(&range[n].address);
> +            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
> +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
> +            if ( !shm->pages[pg_idx] )
> +                return FFA_RET_DENIED;
> +            pg_idx++;
> +            /* Only normal RAM for now */
> +            if ( t != p2m_ram_rw )
> +                return FFA_RET_DENIED;
> +        }
> +    }
> +
> +    *last_page_idx = pg_idx;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static void put_shm_pages(struct ffa_shm_mem *shm)
> +{
> +    unsigned int n;
> +
> +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> +    {
> +        if ( shm->pages[n] ) {
Looks like this check is redundant; you already check the same
condition in the `for` loop above.

> +            put_page(shm->pages[n]);
> +            shm->pages[n] = NULL;
> +        }
> +    }
> +}
> +
> +static void init_range(struct ffa_address_range *addr_range,
> +                       paddr_t pa)
> +{
> +    memset(addr_range, 0, sizeof(*addr_range));
> +    addr_range->address = pa;
> +    addr_range->page_count = 1;
> +}
> +
> +static int share_shm(struct ffa_shm_mem *shm)
> +{
> +    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
> +    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
> +    struct ffa_mem_access *mem_access_array;
> +    struct ffa_mem_region *region_descr;
> +    struct ffa_address_range *addr_range;
> +    paddr_t pa;
> +    paddr_t last_pa;
> +    unsigned int n;
> +    uint32_t frag_len;
> +    uint32_t tot_len;
> +    int ret;
> +    unsigned int range_count;
> +    unsigned int range_base;
> +    bool first;
> +
> +    memset(descr, 0, sizeof(*descr));
> +    descr->sender_id = shm->sender_id;
> +    descr->global_handle = shm->handle;
> +    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
> +    descr->mem_access_count = 1;
> +    descr->mem_access_size = sizeof(*mem_access_array);
> +    descr->mem_access_offs = sizeof(*descr);
> +    mem_access_array = (void *)(descr + 1);
> +    region_descr = (void *)(mem_access_array + 1);
> +
> +    memset(mem_access_array, 0, sizeof(*mem_access_array));
> +    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
> +    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
> +    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
> +
> +    memset(region_descr, 0, sizeof(*region_descr));
> +    region_descr->total_page_count = shm->page_count;
> +
> +    region_descr->address_range_count = 1;
> +    last_pa = page_to_maddr(shm->pages[0]);
> +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> +    {
> +        pa = page_to_maddr(shm->pages[n]);
> +        if ( last_pa + PAGE_SIZE == pa )
> +        {
> +            continue;
> +        }
> +        region_descr->address_range_count++;
> +    }
> +
> +    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
> +              sizeof(*region_descr) +
> +              region_descr->address_range_count * sizeof(*addr_range);
> +
> +    addr_range = region_descr->address_range_array;
> +    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
> +    last_pa = page_to_maddr(shm->pages[0]);
> +    init_range(addr_range, last_pa);
> +    first = true;
> +    range_count = 1;
> +    range_base = 0;
> +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> +    {
> +        pa = page_to_maddr(shm->pages[n]);
> +        if ( last_pa + PAGE_SIZE == pa )
> +        {
> +            addr_range->page_count++;
> +            continue;
> +        }
> +
> +        if (frag_len == max_frag_len) {
coding style: if ( ... )
              {

> +            if (first)

coding style: if ( ... )

> +            {
> +                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> +                first = false;
> +            }
> +            else
> +            {
> +                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> +            }
> +            if (ret <= 0)
> +                return ret;
> +            range_base = range_count;
> +            range_count = 0;
> +            frag_len = sizeof(*addr_range);
> +            addr_range = ffa_tx;
> +        } else {
> +            frag_len += sizeof(*addr_range);
> +            addr_range++;
> +        }
> +        init_range(addr_range, pa);
> +        range_count++;
> +    }
> +
> +    if (first)
> +        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> +    else
> +        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> +}
> +
> +static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
> +                                struct ffa_mem_transaction_x *trans)
> +{
> +    uint16_t mem_reg_attr;
> +    uint32_t flags;
> +    uint32_t count;
> +    uint32_t offs;
> +    uint32_t size;
> +
> +    if (ffa_vers >= FFA_VERSION_1_1) {
coding style: if ( ... )
          {
> +        struct ffa_mem_transaction_1_1 *descr;
> +
> +        if (blen < sizeof(*descr))
coding style: if ( ... )
> +            return FFA_RET_INVALID_PARAMETERS;
> +
> +        descr = buf;
> +        trans->sender_id = read_atomic(&descr->sender_id);
> +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> +        flags = read_atomic(&descr->flags);
> +        trans->global_handle = read_atomic(&descr->global_handle);
> +        trans->tag = read_atomic(&descr->tag);
> +
> +        count = read_atomic(&descr->mem_access_count);
> +        size = read_atomic(&descr->mem_access_size);
> +        offs = read_atomic(&descr->mem_access_offs);
> +    } else {
coding style:
 }
 else
 {
> +        struct ffa_mem_transaction_1_0 *descr;
> +
> +        if (blen < sizeof(*descr))
> +            return FFA_RET_INVALID_PARAMETERS;
> +
> +        descr = buf;
> +        trans->sender_id = read_atomic(&descr->sender_id);
> +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> +        flags = read_atomic(&descr->flags);
> +        trans->global_handle = read_atomic(&descr->global_handle);
> +        trans->tag = read_atomic(&descr->tag);
> +
> +        count = read_atomic(&descr->mem_access_count);
> +        size = sizeof(struct ffa_mem_access);
> +        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
> +    }
> +
> +    if (mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
> +        count > UINT8_MAX || offs > UINT16_MAX)
coding style: if ( ... )

> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    /* Check that the endpoint memory access descriptor array fits */
> +    if (size * count + offs > blen)
the same

> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    trans->mem_reg_attr = mem_reg_attr;
> +    trans->flags = flags;
> +    trans->mem_access_size = size;
> +    trans->mem_access_count = count;
> +    trans->mem_access_offs = offs;
> +    return 0;
> +}
> +
> +static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
> +                              unsigned int frag_len)
> +{
> +    struct domain *d = current->domain;
> +    unsigned int o = offs;
> +    unsigned int l;
> +    int ret;
> +
> +    if (frag_len < o)
the same

> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    /* Fill up the first struct ffa_address_range */
> +    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
> +    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
> +    s->range_offset += l;
> +    o += l;
> +    if (s->range_offset != sizeof(s->range))
> +        goto out;
> +    s->range_offset = 0;
> +
> +    while (true) {
the same

> +        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
> +                            &s->current_page_idx);
> +        if (ret)
> +            return ret;
> +        if (s->range_count == 1)
> +            return 0;
> +        s->range_count--;
> +        if (frag_len - o < sizeof(s->range))
> +            break;
> +        memcpy(&s->range, s->buf + o, sizeof(s->range));
> +        o += sizeof(s->range);
> +    }
> +
> +    /* Collect any remaining bytes for the next struct ffa_address_range */
> +    s->range_offset = frag_len - o;
> +    memcpy(&s->range, s->buf + o, frag_len - o);
> +out:
> +    s->frag_offset += frag_len;
> +    return s->frag_offset;
> +}
> +
> +static void handle_mem_share(struct cpu_user_regs *regs)
> +{
> +    uint32_t tot_len = get_user_reg(regs, 1);
> +    uint32_t frag_len = get_user_reg(regs, 2);
> +    uint64_t addr = get_user_reg(regs, 3);
> +    uint32_t page_count = get_user_reg(regs, 4);
> +    struct ffa_mem_transaction_x trans;
> +    struct ffa_mem_access *mem_access;
> +    struct ffa_mem_region *region_descr;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    struct ffa_shm_mem *shm = NULL;
> +    unsigned int last_page_idx = 0;
> +    uint32_t range_count;
> +    uint32_t region_offs;
> +    int ret = FFA_RET_DENIED;
> +    uint32_t handle_hi = 0;
> +    uint32_t handle_lo = 0;
> +
> +    /*
> +     * We're only accepting memory transaction descriptors via the rx/tx
> +     * buffer.
> +     */
> +    if ( addr ) {
coding style

> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    /* Check that fragment legnth doesn't exceed total length */
> +    if (frag_len > tot_len) {
coding style

> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out_unlock;
> +    }
> +
> +    spin_lock(&ffa_buffer_lock);
> +
> +    if ( frag_len > ctx->page_count * PAGE_SIZE )
> +        goto out_unlock;
> +
> +    if ( !ffa_page_count ) {
> +        ret = FFA_RET_NO_MEMORY;
> +        goto out_unlock;
> +    }
> +
> +    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
> +    if (ret)
coding style

> +        goto out_unlock;
> +
> +    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out;
> +    }
> +
> +    /* Only supports sharing it with one SP for now */
> +    if ( trans.mem_access_count != 1 )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    if ( trans.sender_id != get_vm_id(d) )
> +    {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out_unlock;
> +    }
> +
> +    /* Check that it fits in the supplied data */
> +    if ( trans.mem_access_offs + trans.mem_access_size > frag_len)
> +        goto out_unlock;
> +
> +    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
> +    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
> +    {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    region_offs = read_atomic(&mem_access->region_offs);
> +    if (sizeof(*region_descr) + region_offs > frag_len) {
> +        ret = FFA_RET_NOT_SUPPORTED;
> +        goto out_unlock;
> +    }
> +
> +    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
> +    range_count = read_atomic(&region_descr->address_range_count);
> +    page_count = read_atomic(&region_descr->total_page_count);
> +
> +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> +    if ( !shm )
> +    {
> +        ret = FFA_RET_NO_MEMORY;
> +        goto out;
> +    }
> +    shm->sender_id = trans.sender_id;
> +    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
> +    shm->page_count = page_count;
> +
> +    if (frag_len != tot_len) {
> +        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
> +
> +        if (!s) {
> +            ret = FFA_RET_NO_MEMORY;
> +            goto out;
> +        }
> +        s->shm = shm;
> +        s->range_count = range_count;
> +        s->buf = ctx->tx;
> +        s->buf_size = ffa_page_count * PAGE_SIZE;
> +        ret = add_mem_share_frag(s, sizeof(*region_descr)  + region_offs,
> +                                 frag_len);
> +        if (ret <= 0) {
> +            xfree(s);
> +            if (ret < 0)
> +                goto out;
> +        } else {
> +            shm->handle = next_handle++;
> +            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> +            spin_lock(&ffa_mem_list_lock);
> +            list_add_tail(&s->list, &ffa_frag_list);
> +            spin_unlock(&ffa_mem_list_lock);
> +        }
> +        goto out_unlock;
> +    }
> +
> +    /*
> +     * Check that the Composite memory region descriptor fits.
> +     */
> +    if ( sizeof(*region_descr) + region_offs +
> +         range_count * sizeof(struct ffa_address_range) > frag_len) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
> +                        0, &last_page_idx);
> +    if ( ret )
> +        goto out;
> +    if (last_page_idx != shm->page_count) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    /* Note that share_shm() uses our tx buffer */
> +    ret = share_shm(shm);
> +    if ( ret )
> +        goto out;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_add_tail(&shm->list, &ffa_mem_list);
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> +
> +out:
> +    if ( ret && shm )
> +    {
> +        put_shm_pages(shm);
> +        xfree(shm);
> +    }
> +out_unlock:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    if ( ret > 0 )
> +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
> +    else if ( ret == 0)
> +            set_regs_success(regs, handle_lo, handle_hi);
> +    else
> +            set_regs_error(regs, ret);
> +}
> +
> +static struct mem_frag_state *find_frag_state(uint64_t handle)
> +{
> +    struct mem_frag_state *s;
> +
> +    list_for_each_entry(s, &ffa_frag_list, list)
> +        if ( s->shm->handle == handle)
> +            return s;
> +
> +    return NULL;
> +}
> +
> +static void handle_mem_frag_tx(struct cpu_user_regs *regs)
> +{
> +    uint32_t frag_len = get_user_reg(regs, 3);
> +    uint32_t handle_lo = get_user_reg(regs, 1);
> +    uint32_t handle_hi = get_user_reg(regs, 2);
> +    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
> +    struct mem_frag_state *s;
> +    uint16_t sender_id = 0;
> +    int ret;
> +
> +    spin_lock(&ffa_buffer_lock);
> +    s = find_frag_state(handle);
> +    if (!s) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +    sender_id = s->shm->sender_id;
> +
> +    if (frag_len > s->buf_size) {
> +        ret = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    ret = add_mem_share_frag(s, 0, frag_len);
> +    if (ret == 0) {
> +        /* Note that share_shm() uses our tx buffer */
> +        ret = share_shm(s->shm);
> +        if (ret == 0) {
> +            spin_lock(&ffa_mem_list_lock);
> +            list_add_tail(&s->shm->list, &ffa_mem_list);
> +            spin_unlock(&ffa_mem_list_lock);
> +        } else {
> +            put_shm_pages(s->shm);
> +            xfree(s->shm);
> +        }
> +        spin_lock(&ffa_mem_list_lock);
> +        list_del(&s->list);
> +        spin_unlock(&ffa_mem_list_lock);
> +        xfree(s);
> +    } else if (ret < 0) {
> +        put_shm_pages(s->shm);
> +        xfree(s->shm);
> +        spin_lock(&ffa_mem_list_lock);
> +        list_del(&s->list);
> +        spin_unlock(&ffa_mem_list_lock);
> +        xfree(s);
> +    }
> +out:
> +    spin_unlock(&ffa_buffer_lock);
> +
> +    if ( ret > 0 )
> +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
> +    else if ( ret == 0)
> +            set_regs_success(regs, handle_lo, handle_hi);
> +    else
> +            set_regs_error(regs, ret);
> +}
> +
> +static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
> +{
> +    struct ffa_shm_mem *shm;
> +    uint32_t handle_hi;
> +    uint32_t handle_lo;
> +    int ret;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_for_each_entry(shm, &ffa_mem_list, list) {
> +        if ( shm->handle == handle )
> +            goto found_it;
> +    }
> +    shm = NULL;
> +found_it:
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    if ( !shm )
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    reg_pair_from_64(&handle_hi, &handle_lo, handle);
> +    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
> +    if ( ret )
> +        return ret;
> +
> +    spin_lock(&ffa_mem_list_lock);
> +    list_del(&shm->list);
> +    spin_unlock(&ffa_mem_list_lock);
> +
> +    put_shm_pages(shm);
> +    xfree(shm);
> +
> +    return ret;
> +}
> +
> +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    uint32_t count;
> +    uint32_t e;
> +
> +    if ( !ctx )
> +        return false;
> +
> +    switch ( fid )
> +    {
> +    case FFA_VERSION:
> +        handle_version(regs);
> +        return true;
> +    case FFA_ID_GET:
> +        set_regs_success(regs, get_vm_id(d), 0);
> +        return true;
> +    case FFA_RXTX_MAP_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_RXTX_MAP_64:
> +#endif
> +        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
> +                            get_user_reg(regs, 3));
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_RXTX_UNMAP:
> +        e = handle_rxtx_unmap();
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_PARTITION_INFO_GET:
> +        e = handle_partition_info_get(get_user_reg(regs, 1),
> +                                      get_user_reg(regs, 2),
> +                                      get_user_reg(regs, 3),
> +                                      get_user_reg(regs, 4),
> +                                      get_user_reg(regs, 5), &count);
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, count, 0);
> +        return true;
> +    case FFA_RX_RELEASE:
> +        e = handle_rx_release();
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_MSG_SEND_DIRECT_REQ_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_MSG_SEND_DIRECT_REQ_64:
> +#endif
> +        handle_msg_send_direct_req(regs, fid);
> +        return true;
> +    case FFA_MEM_SHARE_32:
> +#ifdef CONFIG_ARM_64
> +    case FFA_MEM_SHARE_64:
> +#endif
> +        handle_mem_share(regs);
> +        return true;
> +    case FFA_MEM_RECLAIM:
> +        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
> +                                              get_user_reg(regs, 1)),
> +                               get_user_reg(regs, 3));
> +        if ( e )
> +            set_regs_error(regs, e);
> +        else
> +            set_regs_success(regs, 0, 0);
> +        return true;
> +    case FFA_MEM_FRAG_TX:
> +        handle_mem_frag_tx(regs);
> +        return true;
> +
> +    default:
> +        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
> +        return false;
> +    }
> +}
> +
> +int ffa_domain_init(struct domain *d)
> +{
> +    struct ffa_ctx *ctx;
> +    unsigned int n;
> +    unsigned int m;
> +    unsigned int c_pos;
> +    int32_t res;
> +
> +    if ( !ffa_version )
> +        return 0;
> +
> +    ctx = xzalloc(struct ffa_ctx);
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    for ( n = 0; n < subsr_vm_created_count; n++ ) {
> +        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
> +                                     FFA_MSG_SEND_VM_CREATED);
> +        if ( res ) {
> +            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to  %u: res %d\n",
> +                   get_vm_id(d), subsr_vm_created[n], res);
> +            c_pos = n;
> +            goto err;
> +        }
> +    }
> +
> +    d->arch.ffa = ctx;
> +
> +    return 0;
> +
> +err:
> +    /* Undo any already sent vm created messaged */
> +    for ( n =3D 0; n < c_pos; n++ )
> +        for ( m =3D 0; m < subsr_vm_destroyed_count; m++ )
> +            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
> +                ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
> +                                       FFA_MSG_SEND_VM_DESTROYED);
> +    return -ENOMEM;
> +}
> +
> +int ffa_relinquish_resources(struct domain *d)
> +{
> +    struct ffa_ctx *ctx = d->arch.ffa;
> +    unsigned int n;
> +    int32_t res;
> +
> +    if ( !ctx )
> +        return 0;
> +
> +    for ( n = 0; n < subsr_vm_destroyed_count; n++ ) {
> +        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
> +                                     FFA_MSG_SEND_VM_DESTROYED);
> +
> +        if ( res )
> +            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to  %u: res %d\n",
> +                   get_vm_id(d), subsr_vm_destroyed[n], res);
> +    }
> +
> +    XFREE(d->arch.ffa);
> +
> +    return 0;
> +}
> +
> +static bool __init init_subscribers(void)
> +{
> +    struct ffa_partition_info_1_1 *fpi;
> +    bool ret = false;
> +    uint32_t count;
> +    uint32_t e;
> +    uint32_t n;
> +    uint32_t c_pos;
> +    uint32_t d_pos;
> +
> +    if ( ffa_version < FFA_VERSION_1_1 )
> +        return true;
> +
> +    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
> +    ffa_rx_release();
> +    if ( e ) {
> +        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
> +        goto out;
> +    }
> +
> +    fpi = ffa_rx;
> +    subsr_vm_created_count = 0;
> +    subsr_vm_destroyed_count = 0;
> +    for ( n = 0; n < count; n++ ) {
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> +            subsr_vm_created_count++;
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> +            subsr_vm_destroyed_count++;
> +    }
> +
> +    if ( subsr_vm_created_count )
> +        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
> +    if ( subsr_vm_destroyed_count )
> +        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
> +    if ( (subsr_vm_created_count && !subsr_vm_created) ||
> +        (subsr_vm_destroyed_count && !subsr_vm_destroyed) ) {
> +        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
> +        subsr_vm_created_count = 0;
> +        subsr_vm_destroyed_count = 0;
> +        XFREE(subsr_vm_created);
> +        XFREE(subsr_vm_destroyed);
> +        goto out;
> +    }
> +
> +    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) {
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> +            subsr_vm_created[c_pos++] = fpi[n].id;
> +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> +            subsr_vm_destroyed[d_pos++] = fpi[n].id;
> +    }
> +
> +    ret = true;
> +out:
> +    ffa_rx_release();
> +    return ret;
> +}
> +
> +static int __init ffa_init(void)
> +{
> +    uint32_t vers;
> +    uint32_t e;
> +    unsigned int major_vers;
> +    unsigned int minor_vers;
> +
> +    /*
> +     * psci_init_smccc() updates this value with what's reported by EL-3
> +     * or secure world.
> +     */
> +    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
> +    {
> +        printk(XENLOG_ERR
> +               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
> +               smccc_ver, ARM_SMCCC_VERSION_1_2);
> +        return 0;
> +    }
> +
> +    if ( !ffa_get_version(&vers) )
> +        return 0;
> +
> +    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
> +    {
> +        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
> +        return 0;
> +    }
> +
> +    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
> +    minor_vers = vers & FFA_VERSION_MINOR_MASK;
> +    printk(XENLOG_ERR "ARM FF-A Mediator version %u.%u\n",
> +           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
> +    printk(XENLOG_ERR "ARM FF-A Firmware version %u.%u\n",
> +           major_vers, minor_vers);
> +
> +    ffa_rx = alloc_xenheap_pages(0, 0);
> +    if ( !ffa_rx )
> +        return 0;
> +
> +    ffa_tx = alloc_xenheap_pages(0, 0);
> +    if ( !ffa_tx )
> +        goto err_free_ffa_rx;
> +
> +    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
> +    if ( e )
> +    {
> +        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
> +        goto err_free_ffa_tx;
> +    }
> +    ffa_page_count = 1;
> +    ffa_version = vers;
> +
> +    if ( !init_subscribers() )
> +        goto err_free_ffa_tx;
> +
> +    return 0;
> +
> +err_free_ffa_tx:
> +    free_xenheap_pages(ffa_tx, 0);
> +    ffa_tx = NULL;
> +err_free_ffa_rx:
> +    free_xenheap_pages(ffa_rx, 0);
> +    ffa_rx = NULL;
> +    ffa_page_count = 0;
> +    ffa_version = 0;
> +    XFREE(subsr_vm_created);
> +    subsr_vm_created_count = 0;
> +    XFREE(subsr_vm_destroyed);
> +    subsr_vm_destroyed_count = 0;
> +    return 0;
> +}
> +
> +__initcall(ffa_init);
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index ed63c2b6f91f..b3dee269bced 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -103,6 +103,10 @@ struct arch_domain
>      void *tee;
>  #endif
> 
> +#ifdef CONFIG_FFA
> +    void *ffa;
> +#endif
> +
>      bool directmap;
>  }  __cacheline_aligned;
> 
> diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
> new file mode 100644
> index 000000000000..1c6ce6421294
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/ffa.h
> @@ -0,0 +1,71 @@
> +/*
> + * xen/arch/arm/ffa.c
> + *
> + * Arm Firmware Framework for ARMv8-A(FFA) mediator
> + *
> + * Copyright (C) 2021  Linaro Limited
> + *
> + * Permission is hereby granted, free of charge, to any person
> + * obtaining a copy of this software and associated documentation
> + * files (the "Software"), to deal in the Software without restriction,
> + * including without limitation the rights to use, copy, modify, merge,
> + * publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so,
> + * subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be
> + * included in all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef __ASM_ARM_FFA_H__
> +#define __ASM_ARM_FFA_H__
> +
> +#include <xen/const.h>
> +
> +#include <asm/smccc.h>
> +#include <asm/types.h>
> +
> +#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
> +#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
> +
> +static inline bool is_ffa_fid(uint32_t fid)
> +{
> +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> +
> +    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
> +}
> +
> +#ifdef CONFIG_FFA
> +#define FFA_NR_FUNCS    11
> +
> +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
> +int ffa_domain_init(struct domain *d);
> +int ffa_relinquish_resources(struct domain *d);
> +#else
> +#define FFA_NR_FUNCS    0
> +
> +static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    return false;
> +}
> +
> +static inline int ffa_domain_init(struct domain *d)
> +{
> +    return 0;
> +}
> +
> +static inline int ffa_relinquish_resources(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif
> +
> +#endif /*__ASM_ARM_FFA_H__*/
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index 6f90c08a6304..34586025eff8 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -20,6 +20,7 @@
>  #include <public/arch-arm/smccc.h>
>  #include <asm/cpuerrata.h>
>  #include <asm/cpufeature.h>
> +#include <asm/ffa.h>
>  #include <asm/monitor.h>
>  #include <asm/regs.h>
>  #include <asm/smccc.h>
> @@ -32,7 +33,7 @@
>  #define XEN_SMCCC_FUNCTION_COUNT 3
>
>  /* Number of functions currently supported by Standard Service Service Calls. */
> -#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
> +#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
>
>  static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
>  {
> @@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
>      return do_vpsci_0_1_call(regs, fid);
>  }
>
> +static bool is_psci_fid(uint32_t fid)
> +{
> +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> +
> +    return fn >= 0 && fn <= 0x1fU;
> +}
> +
>  /* PSCI 0.2 interface and other Standard Secure Calls */
>  static bool handle_sssc(struct cpu_user_regs *regs)
>  {
> +    uint32_t fid = (uint32_t)get_user_reg(regs, 0);
>
> -    if ( do_vpsci_0_2_call(regs, fid) )
> -        return true;
> +    if ( is_psci_fid(fid) )
> +        return do_vpsci_0_2_call(regs, fid);
> +
> +    if ( is_ffa_fid(fid) )
> +        return ffa_handle_call(regs, fid);
>
>      switch ( fid )
>      {


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 21:56:06 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171161-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171161: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 21:55:44 +0000

flight 171161 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171161/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-vhd       8 xen-boot                   fail pass in 171154
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot           fail pass in 171154
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot   fail pass in 171155
 test-amd64-amd64-pair        12 xen-boot/src_host          fail pass in 171155
 test-amd64-amd64-pair        13 xen-boot/dst_host          fail pass in 171155
 test-amd64-amd64-xl-credit1   8 xen-boot                   fail pass in 171155
 test-amd64-amd64-xl-xsm       8 xen-boot                   fail pass in 171155
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 171156

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   21 days
Failing since        170716  2022-05-24 11:12:06 Z   21 days   52 attempts
Testing same since   171154  2022-06-13 04:37:11 Z    1 days    4 attempts

------------------------------------------------------------
2346 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 276082 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 14 23:42:15 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171165-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171165: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=debd0753663bc89c86f5462a53268f2e3f680f60
X-Osstest-Versions-That:
    qemuu=dcb40541ebca7ec98a14d461593b3cd7282b4fac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jun 2022 23:42:01 +0000

flight 171165 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171165/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171149
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171149
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171149
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171149
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171149
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171149
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171149
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171149
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                debd0753663bc89c86f5462a53268f2e3f680f60
baseline version:
 qemuu                dcb40541ebca7ec98a14d461593b3cd7282b4fac

Last test of basis   171149  2022-06-13 00:38:22 Z    1 days
Testing same since   171160  2022-06-14 06:39:46 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   dcb40541eb..debd075366  debd0753663bc89c86f5462a53268f2e3f680f60 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 00:24:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 00:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349482.575582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Gp9-0004YW-Vd; Wed, 15 Jun 2022 00:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349482.575582; Wed, 15 Jun 2022 00:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Gp9-0004YP-QS; Wed, 15 Jun 2022 00:23:55 +0000
Received: by outflank-mailman (input) for mailman id 349482;
 Wed, 15 Jun 2022 00:23:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oGR=WW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1Gp8-0004YJ-Lq
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 00:23:54 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dd34412-ec41-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 02:23:53 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 729AEB81868;
 Wed, 15 Jun 2022 00:23:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1097CC3411B;
 Wed, 15 Jun 2022 00:23:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dd34412-ec41-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655252627;
	bh=1NY9pf1XkYATiqp4Y50ryTvD10tjD2prZMCB+yj2zmA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kl2CQIC7s/wpCmPjoB1IAk0MOE55o2h3sxzZtMdOGpgt/pCc5WF8S08jmAOTs6vss
	 +2IDf6aQe7JgIZaw7We1Tt8S2nKf7qzpJZaVxcjjoBKFqw2cpMJlHK/+7MWDe+XcWr
	 IVG50fAk8x8IECk1Ceo2MkjW4BbvcnLQqCbSGxOSvKQ6L1qZRdNjYKZ2iNT/XAoBYE
	 arNTWoE+JYbY+pxjMC7X4noklucW038MB0XL+jqKS0W2RsNkuNgSf47EESDQ2b5jIW
	 GlOCSG07cdHIbWk5p5ZkIVOYo/3GeI+hXea4opGbwRUVQb3IUkPDhqEw9EgHOLz7Sw
	 pgXjqkJ6aPxfw==
Date: Tue, 14 Jun 2022 17:23:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@arm.com>
cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, 
    andrew.cooper3@citrix.com, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps
 while preparing the CPU
In-Reply-To: <55f45337-2da1-fe8f-b7a5-272577ed4d50@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206141723360.1837490@ubuntu-linux-20-04-desktop>
References: <20220614094119.94720-1-julien@xen.org> <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com> <3ed8e44f-293d-958f-c144-466e16d034e2@xen.org> <55f45337-2da1-fe8f-b7a5-272577ed4d50@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1894069408-1655252627=:1837490"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1894069408-1655252627=:1837490
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 14 Jun 2022, Michal Orzel wrote:
> On 14.06.2022 13:08, Julien Grall wrote:
> >>> +    unsigned int rc = 0;
> >> ... here you are setting rc to 0 even though it will be reassigned.
> >> Furthermore, if rc is used only in the case of CPU_UP_PREPARE, why not move the definition there?
> > 
> > Because I forgot to replace "return NOTIFY_DONE;" with :
> > 
> > return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
> That is what I thought.
> With these fixes you can add my Rb.

And also my

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-1894069408-1655252627=:1837490--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 00:32:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 00:32:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349492.575593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Gxq-00069k-Qo; Wed, 15 Jun 2022 00:32:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349492.575593; Wed, 15 Jun 2022 00:32:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Gxq-00069d-Nu; Wed, 15 Jun 2022 00:32:54 +0000
Received: by outflank-mailman (input) for mailman id 349492;
 Wed, 15 Jun 2022 00:32:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oGR=WW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1Gxp-00069X-EL
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 00:32:53 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0fb83a9-ec42-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 02:32:52 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 93D45B815A3;
 Wed, 15 Jun 2022 00:32:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DA71FC3411B;
 Wed, 15 Jun 2022 00:32:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0fb83a9-ec42-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655253170;
	bh=jfdbZ95q8z+IJK2N0Pr3tzIbKDlLFpMLlKv5d86pQ4Y=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=davfAyTQKUHdsG0hLEN3j+86P7klCtdDQDsjK0Bht7DCQ2SslHH5985Ep3crdpzc+
	 GDIAnrEqxpsZZwJOF9HJcuUGCGIsS/E4tbQxhtCz6oEf86pwWkm3oaikFCVxK4MHkL
	 g/hWZ7kh9JKB8ogdDbPoeG8uFncngYC/waRfiKKs6RBspZwa0/C0mKSWFmpirSuVV/
	 VbnW3XMe+uNYIrS+BcyBgGlVKjo6XYxj7U4lT5F646Q1WXOE8eqdFmagexlIwT5Slm
	 SB3WkTBKB/H1WpCoS4+6xGwUF/NSKIjzoniEDzrg1taCrvU7KfvphGAdam/30vMI9y
	 5FSYnQ45grAzQ==
Date: Tue, 14 Jun 2022 17:32:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
In-Reply-To: <20220614094157.95631-1-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206141731320.1837490@ubuntu-linux-20-04-desktop>
References: <20220614094157.95631-1-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 14 Jun 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
> xmalloc()" extended the checks in _xmalloc() to catch any use of the
> helpers from context with interrupts disabled.
> 
> Unfortunately, the rule is not followed when initializing the per-CPU
> IRQs:
> 
> (XEN) Xen call trace:
> (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
> (XEN)    [<00000000>] 00000000 (LR)
> (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
> (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
> (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
> (XEN)    [<00288fa4>] start_secondary+0x194/0x274
> (XEN)    [<40010170>] 40010170
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 2:
> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
> (XEN) ****************************************
> 
> This is happening because zalloc_cpumask_var() may allocate memory
> if NR_CPUS is > 2 * sizeof(unsigned long).
> 
> Avoid the problem by allocating the per-CPU IRQs while preparing the
> CPU.
> 
> This also has the benefit of removing a BUG_ON() in the secondary CPU
> code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/include/asm/irq.h |  1 -
>  xen/arch/arm/irq.c             | 35 +++++++++++++++++++++++++++-------
>  xen/arch/arm/smpboot.c         |  2 --
>  3 files changed, 28 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/irq.h b/xen/arch/arm/include/asm/irq.h
> index e45d57459899..245f49dcbac5 100644
> --- a/xen/arch/arm/include/asm/irq.h
> +++ b/xen/arch/arm/include/asm/irq.h
> @@ -73,7 +73,6 @@ static inline bool is_lpi(unsigned int irq)
>  bool is_assignable_irq(unsigned int irq);
>  
>  void init_IRQ(void);
> -void init_secondary_IRQ(void);
>  
>  int route_irq_to_guest(struct domain *d, unsigned int virq,
>                         unsigned int irq, const char *devname);
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index b761d90c4063..56bdcb95335d 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -17,6 +17,7 @@
>   * GNU General Public License for more details.
>   */
>  
> +#include <xen/cpu.h>
>  #include <xen/lib.h>
>  #include <xen/spinlock.h>
>  #include <xen/irq.h>
> @@ -100,7 +101,7 @@ static int __init init_irq_data(void)
>      return 0;
>  }
>  
> -static int init_local_irq_data(void)
> +static int init_local_irq_data(unsigned int cpu)
>  {
>      int irq;
>  
> @@ -108,7 +109,7 @@ static int init_local_irq_data(void)
>  
>      for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
>      {
> -        struct irq_desc *desc = irq_to_desc(irq);
> +        struct irq_desc *desc = &per_cpu(local_irq_desc, cpu)[irq];
>          int rc = init_one_irq_desc(desc);
>  
>          if ( rc )
> @@ -131,6 +132,29 @@ static int init_local_irq_data(void)
>      return 0;
>  }
>  
> +static int cpu_callback(struct notifier_block *nfb, unsigned long action,
> +                        void *hcpu)
> +{
> +    unsigned long cpu = (unsigned long)hcpu;

unsigned int cpu ?

The rest looks good


> +    int rc = 0;
> +
> +    switch ( action )
> +    {
> +    case CPU_UP_PREPARE:
> +        rc = init_local_irq_data(cpu);
> +        if ( rc )
> +            printk(XENLOG_ERR "Unable to allocate local IRQ for CPU%lu\n",
> +                   cpu);
> +        break;
> +    }
> +
> +    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
> +}
> +
> +static struct notifier_block cpu_nfb = {
> +    .notifier_call = cpu_callback,
> +};
> +
>  void __init init_IRQ(void)
>  {
>      int irq;
> @@ -140,13 +164,10 @@ void __init init_IRQ(void)
>          local_irqs_type[irq] = IRQ_TYPE_INVALID;
>      spin_unlock(&local_irqs_type_lock);
>  
> -    BUG_ON(init_local_irq_data() < 0);
> +    BUG_ON(init_local_irq_data(smp_processor_id()) < 0);
>      BUG_ON(init_irq_data() < 0);
> -}
>  
> -void init_secondary_IRQ(void)
> -{
> -    BUG_ON(init_local_irq_data() < 0);
> +    register_cpu_notifier(&cpu_nfb);
>  }
>  
>  static inline struct irq_guest *irq_get_guest_info(struct irq_desc *desc)
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 9bb32a301a70..4888bcd78a5a 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -359,8 +359,6 @@ void start_secondary(void)
>  
>      gic_init_secondary_cpu();
>  
> -    init_secondary_IRQ();
> -
>      set_current(idle_vcpu[cpuid]);
>  
>      setup_cpu_sibling_map(cpuid);
> -- 
> 2.32.0
> 
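The return-value convention agreed on in the review above (NOTIFY_DONE when there is nothing to report, notifier_from_errno(rc) to propagate a failure) can be sketched stand-alone. The constants, notifier_from_errno(), and init_local_irq_data() below are simplified stand-ins for illustration, not the real Xen definitions:

```c
#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins for the notifier return codes; the real values
 * live in Xen's notifier headers. */
#define NOTIFY_DONE      0x0000
#define NOTIFY_STOP_MASK 0x8000

#define CPU_UP_PREPARE   1

/* Stand-in for notifier_from_errno(): fold a -errno value into a
 * "stop" notifier code so the caller can see the failure. */
static int notifier_from_errno(int err)
{
    return NOTIFY_STOP_MASK | (-err & 0x7fff);
}

/* Hypothetical allocation step: pretend it fails (-ENOMEM) for an
 * out-of-range CPU so both return paths can be exercised. */
static int init_local_irq_data(unsigned int cpu)
{
    return cpu < 4 ? 0 : -12; /* -ENOMEM */
}

/* The pattern from the patch: rc is only set for CPU_UP_PREPARE, and
 * the final return distinguishes "nothing to do" from a real error. */
static int cpu_callback(unsigned long action, unsigned long cpu)
{
    int rc = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        rc = init_local_irq_data((unsigned int)cpu);
        if ( rc )
            printf("Unable to allocate local IRQ for CPU%lu\n", cpu);
        break;
    }

    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
}
```

With this shape, other actions fall through the switch with rc still 0 and yield NOTIFY_DONE, which is why initializing rc at its declaration is the natural form once the early "return NOTIFY_DONE;" is dropped.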


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 00:46:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 00:46:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349505.575604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HAW-0007oH-71; Wed, 15 Jun 2022 00:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349505.575604; Wed, 15 Jun 2022 00:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HAW-0007oA-3F; Wed, 15 Jun 2022 00:46:00 +0000
Received: by outflank-mailman (input) for mailman id 349505;
 Wed, 15 Jun 2022 00:45:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oGR=WW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1HAU-0007o4-IQ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 00:45:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 83f3a500-ec44-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 02:45:56 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 469CA6192F;
 Wed, 15 Jun 2022 00:45:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3A81AC3411B;
 Wed, 15 Jun 2022 00:45:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83f3a500-ec44-11ec-9917-058037db3bb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655253954;
	bh=xvhFYv8D+8jXylVwssx/dBCMsIFROmVH1bInter13Nc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ctt95FJ6/4hNntYIRG6hwe5RcvTHDxGhwpk1EVC4+KZ8C5wMWFQzO60hW4uUvcLpJ
	 TI0zKDi0GCAfGF7AaE96Z5o7MtiZB8sXycGKRJkmxS3BrY60u/1rmTzkqD+9jBQry6
	 edaEPzoFbZj/v2UHNT11WXfOlNQUdnq5AHxy+sm/uptqnE1ebuSv1Q2hGwJKct08ZB
	 Uydk10jb7KkMSR6avUIfWbu1oaXF9HFmxCHm5f+yWtYpxT9XaZpaqZoGvwmtHjaQ1d
	 gGDyscUEENQ4rabSiC9/keOXd8EZrDIooS5Q2cWl8F0M2RoCWC7Lwqr4dmuhKWJ3hh
	 uhtn0Q1LF7u7A==
Date: Tue, 14 Jun 2022 17:45:53 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for
 DMA allocations
In-Reply-To: <a51dec23-c543-b571-8047-59f39abb0bee@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206141735430.1837490@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-2-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop> <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
 <alpine.DEB.2.22.394.2206101709210.756493@ubuntu-linux-20-04-desktop> <a51dec23-c543-b571-8047-59f39abb0bee@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1426860785-1655253485=:1837490"
Content-ID: <alpine.DEB.2.22.394.2206141738350.1837490@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1426860785-1655253485=:1837490
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2206141738351.1837490@ubuntu-linux-20-04-desktop>

On Tue, 14 Jun 2022, Oleksandr wrote:
> On 11.06.22 03:12, Stefano Stabellini wrote:
> > On Wed, 8 Jun 2022, Oleksandr wrote:
> > > 2. Drop the "page_list" entirely and use "dma_pool" for all (contiguous
> > > and
> > > non-contiguous) allocations. After all, all pages are initially contiguous
> > > in
> > > fill_list() as they are built from the resource. This changes behavior for
> > > all
> > > users of xen_alloc_unpopulated_pages()
> > > 
> > > Below the diff for unpopulated-alloc.c. The patch is also available at:
> > > 
> > > https://github.com/otyshchenko1/linux/commit/7be569f113a4acbdc4bcb9b20cb3995b3151387a
> > > 
> > > 
> > > diff --git a/drivers/xen/unpopulated-alloc.c
> > > b/drivers/xen/unpopulated-alloc.c
> > > index a39f2d3..ab5c7bd 100644
> > > --- a/drivers/xen/unpopulated-alloc.c
> > > +++ b/drivers/xen/unpopulated-alloc.c
> > > @@ -1,5 +1,7 @@
> > >   // SPDX-License-Identifier: GPL-2.0
> > > +#include <linux/dma-mapping.h>
> > >   #include <linux/errno.h>
> > > +#include <linux/genalloc.h>
> > >   #include <linux/gfp.h>
> > >   #include <linux/kernel.h>
> > >   #include <linux/mm.h>
> > > @@ -13,8 +15,8 @@
> > >   #include <xen/xen.h>
> > > 
> > >   static DEFINE_MUTEX(list_lock);
> > > -static struct page *page_list;
> > > -static unsigned int list_count;
> > > +
> > > +static struct gen_pool *dma_pool;
> > > 
> > >   static struct resource *target_resource;
> > > 
> > > @@ -36,7 +38,7 @@ static int fill_list(unsigned int nr_pages)
> > >          struct dev_pagemap *pgmap;
> > >          struct resource *res, *tmp_res = NULL;
> > >          void *vaddr;
> > > -       unsigned int i, alloc_pages = round_up(nr_pages,
> > > PAGES_PER_SECTION);
> > > +       unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> > >          struct range mhp_range;
> > >          int ret;
> > > 
> > > @@ -106,6 +108,7 @@ static int fill_list(unsigned int nr_pages)
> > >            * conflict with any devices.
> > >            */
> > >          if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > > +               unsigned int i;
> > >                  xen_pfn_t pfn = PFN_DOWN(res->start);
> > > 
> > >                  for (i = 0; i < alloc_pages; i++) {
> > > @@ -125,16 +128,17 @@ static int fill_list(unsigned int nr_pages)
> > >                  goto err_memremap;
> > >          }
> > > 
> > > -       for (i = 0; i < alloc_pages; i++) {
> > > -               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> > > -
> > > -               pg->zone_device_data = page_list;
> > > -               page_list = pg;
> > > -               list_count++;
> > > +       ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr,
> > > res->start,
> > > +                       alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
> > > +       if (ret) {
> > > +               pr_err("Cannot add memory range to the pool\n");
> > > +               goto err_pool;
> > >          }
> > > 
> > >          return 0;
> > > 
> > > +err_pool:
> > > +       memunmap_pages(pgmap);
> > >   err_memremap:
> > >          kfree(pgmap);
> > >   err_pgmap:
> > > @@ -149,51 +153,49 @@ static int fill_list(unsigned int nr_pages)
> > >          return ret;
> > >   }
> > > 
> > > -/**
> > > - * xen_alloc_unpopulated_pages - alloc unpopulated pages
> > > - * @nr_pages: Number of pages
> > > - * @pages: pages returned
> > > - * @return 0 on success, error otherwise
> > > - */
> > > -int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> > > +static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
> > > +               bool contiguous)
> > >   {
> > >          unsigned int i;
> > >          int ret = 0;
> > > +       void *vaddr;
> > > +       bool filled = false;
> > > 
> > >          /*
> > >           * Fallback to default behavior if we do not have any suitable
> > > resource
> > >           * to allocate required region from and as the result we won't be
> > > able
> > > to
> > >           * construct pages.
> > >           */
> > > -       if (!target_resource)
> > > +       if (!target_resource) {
> > > +               if (contiguous)
> > > +                       return -ENODEV;
> > > +
> > >                  return xen_alloc_ballooned_pages(nr_pages, pages);
> > > +       }
> > > 
> > >          mutex_lock(&list_lock);
> > > -       if (list_count < nr_pages) {
> > > -               ret = fill_list(nr_pages - list_count);
> > > +
> > > +       while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
> > > +               if (filled)
> > > +                       ret = -ENOMEM;
> > > +               else {
> > > +                       ret = fill_list(nr_pages);
> > > +                       filled = true;
> > > +               }
> > >                  if (ret)
> > >                          goto out;
> > >          }
> > > 
> > >          for (i = 0; i < nr_pages; i++) {
> > > -               struct page *pg = page_list;
> > > -
> > > -               BUG_ON(!pg);
> > > -               page_list = pg->zone_device_data;
> > > -               list_count--;
> > > -               pages[i] = pg;
> > > +               pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
> > > 
> > >   #ifdef CONFIG_XEN_HAVE_PVMMU
> > >                  if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > > -                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
> > > +                       ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
> > >                          if (ret < 0) {
> > > -                               unsigned int j;
> > > -
> > > -                               for (j = 0; j <= i; j++) {
> > > -                                       pages[j]->zone_device_data = page_list;
> > > -                                       page_list = pages[j];
> > > -                                       list_count++;
> > > -                               }
> > > +                               /* XXX Do we need to zero pages[i]? */
> > > +                               gen_pool_free(dma_pool, (unsigned long)vaddr,
> > > +                                               nr_pages * PAGE_SIZE);
> > >                                  goto out;
> > >                          }
> > >                  }
> > > @@ -204,32 +206,89 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> > >          mutex_unlock(&list_lock);
> > >          return ret;
> > >   }
> > > -EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> > > 
> > > -/**
> > > - * xen_free_unpopulated_pages - return unpopulated pages
> > > - * @nr_pages: Number of pages
> > > - * @pages: pages to return
> > > - */
> > > -void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> > > +static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
> > > +               bool contiguous)
> > >   {
> > > -       unsigned int i;
> > > -
> > >          if (!target_resource) {
> > > +               if (contiguous)
> > > +                       return;
> > > +
> > >                  xen_free_ballooned_pages(nr_pages, pages);
> > >                  return;
> > >          }
> > > 
> > >          mutex_lock(&list_lock);
> > > -       for (i = 0; i < nr_pages; i++) {
> > > -               pages[i]->zone_device_data = page_list;
> > > -               page_list = pages[i];
> > > -               list_count++;
> > > +
> > > +       /* XXX Do we need to check the range (gen_pool_has_addr)? */
> > > +       if (contiguous)
> > > +               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
> > > +                               nr_pages * PAGE_SIZE);
> > > +       else {
> > > +               unsigned int i;
> > > +
> > > +               for (i = 0; i < nr_pages; i++)
> > > +                       gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
> > > +                                       PAGE_SIZE);
> > >          }
> > > +
> > >          mutex_unlock(&list_lock);
> > >   }
> > > +
> > > +/**
> > > + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> > > + * @nr_pages: Number of pages
> > > + * @pages: pages returned
> > > + * @return 0 on success, error otherwise
> > > + */
> > > +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> > > +{
> > > +       return alloc_unpopulated_pages(nr_pages, pages, false);
> > > +}
> > > +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> > > +
> > > +/**
> > > + * xen_free_unpopulated_pages - return unpopulated pages
> > > + * @nr_pages: Number of pages
> > > + * @pages: pages to return
> > > + */
> > > +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> > > +{
> > > +       free_unpopulated_pages(nr_pages, pages, false);
> > > +}
> > >   EXPORT_SYMBOL(xen_free_unpopulated_pages);
> > > 
> > > +/**
> > > + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
> > > + * @dev: valid struct device pointer
> > > + * @nr_pages: Number of pages
> > > + * @pages: pages returned
> > > + * @return 0 on success, error otherwise
> > > + */
> > > +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> > > +               struct page **pages)
> > > +{
> > > +       /* XXX Handle devices which support 64-bit DMA address only for now */
> > > +       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
> > > +               return -EINVAL;
> > > +
> > > +       return alloc_unpopulated_pages(nr_pages, pages, true);
> > > +}
> > > +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
> > > +
> > > +/**
> > > + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
> > > + * @dev: valid struct device pointer
> > > + * @nr_pages: Number of pages
> > > + * @pages: pages to return
> > > + */
> > > +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
> > > +               struct page **pages)
> > > +{
> > > +       free_unpopulated_pages(nr_pages, pages, true);
> > > +}
> > > +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
> > > +
> > >   static int __init unpopulated_init(void)
> > >   {
> > >          int ret;
> > > @@ -237,9 +296,19 @@ static int __init unpopulated_init(void)
> > >          if (!xen_domain())
> > >                  return -ENODEV;
> > > 
> > > +       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> > > +       if (!dma_pool) {
> > > +               pr_err("xen:unpopulated: Cannot create DMA pool\n");
> > > +               return -ENOMEM;
> > > +       }
> > > +
> > > +       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
> > > +
> > >          ret = arch_xen_unpopulated_init(&target_resource);
> > >          if (ret) {
> > >                  pr_err("xen:unpopulated: Cannot initialize target resource\n");
> > > +               gen_pool_destroy(dma_pool);
> > > +               dma_pool = NULL;
> > >                  target_resource = NULL;
> > >          }
> > > 
> > > [snip]
> > > 
> > > 
> > > I think, depending on the approach, we would likely need to do some
> > > renaming of fill_list, page_list, list_lock, etc.
> > > 
> > > 
> > > Both options work in my Arm64-based environment; I am not sure about x86.
> > > Or do we have another option here?
> > > I would be happy to go any route. What do you think?
> > The second option (use "dma_pool" for all) looks great, thank you for
> > looking into it!
> 
> 
> ok, great
> 
> 
> May I please clarify a few points before starting to prepare the non-RFC version:
> 
> 
> 1. According to the discussion at "[RFC PATCH 2/2] xen/grant-table: Use
> unpopulated DMAable pages instead of real RAM ones" we decided
> to stay away from "dma" in the names. Also, the second option (use
> "dma_pool" for all) implies dropping "page_list" entirely, so I am going
> to do some renaming:
> 
> - s/xen_alloc_unpopulated_dma_pages()/xen_alloc_unpopulated_contiguous_pages()
> - s/dma_pool/unpopulated_pool
> - s/list_lock/pool_lock
> - s/fill_list()/fill_pool()
> 
> Any objections?
 
Looks good

 
> 2. I don't much like the fact that in free_unpopulated_pages() we have to free
> page by page if contiguous is false, but unfortunately we cannot avoid doing
> that.
> I noticed that many users of unpopulated pages retain the initially allocated
> pages[] array, so it is passed here completely unmodified since being
> allocated, but there is code (for example, gnttab_page_cache_shrink() in
> grant-table.c) that can pass a pages[] array containing arbitrary pages.
> 
> static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>         bool contiguous)
> {
> 
> [snip]
> 
>     /* XXX Do we need to check the range (gen_pool_has_addr)? */
>     if (contiguous)
>         gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
>                 nr_pages * PAGE_SIZE);
>     else {
>         unsigned int i;
> 
>         for (i = 0; i < nr_pages; i++)
>             gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
>                     PAGE_SIZE);
>     }
> 
> [snip]
> 
> }
> 
> I think it wouldn't be a big deal for small allocations, but for big
> allocations it might not be optimal in terms of speed.
> 
> What do you think about updating some places which always require big
> allocations to allocate (and free) contiguous pages instead?
> A possible candidate is
> gem_create()/xen_drm_front_gem_free_object_unlocked() in
> drivers/gpu/drm/xen/xen_drm_front_gem.c.
> OTOH I realize this might be an inefficient use of resources. Or is it
> better not to?
 
Yes I think it is a good idea, more on this below.

 
> 3. alloc_unpopulated_pages() might be optimized for non-contiguous
> allocations; currently we always try to allocate a single chunk even if
> contiguous is false.
> 
> static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>         bool contiguous)
> {
> 
> [snip]
> 
>     /* XXX: Optimize for non-contiguous allocations */
>     while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>         if (filled)
>             ret = -ENOMEM;
>         else {
>             ret = fill_list(nr_pages);
>             filled = true;
>         }
>         if (ret)
>             goto out;
>     }
> 
> [snip]
> 
> }
> 
> 
> But we could allocate page by page for non-contiguous allocations; this might
> not be optimal in terms of speed, but it would be optimal for resource usage.
> What do you think?
> 
> static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>         bool contiguous)
> {
> 
> [snip]
> 
>     if (contiguous) {
>         while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>             if (filled)
>                 ret = -ENOMEM;
>             else {
>                 ret = fill_list(nr_pages);
>                 filled = true;
>             }
>             if (ret)
>                 goto out;
>         }
> 
>         for (i = 0; i < nr_pages; i++)
>             pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>     } else {
>         if (gen_pool_avail(dma_pool) < nr_pages) {
>             ret = fill_list(nr_pages - gen_pool_avail(dma_pool));
>             if (ret)
>                 goto out;
>         }
> 
>         for (i = 0; i < nr_pages; i++) {
>             vaddr = (void *)gen_pool_alloc(dma_pool, PAGE_SIZE);
>             if (!vaddr) {
>                 while (i--)
>                     gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
>                             PAGE_SIZE);
> 
>                 ret = -ENOMEM;
>                 goto out;
>             }
> 
>             pages[i] = virt_to_page(vaddr);
>         }
>     }
> 
> [snip]
> 
> }

Basically, if we allocate (and free) page-by-page it leads to more
efficient resource utilization but it is slower. If we allocate larger
contiguous chunks it is faster but it leads to less efficient resource
utilization.

Given that on both x86 and ARM the unpopulated memory resource is
arbitrarily large, I don't think we need to worry about resource
utilization. It is not backed by real memory. The only limitation is the
address space size, which is very large.

So I would say optimize for speed and use larger contiguous chunks even
when continuity is not strictly required.
--8323329-1426860785-1655253485=:1837490--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 00:51:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 00:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349515.575615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HG3-000129-1m; Wed, 15 Jun 2022 00:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349515.575615; Wed, 15 Jun 2022 00:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HG2-000122-V4; Wed, 15 Jun 2022 00:51:42 +0000
Received: by outflank-mailman (input) for mailman id 349515;
 Wed, 15 Jun 2022 00:51:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oGR=WW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1HG2-00011w-64
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 00:51:42 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51974d70-ec45-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 02:51:40 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id F2365B81A4C;
 Wed, 15 Jun 2022 00:51:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3DE10C3411B;
 Wed, 15 Jun 2022 00:51:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51974d70-ec45-11ec-9917-058037db3bb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655254298;
	bh=zYDarweJh7/bEePUo2Am4AELnMD53gnyz/vvlfofoc4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=b/rQn2i+978Z0a7/w74jSGbY8T1i2C6Orf0JFjyIxM8/0K7vIwcdhH1Ef6LHy+XWB
	 gsvmy26fC+vMxf2xCY7i7k2Jdlr/3IlyIChS5S2gr7Tt9N3pN3q1sHdnYLau+dnRfu
	 p0PnxLdsko2QUrKXMw0kPGHdVc2E8w3t66GNW9U+/vfBmNpOZAAQFovCNsFQ6VOfnW
	 jV1toIBMy/jSB0PRPahqNwZA1fBpoHLDiPM9VuUuNy608w9ysYBvc1Uiae5KM1IeRE
	 KLkPM1X7SkXz0NSNxd/ZbUYPXCiwi/WoiQdM1tkEuwg9yi75a9jz5OamH1K3Axhw4X
	 1/Ip6QaTaaq1Q==
Date: Tue, 14 Jun 2022 17:51:37 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
In-Reply-To: <1266f8cb-bbd6-d952-3108-89665ce76fec@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206141748150.1837490@ubuntu-linux-20-04-desktop>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com> <1652810658-27810-3-git-send-email-olekstysh@gmail.com> <alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop> <7f886dfb-2b42-bc70-d55f-14ecd8144e3e@gmail.com>
 <alpine.DEB.2.22.394.2206101644210.756493@ubuntu-linux-20-04-desktop> <1266f8cb-bbd6-d952-3108-89665ce76fec@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1366514938-1655254298=:1837490"


--8323329-1366514938-1655254298=:1837490
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 14 Jun 2022, Oleksandr wrote:
> On 11.06.22 02:55, Stefano Stabellini wrote:
> 
> Hello Stefano
> 
> > On Thu, 9 Jun 2022, Oleksandr wrote:
> > > On 04.06.22 00:19, Stefano Stabellini wrote:
> > > Hello Stefano
> > > 
> > > Thank you for having a look and sorry for the late response.
> > > 
> > > > On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
> > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > 
> > > > > Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled then unpopulated
> > > > > DMAable (contiguous) pages will be allocated for grant mappings
> > > > > instead of ballooning out real RAM pages.
> > > > > 
> > > > > TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
> > > > > fails.
> > > > > 
> > > > > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > > > > ---
> > > > >    drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
> > > > >    1 file changed, 27 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > > > > index 8ccccac..2bb4392 100644
> > > > > --- a/drivers/xen/grant-table.c
> > > > > +++ b/drivers/xen/grant-table.c
> > > > > @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
> > > > >     */
> > > > >    int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
> > > > >    {
> > > > > +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
> > > > > +	int ret;
> > > > This is an alternative implementation of the same function.
> > > Currently, yes.
> > > 
> > > 
> > > >    If we are
> > > > going to use #ifdef, then I would #ifdef the entire function, rather
> > > > than just the body. Otherwise within the function body we can use
> > > > IS_ENABLED.
> > > 
> > > Good point. Note, there is one missing thing in the current patch, which
> > > is described in the TODO:
> > > 
> > > "Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages() fails."
> > > So I will likely use IS_ENABLED within the function body.
> > > 
> > > If CONFIG_XEN_UNPOPULATED_ALLOC is enabled then gnttab_dma_alloc_pages()
> > > will try xen_alloc_unpopulated_dma_pages() first and, if that fails,
> > > fall back to allocating RAM pages and ballooning them out.
> > > 
> > > One point is not entirely clear to me. If we use the fallback in
> > > gnttab_dma_alloc_pages() then we must use the fallback in
> > > gnttab_dma_free_pages() as well, since we cannot use
> > > xen_free_unpopulated_dma_pages() for real RAM pages.
> > > The question is how to pass this information to gnttab_dma_free_pages().
> > > The first idea which comes to mind is to add a flag to struct
> > > gnttab_dma_alloc_args...
> >   You can check whether the page is within the mhp_range or part of
> > iomem_resource. If not, you can free it as a normal page.
> > 
> > If we do this, then the fallback is better implemented in
> > unpopulated-alloc.c because that is the one that is aware about
> > page addresses.
> 
> 
> I get your idea and agree this can work technically. Or, if we finally decide
> to use the second option (use "dma_pool" for all) in the first patch
> "[RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA allocations",
> then we will likely be able to check whether the page in question
> is within "dma_pool" using gen_pool_has_addr().
> 
> I am still wondering whether we can avoid the fallback implementation in
> unpopulated-alloc.c, because for that purpose we would need to pull more code
> into unpopulated-alloc.c (to be more precise, almost everything which
> gnttab_dma_free_pages() already has except gnttab_pages_clear_private()) and
> pass more arguments to xen_free_unpopulated_dma_pages(). Also I might be
> mistaken, but having the fallback split between grant-table.c (to allocate RAM
> pages in gnttab_dma_alloc_pages()) and unpopulated-alloc.c (to free RAM pages
> in xen_free_unpopulated_dma_pages()) would look a bit weird.
> 
> I see two possible options for the fallback implementation in grant-table.c:
> 1. (less preferable) by introducing new flag in struct gnttab_dma_alloc_args
> 2. (more preferable) by detecting an unpopulated (non-real-RAM) page using
> is_zone_device_page(), etc.
> 
> 
> For example, with the second option the resulting code would look quite simple
> (only build-tested):
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 738029d..3bda71f 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1047,6 +1047,23 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>         size_t size;
>         int i, ret;
> 
> +       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) {
> +               ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
> +                               args->pages);
> +               if (ret < 0)
> +                       goto fallback;
> +
> +               ret = gnttab_pages_set_private(args->nr_pages, args->pages);
> +               if (ret < 0)
> +                       goto fail;
> +
> +               args->vaddr = page_to_virt(args->pages[0]);
> +               args->dev_bus_addr = page_to_phys(args->pages[0]);
> +
> +               return ret;
> +       }
> +
> +fallback:
>         size = args->nr_pages << PAGE_SHIFT;
>         if (args->coherent)
>                 args->vaddr = dma_alloc_coherent(args->dev, size,
> @@ -1103,6 +1120,12 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
> 
>         gnttab_pages_clear_private(args->nr_pages, args->pages);
> 
> +       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC) &&
> +                       is_zone_device_page(args->pages[0])) {
> +               xen_free_unpopulated_dma_pages(args->dev, args->nr_pages,
> +                               args->pages);
> +               return 0;
> +       }
> +
>         for (i = 0; i < args->nr_pages; i++)
>                 args->frames[i] = page_to_xen_pfn(args->pages[i]);
> 
> 
> What do you think?
 
I have another idea. Why don't we introduce a function implemented in
drivers/xen/unpopulated-alloc.c called is_xen_unpopulated_page() and
call it from here? is_xen_unpopulated_page() can be implemented using
gen_pool_has_addr().
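
A minimal sketch of that helper, assuming the "dma_pool" gen_pool from the
first patch (the name, placement, and export are hypothetical; this is not
code from the series), could look like:

```c
/* Hypothetical helper for drivers/xen/unpopulated-alloc.c: a page counts
 * as "unpopulated" if its kernel virtual address lies inside the gen_pool
 * that backs the allocator.
 */
#include <linux/genalloc.h>
#include <linux/mm.h>

bool is_xen_unpopulated_page(struct page *page)
{
	/* dma_pool may be NULL if unpopulated_init() failed or never ran */
	return dma_pool &&
	       gen_pool_has_addr(dma_pool, (unsigned long)page_to_virt(page),
				 PAGE_SIZE);
}
EXPORT_SYMBOL_GPL(is_xen_unpopulated_page);
```

gnttab_dma_free_pages() could then branch on
is_xen_unpopulated_page(args->pages[0]) instead of is_zone_device_page(),
keeping the fallback decision out of unpopulated-alloc.c.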
--8323329-1366514938-1655254298=:1837490--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 00:52:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 00:52:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349521.575626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HGa-0001WX-9z; Wed, 15 Jun 2022 00:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349521.575626; Wed, 15 Jun 2022 00:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1HGa-0001WQ-73; Wed, 15 Jun 2022 00:52:16 +0000
Received: by outflank-mailman (input) for mailman id 349521;
 Wed, 15 Jun 2022 00:52:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1HGZ-0001WC-MS; Wed, 15 Jun 2022 00:52:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1HGZ-00053q-J2; Wed, 15 Jun 2022 00:52:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1HGZ-0004zr-7m; Wed, 15 Jun 2022 00:52:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1HGZ-0002SP-7J; Wed, 15 Jun 2022 00:52:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ePQhlkCPuzwr2fo32sUCPtXjpnWPsEofTdSFUzrnCfY=; b=D+zD46UFIuDftFwSc5fpZa5LSO
	LAaVvPqE1THaaMXefTtRU94uGLmnx/nRNdNrR9QuxDN8tIQpQ9462MEl+84gmlwOvvo3239tgLorT
	VuE0R91nDe3fPgHjVwTcoF3Q8iV38hOUEaUKMGiDGvvDB0mtVduVBT7lPW9NGbMcsnZY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171167: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
X-Osstest-Versions-That:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 00:52:15 +0000

flight 171167 linux-5.4 real [real]
flight 171169 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171167/
http://logs.test-lab.xenproject.org/osstest/logs/171169/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw 17 guest-start/debian.repeat fail REGR. vs. 170895

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 13 guest-start      fail in 171169 pass in 171167
 test-amd64-i386-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 171169-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170895
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170895
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170895
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170895
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170895
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170895
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170895
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 170895
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170895
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170895
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d
baseline version:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a

Last test of basis   170895  2022-06-09 06:12:49 Z    5 days
Testing same since   171167  2022-06-14 16:42:25 Z    0 days    1 attempts

------------------------------------------------------------
392 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 12310 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 01:40:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 01:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349536.575640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1I0j-0004wt-1i; Wed, 15 Jun 2022 01:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349536.575640; Wed, 15 Jun 2022 01:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1I0i-0004wm-Tx; Wed, 15 Jun 2022 01:39:56 +0000
Received: by outflank-mailman (input) for mailman id 349536;
 Wed, 15 Jun 2022 01:39:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dio+=WW=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o1I0h-0004wg-Bf
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 01:39:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0db6452e-ec4c-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 03:39:54 +0200 (CEST)
Received: from AM6P193CA0082.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::23)
 by AM6PR08MB4296.eurprd08.prod.outlook.com (2603:10a6:20b:b6::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Wed, 15 Jun
 2022 01:39:50 +0000
Received: from AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::73) by AM6P193CA0082.outlook.office365.com
 (2603:10a6:209:88::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.12 via Frontend
 Transport; Wed, 15 Jun 2022 01:39:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT054.mail.protection.outlook.com (10.152.16.212) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.12 via Frontend Transport; Wed, 15 Jun 2022 01:39:50 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Wed, 15 Jun 2022 01:39:42 +0000
Received: from 05eb9e9f074c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A276644A-ACDD-43A9-BE58-F64CCD0AA477.1; 
 Wed, 15 Jun 2022 01:39:35 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 05eb9e9f074c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 15 Jun 2022 01:39:35 +0000
Received: from AM6PR02CA0029.eurprd02.prod.outlook.com (2603:10a6:20b:6e::42)
 by HE1PR0802MB2489.eurprd08.prod.outlook.com (2603:10a6:3:d8::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Wed, 15 Jun
 2022 01:39:32 +0000
Received: from VE1EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:6e:cafe::7f) by AM6PR02CA0029.outlook.office365.com
 (2603:10a6:20b:6e::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14 via Frontend
 Transport; Wed, 15 Jun 2022 01:39:32 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT007.mail.protection.outlook.com (10.152.18.114) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5332.12 via Frontend Transport; Wed, 15 Jun 2022 01:39:31 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.27; Wed, 15 Jun
 2022 01:39:29 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2308.27 via Frontend
 Transport; Wed, 15 Jun 2022 01:39:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0db6452e-ec4c-11ec-9917-058037db3bb5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=QjvTfJMsay8g6lB2ozoWUEc3WoTDegZtr8pSpkqAMKTi22MVwnq4LscPxuuhj5GjZ45Z+5Hx392xU5hp6PJbS5QbNLqCw+xl1A3vn2kiZdzaytcfWgENJp49DAUP/X5BaiR81SvuZLiFcRT4Fpr22smH9Hju76GzJHoo1U0Dnihi2nDLrYBIQHlc1KEo2coy7s1n6fsGp9Olf6vMhE3lkZ8FHs1vcZ8RiWeJRT/JA5N5xDxU7qSntcu9u0q3Frooor12b6TU4W7wWs4QUoR7aFSN9TxsIuK0965QPOdSRbtR7D9Oxyw1R3/di3geCDzgJ1HRp3yZdxOj5lEGh7dXlA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eVHM+ZSTPQplj51mDH4GF5q/liVjrtXDoFS+oFNMT8s=;
 b=gJoqHcoc67E1gW8XmRBbqXfFHIJm9wkHUbLp38lM9UGht3Y9pmGhCmp28oQawBWq3QMPIpcRlm5oUkMiee5wFGpPenhVoP6sDpdAcDnBPdQzdLPZ6zgExcpAqos6R2LNckwfHYpihcIFGOldLd5uhx5VhYxQ4orNhSx2jpOErWRAh/Jp1GW/TmE8Kc8yuDrOb7d2b21DFK4RhCqOkzOSTYbNsrtqlpEPAff4GoLT49kQ41fg/x4YIHtxbMR+07LdbZuXRZA6JLvAJdhDiBx25O8ybiyTsOx6BLhgW0aeeRdgiHBWstY8f/oI3VuL8TzPLhxEBxg1lU7Ao+JuOhQc6Q==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eVHM+ZSTPQplj51mDH4GF5q/liVjrtXDoFS+oFNMT8s=;
 b=j2gPatPdIR7wuLZimYffQstL/t2iWmeoa9i7MMJ+lPaVvBlV1S2JXRuJXQWhPk19Oyt+lLo5U9lTs23QM6Jq284tkr/U+TJKYbtyMDqJeQonmujuymAVUkTsY0fEQbCWwxJxeJGhFDAx+eGDRa4eB4rAbbXTpe1rtUD8iaiZYN4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8650f332f5582df7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GySrC5+LOSc1cCzWphaPkAEyEuXj2+srzQxNFAQH/SokJnua3+Yp/GPil8fyk9310YmN1MllglnrClwkA3ZCqkpg20VqIvVO5ChI8ekvCjYaLFWOYhIKzL9v8Kx1Mj4P0Tj+Ia2jEuYJUDRuQyrOcKVeEEfuL7b8QnJE3dcYbwsfbM6/2M9uBMavYVMVz+KPc0AfS4koypMp/GOgcChlQSlUyJzHvKgc8gwPldsGJ9pz7WMbln/5lVrvJqQW4c+5lftMYZIDzqlSdDPN5VatpUZ9dkmsZaBv8I3R0iRDCxmyxL8rKB7GeMzEPBXBXYrEW0WMgcdyK+BIvpcqHqEsMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eVHM+ZSTPQplj51mDH4GF5q/liVjrtXDoFS+oFNMT8s=;
 b=Yl5IE10H/55w/fwBRjwarZNZAejktNx+Tz8sQ9qcK28FXRulOsx5qLOQG+3rpV2k5fapCQ3HHGXvR7NbGTvaGL9Khelmw4cGEmdP5wfqay9yVsFsQynD6JsY9fvjJ6tXP40U8rVFrinEffu8JJGG9kNLFeXIp5scSwkY9EdXsIHOZOEXwk7MVN7vG8RlXQcjigKirIliNwy3WsEMAu5mfzfotUTktKo5LmPS3VnQFBHnfW7HXcFOiWzK3NyfxFfZtcxHsyYSNaxmY5bn+fC5T+W0B2tF8dajCmrgkSXK7JLJTzpI2uyiTYbLkDyEaDJODy9ckuZumqeTvHFE9MTuDg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eVHM+ZSTPQplj51mDH4GF5q/liVjrtXDoFS+oFNMT8s=;
 b=j2gPatPdIR7wuLZimYffQstL/t2iWmeoa9i7MMJ+lPaVvBlV1S2JXRuJXQWhPk19Oyt+lLo5U9lTs23QM6Jq284tkr/U+TJKYbtyMDqJeQonmujuymAVUkTsY0fEQbCWwxJxeJGhFDAx+eGDRa4eB4rAbbXTpe1rtUD8iaiZYN4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: avoid vtimer flip-flop transition in context switch
Date: Wed, 15 Jun 2022 09:39:09 +0800
Message-ID: <20220615013909.283887-1-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-Correlation-Id: ff72a2d2-c899-454c-b9ad-08da4e6ff022
X-MS-TrafficTypeDiagnostic:
	HE1PR0802MB2489:EE_|AM5EUR03FT054:EE_|AM6PR08MB4296:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4296B6E47A11BB99E7366D109EAD9@AM6PR08MB4296.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2489
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5abdc51f-8c32-438e-4b76-08da4e6fe4e2
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 01:39:50.2865
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff72a2d2-c899-454c-b9ad-08da4e6ff022
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4296

virt_timer_save calculates the new expiry time for the vtimer and
risks an unsigned integer overflow in:
"v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
- boot_count".
In this expression, "cval + offset" can overflow uint64_t. Generally
speaking, this is difficult to trigger, but the problem was
encountered on a platform whose timer counter starts at a very large
initial value, such as 0xF333899122223333. On that platform
"cval + offset" overflows after the system has been running for a
while.

So in this patch, we reorder the expression to compute
"offset - boot_count" first, and then add cval to the result. This
avoids the intermediate uint64_t overflow.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/vtimer.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 5bb5970f58..86e63303c8 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -144,8 +144,9 @@ void virt_timer_save(struct vcpu *v)
     if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
          !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
     {
-        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
-                  v->domain->arch.virt_timer_base.offset - boot_count));
+        set_timer(&v->arch.virt_timer.timer,
+                  ticks_to_ns(v->domain->arch.virt_timer_base.offset -
+                              boot_count + v->arch.virt_timer.cval));
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 03:39:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 03:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349547.575651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Jrq-0001fR-Cu; Wed, 15 Jun 2022 03:38:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349547.575651; Wed, 15 Jun 2022 03:38:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Jrq-0001fK-9m; Wed, 15 Jun 2022 03:38:54 +0000
Received: by outflank-mailman (input) for mailman id 349547;
 Wed, 15 Jun 2022 03:38:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Jrp-0001fA-6c; Wed, 15 Jun 2022 03:38:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Jrp-0006Ai-46; Wed, 15 Jun 2022 03:38:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Jro-0003mo-Kc; Wed, 15 Jun 2022 03:38:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Jro-0006K3-Hh; Wed, 15 Jun 2022 03:38:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rrcINl/MH/ErrjVs7+otj3ktNU8WmpmddHfdN81+4jc=; b=tPnVfXLKvef0NHeuwAw8/aSV/3
	kDjN0vFo8rJFNU3823Zrc8c6oKPwveQrvBTmSTC+pLTkVVqdQn8ZqE3oVJ/txbRhmZIR1da7rcUFF
	EujdsXXKED36fzwuJiMBkXJmDW+wLV3Yn8Xa9rE+XK0GFZ7+8C8MsOelfL7ruauZl1Fw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171170-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171170: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=162dea4e768b835114c736cfd3fa1fc3742d39c5
X-Osstest-Versions-That:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 03:38:52 +0000

flight 171170 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171170/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  162dea4e768b835114c736cfd3fa1fc3742d39c5
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171079  2022-06-11 12:00:26 Z    3 days
Testing same since   171170  2022-06-15 00:02:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   c9a707df83..162dea4e76  162dea4e768b835114c736cfd3fa1fc3742d39c5 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349224.575667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHi-0005SU-Rh; Wed, 15 Jun 2022 04:05:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349224.575667; Wed, 15 Jun 2022 04:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHi-0005RV-MX; Wed, 15 Jun 2022 04:05:38 +0000
Received: by outflank-mailman (input) for mailman id 349224;
 Tue, 14 Jun 2022 16:15:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19CN-0001PE-8Z
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:15:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2f79d453-ebfd-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:15:21 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9F0F61692;
 Tue, 14 Jun 2022 09:15:19 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D37FC3F66F;
 Tue, 14 Jun 2022 09:15:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f79d453-ebfd-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 17:14:57 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 16/36] rcu: Fix rcu_idle_exit()
Message-ID: <Yqi0AVZmI5GyVpNa@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.935970247@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.935970247@infradead.org>

On Wed, Jun 08, 2022 at 04:27:39PM +0200, Peter Zijlstra wrote:
> Current rcu_idle_exit() is terminally broken because it uses
> local_irq_{save,restore}(), which are traced, and tracing uses RCU.
> 
> However, now that all the callers are sure to have IRQs disabled, we
> can remove these calls.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Acked-by: Paul E. McKenney <paulmck@kernel.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  kernel/rcu/tree.c |    9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
> 
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -659,7 +659,7 @@ static noinstr void rcu_eqs_enter(bool u
>   * If you add or remove a call to rcu_idle_enter(), be sure to test with
>   * CONFIG_RCU_EQS_DEBUG=y.
>   */
> -void rcu_idle_enter(void)
> +void noinstr rcu_idle_enter(void)
>  {
>  	lockdep_assert_irqs_disabled();
>  	rcu_eqs_enter(false);
> @@ -896,13 +896,10 @@ static void noinstr rcu_eqs_exit(bool us
>   * If you add or remove a call to rcu_idle_exit(), be sure to test with
>   * CONFIG_RCU_EQS_DEBUG=y.
>   */
> -void rcu_idle_exit(void)
> +void noinstr rcu_idle_exit(void)
>  {
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
> +	lockdep_assert_irqs_disabled();
>  	rcu_eqs_exit(false);
> -	local_irq_restore(flags);
>  }
>  EXPORT_SYMBOL_GPL(rcu_idle_exit);
>  
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349227.575670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHj-0005Vb-3Q; Wed, 15 Jun 2022 04:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349227.575670; Wed, 15 Jun 2022 04:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHi-0005U9-Ul; Wed, 15 Jun 2022 04:05:38 +0000
Received: by outflank-mailman (input) for mailman id 349227;
 Tue, 14 Jun 2022 16:22:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19JC-0002Ro-Lx
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:22:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2c9abf45-ebfe-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:22:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DFB341692;
 Tue, 14 Jun 2022 09:22:23 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 113C43F66F;
 Tue, 14 Jun 2022 09:22:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c9abf45-ebfe-11ec-a26a-b96bd03d9e80
Date: Tue, 14 Jun 2022 17:22:02 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 20/36] arch/idle: Change arch_cpu_idle() IRQ behaviour
Message-ID: <Yqi1qra2k0HIft9W@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.188449351@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144517.188449351@infradead.org>

On Wed, Jun 08, 2022 at 04:27:43PM +0200, Peter Zijlstra wrote:
> Current arch_cpu_idle() is called with IRQs disabled, but will return
> with IRQs enabled.
> 
> However, the very first thing the generic code does after calling
> arch_cpu_idle() is raw_local_irq_disable(). This means that
> architectures that can idle with IRQs disabled end up doing a
> pointless 'enable-disable' dance.
> 
> Therefore, push this IRQ disabling into the idle function, meaning
> that those architectures can avoid the pointless IRQ state flipping.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Nice!

  Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64]

Mark.

> ---
>  arch/alpha/kernel/process.c      |    1 -
>  arch/arc/kernel/process.c        |    3 +++
>  arch/arm/kernel/process.c        |    1 -
>  arch/arm/mach-gemini/board-dt.c  |    3 ++-
>  arch/arm64/kernel/idle.c         |    1 -
>  arch/csky/kernel/process.c       |    1 -
>  arch/csky/kernel/smp.c           |    2 +-
>  arch/hexagon/kernel/process.c    |    1 -
>  arch/ia64/kernel/process.c       |    1 +
>  arch/microblaze/kernel/process.c |    1 -
>  arch/mips/kernel/idle.c          |    8 +++-----
>  arch/nios2/kernel/process.c      |    1 -
>  arch/openrisc/kernel/process.c   |    1 +
>  arch/parisc/kernel/process.c     |    2 --
>  arch/powerpc/kernel/idle.c       |    5 ++---
>  arch/riscv/kernel/process.c      |    1 -
>  arch/s390/kernel/idle.c          |    1 -
>  arch/sh/kernel/idle.c            |    1 +
>  arch/sparc/kernel/leon_pmc.c     |    4 ++++
>  arch/sparc/kernel/process_32.c   |    1 -
>  arch/sparc/kernel/process_64.c   |    3 ++-
>  arch/um/kernel/process.c         |    1 -
>  arch/x86/coco/tdx/tdx.c          |    3 +++
>  arch/x86/kernel/process.c        |   15 ++++-----------
>  arch/xtensa/kernel/process.c     |    1 +
>  kernel/sched/idle.c              |    2 --
>  26 files changed, 28 insertions(+), 37 deletions(-)
> 
> --- a/arch/alpha/kernel/process.c
> +++ b/arch/alpha/kernel/process.c
> @@ -57,7 +57,6 @@ EXPORT_SYMBOL(pm_power_off);
>  void arch_cpu_idle(void)
>  {
>  	wtint(0);
> -	raw_local_irq_enable();
>  }
>  
>  void arch_cpu_idle_dead(void)
> --- a/arch/arc/kernel/process.c
> +++ b/arch/arc/kernel/process.c
> @@ -114,6 +114,8 @@ void arch_cpu_idle(void)
>  		"sleep %0	\n"
>  		:
>  		:"I"(arg)); /* can't be "r" has to be embedded const */
> +
> +	raw_local_irq_disable();
>  }
>  
>  #else	/* ARC700 */
> @@ -122,6 +124,7 @@ void arch_cpu_idle(void)
>  {
>  	/* sleep, but enable both set E1/E2 (levels of interrupts) before committing */
>  	__asm__ __volatile__("sleep 0x3	\n");
> +	raw_local_irq_disable();
>  }
>  
>  #endif
> --- a/arch/arm/kernel/process.c
> +++ b/arch/arm/kernel/process.c
> @@ -78,7 +78,6 @@ void arch_cpu_idle(void)
>  		arm_pm_idle();
>  	else
>  		cpu_do_idle();
> -	raw_local_irq_enable();
>  }
>  
>  void arch_cpu_idle_prepare(void)
> --- a/arch/arm/mach-gemini/board-dt.c
> +++ b/arch/arm/mach-gemini/board-dt.c
> @@ -42,8 +42,9 @@ static void gemini_idle(void)
>  	 */
>  
>  	/* FIXME: Enabling interrupts here is racy! */
> -	local_irq_enable();
> +	raw_local_irq_enable();
>  	cpu_do_idle();
> +	raw_local_irq_disable();
>  }
>  
>  static void __init gemini_init_machine(void)
> --- a/arch/arm64/kernel/idle.c
> +++ b/arch/arm64/kernel/idle.c
> @@ -42,5 +42,4 @@ void noinstr arch_cpu_idle(void)
>  	 * tricks
>  	 */
>  	cpu_do_idle();
> -	raw_local_irq_enable();
>  }
> --- a/arch/csky/kernel/process.c
> +++ b/arch/csky/kernel/process.c
> @@ -101,6 +101,5 @@ void arch_cpu_idle(void)
>  #ifdef CONFIG_CPU_PM_STOP
>  	asm volatile("stop\n");
>  #endif
> -	raw_local_irq_enable();
>  }
>  #endif
> --- a/arch/csky/kernel/smp.c
> +++ b/arch/csky/kernel/smp.c
> @@ -314,7 +314,7 @@ void arch_cpu_idle_dead(void)
>  	while (!secondary_stack)
>  		arch_cpu_idle();
>  
> -	local_irq_disable();
> +	raw_local_irq_disable();
>  
>  	asm volatile(
>  		"mov	sp, %0\n"
> --- a/arch/hexagon/kernel/process.c
> +++ b/arch/hexagon/kernel/process.c
> @@ -44,7 +44,6 @@ void arch_cpu_idle(void)
>  {
>  	__vmwait();
>  	/*  interrupts wake us up, but irqs are still disabled */
> -	raw_local_irq_enable();
>  }
>  
>  /*
> --- a/arch/ia64/kernel/process.c
> +++ b/arch/ia64/kernel/process.c
> @@ -241,6 +241,7 @@ void arch_cpu_idle(void)
>  		(*mark_idle)(1);
>  
>  	raw_safe_halt();
> +	raw_local_irq_disable();
>  
>  	if (mark_idle)
>  		(*mark_idle)(0);
> --- a/arch/microblaze/kernel/process.c
> +++ b/arch/microblaze/kernel/process.c
> @@ -138,5 +138,4 @@ int dump_fpu(struct pt_regs *regs, elf_f
>  
>  void arch_cpu_idle(void)
>  {
> -       raw_local_irq_enable();
>  }
> --- a/arch/mips/kernel/idle.c
> +++ b/arch/mips/kernel/idle.c
> @@ -33,13 +33,13 @@ static void __cpuidle r3081_wait(void)
>  {
>  	unsigned long cfg = read_c0_conf();
>  	write_c0_conf(cfg | R30XX_CONF_HALT);
> -	raw_local_irq_enable();
>  }
>  
>  void __cpuidle r4k_wait(void)
>  {
>  	raw_local_irq_enable();
>  	__r4k_wait();
> +	raw_local_irq_disable();
>  }
>  
>  /*
> @@ -57,7 +57,6 @@ void __cpuidle r4k_wait_irqoff(void)
>  		"	.set	arch=r4000	\n"
>  		"	wait			\n"
>  		"	.set	pop		\n");
> -	raw_local_irq_enable();
>  }
>  
>  /*
> @@ -77,7 +76,6 @@ static void __cpuidle rm7k_wait_irqoff(v
>  		"	wait						\n"
>  		"	mtc0	$1, $12		# stalls until W stage	\n"
>  		"	.set	pop					\n");
> -	raw_local_irq_enable();
>  }
>  
>  /*
> @@ -103,6 +101,8 @@ static void __cpuidle au1k_wait(void)
>  	"	nop				\n"
>  	"	.set	pop			\n"
>  	: : "r" (au1k_wait), "r" (c0status));
> +
> +	raw_local_irq_disable();
>  }
>  
>  static int __initdata nowait;
> @@ -245,8 +245,6 @@ void arch_cpu_idle(void)
>  {
>  	if (cpu_wait)
>  		cpu_wait();
> -	else
> -		raw_local_irq_enable();
>  }
>  
>  #ifdef CONFIG_CPU_IDLE
> --- a/arch/nios2/kernel/process.c
> +++ b/arch/nios2/kernel/process.c
> @@ -33,7 +33,6 @@ EXPORT_SYMBOL(pm_power_off);
>  
>  void arch_cpu_idle(void)
>  {
> -	raw_local_irq_enable();
>  }
>  
>  /*
> --- a/arch/openrisc/kernel/process.c
> +++ b/arch/openrisc/kernel/process.c
> @@ -102,6 +102,7 @@ void arch_cpu_idle(void)
>  	raw_local_irq_enable();
>  	if (mfspr(SPR_UPR) & SPR_UPR_PMP)
>  		mtspr(SPR_PMR, mfspr(SPR_PMR) | SPR_PMR_DME);
> +	raw_local_irq_disable();
>  }
>  
>  void (*pm_power_off)(void) = NULL;
> --- a/arch/parisc/kernel/process.c
> +++ b/arch/parisc/kernel/process.c
> @@ -187,8 +187,6 @@ void arch_cpu_idle_dead(void)
>  
>  void __cpuidle arch_cpu_idle(void)
>  {
> -	raw_local_irq_enable();
> -
>  	/* nop on real hardware, qemu will idle sleep. */
>  	asm volatile("or %%r10,%%r10,%%r10\n":::);
>  }
> --- a/arch/powerpc/kernel/idle.c
> +++ b/arch/powerpc/kernel/idle.c
> @@ -51,10 +51,9 @@ void arch_cpu_idle(void)
>  		 * Some power_save functions return with
>  		 * interrupts enabled, some don't.
>  		 */
> -		if (irqs_disabled())
> -			raw_local_irq_enable();
> +		if (!irqs_disabled())
> +			raw_local_irq_disable();
>  	} else {
> -		raw_local_irq_enable();
>  		/*
>  		 * Go into low thread priority and possibly
>  		 * low power mode.
> --- a/arch/riscv/kernel/process.c
> +++ b/arch/riscv/kernel/process.c
> @@ -39,7 +39,6 @@ extern asmlinkage void ret_from_kernel_t
>  void arch_cpu_idle(void)
>  {
>  	cpu_do_idle();
> -	raw_local_irq_enable();
>  }
>  
>  void __show_regs(struct pt_regs *regs)
> --- a/arch/s390/kernel/idle.c
> +++ b/arch/s390/kernel/idle.c
> @@ -66,7 +66,6 @@ void arch_cpu_idle(void)
>  	idle->idle_count++;
>  	account_idle_time(cputime_to_nsecs(idle_time));
>  	raw_write_seqcount_end(&idle->seqcount);
> -	raw_local_irq_enable();
>  }
>  
>  static ssize_t show_idle_count(struct device *dev,
> --- a/arch/sh/kernel/idle.c
> +++ b/arch/sh/kernel/idle.c
> @@ -25,6 +25,7 @@ void default_idle(void)
>  	raw_local_irq_enable();
>  	/* Isn't this racy ? */
>  	cpu_sleep();
> +	raw_local_irq_disable();
>  	clear_bl_bit();
>  }
>  
> --- a/arch/sparc/kernel/leon_pmc.c
> +++ b/arch/sparc/kernel/leon_pmc.c
> @@ -57,6 +57,8 @@ static void pmc_leon_idle_fixup(void)
>  		"lda	[%0] %1, %%g0\n"
>  		:
>  		: "r"(address), "i"(ASI_LEON_BYPASS));
> +
> +	raw_local_irq_disable();
>  }
>  
>  /*
> @@ -70,6 +72,8 @@ static void pmc_leon_idle(void)
>  
>  	/* For systems without power-down, this will be no-op */
>  	__asm__ __volatile__ ("wr	%g0, %asr19\n\t");
> +
> +	raw_local_irq_disable();
>  }
>  
>  /* Install LEON Power Down function */
> --- a/arch/sparc/kernel/process_32.c
> +++ b/arch/sparc/kernel/process_32.c
> @@ -71,7 +71,6 @@ void arch_cpu_idle(void)
>  {
>  	if (sparc_idle)
>  		(*sparc_idle)();
> -	raw_local_irq_enable();
>  }
>  
>  /* XXX cli/sti -> local_irq_xxx here, check this works once SMP is fixed. */
> --- a/arch/sparc/kernel/process_64.c
> +++ b/arch/sparc/kernel/process_64.c
> @@ -59,7 +59,6 @@ void arch_cpu_idle(void)
>  {
>  	if (tlb_type != hypervisor) {
>  		touch_nmi_watchdog();
> -		raw_local_irq_enable();
>  	} else {
>  		unsigned long pstate;
>  
> @@ -90,6 +89,8 @@ void arch_cpu_idle(void)
>  			"wrpr %0, %%g0, %%pstate"
>  			: "=&r" (pstate)
>  			: "i" (PSTATE_IE));
> +
> +		raw_local_irq_disable();
>  	}
>  }
>  
> --- a/arch/um/kernel/process.c
> +++ b/arch/um/kernel/process.c
> @@ -216,7 +216,6 @@ void arch_cpu_idle(void)
>  {
>  	cpu_tasks[current_thread_info()->cpu].pid = os_getpid();
>  	um_idle_sleep();
> -	raw_local_irq_enable();
>  }
>  
>  int __cant_sleep(void) {
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -178,6 +178,9 @@ void __cpuidle tdx_safe_halt(void)
>  	 */
>  	if (__halt(irq_disabled, do_sti))
>  		WARN_ONCE(1, "HLT instruction emulation failed\n");
> +
> +	/* XXX I can't make sense of what @do_sti actually does */
> +	raw_local_irq_disable();
>  }
>  
>  static bool read_msr(struct pt_regs *regs)
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -699,6 +699,7 @@ EXPORT_SYMBOL(boot_option_idle_override)
>  void __cpuidle default_idle(void)
>  {
>  	raw_safe_halt();
> +	raw_local_irq_disable();
>  }
>  #if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
>  EXPORT_SYMBOL(default_idle);
> @@ -804,13 +805,7 @@ static void amd_e400_idle(void)
>  
>  	default_idle();
>  
> -	/*
> -	 * The switch back from broadcast mode needs to be called with
> -	 * interrupts disabled.
> -	 */
> -	raw_local_irq_disable();
>  	tick_broadcast_exit();
> -	raw_local_irq_enable();
>  }
>  
>  /*
> @@ -849,12 +844,10 @@ static __cpuidle void mwait_idle(void)
>  		}
>  
>  		__monitor((void *)&current_thread_info()->flags, 0, 0);
> -		if (!need_resched())
> +		if (!need_resched()) {
>  			__sti_mwait(0, 0);
> -		else
> -			raw_local_irq_enable();
> -	} else {
> -		raw_local_irq_enable();
> +			raw_local_irq_disable();
> +		}
>  	}
>  	__current_clr_polling();
>  }
> --- a/arch/xtensa/kernel/process.c
> +++ b/arch/xtensa/kernel/process.c
> @@ -183,6 +183,7 @@ void coprocessor_flush_release_all(struc
>  void arch_cpu_idle(void)
>  {
>  	platform_idle();
> +	raw_local_irq_disable();
>  }
>  
>  /*
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -79,7 +79,6 @@ void __weak arch_cpu_idle_dead(void) { }
>  void __weak arch_cpu_idle(void)
>  {
>  	cpu_idle_force_poll = 1;
> -	raw_local_irq_enable();
>  }
>  
>  /**
> @@ -96,7 +95,6 @@ void __cpuidle default_idle_call(void)
>  
>  		cpuidle_rcu_enter();
>  		arch_cpu_idle();
> -		raw_local_irq_disable();
>  		cpuidle_rcu_exit();
>  
>  		start_critical_timings();
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349289.575695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHk-0005z7-CX; Wed, 15 Jun 2022 04:05:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349289.575695; Wed, 15 Jun 2022 04:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHk-0005tN-1w; Wed, 15 Jun 2022 04:05:40 +0000
Received: by outflank-mailman (input) for mailman id 349289;
 Tue, 14 Jun 2022 16:53:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19nZ-0000tE-VT
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:53:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8f15f558-ec02-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:53:48 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 708A916F3;
 Tue, 14 Jun 2022 09:53:47 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 913753F66F;
 Tue, 14 Jun 2022 09:53:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f15f558-ec02-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 17:53:25 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from
 cpu_pm_{enter,exit}()
Message-ID: <Yqi9BTTnlbcUsD5c@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.871305980@infradead.org>
 <YqiznJL7qB9uSQ9c@FVFF77S0Q05N>
 <Yqi6Zp+DTm22dLB9@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yqi6Zp+DTm22dLB9@hirez.programming.kicks-ass.net>

On Tue, Jun 14, 2022 at 06:42:14PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 05:13:16PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> > > All callers should still have RCU enabled.
> > 
> > IIUC with that true we should be able to drop the RCU_NONIDLE() from
> > drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
> > notifier.
> > 
> > I should be able to give that a spin on some hardware.
> > 
> > > 
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > >  kernel/cpu_pm.c |    9 ---------
> > >  1 file changed, 9 deletions(-)
> > > 
> > > --- a/kernel/cpu_pm.c
> > > +++ b/kernel/cpu_pm.c
> > > @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
> > >  {
> > >  	int ret;
> > >  
> > > -	/*
> > > -	 * This introduces a RCU read critical section, which could be
> > > -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> > > -	 * this.
> > > -	 */
> > > -	rcu_irq_enter_irqson();
> > >  	rcu_read_lock();
> > >  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
> > >  	rcu_read_unlock();
> > > -	rcu_irq_exit_irqson();
> > 
> > To make this easier to debug, is it worth adding an assertion that RCU is
> > watching here? e.g.
> > 
> > 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
> > 			 "cpu_pm_notify() used illegally from EQS");
> > 
> 
> My understanding is that rcu_read_lock() implies something along those
> lines when PROVE_RCU.

Ah, duh. Given that:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349293.575706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHl-00066o-0R; Wed, 15 Jun 2022 04:05:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349293.575706; Wed, 15 Jun 2022 04:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHk-000644-IJ; Wed, 15 Jun 2022 04:05:40 +0000
Received: by outflank-mailman (input) for mailman id 349293;
 Tue, 14 Jun 2022 17:00:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19ti-0001vp-CI
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 17:00:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7213b217-ec03-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 19:00:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 11469175A;
 Tue, 14 Jun 2022 10:00:08 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4A1F03F66F;
 Tue, 14 Jun 2022 09:59:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7213b217-ec03-11ec-a26a-b96bd03d9e80
Date: Tue, 14 Jun 2022 17:59:46 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
Message-ID: <Yqi+gg+p0sv0F7Di@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.808451191@infradead.org>
 <YqiB6YpVqq4wuDtO@FVFF77S0Q05N>
 <Yqi6Fd38ZCsDUnQG@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yqi6Fd38ZCsDUnQG@hirez.programming.kicks-ass.net>

On Tue, Jun 14, 2022 at 06:40:53PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 01:41:13PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> > > --- a/kernel/time/tick-broadcast.c
> > > +++ b/kernel/time/tick-broadcast.c
> > > @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
> > >   * to avoid a deep idle transition as we are about to get the
> > >   * broadcast IPI right away.
> > >   */
> > > -int tick_check_broadcast_expired(void)
> > > +noinstr int tick_check_broadcast_expired(void)
> > >  {
> > > +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> > > +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> > > +#else
> > >  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> > > +#endif
> > >  }
> > 
> > This is somewhat not-ideal. :/
> 
> I'll say.
> 
> > Could we unconditionally do the arch_test_bit() variant, with a comment, or
> > does that not exist in some cases?
> 
> Loads of build errors ensued, which is how I ended up with this mess ...

Yaey :(

I see the same is true for the thread flag manipulation too.

I'll take a look and see if we can layer things so that we can use the arch_*()
helpers and wrap those consistently so that we don't have to check the CPP
guard.

Ideally we'd have a better language that allows us to make some
context-sensitive decisions; then we could hide all this gunk in the lower
levels with something like:

	if (!THIS_IS_A_NOINSTR_FUNCTION()) {
		explicit_instrumentation(...);
	}

... ho hum.

Mark.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349259.575678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHj-0005az-Dr; Wed, 15 Jun 2022 04:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349259.575678; Wed, 15 Jun 2022 04:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHj-0005Z7-81; Wed, 15 Jun 2022 04:05:39 +0000
Received: by outflank-mailman (input) for mailman id 349259;
 Tue, 14 Jun 2022 16:25:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19Ls-0005Qt-GL
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:25:12 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8fd06c92-ebfe-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:25:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 983DA16A3;
 Tue, 14 Jun 2022 09:25:10 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A00683F66F;
 Tue, 14 Jun 2022 09:24:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fd06c92-ebfe-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 17:24:48 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org, maz@kernel.org
Subject: Re: [PATCH 23/36] arm64,smp: Remove trace_.*_rcuidle() usage
Message-ID: <Yqi2UGb4alCAR5s4@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.380962958@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144517.380962958@infradead.org>

On Wed, Jun 08, 2022 at 04:27:46PM +0200, Peter Zijlstra wrote:
> Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
> normal interrupts") this function is called in regular IRQ context.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[adding Marc since he authored that commit]

Makes sense to me:

  Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/smp.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -865,7 +865,7 @@ static void do_handle_IPI(int ipinr)
>  	unsigned int cpu = smp_processor_id();
>  
>  	if ((unsigned)ipinr < NR_IPI)
> -		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
> +		trace_ipi_entry(ipi_types[ipinr]);
>  
>  	switch (ipinr) {
>  	case IPI_RESCHEDULE:
> @@ -914,7 +914,7 @@ static void do_handle_IPI(int ipinr)
>  	}
>  
>  	if ((unsigned)ipinr < NR_IPI)
> -		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
> +		trace_ipi_exit(ipi_types[ipinr]);
>  }
>  
>  static irqreturn_t ipi_handler(int irq, void *data)
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349280.575688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHj-0005o9-UL; Wed, 15 Jun 2022 04:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349280.575688; Wed, 15 Jun 2022 04:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHj-0005lJ-Ma; Wed, 15 Jun 2022 04:05:39 +0000
Received: by outflank-mailman (input) for mailman id 349280;
 Tue, 14 Jun 2022 16:28:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19P0-0005jd-DX
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:28:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 03226e8a-ebff-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:28:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D221916F3;
 Tue, 14 Jun 2022 09:28:23 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EEF753F66F;
 Tue, 14 Jun 2022 09:28:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03226e8a-ebff-11ec-a26a-b96bd03d9e80
Date: Tue, 14 Jun 2022 17:28:02 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 25/36] time/tick-broadcast: Remove RCU_NONIDLE usage
Message-ID: <Yqi3EmHbuvf3ItMI@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.507286638@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144517.507286638@infradead.org>

On Wed, Jun 08, 2022 at 04:27:48PM +0200, Peter Zijlstra wrote:
> No callers left that have already disabled RCU.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  kernel/time/tick-broadcast-hrtimer.c |   29 ++++++++++++-----------------
>  1 file changed, 12 insertions(+), 17 deletions(-)
> 
> --- a/kernel/time/tick-broadcast-hrtimer.c
> +++ b/kernel/time/tick-broadcast-hrtimer.c
> @@ -56,25 +56,20 @@ static int bc_set_next(ktime_t expires,
>  	 * hrtimer callback function is currently running, then
>  	 * hrtimer_start() cannot move it and the timer stays on the CPU on
>  	 * which it is assigned at the moment.
> +	 */
> +	hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
> +	/*
> +	 * The core tick broadcast mode expects bc->bound_on to be set
> +	 * correctly to prevent a CPU which has the broadcast hrtimer
> +	 * armed from going deep idle.
>  	 *
> -	 * As this can be called from idle code, the hrtimer_start()
> -	 * invocation has to be wrapped with RCU_NONIDLE() as
> -	 * hrtimer_start() can call into tracing.
> +	 * As tick_broadcast_lock is held, nothing can change the cpu
> +	 * base which was just established in hrtimer_start() above. So
> +	 * the below access is safe even without holding the hrtimer
> +	 * base lock.
>  	 */
> -	RCU_NONIDLE( {
> -		hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
> -		/*
> -		 * The core tick broadcast mode expects bc->bound_on to be set
> -		 * correctly to prevent a CPU which has the broadcast hrtimer
> -		 * armed from going deep idle.
> -		 *
> -		 * As tick_broadcast_lock is held, nothing can change the cpu
> -		 * base which was just established in hrtimer_start() above. So
> -		 * the below access is safe even without holding the hrtimer
> -		 * base lock.
> -		 */
> -		bc->bound_on = bctimer.base->cpu_base->cpu;
> -	} );
> +	bc->bound_on = bctimer.base->cpu_base->cpu;
> +
>  	return 0;
>  }
>  
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349222.575662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHi-0005Ow-Hj; Wed, 15 Jun 2022 04:05:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349222.575662; Wed, 15 Jun 2022 04:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHi-0005Op-Ex; Wed, 15 Jun 2022 04:05:38 +0000
Received: by outflank-mailman (input) for mailman id 349222;
 Tue, 14 Jun 2022 16:13:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o19Am-0001O4-Ch
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:13:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id f3301dd3-ebfc-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:13:39 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0D6151684;
 Tue, 14 Jun 2022 09:13:38 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4946B3F66F;
 Tue, 14 Jun 2022 09:13:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3301dd3-ebfc-11ec-a26a-b96bd03d9e80
Date: Tue, 14 Jun 2022 17:13:16 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from
 cpu_pm_{enter,exit}()
Message-ID: <YqiznJL7qB9uSQ9c@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.871305980@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.871305980@infradead.org>

On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> All callers should still have RCU enabled.

IIUC with that true we should be able to drop the RCU_NONIDLE() from
drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
notifier.

I should be able to give that a spin on some hardware.
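For readers following along: the RCU_NONIDLE() in question can be modelled in userspace like this (the function and macro names are borrowed from the kernel, but the bodies here are simplified stand-ins, not the real implementation):

```c
#include <stdbool.h>

/* Userspace sketch of the kernel's RCU_NONIDLE() wrapper: it brackets a
 * statement with rcu_irq_enter_irqson()/rcu_irq_exit_irqson() so the
 * statement runs with RCU watching even when invoked from the idle path.
 * Once PM notifiers are guaranteed to run with RCU watching, this
 * bracketing in drivers/perf/arm_pmu.c becomes unnecessary. */

static bool rcu_watching;

static void rcu_irq_enter_irqson(void) { rcu_watching = true; }
static void rcu_irq_exit_irqson(void)  { rcu_watching = false; }

#define RCU_NONIDLE(stmt) do {        \
	rcu_irq_enter_irqson();       \
	stmt;                         \
	rcu_irq_exit_irqson();        \
} while (0)

static bool seen_watching;

/* Stand-in for armpmu_start(); records whether RCU was watching. */
static void armpmu_start_stub(void) { seen_watching = rcu_watching; }

static bool run_from_idle(void)
{
	rcu_watching = false;         /* idle: RCU not watching */
	RCU_NONIDLE(armpmu_start_stub());
	return seen_watching;
}
```

The point being: the wrapped statement always observes RCU as watching, which is exactly what becomes redundant if the notifier callers already guarantee it.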

> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/cpu_pm.c |    9 ---------
>  1 file changed, 9 deletions(-)
> 
> --- a/kernel/cpu_pm.c
> +++ b/kernel/cpu_pm.c
> @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
>  {
>  	int ret;
>  
> -	/*
> -	 * This introduces a RCU read critical section, which could be
> -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> -	 * this.
> -	 */
> -	rcu_irq_enter_irqson();
>  	rcu_read_lock();
>  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
>  	rcu_read_unlock();
> -	rcu_irq_exit_irqson();

To make this easier to debug, is it worth adding an assertion that RCU is
watching here? e.g.

	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "cpu_pm_notify() used illegally from EQS");

>  
>  	return notifier_to_errno(ret);
>  }
> @@ -49,11 +42,9 @@ static int cpu_pm_notify_robust(enum cpu
>  	unsigned long flags;
>  	int ret;
>  
> -	rcu_irq_enter_irqson();
>  	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
>  	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
>  	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
> -	rcu_irq_exit_irqson();


... and likewise here?

Thanks,
Mark.

>  
>  	return notifier_to_errno(ret);
>  }
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349300.575713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHl-0006JM-Hr; Wed, 15 Jun 2022 04:05:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349300.575713; Wed, 15 Jun 2022 04:05:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHl-0006Fa-3D; Wed, 15 Jun 2022 04:05:41 +0000
Received: by outflank-mailman (input) for mailman id 349300;
 Tue, 14 Jun 2022 17:33:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qq/B=WV=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1o1APt-00055J-OV
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 17:33:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 16faf3be-ec08-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 19:33:23 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CB3DE175A;
 Tue, 14 Jun 2022 10:33:22 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.41.154])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3C0913F66F;
 Tue, 14 Jun 2022 10:33:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16faf3be-ec08-11ec-bd2c-47488cf2e6aa
Date: Tue, 14 Jun 2022 18:33:00 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
Message-ID: <YqjGTFEWSJGGOjNA@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
 <YqhuwQjmZyOVSiLI@FVFF77S0Q05N>
 <Yqi+Nqz1J8wI5GcX@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yqi+Nqz1J8wI5GcX@hirez.programming.kicks-ass.net>

On Tue, Jun 14, 2022 at 06:58:30PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 12:19:29PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> > > Hi All! (omg so many)
> > 
> > Hi Peter,
> > 
> > Sorry for the delay; my plate has also been rather full recently. I'm beginning
> > to page this in now.
> 
> No worries; we all have too much to do ;-)
> 
> > > These here few patches mostly clear out the utter mess that is cpuidle vs rcuidle.
> > > 
> > > At the end of the ride there's only 2 real RCU_NONIDLE() users left
> > > 
> > >   arch/arm64/kernel/suspend.c:            RCU_NONIDLE(__cpu_suspend_exit());
> > >   drivers/perf/arm_pmu.c:                 RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
> > 
> > The latter of these is necessary because apparently PM notifiers are called
> > with RCU not watching. Is that still the case today (or at the end of this
> > series)? If so, that feels like fertile land for more issues (yaey...). If not,
> > we should be able to drop this.
> 
> That should be fixed; fingers crossed :-)

Cool; I'll try to give that a spin when I'm sat next to some relevant hardware. :)

> > >   kernel/cfi.c:   RCU_NONIDLE({
> > > 
> > > (the CFI one is likely dead in the kCFI rewrite) and there's only a handful
> > > of trace_.*_rcuidle() left:
> > > 
> > >   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> > >   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> > >   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
> > >   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
> > >   kernel/trace/trace_preemptirq.c:                trace_preempt_enable_rcuidle(a0, a1);
> > >   kernel/trace/trace_preemptirq.c:                trace_preempt_disable_rcuidle(a0, a1);
> > > 
> > > All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.
> > I think those are also unused on arm64?
> > 
> > If not, I can go attack that.
> 
> My grep spots:
> 
> arch/arm64/kernel/entry-common.c:               trace_hardirqs_on();
> arch/arm64/include/asm/daifflags.h:     trace_hardirqs_off();
> arch/arm64/include/asm/daifflags.h:             trace_hardirqs_off();

Ah; I hadn't realised those used trace_.*_rcuidle() behind the scenes.

That affects local_irq_{enable,disable,restore}() too (which is what the
daifflags.h bits are emulating), and also the generic entry code's
irqentry_exit().

So it feels to me like we should be fixing those more generally? e.g. say that
with a new STRICT_ENTRY[_RCU], we can only call trace_hardirqs_{on,off}() with
RCU watching, and alter the definition of those?

> The _on thing should be replaced with something like:
> 
> 	trace_hardirqs_on_prepare();
> 	lockdep_hardirqs_on_prepare();
> 	instrumentation_end();
> 	rcu_irq_exit();
> 	lockdep_hardirqs_on(CALLER_ADDR0);
> 
> (as I think you know, since you have some of that already). And
> something similar for the _off thing, but with _off_finish().

Sure; I knew that was necessary for the outermost parts of entry (and I think
that's all handled), I just hadn't realised that trace_hardirqs_{on,off} did
the rcuidle thing in the middle.

It'd be nice to not have to open-code the whole sequence everywhere for the
portions which run after entry and are instrumentable, so (as above) I reckon
we want to make trace_hardirqs_{on,off}() not do the rcuidle part
unnecessarily (which IIUC is an end-goal anyway)?
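The ordering constraint Peter spells out can be sketched with stubs (all bodies here are hypothetical recorders, only the call names come from the thread): both _prepare halves and instrumentation_end() must run before rcu_irq_exit(), with lockdep_hardirqs_on() last.

```c
#include <string.h>

/* Stub model of the irq-enable exit sequence quoted above; each stub
 * records its name so the required ordering can be checked. Not kernel
 * code -- just the call order under discussion. */
static const char *calls[5];
static int ncalls;

static void record(const char *s) { if (ncalls < 5) calls[ncalls++] = s; }

static void trace_hardirqs_on_prepare(void)   { record("trace_prepare"); }
static void lockdep_hardirqs_on_prepare(void) { record("lockdep_prepare"); }
static void instrumentation_end(void)         { record("instrumentation_end"); }
static void rcu_irq_exit(void)                { record("rcu_irq_exit"); }
static void lockdep_hardirqs_on(void)         { record("lockdep_on"); }

static int hardirqs_on_sequence_ok(void)
{
	ncalls = 0;
	trace_hardirqs_on_prepare();
	lockdep_hardirqs_on_prepare();
	instrumentation_end();
	rcu_irq_exit();
	lockdep_hardirqs_on();
	return ncalls == 5 &&
	       strcmp(calls[3], "rcu_irq_exit") == 0 &&
	       strcmp(calls[4], "lockdep_on") == 0;
}
```

The instrumentable tracing/lockdep work happens while RCU is still watching; only the final, non-instrumentable lockdep notification runs after rcu_irq_exit().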

> > > I've touched a _lot_ of code that I can't test and likely broken some of it :/
> > > In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
> > > the absolute 'winner'.
> > > 
> > > I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
> > > GENERIC_ENTRY.
> > 
> > Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
> > (refactoring both arm64 and the generic portion to be more amenable to each
> > other), but we can certainly move closer to that for the bits that matter here.
> 
> I know ... been there etc.. :-)
> 
> > Maybe we want a STRICT_ENTRY option to get rid of all the deprecated stuff that
> > we can select regardless of GENERIC_ENTRY to make that easier.
> 
> Possible yeah.
> 
> > > I've also got a note that says ARM64 can probably do a WFE based
> > > idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.
> > 
> > Possibly; I'm not sure how much of a win that'll be given that by default we'll
> > have a ~10KHz WFE wakeup from the timer, but we could take a peek.
> 
> Ohh.. I didn't know it woke up *that* often. I just know Will made use
> of it in things like smp_cond_load_relaxed() which would be somewhat
> similar to a very shallow idle state that looks at the TIF word.

We'll get some saving, I'm just not sure where that falls on the curve of idle
states. FWIW the wakeup *can* be disabled (and it'd be nice to when we have
WFxT instructions which take a timeout), it just happens to be on by default
for reasons.

Thanks,
Mark.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349283.575750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHq-0007e2-Nx; Wed, 15 Jun 2022 04:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349283.575750; Wed, 15 Jun 2022 04:05:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHq-0007dj-IO; Wed, 15 Jun 2022 04:05:46 +0000
Received: by outflank-mailman (input) for mailman id 349283;
 Tue, 14 Jun 2022 16:41:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaZI=WV=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o19bz-0008Et-Vf
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:41:52 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1c9c598-ec00-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:41:50 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o19b8-000KNd-ML; Tue, 14 Jun 2022 16:40:58 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C5DF9300372;
 Tue, 14 Jun 2022 18:40:53 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A4FD82868A9BF; Tue, 14 Jun 2022 18:40:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1c9c598-ec00-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=rwYFO4jRp4ZmBbbpQ/dlmC/s4443nixCio2KC7D4yLw=; b=V4CcOtoya4fx7a+TewbwfbMpHX
	S0U58dFbvhEs4+z6vlti2b1yaN2nHZdtpLUhV6b26DfB0W9QaiXCJNgWY1iLqP6JemwxKD8If1zXi
	Dc58dnvyG/zHW9b8hYkgX3+ltwbe4kfS6mygL93PVj25PQUAkTI43bgkav/UP2phwqaLH6Jmg4htB
	+7dwsqbHSMo9jJy12dA4V5J1yPFkpBzcClSfYYGQs5HHmF9AKeJnyxvwMhdlvB0wf7ej1fwQvbSdj
	+cGoUoQpbmvkdPfLd6lFQZf+8q0srIdNcxJPdxc0YJaBq/7VVmZ25B5dD+OpzZBdLGRfiZRDfHRqR
	AHE5YJ2g==;
Date: Tue, 14 Jun 2022 18:40:53 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
Message-ID: <Yqi6Fd38ZCsDUnQG@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.808451191@infradead.org>
 <YqiB6YpVqq4wuDtO@FVFF77S0Q05N>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqiB6YpVqq4wuDtO@FVFF77S0Q05N>

On Tue, Jun 14, 2022 at 01:41:13PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> > --- a/kernel/time/tick-broadcast.c
> > +++ b/kernel/time/tick-broadcast.c
> > @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
> >   * to avoid a deep idle transition as we are about to get the
> >   * broadcast IPI right away.
> >   */
> > -int tick_check_broadcast_expired(void)
> > +noinstr int tick_check_broadcast_expired(void)
> >  {
> > +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> > +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> > +#else
> >  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> > +#endif
> >  }
> 
> This is somewhat not-ideal. :/

I'll say.

> Could we unconditionally do the arch_test_bit() variant, with a comment, or
> does that not exist in some cases?

Loads of build errors ensued, which is how I ended up with this mess ...
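For context, what test_bit()/arch_test_bit() compute over a cpumask's backing array can be modelled in userspace like so (the `_model` suffix marks this as a sketch, not the kernel's implementation; in the kernel the instrumented test_bit() wrapper adds a KASAN/KCSAN access check before delegating to arch_test_bit(), which is why noinstr code must bypass the wrapper):

```c
#include <limits.h>

#define BITS_PER_LONG ((unsigned)(sizeof(unsigned long) * CHAR_BIT))

/* Userspace model of the uninstrumented bit test that noinstr code such
 * as tick_check_broadcast_expired() needs: index into the array of
 * unsigned longs, then mask out the bit within that word. */
static int arch_test_bit_model(unsigned int nr, const unsigned long *addr)
{
	return (int)((addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL);
}

static int demo_bits_ok(void)
{
	unsigned long mask[2] = { 0, 0 };

	mask[1] = 1UL << 3;	/* set bit BITS_PER_LONG + 3 */
	return arch_test_bit_model(BITS_PER_LONG + 3, mask) == 1 &&
	       arch_test_bit_model(0, mask) == 0;
}
```

Either variant computes the same result; the #ifdef in the patch only exists to pick the uninstrumented entry point when the instrumented bitops header is in use.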


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349285.575755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHr-0007ig-9k; Wed, 15 Jun 2022 04:05:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349285.575755; Wed, 15 Jun 2022 04:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHr-0007h5-2z; Wed, 15 Jun 2022 04:05:47 +0000
Received: by outflank-mailman (input) for mailman id 349285;
 Tue, 14 Jun 2022 16:43:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaZI=WV=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o19dT-0008GB-OI
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:43:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 190b62ce-ec01-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:43:21 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o19cN-007uQt-Hy; Tue, 14 Jun 2022 16:42:16 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2279B3002BE;
 Tue, 14 Jun 2022 18:42:15 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 123E028B3F62D; Tue, 14 Jun 2022 18:42:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 190b62ce-ec01-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=47+tWvYfTyzzIcAfKg2dikUAXvzTciJT7x9ArpBU0cQ=; b=WhjpnYz1DKB/ghxhQBYKZr1QWz
	84Rt1GI0YXBpWpQ+Rpq7OdaNTLjSPvHCVKLFiVIOCLsRCFy4DqvtOiDgSLHRe8OVkGLFTex3SSmV8
	GohZ4ojxrTeP3ET5b4IFii7FfH5UbRGPxexOa+//wX1ieayjZ2GRIBUTTRmWuqKFS+bMiVDFKRSHa
	IV/MEvx0ElZNGt7kUqMr8NdHhRUFOmSCcqPcU9jSeSr7SVt4T0z3jKsQ6pCYU45ZyxeCj6DorIHXj
	CZmBSb2KSbTYN4gQwzBcOOqwVgJL0yEm9Ud5o0YTPtfXZ0pRZucUs+sMUdTCsznw5yKZXiWmZw8aF
	sEV0aFJg==;
Date: Tue, 14 Jun 2022 18:42:14 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from
 cpu_pm_{enter,exit}()
Message-ID: <Yqi6Zp+DTm22dLB9@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144516.871305980@infradead.org>
 <YqiznJL7qB9uSQ9c@FVFF77S0Q05N>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqiznJL7qB9uSQ9c@FVFF77S0Q05N>

On Tue, Jun 14, 2022 at 05:13:16PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> > All callers should still have RCU enabled.
> 
> IIUC with that true we should be able to drop the RCU_NONIDLE() from
> drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
> notifier.
> 
> I should be able to give that a spin on some hardware.
> 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  kernel/cpu_pm.c |    9 ---------
> >  1 file changed, 9 deletions(-)
> > 
> > --- a/kernel/cpu_pm.c
> > +++ b/kernel/cpu_pm.c
> > @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
> >  {
> >  	int ret;
> >  
> > -	/*
> > -	 * This introduces a RCU read critical section, which could be
> > -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> > -	 * this.
> > -	 */
> > -	rcu_irq_enter_irqson();
> >  	rcu_read_lock();
> >  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
> >  	rcu_read_unlock();
> > -	rcu_irq_exit_irqson();
> 
> To make this easier to debug, is it worth adding an assertion that RCU is
> watching here? e.g.
> 
> 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
> 			 "cpu_pm_notify() used illegally from EQS");
> 

My understanding is that rcu_read_lock() already implies an assertion
along those lines when PROVE_RCU is enabled.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349287.575761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHs-0007qx-40; Wed, 15 Jun 2022 04:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349287.575761; Wed, 15 Jun 2022 04:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHr-0007pP-N7; Wed, 15 Jun 2022 04:05:47 +0000
Received: by outflank-mailman (input) for mailman id 349287;
 Tue, 14 Jun 2022 16:44:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaZI=WV=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o19ev-0008Hf-1X
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:44:53 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4e4b972e-ec01-11ec-a26a-b96bd03d9e80;
 Tue, 14 Jun 2022 18:44:51 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o19e7-007uTz-7Y; Tue, 14 Jun 2022 16:44:05 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id BBDAE3002BE;
 Tue, 14 Jun 2022 18:44:02 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A1F7228B3F630; Tue, 14 Jun 2022 18:44:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e4b972e-ec01-11ec-a26a-b96bd03d9e80
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=lGPDnVOvpW5cJYkNK5aiO+HQt8YloxSeZZcAt0L3T+0=; b=DAyjnelx4YBZDCDKjW1vABnwD6
	bfuQInSMcO0k1N9c43zulrHrt0FEspCLqBV7dzVlaMMaDm/0d8h+E1vx/Mn16eV+mdIA5cFTuH3pJ
	ZPctIYIGAc1IQVFMkXT59RectB0+FvezRUN6pXOBo7jPhyxeW7DquiSvQAiutE+ZX5GvtHgbHFHRx
	6HNiF1zBAiNoXSUuUybOLrpKXxYLhLp7SkKzRQuOg8AsSzW4GjX7X7PIc8C1Yqe4MS+PoRyI4eENc
	TnTVXl8b+mj78ByEoizyUwQl4K8tWC0/1bV8h78+HtzLeArX034mK97ekOVdMcZW15wljxz2QwCm7
	eJasXJFA==;
Date: Tue, 14 Jun 2022 18:44:02 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Nadav Amit <namit@vmware.com>
Cc: "srivatsa@csail.mit.edu" <srivatsa@csail.mit.edu>,
	"juri.lelli@redhat.com" <juri.lelli@redhat.com>,
	"rafael@kernel.org" <rafael@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	"linus.walleij@linaro.org" <linus.walleij@linaro.org>,
	"bsegall@google.com" <bsegall@google.com>,
	"guoren@kernel.org" <guoren@kernel.org>,
	"pavel@ucw.cz" <pavel@ucw.cz>,
	"agordeev@linux.ibm.com" <agordeev@linux.ibm.com>,
	"linux-clk@vger.kernel.org" <linux-clk@vger.kernel.org>,
	linux-arch <linux-arch@vger.kernel.org>,
	"vincent.guittot@linaro.org" <vincent.guittot@linaro.org>,
	"mpe@ellerman.id.au" <mpe@ellerman.id.au>,
	"chenhuacai@kernel.org" <chenhuacai@kernel.org>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"agross@kernel.org" <agross@kernel.org>,
	"geert@linux-m68k.org" <geert@linux-m68k.org>,
	"linux-imx@nxp.com" <linux-imx@nxp.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"mattst88@gmail.com" <mattst88@gmail.com>,
	"borntraeger@linux.ibm.com" <borntraeger@linux.ibm.com>,
	"mturquette@baylibre.com" <mturquette@baylibre.com>,
	"sammy@sammy.net" <sammy@sammy.net>,
	"pmladek@suse.com" <pmladek@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
	"jiangshanlai@gmail.com" <jiangshanlai@gmail.com>,
	Sascha Hauer <s.hauer@pengutronix.de>,
	"linux-um@lists.infradead.org" <linux-um@lists.infradead.org>,
	"acme@kernel.org" <acme@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	"linux-omap@vger.kernel.org" <linux-omap@vger.kernel.org>,
	"dietmar.eggemann@arm.com" <dietmar.eggemann@arm.com>,
	"rth@twiddle.net" <rth@twiddle.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"linux-perf-users@vger.kernel.org" <linux-perf-users@vger.kernel.org>,
	"senozhatsky@chromium.org" <senozhatsky@chromium.org>,
	"svens@linux.ibm.com" <svens@linux.ibm.com>,
	"jolsa@kernel.org" <jolsa@kernel.org>,
	"paulus@samba.org" <paulus@samba.org>,
	"mark.rutland@arm.com" <mark.rutland@arm.com>,
	"linux-ia64@vger.kernel.org" <linux-ia64@vger.kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Linux Virtualization <virtualization@lists.linux-foundation.org>,
	"James.Bottomley@hansenpartnership.com" <James.Bottomley@hansenpartnership.com>,
	"jcmvbkbc@gmail.com" <jcmvbkbc@gmail.com>,
	"thierry.reding@gmail.com" <thierry.reding@gmail.com>,
	"kernel@xen0n.name" <kernel@xen0n.name>,
	"quic_neeraju@quicinc.com" <quic_neeraju@quicinc.com>,
	linux-s390 <linux-s390@vger.kernel.org>,
	"vschneid@redhat.com" <vschneid@redhat.com>,
	"john.ogness@linutronix.de" <john.ogness@linutronix.de>,
	"ysato@users.sourceforge.jp" <ysato@users.sourceforge.jp>,
	"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
	"festevam@gmail.com" <festevam@gmail.com>,
	"deller@gmx.de" <deller@gmx.de>,
	"daniel.lezcano@linaro.org" <daniel.lezcano@linaro.org>,
	"jonathanh@nvidia.com" <jonathanh@nvidia.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	"frederic@kernel.org" <frederic@kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>,
	"linux-xtensa@linux-xtensa.org" <linux-xtensa@linux-xtensa.org>,
	"kernel@pengutronix.de" <kernel@pengutronix.de>,
	"gor@linux.ibm.com" <gor@linux.ibm.com>,
	"linux-arm-msm@vger.kernel.org" <linux-arm-msm@vger.kernel.org>,
	"linux-alpha@vger.kernel.org" <linux-alpha@vger.kernel.org>,
	"linux-m68k@lists.linux-m68k.org" <linux-m68k@lists.linux-m68k.org>,
	"shorne@gmail.com" <shorne@gmail.com>,
	"linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>,
	"chris@zankel.net" <chris@zankel.net>,
	"sboyd@kernel.org" <sboyd@kernel.org>,
	"dinguyen@kernel.org" <dinguyen@kernel.org>,
	"bristot@redhat.com" <bristot@redhat.com>,
	"alexander.shishkin@linux.intel.com" <alexander.shishkin@linux.intel.com>,
	"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
	"linux@rasmusvillemoes.dk" <linux@rasmusvillemoes.dk>,
	"joel@joelfernandes.org" <joel@joelfernandes.org>,
	Will Deacon <will@kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"khilman@kernel.org" <khilman@kernel.org>,
	"linux-csky@vger.kernel.org" <linux-csky@vger.kernel.org>,
	Pv-drivers <Pv-drivers@vmware.com>,
	"linux-snps-arc@lists.infradead.org" <linux-snps-arc@lists.infradead.org>,
	Mel Gorman <mgorman@suse.de>,
	"jacob.jun.pan@linux.intel.com" <jacob.jun.pan@linux.intel.com>,
	Arnd Bergmann <arnd@arndb.de>,
	"ulli.kroll@googlemail.com" <ulli.kroll@googlemail.com>,
	"vgupta@kernel.org" <vgupta@kernel.org>,
	"josh@joshtriplett.org" <josh@joshtriplett.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	"rcu@vger.kernel.org" <rcu@vger.kernel.org>,
	Borislav Petkov <bp@alien8.de>,
	"bcain@quicinc.com" <bcain@quicinc.com>,
	"tsbogend@alpha.franken.de" <tsbogend@alpha.franken.de>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>,
	"sudeep.holla@arm.com" <sudeep.holla@arm.com>,
	"shawnguo@kernel.org" <shawnguo@kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"dalias@libc.org" <dalias@libc.org>,
	"tony@atomide.com" <tony@atomide.com>,
	"bjorn.andersson@linaro.org" <bjorn.andersson@linaro.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"sparclinux@vger.kernel.org" <sparclinux@vger.kernel.org>,
	"linux-hexagon@vger.kernel.org" <linux-hexagon@vger.kernel.org>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	"jonas@southpole.se" <jonas@southpole.se>,
	"yury.norov@gmail.com" <yury.norov@gmail.com>,
	"richard@nod.at" <richard@nod.at>, X86 ML <x86@kernel.org>,
	"linux@armlinux.org.uk" <linux@armlinux.org.uk>,
	Ingo Molnar <mingo@redhat.com>,
	"aou@eecs.berkeley.edu" <aou@eecs.berkeley.edu>,
	"paulmck@kernel.org" <paulmck@kernel.org>,
	"hca@linux.ibm.com" <hca@linux.ibm.com>,
	"stefan.kristiansson@saunalahti.fi" <stefan.kristiansson@saunalahti.fi>,
	"openrisc@lists.librecores.org" <openrisc@lists.librecores.org>,
	"paul.walmsley@sifive.com" <paul.walmsley@sifive.com>,
	"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>,
	"namhyung@kernel.org" <namhyung@kernel.org>,
	"andriy.shevchenko@linux.intel.com" <andriy.shevchenko@linux.intel.com>,
	"jpoimboe@kernel.org" <jpoimboe@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	"monstr@monstr.eu" <monstr@monstr.eu>,
	"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
	"palmer@dabbelt.com" <palmer@dabbelt.com>,
	"anup@brainfault.org" <anup@brainfault.org>,
	"ink@jurassic.park.msu.ru" <ink@jurassic.park.msu.ru>,
	"johannes@sipsolutions.net" <johannes@sipsolutions.net>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Subject: Re: [Pv-drivers] [PATCH 29/36] cpuidle,xenpv: Make more
 PARAVIRT_XXL noinstr clean
Message-ID: <Yqi60lnN6MQpysBw@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144517.759631860@infradead.org>
 <510b9b68-7d53-7d4d-5a05-37fbd199eb4b@csail.mit.edu>
 <BAE566A0-AEA3-493E-8AC5-912C795BF1DE@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <BAE566A0-AEA3-493E-8AC5-912C795BF1DE@vmware.com>

On Mon, Jun 13, 2022 at 07:23:13PM +0000, Nadav Amit wrote:
> On Jun 13, 2022, at 11:48 AM, Srivatsa S. Bhat <srivatsa@csail.mit.edu> wrote:
> 
> > ⚠ External Email
> > 
> > On 6/8/22 4:27 PM, Peter Zijlstra wrote:
> >> vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
> >> vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
> >> vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section
> >> 
> >> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > 
> > Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
> > 
> >> 
> >> -static inline void wbinvd(void)
> >> +extern noinstr void pv_native_wbinvd(void);
> >> +
> >> +static __always_inline void wbinvd(void)
> >> {
> >>      PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
> >> }
> 
> I guess it is yet another instance of GCC wrongly accounting for the
> assembly blocks’ weight. It is not a solution for older GCCs, but
> presumably ____PVOP_ALT_CALL() and friends should have used asm_inline
> or some new “asm_volatile_inline” variant.

Partially; some of the *SAN options also generate a metric ton of
nonsense when enabled, which skews the compilers towards not inlining
things.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349291.575767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHs-0007zh-TQ; Wed, 15 Jun 2022 04:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349291.575767; Wed, 15 Jun 2022 04:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHs-0007w5-DH; Wed, 15 Jun 2022 04:05:48 +0000
Received: by outflank-mailman (input) for mailman id 349291;
 Tue, 14 Jun 2022 16:59:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaZI=WV=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o19t3-00017F-Cj
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 16:59:29 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4c2ad397-ec03-11ec-bd2c-47488cf2e6aa;
 Tue, 14 Jun 2022 18:59:05 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o19s9-007ufs-GM; Tue, 14 Jun 2022 16:58:34 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 033FF300483;
 Tue, 14 Jun 2022 18:58:31 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id DD55D28B48D58; Tue, 14 Jun 2022 18:58:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c2ad397-ec03-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=sat3Ay5cLjo4d6tU+VjO2Gj4xa+yot4RzgDwqSc2j00=; b=oTTvzdrlzuM1YciKvYSz8vYCZ+
	uwlSmBHlK5sxDBJV0dcMNalrGhrnqgMOKyXUZ+NoLLlhXJyR0fgCg0Vd7w7YR39O4rmwupvVnnt7z
	Pv2IpKHqOd9e1zUBiLabpdWMpnntVJGaogImROtBH0G7ylxF+ZmLkVrbSd96L8Hy9XgyJ+fsvwu03
	MqdSZ0j3kwhR1jLAqEDq0pTMC9nvW7cYm2jiZab7OmZmrpb4+NKhzLJEVNFsIG3fWTmbi5xOWvkSk
	63UMys0mwfZRWEzmZMP+o/065ihcNnT8/M/jWgxXAG4XceJ1zUAKP/sWTfpGygRyzdDuXmNAR06ut
	6gQsA8sw==;
Date: Tue, 14 Jun 2022 18:58:30 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@hansenpartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
	srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
Message-ID: <Yqi+Nqz1J8wI5GcX@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <YqhuwQjmZyOVSiLI@FVFF77S0Q05N>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqhuwQjmZyOVSiLI@FVFF77S0Q05N>

On Tue, Jun 14, 2022 at 12:19:29PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> > Hi All! (omg so many)
> 
> Hi Peter,
> 
> Sorry for the delay; my plate has also been rather full recently. I'm beginning
> to page this in now.

No worries; we all have too much to do ;-)

> > These here few patches mostly clear out the utter mess that is cpuidle vs rcuidle.
> > 
> > At the end of the ride there's only 2 real RCU_NONIDLE() users left
> > 
> >   arch/arm64/kernel/suspend.c:            RCU_NONIDLE(__cpu_suspend_exit());
> >   drivers/perf/arm_pmu.c:                 RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
> 
> The latter of these is necessary because apparently PM notifiers are called
> with RCU not watching. Is that still the case today (or at the end of this
> series)? If so, that feels like fertile land for more issues (yaey...). If not,
> we should be able to drop this.

That should be fixed; fingers crossed :-)

> >   kernel/cfi.c:   RCU_NONIDLE({
> > 
> > (the CFI one is likely dead in the kCFI rewrite) and there's only a handful
> > of trace_.*_rcuidle() left:
> > 
> >   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> >   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> >   kernel/trace/trace_preemptirq.c:                        trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
> >   kernel/trace/trace_preemptirq.c:                        trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
> >   kernel/trace/trace_preemptirq.c:                trace_preempt_enable_rcuidle(a0, a1);
> >   kernel/trace/trace_preemptirq.c:                trace_preempt_disable_rcuidle(a0, a1);
> > 
> > All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.
> 
> I think those are also unused on arm64 too?
> 
> If not, I can go attack that.

My grep spots:

arch/arm64/kernel/entry-common.c:               trace_hardirqs_on();
arch/arm64/include/asm/daifflags.h:     trace_hardirqs_off();
arch/arm64/include/asm/daifflags.h:             trace_hardirqs_off();

The _on thing should be replaced with something like:

	trace_hardirqs_on_prepare();
	lockdep_hardirqs_on_prepare();
	instrumentation_end();
	rcu_irq_exit();
	lockdep_hardirqs_on(CALLER_ADDR0);

(as I think you know, since you have some of that already). And
something similar for the _off thing, but with _off_finish().

> > I've touched a _lot_ of code that I can't test and likely broken some of it :/
> > In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
> > the absolute 'winner'.
> > 
> > I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
> > GENERIC_ENTRY.
> 
> Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
> (refactoring both arm64 and the generic portion to be more amenable to each
> other), but we can certainly move closer to that for the bits that matter here.

I know ... been there etc.. :-)

> Maybe we want a STRICT_ENTRY option, selectable regardless of
> GENERIC_ENTRY, to get rid of all the deprecated stuff and make that easier.

Possible yeah.

> > I've also got a note that says ARM64 can probably do a WFE based
> > idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.
> 
> Possibly; I'm not sure how much of a win that'll be given that by default we'll
> have a ~10KHz WFE wakeup from the timer, but we could take a peek.

Ohh.. I didn't know it woke up *that* often. I just know Will made use
of it in things like smp_cond_load_relaxed(), which would be somewhat
similar to a very shallow idle state that looks at the TIF word.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349459.575776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHu-0008Gn-7b; Wed, 15 Jun 2022 04:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349459.575776; Wed, 15 Jun 2022 04:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHt-0008EK-HN; Wed, 15 Jun 2022 04:05:49 +0000
Received: by outflank-mailman (input) for mailman id 349459;
 Tue, 14 Jun 2022 22:12:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TaZI=WV=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1o1EmB-0006Bp-M1
 for xen-devel@lists.xenproject.org; Tue, 14 Jun 2022 22:12:44 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b83a49c-ec2f-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 00:12:42 +0200 (CEST)
Received: from dhcp-077-249-017-003.chello.nl ([77.249.17.3]
 helo=worktop.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1o1Elc-000Y9d-O5; Tue, 14 Jun 2022 22:12:08 +0000
Received: by worktop.programming.kicks-ass.net (Postfix, from userid 1000)
 id ED8F2981518; Wed, 15 Jun 2022 00:12:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b83a49c-ec2f-11ec-9917-058037db3bb5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ieYeEcrodnP9d8uMI6KazQ9YDccjAyjy/+PGF5Ov/Sc=; b=OliYmJSibamFQ0oRl2XkG2+oLG
	eUVxb3PF2WQS3d9O+63gY5P/Th5g3W0k/p07zjSFtfSstgpe5sY3UKPuYKK8U3d61xav16z47xA5h
	tkNVX1vBm9t+GM0XpJtkUArdAr+6C0D6QPwa3LyJL5W3wePUH4pa4u9UPo+aiZ0VxHDbCXoe4qfJ4
	E8Y2M6s9A8SopwISJUuliIWAcL2CXospZ4jCBSi0ba1Zs/HL4wyLHhGRHBuILPFIW3paG1eAzYhHw
	AXd8m8CPomKQfchnKb7uAGftq4wK1SsdTGzKQsjqacjeCk0RsPVRrEJUUy3okfnQk1gM5l81agP/h
	arzWLEHQ==;
Date: Wed, 15 Jun 2022 00:12:06 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Tony Lindgren <tony@atomide.com>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 34.5/36] cpuidle,omap4: Push RCU-idle into
 omap4_enter_lowpower()
Message-ID: <YqkHto+zgAPs4kQI@worktop.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.073801916@infradead.org>
 <Yqcv6crSNKuSWoTu@atomide.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Yqcv6crSNKuSWoTu@atomide.com>

On Mon, Jun 13, 2022 at 03:39:05PM +0300, Tony Lindgren wrote:
> OMAP4 uses full SoC suspend modes as idle states, as such it needs the
> whole power-domain and clock-domain code from the idle path.
> 
> All that code is not suitable to run with RCU disabled, as such push
> RCU-idle deeper still.
> 
> Signed-off-by: Tony Lindgren <tony@atomide.com>
> ---
> 
> Peter here's one more for your series, looks like this is needed to avoid
> warnings similar to what you did for omap3.

Thanks Tony!

I've had a brief look at omap2_pm_idle(); do I understand it right
that something like the below patch would reduce it to a simple 'WFI'?

What do I do with the rest of that code? I don't think this thing has
a cpuidle driver to take over, which effectively turns the rest into
dead code.

--- a/arch/arm/mach-omap2/pm24xx.c
+++ b/arch/arm/mach-omap2/pm24xx.c
@@ -126,10 +126,20 @@ static int omap2_allow_mpu_retention(voi
 	return 1;
 }
 
-static void omap2_enter_mpu_retention(void)
+static void omap2_do_wfi(void)
 {
 	const int zero = 0;
 
+	/* WFI */
+	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
+}
+
+#if 0
+/*
+ * possible cpuidle implementation between WFI and full_retention above
+ */
+static void omap2_enter_mpu_retention(void)
+{
 	/* The peripherals seem not to be able to wake up the MPU when
 	 * it is in retention mode. */
 	if (omap2_allow_mpu_retention()) {
@@ -146,8 +157,7 @@ static void omap2_enter_mpu_retention(vo
 		pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
 	}
 
-	/* WFI */
-	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
+	omap2_do_wfi();
 
 	pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
 }
@@ -161,6 +171,7 @@ static int omap2_can_sleep(void)
 
 	return 1;
 }
+#endif
 
 static void omap2_pm_idle(void)
 {
@@ -169,6 +180,7 @@ static void omap2_pm_idle(void)
 	if (omap_irq_pending())
 		return;
 
+#if 0
 	error = cpu_cluster_pm_enter();
 	if (error || !omap2_can_sleep()) {
 		omap2_enter_mpu_retention();
@@ -179,6 +191,9 @@ static void omap2_pm_idle(void)
 
 out_cpu_cluster_pm:
 	cpu_cluster_pm_exit();
+#else
+	omap2_do_wfi();
+#endif
 }
 
 static void __init prcm_setup_regs(void)

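For illustration, here is a self-contained sketch of what omap2_pm_idle() reduces to under the patch above: check for pending interrupts, then a plain WFI. The state variables and helper bodies here are stand-ins so the sketch compiles off-target; the real omap2_do_wfi() issues the cp15 instruction `mcr p15, 0, %0, c7, c0, 4`.

```c
#include <assert.h>

/* Stand-ins for the real OMAP2 helpers so the sketch is self-contained. */
static int irq_pending;      /* stub state for omap_irq_pending() */
static int wfi_count;        /* counts how many times we "idled" */

static int omap_irq_pending(void) { return irq_pending; }
static void omap2_do_wfi(void)    { wfi_count++; }

/* The reduced idle path: bail out on a pending interrupt, else WFI. */
static void omap2_pm_idle(void)
{
	if (omap_irq_pending())
		return;
	omap2_do_wfi();
}
```

With nothing pending the path executes one WFI; with an interrupt pending it returns without idling at all.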

From xen-devel-bounces@lists.xenproject.org Wed Jun 15 04:05:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 04:05:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349503.575784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHv-0008Sl-Lz; Wed, 15 Jun 2022 04:05:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349503.575784; Wed, 15 Jun 2022 04:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1KHu-0008Oe-KV; Wed, 15 Jun 2022 04:05:50 +0000
Received: by outflank-mailman (input) for mailman id 349503;
 Wed, 15 Jun 2022 00:44:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jzGX=WW=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org>)
 id 1o1H8q-0007mj-Dh
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 00:44:16 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 478cd092-ec44-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 02:44:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 02329B81A43;
 Wed, 15 Jun 2022 00:44:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F6D3C3411F;
 Wed, 15 Jun 2022 00:44:12 +0000 (UTC)
Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000)
 id 327DD5C0BCC; Tue, 14 Jun 2022 17:44:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 478cd092-ec44-11ec-9917-058037db3bb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655253852;
	bh=rcjBHcwDa3Vvz9ae34yYid9fcwgrK1WVNSLCfZ/KP5E=;
	h=Date:From:To:Cc:Subject:Reply-To:References:In-Reply-To:From;
	b=fdhcRapfOZeMu89rXN1F9+PFgQBw1jq2TFCvdWilZKBPl7d60xoaf+Z1BoVI71oyo
	 RqJwT7IyOmWKrdq1gGthk/wr5qRLVQoB8lFuH3bhpG8v28ehqFht7ixYYEHPsMm9AB
	 B8Nl92yiOr67Th9tcRyEPi0fRLrSpOhoRzoC/T6UuHa5X12CLk5h++1bfLpyoCHqB+
	 fvK77b4d+EdbOzzr1aIJ1u6gTHC44L8QS8SaSAp9SKbflwgVl3VgsZCvB9XatY8lhL
	 9YwdPZU6iL7l0pnNRdqWmeOhCsfxrQ+i6D2NKXEGZhl5QpSSUjx5BoIk0tcHaLBiwv
	 bedv5BuhTNO6Q==
Date: Tue, 14 Jun 2022 17:44:12 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com,
	khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org,
	guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org,
	kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net,
	monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org,
	jonas@southpole.se, stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
	davem@davemloft.net, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 16/36] rcu: Fix rcu_idle_exit()
Message-ID: <20220615004412.GA5766@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20220608142723.103523089@infradead.org>
 <20220608144516.935970247@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220608144516.935970247@infradead.org>

On Wed, Jun 08, 2022 at 04:27:39PM +0200, Peter Zijlstra wrote:
> The current rcu_idle_exit() is terminally broken because it uses
> local_irq_{save,restore}(), which are traced, and tracing uses RCU.
> 
> However, now that all the callers are sure to have IRQs disabled, we
> can remove these calls.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Acked-by: Paul E. McKenney <paulmck@kernel.org>

We have some fun conflicts between this series and Frederic's context-tracking
series.  But it looks like these can be resolved by:

1.	A patch on top of Frederic's series that provides the old rcu_*()
	names for the functions now prefixed with ct_*() such as
	ct_idle_exit().

2.	Another patch on top of Frederic's series that takes the
	changes remaining from this patch, shown below.  Frederic's
	series uses raw_local_irq_save() and raw_local_irq_restore(),
	which can then be removed.
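Step 1 could be as small as a set of forwarding wrappers; a hypothetical sketch follows, where the ct_*() bodies are stand-ins (the real ones live in kernel/context_tracking.c) and the depth counter merely models the context-tracking state:

```c
#include <assert.h>

/* Stand-in context-tracking state and entry points, modeling the
 * ct_*()-prefixed API from Frederic's series. */
static int idle_depth;

static void ct_idle_enter(void) { idle_depth++; }
static void ct_idle_exit(void)  { idle_depth--; }

/* Hypothetical compat wrappers keeping the old rcu_*() names alive so
 * this series can land on top without renaming every caller. */
static inline void rcu_idle_enter(void) { ct_idle_enter(); }
static inline void rcu_idle_exit(void)  { ct_idle_exit(); }
```

Callers keep using the rcu_*() names while the implementation moves wholesale to the ct_*() API underneath.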

Or is there a better way to do this?

							Thanx, Paul

------------------------------------------------------------------------

commit f64cee8c159e9863a74594efe3d33fb513a6a7b5
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Jun 14 17:24:43 2022 -0700

    context_tracking: Interrupts always disabled for ct_idle_exit()
    
    Now that the idle-loop cleanups have ensured that rcu_idle_exit() is
    always invoked with interrupts disabled, remove the interrupt disabling
    in favor of a debug check.
    
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: Frederic Weisbecker <frederic@kernel.org>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 1da44803fd319..99310cf5b0254 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -332,11 +332,8 @@ EXPORT_SYMBOL_GPL(ct_idle_enter);
  */
 void noinstr ct_idle_exit(void)
 {
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled());
 	ct_kernel_enter(false, RCU_DYNTICKS_IDX - CONTEXT_IDLE);
-	raw_local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(ct_idle_exit);
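The shape of the change above, modeled off-target: the unconditional raw_local_irq_save()/restore() pair becomes a debug-only check that interrupts are already disabled. IS_ENABLED() and WARN_ON_ONCE() are replaced with plain C stand-ins here, so this is only an illustrative sketch:

```c
#include <assert.h>
#include <stdio.h>

#define CONFIG_RCU_EQS_DEBUG 1		/* pretend the debug option is on */

static int irqs_disabled_flag = 1;	/* stand-in for raw_irqs_disabled() */
static int warned;

static int raw_irqs_disabled(void) { return irqs_disabled_flag; }

/* One-shot warning, modeling WARN_ON_ONCE(): fires at most once. */
static void warn_on_once(int cond)
{
	if (cond && !warned) {
		warned = 1;
		fprintf(stderr, "WARN: ct_idle_exit() with IRQs enabled\n");
	}
}

/* The post-patch shape: assert the caller's IRQ state, no save/restore. */
static void ct_idle_exit_sketch(void)
{
	warn_on_once(CONFIG_RCU_EQS_DEBUG && !raw_irqs_disabled());
	/* ct_kernel_enter(false, RCU_DYNTICKS_IDX - CONTEXT_IDLE) goes here */
}
```

A well-behaved caller (IRQs off) passes silently; a buggy caller trips the warning exactly once.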
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 05:38:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 05:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349678.575815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1LjK-00070w-00; Wed, 15 Jun 2022 05:38:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349678.575815; Wed, 15 Jun 2022 05:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1LjJ-00070i-TO; Wed, 15 Jun 2022 05:38:13 +0000
Received: by outflank-mailman (input) for mailman id 349678;
 Wed, 15 Jun 2022 05:35:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7beJ=WW=atomide.com=tony@srs-se1.protection.inumbo.net>)
 id 1o1Lgu-0006vW-ST
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 05:35:44 +0000
Received: from muru.com (muru.com [72.249.23.125])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id fe61b5b0-ec6c-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 07:35:42 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by muru.com (Postfix) with ESMTPS id 6EE2B80AE;
 Wed, 15 Jun 2022 05:30:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe61b5b0-ec6c-11ec-bd2c-47488cf2e6aa
Date: Wed, 15 Jun 2022 08:35:38 +0300
From: Tony Lindgren <tony@atomide.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
	vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
	linus.walleij@linaro.org, shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>, kernel@pengutronix.de,
	festevam@gmail.com, linux-imx@nxp.com, khilman@kernel.org,
	catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
	bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
	geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
	tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
	James.Bottomley@hansenpartnership.com, deller@gmx.de,
	mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com,
	ysato@users.sourceforge.jp, dalias@libc.org, davem@davemloft.net,
	richard@nod.at, anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, acme@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu,
	amakhalov@vmware.com, pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
	rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
	gregkh@linuxfoundation.org, mturquette@baylibre.com,
	sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org,
	sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org,
	anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com, Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
	senozhatsky@chromium.org, john.ogness@linutronix.de,
	paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com,
	josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, joel@joelfernandes.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org,
	rcu@vger.kernel.org, Peter Vasil <petervasil@gmail.com>,
	Aaro Koskinen <aaro.koskinen@iki.fi>
Subject: Re: [PATCH 34.5/36] cpuidle,omap4: Push RCU-idle into
 omap4_enter_lowpower()
Message-ID: <YqlvqhdlFsNvUBeG@atomide.com>
References: <20220608142723.103523089@infradead.org>
 <20220608144518.073801916@infradead.org>
 <Yqcv6crSNKuSWoTu@atomide.com>
 <YqkHto+zgAPs4kQI@worktop.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YqkHto+zgAPs4kQI@worktop.programming.kicks-ass.net>

Hi,

Adding Aaro Koskinen and Peter Vasil for pm24xx, since this concerns
n800 and n810 related idle.

* Peter Zijlstra <peterz@infradead.org> [220614 22:07]:
> On Mon, Jun 13, 2022 at 03:39:05PM +0300, Tony Lindgren wrote:
> > OMAP4 uses full SoC suspend modes as idle states, so it needs the
> > whole power-domain and clock-domain code in the idle path.
> > 
> > None of that code is suitable to run with RCU disabled, so push
> > RCU-idle deeper still.
> > 
> > Signed-off-by: Tony Lindgren <tony@atomide.com>
> > ---
> > 
> > Peter, here's one more for your series; it looks like it is needed to
> > avoid warnings similar to the ones you fixed for omap3.
> 
> Thanks Tony!
> 
> I've had a brief look at omap2_pm_idle(); do I understand correctly
> that something like the patch below would reduce it to a simple 'WFI'?

Yes that should do for omap2_do_wfi().

> What should I do with the rest of that code? I don't think this
> platform has a cpuidle driver to take over, which effectively turns
> the rest into dead code.

As we are establishing a policy where deeper idle states must be
handled by cpuidle, and for the most part that has been the case for at
least 10 years, I'd just drop the unused functions, with an explanation
in the patch of why we're doing it. Alternatively, the functions could
be tagged with __maybe_unused if folks prefer that.
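The __maybe_unused alternative would keep the code compiled but silence the "defined but not used" warning; a minimal sketch, with the attribute spelled out as the kernel defines it and omap2_can_sleep() standing in for the functions that would lose their last caller:

```c
#include <assert.h>

/* As defined in the kernel's compiler attribute headers. */
#define __maybe_unused __attribute__((__unused__))

/* Kept for a possible future cpuidle conversion; with no caller left,
 * -Wunused-function would trip on this without the attribute. */
static int __maybe_unused omap2_can_sleep(void)
{
	return 1;
}
```

The function stays in the tree and keeps building, ready to be wired back up when a cpuidle driver is added.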

In the pm24xx case we are not really causing a regression for users as
there are still pending patches to make n800 and n810 truly usable with
the mainline kernel. At least the PMIC and LCD related patches need some
work [0]. The deeper idle states can be added back later using cpuidle
as needed so we have a clear path.

Aaro & Peter V, do you have any better suggestions here as this will
mostly affect you guys currently?

Regards,

Tony

[0] https://lore.kernel.org/linux-omap/20211224214512.1583430-1-peter.vasil@gmail.com/


> --- a/arch/arm/mach-omap2/pm24xx.c
> +++ b/arch/arm/mach-omap2/pm24xx.c
> @@ -126,10 +126,20 @@ static int omap2_allow_mpu_retention(voi
>  	return 1;
>  }
>  
> -static void omap2_enter_mpu_retention(void)
> +static void omap2_do_wfi(void)
>  {
>  	const int zero = 0;
>  
> +	/* WFI */
> +	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
> +}
> +
> +#if 0
> +/*
> + * possible cpuidle implementation between WFI and full_retention above
> + */
> +static void omap2_enter_mpu_retention(void)
> +{
>  	/* The peripherals seem not to be able to wake up the MPU when
>  	 * it is in retention mode. */
>  	if (omap2_allow_mpu_retention()) {
> @@ -146,8 +157,7 @@ static void omap2_enter_mpu_retention(vo
>  		pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
>  	}
>  
> -	/* WFI */
> -	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
> +	omap2_do_wfi();
>  
>  	pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
>  }
> @@ -161,6 +171,7 @@ static int omap2_can_sleep(void)
>  
>  	return 1;
>  }
> +#endif
>  
>  static void omap2_pm_idle(void)
>  {
> @@ -169,6 +180,7 @@ static void omap2_pm_idle(void)
>  	if (omap_irq_pending())
>  		return;
>  
> +#if 0
>  	error = cpu_cluster_pm_enter();
>  	if (error || !omap2_can_sleep()) {
>  		omap2_enter_mpu_retention();
> @@ -179,6 +191,9 @@ static void omap2_pm_idle(void)
>  
>  out_cpu_cluster_pm:
>  	cpu_cluster_pm_exit();
> +#else
> +	omap2_do_wfi();
> +#endif
>  }
>  
>  static void __init prcm_setup_regs(void)


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 06:07:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 06:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349689.575830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1MBX-0002TD-7n; Wed, 15 Jun 2022 06:07:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349689.575830; Wed, 15 Jun 2022 06:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1MBX-0002T6-53; Wed, 15 Jun 2022 06:07:23 +0000
Received: by outflank-mailman (input) for mailman id 349689;
 Wed, 15 Jun 2022 06:05:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Y2Ip=WW=kernel.org=maz@srs-se1.protection.inumbo.net>)
 id 1o1M9X-0002Q2-EP
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 06:05:19 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20eca2d6-ec71-11ec-9917-058037db3bb5;
 Wed, 15 Jun 2022 08:05:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7B693617B3;
 Wed, 15 Jun 2022 06:05:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8B477C34115;
 Wed, 15 Jun 2022 06:05:15 +0000 (UTC)
Received: from sofa.misterjones.org ([185.219.108.64]
 helo=wait-a-minute.misterjones.org)
 by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95)
 (envelope-from <maz@kernel.org>) id 1o1M9R-000hh4-Lq;
 Wed, 15 Jun 2022 07:05:13 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20eca2d6-ec71-11ec-9917-058037db3bb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655273115;
	bh=yf3pNom/syiLiOha7Upiq9/nR1Hjl9dv2JULctSWZP0=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=SjnyzJRJ5maovA4d7DzY2/Ij5TNs/akt5ph8MDjdNnh42oSEnpEqun3aky3dQcf6z
	 rGme0eLVsl/XoKZXtCCkL0+cKE/nzsKpLCn2sT4A50UeO3p7crMUO1XfASUME8HSj0
	 pD6alvihCcM/657amEHKEos8BTfaNv0g8jeH8Q6QrkIG8VgVi/YsytFwbegMDMw6Gn
	 Pxj0w5BlxtbfpWb4+1rV886T9D/GynKsmxsWQDGtqkBJk0CUIxtfKvkZpW8JkVRt5W
	 zI01Adx6IQH3zR34l9+HTfR6cCjay+Ytdyco9DFFfWGGA7opCch5d/Lzswk+yKRRZO
	 /G8E+1jj9J/iQ==
Date: Wed, 15 Jun 2022 07:05:11 +0100
Message-ID: <87mteepilk.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	rth@twiddle.net,
	ink@jurassic.park.msu.ru,
	mattst88@gmail.com,
	vgupta@kernel.org,
	linux@armlinux.org.uk,
	ulli.kroll@googlemail.com,
	linus.walleij@linaro.org,
	shawnguo@kernel.org,
	Sascha Hauer <s.hauer@pengutronix.de>,
	kernel@pengutronix.de,
	festevam@gmail.com,
	linux-imx@nxp.com,
	tony@atomide.com,
	khilman@kernel.org,
	catalin.marinas@arm.com,
	will@kernel.org,
	guoren@kernel.org,
	bcain@quicinc.com,
	chenhuacai@kernel.org,
	kernel@xen0n.name,
	geert@linux-m68k.org,
	sammy@sammy.net,
	monstr@monstr.eu,
	tsbogend@alpha.franken.de,
	dinguyen@kernel.org,
	jonas@southpole.se,
	stefan.kristiansson@saunalahti.fi,
	shorne@gmail.com,
	James.Bottomley@HansenPartnership.com,
	deller@gmx.de,
	mpe@ellerman.id.au,
	benh@kernel.crashing.org,
	paulus@samba.org,
	paul.walmsley@sifive.com,
	palmer@dabbelt.com,
	aou@eecs.berkeley.edu,
	hca@linux.ibm.com,
	gor@linux.ibm.com,
	agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com,
	svens@linux.ibm.com,
	ysato@users.sourceforge.jp,
	dalias@libc.org,
	davem@davemloft.net,
	richard@nod.at,
	anton.ivanov@cambridgegreys.com,
	johannes@sipsolutions.net,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	dave.hansen@linux.intel.com,
	x86@kernel.org,
	hpa@zytor.com,
	acme@kernel.org,
	alexander.shishkin@linux.intel.com,
	jolsa@kernel.org,
	namhyung@kernel.org,
	jgross@suse.com,
	srivatsa@csail.mit.edu,
	amakhalov@vmware.com,
	pv-drivers@vmware.com,
	boris.ostrovsky@oracle.com,
	chris@zankel.net,
	jcmvbkbc@gmail.com,
	rafael@kernel.org,
	lenb@kernel.org,
	pavel@ucw.cz,
	gregkh@linuxfoundation.org,
	mturquette@baylibre.com,
	sboyd@kernel.org,
	daniel.lezcano@linaro.org,
	lpieralisi@kernel.org,
	sudeep.holla@arm.com,
	agross@kernel.org,
	bjorn.andersson@linaro.org,
	anup@brainfault.org,
	thierry.reding@gmail.com,
	jonathanh@nvidia.com,
	jacob.jun.pan@linux.intel.com,
	Arnd Bergmann <arnd@arndb.de>,
	yury.norov@gmail.com,
	andriy.shevchenko@linux.intel.com,
	linux@rasmusvillemoes.dk,
	rostedt@goodmis.org,
	pmladek@suse.com,
	senozhatsky@chromium.org,
	john.ogness@linutronix.de,
	paulmck@kernel.org,
	frederic@kernel.org,
	quic_neeraju@quicinc.com,
	josh@joshtriplett.org,
	mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com,
	joel@joelfernandes.org,
	juri.lelli@redhat.com,
	vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com,
	bsegall@google.com,
	mgorman@suse.de,
	bristot@redhat.com,
	vschneid@redhat.com,
	jpoimboe@kernel.org,
	linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org,
	linux-omap@vger.kernel.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	linux-perf-users@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-xtensa@linux-xtensa.org,
	linux-acpi@vger.kernel.org,
	linux-pm@vger.kernel.org,
	linux-clk@vger.kernel.org,
	linux-arm-msm@vger.kernel.org,
	linux-tegra@vger.kernel.org,
	linux-arch@vger.kernel.org,
	rcu@vger.kernel.org
Subject: Re: [PATCH 23/36] arm64,smp: Remove trace_.*_rcuidle() usage
In-Reply-To: <Yqi2UGb4alCAR5s4@FVFF77S0Q05N>
References: <20220608142723.103523089@infradead.org>
	<20220608144517.380962958@infradead.org>
	<Yqi2UGb4alCAR5s4@FVFF77S0Q05N>
User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue)
 FLIM-LB/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL-LB/10.8 EasyPG/1.0.0 Emacs/27.1
 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)
MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue")
Content-Type: text/plain; charset=US-ASCII
X-SA-Exim-Connect-IP: 185.219.108.64
X-SA-Exim-Rcpt-To: mark.rutland@arm.com, peterz@infradead.org, rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com, vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com, linus.walleij@linaro.org, shawnguo@kernel.org, s.hauer@pengutronix.de, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org, catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org, bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name, geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu, tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se, stefan.kristiansson@saunalahti.fi, shorne@gmail.com, James.Bottomley@HansenPartnership.com, deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@
 libc.org, davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, acme@kernel.org, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com, srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com, boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com, rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz, gregkh@linuxfoundation.org, mturquette@baylibre.com, sboyd@kernel.org, daniel.lezcano@linaro.org, lpieralisi@kernel.org, sudeep.holla@arm.com, agross@kernel.org, bjorn.andersson@linaro.org, anup@brainfault.org, thierry.reding@gmail.com, jonathanh@nvidia.com, jacob.jun.pan@linux.intel.com, arnd@arndb.de, yury.norov@gmail.com, andriy.shevchenko@linux.intel.com, linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com, senozhatsky@chromium.org, john.ogness@linutronix.de, paul
 mck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org, linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-perf-users@vger.kernel.org, virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org, 
 linux-xtensa@linux-xtensa.org, linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-tegra@vger.kernel.org, linux-arch@vger.kernel.org, rcu@vger.kernel.org
X-SA-Exim-Mail-From: maz@kernel.org
X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false

On Tue, 14 Jun 2022 17:24:48 +0100,
Mark Rutland <mark.rutland@arm.com> wrote:
> 
> On Wed, Jun 08, 2022 at 04:27:46PM +0200, Peter Zijlstra wrote:
> > Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
> > normal interrupts") this function is called in regular IRQ context.
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> [adding Marc since he authored that commit]
> 
> Makes sense to me:
> 
>   Acked-by: Mark Rutland <mark.rutland@arm.com>
> 
> Mark.
> 
> > ---
> >  arch/arm64/kernel/smp.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -865,7 +865,7 @@ static void do_handle_IPI(int ipinr)
> >  	unsigned int cpu = smp_processor_id();
> >  
> >  	if ((unsigned)ipinr < NR_IPI)
> > -		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
> > +		trace_ipi_entry(ipi_types[ipinr]);
> >  
> >  	switch (ipinr) {
> >  	case IPI_RESCHEDULE:
> > @@ -914,7 +914,7 @@ static void do_handle_IPI(int ipinr)
> >  	}
> >  
> >  	if ((unsigned)ipinr < NR_IPI)
> > -		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
> > +		trace_ipi_exit(ipi_types[ipinr]);
> >  }
> >  
> >  static irqreturn_t ipi_handler(int irq, void *data)

Acked-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Without deviation from the norm, progress is not possible.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 06:11:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 06:11:18 +0000
Message-ID: <7c76c81a-d781-8ffb-f68a-ece5487ad01f@suse.com>
Date: Wed, 15 Jun 2022 08:11:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2 3/4] build: set PERL
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220614162248.40278-4-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220614162248.40278-4-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.06.2022 18:22, Anthony PERARD wrote:
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
>  export PYTHON		?= $(PYTHON_INTERPRETER)
>  
>  export CHECKPOLICY	?= checkpolicy
> +export PERL		?= perl
>  
>  $(if $(filter __%, $(MAKECMDGOALS)), \
>      $(error targets prefixed with '__' are only for internal use))

Considering my patch yesterday that moved the exporting of CC etc, I
wonder if - at the very least for consistency - this and the neighbouring
pre-existing exports shouldn't all be moved below the inclusion of
./Config.mk as well. After all ./config might override any of those.
(Yes, the ones here don't prevent such overriding, but only as long as
there aren't any further make quirks.)
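Jan's point can be seen with a self-contained toy (the Makefile and config file below are invented for illustration, not Xen's real ones): a `?=` default assigned before the include still loses to a plain `=` in the included file, so the overriding works today, but moving the defaults below the include would remove the dependence on that make subtlety.

```shell
# Toy demonstration: PERL gets a ?= default before the include, yet the
# included config file still overrides it with a plain assignment.
tmp=$(mktemp -d)
printf 'PERL = /custom/perl\n' > "$tmp/config.mk"
printf 'export PERL ?= perl\ninclude config.mk\nall:\n\t@echo "PERL=$(PERL)"\n' > "$tmp/Makefile"
out=$(make -s --no-print-directory -C "$tmp")
echo "$out"
rm -rf "$tmp"
```

With GNU make this prints `PERL=/custom/perl`, confirming the override does happen; the suggested reordering simply makes that independent of assignment-flavour details.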

Since this may want doing in a separate patch, this one is
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 06:24:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 06:24:15 +0000
MIME-Version: 1.0
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
From: Viresh Kumar <viresh.kumar@linaro.org>
Date: Wed, 15 Jun 2022 11:53:54 +0530
Message-ID: <CAOh2x=kxpdisV+tqcYOoZGSKA8YjPMej+7u19Jpa1jmVcZCaxA@mail.gmail.com>
Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under
 Xen using xen-grant DMA-mapping layer
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Christoph Hellwig <hch@infradead.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
	Henry Wang <Henry.Wang@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, Jiamei Xie <Jiamei.Xie@arm.com>, 
	=?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Vincent Guittot <vincent.guittot@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Oleksandr,

On Mon, Jun 6, 2022 at 10:16 AM Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> The high level idea is to create new Xen's grant table based DMA-mapping layer for the guest Linux whose main
> purpose is to provide a special 64-bit DMA address which is formed by using the grant reference (for a page
> to be shared with the backend) with offset and setting the highest address bit (this is for the backend to
> be able to distinguish grant ref based DMA address from normal GPA). For this to work we need the ability
> to allocate contiguous (consecutive) grant references for multi-page allocations. And the backend then needs
> to offer VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support virtio-mmio modern
> transport for 64-bit addresses in the virtqueue).
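The encoding described above can be sketched numerically (the grant reference, offset, and 4K page shift here are illustrative values, not taken from the series): shift the grant reference up by the page shift, add the in-page offset, and set the top bit as the tag.

```shell
# grant ref 5 at in-page offset 0x40, 4K pages; bit 63 marks the
# address as grant-reference based rather than a normal guest
# physical address
addr=$(( (1 << 63) | (5 << 12) | 0x40 ))
printf '0x%016x\n' "$addr"
```

This yields `0x8000000000005040`: tag in the top bit, grant reference in the middle bits, offset in the low bits.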

I was trying your series, from Linus's tree, and started seeing boot
failures (failing to mount rootfs). The reason is probably these
messages:

[ 1.222498] virtio_scsi virtio1: device must provide VIRTIO_F_ACCESS_PLATFORM
[ 1.316334] virtio_net virtio0: device must provide VIRTIO_F_ACCESS_PLATFORM

I understand from your email that the backends now need to offer the
VIRTIO_F_ACCESS_PLATFORM flag, but should this requirement be a bit
softer? I mean, shouldn't we allow both types of backends to run with
the same kernel, ones that offer this feature and ones that don't? The
ones that don't offer the feature should continue to work as they used
to, i.e. without the restricted memory access feature.

I am currently testing Xen with the help of QEMU on my x86 desktop, and
these backends (scsi and net) are part of QEMU itself, I think, and I
don't really want to go and make the change there.

Thanks.

--
Viresh


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 06:37:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 06:37:53 +0000
Message-ID: <09b49900-9215-f2a2-d521-a79cf5ce5f0f@suse.com>
Date: Wed, 15 Jun 2022 08:37:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2 1/4] build,include: rework shell script for
 headers++.chk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220614162248.40278-2-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220614162248.40278-2-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 14.06.2022 18:22, Anthony PERARD wrote:
> The command line generated for headers++.chk by make is quite long,
> and in some environments it is too long. This issue has been seen in
> the Yocto build environment.
> 
> Error messages:
>     make[9]: execvp: /bin/sh: Argument list too long
>     make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> 
> Rework so that we do the foreach loop in shell rather than make, to
> reduce the command line size by a lot. We also need a way to get
> header prerequisites for some public headers, so we use a "case"
> switch in shell to be able to do some simple pattern matching.
> Variables alone in POSIX shell don't allow working with associative
> arrays or variables containing "/".
> 
> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> 
> Notes:
>     v2:
>     - fix typo in commit message
>     - fix out-of-tree build
>     
>     v1:
>     - was sent as a reply to v1 of the series
> 
>  xen/include/Makefile | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 6d9bcc19b0..49c75a78f9 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -158,13 +158,22 @@ define cmd_headerscxx_chk
>  	    touch $@.new;                                                     \
>  	    exit 0;                                                           \
>  	fi;                                                                   \
> -	$(foreach i, $(filter %.h,$^),                                        \
> -	    echo "#include "\"$(i)\"                                          \
> +	get_prereq() {                                                        \
> +	    case $$1 in                                                       \
> +	    $(foreach i, $(filter %.h,$^),                                    \
> +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
> +		$(i)$(close)                                                  \
> +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
> +			-include c$(j))";;))                                  \
> +	    esac;                                                             \

If I'm reading this right (the indentation is a little misleading, so
one needs to count parentheses), the case statement could (in theory)
end up without any "body". As per the shell command language spec I'm
looking at, this (if it works in the first place) is an extension;
formally at least one pattern label is always required. Since we aim to
be portable in such regards, I'd like to request that there be a final,
otherwise empty *) line.
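For reference, a minimal standalone sketch of the portable shape being
requested (the header name and prerequisite below are hypothetical, not
taken from the actual Makefile):

```shell
# A POSIX-portable case statement always carries at least one pattern;
# the otherwise empty *) default keeps the generated shell valid even
# when make expands no per-header entries into the case body.
get_prereq() {
    case $1 in
    public/arch-arm.h)                  # hypothetical header with a prereq
        echo "-include cpublic/xen.h";;
    *) ;;                               # portable fallback: no prereqs
    esac
}
get_prereq public/arch-arm.h            # prints "-include cpublic/xen.h"
get_prereq public/io/ring.h             # prints nothing
```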

> +	};                                                                    \
> +	for i in $(filter %.h,$^); do                                         \
> +	    echo "#include "\"$$i\"                                           \
>  	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
>  	      -include stdint.h -include $(srcdir)/public/xen.h               \
> -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> +	      `get_prereq $$i`                                                \

While I know we use back-ticked quoting elsewhere, I'd generally
recommend using $() for readability. But maybe others view this
exactly the other way around ...
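As a small illustration of the readability point (not part of the
patch): $() nests without escaping, whereas back-ticks need backslashes
for any inner substitution.

```shell
# Both compute the same thing; only the quoting style differs.
a=$(basename "$(pwd)")    # $() nests cleanly
b=`basename \`pwd\``      # back-ticks require escaped inner back-ticks
echo "$a"
echo "$b"
```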

And a question without a good place to attach it: isn't headers99.chk
in similar need of conversion? It looks only slightly less involved
than the C++ one.

Jan

>  	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;) \
> +	    || exit $$?; echo $$i >> $@.new; done;                            \
>  	mv $@.new $@
>  endef
>  



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 08:13:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 08:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349736.575880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1O96-0002cm-PM; Wed, 15 Jun 2022 08:13:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349736.575880; Wed, 15 Jun 2022 08:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1O96-0002cf-LH; Wed, 15 Jun 2022 08:13:00 +0000
Received: by outflank-mailman (input) for mailman id 349736;
 Wed, 15 Jun 2022 08:12:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r2OX=WW=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o1O95-0002cT-6M
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 08:12:59 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f70c6348-ec82-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 10:12:57 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id o10so14965880edi.1
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 01:12:57 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 z18-20020a170906241200b006f3ef214da8sm6124795eja.14.2022.06.15.01.12.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 15 Jun 2022 01:12:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f70c6348-ec82-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=qZx1eac9zJFtkpBOL0VACeWr3XxYdRwbIQQabdvDnyo=;
        b=jkJnDqzOBowvYli23CnBz+0iOmpMkPqEWw+pAhFH+3W9CeKVoqKQLl1a28o2D3M4xn
         6YkFVsJyIsG0mHPLIw/jCczXepoiB11KPp8Ny70MxmW3Z0Vg540YsFx+TT01hytmfb2Q
         nbjOd4Sxp0DzpJUt+qWY0J5s+PzSqZYKQZ+GqIwAfy8fAR6FL9tjjvFU/TcK+/s0ekdO
         qS2jjqTvJ9Nsy6RAgxNLepdgpgUd8vZ916tP1LI/040Uec5fytVB6aqnj0DVOiheuPYD
         lyFDFVzG92rcUYN10+KvL4mXt/PQ1+uXreB1Ct9xcKobqXUbRK9XALBh2wHRFnKKoMIG
         2aAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=qZx1eac9zJFtkpBOL0VACeWr3XxYdRwbIQQabdvDnyo=;
        b=vRUT6I+0QyByHo2ztrYAsy+QPH9L6TnSRXtZzPjyS/Jn0HFXmwGcng0uDy5F60G0Pg
         5+vrQbsbR4WNEmYc+3hC7T+TcbHJMxQ9TjogRhdALDkyjvKDmji4X+kD0vy4IEtT8NLF
         vdF9ui/LmVMQVzfMxCFVfcx9niSdXsIoUbR8sHVgMYwa0SdSDem1arXAuoZBwB9Fk+DC
         7iQo/qTMlvdNiDpVw4sOFiedlv/3jwD4AzrAcDHrohNzBGHXWBZf7A0pu+k3mLuTXaYJ
         JguXAYSl6m8uH/Tl8bCShVWgGU/apXDhNm1ekBijRw1yFbX720COFl0nCKGoLF8McL9B
         PZ+w==
X-Gm-Message-State: AOAM530yGiohqkGAUClNvZzojkfzS1MmZgsStqEHYxPyoXXBjUUpvMp4
	IPM/CvA91AgtuuQQUXmyXVdgL5lyVzM=
X-Google-Smtp-Source: ABdhPJx1G5RQ6gn7JvD2P49knD+LlprnBa7U2Dexg1Q6tcxycWberFZMIphpKfRXpl97wyE/2g+izA==
X-Received: by 2002:a05:6402:84c:b0:430:aae2:6dd8 with SMTP id b12-20020a056402084c00b00430aae26dd8mr11250123edz.31.1655280777082;
        Wed, 15 Jun 2022 01:12:57 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [ImageBuilder] [PATCH] uboot-script-gen: Add DOMU_STATIC_MEM
Date: Wed, 15 Jun 2022 11:11:27 +0300
Message-Id: <20220615081127.274712-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new config parameter to configure a dom0less VM with statically
allocated memory:
DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
The parameter specifies the host physical address regions to be statically
allocated to the VM. Each region is defined by its start address and size.

For instance,
DOMU_STATIC_MEM[0]="0x30000000 0x10000000 0x50000000 0x20000000"
indicates that the host memory regions [0x30000000, 0x40000000) and
[0x50000000, 0x70000000) are statically allocated to the first dom0less VM.
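The splitting of each 64-bit base/size value into two 32-bit device
tree cells, as done by the new add_device_tree_static_mem helper below,
can be sketched standalone (this assumes a shell with 64-bit
arithmetic, which bash on 64-bit hosts provides):

```shell
# Encode one 64-bit value as "high-cell low-cell" for a device tree
# property with #...-address-cells = 2 / #...-size-cells = 2.
to_cells() {
    printf "0x%x 0x%x" $(($1 >> 32)) $(($1 & ((1 << 32) - 1)))
}
to_cells 0x30000000      # prints "0x0 0x30000000"
echo
to_cells 0x150000000     # prints "0x1 0x50000000"
echo
```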

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 README.md                |  4 ++++
 scripts/uboot-script-gen | 20 ++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/README.md b/README.md
index 8ce13f0..876e46d 100644
--- a/README.md
+++ b/README.md
@@ -154,6 +154,10 @@ Where:
   automatically at boot as dom0-less guest. It can still be created
   later from Dom0.
 
+- DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
+  if specified, indicates the host physical address regions
+  [baseaddr, baseaddr + size) to be reserved for the VM's static allocation.
+
 - LINUX is optional but specifies the Linux kernel for when Xen is NOT
   used.  To enable this set any LINUX\_\* variables and do NOT set the
   XEN variable.
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 0adf523..bb2ee2d 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -108,6 +108,22 @@ function add_device_tree_passthrough()
     dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
 }
 
+function add_device_tree_static_mem()
+{
+    local path=$1
+    local regions=$2
+
+    dt_set "$path" "#xen,static-mem-address-cells" "hex" "0x2"
+    dt_set "$path" "#xen,static-mem-size-cells" "hex" "0x2"
+
+    for i in ${regions[@]}
+    do
+	cells+=("$(printf "0x%x 0x%x" $(($i >> 32)) $(($i & ((1 << 32) - 1))))")
+    done
+
+    dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
+}
+
 function xen_device_tree_editing()
 {
     dt_set "/chosen" "#address-cells" "hex" "0x2"
@@ -143,6 +159,10 @@ function xen_device_tree_editing()
         dt_set "/chosen/domU$i" "#size-cells" "hex" "0x2"
         dt_set "/chosen/domU$i" "memory" "int" "0 ${DOMU_MEM[$i]}"
         dt_set "/chosen/domU$i" "cpus" "int" "${DOMU_VCPUS[$i]}"
+	if test "${DOMU_STATIC_MEM[$i]}"
+        then
+	    add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
+        fi
         dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
         add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
         if test "${domU_ramdisk_addr[$i]}"
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 08:14:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 08:14:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349743.575891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1OAz-0003An-4t; Wed, 15 Jun 2022 08:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349743.575891; Wed, 15 Jun 2022 08:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1OAz-0003Ag-1N; Wed, 15 Jun 2022 08:14:57 +0000
Received: by outflank-mailman (input) for mailman id 349743;
 Wed, 15 Jun 2022 08:14:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1OAx-0003AQ-Ag; Wed, 15 Jun 2022 08:14:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1OAx-0003u9-7X; Wed, 15 Jun 2022 08:14:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1OAw-0008EX-Mf; Wed, 15 Jun 2022 08:14:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1OAw-00010t-MB; Wed, 15 Jun 2022 08:14:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ku0hGiZ5P9LyqeyQ7M+8mICLD+9x6DeX3OC5mq4R7Zs=; b=LVAloyJmZ+IE4qk7HkhLf50qZV
	3dLIUDg9SjJeMBVFTllRyWo7PxCjtjbGV/29BJVkH9X0yBHbGgm7hoQ40VPkqAIjVqaN54VUZLqpr
	5pzMzDrJZ67Z0rQCPulEh5rclTEJy0UPEMy3fZi38rLQijwLgk4/EXUb+yvDa5TrVaRM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171168-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171168: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-raw:<job status>:broken:regression
    linux-linus:test-arm64-arm64-libvirt-raw:host-install(5):broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=018ab4fabddd94f1c96f3b59e180691b9e88d5d8
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 08:14:54 +0000

flight 171168 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171168/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-raw    <job status>                 broken
 test-arm64-arm64-libvirt-raw  5 host-install(5)        broken REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                018ab4fabddd94f1c96f3b59e180691b9e88d5d8
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   22 days
Failing since        170716  2022-05-24 11:12:06 Z   21 days   53 attempts
Testing same since   171168  2022-06-14 22:12:35 Z    0 days    1 attempts

------------------------------------------------------------
2348 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         pass    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 broken  
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-raw broken
broken-step test-arm64-arm64-libvirt-raw host-install(5)

Not pushing.

(No revision log; it would be 277190 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 08:48:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 08:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349765.575921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ohc-0007do-5C; Wed, 15 Jun 2022 08:48:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349765.575921; Wed, 15 Jun 2022 08:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ohc-0007dh-2T; Wed, 15 Jun 2022 08:48:40 +0000
Received: by outflank-mailman (input) for mailman id 349765;
 Wed, 15 Jun 2022 08:48:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5J6g=WW=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1Oha-0007dI-P4
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 08:48:38 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2345795-ec87-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 10:48:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A97B721C37;
 Wed, 15 Jun 2022 08:48:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 703CF13A35;
 Wed, 15 Jun 2022 08:48:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id vL3yGeScqWJaLQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 15 Jun 2022 08:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2345795-ec87-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655282916; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=iIBkW7Vwr7SXxDUNu3qaCisr7ig2aybwnHdNAbRTR4E=;
	b=htosqkbsQ1Cd4ZgHAGpw3up4szZoT77TjUmHY/iCSCLPrwGrVG8o905OCfwtFbYrBTkZp5
	Kd9h+RB5YxM+NXcBIe0ztc9PUF6a0CSkOVYW3jpSTMG0/CCRVUpkMiilgaiMNaAbZ86VNo
	NL2/VDfud72k6e7+qCMEUC+swXeC1jE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH] xen: don't require virtio with grants for non-PV guests
Date: Wed, 15 Jun 2022 10:48:35 +0200
Message-Id: <20220615084835.27113-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
Xen grant mappings") introduced a new requirement for using virtio
devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
feature.

This is an undue requirement for non-PV guests, as those can be operated
with existing backends without any problem, as long as those backends
are running in dom0.

By default, allow virtio devices without grant support for non-PV
guests.

The setting can be overridden by using the new "xen_virtio_grant"
command line parameter.

Add a new config item to always force use of grants for virtio.

Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .../admin-guide/kernel-parameters.txt         |  6 +++++
 drivers/xen/Kconfig                           |  9 ++++++++
 drivers/xen/grant-dma-ops.c                   | 22 +++++++++++++++++++
 include/xen/xen.h                             | 12 +++++-----
 4 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8090130b544b..7960480c6fe4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -6695,6 +6695,12 @@
 			improve timer resolution at the expense of processing
 			more timer interrupts.
 
+	xen_virtio_grant= [XEN]
+			Control whether virtio devices are required to use
+			grants when running as a Xen guest. The default is
+			"yes" for PV guests or when the kernel has been built
+			with CONFIG_XEN_VIRTIO_FORCE_GRANT set.
+
 	xen.balloon_boot_timeout= [XEN]
 			The time (in seconds) to wait before giving up to boot
 			in case initial ballooning fails to free enough memory.
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index bfd5f4f706bc..a65bd92121a5 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -355,4 +355,13 @@ config XEN_VIRTIO
 
 	  If in doubt, say n.
 
+config XEN_VIRTIO_FORCE_GRANT
+	bool "Require Xen virtio support to use grants"
+	depends on XEN_VIRTIO
+	help
+	  Require virtio for Xen guests to use grant mappings.
+	  This will avoid the need to give the backend the right to map all
+	  of the guest memory. This will need support on the backend side
+	  (e.g. qemu or kernel, depending on the virtio device types used).
+
 endmenu
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index fc0142484001..d1fae789dfad 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -11,6 +11,7 @@
 #include <linux/dma-map-ops.h>
 #include <linux/of.h>
 #include <linux/pfn.h>
+#include <linux/platform-feature.h>
 #include <linux/xarray.h>
 #include <xen/xen.h>
 #include <xen/xen-ops.h>
@@ -27,6 +28,27 @@ static DEFINE_XARRAY(xen_grant_dma_devices);
 
 #define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)
 
+static bool __initdata xen_virtio_grants;
+static bool __initdata xen_virtio_grants_set;
+static __init int parse_use_grants(char *arg)
+{
+	if (!strcmp(arg, "yes"))
+		xen_virtio_grants = true;
+	else if (!strcmp(arg, "no"))
+		xen_virtio_grants = false;
+	xen_virtio_grants_set = true;
+
+	return 0;
+}
+early_param("xen_virtio_grant", parse_use_grants);
+
+void xen_set_restricted_virtio_memory_access(void)
+{
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_virtio_grants ||
+	    (!xen_virtio_grants_set && xen_pv_domain()))
+		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+}
+
 static inline dma_addr_t grant_to_dma(grant_ref_t grant)
 {
 	return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81e140d..e0b1d534366f 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,13 +52,11 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
-#include <linux/platform-feature.h>
-
-static inline void xen_set_restricted_virtio_memory_access(void)
-{
-	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
-		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
-}
+#ifdef CONFIG_XEN_GRANT_DMA_OPS
+void xen_set_restricted_virtio_memory_access(void);
+#else
+static inline void xen_set_restricted_virtio_memory_access(void) { }
+#endif
 
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
-- 
2.35.3
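[Archive note] The grant-requirement policy introduced by the patch above can be modelled as a small decision function. This is a sketch only: the function and parameter names are illustrative, and the Kconfig/command-line plumbing is simplified; only the overall condition mirrors xen_set_restricted_virtio_memory_access().

```python
from typing import Optional

def requires_grants(force_grant: bool, cmdline: Optional[str], pv_guest: bool) -> bool:
    """Sketch of the condition in xen_set_restricted_virtio_memory_access()."""
    if force_grant:            # CONFIG_XEN_VIRTIO_FORCE_GRANT always wins
        return True
    if cmdline is not None:    # explicit xen_virtio_grant=yes|no on the command line
        return cmdline == "yes"
    return pv_guest            # no override: only PV guests require grants

# Default behaviour after this patch:
assert requires_grants(False, None, True)        # PV guest: grants required
assert not requires_grants(False, None, False)   # HVM/PVH guest: not required
# Explicit command line overrides:
assert requires_grants(False, "yes", False)
assert not requires_grants(False, "no", True)
# The Kconfig override forces grants regardless of the parameter:
assert requires_grants(True, "no", False)
```

The key change relative to the original commit fa1f57421e0b is the last default branch: without any override, only PV guests keep the VIRTIO_F_ACCESS_PLATFORM requirement.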



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:25:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 09:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349777.575932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PHE-0003z5-1o; Wed, 15 Jun 2022 09:25:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349777.575932; Wed, 15 Jun 2022 09:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PHD-0003yy-VJ; Wed, 15 Jun 2022 09:25:27 +0000
Received: by outflank-mailman (input) for mailman id 349777;
 Wed, 15 Jun 2022 09:25:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lpwu=WW=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1o1PHD-0003ys-6h
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 09:25:27 +0000
Received: from mail-pg1-x534.google.com (mail-pg1-x534.google.com
 [2607:f8b0:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15422b2d-ec8d-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 11:25:25 +0200 (CEST)
Received: by mail-pg1-x534.google.com with SMTP id 184so10827402pga.12
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 02:25:24 -0700 (PDT)
Received: from localhost ([122.162.234.2]) by smtp.gmail.com with ESMTPSA id
 v2-20020a17090a778200b001ea90dada74sm1231965pjk.12.2022.06.15.02.25.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 15 Jun 2022 02:25:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15422b2d-ec8d-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=QD1mf5DnBqcLEVOFoiWBRR1QBiCXzXfoEa1RUQMHEbg=;
        b=gmVSCn46TlF8cm8ao3weZQ1dQ+919xti8x+JNTqvEbhjNEFZdvPMGRwZbobjNM3xbf
         uC1/HxtqLlOGh00UA96fBXAz35WiunpTXGZzV5kQH/ocYMckFGEggHTjUPX+MI/pZmOc
         0Nyfs4DBuxMJLPDFXQUFccwTvxRzbEGEEK+TKSrcCe+eTJmjpPY47aruJ1kRn/cku3kK
         luPIiVacfBtMMAvwPdeNm4mBL5eFXYP++g0eRxmGW7FuPJI23ZPCpYI5ypPiJCZkOwWa
         9BR0ntAlHQ+6Q4rk86UEoZfkZ/Qds91d142Z3ertz6dMEcCU9As+k+oOuPVQSa+tI6ap
         YaWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=QD1mf5DnBqcLEVOFoiWBRR1QBiCXzXfoEa1RUQMHEbg=;
        b=6UVpNskCEcY7TO04sT2FJ0dBT1w9SFxTSpxab/IPAakWujHHBgiusySQC0ccDzTno/
         wtEz1dSXdeRQ23VIEeiS9rzJABygCaZcr9frmrJSK2WjGK6lDX+1bKExtjX44POTPlqq
         4k/xa/SeMd2ddejxGr167Xf9djKUm+vAOShpFIT69FU85JCNemNulnptNwXU5ITojcRS
         X/OroUCfddYv4PYVxMNJGtGZcPt33ndyDdodUEzC2YVMT8rsPmCbWApIOvjhokp5s5kF
         PiwMClBgsdtdXIUmMDHD9fCkrqMz2y+JakTE6EWpryDDwaPReLa6sj4oUQDiriXUO2iV
         Ff9A==
X-Gm-Message-State: AOAM532ICMVyGJ7bygDeg6uEAmE+ACkpcekQ8zrGWIeo4rnhplq32lLh
	Z0ZH5s2QZqI8oT3wd1o8mMnlcw==
X-Google-Smtp-Source: ABdhPJy/EMp63lY79wavrcwsuOIlryAFSfAtHCajQFGqGhcTul+yPCkHJc1P94NUfLJmE8bfCaxLrA==
X-Received: by 2002:a63:6c4a:0:b0:3fe:2813:b1d with SMTP id h71-20020a636c4a000000b003fe28130b1dmr8063133pgc.613.1655285122694;
        Wed, 15 Jun 2022 02:25:22 -0700 (PDT)
Date: Wed, 15 Jun 2022 14:55:19 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Message-ID: <20220615092519.5677clabobheziet@vireshk-i7>
References: <20220615084835.27113-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220615084835.27113-1-jgross@suse.com>

On 15-06-22, 10:48, Juergen Gross wrote:
> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
> 
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
> 
> By default, allow virtio devices without grant support for non-PV
> guests.
> 
> The setting can be overridden by using the new "xen_virtio_grant"
> command line parameter.
> 
> Add a new config item to always force use of grants for virtio.
> 
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |  6 +++++
>  drivers/xen/Kconfig                           |  9 ++++++++
>  drivers/xen/grant-dma-ops.c                   | 22 +++++++++++++++++++
>  include/xen/xen.h                             | 12 +++++-----
>  4 files changed, 42 insertions(+), 7 deletions(-)

Thanks for the quick fix.

With CONFIG_DEBUG_SECTION_MISMATCH=y, this generates a warning.

WARNING: modpost: vmlinux.o(.text+0x7a8270): Section mismatch in reference from the function xen_set_restricted_virtio_memory_access() to the variable .init.data:xen_virtio_grants
The function xen_set_restricted_virtio_memory_access() references
the variable __initdata xen_virtio_grants.
This is often because xen_set_restricted_virtio_memory_access lacks a __initdata
annotation or the annotation of xen_virtio_grants is wrong.

This can be fixed by:

diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index d1fae789dfad..1099097b4515 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -42,7 +42,7 @@ static __init int parse_use_grants(char *arg)
 }
 early_param("xen_virtio_grant", parse_use_grants);

-void xen_set_restricted_virtio_memory_access(void)
+void __init xen_set_restricted_virtio_memory_access(void)
 {
        if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_virtio_grants ||
            (!xen_virtio_grants_set && xen_pv_domain()))

With that:

Tested-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:32:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 09:32:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349785.575944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1POF-0005db-Qt; Wed, 15 Jun 2022 09:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349785.575944; Wed, 15 Jun 2022 09:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1POF-0005dU-Nx; Wed, 15 Jun 2022 09:32:43 +0000
Received: by outflank-mailman (input) for mailman id 349785;
 Wed, 15 Jun 2022 09:32:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5J6g=WW=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1POE-0005dO-KJ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 09:32:42 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a684bd3-ec8e-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 11:32:41 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1E6AB21B61;
 Wed, 15 Jun 2022 09:32:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DA5A1139F3;
 Wed, 15 Jun 2022 09:32:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ikr3MzinqWJSQQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 15 Jun 2022 09:32:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a684bd3-ec8e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655285561; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GdiDvCAg9O1/d1vITE2FLYA4kxpmVbO5wBq5nI2zzws=;
	b=rDnf0s4wqN2vFvjXd+otZTvw7/+5+9Bkl3WtioPxpmigme3C2DIjBJ8sJTBlIDoZBInaLd
	b3n/XqvL+WSEE3Su6AQ29x2/qaSvqtPjKigIncjwiwBlc964HFcuZGoLSei1LmU4dc2HGb
	3oIs5sYCrQFz5NB9fR8s06K/hsivbco=
Message-ID: <a063368a-022a-c294-5a19-da1b80c45461@suse.com>
Date: Wed, 15 Jun 2022 11:32:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
 <20220615092519.5677clabobheziet@vireshk-i7>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220615092519.5677clabobheziet@vireshk-i7>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------WbFtDx5jTtNkqzBb0mqgblYE"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------WbFtDx5jTtNkqzBb0mqgblYE
Content-Type: multipart/mixed; boundary="------------nKGSwd00yM0V3lyWpANyvyUW";
 protected-headers="v1"

--------------nKGSwd00yM0V3lyWpANyvyUW
Content-Type: multipart/mixed; boundary="------------gQNSt1Ojv9sx01gPYMoid8mJ"

--------------gQNSt1Ojv9sx01gPYMoid8mJ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.06.22 11:25, Viresh Kumar wrote:
> On 15-06-22, 10:48, Juergen Gross wrote:
>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>> Xen grant mappings") introduced a new requirement for using virtio
>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>> feature.
>>
>> This is an undue requirement for non-PV guests, as those can be operated
>> with existing backends without any problem, as long as those backends
>> are running in dom0.
>>
>> By default, allow virtio devices without grant support for non-PV
>> guests.
>>
>> The setting can be overridden by using the new "xen_virtio_grant"
>> command line parameter.
>>
>> Add a new config item to always force use of grants for virtio.
>>
>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   .../admin-guide/kernel-parameters.txt         |  6 +++++
>>   drivers/xen/Kconfig                           |  9 ++++++++
>>   drivers/xen/grant-dma-ops.c                   | 22 +++++++++++++++++++
>>   include/xen/xen.h                             | 12 +++++-----
>>   4 files changed, 42 insertions(+), 7 deletions(-)
> 
> Thanks for the quick fix.
> 
> With CONFIG_DEBUG_SECTION_MISMATCH=y, this generates a warning.
> 
> WARNING: modpost: vmlinux.o(.text+0x7a8270): Section mismatch in reference from the function xen_set_restricted_virtio_memory_access() to the variable .init.data:xen_virtio_grants
> The function xen_set_restricted_virtio_memory_access() references
> the variable __initdata xen_virtio_grants.
> This is often because xen_set_restricted_virtio_memory_access lacks a __initdata
> annotation or the annotation of xen_virtio_grants is wrong.

Silly me. Thanks for the notice.

> 
> This can be fixed by:
> 
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index d1fae789dfad..1099097b4515 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -42,7 +42,7 @@ static __init int parse_use_grants(char *arg)
>   }
>   early_param("xen_virtio_grant", parse_use_grants);
> 
> -void xen_set_restricted_virtio_memory_access(void)
> +void __init xen_set_restricted_virtio_memory_access(void)
>   {
>          if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_virtio_grants ||
>              (!xen_virtio_grants_set && xen_pv_domain()))
> 
> With that:
> 
> Tested-by: Viresh Kumar <viresh.kumar@linaro.org>
> 

Thanks,


Juergen
--------------gQNSt1Ojv9sx01gPYMoid8mJ--

--------------nKGSwd00yM0V3lyWpANyvyUW--

--------------WbFtDx5jTtNkqzBb0mqgblYE--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:47:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 09:47:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349796.575955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PcT-0007O0-5j; Wed, 15 Jun 2022 09:47:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349796.575955; Wed, 15 Jun 2022 09:47:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PcT-0007Nt-22; Wed, 15 Jun 2022 09:47:25 +0000
Received: by outflank-mailman (input) for mailman id 349796;
 Wed, 15 Jun 2022 09:47:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1PcR-0007Nn-AZ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 09:47:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1PcQ-0005VT-Rw; Wed, 15 Jun 2022 09:47:22 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.25.191]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1PcQ-0008Ux-M2; Wed, 15 Jun 2022 09:47:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=bZ5iwt1Nbhmd9t3REb64DldQ4TAcAgDD8oDvwYim7zU=; b=v4XM4CuKC712eQd1ryEo4UrRth
	4SWqFxJz0wnpeXlYj6IH/FIdtPZrQ1IaGxkDK5vwZEhnqa1uPTp9+TVMOOdTmEfEGzQ5nOZ1eVKTZ
	fzfP5Zu9YYFL8O/I4NDJ1EQkgf5SygD8EeZv5bNzIjNIFdiDMs5kZYBVarQvAltHQwVU=;
Message-ID: <c48bb719-8cc6-ea8d-291d-4e09d42f93c2@xen.org>
Date: Wed, 15 Jun 2022 10:47:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
 switch
To: Wei Chen <wei.chen@arm.com>, xen-devel@lists.xenproject.org
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220615013909.283887-1-wei.chen@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220615013909.283887-1-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Wei,

Title: I don't quite understand what you mean by "flip-flop transition".

On 15/06/2022 02:39, Wei Chen wrote:
> virt_timer_save() is calculating the new expiry time for the vtimer and
> has a potential risk of uint64_t overflow in:
> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> - boot_count".
> In this formula, "cval + offset" could make uint64_t overflow.
> Generally speaking, this is difficult to trigger. But unfortunately
> the problem was encountered on a platform where the timer started
> with a very large initial value, like 0xF333899122223333. On this
> platform cval + offset is overflowing after running for a while.

I am not sure how this is a problem. Yes, uint64_t will overflow with 
"cval + offset", but AFAIK the overall result will still be correct and 
not undefined.

> 
> So in this patch, we adjust the formula to use "offset - boot_count"
> first, and then use the result to plus cval. This will avoid the
> uint64_t overflow.

Technically, the overflow is still present because the (offset - 
boot_count) is a non-zero value *and* cval is a 64-bit value.

So I think the equation below should be reworked to...

> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/vtimer.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 5bb5970f58..86e63303c8 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -144,8 +144,9 @@ void virt_timer_save(struct vcpu *v)
>       if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
>            !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
>       {
> -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
> -                  v->domain->arch.virt_timer_base.offset - boot_count));
> +        set_timer(&v->arch.virt_timer.timer,
> +                  ticks_to_ns(v->domain->arch.virt_timer_base.offset -
> +                              boot_count + v->arch.virt_timer.cval));

... something like:

ticks_to_ns(offset - boot_count) + ticks_to_ns(cval);

The first part of the equation should always be the same. So it could be 
stored in struct domain.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:58:08 2022
Message-ID: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Date: Wed, 15 Jun 2022 11:57:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/8] x86emul: a few small steps towards disintegration
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

... of the huge monolithic source file. The series is largely code
movement, and hence is intended to incur no functional change.

It has now been almost a year since the v1 submission, without any
feedback having arrived. Some re-basing was necessary in the meantime,
and a new patch (the last one) has been added - even if seemingly
unrelated, it was in this context that I thought of that possible
adjustment (which may want to be viewed as somewhat RFC, as I know
there are reservations against the use of -Os).

1: split off opcode 0f01 handling
2: split off opcode 0fae handling
3: split off opcode 0fc7 handling
4: split off FPU opcode handling
5: split off insn decoding
6: move x86_emul_blk() to separate source file
7: move various utility functions to separate source files
8: build with -Os

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:58:56 2022
Message-ID: <c8ecf582-2c1c-ca41-f289-b6a6a080061c@suse.com>
Date: Wed, 15 Jun 2022 11:58:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 1/8] x86emul: split off opcode 0f01 handling
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

There are a fair number of sub-cases (some yet to be implemented), so
a separate function seems warranted.

The moved code is slightly adjusted in a few places, e.g. replacing
EXC_* with X86_EXC_* (so that the EXC_* definitions don't need to move
as well; we want these to be phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -11,10 +11,13 @@ endif
 # Add libx86 to the build
 vpath %.c $(XEN_ROOT)/xen/lib/x86
 
+.PHONY: x86_emulate
 x86_emulate:
-	[ -L $@ ] || ln -sf $(XEN_ROOT)/xen/arch/x86/$@
+	mkdir -p $@
+	ln -sf $(XEN_ROOT)/xen/arch/x86/$@/*.[ch] $@/
 
-x86_emulate/%: x86_emulate ;
+x86_emulate/%.c: x86_emulate ;
+x86_emulate/%.h: x86_emulate ;
 
 x86-emulate.c x86-emulate.h wrappers.c: %:
 	[ -L $* ] || ln -sf $(XEN_ROOT)/tools/tests/x86_emulator/$*
@@ -31,18 +34,27 @@ x86.h := $(addprefix $(XEN_ROOT)/tools/i
                      cpuid.h cpuid-autogen.h)
 x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
 
+OBJS := fuzz-emul.o x86-emulate.o
+OBJS += x86_emulate/0f01.o
+
 # x86-emulate.c will be implicit for both
-x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h)
+x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h) x86_emulate/private.h
 
 fuzz-emul.o fuzz-emulate-cov.o cpuid.o wrappers.o: $(x86_emulate.h)
 
-x86-insn-fuzzer.a: fuzz-emul.o x86-emulate.o cpuid.o
+$(filter x86_emulate/%.o,$(OBJS)): x86_emulate/%.o: x86_emulate/%.c x86_emulate/private.h $(x86_emulate.h)
+	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -o $@ $< $(APPEND_CFLAGS)
+
+$(patsubst %.o,%-cov.o,$(filter x86_emulate/%.o,$(OBJS))): x86_emulate/%-cov.o: x86_emulate/%.c x86_emulate/private.h $(x86_emulate.h)
+	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) $(GCOV_FLAGS) -c -o $@ $< $(APPEND_CFLAGS)
+
+x86-insn-fuzzer.a: $(OBJS) cpuid.o
 	$(AR) rc $@ $^
 
-afl-harness: afl-harness.o fuzz-emul.o x86-emulate.o cpuid.o wrappers.o
+afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
 	$(CC) $(CFLAGS) $^ -o $@
 
-afl-harness-cov: afl-harness-cov.o fuzz-emul-cov.o x86-emulate-cov.o cpuid.o wrappers.o
+afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
 	$(CC) $(CFLAGS) $(GCOV_FLAGS) $^ -o $@
 
 # Common targets
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -29,7 +29,7 @@ OPMASK := avx512f avx512dq avx512bw
 
 ifeq ($(origin XEN_COMPILE_ARCH),override)
 
-HOSTCFLAGS += -m32
+HOSTCFLAGS += -m32 -I..
 
 else
 
@@ -250,7 +250,10 @@ xop.h avx512f.h: simd-fma.c
 
 endif # 32-bit override
 
-$(TARGET): x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
+OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
+OBJS += x86_emulate/0f01.o
+
+$(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
 
 .PHONY: clean
@@ -274,8 +277,10 @@ else
 run32 clean32: %32: %
 endif
 
+.PHONY: x86_emulate
 x86_emulate:
-	[ -L $@ ] || ln -sf $(XEN_ROOT)/xen/arch/x86/$@
+	mkdir -p $@
+	ln -sf $(XEN_ROOT)/xen/arch/x86/$@/*.[ch] $@/
 
 x86_emulate/%: x86_emulate ;
 
@@ -287,13 +292,13 @@ x86.h := $(addprefix $(XEN_ROOT)/tools/i
                      x86-vendors.h x86-defns.h msr-index.h) \
          $(addprefix $(XEN_ROOT)/tools/include/xen/lib/x86/, \
                      cpuid.h cpuid-autogen.h)
-x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
+x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h x86_emulate/private.h $(x86.h)
 
-x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o: %.o: %.c $(x86_emulate.h)
+$(OBJS): %.o: %.c $(x86_emulate.h)
 	$(HOSTCC) $(HOSTCFLAGS) -c -g -o $@ $<
 
 x86-emulate.o: x86_emulate/x86_emulate.c
-x86-emulate.o: HOSTCFLAGS += -D__XEN_TOOLS__
+x86-emulate.o x86_emulate/%.o: HOSTCFLAGS += -D__XEN_TOOLS__
 
 # In order for our custom .type assembler directives to reliably land after
 # gcc's, we need to keep it from re-ordering top-level constructs.
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -22,11 +22,9 @@
 
 /* For generic assembly code: use macros to define operation/operand sizes. */
 #ifdef __i386__
-# define r(name)       e ## name
 # define __OS          "l"  /* Operation Suffix */
 # define __OP          "e"  /* Operand Prefix */
 #else
-# define r(name)       r ## name
 # define __OS          "q"  /* Operation Suffix */
 # define __OP          "r"  /* Operand Prefix */
 #endif
@@ -265,12 +263,12 @@ void emul_test_put_fpu(
 
 static uint32_t pkru;
 
-static unsigned int rdpkru(void)
+unsigned int rdpkru(void)
 {
     return pkru;
 }
 
-static void wrpkru(unsigned int val)
+void wrpkru(unsigned int val)
 {
     pkru = val;
 }
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -1,3 +1,6 @@
+#ifndef X86_EMULATE_H
+#define X86_EMULATE_H
+
 #include <assert.h>
 #include <stdbool.h>
 #include <stddef.h>
@@ -129,6 +132,9 @@ static inline bool xcr0_mask(uint64_t ma
     return cpu_has_xsave && ((xgetbv(0) & mask) == mask);
 }
 
+unsigned int rdpkru(void);
+void wrpkru(unsigned int val);
+
 #define cache_line_size() (cp.basic.clflush_size * 8)
 #define cpu_has_fpu        cp.basic.fpu
 #define cpu_has_mmx        cp.basic.mmx
@@ -206,3 +212,5 @@ void emul_test_put_fpu(
     struct x86_emulate_ctxt *ctxt,
     enum x86_emulate_fpu_type backout,
     const struct x86_emul_fpu_aux *aux);
+
+#endif /* X86_EMULATE_H */
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -9,6 +9,7 @@ obj-y += mm/
 obj-$(CONFIG_XENOPROF) += oprofile/
 obj-$(CONFIG_PV) += pv/
 obj-y += x86_64/
+obj-y += x86_emulate/
 
 alternative-y := alternative.init.o
 alternative-$(CONFIG_LIVEPATCH) :=
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -0,0 +1,349 @@
+/******************************************************************************
+ * 0f01.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * Copyright (c) 2005-2007 Keir Fraser
+ * Copyright (c) 2005-2007 XenSource Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#define ad_bytes (s->ad_bytes) /* for truncate_ea() */
+
+int x86emul_0f01(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops)
+{
+    enum x86_segment seg = (s->modrm_reg & 1) ? x86_seg_idtr : x86_seg_gdtr;
+    int rc;
+
+    switch ( s->modrm )
+    {
+        unsigned long base, limit, cr0, cr0w, cr4;
+        struct segment_register sreg;
+        uint64_t msr_val;
+
+    case 0xca: /* clac */
+    case 0xcb: /* stac */
+        vcpu_must_have(smap);
+        generate_exception_if(s->vex.pfx || !mode_ring0(), X86_EXC_UD);
+
+        regs->eflags &= ~X86_EFLAGS_AC;
+        if ( s->modrm == 0xcb )
+            regs->eflags |= X86_EFLAGS_AC;
+        break;
+
+    case 0xd0: /* xgetbv */
+        generate_exception_if(s->vex.pfx, X86_EXC_UD);
+        if ( !ops->read_cr || !ops->read_xcr ||
+             ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+            cr4 = 0;
+        generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), X86_EXC_UD);
+        rc = ops->read_xcr(regs->ecx, &msr_val, ctxt);
+        if ( rc != X86EMUL_OKAY )
+            goto done;
+        regs->r(ax) = (uint32_t)msr_val;
+        regs->r(dx) = msr_val >> 32;
+        break;
+
+    case 0xd1: /* xsetbv */
+        generate_exception_if(s->vex.pfx, X86_EXC_UD);
+        if ( !ops->read_cr || !ops->write_xcr ||
+             ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+            cr4 = 0;
+        generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        rc = ops->write_xcr(regs->ecx,
+                            regs->eax | ((uint64_t)regs->edx << 32), ctxt);
+        if ( rc != X86EMUL_OKAY )
+            goto done;
+        break;
+
+    case 0xd4: /* vmfunc */
+        generate_exception_if(s->vex.pfx, X86_EXC_UD);
+        fail_if(!ops->vmfunc);
+        if ( (rc = ops->vmfunc(ctxt)) != X86EMUL_OKAY )
+            goto done;
+        break;
+
+    case 0xd5: /* xend */
+        generate_exception_if(s->vex.pfx, X86_EXC_UD);
+        generate_exception_if(!vcpu_has_rtm(), X86_EXC_UD);
+        generate_exception_if(vcpu_has_rtm(), X86_EXC_GP, 0);
+        break;
+
+    case 0xd6: /* xtest */
+        generate_exception_if(s->vex.pfx, X86_EXC_UD);
+        generate_exception_if(!vcpu_has_rtm() && !vcpu_has_hle(),
+                              X86_EXC_UD);
+        /* Neither HLE nor RTM can be active when we get here. */
+        regs->eflags |= X86_EFLAGS_ZF;
+        break;
+
+    case 0xdf: /* invlpga */
+        fail_if(!ops->read_msr);
+        if ( (rc = ops->read_msr(MSR_EFER,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        /* Finding SVME set implies vcpu_has_svm(). */
+        generate_exception_if(!(msr_val & EFER_SVME) ||
+                              !in_protmode(ctxt, ops), X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        fail_if(!ops->tlb_op);
+        if ( (rc = ops->tlb_op(x86emul_invlpga, truncate_ea(regs->r(ax)),
+                               regs->ecx, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        break;
+
+    case 0xe8:
+        switch ( s->vex.pfx )
+        {
+        case vex_none: /* serialize */
+            host_and_vcpu_must_have(serialize);
+            asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
+            break;
+        case vex_f2: /* xsusldtrk */
+            vcpu_must_have(tsxldtrk);
+            /*
+             * We're never inside a transactional region when coming
+             * here - nothing else to do.
+             */
+            break;
+        default:
+            return X86EMUL_UNIMPLEMENTED;
+        }
+        break;
+
+    case 0xe9:
+        switch ( s->vex.pfx )
+        {
+        case vex_f2: /* xresldtrk */
+            vcpu_must_have(tsxldtrk);
+            /*
+             * We're never inside a transactional region when coming
+             * here - nothing else to do.
+             */
+            break;
+        default:
+            return X86EMUL_UNIMPLEMENTED;
+        }
+        break;
+
+    case 0xee:
+        switch ( s->vex.pfx )
+        {
+        case vex_none: /* rdpkru */
+            if ( !ops->read_cr ||
+                 ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                cr4 = 0;
+            generate_exception_if(!(cr4 & X86_CR4_PKE), X86_EXC_UD);
+            generate_exception_if(regs->ecx, X86_EXC_GP, 0);
+            regs->r(ax) = rdpkru();
+            regs->r(dx) = 0;
+            break;
+        default:
+            return X86EMUL_UNIMPLEMENTED;
+        }
+        break;
+
+    case 0xef:
+        switch ( s->vex.pfx )
+        {
+        case vex_none: /* wrpkru */
+            if ( !ops->read_cr ||
+                 ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                cr4 = 0;
+            generate_exception_if(!(cr4 & X86_CR4_PKE), X86_EXC_UD);
+            generate_exception_if(regs->ecx | regs->edx, X86_EXC_GP, 0);
+            wrpkru(regs->eax);
+            break;
+        default:
+            return X86EMUL_UNIMPLEMENTED;
+        }
+        break;
+
+    case 0xf8: /* swapgs */
+        generate_exception_if(!mode_64bit(), X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        fail_if(!ops->read_segment || !ops->read_msr ||
+                !ops->write_segment || !ops->write_msr);
+        if ( (rc = ops->read_segment(x86_seg_gs, &sreg,
+                                     ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
+                                 ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
+                                  ctxt)) != X86EMUL_OKAY )
+            goto done;
+        sreg.base = msr_val;
+        if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
+                                      ctxt)) != X86EMUL_OKAY )
+        {
+            /* Best effort unwind (i.e. no error checking). */
+            ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
+            goto done;
+        }
+        break;
+
+    case 0xf9: /* rdtscp */
+        fail_if(ops->read_msr == NULL);
+        if ( (rc = ops->read_msr(MSR_TSC_AUX,
+                                 &msr_val, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        regs->r(cx) = (uint32_t)msr_val;
+        return X86EMUL_rdtsc;
+
+    case 0xfc: /* clzero */
+    {
+        unsigned long zero = 0;
+
+        vcpu_must_have(clzero);
+
+        base = ad_bytes == 8 ? regs->r(ax) :
+               ad_bytes == 4 ? regs->eax : regs->ax;
+        limit = ctxt->cpuid->basic.clflush_size * 8;
+        generate_exception_if(limit < sizeof(long) ||
+                              (limit & (limit - 1)), X86_EXC_UD);
+        base &= ~(limit - 1);
+        if ( ops->rep_stos )
+        {
+            unsigned long nr_reps = limit / sizeof(zero);
+
+            rc = ops->rep_stos(&zero, s->ea.mem.seg, base, sizeof(zero),
+                               &nr_reps, ctxt);
+            if ( rc == X86EMUL_OKAY )
+            {
+                base += nr_reps * sizeof(zero);
+                limit -= nr_reps * sizeof(zero);
+            }
+            else if ( rc != X86EMUL_UNHANDLEABLE )
+                goto done;
+        }
+        fail_if(limit && !ops->write);
+        while ( limit )
+        {
+            rc = ops->write(s->ea.mem.seg, base, &zero, sizeof(zero), ctxt);
+            if ( rc != X86EMUL_OKAY )
+                goto done;
+            base += sizeof(zero);
+            limit -= sizeof(zero);
+        }
+        break;
+    }
+
+#define _GRP7(mod, reg) \
+        (((mod) << 6) | ((reg) << 3)) ... (((mod) << 6) | ((reg) << 3) | 7)
+#define GRP7_MEM(reg) _GRP7(0, reg): case _GRP7(1, reg): case _GRP7(2, reg)
+#define GRP7_ALL(reg) GRP7_MEM(reg): case _GRP7(3, reg)
+
+    case GRP7_MEM(0): /* sgdt */
+    case GRP7_MEM(1): /* sidt */
+        ASSERT(s->ea.type == OP_MEM);
+        generate_exception_if(umip_active(ctxt, ops), X86_EXC_GP, 0);
+        fail_if(!ops->read_segment || !ops->write);
+        if ( (rc = ops->read_segment(seg, &sreg, ctxt)) )
+            goto done;
+        if ( mode_64bit() )
+            s->op_bytes = 8;
+        else if ( s->op_bytes == 2 )
+        {
+            sreg.base &= 0xffffff;
+            s->op_bytes = 4;
+        }
+        if ( (rc = ops->write(s->ea.mem.seg, s->ea.mem.off, &sreg.limit,
+                              2, ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->write(s->ea.mem.seg, truncate_ea(s->ea.mem.off + 2),
+                              &sreg.base, s->op_bytes, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        break;
+
+    case GRP7_MEM(2): /* lgdt */
+    case GRP7_MEM(3): /* lidt */
+        ASSERT(s->ea.type == OP_MEM);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        fail_if(ops->write_segment == NULL);
+        memset(&sreg, 0, sizeof(sreg));
+        if ( (rc = read_ulong(s->ea.mem.seg, s->ea.mem.off,
+                              &limit, 2, ctxt, ops)) ||
+             (rc = read_ulong(s->ea.mem.seg, truncate_ea(s->ea.mem.off + 2),
+                              &base, mode_64bit() ? 8 : 4, ctxt, ops)) )
+            goto done;
+        generate_exception_if(!is_canonical_address(base), X86_EXC_GP, 0);
+        sreg.base = base;
+        sreg.limit = limit;
+        if ( !mode_64bit() && s->op_bytes == 2 )
+            sreg.base &= 0xffffff;
+        if ( (rc = ops->write_segment(seg, &sreg, ctxt)) )
+            goto done;
+        break;
+
+    case GRP7_ALL(4): /* smsw */
+        generate_exception_if(umip_active(ctxt, ops), X86_EXC_GP, 0);
+        if ( s->ea.type == OP_MEM )
+        {
+            fail_if(!ops->write);
+            s->desc |= Mov; /* force writeback */
+            s->ea.bytes = 2;
+        }
+        else
+            s->ea.bytes = s->op_bytes;
+        *dst = s->ea;
+        fail_if(ops->read_cr == NULL);
+        if ( (rc = ops->read_cr(0, &dst->val, ctxt)) )
+            goto done;
+        break;
+
+    case GRP7_ALL(6): /* lmsw */
+        fail_if(ops->read_cr == NULL);
+        fail_if(ops->write_cr == NULL);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        if ( (rc = ops->read_cr(0, &cr0, ctxt)) )
+            goto done;
+        if ( s->ea.type == OP_REG )
+            cr0w = *s->ea.reg;
+        else if ( (rc = read_ulong(s->ea.mem.seg, s->ea.mem.off,
+                                   &cr0w, 2, ctxt, ops)) )
+            goto done;
+        /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
+        cr0 = (cr0 & ~0xe) | (cr0w & 0xf);
+        if ( (rc = ops->write_cr(0, cr0, ctxt)) )
+            goto done;
+        break;
+
+    case GRP7_MEM(7): /* invlpg */
+        ASSERT(s->ea.type == OP_MEM);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        fail_if(!ops->tlb_op);
+        if ( (rc = ops->tlb_op(x86emul_invlpg, s->ea.mem.off, s->ea.mem.seg,
+                               ctxt)) != X86EMUL_OKAY )
+            goto done;
+        break;
+
+#undef GRP7_ALL
+#undef GRP7_MEM
+#undef _GRP7
+
+    default:
+        return X86EMUL_UNIMPLEMENTED;
+    }
+
+    rc = X86EMUL_OKAY;
+
+ done:
+    return rc;
+}
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -0,0 +1 @@
+obj-y += 0f01.o
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -0,0 +1,531 @@
+/******************************************************************************
+ * private.h - interface between x86_emulate.c and its helpers
+ *
+ * Copyright (c) 2005-2007 Keir Fraser
+ * Copyright (c) 2005-2007 XenSource Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifdef __XEN__
+
+# include <xen/kernel.h>
+# include <asm/msr-index.h>
+# include <asm/x86_emulate.h>
+
+# ifndef CONFIG_HVM
+#  define X86EMUL_NO_FPU
+#  define X86EMUL_NO_MMX
+#  define X86EMUL_NO_SIMD
+# endif
+
+#else /* !__XEN__ */
+# include "x86-emulate.h"
+#endif
+
+#ifdef __i386__
+# define mode_64bit() false
+# define r(name) e ## name
+#else
+# define mode_64bit() (ctxt->addr_size == 64)
+# define r(name) r ## name
+#endif
+
+/* Operand sizes: 8-bit operands or specified/overridden size. */
+#define ByteOp      (1<<0) /* 8-bit operands. */
+/* Destination operand type. */
+#define DstNone     (0<<1) /* No destination operand. */
+#define DstImplicit (0<<1) /* Destination operand is implicit in the opcode. */
+#define DstBitBase  (1<<1) /* Memory operand, bit string. */
+#define DstReg      (2<<1) /* Register operand. */
+#define DstEax      DstReg /* Register EAX (aka DstReg with no ModRM) */
+#define DstMem      (3<<1) /* Memory operand. */
+#define DstMask     (3<<1)
+/* Source operand type. */
+#define SrcNone     (0<<3) /* No source operand. */
+#define SrcImplicit (0<<3) /* Source operand is implicit in the opcode. */
+#define SrcReg      (1<<3) /* Register operand. */
+#define SrcEax      SrcReg /* Register EAX (aka SrcReg with no ModRM) */
+#define SrcMem      (2<<3) /* Memory operand. */
+#define SrcMem16    (3<<3) /* Memory operand (16-bit). */
+#define SrcImm      (4<<3) /* Immediate operand. */
+#define SrcImmByte  (5<<3) /* 8-bit sign-extended immediate operand. */
+#define SrcImm16    (6<<3) /* 16-bit zero-extended immediate operand. */
+#define SrcMask     (7<<3)
+/* Generic ModRM decode. */
+#define ModRM       (1<<6)
+/* vSIB addressing mode (0f38 extension opcodes only), aliasing ModRM. */
+#define vSIB        (1<<6)
+/* Destination is only written; never read. */
+#define Mov         (1<<7)
+/* VEX/EVEX (SIMD only): 2nd source operand unused (must be all ones) */
+#define TwoOp       Mov
+/* All operands are implicit in the opcode. */
+#define ImplicitOps (DstImplicit|SrcImplicit)
+
+typedef uint8_t opcode_desc_t;
+
+/* Type, address-of, and value of an instruction's operand. */
+struct operand {
+    enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
+    unsigned int bytes;
+
+    /* Operand value. */
+    unsigned long val;
+
+    /* Original operand value. */
+    unsigned long orig_val;
+
+    /* OP_REG: Pointer to register field. */
+    unsigned long *reg;
+
+    /* OP_MEM: Segment and offset. */
+    struct {
+        enum x86_segment seg;
+        unsigned long    off;
+    } mem;
+};
+
+#define REX_PREFIX 0x40
+#define REX_B 0x01
+#define REX_X 0x02
+#define REX_R 0x04
+#define REX_W 0x08
+
+enum simd_opsize {
+    simd_none,
+
+    /*
+     * Ordinary packed integers:
+     * - 64 bits without prefix 66 (MMX)
+     * - 128 bits with prefix 66 (SSEn)
+     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
+     */
+    simd_packed_int,
+
+    /*
+     * Ordinary packed/scalar floating point:
+     * - 128 bits without prefix or with prefix 66 (SSEn)
+     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
+     * - 32 bits with prefix F3 (scalar single)
+     * - 64 bits with prefix F2 (scalar double)
+    simd_any_fp,
+
+    /*
+     * Packed floating point:
+     * - 128 bits without prefix or with prefix 66 (SSEn)
+     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
+     */
+    simd_packed_fp,
+
+    /*
+     * Single precision packed/scalar floating point:
+     * - 128 bits without prefix (SSEn)
+     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
+     * - 32 bits with prefix F3 (scalar)
+     */
+    simd_single_fp,
+
+    /*
+     * Scalar floating point:
+     * - 32 bits with low opcode bit clear (scalar single)
+     * - 64 bits with low opcode bit set (scalar double)
+     */
+    simd_scalar_opc,
+
+    /*
+     * Scalar floating point:
+     * - 32/64 bits depending on VEX.W/EVEX.W
+     */
+    simd_scalar_vexw,
+
+    /*
+     * 128 bits of integer or floating point data, with no further
+     * formatting information, or with it encoded by EVEX.W.
+     */
+    simd_128,
+
+    /*
+     * 256 bits of integer or floating point data, with formatting
+     * encoded by EVEX.W.
+     */
+    simd_256,
+
+    /* Operand size encoded in non-standard way. */
+    simd_other
+};
+typedef uint8_t simd_opsize_t;
+
+#define vex_none 0
+
+enum vex_opcx {
+    vex_0f = vex_none + 1,
+    vex_0f38,
+    vex_0f3a,
+};
+
+enum vex_pfx {
+    vex_66 = vex_none + 1,
+    vex_f3,
+    vex_f2
+};
+
+union vex {
+    uint8_t raw[2];
+    struct {             /* SDM names */
+        uint8_t opcx:5;  /* mmmmm */
+        uint8_t b:1;     /* B */
+        uint8_t x:1;     /* X */
+        uint8_t r:1;     /* R */
+        uint8_t pfx:2;   /* pp */
+        uint8_t l:1;     /* L */
+        uint8_t reg:4;   /* vvvv */
+        uint8_t w:1;     /* W */
+    };
+};
+
+union evex {
+    uint8_t raw[3];
+    struct {             /* SDM names */
+        uint8_t opcx:2;  /* mm */
+        uint8_t mbz:2;
+        uint8_t R:1;     /* R' */
+        uint8_t b:1;     /* B */
+        uint8_t x:1;     /* X */
+        uint8_t r:1;     /* R */
+        uint8_t pfx:2;   /* pp */
+        uint8_t mbs:1;
+        uint8_t reg:4;   /* vvvv */
+        uint8_t w:1;     /* W */
+        uint8_t opmsk:3; /* aaa */
+        uint8_t RX:1;    /* V' */
+        uint8_t brs:1;   /* b */
+        uint8_t lr:2;    /* L'L */
+        uint8_t z:1;     /* z */
+    };
+};
+
+struct x86_emulate_state {
+    unsigned int op_bytes, ad_bytes;
+
+    enum {
+        ext_none = vex_none,
+        ext_0f   = vex_0f,
+        ext_0f38 = vex_0f38,
+        ext_0f3a = vex_0f3a,
+        /*
+         * For XOP use values such that the respective instruction field
+         * can be used without adjustment.
+         */
+        ext_8f08 = 8,
+        ext_8f09,
+        ext_8f0a,
+    } ext;
+    enum {
+        rmw_NONE,
+        rmw_adc,
+        rmw_add,
+        rmw_and,
+        rmw_btc,
+        rmw_btr,
+        rmw_bts,
+        rmw_dec,
+        rmw_inc,
+        rmw_neg,
+        rmw_not,
+        rmw_or,
+        rmw_rcl,
+        rmw_rcr,
+        rmw_rol,
+        rmw_ror,
+        rmw_sar,
+        rmw_sbb,
+        rmw_shl,
+        rmw_shld,
+        rmw_shr,
+        rmw_shrd,
+        rmw_sub,
+        rmw_xadd,
+        rmw_xchg,
+        rmw_xor,
+    } rmw;
+    enum {
+        blk_NONE,
+        blk_enqcmd,
+#ifndef X86EMUL_NO_FPU
+        blk_fld, /* FLDENV, FRSTOR */
+        blk_fst, /* FNSTENV, FNSAVE */
+#endif
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        blk_fxrstor,
+        blk_fxsave,
+#endif
+        blk_movdir,
+    } blk;
+    uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
+    uint8_t sib_index, sib_scale;
+    uint8_t rex_prefix;
+    bool lock_prefix;
+    bool not_64bit; /* Instruction not available in 64bit. */
+    bool fpu_ctrl;  /* Instruction is an FPU control one. */
+    opcode_desc_t desc;
+    union vex vex;
+    union evex evex;
+    enum simd_opsize simd_size;
+
+    /*
+     * Data operand effective address (usually computed from ModRM).
+     * Default is a memory operand relative to segment DS.
+     */
+    struct operand ea;
+
+    /* Immediate operand values, if any. Use otherwise unused fields. */
+#define imm1 ea.val
+#define imm2 ea.orig_val
+
+    unsigned long ip;
+    struct cpu_user_regs *regs;
+
+#ifndef NDEBUG
+    /*
+     * Track caller of x86_decode_insn() to spot missing as well as
+     * premature calls to x86_emulate_free_state().
+     */
+    void *caller;
+#endif
+};
+
+/*
+ * Externally visible return codes from x86_emulate() are non-negative.
+ * Use negative values for internal state change indicators from helpers
+ * to the main function.
+ */
+#define X86EMUL_rdtsc        (-1)
+
+/*
+ * These EFLAGS bits are restored from saved value during emulation, and
+ * any changes are written back to the saved value after emulation.
+ */
+#define EFLAGS_MASK (X86_EFLAGS_OF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
+                     X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
+
+/*
+ * These EFLAGS bits are modifiable (by POPF and IRET), possibly subject
+ * to further CPL and IOPL constraints.
+ */
+#define EFLAGS_MODIFIABLE (X86_EFLAGS_ID | X86_EFLAGS_AC | X86_EFLAGS_RF | \
+                           X86_EFLAGS_NT | X86_EFLAGS_IOPL | X86_EFLAGS_DF | \
+                           X86_EFLAGS_IF | X86_EFLAGS_TF | EFLAGS_MASK)
+
+#define truncate_word(ea, byte_width)           \
+({  unsigned long __ea = (ea);                  \
+    unsigned int _width = (byte_width);         \
+    ((_width == sizeof(unsigned long)) ? __ea : \
+     (__ea & ((1UL << (_width << 3)) - 1)));    \
+})
+#define truncate_ea(ea) truncate_word((ea), ad_bytes)
+
+#define fail_if(p)                                      \
+do {                                                    \
+    rc = (p) ? X86EMUL_UNHANDLEABLE : X86EMUL_OKAY;     \
+    if ( rc ) goto done;                                \
+} while (0)
+
+#define EXPECT(p)                                       \
+do {                                                    \
+    if ( unlikely(!(p)) )                               \
+    {                                                   \
+        ASSERT_UNREACHABLE();                           \
+        goto unhandleable;                              \
+    }                                                   \
+} while (0)
+
+static inline int mkec(uint8_t e, int32_t ec, ...)
+{
+    return (e < 32 && ((1u << e) & X86_EXC_HAVE_EC)) ? ec : X86_EVENT_NO_EC;
+}
+
+#define generate_exception_if(p, e, ec...)                                \
+({  if ( (p) ) {                                                          \
+        x86_emul_hw_exception(e, mkec(e, ##ec, 0), ctxt);                 \
+        rc = X86EMUL_EXCEPTION;                                           \
+        goto done;                                                        \
+    }                                                                     \
+})
+
+#define generate_exception(e, ec...) generate_exception_if(true, e, ##ec)
+
+static inline bool
+in_realmode(
+    struct x86_emulate_ctxt *ctxt,
+    const struct x86_emulate_ops *ops)
+{
+    unsigned long cr0;
+    int rc;
+
+    if ( ops->read_cr == NULL )
+        return 0;
+
+    rc = ops->read_cr(0, &cr0, ctxt);
+    return (!rc && !(cr0 & X86_CR0_PE));
+}
+
+static inline bool
+in_protmode(
+    struct x86_emulate_ctxt *ctxt,
+    const struct x86_emulate_ops *ops)
+{
+    return !(in_realmode(ctxt, ops) || (ctxt->regs->eflags & X86_EFLAGS_VM));
+}
+
+#define mode_ring0() ({                         \
+    int _cpl = x86emul_get_cpl(ctxt, ops);      \
+    fail_if(_cpl < 0);                          \
+    (_cpl == 0);                                \
+})
+
+#define vcpu_has_fpu()         (ctxt->cpuid->basic.fpu)
+#define vcpu_has_sep()         (ctxt->cpuid->basic.sep)
+#define vcpu_has_cx8()         (ctxt->cpuid->basic.cx8)
+#define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
+#define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
+#define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
+#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
+#define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
+#define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
+#define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
+#define vcpu_has_pclmulqdq()   (ctxt->cpuid->basic.pclmulqdq)
+#define vcpu_has_ssse3()       (ctxt->cpuid->basic.ssse3)
+#define vcpu_has_fma()         (ctxt->cpuid->basic.fma)
+#define vcpu_has_cx16()        (ctxt->cpuid->basic.cx16)
+#define vcpu_has_sse4_1()      (ctxt->cpuid->basic.sse4_1)
+#define vcpu_has_sse4_2()      (ctxt->cpuid->basic.sse4_2)
+#define vcpu_has_movbe()       (ctxt->cpuid->basic.movbe)
+#define vcpu_has_popcnt()      (ctxt->cpuid->basic.popcnt)
+#define vcpu_has_aesni()       (ctxt->cpuid->basic.aesni)
+#define vcpu_has_avx()         (ctxt->cpuid->basic.avx)
+#define vcpu_has_f16c()        (ctxt->cpuid->basic.f16c)
+#define vcpu_has_rdrand()      (ctxt->cpuid->basic.rdrand)
+
+#define vcpu_has_mmxext()      (ctxt->cpuid->extd.mmxext || vcpu_has_sse())
+#define vcpu_has_3dnow_ext()   (ctxt->cpuid->extd._3dnowext)
+#define vcpu_has_3dnow()       (ctxt->cpuid->extd._3dnow)
+#define vcpu_has_lahf_lm()     (ctxt->cpuid->extd.lahf_lm)
+#define vcpu_has_cr8_legacy()  (ctxt->cpuid->extd.cr8_legacy)
+#define vcpu_has_lzcnt()       (ctxt->cpuid->extd.abm)
+#define vcpu_has_sse4a()       (ctxt->cpuid->extd.sse4a)
+#define vcpu_has_misalignsse() (ctxt->cpuid->extd.misalignsse)
+#define vcpu_has_xop()         (ctxt->cpuid->extd.xop)
+#define vcpu_has_fma4()        (ctxt->cpuid->extd.fma4)
+#define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
+#define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
+#define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
+
+#define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
+#define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
+#define vcpu_has_avx2()        (ctxt->cpuid->feat.avx2)
+#define vcpu_has_bmi2()        (ctxt->cpuid->feat.bmi2)
+#define vcpu_has_invpcid()     (ctxt->cpuid->feat.invpcid)
+#define vcpu_has_rtm()         (ctxt->cpuid->feat.rtm)
+#define vcpu_has_mpx()         (ctxt->cpuid->feat.mpx)
+#define vcpu_has_avx512f()     (ctxt->cpuid->feat.avx512f)
+#define vcpu_has_avx512dq()    (ctxt->cpuid->feat.avx512dq)
+#define vcpu_has_rdseed()      (ctxt->cpuid->feat.rdseed)
+#define vcpu_has_adx()         (ctxt->cpuid->feat.adx)
+#define vcpu_has_smap()        (ctxt->cpuid->feat.smap)
+#define vcpu_has_avx512_ifma() (ctxt->cpuid->feat.avx512_ifma)
+#define vcpu_has_clflushopt()  (ctxt->cpuid->feat.clflushopt)
+#define vcpu_has_clwb()        (ctxt->cpuid->feat.clwb)
+#define vcpu_has_avx512pf()    (ctxt->cpuid->feat.avx512pf)
+#define vcpu_has_avx512er()    (ctxt->cpuid->feat.avx512er)
+#define vcpu_has_avx512cd()    (ctxt->cpuid->feat.avx512cd)
+#define vcpu_has_sha()         (ctxt->cpuid->feat.sha)
+#define vcpu_has_avx512bw()    (ctxt->cpuid->feat.avx512bw)
+#define vcpu_has_avx512vl()    (ctxt->cpuid->feat.avx512vl)
+#define vcpu_has_avx512_vbmi() (ctxt->cpuid->feat.avx512_vbmi)
+#define vcpu_has_avx512_vbmi2() (ctxt->cpuid->feat.avx512_vbmi2)
+#define vcpu_has_gfni()        (ctxt->cpuid->feat.gfni)
+#define vcpu_has_vaes()        (ctxt->cpuid->feat.vaes)
+#define vcpu_has_vpclmulqdq()  (ctxt->cpuid->feat.vpclmulqdq)
+#define vcpu_has_avx512_vnni() (ctxt->cpuid->feat.avx512_vnni)
+#define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
+#define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
+#define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
+#define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
+#define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
+#define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
+#define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
+#define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
+#define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
+#define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
+#define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
+#define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
+#define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
+
+#define vcpu_must_have(feat) \
+    generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
+
+#ifdef __XEN__
+/*
+ * Note the difference between vcpu_must_have(<feature>) and
+ * host_and_vcpu_must_have(<feature>): The latter needs to be used when
+ * emulation code is using the same instruction class for carrying out
+ * the actual operation.
+ */
+# define host_and_vcpu_must_have(feat) ({ \
+    generate_exception_if(!cpu_has_##feat, X86_EXC_UD); \
+    vcpu_must_have(feat); \
+})
+#else
+/*
+ * For the test harness both are fine to be used interchangeably, i.e.
+ * features known to always be available (e.g. SSE/SSE2) to (64-bit) Xen
+ * may be checked for by just vcpu_must_have().
+ */
+# define host_and_vcpu_must_have(feat) vcpu_must_have(feat)
+#endif
+
+int x86emul_get_cpl(struct x86_emulate_ctxt *ctxt,
+                    const struct x86_emulate_ops *ops);
+
+int x86emul_0f01(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops);
+
+static inline bool umip_active(struct x86_emulate_ctxt *ctxt,
+                               const struct x86_emulate_ops *ops)
+{
+    unsigned long cr4;
+
+    /* Intentionally not using mode_ring0() here to avoid its fail_if(). */
+    return x86emul_get_cpl(ctxt, ops) > 0 &&
+           ops->read_cr && ops->read_cr(4, &cr4, ctxt) == X86EMUL_OKAY &&
+           (cr4 & X86_CR4_UMIP);
+}
+
+/* Compatibility function: read guest memory, zero-extend result to a ulong. */
+static inline int read_ulong(enum x86_segment seg,
+                             unsigned long offset,
+                             unsigned long *val,
+                             unsigned int bytes,
+                             struct x86_emulate_ctxt *ctxt,
+                             const struct x86_emulate_ops *ops)
+{
+    *val = 0;
+    return ops->read(seg, offset, val, bytes, ctxt);
+}
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -20,39 +20,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-/* Operand sizes: 8-bit operands or specified/overridden size. */
-#define ByteOp      (1<<0) /* 8-bit operands. */
-/* Destination operand type. */
-#define DstNone     (0<<1) /* No destination operand. */
-#define DstImplicit (0<<1) /* Destination operand is implicit in the opcode. */
-#define DstBitBase  (1<<1) /* Memory operand, bit string. */
-#define DstReg      (2<<1) /* Register operand. */
-#define DstEax      DstReg /* Register EAX (aka DstReg with no ModRM) */
-#define DstMem      (3<<1) /* Memory operand. */
-#define DstMask     (3<<1)
-/* Source operand type. */
-#define SrcNone     (0<<3) /* No source operand. */
-#define SrcImplicit (0<<3) /* Source operand is implicit in the opcode. */
-#define SrcReg      (1<<3) /* Register operand. */
-#define SrcEax      SrcReg /* Register EAX (aka SrcReg with no ModRM) */
-#define SrcMem      (2<<3) /* Memory operand. */
-#define SrcMem16    (3<<3) /* Memory operand (16-bit). */
-#define SrcImm      (4<<3) /* Immediate operand. */
-#define SrcImmByte  (5<<3) /* 8-bit sign-extended immediate operand. */
-#define SrcImm16    (6<<3) /* 16-bit zero-extended immediate operand. */
-#define SrcMask     (7<<3)
-/* Generic ModRM decode. */
-#define ModRM       (1<<6)
-/* vSIB addressing mode (0f38 extension opcodes only), aliasing ModRM. */
-#define vSIB        (1<<6)
-/* Destination is only written; never read. */
-#define Mov         (1<<7)
-/* VEX/EVEX (SIMD only): 2nd source operand unused (must be all ones) */
-#define TwoOp       Mov
-/* All operands are implicit in the opcode. */
-#define ImplicitOps (DstImplicit|SrcImplicit)
-
-typedef uint8_t opcode_desc_t;
+#include "private.h"
 
 static const opcode_desc_t opcode_table[256] = {
     /* 0x00 - 0x07 */
@@ -184,71 +152,6 @@ static const opcode_desc_t opcode_table[
     ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|ModRM
 };
 
-enum simd_opsize {
-    simd_none,
-
-    /*
-     * Ordinary packed integers:
-     * - 64 bits without prefix 66 (MMX)
-     * - 128 bits with prefix 66 (SSEn)
-     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
-     */
-    simd_packed_int,
-
-    /*
-     * Ordinary packed/scalar floating point:
-     * - 128 bits without prefix or with prefix 66 (SSEn)
-     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
-     * - 32 bits with prefix F3 (scalar single)
-     * - 64 bits with prefix F2 (scalar doubgle)
-     */
-    simd_any_fp,
-
-    /*
-     * Packed floating point:
-     * - 128 bits without prefix or with prefix 66 (SSEn)
-     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
-     */
-    simd_packed_fp,
-
-    /*
-     * Single precision packed/scalar floating point:
-     * - 128 bits without prefix (SSEn)
-     * - 128/256/512 bits depending on VEX.L/EVEX.LR (AVX+)
-     * - 32 bits with prefix F3 (scalar)
-     */
-    simd_single_fp,
-
-    /*
-     * Scalar floating point:
-     * - 32 bits with low opcode bit clear (scalar single)
-     * - 64 bits with low opcode bit set (scalar double)
-     */
-    simd_scalar_opc,
-
-    /*
-     * Scalar floating point:
-     * - 32/64 bits depending on VEX.W/EVEX.W
-     */
-    simd_scalar_vexw,
-
-    /*
-     * 128 bits of integer or floating point data, with no further
-     * formatting information, or with it encoded by EVEX.W.
-     */
-    simd_128,
-
-    /*
-     * 256 bits of integer or floating point data, with formatting
-     * encoded by EVEX.W.
-     */
-    simd_256,
-
-    /* Operand size encoded in non-standard way. */
-    simd_other
-};
-typedef uint8_t simd_opsize_t;
-
 enum disp8scale {
     /* Values 0 ... 4 are explicit sizes. */
     d8s_bw = 5,
@@ -670,45 +573,11 @@ static const struct ext8f09_table {
     [0xe1 ... 0xe3] = { .simd_size = simd_packed_int, .two_op = 1 },
 };
 
-#define REX_PREFIX 0x40
-#define REX_B 0x01
-#define REX_X 0x02
-#define REX_R 0x04
-#define REX_W 0x08
-
-#define vex_none 0
-
-enum vex_opcx {
-    vex_0f = vex_none + 1,
-    vex_0f38,
-    vex_0f3a,
-};
-
-enum vex_pfx {
-    vex_66 = vex_none + 1,
-    vex_f3,
-    vex_f2
-};
-
 #define VEX_PREFIX_DOUBLE_MASK 0x1
 #define VEX_PREFIX_SCALAR_MASK 0x2
 
 static const uint8_t sse_prefix[] = { 0x66, 0xf3, 0xf2 };
 
-union vex {
-    uint8_t raw[2];
-    struct {             /* SDM names */
-        uint8_t opcx:5;  /* mmmmm */
-        uint8_t b:1;     /* B */
-        uint8_t x:1;     /* X */
-        uint8_t r:1;     /* R */
-        uint8_t pfx:2;   /* pp */
-        uint8_t l:1;     /* L */
-        uint8_t reg:4;   /* vvvv */
-        uint8_t w:1;     /* W */
-    };
-};
-
 #ifdef __x86_64__
 # define PFX2 REX_PREFIX
 #else
@@ -748,27 +617,6 @@ union vex {
     } \
 } while (0)
 
-union evex {
-    uint8_t raw[3];
-    struct {             /* SDM names */
-        uint8_t opcx:2;  /* mm */
-        uint8_t mbz:2;
-        uint8_t R:1;     /* R' */
-        uint8_t b:1;     /* B */
-        uint8_t x:1;     /* X */
-        uint8_t r:1;     /* R */
-        uint8_t pfx:2;   /* pp */
-        uint8_t mbs:1;
-        uint8_t reg:4;   /* vvvv */
-        uint8_t w:1;     /* W */
-        uint8_t opmsk:3; /* aaa */
-        uint8_t RX:1;    /* V' */
-        uint8_t brs:1;   /* b */
-        uint8_t lr:2;    /* L'L */
-        uint8_t z:1;     /* z */
-    };
-};
-
 #define EVEX_PFX_BYTES 4
 #define init_evex(stub) ({ \
     uint8_t *buf_ = get_stub(stub); \
@@ -789,118 +637,6 @@ union evex {
 #define repe_prefix()  (vex.pfx == vex_f3)
 #define repne_prefix() (vex.pfx == vex_f2)
 
-/* Type, address-of, and value of an instruction's operand. */
-struct operand {
-    enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
-    unsigned int bytes;
-
-    /* Operand value. */
-    unsigned long val;
-
-    /* Original operand value. */
-    unsigned long orig_val;
-
-    /* OP_REG: Pointer to register field. */
-    unsigned long *reg;
-
-    /* OP_MEM: Segment and offset. */
-    struct {
-        enum x86_segment seg;
-        unsigned long    off;
-    } mem;
-};
-
-struct x86_emulate_state {
-    unsigned int op_bytes, ad_bytes;
-
-    enum {
-        ext_none = vex_none,
-        ext_0f   = vex_0f,
-        ext_0f38 = vex_0f38,
-        ext_0f3a = vex_0f3a,
-        /*
-         * For XOP use values such that the respective instruction field
-         * can be used without adjustment.
-         */
-        ext_8f08 = 8,
-        ext_8f09,
-        ext_8f0a,
-    } ext;
-    enum {
-        rmw_NONE,
-        rmw_adc,
-        rmw_add,
-        rmw_and,
-        rmw_btc,
-        rmw_btr,
-        rmw_bts,
-        rmw_dec,
-        rmw_inc,
-        rmw_neg,
-        rmw_not,
-        rmw_or,
-        rmw_rcl,
-        rmw_rcr,
-        rmw_rol,
-        rmw_ror,
-        rmw_sar,
-        rmw_sbb,
-        rmw_shl,
-        rmw_shld,
-        rmw_shr,
-        rmw_shrd,
-        rmw_sub,
-        rmw_xadd,
-        rmw_xchg,
-        rmw_xor,
-    } rmw;
-    enum {
-        blk_NONE,
-        blk_enqcmd,
-#ifndef X86EMUL_NO_FPU
-        blk_fld, /* FLDENV, FRSTOR */
-        blk_fst, /* FNSTENV, FNSAVE */
-#endif
-#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
-    !defined(X86EMUL_NO_SIMD)
-        blk_fxrstor,
-        blk_fxsave,
-#endif
-        blk_movdir,
-    } blk;
-    uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
-    uint8_t sib_index, sib_scale;
-    uint8_t rex_prefix;
-    bool lock_prefix;
-    bool not_64bit; /* Instruction not available in 64bit. */
-    bool fpu_ctrl;  /* Instruction is an FPU control one. */
-    opcode_desc_t desc;
-    union vex vex;
-    union evex evex;
-    enum simd_opsize simd_size;
-
-    /*
-     * Data operand effective address (usually computed from ModRM).
-     * Default is a memory operand relative to segment DS.
-     */
-    struct operand ea;
-
-    /* Immediate operand values, if any. Use otherwise unused fields. */
-#define imm1 ea.val
-#define imm2 ea.orig_val
-
-    unsigned long ip;
-    struct cpu_user_regs *regs;
-
-#ifndef NDEBUG
-    /*
-     * Track caller of x86_decode_insn() to spot missing as well as
-     * premature calls to x86_emulate_free_state().
-     */
-    void *caller;
-#endif
-};
-
 #ifdef __x86_64__
 #define PTR_POISON ((void *)0x8086000000008086UL) /* non-canonical */
 #else
@@ -1049,21 +785,6 @@ struct x86_fxsr {
 #define _BYTES_PER_LONG "4"
 #endif
 
-/*
- * These EFLAGS bits are restored from saved value during emulation, and
- * any changes are written back to the saved value after emulation.
- */
-#define EFLAGS_MASK (X86_EFLAGS_OF | X86_EFLAGS_SF | X86_EFLAGS_ZF | \
-                     X86_EFLAGS_AF | X86_EFLAGS_PF | X86_EFLAGS_CF)
-
-/*
- * These EFLAGS bits are modifiable (by POPF and IRET), possibly subject
- * to further CPL and IOPL constraints.
- */
-#define EFLAGS_MODIFIABLE (X86_EFLAGS_ID | X86_EFLAGS_AC | X86_EFLAGS_RF | \
-                           X86_EFLAGS_NT | X86_EFLAGS_IOPL | X86_EFLAGS_DF | \
-                           X86_EFLAGS_IF | X86_EFLAGS_TF | EFLAGS_MASK)
-
 /* Before executing instruction: restore necessary bits in EFLAGS. */
 #define _PRE_EFLAGS(_sav, _msk, _tmp)                           \
 /* EFLAGS = (_sav & _msk) | (EFLAGS & ~_msk); _sav &= ~_msk; */ \
@@ -1223,36 +944,6 @@ do{ asm volatile (
 #define __emulate_1op_8byte(op, dst, eflags, extra...)
 #endif /* __i386__ */
 
-#define fail_if(p)                                      \
-do {                                                    \
-    rc = (p) ? X86EMUL_UNHANDLEABLE : X86EMUL_OKAY;     \
-    if ( rc ) goto done;                                \
-} while (0)
-
-#define EXPECT(p)                                       \
-do {                                                    \
-    if ( unlikely(!(p)) )                               \
-    {                                                   \
-        ASSERT_UNREACHABLE();                           \
-        goto unhandleable;                              \
-    }                                                   \
-} while (0)
-
-static inline int mkec(uint8_t e, int32_t ec, ...)
-{
-    return (e < 32 && ((1u << e) & EXC_HAS_EC)) ? ec : X86_EVENT_NO_EC;
-}
-
-#define generate_exception_if(p, e, ec...)                                \
-({  if ( (p) ) {                                                          \
-        x86_emul_hw_exception(e, mkec(e, ##ec, 0), ctxt);                 \
-        rc = X86EMUL_EXCEPTION;                                           \
-        goto done;                                                        \
-    }                                                                     \
-})
-
-#define generate_exception(e, ec...) generate_exception_if(true, e, ##ec)
-
 #ifdef __XEN__
 # define invoke_stub(pre, post, constraints...) do {                    \
     stub_exn.info = (union stub_exception_token) { .raw = ~0 };         \
@@ -1301,20 +992,6 @@ static inline int mkec(uint8_t e, int32_
 })
 #define insn_fetch_type(_type) ((_type)insn_fetch_bytes(sizeof(_type)))
 
-#define truncate_word(ea, byte_width)           \
-({  unsigned long __ea = (ea);                  \
-    unsigned int _width = (byte_width);         \
-    ((_width == sizeof(unsigned long)) ? __ea : \
-     (__ea & ((1UL << (_width << 3)) - 1)));    \
-})
-#define truncate_ea(ea) truncate_word((ea), ad_bytes)
-
-#ifdef __x86_64__
-# define mode_64bit() (ctxt->addr_size == 64)
-#else
-# define mode_64bit() false
-#endif
-
 /*
  * Given byte has even parity (even number of 1s)? SDM Vol. 1 Sec. 3.4.3.1,
  * "Status Flags": EFLAGS.PF reflects parity of least-sig. byte of result only.
@@ -1655,19 +1332,6 @@ static void __put_rep_prefix(
     ea__;                                                                 \
 })
 
-/* Compatibility function: read guest memory, zero-extend result to a ulong. */
-static int read_ulong(
-        enum x86_segment seg,
-        unsigned long offset,
-        unsigned long *val,
-        unsigned int bytes,
-        struct x86_emulate_ctxt *ctxt,
-        const struct x86_emulate_ops *ops)
-{
-    *val = 0;
-    return ops->read(seg, offset, val, bytes, ctxt);
-}
-
 /*
  * Unsigned multiplication with double-word result.
  * IN:  Multiplicand=m[0], Multiplier=m[1]
@@ -1792,10 +1456,8 @@ test_cc(
     return (!!rc ^ (condition & 1));
 }
 
-static int
-get_cpl(
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops  *ops)
+int x86emul_get_cpl(struct x86_emulate_ctxt *ctxt,
+                    const struct x86_emulate_ops *ops)
 {
     struct segment_register reg;
 
@@ -1814,17 +1476,12 @@ _mode_iopl(
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops  *ops)
 {
-    int cpl = get_cpl(ctxt, ops);
+    int cpl = x86emul_get_cpl(ctxt, ops);
     if ( cpl == -1 )
         return -1;
     return cpl <= MASK_EXTR(ctxt->regs->eflags, X86_EFLAGS_IOPL);
 }
 
-#define mode_ring0() ({                         \
-    int _cpl = get_cpl(ctxt, ops);              \
-    fail_if(_cpl < 0);                          \
-    (_cpl == 0);                                \
-})
 #define mode_iopl() ({                          \
     int _iopl = _mode_iopl(ctxt, ops);          \
     fail_if(_iopl < 0);                         \
@@ -1832,7 +1489,7 @@ _mode_iopl(
 })
 #define mode_vif() ({                                        \
     cr4 = 0;                                                 \
-    if ( ops->read_cr && get_cpl(ctxt, ops) == 3 )           \
+    if ( ops->read_cr && x86emul_get_cpl(ctxt, ops) == 3 )   \
     {                                                        \
         rc = ops->read_cr(4, &cr4, ctxt);                    \
         if ( rc != X86EMUL_OKAY ) goto done;                 \
@@ -1900,29 +1557,6 @@ static int ioport_access_check(
 }
 
 static bool
-in_realmode(
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops  *ops)
-{
-    unsigned long cr0;
-    int rc;
-
-    if ( ops->read_cr == NULL )
-        return 0;
-
-    rc = ops->read_cr(0, &cr0, ctxt);
-    return (!rc && !(cr0 & X86_CR0_PE));
-}
-
-static bool
-in_protmode(
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops  *ops)
-{
-    return !(in_realmode(ctxt, ops) || (ctxt->regs->eflags & X86_EFLAGS_VM));
-}
-
-static bool
 _amd_like(const struct cpuid_policy *cp)
 {
     return cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON);
@@ -1934,107 +1568,6 @@ amd_like(const struct x86_emulate_ctxt *
     return _amd_like(ctxt->cpuid);
 }
 
-#define vcpu_has_fpu()         (ctxt->cpuid->basic.fpu)
-#define vcpu_has_sep()         (ctxt->cpuid->basic.sep)
-#define vcpu_has_cx8()         (ctxt->cpuid->basic.cx8)
-#define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
-#define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
-#define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
-#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
-#define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
-#define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
-#define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
-#define vcpu_has_pclmulqdq()   (ctxt->cpuid->basic.pclmulqdq)
-#define vcpu_has_ssse3()       (ctxt->cpuid->basic.ssse3)
-#define vcpu_has_fma()         (ctxt->cpuid->basic.fma)
-#define vcpu_has_cx16()        (ctxt->cpuid->basic.cx16)
-#define vcpu_has_sse4_1()      (ctxt->cpuid->basic.sse4_1)
-#define vcpu_has_sse4_2()      (ctxt->cpuid->basic.sse4_2)
-#define vcpu_has_movbe()       (ctxt->cpuid->basic.movbe)
-#define vcpu_has_popcnt()      (ctxt->cpuid->basic.popcnt)
-#define vcpu_has_aesni()       (ctxt->cpuid->basic.aesni)
-#define vcpu_has_avx()         (ctxt->cpuid->basic.avx)
-#define vcpu_has_f16c()        (ctxt->cpuid->basic.f16c)
-#define vcpu_has_rdrand()      (ctxt->cpuid->basic.rdrand)
-
-#define vcpu_has_mmxext()      (ctxt->cpuid->extd.mmxext || vcpu_has_sse())
-#define vcpu_has_3dnow_ext()   (ctxt->cpuid->extd._3dnowext)
-#define vcpu_has_3dnow()       (ctxt->cpuid->extd._3dnow)
-#define vcpu_has_lahf_lm()     (ctxt->cpuid->extd.lahf_lm)
-#define vcpu_has_cr8_legacy()  (ctxt->cpuid->extd.cr8_legacy)
-#define vcpu_has_lzcnt()       (ctxt->cpuid->extd.abm)
-#define vcpu_has_sse4a()       (ctxt->cpuid->extd.sse4a)
-#define vcpu_has_misalignsse() (ctxt->cpuid->extd.misalignsse)
-#define vcpu_has_xop()         (ctxt->cpuid->extd.xop)
-#define vcpu_has_fma4()        (ctxt->cpuid->extd.fma4)
-#define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
-#define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
-#define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
-
-#define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
-#define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
-#define vcpu_has_avx2()        (ctxt->cpuid->feat.avx2)
-#define vcpu_has_bmi2()        (ctxt->cpuid->feat.bmi2)
-#define vcpu_has_invpcid()     (ctxt->cpuid->feat.invpcid)
-#define vcpu_has_rtm()         (ctxt->cpuid->feat.rtm)
-#define vcpu_has_mpx()         (ctxt->cpuid->feat.mpx)
-#define vcpu_has_avx512f()     (ctxt->cpuid->feat.avx512f)
-#define vcpu_has_avx512dq()    (ctxt->cpuid->feat.avx512dq)
-#define vcpu_has_rdseed()      (ctxt->cpuid->feat.rdseed)
-#define vcpu_has_adx()         (ctxt->cpuid->feat.adx)
-#define vcpu_has_smap()        (ctxt->cpuid->feat.smap)
-#define vcpu_has_avx512_ifma() (ctxt->cpuid->feat.avx512_ifma)
-#define vcpu_has_clflushopt()  (ctxt->cpuid->feat.clflushopt)
-#define vcpu_has_clwb()        (ctxt->cpuid->feat.clwb)
-#define vcpu_has_avx512pf()    (ctxt->cpuid->feat.avx512pf)
-#define vcpu_has_avx512er()    (ctxt->cpuid->feat.avx512er)
-#define vcpu_has_avx512cd()    (ctxt->cpuid->feat.avx512cd)
-#define vcpu_has_sha()         (ctxt->cpuid->feat.sha)
-#define vcpu_has_avx512bw()    (ctxt->cpuid->feat.avx512bw)
-#define vcpu_has_avx512vl()    (ctxt->cpuid->feat.avx512vl)
-#define vcpu_has_avx512_vbmi() (ctxt->cpuid->feat.avx512_vbmi)
-#define vcpu_has_avx512_vbmi2() (ctxt->cpuid->feat.avx512_vbmi2)
-#define vcpu_has_gfni()        (ctxt->cpuid->feat.gfni)
-#define vcpu_has_vaes()        (ctxt->cpuid->feat.vaes)
-#define vcpu_has_vpclmulqdq()  (ctxt->cpuid->feat.vpclmulqdq)
-#define vcpu_has_avx512_vnni() (ctxt->cpuid->feat.avx512_vnni)
-#define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
-#define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
-#define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
-#define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
-#define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
-#define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
-#define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
-#define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
-#define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
-#define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
-#define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
-#define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
-#define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
-
-#define vcpu_must_have(feat) \
-    generate_exception_if(!vcpu_has_##feat(), EXC_UD)
-
-#ifdef __XEN__
-/*
- * Note the difference between vcpu_must_have(<feature>) and
- * host_and_vcpu_must_have(<feature>): The latter needs to be used when
- * emulation code is using the same instruction class for carrying out
- * the actual operation.
- */
-#define host_and_vcpu_must_have(feat) ({ \
-    generate_exception_if(!cpu_has_##feat, EXC_UD); \
-    vcpu_must_have(feat); \
-})
-#else
-/*
- * For the test harness both are fine to be used interchangeably, i.e.
- * features known to always be available (e.g. SSE/SSE2) to (64-bit) Xen
- * may be checked for by just vcpu_must_have().
- */
-#define host_and_vcpu_must_have(feat) vcpu_must_have(feat)
-#endif
-
 /* Initialise output state in x86_emulate_ctxt */
 static void init_context(struct x86_emulate_ctxt *ctxt)
 {
@@ -2081,7 +1614,7 @@ protmode_load_seg(
     enum x86_segment sel_seg = (sel & 4) ? x86_seg_ldtr : x86_seg_gdtr;
     struct { uint32_t a, b; } desc, desc_hi = {};
     uint8_t dpl, rpl;
-    int cpl = get_cpl(ctxt, ops);
+    int cpl = x86emul_get_cpl(ctxt, ops);
     uint32_t a_flag = 0x100;
     int rc, fault_type = EXC_GP;
 
@@ -2481,17 +2014,6 @@ static bool is_branch_step(struct x86_em
            (debugctl & IA32_DEBUGCTLMSR_BTF);
 }
 
-static bool umip_active(struct x86_emulate_ctxt *ctxt,
-                        const struct x86_emulate_ops *ops)
-{
-    unsigned long cr4;
-
-    /* Intentionally not using mode_ring0() here to avoid its fail_if(). */
-    return get_cpl(ctxt, ops) > 0 &&
-           ops->read_cr && ops->read_cr(4, &cr4, ctxt) == X86EMUL_OKAY &&
-           (cr4 & X86_CR4_UMIP);
-}
-
 static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
                        const struct x86_emulate_ops *ops, enum vex_pfx pfx)
 {
@@ -5703,317 +5225,8 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x01): /* Grp7 */
-    {
-        unsigned long base, limit, cr0, cr0w;
-
-        seg = (modrm_reg & 1) ? x86_seg_idtr : x86_seg_gdtr;
-
-        switch( modrm )
-        {
-        case 0xca: /* clac */
-        case 0xcb: /* stac */
-            vcpu_must_have(smap);
-            generate_exception_if(vex.pfx || !mode_ring0(), EXC_UD);
-
-            _regs.eflags &= ~X86_EFLAGS_AC;
-            if ( modrm == 0xcb )
-                _regs.eflags |= X86_EFLAGS_AC;
-            break;
-
-        case 0xd0: /* xgetbv */
-            generate_exception_if(vex.pfx, EXC_UD);
-            if ( !ops->read_cr || !ops->read_xcr ||
-                 ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                cr4 = 0;
-            generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), EXC_UD);
-            rc = ops->read_xcr(_regs.ecx, &msr_val, ctxt);
-            if ( rc != X86EMUL_OKAY )
-                goto done;
-            _regs.r(ax) = (uint32_t)msr_val;
-            _regs.r(dx) = msr_val >> 32;
-            break;
-
-        case 0xd1: /* xsetbv */
-            generate_exception_if(vex.pfx, EXC_UD);
-            if ( !ops->read_cr || !ops->write_xcr ||
-                 ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                cr4 = 0;
-            generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), EXC_UD);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            rc = ops->write_xcr(_regs.ecx,
-                                _regs.eax | ((uint64_t)_regs.edx << 32), ctxt);
-            if ( rc != X86EMUL_OKAY )
-                goto done;
-            break;
-
-        case 0xd4: /* vmfunc */
-            generate_exception_if(vex.pfx, EXC_UD);
-            fail_if(!ops->vmfunc);
-            if ( (rc = ops->vmfunc(ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-
-        case 0xd5: /* xend */
-            generate_exception_if(vex.pfx, EXC_UD);
-            generate_exception_if(!vcpu_has_rtm(), EXC_UD);
-            generate_exception_if(vcpu_has_rtm(), EXC_GP, 0);
-            break;
-
-        case 0xd6: /* xtest */
-            generate_exception_if(vex.pfx, EXC_UD);
-            generate_exception_if(!vcpu_has_rtm() && !vcpu_has_hle(),
-                                  EXC_UD);
-            /* Neither HLE nor RTM can be active when we get here. */
-            _regs.eflags |= X86_EFLAGS_ZF;
-            break;
-
-        case 0xdf: /* invlpga */
-            fail_if(!ops->read_msr);
-            if ( (rc = ops->read_msr(MSR_EFER,
-                                     &msr_val, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            /* Finding SVME set implies vcpu_has_svm(). */
-            generate_exception_if(!(msr_val & EFER_SVME) ||
-                                  !in_protmode(ctxt, ops), EXC_UD);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            fail_if(!ops->tlb_op);
-            if ( (rc = ops->tlb_op(x86emul_invlpga, truncate_ea(_regs.r(ax)),
-                                   _regs.ecx, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-
-        case 0xe8:
-            switch ( vex.pfx )
-            {
-            case vex_none: /* serialize */
-                host_and_vcpu_must_have(serialize);
-                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
-                break;
-            case vex_f2: /* xsusldtrk */
-                vcpu_must_have(tsxldtrk);
-                /*
-                 * We're never in a transactional region when coming here
-                 * - nothing else to do.
-                 */
-                break;
-            default:
-                goto unimplemented_insn;
-            }
-            break;
-
-        case 0xe9:
-            switch ( vex.pfx )
-            {
-            case vex_f2: /* xresldtrk */
-                vcpu_must_have(tsxldtrk);
-                /*
-                 * We're never in a transactional region when coming here
-                 * - nothing else to do.
-                 */
-                break;
-            default:
-                goto unimplemented_insn;
-            }
-            break;
-
-        case 0xee:
-            switch ( vex.pfx )
-            {
-            case vex_none: /* rdpkru */
-                if ( !ops->read_cr ||
-                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                    cr4 = 0;
-                generate_exception_if(!(cr4 & X86_CR4_PKE), EXC_UD);
-                generate_exception_if(_regs.ecx, EXC_GP, 0);
-                _regs.r(ax) = rdpkru();
-                _regs.r(dx) = 0;
-                break;
-            default:
-                goto unimplemented_insn;
-            }
-            break;
-
-        case 0xef:
-            switch ( vex.pfx )
-            {
-            case vex_none: /* wrpkru */
-                if ( !ops->read_cr ||
-                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                    cr4 = 0;
-                generate_exception_if(!(cr4 & X86_CR4_PKE), EXC_UD);
-                generate_exception_if(_regs.ecx | _regs.edx, EXC_GP, 0);
-                wrpkru(_regs.eax);
-                break;
-            default:
-                goto unimplemented_insn;
-            }
-            break;
-
-        case 0xf8: /* swapgs */
-            generate_exception_if(!mode_64bit(), EXC_UD);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            fail_if(!ops->read_segment || !ops->read_msr ||
-                    !ops->write_segment || !ops->write_msr);
-            if ( (rc = ops->read_segment(x86_seg_gs, &sreg,
-                                         ctxt)) != X86EMUL_OKAY ||
-                 (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
-                                     ctxt)) != X86EMUL_OKAY ||
-                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
-                                      ctxt)) != X86EMUL_OKAY )
-                goto done;
-            sreg.base = msr_val;
-            if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
-                                          ctxt)) != X86EMUL_OKAY )
-            {
-                /* Best effort unwind (i.e. no error checking). */
-                ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
-                goto done;
-            }
-            break;
-
-        case 0xf9: /* rdtscp */
-            fail_if(ops->read_msr == NULL);
-            if ( (rc = ops->read_msr(MSR_TSC_AUX,
-                                     &msr_val, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            _regs.r(cx) = (uint32_t)msr_val;
-            goto rdtsc;
-
-        case 0xfc: /* clzero */
-        {
-            unsigned long zero = 0;
-
-            vcpu_must_have(clzero);
-
-            base = ad_bytes == 8 ? _regs.r(ax) :
-                   ad_bytes == 4 ? _regs.eax : _regs.ax;
-            limit = ctxt->cpuid->basic.clflush_size * 8;
-            generate_exception_if(limit < sizeof(long) ||
-                                  (limit & (limit - 1)), EXC_UD);
-            base &= ~(limit - 1);
-            if ( ops->rep_stos )
-            {
-                unsigned long nr_reps = limit / sizeof(zero);
-
-                rc = ops->rep_stos(&zero, ea.mem.seg, base, sizeof(zero),
-                                   &nr_reps, ctxt);
-                if ( rc == X86EMUL_OKAY )
-                {
-                    base += nr_reps * sizeof(zero);
-                    limit -= nr_reps * sizeof(zero);
-                }
-                else if ( rc != X86EMUL_UNHANDLEABLE )
-                    goto done;
-            }
-            fail_if(limit && !ops->write);
-            while ( limit )
-            {
-                rc = ops->write(ea.mem.seg, base, &zero, sizeof(zero), ctxt);
-                if ( rc != X86EMUL_OKAY )
-                    goto done;
-                base += sizeof(zero);
-                limit -= sizeof(zero);
-            }
-            break;
-        }
-
-#define _GRP7(mod, reg) \
-            (((mod) << 6) | ((reg) << 3)) ... (((mod) << 6) | ((reg) << 3) | 7)
-#define GRP7_MEM(reg) _GRP7(0, reg): case _GRP7(1, reg): case _GRP7(2, reg)
-#define GRP7_ALL(reg) GRP7_MEM(reg): case _GRP7(3, reg)
-
-        case GRP7_MEM(0): /* sgdt */
-        case GRP7_MEM(1): /* sidt */
-            ASSERT(ea.type == OP_MEM);
-            generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
-            fail_if(!ops->read_segment || !ops->write);
-            if ( (rc = ops->read_segment(seg, &sreg, ctxt)) )
-                goto done;
-            if ( mode_64bit() )
-                op_bytes = 8;
-            else if ( op_bytes == 2 )
-            {
-                sreg.base &= 0xffffff;
-                op_bytes = 4;
-            }
-            if ( (rc = ops->write(ea.mem.seg, ea.mem.off, &sreg.limit,
-                                  2, ctxt)) != X86EMUL_OKAY ||
-                 (rc = ops->write(ea.mem.seg, truncate_ea(ea.mem.off + 2),
-                                  &sreg.base, op_bytes, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-
-        case GRP7_MEM(2): /* lgdt */
-        case GRP7_MEM(3): /* lidt */
-            ASSERT(ea.type == OP_MEM);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            fail_if(ops->write_segment == NULL);
-            memset(&sreg, 0, sizeof(sreg));
-            if ( (rc = read_ulong(ea.mem.seg, ea.mem.off,
-                                  &limit, 2, ctxt, ops)) ||
-                 (rc = read_ulong(ea.mem.seg, truncate_ea(ea.mem.off + 2),
-                                  &base, mode_64bit() ? 8 : 4, ctxt, ops)) )
-                goto done;
-            generate_exception_if(!is_canonical_address(base), EXC_GP, 0);
-            sreg.base = base;
-            sreg.limit = limit;
-            if ( !mode_64bit() && op_bytes == 2 )
-                sreg.base &= 0xffffff;
-            if ( (rc = ops->write_segment(seg, &sreg, ctxt)) )
-                goto done;
-            break;
-
-        case GRP7_ALL(4): /* smsw */
-            generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
-            if ( ea.type == OP_MEM )
-            {
-                fail_if(!ops->write);
-                d |= Mov; /* force writeback */
-                ea.bytes = 2;
-            }
-            else
-                ea.bytes = op_bytes;
-            dst = ea;
-            fail_if(ops->read_cr == NULL);
-            if ( (rc = ops->read_cr(0, &dst.val, ctxt)) )
-                goto done;
-            break;
-
-        case GRP7_ALL(6): /* lmsw */
-            fail_if(ops->read_cr == NULL);
-            fail_if(ops->write_cr == NULL);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            if ( (rc = ops->read_cr(0, &cr0, ctxt)) )
-                goto done;
-            if ( ea.type == OP_REG )
-                cr0w = *ea.reg;
-            else if ( (rc = read_ulong(ea.mem.seg, ea.mem.off,
-                                       &cr0w, 2, ctxt, ops)) )
-                goto done;
-            /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
-            cr0 = (cr0 & ~0xe) | (cr0w & 0xf);
-            if ( (rc = ops->write_cr(0, cr0, ctxt)) )
-                goto done;
-            break;
-
-        case GRP7_MEM(7): /* invlpg */
-            ASSERT(ea.type == OP_MEM);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            fail_if(!ops->tlb_op);
-            if ( (rc = ops->tlb_op(x86emul_invlpg, ea.mem.off, ea.mem.seg,
-                                   ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-
-#undef GRP7_ALL
-#undef GRP7_MEM
-#undef _GRP7
-
-        default:
-            goto unimplemented_insn;
-        }
-        break;
-    }
+        rc = x86emul_0f01(state, &_regs, &dst, ctxt, ops);
+        goto dispatch_from_helper;
 
     case X86EMUL_OPC(0x0f, 0x02): /* lar */
         generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
@@ -11332,6 +10545,24 @@ x86_emulate(
     unrecognized_insn:
         rc = X86EMUL_UNRECOGNIZED;
         goto done;
+
+    dispatch_from_helper:
+        if ( rc == X86EMUL_OKAY )
+            break;
+
+        switch ( rc )
+        {
+        case X86EMUL_rdtsc:
+            goto rdtsc;
+        }
+
+        /* Internally used state change indicators may not make it here. */
+        if ( rc < 0 )
+        {
+            ASSERT_UNREACHABLE();
+            rc = X86EMUL_UNHANDLEABLE;
+        }
+        goto done;
     }
 
     if ( state->rmw )
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -24,8 +24,6 @@
 #undef cpuid
 #undef wbinvd
 
-#define r(name) r ## name
-
 #define cpu_has_amd_erratum(nr) \
         cpu_has_amd_erratum(&current_cpu_data, AMD_ERRATUM_##nr)
 
@@ -54,12 +52,6 @@
 
 #define FXSAVE_AREA current->arch.fpu_ctxt
 
-#ifndef CONFIG_HVM
-# define X86EMUL_NO_FPU
-# define X86EMUL_NO_MMX
-# define X86EMUL_NO_SIMD
-#endif
-
 #include "x86_emulate/x86_emulate.c"
 
 int cf_check x86emul_read_xcr(



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:59:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 09:59:20 +0000
X-Inumbo-ID: d0b07690-ec91-11ec-ab14-113154c10af9
Message-ID: <91dbe849-ea99-4134-4cda-55f23a801055@suse.com>
Date: Wed, 15 Jun 2022 11:59:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 2/8] x86emul: split off opcode 0fae handling
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

There are a fair number of sub-cases (some yet to be implemented), so a
separate function seems warranted.

The moved code is slightly adjusted in a few places, e.g. replacing EXC_*
with X86_EXC_* (so that the EXC_* definitions don't need to move as well;
we want them phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

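As a reviewer aid (illustrative only, not part of the patch): the sub-case
selection moving into x86emul_0fae() can be sketched standalone. ModRM.reg
bits 2:0 pick the Grp15 operation, with ModRM.mod distinguishing the
register (fence) forms from the memory forms; 66/F3/VEX prefix handling is
omitted here.

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative sketch of the Grp15 (0F AE) sub-case dispatch: ModRM.reg
 * bits 2:0 select the operation, ModRM.mod == 3 selects the register
 * (fence) encodings where both forms exist.  Prefixes are ignored.
 */
static const char *grp15_name(unsigned int modrm_reg, unsigned int modrm_mod)
{
    switch ( modrm_reg & 7 )
    {
    case 0: return "fxsave";
    case 1: return "fxrstor";
    case 2: return "ldmxcsr";
    case 3: return "stmxcsr";
    case 5: return modrm_mod == 3 ? "lfence" : "unimplemented";
    case 6: return modrm_mod == 3 ? "mfence" : "clwb";
    case 7: return modrm_mod == 3 ? "sfence" : "clflush{,opt}";
    default: return "unimplemented";
    }
}
```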
--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -35,7 +35,7 @@ x86.h := $(addprefix $(XEN_ROOT)/tools/i
 x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
 
 OBJS := fuzz-emul.o x86-emulate.o
-OBJS += x86_emulate/0f01.o
+OBJS += x86_emulate/0f01.o x86_emulate/0fae.o
 
 # x86-emulate.c will be implicit for both
 x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h) x86_emulate/private.h
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -251,7 +251,7 @@ xop.h avx512f.h: simd-fma.c
 endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
-OBJS += x86_emulate/0f01.o
+OBJS += x86_emulate/0f01.o x86_emulate/0fae.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/0fae.c
@@ -0,0 +1,222 @@
+/******************************************************************************
+ * 0fae.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#if defined(__XEN__) && \
+    (!defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+     !defined(X86EMUL_NO_SIMD))
+# include <asm/xstate.h>
+#endif
+
+int x86emul_0fae(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 const struct operand *src,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops,
+                 enum x86_emulate_fpu_type *fpu_type)
+#define fpu_type (*fpu_type) /* for get_fpu() */
+{
+    unsigned long cr4;
+    int rc;
+
+    if ( !s->vex.opcx && (!s->vex.pfx || s->vex.pfx == vex_66) )
+    {
+        switch ( s->modrm_reg & 7 )
+        {
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        case 0: /* fxsave */
+        case 1: /* fxrstor */
+            generate_exception_if(s->vex.pfx, X86_EXC_UD);
+            vcpu_must_have(fxsr);
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            generate_exception_if(!is_aligned(s->ea.mem.seg, s->ea.mem.off, 16,
+                                              ctxt, ops),
+                                  X86_EXC_GP, 0);
+            fail_if(!ops->blk);
+            s->op_bytes =
+#ifdef __x86_64__
+                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
+#endif
+                sizeof(struct x86_fxsr);
+            if ( amd_like(ctxt) )
+            {
+                uint64_t msr_val;
+
+                /* Assume "normal" operation in case of missing hooks. */
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = X86_CR4_OSFXSR;
+                if ( !ops->read_msr ||
+                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                    msr_val = 0;
+                if ( !(cr4 & X86_CR4_OSFXSR) ||
+                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
+                    s->op_bytes = offsetof(struct x86_fxsr, xmm[0]);
+            }
+            /*
+             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
+             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
+             */
+            get_fpu(X86EMUL_FPU_fpu);
+            s->fpu_ctrl = true;
+            s->blk = s->modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
+            if ( (rc = ops->blk(s->ea.mem.seg, s->ea.mem.off, NULL,
+                                sizeof(struct x86_fxsr), &regs->eflags,
+                                s, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
+#ifndef X86EMUL_NO_SIMD
+        case 2: /* ldmxcsr */
+            generate_exception_if(s->vex.pfx, X86_EXC_UD);
+            vcpu_must_have(sse);
+        ldmxcsr:
+            generate_exception_if(src->type != OP_MEM, X86_EXC_UD);
+            get_fpu(s->vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
+            generate_exception_if(src->val & ~mxcsr_mask, X86_EXC_GP, 0);
+            asm volatile ( "ldmxcsr %0" :: "m" (src->val) );
+            break;
+
+        case 3: /* stmxcsr */
+            generate_exception_if(s->vex.pfx, X86_EXC_UD);
+            vcpu_must_have(sse);
+        stmxcsr:
+            generate_exception_if(dst->type != OP_MEM, X86_EXC_UD);
+            get_fpu(s->vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
+            asm volatile ( "stmxcsr %0" : "=m" (dst->val) );
+            break;
+#endif /* X86EMUL_NO_SIMD */
+
+        case 5: /* lfence */
+            fail_if(s->modrm_mod != 3);
+            generate_exception_if(s->vex.pfx, X86_EXC_UD);
+            vcpu_must_have(sse2);
+            asm volatile ( "lfence" ::: "memory" );
+            break;
+        case 6:
+            if ( s->modrm_mod == 3 ) /* mfence */
+            {
+                generate_exception_if(s->vex.pfx, X86_EXC_UD);
+                vcpu_must_have(sse2);
+                asm volatile ( "mfence" ::: "memory" );
+                break;
+            }
+            /* else clwb */
+            fail_if(!s->vex.pfx);
+            vcpu_must_have(clwb);
+            fail_if(!ops->cache_op);
+            if ( (rc = ops->cache_op(x86emul_clwb, s->ea.mem.seg, s->ea.mem.off,
+                                     ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+        case 7:
+            if ( s->modrm_mod == 3 ) /* sfence */
+            {
+                generate_exception_if(s->vex.pfx, X86_EXC_UD);
+                vcpu_must_have(mmxext);
+                asm volatile ( "sfence" ::: "memory" );
+                break;
+            }
+            /* else clflush{,opt} */
+            if ( !s->vex.pfx )
+                vcpu_must_have(clflush);
+            else
+                vcpu_must_have(clflushopt);
+            fail_if(!ops->cache_op);
+            if ( (rc = ops->cache_op(s->vex.pfx ? x86emul_clflushopt
+                                                : x86emul_clflush,
+                                     s->ea.mem.seg, s->ea.mem.off,
+                                     ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+        default:
+            return X86EMUL_UNIMPLEMENTED;
+        }
+    }
+#ifndef X86EMUL_NO_SIMD
+    else if ( s->vex.opcx && !s->vex.pfx )
+    {
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* vldmxcsr */
+            generate_exception_if(s->vex.l || s->vex.reg != 0xf, X86_EXC_UD);
+            vcpu_must_have(avx);
+            goto ldmxcsr;
+        case 3: /* vstmxcsr */
+            generate_exception_if(s->vex.l || s->vex.reg != 0xf, X86_EXC_UD);
+            vcpu_must_have(avx);
+            goto stmxcsr;
+        }
+        return X86EMUL_UNRECOGNIZED;
+    }
+#endif /* !X86EMUL_NO_SIMD */
+    else if ( !s->vex.opcx && s->vex.pfx == vex_f3 )
+    {
+        enum x86_segment seg;
+        struct segment_register sreg;
+
+        fail_if(s->modrm_mod != 3);
+        generate_exception_if((s->modrm_reg & 4) || !mode_64bit(), X86_EXC_UD);
+        fail_if(!ops->read_cr);
+        if ( (rc = ops->read_cr(4, &cr4, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        generate_exception_if(!(cr4 & X86_CR4_FSGSBASE), X86_EXC_UD);
+        seg = s->modrm_reg & 1 ? x86_seg_gs : x86_seg_fs;
+        fail_if(!ops->read_segment);
+        if ( (rc = ops->read_segment(seg, &sreg, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        dst->reg = decode_gpr(regs, s->modrm_rm);
+        if ( !(s->modrm_reg & 2) )
+        {
+            /* rd{f,g}sbase */
+            dst->type = OP_REG;
+            dst->bytes = (s->op_bytes == 8) ? 8 : 4;
+            dst->val = sreg.base;
+        }
+        else
+        {
+            /* wr{f,g}sbase */
+            if ( s->op_bytes == 8 )
+            {
+                sreg.base = *dst->reg;
+                generate_exception_if(!is_canonical_address(sreg.base),
+                                      X86_EXC_GP, 0);
+            }
+            else
+                sreg.base = (uint32_t)*dst->reg;
+            fail_if(!ops->write_segment);
+            if ( (rc = ops->write_segment(seg, &sreg, ctxt)) != X86EMUL_OKAY )
+                goto done;
+        }
+    }
+    else
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNRECOGNIZED;
+    }
+
+    rc = X86EMUL_OKAY;
+
+ done:
+    return rc;
+}
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1 +1,2 @@
 obj-y += 0f01.o
+obj-y += 0fae.o
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -308,6 +308,29 @@ struct x86_emulate_state {
 #endif
 };
 
+struct x86_fxsr {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint8_t ftw, :8;
+    uint16_t fop;
+    union {
+        struct {
+            uint32_t offs;
+            uint16_t sel, :16;
+        };
+        uint64_t addr;
+    } fip, fdp;
+    uint32_t mxcsr;
+    uint32_t mxcsr_mask;
+    struct {
+        uint8_t data[10];
+        uint16_t :16, :16, :16;
+    } fpreg[8];
+    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
+    uint64_t rsvd[6];
+    uint64_t avl[6];
+};
+
 /*
  * Externally visible return codes from x86_emulate() are non-negative.
  * Use negative values for internal state change indicators from helpers
@@ -397,6 +420,18 @@ in_protmode(
     (_cpl == 0);                                \
 })
 
+static inline bool
+_amd_like(const struct cpuid_policy *cp)
+{
+    return cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON);
+}
+
+static inline bool
+amd_like(const struct x86_emulate_ctxt *ctxt)
+{
+    return _amd_like(ctxt->cpuid);
+}
+
 #define vcpu_has_fpu()         (ctxt->cpuid->basic.fpu)
 #define vcpu_has_sep()         (ctxt->cpuid->basic.sep)
 #define vcpu_has_cx8()         (ctxt->cpuid->basic.cx8)
@@ -501,11 +536,52 @@ in_protmode(
 int x86emul_get_cpl(struct x86_emulate_ctxt *ctxt,
                     const struct x86_emulate_ops *ops);
 
+int x86emul_get_fpu(enum x86_emulate_fpu_type type,
+                    struct x86_emulate_ctxt *ctxt,
+                    const struct x86_emulate_ops *ops);
+
+#define get_fpu(type)                                           \
+do {                                                            \
+    rc = x86emul_get_fpu(fpu_type = (type), ctxt, ops);         \
+    if ( rc ) goto done;                                        \
+} while (0)
+
 int x86emul_0f01(struct x86_emulate_state *s,
                  struct cpu_user_regs *regs,
                  struct operand *dst,
                  struct x86_emulate_ctxt *ctxt,
                  const struct x86_emulate_ops *ops);
+int x86emul_0fae(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 const struct operand *src,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops,
+                 enum x86_emulate_fpu_type *fpu_type);
+
+static inline bool is_aligned(enum x86_segment seg, unsigned long offs,
+                              unsigned int size, struct x86_emulate_ctxt *ctxt,
+                              const struct x86_emulate_ops *ops)
+{
+    struct segment_register reg;
+
+    /* Expecting powers of two only. */
+    ASSERT(!(size & (size - 1)));
+
+    if ( mode_64bit() && seg < x86_seg_fs )
+        memset(&reg, 0, sizeof(reg));
+    else
+    {
+        /* No alignment checking when we have no way to read segment data. */
+        if ( !ops->read_segment )
+            return true;
+
+        if ( ops->read_segment(seg, &reg, ctxt) != X86EMUL_OKAY )
+            return false;
+    }
+
+    return !((reg.base + offs) & (size - 1));
+}
 
 static inline bool umip_active(struct x86_emulate_ctxt *ctxt,
                                const struct x86_emulate_ops *ops)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -695,29 +695,6 @@ typedef union {
     uint32_t data32[16];
 } mmval_t;
 
-struct x86_fxsr {
-    uint16_t fcw;
-    uint16_t fsw;
-    uint8_t ftw, :8;
-    uint16_t fop;
-    union {
-        struct {
-            uint32_t offs;
-            uint16_t sel, :16;
-        };
-        uint64_t addr;
-    } fip, fdp;
-    uint32_t mxcsr;
-    uint32_t mxcsr_mask;
-    struct {
-        uint8_t data[10];
-        uint16_t :16, :16, :16;
-    } fpreg[8];
-    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
-    uint64_t rsvd[6];
-    uint64_t avl[6];
-};
-
 /*
  * While proper alignment gets specified above, this doesn't get honored by
  * the compiler for automatic variables. Use this helper to instantiate a
@@ -1063,7 +1040,7 @@ do {
     ops->write_segment(x86_seg_cs, cs, ctxt);                           \
 })
 
-static int _get_fpu(
+int x86emul_get_fpu(
     enum x86_emulate_fpu_type type,
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
@@ -1102,7 +1079,7 @@ static int _get_fpu(
         break;
     }
 
-    rc = ops->get_fpu(type, ctxt);
+    rc = (ops->get_fpu)(type, ctxt);
 
     if ( rc == X86EMUL_OKAY )
     {
@@ -1146,12 +1123,6 @@ static int _get_fpu(
     return rc;
 }
 
-#define get_fpu(type)                                           \
-do {                                                            \
-    rc = _get_fpu(fpu_type = (type), ctxt, ops);                \
-    if ( rc ) goto done;                                        \
-} while (0)
-
 static void put_fpu(
     enum x86_emulate_fpu_type type,
     bool failed_late,
@@ -1556,18 +1527,6 @@ static int ioport_access_check(
     return rc;
 }
 
-static bool
-_amd_like(const struct cpuid_policy *cp)
-{
-    return cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON);
-}
-
-static bool
-amd_like(const struct x86_emulate_ctxt *ctxt)
-{
-    return _amd_like(ctxt->cpuid);
-}
-
 /* Initialise output state in x86_emulate_ctxt */
 static void init_context(struct x86_emulate_ctxt *ctxt)
 {
@@ -1980,30 +1939,6 @@ static unsigned int decode_disp8scale(en
     } \
 } while ( false )
 
-static bool is_aligned(enum x86_segment seg, unsigned long offs,
-                       unsigned int size, struct x86_emulate_ctxt *ctxt,
-                       const struct x86_emulate_ops *ops)
-{
-    struct segment_register reg;
-
-    /* Expecting powers of two only. */
-    ASSERT(!(size & (size - 1)));
-
-    if ( mode_64bit() && seg < x86_seg_fs )
-        memset(&reg, 0, sizeof(reg));
-    else
-    {
-        /* No alignment checking when we have no way to read segment data. */
-        if ( !ops->read_segment )
-            return true;
-
-        if ( ops->read_segment(seg, &reg, ctxt) != X86EMUL_OKAY )
-            return false;
-    }
-
-    return !((reg.base + offs) & (size - 1));
-}
-
 static bool is_branch_step(struct x86_emulate_ctxt *ctxt,
                            const struct x86_emulate_ops *ops)
 {
@@ -3346,7 +3281,8 @@ x86_emulate(
 #ifndef X86EMUL_NO_SIMD
     /* With a memory operand, fetch the mask register in use (if any). */
     if ( ea.type == OP_MEM && evex.opmsk &&
-         _get_fpu(fpu_type = X86EMUL_FPU_opmask, ctxt, ops) == X86EMUL_OKAY )
+         x86emul_get_fpu(fpu_type = X86EMUL_FPU_opmask,
+                         ctxt, ops) == X86EMUL_OKAY )
     {
         uint8_t *stb = get_stub(stub);
 
@@ -3369,7 +3305,7 @@ x86_emulate(
 
     if ( fpu_type == X86EMUL_FPU_opmask )
     {
-        /* Squash (side) effects of the _get_fpu() above. */
+        /* Squash (side) effects of the x86emul_get_fpu() above. */
         x86_emul_reset_event(ctxt);
         put_fpu(X86EMUL_FPU_opmask, false, state, ctxt, ops);
         fpu_type = X86EMUL_FPU_none;
@@ -7433,173 +7369,14 @@ x86_emulate(
             emulate_2op_SrcV_nobyte("bts", src, dst, _regs.eflags);
         break;
 
-    case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
-        switch ( modrm_reg & 7 )
-        {
-#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
-    !defined(X86EMUL_NO_SIMD)
-        case 0: /* fxsave */
-        case 1: /* fxrstor */
-            generate_exception_if(vex.pfx, EXC_UD);
-            vcpu_must_have(fxsr);
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
-                                              ctxt, ops),
-                                  EXC_GP, 0);
-            fail_if(!ops->blk);
-            op_bytes =
-#ifdef __x86_64__
-                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
-#endif
-                sizeof(struct x86_fxsr);
-            if ( amd_like(ctxt) )
-            {
-                /* Assume "normal" operation in case of missing hooks. */
-                if ( !ops->read_cr ||
-                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                    cr4 = X86_CR4_OSFXSR;
-                if ( !ops->read_msr ||
-                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
-                    msr_val = 0;
-                if ( !(cr4 & X86_CR4_OSFXSR) ||
-                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
-                    op_bytes = offsetof(struct x86_fxsr, xmm[0]);
-            }
-            /*
-             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
-             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
-             */
-            get_fpu(X86EMUL_FPU_fpu);
-            state->fpu_ctrl = true;
-            state->blk = modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
-            if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
-                                sizeof(struct x86_fxsr), &_regs.eflags,
-                                state, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
-
-#ifndef X86EMUL_NO_SIMD
-        case 2: /* ldmxcsr */
-            generate_exception_if(vex.pfx, EXC_UD);
-            vcpu_must_have(sse);
-        ldmxcsr:
-            generate_exception_if(src.type != OP_MEM, EXC_UD);
-            get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
-            generate_exception_if(src.val & ~mxcsr_mask, EXC_GP, 0);
-            asm volatile ( "ldmxcsr %0" :: "m" (src.val) );
-            break;
-
-        case 3: /* stmxcsr */
-            generate_exception_if(vex.pfx, EXC_UD);
-            vcpu_must_have(sse);
-        stmxcsr:
-            generate_exception_if(dst.type != OP_MEM, EXC_UD);
-            get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
-            asm volatile ( "stmxcsr %0" : "=m" (dst.val) );
-            break;
-#endif /* X86EMUL_NO_SIMD */
-
-        case 5: /* lfence */
-            fail_if(modrm_mod != 3);
-            generate_exception_if(vex.pfx, EXC_UD);
-            vcpu_must_have(sse2);
-            asm volatile ( "lfence" ::: "memory" );
-            break;
-        case 6:
-            if ( modrm_mod == 3 ) /* mfence */
-            {
-                generate_exception_if(vex.pfx, EXC_UD);
-                vcpu_must_have(sse2);
-                asm volatile ( "mfence" ::: "memory" );
-                break;
-            }
-            /* else clwb */
-            fail_if(!vex.pfx);
-            vcpu_must_have(clwb);
-            fail_if(!ops->cache_op);
-            if ( (rc = ops->cache_op(x86emul_clwb, ea.mem.seg, ea.mem.off,
-                                     ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-        case 7:
-            if ( modrm_mod == 3 ) /* sfence */
-            {
-                generate_exception_if(vex.pfx, EXC_UD);
-                vcpu_must_have(mmxext);
-                asm volatile ( "sfence" ::: "memory" );
-                break;
-            }
-            /* else clflush{,opt} */
-            if ( !vex.pfx )
-                vcpu_must_have(clflush);
-            else
-                vcpu_must_have(clflushopt);
-            fail_if(!ops->cache_op);
-            if ( (rc = ops->cache_op(vex.pfx ? x86emul_clflushopt
-                                             : x86emul_clflush,
-                                     ea.mem.seg, ea.mem.off,
-                                     ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-        default:
-            goto unimplemented_insn;
-        }
-        break;
-
+    case X86EMUL_OPC(0x0f, 0xae): /* Grp15 */
+    case X86EMUL_OPC_66(0x0f, 0xae):
+    case X86EMUL_OPC_F3(0x0f, 0xae):
 #ifndef X86EMUL_NO_SIMD
-
-    case X86EMUL_OPC_VEX(0x0f, 0xae): /* Grp15 */
-        switch ( modrm_reg & 7 )
-        {
-        case 2: /* vldmxcsr */
-            generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
-            vcpu_must_have(avx);
-            goto ldmxcsr;
-        case 3: /* vstmxcsr */
-            generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
-            vcpu_must_have(avx);
-            goto stmxcsr;
-        }
-        goto unrecognized_insn;
-
-#endif /* !X86EMUL_NO_SIMD */
-
-    case X86EMUL_OPC_F3(0x0f, 0xae): /* Grp15 */
-        fail_if(modrm_mod != 3);
-        generate_exception_if((modrm_reg & 4) || !mode_64bit(), EXC_UD);
-        fail_if(!ops->read_cr);
-        if ( (rc = ops->read_cr(4, &cr4, ctxt)) != X86EMUL_OKAY )
-            goto done;
-        generate_exception_if(!(cr4 & X86_CR4_FSGSBASE), EXC_UD);
-        seg = modrm_reg & 1 ? x86_seg_gs : x86_seg_fs;
-        fail_if(!ops->read_segment);
-        if ( (rc = ops->read_segment(seg, &sreg, ctxt)) != X86EMUL_OKAY )
-            goto done;
-        dst.reg = decode_gpr(&_regs, modrm_rm);
-        if ( !(modrm_reg & 2) )
-        {
-            /* rd{f,g}sbase */
-            dst.type = OP_REG;
-            dst.bytes = (op_bytes == 8) ? 8 : 4;
-            dst.val = sreg.base;
-        }
-        else
-        {
-            /* wr{f,g}sbase */
-            if ( op_bytes == 8 )
-            {
-                sreg.base = *dst.reg;
-                generate_exception_if(!is_canonical_address(sreg.base),
-                                      EXC_GP, 0);
-            }
-            else
-                sreg.base = (uint32_t)*dst.reg;
-            fail_if(!ops->write_segment);
-            if ( (rc = ops->write_segment(seg, &sreg, ctxt)) != X86EMUL_OKAY )
-                goto done;
-        }
-        break;
+    case X86EMUL_OPC_VEX(0x0f, 0xae):
+#endif
+        rc = x86emul_0fae(state, &_regs, &dst, &src, ctxt, ops, &fpu_type);
+        goto dispatch_from_helper;
 
     case X86EMUL_OPC(0x0f, 0xaf): /* imul */
         emulate_2op_SrcV_srcmem("imul", src, dst, _regs.eflags);
@@ -10539,7 +10316,7 @@ x86_emulate(
         goto unrecognized_insn;
 
     default:
-    unimplemented_insn:
+    unimplemented_insn: __maybe_unused;
         rc = X86EMUL_UNIMPLEMENTED;
         goto done;
     unrecognized_insn:



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 09:59:41 2022
Message-ID: <602c8889-1e75-52dd-135f-be6b6013d880@suse.com>
Date: Wed, 15 Jun 2022 11:59:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 3/8] x86emul: split off opcode 0fc7 handling
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

There are a fair number of sub-cases (some yet to be implemented), so a
separate function seems warranted.

The moved code is slightly adjusted in a few places, e.g. replacing EXC_*
with X86_EXC_* (so that the EXC_* definitions don't need to move as well;
we want them phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
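
For readers following the series, the refactoring pattern can be modelled
as below. This is a standalone sketch, not the emulator code:
helper_grp9() and x86_emulate_case_0fc7() are hypothetical names, and the
enum values are illustrative (only the X86EMUL_* names match the source).

```c
#include <assert.h>

/* Illustrative return codes; names mirror the real X86EMUL_* values. */
enum {
    X86EMUL_OKAY,
    X86EMUL_UNIMPLEMENTED,
    X86EMUL_UNRECOGNIZED,
};

/* Stand-in for a split-off helper such as x86emul_0fc7(): it handles all
 * sub-cases of one opcode group and reports the outcome as a return code
 * instead of using the big switch's goto labels directly. */
static int helper_grp9(int modrm_reg)
{
    switch ( modrm_reg & 7 )
    {
    case 1: /* cmpxchg8b / cmpxchg16b */
    case 6: /* rdrand */
    case 7: /* rdseed / rdpid */
        return X86EMUL_OKAY;
    default:
        return X86EMUL_UNRECOGNIZED;
    }
}

/* Stand-in for the "dispatch_from_helper" step in x86_emulate(): the
 * caller maps the helper's return code back onto the switch's exit
 * paths (writeback, unrecognized_insn, unimplemented_insn, done). */
static int x86_emulate_case_0fc7(int modrm_reg)
{
    int rc = helper_grp9(modrm_reg);

    assert(rc == X86EMUL_OKAY || rc == X86EMUL_UNRECOGNIZED ||
           rc == X86EMUL_UNIMPLEMENTED);
    return rc;
}
```

The design choice is that helpers never goto into x86_emulate(); all
control-flow decisions travel through the return value.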

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -35,7 +35,7 @@ x86.h := $(addprefix $(XEN_ROOT)/tools/i
 x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
 
 OBJS := fuzz-emul.o x86-emulate.o
-OBJS += x86_emulate/0f01.o x86_emulate/0fae.o
+OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
 
 # x86-emulate.c will be implicit for both
 x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h) x86_emulate/private.h
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -251,7 +251,7 @@ xop.h avx512f.h: simd-fma.c
 endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
-OBJS += x86_emulate/0f01.o x86_emulate/0fae.o
+OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/0fc7.c
@@ -0,0 +1,210 @@
+/******************************************************************************
+ * 0fc7.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * Copyright (c) 2005-2007 Keir Fraser
+ * Copyright (c) 2005-2007 XenSource Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+/* Avoid namespace pollution. */
+#undef cmpxchg
+
+int x86emul_0fc7(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops,
+                 mmval_t *mmvalp)
+{
+    int rc;
+
+    if ( s->ea.type == OP_REG )
+    {
+        bool __maybe_unused carry;
+
+        switch ( s->modrm_reg & 7 )
+        {
+        default:
+            return X86EMUL_UNRECOGNIZED;
+
+        case 6: /* rdrand */
+#ifdef HAVE_AS_RDRAND
+            generate_exception_if(s->vex.pfx >= vex_f3, X86_EXC_UD);
+            host_and_vcpu_must_have(rdrand);
+            *dst = s->ea;
+            switch ( s->op_bytes )
+            {
+            case 2:
+                asm ( "rdrand %w0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            default:
+# ifdef __x86_64__
+                asm ( "rdrand %k0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            case 8:
+# endif
+                asm ( "rdrand %0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            }
+            regs->eflags &= ~EFLAGS_MASK;
+            if ( carry )
+                regs->eflags |= X86_EFLAGS_CF;
+            break;
+#else
+            return X86EMUL_UNIMPLEMENTED;
+#endif
+
+        case 7: /* rdseed / rdpid */
+            if ( s->vex.pfx == vex_f3 ) /* rdpid */
+            {
+                uint64_t msr_val;
+
+                generate_exception_if(s->ea.type != OP_REG, X86_EXC_UD);
+                vcpu_must_have(rdpid);
+                fail_if(!ops->read_msr);
+                if ( (rc = ops->read_msr(MSR_TSC_AUX, &msr_val,
+                                         ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                *dst = s->ea;
+                dst->val = msr_val;
+                dst->bytes = 4;
+                break;
+            }
+#ifdef HAVE_AS_RDSEED
+            generate_exception_if(s->vex.pfx >= vex_f3, X86_EXC_UD);
+            host_and_vcpu_must_have(rdseed);
+            *dst = s->ea;
+            switch ( s->op_bytes )
+            {
+            case 2:
+                asm ( "rdseed %w0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            default:
+# ifdef __x86_64__
+                asm ( "rdseed %k0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            case 8:
+# endif
+                asm ( "rdseed %0" ASM_FLAG_OUT(, "; setc %1")
+                      : "=r" (dst->val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
+                break;
+            }
+            regs->eflags &= ~EFLAGS_MASK;
+            if ( carry )
+                regs->eflags |= X86_EFLAGS_CF;
+            break;
+#endif
+        }
+    }
+    else
+    {
+        union {
+            uint32_t u32[2];
+            uint64_t u64[2];
+        } *old, *aux;
+
+        /* cmpxchg8b/cmpxchg16b */
+        generate_exception_if((s->modrm_reg & 7) != 1, X86_EXC_UD);
+        fail_if(!ops->cmpxchg);
+        if ( s->rex_prefix & REX_W )
+        {
+            host_and_vcpu_must_have(cx16);
+            generate_exception_if(!is_aligned(s->ea.mem.seg, s->ea.mem.off, 16,
+                                              ctxt, ops),
+                                  X86_EXC_GP, 0);
+            s->op_bytes = 16;
+        }
+        else
+        {
+            vcpu_must_have(cx8);
+            s->op_bytes = 8;
+        }
+
+        old = container_of(&mmvalp->ymm[0], typeof(*old), u64[0]);
+        aux = container_of(&mmvalp->ymm[2], typeof(*aux), u64[0]);
+
+        /* Get actual old value. */
+        if ( (rc = ops->read(s->ea.mem.seg, s->ea.mem.off, old, s->op_bytes,
+                             ctxt)) != X86EMUL_OKAY )
+            goto done;
+
+        /* Get expected value. */
+        if ( s->op_bytes == 8 )
+        {
+            aux->u32[0] = regs->eax;
+            aux->u32[1] = regs->edx;
+        }
+        else
+        {
+            aux->u64[0] = regs->r(ax);
+            aux->u64[1] = regs->r(dx);
+        }
+
+        if ( memcmp(old, aux, s->op_bytes) )
+        {
+        cmpxchgNb_failed:
+            /* Expected != actual: store actual to rDX:rAX and clear ZF. */
+            regs->r(ax) = s->op_bytes == 8 ? old->u32[0] : old->u64[0];
+            regs->r(dx) = s->op_bytes == 8 ? old->u32[1] : old->u64[1];
+            regs->eflags &= ~X86_EFLAGS_ZF;
+        }
+        else
+        {
+            /*
+             * Expected == actual: Get proposed value, attempt atomic cmpxchg
+             * and set ZF if successful.
+             */
+            if ( s->op_bytes == 8 )
+            {
+                aux->u32[0] = regs->ebx;
+                aux->u32[1] = regs->ecx;
+            }
+            else
+            {
+                aux->u64[0] = regs->r(bx);
+                aux->u64[1] = regs->r(cx);
+            }
+
+            switch ( rc = ops->cmpxchg(s->ea.mem.seg, s->ea.mem.off, old, aux,
+                                       s->op_bytes, s->lock_prefix, ctxt) )
+            {
+            case X86EMUL_OKAY:
+                regs->eflags |= X86_EFLAGS_ZF;
+                break;
+
+            case X86EMUL_CMPXCHG_FAILED:
+                rc = X86EMUL_OKAY;
+                goto cmpxchgNb_failed;
+
+            default:
+                goto done;
+            }
+        }
+    }
+
+    rc = X86EMUL_OKAY;
+
+ done:
+    return rc;
+}
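
As a side note for reviewers, the architectural behaviour the memory path
above implements can be modelled in isolation. The following is a toy
model of cmpxchg8b semantics only (model_cmpxchg8b is a made-up name; it
ignores atomicity, segmentation, and the emulator's ops hooks): compare
EDX:EAX with the 64-bit memory operand; if equal, store ECX:EBX and set
ZF; otherwise load the memory value into EDX:EAX and clear ZF.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of cmpxchg8b: returns the resulting ZF. */
static bool model_cmpxchg8b(uint64_t *mem, uint32_t *eax, uint32_t *edx,
                            uint32_t ebx, uint32_t ecx)
{
    uint64_t expected = ((uint64_t)*edx << 32) | *eax;

    if ( *mem == expected )
    {
        /* Expected == actual: store ECX:EBX, set ZF. */
        *mem = ((uint64_t)ecx << 32) | ebx;
        return true;
    }

    /* Expected != actual: load actual into EDX:EAX, clear ZF. */
    *eax = (uint32_t)*mem;
    *edx = (uint32_t)(*mem >> 32);
    return false;
}
```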
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,2 +1,3 @@
 obj-y += 0f01.o
 obj-y += 0fae.o
+obj-y += 0fc7.o
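
For context on the rdrand/rdseed asm in 0fc7.c: ASM_FLAG_OUT abstracts
GCC's flag-output constraints ("=@ccc" captures CF directly on newer
compilers, with a setc fallback otherwise). A minimal standalone sketch
of the same pattern follows; the __RDRND__ guard is an assumption of this
sketch, not the HAVE_AS_RDRAND build-time check Xen uses, and
try_rdrand64 is a made-up name.

```c
#include <stdbool.h>
#include <stdint.h>

/* Attempt one rdrand; returns the carry flag (true => *val is valid).
 * Falls back to a stub when rdrand support isn't compiled in. */
static bool try_rdrand64(uint64_t *val)
{
#if defined(__x86_64__) && defined(__RDRND__)
    bool carry;

    /* CF reports success; "=@ccc" makes GCC read CF as an output. */
    asm volatile ( "rdrand %0" : "=r" (*val), "=@ccc" (carry) );
    return carry;
#else
    *val = 0;
    return false; /* not supported in this build */
#endif
}
```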
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -308,6 +308,14 @@ struct x86_emulate_state {
 #endif
 };
 
+typedef union {
+    uint64_t mmx;
+    uint64_t __attribute__ ((aligned(16))) xmm[2];
+    uint64_t __attribute__ ((aligned(32))) ymm[4];
+    uint64_t __attribute__ ((aligned(64))) zmm[8];
+    uint32_t data32[16];
+} mmval_t;
+
 struct x86_fxsr {
     uint16_t fcw;
     uint16_t fsw;
@@ -558,6 +566,12 @@ int x86emul_0fae(struct x86_emulate_stat
                  struct x86_emulate_ctxt *ctxt,
                  const struct x86_emulate_ops *ops,
                  enum x86_emulate_fpu_type *fpu_type);
+int x86emul_0fc7(struct x86_emulate_state *s,
+                 struct cpu_user_regs *regs,
+                 struct operand *dst,
+                 struct x86_emulate_ctxt *ctxt,
+                 const struct x86_emulate_ops *ops,
+                 mmval_t *mmvalp);
 
 static inline bool is_aligned(enum x86_segment seg, unsigned long offs,
                               unsigned int size, struct x86_emulate_ctxt *ctxt,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -687,17 +687,9 @@ struct x87_env32 {
 };
 #endif
 
-typedef union {
-    uint64_t mmx;
-    uint64_t __attribute__ ((aligned(16))) xmm[2];
-    uint64_t __attribute__ ((aligned(32))) ymm[4];
-    uint64_t __attribute__ ((aligned(64))) zmm[8];
-    uint32_t data32[16];
-} mmval_t;
-
 /*
- * While proper alignment gets specified above, this doesn't get honored by
- * the compiler for automatic variables. Use this helper to instantiate a
+ * While proper alignment gets specified in mmval_t, this doesn't get honored
+ * by the compiler for automatic variables. Use this helper to instantiate a
  * suitably aligned variable, producing a pointer to access it.
  */
 #define DECLARE_ALIGNED(type, var)                                        \
@@ -7680,174 +7672,8 @@ x86_emulate(
 #endif /* X86EMUL_NO_SIMD */
 
     case X86EMUL_OPC(0x0f, 0xc7): /* Grp9 */
-    {
-        union {
-            uint32_t u32[2];
-            uint64_t u64[2];
-        } *old, *aux;
-
-        if ( ea.type == OP_REG )
-        {
-            bool __maybe_unused carry;
-
-            switch ( modrm_reg & 7 )
-            {
-            default:
-                goto unrecognized_insn;
-
-            case 6: /* rdrand */
-#ifdef HAVE_AS_RDRAND
-                generate_exception_if(rep_prefix(), EXC_UD);
-                host_and_vcpu_must_have(rdrand);
-                dst = ea;
-                switch ( op_bytes )
-                {
-                case 2:
-                    asm ( "rdrand %w0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                default:
-# ifdef __x86_64__
-                    asm ( "rdrand %k0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                case 8:
-# endif
-                    asm ( "rdrand %0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                }
-                _regs.eflags &= ~EFLAGS_MASK;
-                if ( carry )
-                    _regs.eflags |= X86_EFLAGS_CF;
-                break;
-#else
-                goto unimplemented_insn;
-#endif
-
-            case 7: /* rdseed / rdpid */
-                if ( repe_prefix() ) /* rdpid */
-                {
-                    generate_exception_if(ea.type != OP_REG, EXC_UD);
-                    vcpu_must_have(rdpid);
-                    fail_if(!ops->read_msr);
-                    if ( (rc = ops->read_msr(MSR_TSC_AUX, &msr_val,
-                                             ctxt)) != X86EMUL_OKAY )
-                        goto done;
-                    dst = ea;
-                    dst.val = msr_val;
-                    dst.bytes = 4;
-                    break;
-                }
-#ifdef HAVE_AS_RDSEED
-                generate_exception_if(rep_prefix(), EXC_UD);
-                host_and_vcpu_must_have(rdseed);
-                dst = ea;
-                switch ( op_bytes )
-                {
-                case 2:
-                    asm ( "rdseed %w0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                default:
-# ifdef __x86_64__
-                    asm ( "rdseed %k0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                case 8:
-# endif
-                    asm ( "rdseed %0" ASM_FLAG_OUT(, "; setc %1")
-                          : "=r" (dst.val), ASM_FLAG_OUT("=@ccc", "=qm") (carry) );
-                    break;
-                }
-                _regs.eflags &= ~EFLAGS_MASK;
-                if ( carry )
-                    _regs.eflags |= X86_EFLAGS_CF;
-                break;
-#endif
-            }
-            break;
-        }
-
-        /* cmpxchg8b/cmpxchg16b */
-        generate_exception_if((modrm_reg & 7) != 1, EXC_UD);
-        fail_if(!ops->cmpxchg);
-        if ( rex_prefix & REX_W )
-        {
-            host_and_vcpu_must_have(cx16);
-            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
-                                              ctxt, ops),
-                                  EXC_GP, 0);
-            op_bytes = 16;
-        }
-        else
-        {
-            vcpu_must_have(cx8);
-            op_bytes = 8;
-        }
-
-        old = container_of(&mmvalp->ymm[0], typeof(*old), u64[0]);
-        aux = container_of(&mmvalp->ymm[2], typeof(*aux), u64[0]);
-
-        /* Get actual old value. */
-        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, old, op_bytes,
-                             ctxt)) != X86EMUL_OKAY )
-            goto done;
-
-        /* Get expected value. */
-        if ( !(rex_prefix & REX_W) )
-        {
-            aux->u32[0] = _regs.eax;
-            aux->u32[1] = _regs.edx;
-        }
-        else
-        {
-            aux->u64[0] = _regs.r(ax);
-            aux->u64[1] = _regs.r(dx);
-        }
-
-        if ( memcmp(old, aux, op_bytes) )
-        {
-        cmpxchgNb_failed:
-            /* Expected != actual: store actual to rDX:rAX and clear ZF. */
-            _regs.r(ax) = !(rex_prefix & REX_W) ? old->u32[0] : old->u64[0];
-            _regs.r(dx) = !(rex_prefix & REX_W) ? old->u32[1] : old->u64[1];
-            _regs.eflags &= ~X86_EFLAGS_ZF;
-        }
-        else
-        {
-            /*
-             * Expected == actual: Get proposed value, attempt atomic cmpxchg
-             * and set ZF if successful.
-             */
-            if ( !(rex_prefix & REX_W) )
-            {
-                aux->u32[0] = _regs.ebx;
-                aux->u32[1] = _regs.ecx;
-            }
-            else
-            {
-                aux->u64[0] = _regs.r(bx);
-                aux->u64[1] = _regs.r(cx);
-            }
-
-            switch ( rc = ops->cmpxchg(ea.mem.seg, ea.mem.off, old, aux,
-                                       op_bytes, lock_prefix, ctxt) )
-            {
-            case X86EMUL_OKAY:
-                _regs.eflags |= X86_EFLAGS_ZF;
-                break;
-
-            case X86EMUL_CMPXCHG_FAILED:
-                rc = X86EMUL_OKAY;
-                goto cmpxchgNb_failed;
-
-            default:
-                goto done;
-            }
-        }
-        break;
-    }
+        rc = x86emul_0fc7(state, &_regs, &dst, ctxt, ops, mmvalp);
+        goto dispatch_from_helper;
 
     case X86EMUL_OPC(0x0f, 0xc8) ... X86EMUL_OPC(0x0f, 0xcf): /* bswap */
         dst.type = OP_REG;



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:00:07 2022
Message-ID: <8d86448f-7bc1-245d-fc4f-18b1c816d728@suse.com>
Date: Wed, 15 Jun 2022 11:59:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 4/8] x86emul: split off FPU opcode handling
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Some of the helper functions/macros are needed only for FPU handling, and
the code is otherwise relatively independent of other parts of the
emulator.

The moved code is slightly adjusted in a few places, e.g. EXC_* is
replaced by X86_EXC_* (so that the EXC_* definitions don't need to move
as well; we want them phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -36,6 +36,7 @@ x86_emulate.h := x86-emulate.h x86_emula
 
 OBJS := fuzz-emul.o x86-emulate.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
+OBJS += x86_emulate/fpu.o
 
 # x86-emulate.c will be implicit for both
 x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h) x86_emulate/private.h
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -252,6 +252,7 @@ endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
+OBJS += x86_emulate/fpu.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -29,12 +29,6 @@
 # define __OP          "r"  /* Operand Prefix */
 #endif
 
-#define get_stub(stb) ({                         \
-    assert(!(stb).addr);                         \
-    (void *)((stb).addr = (uintptr_t)(stb).buf); \
-})
-#define put_stub(stb) ((stb).addr = 0)
-
 uint32_t mxcsr_mask = 0x0000ffbf;
 struct cpuid_policy cp;
 
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,3 +1,4 @@
 obj-y += 0f01.o
 obj-y += 0fae.o
 obj-y += 0fc7.o
+obj-$(CONFIG_HVM) += fpu.o
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/fpu.c
@@ -0,0 +1,491 @@
+/******************************************************************************
+ * fpu.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * Copyright (c) 2005-2007 Keir Fraser
+ * Copyright (c) 2005-2007 XenSource Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#ifdef __XEN__
+# include <asm/amd.h>
+# define cpu_has_amd_erratum(nr) \
+         cpu_has_amd_erratum(&current_cpu_data, AMD_ERRATUM_##nr)
+#else
+# define cpu_has_amd_erratum(nr) 0
+#endif
+
+/* Floating point status word definitions. */
+#define FSW_ES    (1U << 7)
+
+static inline bool fpu_check_write(void)
+{
+    uint16_t fsw;
+
+    asm ( "fnstsw %0" : "=am" (fsw) );
+
+    return !(fsw & FSW_ES);
+}
+
+#define emulate_fpu_insn_memdst(opc, ext, arg)                          \
+do {                                                                    \
+    /* ModRM: mod=0, reg=ext, rm=0, i.e. a (%rax) operand */            \
+    *insn_bytes = 2;                                                    \
+    memcpy(get_stub(stub),                                              \
+           ((uint8_t[]){ opc, ((ext) & 7) << 3, 0xc3 }), 3);            \
+    invoke_stub("", "", "+m" (arg) : "a" (&(arg)));                     \
+    put_stub(stub);                                                     \
+} while (0)
+
+#define emulate_fpu_insn_memsrc(opc, ext, arg)                          \
+do {                                                                    \
+    /* ModRM: mod=0, reg=ext, rm=0, i.e. a (%rax) operand */            \
+    memcpy(get_stub(stub),                                              \
+           ((uint8_t[]){ opc, ((ext) & 7) << 3, 0xc3 }), 3);            \
+    invoke_stub("", "", "=m" (dummy) : "m" (arg), "a" (&(arg)));        \
+    put_stub(stub);                                                     \
+} while (0)
+
+#define emulate_fpu_insn_stub(bytes...)                                 \
+do {                                                                    \
+    unsigned int nr_ = sizeof((uint8_t[]){ bytes });                    \
+    memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
+    invoke_stub("", "", "=m" (dummy) : "i" (0));                        \
+    put_stub(stub);                                                     \
+} while (0)
+
+#define emulate_fpu_insn_stub_eflags(bytes...)                          \
+do {                                                                    \
+    unsigned int nr_ = sizeof((uint8_t[]){ bytes });                    \
+    unsigned long tmp_;                                                 \
+    memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
+    invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),             \
+                _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),            \
+                [eflags] "+g" (regs->eflags), [tmp] "=&r" (tmp_)        \
+                : [mask] "i" (X86_EFLAGS_ZF|X86_EFLAGS_PF|X86_EFLAGS_CF)); \
+    put_stub(stub);                                                     \
+} while (0)
+
+int x86emul_fpu(struct x86_emulate_state *s,
+                struct cpu_user_regs *regs,
+                struct operand *dst,
+                struct operand *src,
+                struct x86_emulate_ctxt *ctxt,
+                const struct x86_emulate_ops *ops,
+                unsigned int *insn_bytes,
+                enum x86_emulate_fpu_type *fpu_type,
+#define fpu_type (*fpu_type) /* for get_fpu() */
+                struct stub_exn *stub_exn,
+#define stub_exn (*stub_exn) /* for invoke_stub() */
+                mmval_t *mmvalp)
+{
+    uint8_t b;
+    int rc;
+    struct x86_emulate_stub stub = {};
+
+    switch ( b = ctxt->opcode )
+    {
+        unsigned long dummy;
+
+    case 0x9b:  /* wait/fwait */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_wait);
+        emulate_fpu_insn_stub(b);
+        break;
+
+    case 0xd8: /* FPU 0xd8 */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* fadd %stN,%st */
+        case 0xc8 ... 0xcf: /* fmul %stN,%st */
+        case 0xd0 ... 0xd7: /* fcom %stN,%st */
+        case 0xd8 ... 0xdf: /* fcomp %stN,%st */
+        case 0xe0 ... 0xe7: /* fsub %stN,%st */
+        case 0xe8 ... 0xef: /* fsubr %stN,%st */
+        case 0xf0 ... 0xf7: /* fdiv %stN,%st */
+        case 0xf8 ... 0xff: /* fdivr %stN,%st */
+            emulate_fpu_insn_stub(0xd8, s->modrm);
+            break;
+        default:
+        fpu_memsrc32:
+            ASSERT(s->ea.type == OP_MEM);
+            if ( (rc = ops->read(s->ea.mem.seg, s->ea.mem.off, &src->val,
+                                 4, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            emulate_fpu_insn_memsrc(b, s->modrm_reg & 7, src->val);
+            break;
+        }
+        break;
+
+    case 0xd9: /* FPU 0xd9 */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xfb: /* fsincos */
+            fail_if(cpu_has_amd_erratum(573));
+            /* fall through */
+        case 0xc0 ... 0xc7: /* fld %stN */
+        case 0xc8 ... 0xcf: /* fxch %stN */
+        case 0xd0: /* fnop */
+        case 0xd8 ... 0xdf: /* fstp %stN (alternative encoding) */
+        case 0xe0: /* fchs */
+        case 0xe1: /* fabs */
+        case 0xe4: /* ftst */
+        case 0xe5: /* fxam */
+        case 0xe8: /* fld1 */
+        case 0xe9: /* fldl2t */
+        case 0xea: /* fldl2e */
+        case 0xeb: /* fldpi */
+        case 0xec: /* fldlg2 */
+        case 0xed: /* fldln2 */
+        case 0xee: /* fldz */
+        case 0xf0: /* f2xm1 */
+        case 0xf1: /* fyl2x */
+        case 0xf2: /* fptan */
+        case 0xf3: /* fpatan */
+        case 0xf4: /* fxtract */
+        case 0xf5: /* fprem1 */
+        case 0xf6: /* fdecstp */
+        case 0xf7: /* fincstp */
+        case 0xf8: /* fprem */
+        case 0xf9: /* fyl2xp1 */
+        case 0xfa: /* fsqrt */
+        case 0xfc: /* frndint */
+        case 0xfd: /* fscale */
+        case 0xfe: /* fsin */
+        case 0xff: /* fcos */
+            emulate_fpu_insn_stub(0xd9, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            switch ( s->modrm_reg & 7 )
+            {
+            case 0: /* fld m32fp */
+                goto fpu_memsrc32;
+            case 2: /* fst m32fp */
+            case 3: /* fstp m32fp */
+            fpu_memdst32:
+                *dst = s->ea;
+                dst->bytes = 4;
+                emulate_fpu_insn_memdst(b, s->modrm_reg & 7, dst->val);
+                break;
+            case 4: /* fldenv */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
+            case 6: /* fnstenv */
+                fail_if(!ops->blk);
+                s->blk = s->modrm_reg & 2 ? blk_fst : blk_fld;
+                /*
+                 * REX is meaningless for these insns by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                s->rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(s->ea.mem.seg, s->ea.mem.off, NULL,
+                                    s->op_bytes > 2 ? sizeof(struct x87_env32)
+                                                    : sizeof(struct x87_env16),
+                                    &regs->eflags,
+                                    s, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                s->fpu_ctrl = true;
+                break;
+            case 5: /* fldcw m2byte */
+                s->fpu_ctrl = true;
+            fpu_memsrc16:
+                if ( (rc = ops->read(s->ea.mem.seg, s->ea.mem.off, &src->val,
+                                     2, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                emulate_fpu_insn_memsrc(b, s->modrm_reg & 7, src->val);
+                break;
+            case 7: /* fnstcw m2byte */
+                s->fpu_ctrl = true;
+            fpu_memdst16:
+                *dst = s->ea;
+                dst->bytes = 2;
+                emulate_fpu_insn_memdst(b, s->modrm_reg & 7, dst->val);
+                break;
+            default:
+                generate_exception(X86_EXC_UD);
+            }
+            /*
+             * Control instructions can't raise FPU exceptions, so we need
+             * to consider suppressing writes only for non-control ones.
+             */
+            if ( dst->type == OP_MEM && !s->fpu_ctrl && !fpu_check_write() )
+                dst->type = OP_NONE;
+        }
+        break;
+
+    case 0xda: /* FPU 0xda */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* fcmovb %stN */
+        case 0xc8 ... 0xcf: /* fcmove %stN */
+        case 0xd0 ... 0xd7: /* fcmovbe %stN */
+        case 0xd8 ... 0xdf: /* fcmovu %stN */
+            vcpu_must_have(cmov);
+            emulate_fpu_insn_stub_eflags(0xda, s->modrm);
+            break;
+        case 0xe9:          /* fucompp */
+            emulate_fpu_insn_stub(0xda, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            goto fpu_memsrc32;
+        }
+        break;
+
+    case 0xdb: /* FPU 0xdb */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* fcmovnb %stN */
+        case 0xc8 ... 0xcf: /* fcmovne %stN */
+        case 0xd0 ... 0xd7: /* fcmovnbe %stN */
+        case 0xd8 ... 0xdf: /* fcmovnu %stN */
+        case 0xe8 ... 0xef: /* fucomi %stN */
+        case 0xf0 ... 0xf7: /* fcomi %stN */
+            vcpu_must_have(cmov);
+            emulate_fpu_insn_stub_eflags(0xdb, s->modrm);
+            break;
+        case 0xe0: /* fneni - 8087 only, ignored by 287 */
+        case 0xe1: /* fndisi - 8087 only, ignored by 287 */
+        case 0xe2: /* fnclex */
+        case 0xe3: /* fninit */
+        case 0xe4: /* fnsetpm - 287 only, ignored by 387 */
+        /* case 0xe5: frstpm - 287 only, #UD on 387 */
+            s->fpu_ctrl = true;
+            emulate_fpu_insn_stub(0xdb, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            switch ( s->modrm_reg & 7 )
+            {
+            case 0: /* fild m32i */
+                goto fpu_memsrc32;
+            case 1: /* fisttp m32i */
+                host_and_vcpu_must_have(sse3);
+                /* fall through */
+            case 2: /* fist m32i */
+            case 3: /* fistp m32i */
+                goto fpu_memdst32;
+            case 5: /* fld m80fp */
+            fpu_memsrc80:
+                if ( (rc = ops->read(s->ea.mem.seg, s->ea.mem.off, mmvalp,
+                                     10, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                emulate_fpu_insn_memsrc(b, s->modrm_reg & 7, *mmvalp);
+                break;
+            case 7: /* fstp m80fp */
+            fpu_memdst80:
+                fail_if(!ops->write);
+                emulate_fpu_insn_memdst(b, s->modrm_reg & 7, *mmvalp);
+                if ( fpu_check_write() &&
+                     (rc = ops->write(s->ea.mem.seg, s->ea.mem.off, mmvalp,
+                                      10, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                break;
+            default:
+                generate_exception(X86_EXC_UD);
+            }
+        }
+        break;
+
+    case 0xdc: /* FPU 0xdc */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* fadd %st,%stN */
+        case 0xc8 ... 0xcf: /* fmul %st,%stN */
+        case 0xd0 ... 0xd7: /* fcom %stN,%st (alternative encoding) */
+        case 0xd8 ... 0xdf: /* fcomp %stN,%st (alternative encoding) */
+        case 0xe0 ... 0xe7: /* fsubr %st,%stN */
+        case 0xe8 ... 0xef: /* fsub %st,%stN */
+        case 0xf0 ... 0xf7: /* fdivr %st,%stN */
+        case 0xf8 ... 0xff: /* fdiv %st,%stN */
+            emulate_fpu_insn_stub(0xdc, s->modrm);
+            break;
+        default:
+        fpu_memsrc64:
+            ASSERT(s->ea.type == OP_MEM);
+            if ( (rc = ops->read(s->ea.mem.seg, s->ea.mem.off, &src->val,
+                                 8, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            emulate_fpu_insn_memsrc(b, s->modrm_reg & 7, src->val);
+            break;
+        }
+        break;
+
+    case 0xdd: /* FPU 0xdd */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* ffree %stN */
+        case 0xc8 ... 0xcf: /* fxch %stN (alternative encoding) */
+        case 0xd0 ... 0xd7: /* fst %stN */
+        case 0xd8 ... 0xdf: /* fstp %stN */
+        case 0xe0 ... 0xe7: /* fucom %stN */
+        case 0xe8 ... 0xef: /* fucomp %stN */
+            emulate_fpu_insn_stub(0xdd, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            switch ( s->modrm_reg & 7 )
+            {
+            case 0: /* fld m64fp */
+                goto fpu_memsrc64;
+            case 1: /* fisttp m64i */
+                host_and_vcpu_must_have(sse3);
+                /* fall through */
+            case 2: /* fst m64fp */
+            case 3: /* fstp m64fp */
+            fpu_memdst64:
+                *dst = s->ea;
+                dst->bytes = 8;
+                emulate_fpu_insn_memdst(b, s->modrm_reg & 7, dst->val);
+                break;
+            case 4: /* frstor */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
+            case 6: /* fnsave */
+                fail_if(!ops->blk);
+                s->blk = s->modrm_reg & 2 ? blk_fst : blk_fld;
+                /*
+                 * REX is meaningless for these insns by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                s->rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(s->ea.mem.seg, s->ea.mem.off, NULL,
+                                    s->op_bytes > 2 ? sizeof(struct x87_env32) + 80
+                                                    : sizeof(struct x87_env16) + 80,
+                                    &regs->eflags,
+                                    s, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                s->fpu_ctrl = true;
+                break;
+            case 7: /* fnstsw m2byte */
+                s->fpu_ctrl = true;
+                goto fpu_memdst16;
+            default:
+                generate_exception(X86_EXC_UD);
+            }
+            /*
+             * Control instructions can't raise FPU exceptions, so we need
+             * to consider suppressing writes only for non-control ones.
+             */
+            if ( dst->type == OP_MEM && !s->fpu_ctrl && !fpu_check_write() )
+                dst->type = OP_NONE;
+        }
+        break;
+
+    case 0xde: /* FPU 0xde */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xc0 ... 0xc7: /* faddp %stN */
+        case 0xc8 ... 0xcf: /* fmulp %stN */
+        case 0xd0 ... 0xd7: /* fcomp %stN (alternative encoding) */
+        case 0xd9: /* fcompp */
+        case 0xe0 ... 0xe7: /* fsubrp %stN */
+        case 0xe8 ... 0xef: /* fsubp %stN */
+        case 0xf0 ... 0xf7: /* fdivrp %stN */
+        case 0xf8 ... 0xff: /* fdivp %stN */
+            emulate_fpu_insn_stub(0xde, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            emulate_fpu_insn_memsrc(b, s->modrm_reg & 7, src->val);
+            break;
+        }
+        break;
+
+    case 0xdf: /* FPU 0xdf */
+        host_and_vcpu_must_have(fpu);
+        get_fpu(X86EMUL_FPU_fpu);
+        switch ( s->modrm )
+        {
+        case 0xe0:
+            /* fnstsw %ax */
+            s->fpu_ctrl = true;
+            dst->bytes = 2;
+            dst->type = OP_REG;
+            dst->reg = (void *)&regs->ax;
+            emulate_fpu_insn_memdst(b, s->modrm_reg & 7, dst->val);
+            break;
+        case 0xe8 ... 0xef: /* fucomip %stN */
+        case 0xf0 ... 0xf7: /* fcomip %stN */
+            vcpu_must_have(cmov);
+            emulate_fpu_insn_stub_eflags(0xdf, s->modrm);
+            break;
+        case 0xc0 ... 0xc7: /* ffreep %stN */
+        case 0xc8 ... 0xcf: /* fxch %stN (alternative encoding) */
+        case 0xd0 ... 0xd7: /* fstp %stN (alternative encoding) */
+        case 0xd8 ... 0xdf: /* fstp %stN (alternative encoding) */
+            emulate_fpu_insn_stub(0xdf, s->modrm);
+            break;
+        default:
+            generate_exception_if(s->ea.type != OP_MEM, X86_EXC_UD);
+            switch ( s->modrm_reg & 7 )
+            {
+            case 0: /* fild m16i */
+                goto fpu_memsrc16;
+            case 1: /* fisttp m16i */
+                host_and_vcpu_must_have(sse3);
+                /* fall through */
+            case 2: /* fist m16i */
+            case 3: /* fistp m16i */
+                goto fpu_memdst16;
+            case 4: /* fbld m80dec */
+                goto fpu_memsrc80;
+            case 5: /* fild m64i */
+                dst->type = OP_NONE;
+                goto fpu_memsrc64;
+            case 6: /* fbstp packed bcd */
+                goto fpu_memdst80;
+            case 7: /* fistp m64i */
+                goto fpu_memdst64;
+            }
+        }
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    rc = X86EMUL_OKAY;
+
+ done:
+    put_stub(stub);
+    return rc;
+
+#ifdef __XEN__
+ emulation_stub_failure:
+    return X86EMUL_stub_failure;
+#endif
+}
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -21,6 +21,7 @@
 #ifdef __XEN__
 
 # include <xen/kernel.h>
+# include <asm/endbr.h>
 # include <asm/msr-index.h>
 # include <asm/x86_emulate.h>
 
@@ -339,12 +340,57 @@ struct x86_fxsr {
     uint64_t avl[6];
 };
 
+#ifndef X86EMUL_NO_FPU
+struct x87_env16 {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint16_t ftw;
+    union {
+        struct {
+            uint16_t fip_lo;
+            uint16_t fop:11, :1, fip_hi:4;
+            uint16_t fdp_lo;
+            uint16_t :12, fdp_hi:4;
+        } real;
+        struct {
+            uint16_t fip;
+            uint16_t fcs;
+            uint16_t fdp;
+            uint16_t fds;
+        } prot;
+    } mode;
+};
+
+struct x87_env32 {
+    uint32_t fcw:16, :16;
+    uint32_t fsw:16, :16;
+    uint32_t ftw:16, :16;
+    union {
+        struct {
+            /* some CPUs/FPUs also store the full FIP here */
+            uint32_t fip_lo:16, :16;
+            uint32_t fop:11, :1, fip_hi:16, :4;
+            /* some CPUs/FPUs also store the full FDP here */
+            uint32_t fdp_lo:16, :16;
+            uint32_t :12, fdp_hi:16, :4;
+        } real;
+        struct {
+            uint32_t fip;
+            uint32_t fcs:16, fop:11, :5;
+            uint32_t fdp;
+            uint32_t fds:16, :16;
+        } prot;
+    } mode;
+};
+#endif
+
 /*
  * Externally visible return codes from x86_emulate() are non-negative.
  * Use negative values for internal state change indicators from helpers
  * to the main function.
  */
 #define X86EMUL_rdtsc        (-1)
+#define X86EMUL_stub_failure (-2)
 
 /*
  * These EFLAGS bits are restored from saved value during emulation, and
@@ -541,6 +587,122 @@ amd_like(const struct x86_emulate_ctxt *
 # define host_and_vcpu_must_have(feat) vcpu_must_have(feat)
 #endif
 
+/*
+ * Instruction emulation:
+ * Most instructions are emulated directly via a fragment of inline assembly
+ * code. This allows us to save/restore EFLAGS and thus very easily pick up
+ * any modified flags.
+ */
+
+#if defined(__x86_64__)
+#define _LO32 "k"          /* force 32-bit operand */
+#define _STK  "%%rsp"      /* stack pointer */
+#define _BYTES_PER_LONG "8"
+#elif defined(__i386__)
+#define _LO32 ""           /* force 32-bit operand */
+#define _STK  "%%esp"      /* stack pointer */
+#define _BYTES_PER_LONG "4"
+#endif
+
+/* Before executing instruction: restore necessary bits in EFLAGS. */
+#define _PRE_EFLAGS(_sav, _msk, _tmp)                           \
+/* EFLAGS = (_sav & _msk) | (EFLAGS & ~_msk); _sav &= ~_msk; */ \
+"movl %"_LO32 _sav",%"_LO32 _tmp"; "                            \
+"push %"_tmp"; "                                                \
+"push %"_tmp"; "                                                \
+"movl %"_msk",%"_LO32 _tmp"; "                                  \
+"andl %"_LO32 _tmp",("_STK"); "                                 \
+"pushf; "                                                       \
+"notl %"_LO32 _tmp"; "                                          \
+"andl %"_LO32 _tmp",("_STK"); "                                 \
+"andl %"_LO32 _tmp",2*"_BYTES_PER_LONG"("_STK"); "              \
+"pop  %"_tmp"; "                                                \
+"orl  %"_LO32 _tmp",("_STK"); "                                 \
+"popf; "                                                        \
+"pop  %"_tmp"; "                                                \
+"movl %"_LO32 _tmp",%"_LO32 _sav"; "
+
+/* After executing instruction: write-back necessary bits in EFLAGS. */
+#define _POST_EFLAGS(_sav, _msk, _tmp)          \
+/* _sav |= EFLAGS & _msk; */                    \
+"pushf; "                                       \
+"pop  %"_tmp"; "                                \
+"andl %"_msk",%"_LO32 _tmp"; "                  \
+"orl  %"_LO32 _tmp",%"_LO32 _sav"; "
+
+#ifdef __XEN__
+
+# include <xen/domain_page.h>
+# include <asm/uaccess.h>
+
+# define get_stub(stb) ({                                    \
+    void *ptr;                                               \
+    BUILD_BUG_ON(STUB_BUF_SIZE / 2 < MAX_INST_LEN + 1);      \
+    ASSERT(!(stb).ptr);                                      \
+    (stb).addr = this_cpu(stubs.addr) + STUB_BUF_SIZE / 2;   \
+    (stb).ptr = map_domain_page(_mfn(this_cpu(stubs.mfn))) + \
+        ((stb).addr & ~PAGE_MASK);                           \
+    ptr = memset((stb).ptr, 0xcc, STUB_BUF_SIZE / 2);        \
+    if ( cpu_has_xen_ibt )                                   \
+    {                                                        \
+        place_endbr64(ptr);                                  \
+        ptr += 4;                                            \
+    }                                                        \
+    ptr;                                                     \
+})
+
+# define put_stub(stb) ({             \
+    if ( (stb).ptr )                  \
+    {                                 \
+        unmap_domain_page((stb).ptr); \
+        (stb).ptr = NULL;             \
+    }                                 \
+})
+
+
+struct stub_exn {
+    union stub_exception_token info;
+    unsigned int line;
+};
+
+# define invoke_stub(pre, post, constraints...) do {                    \
+    stub_exn.info = (union stub_exception_token) { .raw = ~0 };         \
+    stub_exn.line = __LINE__; /* Utility outweighs livepatching cost */ \
+    block_speculation(); /* SCSB */                                     \
+    asm volatile ( pre "\n\tINDIRECT_CALL %[stub]\n\t" post "\n"        \
+                   ".Lret%=:\n\t"                                       \
+                   ".pushsection .fixup,\"ax\"\n"                       \
+                   ".Lfix%=:\n\t"                                       \
+                   "pop %[exn]\n\t"                                     \
+                   "jmp .Lret%=\n\t"                                    \
+                   ".popsection\n\t"                                    \
+                   _ASM_EXTABLE(.Lret%=, .Lfix%=)                       \
+                   : [exn] "+g" (stub_exn.info) ASM_CALL_CONSTRAINT,    \
+                     constraints,                                       \
+                     [stub] "r" (stub.func),                            \
+                     "m" (*(uint8_t(*)[MAX_INST_LEN + 1])stub.ptr) );   \
+    if ( unlikely(~stub_exn.info.raw) )                                 \
+        goto emulation_stub_failure;                                    \
+} while (0)
+
+#else /* !__XEN__ */
+
+# define get_stub(stb) ({                        \
+    assert(!(stb).addr);                         \
+    (void *)((stb).addr = (uintptr_t)(stb).buf); \
+})
+
+# define put_stub(stb) ((stb).addr = 0)
+
+struct stub_exn {};
+
+# define invoke_stub(pre, post, constraints...)                         \
+    asm volatile ( pre "\n\tcall *%[stub]\n\t" post                     \
+                   : constraints, [stub] "rm" (stub.func),              \
+                     "m" (*(typeof(stub.buf) *)stub.addr) )
+
+#endif /* __XEN__ */
+
 int x86emul_get_cpl(struct x86_emulate_ctxt *ctxt,
                     const struct x86_emulate_ops *ops);
 
@@ -554,6 +716,16 @@ do {
     if ( rc ) goto done;                                        \
 } while (0)
 
+int x86emul_fpu(struct x86_emulate_state *s,
+                struct cpu_user_regs *regs,
+                struct operand *dst,
+                struct operand *src,
+                struct x86_emulate_ctxt *ctxt,
+                const struct x86_emulate_ops *ops,
+                unsigned int *insn_bytes,
+                enum x86_emulate_fpu_type *fpu_type,
+                struct stub_exn *stub_exn,
+                mmval_t *mmvalp);
 int x86emul_0f01(struct x86_emulate_state *s,
                  struct cpu_user_regs *regs,
                  struct operand *dst,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -643,50 +643,6 @@ static const uint8_t sse_prefix[] = { 0x
 #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #endif
 
-#ifndef X86EMUL_NO_FPU
-struct x87_env16 {
-    uint16_t fcw;
-    uint16_t fsw;
-    uint16_t ftw;
-    union {
-        struct {
-            uint16_t fip_lo;
-            uint16_t fop:11, :1, fip_hi:4;
-            uint16_t fdp_lo;
-            uint16_t :12, fdp_hi:4;
-        } real;
-        struct {
-            uint16_t fip;
-            uint16_t fcs;
-            uint16_t fdp;
-            uint16_t fds;
-        } prot;
-    } mode;
-};
-
-struct x87_env32 {
-    uint32_t fcw:16, :16;
-    uint32_t fsw:16, :16;
-    uint32_t ftw:16, :16;
-    union {
-        struct {
-            /* some CPUs/FPUs also store the full FIP here */
-            uint32_t fip_lo:16, :16;
-            uint32_t fop:11, :1, fip_hi:16, :4;
-            /* some CPUs/FPUs also store the full FDP here */
-            uint32_t fdp_lo:16, :16;
-            uint32_t :12, fdp_hi:16, :4;
-        } real;
-        struct {
-            uint32_t fip;
-            uint32_t fcs:16, fop:11, :5;
-            uint32_t fdp;
-            uint32_t fds:16, :16;
-        } prot;
-    } mode;
-};
-#endif
-
 /*
  * While proper alignment gets specified in mmval_t, this doesn't get honored
  * by the compiler for automatic variables. Use this helper to instantiate a
@@ -704,9 +660,6 @@ struct x87_env32 {
 # define ASM_FLAG_OUT(yes, no) no
 #endif
 
-/* Floating point status word definitions. */
-#define FSW_ES    (1U << 7)
-
 /* MXCSR bit definitions. */
 #define MXCSR_MM  (1U << 17)
 
@@ -737,49 +690,6 @@ struct x87_env32 {
 #define ECODE_IDT (1 << 1)
 #define ECODE_TI  (1 << 2)
 
-/*
- * Instruction emulation:
- * Most instructions are emulated directly via a fragment of inline assembly
- * code. This allows us to save/restore EFLAGS and thus very easily pick up
- * any modified flags.
- */
-
-#if defined(__x86_64__)
-#define _LO32 "k"          /* force 32-bit operand */
-#define _STK  "%%rsp"      /* stack pointer */
-#define _BYTES_PER_LONG "8"
-#elif defined(__i386__)
-#define _LO32 ""           /* force 32-bit operand */
-#define _STK  "%%esp"      /* stack pointer */
-#define _BYTES_PER_LONG "4"
-#endif
-
-/* Before executing instruction: restore necessary bits in EFLAGS. */
-#define _PRE_EFLAGS(_sav, _msk, _tmp)                           \
-/* EFLAGS = (_sav & _msk) | (EFLAGS & ~_msk); _sav &= ~_msk; */ \
-"movl %"_LO32 _sav",%"_LO32 _tmp"; "                            \
-"push %"_tmp"; "                                                \
-"push %"_tmp"; "                                                \
-"movl %"_msk",%"_LO32 _tmp"; "                                  \
-"andl %"_LO32 _tmp",("_STK"); "                                 \
-"pushf; "                                                       \
-"notl %"_LO32 _tmp"; "                                          \
-"andl %"_LO32 _tmp",("_STK"); "                                 \
-"andl %"_LO32 _tmp",2*"_BYTES_PER_LONG"("_STK"); "              \
-"pop  %"_tmp"; "                                                \
-"orl  %"_LO32 _tmp",("_STK"); "                                 \
-"popf; "                                                        \
-"pop  %"_tmp"; "                                                \
-"movl %"_LO32 _tmp",%"_LO32 _sav"; "
-
-/* After executing instruction: write-back necessary bits in EFLAGS. */
-#define _POST_EFLAGS(_sav, _msk, _tmp)          \
-/* _sav |= EFLAGS & _msk; */                    \
-"pushf; "                                       \
-"pop  %"_tmp"; "                                \
-"andl %"_msk",%"_LO32 _tmp"; "                  \
-"orl  %"_LO32 _tmp",%"_LO32 _sav"; "
-
 /* Raw emulation: instruction has two explicit operands. */
 #define __emulate_2op_nobyte(_op, src, dst, sz, eflags, wsx,wsy,wdx,wdy,   \
                              lsx,lsy,ldx,ldy, qsx,qsy,qdx,qdy, extra...)   \
@@ -913,33 +823,6 @@ do{ asm volatile (
 #define __emulate_1op_8byte(op, dst, eflags, extra...)
 #endif /* __i386__ */
 
-#ifdef __XEN__
-# define invoke_stub(pre, post, constraints...) do {                    \
-    stub_exn.info = (union stub_exception_token) { .raw = ~0 };         \
-    stub_exn.line = __LINE__; /* Utility outweighs livepatching cost */ \
-    block_speculation(); /* SCSB */                                     \
-    asm volatile ( pre "\n\tINDIRECT_CALL %[stub]\n\t" post "\n"        \
-                   ".Lret%=:\n\t"                                       \
-                   ".pushsection .fixup,\"ax\"\n"                       \
-                   ".Lfix%=:\n\t"                                       \
-                   "pop %[exn]\n\t"                                     \
-                   "jmp .Lret%=\n\t"                                    \
-                   ".popsection\n\t"                                    \
-                   _ASM_EXTABLE(.Lret%=, .Lfix%=)                       \
-                   : [exn] "+g" (stub_exn.info) ASM_CALL_CONSTRAINT,    \
-                     constraints,                                       \
-                     [stub] "r" (stub.func),                            \
-                     "m" (*(uint8_t(*)[MAX_INST_LEN + 1])stub.ptr) );   \
-    if ( unlikely(~stub_exn.info.raw) )                                 \
-        goto emulation_stub_failure;                                    \
-} while (0)
-#else
-# define invoke_stub(pre, post, constraints...)                         \
-    asm volatile ( pre "\n\tcall *%[stub]\n\t" post                     \
-                   : constraints, [stub] "rm" (stub.func),              \
-                     "m" (*(typeof(stub.buf) *)stub.addr) )
-#endif
-
 #define emulate_stub(dst, src...) do {                                  \
     unsigned long tmp;                                                  \
     invoke_stub(_PRE_EFLAGS("[efl]", "[msk]", "[tmp]"),                 \
@@ -1162,54 +1045,6 @@ static void put_fpu(
         ops->put_fpu(ctxt, X86EMUL_FPU_none, NULL);
 }
 
-static inline bool fpu_check_write(void)
-{
-    uint16_t fsw;
-
-    asm ( "fnstsw %0" : "=am" (fsw) );
-
-    return !(fsw & FSW_ES);
-}
-
-#define emulate_fpu_insn_memdst(opc, ext, arg)                          \
-do {                                                                    \
-    /* ModRM: mod=0, reg=ext, rm=0, i.e. a (%rax) operand */            \
-    insn_bytes = 2;                                                     \
-    memcpy(get_stub(stub),                                              \
-           ((uint8_t[]){ opc, ((ext) & 7) << 3, 0xc3 }), 3);            \
-    invoke_stub("", "", "+m" (arg) : "a" (&(arg)));                     \
-    put_stub(stub);                                                     \
-} while (0)
-
-#define emulate_fpu_insn_memsrc(opc, ext, arg)                          \
-do {                                                                    \
-    /* ModRM: mod=0, reg=ext, rm=0, i.e. a (%rax) operand */            \
-    memcpy(get_stub(stub),                                              \
-           ((uint8_t[]){ opc, ((ext) & 7) << 3, 0xc3 }), 3);            \
-    invoke_stub("", "", "=m" (dummy) : "m" (arg), "a" (&(arg)));        \
-    put_stub(stub);                                                     \
-} while (0)
-
-#define emulate_fpu_insn_stub(bytes...)                                 \
-do {                                                                    \
-    unsigned int nr_ = sizeof((uint8_t[]){ bytes });                    \
-    memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
-    invoke_stub("", "", "=m" (dummy) : "i" (0));                        \
-    put_stub(stub);                                                     \
-} while (0)
-
-#define emulate_fpu_insn_stub_eflags(bytes...)                          \
-do {                                                                    \
-    unsigned int nr_ = sizeof((uint8_t[]){ bytes });                    \
-    unsigned long tmp_;                                                 \
-    memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
-    invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),             \
-                _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),            \
-                [eflags] "+g" (_regs.eflags), [tmp] "=&r" (tmp_)        \
-                : [mask] "i" (X86_EFLAGS_ZF|X86_EFLAGS_PF|X86_EFLAGS_CF)); \
-    put_stub(stub);                                                     \
-} while (0)
-
 static inline unsigned long get_loop_count(
     const struct cpu_user_regs *regs,
     int ad_bytes)
@@ -3154,12 +2989,7 @@ x86_emulate(
     enum x86_emulate_fpu_type fpu_type = X86EMUL_FPU_none;
     struct x86_emulate_stub stub = {};
     DECLARE_ALIGNED(mmval_t, mmval);
-#ifdef __XEN__
-    struct {
-        union stub_exception_token info;
-        unsigned int line;
-    } stub_exn;
-#endif
+    struct stub_exn stub_exn = {};
 
     ASSERT(ops->read);
 
@@ -3950,10 +3780,10 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_FPU
     case 0x9b:  /* wait/fwait */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_wait);
-        emulate_fpu_insn_stub(b);
-        break;
+    case 0xd8 ... 0xdf: /* FPU */
+        rc = x86emul_fpu(state, &_regs, &dst, &src, ctxt, ops,
+                         &insn_bytes, &fpu_type, &stub_exn, mmvalp);
+        goto dispatch_from_helper;
 #endif
 
     case 0x9c: /* pushf */
@@ -4364,373 +4194,6 @@ x86_emulate(
         break;
     }
 
-#ifndef X86EMUL_NO_FPU
-    case 0xd8: /* FPU 0xd8 */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* fadd %stN,%st */
-        case 0xc8 ... 0xcf: /* fmul %stN,%st */
-        case 0xd0 ... 0xd7: /* fcom %stN,%st */
-        case 0xd8 ... 0xdf: /* fcomp %stN,%st */
-        case 0xe0 ... 0xe7: /* fsub %stN,%st */
-        case 0xe8 ... 0xef: /* fsubr %stN,%st */
-        case 0xf0 ... 0xf7: /* fdiv %stN,%st */
-        case 0xf8 ... 0xff: /* fdivr %stN,%st */
-            emulate_fpu_insn_stub(0xd8, modrm);
-            break;
-        default:
-        fpu_memsrc32:
-            ASSERT(ea.type == OP_MEM);
-            if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                 4, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-            break;
-        }
-        break;
-
-    case 0xd9: /* FPU 0xd9 */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xfb: /* fsincos */
-            fail_if(cpu_has_amd_erratum(573));
-            /* fall through */
-        case 0xc0 ... 0xc7: /* fld %stN */
-        case 0xc8 ... 0xcf: /* fxch %stN */
-        case 0xd0: /* fnop */
-        case 0xd8 ... 0xdf: /* fstp %stN (alternative encoding) */
-        case 0xe0: /* fchs */
-        case 0xe1: /* fabs */
-        case 0xe4: /* ftst */
-        case 0xe5: /* fxam */
-        case 0xe8: /* fld1 */
-        case 0xe9: /* fldl2t */
-        case 0xea: /* fldl2e */
-        case 0xeb: /* fldpi */
-        case 0xec: /* fldlg2 */
-        case 0xed: /* fldln2 */
-        case 0xee: /* fldz */
-        case 0xf0: /* f2xm1 */
-        case 0xf1: /* fyl2x */
-        case 0xf2: /* fptan */
-        case 0xf3: /* fpatan */
-        case 0xf4: /* fxtract */
-        case 0xf5: /* fprem1 */
-        case 0xf6: /* fdecstp */
-        case 0xf7: /* fincstp */
-        case 0xf8: /* fprem */
-        case 0xf9: /* fyl2xp1 */
-        case 0xfa: /* fsqrt */
-        case 0xfc: /* frndint */
-        case 0xfd: /* fscale */
-        case 0xfe: /* fsin */
-        case 0xff: /* fcos */
-            emulate_fpu_insn_stub(0xd9, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            switch ( modrm_reg & 7 )
-            {
-            case 0: /* fld m32fp */
-                goto fpu_memsrc32;
-            case 2: /* fst m32fp */
-            case 3: /* fstp m32fp */
-            fpu_memdst32:
-                dst = ea;
-                dst.bytes = 4;
-                emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
-                break;
-            case 4: /* fldenv */
-                /* Raise #MF now if there are pending unmasked exceptions. */
-                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
-                /* fall through */
-            case 6: /* fnstenv */
-                fail_if(!ops->blk);
-                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
-                /*
-                 * REX is meaningless for these insns by this point - (ab)use
-                 * the field to communicate real vs protected mode to ->blk().
-                 */
-                /*state->*/rex_prefix = in_protmode(ctxt, ops);
-                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
-                                    op_bytes > 2 ? sizeof(struct x87_env32)
-                                                 : sizeof(struct x87_env16),
-                                    &_regs.eflags,
-                                    state, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                state->fpu_ctrl = true;
-                break;
-            case 5: /* fldcw m2byte */
-                state->fpu_ctrl = true;
-            fpu_memsrc16:
-                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                     2, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-                break;
-            case 7: /* fnstcw m2byte */
-                state->fpu_ctrl = true;
-            fpu_memdst16:
-                dst = ea;
-                dst.bytes = 2;
-                emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
-                break;
-            default:
-                generate_exception(EXC_UD);
-            }
-            /*
-             * Control instructions can't raise FPU exceptions, so we need
-             * to consider suppressing writes only for non-control ones.
-             */
-            if ( dst.type == OP_MEM && !state->fpu_ctrl && !fpu_check_write() )
-                dst.type = OP_NONE;
-        }
-        break;
-
-    case 0xda: /* FPU 0xda */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* fcmovb %stN */
-        case 0xc8 ... 0xcf: /* fcmove %stN */
-        case 0xd0 ... 0xd7: /* fcmovbe %stN */
-        case 0xd8 ... 0xdf: /* fcmovu %stN */
-            vcpu_must_have(cmov);
-            emulate_fpu_insn_stub_eflags(0xda, modrm);
-            break;
-        case 0xe9:          /* fucompp */
-            emulate_fpu_insn_stub(0xda, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            goto fpu_memsrc32;
-        }
-        break;
-
-    case 0xdb: /* FPU 0xdb */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* fcmovnb %stN */
-        case 0xc8 ... 0xcf: /* fcmovne %stN */
-        case 0xd0 ... 0xd7: /* fcmovnbe %stN */
-        case 0xd8 ... 0xdf: /* fcmovnu %stN */
-        case 0xe8 ... 0xef: /* fucomi %stN */
-        case 0xf0 ... 0xf7: /* fcomi %stN */
-            vcpu_must_have(cmov);
-            emulate_fpu_insn_stub_eflags(0xdb, modrm);
-            break;
-        case 0xe0: /* fneni - 8087 only, ignored by 287 */
-        case 0xe1: /* fndisi - 8087 only, ignored by 287 */
-        case 0xe2: /* fnclex */
-        case 0xe3: /* fninit */
-        case 0xe4: /* fnsetpm - 287 only, ignored by 387 */
-        /* case 0xe5: frstpm - 287 only, #UD on 387 */
-            state->fpu_ctrl = true;
-            emulate_fpu_insn_stub(0xdb, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            switch ( modrm_reg & 7 )
-            {
-            case 0: /* fild m32i */
-                goto fpu_memsrc32;
-            case 1: /* fisttp m32i */
-                host_and_vcpu_must_have(sse3);
-                /* fall through */
-            case 2: /* fist m32i */
-            case 3: /* fistp m32i */
-                goto fpu_memdst32;
-            case 5: /* fld m80fp */
-            fpu_memsrc80:
-                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp,
-                                     10, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                emulate_fpu_insn_memsrc(b, modrm_reg & 7, *mmvalp);
-                break;
-            case 7: /* fstp m80fp */
-            fpu_memdst80:
-                fail_if(!ops->write);
-                emulate_fpu_insn_memdst(b, modrm_reg & 7, *mmvalp);
-                if ( fpu_check_write() &&
-                     (rc = ops->write(ea.mem.seg, ea.mem.off, mmvalp,
-                                      10, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                break;
-            default:
-                generate_exception(EXC_UD);
-            }
-        }
-        break;
-
-    case 0xdc: /* FPU 0xdc */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* fadd %st,%stN */
-        case 0xc8 ... 0xcf: /* fmul %st,%stN */
-        case 0xd0 ... 0xd7: /* fcom %stN,%st (alternative encoding) */
-        case 0xd8 ... 0xdf: /* fcomp %stN,%st (alternative encoding) */
-        case 0xe0 ... 0xe7: /* fsubr %st,%stN */
-        case 0xe8 ... 0xef: /* fsub %st,%stN */
-        case 0xf0 ... 0xf7: /* fdivr %st,%stN */
-        case 0xf8 ... 0xff: /* fdiv %st,%stN */
-            emulate_fpu_insn_stub(0xdc, modrm);
-            break;
-        default:
-        fpu_memsrc64:
-            ASSERT(ea.type == OP_MEM);
-            if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                 8, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-            break;
-        }
-        break;
-
-    case 0xdd: /* FPU 0xdd */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* ffree %stN */
-        case 0xc8 ... 0xcf: /* fxch %stN (alternative encoding) */
-        case 0xd0 ... 0xd7: /* fst %stN */
-        case 0xd8 ... 0xdf: /* fstp %stN */
-        case 0xe0 ... 0xe7: /* fucom %stN */
-        case 0xe8 ... 0xef: /* fucomp %stN */
-            emulate_fpu_insn_stub(0xdd, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            switch ( modrm_reg & 7 )
-            {
-            case 0: /* fld m64fp */;
-                goto fpu_memsrc64;
-            case 1: /* fisttp m64i */
-                host_and_vcpu_must_have(sse3);
-                /* fall through */
-            case 2: /* fst m64fp */
-            case 3: /* fstp m64fp */
-            fpu_memdst64:
-                dst = ea;
-                dst.bytes = 8;
-                emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
-                break;
-            case 4: /* frstor */
-                /* Raise #MF now if there are pending unmasked exceptions. */
-                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
-                /* fall through */
-            case 6: /* fnsave */
-                fail_if(!ops->blk);
-                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
-                /*
-                 * REX is meaningless for these insns by this point - (ab)use
-                 * the field to communicate real vs protected mode to ->blk().
-                 */
-                /*state->*/rex_prefix = in_protmode(ctxt, ops);
-                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
-                                    op_bytes > 2 ? sizeof(struct x87_env32) + 80
-                                                 : sizeof(struct x87_env16) + 80,
-                                    &_regs.eflags,
-                                    state, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                state->fpu_ctrl = true;
-                break;
-            case 7: /* fnstsw m2byte */
-                state->fpu_ctrl = true;
-                goto fpu_memdst16;
-            default:
-                generate_exception(EXC_UD);
-            }
-            /*
-             * Control instructions can't raise FPU exceptions, so we need
-             * to consider suppressing writes only for non-control ones.
-             */
-            if ( dst.type == OP_MEM && !state->fpu_ctrl && !fpu_check_write() )
-                dst.type = OP_NONE;
-        }
-        break;
-
-    case 0xde: /* FPU 0xde */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xc0 ... 0xc7: /* faddp %stN */
-        case 0xc8 ... 0xcf: /* fmulp %stN */
-        case 0xd0 ... 0xd7: /* fcomp %stN (alternative encoding) */
-        case 0xd9: /* fcompp */
-        case 0xe0 ... 0xe7: /* fsubrp %stN */
-        case 0xe8 ... 0xef: /* fsubp %stN */
-        case 0xf0 ... 0xf7: /* fdivrp %stN */
-        case 0xf8 ... 0xff: /* fdivp %stN */
-            emulate_fpu_insn_stub(0xde, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-            break;
-        }
-        break;
-
-    case 0xdf: /* FPU 0xdf */
-        host_and_vcpu_must_have(fpu);
-        get_fpu(X86EMUL_FPU_fpu);
-        switch ( modrm )
-        {
-        case 0xe0:
-            /* fnstsw %ax */
-            state->fpu_ctrl = true;
-            dst.bytes = 2;
-            dst.type = OP_REG;
-            dst.reg = (void *)&_regs.ax;
-            emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
-            break;
-        case 0xe8 ... 0xef: /* fucomip %stN */
-        case 0xf0 ... 0xf7: /* fcomip %stN */
-            vcpu_must_have(cmov);
-            emulate_fpu_insn_stub_eflags(0xdf, modrm);
-            break;
-        case 0xc0 ... 0xc7: /* ffreep %stN */
-        case 0xc8 ... 0xcf: /* fxch %stN (alternative encoding) */
-        case 0xd0 ... 0xd7: /* fstp %stN (alternative encoding) */
-        case 0xd8 ... 0xdf: /* fstp %stN (alternative encoding) */
-            emulate_fpu_insn_stub(0xdf, modrm);
-            break;
-        default:
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            switch ( modrm_reg & 7 )
-            {
-            case 0: /* fild m16i */
-                goto fpu_memsrc16;
-            case 1: /* fisttp m16i */
-                host_and_vcpu_must_have(sse3);
-                /* fall through */
-            case 2: /* fist m16i */
-            case 3: /* fistp m16i */
-                goto fpu_memdst16;
-            case 4: /* fbld m80dec */
-                goto fpu_memsrc80;
-            case 5: /* fild m64i */
-                dst.type = OP_NONE;
-                goto fpu_memsrc64;
-            case 6: /* fbstp packed bcd */
-                goto fpu_memdst80;
-            case 7: /* fistp m64i */
-                goto fpu_memdst64;
-            }
-        }
-        break;
-#endif /* !X86EMUL_NO_FPU */
-
     case 0xe0 ... 0xe2: /* loop{,z,nz} */ {
         unsigned long count = get_loop_count(&_regs, ad_bytes);
         int do_jmp = !(_regs.eflags & X86_EFLAGS_ZF); /* loopnz */
@@ -10157,6 +9620,11 @@ x86_emulate(
         {
         case X86EMUL_rdtsc:
             goto rdtsc;
+
+#ifdef __XEN__
+        case X86EMUL_stub_failure:
+            goto emulation_stub_failure;
+#endif
         }
 
         /* Internally used state change indicators may not make it here. */
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -9,7 +9,6 @@
  *    Keir Fraser <keir@xen.org>
  */
 
-#include <xen/domain_page.h>
 #include <xen/err.h>
 #include <xen/event.h>
 #include <asm/x86_emulate.h>
@@ -17,7 +16,6 @@
 #include <asm/xstate.h>
 #include <asm/amd.h> /* cpu_has_amd_erratum() */
 #include <asm/debugreg.h>
-#include <asm/endbr.h>
 
 /* Avoid namespace pollution. */
 #undef cmpxchg
@@ -27,29 +25,6 @@
 #define cpu_has_amd_erratum(nr) \
         cpu_has_amd_erratum(&current_cpu_data, AMD_ERRATUM_##nr)
 
-#define get_stub(stb) ({                                        \
-    void *ptr;                                                  \
-    BUILD_BUG_ON(STUB_BUF_SIZE / 2 < MAX_INST_LEN + 1);         \
-    ASSERT(!(stb).ptr);                                         \
-    (stb).addr = this_cpu(stubs.addr) + STUB_BUF_SIZE / 2;      \
-    (stb).ptr = map_domain_page(_mfn(this_cpu(stubs.mfn))) +    \
-        ((stb).addr & ~PAGE_MASK);                              \
-    ptr = memset((stb).ptr, 0xcc, STUB_BUF_SIZE / 2);           \
-    if ( cpu_has_xen_ibt )                                      \
-    {                                                           \
-        place_endbr64(ptr);                                     \
-        ptr += 4;                                               \
-    }                                                           \
-    ptr;                                                        \
-})
-#define put_stub(stb) ({                                   \
-    if ( (stb).ptr )                                       \
-    {                                                      \
-        unmap_domain_page((stb).ptr);                      \
-        (stb).ptr = NULL;                                  \
-    }                                                      \
-})
-
 #define FXSAVE_AREA current->arch.fpu_ctxt
 
 #include "x86_emulate/x86_emulate.c"
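
For readers skimming the movement of the `_PRE_EFLAGS`/`_POST_EFLAGS` pair above: the net effect of that inline-assembly dance is simply "run the instruction with the caller's arithmetic flags substituted in, then merge the masked flag bits back". A minimal sketch of that merge step in plain C (illustrative only, not part of the patch; `merge_eflags` is a made-up helper name):

```c
#include <stdint.h>

/*
 * Combined effect of _PRE_EFLAGS (_sav &= ~_msk) followed by
 * _POST_EFLAGS (_sav |= EFLAGS & _msk): bits selected by msk are
 * taken from the flags produced by the executed instruction (cur),
 * all other bits are kept from the saved value (sav).
 */
static uint32_t merge_eflags(uint32_t sav, uint32_t cur, uint32_t msk)
{
    return (sav & ~msk) | (cur & msk);
}
```

With e.g. `msk = 0x8D5` (the OF/SF/ZF/AF/PF/CF arithmetic-flag bits), only those bits of the emulated instruction's result reach the guest-visible EFLAGS.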



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:00:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:00:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349840.576021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PpC-0004e8-MG; Wed, 15 Jun 2022 10:00:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349840.576021; Wed, 15 Jun 2022 10:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1PpC-0004e1-Hm; Wed, 15 Jun 2022 10:00:34 +0000
Received: by outflank-mailman (input) for mailman id 349840;
 Wed, 15 Jun 2022 10:00:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1Pp9-0001TN-Nh
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:00:32 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on060f.outbound.protection.outlook.com
 [2a01:111:f400:fe05::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id faab09f1-ec91-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 12:00:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8922.eurprd04.prod.outlook.com (2603:10a6:20b:409::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Wed, 15 Jun
 2022 10:00:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:00:23 +0000
Message-ID: <a17f35c2-b1ef-0ae5-e1ce-99f34c63c77d@suse.com>
Date: Wed, 15 Jun 2022 12:00:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 5/8] x86emul: split off insn decoding
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

This is a fair chunk of code and data and can easily live separately from
the main emulation function.

The moved code is slightly adjusted in a few places, e.g. replacing EXC_*
with X86_EXC_* (so that the EXC_* definitions don't need to move as well;
we want them phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -36,7 +36,7 @@ x86_emulate.h := x86-emulate.h x86_emula
 
 OBJS := fuzz-emul.o x86-emulate.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
-OBJS += x86_emulate/fpu.o
+OBJS += x86_emulate/decode.o x86_emulate/fpu.o
 
 # x86-emulate.c will be implicit for both
 x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h) x86_emulate/private.h
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -252,7 +252,7 @@ endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
-OBJS += x86_emulate/fpu.o
+OBJS += x86_emulate/decode.o x86_emulate/fpu.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -3,11 +3,6 @@
 #include <errno.h>
 #include <sys/mman.h>
 
-#define DEFINE_PER_CPU(type, var) type per_cpu_##var
-#define this_cpu(var) per_cpu_##var
-
-#define ERR_PTR(val) NULL
-
 /* See gcc bug 100680, but here don't bother making this version dependent. */
 #define gcc11_wrap(x) ({                  \
     unsigned long x_;                     \
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -48,6 +48,9 @@
 #define ASSERT assert
 #define ASSERT_UNREACHABLE() assert(!__LINE__)
 
+#define DEFINE_PER_CPU(type, var) type per_cpu_##var
+#define this_cpu(var) per_cpu_##var
+
 #define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
 #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
 
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -9,7 +9,6 @@
  *    Keir Fraser <keir@xen.org>
  */
 
-#include <xen/err.h>
 #include <xen/event.h>
 #include <asm/x86_emulate.h>
 #include <asm/processor.h> /* current_cpu_info */
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,4 +1,5 @@
 obj-y += 0f01.o
 obj-y += 0fae.o
 obj-y += 0fc7.o
+obj-y += decode.o
 obj-$(CONFIG_HVM) += fpu.o
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -0,0 +1,1749 @@
+/******************************************************************************
+ * decode.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * Copyright (c) 2005-2007 Keir Fraser
+ * Copyright (c) 2005-2007 XenSource Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#ifdef __XEN__
+# include <xen/err.h>
+#else
+# define ERR_PTR(val) NULL
+#endif
+
+#define evex_encoded() (s->evex.mbs)
+
+struct x86_emulate_state *
+x86_decode_insn(
+    struct x86_emulate_ctxt *ctxt,
+    int (*insn_fetch)(
+        unsigned long offset, void *p_data, unsigned int bytes,
+        struct x86_emulate_ctxt *ctxt))
+{
+    static DEFINE_PER_CPU(struct x86_emulate_state, state);
+    struct x86_emulate_state *s = &this_cpu(state);
+    const struct x86_emulate_ops ops = {
+        .insn_fetch = insn_fetch,
+        .read       = x86emul_unhandleable_rw,
+    };
+    int rc;
+
+    init_context(ctxt);
+
+    rc = x86emul_decode(s, ctxt, &ops);
+    if ( unlikely(rc != X86EMUL_OKAY) )
+        return ERR_PTR(-rc);
+
+#if defined(__XEN__) && !defined(NDEBUG)
+    /*
+     * While we avoid memory allocation (by use of per-CPU data) above,
+     * nevertheless make sure callers properly release the state structure
+     * for forward compatibility.
+     */
+    if ( s->caller )
+    {
+        printk(XENLOG_ERR "Unreleased emulation state acquired by %ps\n",
+               s->caller);
+        dump_execution_state();
+    }
+    s->caller = __builtin_return_address(0);
+#endif
+
+    return s;
+}
+
+static const opcode_desc_t opcode_table[256] = {
+    /* 0x00 - 0x07 */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
+    /* 0x08 - 0x0F */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, 0,
+    /* 0x10 - 0x17 */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
+    /* 0x18 - 0x1F */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
+    /* 0x20 - 0x27 */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
+    /* 0x28 - 0x2F */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
+    /* 0x30 - 0x37 */
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
+    /* 0x38 - 0x3F */
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
+    /* 0x40 - 0x4F */
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    /* 0x50 - 0x5F */
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
+    /* 0x60 - 0x67 */
+    ImplicitOps, ImplicitOps, DstReg|SrcMem|ModRM, DstReg|SrcNone|ModRM|Mov,
+    0, 0, 0, 0,
+    /* 0x68 - 0x6F */
+    DstImplicit|SrcImm|Mov, DstReg|SrcImm|ModRM|Mov,
+    DstImplicit|SrcImmByte|Mov, DstReg|SrcImmByte|ModRM|Mov,
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
+    /* 0x70 - 0x77 */
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    /* 0x78 - 0x7F */
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    /* 0x80 - 0x87 */
+    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImm|ModRM,
+    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
+    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    /* 0x88 - 0x8F */
+    ByteOp|DstMem|SrcReg|ModRM|Mov, DstMem|SrcReg|ModRM|Mov,
+    ByteOp|DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov,
+    DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
+    DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
+    /* 0x90 - 0x97 */
+    DstImplicit|SrcEax, DstImplicit|SrcEax,
+    DstImplicit|SrcEax, DstImplicit|SrcEax,
+    DstImplicit|SrcEax, DstImplicit|SrcEax,
+    DstImplicit|SrcEax, DstImplicit|SrcEax,
+    /* 0x98 - 0x9F */
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps,
+    /* 0xA0 - 0xA7 */
+    ByteOp|DstEax|SrcMem|Mov, DstEax|SrcMem|Mov,
+    ByteOp|DstMem|SrcEax|Mov, DstMem|SrcEax|Mov,
+    ByteOp|ImplicitOps|Mov, ImplicitOps|Mov,
+    ByteOp|ImplicitOps, ImplicitOps,
+    /* 0xA8 - 0xAF */
+    ByteOp|DstEax|SrcImm, DstEax|SrcImm,
+    ByteOp|DstImplicit|SrcEax|Mov, DstImplicit|SrcEax|Mov,
+    ByteOp|DstEax|SrcImplicit|Mov, DstEax|SrcImplicit|Mov,
+    ByteOp|DstImplicit|SrcEax, DstImplicit|SrcEax,
+    /* 0xB0 - 0xB7 */
+    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
+    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
+    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
+    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
+    /* 0xB8 - 0xBF */
+    DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov,
+    DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov,
+    /* 0xC0 - 0xC7 */
+    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM,
+    DstImplicit|SrcImm16, ImplicitOps,
+    DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov,
+    ByteOp|DstMem|SrcImm|ModRM|Mov, DstMem|SrcImm|ModRM|Mov,
+    /* 0xC8 - 0xCF */
+    DstImplicit|SrcImm16, ImplicitOps, DstImplicit|SrcImm16, ImplicitOps,
+    ImplicitOps, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps,
+    /* 0xD0 - 0xD7 */
+    ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM,
+    ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps,
+    /* 0xD8 - 0xDF */
+    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
+    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
+    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
+    DstImplicit|SrcMem16|ModRM, ImplicitOps|ModRM|Mov,
+    /* 0xE0 - 0xE7 */
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    DstEax|SrcImmByte, DstEax|SrcImmByte,
+    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
+    /* 0xE8 - 0xEF */
+    DstImplicit|SrcImm|Mov, DstImplicit|SrcImm,
+    ImplicitOps, DstImplicit|SrcImmByte,
+    DstEax|SrcImplicit, DstEax|SrcImplicit, ImplicitOps, ImplicitOps,
+    /* 0xF0 - 0xF7 */
+    0, ImplicitOps, 0, 0,
+    ImplicitOps, ImplicitOps, ByteOp|ModRM, ModRM,
+    /* 0xF8 - 0xFF */
+    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+    ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|ModRM
+};
+
+static const struct twobyte_table {
+    opcode_desc_t desc;
+    simd_opsize_t size:4;
+    disp8scale_t d8s:4;
+} twobyte_table[256] = {
+    [0x00] = { ModRM },
+    [0x01] = { ImplicitOps|ModRM },
+    [0x02] = { DstReg|SrcMem16|ModRM },
+    [0x03] = { DstReg|SrcMem16|ModRM },
+    [0x05] = { ImplicitOps },
+    [0x06] = { ImplicitOps },
+    [0x07] = { ImplicitOps },
+    [0x08] = { ImplicitOps },
+    [0x09] = { ImplicitOps },
+    [0x0b] = { ImplicitOps },
+    [0x0d] = { ImplicitOps|ModRM },
+    [0x0e] = { ImplicitOps },
+    [0x0f] = { ModRM|SrcImmByte },
+    [0x10] = { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl },
+    [0x11] = { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl },
+    [0x12] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
+    [0x13] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
+    [0x14 ... 0x15] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
+    [0x16] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
+    [0x17] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
+    [0x18 ... 0x1f] = { ImplicitOps|ModRM },
+    [0x20 ... 0x21] = { DstMem|SrcImplicit|ModRM },
+    [0x22 ... 0x23] = { DstImplicit|SrcMem|ModRM },
+    [0x28] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
+    [0x29] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_fp, d8s_vl },
+    [0x2a] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
+    [0x2b] = { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl },
+    [0x2c ... 0x2d] = { DstImplicit|SrcMem|ModRM|Mov, simd_other },
+    [0x2e ... 0x2f] = { ImplicitOps|ModRM|TwoOp, simd_none, d8s_dq },
+    [0x30 ... 0x35] = { ImplicitOps },
+    [0x37] = { ImplicitOps },
+    [0x38] = { DstReg|SrcMem|ModRM },
+    [0x3a] = { DstReg|SrcImmByte|ModRM },
+    [0x40 ... 0x4f] = { DstReg|SrcMem|ModRM|Mov },
+    [0x50] = { DstReg|SrcImplicit|ModRM|Mov },
+    [0x51] = { DstImplicit|SrcMem|ModRM|TwoOp, simd_any_fp, d8s_vl },
+    [0x52 ... 0x53] = { DstImplicit|SrcMem|ModRM|TwoOp, simd_single_fp },
+    [0x54 ... 0x57] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
+    [0x58 ... 0x59] = { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl },
+    [0x5a] = { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl },
+    [0x5b] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
+    [0x5c ... 0x5f] = { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl },
+    [0x60 ... 0x62] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
+    [0x63 ... 0x67] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0x68 ... 0x6a] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
+    [0x6b ... 0x6d] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0x6e] = { DstImplicit|SrcMem|ModRM|Mov, simd_none, d8s_dq64 },
+    [0x6f] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_int, d8s_vl },
+    [0x70] = { SrcImmByte|ModRM|TwoOp, simd_other, d8s_vl },
+    [0x71 ... 0x73] = { DstImplicit|SrcImmByte|ModRM, simd_none, d8s_vl },
+    [0x74 ... 0x76] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0x77] = { DstImplicit|SrcNone },
+    [0x78 ... 0x79] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl },
+    [0x7a] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
+    [0x7b] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
+    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other },
+    [0x7e] = { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 },
+    [0x7f] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
+    [0x80 ... 0x8f] = { DstImplicit|SrcImm },
+    [0x90 ... 0x9f] = { ByteOp|DstMem|SrcNone|ModRM|Mov },
+    [0xa0 ... 0xa1] = { ImplicitOps|Mov },
+    [0xa2] = { ImplicitOps },
+    [0xa3] = { DstBitBase|SrcReg|ModRM },
+    [0xa4] = { DstMem|SrcImmByte|ModRM },
+    [0xa5] = { DstMem|SrcReg|ModRM },
+    [0xa6 ... 0xa7] = { ModRM },
+    [0xa8 ... 0xa9] = { ImplicitOps|Mov },
+    [0xaa] = { ImplicitOps },
+    [0xab] = { DstBitBase|SrcReg|ModRM },
+    [0xac] = { DstMem|SrcImmByte|ModRM },
+    [0xad] = { DstMem|SrcReg|ModRM },
+    [0xae] = { ImplicitOps|ModRM },
+    [0xaf] = { DstReg|SrcMem|ModRM },
+    [0xb0] = { ByteOp|DstMem|SrcReg|ModRM },
+    [0xb1] = { DstMem|SrcReg|ModRM },
+    [0xb2] = { DstReg|SrcMem|ModRM|Mov },
+    [0xb3] = { DstBitBase|SrcReg|ModRM },
+    [0xb4 ... 0xb5] = { DstReg|SrcMem|ModRM|Mov },
+    [0xb6] = { ByteOp|DstReg|SrcMem|ModRM|Mov },
+    [0xb7] = { DstReg|SrcMem16|ModRM|Mov },
+    [0xb8] = { DstReg|SrcMem|ModRM },
+    [0xb9] = { ModRM },
+    [0xba] = { DstBitBase|SrcImmByte|ModRM },
+    [0xbb] = { DstBitBase|SrcReg|ModRM },
+    [0xbc ... 0xbd] = { DstReg|SrcMem|ModRM },
+    [0xbe] = { ByteOp|DstReg|SrcMem|ModRM|Mov },
+    [0xbf] = { DstReg|SrcMem16|ModRM|Mov },
+    [0xc0] = { ByteOp|DstMem|SrcReg|ModRM },
+    [0xc1] = { DstMem|SrcReg|ModRM },
+    [0xc2] = { DstImplicit|SrcImmByte|ModRM, simd_any_fp, d8s_vl },
+    [0xc3] = { DstMem|SrcReg|ModRM|Mov },
+    [0xc4] = { DstImplicit|SrcImmByte|ModRM, simd_none, 1 },
+    [0xc5] = { DstReg|SrcImmByte|ModRM|Mov },
+    [0xc6] = { DstImplicit|SrcImmByte|ModRM, simd_packed_fp, d8s_vl },
+    [0xc7] = { ImplicitOps|ModRM },
+    [0xc8 ... 0xcf] = { ImplicitOps },
+    [0xd0] = { DstImplicit|SrcMem|ModRM, simd_other },
+    [0xd1 ... 0xd3] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
+    [0xd4 ... 0xd5] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xd6] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
+    [0xd7] = { DstReg|SrcImplicit|ModRM|Mov },
+    [0xd8 ... 0xdf] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xe0] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xe1 ... 0xe2] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
+    [0xe3 ... 0xe5] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xe6] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
+    [0xe7] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
+    [0xe8 ... 0xef] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xf0] = { DstImplicit|SrcMem|ModRM|Mov, simd_other },
+    [0xf1 ... 0xf3] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
+    [0xf4 ... 0xf6] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xf7] = { DstMem|SrcMem|ModRM|Mov, simd_packed_int },
+    [0xf8 ... 0xfe] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
+    [0xff] = { ModRM }
+};
+
+/*
+ * "two_op" and "four_op" below refer to the number of register operands
+ * (one of which possibly also allowing to be a memory one). The named
+ * operand counts do not include any immediate operands.
+ */
+static const struct ext0f38_table {
+    uint8_t simd_size:5;
+    uint8_t to_mem:1;
+    uint8_t two_op:1;
+    uint8_t vsib:1;
+    disp8scale_t d8s:4;
+} ext0f38_table[256] = {
+    [0x00] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x01 ... 0x03] = { .simd_size = simd_packed_int },
+    [0x04] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x05 ... 0x0a] = { .simd_size = simd_packed_int },
+    [0x0b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x0c ... 0x0d] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x0e ... 0x0f] = { .simd_size = simd_packed_fp },
+    [0x10 ... 0x12] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x13] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x14 ... 0x16] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x17] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0x18] = { .simd_size = simd_scalar_opc, .two_op = 1, .d8s = 2 },
+    [0x19] = { .simd_size = simd_scalar_opc, .two_op = 1, .d8s = 3 },
+    [0x1a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
+    [0x1b] = { .simd_size = simd_256, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x1c ... 0x1f] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x20] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x21] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
+    [0x22] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_8 },
+    [0x23] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x24] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
+    [0x25] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x26 ... 0x29] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x2a] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x2b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x2c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x2d] = { .simd_size = simd_packed_fp, .d8s = d8s_dq },
+    [0x2e ... 0x2f] = { .simd_size = simd_packed_fp, .to_mem = 1 },
+    [0x30] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x31] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
+    [0x32] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_8 },
+    [0x33] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x34] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
+    [0x35] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x36 ... 0x3f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x40] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x41] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0x42] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x43] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x44] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x45 ... 0x47] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x4c] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x4d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x4e] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x4f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x50 ... 0x53] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x54 ... 0x55] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x58] = { .simd_size = simd_other, .two_op = 1, .d8s = 2 },
+    [0x59] = { .simd_size = simd_other, .two_op = 1, .d8s = 3 },
+    [0x5a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
+    [0x5b] = { .simd_size = simd_256, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x62] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_bw },
+    [0x63] = { .simd_size = simd_packed_int, .to_mem = 1, .two_op = 1, .d8s = d8s_bw },
+    [0x64 ... 0x66] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x68] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x70 ... 0x73] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x75 ... 0x76] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x77] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x78] = { .simd_size = simd_other, .two_op = 1 },
+    [0x79] = { .simd_size = simd_other, .two_op = 1, .d8s = 1 },
+    [0x7a ... 0x7c] = { .simd_size = simd_none, .two_op = 1 },
+    [0x7d ... 0x7e] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x7f] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x82] = { .simd_size = simd_other },
+    [0x83] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x88] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_dq },
+    [0x89] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_dq },
+    [0x8a] = { .simd_size = simd_packed_fp, .to_mem = 1, .two_op = 1, .d8s = d8s_dq },
+    [0x8b] = { .simd_size = simd_packed_int, .to_mem = 1, .two_op = 1, .d8s = d8s_dq },
+    [0x8c] = { .simd_size = simd_packed_int },
+    [0x8d] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x8e] = { .simd_size = simd_packed_int, .to_mem = 1 },
+    [0x8f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x90 ... 0x93] = { .simd_size = simd_other, .vsib = 1, .d8s = d8s_dq },
+    [0x96 ... 0x98] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x99] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x9a] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x9b] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x9c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x9d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x9e] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x9f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xa0 ... 0xa3] = { .simd_size = simd_other, .to_mem = 1, .vsib = 1, .d8s = d8s_dq },
+    [0xa6 ... 0xa8] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xa9] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xaa] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xab] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xac] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xad] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xae] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xaf] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xb4 ... 0xb5] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xb6 ... 0xb8] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xb9] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xba] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xbb] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xbc] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xbd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xbe] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0xbf] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xc4] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0xc6 ... 0xc7] = { .simd_size = simd_other, .vsib = 1, .d8s = d8s_dq },
+    [0xc8] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0xc9] = { .simd_size = simd_other },
+    [0xca] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0xcb] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xcc] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0xcd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xf0] = { .two_op = 1 },
+    [0xf1] = { .to_mem = 1, .two_op = 1 },
+    [0xf2 ... 0xf3] = {},
+    [0xf5 ... 0xf7] = {},
+    [0xf8] = { .simd_size = simd_other },
+    [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ },
+};
+
+static const struct ext0f3a_table {
+    uint8_t simd_size:5;
+    uint8_t to_mem:1;
+    uint8_t two_op:1;
+    uint8_t four_op:1;
+    disp8scale_t d8s:4;
+} ext0f3a_table[256] = {
+    [0x00] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x01] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x02] = { .simd_size = simd_packed_int },
+    [0x03] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x04 ... 0x05] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x06] = { .simd_size = simd_packed_fp },
+    [0x08 ... 0x09] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x0a ... 0x0b] = { .simd_size = simd_scalar_opc, .d8s = d8s_dq },
+    [0x0c ... 0x0d] = { .simd_size = simd_packed_fp },
+    [0x0e] = { .simd_size = simd_packed_int },
+    [0x0f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x14] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 0 },
+    [0x15] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 1 },
+    [0x16] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = d8s_dq64 },
+    [0x17] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 2 },
+    [0x18] = { .simd_size = simd_128, .d8s = 4 },
+    [0x19] = { .simd_size = simd_128, .to_mem = 1, .two_op = 1, .d8s = 4 },
+    [0x1a] = { .simd_size = simd_256, .d8s = d8s_vl_by_2 },
+    [0x1b] = { .simd_size = simd_256, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x1d] = { .simd_size = simd_other, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x1e ... 0x1f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x20] = { .simd_size = simd_none, .d8s = 0 },
+    [0x21] = { .simd_size = simd_other, .d8s = 2 },
+    [0x22] = { .simd_size = simd_none, .d8s = d8s_dq64 },
+    [0x23] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x25] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x26] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x27] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x30 ... 0x33] = { .simd_size = simd_other, .two_op = 1 },
+    [0x38] = { .simd_size = simd_128, .d8s = 4 },
+    [0x39] = { .simd_size = simd_128, .to_mem = 1, .two_op = 1, .d8s = 4 },
+    [0x3a] = { .simd_size = simd_256, .d8s = d8s_vl_by_2 },
+    [0x3b] = { .simd_size = simd_256, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
+    [0x3e ... 0x3f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x40 ... 0x41] = { .simd_size = simd_packed_fp },
+    [0x42 ... 0x43] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x44] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x46] = { .simd_size = simd_packed_int },
+    [0x48 ... 0x49] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x4a ... 0x4b] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x4c] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0x50] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x51] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x54] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
+    [0x55] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x56] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x57] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0x5c ... 0x5f] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x60 ... 0x63] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0x66] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
+    [0x67] = { .simd_size = simd_scalar_vexw, .two_op = 1, .d8s = d8s_dq },
+    [0x68 ... 0x69] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x6a ... 0x6b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0x6c ... 0x6d] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x6e ... 0x6f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0x70 ... 0x73] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x78 ... 0x79] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x7a ... 0x7b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0x7c ... 0x7d] = { .simd_size = simd_packed_fp, .four_op = 1 },
+    [0x7e ... 0x7f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0xcc] = { .simd_size = simd_other },
+    [0xce ... 0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xdf] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xf0] = {},
+};
+
+static const opcode_desc_t xop_table[] = {
+    DstReg|SrcImmByte|ModRM,
+    DstReg|SrcMem|ModRM,
+    DstReg|SrcImm|ModRM,
+};
+
+static const struct ext8f08_table {
+    uint8_t simd_size:5;
+    uint8_t two_op:1;
+    uint8_t four_op:1;
+} ext8f08_table[256] = {
+    [0x85 ... 0x87] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0x8e ... 0x8f] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0x95 ... 0x97] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0x9e ... 0x9f] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0xa2 ... 0xa3] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0xa6] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0xb6] = { .simd_size = simd_packed_int, .four_op = 1 },
+    [0xc0 ... 0xc3] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xcc ... 0xcf] = { .simd_size = simd_packed_int },
+    [0xec ... 0xef] = { .simd_size = simd_packed_int },
+};
+
+static const struct ext8f09_table {
+    uint8_t simd_size:5;
+    uint8_t two_op:1;
+} ext8f09_table[256] = {
+    [0x01 ... 0x02] = { .two_op = 1 },
+    [0x80 ... 0x81] = { .simd_size = simd_packed_fp, .two_op = 1 },
+    [0x82 ... 0x83] = { .simd_size = simd_scalar_opc, .two_op = 1 },
+    [0x90 ... 0x9b] = { .simd_size = simd_packed_int },
+    [0xc1 ... 0xc3] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xc6 ... 0xc7] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xcb] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xd1 ... 0xd3] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xd6 ... 0xd7] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
+    [0xe1 ... 0xe3] = { .simd_size = simd_packed_int, .two_op = 1 },
+};
+
+static unsigned int decode_disp8scale(enum disp8scale scale,
+                                      const struct x86_emulate_state *s)
+{
+    switch ( scale )
+    {
+    case d8s_bw:
+        return s->evex.w;
+
+    default:
+        if ( scale < d8s_vl )
+            return scale;
+        if ( s->evex.brs )
+        {
+    case d8s_dq:
+            return 2 + s->evex.w;
+        }
+        break;
+
+    case d8s_dq64:
+        return 2 + (s->op_bytes == 8);
+    }
+
+    switch ( s->simd_size )
+    {
+    case simd_any_fp:
+    case simd_single_fp:
+        if ( !(s->evex.pfx & VEX_PREFIX_SCALAR_MASK) )
+            break;
+        /* fall through */
+    case simd_scalar_opc:
+    case simd_scalar_vexw:
+        return 2 + s->evex.w;
+
+    case simd_128:
+        /* These should have an explicit size specified. */
+        ASSERT_UNREACHABLE();
+        return 4;
+
+    default:
+        break;
+    }
+
+    return 4 + s->evex.lr - (scale - d8s_vl);
+}
+
+/* Fetch next part of the instruction being emulated. */
+#define insn_fetch_bytes(_size) ({                                    \
+   unsigned long _x = 0, _ip = s->ip;                                 \
+   s->ip += (_size); /* real hardware doesn't truncate */             \
+   generate_exception_if((uint8_t)(s->ip -                            \
+                                   ctxt->regs->r(ip)) > MAX_INST_LEN, \
+                         X86_EXC_GP, 0);                              \
+   rc = ops->insn_fetch(_ip, &_x, _size, ctxt);                       \
+   if ( rc ) goto done;                                               \
+   _x;                                                                \
+})
+#define insn_fetch_type(type) ((type)insn_fetch_bytes(sizeof(type)))
+
+static int
+decode_onebyte(struct x86_emulate_state *s,
+               struct x86_emulate_ctxt *ctxt,
+               const struct x86_emulate_ops *ops)
+{
+    int rc = X86EMUL_OKAY;
+
+    switch ( ctxt->opcode )
+    {
+    case 0x06: /* push %%es */
+    case 0x07: /* pop %%es */
+    case 0x0e: /* push %%cs */
+    case 0x16: /* push %%ss */
+    case 0x17: /* pop %%ss */
+    case 0x1e: /* push %%ds */
+    case 0x1f: /* pop %%ds */
+    case 0x27: /* daa */
+    case 0x2f: /* das */
+    case 0x37: /* aaa */
+    case 0x3f: /* aas */
+    case 0x60: /* pusha */
+    case 0x61: /* popa */
+    case 0x62: /* bound */
+    case 0xc4: /* les */
+    case 0xc5: /* lds */
+    case 0xce: /* into */
+    case 0xd4: /* aam */
+    case 0xd5: /* aad */
+    case 0xd6: /* salc */
+        s->not_64bit = true;
+        break;
+
+    case 0x82: /* Grp1 (x86/32 only) */
+        s->not_64bit = true;
+        /* fall through */
+    case 0x80: case 0x81: case 0x83: /* Grp1 */
+        if ( (s->modrm_reg & 7) == 7 ) /* cmp */
+            s->desc = (s->desc & ByteOp) | DstNone | SrcMem;
+        break;
+
+    case 0x90: /* nop / pause */
+        if ( s->vex.pfx == vex_f3 )
+            ctxt->opcode |= X86EMUL_OPC_F3(0, 0);
+        break;
+
+    case 0x9a: /* call (far, absolute) */
+    case 0xea: /* jmp (far, absolute) */
+        generate_exception_if(mode_64bit(), X86_EXC_UD);
+
+        s->imm1 = insn_fetch_bytes(s->op_bytes);
+        s->imm2 = insn_fetch_type(uint16_t);
+        break;
+
+    case 0xa0: case 0xa1: /* mov mem.offs,{%al,%ax,%eax,%rax} */
+    case 0xa2: case 0xa3: /* mov {%al,%ax,%eax,%rax},mem.offs */
+        /* Source EA is not encoded via ModRM. */
+        s->ea.type = OP_MEM;
+        s->ea.mem.off = insn_fetch_bytes(s->ad_bytes);
+        break;
+
+    case 0xb8 ... 0xbf: /* mov imm{16,32,64},r{16,32,64} */
+        if ( s->op_bytes == 8 ) /* Fetch more bytes to obtain imm64. */
+            s->imm1 = ((uint32_t)s->imm1 |
+                       ((uint64_t)insn_fetch_type(uint32_t) << 32));
+        break;
+
+    case 0xc8: /* enter imm16,imm8 */
+        s->imm2 = insn_fetch_type(uint8_t);
+        break;
+
+    case 0xf6: case 0xf7: /* Grp3 */
+        if ( !(s->modrm_reg & 6) ) /* test */
+            s->desc = (s->desc & ByteOp) | DstNone | SrcMem;
+        break;
+
+    case 0xff: /* Grp5 */
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* call (near) */
+        case 4: /* jmp (near) */
+            if ( mode_64bit() && (s->op_bytes == 4 || !amd_like(ctxt)) )
+                s->op_bytes = 8;
+            s->desc = DstNone | SrcMem | Mov;
+            break;
+
+        case 3: /* call (far, absolute indirect) */
+        case 5: /* jmp (far, absolute indirect) */
+            /* REX.W ignored on a vendor-dependent basis. */
+            if ( s->op_bytes == 8 && amd_like(ctxt) )
+                s->op_bytes = 4;
+            s->desc = DstNone | SrcMem | Mov;
+            break;
+
+        case 6: /* push */
+            if ( mode_64bit() && s->op_bytes == 4 )
+                s->op_bytes = 8;
+            s->desc = DstNone | SrcMem | Mov;
+            break;
+        }
+        break;
+    }
+
+ done:
+    return rc;
+}
+
+static int
+decode_twobyte(struct x86_emulate_state *s,
+               struct x86_emulate_ctxt *ctxt,
+               const struct x86_emulate_ops *ops)
+{
+    int rc = X86EMUL_OKAY;
+
+    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
+    {
+    case 0x00: /* Grp6 */
+        switch ( s->modrm_reg & 6 )
+        {
+        case 0:
+            s->desc |= DstMem | SrcImplicit | Mov;
+            break;
+        case 2: case 4:
+            s->desc |= SrcMem16;
+            break;
+        }
+        break;
+
+    case 0x78:
+        s->desc = ImplicitOps;
+        s->simd_size = simd_none;
+        switch ( s->vex.pfx )
+        {
+        case vex_66: /* extrq $imm8, $imm8, xmm */
+        case vex_f2: /* insertq $imm8, $imm8, xmm, xmm */
+            s->imm1 = insn_fetch_type(uint8_t);
+            s->imm2 = insn_fetch_type(uint8_t);
+            break;
+        }
+        /* fall through */
+    case 0x10 ... 0x18:
+    case 0x28 ... 0x2f:
+    case 0x50 ... 0x77:
+    case 0x7a ... 0x7d:
+    case 0x7f:
+    case 0xc2 ... 0xc3:
+    case 0xc5 ... 0xc6:
+    case 0xd0 ... 0xef:
+    case 0xf1 ... 0xfe:
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+    case 0x20: case 0x22: /* mov to/from cr */
+        if ( s->lock_prefix && vcpu_has_cr8_legacy() )
+        {
+            s->modrm_reg += 8;
+            s->lock_prefix = false;
+        }
+        /* fall through */
+    case 0x21: case 0x23: /* mov to/from dr */
+        ASSERT(s->ea.type == OP_REG); /* Early operand adjustment ensures this. */
+        generate_exception_if(s->lock_prefix, X86_EXC_UD);
+        s->op_bytes = mode_64bit() ? 8 : 4;
+        break;
+
+    case 0x79:
+        s->desc = DstReg | SrcMem;
+        s->simd_size = simd_packed_int;
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+    case 0x7e:
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        if ( s->vex.pfx == vex_f3 ) /* movq xmm/m64,xmm */
+        {
+    case X86EMUL_OPC_VEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */
+    case X86EMUL_OPC_EVEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */
+            s->desc = DstImplicit | SrcMem | TwoOp;
+            s->simd_size = simd_other;
+            /* Avoid the s->desc clobbering of TwoOp below. */
+            return X86EMUL_OKAY;
+        }
+        break;
+
+    case X86EMUL_OPC_VEX(0, 0x90):    /* kmov{w,q} */
+    case X86EMUL_OPC_VEX_66(0, 0x90): /* kmov{b,d} */
+        s->desc = DstReg | SrcMem | Mov;
+        s->simd_size = simd_other;
+        break;
+
+    case X86EMUL_OPC_VEX(0, 0x91):    /* kmov{w,q} */
+    case X86EMUL_OPC_VEX_66(0, 0x91): /* kmov{b,d} */
+        s->desc = DstMem | SrcReg | Mov;
+        s->simd_size = simd_other;
+        break;
+
+    case 0xae:
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        /* fall through */
+    case X86EMUL_OPC_VEX(0, 0xae):
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* {,v}ldmxcsr */
+            s->desc = DstImplicit | SrcMem | Mov;
+            s->op_bytes = 4;
+            break;
+
+        case 3: /* {,v}stmxcsr */
+            s->desc = DstMem | SrcImplicit | Mov;
+            s->op_bytes = 4;
+            break;
+        }
+        break;
+
+    case 0xb2: /* lss */
+    case 0xb4: /* lfs */
+    case 0xb5: /* lgs */
+        /* REX.W ignored on a vendor-dependent basis. */
+        if ( s->op_bytes == 8 && amd_like(ctxt) )
+            s->op_bytes = 4;
+        break;
+
+    case 0xb8: /* jmpe / popcnt */
+        if ( s->vex.pfx >= vex_f3 )
+            ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+        /* Intentionally not handling here despite being modified by F3:
+    case 0xbc: bsf / tzcnt
+    case 0xbd: bsr / lzcnt
+         * They're being dealt with in the execution phase (if at all).
+         */
+
+    case 0xc4: /* pinsrw */
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        /* fall through */
+    case X86EMUL_OPC_VEX_66(0, 0xc4): /* vpinsrw */
+    case X86EMUL_OPC_EVEX_66(0, 0xc4): /* vpinsrw */
+        s->desc = DstImplicit | SrcMem16;
+        break;
+
+    case 0xf0:
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        if ( s->vex.pfx == vex_f2 ) /* lddqu mem,xmm */
+        {
+        /* fall through */
+    case X86EMUL_OPC_VEX_F2(0, 0xf0): /* vlddqu mem,{x,y}mm */
+            s->desc = DstImplicit | SrcMem | TwoOp;
+            s->simd_size = simd_other;
+            /* Avoid the s->desc clobbering of TwoOp below. */
+            return X86EMUL_OKAY;
+        }
+        break;
+    }
+
+    /*
+     * Scalar forms of most VEX-/EVEX-encoded TwoOp instructions have
+     * three operands.  Those which do really have two operands
+     * should have exited earlier.
+     */
+    if ( s->simd_size && s->vex.opcx &&
+         (s->vex.pfx & VEX_PREFIX_SCALAR_MASK) )
+        s->desc &= ~TwoOp;
+
+ done:
+    return rc;
+}
+
+static int
+decode_0f38(struct x86_emulate_state *s,
+            struct x86_emulate_ctxt *ctxt,
+            const struct x86_emulate_ops *ops)
+{
+    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
+    {
+    case 0x00 ... 0xef:
+    case 0xf2 ... 0xf5:
+    case 0xf7 ... 0xf8:
+    case 0xfa ... 0xff:
+        s->op_bytes = 0;
+        /* fall through */
+    case 0xf6: /* adcx / adox */
+    case 0xf9: /* movdiri */
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+    case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */
+        s->simd_size = simd_scalar_vexw;
+        break;
+
+    case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */
+    case X86EMUL_OPC_EVEX_66(0, 0x7b): /* vpbroadcastw */
+    case X86EMUL_OPC_EVEX_66(0, 0x7c): /* vpbroadcast{d,q} */
+        break;
+
+    case 0xf0: /* movbe / crc32 */
+        s->desc |= s->vex.pfx == vex_f2 ? ByteOp : Mov;
+        if ( s->vex.pfx >= vex_f3 )
+            ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+    case 0xf1: /* movbe / crc32 */
+        if ( s->vex.pfx == vex_f2 )
+            s->desc = DstReg | SrcMem;
+        if ( s->vex.pfx >= vex_f3 )
+            ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+        break;
+
+    case X86EMUL_OPC_VEX(0, 0xf2):    /* andn */
+    case X86EMUL_OPC_VEX(0, 0xf3):    /* Grp 17 */
+    case X86EMUL_OPC_VEX(0, 0xf5):    /* bzhi */
+    case X86EMUL_OPC_VEX_F3(0, 0xf5): /* pext */
+    case X86EMUL_OPC_VEX_F2(0, 0xf5): /* pdep */
+    case X86EMUL_OPC_VEX_F2(0, 0xf6): /* mulx */
+    case X86EMUL_OPC_VEX(0, 0xf7):    /* bextr */
+    case X86EMUL_OPC_VEX_66(0, 0xf7): /* shlx */
+    case X86EMUL_OPC_VEX_F3(0, 0xf7): /* sarx */
+    case X86EMUL_OPC_VEX_F2(0, 0xf7): /* shrx */
+        break;
+
+    default:
+        s->op_bytes = 0;
+        break;
+    }
+
+    return X86EMUL_OKAY;
+}
+
+static int
+decode_0f3a(struct x86_emulate_state *s,
+            struct x86_emulate_ctxt *ctxt,
+            const struct x86_emulate_ops *ops)
+{
+    if ( !s->vex.opcx )
+        ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+
+    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
+    {
+    case X86EMUL_OPC_66(0, 0x14)
+     ... X86EMUL_OPC_66(0, 0x17):     /* pextr*, extractps */
+    case X86EMUL_OPC_VEX_66(0, 0x14)
+     ... X86EMUL_OPC_VEX_66(0, 0x17): /* vpextr*, vextractps */
+    case X86EMUL_OPC_EVEX_66(0, 0x14)
+     ... X86EMUL_OPC_EVEX_66(0, 0x17): /* vpextr*, vextractps */
+    case X86EMUL_OPC_VEX_F2(0, 0xf0): /* rorx */
+        break;
+
+    case X86EMUL_OPC_66(0, 0x20):     /* pinsrb */
+    case X86EMUL_OPC_VEX_66(0, 0x20): /* vpinsrb */
+    case X86EMUL_OPC_EVEX_66(0, 0x20): /* vpinsrb */
+        s->desc = DstImplicit | SrcMem;
+        if ( s->modrm_mod != 3 )
+            s->desc |= ByteOp;
+        break;
+
+    case X86EMUL_OPC_66(0, 0x22):     /* pinsr{d,q} */
+    case X86EMUL_OPC_VEX_66(0, 0x22): /* vpinsr{d,q} */
+    case X86EMUL_OPC_EVEX_66(0, 0x22): /* vpinsr{d,q} */
+        s->desc = DstImplicit | SrcMem;
+        break;
+
+    default:
+        s->op_bytes = 0;
+        break;
+    }
+
+    return X86EMUL_OKAY;
+}
+
+#define ad_bytes (s->ad_bytes) /* for truncate_ea() */
+
+int x86emul_decode(struct x86_emulate_state *s,
+                   struct x86_emulate_ctxt *ctxt,
+                   const struct x86_emulate_ops *ops)
+{
+    uint8_t b, d;
+    unsigned int def_op_bytes, def_ad_bytes, opcode;
+    enum x86_segment override_seg = x86_seg_none;
+    bool pc_rel = false;
+    int rc = X86EMUL_OKAY;
+
+    ASSERT(ops->insn_fetch);
+
+    memset(s, 0, sizeof(*s));
+    s->ea.type = OP_NONE;
+    s->ea.mem.seg = x86_seg_ds;
+    s->ea.reg = PTR_POISON;
+    s->regs = ctxt->regs;
+    s->ip = ctxt->regs->r(ip);
+
+    s->op_bytes = def_op_bytes = ad_bytes = def_ad_bytes =
+        ctxt->addr_size / 8;
+    if ( s->op_bytes == 8 )
+    {
+        s->op_bytes = def_op_bytes = 4;
+#ifndef __x86_64__
+        return X86EMUL_UNHANDLEABLE;
+#endif
+    }
+
+    /* Prefix bytes. */
+    for ( ; ; )
+    {
+        switch ( b = insn_fetch_type(uint8_t) )
+        {
+        case 0x66: /* operand-size override */
+            s->op_bytes = def_op_bytes ^ 6;
+            if ( !s->vex.pfx )
+                s->vex.pfx = vex_66;
+            break;
+        case 0x67: /* address-size override */
+            ad_bytes = def_ad_bytes ^ (mode_64bit() ? 12 : 6);
+            break;
+        case 0x2e: /* CS override / ignored in 64-bit mode */
+            if ( !mode_64bit() )
+                override_seg = x86_seg_cs;
+            break;
+        case 0x3e: /* DS override / ignored in 64-bit mode */
+            if ( !mode_64bit() )
+                override_seg = x86_seg_ds;
+            break;
+        case 0x26: /* ES override / ignored in 64-bit mode */
+            if ( !mode_64bit() )
+                override_seg = x86_seg_es;
+            break;
+        case 0x64: /* FS override */
+            override_seg = x86_seg_fs;
+            break;
+        case 0x65: /* GS override */
+            override_seg = x86_seg_gs;
+            break;
+        case 0x36: /* SS override / ignored in 64-bit mode */
+            if ( !mode_64bit() )
+                override_seg = x86_seg_ss;
+            break;
+        case 0xf0: /* LOCK */
+            s->lock_prefix = true;
+            break;
+        case 0xf2: /* REPNE/REPNZ */
+            s->vex.pfx = vex_f2;
+            break;
+        case 0xf3: /* REP/REPE/REPZ */
+            s->vex.pfx = vex_f3;
+            break;
+        case 0x40 ... 0x4f: /* REX */
+            if ( !mode_64bit() )
+                goto done_prefixes;
+            s->rex_prefix = b;
+            continue;
+        default:
+            goto done_prefixes;
+        }
+
+        /* Any legacy prefix after a REX prefix nullifies its effect. */
+        s->rex_prefix = 0;
+    }
+ done_prefixes:
+
+    if ( s->rex_prefix & REX_W )
+        s->op_bytes = 8;
+
+    /* Opcode byte(s). */
+    d = opcode_table[b];
+    if ( d == 0 && b == 0x0f )
+    {
+        /* Two-byte opcode. */
+        b = insn_fetch_type(uint8_t);
+        d = twobyte_table[b].desc;
+        switch ( b )
+        {
+        default:
+            opcode = b | MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
+            s->ext = ext_0f;
+            s->simd_size = twobyte_table[b].size;
+            break;
+        case 0x38:
+            b = insn_fetch_type(uint8_t);
+            opcode = b | MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
+            s->ext = ext_0f38;
+            break;
+        case 0x3a:
+            b = insn_fetch_type(uint8_t);
+            opcode = b | MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
+            s->ext = ext_0f3a;
+            break;
+        }
+    }
+    else
+        opcode = b;
+
+    /* ModRM and SIB bytes. */
+    if ( d & ModRM )
+    {
+        s->modrm = insn_fetch_type(uint8_t);
+        s->modrm_mod = (s->modrm & 0xc0) >> 6;
+
+        if ( !s->ext && ((b & ~1) == 0xc4 || (b == 0x8f && (s->modrm & 0x18)) ||
+                         b == 0x62) )
+            switch ( def_ad_bytes )
+            {
+            default:
+                BUG(); /* Shouldn't be possible. */
+            case 2:
+                if ( s->regs->eflags & X86_EFLAGS_VM )
+                    break;
+                /* fall through */
+            case 4:
+                if ( s->modrm_mod != 3 || in_realmode(ctxt, ops) )
+                    break;
+                /* fall through */
+            case 8:
+                /* VEX / XOP / EVEX */
+                generate_exception_if(s->rex_prefix || s->vex.pfx, X86_EXC_UD);
+                /*
+                 * With operand size override disallowed (see above), op_bytes
+                 * should not have changed from its default.
+                 */
+                ASSERT(s->op_bytes == def_op_bytes);
+
+                s->vex.raw[0] = s->modrm;
+                if ( b == 0xc5 )
+                {
+                    opcode = X86EMUL_OPC_VEX_;
+                    s->vex.raw[1] = s->modrm;
+                    s->vex.opcx = vex_0f;
+                    s->vex.x = 1;
+                    s->vex.b = 1;
+                    s->vex.w = 0;
+                }
+                else
+                {
+                    s->vex.raw[1] = insn_fetch_type(uint8_t);
+                    if ( mode_64bit() )
+                    {
+                        if ( !s->vex.b )
+                            s->rex_prefix |= REX_B;
+                        if ( !s->vex.x )
+                            s->rex_prefix |= REX_X;
+                        if ( s->vex.w )
+                        {
+                            s->rex_prefix |= REX_W;
+                            s->op_bytes = 8;
+                        }
+                    }
+                    else
+                    {
+                        /* Operand size fixed at 4 (no override via W bit). */
+                        s->op_bytes = 4;
+                        s->vex.b = 1;
+                    }
+                    switch ( b )
+                    {
+                    case 0x62:
+                        opcode = X86EMUL_OPC_EVEX_;
+                        s->evex.raw[0] = s->vex.raw[0];
+                        s->evex.raw[1] = s->vex.raw[1];
+                        s->evex.raw[2] = insn_fetch_type(uint8_t);
+
+                        generate_exception_if(!s->evex.mbs || s->evex.mbz, X86_EXC_UD);
+                        generate_exception_if(!s->evex.opmsk && s->evex.z, X86_EXC_UD);
+
+                        if ( !mode_64bit() )
+                            s->evex.R = 1;
+
+                        s->vex.opcx = s->evex.opcx;
+                        break;
+                    case 0xc4:
+                        opcode = X86EMUL_OPC_VEX_;
+                        break;
+                    default:
+                        opcode = 0;
+                        break;
+                    }
+                }
+                if ( !s->vex.r )
+                    s->rex_prefix |= REX_R;
+
+                s->ext = s->vex.opcx;
+                if ( b != 0x8f )
+                {
+                    b = insn_fetch_type(uint8_t);
+                    switch ( s->ext )
+                    {
+                    case vex_0f:
+                        opcode |= MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[b].desc;
+                        s->simd_size = twobyte_table[b].size;
+                        break;
+                    case vex_0f38:
+                        opcode |= MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[0x38].desc;
+                        break;
+                    case vex_0f3a:
+                        opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[0x3a].desc;
+                        break;
+                    default:
+                        rc = X86EMUL_UNRECOGNIZED;
+                        goto done;
+                    }
+                }
+                else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
+                {
+                    b = insn_fetch_type(uint8_t);
+                    opcode |= MASK_INSR(0x8f08 + s->ext - ext_8f08,
+                                        X86EMUL_OPC_EXT_MASK);
+                    d = array_access_nospec(xop_table, s->ext - ext_8f08);
+                }
+                else
+                {
+                    rc = X86EMUL_UNRECOGNIZED;
+                    goto done;
+                }
+
+                opcode |= b | MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
+
+                if ( !evex_encoded() )
+                    s->evex.lr = s->vex.l;
+
+                if ( !(d & ModRM) )
+                    break;
+
+                s->modrm = insn_fetch_type(uint8_t);
+                s->modrm_mod = (s->modrm & 0xc0) >> 6;
+
+                break;
+            }
+    }
+
+    if ( d & ModRM )
+    {
+        unsigned int disp8scale = 0;
+
+        d &= ~ModRM;
+#undef ModRM /* Only its aliases are valid to use from here on. */
+        s->modrm_reg = ((s->rex_prefix & 4) << 1) | ((s->modrm & 0x38) >> 3) |
+                       ((evex_encoded() && !s->evex.R) << 4);
+        s->modrm_rm  = s->modrm & 0x07;
+
+        /*
+         * Early operand adjustments. Only ones affecting further processing
+         * prior to the x86_decode_*() calls really belong here. That would
+         * normally be only addition/removal of SrcImm/SrcImm16, so their
+         * fetching can be taken care of by the common code below.
+         */
+        switch ( s->ext )
+        {
+        case ext_none:
+            switch ( b )
+            {
+            case 0xf6 ... 0xf7: /* Grp3 */
+                switch ( s->modrm_reg & 7 )
+                {
+                case 0 ... 1: /* test */
+                    d |= DstMem | SrcImm;
+                    break;
+                case 2: /* not */
+                case 3: /* neg */
+                    d |= DstMem;
+                    break;
+                case 4: /* mul */
+                case 5: /* imul */
+                case 6: /* div */
+                case 7: /* idiv */
+                    /*
+                     * DstEax isn't really precise for all cases; updates to
+                     * rDX get handled in an open-coded manner.
+                     */
+                    d |= DstEax | SrcMem;
+                    break;
+                }
+                break;
+            }
+            break;
+
+        case ext_0f:
+            if ( evex_encoded() )
+                disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+
+            switch ( b )
+            {
+            case 0x12: /* vmovsldup / vmovddup */
+                if ( s->evex.pfx == vex_f2 )
+                    disp8scale = s->evex.lr ? 4 + s->evex.lr : 3;
+                /* fall through */
+            case 0x16: /* vmovshdup */
+                if ( s->evex.pfx == vex_f3 )
+                    disp8scale = 4 + s->evex.lr;
+                break;
+
+            case 0x20: /* mov cr,reg */
+            case 0x21: /* mov dr,reg */
+            case 0x22: /* mov reg,cr */
+            case 0x23: /* mov reg,dr */
+                /*
+                 * Mov to/from cr/dr ignore the encoding of Mod, and behave as
+                 * if they were encoded as reg/reg instructions.  No further
+                 * disp/SIB bytes are fetched.
+                 */
+                s->modrm_mod = 3;
+                break;
+
+            case 0x78:
+            case 0x79:
+                if ( !s->evex.pfx )
+                    break;
+                /* vcvt{,t}ps2uqq need special casing */
+                if ( s->evex.pfx == vex_66 )
+                {
+                    if ( !s->evex.w && !s->evex.brs )
+                        --disp8scale;
+                    break;
+                }
+                /* vcvt{,t}s{s,d}2usi need special casing: fall through */
+            case 0x2c: /* vcvtts{s,d}2si need special casing */
+            case 0x2d: /* vcvts{s,d}2si need special casing */
+                if ( evex_encoded() )
+                    disp8scale = 2 + (s->evex.pfx & VEX_PREFIX_DOUBLE_MASK);
+                break;
+
+            case 0x5a: /* vcvtps2pd needs special casing */
+                if ( disp8scale && !s->evex.pfx && !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7a: /* vcvttps2qq and vcvtudq2pd need special casing */
+                if ( disp8scale && s->evex.pfx != vex_f2 && !s->evex.w && !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7b: /* vcvtp{s,d}2qq need special casing */
+                if ( disp8scale && s->evex.pfx == vex_66 )
+                    disp8scale = (s->evex.brs ? 2 : 3 + s->evex.lr) + s->evex.w;
+                break;
+
+            case 0x7e: /* vmovq xmm/m64,xmm needs special casing */
+                if ( disp8scale == 2 && s->evex.pfx == vex_f3 )
+                    disp8scale = 3;
+                break;
+
+            case 0xe6: /* vcvtdq2pd needs special casing */
+                if ( disp8scale && s->evex.pfx == vex_f3 && !s->evex.w && !s->evex.brs )
+                    --disp8scale;
+                break;
+            }
+            break;
+
+        case ext_0f38:
+            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
+                                        : DstReg | SrcMem;
+            if ( ext0f38_table[b].two_op )
+                d |= TwoOp;
+            if ( ext0f38_table[b].vsib )
+                d |= vSIB;
+            s->simd_size = ext0f38_table[b].simd_size;
+            if ( evex_encoded() )
+            {
+                /*
+                 * VPMOVUS* are identical to VPMOVS* Disp8-scaling-wise, but
+                 * their attributes don't match those of the vex_66 encoded
+                 * insns with the same base opcodes. Rather than adding new
+                 * columns to the table, handle this here for now.
+                 */
+                if ( s->evex.pfx != vex_f3 || (b & 0xf8) != 0x10 )
+                    disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
+                else
+                {
+                    disp8scale = decode_disp8scale(ext0f38_table[b ^ 0x30].d8s,
+                                                   s);
+                    s->simd_size = simd_other;
+                }
+
+                switch ( b )
+                {
+                /* vp4dpwssd{,s} need special casing */
+                case 0x52: case 0x53:
+                /* v4f{,n}madd{p,s}s need special casing */
+                case 0x9a: case 0x9b: case 0xaa: case 0xab:
+                    if ( s->evex.pfx == vex_f2 )
+                    {
+                        disp8scale = 4;
+                        s->simd_size = simd_128;
+                    }
+                    break;
+                }
+            }
+            break;
+
+        case ext_0f3a:
+            /*
+             * Cannot update d here yet, as the immediate operand still
+             * needs fetching.
+             */
+            s->simd_size = ext0f3a_table[b].simd_size;
+            if ( evex_encoded() )
+                disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, s);
+            break;
+
+        case ext_8f09:
+            if ( ext8f09_table[b].two_op )
+                d |= TwoOp;
+            s->simd_size = ext8f09_table[b].simd_size;
+            break;
+
+        case ext_8f08:
+        case ext_8f0a:
+            /*
+             * Cannot update d here yet, as the immediate operand still
+             * needs fetching.
+             */
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNIMPLEMENTED;
+        }
+
+        if ( s->modrm_mod == 3 )
+        {
+            generate_exception_if(d & vSIB, X86_EXC_UD);
+            s->modrm_rm |= ((s->rex_prefix & 1) << 3) |
+                           ((evex_encoded() && !s->evex.x) << 4);
+            s->ea.type = OP_REG;
+        }
+        else if ( ad_bytes == 2 )
+        {
+            /* 16-bit ModR/M decode. */
+            generate_exception_if(d & vSIB, X86_EXC_UD);
+            s->ea.type = OP_MEM;
+            switch ( s->modrm_rm )
+            {
+            case 0:
+                s->ea.mem.off = s->regs->bx + s->regs->si;
+                break;
+            case 1:
+                s->ea.mem.off = s->regs->bx + s->regs->di;
+                break;
+            case 2:
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp + s->regs->si;
+                break;
+            case 3:
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp + s->regs->di;
+                break;
+            case 4:
+                s->ea.mem.off = s->regs->si;
+                break;
+            case 5:
+                s->ea.mem.off = s->regs->di;
+                break;
+            case 6:
+                if ( s->modrm_mod == 0 )
+                    break;
+                s->ea.mem.seg = x86_seg_ss;
+                s->ea.mem.off = s->regs->bp;
+                break;
+            case 7:
+                s->ea.mem.off = s->regs->bx;
+                break;
+            }
+            switch ( s->modrm_mod )
+            {
+            case 0:
+                if ( s->modrm_rm == 6 )
+                    s->ea.mem.off = insn_fetch_type(int16_t);
+                break;
+            case 1:
+                s->ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
+                break;
+            case 2:
+                s->ea.mem.off += insn_fetch_type(int16_t);
+                break;
+            }
+        }
+        else
+        {
+            /* 32/64-bit ModR/M decode. */
+            s->ea.type = OP_MEM;
+            if ( s->modrm_rm == 4 )
+            {
+                uint8_t sib = insn_fetch_type(uint8_t);
+                uint8_t sib_base = (sib & 7) | ((s->rex_prefix << 3) & 8);
+
+                s->sib_index = ((sib >> 3) & 7) | ((s->rex_prefix << 2) & 8);
+                s->sib_scale = (sib >> 6) & 3;
+                if ( unlikely(d & vSIB) )
+                    s->sib_index |= (mode_64bit() && evex_encoded() &&
+                                     !s->evex.RX) << 4;
+                else if ( s->sib_index != 4 )
+                {
+                    s->ea.mem.off = *decode_gpr(s->regs, s->sib_index);
+                    s->ea.mem.off <<= s->sib_scale;
+                }
+                if ( (s->modrm_mod == 0) && ((sib_base & 7) == 5) )
+                    s->ea.mem.off += insn_fetch_type(int32_t);
+                else if ( sib_base == 4 )
+                {
+                    s->ea.mem.seg  = x86_seg_ss;
+                    s->ea.mem.off += s->regs->r(sp);
+                    if ( !s->ext && (b == 0x8f) )
+                        /* POP <rm> computes its EA post increment. */
+                        s->ea.mem.off += ((mode_64bit() && (s->op_bytes == 4))
+                                       ? 8 : s->op_bytes);
+                }
+                else if ( sib_base == 5 )
+                {
+                    s->ea.mem.seg  = x86_seg_ss;
+                    s->ea.mem.off += s->regs->r(bp);
+                }
+                else
+                    s->ea.mem.off += *decode_gpr(s->regs, sib_base);
+            }
+            else
+            {
+                generate_exception_if(d & vSIB, X86_EXC_UD);
+                s->modrm_rm |= (s->rex_prefix & 1) << 3;
+                s->ea.mem.off = *decode_gpr(s->regs, s->modrm_rm);
+                if ( (s->modrm_rm == 5) && (s->modrm_mod != 0) )
+                    s->ea.mem.seg = x86_seg_ss;
+            }
+            switch ( s->modrm_mod )
+            {
+            case 0:
+                if ( (s->modrm_rm & 7) != 5 )
+                    break;
+                s->ea.mem.off = insn_fetch_type(int32_t);
+                pc_rel = mode_64bit();
+                break;
+            case 1:
+                s->ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
+                break;
+            case 2:
+                s->ea.mem.off += insn_fetch_type(int32_t);
+                break;
+            }
+        }
+    }
+    else
+    {
+        s->modrm_mod = 0xff;
+        s->modrm_reg = s->modrm_rm = s->modrm = 0;
+    }
+
+    if ( override_seg != x86_seg_none )
+        s->ea.mem.seg = override_seg;
+
+    /* Fetch the immediate operand, if present. */
+    switch ( d & SrcMask )
+    {
+        unsigned int bytes;
+
+    case SrcImm:
+        if ( !(d & ByteOp) )
+        {
+            if ( mode_64bit() && !amd_like(ctxt) &&
+                 ((s->ext == ext_none && (b | 1) == 0xe9) /* call / jmp */ ||
+                  (s->ext == ext_0f && (b | 0xf) == 0x8f) /* jcc */ ) )
+                s->op_bytes = 4;
+            bytes = s->op_bytes != 8 ? s->op_bytes : 4;
+        }
+        else
+        {
+    case SrcImmByte:
+            bytes = 1;
+        }
+        /* NB. Immediates are sign-extended as necessary. */
+        switch ( bytes )
+        {
+        case 1: s->imm1 = insn_fetch_type(int8_t);  break;
+        case 2: s->imm1 = insn_fetch_type(int16_t); break;
+        case 4: s->imm1 = insn_fetch_type(int32_t); break;
+        }
+        break;
+    case SrcImm16:
+        s->imm1 = insn_fetch_type(uint16_t);
+        break;
+    }
+
+    ctxt->opcode = opcode;
+    s->desc = d;
+
+    switch ( s->ext )
+    {
+    case ext_none:
+        rc = decode_onebyte(s, ctxt, ops);
+        break;
+
+    case ext_0f:
+        rc = decode_twobyte(s, ctxt, ops);
+        break;
+
+    case ext_0f38:
+        rc = decode_0f38(s, ctxt, ops);
+        break;
+
+    case ext_0f3a:
+        d = ext0f3a_table[b].to_mem ? DstMem | SrcReg : DstReg | SrcMem;
+        if ( ext0f3a_table[b].two_op )
+            d |= TwoOp;
+        else if ( ext0f3a_table[b].four_op && !mode_64bit() && s->vex.opcx )
+            s->imm1 &= 0x7f;
+        s->desc = d;
+        rc = decode_0f3a(s, ctxt, ops);
+        break;
+
+    case ext_8f08:
+        d = DstReg | SrcMem;
+        if ( ext8f08_table[b].two_op )
+            d |= TwoOp;
+        else if ( ext8f08_table[b].four_op && !mode_64bit() )
+            s->imm1 &= 0x7f;
+        s->desc = d;
+        s->simd_size = ext8f08_table[b].simd_size;
+        break;
+
+    case ext_8f09:
+    case ext_8f0a:
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNIMPLEMENTED;
+    }
+
+    if ( s->ea.type == OP_MEM )
+    {
+        if ( pc_rel )
+            s->ea.mem.off += s->ip;
+
+        s->ea.mem.off = truncate_ea(s->ea.mem.off);
+    }
+
+    /*
+     * Simple op_bytes calculations. More complicated cases produce 0
+     * and are further handled during execute.
+     */
+    switch ( s->simd_size )
+    {
+    case simd_none:
+        /*
+         * When prefix 66 has a meaning different from operand-size override,
+         * operand size defaults to 4 and can't be overridden to 2.
+         */
+        if ( s->op_bytes == 2 &&
+             (ctxt->opcode & X86EMUL_OPC_PFX_MASK) == X86EMUL_OPC_66(0, 0) )
+            s->op_bytes = 4;
+        break;
+
+#ifndef X86EMUL_NO_SIMD
+    case simd_packed_int:
+        switch ( s->vex.pfx )
+        {
+        case vex_none:
+            if ( !s->vex.opcx )
+            {
+                s->op_bytes = 8;
+                break;
+            }
+            /* fall through */
+        case vex_66:
+            s->op_bytes = 16 << s->evex.lr;
+            break;
+        default:
+            s->op_bytes = 0;
+            break;
+        }
+        break;
+
+    case simd_single_fp:
+        if ( s->vex.pfx & VEX_PREFIX_DOUBLE_MASK )
+        {
+            s->op_bytes = 0;
+            break;
+    case simd_packed_fp:
+            if ( s->vex.pfx & VEX_PREFIX_SCALAR_MASK )
+            {
+                s->op_bytes = 0;
+                break;
+            }
+        }
+        /* fall through */
+    case simd_any_fp:
+        switch ( s->vex.pfx )
+        {
+        default:
+            s->op_bytes = 16 << s->evex.lr;
+            break;
+        case vex_f3:
+            generate_exception_if(evex_encoded() && s->evex.w, X86_EXC_UD);
+            s->op_bytes = 4;
+            break;
+        case vex_f2:
+            generate_exception_if(evex_encoded() && !s->evex.w, X86_EXC_UD);
+            s->op_bytes = 8;
+            break;
+        }
+        break;
+
+    case simd_scalar_opc:
+        s->op_bytes = 4 << (ctxt->opcode & 1);
+        break;
+
+    case simd_scalar_vexw:
+        s->op_bytes = 4 << s->vex.w;
+        break;
+
+    case simd_128:
+        /* The special cases here are MMX shift insns. */
+        s->op_bytes = s->vex.opcx || s->vex.pfx ? 16 : 8;
+        break;
+
+    case simd_256:
+        s->op_bytes = 32;
+        break;
+#endif /* !X86EMUL_NO_SIMD */
+
+    default:
+        s->op_bytes = 0;
+        break;
+    }
+
+ done:
+    return rc;
+}
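As an aside for reviewers less familiar with the 16-bit addressing path: the `ad_bytes == 2` branch above implements the classic real/16-bit-protected-mode ModR/M table, where `rm` selects one of eight fixed base/index register sums. A minimal standalone sketch of that mapping (hypothetical helper and register struct, not code from this patch; register values are passed in explicitly rather than read from `s->regs`):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the 16-bit ModR/M effective-address pairs. */
struct regs16 { uint16_t bx, bp, si, di; };

static uint16_t ea16(unsigned int modrm_rm, int16_t disp,
                     const struct regs16 *r)
{
    /* rm encodes one of eight base/index sums (cf. the switch above). */
    switch ( modrm_rm & 7 )
    {
    case 0: return r->bx + r->si + disp;
    case 1: return r->bx + r->di + disp;
    case 2: return r->bp + r->si + disp; /* SS-relative */
    case 3: return r->bp + r->di + disp; /* SS-relative */
    case 4: return r->si + disp;
    case 5: return r->di + disp;
    case 6: return r->bp + disp; /* mod == 0 would instead mean disp16 */
    default: return r->bx + disp;
    }
}
```

The `mod == 0, rm == 6` special case (absolute disp16 instead of BP-based) is why the decoder above re-reads `s->ea.mem.off` in its second switch.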
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -38,9 +38,11 @@
 #ifdef __i386__
 # define mode_64bit() false
 # define r(name) e ## name
+# define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #else
 # define mode_64bit() (ctxt->addr_size == 64)
 # define r(name) r ## name
+# define PTR_POISON ((void *)0x8086000000008086UL) /* non-canonical */
 #endif
 
 /* Operand sizes: 8-bit operands or specified/overridden size. */
@@ -77,6 +79,23 @@
 
 typedef uint8_t opcode_desc_t;
 
+enum disp8scale {
+    /* Values 0 ... 4 are explicit sizes. */
+    d8s_bw = 5,
+    d8s_dq,
+    /* EVEX.W ignored outside of 64-bit mode */
+    d8s_dq64,
+    /*
+     * All further values must strictly be last and in the order
+     * given so that arithmetic on the values works.
+     */
+    d8s_vl,
+    d8s_vl_by_2,
+    d8s_vl_by_4,
+    d8s_vl_by_8,
+};
+typedef uint8_t disp8scale_t;
+
 /* Type, address-of, and value of an instruction's operand. */
 struct operand {
     enum { OP_REG, OP_MEM, OP_IMM, OP_NONE } type;
@@ -183,6 +202,9 @@ enum vex_pfx {
     vex_f2
 };
 
+#define VEX_PREFIX_DOUBLE_MASK 0x1
+#define VEX_PREFIX_SCALAR_MASK 0x2
+
 union vex {
     uint8_t raw[2];
     struct {             /* SDM names */
@@ -716,6 +738,10 @@ do {
     if ( rc ) goto done;                                        \
 } while (0)
 
+int x86emul_decode(struct x86_emulate_state *s,
+                   struct x86_emulate_ctxt *ctxt,
+                   const struct x86_emulate_ops *ops);
+
 int x86emul_fpu(struct x86_emulate_state *s,
                 struct cpu_user_regs *regs,
                 struct operand *dst,
@@ -745,6 +771,13 @@ int x86emul_0fc7(struct x86_emulate_stat
                  const struct x86_emulate_ops *ops,
                  mmval_t *mmvalp);
 
+/* Initialise output state in x86_emulate_ctxt */
+static inline void init_context(struct x86_emulate_ctxt *ctxt)
+{
+    ctxt->retire.raw = 0;
+    x86_emul_reset_event(ctxt);
+}
+
 static inline bool is_aligned(enum x86_segment seg, unsigned long offs,
                               unsigned int size, struct x86_emulate_ctxt *ctxt,
                               const struct x86_emulate_ops *ops)
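For context on the `disp8scale` enum moved into this header: EVEX-encoded memory operands use the AVX-512 "disp8*N" compressed displacement, where the stored one-byte displacement counts elements (or vectors) rather than bytes, so the decoder multiplies it by 2^scale before adding it to the effective address (the `insn_fetch_type(int8_t) * (1 << disp8scale)` expressions in x86emul_decode()). A minimal sketch of just that arithmetic, assuming the scale has already been derived from the tables' `d8s_*` values:

```c
#include <assert.h>
#include <stdint.h>

/*
 * AVX-512 compressed displacement: disp8 is an element/vector count.
 * disp8scale is log2 of the scaling factor N, as computed by
 * decode_disp8scale() from d8s_* plus EVEX.L'L and broadcast state.
 */
static int64_t scale_disp8(int8_t disp8, unsigned int disp8scale)
{
    return (int64_t)disp8 * (1 << disp8scale);
}
```

E.g. a full-vector ZMM operand (N = 64) reaches ±8128 bytes from a single signed displacement byte, which is the point of the compression scheme.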
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -22,274 +22,6 @@
 
 #include "private.h"
 
-static const opcode_desc_t opcode_table[256] = {
-    /* 0x00 - 0x07 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x08 - 0x0F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, 0,
-    /* 0x10 - 0x17 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x18 - 0x1F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x20 - 0x27 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x28 - 0x2F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x30 - 0x37 */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x38 - 0x3F */
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
-    /* 0x40 - 0x4F */
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    /* 0x50 - 0x5F */
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x60 - 0x67 */
-    ImplicitOps, ImplicitOps, DstReg|SrcMem|ModRM, DstReg|SrcNone|ModRM|Mov,
-    0, 0, 0, 0,
-    /* 0x68 - 0x6F */
-    DstImplicit|SrcImm|Mov, DstReg|SrcImm|ModRM|Mov,
-    DstImplicit|SrcImmByte|Mov, DstReg|SrcImmByte|ModRM|Mov,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps|Mov,
-    /* 0x70 - 0x77 */
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    /* 0x78 - 0x7F */
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    /* 0x80 - 0x87 */
-    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImm|ModRM,
-    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM,
-    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
-    /* 0x88 - 0x8F */
-    ByteOp|DstMem|SrcReg|ModRM|Mov, DstMem|SrcReg|ModRM|Mov,
-    ByteOp|DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov,
-    DstMem|SrcReg|ModRM|Mov, DstReg|SrcNone|ModRM,
-    DstReg|SrcMem16|ModRM|Mov, DstMem|SrcNone|ModRM|Mov,
-    /* 0x90 - 0x97 */
-    DstImplicit|SrcEax, DstImplicit|SrcEax,
-    DstImplicit|SrcEax, DstImplicit|SrcEax,
-    DstImplicit|SrcEax, DstImplicit|SrcEax,
-    DstImplicit|SrcEax, DstImplicit|SrcEax,
-    /* 0x98 - 0x9F */
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps|Mov, ImplicitOps|Mov, ImplicitOps, ImplicitOps,
-    /* 0xA0 - 0xA7 */
-    ByteOp|DstEax|SrcMem|Mov, DstEax|SrcMem|Mov,
-    ByteOp|DstMem|SrcEax|Mov, DstMem|SrcEax|Mov,
-    ByteOp|ImplicitOps|Mov, ImplicitOps|Mov,
-    ByteOp|ImplicitOps, ImplicitOps,
-    /* 0xA8 - 0xAF */
-    ByteOp|DstEax|SrcImm, DstEax|SrcImm,
-    ByteOp|DstImplicit|SrcEax|Mov, DstImplicit|SrcEax|Mov,
-    ByteOp|DstEax|SrcImplicit|Mov, DstEax|SrcImplicit|Mov,
-    ByteOp|DstImplicit|SrcEax, DstImplicit|SrcEax,
-    /* 0xB0 - 0xB7 */
-    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
-    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
-    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
-    ByteOp|DstReg|SrcImm|Mov, ByteOp|DstReg|SrcImm|Mov,
-    /* 0xB8 - 0xBF */
-    DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov,
-    DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov, DstReg|SrcImm|Mov,
-    /* 0xC0 - 0xC7 */
-    ByteOp|DstMem|SrcImm|ModRM, DstMem|SrcImmByte|ModRM,
-    DstImplicit|SrcImm16, ImplicitOps,
-    DstReg|SrcMem|ModRM|Mov, DstReg|SrcMem|ModRM|Mov,
-    ByteOp|DstMem|SrcImm|ModRM|Mov, DstMem|SrcImm|ModRM|Mov,
-    /* 0xC8 - 0xCF */
-    DstImplicit|SrcImm16, ImplicitOps, DstImplicit|SrcImm16, ImplicitOps,
-    ImplicitOps, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps,
-    /* 0xD0 - 0xD7 */
-    ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM,
-    ByteOp|DstMem|SrcImplicit|ModRM, DstMem|SrcImplicit|ModRM,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte, ImplicitOps, ImplicitOps,
-    /* 0xD8 - 0xDF */
-    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
-    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
-    ImplicitOps|ModRM, ImplicitOps|ModRM|Mov,
-    DstImplicit|SrcMem16|ModRM, ImplicitOps|ModRM|Mov,
-    /* 0xE0 - 0xE7 */
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    DstEax|SrcImmByte, DstEax|SrcImmByte,
-    DstImplicit|SrcImmByte, DstImplicit|SrcImmByte,
-    /* 0xE8 - 0xEF */
-    DstImplicit|SrcImm|Mov, DstImplicit|SrcImm,
-    ImplicitOps, DstImplicit|SrcImmByte,
-    DstEax|SrcImplicit, DstEax|SrcImplicit, ImplicitOps, ImplicitOps,
-    /* 0xF0 - 0xF7 */
-    0, ImplicitOps, 0, 0,
-    ImplicitOps, ImplicitOps, ByteOp|ModRM, ModRM,
-    /* 0xF8 - 0xFF */
-    ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
-    ImplicitOps, ImplicitOps, ByteOp|DstMem|SrcNone|ModRM, DstMem|SrcNone|ModRM
-};
-
-enum disp8scale {
-    /* Values 0 ... 4 are explicit sizes. */
-    d8s_bw = 5,
-    d8s_dq,
-    /* EVEX.W ignored outside of 64-bit mode */
-    d8s_dq64,
-    /*
-     * All further values must strictly be last and in the order
-     * given so that arithmetic on the values works.
-     */
-    d8s_vl,
-    d8s_vl_by_2,
-    d8s_vl_by_4,
-    d8s_vl_by_8,
-};
-typedef uint8_t disp8scale_t;
-
-static const struct twobyte_table {
-    opcode_desc_t desc;
-    simd_opsize_t size:4;
-    disp8scale_t d8s:4;
-} twobyte_table[256] = {
-    [0x00] = { ModRM },
-    [0x01] = { ImplicitOps|ModRM },
-    [0x02] = { DstReg|SrcMem16|ModRM },
-    [0x03] = { DstReg|SrcMem16|ModRM },
-    [0x05] = { ImplicitOps },
-    [0x06] = { ImplicitOps },
-    [0x07] = { ImplicitOps },
-    [0x08] = { ImplicitOps },
-    [0x09] = { ImplicitOps },
-    [0x0b] = { ImplicitOps },
-    [0x0d] = { ImplicitOps|ModRM },
-    [0x0e] = { ImplicitOps },
-    [0x0f] = { ModRM|SrcImmByte },
-    [0x10] = { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl },
-    [0x11] = { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl },
-    [0x12] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
-    [0x13] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
-    [0x14 ... 0x15] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
-    [0x16] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
-    [0x17] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
-    [0x18 ... 0x1f] = { ImplicitOps|ModRM },
-    [0x20 ... 0x21] = { DstMem|SrcImplicit|ModRM },
-    [0x22 ... 0x23] = { DstImplicit|SrcMem|ModRM },
-    [0x28] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
-    [0x29] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_fp, d8s_vl },
-    [0x2a] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
-    [0x2b] = { DstMem|SrcImplicit|ModRM|Mov, simd_any_fp, d8s_vl },
-    [0x2c ... 0x2d] = { DstImplicit|SrcMem|ModRM|Mov, simd_other },
-    [0x2e ... 0x2f] = { ImplicitOps|ModRM|TwoOp, simd_none, d8s_dq },
-    [0x30 ... 0x35] = { ImplicitOps },
-    [0x37] = { ImplicitOps },
-    [0x38] = { DstReg|SrcMem|ModRM },
-    [0x3a] = { DstReg|SrcImmByte|ModRM },
-    [0x40 ... 0x4f] = { DstReg|SrcMem|ModRM|Mov },
-    [0x50] = { DstReg|SrcImplicit|ModRM|Mov },
-    [0x51] = { DstImplicit|SrcMem|ModRM|TwoOp, simd_any_fp, d8s_vl },
-    [0x52 ... 0x53] = { DstImplicit|SrcMem|ModRM|TwoOp, simd_single_fp },
-    [0x54 ... 0x57] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
-    [0x58 ... 0x59] = { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl },
-    [0x5a] = { DstImplicit|SrcMem|ModRM|Mov, simd_any_fp, d8s_vl },
-    [0x5b] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
-    [0x5c ... 0x5f] = { DstImplicit|SrcMem|ModRM, simd_any_fp, d8s_vl },
-    [0x60 ... 0x62] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
-    [0x63 ... 0x67] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0x68 ... 0x6a] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
-    [0x6b ... 0x6d] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0x6e] = { DstImplicit|SrcMem|ModRM|Mov, simd_none, d8s_dq64 },
-    [0x6f] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_int, d8s_vl },
-    [0x70] = { SrcImmByte|ModRM|TwoOp, simd_other, d8s_vl },
-    [0x71 ... 0x73] = { DstImplicit|SrcImmByte|ModRM, simd_none, d8s_vl },
-    [0x74 ... 0x76] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0x77] = { DstImplicit|SrcNone },
-    [0x78 ... 0x79] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl },
-    [0x7a] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
-    [0x7b] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
-    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other },
-    [0x7e] = { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 },
-    [0x7f] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
-    [0x80 ... 0x8f] = { DstImplicit|SrcImm },
-    [0x90 ... 0x9f] = { ByteOp|DstMem|SrcNone|ModRM|Mov },
-    [0xa0 ... 0xa1] = { ImplicitOps|Mov },
-    [0xa2] = { ImplicitOps },
-    [0xa3] = { DstBitBase|SrcReg|ModRM },
-    [0xa4] = { DstMem|SrcImmByte|ModRM },
-    [0xa5] = { DstMem|SrcReg|ModRM },
-    [0xa6 ... 0xa7] = { ModRM },
-    [0xa8 ... 0xa9] = { ImplicitOps|Mov },
-    [0xaa] = { ImplicitOps },
-    [0xab] = { DstBitBase|SrcReg|ModRM },
-    [0xac] = { DstMem|SrcImmByte|ModRM },
-    [0xad] = { DstMem|SrcReg|ModRM },
-    [0xae] = { ImplicitOps|ModRM },
-    [0xaf] = { DstReg|SrcMem|ModRM },
-    [0xb0] = { ByteOp|DstMem|SrcReg|ModRM },
-    [0xb1] = { DstMem|SrcReg|ModRM },
-    [0xb2] = { DstReg|SrcMem|ModRM|Mov },
-    [0xb3] = { DstBitBase|SrcReg|ModRM },
-    [0xb4 ... 0xb5] = { DstReg|SrcMem|ModRM|Mov },
-    [0xb6] = { ByteOp|DstReg|SrcMem|ModRM|Mov },
-    [0xb7] = { DstReg|SrcMem16|ModRM|Mov },
-    [0xb8] = { DstReg|SrcMem|ModRM },
-    [0xb9] = { ModRM },
-    [0xba] = { DstBitBase|SrcImmByte|ModRM },
-    [0xbb] = { DstBitBase|SrcReg|ModRM },
-    [0xbc ... 0xbd] = { DstReg|SrcMem|ModRM },
-    [0xbe] = { ByteOp|DstReg|SrcMem|ModRM|Mov },
-    [0xbf] = { DstReg|SrcMem16|ModRM|Mov },
-    [0xc0] = { ByteOp|DstMem|SrcReg|ModRM },
-    [0xc1] = { DstMem|SrcReg|ModRM },
-    [0xc2] = { DstImplicit|SrcImmByte|ModRM, simd_any_fp, d8s_vl },
-    [0xc3] = { DstMem|SrcReg|ModRM|Mov },
-    [0xc4] = { DstImplicit|SrcImmByte|ModRM, simd_none, 1 },
-    [0xc5] = { DstReg|SrcImmByte|ModRM|Mov },
-    [0xc6] = { DstImplicit|SrcImmByte|ModRM, simd_packed_fp, d8s_vl },
-    [0xc7] = { ImplicitOps|ModRM },
-    [0xc8 ... 0xcf] = { ImplicitOps },
-    [0xd0] = { DstImplicit|SrcMem|ModRM, simd_other },
-    [0xd1 ... 0xd3] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
-    [0xd4 ... 0xd5] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xd6] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
-    [0xd7] = { DstReg|SrcImplicit|ModRM|Mov },
-    [0xd8 ... 0xdf] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xe0] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xe1 ... 0xe2] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
-    [0xe3 ... 0xe5] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xe6] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
-    [0xe7] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
-    [0xe8 ... 0xef] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xf0] = { DstImplicit|SrcMem|ModRM|Mov, simd_other },
-    [0xf1 ... 0xf3] = { DstImplicit|SrcMem|ModRM, simd_128, 4 },
-    [0xf4 ... 0xf6] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xf7] = { DstMem|SrcMem|ModRM|Mov, simd_packed_int },
-    [0xf8 ... 0xfe] = { DstImplicit|SrcMem|ModRM, simd_packed_int, d8s_vl },
-    [0xff] = { ModRM }
-};
-
 /*
  * The next two tables are indexed by high opcode extension byte (the one
  * that's encoded like an immediate) nibble, with each table element then
@@ -325,257 +57,9 @@ static const uint16_t _3dnow_ext_table[1
     [0xb] = (1 << 0xb) /* pswapd */,
 };
 
-/*
- * "two_op" and "four_op" below refer to the number of register operands
- * (one of which possibly also allowing to be a memory one). The named
- * operand counts do not include any immediate operands.
- */
-static const struct ext0f38_table {
-    uint8_t simd_size:5;
-    uint8_t to_mem:1;
-    uint8_t two_op:1;
-    uint8_t vsib:1;
-    disp8scale_t d8s:4;
-} ext0f38_table[256] = {
-    [0x00] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x01 ... 0x03] = { .simd_size = simd_packed_int },
-    [0x04] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x05 ... 0x0a] = { .simd_size = simd_packed_int },
-    [0x0b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x0c ... 0x0d] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x0e ... 0x0f] = { .simd_size = simd_packed_fp },
-    [0x10 ... 0x12] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x13] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x14 ... 0x16] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x17] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0x18] = { .simd_size = simd_scalar_opc, .two_op = 1, .d8s = 2 },
-    [0x19] = { .simd_size = simd_scalar_opc, .two_op = 1, .d8s = 3 },
-    [0x1a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
-    [0x1b] = { .simd_size = simd_256, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x1c ... 0x1f] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0x20] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x21] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
-    [0x22] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_8 },
-    [0x23] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x24] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
-    [0x25] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x26 ... 0x29] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x2a] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0x2b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x2c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x2d] = { .simd_size = simd_packed_fp, .d8s = d8s_dq },
-    [0x2e ... 0x2f] = { .simd_size = simd_packed_fp, .to_mem = 1 },
-    [0x30] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x31] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
-    [0x32] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_8 },
-    [0x33] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x34] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
-    [0x35] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x36 ... 0x3f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x40] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x41] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0x42] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x43] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x44] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0x45 ... 0x47] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x4c] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x4d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x4e] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x4f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x50 ... 0x53] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x54 ... 0x55] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0x58] = { .simd_size = simd_other, .two_op = 1, .d8s = 2 },
-    [0x59] = { .simd_size = simd_other, .two_op = 1, .d8s = 3 },
-    [0x5a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
-    [0x5b] = { .simd_size = simd_256, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x62] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_bw },
-    [0x63] = { .simd_size = simd_packed_int, .to_mem = 1, .two_op = 1, .d8s = d8s_bw },
-    [0x64 ... 0x66] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x68] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x70 ... 0x73] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x75 ... 0x76] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x77] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x78] = { .simd_size = simd_other, .two_op = 1 },
-    [0x79] = { .simd_size = simd_other, .two_op = 1, .d8s = 1 },
-    [0x7a ... 0x7c] = { .simd_size = simd_none, .two_op = 1 },
-    [0x7d ... 0x7e] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x7f] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x82] = { .simd_size = simd_other },
-    [0x83] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x88] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_dq },
-    [0x89] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_dq },
-    [0x8a] = { .simd_size = simd_packed_fp, .to_mem = 1, .two_op = 1, .d8s = d8s_dq },
-    [0x8b] = { .simd_size = simd_packed_int, .to_mem = 1, .two_op = 1, .d8s = d8s_dq },
-    [0x8c] = { .simd_size = simd_packed_int },
-    [0x8d] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x8e] = { .simd_size = simd_packed_int, .to_mem = 1 },
-    [0x8f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x90 ... 0x93] = { .simd_size = simd_other, .vsib = 1, .d8s = d8s_dq },
-    [0x96 ... 0x98] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x99] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x9a] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x9b] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x9c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x9d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x9e] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x9f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xa0 ... 0xa3] = { .simd_size = simd_other, .to_mem = 1, .vsib = 1, .d8s = d8s_dq },
-    [0xa6 ... 0xa8] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xa9] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xaa] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xab] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xac] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xad] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xae] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xaf] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xb4 ... 0xb5] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0xb6 ... 0xb8] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xb9] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xba] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xbb] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xbc] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xbd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xbe] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0xbf] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xc4] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0xc6 ... 0xc7] = { .simd_size = simd_other, .vsib = 1, .d8s = d8s_dq },
-    [0xc8] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0xc9] = { .simd_size = simd_other },
-    [0xca] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0xcb] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xcc] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0xcd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0xf0] = { .two_op = 1 },
-    [0xf1] = { .to_mem = 1, .two_op = 1 },
-    [0xf2 ... 0xf3] = {},
-    [0xf5 ... 0xf7] = {},
-    [0xf8] = { .simd_size = simd_other },
-    [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ },
-};
-
 /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */
 static const uint8_t pmov_convert_delta[] = { 1, 2, 3, 1, 2, 1 };
 
-static const struct ext0f3a_table {
-    uint8_t simd_size:5;
-    uint8_t to_mem:1;
-    uint8_t two_op:1;
-    uint8_t four_op:1;
-    disp8scale_t d8s:4;
-} ext0f3a_table[256] = {
-    [0x00] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
-    [0x01] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x02] = { .simd_size = simd_packed_int },
-    [0x03] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x04 ... 0x05] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x06] = { .simd_size = simd_packed_fp },
-    [0x08 ... 0x09] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x0a ... 0x0b] = { .simd_size = simd_scalar_opc, .d8s = d8s_dq },
-    [0x0c ... 0x0d] = { .simd_size = simd_packed_fp },
-    [0x0e] = { .simd_size = simd_packed_int },
-    [0x0f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x14] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 0 },
-    [0x15] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 1 },
-    [0x16] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = d8s_dq64 },
-    [0x17] = { .simd_size = simd_none, .to_mem = 1, .two_op = 1, .d8s = 2 },
-    [0x18] = { .simd_size = simd_128, .d8s = 4 },
-    [0x19] = { .simd_size = simd_128, .to_mem = 1, .two_op = 1, .d8s = 4 },
-    [0x1a] = { .simd_size = simd_256, .d8s = d8s_vl_by_2 },
-    [0x1b] = { .simd_size = simd_256, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x1d] = { .simd_size = simd_other, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x1e ... 0x1f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x20] = { .simd_size = simd_none, .d8s = 0 },
-    [0x21] = { .simd_size = simd_other, .d8s = 2 },
-    [0x22] = { .simd_size = simd_none, .d8s = d8s_dq64 },
-    [0x23] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x25] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x26] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x27] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x30 ... 0x33] = { .simd_size = simd_other, .two_op = 1 },
-    [0x38] = { .simd_size = simd_128, .d8s = 4 },
-    [0x3a] = { .simd_size = simd_256, .d8s = d8s_vl_by_2 },
-    [0x39] = { .simd_size = simd_128, .to_mem = 1, .two_op = 1, .d8s = 4 },
-    [0x3b] = { .simd_size = simd_256, .to_mem = 1, .two_op = 1, .d8s = d8s_vl_by_2 },
-    [0x3e ... 0x3f] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x40 ... 0x41] = { .simd_size = simd_packed_fp },
-    [0x42 ... 0x43] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x44] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x46] = { .simd_size = simd_packed_int },
-    [0x48 ... 0x49] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x4a ... 0x4b] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x4c] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0x50] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x51] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x54] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x55] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x56] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x57] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
-    [0x5c ... 0x5f] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x60 ... 0x63] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0x66] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
-    [0x67] = { .simd_size = simd_scalar_vexw, .two_op = 1, .d8s = d8s_dq },
-    [0x68 ... 0x69] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x6a ... 0x6b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
-    [0x6c ... 0x6d] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x6e ... 0x6f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
-    [0x70 ... 0x73] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0x78 ... 0x79] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x7a ... 0x7b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
-    [0x7c ... 0x7d] = { .simd_size = simd_packed_fp, .four_op = 1 },
-    [0x7e ... 0x7f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
-    [0xcc] = { .simd_size = simd_other },
-    [0xce ... 0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
-    [0xdf] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xf0] = {},
-};
-
-static const opcode_desc_t xop_table[] = {
-    DstReg|SrcImmByte|ModRM,
-    DstReg|SrcMem|ModRM,
-    DstReg|SrcImm|ModRM,
-};
-
-static const struct ext8f08_table {
-    uint8_t simd_size:5;
-    uint8_t two_op:1;
-    uint8_t four_op:1;
-} ext8f08_table[256] = {
-    [0xa2] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0x85 ... 0x87] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0x8e ... 0x8f] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0x95 ... 0x97] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0x9e ... 0x9f] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0xa3] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0xa6] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0xb6] = { .simd_size = simd_packed_int, .four_op = 1 },
-    [0xc0 ... 0xc3] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xcc ... 0xcf] = { .simd_size = simd_packed_int },
-    [0xec ... 0xef] = { .simd_size = simd_packed_int },
-};
-
-static const struct ext8f09_table {
-    uint8_t simd_size:5;
-    uint8_t two_op:1;
-} ext8f09_table[256] = {
-    [0x01 ... 0x02] = { .two_op = 1 },
-    [0x80 ... 0x81] = { .simd_size = simd_packed_fp, .two_op = 1 },
-    [0x82 ... 0x83] = { .simd_size = simd_scalar_opc, .two_op = 1 },
-    [0x90 ... 0x9b] = { .simd_size = simd_packed_int },
-    [0xc1 ... 0xc3] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xc6 ... 0xc7] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xcb] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xd1 ... 0xd3] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xd6 ... 0xd7] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
-    [0xe1 ... 0xe3] = { .simd_size = simd_packed_int, .two_op = 1 },
-};
-
-#define VEX_PREFIX_DOUBLE_MASK 0x1
-#define VEX_PREFIX_SCALAR_MASK 0x2
-
 static const uint8_t sse_prefix[] = { 0x66, 0xf3, 0xf2 };
 
 #ifdef __x86_64__
@@ -637,12 +121,6 @@ static const uint8_t sse_prefix[] = { 0x
 #define repe_prefix()  (vex.pfx == vex_f3)
 #define repne_prefix() (vex.pfx == vex_f2)
 
-#ifdef __x86_64__
-#define PTR_POISON ((void *)0x8086000000008086UL) /* non-canonical */
-#else
-#define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
-#endif
-
 /*
  * While proper alignment gets specified in mmval_t, this doesn't get honored
  * by the compiler for automatic variables. Use this helper to instantiate a
@@ -831,19 +309,6 @@ do{ asm volatile (
                 : [msk] "i" (EFLAGS_MASK), ## src);                     \
 } while (0)
 
-/* Fetch next part of the instruction being emulated. */
-#define insn_fetch_bytes(_size)                                         \
-({ unsigned long _x = 0, _ip = state->ip;                               \
-   state->ip += (_size); /* real hardware doesn't truncate */           \
-   generate_exception_if((uint8_t)(state->ip -                          \
-                                   ctxt->regs->r(ip)) > MAX_INST_LEN,   \
-                         EXC_GP, 0);                                    \
-   rc = ops->insn_fetch(_ip, &_x, _size, ctxt);                         \
-   if ( rc ) goto done;                                                 \
-   _x;                                                                  \
-})
-#define insn_fetch_type(_type) ((_type)insn_fetch_bytes(sizeof(_type)))
-
 /*
  * Given byte has even parity (even number of 1s)? SDM Vol. 1 Sec. 3.4.3.1,
  * "Status Flags": EFLAGS.PF reflects parity of least-sig. byte of result only.
@@ -1354,13 +819,6 @@ static int ioport_access_check(
     return rc;
 }
 
-/* Initialise output state in x86_emulate_ctxt */
-static void init_context(struct x86_emulate_ctxt *ctxt)
-{
-    ctxt->retire.raw = 0;
-    x86_emul_reset_event(ctxt);
-}
-
 static int
 realmode_load_seg(
     enum x86_segment seg,
@@ -1707,51 +1165,6 @@ static unsigned long *decode_vex_gpr(
     return decode_gpr(regs, ~vex_reg & (mode_64bit() ? 0xf : 7));
 }
 
-static unsigned int decode_disp8scale(enum disp8scale scale,
-                                      const struct x86_emulate_state *state)
-{
-    switch ( scale )
-    {
-    case d8s_bw:
-        return state->evex.w;
-
-    default:
-        if ( scale < d8s_vl )
-            return scale;
-        if ( state->evex.brs )
-        {
-    case d8s_dq:
-            return 2 + state->evex.w;
-        }
-        break;
-
-    case d8s_dq64:
-        return 2 + (state->op_bytes == 8);
-    }
-
-    switch ( state->simd_size )
-    {
-    case simd_any_fp:
-    case simd_single_fp:
-        if ( !(state->evex.pfx & VEX_PREFIX_SCALAR_MASK) )
-            break;
-        /* fall through */
-    case simd_scalar_opc:
-    case simd_scalar_vexw:
-        return 2 + state->evex.w;
-
-    case simd_128:
-        /* These should have an explicit size specified. */
-        ASSERT_UNREACHABLE();
-        return 4;
-
-    default:
-        break;
-    }
-
-    return 4 + state->evex.lr - (scale - d8s_vl);
-}
-
 #define avx512_vlen_check(lig) do { \
     switch ( evex.lr ) \
     { \
@@ -1833,1138 +1246,6 @@ int cf_check x86emul_unhandleable_rw(
 #define evex_encoded() (evex.mbs)
 #define ea (state->ea)
 
-static int
-x86_decode_onebyte(
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops *ops)
-{
-    int rc = X86EMUL_OKAY;
-
-    switch ( ctxt->opcode )
-    {
-    case 0x06: /* push %%es */
-    case 0x07: /* pop %%es */
-    case 0x0e: /* push %%cs */
-    case 0x16: /* push %%ss */
-    case 0x17: /* pop %%ss */
-    case 0x1e: /* push %%ds */
-    case 0x1f: /* pop %%ds */
-    case 0x27: /* daa */
-    case 0x2f: /* das */
-    case 0x37: /* aaa */
-    case 0x3f: /* aas */
-    case 0x60: /* pusha */
-    case 0x61: /* popa */
-    case 0x62: /* bound */
-    case 0xc4: /* les */
-    case 0xc5: /* lds */
-    case 0xce: /* into */
-    case 0xd4: /* aam */
-    case 0xd5: /* aad */
-    case 0xd6: /* salc */
-        state->not_64bit = true;
-        break;
-
-    case 0x82: /* Grp1 (x86/32 only) */
-        state->not_64bit = true;
-        /* fall through */
-    case 0x80: case 0x81: case 0x83: /* Grp1 */
-        if ( (modrm_reg & 7) == 7 ) /* cmp */
-            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
-        break;
-
-    case 0x90: /* nop / pause */
-        if ( repe_prefix() )
-            ctxt->opcode |= X86EMUL_OPC_F3(0, 0);
-        break;
-
-    case 0x9a: /* call (far, absolute) */
-    case 0xea: /* jmp (far, absolute) */
-        generate_exception_if(mode_64bit(), EXC_UD);
-
-        imm1 = insn_fetch_bytes(op_bytes);
-        imm2 = insn_fetch_type(uint16_t);
-        break;
-
-    case 0xa0: case 0xa1: /* mov mem.offs,{%al,%ax,%eax,%rax} */
-    case 0xa2: case 0xa3: /* mov {%al,%ax,%eax,%rax},mem.offs */
-        /* Source EA is not encoded via ModRM. */
-        ea.type = OP_MEM;
-        ea.mem.off = insn_fetch_bytes(ad_bytes);
-        break;
-
-    case 0xb8 ... 0xbf: /* mov imm{16,32,64},r{16,32,64} */
-        if ( op_bytes == 8 ) /* Fetch more bytes to obtain imm64. */
-            imm1 = ((uint32_t)imm1 |
-                    ((uint64_t)insn_fetch_type(uint32_t) << 32));
-        break;
-
-    case 0xc8: /* enter imm16,imm8 */
-        imm2 = insn_fetch_type(uint8_t);
-        break;
-
-    case 0xf6: case 0xf7: /* Grp3 */
-        if ( !(modrm_reg & 6) ) /* test */
-            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
-        break;
-
-    case 0xff: /* Grp5 */
-        switch ( modrm_reg & 7 )
-        {
-        case 2: /* call (near) */
-        case 4: /* jmp (near) */
-            if ( mode_64bit() && (op_bytes == 4 || !amd_like(ctxt)) )
-                op_bytes = 8;
-            state->desc = DstNone | SrcMem | Mov;
-            break;
-
-        case 3: /* call (far, absolute indirect) */
-        case 5: /* jmp (far, absolute indirect) */
-            /* REX.W ignored on a vendor-dependent basis. */
-            if ( op_bytes == 8 && amd_like(ctxt) )
-                op_bytes = 4;
-            state->desc = DstNone | SrcMem | Mov;
-            break;
-
-        case 6: /* push */
-            if ( mode_64bit() && op_bytes == 4 )
-                op_bytes = 8;
-            state->desc = DstNone | SrcMem | Mov;
-            break;
-        }
-        break;
-    }
-
- done:
-    return rc;
-}
-
-static int
-x86_decode_twobyte(
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops *ops)
-{
-    int rc = X86EMUL_OKAY;
-
-    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
-    {
-    case 0x00: /* Grp6 */
-        switch ( modrm_reg & 6 )
-        {
-        case 0:
-            state->desc |= DstMem | SrcImplicit | Mov;
-            break;
-        case 2: case 4:
-            state->desc |= SrcMem16;
-            break;
-        }
-        break;
-
-    case 0x78:
-        state->desc = ImplicitOps;
-        state->simd_size = simd_none;
-        switch ( vex.pfx )
-        {
-        case vex_66: /* extrq $imm8, $imm8, xmm */
-        case vex_f2: /* insertq $imm8, $imm8, xmm, xmm */
-            imm1 = insn_fetch_type(uint8_t);
-            imm2 = insn_fetch_type(uint8_t);
-            break;
-        }
-        /* fall through */
-    case 0x10 ... 0x18:
-    case 0x28 ... 0x2f:
-    case 0x50 ... 0x77:
-    case 0x7a ... 0x7d:
-    case 0x7f:
-    case 0xc2 ... 0xc3:
-    case 0xc5 ... 0xc6:
-    case 0xd0 ... 0xef:
-    case 0xf1 ... 0xfe:
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-    case 0x20: case 0x22: /* mov to/from cr */
-        if ( lock_prefix && vcpu_has_cr8_legacy() )
-        {
-            modrm_reg += 8;
-            lock_prefix = false;
-        }
-        /* fall through */
-    case 0x21: case 0x23: /* mov to/from dr */
-        ASSERT(ea.type == OP_REG); /* Early operand adjustment ensures this. */
-        generate_exception_if(lock_prefix, EXC_UD);
-        op_bytes = mode_64bit() ? 8 : 4;
-        break;
-
-    case 0x79:
-        state->desc = DstReg | SrcMem;
-        state->simd_size = simd_packed_int;
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-    case 0x7e:
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        if ( vex.pfx == vex_f3 ) /* movq xmm/m64,xmm */
-        {
-    case X86EMUL_OPC_VEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */
-    case X86EMUL_OPC_EVEX_F3(0, 0x7e): /* vmovq xmm/m64,xmm */
-            state->desc = DstImplicit | SrcMem | TwoOp;
-            state->simd_size = simd_other;
-            /* Avoid the state->desc clobbering of TwoOp below. */
-            return X86EMUL_OKAY;
-        }
-        break;
-
-    case X86EMUL_OPC_VEX(0, 0x90):    /* kmov{w,q} */
-    case X86EMUL_OPC_VEX_66(0, 0x90): /* kmov{b,d} */
-        state->desc = DstReg | SrcMem | Mov;
-        state->simd_size = simd_other;
-        break;
-
-    case X86EMUL_OPC_VEX(0, 0x91):    /* kmov{w,q} */
-    case X86EMUL_OPC_VEX_66(0, 0x91): /* kmov{b,d} */
-        state->desc = DstMem | SrcReg | Mov;
-        state->simd_size = simd_other;
-        break;
-
-    case 0xae:
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        /* fall through */
-    case X86EMUL_OPC_VEX(0, 0xae):
-        switch ( modrm_reg & 7 )
-        {
-        case 2: /* {,v}ldmxcsr */
-            state->desc = DstImplicit | SrcMem | Mov;
-            op_bytes = 4;
-            break;
-
-        case 3: /* {,v}stmxcsr */
-            state->desc = DstMem | SrcImplicit | Mov;
-            op_bytes = 4;
-            break;
-        }
-        break;
-
-    case 0xb2: /* lss */
-    case 0xb4: /* lfs */
-    case 0xb5: /* lgs */
-        /* REX.W ignored on a vendor-dependent basis. */
-        if ( op_bytes == 8 && amd_like(ctxt) )
-            op_bytes = 4;
-        break;
-
-    case 0xb8: /* jmpe / popcnt */
-        if ( rep_prefix() )
-            ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-        /* Intentionally not handling here despite being modified by F3:
-    case 0xbc: bsf / tzcnt
-    case 0xbd: bsr / lzcnt
-         * They're being dealt with in the execution phase (if at all).
-         */
-
-    case 0xc4: /* pinsrw */
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        /* fall through */
-    case X86EMUL_OPC_VEX_66(0, 0xc4): /* vpinsrw */
-    case X86EMUL_OPC_EVEX_66(0, 0xc4): /* vpinsrw */
-        state->desc = DstImplicit | SrcMem16;
-        break;
-
-    case 0xf0:
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        if ( vex.pfx == vex_f2 ) /* lddqu mem,xmm */
-        {
-        /* fall through */
-    case X86EMUL_OPC_VEX_F2(0, 0xf0): /* vlddqu mem,{x,y}mm */
-            state->desc = DstImplicit | SrcMem | TwoOp;
-            state->simd_size = simd_other;
-            /* Avoid the state->desc clobbering of TwoOp below. */
-            return X86EMUL_OKAY;
-        }
-        break;
-    }
-
-    /*
-     * Scalar forms of most VEX-/EVEX-encoded TwoOp instructions have
-     * three operands.  Those which do really have two operands
-     * should have exited earlier.
-     */
-    if ( state->simd_size && vex.opcx &&
-         (vex.pfx & VEX_PREFIX_SCALAR_MASK) )
-        state->desc &= ~TwoOp;
-
- done:
-    return rc;
-}
-
-static int
-x86_decode_0f38(
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops *ops)
-{
-    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
-    {
-    case 0x00 ... 0xef:
-    case 0xf2 ... 0xf5:
-    case 0xf7 ... 0xf8:
-    case 0xfa ... 0xff:
-        op_bytes = 0;
-        /* fall through */
-    case 0xf6: /* adcx / adox */
-    case 0xf9: /* movdiri */
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-    case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */
-        state->simd_size = simd_scalar_vexw;
-        break;
-
-    case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */
-    case X86EMUL_OPC_EVEX_66(0, 0x7b): /* vpbroadcastw */
-    case X86EMUL_OPC_EVEX_66(0, 0x7c): /* vpbroadcast{d,q} */
-        break;
-
-    case 0xf0: /* movbe / crc32 */
-        state->desc |= repne_prefix() ? ByteOp : Mov;
-        if ( rep_prefix() )
-            ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-    case 0xf1: /* movbe / crc32 */
-        if ( repne_prefix() )
-            state->desc = DstReg | SrcMem;
-        if ( rep_prefix() )
-            ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-        break;
-
-    case X86EMUL_OPC_VEX(0, 0xf2):    /* andn */
-    case X86EMUL_OPC_VEX(0, 0xf3):    /* Grp 17 */
-    case X86EMUL_OPC_VEX(0, 0xf5):    /* bzhi */
-    case X86EMUL_OPC_VEX_F3(0, 0xf5): /* pext */
-    case X86EMUL_OPC_VEX_F2(0, 0xf5): /* pdep */
-    case X86EMUL_OPC_VEX_F2(0, 0xf6): /* mulx */
-    case X86EMUL_OPC_VEX(0, 0xf7):    /* bextr */
-    case X86EMUL_OPC_VEX_66(0, 0xf7): /* shlx */
-    case X86EMUL_OPC_VEX_F3(0, 0xf7): /* sarx */
-    case X86EMUL_OPC_VEX_F2(0, 0xf7): /* shrx */
-        break;
-
-    default:
-        op_bytes = 0;
-        break;
-    }
-
-    return X86EMUL_OKAY;
-}
-
-static int
-x86_decode_0f3a(
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops *ops)
-{
-    if ( !vex.opcx )
-        ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-
-    switch ( ctxt->opcode & X86EMUL_OPC_MASK )
-    {
-    case X86EMUL_OPC_66(0, 0x14)
-     ... X86EMUL_OPC_66(0, 0x17):     /* pextr*, extractps */
-    case X86EMUL_OPC_VEX_66(0, 0x14)
-     ... X86EMUL_OPC_VEX_66(0, 0x17): /* vpextr*, vextractps */
-    case X86EMUL_OPC_EVEX_66(0, 0x14)
-     ... X86EMUL_OPC_EVEX_66(0, 0x17): /* vpextr*, vextractps */
-    case X86EMUL_OPC_VEX_F2(0, 0xf0): /* rorx */
-        break;
-
-    case X86EMUL_OPC_66(0, 0x20):     /* pinsrb */
-    case X86EMUL_OPC_VEX_66(0, 0x20): /* vpinsrb */
-    case X86EMUL_OPC_EVEX_66(0, 0x20): /* vpinsrb */
-        state->desc = DstImplicit | SrcMem;
-        if ( modrm_mod != 3 )
-            state->desc |= ByteOp;
-        break;
-
-    case X86EMUL_OPC_66(0, 0x22):     /* pinsr{d,q} */
-    case X86EMUL_OPC_VEX_66(0, 0x22): /* vpinsr{d,q} */
-    case X86EMUL_OPC_EVEX_66(0, 0x22): /* vpinsr{d,q} */
-        state->desc = DstImplicit | SrcMem;
-        break;
-
-    default:
-        op_bytes = 0;
-        break;
-    }
-
-    return X86EMUL_OKAY;
-}
-
-static int
-x86_decode(
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops  *ops)
-{
-    uint8_t b, d;
-    unsigned int def_op_bytes, def_ad_bytes, opcode;
-    enum x86_segment override_seg = x86_seg_none;
-    bool pc_rel = false;
-    int rc = X86EMUL_OKAY;
-
-    ASSERT(ops->insn_fetch);
-
-    memset(state, 0, sizeof(*state));
-    ea.type = OP_NONE;
-    ea.mem.seg = x86_seg_ds;
-    ea.reg = PTR_POISON;
-    state->regs = ctxt->regs;
-    state->ip = ctxt->regs->r(ip);
-
-    op_bytes = def_op_bytes = ad_bytes = def_ad_bytes = ctxt->addr_size/8;
-    if ( op_bytes == 8 )
-    {
-        op_bytes = def_op_bytes = 4;
-#ifndef __x86_64__
-        return X86EMUL_UNHANDLEABLE;
-#endif
-    }
-
-    /* Prefix bytes. */
-    for ( ; ; )
-    {
-        switch ( b = insn_fetch_type(uint8_t) )
-        {
-        case 0x66: /* operand-size override */
-            op_bytes = def_op_bytes ^ 6;
-            if ( !vex.pfx )
-                vex.pfx = vex_66;
-            break;
-        case 0x67: /* address-size override */
-            ad_bytes = def_ad_bytes ^ (mode_64bit() ? 12 : 6);
-            break;
-        case 0x2e: /* CS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_cs;
-            break;
-        case 0x3e: /* DS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_ds;
-            break;
-        case 0x26: /* ES override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_es;
-            break;
-        case 0x64: /* FS override */
-            override_seg = x86_seg_fs;
-            break;
-        case 0x65: /* GS override */
-            override_seg = x86_seg_gs;
-            break;
-        case 0x36: /* SS override / ignored in 64-bit mode */
-            if ( !mode_64bit() )
-                override_seg = x86_seg_ss;
-            break;
-        case 0xf0: /* LOCK */
-            lock_prefix = 1;
-            break;
-        case 0xf2: /* REPNE/REPNZ */
-            vex.pfx = vex_f2;
-            break;
-        case 0xf3: /* REP/REPE/REPZ */
-            vex.pfx = vex_f3;
-            break;
-        case 0x40 ... 0x4f: /* REX */
-            if ( !mode_64bit() )
-                goto done_prefixes;
-            rex_prefix = b;
-            continue;
-        default:
-            goto done_prefixes;
-        }
-
-        /* Any legacy prefix after a REX prefix nullifies its effect. */
-        rex_prefix = 0;
-    }
- done_prefixes:
-
-    if ( rex_prefix & REX_W )
-        op_bytes = 8;
-
-    /* Opcode byte(s). */
-    d = opcode_table[b];
-    if ( d == 0 && b == 0x0f )
-    {
-        /* Two-byte opcode. */
-        b = insn_fetch_type(uint8_t);
-        d = twobyte_table[b].desc;
-        switch ( b )
-        {
-        default:
-            opcode = b | MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f;
-            state->simd_size = twobyte_table[b].size;
-            break;
-        case 0x38:
-            b = insn_fetch_type(uint8_t);
-            opcode = b | MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f38;
-            break;
-        case 0x3a:
-            b = insn_fetch_type(uint8_t);
-            opcode = b | MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
-            ext = ext_0f3a;
-            break;
-        }
-    }
-    else
-        opcode = b;
-
-    /* ModRM and SIB bytes. */
-    if ( d & ModRM )
-    {
-        modrm = insn_fetch_type(uint8_t);
-        modrm_mod = (modrm & 0xc0) >> 6;
-
-        if ( !ext && ((b & ~1) == 0xc4 || (b == 0x8f && (modrm & 0x18)) ||
-                      b == 0x62) )
-            switch ( def_ad_bytes )
-            {
-            default:
-                BUG(); /* Shouldn't be possible. */
-            case 2:
-                if ( state->regs->eflags & X86_EFLAGS_VM )
-                    break;
-                /* fall through */
-            case 4:
-                if ( modrm_mod != 3 || in_realmode(ctxt, ops) )
-                    break;
-                /* fall through */
-            case 8:
-                /* VEX / XOP / EVEX */
-                generate_exception_if(rex_prefix || vex.pfx, EXC_UD);
-                /*
-                 * With operand size override disallowed (see above), op_bytes
-                 * should not have changed from its default.
-                 */
-                ASSERT(op_bytes == def_op_bytes);
-
-                vex.raw[0] = modrm;
-                if ( b == 0xc5 )
-                {
-                    opcode = X86EMUL_OPC_VEX_;
-                    vex.raw[1] = modrm;
-                    vex.opcx = vex_0f;
-                    vex.x = 1;
-                    vex.b = 1;
-                    vex.w = 0;
-                }
-                else
-                {
-                    vex.raw[1] = insn_fetch_type(uint8_t);
-                    if ( mode_64bit() )
-                    {
-                        if ( !vex.b )
-                            rex_prefix |= REX_B;
-                        if ( !vex.x )
-                            rex_prefix |= REX_X;
-                        if ( vex.w )
-                        {
-                            rex_prefix |= REX_W;
-                            op_bytes = 8;
-                        }
-                    }
-                    else
-                    {
-                        /* Operand size fixed at 4 (no override via W bit). */
-                        op_bytes = 4;
-                        vex.b = 1;
-                    }
-                    switch ( b )
-                    {
-                    case 0x62:
-                        opcode = X86EMUL_OPC_EVEX_;
-                        evex.raw[0] = vex.raw[0];
-                        evex.raw[1] = vex.raw[1];
-                        evex.raw[2] = insn_fetch_type(uint8_t);
-
-                        generate_exception_if(!evex.mbs || evex.mbz, EXC_UD);
-                        generate_exception_if(!evex.opmsk && evex.z, EXC_UD);
-
-                        if ( !mode_64bit() )
-                            evex.R = 1;
-
-                        vex.opcx = evex.opcx;
-                        break;
-                    case 0xc4:
-                        opcode = X86EMUL_OPC_VEX_;
-                        break;
-                    default:
-                        opcode = 0;
-                        break;
-                    }
-                }
-                if ( !vex.r )
-                    rex_prefix |= REX_R;
-
-                ext = vex.opcx;
-                if ( b != 0x8f )
-                {
-                    b = insn_fetch_type(uint8_t);
-                    switch ( ext )
-                    {
-                    case vex_0f:
-                        opcode |= MASK_INSR(0x0f, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[b].desc;
-                        state->simd_size = twobyte_table[b].size;
-                        break;
-                    case vex_0f38:
-                        opcode |= MASK_INSR(0x0f38, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[0x38].desc;
-                        break;
-                    case vex_0f3a:
-                        opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
-                        d = twobyte_table[0x3a].desc;
-                        break;
-                    default:
-                        rc = X86EMUL_UNRECOGNIZED;
-                        goto done;
-                    }
-                }
-                else if ( ext < ext_8f08 + ARRAY_SIZE(xop_table) )
-                {
-                    b = insn_fetch_type(uint8_t);
-                    opcode |= MASK_INSR(0x8f08 + ext - ext_8f08,
-                                        X86EMUL_OPC_EXT_MASK);
-                    d = array_access_nospec(xop_table, ext - ext_8f08);
-                }
-                else
-                {
-                    rc = X86EMUL_UNRECOGNIZED;
-                    goto done;
-                }
-
-                opcode |= b | MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
-
-                if ( !evex_encoded() )
-                    evex.lr = vex.l;
-
-                if ( !(d & ModRM) )
-                    break;
-
-                modrm = insn_fetch_type(uint8_t);
-                modrm_mod = (modrm & 0xc0) >> 6;
-
-                break;
-            }
-    }
-
-    if ( d & ModRM )
-    {
-        unsigned int disp8scale = 0;
-
-        d &= ~ModRM;
-#undef ModRM /* Only its aliases are valid to use from here on. */
-        modrm_reg = ((rex_prefix & 4) << 1) | ((modrm & 0x38) >> 3) |
-                    ((evex_encoded() && !evex.R) << 4);
-        modrm_rm  = modrm & 0x07;
-
-        /*
-         * Early operand adjustments. Only ones affecting further processing
-         * prior to the x86_decode_*() calls really belong here. That would
-         * normally be only addition/removal of SrcImm/SrcImm16, so their
-         * fetching can be taken care of by the common code below.
-         */
-        switch ( ext )
-        {
-        case ext_none:
-            switch ( b )
-            {
-            case 0xf6 ... 0xf7: /* Grp3 */
-                switch ( modrm_reg & 7 )
-                {
-                case 0 ... 1: /* test */
-                    d |= DstMem | SrcImm;
-                    break;
-                case 2: /* not */
-                case 3: /* neg */
-                    d |= DstMem;
-                    break;
-                case 4: /* mul */
-                case 5: /* imul */
-                case 6: /* div */
-                case 7: /* idiv */
-                    /*
-                     * DstEax isn't really precise for all cases; updates to
-                     * rDX get handled in an open coded manner.
-                     */
-                    d |= DstEax | SrcMem;
-                    break;
-                }
-                break;
-            }
-            break;
-
-        case ext_0f:
-            if ( evex_encoded() )
-                disp8scale = decode_disp8scale(twobyte_table[b].d8s, state);
-
-            switch ( b )
-            {
-            case 0x12: /* vmovsldup / vmovddup */
-                if ( evex.pfx == vex_f2 )
-                    disp8scale = evex.lr ? 4 + evex.lr : 3;
-                /* fall through */
-            case 0x16: /* vmovshdup */
-                if ( evex.pfx == vex_f3 )
-                    disp8scale = 4 + evex.lr;
-                break;
-
-            case 0x20: /* mov cr,reg */
-            case 0x21: /* mov dr,reg */
-            case 0x22: /* mov reg,cr */
-            case 0x23: /* mov reg,dr */
-                /*
-                 * Mov to/from cr/dr ignore the encoding of Mod, and behave as
-                 * if they were encoded as reg/reg instructions.  No further
-                 * disp/SIB bytes are fetched.
-                 */
-                modrm_mod = 3;
-                break;
-
-            case 0x78:
-            case 0x79:
-                if ( !evex.pfx )
-                    break;
-                /* vcvt{,t}ps2uqq need special casing */
-                if ( evex.pfx == vex_66 )
-                {
-                    if ( !evex.w && !evex.brs )
-                        --disp8scale;
-                    break;
-                }
-                /* vcvt{,t}s{s,d}2usi need special casing: fall through */
-            case 0x2c: /* vcvtts{s,d}2si need special casing */
-            case 0x2d: /* vcvts{s,d}2si need special casing */
-                if ( evex_encoded() )
-                    disp8scale = 2 + (evex.pfx & VEX_PREFIX_DOUBLE_MASK);
-                break;
-
-            case 0x5a: /* vcvtps2pd needs special casing */
-                if ( disp8scale && !evex.pfx && !evex.brs )
-                    --disp8scale;
-                break;
-
-            case 0x7a: /* vcvttps2qq and vcvtudq2pd need special casing */
-                if ( disp8scale && evex.pfx != vex_f2 && !evex.w && !evex.brs )
-                    --disp8scale;
-                break;
-
-            case 0x7b: /* vcvtp{s,d}2qq need special casing */
-                if ( disp8scale && evex.pfx == vex_66 )
-                    disp8scale = (evex.brs ? 2 : 3 + evex.lr) + evex.w;
-                break;
-
-            case 0x7e: /* vmovq xmm/m64,xmm needs special casing */
-                if ( disp8scale == 2 && evex.pfx == vex_f3 )
-                    disp8scale = 3;
-                break;
-
-            case 0xe6: /* vcvtdq2pd needs special casing */
-                if ( disp8scale && evex.pfx == vex_f3 && !evex.w && !evex.brs )
-                    --disp8scale;
-                break;
-            }
-            break;
-
-        case ext_0f38:
-            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
-                                        : DstReg | SrcMem;
-            if ( ext0f38_table[b].two_op )
-                d |= TwoOp;
-            if ( ext0f38_table[b].vsib )
-                d |= vSIB;
-            state->simd_size = ext0f38_table[b].simd_size;
-            if ( evex_encoded() )
-            {
-                /*
-                 * VPMOVUS* are identical to VPMOVS* Disp8-scaling-wise, but
-                 * their attributes don't match those of the vex_66 encoded
-                 * insns with the same base opcodes. Rather than adding new
-                 * columns to the table, handle this here for now.
-                 */
-                if ( evex.pfx != vex_f3 || (b & 0xf8) != 0x10 )
-                    disp8scale = decode_disp8scale(ext0f38_table[b].d8s, state);
-                else
-                {
-                    disp8scale = decode_disp8scale(ext0f38_table[b ^ 0x30].d8s,
-                                                   state);
-                    state->simd_size = simd_other;
-                }
-
-                switch ( b )
-                {
-                /* vp4dpwssd{,s} need special casing */
-                case 0x52: case 0x53:
-                /* v4f{,n}madd{p,s}s need special casing */
-                case 0x9a: case 0x9b: case 0xaa: case 0xab:
-                    if ( evex.pfx == vex_f2 )
-                    {
-                        disp8scale = 4;
-                        state->simd_size = simd_128;
-                    }
-                    break;
-                }
-            }
-            break;
-
-        case ext_0f3a:
-            /*
-             * Cannot update d here yet, as the immediate operand still
-             * needs fetching.
-             */
-            state->simd_size = ext0f3a_table[b].simd_size;
-            if ( evex_encoded() )
-                disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, state);
-            break;
-
-        case ext_8f09:
-            if ( ext8f09_table[b].two_op )
-                d |= TwoOp;
-            state->simd_size = ext8f09_table[b].simd_size;
-            break;
-
-        case ext_8f08:
-        case ext_8f0a:
-            /*
-             * Cannot update d here yet, as the immediate operand still
-             * needs fetching.
-             */
-            break;
-
-        default:
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNIMPLEMENTED;
-        }
-
-        if ( modrm_mod == 3 )
-        {
-            generate_exception_if(d & vSIB, EXC_UD);
-            modrm_rm |= ((rex_prefix & 1) << 3) |
-                        ((evex_encoded() && !evex.x) << 4);
-            ea.type = OP_REG;
-        }
-        else if ( ad_bytes == 2 )
-        {
-            /* 16-bit ModR/M decode. */
-            generate_exception_if(d & vSIB, EXC_UD);
-            ea.type = OP_MEM;
-            switch ( modrm_rm )
-            {
-            case 0:
-                ea.mem.off = state->regs->bx + state->regs->si;
-                break;
-            case 1:
-                ea.mem.off = state->regs->bx + state->regs->di;
-                break;
-            case 2:
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp + state->regs->si;
-                break;
-            case 3:
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp + state->regs->di;
-                break;
-            case 4:
-                ea.mem.off = state->regs->si;
-                break;
-            case 5:
-                ea.mem.off = state->regs->di;
-                break;
-            case 6:
-                if ( modrm_mod == 0 )
-                    break;
-                ea.mem.seg = x86_seg_ss;
-                ea.mem.off = state->regs->bp;
-                break;
-            case 7:
-                ea.mem.off = state->regs->bx;
-                break;
-            }
-            switch ( modrm_mod )
-            {
-            case 0:
-                if ( modrm_rm == 6 )
-                    ea.mem.off = insn_fetch_type(int16_t);
-                break;
-            case 1:
-                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
-                break;
-            case 2:
-                ea.mem.off += insn_fetch_type(int16_t);
-                break;
-            }
-        }
-        else
-        {
-            /* 32/64-bit ModR/M decode. */
-            ea.type = OP_MEM;
-            if ( modrm_rm == 4 )
-            {
-                uint8_t sib = insn_fetch_type(uint8_t);
-                uint8_t sib_base = (sib & 7) | ((rex_prefix << 3) & 8);
-
-                state->sib_index = ((sib >> 3) & 7) | ((rex_prefix << 2) & 8);
-                state->sib_scale = (sib >> 6) & 3;
-                if ( unlikely(d & vSIB) )
-                    state->sib_index |= (mode_64bit() && evex_encoded() &&
-                                         !evex.RX) << 4;
-                else if ( state->sib_index != 4 )
-                {
-                    ea.mem.off = *decode_gpr(state->regs, state->sib_index);
-                    ea.mem.off <<= state->sib_scale;
-                }
-                if ( (modrm_mod == 0) && ((sib_base & 7) == 5) )
-                    ea.mem.off += insn_fetch_type(int32_t);
-                else if ( sib_base == 4 )
-                {
-                    ea.mem.seg  = x86_seg_ss;
-                    ea.mem.off += state->regs->r(sp);
-                    if ( !ext && (b == 0x8f) )
-                        /* POP <rm> computes its EA post increment. */
-                        ea.mem.off += ((mode_64bit() && (op_bytes == 4))
-                                       ? 8 : op_bytes);
-                }
-                else if ( sib_base == 5 )
-                {
-                    ea.mem.seg  = x86_seg_ss;
-                    ea.mem.off += state->regs->r(bp);
-                }
-                else
-                    ea.mem.off += *decode_gpr(state->regs, sib_base);
-            }
-            else
-            {
-                generate_exception_if(d & vSIB, EXC_UD);
-                modrm_rm |= (rex_prefix & 1) << 3;
-                ea.mem.off = *decode_gpr(state->regs, modrm_rm);
-                if ( (modrm_rm == 5) && (modrm_mod != 0) )
-                    ea.mem.seg = x86_seg_ss;
-            }
-            switch ( modrm_mod )
-            {
-            case 0:
-                if ( (modrm_rm & 7) != 5 )
-                    break;
-                ea.mem.off = insn_fetch_type(int32_t);
-                pc_rel = mode_64bit();
-                break;
-            case 1:
-                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
-                break;
-            case 2:
-                ea.mem.off += insn_fetch_type(int32_t);
-                break;
-            }
-        }
-    }
-    else
-    {
-        modrm_mod = 0xff;
-        modrm_reg = modrm_rm = modrm = 0;
-    }
-
-    if ( override_seg != x86_seg_none )
-        ea.mem.seg = override_seg;
-
-    /* Fetch the immediate operand, if present. */
-    switch ( d & SrcMask )
-    {
-        unsigned int bytes;
-
-    case SrcImm:
-        if ( !(d & ByteOp) )
-        {
-            if ( mode_64bit() && !amd_like(ctxt) &&
-                 ((ext == ext_none && (b | 1) == 0xe9) /* call / jmp */ ||
-                  (ext == ext_0f && (b | 0xf) == 0x8f) /* jcc */ ) )
-                op_bytes = 4;
-            bytes = op_bytes != 8 ? op_bytes : 4;
-        }
-        else
-        {
-    case SrcImmByte:
-            bytes = 1;
-        }
-        /* NB. Immediates are sign-extended as necessary. */
-        switch ( bytes )
-        {
-        case 1: imm1 = insn_fetch_type(int8_t);  break;
-        case 2: imm1 = insn_fetch_type(int16_t); break;
-        case 4: imm1 = insn_fetch_type(int32_t); break;
-        }
-        break;
-    case SrcImm16:
-        imm1 = insn_fetch_type(uint16_t);
-        break;
-    }
-
-    ctxt->opcode = opcode;
-    state->desc = d;
-
-    switch ( ext )
-    {
-    case ext_none:
-        rc = x86_decode_onebyte(state, ctxt, ops);
-        break;
-
-    case ext_0f:
-        rc = x86_decode_twobyte(state, ctxt, ops);
-        break;
-
-    case ext_0f38:
-        rc = x86_decode_0f38(state, ctxt, ops);
-        break;
-
-    case ext_0f3a:
-        d = ext0f3a_table[b].to_mem ? DstMem | SrcReg : DstReg | SrcMem;
-        if ( ext0f3a_table[b].two_op )
-            d |= TwoOp;
-        else if ( ext0f3a_table[b].four_op && !mode_64bit() && vex.opcx )
-            imm1 &= 0x7f;
-        state->desc = d;
-        rc = x86_decode_0f3a(state, ctxt, ops);
-        break;
-
-    case ext_8f08:
-        d = DstReg | SrcMem;
-        if ( ext8f08_table[b].two_op )
-            d |= TwoOp;
-        else if ( ext8f08_table[b].four_op && !mode_64bit() )
-            imm1 &= 0x7f;
-        state->desc = d;
-        state->simd_size = ext8f08_table[b].simd_size;
-        break;
-
-    case ext_8f09:
-    case ext_8f0a:
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        return X86EMUL_UNIMPLEMENTED;
-    }
-
-    if ( ea.type == OP_MEM )
-    {
-        if ( pc_rel )
-            ea.mem.off += state->ip;
-
-        ea.mem.off = truncate_ea(ea.mem.off);
-    }
-
-    /*
-     * Simple op_bytes calculations. More complicated cases produce 0
-     * and are further handled during execute.
-     */
-    switch ( state->simd_size )
-    {
-    case simd_none:
-        /*
-         * When prefix 66 has a meaning different from operand-size override,
-         * operand size defaults to 4 and can't be overridden to 2.
-         */
-        if ( op_bytes == 2 &&
-             (ctxt->opcode & X86EMUL_OPC_PFX_MASK) == X86EMUL_OPC_66(0, 0) )
-            op_bytes = 4;
-        break;
-
-#ifndef X86EMUL_NO_SIMD
-    case simd_packed_int:
-        switch ( vex.pfx )
-        {
-        case vex_none:
-            if ( !vex.opcx )
-            {
-                op_bytes = 8;
-                break;
-            }
-            /* fall through */
-        case vex_66:
-            op_bytes = 16 << evex.lr;
-            break;
-        default:
-            op_bytes = 0;
-            break;
-        }
-        break;
-
-    case simd_single_fp:
-        if ( vex.pfx & VEX_PREFIX_DOUBLE_MASK )
-        {
-            op_bytes = 0;
-            break;
-    case simd_packed_fp:
-            if ( vex.pfx & VEX_PREFIX_SCALAR_MASK )
-            {
-                op_bytes = 0;
-                break;
-            }
-        }
-        /* fall through */
-    case simd_any_fp:
-        switch ( vex.pfx )
-        {
-        default:
-            op_bytes = 16 << evex.lr;
-            break;
-        case vex_f3:
-            generate_exception_if(evex_encoded() && evex.w, EXC_UD);
-            op_bytes = 4;
-            break;
-        case vex_f2:
-            generate_exception_if(evex_encoded() && !evex.w, EXC_UD);
-            op_bytes = 8;
-            break;
-        }
-        break;
-
-    case simd_scalar_opc:
-        op_bytes = 4 << (ctxt->opcode & 1);
-        break;
-
-    case simd_scalar_vexw:
-        op_bytes = 4 << vex.w;
-        break;
-
-    case simd_128:
-        /* The special cases here are MMX shift insns. */
-        op_bytes = vex.opcx || vex.pfx ? 16 : 8;
-        break;
-
-    case simd_256:
-        op_bytes = 32;
-        break;
-#endif /* !X86EMUL_NO_SIMD */
-
-    default:
-        op_bytes = 0;
-        break;
-    }
-
- done:
-    return rc;
-}
-
-/* No insn fetching past this point. */
-#undef insn_fetch_bytes
-#undef insn_fetch_type
-
 /* Undo DEBUG wrapper. */
 #undef x86_emulate
 
@@ -3000,7 +1281,7 @@ x86_emulate(
                            (_regs.eflags & X86_EFLAGS_VIP)),
                           EXC_GP, 0);
 
-    rc = x86_decode(&state, ctxt, ops);
+    rc = x86emul_decode(&state, ctxt, ops);
     if ( rc != X86EMUL_OKAY )
         return rc;
 
@@ -10520,45 +8801,6 @@ int x86_emulate_wrapper(
 }
 #endif
 
-struct x86_emulate_state *
-x86_decode_insn(
-    struct x86_emulate_ctxt *ctxt,
-    int (*insn_fetch)(
-        unsigned long offset, void *p_data, unsigned int bytes,
-        struct x86_emulate_ctxt *ctxt))
-{
-    static DEFINE_PER_CPU(struct x86_emulate_state, state);
-    struct x86_emulate_state *state = &this_cpu(state);
-    const struct x86_emulate_ops ops = {
-        .insn_fetch = insn_fetch,
-        .read       = x86emul_unhandleable_rw,
-    };
-    int rc;
-
-    init_context(ctxt);
-
-    rc = x86_decode(state, ctxt, &ops);
-    if ( unlikely(rc != X86EMUL_OKAY) )
-        return ERR_PTR(-rc);
-
-#if defined(__XEN__) && !defined(NDEBUG)
-    /*
-     * While we avoid memory allocation (by use of per-CPU data) above,
-     * nevertheless make sure callers properly release the state structure
-     * for forward compatibility.
-     */
-    if ( state->caller )
-    {
-        printk(XENLOG_ERR "Unreleased emulation state acquired by %ps\n",
-               state->caller);
-        dump_execution_state();
-    }
-    state->caller = __builtin_return_address(0);
-#endif
-
-    return state;
-}
-
 static inline void check_state(const struct x86_emulate_state *state)
 {
 #if defined(__XEN__) && !defined(NDEBUG)



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:01:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:01:16 +0000
Message-ID: <4d6c0008-2157-acc9-5cd7-12ed4016048d@suse.com>
Date: Wed, 15 Jun 2022 12:01:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 7/8] x86emul: move various utility functions to separate
 source files
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Many of the utility functions are needed by the hypervisor only - have one
file for this purpose. Some are also needed by the test harness (but not
the fuzzer) - have another file for those.

The code being moved is slightly adjusted in a few places, e.g. replacing
"state" by "s" (as was done for other code that has been split off).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -252,7 +252,7 @@ endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
-OBJS += x86_emulate/blk.o x86_emulate/decode.o x86_emulate/fpu.o
+OBJS += x86_emulate/blk.o x86_emulate/decode.o x86_emulate/fpu.o x86_emulate/util.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -14,7 +14,6 @@
 #include <asm/processor.h> /* current_cpu_info */
 #include <asm/xstate.h>
 #include <asm/amd.h> /* cpu_has_amd_erratum() */
-#include <asm/debugreg.h>
 
 /* Avoid namespace pollution. */
 #undef cmpxchg
@@ -26,129 +25,6 @@
 
 #include "x86_emulate/x86_emulate.c"
 
-int cf_check x86emul_read_xcr(
-    unsigned int reg, uint64_t *val, struct x86_emulate_ctxt *ctxt)
-{
-    switch ( reg )
-    {
-    case 0:
-        *val = current->arch.xcr0;
-        return X86EMUL_OKAY;
-
-    case 1:
-        if ( current->domain->arch.cpuid->xstate.xgetbv1 )
-            break;
-        /* fall through */
-    default:
-        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
-        return X86EMUL_EXCEPTION;
-    }
-
-    *val = xgetbv(reg);
-
-    return X86EMUL_OKAY;
-}
-
-/* Note: May be called with ctxt=NULL. */
-int cf_check x86emul_write_xcr(
-    unsigned int reg, uint64_t val, struct x86_emulate_ctxt *ctxt)
-{
-    switch ( reg )
-    {
-    case 0:
-        break;
-
-    default:
-    gp_fault:
-        if ( ctxt )
-            x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
-        return X86EMUL_EXCEPTION;
-    }
-
-    if ( unlikely(handle_xsetbv(reg, val) != 0) )
-        goto gp_fault;
-
-    return X86EMUL_OKAY;
-}
-
-#ifdef CONFIG_PV
-/* Called with NULL ctxt in hypercall context. */
-int cf_check x86emul_read_dr(
-    unsigned int reg, unsigned long *val, struct x86_emulate_ctxt *ctxt)
-{
-    struct vcpu *curr = current;
-
-    /* HVM support requires a bit more plumbing before it will work. */
-    ASSERT(is_pv_vcpu(curr));
-
-    switch ( reg )
-    {
-    case 0 ... 3:
-        *val = array_access_nospec(curr->arch.dr, reg);
-        break;
-
-    case 4:
-        if ( curr->arch.pv.ctrlreg[4] & X86_CR4_DE )
-            goto ud_fault;
-
-        /* Fallthrough */
-    case 6:
-        *val = curr->arch.dr6;
-        break;
-
-    case 5:
-        if ( curr->arch.pv.ctrlreg[4] & X86_CR4_DE )
-            goto ud_fault;
-
-        /* Fallthrough */
-    case 7:
-        *val = curr->arch.dr7 | curr->arch.pv.dr7_emul;
-        break;
-
-    ud_fault:
-    default:
-        if ( ctxt )
-            x86_emul_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC, ctxt);
-
-        return X86EMUL_EXCEPTION;
-    }
-
-    return X86EMUL_OKAY;
-}
-
-int cf_check x86emul_write_dr(
-    unsigned int reg, unsigned long val, struct x86_emulate_ctxt *ctxt)
-{
-    struct vcpu *curr = current;
-
-    /* HVM support requires a bit more plumbing before it will work. */
-    ASSERT(is_pv_vcpu(curr));
-
-    switch ( set_debugreg(curr, reg, val) )
-    {
-    case 0:
-        return X86EMUL_OKAY;
-
-    case -ENODEV:
-        x86_emul_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC, ctxt);
-        return X86EMUL_EXCEPTION;
-
-    default:
-        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
-        return X86EMUL_EXCEPTION;
-    }
-}
-#endif /* CONFIG_PV */
-
-int cf_check x86emul_cpuid(
-    uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *res,
-    struct x86_emulate_ctxt *ctxt)
-{
-    guest_cpuid(current, leaf, subleaf, res);
-
-    return X86EMUL_OKAY;
-}
-
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -4,3 +4,5 @@ obj-y += 0fc7.o
 obj-y += blk.o
 obj-y += decode.o
 obj-$(CONFIG_HVM) += fpu.o
+obj-y += util.o
+obj-y += util-xen.o
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -331,6 +331,13 @@ struct x86_emulate_state {
 #endif
 };
 
+static inline void check_state(const struct x86_emulate_state *s)
+{
+#if defined(__XEN__) && !defined(NDEBUG)
+    ASSERT(s->caller);
+#endif
+}
+
 typedef union {
     uint64_t mmx;
     uint64_t __attribute__ ((aligned(16))) xmm[2];
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/util.c
@@ -0,0 +1,298 @@
+/******************************************************************************
+ * util.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator utility
+ * functions.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+unsigned int x86_insn_length(const struct x86_emulate_state *s,
+                             const struct x86_emulate_ctxt *ctxt)
+{
+    check_state(s);
+
+    return s->ip - ctxt->regs->r(ip);
+}
+
+/*
+ * This function means to return 'true' for all supported insns with explicit
+ * accesses to memory.  This means also insns which don't have an explicit
+ * memory operand (like POP), but it does not mean e.g. segment selector
+ * loads, where the descriptor table access is considered an implicit one.
+ */
+bool cf_check x86_insn_is_mem_access(const struct x86_emulate_state *s,
+                                     const struct x86_emulate_ctxt *ctxt)
+{
+    if ( mode_64bit() && s->not_64bit )
+        return false;
+
+    if ( s->ea.type == OP_MEM )
+    {
+        switch ( ctxt->opcode )
+        {
+        case 0x8d: /* LEA */
+        case X86EMUL_OPC(0x0f, 0x0d): /* PREFETCH */
+        case X86EMUL_OPC(0x0f, 0x18)
+         ... X86EMUL_OPC(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_66(0x0f, 0x18)
+         ... X86EMUL_OPC_66(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_F3(0x0f, 0x18)
+         ... X86EMUL_OPC_F3(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_F2(0x0f, 0x18)
+         ... X86EMUL_OPC_F2(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC(0x0f, 0xb9): /* UD1 */
+        case X86EMUL_OPC(0x0f, 0xff): /* UD0 */
+        case X86EMUL_OPC_EVEX_66(0x0f38, 0xc6): /* V{GATH,SCATT}ERPF*D* */
+        case X86EMUL_OPC_EVEX_66(0x0f38, 0xc7): /* V{GATH,SCATT}ERPF*Q* */
+            return false;
+
+        case X86EMUL_OPC(0x0f, 0x01):
+            return (s->modrm_reg & 7) != 7; /* INVLPG */
+
+        case X86EMUL_OPC(0x0f, 0xae):
+            return (s->modrm_reg & 7) != 7; /* CLFLUSH */
+
+        case X86EMUL_OPC_66(0x0f, 0xae):
+            return (s->modrm_reg & 7) < 6; /* CLWB, CLFLUSHOPT */
+        }
+
+        return true;
+    }
+
+    switch ( ctxt->opcode )
+    {
+    case 0x06 ... 0x07:                  /* PUSH / POP %es */
+    case 0x0e:                           /* PUSH %cs */
+    case 0x16 ... 0x17:                  /* PUSH / POP %ss */
+    case 0x1e ... 0x1f:                  /* PUSH / POP %ds */
+    case 0x50 ... 0x5f:                  /* PUSH / POP reg */
+    case 0x60 ... 0x61:                  /* PUSHA / POPA */
+    case 0x68: case 0x6a:                /* PUSH imm */
+    case 0x6c ... 0x6f:                  /* INS / OUTS */
+    case 0x8f:                           /* POP r/m */
+    case 0x9a:                           /* CALL (far, direct) */
+    case 0x9c ... 0x9d:                  /* PUSHF / POPF */
+    case 0xa4 ... 0xa7:                  /* MOVS / CMPS */
+    case 0xaa ... 0xaf:                  /* STOS / LODS / SCAS */
+    case 0xc2 ... 0xc3:                  /* RET (near) */
+    case 0xc8 ... 0xc9:                  /* ENTER / LEAVE */
+    case 0xca ... 0xcb:                  /* RET (far) */
+    case 0xd7:                           /* XLAT */
+    case 0xe8:                           /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):        /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa1):        /* POP %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):        /* PUSH %gs */
+    case X86EMUL_OPC(0x0f, 0xa9):        /* POP %gs */
+    case X86EMUL_OPC(0x0f, 0xf7):        /* MASKMOVQ */
+    case X86EMUL_OPC_66(0x0f, 0xf7):     /* MASKMOVDQU */
+    case X86EMUL_OPC_VEX_66(0x0f, 0xf7): /* VMASKMOVDQU */
+        return true;
+
+    case 0xff:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
+    case X86EMUL_OPC(0x0f, 0x01):
+        /* Cover CLZERO. */
+        return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
+    }
+
+    return false;
+}
+
+/*
+ * This function means to return 'true' for all supported insns with explicit
+ * writes to memory.  This means also insns which don't have an explicit
+ * memory operand (like PUSH), but it does not mean e.g. segment selector
+ * loads, where the (possible) descriptor table write is considered an
+ * implicit access.
+ */
+bool cf_check x86_insn_is_mem_write(const struct x86_emulate_state *s,
+                                    const struct x86_emulate_ctxt *ctxt)
+{
+    if ( mode_64bit() && s->not_64bit )
+        return false;
+
+    switch ( s->desc & DstMask )
+    {
+    case DstMem:
+        /* The SrcMem check is to cover {,V}MASKMOV{Q,DQU}. */
+        return s->modrm_mod != 3 || (s->desc & SrcMask) == SrcMem;
+
+    case DstBitBase:
+    case DstImplicit:
+        break;
+
+    default:
+        switch ( ctxt->opcode )
+        {
+        case 0x63:                         /* ARPL */
+            return !mode_64bit();
+
+        case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
+        case X86EMUL_OPC_F2(0x0f38, 0xf8): /* ENQCMD */
+        case X86EMUL_OPC_F3(0x0f38, 0xf8): /* ENQCMDS */
+            return true;
+
+        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x10) ...
+             X86EMUL_OPC_EVEX_F3(0x0f38, 0x15): /* VPMOVUS* */
+        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x20) ...
+             X86EMUL_OPC_EVEX_F3(0x0f38, 0x25): /* VPMOVS* */
+        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x30) ...
+             X86EMUL_OPC_EVEX_F3(0x0f38, 0x35): /* VPMOV{D,Q,W}* */
+            return s->modrm_mod != 3;
+        }
+
+        return false;
+    }
+
+    if ( s->modrm_mod == 3 )
+    {
+        switch ( ctxt->opcode )
+        {
+        case 0xff: /* Grp5 */
+            break;
+
+        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
+            return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
+
+        default:
+            return false;
+        }
+    }
+
+    switch ( ctxt->opcode )
+    {
+    case 0x06:                           /* PUSH %es */
+    case 0x0e:                           /* PUSH %cs */
+    case 0x16:                           /* PUSH %ss */
+    case 0x1e:                           /* PUSH %ds */
+    case 0x50 ... 0x57:                  /* PUSH reg */
+    case 0x60:                           /* PUSHA */
+    case 0x68: case 0x6a:                /* PUSH imm */
+    case 0x6c: case 0x6d:                /* INS */
+    case 0x9a:                           /* CALL (far, direct) */
+    case 0x9c:                           /* PUSHF */
+    case 0xa4: case 0xa5:                /* MOVS */
+    case 0xaa: case 0xab:                /* STOS */
+    case 0xc8:                           /* ENTER */
+    case 0xe8:                           /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):        /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):        /* PUSH %gs */
+    case X86EMUL_OPC(0x0f, 0xab):        /* BTS */
+    case X86EMUL_OPC(0x0f, 0xb3):        /* BTR */
+    case X86EMUL_OPC(0x0f, 0xbb):        /* BTC */
+        return true;
+
+    case 0xd9:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* FST m32fp */
+        case 3: /* FSTP m32fp */
+        case 6: /* FNSTENV */
+        case 7: /* FNSTCW */
+            return true;
+        }
+        break;
+
+    case 0xdb:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 1: /* FISTTP m32i */
+        case 2: /* FIST m32i */
+        case 3: /* FISTP m32i */
+        case 7: /* FSTP m80fp */
+            return true;
+        }
+        break;
+
+    case 0xdd:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 1: /* FISTTP m64i */
+        case 2: /* FST m64fp */
+        case 3: /* FSTP m64fp */
+        case 6: /* FNSAVE */
+        case 7: /* FNSTSW */
+            return true;
+        }
+        break;
+
+    case 0xdf:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 1: /* FISTTP m16i */
+        case 2: /* FIST m16i */
+        case 3: /* FISTP m16i */
+        case 6: /* FBSTP */
+        case 7: /* FISTP m64i */
+            return true;
+        }
+        break;
+
+    case 0xff:
+        switch ( s->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 3: /* CALL (far, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
+    case X86EMUL_OPC(0x0f, 0x01):
+        switch ( s->modrm_reg & 7 )
+        {
+        case 0: /* SGDT */
+        case 1: /* SIDT */
+        case 4: /* SMSW */
+            return true;
+        }
+        break;
+
+    case X86EMUL_OPC(0x0f, 0xae):
+        switch ( s->modrm_reg & 7 )
+        {
+        case 0: /* FXSAVE */
+        /* case 3: STMXCSR - handled above */
+        case 4: /* XSAVE */
+        case 6: /* XSAVEOPT */
+            return true;
+        }
+        break;
+
+    case X86EMUL_OPC(0x0f, 0xba):
+        return (s->modrm_reg & 7) > 4; /* BTS / BTR / BTC */
+
+    case X86EMUL_OPC(0x0f, 0xc7):
+        switch ( s->modrm_reg & 7 )
+        {
+        case 1: /* CMPXCHG{8,16}B */
+        case 4: /* XSAVEC */
+        case 5: /* XSAVES */
+            return true;
+        }
+        break;
+    }
+
+    return false;
+}
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/util-xen.c
@@ -0,0 +1,250 @@
+/******************************************************************************
+ * util-xen.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator hypervisor-
+ * only utility functions.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#include <xen/nospec.h>
+#include <xen/sched.h>
+#include <asm/debugreg.h>
+#include <asm/xstate.h>
+
+#ifndef NDEBUG
+void x86_emulate_free_state(struct x86_emulate_state *s)
+{
+    check_state(s);
+    s->caller = NULL;
+}
+#endif
+
+unsigned int x86_insn_opsize(const struct x86_emulate_state *s)
+{
+    check_state(s);
+
+    return s->op_bytes << 3;
+}
+
+int x86_insn_modrm(const struct x86_emulate_state *s,
+                   unsigned int *rm, unsigned int *reg)
+{
+    check_state(s);
+
+    if ( unlikely(s->modrm_mod > 3) )
+    {
+        if ( rm )
+            *rm = ~0U;
+        if ( reg )
+            *reg = ~0U;
+        return -EINVAL;
+    }
+
+    if ( rm )
+        *rm = s->modrm_rm;
+    if ( reg )
+        *reg = s->modrm_reg;
+
+    return s->modrm_mod;
+}
+
+unsigned long x86_insn_operand_ea(const struct x86_emulate_state *s,
+                                  enum x86_segment *seg)
+{
+    *seg = s->ea.type == OP_MEM ? s->ea.mem.seg : x86_seg_none;
+
+    check_state(s);
+
+    return s->ea.mem.off;
+}
+
+bool cf_check x86_insn_is_portio(const struct x86_emulate_state *s,
+                                 const struct x86_emulate_ctxt *ctxt)
+{
+    switch ( ctxt->opcode )
+    {
+    case 0x6c ... 0x6f: /* INS / OUTS */
+    case 0xe4 ... 0xe7: /* IN / OUT imm8 */
+    case 0xec ... 0xef: /* IN / OUT %dx */
+        return true;
+    }
+
+    return false;
+}
+
+bool cf_check x86_insn_is_cr_access(const struct x86_emulate_state *s,
+                                    const struct x86_emulate_ctxt *ctxt)
+{
+    switch ( ctxt->opcode )
+    {
+        unsigned int ext;
+
+    case X86EMUL_OPC(0x0f, 0x01):
+        if ( x86_insn_modrm(s, NULL, &ext) >= 0
+             && (ext & 5) == 4 ) /* SMSW / LMSW */
+            return true;
+        break;
+
+    case X86EMUL_OPC(0x0f, 0x06): /* CLTS */
+    case X86EMUL_OPC(0x0f, 0x20): /* MOV from CRn */
+    case X86EMUL_OPC(0x0f, 0x22): /* MOV to CRn */
+        return true;
+    }
+
+    return false;
+}
+
+unsigned long x86_insn_immediate(const struct x86_emulate_state *s,
+                                 unsigned int nr)
+{
+    check_state(s);
+
+    switch ( nr )
+    {
+    case 0:
+        return s->imm1;
+    case 1:
+        return s->imm2;
+    }
+
+    return 0;
+}
+
+int cf_check x86emul_read_xcr(unsigned int reg, uint64_t *val,
+                              struct x86_emulate_ctxt *ctxt)
+{
+    switch ( reg )
+    {
+    case 0:
+        *val = current->arch.xcr0;
+        return X86EMUL_OKAY;
+
+    case 1:
+        if ( current->domain->arch.cpuid->xstate.xgetbv1 )
+            break;
+        /* fall through */
+    default:
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+
+    *val = xgetbv(reg);
+
+    return X86EMUL_OKAY;
+}
+
+/* Note: May be called with ctxt=NULL. */
+int cf_check x86emul_write_xcr(unsigned int reg, uint64_t val,
+                               struct x86_emulate_ctxt *ctxt)
+{
+    switch ( reg )
+    {
+    case 0:
+        break;
+
+    default:
+    gp_fault:
+        if ( ctxt )
+            x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+
+    if ( unlikely(handle_xsetbv(reg, val) != 0) )
+        goto gp_fault;
+
+    return X86EMUL_OKAY;
+}
+
+#ifdef CONFIG_PV
+
+/* Called with NULL ctxt in hypercall context. */
+int cf_check x86emul_read_dr(unsigned int reg, unsigned long *val,
+                             struct x86_emulate_ctxt *ctxt)
+{
+    struct vcpu *curr = current;
+
+    /* HVM support requires a bit more plumbing before it will work. */
+    ASSERT(is_pv_vcpu(curr));
+
+    switch ( reg )
+    {
+    case 0 ... 3:
+        *val = array_access_nospec(curr->arch.dr, reg);
+        break;
+
+    case 4:
+        if ( curr->arch.pv.ctrlreg[4] & X86_CR4_DE )
+            goto ud_fault;
+
+        /* Fallthrough */
+    case 6:
+        *val = curr->arch.dr6;
+        break;
+
+    case 5:
+        if ( curr->arch.pv.ctrlreg[4] & X86_CR4_DE )
+            goto ud_fault;
+
+        /* Fallthrough */
+    case 7:
+        *val = curr->arch.dr7 | curr->arch.pv.dr7_emul;
+        break;
+
+    ud_fault:
+    default:
+        if ( ctxt )
+            x86_emul_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC, ctxt);
+
+        return X86EMUL_EXCEPTION;
+    }
+
+    return X86EMUL_OKAY;
+}
+
+int cf_check x86emul_write_dr(unsigned int reg, unsigned long val,
+                              struct x86_emulate_ctxt *ctxt)
+{
+    struct vcpu *curr = current;
+
+    /* HVM support requires a bit more plumbing before it will work. */
+    ASSERT(is_pv_vcpu(curr));
+
+    switch ( set_debugreg(curr, reg, val) )
+    {
+    case 0:
+        return X86EMUL_OKAY;
+
+    case -ENODEV:
+        x86_emul_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC, ctxt);
+        return X86EMUL_EXCEPTION;
+
+    default:
+        x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+}
+
+#endif /* CONFIG_PV */
+
+int cf_check x86emul_cpuid(uint32_t leaf, uint32_t subleaf,
+                           struct cpuid_leaf *res,
+                           struct x86_emulate_ctxt *ctxt)
+{
+    guest_cpuid(current, leaf, subleaf, res);
+
+    return X86EMUL_OKAY;
+}
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -8435,393 +8435,3 @@ int x86_emulate_wrapper(
     return rc;
 }
 #endif
-
-static inline void check_state(const struct x86_emulate_state *state)
-{
-#if defined(__XEN__) && !defined(NDEBUG)
-    ASSERT(state->caller);
-#endif
-}
-
-#if defined(__XEN__) && !defined(NDEBUG)
-void x86_emulate_free_state(struct x86_emulate_state *state)
-{
-    check_state(state);
-    state->caller = NULL;
-}
-#endif
-
-unsigned int
-x86_insn_opsize(const struct x86_emulate_state *state)
-{
-    check_state(state);
-
-    return state->op_bytes << 3;
-}
-
-int
-x86_insn_modrm(const struct x86_emulate_state *state,
-               unsigned int *rm, unsigned int *reg)
-{
-    check_state(state);
-
-    if ( unlikely(state->modrm_mod > 3) )
-    {
-        if ( rm )
-            *rm = ~0U;
-        if ( reg )
-            *reg = ~0U;
-        return -EINVAL;
-    }
-
-    if ( rm )
-        *rm = state->modrm_rm;
-    if ( reg )
-        *reg = state->modrm_reg;
-
-    return state->modrm_mod;
-}
-
-unsigned long
-x86_insn_operand_ea(const struct x86_emulate_state *state,
-                    enum x86_segment *seg)
-{
-    *seg = state->ea.type == OP_MEM ? state->ea.mem.seg : x86_seg_none;
-
-    check_state(state);
-
-    return state->ea.mem.off;
-}
-
-/*
- * This function means to return 'true' for all supported insns with explicit
- * accesses to memory.  This means also insns which don't have an explicit
- * memory operand (like POP), but it does not mean e.g. segment selector
- * loads, where the descriptor table access is considered an implicit one.
- */
-bool cf_check
-x86_insn_is_mem_access(const struct x86_emulate_state *state,
-                       const struct x86_emulate_ctxt *ctxt)
-{
-    if ( mode_64bit() && state->not_64bit )
-        return false;
-
-    if ( state->ea.type == OP_MEM )
-    {
-        switch ( ctxt->opcode )
-        {
-        case 0x8d: /* LEA */
-        case X86EMUL_OPC(0x0f, 0x0d): /* PREFETCH */
-        case X86EMUL_OPC(0x0f, 0x18)
-         ... X86EMUL_OPC(0x0f, 0x1f): /* NOP space */
-        case X86EMUL_OPC_66(0x0f, 0x18)
-         ... X86EMUL_OPC_66(0x0f, 0x1f): /* NOP space */
-        case X86EMUL_OPC_F3(0x0f, 0x18)
-         ... X86EMUL_OPC_F3(0x0f, 0x1f): /* NOP space */
-        case X86EMUL_OPC_F2(0x0f, 0x18)
-         ... X86EMUL_OPC_F2(0x0f, 0x1f): /* NOP space */
-        case X86EMUL_OPC(0x0f, 0xb9): /* UD1 */
-        case X86EMUL_OPC(0x0f, 0xff): /* UD0 */
-        case X86EMUL_OPC_EVEX_66(0x0f38, 0xc6): /* V{GATH,SCATT}ERPF*D* */
-        case X86EMUL_OPC_EVEX_66(0x0f38, 0xc7): /* V{GATH,SCATT}ERPF*Q* */
-            return false;
-
-        case X86EMUL_OPC(0x0f, 0x01):
-            return (state->modrm_reg & 7) != 7; /* INVLPG */
-
-        case X86EMUL_OPC(0x0f, 0xae):
-            return (state->modrm_reg & 7) != 7; /* CLFLUSH */
-
-        case X86EMUL_OPC_66(0x0f, 0xae):
-            return (state->modrm_reg & 7) < 6; /* CLWB, CLFLUSHOPT */
-        }
-
-        return true;
-    }
-
-    switch ( ctxt->opcode )
-    {
-    case 0x06 ... 0x07: /* PUSH / POP %es */
-    case 0x0e:          /* PUSH %cs */
-    case 0x16 ... 0x17: /* PUSH / POP %ss */
-    case 0x1e ... 0x1f: /* PUSH / POP %ds */
-    case 0x50 ... 0x5f: /* PUSH / POP reg */
-    case 0x60 ... 0x61: /* PUSHA / POPA */
-    case 0x68: case 0x6a: /* PUSH imm */
-    case 0x6c ... 0x6f: /* INS / OUTS */
-    case 0x8f:          /* POP r/m */
-    case 0x9a:          /* CALL (far, direct) */
-    case 0x9c ... 0x9d: /* PUSHF / POPF */
-    case 0xa4 ... 0xa7: /* MOVS / CMPS */
-    case 0xaa ... 0xaf: /* STOS / LODS / SCAS */
-    case 0xc2 ... 0xc3: /* RET (near) */
-    case 0xc8 ... 0xc9: /* ENTER / LEAVE */
-    case 0xca ... 0xcb: /* RET (far) */
-    case 0xd7:          /* XLAT */
-    case 0xe8:          /* CALL (near, direct) */
-    case X86EMUL_OPC(0x0f, 0xa0):         /* PUSH %fs */
-    case X86EMUL_OPC(0x0f, 0xa1):         /* POP %fs */
-    case X86EMUL_OPC(0x0f, 0xa8):         /* PUSH %gs */
-    case X86EMUL_OPC(0x0f, 0xa9):         /* POP %gs */
-    CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* MASKMOV{Q,DQU} */
-                                          /* VMASKMOVDQU */
-        return true;
-
-    case 0xff:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 2: /* CALL (near, indirect) */
-        case 6: /* PUSH r/m */
-            return true;
-        }
-        break;
-
-    case X86EMUL_OPC(0x0f, 0x01):
-        /* Cover CLZERO. */
-        return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
-    }
-
-    return false;
-}
-
-/*
- * This function means to return 'true' for all supported insns with explicit
- * writes to memory.  This means also insns which don't have an explicit
- * memory operand (like PUSH), but it does not mean e.g. segment selector
- * loads, where the (possible) descriptor table write is considered an
- * implicit access.
- */
-bool cf_check
-x86_insn_is_mem_write(const struct x86_emulate_state *state,
-                      const struct x86_emulate_ctxt *ctxt)
-{
-    if ( mode_64bit() && state->not_64bit )
-        return false;
-
-    switch ( state->desc & DstMask )
-    {
-    case DstMem:
-        /* The SrcMem check is to cover {,V}MASKMOV{Q,DQU}. */
-        return state->modrm_mod != 3 || (state->desc & SrcMask) == SrcMem;
-
-    case DstBitBase:
-    case DstImplicit:
-        break;
-
-    default:
-        switch ( ctxt->opcode )
-        {
-        case 0x63:                         /* ARPL */
-            return !mode_64bit();
-
-        case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
-        case X86EMUL_OPC_F2(0x0f38, 0xf8): /* ENQCMD */
-        case X86EMUL_OPC_F3(0x0f38, 0xf8): /* ENQCMDS */
-            return true;
-
-        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x10) ...
-             X86EMUL_OPC_EVEX_F3(0x0f38, 0x15): /* VPMOVUS* */
-        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x20) ...
-             X86EMUL_OPC_EVEX_F3(0x0f38, 0x25): /* VPMOVS* */
-        case X86EMUL_OPC_EVEX_F3(0x0f38, 0x30) ...
-             X86EMUL_OPC_EVEX_F3(0x0f38, 0x35): /* VPMOV{D,Q,W}* */
-            return state->modrm_mod != 3;
-        }
-
-        return false;
-    }
-
-    if ( state->modrm_mod == 3 )
-    {
-        switch ( ctxt->opcode )
-        {
-        case 0xff: /* Grp5 */
-            break;
-
-        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
-            return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
-
-        default:
-            return false;
-        }
-    }
-
-    switch ( ctxt->opcode )
-    {
-    case 0x06:                           /* PUSH %es */
-    case 0x0e:                           /* PUSH %cs */
-    case 0x16:                           /* PUSH %ss */
-    case 0x1e:                           /* PUSH %ds */
-    case 0x50 ... 0x57:                  /* PUSH reg */
-    case 0x60:                           /* PUSHA */
-    case 0x68: case 0x6a:                /* PUSH imm */
-    case 0x6c: case 0x6d:                /* INS */
-    case 0x9a:                           /* CALL (far, direct) */
-    case 0x9c:                           /* PUSHF */
-    case 0xa4: case 0xa5:                /* MOVS */
-    case 0xaa: case 0xab:                /* STOS */
-    case 0xc8:                           /* ENTER */
-    case 0xe8:                           /* CALL (near, direct) */
-    case X86EMUL_OPC(0x0f, 0xa0):        /* PUSH %fs */
-    case X86EMUL_OPC(0x0f, 0xa8):        /* PUSH %gs */
-    case X86EMUL_OPC(0x0f, 0xab):        /* BTS */
-    case X86EMUL_OPC(0x0f, 0xb3):        /* BTR */
-    case X86EMUL_OPC(0x0f, 0xbb):        /* BTC */
-        return true;
-
-    case 0xd9:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 2: /* FST m32fp */
-        case 3: /* FSTP m32fp */
-        case 6: /* FNSTENV */
-        case 7: /* FNSTCW */
-            return true;
-        }
-        break;
-
-    case 0xdb:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 1: /* FISTTP m32i */
-        case 2: /* FIST m32i */
-        case 3: /* FISTP m32i */
-        case 7: /* FSTP m80fp */
-            return true;
-        }
-        break;
-
-    case 0xdd:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 1: /* FISTTP m64i */
-        case 2: /* FST m64fp */
-        case 3: /* FSTP m64fp */
-        case 6: /* FNSAVE */
-        case 7: /* FNSTSW */
-            return true;
-        }
-        break;
-
-    case 0xdf:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 1: /* FISTTP m16i */
-        case 2: /* FIST m16i */
-        case 3: /* FISTP m16i */
-        case 6: /* FBSTP */
-        case 7: /* FISTP m64i */
-            return true;
-        }
-        break;
-
-    case 0xff:
-        switch ( state->modrm_reg & 7 )
-        {
-        case 2: /* CALL (near, indirect) */
-        case 3: /* CALL (far, indirect) */
-        case 6: /* PUSH r/m */
-            return true;
-        }
-        break;
-
-    case X86EMUL_OPC(0x0f, 0x01):
-        switch ( state->modrm_reg & 7 )
-        {
-        case 0: /* SGDT */
-        case 1: /* SIDT */
-        case 4: /* SMSW */
-            return true;
-        }
-        break;
-
-    case X86EMUL_OPC(0x0f, 0xae):
-        switch ( state->modrm_reg & 7 )
-        {
-        case 0: /* FXSAVE */
-        /* case 3: STMXCSR - handled above */
-        case 4: /* XSAVE */
-        case 6: /* XSAVEOPT */
-            return true;
-        }
-        break;
-
-    case X86EMUL_OPC(0x0f, 0xba):
-        return (state->modrm_reg & 7) > 4; /* BTS / BTR / BTC */
-
-    case X86EMUL_OPC(0x0f, 0xc7):
-        switch ( state->modrm_reg & 7 )
-        {
-        case 1: /* CMPXCHG{8,16}B */
-        case 4: /* XSAVEC */
-        case 5: /* XSAVES */
-            return true;
-        }
-        break;
-    }
-
-    return false;
-}
-
-bool cf_check
-x86_insn_is_portio(const struct x86_emulate_state *state,
-                   const struct x86_emulate_ctxt *ctxt)
-{
-    switch ( ctxt->opcode )
-    {
-    case 0x6c ... 0x6f: /* INS / OUTS */
-    case 0xe4 ... 0xe7: /* IN / OUT imm8 */
-    case 0xec ... 0xef: /* IN / OUT %dx */
-        return true;
-    }
-
-    return false;
-}
-
-bool cf_check
-x86_insn_is_cr_access(const struct x86_emulate_state *state,
-                      const struct x86_emulate_ctxt *ctxt)
-{
-    switch ( ctxt->opcode )
-    {
-        unsigned int ext;
-
-    case X86EMUL_OPC(0x0f, 0x01):
-        if ( x86_insn_modrm(state, NULL, &ext) >= 0
-             && (ext & 5) == 4 ) /* SMSW / LMSW */
-            return true;
-        break;
-
-    case X86EMUL_OPC(0x0f, 0x06): /* CLTS */
-    case X86EMUL_OPC(0x0f, 0x20): /* MOV from CRn */
-    case X86EMUL_OPC(0x0f, 0x22): /* MOV to CRn */
-        return true;
-    }
-
-    return false;
-}
-
-unsigned long
-x86_insn_immediate(const struct x86_emulate_state *state, unsigned int nr)
-{
-    check_state(state);
-
-    switch ( nr )
-    {
-    case 0:
-        return state->imm1;
-    case 1:
-        return state->imm2;
-    }
-
-    return 0;
-}
-
-unsigned int
-x86_insn_length(const struct x86_emulate_state *state,
-                const struct x86_emulate_ctxt *ctxt)
-{
-    check_state(state);
-
-    return state->ip - ctxt->regs->r(ip);
-}



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:02:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:02:05 +0000
Message-ID: <693131a0-4815-36c8-6fb8-cd28bb7b0a08@suse.com>
Date: Wed, 15 Jun 2022 12:01:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 8/8] x86emul: build with -Os
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Emulator code is large, and involving it in guest operations cannot be
expected to be fast anyway. Help binary size as well as, at least for
release builds, compile time, by building all of the involved code with
size optimization, regardless of whether it is a debug or a release
build.

The size savings observed in a release build (with AMX and KeyLocker
code in place on top of what's upstream) are above 48k of .text, with
gcc 11.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -89,6 +89,7 @@ hostprogs-y += efi/mkreloc
 # Allows usercopy.c to include itself
 $(obj)/usercopy.o: CFLAGS-y += -iquote .
 
+$(obj)/x86_emulate.o: CFLAGS-y += -Os
 ifneq ($(CONFIG_HVM),y)
 $(obj)/x86_emulate.o: CFLAGS-y += -Wno-unused-label
 endif
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,3 +1,5 @@
+CFLAGS-y += -Os
+
 obj-y += 0f01.o
 obj-y += 0fae.o
 obj-y += 0fc7.o



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:06:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:06:48 +0000
Message-ID: <8d608c16-b325-d849-1e10-ba40c88d0f5b@arm.com>
Date: Wed, 15 Jun 2022 12:06:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
Content-Language: en-US
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2022 11:13, Bertrand Marquis wrote:
> cppcheck's MISRA addon can be used to check for non-compliance with
> some of the MISRA standard rules.
> 
> Add a CPPCHECK_MISRA variable that can be set to "y" on the make
> command line to generate a cppcheck report including cppcheck's MISRA
> checks.
> 
> When MISRA checking is enabled, a file with a text description suitable
> for cppcheck's MISRA addon is generated out of the Xen documentation
> file which lists the rules followed by Xen (docs/misra/rules.rst).
> 
> By default MISRA checking is turned off.
> 
> While adding the cppcheck-misra files to gitignore, also fix the
> missing / for the htmlreport gitignore entry.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
The validation was successful so:
Reviewed-by: Michal Orzel <michal.orzel@arm.com>
Tested-by: Michal Orzel <michal.orzel@arm.com>

Cheers


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:08:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:08:49 +0000
Message-ID: <e909ff6c-9339-3de6-1eef-24cb65eb7e74@suse.com>
Date: Wed, 15 Jun 2022 12:00:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 6/8] x86emul: move x86_emul_blk() to separate source file
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
In-Reply-To: <7f5287ad-8442-6c53-d513-f9a8345c4857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The function is already non-trivial and is expected to grow further.

The moved code is slightly adjusted in a few places, e.g. EXC_* gets
replaced by X86_EXC_* (so that the EXC_* definitions don't need to move
as well; we want them to be phased out anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -252,7 +252,7 @@ endif # 32-bit override
 
 OBJS := x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
-OBJS += x86_emulate/decode.o x86_emulate/fpu.o
+OBJS += x86_emulate/blk.o x86_emulate/decode.o x86_emulate/fpu.o
 
 $(TARGET): $(OBJS)
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -85,6 +85,8 @@ bool emul_test_init(void);
 void emul_save_fpu_state(void);
 void emul_restore_fpu_state(void);
 
+struct x86_fxsr *get_fpu_save_area(void);
+
 /*
  * In order to reasonably use the above, wrap library calls we use and which we
  * think might access any of the FPU state into wrappers saving/restoring state
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -24,8 +24,6 @@
 #define cpu_has_amd_erratum(nr) \
         cpu_has_amd_erratum(&current_cpu_data, AMD_ERRATUM_##nr)
 
-#define FXSAVE_AREA current->arch.fpu_ctxt
-
 #include "x86_emulate/x86_emulate.c"
 
 int cf_check x86emul_read_xcr(
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,5 +1,6 @@
 obj-y += 0f01.o
 obj-y += 0fae.o
 obj-y += 0fc7.o
+obj-y += blk.o
 obj-y += decode.o
 obj-$(CONFIG_HVM) += fpu.o
--- /dev/null
+++ b/xen/arch/x86/x86_emulate/blk.c
@@ -0,0 +1,396 @@
+/******************************************************************************
+ * blk.c - helper for x86_emulate.c
+ *
+ * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "private.h"
+
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+# ifdef __XEN__
+#  include <asm/xstate.h>
+#  define FXSAVE_AREA current->arch.fpu_ctxt
+# else
+#  define FXSAVE_AREA get_fpu_save_area()
+# endif
+#endif
+
+int x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *s,
+    struct x86_emulate_ctxt *ctxt)
+{
+    int rc = X86EMUL_OKAY;
+
+    switch ( s->blk )
+    {
+        bool zf;
+#ifndef X86EMUL_NO_FPU
+        struct {
+            struct x87_env32 env;
+            struct {
+               uint8_t bytes[10];
+            } freg[8];
+        } fpstate;
+#endif
+
+        /*
+         * Throughout this switch(), memory clobbers are used to compensate
+         * that other operands may not properly express the (full) memory
+         * ranges covered.
+         */
+    case blk_enqcmd:
+        ASSERT(bytes == 64);
+        if ( ((unsigned long)ptr & 0x3f) )
+        {
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        *eflags &= ~EFLAGS_MASK;
+#ifdef HAVE_AS_ENQCMD
+        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
+#else
+        /* enqcmds (%rsi), %rdi */
+        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
+              ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : "S" (data), "D" (ptr) : "memory" );
+#endif
+        if ( zf )
+            *eflags |= X86_EFLAGS_ZF;
+        break;
+
+#ifndef X86EMUL_NO_FPU
+
+    case blk_fld:
+        ASSERT(!data);
+
+        /* s->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FLDENV */
+        case sizeof(fpstate):     /* 32-bit FRSTOR */
+            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
+            if ( !s->rex_prefix )
+            {
+                /* Convert 32-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = fpstate.env.mode.real.fip_lo +
+                                   (fpstate.env.mode.real.fip_hi << 16);
+                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
+                                   (fpstate.env.mode.real.fdp_hi << 16);
+                unsigned int fop = fpstate.env.mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FLDENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FRSTOR */
+        {
+            const struct x87_env16 *env = ptr;
+
+            fpstate.env.fcw = env->fcw;
+            fpstate.env.fsw = env->fsw;
+            fpstate.env.ftw = env->ftw;
+
+            if ( s->rex_prefix )
+            {
+                /* Convert 16-bit prot to 32-bit prot format. */
+                fpstate.env.mode.prot.fip = env->mode.prot.fip;
+                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
+                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
+                fpstate.env.mode.prot.fds = env->mode.prot.fds;
+                fpstate.env.mode.prot.fop = 0; /* unknown */
+            }
+            else
+            {
+                /* Convert 16-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = env->mode.real.fip_lo +
+                                   (env->mode.real.fip_hi << 16);
+                unsigned int fdp = env->mode.real.fdp_lo +
+                                   (env->mode.real.fdp_hi << 16);
+                unsigned int fop = env->mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(*env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(*env);
+            break;
+        }
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+        {
+            memcpy(fpstate.freg, ptr, sizeof(fpstate.freg));
+            asm volatile ( "frstor %0" :: "m" (fpstate) );
+        }
+        else
+            asm volatile ( "fldenv %0" :: "m" (fpstate.env) );
+        break;
+
+    case blk_fst:
+        ASSERT(!data);
+
+        /* Don't chance consuming uninitialized data. */
+        memset(&fpstate, 0, sizeof(fpstate));
+        if ( bytes > sizeof(fpstate.env) )
+            asm ( "fnsave %0" : "+m" (fpstate) );
+        else
+            asm ( "fnstenv %0" : "+m" (fpstate.env) );
+
+        /* s->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FNSTENV */
+        case sizeof(fpstate):     /* 32-bit FNSAVE */
+            if ( !s->rex_prefix )
+            {
+                /* Convert 32-bit prot to 32-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                unsigned int fop = fpstate.env.mode.prot.fop;
+
+                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
+                fpstate.env.mode.real.fip_lo = fip;
+                fpstate.env.mode.real.fip_hi = fip >> 16;
+                fpstate.env.mode.real.fop = fop;
+                fpstate.env.mode.real.fdp_lo = fdp;
+                fpstate.env.mode.real.fdp_hi = fdp >> 16;
+            }
+            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FNSTENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FNSAVE */
+            if ( s->rex_prefix )
+            {
+                /* Convert 32-bit prot to 16-bit prot format. */
+                struct x87_env16 *env = ptr;
+
+                env->fcw = fpstate.env.fcw;
+                env->fsw = fpstate.env.fsw;
+                env->ftw = fpstate.env.ftw;
+                env->mode.prot.fip = fpstate.env.mode.prot.fip;
+                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
+                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
+                env->mode.prot.fds = fpstate.env.mode.prot.fds;
+            }
+            else
+            {
+                /* Convert 32-bit prot to 16-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                struct x87_env16 env = {
+                    .fcw = fpstate.env.fcw,
+                    .fsw = fpstate.env.fsw,
+                    .ftw = fpstate.env.ftw,
+                    .mode.real.fip_lo = fip,
+                    .mode.real.fip_hi = fip >> 16,
+                    .mode.real.fop = fpstate.env.mode.prot.fop,
+                    .mode.real.fdp_lo = fdp,
+                    .mode.real.fdp_hi = fdp >> 16
+                };
+
+                memcpy(ptr, &env, sizeof(env));
+            }
+            if ( bytes == sizeof(struct x87_env16) )
+                ptr = NULL;
+            else
+                ptr += sizeof(struct x87_env16);
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));
+        break;
+
+#endif /* X86EMUL_NO_FPU */
+
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+
+    case blk_fxrstor:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(s->op_bytes <= bytes);
+
+        if ( s->op_bytes < sizeof(*fxsr) )
+        {
+            if ( s->rex_prefix & REX_W )
+            {
+                /*
+                 * The only way to force fxsaveq on a wide range of gas
+                 * versions. On older versions the rex64 prefix works only if
+                 * we force an addressing mode that doesn't require extended
+                 * registers.
+                 */
+                asm volatile ( ".byte 0x48; fxsave (%1)"
+                               : "=m" (*fxsr) : "R" (fxsr) );
+            }
+            else
+                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+        }
+
+        /*
+         * Don't risk the reserved or available ranges containing any data
+         * FXRSTOR may actually consume in some way: copy only the defined
+         * portion, and zero the rest.
+         */
+        memcpy(fxsr, ptr, min(s->op_bytes,
+                              (unsigned int)offsetof(struct x86_fxsr, rsvd)));
+        memset(fxsr->rsvd, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, rsvd));
+
+        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, X86_EXC_GP, 0);
+
+        if ( s->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxrstor (%1)"
+                           :: "m" (*fxsr), "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
+        break;
+    }
+
+    case blk_fxsave:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(s->op_bytes <= bytes);
+
+        if ( s->op_bytes < sizeof(*fxsr) )
+            /* Don't chance consuming uninitialized data. */
+            memset(fxsr, 0, s->op_bytes);
+        else
+            fxsr = ptr;
+
+        if ( s->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxsave (%1)"
+                           : "=m" (*fxsr) : "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+
+        if ( fxsr != ptr ) /* i.e. s->op_bytes < sizeof(*fxsr) */
+            memcpy(ptr, fxsr, s->op_bytes);
+        break;
+    }
+
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
+    case blk_movdir:
+        switch ( bytes )
+        {
+#ifdef __x86_64__
+        case sizeof(uint32_t):
+# ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" );
+# else
+            /* movdiri %esi, (%rdi) */
+            asm ( ".byte 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" );
+# endif
+            break;
+#endif
+
+        case sizeof(unsigned long):
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" );
+#else
+            /* movdiri %rsi, (%rdi) */
+            asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        case 64:
+            if ( ((unsigned long)ptr & 0x3f) )
+            {
+                ASSERT_UNREACHABLE();
+                return X86EMUL_UNHANDLEABLE;
+            }
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" );
+#else
+            /* movdir64b (%rsi), %rdi */
+            asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e"
+                  :: "S" (data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+ done: __maybe_unused;
+    return rc;
+}
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -8365,371 +8365,6 @@ int x86_emul_rmw(
     return X86EMUL_OKAY;
 }
 
-int x86_emul_blk(
-    void *ptr,
-    void *data,
-    unsigned int bytes,
-    uint32_t *eflags,
-    struct x86_emulate_state *state,
-    struct x86_emulate_ctxt *ctxt)
-{
-    int rc = X86EMUL_OKAY;
-
-    switch ( state->blk )
-    {
-        bool zf;
-#ifndef X86EMUL_NO_FPU
-        struct {
-            struct x87_env32 env;
-            struct {
-               uint8_t bytes[10];
-            } freg[8];
-        } fpstate;
-#endif
-
-        /*
-         * Throughout this switch(), memory clobbers are used to compensate
-         * that other operands may not properly express the (full) memory
-         * ranges covered.
-         */
-    case blk_enqcmd:
-        ASSERT(bytes == 64);
-        if ( ((unsigned long)ptr & 0x3f) )
-        {
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNHANDLEABLE;
-        }
-        *eflags &= ~EFLAGS_MASK;
-#ifdef HAVE_AS_ENQCMD
-        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %[zf]")
-              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
-              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
-#else
-        /* enqcmds (%rsi), %rdi */
-        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
-              ASM_FLAG_OUT(, "; setz %[zf]")
-              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
-              : "S" (data), "D" (ptr) : "memory" );
-#endif
-        if ( zf )
-            *eflags |= X86_EFLAGS_ZF;
-        break;
-
-#ifndef X86EMUL_NO_FPU
-
-    case blk_fld:
-        ASSERT(!data);
-
-        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
-        switch ( bytes )
-        {
-        case sizeof(fpstate.env): /* 32-bit FLDENV */
-        case sizeof(fpstate):     /* 32-bit FRSTOR */
-            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
-            if ( !state->rex_prefix )
-            {
-                /* Convert 32-bit real/vm86 to 32-bit prot format. */
-                unsigned int fip = fpstate.env.mode.real.fip_lo +
-                                   (fpstate.env.mode.real.fip_hi << 16);
-                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
-                                   (fpstate.env.mode.real.fdp_hi << 16);
-                unsigned int fop = fpstate.env.mode.real.fop;
-
-                fpstate.env.mode.prot.fip = fip & 0xf;
-                fpstate.env.mode.prot.fcs = fip >> 4;
-                fpstate.env.mode.prot.fop = fop;
-                fpstate.env.mode.prot.fdp = fdp & 0xf;
-                fpstate.env.mode.prot.fds = fdp >> 4;
-            }
-
-            if ( bytes == sizeof(fpstate.env) )
-                ptr = NULL;
-            else
-                ptr += sizeof(fpstate.env);
-            break;
-
-        case sizeof(struct x87_env16):                        /* 16-bit FLDENV */
-        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FRSTOR */
-        {
-            const struct x87_env16 *env = ptr;
-
-            fpstate.env.fcw = env->fcw;
-            fpstate.env.fsw = env->fsw;
-            fpstate.env.ftw = env->ftw;
-
-            if ( state->rex_prefix )
-            {
-                /* Convert 16-bit prot to 32-bit prot format. */
-                fpstate.env.mode.prot.fip = env->mode.prot.fip;
-                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
-                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
-                fpstate.env.mode.prot.fds = env->mode.prot.fds;
-                fpstate.env.mode.prot.fop = 0; /* unknown */
-            }
-            else
-            {
-                /* Convert 16-bit real/vm86 to 32-bit prot format. */
-                unsigned int fip = env->mode.real.fip_lo +
-                                   (env->mode.real.fip_hi << 16);
-                unsigned int fdp = env->mode.real.fdp_lo +
-                                   (env->mode.real.fdp_hi << 16);
-                unsigned int fop = env->mode.real.fop;
-
-                fpstate.env.mode.prot.fip = fip & 0xf;
-                fpstate.env.mode.prot.fcs = fip >> 4;
-                fpstate.env.mode.prot.fop = fop;
-                fpstate.env.mode.prot.fdp = fdp & 0xf;
-                fpstate.env.mode.prot.fds = fdp >> 4;
-            }
-
-            if ( bytes == sizeof(*env) )
-                ptr = NULL;
-            else
-                ptr += sizeof(*env);
-            break;
-        }
-
-        default:
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNHANDLEABLE;
-        }
-
-        if ( ptr )
-        {
-            memcpy(fpstate.freg, ptr, sizeof(fpstate.freg));
-            asm volatile ( "frstor %0" :: "m" (fpstate) );
-        }
-        else
-            asm volatile ( "fldenv %0" :: "m" (fpstate.env) );
-        break;
-
-    case blk_fst:
-        ASSERT(!data);
-
-        /* Don't chance consuming uninitialized data. */
-        memset(&fpstate, 0, sizeof(fpstate));
-        if ( bytes > sizeof(fpstate.env) )
-            asm ( "fnsave %0" : "+m" (fpstate) );
-        else
-            asm ( "fnstenv %0" : "+m" (fpstate.env) );
-
-        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
-        switch ( bytes )
-        {
-        case sizeof(fpstate.env): /* 32-bit FNSTENV */
-        case sizeof(fpstate):     /* 32-bit FNSAVE */
-            if ( !state->rex_prefix )
-            {
-                /* Convert 32-bit prot to 32-bit real/vm86 format. */
-                unsigned int fip = fpstate.env.mode.prot.fip +
-                                   (fpstate.env.mode.prot.fcs << 4);
-                unsigned int fdp = fpstate.env.mode.prot.fdp +
-                                   (fpstate.env.mode.prot.fds << 4);
-                unsigned int fop = fpstate.env.mode.prot.fop;
-
-                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
-                fpstate.env.mode.real.fip_lo = fip;
-                fpstate.env.mode.real.fip_hi = fip >> 16;
-                fpstate.env.mode.real.fop = fop;
-                fpstate.env.mode.real.fdp_lo = fdp;
-                fpstate.env.mode.real.fdp_hi = fdp >> 16;
-            }
-            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
-            if ( bytes == sizeof(fpstate.env) )
-                ptr = NULL;
-            else
-                ptr += sizeof(fpstate.env);
-            break;
-
-        case sizeof(struct x87_env16):                        /* 16-bit FNSTENV */
-        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FNSAVE */
-            if ( state->rex_prefix )
-            {
-                /* Convert 32-bit prot to 16-bit prot format. */
-                struct x87_env16 *env = ptr;
-
-                env->fcw = fpstate.env.fcw;
-                env->fsw = fpstate.env.fsw;
-                env->ftw = fpstate.env.ftw;
-                env->mode.prot.fip = fpstate.env.mode.prot.fip;
-                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
-                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
-                env->mode.prot.fds = fpstate.env.mode.prot.fds;
-            }
-            else
-            {
-                /* Convert 32-bit prot to 16-bit real/vm86 format. */
-                unsigned int fip = fpstate.env.mode.prot.fip +
-                                   (fpstate.env.mode.prot.fcs << 4);
-                unsigned int fdp = fpstate.env.mode.prot.fdp +
-                                   (fpstate.env.mode.prot.fds << 4);
-                struct x87_env16 env = {
-                    .fcw = fpstate.env.fcw,
-                    .fsw = fpstate.env.fsw,
-                    .ftw = fpstate.env.ftw,
-                    .mode.real.fip_lo = fip,
-                    .mode.real.fip_hi = fip >> 16,
-                    .mode.real.fop = fpstate.env.mode.prot.fop,
-                    .mode.real.fdp_lo = fdp,
-                    .mode.real.fdp_hi = fdp >> 16
-                };
-
-                memcpy(ptr, &env, sizeof(env));
-            }
-            if ( bytes == sizeof(struct x87_env16) )
-                ptr = NULL;
-            else
-                ptr += sizeof(struct x87_env16);
-            break;
-
-        default:
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNHANDLEABLE;
-        }
-
-        if ( ptr )
-            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));
-        break;
-
-#endif /* X86EMUL_NO_FPU */
-
-#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
-    !defined(X86EMUL_NO_SIMD)
-
-    case blk_fxrstor:
-    {
-        struct x86_fxsr *fxsr = FXSAVE_AREA;
-
-        ASSERT(!data);
-        ASSERT(bytes == sizeof(*fxsr));
-        ASSERT(state->op_bytes <= bytes);
-
-        if ( state->op_bytes < sizeof(*fxsr) )
-        {
-            if ( state->rex_prefix & REX_W )
-            {
-                /*
-                 * The only way to force fxsaveq on a wide range of gas
-                 * versions. On older versions the rex64 prefix works only if
-                 * we force an addressing mode that doesn't require extended
-                 * registers.
-                 */
-                asm volatile ( ".byte 0x48; fxsave (%1)"
-                               : "=m" (*fxsr) : "R" (fxsr) );
-            }
-            else
-                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
-        }
-
-        /*
-         * Don't chance the reserved or available ranges to contain any
-         * data FXRSTOR may actually consume in some way: Copy only the
-         * defined portion, and zero the rest.
-         */
-        memcpy(fxsr, ptr, min(state->op_bytes,
-                              (unsigned int)offsetof(struct x86_fxsr, rsvd)));
-        memset(fxsr->rsvd, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, rsvd));
-
-        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
-
-        if ( state->rex_prefix & REX_W )
-        {
-            /* See above for why operand/constraints are this way. */
-            asm volatile ( ".byte 0x48; fxrstor (%1)"
-                           :: "m" (*fxsr), "R" (fxsr) );
-        }
-        else
-            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
-        break;
-    }
-
-    case blk_fxsave:
-    {
-        struct x86_fxsr *fxsr = FXSAVE_AREA;
-
-        ASSERT(!data);
-        ASSERT(bytes == sizeof(*fxsr));
-        ASSERT(state->op_bytes <= bytes);
-
-        if ( state->op_bytes < sizeof(*fxsr) )
-            /* Don't chance consuming uninitialized data. */
-            memset(fxsr, 0, state->op_bytes);
-        else
-            fxsr = ptr;
-
-        if ( state->rex_prefix & REX_W )
-        {
-            /* See above for why operand/constraints are this way. */
-            asm volatile ( ".byte 0x48; fxsave (%1)"
-                           : "=m" (*fxsr) : "R" (fxsr) );
-        }
-        else
-            asm volatile ( "fxsave %0" : "=m" (*fxsr) );
-
-        if ( fxsr != ptr ) /* i.e. state->op_bytes < sizeof(*fxsr) */
-            memcpy(ptr, fxsr, state->op_bytes);
-        break;
-    }
-
-#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
-
-    case blk_movdir:
-        switch ( bytes )
-        {
-#ifdef __x86_64__
-        case sizeof(uint32_t):
-# ifdef HAVE_AS_MOVDIR
-            asm ( "movdiri %0, (%1)"
-                  :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" );
-# else
-            /* movdiri %esi, (%rdi) */
-            asm ( ".byte 0x0f, 0x38, 0xf9, 0x37"
-                  :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" );
-# endif
-            break;
-#endif
-
-        case sizeof(unsigned long):
-#ifdef HAVE_AS_MOVDIR
-            asm ( "movdiri %0, (%1)"
-                  :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" );
-#else
-            /* movdiri %rsi, (%rdi) */
-            asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37"
-                  :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" );
-#endif
-            break;
-
-        case 64:
-            if ( ((unsigned long)ptr & 0x3f) )
-            {
-                ASSERT_UNREACHABLE();
-                return X86EMUL_UNHANDLEABLE;
-            }
-#ifdef HAVE_AS_MOVDIR
-            asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" );
-#else
-            /* movdir64b (%rsi), %rdi */
-            asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e"
-                  :: "S" (data), "D" (ptr) : "memory" );
-#endif
-            break;
-
-        default:
-            ASSERT_UNREACHABLE();
-            return X86EMUL_UNHANDLEABLE;
-        }
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        return X86EMUL_UNHANDLEABLE;
-    }
-
- done:
-    return rc;
-}
-
 static void __init __maybe_unused build_assertions(void)
 {
     /* Check the values against SReg3 encoding in opcode/ModRM bytes. */
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -35,7 +35,10 @@ static bool use_xsave;
  * (When debugging the emulator, care needs to be taken when inserting
  * printf() or alike function calls into regions using this.)
  */
-#define FXSAVE_AREA ((struct x86_fxsr *)fpu_save_area)
+struct x86_fxsr *get_fpu_save_area(void)
+{
+    return (void *)fpu_save_area;
+}
 
 void emul_save_fpu_state(void)
 {
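As an aside for reviewers: the blk_fld/blk_fst cases above convert between the real/vm86 and protected-mode x87 environment layouts by treating the 20-bit linear FIP/FDP as a 4-bit offset plus a paragraph-style selector. A minimal standalone sketch of that conversion (names here are illustrative, not part of the patch):

```c
#include <stdint.h>

/* Protected-mode style instruction/data pointer, as in x87_env32.mode.prot. */
struct x87_prot_ptr {
    uint16_t off;   /* low 4 bits of the 20-bit linear address */
    uint16_t sel;   /* linear address >> 4 (paragraph number)  */
};

/* real/vm86 -> protected layout, mirroring the blk_fld conversion. */
static inline struct x87_prot_ptr x87_real_to_prot(uint16_t lo, uint16_t hi)
{
    uint32_t linear = lo + ((uint32_t)hi << 16);

    return (struct x87_prot_ptr){ .off = linear & 0xf, .sel = linear >> 4 };
}

/* protected -> real/vm86 linear address, mirroring the blk_fst conversion. */
static inline uint32_t x87_prot_to_real(struct x87_prot_ptr p)
{
    return p.off + ((uint32_t)p.sel << 4);
}
```

The two directions round-trip for any 20-bit linear address, which is why the patch can freely normalize to the protected layout on load and denormalize on store.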

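Similarly, the `& 0x3f` checks in blk_enqcmd and the 64-byte blk_movdir case both enforce the same architectural requirement: ENQCMD(S) and MOVDIR64B destinations must be 64-byte aligned. A trivial sketch of that predicate (hypothetical helper name, not in the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * ENQCMD(S)/MOVDIR64B write a full 64-byte line; the destination must sit
 * on a 64-byte boundary, i.e. the low six address bits must all be clear.
 */
static inline bool dst_64byte_aligned(uintptr_t addr)
{
    return !(addr & 0x3f);
}
```

In the emulator an unaligned pointer here indicates a decode-layer bug, hence ASSERT_UNREACHABLE() rather than injecting a fault.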


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:26:47 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	"xen-announce@lists.xenproject.org" <xen-announce@lists.xenproject.org>
Subject: XenSummit 2022 Call for Papers open
Date: Wed, 15 Jun 2022 10:25:34 +0000
Message-ID: <DEA6E62E-4135-4315-9F55-7263544FACD9@citrix.com>
 =?us-ascii?Q?AopOTN23VN/1Rj1nFV/MU4b15LCWm51YqCKXXBU8jbqozWIfegqBCmYRjudI?=
 =?us-ascii?Q?TnL7Q0BK710vGl14s+VdVywhnvgwtm/W8UZwTRaKv7Gh7pCKyldGfETQ0bZH?=
 =?us-ascii?Q?u83JJP7JxAS89btJsUXL0Fr8XRAhkfePYTc6IaS/1/EdMeWdJVdgHY2tyBhw?=
 =?us-ascii?Q?b96oPEpOjj2zgBNdr1w86aohDH1OCJPW4yvy7YUE/5cZtBnbKx9mvrfchMXK?=
 =?us-ascii?Q?lwvbR62VtIVgmfAXE7gRHNYGGrB5vUS2adxUz03Vu24+rfVytugUNj//cfZ/?=
 =?us-ascii?Q?hQD8PJF7u70MuySSEKmR6OvEcaEBkwbbBO1oeRElbO9bUU5rxv3hleho4nXo?=
 =?us-ascii?Q?z8urZZ3EZEvA2+H55yCNrZIc60TAD/c3lMcD11v1yGILVTxwJvV+AiGqkcSg?=
 =?us-ascii?Q?y/uP95Vh++AOVR9J6OvuNB5kboLVl8gy2jpLFU+mjrosh0AJpDz8iQF4KBzA?=
 =?us-ascii?Q?LDRLlxcYvNpIy7yzOUDItRumzbx43YFEUvSry5M7yT65IULxi7B1+bmNLSCk?=
 =?us-ascii?Q?7ZXfXxhhhtSars8yrVEQIL14XuYgdy1hYBRT3aliNndpYTsj0xuNAkJreGv+?=
 =?us-ascii?Q?j2hrY9VG5v1nQmJLbN7wIt7AwOqbsY3GVAme21cncAJPlgzvp3K03siiVOZC?=
 =?us-ascii?Q?RaVKl2iQuoR8x2IBGrZzeZpLM7nQ0xDGP0sI1QLbf4+MsoQLxzn8qoS0zI+Q?=
 =?us-ascii?Q?p9kOl9cdrLR1jE7M2som9FoQhYGku0Rx3hGY4HBoUCHcnZiIFhTg1NZ6/Y+B?=
 =?us-ascii?Q?X0VU88RKOmlyGfrGPV2NruYtBO+eHAlR4GxuAb0vlX4nhcG6AfsNOgZxZmkD?=
 =?us-ascii?Q?4Nuv5XcenclNaROHTXOK/b6wBWWPomDil92VP0ja6+ZTRrPe5lNWG1F+zNS9?=
 =?us-ascii?Q?e6vsy7LIMDHWAHvK4QSXZ1OtDtl07nzfJEcOvnOm+l30DzBJ6cHiVUyaCXJs?=
 =?us-ascii?Q?QZadgbeAd/iSp3I8KBPKG9u4ZaechbT73ReFpCITz0XRfD+BydikemePqDrN?=
 =?us-ascii?Q?sB37aoQJMZKX2fQOaOoGWovcuXn2T07pQk7Xiu9nGiAyq/RAno0Hsiw8aslr?=
 =?us-ascii?Q?DGFK5u/XU15Sdm30WnaxrOf+8SUAiNgYBqkFmDwfxQnH9QF1zcNJmS/+TtEt?=
 =?us-ascii?Q?zVjAJEKo0kc+LqupDfH7qJj94rN/e+iCD0P3/pYdLpxICyIW1IVmlaw6A9eN?=
 =?us-ascii?Q?/jIPHPD4v7+8/ubZ2GFZPCC5JTr3QyGvVuvyR/QMoimg2dPcxpBpzcmTijmT?=
 =?us-ascii?Q?T6PIr7MiDJswggumBcqLdxnA3tvuz/R/R8Rz66HxTsIKqXGitwtK1oPYufjy?=
 =?us-ascii?Q?aG1B6nqKG35sXZHHlchC1X42hH51rlrYOftLYWPyh4vX283SXaZFrpLSSEUn?=
 =?us-ascii?Q?JvJ0TRm31xI0qwPeaPj5smdr7mMSPBq3RzMsn9WpDxbehKSvqucUSoMQnpE8?=
 =?us-ascii?Q?rFk3OL063obME6lo5925dG6MjaNpq6de0oJjgrlrpHskR1v7+/bm2UtgHra9?=
 =?us-ascii?Q?j8HOPqGacpAdnJN/55L5dWlLqhJ4PrpAcf3Bl/umw7F/ZJfo7fyGPk+LLXLi?=
 =?us-ascii?Q?F9EUH+wi6fpn2d/OBEQB/l/UGkYZXylIVCfJvAhbo+uQtw0dN38twCtJPsEV?=
 =?us-ascii?Q?T8zDoA2ZxuzBn7cj2+gKFmga9jx8t2o=3D?=
Content-Type: multipart/signed;
	boundary="Apple-Mail=_6264F2C8-D1F4-4668-8195-CD097291CFEB";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: CO1PR03MB5665.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08adf12a-59f6-45c2-cca1-08da4eb961e0
X-MS-Exchange-CrossTenant-originalarrivaltime: 15 Jun 2022 10:25:34.4228
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: t91w9syCf3nqxCF1zdj/AQML8XOdszMio52kqzxAV/UySz/L2Ax10QD8LEeydAmWwqewFjidPjoFrto2TaDe5pKpKyq+oD4G20tZJFUiFYU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5215

--Apple-Mail=_6264F2C8-D1F4-4668-8195-CD097291CFEB
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Hello everyone,

The Xen Project Design and Developer Summit 2022 is officially announced in Cambridge, UK, for 19-2 September:

https://events.linuxfoundation.org/xen-summit/

The Call for Papers is open as well; please submit talks here:

https://events.linuxfoundation.org/xen-summit/program/cfp/

The deadline is currently listed as 8 July; I expect to extend this, but if you can get your talks in by then it would be great.

Sponsorships are also open:

https://events.linuxfoundation.org/xen-summit/sponsor/

Registrations will open at the end of June.  The summit will be a combined virtual / physical summit; due to space limitations, registration for physical attendance must be approved, with priority going to core developers.

Hope to see you there!

 -George

--Apple-Mail=_6264F2C8-D1F4-4668-8195-CD097291CFEB
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKps50ACgkQshXHp8eE
G+3P7wf6Azx2E/NML0nM0q79D4KzAQughZP/DSYL6TEMGVBZ8mIGKjjtig/OzvZA
1lMiOPLVRmF7lQxyWbVde7svMOb0ir6D35lj6AlxC0lTwRufNpB3r0PqMLjUdrFv
bvDxQByXX3YpV6WhvXRBlJxODpiRyqJIfsugKHBgJtKKIgSwpBeu9Bj33CWZAzdC
Kjluc6d6o4Uw96vi5X34lsmugzrHFBc+iRkOD/46mVjWXr9rDCHcOS6VoKZOk7fZ
lZwZFfXFRyhDqvuRsLNU70q3YvARJTuJpY6gCbTt/XCoDfNRid3KM2h472r7eUt6
DngN4gdoPfLDWMLZIkSLBZMO96kKXw==
=bvTw
-----END PGP SIGNATURE-----

--Apple-Mail=_6264F2C8-D1F4-4668-8195-CD097291CFEB--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:26:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:26:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349894.576076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QEP-0001tY-Ek; Wed, 15 Jun 2022 10:26:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349894.576076; Wed, 15 Jun 2022 10:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QEP-0001tR-BO; Wed, 15 Jun 2022 10:26:37 +0000
Received: by outflank-mailman (input) for mailman id 349894;
 Wed, 15 Jun 2022 10:26:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QEO-0001tL-AS
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:26:36 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20603.outbound.protection.outlook.com
 [2a01:111:f400:7d00::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a170bfd6-ec95-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:26:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB5113.eurprd04.prod.outlook.com (2603:10a6:10:14::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Wed, 15 Jun
 2022 10:26:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:26:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a170bfd6-ec95-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vv+1SRHVMByEIBWYs3A9c2JkqyXIhiZM5s7A8Qqqix4e/Bg6eYGzvYDBnrOb6jQ8CSLPALK3lBxFnJhRFAta/veuX/LUfrnk5WGVOLRGG/RqKkkIZLkpGq9GE+BBRRFzFNn3WX3JIqMSHY1sovlPn6NBAg3ZIao9KL9bZF0Vm8flMiN1QuekBalyd/BSfn7P6nzo4JIX01GVJdPmm7Bl3mhwlXTQskzvpnfQGmKJUjgWrMnrK1TcUWPJJnguRr6NIX/KIbzTzmed/V+UOI4QeDVAO5CgJmKt/Qs5Oc0dQAwCNrrstwpkezxoR34qdFZ3kJjQje0IqRHXtD6TOHpuFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=n1Aznw/NcwLD9NHgQfEIoybRd1YRsR4d6RXDnm7PaIc=;
 b=PENjVeV/jrL+TpjChkckQIWFY6b2iXdJ35KkHCjxNv8dTy8R58v8Nf1hQwDgyP0uh8oI+ATCa94V2s6aNlI5/sbLUW8wihlQCb5ChCKAxB9mOjaItayHuT1k4IPRMfjfTo8RdU/JdrWCVW7JnTjpGMXNDku7Aqv7i/fIi4kn6NriWxXTI9CqUVYgQBVDw9POkDQ4POwlWHLDLGVuXoFFsYpcIXhFQIcdpcYnandjyEcJcrwCOIEoJv+IT/EJVCwbtn/Oi083lXe8EQjih8dLYwe6UCG8TDJ46jFZybLZfKQmZ8C9utkI6jAW/fTivdabvw+IVWppWD9MaSdwJUJPPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=n1Aznw/NcwLD9NHgQfEIoybRd1YRsR4d6RXDnm7PaIc=;
 b=fBp4Q99+ssHLaqDf+Ox5xf9Gh8fPKwhGhWynpjRPwQP5+Vcbxw0bdxeW0kdF5sNYBlyVfsiXTvqDQCOtTYPleISGCRoMQLcvMsPG61sr65XjdFUY3c+bBgUMJGlNeQn4mGgOE1xNOJcCL71MAmLWcGNsrLTz8ekCZZcncdX1j6ctLNO7UKagQRdzDTScuZzAyJF5resetvAGlaUqIl1FkT+h5MECsTrtqge1t9ew7CHSfzjA09QexMpzzCQ9vN60Q07jfPWhToZTmj8oWkTfN2drQhIo3AvubyJXpLJxNU0TG1KcXYhJx9WglDhWOcxpmit9a+0scXiYRKZUj8IVOA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Date: Wed, 15 Jun 2022 12:26:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 00/11] x86: support AVX512-FP16
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR04CA0164.eurprd04.prod.outlook.com
 (2603:10a6:20b:530::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 04185020-ebce-4ea0-f192-08da4eb98407
X-MS-TrafficTypeDiagnostic: DB7PR04MB5113:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR04MB5113046B5ABE75972ACC5D98B3AD9@DB7PR04MB5113.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CIJfiuW8trmbb2SZFF4SqqKOoK1Jayg2ArcZlgQ/GYORctGc2lvWot3Ojosr1ikBb1Goq9NKXbHs0jH3aHUtbMACopS92NlSXSIm7iMWW1YmEwlaqto/a3btsLGNf+ArUf5hTk9Ptczw0ZZJoHgrbkNUgbSgB/cP45MGNEEoXg8s9Nq/DPfIr2tBZT3Et8BA9OjC1KOdmZfmCCxuT5OtHr1h/uXYhlKuflEEElJVSnuk/Ro6dYlRdkl46I44qtS8kscXFqXhm5twiYXpKQrCBAc/X1KcimlElvVRhuEmQUgsvAM3cgMlO3lLil22WrE76bFUS8lcAAYowd5KU7oA0VTIqqb1jEzbYOquijEsuTq5MSrsFz3atJE9wx5+uvFp4b8hbhmgH1iMgbjKY6BxIXA0Gbx09+c2qRSnefOxPRPq7iAOX8u9FcSMc9Xzkz0efLBXzMn9mtroQycAWAy8nb9clk+S8MGI+vOg11ZMQjwGS0Lp5ks/Ed+wEPKLBozxpRCXPOKhgZBqJXOHn7ckICUl3TT5LKRKr8xnCqs3korP60NG7oYrwGutDMcNBjAyho4Bcfu2N3ONo0B9S69JJuq9lVsJh2AzFP861OqxkgXHANnZwkDiX79ovl6JlF1GC7LI58ItZ9OfUSLllNS1jDeS3wMyAflDuWO/qNYSkyA9gppmNV8mTstUwpNE2h73QwXkW8Z7CZ8HuIHBJUnt8Yh+Np9g2aIUFcIA8ZBmNEg=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(316002)(36756003)(508600001)(38100700002)(6512007)(6916009)(8676002)(31686004)(4326008)(6486002)(26005)(2616005)(66946007)(66556008)(66476007)(2906002)(31696002)(5660300002)(186003)(86362001)(6666004)(6506007)(54906003)(8936002)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bktKV1VzY2NUL0sza0RjWTV1Nzc1NjAzVVV0VVoyOWxZWHV0YjhZYTN4VXZZ?=
 =?utf-8?B?UTlmbjFIeW1iWDVoYnVXN1FvbnF3WTZZKzMreHBIMXkyL1NqR1lrcGtYREJz?=
 =?utf-8?B?emNqa0FNUGxVT1dFYlJmYlVVTXFTWWM2MnVwbXp1WVFIamFaU1dEV1FJZ2pW?=
 =?utf-8?B?U21aUFJ4ZmNaRExRTUE2ZEQ3VHdCUEpJc21PbmdIak9kNVpHYko5WmVOdEhP?=
 =?utf-8?B?L0ZBQkp5OTlvczVPcFVLSm1GcG5YakZISDc1K004MVRPQVd0ZE5iSVRvdU5t?=
 =?utf-8?B?aHJ6cVQ2S0RFdkxLdHNUS2t3QWpVdytiOEE2eHArU2ZmNk54RnUvVDJvNTBY?=
 =?utf-8?B?Mk04M2tyQXFvdzlVTU1QQ3JXVTduNDNza2w0eU9DWVpPWmVkeGNUYTBvRUZ4?=
 =?utf-8?B?QjBBZXdJN1pLUldnVGpiMGNxNVZmZlhINFB0aXc2TEdlMmhoVS83T3VUL24r?=
 =?utf-8?B?SXBLWDdKS1grL2hDRE80R2dGYlR2aHI5Yk9NUDZHTjd5Q0FSbTRMeGhncmla?=
 =?utf-8?B?Q2JnUzU2VE0xdlRUWXdUZGw4QkZrRUFjZWYvNGJ1Z29NTE9HT0NKdThEV3ZW?=
 =?utf-8?B?bzBUVy9aaEJOZ0hMTVgxMlRLTk11SWdXQkY3TG1mdnRPVzltajF0N1FTQkZS?=
 =?utf-8?B?ZFI1TUE4NFNKZktRSGRUSEtQbGNJem1yY0Exbm1vWTJCOGFORkpTRGZFdDgw?=
 =?utf-8?B?VDhkakIyVVB3UWs0QUl2MUczMmY4SXl0S3JoQ3VyV0ZDMEorQXYrU1FLQWFa?=
 =?utf-8?B?QTkvbEdxdkE1ZmFDbzJZMDZ5bnd1RVhlYXFudnRudERHRlBkdmo0TkZTZm1u?=
 =?utf-8?B?U2NuTjdXWmNEcnl4T1NlVW10UUhnaDYzQ3lWYlhabFdSeHhORnVqY2Q3dmp0?=
 =?utf-8?B?UFpCcTZabk9BN2RDT2hUNEpXQkZPZUMwVG1peFpDb0pXVkhLSFE1SGVQSWQ0?=
 =?utf-8?B?a3dpd3FVdWNueUVCRjdoNndEdFpRVGNvbFdrWUx1SU9TSGxycSsrN3d3cHc3?=
 =?utf-8?B?RUhvSWdZeVVmYVRCcGZNUWluaG90bTZEZllUcTliRUFlUWNKUGNCcFR2Z0JQ?=
 =?utf-8?B?aDR1Y0xLZGJXM1NDdkU4ODFvRE1LSGFJdlVtcVdiMzM3SFVQYk1jYWhnNlRt?=
 =?utf-8?B?Wk9VaDdiRlVKSklMWDF5dnkzOTZxUHEwM3k5eWRJeGFrU2hFYmJoak5sZWl4?=
 =?utf-8?B?UGwzV2ZPKzRrU0pBUFRhRlVZa0pDUENWUkhlZ0xOcVRWTFQraUMzR1NoS3Nh?=
 =?utf-8?B?anN0dkIrcTZFRTk4a1pLY3hoNDROWkVtbW5DNUE4dk9CdzZUeU5xblNkWXdy?=
 =?utf-8?B?WmNZVnNuSEg5V3ZxME1KRWFEY0k3d1VJeVNnaFdFK1ZjSUpKODg3dlRVVUVI?=
 =?utf-8?B?TWtNRGd2TVp0OHhOSnBjRGExeE8vOHd4N0IxN1prRUJRRVhCTG5WYkVPSkpr?=
 =?utf-8?B?Z1hDK01YNlRKQ3kwdDVySC9nSGQ2L3Y0cGRtVC9USDFRNzJhcGpwUW9tS2ti?=
 =?utf-8?B?M1RrcTBNbkZidFppWDZJVW9zT09XMG43eUdyOVk3dGZsMHVwWVZJTjhwOEJr?=
 =?utf-8?B?N3VkMWlGeVlJaldrRmJsMGQxSnNLMXY3V1ZBbDhnTDdFaXRzUk92enArSDhF?=
 =?utf-8?B?MnBnSytnUEplbW1GaENMRGNHeVl4VVB4VjhxZllqcWRrZytlUUZRNUp2VE45?=
 =?utf-8?B?YVBZQTBuVDRaSU1rVmtWUTlVS29FRU5tNlFUR2Ria2pmaEZDN1I1SnVoQkIz?=
 =?utf-8?B?M3kydVI5UitnTkRVeDRkdUgxT3RtVTFYT09JcVJNMWR6dXE0Y1pUWmZXOEhI?=
 =?utf-8?B?aHE2RFVFWEJsTmlmc2xmVS8rTzB4cUNpdUZLanEzdERPMnJtVjZpNjVHLzcx?=
 =?utf-8?B?bG5ZUWlIYzgwYzlXN01Rc2drUnp0V1N5UkNvM1paNFE0U3lQeGZpcHNueUtW?=
 =?utf-8?B?aGpkbDdNMVRXeXNScTBKVmF5K2Z3WnRuaHJSQ2xwUTVJcmx0MWhyNHFYWStG?=
 =?utf-8?B?cUMwSDcweTZMZ1FPdVFqZldZU0FjWE1JR3BpOWp5ZE94OUppOVYyQnJ4SFNG?=
 =?utf-8?B?MkZONlFubnBtUjdOSVBNMjRGVEZWR3E4dGZQbkZ4R1pRRlhWdi95dXIyRnJi?=
 =?utf-8?B?SnA5US9wNHk3Rm5SWVMxeG9HUkREbWV4ZzhkUFNib2lMa0FNMXNGLzl2M1dR?=
 =?utf-8?B?dXdiMU5ub0FPYjVWTWNjUU1hcWlkVUhUc2VLVGovMjlhaXkwdmw4ZmNzRjFC?=
 =?utf-8?B?VVNzMVhmRVV4Q3UwMlN5NWp0K0pVTXMxbHo1NFREeHM0aWNPYXMrbXdmVith?=
 =?utf-8?B?RDNDcHdUQ3pNY0NTWWMzRDBoMS82UjY5cEpzVnE4SjZCc3dYQXowUT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 04185020-ebce-4ea0-f192-08da4eb98407
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 10:26:31.8782
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vE1e9UEwRHY08pXMDWqv019LNeefjSAHprxbhRn8+Xfnvh7cfJk6J4ldZouFq6GBbKm+H4jDUzp+ApfgWSmK1Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB5113

While I (quite obviously) don't have any suitable hardware, Intel's
SDE allows testing the implementation. And since there's no new
state (registers) associated with this ISA extension, this should
suffice for integration.

01: CPUID: AVX512-FP16 definitions
02: handle AVX512-FP16 insns encoded in 0f3a opcode map
03: handle AVX512-FP16 Map5 arithmetic insns
04: handle AVX512-FP16 move insns
05: handle AVX512-FP16 fma-like insns
06: handle AVX512-FP16 Map6 misc insns
07: handle AVX512-FP16 complex multiplication insns
08: handle AVX512-FP16 conversion to/from (packed) int16 insns
09: handle AVX512-FP16 floating point conversion insns
10: handle AVX512-FP16 conversion to/from (packed) int{32,64} insns
11: AVX512-FP16 testing

While I've re-based this ahead of the also-pending AMX series (and,
obviously, ahead of the not-yet-submitted KeyLocker one), this
series is intentionally built on top of "x86emul: a few small steps
towards disintegration", which has already been pending for far too
long.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:27:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349909.576098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QFO-0002wl-42; Wed, 15 Jun 2022 10:27:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349909.576098; Wed, 15 Jun 2022 10:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QFO-0002we-0c; Wed, 15 Jun 2022 10:27:38 +0000
Received: by outflank-mailman (input) for mailman id 349909;
 Wed, 15 Jun 2022 10:27:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QFM-0002mz-Ed
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:27:36 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur03on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe09::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c5df4b41-ec95-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:27:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB5113.eurprd04.prod.outlook.com (2603:10a6:10:14::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Wed, 15 Jun
 2022 10:27:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:27:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5df4b41-ec95-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IlA/jvSaQ8VNS/FZXyxJMY6+SLIRHz2cZ+YJdpzHH86yvIK5MoOhEMFAxjcUydKIWUx7aFXzRCSbawIWW2VuGHdMdSMxzCGy8IA9k1M+GiQSKuqmGH8TYNN++ffX9c2pZwZDa6K8wFqWitqMWtr7DBHUhFqfrZIPFVBZnug2HQryTDIGVScfeqT4apMuoxm0UoXIRb0EyeHDYXvgr6UfcgdWhbtLXkc+/QGBUfjBPuv+sPsRdLvAUv2jxJU8lfZeMaGn/068etCAUby35OwVFm2Db3zZ+CyQgr4rtvUPvyFb1qFDeZNj5PJELrfxFQUKcuDYlC79+3hHDnmn2Rc0lg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AhN1mnRm3C+eBZ3ZOnL1xkngzkdjxzdw7LWEKU7f9gk=;
 b=iR5+piM3XauZ4tYzNJtB9+SdnfLP8JItA70U+hn7bVySDA3F0Mgso7Fz/RlQ/8392a5lZntCENjbDbs9X9ZGPLZD5Y/9MI4tLqhDPOo2OQ5K59bNOrol+VtB3yrRc0bErAjUPKkH77BC4hxSORLo7hQQHdnPy4csMASvfd73XiXmAc0D1n0jpT/CiEMfJpcOuLzvil3RAaujJHFYj80zf6IR7ECitGmwbvw4x+SG4y7KbwBnaglbnB1IbJa1S8JVdp7fdl2Y9UJ1GRwCpatDxz/hNKSO2xbpKOhL5g5sWWTxDmBoWFP+RBOnsAUkzVaCvKb/KhhVT2nO3z2loBelgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AhN1mnRm3C+eBZ3ZOnL1xkngzkdjxzdw7LWEKU7f9gk=;
 b=OVM2p9s4vA/KHto0bZpl2oZteOOiJwRge3OaRkpXS1RfLFc4MYjSkZSxKfhhHKab2bUd+WhfgOCOkARD/phGdoqcVHWbgqFmzfwi77YwGCyM2Cm5vxUOG2BXqfr7f/exFZMfbWhicqd/ECzZQ0w0NYrvKLqeZGj7Wf6fMkB7G3roQM512zxdMYLnSU7wZS/gT6/9QQiGTDodmG9CdSvhDTYwRTAJUAHitilylaYV6hqDXrQVw3fnSCOPMt7jAqDh6LpE02SBIOY2VBUt4rJ0gcHJ4Sgb+KJYavH9nV6B2FWyx4rEF4SCYwUVGpzgxde6HeARaDZ/i8mU6Tgkjzth4Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5dc84e7b-e3d6-92cf-8ffb-c4bc0a3e6c74@suse.com>
Date: Wed, 15 Jun 2022 12:27:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 01/11] x86/CPUID: AVX512-FP16 definitions
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR04CA0047.eurprd04.prod.outlook.com
 (2603:10a6:20b:312::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 36073096-f08b-4e27-b831-08da4eb9a8e8
X-MS-TrafficTypeDiagnostic: DB7PR04MB5113:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR04MB5113AE74701AECF17A9F0576B3AD9@DB7PR04MB5113.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BBQf00mfNF94/VmAiokwJ2CRAuqgiFeBt6VOPvC6ls0f3bDqnK8WtXziNuCL3/j2SfRn9pI6g0FRGIUc26unbAtkKug7Vbdz/jTcRVnB0X7OyPq9CmRsCZmaPHtM3vYx+bTuPHa4QJGSwTRwEgTG1Uz3TRbAv8OTJyN2/+VksLpsmOO+oBjggGborUEw4piExjh6UE7ykvG5VQMttqthRKtiMdJnEKCNJr61g76WGGkIgE8vgmViqTP4HVs96bMEPHIyh6Nw/TJI3SfngUd5OOnv8NCzhjZEUOqRzCt1PFJqSIrb0NJXq+ND+xIQjHyjRQuBznLz0Ya4D9MVj3sG7BViEExUrFqlCZ4gvXhhvgGaywL8mZRanz/r9EypzvxuFr6GKmpj30/7/hqKYNHpFMSOAYeso2imQUfwqMSil4B6HdquoxUGpeqDBElpheqbxK9ucvNJKdrrsx1Y9rbD6lXj0p8UqACFC/bCDMQib5RQGw3xozHQqoXFjAmdHx59VJh7bmFi+BuTu6tOo4J7o62UAYyDLBSLgnCvJYJUqmvMJvG8X+eI6uv+/DwEUNHjp7fFRfBCdbMQ79g6afdC9P0ZLHpB3D/hFVk7abYdaZ2PTVR7OE9Gmbu/rS8/hC5mLOILp3nC7EC+IP/sc+agPbpj5PNblIJBa6z5XMHaexj8JqqJsT2uYLax5ay5YQpvIZQYFLFbwudc85XQ1Y1pydIFd4ZVtlRQXAwijFd/sbI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(316002)(36756003)(508600001)(38100700002)(6512007)(6916009)(8676002)(31686004)(4326008)(6486002)(26005)(2616005)(66946007)(66556008)(66476007)(2906002)(31696002)(5660300002)(186003)(86362001)(6506007)(54906003)(8936002)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?N3I4TUVFNGJqa2RlZGJmTWFaNGxZNHlCNUJEZ2NNeEhzM3RlK2dPZzR0MElE?=
 =?utf-8?B?MUFJWW5kbDhvQWRkUm1nbWgzNHVaTGlnL2JtSEthWHAzRm82bU1CV25rdzk3?=
 =?utf-8?B?a2RmUzg0NHkyTGlSV2pRVEZOY2pyd2JJTDk4am9mTUpJUGxzMDQzTW9lMEha?=
 =?utf-8?B?WGdyUy9CWnVjR1FueG9XSUIzUXRZeHl5NnQ2VWZ6cm9nZE1yQ1pGRnZWbWRT?=
 =?utf-8?B?MzFHNk5yZFpIN2dDVU56Ykx6djhwQk9vWStzbzJDanllSi94RXFWM3BrRERY?=
 =?utf-8?B?dE9FN0wrZkVBNjdOazBzSnIwVlpkeW9OVFFlSVBYSFpHUGYwY2VwTDdnbjdU?=
 =?utf-8?B?ZmpRVU9FZmNrVmt4UXIrejZzQU1BbDJWMUVUVkR2UTJtd211V0VrdmQ3czha?=
 =?utf-8?B?TUY1NXViemdDQ3J5dzlLV0p6NGl3czVURVg3TVBscWRFZ0xsUmlEN1VDSlFy?=
 =?utf-8?B?dUwySFd1UUlRYU5HTnJLSGJSWUVJZXhnUE4xSTN1VzcrRmc3VWcyZytab3oy?=
 =?utf-8?B?QVFRTkFZL3lMcVFNSVRON3k1cmsrVWtLNFhvZk9VcEU3ZFpOY3JnM1hNcXFV?=
 =?utf-8?B?VUhRbjViNWxDOHFyREVIcEdjQkpWQU14UXBQY1h6UVFGVVVIQ2tJeHNGSTg4?=
 =?utf-8?B?ZjJqaERSNUhaUWd4dnVueHF3MDR5cm0zK0hwdXJpS3ZWdkVDaG5GRm9ZRGJ4?=
 =?utf-8?B?dERSbEVLU2ZjWWRWRkNla3ZpcXN5ZzBlTVJJWTFtdk5oZUFkN2RuM25Ec2JP?=
 =?utf-8?B?Nm14ZjEyVWdWbWZxSzZYQTkzblNiWnlBOHlqYVBOaXlSRnRiK1lsd0hFYzdW?=
 =?utf-8?B?bnZ2R2ZiOElqVEdQWnA3VWJzaDYrNVBBTEg2M0FzYkczcVByY2ZyQ250OEVO?=
 =?utf-8?B?YW4rc3ZBV0ZoRnpRaE1zNlBtaXpjZENtcENIc1o1ZzY3RnRONG5MT3Nlak83?=
 =?utf-8?B?elJmZ2xKajdjRmxSMTdZOHF0M05qSXF5aFppZkNUejduTlFQQ280MDJMTHJE?=
 =?utf-8?B?SzYzVklLcWNsd1FPaVhlbm1CbU9wQWRXa295WE5lTmRiYXhseTRvZTJSZ2E2?=
 =?utf-8?B?MjlmbnVDUTdwMy9SeXFCblpyUnlPcm9vanpNVzNOU2lpaGlQMDhFRjAxWjJJ?=
 =?utf-8?B?d1EvT1I4UGZCeFVzSmdlKzJZY0kzL0NEc3ZraDA1UEZPdVVHd01jMG45QTg0?=
 =?utf-8?B?UTlGc0RmWmZtK25FeU4yVFhHbzJ5UVJkMzN3VDVnTVQzZkpmK2gvT2pXVUdz?=
 =?utf-8?B?RFUyanpBVGV3MUF4QUZCdUtWUXc5Ukc4L2Rzd1hOOU1BaldVNmQxSGVCbzlv?=
 =?utf-8?B?VnNOd3p6VHdUQjdxaW1qYk9wdHcvOHphTU1pRVBsRHNsd3ZaVzl3MS9DV3pF?=
 =?utf-8?B?VDllZFJUalJuUitVazVhbFMyc2UxbngrL1RPSHVXWkZJZmhranFZYS8yTjgy?=
 =?utf-8?B?MDk4dDY2V0dCWW50ZWMveVg0NEJxVHhlM25hMlNmbU9sUVRCblo4a043Vkxr?=
 =?utf-8?B?K2d3b1JUV3JWU3JmR3A4ZG10L0tFQy9JUjN0a253eGlsakZ1WmNyL2FXbk81?=
 =?utf-8?B?TEkyQmdxNUJwSHlzMm0vY2JvYXozOWFvSDZRMlNXUFRZRnF6RDVkRHd0VCt1?=
 =?utf-8?B?M0RuSGxJTHFyMFZrOXVmNkRGN0xNSkVVK3FNTXBER0tOK1M2Ylg3N0RxK2k1?=
 =?utf-8?B?YlF6cm5ScVN1QzBqUVl1WTlvYmd2Y2lUSWFQTFVtV1RwRnBBMGF5RTlOdEh0?=
 =?utf-8?B?QTBVUlh6UXNmazVKTUpKWEtFLytrVFh2UmpTT21GeThTNDVtSGkzcXR2eXY5?=
 =?utf-8?B?UUZXaWVsT2dlVk1SZW90aTFUMTdEdGlDUE5GYktZWVJxZTVZSTlxMytoeUlM?=
 =?utf-8?B?Z0E4bnpPTjRSV3BKVk5wcFBoMDVQUll3MWVqR2h6WWtNUmdSWVdYVEdHR0k2?=
 =?utf-8?B?UUZTYTJoRTZOM2FmT3Z0a1VwZ2psbnpPZ0hEK01adnJNQWVHbjUzVXV1ZmJX?=
 =?utf-8?B?bEQwSXJ6OTczbWhxWGhEZWRTOGIydExWU1R4c01iWkNhOXVnejNyRXhsREtR?=
 =?utf-8?B?cEJRM2JSS0NvRmJUVDBxQzlhVzYrbTZoWEFYUmNpNE5VV0Nwd2ZLY0dNSlhN?=
 =?utf-8?B?cTFleWVPTVJ0L1FDSk40ditlbklOSkNqVXVlR1hxdHF6NkF0NDVvWUFzZFZB?=
 =?utf-8?B?VFQ1aVAyWVRON0lXZTRFNzFPWWlESmhJanhXaHF4Z2tiUEZIZjBSM1JJS3N2?=
 =?utf-8?B?NnRGK3ZsM0w4Qmp6dzVKVXpSSnZRSXpkWnNPa1RldU10YitxaVo2WjFTNGtX?=
 =?utf-8?B?N3JTRU5LY2VaN1paOHdQVk8rbHFCanl6ZDU5V25najl6Q29UUDVhZz09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 36073096-f08b-4e27-b831-08da4eb9a8e8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 10:27:33.9055
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i6QlyTTOekx/5tOquLxD+l1jfCdlm4NpYA4e6nRZRGEW/fEz3KUfciYc2OBYAR9rBw1Y4tNDV0c3M4WYtl+JoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB5113

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -221,6 +221,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"serialize",    0x00000007,  0, CPUID_REG_EDX, 14,  1},
         {"tsxldtrk",     0x00000007,  0, CPUID_REG_EDX, 16,  1},
         {"cet-ibt",      0x00000007,  0, CPUID_REG_EDX, 20,  1},
+        {"avx512-fp16",  0x00000007,  0, CPUID_REG_EDX, 23,  1},
         {"ibrsb",        0x00000007,  0, CPUID_REG_EDX, 26,  1},
         {"stibp",        0x00000007,  0, CPUID_REG_EDX, 27,  1},
         {"l1d-flush",    0x00000007,  0, CPUID_REG_EDX, 28,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -175,6 +175,7 @@ static const char *const str_7d0[32] =
     [16] = "tsxldtrk",
     [18] = "pconfig",
     [20] = "cet-ibt",
+    /* 22 */                [23] = "avx512-fp16",
 
     [26] = "ibrsb",         [27] = "stibp",
     [28] = "l1d-flush",     [29] = "arch-caps",
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -138,6 +138,7 @@
 #define cpu_has_rtm_always_abort boot_cpu_has(X86_FEATURE_RTM_ALWAYS_ABORT)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
+#define cpu_has_avx512_fp16     boot_cpu_has(X86_FEATURE_AVX512_FP16)
 #define cpu_has_arch_caps       boot_cpu_has(X86_FEATURE_ARCH_CAPS)
 
 /* CPUID level 0x00000007:1.eax */
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -281,6 +281,7 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13)
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
+XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*   AVX512 FP16 instructions */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -267,7 +267,8 @@ def crunch_numbers(state):
         # AVX512 extensions acting on vectors of bytes/words are made
         # dependents of AVX512BW (as to requiring wider than 16-bit mask
         # registers), despite the SDM not formally making this connection.
-        AVX512BW: [AVX512_VBMI, AVX512_VBMI2, AVX512_BITALG, AVX512_BF16],
+        AVX512BW: [AVX512_VBMI, AVX512_VBMI2, AVX512_BITALG, AVX512_BF16,
+                   AVX512_FP16],
 
         # Extensions with VEX/EVEX encodings keyed to a separate feature
         # flag are made dependents of their respective legacy feature.
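
The deps table that this hunk extends drives transitive feature disabling: clearing AVX512BW must also clear AVX512-FP16 for guests. A toy sketch of that closure (names and the simplified table are illustrative only; the real logic lives in gen-cpuid.py's crunch_numbers()):

```python
# Toy model of the dependency closure gen-cpuid.py computes:
# disabling a feature also disables everything depending on it.
deps = {
    "AVX512BW": ["AVX512_VBMI", "AVX512_VBMI2", "AVX512_BITALG",
                 "AVX512_BF16", "AVX512_FP16"],
    "AVX512F": ["AVX512BW"],  # simplified subset of the real table
}

def disabled_by(feature):
    """Return the set of features transitively disabled with `feature`."""
    out, stack = set(), [feature]
    while stack:
        f = stack.pop()
        if f not in out:
            out.add(f)
            stack.extend(deps.get(f, []))
    return out

# Clearing AVX512F takes AVX512BW, and hence AVX512-FP16, with it.
print(sorted(disabled_by("AVX512F")))
```

The point of the patch hunk is exactly this edge: without it, AVX512_FP16 could remain advertised while AVX512BW (and thus the wide mask registers it relies on) had been hidden.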



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:27:59 2022
Message-ID: <b922b255-f8c6-725d-2290-2749c614fde0@suse.com>
Date: Wed, 15 Jun 2022 12:27:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 02/11] x86emul: handle AVX512-FP16 insns encoded in 0f3a
 opcode map
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

In order to re-use existing code and tables as much as possible (also in
subsequent patches), simply introduce a new boolean field in the emulator
state indicating whether an insn has a half-precision source operand.
Everything else then follows "naturally".

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
SDE: -spr or -future

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -76,6 +76,7 @@ enum esz {
     ESZ_b,
     ESZ_w,
     ESZ_bw,
+    ESZ_fp16,
 };
 
 #ifndef __i386__
@@ -601,6 +602,19 @@ static const struct test avx512_vpopcntd
     INSN(popcnt, 66, 0f38, 55, vl, dq, vl)
 };
 
+static const struct test avx512_fp16_all[] = {
+    INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
+    INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
+    INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
+    INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
+    INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
+    INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
+    INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
+    INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
+    INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
+    INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+};
+
 static const struct test gfni_all[] = {
     INSN(gf2p8affineinvqb, 66, 0f3a, cf, vl, q, vl),
     INSN(gf2p8affineqb,    66, 0f3a, ce, vl, q, vl),
@@ -728,8 +742,10 @@ static void test_one(const struct test *
         break;
 
     case ESZ_w:
-        esz = 2;
         evex.w = 1;
+        /* fall through */
+    case ESZ_fp16:
+        esz = 2;
         break;
 
 #ifdef __i386__
@@ -845,7 +861,7 @@ static void test_one(const struct test *
     case ESZ_b: case ESZ_w: case ESZ_bw:
         return;
 
-    case ESZ_d: case ESZ_q:
+    case ESZ_d: case ESZ_q: case ESZ_fp16:
         break;
 
     default:
@@ -1002,6 +1018,7 @@ void evex_disp8_test(void *instr, struct
     RUN(avx512_vnni, all);
     RUN(avx512_vp2intersect, all);
     RUN(avx512_vpopcntdq, all);
+    RUN(avx512_fp16, all);
 
     if ( cpu_has_avx512f )
     {
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1972,8 +1972,10 @@ static const struct evex {
     { { 0x03 }, 3, T, R, pfx_66, Wn, Ln }, /* valign{d,q} */
     { { 0x04 }, 3, T, R, pfx_66, W0, Ln }, /* vpermilps */
     { { 0x05 }, 3, T, R, pfx_66, W1, Ln }, /* vpermilpd */
+    { { 0x08 }, 3, T, R, pfx_no, W0, Ln }, /* vrndscaleph */
     { { 0x08 }, 3, T, R, pfx_66, W0, Ln }, /* vrndscaleps */
     { { 0x09 }, 3, T, R, pfx_66, W1, Ln }, /* vrndscalepd */
+    { { 0x0a }, 3, T, R, pfx_no, W0, LIG }, /* vrndscalesh */
     { { 0x0a }, 3, T, R, pfx_66, W0, LIG }, /* vrndscaless */
     { { 0x0b }, 3, T, R, pfx_66, W1, LIG }, /* vrndscalesd */
     { { 0x0f }, 3, T, R, pfx_66, WIG, Ln }, /* vpalignr */
@@ -1993,7 +1995,9 @@ static const struct evex {
     { { 0x22 }, 3, T, R, pfx_66, Wn, L0 }, /* vpinsr{d,q} */
     { { 0x23 }, 3, T, R, pfx_66, Wn, L1|L2 }, /* vshuff{32x4,64x2} */
     { { 0x25 }, 3, T, R, pfx_66, Wn, Ln }, /* vpternlog{d,q} */
+    { { 0x26 }, 3, T, R, pfx_no, W0, Ln }, /* vgetmantph */
     { { 0x26 }, 3, T, R, pfx_66, Wn, Ln }, /* vgetmantp{s,d} */
+    { { 0x27 }, 3, T, R, pfx_no, W0, LIG }, /* vgetmantsh */
     { { 0x27 }, 3, T, R, pfx_66, Wn, LIG }, /* vgetmants{s,d} */
     { { 0x38 }, 3, T, R, pfx_66, Wn, L1|L2 }, /* vinserti{32x4,64x2} */
     { { 0x39 }, 3, T, W, pfx_66, Wn, L1|L2 }, /* vextracti{32x4,64x2} */
@@ -2008,14 +2012,20 @@ static const struct evex {
     { { 0x51 }, 3, T, R, pfx_66, Wn, LIG }, /* vranges{s,d} */
     { { 0x54 }, 3, T, R, pfx_66, Wn, Ln }, /* vfixupimmp{s,d} */
     { { 0x55 }, 3, T, R, pfx_66, Wn, LIG }, /* vfixumpimms{s,d} */
+    { { 0x56 }, 3, T, R, pfx_no, W0, Ln }, /* vreduceph */
     { { 0x56 }, 3, T, R, pfx_66, Wn, Ln }, /* vreducep{s,d} */
+    { { 0x57 }, 3, T, R, pfx_no, W0, LIG }, /* vreducesh */
     { { 0x57 }, 3, T, R, pfx_66, Wn, LIG }, /* vreduces{s,d} */
+    { { 0x66 }, 3, T, R, pfx_no, W0, Ln }, /* vfpclassph */
     { { 0x66 }, 3, T, R, pfx_66, Wn, Ln }, /* vfpclassp{s,d} */
+    { { 0x67 }, 3, T, R, pfx_no, W0, LIG }, /* vfpclasssh */
     { { 0x67 }, 3, T, R, pfx_66, Wn, LIG }, /* vfpclasss{s,d} */
     { { 0x70 }, 3, T, R, pfx_66, W1, Ln }, /* vshldw */
     { { 0x71 }, 3, T, R, pfx_66, Wn, Ln }, /* vshld{d,q} */
     { { 0x72 }, 3, T, R, pfx_66, W1, Ln }, /* vshrdw */
     { { 0x73 }, 3, T, R, pfx_66, Wn, Ln }, /* vshrd{d,q} */
+    { { 0xc2 }, 3, T, R, pfx_no, W0, Ln }, /* vcmpph */
+    { { 0xc2 }, 3, T, R, pfx_f3, W0, LIG }, /* vcmpsh */
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
 };
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -4674,6 +4674,44 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing vfpclassphz $0x46,128(%ecx),%k3...");
+    if ( stack_exec && cpu_has_avx512_fp16 )
+    {
+        decl_insn(vfpclassph);
+
+        asm volatile ( put_insn(vfpclassph,
+                                /* 0x46: check for +/- 0 and neg. */
+                                /* vfpclassphz $0x46, 128(%0), %%k3 */
+                                ".byte 0x62, 0xf3, 0x7c, 0x48\n\t"
+                                ".byte 0x66, 0x59, 0x02, 0x46")
+                       :: "c" (NULL) );
+
+        set_insn(vfpclassph);
+        for ( i = 0; i < 3; ++i )
+        {
+            res[16 + i * 5 + 0] = 0x7fff0000; /* +0 / +NaN */
+            res[16 + i * 5 + 1] = 0xffff8000; /* -0 / -NaN */
+            res[16 + i * 5 + 2] = 0x80010001; /* +DEN / -DEN */
+            res[16 + i * 5 + 3] = 0xfc00f800; /* -FIN / -INF */
+            res[16 + i * 5 + 4] = 0x7c007800; /* +FIN / +INF */
+        }
+        res[31] = 0;
+        regs.ecx = (unsigned long)res - 64;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vfpclassph) )
+            goto fail;
+        asm volatile ( "kmovd %%k3, %0" : "=g" (rc) );
+        /*
+         * 0b11(0001100101)*3
+         * 0b1100_0110_0101_0001_1001_0100_0110_0101
+         */
+        if ( rc != 0xc6519465 )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     /*
      * The following compress/expand tests are not only making sure the
      * accessed data is correct, but they also verify (by placing operands
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -182,6 +182,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_vp2intersect (cp.feat.avx512_vp2intersect && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
+#define cpu_has_avx512_fp16 (cp.feat.avx512_fp16 && xcr0_mask(0xe6))
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -518,6 +518,7 @@ static const struct ext0f3a_table {
     [0x7a ... 0x7b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
     [0x7c ... 0x7d] = { .simd_size = simd_packed_fp, .four_op = 1 },
     [0x7e ... 0x7f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0xc2] = { .simd_size = simd_any_fp, .d8s = d8s_vl },
     [0xcc] = { .simd_size = simd_other },
     [0xce ... 0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xdf] = { .simd_size = simd_packed_int, .two_op = 1 },
@@ -579,7 +580,7 @@ static unsigned int decode_disp8scale(en
         if ( s->evex.brs )
         {
     case d8s_dq:
-            return 2 + s->evex.w;
+            return 1 + !s->fp16 + s->evex.w;
         }
         break;
 
@@ -596,7 +597,7 @@ static unsigned int decode_disp8scale(en
         /* fall through */
     case simd_scalar_opc:
     case simd_scalar_vexw:
-        return 2 + s->evex.w;
+        return 1 + !s->fp16 + s->evex.w;
 
     case simd_128:
         /* These should have an explicit size specified. */
@@ -1417,7 +1418,29 @@ int x86emul_decode(struct x86_emulate_st
              */
             s->simd_size = ext0f3a_table[b].simd_size;
             if ( evex_encoded() )
+            {
+                switch ( b )
+                {
+                case 0x08: /* vrndscaleph */
+                case 0x0a: /* vrndscalesh */
+                case 0x26: /* vfpclassph */
+                case 0x27: /* vfpclasssh */
+                case 0x56: /* vgetmantph */
+                case 0x57: /* vgetmantsh */
+                case 0x66: /* vreduceph */
+                case 0x67: /* vreducesh */
+                    if ( !s->evex.pfx )
+                        s->fp16 = true;
+                    break;
+
+                case 0xc2: /* vcmp{p,s}h */
+                    if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                        s->fp16 = true;
+                    break;
+                }
+
                 disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, s);
+            }
             break;
 
         case ext_8f09:
@@ -1712,7 +1735,7 @@ int x86emul_decode(struct x86_emulate_st
             break;
         case vex_f3:
             generate_exception_if(evex_encoded() && s->evex.w, X86_EXC_UD);
-            s->op_bytes = 4;
+            s->op_bytes = 4 >> s->fp16;
             break;
         case vex_f2:
             generate_exception_if(evex_encoded() && !s->evex.w, X86_EXC_UD);
@@ -1722,11 +1745,11 @@ int x86emul_decode(struct x86_emulate_st
         break;
 
     case simd_scalar_opc:
-        s->op_bytes = 4 << (ctxt->opcode & 1);
+        s->op_bytes = 2 << (!s->fp16 + (ctxt->opcode & 1));
         break;
 
     case simd_scalar_vexw:
-        s->op_bytes = 4 << s->vex.w;
+        s->op_bytes = 2 << (!s->fp16 + s->vex.w);
         break;
 
     case simd_128:
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -304,6 +304,7 @@ struct x86_emulate_state {
     bool lock_prefix;
     bool not_64bit; /* Instruction not available in 64bit. */
     bool fpu_ctrl;  /* Instruction is an FPU control one. */
+    bool fp16;      /* Instruction has half-precision FP source operand. */
     opcode_desc_t desc;
     union vex vex;
     union evex evex;
@@ -590,6 +591,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
+#define vcpu_has_avx512_fp16() (ctxt->cpuid->feat.avx512_fp16)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1306,7 +1306,7 @@ x86_emulate(
     b = ctxt->opcode;
     d = state.desc;
 #define state (&state)
-    elem_bytes = 4 << evex.w;
+    elem_bytes = 2 << (!state->fp16 + evex.w);
 
     generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);
 
@@ -7147,6 +7147,15 @@ x86_emulate(
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x0a): /* vrndscalesh $imm8,xmm/mem,xmm,xmm{k} */
+        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        /* fall through */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x08): /* vrndscaleph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        avx512_vlen_check(b & 2);
+        goto simd_imm8_zmm;
+
 #endif /* X86EMUL_NO_SIMD */
 
     CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */
@@ -7457,6 +7466,14 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x26): /* vgetmantph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x56): /* vreduceph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x51): /* vranges{s,d} $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x57): /* vreduces{s,d} $imm8,xmm/mem,xmm,xmm{k} */
         host_and_vcpu_must_have(avx512dq);
@@ -7469,6 +7486,16 @@ x86_emulate(
             avx512_vlen_check(true);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x27): /* vgetmantsh $imm8,xmm/mem,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x57): /* vreducesh $imm8,xmm/mem,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( !evex.brs )
+            avx512_vlen_check(true);
+        else
+            generate_exception_if(ea.type != OP_REG, EXC_UD);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x30): /* kshiftr{b,w} $imm8,k,k */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x32): /* kshiftl{b,w} $imm8,k,k */
         if ( !vex.w )
@@ -7632,6 +7659,16 @@ x86_emulate(
         avx512_vlen_check(true);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x66): /* vfpclassph $imm8,[xyz]mm/mem,k{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x67): /* vfpclasssh $imm8,xmm/mem,k{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, EXC_UD);
+        if ( !(b & 1) )
+            goto avx512f_imm8_no_sae;
+        generate_exception_if(evex.brs, EXC_UD);
+        avx512_vlen_check(true);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x70): /* vpshldw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x72): /* vpshrdw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
@@ -7642,6 +7679,16 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_vbmi2);
         goto avx512f_imm8_no_sae;
 
+    case X86EMUL_OPC_EVEX_F3(0x0f3a, 0xc2): /* vcmpsh $imm8,xmm/mem,xmm,k{k} */
+        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        /* fall through */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0xc2): /* vcmpph $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC(0x0f3a, 0xcc):     /* sha1rnds4 $imm8,xmm/m128,xmm */
         host_and_vcpu_must_have(sha);
         op_bytes = 16;



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:28:23 2022
Message-ID: <6721f404-e2f9-b686-009c-4c465a5a1e3f@suse.com>
Date: Wed, 15 Jun 2022 12:28:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 03/11] x86emul: handle AVX512-FP16 Map5 arithmetic insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Map5 is a very sparse clone of the "twobyte" (0f) encoding space. Re-use
that table: the entries corresponding to opcodes invalid in Map5 are
benign, since their simd_size is forced to other than simd_none
(preventing undue memory reads in the SrcMem handling early in
x86_emulate()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -6,7 +6,7 @@
 struct test {
     const char *mnemonic;
     unsigned int opc:8;
-    unsigned int spc:2;
+    unsigned int spc:3;
     unsigned int pfx:2;
     unsigned int vsz:3;
     unsigned int esz:4;
@@ -19,6 +19,10 @@ enum spc {
     SPC_0f,
     SPC_0f38,
     SPC_0f3a,
+    SPC_unused4,
+    SPC_map5,
+    SPC_map6,
+    SPC_unused7,
 };
 
 enum pfx {
@@ -603,16 +607,32 @@ static const struct test avx512_vpopcntd
 };
 
 static const struct test avx512_fp16_all[] = {
+    INSN(addph,           , map5, 58,    vl, fp16, vl),
+    INSN(addsh,         f3, map5, 58,    el, fp16, el),
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
+    INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(divph,           , map5, 5e,    vl, fp16, vl),
+    INSN(divsh,         f3, map5, 5e,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
     INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
+    INSN(maxph,           , map5, 5f,    vl, fp16, vl),
+    INSN(maxsh,         f3, map5, 5f,    el, fp16, el),
+    INSN(minph,           , map5, 5d,    vl, fp16, vl),
+    INSN(minsh,         f3, map5, 5d,    el, fp16, el),
+    INSN(mulph,           , map5, 59,    vl, fp16, vl),
+    INSN(mulsh,         f3, map5, 59,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
     INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
     INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
     INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+    INSN(sqrtph,          , map5, 51,    vl, fp16, vl),
+    INSN(sqrtsh,        f3, map5, 51,    el, fp16, el),
+    INSN(subph,           , map5, 5c,    vl, fp16, vl),
+    INSN(subsh,         f3, map5, 5c,    el, fp16, el),
+    INSN(ucomish,         , map5, 2e,    el, fp16, el),
 };
 
 static const struct test gfni_all[] = {
@@ -713,8 +733,8 @@ static void test_one(const struct test *
     union evex {
         uint8_t raw[3];
         struct {
-            uint8_t opcx:2;
-            uint8_t mbz:2;
+            uint8_t opcx:3;
+            uint8_t mbz:1;
             uint8_t R:1;
             uint8_t b:1;
             uint8_t x:1;
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2028,6 +2028,23 @@ static const struct evex {
     { { 0xc2 }, 3, T, R, pfx_f3, W0, LIG }, /* vcmpsh */
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
+}, evex_map5[] = {
+    { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
+    { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
+    { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
+    { { 0x51 }, 2, T, R, pfx_f3, W0, LIG }, /* vsqrtsh */
+    { { 0x58 }, 2, T, R, pfx_no, W0, Ln }, /* vaddph */
+    { { 0x58 }, 2, T, R, pfx_f3, W0, LIG }, /* vaddsh */
+    { { 0x59 }, 2, T, R, pfx_no, W0, Ln }, /* vmulph */
+    { { 0x59 }, 2, T, R, pfx_f3, W0, LIG }, /* vmulsh */
+    { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
+    { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
+    { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
+    { { 0x5d }, 2, T, R, pfx_f3, W0, LIG }, /* vminsh */
+    { { 0x5e }, 2, T, R, pfx_no, W0, Ln }, /* vdivph */
+    { { 0x5e }, 2, T, R, pfx_f3, W0, LIG }, /* vdivsh */
+    { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
+    { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
 };
 
 static const struct {
@@ -2037,6 +2054,8 @@ static const struct {
     { evex_0f,   ARRAY_SIZE(evex_0f) },
     { evex_0f38, ARRAY_SIZE(evex_0f38) },
     { evex_0f3a, ARRAY_SIZE(evex_0f3a) },
+    { NULL,      0 },
+    { evex_map5, ARRAY_SIZE(evex_map5) },
 };
 
 #undef Wn
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1219,9 +1219,18 @@ int x86emul_decode(struct x86_emulate_st
                         opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
                         d = twobyte_table[0x3a].desc;
                         break;
+
+                    case evex_map5:
+                        if ( !evex_encoded() )
+                        {
                     default:
-                        rc = X86EMUL_UNRECOGNIZED;
-                        goto done;
+                            rc = X86EMUL_UNRECOGNIZED;
+                            goto done;
+                        }
+                        opcode |= MASK_INSR(5, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[b].desc;
+                        s->simd_size = twobyte_table[b].size ?: simd_other;
+                        break;
                     }
                 }
                 else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
@@ -1443,6 +1452,24 @@ int x86emul_decode(struct x86_emulate_st
             }
             break;
 
+        case ext_map5:
+            switch ( b )
+            {
+            default:
+                if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                    s->fp16 = true;
+                break;
+
+            case 0x2e: case 0x2f: /* v{,u}comish */
+                if ( !s->evex.pfx )
+                    s->fp16 = true;
+                s->simd_size = simd_none;
+                break;
+            }
+
+            disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+            break;
+
         case ext_8f09:
             if ( ext8f09_table[b].two_op )
                 d |= TwoOp;
@@ -1661,6 +1688,7 @@ int x86emul_decode(struct x86_emulate_st
         s->simd_size = ext8f08_table[b].simd_size;
         break;
 
+    case ext_map5:
     case ext_8f09:
     case ext_8f0a:
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -194,6 +194,7 @@ enum vex_opcx {
     vex_0f = vex_none + 1,
     vex_0f38,
     vex_0f3a,
+    evex_map5 = 5,
 };
 
 enum vex_pfx {
@@ -222,8 +223,8 @@ union vex {
 union evex {
     uint8_t raw[3];
     struct {             /* SDM names */
-        uint8_t opcx:2;  /* mm */
-        uint8_t mbz:2;
+        uint8_t opcx:3;  /* mmm */
+        uint8_t mbz:1;
         uint8_t R:1;     /* R' */
         uint8_t b:1;     /* B */
         uint8_t x:1;     /* X */
@@ -248,6 +249,7 @@ struct x86_emulate_state {
         ext_0f   = vex_0f,
         ext_0f38 = vex_0f38,
         ext_0f3a = vex_0f3a,
+        ext_map5 = evex_map5,
         /*
          * For XOP use values such that the respective instruction field
          * can be used without adjustment.
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3760,6 +3760,13 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
+    case X86EMUL_OPC_EVEX(5, 0x2e): /* vucomish xmm/m16,xmm */
+    case X86EMUL_OPC_EVEX(5, 0x2f): /* vcomish xmm/m16,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        /* fall through */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2e): /* vucomis{s,d} xmm/mem,xmm */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2f): /* vcomis{s,d} xmm/mem,xmm */
         generate_exception_if((evex.reg != 0xf || !evex.RX || evex.opmsk ||
@@ -3772,9 +3779,11 @@ x86_emulate(
         get_fpu(X86EMUL_FPU_zmm);
 
         opc = init_evex(stub);
-        op_bytes = 4 << evex.w;
+        op_bytes = 2 << (!state->fp16 + evex.w);
         goto vcomi;
 
+#endif
+
     case X86EMUL_OPC(0x0f, 0x30): /* wrmsr */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         fail_if(ops->write_msr == NULL);
@@ -7738,6 +7747,20 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x51):   /* vsqrtsh xmm/m16,xmm,xmm{k} */
+        d &= ~TwoOp;
+        /* fall through */
+    case X86EMUL_OPC_EVEX(5, 0x51):      /* vsqrtph [xyz]mm/mem,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x58): /* vadd{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x59): /* vmul{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5c): /* vsub{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5d): /* vmin{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5e): /* vdiv{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5f): /* vmax{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        goto avx512f_all_fp;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -619,6 +619,7 @@ struct x86_emulate_ctxt
  *    0x0fxxxx for 0f-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f38xxxx for 0f38-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f3axxxx for 0f3a-prefixed opcodes (or their VEX/EVEX equivalents)
+ *     0x5xxxx for Map5 opcodes (EVEX only)
  *  0x8f08xxxx for 8f/8-prefixed XOP opcodes
  *  0x8f09xxxx for 8f/9-prefixed XOP opcodes
  *  0x8f0axxxx for 8f/a-prefixed XOP opcodes



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:28:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:28:42 +0000
Message-ID: <3e7f95f0-fded-74e3-d4b5-da185a7ab8d8@suse.com>
Date: Wed, 15 Jun 2022 12:28:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 04/11] x86emul: handle AVX512-FP16 move insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -622,6 +622,8 @@ static const struct test avx512_fp16_all
     INSN(maxsh,         f3, map5, 5f,    el, fp16, el),
     INSN(minph,           , map5, 5d,    vl, fp16, vl),
     INSN(minsh,         f3, map5, 5d,    el, fp16, el),
+    INSN(movsh,         f3, map5, 10,    el, fp16, el),
+    INSN(movsh,         f3, map5, 11,    el, fp16, el),
     INSN(mulph,           , map5, 59,    vl, fp16, vl),
     INSN(mulsh,         f3, map5, 59,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
@@ -635,6 +637,11 @@ static const struct test avx512_fp16_all
     INSN(ucomish,         , map5, 2e,    el, fp16, el),
 };
 
+static const struct test avx512_fp16_128[] = {
+    INSN(movw, 66, map5, 6e, el, fp16, el),
+    INSN(movw, 66, map5, 7e, el, fp16, el),
+};
+
 static const struct test gfni_all[] = {
     INSN(gf2p8affineinvqb, 66, 0f3a, cf, vl, q, vl),
     INSN(gf2p8affineqb,    66, 0f3a, ce, vl, q, vl),
@@ -1039,6 +1046,7 @@ void evex_disp8_test(void *instr, struct
     RUN(avx512_vp2intersect, all);
     RUN(avx512_vpopcntdq, all);
     RUN(avx512_fp16, all);
+    RUN(avx512_fp16, 128);
 
     if ( cpu_has_avx512f )
     {
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2029,6 +2029,8 @@ static const struct evex {
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
 }, evex_map5[] = {
+    { { 0x10 }, 2, T, R, pfx_f3, W0, LIG }, /* vmovsh */
+    { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2045,6 +2047,8 @@ static const struct evex {
     { { 0x5e }, 2, T, R, pfx_f3, W0, LIG }, /* vdivsh */
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
+    { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 };
 
 static const struct {
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -5137,6 +5137,76 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing vmovsh 8(%ecx),%xmm5...");
+    if ( stack_exec && cpu_has_avx512_fp16 )
+    {
+        decl_insn(vmovsh_from_mem);
+        decl_insn(vmovw_to_gpr);
+
+        asm volatile ( "vpcmpeqw %%ymm5, %%ymm5, %%ymm5\n\t"
+                       put_insn(vmovsh_from_mem,
+                                /* vmovsh 8(%0), %%xmm5 */
+                                ".byte 0x62, 0xf5, 0x7e, 0x08\n\t"
+                                ".byte 0x10, 0x69, 0x04")
+                       :: "c" (NULL) );
+
+        set_insn(vmovsh_from_mem);
+        res[2] = 0x3c00bc00;
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) || !check_eip(vmovsh_from_mem) )
+            goto fail;
+        asm volatile ( "kmovw     %2, %%k1\n\t"
+                       "vmovdqu16 %1, %%zmm4%{%%k1%}%{z%}\n\t"
+                       "vpcmpeqw  %%zmm4, %%zmm5, %%k0\n\t"
+                       "kmovw     %%k0, %0"
+                       : "=g" (rc)
+                       : "m" (res[2]), "r" (1) );
+        if ( rc != 0xffff )
+            goto fail;
+        printf("okay\n");
+
+        printf("%-40s", "Testing vmovsh %xmm4,2(%eax){%k3}...");
+        memset(res, ~0, 8);
+        res[2] = 0xbc00ffff;
+        memset(res + 3, ~0, 8);
+        regs.eax = (unsigned long)res;
+        regs.ecx = ~0;
+        for ( i = 0; i < 2; ++i )
+        {
+            decl_insn(vmovsh_to_mem);
+
+            asm volatile ( "kmovw %1, %%k3\n\t"
+                           put_insn(vmovsh_to_mem,
+                                    /* vmovsh %%xmm4, 2(%0)%{%%k3%} */
+                                    ".byte 0x62, 0xf5, 0x7e, 0x0b\n\t"
+                                    ".byte 0x11, 0x60, 0x01")
+                           :: "a" (NULL), "r" (i) );
+
+            set_insn(vmovsh_to_mem);
+            rc = x86_emulate(&ctxt, &emulops);
+            if ( (rc != X86EMUL_OKAY) || !check_eip(vmovsh_to_mem) ||
+                 memcmp(res, res + 3 - i, 8) )
+                goto fail;
+        }
+        printf("okay\n");
+
+        printf("%-40s", "Testing vmovw %xmm5,%ecx...");
+        asm volatile ( put_insn(vmovw_to_gpr,
+                                /* vmovw %%xmm5, %0 */
+                                ".byte 0x62, 0xf5, 0x7d, 0x08\n\t"
+                                ".byte 0x7e, 0xe9")
+                       :: "c" (NULL) );
+        set_insn(vmovw_to_gpr);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) || !check_eip(vmovw_to_gpr) ||
+             regs.ecx != 0xbc00 )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing invpcid 16(%ecx),%%edx...");
     if ( stack_exec )
     {
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -585,7 +585,7 @@ static unsigned int decode_disp8scale(en
         break;
 
     case d8s_dq64:
-        return 2 + (s->op_bytes == 8);
+        return 1 + !s->fp16 + (s->op_bytes == 8);
     }
 
     switch ( s->simd_size )
@@ -1465,6 +1465,15 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
+
+            case 0x6e: /* vmovw r/m16, xmm */
+                d = (d & ~SrcMask) | SrcMem16;
+                /* fall through */
+            case 0x7e: /* vmovw xmm, r/m16 */
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                s->simd_size = simd_none;
+                break;
             }
 
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -4394,6 +4394,15 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_66(5, 0x7e): /* vmovw xmm,r/m16 */
+        ASSERT(dst.bytes >= 4);
+        if ( dst.type == OP_MEM )
+            dst.bytes = 2;
+        /* fall through */
+    case X86EMUL_OPC_EVEX_66(5, 0x6e): /* vmovw r/m16,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
@@ -7747,8 +7756,18 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x10):   /* vmovsh m16,xmm{k} */
+                                         /* vmovsh xmm,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x11):   /* vmovsh xmm,m16{k} */
+                                         /* vmovsh xmm,xmm,xmm{k} */
+        generate_exception_if(evex.brs, EXC_UD);
+        if ( ea.type == OP_MEM )
+            d |= TwoOp;
+        else
+        {
     case X86EMUL_OPC_EVEX_F3(5, 0x51):   /* vsqrtsh xmm/m16,xmm,xmm{k} */
-        d &= ~TwoOp;
+            d &= ~TwoOp;
+        }
         /* fall through */
     case X86EMUL_OPC_EVEX(5, 0x51):      /* vsqrtph [xyz]mm/mem,[xyz]mm{k} */
     CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x58): /* vadd{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:29:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349934.576142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QGj-0005Ti-Ee; Wed, 15 Jun 2022 10:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349934.576142; Wed, 15 Jun 2022 10:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QGj-0005TZ-Ba; Wed, 15 Jun 2022 10:29:01 +0000
Received: by outflank-mailman (input) for mailman id 349934;
 Wed, 15 Jun 2022 10:29:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QGi-0004ln-Bx
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:29:00 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20618.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f776995e-ec95-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 12:28:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7332.eurprd04.prod.outlook.com (2603:10a6:20b:1db::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Wed, 15 Jun
 2022 10:28:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f776995e-ec95-11ec-ab14-113154c10af9
Message-ID: <36fadb47-32a2-b06e-4cd3-218635ef8aeb@suse.com>
Date: Wed, 15 Jun 2022 12:28:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 05/11] x86emul: handle AVX512-FP16 fma-like insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The Map6 encoding space is a very sparse clone of the "0f38" one. Once
again re-use that table: the entries corresponding to opcodes which are
invalid in Map6 are benign, since their simd_size is forced to a value
other than simd_none (preventing undue memory reads in the SrcMem
handling early in x86_emulate()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -614,6 +614,36 @@ static const struct test avx512_fp16_all
     INSN(comish,          , map5, 2f,    el, fp16, el),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
+    INSN(fmadd132ph,    66, map6, 98,    vl, fp16, vl),
+    INSN(fmadd132sh,    66, map6, 99,    el, fp16, el),
+    INSN(fmadd213ph,    66, map6, a8,    vl, fp16, vl),
+    INSN(fmadd213sh,    66, map6, a9,    el, fp16, el),
+    INSN(fmadd231ph,    66, map6, b8,    vl, fp16, vl),
+    INSN(fmadd231sh,    66, map6, b9,    el, fp16, el),
+    INSN(fmaddsub132ph, 66, map6, 96,    vl, fp16, vl),
+    INSN(fmaddsub213ph, 66, map6, a6,    vl, fp16, vl),
+    INSN(fmaddsub231ph, 66, map6, b6,    vl, fp16, vl),
+    INSN(fmsub132ph,    66, map6, 9a,    vl, fp16, vl),
+    INSN(fmsub132sh,    66, map6, 9b,    el, fp16, el),
+    INSN(fmsub213ph,    66, map6, aa,    vl, fp16, vl),
+    INSN(fmsub213sh,    66, map6, ab,    el, fp16, el),
+    INSN(fmsub231ph,    66, map6, ba,    vl, fp16, vl),
+    INSN(fmsub231sh,    66, map6, bb,    el, fp16, el),
+    INSN(fmsubadd132ph, 66, map6, 97,    vl, fp16, vl),
+    INSN(fmsubadd213ph, 66, map6, a7,    vl, fp16, vl),
+    INSN(fmsubadd231ph, 66, map6, b7,    vl, fp16, vl),
+    INSN(fnmadd132ph,   66, map6, 9c,    vl, fp16, vl),
+    INSN(fnmadd132sh,   66, map6, 9d,    el, fp16, el),
+    INSN(fnmadd213ph,   66, map6, ac,    vl, fp16, vl),
+    INSN(fnmadd213sh,   66, map6, ad,    el, fp16, el),
+    INSN(fnmadd231ph,   66, map6, bc,    vl, fp16, vl),
+    INSN(fnmadd231sh,   66, map6, bd,    el, fp16, el),
+    INSN(fnmsub132ph,   66, map6, 9e,    vl, fp16, vl),
+    INSN(fnmsub132sh,   66, map6, 9f,    el, fp16, el),
+    INSN(fnmsub213ph,   66, map6, ae,    vl, fp16, vl),
+    INSN(fnmsub213sh,   66, map6, af,    el, fp16, el),
+    INSN(fnmsub231ph,   66, map6, be,    vl, fp16, vl),
+    INSN(fnmsub231sh,   66, map6, bf,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2049,6 +2049,37 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
+}, evex_map6[] = {
+    { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
+    { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
+    { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
+    { { 0x99 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd132sh */
+    { { 0x9a }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub132ph */
+    { { 0x9b }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub132sh */
+    { { 0x9c }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd132ph */
+    { { 0x9d }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd132sh */
+    { { 0x9e }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub132ph */
+    { { 0x9f }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub132sh */
+    { { 0xa6 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub213ph */
+    { { 0xa7 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd213ph */
+    { { 0xa8 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd213ph */
+    { { 0xa9 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd213sh */
+    { { 0xaa }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub213ph */
+    { { 0xab }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub213sh */
+    { { 0xac }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd213ph */
+    { { 0xad }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd213sh */
+    { { 0xae }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub213ph */
+    { { 0xaf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub213sh */
+    { { 0xb6 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub231ph */
+    { { 0xb7 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd231ph */
+    { { 0xb8 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd231ph */
+    { { 0xb9 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd231sh */
+    { { 0xba }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub231ph */
+    { { 0xbb }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub231sh */
+    { { 0xbc }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd231ph */
+    { { 0xbd }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd231sh */
+    { { 0xbe }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub231ph */
+    { { 0xbf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub231sh */
 };
 
 static const struct {
@@ -2060,6 +2091,7 @@ static const struct {
     { evex_0f3a, ARRAY_SIZE(evex_0f3a) },
     { NULL,      0 },
     { evex_map5, ARRAY_SIZE(evex_map5) },
+    { evex_map6, ARRAY_SIZE(evex_map6) },
 };
 
 #undef Wn
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1231,6 +1231,16 @@ int x86emul_decode(struct x86_emulate_st
                         d = twobyte_table[b].desc;
                         s->simd_size = twobyte_table[b].size ?: simd_other;
                         break;
+
+                    case evex_map6:
+                        if ( !evex_encoded() )
+                        {
+                            rc = X86EMUL_UNRECOGNIZED;
+                            goto done;
+                        }
+                        opcode |= MASK_INSR(6, X86EMUL_OPC_EXT_MASK);
+                        d = twobyte_table[0x38].desc;
+                        break;
                     }
                 }
                 else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
@@ -1479,6 +1489,24 @@ int x86emul_decode(struct x86_emulate_st
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
             break;
 
+        case ext_map6:
+            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
+                                        : DstReg | SrcMem;
+            if ( ext0f38_table[b].two_op )
+                d |= TwoOp;
+            s->simd_size = ext0f38_table[b].simd_size ?: simd_other;
+
+            switch ( b )
+            {
+            default:
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                break;
+            }
+
+            disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
+            break;
+
         case ext_8f09:
             if ( ext8f09_table[b].two_op )
                 d |= TwoOp;
@@ -1698,6 +1726,7 @@ int x86emul_decode(struct x86_emulate_st
         break;
 
     case ext_map5:
+    case ext_map6:
     case ext_8f09:
     case ext_8f0a:
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -195,6 +195,7 @@ enum vex_opcx {
     vex_0f38,
     vex_0f3a,
     evex_map5 = 5,
+    evex_map6,
 };
 
 enum vex_pfx {
@@ -250,6 +251,7 @@ struct x86_emulate_state {
         ext_0f38 = vex_0f38,
         ext_0f3a = vex_0f3a,
         ext_map5 = evex_map5,
+        ext_map6 = evex_map6,
         /*
          * For XOP use values such that the respective instruction field
          * can be used without adjustment.
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7780,6 +7780,49 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x97): /* vfmsubadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x98): /* vfmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9a): /* vfmsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9c): /* vfnmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9e): /* vfnmsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa6): /* vfmaddsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa7): /* vfmsubadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa8): /* vfmadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xaa): /* vfmsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xac): /* vfnmadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xae): /* vfnmsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb6): /* vfmaddsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb7): /* vfmsubadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb8): /* vfmadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xba): /* vfmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbc): /* vfnmadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbe): /* vfnmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9b): /* vfmsub132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9d): /* vfnmadd132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9f): /* vfnmsub132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa9): /* vfmadd213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xab): /* vfmsub213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xad): /* vfnmadd213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xaf): /* vfnmsub213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb9): /* vfmadd231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbb): /* vfmsub231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbd): /* vfnmadd231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbf): /* vfnmsub231sh xmm/m16,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || (ea.type != OP_REG && evex.brs),
+                              EXC_UD);
+        if ( !evex.brs )
+            avx512_vlen_check(true);
+        goto simd_zmm;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -620,6 +620,7 @@ struct x86_emulate_ctxt
  *  0x0f38xxxx for 0f38-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f3axxxx for 0f3a-prefixed opcodes (or their VEX/EVEX equivalents)
  *     0x5xxxx for Map5 opcodes (EVEX only)
+ *     0x6xxxx for Map6 opcodes (EVEX only)
  *  0x8f08xxxx for 8f/8-prefixed XOP opcodes
  *  0x8f09xxxx for 8f/9-prefixed XOP opcodes
  *  0x8f0axxxx for 8f/a-prefixed XOP opcodes



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:30:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349942.576153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QIF-00075j-1c; Wed, 15 Jun 2022 10:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349942.576153; Wed, 15 Jun 2022 10:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QIE-00075c-Tk; Wed, 15 Jun 2022 10:30:34 +0000
Received: by outflank-mailman (input) for mailman id 349942;
 Wed, 15 Jun 2022 10:30:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QID-00075T-HJ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:30:33 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ef00ec7-ec96-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:30:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6876.eurprd04.prod.outlook.com (2603:10a6:10:116::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Wed, 15 Jun
 2022 10:30:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:30:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ef00ec7-ec96-11ec-bd2c-47488cf2e6aa
Message-ID: <5c77cdba-fac9-d82b-9d68-40f8b4f82d66@suse.com>
Date: Wed, 15 Jun 2022 12:30:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 08/11] x86emul: handle AVX512-FP16 conversion to/from (packed)
 int16 insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

These are the easiest of the conversions in that they have same-size
source and destination vectors, yet they differ from the other conversion
insns in that they use opcodes which have a different meaning in the 0F
encoding space ({,V}H{ADD,SUB}P{S,D}), hence requiring a little bit of
overriding.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,6 +612,12 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
+    INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
+    INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
+    INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
+    INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
+    INSN(cvtw2ph,       f3, map5, 7d,    vl, fp16, vl),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
     INSNX(fcmaddcph,    f2, map6, 56, 1, vl,    d, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2048,6 +2048,12 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x7c }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2uw */
+    { { 0x7c }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2w */
+    { { 0x7d }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2uw */
+    { { 0x7d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2w */
+    { { 0x7d }, 2, T, R, pfx_f3, W0, Ln }, /* vcvtw2ph */
+    { { 0x7d }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtuw2ph */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
     { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -259,7 +259,7 @@ static const struct twobyte_table {
     [0x78 ... 0x79] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl },
     [0x7a] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
     [0x7b] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
-    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other },
+    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
     [0x7e] = { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 },
     [0x7f] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
     [0x80 ... 0x8f] = { DstImplicit|SrcImm },
@@ -1488,6 +1488,12 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
+
+            case 0x7c: /* vcvttph2{,u}w */
+            case 0x7d: /* vcvtph2{,u}w / vcvt{,u}w2ph */
+                d = DstReg | SrcMem | TwoOp;
+                s->fp16 = true;
+                break;
             }
 
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7780,6 +7780,14 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7d): /* vcvtph2w [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x7d): /* vcvtw2ph [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(5, 0x7d): /* vcvtuw2ph [xyz]mm/mem,[xyz]mm{k} */
+        op_bytes = 16 << evex.lr;
+        /* fall through */
     case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:31:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349949.576164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QJ5-0007hP-CK; Wed, 15 Jun 2022 10:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349949.576164; Wed, 15 Jun 2022 10:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QJ5-0007hI-8I; Wed, 15 Jun 2022 10:31:27 +0000
Received: by outflank-mailman (input) for mailman id 349949;
 Wed, 15 Jun 2022 10:31:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QJ3-0007gz-MR
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:31:25 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3beb87d1-ec96-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:30:54 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6876.eurprd04.prod.outlook.com (2603:10a6:10:116::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Wed, 15 Jun
 2022 10:30:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3beb87d1-ec96-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DE0NnyhkLT+KxeYdCD6VvVynLmARSPYuxsy2X0bbwIPJrypbjKwYjUZQKcf37uka9z7chsfR8oJ6KrsAAdRhIaqQBG+TkVm9tpQhivpo4ESCCUUe+We8LloTiooCX9ez33ujcuq1iSyJSt41GuoPuc4vAqLXJNwuCUI28uG2h+bUc7OW0AaWlSDEXS9cNbsQgvgmVKqqv9nIXekrfNJV+51Hin3FWLMW7l4NK6JYBzLFKX1VtKX32//K1NaOXm4DkEozXY46tHhsVbxJuwOu0qkObG5LQpQe3mG87utKT3Z6mhs3Fk54hFyG7zOZO+jMYKYqvpfc8Ug3fh3sWKZ4TA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=k2dOvs6g6kpvHhFE2Cq6miTN2SdtjeNE6N4Fh2/FJ04=;
 b=Q3JUbM4c/zh9J4OIqT5nY/7nWOP4SlMe0Fx3s/q7pr6nHe1C8o6EqXk+60TPR0Zkrffi2RDSBkCoJITgpgq9nMa9+vIET3sfI3b5qUlZUzbPAHiqbJgNOHNi3xcRuOqvAJ56ObSE51LJZL16643UAOMoAptPLgEmKJyTPAddBKNAp6K90PLZrYmzzHa4flSCQ1fenKK+I9x0gPEH508yj7dvbl9R5qidKgqb9qPJ5OgnbDXb+4C3ku7Szj39MG8S8JUV7JEvg2Np1/LfGPKAq41WUrwS6HgoAIESsRCAdYMpP0q5wlroNwo2PLXBKUjBEsg1UIgTLwg2M80DaywgpQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k2dOvs6g6kpvHhFE2Cq6miTN2SdtjeNE6N4Fh2/FJ04=;
 b=bY3fkeDUYNXJz5EB4PHT0YKB6FK7T6TjQkOi7NNPZkzdcQb2I/Kr+LYXzTMjgf3LlBCaDTYlZAEV83jPt6uJVLv9lTPabdUe4jK92QSMAAB8INQ3TSwSPc6h6B+A5OG4awUvVjncLT+S1NwRp4hd/SKQbe3GA0EFXomEk4gwMq6tEFhx3XxYSXKl5ApgCwPQ1nn5goZ0SsMq8Td/GdeuYxgt0HNz9glFxMhK52MyW3QBpC2XVuuFA7CidYyc9/5Mo1xMMyuw5mODJPmUGAGbHR942sk7UAF5rvQQCR0gWrDLsPi8wL5cDGUPCrl7Gh43/pzNZwWhDaIRTpCr+nOOdg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4d9c76d3-763c-c1e6-f38b-9282f023a995@suse.com>
Date: Wed, 15 Jun 2022 12:30:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 09/11] x86emul: handle AVX512-FP16 floating point conversion
 insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9P194CA0029.EURP194.PROD.OUTLOOK.COM
 (2603:10a6:20b:46d::35) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f7da50cf-ece9-4bad-2d56-08da4eba1f48
X-MS-TrafficTypeDiagnostic: DB8PR04MB6876:EE_
X-Microsoft-Antispam-PRVS:
	<DB8PR04MB6876C0A6FB79533D50AB4AB3B3AD9@DB8PR04MB6876.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7da50cf-ece9-4bad-2d56-08da4eba1f48
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 10:30:52.3616
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kg9J8dmX9prMY3qcHatrkZtAYJpAtmZNu/Ya0Vj8AG1zupNmBVq2ULGHJH1SNYsEKpxMB6leG0VlH9SbZ+raWA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6876

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,8 +612,16 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtpd2ph,      66, map5, 5a,    vl,    q, vl),
+    INSN(cvtph2pd,        , map5, 5a,  vl_4, fp16, vl),
+    INSN(cvtph2psx,     66, map6, 13,  vl_2, fp16, vl),
     INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
     INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
+    INSN(cvtps2phx,     66, map5, 1d,    vl,    d, vl),
+    INSN(cvtsd2sh,      f2, map5, 5a,    el,    q, el),
+    INSN(cvtsh2sd,      f3, map5, 5a,    el, fp16, el),
+    INSN(cvtsh2ss,        , map6, 13,    el, fp16, el),
+    INSN(cvtss2sh,        , map5, 1d,    el,    d, el),
     INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
     INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
     INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2031,6 +2031,8 @@ static const struct evex {
 }, evex_map5[] = {
     { { 0x10 }, 2, T, R, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
+    { { 0x1d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtps2phx */
+    { { 0x1d }, 2, T, R, pfx_no, W0, LIG }, /* vcvtss2sh */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2039,6 +2041,10 @@ static const struct evex {
     { { 0x58 }, 2, T, R, pfx_f3, W0, LIG }, /* vaddsh */
     { { 0x59 }, 2, T, R, pfx_no, W0, Ln }, /* vmulph */
     { { 0x59 }, 2, T, R, pfx_f3, W0, LIG }, /* vmulsh */
+    { { 0x5a }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2pd */
+    { { 0x5a }, 2, T, R, pfx_66, W1, Ln }, /* vcvtpd2ph */
+    { { 0x5a }, 2, T, R, pfx_f3, W0, LIG }, /* vcvtsh2sd */
+    { { 0x5a }, 2, T, R, pfx_f2, W1, LIG }, /* vcvtsd2sh */
     { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
     { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
     { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
@@ -2056,6 +2062,8 @@ static const struct evex {
     { { 0x7d }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtuw2ph */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
+    { { 0x13 }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2psx */
+    { { 0x13 }, 2, T, R, pfx_no, W0, LIG }, /* vcvtsh2ss */
     { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
     { { 0x2d }, 2, T, R, pfx_66, W0, LIG }, /* vscalefsh */
     { { 0x42 }, 2, T, R, pfx_66, W0, Ln }, /* vgetexpph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -224,7 +224,9 @@ static const struct twobyte_table {
     [0x14 ... 0x15] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
     [0x16] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
     [0x17] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
-    [0x18 ... 0x1f] = { ImplicitOps|ModRM },
+    [0x18 ... 0x1c] = { ImplicitOps|ModRM },
+    [0x1d] = { ImplicitOps|ModRM, simd_none, d8s_vl },
+    [0x1e ... 0x1f] = { ImplicitOps|ModRM },
     [0x20 ... 0x21] = { DstMem|SrcImplicit|ModRM },
     [0x22 ... 0x23] = { DstImplicit|SrcMem|ModRM },
     [0x28] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
@@ -1474,6 +1476,19 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 break;
 
+            case 0x1d: /* vcvtps2phx / vcvtss2sh */
+                if ( s->evex.pfx & VEX_PREFIX_SCALAR_MASK )
+                    break;
+                d = DstReg | SrcMem;
+                if ( s->evex.pfx & VEX_PREFIX_DOUBLE_MASK )
+                {
+                    s->simd_size = simd_packed_fp;
+                    d |= TwoOp;
+                }
+                else
+                    s->simd_size = simd_scalar_vexw;
+                break;
+
             case 0x2e: case 0x2f: /* v{,u}comish */
                 if ( !s->evex.pfx )
                     s->fp16 = true;
@@ -1497,6 +1512,15 @@ int x86emul_decode(struct x86_emulate_st
             }
 
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+
+            switch ( b )
+            {
+            case 0x5a: /* vcvtph2pd needs special casing */
+                if ( !s->evex.pfx && !s->evex.brs )
+                    disp8scale -= 2;
+                break;
+            }
+
             break;
 
         case ext_map6:
@@ -1513,6 +1537,17 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 break;
 
+            case 0x13: /* vcvtph2psx / vcvtsh2ss */
+                if ( s->evex.pfx & VEX_PREFIX_SCALAR_MASK )
+                    break;
+                s->fp16 = true;
+                if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                {
+                    s->simd_size = simd_scalar_vexw;
+                    d &= ~TwoOp;
+                }
+                break;
+
             case 0x56: case 0x57: /* vf{,c}maddc{p,s}h */
             case 0xd6: case 0xd7: /* vf{,c}mulc{p,s}h */
                 break;
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7780,14 +7780,25 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    CASE_SIMD_ALL_FP(_EVEX, 5, 0x5a):  /* vcvtp{h,d}2p{h,d} [xyz]mm/mem,[xyz]mm{k} */
+                                       /* vcvts{h,d}2s{h,d} xmm/mem,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        if ( vex.pfx & VEX_PREFIX_SCALAR_MASK )
+            d &= ~TwoOp;
+        op_bytes = 2 << (((evex.pfx & VEX_PREFIX_SCALAR_MASK) ? 0 : 1 + evex.lr) +
+                         2 * evex.w);
+        goto avx512f_all_fp;
+
     case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7d): /* vcvtph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(5, 0x7d): /* vcvtw2ph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(5, 0x7d): /* vcvtuw2ph [xyz]mm/mem,[xyz]mm{k} */
-        op_bytes = 16 << evex.lr;
+    case X86EMUL_OPC_EVEX_66(6, 0x13): /* vcvtph2psx [xy]mm/mem,[xyz]mm{k} */
+        op_bytes = 8 << ((ext == ext_map5) + evex.lr);
         /* fall through */
+    case X86EMUL_OPC_EVEX_66(5, 0x1d): /* vcvtps2phx [xyz]mm/mem,[xy]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -7814,6 +7825,8 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX(5, 0x1d):    /* vcvtss2sh xmm/mem,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX(6, 0x13):    /* vcvtsh2ss xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x2d): /* vscalefsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x43): /* vgetexpsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:31:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349952.576175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QJ7-0007yb-P6; Wed, 15 Jun 2022 10:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349952.576175; Wed, 15 Jun 2022 10:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QJ7-0007yS-Lf; Wed, 15 Jun 2022 10:31:29 +0000
Received: by outflank-mailman (input) for mailman id 349952;
 Wed, 15 Jun 2022 10:31:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QJ6-0007gz-G4
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:31:28 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4eb47bed-ec96-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:31:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6876.eurprd04.prod.outlook.com (2603:10a6:10:116::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Wed, 15 Jun
 2022 10:31:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4eb47bed-ec96-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EvwiM6N8HVmwTxiIQoHRpvSK8v76sNx61DemjfPJXfGBkHPlwOY7r+TQFnQMjrwGA8+H+l6O15fHtM/atjvoknZs6QFC1cj9sM2SWsN+LRDsrTR+xa8AYf1whTZ6PopL2+xSpYmlL2VJLrwQkTIUkLK7wD3shomNMQV/EYlaPrScn1iMt6dFOit6ILiL5LiugkgIyTbN7IqNxEO0TKiiiGc9dl8t+0iV3xSpmVsM+0Xv8YVgINl+qUEhlcLqOE9COLX+ttiD9JJlUCkT1CObChAL/wdcnYlAKsDOv8fmaX/KmncPPDHIoWHopGLjFwJIqRpUyomw70EdE6DlET28Rw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=72avtIJigMrJ/t9S9LNVdKFGNIXvlVkWO5c2zIOjy7c=;
 b=ZqTHbFdGBDATJ5lryi3tsw/74ur0CLsmgtOjdhth2+LIUNTjH3mj3VwoUBkKvKVSMxkpNecCMc6NdqqrxqTda49qGygkReGgVaDDfxUTSOtDUDIxmw2NA3w9NFYdRjo+xS4aKyOQk0bVyZzUL8R71s0Q7G1POINrQqwTs3jSLlrjXYI9fFflqdDsn/bwR4Ws+A8o6pOdvjS0Pz6qADhZ5PSs/KlGA/0OGvNePGY/0QVpYB/C4JE7mGSTUeGLZVFRYmst6jlT0DJOvS+TdLrFBgAmh7jo/5KdVA9Qv+0RrciJladYkHtV1JmPz2MRgiSMTowhjFFRW6sljyUODBZ59A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=72avtIJigMrJ/t9S9LNVdKFGNIXvlVkWO5c2zIOjy7c=;
 b=Ho20m5UqcMFYc8glAAesLdknesp4AZFzPE9puhuR8GWOicdhA9zSwk0ENheOhGnzQsAqpJnEff5viSMSuKCQq9MRsQd0v2GO2mi/rOhawwqR71kJI37o0cq0vvkLgfaFQVD/nwt2ocT6NzGmEoYos9lrV6y82jYSWd7GaynWOG2j9TIlSOzTsANboFFSOYa2XYMRf2szlWrArlFqhG4zUQZLA8xYhXbcAL2N/2ganBg3UDfw3fYN7CsIJSK284ocpypMCDJAN4rfipUjBE56ssviOBzMNj33V6eb0PIQvCs25S+U5ONAY1uyIXJ8oA/IqKdwCBV50/IRmEo5PJ0N9Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2f99e91d-6a91-f860-45d0-9c8b67c9b2b8@suse.com>
Date: Wed, 15 Jun 2022 12:31:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 10/11] x86emul: handle AVX512-FP16 conversion to/from (packed)
 int{32,64} insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0177.eurprd06.prod.outlook.com
 (2603:10a6:20b:45c::34) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b40be030-3b1c-45f7-64ee-08da4eba2a96
X-MS-TrafficTypeDiagnostic: DB8PR04MB6876:EE_
X-Microsoft-Antispam-PRVS:
	<DB8PR04MB6876ACC5098E76DCD0B4879FB3AD9@DB8PR04MB6876.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b40be030-3b1c-45f7-64ee-08da4eba2a96
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 10:31:11.4229
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AxjTwJ6iXyCayFDYrKJR/1HEXoLcFjwEzpiP6JQgZ1cc+U0FDlgWs2M6jc/orN6mnBP2aCEqAXcEvswPRO2fHg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6876

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,18 +612,36 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtdq2ph,        , map5, 5b,    vl,    d, vl),
     INSN(cvtpd2ph,      66, map5, 5a,    vl,    q, vl),
+    INSN(cvtph2dq,      66, map5, 5b,  vl_2, fp16, vl),
     INSN(cvtph2pd,        , map5, 5a,  vl_4, fp16, vl),
     INSN(cvtph2psx,     66, map6, 13,  vl_2, fp16, vl),
+    INSN(cvtph2qq,      66, map5, 7b,  vl_4, fp16, vl),
+    INSN(cvtph2udq,       , map5, 79,  vl_2, fp16, vl),
+    INSN(cvtph2uqq,     66, map5, 79,  vl_4, fp16, vl),
     INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
     INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
     INSN(cvtps2phx,     66, map5, 1d,    vl,    d, vl),
+    INSN(cvtqq2ph,        , map5, 5b,    vl,    q, vl),
     INSN(cvtsd2sh,      f2, map5, 5a,    el,    q, el),
     INSN(cvtsh2sd,      f3, map5, 5a,    el, fp16, el),
+    INSN(cvtsh2si,      f3, map5, 2d,    el, fp16, el),
     INSN(cvtsh2ss,        , map6, 13,    el, fp16, el),
+    INSN(cvtsh2usi,     f3, map5, 79,    el, fp16, el),
+    INSN(cvtsi2sh,      f3, map5, 2a,    el, dq64, el),
     INSN(cvtss2sh,        , map5, 1d,    el,    d, el),
+    INSN(cvttph2dq,     f3, map5, 5b,  vl_2, fp16, vl),
+    INSN(cvttph2qq,     66, map5, 7a,  vl_4, fp16, vl),
+    INSN(cvttph2udq,      , map5, 78,  vl_2, fp16, vl),
+    INSN(cvttph2uqq,    66, map5, 78,  vl_4, fp16, vl),
     INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
     INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
+    INSN(cvttsh2si,     f3, map5, 2c,    el, fp16, el),
+    INSN(cvttsh2usi,    f3, map5, 78,    el, fp16, el),
+    INSN(cvtudq2ph,     f2, map5, 7a,    vl,    d, vl),
+    INSN(cvtuqq2ph,     f2, map5, 7a,    vl,    q, vl),
+    INSN(cvtusi2sh,     f3, map5, 7b,    el, dq64, el),
     INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
     INSN(cvtw2ph,       f3, map5, 7d,    vl, fp16, vl),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2033,6 +2033,9 @@ static const struct evex {
     { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x1d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtps2phx */
     { { 0x1d }, 2, T, R, pfx_no, W0, LIG }, /* vcvtss2sh */
+    { { 0x2a }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsi2sh */
+    { { 0x2c }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvttsh2si */
+    { { 0x2d }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsh2si */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2045,6 +2048,10 @@ static const struct evex {
     { { 0x5a }, 2, T, R, pfx_66, W1, Ln }, /* vcvtpd2ph */
     { { 0x5a }, 2, T, R, pfx_f3, W0, LIG }, /* vcvtsh2sd */
     { { 0x5a }, 2, T, R, pfx_f2, W1, LIG }, /* vcvtsd2sh */
+    { { 0x5b }, 2, T, R, pfx_no, W0, Ln }, /* vcvtdq2ph */
+    { { 0x5b }, 2, T, R, pfx_no, W1, Ln }, /* vcvtqq2ph */
+    { { 0x5b }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2dq */
+    { { 0x5b }, 2, T, R, pfx_f3, W0, Ln }, /* vcvttph2dq */
     { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
     { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
     { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
@@ -2054,6 +2061,17 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x78 }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2udq */
+    { { 0x78 }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2uqq */
+    { { 0x78 }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvttsh2usi */
+    { { 0x79 }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2udq */
+    { { 0x79 }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2uqq */
+    { { 0x79 }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsh2usi */
+    { { 0x7a }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2qq */
+    { { 0x7a }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtudq2ph */
+    { { 0x7a }, 2, T, R, pfx_f2, W1, Ln }, /* vcvtuqq2ph */
+    { { 0x7b }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2qq */
+    { { 0x7b }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtusi2sh */
     { { 0x7c }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2uw */
     { { 0x7c }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2w */
     { { 0x7d }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2uw */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1489,12 +1489,25 @@ int x86emul_decode(struct x86_emulate_st
                     s->simd_size = simd_scalar_vexw;
                 break;
 
+            case 0x2a: /* vcvtsi2sh */
+                break;
+
+            case 0x2c: case 0x2d: /* vcvt{,t}sh2si */
+                if ( s->evex.pfx == vex_f3 )
+                    s->fp16 = true;
+                break;
+
             case 0x2e: case 0x2f: /* v{,u}comish */
                 if ( !s->evex.pfx )
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
 
+            case 0x5b: /* vcvt{d,q}q2ph, vcvt{,t}ph2dq */
+                if ( s->evex.pfx && s->evex.pfx != vex_f2 )
+                    s->fp16 = true;
+                break;
+
             case 0x6e: /* vmovw r/m16, xmm */
                 d = (d & ~SrcMask) | SrcMem16;
                 /* fall through */
@@ -1504,6 +1517,17 @@ int x86emul_decode(struct x86_emulate_st
                 s->simd_size = simd_none;
                 break;
 
+            case 0x78: case 0x79: /* vcvt{,t}ph2u{d,q}q, vcvt{,t}sh2usi */
+                if ( s->evex.pfx != vex_f2 )
+                    s->fp16 = true;
+                break;
+
+            case 0x7a: /* vcvttph2qq, vcvtu{d,q}q2ph */
+            case 0x7b: /* vcvtph2qq, vcvtusi2sh */
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                break;
+
             case 0x7c: /* vcvttph2{,u}w */
             case 0x7d: /* vcvtph2{,u}w / vcvt{,u}w2ph */
                 d = DstReg | SrcMem | TwoOp;
@@ -1515,10 +1539,34 @@ int x86emul_decode(struct x86_emulate_st
 
             switch ( b )
             {
+            case 0x78:
+            case 0x79:
+                /* vcvt{,t}ph2u{d,q}q need special casing */
+                if ( s->evex.pfx <= vex_66 )
+                {
+                    if ( !s->evex.brs )
+                        disp8scale -= 1 + (s->evex.pfx == vex_66);
+                    break;
+                }
+                /* vcvt{,t}sh2usi needs special casing: fall through */
+            case 0x2c: case 0x2d: /* vcvt{,t}sh2si need special casing */
+                disp8scale = 1;
+                break;
+
             case 0x5a: /* vcvtph2pd needs special casing */
                 if ( !s->evex.pfx && !s->evex.brs )
                     disp8scale -= 2;
                 break;
+
+            case 0x5b: /* vcvt{,t}ph2dq need special casing */
+                if ( s->evex.pfx && !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7a: case 0x7b: /* vcvt{,t}ph2qq need special casing */
+                if ( s->evex.pfx == vex_66 )
+                    disp8scale = s->evex.brs ? 1 : 2 + s->evex.lr;
+                break;
             }
 
             break;
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3581,6 +3581,12 @@ x86_emulate(
         state->simd_size = simd_none;
         goto simd_0f_rm;
 
+#ifndef X86EMUL_NO_SIMD
+
+    case X86EMUL_OPC_EVEX_F3(5, 0x2a):      /* vcvtsi2sh r/m,xmm,xmm */
+    case X86EMUL_OPC_EVEX_F3(5, 0x7b):      /* vcvtusi2sh r/m,xmm,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        /* fall through */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2a): /* vcvtsi2s{s,d} r/m,xmm,xmm */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x7b): /* vcvtusi2s{s,d} r/m,xmm,xmm */
         generate_exception_if(evex.opmsk || (ea.type != OP_REG && evex.brs),
@@ -3659,7 +3665,9 @@ x86_emulate(
             opc[1] = 0x01;
 
             rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp,
-                           vex.pfx & VEX_PREFIX_DOUBLE_MASK ? 8 : 4, ctxt);
+                           vex.pfx & VEX_PREFIX_DOUBLE_MASK
+                           ? 8 : 2 << !state->fp16,
+                           ctxt);
             if ( rc != X86EMUL_OKAY )
                 goto done;
         }
@@ -3689,6 +3697,12 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x2c):      /* vcvttsh2si xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x2d):      /* vcvtsh2si xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x78):      /* vcvttsh2usi xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x79):      /* vcvtsh2usi xmm/mem,reg */
+        host_and_vcpu_must_have(avx512_fp16);
+        /* fall through */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2c): /* vcvtts{s,d}2si xmm/mem,reg */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2d): /* vcvts{s,d}2si xmm/mem,reg */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x78): /* vcvtts{s,d}2usi xmm/mem,reg */
@@ -3760,8 +3774,6 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
-#ifndef X86EMUL_NO_SIMD
-
     case X86EMUL_OPC_EVEX(5, 0x2e): /* vucomish xmm/m16,xmm */
     case X86EMUL_OPC_EVEX(5, 0x2f): /* vcomish xmm/m16,xmm */
         host_and_vcpu_must_have(avx512_fp16);
@@ -7789,6 +7801,38 @@ x86_emulate(
                          2 * evex.w);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX   (5, 0x5b): /* vcvtdq2ph [xyz]mm/mem,[xy]mm{k} */
+                                       /* vcvtqq2ph [xyz]mm/mem,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(5, 0x7a): /* vcvtudq2ph [xyz]mm/mem,[xy]mm{k} */
+                                       /* vcvtuqq2ph [xyz]mm/mem,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 16 << evex.lr;
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(5, 0x5b): /* vcvtph2dq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x5b): /* vcvttph2dq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x78): /* vcvttph2udq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x79): /* vcvtph2udq [xy]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 8 << evex.lr;
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(5, 0x78): /* vcvttph2uqq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x79): /* vcvtph2uqq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7a): /* vcvttph2qq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7b): /* vcvtph2qq xmm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 4 << (evex.w + evex.lr);
+        goto simd_zmm;
+
     case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:31:35 2022
Date: Wed, 15 Jun 2022 11:31:06 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	"Julien Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH v2 1/4] build,include: rework shell script for
 headers++.chk
Message-ID: <Yqm06hdvXE2caKpq@perard.uk.xensource.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220614162248.40278-2-anthony.perard@citrix.com>
 <09b49900-9215-f2a2-d521-a79cf5ce5f0f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <09b49900-9215-f2a2-d521-a79cf5ce5f0f@suse.com>

On Wed, Jun 15, 2022 at 08:37:42AM +0200, Jan Beulich wrote:
> On 14.06.2022 18:22, Anthony PERARD wrote:
> > diff --git a/xen/include/Makefile b/xen/include/Makefile
> > index 6d9bcc19b0..49c75a78f9 100644
> > --- a/xen/include/Makefile
> > +++ b/xen/include/Makefile
> > @@ -158,13 +158,22 @@ define cmd_headerscxx_chk
> >  	    touch $@.new;                                                     \
> >  	    exit 0;                                                           \
> >  	fi;                                                                   \
> > -	$(foreach i, $(filter %.h,$^),                                        \
> > -	    echo "#include "\"$(i)\"                                          \
> > +	get_prereq() {                                                        \
> > +	    case $$1 in                                                       \
> > +	    $(foreach i, $(filter %.h,$^),                                    \
> > +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
> > +		$(i)$(close)                                                  \
> > +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
> > +			-include c$(j))";;))                                  \
> > +	    esac;                                                             \
> 
> If I'm reading this right (indentation looks to be a little misleading
> and hence one needs to count parentheses) the case statement could (in
> theory) remain without any "body". As per the command language spec I'm
> looking at this (if it works in the first place) is an extension, and
> formally there's always at least one label required. Since we aim to be
> portable in such regards, I'd like to request that there be a final
> otherwise empty *) line.

When looking at the shell grammar at [1], an empty body seems to be
allowed. But I can add "*)" at the end for peace of mind.

[1] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_10_02
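As a quick illustration of the construct under review (the labels below are hypothetical stand-ins for the Makefile-generated ones, not the real prerequisite list), a POSIX case with an explicit empty default looks like:

```shell
#!/bin/sh
# A shell function dispatching on a header name via case, with an
# explicit "*)" default label for portability, as requested above.
get_prereq() {
    case $1 in
    public/xen.h)
        echo "-include cpublic/arch-x86/xen.h";;
    *) ;;
    esac
}
get_prereq public/xen.h            # prints the extra -include flags
get_prereq public/grant_table.h    # hits the empty default, prints nothing
```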

As for misleading indentation, I've got my $EDITOR to show me matching
parentheses, so I don't need to count them. But if I rewrite that
function as follows, would it be easier to follow?

+	get_prereq() {                                                        \
+	case $$1 in                                                           \
+	$(foreach i, $(filter %.h,$^),                                        \
+	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
+		$(i)$(close)                                                  \
+		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
+			-include c$(j))";;))                                  \
+	*) ;;                                                                 \
+	esac;                                                                 \
+	};                                                                    \


> > +	};                                                                    \
> > +	for i in $(filter %.h,$^); do                                         \
> > +	    echo "#include "\"$$i\"                                           \
> >  	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
> >  	      -include stdint.h -include $(srcdir)/public/xen.h               \
> > -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> > +	      `get_prereq $$i`                                                \
> 
> While I know we use back-ticked quoting elsewhere, I'd generally
> recommend to use $() for readability. But maybe others view this
> exactly the other way around ...

Well, in a Makefile it's `` vs $$(). At least, we don't have to write
$$$(open)$(close) here :-).

I guess $$(get_prereq $$i) isn't too bad here, I'll replace the
backquote.
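For what it's worth, the readability argument in plain shell (before make's $-doubling) is just that $() nests without escaping, as a minimal sketch:

```shell
#!/bin/sh
# Backquotes vs $(): both substitute command output, but $() nests
# without backslash escaping.  In a Makefile recipe both forms need the
# dollar doubled, i.e. `...` stays as-is while $(...) becomes $$(...).
ticks=`echo nested`
dollars=$(echo $(echo nested))   # nested substitution, no escaping needed
[ "$ticks" = "$dollars" ] && echo "same result"
```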

> And a question without good context to put at: Isn't headers99.chk in
> similar need of converting? It looks only slightly less involved than
> the C++ one.

It doesn't really need converting at the moment, because there are only
two headers to check, so the resulting command line is small. But
converting it at the same time is probably a good idea to avoid having
two different implementations of the header check.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:32:20 2022
Message-ID: <a64d46c5-53ca-299e-a7f7-7f66f6ae871f@suse.com>
Date: Wed, 15 Jun 2022 12:32:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 11/11] x86emul: AVX512-FP16 testing
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Naming of some of the builtins isn't fully consistent with that of
pre-existing ones, so there's a need for a new BR2() wrapper macro.

With the tests providing some proof of proper functioning of the
emulator code, also enable use of the feature by guests, as there's no
other infrastructure involved in enabling this ISA extension.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
SDE: -spr or -future
---
In the course of putting together the FMA part of the test I've noticed
that we no longer test scalar FMA insns (FMA, FMA4, AVX512F), due to gcc
no longer recognizing the pattern in version 9 or later. See gcc bug
105965, which apparently has already gained a fix for version 13. (Using
intrinsics for scalar operations is prohibitive, as they have full-
vector parameters.) I'm taking this as one of several reasons why here
I'm not even trying to make the compiler spot the complex FMA patterns,
using a mixture of intrinsics and inline assembly instead.

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -16,7 +16,7 @@ vpath %.c $(XEN_ROOT)/xen/lib/x86
 
 CFLAGS += $(CFLAGS_xeninclude)
 
-SIMD := 3dnow sse sse2 sse4 avx avx2 xop avx512f avx512bw avx512dq avx512er avx512vbmi
+SIMD := 3dnow sse sse2 sse4 avx avx2 xop avx512f avx512bw avx512dq avx512er avx512vbmi avx512fp16
 FMA := fma4 fma
 SG := avx2-sg avx512f-sg avx512vl-sg
 AES := ssse3-aes avx-aes avx2-vaes avx512bw-vaes
@@ -91,6 +91,9 @@ avx512vbmi-vecs := $(avx512bw-vecs)
 avx512vbmi-ints := $(avx512bw-ints)
 avx512vbmi-flts := $(avx512bw-flts)
 avx512vbmi2-vecs := $(avx512bw-vecs)
+avx512fp16-vecs := $(avx512bw-vecs)
+avx512fp16-ints :=
+avx512fp16-flts := 2
 
 avx512f-opmask-vecs := 2
 avx512dq-opmask-vecs := 1 2
@@ -246,7 +249,7 @@ $(addsuffix .c,$(GF)):
 
 $(addsuffix .h,$(SIMD) $(FMA) $(SG) $(AES) $(CLMUL) $(SHA) $(GF)): simd.h
 
-xop.h avx512f.h: simd-fma.c
+xop.h avx512f.h avx512fp16.h: simd-fma.c
 
 endif # 32-bit override
 
--- a/tools/tests/x86_emulator/simd.c
+++ b/tools/tests/x86_emulator/simd.c
@@ -20,6 +20,14 @@ ENTRY(simd_test);
     asm ( "vcmpsd $0, %1, %2, %0"  : "=k" (r_) : "m" (x_), "v" (y_) ); \
     r_ == 1; \
 })
+# elif VEC_SIZE == 2
+#  define eq(x, y) ({ \
+    _Float16 x_ = (x)[0]; \
+    _Float16 __attribute__((vector_size(16))) y_ = { (y)[0] }; \
+    unsigned int r_; \
+    asm ( "vcmpsh $0, %1, %2, %0"  : "=k" (r_) : "m" (x_), "v" (y_) ); \
+    r_ == 1; \
+})
 # elif FLOAT_SIZE == 4
 /*
  * gcc's (up to at least 8.2) __builtin_ia32_cmpps256_mask() has an anomaly in
@@ -31,6 +39,8 @@ ENTRY(simd_test);
 #  define eq(x, y) ((BR(cmpps, _mask, x, y, 0, -1) & ALL_TRUE) == ALL_TRUE)
 # elif FLOAT_SIZE == 8
 #  define eq(x, y) (BR(cmppd, _mask, x, y, 0, -1) == ALL_TRUE)
+# elif FLOAT_SIZE == 2
+#  define eq(x, y) (B(cmpph, _mask, x, y, 0, -1) == ALL_TRUE)
 # elif (INT_SIZE == 1 || UINT_SIZE == 1) && defined(__AVX512BW__)
 #  define eq(x, y) (B(pcmpeqb, _mask, (vqi_t)(x), (vqi_t)(y), -1) == ALL_TRUE)
 # elif (INT_SIZE == 2 || UINT_SIZE == 2) && defined(__AVX512BW__)
@@ -116,6 +126,14 @@ static inline bool _to_bool(byte_vec_t b
     asm ( "vcvtusi2sd%z1 %1, %0, %0" : "=v" (t_) : "m" (u_) ); \
     (vec_t){ t_[0] }; \
 })
+#  elif FLOAT_SIZE == 2
+#   define to_u_int(type, x) ({ \
+    unsigned type u_; \
+    _Float16 __attribute__((vector_size(16))) t_; \
+    asm ( "vcvtsh2usi %1, %0" : "=r" (u_) : "m" ((x)[0]) ); \
+    asm ( "vcvtusi2sh%z1 %1, %0, %0" : "=v" (t_) : "m" (u_) ); \
+    (vec_t){ t_[0] }; \
+})
 #  endif
 #  define to_uint(x) to_u_int(int, x)
 #  ifdef __x86_64__
@@ -153,6 +171,43 @@ static inline bool _to_bool(byte_vec_t b
 #   define to_wint(x) BR(cvtqq2pd, _mask, BR(cvtpd2qq, _mask, x, (vdi_t)undef(), ~0), undef(), ~0)
 #   define to_uwint(x) BR(cvtuqq2pd, _mask, BR(cvtpd2uqq, _mask, x, (vdi_t)undef(), ~0), undef(), ~0)
 #  endif
+# elif FLOAT_SIZE == 2
+#  define to_int(x) BR2(vcvtw2ph, _mask, BR2(vcvtph2w, _mask, x, (vhi_t)undef(), ~0), undef(), ~0)
+#  define to_uint(x) BR2(vcvtuw2ph, _mask, BR2(vcvtph2uw, _mask, x, (vhi_t)undef(), ~0), undef(), ~0)
+#  if VEC_SIZE == 16
+#   define low_half(x) (x)
+#   define high_half(x) ((vec_t)B_(movhlps, , (vsf_t)undef(), (vsf_t)(x)))
+#   define insert_half(x, y, p) ((vec_t)((p) ? B_(movlhps, , (vsf_t)(x), (vsf_t)(y)) \
+                                             : B_(shufps, , (vsf_t)(y), (vsf_t)(x), 0b11100100)))
+#  elif VEC_SIZE == 32
+#   define _half(x, lh) ((vhf_half_t)B(extracti32x4_, _mask, (vsi_t)(x), lh, (vsi_half_t){}, ~0))
+#   define low_half(x)  _half(x, 0)
+#   define high_half(x) _half(x, 1)
+#   define insert_half(x, y, p) \
+    ((vec_t)B(inserti32x4_, _mask, (vsi_t)(x), (vsi_half_t)(y), p, (vsi_t)undef(), ~0))
+#  elif VEC_SIZE == 64
+#   define _half(x, lh) \
+    ((vhf_half_t)__builtin_ia32_extracti64x4_mask((vdi_t)(x), lh, (vdi_half_t){}, ~0))
+#   define low_half(x)  _half(x, 0)
+#   define high_half(x) _half(x, 1)
+#   define insert_half(x, y, p) \
+    ((vec_t)__builtin_ia32_inserti64x4_mask((vdi_t)(x), (vdi_half_t)(y), p, (vdi_t)undef(), ~0))
+#  endif
+#  define to_w_int(x, s) ({ \
+    vhf_half_t t_ = low_half(x); \
+    vsi_t lo_, hi_; \
+    touch(t_); \
+    lo_ = BR2(vcvtph2 ## s ## dq, _mask, t_, (vsi_t)undef(), ~0); \
+    t_ = high_half(x); \
+    touch(t_); \
+    hi_ = BR2(vcvtph2 ## s ## dq, _mask, t_, (vsi_t)undef(), ~0); \
+    touch(lo_); touch(hi_); \
+    insert_half(insert_half(undef(), \
+                            BR2(vcvt ## s ## dq2ph, _mask, lo_, (vhf_half_t){}, ~0), 0), \
+                BR2(vcvt ## s ## dq2ph, _mask, hi_, (vhf_half_t){}, ~0), 1); \
+})
+#  define to_wint(x) to_w_int(x, )
+#  define to_uwint(x) to_w_int(x, u)
 # endif
 #elif VEC_SIZE == 16 && defined(__SSE2__)
 # if FLOAT_SIZE == 4
@@ -240,10 +295,18 @@ static inline vec_t movlhps(vec_t x, vec
 #  define scale(x, y) scalar_2op(x, y, "vscalefsd %[in2], %[in1], %[out]")
 #  define sqrt(x) scalar_1op(x, "vsqrtsd %[in], %[out], %[out]")
 #  define trunc(x) scalar_1op(x, "vrndscalesd $0b1011, %[in], %[out], %[out]")
+# elif FLOAT_SIZE == 2
+#  define getexp(x) scalar_1op(x, "vgetexpsh %[in], %[out], %[out]")
+#  define getmant(x) scalar_1op(x, "vgetmantsh $0, %[in], %[out], %[out]")
+#  define recip(x) scalar_1op(x, "vrcpsh %[in], %[out], %[out]")
+#  define rsqrt(x) scalar_1op(x, "vrsqrtsh %[in], %[out], %[out]")
+#  define scale(x, y) scalar_2op(x, y, "vscalefsh %[in2], %[in1], %[out]")
+#  define sqrt(x) scalar_1op(x, "vsqrtsh %[in], %[out], %[out]")
+#  define trunc(x) scalar_1op(x, "vrndscalesh $0b1011, %[in], %[out], %[out]")
 # endif
 #elif defined(FLOAT_SIZE) && defined(__AVX512F__) && \
       (VEC_SIZE == 64 || defined(__AVX512VL__))
-# if ELEM_COUNT == 8 /* vextractf{32,64}x4 */ || \
+# if (ELEM_COUNT == 8 && ELEM_SIZE >= 4) /* vextractf{32,64}x4 */ || \
      (ELEM_COUNT == 16 && ELEM_SIZE == 4 && defined(__AVX512DQ__)) /* vextractf32x8 */ || \
      (ELEM_COUNT == 4 && ELEM_SIZE == 8 && defined(__AVX512DQ__)) /* vextractf64x2 */
 #  define _half(x, lh) ({ \
@@ -398,6 +461,21 @@ static inline vec_t movlhps(vec_t x, vec
                          VEC_SIZE == 32 ? 0b01 : 0b00011011, undef(), ~0), \
                        0b01010101, undef(), ~0)
 #  endif
+# elif FLOAT_SIZE == 2
+#  define frac(x) BR2(reduceph, _mask, x, 0b00001011, undef(), ~0)
+#  define getexp(x) BR(getexpph, _mask, x, undef(), ~0)
+#  define getmant(x) BR(getmantph, _mask, x, 0, undef(), ~0)
+#  define max(x, y) BR2(maxph, _mask, x, y, undef(), ~0)
+#  define min(x, y) BR2(minph, _mask, x, y, undef(), ~0)
+#  define scale(x, y) BR2(scalefph, _mask, x, y, undef(), ~0)
+#  define recip(x) B(rcpph, _mask, x, undef(), ~0)
+#  define rsqrt(x) B(rsqrtph, _mask, x, undef(), ~0)
+#  define shrink1(x) BR2(vcvtps2phx, _mask, (vsf_t)(x), (vhf_half_t){}, ~0)
+#  define shrink2(x) BR2(vcvtpd2ph, _mask, (vdf_t)(x), (vhf_quarter_t){}, ~0)
+#  define sqrt(x) BR2(sqrtph, _mask, x, undef(), ~0)
+#  define trunc(x) BR2(rndscaleph, _mask, x, 0b1011, undef(), ~0)
+#  define widen1(x) ((vec_t)BR2(vcvtph2psx, _mask, x, (vsf_t)undef(), ~0))
+#  define widen2(x) ((vec_t)BR2(vcvtph2pd, _mask, x, (vdf_t)undef(), ~0))
 # endif
 #elif FLOAT_SIZE == 4 && defined(__SSE__)
 # if VEC_SIZE == 32 && defined(__AVX__)
@@ -920,6 +998,16 @@ static inline vec_t movlhps(vec_t x, vec
 #  define dup_lo(x) B(movddup, _mask, x, undef(), ~0)
 # endif
 #endif
+#if FLOAT_SIZE == 2 && ELEM_COUNT > 1
+# define dup_hi(x) ((vec_t)B(pshufhw, _mask, \
+                             B(pshuflw, _mask, (vhi_t)(x), 0b11110101, \
+                               (vhi_t)undef(), ~0), \
+                             0b11110101, (vhi_t)undef(), ~0))
+# define dup_lo(x) ((vec_t)B(pshufhw, _mask, \
+                             B(pshuflw, _mask, (vhi_t)(x), 0b10100000, \
+                               (vhi_t)undef(), ~0), \
+                             0b10100000, (vhi_t)undef(), ~0))
+#endif
 #if VEC_SIZE == 16 && defined(__SSSE3__) && !defined(__AVX512VL__)
 # if INT_SIZE == 1
 #  define abs(x) ((vec_t)__builtin_ia32_pabsb128((vqi_t)(x)))
--- a/tools/tests/x86_emulator/simd.h
+++ b/tools/tests/x86_emulator/simd.h
@@ -53,6 +53,9 @@ float
 # elif FLOAT_SIZE == 8
 #  define MODE DF
 #  define ELEM_SFX "d"
+# elif FLOAT_SIZE == 2
+#  define MODE HF
+#  define ELEM_SFX "h"
 # endif
 #endif
 #ifndef VEC_SIZE
@@ -67,7 +70,10 @@ typedef unsigned int __attribute__((mode
 /* Various builtins want plain char / int / long long vector types ... */
 typedef char __attribute__((vector_size(VEC_SIZE))) vqi_t;
 typedef short __attribute__((vector_size(VEC_SIZE))) vhi_t;
+#if VEC_SIZE >= 4
 typedef int __attribute__((vector_size(VEC_SIZE))) vsi_t;
+typedef float __attribute__((vector_size(VEC_SIZE))) vsf_t;
+#endif
 #if VEC_SIZE >= 8
 typedef long long __attribute__((vector_size(VEC_SIZE))) vdi_t;
 typedef double __attribute__((vector_size(VEC_SIZE))) vdf_t;
@@ -96,6 +102,9 @@ typedef char __attribute__((vector_size(
 typedef short __attribute__((vector_size(HALF_SIZE))) vhi_half_t;
 typedef int __attribute__((vector_size(HALF_SIZE))) vsi_half_t;
 typedef long long __attribute__((vector_size(HALF_SIZE))) vdi_half_t;
+#ifdef __AVX512FP16__
+typedef _Float16 __attribute__((vector_size(HALF_SIZE))) vhf_half_t;
+#endif
 typedef float __attribute__((vector_size(HALF_SIZE))) vsf_half_t;
 # endif
 
@@ -110,6 +119,9 @@ typedef char __attribute__((vector_size(
 typedef short __attribute__((vector_size(QUARTER_SIZE))) vhi_quarter_t;
 typedef int __attribute__((vector_size(QUARTER_SIZE))) vsi_quarter_t;
 typedef long long __attribute__((vector_size(QUARTER_SIZE))) vdi_quarter_t;
+#ifdef __AVX512FP16__
+typedef _Float16 __attribute__((vector_size(QUARTER_SIZE))) vhf_quarter_t;
+#endif
 # endif
 
 # if ELEM_COUNT >= 8
@@ -163,6 +175,7 @@ DECL_OCTET(half);
 #elif VEC_SIZE == 64
 # define B(n, s, a...)   __builtin_ia32_ ## n ## 512 ## s(a)
 # define BR(n, s, a...)  __builtin_ia32_ ## n ## 512 ## s(a, 4)
+# define BR2(n, s, a...) __builtin_ia32_ ## n ## 512 ## s ## _round(a, 4)
 #endif
 #ifndef B_
 # define B_ B
@@ -171,6 +184,9 @@ DECL_OCTET(half);
 # define BR B
 # define BR_ B_
 #endif
+#ifndef BR2
+# define BR2 BR
+#endif
 #ifndef BR_
 # define BR_ BR
 #endif
--- a/tools/tests/x86_emulator/simd-fma.c
+++ b/tools/tests/x86_emulator/simd-fma.c
@@ -28,6 +28,8 @@ ENTRY(fma_test);
 #  define fmaddsub(x, y, z) BR(vfmaddsubps, _mask, x, y, z, ~0)
 # elif FLOAT_SIZE == 8
 #  define fmaddsub(x, y, z) BR(vfmaddsubpd, _mask, x, y, z, ~0)
+# elif FLOAT_SIZE == 2
+#  define fmaddsub(x, y, z) BR(vfmaddsubph, _mask, x, y, z, ~0)
 # endif
 #elif VEC_SIZE == 16
 # if FLOAT_SIZE == 4
@@ -70,6 +72,75 @@ ENTRY(fma_test);
 # endif
 #endif
 
+#ifdef __AVX512FP16__
+# define I (1.if16)
+# if VEC_SIZE > FLOAT_SIZE
+#  define CELEM_COUNT (ELEM_COUNT / 2)
+static const unsigned int conj_mask = 0x80000000;
+#  define conj(z) ({ \
+    vec_t r_; \
+    asm ( "vpxord %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (z), "m" (conj_mask), "i" (CELEM_COUNT) ); \
+    r_; \
+})
+#  define _cmul_vv(a, b, c)  BR2(vf##c##mulcph, , a, b)
+#  define _cmul_vs(a, b, c) ({ \
+    vec_t r_; \
+    _Complex _Float16 b_ = (b); \
+    asm ( "vf"#c"mulcph %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (a), "m" (b_), "i" (CELEM_COUNT) ); \
+    r_; \
+})
+#  define cmadd_vv(a, b, c) BR2(vfmaddcph, , a, b, c)
+#  define cmadd_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    vec_t r_; \
+    asm ( "vfmaddcph %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (a), "m" (b_), "i" (CELEM_COUNT), "0" (c) ); \
+    r_; \
+})
+# else
+#  define CELEM_COUNT 1
+typedef _Float16 __attribute__((vector_size(4))) cvec_t;
+#  define conj(z) ({ \
+    cvec_t r_; \
+    asm ( "xor $0x80000000, %0" : "=rm" (r_) : "0" (z) ); \
+    r_; \
+})
+#  define _cmul_vv(a, b, c) ({ \
+    cvec_t r_; \
+    /* "=&x" to force destination to be different from both sources */ \
+    asm ( "vf"#c"mulcsh %2, %1, %0" : "=&x" (r_) : "x" (a), "m" (b) ); \
+    r_; \
+})
+#  define _cmul_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    cvec_t r_; \
+    /* "=&x" to force destination to be different from both sources */ \
+    asm ( "vf"#c"mulcsh %2, %1, %0" : "=&x" (r_) : "x" (a), "m" (b_) ); \
+    r_; \
+})
+#  define cmadd_vv(a, b, c) ({ \
+    cvec_t r_ = (c); \
+    asm ( "vfmaddcsh %2, %1, %0" : "+x" (r_) : "x" (a), "m" (b) ); \
+    r_; \
+})
+#  define cmadd_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    cvec_t r_ = (c); \
+    asm ( "vfmaddcsh %2, %1, %0" : "+x" (r_) : "x" (a), "m" (b_) ); \
+    r_; \
+})
+# endif
+# define cmul_vv(a, b) _cmul_vv(a, b, )
+# define cmulc_vv(a, b) _cmul_vv(a, b, c)
+# define cmul_vs(a, b) _cmul_vs(a, b, )
+# define cmulc_vs(a, b) _cmul_vs(a, b, c)
+#endif
+
 int fma_test(void)
 {
     unsigned int i;
@@ -156,5 +227,99 @@ int fma_test(void)
     touch(inv);
 #endif
 
+#ifdef CELEM_COUNT
+
+# if VEC_SIZE > FLOAT_SIZE
+#  define cvec_t vec_t
+#  define ceq eq
+# else
+  {
+    /* Cannot re-use the function-scope variables (for being too small). */
+    cvec_t x, y, z, src = { 1, 2 }, inv = { 2, 1 }, one = { 1, 1 };
+#  define ceq(x, y) ({ \
+    unsigned int r_; \
+    asm ( "vcmpph $0, %1, %2, %0"  : "=k" (r_) : "x" (x), "x" (y) ); \
+    (r_ & 3) == 3; \
+})
+# endif
+
+    /* (a * i)² == -a² */
+    x = cmul_vs(src, I);
+    y = cmul_vv(x, x);
+    x = -src;
+    touch(src);
+    z = cmul_vv(x, src);
+    if ( !ceq(y, z) ) return __LINE__;
+
+    /* conj(a * b) == conj(a) * conj(b) */
+    touch(src);
+    x = conj(src);
+    touch(inv);
+    y = cmulc_vv(x, inv);
+    touch(src);
+    touch(inv);
+    z = conj(cmul_vv(src, inv));
+    if ( !ceq(y, z) ) return __LINE__;
+
+    /* a * conj(a) == |a|² */
+    touch(src);
+    y = src;
+    touch(src);
+    x = cmulc_vv(y, src);
+    y *= y;
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        if ( x[i] != y[i] + y[i + 1] ) return __LINE__;
+        if ( x[i + 1] ) return __LINE__;
+    }
+
+    /* a * b == b * a + 0 */
+    touch(src);
+    touch(inv);
+    x = cmul_vv(src, inv);
+    touch(src);
+    touch(inv);
+    y = cmadd_vv(inv, src, (cvec_t){});
+    if ( !ceq(x, y) ) return __LINE__;
+
+    /* a * 1 + b == b * 1 + a */
+    touch(src);
+    touch(inv);
+    x = cmadd_vs(src, 1, inv);
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        z[i] = 1;
+        z[i + 1] = 0;
+    }
+    touch(z);
+    y = cmadd_vv(inv, z, src);
+    if ( !ceq(x, y) ) return __LINE__;
+
+    /* (a + b) * c == a * c + b * c */
+    touch(one);
+    touch(inv);
+    x = cmul_vv(src + one, inv);
+    touch(inv);
+    y = cmul_vv(one, inv);
+    touch(inv);
+    z = cmadd_vv(src, inv, y);
+    if ( !ceq(x, z) ) return __LINE__;
+
+    /* a * i + conj(a) == (Re(a) - Im(a)) * (1 + i) */
+    x = cmadd_vs(src, I, conj(src));
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        typeof(x[0]) val = src[i] - src[i + 1];
+
+        if ( x[i] != val ) return __LINE__;
+        if ( x[i + 1] != val ) return __LINE__;
+    }
+
+# if VEC_SIZE == FLOAT_SIZE
+  }
+# endif
+
+#endif /* CELEM_COUNT */
+
     return 0;
 }
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -43,6 +43,7 @@ asm ( ".pushsection .test, \"ax\", @prog
 #include "avx512er.h"
 #include "avx512vbmi.h"
 #include "avx512vbmi2-vpclmulqdq.h"
+#include "avx512fp16.h"
 
 #define verbose false /* Switch to true for far more logging. */
 
@@ -249,6 +250,16 @@ static bool simd_check_avx512bw_gf_vl(vo
     return cpu_has_gfni && cpu_has_avx512vl;
 }
 
+static bool simd_check_avx512fp16(void)
+{
+    return cpu_has_avx512_fp16;
+}
+
+static bool simd_check_avx512fp16_vl(void)
+{
+    return cpu_has_avx512_fp16 && cpu_has_avx512vl;
+}
+
 static void simd_set_regs(struct cpu_user_regs *regs)
 {
     if ( cpu_has_mmx )
@@ -510,6 +521,10 @@ static const struct {
     AVX512VL(_VBMI+VL u16x8, avx512vbmi,    16u2),
     AVX512VL(_VBMI+VL s16x16, avx512vbmi,   32i2),
     AVX512VL(_VBMI+VL u16x16, avx512vbmi,   32u2),
+    SIMD(AVX512_FP16 f16 scal,avx512fp16,     f2),
+    SIMD(AVX512_FP16 f16x32, avx512fp16,    64f2),
+    AVX512VL(_FP16+VL f16x8, avx512fp16,    16f2),
+    AVX512VL(_FP16+VL f16x16,avx512fp16,    32f2),
     SIMD(SHA,                sse4_sha,        16),
     SIMD(AVX+SHA,             avx_sha,        16),
     AVX512VL(VL+SHA,      avx512f_sha,        16),
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -281,7 +281,7 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13)
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
-XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*   AVX512 FP16 instructions */
+XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*A  AVX512 FP16 instructions */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:37:14 2022
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
 switch
Date: Wed, 15 Jun 2022 10:36:51 +0000
Message-ID:
 <PAXPR08MB7420FDB50DA7265956A3B0BC9EAD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220615013909.283887-1-wei.chen@arm.com>
 <c48bb719-8cc6-ea8d-291d-4e09d42f93c2@xen.org>
In-Reply-To: <c48bb719-8cc6-ea8d-291d-4e09d42f93c2@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 15 June 2022 17:47
> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
> switch
>
> Hi Wei,
>
> Title: I don't quite understand what you mean by "flip-flop transition".
>

Sorry for the inaccurate wording. I meant the time reaching the maximum
uint64_t value and continuing from 0. "Overflow" would probably be a better
term for this.

> On 15/06/2022 02:39, Wei Chen wrote:
> > virt_vtimer_save is calculating the new time for the vtimer and
> > has a potential risk of timer flip in:
> > "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> > - boot_count".
> > In this formula, "cval + offset" could make uint64_t overflow.
> > Generally speaking, this is difficult to trigger.
> > But unfortunately
> > the problem was encountered with a platform where the timer started
> > with a very huge initial value, like 0xF333899122223333. On this
> > platform cval + offset is overflowing after running for a while.
>
> I am not sure how this is a problem. Yes, uint64_t will overflow with
> "cval + offset", but AFAIK the overall result will still be correct and
> not undefined.
>

Yes, just as you said: even though the addition overflows, the result is
still correct. I had noticed the overflow, but reasoned about it the wrong
way.

We have encountered this overflow in one RTOS guest:
PCNT:      0xf33ad45367e4a4ff
EL2_OFF:   0xf333333359e29a7f
BOOT_TICK: 0xf333333359e00000
VCNT:      0x0007a1200e020a80
If there is no timer in its list, the RTOS will calculate a huge number for
"infinite wait", for example:
VCAL:      0x0ff7a1200e020a80
EL2_OFF + VCAL - BOOT_TICK = 0x032ad45367e4a4ff - 0xf333333359e00000 = 0xFF7A1200E04A4FF
EL2_OFF - BOOT_TICK + VCAL = 0x29a7f + 0x0ff7a1200e020a80 = 0xFF7A1200E04A4FF


> >
> > So in this patch, we adjust the formula to use "offset - boot_count"
> > first, and then use the result to plus cval. This will avoid the
> > uint64_t overflow.
>
> Technically, the overflow is still present because the (offset -
> boot_count) is a non-zero value *and* cval is a 64-bit value.
>

Yes, a guest OS can issue any valid 64-bit value for its own usage.

> So I think the equation below should be reworked to...
>
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> >   xen/arch/arm/vtimer.c | 5 +++--
> >   1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> > index 5bb5970f58..86e63303c8 100644
> > --- a/xen/arch/arm/vtimer.c
> > +++ b/xen/arch/arm/vtimer.c
> > @@ -144,8 +144,9 @@ void virt_timer_save(struct vcpu *v)
> >       if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
> >            !(v->arch.virt_timer.ctl & CNTx_CTL_MASK) )
> >       {
> > -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
> > -                  v->domain->arch.virt_timer_base.offset - boot_count));
> > +        set_timer(&v->arch.virt_timer.timer,
> > +                  ticks_to_ns(v->domain->arch.virt_timer_base.offset -
> > +                              boot_count + v->arch.virt_timer.cval));
>
> ... something like:
>
> ticks_to_ns(offset - boot_count) + ticks_to_ns(cval);
>
> The first part of the equation should always be the same. So it could be
> stored in struct domain.
>

If you think there is still some value in continuing this patch, I will
address this comment : )


Thanks,
Wei Chen

> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:38:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.349990.576219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QPb-0002R0-N8; Wed, 15 Jun 2022 10:38:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 349990.576219; Wed, 15 Jun 2022 10:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QPb-0002Ql-Gd; Wed, 15 Jun 2022 10:38:11 +0000
Received: by outflank-mailman (input) for mailman id 349990;
 Wed, 15 Jun 2022 10:38:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QHf-0004ln-Jt
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:29:59 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ad2e8e1-ec96-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 12:29:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6876.eurprd04.prod.outlook.com (2603:10a6:10:116::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.20; Wed, 15 Jun
 2022 10:29:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ad2e8e1-ec96-11ec-ab14-113154c10af9
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b4054ce2-7645-e467-ad91-93868d493845@suse.com>
Date: Wed, 15 Jun 2022 12:29:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 07/11] x86emul: handle AVX512-FP16 complex multiplication
 insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR08CA0042.eurprd08.prod.outlook.com
 (2603:10a6:20b:c0::30) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7821c3d1-e5d0-4d70-2235-08da4eb9fd7a
X-MS-TrafficTypeDiagnostic: DB8PR04MB6876:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7821c3d1-e5d0-4d70-2235-08da4eb9fd7a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 10:29:55.6621
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nXdKV5xn5ShlcB9wbtNQJATumr9ACa5l2RyFaLAfb82bSmXpmInSstpxcRIE26XE6alOifnbSKOKept+x5OKOw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6876

Aspects to consider are that these have 32-bit element size (pairs of
FP16) and that there are restrictions on the registers valid to use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -614,12 +614,18 @@ static const struct test avx512_fp16_all
     INSN(comish,          , map5, 2f,    el, fp16, el),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
+    INSNX(fcmaddcph,    f2, map6, 56, 1, vl,    d, vl),
+    INSNX(fcmaddcsh,    f2, map6, 57, 1, el,    d, el),
+    INSNX(fcmulcph,     f2, map6, d6, 1, vl,    d, vl),
+    INSNX(fcmulcsh,     f2, map6, d7, 1, el,    d, el),
     INSN(fmadd132ph,    66, map6, 98,    vl, fp16, vl),
     INSN(fmadd132sh,    66, map6, 99,    el, fp16, el),
     INSN(fmadd213ph,    66, map6, a8,    vl, fp16, vl),
     INSN(fmadd213sh,    66, map6, a9,    el, fp16, el),
     INSN(fmadd231ph,    66, map6, b8,    vl, fp16, vl),
     INSN(fmadd231sh,    66, map6, b9,    el, fp16, el),
+    INSNX(fmaddcph,     f3, map6, 56, 1, vl,    d, vl),
+    INSNX(fmaddcsh,     f3, map6, 57, 1, el,    d, el),
     INSN(fmaddsub132ph, 66, map6, 96,    vl, fp16, vl),
     INSN(fmaddsub213ph, 66, map6, a6,    vl, fp16, vl),
     INSN(fmaddsub231ph, 66, map6, b6,    vl, fp16, vl),
@@ -632,6 +638,8 @@ static const struct test avx512_fp16_all
     INSN(fmsubadd132ph, 66, map6, 97,    vl, fp16, vl),
     INSN(fmsubadd213ph, 66, map6, a7,    vl, fp16, vl),
     INSN(fmsubadd231ph, 66, map6, b7,    vl, fp16, vl),
+    INSNX(fmulcph,      f3, map6, d6, 1, vl,    d, vl),
+    INSNX(fmulcsh,      f3, map6, d7, 1, el,    d, el),
     INSN(fnmadd132ph,   66, map6, 9c,    vl, fp16, vl),
     INSN(fnmadd132sh,   66, map6, 9d,    el, fp16, el),
     INSN(fnmadd213ph,   66, map6, ac,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2058,6 +2058,10 @@ static const struct evex {
     { { 0x4d }, 2, T, R, pfx_66, W0, LIG }, /* vrcpsh */
     { { 0x4e }, 2, T, R, pfx_66, W0, Ln }, /* vrsqrtph */
     { { 0x4f }, 2, T, R, pfx_66, W0, LIG }, /* vrsqrtsh */
+    { { 0x56 }, 2, T, R, pfx_f3, W0, Ln }, /* vfmaddcph */
+    { { 0x56 }, 2, T, R, pfx_f2, W0, Ln }, /* vfcmaddcph */
+    { { 0x57 }, 2, T, R, pfx_f3, W0, LIG }, /* vfmaddcsh */
+    { { 0x57 }, 2, T, R, pfx_f2, W0, LIG }, /* vfcmaddcsh */
     { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
     { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
     { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
@@ -2088,6 +2092,10 @@ static const struct evex {
     { { 0xbd }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd231sh */
     { { 0xbe }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub231ph */
     { { 0xbf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub231sh */
+    { { 0xd6 }, 2, T, R, pfx_f3, W0, Ln }, /* vfmulcph */
+    { { 0xd6 }, 2, T, R, pfx_f2, W0, Ln }, /* vfcmulcph */
+    { { 0xd7 }, 2, T, R, pfx_f3, W0, LIG }, /* vfmulcsh */
+    { { 0xd7 }, 2, T, R, pfx_f2, W0, LIG }, /* vfcmulcsh */
 };
 
 static const struct {
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -379,6 +379,8 @@ static const struct ext0f38_table {
     [0x4f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x50 ... 0x53] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x54 ... 0x55] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x56] = { .simd_size = simd_other, .d8s = d8s_vl },
+    [0x57] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x58] = { .simd_size = simd_other, .two_op = 1, .d8s = 2 },
     [0x59] = { .simd_size = simd_other, .two_op = 1, .d8s = 3 },
     [0x5a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
@@ -441,6 +443,8 @@ static const struct ext0f38_table {
     [0xcc] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
     [0xcd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xd6] = { .simd_size = simd_other, .d8s = d8s_vl },
+    [0xd7] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
     [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xf0] = { .two_op = 1 },
@@ -1502,6 +1506,10 @@ int x86emul_decode(struct x86_emulate_st
                 if ( s->evex.pfx == vex_66 )
                     s->fp16 = true;
                 break;
+
+            case 0x56: case 0x57: /* vf{,c}maddc{p,s}h */
+            case 0xd6: case 0xd7: /* vf{,c}mulc{p,s}h */
+                break;
             }
 
             disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7840,6 +7840,34 @@ x86_emulate(
         avx512_vlen_check(true);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_F3(6, 0x56): /* vfmaddcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0x56): /* vfcmaddcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(6, 0xd6): /* vfmulcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0xd6): /* vfcmulcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        op_bytes = 16 << evex.lr;
+        /* fall through */
+    case X86EMUL_OPC_EVEX_F3(6, 0x57): /* vfmaddcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0x57): /* vfcmaddcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F3(6, 0xd7): /* vfmulcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0xd7): /* vfcmulcsh xmm/m16,xmm,xmm{k} */
+    {
+        unsigned int src1 = ~evex.reg;
+
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || ((b & 1) && ea.type != OP_REG && evex.brs),
+                              EXC_UD);
+        if ( mode_64bit() )
+            src1 = (src1 & 0xf) | (!evex.RX << 4);
+        else
+            src1 &= 7;
+        generate_exception_if(modrm_reg == src1 ||
+                              (ea.type != OP_MEM && modrm_reg == modrm_rm),
+                              EXC_UD);
+        if ( ea.type != OP_REG || (b & 1) || !evex.brs )
+            avx512_vlen_check(!(b & 1));
+        goto simd_zmm;
+    }
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:38:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 10:38:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350000.576230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QPg-0002s3-19; Wed, 15 Jun 2022 10:38:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350000.576230; Wed, 15 Jun 2022 10:38:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1QPf-0002ru-To; Wed, 15 Jun 2022 10:38:15 +0000
Received: by outflank-mailman (input) for mailman id 350000;
 Wed, 15 Jun 2022 10:38:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1QH7-0002mz-Vx
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 10:29:26 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 06cdd9ac-ec96-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 12:29:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7332.eurprd04.prod.outlook.com (2603:10a6:20b:1db::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Wed, 15 Jun
 2022 10:29:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 10:29:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06cdd9ac-ec96-11ec-bd2c-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ca49ed5d-d440-72da-6f56-664888c8b25c@suse.com>
Date: Wed, 15 Jun 2022 12:29:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH 06/11] x86emul: handle AVX512-FP16 Map6 misc insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
In-Reply-To: <9ba3aaa7-56ce-b051-f1c4-874276e493e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P191CA0027.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:209:8b::40) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d1532c4d-c4c0-45ca-5f99-08da4eb9e99f
X-MS-TrafficTypeDiagnostic: AM8PR04MB7332:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0

While, as before, this leverages the fact that the Map6 encoding space is a
very sparse clone of the "0f38" one, switch around the simd_size override
for opcode 2D. This way fewer separate overrides are needed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -646,6 +646,8 @@ static const struct test avx512_fp16_all
     INSN(fnmsub231sh,   66, map6, bf,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
+    INSN(getexpph,      66, map6, 42,    vl, fp16, vl),
+    INSN(getexpsh,      66, map6, 43,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
     INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
     INSN(maxph,           , map5, 5f,    vl, fp16, vl),
@@ -656,10 +658,16 @@ static const struct test avx512_fp16_all
     INSN(movsh,         f3, map5, 11,    el, fp16, el),
     INSN(mulph,           , map5, 59,    vl, fp16, vl),
     INSN(mulsh,         f3, map5, 59,    el, fp16, el),
+    INSN(rcpph,         66, map6, 4c,    vl, fp16, vl),
+    INSN(rcpsh,         66, map6, 4d,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
     INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
     INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
     INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+    INSN(rsqrtph,       66, map6, 4e,    vl, fp16, vl),
+    INSN(rsqrtsh,       66, map6, 4f,    el, fp16, el),
+    INSN(scalefph,      66, map6, 2c,    vl, fp16, vl),
+    INSN(scalefsh,      66, map6, 2d,    el, fp16, el),
     INSN(sqrtph,          , map5, 51,    vl, fp16, vl),
     INSN(sqrtsh,        f3, map5, 51,    el, fp16, el),
     INSN(subph,           , map5, 5c,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2050,6 +2050,14 @@ static const struct evex {
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
+    { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
+    { { 0x2d }, 2, T, R, pfx_66, W0, LIG }, /* vscalefsh */
+    { { 0x42 }, 2, T, R, pfx_66, W0, Ln }, /* vgetexpph */
+    { { 0x43 }, 2, T, R, pfx_66, W0, LIG }, /* vgetexpsh */
+    { { 0x4c }, 2, T, R, pfx_66, W0, Ln }, /* vrcpph */
+    { { 0x4d }, 2, T, R, pfx_66, W0, LIG }, /* vrcpsh */
+    { { 0x4e }, 2, T, R, pfx_66, W0, Ln }, /* vrsqrtph */
+    { { 0x4f }, 2, T, R, pfx_66, W0, LIG }, /* vrsqrtsh */
     { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
     { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
     { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -358,7 +358,7 @@ static const struct ext0f38_table {
     [0x2a] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
     [0x2b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x2c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x2d] = { .simd_size = simd_packed_fp, .d8s = d8s_dq },
+    [0x2d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x2e ... 0x2f] = { .simd_size = simd_packed_fp, .to_mem = 1 },
     [0x30] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
     [0x31] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
@@ -909,8 +909,8 @@ decode_0f38(struct x86_emulate_state *s,
         ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
-    case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */
-        s->simd_size = simd_scalar_vexw;
+    case X86EMUL_OPC_VEX_66(0, 0x2d): /* vmaskmovpd */
+        s->simd_size = simd_packed_fp;
         break;
 
     case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7780,6 +7780,8 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x97): /* vfmsubadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x98): /* vfmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -7804,6 +7806,8 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x2d): /* vscalefsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x43): /* vgetexpsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x9b): /* vfmsub132sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x9d): /* vfnmadd132sh xmm/m16,xmm,xmm{k} */
@@ -7823,6 +7827,19 @@ x86_emulate(
             avx512_vlen_check(true);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x4c): /* vrcpph [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x4e): /* vrsqrtph [xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        goto avx512f_no_sae;
+
+    case X86EMUL_OPC_EVEX_66(6, 0x4d): /* vrcpsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x4f): /* vrsqrtsh xmm/m16,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        avx512_vlen_check(true);
+        goto simd_zmm;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 10:44:30 2022
Message-ID: <62972285-4228-fc79-2824-1a541bc022fd@suse.com>
Date: Wed, 15 Jun 2022 12:44:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2 1/4] build,include: rework shell script for
 headers++.chk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220614162248.40278-2-anthony.perard@citrix.com>
 <09b49900-9215-f2a2-d521-a79cf5ce5f0f@suse.com>
 <Yqm06hdvXE2caKpq@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yqm06hdvXE2caKpq@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 15.06.2022 12:31, Anthony PERARD wrote:
> On Wed, Jun 15, 2022 at 08:37:42AM +0200, Jan Beulich wrote:
>> On 14.06.2022 18:22, Anthony PERARD wrote:
>>> diff --git a/xen/include/Makefile b/xen/include/Makefile
>>> index 6d9bcc19b0..49c75a78f9 100644
>>> --- a/xen/include/Makefile
>>> +++ b/xen/include/Makefile
>>> @@ -158,13 +158,22 @@ define cmd_headerscxx_chk
>>>  	    touch $@.new;                                                     \
>>>  	    exit 0;                                                           \
>>>  	fi;                                                                   \
>>> -	$(foreach i, $(filter %.h,$^),                                        \
>>> -	    echo "#include "\"$(i)\"                                          \
>>> +	get_prereq() {                                                        \
>>> +	    case $$1 in                                                       \
>>> +	    $(foreach i, $(filter %.h,$^),                                    \
>>> +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
>>> +		$(i)$(close)                                                  \
>>> +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
>>> +			-include c$(j))";;))                                  \
>>> +	    esac;                                                             \
>>
>> If I'm reading this right (indentation looks to be a little misleading
>> and hence one needs to count parentheses) the case statement could (in
>> theory) remain without any "body". As per the command language spec I'm
>> looking at this (if it works in the first place) is an extension, and
>> formally there's always at least one label required. Since we aim to be
>> portable in such regards, I'd like to request that there be a final
>> otherwise empty *) line.
> 
> When looking at the shell grammar at [1], an empty body seems to be
> allowed. But I can add "*)" at the end for peace of mind.
> 
> [1] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_10_02

Hmm, indeed. As opposed to

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_04
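(For illustration, a minimal, hypothetical sketch of the two forms under discussion; the file names are made up. The case statement's pattern list may end up empty after macro expansion, which the grammar tolerates, but an explicit "*)" default is the conservative, universally-accepted form.)

```shell
# Sketch of the get_prereq helper with an explicit empty default label.
get_prereq() {
    case $1 in
    foo.h)
        echo "-include bar.h";;
    *) ;;    # portable empty default, as requested in the review
    esac
}

first=$(get_prereq foo.h)   # matches the label
second=$(get_prereq baz.h)  # falls through the empty default
```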

> As for misleading indentation, I've got my $EDITOR to show me matching
> parentheses, so I don't need to count them. But if I rewrite that
> function as following, would it be easier to follow?
> 
> +	get_prereq() {                                                        \
> +	case $$1 in                                                           \
> +	$(foreach i, $(filter %.h,$^),                                        \
> +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
> +		$(i)$(close)                                                  \
> +		echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
> +			-include c$(j))";;))                                  \
> +	*) ;;                                                                 \
> +	esac;                                                                 \
> +	};                                                                    \

Hmm, not really, no (and it may be more obvious looking at the reply
context above): My primary concern is the use of hard tabs beyond the
leading one (which is uniform across all lines and hence doesn't alter
apparent alignment even with the + or > prefixed).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 11:24:37 2022
Date: Wed, 15 Jun 2022 04:24:22 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Message-ID: <YqnBZhiLOHnoalbC@infradead.org>
References: <20220615084835.27113-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220615084835.27113-1-jgross@suse.com>

I don't think this command line mess is a good idea.  Instead the
brand new grant devices need to be properly advertised in the device
tree, and any device that isn't a grant device doesn't need
VIRTIO_F_ACCESS_PLATFORM.  No need for a command line or user
intervention.



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 11:26:31 2022
Message-ID: <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
Date: Wed, 15 Jun 2022 13:26:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Content-Language: en-US
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <YqnBZhiLOHnoalbC@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0dR4IGU0rVaj3oqMZ0oY6672"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0dR4IGU0rVaj3oqMZ0oY6672
Content-Type: multipart/mixed; boundary="------------JZ1ibS0Zuc7duvJog4SELbUl";
 protected-headers="v1"

--------------JZ1ibS0Zuc7duvJog4SELbUl
Content-Type: multipart/mixed; boundary="------------hDMuK1Fsc7lEgsY36O34aK0x"

--------------hDMuK1Fsc7lEgsY36O34aK0x
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.06.22 13:24, Christoph Hellwig wrote:
> I don't think this command line mess is a good idea.  Instead the
> brand new grant devices need to be properly advertised in the device
> tree, and any device that isn't a grant device doesn't need
> VIRTIO_F_ACCESS_PLATFORM.  No need for a command line or user
> intervention.

And on x86?


Juergen
--------------hDMuK1Fsc7lEgsY36O34aK0x
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------hDMuK1Fsc7lEgsY36O34aK0x--

--------------JZ1ibS0Zuc7duvJog4SELbUl--

--------------0dR4IGU0rVaj3oqMZ0oY6672
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKpweMFAwAAAAAACgkQsN6d1ii/Ey87
VAgAmYzRkYKavmjJdFRwhltDAVihOBkE97sTorxUxn2LbOCT1m3+3u8cS/ju61bweKPRQS/X9pAL
QiKsSmHgxVzppcXJXcWJ4BZHchVwiNlEibCWQzSDri777XSZrtVOcuiub5PdCusKvuYw3ZT2zzKg
B9jLk/nGmWl2kc5FdNeVR2lBFBETyE187bbJ/JDYJjpr1h22hkkq/5C4YF9qYMEPcd8bacN7UgTK
jQ15/FEG8r0Nw9W5c/hNnq+sqb7ggBdIaNTRm1AAtVja0NmAgxl0cyp77W+VATQzOQsVKB23TtUg
zB6bbydfJLHFSqF8gP/i3M5m3LEt9IeGfH1G7fl4PQ==
=3dTj
-----END PGP SIGNATURE-----

--------------0dR4IGU0rVaj3oqMZ0oY6672--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 11:28:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 11:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350048.576275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1RCW-000378-Mh; Wed, 15 Jun 2022 11:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350048.576275; Wed, 15 Jun 2022 11:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1RCW-000371-Hn; Wed, 15 Jun 2022 11:28:44 +0000
Received: by outflank-mailman (input) for mailman id 350048;
 Wed, 15 Jun 2022 11:28:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGfn=WW=bombadil.srs.infradead.org=BATV+cd04db8a85bff7cd13dc+6870+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1o1RCU-00036m-0j
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 11:28:42 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e35593e-ec9e-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 13:28:41 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1o1RCR-00EB1m-Bb; Wed, 15 Jun 2022 11:28:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e35593e-ec9e-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ffs70zIAEPDoQR7c4GQM9h5KRYBlcReN8DDYbmuklWA=; b=eVx0xL+TUT9/UozlSGm3BSao4r
	t4A0k50rGvMY5v0y2l2Lu7kInP8Iuuw2lNBBwjXDQve2/BETf31KEI5uWFKraX1UBrrTHQVr9EObw
	MqIxPKjQMHg2b1n6EvewXsvzMTYm15xPMYAnWZ4KXYnarus+p5ywQRg6pwLgh9jcmoX/cfUML8z0A
	wTkEwU+8XR/zRJXGSR1suj47ftkmSZPxPcpjkOcB4FnubiT1DhUT0LpHrm0vdP32doAJvmx5DkMqb
	9L8oBwaINGC5njlF/BxW0fMQy+3UFXFkTvMhDiK3hApgX19XsumxnSJEFRqzfiU55vd0wyBpOzK3a
	sV+lQJXw==;
Date: Wed, 15 Jun 2022 04:28:39 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: Christoph Hellwig <hch@infradead.org>, xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Message-ID: <YqnCZ+EKZeZ5AEnr@infradead.org>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 15, 2022 at 01:26:27PM +0200, Juergen Gross wrote:
> On 15.06.22 13:24, Christoph Hellwig wrote:
> > I don't think this command line mess is a good idea.  Instead the
> > brand new grant devices need to be properly advertised in the device
> > tree, and any device that isn't a grant device doesn't need
> > VIRTIO_F_ACCESS_PLATFORM.  No need for a command line or user
> > intervention.
> 
> And on x86?

ACPI tables or whatever mechanism you otherwise use to advertise
grant-based DMA.  But that is irrelevant for now, as the code in
mainline only supports DT on arm/arm64 anyway.
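For the arm/arm64 DT case mentioned here, the advertisement looks roughly like the fragment below (the property name is taken from the 5.19-era binding added by the series under discussion; addresses and interrupt numbers are made up, so treat the details as illustrative rather than authoritative):

```dts
/* Sketch: a virtio-mmio node advertised as a grant device. The
   "xen,dev-domid" property names the backend domain, and its presence
   is what tells the guest to use grant-based DMA for this device. */
virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x02000000 0x200>;
        interrupts = <0x01>;
        xen,dev-domid = <0x1>;
};
```

A device whose node lacks the property keeps normal DMA behaviour and therefore does not need VIRTIO_F_ACCESS_PLATFORM, which is the point being made above.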


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 11:39:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 11:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350060.576285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1RMY-0004se-O3; Wed, 15 Jun 2022 11:39:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350060.576285; Wed, 15 Jun 2022 11:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1RMY-0004sX-Ip; Wed, 15 Jun 2022 11:39:06 +0000
Received: by outflank-mailman (input) for mailman id 350060;
 Wed, 15 Jun 2022 11:39:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5J6g=WW=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1RMX-0004sR-4n
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 11:39:05 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c140c56b-ec9f-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 13:39:03 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6B0361F461;
 Wed, 15 Jun 2022 11:39:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 32972139F3;
 Wed, 15 Jun 2022 11:39:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id +pjUCtbEqWKPdgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 15 Jun 2022 11:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c140c56b-ec9f-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655293142; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vWPyTUqt05MY3W5QAH5ZScqNjOJblYblU80W/EAoMTc=;
	b=pj88y/1uhddmJF6+ViziLn3f9btEYgJHUxO+4CVlPd36NF9lAY938X+KI2og6Ej07tIXYB
	SBsHeofB2W5lvCiNCDm5dLIt5a+4W3+x98sMeJ9ncXBSawaUx15hd27sAGSHH2j7Zwq7ip
	Ha0WFC9245O9XEPb68PYRWINOgYLUyg=
Message-ID: <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
Date: Wed, 15 Jun 2022 13:39:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
In-Reply-To: <YqnCZ+EKZeZ5AEnr@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------TWmXe532bQQDVPjVheiFhZaA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------TWmXe532bQQDVPjVheiFhZaA
Content-Type: multipart/mixed; boundary="------------cNawxCPRxMUORbcskRskiQnY";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
In-Reply-To: <YqnCZ+EKZeZ5AEnr@infradead.org>

--------------cNawxCPRxMUORbcskRskiQnY
Content-Type: multipart/mixed; boundary="------------Sd2TeXFi0Z1F0raD2vPQkxgR"

--------------Sd2TeXFi0Z1F0raD2vPQkxgR
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.06.22 13:28, Christoph Hellwig wrote:
> On Wed, Jun 15, 2022 at 01:26:27PM +0200, Juergen Gross wrote:
>> On 15.06.22 13:24, Christoph Hellwig wrote:
>>> I don't think this command line mess is a good idea.  Instead the
>>> brand new grant devices need to be properly advertised in the device
>>> tree, and any device that isn't a grant device doesn't need
>>> VIRTIO_F_ACCESS_PLATFORM.  No need for a command line or user
>>> intervention.
>>
>> And on x86?
>
> ACPI tables or whatever mechanism you otherwise use to advertise grant
> based DMA.  But that is irrelevant for now, as the code in mainline
> only supports DT on arm/arm64 anyay.

No, it doesn't. I'm working on a qemu patch series enabling the qemu
based backends to support grants with virtio. The code is working fine
on x86, too (apart from the fact that the backends aren't ready yet).

I'd be fine to just drop the command line parameter, but I think a
guest admin should be able to specify that he doesn't want the backend
to be able to map arbitrary memory of the guest in order to add
another line of defense.


Juergen
--------------Sd2TeXFi0Z1F0raD2vPQkxgR
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Sd2TeXFi0Z1F0raD2vPQkxgR--

--------------cNawxCPRxMUORbcskRskiQnY--

--------------TWmXe532bQQDVPjVheiFhZaA
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKpxNUFAwAAAAAACgkQsN6d1ii/Ey9B
5Af9F2dY6J4qBYEbMURzj1DkbJAf6MjkmYYThPu+gU3Xfe5zRwpbuutcpumG80XNYnJuWZ/cab36
Lc3zVRHJsmOgqHefGa+F7fqtCW0qPwz7yh9r708cJebFWZFjJcRPlEVZ+xupRoguzb4uvrJHJsv1
CtlbaUesSN+TRatgLjkl88B3VV7rp/Y/+578CdHwy/kKKnNJrKEmtGbDmm0yb/HW5TdKuP/T8I/c
d8ru/eBY2tfF5BPp+pPZswCceiJzItpi4hzrv2fxCqSS5VY42fCaVrdIMaMukAlWe+vZ75iWVfNX
QxN99ml9MQNWob3scGsuaRAIG3cGfWDh15+KfFlePQ==
=hXfX
-----END PGP SIGNATURE-----

--------------TWmXe532bQQDVPjVheiFhZaA--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 11:54:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 11:54:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350069.576296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Rb3-0007Rv-VA; Wed, 15 Jun 2022 11:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350069.576296; Wed, 15 Jun 2022 11:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Rb3-0007Ro-SK; Wed, 15 Jun 2022 11:54:05 +0000
Received: by outflank-mailman (input) for mailman id 350069;
 Wed, 15 Jun 2022 11:54:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGfn=WW=bombadil.srs.infradead.org=BATV+cd04db8a85bff7cd13dc+6870+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1o1Rb2-0007Ri-8H
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 11:54:04 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d93a6211-eca1-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 13:54:02 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1o1Ray-00EISR-Pg; Wed, 15 Jun 2022 11:54:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d93a6211-eca1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ZvNzq09TdLzqZwexljuHETkJeL+6Sds0r4wzjQ/HMi4=; b=i4MVsHiQuG8LAiV+GqX7FPx/Gk
	/3Tg3rurZJZ8OxfjdxMPfSslNT96JSU/5vr53PlGFmdDVaW7XyInbVmDA+JEuPSBhh4MXvqEtaaRM
	fpnX61iUuQff4KbGQyR/4J3vcwTDMYTn7uekJLv8VQbx7HDBKs+B9GWhfZHVCFSC5c4AFsauHQHVC
	jocf22X585lFpCZk6vhrJZ5Hgwv4SBz/5yMxzYg4iW0hhY5bkDlgmgnK6P0xCy4GcSqKdrCGFPH0v
	Aky09pIl07V83eXE3N+mzaOIoi/xrivF6WMm7zXamcRZsArr+jQwkXru0rQNt8nPgiCtxEuWeHjk/
	mQljCOwg==;
Date: Wed, 15 Jun 2022 04:54:00 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: Christoph Hellwig <hch@infradead.org>, xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Message-ID: <YqnIWCXxsGzkfQp7@infradead.org>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 15, 2022 at 01:39:01PM +0200, Juergen Gross wrote:
> No, it doesn't. I'm working on a qemu patch series enabling the qemu
> based backends to support grants with virtio. The code is working fine
> on x86, too (apart from the fact that the backends aren't ready yet).

The code right now in mainline only ever sets the DMA ops.  So
I can't see how you could make it work.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 12:41:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 12:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350089.576309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SKl-0005iT-4F; Wed, 15 Jun 2022 12:41:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350089.576309; Wed, 15 Jun 2022 12:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SKl-0005iM-1S; Wed, 15 Jun 2022 12:41:19 +0000
Received: by outflank-mailman (input) for mailman id 350089;
 Wed, 15 Jun 2022 12:41:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wv+x=WW=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o1SKk-0005iG-4V
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 12:41:18 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 726cea9e-eca8-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 14:41:16 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id m25so13093858lji.11
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 05:41:16 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 z7-20020a056512370700b00478ee6a58c1sm1792986lfr.172.2022.06.15.05.41.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 15 Jun 2022 05:41:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 726cea9e-eca8-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=w6mEqtJ4r74TnNlMvfbdu8cIIS/gk/ZwzqlOU40cLcI=;
        b=Ns/Yffi5uouREXNrVzUrM90jbm7q7a2tY0STxq349YuYuatUrOEBk8sVaJgw4JgFzo
         m3glrH2cCTIhO6ZUxBDytyLucbRmwnxuO9OF73Okk2xJIwVPmcuL4eRsWi8GAG+rOt+X
         wYsB+HoBLOQDXKB2n00M6RF/44SSgpydn90oAe70A3HAMQgn8YQJyddKkRRXY0yCwRms
         Y/f7AztfyYwWlnyGBi1qNE7Vv6QWj60WshXfVFpzAP86iowhQknaTsfhLH+7k+OEiFHQ
         HIDbzY4fr3FB5pNomH/QMszBG8vOl9G0YIULNPbZseUJicWUZkfH45XPqhCC5mrS411f
         e+jA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=w6mEqtJ4r74TnNlMvfbdu8cIIS/gk/ZwzqlOU40cLcI=;
        b=Xe/dBP2ajH5hmVuNnrq1cncvoDIf6kiiQ0kFGoN1lktlePTu6ULVMK9OscaftRjG0f
         gYz47QQNOOcpg2cRn2zbXlqLST2xqmQWuPoruZdCNBVDPpvAtjSSC2VYB2BZ8yFDDW49
         lqgc8wO5aCzvEpYlOJwYBO1zQNQGxWR9W3Q6ljmlcnrUNTPRH66rc/cm4sm+Gz/A69oW
         epB0muEmIOYRNLkyxHBGkkCka03Z2wGvqHJILlht8NAswj04xTzIlgSXu3Qi/okBvKe3
         Omrc2F0avH7k2sR2w5Z1nPOGi5pIMfT7aIN3WJX1wHcPoSs+paso4683kDVe1lCs2FPm
         mINw==
X-Gm-Message-State: AJIora/1M1HR/H/KiZgc6VaQEgmXY19IvxiP5+ryGWH6fR+qFnVXuncu
	qPaP6wKkQ2oGoNXqkx7TlPc=
X-Google-Smtp-Source: AGRyM1vMQ3DAgxFkhCIcxZL1gvzuqNwRkvB4KnPM+GPQG7YK1lhJpUD9P+MxaOzfLYZ1FVgmhRljvw==
X-Received: by 2002:a05:651c:981:b0:253:b87e:ba6c with SMTP id b1-20020a05651c098100b00253b87eba6cmr5354225ljq.530.1655296875504;
        Wed, 15 Jun 2022 05:41:15 -0700 (PDT)
Subject: Re: [PATCH V4 0/8] virtio: Solution to restrict memory access under
 Xen using xen-grant DMA-mapping layer
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: xen-devel@lists.xenproject.org,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 virtualization@lists.linux-foundation.org, x86@kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig
 <hch@infradead.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Henry Wang <Henry.Wang@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Vincent Guittot <vincent.guittot@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.org>
References: <1654197833-25362-1-git-send-email-olekstysh@gmail.com>
 <CAOh2x=kxpdisV+tqcYOoZGSKA8YjPMej+7u19Jpa1jmVcZCaxA@mail.gmail.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <52b1389b-348d-2433-f80c-fab22194dac2@gmail.com>
Date: Wed, 15 Jun 2022 15:41:13 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAOh2x=kxpdisV+tqcYOoZGSKA8YjPMej+7u19Jpa1jmVcZCaxA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 15.06.22 09:23, Viresh Kumar wrote:
> Hi Oleksandr,


Hello Viresh


>
> On Mon, Jun 6, 2022 at 10:16 AM Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> The high-level idea is to create a new Xen grant-table-based DMA-mapping layer for the guest Linux. Its main
>> purpose is to provide a special 64-bit DMA address formed from the grant reference (for a page to be shared
>> with the backend) plus an offset, with the highest address bit set (so that the backend can distinguish a
>> grant-ref-based DMA address from a normal GPA). For this to work we need the ability to allocate contiguous
>> (consecutive) grant references for multi-page allocations. The backend then needs to offer the
>> VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature bits (it must support the virtio-mmio modern
>> transport for 64-bit addresses in the virtqueue).
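
[Editor's sketch] The address encoding described above can be illustrated roughly as follows. This is a self-contained paraphrase, not the merged kernel code: the constant and function names (GRANT_DMA_ADDR_OFF, grant_to_dma, and friends) are assumptions for this sketch, and the real layer lives in the kernel's grant DMA ops.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the encoding described above: pack the grant reference into
 * the page-frame bits of a 64-bit DMA address and set the top bit so the
 * backend can tell it apart from a normal guest physical address.
 * All names here are illustrative, not the kernel's. */
#define SKETCH_PAGE_SHIFT 12
#define GRANT_DMA_ADDR_OFF (1ULL << 63)

static uint64_t grant_to_dma(uint32_t grant_ref, uint32_t offset_in_page)
{
    return GRANT_DMA_ADDR_OFF |
           ((uint64_t)grant_ref << SKETCH_PAGE_SHIFT) |
           (offset_in_page & ((1u << SKETCH_PAGE_SHIFT) - 1));
}

static uint32_t dma_to_grant(uint64_t dma_addr)
{
    /* Clear the marker bit, then shift the page offset away. */
    return (uint32_t)((dma_addr & ~GRANT_DMA_ADDR_OFF) >> SKETCH_PAGE_SHIFT);
}

static int is_grant_dma_addr(uint64_t dma_addr)
{
    return (dma_addr & GRANT_DMA_ADDR_OFF) != 0;
}
```

A multi-page allocation then simply needs N consecutive grant references, so that adding a byte offset to the encoded base address walks through them page by page — which is why contiguous grant reference allocation is required.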
> I was trying your series, from Linus's tree now, and started seeing
> boot failures: the kernel failed to mount the rootfs. The reason is
> probably these messages:
>
> [ 1.222498] virtio_scsi virtio1: device must provide VIRTIO_F_ACCESS_PLATFORM
> [ 1.316334] virtio_net virtio0: device must provide VIRTIO_F_ACCESS_PLATFORM
>
> I understand from your email that the backends now need to offer the
> VIRTIO_F_ACCESS_PLATFORM flag, but should this requirement be a bit
> softer? I mean, shouldn't we allow both types of backends to run with
> the same kernel, ones that offer this feature and others that don't?
> The ones that don't offer the feature should continue to work like they
> used to, i.e. without the restricted memory access feature.
> I am currently testing Xen with the help of QEMU on my x86 desktop, and
> these backends (scsi and net) are part of QEMU itself I think, so I
> don't really want to go and make the change there.


Thank you for testing on x86.


I assume your guest type is HVM. With the current series,
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS is set for *all* types of Xen
guests if CONFIG_XEN_VIRTIO is enabled.
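
[Editor's sketch] The boot failure Viresh saw follows from a check roughly along these lines in the virtio core feature validation. This is a simplified, self-contained paraphrase, not the kernel source: the `restricted_mem_access_enabled` flag is a stand-in for the real platform hook that CONFIG_XEN_VIRTIO enables, and the function shape is illustrative.

```c
#include <stdio.h>

/* Simplified paraphrase of the virtio-core check that emits
 * "device must provide VIRTIO_F_ACCESS_PLATFORM". The flag below is a
 * stand-in for the real restricted-memory-access platform hook. */
#define VIRTIO_F_ACCESS_PLATFORM 33 /* feature bit number per the virtio spec */

static int restricted_mem_access_enabled = 1; /* set for all Xen guests here */

static int virtio_features_ok(unsigned long long features, const char *name)
{
    if (restricted_mem_access_enabled &&
        !(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM))) {
        fprintf(stderr, "%s: device must provide VIRTIO_F_ACCESS_PLATFORM\n",
                name);
        return -1; /* device is refused; no disk/net -> rootfs mount fails */
    }
    return 0;
}
```

With the hook enabled unconditionally, any backend not offering the feature bit (such as stock QEMU's virtio-scsi/virtio-net here) is rejected at probe time.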

I have to admit that from the very beginning it was possible to
configure this for PV and HVM guests separately [1], because the use of
grant mappings for virtio is mandatory for paravirtualized guests, but
not strictly necessary for fully virtualized guests (if the backends
are in Dom0). But it was decided to drop these extra options (including
XEN_HVM_VIRTIO_GRANT) and leave only a single one, CONFIG_XEN_VIRTIO.

I see that Juergen has already pushed a fix.

Sorry for the inconvenience.



[1] 
https://lore.kernel.org/xen-devel/1649963973-22879-3-git-send-email-olekstysh@gmail.com/


>
> Thanks.
>
> --
> Viresh

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 12:54:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 12:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350097.576321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SX0-0007Tn-8i; Wed, 15 Jun 2022 12:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350097.576321; Wed, 15 Jun 2022 12:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SX0-0007Tg-5F; Wed, 15 Jun 2022 12:53:58 +0000
Received: by outflank-mailman (input) for mailman id 350097;
 Wed, 15 Jun 2022 12:53:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5J6g=WW=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1SWz-0007Ta-2y
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 12:53:57 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 373cac93-ecaa-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 14:53:55 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6956C1F88E;
 Wed, 15 Jun 2022 12:53:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3061913A35;
 Wed, 15 Jun 2022 12:53:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IB41CmPWqWLzFgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 15 Jun 2022 12:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 373cac93-ecaa-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655297635; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=PLZICg7uteGrNkus+TkltCr/ZYre6VfA0dO/FQY4hBw=;
	b=p+YUDTvyU1YaKkaZw0bO9m/mkJG6yTT5UQ8s2qYt7VfrHo5y0N6yJd5AlmCyxpCFa5W0ki
	zztrz/qRxcccaMvVVuTlgx8PZZ/Bi8HnqZjITC2X5UCQ//IeXslo6hT2R/T3SQ91CnH4lM
	15mWzbHHC1chKY4HQubgTCBa5gsTJ1s=
Message-ID: <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
Date: Wed, 15 Jun 2022 14:53:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
 <YqnIWCXxsGzkfQp7@infradead.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
In-Reply-To: <YqnIWCXxsGzkfQp7@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------bCUY8BhvhKPVYA0Vkiaf5GOk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------bCUY8BhvhKPVYA0Vkiaf5GOk
Content-Type: multipart/mixed; boundary="------------bJYoAtcPkRBUuSODuR5n1ait";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
 <YqnIWCXxsGzkfQp7@infradead.org>
In-Reply-To: <YqnIWCXxsGzkfQp7@infradead.org>

--------------bJYoAtcPkRBUuSODuR5n1ait
Content-Type: multipart/mixed; boundary="------------kckpb9DKnPpfXEBq0EuR6m7l"

--------------kckpb9DKnPpfXEBq0EuR6m7l
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.06.22 13:54, Christoph Hellwig wrote:
> On Wed, Jun 15, 2022 at 01:39:01PM +0200, Juergen Gross wrote:
>> No, it doesn't. I'm working on a qemu patch series enabling the qemu
>> based backends to support grants with virtio. The code is working fine
>> on x86, too (apart from the fact that the backends aren't ready yet).
> 
> The code right now in mainline only ever sets the ops for DMA.  So
> I can't see how you could make it work.

Ah, you are right. I was using a guest with an older version of the series.
Sorry for the noise.


Juergen
--------------kckpb9DKnPpfXEBq0EuR6m7l--

--------------bJYoAtcPkRBUuSODuR5n1ait--

--------------bCUY8BhvhKPVYA0Vkiaf5GOk--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 12:56:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 12:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350104.576332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SZV-00085Q-Mg; Wed, 15 Jun 2022 12:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350104.576332; Wed, 15 Jun 2022 12:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SZV-00085J-Hu; Wed, 15 Jun 2022 12:56:33 +0000
Received: by outflank-mailman (input) for mailman id 350104;
 Wed, 15 Jun 2022 12:56:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SZU-000857-3W; Wed, 15 Jun 2022 12:56:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SZT-0000be-WD; Wed, 15 Jun 2022 12:56:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SZT-0006Fl-GU; Wed, 15 Jun 2022 12:56:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SZT-0001J1-G2; Wed, 15 Jun 2022 12:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L+pqW2dJ6sj4FhZHCGxt0FyHeinTHFPTitdDuFaOurg=; b=sodKSe5e0tYby2YIlQV8Zpq8J/
	32cEaDwdYrLUlq9/82WuPENQ8Mfqx8GkOEz17JIO13an3tNp7mFcV9sWWPYRhajCXTSVFTgeQ2E8R
	utrcPOfLjRmfduJPozaUsmmDpj+qNoBnzEZJpPk89bWu1twIWa8h/hNE8XFzzrYwfv2Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171178: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c1d9760b1d847d983529eae2b360b38648841b5
X-Osstest-Versions-That:
    xen=162dea4e768b835114c736cfd3fa1fc3742d39c5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 12:56:31 +0000

flight 171178 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171178/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c1d9760b1d847d983529eae2b360b38648841b5
baseline version:
 xen                  162dea4e768b835114c736cfd3fa1fc3742d39c5

Last test of basis   171170  2022-06-15 00:02:01 Z    0 days
Testing same since   171178  2022-06-15 09:01:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jane Malalane <jane.malalane@citrix.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   162dea4e76..8c1d9760b1  8c1d9760b1d847d983529eae2b360b38648841b5 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 13:03:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 13:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350115.576342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SgW-0001Po-Cg; Wed, 15 Jun 2022 13:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350115.576342; Wed, 15 Jun 2022 13:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1SgW-0001Ph-9i; Wed, 15 Jun 2022 13:03:48 +0000
Received: by outflank-mailman (input) for mailman id 350115;
 Wed, 15 Jun 2022 13:03:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SgU-0001PX-Tc; Wed, 15 Jun 2022 13:03:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SgU-0000k3-QE; Wed, 15 Jun 2022 13:03:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SgU-0006RN-BR; Wed, 15 Jun 2022 13:03:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1SgU-00035N-Ax; Wed, 15 Jun 2022 13:03:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XAX2tyaZ646+/VWmoTFNsb/qLwGlgnJUrFCf1tL6v3o=; b=g7pDSnyOWAStlQidi33g0Lz8m4
	DbN6RCQDSvrkk90DZK6gEePaPQOzLDxxOhzslmk2GLo0KmRHuctGoWWvRznGDCiLsaS7JvyZLBWGJ
	JeA5BlspkJBVudPax/+vkqiPMV9OeAB7TSN+UJf3IGbz3nmi+j90mZXiXfq5w/yF1hdw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171171: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8e6c70b9d4a1b1f3011805947925cfdb31642f7f
X-Osstest-Versions-That:
    qemuu=debd0753663bc89c86f5462a53268f2e3f680f60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 13:03:46 +0000

flight 171171 qemu-mainline real [real]
flight 171179 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171171/
http://logs.test-lab.xenproject.org/osstest/logs/171179/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 171179-retest
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install fail pass in 171179-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 171179 like 171165
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 171179 like 171165
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 171179 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171165
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171165
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171165
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171165
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171165
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171165
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                8e6c70b9d4a1b1f3011805947925cfdb31642f7f
baseline version:
 qemuu                debd0753663bc89c86f5462a53268f2e3f680f60

Last test of basis   171165  2022-06-14 15:07:35 Z    0 days
Testing same since   171171  2022-06-15 00:07:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@gmail.com>
  Arnout Engelen <arnout@bzzt.net>
  Dongwon Kim <dongwon.kim@intel.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Hongren (Zenithal) Zheng <i@zenithal.me>
  Joelle van Dyne <j@getutm.app>
  Richard Henderson <richard.henderson@linaro.org>
  Volker Rümelin <vr_qemu@t-online.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   debd075366..8e6c70b9d4  8e6c70b9d4a1b1f3011805947925cfdb31642f7f -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 13:43:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 13:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350127.576354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TIj-0006o9-Id; Wed, 15 Jun 2022 13:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350127.576354; Wed, 15 Jun 2022 13:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TIj-0006o2-G5; Wed, 15 Jun 2022 13:43:17 +0000
Received: by outflank-mailman (input) for mailman id 350127;
 Wed, 15 Jun 2022 13:43:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AGfn=WW=bombadil.srs.infradead.org=BATV+cd04db8a85bff7cd13dc+6870+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1o1TIh-0006nv-9g
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 13:43:16 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1956d14c-ecb1-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 15:43:13 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1o1TIc-00ElmM-Nl; Wed, 15 Jun 2022 13:43:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1956d14c-ecb1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=9m0Yel3mKJUR0/9WLy9YG0v6kS0HnL5HFztBLkukfhU=; b=l5pMXoObGe9qCoN1JNmE4rMH8z
	CptrHdOdY0cHpAWepueUiYpA9ETdVP8UOjDY3K6o2t+eaxcm3lmVTzhQfxmAiRDh5s6GklPirU7qc
	7ItQTcUBhxkYiBo455aSSoG68s+N4EKRxBFhfDWelNWPOE1w3kOg0eYS18ESpI2iQwLAJHWmsN4iF
	PdbDG+CjA5QulntLirJsSzM9fKo7qUXPNgDWagSbJtr5R4znSLP1Gra8zg/HL7JXLdR/QCQHPggMG
	3j2UL7hEyIHJ8BuZ8rshW8HmAUOh7wJO1zCXDgpfqC3zV/6c6BS72/yDmGVaddn7Zq5ycCgaZ4DNy
	cpqCY3lA==;
Date: Wed, 15 Jun 2022 06:43:10 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: Christoph Hellwig <hch@infradead.org>, xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jonathan Corbet <corbet@lwn.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Message-ID: <Yqnh7vjO8iT4/fiK@infradead.org>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
 <YqnIWCXxsGzkfQp7@infradead.org>
 <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 15, 2022 at 02:53:54PM +0200, Juergen Gross wrote:
> On 15.06.22 13:54, Christoph Hellwig wrote:
> > On Wed, Jun 15, 2022 at 01:39:01PM +0200, Juergen Gross wrote:
> > > No, it doesn't. I'm working on a qemu patch series enabling the qemu
> > > based backends to support grants with virtio. The code is working fine
> > > on x86, too (apart from the fact that the backends aren't ready yet).
> > 
> > The code right now in mainline only ever sets the ops for DMA.  So
> > I can't see how you could make it work.
> 
> Ah, you are right. I was using a guest with an older version of the series.
> Sorry for the noise.

No problem.  But whatever you end up using to enable the grant DMA
ops on x86 should also require the platform access feature.  We already
have that information so we can make use of it.


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 13:47:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 13:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350134.576365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TMM-0007Qd-4c; Wed, 15 Jun 2022 13:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350134.576365; Wed, 15 Jun 2022 13:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TMM-0007QW-10; Wed, 15 Jun 2022 13:47:02 +0000
Received: by outflank-mailman (input) for mailman id 350134;
 Wed, 15 Jun 2022 13:47:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5J6g=WW=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1TML-0007QQ-EO
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 13:47:01 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a14e994d-ecb1-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 15:47:00 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E105D21C44;
 Wed, 15 Jun 2022 13:46:59 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A890113A35;
 Wed, 15 Jun 2022 13:46:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id VmCWJ9PiqWKiLAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 15 Jun 2022 13:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a14e994d-ecb1-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655300819; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=t8c6ISAdYWh/3XpBO9dSTJ2YoEerDJCxg3v6Cg36NWI=;
	b=ebBHbnYZMWGRp7C4ce9eYigmVRAlcjPIk/98LGdgKWQfvlsNZyq4CezBMmipkcMDDO5CgT
	p3yD8OzAhioZsuL+R09saUbA0FGzYil8/TgufQFqDDJ23uE6T8SBE1ULe6pDaZSuOVm2a9
	LDZDTV90P2TZOZe0xECHv56qnT+juCQ=
Message-ID: <0a1501fc-e03f-8a1d-76a1-c115606c6278@suse.com>
Date: Wed, 15 Jun 2022 15:46:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
Content-Language: en-US
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
 <YqnIWCXxsGzkfQp7@infradead.org>
 <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
 <Yqnh7vjO8iT4/fiK@infradead.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <Yqnh7vjO8iT4/fiK@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ag0QxufSzrrpFlmYdFmtHaa1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ag0QxufSzrrpFlmYdFmtHaa1
Content-Type: multipart/mixed; boundary="------------0baxSOydh0hsGx0Put4W0HVi";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <0a1501fc-e03f-8a1d-76a1-c115606c6278@suse.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
References: <20220615084835.27113-1-jgross@suse.com>
 <YqnBZhiLOHnoalbC@infradead.org>
 <9b9785f5-085b-0882-177f-d8418c366beb@suse.com>
 <YqnCZ+EKZeZ5AEnr@infradead.org>
 <c5a521e0-26b1-b1d6-7f7d-00aa9b4b1e0e@suse.com>
 <YqnIWCXxsGzkfQp7@infradead.org>
 <ab0653bc-7728-e24c-5d83-78cee135528c@suse.com>
 <Yqnh7vjO8iT4/fiK@infradead.org>
In-Reply-To: <Yqnh7vjO8iT4/fiK@infradead.org>

--------------0baxSOydh0hsGx0Put4W0HVi
Content-Type: multipart/mixed; boundary="------------yO30fh0Q0Jn02q3nM59fph43"

--------------yO30fh0Q0Jn02q3nM59fph43
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.06.22 15:43, Christoph Hellwig wrote:
> On Wed, Jun 15, 2022 at 02:53:54PM +0200, Juergen Gross wrote:
>> On 15.06.22 13:54, Christoph Hellwig wrote:
>>> On Wed, Jun 15, 2022 at 01:39:01PM +0200, Juergen Gross wrote:
>>>> No, it doesn't. I'm working on a qemu patch series enabling the qemu
>>>> based backends to support grants with virtio. The code is working fine
>>>> on x86, too (apart from the fact that the backends aren't ready yet).
>>>
>>> The code right now in mainline only ever sets the ops for DMA.  So
>>> I can't see how you could make it work.
>>
>> Ah, you are right. I was using a guest with an older version of the series.
>> Sorry for the noise.
> 
> No problem.  But whatever you end up using to enable the grant DMA
> ops n x86 should also require the platform access feature.  We already
> have that information so we can make use of it.

Yes, of course.


Juergen

--------------yO30fh0Q0Jn02q3nM59fph43
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------yO30fh0Q0Jn02q3nM59fph43--

--------------0baxSOydh0hsGx0Put4W0HVi--

--------------ag0QxufSzrrpFlmYdFmtHaa1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKp4tMFAwAAAAAACgkQsN6d1ii/Ey9o
ZQf/bERS+itpRsVc5LubFWQpXz+QtHmp7xtjzZR9SBcdwrciAvH05Tjkabht1CobtWXS/qJwT6RP
GpMmfNa8JTdkNTTuDai4B4d5SNOEaDdOrI4rDPlFVBUU1zny2ZdAagAGZoNXJY34m2iw74y12BIL
42JHw3AnqXVAA/sgVMwYYGjQIccTDT2UUCqPDGL03662ZwGRV1YFPy0t4EebQx6QrrpWv4R3ixqg
3qMhwYlsH7S2BVs2yFDc8psGlPn5uBFsayJcuiL15OLvrmKoCn16mrbLhZpDkpQBYKuym31Iduo8
3olWAUdMp+Zn1Gc4vBKe5BP4Borm144RUhDNZURRrA==
=VSIN
-----END PGP SIGNATURE-----

--------------ag0QxufSzrrpFlmYdFmtHaa1--


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:21:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:21:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350144.576384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ttv-0004PH-19; Wed, 15 Jun 2022 14:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350144.576384; Wed, 15 Jun 2022 14:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ttu-0004PA-UO; Wed, 15 Jun 2022 14:21:42 +0000
Received: by outflank-mailman (input) for mailman id 350144;
 Wed, 15 Jun 2022 14:21:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Ttt-0004Ok-AA; Wed, 15 Jun 2022 14:21:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Ttt-0003PX-5v; Wed, 15 Jun 2022 14:21:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Tts-0008OV-TT; Wed, 15 Jun 2022 14:21:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1Tts-00074d-T1; Wed, 15 Jun 2022 14:21:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WSu95nVJNp+RaaoBCPdlVmKvCg75SRLLnLNP4oQTnWI=; b=cC29vptGIFiUaT0hUI90yF0onH
	VE98WHdcfYayb8FvwK3eTQ7abDRWZWb2wrVSYPxX1skWjYTWunUn9iayNMv8JKNXpt2uszJXKOeGN
	y458csuWY8XqCLNiXjqpa/Rb7h2Lmgg5CDLVjlHNztdURkvYOZDoeE+64nHNG2EMYEUQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171173: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
X-Osstest-Versions-That:
    linux=35c6471fd2c181f6e5e0b292dc759b49dbd95d6a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 14:21:40 +0000

flight 171173 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171173/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 170846
 test-armhf-armhf-xl-credit1  14 guest-start                  fail  like 170860
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170895
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170895
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170895
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170895
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170895
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170895
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170895
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170895
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170895
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170895
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170895
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d
baseline version:
 linux                35c6471fd2c181f6e5e0b292dc759b49dbd95d6a

Last test of basis   170895  2022-06-09 06:12:49 Z    6 days
Testing same since   171167  2022-06-14 16:42:25 Z    0 days    2 attempts

------------------------------------------------------------
392 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   35c6471fd2c1..9d6e67bf5090  9d6e67bf50908cc661972969e8f073ec1d1bc97d -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:24:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350152.576394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TwV-00054E-JG; Wed, 15 Jun 2022 14:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350152.576394; Wed, 15 Jun 2022 14:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1TwV-000547-Gj; Wed, 15 Jun 2022 14:24:23 +0000
Received: by outflank-mailman (input) for mailman id 350152;
 Wed, 15 Jun 2022 14:24:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9BPj=WW=citrix.com=prvs=158bd2e8d=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o1TwT-000541-OO
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:24:21 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d737b8c5-ecb6-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 16:24:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d737b8c5-ecb6-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655303059;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=uyGzlJigE87uMdbioX2cHoy8gZ64p4MGoSKppMcIGjY=;
  b=SwXkCQhcMLirwp+m8mXRGCKz1dwQWHhOwZwUm434CWtPu/3PPZlJdKhd
   mMXQLJuAg7VFMk5576VVPB+GWRJ+xgY4BpgZkG3rw+1P1fYaHEg7gRUO8
   lCoQvPTpyQ6E7zmqApxtVMvMfHJdn1slkJIwDtRDsUuSHSzcIagtFXUiL
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76225591
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,302,1647316800"; 
   d="scan'208";a="76225591"
Date: Wed, 15 Jun 2022 15:23:54 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Julien Grall" <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
Message-ID: <YqnrerEAFXJUCRL1@perard.uk.xensource.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
 <1655143522-14356-2-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1655143522-14356-2-git-send-email-olekstysh@gmail.com>

On Mon, Jun 13, 2022 at 09:05:20PM +0300, Oleksandr Tyshchenko wrote:
> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
> index a5ca778..673b0d6 100644
> --- a/tools/libs/light/libxl_disk.c
> +++ b/tools/libs/light/libxl_disk.c
> @@ -575,6 +660,41 @@ cleanup:
>      return rc;
>  }
>  
> +static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
> +                                       char **path)
> +{
> +    const char *xen_dir, *virtio_dir;
> +    char *xen_path, *virtio_path;
> +    int rc;
> +
> +    /* default path */
> +    xen_path = GCSPRINTF("%s/device/%s",
> +                         libxl__xs_libxl_path(gc, domid),
> +                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
> +
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, xen_path, &xen_dir);
> +    if (rc)
> +        return rc;
> +
> +    virtio_path = GCSPRINTF("%s/device/%s",
> +                            libxl__xs_libxl_path(gc, domid),
> +                            libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
> +
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, virtio_path, &virtio_dir);
> +    if (rc)
> +        return rc;
> +
> +    if (xen_dir && virtio_dir) {
> +        LOGD(ERROR, domid, "Invalid configuration, both xen and virtio paths are present");
> +        return ERROR_INVAL;
> +    } else if (virtio_dir)
> +        *path = virtio_path;
> +    else
> +        *path = xen_path;

Small coding style issue: could you use blocks {} on all parts of the
if...else, since you are already using them on one of the blocks? This is
described in tools/libs/light/CODING_STYLE (5. Block structure).

> diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
> index 70eed43..f2b0ff5 100644
> --- a/tools/xl/xl_block.c
> +++ b/tools/xl/xl_block.c
> @@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
>          return 0;
>      }
>  
> +    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
> +        fprintf(stderr, "block-attach is only supported for specification xen\n");

This check prevents a previously working `block-attach` command line
from working.

    # xl -Tvvv block-attach 0 /dev/vg/guest_disk,raw,hda
    block-attach is only supported for specification xen

At least it works after adding ",specification=xen", but it should work
without that, as "xen" is the default (per the man page).

Maybe the check is done too soon, or maybe a better place to do it would
be in libxl.

libxl__device_disk_setdefault() is called much later, while executing
libxl_device_disk_add(), so `xl` can't rely on the default applied there
to "disk.specification".

`xl block-attach` calls libxl_device_disk_add(), which I think is only
called for hotplug of disks. If I recall correctly, libxl__add_disks() is
called instead at guest creation. So maybe it is possible to do
something in libxl_device_disk_add(), but that is a function defined by a
macro, and the macro uses the same underlying libxl__device_disk_add().
On the other hand, there is a "hotplug" parameter to
libxl__device_disk_setdefault(); maybe that could be used?


Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:26:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:26:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350159.576406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ty5-0005eS-Ub; Wed, 15 Jun 2022 14:26:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350159.576406; Wed, 15 Jun 2022 14:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Ty5-0005eL-RE; Wed, 15 Jun 2022 14:26:01 +0000
Received: by outflank-mailman (input) for mailman id 350159;
 Wed, 15 Jun 2022 14:26:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1Ty4-0005eF-9b
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:26:00 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13178f11-ecb7-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 16:25:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7095.eurprd04.prod.outlook.com (2603:10a6:20b:11c::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Wed, 15 Jun
 2022 14:25:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 14:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13178f11-ecb7-11ec-ab14-113154c10af9
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9c7c11f5-be1e-f0ef-0659-48026675ec1a@suse.com>
Date: Wed, 15 Jun 2022 16:25:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v1 01/10] drivers/char: Add support for Xue USB3 debugger
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Connor Davis <davisc@ainfosec.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <87c73737fe8ec6d9fe31c844b72b6c979b90c25d.1654612169.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87c73737fe8ec6d9fe31c844b72b6c979b90c25d.1654612169.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM5PR1001CA0008.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:2::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b354d300-cdb5-4d5d-28c9-08da4edaf60a
X-MS-TrafficTypeDiagnostic: AM7PR04MB7095:EE_
X-Microsoft-Antispam-PRVS:
	<AM7PR04MB7095F1F290DA62BD26E710EBB3AD9@AM7PR04MB7095.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com

On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
> From: Connor Davis <davisc@ainfosec.com>
> 
> [Connor]
> Xue is a cross-platform USB 3 debugger that drives the Debug
> Capability (DbC) of xHCI-compliant host controllers. This patch
> implements the operations needed for xue to initialize the host
> controller's DbC and communicate with it. It also implements a struct
> uart_driver that uses xue as a backend. Note that only target -> host
> communication is supported for now. To use Xue as a console, add
> 'console=dbgp dbgp=xue' to the command line.

Which suggests that the command line doc also wants updating here.

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -74,3 +74,10 @@ config HAS_EHCI
>  	help
>  	  This selects the USB based EHCI debug port to be used as a UART. If
>  	  you have an x86 based system with USB, say Y.
> +
> +config HAS_XHCI
> +	bool
> +	depends on X86

This isn't very useful when there's no prompt, as then only a "select"
elsewhere can enable the option. Even if HAS_EHCI has it (for whatever
reason), I'd prefer if we could avoid proliferation of the oddity.

> +	help
> +      This selects the USB based XHCI debug capability to be used as a UART. If
> +      you have an x86 based system with USB3, say Y.

Indentation looks wrong here - the HAS_EHCI one in context shows how
it's expected to be.

> --- /dev/null
> +++ b/xen/drivers/char/xue.c
> @@ -0,0 +1,957 @@
> +/*
> + * drivers/char/xue.c
> + *
> + * Xen port for the xue debugger

Since even here it's not spelled out - may I ask what "xue" actually
stands for (assuming it's an acronym)?

> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + *
> + * Copyright (c) 2019 Assured Information Security.
> + */
> +
> +#include <xen/delay.h>
> +#include <xen/types.h>
> +#include <asm/string.h>
> +#include <asm/system.h>
> +#include <xen/serial.h>
> +#include <xen/timer.h>
> +#include <xen/param.h>
> +#include <asm/fixmap.h>
> +#include <asm/io.h>
> +#include <xen/mm.h>
> +
> +#define XUE_POLL_INTERVAL 100 /* us */
> +
> +#define XUE_PAGE_SIZE 4096ULL
> +
> +/* Supported xHC PCI configurations */
> +#define XUE_XHC_CLASSC 0xC0330ULL

While I'm not meaning to fully review the code in this file (for lack
of knowledge on xhci), the two ULL suffixes above strike me as odd.
Is there a specific reason these can't just be U?

> +/* DbC idVendor and idProduct */
> +#define XUE_DBC_VENDOR 0x1D6B
> +#define XUE_DBC_PRODUCT 0x0010
> +#define XUE_DBC_PROTOCOL 0x0000
> +
> +/* DCCTRL fields */
> +#define XUE_CTRL_DCR 0
> +#define XUE_CTRL_HOT 2
> +#define XUE_CTRL_HIT 3
> +#define XUE_CTRL_DRC 4
> +#define XUE_CTRL_DCE 31
> +
> +/* DCPORTSC fields */
> +#define XUE_PSC_PED 1
> +#define XUE_PSC_CSC 17
> +#define XUE_PSC_PRC 21
> +#define XUE_PSC_PLC 22
> +#define XUE_PSC_CEC 23
> +
> +#define XUE_PSC_ACK_MASK                                                       \
> +    ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
> +     (1UL << XUE_PSC_CEC))
> +
> +#define xue_debug(...) printk("xue debug: " __VA_ARGS__)
> +#define xue_alert(...) printk("xue alert: " __VA_ARGS__)
> +#define xue_error(...) printk("xue error: " __VA_ARGS__)
> +
> +/******************************************************************************
> + * TRB ring (summarized from the manual):
> + *
> + * TRB rings are circular queues of TRBs shared between the xHC and the driver.
> + * Each ring has one producer and one consumer. The DbC has one event
> + * ring and two transfer rings; one IN and one OUT.
> + *
> + * The DbC hardware is the producer on the event ring, and
> + * xue is the consumer. This means that event TRBs are read-only from
> + * the xue.
> + *
> + * OTOH, xue is the producer of transfer TRBs on the two transfer
> + * rings, so xue enqueues transfers, and the hardware dequeues
> + * them. The dequeue pointer of a transfer ring is read by
> + * xue by examining the latest transfer event TRB on the event ring. The
> + * transfer event TRB contains the address of the transfer TRB that generated
> + * the event.
> + *
> + * To make each transfer ring circular, the last TRB must be a link TRB, which
> + * points to the beginning of the next queue. Note that this implementation
> + * does not support multiple segments, so each link TRB points back to the
> + * beginning of its own segment.
> + ******************************************************************************/
> +
> +/* TRB types */
> +enum {
> +    xue_trb_norm = 1,
> +    xue_trb_link = 6,
> +    xue_trb_tfre = 32,
> +    xue_trb_psce = 34
> +};
> +
> +/* TRB completion codes */
> +enum { xue_trb_cc_success = 1, xue_trb_cc_trb_err = 5 };
> +
> +/* DbC endpoint types */
> +enum { xue_ep_bulk_out = 2, xue_ep_bulk_in = 6 };
> +
> +/* DMA/MMIO structures */
> +#pragma pack(push, 1)

Why? There's no mis-aligned field ...

> +struct xue_trb {
> +    uint64_t params;
> +    uint32_t status;
> +    uint32_t ctrl;
> +};
> +
> +struct xue_erst_segment {
> +    uint64_t base;
> +    uint16_t size;
> +    uint8_t rsvdz[6];
> +};
> +
> +#define XUE_CTX_SIZE 16
> +#define XUE_CTX_BYTES (XUE_CTX_SIZE * 4)
> +
> +struct xue_dbc_ctx {
> +    uint32_t info[XUE_CTX_SIZE];
> +    uint32_t ep_out[XUE_CTX_SIZE];
> +    uint32_t ep_in[XUE_CTX_SIZE];
> +};
> +
> +struct xue_dbc_reg {
> +    uint32_t id;
> +    uint32_t db;
> +    uint32_t erstsz;
> +    uint32_t rsvdz;
> +    uint64_t erstba;
> +    uint64_t erdp;
> +    uint32_t ctrl;
> +    uint32_t st;
> +    uint32_t portsc;
> +    uint32_t rsvdp;
> +    uint64_t cp;
> +    uint32_t ddi1;
> +    uint32_t ddi2;
> +};
> +#pragma pack(pop)

... anywhere in these structures, afaict.
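
As an aside, the single-segment ring behaviour described in the quoted
TRB comment block (the last slot holding a link TRB that points back to
the start of the segment, with the producer cycle state toggling on each
wrap) can be sketched in a few lines. This is a simplified, hypothetical
model for illustration only - not the driver's actual code - with the
field layout reduced to what the example needs:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified single-segment TRB ring: 64 slots, the last reserved for
 * a link TRB pointing back to slot 0, making the ring circular. */
#define RING_CAP 64
#define TRB_NORM  1
#define TRB_LINK  6

struct trb {
    uint64_t params;  /* buffer address (normal) or ring base (link) */
    uint32_t status;  /* transfer length in the low 17 bits */
    uint32_t ctrl;    /* TRB type in bits 15:10, cycle bit in bit 0 */
};

struct ring {
    struct trb trb[RING_CAP];
    uint32_t enq;     /* producer (enqueue) index */
    uint32_t cycle;   /* producer cycle state */
};

static uint32_t trb_type(const struct trb *t)
{
    return (t->ctrl >> 10) & 0x3f;
}

/* Enqueue one normal TRB; wrap via a link TRB in the last slot. */
static void ring_push(struct ring *r, uint64_t buf, uint32_t len)
{
    struct trb *t;

    if ( r->enq == RING_CAP - 1 )
    {
        /* Last slot: emit a link TRB back to the segment start and
         * toggle the producer cycle state. */
        t = &r->trb[r->enq];
        t->params = (uintptr_t)r->trb;
        t->ctrl = (TRB_LINK << 10) | r->cycle;
        r->enq = 0;
        r->cycle ^= 1;
    }

    t = &r->trb[r->enq++];
    t->params = buf;
    t->status = len & 0x1ffff;
    t->ctrl = (TRB_NORM << 10) | r->cycle;
}
```

(The consumer side - the hardware, for transfer rings - follows the
cycle bit to find valid TRBs; that part is omitted here.)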

> +static void xue_sys_pause(void)
> +{
> +    __asm volatile("pause" ::: "memory");

Nit: In Xen we don't normally use __asm, but asm.

> +static void xue_flush_range(struct xue *xue, void *ptr, uint32_t bytes)
> +{
> +    uint32_t i;
> +
> +    const uint32_t clshft = 6;
> +    const uint32_t clsize = (1UL << clshft);
> +    const uint32_t clmask = clsize - 1;
> +
> +    uint32_t lines = (bytes >> clshft);
> +    lines += (bytes & clmask) != 0;
> +
> +    for ( i = 0; i < lines; i++ )
> +        clflush((void *)((uint64_t)ptr + (i * clsize)));
> +}

Please drop this function (which doesn't even use its first parameter)
and use cache_flush() instead. (Taking this as an example though, please
see ./CODING_STYLE for the use of fixed-width types. I understand
there are many places where their use is appropriate in a driver, but
none of the uses above look to fall in this class.)
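
For illustration, the shift/mask arithmetic in the quoted function is
just a round-up division by the cache line size (which Xen's
DIV_ROUND_UP() macro expresses directly). A minimal standalone sketch,
assuming 64-byte lines as the shift of 6 above does:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64u

/* Number of cache lines covering a byte range; equivalent to the
 * open-coded "lines = bytes >> 6; lines += (bytes & 63) != 0". */
static size_t lines_covering(size_t bytes)
{
    return (bytes + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE;
}
```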

> +static int xue_init_xhc(struct xue *xue)

Looks like this function returns boolean, and hence wants to have bool
return type. Also looks like this can be __init. Both remarks similarly
apply to xue_open() (the only caller of this function).

> +{
> +    uint32_t bar0;
> +    uint64_t bar1;
> +    uint64_t devfn;
> +
> +    /*
> +     * Search PCI bus 0 for the xHC. All the host controllers supported so far
> +     * are part of the chipset and are on bus 0.
> +     */
> +    for ( devfn = 0; devfn < 256; devfn++ ) {

Nit: Xen style wants the brace on its own line, like you have almost
everywhere else.

> +        uint32_t dev = (devfn & 0xF8) >> 3;
> +        uint32_t fun = devfn & 0x07;
> +        pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);

This is at best an abuse, as per

#define PCI_SBDF(s, b, d...) PCI_(SBDF, count_args(s, b, ##d))(s, b, ##d)

But really I think this generates the wrong coordinates. What you can
fold is devfn, i.e. PCI_SBDF(0, 0, devfn). (If you don't want to use
that form, then please avoid open-coding PCI_SLOT() and PCI_FUNC().)
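
For reference, the slot/function extraction being open-coded above is
the standard devfn packing; a standalone sketch of the macros (shown
here for illustration, mirroring the usual pci_regs.h-style
definitions):

```c
#include <assert.h>

/* devfn is 5 bits of slot followed by 3 bits of function. */
#define PCI_SLOT(devfn)       (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)       ((devfn) & 0x07)
#define PCI_DEVFN(slot, func) ((((slot) & 0x1f) << 3) | ((func) & 0x07))
```

Iterating devfn 0..255 and splitting/recombining with these macros is a
lossless round trip, which is why folding the loop variable into
PCI_SBDF(0, 0, devfn) works.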

> +        uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
> +
> +        if ( hdr == 0 || hdr == 0x80 )
> +        {
> +            if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
> +            {
> +                xue->sbdf = sbdf;
> +                break;
> +            }
> +        }
> +    }
> +
> +    if ( !xue->sbdf.sbdf )
> +    {
> +        xue_error("Compatible xHC not found on bus 0\n");
> +        return 0;
> +    }
> +
> +    /* ...we found it, so parse the BAR and map the registers */
> +    bar0 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0);
> +    bar1 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_1);

What if there are multiple?

> +    /* IO BARs not allowed; BAR must be 64-bit */
> +    if ( (bar0 & 0x1) != 0 || ((bar0 & 0x6) >> 1) != 2 )

Please don't open-code constants from xen/pci.h.

> +        return 0;
> +
> +    pci_conf_write32(xue->sbdf, PCI_BASE_ADDRESS_0, 0xFFFFFFFF);
> +    xue->xhc_mmio_size = ~(pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0) & 0xFFFFFFF0) + 1;

Same here and ...

> +    pci_conf_write32(xue->sbdf, PCI_BASE_ADDRESS_0, bar0);
> +
> +    xue->xhc_mmio_phys = (bar0 & 0xFFFFFFF0) | (bar1 << 32);

... here.
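
To illustrate what using named constants instead of the magic numbers
would look like, here is a standalone sketch of 64-bit memory BAR
decoding and sizing. The constants are copied locally to keep the
example self-contained; their values match the common pci_regs.h
definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_SPACE_IO      0x01u
#define PCI_BASE_ADDRESS_MEM_TYPE_MASK 0x06u
#define PCI_BASE_ADDRESS_MEM_TYPE_64   0x04u
#define PCI_BASE_ADDRESS_MEM_MASK      0xfffffff0u

/* Memory BAR (not I/O) with the 64-bit type encoding. */
static bool bar_is_mem64(uint32_t bar0)
{
    return !(bar0 & PCI_BASE_ADDRESS_SPACE_IO) &&
           (bar0 & PCI_BASE_ADDRESS_MEM_TYPE_MASK) ==
               PCI_BASE_ADDRESS_MEM_TYPE_64;
}

/* Combine the low and high halves of a 64-bit BAR pair. */
static uint64_t bar64_addr(uint32_t bar0, uint32_t bar1)
{
    return (bar0 & PCI_BASE_ADDRESS_MEM_MASK) | ((uint64_t)bar1 << 32);
}

/* Size from the value read back after writing all-ones to the BAR. */
static uint32_t bar_size32(uint32_t readback)
{
    return ~(readback & PCI_BASE_ADDRESS_MEM_MASK) + 1;
}
```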

> +static struct uart_driver xue_uart_driver = {
> +    .init_preirq = xue_uart_init_preirq,
> +    .init_postirq = xue_uart_init_postirq,
> +    .endboot = NULL,
> +    .suspend = NULL,
> +    .resume = NULL,
> +    .tx_ready = xue_uart_tx_ready,
> +    .putc = xue_uart_putc,
> +    .flush = xue_uart_flush,
> +    .getc = NULL
> +};

Please omit the NULL initializers.

> +static struct xue_trb evt_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +static struct xue_trb out_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +static struct xue_trb in_trb[XUE_TRB_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +static struct xue_erst_segment erst __aligned(64);
> +static struct xue_dbc_ctx ctx __aligned(64);
> +static uint8_t wrk_buf[XUE_WORK_RING_CAP] __aligned(XUE_PAGE_SIZE);
> +static char str_buf[XUE_PAGE_SIZE] __aligned(64);
> +static char __initdata opt_dbgp[30];
> +
> +string_param("dbgp", opt_dbgp);
> +
> +void __init xue_uart_init(void)
> +{
> +    struct xue_uart *uart = &xue_uart;
> +    struct xue *xue = &uart->xue;
> +
> +    if ( strncmp(opt_dbgp, "xue", 3) )
> +        return;
> +
> +    memset(xue, 0, sizeof(*xue));
> +
> +    xue->dbc_ctx = &ctx;
> +    xue->dbc_erst = &erst;
> +    xue->dbc_ering.trb = evt_trb;
> +    xue->dbc_oring.trb = out_trb;
> +    xue->dbc_iring.trb = in_trb;
> +    xue->dbc_owork.buf = wrk_buf;
> +    xue->dbc_str = str_buf;

Especially the page-sized entities want allocating dynamically here, as
they won't be needed without the command line option requesting the use
of this driver.

> +    xue_open(xue);

No check of return value?

> +    serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
> +}
> +
> +void xue_uart_dump(void)
> +{
> +    struct xue_uart *uart = &xue_uart;
> +    struct xue *xue = &uart->xue;
> +
> +    xue_dump(xue);
> +}

This function looks to be unused (and lacks a declaration).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:26:38 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171174-mainreport@xen.org>
Subject: [xen-unstable test] 171174: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 14:26:35 +0000

flight 171174 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171174/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build   fail in 171157 REGR. vs. 171174

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 171157

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 171157 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 171157 n/a
 test-amd64-i386-libvirt-raw   7 xen-install         fail in 171157 like 171151
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 171157 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 171157 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171157
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171157
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171157
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171157
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171157
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171157
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171157
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171157
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171157
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171157
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171157
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171157
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171174  2022-06-15 01:53:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:35:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350180.576427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1U77-0007uP-6k; Wed, 15 Jun 2022 14:35:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350180.576427; Wed, 15 Jun 2022 14:35:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1U77-0007uI-43; Wed, 15 Jun 2022 14:35:21 +0000
Received: by outflank-mailman (input) for mailman id 350180;
 Wed, 15 Jun 2022 14:35:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1U76-0007uC-Dr
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:35:20 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe06::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 611f9939-ecb8-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 16:35:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7412.eurprd04.prod.outlook.com (2603:10a6:20b:1df::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 15 Jun
 2022 14:35:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 14:35:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 611f9939-ecb8-11ec-ab14-113154c10af9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mLqMzH6OssWr9ZKwuiGp5KM0Gh4kZVOB16XYAZ55PShiZ6UaSPV2tqkv9QvIdgj6qobhNEvgwyjC/RY6ty5Cyud7DY5iP00ntXc4Zq+2UzIinu5qgdcUMMgWuj/GmYCc4AB3nl/ai8kF/MH3/rne8LSzcNZ1do1Y62PW3fuVyiv4cu5kPepG8YeWDsaP2or3n1VrY0RL9GAIMCl5eVaTJ3tOOvqeOtzpa/TNah0vFGArBC791QUNULUrpx2jgdT8jpg3pajMIXGr6+zq1O490AJrJk0pzuKFAHxadXgxJDCxP33J36SuIGdP1uqEY33YgOCCWYzuS1CGjAUzK6R7Sw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WeHCGjt12bsp6OuqvQHlX0XzDaFGViCx4bAk4abJIQk=;
 b=F9Z2vX//HjOwCKWMZkf9Qr+OqCqdHcizBWW3oO+DS5s2sq4gKnGGpXmwQwwqb3JkWEis+WE7dpVvGkDx271b5y6tbzvQ1IUKcmZtvzQ61R5PjERqLhJ6agvOkAtLOr4E5lcT13RYQ7ICIGyEr1q8mjitOg0yc1H6ZfUEL3Ib2/QRmBkmwATW1wjYVOg1jTpjiX+uBqYY40DTUkWm1GuWB8mw6LerS43Xmxh+60dfm+2Q9WP+jGuFjVlk5L39EBWs9H2q5uPgcvWjDUdhWoVy1HPrFg/pmEuhx2jdqpIqul2NJ8DGJ1HYenXVHDn+0A/tyMf94hlyB3T7IvlcnZc9xA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WeHCGjt12bsp6OuqvQHlX0XzDaFGViCx4bAk4abJIQk=;
 b=gcHmnBaWQuvrHMqBU/YJyE96UTcdO/K2/h/naRxVGPjHmq21QAvXn8K++uc7OmhhLf+Msof7jZhEoPLMtHaa1BrUOr2MW+MreyMJkka6oQfdiazIeRSAdjV67xyD/S0AvQaO7Hwdrbkytz+TQVqZYxBD9Z6ua0VIeDqTYduMil8kw4DjSSyqsxNFGenr5l9C9sYQ7AzPPQZR2RM71DZZ4R1YmT7SDoBRjN2LyohdK3OTm4IjIPeSWBxMwL4wm5vxwajkHlypKvwkzx0UmNXumSI2fJZqxnzSzyZCsjnK8UtoUmTpoXZS0je+TuHBpFbAgMoqLL16SyZ/Optc+q02Ig==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <add476e5-dc49-9bf6-873a-174f6a3a7292@suse.com>
Date: Wed, 15 Jun 2022 16:35:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v1 02/10] xue: reset XHCI ports when initializing dbc
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <f89ad3528e9d57e4598ac450f08a81391538fa69.1654612169.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f89ad3528e9d57e4598ac450f08a81391538fa69.1654612169.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS8PR07CA0033.eurprd07.prod.outlook.com
 (2603:10a6:20b:459::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: faceca25-92a7-4f4c-68d8-08da4edc4402
X-MS-TrafficTypeDiagnostic: AM8PR04MB7412:EE_
X-Microsoft-Antispam-PRVS:
	<AM8PR04MB7412610F2779E5C5DCBC25D9B3AD9@AM8PR04MB7412.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: faceca25-92a7-4f4c-68d8-08da4edc4402
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 14:35:16.8845
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LCwzd9iGZmwzIyOvv8S/C5dZ1dNDcswK1JjjFpanImSCud/pmOkql9EfVf/VJIg0tw4WpYZQlKzObpJs+H3JsQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7412

On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
> Reset ports, to force the host system to re-enumerate devices. Otherwise it
> will require the cable to be re-plugged, or will wait in the
> "configuring" state indefinitely.
> 
> Trick and code copied from Linux:
> drivers/usb/early/xhci-dbc.c:xdbc_start()->xdbc_reset_debug_port()
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Just two style nits:

> --- a/xen/drivers/char/xue.c
> +++ b/xen/drivers/char/xue.c
> @@ -60,6 +60,10 @@
>      ((1UL << XUE_PSC_CSC) | (1UL << XUE_PSC_PRC) | (1UL << XUE_PSC_PLC) |      \
>       (1UL << XUE_PSC_CEC))
>  
> +#define     XUE_XHC_EXT_PORT_MAJOR(x)  (((x) >> 24) & 0xff)
> +#define PORT_RESET  (1 << 4)
> +#define PORT_CONNECT  (1 << 0)

Odd multiple blanks on the first of the lines you add.

> @@ -604,6 +608,68 @@ static void xue_init_strings(struct xue *xue, uint32_t *info)
>      info[8] = (4 << 24) | (30 << 16) | (8 << 8) | 6;
>  }
>  
> +static void xue_do_reset_debug_port(struct xue *xue, u32 id, u32 count)
> +{
> +    uint32_t *ops_reg;
> +    uint32_t *portsc;
> +    u32 val, cap_length;
> +    int i;
> +
> +    cap_length = (*(uint32_t*)xue->xhc_mmio) & 0xff;
> +    ops_reg = xue->xhc_mmio + cap_length;
> +
> +    id--;
> +    for ( i = id; i < (id + count); i++ )
> +    {
> +        portsc = ops_reg + 0x100 + i * 0x4;
> +        val = *portsc;
> +        if ( !(val & PORT_CONNECT) )
> +            *portsc = val | PORT_RESET;
> +    }
> +}
> +
> +
> +static void xue_reset_debug_port(struct xue *xue)

Please don't add double blank lines.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:38:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:38:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350186.576438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UA6-0000KW-M5; Wed, 15 Jun 2022 14:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350186.576438; Wed, 15 Jun 2022 14:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UA6-0000KP-JF; Wed, 15 Jun 2022 14:38:26 +0000
Received: by outflank-mailman (input) for mailman id 350186;
 Wed, 15 Jun 2022 14:38:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kvx/=WW=igalia.com=gpiccoli@srs-se1.protection.inumbo.net>)
 id 1o1UA5-0000KJ-I0
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:38:25 +0000
Received: from fanzine2.igalia.com (fanzine.igalia.com [178.60.130.6])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce622bdb-ecb8-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 16:38:23 +0200 (CEST)
Received: from 179.red-81-39-194.dynamicip.rima-tde.net ([81.39.194.179]
 helo=[192.168.15.167]) by fanzine2.igalia.com with esmtpsa 
 (Cipher TLS1.3:ECDHE_X25519__RSA_PSS_RSAE_SHA256__AES_128_GCM:128) (Exim)
 id 1o1U8m-002LYs-SW; Wed, 15 Jun 2022 16:37:04 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce622bdb-ecb8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=igalia.com;
	s=20170329; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID:Sender:Reply-To:
	Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
	Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=75kgVCcqOPOLeu4JBiLLFab/dM4uNZ1dcLuvpDx9ZCg=; b=kAIWl67VuUe8cBDkmFOZTh4b+J
	Uy6PgE2+CwIEzhFEfsXCTAadcMkk5TefuTIBHvgzxNrjmUz+/r/f3fMBR5i8mMC7UweLMh9MkeJoD
	97gkOJRUGibA77qDy9s4vL7NkQkS1Sv+TX4WpruLFHmbGvImVgE9LeYadgdIraA8AGxMlB+PxaWy/
	1adKvJuvvpCbTsZosCqi5SMDlMupfKEgEY7JpKMS8HhvbF7gyowCTheRqW0YX0S9D05qpW9G1fKLt
	DJUWQ6k8Tz/HxaHb6kuJRSWnDBXrsJA/DQco3hh/lFSm8XLWvSeNFNfH2bKr+ugXM0hH/Gw2zl2s+
	laFslj7w==;
Message-ID: <362f6520-8209-1721-823c-11928338f57d@igalia.com>
Date: Wed, 15 Jun 2022 11:36:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 24/30] panic: Refactor the panic path
Content-Language: en-US
To: Petr Mladek <pmladek@suse.com>
Cc: bhe@redhat.com, d.hatayama@jp.fujitsu.com,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Mark Rutland <mark.rutland@arm.com>, mikelley@microsoft.com,
 vkuznets@redhat.com, akpm@linux-foundation.org, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
 linuxppc-dev@lists.ozlabs.org, linux-alpha@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org,
 linux-hyperv@vger.kernel.org, linux-leds@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 linux-pm@vger.kernel.org, linux-remoteproc@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org,
 linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org,
 netdev@vger.kernel.org, openipmi-developer@lists.sourceforge.net,
 rcu@vger.kernel.org, sparclinux@vger.kernel.org,
 xen-devel@lists.xenproject.org, x86@kernel.org, kernel-dev@igalia.com,
 kernel@gpiccoli.net, halves@canonical.com, fabiomirmar@gmail.com,
 alejandro.j.jimenez@oracle.com, andriy.shevchenko@linux.intel.com,
 arnd@arndb.de, bp@alien8.de, corbet@lwn.net, dave.hansen@linux.intel.com,
 dyoung@redhat.com, feng.tang@intel.com, gregkh@linuxfoundation.org,
 hidehiro.kawai.ez@hitachi.com, jgross@suse.com, john.ogness@linutronix.de,
 keescook@chromium.org, luto@kernel.org, mhiramat@kernel.org,
 mingo@redhat.com, paulmck@kernel.org, peterz@infradead.org,
 rostedt@goodmis.org, senozhatsky@chromium.org, stern@rowland.harvard.edu,
 tglx@linutronix.de, vgoyal@redhat.com, will@kernel.org
References: <20220427224924.592546-1-gpiccoli@igalia.com>
 <20220427224924.592546-25-gpiccoli@igalia.com>
 <87fskzuh11.fsf@email.froward.int.ebiederm.org>
 <0d084eed-4781-c815-29c7-ac62c498e216@igalia.com> <Yqic0R8/UFqTbbMD@alley>
From: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
In-Reply-To: <Yqic0R8/UFqTbbMD@alley>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Perfect Petr, thanks for your feedback!

I'll be out for some weeks, but after that my plan is to split the
series into 2 parts:

(a) The general fixes, which should be reviewed by subsystem maintainers
and may even be merged individually by them.

(b) The proper panic refactor, which includes the notifiers list split,
etc. I'll think about the best solution for the ones required by
crash_dump, and will try to split it into very simple patches to make
review easier.

Cheers,


Guilherme


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:40:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:40:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350197.576461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UCJ-000257-HL; Wed, 15 Jun 2022 14:40:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350197.576461; Wed, 15 Jun 2022 14:40:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UCJ-00024y-EK; Wed, 15 Jun 2022 14:40:43 +0000
Received: by outflank-mailman (input) for mailman id 350197;
 Wed, 15 Jun 2022 14:40:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1UCJ-00024S-2e
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:40:43 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0631.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 218793f5-ecb9-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 16:40:42 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8494.eurprd04.prod.outlook.com (2603:10a6:10:2c6::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.21; Wed, 15 Jun
 2022 14:40:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 14:40:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 218793f5-ecb9-11ec-ab14-113154c10af9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n23jeP8Ge5yuVWYyMlmHFSjYHcRSCTExbzp1LO0A9YBYxwxZ34xrHt6Cy5+uudJUgwuJiUluWg5/bWSxZNaAWF+kTzFH3dYLJQet9iTK1I1OpzdZMYcenolc0uMmGstasQwjZWI3Yh2zFQNgraCnDqMJFSjrec+cXZ9k0wYGLNbwn7vdCfDIhZH5f0nx0wfRiFVCpv+b1APdWMojsbp8iTFila+IIc+lN9smQuv02BzK+vNZLCTDzRk5zYkaUDPTcYLLz6Bv/wVe4z/zfzL9IXVaflIFzwGPxJJdyqxHX1my0DjBypMU8T8uEMSlyOj5aIVrd7CAccJYITwzoLJvjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Rj3fmlDvxl/e0346b/nVu82P4DRX6tiS7rI/NcjekSE=;
 b=GYHrzmZQXeTw5+IRIUHbVqVNAWHSq3rPT95z5yyVqhpHpJChzbXFXnPE8orkdQSxZpKRIvv6jM7/0vPERBrHrNKmLPP54n3XuK54gxryQOzpYz6Cp9ePaPFJUVzlu+KE8pTF+172ArdBrN2fvIbvU43Zh7SmWexLaNc5MvRSCp+QXQibqiimHInwi5t798KBwVt9w/iCE0Q5ibnH4XV5lodzHgGkSbc/nyPQSxtrAQtqQaJTTrKv9h9PaTn9t2GCzLvzEvIsFQpLAB+AMPRSN6EJOCtz4S+URqkTdSlO5dX2rLkh82K8bmkOcROKtnvE/BIS+6GXPJFhBhXvHbatyA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rj3fmlDvxl/e0346b/nVu82P4DRX6tiS7rI/NcjekSE=;
 b=3NuMxV8SQFSe4/nlp7pgJsAwn8To+0uNxjQkilxVa4zyw5XjrdoiKOTurs/3qTQMd/4Dd3Lz4GG1wYdfqqnN4C29rHXfeXb0NhESEfPHdnZSeFmaNazTi+Wdt1ODK9+znJBvTNmIJKbtc4W6yfDCs+LrxJ5a38c88kHmHTrC5CFsU5KFCHWKwuP9hmPE5IghBG4pbwOau2iBk1xYTleWOXloQdRqBTnGftmAVBCl+b+fmic58QsIht8xkf6Qc/ZvFB54R6clXWPTZE3e4WSaO4swUZMEqwZ/X34dgyIr+S/SRArE3SAzAE6SVCk/q+Y3QVtUIJJcyncD4mQ4vD1frw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8eb0dfcb-4df2-f447-9ac9-235881d496ce@suse.com>
Date: Wed, 15 Jun 2022 16:40:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v1 03/10] xue: add support for selecting specific xhci
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <b5466e495943210adc48c754df98862ae49ee489.1654612169.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b5466e495943210adc48c754df98862ae49ee489.1654612169.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0358.eurprd06.prod.outlook.com
 (2603:10a6:20b:466::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1b88c8b7-040f-4f05-d005-08da4edd031a
X-MS-TrafficTypeDiagnostic: DB9PR04MB8494:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR04MB84941C3A1F6D247FBC8076A2B3AD9@DB9PR04MB8494.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DLgEmMjEOp7mbuZN53nLdvv+XKztPDYAxHm+s2+M2k889Ulhv2c+XCpewK34Kxa8h8B52lzvbHz5wrddmXvhUlXbBEYZ3U7kEHM/MK9MmkNLB0AHFLvjY6ot7QfKklWhQc/bpzluJlLhJFSl0Za1ypWWPh/pxebq0YO+SET4pZL00JIXQF+/JlPE6V1zprcQue116kguL01e+ysG4vHS1pa8TxqJ+4VyF0T60/xJlzJuq9+RqX2NqDcfBJOYqwE6AbKL0ABVlsQkd9eOPZ7xhEtZalrtMxUtjjDR/gPxF/d1mdCQ4VOrQJybaJ5ZSmUrUVpH5zwd6P0UELuytGta1lMI8n760w5rRpKXK5ptjnwbWUfFGPEOlzxLKfmBXGWjWGHO297CW23rUca63AggPWjx5quRO9/aLKPjE8FgkLYD2JTCu58/GyqobPcW7m/NwVU8hBca9pCi27X6ZqBBfLu30Ep0eVhyKbxh543R+aziKZcL8jAF2dnoERCESmPWXs1YWSmzpTBBGHoeNi/x9Chywe2lwlZKPeCniexir4hNeoBqmr9VHnLifTJpzlEH9b9vtwwXKxd2dIeITR2+ecsDLGWzE8ksAs3dJ7Bpx1h4wSW7ZpGDNT4NKF8E3t073sM46h2j/IqaUiRGS0tCGGbwzEbu97lbZI3LMiZkE5jdn3208W9yM+cOBU5k+w4jH9/XXHhiqOG8rNVVKyXgtx9omKvj+Ts+lGquoClNIP21TS+0KfoI+VfbfRKf3kgh
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b88c8b7-040f-4f05-d005-08da4edd031a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 14:40:37.7549
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QVJloHUQrVWVSUlM1AhBMYdCkXCWolhUin3SRCkjDpKHe/gZXkSo9jz2qD/nDHdZLcKj7boGnGs3rpR3V6Kgig==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8494

On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -721,10 +721,15 @@ Available alternatives, with their meaning, are:
>  
>  ### dbgp
>  > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
> +> `= xue[ <integer> | @pci<bus>:<slot>.<func> ]`
>  
>  Specify the USB controller to use, either by instance number (when going
>  over the PCI busses sequentially) or by PCI device (must be on segment 0).
>  
> +Use `ehci` for EHCI debug port, use `xue` for XHCI debug capability.

Ah, this answers one of my questions on patch 1. But I still think
the option should appear here in patch 1, with this patch extending
it (and its doc).

> --- a/xen/drivers/char/xue.c
> +++ b/xen/drivers/char/xue.c
> @@ -204,6 +204,7 @@ struct xue {
>      void *xhc_mmio;
>  
>      int open;
> +    int xhc_num; /* look for n-th xhc */

unsigned int?

> @@ -252,24 +253,34 @@ static int xue_init_xhc(struct xue *xue)
>      uint64_t bar1;
>      uint64_t devfn;
>  
> -    /*
> -     * Search PCI bus 0 for the xHC. All the host controllers supported so far
> -     * are part of the chipset and are on bus 0.
> -     */
> -    for ( devfn = 0; devfn < 256; devfn++ ) {
> -        uint32_t dev = (devfn & 0xF8) >> 3;
> -        uint32_t fun = devfn & 0x07;
> -        pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);
> -        uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
> -
> -        if ( hdr == 0 || hdr == 0x80 )
> +    if ( xue->sbdf.sbdf == 0 )
> +    {
> +        /*
> +         * Search PCI bus 0 for the xHC. All the host controllers supported so far
> +         * are part of the chipset and are on bus 0.
> +         */
> +        for ( devfn = 0; devfn < 256; devfn++ )
>          {
> -            if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
> +            uint32_t dev = (devfn & 0xF8) >> 3;
> +            uint32_t fun = devfn & 0x07;
> +            pci_sbdf_t sbdf = PCI_SBDF(0, dev, fun);
> +            uint32_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
> +
> +            if ( hdr == 0 || hdr == 0x80 )
>              {
> -                xue->sbdf = sbdf;
> -                break;
> +                if ( (pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8) == XUE_XHC_CLASSC )
> +                {
> +                    if ( xue->xhc_num-- )
> +                        continue;
> +                    xue->sbdf = sbdf;
> +                    break;
> +                }
>              }
>          }
> +    } else {

Nit:

    }
    else
    {

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:40:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:40:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350194.576450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UCH-0001oE-3Z; Wed, 15 Jun 2022 14:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350194.576450; Wed, 15 Jun 2022 14:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UCG-0001o7-W3; Wed, 15 Jun 2022 14:40:40 +0000
Received: by outflank-mailman (input) for mailman id 350194;
 Wed, 15 Jun 2022 14:40:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1UCF-0001nx-Hs; Wed, 15 Jun 2022 14:40:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1UCF-0003m6-Db; Wed, 15 Jun 2022 14:40:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1UCE-0000N7-Rb; Wed, 15 Jun 2022 14:40:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1UCE-0007uU-R7; Wed, 15 Jun 2022 14:40:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KM/9SyuCnnUbaDt1bqCxptTGyHqDHYr6KKX4K+8yRnk=; b=FUbg4I/gViHytuGqOt+e4Bb6H8
	V1btPabbKsnpR9ikQG2rWN5KtlUcSlmhInuZCRIK5tlsMYEY02OP+4oGJmWeLbmR8DpPJYdk/9uAc
	+k+b+ZYjSIayT1K2fzfCIh74+Ol+1F3Sc/LcQ6rRMtaaYHHf2jaUkC4XPD0OCcMl6hTE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171175: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ed8984306e1cd44c424fda3ed412a4177dd7b84d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 14:40:38 +0000

flight 171175 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171175/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ed8984306e1cd44c424fda3ed412a4177dd7b84d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  705 days
Failing since        151818  2020-07-11 04:18:52 Z  704 days  686 attempts
Testing same since   171175  2022-06-15 04:20:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113395 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 14:42:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 14:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350210.576472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UE0-00038v-UR; Wed, 15 Jun 2022 14:42:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350210.576472; Wed, 15 Jun 2022 14:42:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1UE0-00038o-R9; Wed, 15 Jun 2022 14:42:28 +0000
Received: by outflank-mailman (input) for mailman id 350210;
 Wed, 15 Jun 2022 14:42:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=56zs=WW=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o1UE0-00038i-33
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 14:42:28 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60154d30-ecb9-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 16:42:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9100.eurprd04.prod.outlook.com (2603:10a6:10:2f3::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 15 Jun
 2022 14:42:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5332.020; Wed, 15 Jun 2022
 14:42:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60154d30-ecb9-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iksRiiD7pqTtmZ/MeOMtBVEaRjDoWD9SIqUlT2TnY8eayJQoRl7Qupax6vTephdgrIE1kyXBOcioeoGRuHgFLUVSBnY7vsXoeHgdToW6Re0pb6yU4gVoAwsG8rMFWv3pwf/Py5qG6CKRng9SOGeromo9X+NeE8W49kejinUI3+TCD1G9ktmeyHRPjrS3gGTnR4+abL9A+LYQPAd4fKpJqDYgGO3gxvYlIpCvktpmEtLUfwJGCDVd1zSBr6t73llRKQFNHV9UJr7WSMqFbkRL1Y0TSXzVJn3fJwP4VpUHo0yyYFMrHv42tTRc3edQ0tCmoqQiMdkbRFvfRZ503ilhiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LQTfkArX5sRZ50VfTI+9jym17yc/bngL8yIUznBf0Ck=;
 b=EkrxEiuNxoBLIs1iRf1QoXp530/5/4tDxcVoaK2D53oB5ny0kJ+hFwNbTeFqKSL/chIQdmKtjb536ft9a9pxCC6ndoUjlwa6c5qn9P73qR/poknRPR93INyqzsNWX0YJk+UlgJ+vnGt4Yb8ITe/rN0x/3L7BHn8nq1N5WGaW/YNWZFXnD4FWxF8/AieDtUe7CXREycojnaFlmFmO91/WAzQPCZdOJg+JdtwLmeZG76Rh1kzU5T0u01vjstwb3rgjk/65EYar/aA6fxzppO61a19Mqpywrpy0jq+je5pwVniWzPulE0GJAfwtOSML54/QuOwZxXuMl5Qb5iw+yo0ATQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LQTfkArX5sRZ50VfTI+9jym17yc/bngL8yIUznBf0Ck=;
 b=2uA9K+LyTyDwZnV7jfSULWAozu8C/AwZI1GUZUSUU2zWe32+T84E2dUBGVAu2VqmvKWlzFTrYYW7HvlACYIKme0u5fA/lRwnJsxOwCkLDcsqjpDWeOANM7e7JTTakTP9ibwYsg685BOjYnMLsNNidQzg1GAzu8bUCaR9w7cCV8SC1UKQ9XwODm5Om275iZx2FOBs5oV9X817/HSwIKT8I44SuVB17EhAF+9AjjfRefl8waY6VlLanAp2snKpVNgaxJWPqjjp0bPFKZ7wzw/QkEE4RVqZJw5OBOAFOSK5PRtVzBiNJunn2GwPs6Zi+c07otduuaNgAzusooU5fnuV4w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <54267d0f-36b0-3ccf-f4a2-1c99efa2d66d@suse.com>
Date: Wed, 15 Jun 2022 16:42:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v1 04/10] ehci-dbgp: fix selecting n-th ehci controller
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <e7d63b72873de3791e26a6551fef7132fcc9f241.1654612169.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e7d63b72873de3791e26a6551fef7132fcc9f241.1654612169.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0035.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0b0b7635-b418-40e3-963d-08da4edd432a
X-MS-TrafficTypeDiagnostic: DU2PR04MB9100:EE_
X-Microsoft-Antispam-PRVS:
	<DU2PR04MB9100DA6903E86D211F172EE8B3AD9@DU2PR04MB9100.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b0b7635-b418-40e3-963d-08da4edd432a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jun 2022 14:42:25.0137
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TCTyiFVDFgHmk8CYC3O7EbT5WOxhkf2ApaCK6QDZxSKYMYVS87dfWayOW3nXMACg/IMrlwK1ZKcoosNzVmvmRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9100

On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
> The ehci<n> number was parsed but ignored.

Oops.

> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

This could do with a Fixes: tag. Then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 15:48:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 15:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350224.576489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1VFP-0002cU-Nw; Wed, 15 Jun 2022 15:47:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350224.576489; Wed, 15 Jun 2022 15:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1VFP-0002cN-LJ; Wed, 15 Jun 2022 15:47:59 +0000
Received: by outflank-mailman (input) for mailman id 350224;
 Wed, 15 Jun 2022 15:47:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wv+x=WW=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o1VFO-0002cD-2H
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 15:47:58 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86838183-ecc2-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 17:47:56 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id 20so19546696lfz.8
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 08:47:56 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 s15-20020a2e150f000000b0025567cf8633sm1710514ljd.85.2022.06.15.08.47.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 15 Jun 2022 08:47:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86838183-ecc2-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=VTCGWujNv4+ZeNU9inSiK0tI+fyoxm0X3NKXlbOy8Y8=;
        b=GphwJd+LsGEWeN2UG4hGNyGvnzO74SuULFcCInrrgMlqBx8ENwGzXPt2mYReWKHevc
         0XbVfNJOky5MmRd3j4yfAIyRzYLYgHv7/n0cn0R+GFOpDUVDUC81hCn2m1oE1UxPUoWo
         +qgBpWgoy4aIdpBjLJl1r0T+ju62jYqHcOMRTctNqp2bWSn4QS6F2IVeElVNzK99mGrb
         WoIIyfTwWKcNpy2mXR+SKnP/cc6nh07OU48gAhRTmgRycemIPRHP9oukemCczQWi7JJt
         yTgLZ7YWzIi/Z/8M+yheAOjE60UQs2PPXenesWMsQyrdGX7sZMb2PDXbl3/Vz9sxlg/l
         vnnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=VTCGWujNv4+ZeNU9inSiK0tI+fyoxm0X3NKXlbOy8Y8=;
        b=CjYi5jZsG1CjVFP0GchHzFnEAq9Lrz7RfRsseZHqFJSpExMD3MjbKyxCEtj2U7LTcA
         yZJcTDUZEt3qZiHNLSfWgkssGvmMQrK2WWjSI+1oWQKe6X7gcTHojg+guw1DgWEQ8uro
         B8oyuPUJDwyf0D7zCVKuv05BhqdaFsMwE3cOIewGKaJm8Cda34n6jTHc1PhMeRQb3vY2
         2WoZdfRmG2XBmOupUWxVus7UvnnKHllWt+i1qfnJ2XwX7x9KfjKA/fj0JoBNr9nm97sS
         N9PUvTy0Oqr0MxVgiY8eZy/EFwmeVEI73vOMdjCdIe6H6F4eltAuW4Rsl5fx0yYiPO+1
         Trhg==
X-Gm-Message-State: AJIora8lhGDtd0IlcYrxVkHhWMvKguLqCausOCfPWAdzrEs+JXaTdq3C
	hhkwjP0ZzqJi17For2HhGU0=
X-Google-Smtp-Source: AGRyM1vcCVXtE8TdfCiHOURs9ffWBcnlUTL+yi36mILI8l9ZN3HWHOYcewv5RFV18a3Ky1FRl+oYTA==
X-Received: by 2002:a05:6512:39cb:b0:47d:a4c6:40eb with SMTP id k11-20020a05651239cb00b0047da4c640ebmr62321lfu.597.1655308076206;
        Wed, 15 Jun 2022 08:47:56 -0700 (PDT)
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV guests
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220615084835.27113-1-jgross@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <6f5b1562-1270-5e83-bf9f-a9a7afc5a725@gmail.com>
Date: Wed, 15 Jun 2022 18:47:54 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20220615084835.27113-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 15.06.22 11:48, Juergen Gross wrote:

Hello Juergen

> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
>
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
>
> Per default allow virtio devices without grant support for non-PV
> guests.
>
> The setting can be overridden by using the new "xen_virtio_grant"
> command line parameter.
>
> Add a new config item to always force use of grants for virtio.
>
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Thank you for the fix.


I have tested it on an Arm64 guest (XEN_HVM_DOMAIN), and it works.

With the "__init" fix (pointed out by Viresh) applied, you can add my:

Tested-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> #Arm64 only


[snip]


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 15:58:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 15:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350232.576500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1VPb-0004in-Mt; Wed, 15 Jun 2022 15:58:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350232.576500; Wed, 15 Jun 2022 15:58:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1VPb-0004ig-Jr; Wed, 15 Jun 2022 15:58:31 +0000
Received: by outflank-mailman (input) for mailman id 350232;
 Wed, 15 Jun 2022 15:58:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3iFn=WW=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o1VPa-0004ia-Ho
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 15:58:30 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fee0492c-ecc3-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 17:58:29 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id m14so382359plg.5
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 08:58:28 -0700 (PDT)
Received: from jade ([192.77.111.2]) by smtp.gmail.com with ESMTPSA id
 cd6-20020a056a00420600b0050dc7628162sm9969540pfb.60.2022.06.15.08.58.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 15 Jun 2022 08:58:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fee0492c-ecc3-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=I6BYRgU3q9RK9f3Cs7vh5L6Mmszj6062pUG1lzgw5+Y=;
        b=pECjFpfpC2YRL43a/Cn7KbUP4Rs8fjf2Uu9zEzw7jnPpFQ801zLKKBWaP5V04Z0htR
         zoIb8yp6rK24sBmOW02NpE1YhN5lJzLzSl0clm8hQtJr2cCMA3of2SycUF685Hy6dYYV
         GCg/2wUyG0oaHPKDhIKPsXV9bBn03Y/NB5fSgbezassoRFUehr8wHcUMIQOndv+OaBQC
         NZ9j+KAhZYQXX0QWhzMLgYQC21FnGmD09kAbRLDWXdEQYwLvuo3+4y2j27UehdVXuSL4
         GbI2TYeiu976/gzpS1+0ZjKK2616ZDEZKuxPW31/yWpYqYCBh6BOw5XaTpeVRScTKGJd
         erPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=I6BYRgU3q9RK9f3Cs7vh5L6Mmszj6062pUG1lzgw5+Y=;
        b=whs5eVxmKuVWbGqu+9+LXVzN8rTX5XzDftzEQMFOf5NZCuvBz+jJIUXo0fBVqOwwEX
         NrLBdC5y0rhfHlafVlx0BB8d9nwgFzUAzQlThjdgMZSRkxRXPdcwSKVix4nQ6M6WiOWG
         Sn9pBS2sk9FaBE02RTF182RTxWPPmJijZiTgaT1RpvGE4t7JH76aylt4cm7WVW/ppzZU
         bQESK+TmyST2dLtkuFeWaJCF1AU0oEg1SOSyyg77ckibAz005keNYnKNrX4fhL4bjrwP
         te/yK12lSl0Aq0E+caShcVotkVtHRB3W1vBAhma6gTnZE4cLNSXWriZz31mAdHas6vb6
         LGeA==
X-Gm-Message-State: AJIora/NwyRHj5cDnOeIaNpTrfV5dWkoqAcBgYSun9RhW1u6tmYFGBZD
	H03pJ8k9eNVEi/N84Z2wY3/ziA==
X-Google-Smtp-Source: AGRyM1sZhpanY+Dgu9nrkmVZhzhQ3qEuGv4/xO5jk+ujdgq4Q2JwebqEugJF3Y94xtEDKne4DQHNzw==
X-Received: by 2002:a17:903:408c:b0:163:e526:4397 with SMTP id z12-20020a170903408c00b00163e5264397mr472170plc.80.1655308707329;
        Wed, 15 Jun 2022 08:58:27 -0700 (PDT)
Date: Wed, 15 Jun 2022 08:58:25 -0700
From: Jens Wiklander <jens.wiklander@linaro.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2
 extended input/output registers
Message-ID: <20220615155825.GA30639@jade>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>

On Fri, Jun 10, 2022 at 05:41:33PM -0700, Stefano Stabellini wrote:
> On Thu, 9 Jun 2022, Jens Wiklander wrote:
> > SMCCC v1.2 AArch64 allows x0-x17 to be used as both parameter registers
> > and result registers for the SMC and HVC instructions.
> > 
> > Arm Firmware Framework for Armv8-A specification makes use of x0-x7 as
> > parameter and result registers.
> > 
> > Let us add new interface to support this extended set of input/output
> > registers.
> > 
> > This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
> > extended input/output registers") by Sudeep Holla from the Linux kernel
> > 
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/arm64/smc.S         | 43 ++++++++++++++++++++++++++++++++
> >  xen/arch/arm/include/asm/smccc.h | 42 +++++++++++++++++++++++++++++++
> >  xen/arch/arm/vsmc.c              |  2 +-
> >  3 files changed, 86 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
> > index 91bae62dd4d2..1570bc8eb9d4 100644
> > --- a/xen/arch/arm/arm64/smc.S
> > +++ b/xen/arch/arm/arm64/smc.S
> > @@ -27,3 +27,46 @@ ENTRY(__arm_smccc_1_0_smc)
> >          stp     x2, x3, [x4, #SMCCC_RES_a2]
> >  1:
> >          ret
> > +
> > +
> > +/*
> > + * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
> > + *                        struct arm_smccc_1_2_regs *res)
> > + */
> > +ENTRY(arm_smccc_1_2_smc)
> > +    /* Save `res` and free a GPR that won't be clobbered */
> > +    stp     x1, x19, [sp, #-16]!
> > +
> > +    /* Ensure `args` won't be clobbered while loading regs in next step */
> > +    mov	x19, x0
> > +
> > +    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
> > +    ldp	x0, x1, [x19, #0]
> > +    ldp	x2, x3, [x19, #16]
> > +    ldp	x4, x5, [x19, #32]
> > +    ldp	x6, x7, [x19, #48]
> > +    ldp	x8, x9, [x19, #64]
> > +    ldp	x10, x11, [x19, #80]
> > +    ldp	x12, x13, [x19, #96]
> > +    ldp	x14, x15, [x19, #112]
> > +    ldp	x16, x17, [x19, #128]
> > +
> > +    smc #0
> > +
> > +    /* Load the `res` from the stack */
> > +    ldr	x19, [sp]
> > +
> > +    /* Store the registers x0 - x17 into the result structure */
> > +    stp	x0, x1, [x19, #0]
> > +    stp	x2, x3, [x19, #16]
> > +    stp	x4, x5, [x19, #32]
> > +    stp	x6, x7, [x19, #48]
> > +    stp	x8, x9, [x19, #64]
> > +    stp	x10, x11, [x19, #80]
> > +    stp	x12, x13, [x19, #96]
> > +    stp	x14, x15, [x19, #112]
> > +    stp	x16, x17, [x19, #128]
> 
> I noticed that in the original commit the offsets are declared as
> ARM_SMCCC_1_2_REGS_X0_OFFS, etc. In Xen we could add them to
> xen/arch/arm/arm64/asm-offsets.c given that they are only used in asm.
> 
> That said, there isn't a huge value in declaring them given that they
> are always read and written in order and there is nothing else in the
> struct, so I am fine either way.
> 
> I am also happy to have them declared if other maintainers prefer it
> that way.

OK, I'll update with asm-offsets.c since Julien asked for that too.

> 
> 
> > +    /* Restore original x19 */
> > +    ldp     xzr, x19, [sp], #16
> > +    ret
> > diff --git a/xen/arch/arm/include/asm/smccc.h b/xen/arch/arm/include/asm/smccc.h
> > index b3dbeecc90ad..316adf968e74 100644
> > --- a/xen/arch/arm/include/asm/smccc.h
> > +++ b/xen/arch/arm/include/asm/smccc.h
> > @@ -33,6 +33,7 @@
> >  
> >  #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
> >  #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
> > +#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
> >  
> >  /*
> >   * This file provides common defines for ARM SMC Calling Convention as
> > @@ -217,6 +218,7 @@ struct arm_smccc_res {
> >  #ifdef CONFIG_ARM_32
> >  #define arm_smccc_1_0_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
> >  #define arm_smccc_smc(...) arm_smccc_1_1_smc(__VA_ARGS__)
> > +
> >  #else
> 
> Spurious change
> 
> 
> >  void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
> > @@ -265,8 +267,48 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
> >          else                                                    \
> >              arm_smccc_1_0_smc(__VA_ARGS__);                     \
> >      } while ( 0 )
> > +
> > +/**
> > + * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
> > + * @a0-a17 argument values from registers 0 to 17
> > + */
> > +struct arm_smccc_1_2_regs {
> > +    unsigned long a0;
> > +    unsigned long a1;
> > +    unsigned long a2;
> > +    unsigned long a3;
> > +    unsigned long a4;
> > +    unsigned long a5;
> > +    unsigned long a6;
> > +    unsigned long a7;
> > +    unsigned long a8;
> > +    unsigned long a9;
> > +    unsigned long a10;
> > +    unsigned long a11;
> > +    unsigned long a12;
> > +    unsigned long a13;
> > +    unsigned long a14;
> > +    unsigned long a15;
> > +    unsigned long a16;
> > +    unsigned long a17;
> > +};
> >  #endif /* CONFIG_ARM_64 */
> >  
> > +/**
> > + * arm_smccc_1_2_smc() - make SMC calls
> > + * @args: arguments passed via struct arm_smccc_1_2_regs
> > + * @res: result values via struct arm_smccc_1_2_regs
> > + *
> > + * This function is used to make SMC calls following SMC Calling Convention
> > + * v1.2 or above. The content of the supplied param are copied from the
> > + * structure to registers prior to the SMC instruction. The return values
> > + * are updated with the content from registers on return from the SMC
> > + * instruction.
> > + */
> > +void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
> > +                       struct arm_smccc_1_2_regs *res);
> > +
> 
> As arm_smccc_1_2_smc is not implemented in ARM32 it is better to place
> the declaration inside the #ifdef CONFIG_ARM_64.

I'll fix.

> 
> 
> >  #endif /* __ASSEMBLY__ */
> >  
> >  /*
> > diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> > index 676740ef1520..6f90c08a6304 100644
> > --- a/xen/arch/arm/vsmc.c
> > +++ b/xen/arch/arm/vsmc.c
> > @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
> >      switch ( fid )
> >      {
> >      case ARM_SMCCC_VERSION_FID:
> > -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
> > +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
> >          return true;
>   
> This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
> is unimplemented on ARM32. If there is an ARM32 implementation in Linux
> for ARM_SMCCC_VERSION_1_2 you might as well import it too.
> 
> Otherwise we'll have to abstract it away, e.g.:
> 
> #ifdef CONFIG_ARM_64
> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
> #else
> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
> #endif

I couldn't find an ARM32 implementation for ARM_SMCCC_VERSION_1_2, but
I'm not sure one is needed at this point. From what I've understood,
r4-r17 are either preserved or updated depending on the function ID in
question, so claiming ARM_SMCCC_VERSION_1_2 shouldn't break anything.
The FF-A functions will mostly update r4-r7, but we don't use FF-A with
ARM32 yet. I'll update with your proposal if that's what you prefer.

Thanks,
Jens

> 
> >      case ARM_SMCCC_ARCH_FEATURES_FID:
> > -- 
> > 2.31.1
> > 


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 16:33:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 16:33:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350241.576523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Vww-0001yk-Or; Wed, 15 Jun 2022 16:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350241.576523; Wed, 15 Jun 2022 16:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Vww-0001yd-Ll; Wed, 15 Jun 2022 16:32:58 +0000
Received: by outflank-mailman (input) for mailman id 350241;
 Wed, 15 Jun 2022 16:32:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wv+x=WW=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o1Vww-0001yX-1M
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 16:32:58 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf9f8e40-ecc8-11ec-bd2c-47488cf2e6aa;
 Wed, 15 Jun 2022 18:32:56 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id h23so13909929ljl.3
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 09:32:56 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 y17-20020a2eb011000000b0025a0ca6a0e5sm317893ljk.61.2022.06.15.09.32.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 15 Jun 2022 09:32:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf9f8e40-ecc8-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=ZF8th8mGWf3SuaAuDD6LprpEpz3Ht4Qh6Ya1fy8DFtc=;
        b=VyhBwthrTIxACGj1/BPHJ/qXWdEw9lDb2CCrSvCFVeXhRBE0CqHpJYDVkHZn3AdjEs
         UgSTGxbvXp/IRCck8Xe3aw24YT7otKdobFo4UiY+fzzoRuFBdMtyNmXu8QsOgvPGWlqO
         oU4DshzigiTvNb8Or1L64azvGCOlpKEF9BhLnSh26jVCtq8VprvjmIVpAxABQZS51ueT
         xt014P2IYnUZiI8QDhPyw2Rcc8UCTgKoZzp1WxY6I8HWIxjkkZoQ3rAMfxZTvYgoiI9E
         CVkAzjm2VJOHQ5pqyBVqtysnykILA5dGfN2OOyMdOCW3QqH3OVI284lqeFru6RHzu5CG
         6/kQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=ZF8th8mGWf3SuaAuDD6LprpEpz3Ht4Qh6Ya1fy8DFtc=;
        b=LOxJO8mGECJSHNnQvHLE1P6fGB5jLwNtK+PUV6b7qrud40LbAvPgBySDg7b8a/gS2k
         HOsGGep30vWiqVQ5tc12/iRTmP6uXh+sHX8gL7b6gI4asrRkw6Fz/fQErF0mQsJY8aT7
         Jqtq0p4w2HaA50fb/MDWUvEahCLy4/vKY64Cyua7exqGsHWQEBG5l7c5N4+yKRk27fiF
         jIO2VUCtaJkOuSpWIq3X3JZJCtu8hoOhFLyEyir/k5bRrnavQkO/eeyFlil/dj4IbduR
         Ql7xccZ4EGzswELClZDPnpti/HwtHzZTcv0/+8ueMkTa4Oe1x4Ez6XEZZ60aNREuigeB
         02PA==
X-Gm-Message-State: AJIora9Yks4L9B57965+QAJ/qzoeOjg0wpfU0nrPhQgay6bBIeVmMbKW
	6Zse+7RFEy1WAn+iUaOZAKY=
X-Google-Smtp-Source: AGRyM1uYaDstOGREn7Fl41IHuyNWbMc0681CW2heauSwQ+7Oq/iWA8aY9j/MyJHR+8NxDek5cttl9w==
X-Received: by 2002:a05:651c:b09:b0:25a:44fd:41f with SMTP id b9-20020a05651c0b0900b0025a44fd041fmr318733ljr.366.1655310776113;
        Wed, 15 Jun 2022 09:32:56 -0700 (PDT)
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
 <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
 <1655143522-14356-2-git-send-email-olekstysh@gmail.com>
 <YqnrerEAFXJUCRL1@perard.uk.xensource.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <21798651-1254-0c17-5379-224b52a92566@gmail.com>
Date: Wed, 15 Jun 2022 19:32:54 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YqnrerEAFXJUCRL1@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 15.06.22 17:23, Anthony PERARD wrote:

Hello Anthony

> On Mon, Jun 13, 2022 at 09:05:20PM +0300, Oleksandr Tyshchenko wrote:
>> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
>> index a5ca778..673b0d6 100644
>> --- a/tools/libs/light/libxl_disk.c
>> +++ b/tools/libs/light/libxl_disk.c
>> @@ -575,6 +660,41 @@ cleanup:
>>       return rc;
>>   }
>>   
>> +static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
>> +                                       char **path)
>> +{
>> +    const char *xen_dir, *virtio_dir;
>> +    char *xen_path, *virtio_path;
>> +    int rc;
>> +
>> +    /* default path */
>> +    xen_path = GCSPRINTF("%s/device/%s",
>> +                         libxl__xs_libxl_path(gc, domid),
>> +                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
>> +
>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, xen_path, &xen_dir);
>> +    if (rc)
>> +        return rc;
>> +
>> +    virtio_path = GCSPRINTF("%s/device/%s",
>> +                            libxl__xs_libxl_path(gc, domid),
>> +                            libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
>> +
>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, virtio_path, &virtio_dir);
>> +    if (rc)
>> +        return rc;
>> +
>> +    if (xen_dir && virtio_dir) {
>> +        LOGD(ERROR, domid, "Invalid configuration, both xen and virtio paths are present");
>> +        return ERROR_INVAL;
>> +    } else if (virtio_dir)
>> +        *path = virtio_path;
>> +    else
>> +        *path = xen_path;
> Small coding style issue, could you use blocks {} on all part of the
> if...else, since you are using them on one of the block? This is
> described in tools/libs/light/CODING_STYLE (5. Block structure).

yes, will do


>
>> diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
>> index 70eed43..f2b0ff5 100644
>> --- a/tools/xl/xl_block.c
>> +++ b/tools/xl/xl_block.c
>> @@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
>>           return 0;
>>       }
>>   
>> +    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
>> +        fprintf(stderr, "block-attach is only supported for specification xen\n");
> This check prevents a previously working `block-attach` command line
> from working.
>
>      # xl -Tvvv block-attach 0 /dev/vg/guest_disk,raw,hda
>      block-attach is only supported for specification xen
>
> At least, that works by adding ",specification=xen", but it should work
> without it as "xen" is the default (from the man page).

yes, you are right. thank you for pointing this out.


>
> Maybe the check is done too soon, or maybe a better place to do it would
> be in libxl.
>
> libxl__device_disk_setdefault() is called much later, while executing
> libxl_device_disk_add(), so `xl` can't use the default applied there
> to "disk.specification".

I got it.


>
> `xl block-attach` calls libxl_device_disk_add(), which I think is only
> called for disk hotplug. If I recall correctly, libxl__add_disks() is
> called instead at guest creation. So maybe it is possible to do
> something in libxl_device_disk_add(), but that is a function defined by a
> macro, and the macro uses the same libxl__device_disk_add() as
> libxl_device_disk_add(). On the other hand, there is a "hotplug"
> parameter to libxl__device_disk_setdefault(), maybe that could be used?

Thank you for digging into the details here.

If I understood your suggestion correctly, we can simply drop the check in 
main_blockattach() (and likely main_blockdetach()?) and add it to 
libxl__device_disk_setdefault().


diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index 9e82adb..96ace09 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -182,6 +182,11 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
          disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
      }

+    if (hotplug && disk->specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        LOGD(ERROR, domid, "Hotplug is only supported for specification xen");
+        return ERROR_FAIL;
+    }
+
      /* Force Qdisk backend for CDROM devices of guests with a device model. */
      if (disk->is_cdrom != 0 &&
          libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {


Is my understanding correct?


I have checked, it works:

root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3
[  762.062874] xen-blkback: backend/vbd/3/51715: using 4 queues, protocol 1 (arm-abi)


root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3,specification=virtio
libxl: error: libxl_disk.c:186:libxl__device_disk_setdefault: Domain 3:Hotplug is only supported for specification xen
libxl: error: libxl_device.c:1468:device_addrm_aocomplete: unable to add device

>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 16:40:12 2022
Subject: Re: [RFC PATCH 2/2] xen/grant-table: Use unpopulated DMAable pages
 instead of real RAM ones
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <9f1e4568-1cfd-e967-e54c-735bad1ea211@gmail.com>
Date: Wed, 15 Jun 2022 19:40:04 +0300
In-Reply-To: <alpine.DEB.2.22.394.2206141748150.1837490@ubuntu-linux-20-04-desktop>


On 15.06.22 03:51, Stefano Stabellini wrote:

Hello Stefano

> On Tue, 14 Jun 2022, Oleksandr wrote:
>> On 11.06.22 02:55, Stefano Stabellini wrote:
>>
>> Hello Stefano
>>
>>> On Thu, 9 Jun 2022, Oleksandr wrote:
>>>> On 04.06.22 00:19, Stefano Stabellini wrote:
>>>> Hello Stefano
>>>>
>>>> Thank you for having a look and sorry for the late response.
>>>>
>>>>> On Tue, 17 May 2022, Oleksandr Tyshchenko wrote:
>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>>
>>>>>> Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled, unpopulated
>>>>>> DMAable (contiguous) pages will be allocated for grant mappings
>>>>>> instead of ballooning out real RAM pages.
>>>>>>
>>>>>> TODO: Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages()
>>>>>> fails.
>>>>>>
>>>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>> ---
>>>>>>     drivers/xen/grant-table.c | 27 +++++++++++++++++++++++++++
>>>>>>     1 file changed, 27 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>>>>>> index 8ccccac..2bb4392 100644
>>>>>> --- a/drivers/xen/grant-table.c
>>>>>> +++ b/drivers/xen/grant-table.c
>>>>>> @@ -864,6 +864,25 @@ EXPORT_SYMBOL_GPL(gnttab_free_pages);
>>>>>>      */
>>>>>>     int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>>>>>     {
>>>>>> +#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>>>>> +	int ret;
>>>>> This is an alternative implementation of the same function.
>>>> Currently, yes.
>>>>
>>>>
>>>>>     If we are
>>>>> going to use #ifdef, then I would #ifdef the entire function, rather
>>>>> than just the body. Otherwise within the function body we can use
>>>>> IS_ENABLED.
>>>> Good point. Note, there is one thing missing in the current patch, which is
>>>> described in the TODO:
>>>>
>>>> "Fallback to real RAM pages if xen_alloc_unpopulated_dma_pages() fails."
>>>>
>>>> So I will likely use IS_ENABLED within the function body.
>>>>
>>>> If CONFIG_XEN_UNPOPULATED_ALLOC is enabled then gnttab_dma_alloc_pages()
>>>> will first try to call xen_alloc_unpopulated_dma_pages() and, if that
>>>> fails, fall back to allocating RAM pages and ballooning them out.
>>>>
>>>> One point is not entirely clear to me. If we use the fallback in
>>>> gnttab_dma_alloc_pages() then we must use the fallback in
>>>> gnttab_dma_free_pages() as well, since we cannot use
>>>> xen_free_unpopulated_dma_pages() for real RAM pages.
>>>> The question is how to pass this information to gnttab_dma_free_pages().
>>>> The first idea that comes to mind is to add a flag to struct
>>>> gnttab_dma_alloc_args...
>>>    You can check if the page is within the mhp_range range or part of
>>> iomem_resource? If not, you can free it as a normal page.
>>>
>>> If we do this, then the fallback is better implemented in
>>> unpopulated-alloc.c because that is the one that is aware about
>>> page addresses.
>>
>> I got your idea and agree this can work technically. Or if we finally decide
>> to use the second option (use "dma_pool" for all) in the first patch
>> "[RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA allocations"
>> then we will likely be able to check whether a page in question
>> is within a "dma_pool" using gen_pool_has_addr().
>>
>> I am still wondering whether we can avoid the fallback implementation in
>> unpopulated-alloc.c, because for that purpose we would need to pull more
>> code into unpopulated-alloc.c (to be more precise, almost everything that
>> gnttab_dma_free_pages() already has except gnttab_pages_clear_private()) and
>> pass more arguments to xen_free_unpopulated_dma_pages(). Also, I might be
>> mistaken, but having a fallback split between grant-table.c (to allocate RAM
>> pages in gnttab_dma_alloc_pages()) and unpopulated-alloc.c (to free RAM
>> pages in xen_free_unpopulated_dma_pages()) would look a bit weird.
>>
>> I see two possible options for the fallback implementation in grant-table.c:
>> 1. (less preferable) by introducing new flag in struct gnttab_dma_alloc_args
>> 2. (more preferable) by guessing unpopulated (non real RAM) page using
>> is_zone_device_page(), etc
>>
>>
>> For example, with the second option the resulting code will look quite simple
>> (only build tested):
>>
>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>> index 738029d..3bda71f 100644
>> --- a/drivers/xen/grant-table.c
>> +++ b/drivers/xen/grant-table.c
>> @@ -1047,6 +1047,23 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>>          size_t size;
>>          int i, ret;
>>
>> +       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) {
>> +               ret = xen_alloc_unpopulated_dma_pages(args->dev, args->nr_pages,
>> +                               args->pages);
>> +               if (ret < 0)
>> +                       goto fallback;
>> +
>> +               ret = gnttab_pages_set_private(args->nr_pages, args->pages);
>> +               if (ret < 0)
>> +                       goto fail;
>> +
>> +               args->vaddr = page_to_virt(args->pages[0]);
>> +               args->dev_bus_addr = page_to_phys(args->pages[0]);
>> +
>> +               return ret;
>> +       }
>> +
>> +fallback:
>>          size = args->nr_pages << PAGE_SHIFT;
>>          if (args->coherent)
>>                  args->vaddr = dma_alloc_coherent(args->dev, size,
>> @@ -1103,6 +1120,12 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>>
>>          gnttab_pages_clear_private(args->nr_pages, args->pages);
>>
>> +       if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC) &&
>> +                       is_zone_device_page(args->pages[0])) {
>> +               xen_free_unpopulated_dma_pages(args->dev, args->nr_pages,
>> +                               args->pages);
>> +               return 0;
>> +       }
>> +
>>          for (i = 0; i < args->nr_pages; i++)
>>                  args->frames[i] = page_to_xen_pfn(args->pages[i]);
>>
>>
>> What do you think?
>   
> I have another idea. Why don't we introduce a function implemented in
> drivers/xen/unpopulated-alloc.c called is_xen_unpopulated_page() and
> call it from here? is_xen_unpopulated_page can be implemented by using
> gen_pool_has_addr.

I like the idea, will do
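As a minimal userspace model of that suggestion (illustrative only: the real helper would live in drivers/xen/unpopulated-alloc.c and call gen_pool_has_addr() on the actual gen_pool; pool_model, MODEL_PAGE_SIZE, and the function names below are stand-ins):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MODEL_PAGE_SIZE 4096u

/* Userspace stand-in for the gen_pool: a single contiguous region.
 * The real pool can hold several regions added by fill_list(). */
struct pool_model {
    uintptr_t start;
    size_t size;
};

/* Models gen_pool_has_addr(): true iff [addr, addr + size) is inside. */
static bool pool_model_has_addr(const struct pool_model *pool,
                                uintptr_t addr, size_t size)
{
    return addr >= pool->start &&
           addr + size <= pool->start + pool->size;
}

/* Sketch of the suggested helper: a page is "unpopulated" when its
 * (virtual) address belongs to the pool the allocator filled. */
static bool is_xen_unpopulated_page_model(const struct pool_model *dma_pool,
                                          uintptr_t page_vaddr)
{
    return dma_pool != NULL &&
           pool_model_has_addr(dma_pool, page_vaddr, MODEL_PAGE_SIZE);
}
```

With such a predicate, gnttab_dma_free_pages() can dispatch: pool membership means return the pages to the pool, otherwise free them as ordinary ballooned RAM pages.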


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 16:57:04 2022
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA
 allocations
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Julien Grall <julien@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <1ce1978e-eecd-20af-3c1d-531a7dd046b6@gmail.com>
Date: Wed, 15 Jun 2022 19:56:47 +0300
In-Reply-To: <alpine.DEB.2.22.394.2206141735430.1837490@ubuntu-linux-20-04-desktop>


On 15.06.22 03:45, Stefano Stabellini wrote:

Hello Stefano


> On Tue, 14 Jun 2022, Oleksandr wrote:
>> On 11.06.22 03:12, Stefano Stabellini wrote:
>>> On Wed, 8 Jun 2022, Oleksandr wrote:
>>>> 2. Drop the "page_list" entirely and use "dma_pool" for all (contiguous
>>>> and non-contiguous) allocations. After all, all pages are initially
>>>> contiguous in fill_list() as they are built from the resource. This
>>>> changes behavior for all users of xen_alloc_unpopulated_pages().
>>>>
>>>> Below the diff for unpopulated-alloc.c. The patch is also available at:
>>>>
>>>> https://github.com/otyshchenko1/linux/commit/7be569f113a4acbdc4bcb9b20cb3995b3151387a
>>>>
>>>>
>>>> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
>>>> index a39f2d3..ab5c7bd 100644
>>>> --- a/drivers/xen/unpopulated-alloc.c
>>>> +++ b/drivers/xen/unpopulated-alloc.c
>>>> @@ -1,5 +1,7 @@
>>>>    // SPDX-License-Identifier: GPL-2.0
>>>> +#include <linux/dma-mapping.h>
>>>>    #include <linux/errno.h>
>>>> +#include <linux/genalloc.h>
>>>>    #include <linux/gfp.h>
>>>>    #include <linux/kernel.h>
>>>>    #include <linux/mm.h>
>>>> @@ -13,8 +15,8 @@
>>>>    #include <xen/xen.h>
>>>>
>>>>    static DEFINE_MUTEX(list_lock);
>>>> -static struct page *page_list;
>>>> -static unsigned int list_count;
>>>> +
>>>> +static struct gen_pool *dma_pool;
>>>>
>>>>    static struct resource *target_resource;
>>>>
>>>> @@ -36,7 +38,7 @@ static int fill_list(unsigned int nr_pages)
>>>>           struct dev_pagemap *pgmap;
>>>>           struct resource *res, *tmp_res = NULL;
>>>>           void *vaddr;
>>>> -       unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>>>> +       unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>>>>           struct range mhp_range;
>>>>           int ret;
>>>>
>>>> @@ -106,6 +108,7 @@ static int fill_list(unsigned int nr_pages)
>>>>             * conflict with any devices.
>>>>             */
>>>>           if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>>>> +               unsigned int i;
>>>>                   xen_pfn_t pfn = PFN_DOWN(res->start);
>>>>
>>>>                   for (i = 0; i < alloc_pages; i++) {
>>>> @@ -125,16 +128,17 @@ static int fill_list(unsigned int nr_pages)
>>>>                   goto err_memremap;
>>>>           }
>>>>
>>>> -       for (i = 0; i < alloc_pages; i++) {
>>>> -               struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
>>>> -
>>>> -               pg->zone_device_data = page_list;
>>>> -               page_list = pg;
>>>> -               list_count++;
>>>> +       ret = gen_pool_add_virt(dma_pool, (unsigned long)vaddr, res->start,
>>>> +                       alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
>>>> +       if (ret) {
>>>> +               pr_err("Cannot add memory range to the pool\n");
>>>> +               goto err_pool;
>>>>           }
>>>>
>>>>           return 0;
>>>>
>>>> +err_pool:
>>>> +       memunmap_pages(pgmap);
>>>>    err_memremap:
>>>>           kfree(pgmap);
>>>>    err_pgmap:
>>>> @@ -149,51 +153,49 @@ static int fill_list(unsigned int nr_pages)
>>>>           return ret;
>>>>    }
>>>>
>>>> -/**
>>>> - * xen_alloc_unpopulated_pages - alloc unpopulated pages
>>>> - * @nr_pages: Number of pages
>>>> - * @pages: pages returned
>>>> - * @return 0 on success, error otherwise
>>>> - */
>>>> -int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>>> +static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>>>> +               bool contiguous)
>>>>    {
>>>>           unsigned int i;
>>>>           int ret = 0;
>>>> +       void *vaddr;
>>>> +       bool filled = false;
>>>>
>>>>           /*
>>>>            * Fallback to default behavior if we do not have any suitable resource
>>>>            * to allocate required region from and as the result we won't be able to
>>>>            * construct pages.
>>>>            */
>>>> -       if (!target_resource)
>>>> +       if (!target_resource) {
>>>> +               if (contiguous)
>>>> +                       return -ENODEV;
>>>> +
>>>>                   return xen_alloc_ballooned_pages(nr_pages, pages);
>>>> +       }
>>>>
>>>>           mutex_lock(&list_lock);
>>>> -       if (list_count < nr_pages) {
>>>> -               ret = fill_list(nr_pages - list_count);
>>>> +
>>>> +       while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>>>> +               if (filled)
>>>> +                       ret = -ENOMEM;
>>>> +               else {
>>>> +                       ret = fill_list(nr_pages);
>>>> +                       filled = true;
>>>> +               }
>>>>                   if (ret)
>>>>                           goto out;
>>>>           }
>>>>
>>>>           for (i = 0; i < nr_pages; i++) {
>>>> -               struct page *pg = page_list;
>>>> -
>>>> -               BUG_ON(!pg);
>>>> -               page_list = pg->zone_device_data;
>>>> -               list_count--;
>>>> -               pages[i] = pg;
>>>> +               pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>>>>
>>>>    #ifdef CONFIG_XEN_HAVE_PVMMU
>>>>                   if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>>>> -                       ret = xen_alloc_p2m_entry(page_to_pfn(pg));
>>>> +                       ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
>>>>                           if (ret < 0) {
>>>> -                               unsigned int j;
>>>> -
>>>> -                               for (j = 0; j <= i; j++) {
>>>> -                                       pages[j]->zone_device_data = page_list;
>>>> -                                       page_list = pages[j];
>>>> -                                       list_count++;
>>>> -                               }
>>>> +                               /* XXX Do we need to zero pages[i]? */
>>>> +                               gen_pool_free(dma_pool, (unsigned long)vaddr,
>>>> +                                               nr_pages * PAGE_SIZE);
>>>>                                   goto out;
>>>>                           }
>>>>                   }
>>>> @@ -204,32 +206,89 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>>>           mutex_unlock(&list_lock);
>>>>           return ret;
>>>>    }
>>>> -EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>>>>
>>>> -/**
>>>> - * xen_free_unpopulated_pages - return unpopulated pages
>>>> - * @nr_pages: Number of pages
>>>> - * @pages: pages to return
>>>> - */
>>>> -void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>>> +static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>>>> +               bool contiguous)
>>>>    {
>>>> -       unsigned int i;
>>>> -
>>>>           if (!target_resource) {
>>>> +               if (contiguous)
>>>> +                       return;
>>>> +
>>>>                   xen_free_ballooned_pages(nr_pages, pages);
>>>>                   return;
>>>>           }
>>>>
>>>>           mutex_lock(&list_lock);
>>>> -       for (i = 0; i < nr_pages; i++) {
>>>> -               pages[i]->zone_device_data = page_list;
>>>> -               page_list = pages[i];
>>>> -               list_count++;
>>>> +
>>>> +       /* XXX Do we need to check the range (gen_pool_has_addr)? */
>>>> +       if (contiguous)
>>>> +               gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
>>>> +                               nr_pages * PAGE_SIZE);
>>>> +       else {
>>>> +               unsigned int i;
>>>> +
>>>> +               for (i = 0; i < nr_pages; i++)
>>>> +                       gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
>>>> +                                       PAGE_SIZE);
>>>>           }
>>>> +
>>>>           mutex_unlock(&list_lock);
>>>>    }
>>>> +
>>>> +/**
>>>> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
>>>> + * @nr_pages: Number of pages
>>>> + * @pages: pages returned
>>>> + * @return 0 on success, error otherwise
>>>> + */
>>>> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>>> +{
>>>> +       return alloc_unpopulated_pages(nr_pages, pages, false);
>>>> +}
>>>> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>>>> +
>>>> +/**
>>>> + * xen_free_unpopulated_pages - return unpopulated pages
>>>> + * @nr_pages: Number of pages
>>>> + * @pages: pages to return
>>>> + */
>>>> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>>>> +{
>>>> +       free_unpopulated_pages(nr_pages, pages, false);
>>>> +}
>>>>    EXPORT_SYMBOL(xen_free_unpopulated_pages);
>>>>
>>>> +/**
>>>> + * xen_alloc_unpopulated_dma_pages - alloc unpopulated DMAable pages
>>>> + * @dev: valid struct device pointer
>>>> + * @nr_pages: Number of pages
>>>> + * @pages: pages returned
>>>> + * @return 0 on success, error otherwise
>>>> + */
>>>> +int xen_alloc_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>>>> +               struct page **pages)
>>>> +{
>>>> +       /* XXX Handle devices which support 64-bit DMA address only for now */
>>>> +       if (dma_get_mask(dev) != DMA_BIT_MASK(64))
>>>> +               return -EINVAL;
>>>> +
>>>> +       return alloc_unpopulated_pages(nr_pages, pages, true);
>>>> +}
>>>> +EXPORT_SYMBOL(xen_alloc_unpopulated_dma_pages);
>>>> +
>>>> +/**
>>>> + * xen_free_unpopulated_dma_pages - return unpopulated DMAable pages
>>>> + * @dev: valid struct device pointer
>>>> + * @nr_pages: Number of pages
>>>> + * @pages: pages to return
>>>> + */
>>>> +void xen_free_unpopulated_dma_pages(struct device *dev, unsigned int nr_pages,
>>>> +               struct page **pages)
>>>> +{
>>>> +       free_unpopulated_pages(nr_pages, pages, true);
>>>> +}
>>>> +EXPORT_SYMBOL(xen_free_unpopulated_dma_pages);
>>>> +
>>>>    static int __init unpopulated_init(void)
>>>>    {
>>>>           int ret;
>>>> @@ -237,9 +296,19 @@ static int __init unpopulated_init(void)
>>>>           if (!xen_domain())
>>>>                   return -ENODEV;
>>>>
>>>> +       dma_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
>>>> +       if (!dma_pool) {
>>>> +               pr_err("xen:unpopulated: Cannot create DMA pool\n");
>>>> +               return -ENOMEM;
>>>> +       }
>>>> +
>>>> +       gen_pool_set_algo(dma_pool, gen_pool_best_fit, NULL);
>>>> +
>>>>           ret = arch_xen_unpopulated_init(&target_resource);
>>>>           if (ret) {
>>>> +               pr_err("xen:unpopulated: Cannot initialize target resource\n");
>>>> +               gen_pool_destroy(dma_pool);
>>>> +               dma_pool = NULL;
>>>>                   target_resource = NULL;
>>>>           }
>>>>
>>>> [snip]
>>>>
>>>>
>>>> I think, depending on the approach, we would likely need to do some
>>>> renaming for fill_list, page_list, list_lock, etc.
>>>>
>>>>
>>>> Both options work in my Arm64-based environment; I am not sure about x86.
>>>> Or do we have another option here?
>>>> I would be happy to go either route. What do you think?
>>> The second option (use "dma_pool" for all) looks great, thank you for
>>> looking into it!
>>
>> ok, great
>>
>>
>> May I please clarify a few points before starting to prepare the non-RFC version:
>>
>>
>> 1. According to the discussion at "[RFC PATCH 2/2] xen/grant-table: Use
>> unpopulated DMAable pages instead of real RAM ones" we decided
>> to stay away from "dma" in the names; also, the second option (use
>> "dma_pool" for all) implies dropping the "page_list" entirely, so I am going
>> to do some renaming:
>>
>> - s/xen_alloc_unpopulated_dma_pages()/xen_alloc_unpopulated_contiguous_pages()
>> - s/dma_pool/unpopulated_pool
>> - s/list_lock/pool_lock
>> - s/fill_list()/fill_pool()
>>
>> Any objections?
>   
> Looks good


thank you for the clarification


>
>   
>> 2. I don't much like the fact that in free_unpopulated_pages() we have to free
>> "page by page" if contiguous is false, but unfortunately we cannot avoid doing
>> that.
>> I noticed that many users of unpopulated pages retain the initially allocated
>> pages[] array, so it is passed here unmodified since being allocated,
>> but there is code (for example, gnttab_page_cache_shrink() in grant-table.c)
>> that can pass a pages[] array containing arbitrary pages.
>>
>> static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>>          bool contiguous)
>> {
>>
>> [snip]
>>
>>      /* XXX Do we need to check the range (gen_pool_has_addr)? */
>>      if (contiguous)
>>          gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[0]),
>>                  nr_pages * PAGE_SIZE);
>>      else {
>>          unsigned int i;
>>
>>          for (i = 0; i < nr_pages; i++)
>>              gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
>>                      PAGE_SIZE);
>>      }
>>
>> [snip]
>>
>> }
>>
>> I think it wouldn't be a big deal for small allocations, but for big
>> allocations it might not be optimal for speed.
>>
>> What do you think about updating some places which always require big
>> allocations to allocate (and free) contiguous pages instead?
>> The possible candidate is
>> gem_create()/xen_drm_front_gem_free_object_unlocked() in
>> drivers/gpu/drm/xen/xen_drm_front_gem.c.
>> OTOH I realize this might be an inefficient use of resources. Or better not?
>   
> Yes I think it is a good idea, more on this below.

thanks


>
>   
>> 3. alloc_unpopulated_pages() might be optimized for non-contiguous
>> allocations; currently we always try to allocate a single chunk even if
>> contiguous is false.
>>
>> static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>>          bool contiguous)
>> {
>>
>> [snip]
>>
>>      /* XXX: Optimize for non-contiguous allocations */
>>      while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>>          if (filled)
>>              ret = -ENOMEM;
>>          else {
>>              ret = fill_list(nr_pages);
>>              filled = true;
>>          }
>>          if (ret)
>>              goto out;
>>      }
>>
>> [snip]
>>
>> }
>>
>>
>> But we can allocate "page by page" for non-contiguous allocations; it
>> might not be optimal for speed, but it would be optimal for resource
>> usage. What do you think?
>>
>> static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
>>          bool contiguous)
>> {
>>
>> [snip]
>>
>>      if (contiguous) {
>>          while (!(vaddr = (void *)gen_pool_alloc(dma_pool, nr_pages * PAGE_SIZE))) {
>>              if (filled)
>>                  ret = -ENOMEM;
>>              else {
>>                  ret = fill_list(nr_pages);
>>                  filled = true;
>>              }
>>              if (ret)
>>                  goto out;
>>          }
>>
>>          for (i = 0; i < nr_pages; i++)
>>              pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>>      } else {
>>          if (gen_pool_avail(dma_pool) < nr_pages) {
>>              ret = fill_list(nr_pages - gen_pool_avail(dma_pool));
>>              if (ret)
>>                  goto out;
>>          }
>>
>>          for (i = 0; i < nr_pages; i++) {
>>              vaddr = (void *)gen_pool_alloc(dma_pool, PAGE_SIZE);
>>              if (!vaddr) {
>>                  while (i--)
>>                      gen_pool_free(dma_pool, (unsigned long)page_to_virt(pages[i]),
>>                              PAGE_SIZE);
>>
>>                  ret = -ENOMEM;
>>                  goto out;
>>              }
>>
>>              pages[i] = virt_to_page(vaddr);
>>          }
>>      }
>>
>> [snip]
>>
>> }
> Basically, if we allocate (and free) page-by-page it leads to more
> efficient resource utilization but it is slower.

yes, I think the same


>   If we allocate larger
> contiguous chunks it is faster but it leads to less efficient resource
> utilization.

yes, I think the same


>
> Given that on both x86 and ARM the unpopulated memory resource is
> arbitrarily large, I don't think we need to worry about resource
> utilization. It is not backed by real memory. The only limitation is the
> address space size which is very large.

I agree


>
> So I would say optimize for speed and use larger contiguous chunks even
> when continuity is not strictly required.

thank you for the clarification, will do
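
For reference, the "allocate page by page, roll back on failure" pattern
discussed above can be sketched in plain user-space C. malloc()/free() and
FAKE_PAGE_SIZE below are stand-ins for gen_pool_alloc()/gen_pool_free() and
PAGE_SIZE, so this only illustrates the rollback pattern, not the actual Xen
code:

```c
#include <stdlib.h>

#define FAKE_PAGE_SIZE 4096 /* stand-in for PAGE_SIZE, illustration only */

/*
 * Allocate nr_pages "pages" one by one; on failure, free everything
 * allocated so far (the rollback loop from the snippet above) and
 * return an error. The kernel code would return -ENOMEM instead.
 */
static int alloc_pages_one_by_one(unsigned int nr_pages, void **pages)
{
    unsigned int i;

    for (i = 0; i < nr_pages; i++) {
        pages[i] = malloc(FAKE_PAGE_SIZE);
        if (!pages[i]) {
            /* Undo the partial allocation */
            while (i--)
                free(pages[i]);
            return -1;
        }
    }
    return 0;
}

static void free_pages_one_by_one(unsigned int nr_pages, void **pages)
{
    unsigned int i;

    for (i = 0; i < nr_pages; i++)
        free(pages[i]);
}
```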


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 18:16:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 18:16:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350270.576556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1XYR-0007Sp-Ql; Wed, 15 Jun 2022 18:15:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350270.576556; Wed, 15 Jun 2022 18:15:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1XYR-0007Si-Nq; Wed, 15 Jun 2022 18:15:47 +0000
Received: by outflank-mailman (input) for mailman id 350270;
 Wed, 15 Jun 2022 18:15:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1XYQ-0007Sc-LQ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 18:15:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1XYP-00087C-Ug; Wed, 15 Jun 2022 18:15:45 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.25.191]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1XYP-0000p0-B9; Wed, 15 Jun 2022 18:15:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=74zxn/BFNywwaJ4ztXIs/6H/aTG9Vlub6f9bjuR1SMM=; b=XdaaulWWPv24up747cpikI5BNn
	edoMb6iA6Acpv/oKxjsY+Fyt4kz7s9+H4zwHHNVWJtcIbc/xd/ZZcUyHyYnX52noqC7goUXkG1LVf
	sKkTOzGTcsA1ofw3pR2zR5WNVdVyQYpIJq8g4BYP/ADOhXdE8k0IefnnPxdTNMFmZyt4=;
Message-ID: <ba744d38-7ebe-c58a-5576-99d44ec36a29@xen.org>
Date: Wed, 15 Jun 2022 19:15:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jens Wiklander <jens.wiklander@linaro.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org> <874k0nhvsq.fsf@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <874k0nhvsq.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 14/06/2022 20:47, Volodymyr Babchuk wrote:
>>   menu "ARM errata workaround via the alternative framework"
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 1d862351d111..dbf5e593a069 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -20,6 +20,7 @@ obj-y += domain.o
>>   obj-y += domain_build.init.o
>>   obj-y += domctl.o
>>   obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>> +obj-$(CONFIG_FFA) += ffa.o
>>   obj-y += gic.o
>>   obj-y += gic-v2.o
>>   obj-$(CONFIG_GICV3) += gic-v3.o
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 8110c1df8638..a93e6a9c4aef 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -27,6 +27,7 @@
>>   #include <asm/cpufeature.h>
>>   #include <asm/current.h>
>>   #include <asm/event.h>
>> +#include <asm/ffa.h>
>>   #include <asm/gic.h>
>>   #include <asm/guest_atomics.h>
>>   #include <asm/irq.h>
>> @@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
>>       if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
>>           goto fail;
>>   
>> +    if ( (rc = ffa_domain_init(d)) != 0 )
> 
> So, FFA support will be enabled for each domain? I think that this is
> fine for an experimental feature, but I want to hear the maintainers' opinion.

I would prefer if we added a flag to allow per-domain support. This would 
allow someone to use FFA with a trusted domain (e.g. dom0) but not with 
non-trusted VMs (I don't yet know how secure it would be to expose it to 
everyone).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 18:40:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 18:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350278.576566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1XwT-0003Ag-QJ; Wed, 15 Jun 2022 18:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350278.576566; Wed, 15 Jun 2022 18:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1XwT-0003AZ-Mt; Wed, 15 Jun 2022 18:40:37 +0000
Received: by outflank-mailman (input) for mailman id 350278;
 Wed, 15 Jun 2022 18:40:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1XwS-0003AT-9b
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 18:40:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1XwR-0008WF-S6; Wed, 15 Jun 2022 18:40:35 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.25.191]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1XwR-0004wC-7h; Wed, 15 Jun 2022 18:40:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=GxGTD/Pzc2KdlNWvdelO5Y4it1voapJw4z6VC9uObwg=; b=VWCZ7f/MPWheuPoAI1kC3efA9M
	2003BzRTxyoJs56ngFn+2EzPsYI0DsvJhLNr69m57zn0Qz2pMg9V1+xkmX3TwQlrzjELjAAEutgGZ
	meF5t9qu5HVLgDdot/IYcs2Z0/JHqrYLhEEajEf3eknCnKsRt65RGQjypxrisgi194CQ=;
Message-ID: <31ef4c2b-359a-76dd-652f-03a1c4d000c4@xen.org>
Date: Wed, 15 Jun 2022 19:40:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v3 2/4] xen/arm: Add sb instruction support
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1655124548.git.bertrand.marquis@arm.com>
 <7fa3bf8fc27bac120e28092c8c4081d1e58f0b79.1655124548.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7fa3bf8fc27bac120e28092c8c4081d1e58f0b79.1655124548.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 13/06/2022 13:53, Bertrand Marquis wrote:
> This patch adds sb instruction support when it is supported by a
> CPU on arm64.
> A new cpu feature capability system is introduced to enable alternative
> code using the sb instruction when it is supported by the processor. This is
> decided based on the isa64 system register value and uses a new hardware
> capability ARM_HAS_SB.
> 
> The sb instruction is encoded using its hexadecimal value to avoid
> recursive macro and support old compilers not having support for sb
> instruction.
> 
> Arm32 instruction support is added but it is not enabled at the moment
> due to the lack of hardware supporting it.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> Changes in v3:
> - rename ARM64_HAS_SB to ARM_HAS_SB
> - define sb before including per arch macros
> Changes in v2:
> - fix commit message
> - add comment to explain the extra nop
> - add support for arm32 and move macro back to arm generic header
> - fix macro comment indentation
> - introduce cpu feature system instead of using errata
> ---
>   xen/arch/arm/cpufeature.c             | 28 +++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/cpufeature.h |  6 +++++-
>   xen/arch/arm/include/asm/macros.h     | 19 +++++++++++++++++-
>   xen/arch/arm/setup.c                  |  3 +++
>   xen/arch/arm/smpboot.c                |  1 +
>   5 files changed, 55 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index a58965f7b9..62d5e1770a 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -26,6 +26,24 @@ DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>   
>   struct cpuinfo_arm __read_mostly guest_cpuinfo;
>   
> +#ifdef CONFIG_ARM_64
> +static bool has_sb_instruction(const struct arm_cpu_capabilities *entry)
> +{
> +    return system_cpuinfo.isa64.sb;
> +}
> +#endif
> +
> +static const struct arm_cpu_capabilities arm_features[] = {
> +#ifdef CONFIG_ARM_64
> +    {
> +        .desc = "Speculation barrier instruction (SB)",
> +        .capability = ARM_HAS_SB,
> +        .matches = has_sb_instruction,
> +    },
> +#endif
> +    {},
> +};
> +
>   void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                                const char *info)
>   {
> @@ -70,6 +88,16 @@ void __init enable_cpu_capabilities(const struct arm_cpu_capabilities *caps)
>       }
>   }
>   
> +void check_local_cpu_features(void)
> +{
> +    update_cpu_capabilities(arm_features, "enabled support for");
> +}
> +
> +void __init enable_cpu_features(void)
> +{
> +    enable_cpu_capabilities(arm_features);
> +}
> +
>   /*
>    * Run through the enabled capabilities and enable() them on the calling CPU.
>    * If enabling of any capability fails the error is returned. After enabling a
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index f7368766c0..24c01d2b9d 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -67,8 +67,9 @@
>   #define ARM_WORKAROUND_BHB_LOOP_24 13
>   #define ARM_WORKAROUND_BHB_LOOP_32 14
>   #define ARM_WORKAROUND_BHB_SMCC_3 15
> +#define ARM_HAS_SB 16
>   
> -#define ARM_NCAPS           16
> +#define ARM_NCAPS           17
>   
>   #ifndef __ASSEMBLY__
>   
> @@ -78,6 +79,9 @@
>   
>   extern DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>   
> +void check_local_cpu_features(void);
> +void enable_cpu_features(void);
> +
>   static inline bool cpus_have_cap(unsigned int num)
>   {
>       if ( num >= ARM_NCAPS )
> diff --git a/xen/arch/arm/include/asm/macros.h b/xen/arch/arm/include/asm/macros.h
> index 1aa373760f..dc791245df 100644
> --- a/xen/arch/arm/include/asm/macros.h
> +++ b/xen/arch/arm/include/asm/macros.h
> @@ -5,13 +5,30 @@
>   # error "This file should only be included in assembly file"
>   #endif
>   
> +#include <asm/alternative.h>
> +
>       /*
>        * Speculative barrier
> -     * XXX: Add support for the 'sb' instruction
>        */
>       .macro sb
> +alternative_if_not ARM_HAS_SB
>       dsb nsh
>       isb
> +alternative_else
> +    /*
> +     * SB encoding in hexadecimal to prevent recursive macro.
> +     * extra nop is required to keep same number of instructions on both sides
> +     * of the alternative.
> +     */
> +#if defined(CONFIG_ARM_32)
> +    .inst 0xf57ff070
> +#elif defined(CONFIG_ARM_64)
> +    .inst 0xd50330ff
> +#else
> +#   error "missing sb encoding for ARM variant"
> +#endif
> +    nop
> +alternative_endif
>       .endm
>   
>   #if defined (CONFIG_ARM_32)
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 6016471d37..577c54e6fb 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -964,6 +964,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>        */
>       check_local_cpu_errata();
>   
> +    check_local_cpu_features();
> +
>       init_xen_time();
>   
>       gic_init();
> @@ -1033,6 +1035,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>        */
>       apply_alternatives_all();
>       enable_errata_workarounds();
> +    enable_cpu_features();
>   
>       /* Create initial domain 0. */
>       if ( !is_dom0less_mode() )
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 22fede6600..3f62f3a44f 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -395,6 +395,7 @@ void start_secondary(void)
>       local_abort_enable();
>   
>       check_local_cpu_errata();
> +    check_local_cpu_features();
>   
>       printk(XENLOG_DEBUG "CPU %u booted.\n", smp_processor_id());
>   

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 18:51:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 18:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350286.576578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Y6n-0004yc-Pu; Wed, 15 Jun 2022 18:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350286.576578; Wed, 15 Jun 2022 18:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1Y6n-0004yV-MV; Wed, 15 Jun 2022 18:51:17 +0000
Received: by outflank-mailman (input) for mailman id 350286;
 Wed, 15 Jun 2022 18:51:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r2OX=WW=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o1Y6m-0004yP-64
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 18:51:16 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 21b87540-ecdc-11ec-ab14-113154c10af9;
 Wed, 15 Jun 2022 20:51:14 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id 25so17462481edw.8
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 11:51:14 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 u26-20020aa7db9a000000b004314bb65e7fsm9907519edt.41.2022.06.15.11.51.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 15 Jun 2022 11:51:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21b87540-ecdc-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PHxUOy4hYN73AmNz+geJU/BFXvCSUXjGlGFigx0rrgc=;
        b=m65gUB5t5E2F6t+A/usvGDamPbDKNVE1bADjHAJvWgEai4PnCw7cFJ1ZmqcKaYyiEs
         WOAv9zhg/1JT2pLdBCqaaERvA/0/2mwZgiGibZtniHabh89xjwH6cMOinzp3VVN60uqW
         /2AwEiq77nN7U274M52KAsxXTVRFkaaqYZHpP0vtlapR/3guKpztjhgCjeRCKSOQsxVJ
         6g6grUzfblFrLnOse5jXoSnmwyBFbh70ZNg/3Qg/9pDRWTAmMcDqb8FJAX66P4myOleK
         Oh1NGBNTQFVPoF67K111/vGxDuMyhQjc2CJBAQgz5PvdTy44BdDob2n+uQTQ/Z643Wej
         qqrQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PHxUOy4hYN73AmNz+geJU/BFXvCSUXjGlGFigx0rrgc=;
        b=1zzucfcx2PbxLTDcs+CbECXP562i9oyOv4MB5yAp4UFWQdZqzm/5y9MDy1P8u3rIG6
         gkRrpSfUXCZllV1OibXu6kESEpaEj7IisOjjF1H5H1f7FTYSDDBgC3eEVsaULIi4SLKj
         jm9VBBi30bgFjtlrfpPlArDtJ2WWDcyTHA9oq4MRF9YbEFNNuwnxWfCy16p4TALX9P9r
         Zv1XlUBonWVGSmioH8ONt8sWLZ3ckyerNrGjg34Qzue4d4CFTYwpb/jFNQQtzcE4nITZ
         lAwwklRZH16ouIqSMhUJzJ4TF7X6X7sTRqYtOd5UHCGrq515oC/YO+DCDWShCIkXVUgI
         RIMA==
X-Gm-Message-State: AJIora9LVb+a0fyYS17q6DweFyujbNdZGBR1ebs7n/LxypOAHCADrBUY
	P3DUXkL3hRotwRpY8hn2RW93r+QkIzg=
X-Google-Smtp-Source: AGRyM1tY3dMOvP6/At7qoo+/8nOctrrYRwjXXZTNJtf5fFJCUUIaxnsNwRCKkvSHuLvWA1BC0rHV4A==
X-Received: by 2002:a05:6402:177b:b0:433:426d:83ea with SMTP id da27-20020a056402177b00b00433426d83eamr1535611edb.18.1655319073935;
        Wed, 15 Jun 2022 11:51:13 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [ImageBuilder] [PATCH v2] uboot-script-gen: Add DOMU_STATIC_MEM
Date: Wed, 15 Jun 2022 21:51:00 +0300
Message-Id: <20220615185100.283754-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new config parameter to configure a dom0less VM with static allocation.
DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
The parameter specifies the host physical address regions to be statically
allocated to the VM. Each region is defined by its start address and size.

For instance,
DOMU_STATIC_MEM[0]="0x30000000 0x10000000 0x50000000 0x20000000"
indicates that the host memory regions [0x30000000, 0x40000000) and
[0x50000000, 0x70000000) are statically allocated to the first dom0less VM.
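
For reference, add_device_tree_static_mem() in uboot-script-gen emits each
64-bit base address and size as a pair of 32-bit device tree cells (high word,
then low word), since both #xen,static-mem-address-cells and
#xen,static-mem-size-cells are 2. A minimal standalone shell illustration of
that split:

```shell
# Split a 64-bit value into two 32-bit device tree cells (high, low),
# as done per region value in add_device_tree_static_mem().
val=0x30000000
printf '0x%x 0x%x\n' $((val >> 32)) $((val & ((1 << 32) - 1)))
# For 0x30000000 this prints: 0x0 0x30000000
```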

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Notes:
    v2: in add_device_tree_static_mem(), replace i with val because variable i
        is already in use as an index

 README.md                |  4 ++++
 scripts/uboot-script-gen | 20 ++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/README.md b/README.md
index 8ce13f0..876e46d 100644
--- a/README.md
+++ b/README.md
@@ -154,6 +154,10 @@ Where:
   automatically at boot as dom0-less guest. It can still be created
   later from Dom0.
 
+- DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
+  if specified, indicates the host physical address regions
+  [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
+
 - LINUX is optional but specifies the Linux kernel for when Xen is NOT
   used.  To enable this set any LINUX\_\* variables and do NOT set the
   XEN variable.
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 0adf523..3a5f720 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -108,6 +108,22 @@ function add_device_tree_passthrough()
     dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
 }
 
+function add_device_tree_static_mem()
+{
+    local path=$1
+    local regions=$2
+
+    dt_set "$path" "#xen,static-mem-address-cells" "hex" "0x2"
+    dt_set "$path" "#xen,static-mem-size-cells" "hex" "0x2"
+
+    for val in ${regions[@]}
+    do
+	cells+=("$(printf "0x%x 0x%x" $(($val >> 32)) $(($val & ((1 << 32) - 1))))")
+    done
+
+    dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
+}
+
 function xen_device_tree_editing()
 {
     dt_set "/chosen" "#address-cells" "hex" "0x2"
@@ -143,6 +159,10 @@ function xen_device_tree_editing()
         dt_set "/chosen/domU$i" "#size-cells" "hex" "0x2"
         dt_set "/chosen/domU$i" "memory" "int" "0 ${DOMU_MEM[$i]}"
         dt_set "/chosen/domU$i" "cpus" "int" "${DOMU_VCPUS[$i]}"
+	if test "${DOMU_STATIC_MEM[$i]}"
+        then
+	    add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
+        fi
         dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
         add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
         if test "${domU_ramdisk_addr[$i]}"
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 15 19:01:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 19:01:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350294.576589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1YGi-0006nm-NR; Wed, 15 Jun 2022 19:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350294.576589; Wed, 15 Jun 2022 19:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1YGi-0006nf-KE; Wed, 15 Jun 2022 19:01:32 +0000
Received: by outflank-mailman (input) for mailman id 350294;
 Wed, 15 Jun 2022 19:01:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1YGh-0006nZ-HQ
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 19:01:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1YGg-0000TB-5w; Wed, 15 Jun 2022 19:01:30 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.25.191]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1YGg-0008TW-0D; Wed, 15 Jun 2022 19:01:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=LTtOR01RoncP8K6/pfJ+xyBfBKFTlHwLxpgpw4eKwA8=; b=j7vpj4zA6Jt/lJ8MyNzG1DS4oX
	TMf2vadAQSPrE/20sHGvyKco6LqoPoC0rnzU6QSFS5kcARgvEwXpwAZWasPq1JOuwYyEHbyktugVW
	vIxNDoq1/3aA8LZZGxEupOKeIh8IUa7oZsnlfsGAz1cxQkga13wKMT88mdPfI8WrHlNc=;
Message-ID: <588f9903-2a0e-a546-912a-24d2a13c3c6f@xen.org>
Date: Wed, 15 Jun 2022 20:01:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
To: Jens Wiklander <jens.wiklander@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
 <20220615155825.GA30639@jade>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220615155825.GA30639@jade>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 15/06/2022 16:58, Jens Wiklander wrote:
> On Fri, Jun 10, 2022 at 05:41:33PM -0700, Stefano Stabellini wrote:
>>>   #endif /* __ASSEMBLY__ */
>>>   
>>>   /*
>>> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
>>> index 676740ef1520..6f90c08a6304 100644
>>> --- a/xen/arch/arm/vsmc.c
>>> +++ b/xen/arch/arm/vsmc.c
>>> @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
>>>       switch ( fid )
>>>       {
>>>       case ARM_SMCCC_VERSION_FID:
>>> -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
>>> +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
>>>           return true;
>>    
>> This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
>> is unimplemented on ARM32. If there is an ARM32 implementation in Linux
>> for ARM_SMCCC_VERSION_1_2 you might as well import it too.
>>
>> Otherwise we'll have to abstract it away, e.g.:
>>
>> #ifdef CONFIG_ARM_64
>> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
>> #else
>> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
>> #endif
> 
> I couldn't find an ARM32 implementation for ARM_SMCCC_VERSION_1_2. But
> I'm not sure it's needed at this point. From what I've understood r4-17
> are either preserved or updated by the function ID in question. So
> claiming ARM_SMCCC_VERSION_1_2 shouldn't break anything. 

So in Xen, we always take a snapshot of the registers on entry to the 
hypervisor and only touch them when necessary. Therefore, it doesn't 
matter whether we claim to be compliant with 1.1 or 1.2 as far as the 
argument passing convention is concerned.

However, the spec is not only about arguments. For instance, SMCCC v1.1 
also added some mandatory functions (e.g. detecting the version). I 
haven't looked closely at whether SMCCC v1.2 introduced such a thing. 
Can you confirm which mandatory features come with 1.2?

Furthermore, your commit message explains why arm_smccc_1_2_smc() was 
introduced, but it seems to miss some words about exposing SMCCC v1.2 to 
the VM.

In general, I think it is better to split the host support from the VM 
support. The two are technically independent (caller vs implementation) 
and there are different problems for each (see above for an example). I 
don't think there is a lot to add here, so I would be OK with keeping it 
in the same patch with a few words.

Lastly, can you provide a link to the spec in the commit message? This 
would help in finding the PDF easily. I think in this case you are using 
ARM DEN 0028C (this is not the latest, but it describes 1.2):

https://developer.arm.com/documentation/den0028/c/?lang=en

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 19:43:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 19:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350303.576600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1YvP-0003Tx-0o; Wed, 15 Jun 2022 19:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350303.576600; Wed, 15 Jun 2022 19:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1YvO-0003Tq-SU; Wed, 15 Jun 2022 19:43:34 +0000
Received: by outflank-mailman (input) for mailman id 350303;
 Wed, 15 Jun 2022 19:43:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1YvN-0003Tk-0e
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 19:43:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1YvM-0001AB-9m; Wed, 15 Jun 2022 19:43:32 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1YvL-00069J-Jv; Wed, 15 Jun 2022 19:43:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=bMgGonnEiv+yxqh5aHw/ojo91IbRi8VRnhsZ22GFEOc=; b=Aecp+ZUGTxJc6s2/vg24WrBeCa
	jeu6cmMLhHay448XsvRu7olMK+9NAED5Q8FPAfQrjP8ESagLQqZJCTQjn6ihpHmpRY/g/2bPk3Z76
	TDrW9ofMNqHCRzcKXj/uBCerPgk9WtLEoaor9uROI1hNsZBSK77Mk5FbSB4jwH5Oeumk=;
Message-ID: <9c39d8dc-4a74-297c-45fd-4e261fe9ef90@xen.org>
Date: Wed, 15 Jun 2022 20:43:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.9.1
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jens Wiklander <jens.wiklander@linaro.org>
Cc: xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
In-Reply-To: <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11/06/2022 01:41, Stefano Stabellini wrote:
>>   #endif /* __ASSEMBLY__ */
>>   
>>   /*
>> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
>> index 676740ef1520..6f90c08a6304 100644
>> --- a/xen/arch/arm/vsmc.c
>> +++ b/xen/arch/arm/vsmc.c
>> @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
>>       switch ( fid )
>>       {
>>       case ARM_SMCCC_VERSION_FID:
>> -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
>> +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
>>           return true;
>    
> This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
> is unimplemented on ARM32. If there is an ARM32 implementation in Linux
> for ARM_SMCCC_VERSION_1_2 you might as well import it too.
> 
> Otherwise we'll have to abstract it away, e.g.:
> 
> #ifdef CONFIG_ARM_64
> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
> #else
> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
> #endif

I don't understand why you want to tie the virtual and host SMCCC version.

In theory, it would be possible for us to implement a subsystem to fully 
emulate, let's say, FFA. We would tell the VM that we are v1.2 
compliant, but we would not need the helper as no calls would be forwarded.

When a 32-bit guest is running on Xen Arm64, we are going to report that 
SMCCC v1.2 is available. This is not much different from running a 
32-bit guest on 32-bit hardware. So I think we should expose 1.2 unless 
we think there is a problem in the Xen 32-bit specific code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 20:14:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 20:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350311.576610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1ZP3-0007Ws-CT; Wed, 15 Jun 2022 20:14:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350311.576610; Wed, 15 Jun 2022 20:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1ZP3-0007Wl-9o; Wed, 15 Jun 2022 20:14:13 +0000
Received: by outflank-mailman (input) for mailman id 350311;
 Wed, 15 Jun 2022 20:14:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ZP2-0007Wb-AR; Wed, 15 Jun 2022 20:14:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ZP2-0001oK-7K; Wed, 15 Jun 2022 20:14:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ZP1-0004Pr-Lh; Wed, 15 Jun 2022 20:14:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ZP1-0007uq-LC; Wed, 15 Jun 2022 20:14:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gh7thEnTyKo1ocBKlCaVTTdfya4D1MZ+cSrHn+tHHiI=; b=NtgBMyREZofhDcrd3KaQB/UVvX
	vElbXIBCPMAPczy+9APdM1WzPbqtUqr/0K5o5cmr8zv+0jc4mO4UgWNmJwqkt5AS+p9mjkVYqhN5E
	Zah7RS7tHHJkMCLmrkHbj760618EIGqUV/XdkI95vymBS7/6UZNtnXlynWD1q1ma45qQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171177-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171177: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-raw:<job status>:broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=018ab4fabddd94f1c96f3b59e180691b9e88d5d8
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 20:14:11 +0000

flight 171177 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171177/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-raw    <job status>                 broken  in 171168
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-raw 5 host-install(5) broken in 171168 pass in 171177
 test-armhf-armhf-xl-rtds     19 guest-start.2              fail pass in 171168

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                018ab4fabddd94f1c96f3b59e180691b9e88d5d8
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   22 days
Failing since        170716  2022-05-24 11:12:06 Z   22 days   54 attempts
Testing same since   171168  2022-06-14 22:12:35 Z    0 days    2 attempts

------------------------------------------------------------
2348 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-raw broken

Not pushing.

(No revision log; it would be 277190 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 22:09:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 22:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350322.576622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1bCI-0004B7-1B; Wed, 15 Jun 2022 22:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350322.576622; Wed, 15 Jun 2022 22:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1bCH-0004B0-Tq; Wed, 15 Jun 2022 22:09:09 +0000
Received: by outflank-mailman (input) for mailman id 350322;
 Wed, 15 Jun 2022 22:09:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3iFn=WW=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o1bCG-0004Ar-Lp
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 22:09:08 +0000
Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com
 [2607:f8b0:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4edfcdf-ecf7-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 00:09:06 +0200 (CEST)
Received: by mail-pl1-x62a.google.com with SMTP id k7so2188546plg.7
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jun 2022 15:09:05 -0700 (PDT)
Received: from jade ([192.77.111.2]) by smtp.gmail.com with ESMTPSA id
 r24-20020a638f58000000b00401a7b4f137sm79597pgn.41.2022.06.15.15.09.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 15 Jun 2022 15:09:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4edfcdf-ecf7-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=KDHt8rTEQEteKpcLPR9BQ8oUD9BiuWpeuA5sV9+beto=;
        b=imJ3C4u3FxHRsnOWqTD91loLLoJp05M3FCuj9sl/YbV1binqdmaaotn4CBhMgYup4C
         wYl1IU33Gici/VwVeV3yG6QWzQkkHHoShsKf7ycbXaJE9mMJn9j/4jLwotKgGKf1DgXy
         Stc4pdgz5pnf8ZU6GUZyl71vPJ4ZtSlbOUx67yDSGp9ls2MjlAkFFRR8ulkGlHqV5hAy
         +h3lJjwcWGrDPI4FFPVTHaABgJlXisBlSaa70/ULdWtdRGmtEE41a8B/xYWF4Oi3teJo
         0wdxYpaqYUPWLD0SdRQFgqF7r2eer0xsXkTB9nKHIfbJqkfwgbSkGGd7cKwkNGfHqQ3+
         ScuQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=KDHt8rTEQEteKpcLPR9BQ8oUD9BiuWpeuA5sV9+beto=;
        b=6uaQ3wTvxj5mdGHse3Gxz0fwfEo+EymFRMgrBYJjR4LIfqFe9K6mzWY0qgZmyMmovt
         JZ1N3zCTV2A6Nsb68BJSLsAdZ6+MR8QT1wIQLtA25uEH1Chh4tieeekquZ3vm9S6O1ns
         GqDqsUrqxZM9cG7xpOL6MwoYE5ApzKDMd+3oekfbGTJb6ef2MeJDTWzDyVE9rOrZFVMy
         u38AAChClUsN7xcvr0N4wCicxwPQ5tgM1ZNg9aliOeTxq9BKpGZ8U+gIw+5f23dNMa8e
         DFCNpI/9SpUr5A6Jw5bIOEH3iGD9L+siUkpR7Hc+jOlNP++10hXEVpXN7Vt4IsOzGXPW
         mDwg==
X-Gm-Message-State: AJIora/MmD77p3YcrnB3uECcR3+GGwJzwoJN9YL7Wdgk3UyRr7MvyX/B
	Kys7mzDB/E/5KuLUreK/f47n/Q==
X-Google-Smtp-Source: AGRyM1sjNovknLmHovCVaS9QjQl8WpwA6fXxX0ajD9dNmRh1gL3YDFpxx/oVrw29dsWBBYATiX/Vlw==
X-Received: by 2002:a17:90b:380b:b0:1e6:67f6:f70c with SMTP id mq11-20020a17090b380b00b001e667f6f70cmr12332003pjb.120.1655330944008;
        Wed, 15 Jun 2022 15:09:04 -0700 (PDT)
Date: Wed, 15 Jun 2022 15:09:01 -0700
From: Jens Wiklander <jens.wiklander@linaro.org>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2
 extended input/output registers
Message-ID: <20220615220901.GA43803@jade>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
 <20220615155825.GA30639@jade>
 <588f9903-2a0e-a546-912a-24d2a13c3c6f@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <588f9903-2a0e-a546-912a-24d2a13c3c6f@xen.org>

On Wed, Jun 15, 2022 at 08:01:28PM +0100, Julien Grall wrote:
> Hi,
> 
> On 15/06/2022 16:58, Jens Wiklander wrote:
> > On Fri, Jun 10, 2022 at 05:41:33PM -0700, Stefano Stabellini wrote:
> > > >   #endif /* __ASSEMBLY__ */
> > > >   /*
> > > > diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> > > > index 676740ef1520..6f90c08a6304 100644
> > > > --- a/xen/arch/arm/vsmc.c
> > > > +++ b/xen/arch/arm/vsmc.c
> > > > @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
> > > >       switch ( fid )
> > > >       {
> > > >       case ARM_SMCCC_VERSION_FID:
> > > > -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
> > > > +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
> > > >           return true;
> > > This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
> > > is unimplemented on ARM32. If there is an ARM32 implementation in Linux
> > > for ARM_SMCCC_VERSION_1_2 you might as well import it too.
> > > 
> > > Otherwise we'll have to abstract it away, e.g.:
> > > 
> > > #ifdef CONFIG_ARM_64
> > > #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
> > > #else
> > > #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
> > > #endif
> > 
> > I couldn't find an ARM32 implementation for ARM_SMCCC_VERSION_1_2. But
> > I'm not sure it's needed at this point. From what I've understood, r4-r17
> > are either preserved or updated depending on the function ID in question. So
> > claiming ARM_SMCCC_VERSION_1_2 shouldn't break anything.
> 
> So in Xen, we always take a snapshot of the registers on entry to the
> hypervisor and only touch it when necessary. Therefore, it doesn't matter
> whether we claim to be compliant with 1.1 or 1.2 based on the argument
> passing convention.
> 
> However, the spec is not only about arguments. For instance, SMCCC v1.1 also
> added some mandatory functions (e.g. detecting the version). I haven't
> looked closely at whether SMCCC v1.2 introduced such a thing. Can you
> confirm what mandatory features come with 1.2?

There's a nice summary in a table at the end of the C version of DEN0028
you linked below. For SMCCC v1.2:
Argument/Result register set:
Permits calls to use R4-R7 as return registers (Section 4.1).
Permits calls to use X4-X17 as return registers (Section 3.1).
Permits calls to use X8-X17 as argument registers (Section 3.1).
Introduces:
SMCCC_ARCH_SOC_ID (Section 7.4)
Deprecates:
UID, Revision Queries on Arm Architecture Service (Section 6.2)
Count Query on all services (Section 6.2)

As far as I can tell, nothing mandatory is introduced with version 1.2.

> 
> Furthermore, your commit message explains why arm_smccc_1_2_smc() was
> introduced. But it seems to miss some words about exposing SMCCC v1.2 to the
> VM.
> 
> In general, I think it is better to split the host support from the VM
> support. The two are technically independent (caller vs implementation)
> and there are different concerns for each (see above for an example). I
> don't think there is a lot to add here, so I would be OK to keep it in the
> same patch with a few words.
> 
> Lastly, can you provide a link to the spec in the commit message? This would
> help find the PDF easily. I think in this case, you are using
> ARM DEN 0028C (this is not the latest, but it describes 1.2):
> 
> https://developer.arm.com/documentation/den0028/c/?lang=en

I'll update the commit message.

Thanks,
Jens


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 23:39:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 23:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350332.576633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1cbe-0006Y2-Mq; Wed, 15 Jun 2022 23:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350332.576633; Wed, 15 Jun 2022 23:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1cbe-0006Xv-Jt; Wed, 15 Jun 2022 23:39:26 +0000
Received: by outflank-mailman (input) for mailman id 350332;
 Wed, 15 Jun 2022 23:39:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1cbd-0006Xl-P6; Wed, 15 Jun 2022 23:39:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1cbd-0005M1-MT; Wed, 15 Jun 2022 23:39:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1cbd-00041o-2K; Wed, 15 Jun 2022 23:39:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1cbd-0001gA-1m; Wed, 15 Jun 2022 23:39:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=94r654B5SXC3yi/sKZCS6EPoLwX0VjdN17vwo0vPpDA=; b=P+eDd4xQ6y6VzKFbnuFX4/Gd+4
	KHarY0bA0JmqMYZ4yBXjDCIHxkblK1uFhDw3vrD0tZvsU1LvMDF2/iTPRSDAFJGZm5u62eCVjbYSC
	ms5p/H3P3v1EQpscMp+iBqPWzdNf+7nT6uLDHsYoodw6A1hBCOJaa9Dk6/uZFQoGC8es=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171180-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 171180: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9a5e4bc76058766962ab3ff13f42c1d39a8e08d3
X-Osstest-Versions-That:
    qemuu=a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jun 2022 23:39:25 +0000

flight 171180 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171180/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 167674

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 167674
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 167674
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 167674
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 167674
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 167674
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 167674
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 167674
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 167674
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                9a5e4bc76058766962ab3ff13f42c1d39a8e08d3
baseline version:
 qemuu                a68d6d311c2d1fd9d2fa9a0768ea2353e8a79b42

Last test of basis   167674  2022-01-12 10:40:11 Z  154 days
Testing same since   171180  2022-06-15 14:08:39 Z    0 days    1 attempts

------------------------------------------------------------
420 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a68d6d311c..9a5e4bc760  9a5e4bc76058766962ab3ff13f42c1d39a8e08d3 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 15 23:40:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jun 2022 23:40:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350341.576643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1cd6-0007rE-1H; Wed, 15 Jun 2022 23:40:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350341.576643; Wed, 15 Jun 2022 23:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1cd5-0007r7-UM; Wed, 15 Jun 2022 23:40:55 +0000
Received: by outflank-mailman (input) for mailman id 350341;
 Wed, 15 Jun 2022 23:40:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oGR=WW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1cd5-0007qv-6r
 for xen-devel@lists.xenproject.org; Wed, 15 Jun 2022 23:40:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97dcbcfe-ed04-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 01:40:53 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 028EA619C5;
 Wed, 15 Jun 2022 23:40:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0EE0AC3411A;
 Wed, 15 Jun 2022 23:40:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97dcbcfe-ed04-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655336451;
	bh=oSj2REQ5qGLGBBrSe5qR+g1EGWydDN+7KvkPWe4Dyo4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LUW1bMXvrBoJoj1dB2FuYZFam/bHc7Ly+IxnWxq6Eb8tkpXlgcaemVUcTi8TY4Mj6
	 uHZLAofAPPPATkiVmcMgvzpBt62ULG6W6WGlXSJGSocKm3EFl3Z8hx0RCnMvyecbzY
	 WLLEW+JmY1nObyduCjwaVJfMU8rdhhVpDjLZzvIU4VvLwXf8/ZlyeF1dtohOyzFKSu
	 y8zUBntYlUWGr/91mAzroQtwqdOzKbJoEawyIF/BwKQoECNk0wFztH29gRlL7r4aN8
	 oOZ6g0VIMBXuP+1EKhMpG4IYmehNZyQ2KnPhCGHeYpGcIPz9M6QPoCSvKLbOoywOEW
	 VU8wLej/gSrQQ==
Date: Wed, 15 Jun 2022 16:40:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Juergen Gross <jgross@suse.com>
cc: xen-devel@lists.xenproject.org, linux-doc@vger.kernel.org, 
    linux-kernel@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <20220615084835.27113-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206151336230.2430546@ubuntu-linux-20-04-desktop>
References: <20220615084835.27113-1-jgross@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 15 Jun 2022, Juergen Gross wrote:
> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
> 
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
> 
> By default, allow virtio devices without grant support for non-PV
> guests.
> 
> The setting can be overridden by using the new "xen_virtio_grant"
> command line parameter.
> 
> Add a new config item to always force use of grants for virtio.
> 
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |  6 +++++
>  drivers/xen/Kconfig                           |  9 ++++++++
>  drivers/xen/grant-dma-ops.c                   | 22 +++++++++++++++++++
>  include/xen/xen.h                             | 12 +++++-----
>  4 files changed, 42 insertions(+), 7 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 8090130b544b..7960480c6fe4 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -6695,6 +6695,12 @@
>  			improve timer resolution at the expense of processing
>  			more timer interrupts.
>  
> +	xen_virtio_grant= [XEN]
> +			Control whether virtio devices are required to use
> +			grants when running as a Xen guest. The default is
> +			"yes" for PV guests or when the kernel has been built
> +			with CONFIG_XEN_VIRTIO_FORCE_GRANT set.
> +
>  	xen.balloon_boot_timeout= [XEN]
>  			The time (in seconds) to wait before giving up to boot
>  			in case initial ballooning fails to free enough memory.
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index bfd5f4f706bc..a65bd92121a5 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>  
>  	  If in doubt, say n.
>  
> +config XEN_VIRTIO_FORCE_GRANT
> +	bool "Require Xen virtio support to use grants"
> +	depends on XEN_VIRTIO
> +	help
> +	  Require virtio for Xen guests to use grant mappings.
> +	  This will avoid the need to give the backend the right to map all
> +	  of the guest memory. This will need support on the backend side
> +	  (e.g. qemu or kernel, depending on the virtio device types used).
> +
>  endmenu
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index fc0142484001..d1fae789dfad 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -11,6 +11,7 @@
>  #include <linux/dma-map-ops.h>
>  #include <linux/of.h>
>  #include <linux/pfn.h>
> +#include <linux/platform-feature.h>
>  #include <linux/xarray.h>
>  #include <xen/xen.h>
>  #include <xen/xen-ops.h>
> @@ -27,6 +28,27 @@ static DEFINE_XARRAY(xen_grant_dma_devices);
>  
>  #define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)
>  
> +static bool __initdata xen_virtio_grants;
> +static bool __initdata xen_virtio_grants_set;
> +static __init int parse_use_grants(char *arg)
> +{
> +	if (!strcmp(arg, "yes"))
> +		xen_virtio_grants = true;
> +	else if (!strcmp(arg, "no"))
> +		xen_virtio_grants = false;
> +	xen_virtio_grants_set = true;
> +
> +	return 0;
> +}
> +early_param("xen_virtio_grant", parse_use_grants);
> +
> +void xen_set_restricted_virtio_memory_access(void)
> +{
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_virtio_grants ||
> +	    (!xen_virtio_grants_set && xen_pv_domain()))
> +		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> +}

I agree with Christoph on this.

On ARM all guests are HVM guests. Unless I am reading this wrongly, with
this check, the user needs to pass the xen_virtio_grant command line
option or add CONFIG_XEN_VIRTIO_FORCE_GRANT to the build to use virtio
with grants. Instead, it should be automatic.

I am not against adding new command line or compile-time options. But
on ARM we already have all the information we need in device tree with
"iommus" and "xen,grant-dma". We don't need anything more.

On ARM if "xen,grant-dma" is present we need to enable
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS, otherwise we don't.

So I think it should be something like the appended (untested):

- on ARM we call xen_set_restricted_virtio_memory_access if
  xen,grant-dma is present in device tree
- on x86 ideally we would have something like xen,grant-dma in a Xen
  ACPI table, but for now:
    - always restrict for PV guests (no change)
    - only restrict for HVM guests if a new cmdline option is passed
    - so the command line option is only for Xen x86 HVM guests
    - no need for another build-time option

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 2522b11e593f..cdd13d08f836 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -6730,6 +6730,10 @@
 			improve timer resolution at the expense of processing
 			more timer interrupts.
 
+	xen_virtio_grant= [X86,XEN]
+			Control whether virtio devices are required to use
+			grants when running as a Xen HVM guest.
+
 	xen.balloon_boot_timeout= [XEN]
 			The time (in seconds) to wait before giving up to boot
 			in case initial ballooning fails to free enough memory.
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 1f9c3ba32833..07eb69f9e7df 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
 	if (!xen_domain())
 		return 0;
 
-	xen_set_restricted_virtio_memory_access();
-
 	if (!acpi_disabled)
 		xen_acpi_guest_init();
 	else
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 8b71b1dd7639..66b1d9d3d950 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -189,13 +189,27 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
 }
 
 static bool no_vector_callback __initdata;
+static bool __initdata xen_virtio_grants;
+static bool __initdata xen_virtio_grants_set;
+static __init int parse_use_grants(char *arg)
+{
+	if (!strcmp(arg, "yes"))
+		xen_virtio_grants = true;
+	else if (!strcmp(arg, "no"))
+		xen_virtio_grants = false;
+	xen_virtio_grants_set = true;
+
+	return 0;
+}
+early_param("xen_virtio_grant", parse_use_grants);
 
 static void __init xen_hvm_guest_init(void)
 {
 	if (xen_pv_domain())
 		return;
 
-	xen_set_restricted_virtio_memory_access();
+	if (xen_virtio_grants)
+		xen_set_restricted_virtio_memory_access();
 
 	init_hvm_pv_info();
 
diff --git a/drivers/xen/grant-dma-iommu.c b/drivers/xen/grant-dma-iommu.c
index 16b8bc0c0b33..b43a8906ef64 100644
--- a/drivers/xen/grant-dma-iommu.c
+++ b/drivers/xen/grant-dma-iommu.c
@@ -40,6 +40,7 @@ static int grant_dma_iommu_probe(struct platform_device *pdev)
 		return ret;
 
 	platform_set_drvdata(pdev, mmu);
+	xen_set_restricted_virtio_memory_access();
 
 	return 0;
 }


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 00:09:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 00:09:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350351.576655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1d4R-0003OI-FC; Thu, 16 Jun 2022 00:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350351.576655; Thu, 16 Jun 2022 00:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1d4R-0003OB-BY; Thu, 16 Jun 2022 00:09:11 +0000
Received: by outflank-mailman (input) for mailman id 350351;
 Thu, 16 Jun 2022 00:09:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hdZ4=WX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1d4Q-0003O5-0w
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 00:09:10 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8aca1a37-ed08-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 02:09:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id CCE6DB821FC;
 Thu, 16 Jun 2022 00:09:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 551FAC3411F;
 Thu, 16 Jun 2022 00:09:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8aca1a37-ed08-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655338145;
	bh=jJISYR6NRFQwSY8+RR9lGNRnt7g65v9vRC0lxptbg0c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YIQB3ko9rNTn4jxLzmpHRcocUWHPMbAv8y5LTEMWW0pwzfRAmylxQexP8CU+CD7UR
	 FZJbIYfStAF9QG9PprZVTVS8gqC4L12WHVo2S2PXfC2l73S7XdZvQDEPlf6LTSpZei
	 mU9ujRJzSSq+sE4e1z38CZhmbK9WMPOILzdr+yT2XrLEULyQ7Ve2ostd+3yKX0T0nv
	 Bl0vFma/V0PVKSRc2NLJlkw9TcoH3maE67bBn9FhpncVqFfqT3LCeWW2/xi0MC6Ccd
	 8diAnK/7og7g5GLe3VJHT0Igjf7mRznKdYxZbneRq2jvpu25YsRfU0d97N33c1NNJR
	 wG24CNMMXxzGA==
Date: Wed, 15 Jun 2022 17:09:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
In-Reply-To: <9c39d8dc-4a74-297c-45fd-4e261fe9ef90@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206151355000.2430546@ubuntu-linux-20-04-desktop>
References: <20220609061812.422130-1-jens.wiklander@linaro.org> <20220609061812.422130-2-jens.wiklander@linaro.org> <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop> <9c39d8dc-4a74-297c-45fd-4e261fe9ef90@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 15 Jun 2022, Julien Grall wrote:
> On 11/06/2022 01:41, Stefano Stabellini wrote:
> > >   #endif /* __ASSEMBLY__ */
> > >     /*
> > > diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> > > index 676740ef1520..6f90c08a6304 100644
> > > --- a/xen/arch/arm/vsmc.c
> > > +++ b/xen/arch/arm/vsmc.c
> > > @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
> > >       switch ( fid )
> > >       {
> > >       case ARM_SMCCC_VERSION_FID:
> > > -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
> > > +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
> > >           return true;
> >    This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
> > is unimplemented on ARM32. If there is an ARM32 implementation in Linux
> > for ARM_SMCCC_VERSION_1_2 you might as well import it too.
> > 
> > Otherwise we'll have to abstract it away, e.g.:
> > 
> > #ifdef CONFIG_ARM_64
> > #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
> > #else
> > #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
> > #endif
> 
> I don't understand why you want to tie the virtual and host SMCCC version.
> 
> In theory, it would be possible for us to implement a subsystem to fully
> emulate, let's say, FFA. We would have to tell the VM that we are v1.2
> compliant, but we would not need the helper as no calls would be forwarded.
> 
> When a 32-bit guest is running on Xen Arm64, we are going to say that SMCCC
> v1.2 will be available. This is not much different from running a 32-bit guest
> on 32-bit hardware.

In a few places (especially platform-specific code, such as Xilinx EEMI)
the guest SMC call traps into Xen, then Xen repeats the same SMC call to
the firmware.

I realize this is not a good reason to keep virtual SMCCC 1.1, because
the firmware could support SMCCC 1.0 or older, so some argument
conversions are to be expected anyway. In reality I have been working
with virtual SMCCC 1.1 and SMCCC 1.1 firmware for a long time, so the
problem didn't exist and I didn't really think it through :-)


> So I think we should expose 1.2 unless we think there is a
> problem in the Xen 32-bit specific code.

Yeah, I am OK with that.


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 00:59:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 00:59:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350370.576696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1dqn-0001Zo-SG; Thu, 16 Jun 2022 00:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350370.576696; Thu, 16 Jun 2022 00:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1dqn-0001Zh-PB; Thu, 16 Jun 2022 00:59:09 +0000
Received: by outflank-mailman (input) for mailman id 350370;
 Thu, 16 Jun 2022 00:59:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hdZ4=WX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1dql-0001ZY-Nw
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 00:59:07 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 844bc1bb-ed0f-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 02:59:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AFE3961B79;
 Thu, 16 Jun 2022 00:59:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D346BC3411A;
 Thu, 16 Jun 2022 00:59:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 844bc1bb-ed0f-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655341143;
	bh=aMI97DOFwNFKz/ZsDjbOQsoT3kpjQLZ0DQNTZ4x77Kk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BY2uOJsgNknE3iP+NRmDsaf7OX1EDGjjTV6r2VEtyocMnImiEu3NNNvyYMMJ+Z+x9
	 BsuEbVayvq0AWLUbXtB2TZbgt1wd0e9nzRLirG+aYsIK7btY/ZVmgjYYBsxxP3byZj
	 LXKF3l8sXMtog3Ps4UZzdRTFIATiaboVzt6kyyNst+jlvDd3GAHjx0MX7TmuSHoL5C
	 2Fvie22RWiWHeqR6WLxjOxoHD2IzCq6HKJw/WOQMtpXtFxmWuzY758YWkNbPx96xkR
	 Ai0Sv0uXFjyVpbs5a42FE/y0rjvllrZPcU7oMEq2ApNpXQBI6QwrVY8t+1/Drr+cdG
	 rz1vb4aPvgeUw==
Date: Wed, 15 Jun 2022 17:59:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [ImageBuilder] [PATCH v2] uboot-script-gen: Add
 DOMU_STATIC_MEM
In-Reply-To: <20220615185100.283754-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206151750570.2430546@ubuntu-linux-20-04-desktop>
References: <20220615185100.283754-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 15 Jun 2022, Xenia Ragiadakou wrote:
> Add a new config parameter to configure a dom0less VM with static allocation.
> DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
> The parameter specifies the host physical address regions to be statically
> allocated to the VM. Each region is defined by its start address and size.
> 
> For instance,
> DOMU_STATIC_MEM[0]="0x30000000 0x10000000 0x50000000 0x20000000"
> indicates that the host memory regions [0x30000000, 0x40000000) and
> [0x50000000, 0x70000000) are statically allocated to the first dom0less VM.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Hi Xenia, thanks for the patch!

It looks fine as is, with only two minor code style issues (tabs instead
of spaces for indentation).

I think this would work. However, when static-mem is specified, the
total memory for the guest also needs to match. So, for instance:

  #xen,static-mem-address-cells = <0x1>;
  #xen,static-mem-size-cells = <0x1>;
  xen,static-mem = <0x30000000 0x20000000>;

In this case memory has to be:

  memory = <0x0 0x80000>;

memory is in kilobytes, so 0x20000000/1024=0x80000.

In ImageBuilder "memory" is normally set by the DOMU_MEM variable,
although that is in megabytes.

I think it would make sense to automatically calculate "memory" (instead
of relying on DOMU_MEM) based on the sizes passed via DOMU_STATIC_MEM
when DOMU_STATIC_MEM is specified: summing all the sizes together and
dividing by 1024.

That could be done either with something like

    if test "${DOMU_STATIC_MEM[$i]}"
    then
        local memory=[calculate memory]
        dt_set "/chosen/domU$i" "memory" "int" "0 $memory"
        add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
    fi

Or it could be done by changing DOMU_MEM to be in kilobytes and simply
setting DOMU_MEM based on the DOMU_STATIC_MEM values when
DOMU_STATIC_MEM is specified.
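For what it's worth, the "[calculate memory]" step above could be
sketched roughly like this (a hypothetical helper, untested against
ImageBuilder; the function name is made up):

```shell
#!/bin/bash

# Hypothetical helper: given a DOMU_STATIC_MEM-style string
# "baseaddr1 size1 ... baseaddrN sizeN", sum the size fields and
# convert the total from bytes to kilobytes.
calculate_static_mem_kb()
{
    local regions=($1)
    local total=0
    local i

    # The sizes sit at the odd indexes: baseaddr size baseaddr size ...
    for (( i = 1; i < ${#regions[@]}; i += 2 ))
    do
        total=$(( total + regions[i] ))
    done

    echo $(( total / 1024 ))
}

# Example from the commit message: 0x10000000 + 0x20000000 bytes
# = 0x30000000 bytes, i.e. 0x30000000 / 1024 = 786432 KB.
calculate_static_mem_kb "0x30000000 0x10000000 0x50000000 0x20000000"
```

Bash arithmetic accepts the 0x-prefixed values directly, so no extra
hex conversion is needed.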

Would you be OK with adding that to this patch? If not, that's OK; this
patch is also good to have as is.



> ---
> 
> Notes:
>     v2: in add_device_tree_static_mem(), replace i with val because variable i
>         is already in use as an index
> 
>  README.md                |  4 ++++
>  scripts/uboot-script-gen | 20 ++++++++++++++++++++
>  2 files changed, 24 insertions(+)
> 
> diff --git a/README.md b/README.md
> index 8ce13f0..876e46d 100644
> --- a/README.md
> +++ b/README.md
> @@ -154,6 +154,10 @@ Where:
>    automatically at boot as dom0-less guest. It can still be created
>    later from Dom0.
>  
> +- DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
> +  if specified, indicates the host physical address regions
> +  [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
> +
>  - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>    used.  To enable this set any LINUX\_\* variables and do NOT set the
>    XEN variable.
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 0adf523..3a5f720 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -108,6 +108,22 @@ function add_device_tree_passthrough()
>      dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
>  }
>  
> +function add_device_tree_static_mem()
> +{
> +    local path=$1
> +    local regions=$2
> +
> +    dt_set "$path" "#xen,static-mem-address-cells" "hex" "0x2"
> +    dt_set "$path" "#xen,static-mem-size-cells" "hex" "0x2"
> +
> +    for val in ${regions[@]}
> +    do
> +	cells+=("$(printf "0x%x 0x%x" $(($val >> 32)) $(($val & ((1 << 32) - 1))))")
> +    done
> +
> +    dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
> +}
> +
>  function xen_device_tree_editing()
>  {
>      dt_set "/chosen" "#address-cells" "hex" "0x2"
> @@ -143,6 +159,10 @@ function xen_device_tree_editing()
>          dt_set "/chosen/domU$i" "#size-cells" "hex" "0x2"
>          dt_set "/chosen/domU$i" "memory" "int" "0 ${DOMU_MEM[$i]}"
>          dt_set "/chosen/domU$i" "cpus" "int" "${DOMU_VCPUS[$i]}"
> +	if test "${DOMU_STATIC_MEM[$i]}"
> +        then
> +	    add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
> +        fi
>          dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
>          add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
>          if test "${domU_ramdisk_addr[$i]}"
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 02:34:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 02:34:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350379.576707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1fKc-0003T8-SI; Thu, 16 Jun 2022 02:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350379.576707; Thu, 16 Jun 2022 02:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1fKc-0003T1-Ow; Thu, 16 Jun 2022 02:34:02 +0000
Received: by outflank-mailman (input) for mailman id 350379;
 Thu, 16 Jun 2022 02:34:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1fKb-0003Sr-HD; Thu, 16 Jun 2022 02:34:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1fKb-0007Fw-9t; Thu, 16 Jun 2022 02:34:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1fKa-0003Et-Rz; Thu, 16 Jun 2022 02:34:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1fKa-0008Hd-RZ; Thu, 16 Jun 2022 02:34:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pJHRx1Ib9npVe+MMdv+bADf53YTiycUY9/gkRP/v48s=; b=yFMf5c0hdKUfJkVmkFqmRGcloI
	XckwyB985okoyeNjuWcei0Gshzn8YPquFRldHFTCV5apm0yrBbZeEe1ieLWUJ5itK9dlXP6jB5e1N
	WRziiQaC7Vtj2zObF2DcmHYEFp+OtDQKS1G+u5CAG8KHkDJ8Um9rS76toU3hm/mWhyDU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171181-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171181: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c1d9760b1d847d983529eae2b360b38648841b5
X-Osstest-Versions-That:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 02:34:00 +0000

flight 171181 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171181/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 171174
 build-i386                    6 xen-build                fail REGR. vs. 171174

Tests which did not succeed, but are not blocking:
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171174
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171174
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171174
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8c1d9760b1d847d983529eae2b360b38648841b5
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171174  2022-06-15 01:53:26 Z    1 days
Testing same since   171181  2022-06-15 14:38:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jane Malalane <jane.malalane@citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8c1d9760b1d847d983529eae2b360b38648841b5
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Jun 15 10:24:06 2022 +0200

    build: remove auto.conf prerequisite from compat/xlat.h target
    
    Now that the command line generating "xlat.h" is checked on rebuild,
    the header will be regenerated whenever the list of xlat headers
    changes due to a change in ".config". We don't need to force a
    regeneration for every change in ".config".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 95b0d7bbddfbd797f37f7a09f0586c4bbd22291b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 15 10:23:16 2022 +0200

    build: fix exporting for make 3.82
    
    GNU make 3.82 apparently has a quirk where exporting an undefined
    variable prevents its value from subsequently being updated. This
    situation can arise due to our adding of -rR to MAKEFLAGS, which takes
    effect also on make simply re-invoking itself. Once these flags are in
    effect, CC (in particular) is empty (undefined), and would be defined
    only via Config.mk including StdGNU.mk or the like. With the quirk, CC
    remains empty, yet with an empty CC the compiler minimum version check
    fails, breaking the build.
    
    Move the exporting of the various tool stack component variables past
    where they gain their (final) values.
    
    See also be63d9d47f57 ("build: tweak variable exporting for make 3.82").
    
    Fixes: 15a0578ca4b0 ("build: shuffle main Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit e8e6e42279a5723239c5c40ba4c7f579a979465d
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Jun 15 10:22:38 2022 +0200

    tools/xenstore: simplify loop handling connection I/O
    
    The loop handling input and output of xenstored's connections
    open-codes list_for_each_entry_safe() in an unnecessarily
    complicated way.
    
    Use list_for_each_entry_safe() instead, making it much clearer how
    the code works.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit e2d2b9fd7a2b349a1a9a75b482981cfd2d2407a8
Author: Jane Malalane <jane.malalane@citrix.com>
Date:   Wed Jun 15 10:21:08 2022 +0200

    x86/hvm: widen condition for is_hvm_pv_evtchn_domain() and report fix in CPUID
    
    Have is_hvm_pv_evtchn_domain() return true for vector callbacks for
    evtchn delivery set up on a per-vCPU basis via
    HVMOP_set_evtchn_upcall_vector.
    
    Assume that if vCPU0 uses HVMOP_set_evtchn_upcall_vector, all
    remaining vCPUs will too; thus remove is_hvm_pv_evtchn_vcpu() and
    replace its sole caller with is_hvm_pv_evtchn_domain().
    
    is_hvm_pv_evtchn_domain() returning true is a condition for setting up
    physical IRQ to event channel mappings. Therefore, also add a CPUID
    bit so that guests know whether the check in is_hvm_pv_evtchn_domain()
    will fail when using HVMOP_set_evtchn_upcall_vector. This matters for
    guests that route PIRQs over event channels since
    is_hvm_pv_evtchn_domain() is a condition in physdev_map_pirq().
    
    The naming of the CPUID bit is quite generic about upcall support
    being available. That's done so that the define name doesn't become
    overly long.
    
    A guest that doesn't care about physical interrupts routed over event
    channels can just test for the availability of the hypercall directly
    (HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.
    
    Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 80ad8db8a4d9bb24952f0aea788ce6f47566fa76
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 15 10:19:32 2022 +0200

    IOMMU/x86: work around bogus gcc12 warning in hvm_gsi_eoi()
    
    As per [1] the expansion of the pirq_dpci() macro causes a
    -Waddress-controlled warning (enabled implicitly in our builds, if
    not by default)
    tying the middle part of the involved conditional expression to the
    surrounding boolean context. Work around this by introducing a local
    inline function in the affected source file.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    
    [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102967

commit 162dea4e768b835114c736cfd3fa1fc3742d39c5
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Jun 10 14:27:55 2022 -0700

    add more MISRA C rules to docs/misra/rules.rst
    
    Add the new MISRA C rules agreed by the MISRA C working group to
    docs/misra/rules.rst.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 05:02:03 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171184-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 171184: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3c2a14ea81c77ae7973c1e436a32436a7e6d017b
X-Osstest-Versions-That:
    xen=8c1d9760b1d847d983529eae2b360b38648841b5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 05:01:44 +0000

flight 171184 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171184/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3c2a14ea81c77ae7973c1e436a32436a7e6d017b
baseline version:
 xen                  8c1d9760b1d847d983529eae2b360b38648841b5

Last test of basis   171178  2022-06-15 09:01:45 Z    0 days
Testing same since   171184  2022-06-16 01:01:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c1d9760b1..3c2a14ea81  3c2a14ea81c77ae7973c1e436a32436a7e6d017b -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 05:37:33 2022
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: viresh.kumar@linaro.org,
	hch@infradead.org,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH v2] xen: don't require virtio with grants for non-PV guests
Date: Thu, 16 Jun 2022 07:37:15 +0200
Message-Id: <20220616053715.3166-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
Xen grant mappings") introduced a new requirement for using virtio
devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
feature.

This is an undue requirement for non-PV guests, as those can be operated
with existing backends without any problem, as long as those backends
are running in dom0.

By default, allow virtio devices without grant support for non-PV
guests.

Add a new config item to always force use of grants for virtio.
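
The resulting gating logic can be modelled as a tiny standalone sketch
(plain booleans stand in for the kernel's IS_ENABLED() test and
xen_pv_domain(); this is an illustration of the new condition, not the
kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stand-ins for IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) and
 * xen_pv_domain(): with this change, restricted virtio memory access
 * is enforced only for PV guests, or when the new option forces it.
 */
static bool must_restrict_virtio(bool force_grant_cfg, bool pv_domain)
{
	return force_grant_cfg || pv_domain;
}
```

With CONFIG_XEN_VIRTIO_FORCE_GRANT unset, only PV guests keep the
VIRTIO_F_ACCESS_PLATFORM requirement; HVM/PVH guests can keep using
unmodified backends running in dom0.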

Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- remove command line parameter (Christoph Hellwig)
---
 drivers/xen/Kconfig | 9 +++++++++
 include/xen/xen.h   | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index bfd5f4f706bc..a65bd92121a5 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -355,4 +355,13 @@ config XEN_VIRTIO
 
 	  If in doubt, say n.
 
+config XEN_VIRTIO_FORCE_GRANT
+	bool "Require Xen virtio support to use grants"
+	depends on XEN_VIRTIO
+	help
+	  Require virtio for Xen guests to use grant mappings.
+	  This will avoid the need to give the backend the right to map all
+	  of the guest memory. This will need support on the backend side
+	  (e.g. qemu or kernel, depending on the virtio device types used).
+
 endmenu
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81e140d..4d4188f20337 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
 
 static inline void xen_set_restricted_virtio_memory_access(void)
 {
-	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
 		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 06:03:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 06:03:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350416.576761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1iaw-0004a2-Oy; Thu, 16 Jun 2022 06:03:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350416.576761; Thu, 16 Jun 2022 06:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1iaw-0004Zv-Lv; Thu, 16 Jun 2022 06:03:06 +0000
Received: by outflank-mailman (input) for mailman id 350416;
 Thu, 16 Jun 2022 06:03:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZtRg=WX=bombadil.srs.infradead.org=BATV+c9d26d7972d0db0217a7+6871+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1o1iau-0004Zp-J7
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 06:03:05 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa8191a8-ed39-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 08:03:02 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1o1iaq-000fho-44; Thu, 16 Jun 2022 06:03:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa8191a8-ed39-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Transfer-Encoding
	:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=iOBMPze3s2Lts4x7hCuI3brE8gqb/oK3vC99y6kJxTs=; b=FAJ6hHlfYq4a8A6nEmnUqy7YFQ
	H6/hQfIR6XVi3ZDAKoGfGTCP9dXBPMyzxG4Lsa0dRw/WjSE0HqqXy2alHlxN+uDU0JHEIsw+2JXvC
	b6kwpx1G1peN6Il1mC6O3gBJp5q8UWXHLbsyWvbj51yVdVpKRLOP82DAY66mybij8zNOC+uzjTlTy
	q7VP7rb3smIlUDJg8oqBQL3nhYJe12nt0DoK1OlYM+9g3TTif1VAm2XvNGh41lCUbfmhpYUZbDuSC
	51k4GsSN3PmkQnngV8M9IYsuldfh1KvfxBW2ks3xs7mDwA5shOfORFQA4WXu99U0joY+rxSv1Vawd
	M6XM+gjQ==;
Date: Wed, 15 Jun 2022 23:03:00 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	viresh.kumar@linaro.org, hch@infradead.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
Message-ID: <YqrHlNOiRxxc8xcq@infradead.org>
References: <20220616053715.3166-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220616053715.3166-1-jgross@suse.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Thu, Jun 16, 2022 at 07:37:15AM +0200, Juergen Gross wrote:
> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
> 
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
> 
> Per default allow virtio devices without grant support for non-PV
> guests.
> 
> Add a new config item to always force use of grants for virtio.

What I'd really expect here is to only set the limitations for the
actual grant-based device.  Unfortunately
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS is global instead of per-device,
but this is what comes closest to that intention without major
refactoring:

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 1f9c3ba328333..07eb69f9e7df3 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
 	if (!xen_domain())
 		return 0;
 
-	xen_set_restricted_virtio_memory_access();
-
 	if (!acpi_disabled)
 		xen_acpi_guest_init();
 	else
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 8b71b1dd76396..517a9d8d8f94d 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -195,8 +195,6 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_pv_domain())
 		return;
 
-	xen_set_restricted_virtio_memory_access();
-
 	init_hvm_pv_info();
 
 	reserve_shared_info();
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c6..f33a4421e7cd6 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -109,8 +109,6 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
 static void __init xen_pv_init_platform(void)
 {
-	xen_set_restricted_virtio_memory_access();
-
 	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
 
 	set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index fc01424840017..f9bbacb5b5456 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -8,6 +8,7 @@
  */
 
 #include <linux/module.h>
+#include <linux/platform-feature.h>
 #include <linux/dma-map-ops.h>
 #include <linux/of.h>
 #include <linux/pfn.h>
@@ -333,6 +334,8 @@ void xen_grant_setup_dma_ops(struct device *dev)
 		goto err;
 	}
 
+	/* XXX: this really should be per-device instead of global */
+	platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
 	dev->dma_ops = &xen_grant_dma_ops;
 
 	return;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81e140de..a99bab8175234 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
-#include <linux/platform-feature.h>
-
-static inline void xen_set_restricted_virtio_memory_access(void)
-{
-	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
-		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
-}
-
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 06:15:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 06:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350427.576771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1iml-0006Jb-T6; Thu, 16 Jun 2022 06:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350427.576771; Thu, 16 Jun 2022 06:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1iml-0006JU-QP; Thu, 16 Jun 2022 06:15:19 +0000
Received: by outflank-mailman (input) for mailman id 350427;
 Thu, 16 Jun 2022 06:15:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7YVi=WX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1imk-0006JO-OK
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 06:15:18 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0fa0b5d-ed3b-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 08:15:17 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C3C6121C58;
 Thu, 16 Jun 2022 06:15:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 856451344E;
 Thu, 16 Jun 2022 06:15:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id pP0KH3TKqmLcUQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 16 Jun 2022 06:15:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0fa0b5d-ed3b-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655360116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YFF3k13LUKCiKrPq8kWk/hdM4VqGuJLIJy2t17i2Wfw=;
	b=AuBDIqVKHouwIEK7lqLR/FqjbC6FsLU78nEeq5YxXQOA8bIlpBeo/vxmC0zoH53AnazNEA
	/vTqSGjIy3mhHQNDb1TIVUxThona4DuuAqralDNxPb8C44JcdHyXZ4BLV7qAC2OrIMJWeY
	ZTuGrYvEJuI/08t0yGGIK2AxcH0OhfE=
Message-ID: <50b72415-2e7d-b25e-0022-539d9fe91d41@suse.com>
Date: Thu, 16 Jun 2022 08:15:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <YqrHlNOiRxxc8xcq@infradead.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <YqrHlNOiRxxc8xcq@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------cIwu0mbjmosd8QbJsqimG8rf"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------cIwu0mbjmosd8QbJsqimG8rf
Content-Type: multipart/mixed; boundary="------------J2Rq30FEtUnvERcW8kC0PDn0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <50b72415-2e7d-b25e-0022-539d9fe91d41@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
References: <20220616053715.3166-1-jgross@suse.com>
 <YqrHlNOiRxxc8xcq@infradead.org>
In-Reply-To: <YqrHlNOiRxxc8xcq@infradead.org>

--------------J2Rq30FEtUnvERcW8kC0PDn0
Content-Type: multipart/mixed; boundary="------------NdIsUxjRwVmXlsOvTsrEHUWr"

--------------NdIsUxjRwVmXlsOvTsrEHUWr
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTYuMDYuMjIgMDg6MDMsIENocmlzdG9waCBIZWxsd2lnIHdyb3RlOg0KPiBPbiBUaHUs
IEp1biAxNiwgMjAyMiBhdCAwNzozNzoxNUFNICswMjAwLCBKdWVyZ2VuIEdyb3NzIHdyb3Rl
Og0KPj4gQ29tbWl0IGZhMWY1NzQyMWUwYiAoInhlbi92aXJ0aW86IEVuYWJsZSByZXN0cmlj
dGVkIG1lbW9yeSBhY2Nlc3MgdXNpbmcNCj4+IFhlbiBncmFudCBtYXBwaW5ncyIpIGludHJv
ZHVjZWQgYSBuZXcgcmVxdWlyZW1lbnQgZm9yIHVzaW5nIHZpcnRpbw0KPj4gZGV2aWNlczog
dGhlIGJhY2tlbmQgbm93IG5lZWRzIHRvIHN1cHBvcnQgdGhlIFZJUlRJT19GX0FDQ0VTU19Q
TEFURk9STQ0KPj4gZmVhdHVyZS4NCj4+DQo+PiBUaGlzIGlzIGFuIHVuZHVlIHJlcXVpcmVt
ZW50IGZvciBub24tUFYgZ3Vlc3RzLCBhcyB0aG9zZSBjYW4gYmUgb3BlcmF0ZWQNCj4+IHdp
dGggZXhpc3RpbmcgYmFja2VuZHMgd2l0aG91dCBhbnkgcHJvYmxlbSwgYXMgbG9uZyBhcyB0
aG9zZSBiYWNrZW5kcw0KPj4gYXJlIHJ1bm5pbmcgaW4gZG9tMC4NCj4+DQo+PiBQZXIgZGVm
YXVsdCBhbGxvdyB2aXJ0aW8gZGV2aWNlcyB3aXRob3V0IGdyYW50IHN1cHBvcnQgZm9yIG5v
bi1QVg0KPj4gZ3Vlc3RzLg0KPj4NCj4+IEFkZCBhIG5ldyBjb25maWcgaXRlbSB0byBhbHdh
eXMgZm9yY2UgdXNlIG9mIGdyYW50cyBmb3IgdmlydGlvLg0KPiANCj4gV2hhdCDDjSdkIHJl
YWxseSBleHBlY3QgaGVyZSBpcyB0byBvbmx5IHNldCB0aGUgbGltaXRhdGlvbnMgZm9yIHRo
ZQ0KPiBhY3R1YWwgZ3JhbnQtYmFzZWQgZGV2aWMuICBVbmZvcnR1bmF0ZWx5DQo+IFBMQVRG
T1JNX1ZJUlRJT19SRVNUUklDVEVEX01FTV9BQ0NFU1MgaXMgZ2xvYmFsIGluc3RlYWQgb2Yg
cGVyLWRldmljZSwNCg0KSSB0aGluayB0aGUgZ2xvYmFsIHNldHRpbmcgaXMgZmluZSwgYXMg
aXQgc2VydmVzIGEgc3BlY2lmaWMgcHVycG9zZToNCmRvbid0IGFsbG93IEFOWSB2aXJ0aW8g
ZGV2aWNlcyB3aXRob3V0IHRoZSBzcGVjaWFsIGhhbmRsaW5nIChsaWtlIHRoZQ0KLzM5MCBQ
ViBjYXNlLCBTRVYsIFREWCwgb3IgWGVuIFBWLWd1ZXN0cykuIFRob3NlIGNhc2VzIGNhbid0
IHNlbnNpYmx5DQp3b3JrIHdpdGhvdXQgdGhlIHNwZWNpYWwgRE1BIG9wcy4NCg0KSW4gY2Fz
ZSB0aGUgc3BlY2lhbCBETUEgb3BzIGFyZSBqdXN0IGEgIm5pY2UgdG8gaGF2ZSIgbGlrZSBm
b3IgWGVuIEhWTQ0KZ3Vlc3RzLCBQTEFURk9STV9WSVJUSU9fUkVTVFJJQ1RFRF9NRU1fQUND
RVNTIHdvbid0IGJlIHNldC4NCg0KQW5kIGlmIHNvbWVvbmUgd2FudHMgYSBndWVzdCBvbmx5
IHRvIHVzZSBncmFudCBiYXNlZCB2aXJ0aW8gZGV2aWNlcywNCnRoZSBndWVzdCBrZXJuZWwg
Y2FuIGJlIGJ1aWx0IHdpdGggQ09ORklHX1hFTl9WSVJUSU9fRk9SQ0VfR1JBTlQgKGUuZy4N
CmluIGNhc2UgdGhlIGJhY2tlbmRzIGFyZSBydW5uaW5nIGluIHNvbWUgbGVzcyBwcml2aWxl
Z2VkIGVudmlyb25tZW50DQphbmQgdGh1cyBjYW4ndCBtYXAgYXJiaXRyYXJ5IGd1ZXN0IG1l
bW9yeSBwYWdlcykuDQoNCg0KSnVlcmdlbg0K
--------------NdIsUxjRwVmXlsOvTsrEHUWr
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------NdIsUxjRwVmXlsOvTsrEHUWr--

--------------J2Rq30FEtUnvERcW8kC0PDn0--

--------------cIwu0mbjmosd8QbJsqimG8rf
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKqynQFAwAAAAAACgkQsN6d1ii/Ey+m
/Af+Me+q12n9oI6lTeU8OHRs26Qw8RWbpRpvU1UN411+ZPsDSQDIskyhXMFatSSg2OBQ4aL638I6
kzH/SkxM5n6z2ckLvxym3Hgdd8BJ/VI4ih3zBcRJNaQeKf9tIFcBWUzAMda8OFKiKOKPBoQWGe6L
HiC5ckXiK/i4p/U1exxWzNwQmnvE83wAO8+YjF94R1vxlBPlSou6ceRlAhSdNsK9EBDDPk4YSktX
DCz3X4sqrzAh53qCpWgxWgs+jg9MeP3bdACBOPnzdg08CMApUZ1+pOIFS2B1zqQSZmFKiM4+YgjE
AzJ8pjry/nnoC7+MNmWGNcFeDI7TDtFmbkfFoO766w==
=NX7Z
-----END PGP SIGNATURE-----

--------------cIwu0mbjmosd8QbJsqimG8rf--


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 07:32:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 07:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350437.576789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1jyn-0007E3-Kn; Thu, 16 Jun 2022 07:31:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350437.576789; Thu, 16 Jun 2022 07:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1jyn-0007Dw-HP; Thu, 16 Jun 2022 07:31:49 +0000
Received: by outflank-mailman (input) for mailman id 350437;
 Thu, 16 Jun 2022 07:31:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d4NK=WX=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o1jyl-0007Dq-Pr
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 07:31:47 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f21ba70-ed46-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 09:31:44 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id g25so1112423ejh.9
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 00:31:44 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 w7-20020a056402070700b0042db87b5ff4sm1148344edx.88.2022.06.16.00.31.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 16 Jun 2022 00:31:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f21ba70-ed46-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=WR22BROo7BpVfHELY0/whUHHBl6+EosORE6LfOKACTA=;
        b=OUIhC3rE3NvkmStO46Q/ciAcnSAFfmmzP6mMCl8KhOx5/114KZw66r2dI7lB2gyO+U
         YXT28KvwmSKXDzmCe05V3bq/L/Y3f1XBHYAeP0y9oPsvzE+Mnc4nYi5nB4TbP4MKMtc8
         f5xaaw1QHZbVpoaEsehgITiq6iJ32F2JFZpoVZv0Il+WXqz/Z1am1BBJCmjmfYUBoTDS
         0SE7jkh/BYXhmawMxmwZRlMvKX2CV2izN1hK6D41/1IWf4fexcBZIGsNiexb9hsvtGTn
         KrLgdQrobJGLY+RnrU8RhvBYod40AWOcexwODDHmhFMrC9QzPjK69+a8RzJnqacf6K1U
         R4KA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=WR22BROo7BpVfHELY0/whUHHBl6+EosORE6LfOKACTA=;
        b=aDJVqUvK++b4FeSiNhFg4ezGJ4nmEjuLZqfGNKiPVs9hF8iXXuhAxmSwD0BzxB9qPq
         SRuiebN2bzDd+03Ae6obHtnItkntc71CSV266ju7lHl2c2FYAFKwAPjr7AtbALHl9zd7
         jPHMkobzJpJ2WJtbHa8FXCmxW9urGdLbbkbPp+XfpnNuvNfcsgiwYHq5rtl/tdIT7xFm
         kcRa4dEwjau5/81sgH2y7V9UA+FizvQS/iTgJjh6P8c75HAwECzx0jGj695Gur6gpDth
         Ohx8Le3ejZTOdZfzOdqFPQPF8SUBfrXya7+E1DWWyr8TPORdfRhZubPHEzT+TZkz/gzk
         Pyvg==
X-Gm-Message-State: AJIora98owbTWNMT8mWbPEerR3yguKDyYOXvmAHdbnQ4roIgRcJefHNG
	6xSAq0SJyVKZ8tWCJULa+rk=
X-Google-Smtp-Source: AGRyM1s0/RHY2CxHyvPWA7qPtKPg63zpTc3tJxhXGQ31GmKyrOyZ12y3foJKnNWNWxsUEbgH4HVK6g==
X-Received: by 2002:a17:907:1694:b0:716:14a4:fba with SMTP id hc20-20020a170907169400b0071614a40fbamr3319578ejc.290.1655364703764;
        Thu, 16 Jun 2022 00:31:43 -0700 (PDT)
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
Date: Thu, 16 Jun 2022 10:31:42 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20220616053715.3166-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 16.06.22 08:37, Juergen Gross wrote:


Hello Juergen

> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
>
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
>
> Per default allow virtio devices without grant support for non-PV
> guests.
>
> Add a new config item to always force use of grants for virtio.
>
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - remove command line parameter (Christoph Hellwig)
> ---
>   drivers/xen/Kconfig | 9 +++++++++
>   include/xen/xen.h   | 2 +-
>   2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index bfd5f4f706bc..a65bd92121a5 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>   
>   	  If in doubt, say n.
>   
> +config XEN_VIRTIO_FORCE_GRANT
> +	bool "Require Xen virtio support to use grants"
> +	depends on XEN_VIRTIO
> +	help
> +	  Require virtio for Xen guests to use grant mappings.
> +	  This will avoid the need to give the backend the right to map all
> +	  of the guest memory. This will need support on the backend side
> +	  (e.g. qemu or kernel, depending on the virtio device types used).
> +
>   endmenu
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 0780a81e140d..4d4188f20337 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>   
>   static inline void xen_set_restricted_virtio_memory_access(void)
>   {
> -	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>   		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);


Looks like the flag will *always* be set for paravirtualized guests
even if CONFIG_XEN_VIRTIO is disabled.

Maybe we should clarify the check?

if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) ||
    (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain()))
	platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);


>   }
>   
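
In C, && binds tighter than ||, so even without extra parentheses the
combined condition groups as FORCE_GRANT || (XEN_VIRTIO && pv); explicit
parentheses merely make that obvious. A quick sketch of the clarified
check (plain booleans as stand-ins for the IS_ENABLED() tests and
xen_pv_domain(), purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT),
 * IS_ENABLED(CONFIG_XEN_VIRTIO) and xen_pv_domain(). */
static bool restrict_virtio_mem_access(bool force_grant, bool xen_virtio,
				       bool pv_domain)
{
	return force_grant || (xen_virtio && pv_domain);
}
```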

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 08:46:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 08:46:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350446.576800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1l8V-0007cr-1e; Thu, 16 Jun 2022 08:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350446.576800; Thu, 16 Jun 2022 08:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1l8U-0007ck-UZ; Thu, 16 Jun 2022 08:45:54 +0000
Received: by outflank-mailman (input) for mailman id 350446;
 Thu, 16 Jun 2022 07:48:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QxcQ=WX=gmail.com=mykyta.poturai@srs-se1.protection.inumbo.net>)
 id 1o1kEa-0000b1-VX
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 07:48:09 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a957e71b-ed48-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 09:48:07 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id n10so1203034ejk.5
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 00:48:07 -0700 (PDT)
Received: from localhost.localdomain (93.75.52.3.lut.volia.net. [93.75.52.3])
 by smtp.gmail.com with ESMTPSA id
 a23-20020a1709064a5700b006feaa22e367sm421410ejv.165.2022.06.16.00.48.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jun 2022 00:48:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a957e71b-ed48-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=nkGlUKJelnsdtw45YiFZDAO0r8YIoDyxfOZDpul7M7g=;
        b=e1ZZnaggq41XeIwEOxoDbdPQgKwFGnhlu2DK+waNLEhc7yM5Zsw9cinvw9aCG/20V1
         0QAivyp2cX3th9lv1hS8dpGGKyfNszo4Y2FlBRJ+rjp4A7IqhwyaSKsA9NXPqOldEa8F
         jsqGmUYAuv1TMENFn3ebIqTIEA7VwzladXDno0wguFBHWF1IOnSt43DdLAERBa3lShkM
         FuscgLaIGFVXqWi3SSxMVbqp16fYVnHHCRKVV04YFV4t3YQ8++MujpIu4R6TN8XkKPhm
         P24OlzacCKvNK7lPWyqBXPTE4v4L0nfxS6iRL+mSsMjgNlwLG1Af371K+HQb5EaqxHNg
         YXeQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=nkGlUKJelnsdtw45YiFZDAO0r8YIoDyxfOZDpul7M7g=;
        b=QJc8FyPEFn03jBT7ofD4TKic8lSioF7dOUVgUsgafInngQzV1ys3fnkvrYR9yrgSxp
         SwMMa/oLeM4+Diqiw7wXxEm0zouCrkZMAGUqwUndgwYeiWlB296nRR5HcQp096iLOAlz
         sT3SljrggIRbfeXrw/P7fqsXSS3ZzEE0MkcYDlJAKDKEYg1VcaaymmYbX6K3aryqoXWg
         b0sNunNKe147btBZP59yLLLEBisKesxTg8Qkn9M46iXs3AWP2vvz4jVSQGRTGYlgEegY
         MQt9bNXF0f8THdJOdr5PxI2wUNafFUp8x9YD5eFSzYipVp5bHC6Uc56ruRtsZhVSB2yy
         BQWQ==
X-Gm-Message-State: AJIora+7M3Zh0g4uRY2GUTw6lc4IwaMnGqbZnNjbRUfg2NgEfkm9dzBN
	UZkodd+ck6puKDNsHajX4GAX5GTqF1j7ldI2Gos=
X-Google-Smtp-Source: AGRyM1sWyhqu7m+IdJRRM8dGkJ0UFNYcz1fz8fJy4ufFqCIAaVUMG3dqz0y1Ci3SiDIVoHFzrVv2ag==
X-Received: by 2002:a17:907:6d8a:b0:711:cb5e:650e with SMTP id sb10-20020a1709076d8a00b00711cb5e650emr3329015ejc.280.1655365687276;
        Thu, 16 Jun 2022 00:48:07 -0700 (PDT)
From: Mykyta Poturai <mykyta.poturai@gmail.com>
X-Google-Original-From: Mykyta Poturai <mykyta_poturai@epam.com>
To: julien@xen.org
Cc: Bertrand.Marquis@arm.com,
	Rahul.Singh@arm.com,
	Volodymyr_Babchuk@epam.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device
Date: Thu, 16 Jun 2022 10:48:05 +0300
Message-Id: <20220616074805.538720-1-mykyta_poturai@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <b6af8c10-9331-eec8-9a77-cd192829a6f2@xen.org>
References: <b6af8c10-9331-eec8-9a77-cd192829a6f2@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi Julien, Rahul,

I've encountered a similar problem with an IMX8 GPU recently. It wasn't probing
properly after the domain reboot. After some digging, I came to the same
solution as Rahul and found this thread. I also encountered the occasional
"Unexpected global fault, this could be serious" error message when destroying
a domain with an actively working GPU.

>Hmmmm.... Looking at the code, arm_smmu_alloc_smes() doesn't seem to use
>the domain information. So why would it need to be done every time it is assigned?
Indeed, after removing the arm_smmu_master_free_smes() call, both the reboot and
global fault issues are gone. If I understand correctly, device removal is not yet
supported, so I can't find a proper place for the arm_smmu_master_free_smes() call.
Should we remove the function completely, leave it commented out for later, or do
something else?

Rahul, are you still working on this or could I send my patch?

Regards,
Mykyta


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 08:49:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 08:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350464.576811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lCG-0008RV-K7; Thu, 16 Jun 2022 08:49:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350464.576811; Thu, 16 Jun 2022 08:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lCG-0008RO-FM; Thu, 16 Jun 2022 08:49:48 +0000
Received: by outflank-mailman (input) for mailman id 350464;
 Thu, 16 Jun 2022 08:49:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aOxQ=WX=citrix.com=prvs=1593354c1=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o1lCF-0008RI-GV
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 08:49:47 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 449016d2-ed51-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 10:49:45 +0200 (CEST)
Received: from mail-bn8nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jun 2022 04:49:40 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by DS7PR03MB5525.namprd03.prod.outlook.com (2603:10b6:5:2d3::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Thu, 16 Jun
 2022 08:49:38 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.015; Thu, 16 Jun 2022
 08:49:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 449016d2-ed51-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655369385;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=fLJ/1BCNMT7a26LAEsyfKBEGOYqfOjKVa9uNnWC3Az4=;
  b=AV6oDmswOek0h0R80/jLPTkZzKjzMgSsagBRh0S0ZHq/nncTe3QoEkqy
   6hknWST+UHwYe26+5CO9hjJz1W87vkFhNxwAWEzgurEiFgOXN3c9rRI7r
   erE47jUDHV0EuYeL4U0gJQQfKgt5nzsZ4vidw+0k56RzQWlKzhZy1Blh+
   g=;
X-IronPort-RemoteIP: 104.47.58.172
X-IronPort-MID: 73082122
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:ehFWMqzwpH3nKrQMqVd6t+fLxyrEfRIJ4+MujC+fZmUNrF6WrkUFy
 WFLX2HTPvmDYjb2co9/at/lo0xQ78TcydFkSAFqqyAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX5JZS5LwbZj2NY22IbhWmthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 Npl7Jy7ED4oE6P3qc9HYxoAPQ5MBasd9+qSSZS/mZT7I0zuVVLJmqwrJmdmeIoS96BwHH1E8
 uEeJHYVdBefiumqwbW9DO5xmsAkK8qtN4Qa0p1i5WiBUbB6HtaeE+OTuoUwMDQY36iiGd7EY
 MUUc3x3ZQnoaBxTIFYHTpk5mY9Eg1GgKGUI8gvP/8Lb5UD3xktP/KbsP+DvXfu6a51rsRiW/
 kP/qjGR7hYycYb3JSC+2mm3mubFkCf/WYQTPL617PhnhBuU3GN7IBoSWFigveiiimaxXtteL
 wof/S9Ghbg/8gmnQ8fwWzW8oWWYpVgMVtxICeo45QqRjK3O7G6xAmkCUy4Ea9E8ssIybSIl2
 0XPnN7zAzFr9rqPRhq18bOZrii7PyQPGnMTfi8PTQYD4N7LrZk6i1TESdMLOKSylNzuXzbr3
 yqNsjM9lp0Ul8cA06j99lfC6xquqYLOVRUd/RjMUySu6QYRTIy4Y42l73DL4PAGK5yWJmRtp
 1ABksmaqeoIXZeEkXXURP1XRe7zofGYLDfbnFhjWYE78Cig8GKieoYW5yxiIEBuMYAPfjqBj
 FLvhD69LaR7ZBOCBZKbqaroYyj25cAMzejYa80=
IronPort-HdrOrdr: A9a23:DCM+EKpc/VPf7C22khEIzqUaV5u3L9V00zEX/kB9WHVpm5Oj+v
 xGzc5w6farsl0ssREb9uxo9pPwJE800aQFmbX5XI3SJTUO3VHFEGgM1+vfKlHbak7DH6tmpN
 xdmstFeaHN5DpB/KHHCWCDer5PoeVvsprY49s2p00dMD2CAJsQizuRZDzrcHGfE2J9dOAE/d
 enl7x6jgvlXU5SQtWwB3EDUeSGj9rXlKj+aRpDIxI88gGBgR6h9ba/SnGjr18jegIK5Y1n3X
 nOkgT/6Knmm/anyiXE32uWy5hNgtPuxvZKGcTJoMkILTfHjBquee1aKvS/lQFwhNvqxEchkd
 HKrRtlF8Nv60nJdmXwmhfp0xmI6kda11bSjXujxVfzq83wQzw3T+Bbg5hCTxff4008+Plhza
 NixQuixtZqJCKFuB64y8nDVhlsmEbxi2Eli/Qvg3tWVpZbQKNNrLYY4FheHP47bW/HAbgcYa
 dT5fznlbdrmQvwVQGYgoAv+q3nYp0LJGbIfqBY0fblkAS/nxhCvjklLYIk7zU9HakGOud5Dt
 T/Q9tVfY51P74rhIJGdZM8qJiMexvwqSylChPjHX3XUIc6Blnql7nbpJ0I2cDCQu168HJ1ou
 WLbG9l
X-IronPort-AV: E=Sophos;i="5.91,304,1647316800"; 
   d="scan'208";a="73082122"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d+y1kzb/rUnRZCpySi9IVUFsbeVZ3dbHcJRpBuEm74nbUhCSA211d98me6MZCjbZF/xxVKK44thVDysA8FHmtfihdyfzxFVrUJ+Ndedv5A9kQvqUS7NDYtXnMoQYwDwxtufnqo2aTkjAnab1vZWV3atVefS9AZgCWleEvdS7uRPoYesS+iW0rDlewQ8vr4HjdxpI1GTmPTA233tTMWkmy8G/mxVJOhWz9/wrcp0UaLKHI1jM8AD+Au9cazgzZtXK7w3Qgbslqog4LkwIGqhpBnL6Gx+yuGYwP7YaMq5lIVKl4GXrb+L6+bIUYZSSCuL0CqWmXNzlLV1btvoHT+v3Cg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2cLom0y4HWD07Ic3sPVf5/TfOrMlkRnW1dnPd1g2lys=;
 b=IbLC9PRs3kranqis8T6IzYRG0a5xDAzFIKZOEIKbNzXRrhUSXicNDYsUgkVrgeFg/MvwGwCeK1e9GoUe+29aMJ5Cr+fg8l+T5u7VLJHGdaO7VFJMKeVbWPn6SZDaFedbIFcBt0YOHFdmJfewB/K1+10muJzrKsdRi0djpyMnRo5a8oX/DrScp3f0PkqG7xEtD88wUCoeYloPOCEsSrkBAkYoRk8EQdICDIrREGBlM5Ylc+Hk/tAgcIMPsVtmd4tBPnFuN5w/C+8vtj6C22tSugG7ABSDS0J+Y8B5kr7tDthh4DO11iv+uBa7nWT4zwvM2CS67/b1qANajEQmfy9rug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2cLom0y4HWD07Ic3sPVf5/TfOrMlkRnW1dnPd1g2lys=;
 b=aoAfw5RKCIM8x9yB/pNxP+V68kasK6L1MfsrPa458yT73hq1qMb0tQvN32K2y1JTDIZECWWX7uVMMTTAGqJ7YIiQdqRzBSCJIl+iUgaelxMmw4Ldwh33zcFmNk6fuNMy3Ib1jbCpy9wpPG3S7Wb6sBxeb2zy5l07gRK8fE2EdlQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 16 Jun 2022 10:49:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr <olekstysh@gmail.com>, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/2] xen/unpopulated-alloc: Introduce helpers for DMA
 allocations
Message-ID: <YqrunrtBjWPgdlDi@Air-de-Roger>
References: <1652810658-27810-1-git-send-email-olekstysh@gmail.com>
 <1652810658-27810-2-git-send-email-olekstysh@gmail.com>
 <alpine.DEB.2.22.394.2206031420430.2783803@ubuntu-linux-20-04-desktop>
 <00c14b91-4cf2-179c-749d-593db853e42e@gmail.com>
 <alpine.DEB.2.22.394.2206101709210.756493@ubuntu-linux-20-04-desktop>
 <a51dec23-c543-b571-8047-59f39abb0bee@gmail.com>
 <alpine.DEB.2.22.394.2206141735430.1837490@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2206141735430.1837490@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO2P265CA0120.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5e77ae81-37b2-406b-37e4-08da4f75253d
X-MS-TrafficTypeDiagnostic: DS7PR03MB5525:EE_
X-Microsoft-Antispam-PRVS:
	<DS7PR03MB5525C131ACEF388EDD4D6F378FAC9@DS7PR03MB5525.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OtxHInVo6ayjAezQhvK8Rkw/1+myezAWl141bRnoOclO2A7o9uGBynfYn6zNafwpF6FCLjzRPcDGctiDQoLwm0blvj8RUnsIt5YXwHoOT9TTDUrVD/9t87Znm8kcDmRWqtr8gPpxeVwC2vivhcpQ/wy0WBqKVAUNnWi+MgI5wS8PQDfeuJLcskunN5BxBDyI+EO6IdKTM8IahTXmooWtqTi4fiuGTt+QOGv50r3hPjz4WxHw6C/wzlbelmFW3UYMTU3Tks1u6lToq6iwczfnM3QsYn8n8dBRlNi58XEevPA7M/FZEgo1KbGhfajPdSqalM0MoN4+zj+KstDGgQZkgUKa93kL0jFJkoplr6Gbxvm2EjvUamK+hEUh09kCdf1orkxzfwGpt8pU9aOKwRILrUTnAjSLoyClf1hWXypGa2wX7sLGy5nl3GDM0TIPcmFIvCOSPPzVugu0IXRiysIWKRVwXC2v/XQZa2Z+rYqk3Y3dt2AdDwrlrUkp/cEOY64KLPVpyUuji+uVf7p2VUQmq0++Fkvcz6ZKkxuq7D7w4VAaQ5W0Gx5a2EJG/4VJNhFodKa0ZwNUui0f9EUCMY32b5PmicfhVrAOz6G6eyF9XnZSCnppwlnnPQOnfmTy1aJCzr9JsPcH/ehovtVCUqQcsA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(7916004)(366004)(6666004)(85182001)(8936002)(38100700002)(6506007)(33716001)(9686003)(6512007)(26005)(6486002)(54906003)(8676002)(4326008)(186003)(2906002)(5660300002)(508600001)(316002)(6916009)(83380400001)(82960400001)(4744005)(66556008)(66476007)(66946007)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MnUwbWJPa2Y5bEVrdVgrRkQwMXBDQ2hQNUM1YUovejF0RVY5UjNlZ2F3a1pC?=
 =?utf-8?B?Y2gyNjhUVkZNMjdSWVRrZURxQXRxbk9lVFlWc00zWHFPOGErUGhxZ0ZZTzBu?=
 =?utf-8?B?bFV5NFY5ZStOYnR5RlVOQmYyNjNacjJBYW9ZeVRVdHVVRER3dVR0NDAyV2d6?=
 =?utf-8?B?T0tjczNwWkswS2dtOW1DVnJVL3l2OVJVZmU5eEN4T1czRGNLSG5XYm1JTXNT?=
 =?utf-8?B?UldMS2ZZV1JlN096Z1F1OGdLZWdNblN0cmowS1dGRndMRUZwdC82K21KcENw?=
 =?utf-8?B?ZzRFcUZDWTRPWHFMR0JaQmdDMFk5RU1uVy9kdGdDd2xIY1U1bE5PWnFWOHBC?=
 =?utf-8?B?MmxUdGFYSmpJQUUyWXA2cS9tWVNrWHJESGFHd1l5dlBYVUhMVVVLZFNHa0Nl?=
 =?utf-8?B?elJKMjM3MS9rWXJhd1A2Wm42RUV0UWhIcC8zeTM3bE1JY2JXL21pNFBTdzc4?=
 =?utf-8?B?ZXQySnhFVVF4R1lJam9seEZKUE5uZlR6cDNvVGJFRk5yYW1oOVhKYzdVcHlX?=
 =?utf-8?B?ZE9VdnNVdFJlR0JPeXdTM3VVMVR2eFhORUFqQ0NaWm5NQVdRZEs4Z1JXQmdp?=
 =?utf-8?B?ejQ1S3YyaGdGbVBUcUhwYzRKWE5kS1VocGJteGkzamU4NUJiTXpuMHVGYlhB?=
 =?utf-8?B?bm5RaWRJS3FtNG51NzRmVnQxeCtSVzZIcDhrK0FBYlc4QVAvUU9GQUsydXZ6?=
 =?utf-8?B?QTNRWERkT2FkRFlWYzRNVTJVZVJSaXhGalZBckNFbFJrL3dVTnZVY0hjMGZw?=
 =?utf-8?B?bm1xeU5iWUttNzhPYTZ5eGJMYm9kaXF6cnJKTEZMZzR6VDJxS2tTV1hRaDU4?=
 =?utf-8?B?eFFNNkdiVGhKdGtSYWlXOE1GdEpRNkJ1Z2dtOWlvRTFoUTZQZDdwa0hNMFBj?=
 =?utf-8?B?T0U0b2FZS2pDN1l2WVRsVTB6QVVSeVBoSTdIMFdkV3d3SWRwTFRNdVpHdEtl?=
 =?utf-8?B?VytxYk9GRTNWZXhDVy8vTEk4c2RxQnZ0c2dtMzd0M3BQVk1VYVNmcUpyTmg1?=
 =?utf-8?B?SHBTeVB5UHY0K29paEo1cWw4QWVRNkJselpvZ01pV3NVaXdxZUp3aEtodG9a?=
 =?utf-8?B?NlloR1I3QUZhYlVXcjVSTUxEbXJ3dkZpSkhyencyeml3V0VrbUQ2TmRlMXoz?=
 =?utf-8?B?UVhUZkN2Sjd0alhia0tPQzFMejJST2RaUFlEblk1ZThJR3pUK3ZVbkQyMkJa?=
 =?utf-8?B?Qm9ZTzJwcCtpcmtOQlJKN21qWVh6SE5RLzF1c0V1cU9MZFVBTjZrUHJZMklX?=
 =?utf-8?B?RXZqRDZpcEpleDNqMVAxOENtZU9Udkl5QzRDZzdpbGt3dHQrcmk1eDUzb016?=
 =?utf-8?B?aDJLNUFpeGNwR2szT1dsbDA4ZUNVRDlZdGxuaHlDaVNSOS81QVhhR05oRzVo?=
 =?utf-8?B?Z2NMaCtZYzY3WDJTTjBReWkrbWVYNGZKOENBWjFaaUVIdis5WCswNElaVExn?=
 =?utf-8?B?U09EZUpQVHNJREF3c0ZIcjBwRTI1cjhOMWQ0OFJjc012VjdXejF2NXlZNVVM?=
 =?utf-8?B?MkJnVnM1VHRYa01lRVJ0eWw5bkNpSnhRUmduQkRCZVZ1QzFuZjNDVmpuUzRy?=
 =?utf-8?B?WVE3MlN0eGJDZHNEOTJ6MUFLekZRT3lsdDFnK2hRRWZHYkdpbUw3MFlQRHVm?=
 =?utf-8?B?V0xESDVXUjBHU01MMzE3ajJrdG9jUitYYWdMRVNTMlhYZWRkS0R5bjYxWUJS?=
 =?utf-8?B?Z0NSelhpTjFHZ2JSbHhvYlBoVU1MaUdSWWxCbGVwY1VnRG9pWGN4c0dCRVJ2?=
 =?utf-8?B?a01hMjFVeWZFZGxXWHRKdmZuUjVTNzcvWGVLU2NRQmd2aklHWW1rMkJkR2h2?=
 =?utf-8?B?LzlzOWh6Tzc5R2UxS1hOekh6THYwbWZmUFlidy9GODVKVDlZaHkxeFhqems4?=
 =?utf-8?B?Q3Q3MEpseHlnYXRQcXJLa0hnSHgzaG9JVU5kRisxMjU1b2xiR2crMkhrc3Js?=
 =?utf-8?B?ZVZseUg1M2t4S29IcUhhU2E5OE9Gb0xkT3ZrOVd0N1VrRUpDOVozTm55OTJH?=
 =?utf-8?B?TW84ekNkU2V4Q1hrZ0NoalZUdjByTWFIM1pmc0g4ckpDNlRydy9QRGxFVTVI?=
 =?utf-8?B?YjhWOEhhNkUzRFl1aFZLaVZaV3hpTm5ibmU5aGdvamZBSHJFenBGRG9objla?=
 =?utf-8?B?dkI1MXFnTjdFTEw5M2pUK0tHNUlDcHA1UXZWMGIzMGE5bitaZWFTbG1qL2U5?=
 =?utf-8?B?WmcxbTA5cmN6aU55ZGE4RXlhamtsbWlHazJIbS9pT1hhcFBUVUpJTmtJdFpa?=
 =?utf-8?B?NWJOUXpPc1BPUkRyOGdzYWk2S0cybGl4ZTJXa2VrZ1ErVkx1bURTcTliaDBn?=
 =?utf-8?B?em01MWhuaVNUa2hxd2RwNlQ3UWZDRVpQSnpaeXMrZElpMk16b3pYSUQwcmZH?=
 =?utf-8?Q?2YboA+rg9jRMv0nk=3D?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5e77ae81-37b2-406b-37e4-08da4f75253d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2022 08:49:38.3213
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +/uGnLNB6Ktf9E6jRUIHaibTVsrj58ddFnWVl28ewv5vov5wwa1/6YcToQ5T+wa2YBrZQczyZ82Kscvl1DuXWg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5525

On Tue, Jun 14, 2022 at 05:45:53PM -0700, Stefano Stabellini wrote:
> Basically, if we allocate (and free) page-by-page it leads to more
> efficient resource utilization but it is slower. If we allocate larger
> contiguous chunks it is faster but it leads to less efficient resource
> utilization.
> 
> Given that on both x86 and ARM the unpopulated memory resource is
> arbitrarily large, I don't think we need to worry about resource
> utilization. It is not backed by real memory. The only limitation is the
> address space size which is very large.

Well, unpopulated memory will consume memory to populate the related
metadata (i.e. struct page and related tracking information) for the
unpopulated region, so it's not completely free.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 08:56:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 08:56:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350473.576822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lIc-0001So-Ad; Thu, 16 Jun 2022 08:56:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350473.576822; Thu, 16 Jun 2022 08:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lIc-0001Sh-6A; Thu, 16 Jun 2022 08:56:22 +0000
Received: by outflank-mailman (input) for mailman id 350473;
 Thu, 16 Jun 2022 08:56:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7YVi=WX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1lIb-0001Sb-Ap
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 08:56:21 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 302810c8-ed52-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 10:56:19 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 098491FAE2;
 Thu, 16 Jun 2022 08:56:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C58BC1344E;
 Thu, 16 Jun 2022 08:56:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IQyPLjLwqmIUOwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 16 Jun 2022 08:56:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 302810c8-ed52-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655369779; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ClCu1lSS1eT0mVHhicogOYfhzB0qOq3w1ezeRl7JLbI=;
	b=Ek2sYhrEQLjW95dTjLSwKVrt+HGAJ6TX0vmJWWHItRa3l9Qoxvnv3rXVZ85mJvtnLvx70w
	eFsyvJFDN+goB33pTA0D0AWI8OViFO7FGF/Cvc5iGbI8Nfz2WA1aJTArKeOQPGojHnkezA
	IpjugpHm+/v23Pu8A0ZQc90Rcps2BkY=
Message-ID: <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
Date: Thu, 16 Jun 2022 10:56:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
Content-Language: en-US
To: Oleksandr <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------1IF8fi1u5UzCv06QNZz4Ohpv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------1IF8fi1u5UzCv06QNZz4Ohpv
Content-Type: multipart/mixed; boundary="------------xEvkx51wTbwtZ0aB08s3WO9P";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksandr <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
In-Reply-To: <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>

--------------xEvkx51wTbwtZ0aB08s3WO9P
Content-Type: multipart/mixed; boundary="------------tZsPIuXAi0nONCWwb0JoU0z5"

--------------tZsPIuXAi0nONCWwb0JoU0z5
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTYuMDYuMjIgMDk6MzEsIE9sZWtzYW5kciB3cm90ZToNCj4gDQo+IE9uIDE2LjA2LjIy
IDA4OjM3LCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPiANCj4gDQo+IEhlbGxvIEp1ZXJnZW4N
Cj4gDQo+PiBDb21taXQgZmExZjU3NDIxZTBiICgieGVuL3ZpcnRpbzogRW5hYmxlIHJlc3Ry
aWN0ZWQgbWVtb3J5IGFjY2VzcyB1c2luZw0KPj4gWGVuIGdyYW50IG1hcHBpbmdzIikgaW50
cm9kdWNlZCBhIG5ldyByZXF1aXJlbWVudCBmb3IgdXNpbmcgdmlydGlvDQo+PiBkZXZpY2Vz
OiB0aGUgYmFja2VuZCBub3cgbmVlZHMgdG8gc3VwcG9ydCB0aGUgVklSVElPX0ZfQUNDRVNT
X1BMQVRGT1JNDQo+PiBmZWF0dXJlLg0KPj4NCj4+IFRoaXMgaXMgYW4gdW5kdWUgcmVxdWly
ZW1lbnQgZm9yIG5vbi1QViBndWVzdHMsIGFzIHRob3NlIGNhbiBiZSBvcGVyYXRlZA0KPj4g
d2l0aCBleGlzdGluZyBiYWNrZW5kcyB3aXRob3V0IGFueSBwcm9ibGVtLCBhcyBsb25nIGFz
IHRob3NlIGJhY2tlbmRzDQo+PiBhcmUgcnVubmluZyBpbiBkb20wLg0KPj4NCj4+IFBlciBk
ZWZhdWx0IGFsbG93IHZpcnRpbyBkZXZpY2VzIHdpdGhvdXQgZ3JhbnQgc3VwcG9ydCBmb3Ig
bm9uLVBWDQo+PiBndWVzdHMuDQo+Pg0KPj4gQWRkIGEgbmV3IGNvbmZpZyBpdGVtIHRvIGFs
d2F5cyBmb3JjZSB1c2Ugb2YgZ3JhbnRzIGZvciB2aXJ0aW8uDQo+Pg0KPj4gRml4ZXM6IGZh
MWY1NzQyMWUwYiAoInhlbi92aXJ0aW86IEVuYWJsZSByZXN0cmljdGVkIG1lbW9yeSBhY2Nl
c3MgdXNpbmcgWGVuIA0KPj4gZ3JhbnQgbWFwcGluZ3MiKQ0KPj4gUmVwb3J0ZWQtYnk6IFZp
cmVzaCBLdW1hciA8dmlyZXNoLmt1bWFyQGxpbmFyby5vcmc+DQo+PiBTaWduZWQtb2ZmLWJ5
OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQo+PiAtLS0NCj4+IFYyOg0KPj4g
LSByZW1vdmUgY29tbWFuZCBsaW5lIHBhcmFtZXRlciAoQ2hyaXN0b3BoIEhlbGx3aWcpDQo+
PiAtLS0NCj4+IMKgIGRyaXZlcnMveGVuL0tjb25maWcgfCA5ICsrKysrKysrKw0KPj4gwqAg
aW5jbHVkZS94ZW4veGVuLmjCoMKgIHwgMiArLQ0KPj4gwqAgMiBmaWxlcyBjaGFuZ2VkLCAx
MCBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9uKC0pDQo+Pg0KPj4gZGlmZiAtLWdpdCBhL2Ry
aXZlcnMveGVuL0tjb25maWcgYi9kcml2ZXJzL3hlbi9LY29uZmlnDQo+PiBpbmRleCBiZmQ1
ZjRmNzA2YmMuLmE2NWJkOTIxMjFhNSAxMDA2NDQNCj4+IC0tLSBhL2RyaXZlcnMveGVuL0tj
b25maWcNCj4+ICsrKyBiL2RyaXZlcnMveGVuL0tjb25maWcNCj4+IEBAIC0zNTUsNCArMzU1
LDEzIEBAIGNvbmZpZyBYRU5fVklSVElPDQo+PiDCoMKgwqDCoMKgwqDCoCBJZiBpbiBkb3Vi
dCwgc2F5IG4uDQo+PiArY29uZmlnIFhFTl9WSVJUSU9fRk9SQ0VfR1JBTlQNCj4+ICvCoMKg
wqAgYm9vbCAiUmVxdWlyZSBYZW4gdmlydGlvIHN1cHBvcnQgdG8gdXNlIGdyYW50cyINCj4+
ICvCoMKgwqAgZGVwZW5kcyBvbiBYRU5fVklSVElPDQo+PiArwqDCoMKgIGhlbHANCj4+ICvC
oMKgwqDCoMKgIFJlcXVpcmUgdmlydGlvIGZvciBYZW4gZ3Vlc3RzIHRvIHVzZSBncmFudCBt
YXBwaW5ncy4NCj4+ICvCoMKgwqDCoMKgIFRoaXMgd2lsbCBhdm9pZCB0aGUgbmVlZCB0byBn
aXZlIHRoZSBiYWNrZW5kIHRoZSByaWdodCB0byBtYXAgYWxsDQo+PiArwqDCoMKgwqDCoCBv
ZiB0aGUgZ3Vlc3QgbWVtb3J5LiBUaGlzIHdpbGwgbmVlZCBzdXBwb3J0IG9uIHRoZSBiYWNr
ZW5kIHNpZGUNCj4+ICvCoMKgwqDCoMKgIChlLmcuIHFlbXUgb3Iga2VybmVsLCBkZXBlbmRp
bmcgb24gdGhlIHZpcnRpbyBkZXZpY2UgdHlwZXMgdXNlZCkuDQo+PiArDQo+PiDCoCBlbmRt
ZW51DQo+PiBkaWZmIC0tZ2l0IGEvaW5jbHVkZS94ZW4veGVuLmggYi9pbmNsdWRlL3hlbi94
ZW4uaA0KPj4gaW5kZXggMDc4MGE4MWUxNDBkLi40ZDQxODhmMjAzMzcgMTAwNjQ0DQo+PiAt
LS0gYS9pbmNsdWRlL3hlbi94ZW4uaA0KPj4gKysrIGIvaW5jbHVkZS94ZW4veGVuLmgNCj4+
IEBAIC01Niw3ICs1Niw3IEBAIGV4dGVybiB1NjQgeGVuX3NhdmVkX21heF9tZW1fc2l6ZTsN
Cj4+IMKgIHN0YXRpYyBpbmxpbmUgdm9pZCB4ZW5fc2V0X3Jlc3RyaWN0ZWRfdmlydGlvX21l
bW9yeV9hY2Nlc3Modm9pZCkNCj4+IMKgIHsNCj4+IC3CoMKgwqAgaWYgKElTX0VOQUJMRUQo
Q09ORklHX1hFTl9WSVJUSU8pICYmIHhlbl9kb21haW4oKSkNCj4+ICvCoMKgwqAgaWYgKElT
X0VOQUJMRUQoQ09ORklHX1hFTl9WSVJUSU9fRk9SQ0VfR1JBTlQpIHx8IHhlbl9wdl9kb21h
aW4oKSkNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBwbGF0Zm9ybV9zZXQoUExBVEZPUk1fVklS
VElPX1JFU1RSSUNURURfTUVNX0FDQ0VTUyk7DQo+IA0KPiANCj4gTG9va3MgbGlrZSwgdGhl
IGZsYWcgd2lsbCBiZSAqYWx3YXlzKiBzZXQgZm9yIHBhcmF2aXJ0dWFsaXplZCBndWVzdHMg
ZXZlbiBpZiANCj4gQ09ORklHX1hFTl9WSVJUSU8gZGlzYWJsZWQuDQo+IA0KPiBNYXliZSB3
ZSBzaG91bGQgY2xhcmlmeSB0aGUgY2hlY2s/DQo+IA0KPiANCj4gaWYgKElTX0VOQUJMRUQo
Q09ORklHX1hFTl9WSVJUSU9fRk9SQ0VfR1JBTlQpIHx8IElTX0VOQUJMRUQoQ09ORklHX1hF
Tl9WSVJUSU8pIA0KPiAmJiB4ZW5fcHZfZG9tYWluKCkpDQo+IA0KPiAgwqDCoMKgIHBsYXRm
b3JtX3NldChQTEFURk9STV9WSVJUSU9fUkVTVFJJQ1RFRF9NRU1fQUNDRVNTKTsNCj4gDQoN
Clllcywgd2Ugc2hvdWxkLiBJIGhhZCB0aGUgZnVuY3Rpb24gaW4gZ3JhbnQtZG1hLW9wcy5j
IGluIFYxLCBhbmQgY291bGQgZHJvcCB0aGUNCkNPTkZJR19YRU5fVklSVElPIGRlcGVuZGVu
Y3kgZm9yIHRoYXQgcmVhc29uLg0KDQpJJ2xsIHdhaXQgZm9yIG1vcmUgY29tbWVudHMgYmVm
b3JlIHNlbmRpbmcgVjMsIHRob3VnaC4NCg0KDQpKdWVyZ2VuDQo=
--------------tZsPIuXAi0nONCWwb0JoU0z5
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------tZsPIuXAi0nONCWwb0JoU0z5--

--------------xEvkx51wTbwtZ0aB08s3WO9P--

--------------1IF8fi1u5UzCv06QNZz4Ohpv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKq8DIFAwAAAAAACgkQsN6d1ii/Ey9p
AQf9GsDfMN6LBQuiIGXN/DLwv9awfh62/ozPA597SR9/IAvrGkMAVaeN59Twj/Oaiy7SGtR3F2v2
CVkNoulP7MpGJDEAgnLsFHx5j2FUd+gTvgLMqlCMNT2kyaWCtGR4MAvjcU9wSijXooA0ZLgAhw0J
tpWDS/kJDHpua5Ye2VdjygaGIqgpnKvgCiQAT8L6oPmluSbLNvvlRqFLKmMXO7w2dSHf+MGYZ4GY
bILzrgWKpx7a7qM/RuL74gZFdfSZD2RjnNpZecCaYNpaeLH2Svb7OP/kK56q3J5jKj+ZiLxx3XT3
H0auoF8V1E02cD0vvMpaezrHcCjFFguMk6nnJwdx1Q==
=S6Fy
-----END PGP SIGNATURE-----

--------------1IF8fi1u5UzCv06QNZz4Ohpv--


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 09:30:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 09:30:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350487.576845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lpT-0006Vn-5B; Thu, 16 Jun 2022 09:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350487.576845; Thu, 16 Jun 2022 09:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lpT-0006Vg-1m; Thu, 16 Jun 2022 09:30:19 +0000
Received: by outflank-mailman (input) for mailman id 350487;
 Thu, 16 Jun 2022 09:30:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1lpS-0006VW-0G; Thu, 16 Jun 2022 09:30:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1lpR-00070g-Uq; Thu, 16 Jun 2022 09:30:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1lpR-0006FM-FD; Thu, 16 Jun 2022 09:30:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1lpR-0003IV-Ei; Thu, 16 Jun 2022 09:30:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FKDASLle8QdlBQ9p3PhaKe83gtxTA1TXmryShuHE8hc=; b=a6pbYCQUuomFhYC/CXltSCt7zZ
	caMBregdemXWYVLgfW7gbFXpbTliZaSXgsERy7fGrTvI8vGggyTD3Syw45z2H11X75Az2xgvMD88L
	DZYENPDe0+jNxMOAa2d0yNmBrMamW5ilBDnIxatQEzkN/FQvD2AxbcKL1484w1wlZjPY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171182: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=afe9eb14ea1cbac5d91ca04eb64810d2d9fa22b0
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 09:30:17 +0000

flight 171182 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171182/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                afe9eb14ea1cbac5d91ca04eb64810d2d9fa22b0
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   23 days
Failing since        170716  2022-05-24 11:12:06 Z   22 days   55 attempts
Testing same since   171182  2022-06-15 20:43:26 Z    0 days    1 attempts

------------------------------------------------------------
2348 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 277334 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 09:38:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 09:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350497.576856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lwy-0007C6-0o; Thu, 16 Jun 2022 09:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350497.576856; Thu, 16 Jun 2022 09:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1lwx-0007Bz-TV; Thu, 16 Jun 2022 09:38:03 +0000
Received: by outflank-mailman (input) for mailman id 350497;
 Thu, 16 Jun 2022 09:38:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7YVi=WX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o1lww-0007Bt-H9
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 09:38:02 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0306586c-ed58-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 11:38:00 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0D0F31FB24;
 Thu, 16 Jun 2022 09:38:00 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id CE57613A70;
 Thu, 16 Jun 2022 09:37:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PKrLL/f5qmK8TAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 16 Jun 2022 09:37:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0306586c-ed58-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655372280; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mqMQ+kXK9KAfjDfvhuBy6CzOafCzcWwgHBD28IQj36c=;
	b=RG2I+qmd9N+Z8FZsVWieY2gsM69DTVwwQ8njB7BAZG39vXwBjK6mgnVwzIIJhOITXCIykI
	pD2dbX8aT/R+7J+JXjeXK6spfANN61FmFaJDUBAoJJxfVlkx2uKUkjsQh0vGU6LVGh2ClB
	+iZFRKtXNA2jE5cQpF3uoThD/4motXw=
Message-ID: <441665ed-f719-56a2-b8c5-85197a67242e@suse.com>
Date: Thu, 16 Jun 2022 11:37:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [linux-linus test] 171182: regressions - FAIL
Content-Language: en-US
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-171182-mainreport@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <osstest-171182-mainreport@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------iPdcyVQF9FKxgQuNCO9uZAmU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------iPdcyVQF9FKxgQuNCO9uZAmU
Content-Type: multipart/mixed; boundary="------------YhE2FUvG07rpn2UqkNm0XP0q";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <441665ed-f719-56a2-b8c5-85197a67242e@suse.com>
Subject: Re: [linux-linus test] 171182: regressions - FAIL
References: <osstest-171182-mainreport@xen.org>
In-Reply-To: <osstest-171182-mainreport@xen.org>

--------------YhE2FUvG07rpn2UqkNm0XP0q
Content-Type: multipart/mixed; boundary="------------k6qz6wwUPA0rDXcM5YNtJmND"

--------------k6qz6wwUPA0rDXcM5YNtJmND
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.06.22 11:30, osstest service owner wrote:
> flight 171182 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/171182/

I think this is an infrastructure/hardware problem. Looking into

http://logs.test-lab.xenproject.org/osstest/logs/171182/test-amd64-amd64-xl/serial-elbling0.log

I see:

Jun 16 01:53:06.737732 [    2.034788] Run /init as init process
Jun 16 01:53:06.737790 Loading, please wait...
Jun 16 01:53:06.737834 Starting version 241
Jun 16 01:53:06.737875 [    2.639555] megasas: 07.719.03.00-rc1
Jun 16 01:53:06.749801 [    2.640679] megaraid_sas 0000:01:00.0: Waiting for FW 
to come to ready state
Jun 16 01:53:06.749867 [    2.640681] megaraid_sas 0000:01:00.0: FW in FAULT 
state, Fault code:0xfff0000 subcode:0xff00 func:megasas_transition_to_ready
Jun 16 01:53:06.773736 [    2.640687] 00000000: ffffffff

And from then on only megaraid_sas error messages are following.


Juergen

> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
>   test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
>   test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
>   test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
>   test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
>   test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
>   test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
>   test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
>   test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
>   test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
>   test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
>   test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
>   test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
>   test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
>   test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
>   test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
>   test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
>   test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
>   test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
>   test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
>   test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
>   test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
>   test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714
> 
> Regressions which are regarded as allowable (not blocking):
>   test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 170714
> 
> Tests which did not succeed, but are not blocking:
>   test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
>   test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
>   test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
>   test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
>   test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
>   test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
>   test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
>   test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
>   test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
>   test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
>   test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
>   test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
>   test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
>   test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
>   test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
>   test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
>   test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
>   test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
>   test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
> 
> version targeted for testing:
>   linux                afe9eb14ea1cbac5d91ca04eb6481
MGQyZDlmYTIyYjANCj4gYmFzZWxpbmUgdmVyc2lvbjoNCj4gICBsaW51eCAgICAgICAgICAg
ICAgICBkNmVjYWEwMDI0NDg1ZWZmZDA2NTEyNGZlNzc0ZGUyZTIyMDk1ZjJkDQo+IA0KPiBM
YXN0IHRlc3Qgb2YgYmFzaXMgICAxNzA3MTQgIDIwMjItMDUtMjQgMDM6Mjc6NDQgWiAgIDIz
IGRheXMNCj4gRmFpbGluZyBzaW5jZSAgICAgICAgMTcwNzE2ICAyMDIyLTA1LTI0IDExOjEy
OjA2IFogICAyMiBkYXlzICAgNTUgYXR0ZW1wdHMNCj4gVGVzdGluZyBzYW1lIHNpbmNlICAg
MTcxMTgyICAyMDIyLTA2LTE1IDIwOjQzOjI2IFogICAgMCBkYXlzICAgIDEgYXR0ZW1wdHMN
Cj4gDQo+IC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQ0KPiAyMzQ4IHBlb3BsZSB0b3VjaGVkIHJldmlzaW9ucyB1bmRlciB0
ZXN0LA0KPiBub3QgbGlzdGluZyB0aGVtIGFsbA0KPiANCj4gam9iczoNCj4gICBidWlsZC1h
bWQ2NC14c20gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
cGFzcw0KPiAgIGJ1aWxkLWFybTY0LXhzbSAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBwYXNzDQo+ICAgYnVpbGQtaTM4Ni14c20gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICBidWlsZC1hbWQ2
NCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFz
cw0KPiAgIGJ1aWxkLWFybTY0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBwYXNzDQo+ICAgYnVpbGQtYXJtaGYgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICBidWlsZC1pMzg2ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcw0K
PiAgIGJ1aWxkLWFtZDY0LWxpYnZpcnQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBwYXNzDQo+ICAgYnVpbGQtYXJtNjQtbGlidmlydCAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICBidWlsZC1hcm1oZi1saWJ2
aXJ0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcw0KPiAg
IGJ1aWxkLWkzODYtbGlidmlydCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBwYXNzDQo+ICAgYnVpbGQtYW1kNjQtcHZvcHMgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICBidWlsZC1hcm02NC1wdm9wcyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcw0KPiAgIGJ1
aWxkLWFybWhmLXB2b3BzICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBwYXNzDQo+ICAgYnVpbGQtaTM4Ni1wdm9wcyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3Qt
YW1kNjQtY29yZXNjaGVkLWFtZDY0LXhsICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBwYXNzDQo+ICAgdGVzdC1hcm02NC1hcm02NC14bCAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFybWhmLWFybWhmLXhsICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcw0KPiAgIHRlc3QtYW1k
NjQtYW1kNjQtbGlidmlydC1xZW11dS1kZWJpYW5odm0tYW1kNjQteHNtICAgICAgICAgICBw
YXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC14bC1xZW11dC1zdHViZG9tLWRlYmlhbmh2bS1h
bWQ2NC14c20gICAgICAgIGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsLXFlbXV0LWRl
Ymlhbmh2bS1pMzg2LXhzbSAgICAgICAgICAgICAgICAgcGFzcw0KPiAgIHRlc3QtYW1kNjQt
YW1kNjQteGwtcWVtdXUtZGViaWFuaHZtLWkzODYteHNtICAgICAgICAgICAgICAgICBmYWls
DQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC1saWJ2aXJ0LXhzbSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHBhc3MNCj4gICB0ZXN0LWFybTY0LWFybTY0LWxpYnZpcnQteHNtICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYW1kNjQtYW1k
NjQteGwteHNtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmYWlsDQo+
ICAgdGVzdC1hcm02NC1hcm02NC14bC14c20gICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXFlbXV1LW5lc3RlZC1hbWQg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYW1kNjQtYW1kNjQt
eGwtcHZodjItYW1kICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYXNzDQo+ICAg
dGVzdC1hbWQ2NC1hbWQ2NC1kb20wcHZoLXhsLWFtZCAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHBhc3MNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsLXFlbXV0LWRlYmlhbmh2bS1h
bWQ2NCAgICAgICAgICAgICAgICAgICAgcGFzcw0KPiAgIHRlc3QtYW1kNjQtYW1kNjQteGwt
cWVtdXUtZGViaWFuaHZtLWFtZDY0ICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVz
dC1hbWQ2NC1hbWQ2NC1mcmVlYnNkMTEtYW1kNjQgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHBhc3MNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LWZyZWVic2QxMi1hbWQ2NCAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYW1kNjQtYW1kNjQteGwtcWVt
dXUtb3ZtZi1hbWQ2NCAgICAgICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1h
bWQ2NC1hbWQ2NC14bC1xZW11dC13aW43LWFtZDY0ICAgICAgICAgICAgICAgICAgICAgICAg
IGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsLXFlbXV1LXdpbjctYW1kNjQgICAgICAg
ICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYW1kNjQtYW1kNjQteGwtcWVtdXQt
d3MxNi1hbWQ2NCAgICAgICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hbWQ2
NC1hbWQ2NC14bC1xZW11dS13czE2LWFtZDY0ICAgICAgICAgICAgICAgICAgICAgICAgIGZh
aWwNCj4gICB0ZXN0LWFybWhmLWFybWhmLXhsLWFybmRhbGUgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgcGFzcw0KPiAgIHRlc3QtYW1kNjQtYW1kNjQtZXhhbWluZS1iaW9z
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hbWQ2NC1h
bWQ2NC14bC1jcmVkaXQxICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwN
Cj4gICB0ZXN0LWFybTY0LWFybTY0LXhsLWNyZWRpdDEgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYXJtaGYtYXJtaGYteGwtY3JlZGl0MSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2
NC14bC1jcmVkaXQyICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwNCj4g
ICB0ZXN0LWFybTY0LWFybTY0LXhsLWNyZWRpdDIgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYXJtaGYtYXJtaGYteGwtY3JlZGl0MiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBwYXNzDQo+ICAgdGVzdC1hcm1oZi1hcm1oZi14
bC1jdWJpZXRydWNrICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICB0
ZXN0LWFtZDY0LWFtZDY0LXhsLXFlbXV1LWRtcmVzdHJpY3QtYW1kNjQtZG1yZXN0cmljdCAg
ICAgICAgcGFzcw0KPiAgIHRlc3QtYW1kNjQtYW1kNjQtZXhhbWluZSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hcm02NC1hcm02NC1leGFt
aW5lICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0
LWFybWhmLWFybWhmLWV4YW1pbmUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgcGFzcw0KPiAgIHRlc3QtYW1kNjQtYW1kNjQtcWVtdXUtbmVzdGVkLWludGVsICAgICAg
ICAgICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC14bC1wdmh2
Mi1pbnRlbCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFt
ZDY0LWFtZDY0LWRvbTBwdmgteGwtaW50ZWwgICAgICAgICAgICAgICAgICAgICAgICAgICAg
cGFzcw0KPiAgIHRlc3QtYW1kNjQtYW1kNjQtbGlidmlydCAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hcm1oZi1hcm1oZi1saWJ2aXJ0ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICB0ZXN0LWFtZDY0
LWFtZDY0LXhsLW11bHRpdmNwdSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFp
bA0KPiAgIHRlc3QtYXJtaGYtYXJtaGYteGwtbXVsdGl2Y3B1ICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBwYXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC1wYWlyICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFt
ZDY0LWxpYnZpcnQtcGFpciAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0K
PiAgIHRlc3QtYW1kNjQtYW1kNjQteGwtcHZzaGltICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBmYWlsDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC1weWdydWIgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICB0ZXN0LWFtZDY0LWFtZDY0
LWxpYnZpcnQtcWNvdzIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAg
IHRlc3QtYXJtaGYtYXJtaGYtbGlidmlydC1xY293MiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBwYXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC1saWJ2aXJ0LXJhdyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFybTY0LWFybTY0LWxp
YnZpcnQtcmF3ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRl
c3QtYXJtaGYtYXJtaGYtbGlidmlydC1yYXcgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBwYXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC14bC1ydGRzICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHBhc3MNCj4gICB0ZXN0LWFybWhmLWFybWhmLXhsLXJ0
ZHMgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3Qt
YXJtNjQtYXJtNjQteGwtc2VhdHRsZSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBmYWlsDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC14bC1xZW11dS1kZWJpYW5odm0tYW1kNjQt
c2hhZG93ICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsLXNoYWRv
dyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcw0KPiAgIHRlc3QtYXJt
NjQtYXJtNjQteGwtdGh1bmRlcnggICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBw
YXNzDQo+ICAgdGVzdC1hbWQ2NC1hbWQ2NC1leGFtaW5lLXVlZmkgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGZhaWwNCj4gICB0ZXN0LWFtZDY0LWFtZDY0LXhsLXZoZCAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmFpbA0KPiAgIHRlc3QtYXJtNjQt
YXJtNjQteGwtdmhkICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmYWls
DQo+ICAgdGVzdC1hcm1oZi1hcm1oZi14bC12aGQgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHBhc3MNCj4gDQo+IA0KPiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0NCj4gc2ctcmVwb3J0LWZsaWdo
dCBvbiBvc3N0ZXN0LnRlc3QtbGFiLnhlbnByb2plY3Qub3JnDQo+IGxvZ3M6IC9ob21lL2xv
Z3MvbG9ncw0KPiBpbWFnZXM6IC9ob21lL2xvZ3MvaW1hZ2VzDQo+IA0KPiBMb2dzLCBjb25m
aWcgZmlsZXMsIGV0Yy4gYXJlIGF2YWlsYWJsZSBhdA0KPiAgICAgIGh0dHA6Ly9sb2dzLnRl
c3QtbGFiLnhlbnByb2plY3Qub3JnL29zc3Rlc3QvbG9ncw0KPiANCj4gRXhwbGFuYXRpb24g
b2YgdGhlc2UgcmVwb3J0cywgYW5kIG9mIG9zc3Rlc3QgaW4gZ2VuZXJhbCwgaXMgYXQNCj4g
ICAgICBodHRwOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD1vc3N0ZXN0LmdpdDthPWJs
b2I7Zj1SRUFETUUuZW1haWw7aGI9bWFzdGVyDQo+ICAgICAgaHR0cDovL3hlbmJpdHMueGVu
Lm9yZy9naXR3ZWIvP3A9b3NzdGVzdC5naXQ7YT1ibG9iO2Y9UkVBRE1FO2hiPW1hc3Rlcg0K
PiANCj4gVGVzdCBoYXJuZXNzIGNvZGUgY2FuIGJlIGZvdW5kIGF0DQo+ICAgICAgaHR0cDov
L3hlbmJpdHMueGVuLm9yZy9naXR3ZWI/cD1vc3N0ZXN0LmdpdDthPXN1bW1hcnkNCj4gDQo+
IA0KPiBOb3QgcHVzaGluZy4NCj4gDQo+IChObyByZXZpc2lvbiBsb2c7IGl0IHdvdWxkIGJl
IDI3NzMzNCBsaW5lcyBsb25nLikNCj4gDQoNCg==
From xen-devel-bounces@lists.xenproject.org Thu Jun 16 09:53:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 09:53:09 +0000
Date: Thu, 16 Jun 2022 11:52:34 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: osstest service owner <osstest-admin@xenproject.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [linux-linus test] 171182: regressions - FAIL
Message-ID: <Yqr9Yt9ZsnSZ687L@Air-de-Roger>
References: <osstest-171182-mainreport@xen.org>
 <441665ed-f719-56a2-b8c5-85197a67242e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <441665ed-f719-56a2-b8c5-85197a67242e@suse.com>

On Thu, Jun 16, 2022 at 11:37:59AM +0200, Juergen Gross wrote:
> On 16.06.22 11:30, osstest service owner wrote:
> > flight 171182 linux-linus real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/171182/
> 
> I think this is an infrastructure/hardware problem. Looking into
> 
> http://logs.test-lab.xenproject.org/osstest/logs/171182/test-amd64-amd64-xl/serial-elbling0.log
> 
> I see:
> 
> Jun 16 01:53:06.737732 [    2.034788] Run /init as init process
> Jun 16 01:53:06.737790 Loading, please wait...
> Jun 16 01:53:06.737834 Starting version 241
> Jun 16 01:53:06.737875 [    2.639555] megasas: 07.719.03.00-rc1
> Jun 16 01:53:06.749801 [    2.640679] megaraid_sas 0000:01:00.0: Waiting for
> FW to come to ready state
> Jun 16 01:53:06.749867 [    2.640681] megaraid_sas 0000:01:00.0: FW in FAULT
> state, Fault code:0xfff0000 subcode:0xff00 func:megasas_transition_to_ready
> Jun 16 01:53:06.773736 [    2.640687] 00000000: ffffffff
> 
> And from then on only megaraid_sas error messages are following.

I think a hardware problem is unlikely: the failure affects both
elbling{0,1} boxes [0][1], and only when booting the new Linux kernel
under Xen.

We don't boot-test the updated Linux kernel without Xen, so we can't
tell whether the issue also manifests on bare metal.

Thanks, Roger.

[0] http://logs.test-lab.xenproject.org/osstest/logs/171182/test-amd64-amd64-xl/info.html
[1] http://logs.test-lab.xenproject.org/osstest/logs/171182/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm/info.html


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 09:57:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 09:57:38 +0000
        Thu, 16 Jun 2022 02:57:31 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net
Subject: [ImageBuilder] [PATCH v3] uboot-script-gen: Add DOMU_STATIC_MEM
Date: Thu, 16 Jun 2022 12:56:39 +0300
Message-Id: <20220616095639.305510-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new config parameter for configuring a dom0less VM with statically
allocated memory:

DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"

The parameter specifies the host physical address regions to be statically
allocated to the VM. Each region is defined by its start address and size.

For instance,
DOMU_STATIC_MEM[0]="0x30000000 0x10000000 0x50000000 0x20000000"
indicates that the host memory regions [0x30000000, 0x40000000) and
[0x50000000, 0x70000000) are statically allocated to the first dom0less VM.

Since it is currently not possible for a VM to have a mix of statically
and non-statically allocated memory regions, when DOMU_STATIC_MEM is
specified the VM's memory size is adjusted to equal the amount of
statically allocated memory.
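As an illustration (a standalone bash sketch, not part of the patch, using a
made-up region list), the memory size derived from a DOMU_STATIC_MEM value is
the sum of every second token, converted from bytes to KB:

```shell
#!/bin/bash
# Sketch of how uboot-script-gen derives the memory size from a
# DOMU_STATIC_MEM value: sum the size fields (every second token) and
# shift right by 10, since the device tree "memory" property is in KB.
static_mem="0x30000000 0x10000000 0x50000000 0x20000000"  # example regions
array=($static_mem)
memory=0
for (( index=1; index<${#array[@]}; index+=2 ))
do
    (( memory += ${array[$index]} ))
done
(( memory >>= 10 ))
echo "$memory KB"   # 0x10000000 + 0x20000000 bytes -> 786432 KB
```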

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2
- in add_device_tree_static_mem(), replace 'i' with 'val' because variable 'i'
  is already in use as an index
Changes in v3
- fix indentation
- in add_device_tree_static_mem(), declare 'cells' and 'val' as local
  variables so that they cannot accidentally clobber global variables
- add a new function add_device_tree_mem() responsible for setting up
  the memory property in the device tree, as well as for adjusting
  the memory size accordingly when static mem is specified
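For reference, the address splitting done by add_device_tree_static_mem below
(packing each 64-bit value into a high and a low 32-bit device tree cell, to
match the #xen,static-mem-{address,size}-cells = 2 layout) can be exercised in
isolation; the address used here is only an example:

```shell
#!/bin/bash
# Sketch of the 64-bit-to-cells conversion used by add_device_tree_static_mem:
# the high cell is the value shifted right by 32, the low cell is the value
# masked to its lower 32 bits.
val=0x880000000   # example host physical address above the 4 GiB boundary
cell=$(printf "0x%x 0x%x" $((val >> 32)) $((val & ((1 << 32) - 1))))
echo "$cell"   # prints "0x8 0x80000000"
```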

 README.md                |  4 ++++
 scripts/uboot-script-gen | 48 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8ce13f0..876e46d 100644
--- a/README.md
+++ b/README.md
@@ -154,6 +154,10 @@ Where:
   automatically at boot as dom0-less guest. It can still be created
   later from Dom0.
 
+- DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
+  if specified, indicates the host physical address regions
+  [baseaddr, baseaddr + size) to be statically allocated to the VM.
+
 - LINUX is optional but specifies the Linux kernel for when Xen is NOT
   used.  To enable this set any LINUX\_\* variables and do NOT set the
   XEN variable.
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 0adf523..7781714 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -108,6 +108,48 @@ function add_device_tree_passthrough()
     dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
 }
 
+function add_device_tree_mem()
+{
+    local path=$1
+    local memory=$2
+
+    # When the DOMU is configured with static allocation,
+    # the size of DOMU's memory must match the size of DOMU's static memory.
+    if test "${DOMU_STATIC_MEM[$i]}"
+    then
+        local array=(${DOMU_STATIC_MEM[$i]})
+        local index
+
+        memory=0
+        for (( index=1; index<${#array[@]}; index+=2 ))
+        do
+            (( memory += ${array[$index]} ))
+        done
+        # The property "memory" is in KB.
+        (( memory >>= 10 ))
+    fi
+
+    dt_set "$path" "memory" "int" "0 $memory"
+}
+
+function add_device_tree_static_mem()
+{
+    local path=$1
+    local regions=$2
+    local cells=()
+    local val
+
+    dt_set "$path" "#xen,static-mem-address-cells" "hex" "0x2"
+    dt_set "$path" "#xen,static-mem-size-cells" "hex" "0x2"
+
+    for val in ${regions[@]}
+    do
+        cells+=("$(printf "0x%x 0x%x" $(($val >> 32)) $(($val & ((1 << 32) - 1))))")
+    done
+
+    dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
+}
+
 function xen_device_tree_editing()
 {
     dt_set "/chosen" "#address-cells" "hex" "0x2"
@@ -141,8 +183,12 @@ function xen_device_tree_editing()
         dt_set "/chosen/domU$i" "compatible" "str" "xen,domain"
         dt_set "/chosen/domU$i" "#address-cells" "hex" "0x2"
         dt_set "/chosen/domU$i" "#size-cells" "hex" "0x2"
-        dt_set "/chosen/domU$i" "memory" "int" "0 ${DOMU_MEM[$i]}"
+        add_device_tree_mem "/chosen/domU$i" ${DOMU_MEM[$i]}
         dt_set "/chosen/domU$i" "cpus" "int" "${DOMU_VCPUS[$i]}"
+        if test "${DOMU_STATIC_MEM[$i]}"
+        then
+            add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
+        fi
         dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
         add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
         if test "${domU_ramdisk_addr[$i]}"
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 10:14:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 10:14:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350523.576889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1mWU-0004xm-GQ; Thu, 16 Jun 2022 10:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350523.576889; Thu, 16 Jun 2022 10:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1mWU-0004xf-DN; Thu, 16 Jun 2022 10:14:46 +0000
Received: by outflank-mailman (input) for mailman id 350523;
 Thu, 16 Jun 2022 10:14:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aOxQ=WX=citrix.com=prvs=1593354c1=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o1mWS-0004xZ-Md
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 10:14:44 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22d77eb5-ed5d-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 12:14:42 +0200 (CEST)
Received: from mail-dm6nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jun 2022 06:14:40 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5531.namprd03.prod.outlook.com (2603:10b6:806:bd::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Thu, 16 Jun
 2022 10:14:39 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.015; Thu, 16 Jun 2022
 10:14:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22d77eb5-ed5d-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655374482;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=feQVQ2JU0N4Rv8pdmf8AXmzA7E3ChGKbkJQbuq5O0DQ=;
  b=Cj8RdGs8MBAmrR8qkq8UB+WbJN2MikZIZWcnUKi9/LqCj+cl3JAxutmv
   DBPR/udaUMZzoGX+ZdVNTfqrWl0aaTktxlfG6F5Y7MkW/+w1zImfp/k4v
   BmLEV7wjEl2f/S5bUCRIJUweGnmAE14I54XbPMvTcwHmdgZ2dY1CGlx5M
   E=;
X-IronPort-RemoteIP: 104.47.59.171
X-IronPort-MID: 74162023
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,304,1647316800"; 
   d="scan'208";a="74162023"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=efekLzV0Zfnd3XsTIuWxl4+Uw/jxCtz4G52ZC+Sc4wQoM9dI47dTVetFj+THAcb+wWhFSbtHkDivLxww0zilmeZTYAdH2DVG6rRbLIjUJ0IebJRkPxaY4VOzEq0AM80IPYPFM3h6uYM6aNy4SDqbD7uhUGt2xAzNjb7KmiwQHQglrOgozTJb+9Mp9Ss9RJ9fYolQxbNgrnFI9Bwcp4xVV1M22E+Q6c2wjGNHlBrP7q7YbH3CB3LIYlw2WaHIGz0VOEaFAdxR2VPJlpUAgfLEACP1HrbcRNvV1ytlbVraaJcaTLzkJlMinUBeN05/rIlslt47vjFG63KhHjN8fGcrrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jHsPe542jizFaxb9+LIfW92OlsCpT4cYQD8aEMb8+3A=;
 b=jM9ucqa74fp4V3tcE11AFcNFCVIuHPylK/P/5AoCKipWNMgPoQfFJcqLpzOdgj5SOXr+ta+XnKyHQc1ZIJMz9MqciCl9yeQMqXVmQmGvYNux2MbDw/jRxY5JwvQBGWQ9w1ETbPqy655YjJdNzS/0q2jPb9UB6G2WDNxDf4kF3a/khkS7GTsGLLQW2lvet8DwTQ93jtwarQwEo08NbYqFYRlNT4gUEKqStU3nLrBvX8dLLj6J2FHhtYVI+Y5kAHe1UDwnPhxYux1NO1SK37Jas0Bm8AvfWqD3dGVhrV40T/pxZTLjhgcmr8pRiYdS7LYu47Q8QCpy1alLkkhugFHn1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jHsPe542jizFaxb9+LIfW92OlsCpT4cYQD8aEMb8+3A=;
 b=UVlRLc71pnz0sI376HiEjuROydTC+1eU+6ibaMe1WAd1QeVlFYXQjoHrueC30w2Xp04uuMxq4io8Kt8RhCs0EGGilaettDQvHPWfnkbbQeqEbsMI5AD4L5NoJzwDpD5wnqYJl4XZ5RsfQA8onn8zh7llGlSW6ypY6Srz6ifAcao=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 16 Jun 2022 12:14:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: osstest service owner <osstest-admin@xenproject.org>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [xen-unstable test] 171181: regressions - FAIL
Message-ID: <YqsCi4r1IYYl4e+x@Air-de-Roger>
References: <osstest-171181-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <osstest-171181-mainreport@xen.org>
X-ClientProxiedBy: LO2P123CA0065.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: caa4daea-e764-4ec7-89a0-08da4f810571
X-MS-TrafficTypeDiagnostic: SA0PR03MB5531:EE_
X-Microsoft-Antispam-PRVS:
	<SA0PR03MB5531D979544BF12D96215D568FAC9@SA0PR03MB5531.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: caa4daea-e764-4ec7-89a0-08da4f810571
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2022 10:14:38.8454
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SftbLdwtv1XYD6d/kZQb8cxQOZgb5HBx5hIJvlQD3GhkyIp5/Wcz+UZjDHRmdlpwFy8OmEaniMh2tlyAqhvb4A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5531

On Thu, Jun 16, 2022 at 02:34:00AM +0000, osstest service owner wrote:
> flight 171181 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/171181/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64                   6 xen-build                fail REGR. vs. 171174
>  build-i386                    6 xen-build                fail REGR. vs. 171174

It seems the issues with xenbits being overloaded have caused those
builds to fail:

fatal: remote error: git-cache-proxy: git remote died with error exit code 1 // Fetching origin // fatal: unable to access 'http://xenbits.xen.org/git-http/ovmf.git/': Failed to connect to xenbits.xen.org port 80: Connection timed out // error: Could not fetch origin

I expect the next run will be fine.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 10:31:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 10:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350534.576905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1mmy-0007la-47; Thu, 16 Jun 2022 10:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350534.576905; Thu, 16 Jun 2022 10:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1mmy-0007lT-1T; Thu, 16 Jun 2022 10:31:48 +0000
Received: by outflank-mailman (input) for mailman id 350534;
 Thu, 16 Jun 2022 10:31:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1mmw-0007lJ-Tj; Thu, 16 Jun 2022 10:31:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1mmw-0008B7-R9; Thu, 16 Jun 2022 10:31:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1mmw-0000F3-BN; Thu, 16 Jun 2022 10:31:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1mmw-0001KV-Ax; Thu, 16 Jun 2022 10:31:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GDsQTEfGlVxzgDpAQbuf2Yrap7eRmaN8WWjyAteZKuY=; b=f3w4IwaND7i5xmUGWpmNN51xSa
	iDbR327xZdC08MDSTOYNSuA+MuBgikZZ0o40xFavYYpIxPl3sU/+iA4Eb2qp7g+WCjzp5YVoEmS49
	+N1PTvaLs8HR8/jJLuWCpO5GsDuL0jeiRCYW8ofMk4qkZu5W278lE9Q6zUMwP4VQPDrA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171192-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171192: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=05e57cc9ced67d2cd633c2bdcf70b5e1352bf635
X-Osstest-Versions-That:
    ovmf=6676162f64ad39949ed44f17ce40e5c49ab33e31
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 10:31:46 +0000

flight 171192 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171192/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 05e57cc9ced67d2cd633c2bdcf70b5e1352bf635
baseline version:
 ovmf                 6676162f64ad39949ed44f17ce40e5c49ab33e31

Last test of basis   171158  2022-06-14 03:11:59 Z    2 days
Testing same since   171192  2022-06-16 08:10:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6676162f64..05e57cc9ce  05e57cc9ced67d2cd633c2bdcf70b5e1352bf635 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 10:56:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 10:56:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350544.576917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1nAu-000299-4x; Thu, 16 Jun 2022 10:56:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350544.576917; Thu, 16 Jun 2022 10:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1nAu-000292-22; Thu, 16 Jun 2022 10:56:32 +0000
Received: by outflank-mailman (input) for mailman id 350544;
 Thu, 16 Jun 2022 10:56:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gudg=WX=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1o1nAs-00028w-B5
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 10:56:30 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f81bbeec-ed62-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 12:56:27 +0200 (CEST)
Received: from DB6PR07CA0188.eurprd07.prod.outlook.com (2603:10a6:6:42::18) by
 AM6PR08MB3542.eurprd08.prod.outlook.com (2603:10a6:20b:4b::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5332.14; Thu, 16 Jun 2022 10:56:23 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::12) by DB6PR07CA0188.outlook.office365.com
 (2603:10a6:6:42::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.9 via Frontend
 Transport; Thu, 16 Jun 2022 10:56:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Thu, 16 Jun 2022 10:56:23 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Thu, 16 Jun 2022 10:56:23 +0000
Received: from 9868e40b23ce.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 76F92342-7DCC-49F8-9C52-AECFFFAAC730.1; 
 Thu, 16 Jun 2022 10:55:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9868e40b23ce.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 16 Jun 2022 10:55:59 +0000
Received: from DB7PR08MB2986.eurprd08.prod.outlook.com (2603:10a6:5:1e::14) by
 AM6PR08MB3335.eurprd08.prod.outlook.com (2603:10a6:209:4c::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.13; Thu, 16 Jun 2022 10:55:56 +0000
Received: from DB7PR08MB2986.eurprd08.prod.outlook.com
 ([fe80::8cfc:33ec:e3ef:ebeb]) by DB7PR08MB2986.eurprd08.prod.outlook.com
 ([fe80::8cfc:33ec:e3ef:ebeb%3]) with mapi id 15.20.5332.020; Thu, 16 Jun 2022
 10:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f81bbeec-ed62-11ec-bd2c-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=lo+hmxwralDP4lF54ah1tH9VAQvhmZX4+dCbG77nGzF9DZklckMk+UWWBKDVFW4rR5YML1EaLrJfTkZJ7m2+6WfGBMALPzfvjnyGNYjOKfWWEWWYlzzUtmdhYWWvKoXnBrJyVEZYqmajGxdXl8mHagaAeavc3rumI26Fcq3tjU1lVq64jPIkieDDuozIPyu/uvtL7ac8hlboUotQZ7P8PE04kb8gtKI9skesypg1uiUgYbPva81dl5IAa2esEwzXC3ZYSopybV237ieivRhJ/pW0Ag/HSNgN+uZ6hdFl8ORlCjSgxNN54nPWTfkfUsYrJfd8/WKKa+g1znpQar8lzQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OnT+k9wLLW9bNIoLd/BEczpkgmZl1ZO94X5qi6ahdWc=;
 b=mZjtEI+i2YlPFEPAF9e/IRLi4uEJWd0YvnCrunbHNGIrMNpPQ06vzJxGtLQh/vl8HNM2b/CcUKiSGxBaKrQlq4Likz7sz0nX0dYTX72wl9rUqW2H7EvjNBSjs+G4qN3SUzEjrSKqmn+5OUyug8pDQv8zkx6Y7PjGD5XYPwTIa0ekaqxmHpySMOS6fZlyulPERwyieVHJGxgIcSYMeqlLYQr5Bxx6RkOllfviI83j1J5UnJrp2yFRUgQJYKsgg55N/JsD+7dHZ+LCL8muwUGKwpkIpKGVj0myw2gE8e/kH0K5ihpIXYvZyi+uNivgdoCciDzK0mJTbgDf8SVIp0BH3A==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnT+k9wLLW9bNIoLd/BEczpkgmZl1ZO94X5qi6ahdWc=;
 b=hrJnzXsXU7HD1mdg8d0xC03LvnsWTQtKVBTFMT+Hq2IF/t0wMei70LfgtJZOW4rCIt6j4P0/ipTx+EjG/tGOnt0BJReylS1YceGctMa9d/raWh02VxEQmbpslpFeLfrZ0LUfpXu4cKL19JWXDtVNtbgVPRqTA39sgsupeqNsDC4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Rahul Singh <Rahul.Singh@arm.com>
To: Mykyta Poturai <mykyta.poturai@gmail.com>
CC: Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Topic: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Index: AQHYWlIekEFYa4Z+ZkOKg6prrRxnv60EB7eAgALvxoCABl1EgIBEoW2AgAA0fAA=
Date: Thu, 16 Jun 2022 10:55:56 +0000
Message-ID: <029EEEE1-69E1-42A9-90D3-BEC18CD5B7BC@arm.com>
References: <b6af8c10-9331-eec8-9a77-cd192829a6f2@xen.org>
 <20220616074805.538720-1-mykyta_poturai@epam.com>
In-Reply-To: <20220616074805.538720-1-mykyta_poturai@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 5c420485-6993-4335-2b6c-08da4f86da69
x-ms-traffictypediagnostic:
	AM6PR08MB3335:EE_|DBAEUR03FT026:EE_|AM6PR08MB3542:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F1018AC48961434BA06F11A557F2B1A2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3335
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b2b030e9-5401-485b-120f-08da4f86ca46
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2022 10:56:23.4883
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c420485-6993-4335-2b6c-08da4f86da69
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3542

Hi Mykyta,

> On 16 Jun 2022, at 8:48 am, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
> 
> Hi Julien, Rahul,
> I've encountered a similar problem with an IMX8 GPU recently. It wasn't probing
> properly after the domain reboot. After some digging, I came to the same
> solution as Rahul and found this thread. I also encountered the occasional
> "Unexpected global fault, this could be serious" error message when destroying
> a domain with an actively-working GPU.
> 
>> Hmmmm.... Looking at the code, arm_smmu_alloc_smes() doesn't seem to use
>> the domain information. So why would it need to be done every time it is assigned?
> Indeed, after removing the arm_smmu_master_free_smes() call, both the reboot and
> global fault issues are gone. If I understand correctly, device removal is not yet
> supported, so I can't find a proper place for the arm_smmu_master_free_smes() call.
> Should we remove the function completely, leave it commented out for later, or
> do something else?
> 
> Rahul, are you still working on this or could I send my patch?

Yes, I have this on my to-do list but I was busy with other work and it got delayed.

I created another solution for this issue, in which we don't need to call
arm_smmu_master_free_smes() in arm_smmu_detach_dev(); instead we can configure the
S2CR value to the fault type in the detach function.

The detach function will call a new function, arm_smmu_domain_remove_master(), that
reverts the changes done by arm_smmu_domain_add_master() in the attach function.

I don't have any board to test the patch. If it is okay, could you please test the
patch and let me know the result?

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 69511683b4..da3adf8e7f 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1598,21 +1598,6 @@ out_err:
        return ret;
 }
 
-static void arm_smmu_master_free_smes(struct arm_smmu_master_cfg *cfg)
-{
-       struct arm_smmu_device *smmu = cfg->smmu;
-       int i, idx;
-       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
-
-       spin_lock(&smmu->stream_map_lock);
-       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
-               if (arm_smmu_free_sme(smmu, idx))
-                       arm_smmu_write_sme(smmu, idx);
-               cfg->smendx[i] = INVALID_SMENDX;
-       }
-       spin_unlock(&smmu->stream_map_lock);
-}
-
 static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
                                      struct arm_smmu_master_cfg *cfg)
 {
@@ -1635,6 +1620,20 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
        return 0;
 }
 
+static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
+                                          struct arm_smmu_master_cfg *cfg)
+{
+       struct arm_smmu_device *smmu = smmu_domain->smmu;
+       struct arm_smmu_s2cr *s2cr = smmu->s2crs;
+       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
+       int i, idx;
+
+       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
+               s2cr[idx] = s2cr_init_val;
+               arm_smmu_write_s2cr(smmu, idx);
+       }
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
        int ret;
@@ -1684,10 +1683,11 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 
 static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
 {
+       struct arm_smmu_domain *smmu_domain = domain->priv;
        struct arm_smmu_master_cfg *cfg = find_smmu_master_cfg(dev);
 
        if (cfg)
-               arm_smmu_master_free_smes(cfg);
+               arm_smmu_domain_remove_master(smmu_domain, cfg);
 
 }

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 11:17:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 11:17:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350553.576932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1nVU-0004yr-0Y; Thu, 16 Jun 2022 11:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350553.576932; Thu, 16 Jun 2022 11:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1nVT-0004yk-Rj; Thu, 16 Jun 2022 11:17:47 +0000
Received: by outflank-mailman (input) for mailman id 350553;
 Thu, 16 Jun 2022 11:17:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aOxQ=WX=citrix.com=prvs=1593354c1=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o1nVS-0004ye-B1
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 11:17:46 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef0da7b9-ed65-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 13:17:43 +0200 (CEST)
Received: from mail-dm6nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jun 2022 07:17:36 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5403.namprd03.prod.outlook.com (2603:10b6:806:b4::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Thu, 16 Jun
 2022 11:17:34 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.015; Thu, 16 Jun 2022
 11:17:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef0da7b9-ed65-11ec-bd2c-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.59.170
X-IronPort-MID: 73597070
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,305,1647316800"; 
   d="scan'208";a="73597070"
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 16 Jun 2022 13:17:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"Beulich, Jan" <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>,
	"Qiang, Chenyi" <chenyi.qiang@intel.com>
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Message-ID: <YqsRS8jRvAL2przG@Air-de-Roger>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
 <4f2c4d5b-dab8-c9d2-f4c2-b8cd44011630@intel.com>
 <YqHGzuJ+D0WjaW+6@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YqHGzuJ+D0WjaW+6@Air-de-Roger>
X-ClientProxiedBy: LO2P123CA0069.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::33) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c3a7ad0f-3150-4bca-a5df-08da4f89cfed
X-MS-TrafficTypeDiagnostic: SA0PR03MB5403:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c3a7ad0f-3150-4bca-a5df-08da4f89cfed
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jun 2022 11:17:34.5335
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n6gaLU6Eedf6xwiPtVRmF1ubojtm2tV76kTWbtwAppxCHyvmSvRS3dUrGMxo7z9rtSzou4DFLgBZK/TMJB3prQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5403

Ping?

On Thu, Jun 09, 2022 at 12:09:18PM +0200, Roger Pau Monné wrote:
> On Thu, Jun 09, 2022 at 03:39:33PM +0800, Xiaoyao Li wrote:
> > On 6/9/2022 3:04 PM, Tian, Kevin wrote:
> > > +Chenyi/Xiaoyao who worked on the KVM support. Presumably
> > > similar opens have been discussed in KVM hence they have the
> > > right background to comment here.
> > > 
> > > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > > Sent: Thursday, May 26, 2022 7:12 PM
> > > > 
> > > > Under certain conditions guests can get the CPU stuck in an unbounded
> > > > loop without the possibility of an interrupt window to occur on
> > > > instruction boundary.  This was the case with the scenarios described
> > > > in XSA-156.
> > > > 
> > > > Make use of the Notify VM Exit mechanism, that will trigger a VM Exit
> > > > if no interrupt window occurs for a specified amount of time.  Note
> > > > that using the Notify VM Exit avoids having to trap #AC and #DB
> > > > exceptions, as Xen is guaranteed to get a VM Exit even if the guest
> > > > puts the CPU in a loop without an interrupt window, as such disable
> > > > the intercepts if the feature is available and enabled.
> > > > 
> > > > Setting the notify VM exit window to 0 is safe because there's a
> > > > threshold added by the hardware in order to have a sane window value.
> > > > 
> > > > Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > > ---
> > > > Changes since v1:
> > > >   - Properly update debug state when using notify VM exit.
> > > >   - Reword commit message.
> > > > ---
> > > > This change enables the notify VM exit by default, KVM however doesn't
> > > > seem to enable it by default, and there's the following note in the
> > > > commit message:
> > > > 
> > > > "- There's a possibility, however small, that a notify VM exit happens
> > > >     with VM_CONTEXT_INVALID set in exit qualification. In this case, the
> > > >     vcpu can no longer run. To avoid killing a well-behaved guest, set
> > > >     notify window as -1 to disable this feature by default."
> > > > 
> > > > It's not obviously clear to me whether the comment was meant to be:
> > > > "There's a possibility, however small, that a notify VM exit _wrongly_
> > > > happens with VM_CONTEXT_INVALID".
> > > > 
> > > > It's also not clear whether such wrong hardware behavior only affects
> > > > a specific set of hardware,
> > 
> > I'm not sure what you mean for a specific set of hardware.
> > 
> > We make it default off in KVM just in case future silicon wrongly sets the
> > VM_CONTEXT_INVALID bit, because we made the policy that the VM cannot continue
> > running in that case.
> > 
> > In the worst case, if some future silicon happens to have this kind of silly
> > bug, then all existing product kernels suffer the possibility of their
> > VMs being killed because the feature is on by default.
> 
> That's IMO a weird policy.  If there's such behavior in any hardware
> platform I would assume Intel would issue an errata, and then we would
> just avoid using the feature on affected hardware (like we do with
> other hardware features when they have errata).
> 
> If we applied the same logic to all new Intel features we won't use
> any of them.  At least in Xen there are already combinations of vmexit
> conditions that will lead to the guest being killed.
> 
> > > > in a way that we could avoid enabling
> > > > notify VM exit there.
> > > > 
> > > > There's a discussion in one of the Linux patches that 128K might be
> > > > the safer value in order to prevent false positives, but I have no
> > > > formal confirmation about this.  Maybe our Intel maintainers can
> > > > provide some more feedback on a suitable notify VM exit window
> > > > value.
> > 
> > The 128K is the internal threshold for SPR silicon. The internal threshold
> > is tuned by Intel for each silicon, to make sure it's big enough to avoid
> > false positives even when the user sets vmcs.notify_window to 0.
> > 
> > However, it varies for different processor generations.
> > 
> > Which value is suitable is hard to say; it depends on how soon the VMM
> > wants to intercept the VM. Anyway, Intel ensures that even a value of 0 is safe.
> 
> Ideally we need a fixed default value that's guaranteed to work on all
> possible hardware that supports the feature, or alternatively a way to
> calculate a sane default window based on the hardware platform.
> 
> Could we get some wording added to the ISE regarding 0 being a
> suitable default value to use because hardware will add a threshold
> internally to make the value safe?
> 
> Thanks, Roger.
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 11:31:26 2022
Date: Thu, 16 Jun 2022 13:31:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YqsUfH763oSchRdW@Air-de-Roger>
References: <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
 <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>

On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
> On 14.06.2022 11:38, Roger Pau Monné wrote:
> > On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
> >> On 14.06.2022 10:32, Roger Pau Monné wrote:
> >>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> >>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>>>>>>>> likely important to have all the output if the boot fails without
> >>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>>>>>>>> other guests).
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>>>>>>>
> >>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>>>>>>>> Dom0's kernel messages?
> >>>>>>>>>>>
> >>>>>>>>>>> I normally use sync_console on all the boxes I do dev work on, so
> >>>>>>>>>>> this request is something that came up internally.
> >>>>>>>>>>>
> >>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
> >>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>>>>>> triggered) output shouldn't be rate limited either.
> >>>>>>>>>>
> >>>>>>>>>> Which would raise the question of why we have log levels for non-guest
> >>>>>>>>>> messages.
> >>>>>>>>>
> >>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
> >>>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
> >>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>>>>>> since users might want to filter out DEBUG non-guest messages, for
> >>>>>>>>> example.
> >>>>>>>>
> >>>>>>>> It was me who was confused, because of the two log-everything variants
> >>>>>>>> we have (console and serial). You're right that your change is unrelated
> >>>>>>>> to log levels. However, when there are e.g. many warnings or when an
> >>>>>>>> admin has lowered the log level, what you (would) do is effectively
> >>>>>>>> force sync_console mode transiently (for a subset of messages, but
> >>>>>>>> that's secondary, especially because the "forced" output would still
> >>>>>>>> be waiting for earlier output to make it out).
> >>>>>>>
> >>>>>>> Right, it would have to wait for any previous output on the buffer to
> >>>>>>> go out first.  In any case we can guarantee that no more output will
> >>>>>>> be added to the buffer while Xen waits for it to be flushed.
> >>>>>>>
> >>>>>>> So for the hardware domain it might make sense to wait for the TX
> >>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
> >>>>>>> the hypercall.  That however could cause issues if guests manage to
> >>>>>>> keep filling the buffer while the hardware domain is being preempted.
> >>>>>>>
> >>>>>>> Alternatively we could always reserve half of the buffer for the
> >>>>>>> hardware domain, and allow it to be preempted while waiting for space
> >>>>>>> (since it's guaranteed non hardware domains won't be able to steal the
> >>>>>>> allocation from the hardware domain).
> >>>>>>
> >>>>>> Getting complicated it seems. I have to admit that I wonder whether we
> >>>>>> wouldn't be better off leaving the current logic as is.
> >>>>>
> >>>>> Another possible solution (more like a band aid) is to increase the
> >>>>> buffer size from 4 pages to 8 or 16.  That would likely allow to cope
> >>>>> fine with the high throughput of boot messages.
> >>>>
> >>>> You mean the buffer whose size is controlled by serial_tx_buffer?
> >>>
> >>> Yes.
> >>>
> >>>> On
> >>>> large systems one may want to simply make use of the command line
> >>>> option then; I don't think the built-in default needs changing. Or
> >>>> if so, then perhaps not statically at build time, but taking into
> >>>> account system properties (like CPU count).
> >>>
> >>> So how about we use:
> >>>
> >>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
> >>
> >> That would _reduce_ size on small systems, wouldn't it? Originally
> >> you were after increasing the default size. But if you had meant
> >> max(), then I'd fear on very large systems this may grow a little
> >> too large.
> > 
> > See previous followup about my mistake of using min() instead of
> > max().
> > 
> > On a system with 512 CPUs that would be 512KB. I don't think that's a
> > lot of memory, especially since I would expect a system with 512 CPUs
> > to have a matching amount of memory.
> > 
> > It's true however that I very much doubt we would fill a 512K buffer,
> > so limiting to 64K might be a sensible starting point?
> 
> Yeah, 64k could be a value to compromise on. What total size of
> output have you observed to trigger the making of this patch? Xen
> alone doesn't even manage to fill 16k on most of my systems ...

I've tried on one of the affected systems now; it's an 8-CPU Kaby Lake
at 3.5GHz, and it manages to fill the buffer while booting Linux.

My proposed formula won't fix this use case, so what about just
bumping the buffer to 32K by default, which does fix it?

Or alternatively use the proposed formula, but clamp the result to the
[32K,64K] range.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 11:57:46 2022
Message-ID: <3c64db19-00fe-05bf-ac4d-6ef4201b6aa0@intel.com>
Date: Thu, 16 Jun 2022 19:57:17 +0800
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 Thunderbird/91.10.0
Subject: Re: [PATCH v2 3/3] x86/vmx: implement Notify VM Exit
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Cooper, Andrew" <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Beulich, Jan"
 <JBeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "Nakajima, Jun" <jun.nakajima@intel.com>,
 "Qiang, Chenyi" <chenyi.qiang@intel.com>
References: <20220526111157.24479-1-roger.pau@citrix.com>
 <20220526111157.24479-4-roger.pau@citrix.com>
 <BN9PR11MB5276B16CB69514120B7E0E318CA79@BN9PR11MB5276.namprd11.prod.outlook.com>
 <4f2c4d5b-dab8-c9d2-f4c2-b8cd44011630@intel.com>
 <YqHGzuJ+D0WjaW+6@Air-de-Roger>
From: Xiaoyao Li <xiaoyao.li@intel.com>
In-Reply-To: <YqHGzuJ+D0WjaW+6@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/9/2022 6:09 PM, Roger Pau Monné wrote:
> On Thu, Jun 09, 2022 at 03:39:33PM +0800, Xiaoyao Li wrote:
>> On 6/9/2022 3:04 PM, Tian, Kevin wrote:
>>> +Chenyi/Xiaoyao who worked on the KVM support. Presumably
>>> similar opens have been discussed in KVM hence they have the
>>> right background to comment here.
>>>
>>>> From: Roger Pau Monne <roger.pau@citrix.com>
>>>> Sent: Thursday, May 26, 2022 7:12 PM
>>>>
>>>> Under certain conditions guests can get the CPU stuck in an unbounded
>>>> loop without the possibility of an interrupt window to occur on
>>>> instruction boundary.  This was the case with the scenarios described
>>>> in XSA-156.
>>>>
>>>> Make use of the Notify VM Exit mechanism, that will trigger a VM Exit
>>>> if no interrupt window occurs for a specified amount of time.  Note
>>>> that using the Notify VM Exit avoids having to trap #AC and #DB
>>>> exceptions, as Xen is guaranteed to get a VM Exit even if the guest
>>>> puts the CPU in a loop without an interrupt window, as such disable
>>>> the intercepts if the feature is available and enabled.
>>>>
>>>> Setting the notify VM exit window to 0 is safe because there's a
>>>> threshold added by the hardware in order to have a sane window value.
>>>>
>>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>> ---
>>>> Changes since v1:
>>>>    - Properly update debug state when using notify VM exit.
>>>>    - Reword commit message.
>>>> ---
>>>> This change enables the notify VM exit by default, KVM however doesn't
>>>> seem to enable it by default, and there's the following note in the
>>>> commit message:
>>>>
>>>> "- There's a possibility, however small, that a notify VM exit happens
>>>>      with VM_CONTEXT_INVALID set in exit qualification. In this case, the
>>>>      vcpu can no longer run. To avoid killing a well-behaved guest, set
>>>>      notify window as -1 to disable this feature by default."
>>>>
>>>> It's not obviously clear to me whether the comment was meant to be:
>>>> "There's a possibility, however small, that a notify VM exit _wrongly_
>>>> happens with VM_CONTEXT_INVALID".
>>>>
>>>> It's also not clear whether such wrong hardware behavior only affects
>>>> a specific set of hardware,
>>
>> I'm not sure what you mean for a specific set of hardware.
>>
>> We made it default off in KVM just in case future silicon wrongly sets the
>> VM_CONTEXT_INVALID bit, because we made the policy that the VM cannot continue
>> running in that case.
>>
>> In the worst case, if some future silicon happens to have this kind of silly
>> bug, then all existing product kernels suffer the possibility of their
>> VMs being killed because the feature is on by default.
> 
> That's IMO a weird policy.  If there's such behavior in any hardware
> platform I would assume Intel would issue an errata, and then we would
> just avoid using the feature on affected hardware (like we do with
> other hardware features when they have erratas).
> 
> If we applied the same logic to all new Intel features we won't use
> any of them.  At least in Xen there are already combinations of vmexit
> conditions that will lead to the guest being killed.

The reason is that currently nothing sets the VM_CONTEXT_INVALID bit, so 
people in the KVM community are cautious about the uncertainty. No one 
knows in what cases VM_CONTEXT_INVALID will be set in the future.

Anyway, that's only a worry from the KVM reviewers.

>>>> in a way that we could avoid enabling
>>>> notify VM exit there.
>>>>
>>>> There's a discussion in one of the Linux patches that 128K might be
>>>> the safer value in order to prevent false positives, but I have no
>>>> formal confirmation about this.  Maybe our Intel maintainers can
>>>> provide some more feedback on a suitable notify VM exit window
>>>> value.
>>
>> The 128k is the internal threshold for SPR silicon. The internal threshold
>> is tuned by Intel for each silicon, to make sure it's big enough to avoid
>> false positives even when the user sets vmcs.notify_window to 0.
>>
>> However, it varies for different processor generations.
>>
>> Which value is suitable is hard to say; it depends on how soon the VMM
>> wants to intercept the VM. Anyway, Intel ensures that even the value 0 is safe.
> 
> Ideally we need a fixed default value that's guaranteed to work on all
> possible hardware that supports the feature, or alternatively a way to
> calculate a sane default window based on the hardware platform.
> 
> Could we get some wording added to the ISE regarding 0 being a
> suitable default value to use because hardware will add a threshold
> internally to make the value safe?

We will work with internal architects on this.

> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 12:41:42 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171183-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171183: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9ac873a46963098441be920ef7a2eaf244a3352d
X-Osstest-Versions-That:
    qemuu=8e6c70b9d4a1b1f3011805947925cfdb31642f7f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 12:41:33 +0000

flight 171183 qemu-mainline real [real]
flight 171199 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171183/
http://logs.test-lab.xenproject.org/osstest/logs/171199/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 171199-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 171171
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop       fail blocked in 171171
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 171199 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 171199 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171171
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171171
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171171
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171171
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171171
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171171
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                9ac873a46963098441be920ef7a2eaf244a3352d
baseline version:
 qemuu                8e6c70b9d4a1b1f3011805947925cfdb31642f7f

Last test of basis   171171  2022-06-15 00:07:18 Z    1 days
Testing same since   171183  2022-06-15 23:40:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Elena Ufimtseva <elena.ufimtseva@oracle.com>
  Jagannathan Raman <jag.raman@oracle.com>
  John G Johnson <john.g.johnson@oracle.com>
  Richard Henderson <richard.henderson@linaro.org>
  Sam Li <faithilikerun@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8e6c70b9d4..9ac873a469  9ac873a46963098441be920ef7a2eaf244a3352d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 13:15:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 13:15:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350610.577051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1pLN-0004SD-Hm; Thu, 16 Jun 2022 13:15:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350610.577051; Thu, 16 Jun 2022 13:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1pLN-0004S6-E0; Thu, 16 Jun 2022 13:15:29 +0000
Received: by outflank-mailman (input) for mailman id 350610;
 Thu, 16 Jun 2022 13:15:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DUN=WX=citrix.com=prvs=1590248a2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o1pLK-0004S0-QA
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 13:15:27 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60a303d8-ed76-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 15:15:24 +0200 (CEST)
Received: from mail-mw2nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jun 2022 09:15:09 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5412.namprd03.prod.outlook.com (2603:10b6:208:291::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Thu, 16 Jun
 2022 13:15:07 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::50a2:499b:fa53:b1eb%5]) with mapi id 15.20.5332.022; Thu, 16 Jun 2022
 13:15:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60a303d8-ed76-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655385324;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Osk/NGbZYRC8vKOnyqsekoVPNNLahBQA+xrGSIBX7eg=;
  b=gWkX5rLZHYJ3okfMLpkvc4Zj/NdWCdEDttUwrkq0qLubEHbwAUaSKeDx
   i08J23Pw698uXf9cK5l7q7TgBR56VJXUhaMeR40lhWejXpuKH7SEt6jzx
   QkgKGnlasOiIowUKBCUEL3xBgXj6EA1iPW5IOPW9lGIPRcvgVrahd6tgG
   s=;
X-IronPort-RemoteIP: 104.47.55.106
X-IronPort-MID: 73099298
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.91,305,1647316800"; 
   d="scan'208";a="73099298"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bVOerus9TKDPXhnuJSJEBFk30CIaK0BZKJR1kkdrB3LSa9nnuLyDUfU2L6K2zPj/OQhrS/HmmBSjtgJVKD5zYHledSGOwH6IJbgfyigCp0ruKDYvZaSM6yhRgGNc500lKwZLV66mFPzjaF5IV16C4WDb4qhi5Te3cFl2yi17qWi10t6FtrlSvAZ14RNpk7xch9dnvh650nTV59ldCFMNMXms/H7tALsp+LmI1aFuleX2PHGxuYTyFcu4GA1FYVtQAdhlN/F5KymySL+kwuVDDNJApjgucOZ0MJyXEQ8K/mQDBANJvr7afyZroBVh+o/mjG6q0CC2Dx+05qKa8a1Yiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Osk/NGbZYRC8vKOnyqsekoVPNNLahBQA+xrGSIBX7eg=;
 b=hu7b8beT3o5akIG3WPXE3Q+gWxnXiNgi7zyXj/I41CXHVydzCWNqsG3+cQt+6F5sMJxFKFL8unGO/gZtGdn1B/jrfTnlts0PEcko8z8uVIO4lNinTpttmb7ui0x5Cl37/dyoU1ftNqOd+iOLfJhNqwuI/bc65Ko+EvATM+mP9yluX1zxFLtc7WXgJt1KURAFrQBcec/usK6lWxABJuaHZuSjArzQexlbVwPPXo+ZyVVy2kr9xM5TUa85cAVjkcEESncqfKh5W3YshxZaORfoa7G9xxLPPgjk4DtOD/tq6AeJ54x5UGOnLKgFHXmJArfWqWKjwByw0EpWNoCk0s+foA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Osk/NGbZYRC8vKOnyqsekoVPNNLahBQA+xrGSIBX7eg=;
 b=FS7ETvyWZ0K0quTNoT7pvZN4UrgCCzJCzLGzn992EBcjDwnHtbxVUikTdx3EpsJ+iH/VvLDx4EpRvKHLTMuq7F7lvt1fUJsJqKdmaD5okU1OC5nKn0PrNGnSbJom4snDrJ2GaClmfZUYDp3QFJTi2Ju0OwvDL4Y/92NEpx8rodE=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH] x86emul/test: improve failure location identification for
 FMA sub-test
Thread-Topic: [PATCH] x86emul/test: improve failure location identification
 for FMA sub-test
Thread-Index: AQHYgAhN5vfsYbvhOUu6F7yjqbmrJK1SBiAA
Date: Thu, 16 Jun 2022 13:15:06 +0000
Message-ID: <4193f0db-b75d-0eab-209c-49c6db787e3d@citrix.com>
References: <ce7d7acf-9ed2-9dcc-34ad-f9b1e3f77d4f@suse.com>
In-Reply-To: <ce7d7acf-9ed2-9dcc-34ad-f9b1e3f77d4f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 75e21b63-40fb-4c6e-a02c-08da4f9a3b86
x-ms-traffictypediagnostic: BLAPR03MB5412:EE_
x-microsoft-antispam-prvs:
 <BLAPR03MB5412DDF05AE5B70689F73D44BAAC9@BLAPR03MB5412.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0AB87D41FD44414B91F7143D1D8EA129@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 75e21b63-40fb-4c6e-a02c-08da4f9a3b86
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jun 2022 13:15:06.8270
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kwtvqDLS6T1gIG7NRpjB9q6m9NGcmeqdycxR47esISc6RpBwT2sJlSRGjmbX1QaEosPK/SOEwGm1YP3CCh+RP3R2srltDGZhs9gNxVD2JTY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5412

On 14/06/2022 17:03, Jan Beulich wrote:
> When some FMA set of insns is included in the base instruction set (XOP,
> AVX512F, and AVX512-FP16 at present), simd_test() simply invokes
> fma_test(), negating its return value. In case of a failure this would
> yield a value close to 4G, which doesn't lend itself to easy
> identification of the failing test case. Recognize the case in
> simd_check_regs() and emit alternative output identifying FMA.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 13:32:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 13:32:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350619.577062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1pbt-0007F8-2A; Thu, 16 Jun 2022 13:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350619.577062; Thu, 16 Jun 2022 13:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1pbs-0007F1-VW; Thu, 16 Jun 2022 13:32:32 +0000
Received: by outflank-mailman (input) for mailman id 350619;
 Thu, 16 Jun 2022 13:32:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yWHJ=WX=citrix.com=prvs=15945cc1a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o1pbr-0007Ev-Ed
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 13:32:31 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c40e3721-ed78-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 15:32:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c40e3721-ed78-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655386350;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=SC4qIvbuwA7oS8jkKI4cMaKo4B/88ao12Qc7MfCExZA=;
  b=U8FFvxkg2RLXpFBlen++E36OWHWSyZjamUzEI4sbhIMPrdWArb8RNFdt
   V4ZXxTiqpxX5kfwM9ctYyQvZisRcXpPAu1uSbAHMfE7ZOtq1BsPz8yvxU
   L6mjCMhTJm5XEqVFJ0JuDD6BprS1PRideG8IuETU3V66sqo6cBKwDGTDG
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73754237
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:rzyrgKLIrRnDWjqUFE+RCpUlxSXFcZb7ZxGr2PjKsXjdYENS1GQDn
 2obCzqAbP2MN2v2c4h3YIW/o00CusCGn95iSQZlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA14+IMsdoUg7wbRh3Nc22YLR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 PVWkMSvSygxBJHrn7waSEBfFAZ4P4QTrdcrIVDn2SCS50jPcn+qyPRyFkAme4Yf/46bA0kXq
 6ZecmpUKEne2aTmm9pXScE17ignBMDtIIMYvGAm1TzDBOwqaZvCX7/L9ZlT2zJYasVmQq2BO
 pZDMmUHgBLoJB9SZmcLAbUEhqSBgXv1YgNnshWFqv9ii4TU5FMoi+W8WDbPQfSGTNtYtlyVr
 WXH+yL+GB5yHN6VxCeB83msrvTShi69U4UXfJW66/prjVu71mEVThoMWjOTrP20jEf4RtxeL
 lAP9zQnha8o/UevQ5/2WBjQiGWfohcWVt5UEus7wAKA0KzZ50CeHGdsZiFFQMwrsokxXzNC/
 k+EmZblCCJitJWRSGmB7fGEoDWqIy8XIGQeIygeQmMt4db5p5oopgnSVdslG6mw5vXuEDTtz
 jTMsCg/jbwOidIj2qOguFTWhDTqoYLGJjPZ/S2OADjjtFkgItf4Ocr4sjA38MqsMq65VXzZo
 3org/Kiy+dWCorUkyuqRuckSeTBC+m+DNHMvbJ+N8B/qmn3oiH5I9w4DCJWfxkwbJtdEdP9S
 AqK4F4KuscOVJe/RfUvC79dHfjG2kQJ+T7NcvnPJuRDbZFqHONs1HE/PBXAt4wBfaVFrE3eB
 Xt4WZz1ZZriIf47pAdavs9EuVPR+ggwxHnIWbfwxAm93LyVaRa9EOlYbQDXPrhkvfrb+205F
 uqz0OPTk31ivBDWOHGLoeb/03hRRZTEOXwGg5MOLbPSSuaXMGogF+XQ0dscRmCRpIwMzr2g1
 ijkAidwkQOj7VWaeVTiQi0yM9vHAMcgxU/XyARxZD5ELVB4Ot3xhEreHrNqFYQaGBtLl6IkF
 6JYJZveXZyiiF3volwgUHU0l6Q6HDzDuO5EF3PNjOQXF3K4ezH0xw==
IronPort-HdrOrdr: A9a23:8Rawra85c3rmc2V73y9uk+DaI+orL9Y04lQ7vn2YSXRuE/Bws/
 re+8jztCWE7Ar5N0tNpTntAsa9qDbnhPhICOoqTNKftWvdyQiVxehZhOOIqVDd8m/Fh4xgPM
 9bAtFD4bbLbWSS4/yV3DWF
X-IronPort-AV: E=Sophos;i="5.92,305,1650945600"; 
   d="scan'208";a="73754237"
Date: Thu, 16 Jun 2022 14:32:22 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, Samuel Thibault
	<samuel.thibault@ens-lyon.org>, Juergen Gross <jgross@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Roger Pau =?iso-8859-1?Q?Monn=E9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, David Scott
	<dave@recoil.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>, Julien Grall
	<julien@xen.org>
Subject: Re: [XEN PATCH v2 00/29] Toolstack build system improvement, toward
 non-recursive makefiles
Message-ID: <Yqsw5mmC8KHVbtrb@perard.uk.xensource.com>
References: <20220225151321.44126-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20220225151321.44126-1-anthony.perard@citrix.com>

Hi,

There are quite a few patches in this series that are reviewed and
could be committed. The reviewed ones don't depend on the other ones.

The list I've gathered of the ones that I think are properly reviewed is:

11: tools/xenstore: Cleanup makefile
14: libs: rename LDUSELIBS to LDLIBS and use it instead of APPEND_LDFLAGS
15: libs: Remove need for *installlocal targets
16: libs,tools/include: Clean "clean" targets
17: libs: Rename $(SRCS-y) to $(OBJS-y)
18: libs/guest: rename ELF_OBJS to LIBELF_OBJS
19: libs/guest: rework CFLAGS
20: libs/store: use of -iquote instead of -I
21: libs/stat: Fix and rework python-bindings build
22: libs/stat: Fix and rework perl-binding build
24: stubdom: introduce xenlibs.mk
25: tools/libs: create Makefile.common to be used by stubdom build system
26: tools/xenstore: introduce Makefile.common to be used by stubdom
27: stubdom: build xenstore*-stubdom using new Makefile.common
28: stubdom: xenlibs linkfarm, ignore non-regular files
29: tools/ocaml: fix build dependency target

(I did a run with them on our GitLab CI, and there were no build issues.)

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 14:06:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 14:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350629.577080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1q92-0002uZ-TT; Thu, 16 Jun 2022 14:06:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350629.577080; Thu, 16 Jun 2022 14:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1q92-0002uS-Qc; Thu, 16 Jun 2022 14:06:48 +0000
Received: by outflank-mailman (input) for mailman id 350629;
 Thu, 16 Jun 2022 13:57:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+X/=WX=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o1pzw-0001j4-Um
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 13:57:24 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f578ffc-ed7c-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 15:57:23 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id p18so2378446lfr.1
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 06:57:23 -0700 (PDT)
Received: from localhost.localdomain ([91.219.254.75])
 by smtp.gmail.com with ESMTPSA id
 a14-20020a19e30e000000b0047255d2119csm247998lfh.203.2022.06.16.06.57.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jun 2022 06:57:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f578ffc-ed7c-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ANBsMVrpOfPHJzpBSaRzY2rH8VlpLm1cTHGBO/4dLYw=;
        b=o8QT9fCMAbLqvgsY43URvSBp8El0higCNMiWpPwvSAndk98VIzqw8MboBtu27S/BoO
         EPeruhLLIp48iFMAkh3AnuRrJWhBsyor/Vbmb3UjQS7DWSsUXDFjIQ19Aq0yCNwiSnLV
         CGrQhjJ08zzvGykyvjh+vhycUXAJ9OpVeCtHPr/LWWPJBhqOZbucrwBE35FrsissMpsL
         Vr5sRCi72/tnWm4vZFanhAHYiUKLYxGRxTf7xMsbuQjuKZsut79i0IbnPnBGPZIRf8Xm
         NiU3zCXGjNYMMpsz7TZhjpQ1qBK3w/IK9dHFr+1MEJRU6OwFL0OEGbgUjec1TloasWQ6
         l8vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ANBsMVrpOfPHJzpBSaRzY2rH8VlpLm1cTHGBO/4dLYw=;
        b=6vQrGzttFDOckb1etzURf7yW57oSVOLE/13EqwKb8PE3WbAKgy/sqmbvNq/MVF+9vG
         yT2lsBGi6UASuAYRW4WTUuqfe3KN09E8B01MLpEI/YbZZlYC2tREkzdT248MaJ/BDPV2
         LeE6b25ix7KGQZccQs1id0sqdrn+qXsCLaxQWuU0YkysQ43ZiSvnhfXyCmqSp7uJkXbw
         TJ15mSpmfZpdcE7WToT/AmtPh6dqV9pd/FK8uyzUQmqdBDE+9S4XmTRgmM/zZGPAJRWR
         yDDhRW5Sp7/g1i6xgEPzE7tBf4sCp0kuGysXlXsX6ZGLlk2vvgICuPpqXvmmxDcNvY7C
         RZ7A==
X-Gm-Message-State: AJIora/PPhrgzAm+3xSiM8Rk8ow1keav1RNOpY21kkl24Su1jxTb/JQz
	ukAgNJEtWOJ8JQM3DZ5t5VIbCW1hMetfXw==
X-Google-Smtp-Source: AGRyM1tfcSG6v/Y1mxyIKgQusuV7BJdQ9SFFJInnpvCu/6k9QEXdec/CjffsGBXxXPGZOS/jrDvAxQ==
X-Received: by 2002:a05:6512:3408:b0:479:5933:fb7 with SMTP id i8-20020a056512340800b0047959330fb7mr2735242lfr.300.1655387842753;
        Thu, 16 Jun 2022 06:57:22 -0700 (PDT)
From: dmitry.semenets@gmail.com
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Subject: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
Date: Thu, 16 Jun 2022 16:55:41 +0300
Message-Id: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

According to the PSCI specification, ARM TF can return DENIED on
CPU_OFF. This patch brings the hypervisor into compliance with the
PSCI specification. Refer to "Arm Power State Coordination Interface
(DEN0022D.b)", section 5.5.2.

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/psci.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/psci.c b/xen/arch/arm/psci.c
index 0c90c2305c..55787fde58 100644
--- a/xen/arch/arm/psci.c
+++ b/xen/arch/arm/psci.c
@@ -63,8 +63,9 @@ void call_psci_cpu_off(void)
 
         /* If successfull the PSCI cpu_off call doesn't return */
         arm_smccc_smc(PSCI_0_2_FN32_CPU_OFF, &res);
-        panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
-              PSCI_RET(res));
+        if ( PSCI_RET(res) != PSCI_DENIED )
+            panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
+                PSCI_RET(res));
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 14:07:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 14:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350631.577092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1q9N-0003HL-52; Thu, 16 Jun 2022 14:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350631.577092; Thu, 16 Jun 2022 14:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1q9N-0003HE-1x; Thu, 16 Jun 2022 14:07:09 +0000
Received: by outflank-mailman (input) for mailman id 350631;
 Thu, 16 Jun 2022 14:07:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1q9L-0003Gj-M6; Thu, 16 Jun 2022 14:07:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1q9L-0003cY-Ht; Thu, 16 Jun 2022 14:07:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1q9L-0001aX-4f; Thu, 16 Jun 2022 14:07:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1q9L-0004bk-49; Thu, 16 Jun 2022 14:07:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SxEgQdZcchpGAkQv6RxZsRPkU0h9wQTXFrojyhUrGyE=; b=zYK6l2n4Km99CV1ZVlxjL+tFB1
	grloGbC2mUD/ZdWOfcAyryBCeAYuHa2Lt0YhTKQ3G3K/zeEuD60A2YiY9JETVikBZhlnxsFjlSI/9
	KpmdUFYnOIE7Bw6q5cu+I1AIN+TYEbqaRj7pjcjMnwj0FBXyVpOXEHE0llFG+d5opDWc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171185-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171185: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c1d9760b1d847d983529eae2b360b38648841b5
X-Osstest-Versions-That:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 14:07:07 +0000

flight 171185 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-examine-uefi  6 xen-install              fail REGR. vs. 171174
 build-i386-prev               6 xen-build                fail REGR. vs. 171174
 build-i386-xsm                6 xen-build                fail REGR. vs. 171174

Tests which did not succeed, but are not blocking:
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171174
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171174
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171174
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171174
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171174
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171174
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171174
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171174
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171174
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8c1d9760b1d847d983529eae2b360b38648841b5
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171174  2022-06-15 01:53:26 Z    1 days
Testing same since   171181  2022-06-15 14:38:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jane Malalane <jane.malalane@citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8c1d9760b1d847d983529eae2b360b38648841b5
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Jun 15 10:24:06 2022 +0200

    build: remove auto.conf prerequisite from compat/xlat.h target
    
    Now that the command line generating "xlat.h" is checked on rebuild,
    the header will be regenerated whenever the list of xlat headers
    changes due to a change in ".config". We don't need to force a
    regeneration for every change in ".config".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 95b0d7bbddfbd797f37f7a09f0586c4bbd22291b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 15 10:23:16 2022 +0200

    build: fix exporting for make 3.82
    
    GNU make 3.82 apparently has a quirk where exporting an undefined
    variable prevents its value from subsequently being updated. This
    situation can arise due to our adding of -rR to MAKEFLAGS, which takes
    effect also on make simply re-invoking itself. Once these flags are in
    effect, CC (in particular) is empty (undefined), and would be defined
    only via Config.mk including StdGNU.mk or the like. With the quirk, CC
    remains empty, yet with an empty CC the compiler minimum version check
    fails, breaking the build.
    
    Move the exporting of the various tool stack component variables past
    where they gain their (final) values.
    
    See also be63d9d47f57 ("build: tweak variable exporting for make 3.82").
    
    Fixes: 15a0578ca4b0 ("build: shuffle main Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit e8e6e42279a5723239c5c40ba4c7f579a979465d
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Jun 15 10:22:38 2022 +0200

    tools/xenstore: simplify loop handling connection I/O
    
    The loop handling input and output on xenstored's connections is
    open-coding list_for_each_entry_safe() in an incredibly complicated
    way.
    
    Use list_for_each_entry_safe() instead, making it much clearer how
    the code works.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit e2d2b9fd7a2b349a1a9a75b482981cfd2d2407a8
Author: Jane Malalane <jane.malalane@citrix.com>
Date:   Wed Jun 15 10:21:08 2022 +0200

    x86/hvm: widen condition for is_hvm_pv_evtchn_domain() and report fix in CPUID
    
    Have is_hvm_pv_evtchn_domain() return true for vector callbacks for
    evtchn delivery set up on a per-vCPU basis via
    HVMOP_set_evtchn_upcall_vector.
    
    Assume that if vCPU0 uses HVMOP_set_evtchn_upcall_vector, all
    remaining vCPUs will too and thus remove is_hvm_pv_evtchn_vcpu() and
    replace its sole caller with is_hvm_pv_evtchn_domain().
    
    is_hvm_pv_evtchn_domain() returning true is a condition for setting up
    physical IRQ to event channel mappings. Therefore, also add a CPUID
    bit so that guests know whether the check in is_hvm_pv_evtchn_domain()
    will fail when using HVMOP_set_evtchn_upcall_vector. This matters for
    guests that route PIRQs over event channels since
    is_hvm_pv_evtchn_domain() is a condition in physdev_map_pirq().
    
    The naming of the CPUID bit is quite generic about upcall support
    being available. That's done so that the define name doesn't become
    overly long.
    
    A guest that doesn't care about physical interrupts routed over event
    channels can just test for the availability of the hypercall directly
    (HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.
    
    Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 80ad8db8a4d9bb24952f0aea788ce6f47566fa76
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 15 10:19:32 2022 +0200

    IOMMU/x86: work around bogus gcc12 warning in hvm_gsi_eoi()
    
    As per [1] the expansion of the pirq_dpci() macro causes a -Waddress
    controlled warning (enabled implicitly in our builds, if not by default)
    tying the middle part of the involved conditional expression to the
    surrounding boolean context. Work around this by introducing a local
    inline function in the affected source file.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    
    [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102967

commit 162dea4e768b835114c736cfd3fa1fc3742d39c5
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Jun 10 14:27:55 2022 -0700

    add more MISRA C rules to docs/misra/rules.rst
    
    Add the new MISRA C rules agreed by the MISRA C working group to
    docs/misra/rules.rst.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 15:12:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 15:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350652.577108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rAb-0003LB-Bn; Thu, 16 Jun 2022 15:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350652.577108; Thu, 16 Jun 2022 15:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rAb-0003L4-8r; Thu, 16 Jun 2022 15:12:29 +0000
Received: by outflank-mailman (input) for mailman id 350652;
 Thu, 16 Jun 2022 15:12:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1rAZ-0003Ky-7O
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 15:12:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1rAY-0004kG-U0; Thu, 16 Jun 2022 15:12:26 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[10.95.152.232]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1rAY-0006fY-HQ; Thu, 16 Jun 2022 15:12:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=r/J6xpDz0O8Teq9yYJ6YkzF2E2EuUUW/9FfFWGvAyvM=; b=r0UYs9vA3yKUQNeMbsupPzAQEl
	6FATYZQGHSw1DUn/9vydiNoC15bS+Y3Dn1ibhjBI7lKro8bDUDB1hwdsvmIHqUeoyqn0d3IahOa54
	R5GbC1GT4ypgJeoOLe9TQuNgsJYwBFunRLuQhSrFxPNJAOrFxM2pxIO1XoUsHZSHuCYc=;
Message-ID: <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org>
Date: Thu, 16 Jun 2022 16:12:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: dmitry.semenets@gmail.com, xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
> From: Dmytro Semenets <dmytro_semenets@epam.com>
> 
> According to PSCI specification ARM TF can return DENIED on CPU OFF.

I am confused. The spec is talking about the Trusted OS, not the
firmware. The documentation is also not specific to ARM Trusted
Firmware. So did you mean "Trusted OS"?

Also, did you reproduce this on HW? If so, on which CPU does it fail?

> This patch brings the hypervisor into compliance with the PSCI
> specification.

Now it means the CPU will never be turned off using PSCI. Instead, we
would end up spinning in Xen. This would be a problem because we would
save less power.

> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
> section 5.5.2

Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when the 
trusted OS can only run on one core.

Some trusted OSes are migratable. So I think we should first attempt
to migrate the trusted OS off the CPU. Then, if that doesn't work, we
should prevent the CPU from going offline.

That said, upstream doesn't support CPU offlining (I don't know about
your use case). In the case of shutdown, it is not necessary to offline
the CPUs, so we could avoid calling CPU_OFF on all CPUs but one.
Something like:

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 3dc6819d56de..d956812ef8f4 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -8,7 +8,9 @@

  static void noreturn halt_this_cpu(void *arg)
  {
-    stop_cpu();
+    ASSERT(!local_irq_enable());
+    while ( 1 )
+        wfi();
  }

  void machine_halt(void)
@@ -21,10 +23,6 @@ void machine_halt(void)
      smp_call_function(halt_this_cpu, NULL, 0);
      local_irq_disable();

-    /* Wait at most another 10ms for all other CPUs to go offline. */
-    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
-        mdelay(1);
-
      /* This is mainly for PSCI-0.2, which does not return if success. */
      call_psci_system_off();

> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

I don't recall seeing a patch for this on the ML recently. So is this
an internal review?

> ---
>   xen/arch/arm/psci.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/psci.c b/xen/arch/arm/psci.c
> index 0c90c2305c..55787fde58 100644
> --- a/xen/arch/arm/psci.c
> +++ b/xen/arch/arm/psci.c
> @@ -63,8 +63,9 @@ void call_psci_cpu_off(void)
>   
>           /* If successfull the PSCI cpu_off call doesn't return */
>           arm_smccc_smc(PSCI_0_2_FN32_CPU_OFF, &res);
> -        panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
> -              PSCI_RET(res));
> +        if ( PSCI_RET(res) != PSCI_DENIED )
> +            panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
> +                PSCI_RET(res));
>       }
>   }
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 15:13:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 15:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350659.577119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rBb-0003rM-N7; Thu, 16 Jun 2022 15:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350659.577119; Thu, 16 Jun 2022 15:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rBb-0003rF-KG; Thu, 16 Jun 2022 15:13:31 +0000
Received: by outflank-mailman (input) for mailman id 350659;
 Thu, 16 Jun 2022 15:13:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1rBa-0003r5-TP
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 15:13:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1rBa-0004lB-42; Thu, 16 Jun 2022 15:13:30 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[10.95.152.232]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1rBZ-0006ge-US; Thu, 16 Jun 2022 15:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=g9TslAzVbdz+fosdKYYeZ1z0yLhfuYxU8pge/Uaom6g=; b=15Sp7z9eMe1BOT86EYIMLuC6AE
	k6rGQ2LD+MG+eAxE0oYfWTg6q+itvG7drjnBRYLtuny4fOq9yL7lxJ8D3//h3Tuv/D64F1EbG+E1H
	nBCwvvCSCnRbCEyGqaSm2hwTj9XQKVqLg40dLx3vcRRCYOao7tjTgo2fPBAylRN10B2k=;
Message-ID: <0c6d58ae-bd8d-a3d8-d8d9-2b7a5340b4dd@xen.org>
Date: Thu, 16 Jun 2022 16:13:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended
 input/output registers
To: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-2-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101733020.756493@ubuntu-linux-20-04-desktop>
 <20220615155825.GA30639@jade> <588f9903-2a0e-a546-912a-24d2a13c3c6f@xen.org>
 <20220615220901.GA43803@jade>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220615220901.GA43803@jade>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jens,

On 15/06/2022 23:09, Jens Wiklander wrote:
> On Wed, Jun 15, 2022 at 08:01:28PM +0100, Julien Grall wrote:
>> Hi,
>>
>> On 15/06/2022 16:58, Jens Wiklander wrote:
>>> On Fri, Jun 10, 2022 at 05:41:33PM -0700, Stefano Stabellini wrote:
>>>>>    #endif /* __ASSEMBLY__ */
>>>>>    /*
>>>>> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
>>>>> index 676740ef1520..6f90c08a6304 100644
>>>>> --- a/xen/arch/arm/vsmc.c
>>>>> +++ b/xen/arch/arm/vsmc.c
>>>>> @@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
>>>>>        switch ( fid )
>>>>>        {
>>>>>        case ARM_SMCCC_VERSION_FID:
>>>>> -        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
>>>>> +        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
>>>>>            return true;
>>>> This is going to be a problem for ARM32 given that ARM_SMCCC_VERSION_1_2
>>>> is unimplemented on ARM32. If there is an ARM32 implementation in Linux
>>>> for ARM_SMCCC_VERSION_1_2 you might as well import it too.
>>>>
>>>> Otherwise we'll have to abstract it away, e.g.:
>>>>
>>>> #ifdef CONFIG_ARM_64
>>>> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_2
>>>> #else
>>>> #define ARM_VSMCCC_VERSION ARM_SMCCC_VERSION_1_1
>>>> #endif
>>>
>>> I couldn't find an ARM32 implementation for ARM_SMCCC_VERSION_1_2. But
>>> I'm not sure it's needed at this point. From what I've understood, r4-17
>>> are either preserved or updated by the function ID in question. So
>>> claiming ARM_SMCCC_VERSION_1_2 shouldn't break anything.
>>
>> So in Xen, we always take a snapshot of the registers on entry to the
>> hypervisor and only touch it when necessary. Therefore, it doesn't matter
>> whether we claim to be compliant with 1.1 or 1.2 based on the argument
>> passing convention.
>>
>> However, the spec is not only about arguments. For instance, SMCCC v1.1 also
>> added some mandatory functions (e.g. detecting the version). I haven't
>> looked closely at whether SMCCC v1.2 introduced such a thing. Can you
>> confirm what mandatory features come with 1.2?
> 
> There's a nice summary in a table at the end of the C version of DEN0028
> you linked below. For SMCCC v1.2:
> Argument/Result register set:
> Permits calls to use R4—R7 as return registers (Section 4.1).
> Permits calls to use X4—X17 as return registers (Section 3.1).
> Permits calls to use X8—X17 as argument registers (Section 3.1).
> Introduces:
> SMCCC_ARCH_SOC_ID (Section 7.4)
> Deprecates:
> UID, Revision Queries on Arm Architecture Service (Section 6.2)
> Count Query on all services (Section 6.2)

Thanks for posting here!

> 
> As far as I can tell, nothing mandatory is introduced with version 1.2.

Agreed. So it is safe to expose 1.2 unconditionally to the VMs.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 15:28:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 15:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350668.577131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rPj-0005e9-06; Thu, 16 Jun 2022 15:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350668.577131; Thu, 16 Jun 2022 15:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rPi-0005e2-TI; Thu, 16 Jun 2022 15:28:06 +0000
Received: by outflank-mailman (input) for mailman id 350668;
 Thu, 16 Jun 2022 15:28:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rPh-0005ds-SN; Thu, 16 Jun 2022 15:28:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rPh-00050B-Nb; Thu, 16 Jun 2022 15:28:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rPh-00061F-Ch; Thu, 16 Jun 2022 15:28:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rPh-0003l7-CE; Thu, 16 Jun 2022 15:28:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8ZyzrLVQto9UITTZ+pVCroauxE3prcP7yqtaitihqus=; b=CO2x9S+nHtAyXgrEEdnUg68fEH
	5GK11VAt2q2RscDAXTd213UT0ORA4JKbL0MTKMoTaEdDpeQ0HGZCgwyug5nXzC6Hcjm3/8sosIAuy
	deYQr1/mhIAQ6cueIcvRBepqPVTSMRPJJOytf3S7+2ocv6A4jdJ5QVNVcLaMt5B6SoX8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171189-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171189: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=eb120a79da5b5a2f2b716522abf176a1e45c30c3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 15:28:05 +0000

flight 171189 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171189/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64                   6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              eb120a79da5b5a2f2b716522abf176a1e45c30c3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  706 days
Failing since        151818  2020-07-11 04:18:52 Z  705 days  687 attempts
Testing same since   171189  2022-06-16 04:19:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113423 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 15:34:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 15:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350679.577144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rVo-0007HY-NV; Thu, 16 Jun 2022 15:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350679.577144; Thu, 16 Jun 2022 15:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1rVo-0007HR-Ko; Thu, 16 Jun 2022 15:34:24 +0000
Received: by outflank-mailman (input) for mailman id 350679;
 Thu, 16 Jun 2022 15:34:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rVm-0007HH-Um; Thu, 16 Jun 2022 15:34:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rVm-00056l-Q7; Thu, 16 Jun 2022 15:34:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rVm-0006WH-DO; Thu, 16 Jun 2022 15:34:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1rVm-0008QG-Cu; Thu, 16 Jun 2022 15:34:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hyqGS1E5dYz3A35tCDjMqqF/W/PaV5fmzBUSIiuvsUQ=; b=5G+E4Qa3qUI8+tMBUiypmlDPci
	PI1W3z5zWORb0XqBXVtUYEp99+BuyEdJZNqxr5Lgzyp8ERpwTMaO43hn1Ws1hztqPy+k46CCftZ0w
	iOvuDp7YkIWwF2UJVRq8gQfJDgC5Ji0ps7FkYW5sbTg1JQ9OnQQuKfyHWYdTwxgrm6Kc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171201-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171201: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c24b70fedcb52633b2370f834d8a2be3f7fa38e
X-Osstest-Versions-That:
    xen=3c2a14ea81c77ae7973c1e436a32436a7e6d017b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 15:34:22 +0000

flight 171201 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171201/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c24b70fedcb52633b2370f834d8a2be3f7fa38e
baseline version:
 xen                  3c2a14ea81c77ae7973c1e436a32436a7e6d017b

Last test of basis   171184  2022-06-16 01:01:50 Z    0 days
Testing same since   171201  2022-06-16 12:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3c2a14ea81..8c24b70fed  8c24b70fedcb52633b2370f834d8a2be3f7fa38e -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 16:10:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 16:10:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350695.577191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1s4e-0004tv-MX; Thu, 16 Jun 2022 16:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350695.577191; Thu, 16 Jun 2022 16:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1s4e-0004sJ-CZ; Thu, 16 Jun 2022 16:10:24 +0000
Received: by outflank-mailman (input) for mailman id 350695;
 Thu, 16 Jun 2022 16:10:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8SYV=WX=xenbits.xen.org=julieng@srs-se1.protection.inumbo.net>)
 id 1o1s4c-0004NM-Uq
 for xen-devel@lists.xen.org; Thu, 16 Jun 2022 16:10:23 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cef20c58-ed8e-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 18:10:18 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1o1s4J-0006G0-3c; Thu, 16 Jun 2022 16:10:03 +0000
Received: from julieng by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1o1s4I-00011l-V8; Thu, 16 Jun 2022 16:10:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cef20c58-ed8e-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=nhwmZ2xiIfHUXHUlEvQ+jKqLWMxkxayVcigPURx5bFE=; b=UxH4IeUmW2HC6aOO68fbKrZq9i
	yqACwqivQQC4724Q3+XAlUBc6pOwa625+iJg8wVEa15N1NwyRHfQdoMUqmfLwRfmK9hw/AbZvdZzC
	ykKb+gEx0lGnJpw8AwA5LWeXC/j9DtWuiaa1LsX0iKDWBZXzsQ13T7VDlg2OIxnlSq4Y=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 404 v2 (CVE-2022-21123,CVE-2022-21125,CVE-2022-21166)
 - x86: MMIO Stale Data vulnerabilities
Message-Id: <E1o1s4I-00011l-V8@xenbits.xenproject.org>
Date: Thu, 16 Jun 2022 16:10:02 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

 Xen Security Advisory CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 / XSA-404
                                   version 2

                 x86: MMIO Stale Data vulnerabilities

UPDATES IN VERSION 2
====================

Correct one CVE.  The title for version 1 gave CVE-2022-21124, which was
incorrect and should have been CVE-2022-21125.

Patches are now reviewed.  Backports are available.

ISSUE DESCRIPTION
=================

This issue is related to the SRBDS, TAA and MDS vulnerabilities.  Please
see:

  https://xenbits.xen.org/xsa/advisory-320.html (SRBDS)
  https://xenbits.xen.org/xsa/advisory-305.html (TAA)
  https://xenbits.xen.org/xsa/advisory-297.html (MDS)

Please see Intel's whitepaper:

  https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/processor-mmio-stale-data-vulnerabilities.html

IMPACT
======

An attacker might be able to directly read or infer data from other
security contexts in the system.  This can include data belonging to
other VMs, or to Xen itself.  The degree to which an attacker can obtain
data depends on the CPU, and the system configuration.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Only x86 processors are vulnerable.  Processors from other manufacturers
(e.g. ARM) are not believed to be vulnerable.

Only Intel based processors are affected.  Processors from other x86
manufacturers (e.g. AMD) are not believed to be vulnerable.

Please consult the Intel Security Advisory for details on the affected
processors and configurations.

Per Xen's support statement, PCI passthrough should be to trusted
domains because the overall system security depends on factors outside
of Xen's control.

As such, Xen, in a supported configuration, is not vulnerable to
DRPW/SBDR.

MITIGATION
==========

All mitigations depend on functionality added in the IPU 2022.1 (May
2022) microcode release from Intel.  Consult your dom0 OS vendor.

To the best of the security team's understanding, the summary is as
follows:

Server CPUs (Xeon EP/EX, Scalable, and some Atom servers), excluding
Xeon E3 (which use the client CPU design), are potentially vulnerable to
DRPW (CVE-2022-21166).

Client CPUs (including Xeon E3) are, furthermore, potentially vulnerable
to SBDR (CVE-2022-21123) and SBDS (CVE-2022-21125).

SBDS only affects CPUs vulnerable to MDS.  On these CPUs, there are
previously undiscovered leakage channels.  There is no change to the
existing MDS mitigations.

DRPW and SBDR only affect configurations where less privileged domains
have MMIO mappings of buggy endpoints.  Consult your hardware vendor.

In configurations where less privileged domains have MMIO access to
buggy endpoints, `spec-ctrl=unpriv-mmio` can be enabled, which will
cause Xen to mitigate cross-domain fill buffer leakage and extend SRBDS
protections to protect RNG data from leakage.
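[Editorial note, not part of the signed advisory: as an illustrative
sketch only, on dom0 distributions whose Xen packages use GRUB the
option is typically appended to the hypervisor command line; the exact
file and variable name depend on the distribution.]

```
# /etc/default/grub -- hypothetical example; consult your distribution
GRUB_CMDLINE_XEN_DEFAULT="spec-ctrl=unpriv-mmio"

# Then regenerate the GRUB configuration, e.g.:
#   update-grub                               (Debian/Ubuntu)
#   grub2-mkconfig -o /boot/grub2/grub.cfg    (Fedora/SUSE)
```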

RESOLUTION
==========

Applying the appropriate attached patches and enabling the newly
introduced command line option, if appropriate, mitigates these issues.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa404/xsa404-?.patch           xen-unstable
xsa404/xsa404-4.16-?.patch      Xen 4.16.x
xsa404/xsa404-4.15-?.patch      Xen 4.15.x
xsa404/xsa404-4.14-?.patch      Xen 4.14.x
xsa404/xsa404-4.13-?.patch      Xen 4.13.x

$ sha256sum xsa404*/*
51a812b3e37fb5067aff94d7e587c3fed0de4fcc89e694c7b7dbf1ef2d7e2acc  xsa404/xsa404-1.patch
99d9657cd811f5ed86949bd44777b6bfbb4356fea70795edaa9c7ede341603a0  xsa404/xsa404-2.patch
7e61db8f1741a9e2e9e68e7221cc532f4d17c4d0b2e02ce9ba4468ce187b7b57  xsa404/xsa404-3.patch
be78110d460db361be29f5e5f4b4608bbd25d2032c5f14eed05fd10e66e99e87  xsa404/xsa404-4.13-1.patch
7734bc21a04eb0cea30564bd0855ecc969b7b427a250b5ea6efc6fab46483b70  xsa404/xsa404-4.13-2.patch
6abbdcf5308c033ab7b59c6c75514e29aa14f06c61ef807e2d0c80695af1cace  xsa404/xsa404-4.13-3.patch
ccff36c3615d0068ade29e1d25abd6112b9e90490a5b0ef3d189b27aa53976b2  xsa404/xsa404-4.14-1.patch
ac446bed9d33d84e0b20e4898ce1424f3ed7ed4b05c3c559045a377a9a044b0c  xsa404/xsa404-4.14-2.patch
0ca7801e0442dd304d62538a0861fe459b08dc367530d2142405d602930e1dab  xsa404/xsa404-4.14-3.patch
a26036a136c10810de88960704e6922a40b483a49c8b1821a6e265cae968bfc2  xsa404/xsa404-4.15-1.patch
25616a8665b96b965fbc0b799fb8cd17a360b4add71c6e6e504859cfd35f19ce  xsa404/xsa404-4.15-2.patch
a4c3608210f62e453f9c983ebc1a3b0846ca3a52ba32ee13143561710b4c4118  xsa404/xsa404-4.15-3.patch
a18c04cfdacf7dbb518216ac85047a5851c1f64c62d64e234f8ed19b6905ba60  xsa404/xsa404-4.16-1.patch
d22af75e0bc42e249a37bd91165b426c7146f69dfd6c4de4a06d6ed0b3e5e713  xsa404/xsa404-4.16-2.patch
b04603668f61fbd40e2effaaeb7b3d9c555a8d8a4667208ae0ae42baf323230a  xsa404/xsa404-4.16-3.patch
$
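[Editorial note, not part of the signed advisory: the digest list above
can be checked mechanically with `sha256sum -c`. The sketch below
demonstrates the verification flow with a stand-in file rather than the
real patch names.]

```shell
# Demonstrate the sha256sum verification flow with a stand-in file.
mkdir -p xsa404-demo && cd xsa404-demo
printf 'example patch contents\n' > xsa404-demo.patch

# Record the expected digest (the advisory supplies these lines verbatim).
sha256sum xsa404-demo.patch > SHA256SUMS

# Verify; prints "xsa404-demo.patch: OK" and exits 0 on a match,
# exits non-zero if any file's digest differs.
sha256sum -c SHA256SUMS
```

In practice one would paste the `xsa404/*` lines from the advisory into
a file and run `sha256sum -c` against it after downloading the patches.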

In addition, the backports have already been pushed to xen.git.  They are
available in the following branches:

staging      8c24b70fedcb52633b2370f834d8a2be3f7fa38e
staging-4.16 2e82446cb252f6c8ac697e81f4155872c69afde4
staging-4.15 a3faf632606e54437146dbcac2c9bbb89b9a4007
staging-4.14 c5f774eaeeca195ef85b47713f0b21220c4b41e6
staging-4.13 87ff11354f0dc0d6e77e1695e6c1e14aa1382cdc

NOTE CONCERNING CVE-2022-21127 / Update to SRBDS
================================================

An issue was discovered with the SRBDS microcode mitigation.  A
microcode update was released as part of Intel's IPU 2022.1 in May 2022.

Updating microcode is sufficient to fix the issue, with no extra actions
required on Xen's behalf.  Consult your dom0 OS vendor or OEM for
updated microcode.

NOTE CONCERNING CVE-2022-21180 / Undefined MMIO Hang
====================================================

A related issue was discovered.  See:

  https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/undefined-mmio-hang.html

Xen is not vulnerable to UMH in supported configurations.

The only mitigation is to avoid passing impacted devices through to
untrusted guests.

NOTE CONCERNING LACK OF EMBARGO
===============================

The discoverer did not authorise us to predisclose.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmKrVbAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ2AcH/jWGiu0jpWMkQw/3U4DUu2a77PcC9jLH8NONesB7
SGfdhIMNqmStUI5VJf54ccDIrZSLQxvNVWWxXyQPhZXWhSPf5xE2uYK1qUL+Za8c
kOIJr0Drzffr2Bmu3NnBCRdQDkmXl2GDgqig4YWK/+BOlOO+YxBGdyoE0mBOXMo4
+cQHHvYa16kZVuwxyS0mZxhKFo3JQZaKqh2DEzKZUWm3w8n3NKEYG8S00sttZfjs
dS8rNXEu+yrmPjsJ+hFfJw8MfoETE6yGI47C89dFTN9Q0KedEYM28oD6ClMUC+ks
kwnFAk561m4VUoTqkSv82PeJfS9Sp5D6yO4CDdC05Eyc9gA=
=K9Tq
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCBWRVJXIGZsdXNo
aW5nIGlzIHVzZWQgYW5kIGNvbnRyb2xsZWQgZXhhY3RseSBhcyBiZWZvcmUs
IGJ1dCBsYXRlcgpwYXRjaGVzIHdpbGwgYWRkIHBlci1kb21haW4gY2FzZXMg
dG9vLgoKTm8gY2hhbmdlIGluIGJlaGF2aW91ci4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5k
cmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBh
dSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L2RvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlz
Yy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwppbmRleCAwZDFkOThkNzE1YjAu
LjI2NmExMWFiNTg0ZSAxMDA2NDQKLS0tIGEvZG9jcy9taXNjL3hlbi1jb21t
YW5kLWxpbmUucGFuZG9jCisrKyBiL2RvY3MvbWlzYy94ZW4tY29tbWFuZC1s
aW5lLnBhbmRvYwpAQCAtMjI4Miw5ICsyMjgyLDggQEAgaW4gcGxhY2UgZm9y
IGd1ZXN0cyB0byB1c2UuCiBVc2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZh
bHVlIGZvciBlaXRoZXIgb2YgdGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgog
CiBUaGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNi
PWAgYW5kIGBtZC1jbGVhcj1gIG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJv
bCBvdmVyIHRoZSBhbHRlcm5hdGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBU
aGVzZSBpbXBhY3QgWGVuJ3MKLWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYs
IGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3Ig
Z3Vlc3RzCi10byB1c2UuCitncmFpbmVkIGNvbnRyb2wgb3ZlciB0aGUgcHJp
bWl0aXZlcyBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
bworcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IGE3MmNjOTU1MmFkNi4uOWVkZGVhYTIwYmQ1IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC04NjQsNiArODY0LDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZC0+YXJjaC5t
c3JfcmVsYXhlZCA9IGNvbmZpZy0+YXJjaC5taXNjX2ZsYWdzICYgWEVOX1g4
Nl9NU1JfUkVMQVhFRDsKIAorICAgIHNwZWNfY3RybF9pbml0X2RvbWFpbihk
KTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTIwMTgsMTQgKzIw
MjAsMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2b2lkKQog
dm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0
IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNtcF9w
cm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmluZm8gPSBn
ZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpwcmV2
ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWluOwogICAg
IHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSByZWFkX2F0b21pYygmbmV4dC0+
ZGlydHlfY3B1KTsKIAogICAgIEFTU0VSVChwcmV2ICE9IG5leHQpOwogICAg
IEFTU0VSVChsb2NhbF9pcnFfaXNfZW5hYmxlZCgpKTsKIAotICAgIGdldF9j
cHVfaW5mbygpLT51c2VfcHZfY3IzID0gZmFsc2U7Ci0gICAgZ2V0X2NwdV9p
bmZvKCktPnhlbl9jcjMgPSAwOworICAgIGluZm8tPnVzZV9wdl9jcjMgPSBm
YWxzZTsKKyAgICBpbmZvLT54ZW5fY3IzID0gMDsKIAogICAgIGlmICggdW5s
aWtlbHkoZGlydHlfY3B1ICE9IGNwdSkgJiYgZGlydHlfY3B1ICE9IFZDUFVf
Q1BVX0NMRUFOICkKICAgICB7CkBAIC0yMDg5LDYgKzIwOTIsMTEgQEAgdm9p
ZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0IHZj
cHUgKm5leHQpCiAgICAgICAgICAgICAgICAgKmxhc3RfaWQgPSBuZXh0X2lk
OwogICAgICAgICAgICAgfQogICAgICAgICB9CisKKyAgICAgICAgLyogVXBk
YXRlIHRoZSB0b3Atb2Ytc3RhY2sgYmxvY2sgd2l0aCB0aGUgVkVSVyBkaXNw
b3NpdGlvbi4gKi8KKyAgICAgICAgaW5mby0+c3BlY19jdHJsX2ZsYWdzICY9
IH5TQ0ZfdmVydzsKKyAgICAgICAgaWYgKCBuZXh0ZC0+YXJjaC52ZXJ3ICkK
KyAgICAgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyB8PSBTQ0ZfdmVy
dzsKICAgICB9CiAKICAgICBzY2hlZF9jb250ZXh0X3N3aXRjaGVkKHByZXYs
IG5leHQpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS92bXgvZW50
cnkuUyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKaW5kZXggNDk2
NTFmM2M0MzVhLi41ZjVkZTQ1YTEzMDkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9odm0vdm14L2VudHJ5LlMKKysrIGIveGVuL2FyY2gveDg2L2h2bS92
bXgvZW50cnkuUwpAQCAtODcsNyArODcsNyBAQCBVTkxJS0VMWV9FTkQocmVh
bG1vZGUpCiAKICAgICAgICAgLyogV0FSTklORyEgYHJldGAsIGBjYWxsICpg
LCBgam1wICpgIG5vdCBzYWZlIGJleW9uZCB0aGlzIHBvaW50LiAqLwogICAg
ICAgICAvKiBTUEVDX0NUUkxfRVhJVF9UT19WTVggICBSZXE6ICVyc3A9cmVn
cy9jcHVpbmZvICAgICAgICAgICAgICBDbG9iOiAgICAqLwotICAgICAgICBB
TFRFUk5BVElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndf
c2VsKCVyc3ApKSwgWDg2X0ZFQVRVUkVfU0NfVkVSV19IVk0KKyAgICAgICAg
RE9fU1BFQ19DVFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAgVkNQVV9o
dm1fZ3Vlc3RfY3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmggYi94ZW4vYXJjaC94
ODYvaW5jbHVkZS9hc20vY3B1ZmVhdHVyZXMuaAppbmRleCBmZjMxNTdkNTJk
MTMuLmJkNDVhMTQ0ZWU3OCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKKysrIGIveGVuL2FyY2gveDg2L2lu
Y2x1ZGUvYXNtL2NwdWZlYXR1cmVzLmgKQEAgLTM1LDggKzM1LDcgQEAgWEVO
X0NQVUZFQVRVUkUoU0NfUlNCX0hWTSwgICAgICAgIFg4Nl9TWU5USCgxOSkp
IC8qIFJTQiBvdmVyd3JpdGUgbmVlZGVkIGZvciBIVk0KIFhFTl9DUFVGRUFU
VVJFKFhFTl9TRUxGU05PT1AsICAgICBYODZfU1lOVEgoMjApKSAvKiBTRUxG
U05PT1AgZ2V0cyB1c2VkIGJ5IFhlbiBpdHNlbGYgKi8KIFhFTl9DUFVGRUFU
VVJFKFNDX01TUl9JRExFLCAgICAgICBYODZfU1lOVEgoMjEpKSAvKiAoU0Nf
TVNSX1BWIHx8IFNDX01TUl9IVk0pICYmIGRlZmF1bHRfeGVuX3NwZWNfY3Ry
bCAqLwogWEVOX0NQVUZFQVRVUkUoWEVOX0xCUiwgICAgICAgICAgIFg4Nl9T
WU5USCgyMikpIC8qIFhlbiB1c2VzIE1TUl9ERUJVR0NUTC5MQlIgKi8KLVhF
Tl9DUFVGRUFUVVJFKFNDX1ZFUldfUFYsICAgICAgICBYODZfU1lOVEgoMjMp
KSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBQViAqLwotWEVOX0NQVUZFQVRV
UkUoU0NfVkVSV19IVk0sICAgICAgIFg4Nl9TWU5USCgyNCkpIC8qIFZFUlcg
dXNlZCBieSBYZW4gZm9yIEhWTSAqLworLyogQml0cyAyMywyNCB1bnVzZWQu
ICovCiBYRU5fQ1BVRkVBVFVSRShTQ19WRVJXX0lETEUsICAgICAgWDg2X1NZ
TlRIKDI1KSkgLyogVkVSVyB1c2VkIGJ5IFhlbiBmb3IgaWRsZSAqLwogWEVO
X0NQVUZFQVRVUkUoWEVOX1NIU1RLLCAgICAgICAgIFg4Nl9TWU5USCgyNikp
IC8qIFhlbiB1c2VzIENFVCBTaGFkb3cgU3RhY2tzICovCiBYRU5fQ1BVRkVB
VFVSRShYRU5fSUJULCAgICAgICAgICAgWDg2X1NZTlRIKDI3KSkgLyogWGVu
IHVzZXMgQ0VUIEluZGlyZWN0IEJyYW5jaCBUcmFja2luZyAqLwpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2RvbWFpbi5oIGIveGVu
L2FyY2gveDg2L2luY2x1ZGUvYXNtL2RvbWFpbi5oCmluZGV4IDc1Mzg5ZTk2
MmE1ZS4uYWQwMWVlNjhlMTJjIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
aW5jbHVkZS9hc20vZG9tYWluLmgKKysrIGIveGVuL2FyY2gveDg2L2luY2x1
ZGUvYXNtL2RvbWFpbi5oCkBAIC0zMjQsNiArMzI0LDkgQEAgc3RydWN0IGFy
Y2hfZG9tYWluCiAgICAgdWludDMyX3QgcGNpX2NmODsKICAgICB1aW50OF90
IGNtb3NfaWR4OwogCisgICAgLyogVXNlIFZFUlcgb24gcmV0dXJuLXRvLWd1
ZXN0IGZvciBpdHMgZmx1c2hpbmcgc2lkZSBlZmZlY3QuICovCisgICAgYm9v
bCB2ZXJ3OworCiAgICAgdW5pb24gewogICAgICAgICBzdHJ1Y3QgcHZfZG9t
YWluIHB2OwogICAgICAgICBzdHJ1Y3QgaHZtX2RvbWFpbiBodm07CmRpZmYg
LS1naXQgYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgg
Yi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKaW5kZXgg
Zjc2MDI5NTIzNjEwLi43NTEzNTVmNDcxZjQgMTAwNjQ0Ci0tLSBhL3hlbi9h
cmNoL3g4Ni9pbmNsdWRlL2FzbS9zcGVjX2N0cmwuaAorKysgYi94ZW4vYXJj
aC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKQEAgLTI0LDYgKzI0LDcg
QEAKICNkZWZpbmUgU0NGX3VzZV9zaGFkb3cgKDEgPDwgMCkKICNkZWZpbmUg
U0NGX2lzdF93cm1zciAgKDEgPDwgMSkKICNkZWZpbmUgU0NGX2lzdF9yc2Ig
ICAgKDEgPDwgMikKKyNkZWZpbmUgU0NGX3ZlcncgICAgICAgKDEgPDwgMykK
IAogI2lmbmRlZiBfX0FTU0VNQkxZX18KIApAQCAtMzIsNiArMzMsNyBAQAog
I2luY2x1ZGUgPGFzbS9tc3ItaW5kZXguaD4KIAogdm9pZCBpbml0X3NwZWN1
bGF0aW9uX21pdGlnYXRpb25zKHZvaWQpOwordm9pZCBzcGVjX2N0cmxfaW5p
dF9kb21haW4oc3RydWN0IGRvbWFpbiAqZCk7CiAKIGV4dGVybiBib29sIG9w
dF9pYnBiOwogZXh0ZXJuIGJvb2wgb3B0X3NzYmQ7CmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsX2FzbS5oIGIveGVu
L2FyY2gveDg2L2luY2x1ZGUvYXNtL3NwZWNfY3RybF9hc20uaAppbmRleCAw
MmIzYjE4Y2U2OWYuLjVhNTkwYmFjNDRhYSAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L2luY2x1ZGUvYXNtL3NwZWNfY3RybF9hc20uaAorKysgYi94ZW4v
YXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsX2FzbS5oCkBAIC0xMzYs
NiArMTM2LDE5IEBACiAjZW5kaWYKIC5lbmRtCiAKKy5tYWNybyBET19TUEVD
X0NUUkxfQ09ORF9WRVJXCisvKgorICogUmVxdWlyZXMgJXJzcD1jcHVpbmZv
CisgKgorICogSXNzdWUgYSBWRVJXIGZvciBpdHMgZmx1c2hpbmcgc2lkZSBl
ZmZlY3QsIGlmIGluZGljYXRlZC4gIFRoaXMgaXMgYSBTcGVjdHJlCisgKiB2
MSBnYWRnZXQsIGJ1dCB0aGUgSVJFVC9WTUVudHJ5IGlzIHNlcmlhbGlzaW5n
LgorICovCisgICAgdGVzdGIgJFNDRl92ZXJ3LCBDUFVJTkZPX3NwZWNfY3Ry
bF9mbGFncyglcnNwKQorICAgIGp6IC5MXEBfdmVyd19za2lwCisgICAgdmVy
dyBDUFVJTkZPX3Zlcndfc2VsKCVyc3ApCisuTFxAX3Zlcndfc2tpcDoKKy5l
bmRtCisKIC5tYWNybyBET19TUEVDX0NUUkxfRU5UUlkgbWF5YmV4ZW46cmVx
CiAvKgogICogUmVxdWlyZXMgJXJzcD1yZWdzIChhbHNvIGNwdWluZm8gaWYg
IW1heWJleGVuKQpAQCAtMjMxLDggKzI0NCw3IEBACiAjZGVmaW5lIFNQRUNf
Q1RSTF9FWElUX1RPX1BWICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCiAgICAgQUxURVJOQVRJVkUgIiIsICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CiAgICAgICAgIERPX1NQRUNfQ1RSTF9FWElUX1RPX0dVRVNULCBYODZfRkVB
VFVSRV9TQ19NU1JfUFY7ICAgICAgICAgICAgICBcCi0gICAgQUxURVJOQVRJ
VkUgIiIsIF9fc3RyaW5naWZ5KHZlcncgQ1BVSU5GT192ZXJ3X3NlbCglcnNw
KSksICAgICAgICAgICBcCi0gICAgICAgIFg4Nl9GRUFUVVJFX1NDX1ZFUldf
UFYKKyAgICBET19TUEVDX0NUUkxfQ09ORF9WRVJXCiAKIC8qCiAgKiBVc2Ug
aW4gSVNUIGludGVycnVwdC9leGNlcHRpb24gY29udGV4dC4gIE1heSBpbnRl
cnJ1cHQgWGVuIG9yIFBWIGNvbnRleHQuCmRpZmYgLS1naXQgYS94ZW4vYXJj
aC94ODYvc3BlY19jdHJsLmMgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMK
aW5kZXggMTQwOGU0YzdhYmQwLi45MmViNGVjZDNkMDkgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYworKysgYi94ZW4vYXJjaC94ODYv
c3BlY19jdHJsLmMKQEAgLTM2LDggKzM2LDggQEAgc3RhdGljIGJvb2wgX19p
bml0ZGF0YSBvcHRfbXNyX3NjX3B2ID0gdHJ1ZTsKIHN0YXRpYyBib29sIF9f
aW5pdGRhdGEgb3B0X21zcl9zY19odm0gPSB0cnVlOwogc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9yc2JfcHYgPSAtMTsKIHN0YXRpYyBib29sIF9f
aW5pdGRhdGEgb3B0X3JzYl9odm0gPSB0cnVlOwotc3RhdGljIGludDhfdCBf
X2luaXRkYXRhIG9wdF9tZF9jbGVhcl9wdiA9IC0xOwotc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9tZF9jbGVhcl9odm0gPSAtMTsKK3N0YXRpYyBp
bnQ4X3QgX19yb19hZnRlcl9pbml0IG9wdF9tZF9jbGVhcl9wdiA9IC0xOwor
c3RhdGljIGludDhfdCBfX3JvX2FmdGVyX2luaXQgb3B0X21kX2NsZWFyX2h2
bSA9IC0xOwogCiAvKiBDbWRsaW5lIGNvbnRyb2xzIGZvciBYZW4ncyBzcGVj
dWxhdGl2ZSBzZXR0aW5ncy4gKi8KIHN0YXRpYyBlbnVtIGluZF90aHVuayB7
CkBAIC05MzMsNiArOTMzLDEzIEBAIHN0YXRpYyBfX2luaXQgdm9pZCBtZHNf
Y2FsY3VsYXRpb25zKHVpbnQ2NF90IGNhcHMpCiAgICAgfQogfQogCit2b2lk
IHNwZWNfY3RybF9pbml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQorewor
ICAgIGJvb2wgcHYgPSBpc19wdl9kb21haW4oZCk7CisKKyAgICBkLT5hcmNo
LnZlcncgPSBwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9o
dm07Cit9CisKIHZvaWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdh
dGlvbnModm9pZCkKIHsKICAgICBlbnVtIGluZF90aHVuayB0aHVuayA9IFRI
VU5LX0RFRkFVTFQ7CkBAIC0xMTk3LDIxICsxMjA0LDIwIEBAIHZvaWQgX19p
bml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRV
UkVfTURfQ0xFQVIpKTsKIAogICAgIC8qCi0gICAgICogRW5hYmxlIE1EUyBk
ZWZlbmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhlIFBWIGJsb2NrcyBuZWVkIHVz
aW5nIGFsbCB0aGUKLSAgICAgKiB0aW1lLCBhbmQgdGhlIElkbGUgYmxvY2tz
IG5lZWQgdXNpbmcgaWYgZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUK
LSAgICAgKiB1c2VkLgorICAgICAqIEVuYWJsZSBNRFMgZGVmZW5jZXMgYXMg
YXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5nIGlmCisg
ICAgICogZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUgdXNlZC4KICAg
ICAgKgogICAgICAqIEhWTSBpcyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhlIE1E
X0NMRUFSIG1pY3JvY29kZSBleHRlbmRzIEwxRF9GTFVTSCB3aXRoCi0gICAg
ICogZXF1aXZlbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBw
ZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0aC4g
IFRoZSBIVk0gYmxvY2tzIGRvbid0IG5lZWQgYWN0aXZhdGluZyBpZiBvdXIg
aHlwZXJ2aXNvciB0b2xkCi0gICAgICogdXMgaXQgd2FzIGhhbmRsaW5nIEwx
RF9GTFVTSCwgb3Igd2UgYXJlIHVzaW5nIEwxRF9GTFVTSCBvdXJzZWx2ZXMu
CisgICAgICogZXF1aXZhbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGlu
ZyB0byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKKyAgICAgKiBIVk0g
cGF0aC4gIFRoZXJlZm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGluIGFkZGl0
aW9uIHRvIEwxRF9GTFVTSC4KKyAgICAgKgorICAgICAqIEFmdGVyIGNhbGN1
bGF0aW5nIHRoZSBhcHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBsaWZ5
CisgICAgICogb3B0X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNob3Vs
ZCB3ZSBWRVJXIG9uIHRoZSB3YXkgaW50byBIVk0KKyAgICAgKiBndWVzdHMi
LCBzbyBzcGVjX2N0cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRlIHN1
aXRhYmxlIHNldHRpbmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21kX2Ns
ZWFyX3B2ICkKLSAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVB
VFVSRV9TQ19WRVJXX1BWKTsKICAgICBpZiAoIG9wdF9tZF9jbGVhcl9wdiB8
fCBvcHRfbWRfY2xlYXJfaHZtICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1
X2NhcChYODZfRkVBVFVSRV9TQ19WRVJXX0lETEUpOwotICAgIGlmICggb3B0
X21kX2NsZWFyX2h2bSAmJiAhKGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9MMURG
TCkgJiYgIW9wdF9sMWRfZmx1c2ggKQotICAgICAgICBzZXR1cF9mb3JjZV9j
cHVfY2FwKFg4Nl9GRUFUVVJFX1NDX1ZFUldfSFZNKTsKKyAgICBvcHRfbWRf
Y2xlYXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wxREZMKSAm
JiAhb3B0X2wxZF9mbHVzaDsKIAogICAgIC8qCiAgICAgICogV2FybiB0aGUg
dXNlciBpZiB0aGV5IGFyZSBvbiBNTFBEUy9NRkJEUy12dWxuZXJhYmxlIGhh
cmR3YXJlIHdpdGggSFQK

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaCBiL3hl
bi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaAppbmRleCA2YzI1
MGJmY2FkYWQuLmVhNDdmNjhkMDU1OCAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L2luY2x1ZGUvYXNtL21zci1pbmRleC5oCisrKyBiL3hlbi9hcmNoL3g4
Ni9pbmNsdWRlL2FzbS9tc3ItaW5kZXguaApAQCAtNzEsNiArNzEsMTEgQEAK
ICNkZWZpbmUgIEFSQ0hfQ0FQU19JRl9QU0NIQU5HRV9NQ19OTyAgICAgICAg
KF9BQygxLCBVTEwpIDw8ICA2KQogI2RlZmluZSAgQVJDSF9DQVBTX1RTWF9D
VFJMICAgICAgICAgICAgICAgICAoX0FDKDEsIFVMTCkgPDwgIDcpCiAjZGVm
aW5lICBBUkNIX0NBUFNfVEFBX05PICAgICAgICAgICAgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgOCkKKyNkZWZpbmUgIEFSQ0hfQ0FQU19TQkRSX1NTRFBf
Tk8gICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8IDEzKQorI2RlZmluZSAg
QVJDSF9DQVBTX0ZCU0RQX05PICAgICAgICAgICAgICAgICAoX0FDKDEsIFVM
TCkgPDwgMTQpCisjZGVmaW5lICBBUkNIX0NBUFNfUFNEUF9OTyAgICAgICAg
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAxNSkKKyNkZWZpbmUgIEFSQ0hf
Q0FQU19GQl9DTEVBUiAgICAgICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8
IDE3KQorI2RlZmluZSAgQVJDSF9DQVBTX0ZCX0NMRUFSX0NUUkwgICAgICAg
ICAgICAoX0FDKDEsIFVMTCkgPDwgMTgpCiAjZGVmaW5lICBBUkNIX0NBUFNf
UlJTQkEgICAgICAgICAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAxOSkK
ICNkZWZpbmUgIEFSQ0hfQ0FQU19CSElfTk8gICAgICAgICAgICAgICAgICAg
KF9BQygxLCBVTEwpIDw8IDIwKQogCkBAIC05MCw2ICs5NSw3IEBACiAjZGVm
aW5lICBNQ1VfT1BUX0NUUkxfUk5HRFNfTUlUR19ESVMgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgMCkKICNkZWZpbmUgIE1DVV9PUFRfQ1RSTF9SVE1fQUxM
T1cgICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8ICAxKQogI2RlZmluZSAg
TUNVX09QVF9DVFJMX1JUTV9MT0NLRUQgICAgICAgICAgICAoX0FDKDEsIFVM
TCkgPDwgIDIpCisjZGVmaW5lICBNQ1VfT1BUX0NUUkxfRkJfQ0xFQVJfRElT
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAgMykKIAogI2RlZmluZSBNU1Jf
UlRJVF9PVVRQVVRfQkFTRSAgICAgICAgICAgICAgICAweDAwMDAwNTYwCiAj
ZGVmaW5lIE1TUl9SVElUX09VVFBVVF9NQVNLICAgICAgICAgICAgICAgIDB4
MDAwMDA1NjEKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YyBiL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwppbmRleCA5MmViNGVjZDNk
MDkuLjJlYzMxMjZjZjA2MCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3Nw
ZWNfY3RybC5jCisrKyBiL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAt
MzIzLDcgKzMyMyw3IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRh
aWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAg
ICAqIEhhcmR3YXJlIHJlYWQtb25seSBpbmZvcm1hdGlvbiwgc3RhdGluZyBp
bW11bml0eSB0byBjZXJ0YWluIGlzc3Vlcywgb3IKICAgICAgKiBzdWdnZXN0
aW9ucyBvZiB3aGljaCBtaXRpZ2F0aW9uIHRvIHVzZS4KICAgICAgKi8KLSAg
ICBwcmludGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVz
JXMlc1xuIiwKKyAgICBwcmludGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVz
JXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKICAgICAgICAgICAgKGNhcHMg
JiBBUkNIX0NBUFNfUkRDTF9OTykgICAgICAgICAgICAgICAgICAgICAgICA/
ICIgUkRDTF9OTyIgICAgICAgIDogIiIsCiAgICAgICAgICAgIChjYXBzICYg
QVJDSF9DQVBTX0lCUlNfQUxMKSAgICAgICAgICAgICAgICAgICAgICAgPyAi
IElCUlNfQUxMIiAgICAgICA6ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19SU0JBKSAgICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBS
U0JBIiAgICAgICAgICAgOiAiIiwKQEAgLTMzMiwxMyArMzMyLDE2IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19TU0JfTk8pICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBT
U0JfTk8iICAgICAgICAgOiAiIiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNI
X0NBUFNfTURTX05PKSAgICAgICAgICAgICAgICAgICAgICAgICA/ICIgTURT
X05PIiAgICAgICAgIDogIiIsCiAgICAgICAgICAgIChjYXBzICYgQVJDSF9D
QVBTX1RBQV9OTykgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIFRBQV9O
TyIgICAgICAgICA6ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQ
U19TQkRSX1NTRFBfTk8pICAgICAgICAgICAgICAgICAgID8gIiBTQkRSX1NT
RFBfTk8iICAgOiAiIiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNf
RkJTRFBfTk8pICAgICAgICAgICAgICAgICAgICAgICA/ICIgRkJTRFBfTk8i
ICAgICAgIDogIiIsCisgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1BT
RFBfTk8pICAgICAgICAgICAgICAgICAgICAgICAgPyAiIFBTRFBfTk8iICAg
ICAgICA6ICIiLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhY
ODZfRkVBVFVSRV9JQlJTX0FMV0FZUykpICAgID8gIiBJQlJTX0FMV0FZUyIg
ICAgOiAiIiwKICAgICAgICAgICAgKGU4YiAgJiBjcHVmZWF0X21hc2soWDg2
X0ZFQVRVUkVfU1RJQlBfQUxXQVlTKSkgICA/ICIgU1RJQlBfQUxXQVlTIiAg
IDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9G
RUFUVVJFX0lCUlNfRkFTVCkpICAgICAgPyAiIElCUlNfRkFTVCIgICAgICA6
ICIiLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVB
VFVSRV9JQlJTX1NBTUVfTU9ERSkpID8gIiBJQlJTX1NBTUVfTU9ERSIgOiAi
Iik7CiAKICAgICAvKiBIYXJkd2FyZSBmZWF0dXJlcyB3aGljaCBuZWVkIGRy
aXZpbmcgdG8gbWl0aWdhdGUgaXNzdWVzLiAqLwotICAgIHByaW50aygiICBI
YXJkd2FyZSBmZWF0dXJlczolcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAg
ICBwcmludGsoIiAgSGFyZHdhcmUgZmVhdHVyZXM6JXMlcyVzJXMlcyVzJXMl
cyVzJXMlcyVzXG4iLAogICAgICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFz
ayhYODZfRkVBVFVSRV9JQlBCKSkgfHwKICAgICAgICAgICAgKF83ZDAgJiBj
cHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfSUJSU0IpKSAgICAgICAgICA/ICIg
SUJQQiIgICAgICAgICAgIDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1
ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlMpKSB8fApAQCAtMzUzLDcgKzM1
Niw5IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0g
aW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAgICAo
XzdkMCAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9NRF9DTEVBUikpICAg
ICAgID8gIiBNRF9DTEVBUiIgICAgICAgOiAiIiwKICAgICAgICAgICAgKF83
ZDAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1JCRFNfQ1RSTCkpICAg
ICA/ICIgU1JCRFNfQ1RSTCIgICAgIDogIiIsCiAgICAgICAgICAgIChlOGIg
ICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX1ZJUlRfU1NCRCkpICAgICAg
PyAiIFZJUlRfU1NCRCIgICAgICA6ICIiLAotICAgICAgICAgICAoY2FwcyAm
IEFSQ0hfQ0FQU19UU1hfQ1RSTCkgICAgICAgICAgICAgICAgICAgICAgID8g
IiBUU1hfQ1RSTCIgICAgICAgOiAiIik7CisgICAgICAgICAgIChjYXBzICYg
QVJDSF9DQVBTX1RTWF9DVFJMKSAgICAgICAgICAgICAgICAgICAgICAgPyAi
IFRTWF9DVFJMIiAgICAgICA6ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFS
Q0hfQ0FQU19GQl9DTEVBUikgICAgICAgICAgICAgICAgICAgICAgID8gIiBG
Ql9DTEVBUiIgICAgICAgOiAiIiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNI
X0NBUFNfRkJfQ0xFQVJfQ1RSTCkgICAgICAgICAgICAgICAgICA/ICIgRkJf
Q0xFQVJfQ1RSTCIgIDogIiIpOwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3Vw
cG9ydCB3aGljaCBwZXJ0YWlucyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBp
ZiAoIElTX0VOQUJMRUQoQ09ORklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19F
TkFCTEVEKENPTkZJR19TSEFET1dfUEFHSU5HKSApCg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpPbiBDUFVzIHZ1bG5lcmFibGUg
dG8gTURTLCB0aGUgZXhpc3RpbmcgbWl0aWdhdGlvbnMgYXJlIHRoZSBiZXN0
IHdlIGNhbiBkbyB0bwptaXRpZ2F0ZSBNTUlPIGNyb3NzLWRvbWFpbiBkYXRh
IGxlYWthZ2UuCgpPbiBDUFVzIGZpeGVkIHRvIE1EUyBidXQgdnVsbmVyYWJs
ZSBNTUlPIHN0YWxlIGRhdGEgbGVha2FnZSwgdGhpcyBvcHRpb246CgogKiBP
biBDUFVzIHN1c2NlcHRpYmxlIHRvIEZCU0RQLCBtaXRpZ2F0ZXMgY3Jvc3Mt
ZG9tYWluIGZpbGwgYnVmZmVyIGxlYWthZ2UKICAgdXNpbmcgRkJfQ0xFQVIu
CiAqIE9uIENQVXMgc3VzY2VwdGlibGUgdG8gU0JEUiwgbWl0aWdhdGVzIFJO
RyBkYXRhIHJlY292ZXJ5IGJ5IGVuZ2FnaW5nIHRoZQogICBzcmItbG9jaywg
cHJldmlvdXNseSB1c2VkIHRvIG1pdGlnYXRlIFNSQkRTLgoKQm90aCBtaXRp
Z2F0aW9ucyByZXF1aXJlIG1pY3JvY29kZSBmcm9tIElQVSAyMDIyLjEsIE1h
eSAyMDIyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDQuCgpTaWduZWQtb2Zm
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PgpSZXZpZXdlZC1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Ci0tLQpCYWNrcG9ydGluZyBub3RlOiBGb3IgWGVuIDQuNyBh
bmQgZWFybGllciB3aXRoIGJvb2xfdCBub3QgYWxpYXNpbmcgYm9vbCwgdGhl
CkFSQ0hfQ0FQU19GQl9DTEVBUiBodW5rIG5lZWRzICEhCgpkaWZmIC0tZ2l0
IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmluZGV4IDI2NmExMWFiNTg0
ZS4uYTkyYjdkMjI4Y2FlIDEwMDY0NAotLS0gYS9kb2NzL21pc2MveGVuLWNv
bW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jCkBAIC0yMjU5LDcgKzIyNTksNyBAQCBCeSBkZWZhdWx0
IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZSAoaS5lIGBzc2Jk
PXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4NikKID4gYD0gTGlzdCBv
ZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2bSxtc3Itc2MscnNiLG1k
LWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAgICBidGktdGh1bms9cmV0
cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIsc3NiZCxlYWdlci1mcHUs
Ci0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmIt
bG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1pb309PGJvb2w+IF1gCiAK
IENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVjdXRpb24gc2lkZWNoYW5u
ZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBYZW4KIHdpbGwgcGljayB0
aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9ucyBiYXNlZCBvbiBjb21w
aWxlZCBpbiBzdXBwb3J0LApAQCAtMjMzOCw4ICsyMzM4LDE2IEBAIFhlbiB3
aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBPbiBoYXJkd2FyZSBzdXBw
b3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxvY2s9YCBvcHRpb24gY2Fu
IGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQgWGVuIGZyb20gcHJvdGVj
dCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIgZnJvbSBsZWFraW5nIHN0
YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2lsbCBlbmFibGUgdGhpcyBt
aXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hlcmUgTURTCi1pcyBmaXhl
ZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAoaW4gd2hpY2ggY2FzZSwg
dGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdheSBmb3IgYW4gYXR0YWNr
ZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4KK2lzIGZpeGVkIGFuZCBU
QUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVyZSBhcmUgbm8gdW5wcml2
aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGljaCBjYXNlLCB0aGVyZSBp
cyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFuIGF0dGFja2VyIHRvCitv
YnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5wcml2LW1taW89YCBib29s
ZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0ZW0gaGFzIChvciB3aWxs
IGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmlsZWdlZCBkb21haW5zIGdy
YW50ZWQgYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIEJ5CitkZWZhdWx0LCB0
aGlzIG9wdGlvbiBpcyBkaXNhYmxlZC4gIElmIGVuYWJsZWQsIFhlbiB3aWxs
IHVzZSB0aGUgYEZCX0NMRUFSYAorYW5kL29yIGBTUkJEU19DVFJMYCBmdW5j
dGlvbmFsaXR5IGF2YWlsYWJsZSBpbiB0aGUgSW50ZWwgTWF5IDIwMjIgbWlj
cm9jb2RlCityZWxlYXNlIHRvIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGRhdGEgdmlhIHRoZSBNTUlPIFN0YWxlIERhdGEKK3Z1bG5lcmFi
aWxpdGllcy4KIAogIyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5g
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKaW5kZXggMmVjMzEyNmNmMDYwLi4xZjI3
NWFkMWZiNWQgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YworKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3
LDggQEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tc2Jk
c19vbmx5OyAvKiA9PiBtaW5pbWFsIEhUIGltcGFjdC4gKi8KIHN0YXRpYyBi
b29sIF9faW5pdGRhdGEgY3B1X2hhc19idWdfbWRzOyAvKiBBbnkgb3RoZXIg
TXtMUCxTQixGQn1EUyBjb21iaW5hdGlvbi4gKi8KIAogc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9zcmJfbG9jayA9IC0xOworc3RhdGljIGJvb2wg
X19pbml0ZGF0YSBvcHRfdW5wcml2X21taW87CitzdGF0aWMgYm9vbCBfX3Jv
X2FmdGVyX2luaXQgb3B0X2ZiX2NsZWFyX21taW87CiAKIHN0YXRpYyBpbnQg
X19pbml0IGNmX2NoZWNrIHBhcnNlX3NwZWNfY3RybChjb25zdCBjaGFyICpz
KQogewpAQCAtMTg0LDYgKzE4Niw4IEBAIHN0YXRpYyBpbnQgX19pbml0IGNm
X2NoZWNrIHBhcnNlX3NwZWNfY3RybChjb25zdCBjaGFyICpzKQogICAgICAg
ICAgICAgb3B0X2JyYW5jaF9oYXJkZW4gPSB2YWw7CiAgICAgICAgIGVsc2Ug
aWYgKCAodmFsID0gcGFyc2VfYm9vbGVhbigic3JiLWxvY2siLCBzLCBzcykp
ID49IDAgKQogICAgICAgICAgICAgb3B0X3NyYl9sb2NrID0gdmFsOworICAg
ICAgICBlbHNlIGlmICggKHZhbCA9IHBhcnNlX2Jvb2xlYW4oInVucHJpdi1t
bWlvIiwgcywgc3MpKSA+PSAwICkKKyAgICAgICAgICAgIG9wdF91bnByaXZf
bW1pbyA9IHZhbDsKICAgICAgICAgZWxzZQogICAgICAgICAgICAgcmMgPSAt
RUlOVkFMOwogCkBAIC0zOTIsNyArMzk2LDggQEAgc3RhdGljIHZvaWQgX19p
bml0IHByaW50X2RldGFpbHMoZW51bSBpbmRfdGh1bmsgdGh1bmssIHVpbnQ2
NF90IGNhcHMpCiAgICAgICAgICAgIG9wdF9zcmJfbG9jayAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgID8gIiBTUkJfTE9DSysiIDogIiBTUkJfTE9D
Sy0iLAogICAgICAgICAgICBvcHRfaWJwYiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA/ICIgSUJQQiIgIDogIiIsCiAgICAgICAgICAgIG9w
dF9sMWRfZmx1c2ggICAgICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBM
MURfRkxVU0giIDogIiIsCi0gICAgICAgICAgIG9wdF9tZF9jbGVhcl9wdiB8
fCBvcHRfbWRfY2xlYXJfaHZtICAgICAgID8gIiBWRVJXIiAgOiAiIiwKKyAg
ICAgICAgICAgb3B0X21kX2NsZWFyX3B2IHx8IG9wdF9tZF9jbGVhcl9odm0g
fHwKKyAgICAgICAgICAgb3B0X2ZiX2NsZWFyX21taW8gICAgICAgICAgICAg
ICAgICAgICAgICAgPyAiIFZFUlciICA6ICIiLAogICAgICAgICAgICBvcHRf
YnJhbmNoX2hhcmRlbiAgICAgICAgICAgICAgICAgICAgICAgICA/ICIgQlJB
TkNIX0hBUkRFTiIgOiAiIik7CiAKICAgICAvKiBMMVRGIGRpYWdub3N0aWNz
LCBwcmludGVkIGlmIHZ1bG5lcmFibGUgb3IgUFYgc2hhZG93aW5nIGlzIGlu
IHVzZS4gKi8KQEAgLTk0Miw3ICs5NDcsOSBAQCB2b2lkIHNwZWNfY3RybF9p
bml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQogewogICAgIGJvb2wgcHYg
PSBpc19wdl9kb21haW4oZCk7CiAKLSAgICBkLT5hcmNoLnZlcncgPSBwdiA/
IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm07CisgICAgZC0+
YXJjaC52ZXJ3ID0KKyAgICAgICAgKHB2ID8gb3B0X21kX2NsZWFyX3B2IDog
b3B0X21kX2NsZWFyX2h2bSkgfHwKKyAgICAgICAgKG9wdF9mYl9jbGVhcl9t
bWlvICYmIGlzX2lvbW11X2VuYWJsZWQoZCkpOwogfQogCiB2b2lkIF9faW5p
dCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25zKHZvaWQpCkBAIC0xMTk3
LDYgKzEyMDQsMTggQEAgdm9pZCBfX2luaXQgaW5pdF9zcGVjdWxhdGlvbl9t
aXRpZ2F0aW9ucyh2b2lkKQogICAgIG1kc19jYWxjdWxhdGlvbnMoY2Fwcyk7
CiAKICAgICAvKgorICAgICAqIFBhcnRzIHdoaWNoIGVudW1lcmF0ZSBGQl9D
TEVBUiBhcmUgdGhvc2Ugd2hpY2ggYXJlIHBvc3QtTURTX05PIGFuZCBoYXZl
CisgICAgICogcmVpbnRyb2R1Y2VkIHRoZSBWRVJXIGZpbGwgYnVmZmVyIGZs
dXNoaW5nIHNpZGUgZWZmZWN0IGJlY2F1c2Ugb2YgYQorICAgICAqIHN1c2Nl
cHRpYmlsaXR5IHRvIEZCU0RQLgorICAgICAqCisgICAgICogSWYgdW5wcml2
aWxlZ2VkIGd1ZXN0cyBoYXZlIChvciB3aWxsIGhhdmUpIE1NSU8gbWFwcGlu
Z3MsIHdlIGNhbgorICAgICAqIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGZpbGwgYnVmZmVyIGRhdGEgYnkgaXNzdWluZyBWRVJXIG9uCisg
ICAgICogdGhlIHJldHVybi10by1ndWVzdCBwYXRoLgorICAgICAqLworICAg
IGlmICggb3B0X3VucHJpdl9tbWlvICkKKyAgICAgICAgb3B0X2ZiX2NsZWFy
X21taW8gPSBjYXBzICYgQVJDSF9DQVBTX0ZCX0NMRUFSOworCisgICAgLyoK
ICAgICAgKiBCeSBkZWZhdWx0LCBlbmFibGUgUFYgYW5kIEhWTSBtaXRpZ2F0
aW9ucyBvbiBNRFMtdnVsbmVyYWJsZSBoYXJkd2FyZS4KICAgICAgKiBUaGlz
IHdpbGwgb25seSBiZSBhIHRva2VuIGVmZm9ydCBmb3IgTUxQRFMvTUZCRFMg
d2hlbiBIVCBpcyBlbmFibGVkLAogICAgICAqIGJ1dCBpdCBpcyBzb21ld2hh
dCBiZXR0ZXIgdGhhbiBub3RoaW5nLgpAQCAtMTIwOSwxOCArMTIyOCwyMCBA
QCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25zKHZv
aWQpCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vdF9jcHVfaGFz
KFg4Nl9GRUFUVVJFX01EX0NMRUFSKSk7CiAKICAgICAvKgotICAgICAqIEVu
YWJsZSBNRFMgZGVmZW5jZXMgYXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJs
b2NrcyBuZWVkIHVzaW5nIGlmCi0gICAgICogZWl0aGVyIFBWIG9yIEhWTSBk
ZWZlbmNlcyBhcmUgdXNlZC4KKyAgICAgKiBFbmFibGUgTURTL01NSU8gZGVm
ZW5jZXMgYXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVz
aW5nIGlmCisgICAgICogZWl0aGVyIHRoZSBQViBvciBIVk0gTURTIGRlZmVu
Y2VzIGFyZSB1c2VkLCBvciBpZiB3ZSBtYXkgZ2l2ZSBNTUlPCisgICAgICog
YWNjZXNzIHRvIHVudHJ1c3RlZCBndWVzdHMuCiAgICAgICoKICAgICAgKiBI
Vk0gaXMgbW9yZSBjb21wbGljYXRlZC4gIFRoZSBNRF9DTEVBUiBtaWNyb2Nv
ZGUgZXh0ZW5kcyBMMURfRkxVU0ggd2l0aAogICAgICAqIGVxdWl2YWxlbnQg
c2VtYW50aWNzIHRvIGF2b2lkIG5lZWRpbmcgdG8gcGVyZm9ybSBib3RoIGZs
dXNoZXMgb24gdGhlCi0gICAgICogSFZNIHBhdGguICBUaGVyZWZvcmUsIHdl
IGRvbid0IG5lZWQgVkVSVyBpbiBhZGRpdGlvbiB0byBMMURfRkxVU0guCisg
ICAgICogSFZNIHBhdGguICBUaGVyZWZvcmUsIHdlIGRvbid0IG5lZWQgVkVS
VyBpbiBhZGRpdGlvbiB0byBMMURfRkxVU0ggKGZvcgorICAgICAqIE1EUyBt
aXRpZ2F0aW9ucy4gIEwxRF9GTFVTSCBpcyBub3Qgc2FmZSBmb3IgTU1JTyBt
aXRpZ2F0aW9ucy4pCiAgICAgICoKICAgICAgKiBBZnRlciBjYWxjdWxhdGlu
ZyB0aGUgYXBwcm9wcmlhdGUgaWRsZSBzZXR0aW5nLCBzaW1wbGlmeQogICAg
ICAqIG9wdF9tZF9jbGVhcl9odm0gdG8gbWVhbiBqdXN0ICJzaG91bGQgd2Ug
VkVSVyBvbiB0aGUgd2F5IGludG8gSFZNCiAgICAgICogZ3Vlc3RzIiwgc28g
c3BlY19jdHJsX2luaXRfZG9tYWluKCkgY2FuIGNhbGN1bGF0ZSBzdWl0YWJs
ZSBzZXR0aW5ncy4KICAgICAgKi8KLSAgICBpZiAoIG9wdF9tZF9jbGVhcl9w
diB8fCBvcHRfbWRfY2xlYXJfaHZtICkKKyAgICBpZiAoIG9wdF9tZF9jbGVh
cl9wdiB8fCBvcHRfbWRfY2xlYXJfaHZtIHx8IG9wdF9mYl9jbGVhcl9tbWlv
ICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVBVFVSRV9T
Q19WRVJXX0lETEUpOwogICAgIG9wdF9tZF9jbGVhcl9odm0gJj0gIShjYXBz
ICYgQVJDSF9DQVBTX1NLSVBfTDFERkwpICYmICFvcHRfbDFkX2ZsdXNoOwog
CkBAIC0xMjg1LDE0ICsxMzA2LDE5IEBAIHZvaWQgX19pbml0IGluaXRfc3Bl
Y3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAgKiBPbiBzb21lIFNS
QkRTLWFmZmVjdGVkIGhhcmR3YXJlLCBpdCBtYXkgYmUgc2FmZSB0byByZWxh
eCBzcmItbG9jayBieQogICAgICAqIGRlZmF1bHQuCiAgICAgICoKLSAgICAg
KiBPbiBwYXJ0cyB3aGljaCBlbnVtZXJhdGUgTURTX05PIGFuZCBub3QgVEFB
X05PLCBUU1ggaXMgdGhlIG9ubHkga25vd24KLSAgICAgKiB3YXkgdG8gYWNj
ZXNzIHRoZSBGaWxsIEJ1ZmZlci4gIElmIFRTWCBpc24ndCBhdmFpbGFibGUg
KGluYy4gU0tVCi0gICAgICogcmVhc29ucyBvbiBzb21lIG1vZGVscyksIG9y
IFRTWCBpcyBleHBsaWNpdGx5IGRpc2FibGVkLCB0aGVuIHRoZXJlIGlzCi0g
ICAgICogbm8gbmVlZCBmb3IgdGhlIGV4dHJhIG92ZXJoZWFkIHRvIHByb3Rl
Y3QgUkRSQU5EL1JEU0VFRC4KKyAgICAgKiBBbGwgcGFydHMgd2l0aCBTUkJE
U19DVFJMIHN1ZmZlciBTU0RQLCB0aGUgbWVjaGFuaXNtIGJ5IHdoaWNoIHN0
YWxlIFJORworICAgICAqIGRhdGEgYmVjb21lcyBhdmFpbGFibGUgdG8gb3Ro
ZXIgY29udGV4dHMuICBUbyByZWNvdmVyIHRoZSBkYXRhLCBhbgorICAgICAq
IGF0dGFja2VyIG5lZWRzIHRvIHVzZToKKyAgICAgKiAgLSBTQkRTIChNRFMg
b3IgVEFBIHRvIHNhbXBsZSB0aGUgY29yZXMgZmlsbCBidWZmZXIpCisgICAg
ICogIC0gU0JEUiAoQXJjaGl0ZWN0dXJhbGx5IHJldHJpZXZlIHN0YWxlIHRy
YW5zYWN0aW9uIGJ1ZmZlciBjb250ZW50cykKKyAgICAgKiAgLSBEUlBXIChB
cmNoaXRlY3R1cmFsbHkgbGF0Y2ggc3RhbGUgZmlsbCBidWZmZXIgZGF0YSkK
KyAgICAgKgorICAgICAqIE9uIE1EU19OTyBwYXJ0cywgYW5kIHdpdGggVEFB
X05PIG9yIFRTWCB1bmF2YWlsYWJsZS9kaXNhYmxlZCwgYW5kIHRoZXJlCisg
ICAgICogaXMgbm8gdW5wcml2aWxlZ2VkIE1NSU8gYWNjZXNzLCB0aGUgUk5H
IGRhdGEgZG9lc24ndCBuZWVkIHByb3RlY3RpbmcuCiAgICAgICovCiAgICAg
aWYgKCBjcHVfaGFzX3NyYmRzX2N0cmwgKQogICAgIHsKLSAgICAgICAgaWYg
KCBvcHRfc3JiX2xvY2sgPT0gLTEgJiYKKyAgICAgICAgaWYgKCBvcHRfc3Ji
X2xvY2sgPT0gLTEgJiYgIW9wdF91bnByaXZfbW1pbyAmJgogICAgICAgICAg
ICAgIChjYXBzICYgKEFSQ0hfQ0FQU19NRFNfTk98QVJDSF9DQVBTX1RBQV9O
TykpID09IEFSQ0hfQ0FQU19NRFNfTk8gJiYKICAgICAgICAgICAgICAoIWNw
dV9oYXNfaGxlIHx8ICgoY2FwcyAmIEFSQ0hfQ0FQU19UU1hfQ1RSTCkgJiYg
cnRtX2Rpc2FibGVkKSkgKQogICAgICAgICAgICAgb3B0X3NyYl9sb2NrID0g
MDsK

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.13-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCBWRVJXIGZsdXNo
aW5nIGlzIHVzZWQgYW5kIGNvbnRyb2xsZWQgZXhhY3RseSBhcyBiZWZvcmUs
IGJ1dCBsYXRlcgpwYXRjaGVzIHdpbGwgYWRkIHBlci1kb21haW4gY2FzZXMg
dG9vLgoKTm8gY2hhbmdlIGluIGJlaGF2aW91ci4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5k
cmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBh
dSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L2RvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlz
Yy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwppbmRleCBlZWFkNjlhZGEyYzIu
LmU4YmRmMzBmYTQ2YyAxMDA2NDQKLS0tIGEvZG9jcy9taXNjL3hlbi1jb21t
YW5kLWxpbmUucGFuZG9jCisrKyBiL2RvY3MvbWlzYy94ZW4tY29tbWFuZC1s
aW5lLnBhbmRvYwpAQCAtMjA1OCw5ICsyMDU4LDggQEAgaW4gcGxhY2UgZm9y
IGd1ZXN0cyB0byB1c2UuCiBVc2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZh
bHVlIGZvciBlaXRoZXIgb2YgdGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgog
CiBUaGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNi
PWAgYW5kIGBtZC1jbGVhcj1gIG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJv
bCBvdmVyIHRoZSBhbHRlcm5hdGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBU
aGVzZSBpbXBhY3QgWGVuJ3MKLWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYs
IGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3Ig
Z3Vlc3RzCi10byB1c2UuCitncmFpbmVkIGNvbnRyb2wgb3ZlciB0aGUgcHJp
bWl0aXZlcyBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
bworcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IDgyMGNiMGY5MDU1OC4uZmU5NWIyNWEwMzRlIDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC02NTEsNiArNjUxLDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZG9tYWluX2Nw
dV9wb2xpY3lfY2hhbmdlZChkKTsKIAorICAgIHNwZWNfY3RybF9pbml0X2Rv
bWFpbihkKTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTE3NDYs
MTQgKzE3NDgsMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2
b2lkKQogdm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwg
c3RydWN0IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9
IHNtcF9wcm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmlu
Zm8gPSBnZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWlu
ICpwcmV2ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWlu
OwogICAgIHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSBuZXh0LT5kaXJ0eV9j
cHU7CiAKICAgICBBU1NFUlQocHJldiAhPSBuZXh0KTsKICAgICBBU1NFUlQo
bG9jYWxfaXJxX2lzX2VuYWJsZWQoKSk7CiAKLSAgICBnZXRfY3B1X2luZm8o
KS0+dXNlX3B2X2NyMyA9IGZhbHNlOwotICAgIGdldF9jcHVfaW5mbygpLT54
ZW5fY3IzID0gMDsKKyAgICBpbmZvLT51c2VfcHZfY3IzID0gZmFsc2U7Cisg
ICAgaW5mby0+eGVuX2NyMyA9IDA7CiAKICAgICBpZiAoIHVubGlrZWx5KGRp
cnR5X2NwdSAhPSBjcHUpICYmIGRpcnR5X2NwdSAhPSBWQ1BVX0NQVV9DTEVB
TiApCiAgICAgewpAQCAtMTgxNiw2ICsxODE5LDExIEBAIHZvaWQgY29udGV4
dF9zd2l0Y2goc3RydWN0IHZjcHUgKnByZXYsIHN0cnVjdCB2Y3B1ICpuZXh0
KQogICAgICAgICAgICAgICAgICpsYXN0X2lkID0gbmV4dF9pZDsKICAgICAg
ICAgICAgIH0KICAgICAgICAgfQorCisgICAgICAgIC8qIFVwZGF0ZSB0aGUg
dG9wLW9mLXN0YWNrIGJsb2NrIHdpdGggdGhlIFZFUlcgZGlzcG9zaXRpb24u
ICovCisgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyAmPSB+U0NGX3Zl
cnc7CisgICAgICAgIGlmICggbmV4dGQtPmFyY2gudmVydyApCisgICAgICAg
ICAgICBpbmZvLT5zcGVjX2N0cmxfZmxhZ3MgfD0gU0NGX3Zlcnc7CiAgICAg
fQogCiAgICAgc2NoZWRfY29udGV4dF9zd2l0Y2hlZChwcmV2LCBuZXh0KTsK
ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMgYi94
ZW4vYXJjaC94ODYvaHZtL3ZteC9lbnRyeS5TCmluZGV4IDI3YzhjNWNhNDk0
My4uNjJlZDBkODU0ZGYxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvaHZt
L3ZteC9lbnRyeS5TCisrKyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5
LlMKQEAgLTgxLDYgKzgxLDcgQEAgVU5MSUtFTFlfRU5EKHJlYWxtb2RlKQog
CiAgICAgICAgIC8qIFdBUk5JTkchIGByZXRgLCBgY2FsbCAqYCwgYGptcCAq
YCBub3Qgc2FmZSBiZXlvbmQgdGhpcyBwb2ludC4gKi8KICAgICAgICAgU1BF
Q19DVFJMX0VYSVRfVE9fSFZNICAgLyogUmVxOiBhPXNwZWNfY3RybCAlcnNw
PXJlZ3MvY3B1aW5mbywgQ2xvYjogY2QgKi8KKyAgICAgICAgRE9fU1BFQ19D
VFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAgVkNQVV9odm1fZ3Vlc3Rf
Y3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3Nw
ZWNfY3RybC5jIGIveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jCmluZGV4IDc0
NDdkNGE4ZTViNS4uMzhlMWYxMDk4MjEwIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvc3BlY19jdHJsLmMKKysrIGIveGVuL2FyY2gveDg2L3NwZWNfY3Ry
bC5jCkBAIC0zNSw4ICszNSw4IEBAIHN0YXRpYyBib29sIF9faW5pdGRhdGEg
b3B0X21zcl9zY19wdiA9IHRydWU7CiBzdGF0aWMgYm9vbCBfX2luaXRkYXRh
IG9wdF9tc3Jfc2NfaHZtID0gdHJ1ZTsKIHN0YXRpYyBib29sIF9faW5pdGRh
dGEgb3B0X3JzYl9wdiA9IHRydWU7CiBzdGF0aWMgYm9vbCBfX2luaXRkYXRh
IG9wdF9yc2JfaHZtID0gdHJ1ZTsKLXN0YXRpYyBpbnQ4X3QgX19pbml0ZGF0
YSBvcHRfbWRfY2xlYXJfcHYgPSAtMTsKLXN0YXRpYyBpbnQ4X3QgX19pbml0
ZGF0YSBvcHRfbWRfY2xlYXJfaHZtID0gLTE7CitzdGF0aWMgaW50OF90IF9f
cmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX3B2ID0gLTE7CitzdGF0aWMgaW50
OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX2h2bSA9IC0xOwogCiAv
KiBDbWRsaW5lIGNvbnRyb2xzIGZvciBYZW4ncyBzcGVjdWxhdGl2ZSBzZXR0
aW5ncy4gKi8KIHN0YXRpYyBlbnVtIGluZF90aHVuayB7CkBAIC04NzgsNiAr
ODc4LDEzIEBAIHN0YXRpYyBfX2luaXQgdm9pZCBtZHNfY2FsY3VsYXRpb25z
KHVpbnQ2NF90IGNhcHMpCiAgICAgfQogfQogCit2b2lkIHNwZWNfY3RybF9p
bml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQoreworICAgIGJvb2wgcHYg
PSBpc19wdl9kb21haW4oZCk7CisKKyAgICBkLT5hcmNoLnZlcncgPSBwdiA/
IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm07Cit9CisKIHZv
aWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkK
IHsKICAgICBlbnVtIGluZF90aHVuayB0aHVuayA9IFRIVU5LX0RFRkFVTFQ7
CkBAIC0xMDc4LDIxICsxMDg1LDIwIEBAIHZvaWQgX19pbml0IGluaXRfc3Bl
Y3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfTURfQ0xFQVIp
KTsKIAogICAgIC8qCi0gICAgICogRW5hYmxlIE1EUyBkZWZlbmNlcyBhcyBh
cHBsaWNhYmxlLiAgVGhlIFBWIGJsb2NrcyBuZWVkIHVzaW5nIGFsbCB0aGUK
LSAgICAgKiB0aW1lLCBhbmQgdGhlIElkbGUgYmxvY2tzIG5lZWQgdXNpbmcg
aWYgZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUKLSAgICAgKiB1c2Vk
LgorICAgICAqIEVuYWJsZSBNRFMgZGVmZW5jZXMgYXMgYXBwbGljYWJsZS4g
IFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5nIGlmCisgICAgICogZWl0aGVy
IFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUgdXNlZC4KICAgICAgKgogICAgICAq
IEhWTSBpcyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhlIE1EX0NMRUFSIG1pY3Jv
Y29kZSBleHRlbmRzIEwxRF9GTFVTSCB3aXRoCi0gICAgICogZXF1aXZlbGVu
dCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBwZXJmb3JtIGJvdGgg
Zmx1c2hlcyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0aC4gIFRoZSBIVk0gYmxv
Y2tzIGRvbid0IG5lZWQgYWN0aXZhdGluZyBpZiBvdXIgaHlwZXJ2aXNvciB0
b2xkCi0gICAgICogdXMgaXQgd2FzIGhhbmRsaW5nIEwxRF9GTFVTSCwgb3Ig
d2UgYXJlIHVzaW5nIEwxRF9GTFVTSCBvdXJzZWx2ZXMuCisgICAgICogZXF1
aXZhbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBwZXJmb3Jt
IGJvdGggZmx1c2hlcyBvbiB0aGUKKyAgICAgKiBIVk0gcGF0aC4gIFRoZXJl
Zm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGluIGFkZGl0aW9uIHRvIEwxRF9G
TFVTSC4KKyAgICAgKgorICAgICAqIEFmdGVyIGNhbGN1bGF0aW5nIHRoZSBh
cHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBsaWZ5CisgICAgICogb3B0
X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNob3VsZCB3ZSBWRVJXIG9u
IHRoZSB3YXkgaW50byBIVk0KKyAgICAgKiBndWVzdHMiLCBzbyBzcGVjX2N0
cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRlIHN1aXRhYmxlIHNldHRp
bmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21kX2NsZWFyX3B2ICkKLSAg
ICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVBVFVSRV9TQ19WRVJX
X1BWKTsKICAgICBpZiAoIG9wdF9tZF9jbGVhcl9wdiB8fCBvcHRfbWRfY2xl
YXJfaHZtICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVB
VFVSRV9TQ19WRVJXX0lETEUpOwotICAgIGlmICggb3B0X21kX2NsZWFyX2h2
bSAmJiAhKGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9MMURGTCkgJiYgIW9wdF9s
MWRfZmx1c2ggKQotICAgICAgICBzZXR1cF9mb3JjZV9jcHVfY2FwKFg4Nl9G
RUFUVVJFX1NDX1ZFUldfSFZNKTsKKyAgICBvcHRfbWRfY2xlYXJfaHZtICY9
ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wxREZMKSAmJiAhb3B0X2wxZF9m
bHVzaDsKIAogICAgIC8qCiAgICAgICogV2FybiB0aGUgdXNlciBpZiB0aGV5
IGFyZSBvbiBNTFBEUy9NRkJEUy12dWxuZXJhYmxlIGhhcmR3YXJlIHdpdGgg
SFQKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVy
ZXMuaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVyZXMuaAppbmRl
eCBhODIyMmU5NzhjZDkuLmJjYmE5MjZiZGE0MSAxMDA2NDQKLS0tIGEveGVu
L2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCisrKyBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvY3B1ZmVhdHVyZXMuaApAQCAtMzUsOCArMzUsNyBAQCBY
RU5fQ1BVRkVBVFVSRShTQ19SU0JfSFZNLCAgICAgICAgWDg2X1NZTlRIKDE5
KSkgLyogUlNCIG92ZXJ3cml0ZSBuZWVkZWQgZm9yIEhWTQogWEVOX0NQVUZF
QVRVUkUoWEVOX1NFTEZTTk9PUCwgICAgIFg4Nl9TWU5USCgyMCkpIC8qIFNF
TEZTTk9PUCBnZXRzIHVzZWQgYnkgWGVuIGl0c2VsZiAqLwogWEVOX0NQVUZF
QVRVUkUoU0NfTVNSX0lETEUsICAgICAgIFg4Nl9TWU5USCgyMSkpIC8qIChT
Q19NU1JfUFYgfHwgU0NfTVNSX0hWTSkgJiYgZGVmYXVsdF94ZW5fc3BlY19j
dHJsICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fTEJSLCAgICAgICAgICAgWDg2
X1NZTlRIKDIyKSkgLyogWGVuIHVzZXMgTVNSX0RFQlVHQ1RMLkxCUiAqLwot
WEVOX0NQVUZFQVRVUkUoU0NfVkVSV19QViwgICAgICAgIFg4Nl9TWU5USCgy
MykpIC8qIFZFUlcgdXNlZCBieSBYZW4gZm9yIFBWICovCi1YRU5fQ1BVRkVB
VFVSRShTQ19WRVJXX0hWTSwgICAgICAgWDg2X1NZTlRIKDI0KSkgLyogVkVS
VyB1c2VkIGJ5IFhlbiBmb3IgSFZNICovCisvKiBCaXRzIDIzLDI0IHVudXNl
ZC4gKi8KIFhFTl9DUFVGRUFUVVJFKFNDX1ZFUldfSURMRSwgICAgICBYODZf
U1lOVEgoMjUpKSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBpZGxlICovCiAK
IC8qIEJ1ZyB3b3JkcyBmb2xsb3cgdGhlIHN5bnRoZXRpYyB3b3Jkcy4gKi8K
ZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvZG9tYWluLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L2RvbWFpbi5oCmluZGV4IDMwOWI1NmUyZDZi
Ny4uNzFkMWNhMjQzYjMyIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20t
eDg2L2RvbWFpbi5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvZG9tYWlu
LmgKQEAgLTI5NSw2ICsyOTUsOSBAQCBzdHJ1Y3QgYXJjaF9kb21haW4KICAg
ICB1aW50MzJfdCBwY2lfY2Y4OwogICAgIHVpbnQ4X3QgY21vc19pZHg7CiAK
KyAgICAvKiBVc2UgVkVSVyBvbiByZXR1cm4tdG8tZ3Vlc3QgZm9yIGl0cyBm
bHVzaGluZyBzaWRlIGVmZmVjdC4gKi8KKyAgICBib29sIHZlcnc7CisKICAg
ICB1bmlvbiB7CiAgICAgICAgIHN0cnVjdCBwdl9kb21haW4gcHY7CiAgICAg
ICAgIHN0cnVjdCBodm1fZG9tYWluIGh2bTsKZGlmZiAtLWdpdCBhL3hlbi9p
bmNsdWRlL2FzbS14ODYvc3BlY19jdHJsLmggYi94ZW4vaW5jbHVkZS9hc20t
eDg2L3NwZWNfY3RybC5oCmluZGV4IGIyNTJiYjg2MzExMS4uMTU3YTJjNjdk
ODljIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3Ry
bC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvc3BlY19jdHJsLmgKQEAg
LTI0LDYgKzI0LDcgQEAKICNkZWZpbmUgU0NGX3VzZV9zaGFkb3cgKDEgPDwg
MCkKICNkZWZpbmUgU0NGX2lzdF93cm1zciAgKDEgPDwgMSkKICNkZWZpbmUg
U0NGX2lzdF9yc2IgICAgKDEgPDwgMikKKyNkZWZpbmUgU0NGX3ZlcncgICAg
ICAgKDEgPDwgMykKIAogI2lmbmRlZiBfX0FTU0VNQkxZX18KIApAQCAtMzIs
NiArMzMsNyBAQAogI2luY2x1ZGUgPGFzbS9tc3ItaW5kZXguaD4KIAogdm9p
ZCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25zKHZvaWQpOwordm9pZCBz
cGVjX2N0cmxfaW5pdF9kb21haW4oc3RydWN0IGRvbWFpbiAqZCk7CiAKIGV4
dGVybiBib29sIG9wdF9pYnBiOwogZXh0ZXJuIGJvb2wgb3B0X3NzYmQ7CmRp
ZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybF9hc20u
aCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvc3BlY19jdHJsX2FzbS5oCmluZGV4
IGM2MDA5M2IwOTBiNS4uNGEzNzc3Y2M1MjI3IDEwMDY0NAotLS0gYS94ZW4v
aW5jbHVkZS9hc20teDg2L3NwZWNfY3RybF9hc20uaAorKysgYi94ZW4vaW5j
bHVkZS9hc20teDg2L3NwZWNfY3RybF9hc20uaApAQCAtMTQxLDYgKzE0MSwx
OSBAQAogICAgIHdybXNyCiAuZW5kbQogCisubWFjcm8gRE9fU1BFQ19DVFJM
X0NPTkRfVkVSVworLyoKKyAqIFJlcXVpcmVzICVyc3A9Y3B1aW5mbworICoK
KyAqIElzc3VlIGEgVkVSVyBmb3IgaXRzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
LCBpZiBpbmRpY2F0ZWQuICBUaGlzIGlzIGEgU3BlY3RyZQorICogdjEgZ2Fk
Z2V0LCBidXQgdGhlIElSRVQvVk1FbnRyeSBpcyBzZXJpYWxpc2luZy4KKyAq
LworICAgIHRlc3RiICRTQ0ZfdmVydywgQ1BVSU5GT19zcGVjX2N0cmxfZmxh
Z3MoJXJzcCkKKyAgICBqeiAuTFxAX3Zlcndfc2tpcAorICAgIHZlcncgQ1BV
SU5GT192ZXJ3X3NlbCglcnNwKQorLkxcQF92ZXJ3X3NraXA6CisuZW5kbQor
CiAubWFjcm8gRE9fU1BFQ19DVFJMX0VOVFJZIG1heWJleGVuOnJlcQogLyoK
ICAqIFJlcXVpcmVzICVyc3A9cmVncyAoYWxzbyBjcHVpbmZvIGlmICFtYXli
ZXhlbikKQEAgLTI0MiwxNSArMjU1LDEyIEBACiAjZGVmaW5lIFNQRUNfQ1RS
TF9FWElUX1RPX1BWICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgQUxURVJOQVRJVkUgIiIsICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgICAgIERPX1NQRUNfQ1RSTF9FWElUX1RPX0dVRVNULCBYODZfRkVBVFVS
RV9TQ19NU1JfUFY7ICAgICAgICAgICAgICBcCi0gICAgQUxURVJOQVRJVkUg
IiIsIF9fc3RyaW5naWZ5KHZlcncgQ1BVSU5GT192ZXJ3X3NlbCglcnNwKSks
ICAgICAgICAgICBcCi0gICAgICAgIFg4Nl9GRUFUVVJFX1NDX1ZFUldfUFYK
KyAgICBET19TUEVDX0NUUkxfQ09ORF9WRVJXCiAKIC8qIFVzZSB3aGVuIGV4
aXRpbmcgdG8gSFZNIGd1ZXN0IGNvbnRleHQuICovCiAjZGVmaW5lIFNQRUNf
Q1RSTF9FWElUX1RPX0hWTSAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCiAgICAgQUxURVJOQVRJVkUgIiIsICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CiAgICAgICAgIERPX1NQRUNfQ1RSTF9FWElUX1RPX0dVRVNULCBYODZfRkVB
VFVSRV9TQ19NU1JfSFZNOyAgICAgICAgICAgICBcCi0gICAgQUxURVJOQVRJ
VkUgIiIsIF9fc3RyaW5naWZ5KHZlcncgQ1BVSU5GT192ZXJ3X3NlbCglcnNw
KSksICAgICAgICAgICBcCi0gICAgICAgIFg4Nl9GRUFUVVJFX1NDX1ZFUldf
SFZNCiAKIC8qCiAgKiBVc2UgaW4gSVNUIGludGVycnVwdC9leGNlcHRpb24g
Y29udGV4dC4gIE1heSBpbnRlcnJ1cHQgWGVuIG9yIFBWIGNvbnRleHQuCg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.13-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYyBiL3hlbi9hcmNoL3g4Ni9z
cGVjX2N0cmwuYwppbmRleCAzOGUxZjEwOTgyMTAuLmZkMzY5MjdiYTFjYiAx
MDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jCisrKyBiL3hl
bi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAtMzE4LDcgKzMxOCw3IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgIHByaW50aygiU3BlY3VsYXRp
dmUgbWl0aWdhdGlvbiBmYWNpbGl0aWVzOlxuIik7CiAKICAgICAvKiBIYXJk
d2FyZSBmZWF0dXJlcyB3aGljaCBwZXJ0YWluIHRvIHNwZWN1bGF0aXZlIG1p
dGlnYXRpb25zLiAqLwotICAgIHByaW50aygiICBIYXJkd2FyZSBmZWF0dXJl
czolcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXNcbiIsCisgICAgcHJp
bnRrKCIgIEhhcmR3YXJlIGZlYXR1cmVzOiVzJXMlcyVzJXMlcyVzJXMlcyVz
JXMlcyVzJXMlcyVzJXMlcyVzJXNcbiIsCiAgICAgICAgICAgIChfN2QwICYg
Y3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlNCKSkgPyAiIElCUlMvSUJQ
QiIgOiAiIiwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2soWDg2
X0ZFQVRVUkVfU1RJQlApKSA/ICIgU1RJQlAiICAgICA6ICIiLAogICAgICAg
ICAgICAoXzdkMCAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9MMURfRkxV
U0gpKSA/ICIgTDFEX0ZMVVNIIiA6ICIiLApAQCAtMzMzLDcgKzMzMywxMiBA
QCBzdGF0aWMgdm9pZCBfX2luaXQgcHJpbnRfZGV0YWlscyhlbnVtIGluZF90
aHVuayB0aHVuaywgdWludDY0X3QgY2FwcykKICAgICAgICAgICAgKGNhcHMg
JiBBUkNIX0NBUFNfU1NCX05PKSAgICAgICAgICAgICAgICA/ICIgU1NCX05P
IiAgICA6ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19NRFNf
Tk8pICAgICAgICAgICAgICAgID8gIiBNRFNfTk8iICAgIDogIiIsCiAgICAg
ICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJMKSAgICAgICAgICAg
ICAgPyAiIFRTWF9DVFJMIiAgOiAiIiwKLSAgICAgICAgICAgKGNhcHMgJiBB
UkNIX0NBUFNfVEFBX05PKSAgICAgICAgICAgICAgICA/ICIgVEFBX05PIiAg
ICA6ICIiKTsKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfVEFBX05P
KSAgICAgICAgICAgICAgICA/ICIgVEFBX05PIiAgICA6ICIiLAorICAgICAg
ICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TQkRSX1NTRFBfTk8pICAgICAgICAg
ID8gIiBTQkRSX1NTRFBfTk8iIDogIiIsCisgICAgICAgICAgIChjYXBzICYg
QVJDSF9DQVBTX0ZCU0RQX05PKSAgICAgICAgICAgICAgPyAiIEZCU0RQX05P
IiAgOiAiIiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfUFNEUF9O
TykgICAgICAgICAgICAgICA/ICIgUFNEUF9OTyIgICA6ICIiLAorICAgICAg
ICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19GQl9DTEVBUikgICAgICAgICAgICAg
ID8gIiBGQl9DTEVBUiIgIDogIiIsCisgICAgICAgICAgIChjYXBzICYgQVJD
SF9DQVBTX0ZCX0NMRUFSX0NUUkwpICAgICAgICAgPyAiIEZCX0NMRUFSX0NU
UkwiIDogIiIpOwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3VwcG9ydCB3aGlj
aCBwZXJ0YWlucyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBpZiAoIElTX0VO
QUJMRUQoQ09ORklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19FTkFCTEVEKENP
TkZJR19TSEFET1dfUEFHSU5HKSApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVk
ZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
c3ItaW5kZXguaAppbmRleCBiYTllOTBhZjIxMGIuLjJhODA2NjBkODQ5ZCAx
MDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAor
KysgYi94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC01NSw2
ICs1NSwxMSBAQAogI2RlZmluZSBBUkNIX0NBUFNfSUZfUFNDSEFOR0VfTUNf
Tk8JKF9BQygxLCBVTEwpIDw8IDYpCiAjZGVmaW5lIEFSQ0hfQ0FQU19UU1hf
Q1RSTAkJKF9BQygxLCBVTEwpIDw8IDcpCiAjZGVmaW5lIEFSQ0hfQ0FQU19U
QUFfTk8JCShfQUMoMSwgVUxMKSA8PCA4KQorI2RlZmluZSBBUkNIX0NBUFNf
U0JEUl9TU0RQX05PCQkoX0FDKDEsIFVMTCkgPDwgMTMpCisjZGVmaW5lIEFS
Q0hfQ0FQU19GQlNEUF9OTwkJKF9BQygxLCBVTEwpIDw8IDE0KQorI2RlZmlu
ZSBBUkNIX0NBUFNfUFNEUF9OTwkJKF9BQygxLCBVTEwpIDw8IDE1KQorI2Rl
ZmluZSBBUkNIX0NBUFNfRkJfQ0xFQVIJCShfQUMoMSwgVUxMKSA8PCAxNykK
KyNkZWZpbmUgQVJDSF9DQVBTX0ZCX0NMRUFSX0NUUkwJCShfQUMoMSwgVUxM
KSA8PCAxOCkKIAogI2RlZmluZSBNU1JfRkxVU0hfQ01ECQkJMHgwMDAwMDEw
YgogI2RlZmluZSBGTFVTSF9DTURfTDFECQkJKF9BQygxLCBVTEwpIDw8IDAp
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.13-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpPbiBDUFVzIHZ1bG5lcmFibGUg
dG8gTURTLCB0aGUgZXhpc3RpbmcgbWl0aWdhdGlvbnMgYXJlIHRoZSBiZXN0
IHdlIGNhbiBkbyB0bwptaXRpZ2F0ZSBNTUlPIGNyb3NzLWRvbWFpbiBkYXRh
IGxlYWthZ2UuCgpPbiBDUFVzIGZpeGVkIHRvIE1EUyBidXQgdnVsbmVyYWJs
ZSBNTUlPIHN0YWxlIGRhdGEgbGVha2FnZSwgdGhpcyBvcHRpb246CgogKiBP
biBDUFVzIHN1c2NlcHRpYmxlIHRvIEZCU0RQLCBtaXRpZ2F0ZXMgY3Jvc3Mt
ZG9tYWluIGZpbGwgYnVmZmVyIGxlYWthZ2UKICAgdXNpbmcgRkJfQ0xFQVIu
CiAqIE9uIENQVXMgc3VzY2VwdGlibGUgdG8gU0JEUiwgbWl0aWdhdGVzIFJO
RyBkYXRhIHJlY292ZXJ5IGJ5IGVuZ2FnaW5nIHRoZQogICBzcmItbG9jaywg
cHJldmlvdXNseSB1c2VkIHRvIG1pdGlnYXRlIFNSQkRTLgoKQm90aCBtaXRp
Z2F0aW9ucyByZXF1aXJlIG1pY3JvY29kZSBmcm9tIElQVSAyMDIyLjEsIE1h
eSAyMDIyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDQuCgpTaWduZWQtb2Zm
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PgpSZXZpZXdlZC1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Ci0tLQpCYWNrcG9ydGluZyBub3RlOiBGb3IgWGVuIDQuNyBh
bmQgZWFybGllciB3aXRoIGJvb2xfdCBub3QgYWxpYXNpbmcgYm9vbCwgdGhl
CkFSQ0hfQ0FQU19GQl9DTEVBUiBodW5rIG5lZWRzICEhCgpkaWZmIC0tZ2l0
IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmluZGV4IGU4YmRmMzBmYTQ2
Yy4uMDIyY2IwMWRhNzYyIDEwMDY0NAotLS0gYS9kb2NzL21pc2MveGVuLWNv
bW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jCkBAIC0yMDM1LDcgKzIwMzUsNyBAQCBCeSBkZWZhdWx0
IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZSAoaS5lIGBzc2Jk
PXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4NikKID4gYD0gTGlzdCBv
ZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2bSxtc3Itc2MscnNiLG1k
LWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAgICBidGktdGh1bms9cmV0
cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIsc3NiZCxlYWdlci1mcHUs
Ci0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmIt
bG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1pb309PGJvb2w+IF1gCiAK
IENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVjdXRpb24gc2lkZWNoYW5u
ZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBYZW4KIHdpbGwgcGljayB0
aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9ucyBiYXNlZCBvbiBjb21w
aWxlZCBpbiBzdXBwb3J0LApAQCAtMjExNCw4ICsyMTE0LDE2IEBAIFhlbiB3
aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBPbiBoYXJkd2FyZSBzdXBw
b3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxvY2s9YCBvcHRpb24gY2Fu
IGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQgWGVuIGZyb20gcHJvdGVj
dCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIgZnJvbSBsZWFraW5nIHN0
YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2lsbCBlbmFibGUgdGhpcyBt
aXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hlcmUgTURTCi1pcyBmaXhl
ZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAoaW4gd2hpY2ggY2FzZSwg
dGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdheSBmb3IgYW4gYXR0YWNr
ZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4KK2lzIGZpeGVkIGFuZCBU
QUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVyZSBhcmUgbm8gdW5wcml2
aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGljaCBjYXNlLCB0aGVyZSBp
cyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFuIGF0dGFja2VyIHRvCitv
YnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5wcml2LW1taW89YCBib29s
ZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0ZW0gaGFzIChvciB3aWxs
IGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmlsZWdlZCBkb21haW5zIGdy
YW50ZWQgYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIEJ5CitkZWZhdWx0LCB0
aGlzIG9wdGlvbiBpcyBkaXNhYmxlZC4gIElmIGVuYWJsZWQsIFhlbiB3aWxs
IHVzZSB0aGUgYEZCX0NMRUFSYAorYW5kL29yIGBTUkJEU19DVFJMYCBmdW5j
dGlvbmFsaXR5IGF2YWlsYWJsZSBpbiB0aGUgSW50ZWwgTWF5IDIwMjIgbWlj
cm9jb2RlCityZWxlYXNlIHRvIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGRhdGEgdmlhIHRoZSBNTUlPIFN0YWxlIERhdGEKK3Z1bG5lcmFi
aWxpdGllcy4KIAogIyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5g
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKaW5kZXggZmQzNjkyN2JhMWNiLi5kNGJh
OTQxMjA2N2IgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YworKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3
LDggQEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tZHM7
IC8qIEFueSBvdGhlciBNe0xQLFNCLEZCfURTIGNvbWJpbmF0aW9uLgogCiBz
dGF0aWMgaW50OF90IF9faW5pdGRhdGEgb3B0X3NyYl9sb2NrID0gLTE7CiB1
aW50NjRfdCBfX3JlYWRfbW9zdGx5IGRlZmF1bHRfeGVuX21jdV9vcHRfY3Ry
bDsKK3N0YXRpYyBib29sIF9faW5pdGRhdGEgb3B0X3VucHJpdl9tbWlvOwor
c3RhdGljIGJvb2wgX19yZWFkX21vc3RseSBvcHRfZmJfY2xlYXJfbW1pbzsK
IAogc3RhdGljIGludCBfX2luaXQgcGFyc2Vfc3BlY19jdHJsKGNvbnN0IGNo
YXIgKnMpCiB7CkBAIC0xODQsNiArMTg2LDggQEAgc3RhdGljIGludCBfX2lu
aXQgcGFyc2Vfc3BlY19jdHJsKGNvbnN0IGNoYXIgKnMpCiAgICAgICAgICAg
ICBvcHRfYnJhbmNoX2hhcmRlbiA9IHZhbDsKICAgICAgICAgZWxzZSBpZiAo
ICh2YWwgPSBwYXJzZV9ib29sZWFuKCJzcmItbG9jayIsIHMsIHNzKSkgPj0g
MCApCiAgICAgICAgICAgICBvcHRfc3JiX2xvY2sgPSB2YWw7CisgICAgICAg
IGVsc2UgaWYgKCAodmFsID0gcGFyc2VfYm9vbGVhbigidW5wcml2LW1taW8i
LCBzLCBzcykpID49IDAgKQorICAgICAgICAgICAgb3B0X3VucHJpdl9tbWlv
ID0gdmFsOwogICAgICAgICBlbHNlCiAgICAgICAgICAgICByYyA9IC1FSU5W
QUw7CiAKQEAgLTM2Nyw3ICszNzEsOCBAQCBzdGF0aWMgdm9pZCBfX2luaXQg
cHJpbnRfZGV0YWlscyhlbnVtIGluZF90aHVuayB0aHVuaywgdWludDY0X3Qg
Y2FwcykKICAgICAgICAgICAgb3B0X3NyYl9sb2NrICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgPyAiIFNSQl9MT0NLKyIgOiAiIFNSQl9MT0NLLSIs
CiAgICAgICAgICAgIG9wdF9pYnBiICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgID8gIiBJQlBCIiAgOiAiIiwKICAgICAgICAgICAgb3B0X2wx
ZF9mbHVzaCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIEwxRF9G
TFVTSCIgOiAiIiwKLSAgICAgICAgICAgb3B0X21kX2NsZWFyX3B2IHx8IG9w
dF9tZF9jbGVhcl9odm0gICAgICAgPyAiIFZFUlciICA6ICIiLAorICAgICAg
ICAgICBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2bSB8fAor
ICAgICAgICAgICBvcHRfZmJfY2xlYXJfbW1pbyAgICAgICAgICAgICAgICAg
ICAgICAgICA/ICIgVkVSVyIgIDogIiIsCiAgICAgICAgICAgIG9wdF9icmFu
Y2hfaGFyZGVuICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBCUkFOQ0hf
SEFSREVOIiA6ICIiKTsKIAogICAgIC8qIEwxVEYgZGlhZ25vc3RpY3MsIHBy
aW50ZWQgaWYgdnVsbmVyYWJsZSBvciBQViBzaGFkb3dpbmcgaXMgaW4gdXNl
LiAqLwpAQCAtODg3LDcgKzg5Miw5IEBAIHZvaWQgc3BlY19jdHJsX2luaXRf
ZG9tYWluKHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAgYm9vbCBwdiA9IGlz
X3B2X2RvbWFpbihkKTsKIAotICAgIGQtPmFyY2gudmVydyA9IHB2ID8gb3B0
X21kX2NsZWFyX3B2IDogb3B0X21kX2NsZWFyX2h2bTsKKyAgICBkLT5hcmNo
LnZlcncgPQorICAgICAgICAocHYgPyBvcHRfbWRfY2xlYXJfcHYgOiBvcHRf
bWRfY2xlYXJfaHZtKSB8fAorICAgICAgICAob3B0X2ZiX2NsZWFyX21taW8g
JiYgaXNfaW9tbXVfZW5hYmxlZChkKSk7CiB9CiAKIHZvaWQgX19pbml0IGlu
aXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKQEAgLTEwNzgsNiAr
MTA4NSwxOCBAQCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGln
YXRpb25zKHZvaWQpCiAgICAgbWRzX2NhbGN1bGF0aW9ucyhjYXBzKTsKIAog
ICAgIC8qCisgICAgICogUGFydHMgd2hpY2ggZW51bWVyYXRlIEZCX0NMRUFS
IGFyZSB0aG9zZSB3aGljaCBhcmUgcG9zdC1NRFNfTk8gYW5kIGhhdmUKKyAg
ICAgKiByZWludHJvZHVjZWQgdGhlIFZFUlcgZmlsbCBidWZmZXIgZmx1c2hp
bmcgc2lkZSBlZmZlY3QgYmVjYXVzZSBvZiBhCisgICAgICogc3VzY2VwdGli
aWxpdHkgdG8gRkJTRFAuCisgICAgICoKKyAgICAgKiBJZiB1bnByaXZpbGVn
ZWQgZ3Vlc3RzIGhhdmUgKG9yIHdpbGwgaGF2ZSkgTU1JTyBtYXBwaW5ncywg
d2UgY2FuCisgICAgICogbWl0aWdhdGUgY3Jvc3MtZG9tYWluIGxlYWthZ2Ug
b2YgZmlsbCBidWZmZXIgZGF0YSBieSBpc3N1aW5nIFZFUlcgb24KKyAgICAg
KiB0aGUgcmV0dXJuLXRvLWd1ZXN0IHBhdGguCisgICAgICovCisgICAgaWYg
KCBvcHRfdW5wcml2X21taW8gKQorICAgICAgICBvcHRfZmJfY2xlYXJfbW1p
byA9IGNhcHMgJiBBUkNIX0NBUFNfRkJfQ0xFQVI7CisKKyAgICAvKgogICAg
ICAqIEJ5IGRlZmF1bHQsIGVuYWJsZSBQViBhbmQgSFZNIG1pdGlnYXRpb25z
IG9uIE1EUy12dWxuZXJhYmxlIGhhcmR3YXJlLgogICAgICAqIFRoaXMgd2ls
bCBvbmx5IGJlIGEgdG9rZW4gZWZmb3J0IGZvciBNTFBEUy9NRkJEUyB3aGVu
IEhUIGlzIGVuYWJsZWQsCiAgICAgICogYnV0IGl0IGlzIHNvbWV3aGF0IGJl
dHRlciB0aGFuIG5vdGhpbmcuCkBAIC0xMDkwLDE4ICsxMTA5LDIwIEBAIHZv
aWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib290X2NwdV9oYXMoWDg2
X0ZFQVRVUkVfTURfQ0xFQVIpKTsKIAogICAgIC8qCi0gICAgICogRW5hYmxl
IE1EUyBkZWZlbmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhlIElkbGUgYmxvY2tz
IG5lZWQgdXNpbmcgaWYKLSAgICAgKiBlaXRoZXIgUFYgb3IgSFZNIGRlZmVu
Y2VzIGFyZSB1c2VkLgorICAgICAqIEVuYWJsZSBNRFMvTU1JTyBkZWZlbmNl
cyBhcyBhcHBsaWNhYmxlLiAgVGhlIElkbGUgYmxvY2tzIG5lZWQgdXNpbmcg
aWYKKyAgICAgKiBlaXRoZXIgdGhlIFBWIG9yIEhWTSBNRFMgZGVmZW5jZXMg
YXJlIHVzZWQsIG9yIGlmIHdlIG1heSBnaXZlIE1NSU8KKyAgICAgKiBhY2Nl
c3MgdG8gdW50cnVzdGVkIGd1ZXN0cy4KICAgICAgKgogICAgICAqIEhWTSBp
cyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhlIE1EX0NMRUFSIG1pY3JvY29kZSBl
eHRlbmRzIEwxRF9GTFVTSCB3aXRoCiAgICAgICogZXF1aXZhbGVudCBzZW1h
bnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBwZXJmb3JtIGJvdGggZmx1c2hl
cyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0aC4gIFRoZXJlZm9yZSwgd2UgZG9u
J3QgbmVlZCBWRVJXIGluIGFkZGl0aW9uIHRvIEwxRF9GTFVTSC4KKyAgICAg
KiBIVk0gcGF0aC4gIFRoZXJlZm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGlu
IGFkZGl0aW9uIHRvIEwxRF9GTFVTSCAoZm9yCisgICAgICogTURTIG1pdGln
YXRpb25zLiAgTDFEX0ZMVVNIIGlzIG5vdCBzYWZlIGZvciBNTUlPIG1pdGln
YXRpb25zLikKICAgICAgKgogICAgICAqIEFmdGVyIGNhbGN1bGF0aW5nIHRo
ZSBhcHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBsaWZ5CiAgICAgICog
b3B0X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNob3VsZCB3ZSBWRVJX
IG9uIHRoZSB3YXkgaW50byBIVk0KICAgICAgKiBndWVzdHMiLCBzbyBzcGVj
X2N0cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRlIHN1aXRhYmxlIHNl
dHRpbmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21kX2NsZWFyX3B2IHx8
IG9wdF9tZF9jbGVhcl9odm0gKQorICAgIGlmICggb3B0X21kX2NsZWFyX3B2
IHx8IG9wdF9tZF9jbGVhcl9odm0gfHwgb3B0X2ZiX2NsZWFyX21taW8gKQog
ICAgICAgICBzZXR1cF9mb3JjZV9jcHVfY2FwKFg4Nl9GRUFUVVJFX1NDX1ZF
UldfSURMRSk7CiAgICAgb3B0X21kX2NsZWFyX2h2bSAmPSAhKGNhcHMgJiBB
UkNIX0NBUFNfU0tJUF9MMURGTCkgJiYgIW9wdF9sMWRfZmx1c2g7CiAKQEAg
LTExNzAsMTIgKzExOTEsMTggQEAgdm9pZCBfX2luaXQgaW5pdF9zcGVjdWxh
dGlvbl9taXRpZ2F0aW9ucyh2b2lkKQogICAgICAgICAgKiBPbiBzb21lIFNS
QkRTLWFmZmVjdGVkIGhhcmR3YXJlLCBpdCBtYXkgYmUgc2FmZSB0byByZWxh
eCBzcmItbG9jawogICAgICAgICAgKiBieSBkZWZhdWx0LgogICAgICAgICAg
KgotICAgICAgICAgKiBPbiBwYXJ0cyB3aGljaCBlbnVtZXJhdGUgTURTX05P
IGFuZCBub3QgVEFBX05PLCBUU1ggaXMgdGhlIG9ubHkgd2F5Ci0gICAgICAg
ICAqIHRvIGFjY2VzcyB0aGUgRmlsbCBCdWZmZXIuICBJZiBUU1ggaXNuJ3Qg
YXZhaWxhYmxlIChpbmMuIFNLVQotICAgICAgICAgKiByZWFzb25zIG9uIHNv
bWUgbW9kZWxzKSwgb3IgVFNYIGlzIGV4cGxpY2l0bHkgZGlzYWJsZWQsIHRo
ZW4gdGhlcmUKLSAgICAgICAgICogaXMgbm8gbmVlZCBmb3IgdGhlIGV4dHJh
IG92ZXJoZWFkIHRvIHByb3RlY3QgUkRSQU5EL1JEU0VFRC4KKyAgICAgICAg
ICogQWxsIHBhcnRzIHdpdGggU1JCRFNfQ1RSTCBzdWZmZXIgU1NEUCwgdGhl
IG1lY2hhbmlzbSBieSB3aGljaCBzdGFsZQorICAgICAgICAgKiBSTkcgZGF0
YSBiZWNvbWVzIGF2YWlsYWJsZSB0byBvdGhlciBjb250ZXh0cy4gIFRvIHJl
Y292ZXIgdGhlIGRhdGEsCisgICAgICAgICAqIGFuIGF0dGFja2VyIG5lZWRz
IHRvIHVzZToKKyAgICAgICAgICogIC0gU0JEUyAoTURTIG9yIFRBQSB0byBz
YW1wbGUgdGhlIGNvcmVzIGZpbGwgYnVmZmVyKQorICAgICAgICAgKiAgLSBT
QkRSIChBcmNoaXRlY3R1cmFsbHkgcmV0cmlldmUgc3RhbGUgdHJhbnNhY3Rp
b24gYnVmZmVyIGNvbnRlbnRzKQorICAgICAgICAgKiAgLSBEUlBXIChBcmNo
aXRlY3R1cmFsbHkgbGF0Y2ggc3RhbGUgZmlsbCBidWZmZXIgZGF0YSkKKyAg
ICAgICAgICoKKyAgICAgICAgICogT24gTURTX05PIHBhcnRzLCBhbmQgd2l0
aCBUQUFfTk8gb3IgVFNYIHVuYXZhaWxhYmxlL2Rpc2FibGVkLCBhbmQKKyAg
ICAgICAgICogdGhlcmUgaXMgbm8gdW5wcml2aWxlZ2VkIE1NSU8gYWNjZXNz
LCB0aGUgUk5HIGRhdGEgZG9lc24ndCBuZWVkCisgICAgICAgICAqIHByb3Rl
Y3RpbmcuCiAgICAgICAgICAqLwotICAgICAgICBpZiAoIG9wdF9zcmJfbG9j
ayA9PSAtMSAmJgorICAgICAgICBpZiAoIG9wdF9zcmJfbG9jayA9PSAtMSAm
JiAhb3B0X3VucHJpdl9tbWlvICYmCiAgICAgICAgICAgICAgKGNhcHMgJiAo
QVJDSF9DQVBTX01EU19OT3xBUkNIX0NBUFNfVEFBX05PKSkgPT0gQVJDSF9D
QVBTX01EU19OTyAmJgogICAgICAgICAgICAgICghY3B1X2hhc19obGUgfHwg
KChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJMKSAmJiBvcHRfdHN4ID09IDAp
KSApCiAgICAgICAgICAgICBvcHRfc3JiX2xvY2sgPSAwOwo=

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.14-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCBWRVJXIGZsdXNo
aW5nIGlzIHVzZWQgYW5kIGNvbnRyb2xsZWQgZXhhY3RseSBhcyBiZWZvcmUs
IGJ1dCBsYXRlcgpwYXRjaGVzIHdpbGwgYWRkIHBlci1kb21haW4gY2FzZXMg
dG9vLgoKTm8gY2hhbmdlIGluIGJlaGF2aW91ci4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5k
cmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBh
dSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L2RvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlz
Yy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwppbmRleCA1NDY3YWU3MTY4ZmYu
LmFkODU3ODVlMTRiMyAxMDA2NDQKLS0tIGEvZG9jcy9taXNjL3hlbi1jb21t
YW5kLWxpbmUucGFuZG9jCisrKyBiL2RvY3MvbWlzYy94ZW4tY29tbWFuZC1s
aW5lLnBhbmRvYwpAQCAtMjEyOSw5ICsyMTI5LDggQEAgaW4gcGxhY2UgZm9y
IGd1ZXN0cyB0byB1c2UuCiBVc2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZh
bHVlIGZvciBlaXRoZXIgb2YgdGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgog
CiBUaGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNi
PWAgYW5kIGBtZC1jbGVhcj1gIG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJv
bCBvdmVyIHRoZSBhbHRlcm5hdGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBU
aGVzZSBpbXBhY3QgWGVuJ3MKLWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYs
IGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3Ig
Z3Vlc3RzCi10byB1c2UuCitncmFpbmVkIGNvbnRyb2wgb3ZlciB0aGUgcHJp
bWl0aXZlcyBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
bworcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IDNkYTgxZWJmMWQ0MS4uNWVhNWVmNmJhMDM3IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC02NTEsNiArNjUxLDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZG9tYWluX2Nw
dV9wb2xpY3lfY2hhbmdlZChkKTsKIAorICAgIHNwZWNfY3RybF9pbml0X2Rv
bWFpbihkKTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTE3NjMs
MTQgKzE3NjUsMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2
b2lkKQogdm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwg
c3RydWN0IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9
IHNtcF9wcm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmlu
Zm8gPSBnZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWlu
ICpwcmV2ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWlu
OwogICAgIHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSByZWFkX2F0b21pYygm
bmV4dC0+ZGlydHlfY3B1KTsKIAogICAgIEFTU0VSVChwcmV2ICE9IG5leHQp
OwogICAgIEFTU0VSVChsb2NhbF9pcnFfaXNfZW5hYmxlZCgpKTsKIAotICAg
IGdldF9jcHVfaW5mbygpLT51c2VfcHZfY3IzID0gZmFsc2U7Ci0gICAgZ2V0
X2NwdV9pbmZvKCktPnhlbl9jcjMgPSAwOworICAgIGluZm8tPnVzZV9wdl9j
cjMgPSBmYWxzZTsKKyAgICBpbmZvLT54ZW5fY3IzID0gMDsKIAogICAgIGlm
ICggdW5saWtlbHkoZGlydHlfY3B1ICE9IGNwdSkgJiYgZGlydHlfY3B1ICE9
IFZDUFVfQ1BVX0NMRUFOICkKICAgICB7CkBAIC0xODM0LDYgKzE4MzcsMTEg
QEAgdm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3Ry
dWN0IHZjcHUgKm5leHQpCiAgICAgICAgICAgICAgICAgKmxhc3RfaWQgPSBu
ZXh0X2lkOwogICAgICAgICAgICAgfQogICAgICAgICB9CisKKyAgICAgICAg
LyogVXBkYXRlIHRoZSB0b3Atb2Ytc3RhY2sgYmxvY2sgd2l0aCB0aGUgVkVS
VyBkaXNwb3NpdGlvbi4gKi8KKyAgICAgICAgaW5mby0+c3BlY19jdHJsX2Zs
YWdzICY9IH5TQ0ZfdmVydzsKKyAgICAgICAgaWYgKCBuZXh0ZC0+YXJjaC52
ZXJ3ICkKKyAgICAgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyB8PSBT
Q0ZfdmVydzsKICAgICB9CiAKICAgICBzY2hlZF9jb250ZXh0X3N3aXRjaGVk
KHByZXYsIG5leHQpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS92
bXgvZW50cnkuUyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKaW5k
ZXggNDk2NTFmM2M0MzVhLi41ZjVkZTQ1YTEzMDkgMTAwNjQ0Ci0tLSBhL3hl
bi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKKysrIGIveGVuL2FyY2gveDg2
L2h2bS92bXgvZW50cnkuUwpAQCAtODcsNyArODcsNyBAQCBVTkxJS0VMWV9F
TkQocmVhbG1vZGUpCiAKICAgICAgICAgLyogV0FSTklORyEgYHJldGAsIGBj
YWxsICpgLCBgam1wICpgIG5vdCBzYWZlIGJleW9uZCB0aGlzIHBvaW50LiAq
LwogICAgICAgICAvKiBTUEVDX0NUUkxfRVhJVF9UT19WTVggICBSZXE6ICVy
c3A9cmVncy9jcHVpbmZvICAgICAgICAgICAgICBDbG9iOiAgICAqLwotICAg
ICAgICBBTFRFUk5BVElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZP
X3Zlcndfc2VsKCVyc3ApKSwgWDg2X0ZFQVRVUkVfU0NfVkVSV19IVk0KKyAg
ICAgICAgRE9fU1BFQ19DVFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAg
VkNQVV9odm1fZ3Vlc3RfY3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEv
eGVuL2FyY2gveDg2L3NwZWNfY3RybC5jIGIveGVuL2FyY2gveDg2L3NwZWNf
Y3RybC5jCmluZGV4IDFlMjI2MTAyZDM5OS4uYjRlZmM5NDBhYTJiIDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKKysrIGIveGVuL2Fy
Y2gveDg2L3NwZWNfY3RybC5jCkBAIC0zNiw4ICszNiw4IEBAIHN0YXRpYyBi
b29sIF9faW5pdGRhdGEgb3B0X21zcl9zY19wdiA9IHRydWU7CiBzdGF0aWMg
Ym9vbCBfX2luaXRkYXRhIG9wdF9tc3Jfc2NfaHZtID0gdHJ1ZTsKIHN0YXRp
YyBib29sIF9faW5pdGRhdGEgb3B0X3JzYl9wdiA9IHRydWU7CiBzdGF0aWMg
Ym9vbCBfX2luaXRkYXRhIG9wdF9yc2JfaHZtID0gdHJ1ZTsKLXN0YXRpYyBp
bnQ4X3QgX19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfcHYgPSAtMTsKLXN0YXRp
YyBpbnQ4X3QgX19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfaHZtID0gLTE7Citz
dGF0aWMgaW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX3B2ID0g
LTE7CitzdGF0aWMgaW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFy
X2h2bSA9IC0xOwogCiAvKiBDbWRsaW5lIGNvbnRyb2xzIGZvciBYZW4ncyBz
cGVjdWxhdGl2ZSBzZXR0aW5ncy4gKi8KIHN0YXRpYyBlbnVtIGluZF90aHVu
ayB7CkBAIC05MDMsNiArOTAzLDEzIEBAIHN0YXRpYyBfX2luaXQgdm9pZCBt
ZHNfY2FsY3VsYXRpb25zKHVpbnQ2NF90IGNhcHMpCiAgICAgfQogfQogCit2
b2lkIHNwZWNfY3RybF9pbml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQor
eworICAgIGJvb2wgcHYgPSBpc19wdl9kb21haW4oZCk7CisKKyAgICBkLT5h
cmNoLnZlcncgPSBwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVh
cl9odm07Cit9CisKIHZvaWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0
aWdhdGlvbnModm9pZCkKIHsKICAgICBlbnVtIGluZF90aHVuayB0aHVuayA9
IFRIVU5LX0RFRkFVTFQ7CkBAIC0xMTQ4LDIxICsxMTU1LDIwIEBAIHZvaWQg
X19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZF
QVRVUkVfTURfQ0xFQVIpKTsKIAogICAgIC8qCi0gICAgICogRW5hYmxlIE1E
UyBkZWZlbmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhlIFBWIGJsb2NrcyBuZWVk
IHVzaW5nIGFsbCB0aGUKLSAgICAgKiB0aW1lLCBhbmQgdGhlIElkbGUgYmxv
Y2tzIG5lZWQgdXNpbmcgaWYgZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBh
cmUKLSAgICAgKiB1c2VkLgorICAgICAqIEVuYWJsZSBNRFMgZGVmZW5jZXMg
YXMgYXBwbGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5nIGlm
CisgICAgICogZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUgdXNlZC4K
ICAgICAgKgogICAgICAqIEhWTSBpcyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhl
IE1EX0NMRUFSIG1pY3JvY29kZSBleHRlbmRzIEwxRF9GTFVTSCB3aXRoCi0g
ICAgICogZXF1aXZlbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0
byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0
aC4gIFRoZSBIVk0gYmxvY2tzIGRvbid0IG5lZWQgYWN0aXZhdGluZyBpZiBv
dXIgaHlwZXJ2aXNvciB0b2xkCi0gICAgICogdXMgaXQgd2FzIGhhbmRsaW5n
IEwxRF9GTFVTSCwgb3Igd2UgYXJlIHVzaW5nIEwxRF9GTFVTSCBvdXJzZWx2
ZXMuCisgICAgICogZXF1aXZhbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVl
ZGluZyB0byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKKyAgICAgKiBI
Vk0gcGF0aC4gIFRoZXJlZm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGluIGFk
ZGl0aW9uIHRvIEwxRF9GTFVTSC4KKyAgICAgKgorICAgICAqIEFmdGVyIGNh
bGN1bGF0aW5nIHRoZSBhcHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBs
aWZ5CisgICAgICogb3B0X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNo
b3VsZCB3ZSBWRVJXIG9uIHRoZSB3YXkgaW50byBIVk0KKyAgICAgKiBndWVz
dHMiLCBzbyBzcGVjX2N0cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRl
IHN1aXRhYmxlIHNldHRpbmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21k
X2NsZWFyX3B2ICkKLSAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZf
RkVBVFVSRV9TQ19WRVJXX1BWKTsKICAgICBpZiAoIG9wdF9tZF9jbGVhcl9w
diB8fCBvcHRfbWRfY2xlYXJfaHZtICkKICAgICAgICAgc2V0dXBfZm9yY2Vf
Y3B1X2NhcChYODZfRkVBVFVSRV9TQ19WRVJXX0lETEUpOwotICAgIGlmICgg
b3B0X21kX2NsZWFyX2h2bSAmJiAhKGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9M
MURGTCkgJiYgIW9wdF9sMWRfZmx1c2ggKQotICAgICAgICBzZXR1cF9mb3Jj
ZV9jcHVfY2FwKFg4Nl9GRUFUVVJFX1NDX1ZFUldfSFZNKTsKKyAgICBvcHRf
bWRfY2xlYXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wxREZM
KSAmJiAhb3B0X2wxZF9mbHVzaDsKIAogICAgIC8qCiAgICAgICogV2FybiB0
aGUgdXNlciBpZiB0aGV5IGFyZSBvbiBNTFBEUy9NRkJEUy12dWxuZXJhYmxl
IGhhcmR3YXJlIHdpdGggSFQKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2Fz
bS14ODYvY3B1ZmVhdHVyZXMuaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1
ZmVhdHVyZXMuaAppbmRleCAwOWY2MTk0NTliYzcuLjllYWFiN2EyYTFmYSAx
MDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5o
CisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVyZXMuaApAQCAt
MzUsOCArMzUsNyBAQCBYRU5fQ1BVRkVBVFVSRShTQ19SU0JfSFZNLCAgICAg
ICAgWDg2X1NZTlRIKDE5KSkgLyogUlNCIG92ZXJ3cml0ZSBuZWVkZWQgZm9y
IEhWTQogWEVOX0NQVUZFQVRVUkUoWEVOX1NFTEZTTk9PUCwgICAgIFg4Nl9T
WU5USCgyMCkpIC8qIFNFTEZTTk9PUCBnZXRzIHVzZWQgYnkgWGVuIGl0c2Vs
ZiAqLwogWEVOX0NQVUZFQVRVUkUoU0NfTVNSX0lETEUsICAgICAgIFg4Nl9T
WU5USCgyMSkpIC8qIChTQ19NU1JfUFYgfHwgU0NfTVNSX0hWTSkgJiYgZGVm
YXVsdF94ZW5fc3BlY19jdHJsICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fTEJS
LCAgICAgICAgICAgWDg2X1NZTlRIKDIyKSkgLyogWGVuIHVzZXMgTVNSX0RF
QlVHQ1RMLkxCUiAqLwotWEVOX0NQVUZFQVRVUkUoU0NfVkVSV19QViwgICAg
ICAgIFg4Nl9TWU5USCgyMykpIC8qIFZFUlcgdXNlZCBieSBYZW4gZm9yIFBW
ICovCi1YRU5fQ1BVRkVBVFVSRShTQ19WRVJXX0hWTSwgICAgICAgWDg2X1NZ
TlRIKDI0KSkgLyogVkVSVyB1c2VkIGJ5IFhlbiBmb3IgSFZNICovCisvKiBC
aXRzIDIzLDI0IHVudXNlZC4gKi8KIFhFTl9DUFVGRUFUVVJFKFNDX1ZFUldf
SURMRSwgICAgICBYODZfU1lOVEgoMjUpKSAvKiBWRVJXIHVzZWQgYnkgWGVu
IGZvciBpZGxlICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fU0hTVEssICAgICAg
ICAgWDg2X1NZTlRIKDI2KSkgLyogWGVuIHVzZXMgQ0VUIFNoYWRvdyBTdGFj
a3MgKi8KIFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAgICBYODZf
U1lOVEgoMjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJhbmNoIFRy
YWNraW5nICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L2Rv
bWFpbi5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9kb21haW4uaAppbmRleCAw
ZGI1NTFiZmYzNDQuLjRlZTc2YmJhNDVkYSAxMDA2NDQKLS0tIGEveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9kb21haW4uaAorKysgYi94ZW4vaW5jbHVkZS9hc20t
eDg2L2RvbWFpbi5oCkBAIC0zMDgsNiArMzA4LDkgQEAgc3RydWN0IGFyY2hf
ZG9tYWluCiAgICAgdWludDMyX3QgcGNpX2NmODsKICAgICB1aW50OF90IGNt
b3NfaWR4OwogCisgICAgLyogVXNlIFZFUlcgb24gcmV0dXJuLXRvLWd1ZXN0
IGZvciBpdHMgZmx1c2hpbmcgc2lkZSBlZmZlY3QuICovCisgICAgYm9vbCB2
ZXJ3OworCiAgICAgdW5pb24gewogICAgICAgICBzdHJ1Y3QgcHZfZG9tYWlu
IHB2OwogICAgICAgICBzdHJ1Y3QgaHZtX2RvbWFpbiBodm07CmRpZmYgLS1n
aXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybC5oIGIveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmwuaAppbmRleCA5Y2FlY2RkZmVjOTYu
LjY4ZjZjNDZjNDcwYyAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4
Ni9zcGVjX2N0cmwuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNf
Y3RybC5oCkBAIC0yNCw2ICsyNCw3IEBACiAjZGVmaW5lIFNDRl91c2Vfc2hh
ZG93ICgxIDw8IDApCiAjZGVmaW5lIFNDRl9pc3Rfd3Jtc3IgICgxIDw8IDEp
CiAjZGVmaW5lIFNDRl9pc3RfcnNiICAgICgxIDw8IDIpCisjZGVmaW5lIFND
Rl92ZXJ3ICAgICAgICgxIDw8IDMpCiAKICNpZm5kZWYgX19BU1NFTUJMWV9f
CiAKQEAgLTMyLDYgKzMzLDcgQEAKICNpbmNsdWRlIDxhc20vbXNyLWluZGV4
Lmg+CiAKIHZvaWQgaW5pdF9zcGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lk
KTsKK3ZvaWQgc3BlY19jdHJsX2luaXRfZG9tYWluKHN0cnVjdCBkb21haW4g
KmQpOwogCiBleHRlcm4gYm9vbCBvcHRfaWJwYjsKIGV4dGVybiBib29sIG9w
dF9zc2JkOwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVj
X2N0cmxfYXNtLmggYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybF9h
c20uaAppbmRleCAwMmIzYjE4Y2U2OWYuLjVhNTkwYmFjNDRhYSAxMDA2NDQK
LS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKKysr
IGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKQEAgLTEz
Niw2ICsxMzYsMTkgQEAKICNlbmRpZgogLmVuZG0KIAorLm1hY3JvIERPX1NQ
RUNfQ1RSTF9DT05EX1ZFUlcKKy8qCisgKiBSZXF1aXJlcyAlcnNwPWNwdWlu
Zm8KKyAqCisgKiBJc3N1ZSBhIFZFUlcgZm9yIGl0cyBmbHVzaGluZyBzaWRl
IGVmZmVjdCwgaWYgaW5kaWNhdGVkLiAgVGhpcyBpcyBhIFNwZWN0cmUKKyAq
IHYxIGdhZGdldCwgYnV0IHRoZSBJUkVUL1ZNRW50cnkgaXMgc2VyaWFsaXNp
bmcuCisgKi8KKyAgICB0ZXN0YiAkU0NGX3ZlcncsIENQVUlORk9fc3BlY19j
dHJsX2ZsYWdzKCVyc3ApCisgICAganogLkxcQF92ZXJ3X3NraXAKKyAgICB2
ZXJ3IENQVUlORk9fdmVyd19zZWwoJXJzcCkKKy5MXEBfdmVyd19za2lwOgor
LmVuZG0KKwogLm1hY3JvIERPX1NQRUNfQ1RSTF9FTlRSWSBtYXliZXhlbjpy
ZXEKIC8qCiAgKiBSZXF1aXJlcyAlcnNwPXJlZ3MgKGFsc28gY3B1aW5mbyBp
ZiAhbWF5YmV4ZW4pCkBAIC0yMzEsOCArMjQ0LDcgQEAKICNkZWZpbmUgU1BF
Q19DVFJMX0VYSVRfVE9fUFYgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIFwKICAgICBBTFRFUk5BVElWRSAiIiwgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKICAgICAgICAgRE9fU1BFQ19DVFJMX0VYSVRfVE9fR1VFU1QsIFg4Nl9G
RUFUVVJFX1NDX01TUl9QVjsgICAgICAgICAgICAgIFwKLSAgICBBTFRFUk5B
VElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndfc2VsKCVy
c3ApKSwgICAgICAgICAgIFwKLSAgICAgICAgWDg2X0ZFQVRVUkVfU0NfVkVS
V19QVgorICAgIERPX1NQRUNfQ1RSTF9DT05EX1ZFUlcKIAogLyoKICAqIFVz
ZSBpbiBJU1QgaW50ZXJydXB0L2V4Y2VwdGlvbiBjb250ZXh0LiAgTWF5IGlu
dGVycnVwdCBYZW4gb3IgUFYgY29udGV4dC4K

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.14-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYyBiL3hlbi9hcmNoL3g4Ni9z
cGVjX2N0cmwuYwppbmRleCBiNGVmYzk0MGFhMmIuLjM4ZTBjYzI4NDdlMCAx
MDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jCisrKyBiL3hl
bi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAtMzIzLDcgKzMyMyw3IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAqIEhhcmR3YXJlIHJlYWQt
b25seSBpbmZvcm1hdGlvbiwgc3RhdGluZyBpbW11bml0eSB0byBjZXJ0YWlu
IGlzc3Vlcywgb3IKICAgICAgKiBzdWdnZXN0aW9ucyBvZiB3aGljaCBtaXRp
Z2F0aW9uIHRvIHVzZS4KICAgICAgKi8KLSAgICBwcmludGsoIiAgSGFyZHdh
cmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmlu
dGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVz
JXMlc1xuIiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfUkRDTF9O
TykgICAgICAgICAgICAgICAgICAgICAgICA/ICIgUkRDTF9OTyIgICAgICAg
IDogIiIsCiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX0lCUlNfQUxM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIElCUlNfQUxMIiAgICAgICA6
ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19SU0JBKSAgICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBSU0JBIiAgICAgICAgICAgOiAi
IiwKQEAgLTMzMiwxMyArMzMyLDE2IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBw
cmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBj
YXBzKQogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TU0JfTk8pICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBTU0JfTk8iICAgICAgICAgOiAi
IiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfTURTX05PKSAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTURTX05PIiAgICAgICAgIDogIiIs
CiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RBQV9OTykgICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIFRBQV9OTyIgICAgICAgICA6ICIiLAor
ICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TQkRSX1NTRFBfTk8pICAg
ICAgICAgICAgICAgICAgID8gIiBTQkRSX1NTRFBfTk8iICAgOiAiIiwKKyAg
ICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJTRFBfTk8pICAgICAgICAg
ICAgICAgICAgICAgICA/ICIgRkJTRFBfTk8iICAgICAgIDogIiIsCisgICAg
ICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1BTRFBfTk8pICAgICAgICAgICAg
ICAgICAgICAgICAgPyAiIFBTRFBfTk8iICAgICAgICA6ICIiLAogICAgICAg
ICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX0FM
V0FZUykpICAgID8gIiBJQlJTX0FMV0FZUyIgICAgOiAiIiwKICAgICAgICAg
ICAgKGU4YiAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1RJQlBfQUxX
QVlTKSkgICA/ICIgU1RJQlBfQUxXQVlTIiAgIDogIiIsCiAgICAgICAgICAg
IChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlNfRkFTVCkp
ICAgICAgPyAiIElCUlNfRkFTVCIgICAgICA6ICIiLAogICAgICAgICAgICAo
ZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX1NBTUVfTU9E
RSkpID8gIiBJQlJTX1NBTUVfTU9ERSIgOiAiIik7CiAKICAgICAvKiBIYXJk
d2FyZSBmZWF0dXJlcyB3aGljaCBuZWVkIGRyaXZpbmcgdG8gbWl0aWdhdGUg
aXNzdWVzLiAqLwotICAgIHByaW50aygiICBIYXJkd2FyZSBmZWF0dXJlczol
cyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmludGsoIiAgSGFyZHdh
cmUgZmVhdHVyZXM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzXG4iLAogICAg
ICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlBC
KSkgfHwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2soWDg2X0ZF
QVRVUkVfSUJSU0IpKSAgICAgICAgICA/ICIgSUJQQiIgICAgICAgICAgIDog
IiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFU
VVJFX0lCUlMpKSB8fApAQCAtMzUzLDcgKzM1Niw5IEBAIHN0YXRpYyB2b2lk
IF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1
aW50NjRfdCBjYXBzKQogICAgICAgICAgICAoXzdkMCAmIGNwdWZlYXRfbWFz
ayhYODZfRkVBVFVSRV9NRF9DTEVBUikpICAgICAgID8gIiBNRF9DTEVBUiIg
ICAgICAgOiAiIiwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2so
WDg2X0ZFQVRVUkVfU1JCRFNfQ1RSTCkpICAgICA/ICIgU1JCRFNfQ1RSTCIg
ICAgIDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4
Nl9GRUFUVVJFX1ZJUlRfU1NCRCkpICAgICAgPyAiIFZJUlRfU1NCRCIgICAg
ICA6ICIiLAotICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19UU1hfQ1RS
TCkgICAgICAgICAgICAgICAgICAgICAgID8gIiBUU1hfQ1RSTCIgICAgICAg
OiAiIik7CisgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIFRTWF9DVFJMIiAgICAgICA6
ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19GQl9DTEVBUikg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBGQl9DTEVBUiIgICAgICAgOiAi
IiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RS
TCkgICAgICAgICAgICAgICAgICA/ICIgRkJfQ0xFQVJfQ1RSTCIgIDogIiIp
OwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3VwcG9ydCB3aGljaCBwZXJ0YWlu
cyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBpZiAoIElTX0VOQUJMRUQoQ09O
RklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19FTkFCTEVEKENPTkZJR19TSEFE
T1dfUEFHSU5HKSApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2
L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXgu
aAppbmRleCA3YTM5ZDk0YjlhNzAuLmM4NjcwZWFiOGVmNSAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC01Niw2ICs1NiwxMSBA
QAogI2RlZmluZSAgQVJDSF9DQVBTX0lGX1BTQ0hBTkdFX01DX05PICAgICAg
ICAoX0FDKDEsIFVMTCkgPDwgIDYpCiAjZGVmaW5lICBBUkNIX0NBUFNfVFNY
X0NUUkwgICAgICAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAgNykKICNk
ZWZpbmUgIEFSQ0hfQ0FQU19UQUFfTk8gICAgICAgICAgICAgICAgICAgKF9B
QygxLCBVTEwpIDw8ICA4KQorI2RlZmluZSAgQVJDSF9DQVBTX1NCRFJfU1NE
UF9OTyAgICAgICAgICAgICAoX0FDKDEsIFVMTCkgPDwgMTMpCisjZGVmaW5l
ICBBUkNIX0NBUFNfRkJTRFBfTk8gICAgICAgICAgICAgICAgIChfQUMoMSwg
VUxMKSA8PCAxNCkKKyNkZWZpbmUgIEFSQ0hfQ0FQU19QU0RQX05PICAgICAg
ICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8IDE1KQorI2RlZmluZSAgQVJD
SF9DQVBTX0ZCX0NMRUFSICAgICAgICAgICAgICAgICAoX0FDKDEsIFVMTCkg
PDwgMTcpCisjZGVmaW5lICBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RSTCAgICAg
ICAgICAgIChfQUMoMSwgVUxMKSA8PCAxOCkKIAogI2RlZmluZSBNU1JfRkxV
U0hfQ01EICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAwMTBiCiAjZGVm
aW5lICBGTFVTSF9DTURfTDFEICAgICAgICAgICAgICAgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgMCkKQEAgLTczLDYgKzc4LDcgQEAKICNkZWZpbmUgIE1D
VV9PUFRfQ1RSTF9STkdEU19NSVRHX0RJUyAgICAgICAgKF9BQygxLCBVTEwp
IDw8ICAwKQogI2RlZmluZSAgTUNVX09QVF9DVFJMX1JUTV9BTExPVyAgICAg
ICAgICAgICAoX0FDKDEsIFVMTCkgPDwgIDEpCiAjZGVmaW5lICBNQ1VfT1BU
X0NUUkxfUlRNX0xPQ0tFRCAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAg
MikKKyNkZWZpbmUgIE1DVV9PUFRfQ1RSTF9GQl9DTEVBUl9ESVMgICAgICAg
ICAgKF9BQygxLCBVTEwpIDw8ICAzKQogCiAjZGVmaW5lIE1TUl9SVElUX09V
VFBVVF9CQVNFICAgICAgICAgICAgICAgIDB4MDAwMDA1NjAKICNkZWZpbmUg
TVNSX1JUSVRfT1VUUFVUX01BU0sgICAgICAgICAgICAgICAgMHgwMDAwMDU2
MQo=

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.14-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.14-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpPbiBDUFVzIHZ1bG5lcmFibGUg
dG8gTURTLCB0aGUgZXhpc3RpbmcgbWl0aWdhdGlvbnMgYXJlIHRoZSBiZXN0
IHdlIGNhbiBkbyB0bwptaXRpZ2F0ZSBNTUlPIGNyb3NzLWRvbWFpbiBkYXRh
IGxlYWthZ2UuCgpPbiBDUFVzIGZpeGVkIHRvIE1EUyBidXQgdnVsbmVyYWJs
ZSBNTUlPIHN0YWxlIGRhdGEgbGVha2FnZSwgdGhpcyBvcHRpb246CgogKiBP
biBDUFVzIHN1c2NlcHRpYmxlIHRvIEZCU0RQLCBtaXRpZ2F0ZXMgY3Jvc3Mt
ZG9tYWluIGZpbGwgYnVmZmVyIGxlYWthZ2UKICAgdXNpbmcgRkJfQ0xFQVIu
CiAqIE9uIENQVXMgc3VzY2VwdGlibGUgdG8gU0JEUiwgbWl0aWdhdGVzIFJO
RyBkYXRhIHJlY292ZXJ5IGJ5IGVuZ2FnaW5nIHRoZQogICBzcmItbG9jaywg
cHJldmlvdXNseSB1c2VkIHRvIG1pdGlnYXRlIFNSQkRTLgoKQm90aCBtaXRp
Z2F0aW9ucyByZXF1aXJlIG1pY3JvY29kZSBmcm9tIElQVSAyMDIyLjEsIE1h
eSAyMDIyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDQuCgpTaWduZWQtb2Zm
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PgpSZXZpZXdlZC1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Ci0tLQpCYWNrcG9ydGluZyBub3RlOiBGb3IgWGVuIDQuNyBh
bmQgZWFybGllciB3aXRoIGJvb2xfdCBub3QgYWxpYXNpbmcgYm9vbCwgdGhl
CkFSQ0hfQ0FQU19GQl9DTEVBUiBodW5rIG5lZWRzICEhCgpkaWZmIC0tZ2l0
IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmluZGV4IGFkODU3ODVlMTRi
My4uZDFkNTg1MmNkZDg0IDEwMDY0NAotLS0gYS9kb2NzL21pc2MveGVuLWNv
bW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jCkBAIC0yMTA2LDcgKzIxMDYsNyBAQCBCeSBkZWZhdWx0
IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZSAoaS5lIGBzc2Jk
PXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4NikKID4gYD0gTGlzdCBv
ZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2bSxtc3Itc2MscnNiLG1k
LWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAgICBidGktdGh1bms9cmV0
cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIsc3NiZCxlYWdlci1mcHUs
Ci0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmIt
bG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1pb309PGJvb2w+IF1gCiAK
IENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVjdXRpb24gc2lkZWNoYW5u
ZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBYZW4KIHdpbGwgcGljayB0
aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9ucyBiYXNlZCBvbiBjb21w
aWxlZCBpbiBzdXBwb3J0LApAQCAtMjE4NSw4ICsyMTg1LDE2IEBAIFhlbiB3
aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBPbiBoYXJkd2FyZSBzdXBw
b3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxvY2s9YCBvcHRpb24gY2Fu
IGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQgWGVuIGZyb20gcHJvdGVj
dCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIgZnJvbSBsZWFraW5nIHN0
YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2lsbCBlbmFibGUgdGhpcyBt
aXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hlcmUgTURTCi1pcyBmaXhl
ZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAoaW4gd2hpY2ggY2FzZSwg
dGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdheSBmb3IgYW4gYXR0YWNr
ZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4KK2lzIGZpeGVkIGFuZCBU
QUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVyZSBhcmUgbm8gdW5wcml2
aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGljaCBjYXNlLCB0aGVyZSBp
cyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFuIGF0dGFja2VyIHRvCitv
YnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5wcml2LW1taW89YCBib29s
ZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0ZW0gaGFzIChvciB3aWxs
IGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmlsZWdlZCBkb21haW5zIGdy
YW50ZWQgYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIEJ5CitkZWZhdWx0LCB0
aGlzIG9wdGlvbiBpcyBkaXNhYmxlZC4gIElmIGVuYWJsZWQsIFhlbiB3aWxs
IHVzZSB0aGUgYEZCX0NMRUFSYAorYW5kL29yIGBTUkJEU19DVFJMYCBmdW5j
dGlvbmFsaXR5IGF2YWlsYWJsZSBpbiB0aGUgSW50ZWwgTWF5IDIwMjIgbWlj
cm9jb2RlCityZWxlYXNlIHRvIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGRhdGEgdmlhIHRoZSBNTUlPIFN0YWxlIERhdGEKK3Z1bG5lcmFi
aWxpdGllcy4KIAogIyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5g
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKaW5kZXggMzhlMGNjMjg0N2UwLi44M2I4
NTZmYTkxNTggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YworKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3
LDggQEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tc2Jk
c19vbmx5OyAvKiA9PiBtaW5pbWFsIEhUIGltcGFjdC4gKi8KIHN0YXRpYyBi
b29sIF9faW5pdGRhdGEgY3B1X2hhc19idWdfbWRzOyAvKiBBbnkgb3RoZXIg
TXtMUCxTQixGQn1EUyBjb21iaW5hdGlvbi4gKi8KIAogc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9zcmJfbG9jayA9IC0xOworc3RhdGljIGJvb2wg
X19pbml0ZGF0YSBvcHRfdW5wcml2X21taW87CitzdGF0aWMgYm9vbCBfX3Jl
YWRfbW9zdGx5IG9wdF9mYl9jbGVhcl9tbWlvOwogCiBzdGF0aWMgaW50IF9f
aW5pdCBwYXJzZV9zcGVjX2N0cmwoY29uc3QgY2hhciAqcykKIHsKQEAgLTE4
NCw2ICsxODYsOCBAQCBzdGF0aWMgaW50IF9faW5pdCBwYXJzZV9zcGVjX2N0
cmwoY29uc3QgY2hhciAqcykKICAgICAgICAgICAgIG9wdF9icmFuY2hfaGFy
ZGVuID0gdmFsOwogICAgICAgICBlbHNlIGlmICggKHZhbCA9IHBhcnNlX2Jv
b2xlYW4oInNyYi1sb2NrIiwgcywgc3MpKSA+PSAwICkKICAgICAgICAgICAg
IG9wdF9zcmJfbG9jayA9IHZhbDsKKyAgICAgICAgZWxzZSBpZiAoICh2YWwg
PSBwYXJzZV9ib29sZWFuKCJ1bnByaXYtbW1pbyIsIHMsIHNzKSkgPj0gMCAp
CisgICAgICAgICAgICBvcHRfdW5wcml2X21taW8gPSB2YWw7CiAgICAgICAg
IGVsc2UKICAgICAgICAgICAgIHJjID0gLUVJTlZBTDsKIApAQCAtMzkyLDcg
KzM5Niw4IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVu
dW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAg
ICBvcHRfc3JiX2xvY2sgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA/
ICIgU1JCX0xPQ0srIiA6ICIgU1JCX0xPQ0stIiwKICAgICAgICAgICAgb3B0
X2licGIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIElC
UEIiICA6ICIiLAogICAgICAgICAgICBvcHRfbDFkX2ZsdXNoICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTDFEX0ZMVVNIIiA6ICIiLAotICAg
ICAgICAgICBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2bSAg
ICAgICA/ICIgVkVSVyIgIDogIiIsCisgICAgICAgICAgIG9wdF9tZF9jbGVh
cl9wdiB8fCBvcHRfbWRfY2xlYXJfaHZtIHx8CisgICAgICAgICAgIG9wdF9m
Yl9jbGVhcl9tbWlvICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBWRVJX
IiAgOiAiIiwKICAgICAgICAgICAgb3B0X2JyYW5jaF9oYXJkZW4gICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIEJSQU5DSF9IQVJERU4iIDogIiIpOwog
CiAgICAgLyogTDFURiBkaWFnbm9zdGljcywgcHJpbnRlZCBpZiB2dWxuZXJh
YmxlIG9yIFBWIHNoYWRvd2luZyBpcyBpbiB1c2UuICovCkBAIC05MTIsNyAr
OTE3LDkgQEAgdm9pZCBzcGVjX2N0cmxfaW5pdF9kb21haW4oc3RydWN0IGRv
bWFpbiAqZCkKIHsKICAgICBib29sIHB2ID0gaXNfcHZfZG9tYWluKGQpOwog
Ci0gICAgZC0+YXJjaC52ZXJ3ID0gcHYgPyBvcHRfbWRfY2xlYXJfcHYgOiBv
cHRfbWRfY2xlYXJfaHZtOworICAgIGQtPmFyY2gudmVydyA9CisgICAgICAg
IChwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm0pIHx8
CisgICAgICAgIChvcHRfZmJfY2xlYXJfbW1pbyAmJiBpc19pb21tdV9lbmFi
bGVkKGQpKTsKIH0KIAogdm9pZCBfX2luaXQgaW5pdF9zcGVjdWxhdGlvbl9t
aXRpZ2F0aW9ucyh2b2lkKQpAQCAtMTE0OCw2ICsxMTU1LDE4IEBAIHZvaWQg
X19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAg
ICBtZHNfY2FsY3VsYXRpb25zKGNhcHMpOwogCiAgICAgLyoKKyAgICAgKiBQ
YXJ0cyB3aGljaCBlbnVtZXJhdGUgRkJfQ0xFQVIgYXJlIHRob3NlIHdoaWNo
IGFyZSBwb3N0LU1EU19OTyBhbmQgaGF2ZQorICAgICAqIHJlaW50cm9kdWNl
ZCB0aGUgVkVSVyBmaWxsIGJ1ZmZlciBmbHVzaGluZyBzaWRlIGVmZmVjdCBi
ZWNhdXNlIG9mIGEKKyAgICAgKiBzdXNjZXB0aWJpbGl0eSB0byBGQlNEUC4K
KyAgICAgKgorICAgICAqIElmIHVucHJpdmlsZWdlZCBndWVzdHMgaGF2ZSAo
b3Igd2lsbCBoYXZlKSBNTUlPIG1hcHBpbmdzLCB3ZSBjYW4KKyAgICAgKiBt
aXRpZ2F0ZSBjcm9zcy1kb21haW4gbGVha2FnZSBvZiBmaWxsIGJ1ZmZlciBk
YXRhIGJ5IGlzc3VpbmcgVkVSVyBvbgorICAgICAqIHRoZSByZXR1cm4tdG8t
Z3Vlc3QgcGF0aC4KKyAgICAgKi8KKyAgICBpZiAoIG9wdF91bnByaXZfbW1p
byApCisgICAgICAgIG9wdF9mYl9jbGVhcl9tbWlvID0gY2FwcyAmIEFSQ0hf
Q0FQU19GQl9DTEVBUjsKKworICAgIC8qCiAgICAgICogQnkgZGVmYXVsdCwg
ZW5hYmxlIFBWIGFuZCBIVk0gbWl0aWdhdGlvbnMgb24gTURTLXZ1bG5lcmFi
bGUgaGFyZHdhcmUuCiAgICAgICogVGhpcyB3aWxsIG9ubHkgYmUgYSB0b2tl
biBlZmZvcnQgZm9yIE1MUERTL01GQkRTIHdoZW4gSFQgaXMgZW5hYmxlZCwK
ICAgICAgKiBidXQgaXQgaXMgc29tZXdoYXQgYmV0dGVyIHRoYW4gbm90aGlu
Zy4KQEAgLTExNjAsMTggKzExNzksMjAgQEAgdm9pZCBfX2luaXQgaW5pdF9z
cGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9NRF9DTEVB
UikpOwogCiAgICAgLyoKLSAgICAgKiBFbmFibGUgTURTIGRlZmVuY2VzIGFz
IGFwcGxpY2FibGUuICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgot
ICAgICAqIGVpdGhlciBQViBvciBIVk0gZGVmZW5jZXMgYXJlIHVzZWQuCisg
ICAgICogRW5hYmxlIE1EUy9NTUlPIGRlZmVuY2VzIGFzIGFwcGxpY2FibGUu
ICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgorICAgICAqIGVpdGhl
ciB0aGUgUFYgb3IgSFZNIE1EUyBkZWZlbmNlcyBhcmUgdXNlZCwgb3IgaWYg
d2UgbWF5IGdpdmUgTU1JTworICAgICAqIGFjY2VzcyB0byB1bnRydXN0ZWQg
Z3Vlc3RzLgogICAgICAqCiAgICAgICogSFZNIGlzIG1vcmUgY29tcGxpY2F0
ZWQuICBUaGUgTURfQ0xFQVIgbWljcm9jb2RlIGV4dGVuZHMgTDFEX0ZMVVNI
IHdpdGgKICAgICAgKiBlcXVpdmFsZW50IHNlbWFudGljcyB0byBhdm9pZCBu
ZWVkaW5nIHRvIHBlcmZvcm0gYm90aCBmbHVzaGVzIG9uIHRoZQotICAgICAq
IEhWTSBwYXRoLiAgVGhlcmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4g
YWRkaXRpb24gdG8gTDFEX0ZMVVNILgorICAgICAqIEhWTSBwYXRoLiAgVGhl
cmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4gYWRkaXRpb24gdG8gTDFE
X0ZMVVNIIChmb3IKKyAgICAgKiBNRFMgbWl0aWdhdGlvbnMuICBMMURfRkxV
U0ggaXMgbm90IHNhZmUgZm9yIE1NSU8gbWl0aWdhdGlvbnMuKQogICAgICAq
CiAgICAgICogQWZ0ZXIgY2FsY3VsYXRpbmcgdGhlIGFwcHJvcHJpYXRlIGlk
bGUgc2V0dGluZywgc2ltcGxpZnkKICAgICAgKiBvcHRfbWRfY2xlYXJfaHZt
IHRvIG1lYW4ganVzdCAic2hvdWxkIHdlIFZFUlcgb24gdGhlIHdheSBpbnRv
IEhWTQogICAgICAqIGd1ZXN0cyIsIHNvIHNwZWNfY3RybF9pbml0X2RvbWFp
bigpIGNhbiBjYWxjdWxhdGUgc3VpdGFibGUgc2V0dGluZ3MuCiAgICAgICov
Ci0gICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2
bSApCisgICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFy
X2h2bSB8fCBvcHRfZmJfY2xlYXJfbW1pbyApCiAgICAgICAgIHNldHVwX2Zv
cmNlX2NwdV9jYXAoWDg2X0ZFQVRVUkVfU0NfVkVSV19JRExFKTsKICAgICBv
cHRfbWRfY2xlYXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wx
REZMKSAmJiAhb3B0X2wxZF9mbHVzaDsKIApAQCAtMTIzNiwxNCArMTI1Nywx
OSBAQCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25z
KHZvaWQpCiAgICAgICogT24gc29tZSBTUkJEUy1hZmZlY3RlZCBoYXJkd2Fy
ZSwgaXQgbWF5IGJlIHNhZmUgdG8gcmVsYXggc3JiLWxvY2sgYnkKICAgICAg
KiBkZWZhdWx0LgogICAgICAqCi0gICAgICogT24gcGFydHMgd2hpY2ggZW51
bWVyYXRlIE1EU19OTyBhbmQgbm90IFRBQV9OTywgVFNYIGlzIHRoZSBvbmx5
IGtub3duCi0gICAgICogd2F5IHRvIGFjY2VzcyB0aGUgRmlsbCBCdWZmZXIu
ICBJZiBUU1ggaXNuJ3QgYXZhaWxhYmxlIChpbmMuIFNLVQotICAgICAqIHJl
YXNvbnMgb24gc29tZSBtb2RlbHMpLCBvciBUU1ggaXMgZXhwbGljaXRseSBk
aXNhYmxlZCwgdGhlbiB0aGVyZSBpcwotICAgICAqIG5vIG5lZWQgZm9yIHRo
ZSBleHRyYSBvdmVyaGVhZCB0byBwcm90ZWN0IFJEUkFORC9SRFNFRUQuCisg
ICAgICogQWxsIHBhcnRzIHdpdGggU1JCRFNfQ1RSTCBzdWZmZXIgU1NEUCwg
dGhlIG1lY2hhbmlzbSBieSB3aGljaCBzdGFsZSBSTkcKKyAgICAgKiBkYXRh
IGJlY29tZXMgYXZhaWxhYmxlIHRvIG90aGVyIGNvbnRleHRzLiAgVG8gcmVj
b3ZlciB0aGUgZGF0YSwgYW4KKyAgICAgKiBhdHRhY2tlciBuZWVkcyB0byB1
c2U6CisgICAgICogIC0gU0JEUyAoTURTIG9yIFRBQSB0byBzYW1wbGUgdGhl
IGNvcmVzIGZpbGwgYnVmZmVyKQorICAgICAqICAtIFNCRFIgKEFyY2hpdGVj
dHVyYWxseSByZXRyaWV2ZSBzdGFsZSB0cmFuc2FjdGlvbiBidWZmZXIgY29u
dGVudHMpCisgICAgICogIC0gRFJQVyAoQXJjaGl0ZWN0dXJhbGx5IGxhdGNo
IHN0YWxlIGZpbGwgYnVmZmVyIGRhdGEpCisgICAgICoKKyAgICAgKiBPbiBN
RFNfTk8gcGFydHMsIGFuZCB3aXRoIFRBQV9OTyBvciBUU1ggdW5hdmFpbGFi
bGUvZGlzYWJsZWQsIGFuZCB0aGVyZQorICAgICAqIGlzIG5vIHVucHJpdmls
ZWdlZCBNTUlPIGFjY2VzcywgdGhlIFJORyBkYXRhIGRvZXNuJ3QgbmVlZCBw
cm90ZWN0aW5nLgogICAgICAqLwogICAgIGlmICggY3B1X2hhc19zcmJkc19j
dHJsICkKICAgICB7Ci0gICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0x
ICYmCisgICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0xICYmICFvcHRf
dW5wcml2X21taW8gJiYKICAgICAgICAgICAgICAoY2FwcyAmIChBUkNIX0NB
UFNfTURTX05PfEFSQ0hfQ0FQU19UQUFfTk8pKSA9PSBBUkNIX0NBUFNfTURT
X05PICYmCiAgICAgICAgICAgICAgKCFjcHVfaGFzX2hsZSB8fCAoKGNhcHMg
JiBBUkNIX0NBUFNfVFNYX0NUUkwpICYmIHJ0bV9kaXNhYmxlZCkpICkKICAg
ICAgICAgICAgIG9wdF9zcmJfbG9jayA9IDA7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.15-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.15-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCBWRVJXIGZsdXNo
aW5nIGlzIHVzZWQgYW5kIGNvbnRyb2xsZWQgZXhhY3RseSBhcyBiZWZvcmUs
IGJ1dCBsYXRlcgpwYXRjaGVzIHdpbGwgYWRkIHBlci1kb21haW4gY2FzZXMg
dG9vLgoKTm8gY2hhbmdlIGluIGJlaGF2aW91ci4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5k
cmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBh
dSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L2RvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlz
Yy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwppbmRleCAxY2FiMjZmZWY2MWYu
LmU0YzgyMGUxNzA1MyAxMDA2NDQKLS0tIGEvZG9jcy9taXNjL3hlbi1jb21t
YW5kLWxpbmUucGFuZG9jCisrKyBiL2RvY3MvbWlzYy94ZW4tY29tbWFuZC1s
aW5lLnBhbmRvYwpAQCAtMjE5NCw5ICsyMTk0LDggQEAgaW4gcGxhY2UgZm9y
IGd1ZXN0cyB0byB1c2UuCiBVc2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZh
bHVlIGZvciBlaXRoZXIgb2YgdGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgog
CiBUaGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNi
PWAgYW5kIGBtZC1jbGVhcj1gIG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJv
bCBvdmVyIHRoZSBhbHRlcm5hdGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBU
aGVzZSBpbXBhY3QgWGVuJ3MKLWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYs
IGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3Ig
Z3Vlc3RzCi10byB1c2UuCitncmFpbmVkIGNvbnRyb2wgb3ZlciB0aGUgcHJp
bWl0aXZlcyBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
bworcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IGIyMTI3Mjk4ODAwNi4uNGE2MWU5NTFmYWNmIDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC04NjEsNiArODYxLDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZC0+YXJjaC5t
c3JfcmVsYXhlZCA9IGNvbmZpZy0+YXJjaC5taXNjX2ZsYWdzICYgWEVOX1g4
Nl9NU1JfUkVMQVhFRDsKIAorICAgIHNwZWNfY3RybF9pbml0X2RvbWFpbihk
KTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTE5OTQsMTQgKzE5
OTYsMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2b2lkKQog
dm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0
IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNtcF9w
cm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmluZm8gPSBn
ZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpwcmV2
ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWluOwogICAg
IHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSByZWFkX2F0b21pYygmbmV4dC0+
ZGlydHlfY3B1KTsKIAogICAgIEFTU0VSVChwcmV2ICE9IG5leHQpOwogICAg
IEFTU0VSVChsb2NhbF9pcnFfaXNfZW5hYmxlZCgpKTsKIAotICAgIGdldF9j
cHVfaW5mbygpLT51c2VfcHZfY3IzID0gZmFsc2U7Ci0gICAgZ2V0X2NwdV9p
bmZvKCktPnhlbl9jcjMgPSAwOworICAgIGluZm8tPnVzZV9wdl9jcjMgPSBm
YWxzZTsKKyAgICBpbmZvLT54ZW5fY3IzID0gMDsKIAogICAgIGlmICggdW5s
aWtlbHkoZGlydHlfY3B1ICE9IGNwdSkgJiYgZGlydHlfY3B1ICE9IFZDUFVf
Q1BVX0NMRUFOICkKICAgICB7CkBAIC0yMDY1LDYgKzIwNjgsMTEgQEAgdm9p
ZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0IHZj
cHUgKm5leHQpCiAgICAgICAgICAgICAgICAgKmxhc3RfaWQgPSBuZXh0X2lk
OwogICAgICAgICAgICAgfQogICAgICAgICB9CisKKyAgICAgICAgLyogVXBk
YXRlIHRoZSB0b3Atb2Ytc3RhY2sgYmxvY2sgd2l0aCB0aGUgVkVSVyBkaXNw
b3NpdGlvbi4gKi8KKyAgICAgICAgaW5mby0+c3BlY19jdHJsX2ZsYWdzICY9
IH5TQ0ZfdmVydzsKKyAgICAgICAgaWYgKCBuZXh0ZC0+YXJjaC52ZXJ3ICkK
KyAgICAgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyB8PSBTQ0ZfdmVy
dzsKICAgICB9CiAKICAgICBzY2hlZF9jb250ZXh0X3N3aXRjaGVkKHByZXYs
IG5leHQpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS92bXgvZW50
cnkuUyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKaW5kZXggNDk2
NTFmM2M0MzVhLi41ZjVkZTQ1YTEzMDkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9odm0vdm14L2VudHJ5LlMKKysrIGIveGVuL2FyY2gveDg2L2h2bS92
bXgvZW50cnkuUwpAQCAtODcsNyArODcsNyBAQCBVTkxJS0VMWV9FTkQocmVh
bG1vZGUpCiAKICAgICAgICAgLyogV0FSTklORyEgYHJldGAsIGBjYWxsICpg
LCBgam1wICpgIG5vdCBzYWZlIGJleW9uZCB0aGlzIHBvaW50LiAqLwogICAg
ICAgICAvKiBTUEVDX0NUUkxfRVhJVF9UT19WTVggICBSZXE6ICVyc3A9cmVn
cy9jcHVpbmZvICAgICAgICAgICAgICBDbG9iOiAgICAqLwotICAgICAgICBB
TFRFUk5BVElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndf
c2VsKCVyc3ApKSwgWDg2X0ZFQVRVUkVfU0NfVkVSV19IVk0KKyAgICAgICAg
RE9fU1BFQ19DVFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAgVkNQVV9o
dm1fZ3Vlc3RfY3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gveDg2L3NwZWNfY3RybC5jIGIveGVuL2FyY2gveDg2L3NwZWNfY3RybC5j
CmluZGV4IDFlMjI2MTAyZDM5OS4uYjRlZmM5NDBhYTJiIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKKysrIGIveGVuL2FyY2gveDg2
L3NwZWNfY3RybC5jCkBAIC0zNiw4ICszNiw4IEBAIHN0YXRpYyBib29sIF9f
aW5pdGRhdGEgb3B0X21zcl9zY19wdiA9IHRydWU7CiBzdGF0aWMgYm9vbCBf
X2luaXRkYXRhIG9wdF9tc3Jfc2NfaHZtID0gdHJ1ZTsKIHN0YXRpYyBib29s
IF9faW5pdGRhdGEgb3B0X3JzYl9wdiA9IHRydWU7CiBzdGF0aWMgYm9vbCBf
X2luaXRkYXRhIG9wdF9yc2JfaHZtID0gdHJ1ZTsKLXN0YXRpYyBpbnQ4X3Qg
X19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfcHYgPSAtMTsKLXN0YXRpYyBpbnQ4
X3QgX19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfaHZtID0gLTE7CitzdGF0aWMg
aW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX3B2ID0gLTE7Citz
dGF0aWMgaW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX2h2bSA9
IC0xOwogCiAvKiBDbWRsaW5lIGNvbnRyb2xzIGZvciBYZW4ncyBzcGVjdWxh
dGl2ZSBzZXR0aW5ncy4gKi8KIHN0YXRpYyBlbnVtIGluZF90aHVuayB7CkBA
IC05MDMsNiArOTAzLDEzIEBAIHN0YXRpYyBfX2luaXQgdm9pZCBtZHNfY2Fs
Y3VsYXRpb25zKHVpbnQ2NF90IGNhcHMpCiAgICAgfQogfQogCit2b2lkIHNw
ZWNfY3RybF9pbml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQoreworICAg
IGJvb2wgcHYgPSBpc19wdl9kb21haW4oZCk7CisKKyAgICBkLT5hcmNoLnZl
cncgPSBwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm07
Cit9CisKIHZvaWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlv
bnModm9pZCkKIHsKICAgICBlbnVtIGluZF90aHVuayB0aHVuayA9IFRIVU5L
X0RFRkFVTFQ7CkBAIC0xMTQ4LDIxICsxMTU1LDIwIEBAIHZvaWQgX19pbml0
IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVf
TURfQ0xFQVIpKTsKIAogICAgIC8qCi0gICAgICogRW5hYmxlIE1EUyBkZWZl
bmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhlIFBWIGJsb2NrcyBuZWVkIHVzaW5n
IGFsbCB0aGUKLSAgICAgKiB0aW1lLCBhbmQgdGhlIElkbGUgYmxvY2tzIG5l
ZWQgdXNpbmcgaWYgZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUKLSAg
ICAgKiB1c2VkLgorICAgICAqIEVuYWJsZSBNRFMgZGVmZW5jZXMgYXMgYXBw
bGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5nIGlmCisgICAg
ICogZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUgdXNlZC4KICAgICAg
KgogICAgICAqIEhWTSBpcyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhlIE1EX0NM
RUFSIG1pY3JvY29kZSBleHRlbmRzIEwxRF9GTFVTSCB3aXRoCi0gICAgICog
ZXF1aXZlbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBwZXJm
b3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0aC4gIFRo
ZSBIVk0gYmxvY2tzIGRvbid0IG5lZWQgYWN0aXZhdGluZyBpZiBvdXIgaHlw
ZXJ2aXNvciB0b2xkCi0gICAgICogdXMgaXQgd2FzIGhhbmRsaW5nIEwxRF9G
TFVTSCwgb3Igd2UgYXJlIHVzaW5nIEwxRF9GTFVTSCBvdXJzZWx2ZXMuCisg
ICAgICogZXF1aXZhbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0
byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKKyAgICAgKiBIVk0gcGF0
aC4gIFRoZXJlZm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGluIGFkZGl0aW9u
IHRvIEwxRF9GTFVTSC4KKyAgICAgKgorICAgICAqIEFmdGVyIGNhbGN1bGF0
aW5nIHRoZSBhcHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBsaWZ5Cisg
ICAgICogb3B0X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNob3VsZCB3
ZSBWRVJXIG9uIHRoZSB3YXkgaW50byBIVk0KKyAgICAgKiBndWVzdHMiLCBz
byBzcGVjX2N0cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRlIHN1aXRh
YmxlIHNldHRpbmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21kX2NsZWFy
X3B2ICkKLSAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVBVFVS
RV9TQ19WRVJXX1BWKTsKICAgICBpZiAoIG9wdF9tZF9jbGVhcl9wdiB8fCBv
cHRfbWRfY2xlYXJfaHZtICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2Nh
cChYODZfRkVBVFVSRV9TQ19WRVJXX0lETEUpOwotICAgIGlmICggb3B0X21k
X2NsZWFyX2h2bSAmJiAhKGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9MMURGTCkg
JiYgIW9wdF9sMWRfZmx1c2ggKQotICAgICAgICBzZXR1cF9mb3JjZV9jcHVf
Y2FwKFg4Nl9GRUFUVVJFX1NDX1ZFUldfSFZNKTsKKyAgICBvcHRfbWRfY2xl
YXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wxREZMKSAmJiAh
b3B0X2wxZF9mbHVzaDsKIAogICAgIC8qCiAgICAgICogV2FybiB0aGUgdXNl
ciBpZiB0aGV5IGFyZSBvbiBNTFBEUy9NRkJEUy12dWxuZXJhYmxlIGhhcmR3
YXJlIHdpdGggSFQKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
Y3B1ZmVhdHVyZXMuaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVy
ZXMuaAppbmRleCAwOWY2MTk0NTliYzcuLjllYWFiN2EyYTFmYSAxMDA2NDQK
LS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCisrKyBi
L3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVyZXMuaApAQCAtMzUsOCAr
MzUsNyBAQCBYRU5fQ1BVRkVBVFVSRShTQ19SU0JfSFZNLCAgICAgICAgWDg2
X1NZTlRIKDE5KSkgLyogUlNCIG92ZXJ3cml0ZSBuZWVkZWQgZm9yIEhWTQog
WEVOX0NQVUZFQVRVUkUoWEVOX1NFTEZTTk9PUCwgICAgIFg4Nl9TWU5USCgy
MCkpIC8qIFNFTEZTTk9PUCBnZXRzIHVzZWQgYnkgWGVuIGl0c2VsZiAqLwog
WEVOX0NQVUZFQVRVUkUoU0NfTVNSX0lETEUsICAgICAgIFg4Nl9TWU5USCgy
MSkpIC8qIChTQ19NU1JfUFYgfHwgU0NfTVNSX0hWTSkgJiYgZGVmYXVsdF94
ZW5fc3BlY19jdHJsICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fTEJSLCAgICAg
ICAgICAgWDg2X1NZTlRIKDIyKSkgLyogWGVuIHVzZXMgTVNSX0RFQlVHQ1RM
LkxCUiAqLwotWEVOX0NQVUZFQVRVUkUoU0NfVkVSV19QViwgICAgICAgIFg4
Nl9TWU5USCgyMykpIC8qIFZFUlcgdXNlZCBieSBYZW4gZm9yIFBWICovCi1Y
RU5fQ1BVRkVBVFVSRShTQ19WRVJXX0hWTSwgICAgICAgWDg2X1NZTlRIKDI0
KSkgLyogVkVSVyB1c2VkIGJ5IFhlbiBmb3IgSFZNICovCisvKiBCaXRzIDIz
LDI0IHVudXNlZC4gKi8KIFhFTl9DUFVGRUFUVVJFKFNDX1ZFUldfSURMRSwg
ICAgICBYODZfU1lOVEgoMjUpKSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBp
ZGxlICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fU0hTVEssICAgICAgICAgWDg2
X1NZTlRIKDI2KSkgLyogWGVuIHVzZXMgQ0VUIFNoYWRvdyBTdGFja3MgKi8K
IFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAgICBYODZfU1lOVEgo
MjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJhbmNoIFRyYWNraW5n
ICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L2RvbWFpbi5o
IGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9kb21haW4uaAppbmRleCA3MjEzZDE4
NGIwMTYuLmQwZGY3ZjgzYWEwYyAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUv
YXNtLXg4Ni9kb21haW4uaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2Rv
bWFpbi5oCkBAIC0zMTksNiArMzE5LDkgQEAgc3RydWN0IGFyY2hfZG9tYWlu
CiAgICAgdWludDMyX3QgcGNpX2NmODsKICAgICB1aW50OF90IGNtb3NfaWR4
OwogCisgICAgLyogVXNlIFZFUlcgb24gcmV0dXJuLXRvLWd1ZXN0IGZvciBp
dHMgZmx1c2hpbmcgc2lkZSBlZmZlY3QuICovCisgICAgYm9vbCB2ZXJ3Owor
CiAgICAgdW5pb24gewogICAgICAgICBzdHJ1Y3QgcHZfZG9tYWluIHB2Owog
ICAgICAgICBzdHJ1Y3QgaHZtX2RvbWFpbiBodm07CmRpZmYgLS1naXQgYS94
ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybC5oIGIveGVuL2luY2x1ZGUv
YXNtLXg4Ni9zcGVjX2N0cmwuaAppbmRleCA5Y2FlY2RkZmVjOTYuLjY4ZjZj
NDZjNDcwYyAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVj
X2N0cmwuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybC5o
CkBAIC0yNCw2ICsyNCw3IEBACiAjZGVmaW5lIFNDRl91c2Vfc2hhZG93ICgx
IDw8IDApCiAjZGVmaW5lIFNDRl9pc3Rfd3Jtc3IgICgxIDw8IDEpCiAjZGVm
aW5lIFNDRl9pc3RfcnNiICAgICgxIDw8IDIpCisjZGVmaW5lIFNDRl92ZXJ3
ICAgICAgICgxIDw8IDMpCiAKICNpZm5kZWYgX19BU1NFTUJMWV9fCiAKQEAg
LTMyLDYgKzMzLDcgQEAKICNpbmNsdWRlIDxhc20vbXNyLWluZGV4Lmg+CiAK
IHZvaWQgaW5pdF9zcGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKTsKK3Zv
aWQgc3BlY19jdHJsX2luaXRfZG9tYWluKHN0cnVjdCBkb21haW4gKmQpOwog
CiBleHRlcm4gYm9vbCBvcHRfaWJwYjsKIGV4dGVybiBib29sIG9wdF9zc2Jk
OwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxf
YXNtLmggYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybF9hc20uaApp
bmRleCAwMmIzYjE4Y2U2OWYuLjVhNTkwYmFjNDRhYSAxMDA2NDQKLS0tIGEv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKKysrIGIveGVu
L2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKQEAgLTEzNiw2ICsx
MzYsMTkgQEAKICNlbmRpZgogLmVuZG0KIAorLm1hY3JvIERPX1NQRUNfQ1RS
TF9DT05EX1ZFUlcKKy8qCisgKiBSZXF1aXJlcyAlcnNwPWNwdWluZm8KKyAq
CisgKiBJc3N1ZSBhIFZFUlcgZm9yIGl0cyBmbHVzaGluZyBzaWRlIGVmZmVj
dCwgaWYgaW5kaWNhdGVkLiAgVGhpcyBpcyBhIFNwZWN0cmUKKyAqIHYxIGdh
ZGdldCwgYnV0IHRoZSBJUkVUL1ZNRW50cnkgaXMgc2VyaWFsaXNpbmcuCisg
Ki8KKyAgICB0ZXN0YiAkU0NGX3ZlcncsIENQVUlORk9fc3BlY19jdHJsX2Zs
YWdzKCVyc3ApCisgICAganogLkxcQF92ZXJ3X3NraXAKKyAgICB2ZXJ3IENQ
VUlORk9fdmVyd19zZWwoJXJzcCkKKy5MXEBfdmVyd19za2lwOgorLmVuZG0K
KwogLm1hY3JvIERPX1NQRUNfQ1RSTF9FTlRSWSBtYXliZXhlbjpyZXEKIC8q
CiAgKiBSZXF1aXJlcyAlcnNwPXJlZ3MgKGFsc28gY3B1aW5mbyBpZiAhbWF5
YmV4ZW4pCkBAIC0yMzEsOCArMjQ0LDcgQEAKICNkZWZpbmUgU1BFQ19DVFJM
X0VYSVRfVE9fUFYgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIFwKICAgICBBTFRFUk5BVElWRSAiIiwgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAg
ICAgICAgRE9fU1BFQ19DVFJMX0VYSVRfVE9fR1VFU1QsIFg4Nl9GRUFUVVJF
X1NDX01TUl9QVjsgICAgICAgICAgICAgIFwKLSAgICBBTFRFUk5BVElWRSAi
IiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndfc2VsKCVyc3ApKSwg
ICAgICAgICAgIFwKLSAgICAgICAgWDg2X0ZFQVRVUkVfU0NfVkVSV19QVgor
ICAgIERPX1NQRUNfQ1RSTF9DT05EX1ZFUlcKIAogLyoKICAqIFVzZSBpbiBJ
U1QgaW50ZXJydXB0L2V4Y2VwdGlvbiBjb250ZXh0LiAgTWF5IGludGVycnVw
dCBYZW4gb3IgUFYgY29udGV4dC4K

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.15-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.15-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYyBiL3hlbi9hcmNoL3g4Ni9z
cGVjX2N0cmwuYwppbmRleCBiNGVmYzk0MGFhMmIuLjM4ZTBjYzI4NDdlMCAx
MDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jCisrKyBiL3hl
bi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAtMzIzLDcgKzMyMyw3IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAqIEhhcmR3YXJlIHJlYWQt
b25seSBpbmZvcm1hdGlvbiwgc3RhdGluZyBpbW11bml0eSB0byBjZXJ0YWlu
IGlzc3Vlcywgb3IKICAgICAgKiBzdWdnZXN0aW9ucyBvZiB3aGljaCBtaXRp
Z2F0aW9uIHRvIHVzZS4KICAgICAgKi8KLSAgICBwcmludGsoIiAgSGFyZHdh
cmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmlu
dGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVz
JXMlc1xuIiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfUkRDTF9O
TykgICAgICAgICAgICAgICAgICAgICAgICA/ICIgUkRDTF9OTyIgICAgICAg
IDogIiIsCiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX0lCUlNfQUxM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIElCUlNfQUxMIiAgICAgICA6
ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19SU0JBKSAgICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBSU0JBIiAgICAgICAgICAgOiAi
IiwKQEAgLTMzMiwxMyArMzMyLDE2IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBw
cmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBj
YXBzKQogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TU0JfTk8pICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBTU0JfTk8iICAgICAgICAgOiAi
IiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfTURTX05PKSAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTURTX05PIiAgICAgICAgIDogIiIs
CiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RBQV9OTykgICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIFRBQV9OTyIgICAgICAgICA6ICIiLAor
ICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TQkRSX1NTRFBfTk8pICAg
ICAgICAgICAgICAgICAgID8gIiBTQkRSX1NTRFBfTk8iICAgOiAiIiwKKyAg
ICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJTRFBfTk8pICAgICAgICAg
ICAgICAgICAgICAgICA/ICIgRkJTRFBfTk8iICAgICAgIDogIiIsCisgICAg
ICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1BTRFBfTk8pICAgICAgICAgICAg
ICAgICAgICAgICAgPyAiIFBTRFBfTk8iICAgICAgICA6ICIiLAogICAgICAg
ICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX0FM
V0FZUykpICAgID8gIiBJQlJTX0FMV0FZUyIgICAgOiAiIiwKICAgICAgICAg
ICAgKGU4YiAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1RJQlBfQUxX
QVlTKSkgICA/ICIgU1RJQlBfQUxXQVlTIiAgIDogIiIsCiAgICAgICAgICAg
IChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlNfRkFTVCkp
ICAgICAgPyAiIElCUlNfRkFTVCIgICAgICA6ICIiLAogICAgICAgICAgICAo
ZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX1NBTUVfTU9E
RSkpID8gIiBJQlJTX1NBTUVfTU9ERSIgOiAiIik7CiAKICAgICAvKiBIYXJk
d2FyZSBmZWF0dXJlcyB3aGljaCBuZWVkIGRyaXZpbmcgdG8gbWl0aWdhdGUg
aXNzdWVzLiAqLwotICAgIHByaW50aygiICBIYXJkd2FyZSBmZWF0dXJlczol
cyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmludGsoIiAgSGFyZHdh
cmUgZmVhdHVyZXM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzXG4iLAogICAg
ICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlBC
KSkgfHwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2soWDg2X0ZF
QVRVUkVfSUJSU0IpKSAgICAgICAgICA/ICIgSUJQQiIgICAgICAgICAgIDog
IiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFU
VVJFX0lCUlMpKSB8fApAQCAtMzUzLDcgKzM1Niw5IEBAIHN0YXRpYyB2b2lk
IF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1
aW50NjRfdCBjYXBzKQogICAgICAgICAgICAoXzdkMCAmIGNwdWZlYXRfbWFz
ayhYODZfRkVBVFVSRV9NRF9DTEVBUikpICAgICAgID8gIiBNRF9DTEVBUiIg
ICAgICAgOiAiIiwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2so
WDg2X0ZFQVRVUkVfU1JCRFNfQ1RSTCkpICAgICA/ICIgU1JCRFNfQ1RSTCIg
ICAgIDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4
Nl9GRUFUVVJFX1ZJUlRfU1NCRCkpICAgICAgPyAiIFZJUlRfU1NCRCIgICAg
ICA6ICIiLAotICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19UU1hfQ1RS
TCkgICAgICAgICAgICAgICAgICAgICAgID8gIiBUU1hfQ1RSTCIgICAgICAg
OiAiIik7CisgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIFRTWF9DVFJMIiAgICAgICA6
ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19GQl9DTEVBUikg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBGQl9DTEVBUiIgICAgICAgOiAi
IiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RS
TCkgICAgICAgICAgICAgICAgICA/ICIgRkJfQ0xFQVJfQ1RSTCIgIDogIiIp
OwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3VwcG9ydCB3aGljaCBwZXJ0YWlu
cyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBpZiAoIElTX0VOQUJMRUQoQ09O
RklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19FTkFCTEVEKENPTkZJR19TSEFE
T1dfUEFHSU5HKSApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2
L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXgu
aAppbmRleCA5NDc3NzgxMDVmYjYuLjFlNzQzNDYxZTkxZCAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC01OSw2ICs1OSwxMSBA
QAogI2RlZmluZSAgQVJDSF9DQVBTX0lGX1BTQ0hBTkdFX01DX05PICAgICAg
ICAoX0FDKDEsIFVMTCkgPDwgIDYpCiAjZGVmaW5lICBBUkNIX0NBUFNfVFNY
X0NUUkwgICAgICAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAgNykKICNk
ZWZpbmUgIEFSQ0hfQ0FQU19UQUFfTk8gICAgICAgICAgICAgICAgICAgKF9B
QygxLCBVTEwpIDw8ICA4KQorI2RlZmluZSAgQVJDSF9DQVBTX1NCRFJfU1NE
UF9OTyAgICAgICAgICAgICAoX0FDKDEsIFVMTCkgPDwgMTMpCisjZGVmaW5l
ICBBUkNIX0NBUFNfRkJTRFBfTk8gICAgICAgICAgICAgICAgIChfQUMoMSwg
VUxMKSA8PCAxNCkKKyNkZWZpbmUgIEFSQ0hfQ0FQU19QU0RQX05PICAgICAg
ICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8IDE1KQorI2RlZmluZSAgQVJD
SF9DQVBTX0ZCX0NMRUFSICAgICAgICAgICAgICAgICAoX0FDKDEsIFVMTCkg
PDwgMTcpCisjZGVmaW5lICBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RSTCAgICAg
ICAgICAgIChfQUMoMSwgVUxMKSA8PCAxOCkKIAogI2RlZmluZSBNU1JfRkxV
U0hfQ01EICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAwMTBiCiAjZGVm
aW5lICBGTFVTSF9DTURfTDFEICAgICAgICAgICAgICAgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgMCkKQEAgLTc2LDYgKzgxLDcgQEAKICNkZWZpbmUgIE1D
VV9PUFRfQ1RSTF9STkdEU19NSVRHX0RJUyAgICAgICAgKF9BQygxLCBVTEwp
IDw8ICAwKQogI2RlZmluZSAgTUNVX09QVF9DVFJMX1JUTV9BTExPVyAgICAg
ICAgICAgICAoX0FDKDEsIFVMTCkgPDwgIDEpCiAjZGVmaW5lICBNQ1VfT1BU
X0NUUkxfUlRNX0xPQ0tFRCAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAg
MikKKyNkZWZpbmUgIE1DVV9PUFRfQ1RSTF9GQl9DTEVBUl9ESVMgICAgICAg
ICAgKF9BQygxLCBVTEwpIDw8ICAzKQogCiAjZGVmaW5lIE1TUl9SVElUX09V
VFBVVF9CQVNFICAgICAgICAgICAgICAgIDB4MDAwMDA1NjAKICNkZWZpbmUg
TVNSX1JUSVRfT1VUUFVUX01BU0sgICAgICAgICAgICAgICAgMHgwMDAwMDU2
MQo=

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.15-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.15-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpPbiBDUFVzIHZ1bG5lcmFibGUg
dG8gTURTLCB0aGUgZXhpc3RpbmcgbWl0aWdhdGlvbnMgYXJlIHRoZSBiZXN0
IHdlIGNhbiBkbyB0bwptaXRpZ2F0ZSBNTUlPIGNyb3NzLWRvbWFpbiBkYXRh
IGxlYWthZ2UuCgpPbiBDUFVzIGZpeGVkIHRvIE1EUyBidXQgdnVsbmVyYWJs
ZSBNTUlPIHN0YWxlIGRhdGEgbGVha2FnZSwgdGhpcyBvcHRpb246CgogKiBP
biBDUFVzIHN1c2NlcHRpYmxlIHRvIEZCU0RQLCBtaXRpZ2F0ZXMgY3Jvc3Mt
ZG9tYWluIGZpbGwgYnVmZmVyIGxlYWthZ2UKICAgdXNpbmcgRkJfQ0xFQVIu
CiAqIE9uIENQVXMgc3VzY2VwdGlibGUgdG8gU0JEUiwgbWl0aWdhdGVzIFJO
RyBkYXRhIHJlY292ZXJ5IGJ5IGVuZ2FnaW5nIHRoZQogICBzcmItbG9jaywg
cHJldmlvdXNseSB1c2VkIHRvIG1pdGlnYXRlIFNSQkRTLgoKQm90aCBtaXRp
Z2F0aW9ucyByZXF1aXJlIG1pY3JvY29kZSBmcm9tIElQVSAyMDIyLjEsIE1h
eSAyMDIyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDQuCgpTaWduZWQtb2Zm
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PgpSZXZpZXdlZC1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Ci0tLQpCYWNrcG9ydGluZyBub3RlOiBGb3IgWGVuIDQuNyBh
bmQgZWFybGllciB3aXRoIGJvb2xfdCBub3QgYWxpYXNpbmcgYm9vbCwgdGhl
CkFSQ0hfQ0FQU19GQl9DTEVBUiBodW5rIG5lZWRzICEhCgpkaWZmIC0tZ2l0
IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmluZGV4IGU0YzgyMGUxNzA1
My4uZTE3YTgzNWVkMjU0IDEwMDY0NAotLS0gYS9kb2NzL21pc2MveGVuLWNv
bW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jCkBAIC0yMTcxLDcgKzIxNzEsNyBAQCBCeSBkZWZhdWx0
IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZSAoaS5lIGBzc2Jk
PXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4NikKID4gYD0gTGlzdCBv
ZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2bSxtc3Itc2MscnNiLG1k
LWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAgICBidGktdGh1bms9cmV0
cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIsc3NiZCxlYWdlci1mcHUs
Ci0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmIt
bG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1pb309PGJvb2w+IF1gCiAK
IENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVjdXRpb24gc2lkZWNoYW5u
ZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBYZW4KIHdpbGwgcGljayB0
aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9ucyBiYXNlZCBvbiBjb21w
aWxlZCBpbiBzdXBwb3J0LApAQCAtMjI1MCw4ICsyMjUwLDE2IEBAIFhlbiB3
aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBPbiBoYXJkd2FyZSBzdXBw
b3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxvY2s9YCBvcHRpb24gY2Fu
IGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQgWGVuIGZyb20gcHJvdGVj
dCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIgZnJvbSBsZWFraW5nIHN0
YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2lsbCBlbmFibGUgdGhpcyBt
aXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hlcmUgTURTCi1pcyBmaXhl
ZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAoaW4gd2hpY2ggY2FzZSwg
dGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdheSBmb3IgYW4gYXR0YWNr
ZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4KK2lzIGZpeGVkIGFuZCBU
QUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVyZSBhcmUgbm8gdW5wcml2
aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGljaCBjYXNlLCB0aGVyZSBp
cyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFuIGF0dGFja2VyIHRvCitv
YnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5wcml2LW1taW89YCBib29s
ZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0ZW0gaGFzIChvciB3aWxs
IGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmlsZWdlZCBkb21haW5zIGdy
YW50ZWQgYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIEJ5CitkZWZhdWx0LCB0
aGlzIG9wdGlvbiBpcyBkaXNhYmxlZC4gIElmIGVuYWJsZWQsIFhlbiB3aWxs
IHVzZSB0aGUgYEZCX0NMRUFSYAorYW5kL29yIGBTUkJEU19DVFJMYCBmdW5j
dGlvbmFsaXR5IGF2YWlsYWJsZSBpbiB0aGUgSW50ZWwgTWF5IDIwMjIgbWlj
cm9jb2RlCityZWxlYXNlIHRvIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGRhdGEgdmlhIHRoZSBNTUlPIFN0YWxlIERhdGEKK3Z1bG5lcmFi
aWxpdGllcy4KIAogIyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5g
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKaW5kZXggMzhlMGNjMjg0N2UwLi44M2I4
NTZmYTkxNTggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YworKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3
LDggQEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tc2Jk
c19vbmx5OyAvKiA9PiBtaW5pbWFsIEhUIGltcGFjdC4gKi8KIHN0YXRpYyBi
b29sIF9faW5pdGRhdGEgY3B1X2hhc19idWdfbWRzOyAvKiBBbnkgb3RoZXIg
TXtMUCxTQixGQn1EUyBjb21iaW5hdGlvbi4gKi8KIAogc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9zcmJfbG9jayA9IC0xOworc3RhdGljIGJvb2wg
X19pbml0ZGF0YSBvcHRfdW5wcml2X21taW87CitzdGF0aWMgYm9vbCBfX3Jl
YWRfbW9zdGx5IG9wdF9mYl9jbGVhcl9tbWlvOwogCiBzdGF0aWMgaW50IF9f
aW5pdCBwYXJzZV9zcGVjX2N0cmwoY29uc3QgY2hhciAqcykKIHsKQEAgLTE4
NCw2ICsxODYsOCBAQCBzdGF0aWMgaW50IF9faW5pdCBwYXJzZV9zcGVjX2N0
cmwoY29uc3QgY2hhciAqcykKICAgICAgICAgICAgIG9wdF9icmFuY2hfaGFy
ZGVuID0gdmFsOwogICAgICAgICBlbHNlIGlmICggKHZhbCA9IHBhcnNlX2Jv
b2xlYW4oInNyYi1sb2NrIiwgcywgc3MpKSA+PSAwICkKICAgICAgICAgICAg
IG9wdF9zcmJfbG9jayA9IHZhbDsKKyAgICAgICAgZWxzZSBpZiAoICh2YWwg
PSBwYXJzZV9ib29sZWFuKCJ1bnByaXYtbW1pbyIsIHMsIHNzKSkgPj0gMCAp
CisgICAgICAgICAgICBvcHRfdW5wcml2X21taW8gPSB2YWw7CiAgICAgICAg
IGVsc2UKICAgICAgICAgICAgIHJjID0gLUVJTlZBTDsKIApAQCAtMzkyLDcg
KzM5Niw4IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVu
dW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAg
ICBvcHRfc3JiX2xvY2sgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA/
ICIgU1JCX0xPQ0srIiA6ICIgU1JCX0xPQ0stIiwKICAgICAgICAgICAgb3B0
X2licGIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIElC
UEIiICA6ICIiLAogICAgICAgICAgICBvcHRfbDFkX2ZsdXNoICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTDFEX0ZMVVNIIiA6ICIiLAotICAg
ICAgICAgICBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2bSAg
ICAgICA/ICIgVkVSVyIgIDogIiIsCisgICAgICAgICAgIG9wdF9tZF9jbGVh
cl9wdiB8fCBvcHRfbWRfY2xlYXJfaHZtIHx8CisgICAgICAgICAgIG9wdF9m
Yl9jbGVhcl9tbWlvICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBWRVJX
IiAgOiAiIiwKICAgICAgICAgICAgb3B0X2JyYW5jaF9oYXJkZW4gICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIEJSQU5DSF9IQVJERU4iIDogIiIpOwog
CiAgICAgLyogTDFURiBkaWFnbm9zdGljcywgcHJpbnRlZCBpZiB2dWxuZXJh
YmxlIG9yIFBWIHNoYWRvd2luZyBpcyBpbiB1c2UuICovCkBAIC05MTIsNyAr
OTE3LDkgQEAgdm9pZCBzcGVjX2N0cmxfaW5pdF9kb21haW4oc3RydWN0IGRv
bWFpbiAqZCkKIHsKICAgICBib29sIHB2ID0gaXNfcHZfZG9tYWluKGQpOwog
Ci0gICAgZC0+YXJjaC52ZXJ3ID0gcHYgPyBvcHRfbWRfY2xlYXJfcHYgOiBv
cHRfbWRfY2xlYXJfaHZtOworICAgIGQtPmFyY2gudmVydyA9CisgICAgICAg
IChwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm0pIHx8
CisgICAgICAgIChvcHRfZmJfY2xlYXJfbW1pbyAmJiBpc19pb21tdV9lbmFi
bGVkKGQpKTsKIH0KIAogdm9pZCBfX2luaXQgaW5pdF9zcGVjdWxhdGlvbl9t
aXRpZ2F0aW9ucyh2b2lkKQpAQCAtMTE0OCw2ICsxMTU1LDE4IEBAIHZvaWQg
X19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAg
ICBtZHNfY2FsY3VsYXRpb25zKGNhcHMpOwogCiAgICAgLyoKKyAgICAgKiBQ
YXJ0cyB3aGljaCBlbnVtZXJhdGUgRkJfQ0xFQVIgYXJlIHRob3NlIHdoaWNo
IGFyZSBwb3N0LU1EU19OTyBhbmQgaGF2ZQorICAgICAqIHJlaW50cm9kdWNl
ZCB0aGUgVkVSVyBmaWxsIGJ1ZmZlciBmbHVzaGluZyBzaWRlIGVmZmVjdCBi
ZWNhdXNlIG9mIGEKKyAgICAgKiBzdXNjZXB0aWJpbGl0eSB0byBGQlNEUC4K
KyAgICAgKgorICAgICAqIElmIHVucHJpdmlsZWdlZCBndWVzdHMgaGF2ZSAo
b3Igd2lsbCBoYXZlKSBNTUlPIG1hcHBpbmdzLCB3ZSBjYW4KKyAgICAgKiBt
aXRpZ2F0ZSBjcm9zcy1kb21haW4gbGVha2FnZSBvZiBmaWxsIGJ1ZmZlciBk
YXRhIGJ5IGlzc3VpbmcgVkVSVyBvbgorICAgICAqIHRoZSByZXR1cm4tdG8t
Z3Vlc3QgcGF0aC4KKyAgICAgKi8KKyAgICBpZiAoIG9wdF91bnByaXZfbW1p
byApCisgICAgICAgIG9wdF9mYl9jbGVhcl9tbWlvID0gY2FwcyAmIEFSQ0hf
Q0FQU19GQl9DTEVBUjsKKworICAgIC8qCiAgICAgICogQnkgZGVmYXVsdCwg
ZW5hYmxlIFBWIGFuZCBIVk0gbWl0aWdhdGlvbnMgb24gTURTLXZ1bG5lcmFi
bGUgaGFyZHdhcmUuCiAgICAgICogVGhpcyB3aWxsIG9ubHkgYmUgYSB0b2tl
biBlZmZvcnQgZm9yIE1MUERTL01GQkRTIHdoZW4gSFQgaXMgZW5hYmxlZCwK
ICAgICAgKiBidXQgaXQgaXMgc29tZXdoYXQgYmV0dGVyIHRoYW4gbm90aGlu
Zy4KQEAgLTExNjAsMTggKzExNzksMjAgQEAgdm9pZCBfX2luaXQgaW5pdF9z
cGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9NRF9DTEVB
UikpOwogCiAgICAgLyoKLSAgICAgKiBFbmFibGUgTURTIGRlZmVuY2VzIGFz
IGFwcGxpY2FibGUuICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgot
ICAgICAqIGVpdGhlciBQViBvciBIVk0gZGVmZW5jZXMgYXJlIHVzZWQuCisg
ICAgICogRW5hYmxlIE1EUy9NTUlPIGRlZmVuY2VzIGFzIGFwcGxpY2FibGUu
ICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgorICAgICAqIGVpdGhl
ciB0aGUgUFYgb3IgSFZNIE1EUyBkZWZlbmNlcyBhcmUgdXNlZCwgb3IgaWYg
d2UgbWF5IGdpdmUgTU1JTworICAgICAqIGFjY2VzcyB0byB1bnRydXN0ZWQg
Z3Vlc3RzLgogICAgICAqCiAgICAgICogSFZNIGlzIG1vcmUgY29tcGxpY2F0
ZWQuICBUaGUgTURfQ0xFQVIgbWljcm9jb2RlIGV4dGVuZHMgTDFEX0ZMVVNI
IHdpdGgKICAgICAgKiBlcXVpdmFsZW50IHNlbWFudGljcyB0byBhdm9pZCBu
ZWVkaW5nIHRvIHBlcmZvcm0gYm90aCBmbHVzaGVzIG9uIHRoZQotICAgICAq
IEhWTSBwYXRoLiAgVGhlcmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4g
YWRkaXRpb24gdG8gTDFEX0ZMVVNILgorICAgICAqIEhWTSBwYXRoLiAgVGhl
cmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4gYWRkaXRpb24gdG8gTDFE
X0ZMVVNIIChmb3IKKyAgICAgKiBNRFMgbWl0aWdhdGlvbnMuICBMMURfRkxV
U0ggaXMgbm90IHNhZmUgZm9yIE1NSU8gbWl0aWdhdGlvbnMuKQogICAgICAq
CiAgICAgICogQWZ0ZXIgY2FsY3VsYXRpbmcgdGhlIGFwcHJvcHJpYXRlIGlk
bGUgc2V0dGluZywgc2ltcGxpZnkKICAgICAgKiBvcHRfbWRfY2xlYXJfaHZt
IHRvIG1lYW4ganVzdCAic2hvdWxkIHdlIFZFUlcgb24gdGhlIHdheSBpbnRv
IEhWTQogICAgICAqIGd1ZXN0cyIsIHNvIHNwZWNfY3RybF9pbml0X2RvbWFp
bigpIGNhbiBjYWxjdWxhdGUgc3VpdGFibGUgc2V0dGluZ3MuCiAgICAgICov
Ci0gICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2
bSApCisgICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFy
X2h2bSB8fCBvcHRfZmJfY2xlYXJfbW1pbyApCiAgICAgICAgIHNldHVwX2Zv
cmNlX2NwdV9jYXAoWDg2X0ZFQVRVUkVfU0NfVkVSV19JRExFKTsKICAgICBv
cHRfbWRfY2xlYXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wx
REZMKSAmJiAhb3B0X2wxZF9mbHVzaDsKIApAQCAtMTIzNiwxNCArMTI1Nywx
OSBAQCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25z
KHZvaWQpCiAgICAgICogT24gc29tZSBTUkJEUy1hZmZlY3RlZCBoYXJkd2Fy
ZSwgaXQgbWF5IGJlIHNhZmUgdG8gcmVsYXggc3JiLWxvY2sgYnkKICAgICAg
KiBkZWZhdWx0LgogICAgICAqCi0gICAgICogT24gcGFydHMgd2hpY2ggZW51
bWVyYXRlIE1EU19OTyBhbmQgbm90IFRBQV9OTywgVFNYIGlzIHRoZSBvbmx5
IGtub3duCi0gICAgICogd2F5IHRvIGFjY2VzcyB0aGUgRmlsbCBCdWZmZXIu
ICBJZiBUU1ggaXNuJ3QgYXZhaWxhYmxlIChpbmMuIFNLVQotICAgICAqIHJl
YXNvbnMgb24gc29tZSBtb2RlbHMpLCBvciBUU1ggaXMgZXhwbGljaXRseSBk
aXNhYmxlZCwgdGhlbiB0aGVyZSBpcwotICAgICAqIG5vIG5lZWQgZm9yIHRo
ZSBleHRyYSBvdmVyaGVhZCB0byBwcm90ZWN0IFJEUkFORC9SRFNFRUQuCisg
ICAgICogQWxsIHBhcnRzIHdpdGggU1JCRFNfQ1RSTCBzdWZmZXIgU1NEUCwg
dGhlIG1lY2hhbmlzbSBieSB3aGljaCBzdGFsZSBSTkcKKyAgICAgKiBkYXRh
IGJlY29tZXMgYXZhaWxhYmxlIHRvIG90aGVyIGNvbnRleHRzLiAgVG8gcmVj
b3ZlciB0aGUgZGF0YSwgYW4KKyAgICAgKiBhdHRhY2tlciBuZWVkcyB0byB1
c2U6CisgICAgICogIC0gU0JEUyAoTURTIG9yIFRBQSB0byBzYW1wbGUgdGhl
IGNvcmVzIGZpbGwgYnVmZmVyKQorICAgICAqICAtIFNCRFIgKEFyY2hpdGVj
dHVyYWxseSByZXRyaWV2ZSBzdGFsZSB0cmFuc2FjdGlvbiBidWZmZXIgY29u
dGVudHMpCisgICAgICogIC0gRFJQVyAoQXJjaGl0ZWN0dXJhbGx5IGxhdGNo
IHN0YWxlIGZpbGwgYnVmZmVyIGRhdGEpCisgICAgICoKKyAgICAgKiBPbiBN
RFNfTk8gcGFydHMsIGFuZCB3aXRoIFRBQV9OTyBvciBUU1ggdW5hdmFpbGFi
bGUvZGlzYWJsZWQsIGFuZCB0aGVyZQorICAgICAqIGlzIG5vIHVucHJpdmls
ZWdlZCBNTUlPIGFjY2VzcywgdGhlIFJORyBkYXRhIGRvZXNuJ3QgbmVlZCBw
cm90ZWN0aW5nLgogICAgICAqLwogICAgIGlmICggY3B1X2hhc19zcmJkc19j
dHJsICkKICAgICB7Ci0gICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0x
ICYmCisgICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0xICYmICFvcHRf
dW5wcml2X21taW8gJiYKICAgICAgICAgICAgICAoY2FwcyAmIChBUkNIX0NB
UFNfTURTX05PfEFSQ0hfQ0FQU19UQUFfTk8pKSA9PSBBUkNIX0NBUFNfTURT
X05PICYmCiAgICAgICAgICAgICAgKCFjcHVfaGFzX2hsZSB8fCAoKGNhcHMg
JiBBUkNIX0NBUFNfVFNYX0NUUkwpICYmIHJ0bV9kaXNhYmxlZCkpICkKICAg
ICAgICAgICAgIG9wdF9zcmJfbG9jayA9IDA7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.16-1.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.16-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogTWFrZSBWRVJXIGZsdXNoaW5n
IHJ1bnRpbWUgY29uZGl0aW9uYWwKCkN1cnJlbnRseSwgVkVSVyBmbHVzaGlu
ZyB0byBtaXRpZ2F0ZSBNRFMgaXMgYm9vdCB0aW1lIGNvbmRpdGlvbmFsIHBl
ciBkb21haW4KdHlwZS4gIEhvd2V2ZXIsIHRvIHByb3ZpZGUgbWl0aWdhdGlv
bnMgZm9yIERSUFcgKENWRS0yMDIyLTIxMTY2KSwgd2UgbmVlZCB0bwpjb25k
aXRpb25hbGx5IHVzZSBWRVJXIGJhc2VkIG9uIHRoZSB0cnVzdHdvcnRoaW5l
c3Mgb2YgdGhlIGd1ZXN0LCBhbmQgdGhlCmRldmljZXMgcGFzc2VkIHRocm91
Z2guCgpSZW1vdmUgdGhlIFBWL0hWTSBhbHRlcm5hdGl2ZXMgYW5kIGluc3Rl
YWQgaXNzdWUgYSBWRVJXIG9uIHRoZSByZXR1cm4tdG8tZ3Vlc3QKcGF0aCBk
ZXBlbmRpbmcgb24gdGhlIFNDRl92ZXJ3IGJpdCBpbiBjcHVpbmZvIHNwZWNf
Y3RybF9mbGFncy4KCkludHJvZHVjZSBzcGVjX2N0cmxfaW5pdF9kb21haW4o
KSBhbmQgZC0+YXJjaC52ZXJ3IHRvIGNhbGN1bGF0ZSB0aGUgVkVSVwpkaXNw
b3NpdGlvbiBhdCBkb21haW4gY3JlYXRpb24gdGltZSwgYW5kIGNvbnRleHQg
c3dpdGNoIHRoZSBTQ0ZfdmVydyBiaXQuCgpGb3Igbm93LCBWRVJXIGZsdXNo
aW5nIGlzIHVzZWQgYW5kIGNvbnRyb2xsZWQgZXhhY3RseSBhcyBiZWZvcmUs
IGJ1dCBsYXRlcgpwYXRjaGVzIHdpbGwgYWRkIHBlci1kb21haW4gY2FzZXMg
dG9vLgoKTm8gY2hhbmdlIGluIGJlaGF2aW91ci4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5k
cmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBh
dSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBh
L2RvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlz
Yy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwppbmRleCAxZDA4ZmI3ZTlhYTYu
LmQ1Y2IwOWY4NjU0MSAxMDA2NDQKLS0tIGEvZG9jcy9taXNjL3hlbi1jb21t
YW5kLWxpbmUucGFuZG9jCisrKyBiL2RvY3MvbWlzYy94ZW4tY29tbWFuZC1s
aW5lLnBhbmRvYwpAQCAtMjI1OCw5ICsyMjU4LDggQEAgaW4gcGxhY2UgZm9y
IGd1ZXN0cyB0byB1c2UuCiBVc2Ugb2YgYSBwb3NpdGl2ZSBib29sZWFuIHZh
bHVlIGZvciBlaXRoZXIgb2YgdGhlc2Ugb3B0aW9ucyBpcyBpbnZhbGlkLgog
CiBUaGUgYm9vbGVhbnMgYHB2PWAsIGBodm09YCwgYG1zci1zYz1gLCBgcnNi
PWAgYW5kIGBtZC1jbGVhcj1gIG9mZmVyIGZpbmUKLWdyYWluZWQgY29udHJv
bCBvdmVyIHRoZSBhbHRlcm5hdGl2ZSBibG9ja3MgdXNlZCBieSBYZW4uICBU
aGVzZSBpbXBhY3QgWGVuJ3MKLWFiaWxpdHkgdG8gcHJvdGVjdCBpdHNlbGYs
IGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1YWxpc2Ugc3VwcG9ydCBmb3Ig
Z3Vlc3RzCi10byB1c2UuCitncmFpbmVkIGNvbnRyb2wgb3ZlciB0aGUgcHJp
bWl0aXZlcyBieSBYZW4uICBUaGVzZSBpbXBhY3QgWGVuJ3MgYWJpbGl0eSB0
bworcHJvdGVjdCBpdHNlbGYsIGFuZCBYZW4ncyBhYmlsaXR5IHRvIHZpcnR1
YWxpc2Ugc3VwcG9ydCBmb3IgZ3Vlc3RzIHRvIHVzZS4KIAogKiBgcHY9YCBh
bmQgYGh2bT1gIG9mZmVyIGNvbnRyb2wgb3ZlciBhbGwgc3Vib3B0aW9ucyBm
b3IgUFYgYW5kIEhWTSBndWVzdHMKICAgcmVzcGVjdGl2ZWx5LgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbi5jIGIveGVuL2FyY2gveDg2L2Rv
bWFpbi5jCmluZGV4IGVmMTgxMmRjMTQwMi4uMWZlNjY0NGE3MWFlIDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluLmMKKysrIGIveGVuL2FyY2gv
eDg2L2RvbWFpbi5jCkBAIC04NjMsNiArODYzLDggQEAgaW50IGFyY2hfZG9t
YWluX2NyZWF0ZShzdHJ1Y3QgZG9tYWluICpkLAogCiAgICAgZC0+YXJjaC5t
c3JfcmVsYXhlZCA9IGNvbmZpZy0+YXJjaC5taXNjX2ZsYWdzICYgWEVOX1g4
Nl9NU1JfUkVMQVhFRDsKIAorICAgIHNwZWNfY3RybF9pbml0X2RvbWFpbihk
KTsKKwogICAgIHJldHVybiAwOwogCiAgZmFpbDoKQEAgLTIwMTcsMTQgKzIw
MTksMTUgQEAgc3RhdGljIHZvaWQgX19jb250ZXh0X3N3aXRjaCh2b2lkKQog
dm9pZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0
IHZjcHUgKm5leHQpCiB7CiAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNtcF9w
cm9jZXNzb3JfaWQoKTsKKyAgICBzdHJ1Y3QgY3B1X2luZm8gKmluZm8gPSBn
ZXRfY3B1X2luZm8oKTsKICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpwcmV2
ZCA9IHByZXYtPmRvbWFpbiwgKm5leHRkID0gbmV4dC0+ZG9tYWluOwogICAg
IHVuc2lnbmVkIGludCBkaXJ0eV9jcHUgPSByZWFkX2F0b21pYygmbmV4dC0+
ZGlydHlfY3B1KTsKIAogICAgIEFTU0VSVChwcmV2ICE9IG5leHQpOwogICAg
IEFTU0VSVChsb2NhbF9pcnFfaXNfZW5hYmxlZCgpKTsKIAotICAgIGdldF9j
cHVfaW5mbygpLT51c2VfcHZfY3IzID0gZmFsc2U7Ci0gICAgZ2V0X2NwdV9p
bmZvKCktPnhlbl9jcjMgPSAwOworICAgIGluZm8tPnVzZV9wdl9jcjMgPSBm
YWxzZTsKKyAgICBpbmZvLT54ZW5fY3IzID0gMDsKIAogICAgIGlmICggdW5s
aWtlbHkoZGlydHlfY3B1ICE9IGNwdSkgJiYgZGlydHlfY3B1ICE9IFZDUFVf
Q1BVX0NMRUFOICkKICAgICB7CkBAIC0yMDg4LDYgKzIwOTEsMTEgQEAgdm9p
ZCBjb250ZXh0X3N3aXRjaChzdHJ1Y3QgdmNwdSAqcHJldiwgc3RydWN0IHZj
cHUgKm5leHQpCiAgICAgICAgICAgICAgICAgKmxhc3RfaWQgPSBuZXh0X2lk
OwogICAgICAgICAgICAgfQogICAgICAgICB9CisKKyAgICAgICAgLyogVXBk
YXRlIHRoZSB0b3Atb2Ytc3RhY2sgYmxvY2sgd2l0aCB0aGUgVkVSVyBkaXNw
b3NpdGlvbi4gKi8KKyAgICAgICAgaW5mby0+c3BlY19jdHJsX2ZsYWdzICY9
IH5TQ0ZfdmVydzsKKyAgICAgICAgaWYgKCBuZXh0ZC0+YXJjaC52ZXJ3ICkK
KyAgICAgICAgICAgIGluZm8tPnNwZWNfY3RybF9mbGFncyB8PSBTQ0ZfdmVy
dzsKICAgICB9CiAKICAgICBzY2hlZF9jb250ZXh0X3N3aXRjaGVkKHByZXYs
IG5leHQpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS92bXgvZW50
cnkuUyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L2VudHJ5LlMKaW5kZXggNDk2
NTFmM2M0MzVhLi41ZjVkZTQ1YTEzMDkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9odm0vdm14L2VudHJ5LlMKKysrIGIveGVuL2FyY2gveDg2L2h2bS92
bXgvZW50cnkuUwpAQCAtODcsNyArODcsNyBAQCBVTkxJS0VMWV9FTkQocmVh
bG1vZGUpCiAKICAgICAgICAgLyogV0FSTklORyEgYHJldGAsIGBjYWxsICpg
LCBgam1wICpgIG5vdCBzYWZlIGJleW9uZCB0aGlzIHBvaW50LiAqLwogICAg
ICAgICAvKiBTUEVDX0NUUkxfRVhJVF9UT19WTVggICBSZXE6ICVyc3A9cmVn
cy9jcHVpbmZvICAgICAgICAgICAgICBDbG9iOiAgICAqLwotICAgICAgICBB
TFRFUk5BVElWRSAiIiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndf
c2VsKCVyc3ApKSwgWDg2X0ZFQVRVUkVfU0NfVkVSV19IVk0KKyAgICAgICAg
RE9fU1BFQ19DVFJMX0NPTkRfVkVSVwogCiAgICAgICAgIG1vdiAgVkNQVV9o
dm1fZ3Vlc3RfY3IyKCVyYngpLCVyYXgKIApkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gveDg2L3NwZWNfY3RybC5jIGIveGVuL2FyY2gveDg2L3NwZWNfY3RybC5j
CmluZGV4IGMxOTQ2NGRhNzBjZS4uMjE3MzBhYTAzMDcxIDEwMDY0NAotLS0g
YS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKKysrIGIveGVuL2FyY2gveDg2
L3NwZWNfY3RybC5jCkBAIC0zNiw4ICszNiw4IEBAIHN0YXRpYyBib29sIF9f
aW5pdGRhdGEgb3B0X21zcl9zY19wdiA9IHRydWU7CiBzdGF0aWMgYm9vbCBf
X2luaXRkYXRhIG9wdF9tc3Jfc2NfaHZtID0gdHJ1ZTsKIHN0YXRpYyBpbnQ4
X3QgX19pbml0ZGF0YSBvcHRfcnNiX3B2ID0gLTE7CiBzdGF0aWMgYm9vbCBf
X2luaXRkYXRhIG9wdF9yc2JfaHZtID0gdHJ1ZTsKLXN0YXRpYyBpbnQ4X3Qg
X19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfcHYgPSAtMTsKLXN0YXRpYyBpbnQ4
X3QgX19pbml0ZGF0YSBvcHRfbWRfY2xlYXJfaHZtID0gLTE7CitzdGF0aWMg
aW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX3B2ID0gLTE7Citz
dGF0aWMgaW50OF90IF9fcmVhZF9tb3N0bHkgb3B0X21kX2NsZWFyX2h2bSA9
IC0xOwogCiAvKiBDbWRsaW5lIGNvbnRyb2xzIGZvciBYZW4ncyBzcGVjdWxh
dGl2ZSBzZXR0aW5ncy4gKi8KIHN0YXRpYyBlbnVtIGluZF90aHVuayB7CkBA
IC05MzIsNiArOTMyLDEzIEBAIHN0YXRpYyBfX2luaXQgdm9pZCBtZHNfY2Fs
Y3VsYXRpb25zKHVpbnQ2NF90IGNhcHMpCiAgICAgfQogfQogCit2b2lkIHNw
ZWNfY3RybF9pbml0X2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkKQoreworICAg
IGJvb2wgcHYgPSBpc19wdl9kb21haW4oZCk7CisKKyAgICBkLT5hcmNoLnZl
cncgPSBwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm07
Cit9CisKIHZvaWQgX19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlv
bnModm9pZCkKIHsKICAgICBlbnVtIGluZF90aHVuayB0aHVuayA9IFRIVU5L
X0RFRkFVTFQ7CkBAIC0xMTk2LDIxICsxMjAzLDIwIEBAIHZvaWQgX19pbml0
IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVf
TURfQ0xFQVIpKTsKIAogICAgIC8qCi0gICAgICogRW5hYmxlIE1EUyBkZWZl
bmNlcyBhcyBhcHBsaWNhYmxlLiAgVGhlIFBWIGJsb2NrcyBuZWVkIHVzaW5n
IGFsbCB0aGUKLSAgICAgKiB0aW1lLCBhbmQgdGhlIElkbGUgYmxvY2tzIG5l
ZWQgdXNpbmcgaWYgZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUKLSAg
ICAgKiB1c2VkLgorICAgICAqIEVuYWJsZSBNRFMgZGVmZW5jZXMgYXMgYXBw
bGljYWJsZS4gIFRoZSBJZGxlIGJsb2NrcyBuZWVkIHVzaW5nIGlmCisgICAg
ICogZWl0aGVyIFBWIG9yIEhWTSBkZWZlbmNlcyBhcmUgdXNlZC4KICAgICAg
KgogICAgICAqIEhWTSBpcyBtb3JlIGNvbXBsaWNhdGVkLiAgVGhlIE1EX0NM
RUFSIG1pY3JvY29kZSBleHRlbmRzIEwxRF9GTFVTSCB3aXRoCi0gICAgICog
ZXF1aXZlbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0byBwZXJm
b3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKLSAgICAgKiBIVk0gcGF0aC4gIFRo
ZSBIVk0gYmxvY2tzIGRvbid0IG5lZWQgYWN0aXZhdGluZyBpZiBvdXIgaHlw
ZXJ2aXNvciB0b2xkCi0gICAgICogdXMgaXQgd2FzIGhhbmRsaW5nIEwxRF9G
TFVTSCwgb3Igd2UgYXJlIHVzaW5nIEwxRF9GTFVTSCBvdXJzZWx2ZXMuCisg
ICAgICogZXF1aXZhbGVudCBzZW1hbnRpY3MgdG8gYXZvaWQgbmVlZGluZyB0
byBwZXJmb3JtIGJvdGggZmx1c2hlcyBvbiB0aGUKKyAgICAgKiBIVk0gcGF0
aC4gIFRoZXJlZm9yZSwgd2UgZG9uJ3QgbmVlZCBWRVJXIGluIGFkZGl0aW9u
IHRvIEwxRF9GTFVTSC4KKyAgICAgKgorICAgICAqIEFmdGVyIGNhbGN1bGF0
aW5nIHRoZSBhcHByb3ByaWF0ZSBpZGxlIHNldHRpbmcsIHNpbXBsaWZ5Cisg
ICAgICogb3B0X21kX2NsZWFyX2h2bSB0byBtZWFuIGp1c3QgInNob3VsZCB3
ZSBWRVJXIG9uIHRoZSB3YXkgaW50byBIVk0KKyAgICAgKiBndWVzdHMiLCBz
byBzcGVjX2N0cmxfaW5pdF9kb21haW4oKSBjYW4gY2FsY3VsYXRlIHN1aXRh
YmxlIHNldHRpbmdzLgogICAgICAqLwotICAgIGlmICggb3B0X21kX2NsZWFy
X3B2ICkKLSAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2NhcChYODZfRkVBVFVS
RV9TQ19WRVJXX1BWKTsKICAgICBpZiAoIG9wdF9tZF9jbGVhcl9wdiB8fCBv
cHRfbWRfY2xlYXJfaHZtICkKICAgICAgICAgc2V0dXBfZm9yY2VfY3B1X2Nh
cChYODZfRkVBVFVSRV9TQ19WRVJXX0lETEUpOwotICAgIGlmICggb3B0X21k
X2NsZWFyX2h2bSAmJiAhKGNhcHMgJiBBUkNIX0NBUFNfU0tJUF9MMURGTCkg
JiYgIW9wdF9sMWRfZmx1c2ggKQotICAgICAgICBzZXR1cF9mb3JjZV9jcHVf
Y2FwKFg4Nl9GRUFUVVJFX1NDX1ZFUldfSFZNKTsKKyAgICBvcHRfbWRfY2xl
YXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wxREZMKSAmJiAh
b3B0X2wxZF9mbHVzaDsKIAogICAgIC8qCiAgICAgICogV2FybiB0aGUgdXNl
ciBpZiB0aGV5IGFyZSBvbiBNTFBEUy9NRkJEUy12dWxuZXJhYmxlIGhhcmR3
YXJlIHdpdGggSFQKZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
Y3B1ZmVhdHVyZXMuaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVy
ZXMuaAppbmRleCBmZjMxNTdkNTJkMTMuLmJkNDVhMTQ0ZWU3OCAxMDA2NDQK
LS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9jcHVmZWF0dXJlcy5oCisrKyBi
L3hlbi9pbmNsdWRlL2FzbS14ODYvY3B1ZmVhdHVyZXMuaApAQCAtMzUsOCAr
MzUsNyBAQCBYRU5fQ1BVRkVBVFVSRShTQ19SU0JfSFZNLCAgICAgICAgWDg2
X1NZTlRIKDE5KSkgLyogUlNCIG92ZXJ3cml0ZSBuZWVkZWQgZm9yIEhWTQog
WEVOX0NQVUZFQVRVUkUoWEVOX1NFTEZTTk9PUCwgICAgIFg4Nl9TWU5USCgy
MCkpIC8qIFNFTEZTTk9PUCBnZXRzIHVzZWQgYnkgWGVuIGl0c2VsZiAqLwog
WEVOX0NQVUZFQVRVUkUoU0NfTVNSX0lETEUsICAgICAgIFg4Nl9TWU5USCgy
MSkpIC8qIChTQ19NU1JfUFYgfHwgU0NfTVNSX0hWTSkgJiYgZGVmYXVsdF94
ZW5fc3BlY19jdHJsICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fTEJSLCAgICAg
ICAgICAgWDg2X1NZTlRIKDIyKSkgLyogWGVuIHVzZXMgTVNSX0RFQlVHQ1RM
LkxCUiAqLwotWEVOX0NQVUZFQVRVUkUoU0NfVkVSV19QViwgICAgICAgIFg4
Nl9TWU5USCgyMykpIC8qIFZFUlcgdXNlZCBieSBYZW4gZm9yIFBWICovCi1Y
RU5fQ1BVRkVBVFVSRShTQ19WRVJXX0hWTSwgICAgICAgWDg2X1NZTlRIKDI0
KSkgLyogVkVSVyB1c2VkIGJ5IFhlbiBmb3IgSFZNICovCisvKiBCaXRzIDIz
LDI0IHVudXNlZC4gKi8KIFhFTl9DUFVGRUFUVVJFKFNDX1ZFUldfSURMRSwg
ICAgICBYODZfU1lOVEgoMjUpKSAvKiBWRVJXIHVzZWQgYnkgWGVuIGZvciBp
ZGxlICovCiBYRU5fQ1BVRkVBVFVSRShYRU5fU0hTVEssICAgICAgICAgWDg2
X1NZTlRIKDI2KSkgLyogWGVuIHVzZXMgQ0VUIFNoYWRvdyBTdGFja3MgKi8K
IFhFTl9DUFVGRUFUVVJFKFhFTl9JQlQsICAgICAgICAgICBYODZfU1lOVEgo
MjcpKSAvKiBYZW4gdXNlcyBDRVQgSW5kaXJlY3QgQnJhbmNoIFRyYWNraW5n
ICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L2RvbWFpbi5o
IGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9kb21haW4uaAppbmRleCA5MmQ1NGRl
MGI5YTEuLjIzOThhMWQ5OWRhOSAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUv
YXNtLXg4Ni9kb21haW4uaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2Rv
bWFpbi5oCkBAIC0zMTksNiArMzE5LDkgQEAgc3RydWN0IGFyY2hfZG9tYWlu
CiAgICAgdWludDMyX3QgcGNpX2NmODsKICAgICB1aW50OF90IGNtb3NfaWR4
OwogCisgICAgLyogVXNlIFZFUlcgb24gcmV0dXJuLXRvLWd1ZXN0IGZvciBp
dHMgZmx1c2hpbmcgc2lkZSBlZmZlY3QuICovCisgICAgYm9vbCB2ZXJ3Owor
CiAgICAgdW5pb24gewogICAgICAgICBzdHJ1Y3QgcHZfZG9tYWluIHB2Owog
ICAgICAgICBzdHJ1Y3QgaHZtX2RvbWFpbiBodm07CmRpZmYgLS1naXQgYS94
ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybC5oIGIveGVuL2luY2x1ZGUv
YXNtLXg4Ni9zcGVjX2N0cmwuaAppbmRleCBmNzYwMjk1MjM2MTAuLjc1MTM1
NWY0NzFmNCAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVj
X2N0cmwuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybC5o
CkBAIC0yNCw2ICsyNCw3IEBACiAjZGVmaW5lIFNDRl91c2Vfc2hhZG93ICgx
IDw8IDApCiAjZGVmaW5lIFNDRl9pc3Rfd3Jtc3IgICgxIDw8IDEpCiAjZGVm
aW5lIFNDRl9pc3RfcnNiICAgICgxIDw8IDIpCisjZGVmaW5lIFNDRl92ZXJ3
ICAgICAgICgxIDw8IDMpCiAKICNpZm5kZWYgX19BU1NFTUJMWV9fCiAKQEAg
LTMyLDYgKzMzLDcgQEAKICNpbmNsdWRlIDxhc20vbXNyLWluZGV4Lmg+CiAK
IHZvaWQgaW5pdF9zcGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKTsKK3Zv
aWQgc3BlY19jdHJsX2luaXRfZG9tYWluKHN0cnVjdCBkb21haW4gKmQpOwog
CiBleHRlcm4gYm9vbCBvcHRfaWJwYjsKIGV4dGVybiBib29sIG9wdF9zc2Jk
OwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxf
YXNtLmggYi94ZW4vaW5jbHVkZS9hc20teDg2L3NwZWNfY3RybF9hc20uaApp
bmRleCAwMmIzYjE4Y2U2OWYuLjVhNTkwYmFjNDRhYSAxMDA2NDQKLS0tIGEv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKKysrIGIveGVu
L2luY2x1ZGUvYXNtLXg4Ni9zcGVjX2N0cmxfYXNtLmgKQEAgLTEzNiw2ICsx
MzYsMTkgQEAKICNlbmRpZgogLmVuZG0KIAorLm1hY3JvIERPX1NQRUNfQ1RS
TF9DT05EX1ZFUlcKKy8qCisgKiBSZXF1aXJlcyAlcnNwPWNwdWluZm8KKyAq
CisgKiBJc3N1ZSBhIFZFUlcgZm9yIGl0cyBmbHVzaGluZyBzaWRlIGVmZmVj
dCwgaWYgaW5kaWNhdGVkLiAgVGhpcyBpcyBhIFNwZWN0cmUKKyAqIHYxIGdh
ZGdldCwgYnV0IHRoZSBJUkVUL1ZNRW50cnkgaXMgc2VyaWFsaXNpbmcuCisg
Ki8KKyAgICB0ZXN0YiAkU0NGX3ZlcncsIENQVUlORk9fc3BlY19jdHJsX2Zs
YWdzKCVyc3ApCisgICAganogLkxcQF92ZXJ3X3NraXAKKyAgICB2ZXJ3IENQ
VUlORk9fdmVyd19zZWwoJXJzcCkKKy5MXEBfdmVyd19za2lwOgorLmVuZG0K
KwogLm1hY3JvIERPX1NQRUNfQ1RSTF9FTlRSWSBtYXliZXhlbjpyZXEKIC8q
CiAgKiBSZXF1aXJlcyAlcnNwPXJlZ3MgKGFsc28gY3B1aW5mbyBpZiAhbWF5
YmV4ZW4pCkBAIC0yMzEsOCArMjQ0LDcgQEAKICNkZWZpbmUgU1BFQ19DVFJM
X0VYSVRfVE9fUFYgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIFwKICAgICBBTFRFUk5BVElWRSAiIiwgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAg
ICAgICAgRE9fU1BFQ19DVFJMX0VYSVRfVE9fR1VFU1QsIFg4Nl9GRUFUVVJF
X1NDX01TUl9QVjsgICAgICAgICAgICAgIFwKLSAgICBBTFRFUk5BVElWRSAi
IiwgX19zdHJpbmdpZnkodmVydyBDUFVJTkZPX3Zlcndfc2VsKCVyc3ApKSwg
ICAgICAgICAgIFwKLSAgICAgICAgWDg2X0ZFQVRVUkVfU0NfVkVSV19QVgor
ICAgIERPX1NQRUNfQ1RSTF9DT05EX1ZFUlcKIAogLyoKICAqIFVzZSBpbiBJ
U1QgaW50ZXJydXB0L2V4Y2VwdGlvbiBjb250ZXh0LiAgTWF5IGludGVycnVw
dCBYZW4gb3IgUFYgY29udGV4dC4K

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.16-2.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.16-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogRW51bWVyYXRpb24gZm9yIE1N
SU8gU3RhbGUgRGF0YSBjb250cm9scwoKVGhlIHRocmVlICpfTk8gYml0cyBp
bmRpY2F0ZSBub24tc3VzY2VwdGliaWxpdHkgdG8gdGhlIFNTRFAsIEZCU0RQ
IGFuZCBQU0RQCmRhdGEgbW92ZW1lbnQgcHJpbWl0aXZlcy4KCkZCX0NMRUFS
IGluZGljYXRlcyB0aGF0IHRoZSBWRVJXIGluc3RydWN0aW9uIGhhcyByZS1n
YWluZWQgaXQncyBGaWxsIEJ1ZmZlcgpmbHVzaGluZyBzaWRlIGVmZmVjdC4g
IFRoaXMgaXMgb25seSBlbnVtZXJhdGVkIG9uIHBhcnRzIHdoZXJlIFZFUlcg
aGFkCnByZXZpb3VzbHkgbG9zdCBpdCdzIGZsdXNoaW5nIHNpZGUgZWZmZWN0
IGR1ZSB0byB0aGUgTURTL1RBQSB2dWxuZXJhYmlsaXRpZXMKYmVpbmcgZml4
ZWQgaW4gaGFyZHdhcmUuCgpGQl9DTEVBUl9DVFJMIGlzIGF2YWlsYWJsZSBv
biBhIHN1YnNldCBvZiBGQl9DTEVBUiBwYXJ0cyB3aGVyZSB0aGUgRmlsbCBC
dWZmZXIKY2xlYXJpbmcgc2lkZSBlZmZlY3Qgb2YgVkVSVyBjYW4gYmUgdHVy
bmVkIG9mZiBmb3IgcGVyZm9ybWFuY2UgcmVhc29ucy4KClRoaXMgaXMgcGFy
dCBvZiBYU0EtNDA0LgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8
YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2Vy
IFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwuYyBiL3hlbi9hcmNoL3g4Ni9z
cGVjX2N0cmwuYwppbmRleCAyMTczMGFhMDMwNzEuLmQyODU1MzhiZGU5ZiAx
MDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jCisrKyBiL3hl
bi9hcmNoL3g4Ni9zcGVjX2N0cmwuYwpAQCAtMzIzLDcgKzMyMyw3IEBAIHN0
YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5r
IHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAqIEhhcmR3YXJlIHJlYWQt
b25seSBpbmZvcm1hdGlvbiwgc3RhdGluZyBpbW11bml0eSB0byBjZXJ0YWlu
IGlzc3Vlcywgb3IKICAgICAgKiBzdWdnZXN0aW9ucyBvZiB3aGljaCBtaXRp
Z2F0aW9uIHRvIHVzZS4KICAgICAgKi8KLSAgICBwcmludGsoIiAgSGFyZHdh
cmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmlu
dGsoIiAgSGFyZHdhcmUgaGludHM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVz
JXMlc1xuIiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfUkRDTF9O
TykgICAgICAgICAgICAgICAgICAgICAgICA/ICIgUkRDTF9OTyIgICAgICAg
IDogIiIsCiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX0lCUlNfQUxM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIElCUlNfQUxMIiAgICAgICA6
ICIiLAogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19SU0JBKSAgICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBSU0JBIiAgICAgICAgICAgOiAi
IiwKQEAgLTMzMiwxMyArMzMyLDE2IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBw
cmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBj
YXBzKQogICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TU0JfTk8pICAg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBTU0JfTk8iICAgICAgICAgOiAi
IiwKICAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfTURTX05PKSAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTURTX05PIiAgICAgICAgIDogIiIs
CiAgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RBQV9OTykgICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIFRBQV9OTyIgICAgICAgICA6ICIiLAor
ICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19TQkRSX1NTRFBfTk8pICAg
ICAgICAgICAgICAgICAgID8gIiBTQkRSX1NTRFBfTk8iICAgOiAiIiwKKyAg
ICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJTRFBfTk8pICAgICAgICAg
ICAgICAgICAgICAgICA/ICIgRkJTRFBfTk8iICAgICAgIDogIiIsCisgICAg
ICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1BTRFBfTk8pICAgICAgICAgICAg
ICAgICAgICAgICAgPyAiIFBTRFBfTk8iICAgICAgICA6ICIiLAogICAgICAg
ICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX0FM
V0FZUykpICAgID8gIiBJQlJTX0FMV0FZUyIgICAgOiAiIiwKICAgICAgICAg
ICAgKGU4YiAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1RJQlBfQUxX
QVlTKSkgICA/ICIgU1RJQlBfQUxXQVlTIiAgIDogIiIsCiAgICAgICAgICAg
IChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlNfRkFTVCkp
ICAgICAgPyAiIElCUlNfRkFTVCIgICAgICA6ICIiLAogICAgICAgICAgICAo
ZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlJTX1NBTUVfTU9E
RSkpID8gIiBJQlJTX1NBTUVfTU9ERSIgOiAiIik7CiAKICAgICAvKiBIYXJk
d2FyZSBmZWF0dXJlcyB3aGljaCBuZWVkIGRyaXZpbmcgdG8gbWl0aWdhdGUg
aXNzdWVzLiAqLwotICAgIHByaW50aygiICBIYXJkd2FyZSBmZWF0dXJlczol
cyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKKyAgICBwcmludGsoIiAgSGFyZHdh
cmUgZmVhdHVyZXM6JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzXG4iLAogICAg
ICAgICAgICAoZThiICAmIGNwdWZlYXRfbWFzayhYODZfRkVBVFVSRV9JQlBC
KSkgfHwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2soWDg2X0ZF
QVRVUkVfSUJSU0IpKSAgICAgICAgICA/ICIgSUJQQiIgICAgICAgICAgIDog
IiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFU
VVJFX0lCUlMpKSB8fApAQCAtMzUzLDcgKzM1Niw5IEBAIHN0YXRpYyB2b2lk
IF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rIHRodW5rLCB1
aW50NjRfdCBjYXBzKQogICAgICAgICAgICAoXzdkMCAmIGNwdWZlYXRfbWFz
ayhYODZfRkVBVFVSRV9NRF9DTEVBUikpICAgICAgID8gIiBNRF9DTEVBUiIg
ICAgICAgOiAiIiwKICAgICAgICAgICAgKF83ZDAgJiBjcHVmZWF0X21hc2so
WDg2X0ZFQVRVUkVfU1JCRFNfQ1RSTCkpICAgICA/ICIgU1JCRFNfQ1RSTCIg
ICAgIDogIiIsCiAgICAgICAgICAgIChlOGIgICYgY3B1ZmVhdF9tYXNrKFg4
Nl9GRUFUVVJFX1ZJUlRfU1NCRCkpICAgICAgPyAiIFZJUlRfU1NCRCIgICAg
ICA6ICIiLAotICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19UU1hfQ1RS
TCkgICAgICAgICAgICAgICAgICAgICAgID8gIiBUU1hfQ1RSTCIgICAgICAg
OiAiIik7CisgICAgICAgICAgIChjYXBzICYgQVJDSF9DQVBTX1RTWF9DVFJM
KSAgICAgICAgICAgICAgICAgICAgICAgPyAiIFRTWF9DVFJMIiAgICAgICA6
ICIiLAorICAgICAgICAgICAoY2FwcyAmIEFSQ0hfQ0FQU19GQl9DTEVBUikg
ICAgICAgICAgICAgICAgICAgICAgID8gIiBGQl9DTEVBUiIgICAgICAgOiAi
IiwKKyAgICAgICAgICAgKGNhcHMgJiBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RS
TCkgICAgICAgICAgICAgICAgICA/ICIgRkJfQ0xFQVJfQ1RSTCIgIDogIiIp
OwogCiAgICAgLyogQ29tcGlsZWQtaW4gc3VwcG9ydCB3aGljaCBwZXJ0YWlu
cyB0byBtaXRpZ2F0aW9ucy4gKi8KICAgICBpZiAoIElTX0VOQUJMRUQoQ09O
RklHX0lORElSRUNUX1RIVU5LKSB8fCBJU19FTkFCTEVEKENPTkZJR19TSEFE
T1dfUEFHSU5HKSApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2
L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXgu
aAppbmRleCAzMTk2NGI4OGFmN2EuLjcyYmMzMmJhMDRmZiAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC02Niw2ICs2NiwxMSBA
QAogI2RlZmluZSAgQVJDSF9DQVBTX0lGX1BTQ0hBTkdFX01DX05PICAgICAg
ICAoX0FDKDEsIFVMTCkgPDwgIDYpCiAjZGVmaW5lICBBUkNIX0NBUFNfVFNY
X0NUUkwgICAgICAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAgNykKICNk
ZWZpbmUgIEFSQ0hfQ0FQU19UQUFfTk8gICAgICAgICAgICAgICAgICAgKF9B
QygxLCBVTEwpIDw8ICA4KQorI2RlZmluZSAgQVJDSF9DQVBTX1NCRFJfU1NE
UF9OTyAgICAgICAgICAgICAoX0FDKDEsIFVMTCkgPDwgMTMpCisjZGVmaW5l
ICBBUkNIX0NBUFNfRkJTRFBfTk8gICAgICAgICAgICAgICAgIChfQUMoMSwg
VUxMKSA8PCAxNCkKKyNkZWZpbmUgIEFSQ0hfQ0FQU19QU0RQX05PICAgICAg
ICAgICAgICAgICAgKF9BQygxLCBVTEwpIDw8IDE1KQorI2RlZmluZSAgQVJD
SF9DQVBTX0ZCX0NMRUFSICAgICAgICAgICAgICAgICAoX0FDKDEsIFVMTCkg
PDwgMTcpCisjZGVmaW5lICBBUkNIX0NBUFNfRkJfQ0xFQVJfQ1RSTCAgICAg
ICAgICAgIChfQUMoMSwgVUxMKSA8PCAxOCkKIAogI2RlZmluZSBNU1JfRkxV
U0hfQ01EICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAwMTBiCiAjZGVm
aW5lICBGTFVTSF9DTURfTDFEICAgICAgICAgICAgICAgICAgICAgIChfQUMo
MSwgVUxMKSA8PCAgMCkKQEAgLTgzLDYgKzg4LDcgQEAKICNkZWZpbmUgIE1D
VV9PUFRfQ1RSTF9STkdEU19NSVRHX0RJUyAgICAgICAgKF9BQygxLCBVTEwp
IDw8ICAwKQogI2RlZmluZSAgTUNVX09QVF9DVFJMX1JUTV9BTExPVyAgICAg
ICAgICAgICAoX0FDKDEsIFVMTCkgPDwgIDEpCiAjZGVmaW5lICBNQ1VfT1BU
X0NUUkxfUlRNX0xPQ0tFRCAgICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAg
MikKKyNkZWZpbmUgIE1DVV9PUFRfQ1RSTF9GQl9DTEVBUl9ESVMgICAgICAg
ICAgKF9BQygxLCBVTEwpIDw8ICAzKQogCiAjZGVmaW5lIE1TUl9SVElUX09V
VFBVVF9CQVNFICAgICAgICAgICAgICAgIDB4MDAwMDA1NjAKICNkZWZpbmUg
TVNSX1JUSVRfT1VUUFVUX01BU0sgICAgICAgICAgICAgICAgMHgwMDAwMDU2
MQo=

--=separator
Content-Type: application/octet-stream; name="xsa404/xsa404-4.16-3.patch"
Content-Disposition: attachment; filename="xsa404/xsa404-4.16-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L3NwZWMtY3RybDogQWRkIHNwZWMtY3RybD11bnBy
aXYtbW1pbwoKUGVyIFhlbidzIHN1cHBvcnQgc3RhdGVtZW50LCBQQ0kgcGFz
c3Rocm91Z2ggc2hvdWxkIGJlIHRvIHRydXN0ZWQgZG9tYWlucwpiZWNhdXNl
IHRoZSBvdmVyYWxsIHN5c3RlbSBzZWN1cml0eSBkZXBlbmRzIG9uIGZhY3Rv
cnMgb3V0c2lkZSBvZiBYZW4ncwpjb250cm9sLgoKQXMgc3VjaCwgWGVuLCBp
biBhIHN1cHBvcnRlZCBjb25maWd1cmF0aW9uLCBpcyBub3QgdnVsbmVyYWJs
ZSB0byBEUlBXL1NCRFIuCgpIb3dldmVyLCB1c2VycyB3aG8gaGF2ZSByaXNr
IGFzc2Vzc2VkIHRoZWlyIGNvbmZpZ3VyYXRpb24gbWF5IGJlIGhhcHB5IHdp
dGgKdGhlIHJpc2sgb2YgRG9TLCBidXQgdW5oYXBweSB3aXRoIHRoZSByaXNr
IG9mIGNyb3NzLWRvbWFpbiBkYXRhIGxlYWthZ2UuICBTdWNoCnVzZXJzIHNo
b3VsZCBlbmFibGUgdGhpcyBvcHRpb24uCgpPbiBDUFVzIHZ1bG5lcmFibGUg
dG8gTURTLCB0aGUgZXhpc3RpbmcgbWl0aWdhdGlvbnMgYXJlIHRoZSBiZXN0
IHdlIGNhbiBkbyB0bwptaXRpZ2F0ZSBNTUlPIGNyb3NzLWRvbWFpbiBkYXRh
IGxlYWthZ2UuCgpPbiBDUFVzIGZpeGVkIHRvIE1EUyBidXQgdnVsbmVyYWJs
ZSBNTUlPIHN0YWxlIGRhdGEgbGVha2FnZSwgdGhpcyBvcHRpb246CgogKiBP
biBDUFVzIHN1c2NlcHRpYmxlIHRvIEZCU0RQLCBtaXRpZ2F0ZXMgY3Jvc3Mt
ZG9tYWluIGZpbGwgYnVmZmVyIGxlYWthZ2UKICAgdXNpbmcgRkJfQ0xFQVIu
CiAqIE9uIENQVXMgc3VzY2VwdGlibGUgdG8gU0JEUiwgbWl0aWdhdGVzIFJO
RyBkYXRhIHJlY292ZXJ5IGJ5IGVuZ2FnaW5nIHRoZQogICBzcmItbG9jaywg
cHJldmlvdXNseSB1c2VkIHRvIG1pdGlnYXRlIFNSQkRTLgoKQm90aCBtaXRp
Z2F0aW9ucyByZXF1aXJlIG1pY3JvY29kZSBmcm9tIElQVSAyMDIyLjEsIE1h
eSAyMDIyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS00MDQuCgpTaWduZWQtb2Zm
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PgpSZXZpZXdlZC1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Ci0tLQpCYWNrcG9ydGluZyBub3RlOiBGb3IgWGVuIDQuNyBh
bmQgZWFybGllciB3aXRoIGJvb2xfdCBub3QgYWxpYXNpbmcgYm9vbCwgdGhl
CkFSQ0hfQ0FQU19GQl9DTEVBUiBodW5rIG5lZWRzICEhCgpkaWZmIC0tZ2l0
IGEvZG9jcy9taXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jIGIvZG9jcy9t
aXNjL3hlbi1jb21tYW5kLWxpbmUucGFuZG9jCmluZGV4IGQ1Y2IwOWY4NjU0
MS4uYTY0MmU0MzQ3NmEyIDEwMDY0NAotLS0gYS9kb2NzL21pc2MveGVuLWNv
bW1hbmQtbGluZS5wYW5kb2MKKysrIGIvZG9jcy9taXNjL3hlbi1jb21tYW5k
LWxpbmUucGFuZG9jCkBAIC0yMjM1LDcgKzIyMzUsNyBAQCBCeSBkZWZhdWx0
IFNTQkQgd2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZSAoaS5lIGBzc2Jk
PXJ1bnRpbWVgKS4KICMjIyBzcGVjLWN0cmwgKHg4NikKID4gYD0gTGlzdCBv
ZiBbIDxib29sPiwgeGVuPTxib29sPiwge3B2LGh2bSxtc3Itc2MscnNiLG1k
LWNsZWFyfT08Ym9vbD4sCiA+ICAgICAgICAgICAgICBidGktdGh1bms9cmV0
cG9saW5lfGxmZW5jZXxqbXAsIHtpYnJzLGlicGIsc3NiZCxlYWdlci1mcHUs
Ci0+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJhbmNoLWhhcmRlbixzcmIt
bG9ja309PGJvb2w+IF1gCis+ICAgICAgICAgICAgICBsMWQtZmx1c2gsYnJh
bmNoLWhhcmRlbixzcmItbG9jayx1bnByaXYtbW1pb309PGJvb2w+IF1gCiAK
IENvbnRyb2xzIGZvciBzcGVjdWxhdGl2ZSBleGVjdXRpb24gc2lkZWNoYW5u
ZWwgbWl0aWdhdGlvbnMuICBCeSBkZWZhdWx0LCBYZW4KIHdpbGwgcGljayB0
aGUgbW9zdCBhcHByb3ByaWF0ZSBtaXRpZ2F0aW9ucyBiYXNlZCBvbiBjb21w
aWxlZCBpbiBzdXBwb3J0LApAQCAtMjMxNCw4ICsyMzE0LDE2IEBAIFhlbiB3
aWxsIGVuYWJsZSB0aGlzIG1pdGlnYXRpb24uCiBPbiBoYXJkd2FyZSBzdXBw
b3J0aW5nIFNSQkRTX0NUUkwsIHRoZSBgc3JiLWxvY2s9YCBvcHRpb24gY2Fu
IGJlIHVzZWQgdG8gZm9yY2UKIG9yIHByZXZlbnQgWGVuIGZyb20gcHJvdGVj
dCB0aGUgU3BlY2lhbCBSZWdpc3RlciBCdWZmZXIgZnJvbSBsZWFraW5nIHN0
YWxlCiBkYXRhLiBCeSBkZWZhdWx0LCBYZW4gd2lsbCBlbmFibGUgdGhpcyBt
aXRpZ2F0aW9uLCBleGNlcHQgb24gcGFydHMgd2hlcmUgTURTCi1pcyBmaXhl
ZCBhbmQgVEFBIGlzIGZpeGVkL21pdGlnYXRlZCAoaW4gd2hpY2ggY2FzZSwg
dGhlcmUgaXMgYmVsaWV2ZWQgdG8gYmUgbm8KLXdheSBmb3IgYW4gYXR0YWNr
ZXIgdG8gb2J0YWluIHRoZSBzdGFsZSBkYXRhKS4KK2lzIGZpeGVkIGFuZCBU
QUEgaXMgZml4ZWQvbWl0aWdhdGVkIGFuZCB0aGVyZSBhcmUgbm8gdW5wcml2
aWxlZ2VkIE1NSU8KK21hcHBpbmdzIChpbiB3aGljaCBjYXNlLCB0aGVyZSBp
cyBiZWxpZXZlZCB0byBiZSBubyB3YXkgZm9yIGFuIGF0dGFja2VyIHRvCitv
YnRhaW4gc3RhbGUgZGF0YSkuCisKK1RoZSBgdW5wcml2LW1taW89YCBib29s
ZWFuIGluZGljYXRlcyB3aGV0aGVyIHRoZSBzeXN0ZW0gaGFzIChvciB3aWxs
IGhhdmUpCitsZXNzIHRoYW4gZnVsbHkgcHJpdmlsZWdlZCBkb21haW5zIGdy
YW50ZWQgYWNjZXNzIHRvIE1NSU8gZGV2aWNlcy4gIEJ5CitkZWZhdWx0LCB0
aGlzIG9wdGlvbiBpcyBkaXNhYmxlZC4gIElmIGVuYWJsZWQsIFhlbiB3aWxs
IHVzZSB0aGUgYEZCX0NMRUFSYAorYW5kL29yIGBTUkJEU19DVFJMYCBmdW5j
dGlvbmFsaXR5IGF2YWlsYWJsZSBpbiB0aGUgSW50ZWwgTWF5IDIwMjIgbWlj
cm9jb2RlCityZWxlYXNlIHRvIG1pdGlnYXRlIGNyb3NzLWRvbWFpbiBsZWFr
YWdlIG9mIGRhdGEgdmlhIHRoZSBNTUlPIFN0YWxlIERhdGEKK3Z1bG5lcmFi
aWxpdGllcy4KIAogIyMjIHN5bmNfY29uc29sZQogPiBgPSA8Ym9vbGVhbj5g
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4v
YXJjaC94ODYvc3BlY19jdHJsLmMKaW5kZXggZDI4NTUzOGJkZTlmLi4wOTkx
MTNiYTQxZTYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9zcGVjX2N0cmwu
YworKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKQEAgLTY3LDYgKzY3
LDggQEAgc3RhdGljIGJvb2wgX19pbml0ZGF0YSBjcHVfaGFzX2J1Z19tc2Jk
c19vbmx5OyAvKiA9PiBtaW5pbWFsIEhUIGltcGFjdC4gKi8KIHN0YXRpYyBi
b29sIF9faW5pdGRhdGEgY3B1X2hhc19idWdfbWRzOyAvKiBBbnkgb3RoZXIg
TXtMUCxTQixGQn1EUyBjb21iaW5hdGlvbi4gKi8KIAogc3RhdGljIGludDhf
dCBfX2luaXRkYXRhIG9wdF9zcmJfbG9jayA9IC0xOworc3RhdGljIGJvb2wg
X19pbml0ZGF0YSBvcHRfdW5wcml2X21taW87CitzdGF0aWMgYm9vbCBfX3Jl
YWRfbW9zdGx5IG9wdF9mYl9jbGVhcl9tbWlvOwogCiBzdGF0aWMgaW50IF9f
aW5pdCBwYXJzZV9zcGVjX2N0cmwoY29uc3QgY2hhciAqcykKIHsKQEAgLTE4
NCw2ICsxODYsOCBAQCBzdGF0aWMgaW50IF9faW5pdCBwYXJzZV9zcGVjX2N0
cmwoY29uc3QgY2hhciAqcykKICAgICAgICAgICAgIG9wdF9icmFuY2hfaGFy
ZGVuID0gdmFsOwogICAgICAgICBlbHNlIGlmICggKHZhbCA9IHBhcnNlX2Jv
b2xlYW4oInNyYi1sb2NrIiwgcywgc3MpKSA+PSAwICkKICAgICAgICAgICAg
IG9wdF9zcmJfbG9jayA9IHZhbDsKKyAgICAgICAgZWxzZSBpZiAoICh2YWwg
PSBwYXJzZV9ib29sZWFuKCJ1bnByaXYtbW1pbyIsIHMsIHNzKSkgPj0gMCAp
CisgICAgICAgICAgICBvcHRfdW5wcml2X21taW8gPSB2YWw7CiAgICAgICAg
IGVsc2UKICAgICAgICAgICAgIHJjID0gLUVJTlZBTDsKIApAQCAtMzkyLDcg
KzM5Niw4IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVu
dW0gaW5kX3RodW5rIHRodW5rLCB1aW50NjRfdCBjYXBzKQogICAgICAgICAg
ICBvcHRfc3JiX2xvY2sgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA/
ICIgU1JCX0xPQ0srIiA6ICIgU1JCX0xPQ0stIiwKICAgICAgICAgICAgb3B0
X2licGIgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPyAiIElC
UEIiICA6ICIiLAogICAgICAgICAgICBvcHRfbDFkX2ZsdXNoICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA/ICIgTDFEX0ZMVVNIIiA6ICIiLAotICAg
ICAgICAgICBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2bSAg
ICAgICA/ICIgVkVSVyIgIDogIiIsCisgICAgICAgICAgIG9wdF9tZF9jbGVh
cl9wdiB8fCBvcHRfbWRfY2xlYXJfaHZtIHx8CisgICAgICAgICAgIG9wdF9m
Yl9jbGVhcl9tbWlvICAgICAgICAgICAgICAgICAgICAgICAgID8gIiBWRVJX
IiAgOiAiIiwKICAgICAgICAgICAgb3B0X2JyYW5jaF9oYXJkZW4gICAgICAg
ICAgICAgICAgICAgICAgICAgPyAiIEJSQU5DSF9IQVJERU4iIDogIiIpOwog
CiAgICAgLyogTDFURiBkaWFnbm9zdGljcywgcHJpbnRlZCBpZiB2dWxuZXJh
YmxlIG9yIFBWIHNoYWRvd2luZyBpcyBpbiB1c2UuICovCkBAIC05NDEsNyAr
OTQ2LDkgQEAgdm9pZCBzcGVjX2N0cmxfaW5pdF9kb21haW4oc3RydWN0IGRv
bWFpbiAqZCkKIHsKICAgICBib29sIHB2ID0gaXNfcHZfZG9tYWluKGQpOwog
Ci0gICAgZC0+YXJjaC52ZXJ3ID0gcHYgPyBvcHRfbWRfY2xlYXJfcHYgOiBv
cHRfbWRfY2xlYXJfaHZtOworICAgIGQtPmFyY2gudmVydyA9CisgICAgICAg
IChwdiA/IG9wdF9tZF9jbGVhcl9wdiA6IG9wdF9tZF9jbGVhcl9odm0pIHx8
CisgICAgICAgIChvcHRfZmJfY2xlYXJfbW1pbyAmJiBpc19pb21tdV9lbmFi
bGVkKGQpKTsKIH0KIAogdm9pZCBfX2luaXQgaW5pdF9zcGVjdWxhdGlvbl9t
aXRpZ2F0aW9ucyh2b2lkKQpAQCAtMTE5Niw2ICsxMjAzLDE4IEBAIHZvaWQg
X19pbml0IGluaXRfc3BlY3VsYXRpb25fbWl0aWdhdGlvbnModm9pZCkKICAg
ICBtZHNfY2FsY3VsYXRpb25zKGNhcHMpOwogCiAgICAgLyoKKyAgICAgKiBQ
YXJ0cyB3aGljaCBlbnVtZXJhdGUgRkJfQ0xFQVIgYXJlIHRob3NlIHdoaWNo
IGFyZSBwb3N0LU1EU19OTyBhbmQgaGF2ZQorICAgICAqIHJlaW50cm9kdWNl
ZCB0aGUgVkVSVyBmaWxsIGJ1ZmZlciBmbHVzaGluZyBzaWRlIGVmZmVjdCBi
ZWNhdXNlIG9mIGEKKyAgICAgKiBzdXNjZXB0aWJpbGl0eSB0byBGQlNEUC4K
KyAgICAgKgorICAgICAqIElmIHVucHJpdmlsZWdlZCBndWVzdHMgaGF2ZSAo
b3Igd2lsbCBoYXZlKSBNTUlPIG1hcHBpbmdzLCB3ZSBjYW4KKyAgICAgKiBt
aXRpZ2F0ZSBjcm9zcy1kb21haW4gbGVha2FnZSBvZiBmaWxsIGJ1ZmZlciBk
YXRhIGJ5IGlzc3VpbmcgVkVSVyBvbgorICAgICAqIHRoZSByZXR1cm4tdG8t
Z3Vlc3QgcGF0aC4KKyAgICAgKi8KKyAgICBpZiAoIG9wdF91bnByaXZfbW1p
byApCisgICAgICAgIG9wdF9mYl9jbGVhcl9tbWlvID0gY2FwcyAmIEFSQ0hf
Q0FQU19GQl9DTEVBUjsKKworICAgIC8qCiAgICAgICogQnkgZGVmYXVsdCwg
ZW5hYmxlIFBWIGFuZCBIVk0gbWl0aWdhdGlvbnMgb24gTURTLXZ1bG5lcmFi
bGUgaGFyZHdhcmUuCiAgICAgICogVGhpcyB3aWxsIG9ubHkgYmUgYSB0b2tl
biBlZmZvcnQgZm9yIE1MUERTL01GQkRTIHdoZW4gSFQgaXMgZW5hYmxlZCwK
ICAgICAgKiBidXQgaXQgaXMgc29tZXdoYXQgYmV0dGVyIHRoYW4gbm90aGlu
Zy4KQEAgLTEyMDgsMTggKzEyMjcsMjAgQEAgdm9pZCBfX2luaXQgaW5pdF9z
cGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9NRF9DTEVB
UikpOwogCiAgICAgLyoKLSAgICAgKiBFbmFibGUgTURTIGRlZmVuY2VzIGFz
IGFwcGxpY2FibGUuICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgot
ICAgICAqIGVpdGhlciBQViBvciBIVk0gZGVmZW5jZXMgYXJlIHVzZWQuCisg
ICAgICogRW5hYmxlIE1EUy9NTUlPIGRlZmVuY2VzIGFzIGFwcGxpY2FibGUu
ICBUaGUgSWRsZSBibG9ja3MgbmVlZCB1c2luZyBpZgorICAgICAqIGVpdGhl
ciB0aGUgUFYgb3IgSFZNIE1EUyBkZWZlbmNlcyBhcmUgdXNlZCwgb3IgaWYg
d2UgbWF5IGdpdmUgTU1JTworICAgICAqIGFjY2VzcyB0byB1bnRydXN0ZWQg
Z3Vlc3RzLgogICAgICAqCiAgICAgICogSFZNIGlzIG1vcmUgY29tcGxpY2F0
ZWQuICBUaGUgTURfQ0xFQVIgbWljcm9jb2RlIGV4dGVuZHMgTDFEX0ZMVVNI
IHdpdGgKICAgICAgKiBlcXVpdmFsZW50IHNlbWFudGljcyB0byBhdm9pZCBu
ZWVkaW5nIHRvIHBlcmZvcm0gYm90aCBmbHVzaGVzIG9uIHRoZQotICAgICAq
IEhWTSBwYXRoLiAgVGhlcmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4g
YWRkaXRpb24gdG8gTDFEX0ZMVVNILgorICAgICAqIEhWTSBwYXRoLiAgVGhl
cmVmb3JlLCB3ZSBkb24ndCBuZWVkIFZFUlcgaW4gYWRkaXRpb24gdG8gTDFE
X0ZMVVNIIChmb3IKKyAgICAgKiBNRFMgbWl0aWdhdGlvbnMuICBMMURfRkxV
U0ggaXMgbm90IHNhZmUgZm9yIE1NSU8gbWl0aWdhdGlvbnMuKQogICAgICAq
CiAgICAgICogQWZ0ZXIgY2FsY3VsYXRpbmcgdGhlIGFwcHJvcHJpYXRlIGlk
bGUgc2V0dGluZywgc2ltcGxpZnkKICAgICAgKiBvcHRfbWRfY2xlYXJfaHZt
IHRvIG1lYW4ganVzdCAic2hvdWxkIHdlIFZFUlcgb24gdGhlIHdheSBpbnRv
IEhWTQogICAgICAqIGd1ZXN0cyIsIHNvIHNwZWNfY3RybF9pbml0X2RvbWFp
bigpIGNhbiBjYWxjdWxhdGUgc3VpdGFibGUgc2V0dGluZ3MuCiAgICAgICov
Ci0gICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFyX2h2
bSApCisgICAgaWYgKCBvcHRfbWRfY2xlYXJfcHYgfHwgb3B0X21kX2NsZWFy
X2h2bSB8fCBvcHRfZmJfY2xlYXJfbW1pbyApCiAgICAgICAgIHNldHVwX2Zv
cmNlX2NwdV9jYXAoWDg2X0ZFQVRVUkVfU0NfVkVSV19JRExFKTsKICAgICBv
cHRfbWRfY2xlYXJfaHZtICY9ICEoY2FwcyAmIEFSQ0hfQ0FQU19TS0lQX0wx
REZMKSAmJiAhb3B0X2wxZF9mbHVzaDsKIApAQCAtMTI4NCwxNCArMTMwNSwx
OSBAQCB2b2lkIF9faW5pdCBpbml0X3NwZWN1bGF0aW9uX21pdGlnYXRpb25z
KHZvaWQpCiAgICAgICogT24gc29tZSBTUkJEUy1hZmZlY3RlZCBoYXJkd2Fy
ZSwgaXQgbWF5IGJlIHNhZmUgdG8gcmVsYXggc3JiLWxvY2sgYnkKICAgICAg
KiBkZWZhdWx0LgogICAgICAqCi0gICAgICogT24gcGFydHMgd2hpY2ggZW51
bWVyYXRlIE1EU19OTyBhbmQgbm90IFRBQV9OTywgVFNYIGlzIHRoZSBvbmx5
IGtub3duCi0gICAgICogd2F5IHRvIGFjY2VzcyB0aGUgRmlsbCBCdWZmZXIu
ICBJZiBUU1ggaXNuJ3QgYXZhaWxhYmxlIChpbmMuIFNLVQotICAgICAqIHJl
YXNvbnMgb24gc29tZSBtb2RlbHMpLCBvciBUU1ggaXMgZXhwbGljaXRseSBk
aXNhYmxlZCwgdGhlbiB0aGVyZSBpcwotICAgICAqIG5vIG5lZWQgZm9yIHRo
ZSBleHRyYSBvdmVyaGVhZCB0byBwcm90ZWN0IFJEUkFORC9SRFNFRUQuCisg
ICAgICogQWxsIHBhcnRzIHdpdGggU1JCRFNfQ1RSTCBzdWZmZXIgU1NEUCwg
dGhlIG1lY2hhbmlzbSBieSB3aGljaCBzdGFsZSBSTkcKKyAgICAgKiBkYXRh
IGJlY29tZXMgYXZhaWxhYmxlIHRvIG90aGVyIGNvbnRleHRzLiAgVG8gcmVj
b3ZlciB0aGUgZGF0YSwgYW4KKyAgICAgKiBhdHRhY2tlciBuZWVkcyB0byB1
c2U6CisgICAgICogIC0gU0JEUyAoTURTIG9yIFRBQSB0byBzYW1wbGUgdGhl
IGNvcmVzIGZpbGwgYnVmZmVyKQorICAgICAqICAtIFNCRFIgKEFyY2hpdGVj
dHVyYWxseSByZXRyaWV2ZSBzdGFsZSB0cmFuc2FjdGlvbiBidWZmZXIgY29u
dGVudHMpCisgICAgICogIC0gRFJQVyAoQXJjaGl0ZWN0dXJhbGx5IGxhdGNo
IHN0YWxlIGZpbGwgYnVmZmVyIGRhdGEpCisgICAgICoKKyAgICAgKiBPbiBN
RFNfTk8gcGFydHMsIGFuZCB3aXRoIFRBQV9OTyBvciBUU1ggdW5hdmFpbGFi
bGUvZGlzYWJsZWQsIGFuZCB0aGVyZQorICAgICAqIGlzIG5vIHVucHJpdmls
ZWdlZCBNTUlPIGFjY2VzcywgdGhlIFJORyBkYXRhIGRvZXNuJ3QgbmVlZCBw
cm90ZWN0aW5nLgogICAgICAqLwogICAgIGlmICggY3B1X2hhc19zcmJkc19j
dHJsICkKICAgICB7Ci0gICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0x
ICYmCisgICAgICAgIGlmICggb3B0X3NyYl9sb2NrID09IC0xICYmICFvcHRf
dW5wcml2X21taW8gJiYKICAgICAgICAgICAgICAoY2FwcyAmIChBUkNIX0NB
UFNfTURTX05PfEFSQ0hfQ0FQU19UQUFfTk8pKSA9PSBBUkNIX0NBUFNfTURT
X05PICYmCiAgICAgICAgICAgICAgKCFjcHVfaGFzX2hsZSB8fCAoKGNhcHMg
JiBBUkNIX0NBUFNfVFNYX0NUUkwpICYmIHJ0bV9kaXNhYmxlZCkpICkKICAg
ICAgICAgICAgIG9wdF9zcmJfbG9jayA9IDA7Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 18:06:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 18:06:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350806.577213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1tsL-0002pv-C2; Thu, 16 Jun 2022 18:05:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350806.577213; Thu, 16 Jun 2022 18:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1tsL-0002po-93; Thu, 16 Jun 2022 18:05:49 +0000
Received: by outflank-mailman (input) for mailman id 350806;
 Thu, 16 Jun 2022 18:05:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BA/p=WX=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o1tsI-0002pi-Lf
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 18:05:47 +0000
Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com
 [2607:f8b0:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eee8ebcf-ed9e-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 20:05:42 +0200 (CEST)
Received: by mail-pl1-x631.google.com with SMTP id k7so1871549plg.7
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 11:05:41 -0700 (PDT)
Received: from jade ([192.77.111.2]) by smtp.gmail.com with ESMTPSA id
 ij19-20020a170902ab5300b00167736c8568sm1960734plb.70.2022.06.16.11.05.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jun 2022 11:05:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eee8ebcf-ed9e-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=NoCsBu4JpCMLs3QN0+caKdk9y3TiiWf0LS1GqJMVRO0=;
        b=G2R3qVfbOlbTTk56oU9OWNLjjt2KQYUYFJNZLA0EHFqa7QMFInR+EH37d+8j8KFOYt
         rrUHi8P6m6p7hVLyMQZ5/PgPSL7EYSQAaoGcHNrX0aKAgx+9EBhdY+vIPo/xd+BIj5SW
         gZrI81tWeOCHknUDi2V8xpGruL1cKUurGdlCdLbgSEUUaIzPVq7MXvUl+JgUkPdg0fMC
         y8wgbxE7g5xd5tbqaIZRuM7Vcu7XeeIu1QULbpeufyCZB0vJP3lswqRq9USVUHfrVlY1
         FDjBgkC6etxmocgrT1bT+WHR7sbn2vdlgr4GYpjF7jqQdC8os0d1q4nUu4tfbpxfFKY2
         iTVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=NoCsBu4JpCMLs3QN0+caKdk9y3TiiWf0LS1GqJMVRO0=;
        b=kZeEnpeR8szBC5x5lpQKyrvHIMQIFCrg3E15IzCzyxhVnbQJva8dNy18WbbTVz6bVx
         xjDKctp+BXhUHehVE+VFNRKKR17gVZqRyKEF0Y+QYe0vejaveiVHJn+pZgZRcknujubx
         2ULbWz1hv6HWIDjdNq2a1HPUIe9dLI7O99Dg3A9K/W2Pulv027d5xyqMdF1EsGlBQ/IN
         IQtnZuSjQh3VPR5R+vA4qmbFBLTuNGKw5RusM15/SEN0ADH0b1akolQ/b8RXYmFnFIw9
         gcy2Zpmca5yxk8TOcNIsMs5v7t3kiMWE41nGEMHHXWWAsQ8xbZcvrImK6UNMqJvDTMne
         Ox3Q==
X-Gm-Message-State: AJIora/f1cCQ9AiqeJythzrBR+Rkf+K6+tT0qWZUY7ZGNQTVRApOiZvm
	6k1V7istTxleqSxctjCUJV+jQA==
X-Google-Smtp-Source: AGRyM1vGS+Lqmz1QNriHIg1FDqgyuW6XiuAGlPs+M0/IRQlf+swXxdAaipCdv8/AtabdjbQDqtZoAg==
X-Received: by 2002:a17:902:d48d:b0:169:15f:45a4 with SMTP id c13-20020a170902d48d00b00169015f45a4mr4884620plg.162.1655402740072;
        Thu, 16 Jun 2022 11:05:40 -0700 (PDT)
Date: Thu, 16 Jun 2022 11:05:35 -0700
From: Jens Wiklander <jens.wiklander@linaro.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand.Marquis@arm.com
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
Message-ID: <20220616180535.GA69531@jade>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org>
 <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2206101758030.756493@ubuntu-linux-20-04-desktop>

On Fri, Jun 10, 2022 at 06:23:52PM -0700, Stefano Stabellini wrote:
> On Thu, 9 Jun 2022, Jens Wiklander wrote:
> > Adds a FF-A version 1.1 [1] mediator to communicate with a Secure
> > Partition in secure world.
> > 
> > The implementation is the bare minimum to be able to communicate with
> > OP-TEE running as an SPMC at S-EL1.
> > 
> > This is loosely based on the TEE mediator framework and the OP-TEE
> > mediator.
> > 
> > [1] https://developer.arm.com/documentation/den0077/latest
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> 
> Hi Jens, thanks for rebasing. This is not a full review because I ran
> out of time, but here are some initial comments.

Thanks a lot for the early feedback.

> 
> 
> > ---
> >  xen/arch/arm/Kconfig              |   11 +
> >  xen/arch/arm/Makefile             |    1 +
> >  xen/arch/arm/domain.c             |   10 +
> >  xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
> >  xen/arch/arm/include/asm/domain.h |    4 +
> >  xen/arch/arm/include/asm/ffa.h    |   71 ++
> >  xen/arch/arm/vsmc.c               |   17 +-
> >  7 files changed, 1735 insertions(+), 3 deletions(-)
> >  create mode 100644 xen/arch/arm/ffa.c
> >  create mode 100644 xen/arch/arm/include/asm/ffa.h
> > 
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index ecfa6822e4d3..5b75067e2745 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -106,6 +106,17 @@ config TEE
> >  
> >  source "arch/arm/tee/Kconfig"
> >  
> > +config FFA
> > +	bool "Enable FF-A mediator support" if EXPERT
> > +	default n
> > +	depends on ARM_64
> > +	help
> > +	  This option enables a minimal FF-A mediator. The mediator is
> > +	  generic as it follows the FF-A specification [1], but it only
> > +	  implements a small subset of the specification.
> > +
> > +	  [1] https://developer.arm.com/documentation/den0077/latest
> > +
> >  endmenu
> >  
> >  menu "ARM errata workaround via the alternative framework"
> > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> > index 1d862351d111..dbf5e593a069 100644
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -20,6 +20,7 @@ obj-y += domain.o
> >  obj-y += domain_build.init.o
> >  obj-y += domctl.o
> >  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> > +obj-$(CONFIG_FFA) += ffa.o
> >  obj-y += gic.o
> >  obj-y += gic-v2.o
> >  obj-$(CONFIG_GICV3) += gic-v3.o
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 8110c1df8638..a93e6a9c4aef 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -27,6 +27,7 @@
> >  #include <asm/cpufeature.h>
> >  #include <asm/current.h>
> >  #include <asm/event.h>
> > +#include <asm/ffa.h>
> >  #include <asm/gic.h>
> >  #include <asm/guest_atomics.h>
> >  #include <asm/irq.h>
> > @@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
> >      if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
> >          goto fail;
> >  
> > +    if ( (rc = ffa_domain_init(d)) != 0 )
> > +        goto fail;
> > +
> >      update_domain_wallclock_time(d);
> >  
> >      /*
> > @@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
> >  enum {
> >      PROG_pci = 1,
> >      PROG_tee,
> > +    PROG_ffa,
> >      PROG_xen,
> >      PROG_page,
> >      PROG_mapping,
> > @@ -1046,6 +1051,11 @@ int domain_relinquish_resources(struct domain *d)
> >          if ( ret )
> >              return ret;
> >  
> > +    PROGRESS(ffa):
> > +        ret = ffa_relinquish_resources(d);
> >          if ( ret )
> > +            return ret;
> > +
> >      PROGRESS(xen):
> >          ret = relinquish_memory(d, &d->xenpage_list);
> >          if ( ret )
> > diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
> > new file mode 100644
> > index 000000000000..9063b7f2b59e
> > --- /dev/null
> > +++ b/xen/arch/arm/ffa.c
> > @@ -0,0 +1,1624 @@
> > +/*
> > + * xen/arch/arm/ffa.c
> > + *
> > + * Arm Firmware Framework for ARMv8-A (FFA) mediator
> > + *
> > + * Copyright (C) 2021  Linaro Limited
> > + *
> > + * Permission is hereby granted, free of charge, to any person
> > + * obtaining a copy of this software and associated documentation
> > + * files (the "Software"), to deal in the Software without restriction,
> > + * including without limitation the rights to use, copy, modify, merge,
> > + * publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so,
> > + * subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be
> > + * included in all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> > + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> > + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> > + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> > + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#include <xen/domain_page.h>
> > +#include <xen/errno.h>
> > +#include <xen/init.h>
> > +#include <xen/lib.h>
> > +#include <xen/sched.h>
> > +#include <xen/types.h>
> > +#include <xen/sizes.h>
> > +#include <xen/bitops.h>
> > +
> > +#include <asm/smccc.h>
> > +#include <asm/event.h>
> > +#include <asm/ffa.h>
> > +#include <asm/regs.h>
> > +
> > +/* Error codes */
> > +#define FFA_RET_OK			0
> > +#define FFA_RET_NOT_SUPPORTED		-1
> > +#define FFA_RET_INVALID_PARAMETERS	-2
> > +#define FFA_RET_NO_MEMORY		-3
> > +#define FFA_RET_BUSY			-4
> > +#define FFA_RET_INTERRUPTED		-5
> > +#define FFA_RET_DENIED			-6
> > +#define FFA_RET_RETRY			-7
> > +#define FFA_RET_ABORTED			-8
> > +
> > +/* FFA_VERSION helpers */
> > +#define FFA_VERSION_MAJOR		_AC(1,U)
> > +#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
> > +#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
> > +#define FFA_VERSION_MINOR		_AC(1,U)
> > +#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
> > +#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
> > +#define MAKE_FFA_VERSION(major, minor)	\
> > +	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
> > +	 ((minor) & FFA_VERSION_MINOR_MASK))
> > +
> > +#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
> > +#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
> > +#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
> > +#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
> > +						 FFA_VERSION_MINOR)
> > +
> > +
> > +#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
> > +
> > +/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
> > +#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
> > +
> > +/* Memory access permissions: Read-write */
> > +#define FFA_MEM_ACC_RW			_AC(0x2,U)
> > +
> > +/* Clear memory before mapping in receiver */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
> > +/* Relayer may time slice this operation */
> > +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
> > +/* Clear memory after receiver relinquishes it */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
> > +
> > +/* Share memory transaction */
> > +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
> > +/* Relayer must choose the alignment boundary */
> > +#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
> > +
> > +#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
> > +
> > +/* Framework direct request/response */
> > +#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
> > +#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
> > +#define FFA_MSG_PSCI			_AC(0x0,U)
> > +#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
> > +#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
> > +#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
> > +#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
> > +
> > +/*
> > + * Flags used for the FFA_PARTITION_INFO_GET return message:
> > + * BIT(0): Supports receipt of direct requests
> > + * BIT(1): Can send direct requests
> > + * BIT(2): Can send and receive indirect messages
> > + * BIT(3): Supports receipt of notifications
> > + * BIT(4-5): Partition ID is a PE endpoint ID
> > + */
> > +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
> > +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
> > +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
> > +#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
> > +#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
> > +#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
> > +#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
> > +#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
> > +#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
> > +#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
> > +#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
> > +
> > +/* Function IDs */
> > +#define FFA_ERROR			_AC(0x84000060,U)
> > +#define FFA_SUCCESS_32			_AC(0x84000061,U)
> > +#define FFA_SUCCESS_64			_AC(0xC4000061,U)
> > +#define FFA_INTERRUPT			_AC(0x84000062,U)
> > +#define FFA_VERSION			_AC(0x84000063,U)
> > +#define FFA_FEATURES			_AC(0x84000064,U)
> > +#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
> > +#define FFA_RX_RELEASE			_AC(0x84000065,U)
> > +#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
> > +#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
> > +#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
> > +#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
> > +#define FFA_ID_GET			_AC(0x84000069,U)
> > +#define FFA_SPM_ID_GET			_AC(0x84000085,U)
> > +#define FFA_MSG_WAIT			_AC(0x8400006B,U)
> > +#define FFA_MSG_YIELD			_AC(0x8400006C,U)
> > +#define FFA_MSG_RUN			_AC(0x8400006D,U)
> > +#define FFA_MSG_SEND2			_AC(0x84000086,U)
> > +#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
> > +#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
> > +#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
> > +#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
> > +#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
> > +#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
> > +#define FFA_MEM_LEND_32			_AC(0x84000072,U)
> > +#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
> > +#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
> > +#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
> > +#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
> > +#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
> > +#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
> > +#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
> > +#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
> > +#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
> > +#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
> > +#define FFA_MSG_SEND			_AC(0x8400006E,U)
> > +#define FFA_MSG_POLL			_AC(0x8400006A,U)
> > +
> > +/* Partition information descriptor */
> > +struct ffa_partition_info_1_0 {
> > +    uint16_t id;
> > +    uint16_t execution_context;
> > +    uint32_t partition_properties;
> > +};
> > +
> > +struct ffa_partition_info_1_1 {
> > +    uint16_t id;
> > +    uint16_t execution_context;
> > +    uint32_t partition_properties;
> > +    uint8_t uuid[16];
> > +};
> > +
> > +/* Constituent memory region descriptor */
> > +struct ffa_address_range {
> > +    uint64_t address;
> > +    uint32_t page_count;
> > +    uint32_t reserved;
> > +};
> > +
> > +/* Composite memory region descriptor */
> > +struct ffa_mem_region {
> > +    uint32_t total_page_count;
> > +    uint32_t address_range_count;
> > +    uint64_t reserved;
> > +    struct ffa_address_range address_range_array[];
> > +};
> > +
> > +/* Memory access permissions descriptor */
> > +struct ffa_mem_access_perm {
> > +    uint16_t endpoint_id;
> > +    uint8_t perm;
> > +    uint8_t flags;
> > +};
> > +
> > +/* Endpoint memory access descriptor */
> > +struct ffa_mem_access {
> > +    struct ffa_mem_access_perm access_perm;
> > +    uint32_t region_offs;
> > +    uint64_t reserved;
> > +};
> > +
> > +/* Lend, donate or share memory transaction descriptor */
> > +struct ffa_mem_transaction_1_0 {
> > +    uint16_t sender_id;
> > +    uint8_t mem_reg_attr;
> > +    uint8_t reserved0;
> > +    uint32_t flags;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +    uint32_t reserved1;
> > +    uint32_t mem_access_count;
> > +    struct ffa_mem_access mem_access_array[];
> > +};
> > +
> > +struct ffa_mem_transaction_1_1 {
> > +    uint16_t sender_id;
> > +    uint16_t mem_reg_attr;
> > +    uint32_t flags;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +    uint32_t mem_access_size;
> > +    uint32_t mem_access_count;
> > +    uint32_t mem_access_offs;
> > +    uint8_t reserved[12];
> > +};
> > +
> > +/*
> > + * The parts needed from struct ffa_mem_transaction_1_0 or struct
> > + * ffa_mem_transaction_1_1, used to provide an abstraction of difference in
> > + * data structures between version 1.0 and 1.1. This is just an internal
> > + * interface and can be changed without changing any ABI.
> > + */
> > +struct ffa_mem_transaction_x {
> > +    uint16_t sender_id;
> > +    uint8_t mem_reg_attr;
> > +    uint8_t flags;
> > +    uint8_t mem_access_size;
> > +    uint8_t mem_access_count;
> > +    uint16_t mem_access_offs;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +};
> > +
> > +/* Endpoint RX/TX descriptor */
> > +struct ffa_endpoint_rxtx_descriptor_1_0 {
> > +    uint16_t sender_id;
> > +    uint16_t reserved;
> > +    uint32_t rx_range_count;
> > +    uint32_t tx_range_count;
> > +};
> > +
> > +struct ffa_endpoint_rxtx_descriptor_1_1 {
> > +    uint16_t sender_id;
> > +    uint16_t reserved;
> > +    uint32_t rx_region_offs;
> > +    uint32_t tx_region_offs;
> > +};
> > +
> > +struct ffa_ctx {
> > +    void *rx;
> > +    void *tx;
> > +    struct page_info *rx_pg;
> > +    struct page_info *tx_pg;
> > +    unsigned int page_count;
> > +    uint32_t guest_vers;
> > +    bool tx_is_mine;
> > +    bool interrupted;
> > +};
> > +
> > +struct ffa_shm_mem {
> > +    struct list_head list;
> > +    uint16_t sender_id;
> > +    uint16_t ep_id;     /* endpoint, the one lending */
> > +    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
> > +    unsigned int page_count;
> > +    struct page_info *pages[];
> > +};
> > +
> > +struct mem_frag_state {
> > +    struct list_head list;
> > +    struct ffa_shm_mem *shm;
> > +    uint32_t range_count;
> > +    unsigned int current_page_idx;
> > +    unsigned int frag_offset;
> > +    unsigned int range_offset;
> > +    uint8_t *buf;
> > +    unsigned int buf_size;
> > +    struct ffa_address_range range;
> > +};
> > +
> > +/*
> > + * Our rx/tx buffers shared with the SPMC
> > + */
> > +static uint32_t ffa_version;
> > +static uint16_t *subsr_vm_created;
> > +static unsigned int subsr_vm_created_count;
> > +static uint16_t *subsr_vm_destroyed;
> > +static unsigned int subsr_vm_destroyed_count;
> > +static void *ffa_rx;
> > +static void *ffa_tx;
> > +static unsigned int ffa_page_count;
> > +static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;
> 
> DEFINE_SPINLOCK. But actually, shouldn't the spinlocks be per-domain? It
> looks like at least some of the locks don't need to be global.

You're right. I'll add better granularity.
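Roughly what I have in mind, as an untested sketch: move the lock into
struct ffa_ctx so each domain only serializes its own RX/TX buffer state.
The spinlock below is a portable stand-in for Xen's primitives, and the
context is trimmed to the fields relevant here.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for Xen's spinlock_t / spin_lock() / spin_unlock() */
typedef struct { atomic_flag held; } spinlock_t;

static void spin_lock(spinlock_t *l)
{
    while ( atomic_flag_test_and_set(&l->held) )
        ; /* spin */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear(&l->held);
}

struct ffa_ctx {
    unsigned int page_count;
    bool tx_is_mine;
    spinlock_t lock;    /* protects this domain's RX/TX buffer state only */
};

/* handle_rx_release() reworked to take the per-domain lock */
static int handle_rx_release(struct ffa_ctx *ctx)
{
    int ret = -6;       /* FFA_RET_DENIED */

    spin_lock(&ctx->lock);
    if ( ctx->page_count && !ctx->tx_is_mine )
    {
        ctx->tx_is_mine = true;
        ret = 0;        /* FFA_RET_OK */
    }
    spin_unlock(&ctx->lock);

    return ret;
}
```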

> 
> 
> > +static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
> > +static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);
> 
> LIST_HEAD(ffa_mem_list);
> LIST_HEAD(ffa_frag_list);
> 
> > +static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;
> 
> DEFINE_SPINLOCK
> 
> 
> > +static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
> > +
> > +static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
> > +{
> > +    return (uint64_t)reg0 << 32 | reg1;
> > +}
> > +
> > +static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
> > +{
> > +    *reg0 = val >> 32;
> > +    *reg1 = val;
> > +}
> 
> I think these two should be static inline

I don't expect it to make any difference to the generated code, but I'll
change it if that's the preferred style.
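For reference, the two helpers as static inline, with the bodies unchanged
from the patch:

```c
#include <stdint.h>

/* Combine two 32-bit register values into a 64-bit value */
static inline uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
{
    return (uint64_t)reg0 << 32 | reg1;
}

/* Split a 64-bit value into two 32-bit register values */
static inline void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1,
                                    uint64_t val)
{
    *reg0 = val >> 32;  /* high word */
    *reg1 = val;        /* low word, truncated */
}
```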

> 
> 
> > +static bool ffa_get_version(uint32_t *vers)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
> > +    {
> > +        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
> > +        return false;
> > +    }
> > +
> > +    *vers = resp.a0;
> > +    return true;
> > +}
> > +
> > +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
> > +                             uint32_t page_count)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +#ifdef CONFIG_ARM_64
> > +        .a0 = FFA_RXTX_MAP_64,
> > +#endif
> 
> This ifdef is unnecessary given that FFA depends on ARM64 and SMCCCv1.2
> is only implemented on ARM64. It also applies to all the other ifdefs in
> this file. You can remove the code under #ifdef CONFIG_ARM_32.
> 
> 
> > +#ifdef CONFIG_ARM_32
> > +        .a0 = FFA_RXTX_MAP_32,
> > +#endif
> > +        .a1 = tx_addr, .a2 = rx_addr,
> > +        .a3 = page_count,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> > +                                       uint32_t w4, uint32_t w5,
> > +                                       uint32_t *count)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
> > +        .a5 = w5,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    *count = resp.a2;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_rx_release(void)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
> > +                             register_t addr, uint32_t pg_count,
> > +                             uint64_t *handle)
> > +{
> > +    struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
> > +        .a4 = pg_count,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    /*
> > +     * For arm64 we must use 64-bit calling convention if the buffer isn't
> > +     * passed in our tx buffer.
> > +     */
> > +    if (sizeof(addr) > 4 && addr)
> > +        arg.a0 = FFA_MEM_SHARE_64;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    switch ( resp.a0 ) {
> > +    case FFA_ERROR:
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    case FFA_SUCCESS_32:
> > +        *handle = reg_pair_to_64(resp.a3, resp.a2);
> > +        return FFA_RET_OK;
> > +    case FFA_MEM_FRAG_RX:
> > +        *handle = reg_pair_to_64(resp.a2, resp.a1);
> > +        return resp.a3;
> > +    default:
> > +            return FFA_RET_NOT_SUPPORTED;
> 
> coding style: alignment
> 
> 
> > +    }
> > +}
> > +
> > +static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
> > +                               uint16_t sender_id)
> > +{
> > +    struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
> > +        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    switch ( resp.a0 ) {
> > +    case FFA_ERROR:
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    case FFA_SUCCESS_32:
> > +        return FFA_RET_OK;
> > +    case FFA_MEM_FRAG_RX:
> > +        return resp.a3;
> > +    default:
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +}
> > +
> > +static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
> > +                                uint32_t flags)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> > +                                      uint8_t msg)
> > +{
> > +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> > +    int32_t res;
> > +
> > +    if ( msg != FFA_MSG_SEND_VM_CREATED && msg != FFA_MSG_SEND_VM_DESTROYED )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> > +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> > +    else
> > +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> > +
> > +    do {
> > +        const struct arm_smccc_1_2_regs arg = {
> > +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> > +            .a1 = sp_id,
> > +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> > +            .a5 = vm_id,
> > +        };
> > +        struct arm_smccc_1_2_regs resp;
> > +
> > +        arm_smccc_1_2_smc(&arg, &resp);
> > +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) {
> > +            /*
> > +             * This is an invalid response, likely due to some error in the
> > +             * implementation of the ABI.
> > +             */
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +        }
> > +        res = resp.a3;
> > +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
> > +
> > +    return res;
> > +}
> > +
> > +static u16 get_vm_id(struct domain *d)
> > +{
> > +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> > +    return d->domain_id + 1;
> > +}
> > +
> > +static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
> > +                     register_t v2, register_t v3, register_t v4, register_t v5,
> > +                     register_t v6, register_t v7)
> > +{
> > +        set_user_reg(regs, 0, v0);
> > +        set_user_reg(regs, 1, v1);
> > +        set_user_reg(regs, 2, v2);
> > +        set_user_reg(regs, 3, v3);
> > +        set_user_reg(regs, 4, v4);
> > +        set_user_reg(regs, 5, v5);
> > +        set_user_reg(regs, 6, v6);
> > +        set_user_reg(regs, 7, v7);
> > +}
> > +
> > +static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
> > +{
> > +    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
> > +}
> > +
> > +static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
> > +                             uint32_t w3)
> > +{
> > +    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
> > +}
> > +
> > +static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
> > +                             uint32_t handle_hi, uint32_t frag_offset,
> > +                             uint16_t sender_id)
> > +{
> > +    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
> > +             (uint32_t)sender_id << 16, 0, 0, 0);
> > +}
> > +
> > +static void handle_version(struct cpu_user_regs *regs)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t vers = get_user_reg(regs, 1);
> > +
> > +    if ( vers < FFA_VERSION_1_1 )
> > +        vers = FFA_VERSION_1_0;
> > +    else
> > +        vers = FFA_VERSION_1_1;
> > +
> > +    ctx->guest_vers = vers;
> > +    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
> > +}
> > +
> > +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> > +                                register_t rx_addr, uint32_t page_count)
> > +{
> > +    uint32_t ret = FFA_RET_NOT_SUPPORTED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    struct page_info *tx_pg;
> > +    struct page_info *rx_pg;
> > +    p2m_type_t t;
> > +    void *rx;
> > +    void *tx;
> > +
> > +    if ( !smccc_is_conv_64(fid) )
> > +    {
> > +        tx_addr &= UINT32_MAX;
> > +        rx_addr &= UINT32_MAX;
> > +    }
> > +
> > +    /* For now to keep things simple, only deal with a single page */
> > +    if ( page_count != 1 )
> > +        return FFA_RET_NOT_SUPPORTED;
> > +
> > +    /* Already mapped */
> > +    if ( ctx->rx )
> > +        return FFA_RET_DENIED;
> > +
> > +    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
> > +    if ( !tx_pg )
> > +        return FFA_RET_NOT_SUPPORTED;
> 
> It looks like this should be another error: if get_page_from_gfn fails
> it is probably because the provided page is invalid, so we should return
> FFA_RET_INVALID_PARAMETERS ?

Makes sense; I'll fix it here and below. All the failures below are
likewise more likely due to some problem with the provided page than to a
missing feature.
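As a sketch of the planned change, with lookup_page() as a hypothetical
stand-in for get_page_from_gfn():

```c
#include <stddef.h>

#define FFA_RET_OK                  0
#define FFA_RET_INVALID_PARAMETERS  -2

/* Hypothetical stand-in for get_page_from_gfn(): NULL on a bad gfn */
static void *lookup_page(int gfn_is_valid)
{
    return gfn_is_valid ? (void *)0x1000 : NULL;
}

static int map_one_buffer(int gfn_is_valid)
{
    void *pg = lookup_page(gfn_is_valid);

    /*
     * An invalid guest address means the caller's parameters are wrong,
     * so report FFA_RET_INVALID_PARAMETERS rather than "not supported".
     */
    if ( !pg )
        return FFA_RET_INVALID_PARAMETERS;

    return FFA_RET_OK;
}
```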

> 
> 
> > +    /* Only normal RAM for now */
> > +    if (t != p2m_ram_rw)
> > +        goto err_put_tx_pg;
> > +
> > +    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
> > +    if ( !rx_pg )
> > +        goto err_put_tx_pg;
> 
> same here?
> 
> 
> > +    /* Only normal RAM for now */
> > +    if ( t != p2m_ram_rw )
> > +        goto err_put_rx_pg;
> > +
> > +    tx = __map_domain_page_global(tx_pg);
> > +    if ( !tx )
> > +        goto err_put_rx_pg;
> > +
> > +    rx = __map_domain_page_global(rx_pg);
> > +    if ( !rx )
> > +        goto err_unmap_tx;
> > +
> > +    ctx->rx = rx;
> > +    ctx->tx = tx;
> > +    ctx->rx_pg = rx_pg;
> > +    ctx->tx_pg = tx_pg;
> > +    ctx->page_count = 1;
> > +    ctx->tx_is_mine = true;
> > +    return FFA_RET_OK;
> > +
> > +err_unmap_tx:
> > +    unmap_domain_page_global(tx);
> > +err_put_rx_pg:
> > +    put_page(rx_pg);
> > +err_put_tx_pg:
> > +    put_page(tx_pg);
> > +    return ret;
> > +}
> > +
> > +static uint32_t handle_rxtx_unmap(void)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t ret;
> > +
> > +    if ( !ctx->rx )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    ret = ffa_rxtx_unmap(get_vm_id(d));
> > +    if ( ret )
> > +        return ret;
> > +
> > +    unmap_domain_page_global(ctx->rx);
> > +    unmap_domain_page_global(ctx->tx);
> > +    put_page(ctx->rx_pg);
> > +    put_page(ctx->tx_pg);
> > +    ctx->rx = NULL;
> > +    ctx->tx = NULL;
> > +    ctx->rx_pg = NULL;
> > +    ctx->tx_pg = NULL;
> > +    ctx->page_count = 0;
> > +    ctx->tx_is_mine = false;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> > +                                          uint32_t w4, uint32_t w5,
> > +                                          uint32_t *count)
> > +{
> > +    uint32_t ret = FFA_RET_DENIED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +
> > +    if ( !ffa_page_count )
> > +        return FFA_RET_DENIED;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    if ( !ctx->page_count || !ctx->tx_is_mine )
> > +        goto out;
> > +    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
> > +    if ( ret )
> > +        goto out;
> > +    if ( ctx->guest_vers == FFA_VERSION_1_0 ) {
> > +        size_t n;
> > +        struct ffa_partition_info_1_1 *src = ffa_rx;
> > +        struct ffa_partition_info_1_0 *dst = ctx->rx;
> > +
> > +        for ( n = 0; n < *count; n++ ) {
> > +            dst[n].id = src[n].id;
> > +            dst[n].execution_context = src[n].execution_context;
> > +            dst[n].partition_properties = src[n].partition_properties;
> > +        }
> > +    } else {
> > +        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
> > +
> > +        memcpy(ctx->rx, ffa_rx, sz);
> > +    }
> > +    ffa_rx_release();
> > +    ctx->tx_is_mine = false;
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    return ret;
> > +}
> > +
> > +static uint32_t handle_rx_release(void)
> > +{
> > +    uint32_t ret = FFA_RET_DENIED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    if ( !ctx->page_count || ctx->tx_is_mine )
> > +        goto out;
> > +    ret = FFA_RET_OK;
> > +    ctx->tx_is_mine = true;
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    return ret;
> > +}
> > +
> > +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> > +    struct arm_smccc_1_2_regs resp = { };
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t src_dst;
> > +    uint64_t mask;
> > +
> > +    if ( smccc_is_conv_64(fid) )
> > +        mask = 0xffffffffffffffff;
> > +    else
> > +        mask = 0xffffffff;
> > +
> > +    src_dst = get_user_reg(regs, 1);
> > +    if ( (src_dst >> 16) != get_vm_id(d) )
> > +    {
> > +        resp.a0 = FFA_ERROR;
> > +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    arg.a1 = src_dst;
> > +    arg.a2 = get_user_reg(regs, 2) & mask;
> > +    arg.a3 = get_user_reg(regs, 3) & mask;
> > +    arg.a4 = get_user_reg(regs, 4) & mask;
> > +    arg.a5 = get_user_reg(regs, 5) & mask;
> > +    arg.a6 = get_user_reg(regs, 6) & mask;
> > +    arg.a7 = get_user_reg(regs, 7) & mask;
> > +
> > +    while ( true ) {
> > +        arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +        switch ( resp.a0 )
> > +        {
> > +        case FFA_INTERRUPT:
> > +            ctx->interrupted = true;
> > +            goto out;
> > +        case FFA_ERROR:
> > +        case FFA_SUCCESS_32:
> > +        case FFA_SUCCESS_64:
> > +        case FFA_MSG_SEND_DIRECT_RESP_32:
> > +        case FFA_MSG_SEND_DIRECT_RESP_64:
> > +            goto out;
> > +        default:
> > +            /* Bad fid, report back. */
> > +            memset(&arg, 0, sizeof(arg));
> > +            arg.a0 = FFA_ERROR;
> > +            arg.a1 = src_dst;
> > +            arg.a2 = FFA_RET_NOT_SUPPORTED;
> > +            continue;
> > +        }
> > +    }
> > +
> > +out:
> > +    set_user_reg(regs, 0, resp.a0);
> > +    set_user_reg(regs, 2, resp.a2 & mask);
> > +    set_user_reg(regs, 1, resp.a1 & mask);
> > +    set_user_reg(regs, 3, resp.a3 & mask);
> > +    set_user_reg(regs, 4, resp.a4 & mask);
> > +    set_user_reg(regs, 5, resp.a5 & mask);
> > +    set_user_reg(regs, 6, resp.a6 & mask);
> > +    set_user_reg(regs, 7, resp.a7 & mask);
> > +}
> > +
> > +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> > +                         struct ffa_address_range *range, uint32_t range_count,
> > +                         unsigned int start_page_idx,
> > +                         unsigned int *last_page_idx)
> > +{
> > +    unsigned int pg_idx = start_page_idx;
> > +    unsigned long gfn;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    p2m_type_t t;
> > +    uint64_t addr;
> > +
> > +    for ( n = 0; n < range_count; n++ ) {
> > +        for ( m = 0; m < range[n].page_count; m++ ) {
> > +            if ( pg_idx >= shm->page_count )
> > +                return FFA_RET_INVALID_PARAMETERS;
> > +
> > +            addr = read_atomic(&range[n].address);
> > +            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
> > +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
> > +            if ( !shm->pages[pg_idx] )
> > +                return FFA_RET_DENIED;
> > +            pg_idx++;
> > +            /* Only normal RAM for now */
> > +            if ( t != p2m_ram_rw )
> > +                return FFA_RET_DENIED;
> > +        }
> > +    }
> > +
> > +    *last_page_idx = pg_idx;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static void put_shm_pages(struct ffa_shm_mem *shm)
> > +{
> > +    unsigned int n;
> > +
> > +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> > +    {
> > +        if ( shm->pages[n] ) {
> > +            put_page(shm->pages[n]);
> > +            shm->pages[n] = NULL;
> > +        }
> > +    }
> > +}
> > +
> > +static void init_range(struct ffa_address_range *addr_range,
> > +                       paddr_t pa)
> > +{
> > +    memset(addr_range, 0, sizeof(*addr_range));
> > +    addr_range->address = pa;
> > +    addr_range->page_count = 1;
> > +}
> > +
> > +static int share_shm(struct ffa_shm_mem *shm)
> > +{
> > +    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
> > +    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
> > +    struct ffa_mem_access *mem_access_array;
> > +    struct ffa_mem_region *region_descr;
> > +    struct ffa_address_range *addr_range;
> > +    paddr_t pa;
> > +    paddr_t last_pa;
> > +    unsigned int n;
> > +    uint32_t frag_len;
> > +    uint32_t tot_len;
> > +    int ret;
> > +    unsigned int range_count;
> > +    unsigned int range_base;
> > +    bool first;
> > +
> > +    memset(descr, 0, sizeof(*descr));
> > +    descr->sender_id = shm->sender_id;
> > +    descr->global_handle = shm->handle;
> > +    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
> > +    descr->mem_access_count = 1;
> > +    descr->mem_access_size = sizeof(*mem_access_array);
> > +    descr->mem_access_offs = sizeof(*descr);
> > +    mem_access_array = (void *)(descr + 1);
> > +    region_descr = (void *)(mem_access_array + 1);
> > +
> > +    memset(mem_access_array, 0, sizeof(*mem_access_array));
> > +    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
> > +    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
> > +    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
> > +
> > +    memset(region_descr, 0, sizeof(*region_descr));
> > +    region_descr->total_page_count = shm->page_count;
> > +
> > +    region_descr->address_range_count = 1;
> > +    last_pa = page_to_maddr(shm->pages[0]);
> > +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> > +    {
> > +        pa = page_to_maddr(shm->pages[n]);
> > +        if ( last_pa + PAGE_SIZE == pa )
> > +        {
> > +            continue;
> > +        }
> > +        region_descr->address_range_count++;
> > +    }
> > +
> > +    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
> > +              sizeof(*region_descr) +
> > +              region_descr->address_range_count * sizeof(*addr_range);
> > +
> > +    addr_range = region_descr->address_range_array;
> > +    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
> > +    last_pa = page_to_maddr(shm->pages[0]);
> > +    init_range(addr_range, last_pa);
> > +    first = true;
> > +    range_count = 1;
> > +    range_base = 0;
> > +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> > +    {
> > +        pa = page_to_maddr(shm->pages[n]);
> > +        if ( last_pa + PAGE_SIZE == pa )
> > +        {
> > +            addr_range->page_count++;
> > +            continue;
> > +        }
> > +
> > +        if (frag_len == max_frag_len) {
> > +            if (first)
> > +            {
> > +                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> > +                first = false;
> > +            }
> > +            else
> > +            {
> > +                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> > +            }
> > +            if (ret <= 0)
> > +                return ret;
> > +            range_base = range_count;
> > +            range_count = 0;
> > +            frag_len = sizeof(*addr_range);
> > +            addr_range = ffa_tx;
> > +        } else {
> > +            frag_len += sizeof(*addr_range);
> > +            addr_range++;
> > +        }
> > +        init_range(addr_range, pa);
> > +        range_count++;
> > +    }
> > +
> > +    if (first)
> > +        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> > +    else
> > +        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> > +}
> > +
> > +static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
> > +                                struct ffa_mem_transaction_x *trans)
> > +{
> > +    uint16_t mem_reg_attr;
> > +    uint32_t flags;
> > +    uint32_t count;
> > +    uint32_t offs;
> > +    uint32_t size;
> > +
> > +    if (ffa_vers >= FFA_VERSION_1_1) {
> > +        struct ffa_mem_transaction_1_1 *descr;
> > +
> > +        if (blen < sizeof(*descr))
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +
> > +        descr = buf;
> > +        trans->sender_id = read_atomic(&descr->sender_id);
> > +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> > +        flags = read_atomic(&descr->flags);
> > +        trans->global_handle = read_atomic(&descr->global_handle);
> > +        trans->tag = read_atomic(&descr->tag);
> > +
> > +        count = read_atomic(&descr->mem_access_count);
> > +        size = read_atomic(&descr->mem_access_size);
> > +        offs = read_atomic(&descr->mem_access_offs);
> > +    } else {
> > +        struct ffa_mem_transaction_1_0 *descr;
> > +
> > +        if (blen < sizeof(*descr))
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +
> > +        descr = buf;
> > +        trans->sender_id = read_atomic(&descr->sender_id);
> > +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> > +        flags = read_atomic(&descr->flags);
> > +        trans->global_handle = read_atomic(&descr->global_handle);
> > +        trans->tag = read_atomic(&descr->tag);
> > +
> > +        count = read_atomic(&descr->mem_access_count);
> > +        size = sizeof(struct ffa_mem_access);
> > +        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
> > +    }
> > +
> > +    if (mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
> > +        count > UINT8_MAX || offs > UINT16_MAX)
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    /* Check that the endpoint memory access descriptor array fits */
> > +    if (size * count + offs > blen)
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    trans->mem_reg_attr = mem_reg_attr;
> > +    trans->flags = flags;
> > +    trans->mem_access_size = size;
> > +    trans->mem_access_count = count;
> > +    trans->mem_access_offs = offs;
> > +    return 0;
> > +}
> > +
> > +static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
> > +                              unsigned int frag_len)
> > +{
> > +    struct domain *d = current->domain;
> > +    unsigned int o = offs;
> > +    unsigned int l;
> > +    int ret;
> > +
> > +    if (frag_len < o)
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    /* Fill up the first struct ffa_address_range */
> > +    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
> > +    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
> > +    s->range_offset += l;
> > +    o += l;
> > +    if (s->range_offset != sizeof(s->range))
> > +        goto out;
> > +    s->range_offset = 0;
> > +
> > +    while (true) {
> > +        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
> > +                            &s->current_page_idx);
> > +        if (ret)
> > +            return ret;
> > +        if (s->range_count == 1)
> > +            return 0;
> > +        s->range_count--;
> > +        if (frag_len - o < sizeof(s->range))
> > +            break;
> > +        memcpy(&s->range, s->buf + o, sizeof(s->range));
> > +        o += sizeof(s->range);
> > +    }
> > +
> > +    /* Collect any remaining bytes for the next struct ffa_address_range */
> > +    s->range_offset = frag_len - o;
> > +    memcpy(&s->range, s->buf + o, frag_len - o);
> > +out:
> > +    s->frag_offset += frag_len;
> > +    return s->frag_offset;
> > +}
> > +
> > +static void handle_mem_share(struct cpu_user_regs *regs)
> > +{
> > +    uint32_t tot_len = get_user_reg(regs, 1);
> > +    uint32_t frag_len = get_user_reg(regs, 2);
> > +    uint64_t addr = get_user_reg(regs, 3);
> > +    uint32_t page_count = get_user_reg(regs, 4);
> > +    struct ffa_mem_transaction_x trans;
> > +    struct ffa_mem_access *mem_access;
> > +    struct ffa_mem_region *region_descr;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    struct ffa_shm_mem *shm = NULL;
> > +    unsigned int last_page_idx = 0;
> > +    uint32_t range_count;
> > +    uint32_t region_offs;
> > +    int ret = FFA_RET_DENIED;
> > +    uint32_t handle_hi = 0;
> > +    uint32_t handle_lo = 0;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +
> > +    /*
> > +     * We're only accepting memory transaction descriptors via the rx/tx
> > +     * buffer.
> > +     */
> > +    if ( addr )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    /* Check that the fragment length doesn't exceed the total length */
> > +    if ( frag_len > tot_len )
> > +    {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out_unlock;
> > +    }
> > +
> > +    if ( frag_len > ctx->page_count * PAGE_SIZE )
> > +        goto out_unlock;
> > +
> > +    if ( !ffa_page_count ) {
> > +        ret = FFA_RET_NO_MEMORY;
> > +        goto out_unlock;
> > +    }
> > +
> > +    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
> > +    if (ret)
> > +        goto out_unlock;
> > +
> > +    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out;
> > +    }
> > +
> > +    /* We only support sharing with one SP for now */
> > +    if ( trans.mem_access_count != 1 )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    if ( trans.sender_id != get_vm_id(d) )
> > +    {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out_unlock;
> > +    }
> > +
> > +    /* Check that it fits in the supplied data */
> > +    if ( trans.mem_access_offs + trans.mem_access_size > frag_len)
> > +        goto out_unlock;
> > +
> > +    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
> > +    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    region_offs = read_atomic(&mem_access->region_offs);
> > +    if (sizeof(*region_descr) + region_offs > frag_len) {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
> > +    range_count = read_atomic(&region_descr->address_range_count);
> > +    page_count = read_atomic(&region_descr->total_page_count);
> > +
> > +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> > +    if ( !shm )
> > +    {
> > +        ret = FFA_RET_NO_MEMORY;
> > +        goto out;
> > +    }
> > +    shm->sender_id = trans.sender_id;
> > +    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
> > +    shm->page_count = page_count;
> > +
> > +    if (frag_len != tot_len) {
> > +        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
> > +
> > +        if (!s) {
> > +            ret = FFA_RET_NO_MEMORY;
> > +            goto out;
> > +        }
> > +        s->shm = shm;
> > +        s->range_count = range_count;
> > +        s->buf = ctx->tx;
> > +        s->buf_size = ffa_page_count * PAGE_SIZE;
> > +        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
> > +                                 frag_len);
> > +        if (ret <= 0) {
> > +            xfree(s);
> > +            if (ret < 0)
> > +                goto out;
> > +        } else {
> > +            shm->handle = next_handle++;
> > +            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> > +            spin_lock(&ffa_mem_list_lock);
> > +            list_add_tail(&s->list, &ffa_frag_list);
> > +            spin_unlock(&ffa_mem_list_lock);
> > +        }
> > +        goto out_unlock;
> > +    }
> > +
> > +    /*
> > +     * Check that the Composite memory region descriptor fits.
> > +     */
> > +    if ( sizeof(*region_descr) + region_offs +
> > +         range_count * sizeof(struct ffa_address_range) > frag_len) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
> > +                        0, &last_page_idx);
> > +    if ( ret )
> > +        goto out;
> > +    if (last_page_idx != shm->page_count) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    /* Note that share_shm() uses our tx buffer */
> > +    ret = share_shm(shm);
> > +    if ( ret )
> > +        goto out;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_add_tail(&shm->list, &ffa_mem_list);
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> > +
> > +out:
> > +    if ( ret && shm )
> > +    {
> > +        put_shm_pages(shm);
> > +        xfree(shm);
> > +    }
> > +out_unlock:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    if ( ret > 0 )
> > +        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
> > +    else if ( ret == 0 )
> > +        set_regs_success(regs, handle_lo, handle_hi);
> > +    else
> > +        set_regs_error(regs, ret);
> > +}
> > +
> > +static struct mem_frag_state *find_frag_state(uint64_t handle)
> > +{
> > +    struct mem_frag_state *s;
> > +
> > +    list_for_each_entry(s, &ffa_frag_list, list)
> > +        if ( s->shm->handle == handle)
> > +            return s;
> > +
> > +    return NULL;
> > +}
> > +
> > +static void handle_mem_frag_tx(struct cpu_user_regs *regs)
> > +{
> > +    uint32_t frag_len = get_user_reg(regs, 3);
> > +    uint32_t handle_lo = get_user_reg(regs, 1);
> > +    uint32_t handle_hi = get_user_reg(regs, 2);
> > +    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
> > +    struct mem_frag_state *s;
> > +    uint16_t sender_id = 0;
> > +    int ret;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    s = find_frag_state(handle);
> > +    if (!s) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +    sender_id = s->shm->sender_id;
> > +
> > +    if (frag_len > s->buf_size) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    ret = add_mem_share_frag(s, 0, frag_len);
> > +    if (ret == 0) {
> > +        /* Note that share_shm() uses our tx buffer */
> > +        ret = share_shm(s->shm);
> > +        if (ret == 0) {
> > +            spin_lock(&ffa_mem_list_lock);
> > +            list_add_tail(&s->shm->list, &ffa_mem_list);
> > +            spin_unlock(&ffa_mem_list_lock);
> > +        } else {
> > +            put_shm_pages(s->shm);
> > +            xfree(s->shm);
> > +        }
> > +        spin_lock(&ffa_mem_list_lock);
> > +        list_del(&s->list);
> > +        spin_unlock(&ffa_mem_list_lock);
> > +        xfree(s);
> > +    } else if (ret < 0) {
> > +        put_shm_pages(s->shm);
> > +        xfree(s->shm);
> > +        spin_lock(&ffa_mem_list_lock);
> > +        list_del(&s->list);
> > +        spin_unlock(&ffa_mem_list_lock);
> > +        xfree(s);
> > +    }
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    if ( ret > 0 )
> > +        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
> > +    else if ( ret == 0 )
> > +        set_regs_success(regs, handle_lo, handle_hi);
> > +    else
> > +        set_regs_error(regs, ret);
> > +}
> > +
> > +static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
> > +{
> > +    struct ffa_shm_mem *shm;
> > +    uint32_t handle_hi;
> > +    uint32_t handle_lo;
> > +    int ret;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_for_each_entry(shm, &ffa_mem_list, list) {
> > +        if ( shm->handle == handle )
> > +            goto found_it;
> > +    }
> > +    shm = NULL;
> > +found_it:
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    if ( !shm )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    reg_pair_from_64(&handle_hi, &handle_lo, handle);
> > +    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
> > +    if ( ret )
> > +        return ret;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_del(&shm->list);
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    put_shm_pages(shm);
> > +    xfree(shm);
> > +
> > +    return ret;
> > +}
> > +
> > +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t count;
> > +    uint32_t e;
> > +
> > +    if ( !ctx )
> > +        return false;
> > +
> > +    switch ( fid )
> > +    {
> > +    case FFA_VERSION:
> > +        handle_version(regs);
> > +        return true;
> > +    case FFA_ID_GET:
> > +        set_regs_success(regs, get_vm_id(d), 0);
> > +        return true;
> > +    case FFA_RXTX_MAP_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_RXTX_MAP_64:
> > +#endif
> > +        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
> > +                            get_user_reg(regs, 3));
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_RXTX_UNMAP:
> > +        e = handle_rxtx_unmap();
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_PARTITION_INFO_GET:
> > +        e = handle_partition_info_get(get_user_reg(regs, 1),
> > +                                      get_user_reg(regs, 2),
> > +                                      get_user_reg(regs, 3),
> > +                                      get_user_reg(regs, 4),
> > +                                      get_user_reg(regs, 5), &count);
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, count, 0);
> > +        return true;
> > +    case FFA_RX_RELEASE:
> > +        e = handle_rx_release();
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_MSG_SEND_DIRECT_REQ_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_MSG_SEND_DIRECT_REQ_64:
> > +#endif
> > +        handle_msg_send_direct_req(regs, fid);
> > +        return true;
> > +    case FFA_MEM_SHARE_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_MEM_SHARE_64:
> > +#endif
> > +        handle_mem_share(regs);
> > +        return true;
> > +    case FFA_MEM_RECLAIM:
> > +        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
> > +                                              get_user_reg(regs, 1)),
> > +                               get_user_reg(regs, 3));
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_MEM_FRAG_TX:
> > +        handle_mem_frag_tx(regs);
> > +        return true;
> > +
> > +    default:
> > +        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
> > +        return false;
> > +    }
> > +}
> > +
> > +int ffa_domain_init(struct domain *d)
> > +{
> > +    struct ffa_ctx *ctx;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    unsigned int c_pos;
> > +    int32_t res;
> > +
> > +    if ( !ffa_version )
> > +        return 0;
> > +
> > +    ctx = xzalloc(struct ffa_ctx);
> > +    if ( !ctx )
> > +        return -ENOMEM;
> > +
> > +    for ( n = 0; n < subsr_vm_created_count; n++ ) {
> > +        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
> > +                                     FFA_MSG_SEND_VM_CREATED);
> > +        if ( res ) {
> > +            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
> > +                   get_vm_id(d), subsr_vm_created[n], res);
> > +            c_pos = n;
> > +            goto err;
> > +        }
> > +    }
> > +
> > +    d->arch.ffa = ctx;
> > +
> > +    return 0;
> > +
> > +err:
> > +    /* Undo any already sent VM created messages */
> > +    for ( n = 0; n < c_pos; n++ )
> > +        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
> > +            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
> > +                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
> > +                                       FFA_MSG_SEND_VM_DESTROYED);
> > +    return -ENOMEM;
> > +}
> > +
> > +int ffa_relinquish_resources(struct domain *d)
> > +{
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    unsigned int n;
> > +    int32_t res;
> > +
> > +    if ( !ctx )
> > +        return 0;
> > +
> > +    for ( n = 0; n < subsr_vm_destroyed_count; n++ ) {
> > +        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
> > +                                     FFA_MSG_SEND_VM_DESTROYED);
> > +
> > +        if ( res )
> > +            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
> > +                   get_vm_id(d), subsr_vm_destroyed[n], res);
> > +    }
> > +
> > +    XFREE(d->arch.ffa);
> > +
> > +    return 0;
> > +}
> > +
> > +static bool __init init_subscribers(void)
> > +{
> > +    struct ffa_partition_info_1_1 *fpi;
> > +    bool ret = false;
> > +    uint32_t count;
> > +    uint32_t e;
> > +    uint32_t n;
> > +    uint32_t c_pos;
> > +    uint32_t d_pos;
> > +
> > +    if ( ffa_version < FFA_VERSION_1_1 )
> > +        return true;
> > +
> > +    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
> > +    ffa_rx_release();
> > +    if ( e ) {
> > +        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
> > +        goto out;
> > +    }
> > +
> > +    fpi = ffa_rx;
> > +    subsr_vm_created_count = 0;
> > +    subsr_vm_destroyed_count = 0;
> > +    for ( n = 0; n < count; n++ ) {
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> > +            subsr_vm_created_count++;
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> > +            subsr_vm_destroyed_count++;
> > +    }
> > +
> > +    if ( subsr_vm_created_count )
> > +        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
> > +    if ( subsr_vm_destroyed_count )
> > +        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
> > +    if ( (subsr_vm_created_count && !subsr_vm_created) ||
> > +        (subsr_vm_destroyed_count && !subsr_vm_destroyed) ) {
> > +        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
> > +        subsr_vm_created_count = 0;
> > +        subsr_vm_destroyed_count = 0;
> > +        XFREE(subsr_vm_created);
> > +        XFREE(subsr_vm_destroyed);
> > +        goto out;
> > +    }
> > +
> > +    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) {
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> > +            subsr_vm_created[c_pos++] = fpi[n].id;
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> > +            subsr_vm_destroyed[d_pos++] = fpi[n].id;
> > +    }
> > +
> > +    ret = true;
> > +out:
> > +    ffa_rx_release();
> > +    return ret;
> > +}
> > +
> > +static int __init ffa_init(void)
> > +{
> > +    uint32_t vers;
> > +    uint32_t e;
> > +    unsigned int major_vers;
> > +    unsigned int minor_vers;
> > +
> > +    /*
> > +     * psci_init_smccc() updates this value with what's reported by EL-3
> > +     * or secure world.
> > +     */
> > +    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
> > +    {
> > +        printk(XENLOG_ERR
> > +               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
> > +               smccc_ver, ARM_SMCCC_VERSION_1_2);
> > +        return 0;
> > +    }
> > +
> > +    if ( !ffa_get_version(&vers) )
> > +        return 0;
> > +
> > +    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
> > +    {
> > +        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
> > +        return 0;
> > +    }
> > +
> > +    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
> > +    minor_vers = vers & FFA_VERSION_MINOR_MASK;
> > +    printk(XENLOG_ERR "ARM FF-A Mediator version %u.%u\n",
> > +           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
> > +    printk(XENLOG_ERR "ARM FF-A Firmware version %u.%u\n",
> > +           major_vers, minor_vers);
> 
> XENLOG_INFO

I'll fix. Thanks for looking at this.

Cheers,
Jens

> 
> 
> > +    ffa_rx = alloc_xenheap_pages(0, 0);
> > +    if ( !ffa_rx )
> > +        return 0;
> > +
> > +    ffa_tx = alloc_xenheap_pages(0, 0);
> > +    if ( !ffa_tx )
> > +        goto err_free_ffa_rx;
> > +
> > +    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
> > +    if ( e )
> > +    {
> > +        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
> > +        goto err_free_ffa_tx;
> > +    }
> > +    ffa_page_count = 1;
> > +    ffa_version = vers;
> > +
> > +    if ( !init_subscribers() )
> > +        goto err_free_ffa_tx;
> > +
> > +    return 0;
> > +
> > +err_free_ffa_tx:
> > +    free_xenheap_pages(ffa_tx, 0);
> > +    ffa_tx = NULL;
> > +err_free_ffa_rx:
> > +    free_xenheap_pages(ffa_rx, 0);
> > +    ffa_rx = NULL;
> > +    ffa_page_count = 0;
> > +    ffa_version = 0;
> > +    XFREE(subsr_vm_created);
> > +    subsr_vm_created_count = 0;
> > +    XFREE(subsr_vm_destroyed);
> > +    subsr_vm_destroyed_count = 0;
> > +    return 0;
> > +}
> > +
> > +__initcall(ffa_init);
> > diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> > index ed63c2b6f91f..b3dee269bced 100644
> > --- a/xen/arch/arm/include/asm/domain.h
> > +++ b/xen/arch/arm/include/asm/domain.h
> > @@ -103,6 +103,10 @@ struct arch_domain
> >      void *tee;
> >  #endif
> >  
> > +#ifdef CONFIG_FFA
> > +    void *ffa;
> > +#endif
> > +
> >      bool directmap;
> >  }  __cacheline_aligned;
> >  
> > diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
> > new file mode 100644
> > index 000000000000..1c6ce6421294
> > --- /dev/null
> > +++ b/xen/arch/arm/include/asm/ffa.h
> > @@ -0,0 +1,71 @@
> > +/*
> > + * xen/arch/arm/include/asm/ffa.h
> > + *
> > + * Arm Firmware Framework for ARMv8-A (FF-A) mediator
> > + *
> > + * Copyright (C) 2021  Linaro Limited
> > + *
> > + * Permission is hereby granted, free of charge, to any person
> > + * obtaining a copy of this software and associated documentation
> > + * files (the "Software"), to deal in the Software without restriction,
> > + * including without limitation the rights to use, copy, modify, merge,
> > + * publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so,
> > + * subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be
> > + * included in all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> > + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> > + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> > + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> > + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#ifndef __ASM_ARM_FFA_H__
> > +#define __ASM_ARM_FFA_H__
> > +
> > +#include <xen/const.h>
> > +
> > +#include <asm/smccc.h>
> > +#include <asm/types.h>
> > +
> > +#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
> > +#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
> > +
> > +static inline bool is_ffa_fid(uint32_t fid)
> > +{
> > +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> > +
> > +    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
> > +}
> > +
> > +#ifdef CONFIG_FFA
> > +#define FFA_NR_FUNCS    11
> > +
> > +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
> > +int ffa_domain_init(struct domain *d);
> > +int ffa_relinquish_resources(struct domain *d);
> > +#else
> > +#define FFA_NR_FUNCS    0
> > +
> > +static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    return false;
> > +}
> > +
> > +static inline int ffa_domain_init(struct domain *d)
> > +{
> > +    return 0;
> > +}
> > +
> > +static inline int ffa_relinquish_resources(struct domain *d)
> > +{
> > +    return 0;
> > +}
> > +#endif
> > +
> > +#endif /*__ASM_ARM_FFA_H__*/
> > diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> > index 6f90c08a6304..34586025eff8 100644
> > --- a/xen/arch/arm/vsmc.c
> > +++ b/xen/arch/arm/vsmc.c
> > @@ -20,6 +20,7 @@
> >  #include <public/arch-arm/smccc.h>
> >  #include <asm/cpuerrata.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/ffa.h>
> >  #include <asm/monitor.h>
> >  #include <asm/regs.h>
> >  #include <asm/smccc.h>
> > @@ -32,7 +33,7 @@
> >  #define XEN_SMCCC_FUNCTION_COUNT 3
> >  
> >  /* Number of functions currently supported by Standard Service Service Calls. */
> > -#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
> > +#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
> >  
> >  static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
> >  {
> > @@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
> >      return do_vpsci_0_1_call(regs, fid);
> >  }
> >  
> > +static bool is_psci_fid(uint32_t fid)
> > +{
> > +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> > +
> > +    /* fn is unsigned, so only the upper bound needs checking */
> > +    return fn <= 0x1fU;
> > +}
> > +
> >  /* PSCI 0.2 interface and other Standard Secure Calls */
> >  static bool handle_sssc(struct cpu_user_regs *regs)
> >  {
> >      uint32_t fid = (uint32_t)get_user_reg(regs, 0);
> >  
> > -    if ( do_vpsci_0_2_call(regs, fid) )
> > -        return true;
> > +    if ( is_psci_fid(fid) )
> > +        return do_vpsci_0_2_call(regs, fid);
> > +
> > +    if ( is_ffa_fid(fid) )
> > +        return ffa_handle_call(regs, fid);
> >  
> >      switch ( fid )
> >      {
> > -- 
> > 2.31.1
> > 


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 18:20:31 2022
Date: Thu, 16 Jun 2022 11:20:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Juergen Gross <jgross@suse.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    viresh.kumar@linaro.org, hch@infradead.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <20220616053715.3166-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206161106020.10483@ubuntu-linux-20-04-desktop>
References: <20220616053715.3166-1-jgross@suse.com>

On Thu, 16 Jun 2022, Juergen Gross wrote:
> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
> 
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
> 
> Per default allow virtio devices without grant support for non-PV
> guests.
> 
> Add a new config item to always force use of grants for virtio.
> 
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - remove command line parameter (Christoph Hellwig)
> ---
>  drivers/xen/Kconfig | 9 +++++++++
>  include/xen/xen.h   | 2 +-
>  2 files changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index bfd5f4f706bc..a65bd92121a5 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>  
>  	  If in doubt, say n.
>  
> +config XEN_VIRTIO_FORCE_GRANT
> +	bool "Require Xen virtio support to use grants"
> +	depends on XEN_VIRTIO
> +	help
> +	  Require virtio for Xen guests to use grant mappings.
> +	  This will avoid the need to give the backend the right to map all
> +	  of the guest memory. This will need support on the backend side
> +	  (e.g. qemu or kernel, depending on the virtio device types used).
> +
>  endmenu
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 0780a81e140d..4d4188f20337 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>  
>  static inline void xen_set_restricted_virtio_memory_access(void)
>  {
> -	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>  		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>  }

Hi Juergen, you might have seen my email:
https://marc.info/?l=linux-kernel&m=165533636607801&w=2

Linux is always running as HVM on ARM, so if you want to introduce
XEN_VIRTIO_FORCE_GRANT, then XEN_VIRTIO_FORCE_GRANT should be
automatically selected on ARM. I don't think there should be a visible
menu option for XEN_VIRTIO_FORCE_GRANT on ARM.
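
One way to express "always on and invisible on Arm" in drivers/xen/Kconfig could look like the sketch below. Note this is only an illustration of the idea: the conditional prompt and the `default y if ARM || ARM64` line are my suggestion, not part of the posted patch.

```kconfig
config XEN_VIRTIO_FORCE_GRANT
	# Hypothetical sketch: only x86 gets a visible menu option; on Arm
	# the option is forced on, since Linux always runs as HVM there.
	bool "Require Xen virtio support to use grants" if X86
	default y if ARM || ARM64
	depends on XEN_VIRTIO
	help
	  Require virtio for Xen guests to use grant mappings.
```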

I realize we have a conflict between HVM guests on ARM and x86:

- on ARM, PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS should be enabled when
  "xen,grant-dma" is present
- on x86, due to the lack of "xen,grant-dma", it should be off by
  default and based on a kconfig or command line option

To be honest, as Christoph suggested, I think even on x86 there should
be a firmware table to trigger setting
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS. We have 2 Xen-specific ACPI
tables, and we could have 1 more to define this. Or an HVM param or
a feature flag?

I think that would be the cleanest way to do this, but it is a lot
more work compared to adding a couple of lines of code to Linux, so this
is why I suggested:
https://marc.info/?l=linux-kernel&m=165533636607801&w=2

ARM uses "xen,grant-dma" to detect whether
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS needs setting.

One day x86 could check an ACPI property or HVM param or feature flag.
None of them are available now, so for now use a command line option as
a workaround. It is totally fine to use an x86-only kconfig option
instead of a command line option.

Would you be OK with that?


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 18:41:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 18:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350824.577236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1uQa-0008S1-Hp; Thu, 16 Jun 2022 18:41:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350824.577236; Thu, 16 Jun 2022 18:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1uQa-0008Rh-Ch; Thu, 16 Jun 2022 18:41:12 +0000
Received: by outflank-mailman (input) for mailman id 350824;
 Thu, 16 Jun 2022 18:41:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dRx+=WX=epam.com=prvs=81664c6ed3=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1o1uQY-0008Qj-MQ
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 18:41:10 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1492293-eda3-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 20:41:06 +0200 (CEST)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25GFhGeI026262;
 Thu, 16 Jun 2022 18:40:54 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2057.outbound.protection.outlook.com [104.47.1.57])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3gr5fvh0yt-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 16 Jun 2022 18:40:53 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by VI1PR03MB3088.eurprd03.prod.outlook.com (2603:10a6:802:2f::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Thu, 16 Jun
 2022 18:40:48 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed%6]) with mapi id 15.20.5353.014; Thu, 16 Jun 2022
 18:40:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1492293-eda3-11ec-ab14-113154c10af9
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
CC: "dmitry.semenets@gmail.com" <dmitry.semenets@gmail.com>,
        Dmytro Semenets
	<Dmytro_Semenets@epam.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
Thread-Topic: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
Thread-Index: AQHYgYkEncCZtJfZsEuchGWkwMivea1SI+UAgAA3C4A=
Date: Thu, 16 Jun 2022 18:40:48 +0000
Message-ID: <87wndgh2og.fsf@epam.com>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org>
In-Reply-To: <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.6.5; emacs 28.1
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Hi Julien,

Julien Grall <julien@xen.org> writes:

> Hi,
>
> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>> According to PSCI specification ARM TF can return DENIED on CPU OFF.
>
> I am confused. The spec is talking about the Trusted OS and not the
> firmware. The documentation is also not specific to ARM Trusted
> Firmware. So did you mean "Trusted OS"?

It should be "firmware", I believe.

>
> Also, did you reproduce on HW? If so, on which CPU this will fail?
>

Yes, we reproduced this on HW. In our case it failed on CPU0. To be
fair, it had nothing to do with a Trusted OS. It is just a platform
limitation - it can't turn off CPU0. But from Xen's perspective there
is no difference - the CPU_OFF call returns DENIED.

>> This patch brings the hypervisor into compliance with the PSCI
>> specification.
>
> Now it means the CPU will never be turned off using PSCI. Instead, we
> would end up spinning in Xen. This would be a problem because we would
> save less power.

Agreed.

>
>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
>> section 5.5.2
>
> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
> the trusted OS can only run on one core.
>
> Some trusted OSes are migratable. So I think we should first attempt
> to migrate the trusted OS. Then if that doesn't work, we should
> prevent the CPU from going offline.
>
> That said, upstream doesn't support cpu offlining (I don't know about
> your use case). In case of shutdown, it is not necessary to offline
> the CPU, so we could avoid calling CPU_OFF on all CPUs but
> one. Something like:
>

This is even better approach yes. But you mentioned CPU_OFF. Did you
mean SYSTEM_RESET?

> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
> index 3dc6819d56de..d956812ef8f4 100644
> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -8,7 +8,9 @@
>
>  static void noreturn halt_this_cpu(void *arg)
>  {
> -    stop_cpu();
> +    ASSERT(!local_irq_is_enabled());
> +    while ( 1 )
> +        wfi();
>  }
>
>  void machine_halt(void)
> @@ -21,10 +23,6 @@ void machine_halt(void)
>      smp_call_function(halt_this_cpu, NULL, 0);
>      local_irq_disable();
>
> -    /* Wait at most another 10ms for all other CPUs to go offline. */
> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
> -        mdelay(1);
> -
>      /* This is mainly for PSCI-0.2, which does not return if success. */
>      call_psci_system_off();
>
>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>
> I don't recall seeing a patch for this on the ML recently. So is this
> an internal review?

Yeah, sorry about that. Dmytro is a new member of our team and he is not
yet familiar with the difference between internal reviews and reviews on
the ML.

If you are interested, we had internal review at [1]:

[1] https://github.com/xen-troops/xen/pull/184

>
>> ---
>>   xen/arch/arm/psci.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>> diff --git a/xen/arch/arm/psci.c b/xen/arch/arm/psci.c
>> index 0c90c2305c..55787fde58 100644
>> --- a/xen/arch/arm/psci.c
>> +++ b/xen/arch/arm/psci.c
>> @@ -63,8 +63,9 @@ void call_psci_cpu_off(void)
>>           /* If successfull the PSCI cpu_off call doesn't return */
>>           arm_smccc_smc(PSCI_0_2_FN32_CPU_OFF, &res);
>> -        panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
>> -              PSCI_RET(res));
>> +        if ( PSCI_RET(res) != PSCI_DENIED )
>> +            panic("PSCI cpu off failed for CPU%d err=%d\n", smp_processor_id(),
>> +                PSCI_RET(res));
>>       }
>>   }
>> 
>
> Cheers,


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 18:49:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 18:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350832.577247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1uYV-0000uh-AY; Thu, 16 Jun 2022 18:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350832.577247; Thu, 16 Jun 2022 18:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1uYV-0000ua-77; Thu, 16 Jun 2022 18:49:23 +0000
Received: by outflank-mailman (input) for mailman id 350832;
 Thu, 16 Jun 2022 18:49:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1uYU-0000uQ-5R; Thu, 16 Jun 2022 18:49:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1uYU-0000hO-2t; Thu, 16 Jun 2022 18:49:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1uYT-0004Fb-Oq; Thu, 16 Jun 2022 18:49:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1uYT-0004ii-OP; Thu, 16 Jun 2022 18:49:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171213: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d6d0cb659fda64430d4649f8680c5cead32da8fd
X-Osstest-Versions-That:
    xen=8c24b70fedcb52633b2370f834d8a2be3f7fa38e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 18:49:21 +0000

flight 171213 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171213/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d6d0cb659fda64430d4649f8680c5cead32da8fd
baseline version:
 xen                  8c24b70fedcb52633b2370f834d8a2be3f7fa38e

Last test of basis   171201  2022-06-16 12:00:24 Z    0 days
Testing same since   171213  2022-06-16 16:00:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c24b70fed..d6d0cb659f  d6d0cb659fda64430d4649f8680c5cead32da8fd -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 19:09:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 19:09:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350842.577257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1usB-0003k1-15; Thu, 16 Jun 2022 19:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350842.577257; Thu, 16 Jun 2022 19:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1usA-0003ju-Ud; Thu, 16 Jun 2022 19:09:42 +0000
Received: by outflank-mailman (input) for mailman id 350842;
 Thu, 16 Jun 2022 19:09:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o1us9-0003jo-Dg
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 19:09:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1us5-00013a-QH; Thu, 16 Jun 2022 19:09:37 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[10.95.152.232]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o1us5-00085V-Fk; Thu, 16 Jun 2022 19:09:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org>
Date: Thu, 16 Jun 2022 20:09:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "dmitry.semenets@gmail.com" <dmitry.semenets@gmail.com>,
 Dmytro Semenets <Dmytro_Semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <87wndgh2og.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/06/2022 19:40, Volodymyr Babchuk wrote:
> 
> Hi Julien,

Hi Volodymyr,

> 
> Julien Grall <julien@xen.org> writes:
> 
>> Hi,
>>
>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>> According to PSCI specification ARM TF can return DENIED on CPU OFF.
>>
>> I am confused. The spec is talking about the Trusted OS and not the
>> firmware. The documentation is also not specific to ARM Trusted
>> Firmware. So did you mean "Trusted OS"?
> 
> It should be "firmware", I believe.

Hmmm... I couldn't find a reference in the spec suggesting that CPU_OFF 
could return DENIED because of the firmware. Do you have a pointer to 
the spec?

> 
>>
>> Also, did you reproduce on HW? If so, on which CPU this will fail?
>>
> 
> Yes, we reproduced this on HW. In our case it failed on CPU0. To be
> fair, it had nothing to do with a Trusted OS. It is just a platform
> limitation - it can't turn off CPU0. But from Xen's perspective there
> is no difference - the CPU_OFF call returns DENIED.

Thanks for the clarification. I think I have seen that in the wild too,
but it never got to the top of my queue. It is good that we are fixing it.

> 
>>> This patch brings the hypervisor into compliance with the PSCI
>>> specification.
>>
>> Now it means the CPU will never be turned off using PSCI. Instead, we
>> would end up spinning in Xen. This would be a problem because we would
>> save less power.
> 
> Agreed.
> 
>>
>>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
>>> section 5.5.2
>>
>> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
>> the trusted OS can only run on one core.
>>
>> Some trusted OSes are migratable. So I think we should first attempt
>> to migrate the trusted OS. Then if that doesn't work, we should
>> prevent the CPU from going offline.
>>
>> That said, upstream doesn't support cpu offlining (I don't know about
>> your use case). In case of shutdown, it is not necessary to offline
>> the CPU, so we could avoid calling CPU_OFF on all CPUs but
>> one. Something like:
>>
> 
> This is even better approach yes. But you mentioned CPU_OFF. Did you
> mean SYSTEM_RESET?

By CPU_OFF I was referring to the fact that Xen will issue the call on
all CPUs but one. The remaining CPU will issue the command to
reset/shut down the system.

>>   void machine_halt(void)
>> @@ -21,10 +23,6 @@ void machine_halt(void)
>>       smp_call_function(halt_this_cpu, NULL, 0);
>>       local_irq_disable();
>>
>> -    /* Wait at most another 10ms for all other CPUs to go offline. */
>> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
>> -        mdelay(1);
>> -
>>       /* This is mainly for PSCI-0.2, which does not return if success. */
>>       call_psci_system_off();
>>
>>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
>>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>
>> I don't recall seeing a patch for this on the ML recently. So is this
>> an internal review?
> 
> Yeah, sorry about that. Dmytro is a new member of our team and he is not
> yet familiar with the difference between internal reviews and reviews on
> the ML.

No worries. I usually classify as internal review anything that was done 
privately. This looks to be a public review, although not on xen-devel.

I understand that some of the patches are still at the PoC stage, and 
doing the review on your GitHub is a good idea. But for those that are 
meant for upstream (e.g. bug fixes, small patches), I would suggest 
doing the review on xen-devel directly.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 19:32:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 19:32:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350851.577269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1vE4-0007Nw-0j; Thu, 16 Jun 2022 19:32:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350851.577269; Thu, 16 Jun 2022 19:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1vE3-0007Np-U4; Thu, 16 Jun 2022 19:32:19 +0000
Received: by outflank-mailman (input) for mailman id 350851;
 Thu, 16 Jun 2022 19:32:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+X/=WX=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o1vE3-0007Nj-3Y
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 19:32:19 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08092ade-edab-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 21:32:17 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id hj18so3992050ejb.0
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 12:32:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08092ade-edab-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Pg2as9lsR4oc1g+5FUfqlZ+95jwroKMZ2q0lTQ4egZY=;
        b=mEwaWbPyjwNQYYmpQhkZidR8tylworINZrxrX+w6qx6sqFG6eGxCQO2IR2ec1g3dDB
         BFTLs27xn+vrhAl3GuFrlSUQXFenotYvbP4EbEcgZrGlcqtQ8elgB+kRXcn2MLv3zfOz
         FiZhs8UPRe0kZVhs5K6KkpmHEooskW0WT4iJ3+NzFoqFMeBjNqlN+5dVK75uB9SS1j0n
         sA/WPG9hSi91v5wnHAQJc2fqaUL3/61i8jlwHPyAFUiRLj2vrY/cYF3vAGIjuF0B3F6w
         +wgcUKx135IdpK5d0TL9/qGDJc8sL0ntlkn287ybwE/hSjcoiaCyh4uBepH7mj/70Ae3
         NnjQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Pg2as9lsR4oc1g+5FUfqlZ+95jwroKMZ2q0lTQ4egZY=;
        b=1+KnOzWeJZ0GV2kFv2xEEcwQVeYOJNmq036MIW8FvCIFXmpxGoAIQxrGW6ktnpnKnl
         nkuCUy0B+kIUy21ilEOnvFNwBfi1RhJurRsqWkILpNQihz+uQqQSKwSs/JN4mA2ZeKZs
         nsC87eEteg2Fyoh4w6XfEQJDXnX5054qQbwyhZFFyXtFUBPMb3CUkMBliaRTjC/f5juY
         A37lRJrbvMDZKDkCXI6NsqQbdwA/l9JFMWpNnj+2RNd4BFtVclIfsjDwJvGs0KHld567
         hxhS08nLsV59sGtJMiGA+/8MFkSXfidW5y8PIk5pCMEMdDWmZIhwA/Gn7bA2CiRC7cav
         qdnA==
X-Gm-Message-State: AJIora/pPKST+56/WuMqg/40CYPbGCSSFRXE0YcJzA1OUoigbmwXEyiq
	+pMTRMcnqOfpeXHWdJB/LHpehflylxwCCrBkbPI=
X-Google-Smtp-Source: AGRyM1tuwDYvgqDBDLZ4rbNZIQ0GjDgNdVfw39RZxTGQegYfJcS/8gKJosD5BmB/vggfMKCaLiCPTSBmO7mIlU2Fwv8=
X-Received: by 2002:a17:906:e2d2:b0:704:81fe:3152 with SMTP id
 gr18-20020a170906e2d200b0070481fe3152mr5772183ejb.411.1655407936848; Thu, 16
 Jun 2022 12:32:16 -0700 (PDT)
MIME-Version: 1.0
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com> <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org>
In-Reply-To: <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org>
From: Dmytro Semenets <dmitry.semenets@gmail.com>
Date: Thu, 16 Jun 2022 22:32:05 +0300
Message-ID: <CACM97VUFBVGGYkXqrL-iLkU_jrQj+-KLveTdHk-H9F3UECSxKQ@mail.gmail.com>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Dmytro Semenets <Dmytro_Semenets@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000008c5f005e195b29e"

--00000000000008c5f005e195b29e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 16 Jun 2022 at 22:09, Julien Grall <julien@xen.org> wrote:

>
>
> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
> >
> > Hi Julien,
>
> Hi Volodymyr,
>
> >
> > Julien Grall <julien@xen.org> writes:
> >
> >> Hi,
> >>
> >> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
> >>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> >>> According to the PSCI specification, ARM TF can return DENIED on CPU_OFF.
> >>
> >> I am confused. The spec is talking about the Trusted OS and not the
> >> firmware. The documentation is also not specific to ARM Trusted
> >> Firmware. So did you mean "Trusted OS"?
> >
> > It should be "firmware", I believe.
>
> Hmmm... I couldn't find a reference in the spec suggesting that CPU_OFF
> could return DENIED because of the firmware. Do you have a pointer to
> the spec?
>
Actually, CPU_OFF is performed by the Trusted OS, but the Trusted OS is
called by ARM TF. On our platform, ARM TF doesn't call the Trusted OS for
CPU0 and returns DENIED instead.

>
> >
> >>
> >> Also, did you reproduce this on HW? If so, on which CPU does this fail?
> >>
> >
> > Yes, we reproduced this on HW. In our case it failed on CPU0. To be
> > fair, in our case it had nothing to do with the Trusted OS. It is just a
> > platform limitation - it can't turn off CPU0. But from Xen's perspective
> > there is no difference - the CPU_OFF call returns DENIED.
>
> Thanks for the clarification. I think I have seen this in the wild as well,
> but it never got to the top of my queue. It is good that we are fixing it.
>
> >
> >>> This patch brings the hypervisor into compliance with the PSCI
> >>> specification.
> >>
> >> Now it means the CPU will never be turned off using PSCI. Instead, we
> >> would end up spinning in Xen. This would be a problem because we would
> >> save less power.
> >
> > Agreed.
> >
> >>
> >>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
> >>> section 5.5.2
> >>
> >> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
> >> the trusted OS can only run on one core.
> >>
> >> Some of the trusted OSes are migratable. So I think we should first
> >> attempt to migrate the trusted OS off the CPU. Then if that doesn't work,
> >> we should prevent the CPU from going offline.
> >>
> >> That said, upstream doesn't support CPU offlining (I don't know about
> >> your use case). In case of shutdown, it is not necessary to offline
> >> the CPU, so we could avoid calling CPU_OFF on all CPUs but
> >> one. Something like:
> >>
> >
> > This is an even better approach, yes. But you mentioned CPU_OFF. Did you
> > mean SYSTEM_RESET?
>
> By CPU_OFF I was referring to the fact that Xen will issue the call on all
> CPUs but one. The remaining CPU will issue the command to reset/shutdown
> the system.
>
> >>   void machine_halt(void)
> >> @@ -21,10 +23,6 @@ void machine_halt(void)
> >>       smp_call_function(halt_this_cpu, NULL, 0);
> >>       local_irq_disable();
> >>
> >> -    /* Wait at most another 10ms for all other CPUs to go offline. */
> >> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
> >> -        mdelay(1);
> >> -
> >>       /* This is mainly for PSCI-0.2, which does not return if success. */
> >>       call_psci_system_off();
> >>
> >>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
> >>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> >>
> >> I don't recall seeing a patch on the ML recently for this. So is this an
> >> internal review?
> > 
> > Yeah, sorry about that. Dmytro is a new member of our team and he is not
> > yet familiar with the differences between internal reviews and reviews on the ML.
>
> No worries. I usually classify as an internal review anything that was done
> privately. This looks to be a public review, although not on xen-devel.
>
> I understand that some of the patches are still in the PoC stage, and
> doing the review on your GitHub is a good idea. But for those that are meant
> for upstream (e.g. bug fixes, small patches), I would suggest
> doing the review on xen-devel directly
>
Sorry about that.

>
> Cheers,
>
> --
> Julien Grall
>

--00000000000008c5f005e195b29e--


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 19:42:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 19:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350860.577282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1vNr-0000fC-2r; Thu, 16 Jun 2022 19:42:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350860.577282; Thu, 16 Jun 2022 19:42:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1vNr-0000f5-09; Thu, 16 Jun 2022 19:42:27 +0000
Received: by outflank-mailman (input) for mailman id 350860;
 Thu, 16 Jun 2022 19:42:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1vNp-0000ev-9k; Thu, 16 Jun 2022 19:42:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1vNp-0001cm-6f; Thu, 16 Jun 2022 19:42:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1vNo-0007l6-Nl; Thu, 16 Jun 2022 19:42:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1vNo-000650-NK; Thu, 16 Jun 2022 19:42:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6oSUpXTdZKc9f0/vPgzSckFNm6JYjUhTar5iAWd7xig=; b=g/CvebmInnvZ80eQRJxthXhtLA
	GyzEJSKJpgqe43W4CDWkXJV6QW3zPiO1owdr13tdCBxIX6vWydzW92rffeJlCqxwao9cmC4FHOlny
	XYhPWuACE5PTzy/cZk931M/+tJpSzJISrk/uCjteXc5avhz+5wgaopWdDVeDVU6WzrZ8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171202-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171202: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=92ab049719afe96913c0452bcf12946e0af0f0d5
X-Osstest-Versions-That:
    ovmf=05e57cc9ced67d2cd633c2bdcf70b5e1352bf635
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 19:42:24 +0000

flight 171202 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171202/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 92ab049719afe96913c0452bcf12946e0af0f0d5
baseline version:
 ovmf                 05e57cc9ced67d2cd633c2bdcf70b5e1352bf635

Last test of basis   171192  2022-06-16 08:10:35 Z    0 days
Testing same since   171202  2022-06-16 12:43:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ni, Ray <ray.ni@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   05e57cc9ce..92ab049719  92ab049719afe96913c0452bcf12946e0af0f0d5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 20:33:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 20:33:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350872.577297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wBA-0006s9-VQ; Thu, 16 Jun 2022 20:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350872.577297; Thu, 16 Jun 2022 20:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wBA-0006s2-SY; Thu, 16 Jun 2022 20:33:24 +0000
Received: by outflank-mailman (input) for mailman id 350872;
 Thu, 16 Jun 2022 20:33:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d4NK=WX=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o1wB9-0006rw-Ew
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 20:33:23 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 900d307b-edb3-11ec-bd2c-47488cf2e6aa;
 Thu, 16 Jun 2022 22:33:21 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 i81-20020a1c3b54000000b0039c76434147so3388618wma.1
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 13:33:21 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 n5-20020a05600c4f8500b0039c18d3fe27sm3417118wmq.19.2022.06.16.13.33.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 16 Jun 2022 13:33:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 900d307b-edb3-11ec-bd2c-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=hDBPWvdY7W7dPGLBDUDgpWrW+UxB4PtfXXxdrIDrzyU=;
        b=oXHQ4fkMqlo2hFCoHXGaXgDcYDVlyScQbouqmSiN40WJFq4q6l52sYUY2uFh6c5M+a
         ETN1/kWXAHu7yuCeLEgHun3ndriWrbbhhQnJDxxopllIc37qnmN8RsuplT2/RbJ8P28U
         NSi9v3YU4zzVQSa83o8AE4V6uHK7QqVjjH8LikVrIp6lBifIv3ABi6sLyM+EZRLYXWBH
         kc60mvTh1hjvrbid7ecPj050kw2gaMV6ZU97lt0x78f69Y7uUUX8GZUpE3Tdkh7oCuhV
         KNkDxeI4Vk4acxOsrq1kAzp5mOdze5hJj3U1DOH3CBzEafyISPZvz7wiuufA+rst4AO8
         tzig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=hDBPWvdY7W7dPGLBDUDgpWrW+UxB4PtfXXxdrIDrzyU=;
        b=U9WFdPonYwgXmrYa/mEx1cFBWZlFYs3f9dYmqcs3rxBKONZRwritcyCFx8dY0TTKfs
         t23tEBrt9DP7Pd+YrQZkkzWJ7lnoSCINuIQ522MTZf7JBv65fQX86JGsL4KxTTL2O9Yc
         bCSnCVaw4JcxU3LjxInOQfw/mzzl38ASdn0X24hBRNUCSbtGdiwF3ZRHzKRLMvye9pyE
         9D+HH0UgyTlszDQuopphL3OTBG7i1YNcxnaHl5yMORuMG63k1jBuhSrBRFP07Qv0IMGY
         WsPBEeIkwZm5qDWBDLGKZ+sD4BmtJ4FWqb41w52VC3ogr2oGFHU/gxmmYneFbOB/n217
         HnDA==
X-Gm-Message-State: AOAM532Nv4iUa+cvBHD9L6keHF2dMtw1mh84uuwKwf2Oj3sA61440KUZ
	EASiUB0evyhJ2n5KslShJMc=
X-Google-Smtp-Source: ABdhPJyj7GvuFrf6+omBmdl2bHuWM2ny9z+jt8gG+7M9cFFOQNKyI4xUCUVYu1PkxKMEj289rbGSLA==
X-Received: by 2002:a05:600c:4fcb:b0:39c:64cd:cc89 with SMTP id o11-20020a05600c4fcb00b0039c64cdcc89mr17426832wmq.197.1655411600801;
        Thu, 16 Jun 2022 13:33:20 -0700 (PDT)
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
To: Juergen Gross <jgross@suse.com>, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
Date: Thu, 16 Jun 2022 23:33:19 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 16.06.22 11:56, Juergen Gross wrote:

Hello Juergen, all


> On 16.06.22 09:31, Oleksandr wrote:
>>
>> On 16.06.22 08:37, Juergen Gross wrote:
>>
>>
>> Hello Juergen
>>
>>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>>> Xen grant mappings") introduced a new requirement for using virtio
>>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>>> feature.
>>>
>>> This is an undue requirement for non-PV guests, as those can be 
>>> operated
>>> with existing backends without any problem, as long as those backends
>>> are running in dom0.
>>>
>>> Per default allow virtio devices without grant support for non-PV
>>> guests.
>>>
>>> Add a new config item to always force use of grants for virtio.
>>>
>>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access 
>>> using Xen grant mappings")
>>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - remove command line parameter (Christoph Hellwig)
>>> ---
>>>   drivers/xen/Kconfig | 9 +++++++++
>>>   include/xen/xen.h   | 2 +-
>>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>> index bfd5f4f706bc..a65bd92121a5 100644
>>> --- a/drivers/xen/Kconfig
>>> +++ b/drivers/xen/Kconfig
>>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>>         If in doubt, say n.
>>> +config XEN_VIRTIO_FORCE_GRANT
>>> +    bool "Require Xen virtio support to use grants"
>>> +    depends on XEN_VIRTIO
>>> +    help
>>> +      Require virtio for Xen guests to use grant mappings.
>>> +      This will avoid the need to give the backend the right to map 
>>> all
>>> +      of the guest memory. This will need support on the backend side
>>> +      (e.g. qemu or kernel, depending on the virtio device types 
>>> used).
>>> +
>>>   endmenu
>>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>>> index 0780a81e140d..4d4188f20337 100644
>>> --- a/include/xen/xen.h
>>> +++ b/include/xen/xen.h
>>> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>>>   static inline void xen_set_restricted_virtio_memory_access(void)
>>>   {
>>> -    if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>>>           platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>
>>
>> Looks like the flag will *always* be set for paravirtualized guests
>> even if CONFIG_XEN_VIRTIO is disabled.
>>
>> Maybe we should clarify the check?
>>
>>
>> if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || 
>> IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain())
>>
>>      platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>
>
> Yes, we should. I had the function in grant-dma-ops.c in V1, and could
> drop the CONFIG_XEN_VIRTIO dependency for that reason.
>
> I'll wait for more comments before sending V3, though.

ok



Please note, I am happy with the current patch and it works in my 
Arm64-based environment.

Just one point to consider.


As was already mentioned earlier in this thread, 
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS (formerly 
arch_has_restricted_virtio_memory_access()) is not per device but applies to 
the whole guest. When set, it makes the VIRTIO_F_ACCESS_PLATFORM and 
VIRTIO_F_VERSION_1 features mandatory for *all* virtio devices in the guest.

The question is: do we want/need to lift this restriction for some 
devices (whose backends are trusted and so may access all guest memory) at 
the same time? I copy Viresh's original question here for 
convenience:

"I understand from your email that the backends need to offer the 
VIRTIO_F_ACCESS_PLATFORM flag now, but should this requirement be a bit 
soft?
I mean shouldn't we allow both types of backends to run with the same 
kernel, ones that offer this feature and others that don't? The ones 
that don't offer the feature, should continue to work like they used to, 
i.e. without the restricted memory access feature."

Technically this can be possible with HVM.

Let's imagine the following situation:

- Dom0 with backends which don't offer the required features for some 
reason(s). Running in Dom0 (a trusted domain), these backends are not 
obliged to offer them (yes, they can offer the required features and 
support grant mappings for virtio, but this is not strictly necessary, as 
they are considered trusted and so are allowed to access all guest memory).

- DomD with backends which do offer them and require grant mappings for 
virtio.

If this is a valid and correct use case, then we indeed need the ability 
to control this per device; otherwise, what is written below doesn't 
really matter.

I am wondering whether we can avoid using the global 
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS flag for Xen guests at all. I assume 
that all we need to do (when CONFIG_XEN_VIRTIO is enabled) is to make 
sure that *only* Xen grant DMA devices in HVM guests, and *all* devices 
in PV guests, require the flags.

Below is a diff showing how this could be done without extra options (not 
completely tested), although I realize it might look hackish, and a lot 
more effort is needed to get it right. In my Arm64-based environment it 
works: I tried running two backends; the first offered the required 
features and the corresponding device node had the required property, while 
the second didn't and there was no property.

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 1f9c3ba..07eb69f 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
         if (!xen_domain())
                 return 0;

-       xen_set_restricted_virtio_memory_access();
-
         if (!acpi_disabled)
                 xen_acpi_guest_init();
         else
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 8b71b1d..517a9d8 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -195,8 +195,6 @@ static void __init xen_hvm_guest_init(void)
         if (xen_pv_domain())
                 return;

-       xen_set_restricted_virtio_memory_access();
-
         init_hvm_pv_info();

         reserve_shared_info();
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 30d24fe..ca85d14 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -108,8 +108,6 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);

  static void __init xen_pv_init_platform(void)
  {
-       xen_set_restricted_virtio_memory_access();
-
         populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));

         set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index 371e16b..875690a 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -167,6 +167,11 @@ void virtio_add_status(struct virtio_device *dev, unsigned int status)
  }
  EXPORT_SYMBOL_GPL(virtio_add_status);

+int __weak device_has_restricted_virtio_memory_access(struct device *dev)
+{
+       return platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+}
+
  /* Do some validation, then set FEATURES_OK */
  static int virtio_features_ok(struct virtio_device *dev)
  {
@@ -174,7 +179,7 @@ static int virtio_features_ok(struct virtio_device *dev)

         might_sleep();

-       if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
+       if (device_has_restricted_virtio_memory_access(dev->dev.parent)) {
                 if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
                         dev_warn(&dev->dev,
                         "device must provide VIRTIO_F_VERSION_1\n");
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 6586152..da938f6 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -11,6 +11,7 @@
  #include <linux/dma-map-ops.h>
  #include <linux/of.h>
  #include <linux/pfn.h>
+#include <linux/virtio_config.h>
  #include <linux/xarray.h>
  #include <xen/xen.h>
  #include <xen/grant_table.h>
@@ -286,6 +287,11 @@ bool xen_is_grant_dma_device(struct device *dev)
         return has_iommu;
  }

+int device_has_restricted_virtio_memory_access(struct device *dev)
+{
+       return (xen_pv_domain() || xen_is_grant_dma_device(dev));
+}
+
  void xen_grant_setup_dma_ops(struct device *dev)
  {
         struct xen_grant_dma_data *data;
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index 7949829..b3a455b 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -559,4 +559,6 @@ static inline void virtio_cwrite64(struct virtio_device *vdev,
                 _r;                                                     \
         })

+int device_has_restricted_virtio_memory_access(struct device *dev);
+
  #endif /* _LINUX_VIRTIO_CONFIG_H */
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81..a99bab8 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
  extern u64 xen_saved_max_mem_size;
  #endif

-#include <linux/platform-feature.h>
-
-static inline void xen_set_restricted_virtio_memory_access(void)
-{
-       if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
-               platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
-}
-
  #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
  int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);


I think that when x86 HVM gains the required support (via ACPI or other
means) to communicate the x86 equivalent of "xen,grant-dma",
xen_is_grant_dma_device() can simply be extended to handle that.


bool xen_is_grant_dma_device(struct device *dev)
{
     struct device_node *iommu_np;
     bool has_iommu;

     /* XXX Handle only DT devices for now */
     if (!dev->of_node)
         return false;

     iommu_np = of_parse_phandle(dev->of_node, "iommus", 0);
     has_iommu = iommu_np && of_device_is_compatible(iommu_np, 
"xen,grant-dma");
     of_node_put(iommu_np);

     return has_iommu;
}



> Juergen

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 20:59:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 20:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350881.577308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wae-0001d7-6H; Thu, 16 Jun 2022 20:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350881.577308; Thu, 16 Jun 2022 20:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wae-0001d0-3W; Thu, 16 Jun 2022 20:59:44 +0000
Received: by outflank-mailman (input) for mailman id 350881;
 Thu, 16 Jun 2022 20:59:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hdZ4=WX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o1wad-0001cu-C0
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 20:59:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d6e8f7b-edb7-11ec-ab14-113154c10af9;
 Thu, 16 Jun 2022 22:59:41 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 51DA061E0C;
 Thu, 16 Jun 2022 20:59:40 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 81956C34114;
 Thu, 16 Jun 2022 20:59:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d6e8f7b-edb7-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655413179;
	bh=0tqifvP35073FfX2wW6spHcNRhUwWYpuViv/UJmaiIg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hWiLGw75IACeHLt2CnY/wkmQBcIJ2VUgJ91ki9PT94UjcuR02wrl+W4a6qww3h8uT
	 EJoWhkAW9r/4L4BGxU86/dNBOWAAKofp4y97LXe6vz/km4LHlRq31pY1lHIpZQrlnf
	 HvpEtk4z/NdxRSwrhGjI7+kF/9YzlnIBS4u7Ve/ErNjn0gewCrE+xtVWibTdeSYyNv
	 GO3F5kq0LxaL1PcCSdy5LdEZuHlkW3sbbIaQrLmlx77gTQZDYBN+H8NnAmxS90PwKl
	 J+JDo2xq5RvVxYXu7WGEVGXWlKFIr/31WKH9xAU6Zqip01qyape3+8zv++Ocmsbhny
	 9wsGESJ93vb7A==
Date: Thu, 16 Jun 2022 13:59:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [ImageBuilder] [PATCH v3] uboot-script-gen: Add
 DOMU_STATIC_MEM
In-Reply-To: <20220616095639.305510-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206161359100.10483@ubuntu-linux-20-04-desktop>
References: <20220616095639.305510-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 16 Jun 2022, Xenia Ragiadakou wrote:
> Add a new config parameter to configure a dom0less VM with static allocation.
> DOMU_STATIC_MEM[number]="baseaddr1 size1 ... baseaddrN sizeN"
> The parameter specifies the host physical address regions to be statically
> allocated to the VM. Each region is defined by its start address and size.
> 
> For instance,
> DOMU_STATIC_MEM[0]="0x30000000 0x10000000 0x50000000 0x20000000"
> indicates that the host memory regions [0x30000000, 0x40000000) and
> [0x50000000, 0x70000000) are statically allocated to the first dom0less VM.
> 
> Since currently it is not possible for a VM to have a mix of both statically
> and non-statically allocated memory regions, when DOMU_STATIC_MEM is specified,
> adjust VM's memory size to equal the amount of statically allocated memory.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
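The size adjustment described in the quoted message (the VM's memory
size is set to the sum of the statically allocated regions) can be
sketched in POSIX shell; the variable names and loop here are
illustrative, not ImageBuilder's actual implementation:

```shell
#!/bin/sh
# Sum the size fields of a DOMU_STATIC_MEM-style entry
# ("baseaddr1 size1 ... baseaddrN sizeN") to derive the VM memory size.
static_mem="0x30000000 0x10000000 0x50000000 0x20000000"

total=0
is_size=0
for field in $static_mem; do
    if [ "$is_size" -eq 1 ]; then
        # every second field is a region size
        total=$(( total + field ))
        is_size=0
    else
        is_size=1
    fi
done

echo "statically allocated: $(( total / 1024 / 1024 )) MiB"
```

With the example regions above this yields 256 MiB + 512 MiB = 768 MiB,
matching the [0x30000000, 0x40000000) and [0x50000000, 0x70000000)
ranges from the commit message.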

Reviewed and committed, thanks!

I added a check for DOMU_MEM != DOMU_STATIC_MEM on commit.

Cheers,

Stefano



From xen-devel-bounces@lists.xenproject.org Thu Jun 16 21:09:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 21:09:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350889.577319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wje-0003Ld-4L; Thu, 16 Jun 2022 21:09:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350889.577319; Thu, 16 Jun 2022 21:09:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1wjd-0003LW-Vy; Thu, 16 Jun 2022 21:09:01 +0000
Received: by outflank-mailman (input) for mailman id 350889;
 Thu, 16 Jun 2022 21:09:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1wjd-0003LM-0E; Thu, 16 Jun 2022 21:09:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1wjc-0003BS-TR; Thu, 16 Jun 2022 21:09:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1wjc-0003Du-Ek; Thu, 16 Jun 2022 21:09:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1wjc-0007vI-EL; Thu, 16 Jun 2022 21:09:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MhW4HTHAjuGXCeFvOI9G0ltT+5RHuGZjmGDYRzYP758=; b=RnE9cwAMH4KKLaP2AMcaJf/2yH
	m9zoOE0j+edRY9IIkxZ2KtxrlY8s5CzP4abhOw6YQaWoa1Y5QfpjUGd7e5F+fNTsuzIm0eT+2rdPU
	5jDOnoonV56+ZrIWwXJDJ7c4EqsVA8TfKL52wHlztZeM+UOk2gCnRyYJ09I7QWpqJgxU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171195-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171195: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:guest-start.2:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=30306f6194cadcc29c77f6ddcd416a75bf5c0232
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 21:09:00 +0000

flight 171195 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171195/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 19 guest-start.2 fail REGR. vs. 170714
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                30306f6194cadcc29c77f6ddcd416a75bf5c0232
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   23 days
Failing since        170716  2022-05-24 11:12:06 Z   23 days   56 attempts
Testing same since   171195  2022-06-16 09:32:54 Z    0 days    1 attempts

------------------------------------------------------------
2349 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 277434 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 21:38:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 21:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350903.577333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1xCH-0007Ib-NO; Thu, 16 Jun 2022 21:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350903.577333; Thu, 16 Jun 2022 21:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1xCH-0007IU-IO; Thu, 16 Jun 2022 21:38:37 +0000
Received: by outflank-mailman (input) for mailman id 350903;
 Thu, 16 Jun 2022 21:38:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1xCG-0007IK-7Z; Thu, 16 Jun 2022 21:38:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1xCG-0003hN-5b; Thu, 16 Jun 2022 21:38:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1xCF-0005Zo-Oo; Thu, 16 Jun 2022 21:38:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1xCF-00060y-MS; Thu, 16 Jun 2022 21:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R2qVkxkQy8vlgC7c3zhxPPcpzw2SME3XbgFhmephxDE=; b=o8uNEX9CW42CAg/B3gBNvI3mfK
	2yMreVBST+ojHemAeMES79vCs4bhps2VM7b+kqw2Qzb4geK9nhzb9yNotbXte4zIQu2Ux3Y4uaiUh
	TAtpAu5IlKpmqVUK9WoSEzsNHfLf231+s1E9ZsPWhDU17X/7XcWJ3c7OW6Zi/r80j3xQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 171197: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b746458e1ce1bec85e58b458386f8b7a0bedfaa6
X-Osstest-Versions-That:
    qemuu=9a5e4bc76058766962ab3ff13f42c1d39a8e08d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jun 2022 21:38:35 +0000

flight 171197 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171197/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171180
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171180
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171180
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171180
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171180
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171180
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171180
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171180
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                b746458e1ce1bec85e58b458386f8b7a0bedfaa6
baseline version:
 qemuu                9a5e4bc76058766962ab3ff13f42c1d39a8e08d3

Last test of basis   171180  2022-06-15 14:08:39 Z    1 days
Testing same since   171197  2022-06-16 11:08:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dr. David Alan Gilbert <dgilbert@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9a5e4bc760..b746458e1c  b746458e1ce1bec85e58b458386f8b7a0bedfaa6 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 16 22:37:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jun 2022 22:37:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350919.577359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1y7P-00063k-Es; Thu, 16 Jun 2022 22:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350919.577359; Thu, 16 Jun 2022 22:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1y7P-00063d-Bs; Thu, 16 Jun 2022 22:37:39 +0000
Received: by outflank-mailman (input) for mailman id 350919;
 Thu, 16 Jun 2022 22:37:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BA/p=WX=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o1y7N-00063X-7m
 for xen-devel@lists.xenproject.org; Thu, 16 Jun 2022 22:37:37 +0000
Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com
 [2607:f8b0:4864:20::52d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e915cb41-edc4-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 00:37:33 +0200 (CEST)
Received: by mail-pg1-x52d.google.com with SMTP id s135so2464269pgs.10
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jun 2022 15:37:33 -0700 (PDT)
Received: from jade ([192.77.111.2]) by smtp.gmail.com with ESMTPSA id
 b10-20020a17090a550a00b001cd4989feebsm4240789pji.55.2022.06.16.15.37.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jun 2022 15:37:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e915cb41-edc4-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=tgs9BiQ8fCCI5xzuPZeLIkAerocz5uyDcxscquibc0s=;
        b=FmQkIOSklDOl56M1VxM5BnIrf+OJln5vujy7+3w7L385t6zqtHdM7uuc9420QY5m51
         EXV3FdRDs0i76W9j13xoIfCQnBYFiEd0TskJWqoOl8WqyA5lXV+IDiJfVzZubfhZAjJw
         dSbBTr0zlqv1j/WIESVISAqa8pBiVqZ4HA5fiM+9FZXILUZ3UjUhsS64LslNa2wJfY0R
         DQkddk7BULD/EwIbolVAToyvXvkUiXOcbh3kZP2dU3vQdXdhfYYCq0s4JYGIT/g1H08C
         8KUBzFyF9Mi55VpiY61ua87pxOHLSuvn5cpTK3ar6yNHiglshJVuvszLxwQVV0LBYWlE
         nB+w==
Date: Thu, 16 Jun 2022 15:37:28 -0700
From: Jens Wiklander <jens.wiklander@linaro.org>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
Message-ID: <20220616223728.GA71444@jade>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org>
 <874k0nhvsq.fsf@epam.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <874k0nhvsq.fsf@epam.com>

Hi Volodymyr,

On Tue, Jun 14, 2022 at 07:47:18PM +0000, Volodymyr Babchuk wrote:
> 
> Hello Jens,
> 
> Sorry for the late review; I was busy with internal projects.
> 
> This is a preliminary review. I gave up at the scatter-gather
> operations and need more time to review them properly.

No problem, thanks for taking the time.

> 
> One thing that bothers me is that Xen is non-preemptive, and there are
> plenty of potentially long-running operations.

There's room to deal with that in the FF-A specification. These
scatter-gather operations are quite complicated, so I started with the
minimum. We can address the problem with long-running operations as a
future optimization.

> 
> Jens Wiklander <jens.wiklander@linaro.org> writes:
> 
> > Adds an FF-A version 1.1 [1] mediator to communicate with a Secure
> > Partition in secure world.
> >
> > The implementation is the bare minimum to be able to communicate with
> > OP-TEE running as an SPMC at S-EL1.
> >
> > This is loosely based on the TEE mediator framework and the OP-TEE
> > mediator.
> >
> > [1] https://developer.arm.com/documentation/den0077/latest
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/Kconfig              |   11 +
> >  xen/arch/arm/Makefile             |    1 +
> >  xen/arch/arm/domain.c             |   10 +
> >  xen/arch/arm/ffa.c                | 1624 +++++++++++++++++++++++++++++
> >  xen/arch/arm/include/asm/domain.h |    4 +
> >  xen/arch/arm/include/asm/ffa.h    |   71 ++
> >  xen/arch/arm/vsmc.c               |   17 +-
> >  7 files changed, 1735 insertions(+), 3 deletions(-)
> >  create mode 100644 xen/arch/arm/ffa.c
> >  create mode 100644 xen/arch/arm/include/asm/ffa.h
> >
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index ecfa6822e4d3..5b75067e2745 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -106,6 +106,17 @@ config TEE
> >  
> >  source "arch/arm/tee/Kconfig"
> >  
> > +config FFA
> > +	bool "Enable FF-A mediator support" if EXPERT
> > +	default n
> > +	depends on ARM_64
> > +	help
> > +	  This option enables a minimal FF-A mediator. The mediator is
> > +	  generic as it follows the FF-A specification [1], but it only
> > +	  implements a small subset of the specification.
> > +
> > +	  [1] https://developer.arm.com/documentation/den0077/latest
> > +
> >  endmenu
> >  
> >  menu "ARM errata workaround via the alternative framework"
> > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> > index 1d862351d111..dbf5e593a069 100644
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -20,6 +20,7 @@ obj-y += domain.o
> >  obj-y += domain_build.init.o
> >  obj-y += domctl.o
> >  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> > +obj-$(CONFIG_FFA) += ffa.o
> >  obj-y += gic.o
> >  obj-y += gic-v2.o
> >  obj-$(CONFIG_GICV3) += gic-v3.o
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 8110c1df8638..a93e6a9c4aef 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -27,6 +27,7 @@
> >  #include <asm/cpufeature.h>
> >  #include <asm/current.h>
> >  #include <asm/event.h>
> > +#include <asm/ffa.h>
> >  #include <asm/gic.h>
> >  #include <asm/guest_atomics.h>
> >  #include <asm/irq.h>
> > @@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
> >      if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
> >          goto fail;
> >  
> > +    if ( (rc = ffa_domain_init(d)) != 0 )
> 
> So, FFA support will be enabled for each domain? I think that this is
> fine for an experimental feature, but I want to hear the maintainers'
> opinion.
> 
> > +        goto fail;
> > +
> >      update_domain_wallclock_time(d);
> >  
> >      /*
> > @@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
> >  enum {
> >      PROG_pci = 1,
> >      PROG_tee,
> > +    PROG_ffa,
> >      PROG_xen,
> >      PROG_page,
> >      PROG_mapping,
> > @@ -1046,6 +1051,11 @@ int domain_relinquish_resources(struct domain *d)
> >          if (ret )
> >              return ret;
> >  
> > +    PROGRESS(ffa):
> > +        ret = ffa_relinquish_resources(d);
> > +        if (ret )
> 
> Coding style: if ( ret )
> 
> > +            return ret;
> > +
> >      PROGRESS(xen):
> >          ret = relinquish_memory(d, &d->xenpage_list);
> >          if ( ret )
> > diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
> > new file mode 100644
> > index 000000000000..9063b7f2b59e
> > --- /dev/null
> > +++ b/xen/arch/arm/ffa.c
> > @@ -0,0 +1,1624 @@
> > +/*
> > + * xen/arch/arm/ffa.c
> > + *
> > + * Arm Firmware Framework for ARMv8-A (FF-A) mediator
> > + *
> > + * Copyright (C) 2021  Linaro Limited
> 
> It is 2022 already :)

Haha, time flies. :-)

> 
> > + *
> > + * Permission is hereby granted, free of charge, to any person
> > + * obtaining a copy of this software and associated documentation
> > + * files (the "Software"), to deal in the Software without restriction,
> > + * including without limitation the rights to use, copy, modify, merge,
> > + * publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so,
> > + * subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be
> > + * included in all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> > + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> > + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> > + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> > + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#include <xen/domain_page.h>
> > +#include <xen/errno.h>
> > +#include <xen/init.h>
> > +#include <xen/lib.h>
> > +#include <xen/sched.h>
> > +#include <xen/types.h>
> > +#include <xen/sizes.h>
> > +#include <xen/bitops.h>
> > +
> > +#include <asm/smccc.h>
> > +#include <asm/event.h>
> > +#include <asm/ffa.h>
> > +#include <asm/regs.h>
> > +
> > +/* Error codes */
> > +#define FFA_RET_OK			0
> > +#define FFA_RET_NOT_SUPPORTED		-1
> > +#define FFA_RET_INVALID_PARAMETERS	-2
> > +#define FFA_RET_NO_MEMORY		-3
> > +#define FFA_RET_BUSY			-4
> > +#define FFA_RET_INTERRUPTED		-5
> > +#define FFA_RET_DENIED			-6
> > +#define FFA_RET_RETRY			-7
> > +#define FFA_RET_ABORTED			-8
> > +
> > +/* FFA_VERSION helpers */
> > +#define FFA_VERSION_MAJOR		_AC(1,U)
> > +#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
> > +#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
> > +#define FFA_VERSION_MINOR		_AC(1,U)
> > +#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
> > +#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
> > +#define MAKE_FFA_VERSION(major, minor)	\
> > +	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
> > +	 ((minor) & FFA_VERSION_MINOR_MASK))
> > +
> > +#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
> > +#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
> > +#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
> > +#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
> > +						 FFA_VERSION_MINOR)
> > +
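As an aside, the FFA_VERSION packing defined by the macros above can be checked with a quick sketch (Python used here purely for illustration; the shifts and masks are taken from the macros, the helper name is mine):

```python
# Sketch of the FFA_VERSION field packing from the macros above:
# major version in bits 30:16 (15 bits), minor version in bits 15:0.
FFA_VERSION_MAJOR_SHIFT = 16
FFA_VERSION_MAJOR_MASK = 0x7FFF
FFA_VERSION_MINOR_MASK = 0xFFFF

def make_ffa_version(major, minor):
    # Mirrors MAKE_FFA_VERSION(major, minor)
    return ((major & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
           (minor & FFA_VERSION_MINOR_MASK)

print(hex(make_ffa_version(1, 0)))  # FFA_VERSION_1_0
print(hex(make_ffa_version(1, 1)))  # FFA_VERSION_1_1 / FFA_MY_VERSION
```

So version 1.1 is reported as the single word 0x10001, which is what the version negotiation compares against FFA_MIN_VERSION.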
> > +
> > +#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
> > +
> > +/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
> > +#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
> > +
> > +/* Memory access permissions: Read-write */
> > +#define FFA_MEM_ACC_RW			_AC(0x2,U)
> > +
> > +/* Clear memory before mapping in receiver */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
> > +/* Relayer may time slice this operation */
> > +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
> > +/* Clear memory after receiver relinquishes it */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
> > +
> > +/* Share memory transaction */
> > +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
> > +/* Relayer must choose the alignment boundary */
> > +#define FFA_MEMORY_REGION_FLAG_ANY_ALIGNMENT	_AC(0,U)
> BIT(0, U)?

No, it's rather bit 9 set to 0. This is unused, I'll remove it instead.

> 
> > +
> > +#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
> > +
> > +/* Framework direct request/response */
> > +#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
> > +#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
> > +#define FFA_MSG_PSCI			_AC(0x0,U)
> > +#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
> > +#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
> > +#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
> > +#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
> > +
> > +/*
> > + * Flags used for the FFA_PARTITION_INFO_GET return message:
> > + * BIT(0): Supports receipt of direct requests
> > + * BIT(1): Can send direct requests
> > + * BIT(2): Can send and receive indirect messages
> > + * BIT(3): Supports receipt of notifications
> > + * BIT(4-5): Partition ID is a PE endpoint ID
> > + */
> > +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
> > +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
> > +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
> > +#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
> > +#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
> > +#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
> > +#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
> > +#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
> > +#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
> > +#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
> > +#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
> > +
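For reference, a partition_properties word returned by FFA_PARTITION_INFO_GET can be decoded along these lines (an illustrative Python sketch following the bit assignments in the comment block above, not part of the patch):

```python
# Decode FFA_PARTITION_INFO_GET partition_properties per the flag
# definitions above.
FFA_PART_PROP_DIRECT_REQ_RECV = 1 << 0
FFA_PART_PROP_DIRECT_REQ_SEND = 1 << 1
FFA_PART_PROP_INDIRECT_MSGS   = 1 << 2
FFA_PART_PROP_RECV_NOTIF      = 1 << 3
ID_TYPE_MASK = 0x3 << 4  # bits 5:4: 0 = PE endpoint ID, 1/2 = SEPID, 3 = aux ID

def describe_props(props):
    out = []
    if props & FFA_PART_PROP_DIRECT_REQ_RECV:
        out.append("recv-direct-req")
    if props & FFA_PART_PROP_DIRECT_REQ_SEND:
        out.append("send-direct-req")
    if props & FFA_PART_PROP_INDIRECT_MSGS:
        out.append("indirect-msgs")
    if props & FFA_PART_PROP_RECV_NOTIF:
        out.append("recv-notifications")
    out.append(f"id-type={(props & ID_TYPE_MASK) >> 4}")
    return out
```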
> > +/* Function IDs */
> > +#define FFA_ERROR			_AC(0x84000060,U)
> > +#define FFA_SUCCESS_32			_AC(0x84000061,U)
> > +#define FFA_SUCCESS_64			_AC(0xC4000061,U)
> > +#define FFA_INTERRUPT			_AC(0x84000062,U)
> > +#define FFA_VERSION			_AC(0x84000063,U)
> > +#define FFA_FEATURES			_AC(0x84000064,U)
> > +#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
> > +#define FFA_RX_RELEASE			_AC(0x84000065,U)
> > +#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
> > +#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
> > +#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
> > +#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
> > +#define FFA_ID_GET			_AC(0x84000069,U)
> > +#define FFA_SPM_ID_GET			_AC(0x84000085,U)
> > +#define FFA_MSG_WAIT			_AC(0x8400006B,U)
> > +#define FFA_MSG_YIELD			_AC(0x8400006C,U)
> > +#define FFA_MSG_RUN			_AC(0x8400006D,U)
> > +#define FFA_MSG_SEND2			_AC(0x84000086,U)
> > +#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
> > +#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
> > +#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
> > +#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
> > +#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
> > +#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
> > +#define FFA_MEM_LEND_32			_AC(0x84000072,U)
> > +#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
> > +#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
> > +#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
> > +#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
> > +#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
> > +#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
> > +#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
> > +#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
> > +#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
> > +#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
> > +#define FFA_MSG_SEND			_AC(0x8400006E,U)
> > +#define FFA_MSG_POLL			_AC(0x8400006A,U)
> > +
> > +/* Partition information descriptor */
> > +struct ffa_partition_info_1_0 {
> > +    uint16_t id;
> > +    uint16_t execution_context;
> > +    uint32_t partition_properties;
> > +};
> > +
> > +struct ffa_partition_info_1_1 {
> > +    uint16_t id;
> > +    uint16_t execution_context;
> > +    uint32_t partition_properties;
> > +    uint8_t uuid[16];
> > +};
> > +
> > +/* Constituent memory region descriptor */
> > +struct ffa_address_range {
> > +    uint64_t address;
> > +    uint32_t page_count;
> > +    uint32_t reserved;
> > +};
> > +
> > +/* Composite memory region descriptor */
> > +struct ffa_mem_region {
> > +    uint32_t total_page_count;
> > +    uint32_t address_range_count;
> > +    uint64_t reserved;
> > +    struct ffa_address_range address_range_array[];
> > +};
> > +
> > +/* Memory access permissions descriptor */
> > +struct ffa_mem_access_perm {
> > +    uint16_t endpoint_id;
> > +    uint8_t perm;
> > +    uint8_t flags;
> > +};
> > +
> > +/* Endpoint memory access descriptor */
> > +struct ffa_mem_access {
> > +    struct ffa_mem_access_perm access_perm;
> > +    uint32_t region_offs;
> > +    uint64_t reserved;
> > +};
> > +
> > +/* Lend, donate or share memory transaction descriptor */
> > +struct ffa_mem_transaction_1_0 {
> > +    uint16_t sender_id;
> > +    uint8_t mem_reg_attr;
> > +    uint8_t reserved0;
> > +    uint32_t flags;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +    uint32_t reserved1;
> > +    uint32_t mem_access_count;
> > +    struct ffa_mem_access mem_access_array[];
> > +};
> > +
> > +struct ffa_mem_transaction_1_1 {
> > +    uint16_t sender_id;
> > +    uint16_t mem_reg_attr;
> > +    uint32_t flags;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +    uint32_t mem_access_size;
> > +    uint32_t mem_access_count;
> > +    uint32_t mem_access_offs;
> > +    uint8_t reserved[12];
> > +};
> > +
> > +/*
> > + * The parts needed from struct ffa_mem_transaction_1_0 or struct
> > + * ffa_mem_transaction_1_1, used to provide an abstraction of difference in
> > + * data structures between version 1.0 and 1.1. This is just an internal
> > + * interface and can be changed without changing any ABI.
> > + */
> > +struct ffa_mem_transaction_x {
> > +    uint16_t sender_id;
> > +    uint8_t mem_reg_attr;
> > +    uint8_t flags;
> > +    uint8_t mem_access_size;
> > +    uint8_t mem_access_count;
> > +    uint16_t mem_access_offs;
> > +    uint64_t global_handle;
> > +    uint64_t tag;
> > +};
> > +
> > +/* Endpoint RX/TX descriptor */
> > +struct ffa_endpoint_rxtx_descriptor_1_0 {
> > +    uint16_t sender_id;
> > +    uint16_t reserved;
> > +    uint32_t rx_range_count;
> > +    uint32_t tx_range_count;
> > +};
> > +
> > +struct ffa_endpoint_rxtx_descriptor_1_1 {
> > +    uint16_t sender_id;
> > +    uint16_t reserved;
> > +    uint32_t rx_region_offs;
> > +    uint32_t tx_region_offs;
> > +};
> > +
> > +struct ffa_ctx {
> > +    void *rx;
> > +    void *tx;
> > +    struct page_info *rx_pg;
> > +    struct page_info *tx_pg;
> > +    unsigned int page_count;
> > +    uint32_t guest_vers;
> > +    bool tx_is_mine;
> > +    bool interrupted;
> > +};
> > +
> > +struct ffa_shm_mem {
> > +    struct list_head list;
> > +    uint16_t sender_id;
> > +    uint16_t ep_id;     /* endpoint, the one lending */
> > +    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
> > +    unsigned int page_count;
> > +    struct page_info *pages[];
> > +};
> > +
> > +struct mem_frag_state {
> > +    struct list_head list;
> > +    struct ffa_shm_mem *shm;
> > +    uint32_t range_count;
> > +    unsigned int current_page_idx;
> > +    unsigned int frag_offset;
> > +    unsigned int range_offset;
> > +    uint8_t *buf;
> > +    unsigned int buf_size;
> > +    struct ffa_address_range range;
> > +};
> > +
> > +/*
> > + * Our rx/tx buffer shared with the SPMC
> > + */
> > +static uint32_t ffa_version;
> > +static uint16_t *subsr_vm_created;
> > +static unsigned int subsr_vm_created_count;
> > +static uint16_t *subsr_vm_destroyed;
> > +static unsigned int subsr_vm_destroyed_count;
> > +static void *ffa_rx;
> > +static void *ffa_tx;
> > +static unsigned int ffa_page_count;
> > +static spinlock_t ffa_buffer_lock = SPIN_LOCK_UNLOCKED;
> > +
> > +static struct list_head ffa_mem_list = LIST_HEAD_INIT(ffa_mem_list);
> > +static struct list_head ffa_frag_list = LIST_HEAD_INIT(ffa_frag_list);
> > +static spinlock_t ffa_mem_list_lock = SPIN_LOCK_UNLOCKED;
> > +
> > +static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
> > +
> > +static uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
> > +{
> > +    return (uint64_t)reg0 << 32 | reg1;
> > +}
> > +
> > +static void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1, uint64_t val)
> > +{
> > +    *reg0 = val >> 32;
> > +    *reg1 = val;
> > +}
> > +
> > +static bool ffa_get_version(uint32_t *vers)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
> > +    {
> > +        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
> > +        return false;
> > +    }
> > +
> > +    *vers = resp.a0;
> > +    return true;
> > +}
> > +
> > +static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
> > +                             uint32_t page_count)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +#ifdef CONFIG_ARM_64
> > +        .a0 = FFA_RXTX_MAP_64,
> > +#endif
> > +#ifdef CONFIG_ARM_32
> > +        .a0 = FFA_RXTX_MAP_32,
> > +#endif
> > +        .a1 = tx_addr, .a2 = rx_addr,
> > +        .a3 = page_count,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> 
> What if we get SMCCC_NOT_SUPPORTED there?

Good point, I'll fix.

> 
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> 
> The same question. I believe it is better to test against FFA_SUCCESS.
> Also looks like this code can be extracted into some helper function,
> because it repeats again and again.

Yeah, I'll add a helper function.
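
For reference, the repeated tail could be factored into something like the sketch
below. This is only an illustration of the shape such a helper might take: the
function name is made up, and the FFA_RET_* / SMCCC return values are the usual
spec values (NOT_SUPPORTED being -1), not taken from this patch's header.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values: FFA_ERROR per the function-ID table above,
 * return codes as in the FF-A and SMCCC specifications. */
#define FFA_ERROR                   0x84000060U
#define FFA_RET_OK                  0
#define FFA_RET_NOT_SUPPORTED       (-1)
#define ARM_SMCCC_RET_NOT_SUPPORTED (-1)

/*
 * Map the a0/a2 result registers of a simple FF-A call to a return
 * code: a plain SMCCC NOT_SUPPORTED is folded into FFA_RET_NOT_SUPPORTED,
 * FFA_ERROR carries the error code in a2 (or NOT_SUPPORTED when a2 is
 * zero), and anything else is treated as success.
 */
static int32_t ffa_simple_call_result(uint64_t a0, uint64_t a2)
{
    if ( (int64_t)a0 == ARM_SMCCC_RET_NOT_SUPPORTED )
        return FFA_RET_NOT_SUPPORTED;
    if ( a0 == FFA_ERROR )
        return a2 ? (int32_t)a2 : FFA_RET_NOT_SUPPORTED;
    return FFA_RET_OK;
}
```

Each of ffa_rxtx_unmap(), ffa_partition_info_get(), ffa_rx_release() and
ffa_mem_reclaim() could then end with a single call to such a helper.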

> 
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> > +                                       uint32_t w4, uint32_t w5,
> > +                                       uint32_t *count)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
> > +        .a5 = w5,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    *count = resp.a2;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t ffa_rx_release(void)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
> > +                             register_t addr, uint32_t pg_count,
> > +                             uint64_t *handle)
> > +{
> > +    struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
> > +        .a4 = pg_count,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    /*
> > +     * For arm64 we must use 64-bit calling convention if the buffer isn't
> > +     * passed in our tx buffer.
> > +     */
> > +    if (sizeof(addr) > 4 && addr)
> > +        arg.a0 = FFA_MEM_SHARE_64;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    switch ( resp.a0 ) {
> > +    case FFA_ERROR:
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    case FFA_SUCCESS_32:
> > +        *handle = reg_pair_to_64(resp.a3, resp.a2);
> > +        return FFA_RET_OK;
> > +    case FFA_MEM_FRAG_RX:
> > +        *handle = reg_pair_to_64(resp.a2, resp.a1);
> > +        return resp.a3;
> > +    default:
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +}
> > +
> > +static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
> > +                               uint16_t sender_id)
> > +{
> > +    struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
> > +        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    switch ( resp.a0 ) {
> > +    case FFA_ERROR:
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    case FFA_SUCCESS_32:
> > +        return FFA_RET_OK;
> > +    case FFA_MEM_FRAG_RX:
> > +        return resp.a3;
> > +    default:
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +}
> > +
> > +static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
> > +                                uint32_t flags)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    if ( resp.a0 == FFA_ERROR )
> > +    {
> > +        if ( resp.a2 )
> > +            return resp.a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> > +                                      uint8_t msg)
> > +{
> > +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> > +    int32_t res;
> > +
> > +    if ( msg != FFA_MSG_SEND_VM_CREATED && msg !=FFA_MSG_SEND_VM_DESTROYED )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> > +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> > +    else
> > +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> > +
> > +    do {
> > +        const struct arm_smccc_1_2_regs arg = {
> > +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> > +            .a1 = sp_id,
> > +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> > +            .a5 = vm_id,
> > +        };
> > +        struct arm_smccc_1_2_regs resp;
> > +
> > +        arm_smccc_1_2_smc(&arg, &resp);
> > +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp ) {
> > +            /*
> > +             * This is an invalid response, likely due to some error in the
> > +             * implementation of the ABI.
> > +             */
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +        }
> > +        res = resp.a3;
> > +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
> 
> How long can this loop run? Xen is not preemptive, unfortunately. So
> maybe we need some way to restart this call.

That's a good question. As long as we don't have any non-secure
interrupts pending (and masked in CPSR) while entering the secure world
we shouldn't be able to get FFA_RET_INTERRUPTED. The spec doesn't say
when FFA_RET_RETRY can happen, except that the secure world may be able
to recover from something if we try again. In the use case I'm aiming
for, OP-TEE as SPMC at S-EL1 without any SPs, only TAs, neither of
these cases can happen. So I added this code more for completeness.
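
If a cap ever becomes necessary, a bounded variant could look like the
standalone sketch below. The retry limit, the FFA_RET_* values (the usual
spec values) and the stubbed-out direct request are all made up for
illustration; they are not from this patch.

```c
#include <assert.h>
#include <stdint.h>

#define FFA_RET_OK             0
#define FFA_RET_INTERRUPTED  (-4) /* illustrative value, per the FF-A spec */
#define FFA_RET_ABORTED      (-8) /* illustrative value, per the FF-A spec */
#define DIRECT_REQ_MAX_TRIES  16  /* made-up cap for the sketch */

/* Stand-in for the SMC: reports INTERRUPTED a given number of times
 * before succeeding, so the loop bound can be exercised. */
static int32_t fake_direct_req(unsigned int *busy_rounds)
{
    if ( *busy_rounds )
    {
        (*busy_rounds)--;
        return FFA_RET_INTERRUPTED;
    }
    return FFA_RET_OK;
}

/* Retry the request a bounded number of times instead of forever;
 * the last observed status is returned if the cap is hit. */
static int32_t send_with_retry_cap(unsigned int busy_rounds)
{
    unsigned int tries;
    int32_t res = FFA_RET_ABORTED;

    for ( tries = 0; tries < DIRECT_REQ_MAX_TRIES; tries++ )
    {
        res = fake_direct_req(&busy_rounds);
        if ( res != FFA_RET_INTERRUPTED )
            break;
    }

    return res;
}
```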

> 
> > +
> > +    return res;
> > +}
> > +
> > +static u16 get_vm_id(struct domain *d)
> > +{
> > +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> > +    return d->domain_id + 1;
> > +}
> > +
> > +static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
> > +                     register_t v2, register_t v3, register_t v4, register_t v5,
> > +                     register_t v6, register_t v7)
> > +{
> > +        set_user_reg(regs, 0, v0);
> > +        set_user_reg(regs, 1, v1);
> > +        set_user_reg(regs, 2, v2);
> > +        set_user_reg(regs, 3, v3);
> > +        set_user_reg(regs, 4, v4);
> > +        set_user_reg(regs, 5, v5);
> > +        set_user_reg(regs, 6, v6);
> > +        set_user_reg(regs, 7, v7);
> > +}
> > +
> > +static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
> > +{
> > +    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
> > +}
> > +
> > +static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
> > +                             uint32_t w3)
> > +{
> > +    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
> > +}
> > +
> > +static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
> > +                             uint32_t handle_hi, uint32_t frag_offset,
> > +                             uint16_t sender_id)
> > +{
> > +    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
> > +             (uint32_t)sender_id << 16, 0, 0, 0);
> > +}
> > +
> > +static void handle_version(struct cpu_user_regs *regs)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t vers = get_user_reg(regs, 1);
> > +
> > +    if ( vers < FFA_VERSION_1_1 )
> > +        vers = FFA_VERSION_1_0;
> > +    else
> > +        vers = FFA_VERSION_1_1;
> > +
> > +    ctx->guest_vers = vers;
> > +    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
> > +}
> > +
> > +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> > +                                register_t rx_addr, uint32_t page_count)
> > +{
> > +    uint32_t ret = FFA_RET_NOT_SUPPORTED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    struct page_info *tx_pg;
> > +    struct page_info *rx_pg;
> > +    p2m_type_t t;
> > +    void *rx;
> > +    void *tx;
> > +
> > +    if ( !smccc_is_conv_64(fid) )
> > +    {
> > +        tx_addr &= UINT32_MAX;
> > +        rx_addr &= UINT32_MAX;
> > +    }
> > +
> > +    /* For now to keep things simple, only deal with a single page */
> > +    if ( page_count != 1 )
> > +        return FFA_RET_NOT_SUPPORTED;
> > +
> > +    /* Already mapped */
> > +    if ( ctx->rx )
> > +        return FFA_RET_DENIED;
> > +
> > +    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
> > +    if ( !tx_pg )
> > +        return FFA_RET_NOT_SUPPORTED;
> > +    /* Only normal RAM for now */
> > +    if (t != p2m_ram_rw)
> > +        goto err_put_tx_pg;
> > +
> > +    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
> > +    if ( !rx_pg )
> > +        goto err_put_tx_pg;
> > +    /* Only normal RAM for now */
> > +    if ( t != p2m_ram_rw )
> > +        goto err_put_rx_pg;
> > +
> > +    tx = __map_domain_page_global(tx_pg);
> > +    if ( !tx )
> > +        goto err_put_rx_pg;
> > +
> > +    rx = __map_domain_page_global(rx_pg);
> > +    if ( !rx )
> > +        goto err_unmap_tx;
> > +
> > +    ctx->rx = rx;
> > +    ctx->tx = tx;
> > +    ctx->rx_pg = rx_pg;
> > +    ctx->tx_pg = tx_pg;
> > +    ctx->page_count = 1;
> > +    ctx->tx_is_mine = true;
> > +    return FFA_RET_OK;
> > +
> > +err_unmap_tx:
> > +    unmap_domain_page_global(tx);
> > +err_put_rx_pg:
> > +    put_page(rx_pg);
> > +err_put_tx_pg:
> > +    put_page(tx_pg);
> > +    return ret;
> > +}
> > +
> > +static uint32_t handle_rxtx_unmap(void)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t ret;
> > +
> > +    if ( !ctx-> rx )
> 
> coding style: ctx->rx
> 
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    ret = ffa_rxtx_unmap(get_vm_id(d));
> > +    if ( ret )
> > +        return ret;
> > +
> > +    unmap_domain_page_global(ctx->rx);
> > +    unmap_domain_page_global(ctx->tx);
> > +    put_page(ctx->rx_pg);
> > +    put_page(ctx->tx_pg);
> > +    ctx->rx = NULL;
> > +    ctx->tx = NULL;
> > +    ctx->rx_pg = NULL;
> > +    ctx->tx_pg = NULL;
> > +    ctx->page_count = 0;
> > +    ctx->tx_is_mine = false;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
> > +                                          uint32_t w4, uint32_t w5,
> > +                                          uint32_t *count)
> > +{
> > +    uint32_t ret = FFA_RET_DENIED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +
> > +    if ( !ffa_page_count )
> > +        return FFA_RET_DENIED;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    if ( !ctx->page_count || !ctx->tx_is_mine )
> > +        goto out;
> > +    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
> > +    if ( ret )
> > +        goto out;
> > +    if ( ctx->guest_vers == FFA_VERSION_1_0 ) {
> > +        size_t n;
> > +        struct ffa_partition_info_1_1 *src = ffa_rx;
> > +        struct ffa_partition_info_1_0 *dst = ctx->rx;
> > +
> > +        for ( n = 0; n < *count; n++ ) {
> > +            dst[n].id = src[n].id;
> > +            dst[n].execution_context = src[n].execution_context;
> > +            dst[n].partition_properties = src[n].partition_properties;
> > +        }
> > +    } else {
> 
> Maybe it is worth checking the version in this branch? Or at least
> put an ASSERT(ctx->guest_vers == FFA_VERSION_1_1).

This is actually on purpose. If a newer version is introduced, depending
on what's changed this may very well continue to work. I suppose one
purpose of such an assert would be to catch all the places that need to
be double-checked when increasing the version, but that seems a little messy.

> 
> > +        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
> > +
> > +        memcpy(ctx->rx, ffa_rx, sz);
> > +    }
> > +    ffa_rx_release();
> > +    ctx->tx_is_mine = false;
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    return ret;
> > +}
> > +
> > +static uint32_t handle_rx_release(void)
> > +{
> > +    uint32_t ret = FFA_RET_DENIED;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    if ( !ctx->page_count || ctx->tx_is_mine )
> > +        goto out;
> > +    ret = FFA_RET_OK;
> > +    ctx->tx_is_mine = true;
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    return ret;
> > +}
> > +
> > +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> > +    struct arm_smccc_1_2_regs resp = { };
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t src_dst;
> > +    uint64_t mask;
> > +
> > +    if ( smccc_is_conv_64(fid) )
> > +        mask = 0xffffffffffffffff;
> > +    else
> > +        mask = 0xffffffff;
> > +
> > +    src_dst = get_user_reg(regs, 1);
> 
> Should you apply mask there?

src_dst is already a uint32_t, so any higher bits are truncated by the
assignment.
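
That truncation is just the ordinary C conversion on assignment to an
unsigned narrower type; a quick standalone illustration (not Xen code):

```c
#include <assert.h>
#include <stdint.h>

/* Assigning a 64-bit register value to a uint32_t keeps only the low
 * 32 bits (conversion is modulo 2^32), which is why no explicit mask
 * is needed for src_dst. */
static uint32_t narrow(uint64_t reg)
{
    uint32_t src_dst = reg; /* implicit truncation to the low 32 bits */

    return src_dst;
}
```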

> 
> > +    if ( (src_dst >> 16) != get_vm_id(d) )
> > +    {
> > +        resp.a0 = FFA_ERROR;
> > +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    arg.a1 = src_dst;
> ... or there?
> 
> 
> > +    arg.a2 = get_user_reg(regs, 2) & mask;
> > +    arg.a3 = get_user_reg(regs, 3) & mask;
> > +    arg.a4 = get_user_reg(regs, 4) & mask;
> > +    arg.a5 = get_user_reg(regs, 5) & mask;
> > +    arg.a6 = get_user_reg(regs, 6) & mask;
> > +    arg.a7 = get_user_reg(regs, 7) & mask;
> > +
> > +    while ( true ) {
> > +        arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +        switch ( resp.a0 )
> > +        {
> > +        case FFA_INTERRUPT:
> > +            ctx->interrupted = true;
> > +            goto out;
> > +        case FFA_ERROR:
> > +        case FFA_SUCCESS_32:
> > +        case FFA_SUCCESS_64:
> > +        case FFA_MSG_SEND_DIRECT_RESP_32:
> > +        case FFA_MSG_SEND_DIRECT_RESP_64:
> > +            goto out;
> > +        default:
> > +            /* Bad fid, report back. */
> > +            memset(&arg, 0, sizeof(arg));
> > +            arg.a0 = FFA_ERROR;
> > +            arg.a1 = src_dst;
> > +            arg.a2 = FFA_RET_NOT_SUPPORTED;
> > +            continue;
> > +        }
> > +    }
> > +
> > +out:
> > +    set_user_reg(regs, 0, resp.a0);
> > +    set_user_reg(regs, 2, resp.a2 & mask);
> > +    set_user_reg(regs, 1, resp.a1 & mask);
> Looks like you need to swap two lines above.

Sure, it looks a bit funny.

> 
> > +    set_user_reg(regs, 3, resp.a3 & mask);
> > +    set_user_reg(regs, 4, resp.a4 & mask);
> > +    set_user_reg(regs, 5, resp.a5 & mask);
> > +    set_user_reg(regs, 6, resp.a6 & mask);
> > +    set_user_reg(regs, 7, resp.a7 & mask);
> > +}
> > +
> > +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> > +                         struct ffa_address_range *range, uint32_t range_count,
> > +                         unsigned int start_page_idx,
> > +                         unsigned int *last_page_idx)
> > +{
> > +    unsigned int pg_idx = start_page_idx;
> > +    unsigned long gfn;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    p2m_type_t t;
> > +    uint64_t addr;
> > +
> > +    for ( n = 0; n < range_count; n++ ) {
> > +        for ( m = 0; m < range[n].page_count; m++ ) {
> > +            if ( pg_idx >= shm->page_count )
> > +                return FFA_RET_INVALID_PARAMETERS;
> > +
> > +            addr = read_atomic(&range[n].address);
> > +            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
> > +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
> > +            if ( !shm->pages[pg_idx] )
> > +                return FFA_RET_DENIED;
> > +            pg_idx++;
> > +            /* Only normal RAM for now */
> > +            if ( t != p2m_ram_rw )
> > +                return FFA_RET_DENIED;
> > +        }
> > +    }
> > +
> > +    *last_page_idx = pg_idx;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static void put_shm_pages(struct ffa_shm_mem *shm)
> > +{
> > +    unsigned int n;
> > +
> > +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> > +    {
> > +        if ( shm->pages[n] ) {
> Looks like this check is redundant, you already checking the same in the
> `if` block.

Well spotted, I'll fix.
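
Since the loop condition already stops at the first NULL entry, the fixed
loop can drop the inner check entirely. A standalone sketch of the shape it
might take (pages[], fake_put_page() and put_count are stand-ins for
shm->pages, put_page() and the page refcount, made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Counts fake_put_page() calls, standing in for dropping page refs. */
static unsigned int put_count;

static void fake_put_page(void *pg)
{
    (void)pg;
    put_count++;
}

/* Put pages up to the count or the first NULL entry, whichever comes
 * first; the redundant inner NULL check from the original is gone. */
static void put_pages(void **pages, unsigned int page_count)
{
    unsigned int n;

    for ( n = 0; n < page_count && pages[n]; n++ )
    {
        fake_put_page(pages[n]);
        pages[n] = NULL;
    }
}
```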

> 
> > +            put_page(shm->pages[n]);
> > +            shm->pages[n] = NULL;
> > +        }
> > +    }
> > +}
> > +
> > +static void init_range(struct ffa_address_range *addr_range,
> > +                       paddr_t pa)
> > +{
> > +    memset(addr_range, 0, sizeof(*addr_range));
> > +    addr_range->address = pa;
> > +    addr_range->page_count = 1;
> > +}
> > +
> > +static int share_shm(struct ffa_shm_mem *shm)
> > +{
> > +    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
> > +    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
> > +    struct ffa_mem_access *mem_access_array;
> > +    struct ffa_mem_region *region_descr;
> > +    struct ffa_address_range *addr_range;
> > +    paddr_t pa;
> > +    paddr_t last_pa;
> > +    unsigned int n;
> > +    uint32_t frag_len;
> > +    uint32_t tot_len;
> > +    int ret;
> > +    unsigned int range_count;
> > +    unsigned int range_base;
> > +    bool first;
> > +
> > +    memset(descr, 0, sizeof(*descr));
> > +    descr->sender_id = shm->sender_id;
> > +    descr->global_handle = shm->handle;
> > +    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
> > +    descr->mem_access_count = 1;
> > +    descr->mem_access_size = sizeof(*mem_access_array);
> > +    descr->mem_access_offs = sizeof(*descr);
> > +    mem_access_array = (void *)(descr + 1);
> > +    region_descr = (void *)(mem_access_array + 1);
> > +
> > +    memset(mem_access_array, 0, sizeof(*mem_access_array));
> > +    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
> > +    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
> > +    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
> > +
> > +    memset(region_descr, 0, sizeof(*region_descr));
> > +    region_descr->total_page_count = shm->page_count;
> > +
> > +    region_descr->address_range_count = 1;
> > +    last_pa = page_to_maddr(shm->pages[0]);
> > +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> > +    {
> > +        pa = page_to_maddr(shm->pages[n]);
> > +        if ( last_pa + PAGE_SIZE == pa )
> > +        {
> > +            continue;
> > +        }
> > +        region_descr->address_range_count++;
> > +    }
> > +
> > +    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
> > +              sizeof(*region_descr) +
> > +              region_descr->address_range_count * sizeof(*addr_range);
> > +
> > +    addr_range = region_descr->address_range_array;
> > +    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
> > +    last_pa = page_to_maddr(shm->pages[0]);
> > +    init_range(addr_range, last_pa);
> > +    first = true;
> > +    range_count = 1;
> > +    range_base = 0;
> > +    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
> > +    {
> > +        pa = page_to_maddr(shm->pages[n]);
> > +        if ( last_pa + PAGE_SIZE == pa )
> > +        {
> > +            addr_range->page_count++;
> > +            continue;
> > +        }
> > +
> > +        if (frag_len == max_frag_len) {
> coding style: if ( ... )
>            {
> 
> > +            if (first)
> 
> coding style: if ( ... )
> 
> > +            {
> > +                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> > +                first = false;
> > +            }
> > +            else
> > +            {
> > +                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> > +            }
> > +            if (ret <= 0)
> > +                return ret;
> > +            range_base = range_count;
> > +            range_count = 0;
> > +            frag_len = sizeof(*addr_range);
> > +            addr_range = ffa_tx;
> > +        } else {
> > +            frag_len += sizeof(*addr_range);
> > +            addr_range++;
> > +        }
> > +        init_range(addr_range, pa);
> > +        range_count++;
> > +    }
> > +
> > +    if (first)
> > +        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
> > +    else
> > +        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
> > +}
> > +
> > +static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
> > +                                struct ffa_mem_transaction_x *trans)
> > +{
> > +    uint16_t mem_reg_attr;
> > +    uint32_t flags;
> > +    uint32_t count;
> > +    uint32_t offs;
> > +    uint32_t size;
> > +
> > +    if (ffa_vers >= FFA_VERSION_1_1) {
> coding style: if ( ... )
>           {
> > +        struct ffa_mem_transaction_1_1 *descr;
> > +
> > +        if (blen < sizeof(*descr))
> coding style: if ( ... )
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +
> > +        descr = buf;
> > +        trans->sender_id = read_atomic(&descr->sender_id);
> > +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> > +        flags = read_atomic(&descr->flags);
> > +        trans->global_handle = read_atomic(&descr->global_handle);
> > +        trans->tag = read_atomic(&descr->tag);
> > +
> > +        count = read_atomic(&descr->mem_access_count);
> > +        size = read_atomic(&descr->mem_access_size);
> > +        offs = read_atomic(&descr->mem_access_offs);
> > +    } else {
> coding style:
>  }
>  else
>  {
> > +        struct ffa_mem_transaction_1_0 *descr;
> > +
> > +        if (blen < sizeof(*descr))
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +
> > +        descr = buf;
> > +        trans->sender_id = read_atomic(&descr->sender_id);
> > +        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
> > +        flags = read_atomic(&descr->flags);
> > +        trans->global_handle = read_atomic(&descr->global_handle);
> > +        trans->tag = read_atomic(&descr->tag);
> > +
> > +        count = read_atomic(&descr->mem_access_count);
> > +        size = sizeof(struct ffa_mem_access);
> > +        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
> > +    }
> > +
> > +    if (mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
> > +        count > UINT8_MAX || offs > UINT16_MAX)
> coding style: if ( ... )
> 
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    /* Check that the endpoint memory access descriptor array fits */
> > +    if (size * count + offs > blen)
> the same
> 
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    trans->mem_reg_attr = mem_reg_attr;
> > +    trans->flags = flags;
> > +    trans->mem_access_size = size;
> > +    trans->mem_access_count = count;
> > +    trans->mem_access_offs = offs;
> > +    return 0;
> > +}
> > +
> > +static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
> > +                              unsigned int frag_len)
> > +{
> > +    struct domain *d = current->domain;
> > +    unsigned int o = offs;
> > +    unsigned int l;
> > +    int ret;
> > +
> > +    if (frag_len < o)
> the same
> 
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    /* Fill up the first struct ffa_address_range */
> > +    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
> > +    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
> > +    s->range_offset += l;
> > +    o += l;
> > +    if (s->range_offset != sizeof(s->range))
> > +        goto out;
> > +    s->range_offset = 0;
> > +
> > +    while (true) {
> the same
> 
> > +        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
> > +                            &s->current_page_idx);
> > +        if (ret)
> > +            return ret;
> > +        if (s->range_count == 1)
> > +            return 0;
> > +        s->range_count--;
> > +        if (frag_len - o < sizeof(s->range))
> > +            break;
> > +        memcpy(&s->range, s->buf + o, sizeof(s->range));
> > +        o += sizeof(s->range);
> > +    }
> > +
> > +    /* Collect any remaining bytes for the next struct ffa_address_range */
> > +    s->range_offset = frag_len - o;
> > +    memcpy(&s->range, s->buf + o, frag_len - o);
> > +out:
> > +    s->frag_offset += frag_len;
> > +    return s->frag_offset;
> > +}
> > +
> > +static void handle_mem_share(struct cpu_user_regs *regs)
> > +{
> > +    uint32_t tot_len = get_user_reg(regs, 1);
> > +    uint32_t frag_len = get_user_reg(regs, 2);
> > +    uint64_t addr = get_user_reg(regs, 3);
> > +    uint32_t page_count = get_user_reg(regs, 4);
> > +    struct ffa_mem_transaction_x trans;
> > +    struct ffa_mem_access *mem_access;
> > +    struct ffa_mem_region *region_descr;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    struct ffa_shm_mem *shm = NULL;
> > +    unsigned int last_page_idx = 0;
> > +    uint32_t range_count;
> > +    uint32_t region_offs;
> > +    int ret = FFA_RET_DENIED;
> > +    uint32_t handle_hi = 0;
> > +    uint32_t handle_lo = 0;
> > +
> > +    /*
> > +     * We're only accepting memory transaction descriptors via the rx/tx
> > +     * buffer.
> > +     */
> > +    if ( addr ) {
> coding style
> 
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    /* Check that fragment length doesn't exceed total length */
> > +    if (frag_len > tot_len) {
> coding style
> 
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out_unlock;
> > +    }
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +
> > +    if ( frag_len > ctx->page_count * PAGE_SIZE )
> > +        goto out_unlock;
> > +
> > +    if ( !ffa_page_count ) {
> > +        ret = FFA_RET_NO_MEMORY;
> > +        goto out_unlock;
> > +    }
> > +
> > +    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
> > +    if (ret)
> coding style
> 
> > +        goto out_unlock;
> > +
> > +    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out;
> > +    }
> > +
> > +    /* Only supports sharing it with one SP for now */
> > +    if ( trans.mem_access_count != 1 )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    if ( trans.sender_id != get_vm_id(d) )
> > +    {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out_unlock;
> > +    }
> > +
> > +    /* Check that it fits in the supplied data */
> > +    if ( trans.mem_access_offs + trans.mem_access_size > frag_len)
> > +        goto out_unlock;
> > +
> > +    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
> > +    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
> > +    {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    region_offs = read_atomic(&mem_access->region_offs);
> > +    if (sizeof(*region_descr) + region_offs > frag_len) {
> > +        ret = FFA_RET_NOT_SUPPORTED;
> > +        goto out_unlock;
> > +    }
> > +
> > +    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
> > +    range_count = read_atomic(&region_descr->address_range_count);
> > +    page_count = read_atomic(&region_descr->total_page_count);
> > +
> > +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> > +    if ( !shm )
> > +    {
> > +        ret = FFA_RET_NO_MEMORY;
> > +        goto out;
> > +    }
> > +    shm->sender_id = trans.sender_id;
> > +    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
> > +    shm->page_count = page_count;
> > +
> > +    if (frag_len != tot_len) {
> > +        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
> > +
> > +        if (!s) {
> > +            ret = FFA_RET_NO_MEMORY;
> > +            goto out;
> > +        }
> > +        s->shm = shm;
> > +        s->range_count = range_count;
> > +        s->buf = ctx->tx;
> > +        s->buf_size = ffa_page_count * PAGE_SIZE;
> > +        ret = add_mem_share_frag(s, sizeof(*region_descr)  + region_offs,
> > +                                 frag_len);
> > +        if (ret <= 0) {
> > +            xfree(s);
> > +            if (ret < 0)
> > +                goto out;
> > +        } else {
> > +            shm->handle = next_handle++;
> > +            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> > +            spin_lock(&ffa_mem_list_lock);
> > +            list_add_tail(&s->list, &ffa_frag_list);
> > +            spin_unlock(&ffa_mem_list_lock);
> > +        }
> > +        goto out_unlock;
> > +    }
> > +
> > +    /*
> > +     * Check that the Composite memory region descriptor fits.
> > +     */
> > +    if ( sizeof(*region_descr) + region_offs +
> > +         range_count * sizeof(struct ffa_address_range) > frag_len) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
> > +                        0, &last_page_idx);
> > +    if ( ret )
> > +        goto out;
> > +    if (last_page_idx != shm->page_count) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    /* Note that share_shm() uses our tx buffer */
> > +    ret = share_shm(shm);
> > +    if ( ret )
> > +        goto out;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_add_tail(&shm->list, &ffa_mem_list);
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
> > +
> > +out:
> > +    if ( ret && shm )
> > +    {
> > +        put_shm_pages(shm);
> > +        xfree(shm);
> > +    }
> > +out_unlock:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    if ( ret > 0 )
> > +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
> > +    else if ( ret == 0)
> > +            set_regs_success(regs, handle_lo, handle_hi);
> > +    else
> > +            set_regs_error(regs, ret);
> > +}
> > +
> > +static struct mem_frag_state *find_frag_state(uint64_t handle)
> > +{
> > +    struct mem_frag_state *s;
> > +
> > +    list_for_each_entry(s, &ffa_frag_list, list)
> > +        if ( s->shm->handle == handle)
> > +            return s;
> > +
> > +    return NULL;
> > +}
> > +
> > +static void handle_mem_frag_tx(struct cpu_user_regs *regs)
> > +{
> > +    uint32_t frag_len = get_user_reg(regs, 3);
> > +    uint32_t handle_lo = get_user_reg(regs, 1);
> > +    uint32_t handle_hi = get_user_reg(regs, 2);
> > +    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
> > +    struct mem_frag_state *s;
> > +    uint16_t sender_id = 0;
> > +    int ret;
> > +
> > +    spin_lock(&ffa_buffer_lock);
> > +    s = find_frag_state(handle);
> > +    if (!s) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +    sender_id = s->shm->sender_id;
> > +
> > +    if (frag_len > s->buf_size) {
> > +        ret = FFA_RET_INVALID_PARAMETERS;
> > +        goto out;
> > +    }
> > +
> > +    ret = add_mem_share_frag(s, 0, frag_len);
> > +    if (ret == 0) {
> > +        /* Note that share_shm() uses our tx buffer */
> > +        ret = share_shm(s->shm);
> > +        if (ret == 0) {
> > +            spin_lock(&ffa_mem_list_lock);
> > +            list_add_tail(&s->shm->list, &ffa_mem_list);
> > +            spin_unlock(&ffa_mem_list_lock);
> > +        } else {
> > +            put_shm_pages(s->shm);
> > +            xfree(s->shm);
> > +        }
> > +        spin_lock(&ffa_mem_list_lock);
> > +        list_del(&s->list);
> > +        spin_unlock(&ffa_mem_list_lock);
> > +        xfree(s);
> > +    } else if (ret < 0) {
> > +        put_shm_pages(s->shm);
> > +        xfree(s->shm);
> > +        spin_lock(&ffa_mem_list_lock);
> > +        list_del(&s->list);
> > +        spin_unlock(&ffa_mem_list_lock);
> > +        xfree(s);
> > +    }
> > +out:
> > +    spin_unlock(&ffa_buffer_lock);
> > +
> > +    if ( ret > 0 )
> > +            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
> > +    else if ( ret == 0)
> > +            set_regs_success(regs, handle_lo, handle_hi);
> > +    else
> > +            set_regs_error(regs, ret);
> > +}
> > +
> > +static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
> > +{
> > +    struct ffa_shm_mem *shm;
> > +    uint32_t handle_hi;
> > +    uint32_t handle_lo;
> > +    int ret;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_for_each_entry(shm, &ffa_mem_list, list) {
> > +        if ( shm->handle == handle )
> > +            goto found_it;
> > +    }
> > +    shm = NULL;
> > +found_it:
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    if ( !shm )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    reg_pair_from_64(&handle_hi, &handle_lo, handle);
> > +    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
> > +    if ( ret )
> > +        return ret;
> > +
> > +    spin_lock(&ffa_mem_list_lock);
> > +    list_del(&shm->list);
> > +    spin_unlock(&ffa_mem_list_lock);
> > +
> > +    put_shm_pages(shm);
> > +    xfree(shm);
> > +
> > +    return ret;
> > +}
> > +
> > +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    uint32_t count;
> > +    uint32_t e;
> > +
> > +    if ( !ctx )
> > +        return false;
> > +
> > +    switch ( fid )
> > +    {
> > +    case FFA_VERSION:
> > +        handle_version(regs);
> > +        return true;
> > +    case FFA_ID_GET:
> > +        set_regs_success(regs, get_vm_id(d), 0);
> > +        return true;
> > +    case FFA_RXTX_MAP_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_RXTX_MAP_64:
> > +#endif
> > +        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
> > +                            get_user_reg(regs, 3));
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_RXTX_UNMAP:
> > +        e = handle_rxtx_unmap();
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_PARTITION_INFO_GET:
> > +        e = handle_partition_info_get(get_user_reg(regs, 1),
> > +                                      get_user_reg(regs, 2),
> > +                                      get_user_reg(regs, 3),
> > +                                      get_user_reg(regs, 4),
> > +                                      get_user_reg(regs, 5), &count);
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, count, 0);
> > +        return true;
> > +    case FFA_RX_RELEASE:
> > +        e = handle_rx_release();
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_MSG_SEND_DIRECT_REQ_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_MSG_SEND_DIRECT_REQ_64:
> > +#endif
> > +        handle_msg_send_direct_req(regs, fid);
> > +        return true;
> > +    case FFA_MEM_SHARE_32:
> > +#ifdef CONFIG_ARM_64
> > +    case FFA_MEM_SHARE_64:
> > +#endif
> > +        handle_mem_share(regs);
> > +        return true;
> > +    case FFA_MEM_RECLAIM:
> > +        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
> > +                                              get_user_reg(regs, 1)),
> > +                               get_user_reg(regs, 3));
> > +        if ( e )
> > +            set_regs_error(regs, e);
> > +        else
> > +            set_regs_success(regs, 0, 0);
> > +        return true;
> > +    case FFA_MEM_FRAG_TX:
> > +        handle_mem_frag_tx(regs);
> > +        return true;
> > +
> > +    default:
> > +        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
> > +        return false;
> > +    }
> > +}
> > +
> > +int ffa_domain_init(struct domain *d)
> > +{
> > +    struct ffa_ctx *ctx;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    unsigned int c_pos;
> > +    int32_t res;
> > +
> > +    if ( !ffa_version )
> > +        return 0;
> > +
> > +    ctx = xzalloc(struct ffa_ctx);
> > +    if ( !ctx )
> > +        return -ENOMEM;
> > +
> > +    for ( n = 0; n < subsr_vm_created_count; n++ ) {
> > +        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
> > +                                     FFA_MSG_SEND_VM_CREATED);
> > +        if ( res ) {
> > +            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
> > +                   get_vm_id(d), subsr_vm_created[n], res);
> > +            c_pos = n;
> > +            goto err;
> > +        }
> > +    }
> > +
> > +    d->arch.ffa = ctx;
> > +
> > +    return 0;
> > +
> > +err:
> > +    /* Undo any already sent vm created messages */
> > +    for ( n = 0; n < c_pos; n++ )
> > +        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
> > +            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
> > +                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
> > +                                       FFA_MSG_SEND_VM_DESTROYED);
> > +    return -ENOMEM;
> > +}
> > +
> > +int ffa_relinquish_resources(struct domain *d)
> > +{
> > +    struct ffa_ctx *ctx = d->arch.ffa;
> > +    unsigned int n;
> > +    int32_t res;
> > +
> > +    if ( !ctx )
> > +        return 0;
> > +
> > +    for ( n = 0; n < subsr_vm_destroyed_count; n++ ) {
> > +        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
> > +                                     FFA_MSG_SEND_VM_DESTROYED);
> > +
> > +        if ( res )
> > +            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
> > +                   get_vm_id(d), subsr_vm_destroyed[n], res);
> > +    }
> > +
> > +    XFREE(d->arch.ffa);
> > +
> > +    return 0;
> > +}
> > +
> > +static bool __init init_subscribers(void)
> > +{
> > +    struct ffa_partition_info_1_1 *fpi;
> > +    bool ret = false;
> > +    uint32_t count;
> > +    uint32_t e;
> > +    uint32_t n;
> > +    uint32_t c_pos;
> > +    uint32_t d_pos;
> > +
> > +    if ( ffa_version < FFA_VERSION_1_1 )
> > +        return true;
> > +
> > +    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
> > +    ffa_rx_release();
> > +    if ( e ) {
> > +        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
> > +        goto out;
> > +    }
> > +
> > +    fpi = ffa_rx;
> > +    subsr_vm_created_count = 0;
> > +    subsr_vm_destroyed_count = 0;
> > +    for ( n = 0; n < count; n++ ) {
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> > +            subsr_vm_created_count++;
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> > +            subsr_vm_destroyed_count++;
> > +    }
> > +
> > +    if ( subsr_vm_created_count )
> > +        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
> > +    if ( subsr_vm_destroyed_count )
> > +        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
> > +    if ( (subsr_vm_created_count && !subsr_vm_created) ||
> > +        (subsr_vm_destroyed_count && !subsr_vm_destroyed) ) {
> > +        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
> > +        subsr_vm_created_count = 0;
> > +        subsr_vm_destroyed_count = 0;
> > +        XFREE(subsr_vm_created);
> > +        XFREE(subsr_vm_destroyed);
> > +        goto out;
> > +    }
> > +
> > +    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ ) {
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED)
> > +            subsr_vm_created[c_pos++] = fpi[n].id;
> > +        if (fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED)
> > +            subsr_vm_destroyed[d_pos++] = fpi[n].id;
> > +    }
> > +
> > +    ret = true;
> > +out:
> > +    ffa_rx_release();
> > +    return ret;
> > +}
> > +
> > +static int __init ffa_init(void)
> > +{
> > +    uint32_t vers;
> > +    uint32_t e;
> > +    unsigned int major_vers;
> > +    unsigned int minor_vers;
> > +
> > +    /*
> > +     * psci_init_smccc() updates this value with what's reported by EL-3
> > +     * or secure world.
> > +     */
> > +    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
> > +    {
> > +        printk(XENLOG_ERR
> > +               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
> > +               smccc_ver, ARM_SMCCC_VERSION_1_2);
> > +        return 0;
> > +    }
> > +
> > +    if ( !ffa_get_version(&vers) )
> > +        return 0;
> > +
> > +    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
> > +    {
> > +        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
> > +        return 0;
> > +    }
> > +
> > +    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
> > +    minor_vers = vers & FFA_VERSION_MINOR_MASK;
> > +    printk(XENLOG_ERR "ARM FF-A Mediator version %u.%u\n",
> > +           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
> > +    printk(XENLOG_ERR "ARM FF-A Firmware version %u.%u\n",
> > +           major_vers, minor_vers);
> > +
> > +    ffa_rx = alloc_xenheap_pages(0, 0);
> > +    if ( !ffa_rx )
> > +        return 0;
> > +
> > +    ffa_tx = alloc_xenheap_pages(0, 0);
> > +    if ( !ffa_tx )
> > +        goto err_free_ffa_rx;
> > +
> > +    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
> > +    if ( e )
> > +    {
> > +        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
> > +        goto err_free_ffa_tx;
> > +    }
> > +    ffa_page_count = 1;
> > +    ffa_version = vers;
> > +
> > +    if ( !init_subscribers() )
> > +        goto err_free_ffa_tx;
> > +
> > +    return 0;
> > +
> > +err_free_ffa_tx:
> > +    free_xenheap_pages(ffa_tx, 0);
> > +    ffa_tx = NULL;
> > +err_free_ffa_rx:
> > +    free_xenheap_pages(ffa_rx, 0);
> > +    ffa_rx = NULL;
> > +    ffa_page_count = 0;
> > +    ffa_version = 0;
> > +    XFREE(subsr_vm_created);
> > +    subsr_vm_created_count = 0;
> > +    XFREE(subsr_vm_destroyed);
> > +    subsr_vm_destroyed_count = 0;
> > +    return 0;
> > +}
> > +
> > +__initcall(ffa_init);
> > diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> > index ed63c2b6f91f..b3dee269bced 100644
> > --- a/xen/arch/arm/include/asm/domain.h
> > +++ b/xen/arch/arm/include/asm/domain.h
> > @@ -103,6 +103,10 @@ struct arch_domain
> >      void *tee;
> >  #endif
> >  
> > +#ifdef CONFIG_FFA
> > +    void *ffa;
> > +#endif
> > +
> >      bool directmap;
> >  }  __cacheline_aligned;
> >  
> > diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
> > new file mode 100644
> > index 000000000000..1c6ce6421294
> > --- /dev/null
> > +++ b/xen/arch/arm/include/asm/ffa.h
> > @@ -0,0 +1,71 @@
> > +/*
> > + * xen/arch/arm/include/asm/ffa.h
> > + *
> > + * Arm Firmware Framework for ARMv8-A (FFA) mediator
> > + *
> > + * Copyright (C) 2021  Linaro Limited
> > + *
> > + * Permission is hereby granted, free of charge, to any person
> > + * obtaining a copy of this software and associated documentation
> > + * files (the "Software"), to deal in the Software without restriction,
> > + * including without limitation the rights to use, copy, modify, merge,
> > + * publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so,
> > + * subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be
> > + * included in all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> > + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> > + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> > + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> > + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#ifndef __ASM_ARM_FFA_H__
> > +#define __ASM_ARM_FFA_H__
> > +
> > +#include <xen/const.h>
> > +
> > +#include <asm/smccc.h>
> > +#include <asm/types.h>
> > +
> > +#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
> > +#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
> > +
> > +static inline bool is_ffa_fid(uint32_t fid)
> > +{
> > +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> > +
> > +    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
> > +}
> > +
> > +#ifdef CONFIG_FFA
> > +#define FFA_NR_FUNCS    11
> > +
> > +bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
> > +int ffa_domain_init(struct domain *d);
> > +int ffa_relinquish_resources(struct domain *d);
> > +#else
> > +#define FFA_NR_FUNCS    0
> > +
> > +static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
> > +{
> > +    return false;
> > +}
> > +
> > +static inline int ffa_domain_init(struct domain *d)
> > +{
> > +    return 0;
> > +}
> > +
> > +static inline int ffa_relinquish_resources(struct domain *d)
> > +{
> > +    return 0;
> > +}
> > +#endif
> > +
> > +#endif /*__ASM_ARM_FFA_H__*/
> > diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> > index 6f90c08a6304..34586025eff8 100644
> > --- a/xen/arch/arm/vsmc.c
> > +++ b/xen/arch/arm/vsmc.c
> > @@ -20,6 +20,7 @@
> >  #include <public/arch-arm/smccc.h>
> >  #include <asm/cpuerrata.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/ffa.h>
> >  #include <asm/monitor.h>
> >  #include <asm/regs.h>
> >  #include <asm/smccc.h>
> > @@ -32,7 +33,7 @@
> >  #define XEN_SMCCC_FUNCTION_COUNT 3
> >  
> >  /* Number of functions currently supported by Standard Service Service Calls. */
> > -#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
> > +#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
> >  
> >  static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
> >  {
> > @@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
> >      return do_vpsci_0_1_call(regs, fid);
> >  }
> >  
> > +static bool is_psci_fid(uint32_t fid)
> > +{
> > +    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
> > +
> > +    return fn >= 0 && fn <= 0x1fU;
> > +}
> > +
> >  /* PSCI 0.2 interface and other Standard Secure Calls */
> >  static bool handle_sssc(struct cpu_user_regs *regs)
> >  {
> >      uint32_t fid = (uint32_t)get_user_reg(regs, 0);
> >  
> > -    if ( do_vpsci_0_2_call(regs, fid) )
> > -        return true;
> > +    if ( is_psci_fid(fid) )
> > +        return do_vpsci_0_2_call(regs, fid);
> > +
> > +    if ( is_ffa_fid(fid) )
> > +        return ffa_handle_call(regs, fid);
> >  
> >      switch ( fid )
> >      {
> 

Thanks for the comments, I'll take care of the style issues.

Cheers,
Jens


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 00:04:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 00:04:21 +0000
Date: Thu, 16 Jun 2022 17:03:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr <olekstysh@gmail.com>
cc: Juergen Gross <jgross@suse.com>, hch@infradead.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    viresh.kumar@linaro.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>
References: <20220616053715.3166-1-jgross@suse.com> <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com> <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com> <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)

On Thu, 16 Jun 2022, Oleksandr wrote:
> On 16.06.22 11:56, Juergen Gross wrote:
> > On 16.06.22 09:31, Oleksandr wrote:
> > > 
> > > On 16.06.22 08:37, Juergen Gross wrote:
> > > 
> > > 
> > > Hello Juergen
> > > 
> > > > Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> > > > Xen grant mappings") introduced a new requirement for using virtio
> > > > devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> > > > feature.
> > > > 
> > > > This is an undue requirement for non-PV guests, as those can be operated
> > > > with existing backends without any problem, as long as those backends
> > > > are running in dom0.
> > > > 
> > > > Per default allow virtio devices without grant support for non-PV
> > > > guests.
> > > > 
> > > > Add a new config item to always force use of grants for virtio.
> > > > 
> > > > Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> > > > Xen grant mappings")
> > > > Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > > ---
> > > > V2:
> > > > - remove command line parameter (Christoph Hellwig)
> > > > ---
> > > >   drivers/xen/Kconfig | 9 +++++++++
> > > >   include/xen/xen.h   | 2 +-
> > > >   2 files changed, 10 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> > > > index bfd5f4f706bc..a65bd92121a5 100644
> > > > --- a/drivers/xen/Kconfig
> > > > +++ b/drivers/xen/Kconfig
> > > > @@ -355,4 +355,13 @@ config XEN_VIRTIO
> > > >         If in doubt, say n.
> > > > +config XEN_VIRTIO_FORCE_GRANT
> > > > +    bool "Require Xen virtio support to use grants"
> > > > +    depends on XEN_VIRTIO
> > > > +    help
> > > > +      Require virtio for Xen guests to use grant mappings.
> > > > +      This will avoid the need to give the backend the right to map all
> > > > +      of the guest memory. This will need support on the backend side
> > > > +      (e.g. qemu or kernel, depending on the virtio device types used).
> > > > +
> > > >   endmenu
> > > > diff --git a/include/xen/xen.h b/include/xen/xen.h
> > > > index 0780a81e140d..4d4188f20337 100644
> > > > --- a/include/xen/xen.h
> > > > +++ b/include/xen/xen.h
> > > > @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
> > > >   static inline void xen_set_restricted_virtio_memory_access(void)
> > > >   {
> > > > -    if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> > > > +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
> > > >           platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> > > 
> > > 
> > > Looks like, the flag will be *always* set for paravirtualized guests even
> > > if CONFIG_XEN_VIRTIO disabled.
> > > 
> > > Maybe we should clarify the check?
> > > 
> > > 
> > > if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) ||
> > > IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain())
> > > 
> > >      platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> > > 
> > 
> > Yes, we should. I had the function in grant-dma-ops.c in V1, and could drop
> > the
> > CONFIG_XEN_VIRTIO dependency for that reason.
> > 
> > I'll wait for more comments before sending V3, though.
> 
> ok
> 
> 
> 
> Please note, I am happy with the current patch and it works in my Arm64-based
> environment.
> 
> Just one point to consider.
> 
> 
> As was already mentioned earlier in this thread, the
> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS flag (formerly
> arch_has_restricted_virtio_memory_access()) is not per-device but applies to
> the whole guest. When set, it makes the VIRTIO_F_ACCESS_PLATFORM and
> VIRTIO_F_VERSION_1 features mandatory for *all* virtio devices in the guest.
> 
> The question is: do we want/need to lift this restriction for some devices
> (whose backends are trusted and so may access all guest memory) at the same
> time? Copying Viresh's original question here for convenience:
> 
> "I understand from your email that the backends need to offer the
> VIRTIO_F_ACCESS_PLATFORM flag now, but should this requirement be a bit soft?
> I mean shouldn't we allow both types of backends to run with the same kernel,
> ones that offer this feature and others that don't? The ones that don't offer
> the feature, should continue to work like they used to, i.e. without the
> restricted memory access feature."
> 
> Technically, this is possible with HVM.
> 
> Let's imagine the following situation:
> 
> - Dom0 with backends which do not offer the required features for some
>   reason. Running in Dom0 (a trusted domain), these backends are not obliged
>   to offer them: they can offer the required features and support grant
>   mappings for virtio, but this is not strictly necessary, since they are
>   considered trusted and are therefore allowed to access all guest memory.
> 
> - DomD with backends which do offer them and require grant mappings for
>   virtio.
> 
> If this is a valid and correct use-case, then we indeed need the ability to
> control that per device; otherwise, what is written below doesn't really
> matter.
> 
> I am wondering whether we can avoid using the global
> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS for Xen guests at all. I assume that
> all we need to do (when CONFIG_XEN_VIRTIO is enabled) is to make sure that
> *only* Xen grant DMA devices in HVM guests, and *all* devices in PV guests,
> offer the required flags.
> 
> Below is a diff showing how this could be done without extra options (not
> completely tested), although I realize it might look hackish and a lot more
> effort would be needed to get it right. In my Arm64-based environment it
> works: I tried running two backends; the first offered the required features
> and the corresponding device node had the required property, while the second
> did not and there was no property.
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 1f9c3ba..07eb69f 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
>         if (!xen_domain())
>                 return 0;
> 
> -       xen_set_restricted_virtio_memory_access();
> -
>         if (!acpi_disabled)
>                 xen_acpi_guest_init();
>         else
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index 8b71b1d..517a9d8 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -195,8 +195,6 @@ static void __init xen_hvm_guest_init(void)
>         if (xen_pv_domain())
>                 return;
> 
> -       xen_set_restricted_virtio_memory_access();
> -
>         init_hvm_pv_info();
> 
>         reserve_shared_info();
> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
> index 30d24fe..ca85d14 100644
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -108,8 +108,6 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
> 
>  static void __init xen_pv_init_platform(void)
>  {
> -       xen_set_restricted_virtio_memory_access();
> -
>         populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
> 
>         set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
> diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
> index 371e16b..875690a 100644
> --- a/drivers/virtio/virtio.c
> +++ b/drivers/virtio/virtio.c
> @@ -167,6 +167,11 @@ void virtio_add_status(struct virtio_device *dev, unsigned int status)
>  }
>  EXPORT_SYMBOL_GPL(virtio_add_status);
> 
> +int __weak device_has_restricted_virtio_memory_access(struct device *dev)
> +{
> +       return platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> +}
> +
>  /* Do some validation, then set FEATURES_OK */
>  static int virtio_features_ok(struct virtio_device *dev)
>  {
> @@ -174,7 +179,7 @@ static int virtio_features_ok(struct virtio_device *dev)
> 
>         might_sleep();
> 
> -       if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
> +       if (device_has_restricted_virtio_memory_access(dev->dev.parent)) {
>                 if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
>                         dev_warn(&dev->dev,
>                                  "device must provide VIRTIO_F_VERSION_1\n");
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index 6586152..da938f6 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -11,6 +11,7 @@
>  #include <linux/dma-map-ops.h>
>  #include <linux/of.h>
>  #include <linux/pfn.h>
> +#include <linux/virtio_config.h>
>  #include <linux/xarray.h>
>  #include <xen/xen.h>
>  #include <xen/grant_table.h>
> @@ -286,6 +287,11 @@ bool xen_is_grant_dma_device(struct device *dev)
>         return has_iommu;
>  }
> 
> +int device_has_restricted_virtio_memory_access(struct device *dev)
> +{
> +       return (xen_pv_domain() || xen_is_grant_dma_device(dev));
> +}
> +
>  void xen_grant_setup_dma_ops(struct device *dev)
>  {
>         struct xen_grant_dma_data *data;
> diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
> index 7949829..b3a455b 100644
> --- a/include/linux/virtio_config.h
> +++ b/include/linux/virtio_config.h
> @@ -559,4 +559,6 @@ static inline void virtio_cwrite64(struct virtio_device *vdev,
> _r;                                                     \
>         })
> 
> +int device_has_restricted_virtio_memory_access(struct device *dev);
> +
>  #endif /* _LINUX_VIRTIO_CONFIG_H */
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 0780a81..a99bab8 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>  extern u64 xen_saved_max_mem_size;
>  #endif
> 
> -#include <linux/platform-feature.h>
> -
> -static inline void xen_set_restricted_virtio_memory_access(void)
> -{
> -       if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> -               platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
> -}
> -
>  #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>  int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> 
> 
> I think that when x86 HVM gains the required support (via ACPI or other
> means) to communicate x86's alternative to "xen,grant-dma", then
> xen_is_grant_dma_device() will simply be extended to handle that.

Yeah I like this approach:

- on ARM it bases the setting of PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS
  on "xen,grant-dma", as it should be
- it goes beyond my suggestion and it is capable of doing that
  per-device, which is awesome
- on x86, it always enables it for PV guests, as they have no other choice

On top of this, we could add a command line or Kconfig option to
force-enable it as well, for the benefit of x86/HVM, but I would make
that option x86-specific.


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 00:31:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 00:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350942.577388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1ztK-0004Lc-Tr; Fri, 17 Jun 2022 00:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350942.577388; Fri, 17 Jun 2022 00:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o1ztK-0004LV-R4; Fri, 17 Jun 2022 00:31:14 +0000
Received: by outflank-mailman (input) for mailman id 350942;
 Fri, 17 Jun 2022 00:31:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ztJ-0004LL-AC; Fri, 17 Jun 2022 00:31:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ztJ-0007Im-6g; Fri, 17 Jun 2022 00:31:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ztI-0001Mi-Tr; Fri, 17 Jun 2022 00:31:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o1ztI-0007rI-Oq; Fri, 17 Jun 2022 00:31:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JcNX0tN6/mTaUXy/ooKRqWqF0cKuO0NIP5c9WeY/qNo=; b=h7XKgZTSGTgIdRs0IuiM3tlXpC
	fFVYspPDTkpm/MEsfn7uCqVahQ/JUKcXAYsvfIMe4OWJf0KXZU4r9YZ7Exs4EBTpOgFlBq9p4j5XR
	PzLKWFaGnRMAnDHrxs9PZ3Y9g0vv5hF/nq1nV+4taehYUHe88bjzDrZ+lyWMZTGxv+L4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171200: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
X-Osstest-Versions-That:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 00:31:12 +0000

flight 171200 linux-5.4 real [real]
flight 171222 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171200/
http://logs.test-lab.xenproject.org/osstest/logs/171222/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 171173
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail REGR. vs. 171173

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171173
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 171173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171173
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
baseline version:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d

Last test of basis   171173  2022-06-15 01:11:00 Z    1 days
Testing same since   171200  2022-06-16 11:41:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Borislav Petkov <bp@suse.de>
  Florian Fainelli <f.fainelli@gmail.com>
  Gayatri Kammela <gayatri.kammela@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 352 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:25:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:25:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350961.577420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22bW-0005mb-In; Fri, 17 Jun 2022 03:25:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350961.577420; Fri, 17 Jun 2022 03:25:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22bW-0005mU-Dy; Fri, 17 Jun 2022 03:25:02 +0000
Received: by outflank-mailman (input) for mailman id 350961;
 Fri, 17 Jun 2022 03:25:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YAhX=WY=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o22bV-0005mO-1y
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 03:25:01 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 100b6efb-eded-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 05:24:58 +0200 (CEST)
Received: from AS8P250CA0018.EURP250.PROD.OUTLOOK.COM (2603:10a6:20b:330::23)
 by HE1PR08MB2939.eurprd08.prod.outlook.com (2603:10a6:7:33::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Fri, 17 Jun
 2022 03:24:51 +0000
Received: from AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:330:cafe::7a) by AS8P250CA0018.outlook.office365.com
 (2603:10a6:20b:330::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16 via Frontend
 Transport; Fri, 17 Jun 2022 03:24:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT038.mail.protection.outlook.com (10.152.17.118) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 03:24:49 +0000
Received: ("Tessian outbound 1766a3bff204:v120");
 Fri, 17 Jun 2022 03:24:48 +0000
Received: from d6487d84f8ad.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 72865495-5DA9-426F-B6A2-00F3046D3786.1; 
 Fri, 17 Jun 2022 03:24:42 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d6487d84f8ad.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 03:24:42 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5464.eurprd08.prod.outlook.com (2603:10a6:20b:10a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Fri, 17 Jun
 2022 03:24:39 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%7]) with mapi id 15.20.5353.015; Fri, 17 Jun 2022
 03:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 100b6efb-eded-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=nHKJV0E5fUB0OlVlv9AHd5ppMBau1uBX/UquojaSpaNIloTcNF6FlmSbPG+SrtZiCnxbExInwPnn2Paw/hyiDLzMn6AhiDD4dEMhQgzmm6X8mKfBxRoSHbR5ZBq9MQSzOkotIQsZEZ7V2qj90R9OZvP+rtNOhLdTza30tw1ZXOXvKuqSdHZ8CtEMdFl3tqkntnIYj/DCGXsXCar2v67G1lBCukfVbW8Q5xWA6RT+zsioE/u1lyPKXZXjN5hQd5VXAf0onXq/i5PhUcCj8phi6QaKgrXnPIcDHKAgCTVQaWEBiVZMgwUxdBFsSP4paqBggmkuT/o2Q9vE3BphDIjA8w==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8VREWxiJvUhFIjwf3qdTHKNTciRIEdi1fyoHyo3QY8o=;
 b=RSP9ysFr/VMS7hnK4HDKUwQUhhXBB/D8YawDlHh/9JOEPj20mMr9KWG+PgxCC3BoZURGV9XX+iEAfMTqGL5rz2aSY5vnjpVwxwrpgIEMTgdpuGR/WbE4tge+sSqXwrF0m2hFUjvsDlELG9swk9LvghDK0wgYG2Wx9JoIljDPhM68T/+1d+igIfbEoDTUJxnnZR1QH22PN/FwSK3jaDevRRjP0wMA2alRKE0bBTTS3Zw19ZpTxFtQEFDbIEm04Nb4Wte6dF0Gk/RdqlsKPhMKnKSCxDae88HoJfYSuHyK3lh7kq4lpHS7mEIaUHQuPHAEvywdz+sutTQoHoV0qH8OYw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8VREWxiJvUhFIjwf3qdTHKNTciRIEdi1fyoHyo3QY8o=;
 b=NEk5eXxoIoKI5ebEx+t9ipObQHcj5/ISUpBLJU1Z1tmftlxlkkiHDtigKgz6s1Yl/wwl+NzsXZ9hb9g2UXB0J86Jn6LbGBsFoMpot41zaWbZjKbqdwCytasMa9CrE3U4aatjumk1h6A87hQU3nB1fvNnYuLgOoi5jJJ7dSQic+0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wdg9Qnc6hC//Bpj+o7MngaTbsR0Z/Hh7fws5VW6QNk1dQd1uxgkOmAV5XNvoRy3Hg32f5KnvIqYBWZHmSn3iuxHt+XnhVcKZ7+ZwuvEutQWYGLDy1ag6Y0m5bV+/as4kZajnCZV/ktdUP7XYRReuAEo3rbDdtxGO+McIcPYr9JEHSh20ClPgmgbMryp9BKCZu+KY7McybGTukzmxN7Jbjehnijk+I23zJ+gVQOpRlRZWrQeLWPlh55VDC1m+ZKmQevRVa/v4uOdrL1kxwoCywHZ/fMiW2/Xg5PyABnyJpIxiDl6a/pKny7WLlEMrlr/6gZzyHEBr3H6OqYj7z+5hsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8VREWxiJvUhFIjwf3qdTHKNTciRIEdi1fyoHyo3QY8o=;
 b=ncR0StoTqXPR6ZPjVojCwnDCpaGpJ04fL+XDNuBz8DgC5Dn5mOPgZqG0yz5dUvMzfx83DQG3MXtH1zEfw8Id3tbkpIgnZiE2rOJtYMOdJ0qEd3mNhlCXtkFEKPEWRTkk7YJjkDMvkgSa8lwVsB+etpcSat0q3LtHytlNSDWKChmalAEFb2/ql+QTmyh3IKMEEtcOoUxitaniWV4u9KPq3jWjlI/T+HGxYGVRIiiJsheNKx1esv4MmTEhtTOqijvDq8Fjhdr507yrNDCJ0vRHLKQm4AaoTtnCjCgCBBmym90XHpKbrDqAaf+tlvxrUdqt/wsoQgfwY+l4P7QvbOcCBg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8VREWxiJvUhFIjwf3qdTHKNTciRIEdi1fyoHyo3QY8o=;
 b=NEk5eXxoIoKI5ebEx+t9ipObQHcj5/ISUpBLJU1Z1tmftlxlkkiHDtigKgz6s1Yl/wwl+NzsXZ9hb9g2UXB0J86Jn6LbGBsFoMpot41zaWbZjKbqdwCytasMa9CrE3U4aatjumk1h6A87hQU3nB1fvNnYuLgOoi5jJJ7dSQic+0=
From: Henry Wang <Henry.Wang@arm.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: RE: [PATCH v2 0/4] tools/xenstore: add some new features to the
 documentation
Thread-Topic: [PATCH v2 0/4] tools/xenstore: add some new features to the
 documentation
Thread-Index: AQHYcZr+xugQmCaBT0SxSahY0q90Lq1TDssw
Date: Fri, 17 Jun 2022 03:24:38 +0000
Message-ID:
 <AS8PR08MB799129FC63126C8435F5366A92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220527072427.20327-1-jgross@suse.com>
In-Reply-To: <20220527072427.20327-1-jgross@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9F95812FA4B93442989CD37FC642BC70.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 493d81d5-494c-4bfe-1501-08da5010ef52
x-ms-traffictypediagnostic:
	AM7PR08MB5464:EE_|AM5EUR03FT038:EE_|HE1PR08MB2939:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR08MB29397BD88150C61FA0112A1692AF9@HE1PR08MB2939.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Dj32VYDEzteoJHtRPd5s2W3R4uhg8aGN5C/2In2BPCtrwsZkXO0gsE1jDOxupgj6bzyd02lzs6PfoFXFIHN82sXg/DHuf7xJrP/kuU0VeXDTml/a/46pbYVF6lOEAgC2iIP8ArUdt28LAYmjIC+eUPinwVuE/TwCN8sX3b2AHHw7sfxt9OHeASwE4QEh1E1gkNzhxvTHpiVzIeAK6XaKlCEunJZjjOOP2GMkO0oGXu7uUB3U5RZzSfnoA+Eh3++IIFdnlhr6KnnQB2mCtYRkch0kl93aPUu6rXea3PUmnT/qA5dVyvoJOnOqQlen+Oazzvi7fudqLTuOJ+qgUTzT6L33CczBrqOvhdi2PuSHbnDSI3ABvzKHYH4ViUUUv42MRnafT+X635pUT003gfiBa9W5T24aV7cdpYcqxlGPHci1BG9Czxmrd/YJgNIZa3M4EGSfZeTYcqrDjrfMjVBV5ukMOxHEUIfKqTFUMPeSPdtDX6ecIRGKotdA/d48riSkran0bZizwbMZgVoNeQe0mjRhXzkXAlFQjg6Z0+KLkFxiSFRJ+/YDs8NU2Yf90j2lzeH7LG7Euv/Xcv7MmCAD3NSSNSZiheNN2ITmWkt0b27NX81ai1bsIsrsp6GvWSBjLaUjXFi0N3j/Tgf5cyxLxdwcz6K+bzUJRxS3P1Pv+dYG6XyJ38ZUXiMEXpkv1YD0nTt0q+7J7twrDRFmcBJQCAdpF+PsJnetHt3pEf4wrsWi2rR6atmEVjLv9QTZtexWzAoIyr0zftgPMRdYy1ERPwaCW641GIlFldHllUw/QokVrAih+gkPIXcHyhN3naEAeiRCaYpJmkNirQ4y01uxng==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(66446008)(64756008)(4326008)(33656002)(52536014)(2906002)(8676002)(76116006)(66476007)(66556008)(66946007)(55016003)(86362001)(5660300002)(316002)(8936002)(54906003)(498600001)(7696005)(110136005)(6506007)(26005)(966005)(9686003)(71200400001)(83380400001)(38100700002)(122000001)(186003)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5464
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8c25a4f0-b5ff-46ab-4176-08da5010e904
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZH+O9Q6w+reL+UyRSzvIYjFd2c3L+5mI4KtZul3j5CFdxpAndCFpnkbzaIXqgpvrX3FambsIBtGH7kgWQjL6M/sZrN2tq94944xQ2JpqpNzM+kgR/tqhV16eJqvVtb0thEblI+AG2SqC6m9AJUnugfaVv5ZXWw1lrBgs+gjPL1onq0BMWXF20qqDROPswnGJDv5ZVWFjQo4EBDQijads+umyth2Ri1JvT0ymhX7AcskndnFDJV/hrVYAE41fLUUVG9C1Dh1gQLmrqo9p61gfD4xNXaEnQGJ3AZ5h7c3yfs4K68cjIp3sqsBgck0OgxWO8gc+Am5RFkJcfJ68e3F7ymy6QZ1mylpU+0eS/e4no3CqsY+5LKjV/eSG7TCbYCL+rk8TXYEldlty91jV/xbJmbKdRcly1PDe0bCZfNB2a1FR3MNxacOQi8uZabyicUu8u/oL4x6NiS58xcx6OBCInxYMovwy0lJTbUcZDiD3KUKYfSw8+Du5TRK3JpJ+nRhctxr1dvTNt83PFB0qL0v3bseUqjKm8pDJt/zCKKqDQeTtl0F41xFZQzCdI1cE2eK9LMoBFxKgGIGU+YixJj4rf1WH6L9J8Kxb8/XY1IsqU3bNdpegZGJOZLN+YA7qGUZWTFIDscLKFjqfy1zOjlvd/f/fkE70tqaH8h5QgOfdgiTPIhGRVJvFk57ANndQakYUfDghwCQglme/WyFAkCmkbsjiXm5Og6FD2iw+rYkk63jms3s9Cm0XhTplTB7SstXbgYjPZjeZghpNMUkLxOoIS/W0zseJGeykF9TcfNzkoiV75IR+KLaQMzO7YVnT4+8cso9yOKBKa2juHHri+cT7YMZAWMOIC/REonbvC2e13jk=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ErrorRetry;CAT:NONE;SFS:(13230016)(4636009)(36840700001)(40470700004)(46966006)(110136005)(70206006)(8936002)(316002)(2906002)(5660300002)(4326008)(54906003)(8676002)(70586007)(33656002)(7696005)(55016003)(966005)(498600001)(52536014)(9686003)(82310400005)(356005)(6506007)(47076005)(26005)(40460700003)(336012)(86362001)(186003)(81166007)(83380400001)(36860700001)(2690400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 03:24:49.0554
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 493d81d5-494c-4bfe-1501-08da5010ef52
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR08MB2939

Hi,

It seems that this series [1] has been stale for a while with actions needed from
the maintainers (review needed). So sending this email as a gentle reminder.
Thanks!

[1] https://patchwork.kernel.org/project/xen-devel/list/?series=645480

Kind regards,
Henry

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Juergen Gross
> Subject: [PATCH v2 0/4] tools/xenstore: add some new features to the
> documentation
>
> In the past there have been spotted some shortcomings in the Xenstore
> interface, which should be repaired. Those are in detail:
>
> - Using driver domains for large number of domains needs per domain
>   Xenstore quota [1]. The feedback sent was rather slim (one reply),
>   but it was preferring a new set of wire commands.
>
> - XSA-349 [2] has shown that the current definition of watches is not
>   optimal, as it will trigger lots of events when a single one would
>   suffice: for detecting new backend devices the backends in the Linux
>   kernel are registering a watch for e.g. "/local/domain/0/backend"
>   which will fire for ANY sub-node written below this node (on a test
>   machine this added up to 91 watch events for only 3 devices).
>   This can be limited dramatically by extending the XS_WATCH command
>   to take another optional parameter specifying the depth of
>   subdirectories to be considered for sending watch events ("0" would
>   trigger a watch event only if the watched node itself being written).
>
> - New features like above being added might make migration of guests
>   between hosts with different Xenstore variants harder, so it should
>   be possible to set the available feature set per domain. For socket
>   connections it should be possible to read the available features.
>
> - The special watches @introduceDomain and @releaseDomain are rather
>   cumbersome to use, as they only tell you that SOME domain has been
>   introduced/released. Any consumer of those watches needs to scan
>   all domains on the host in order to find out the domid, causing
>   significant pressure on the dominfo hypercall (imagine a system
>   with 1000 domains running and one domain dying - there will be more
>   than 1000 watch events triggered and 1000 xl daemons will try to
>   find out whether "their" domain has died). Those watches should be
>   enhanced to optionally be specific to a single domain and to let the
>   event carry the related domid.
>
> As some of those extensions will need to be considered in the Xenstore
> migration stream, they should be defined in one go (in fact the 4th one
> wouldn't need that, but it can easily be connected to the 2nd one).
> As such extensions need to be flagged in the "features" in the ring
> page anyway, it is fine to implement them independently.
>
> Add the documentation of the new commands/features.
>
> [1]: https://lists.xen.org/archives/html/xen-devel/2020-06/msg00291.html
> [2]: http://xenbits.xen.org/xsa/advisory-349.html
>
> Changes in V2:
> - added new patch 1
> - remove feature bits for dom0-only features
> - get-features without domid returns Xenstore supported features
> - get/set-quota without domid for global quota access
>
> Juergen Gross (4):
>   tools/xenstore: modify feature bit specification in xenstore-ring.txt
>   tools/xenstore: add documentation for new set/get-feature commands
>   tools/xenstore: add documentation for new set/get-quota commands
>   tools/xenstore: add documentation for extended watch command
>
>  docs/misc/xenstore-ring.txt | 10 ++++----
>  docs/misc/xenstore.txt      | 47 ++++++++++++++++++++++++++++++++++---
>  2 files changed, 50 insertions(+), 7 deletions(-)
>
> --
> 2.35.3
>
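The extended XS_WATCH command described in the quoted cover letter can be sketched against the existing Xenstore wire format: a fixed header followed by NUL-terminated string arguments (see xen/io/xs_wire.h). In the sketch below, the optional trailing "depth" argument is this proposal's idea only; `build_watch` and its encoding are illustrative assumptions, not the series' final wire definition:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Wire-format header every Xenstore message starts with (xen/io/xs_wire.h). */
struct xsd_sockmsg {
    uint32_t type;    /* command, e.g. XS_WATCH */
    uint32_t req_id;  /* echoed back in the reply */
    uint32_t tx_id;   /* 0 = outside any transaction */
    uint32_t len;     /* payload bytes following the header */
};

enum { XS_WATCH = 4 };  /* command number from xs_wire.h */

/*
 * Build an XS_WATCH request: the payload is a sequence of NUL-terminated
 * strings.  "depth" is the optional third argument proposed by this
 * series (pass NULL for the classic two-argument form).  Returns the
 * total message size, or 0 if the buffer is too small.
 */
size_t build_watch(uint8_t *buf, size_t size, const char *path,
                   const char *token, const char *depth)
{
    struct xsd_sockmsg hdr = { .type = XS_WATCH, .req_id = 1 };
    size_t off = sizeof(hdr);
    const char *args[3] = { path, token, depth };

    for (int i = 0; i < 3 && args[i]; i++) {
        size_t n = strlen(args[i]) + 1;   /* include the terminating NUL */
        if (off + n > size)
            return 0;
        memcpy(buf + off, args[i], n);
        off += n;
    }
    hdr.len = (uint32_t)(off - sizeof(hdr));
    memcpy(buf, &hdr, sizeof(hdr));       /* header is written first */
    return off;
}
```

With depth "0", such a request would ask for events only when the watched node itself is written, which is exactly the backend-device case the cover letter describes.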



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:25:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:25:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350962.577431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22bl-00065f-Tp; Fri, 17 Jun 2022 03:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350962.577431; Fri, 17 Jun 2022 03:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22bl-00065W-PT; Fri, 17 Jun 2022 03:25:17 +0000
Received: by outflank-mailman (input) for mailman id 350962;
 Fri, 17 Jun 2022 03:25:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YAhX=WY=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o22bk-000652-GL
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 03:25:16 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20601.outbound.protection.outlook.com
 [2a01:111:f400:7d00::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a68000e-eded-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 05:25:15 +0200 (CEST)
Received: from DU2P251CA0019.EURP251.PROD.OUTLOOK.COM (2603:10a6:10:230::26)
 by DB6PR0801MB1928.eurprd08.prod.outlook.com (2603:10a6:4:71::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 17 Jun
 2022 03:25:05 +0000
Received: from DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:230:cafe::32) by DU2P251CA0019.outlook.office365.com
 (2603:10a6:10:230::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16 via Frontend
 Transport; Fri, 17 Jun 2022 03:25:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT054.mail.protection.outlook.com (100.127.142.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 03:25:04 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 17 Jun 2022 03:25:04 +0000
Received: from cb475fc259a0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0D22E63E-1DEA-450D-8162-19F9F6BD5DA9.1; 
 Fri, 17 Jun 2022 03:24:58 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cb475fc259a0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 03:24:58 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5464.eurprd08.prod.outlook.com (2603:10a6:20b:10a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Fri, 17 Jun
 2022 03:24:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%7]) with mapi id 15.20.5353.015; Fri, 17 Jun 2022
 03:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a68000e-eded-11ec-ab14-113154c10af9
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=P/YIGBu0O3f+jQFDwNgJGEI36mvBwrlBizNPSrfpjWMGnghTWMB1w0I2kacQMSETJ+mCBh0zGgWPUeg38rEVEv2xlee8LnTkVSttB9UuxVRjZBcqtCWDVJHEV8Z6ATXdsWc+4l1NYCRh46UxbMbYyMkaJfXQ9u8LmhNChQelk15w6dfJpPG6FJnJ91p7HbkyyO/7O4QgGiUxPYvNZ8I7PsjnXBi3ZkU483E0OSjO9yqHDqkB7zp0eLvtFNfqbHdVEQ+9heM/09sGuh/+PvgSbQ5AceWecWi1hJacTY/Z37Q3A18+ap6ZDWOhN7NrNhlPvczbdtaoOjK7ZLKLJzh3ZQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GWYQ+mlADmvF9UtpQu6HZVmTpy/QSM8fWlCaRvr8sHA=;
 b=RSZUEzJdL4/ajK1OLaLM6f5ydn2og91dHJKnGQZZOZ/VWx1GX2Lu4fXfNaxq3Og7WCazSkQMHfyaMz/TjHAeZ7EaskluCRm4blevN+x0bXWoe7bjnbppFtwqq3lGY+2SmBjNK8/ZETI5TyppsvxW0mVbLn9bxS51xsCkuUZSDcL8LchI3W8HBKQDlyx2eCni2rgja06RqIPP1M9azWfJ3COhrpjywFf//+dOx4uJb25s/uP5mr+iUCHzp2L0Z/BB5mp8BPeaHesaP0RMmNslPRcUZYQfpJyEzb1+F1I0wY1rNOBjYi3+x8dlETQfQ+ZE3Ra7i272LrKSoC5sSC6EVw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GWYQ+mlADmvF9UtpQu6HZVmTpy/QSM8fWlCaRvr8sHA=;
 b=1JKrZm+Bi3MPWN3K3EHd+JahXO3xfAMw4jO4hlqLmlmeMzLZkFFb8D7pigbyMi7SxqLatCjpGsbXHqbWPDffp86eyqhYVklvQ9ulI0rIg15qC7Gf6MslDmhMaazHumx2HKhBq0Hbytph5jMY1S/OOxkgqaGLQVYNQlIS4IRPY5I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VqpA7YvulkPv1oA/rExoPCWhbXYS1sYaM9oDUxfsdz4V+fTS3lNwYVty6YKOsgt3aRW2rD1YOJdvjPamOBNKvxNWgVOoErPsxDYguRCMu8oX6+2kh/G5PcsQBxmLwpMRO2S6XXIT0dA/UYkKm5aMMyUB3BW8yJ361EMvos/ZUS3y7VAML+hLFu8AV+8VgNYUQRgt7tllAtLTFP4duL+5mDd7zM97bvtXIxL1jZEYlxqa9wqWBmezAgJYK+aH0FAAWEeNvzubdG9I4Qso34H8q4HvV6+62WB7xtGALSbERux2M6a3Gx9Xm4zgYljw3ou9cEsrOJGrM2lvWu0HbYHScQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GWYQ+mlADmvF9UtpQu6HZVmTpy/QSM8fWlCaRvr8sHA=;
 b=f5+bk8HiuetDCzVOhI0VA0bXwE70z8PjmgxLRtBVOdLV4ppwEqEhhiqKrl/qoOD6PyrEuxAnWU5cxIaRKZDhMJh4XSYlFeybfDKRyDVuI2MV9qBKDFNMI2HJZjbRcljJ8Qqtms1R0DP5fvaS+3ur/JuRjRCVlqJl68XrBoZSx8WWiSUHIFR3omJWioeGjmHSSnVnjyWGolNdhI9LsqzhaSIESsH4pagL/GA5HMAxri0enfGIzoZ1NKmlkC8vniNap5cYiLbSXh5n6VjJBe9MJBzB1No1J6AreGCw8q/7iQv321t/cfEkhWD8EdHHZngDeNRHH1rnGp7R9V7NL4QJvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GWYQ+mlADmvF9UtpQu6HZVmTpy/QSM8fWlCaRvr8sHA=;
 b=1JKrZm+Bi3MPWN3K3EHd+JahXO3xfAMw4jO4hlqLmlmeMzLZkFFb8D7pigbyMi7SxqLatCjpGsbXHqbWPDffp86eyqhYVklvQ9ulI0rIg15qC7Gf6MslDmhMaazHumx2HKhBq0Hbytph5jMY1S/OOxkgqaGLQVYNQlIS4IRPY5I=
From: Henry Wang <Henry.Wang@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: RE: [PATCH 0/5] x86/lbr: handle lack of model-specific LBRs
Thread-Topic: [PATCH 0/5] x86/lbr: handle lack of model-specific LBRs
Thread-Index: AQHYbE8ENIrU0fzd30KB0FdsdC25cq1TGGOg
Date: Fri, 17 Jun 2022 03:24:56 +0000
Message-ID:
 <AS8PR08MB79915278A454C8933E8D7DE992AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220520133746.66142-1-roger.pau@citrix.com>
In-Reply-To: <20220520133746.66142-1-roger.pau@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7AE86C4C0389E640BD5B3218410CD2DB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 3f3dd3f0-25c5-45b3-d29b-08da5010f8c6
x-ms-traffictypediagnostic:
	AM7PR08MB5464:EE_|DBAEUR03FT054:EE_|DB6PR0801MB1928:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB19281301AB6BD036CB81823992AF9@DB6PR0801MB1928.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 YvFqUqq9l4/5zn5j+EGhwaZZJmSpsMxwAqqSleelqqPr8kUwj2fYMyNvpdSOhLV2B2/GMsTbyzJT8AhMeQ83jyog7pD+fegzxbWtA2ZjGS7qdNLa8hiFeXBlGptA4AbAFIGPGrKAI6MrjNBkG0qSLRybcd5ZLVBxspmDQ5I8pY8rsWs0fjgnqo0xvA3WgIMa0OuT7nZjZuuwI9neGgIKSiGk81EWY4fNNldR4zo/xP5/+koizv2MmnlaCO2QYHqU8LuHn64eodgBcLzxLNILOHEnlgPWa9uTDrhtdyECNxDysb58c6KuEQFseowzR5hyUywBk9is4YzsKYGR7aBL2Q7ykReaAuPPujVAnFuD1nxLpPkG/jCVmsZlNSZu8DhehxbYxcg2j7yguPMB0wQAtgG5/5VNW9QWWCB6O1lyzT8ZQ/8csaAMLL7vhRVBEhN7XBF6z0maYGaq7xpDqCAOfByWiwqIllZqKrAj78vVBnXp5zFcYNIlywI7MRlWWniXjyHHkhZVPuLKrWHGDUOGbYA+ZsnCDPoCuRhMAHA4Q5F0wL0iepjOANVEViTbqkkwdd1ZViwk+aZtyvWy04WARbB8S5wL1//xfjPQXdcalJt/Mvj0H+GHvXkMPf1UAdHsK1UsS+k9eqS3vX2CB4ryAGu7P0DaYWu/VmiCMEvVVW1CHhUjVkjetCRLybrRo1QUwnVlXuEkITHt/uHYAqxXjmJ9FhbSlBsgrSix1lnXE+A7vIcmFoRhwIiMeRYQQbycdBAyYPwy9rWXPVKQvWPw82gjRmt6wANS3+Q8NHDH5oU=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(66446008)(64756008)(4326008)(33656002)(52536014)(2906002)(8676002)(76116006)(66476007)(66556008)(66946007)(55016003)(86362001)(5660300002)(316002)(8936002)(54906003)(498600001)(7696005)(110136005)(6506007)(26005)(966005)(9686003)(71200400001)(83380400001)(38100700002)(122000001)(186003)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5464
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	df71f582-8583-4777-e4df-08da5010f381
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9GHFyS+kQCp4KsalkZqfvQvVxhDsVsMBP3yJyi+v5mT/lCjyw07dYWtEDItY9NCQ+ExmM5MsjrhEFp+pF3GEJP6fldWh5Uy8mekE59+d61vE2CXDFb+kl3lqncAg2vkp+mrAyJIpRBYVJsnCLau1wvxRx3TMWTDqOP3OB2E1jwY/P3oBJKTBFlTv7PbAcksjfTpkoMB7ia9eUsgMRIGDOL66mbGoXiZg6cnLPiTU4ZUaMsFUKufJwfwOC2WP1kahYR2+udCGyOmjtIsE2wGnbPUC7UlnFkxuO6pfeTKQpPlispsC/fycB+CtLucKFoZ+AMPmo4ulNPihRKxHjj57iRZX6crIHyDJa3YlFs6V6RcXTcL+GBFhma3K0xySvXzxWeg+x7HQsGX8yK3AjO5f7weJmxKJvjCpB6gZSs6Qk3lEamlBClPf0bRIc2dxFKpa4Os66sXmXTlHyHXeZ0hiX8ZAZVmdMWsws9VxFHLZsuObWGabHd9TmlZk7pqOT4CpWbUgnrvCuqVG4j1ljfdPclw6fbCzOHvrtxbyzYMp9+vNJYV4Kvv3IdGJMwmxro6plr2YjntyJLNUOAHTkykkRqDcRo13IpLyHOCzkUlm1QyHEvFJVSoG47+2s3LDzGCV/h2vRNalLE7GhlBoglwuAkJf97F/ynzIywAHewBd93tBBxDejo9pgDYqD7bpnlA6UVJjxa8yEjwffLmU4nfZlD9ZAbDqqrtz/CupClBdTeckjBNbO+iR7qb7cZI75nLV9wXiygN7xJP+/5UpzhsPqA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(46966006)(40470700004)(36840700001)(86362001)(336012)(47076005)(54906003)(81166007)(70206006)(2906002)(7696005)(40460700003)(356005)(52536014)(70586007)(316002)(110136005)(4326008)(55016003)(36860700001)(82310400005)(33656002)(8676002)(5660300002)(8936002)(186003)(6506007)(107886003)(83380400001)(26005)(9686003)(966005)(498600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 03:25:04.9668
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f3dd3f0-25c5-45b3-d29b-08da5010f8c6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1928

Hi,

It seems that this series [1] has been stale for a while, with actions needed
from the author for the first 4 patches and comments needed from the
maintainers for the patch#5. So sending this email as a gentle reminder.
Thanks!

[1] https://patchwork.kernel.org/project/xen-devel/list/?series=643625

Kind regards,
Henry

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Roger Pau Monne
> Subject: [PATCH 0/5] x86/lbr: handle lack of model-specific LBRs
>
> Hello,
>
> Intel Sapphire Rapids CPUs doesn't have model-specific MSRs, and hence
> only architectural LBRs are available.
>
> Firstly implement some changes so Xen knows how to enable arch LBRs so
> that the ler option can also work in such scenario (first two patches).
>
> The lack of model-specific LBRs also affects guests, as setting
> DEBUGCTLMSR.LBR is now ignored (value hardwired to 0, writes ignored)
> by
> the hardware due to the lack of model-specific LBRs.  The LBR format
> reported in PERF_CAPABILITIES also need to be exposed, as that's a way
> for guests to detect lack of model-specific LBRs presence (patches 3
> and 4).
>
> Patch 5 is an indentation fix that can be merged into patch 4: done
> separately to help readability of patch 4.
>
> Thanks, Roger.
>
> Roger Pau Monne (5):
>   x86/ler: use feature flag to check if option is enabled
>   x86/lbr: enable hypervisor LER with arch LBR
>   x86/perf: expose LBR format in PERF_CAPABILITIES
>   x86/vmx: handle no model-specific LBR presence
>   x86/vmx: fix indentation of LBR
>
>  xen/arch/x86/hvm/vmx/vmx.c                  | 59 ++++++++++++++-------
>  xen/arch/x86/include/asm/msr-index.h        | 18 +++++++
>  xen/arch/x86/msr.c                          |  9 ++++
>  xen/arch/x86/traps.c                        | 29 ++++++++--
>  xen/arch/x86/x86_64/traps.c                 |  2 +-
>  xen/include/public/arch-x86/cpufeatureset.h |  3 +-
>  6 files changed, 97 insertions(+), 23 deletions(-)
>
> --
> 2.36.0
>


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:26:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350976.577442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22ck-0006vF-Bp; Fri, 17 Jun 2022 03:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350976.577442; Fri, 17 Jun 2022 03:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22ck-0006v8-8Y; Fri, 17 Jun 2022 03:26:18 +0000
Received: by outflank-mailman (input) for mailman id 350976;
 Fri, 17 Jun 2022 03:26:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YAhX=WY=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o22ci-000652-RA
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 03:26:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e2969ef-eded-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 05:26:15 +0200 (CEST)
Received: from AM6P195CA0107.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:86::48)
 by VI1PR08MB3902.eurprd08.prod.outlook.com (2603:10a6:803:c2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Fri, 17 Jun
 2022 03:26:10 +0000
Received: from VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:86:cafe::be) by AM6P195CA0107.outlook.office365.com
 (2603:10a6:209:86::48) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14 via Frontend
 Transport; Fri, 17 Jun 2022 03:26:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT061.mail.protection.outlook.com (10.152.19.220) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 03:26:09 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 17 Jun 2022 03:26:09 +0000
Received: from f54d362ecf32.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C2A18A3F-A367-4CA4-AFF3-89DB1EAB469E.1; 
 Fri, 17 Jun 2022 03:26:03 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f54d362ecf32.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 03:26:03 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5464.eurprd08.prod.outlook.com (2603:10a6:20b:10a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Fri, 17 Jun
 2022 03:26:01 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%7]) with mapi id 15.20.5353.015; Fri, 17 Jun 2022
 03:26:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e2969ef-eded-11ec-ab14-113154c10af9
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=FTS81PRMkoPAbynMpSJkJjUn4XEZbjcPfXXf5X5Bhdktv0WdDB9qRylDeJVB/ZiEoi2ESrEvi2traRo40Y9jz06kToafRl0I7tz4CGScOjqjJ7Z0OOkjfxNKAWSzXKEpzavh5apb7849qjHfP41HVnj3sjLz56hy6rhjBt19geTU84sUUac3O2bWAzfH3kCUPGLSQLWjVbC0zfYPDLhKsw/mok/ktXmgzCJZuyKMRuK/IIjs6g/jrRMweBxEqvzgeKJeog6FBTBp3PvPum92+/B45qdJPeuISl5K++viSTxbLzwtNSuAhWZvkkWnKFKOdgMHKMoUwPFFlrIj5ChH1A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mviCt2paHi30I9RN/6gFGzSxDD0uSTbvQvPQYyT/Gpg=;
 b=W1WA2+U1LH4JN83TyZQc3BojtJC0JYPQvvaB484zIG65Ko46WmeEHAvrJwC5m0RRfrU4SrCFlvQkUsW2J1YJGArjxJg9fJlkWfymdcHWoPipPCUrMB6duf5fTAdqNdpK6FklhupqASTO6C3kkwINRZnfz+y9IgvmKadP0l2uiIUWDDHsbHGL3x6yHVUyu9Pi8GHIfmJSoBvB3zE4OJJ1XACxkr2Kt/FDOg+JISnajMcIE9MuwaxFc31v2zPw1AW4ZQnzjHjg0QMiQzKHQWNupRhrT7ooDMMEQ1c6oAFaZG+Qf2pbWN8B1lrS0pVPFND0LD6miGq/XH1c8QNRGnzpdw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mviCt2paHi30I9RN/6gFGzSxDD0uSTbvQvPQYyT/Gpg=;
 b=LdQsd7AmQDfmyKYMG/wssee+QHiv1M4pdtng5y6blk8IRu/dK4T6T35IlKqp+Jln3Dj06w7YM7JohCORrkXwGtcY10qBFG2j5KjThH/zJjNMf8Ir0hXThK3MpCC7lSmEEm3hKlwpvfnz7P5JOvcY1WT14y8YADfah8xRrvA2jkg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f8LQTcXUrdB2+91ou7YvZ/bMOVTVUl93HBNo2F49/6aSD8XkqPAWuP+OpUdrjMZ2RZDFLKYIAFhM13ZMaoK6BDq4TnRLfzePbnbg42oPXFLVv9reWmTMx4oQ2e78+6gY6MiMTsoUZwwV7umWcTNsZSTMao0mHCdI+V+bNj8HPSA4gYUTwNIGd/Sf6h3kuzNBMb9ybo/vbwuWPL8ZuEU1CTTFfdIcOggv3chRoklJz8/xs894xCdjWrgxMphNCuSZY7NI0LZ5DBuRvL6n3uxysOZEOG5xKERVWENY9eHRoXbHWId3h0Iq0woeLPm+EsbwI9pKb0YOXCSzoss4Tv7mFQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mviCt2paHi30I9RN/6gFGzSxDD0uSTbvQvPQYyT/Gpg=;
 b=PU/VI0m3AsNYl+gf3WCGTLRZ+9lBbyKFvG1iCDgdy7OGKst9P2brhUJ3RDlOVn9hdE7UfKoI556KkSLWcgdPHbLLLEt3pxBmFXyGoWUz94EEz10/47KxEFe1V2EAJwFfbtI/XnZW60428UzUbjY6e+MFrqJDIG0jgx2bW3m+mfEIO5cIbWl0KlehO5db9e8E21dhrkO2IHa/lm+q+Py+zn32pprDnu4Y3FJsMO0R+dsHN46fryTTc3wKCMq+d7NEi7PEIWBO7yLztFiCTdsHsmMuKtfHJeB9ySCtCQMNzAvNHdUmbZQ24SFoQR1JElY0KVOGZqXNPuTi1b1PLeCWJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mviCt2paHi30I9RN/6gFGzSxDD0uSTbvQvPQYyT/Gpg=;
 b=LdQsd7AmQDfmyKYMG/wssee+QHiv1M4pdtng5y6blk8IRu/dK4T6T35IlKqp+Jln3Dj06w7YM7JohCORrkXwGtcY10qBFG2j5KjThH/zJjNMf8Ir0hXThK3MpCC7lSmEEm3hKlwpvfnz7P5JOvcY1WT14y8YADfah8xRrvA2jkg=
From: Henry Wang <Henry.Wang@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Community Manager <community.manager@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: RE: [PATCH v6 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests
 using legacy SSBD
Thread-Topic: [PATCH v6 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests
 using legacy SSBD
Thread-Index: AQHYagNC0CXNzdsfvEia3fuvabBVia1TGtkA
Date: Fri, 17 Jun 2022 03:26:01 +0000
Message-ID:
 <AS8PR08MB799195FC7D9949031F33802892AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220517153127.40276-1-roger.pau@citrix.com>
 <20220517153127.40276-4-roger.pau@citrix.com>
In-Reply-To: <20220517153127.40276-4-roger.pau@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 934A182AFE9132478B0F3825A0BF97FF.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: ea0b315a-5305-4f5c-10a9-08da50111f5a
x-ms-traffictypediagnostic:
	AM7PR08MB5464:EE_|VE1EUR03FT061:EE_|VI1PR08MB3902:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB39024FF2AE5A990E5563991D92AF9@VI1PR08MB3902.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9KunPCga/8MZlixhQfMWP12FI3GNqQifGQx30yWqIv0BjtUjeRZsGsuHmYGx1KHChKoMuo2l0rba2hG6ktJR5tL5o+lZR42uFD2FB9lGMSwawxNmO4hQJrJVUZAK+i/wWn99PVJOVfig6DUeU9Fi2LmaJ202HEH/JKSrs5QCn81QzxcpcEuhc6EG7t/myLFQN9Xv2UdNa+hIx0GXVye2Sy9FQFyUXAm3jIHIByyqCTl4dXj/ZASUYy1Kdx9AHOC/OMRUfnQRPyt7WfABPF089MWKC6Hqn2UxjrIYmoVFLUvHKvqymahrR3X0e0Z9JXB0WLgOyOw8w7yQSqE1worhXnjcVD6gKnUpEWj37SXUFL+hhnUDrPYHwyySM0un58F7ALeNqXMJedr81WPfJhi0zdJE2435gVvaXJvb5qxrjEowe2ItldZNryQHYJhgmYeEdVsqRtXbaGTrX2+m+DO04I8v04WzxbUuaAK4E8OZBRoy1IIkSyksuYW9UKhPtAJIZoYMi1i8OO54OormMUrCXWBRv7ZDOR7quTQ7Jt63pohIDYCK9eMaSJQArzzPf1SCpwnNNsuMDg4t21HJqROyLzqL+/Xs+XfhfVVTyaRQpTZP3F2nv9bo8z0WLEVWtAsmr+YdnIPOzr+f3L0Wnplt3N8Ielt4OqFnAjGhHXHGOsQxXaJ9ee/9H9PhdbjuAK3FPzN8LZXj9uT+IvVvlmHXy6et8waI1kVxxCEpNmALfug4uJsL06c4vTNg5P+WWbIMhFaahW3sTijUX03o65znA9ZBpGVQ1Y9Zy0RPE3rAY81pCRjuYAFYX29qnmZGao0sHj4QTdyii7JxQ8eeo0g1YQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(66446008)(64756008)(4326008)(30864003)(33656002)(52536014)(2906002)(8676002)(76116006)(66476007)(66556008)(66946007)(55016003)(86362001)(5660300002)(316002)(8936002)(54906003)(498600001)(7696005)(110136005)(6506007)(26005)(966005)(9686003)(71200400001)(83380400001)(38100700002)(122000001)(186003)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5464
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bb7e06c1-e28c-4b90-6656-08da50111a35
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	C2ZnEU5EgLjZq8VsUNhBYLEjEy9MB/KckKVuuJBqDomFSp95Ij7zcnwoo/McCX9K3d0U4chias1IOqVHbv+NxzkDUU1OgbqSpQVQJbRej9p16hvp4uXFildXD9WdYMpfTJfkeDTKJ8VmnG5agqhE0GHxH4+TkJXphSZMociTq0B+0xbSZefZ/01d+TbyVg7Rd8ci68sqc7tl34S1ex0ev2z7L81HmZ+agAkWog38LusHnBcSNwBEIgY740YAJkDVpS6b/csxixfs0D8uv9XGT6L0M3/2HZZVmEqQeywuJUHo2qTCag1IpsK7ccAeeE8DaxJkhAg3OLYW7LwACKwwxdtva9/B+nzrTGRotscW0ZcJAWB1Exs336/fB5eVfb44Gw1rTslwXX/d96tos7oczjYXoWf4WoSXYkUylNND0Ou+kf43ylAWw5Yso8V1Tcs0OFEHLovuh8OJ4bl4Yp3HGFkbPw1amx10BaTaaLLy3YSnwjxH/X/xubjuNxnSJ1TzdjtfXkbcBNkNTIS/+Ze3MZOpUgE+8EGnzgt/VTgHBJ+sCd8h5dDiZGd9j9XMZgXPY8BzBomQ3rWCuyIBVxMuys+x+msWDoVIyagU3AbaA7eKjqVIZw8JBnZKt/3i9m51W1g1VZa89Q5/SdBJC5LfQ5nYME09LdHRRX+dDvZG0BraBSeNzaDuLGWk/RymA1ZR1zTuLWiGp1K1TQPYXfsiM2mKZpYnaznck1wpC8Qwgq0lqR2maQsI8PgV2LVGm9sYGoHFsb+ZM/Gouz0anK6VL1g+HMP7YRs+GKaNPePeHxTQLUGLgAJKUKxDiMtXoBYZ
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(46966006)(40470700004)(36840700001)(5660300002)(86362001)(33656002)(82310400005)(4326008)(52536014)(55016003)(8936002)(30864003)(36860700001)(498600001)(966005)(8676002)(110136005)(316002)(2906002)(54906003)(6506007)(70206006)(70586007)(40460700003)(81166007)(186003)(7696005)(336012)(356005)(9686003)(83380400001)(26005)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 03:26:09.5602
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ea0b315a-5305-4f5c-10a9-08da50111f5a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3902

SGksDQoNCkl0IHNlZW1zIHRoYXQgdGhpcyBzZXJpZXMgWzFdIGhhcyBiZWVuIHN0YWxlIGZvciBt
b3JlIHRoYW4gYSBtb250aCBhbmQgYWxzbw0KdGhpcyBzZXJpZXMgc2VlbXMgdG8gYmUgcHJvcGVy
bHkgcmV2aWV3ZWQgYW5kIGFja2VkIGFscmVhZHkuIA0KDQpGcm9tIHdoYXQgSmFuIGhhcyByZXBs
aWVkIHRvIFJvZ2VyIGFuZCBBbmRyZXc6DQoiLi4uIHRoaXMgYWRkaXRpb24gdGhlIHNlcmllcyB3
b3VsZCBub3cgbG9vayB0byBiZSByZWFkeSB0byBnbyBpbiwNCkknZCBsaWtlIHRvIGhhdmUgc29t
ZSBmb3JtIG9mIGNvbmZpcm1hdGlvbiBieSB5b3UsIEFuZHJldywgdGhhdA0KeW91IG5vdyB2aWV3
IHRoaXMgYXMgbWVldGluZyB0aGUgY29tbWVudHMgeW91IGdhdmUgb24gYW4gZWFybGllcg0KdmVy
c2lvbi4iDQoNClNvIEkgZ3Vlc3MgdGhpcyBjYW4gYmUgbWVyZ2VkLiBTZW5kaW5nIHRoaXMgYXMg
YSBnZW50bGUgcmVtaW5kZXIgZm9yDQpwb3NzaWJsZSBhY3Rpb25zIGZyb20gUm9nZXIgYW5kIEFu
ZHJldy4gVGhhbmtzIQ0KDQpBbHNvLCBub3Qgc3VyZSB3aHkgbXkgYWNrZWQtYnkgZm9yIHRoZSBD
SEFOR0VMT0cubWQgaXMgbWlzc2luZyBpbg0KcGF0Y2h3b3JrLCBqdXN0IGluIGNhc2UgLSBmb3Ig
dGhlIGNoYW5nZSBpbiBDSEFOR0VMT0cubWQgaW4gcGF0Y2gjMzoNCg0KQWNrZWQtYnk6IEhlbnJ5
IFdhbmcgPEhlbnJ5LldhbmdAYXJtLmNvbT4NCg0KWzFdIGh0dHBzOi8vcGF0Y2h3b3JrLmtlcm5l
bC5vcmcvcHJvamVjdC94ZW4tZGV2ZWwvbGlzdC8/c2VyaWVzPTY0MjQxMw0KDQpLaW5kIHJlZ2Fy
ZHMsDQpIZW5yeQ0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IFJvZ2Vy
IFBhdSBNb25uZSA8cm9nZXIucGF1QGNpdHJpeC5jb20+DQo+IFN1YmplY3Q6IFtQQVRDSCB2NiAz
LzNdIGFtZC9tc3I6IGltcGxlbWVudCBWSVJUX1NQRUNfQ1RSTCBmb3IgSFZNDQo+IGd1ZXN0cyB1
c2luZyBsZWdhY3kgU1NCRA0KPiANCj4gRXhwb3NlIFZJUlRfU1NCRCB0byBndWVzdHMgaWYgdGhl
IGhhcmR3YXJlIHN1cHBvcnRzIHNldHRpbmcgU1NCRCBpbg0KPiB0aGUgTFNfQ0ZHIE1TUiAoYS5r
LmEuIG5vbi1hcmNoaXRlY3R1cmFsIHdheSkuIERpZmZlcmVudCBBTUQgQ1BVDQo+IGZhbWlsaWVz
IHVzZSBkaWZmZXJlbnQgYml0cyBpbiBMU19DRkcsIHNvIGV4cG9zaW5nIFZJUlRfU1BFQ19DVFJM
LlNTQkQNCj4gYWxsb3dzIGZvciBhbiB1bmlmaWVkIHdheSBvZiBleHBvc2luZyBTU0JEIHN1cHBv
cnQgdG8gZ3Vlc3RzIG9uIEFNRA0KPiBoYXJkd2FyZSB0aGF0J3MgY29tcGF0aWJsZSBtaWdyYXRp
b24gd2lzZSwgcmVnYXJkbGVzcyBvZiB3aGF0DQo+IHVuZGVybHlpbmcgbWVjaGFuaXNtIGlzIHVz
ZWQgdG8gc2V0IFNTQkQuDQo+IA0KPiBOb3RlIHRoYXQgb24gQU1EIEZhbWlseSAxN2ggYW5kIEh5
Z29uIEZhbWlseSAxOGggcHJvY2Vzc29ycyB0aGUgdmFsdWUNCj4gb2YgU1NCRCBpbiBMU19DRkcg
aXMgc2hhcmVkIGJldHdlZW4gdGhyZWFkcyBvbiB0aGUgc2FtZSBjb3JlLCBzbw0KPiB0aGVyZSdz
IGV4dHJhIGxvZ2ljIGluIG9yZGVyIHRvIHN5bmNocm9uaXplIHRoZSB2YWx1ZSBhbmQgaGF2ZSBT
U0JEDQo+IHNldCBhcyBsb25nIGFzIG9uZSBvZiB0aGUgdGhyZWFkcyBpbiB0aGUgY29yZSByZXF1
aXJlcyBpdCB0byBiZSBzZXQuDQo+IFN1Y2ggbG9naWMgYWxzbyByZXF1aXJlcyBleHRyYSBzdG9y
YWdlIGZvciBlYWNoIHRocmVhZCBzdGF0ZSwgd2hpY2ggaXMNCj4gYWxsb2NhdGVkIGF0IGluaXRp
YWxpemF0aW9uIHRpbWUuDQo+IA0KPiBEbyB0aGUgY29udGV4dCBzd2l0Y2hpbmcgb2YgdGhlIFNT
QkQgc2VsZWN0aW9uIGluIExTX0NGRyBiZXR3ZWVuDQo+IGh5cGVydmlzb3IgYW5kIGd1ZXN0IGlu
IHRoZSBzYW1lIGhhbmRsZXIgdGhhdCdzIGFscmVhZHkgdXNlZCB0byBzd2l0Y2gNCj4gdGhlIHZh
bHVlIG9mIFZJUlRfU1BFQ19DVFJMLg0KPiANCj4gU3VnZ2VzdGVkLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPg0KPiBTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUg
TW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4NCj4gUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gLS0tDQo+IENoYW5nZXMgc2luY2UgdjU6DQo+ICAtIEZp
eCBvbmUgY29kZGluZyBzdHlsZSBpc3N1ZS4NCj4gDQo+IENoYW5nZXMgc2luY2UgdjQ6DQo+ICAt
IFNsaWdodGx5IGNoYW5nZSB1c2FnZSBvZiB2YWwvb3B0X3NzYmQgaW4NCj4gICAgdm17ZXhpdCxl
bnRyeX1fdmlydF9zcGVjX2N0cmwuDQo+ICAtIFB1bGwgb3B0X3NzYmQgb3V0c2lkZSBvZiB0aGUg
Zm9yIGxvb3AgaW4gYW1kX3NldHVwX2xlZ2FjeV9zc2JkKCkuDQo+ICAtIEZpeCBpbmRlbnRhdGlv
bi4NCj4gIC0gUmVtb3ZlIEFTU0VSVHMvQlVHX09OcyBmcm9tIEdJRj0wIGNvbnRleHQuDQo+IA0K
PiBDaGFuZ2VzIHNpbmNlIHYzOg0KPiAgLSBBbGlnbiBzc2JkIHBlci1jb3JlIHN0cnVjdCB0byBh
IGNhY2hlIGxpbmUuDQo+ICAtIE9wZW4gY29kZSBhIHNpbXBsZSBzcGlubG9jayB0byBhdm9pZCBw
bGF5aW5nIHRyaWNrcyB3aXRoIHRoZSBsb2NrDQo+ICAgIGRldGVjdG9yLg0KPiAgLSBzL3NzYmRf
Y29yZS9zc2JkX2xzX2NmZy8uDQo+ICAtIEZpeCBsb2cgbWVzc2FnZSB3b3JkaW5nLg0KPiAgLSBG
aXggZGVmaW5lIG5hbWUgYW5kIHJlbW92ZSBjb21tZW50Lg0KPiAgLSBBbHNvIGhhbmRsZSBIeWdv
biBwcm9jZXNzb3JzIChGYW0xOGgpLg0KPiAgLSBBZGQgY2hhbmdlbG9nIGVudHJ5Lg0KPiANCj4g
Q2hhbmdlcyBzaW5jZSB2MjoNCj4gIC0gRml4IGNvZGRpbmcgc3R5bGUgaXNzdWVzLg0KPiAgLSBV
c2UgQU1EX1pFTjFfTUFYX1NPQ0tFVFMgdG8gZGVmaW5lIHRoZSBtYXggbnVtYmVyIG9mIHBvc3Np
YmxlDQo+ICAgIHNvY2tldHMgaW4gWmVuMSBzeXN0ZW1zLg0KPiANCj4gQ2hhbmdlcyBzaW5jZSB2
MToNCj4gIC0gUmVwb3J0IGxlZ2FjeSBTU0JEIHN1cHBvcnQgdXNpbmcgYSBnbG9iYWwgdmFyaWFi
bGUuDQo+ICAtIFVzZSByb19hZnRlcl9pbml0IGZvciBzc2JkX21heF9jb3Jlcy4NCj4gIC0gSGFu
ZGxlIGJvb3RfY3B1X2RhdGEueDg2X251bV9zaWJsaW5ncyA8IDEuDQo+ICAtIEFkZCBjb21tZW50
IHJlZ2FyZGluZyBfaXJxc2F2ZSB1c2FnZSBpbiBhbWRfc2V0X2xlZ2FjeV9zc2JkLg0KPiAtLS0N
Cj4gIENIQU5HRUxPRy5tZCAgICAgICAgICAgICAgICAgICB8ICAgMyArDQo+ICB4ZW4vYXJjaC94
ODYvY3B1L2FtZC5jICAgICAgICAgfCAxMjEgKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tDQo+ICB4ZW4vYXJjaC94ODYvaHZtL3N2bS9zdm0uYyAgICAgfCAgIDQgKysNCj4gIHhlbi9h
cmNoL3g4Ni9pbmNsdWRlL2FzbS9hbWQuaCB8ICAgNCArKw0KPiAgeGVuL2FyY2gveDg2L3NwZWNf
Y3RybC5jICAgICAgIHwgICA0ICstDQo+ICA1IGZpbGVzIGNoYW5nZWQsIDExOCBpbnNlcnRpb25z
KCspLCAxOCBkZWxldGlvbnMoLSkNCj4gDQo+IGRpZmYgLS1naXQgYS9DSEFOR0VMT0cubWQgYi9D
SEFOR0VMT0cubWQNCj4gaW5kZXggNmE3NzU1ZDdiMC4uOWEwMDdlMjUxMyAxMDA2NDQNCj4gLS0t
IGEvQ0hBTkdFTE9HLm1kDQo+ICsrKyBiL0NIQU5HRUxPRy5tZA0KPiBAQCAtMTMsNiArMTMsOSBA
QCBUaGUgZm9ybWF0IGlzIGJhc2VkIG9uIFtLZWVwIGENCj4gQ2hhbmdlbG9nXShodHRwczovL2tl
ZXBhY2hhbmdlbG9nLmNvbS9lbi8xLjAuMC8pDQo+ICAjIyMgUmVtb3ZlZCAvIHN1cHBvcnQgZG93
bmdyYWRlZA0KPiAgIC0gZHJvcHBlZCBzdXBwb3J0IGZvciB0aGUgKHg4Ni1vbmx5KSAidmVzYS1t
dHJyIiBhbmQgInZlc2EtcmVtYXAiDQo+IGNvbW1hbmQgbGluZSBvcHRpb25zDQo+IA0KPiArIyMj
IEFkZGVkDQo+ICsgLSBTdXBwb3J0IFZJUlRfU1NCRCBmZWF0dXJlIGZvciBIVk0gZ3Vlc3RzIG9u
IEFNRC4NCj4gKw0KPiAgIyMgWzQuMTYuMF0oaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zaG9ydGxvZztoPXN0YWdpbmcpDQo+IC0gMjAyMS0xMi0wMg0KPiANCj4g
ICMjIyBSZW1vdmVkDQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jIGIveGVu
L2FyY2gveDg2L2NwdS9hbWQuYw0KPiBpbmRleCA0OTk5ZjhiZTJiLi41ZjllNzM0ZTg0IDEwMDY0
NA0KPiAtLS0gYS94ZW4vYXJjaC94ODYvY3B1L2FtZC5jDQo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9j
cHUvYW1kLmMNCj4gQEAgLTQ4LDYgKzQ4LDcgQEAgYm9vbGVhbl9wYXJhbSgiYWxsb3dfdW5zYWZl
Iiwgb3B0X2FsbG93X3Vuc2FmZSk7DQo+IA0KPiAgLyogU2lnbmFsIHdoZXRoZXIgdGhlIEFDUEkg
QzFFIHF1aXJrIGlzIHJlcXVpcmVkLiAqLw0KPiAgYm9vbCBfX3JlYWRfbW9zdGx5IGFtZF9hY3Bp
X2MxZV9xdWlyazsNCj4gK2Jvb2wgX19yb19hZnRlcl9pbml0IGFtZF9sZWdhY3lfc3NiZDsNCj4g
DQo+ICBzdGF0aWMgaW5saW5lIGludCByZG1zcl9hbWRfc2FmZSh1bnNpZ25lZCBpbnQgbXNyLCB1
bnNpZ25lZCBpbnQgKmxvLA0KPiAgCQkJCSB1bnNpZ25lZCBpbnQgKmhpKQ0KPiBAQCAtNjg1LDIz
ICs2ODYsMTAgQEAgdm9pZCBhbWRfaW5pdF9sZmVuY2Uoc3RydWN0IGNwdWluZm9feDg2ICpjKQ0K
PiAgICogUmVmZXIgdG8gdGhlIEFNRCBTcGVjdWxhdGl2ZSBTdG9yZSBCeXBhc3Mgd2hpdGVwYXBl
cjoNCj4gICAqIGh0dHBzOi8vZGV2ZWxvcGVyLmFtZC5jb20vd3AtDQo+IGNvbnRlbnQvcmVzb3Vy
Y2VzLzEyNDQ0MV9BTUQ2NF9TcGVjdWxhdGl2ZVN0b3JlQnlwYXNzRGlzYWJsZV9XaGl0ZQ0KPiBw
YXBlcl9maW5hbC5wZGYNCj4gICAqLw0KPiAtdm9pZCBhbWRfaW5pdF9zc2JkKGNvbnN0IHN0cnVj
dCBjcHVpbmZvX3g4NiAqYykNCj4gK3N0YXRpYyBib29sIHNldF9sZWdhY3lfc3NiZChjb25zdCBz
dHJ1Y3QgY3B1aW5mb194ODYgKmMsIGJvb2wgZW5hYmxlKQ0KPiAgew0KPiAgCWludCBiaXQgPSAt
MTsNCj4gDQo+IC0JaWYgKGNwdV9oYXNfc3NiX25vKQ0KPiAtCQlyZXR1cm47DQo+IC0NCj4gLQlp
ZiAoY3B1X2hhc19hbWRfc3NiZCkgew0KPiAtCQkvKiBIYW5kbGVkIGJ5IGNvbW1vbiBNU1JfU1BF
Q19DVFJMIGxvZ2ljICovDQo+IC0JCXJldHVybjsNCj4gLQl9DQo+IC0NCj4gLQlpZiAoY3B1X2hh
c192aXJ0X3NzYmQpIHsNCj4gLQkJd3Jtc3JsKE1TUl9WSVJUX1NQRUNfQ1RSTCwgb3B0X3NzYmQg
Pw0KPiBTUEVDX0NUUkxfU1NCRCA6IDApOw0KPiAtCQlyZXR1cm47DQo+IC0JfQ0KPiAtDQo+ICAJ
c3dpdGNoIChjLT54ODYpIHsNCj4gIAljYXNlIDB4MTU6IGJpdCA9IDU0OyBicmVhazsNCj4gIAlj
YXNlIDB4MTY6IGJpdCA9IDMzOyBicmVhazsNCj4gQEAgLTcxNSwyMCArNzAzLDExOSBAQCB2b2lk
IGFtZF9pbml0X3NzYmQoY29uc3Qgc3RydWN0IGNwdWluZm9feDg2ICpjKQ0KPiAgCQlpZiAocmRt
c3Jfc2FmZShNU1JfQU1ENjRfTFNfQ0ZHLCB2YWwpIHx8DQo+ICAJCSAgICAoew0KPiAgCQkJICAg
IHZhbCAmPSB+bWFzazsNCj4gLQkJCSAgICBpZiAob3B0X3NzYmQpDQo+ICsJCQkgICAgaWYgKGVu
YWJsZSkNCj4gIAkJCQkgICAgdmFsIHw9IG1hc2s7DQo+ICAJCQkgICAgZmFsc2U7DQo+ICAJCSAg
ICB9KSB8fA0KPiAgCQkgICAgd3Jtc3Jfc2FmZShNU1JfQU1ENjRfTFNfQ0ZHLCB2YWwpIHx8DQo+
ICAJCSAgICAoew0KPiAgCQkJICAgIHJkbXNybChNU1JfQU1ENjRfTFNfQ0ZHLCB2YWwpOw0KPiAt
CQkJICAgICh2YWwgJiBtYXNrKSAhPSAob3B0X3NzYmQgKiBtYXNrKTsNCj4gKwkJCSAgICAodmFs
ICYgbWFzaykgIT0gKGVuYWJsZSAqIG1hc2spOw0KPiAgCQkgICAgfSkpDQo+ICAJCQliaXQgPSAt
MTsNCj4gIAl9DQo+IA0KPiAtCWlmIChiaXQgPCAwKQ0KPiArCXJldHVybiBiaXQgPj0gMDsNCj4g
K30NCj4gKw0KPiArdm9pZCBhbWRfaW5pdF9zc2JkKGNvbnN0IHN0cnVjdCBjcHVpbmZvX3g4NiAq
YykNCj4gK3sNCj4gKwlpZiAoY3B1X2hhc19zc2Jfbm8pDQo+ICsJCXJldHVybjsNCj4gKw0KPiAr
CWlmIChjcHVfaGFzX2FtZF9zc2JkKSB7DQo+ICsJCS8qIEhhbmRsZWQgYnkgY29tbW9uIE1TUl9T
UEVDX0NUUkwgbG9naWMgKi8NCj4gKwkJcmV0dXJuOw0KPiArCX0NCj4gKw0KPiArCWlmIChjcHVf
aGFzX3ZpcnRfc3NiZCkgew0KPiArCQl3cm1zcmwoTVNSX1ZJUlRfU1BFQ19DVFJMLCBvcHRfc3Ni
ZCA/DQo+IFNQRUNfQ1RSTF9TU0JEIDogMCk7DQo+ICsJCXJldHVybjsNCj4gKwl9DQo+ICsNCj4g
KwlpZiAoIXNldF9sZWdhY3lfc3NiZChjLCBvcHRfc3NiZCkpIHsNCj4gIAkJcHJpbnRrX29uY2Uo
WEVOTE9HX0VSUiAiTm8gU1NCRCBjb250cm9scyBhdmFpbGFibGVcbiIpOw0KPiArCQlpZiAoYW1k
X2xlZ2FjeV9zc2JkKQ0KPiArCQkJcGFuaWMoIkNQVSBmZWF0dXJlIG1pc21hdGNoOiBubyBsZWdh
Y3kgU1NCRFxuIik7DQo+ICsJfSBlbHNlIGlmIChjID09ICZib290X2NwdV9kYXRhKQ0KPiArCQlh
bWRfbGVnYWN5X3NzYmQgPSB0cnVlOw0KPiArfQ0KPiArDQo+ICtzdGF0aWMgc3RydWN0IHNzYmRf
bHNfY2ZnIHsNCj4gKyAgICBib29sIGxvY2tlZDsNCj4gKyAgICB1bnNpZ25lZCBpbnQgY291bnQ7
DQo+ICt9IF9fY2FjaGVsaW5lX2FsaWduZWQgKnNzYmRfbHNfY2ZnOw0KPiArc3RhdGljIHVuc2ln
bmVkIGludCBfX3JvX2FmdGVyX2luaXQgc3NiZF9tYXhfY29yZXM7DQo+ICsjZGVmaW5lIEFNRF9G
QU0xN0hfTUFYX1NPQ0tFVFMgMg0KPiArDQo+ICtib29sIF9faW5pdCBhbWRfc2V0dXBfbGVnYWN5
X3NzYmQodm9pZCkNCj4gK3sNCj4gKwl1bnNpZ25lZCBpbnQgaTsNCj4gKw0KPiArCWlmICgoYm9v
dF9jcHVfZGF0YS54ODYgIT0gMHgxNyAmJiBib290X2NwdV9kYXRhLng4NiAhPSAweDE4KSB8fA0K
PiArCSAgICBib290X2NwdV9kYXRhLng4Nl9udW1fc2libGluZ3MgPD0gMSkNCj4gKwkJcmV0dXJu
IHRydWU7DQo+ICsNCj4gKwkvKg0KPiArCSAqIE9uZSBjb3VsZCBiZSBmb3JnaXZlbiBmb3IgdGhp
bmtpbmcgdGhhdCBjLT54ODZfbWF4X2NvcmVzIGlzIHRoZQ0KPiArCSAqIGNvcnJlY3QgdmFsdWUg
dG8gdXNlIGhlcmUuDQo+ICsJICoNCj4gKwkgKiBIb3dldmVyLCB0aGF0IHZhbHVlIGlzIGRlcml2
ZWQgZnJvbSB0aGUgY3VycmVudCBjb25maWd1cmF0aW9uLA0KPiBhbmQNCj4gKwkgKiBjLT5jcHVf
Y29yZV9pZCBpcyBzcGFyc2Ugb24gYWxsIGJ1dCB0aGUgdG9wIGVuZCBDUFVzLiAgRGVyaXZlDQo+
ICsJICogbWF4X2NwdXMgZnJvbSBBcGljSWRDb3JlSWRTaXplIHdoaWNoIHdpbGwgY292ZXIgYW55
IHNwYXJzZW5lc3MuDQo+ICsJICovDQo+ICsJaWYgKGJvb3RfY3B1X2RhdGEuZXh0ZW5kZWRfY3B1
aWRfbGV2ZWwgPj0gMHg4MDAwMDAwOCkgew0KPiArCQlzc2JkX21heF9jb3JlcyA9IDF1IDw8DQo+
IE1BU0tfRVhUUihjcHVpZF9lY3goMHg4MDAwMDAwOCksIDB4ZjAwMCk7DQo+ICsJCXNzYmRfbWF4
X2NvcmVzIC89IGJvb3RfY3B1X2RhdGEueDg2X251bV9zaWJsaW5nczsNCj4gKwl9DQo+ICsJaWYg
KCFzc2JkX21heF9jb3JlcykNCj4gKwkJcmV0dXJuIGZhbHNlOw0KPiArDQo+ICsJc3NiZF9sc19j
ZmcgPSB4emFsbG9jX2FycmF5KHN0cnVjdCBzc2JkX2xzX2NmZywNCj4gKwkgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgc3NiZF9tYXhfY29yZXMgKiBBTURfRkFNMTdIX01BWF9TT0NLRVRTKTsN
Cj4gKwlpZiAoIXNzYmRfbHNfY2ZnKQ0KPiArCQlyZXR1cm4gZmFsc2U7DQo+ICsNCj4gKwlpZiAo
b3B0X3NzYmQpDQo+ICsJCWZvciAoaSA9IDA7IGkgPCBzc2JkX21heF9jb3JlcyAqDQo+IEFNRF9G
QU0xN0hfTUFYX1NPQ0tFVFM7IGkrKykNCj4gKwkJCS8qIFNldCBpbml0aWFsIHN0YXRlLCBhcHBs
aWVzIHRvIGFueSAoaG90cGx1ZykgQ1BVLiAqLw0KPiArCQkJc3NiZF9sc19jZmdbaV0uY291bnQg
PQ0KPiBib290X2NwdV9kYXRhLng4Nl9udW1fc2libGluZ3M7DQo+ICsNCj4gKwlyZXR1cm4gdHJ1
ZTsNCj4gK30NCj4gKw0KPiArLyoNCj4gKyAqIEV4ZWN1dGVkIGZyb20gR0lGPT0wIGNvbnRleHQ6
IGF2b2lkIHVzaW5nIEJVRy9BU1NFUlQgb3Igb3RoZXINCj4gZnVuY3Rpb25hbGl0eQ0KPiArICog
dGhhdCByZWxpZXMgb24gZXhjZXB0aW9ucyBhcyB0aG9zZSBhcmUgbm90IGV4cGVjdGVkIHRvIHJ1
biBpbiBHSUY9PTANCj4gKyAqIGNvbnRleHQuDQo+ICsgKi8NCj4gK3ZvaWQgYW1kX3NldF9sZWdh
Y3lfc3NiZChib29sIGVuYWJsZSkNCj4gK3sNCj4gKwljb25zdCBzdHJ1Y3QgY3B1aW5mb194ODYg
KmMgPSAmY3VycmVudF9jcHVfZGF0YTsNCj4gKwlzdHJ1Y3Qgc3NiZF9sc19jZmcgKnN0YXR1czsN
Cj4gKw0KPiArCWlmICgoYy0+eDg2ICE9IDB4MTcgJiYgYy0+eDg2ICE9IDB4MTgpIHx8IGMtPng4
Nl9udW1fc2libGluZ3MgPD0gMSkNCj4gew0KPiArCQlzZXRfbGVnYWN5X3NzYmQoYywgZW5hYmxl
KTsNCj4gKwkJcmV0dXJuOw0KPiArCX0NCj4gKw0KPiArCXN0YXR1cyA9ICZzc2JkX2xzX2NmZ1tj
LT5waHlzX3Byb2NfaWQgKiBzc2JkX21heF9jb3JlcyArDQo+ICsJICAgICAgICAgICAgICAgICAg
ICAgIGMtPmNwdV9jb3JlX2lkXTsNCj4gKw0KPiArCS8qDQo+ICsJICogT3BlbiBjb2RlIGEgdmVy
eSBzaW1wbGUgc3BpbmxvY2s6IHRoaXMgZnVuY3Rpb24gaXMgdXNlZCB3aXRoDQo+IEdJRj09MA0K
PiArCSAqIGFuZCBkaWZmZXJlbnQgSUYgdmFsdWVzLCBzbyB3b3VsZCB0cmlnZ2VyIHRoZSBjaGVj
a2xvY2sgZGV0ZWN0b3IuDQo+ICsJICogSW5zdGVhZCBvZiB0cnlpbmcgdG8gd29ya2Fyb3VuZCB0
aGUgZGV0ZWN0b3IsIHVzZSBhIHZlcnkgc2ltcGxlDQo+IGxvY2sNCj4gKwkgKiBpbXBsZW1lbnRh
dGlvbjogaXQncyBiZXR0ZXIgdG8gcmVkdWNlIHRoZSBhbW91bnQgb2YgY29kZQ0KPiBleGVjdXRl
ZA0KPiArCSAqIHdpdGggR0lGPT0wLg0KPiArCSAqLw0KPiArCXdoaWxlICh0ZXN0X2FuZF9zZXRf
Ym9vbChzdGF0dXMtPmxvY2tlZCkpDQo+ICsJCWNwdV9yZWxheCgpOw0KPiArCXN0YXR1cy0+Y291
bnQgKz0gZW5hYmxlID8gMSA6IC0xOw0KPiArCWlmIChlbmFibGUgPyBzdGF0dXMtPmNvdW50ID09
IDEgOiAhc3RhdHVzLT5jb3VudCkNCj4gKwkJc2V0X2xlZ2FjeV9zc2JkKGMsIGVuYWJsZSk7DQo+
ICsJYmFycmllcigpOw0KPiArCXdyaXRlX2F0b21pYygmc3RhdHVzLT5sb2NrZWQsIGZhbHNlKTsN
Cj4gIH0NCj4gDQo+ICB2b2lkIF9faW5pdCBkZXRlY3RfemVuMl9udWxsX3NlZ19iZWhhdmlvdXIo
dm9pZCkNCj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9odm0vc3ZtL3N2bS5jIGIveGVuL2Fy
Y2gveDg2L2h2bS9zdm0vc3ZtLmMNCj4gaW5kZXggYzRiZGVhZmY1Mi4uM2NjNWZjZGM0NCAxMDA2
NDQNCj4gLS0tIGEveGVuL2FyY2gveDg2L2h2bS9zdm0vc3ZtLmMNCj4gKysrIGIveGVuL2FyY2gv
eDg2L2h2bS9zdm0vc3ZtLmMNCj4gQEAgLTMxMjYsNiArMzEyNiw4IEBAIHZvaWQgdm1leGl0X3Zp
cnRfc3BlY19jdHJsKHZvaWQpDQo+IA0KPiAgICAgIGlmICggY3B1X2hhc192aXJ0X3NzYmQgKQ0K
PiAgICAgICAgICB3cm1zcihNU1JfVklSVF9TUEVDX0NUUkwsIHZhbCwgMCk7DQo+ICsgICAgZWxz
ZQ0KPiArICAgICAgICBhbWRfc2V0X2xlZ2FjeV9zc2JkKHZhbCk7DQo+ICB9DQo+IA0KPiAgLyog
Q2FsbGVkIHdpdGggR0lGPTAuICovDQo+IEBAIC0zMTM4LDYgKzMxNDAsOCBAQCB2b2lkIHZtZW50
cnlfdmlydF9zcGVjX2N0cmwodm9pZCkNCj4gDQo+ICAgICAgaWYgKCBjcHVfaGFzX3ZpcnRfc3Ni
ZCApDQo+ICAgICAgICAgIHdybXNyKE1TUl9WSVJUX1NQRUNfQ1RSTCwgdmFsLCAwKTsNCj4gKyAg
ICBlbHNlDQo+ICsgICAgICAgIGFtZF9zZXRfbGVnYWN5X3NzYmQodmFsKTsNCj4gIH0NCj4gDQo+
ICAvKg0KPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2FtZC5oDQo+IGIv
eGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2FtZC5oDQo+IGluZGV4IGE4MjM4MmU2YmYuLjZhNDJm
Njg1NDIgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9hbWQuaA0KPiAr
KysgYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vYW1kLmgNCj4gQEAgLTE1MSw0ICsxNTEsOCBA
QCB2b2lkIGNoZWNrX2VuYWJsZV9hbWRfbW1jb25mX2RtaSh2b2lkKTsNCj4gIGV4dGVybiBib29s
IGFtZF9hY3BpX2MxZV9xdWlyazsNCj4gIHZvaWQgYW1kX2NoZWNrX2Rpc2FibGVfYzFlKHVuc2ln
bmVkIGludCBwb3J0LCB1OCB2YWx1ZSk7DQo+IA0KPiArZXh0ZXJuIGJvb2wgYW1kX2xlZ2FjeV9z
c2JkOw0KPiArYm9vbCBhbWRfc2V0dXBfbGVnYWN5X3NzYmQodm9pZCk7DQo+ICt2b2lkIGFtZF9z
ZXRfbGVnYWN5X3NzYmQoYm9vbCBlbmFibGUpOw0KPiArDQo+ICAjZW5kaWYgLyogX19BTURfSF9f
ICovDQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMgYi94ZW4vYXJjaC94
ODYvc3BlY19jdHJsLmMNCj4gaW5kZXggMGQ1ZWM4NzdkMS4uNDk1ZTZmOTQwNSAxMDA2NDQNCj4g
LS0tIGEveGVuL2FyY2gveDg2L3NwZWNfY3RybC5jDQo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9zcGVj
X2N0cmwuYw0KPiBAQCAtMjIsNiArMjIsNyBAQA0KPiAgI2luY2x1ZGUgPHhlbi9wYXJhbS5oPg0K
PiAgI2luY2x1ZGUgPHhlbi93YXJuaW5nLmg+DQo+IA0KPiArI2luY2x1ZGUgPGFzbS9hbWQuaD4N
Cj4gICNpbmNsdWRlIDxhc20vaHZtL3N2bS9zdm0uaD4NCj4gICNpbmNsdWRlIDxhc20vbWljcm9j
b2RlLmg+DQo+ICAjaW5jbHVkZSA8YXNtL21zci5oPg0KPiBAQCAtMTA3Myw3ICsxMDc0LDggQEAg
dm9pZCBfX2luaXQgaW5pdF9zcGVjdWxhdGlvbl9taXRpZ2F0aW9ucyh2b2lkKQ0KPiAgICAgIH0N
Cj4gDQo+ICAgICAgLyogU3VwcG9ydCBWSVJUX1NQRUNfQ1RSTC5TU0JEIGlmIEFNRF9TU0JEIGlz
IG5vdCBhdmFpbGFibGUuICovDQo+IC0gICAgaWYgKCBvcHRfbXNyX3NjX2h2bSAmJiAhY3B1X2hh
c19hbWRfc3NiZCAmJiBjcHVfaGFzX3ZpcnRfc3NiZCApDQo+ICsgICAgaWYgKCBvcHRfbXNyX3Nj
X2h2bSAmJiAhY3B1X2hhc19hbWRfc3NiZCAmJg0KPiArICAgICAgICAgKGNwdV9oYXNfdmlydF9z
c2JkIHx8IChhbWRfbGVnYWN5X3NzYmQgJiYNCj4gYW1kX3NldHVwX2xlZ2FjeV9zc2JkKCkpKSAp
DQo+ICAgICAgICAgIHNldHVwX2ZvcmNlX2NwdV9jYXAoWDg2X0ZFQVRVUkVfVklSVF9TQ19NU1Jf
SFZNKTsNCj4gDQo+ICAgICAgLyogSWYgd2UgaGF2ZSBJQlJTIGF2YWlsYWJsZSwgc2VlIHdoZXRo
ZXIgd2Ugc2hvdWxkIHVzZSBpdC4gKi8NCj4gLS0NCj4gMi4zNi4wDQoNCg==


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:26:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350978.577453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22dC-0007QY-Kw; Fri, 17 Jun 2022 03:26:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350978.577453; Fri, 17 Jun 2022 03:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22dC-0007QR-Hv; Fri, 17 Jun 2022 03:26:46 +0000
Received: by outflank-mailman (input) for mailman id 350978;
 Fri, 17 Jun 2022 03:26:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YAhX=WY=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o22dB-000652-CB
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 03:26:45 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0602.outbound.protection.outlook.com
 [2a01:111:f400:fe05::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f9407a2-eded-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 05:26:44 +0200 (CEST)
Received: from AM7PR03CA0018.eurprd03.prod.outlook.com (2603:10a6:20b:130::28)
 by DB7PR08MB3739.eurprd08.prod.outlook.com (2603:10a6:10:79::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Fri, 17 Jun
 2022 03:26:39 +0000
Received: from AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::48) by AM7PR03CA0018.outlook.office365.com
 (2603:10a6:20b:130::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16 via Frontend
 Transport; Fri, 17 Jun 2022 03:26:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT008.mail.protection.outlook.com (10.152.16.123) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 03:26:39 +0000
Received: ("Tessian outbound 01afcf8ccfad:v120");
 Fri, 17 Jun 2022 03:26:39 +0000
Received: from 6d4723de48ce.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A751F9F9-C370-47B6-BD17-8BB5D1F4A263.1; 
 Fri, 17 Jun 2022 03:26:33 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6d4723de48ce.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 03:26:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5464.eurprd08.prod.outlook.com (2603:10a6:20b:10a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Fri, 17 Jun
 2022 03:26:29 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%7]) with mapi id 15.20.5353.015; Fri, 17 Jun 2022
 03:26:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f9407a2-eded-11ec-ab14-113154c10af9
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=CtBM35C3cLx2eBiklmjYmBcs5+rV/b+ML6CcVdIuq/CfnkGVKsD/S8WSup/uMZDpE37x3VKR4yNJ2imosmUKIMRh2gku2+Z4ghhVxgE2SGLDNaFGrbEfAXy76Vt5xzv6bk5jChbl0ffM0//q3lL1rOtz/Sy34Hw1ZhPKw0EjSVx4lVQKL+dIl6AQrISboUJYv9/R79+a8RHp8RFyprwkGMQ/+e+S3aWe+eKE7e+lSuF6Qv3ydR5j8fhLOF9p53+9bkMpjW3W+Zzd9c/RMWHpD1v6KZR7nj7kN+21bAWdglH43r4aKm84BsBUvJWIL/eQrITTCthmx4aozRjs3NrnhQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Gacrb8LjpHK9r2UNzDUbNdw4fZGBTKqV2TlBvwjjJ5k=;
 b=GxgXzqkup6My2OHpOls4qqvJGUHqAqiJR18NJWaeA+WSQ7HST99CuIV9Wth91DydWYghNyUvzMPhFfRZO94sHMFA1AeARWQ2M0hSkSmP7/9lLXdOzTEFdSvMZNeyPO78SKzDK39caxc/y4uerrXtr/soiUerR9SBnBnA6UDvJG+UTVo9nAVHghgKZZtCqYnbx9jjxib3Qvi4SaA5rcsBlXdAuhiraqtnnC/BjhjHvGEw0RPwBxmGdDpv9Gv6WGgPAk8RrHy1SKoyOw6Sr5U4HTI+htyL+ChZcij/rKudAh/z+TUcvoulwws3+Qz4VCnOz2z309DfTA5MDBg5pn0jmA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gacrb8LjpHK9r2UNzDUbNdw4fZGBTKqV2TlBvwjjJ5k=;
 b=TqGwHkJEEmkCk26B6gzKdqbAQzJ1G+9YiOZfc8x5mTvIYExIsx+1bvhA9lwCaOHoM5MKOn2UqaiAZ09KMOxXh9dOij8uJC78MWFRvfcXA7DGVLnUWnQ6sU34UJWg+geKzqPOCdGLJTkCT/jqThbLpPcFVnC9uZFhG6xzB77ps5I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=abAUTts9yxxEUbiLrXj+DGqUkr/8KE/sJ6nVKk7HsmYBTn0FDu1EFhR9vfWKBEU056yZ72cPZr1M4J47kPNiawj3fGFSHYwzL9Gu29ba1PFZTscdbDbyT6834P1ceVq0FHi2tSDWdyo8+PJ2VQJV323hgmLN7MXmdhOEqVTVt0xNkhCdjT/eWYfkKW7q7Nt3Xmr6rCYJdWJqcjgjYsUj+UX0wCQBCKavnrkspdII4RD4IbGXM/djQf87pf22GAZyj5PrMBLo+k7vvNN+Oa6UVnz/YTvPoeALd+F62dxdQL0Z0P2Lcii+hT/PXqWUtuRqH/9pfbicxPKvZNAX/2xd9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Gacrb8LjpHK9r2UNzDUbNdw4fZGBTKqV2TlBvwjjJ5k=;
 b=ZjveQIAFn8btb3ticao9m95QbvBU+WUeHskhbjPs1R/IEPqOCdjxoNYmeq++3tMKYCdx2v1X799rvwZothwgZiX3bAFYAn8BddH4Cgj0Elsy9qLKDGrz0l0qL2uAK7t8ehetBaJrYqExic5pIqDl9l6SH5IDcKsAWnNmJ/v0gTtZfSKA3oJsujwmQ4EmZwNB7nDInlTioQgc2rUHTk8wyPoP1SiIIDM5XOksj5eB/O/WDVJFdCZzoKmTyvK9orpGbvRniq6BFN4v82FhYCmKlFpdaC1CzqENfmHzpFwBpTBGznyrgWx5LWhkzacr3ACzuCejKnHAk8MeS6BicjR8iA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gacrb8LjpHK9r2UNzDUbNdw4fZGBTKqV2TlBvwjjJ5k=;
 b=TqGwHkJEEmkCk26B6gzKdqbAQzJ1G+9YiOZfc8x5mTvIYExIsx+1bvhA9lwCaOHoM5MKOn2UqaiAZ09KMOxXh9dOij8uJC78MWFRvfcXA7DGVLnUWnQ6sU34UJWg+geKzqPOCdGLJTkCT/jqThbLpPcFVnC9uZFhG6xzB77ps5I=
From: Henry Wang <Henry.Wang@arm.com>
To: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Dario Faggioli
	<dfaggioli@suse.com>, Anthony PERARD <anthony.perard@citrix.com>
Subject: RE: [RFC PATCH 0/2] Add a new acquire resource to query vcpu
 statistics
Thread-Topic: [RFC PATCH 0/2] Add a new acquire resource to query vcpu
 statistics
Thread-Index: AQHYafs/iBEBw9uXUkyBIqTOdVcxNq1TF8bA
Date: Fri, 17 Jun 2022 03:26:29 +0000
Message-ID:
 <AS8PR08MB7991F28D180D1781FE8E9D0792AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <cover.1652797713.git.matias.vara@vates.fr>
In-Reply-To: <cover.1652797713.git.matias.vara@vates.fr>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F86346322E199A4CBBA7963D3ED8D915.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 1b6a4420-f2a6-462b-9b1f-08da50113121
x-ms-traffictypediagnostic:
	AM7PR08MB5464:EE_|AM5EUR03FT008:EE_|DB7PR08MB3739:EE_
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB3739ACD7991D576A435E4CDF92AF9@DB7PR08MB3739.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 4Lt31oa89SSGH3KYTfJrDPArjaXQAJmFIHBFM1DyakdSXc0Jlxx/aU6zTxofDFSDWzKBODQNHEaattpIkjehaa5DFekgckF/dV6R7CLuHRtXwKSb/MmjLbfehdcv2wllBZGxUEHyBr085sgsT2OW+u1mtPkxs+tLvGD6Px9ccYZMz6Z9isUUftr99i2wefg6m4TZv6NMoErvzZtzLtCU+t6fzb+HXTC/PvyZTuBQhgjSoqxiRgeJUlbF1XZdExUpTeS7ZjU7/N8bXo3Oi9/8azMFt+qc1nM+ZybkkuqhGOZZaz9zX//wEC1EMgGVfPbPnDCV7yAmJRo+273kEFaf9Kh6pJjISHs7x3NFnyLrPcsxlAZ+DkgyIULGHzl7Rxujy3VHV+0O8y1cg81wAPBOIjFyNVntCOGWAdEflHXxfjMBI7EdbaaVej+wigQqYK3svJx15OTQ7deBnJKoAl0ZXAki/dSZp3At8uyNBZqcrSijfDmFk5ywRFCUhMkzog9DGbSITGtTHduU7alVOB9H/z+KtLrtHfJu2Hrm7TBgA25HKpGtlC/6+sBUHtAwfj46miJzU8wwflTB1WzU9qrYhXRWjg1zi+69zvveqw37NZiJfO0VgNY+XLt0MRLPHveTFKX/Jp4hNCDlbe0e5qu8e+hyum1OWBXOHT94RsLOJy7fljgY1kfevoBfMSitj9ztELpiP968OixkJUncfTuJ4QhxWy55K625zAF6qXS/L1psIG0mLcEDkP8S1szMw8rhkV++G/l0TxGvJzhBmwcQjKFlx1HNlz1NeAZFWuBHEA+tHVcvWE+lMR+Wxt88mdSJ2rB1S4hYrdKlGBm35lV3xQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(66446008)(64756008)(4326008)(33656002)(52536014)(2906002)(8676002)(76116006)(66476007)(66556008)(66946007)(7416002)(55016003)(86362001)(5660300002)(316002)(8936002)(54906003)(498600001)(7696005)(110136005)(6506007)(26005)(966005)(9686003)(71200400001)(83380400001)(38100700002)(122000001)(186003)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5464
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	247602ee-206d-419f-0dac-08da50112b20
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hgP/OWL2EP3n2NSeXelpFcwJMPGenw2LamHBCPbIyyKNUMni6Pq9XCo2k44DdmE9rZHZI650YcryMPeBLStrBqSppFl3jyO79xV+POzc1TjJTpe+w/ST3/gCElwXX8erc5dK75U/gjig8V6GHjGIlbq6crS/TTc48HRZVVrX3sJtfUIKys8rxmRaYOtbk6/xXRhZgFV8Pf90V0R9uRGRYpoYbfyMrTz4lP/i160fu8tf407GV/P7Pm28WGxfuPtZLbFpNatHJJMJMxFCZNqiEHX8YQF4IO0IiZH9hi3IUlCqfkzcfldFLi1/nLKGqIf4uy+f0X0wFfAysQemJk6VgwPKgnLyKAvlCxJmjOkptfbUIJdb5+DbmuoG5FUyfyPCnNJe9G9ppqtc6/d7IlTMM3geUTrrnzBUn5pPmEH8Wf0r1JsWbqHdj4jOS3QP8v1Q85Qjo79UaZCkj1ZOacmcY+jky+scaspprZKw+pRWy98eGKSP5EBDxpWsBS+9JhX4lj5BlsHlfizR+ttkilMHQF+JqvCZxC4zWUT5csgMDLOb7jinJAbgw1zNtYvnn6LV2CDVR5APi8BMqhchiJXl/MuQ9uWSsJnjn1uiz1M0hMM4Dgq3sOg3n2MBFSOn3iensP6fuYvvgbl3SUmHdg2ZTofs9DGnGKmZSDTgNieufX1YMNWg/vbaaYE9IsToUCmNfBMD3MaySljUW1V7nT5dolZMN9FTSTewc3PobLSrxGOmQxPd7w+/YUpOIAxpi5TQuOWTjXrGDn8UKSllZGA7u8Bp5PpicEBNSTwbSnp9xB8=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(46966006)(40470700004)(36840700001)(316002)(110136005)(86362001)(107886003)(81166007)(36860700001)(54906003)(70206006)(4326008)(8676002)(70586007)(966005)(55016003)(8936002)(33656002)(5660300002)(508600001)(7696005)(6506007)(9686003)(83380400001)(26005)(40460700003)(336012)(82310400005)(2906002)(186003)(47076005)(356005)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 03:26:39.4673
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b6a4420-f2a6-462b-9b1f-08da50113121
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3739

Hi,

It seems that this series has been stale for more than a month, with no comment
from the maintainer for patch #1 [1] and actions needed from the author for
patch #2 [2]. So I am sending this email as a gentle reminder. Thanks!

[1] https://patchwork.kernel.org/project/xen-devel/patch/d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr/
[2] https://patchwork.kernel.org/project/xen-devel/patch/e233c4f60c6fe97b93b3adf9affeb0404c554130.1652797713.git.matias.vara@vates.fr/

Kind regards,
Henry

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Matias Ezequiel Vara Larsen
> Subject: [RFC PATCH 0/2] Add a new acquire resource to query vcpu
> statistics
>
> Hello all,
>
> The purpose of this RFC is to get feedback about a new acquire resource
> that exposes vcpu statistics for a given domain. The current mechanism to
> get those statistics is by querying the hypervisor. This mechanism relies
> on a hypercall and holds the domctl spinlock during its execution. When a
> pv tool like xcp-rrdd periodically samples these counters, it ends up
> affecting other paths that share that spinlock. By using acquire
> resources, the pv tool only requires a few hypercalls to set up the shared
> memory region, and samples are then obtained without issuing any further
> hypercall. The original idea was suggested by Andrew Cooper, with whom I
> have been discussing how to implement the current PoC. You can find the
> RFC patch series at [1]. The series is rebased on top of stable-4.15.
>
> I am currently a bit blocked on 1) what to expose and 2) how to expose it.
> For 1), I decided to expose what xcp-rrdd is querying, e.g.,
> XEN_DOMCTL_getvcpuinfo. More precisely, xcp-rrdd gets
> runstate.time[RUNSTATE_running]. This is a uint64_t counter. However, the
> time spent in other states may be interesting too. Regarding 2), I am not
> sure if simply using an array of uint64_t is enough or if a different
> interface should be exposed. The remaining question is when to get new
> values. For the moment, I am updating this counter during
> vcpu_runstate_change().
>
> The current series includes a simple pv tool that shows how this new
> interface is used. This tool maps the counter and periodically samples it.
>
> Any feedback/help would be appreciated.
>
> Thanks, Matias.
>
> [1] https://github.com/MatiasVara/xen/tree/feature_stats
>
> Matias Ezequiel Vara Larsen (2):
>   xen/memory : Add stats_table resource type
>   tools/misc: Add xen-stats tool
>
>  tools/misc/Makefile         |  5 +++
>  tools/misc/xen-stats.c      | 83 +++++++++++++++++++++++++++++++++++++
>  xen/common/domain.c         | 42 +++++++++++++++++++
>  xen/common/memory.c         | 29 +++++++++++++
>  xen/common/sched/core.c     |  5 +++
>  xen/include/public/memory.h |  1 +
>  xen/include/xen/sched.h     |  5 +++
>  7 files changed, 170 insertions(+)
>  create mode 100644 tools/misc/xen-stats.c
>
> --
> 2.25.1
>



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:27:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:27:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.350986.577464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22eD-0008Af-1v; Fri, 17 Jun 2022 03:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 350986.577464; Fri, 17 Jun 2022 03:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o22eC-0008AY-Us; Fri, 17 Jun 2022 03:27:48 +0000
Received: by outflank-mailman (input) for mailman id 350986;
 Fri, 17 Jun 2022 03:27:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YAhX=WY=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o22eB-0008AP-Pp
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 03:27:47 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe05::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74802f9d-eded-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 05:27:46 +0200 (CEST)
Received: from AS9P251CA0012.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:50f::14)
 by AS4PR08MB8167.eurprd08.prod.outlook.com (2603:10a6:20b:58e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Fri, 17 Jun
 2022 03:27:43 +0000
Received: from VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:50f:cafe::a4) by AS9P251CA0012.outlook.office365.com
 (2603:10a6:20b:50f::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16 via Frontend
 Transport; Fri, 17 Jun 2022 03:27:43 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT020.mail.protection.outlook.com (10.152.18.242) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 03:27:42 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 17 Jun 2022 03:27:42 +0000
Received: from ebf5bae1dc83.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D6C4F512-6CB7-4A0E-9E55-304DF5848B2E.1; 
 Fri, 17 Jun 2022 03:27:36 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ebf5bae1dc83.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 03:27:36 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VE1PR08MB5071.eurprd08.prod.outlook.com (2603:10a6:803:111::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Fri, 17 Jun
 2022 03:27:28 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%7]) with mapi id 15.20.5353.015; Fri, 17 Jun 2022
 03:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74802f9d-eded-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Tsa1Neqb8w84xP0+ZQ6/EIH1TggbSRC8YlSHoSW9PoI1ef7iyyyYJCjegldRBmWyGEMuvmYhXKlRmceZVEs0o8scdA61hzNzpeQplr/Bc6amqJmT59hHhFI+iZ90+Pmp6KQ+vi1lhIVPxFh7F0b9/km5mdC6Lz1vfBHo5XDNeh7TPxkdJu+AMfYSv9AkYX6WBzu9kCzcUgOBpLuOMsmys13fY+7EpjRonKMlio6BsImlZ8C6wv8wmTZHaf1pTLcHJLUR5NY0UGLHzFY57QVl7ow99KxZZWK6sPybmLh0xdH06aSkf4k6nwEIK4ZhL950pTRb8aT3gyJ6q47jgxlzNA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cFevtssOANJP6LvxXC7jOz2yGPru9RJwznpKXFf3yRY=;
 b=iWvYLGJZZgkDLlvWRv+uT4nYuUGsOCn+0l5THYTWq0LceERvj9dfVlTHB64jXfDO/IPyIBpLaRaoLdhMDGu0glXel37HKUF1AW5ci0sEBsdxb011lpXCRyIb8ejn4X6+6mD+ukIP63tyu6QIpBYtwMjDZIjVTXmkEM2K8GHOOojvmjq9F734Mdh63rx6kJYMtsqAladsP5IgEBCSLBWnCa8s4q9UoJKRwDyRoI5YtS2HapgJDCMvJR06OfFp7sl/oza0J9lBP5+cvmZyWYMw8MdeeFbtzL4NFsNycAvffGjgwo3PlKbPj+kR8XtNsF9EjRijEmTUEFc6CkTUxB4q3Q==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cFevtssOANJP6LvxXC7jOz2yGPru9RJwznpKXFf3yRY=;
 b=G4DamrJTfAXdoUR+lMu93YWMPw8gLv35F3BAtf9lDQZtsHTYiGgMOWCA1KFID2kCYy1aktkBW/au7mIK2nylWZwlKZaEMukDKt+nDz2xdi9HJWSEWX45fHILpW4Q7obVcpzeS7JjvFBLWjYuV8y0rXoGbnrXps8RnzyBpnuQPmo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cJi7X6UAdxNuzh61YDNPRV6ag5QxQGzLul6pYB7V3POrfjHjpZCYT6wNAPHQY/REm+qWYopGH0CYB+MeiNRap2wt1WJBssk2zuvglTKBt0v/z5RmTknSBoAtQrw7A0OPitIZMOFhyRp6lhPoSaMNLZZtHLxu0i4ckaI16t7NroASla/JP89LfoHp5rK3qtyVDcRAtCLBJy60mTneUq+R6EiDO/m2snlIGdJPdktb6miXJwhaCdtGgYGeUrepx2umwwD0Kjb839/njgO+x/ed/5TEWzB9dnsfSCJYNDyEKlihlhOLMRfLutGSYlj5ex+ZODk62tDFqFnFmzjAYA85pQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cFevtssOANJP6LvxXC7jOz2yGPru9RJwznpKXFf3yRY=;
 b=Gom5gqESgHR0m2WstdyKhm9QajSZfDRiCplDos7dreVWGkn/sr1lF7rWWJPHHlZVHyKDiZ1jRt7VzJ7/+xzHf22vX2ykuYqkIcdjA8V91FgmZMvwTZGiU2DhII5m8r3azfFyVe0zGqWVZaunJzcJc3r3AR2JZd/SlyzgLXCD3yyBIWsSe5xmo/gptCKBT5AO0CcQDVY0odnH/DRhxmf4tpYQu8OQ+imf0pEM5E8Ja6ahzftRRqCNY7TJCKhKbCEuaRXVZYz5azgi7xn4fno0X0CnE4O8DHpvF/1Gfn0zrLgijAyrIh3j3s+87StxROVKrXy5bDc1kflxO20AhoNaZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cFevtssOANJP6LvxXC7jOz2yGPru9RJwznpKXFf3yRY=;
 b=G4DamrJTfAXdoUR+lMu93YWMPw8gLv35F3BAtf9lDQZtsHTYiGgMOWCA1KFID2kCYy1aktkBW/au7mIK2nylWZwlKZaEMukDKt+nDz2xdi9HJWSEWX45fHILpW4Q7obVcpzeS7JjvFBLWjYuV8y0rXoGbnrXps8RnzyBpnuQPmo=
From: Henry Wang <Henry.Wang@arm.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "scott.davis@starlab.io" <scott.davis@starlab.io>,
	"christopher.clark@starlab.io" <christopher.clark@starlab.io>,
	"jandryuk@gmail.com" <jandryuk@gmail.com>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>
Subject: RE: [PATCH v8 0/2] Adds starting the idle domain privileged
Thread-Topic: [PATCH v8 0/2] Adds starting the idle domain privileged
Thread-Index: AQHYdP8PDf+BmAUrf0GccQQ+1OW0J61S/+BQ
Date: Fri, 17 Jun 2022 03:27:28 +0000
Message-ID:
 <AS8PR08MB7991ACC81637588D09EDB4BF92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220531145646.10062-1-dpsmith@apertussolutions.com>
In-Reply-To: <20220531145646.10062-1-dpsmith@apertussolutions.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F65B8D5316620945AA15A8B124A00B8B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: be6b4acf-0182-4180-f22b-08da501156f7
x-ms-traffictypediagnostic:
	VE1PR08MB5071:EE_|VE1EUR03FT020:EE_|AS4PR08MB8167:EE_
X-Microsoft-Antispam-PRVS:
	<AS4PR08MB8167E59BC0E7695AF903909B92AF9@AS4PR08MB8167.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dVFf9/LcKfgUfLkkKLp+Z24DMdIU7zh8t0Pj7WUG62GMt61RmYndvlzMDh0XDqADk5QtSfw0J2T7572AkLxySVvpObyhrrsUd3McZNLBgML5J7GpyJLywlBGP4RTDmKtyG3UGYLhIbvOvu34GQ7F89ZrETAV1ulAz+/YqcCM6XlKs5pNnItyp70e5v1U+Wwd4sU2HySPvokP5V5x2V4LS1Jw599Y2pWG++AW0EYhjr0c9gFRRNS7H1ENfllsIuZ/uawyhoNcxvT+vUq3jcUNp6e6D2FzlYRJy0+alYHasfJmRNoMQubqsitRp3ecm3xNkGv8mmp7+1y6UgEeBSPzNjNlxYyQFnVLZs8AqVOlhTcy8IKQd8XCU4esFOPw7LdbUC9wxlY9WidXrocHZomQqDnUoBjpTC64Tr/SaCEBuGMCb++5P/Y2CyOeLc1iR06UwyjDenQx0FPOBmyW8kQ7AJdcqNRT7p9uwhUmlJzHiOABh3Aw7/wT97W4LX6AJrmyUlHy6f8jdVY1JyXLgnfJ/wkGMw41MGz6/LGX1JFhjdIAm65+4Dr3zyaiI39iew2wwqqPdYKK2hoZ+zck8csvaD1ZDuT3CJGIG11L9zvpK1LmOKj91FI5B30i3SCHSyn3mCO50CKYcYZNvp58qTqHZOuBKzz0ZdK1gF4qlhZzApK7vXZ5avGEuXyF425Nz4NxUnNp/EwVTYIjgSqxj/Eu9rFHCPzzBe07k5uG/pacg9Ecb9rf2c8e/PxJw4MzVjaqWgA4Mzp5UJtuqjp+dYfmc6ENfySTLI44IYlmAM5LcxDdP0QeNsc2681kjK0fynHohumQ+kh1LIjcmLmXOPK+bg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(66946007)(66476007)(66446008)(8676002)(66556008)(64756008)(966005)(4326008)(76116006)(71200400001)(86362001)(54906003)(508600001)(186003)(83380400001)(316002)(8936002)(55016003)(52536014)(122000001)(33656002)(2906002)(5660300002)(6506007)(9686003)(7696005)(110136005)(26005)(38100700002)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5071
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3a7b0b73-860c-4beb-b96c-08da50114e4b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	K6Kiqujeyok3pm53UqLhLrgtDSdrdQebI7pBEozvLTEYCiSfWQ2QHXOb0tvjjV+F0p/G1bpvCQ5fXUdO2pBMgCWS9KlD/MNQEwkcmUKjUKVboNuh3fQdmjM2mCa6tn02vRd/Jrr6j9+CBRCShfeIDsFM9ATiF2ofbltOAi4aWDkMzkybz+W7AXg31iZKF6OTAZaaDyU+TNNMKF4CfBXay89fq5U4b852Mo5oZqiWiS+qnIvbAVMFIszHLNcNdLv0+Mj5V9ekGSp/J0cxDNjsRa8fI1/0Fu7OpD0S27wLloQedZZT9FAuxjpD8rLaV59gVbQlKwUua96XM6Zxo2FxY2DF3f8i1FeA++pvGbMJLDISG1eg32+3usLCY0jBXwyHqRzv5Q3rw/LrJLEDvWFCiwFmjamXnSBUB+PkqlMaCBclX+vg2pXF9MSaco/o7CC3aW7ayTXt63OwH0UauyHWek34foFJuhvB9k+bb/o/Lve/rZZ6y6sV68lRkRptqVFheDlnz5GCsQtoUyd56wLfY7YBarSBWp6kcuGHI8s/l6va7E1XasoGMzuonasgb4/UArShi0w61Obse7iCnIpqNDitA3X5WIfEMJLq9kS/LgGHwLT14aqHmJXJ5JmnSVoU3+DM7wLQ/izKGU6W9RCYjVtYo93hVBD328Y4Q6HX3rndWEo3STocVretIzn+AunOAX5Hl9xiMfti3F7Ar/4lfFgNqBZzPgIlW0CZin0rYk8UyQOPNQohyptJBLu96K0RN1fheQnagAdIhSS3+AeJSu+43wsV34vAB5W+wToiiXyVcz2zyoryKSifDIDIy+ei
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(36840700001)(46966006)(40470700004)(9686003)(81166007)(2906002)(186003)(82310400005)(6506007)(8676002)(110136005)(7696005)(26005)(54906003)(47076005)(40460700003)(83380400001)(356005)(5660300002)(336012)(52536014)(33656002)(316002)(966005)(55016003)(4326008)(8936002)(498600001)(86362001)(70206006)(70586007)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 03:27:42.8989
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be6b4acf-0182-4180-f22b-08da501156f7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8167

Hi,

It seems that this series has been stale for a while, with the author's
action needed for patch #1 [1] (and probably also an ack needed from the
flask maintainer for [2]). So this email is a gentle reminder about this
series. Thanks!

[1] https://patchwork.kernel.org/project/xen-devel/patch/20220531145646.10062-2-dpsmith@apertussolutions.com/
[2] https://patchwork.kernel.org/project/xen-devel/patch/20220531145646.10062-3-dpsmith@apertussolutions.com/

Kind regards,
Henry

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Daniel P. Smith
> Subject: [PATCH v8 0/2] Adds starting the idle domain privileged
>
> This series makes it so that the idle domain is started privileged under
> the default policy, which the SILO policy inherits, and under the flask
> policy. It then introduces a new one-way XSM hook, xsm_transition_running,
> that is hooked by an XSM policy to transition the idle domain to its
> running privilege level.
>
> Changes in v8:
> - adjusted panic messages in arm and x86 setup.c to be less than 80 cols
> - fixed comment line that went over 80 cols
> - added line in patch #1 commit message to clarify the need is for domain
>   creation
>
> Changes in v7:
> - adjusted error message in default and flask xsm_set_system_active hooks
> - merged panic messages in arm and x86 setup.c to a single line
>
> Changes in v6:
> - readded the setting of is_privileged in flask_set_system_active()
> - clarified comment on is_privileged in flask_set_system_active()
> - added ASSERT on is_privileged and self_sid in flask_set_system_active()
> - fixed err code returned on Arm for xsm_set_system_active() panic message
>
> Changes in v5:
> - dropped setting is_privileged in flask_set_system_active()
> - added err code returned by xsm_set_system_active() to panic message
>
> Changes in v4:
> - reworded patch 1 commit message
> - fixed whitespace to coding style
> - fixed comment to coding style
>
> Changes in v3:
> - renamed *_transition_running() to *_set_system_active()
> - changed the XSM hook set_system_active() from void to int return
> - added ASSERT check for the expected privilege level each XSM policy
>   expected
> - replaced a check against is_privileged in each arch with checking the
>   return value from the call to xsm_set_system_active()
>
> Changes in v2:
> - renamed flask_domain_runtime_security() to flask_transition_running()
> - added the missed assignment of self_sid
>
> Daniel P. Smith (2):
>   xsm: create idle domain privileged and demote after setup
>   flask: implement xsm_set_system_active
>
>  tools/flask/policy/modules/xen.if      |  6 +++++
>  tools/flask/policy/modules/xen.te      |  1 +
>  tools/flask/policy/policy/initial_sids |  1 +
>  xen/arch/arm/setup.c                   |  3 +++
>  xen/arch/x86/setup.c                   |  4 ++++
>  xen/common/sched/core.c                |  7 +++++-
>  xen/include/xsm/dummy.h                | 17 ++++++++++++++
>  xen/include/xsm/xsm.h                  |  6 +++++
>  xen/xsm/dummy.c                        |  1 +
>  xen/xsm/flask/hooks.c                  | 32 +++++++++++++++++++++++++-
>  xen/xsm/flask/policy/initial_sids      |  1 +
>  11 files changed, 77 insertions(+), 2 deletions(-)
>
> --
> 2.20.1
>


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 03:28:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 03:28:19 +0000
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jiamei Xie <Jiamei.Xie@arm.com>, Wei Chen
	<Wei.Chen@arm.com>, Julien Grall <jgrall@amazon.com>
Subject: RE: [PATCH v3 1/9] xen/arm: Print a 64-bit number in hex from early
 uart
Thread-Topic: [PATCH v3 1/9] xen/arm: Print a 64-bit number in hex from early
 uart
Thread-Index: AQHYZNkcOiKSsWrCWE+HAY/+6cemHa0hxzQAgDFVPLA=
Date: Fri, 17 Jun 2022 03:27:42 +0000
Message-ID:
 <AS8PR08MB79912A6797514E583095CBFC92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220511014639.197825-1-wei.chen@arm.com>
 <20220511014639.197825-2-wei.chen@arm.com>
 <46f6a909-2f77-021c-a069-6a8f827e53fc@xen.org>
In-Reply-To: <46f6a909-2f77-021c-a069-6a8f827e53fc@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Julien Grall
> 
> Hi,
> 
> I have committed this patch.
> 
> Patch #3 looks to be suitably acked but I am not sure whether it can be
> committed before #2. So I didn't commit it.
> 
> Please let me know if it can be.

IIUC, the latest series (v6) [1] is properly acked and reviewed for the whole
series, so I think v6 of this series is ready to be merged. Sending this as a
gentle reminder :)

[1] https://patchwork.kernel.org/project/xen-devel/list/?series=649092

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 04:05:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 04:05:49 +0000
Date: Thu, 16 Jun 2022 14:26:56 -0700
From: Jacob Pan <jacob.jun.pan@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: rth@twiddle.net, ink@jurassic.park.msu.ru, mattst88@gmail.com,
 vgupta@kernel.org, linux@armlinux.org.uk, ulli.kroll@googlemail.com,
 linus.walleij@linaro.org, shawnguo@kernel.org, Sascha Hauer
 <s.hauer@pengutronix.de>, kernel@pengutronix.de, festevam@gmail.com,
 linux-imx@nxp.com, tony@atomide.com, khilman@kernel.org,
 catalin.marinas@arm.com, will@kernel.org, guoren@kernel.org,
 bcain@quicinc.com, chenhuacai@kernel.org, kernel@xen0n.name,
 geert@linux-m68k.org, sammy@sammy.net, monstr@monstr.eu,
 tsbogend@alpha.franken.de, dinguyen@kernel.org, jonas@southpole.se,
 stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
 James.Bottomley@hansenpartnership.com, deller@gmx.de, mpe@ellerman.id.au,
 benh@kernel.crashing.org, paulus@samba.org, paul.walmsley@sifive.com,
 palmer@dabbelt.com, aou@eecs.berkeley.edu, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, ysato@users.sourceforge.jp, dalias@libc.org,
 davem@davemloft.net, richard@nod.at, anton.ivanov@cambridgegreys.com,
 johannes@sipsolutions.net, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
 jolsa@kernel.org, namhyung@kernel.org, jgross@suse.com,
 srivatsa@csail.mit.edu, amakhalov@vmware.com, pv-drivers@vmware.com,
 boris.ostrovsky@oracle.com, chris@zankel.net, jcmvbkbc@gmail.com,
 rafael@kernel.org, lenb@kernel.org, pavel@ucw.cz,
 gregkh@linuxfoundation.org, mturquette@baylibre.com, sboyd@kernel.org,
 daniel.lezcano@linaro.org, lpieralisi@kernel.org, sudeep.holla@arm.com,
 agross@kernel.org, bjorn.andersson@linaro.org, anup@brainfault.org,
 thierry.reding@gmail.com, jonathanh@nvidia.com, Arnd Bergmann
 <arnd@arndb.de>, yury.norov@gmail.com, andriy.shevchenko@linux.intel.com,
 linux@rasmusvillemoes.dk, rostedt@goodmis.org, pmladek@suse.com,
 senozhatsky@chromium.org, john.ogness@linutronix.de, paulmck@kernel.org,
 frederic@kernel.org, quic_neeraju@quicinc.com, josh@joshtriplett.org,
 mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
 joel@joelfernandes.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, bsegall@google.com, mgorman@suse.de,
 bristot@redhat.com, vschneid@redhat.com, jpoimboe@kernel.org,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-omap@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-perf-users@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-xtensa@linux-xtensa.org, linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org, linux-clk@vger.kernel.org,
 linux-arm-msm@vger.kernel.org, linux-tegra@vger.kernel.org,
 linux-arch@vger.kernel.org, rcu@vger.kernel.org,
 jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH 04/36] cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE
Message-ID: <20220616142656.4b1acc4a@jacob-builder>
In-Reply-To: <Yqb45vclY2KVL0wZ@hirez.programming.kicks-ass.net>
References: <20220608142723.103523089@infradead.org>
	<20220608144516.172460444@infradead.org>
	<20220609164921.5e61711d@jacob-builder>
	<Yqb45vclY2KVL0wZ@hirez.programming.kicks-ass.net>
Organization: OTC
X-Mailer: Claws Mail 3.17.5 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Hi Peter,

On Mon, 13 Jun 2022 10:44:22 +0200, Peter Zijlstra <peterz@infradead.org>
wrote:

> On Thu, Jun 09, 2022 at 04:49:21PM -0700, Jacob Pan wrote:
> > Hi Peter,
> > 
> > On Wed, 08 Jun 2022 16:27:27 +0200, Peter Zijlstra
> > <peterz@infradead.org> wrote:
> >   
> > > Commit c227233ad64c ("intel_idle: enable interrupts before C1 on
> > > Xeons") wrecked intel_idle in two ways:
> > > 
> > >  - must not have tracing in idle functions
> > >  - must return with IRQs disabled
> > > 
> > > Additionally, it added a branch for no good reason.
> > > 
> > > Fixes: c227233ad64c ("intel_idle: enable interrupts before C1 on
> > > Xeons") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > >  drivers/idle/intel_idle.c |   48 +++++++++++++++++++++++-----------
> > >  1 file changed, 37 insertions(+), 11 deletions(-)
> > > 
> > > --- a/drivers/idle/intel_idle.c
> > > +++ b/drivers/idle/intel_idle.c
> > > @@ -129,21 +137,37 @@ static unsigned int mwait_substates __in
> > >   *
> > >   * Must be called under local_irq_disable().
> > >   */  
> > nit: this comment is no longer true, right?
> 
> It still is, all the idle routines are called with interrupts disabled,
> but must also exit with interrupts disabled.
> 
> If the idle method requires interrupts to be enabled, it must be sure to
> disable them again before returning. Given all the RCU/tracing concerns
> it must use raw_local_irq_*() for this though.
Makes sense, it is just a little confusing when the immediate caller does
raw_local_irq_enable(), which does not cancel out local_irq_disable().

Thanks,

Jacob


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 05:39:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 05:39:16 +0000
Message-ID: <9c7aaca1-d23e-aeec-bc3a-a0bd7dc7de77@suse.com>
Date: Fri, 17 Jun 2022 07:38:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, hch@infradead.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <alpine.DEB.2.22.394.2206161106020.10483@ubuntu-linux-20-04-desktop>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <alpine.DEB.2.22.394.2206161106020.10483@ubuntu-linux-20-04-desktop>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------UHqUBWRx2KomWqMzCZqExmX0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------UHqUBWRx2KomWqMzCZqExmX0
Content-Type: multipart/mixed; boundary="------------67hurPYhWVJPZ7nzlPPw0cJo";
 protected-headers="v1"

--------------67hurPYhWVJPZ7nzlPPw0cJo
Content-Type: multipart/mixed; boundary="------------Ev1Ocmj2cMoOvmWy9UDaAU0c"

--------------Ev1Ocmj2cMoOvmWy9UDaAU0c
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

On 16.06.22 20:20, Stefano Stabellini wrote:
> On Thu, 16 Jun 2022, Juergen Gross wrote:
>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>> Xen grant mappings") introduced a new requirement for using virtio
>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>> feature.
>>
>> This is an undue requirement for non-PV guests, as those can be operated
>> with existing backends without any problem, as long as those backends
>> are running in dom0.
>>
>> Per default allow virtio devices without grant support for non-PV
>> guests.
>>
>> Add a new config item to always force use of grants for virtio.
>>
>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - remove command line parameter (Christoph Hellwig)
>> ---
>>   drivers/xen/Kconfig | 9 +++++++++
>>   include/xen/xen.h   | 2 +-
>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>> index bfd5f4f706bc..a65bd92121a5 100644
>> --- a/drivers/xen/Kconfig
>> +++ b/drivers/xen/Kconfig
>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>   
>>   	  If in doubt, say n.
>>   
>> +config XEN_VIRTIO_FORCE_GRANT
>> +	bool "Require Xen virtio support to use grants"
>> +	depends on XEN_VIRTIO
>> +	help
>> +	  Require virtio for Xen guests to use grant mappings.
>> +	  This will avoid the need to give the backend the right to map all
>> +	  of the guest memory. This will need support on the backend side
>> +	  (e.g. qemu or kernel, depending on the virtio device types used).
>> +
>>   endmenu
>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>> index 0780a81e140d..4d4188f20337 100644
>> --- a/include/xen/xen.h
>> +++ b/include/xen/xen.h
>> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>>   
>>   static inline void xen_set_restricted_virtio_memory_access(void)
>>   {
>> -	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>>   		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>   }
> 
> Hi Juergen, you might have seen my email:
> https://marc.info/?l=linux-kernel&m=165533636607801&w=2
> 
> Linux is always running as HVM on ARM, so if you want to introduce
> XEN_VIRTIO_FORCE_GRANT, then XEN_VIRTIO_FORCE_GRANT should be
> automatically selected on ARM. I don't think there should be a visible
> menu option for XEN_VIRTIO_FORCE_GRANT on ARM.

No, I don't think so. I think you are mixing up different things here.

Setting XEN_VIRTIO_FORCE_GRANT requires to support grants for all
virtio devices of the guest, while there might be perfect reasons to
have that for some special devices only (or to allow to use no grants
for some special devices).

Your suggestion would result in an "all or nothing" approach, while
many users could very well want a mixed setup.

> I realize we have a conflict between HVM guests on ARM and x86:
> 
> - on ARM, PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS should be enabled when
>    "xen,grant-dma" is present

Again, why? Why should one virtio device with a backend running in
a driver domain require _all_ virtio d
ZXZpY2VzIHRvIHVzZSBncmFudHM/DQoNCj4gLSBvbiB4ODYsIGR1ZSB0byB0aGUgbGFjayBv
ZiAieGVuLGdyYW50LWRtYSIsIGl0IHNob3VsZCBiZSBvZmYgYnkNCj4gICAgZGVmYXVsdCBh
bmQgYmFzZWQgb24gYSBrY29uZmlnIG9yIGNvbW1hbmQgbGluZSBvcHRpb24NCg0KU2VlIGFi
b3ZlLiBJIGRvbid0IHNlZSBhIG1ham9yIGRpZmZlcmVuY2UgZm9yIEFybSBoZXJlLg0KDQo+
IFRvIGJlIGhvbmVzdCwgbGlrZSBDaHJpc3RvcGggc3VnZ2VzdGVkLCBJIHRoaW5rIGV2ZW4g
b24geDg2IHRoZXJlIHNob3VsZA0KPiBiZSBhIGZpcm13YXJlIHRhYmxlIHRvIHRyaWdnZXIg
c2V0dGluZw0KPiBQTEFURk9STV9WSVJUSU9fUkVTVFJJQ1RFRF9NRU1fQUNDRVNTLiBXZSBo
YXZlIDIgWGVuLXNwZWNpZmljIEFDUEkNCj4gdGFibGVzLCBhbmQgd2UgY291bGQgaGF2ZSAx
IG1vcmUgdG8gZGVmaW5lIHRoaXMuIE9yIGFuIEhWTSBwYXJhbSBvcg0KPiBhIGZlYXR1cmUg
ZmxhZz8NCg0KUGxlYXNlIGRvbid0IG1peCB1cCBQTEFURk9STV9WSVJUSU9fUkVTVFJJQ1RF
RF9NRU1fQUNDRVNTICh3aGljaCB3aWxsDQplbmQgaW4gcmVxdWlyaW5nIGdyYW50cyBmb3Ig
X2FsbF8gdmlydGlvIGRldmljZXMpIGFuZCB0aGUgdXNlIGNhc2UNCnBlciBkZXZpY2UuDQoN
CkkgYWdyZWUgdGhhdCB4ODYgbmVlZHMgYSB3YXkgdG8gdHJhbnNwb3J0IHRoZSBncmFudCBz
ZXR0aW5nIHBlcg0KZGV2aWNlLCBpZiBvbmx5IGZvciBjb21tdW5pY2F0aW5nIHRoZSBiYWNr
ZW5kIGRvbWFpbiBpZC4NCg0KSSBjYW4gc2VlIHR3byBkaWZmZXJlbnQgc29sdXRpb25zIGZv
ciB0aGF0OiBlaXRoZXIgQUNQSSBvciBYZW5zdG9yZS4NCkEgSFZNIHBhcmFtIGRvZXNuJ3Qg
c2VlbSB0byBkbyB0aGUgam9iLCBhcyB0aGUgYmFja2VuZCBkb21haW4gaWQgc3RpbGwNCm5l
ZWRzIHRvIGJlIGNvbW11bmljYXRlZCBzb21laG93Lg0KDQpXaGVuIGNvbnNpZGVyaW5nIHdo
aWNoIG9uZSB0byB1c2UgKHRoZXJlIGFyZSBtYXliZSBvdGhlciBhbHRlcm5hdGl2ZXMpDQp3
ZSBzaG91bGQgaGF2ZSBpbiBtaW5kLCB0aGF0IHRoZSBzb2x1dGlvbiBzaG91bGQgc3VwcG9y
dCBQViBndWVzdHMNCih3aGljaCBpbiB0aGUgZ2VuZXJhbCBjYXNlIGRvbid0IHNlZSBhbiBB
Q1BJIHRhYmxlIHRvZGF5KSBhcyB3ZWxsIGFzDQp2aXJ0aW8gZGV2aWNlIGhvdHBsdWdnaW5n
IChpcyB0aGlzIHBvc3NpYmxlIG9uIEFybSB2aWEgZGV2aWNlIHRyZWU/IC0NCkkgZ3Vlc3Mg
aXQgc2hvdWxkIGJlLCBidXQgSSdtIG5vdCBzdXJlIGhvdyBkaWZmaWN1bHQgdGhhdCB3b3Vs
ZCBiZSkuDQoNCj4gSSB0aGluayB0aGF0IHdvdWxkIGJlIHRoZSBjbGVhbmVzdCB3YXkgdG8g
ZG8gdGhpcywgYnV0IGl0IGlzIGEgbG90IG9mDQo+IG1vcmUgd29yayBjb21wYXJlZCB0byBh
ZGRpbmcgYSBjb3VwbGUgb2YgbGluZXMgb2YgY29kZSB0byBMaW51eCwgc28gdGhpcw0KPiBp
cyB3aHkgSSBzdWdnZXN0ZWQ6DQo+IGh0dHBzOi8vbWFyYy5pbmZvLz9sPWxpbnV4LWtlcm5l
bCZtPTE2NTUzMzYzNjYwNzgwMSZ3PTINCg0KSSdsbCBhbnN3ZXIgdG8gdGhpcyBvbmUgc2Vw
YXJhdGVseS4NCg0KPiANCj4gQVJNIHVzZXMgInhlbixncmFudC1kbWEiIHRvIGRldGVjdCB3
aGV0aGVyDQo+IFBMQVRGT1JNX1ZJUlRJT19SRVNUUklDVEVEX01FTV9BQ0NFU1MgbmVlZHMg
c2V0dGluZy4NCj4gDQo+IE9uZSBkYXkgeDg2IGNvdWxkIGNoZWNrIGFuIEFDUEkgcHJvcGVy
dHkgb3IgSFZNIHBhcmFtIG9yIGZlYXR1cmUgZmxhZy4NCj4gTm9uZSBvZiB0aGVtIGFyZSBh
dmFpbGFibGUgbm93LCBzbyBmb3Igbm93IHVzZSBhIGNvbW1hbmQgbGluZSBvcHRpb24gYXMN
Cj4gYSB3b3JrYXJvdW5kLiBJdCBpcyB0b3RhbGx5IGZpbmUgdG8gdXNlIGFuIHg4Ni1vbmx5
IGtjb25maWcgb3B0aW9uDQo+IGluc3RlYWQgb2YgYSBjb21tYW5kIGxpbmUgb3B0aW9uLg0K
PiANCj4gV291bGQgeW91IGJlIE9LIHdpdGggdGhhdD8NCg0KU2VlIGFib3ZlLiA6LSkNCg0K
DQpKdWVyZ2VuDQo=
--------------Ev1Ocmj2cMoOvmWy9UDaAU0c
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Ev1Ocmj2cMoOvmWy9UDaAU0c--

--------------67hurPYhWVJPZ7nzlPPw0cJo--

--------------UHqUBWRx2KomWqMzCZqExmX0
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKsE24FAwAAAAAACgkQsN6d1ii/Ey/9
Jgf/VmDqJdei43pp8Gz3BIATNHL8sD3YHw6rR/78BXIOLHeWZ+gny40YVQ6R8dITiRGEgIeVmIqN
Cyvp5wcXjut5sikNqiMjZ20g+X654W15eaZGenajvIoueE4Wa2dk1Pnxb2hMP4riTklg+y+BnCO4
BKP5nGft/FyGDZnZrtsYgcs0JRhJLMqjAeqIMWXueSWXkyHGySGwH7jnErdXEj6ammtt/wWFUxk7
Q7bjaSpcxenSwo7pMD7GwRmhCQAy9KyynvBqhpkkgfo7inj75YPgXoGe5BxuRrTV49Q72A7v9Ury
21+Psrg4b4op+1ylvmnhahb5Zam03j5VSQP2qnLEtg==
=1DB4
-----END PGP SIGNATURE-----

--------------UHqUBWRx2KomWqMzCZqExmX0--


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 05:41:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 05:41:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351033.577520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o24jm-0001Wh-SE; Fri, 17 Jun 2022 05:41:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351033.577520; Fri, 17 Jun 2022 05:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o24jm-0001Wa-Nn; Fri, 17 Jun 2022 05:41:42 +0000
Received: by outflank-mailman (input) for mailman id 351033;
 Fri, 17 Jun 2022 05:41:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=laXa=WY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o24jm-0001WU-6J
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 05:41:42 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29a060ea-ee00-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 07:41:41 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9547621A7F;
 Fri, 17 Jun 2022 05:41:40 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 58FF81330D;
 Fri, 17 Jun 2022 05:41:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id x639ExQUrGKvUgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 17 Jun 2022 05:41:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29a060ea-ee00-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655444500; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LcZUUZdA9Bs6dRtZtYIQd7zgwuZ9YUtZN1HPMs8hGuw=;
	b=h6tsuKPj2kJHlqL5ClhRpoiNSijRgyqnv0NtcqpXKD40iKylFnrd3A6l1RMDzqcjkMw7BL
	61yRbCGK8lMV9yys1q+IJuDo98fxDXkKUWdXtQzrAfUAST0guuTVyZpWe/clK+dFng928E
	CP8kW6ASX09QNOfp0Nruf5+EFzEAT7g=
Message-ID: <0cb980f8-255d-4835-272e-f625e8463f11@suse.com>
Date: Fri, 17 Jun 2022 07:41:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
Content-Language: en-US
To: Oleksandr <olekstysh@gmail.com>, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
 <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------PlTpeNGGUZ6fVQJy38WwWM7y"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------PlTpeNGGUZ6fVQJy38WwWM7y
Content-Type: multipart/mixed; boundary="------------KAzbyXDfafTqoOmE5eSwOQrI";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksandr <olekstysh@gmail.com>, hch@infradead.org,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 viresh.kumar@linaro.org, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <0cb980f8-255d-4835-272e-f625e8463f11@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
 <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
In-Reply-To: <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>

--------------KAzbyXDfafTqoOmE5eSwOQrI
Content-Type: multipart/mixed; boundary="------------RmLAwTcj00wZFtYIY8DWZy9f"

--------------RmLAwTcj00wZFtYIY8DWZy9f
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.06.22 22:33, Oleksandr wrote:
> 
> On 16.06.22 11:56, Juergen Gross wrote:
> 
> Hello Juergen, all
> 
> 
>> On 16.06.22 09:31, Oleksandr wrote:
>>>
>>> On 16.06.22 08:37, Juergen Gross wrote:
>>>
>>>
>>> Hello Juergen
>>>
>>>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>>>> Xen grant mappings") introduced a new requirement for using virtio
>>>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>>>> feature.
>>>>
>>>> This is an undue requirement for non-PV guests, as those can be operated
>>>> with existing backends without any problem, as long as those backends
>>>> are running in dom0.
>>>>
>>>> Per default allow virtio devices without grant support for non-PV
>>>> guests.
>>>>
>>>> Add a new config item to always force use of grants for virtio.
>>>>
>>>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen 
>>>> grant mappings")
>>>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V2:
>>>> - remove command line parameter (Christoph Hellwig)
>>>> ---
>>>>   drivers/xen/Kconfig | 9 +++++++++
>>>>   include/xen/xen.h   | 2 +-
>>>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>>> index bfd5f4f706bc..a65bd92121a5 100644
>>>> --- a/drivers/xen/Kconfig
>>>> +++ b/drivers/xen/Kconfig
>>>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>>>        If in doubt, say n.
>>>> +config XEN_VIRTIO_FORCE_GRANT
>>>> +    bool "Require Xen virtio support to use grants"
>>>> +    depends on XEN_VIRTIO
>>>> +    help
>>>> +      Require virtio for Xen guests to use grant mappings.
>>>> +      This will avoid the need to give the backend the right to map all
>>>> +      of the guest memory. This will need support on the backend side
>>>> +      (e.g. qemu or kernel, depending on the virtio device types used).
>>>> +
>>>>   endmenu
>>>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>>>> index 0780a81e140d..4d4188f20337 100644
>>>> --- a/include/xen/xen.h
>>>> +++ b/include/xen/xen.h
>>>> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>>>>   static inline void xen_set_restricted_virtio_memory_access(void)
>>>>   {
>>>> -    if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>>>>          platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>
>>>
>>> Looks like, the flag will be *always* set for paravirtualized guests even if 
>>> CONFIG_XEN_VIRTIO disabled.
>>>
>>> Maybe we should clarify the check?
>>>
>>>
>>> if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || 
>>> IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain())
>>>
>>>     platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>
>>
>> Yes, we should. I had the function in grant-dma-ops.c in V1, and could drop the
>> CONFIG_XEN_VIRTIO dependency for that reason.
>>
>> I'll wait for more comments before sending V3, though.
> 
> ok
> 
> 
> 
> Please note, I am happy with current patch and it works in my Arm64 based 
> environment.
> 
> Just one moment to consider.
> 
> 
> As it was already mentioned earlier in current thread the 
> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS (former 
> arch_has_restricted_virtio_memory_access()) is not per device but about the 
> whole guest. Being set it makes VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 
> features mandatory for *all* virtio devices in the guest.
> 
> The question is "Do we want/need to lift this restriction for some devices 
> (which backends are trusted so can access all guest memory) at the same time"? 

No, if you need some virtio devices to not use grants, then don't set
PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS.

Please see my answer to Stefano's alternative solution for my idea how to
resolve this via a per-device setting.


Juergen
--------------RmLAwTcj00wZFtYIY8DWZy9f--

--------------KAzbyXDfafTqoOmE5eSwOQrI--

--------------PlTpeNGGUZ6fVQJy38WwWM7y
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKsFBMFAwAAAAAACgkQsN6d1ii/Ey8x
Vgf/YeKLCESBxoZ/u7ukv1Dlu1I1ZmIngpAmEbzpls1a44298LwvsXFBO1dl244X/ovg2Ioq5lst
SW1WrcREt+0MILShyPs34+tY/YpnYnqEX9XC48I6cFhpWfewPgGi7UUrAmqcBK1fcZP1D8k3endn
XzeLxu6z2VvGF87kHAHVx+FpjlUPg369UCYIDqMPfIIerUu80XMf0UaZJq8rW1zaZxEYYcyPjPbb
9bNZmfUBMsxgEp53qGIqkifo1l8TB8lfzZkc5c69BFU3EC3ZOQrTTuey5M3ZAYWRb5JxkpKwF8GI
ApfNuLXbwHuQ4LIPOKeGJMrHuz4Ma22N8LN3wgLrOQ==
=gADQ
-----END PGP SIGNATURE-----

--------------PlTpeNGGUZ6fVQJy38WwWM7y--


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 05:49:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 05:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351043.577531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o24qz-0002XK-NS; Fri, 17 Jun 2022 05:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351043.577531; Fri, 17 Jun 2022 05:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o24qz-0002XD-Kd; Fri, 17 Jun 2022 05:49:09 +0000
Received: by outflank-mailman (input) for mailman id 351043;
 Fri, 17 Jun 2022 05:49:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=laXa=WY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o24qy-0002Wl-0z
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 05:49:08 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3332efb9-ee01-11ec-ab14-113154c10af9;
 Fri, 17 Jun 2022 07:49:06 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1A0F91FB17;
 Fri, 17 Jun 2022 05:49:06 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B96081330D;
 Fri, 17 Jun 2022 05:49:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Z9hQK9EVrGLzVAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 17 Jun 2022 05:49:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3332efb9-ee01-11ec-ab14-113154c10af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655444946; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AnyB9kJHN0zpLjDAE2lTSOSIGlApnZwigkyIhkBQbXU=;
	b=CxxLdHiom6dmP6E3/k2G+hx5ygGstSgvYkQ2E+k6voDEkBTO3iO8Dcp8/MDnIkhzw4+Ybg
	li8vDRzpzAi8lTG3mKVEcRzAAHssLMoVatOhbjoHhV2JmqxbpuKErM+0oJW/cbhp2rJfAW
	ZqpX6XprPr7BY76OZ5L4qTX6A5U8FTo=
Message-ID: <ab59158f-c718-f109-074c-7fc193c5406d@suse.com>
Date: Fri, 17 Jun 2022 07:49:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr <olekstysh@gmail.com>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, viresh.kumar@linaro.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
 <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
 <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
In-Reply-To: <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------J7bHW69tJDe0kFmKU9pRFxkN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------J7bHW69tJDe0kFmKU9pRFxkN
Content-Type: multipart/mixed; boundary="------------RsLH27x5zqbuYDtPWN0F9D3E";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr <olekstysh@gmail.com>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, viresh.kumar@linaro.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Message-ID: <ab59158f-c718-f109-074c-7fc193c5406d@suse.com>
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
 <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
 <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>

--------------RsLH27x5zqbuYDtPWN0F9D3E
Content-Type: multipart/mixed; boundary="------------H3UH93BZQPI5hdyZQSKacFtF"

--------------H3UH93BZQPI5hdyZQSKacFtF
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.06.22 02:03, Stefano Stabellini wrote:
> On Thu, 16 Jun 2022, Oleksandr wrote:
>> On 16.06.22 11:56, Juergen Gross wrote:
>>> On 16.06.22 09:31, Oleksandr wrote:
>>>>
>>>> On 16.06.22 08:37, Juergen Gross wrote:
>>>>
>>>>
>>>> Hello Juergen
>>>>
>>>>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>>>>> Xen grant mappings") introduced a new requirement for using virtio
>>>>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>>>>> feature.
>>>>>
>>>>> This is an undue requirement for non-PV guests, as those can be operated
>>>>> with existing backends without any problem, as long as those backends
>>>>> are running in dom0.
>>>>>
>>>>> Per default allow virtio devices without grant support for non-PV
>>>>> guests.
>>>>>
>>>>> Add a new config item to always force use of grants for virtio.
>>>>>
>>>>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>>>>> Xen grant mappings")
>>>>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>> V2:
>>>>> - remove command line parameter (Christoph Hellwig)
>>>>> ---
>>>>>   drivers/xen/Kconfig | 9 +++++++++
>>>>>   include/xen/xen.h   | 2 +-
>>>>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>>>> index bfd5f4f706bc..a65bd92121a5 100644
>>>>> --- a/drivers/xen/Kconfig
>>>>> +++ b/drivers/xen/Kconfig
>>>>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>>>>           If in doubt, say n.
>>>>> +config XEN_VIRTIO_FORCE_GRANT
>>>>> +    bool "Require Xen virtio support to use grants"
>>>>> +    depends on XEN_VIRTIO
>>>>> +    help
>>>>> +      Require virtio for Xen guests to use grant mappings.
>>>>> +      This will avoid the need to give the backend the right to map all
>>>>> +      of the guest memory. This will need support on the backend side
>>>>> +      (e.g. qemu or kernel, depending on the virtio device types used).
>>>>> +
>>>>>   endmenu
>>>>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>>>>> index 0780a81e140d..4d4188f20337 100644
>>>>> --- a/include/xen/xen.h
>>>>> +++ b/include/xen/xen.h
>>>>> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>>>>>   static inline void xen_set_restricted_virtio_memory_access(void)
>>>>>   {
>>>>> -    if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>>>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())
>>>>>           platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>>
>>>>
>>>> Looks like, the flag will be *always* set for paravirtualized guests even
>>>> if CONFIG_XEN_VIRTIO disabled.
>>>>
>>>> Maybe we should clarify the check?
>>>>
>>>>
>>>> if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) ||
>>>> IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain())
>>>>
>>>>      platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>>
>>>
>>> Yes, we should. I had the function in grant-dma-ops.c in V1, and could drop
>>> the
>>> CONFIG_XEN_VIRTIO dependency for that reason.
>>>
>>> I'll wait for more comments before sending V3, though.
>>
>> ok
>>
>>
>>
>> Please note, I am happy with current patch and it works in my Arm64 based
>> environment.
>>
>> Just one moment to consider.
>>
>>
>> As it was already mentioned earlier in current thread the
>> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS (former
>> arch_has_restricted_virtio_memory_access()) is not per device but about the
>> whole guest. Being set it makes VIRTIO_F_ACCESS_PLATFORM and
>> VIRTIO_F_VERSION_1 features mandatory for *all* virtio devices in the guest.
>>
>> The question is “Do we want/need to lift this restriction for some devices
>> (which backends are trusted so can access all guest memory) at the same time”?
>> Copy here the original Viresh's question for the convenience:
>>
>> "I understand from your email that the backends need to offer the
>> VIRTIO_F_ACCESS_PLATFORM flag now, but should this requirement be a bit soft?
>> I mean shouldn't we allow both types of backends to run with the same kernel,
>> ones that offer this feature and others that don't? The ones that don't offer
>> the feature, should continue to work like they used to, i.e. without the
>> restricted memory access feature."
>>
>> Technically this can be possible with HVM.
>>
>> Let's imagine the following situation:
>>
>> - Dom0 with backends which don't offer required features for some reason(s)
>>
>> But running in Dom0 (trusted domain) these backends are not obliged to offer
>> it (yes they can offer the required features and support grant mappings for
>> the virtio, but this is not strictly necessary, as they are considered as
>> trusted so are allowed to access all guest memory).
>>
>> - DomD with backend which do offer them and require grant mappings for the
>> virtio
>>
>> If this is a valid and correct use-case, then we indeed need an ability to
>> control that per device, otherwise - what is written below doesn't really
>> matter.
>>
>> I am wondering whether we can avoid using global
>> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS for Xen guests at all? I assume that all
>> we need to do (when CONFIG_XEN_VIRTIO is enabled) is to make sure that *only*
>> Xen grant DMA devices in HVM guests and *all* devices in PV guests offer
>> required flags.
>>
>> Below the diff how this could be done w/o an extra options (not completely
>> tested), although I realize it might look hackish, and a lot more effort is
>> needed to get it right. In my Arm64 based environment it works, I have tried
>> to run two backends, the first offered required features and the corresponding
>> device node had required property, but the second didn’t and there was no
>> property.
>>
>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>> index 1f9c3ba..07eb69f 100644
>> --- a/arch/arm/xen/enlighten.c
>> +++ b/arch/arm/xen/enlighten.c
>> @@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
>>          if (!xen_domain())
>>                  return 0;
>>
>> -       xen_set_restricted_virtio_memory_access();
>> -
>>          if (!acpi_disabled)
>>                  xen_acpi_guest_init();
>>          else
>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>> index 8b71b1d..517a9d8 100644
>> --- a/arch/x86/xen/enlighten_hvm.c
>> +++ b/arch/x86/xen/enlighten_hvm.c
>> @@ -195,8 +195,6 @@ static void __init xen_hvm_guest_init(void)
>>          if (xen_pv_domain())
>>                  return;
>>
>> -       xen_set_restricted_virtio_memory_access();
>> -
>>          init_hvm_pv_info();
>>
>>          reserve_shared_info();
>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>> index 30d24fe..ca85d14 100644
>> --- a/arch/x86/xen/enlighten_pv.c
>> +++ b/arch/x86/xen/enlighten_pv.c
>> @@ -108,8 +108,6 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>>
>>  static void __init xen_pv_init_platform(void)
>>  {
>> -       xen_set_restricted_virtio_memory_access();
>> -
>>         populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>>
>>         set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
>> diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
>> index 371e16b..875690a 100644
>> --- a/drivers/virtio/virtio.c
>> +++ b/drivers/virtio/virtio.c
>> @@ -167,6 +167,11 @@ void virtio_add_status(struct virtio_device *dev,
>> unsigned int status)
>>  }
>>  EXPORT_SYMBOL_GPL(virtio_add_status);
>>
>> +int __weak device_has_restricted_virtio_memory_access(struct device *dev)
>> +{
>> +       return platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>> +}
>> +
>>  /* Do some validation, then set FEATURES_OK */
>>  static int virtio_features_ok(struct virtio_device *dev)
>>  {
>> @@ -174,7 +179,7 @@ static int virtio_features_ok(struct virtio_device *dev)
>>
>>         might_sleep();
>>
>> -       if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
>> +       if (device_has_restricted_virtio_memory_access(dev->dev.parent)) {
>>                 if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
>>                         dev_warn(&dev->dev,
>>                                  "device must provide VIRTIO_F_VERSION_1\n");
>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>> index 6586152..da938f6 100644
>> --- a/drivers/xen/grant-dma-ops.c
>> +++ b/drivers/xen/grant-dma-ops.c
>> @@ -11,6 +11,7 @@
>>  #include <linux/dma-map-ops.h>
>>  #include <linux/of.h>
>>  #include <linux/pfn.h>
>> +#include <linux/virtio_config.h>
>>  #include <linux/xarray.h>
>>  #include <xen/xen.h>
>>  #include <xen/grant_table.h>
>> @@ -286,6 +287,11 @@ bool xen_is_grant_dma_device(struct device *dev)
>>         return has_iommu;
>>  }
>>
>> +int device_has_restricted_virtio_memory_access(struct device *dev)
>> +{
>> +       return (xen_pv_domain() || xen_is_grant_dma_device(dev));
>> +}
>> +
>>  void xen_grant_setup_dma_ops(struct device *dev)
>>  {
>>         struct xen_grant_dma_data *data;
>> diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
>> index 7949829..b3a455b 100644
>> --- a/include/linux/virtio_config.h
>> +++ b/include/linux/virtio_config.h
>> @@ -559,4 +559,6 @@ static inline void virtio_cwrite64(struct virtio_device
>> *vdev,
>> _r;                                                \
>>         })
>>
>> +int device_has_restricted_virtio_memory_access(struct device *dev);
>> +
>>  #endif /* _LINUX_VIRTIO_CONFIG_H */
>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>> index 0780a81..a99bab8 100644
>> --- a/include/xen/xen.h
>> +++ b/include/xen/xen.h
>> @@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>>  extern u64 xen_saved_max_mem_size;
>>  #endif
>>
>> -#include <linux/platform-feature.h>
>> -
>> -static inline void xen_set_restricted_virtio_memory_access(void)
>> -{
>> -       if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>> - platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>> -}
>> -
>>  #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>  int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>>  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>> (END)
>>
>>
>> I think when x86 HVM gains required support (via ACPI or other means) to
>> communicate the x86's alternative of "xen,grant-dma" then
>> xen_is_grant_dma_device() will be just extended to handle that.
> 
> Yeah I like this approach:
> 
> - on ARM it bases the setting of PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS
>    on "xen,grant-dma", as it should be
> - it goes beyond my suggestion and it is capable of doing that
>    per-device, which is awesome
> - on x86, it always enables for PV guests as they have no other choice
> 
> On top of this we could add a command line option or kconfig option to
> force-enable it as well for the benefit of x86/HVM, but I would make
> that option x86 specific.

In the end the proper solution would be a per-device setting, as Christoph
already said.

So basically I think we can rip out the PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS
flag again (which would mean we could rip out the whole platform feature
support again). Instead we should have a platform specific callback in virtio
which replaces the test for PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS. The callback
would have the virtio device as a parameter.

This callback would be pre-initialized with a function returning always
"false". SEV, TDX and s390 PV could replace it with a function returning
always "true". When CONFIG_XEN_VIRTIO_FORCE_GRANT is set, Xen guests would
return always "true", otherwise they can check whether e.g. "xen,grant-dma"
was set for the device in the device table and return "true" if this is the
case. This scheme would IMO cover all needs.


Juergen

--------------H3UH93BZQPI5hdyZQSKacFtF
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------H3UH93BZQPI5hdyZQSKacFtF--

--------------RsLH27x5zqbuYDtPWN0F9D3E--

--------------J7bHW69tJDe0kFmKU9pRFxkN
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKsFdEFAwAAAAAACgkQsN6d1ii/Ey95
ywf8CjWAVHYgHzzI0M9z3PtYW95bJV2sYM3i+dV0QchkhPrQ5Ue+WNLpDtJHQm+Z7472rJMIdh9Q
aAb7x2D7PwNgLWBrWCx8BSXjZKEVw6BUlkzHD8Lkl5cSQVz48V3qgXdrTckYLGg2iup1SYSOkIyx
oPOfmwEnIvG4KZ0qq968mjBfbWjLLPTEtkK4MNELyoncdowfOrmnz6CZ91UffBc8rmIUmOLPaA38
QsebgfXyLjtgZNLuKNdiJPBOArR8ZXKfEBDSULr4+ZHnK1utj6bWniqUK/vQDSxLth9eu/D8/glM
d2j8BwLKUVpKUDz14ftRXI6cppyxPp0pLf0Z/Mn5mw==
=xzBU
-----END PGP SIGNATURE-----

--------------J7bHW69tJDe0kFmKU9pRFxkN--


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:06:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351080.577584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o26zx-0002aI-Nx; Fri, 17 Jun 2022 08:06:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351080.577584; Fri, 17 Jun 2022 08:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o26zx-0002aB-KY; Fri, 17 Jun 2022 08:06:33 +0000
Received: by outflank-mailman (input) for mailman id 351080;
 Fri, 17 Jun 2022 08:06:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o26zv-0002a1-Sm; Fri, 17 Jun 2022 08:06:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o26zv-000600-Nr; Fri, 17 Jun 2022 08:06:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o26zv-0003o1-Cw; Fri, 17 Jun 2022 08:06:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o26zv-0005w2-Bx; Fri, 17 Jun 2022 08:06:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o6Vgc/AeF5ndsBsQRnDjYIE55yPC9wbf/AiQnb1775Q=; b=CRgFroxKFIFG4KETBX2DWGiEdb
	ZNnA4Y0nxwh8FiDgyfigVC461T6jKsNzOLQOKarW9vGqOvuicN952aY6TXJbvKPputpRgVb7+Q7lK
	ZsWcHr6ey8ba7UeGUX/eqaBph4MI/k7QqUrN/Ny71TBTz7hrQjrutBhjF6fEL28C8yos=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 171206: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-pvshim:guest-start:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c5f774eaeeca195ef85b47713f0b21220c4b41e6
X-Osstest-Versions-That:
    xen=d7ebe3dfe3b2385bef10014549c36e0f73d39b52
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 08:06:31 +0000

flight 171206 xen-4.14-testing real [real]
flight 171236 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171206/
http://logs.test-lab.xenproject.org/osstest/logs/171236/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvshim   14 guest-start         fail pass in 171236-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170925
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170925
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170925
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170925
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170925
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170925
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170925
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170925
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170925
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170925
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170925
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170925
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170925
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c5f774eaeeca195ef85b47713f0b21220c4b41e6
baseline version:
 xen                  d7ebe3dfe3b2385bef10014549c36e0f73d39b52

Last test of basis   170925  2022-06-10 09:10:47 Z    6 days
Testing same since   171206  2022-06-16 13:08:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d7ebe3dfe3..c5f774eaee  c5f774eaeeca195ef85b47713f0b21220c4b41e6 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:15:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:15:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351092.577595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o278Y-0004J4-NI; Fri, 17 Jun 2022 08:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351092.577595; Fri, 17 Jun 2022 08:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o278Y-0004Ix-KF; Fri, 17 Jun 2022 08:15:26 +0000
Received: by outflank-mailman (input) for mailman id 351092;
 Fri, 17 Jun 2022 08:15:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FDGi=WY=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o278X-0004Ir-FB
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:15:25 +0000
Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com
 [2a00:1450:4864:20::42f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a2f9e8a2-ee15-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 10:15:24 +0200 (CEST)
Received: by mail-wr1-x42f.google.com with SMTP id e25so888769wrc.13
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jun 2022 01:15:23 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 q16-20020adff950000000b0020fe35aec4bsm4084687wrr.70.2022.06.17.01.15.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 17 Jun 2022 01:15:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2f9e8a2-ee15-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=6dUkUYoVMH3OwOP3a0FuWoeqPsTxQ37KEIRMyIyUIXw=;
        b=UWBb2CiPyVRdgU6Fi3kq1RIQcGEO3UZ7sddSz2lOifhjNlcfI/AvljJjl1g8A2Eu75
         4zxHvy4bBllBy1G3j1CPVD+O8paIp4EYmwMnTnYz5xVxciN9KAEl6knkVSSbBHQmCBgq
         u1kyCAkBt0yGrbyDD3DL5CE9UIsFz8L5ZbGU5R59+9TY1tQdnnj5bjD+AxOIMR79cCZD
         hNY2nkEmo7Dsug+xRu78uitRyFlGuaZ6QfCRvX+AfFi1PTXUBSIAHUqNAk62JrRjizww
         fpjHQyGVxinMY4g4P8AiSQMgk6XtjtmcCq9Fqe4Wo5yqy0SYw50KaFd+vM7zezyhmovf
         kMMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=6dUkUYoVMH3OwOP3a0FuWoeqPsTxQ37KEIRMyIyUIXw=;
        b=G/5WCX4KBjYtKnQwtu8H1+Ej0n0smVzdGdA3LniGLrd1E0EdKonvoWsnnrwfDbycS0
         SQUzozaSnmC7WBpG+yyAQ5sjFlEri0gblno1QFSUMeT6f2OzoTXSv8BT+JWcyGs9bL05
         ZR3Qjeklt9gM+mdfnCxaYW15RjanwLI8nZLHqtxce/37eZ6C61BF76yDJbm18Qx1jdfe
         sXC+fa+GJbttShgreXMQexuEe17LV4HeFbsHT5bZYXSu2zmWSxchumgq7q12v+SN6VIx
         TYr7bVxJu7h7bjw6LKNui1ICOsJyoldGgoUIf6whwExSQjPUxx/baa/K4VZ0y0DxZgfU
         XvhQ==
X-Gm-Message-State: AJIora+337Q7zVltKI93QsZ8YYrbd7ccLcPGMO7EgBfM4SvBg2xierfh
	f+ZfviNnTWAeZc8i3xDDO9g=
X-Google-Smtp-Source: AGRyM1vlW8bdEy6WM2SNLXgzSshd97Auj++84aJ7Br+Z38cc/nC3nlwd1OVDnvAd1Vl5W/9tYhmLfQ==
X-Received: by 2002:adf:d1c4:0:b0:219:ea12:100 with SMTP id b4-20020adfd1c4000000b00219ea120100mr8348431wrd.54.1655453723062;
        Fri, 17 Jun 2022 01:15:23 -0700 (PDT)
Subject: Re: [PATCH v2] xen: don't require virtio with grants for non-PV
 guests
To: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, viresh.kumar@linaro.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
References: <20220616053715.3166-1-jgross@suse.com>
 <573c2d9f-8df0-0e0f-2f57-e8ea85e403b4@gmail.com>
 <cf755bb8-4265-875f-dc20-eefc0e8740f4@suse.com>
 <a67a709a-78b1-c3b1-009e-2d9c834bdd67@gmail.com>
 <alpine.DEB.2.22.394.2206161657440.10483@ubuntu-linux-20-04-desktop>
 <ab59158f-c718-f109-074c-7fc193c5406d@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <c25d64c6-2910-c137-41fb-14d44c4aac7b@gmail.com>
Date: Fri, 17 Jun 2022 11:15:21 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ab59158f-c718-f109-074c-7fc193c5406d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 17.06.22 08:49, Juergen Gross wrote:

Hello Juergen, Stefano


> On 17.06.22 02:03, Stefano Stabellini wrote:
>> On Thu, 16 Jun 2022, Oleksandr wrote:
>>> On 16.06.22 11:56, Juergen Gross wrote:
>>>> On 16.06.22 09:31, Oleksandr wrote:
>>>>>
>>>>> On 16.06.22 08:37, Juergen Gross wrote:
>>>>>
>>>>>
>>>>> Hello Juergen
>>>>>
>>>>>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access 
>>>>>> using
>>>>>> Xen grant mappings") introduced a new requirement for using virtio
>>>>>> devices: the backend now needs to support the 
>>>>>> VIRTIO_F_ACCESS_PLATFORM
>>>>>> feature.
>>>>>>
>>>>>> This is an undue requirement for non-PV guests, as those can be 
>>>>>> operated
>>>>>> with existing backends without any problem, as long as those 
>>>>>> backends
>>>>>> are running in dom0.
>>>>>>
>>>>>> Per default allow virtio devices without grant support for non-PV
>>>>>> guests.
>>>>>>
>>>>>> Add a new config item to always force use of grants for virtio.
>>>>>>
>>>>>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access 
>>>>>> using
>>>>>> Xen grant mappings")
>>>>>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>> ---
>>>>>> V2:
>>>>>> - remove command line parameter (Christoph Hellwig)
>>>>>> ---
>>>>>>    drivers/xen/Kconfig | 9 +++++++++
>>>>>>    include/xen/xen.h   | 2 +-
>>>>>>    2 files changed, 10 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>>>>> index bfd5f4f706bc..a65bd92121a5 100644
>>>>>> --- a/drivers/xen/Kconfig
>>>>>> +++ b/drivers/xen/Kconfig
>>>>>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>>>>>          If in doubt, say n.
>>>>>> +config XEN_VIRTIO_FORCE_GRANT
>>>>>> +    bool "Require Xen virtio support to use grants"
>>>>>> +    depends on XEN_VIRTIO
>>>>>> +    help
>>>>>> +      Require virtio for Xen guests to use grant mappings.
>>>>>> +      This will avoid the need to give the backend the right to 
>>>>>> map all
>>>>>> +      of the guest memory. This will need support on the backend 
>>>>>> side
>>>>>> +      (e.g. qemu or kernel, depending on the virtio device types 
>>>>>> used).
>>>>>> +
>>>>>>    endmenu
>>>>>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>>>>>> index 0780a81e140d..4d4188f20337 100644
>>>>>> --- a/include/xen/xen.h
>>>>>> +++ b/include/xen/xen.h
>>>>>> @@ -56,7 +56,7 @@ extern u64 xen_saved_max_mem_size;
>>>>>>    static inline void xen_set_restricted_virtio_memory_access(void)
>>>>>>    {
>>>>>> -    if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>>>>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || 
>>>>>> xen_pv_domain())
>>>>>> platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>>>
>>>>>
>>>>> Looks like the flag will *always* be set for paravirtualized
>>>>> guests even if CONFIG_XEN_VIRTIO is disabled.
>>>>>
>>>>> Maybe we should clarify the check?
>>>>>
>>>>>
>>>>> if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) ||
>>>>>     (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_pv_domain()))
>>>>>
>>>>>       platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>>>>
>>>>
>>>> Yes, we should. I had the function in grant-dma-ops.c in V1, and 
>>>> could drop
>>>> the
>>>> CONFIG_XEN_VIRTIO dependency for that reason.
>>>>
>>>> I'll wait for more comments before sending V3, though.
>>>
>>> ok
>>>
>>>
>>>
>>> Please note, I am happy with the current patch and it works in my
>>> Arm64-based environment.
>>>
>>> Just one point to consider.
>>>
>>>
>>> As was already mentioned earlier in this thread,
>>> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS (formerly
>>> arch_has_restricted_virtio_memory_access()) is not per device but
>>> applies to the whole guest. Once set, it makes the
>>> VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 features mandatory
>>> for *all* virtio devices in the guest.
>>>
>>> The question is: do we want/need to lift this restriction for some
>>> devices (whose backends are trusted and so can access all guest
>>> memory) at the same time?
>>> Copying Viresh's original question here for convenience:
>>>
>>> "I understand from your email that the backends need to offer the
>>> VIRTIO_F_ACCESS_PLATFORM flag now, but should this requirement be a 
>>> bit soft?
>>> I mean shouldn't we allow both types of backends to run with the 
>>> same kernel,
>>> ones that offer this feature and others that don't? The ones that 
>>> don't offer
>>> the feature, should continue to work like they used to, i.e. without 
>>> the
>>> restricted memory access feature."
>>>
>>> Technically, this is possible with HVM.
>>>
>>> Let's imagine the following situation:
>>>
>>> - Dom0 with backends which don't offer the required features for
>>> some reason
>>>
>>> Running in Dom0 (a trusted domain), these backends are not obliged
>>> to offer them (they can offer the required features and support
>>> grant mappings for virtio, but this is not strictly necessary, as
>>> they are considered trusted and so are allowed to access all guest
>>> memory).
>>>
>>> - DomD with backends which do offer them and require grant mappings
>>> for virtio
>>>
>>> If this is a valid and correct use case, then we indeed need the
>>> ability to control that per device; otherwise, what is written below
>>> doesn't really matter.
>>>
>>> I am wondering whether we can avoid using the global
>>> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS for Xen guests at all. I
>>> assume that all we need to do (when CONFIG_XEN_VIRTIO is enabled) is
>>> to make sure that *only* Xen grant DMA devices in HVM guests, and
>>> *all* devices in PV guests, offer the required flags.
>>>
>>> Below is a diff showing how this could be done without extra options
>>> (not completely tested), although I realize it might look hackish,
>>> and a lot more effort is needed to get it right. It works in my
>>> Arm64-based environment: I tried running two backends, where the
>>> first offered the required features and the corresponding device
>>> node had the required property, while the second didn't and there
>>> was no property.
>>>
>>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>>> index 1f9c3ba..07eb69f 100644
>>> --- a/arch/arm/xen/enlighten.c
>>> +++ b/arch/arm/xen/enlighten.c
>>> @@ -443,8 +443,6 @@ static int __init xen_guest_init(void)
>>>          if (!xen_domain())
>>>                  return 0;
>>>
>>> -       xen_set_restricted_virtio_memory_access();
>>> -
>>>          if (!acpi_disabled)
>>>                  xen_acpi_guest_init();
>>>          else
>>> diff --git a/arch/x86/xen/enlighten_hvm.c 
>>> b/arch/x86/xen/enlighten_hvm.c
>>> index 8b71b1d..517a9d8 100644
>>> --- a/arch/x86/xen/enlighten_hvm.c
>>> +++ b/arch/x86/xen/enlighten_hvm.c
>>> @@ -195,8 +195,6 @@ static void __init xen_hvm_guest_init(void)
>>>          if (xen_pv_domain())
>>>                  return;
>>>
>>> -       xen_set_restricted_virtio_memory_access();
>>> -
>>>          init_hvm_pv_info();
>>>
>>>          reserve_shared_info();
>>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>>> index 30d24fe..ca85d14 100644
>>> --- a/arch/x86/xen/enlighten_pv.c
>>> +++ b/arch/x86/xen/enlighten_pv.c
>>> @@ -108,8 +108,6 @@ static DEFINE_PER_CPU(struct tls_descs, 
>>> shadow_tls_desc);
>>>
>>>   static void __init xen_pv_init_platform(void)
>>>   {
>>> -       xen_set_restricted_virtio_memory_access();
>>> -
>>> populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>>>
>>>          set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
>>> diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
>>> index 371e16b..875690a 100644
>>> --- a/drivers/virtio/virtio.c
>>> +++ b/drivers/virtio/virtio.c
>>> @@ -167,6 +167,11 @@ void virtio_add_status(struct virtio_device *dev,
>>> unsigned int status)
>>>   }
>>>   EXPORT_SYMBOL_GPL(virtio_add_status);
>>>
>>> +int __weak device_has_restricted_virtio_memory_access(struct device 
>>> *dev)
>>> +{
>>> +       return platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>> +}
>>> +
>>>   /* Do some validation, then set FEATURES_OK */
>>>   static int virtio_features_ok(struct virtio_device *dev)
>>>   {
>>> @@ -174,7 +179,7 @@ static int virtio_features_ok(struct 
>>> virtio_device *dev)
>>>
>>>          might_sleep();
>>>
>>> -       if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
>>> +       if 
>>> (device_has_restricted_virtio_memory_access(dev->dev.parent)) {
>>>                  if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
>>>                          dev_warn(&dev->dev,
>>>                                   "device must provide 
>>> VIRTIO_F_VERSION_1\n");
>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>> index 6586152..da938f6 100644
>>> --- a/drivers/xen/grant-dma-ops.c
>>> +++ b/drivers/xen/grant-dma-ops.c
>>> @@ -11,6 +11,7 @@
>>>   #include <linux/dma-map-ops.h>
>>>   #include <linux/of.h>
>>>   #include <linux/pfn.h>
>>> +#include <linux/virtio_config.h>
>>>   #include <linux/xarray.h>
>>>   #include <xen/xen.h>
>>>   #include <xen/grant_table.h>
>>> @@ -286,6 +287,11 @@ bool xen_is_grant_dma_device(struct device *dev)
>>>          return has_iommu;
>>>   }
>>>
>>> +int device_has_restricted_virtio_memory_access(struct device *dev)
>>> +{
>>> +       return (xen_pv_domain() || xen_is_grant_dma_device(dev));
>>> +}
>>> +
>>>   void xen_grant_setup_dma_ops(struct device *dev)
>>>   {
>>>          struct xen_grant_dma_data *data;
>>> diff --git a/include/linux/virtio_config.h 
>>> b/include/linux/virtio_config.h
>>> index 7949829..b3a455b 100644
>>> --- a/include/linux/virtio_config.h
>>> +++ b/include/linux/virtio_config.h
>>> @@ -559,4 +559,6 @@ static inline void virtio_cwrite64(struct 
>>> virtio_device
>>> *vdev,
>>> _r;                                                     \
>>>          })
>>>
>>> +int device_has_restricted_virtio_memory_access(struct device *dev);
>>> +
>>>   #endif /* _LINUX_VIRTIO_CONFIG_H */
>>> diff --git a/include/xen/xen.h b/include/xen/xen.h
>>> index 0780a81..a99bab8 100644
>>> --- a/include/xen/xen.h
>>> +++ b/include/xen/xen.h
>>> @@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct 
>>> bio_vec *vec1,
>>>   extern u64 xen_saved_max_mem_size;
>>>   #endif
>>>
>>> -#include <linux/platform-feature.h>
>>> -
>>> -static inline void xen_set_restricted_virtio_memory_access(void)
>>> -{
>>> -       if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
>>> - platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>>> -}
>>> -
>>>   #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>>>   int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page 
>>> **pages);
>>>   void xen_free_unpopulated_pages(unsigned int nr_pages, struct page 
>>> **pages);
>>> (END)
>>>
>>>
>>> I think when x86 HVM gains the required support (via ACPI or other
>>> means) to communicate x86's alternative to "xen,grant-dma",
>>> xen_is_grant_dma_device() will simply be extended to handle that.
>>
>> Yeah I like this approach:
>>
>> - on ARM it bases the setting of PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS
>>    on "xen,grant-dma", as it should be
>> - it goes beyond my suggestion and it is capable of doing that
>>    per-device, which is awesome
>> - on x86, it always enables for PV guests as they have no other choice
>>
>> On top of this we could add a command line option or kconfig option to
>> force-enable it as well for the benefit of x86/HVM, but I would make
>> that option x86 specific.
>
> In the end the proper solution would be a per-device setting, as 
> Christoph
> already said.


Agreed.


>
> So basically I think we can rip out the
> PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS flag again (which would mean we
> could rip out the whole platform feature support again). Instead we
> should have a platform-specific callback in virtio which replaces the
> test for PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS. The callback would
> have the virtio device as a parameter.
>
> This callback would be pre-initialized with a function always
> returning "false". SEV, TDX and s390 PV could replace it with a
> function always returning "true". When CONFIG_XEN_VIRTIO_FORCE_GRANT
> is set, Xen guests would always return "true"; otherwise they can
> check whether e.g. "xen,grant-dma" was set for the device in the
> device table and return "true" if this is the case. This scheme would
> IMO cover all needs.



If I got the idea correctly, I think this will work too. Sounds fine to me.


>
>
>
> Juergen

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:16:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351099.577606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o279z-0004tE-24; Fri, 17 Jun 2022 08:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351099.577606; Fri, 17 Jun 2022 08:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o279y-0004t7-VE; Fri, 17 Jun 2022 08:16:54 +0000
Received: by outflank-mailman (input) for mailman id 351099;
 Fri, 17 Jun 2022 08:16:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o279x-0004su-FU
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:16:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o279w-0006Bj-KC; Fri, 17 Jun 2022 08:16:52 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.0.243]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o279w-0005Z2-Cy; Fri, 17 Jun 2022 08:16:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ychhPBjESfktqZuqgD7DpY1PUlkzuhKF23PbFWiNdv0=; b=s+A2QN7fQUJeAoOivDV5IEiJQv
	TgyCAFthIDsR4feU+7DKEyAit8tT+3jh5kV3wzgIMgq6pFenV3Q3vx7pi2cvtZQy9OFCgNAOb/uvr
	OnvlXtLR+gWG8ivQOly1UHWMQ+ERcZX+VCLNGJXLon6jiCPdWzvwwkPA11+iIvljHNK0=;
Message-ID: <94122e8d-224d-2632-27ad-d56d3a24b367@xen.org>
Date: Fri, 17 Jun 2022 09:16:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
To: Jens Wiklander <jens.wiklander@linaro.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org> <874k0nhvsq.fsf@epam.com>
 <20220616223728.GA71444@jade>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220616223728.GA71444@jade>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 16/06/2022 23:37, Jens Wiklander wrote:
> On Tue, Jun 14, 2022 at 07:47:18PM +0000, Volodymyr Babchuk wrote:
>>
>> Hello Jens,
>>
>> Sorry for the late review, I was busy with internal projects.
>>
>> This is a preliminary review. I gave up at the scatter-gather
>> operations. I need more time to review them properly.
> 
> No problem, thanks for taking the time.
> 
>>
>> One thing that bothers me is that Xen is non-preemptive and there are
>> plenty of potentially long-running operations.
> 
> There's room to deal with that in the FF-A specification. These scatter-
> gather operations are quite complicated, so I started with the minimum. We
> can address the problem with long-running operations as a future
> optimization.

I would be OK with deferring this work. However, I think this should be 
written down, as the Xen community will not be able to provide security 
support until we have resolved the known places where a vCPU may hog a 
pCPU for longer than necessary.

This reminds me that this series doesn't add a support statement for the 
new subsystem in SUPPORT.md. AFAICT, it should be marked as tech preview 
for now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:24:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:24:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351108.577617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27Hh-0006Z7-Sa; Fri, 17 Jun 2022 08:24:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351108.577617; Fri, 17 Jun 2022 08:24:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27Hh-0006Z0-Pc; Fri, 17 Jun 2022 08:24:53 +0000
Received: by outflank-mailman (input) for mailman id 351108;
 Fri, 17 Jun 2022 08:24:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o27Hh-0006Yu-7j
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:24:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27Hg-0006Jm-BC; Fri, 17 Jun 2022 08:24:52 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.0.243]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27Hg-0005vd-4I; Fri, 17 Jun 2022 08:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=qvHYX2J41cIKah7mxPn4mYSyPxiYpNdLglsYhEX5rlw=; b=j4GwyG/sTiBRa/lbDGXKdCrN5C
	ADMWVPuun6G/NxhVQya48HEwsdtxPEfMkorTh6Dpzc7X+BeAXjZR3l8iuMw9zD/vMqKGpHzTtNVFJ
	zn71+eqLDTluBYI+azwK88d+czKEGWkFQIbgXEeAtp88Ammm6/ZfRZJfoQsaoUlUvDrU=;
Message-ID: <7e4a9c8d-f88c-83dd-535a-b8fae3ac2f6a@xen.org>
Date: Fri, 17 Jun 2022 09:24:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2 00/29] Toolstack build system improvement, toward
 non-recursive makefiles
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, David Scott <dave@recoil.org>,
 Elena Ufimtseva <elena.ufimtseva@oracle.com>
References: <20220225151321.44126-1-anthony.perard@citrix.com>
 <Yqsw5mmC8KHVbtrb@perard.uk.xensource.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <Yqsw5mmC8KHVbtrb@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 16/06/2022 14:32, Anthony PERARD wrote:
> Hi,

Hi Anthony,

> There are quite a few patches in this series that are reviewed and could be
> committed. The reviewed ones don't depend on the other ones.
> 
> The ones I've gathered that I think are properly reviewed are:
> 
> 11: tools/xenstore: Cleanup makefile
> 14: libs: rename LDUSELIBS to LDLIBS and use it instead of APPEND_LDFLAGS
> 15: libs: Remove need for *installlocal targets
> 16: libs,tools/include: Clean "clean" targets
> 17: libs: Rename $(SRCS-y) to $(OBJS-y)
> 18: libs/guest: rename ELF_OBJS to LIBELF_OBJS
> 19: libs/guest: rework CFLAGS
> 20: libs/store: use of -iquote instead of -I
> 21: libs/stat: Fix and rework python-bindings build
> 22: libs/stat: Fix and rework perl-binding build
> 24: stubdom: introduce xenlibs.mk
> 25: tools/libs: create Makefile.common to be used by stubdom build system
> 26: tools/xenstore: introduce Makefile.common to be used by stubdom
> 27: stubdom: build xenstore*-stubdom using new Makefile.common
> 28: stubdom: xenlibs linkfarm, ignore non-regular files
> 29: tools/ocaml: fix build dependency target

Committed.

> 
> (I didn't a run with them on our gitlab ci, and no build issue.)

I am guessing you mean you *did* a run?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:32:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:32:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351117.577628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27Oe-0008FX-MN; Fri, 17 Jun 2022 08:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351117.577628; Fri, 17 Jun 2022 08:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27Oe-0008FQ-He; Fri, 17 Jun 2022 08:32:04 +0000
Received: by outflank-mailman (input) for mailman id 351117;
 Fri, 17 Jun 2022 08:32:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o27Od-0008FK-Ey
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:32:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27Od-0006SH-31; Fri, 17 Jun 2022 08:32:03 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.0.243]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27Oc-0006Bc-Rn; Fri, 17 Jun 2022 08:32:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=7wPxoulyscd6uNYV9TCtejy04qLXHbPs2WjcheNYt7s=; b=jpZWVw2KOGdqWEKFHhk7kIm5Tq
	powNX8lRAeV7Ofk9Hqih6PP1iTTAkYk4jXUokvOQOGhoTiG0gyjVwdR5ihF33x5bTdd8wDn7rqDbl
	kvrG7dqKjnjzFH30ASFDcPkg7/EzWoUotOnYnKGkG6Pl5nEtunmFggGfjJXpoSYAlOX8=;
Message-ID: <6faaa38d-63af-d962-e8de-4accb5b73ab4@xen.org>
Date: Fri, 17 Jun 2022 09:32:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
 switch
To: Wei Chen <Wei.Chen@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220615013909.283887-1-wei.chen@arm.com>
 <c48bb719-8cc6-ea8d-291d-4e09d42f93c2@xen.org>
 <PAXPR08MB7420FDB50DA7265956A3B0BC9EAD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB7420FDB50DA7265956A3B0BC9EAD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 15/06/2022 11:36, Wei Chen wrote:
> Hi Julien,

Hi Wei,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 2022年6月15日 17:47
>> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org
>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
>> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>> <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
>> switch
>>>
>>> So in this patch, we adjust the formula to use "offset - boot_count"
>>> first, and then add cval to the result. This will avoid the
>>> uint64_t overflow.
>>
>> Technically, the overflow is still present because (offset -
>> boot_count) is a non-zero value *and* cval is a 64-bit value.
>>
> 
> Yes, a guest OS can issue any valid 64-bit value for its usage.
> 
>> So I think the equation below should be reworked to...
>>
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> ---
>>>    xen/arch/arm/vtimer.c | 5 +++--
>>>    1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
>>> index 5bb5970f58..86e63303c8 100644
>>> --- a/xen/arch/arm/vtimer.c
>>> +++ b/xen/arch/arm/vtimer.c
>>> @@ -144,8 +144,9 @@ void virt_timer_save(struct vcpu *v)
>>>        if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
>>>             !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
>>>        {
>>> -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v-
>>> arch.virt_timer.cval +
>>> -                  v->domain->arch.virt_timer_base.offset - boot_count));
>>> +        set_timer(&v->arch.virt_timer.timer,
>>> +                  ticks_to_ns(v->domain->arch.virt_timer_base.offset -
>>> +                              boot_count + v->arch.virt_timer.cval));
>>
>> ... something like:
>>
>> ticks_to_ns(offset - boot_count) + ticks_to_ns(cval);
>>
>> The first part of the equation should always be the same. So it could be
>> stored in struct domain.
>>
> 
> If you think there is still some value in continuing this patch, I will
> address this comment : )

I think there is: the overflow can easily be triggered by a vCPU 
setting a large cval.
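A minimal sketch of the unsigned wrap being discussed (the helper and the delta value are illustrative, not the actual Xen code; `delta` stands in for `offset - boot_count`):

```c
#include <assert.h>
#include <stdint.h>

/* A guest fully controls its 64-bit cval, so adding even a small
 * non-zero delta (standing in for "offset - boot_count") can wrap
 * the unsigned 64-bit sum before ticks_to_ns() is ever applied. */
static int add_wraps(uint64_t cval, uint64_t delta)
{
    return cval + delta < cval; /* true iff the addition wrapped */
}
```

For example, `add_wraps(UINT64_MAX - 5, 100)` holds: the expiry would be computed from a tiny wrapped tick count rather than a time far in the future. Converting the two terms separately, as suggested above, keeps the guest-controlled cval out of the intermediate tick addition.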

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:41:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351142.577703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27XP-0002DW-9S; Fri, 17 Jun 2022 08:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351142.577703; Fri, 17 Jun 2022 08:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27XP-0002DP-6T; Fri, 17 Jun 2022 08:41:07 +0000
Received: by outflank-mailman (input) for mailman id 351142;
 Fri, 17 Jun 2022 08:41:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o27XO-0002DJ-9A
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:41:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27XO-0006eR-2U; Fri, 17 Jun 2022 08:41:06 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.0.243]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27XN-0006rx-QY; Fri, 17 Jun 2022 08:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=sB1mRVl4Rk1sygilJ5jg9sG+Sz6Ndm72XR3K4BeEsyQ=; b=mt2gdbCOjqg8eEJCm4tzaUxk7V
	zUGTnDlTadPZ9dr/UIKeZc7WcKZfTwRRaMjRLl6RsoU78+tcRDS+EU1XcE/KBIghRNFKgcWQy1bAz
	eD0c1zrf1kO72XECL5aJ6JbIilyynARlXvJ/WNJnGjwt2Hv4zcTiGUQJB1aP60Y3MQVA=;
Message-ID: <8c03895e-4134-53c5-2248-e8afe4be7b25@xen.org>
Date: Fri, 17 Jun 2022 09:41:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v3 1/9] xen/arm: Print a 64-bit number in hex from early
 uart
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Julien Grall <jgrall@amazon.com>
References: <20220511014639.197825-1-wei.chen@arm.com>
 <20220511014639.197825-2-wei.chen@arm.com>
 <46f6a909-2f77-021c-a069-6a8f827e53fc@xen.org>
 <AS8PR08MB79912A6797514E583095CBFC92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB79912A6797514E583095CBFC92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 17/06/2022 04:27, Henry Wang wrote:
> Hi Julien,

Hi Henry,

>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>> Julien Grall
>>
>> Hi,
>>
>> I have committed this patch.
>>
>> Patch #3 looks to be suitably acked but I am not sure whether it can be
>> committed before #2. So I didn't commit it.
>>
>> Please let me know if it can be.
> 
> IIUC, the latest version (v6) [1] is properly acked and reviewed for the
> whole series, so I think v6 is ready to be merged. Sending this as a
> gentle reminder :)

Thanks for the reminder. My comment above was specifically referring to 
patches in v3. If the patches are from a newer version, can I suggest 
pinging on the exact version?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 08:43:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 08:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351150.577714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27ZI-0002sY-RZ; Fri, 17 Jun 2022 08:43:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351150.577714; Fri, 17 Jun 2022 08:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o27ZI-0002sR-Nw; Fri, 17 Jun 2022 08:43:04 +0000
Received: by outflank-mailman (input) for mailman id 351150;
 Fri, 17 Jun 2022 08:43:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o27ZH-0002sL-AV
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 08:43:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27ZG-0006hy-SS; Fri, 17 Jun 2022 08:43:02 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.0.243]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o27ZG-0006vp-Mi; Fri, 17 Jun 2022 08:43:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cayx7TCsotA4ywUtxUHTncfYJiL0245ngkc8VWGVtao=; b=10IXOBfDO5M6DDjxWkXr8VP0XK
	0v1ih/b9jJi60zIslYKy5q4Isv8WNpELLJ747MeXjue+tdR2tYXQPB6Szo8cRFdgAposSeVl09dRO
	n+M4jXt0s/1Nmqt7fOH2EYvfgD6T/XDqFW0asZYKqBGf0+idejeNIM15rlImB5keu8KE=;
Message-ID: <3cbfb656-419d-15cc-c1da-d826f08ccb77@xen.org>
Date: Fri, 17 Jun 2022 09:43:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v6 0/8] Device tree based NUMA support for Arm - Part#1
To: Wei Chen <wei.chen@arm.com>, xen-devel@lists.xenproject.org
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220610055316.2197571-1-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Wei,

On 10/06/2022 06:53, Wei Chen wrote:
> Wei Chen (8):
>    xen: reuse x86 EFI stub functions for Arm
>    xen/arm: Keep memory nodes in device tree when Xen boots from EFI
>    xen: introduce an arch helper for default dma zone status
>    xen: decouple NUMA from ACPI in Kconfig
>    xen/arm: use !CONFIG_NUMA to keep fake NUMA API
>    xen/x86: use paddr_t for addresses in NUMA node structure
>    xen/x86: add detection of memory interleaves for different nodes
>    xen/x86: use INFO level for node's without memory log message

I have committed the series.

Thanks for the contribution.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 09:10:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 09:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351211.577771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2801-000893-Pt; Fri, 17 Jun 2022 09:10:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351211.577771; Fri, 17 Jun 2022 09:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2801-00088w-Mu; Fri, 17 Jun 2022 09:10:41 +0000
Received: by outflank-mailman (input) for mailman id 351211;
 Fri, 17 Jun 2022 09:10:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWRj=WY=epam.com=prvs=81677805da=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1o27zz-00088q-UU
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 09:10:40 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59f26181-ee1d-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 11:10:38 +0200 (CEST)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25H8pNTt004309;
 Fri, 17 Jun 2022 09:10:32 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2059.outbound.protection.outlook.com [104.47.1.59])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3grpfbr2fg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 17 Jun 2022 09:10:32 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by AM9PR03MB7268.eurprd03.prod.outlook.com (2603:10a6:20b:262::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.22; Fri, 17 Jun
 2022 09:10:28 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::28d9:fd20:dee0:74ed%6]) with mapi id 15.20.5353.014; Fri, 17 Jun 2022
 09:10:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59f26181-ee1d-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G2unOd8An8tfZCjZ9O/2omr1tO6a1LH/68pOoUeVepD2i7cBTSAuKByKf+HVdJq9Nt5Rr8eqfjBwe0u4Elv4R+m9PSgRc2hltP4uQHvk2Pete3gtN+lIUPs7hPQIRVti/J9Ss6XCmxias4Xx4c5moJdExTFVeAw0gvhRI55crdMJ+HpIZjS1Io7usOriP5Uf0/S+QmvafOsSwdA9pjOpR8PJebzke9rj01d6k6KItT7WUBpfGk5XMMjglbFjYkaBsreS2AUF33k7nM2coEoJBICThUfhrt7UKyQSQLCctg0pmTKfCb3IsvWlCi3tI51+CFDTzbez42tylOhBCNF5HA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sBt4BpruxuDW+3U+YaEMd/LSKPRuAhGP8rBevWZMe/U=;
 b=fTHsrHFfpj1NzJ8PNAswgFl+phgYpvy1T7K+fJPyWsfk/xJrEnCg4Bx4aMJUz7Hg5uHYdwumZWoEB89iPJybGz0hND7NHuSq/qMoSHuNBfRRWkoLMEdVaeZHGUo+smqxS/rNVhlhuAjXLJc4jyRJjvHSInO9sv0uQMbvgtDvtWocyJ4eNYejIX2AeAwsZrVB40MfDtYLzzdxrMFdpOObYcSZVUWTO69NbimZVxfIx9OiAwX2q3BsZIQX8JQuFKMxEbAHPsOjoO2J1enG/xeptj2Jzeyvh8YjFB0jvo9WapoROgiX+JyohXNJOeAp4Q4L4ys5SaqzsCC0B/kbM5Exvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sBt4BpruxuDW+3U+YaEMd/LSKPRuAhGP8rBevWZMe/U=;
 b=UnQcBjrrgK52cesOgg2OzdzxASTGv3NsJDBwB//lSbUn8Eq2Heo+9yybsFdfbYDTjDnVeByIwePlPzrnlcV2qJnfw4lm/unNiXQ4VfzU4QRrhm0FC25NwdBhAStU+UNmrEehJNugASeptaH/Rk2vKBHnipnxinRqm4M9nxQvYql0f9TdHZzF0x8iFfhMbn6fR1iAD3wS8emFvkLUx/ceUq6QrFM5vCb1xAUsBpCrZg4AIrrOudU7zXhx/ERXYW0GW94z7Fj+UexE7RhVuU42n5fxUDRX+pI5eQ50Cb7AzDEH770jy6gvxPaiy0nO/CsTXx3hp4CaqQeLrxezDk3xOw==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
CC: "dmitry.semenets@gmail.com" <dmitry.semenets@gmail.com>,
        Dmytro Semenets
	<Dmytro_Semenets@epam.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
Thread-Topic: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
Thread-Index: AQHYgYkEncCZtJfZsEuchGWkwMivea1SI+UAgAA3C4CAAAs5gIAA5jKA
Date: Fri, 17 Jun 2022 09:10:28 +0000
Message-ID: <87pmj7hczg.fsf@epam.com>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org>
In-Reply-To: <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.6.5; emacs 28.1
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 898b1721-97d8-48dd-0c7e-08da504138cd
x-ms-traffictypediagnostic: AM9PR03MB7268:EE_
x-ld-processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
x-microsoft-antispam-prvs: 
 <AM9PR03MB72685330622F53A128A405FBE6AF9@AM9PR03MB7268.eurprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 wtD9zAUF+NPnKQ5VRuNg8N+7NCF0g6VdVYOvnp73ktP+PCj8aVpm9qFEreb99p9m90PQAr5cDZrTXne3cfAtR7RkCUFh31A5LDUt4CiyE7W3ZmzW8M7POQ0ZGf0xqZkyYibenwkBlxGp4HFEKWIeFCfS6LZDeouCQe0EUBTAVziRHm/sckEnoY+auvcti6FeDvJhm5I3gHGCdwvIPLMjvHV0TBUFlhO8wnbe/U8L+e2dpHiUEfExipMqoqNks4ktM7KKu/LZMDIbEoTCrMXbKjSriUyazDbPS21YHwLfjNqibJsPI5PPTb7UFNMYZHJNgvCTcp2a6fJ8EjnJyiZZVyqhbXzdQmzbAMBkFrpyOJ9A7PH60d63hnaUsVGuEqds66mn41cZrEfLArMtW5MFNo0dUclFlPfgC+g+VDfWc+jmkhAKgfnfwwudTHRxcM92/kduz6ZEh2VOGP4BnJYlcVI1m/4w8ejzXri1flZ0b4UhfeIOcNvVl/kJlwIIooOKGNCwVEt59OYYw6dKm3k3801NcJ2tc4tGZgw1Qw8KDo1u35ZRikg9XqwiU1h+JhAzUGPXz0RRTDRBhR8IS4hbL40KMlrZwMV8bMX1p+bMCXWHxqmw2tdZZ6l1DpxXTZzuP37HgjF0h+lLNnn1NouQXSi+hf9265HhUnzvlZ45LK1oD0Z5ygkNbMGc0TGqyDpCNBMg30I5tTL4r6qslXPWbA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB3710.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(38100700002)(55236004)(498600001)(6506007)(71200400001)(83380400001)(316002)(53546011)(186003)(2616005)(2906002)(6916009)(54906003)(86362001)(38070700005)(6486002)(66556008)(122000001)(36756003)(8936002)(64756008)(8676002)(66946007)(4326008)(26005)(66446008)(66476007)(91956017)(76116006)(6512007)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?iso-8859-1?Q?1o65He2Sueg7jjs+gaI5kqsSceWx4W4M1gabBABVn77zQAyUXdKVCIvZzt?=
 =?iso-8859-1?Q?IKU52oVHuu0zl3Wjj2rYFHRcNv3Bw2UrkyR6dIZ+N7BgedjnqW3YjwvaM4?=
 =?iso-8859-1?Q?B/QtNrbmCCmYdl0ZVPjeOsH4Y5kDAeExOj2z8m5rUN2LnA8YPMPz+280cy?=
 =?iso-8859-1?Q?ZJvJxoo4mhiyC/rEg2ZSGWJ8plaIV5ej0D/6h89EgIwncVe8bh+enmGEYB?=
 =?iso-8859-1?Q?ujG4AV7UCDB3XdDzZOfmChrAsd+r2t6+2QFHOZ8shnqoOuCK+YJF6Huw4Y?=
 =?iso-8859-1?Q?T6lxJlCoWXtgRiTXUOELgJWZpHso9DS8KQpUW1WHQ6HLSxULG2R3z6B8EN?=
 =?iso-8859-1?Q?c0JWk3axEH31d5W30R875E5FUWJmGCx916xqGAlndvD9iBxyWhAmX159SN?=
 =?iso-8859-1?Q?x9nPIpyEYQZI5B+cZeda03KP6hP/9YcpELzSf9WMkdICue4KWxZZdy975l?=
 =?iso-8859-1?Q?GbTlJJenRMLXmPTiLju2pDKFXbb2XICTFs9gRM7tLroBuPmg7EQXoBLs3D?=
 =?iso-8859-1?Q?TVEbjppCzbsy+BLeI7IjE8U/HUXRADqHJNqbnv07kobiBqJp2TNsReG7B8?=
 =?iso-8859-1?Q?7h+/IiHxDsYfySNq9Vsh2yl4XFwHKWDa5upsQvh5z5JTkZGjlspqXS/WYX?=
 =?iso-8859-1?Q?O1bcfLDwPncptpFcHNEAaLL+jDbDWQZ8vqykQ6Wc2zwwzOkwHB4r9oXBLb?=
 =?iso-8859-1?Q?SiD/ztPRmwg+7Zq8CaAnLkP9cl2EB2kfMGoD5eK9zDHVHXIIQHgoog+EKT?=
 =?iso-8859-1?Q?MQ9GFRwPh6buSYCVFOOLo5bjdxgD+8HmHRH7cweRSBdrU4FeYgqJQ+HkOF?=
 =?iso-8859-1?Q?+a+hUSpQWBAViaOoHk4wqDPJyP0Ne9+4NRam3GXSM4iFxCFGeBdDEZxbkm?=
 =?iso-8859-1?Q?yjvEGMdmc5iO0V519kHctqYUg8e57W9g9sqjKm5vWYoNJKe5bKpl2yNq1z?=
 =?iso-8859-1?Q?avDUNsAIanMMMLFDsy61YkiZn6+IpnDfd8mMkVWyZzOf1c6SQsoil20cHK?=
 =?iso-8859-1?Q?zh9DJyJSPiRwjVWe7uvfGi6DsNR+7zBTU0HhXn75K5YVEF3P+eOnUNDrE0?=
 =?iso-8859-1?Q?HkJgtmKnqPRNRHp45QRPCngEHJXOZSyx4Ng1BS+t8dPVoJHI+OrIf1d+ww?=
 =?iso-8859-1?Q?AoRTbgrCH+Is1byypWCgoWHP+kWl8mZdQF/j0s7Nl5Qj5CGbxucudqsa2o?=


Hi Julien,


Julien Grall <julien@xen.org> writes:

> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
>> Hi Julien,
>
> Hi Volodymyr,
>
>> Julien Grall <julien@xen.org> writes:
>>
>>> Hi,
>>>
>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>>> According to PSCI specification ARM TF can return DENIED on CPU OFF.
>>>
>>> I am confused. The spec is talking about Trusted OS and not
>>> firmware. The documentation is also not specific to ARM Trusted
>>> Firmware. So did you mean "Trusted OS"?
>> It should be "firmware", I believe.
>
> Hmmm... I couldn't find a reference in the spec suggesting that
> CPU_OFF could return DENIED because of the firmware. Do you have a
> pointer to the spec?

Ah, it looks like we are talking about different things. Indeed, CPU_OFF
can return DENIED only because of the Trusted OS. But the entity that
*returns* the error to the caller is the firmware.
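To illustrate the point, here is a minimal host-runnable C sketch (not the actual Xen code; `psci_cpu_off_stub` and `must_stay_online` are hypothetical names) of classifying the DENIED code that the firmware relays back to the caller:

```c
#include <assert.h>

/* PSCI return codes as defined in DEN0022D.b (values from the spec). */
#define PSCI_RET_SUCCESS   0
#define PSCI_RET_DENIED  (-3)

/*
 * Hypothetical stub standing in for the real SMC to the firmware.
 * On real hardware a successful CPU_OFF never returns; here we model
 * a platform that refuses to turn off CPU0, as described in the thread.
 */
static int psci_cpu_off_stub(unsigned int cpu)
{
    return (cpu == 0) ? PSCI_RET_DENIED : PSCI_RET_SUCCESS;
}

/*
 * Decide how to react to the return code: DENIED means the CPU must
 * stay online (e.g. be parked in a WFI loop) rather than treating
 * the failure as fatal.
 */
static int must_stay_online(int psci_ret)
{
    return psci_ret == PSCI_RET_DENIED;
}
```

This only demonstrates the decision, not the SMC conduit itself.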

>>
>>>
>>> Also, did you reproduce on HW? If so, on which CPU this will fail?
>>>
>> Yes, we reproduced this on HW. In our case it failed on CPU0. To be
>> fair, in our case it had nothing to do with a Trusted OS. It is just a
>> platform limitation - it can't turn off CPU0. But from the Xen
>> perspective there is no difference - the CPU_OFF call returns DENIED.
>
> Thanks for the clarification. I think I have seen that in the wild
> too, but it never got to the top of my queue. It is good that we are
> fixing it.
>
>>
>>>> This patch brings the hypervisor into compliance with the PSCI
>>>> specification.
>>>
>>> Now it means the CPU will never be turned off using PSCI. Instead, we
>>> would end up spinning in Xen. This would be a problem because we would
>>> save less power.
>> Agreed.
>>
>>>
>>>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
>>>> section 5.5.2
>>>
>>> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
>>> the trusted OS can only run on one core.
>>>
>>> Some trusted OSes are migratable. So I think we should first
>>> attempt to migrate the trusted OS off the CPU. Then, if that doesn't
>>> work, we should prevent the CPU from going offline.
>>>
>>> That said, upstream doesn't support CPU offlining (I don't know about
>>> your use case). In the case of a shutdown, it is not necessary to
>>> offline the CPUs, so we could avoid calling CPU_OFF on all CPUs but
>>> one. Something like:
>>>
>> This is an even better approach, yes. But you mentioned CPU_OFF. Did
>> you mean SYSTEM_RESET?
>
> By CPU_OFF I was referring to the fact that Xen will issue the call on
> all CPUs but one. The remaining CPU will issue the command to
> reset/shutdown the system.
>

I just want to clarify: the change that you suggested removes the call to
stop_cpu() in halt_this_cpu(). So no CPU_OFF will be sent at all.

All CPUs except one will spin in

    while ( 1 )
        wfi();

while the last CPU will issue SYSTEM_OFF or SYSTEM_RESET.

Is this correct?
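The sequence described above could be sketched as a toy host-runnable C model (hypothetical names, not the actual Xen implementation): every CPU except the one driving the halt parks itself instead of issuing CPU_OFF, and only the last CPU asks the firmware to power off or reset the system.

```c
#include <assert.h>

#define NR_CPUS 4

/* What each simulated CPU ends up doing during machine_halt():
 * 'P' = parked in a WFI-style loop, 'S' = issues SYSTEM_OFF/RESET. */
static char action[NR_CPUS];

/*
 * Toy model of the suggested flow: halt_this_cpu() no longer calls
 * stop_cpu(), so no CPU_OFF is issued at all. Every CPU except the
 * one driving the halt parks itself; that last CPU issues the
 * system-wide power-off or reset call.
 */
static void simulate_machine_halt(unsigned int halting_cpu)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
        action[cpu] = (cpu == halting_cpu) ? 'S' : 'P';
}
```

In the real code the parked CPUs would loop on `wfi()` forever; the model just records the decision per CPU.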

>>>   void machine_halt(void)
>>> @@ -21,10 +23,6 @@ void machine_halt(void)
>>>       smp_call_function(halt_this_cpu, NULL, 0);
>>>       local_irq_disable();
>>>
>>> -    /* Wait at most another 10ms for all other CPUs to go offline. */
>>> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
>>> -        mdelay(1);
>>> -
>>>       /* This is mainly for PSCI-0.2, which does not return if success. */
>>>       call_psci_system_off();
>>>
>>>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
>>>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>>
>>> I don't recall seeing a patch on the ML recently for this. So is this
>>> an internal review?
>> Yeah, sorry about that. Dmytro is a new member of our team and he is
>> not yet familiar with the differences between internal reviews and
>> reviews on the ML.
>
> No worries. I usually classify as internal review anything that was done
> privately. This looks to be a public review, although not on
> xen-devel.
>
> I understand that some of the patches are still in the PoC stage,
> and doing the review on your GitHub is a good idea. But for those that
> are meant for upstream (e.g. bug fixes, small patches), I would
> suggest doing the review on xen-devel directly.

It is not always clear whether a patch is eligible for upstream. At first
we thought that the problem was platform-specific, and we weren't sure
that we would find a proper upstreamable fix. You probably saw that the
PR's name differs quite a bit from the final patch. This is because the
initial solution was completely different from the final one.

-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 09:11:02 2022
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jiamei Xie <Jiamei.Xie@arm.com>, Wei Chen
	<Wei.Chen@arm.com>, Julien Grall <jgrall@amazon.com>
Subject: RE: [PATCH v3 1/9] xen/arm: Print a 64-bit number in hex from early
 uart
Date: Fri, 17 Jun 2022 09:10:46 +0000
Message-ID:
 <AS8PR08MB799161072CDA77146D5AF5FE92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220511014639.197825-1-wei.chen@arm.com>
 <20220511014639.197825-2-wei.chen@arm.com>
 <46f6a909-2f77-021c-a069-6a8f827e53fc@xen.org>
 <AS8PR08MB79912A6797514E583095CBFC92AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <8c03895e-4134-53c5-2248-e8afe4be7b25@xen.org>
In-Reply-To: <8c03895e-4134-53c5-2248-e8afe4be7b25@xen.org>

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> On 17/06/2022 04:27, Henry Wang wrote:
> > Hi Julien,
> Hi Henry,
> >>
> >> Hi,
> >>
> >> I have committed this patch.
> >>
> >> Patch #3 looks to be suitably acked but I am not sure whether it can be
> >> committed before #2. So I didn't commit it.
> >>
> >> Please let me know if it can be.
> >
> > IIUC, the latest series (v6) [1] is properly acked and reviewed for the
> > whole series, so I think v6 of this series is ready to be merged.
> > Sending this as a gentle reminder :)
>
> Thanks for the reminder. My comment above was specifically referring to
> patches in v3. If the patches are from a new version, can I suggest
> pinging on the exact version?

Oh of course, my bad - I thought that email didn't receive a reply, so I
tried to continue the discussion there. Thanks for the suggestion! I will
keep that in mind from now on :))

Kind regards,
Henry

>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 09:25:54 2022
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, George
 Dunlap <george.dunlap@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, Samuel Thibault
	<samuel.thibault@ens-lyon.org>, Juergen Gross <jgross@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, David Scott
	<dave@recoil.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>, Julien Grall
	<julien@xen.org>
Subject: Re: [XEN PATCH v2 00/29] Toolstack build system improvement, toward
 non-recursive makefiles
Date: Fri, 17 Jun 2022 09:25:32 +0000
Message-ID: <756B5C19-C6DB-44AE-B98E-9257436E28EE@arm.com>
References: <20220225151321.44126-1-anthony.perard@citrix.com>
 <Yqsw5mmC8KHVbtrb@perard.uk.xensource.com>
In-Reply-To: <Yqsw5mmC8KHVbtrb@perard.uk.xensource.com>

Hi Anthony,

> On 16 Jun 2022, at 14:32, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> Hi,
>
> There are quite a few patches in this series that are reviewed and could be
> committed. The reviewed ones don't depend on the other ones.
>
> The list I've gathered of the ones I think are properly reviewed is:
>=20
> 11: tools/xenstore: Cleanup makefile
> 14: libs: rename LDUSELIBS to LDLIBS and use it instead of APPEND_LDFLAGS
> 15: libs: Remove need for *installlocal targets
> 16: libs,tools/include: Clean "clean" targets
> 17: libs: Rename $(SRCS-y) to $(OBJS-y)
> 18: libs/guest: rename ELF_OBJS to LIBELF_OBJS
> 19: libs/guest: rework CFLAGS
> 20: libs/store: use of -iquote instead of -I
> 21: libs/stat: Fix and rework python-bindings build
> 22: libs/stat: Fix and rework perl-binding build
> 24: stubdom: introduce xenlibs.mk
> 25: tools/libs: create Makefile.common to be used by stubdom build system
> 26: tools/xenstore: introduce Makefile.common to be used by stubdom
> 27: stubdom: build xenstore*-stubdom using new Makefile.common
> 28: stubdom: xenlibs linkfarm, ignore non-regular files
> 29: tools/ocaml: fix build dependency target
>
> (I did a run with them on our gitlab CI, and saw no build issues.)

If you do not mind resending a v3 with the remaining patches rebased on staging,
I can spend some time next week to test and review them.

Maybe you can also put in the fixes for the header check.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 09:27:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 09:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351243.577828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o28GN-0002ti-DM; Fri, 17 Jun 2022 09:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351243.577828; Fri, 17 Jun 2022 09:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o28GN-0002tb-9u; Fri, 17 Jun 2022 09:27:35 +0000
Received: by outflank-mailman (input) for mailman id 351243;
 Fri, 17 Jun 2022 09:27:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o28GL-0002tR-Iz; Fri, 17 Jun 2022 09:27:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o28GL-0007YC-Ff; Fri, 17 Jun 2022 09:27:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o28GL-0000hs-7d; Fri, 17 Jun 2022 09:27:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o28GL-0005bP-73; Fri, 17 Jun 2022 09:27:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zhugFwnFbN/SkBZIgGYvSag2C4li0N9eVGj/zrA1gJA=; b=icF0qLL1WRhGqzjc5d2Io2WkUi
	1aRGAWzkzW5PJ9Ib/AZ97u+vf28I9VivaAoXMvGL3+RnhSiGcTLw2nHNpohN/+5c8YZi1GKEYzMyj
	DpFgmIXiXh77uhXhkH9mQfjSvlqzjJK15/0ewgtGJ27n3giyEXrAjmxie44XoMW2hElE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171205-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 171205: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    xen-4.15-testing:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate/x10:fail:heisenbug
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a3faf632606e54437146dbcac2c9bbb89b9a4007
X-Osstest-Versions-That:
    xen=0d12261727410d13c4a59d94e34937d6d91ba641
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 09:27:33 +0000

flight 171205 xen-4.15-testing real [real]
flight 171238 xen-4.15-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171205/
http://logs.test-lab.xenproject.org/osstest/logs/171238/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot            fail pass in 171238-retest
 test-amd64-amd64-dom0pvh-xl-amd 20 guest-localmigrate/x10 fail pass in 171238-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl         15 migrate-support-check fail in 171238 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 171238 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170922
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170922
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170922
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170922
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170922
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170922
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170922
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 170922
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170922
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170922
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170922
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170922
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170922
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a3faf632606e54437146dbcac2c9bbb89b9a4007
baseline version:
 xen                  0d12261727410d13c4a59d94e34937d6d91ba641

Last test of basis   170922  2022-06-10 08:54:24 Z    7 days
Testing same since   171205  2022-06-16 13:08:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0d12261727..a3faf63260  a3faf632606e54437146dbcac2c9bbb89b9a4007 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 09:39:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 09:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351257.577839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o28SF-0004wn-Lp; Fri, 17 Jun 2022 09:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351257.577839; Fri, 17 Jun 2022 09:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o28SF-0004wg-Ie; Fri, 17 Jun 2022 09:39:51 +0000
Received: by outflank-mailman (input) for mailman id 351257;
 Fri, 17 Jun 2022 09:39:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Utso=WY=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o28SD-0004wa-W2
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 09:39:50 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe05::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d7530dd-ee21-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 11:39:48 +0200 (CEST)
Received: from AS9PR06CA0301.eurprd06.prod.outlook.com (2603:10a6:20b:45b::7)
 by DB9PR08MB6556.eurprd08.prod.outlook.com (2603:10a6:10:261::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Fri, 17 Jun
 2022 09:39:37 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45b:cafe::c8) by AS9PR06CA0301.outlook.office365.com
 (2603:10a6:20b:45b::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15 via Frontend
 Transport; Fri, 17 Jun 2022 09:39:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 09:39:36 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 17 Jun 2022 09:39:36 +0000
Received: from dfe7596f22f5.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2C01085B-3ED4-4F39-9738-9EA955BAB181.1; 
 Fri, 17 Jun 2022 09:39:26 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dfe7596f22f5.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 09:39:26 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by PR3PR08MB5803.eurprd08.prod.outlook.com (2603:10a6:102:82::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Fri, 17 Jun
 2022 09:39:23 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5353.016; Fri, 17 Jun 2022
 09:39:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d7530dd-ee21-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=jkgCm7KmaltYUPrfRCOr476n5xsGrFU+C9TyZmaTnEJFHdWxkpEEova7qwnXAowXJKwIO6Br7fPurJ3YL0ypvjHZpT6dEh6GCzCglHHR/gSSV4bqJ6Ty2blCPGKJNUuJ24vABZ/LaDS0l1+U8hsZKt/OjYP7/ZcqlU2+720SMuRdMUvLzvfsJhDyZTyTFdwbvblYCijctQyTnkHuEitRgu5n1eYLcGLjdPZ4oE/T8Pog32CNCJQ6IWhS5qmabtoHTsvGQoPx9SPNG0aprAdg9hx7Ln4YqqcjN/HPNDE5LB34VUayjHu9+iwlYVcMRaPJ/48dVDZlJGPrd4hEMk0hKQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8BvXRVqzRMxnScxiVlkOgMiqilzZG79r3Y5G4m344iE=;
 b=CJEJGY3P/q+Trb0Ef5TaJA+2l4UibZv64QQ8uDd0S1S3W0XfOc/h6BrqS/bSuDJl9CwbfMkxx57XqgyNvQtsJRVvP92Wd7xypC+HvwjkJvBxeysAQUGfDXAbQxOWqKzgZPQEoA4a2oC9VMMBgISuTl+EP49/7PbiKbZasOaf2XAouWTEVISSt/aXXV2myte2wuKKEGDcclo1sodY1jYVGAoCsz1BnSvzsmsOdGt1AISfn+RTLqBNlQ5NlVizl+JXbJEH7GyetRBtCxwdguXVMmHLXsYkLK6ZEoBgPAWRZt/iztVRC1MYh3gNYd5zx2at9BEC3yzTsE+xqvTbZYTBJw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8BvXRVqzRMxnScxiVlkOgMiqilzZG79r3Y5G4m344iE=;
 b=f063PH3fvdqFe08qljczF2ith5rZZE9lxPp6st3c0w4fRygWBVlEucdBxgFgLWq6hz9ax/LXafGPhLJ64QU3ZzkhK1b5xvbVvqU7gLTb/SC8gWoO6KfNGa4L3xTnnPXcU+z82Nk08boRTAB4bPf/1cYynczyq1FPQZxJIS2DAfY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Si0P3w4EDcZkLSJi85cLeHSHzKqg1omrPuDV4xL38PnTHlUekU+tPUH8snB2DwnxP5i9m2uGhY+04Ts88AUimYo1KxBTC60T8yO8LdtpwTVsaUK4wkm6cIETo0WGwlNkWeGgAK8i+Sgzbj4h+ymUPsrcHJa0QT0SFCBDunwCz6ZTEVdL1b1A+F4tMt0LSQ6DDQ/9KasNDangpmjPQkqdL4wGhG6uYqVqm7xUkAiTsfTT4tfF9C5eNkReN5t9BsxfC73wFWwqoZiNQHhNBINAR94x8JVp5vP4NBCqlPpqm3FToBWZdrUEpMqZMsi7K7FzFdodkmmbKmOKZ47mMjy3QQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8BvXRVqzRMxnScxiVlkOgMiqilzZG79r3Y5G4m344iE=;
 b=dPGppfWO/C5PYLQc0fDtcQrOvsr4XRCz/iZGBTUnkKUmrHxuxjKJlgvMursMJbxG8xojWaC2XxPUDVzUsUtToFR3Pv38+bCtZ2RTJBAzDVu5RjNMRsf3YxXIyjNFmJbs9fj+gg4tIuAlavIKHXIN0ap3LlrhhS+AjXjCxIMEXCAfosVaIPhRyvLxEL2F3WRNZ7ws9vq7RMfMM/vjFwwmIkuZMc9/MSnVpKoYTtKa0sJU53AN4mcPsogKc1QASZXEuh7Av68qs2SygmE8hEKEXpjxXZGF0bmPHFKPFaAVBKAY8ce8gouFI8X82JInMuQ+UJzdjQzk854VdNo5sKbFHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8BvXRVqzRMxnScxiVlkOgMiqilzZG79r3Y5G4m344iE=;
 b=f063PH3fvdqFe08qljczF2ith5rZZE9lxPp6st3c0w4fRygWBVlEucdBxgFgLWq6hz9ax/LXafGPhLJ64QU3ZzkhK1b5xvbVvqU7gLTb/SC8gWoO6KfNGa4L3xTnnPXcU+z82Nk08boRTAB4bPf/1cYynczyq1FPQZxJIS2DAfY=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
Thread-Topic: [PATCH v4 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
Thread-Index: AQHYac1fSwyDVJqZLUSNuimuDmM9e60jOyEAgACwTiCAAEQEAIAvUw8A
Date: Fri, 17 Jun 2022 09:39:22 +0000
Message-ID:
 <DU2PR08MB732548EE76E21ABF9E8232AAF7AF9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220517090529.3140417-1-Penny.Zheng@arm.com>
 <20220517090529.3140417-3-Penny.Zheng@arm.com>
 <e587d965-819c-993f-f5fc-0d863d372507@suse.com>
 <DU2PR08MB7325ACBD82A63879F770F8BBF7D19@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <385c964d-0d25-3967-5683-3731dc1eb0c2@suse.com>
In-Reply-To: <385c964d-0d25-3967-5683-3731dc1eb0c2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A3E40837EBD76946AB659099C70B964D.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 86ac944d-e506-4a01-0c9a-08da50454b18
x-ms-traffictypediagnostic:
	PR3PR08MB5803:EE_|AM5EUR03FT037:EE_|DB9PR08MB6556:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR08MB65561202B7620C03C430C6D4F7AF9@DB9PR08MB6556.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 iDgLTJymoySGxHgi1vekxnt+ivPhTZ847MYXoahoF51+TSxZSZAwtQmZBMKCghnwHjWm5uSjRYH0j85KJISCTcr90a7azt5GCs0tZJMcfkstRQUyDQ1KYvMAaYKNiNxkJpUbgAalw5lCetFN8GA2cPmOLGdGKfGhBzv5jBe4kLSKQq9K5WQr7DYyghgnvDU6VWUoMo3H8H/P8V1DHdCxefjqBXE9LOzGU7IXDLceWFYhF83NMpu6RNdKI0faZvSSy5OxiB422vOoZqn/pEK3HIoI3O12WizGn+PdQcpYCiqH/Sina0hLsc18AobtnA43SzoV7rm5o9xgof/4EcCwOXCsdnWSMznlZyS58l5zKm0pfBGxvej+n10pu1jYmO7TrhX0y5TzE59hSHT1HNdkCLWo/JsTDZhpUtewZE+mma2g/ZQvpHjRXJkRjB6LEw0vE8Qt9dznkcQgukBPvqyJZCTxzJ+Qph6ncblQIf2UgQtAVtLZBNMcmEO6EfhmcTuVAy1+79lCx2vDYn8PVi9jGX6xmzFJPwR+R0WZ3plctwqiY8QLeoc4amVfDq6E3022ipA6XAFEYqPe4v/oYmGMnHMYALK9sRZjMi4DQjihv+3ITDjHKgyK7nq69TL4EqrDVVbdeC+P9HA7txVEEpXtTXOEXJXo5DFWbtJpeJXMfVaivifwmCHgkgOfKU7y6N3b7bb1XGLXyjkfXWpKD/rIlPEuGW8UeHKfBBtwcXWNIkUOl0h8kH7u/31myPSe4dNeyBKMU5iZFuD/YvXwY71/CRN9p0GAbOfj1OAKU5mTaGvGWaUizB7tkRzudkATRX/z
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(2906002)(9686003)(83380400001)(8676002)(53546011)(26005)(7696005)(186003)(8936002)(5660300002)(6506007)(52536014)(33656002)(55016003)(54906003)(4326008)(66476007)(76116006)(498600001)(66946007)(122000001)(86362001)(66556008)(66446008)(38070700005)(64756008)(38100700002)(71200400001)(6916009)(966005)(316002)(21314003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5803
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	51c0e075-91f1-4f3e-731c-08da504542d5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7EQB4kqwFfqWJcgq+hlDE+ldYRcrjsTzIpO7Gq6dXMLCfFdqprrNHzKbTm9D6CM3reE/7ViR4gddp7k4bbxTg07eOQ4BtqMp1gZ6o4WzCO5n6JWTolIuZl/INq+mnoGRaiZOmL+kr2lNQMg98dLP/G3C6j5YOcR14ZNh5k7iczgwypjeyoW9A9wHHyJIhtjIAiJfS5JzJ51+SXfHIyjEyMqUoXVj996vH0Js1EfY+Os76M2TraxtvxdbjgobxVbVhIHJQ1Hy68CxhuLy9Cmk9FcUM32jcZgcbRZprtTWahTtpSaNvDgP0i4hgk55swcoSt2jJ+XPJ3nM14McoYElEpjoGvKQYRKa8/2lZ2Vjyy+f/SdHRvFAyS41WCpKp5/xErsbEWG8atPJUMKAYyZ/IbB7bl0zXSA8FnYuy+/K/bL7SWD4okomqId5aYdl5RVTxK5CPpZ9TaHSoAV+aSv76ad/9gnrNmaR/FKMZEErdiGPVH9l8gPeZBBlbdLgWP4XIFpLf4Y3cYIrn5D1BXrXIgDYNSOQXYq40L5Bd1d6yRmJIBaEWxIOsiUlP/HREsW/8DtSAHROEzBu6y5WN5JC8HtDedFoI2oz+N9Gh68151nknWS9IPSVlg86TJ4It/BQJNzXR2wDV863bdrgRZpMC0jYDH/oOgsz8MnTHpFxQlYc0BxKFIArVvJtVIwIWgMMTEkfExhiPbw0PyMHVRuCmkB6ZfteJAgGfrnk6pybWDmcFEWVD0A6TpGKOiRWkuM6il1Qg4yw8pMkdtMbVrl+ckx3+KJOcCVx4KupiBhMcPc=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(40470700004)(46966006)(36840700001)(26005)(40460700003)(70586007)(9686003)(8936002)(70206006)(7696005)(5660300002)(186003)(6506007)(53546011)(55016003)(316002)(6862004)(356005)(81166007)(86362001)(82310400005)(52536014)(33656002)(2906002)(4326008)(47076005)(8676002)(54906003)(966005)(508600001)(36860700001)(83380400001)(336012)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 09:39:36.8325
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 86ac944d-e506-4a01-0c9a-08da50454b18
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6556

SGkgSmFuDQoNClNvcnJ5IGFib3V0IHRoZSBsYXRlIHJlcGx5LCBnb3Qgc2lkZXRyYWNrZWQgYSBm
ZXcgd2Vla3MuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPg0KPiBTZW50OiBXZWRuZXNkYXksIE1heSAxOCwgMjAy
MiAyOjM2IFBNDQo+IFRvOiBQZW5ueSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT4NCj4gQ2M6
IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29tPjsgU3RlZmFubyBTdGFiZWxsaW5pDQo+IDxzc3Rh
YmVsbGluaUBrZXJuZWwub3JnPjsgSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz47IEJlcnRy
YW5kIE1hcnF1aXMNCj4gPEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47IFZvbG9keW15ciBCYWJj
aHVrDQo+IDxWb2xvZHlteXJfQmFiY2h1a0BlcGFtLmNvbT47IEFuZHJldyBDb29wZXINCj4gPGFu
ZHJldy5jb29wZXIzQGNpdHJpeC5jb20+OyBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNp
dHJpeC5jb20+Ow0KPiBXZWkgTGl1IDx3bEB4ZW4ub3JnPjsgeGVuLWRldmVsQGxpc3RzLnhlbnBy
b2plY3Qub3JnDQo+IFN1YmplY3Q6IFJlOiBbUEFUQ0ggdjQgMi84XSB4ZW4vYXJtOiBhbGxvY2F0
ZSBzdGF0aWMgc2hhcmVkIG1lbW9yeSB0byB0aGUNCj4gZGVmYXVsdCBvd25lciBkb21faW8NCj4g
DQo+IE9uIDE4LjA1LjIwMjIgMDU6MTQsIFBlbm55IFpoZW5nIHdyb3RlOg0KPiA+IEhpIEphbg0K
PiA+DQo+ID4+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+ID4+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gPj4gU2VudDogV2VkbmVzZGF5LCBNYXkgMTgsIDIw
MjIgMTI6MDEgQU0NCj4gPj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPg0K
PiA+PiBDYzogV2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+OyBTdGVmYW5vIFN0YWJlbGxpbmkN
Cj4gPj4gPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+OyBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPjsgQmVydHJhbmQNCj4gPj4gTWFycXVpcyA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsg
Vm9sb2R5bXlyIEJhYmNodWsNCj4gPj4gPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPjsgQW5k
cmV3IENvb3Blcg0KPiA+PiA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT47IEdlb3JnZSBEdW5s
YXANCj4gPj4gPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT47IFdlaSBMaXUgPHdsQHhlbi5vcmc+
Ow0KPiA+PiB4ZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcNCj4gPj4gU3ViamVjdDogUmU6
IFtQQVRDSCB2NCAyLzhdIHhlbi9hcm06IGFsbG9jYXRlIHN0YXRpYyBzaGFyZWQgbWVtb3J5IHRv
DQo+ID4+IHRoZSBkZWZhdWx0IG93bmVyIGRvbV9pbw0KPiA+Pg0KPiA+PiBPbiAxNy4wNS4yMDIy
IDExOjA1LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPj4+IC0tLSBhL3hlbi9jb21tb24vZG9tYWlu
LmMNCj4gPj4+ICsrKyBiL3hlbi9jb21tb24vZG9tYWluLmMNCj4gPj4+IEBAIC03ODAsNiArNzgw
LDExIEBAIHZvaWQgX19pbml0IHNldHVwX3N5c3RlbV9kb21haW5zKHZvaWQpDQo+ID4+PiAgICAg
ICAqIFRoaXMgZG9tYWluIG93bnMgSS9PIHBhZ2VzIHRoYXQgYXJlIHdpdGhpbiB0aGUgcmFuZ2Ug
b2YgdGhlDQo+IHBhZ2VfaW5mbw0KPiA+Pj4gICAgICAgKiBhcnJheS4gTWFwcGluZ3Mgb2NjdXIg
YXQgdGhlIHByaXYgb2YgdGhlIGNhbGxlci4NCj4gPj4+ICAgICAgICogUXVhcmFudGluZWQgUENJ
IGRldmljZXMgd2lsbCBiZSBhc3NvY2lhdGVkIHdpdGggdGhpcyBkb21haW4uDQo+ID4+PiArICAg
ICAqDQo+ID4+PiArICAgICAqIERPTUlEX0lPIGNvdWxkIGFsc28gYmUgdXNlZCBmb3IgbWFwcGlu
ZyBtZW1vcnkgd2hlbiBubyBleHBsaWNpdA0KPiA+Pj4gKyAgICAgKiBkb21haW4gaXMgc3BlY2lm
aWVkLg0KPiA+Pj4gKyAgICAgKiBGb3IgaW5zdGFuY2UsIERPTUlEX0lPIGlzIHRoZSBvd25lciBv
ZiBtZW1vcnkgcHJlLXNoYXJlZCBhbW9uZw0KPiA+Pj4gKyAgICAgKiBtdWx0aXBsZSBkb21haW5z
IGF0IGJvb3QgdGltZSwgd2hlbiBubyBleHBsaWNpdCBvd25lciBpcyBzcGVjaWZpZWQuDQo+ID4+
PiAgICAgICAqLw0KPiA+Pj4gICAgICBkb21faW8gPSBkb21haW5fY3JlYXRlKERPTUlEX0lPLCBO
VUxMLCAwKTsNCj4gPj4+ICAgICAgaWYgKCBJU19FUlIoZG9tX2lvKSApDQo+ID4+DQo+ID4+IEkn
bSBzb3JyeTogVGhlIGNvbW1lbnQgY2hhbmdlIGlzIGRlZmluaXRlbHkgYmV0dGVyIG5vdyB0aGFu
IGl0IHdhcywNCj4gPj4gYnV0IGl0IGlzIHN0aWxsIHdyaXR0ZW4gaW4gYSB3YXkgcmVxdWlyaW5n
IGZ1cnRoZXIga25vd2xlZGdlIHRvDQo+ID4+IHVuZGVyc3RhbmQgd2hhdCBpdCB0YWxrcyBhYm91
dC4gV2l0aG91dCBmdXJ0aGVyIGNvbnRleHQsICJ3aGVuIG5vDQo+ID4+IGV4cGxpY2l0IGRvbWFp
biBpcyBzcGVjaWZpZWQiIG9ubHkgcmFpc2VzIHF1ZXN0aW9ucy4gSSB3b3VsZCBoYXZlDQo+ID4+
IHRyaWVkIHRvIG1ha2UgYSBzdWdnZXN0aW9uLCBidXQgSSBjYW4ndCByZWFsbHkgZmlndXJlIHdo
YXQgaXQgaXMgdGhhdCB5b3Ugd2FudA0KPiB0byBnZXQgYWNyb3NzIGhlcmUuDQo+ID4NCj4gPiBI
b3cgYWJvdXQgSSBvbmx5IHJldGFpbiB0aGUgIkZvciBpbnN0YW5jZSwgeHh4IiBhbmQgbWFrZSBp
dCBtb3JlIGluIGRldGFpbHMuDQo+ID4gIg0KPiA+IERPTUlEX0lPIGlzIGFsc28gdGhlIGRlZmF1
bHQgb3duZXIgb2YgbWVtb3J5IHByZS1zaGFyZWQgYW1vbmcgbXVsdGlwbGUNCj4gPiBkb21haW5z
IGF0IGJvb3QgdGltZSwgd2hlbiBubyBleHBsaWNpdCBvd25lciBpcyBzcGVjaWZpZWQgd2l0aCAi
b3duZXIiDQo+ID4gcHJvcGVydHkgaW4gc3RhdGljIHNoYXJlZCBtZW1vcnkgZGV2aWNlIG5vZGUu
IFNlZSBzZWN0aW9uDQo+ID4gZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dDog
U3RhdGljIFNoYXJlZCBNZW1vcnkgZm9yIG1vcmUNCj4gZGV0YWlscy4NCj4gPiAiDQo+IA0KPiBU
aGlzIHJlYWRzIHF1aXRlIGEgYml0IGJldHRlci4gWWV0IEkgY29udGludWUgdG8gYmUgcHV6emxl
ZCBhYm91dCB0aGUgYXBwYXJlbnQNCj4gY29uZmxpY3Qgb2YgInByZS1zaGFyZWQiIGFuZCAibm8g
ZXhwbGljaXQgb3duZXIiOiBIb3cgY2FuIG1lbW9yeSBiZSAocHJlLQ0KPiApc2hhcmVkIHdoZW4g
dGhlIG93bmVyIGlzbid0IGtub3duPyBTaG91bGRuJ3QgYWxsIG1lbW9yeSBoYXZlIGFuIG93bmVy
Pw0KPiBPciBhbHRlcm5hdGl2ZWx5IGlmIHRoaXMgc2hhcmluZyBtb2RlbCBkb2Vzbid0IHJlcXVp
cmUgb3duZXJzaGlwLCBzaG91bGRuJ3QgYWxsDQo+IHNoYXJlZCBtZW1vcnkgYmUgb3duZWQgYnkg
RG9tSU8/IEluIGFueSBldmVudCwgdG8gbGVhdmUgc3VjaCBkZXRhaWxzIG91dCBvZg0KPiBoZXJl
LCBwZXJoYXBzIHRoZSBjb21tZW50IGNvdWxkIGNvbnNpc3Qgb2YganVzdCB0aGUgZmlyc3QgcGFy
dCBvZiB3aGF0IHlvdQ0KPiB3cm90ZSwgZW5kaW5nIGF0IHdoZXJlIHRoZSBmaXJzdCBjb21tYSBp
cz8NCj4gDQoNCldlIGhhdmUgYSBzaG9ydCBkaXNjdXNzaW9uIGFib3V0IHRoZSBtZW1vcnkgb3du
ZXJzaGlwIG9uIG15IGRlc2lnbiBsaW5rKA0KaHR0cHM6Ly9sb3JlLmtlcm5lbC5vcmcvYWxsL2E1
MGQ5ZmRlLTFkMDYtN2NkYS0yNzc5LTllZWE5ZTFjMDEzNEB4ZW4ub3JnL1QvKQ0KLCB3ZSBoYXZl
IHVzZXIgY2FzZXMgZm9yIGJvdGggc2NlbmFyaW8uDQoNCk9rLCBJIHdpbGwgbW9kaWZ5IHRoZSBj
b21tZW50IGFuZCBvbmx5IGtlZXANCiINCkRPTUlEX0lPIGlzIGFsc28gdGhlIGRlZmF1bHQgb3du
ZXIgb2YgbWVtb3J5IHByZS1zaGFyZWQgYW1vbmcgbXVsdGlwbGUNCmRvbWFpbnMgYXQgYm9vdCB0
aW1lLg0KIg0KIA0KPiBKYW4NCg0K


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 10:30:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 10:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351268.577856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29FS-0003ah-IF; Fri, 17 Jun 2022 10:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351268.577856; Fri, 17 Jun 2022 10:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29FS-0003aa-F7; Fri, 17 Jun 2022 10:30:42 +0000
Received: by outflank-mailman (input) for mailman id 351268;
 Fri, 17 Jun 2022 10:30:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o29FR-0003aU-12
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 10:30:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o29FQ-0000GD-Nm; Fri, 17 Jun 2022 10:30:40 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o29FQ-0003sg-DK; Fri, 17 Jun 2022 10:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=YulpAZhbVg6y+ee/fL/9bL7V1W2kdnst8Vz1P9ePPw0=; b=yf+iJk
	yI71j1TcQ4ladswWyndbvQfuGP+BeLRZoU4udUHEybiof1cl2Fr/+byepGeDvJasr+2dCKEBnxxBu
	DPB6qIORYjKdpEdJFJiyXbOStaI0yFq+dN7RFbQlrGIzeveTqzSRllFMOKZj0kJQtXhSRv74V1xNH
	5FsmH/+gdpQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	jgross@suse.com,
	oleksandr_tyshchenko@epam.com,
	linux-kernel@vger.kernel.org,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH] x86/xen: Remove undefined behavior in setup_features()
Date: Fri, 17 Jun 2022 11:30:37 +0100
Message-Id: <20220617103037.57828-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

1 << 31 is undefined behavior, because it shifts a 1 into the sign
bit of a signed int. So switch to 1U << 31.

Fixes: 5ead97c84fa7 ("xen: Core Xen implementation")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

This was actually caught because I wasn't able to boot Linux 5.18
and onwards when built with GCC 7.3 and UBSAN enabled. There was
no error message, just an early crash because the instruction "cli"
was used too early.

This issue has always been there, but it only surfaced after Linux
switched from C89 to C11.
---
 drivers/xen/features.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/features.c b/drivers/xen/features.c
index 7b591443833c..87f1828d40d5 100644
--- a/drivers/xen/features.c
+++ b/drivers/xen/features.c
@@ -42,7 +42,7 @@ void xen_setup_features(void)
 		if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0)
 			break;
 		for (j = 0; j < 32; j++)
-			xen_features[i * 32 + j] = !!(fi.submap & 1<<j);
+			xen_features[i * 32 + j] = !!(fi.submap & 1U << j);
 	}
 
 	if (xen_pv_domain()) {
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 10:32:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 10:32:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351275.577866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29HG-0004Ad-Tp; Fri, 17 Jun 2022 10:32:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351275.577866; Fri, 17 Jun 2022 10:32:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29HG-0004AW-RG; Fri, 17 Jun 2022 10:32:34 +0000
Received: by outflank-mailman (input) for mailman id 351275;
 Fri, 17 Jun 2022 10:32:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=laXa=WY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o29HF-0004AQ-L0
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 10:32:33 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb833f91-ee28-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 12:32:32 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D2A581FDB4;
 Fri, 17 Jun 2022 10:32:31 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9BB881348E;
 Fri, 17 Jun 2022 10:32:31 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Z1I9JD9YrGKsSwAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 17 Jun 2022 10:32:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb833f91-ee28-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655461951; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=vfGVG7l4qDZVfaHii8+lmDAoiErziAk9Hp+LpSYiOKQ=;
	b=m1mPYTztGK2yaKiFHrZlS2F9egHmTMmXingjMgcKW/mD5XN4rrspB+fzObw3suKioAUR4/
	YbhWVotHwIIw+04620dLcE8tB6YsNNNBOngCTW5L1+EU4JcNpVQluB16LIzg9WRQh8BpJs
	p2/q54VKDzu2nFBVCT7jwgN/1gg1XC0=
Message-ID: <c82cd25e-8039-9f12-eb83-b40ddbea78a7@suse.com>
Date: Fri, 17 Jun 2022 12:32:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] x86/xen: Remove undefined behavior in setup_features()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
 linux-kernel@vger.kernel.org, Julien Grall <jgrall@amazon.com>
References: <20220617103037.57828-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220617103037.57828-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------KRKdfK0u9bxFqfjsHauZWlBV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------KRKdfK0u9bxFqfjsHauZWlBV
Content-Type: multipart/mixed; boundary="------------MwfX8tx192BFjafLict0kdja";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
 linux-kernel@vger.kernel.org, Julien Grall <jgrall@amazon.com>
Message-ID: <c82cd25e-8039-9f12-eb83-b40ddbea78a7@suse.com>
Subject: Re: [PATCH] x86/xen: Remove undefined behavior in setup_features()
References: <20220617103037.57828-1-julien@xen.org>
In-Reply-To: <20220617103037.57828-1-julien@xen.org>

--------------MwfX8tx192BFjafLict0kdja
Content-Type: multipart/mixed; boundary="------------0T3UG1Mu2FVW8xe7wk45JqFe"

--------------0T3UG1Mu2FVW8xe7wk45JqFe
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTcuMDYuMjIgMTI6MzAsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4gDQo+IDEgPDwgMzEgaXMgdW5kZWZpbmVk
LiBTbyBzd2l0Y2ggdG8gMVUgPDwgMzEuDQo+IA0KPiBGaXhlczogNWVhZDk3Yzg0ZmE3ICgi
eGVuOiBDb3JlIFhlbiBpbXBsZW1lbnRhdGlvbiIpDQo+IFNpZ25lZC1vZmYtYnk6IEp1bGll
biBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+DQoNClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdy
b3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQoNCg0KSnVlcmdlbg0K
--------------0T3UG1Mu2FVW8xe7wk45JqFe
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0T3UG1Mu2FVW8xe7wk45JqFe--

--------------MwfX8tx192BFjafLict0kdja--

--------------KRKdfK0u9bxFqfjsHauZWlBV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKsWD8FAwAAAAAACgkQsN6d1ii/Ey/0
HAgAnvuJNuW6p8gwbdOzIOvj6MKEcB+w59Xe+bDbKcv0x6NBNZD9TwtgF+NOPPq2vLPhOn5YQzBN
JR27N4qKRhxdP5N9nsrSoNnbeHXJKLDzfHHk0emSItvBxmOqyw0e+4E8eAPEFjINQtZt9uHUasrD
dTTkLN5iGHjrZDvnrtRtd5j/nWzdhM/DuvJutCaD+J4Cg4G4Tt7qUOM0NuISaf1ZT1YbZrYvL2It
fEAuVZ2t1gO8WBH8gXhUQd8JSLfHb7id8VbVuzRS1jj/+4LgUz0LEzkgPtouYrkk3t/RiVocYyg5
5H3MQ+JJcjITdbfbwDmHTYQkDq3m0/3hYlWnIVBibQ==
=6UhH
-----END PGP SIGNATURE-----

--------------KRKdfK0u9bxFqfjsHauZWlBV--


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 10:43:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 10:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351287.577883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29RU-000653-3G; Fri, 17 Jun 2022 10:43:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351287.577883; Fri, 17 Jun 2022 10:43:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29RT-00064w-WF; Fri, 17 Jun 2022 10:43:08 +0000
Received: by outflank-mailman (input) for mailman id 351287;
 Fri, 17 Jun 2022 10:43:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tGKg=WY=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o29RS-00064p-05
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 10:43:06 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02on0603.outbound.protection.outlook.com
 [2a01:111:f400:fe06::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 439194fa-ee2a-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 12:43:03 +0200 (CEST)
Received: from DB8P191CA0007.EURP191.PROD.OUTLOOK.COM (2603:10a6:10:130::17)
 by DB6PR0802MB2181.eurprd08.prod.outlook.com (2603:10a6:4:82::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.14; Fri, 17 Jun
 2022 10:43:01 +0000
Received: from DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:130:cafe::4e) by DB8P191CA0007.outlook.office365.com
 (2603:10a6:10:130::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16 via Frontend
 Transport; Fri, 17 Jun 2022 10:43:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT036.mail.protection.outlook.com (100.127.142.193) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Fri, 17 Jun 2022 10:43:00 +0000
Received: ("Tessian outbound 01afcf8ccfad:v120");
 Fri, 17 Jun 2022 10:43:00 +0000
Received: from 1d2a114aa5c2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9C317B7D-6A40-49DB-A118-27F92C3EC71E.1; 
 Fri, 17 Jun 2022 10:42:50 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1d2a114aa5c2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jun 2022 10:42:50 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM0PR08MB5121.eurprd08.prod.outlook.com (2603:10a6:208:159::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Fri, 17 Jun
 2022 10:42:49 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::7da8:5168:ad86:7178]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::7da8:5168:ad86:7178%7]) with mapi id 15.20.5353.013; Fri, 17 Jun 2022
 10:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 439194fa-ee2a-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=C9rtyUXibJ+uqfGKlk206yW197/fpbbH82k85dS3RohKy3fCI8u+h2m1v4fHhiVo7G88DGck+ptLoCGDuXJh4t8hweB4A2ynR6ShDpucl+7b3xT7U3xPKy5eJK5pIyjP2fDueO4paFKUKeYi5n5ymcVtPxANDGtVaX7rtXuAH+5kiXzJZpHM1s6yNt4/xvaDi2iHI0DDp4/1SZwP2et6SPwG5CksOE4E9dsTAp2Vnke/matfPCI/ViCMHSNM5ENzXo8LRsKw0lXJCvX+8uik1ufUI5HqSZ4pQMtU/RkmMz+mMDrcPNZrn03NvQALrbbX45utsZebsCQbHKdJmF1aUg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZhHnLb1DtHSLpn0nYAVsoJxBk+anSGpro/6G43XRzKU=;
 b=AKA/gItZiMGmTEHZ6qU6pg5Wkedpxd5D1nwt+a+suKL0bD7tEYSnaFpLp2kpKQEFDl9Qbe8QLCd8yzMWnvE6Nufy0bEVU3XAKS4NXdSAwv8mWMaAapvQLszy+fo3T0ERW2xv1r5fVBjoTNQpjTRZP9MZX8c+PjnZw4e5YKbCqAKOpk8GM3t9X3QNZgiyDBsba85vXxJGL7/0Gfb48BuVoH2iDo5u1r/VvkOunF9DrY1yM3vEP22wvIwzqlWbm6nvSX+qDv3pNn6OSXRUyKihuEWrN6PFk/28dSzR6Vs0X/vT+eLePqUs3obPAlZw726U0FjueCJxwGiWedeI0PKaFw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZhHnLb1DtHSLpn0nYAVsoJxBk+anSGpro/6G43XRzKU=;
 b=VgvUFMwLe9m4lcLdWlmrdhSFedIVDU+wnLhOtuZDXtkwesDpdhBC8QV6wzxxbo4bIay/e390rQZH8iUuuOiSxqI4ixCddY6u3fAK//9WbkCrf0SVhFwpVYWRc2ISJuD778aT7hN/+HAOwoQhOYyJE6JvUVgveYZBne4LYu0xFSQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WS2vsi4z1kQ9K5+XbuAwOWqXJHwrYiVbTIM2Nr9zk96cCNyI/Xl52NKeYLEmXi58iht02dw9mZ3m71NrAFx/Xcu2JBwdoutJ7GAI3xZAWPK9WQBnMQnrjolQHARCzSKAWfKCkX3/MfvZyXdkg3WQomcAOgfQyQAq1eEYhTQMYJ8X/Sixfr9t5XyVvIrp8Bw+3B33EQEgaqwSKUgYPEOul6+mV9bU9IH4JqXR08XexoMq4CK67iIWvSJKfyEzjuNxXKjARMsYxPMf4DG6a/UUt8zK2efHT+cwouXYHfMixhP6TdBkh1FNiAmAG+3j1Gz2dYV0anreW1GeVgYjB7lSqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZhHnLb1DtHSLpn0nYAVsoJxBk+anSGpro/6G43XRzKU=;
 b=LDGoFmYbUdXHwsopJFdqHHGfAaw/YstVODE3KIazK3YgajDfTDaCNXubOxU9Lus6PivctCFLakU8zTGuoV+QcNtSFyQ3SPmHPFRZgnSSusmvhVHz6japkHlmoHl6mVsou1jNO7SyUz4HEgY4mBMOrQ+QQc1Cj0KlhuMqzn5mwFcWyLAFflFXMl+sDUCL+5P1rbPKAdJFlTEy+Ey/jz5czleUm/3dSLNHwr9MBTy0kPsNMEgN6zhcCh/gcbgcipqyIMbJn+aV3uK1i2awSQl1gVGuSjzjIWGx6hN2s7ygJeVykeXHmrHufngTO7ltrPK2THk1W13iQlZub6pSgG1HmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZhHnLb1DtHSLpn0nYAVsoJxBk+anSGpro/6G43XRzKU=;
 b=VgvUFMwLe9m4lcLdWlmrdhSFedIVDU+wnLhOtuZDXtkwesDpdhBC8QV6wzxxbo4bIay/e390rQZH8iUuuOiSxqI4ixCddY6u3fAK//9WbkCrf0SVhFwpVYWRc2ISJuD778aT7hN/+HAOwoQhOYyJE6JvUVgveYZBne4LYu0xFSQ=
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
 switch
Thread-Topic: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
 switch
Thread-Index: AQHYgFjGGQRmuhVkyUGP4eiVYzlBr61QOR4AgAABKRCAAw50AIAAI+pg
Date: Fri, 17 Jun 2022 10:42:49 +0000
Message-ID:
 <PAXPR08MB74207970D8931FBCDF6A132E9EAF9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220615013909.283887-1-wei.chen@arm.com>
 <c48bb719-8cc6-ea8d-291d-4e09d42f93c2@xen.org>
 <PAXPR08MB7420FDB50DA7265956A3B0BC9EAD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <6faaa38d-63af-d962-e8de-4accb5b73ab4@xen.org>
In-Reply-To: <6faaa38d-63af-d962-e8de-4accb5b73ab4@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 89E68991C931084D8B4048D60FCBADE8.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 9a6eb116-9ed5-4b87-afbf-08da504e2679
x-ms-traffictypediagnostic:
	AM0PR08MB5121:EE_|DBAEUR03FT036:EE_|DB6PR0802MB2181:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB21814B8753DC9B78C5BBB41D9EAF9@DB6PR0802MB2181.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 t+o6SCSucnx4z9Wop8LoqZ3wdTg4vyYFywelK8HVUEUbpjPW0ZT381g2g7Xzg6qgfn1QGMS9QC8jcuygNIIAmAxStD9lYiSTfhNGLBVHyGSJ+IGerdPdO+uyqwuv1j1k12zFNPnt4jwL4aGcwM7JmN//N+XUdSwULDCbgxdU8gbP3b7Snh54203RqZbpBmq/ckhX3nRrmH/GVrDPlOEEMDXDzOah3QI1/ial7YFYmKg5C8gtu4telFCxRKqm15ketY8cQUmIVP1FVJBEN636L4S5YIk2qzsh1MppiBdsxRVA/Az/ikF1CqJFVkEwJnP0QNcHrxcijt2vpqkkDDPMoPV2cf5URSZEvPHe5aD/dE60z3MHgWNro5dLtTBJFXFeggTVLQvRoV4e12H/GQcaMUfAXnm9C4IqGk1baC4qLJV99wfZkTdtrURe8oMh6cZLPHR7koykwByHWayeoYH4A5ps2LBthekQPNHTFkZE+lTh95mwqrVNmQtOBXKBRF0Gh2lNObAGmgsP+7zkI9XW4p9t7ky6GRqDtRxPdRI85iq3hZAIWO3XF61q7gE6zlR5WDnCRg4Pzv9sMfFSa4GDUr8R3AVGG0e/LPem6+HFF6sHx+UpAioVyjSc2txQGHbZSCExRFaEouDRZVdooO3aXX+ChSuYj8E0z38bqx4wLYQ7n0fEKS4jGhjbZxYC6AWTDordrmcqrfEPrl+YYR15Rg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(83380400001)(26005)(9686003)(66446008)(7696005)(53546011)(4326008)(52536014)(110136005)(8936002)(86362001)(33656002)(5660300002)(38070700005)(38100700002)(2906002)(6506007)(186003)(8676002)(71200400001)(64756008)(66556008)(54906003)(316002)(76116006)(66476007)(122000001)(66946007)(498600001)(55016003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5121
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	60858cbd-b31d-44a3-1897-08da504e1f81
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	f8NPcwYsbvxqmQYutpDY25xakF+rh9iopRWSpZd27rDOsO1nqg5PHnaNHuR4kdoWZyCyQaRklCj3CRqnATG0q8yy2EzA4sW5xdrKZeew/jOX42lthOvXpeF+uzkjHzIePlFY+OAgeziLMNQpe8FODpfM1/8ZHsWOPd0tYEwHdYTd5TdWgtUv+gDRjAzAJE/RhkHm/QNR+gBdQKWdS+1MH3w3ZAyQnYPhIs2N9Rv1mdUOgyuca+pB64MmVndk2LmJ14N8qWoz+MeAYn4MMAEmLOBLonWwM/nWkDl+9lWXtYW8NmxRXeCA4tJ1eT6JrBvY9qT47yXHZNyyGm2QxF/EzXzXvG+t/aa40Ua4hPO6grQc/Pqz58Pm/qH37f5jDU21OlMVnU0o0OrbvLzp0t8vzFmLhycQCYb0+MU6Izakxpce6VjZ1RZh06ry7iES0wuAeioAGuoT7sKTceQj8bBtIj5rZ2RdrTfncf4Lr7ou3wDs10kia4l1jHLYLkq+1Y8Pmy/NmjueQisPW/Y1K+iwIHa9MIAl/4DdMV8qx7tuwVxxN9VIo4GgGy46/99Vv3RpPZkmMm3n4qQ5aKhhLD98hJgotX8ZPrYopLq9HmLoAFPyXh5xuAimwSTXDIBjUpXerfwDImAB20f+1lGlOKoEwlcpgxfCIWj/kF3PeC7Rx22svPij38BXXNCf+PqcOx2Z
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(36840700001)(46966006)(40470700004)(186003)(8936002)(5660300002)(8676002)(33656002)(6506007)(316002)(508600001)(26005)(9686003)(53546011)(83380400001)(107886003)(54906003)(81166007)(2906002)(7696005)(336012)(86362001)(47076005)(36860700001)(55016003)(52536014)(70586007)(40460700003)(82310400005)(70206006)(4326008)(356005)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2022 10:43:00.9370
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a6eb116-9ed5-4b87-afbf-08da504e2679
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2181

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 17 June 2022 16:32
> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in context
> switch
> 
> 
> 
> On 15/06/2022 11:36, Wei Chen wrote:
> > Hi Julien,
> 
> Hi Wei,
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 15 June 2022 17:47
> >> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org
> >> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Bertrand
> >> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> >> <Volodymyr_Babchuk@epam.com>
> >> Subject: Re: [PATCH] xen/arm: avoid vtimer flip-flop transition in
> context
> >> switch
> >>>
> >>> So in this patch, we adjust the formula to use "offset - boot_count"
> >>> first, and then use the result to plus cval. This will avoid the
> >>> uint64_t overflow.
> >>
> >> Technically, the overflow is still present because the (offset -
> >> boot_count) is a non-zero value *and* cval is a 64-bit value.
> >>
> >
> > Yes, GuestOS can issue any valid 64-bit value for their usage.
> >
> >> So I think the equation below should be reworked to...
> >>
> >>>
> >>> Signed-off-by: Wei Chen <wei.chen@arm.com>
> >>> ---
> >>>    xen/arch/arm/vtimer.c | 5 +++--
> >>>    1 file changed, 3 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> >>> index 5bb5970f58..86e63303c8 100644
> >>> --- a/xen/arch/arm/vtimer.c
> >>> +++ b/xen/arch/arm/vtimer.c
> >>> @@ -144,8 +144,9 @@ void virt_timer_save(struct vcpu *v)
> >>>        if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
> >>>             !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
> >>>        {
> >>> -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v-
> >>> arch.virt_timer.cval +
> >>> -                  v->domain->arch.virt_timer_base.offset -
> boot_count));
> >>> +        set_timer(&v->arch.virt_timer.timer,
> >>> +                  ticks_to_ns(v->domain->arch.virt_timer_base.offset
> -
> >>> +                              boot_count + v->arch.virt_timer.cval));
> >>
> >> ... something like:
> >>
> >> ticks_to_ns(offset - boot_count) + ticks_to_ns(cval);
> >>
> >> The first part of the equation should always be the same. So it could
> be
> >> stored in struct domain.
> >>
> >
> > If you think there is still some values to continue this patch, I will
> > address this comment : )
> 
> I think there are. This can be easily triggered by a vCPU setting a
> large cval.
> 

Ok, thanks! We will address it and refine the subject and descriptions.

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 10:48:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 10:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351295.577895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29WK-0006ig-Mq; Fri, 17 Jun 2022 10:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351295.577895; Fri, 17 Jun 2022 10:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29WK-0006iZ-JW; Fri, 17 Jun 2022 10:48:08 +0000
Received: by outflank-mailman (input) for mailman id 351295;
 Fri, 17 Jun 2022 10:48:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dTWu=WY=citrix.com=prvs=1609a9fe9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o29WJ-0006iT-BJ
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 10:48:07 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f67282e7-ee2a-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 12:48:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f67282e7-ee2a-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655462885;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=HDrLuO8sCXEgIPWCZmYj7yNU3KNf/lyGH4JJ9QdrWKs=;
  b=claK6JMsn+80mp1dRNwQkejM9WCo6xeUazLfWUJ10p99py03WB042sJg
   sIYSoLJmXOuX9g8SAxDxIzuyPo7iyHDlnLy1OptQeF6B34bKv0gVrcznx
   pbXJtNXrehbxZ2l9woPaCh0jeuuq66xf3E5chAwPO7VaMiT/0apLAckX/
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73852055
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:JDivV6oUrG6Hc/aj5tJjasS2pIdeBmJEZRIvgKrLsJaIsI4StFCzt
 garIBnVM/7fYmr3ct9+ad7g9kMH75/TmtdmQQFp/Ho0FCgT+JuZCYyVIHmrMnLJJKUvbq7GA
 +byyDXkBJppJpMJjk71atANlVEliefQAOCU5NfsYkidfyc9IMsaoU8lyrRRbrJA24DjWVvT4
 I2q/6UzBXf+s9JKGjNMg068gEsHUMTa4Fv0aXRnOJinFHeH/5UkJMp3yZOZdhMUcaENdgKOf
 M7RzanRw4/s10xF5uVJMFrMWhZirrb6ZWBig5fNMkSoqkAqSicais7XOBeAAKv+Zvrgc91Zk
 b1wWZKMpQgBFPTRpbVCUyJhUCh+JoZpxuTaI2GZvpnGp6HGWyOEL/RGCUg3OcsT+/ptAHEI/
 vsdQNwPRknd3aTsmuv9E7QywJR4RCXoFNp3VnVI5DfVF/s5B7vERL3H/4Rw1zYsnMFeW/3ZY
 qL1bBIwN0SdOUUSZz/7Drofw/iGgymiIgdd61/Mg7g2/nKL6Qxuhe2F3N39JYXRGJQ9clyjj
 n3C13T0BFcdLtP34Riv/2+oh+TPtTjmQ49UH7q9ntZ6jVvWymENBRk+UVqgveL/mkO4Q8hYK
 UEf5mwpt6dayaCwZoCjBVvi+ifC50NCHYoLewEn1O2T4oCN/jvIWWg/d31IaMcNm/FtWD4z8
 FDcyrsFGgdTXK2ppWO1r+nJ8G/pYXJNcAfudgdfE1JbvoCLTJUby0uWE409SPPdYsjdQ2mY/
 tyckMQpa1z/Z+Yv3r7zw13IiinESnPhHl9svVW/so5IA2pEiG+Zi2+AswGzAQ5odtrxc7V4l
 CFsdzKixO4PF4qRsyeGXf8AGrqkj97cbmCB3QYzQ8R5rmn3k5JGQWy3yGAWGauUGpxcJW+Bj
 LH74mu9G6O/zFP1NPQqMupd+uwhzLT6FMSNa804muFmO8ArHCfepXkGTRfJgwjFzRh9+Ylia
 MzzWZv9Uh4n5VFPkWPeqxE1iuRwmEjTBAr7GPjG8vhQ+ePGPCDNEuxZYQTmgyJQxPrsnTg5O
 u13b6OioyizmsWnCsUL2eb/9Ww3EEU=
IronPort-HdrOrdr: A9a23:lT/liqHI9VTP7myspLqE5seALOsnbusQ8zAXP0AYc31om6uj5q
 aTdZUgpHjJYVkqKRIdcLy7V5VoIkmskaKdg7NhX4tKNTOO0ADDQe1fBOPZskTd8kbFltK1u5
 0PT0EHMqyUMWRH
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="73852055"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>
Subject: [PATCH] x86/mm: Add an early PGT_validated exit in _get_page_type()
Date: Fri, 17 Jun 2022 11:47:39 +0100
Message-ID: <20220617104739.7861-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is a continuation of the cleanup and commenting in:
  9186e96b199e ("x86/pv: Clean up _get_page_type()")
  8cc5036bc385 ("x86/pv: Fix ABAC cmpxchg() race in _get_page_type()")

With the re-arranged and newly commented logic, it's far more clear that the
second half of _get_page_type() only has work to do for page validation.

Introduce an early exit for PGT_validated.  This makes the fastpath marginally
faster, and simplifies the subsequent logic as it no longer needs to exclude
the fully validated case.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>

Not that it's relevant, but bloat-o-meter says:

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-300 (-300)
  Function                                     old     new   delta
  _get_page_type                              6618    6318    -300

which is more impressive than I was expecting.
---
 xen/arch/x86/mm.c | 70 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ac74ae389c99..57751d2ed70f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3002,6 +3002,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
      * fully validated (PGT_[type] | PGT_validated | >0).
      */
 
+    /* If the page is fully validated, we're done. */
+    if ( likely(nx & PGT_validated) )
+        return 0;
+
+    /*
+     * The page is in the "validate locked" state.  We have exclusive access,
+     * and any concurrent callers are waiting in the cmpxchg() loop above.
+     *
+     * Exclusive access ends when PGT_validated or PGT_partial get set.
+     */
+
     if ( unlikely((x & PGT_count_mask) == 0) )
     {
         struct domain *d = page_get_owner(page);
@@ -3071,43 +3082,40 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         }
     }
 
-    if ( unlikely(!(nx & PGT_validated)) )
+    /*
+     * Flush the cache if there were previously non-coherent mappings of
+     * this page, and we're trying to use it as anything other than a
+     * writeable page.  This forces the page to be coherent before we
+     * validate its contents for safety.
+     */
+    if ( (nx & PGT_non_coherent) && type != PGT_writable_page )
     {
-        /*
-         * Flush the cache if there were previously non-coherent mappings of
-         * this page, and we're trying to use it as anything other than a
-         * writeable page.  This forces the page to be coherent before we
-         * validate its contents for safety.
-         */
-        if ( (nx & PGT_non_coherent) && type != PGT_writable_page )
-        {
-            void *addr = __map_domain_page(page);
+        void *addr = __map_domain_page(page);
 
-            cache_flush(addr, PAGE_SIZE);
-            unmap_domain_page(addr);
+        cache_flush(addr, PAGE_SIZE);
+        unmap_domain_page(addr);
 
-            page->u.inuse.type_info &= ~PGT_non_coherent;
-        }
+        page->u.inuse.type_info &= ~PGT_non_coherent;
+    }
 
-        /*
-         * No special validation needed for writable or shared pages.  Page
-         * tables and GDT/LDT need to have their contents audited.
-         *
-         * per validate_page(), non-atomic updates are fine here.
-         */
-        if ( type == PGT_writable_page || type == PGT_shared_page )
-            page->u.inuse.type_info |= PGT_validated;
-        else
+    /*
+     * No special validation needed for writable or shared pages.  Page
+     * tables and GDT/LDT need to have their contents audited.
+     *
+     * per validate_page(), non-atomic updates are fine here.
+     */
+    if ( type == PGT_writable_page || type == PGT_shared_page )
+        page->u.inuse.type_info |= PGT_validated;
+    else
+    {
+        if ( !(x & PGT_partial) )
         {
-            if ( !(x & PGT_partial) )
-            {
-                page->nr_validated_ptes = 0;
-                page->partial_flags = 0;
-                page->linear_pt_count = 0;
-            }
-
-            rc = validate_page(page, type, preemptible);
+            page->nr_validated_ptes = 0;
+            page->partial_flags = 0;
+            page->linear_pt_count = 0;
         }
+
+        rc = validate_page(page, type, preemptible);
     }
 
  out:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 11:03:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 11:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351308.577924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29lY-0001Hs-8Y; Fri, 17 Jun 2022 11:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351308.577924; Fri, 17 Jun 2022 11:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29lY-0001Hl-5F; Fri, 17 Jun 2022 11:03:52 +0000
Received: by outflank-mailman (input) for mailman id 351308;
 Fri, 17 Jun 2022 11:03:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o29lX-0001Hb-0k; Fri, 17 Jun 2022 11:03:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o29lW-0000rT-UZ; Fri, 17 Jun 2022 11:03:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o29lW-0006Nw-DE; Fri, 17 Jun 2022 11:03:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o29lW-0005aB-Cc; Fri, 17 Jun 2022 11:03:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L9ATVG7lWnSnApF/3ieCQ+8GyTn+SGU4evicVAKAw2s=; b=NF6BW2HLnj19lfq7ELgcNdGGx9
	1jzXQROfZnNbFCwqn4wveYy5uqKlCM4XXgqw7NnLH4PYAXX4DDg5mzaL2Uw3Wp1LLkq28lOOXvHQV
	cm8jGG61wkBoTCg+hmksg9aDKL5UdJaxhl+PoifTc0Hkj0X6O65dOGM1bcKrEg3hllnM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171207-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 171207: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-4.16-testing:test-amd64-i386-xl-vhd:xen-install:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2e82446cb252f6c8ac697e81f4155872c69afde4
X-Osstest-Versions-That:
    xen=0b4e62847c5af1a59eea8d17093feccd550d1c26
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 11:03:50 +0000

flight 171207 xen-4.16-testing real [real]
flight 171240 xen-4.16-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171207/
http://logs.test-lab.xenproject.org/osstest/logs/171240/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 171240-retest
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail pass in 171240-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd        7 xen-install                  fail  like 170984
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170984
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170984
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170984
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170984
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170984
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170984
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170984
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170984
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170984
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170984
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170984
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170984
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2e82446cb252f6c8ac697e81f4155872c69afde4
baseline version:
 xen                  0b4e62847c5af1a59eea8d17093feccd550d1c26

Last test of basis   170984  2022-06-11 02:19:57 Z    6 days
Testing same since   171207  2022-06-16 13:08:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0b4e62847c..2e82446cb2  2e82446cb252f6c8ac697e81f4155872c69afde4 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 11:12:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 11:12:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351319.577935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29tj-00033J-78; Fri, 17 Jun 2022 11:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351319.577935; Fri, 17 Jun 2022 11:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29tj-00033C-3y; Fri, 17 Jun 2022 11:12:19 +0000
Received: by outflank-mailman (input) for mailman id 351319;
 Fri, 17 Jun 2022 11:12:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o29th-000336-Ez
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 11:12:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o29te-000115-4o; Fri, 17 Jun 2022 11:12:14 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233] helo=[192.168.0.58])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o29td-0006Bd-TH; Fri, 17 Jun 2022 11:12:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=DWlmvQWn5M+xdNlJjkcI7Mh1FGa8W6Sqj4MhjphrdT0=; b=SHcqf6jd9R5FLrMMXNpMGpqZxE
	8mox2Dnv1k0PftmDT6sBbXW7FBYjF2Gye19P40CIcn6WCZB+bfqEGazwOwXhRd82ZSbfZx/tOa+PA
	w2HOKolv4I0PRCqNlCr8zpJ5hQHjZ56YU4QOCzNlWJBRmxY8fBhO86vVCgCT2VejOdWU=;
Message-ID: <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
Date: Fri, 17 Jun 2022 12:12:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "dmitry.semenets@gmail.com" <dmitry.semenets@gmail.com>,
 Dmytro Semenets <Dmytro_Semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <87pmj7hczg.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 17/06/2022 10:10, Volodymyr Babchuk wrote:
> Julien Grall <julien@xen.org> writes:
> 
>> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
>>> Hi Julien,
>>
>> Hi Volodymyr,
>>
>>> Julien Grall <julien@xen.org> writes:
>>>
>>>> Hi,
>>>>
>>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>>>> According to the PSCI specification, ARM TF can return DENIED on CPU_OFF.
>>>>
>>>> I am confused. The spec is talking about Trusted OS and not
>>>> firmware. The documentation is also not specific to ARM Trusted
>>>> Firmware. So did you mean "Trusted OS"?
>>> It should be "firmware", I believe.
>>
>> Hmmm... I couldn't find a reference in the spec suggesting that
>> CPU_OFF could return DENIED because of the firmware. Do you have a
>> pointer to the spec?
> 
> Ah, it looks like we are talking about different things. Indeed, CPU_OFF
> can return DENIED only because of the Trusted OS. But the entity that
> *returns* the error to the caller is the firmware.

Right, the interesting part is *why* DENIED is returned, not *who*
returns it.
>>>>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
>>>>> section 5.5.2
>>>>
>>>> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
>>>> the trusted OS can only run on one core.
>>>>
>>>> Some trusted OSes are migratable. So I think we should first
>>>> attempt to migrate the trusted OS. Then, if that doesn't work, we
>>>> should prevent the CPU from going offline.
>>>>
>>>> That said, upstream doesn't support CPU offlining (I don't know about
>>>> your use case). In the case of shutdown, it is not necessary to offline
>>>> the CPU, so we could avoid calling CPU_OFF on all CPUs but
>>>> one. Something like:
>>>>
>>> This is an even better approach, yes. But you mentioned CPU_OFF. Did you
>>> mean SYSTEM_RESET?
>>
>> By CPU_OFF I was referring to the fact that Xen will issue the call on
>> all CPUs but one. The remaining CPU will issue the command to
>> reset/shutdown the system.
>>
> 
> I just want to clarify: the change that you suggested removes the call to
> stop_cpu() in halt_this_cpu(), so no CPU_OFF will be sent at all.

I was describing the existing behavior.

> 
> All CPUs except one will spin in
> 
>      while ( 1 )
>          wfi();
> 
> while the last CPU will issue SYSTEM_OFF or SYSTEM_RESET.
> 
> Is this correct?

Yes.

> 
>>>>    void machine_halt(void)
>>>> @@ -21,10 +23,6 @@ void machine_halt(void)
>>>>        smp_call_function(halt_this_cpu, NULL, 0);
>>>>        local_irq_disable();
>>>>
>>>> -    /* Wait at most another 10ms for all other CPUs to go offline. */
>>>> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
>>>> -        mdelay(1);
>>>> -
>>>>        /* This is mainly for PSCI-0.2, which does not return if success. */
>>>>        call_psci_system_off();
>>>>
>>>>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
>>>>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>>>
>>>> I don't recall seeing a patch on the ML recently for this. So is this an
>>>> internal review?
>>> Yeah, sorry about that. Dmytro is a new member of our team and is not
>>> yet familiar with the differences between internal reviews and reviews
>>> on the ML.
>>
>> No worries. I usually classify as internal review anything that was done
>> privately. This looks to be a public review, although not on
>> xen-devel.
>>
>> I understand that some of the patches are still at the PoC stage,
>> and doing the review on your GitHub is a good idea. But for those that
>> are meant for upstream (e.g. bug fixes, small patches), I would
>> suggest doing the review on xen-devel directly.
> 
> It is not always clear whether a patch is eligible for upstream. At first
> we thought that the problem was platform-specific, and we weren't sure
> that we would find a proper upstreamable fix.

You can guess, but not be sure until you send it upstream :). In fact,...

> You probably saw that the PR's name
> differs quite a bit from the final patch. This is because the initial
> solution was completely different from the final one.

... even before looking at your PR, this was the first solution I had in
mind. I am still pondering whether this is the best approach, because I
suspect there might be some platforms out there relying on receiving the
shutdown request on CPU0.

Anyway, this is so far just theoretical; my proposal should solve your
problem.

On a separate topic, the community is aiming to support a wide range of
platforms out of the box. I think platform-specific patches are
acceptable so long as they are self-contained (to some extent; i.e. if you
asked to support Xen on RPi3 then I would still probably argue against :))
or have a limited impact on the rest of the users (this is why we have
alternatives in Xen).

My point here is that your initial solution may have been the preferred
approach for upstream. So if you involve the community early, you reduce
the risk of having to backtrack and/or spending extra time in the wrong
direction.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 11:15:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 11:15:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351327.577946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29x8-0003eR-NB; Fri, 17 Jun 2022 11:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351327.577946; Fri, 17 Jun 2022 11:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o29x8-0003eK-Jl; Fri, 17 Jun 2022 11:15:50 +0000
Received: by outflank-mailman (input) for mailman id 351327;
 Fri, 17 Jun 2022 11:15:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XzG5=WY=gmail.com=mykyta.poturai@srs-se1.protection.inumbo.net>)
 id 1o29x6-0003eE-OD
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 11:15:48 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d668b18e-ee2e-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 13:15:47 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 m32-20020a05600c3b2000b0039756bb41f2so2226657wms.3
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jun 2022 04:15:47 -0700 (PDT)
Received: from localhost.localdomain (93.75.52.3.lut.volia.net. [93.75.52.3])
 by smtp.gmail.com with ESMTPSA id
 c130-20020a1c3588000000b0039c798b2dc5sm8916790wma.8.2022.06.17.04.15.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 17 Jun 2022 04:15:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d668b18e-ee2e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=J1e/KU6OMq50U31RovHeE60PE1VWI/UwnSnAHJHKnsI=;
        b=AVw+QeA+wDP5Mm7Xu9KAd/7gIohZeyU54lCJjD4oCvPiRYBgR2z/kn2iQ+PmyN8jDI
         mLRwSi3bK3ZlyoRsiBCru1qH8offOAn857LGmor5gaKDsp9UlxHmt1CCXaKQa/YI4xqN
         DxKksRzLMO2Sn6QCGXFwOiqsM6oT2e4AAgXI6Z8pzxIbo/Iu3bsRQkFCSiNma4OCKBE8
         huTYVuzY5dpN062SPhMl2iUpp28jTbWyhA7V25D2K8nGXzUww2o5GMJWHrzFbUTU+jOy
         ElJcf4BxPL2tT2XML+WHuUtTC1qwyJksFt6esrBWbzXq2NHiyMZa5ViECV5pFma1AL0m
         fFWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=J1e/KU6OMq50U31RovHeE60PE1VWI/UwnSnAHJHKnsI=;
        b=qieZh3C4FfvEXpVoK24oodNpvqfhLAYUMqA4jrOGdMwuYnTspjbzjJCtN/7/BM4BAO
         St8v8Atktf1ofLSDHK0V9WWE02pDRGxs69wDAOPl2s7AmC36WsYVIExw76L5ukTn0FOL
         k9V5Vu5j0BJP6dU8bJFQ5Av2YcqW/MoLIKQIWqZDL0M/uBD62cNSieGkXenQiYABX01W
         cC2Wtj7THx5Y3ZiZnuXqFIwgG2L558/h0zusd0bDz28SwQtrM/SQzYXe0x2fI04WzD0s
         BZW0YO9ImAEf0dhcU953Bk73k8yl644HQGlAs5slYU34+Ml+MyCHXBKRMmuJxUp9dafP
         B32A==
X-Gm-Message-State: AJIora+P3Gy1txDfu8NewDuhuzlHT0HGAWSuMeBlpLVs2GG+qKZQNhGu
	RiqndCYvKbeDQwrVIFezrjg=
X-Google-Smtp-Source: AGRyM1tsQ7yfIic8PadfBShaFR23UPpDPEKz7X6C3Us1CHpmAXomoJFowkCkaFGNshPEWCzFilx1KQ==
X-Received: by 2002:a05:600c:4fc6:b0:39c:9417:a8bf with SMTP id o6-20020a05600c4fc600b0039c9417a8bfmr9577803wmq.26.1655464547039;
        Fri, 17 Jun 2022 04:15:47 -0700 (PDT)
From: Mykyta Poturai <mykyta.poturai@gmail.com>
X-Google-Original-From: Mykyta Poturai <mykyta_poturai@epam.com>
To: rahul.singh@arm.com
Cc: Bertrand.Marquis@arm.com,
	Volodymyr_Babchuk@epam.com,
	julien@xen.org,
	mykyta.poturai@gmail.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device
Date: Fri, 17 Jun 2022 14:15:44 +0300
Message-Id: <20220617111544.205861-1-mykyta_poturai@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <029EEEE1-69E1-42A9-90D3-BEC18CD5B7BC@arm.com>
References: <029EEEE1-69E1-42A9-90D3-BEC18CD5B7BC@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

> Hi Mykyta,
> 
>> On 16 Jun 2022, at 8:48 am, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
>> 
>> Hi Julien, Rahul
>> I've encountered a similar problem with an IMX8 GPU recently. It wasn't
>> probing properly after a domain reboot. After some digging, I came to the
>> same
>> solution as Rahul and found this thread. I also encountered the occasional
>> "Unexpected global fault, this could be serious" error message when destroying
>> a domain with an actively-working GPU.
>> 
>>> Hmmmm.... Looking at the code, arm_smmu_alloc_smes() doesn't seem to use
>>> the domain information. So why would it need to be done every time it is assigned?
>> Indeed, after removing the arm_smmu_master_free_smes() call, both the
>> reboot and global fault issues are gone. If I understand correctly,
>> device removal is not yet supported, so I can't find a proper place for
>> the arm_smmu_master_free_smes() call. Should we remove the function
>> completely, just leave it commented out for later, or do something else?
>> 
>> Rahul, are you still working on this or could I send my patch?
> 
> Yes, I have this on my to-do list but I was busy with other work and it got delayed. 
> 
> I created another solution for this issue, in which we don't need to call arm_smmu_master_free_smes()
> in arm_smmu_detach_dev(); instead, we configure the S2CR value to the fault type in the detach function.
> 
> The detach function will call a new function, arm_smmu_domain_remove_master(), that reverts the changes done
> by arm_smmu_domain_add_master() in the attach function.
> 
> I don't have any board to test the patch on. If it is okay, could you please test the patch and let me know the result?
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 69511683b4..da3adf8e7f 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1598,21 +1598,6 @@ out_err:
>         return ret;
>  }
>  
> -static void arm_smmu_master_free_smes(struct arm_smmu_master_cfg *cfg)
> -{
> -    struct arm_smmu_device *smmu = cfg->smmu;
> -       int i, idx;
> -       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> -
> -       spin_lock(&smmu->stream_map_lock);
> -       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
> -               if (arm_smmu_free_sme(smmu, idx))
> -                       arm_smmu_write_sme(smmu, idx);
> -               cfg->smendx[i] = INVALID_SMENDX;
> -       }
> -       spin_unlock(&smmu->stream_map_lock);
> -}
> -
>  static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>                                       struct arm_smmu_master_cfg *cfg)
>  {
> @@ -1635,6 +1620,20 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>         return 0;
>  }
>  
> +static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
> +                                     struct arm_smmu_master_cfg *cfg)
> +{
> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
> +       struct arm_smmu_s2cr *s2cr = smmu->s2crs;
> +       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> +       int i, idx;
> +
> +       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
> +               s2cr[idx] = s2cr_init_val;
> +               arm_smmu_write_s2cr(smmu, idx);
> +       }
> +}
> +
>  static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  {
>         int ret;
> @@ -1684,10 +1683,11 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  
>  static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
>  {
> +       struct arm_smmu_domain *smmu_domain = domain->priv;
>         struct arm_smmu_master_cfg *cfg = find_smmu_master_cfg(dev);
>  
>         if (cfg)
> -               arm_smmu_master_free_smes(cfg);
> +               return arm_smmu_domain_remove_master(smmu_domain, cfg);
>  
>  }
> 
> Regards,
> Rahul

Hello Rahul,

For me, this patch fixed the issue with the GPU not probing after a domain reboot.
However, it did not fix the "Unexpected Global fault" that occasionally happens when
destroying a domain with an actively working GPU. Although, I am not sure whether
this issue is relevant here.

Regards,
Mykyta


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 12:32:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 12:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351373.577968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2B96-00055C-PK; Fri, 17 Jun 2022 12:32:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351373.577968; Fri, 17 Jun 2022 12:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2B96-000555-MW; Fri, 17 Jun 2022 12:32:16 +0000
Received: by outflank-mailman (input) for mailman id 351373;
 Fri, 17 Jun 2022 12:32:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2B95-00054v-Kj; Fri, 17 Jun 2022 12:32:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2B95-0002N0-IH; Fri, 17 Jun 2022 12:32:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2B95-0005Yf-AO; Fri, 17 Jun 2022 12:32:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2B95-0001cp-9t; Fri, 17 Jun 2022 12:32:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nisXw2Z1wyWMWG+zojD64PSbsDSh0jch9sg1zvuFWgE=; b=3jbDMsP7huA/5VVfekK2Ukr6QZ
	auH0CIX8NUlMQzvdpdImYrHeyb9kuaK9CBibXEaQvSfG1+cgQLl5IzsIm6C5j4YG5C0E2fRHvNLqy
	6ctvVnosBDhUzFp0PCXAcpCqMchdbNiiR9n7nUOBKBn51Zv3I6AoIx5LBPJzaAm3pOBs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171239: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
X-Osstest-Versions-That:
    xen=d6d0cb659fda64430d4649f8680c5cead32da8fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 12:32:15 +0000

flight 171239 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171239/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e
baseline version:
 xen                  d6d0cb659fda64430d4649f8680c5cead32da8fd

Last test of basis   171213  2022-06-16 16:00:32 Z    0 days
Testing same since   171239  2022-06-17 09:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jiamei Xie <jiamei.xie@arm.com>
  Wei Chen <wei.chen@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d6d0cb659f..c9040f25be  c9040f25be317ab2f7647605397d79313e3f303e -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 12:42:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 12:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351392.577980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BJ4-0006mQ-QM; Fri, 17 Jun 2022 12:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351392.577980; Fri, 17 Jun 2022 12:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BJ4-0006mJ-Mh; Fri, 17 Jun 2022 12:42:34 +0000
Received: by outflank-mailman (input) for mailman id 351392;
 Fri, 17 Jun 2022 12:42:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI7C=WY=citrix.com=prvs=160677330=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o2BJ3-0006m3-8F
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 12:42:33 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f34e1622-ee3a-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 14:42:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f34e1622-ee3a-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655469751;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=aQ2JFJdgxW+bvJ7wlrkPH1pYSZsqotk9gFtitGX3Kfo=;
  b=Dnh1S8br5VDF7qjkmZElJFOvxyIU3x8p1QdEr0kF3yOf8ZkOlG0thB7H
   MGvU4e/9YfAWq4A19gWZKui/YuZqLVwIJBBPWn/OLCNwkenIhT38SnD9e
   rG1smhn/iCm8HZHRKc2NNfkYTyk6oMY0jJylRZdgAWPbgjU6RYDO2zbhZ
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74267833
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:A6+buqOqEc0TJXjvrR2Cl8FynXyQoLVcMsEvi/4bfWQNrUoj3mNRn
 zcWXG6HO//fNmamLtBzO4jl9k9UupLWx9JlQAto+SlhQUwRpJueD7x1DKtR0wB+jCHnZBg6h
 ynLQoCYdKjYdleF+lH1dOKJQUBUjclkfJKlYAL/En03FFUMpBsJ00o5wbZn29Aw3bBVPivW0
 T/Mi5yHULOa82Yc3lI8s8pvfzs24ZweEBtB1rAPTagjUG32zhH5P7pGTU2FFFPqQ5E8IwKPb
 72rIIdVXI/u10xF5tuNyt4Xe6CRK1LYFVDmZnF+A8BOjvXez8CbP2lS2Pc0MC9qZzu1c99Z2
 fFsiZWqVCkSIKzLid4cYhJ7CjFTBPgTkFPHCSDXXc27ykTHdz3nwul0DVFwNoodkgp1KTgQr
 7pCcmlLN03dwbLtqF64YrAEasALJc/3PIQZqzd4wCvQF/oOSpHfWaTao9Rf2V/cg+gRQayAO
 JpCMlKDajyaYTpoO1I8U6kwt7unp1ehXT9H+W6a8P9fD2/7k1UqjemF3MDuUsWHQNgQglyZu
 GPP+0z/BRcVMsHZziCKmlq3nfPGly7/XIMUFZW7++RsjVnVwXYcYDUOXEa/iem0jAi5Qd03A
 0YO8Sozpqsg3EWsSp/2WBjQiGeJuwNZV9dOHukS7gaLxazJpQGDCQAsXjNHLdArqsIybTgrz
 UOS2cPkAyR1t7+YQm7b8a2bxQ5eIgBMczVEP3VdC1JYvZ+z++nfky4jUP5yNI+Jh8foNwruw
 jeblikPjJYKneMygvDTEU/8v968mnTYZldru1iLBTr/tl4RiJ2NPNLxtwWChRpUBMPAFwTa4
 iBZ8ySLxLpWZaxhghBhVwnk8FuBw/+eeAPRjld0d3XK32T8oiXzFWy8DdwXGauIDirnUWWwC
 KMrkVkNjKK/xVPzBUONX6q/Ct4x0Y/rHsn/W/bfY7JmO8YsKVPfoH0zPRfNhQgBdXTAdoluU
 ap3jO72VSpKYUiZ5GHeqxghPU8DmXllmDK7qWHTxBW7y7uODEOopUM+GALWNIgRtfrcyC2Mq
 oo3H5bamn13DbylCgGKoNF7ELz/BSVibXwAg5cMLbDrz8sPMDxJNsI9Npt4I9Q7x/8OyraXl
 px/M2cBoGfCabT8AV3iQhhehHnHBP6TcVpT0fQQAGuV
IronPort-HdrOrdr: A9a23:ofSIZKHW6Xg6BiWIpLqE6MeALOsnbusQ8zAXP0AYc3Jom+ij5q
 STdZUgpHrJYVkqNU3I9ertBEDEewK6yXcX2/hyAV7BZmnbUQKTRekIh7cKgQeQeBEWntQts5
 uIGJIeNDSfNzdHsfo=
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="74267833"
Date: Fri, 17 Jun 2022 13:42:21 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <qemu-devel@nongnu.org>, <xen-devel@lists.xenproject.org>,
	<qemu-trivial@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH v2] xen/pass-through: merge emulated bits correctly
Message-ID: <Yqx2rYn+9jEV679a@perard.uk.xensource.com>
References: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654961918.git.brchuckz.ref@aol.com>
 <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654961918.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <b6718a3512ec0a97c6ef4a5b5c1f3de72238c603.1654961918.git.brchuckz@aol.com>

On Sat, Jun 11, 2022 at 12:43:29PM -0400, Chuck Zmudzinski wrote:
> In xen_pt_config_reg_init(), there is an error in the merging of the
> emulated data with the host value. With the current Qemu, instead of
> merging the emulated bits with the host bits as defined by emu_mask,
> the emulated bits are merged with the host bits as defined by the
> inverse of emu_mask. In some cases, depending on the data in the
> registers on the host, the way the registers are setup, and the
> initial values of the emulated bits, the end result will be that
> the register is initialized with the wrong value.
> 
> To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
> the merge is done correctly.
> 
> This correction is needed to resolve Qemu project issue #1061, which
> describes the failure of Xen HVM Linux guests to boot in certain
> configurations with passed-through PCI devices: when this error
> disables instead of enables the PCI_STATUS_CAP_LIST bit of the
> PCI_STATUS register of a passed-through PCI device, the MSI-X
> capability of the device is disabled in Linux guests, and as a result
> the Linux guest never completes the boot process.
> 
> Fixes: 2e87512eccf3
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
> Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thank you, looks like it's been a long quest to figure this one out.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 12:44:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 12:44:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351402.578003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BLI-0007UD-Bq; Fri, 17 Jun 2022 12:44:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351402.578003; Fri, 17 Jun 2022 12:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BLI-0007U6-6w; Fri, 17 Jun 2022 12:44:52 +0000
Received: by outflank-mailman (input) for mailman id 351402;
 Fri, 17 Jun 2022 12:44:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2BLH-0007Tw-9M; Fri, 17 Jun 2022 12:44:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2BLH-0002b3-6v; Fri, 17 Jun 2022 12:44:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2BLG-0006Zt-Sd; Fri, 17 Jun 2022 12:44:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2BLG-0008Jc-Rq; Fri, 17 Jun 2022 12:44:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4PzWl5BHP82Q5CBF1qynzO2oAb6eYfNetN6KMRjEbRE=; b=6tXUl09R42rnsB2xf+M6lDy85v
	kkT1lse24vgIhs9tdFx1ljo9cCakXo3ejaDtGZX/ACSl/aGB7mOxcKE5MIZgLi2ughXfLAPF8jYTH
	ru4+x2UCzJsIoHTMHpzDasYbzDK1E93xklHy0CN7nMMcq9J5o3m1UAJKTGhYg1eChHWA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171204-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 171204: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87ff11354f0dc0d6e77e1695e6c1e14aa1382cdc
X-Osstest-Versions-That:
    xen=1575075b2e3ac93e9bb2271f4c26a2fb7d947ade
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 12:44:50 +0000

flight 171204 xen-4.13-testing real [real]
flight 171249 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171204/
http://logs.test-lab.xenproject.org/osstest/logs/171249/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm 20 guest-start/debian.repeat fail pass in 171249-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170929
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 170929
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 170929
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170929
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170929
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 170929
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170929
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170929
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170929
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170929
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 170929
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170929
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87ff11354f0dc0d6e77e1695e6c1e14aa1382cdc
baseline version:
 xen                  1575075b2e3ac93e9bb2271f4c26a2fb7d947ade

Last test of basis   170929  2022-06-10 10:52:06 Z    7 days
Testing same since   171204  2022-06-16 13:08:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1575075b2e..87ff11354f  87ff11354f0dc0d6e77e1695e6c1e14aa1382cdc -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 13:07:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 13:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351423.578020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BhM-000235-EM; Fri, 17 Jun 2022 13:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351423.578020; Fri, 17 Jun 2022 13:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2BhM-00022y-BM; Fri, 17 Jun 2022 13:07:40 +0000
Received: by outflank-mailman (input) for mailman id 351423;
 Fri, 17 Jun 2022 13:07:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI7C=WY=citrix.com=prvs=160677330=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o2BhK-00022s-Fz
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 13:07:38 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 742f51cd-ee3e-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 15:07:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 742f51cd-ee3e-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655471256;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=43DOepLpiA6dAclzX6XEZibIuvrKwywcq/eObv9mD9M=;
  b=PtN5gWuphf7cXSvTfjUBJn9Tkz22pcL26/whB39eR4rFWxjAF1IcziZl
   xE0yyIFeMalw5a2fRRMMWKViZcrhtMojl3iAfAppHAljjX2d7Bn1AAPTI
   LhwEY5h/bzFD++MVKNBrHwxMkEXZK7oAz4VW4E1ovzTm/seagXJA1yUhJ
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73699563
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:65xIEqCvzCqKSxVW/w7jw5YqxClBgxIJ4kV8jS/XYbTApDMr1DwCn
 GAdWWmAb67eZWDyeNxwO9y3/U0E6peGnIVmQQY4rX1jcSlH+JHPbTi7wuYcHM8wwunrFh8PA
 xA2M4GYRCwMZiaA4E/raNANlFEkvU2ybuOU5NXsZ2YgH2eIdA970Ug5w7Bg3NY06TSEK1jlV
 e3a8pW31GCNg1aYAkpMg05UgEoy1BhakGpwUm0WPZinjneH/5UmJMt3yZWKB2n5WuFp8tuSH
 I4v+l0bElTxpH/BAvv9+lryn9ZjrrT6ZWBigVIOM0Sub4QrSoXfHc/XOdJFAXq7hQllkPha9
 9FipaKRWTsGfbHVm95efEREMgVhaPguFL/veRBTsOSWxkzCNXDt3+9vHAc9OohwFuRfWD8Us
 6ZCcXZUM07F17neLLGTE4GAguwqKtXrO4UO/Glt1zjDAd4tQIzZQrWM7thdtNs1rp8VQ6ePO
 pRCAdZpRCrMUUx9NQ81McMVk9ijlGfFUDsG813A8MLb5ECMlVcsgdABKuH9Y9GPWIBJhEeGp
 2vC12L+BB4cKZqY0zXt2mm3mubFkCf/WYQTPL617PhnhBuU3GN7IAUfSF+TsfS/zEmkVLp3I
 VYf+jclrroa/UuvCNL6WnWQuXOBo1sQVsRdF8U87weCzLeS5ByWbkAUQzgEZNE4ucseQT0xy
 kTPj97vHSZosrCeVTSa7Lj8kN+pEXFLdylYP3ZCFFZbpYm4yG0usv7RZv1cFIGlsPzlJR6z3
 ymJlmsR2qkyqdFegs1X4mv7byKQSonhF1Bou1mMBjj9s2uVd6b+OdX2tAGzAeJoad/AEwLf5
 CVsd922trhmMH2bqMCarAzh9pmN7u3NDjDTiEUH83IJp2X0oC7LkWy9DVhDyKZV3iUsI2aBj
 Lf741852XOqFCLCgVVLS4ywEd826qPrCM7oUPvZBvIXPMUsKFfboHw2PBPKt4wIrKTLufBXB
 HtmWZz0USZy5VpPl1JauNvxIZd0n3tjlAs/tLjwzgi90Kr2WUN5vYwtaQPUBshgtfvsiFyMr
 753apvboz0CAbaWSnSGruYuwaUicCFT6Wbe8JcMKIZu42NORQkcNhMm6ep5I9I9xP8Jx7igE
 7PUchYw9WcTTEbvcW2iAk2Popu2NXqjhRrX5RARAGs=
IronPort-HdrOrdr: A9a23:a6XODK4eXS8yIRbciwPXwMjXdLJyesId70hD6qhwISY6TiW9rb
 HLoB17726QtN9/YhwdcLy7VJVoBEmskqKdgrNhX4tKPjOHhILAFugLhuHfKn/bak7DH4ZmpM
 FdmsNFaeEYY2IUsfrH
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="73699563"
Date: Fri, 17 Jun 2022 14:07:18 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <qemu-devel@nongnu.org>, <xen-devel@lists.xenproject.org>,
	<qemu-trivial@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH] xen/pass-through: don't create needless register group
Message-ID: <Yqx8ht2teAoRJF4b@perard.uk.xensource.com>
References: <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz.ref@aol.com>
 <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz@aol.com>

On Fri, Jun 10, 2022 at 12:23:35PM -0400, Chuck Zmudzinski wrote:
> Currently we are creating a register group for the Intel IGD OpRegion
> for every device we pass through, but the XEN_PCI_INTEL_OPREGION
> register group is only valid for an Intel IGD. Add a check to make
> sure the device is an Intel IGD and a check that the administrator has
> enabled gfx_passthru in the xl domain configuration. Require both checks
> to be true before creating the register group. Use the existing
> is_igd_vga_passthrough() function to check for a graphics device from
> any vendor and that the administrator enabled gfx_passthru in the xl
> domain configuration, but further require that the vendor be Intel,
> because only Intel IGD devices have an Intel OpRegion. These are the
> same checks hvmloader and libxl do to determine if the Intel OpRegion
> needs to be mapped into the guest's memory.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
>  hw/xen/xen_pt_config_init.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
> index c5c4e943a8..ffd915654c 100644
> --- a/hw/xen/xen_pt_config_init.c
> +++ b/hw/xen/xen_pt_config_init.c
> @@ -2037,6 +2037,10 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
>           * therefore the size should be 0xff.
>           */

Could you move that comment? I think it would make more sense to attach
it to the "reg_grp_offset = XEN_PCI_INTEL_OPREGION" line, now that the
`if` block also skips setting up the group on non-Intel devices.

>          if (xen_pt_emu_reg_grps[i].grp_id == XEN_PCI_INTEL_OPREGION) {
> +            if (!is_igd_vga_passthrough(&s->real_device) ||
> +                s->real_device.vendor_id != PCI_VENDOR_ID_INTEL) {
> +                continue;
> +            }
>              reg_grp_offset = XEN_PCI_INTEL_OPREGION;
>          }

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 13:25:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 13:25:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351444.578077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ByI-0005Hn-Fb; Fri, 17 Jun 2022 13:25:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351444.578077; Fri, 17 Jun 2022 13:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ByI-0005Hg-By; Fri, 17 Jun 2022 13:25:10 +0000
Received: by outflank-mailman (input) for mailman id 351444;
 Fri, 17 Jun 2022 13:25:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ByH-0005HW-Up; Fri, 17 Jun 2022 13:25:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ByH-0003La-SO; Fri, 17 Jun 2022 13:25:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ByH-0000lv-Io; Fri, 17 Jun 2022 13:25:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ByH-0005In-IM; Fri, 17 Jun 2022 13:25:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IOTvrgSrb5k9BT8/SvbxK38vm6hT9EDhCzSJkCIpPiA=; b=HI3terKNYEHhSUD8UsH4SZRi4I
	MVokHzJjmb+CIqO0POrVcHQsEGRWYontx0w+Godc1Mw61+Lsfd7/2Lo9u0PREfF/Stvr1RrmDLGv8
	oEZJd/h1Qq/JigcmCQHWFtKgOKX3Iwzox/jW217Gu2dhs7Zl0DpPYCdJzXSTFHuyIXCE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171210-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171210: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3c2a14ea81c77ae7973c1e436a32436a7e6d017b
X-Osstest-Versions-That:
    xen=c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 13:25:09 +0000

flight 171210 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171210/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171174
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171174
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171174
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171174
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171174
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171174
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171174
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171174
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171174
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171174
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3c2a14ea81c77ae7973c1e436a32436a7e6d017b
baseline version:
 xen                  c9a707df83aad17a6fcf2e8330ab3b5bead6fb8b

Last test of basis   171174  2022-06-15 01:53:26 Z    2 days
Failing since        171181  2022-06-15 14:38:30 Z    1 days    3 attempts
Testing same since   171210  2022-06-16 14:36:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jane Malalane <jane.malalane@citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c9a707df83..3c2a14ea81  3c2a14ea81c77ae7973c1e436a32436a7e6d017b -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 13:46:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 13:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351454.578088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2CJ2-00087Q-9U; Fri, 17 Jun 2022 13:46:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351454.578088; Fri, 17 Jun 2022 13:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2CJ2-00087J-6V; Fri, 17 Jun 2022 13:46:36 +0000
Received: by outflank-mailman (input) for mailman id 351454;
 Fri, 17 Jun 2022 13:46:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI7C=WY=citrix.com=prvs=160677330=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o2CJ1-00087D-Dl
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 13:46:35 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e57ee484-ee43-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 15:46:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e57ee484-ee43-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655473593;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=K9DTIGZoKLsNPanDI8UIMlN4m4LWEe/dXwk4moImZpw=;
  b=Z166WuJ3tnwSfyr3AyGdF7cn90khPn8zWMtH0CbSo5sYC9OTXtoVSHji
   uygysJNEGVgmKi7WnTJ1ZrTZ8GUh+6wtcgW9o4x5JrdOxhMOUqjdeCzkM
   zcXsmzBE4RVbSzh35TowWojbPR8aVg2/3Xgi1gwf2XEQfshhnDKTOah3z
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76412064
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:6GIky6sOwfAPxp46tJODr2rUkufnVH1eMUV32f8akzHdYApBsoF/q
 tZmKWmEPaqDZTfxf9Eka9i38htQvZHQn9RqTVA5pXo8Hn4U+JbJXdiXEBz9bniYRiHhoOOLz
 Cm8hv3odp1coqr0/0/1WlTZhSAgk/nOHNIQMcacUsxLbVYMpBwJ1FQywYbVvqYy2YLjW13U5
 4uuyyHiEATNNwBcYzp8B52r8HuDjNyq0N/PlgVjDRzjlAa2e0g9VPrzF4noR5fLatA88tqBb
 /TC1NmEElbxpH/BPD8HfoHTKSXmSpaKVeSHZ+E/t6KK2nCurQRquko32WZ1he66RFxlkvgoo
 Oihu6BcRi8zPPbvmNRCciNEFgpaIPxtpZLJYkqg5Jn7I03uKxMAwt1rBUAye4YZ5vx2ESdF8
 vlwxDIlN07ZwbjsmfTiF7cq1p9LwMrDZevzvllpyy3ZCvA3B4jOWazQ6fdT3Ssqh9AIFvHbD
 yYcQWUxME2aO0MTUrsRII06nqSN2WnjSSYClEK3+7c9ykaN9xMkhdABN/KKI4fXFK25hH2wu
 Wbu72n/RBYAO7S3yzWf9Wm3rvTShi69U4UXfJW6/PN3hFyYxkQIFQYbE1C8pJGRmkO4Ht5SN
 UEQ0i4vtrQpslymSMHnWB+1q2LCuQQTM/JaCeY69QqO2ILS7hqCDWEcQ3hHZcBOnM0/QzAwx
 0KKt9zsDD1r9raSTBq1/7Kf/G2aIjIeIykEaDNscOcey4C9+sdp1EuJF4s9Vv7u5jHoJd3u6
 yqI9ws+t+oyt9IO/IGmrHuarjzvlIecG2bZ+T7rsnKZAhJRPdD4OtDzsQKDsJ6sP67CEADf4
 SFsd9y2qblXUMrTzHHlrPAlRunB2hqTDNHLbbeD9bEF/i/lxXOsdJs4DNpWdBYwaZZsldMEj
 SbuVeJtCHx7ZiLCgVdfOd7ZNijQ8YDuFM7+StffZcdUb556eWevpX8zOBLIgzywyBFxy8nT3
 Kt3lu79ZUv29Iw9lGbmLwvj+eRDKt8CKZP7GsmgkkXPPUu2b3+JU7YVWGazghQCxPrc+m39q
 o8HX+PTkkk3eLCuM0H/rN9IRXhXfCdTOHwDg5EOHsaZPBFcEX0sY9eIh+tJl3pNxP8OyI8lP
 xiVBydl9bYIrSeXdlnSNik7MumHsFQWhStTABHA9G2AgxALCbtDJo9DH3frVdHLLNBe8MM=
IronPort-HdrOrdr: A9a23:23/5pKAARxqNpT3lHemu55DYdb4zR+YMi2TC1yhKKCC9Vvbo8P
 xG/c5rsSMc5wx8ZJhNo7+90ey7MBXhHP1OkOws1NWZLWrbUQKTRekIh+bfKn/bak/DH4ZmpN
 5dmsNFaOEYY2IVsfrH
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="76412064"
Date: Fri, 17 Jun 2022 14:46:08 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@netscape.net>
CC: Jason Andryuk <jandryuk@gmail.com>, Andrew Cooper <amc96@srcf.net>,
	xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [XEN PATCH] tools/libs/light/libxl_pci.c: explicitly grant
 access to Intel IGD opregion
Message-ID: <YqyFoPJ8Bv9EnO5N@perard.uk.xensource.com>
References: <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz.ref@netscape.net>
 <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz@netscape.net>
 <YkSQIoYhomhNKpYR@perard.uk.xensource.com>
 <408e5e07-453c-f377-a5b0-c421d002aec5@srcf.net>
 <46a8585e-2a2a-4d12-f221-e57bd157dec6@netscape.net>
 <CAKf6xpths4SX4wq-j4VhnXZnx0DW=468z3=9FYHso=Wy1i_Rsg@mail.gmail.com>
 <da62d06d-fd18-a313-9e69-2a4581e97c1a@netscape.net>
 <CAKf6xptZ9g79MphwYPAGq6ATBtCrW+pCd5+NYJPdFniW+tFzPg@mail.gmail.com>
 <ea5c1606-04d3-c847-643e-d242d8f6ba06@netscape.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <ea5c1606-04d3-c847-643e-d242d8f6ba06@netscape.net>

On Thu, Mar 31, 2022 at 03:44:33PM -0400, Chuck Zmudzinski wrote:
> On 3/31/22 10:19 AM, Jason Andryuk wrote:
> > On Thu, Mar 31, 2022 at 10:05 AM Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > > 
> > > That still doesn't answer my question - will the Qemu upstream
> > > accept the patches that move the hypercalls to the toolstack? If
> > > not, we have to live with what we have now, which is that the
> > > hypercalls are done in Qemu.
> > Xen-associated people maintain hw/xen code in QEMU, so yes it could be accepted.
> > 
> > Maybe it would need to be backwards compatible to have libxl check the
> > QEMU version to decide who makes the hypercall?  Unless it is broken
> > today, in which case just make it work.
> > 
> > Regards,
> > Jason
> 
> I know of another reason to check the Qemu upstream version,
> and that is dealing with deprecated / removed device model
> options that xl.cfg still uses. I looked at that a few years ago
> with regard to the deprecated 'usbdevice tablet' Qemu option,
> but I did not see anything in libxl to distinguish Qemu versions
> except for upstream vs. traditional. AFAICT, detecting traditional
> vs. upstream Qemu depends solely on the device_model_version
> xl.cfg setting. So it might be useful for libxl to add the capability
> to detect the upstream Qemu version or at least create an xl.cfg
> setting to inform libxl about the Qemu version.

Hi,

There's already some code in libxl to deal with QEMU's version (QEMU =
upstream Qemu), but that code deals with an already-running QEMU. When
libxl interacts with QEMU through QMP to set up additional devices,
QEMU also advertises its version, which we can use in follow-up QMP
commands.

I think adding passthrough PCI devices is all done via QMP, so we could
potentially move a hypercall from QEMU to libxl, and have libxl decide,
based on which version of QEMU it is talking to, whether or not it
needs to make the hypercall. Also, we could probably add a parameter so
that QEMU knows whether it has to make the hypercall, and thus a newer
version of QEMU could also cope with an older version of libxl.

(There may already be an example of that in both QEMU and libxl, in the
live migration code; dm_state_save_to_fdset() in libxl is a pointer.)

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 14:30:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 14:30:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351469.578105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Cz0-00054c-SO; Fri, 17 Jun 2022 14:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351469.578105; Fri, 17 Jun 2022 14:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Cz0-00054U-Ni; Fri, 17 Jun 2022 14:29:58 +0000
Received: by outflank-mailman (input) for mailman id 351469;
 Fri, 17 Jun 2022 14:29:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI7C=WY=citrix.com=prvs=160677330=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o2Cyz-00054O-Lo
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 14:29:57 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3ea4531-ee49-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 16:29:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3ea4531-ee49-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655476195;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=JjLicrjVGNyROkNVZSvGbe9mLPpNbmx/1XVj1eSpmWs=;
  b=bK8+4rvlYWCbw9Nluv2O/LiMyn2pdLba6t/vBPhwYGW/xRcT6w7UJ5Vu
   qW7//Iq3LtdHTRDB2UjBNELBOfxAUzMgn6eCbClPhdcTrFmSfTX0sLKEI
   AOm4TJJw6YggpgVqvs30PiHNicxKKGtGU01wTphGxUvR1xzBTsOAVHJqA
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73706981
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:jiiUS6L4kQ9fjtkMFE+RKpUlxSXFcZb7ZxGr2PjKsXjdYENShDxWm
 zAXXTiGPPmCNjGje9klPI+38E4DucfcmNQ1GVNlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA14+IMsdoUg7wbRh3Nc22YTR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 M9Ev5qgahUxBZ/vhcAkYiJZDyA9A6ITrdcrIVDn2SCS50jPcn+qyPRyFkAme4Yf/46bA0kXq
 6ZecmpUKEne2aTmm9pXScE17ignBMDtIIMYvGAm1TzDBOwqaZvCX7/L9ZlT2zJYasVmQq2BP
 5RIOWMHgBLoQgdgMA9PNawCsb2CoHjnagZjomi2qv9ii4TU5FMoi+W8WDbPQfSRXtlclEuco
 mPA/kz6DwscOdjZziCKmlquifXIhjjTQ58JGfuz8fsCqF+Owm0eDjUGWF39puO24ma0VshDM
 UUS9mwrpLIr6U2wZtDnWluzp3vsliAbX91cAugr8janw6Df4xuaLmUcRzsHY9sj3OcmSDpv2
 lKXktfBAT10rKbTWX+b7q2Trz65JW4SN2BqTSgAQAge/8j4oKk8ixvOSpBoF6vdptrxFDLry
 jaGth8ilq4Ths4G0aa81V3fijfqrZ/MJiYv4R7dRGWi7QVRa4usZoju4l/ehd5fKK6JQ1/Hu
 2IL8/Vy98hXU8vLznbUBrxQQvf5vJ5pLQEwn3Z1FpMn5xe/40WRXp102QBFJVtocfgLLGqBj
 FDohe9B2HNCFCL0MPIrONrrU5lCIbvIToq8CK2NBjZaSt0oLVLconkzDaKF9zq1+HXAh53TL
 ntynSyEKX8BQZpqwzOtLwv2+e96n3turY8/qH2S8vhG7VZ9TCTMIVv9GAHSBt3VFYvdyOkvz
 /5RNtGR1zJUW/Dkby/c/OY7dA5XcCRjWcyr85UKLIZvxzaK/0lwY8I9PJt7I9A190irvrygE
 o6Btr9wlwOk2CyvxfSiYXF/crL/NatCQYYAFXV0Zz6AgiF7Ca72tft3X8ZnLNEPqb04pdYpH
 qZtRil1KqkWItgx029GNseVQU0LXEnDuD9iyAL/MGdjJ8I5H1aTkjImFyO2nBQz4uOMnZNWi
 9WdOsnzHPLvmywK4B7qVc+S
IronPort-HdrOrdr: A9a23:IDmbjaDLfpjixI7lHemq55DYdb4zR+YMi2TC1yhKJyC9Vvbo8/
 xG+85rsiMc6QxhPU3I9ursBEDtex/hHNtOkO8s1NSZLWvbUQmTTL2KhLGKq1aLJ8S9zJ8/6U
 4JSdkGNDSaNzlHZKjBjzWFLw==
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="73706981"
Date: Fri, 17 Jun 2022 15:29:25 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Julien Grall" <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
Message-ID: <YqyPxd9yNvfd5idJ@perard.uk.xensource.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
 <1655143522-14356-2-git-send-email-olekstysh@gmail.com>
 <YqnrerEAFXJUCRL1@perard.uk.xensource.com>
 <21798651-1254-0c17-5379-224b52a92566@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <21798651-1254-0c17-5379-224b52a92566@gmail.com>

On Wed, Jun 15, 2022 at 07:32:54PM +0300, Oleksandr wrote:
> On 15.06.22 17:23, Anthony PERARD wrote:
> > On Mon, Jun 13, 2022 at 09:05:20PM +0300, Oleksandr Tyshchenko wrote:
> > > diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
> > > index 70eed43..f2b0ff5 100644
> > > --- a/tools/xl/xl_block.c
> > > +++ b/tools/xl/xl_block.c
> > > @@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
> > >           return 0;
> > >       }
> > > +    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
> > > +        fprintf(stderr, "block-attach is only supported for specification xen\n");
> > This check prevents a previously working `block-attach` command line
> > from working.
> > 
> >      # xl -Tvvv block-attach 0 /dev/vg/guest_disk,raw,hda
> >      block-attach is only supported for specification xen
> > 
> > At least, that works by adding ",specification=xen", but it should work
> > without it as "xen" is the default (from the man page).
> 
> yes, you are right. thank you for pointing this out.
> 
> 
> > 
> > Maybe the check is done too soon, or maybe a better place to do it would
> > be in libxl.
> > 
> > libxl__device_disk_setdefault() is called much later, while executing
> > libxl_device_disk_add(), so `xl` can't use the default been done there
> > to "disk.specification".
> 
> I got it.
> 
> 
> > 
> > `xl block-attach` calls libxl_device_disk_add() which I think is only
> > called for hotplug of disk. If I recall correctly, libxl__add_disks() is
> > called instead at guest creation. So maybe it is possible to do
> > something in libxl_device_disk_add(), but that a function defined by a
> > macro, and the macro is using the same libxl__device_disk_add() that
> > libxl_device_disk_add(). On the other hand, there is a "hotplug"
> > parameter to libxl__device_disk_setdefault(), maybe that could be use?
> 
> Thank you for digging into the details here.
> 
> If I understood correctly your suggestion we simply can drop checks in
> main_blockattach() (and likely main_blockdetach() ?) and add it to
> libxl__device_disk_setdefault().
> 
> 
> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
> index 9e82adb..96ace09 100644
> --- a/tools/libs/light/libxl_disk.c
> +++ b/tools/libs/light/libxl_disk.c
> @@ -182,6 +182,11 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
>          disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
>      }
> 
> +    if (hotplug && disk->specification != LIBXL_DISK_SPECIFICATION_XEN) {
> +        LOGD(ERROR, domid, "Hotplug is only supported for specification xen");

Maybe check for `specification == VIRTIO` instead, and report that
hotplug isn't supported in virtio's case.

> +        return ERROR_FAIL;
> +    }
> +
>      /* Force Qdisk backend for CDROM devices of guests with a device model. */
>      if (disk->is_cdrom != 0 &&
>          libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
> 
> 
> Is my understanding correct?

Yes, that looks correct.

But I hadn't looked at block-detach, and it seems that this part only
uses generic functions to remove a disk, so there is no easy way to
prevent hot-unplug from libxl. However, `xl` does have access to a
fully initialised "disk", so it can check the value of
"specification"; I guess the check can stay in main_blockdetach().

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 14:59:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 14:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351478.578119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2DRK-0000Ae-55; Fri, 17 Jun 2022 14:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351478.578119; Fri, 17 Jun 2022 14:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2DRK-0000AX-1x; Fri, 17 Jun 2022 14:59:14 +0000
Received: by outflank-mailman (input) for mailman id 351478;
 Fri, 17 Jun 2022 14:59:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI7C=WY=citrix.com=prvs=160677330=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o2DRI-0000AQ-4C
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 14:59:12 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a16967e-ee4e-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 16:59:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a16967e-ee4e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655477950;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Q3ozQLSS4NjZzgsa97J9WQApnN44hx5YtzaZmsKsTCY=;
  b=VzRz7gjQFGANolwBcbAy2hoIfe++YslgqTfRRDoAQBCvliwNR0e1faOP
   fjV2dCdaGR+YbWdZoCbIXCsdDDPTBJCY/IidatITNpC4T6oQYX2nslDt7
   tLvRtxM5B8C7R7G2uMna4nucJRuFgktjyWaTqSzn+ZAjfMJEJ/8xsCq/v
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73872615
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:9EU24qJ+rCYNfepwFE+R1ZUlxSXFcZb7ZxGr2PjKsXjdYENS0TIBy
 DMaXmuEb6zfYTH1KY1wat+1900P7JTTyNEyTAVlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA14+IMsdoUg7wbRh3Nc22YTR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 OtxlJ/oTjozBP3ztu8QSQliM3w5FqITrdcrIVDn2SCS50jPcn+qyPRyFkAme4Yf/46bA0kXq
 6ZecmpUKEne2aTmm9pXScE17ignBMDtIIMYvGAm1TzDBOwqaZvCX7/L9ZlT2zJYasVmQq2OO
 JBHMGcHgBLoakVyFAooKb0C27ms30fHYyF4rg+5nP9ii4TU5FMoi+W8WDbPQfSVQe1Fk0Deo
 XjJl0zpDxdfONGBxD6t9nO3mvSJjS79QJgVFrCz6rhtmlL77ncIFBQcWF+/oP+4ok2zQdRSL
 woT4CVGhao4+VGvT9L9dwalu3PCtRkZM/JSDuk75Qel2qfSpQGDCQA5oiVpMYJ88pVsHHpzi
 wHPz4iB6SFTXKO9d0689e+TkCmIaRc7JmIYdRUkEEwu7Iy2yG0stS4jXuqPAYbs0ICoRWqom
 WjXxMQtr+5N1JBWjs1X6XiC2mvx/caRE2bZ8y2NBgqYAhVFiJlJjmBCwXzS9r5+IYmQVTFtV
 1BUypHFvIji4Xxg/RFhodnh/5nzvp5pyBWG3TZS82AJrlxBAUKLc4FK+y1ZL0x0KMsCcjKBS
 BaN5F4NvMMPYSPzMPUfj2eN5yMCnMDd+SnNDKiIPrKinLAqHON4wM2eTRHJhD28+KTduao+J
 Y2aYa6RMJruMow+lGDeb75EidcDn3lirUuOFcGT50n2itK2OS/KIYrpxXPTN4jVGovf+16Lm
 zueXuPXoyhivBrWOHmIrdZPdAhQdxDWx/ne8qRqSwJKGSI+cElJNhMb6epJl1BN90iNqtr1w
 w==
IronPort-HdrOrdr: A9a23:Y1Ekt61TqPGczkPIzqr3rwqjBEgkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7Ar5PUtLpTnuAsa9qB/nm6KdgrNhWItKPjOW21dARbsKheffKlXbcBEWndQtt5
 uIHZIeNDXxZ2IK8PoT4mODYqodKA/sytHWuQ/cpU0dMz2Dc8tbnmBE4p7wKDwMeOFBb6BJcq
 a01458iBeLX28YVci/DmltZZm4mzWa/KiWGCLvHnQcmXGzsQ8=
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="73872615"
Date: Fri, 17 Jun 2022 15:59:04 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH v2 3/4] build: set PERL
Message-ID: <YqyWuIDP0FVAR7mr@perard.uk.xensource.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220614162248.40278-4-anthony.perard@citrix.com>
 <7c76c81a-d781-8ffb-f68a-ece5487ad01f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <7c76c81a-d781-8ffb-f68a-ece5487ad01f@suse.com>

On Wed, Jun 15, 2022 at 08:11:10AM +0200, Jan Beulich wrote:
> On 14.06.2022 18:22, Anthony PERARD wrote:
> > --- a/xen/Makefile
> > +++ b/xen/Makefile
> > @@ -22,6 +22,7 @@ PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null)
> >  export PYTHON		?= $(PYTHON_INTERPRETER)
> >  
> >  export CHECKPOLICY	?= checkpolicy
> > +export PERL		?= perl
> >  
> >  $(if $(filter __%, $(MAKECMDGOALS)), \
> >      $(error targets prefixed with '__' are only for internal use))
> 
> Considering my patch yesterday that moved the exporting of CC etc, I
> wonder if - at the very least for consistency - this and the neighbouring
> pre-existing exports shouldn't all be moved below the inclusion of
> ./Config.mk as well. After all ./config might override any of those.
> (Yes, the ones here don't prevent such overriding, but only as long as
> there aren't any further make quirks.)

Sounds good, we can move CHECKPOLICY and PERL, but there's a problem
with PYTHON: Config.mk defines a different value for PYTHON, so the one
in xen/Makefile needs to be set before including ./Config.mk.
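[Editorial note: a sketch of the ordering being discussed, assuming the
existing xen/Makefile variable names; illustrative only, not the actual
patch.]

```make
# PYTHON must get its default before Config.mk is included, because
# Config.mk assigns PYTHON itself; CHECKPOLICY and PERL can move below
# the include, so a ./config override still takes effect.

PYTHON_INTERPRETER := $(word 1,$(shell which python3 python python2 2>/dev/null) python)
export PYTHON      ?= $(PYTHON_INTERPRETER)

include $(XEN_ROOT)/Config.mk

export CHECKPOLICY ?= checkpolicy
export PERL        ?= perl
```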

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 15:26:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 15:26:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351486.578133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Dre-0003wp-95; Fri, 17 Jun 2022 15:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351486.578133; Fri, 17 Jun 2022 15:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Dre-0003wi-6A; Fri, 17 Jun 2022 15:26:26 +0000
Received: by outflank-mailman (input) for mailman id 351486;
 Fri, 17 Jun 2022 15:26:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Drd-0003wY-1A; Fri, 17 Jun 2022 15:26:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Drc-0005Y2-Ur; Fri, 17 Jun 2022 15:26:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Drc-0006em-KP; Fri, 17 Jun 2022 15:26:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Drc-000308-Ju; Fri, 17 Jun 2022 15:26:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jnTVvFcskdsEkxb6P8VCYB52CQ2g+ebA9eRIOGUt99s=; b=eN8efO7/k3Pa0qAIgxVz4eklFM
	Yxq3huwi4aaCziRVfSEf047RIYfRa+hwyTCYCSoOAeTPfCHaUPFWq1wkpSUuxAQQ0Ub8EraJDZvNs
	By8cfGq/lHLZ8O7GgETburRy+qA1Xxttg6GiW/kFHUr/0CGIIE48GUbbm7t7VBSnMO4g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171243: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cc2db6ebfb6d9d85ba4c7b35fba1fa37fffc0bc2
X-Osstest-Versions-That:
    ovmf=92ab049719afe96913c0452bcf12946e0af0f0d5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 15:26:24 +0000

flight 171243 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171243/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cc2db6ebfb6d9d85ba4c7b35fba1fa37fffc0bc2
baseline version:
 ovmf                 92ab049719afe96913c0452bcf12946e0af0f0d5

Last test of basis   171202  2022-06-16 12:43:16 Z    1 days
Testing same since   171243  2022-06-17 09:43:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Xie, Yuanhao <yuanhao.xie@intel.com>
  Yuanhao Xie <yuanhao.xie@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   92ab049719..cc2db6ebfb  cc2db6ebfb6d9d85ba4c7b35fba1fa37fffc0bc2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 15:34:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 15:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351496.578144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Dza-0005TL-2b; Fri, 17 Jun 2022 15:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351496.578144; Fri, 17 Jun 2022 15:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2DzZ-0005TE-Vz; Fri, 17 Jun 2022 15:34:37 +0000
Received: by outflank-mailman (input) for mailman id 351496;
 Fri, 17 Jun 2022 15:34:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FDGi=WY=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o2DzZ-0005T8-Dg
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 15:34:37 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fdc583fe-ee52-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 17:34:35 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id e25so2364665wrc.13
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jun 2022 08:34:35 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 m15-20020a05600c4f4f00b0039748be12dbsm10317510wmq.47.2022.06.17.08.34.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 17 Jun 2022 08:34:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdc583fe-ee52-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=tdWwDkIjEBR5FAhhoXIwfjyS1j1RkQ7wqDzOuyjqT6Y=;
        b=bWB+Txp4hxjrkLJpaPwu0Ywz3sIuks4Qtu7V+3pVHsqfT03lJAnbplGh8Ad7BI2LGl
         YXCL4Nffx8fjY3GcT9cWs6jeEA3bQ8LTuFc7CeWI7esV5YFfJdCjtnr9GVzWcAYCRNs9
         ukKPNsXEzfgSvuaRR31rbneQfuH6wTCMM4u3yVJtWixAkipKr9M80WnKqvjlyHJLSJAg
         f0AVNmpsTmOk2XBXHjFEY402a1qqB4zV8zmgKXSj0Zg0Zid8SR0Qxm0s7J2mTeJL+FGS
         ud10eDM6I33JK5wN/zIjRJL+k6XZ+AOAgbZ5hbnq6+rhdEk7tjI+lBqDBNgdTVQGSE9z
         0AJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=tdWwDkIjEBR5FAhhoXIwfjyS1j1RkQ7wqDzOuyjqT6Y=;
        b=g8RypO+pUplt9FqEx+tPt6lnnhII0CAKWa36HiDNrxJSjue2XlweHJSlJlV8vPMuvQ
         zEEBA0TLWzSRLYGsMl7V/74S+CAAoYVh+Q2nrBYUDd/TwNz0z7VCpBlKXIm8dLDaRGlM
         LPJbklyxO1AtXm1IESBU+rS5wKjRYtLNxMTiqIncmIi4NwxqgB56gTmPTG4xkBiOK/K2
         qfpthRxrb60RCQJaAjwcu5BV7JbFsGEhtoQdJlltkaLmq0w2Ab+icR7ZMolTCw4opAhf
         UBEG63B9eKbU+ZQGfjazy4Te3FTbqKDFJ7RD0TynvDJvduGfPl++baRLBt4Zs0txDmqE
         BCwg==
X-Gm-Message-State: AJIora9VtKQ5Ng37penEVaSSd8raQX6rRbZNrjL2UKVPPySuvQMdLdVq
	os3zq/9vKWDJr9fggExOsjs=
X-Google-Smtp-Source: AGRyM1snf19aPKhaDenBQgCK0t5vEPJ947t10rjXW7J2KyXhY5/IjiTdSzAmJnvYjZ1SZphfqyey1Q==
X-Received: by 2002:adf:d1c1:0:b0:219:e994:6b8a with SMTP id b1-20020adfd1c1000000b00219e9946b8amr9890180wrd.462.1655480075005;
        Fri, 17 Jun 2022 08:34:35 -0700 (PDT)
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
 <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <1655143522-14356-1-git-send-email-olekstysh@gmail.com>
 <1655143522-14356-2-git-send-email-olekstysh@gmail.com>
 <YqnrerEAFXJUCRL1@perard.uk.xensource.com>
 <21798651-1254-0c17-5379-224b52a92566@gmail.com>
 <YqyPxd9yNvfd5idJ@perard.uk.xensource.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
Date: Fri, 17 Jun 2022 18:34:33 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <YqyPxd9yNvfd5idJ@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 17.06.22 17:29, Anthony PERARD wrote:


Hello Anthony

> On Wed, Jun 15, 2022 at 07:32:54PM +0300, Oleksandr wrote:
>> On 15.06.22 17:23, Anthony PERARD wrote:
>>> On Mon, Jun 13, 2022 at 09:05:20PM +0300, Oleksandr Tyshchenko wrote:
>>>> diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
>>>> index 70eed43..f2b0ff5 100644
>>>> --- a/tools/xl/xl_block.c
>>>> +++ b/tools/xl/xl_block.c
>>>> @@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
>>>>            return 0;
>>>>        }
>>>> +    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
>>>> +        fprintf(stderr, "block-attach is only supported for specification xen\n");
>>> This check prevents a previously working `block-attach` command line
>>> from working.
>>>
>>>       # xl -Tvvv block-attach 0 /dev/vg/guest_disk,raw,hda
>>>       block-attach is only supported for specification xen
>>>
>>> At least, that works by adding ",specification=xen", but it should work
>>> without it as "xen" is the default (from the man page).
>> Yes, you are right. Thank you for pointing this out.
>>
>>
>>> Maybe the check is done too soon, or maybe a better place to do it would
>>> be in libxl.
>>>
>>> libxl__device_disk_setdefault() is called much later, while executing
>>> libxl_device_disk_add(), so `xl` can't rely on the default applied there
>>> to "disk.specification".
>> I got it.
>>
>>
>>> `xl block-attach` calls libxl_device_disk_add(), which I think is only
>>> called for disk hotplug. If I recall correctly, libxl__add_disks() is
>>> called instead at guest creation. So maybe it is possible to do
>>> something in libxl_device_disk_add(), but that's a function defined by a
>>> macro, and the macro uses the same libxl__device_disk_add() as
>>> libxl_device_disk_add(). On the other hand, there is a "hotplug"
>>> parameter to libxl__device_disk_setdefault(); maybe that could be used?
>> Thank you for digging into the details here.
>>
>> If I understood your suggestion correctly, we can simply drop the checks
>> in main_blockattach() (and likely main_blockdetach()?) and add one to
>> libxl__device_disk_setdefault().
>>
>>
>> diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
>> index 9e82adb..96ace09 100644
>> --- a/tools/libs/light/libxl_disk.c
>> +++ b/tools/libs/light/libxl_disk.c
>> @@ -182,6 +182,11 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
>>           disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
>>       }
>>
>> +    if (hotplug && disk->specification != LIBXL_DISK_SPECIFICATION_XEN) {
>> +        LOGD(ERROR, domid, "Hotplug is only supported for specification xen");
> Maybe check for `specification == VIRTIO` instead, and report that
> hotplug isn't supported in virtio's case.

OK, will do.


>
>> +        return ERROR_FAIL;
>> +    }
>> +
>>       /* Force Qdisk backend for CDROM devices of guests with a device model. */
>>       if (disk->is_cdrom != 0 &&
>>           libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
>>
>>
>> Is my understanding correct?
> Yes, that looks correct.


Thank you for the confirmation.


>
> But I didn't look at block-detach, and it seems that this part only
> uses generic functions to remove a disk. So there is no easy way to
> prevent hotunplug from libxl. But `xl` does have access to a fully
> initialised "disk", so it can check the value of "specification"; I guess
> the check can stay in main_blockdetach().

OK, got it.


So, it works:

root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3,specification=virtio
libxl: error: libxl_disk.c:186:libxl__device_disk_setdefault: Domain 2:Hotplug isn't supported for specification virtio
libxl: error: libxl_device.c:1468:device_addrm_aocomplete: unable to add device
libxl_device_disk_add failed.

root@generic-armv8-xt-dom0:~# xl block-detach DomU 51713
Hotunplug isn't supported for specification virtio

**********

root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3
[  364.656091] xen-blkback: backend/vbd/3/51715: using 4 queues, protocol 1 (arm-abi)


>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 15:53:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 15:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351505.578154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2EHJ-0008Is-Kz; Fri, 17 Jun 2022 15:52:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351505.578154; Fri, 17 Jun 2022 15:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2EHJ-0008Il-IL; Fri, 17 Jun 2022 15:52:57 +0000
Received: by outflank-mailman (input) for mailman id 351505;
 Fri, 17 Jun 2022 15:52:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2EHH-0008Ib-Uz; Fri, 17 Jun 2022 15:52:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2EHH-0005zZ-RB; Fri, 17 Jun 2022 15:52:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2EHH-0007cJ-CW; Fri, 17 Jun 2022 15:52:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2EHH-0000CJ-C3; Fri, 17 Jun 2022 15:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vyu+4YQSc+fbt2A9tEkMZ7kDCUqBGozdI/IB6VjcyB0=; b=Jo1Hmq/r+FhPRQhhFJIvqwd7FD
	NoKnJSqgzSOvDzkgx4m4An1MZn+ATE0+CcNAWt0vg9hHX6uNXmcFeple3DkKUkWRvR4czBxmRlC6m
	10/5VCBNuceL2MEhHjABdxFN2Gb48/r1MSYMhJ42r0Xsh3PpN+Sl1TgOdzIXqv8muoDk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171215-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171215: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=def6fd6c9ce9e00a30cdd0066e0fde206b3f3d2f
X-Osstest-Versions-That:
    qemuu=9ac873a46963098441be920ef7a2eaf244a3352d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 15:52:55 +0000

flight 171215 qemu-mainline real [real]
flight 171254 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171215/
http://logs.test-lab.xenproject.org/osstest/logs/171254/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 171183

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 171254-retest
 test-amd64-amd64-dom0pvh-xl-intel 20 guest-localmigrate/x10 fail pass in 171254-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 171254-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 171183

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171183
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171183
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171183
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171183
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171183
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171183
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171183
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171183
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                def6fd6c9ce9e00a30cdd0066e0fde206b3f3d2f
baseline version:
 qemuu                9ac873a46963098441be920ef7a2eaf244a3352d

Last test of basis   171183  2022-06-15 23:40:54 Z    1 days
Testing same since   171215  2022-06-16 16:37:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Mark Kanda <mark.kanda@oracle.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 557 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 16:15:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 16:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351520.578171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2EcQ-0003xu-Ma; Fri, 17 Jun 2022 16:14:46 +0000
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
Date: Fri, 17 Jun 2022 19:14:31 +0300
Message-Id: <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting a virtio-mmio
based virtio-disk backend (emulator), which is intended to run outside of
QEMU and can run in any domain.
Although the Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack's point of view:
 - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
   written to Xenstore is fetched by the frontend currently ("vdev"
   is not passed to the frontend). This might need to be revised in
   the future, so that frontend data can be written to Xenstore in order
   to support hotplugging virtio devices or passing the backend domain id
   on architectures where a device-tree is not available.
 - the ring-ref/event-channel are not used for backend<->frontend
   communication; the proposed IPC for Virtio is IOREQ/DM
it is still a "block device" and ought to be integrated into the existing
"disk" handling. So, re-use (and adapt) the "disk" parsing/configuration
logic to deal with Virtio devices as well.

To serve the immediate purpose, and to allow that support to be extended
to other use-cases in the future (QEMU, virtio-pci, etc.), perform the
following actions:
- Add a new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
  that in the configuration
- Introduce new disk "specification" and "transport" fields in struct
  libxl_device_disk. Both are written to Xenstore. The transport
  field is only used for the "virtio" specification and only accepts
  the value "mmio" for now.
- Introduce a new "specification" option, with the "xen" communication
  protocol being the default value.
- Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the current
  one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk model

An example of domain configuration for Virtio disk:
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']

Nothing has changed for default Xen disk configuration.
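For illustration only (this is not part of the patch), the way xl's disk
syntax maps onto the new fields can be sketched as a tiny parser. The real
parsing is done by the generated flex scanner in libxlu_disk_l.[ch]; the
helper name and dictionary layout below are hypothetical, and the
virtio-forces-mmio rule mirrors the check in libxl__device_disk_setdefault():

```python
def parse_disk_spec(spec):
    """Hypothetical sketch: split 'target, vdev, key=value, ...' into fields."""
    fields = {"backendtype": None, "specification": "xen", "transport": None}
    positional = []
    for part in (p.strip() for p in spec.split(",")):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip()
        elif part:
            positional.append(part)
    # First positional item is the target (e.g. phy:/dev/...), second the vdev.
    if positional:
        fields["target"] = positional[0]
    if len(positional) > 1:
        fields["vdev"] = positional[1]
    # Mirror the setdefault logic: specification=virtio currently forces mmio.
    if fields["specification"] == "virtio" and fields["transport"] is None:
        fields["transport"] = "mmio"
    return fields

d = parse_disk_spec(
    "phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio")
# d["backendtype"] == "other" and d["transport"] == "mmio"
```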

Please note, this patch is not enough for virtio-disk to work
on Xen (Arm): for every Virtio device (including disks) we need
to allocate Virtio MMIO params (IRQ and memory region), pass
them to the backend, and update the guest device-tree. The subsequent
patch will add these missing bits. With the current patch,
the default "irq" and "base" are just written to Xenstore.
This is not an ideal split, but this way we avoid breaking
bisectability.
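For reference, the backend nodes written for a virtio disk by
device_disk_add() end up looking roughly like this in Xenstore. The paths
and values shown are illustrative (the exact kind string for
LIBXL__DEVICE_KIND_VIRTIO_DISK and the default base/irq values are
assumptions here, not taken from the patch):

```
/local/domain/<backend-domid>/backend/virtio_disk/<domid>/<devid>/
    params        = "/dev/mmcblk0p3"
    specification = "virtio"
    transport     = "mmio"
    base          = "<default base address>"
    irq           = "<default irq>"
```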

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - update patch description
   - don't introduce new "vdisk" configuration option with own parsing logic,
     re-use Xen PV block "disk" parsing/configuration logic for the virtio-disk
   - introduce "virtio" flag and document its usage
   - add LIBXL_HAVE_DEVICE_DISK_VIRTIO
   - update libxlu_disk_l.[ch]
   - drop num_disks variable/MAX_VIRTIO_DISKS
   - drop Wei's T-b

Changes V5 -> V6:
   - rebase on current staging
   - use "%"PRIu64 instead of %lu for disk->base in device_disk_add()
   - update *.gen.go files

Changes V6 -> V7:
   - rebase on current staging
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description
   - rework significantly to support more flexible configuration
     and have more generic basic implementation for being able to extend
     that for other use-cases (virtio-pci, qemu, etc).

Changes V7 -> V8:
   - update *.gen.go files and libxlu_disk_l.[ch] files
   - update patch description and comments in the code
   - use "specification" config option instead of "protocol"
   - update libxl_types.idl and code according to new fields
     in libxl_device_disk

Changes V8 -> V9:
   - update (and harden) checks in libxl__device_disk_setdefault(),
     return error in case of incorrect settings of specification
     and transport
   - remove both asserts in device_disk_add()
   - update virtio related code in libxl__disk_from_xenstore(),
     do not fail if specification node is absent, replace
     open-coded checks of fetched specification and transport by
     libxl_disk_specification_from_string() and libxl_disk_transport_from_string()
     respectively
   - s/libxl_device_disk_get_path/libxl__device_disk_get_path
   - add a comment for virtio-mmio parameters in struct libxl_device_disk

Changes V9 -> V10:
   - s/ERROR_FAIL/ERROR_INVAL in both places in libxl__device_disk_setdefault()
   - rework libxl__device_disk_get_path()

Changes V10 -> V10.1:
   - fix small coding style issue in libxl__device_disk_get_path()
   - drop specification check in main_blockattach() and add
     required check in libxl__device_disk_setdefault()
   - update specification check in main_blockdetach()
---
 docs/man/xl-disk-configuration.5.pod.in   |  38 +-
 tools/golang/xenlight/helpers.gen.go      |   8 +
 tools/golang/xenlight/types.gen.go        |  18 +
 tools/include/libxl.h                     |   7 +
 tools/libs/light/libxl_device.c           |  62 +-
 tools/libs/light/libxl_disk.c             | 146 ++++-
 tools/libs/light/libxl_internal.h         |   2 +
 tools/libs/light/libxl_types.idl          |  18 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 959 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   9 +
 tools/xl/xl_block.c                       |   6 +
 14 files changed, 798 insertions(+), 480 deletions(-)

diff --git a/docs/man/xl-disk-configuration.5.pod.in b/docs/man/xl-disk-configuration.5.pod.in
index 71d0e86..487ffef 100644
--- a/docs/man/xl-disk-configuration.5.pod.in
+++ b/docs/man/xl-disk-configuration.5.pod.in
@@ -232,7 +232,7 @@ Specifies the backend implementation to use
 
 =item Supported values
 
-phy, qdisk
+phy, qdisk, other
 
 =item Mandatory
 
@@ -244,11 +244,13 @@ Automatically determine which backend to use.
 
 =back
 
-This does not affect the guest's view of the device.  It controls
-which software implementation of the Xen backend driver is used.
+It controls which software implementation of the backend driver is used.
+Depending on the "specification" option this may affect the guest's view
+of the device.
 
 Not all backend drivers support all combinations of other options.
-For example, "phy" does not support formats other than "raw".
+For example, "phy" and "other" do not support formats other than "raw" and
+"other" does not support specifications other than "virtio".
 Normally this option should not be specified, in which case libxl will
 automatically determine the most suitable backend.
 
@@ -344,8 +346,36 @@ can be used to disable "hole punching" for file based backends which
 were intentionally created non-sparse to avoid fragmentation of the
 file.
 
+=item B<specification>=I<SPECIFICATION>
+
+=over 4
+
+=item Description
+
+Specifies the communication protocol (specification) to use for the chosen
+"backendtype" option
+
+=item Supported values
+
+xen, virtio
+
+=item Mandatory
+
+No
+
+=item Default value
+
+xen
+
 =back
 
+Besides forcing toolstack to use specific backend implementation, this also
+affects the guest's view of the device. For example, "virtio" requires
+Virtio frontend driver (virtio-blk) to be used. Please note, the virtual
+device (vdev) is not passed to the guest in that case, but it still must be
+specified for the internal purposes.
+
+=back
 
 =head1 COLO Parameters
 
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index b746ff1..00f10b9 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1751,6 +1751,10 @@ x.DirectIoSafe = bool(xc.direct_io_safe)
 if err := x.DiscardEnable.fromC(&xc.discard_enable);err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+x.Specification = DiskSpecification(xc.specification)
+x.Transport = DiskTransport(xc.transport)
+x.Irq = uint32(xc.irq)
+x.Base = uint64(xc.base)
 if err := x.ColoEnable.fromC(&xc.colo_enable);err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
@@ -1788,6 +1792,10 @@ xc.direct_io_safe = C.bool(x.DirectIoSafe)
 if err := x.DiscardEnable.toC(&xc.discard_enable); err != nil {
 return fmt.Errorf("converting field DiscardEnable: %v", err)
 }
+xc.specification = C.libxl_disk_specification(x.Specification)
+xc.transport = C.libxl_disk_transport(x.Transport)
+xc.irq = C.uint32_t(x.Irq)
+xc.base = C.uint64_t(x.Base)
 if err := x.ColoEnable.toC(&xc.colo_enable); err != nil {
 return fmt.Errorf("converting field ColoEnable: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b1e84d5..cc52936 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -99,6 +99,20 @@ DiskBackendUnknown DiskBackend = 0
 DiskBackendPhy DiskBackend = 1
 DiskBackendTap DiskBackend = 2
 DiskBackendQdisk DiskBackend = 3
+DiskBackendOther DiskBackend = 4
+)
+
+type DiskSpecification int
+const(
+DiskSpecificationUnknown DiskSpecification = 0
+DiskSpecificationXen DiskSpecification = 1
+DiskSpecificationVirtio DiskSpecification = 2
+)
+
+type DiskTransport int
+const(
+DiskTransportUnknown DiskTransport = 0
+DiskTransportMmio DiskTransport = 1
 )
 
 type NicType int
@@ -643,6 +657,10 @@ Readwrite int
 IsCdrom int
 DirectIoSafe bool
 DiscardEnable Defbool
+Specification DiskSpecification
+Transport DiskTransport
+Irq uint32
+Base uint64
 ColoEnable Defbool
 ColoRestoreEnable Defbool
 ColoHost string
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 51a9b6c..cd8067b 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -528,6 +528,13 @@
 #define LIBXL_HAVE_MAX_GRANT_VERSION 1
 
 /*
+ * LIBXL_HAVE_DEVICE_DISK_SPECIFICATION indicates that 'specification' and
+ * 'transport' fields (of libxl_disk_specification and libxl_disk_transport
+ * types respectively) are present in libxl_device_disk.
+ */
+#define LIBXL_HAVE_DEVICE_DISK_SPECIFICATION 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e6025d1..a38d2e2 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -289,9 +289,16 @@ static int disk_try_backend(disk_try_backend_args *a,
                             libxl_disk_backend backend)
  {
     libxl__gc *gc = a->gc;
+    libxl_disk_specification specification = a->disk->specification;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
 
+    if ((specification == LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend != LIBXL_DISK_BACKEND_OTHER) ||
+        (specification != LIBXL_DISK_SPECIFICATION_VIRTIO &&
+         backend == LIBXL_DISK_BACKEND_OTHER))
+        goto bad_specification;
+
     switch (backend) {
     case LIBXL_DISK_BACKEND_PHY:
         if (a->disk->format != LIBXL_DISK_FORMAT_RAW) {
@@ -329,6 +336,29 @@ static int disk_try_backend(disk_try_backend_args *a,
         if (a->disk->script) goto bad_script;
         return backend;
 
+    case LIBXL_DISK_BACKEND_OTHER:
+        if (a->disk->format != LIBXL_DISK_FORMAT_RAW)
+            goto bad_format;
+
+        if (a->disk->script)
+            goto bad_script;
+
+        if (libxl_defbool_val(a->disk->colo_enable))
+            goto bad_colo;
+
+        if (a->disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(DEBUG, "Disk vdev=%s, is using a storage driver domain, "
+                       "skipping physical device check", a->disk->vdev);
+            return backend;
+        }
+
+        if (libxl__try_phy_backend(a->stab.st_mode))
+            return backend;
+
+        LOG(DEBUG, "Disk vdev=%s, backend other unsuitable as phys path not a "
+                   "block device", a->disk->vdev);
+        return 0;
+
     default:
         LOG(DEBUG, "Disk vdev=%s, backend %d unknown", a->disk->vdev, backend);
         return 0;
@@ -352,6 +382,12 @@ static int disk_try_backend(disk_try_backend_args *a,
     LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with colo",
         a->disk->vdev, libxl_disk_backend_to_string(backend));
     return 0;
+
+ bad_specification:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with specification %s",
+        a->disk->vdev, libxl_disk_backend_to_string(backend),
+        libxl_disk_specification_to_string(specification));
+    return 0;
 }
 
 int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
@@ -376,8 +412,9 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
     a.gc = gc;
     a.disk = disk;
 
-    LOG(DEBUG, "Disk vdev=%s spec.backend=%s", disk->vdev,
-               libxl_disk_backend_to_string(disk->backend));
+    LOG(DEBUG, "Disk vdev=%s spec.backend=%s specification=%s", disk->vdev,
+               libxl_disk_backend_to_string(disk->backend),
+               libxl_disk_specification_to_string(disk->specification));
 
     if (disk->format == LIBXL_DISK_FORMAT_EMPTY) {
         if (!disk->is_cdrom) {
@@ -392,7 +429,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         }
         memset(&a.stab, 0, sizeof(a.stab));
     } else if ((disk->backend == LIBXL_DISK_BACKEND_UNKNOWN ||
-                disk->backend == LIBXL_DISK_BACKEND_PHY) &&
+                disk->backend == LIBXL_DISK_BACKEND_PHY ||
+                disk->backend == LIBXL_DISK_BACKEND_OTHER) &&
                disk->backend_domid == LIBXL_TOOLSTACK_DOMID &&
                !disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
@@ -408,7 +446,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         ok=
             disk_try_backend(&a, LIBXL_DISK_BACKEND_PHY) ?:
             disk_try_backend(&a, LIBXL_DISK_BACKEND_QDISK) ?:
-            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP);
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP) ?:
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_OTHER);
         if (ok)
             LOG(DEBUG, "Disk vdev=%s, using backend %s",
                        disk->vdev,
@@ -441,10 +480,25 @@ char *libxl__device_disk_string_of_backend(libxl_disk_backend backend)
         case LIBXL_DISK_BACKEND_QDISK: return "qdisk";
         case LIBXL_DISK_BACKEND_TAP: return "phy";
         case LIBXL_DISK_BACKEND_PHY: return "phy";
+        case LIBXL_DISK_BACKEND_OTHER: return "other";
+        default: return NULL;
+    }
+}
+
+char *libxl__device_disk_string_of_specification(libxl_disk_specification specification)
+{
+    switch (specification) {
+        case LIBXL_DISK_SPECIFICATION_XEN: return "xen";
+        case LIBXL_DISK_SPECIFICATION_VIRTIO: return "virtio";
         default: return NULL;
     }
 }
 
+char *libxl__device_disk_string_of_transport(libxl_disk_transport transport)
+{
+    return (transport == LIBXL_DISK_TRANSPORT_MMIO ? "mmio" : NULL);
+}
+
 const char *libxl__qemu_disk_format_string(libxl_disk_format format)
 {
     switch (format) {
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index a5ca778..ead2e90 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -163,6 +163,30 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
     rc = libxl__resolve_domid(gc, disk->backend_domname, &disk->backend_domid);
     if (rc < 0) return rc;
 
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_UNKNOWN)
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_XEN &&
+        disk->transport != LIBXL_DISK_TRANSPORT_UNKNOWN) {
+        LOGD(ERROR, domid, "Transport is only supported for specification virtio");
+        return ERROR_INVAL;
+    }
+
+    /* Force transport mmio for specification virtio for now */
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        if (!(disk->transport == LIBXL_DISK_TRANSPORT_UNKNOWN ||
+              disk->transport == LIBXL_DISK_TRANSPORT_MMIO)) {
+            LOGD(ERROR, domid, "Unsupported transport for specification virtio");
+            return ERROR_INVAL;
+        }
+        disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
+    }
+
+    if (hotplug && disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        LOGD(ERROR, domid, "Hotplug isn't supported for specification virtio");
+        return ERROR_FAIL;
+    }
+
     /* Force Qdisk backend for CDROM devices of guests with a device model. */
     if (disk->is_cdrom != 0 &&
         libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
@@ -204,6 +228,9 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         case LIBXL_DISK_BACKEND_QDISK:
             device->backend_kind = LIBXL__DEVICE_KIND_QDISK;
             break;
+        case LIBXL_DISK_BACKEND_OTHER:
+            device->backend_kind = LIBXL__DEVICE_KIND_VIRTIO_DISK;
+            break;
         default:
             LOGD(ERROR, domid, "Unrecognized disk backend type: %d",
                  disk->backend);
@@ -212,7 +239,8 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
 
     device->domid = domid;
     device->devid = devid;
-    device->kind  = LIBXL__DEVICE_KIND_VBD;
+    device->kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
 
     return 0;
 }
@@ -330,7 +358,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+            case LIBXL_DISK_BACKEND_OTHER:
+                dev = disk->pdev_path;
+
+                flexarray_append(back, "params");
+                flexarray_append(back, dev);
 
+                assert(device->backend_kind == LIBXL__DEVICE_KIND_VIRTIO_DISK);
+                break;
             case LIBXL_DISK_BACKEND_TAP:
                 LOG(ERROR, "blktap is not supported");
                 rc = ERROR_FAIL;
@@ -386,6 +421,14 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append_pair(back, "discard-enable",
                               libxl_defbool_val(disk->discard_enable) ?
                               "1" : "0");
+        flexarray_append(back, "specification");
+        flexarray_append(back, libxl__device_disk_string_of_specification(disk->specification));
+        if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+            flexarray_append(back, "transport");
+            flexarray_append(back, libxl__device_disk_string_of_transport(disk->transport));
+            flexarray_append_pair(back, "base", GCSPRINTF("%"PRIu64, disk->base));
+            flexarray_append_pair(back, "irq", GCSPRINTF("%u", disk->irq));
+        }
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, GCSPRINTF("%d", disk->backend_domid));
@@ -532,6 +575,53 @@ static int libxl__disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
     }
     libxl_string_to_backend(ctx, tmp, &(disk->backend));
 
+    tmp = libxl__xs_read(gc, XBT_NULL,
+                         GCSPRINTF("%s/specification", libxl_path));
+    if (!tmp) {
+        LOG(DEBUG, "Missing xenstore node %s/specification, assuming specification xen", libxl_path);
+        disk->specification = LIBXL_DISK_SPECIFICATION_XEN;
+    } else {
+        rc = libxl_disk_specification_from_string(tmp, &disk->specification);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/specification", libxl_path);
+            goto cleanup;
+        }
+    }
+
+    if (disk->specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/transport", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        rc = libxl_disk_transport_from_string(tmp, &disk->transport);
+        if (rc) {
+            LOG(ERROR, "Unable to parse xenstore node %s/transport", libxl_path);
+            goto cleanup;
+        }
+        if (disk->transport != LIBXL_DISK_TRANSPORT_MMIO) {
+            LOG(ERROR, "Only transport mmio is expected for specification virtio");
+            goto cleanup;
+        }
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/base", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/base", libxl_path);
+            goto cleanup;
+        }
+        disk->base = strtoul(tmp, NULL, 10);
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/irq", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/irq", libxl_path);
+            goto cleanup;
+        }
+        disk->irq = strtoul(tmp, NULL, 10);
+    }
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          GCSPRINTF("%s/dev", libxl_path), &len);
     if (!disk->vdev) {
@@ -575,6 +665,42 @@ cleanup:
     return rc;
 }
 
+static int libxl__device_disk_get_path(libxl__gc *gc, uint32_t domid,
+                                       char **path)
+{
+    const char *xen_dir, *virtio_dir;
+    char *xen_path, *virtio_path;
+    int rc;
+
+    /* default path */
+    xen_path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, xen_path, &xen_dir);
+    if (rc)
+        return rc;
+
+    virtio_path = GCSPRINTF("%s/device/%s",
+                            libxl__xs_libxl_path(gc, domid),
+                            libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, virtio_path, &virtio_dir);
+    if (rc)
+        return rc;
+
+    if (xen_dir && virtio_dir) {
+        LOGD(ERROR, domid, "Invalid configuration, both xen and virtio paths are present");
+        return ERROR_INVAL;
+    } else if (virtio_dir) {
+        *path = virtio_path;
+    } else {
+        *path = xen_path;
+    }
+
+    return 0;
+}
+
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
                               const char *vdev, libxl_device_disk *disk)
 {
@@ -588,10 +714,12 @@ int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
 
     libxl_device_disk_init(disk);
 
-    libxl_path = libxl__domain_device_libxl_path(gc, domid, devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+    rc = libxl__device_disk_get_path(gc, domid, &libxl_path);
+    if (rc)
+        return rc;
 
-    rc = libxl__disk_from_xenstore(gc, libxl_path, devid, disk);
+    rc = libxl__disk_from_xenstore(gc, GCSPRINTF("%s/%d", libxl_path, devid),
+                                   devid, disk);
 
     GC_FREE;
     return rc;
@@ -605,16 +733,19 @@ int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
     char *fe_path, *libxl_path;
     char *val;
     int rc;
+    libxl__device_kind kind;
 
     diskinfo->backend = NULL;
 
     diskinfo->devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
 
-    /* tap devices entries in xenstore are written as vbd devices. */
+    /* tap devices entries in xenstore are written as vbd/virtio_disk devices. */
+    kind = disk->backend == LIBXL_DISK_BACKEND_OTHER ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
     fe_path = libxl__domain_device_frontend_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     libxl_path = libxl__domain_device_libxl_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     diskinfo->backend = xs_read(ctx->xsh, XBT_NULL,
                                 GCSPRINTF("%s/backend", libxl_path), NULL);
     if (!diskinfo->backend) {
@@ -1418,6 +1549,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 #define libxl__device_disk_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
+    .get_path    = libxl__device_disk_get_path,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index bdef5a6..cb9e8b3 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1493,6 +1493,8 @@ _hidden char * libxl__domain_pvcontrol_read(libxl__gc *gc,
 
 /* from xl_device */
 _hidden char *libxl__device_disk_string_of_backend(libxl_disk_backend backend);
+_hidden char *libxl__device_disk_string_of_specification(libxl_disk_specification specification);
+_hidden char *libxl__device_disk_string_of_transport(libxl_disk_transport transport);
 _hidden char *libxl__device_disk_string_of_format(libxl_disk_format format);
 _hidden const char *libxl__qemu_disk_format_string(libxl_disk_format format);
 _hidden int libxl__device_disk_set_backend(libxl__gc*, libxl_device_disk*);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2a42da2..858e32b 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -130,6 +130,18 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (1, "PHY"),
     (2, "TAP"),
     (3, "QDISK"),
+    (4, "OTHER"),
+    ])
+
+libxl_disk_specification = Enumeration("disk_specification", [
+    (0, "UNKNOWN"),
+    (1, "XEN"),
+    (2, "VIRTIO"),
+    ])
+
+libxl_disk_transport = Enumeration("disk_transport", [
+    (0, "UNKNOWN"),
+    (1, "MMIO"),
     ])
 
 libxl_nic_type = Enumeration("nic_type", [
@@ -704,6 +716,12 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("specification", libxl_disk_specification),
+    ("transport", libxl_disk_transport),
+    # Note that virtio-mmio parameters (irq and base) are for internal use
+    # by libxl and can't be modified.
+    ("irq", uint32),
+    ("base", uint64),
     # Note that the COLO configuration settings should be considered unstable.
     # They may change incompatibly in future versions of Xen.
     ("colo_enable", libxl_defbool),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index e5e6b2d..f55915e 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -297,6 +297,8 @@ int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend
         *backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(s, "qdisk")) {
         *backend = LIBXL_DISK_BACKEND_QDISK;
+    } else if (!strcmp(s, "other")) {
+        *backend = LIBXL_DISK_BACKEND_OTHER;
     } else if (!strcmp(s, "tap")) {
         p = strchr(s, ':');
         if (!p) {
diff --git a/tools/libs/util/libxlu_disk_l.c b/tools/libs/util/libxlu_disk_l.c
index 32d4b74..bb1337c 100644
--- a/tools/libs/util/libxlu_disk_l.c
+++ b/tools/libs/util/libxlu_disk_l.c
@@ -549,8 +549,8 @@ static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-#define YY_NUM_RULES 36
-#define YY_END_OF_BUFFER 37
+#define YY_NUM_RULES 37
+#define YY_END_OF_BUFFER 38
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -558,74 +558,77 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static const flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[594] =
     {   0,
-       35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
-    16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   33,   34,
-       36,   33,   34,   36,   33,   34,   36,   33,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   35,   36,
-       36,   33,   33, 8193,   33, 8193,   33,16385, 8193,   33,
-     8193,   33,   33, 8224,   33,16416,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   35,
-     8193,   33, 8193,   33, 8193, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   28, 8224,   33,16416,   33,   33,   15,
-       33,   33,   33,   33,   33,   33,   33,   33,   33, 8217,
-     8224,   33,16409,16416,   33,   33,   31, 8224,   33,16416,
-       33, 8216, 8224,   33,16408,16416,   33,   33, 8219, 8224,
-       33,16411,16416,   33,   33,   33,   33,   33,   28, 8224,
-
-       33,   28, 8224,   33,   28,   33,   28, 8224,   33,    3,
-       33,   15,   33,   33,   33,   33,   33,   30, 8224,   33,
-    16416,   33,   33,   33, 8217, 8224,   33, 8217, 8224,   33,
-     8217,   33, 8217, 8224,   33,   33,   31, 8224,   33,   31,
-     8224,   33,   31,   33,   31, 8224, 8216, 8224,   33, 8216,
-     8224,   33, 8216,   33, 8216, 8224,   33, 8219, 8224,   33,
-     8219, 8224,   33, 8219,   33, 8219, 8224,   33,   33,   10,
-       33,   33,   28, 8224,   33,   28, 8224,   33,   28, 8224,
-       28,   33,   28,   33,    3,   33,   33,   33,   33,   33,
-       33,   33,   30, 8224,   33,   30, 8224,   33,   30,   33,
-
-       30, 8224,   33,   33,   29, 8224,   33,16416, 8217, 8224,
-       33, 8217, 8224,   33, 8217, 8224, 8217,   33, 8217,   33,
-       33,   31, 8224,   33,   31, 8224,   33,   31, 8224,   31,
-       33,   31, 8216, 8224,   33, 8216, 8224,   33, 8216, 8224,
-     8216,   33, 8216,   33, 8219, 8224,   33, 8219, 8224,   33,
-     8219, 8224, 8219,   33, 8219,   33,   33,   10,   23,   10,
-        7,   33,   33,   33,   33,   33,   33,   33,   13,   33,
-       30, 8224,   33,   30, 8224,   33,   30, 8224,   30,   33,
-       30,    2,   33,   29, 8224,   33,   29, 8224,   33,   29,
-       33,   29, 8224,   16,   33,   33,   11,   33,   22,   10,
-
-       10,   23,    7,   23,    7,   33,    8,   33,   33,   33,
-       33,    6,   33,   13,   33,    2,   23,    2,   33,   29,
-     8224,   33,   29, 8224,   33,   29, 8224,   29,   33,   29,
-       16,   33,   33,   11,   23,   11,   26, 8224,   33,16416,
-       22,   23,   22,    7,    7,   23,   33,    8,   23,    8,
-       33,   33,   33,   33,    6,   23,    6,    6,   23,    6,
-       23,   33,    2,    2,   23,   33,   33,   11,   11,   23,
-       26, 8224,   33,   26, 8224,   33,   26,   33,   26, 8224,
-       22,   23,   33,    8,    8,   23,   33,   33,   17,   18,
-        6,    6,   23,    6,    6,   33,   33,   14,   33,   26,
-
-     8224,   33,   26, 8224,   33,   26, 8224,   26,   33,   26,
-       33,   33,   33,   17,   23,   17,   18,   23,   18,    6,
-        6,   33,   33,   14,   33,   20,    9,   19,   17,   17,
-       23,   18,   18,   23,    6,    5,    6,   33,   21,   20,
-       23,   20,    9,   23,    9,   19,   23,   19,    4,    6,
-        5,    6,   33,   21,   23,   21,   20,   20,   23,    9,
-        9,   23,   19,   19,   23,    4,    6,   12,   33,   21,
-       21,   23,   12,   33
+       36,   36,   38,   34,   35,   37, 8193,   34,   35,   37,
+    16385, 8193,   34,   37,16385,   34,   35,   37,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   34,   35,
+       37,   34,   35,   37,   34,   35,   37,   34,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   36,   37,
+       37,   34,   34, 8193,   34, 8193,   34,16385, 8193,   34,
+     8193,   34,   34, 8225,   34,16417,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       36, 8193,   34, 8193,   34, 8193, 8225,   34, 8225,   34,
+     8225,   24,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34, 8225,   34, 8225,
+       34, 8225,   24,   34,   34,   29, 8225,   34,16417,   34,
+       34,   16,   34,   34,   34,   34,   34,   34,   34,   34,
+       34, 8218, 8225,   34,16410,16417,   34,   34,   32, 8225,
+       34,16417,   34, 8217, 8225,   34,16409,16417,   34,   34,
+       34, 8220, 8225,   34,16412,16417,   34,   34,   34,   34,
+
+       34,   29, 8225,   34,   29, 8225,   34,   29,   34,   29,
+     8225,   34,    3,   34,   16,   34,   34,   34,   34,   34,
+       31, 8225,   34,16417,   34,   34,   34, 8218, 8225,   34,
+     8218, 8225,   34, 8218,   34, 8218, 8225,   34,   34,   32,
+     8225,   34,   32, 8225,   34,   32,   34,   32, 8225, 8217,
+     8225,   34, 8217, 8225,   34, 8217,   34, 8217, 8225,   34,
+       34, 8220, 8225,   34, 8220, 8225,   34, 8220,   34, 8220,
+     8225,   34,   34,   11,   34,   34,   29, 8225,   34,   29,
+     8225,   34,   29, 8225,   29,   34,   29,   34,    3,   34,
+       34,   34,   34,   34,   34,   34,   31, 8225,   34,   31,
+
+     8225,   34,   31,   34,   31, 8225,   34,   34,   30, 8225,
+       34,16417, 8218, 8225,   34, 8218, 8225,   34, 8218, 8225,
+     8218,   34, 8218,   34,   34,   32, 8225,   34,   32, 8225,
+       34,   32, 8225,   32,   34,   32, 8217, 8225,   34, 8217,
+     8225,   34, 8217, 8225, 8217,   34, 8217,   34,   34, 8220,
+     8225,   34, 8220, 8225,   34, 8220, 8225, 8220,   34, 8220,
+       34,   34,   11,   24,   11,    7,   34,   34,   34,   34,
+       34,   34,   34,   14,   34,   31, 8225,   34,   31, 8225,
+       34,   31, 8225,   31,   34,   31,    2,   34,   30, 8225,
+       34,   30, 8225,   34,   30,   34,   30, 8225,   17,   34,
+
+       34,   12,   34,   34,   23,   11,   11,   24,    7,   24,
+        7,   34,    8,   34,   34,   34,   34,    6,   34,   14,
+       34,    2,   24,    2,   34,   30, 8225,   34,   30, 8225,
+       34,   30, 8225,   30,   34,   30,   17,   34,   34,   12,
+       24,   12,   34,   27, 8225,   34,16417,   23,   24,   23,
+        7,    7,   24,   34,    8,   24,    8,   34,   34,   34,
+       34,    6,   24,    6,    6,   24,    6,   24,   34,    2,
+        2,   24,   34,   34,   12,   12,   24,   34,   27, 8225,
+       34,   27, 8225,   34,   27,   34,   27, 8225,   23,   24,
+       34,    8,    8,   24,   34,   34,   18,   19,    6,    6,
+
+       24,    6,    6,   34,   34,   15,   34,   34,   27, 8225,
+       34,   27, 8225,   34,   27, 8225,   27,   34,   27,   34,
+       34,   34,   18,   24,   18,   19,   24,   19,    6,    6,
+       34,   34,   15,   34,   34,   21,    9,   20,   18,   18,
+       24,   19,   19,   24,    6,    5,    6,   34,   22,   34,
+       21,   24,   21,    9,   24,    9,   20,   24,   20,    4,
+        6,    5,    6,   34,   22,   24,   22,   34,   21,   21,
+       24,    9,    9,   24,   20,   20,   24,    4,    6,   13,
+       34,   22,   22,   24,   10,   13,   34,   10,   24,   10,
+       10,   10,   24
+
     } ;
 
-static const flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[373] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -633,39 +636,41 @@ static const flex_int16_t yy_accept[356] =
        74,   76,   79,   81,   82,   83,   84,   87,   87,   88,
        89,   90,   91,   92,   93,   94,   95,   96,   97,   98,
        99,  100,  101,  102,  103,  104,  105,  106,  107,  108,
-      109,  110,  111,  113,  115,  116,  118,  120,  121,  122,
+      109,  110,  111,  112,  114,  116,  117,  119,  121,  122,
       123,  124,  125,  126,  127,  128,  129,  130,  131,  132,
       133,  134,  135,  136,  137,  138,  139,  140,  141,  142,
-      143,  144,  145,  146,  148,  150,  151,  152,  153,  154,
-
-      158,  159,  160,  162,  163,  164,  165,  166,  167,  168,
-      169,  170,  175,  176,  177,  181,  182,  187,  188,  189,
-      194,  195,  196,  197,  198,  199,  202,  205,  207,  209,
-      210,  212,  214,  215,  216,  217,  218,  222,  223,  224,
-      225,  228,  231,  233,  235,  236,  237,  240,  243,  245,
-      247,  250,  253,  255,  257,  258,  261,  264,  266,  268,
-      269,  270,  271,  272,  273,  276,  279,  281,  283,  284,
-      285,  287,  288,  289,  290,  291,  292,  293,  296,  299,
-      301,  303,  304,  305,  309,  312,  315,  317,  319,  320,
-      321,  322,  325,  328,  330,  332,  333,  336,  339,  341,
-
-      343,  344,  345,  348,  351,  353,  355,  356,  357,  358,
-      360,  361,  362,  363,  364,  365,  366,  367,  368,  369,
-      371,  374,  377,  379,  381,  382,  383,  384,  387,  390,
-      392,  394,  396,  397,  398,  399,  400,  401,  403,  405,
-      406,  407,  408,  409,  410,  411,  412,  413,  414,  416,
-      418,  419,  420,  423,  426,  428,  430,  431,  433,  434,
-      436,  437,  441,  443,  444,  445,  447,  448,  450,  451,
-      452,  453,  454,  455,  457,  458,  460,  462,  463,  464,
-      466,  467,  468,  469,  471,  474,  477,  479,  481,  483,
-      484,  485,  487,  488,  489,  490,  491,  492,  494,  495,
-
-      496,  497,  498,  500,  503,  506,  508,  510,  511,  512,
-      513,  514,  516,  517,  519,  520,  521,  522,  523,  524,
-      526,  527,  528,  529,  530,  532,  533,  535,  536,  538,
-      539,  540,  542,  543,  545,  546,  548,  549,  551,  553,
-      554,  556,  557,  558,  560,  561,  563,  564,  566,  568,
-      570,  571,  573,  575,  575
+      143,  144,  145,  146,  147,  148,  150,  152,  153,  154,
+
+      155,  156,  160,  161,  162,  164,  165,  166,  167,  168,
+      169,  170,  171,  172,  177,  178,  179,  183,  184,  189,
+      190,  191,  192,  197,  198,  199,  200,  201,  202,  205,
+      208,  210,  212,  213,  215,  217,  218,  219,  220,  221,
+      225,  226,  227,  228,  231,  234,  236,  238,  239,  240,
+      243,  246,  248,  250,  253,  256,  258,  260,  261,  262,
+      265,  268,  270,  272,  273,  274,  275,  276,  277,  280,
+      283,  285,  287,  288,  289,  291,  292,  293,  294,  295,
+      296,  297,  300,  303,  305,  307,  308,  309,  313,  316,
+      319,  321,  323,  324,  325,  326,  329,  332,  334,  336,
+
+      337,  340,  343,  345,  347,  348,  349,  350,  353,  356,
+      358,  360,  361,  362,  363,  365,  366,  367,  368,  369,
+      370,  371,  372,  373,  374,  376,  379,  382,  384,  386,
+      387,  388,  389,  392,  395,  397,  399,  401,  402,  403,
+      404,  405,  406,  407,  409,  411,  412,  413,  414,  415,
+      416,  417,  418,  419,  420,  422,  424,  425,  426,  429,
+      432,  434,  436,  437,  439,  440,  442,  443,  444,  448,
+      450,  451,  452,  454,  455,  457,  458,  459,  460,  461,
+      462,  464,  465,  467,  469,  470,  471,  473,  474,  475,
+      476,  478,  479,  482,  485,  487,  489,  491,  492,  493,
+
+      495,  496,  497,  498,  499,  500,  502,  503,  504,  505,
+      506,  508,  509,  512,  515,  517,  519,  520,  521,  522,
+      523,  525,  526,  528,  529,  530,  531,  532,  533,  535,
+      536,  537,  538,  539,  540,  542,  543,  545,  546,  548,
+      549,  550,  551,  553,  554,  556,  557,  559,  560,  562,
+      564,  565,  567,  568,  569,  570,  572,  573,  575,  576,
+      578,  580,  582,  583,  585,  586,  588,  590,  591,  592,
+      594,  594
     } ;
 
 static const YY_CHAR yy_ec[256] =
@@ -708,216 +713,224 @@ static const YY_CHAR yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static const flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[443] =
     {   0,
-        0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
-       45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
-       69,   70,  889,   71,  888,   75,    0,  905,  893,  905,
-       91,   94,    0,    0,  103,  886,  112,    0,   89,   98,
-      113,   92,  114,   99,  100,   48,  121,  116,  119,   74,
-      124,  129,  123,  135,  132,  133,  137,  134,  138,  139,
-      141,    0,  155,    0,    0,  164,    0,    0,  849,  142,
-      152,  164,  140,  161,  165,  166,  167,  168,  169,  173,
-      174,  178,  176,  180,  184,  208,  189,  183,  192,  195,
-      215,  191,  193,  223,    0,    0,  905,  208,  204,  236,
-
-      219,  209,  238,  196,  237,  831,  242,  815,  241,  224,
-      243,  261,  244,  259,  277,  266,  286,  250,  288,  298,
-      249,  283,  274,  282,  294,  308,    0,  310,    0,  295,
-      305,  905,  308,  306,  313,  314,  342,  319,  316,  320,
-      331,    0,  349,    0,  342,  344,  356,    0,  358,    0,
-      365,    0,  367,    0,  354,  375,    0,  377,    0,  363,
-      356,  809,  327,  322,  384,    0,    0,    0,    0,  379,
-      905,  382,  384,  386,  390,  372,  392,  403,    0,  410,
-        0,  407,  413,  423,  426,    0,    0,    0,    0,  409,
-      424,  435,    0,    0,    0,    0,  437,    0,    0,    0,
-
-        0,  433,  444,    0,    0,    0,    0,  391,  440,  781,
-      905,  769,  439,  445,  444,  447,  449,  454,  453,  399,
-      464,    0,    0,    0,    0,  757,  465,  476,    0,  478,
-        0,  479,  476,  753,  462,  490,  749,  905,  745,  905,
-      483,  737,  424,  485,  487,  490,  500,  493,  905,  729,
-      905,  502,  518,    0,    0,    0,    0,  905,  498,  721,
-      905,  527,  713,    0,  705,  905,  495,  697,  905,  365,
-      521,  528,  530,  685,  905,  534,  540,  540,  657,  905,
-      537,  542,  650,  905,  553,    0,  557,    0,    0,  551,
-      641,  905,  558,  557,  633,  614,  613,  905,  547,  555,
-
-      563,  565,  569,  584,    0,    0,    0,    0,  583,  570,
-      585,  612,  905,  601,  905,  522,  580,  589,  594,  905,
-      600,  585,  563,  520,  905,  514,  905,  586,  486,  597,
-      480,  441,  905,  416,  905,  345,  905,  334,  905,  601,
-      254,  905,  242,  905,  200,  905,  151,  905,  905,  607,
-       86,  905,  905,  905,  620,  624,  627,  631,  635,  639,
-      643,  647,  651,  655,  659,  663,  667,  671,  675,  679,
-      683,  687,  691,  695,  699,  703,  707,  711,  715,  719,
-      723,  727,  731,  735,  739,  743,  747,  751,  755,  759,
-      763,  767,  771,  775,  779,  783,  787,  791,  795,  799,
-
-      803,  807,  811,  815,  819,  823,  827,  831,  835,  839,
-      843,  847,  851,  855,  859,  863,  867,  871,  875,  879,
-      883,  887,  891
+        0,    0,  936,  935,  937,  932,   33,   36,  940,  940,
+       45,   63,   31,   42,   51,   52,  925,   33,   65,   67,
+       69,   70,  924,   71,  923,   75,    0,  940,  928,  940,
+       91,   95,    0,    0,  104,  921,  113,    0,   91,   99,
+      114,   92,  115,   80,  100,   48,  119,  121,  122,   74,
+      123,  128,  131,  129,  125,  133,  135,  136,  137,  143,
+      138,  145,    0,  157,    0,    0,  168,    0,    0,  926,
+      140,  146,  165,  159,  152,  164,  155,  168,  171,  176,
+      177,  170,  180,  175,  184,  188,  212,  191,  185,  192,
+      193,  194,  219,  212,  199,  230,    0,    0,  940,  195,
+
+      200,  239,  235,  197,  246,  225,  226,  919,  244,  918,
+      243,  236,  245,  266,  248,  264,  282,  271,  291,  248,
+      270,  254,  300,  279,  296,  302,  288,  303,  311,    0,
+      315,    0,  311,  318,  940,  313,  319,  208,  313,  344,
+      321,  331,  325,  333,    0,  352,    0,  345,  347,  359,
+        0,  361,    0,  368,    0,  370,    0,  322,  366,  379,
+        0,  381,    0,  359,  357,  923,  382,  384,  392,    0,
+        0,    0,    0,  387,  940,  386,  390,  392,  329,  401,
+      397,  409,    0,  417,    0,  399,  412,  426,  429,    0,
+        0,    0,    0,  412,  427,  438,    0,    0,    0,    0,
+
+      440,    0,    0,    0,    0,  436,  405,  447,    0,    0,
+        0,    0,  438,  443,  922,  940,  921,  442,  450,  449,
+      452,  454,  459,  458,  453,  469,    0,    0,    0,    0,
+      920,  470,  481,    0,  483,    0,  484,  481,  919,  368,
+      467,  495,  918,  940,  917,  940,  488,  916,  479,  490,
+      492,  495,  505,  498,  940,  915,  940,  507,  523,    0,
+        0,    0,    0,  940,  503,  864,  940,  846,  532,  836,
+        0,  824,  940,  516,  796,  940,  513,  530,  536,  538,
+      784,  940,  542,  535,  547,  772,  940,  549,  551,  768,
+      940,  502,  562,    0,  564,    0,    0,  562,  764,  940,
+
+      544,  557,  760,  752,  744,  940,  552,  568,  571,  568,
+      581,  577,  588,    0,    0,    0,    0,  589,  580,  591,
+      736,  940,  728,  940,  601,  602,  597,  599,  940,  603,
+      720,  712,  700,  672,  940,  665,  940,  610,  656,  603,
+      648,  607,  629,  940,  627,  940,  625,  940,  624,  940,
+      607,  574,  940,  614,  572,  940,  491,  940,  433,  940,
+      940,  622,  389,  940,  303,  940,  261,  940,  204,  940,
+      940,  635,  639,  642,  646,  650,  654,  658,  662,  666,
+      670,  674,  678,  682,  686,  690,  694,  698,  702,  706,
+      710,  714,  718,  722,  726,  730,  734,  738,  742,  746,
+
+      750,  754,  758,  762,  766,  770,  774,  778,  782,  786,
+      790,  794,  798,  802,  806,  810,  814,  818,  822,  826,
+      830,  834,  838,  842,  846,  850,  854,  858,  862,  866,
+      870,  874,  878,  882,  886,  890,  894,  898,  902,  906,
+      910,  914
     } ;
 
-static const flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[443] =
     {   0,
-      354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
-      358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  359,  354,  356,  354,
-      360,  357,  361,  361,  362,   12,  356,  363,   12,   12,
-       12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
+      371,    1,  372,  372,  371,  373,  374,  374,  371,  371,
+      375,  375,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  376,  371,  373,  371,
+      377,  374,  378,  378,  379,   12,  373,  380,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  359,  360,  361,  361,  364,  365,  365,  354,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  362,   12,   12,   12,   12,
-       12,   12,   12,  364,  365,  365,  354,   12,   12,  366,
-
+       12,   12,  376,  377,  378,  378,  381,  382,  382,  371,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  367,   86,   86,  368,   12,  369,   12,   12,  370,
-       12,   12,   12,   12,   12,  371,  372,  366,  372,   12,
-       12,  354,   86,   12,   12,   12,  373,   12,   12,   12,
-      374,  375,  367,  375,   86,   86,  376,  377,  368,  377,
-      378,  379,  369,  379,   12,  380,  381,  370,  381,   12,
-       12,  382,   12,   12,  371,  372,  372,  383,  383,   12,
-      354,   86,   86,   86,   12,   12,   12,  384,  385,  373,
-      385,   12,   12,  386,  374,  375,  375,  387,  387,   86,
-       86,  376,  377,  377,  388,  388,  378,  379,  379,  389,
-
-      389,   12,  380,  381,  381,  390,  390,   12,   12,  391,
-      354,  392,   86,   12,   86,   86,   86,   12,   86,   12,
-      384,  385,  385,  393,  393,  394,   86,  395,  396,  386,
-      396,   86,   86,  397,   12,  398,  391,  354,  399,  354,
-       86,  400,   12,   86,   86,   86,  401,   86,  354,  402,
-      354,   86,  395,  396,  396,  403,  403,  354,   86,  404,
-      354,  405,  406,  406,  399,  354,   86,  407,  354,   12,
-       86,   86,   86,  408,  354,  408,  408,   86,  402,  354,
-       86,   86,  404,  354,  409,  410,  405,  410,  406,   86,
-      407,  354,   12,   86,  411,  412,  408,  354,  408,  408,
-
-       86,   86,   86,  409,  410,  410,  413,  413,   86,   12,
-       86,  414,  354,  415,  354,  408,  408,   86,   86,  354,
-      416,  417,  418,  414,  354,  415,  354,  408,  408,   86,
-      419,  420,  354,  421,  354,  422,  354,  408,  354,   86,
-      423,  354,  420,  354,  421,  354,  422,  354,  354,   86,
-      423,  354,  354,    0,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354
+       12,   12,   12,   12,   12,   12,  379,   12,   12,   12,
+       12,   12,   12,   12,   12,  381,  382,  382,  371,   12,
+
+       12,  383,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,  384,   87,   87,  385,   12,  386,   12,
+       12,   12,  387,   12,   12,   12,   12,   12,  388,  389,
+      383,  389,   12,   12,  371,   87,   12,   12,   12,  390,
+       12,   12,   12,  391,  392,  384,  392,   87,   87,  393,
+      394,  385,  394,  395,  396,  386,  396,   12,   12,  397,
+      398,  387,  398,   12,   12,  399,   12,   12,  388,  389,
+      389,  400,  400,   12,  371,   87,   87,   87,   12,   12,
+       12,  401,  402,  390,  402,   12,   12,  403,  391,  392,
+      392,  404,  404,   87,   87,  393,  394,  394,  405,  405,
+
+      395,  396,  396,  406,  406,   12,   12,  397,  398,  398,
+      407,  407,   12,   12,  408,  371,  409,   87,   12,   87,
+       87,   87,   12,   87,   12,  401,  402,  402,  410,  410,
+      411,   87,  412,  413,  403,  413,   87,   87,  414,   12,
+       12,  415,  408,  371,  416,  371,   87,  417,   12,   87,
+       87,   87,  418,   87,  371,  419,  371,   87,  412,  413,
+      413,  420,  420,  371,   87,  421,  371,   12,  422,  423,
+      423,  416,  371,   87,  424,  371,   12,   87,   87,   87,
+      425,  371,  425,  425,   87,  419,  371,   87,   87,  421,
+      371,   12,  426,  427,  422,  427,  423,   87,  424,  371,
+
+       12,   87,  428,  429,  425,  371,  425,  425,   87,   87,
+       87,   12,  426,  427,  427,  430,  430,   87,   12,   87,
+      431,  371,  432,  371,  425,  425,   87,   87,  371,   12,
+      433,  434,  435,  431,  371,  432,  371,  425,  425,   87,
+      436,   12,  437,  371,  438,  371,  439,  371,  425,  371,
+       87,  440,  371,   12,  437,  371,  438,  371,  439,  371,
+      371,   87,  440,  371,  441,  371,  442,  371,  442,  371,
+        0,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371
     } ;
 
-static const flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[975] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
        17,   17,   20,   17,   21,   22,   23,   24,   25,   17,
        26,   17,   17,   17,   32,   32,   33,   32,   32,   33,
        36,   34,   36,   42,   34,   29,   29,   29,   30,   35,
-       50,   36,   37,   38,   43,   44,   39,   36,   79,   45,
+       50,   36,   37,   38,   43,   44,   39,   36,   80,   45,
        36,   36,   40,   29,   29,   29,   30,   35,   46,   48,
        37,   38,   41,   47,   36,   49,   36,   53,   36,   36,
-       36,   56,   58,   36,   36,   55,   82,   60,   51,  342,
-       54,   61,   52,   29,   64,   32,   32,   33,   36,   65,
-
-       70,   36,   34,   29,   29,   29,   30,   36,   36,   36,
-       29,   38,   66,   66,   66,   67,   66,   71,   74,   66,
-       68,   72,   36,   36,   73,   36,   77,   78,   36,   76,
-       36,   53,   36,   36,   75,   85,   80,   83,   36,   86,
-       84,   36,   36,   36,   36,   81,   36,   36,   36,   36,
-       36,   36,   93,   89,  337,   98,   88,   29,   64,  101,
-       90,   36,   91,   65,   92,   87,   29,   95,   89,   99,
-       36,  100,   96,   36,   36,   36,   36,   36,   36,  106,
-      105,   85,   36,   36,  102,   36,  107,   36,  103,   36,
-      109,  112,   36,   36,  104,  108,  115,  110,   36,  117,
-
-       36,   36,   36,  335,   36,   36,  122,  111,   29,   29,
-       29,   30,  118,   36,  116,   29,   38,   36,   36,  113,
-      114,  119,  120,  123,   36,   29,   95,  121,   36,  134,
-      131,   96,  130,   36,  125,  124,  126,  126,   66,  127,
-      126,  132,  133,  126,  129,  333,   36,   36,  135,  137,
-       36,   36,   36,  140,  139,   35,   35,  352,   36,   36,
-       85,  141,  141,   66,  142,  141,  160,  145,  141,  144,
-       35,   35,   89,  117,  155,   36,  146,  147,  147,   66,
-      148,  147,  162,   36,  147,  150,  151,  151,   66,  152,
-      151,   36,   36,  151,  154,  120,  161,   36,  156,  156,
-
-       66,  157,  156,   36,   36,  156,  159,  164,  171,  163,
-       29,  166,   29,  168,   36,   36,  167,  170,  169,   35,
-       35,  172,   36,   36,  173,   36,  213,  184,   36,   36,
-      175,   36,  174,   29,  186,  212,   36,  349,  183,  187,
-      177,  176,  178,  178,   66,  179,  178,  182,  348,  178,
-      181,   29,  188,   35,   35,   35,   35,  189,   29,  193,
-       29,  195,  190,   36,  194,   36,  196,   29,  198,   29,
-      200,  191,   36,  199,   36,  201,  219,   29,  204,   29,
-      206,   36,  202,  205,  209,  207,   29,  166,   36,  293,
-      208,  214,  167,   35,   35,   35,   35,   35,   35,   36,
-
-       36,   36,  249,  218,  220,   29,  222,  216,   36,  217,
-      235,  223,   29,  224,  215,  226,   36,  227,  225,  346,
-       35,   35,   36,  228,  228,   66,  229,  228,   29,  186,
-      228,  231,  232,   36,  187,  233,   35,   29,  193,   29,
-      198,  234,   36,  194,  344,  199,   29,  204,  236,   36,
-       35,  241,  205,  242,   36,   35,   35,  270,   35,   35,
-       35,   35,  247,   36,   35,   35,   29,  222,  244,  262,
-      248,   36,  223,  243,  245,  246,   35,  252,   29,  254,
-       29,  256,  258,  342,  255,  259,  257,   35,   35,  339,
-       35,   35,   69,  264,   35,   35,   35,   35,   35,   35,
-
-      267,   35,   35,  275,   35,   35,   35,   35,  271,   35,
-       35,  276,  277,   35,   35,  272,  278,  315,  273,  281,
-       29,  254,  290,  313,  282,  275,  255,  285,  285,   66,
-      286,  285,   35,   35,  285,  288,  295,  298,  296,   35,
-       35,   35,   35,  298,  301,  328,  299,  294,   35,   35,
-      275,   35,   35,   35,  303,   29,  305,  300,  275,   29,
-      307,  306,   35,   35,  302,  308,  337,   36,   35,   35,
-      309,  310,  320,  316,   35,   35,   35,   35,  322,   36,
-       35,   35,  317,  275,  319,  311,   29,  305,  335,  275,
-      318,  321,  306,  323,   35,   35,   35,   35,  330,  329,
-
-       35,   35,  331,  333,  327,   35,   35,  338,   35,   35,
-      353,  340,   35,   35,  350,  325,  275,  315,   35,   35,
-       27,   27,   27,   27,   29,   29,   29,   31,   31,   31,
-       31,   36,   36,   36,   36,   62,  313,   62,   62,   63,
-       63,   63,   63,   65,  269,   65,   65,   35,   35,   35,
-       35,   69,   69,  261,   69,   94,   94,   94,   94,   96,
-      251,   96,   96,  128,  128,  128,  128,  143,  143,  143,
-      143,  149,  149,  149,  149,  153,  153,  153,  153,  158,
-      158,  158,  158,  165,  165,  165,  165,  167,  298,  167,
-      167,  180,  180,  180,  180,  185,  185,  185,  185,  187,
-
-      292,  187,  187,  192,  192,  192,  192,  194,  240,  194,
-      194,  197,  197,  197,  197,  199,  289,  199,  199,  203,
-      203,  203,  203,  205,  284,  205,  205,  210,  210,  210,
-      210,  169,  280,  169,  169,  221,  221,  221,  221,  223,
-      269,  223,  223,  230,  230,  230,  230,  189,  266,  189,
-      189,  196,  211,  196,  196,  201,  261,  201,  201,  207,
-      251,  207,  207,  237,  237,  237,  237,  239,  239,  239,
-      239,  225,  240,  225,  225,  250,  250,  250,  250,  253,
-      253,  253,  253,  255,  238,  255,  255,  260,  260,  260,
-      260,  263,  263,  263,  263,  265,  265,  265,  265,  268,
-
-      268,  268,  268,  274,  274,  274,  274,  279,  279,  279,
-      279,  257,  211,  257,  257,  283,  283,  283,  283,  287,
-      287,  287,  287,  264,  138,  264,  264,  291,  291,  291,
-      291,  297,  297,  297,  297,  304,  304,  304,  304,  306,
-      136,  306,  306,  312,  312,  312,  312,  314,  314,  314,
-      314,  308,   97,  308,  308,  324,  324,  324,  324,  326,
-      326,  326,  326,  332,  332,  332,  332,  334,  334,  334,
-      334,  336,  336,  336,  336,  341,  341,  341,  341,  343,
-      343,  343,  343,  345,  345,  345,  345,  347,  347,  347,
-      347,  351,  351,  351,  351,   36,   30,   59,   57,   36,
-
-       30,  354,   28,   28,    5,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       36,   56,   58,   36,   36,   55,   83,   61,   51,   36,
+       54,   62,   52,   29,   65,   59,   32,   32,   33,   66,
+
+       36,   36,   71,   34,   29,   29,   29,   30,   36,   36,
+       77,   29,   38,   67,   67,   67,   68,   67,   75,   72,
+       67,   69,   73,   36,   36,   74,   78,   79,   36,   53,
+       36,   36,   36,   87,   36,   76,   84,   36,   36,   85,
+       36,   81,   36,   86,   36,   36,   36,   36,   82,   36,
+       92,   95,   36,  100,   36,   36,   89,   90,   88,   29,
+       65,   36,   91,  101,   36,   66,   90,   93,   36,   94,
+       29,   97,  102,   36,   36,  104,   98,   36,  103,   36,
+       36,  107,  108,  106,   36,   36,   36,  105,   86,   36,
+      109,  110,  111,   36,   36,  114,  112,   36,  117,  119,
+
+       36,   36,   36,   36,   36,  121,   36,  368,   36,   36,
+      120,  113,   29,   29,   29,   30,  118,   36,  134,   29,
+       38,   36,  127,  115,  116,  122,  123,  125,   36,  126,
+      128,  124,   29,   97,   36,   36,  180,  138,   98,  129,
+      129,   67,  130,  129,   36,   36,  129,  132,  133,  135,
+      136,  140,   36,   36,   36,   36,  142,   36,  137,   35,
+       35,  123,   86,   36,  370,  143,  144,  144,   67,  145,
+      144,  148,  158,  144,  147,   35,   35,   90,  119,   36,
+       36,  149,  150,  150,   67,  151,  150,  159,   36,  150,
+      153,  154,  154,   67,  155,  154,  164,   36,  154,  157,
+
+      160,  160,   67,  161,  160,   36,  368,  160,  163,  165,
+      166,   36,   36,   29,  170,  167,  168,   29,  172,  171,
+       36,  175,   36,  173,   35,   35,  176,   36,   36,  177,
+       36,   36,  188,  174,   36,   29,  190,  178,   36,  181,
+       36,  191,  223,  179,  182,  182,   67,  183,  182,  186,
+      206,  182,  185,  187,   29,  192,   35,   35,   35,   35,
+      193,   29,  197,   29,  199,  194,   36,  198,   36,  200,
+       29,  202,   29,  204,  195,   36,  203,   36,  205,  268,
+      207,   29,  209,   29,  211,  214,  213,  210,  218,  212,
+      217,   36,  353,   36,   29,  170,   36,   35,   35,  219,
+
+      171,   35,   35,   35,   35,  224,   36,  231,   36,  225,
+       36,   29,  227,  221,   36,  222,  232,  228,  220,   29,
+      229,   36,  240,   35,   35,  230,  233,  233,   67,  234,
+      233,   29,  190,  233,  236,  237,  348,  191,  238,   35,
+       29,  197,   29,  202,  239,   36,  198,   36,  203,   29,
+      209,  242,   36,   35,  247,  210,  255,  241,  248,   36,
+       35,   35,   36,   35,   35,   35,   35,  253,   36,   35,
+       35,   29,  227,  250,  269,  254,   36,  228,  249,  251,
+      252,   35,  258,   29,  260,   29,  262,  264,   36,  261,
+      265,  263,   35,   35,  346,   35,   35,   70,  271,   35,
+
+       35,   35,   35,   35,   35,  274,   35,   35,  282,   35,
+       35,   36,  277,  278,   35,   35,  283,  284,   35,   35,
+      279,  285,   36,  280,  288,   29,  260,   35,   35,  289,
+      312,  261,  293,  293,   67,  294,  293,  301,  306,  293,
+      296,   35,   35,  298,  303,  306,  304,   35,   35,   35,
+       35,  309,  308,   36,  307,  282,  302,  319,   35,   35,
+       35,   35,   35,  311,   29,  314,   29,  316,   35,   35,
+      315,  282,  317,   35,   35,  344,  310,  364,  325,   35,
+       35,  318,   35,   35,  329,  320,   36,  328,  332,   36,
+       29,  314,   35,   35,  330,  326,  315,  331,  327,  333,
+
+       35,   35,   35,   35,  282,  282,  340,  341,   35,   35,
+       35,   35,   36,  282,   35,   35,   36,  351,   35,   35,
+      362,  339,  365,   36,  338,  366,  342,  361,  360,  354,
+      358,  349,  356,   35,   35,   27,   27,   27,   27,   29,
+       29,   29,   31,   31,   31,   31,   36,   36,   36,   36,
+       63,  353,   63,   63,   64,   64,   64,   64,   66,  350,
+       66,   66,   35,   35,   35,   35,   70,   70,  324,   70,
+       96,   96,   96,   96,   98,  322,   98,   98,  131,  131,
+      131,  131,  146,  146,  146,  146,  152,  152,  152,  152,
+      156,  156,  156,  156,  162,  162,  162,  162,  169,  169,
+
+      169,  169,  171,  348,  171,  171,  184,  184,  184,  184,
+      189,  189,  189,  189,  191,  346,  191,  191,  196,  196,
+      196,  196,  198,  344,  198,  198,  201,  201,  201,  201,
+      203,  337,  203,  203,  208,  208,  208,  208,  210,  335,
+      210,  210,  215,  215,  215,  215,  173,  282,  173,  173,
+      226,  226,  226,  226,  228,  324,  228,  228,  235,  235,
+      235,  235,  193,  322,  193,  193,  200,  276,  200,  200,
+      205,  267,  205,  205,  212,  257,  212,  212,  243,  243,
+      243,  243,  245,  245,  245,  245,  230,  306,  230,  230,
+      256,  256,  256,  256,  259,  259,  259,  259,  261,  300,
+
+      261,  261,  266,  266,  266,  266,  270,  270,  270,  270,
+      272,  272,  272,  272,  275,  275,  275,  275,  281,  281,
+      281,  281,  286,  286,  286,  286,  263,  246,  263,  263,
+      290,  290,  290,  290,  295,  295,  295,  295,  271,  297,
+      271,  271,  299,  299,  299,  299,  305,  305,  305,  305,
+      313,  313,  313,  313,  315,  292,  315,  315,  321,  321,
+      321,  321,  323,  323,  323,  323,  317,  291,  317,  317,
+      334,  334,  334,  334,  336,  336,  336,  336,  343,  343,
+      343,  343,  345,  345,  345,  345,  347,  347,  347,  347,
+      352,  352,  352,  352,  355,  355,  355,  355,  357,  357,
+
+      357,  357,  359,  359,  359,  359,  363,  363,  363,  363,
+      367,  367,  367,  367,  369,  369,  369,  369,  287,  276,
+      273,  216,  267,  257,  246,  244,  216,  141,  139,   99,
+       36,   30,   60,   57,   36,   30,  371,   28,   28,    5,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
-static const flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[975] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -927,101 +940,105 @@ static const flex_int16_t yy_chk[940] =
        18,   14,   11,   11,   13,   14,   11,   46,   46,   14,
        15,   16,   11,   12,   12,   12,   12,   12,   14,   16,
        12,   12,   12,   15,   19,   16,   20,   20,   21,   22,
-       24,   22,   24,   50,   26,   21,   50,   26,   19,  351,
-       20,   26,   19,   31,   31,   32,   32,   32,   39,   31,
-
-       39,   42,   32,   35,   35,   35,   35,   40,   44,   45,
-       35,   35,   37,   37,   37,   37,   37,   39,   42,   37,
-       37,   40,   41,   43,   41,   48,   45,   45,   49,   44,
-       47,   47,   53,   51,   43,   53,   48,   51,   52,   54,
-       52,   55,   56,   58,   54,   49,   57,   59,   60,   73,
-       61,   70,   60,   61,  347,   70,   56,   63,   63,   73,
-       58,   71,   59,   63,   59,   55,   66,   66,   57,   71,
-       74,   72,   66,   72,   75,   76,   77,   78,   79,   78,
-       77,   79,   80,   81,   74,   83,   80,   82,   75,   84,
-       82,   85,   88,   85,   76,   81,   87,   83,   87,   89,
-
-       92,   89,   93,  345,   90,  104,   92,   84,   86,   86,
-       86,   86,   90,   99,   88,   86,   86,   98,  102,   86,
-       86,   91,   91,   93,   91,   94,   94,   91,  101,  104,
-      102,   94,  101,  110,   99,   98,  100,  100,  100,  100,
-      100,  103,  103,  100,  100,  343,  105,  103,  105,  107,
-      109,  107,  111,  110,  109,  113,  113,  341,  121,  118,
-      111,  112,  112,  112,  112,  112,  121,  113,  112,  112,
-      114,  114,  116,  116,  118,  116,  114,  115,  115,  115,
-      115,  115,  123,  123,  115,  115,  117,  117,  117,  117,
-      117,  124,  122,  117,  117,  119,  122,  119,  120,  120,
-
-      120,  120,  120,  125,  130,  120,  120,  125,  131,  124,
-      126,  126,  128,  128,  131,  134,  126,  130,  128,  133,
-      133,  133,  135,  136,  133,  139,  164,  140,  138,  140,
-      134,  164,  133,  141,  141,  163,  163,  338,  139,  141,
-      136,  135,  137,  137,  137,  137,  137,  138,  336,  137,
-      137,  143,  143,  145,  145,  146,  146,  143,  147,  147,
-      149,  149,  145,  155,  147,  161,  149,  151,  151,  153,
-      153,  146,  160,  151,  270,  153,  176,  156,  156,  158,
-      158,  176,  155,  156,  161,  158,  165,  165,  170,  270,
-      160,  170,  165,  172,  172,  173,  173,  174,  174,  175,
-
-      208,  177,  220,  175,  177,  178,  178,  173,  220,  174,
-      208,  178,  180,  180,  172,  182,  182,  183,  180,  334,
-      190,  190,  183,  184,  184,  184,  184,  184,  185,  185,
-      184,  184,  190,  243,  185,  191,  191,  192,  192,  197,
-      197,  202,  202,  192,  332,  197,  203,  203,  209,  209,
-      213,  213,  203,  214,  214,  215,  215,  243,  216,  216,
-      217,  217,  218,  218,  219,  219,  221,  221,  215,  235,
-      219,  235,  221,  214,  216,  217,  227,  227,  228,  228,
-      230,  230,  232,  331,  228,  233,  230,  233,  233,  329,
-      232,  232,  236,  236,  241,  241,  244,  244,  245,  245,
-
-      241,  246,  246,  247,  248,  248,  267,  267,  244,  259,
-      259,  247,  247,  252,  252,  245,  248,  326,  246,  252,
-      253,  253,  267,  324,  259,  316,  253,  262,  262,  262,
-      262,  262,  271,  271,  262,  262,  272,  276,  273,  272,
-      272,  273,  273,  277,  278,  316,  276,  271,  281,  281,
-      299,  278,  278,  282,  282,  285,  285,  277,  300,  287,
-      287,  285,  290,  290,  281,  287,  323,  293,  294,  294,
-      290,  293,  303,  299,  301,  301,  302,  302,  310,  310,
-      303,  303,  300,  317,  302,  294,  304,  304,  322,  328,
-      301,  309,  304,  311,  309,  309,  311,  311,  318,  317,
-
-      318,  318,  319,  321,  314,  319,  319,  328,  330,  330,
-      350,  330,  340,  340,  340,  312,  297,  296,  350,  350,
-      355,  355,  355,  355,  356,  356,  356,  357,  357,  357,
-      357,  358,  358,  358,  358,  359,  295,  359,  359,  360,
-      360,  360,  360,  361,  291,  361,  361,  362,  362,  362,
-      362,  363,  363,  283,  363,  364,  364,  364,  364,  365,
-      279,  365,  365,  366,  366,  366,  366,  367,  367,  367,
-      367,  368,  368,  368,  368,  369,  369,  369,  369,  370,
-      370,  370,  370,  371,  371,  371,  371,  372,  274,  372,
-      372,  373,  373,  373,  373,  374,  374,  374,  374,  375,
-
-      268,  375,  375,  376,  376,  376,  376,  377,  265,  377,
-      377,  378,  378,  378,  378,  379,  263,  379,  379,  380,
-      380,  380,  380,  381,  260,  381,  381,  382,  382,  382,
-      382,  383,  250,  383,  383,  384,  384,  384,  384,  385,
-      242,  385,  385,  386,  386,  386,  386,  387,  239,  387,
-      387,  388,  237,  388,  388,  389,  234,  389,  389,  390,
-      226,  390,  390,  391,  391,  391,  391,  392,  392,  392,
-      392,  393,  212,  393,  393,  394,  394,  394,  394,  395,
-      395,  395,  395,  396,  210,  396,  396,  397,  397,  397,
-      397,  398,  398,  398,  398,  399,  399,  399,  399,  400,
-
-      400,  400,  400,  401,  401,  401,  401,  402,  402,  402,
-      402,  403,  162,  403,  403,  404,  404,  404,  404,  405,
-      405,  405,  405,  406,  108,  406,  406,  407,  407,  407,
-      407,  408,  408,  408,  408,  409,  409,  409,  409,  410,
-      106,  410,  410,  411,  411,  411,  411,  412,  412,  412,
-      412,  413,   69,  413,  413,  414,  414,  414,  414,  415,
-      415,  415,  415,  416,  416,  416,  416,  417,  417,  417,
-      417,  418,  418,  418,  418,  419,  419,  419,  419,  420,
-      420,  420,  420,  421,  421,  421,  421,  422,  422,  422,
-      422,  423,  423,  423,  423,   36,   29,   25,   23,   17,
-
-        6,    5,    4,    3,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       24,   22,   24,   50,   26,   21,   50,   26,   19,   44,
+       20,   26,   19,   31,   31,   24,   32,   32,   32,   31,
+
+       39,   42,   39,   32,   35,   35,   35,   35,   40,   45,
+       44,   35,   35,   37,   37,   37,   37,   37,   42,   39,
+       37,   37,   40,   41,   43,   41,   45,   45,   47,   47,
+       48,   49,   51,   54,   55,   43,   51,   52,   54,   52,
+       53,   48,   56,   53,   57,   58,   59,   61,   49,   71,
+       59,   61,   60,   71,   62,   72,   56,   62,   55,   64,
+       64,   75,   58,   72,   77,   64,   57,   60,   74,   60,
+       67,   67,   73,   76,   73,   75,   67,   78,   74,   82,
+       79,   78,   79,   77,   84,   80,   81,   76,   80,   83,
+       81,   82,   83,   85,   89,   86,   84,   86,   88,   90,
+
+       88,   90,   91,   92,  100,   92,  104,  369,   95,  101,
+       91,   85,   87,   87,   87,   87,   89,  138,  104,   87,
+       87,   94,  100,   87,   87,   93,   93,   94,   93,   95,
+      101,   93,   96,   96,  106,  107,  138,  107,   96,  102,
+      102,  102,  102,  102,  103,  112,  102,  102,  103,  105,
+      105,  109,  111,  109,  113,  105,  111,  120,  106,  115,
+      115,  122,  113,  122,  367,  112,  114,  114,  114,  114,
+      114,  115,  120,  114,  114,  116,  116,  118,  118,  121,
+      118,  116,  117,  117,  117,  117,  117,  121,  124,  117,
+      117,  119,  119,  119,  119,  119,  124,  127,  119,  119,
+
+      123,  123,  123,  123,  123,  125,  365,  123,  123,  125,
+      126,  126,  128,  129,  129,  127,  128,  131,  131,  129,
+      133,  134,  139,  131,  136,  136,  136,  134,  137,  136,
+      141,  158,  143,  133,  143,  144,  144,  136,  179,  139,
+      142,  144,  179,  137,  140,  140,  140,  140,  140,  141,
+      158,  140,  140,  142,  146,  146,  148,  148,  149,  149,
+      146,  150,  150,  152,  152,  148,  165,  150,  164,  152,
+      154,  154,  156,  156,  149,  159,  154,  240,  156,  240,
+      159,  160,  160,  162,  162,  165,  164,  160,  168,  162,
+      167,  167,  363,  168,  169,  169,  174,  176,  176,  174,
+
+      169,  177,  177,  178,  178,  180,  181,  186,  186,  181,
+      180,  182,  182,  177,  207,  178,  187,  182,  176,  184,
+      184,  187,  207,  194,  194,  184,  188,  188,  188,  188,
+      188,  189,  189,  188,  188,  194,  359,  189,  195,  195,
+      196,  196,  201,  201,  206,  206,  196,  213,  201,  208,
+      208,  214,  214,  218,  218,  208,  225,  213,  219,  219,
+      220,  220,  225,  221,  221,  222,  222,  223,  223,  224,
+      224,  226,  226,  220,  241,  224,  241,  226,  219,  221,
+      222,  232,  232,  233,  233,  235,  235,  237,  249,  233,
+      238,  235,  238,  238,  357,  237,  237,  242,  242,  247,
+
+      247,  250,  250,  251,  251,  247,  252,  252,  253,  254,
+      254,  292,  249,  250,  265,  265,  253,  253,  258,  258,
+      251,  254,  277,  252,  258,  259,  259,  274,  274,  265,
+      292,  259,  269,  269,  269,  269,  269,  277,  284,  269,
+      269,  278,  278,  274,  279,  283,  280,  279,  279,  280,
+      280,  285,  284,  301,  283,  307,  278,  301,  285,  285,
+      288,  288,  289,  289,  293,  293,  295,  295,  302,  302,
+      293,  308,  295,  298,  298,  355,  288,  352,  307,  310,
+      310,  298,  309,  309,  311,  302,  312,  310,  319,  319,
+      313,  313,  311,  311,  312,  308,  313,  318,  309,  320,
+
+      318,  318,  320,  320,  325,  326,  327,  328,  327,  327,
+      328,  328,  330,  338,  340,  340,  342,  340,  351,  351,
+      351,  326,  354,  354,  325,  362,  330,  349,  347,  342,
+      345,  338,  343,  362,  362,  372,  372,  372,  372,  373,
+      373,  373,  374,  374,  374,  374,  375,  375,  375,  375,
+      376,  341,  376,  376,  377,  377,  377,  377,  378,  339,
+      378,  378,  379,  379,  379,  379,  380,  380,  336,  380,
+      381,  381,  381,  381,  382,  334,  382,  382,  383,  383,
+      383,  383,  384,  384,  384,  384,  385,  385,  385,  385,
+      386,  386,  386,  386,  387,  387,  387,  387,  388,  388,
+
+      388,  388,  389,  333,  389,  389,  390,  390,  390,  390,
+      391,  391,  391,  391,  392,  332,  392,  392,  393,  393,
+      393,  393,  394,  331,  394,  394,  395,  395,  395,  395,
+      396,  323,  396,  396,  397,  397,  397,  397,  398,  321,
+      398,  398,  399,  399,  399,  399,  400,  305,  400,  400,
+      401,  401,  401,  401,  402,  304,  402,  402,  403,  403,
+      403,  403,  404,  303,  404,  404,  405,  299,  405,  405,
+      406,  290,  406,  406,  407,  286,  407,  407,  408,  408,
+      408,  408,  409,  409,  409,  409,  410,  281,  410,  410,
+      411,  411,  411,  411,  412,  412,  412,  412,  413,  275,
+
+      413,  413,  414,  414,  414,  414,  415,  415,  415,  415,
+      416,  416,  416,  416,  417,  417,  417,  417,  418,  418,
+      418,  418,  419,  419,  419,  419,  420,  272,  420,  420,
+      421,  421,  421,  421,  422,  422,  422,  422,  423,  270,
+      423,  423,  424,  424,  424,  424,  425,  425,  425,  425,
+      426,  426,  426,  426,  427,  268,  427,  427,  428,  428,
+      428,  428,  429,  429,  429,  429,  430,  266,  430,  430,
+      431,  431,  431,  431,  432,  432,  432,  432,  433,  433,
+      433,  433,  434,  434,  434,  434,  435,  435,  435,  435,
+      436,  436,  436,  436,  437,  437,  437,  437,  438,  438,
+
+      438,  438,  439,  439,  439,  439,  440,  440,  440,  440,
+      441,  441,  441,  441,  442,  442,  442,  442,  256,  248,
+      245,  243,  239,  231,  217,  215,  166,  110,  108,   70,
+       36,   29,   25,   23,   17,    6,    5,    4,    3,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371,  371,  371,  371,  371,  371,  371,
+      371,  371,  371,  371
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -1160,9 +1177,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -1199,9 +1224,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
-#line 1202 "libxlu_disk_l.c"
+#line 1227 "libxlu_disk_l.c"
 
-#line 1204 "libxlu_disk_l.c"
+#line 1229 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1477,13 +1502,13 @@ YY_DECL
 		}
 
 	{
-#line 177 "libxlu_disk_l.l"
+#line 185 "libxlu_disk_l.l"
 
 
-#line 180 "libxlu_disk_l.l"
+#line 188 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1486 "libxlu_disk_l.c"
+#line 1511 "libxlu_disk_l.c"
 
 	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
@@ -1515,14 +1540,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 355 )
+				if ( yy_current_state >= 372 )
 					yy_c = yy_meta[yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 354 );
+		while ( yy_current_state != 371 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1572,152 +1597,158 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 186 "libxlu_disk_l.l"
+#line 194 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 188 "libxlu_disk_l.l"
+#line 196 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 190 "libxlu_disk_l.l"
+#line 198 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 199 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 193 "libxlu_disk_l.l"
+#line 201 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 194 "libxlu_disk_l.l"
+#line 202 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+#line 204 "libxlu_disk_l.l"
+{ STRIP(','); setspecification(DPC,FROMEQUALS); }
 	YY_BREAK
 case 11:
 /* rule 11 can match eol */
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+#line 206 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 12:
+/* rule 12 can match eol */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
-{ DPC->disk->direct_io_safe = 1; }
+#line 207 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 case 13:
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+#line 208 "libxlu_disk_l.l"
+{ DPC->disk->direct_io_safe = 1; }
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+#line 209 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
 	YY_BREAK
-/* Note that the COLO configuration settings should be considered unstable.
-  * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
+#line 210 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
+/* Note that the COLO configuration settings should be considered unstable.
+  * They may change incompatibly in future versions of Xen. */
 case 16:
 YY_RULE_SETUP
-#line 205 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
+#line 213 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 17:
-/* rule 17 can match eol */
 YY_RULE_SETUP
-#line 206 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
+#line 214 "libxlu_disk_l.l"
+{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
-#line 207 "libxlu_disk_l.l"
-{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
+#line 215 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
-#line 208 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
+#line 216 "libxlu_disk_l.l"
+{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
-#line 209 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+#line 217 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 210 "libxlu_disk_l.l"
+#line 218 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+	YY_BREAK
+case 22:
+/* rule 22 can match eol */
+YY_RULE_SETUP
+#line 219 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 22:
+case 23:
 YY_RULE_SETUP
-#line 214 "libxlu_disk_l.l"
+#line 223 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 23:
-/* rule 23 can match eol */
+case 24:
+/* rule 24 can match eol */
 YY_RULE_SETUP
-#line 218 "libxlu_disk_l.l"
+#line 227 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 225 "libxlu_disk_l.l"
+#line 234 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 231 "libxlu_disk_l.l"
+#line 240 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1731,65 +1762,65 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 26:
+case 27:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 253 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 27:
+case 28:
 YY_RULE_SETUP
-#line 245 "libxlu_disk_l.l"
+#line 254 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 28:
+case 29:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 246 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 29:
+case 30:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 247 "libxlu_disk_l.l"
+#line 256 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 30:
+case 31:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 248 "libxlu_disk_l.l"
+#line 257 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 31:
+case 32:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 249 "libxlu_disk_l.l"
+#line 258 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 32:
-/* rule 32 can match eol */
+case 33:
+/* rule 33 can match eol */
 YY_RULE_SETUP
-#line 251 "libxlu_disk_l.l"
+#line 260 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 33:
-/* rule 33 can match eol */
+case 34:
+/* rule 34 can match eol */
 YY_RULE_SETUP
-#line 258 "libxlu_disk_l.l"
+#line 267 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1816,27 +1847,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 34:
+case 35:
 YY_RULE_SETUP
-#line 284 "libxlu_disk_l.l"
+#line 293 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 35:
+case 36:
 YY_RULE_SETUP
-#line 288 "libxlu_disk_l.l"
+#line 297 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 36:
+case 37:
 YY_RULE_SETUP
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1839 "libxlu_disk_l.c"
+#line 1870 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -2104,7 +2135,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 355 )
+			if ( yy_current_state >= 372 )
 				yy_c = yy_meta[yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
@@ -2128,11 +2159,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 355 )
+		if ( yy_current_state >= 372 )
 			yy_c = yy_meta[yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
-	yy_is_jam = (yy_current_state == 354);
+	yy_is_jam = (yy_current_state == 371);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2941,4 +2972,4 @@ void yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
diff --git a/tools/libs/util/libxlu_disk_l.h b/tools/libs/util/libxlu_disk_l.h
index 6abeecf..509aad6 100644
--- a/tools/libs/util/libxlu_disk_l.h
+++ b/tools/libs/util/libxlu_disk_l.h
@@ -694,7 +694,7 @@ extern int yylex (yyscan_t yyscanner);
 #undef yyTABLES_NAME
 #endif
 
-#line 291 "libxlu_disk_l.l"
+#line 300 "libxlu_disk_l.l"
 
 #line 699 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff --git a/tools/libs/util/libxlu_disk_l.l b/tools/libs/util/libxlu_disk_l.l
index 3bd639a..47b8ee0 100644
--- a/tools/libs/util/libxlu_disk_l.l
+++ b/tools/libs/util/libxlu_disk_l.l
@@ -122,9 +122,17 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     if (     !strcmp(str,"phy"))   DSET(dpc,backend,BACKEND,str,PHY);
     else if (!strcmp(str,"tap"))   DSET(dpc,backend,BACKEND,str,TAP);
     else if (!strcmp(str,"qdisk")) DSET(dpc,backend,BACKEND,str,QDISK);
+    else if (!strcmp(str,"other")) DSET(dpc,backend,BACKEND,str,OTHER);
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->specification from the string.  IDL should provide something for this. */
+static void setspecification(DiskParseContext *dpc, const char *str) {
+    if      (!strcmp(str,"xen"))    DSET(dpc,specification,SPECIFICATION,str,XEN);
+    else if (!strcmp(str,"virtio")) DSET(dpc,specification,SPECIFICATION,str,VIRTIO);
+    else xlu__disk_err(dpc,str,"unknown value for specification");
+}
+
 /* Sets ->colo-port from the string.  COLO need this. */
 static void setcoloport(DiskParseContext *dpc, const char *str) {
     int port = atoi(str);
@@ -192,6 +200,7 @@ devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
 backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
+specification=[^,]*,? { STRIP(','); setspecification(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..8836c07 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -119,6 +119,12 @@ int main_blockdetach(int argc, char **argv)
         fprintf(stderr, "Error: Device %s not connected.\n", argv[optind+1]);
         return 1;
     }
+
+    if (disk.specification == LIBXL_DISK_SPECIFICATION_VIRTIO) {
+        fprintf(stderr, "Hotunplug isn't supported for specification virtio\n");
+        return 1;
+    }
+
     rc = !force ? libxl_device_disk_safe_remove(ctx, domid, &disk, 0) :
         libxl_device_disk_destroy(ctx, domid, &disk, 0);
     if (rc) {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 19:25:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 19:25:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1655493890; bh=WF5uiJ+8knZteM5nBC4zDmv2r5H6wr0fs8exKYDIyxI=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=TGvB3WDU0Y4d143FSsbQFa/GiFU//0FPjYlHdumE6Oj+RzdADTRRB6QiXbbdnSdMb39hMtdK+oQr1z2LVJjWRyFHbg/0+1Q9auPcfeVaDn2Yiqpg/u3UWTNLPJ/A3OWwQt9Q0JE7k6DlCyvonsqTzYV83cLxCYUaNSUwI6OmgkXm0JXdKZoxaNe4Tl/JyEl/dAlhC8ZsJQglGMg7V1Brqz+TLGoQl5zYd/w+jYa0wFVnn4M7i1LiB8zJPmRIEV2TxtPcyjOiiDedOKfepgXQznu7pIPNmrmHrxNwXMwGgQzkscz0UQzO/ZkG7EJtDbh7KneiKQgBbAGyuZjUarymyg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1655493890; bh=sF4XDapfaKBPJOHnorYO2fVJRm9eFqUYWkeVB5IWlAh=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=mAdqYzjK6WZgTfUO4EH+m4sb2a9MHPNUg7FeY8/Mu6mqwWST0iUNGekpZeTs1TcoDw8LwhUKWVm85Cf3bGJBbVnDR0Ss8sO0J0IMM7zr9LTkB9D0inY9CEOxAjUg/yu6faklLdsVCen0+Q85D3rJUr81rfhiXBN6D3rzuh7bq3GOyL9X9gi9e5CNVmoYU1ul8rMXmU5ZbTdQRpmkf9ZpGban0kH0kPK1+mnEpgXvIcJC96Za4xqro3s8w3FNO2uYGee64LHTsu2JJeems1RHP0YVf2jAYepj8/GXjQzBa0lHH9xOYV4pH8FMqkr5Af1FjWGQYcoBEzhKoffgS3FLxg==
X-YMail-OSG: wJz8EwAVM1k8OI7xhtX412g2HhPpI0kvnrWcFBT1CB1qD9oFuIl8uJOqoO1wvyx
 x.zCOpQhPIzD7i9cC23uf8F8omrVAO7WITc..PR6VvsWML_QL07ss1ENqn5OsQIp2zCXGCiMS8UM
 gCCv4_TWmrNR.CHirrNtj5hwNrbyYAAvksTLU8OFfwn.agJPg5ZDMiH3Y7LzTUCCmBw.s0YMH09G
 99FoQQwei9KdZ738OxdI.dnpBnwd4saDmJIFLH.L7zNDfsxp2tfVEYisyMjA3RScbDEPPJ4eo_pI
 oMf.Ijw7qFKUphSSgXpBepz8FdG8.SSmiFxhbL.d0ZEgkew9CYj9NogGDrl.rk0Wlu3C6UOpkEiy
 SeKSy.SR3usKGWungf0C96oUrGSKlVSkZsm02666EOmd9s1kcJXtZcDBHwR1xlR8WMZaixYMjWzD
 qlYjPLXDSTP7EK7EpUkFTNvO8qCpPRQ05Ic40gHw9kU17BPjz.EwjIE9Bf4ZmKuyTtpvfDUau5fL
 laTsDZJ7jSEvTmCw9Pe3MTKxKJoGadOAL7fhaKoNjxaRNB_GEow3tbvNWM0vz1sz2peHAAALebCp
 OnOfKZ0.4OoNoNu_Z3DzSlAl1FCWgZLlyknnbdQCFHWRQJBOymLKIbYirj.hJLgRQFnq.ziA9uJb
 p4kXcuun4VdY1qtuIR5cb6bYBujqdbgQACCLPC2_3Ii51sknS.B7GHwbuY735fe24eB6MtrCnp81
 2aA7J2Sq4OpHsndcXNAU7GMo2.0iaGrP6r.2EOYpqx2FhcP8sY.ikzbbBjnHfvNHDyNSh_fG5Cdm
 lNsaUyoNzPS2ObnAEqVgCSCV1CWPUUpURkkLJHjEufNTrhYqtMBG.GAjJFHCAR4z_FdVbEOS0uh_
 tNqaSJrfGPGTtY3T3.AlhLXrmjjNjzJ7mb_VvoDPce3naepL5FrsHdauBXt7kZ.cI1qaEvlVUGcu
 f99.Q.th80copPTMPCw7tpc7av5QtlZ01XKpoqsHEDP.E7M5SsDQcUbV0gsKJvmL6PnMyI73c1IE
 yj7Mc0AJpArdaFjJK3VLf19_XZLHFUAn88SEZGHp_IGGLJPu46eC6SDgDTGVrhfhHGmp6MDSAvwX
 A8l0knThJLvjyJG2VwzslvyPgTbGFdgN7AJvlmPsxFTNqRlgII0oxg3sHKHg31DUvPmO7lpTbyY0
 a2sc51ggCAFlbvUz1RRWEyEToZxcz_9YIlH20mdwQ5__bzewX0kcfGw9Uc6_jUIPzXq3S.DkAgKF
 rPoNbjMrzvTyPK642b2xjoGSe4rYQvargC0.oJSuTXakEAyWagX9qGQynx5IytiNP3AfbMwKgVdS
 BxAi8OS0XdjqBe6RfBXu2X13sgfWcl92H.q6qL7v7YICXEGkoaB1qbNWpberzLumTWnqqj45Ld2A
 K9_Qgpa2XGFL6VQB7UJcwfJeLGN3bDpXUEph4OQ06SG6J13T2CPSGKu08rPWzE0KJfolcVHLtfaN
 rTGUXDf..ka2.DCXF2ay7kdulGJaQaCDfiSdVVtXyFfL0Uy8064guVqy7lSV8vxZpt0M2KYJoW8I
 ZI8RntE9rhmiHHBm6xu5._DIUqV5xpXB83C0SjKuBKyEhWeojEujdcdRFipht1823GgcAF86Aar5
 bCZUR6UhqqJdaR0a6RXmvuTKTLH3OWPQKONjrg1urg..FzEFppC9DEoVD4QHkh8jIw8YwqsrujGD
 FbPc4Mm1wNBwTSKJDcfpZ1PTTMI8H.BV5Sce04DCtDawc4y3pUpzKpZLrzQIK5zBaJaniT.hMzdg
 5JEkRvE3aQ3WdLpv1z_CSUo0bAK4Z77ps5bfC1Zx4lV0fIk8FaDdniIOjzZd2u6G8o7zvsnaU.Nu
 S2gr1llaEBKTLMvtAcTZhwouvEZwM_YRwGAJdvFqB03pDDgTMvvwLUMQSHwQ3UkwAQCmL2OprHFv
 c65F_ahYyskMN5NbukbHFbYMYFIdBIKCO6oaJPPZCiWv7wILcv9VuhAaD3w7952.VRYBP4IcQ.0K
 71f3aNWpYQa18W1EFsrUUl06F5Y2hpmw0k0upuiDMgWOc1y32SdoljwmWcms4JoZw9XYct7XIBbF
 MCXsnvOtaGjTwRRhlBBmYJfiy.okodcQieDQT3aPwApCpivvNbQNZube1wXRyyLrxLJfOKgqS_O5
 hhyhmcFcEGl0dJC7YzjLlgga_9wgCyfuZLLIiQsAEx64wBRDFuXhggOowWLwisIh0Nmfwieej.fX
 Bfrwi
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2] xen/pass-through: don't create needless register group
Date: Fri, 17 Jun 2022 15:13:33 -0400
Message-Id: <4d4b58473beb0565876687e9d8a53b76a7cf3fbb.1655490576.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <4d4b58473beb0565876687e9d8a53b76a7cf3fbb.1655490576.git.brchuckz.ref@aol.com>

Currently we are creating a register group for the Intel IGD OpRegion
for every device we pass through, but the XEN_PCI_INTEL_OPREGION
register group is only valid for an Intel IGD. Add a check to make
sure the device is an Intel IGD and a check that the administrator has
enabled gfx_passthru in the xl domain configuration. Require both checks
to be true before creating the register group. Use the existing
is_igd_vga_passthrough() function to check for a graphics device from
any vendor and that the administrator enabled gfx_passthru in the xl
domain configuration, but further require that the vendor be Intel,
because only Intel IGD devices have an Intel OpRegion. These are the
same checks hvmloader and libxl do to determine if the Intel OpRegion
needs to be mapped into the guest's memory. Also, move the comment
about trapping 0xfc for the Intel OpRegion to the place where it
belongs once this patch is applied.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
v2: * Move the comment to an appropriate place after applying this patch
    * Mention that the comment is moved in the commit message

v2 addresses the comment by Anthony Perard on the original
version of this patch.

 hw/xen/xen_pt_config_init.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index c5c4e943a8..cad4aeba84 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -2031,12 +2031,16 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
             }
         }
 
-        /*
-         * By default we will trap up to 0x40 in the cfg space.
-         * If an intel device is pass through we need to trap 0xfc,
-         * therefore the size should be 0xff.
-         */
         if (xen_pt_emu_reg_grps[i].grp_id == XEN_PCI_INTEL_OPREGION) {
+            if (!is_igd_vga_passthrough(&s->real_device) ||
+                s->real_device.vendor_id != PCI_VENDOR_ID_INTEL) {
+                continue;
+            }
+            /*
+             * By default we will trap up to 0x40 in the cfg space.
+             * If an intel device is pass through we need to trap 0xfc,
+             * therefore the size should be 0xff.
+             */
             reg_grp_offset = XEN_PCI_INTEL_OPREGION;
         }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 19:25:37 2022
Message-ID: <40ffbdbb-6af7-5ab0-7065-db5c0e718ed5@netscape.net>
Date: Fri, 17 Jun 2022 15:25:25 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/pass-through: don't create needless register group
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 qemu-trivial@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>
References: <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz.ref@aol.com>
 <a2e946dfb45260a5e29cec3b2195e4c1385b0d63.1654876622.git.brchuckz@aol.com>
 <Yqx8ht2teAoRJF4b@perard.uk.xensource.com>
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <Yqx8ht2teAoRJF4b@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/17/22 9:07 AM, Anthony PERARD wrote:
> On Fri, Jun 10, 2022 at 12:23:35PM -0400, Chuck Zmudzinski wrote:
>> Currently we are creating a register group for the Intel IGD OpRegion
>> for every device we pass through, but the XEN_PCI_INTEL_OPREGION
>> register group is only valid for an Intel IGD. Add a check to make
>> sure the device is an Intel IGD and a check that the administrator has
>> enabled gfx_passthru in the xl domain configuration. Require both checks
>> to be true before creating the register group. Use the existing
>> is_igd_vga_passthrough() function to check for a graphics device from
>> any vendor and that the administrator enabled gfx_passthru in the xl
>> domain configuration, but further require that the vendor be Intel,
>> because only Intel IGD devices have an Intel OpRegion. These are the
>> same checks hvmloader and libxl do to determine if the Intel OpRegion
>> needs to be mapped into the guest's memory.
>>
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>>   hw/xen/xen_pt_config_init.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
>> index c5c4e943a8..ffd915654c 100644
>> --- a/hw/xen/xen_pt_config_init.c
>> +++ b/hw/xen/xen_pt_config_init.c
>> @@ -2037,6 +2037,10 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
>>            * therefore the size should be 0xff.
>>            */
> Could you move that comment? I think it would make more sense to comment
> the "reg_grp_offset=XEN_PCI_INTEL_OPREGION" line now that the `if` block
> also skip setting up the group on non-intel devices.

OK. I just e-mailed interested parties v2 that moves the comment
and mentions that the comment is moved in the commit message.

Best Regards,

Chuck

>
>>           if (xen_pt_emu_reg_grps[i].grp_id == XEN_PCI_INTEL_OPREGION) {
>> +            if (!is_igd_vga_passthrough(&s->real_device) ||
>> +                s->real_device.vendor_id != PCI_VENDOR_ID_INTEL) {
>> +                continue;
>> +            }
>>               reg_grp_offset = XEN_PCI_INTEL_OPREGION;
>>           }
> Thanks,
>



From xen-devel-bounces@lists.xenproject.org Fri Jun 17 19:55:10 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Matias Ezequiel Vara Larsen
	<matias.vara@vates.fr>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Dario Faggioli
	<dfaggioli@suse.com>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [RFC PATCH 0/2] Add a new acquire resource to query vcpu
 statistics
Date: Fri, 17 Jun 2022 19:54:32 +0000
Message-ID: <EEFF4C8C-F26D-47CF-8E5D-5E62BB6579BC@citrix.com>
References: <cover.1652797713.git.matias.vara@vates.fr>
In-Reply-To: <cover.1652797713.git.matias.vara@vates.fr>
Accept-Language: en-US
Content-Language: en-US




> On 17 May 2022, at 15:33, Matias Ezequiel Vara Larsen <matiasevara@gmail.com> wrote:
>
> Hello all,
>
> The purpose of this RFC is to get feedback about a new acquire resource that
> exposes vcpu statistics for a given domain. The current mechanism to get those
> statistics is by querying the hypervisor. This mechanism relies on a hypercall
> and holds the domctl spinlock during its execution. When a pv tool like xcp-rrdd
> periodically samples these counters, it ends up affecting other paths that share
> that spinlock. By using acquire resources, the pv tool only requires a few
> hypercalls to set the shared memory region and samples are got without issuing
> any other hypercall. The original idea has been suggested by Andrew Cooper to
> which I have been discussing about how to implement the current PoC. You can
> find the RFC patch series at [1]. The series is rebased on top of stable-4.15.
>
> I am currently a bit blocked on 1) what to expose and 2) how to expose it. For
> 1), I decided to expose what xcp-rrdd is querying, e.g., XEN_DOMCTL_getvcpuinfo.
> More precisely, xcp-rrd gets runstate.time[RUNSTATE_running]. This is a uint64_t
> counter. However, the time spent in other states may be interesting too.
> Regarding 2), I am not sure if simply using an array of uint64_t is enough or if
> a different interface should be exposed. The remaining question is when to get
> new values. For the moment, I am updating this counter during
> vcpu_runstate_change().
>
> The current series includes a simple pv tool that shows how this new interface is
> used. This tool maps the counter and periodically samples it.
>
> Any feedback/help would be appreciated.

Hey Matias,

Sorry it's taken so long to get back to you.  My day-to-day job has shifted away
from technical things to community management; this has been on my radar, but I
never made time to dig into it.

There are some missing details I've had to try to piece together about the
situation, so let me sketch things out a bit further and see if I understand it
correctly:

* xcp-rrdd currently wants (at minimum) to record
runstate.time[RUNSTATE_running] for each vcpu.  Currently that means calling
XEN_DOMCTL_getvcpuinfo, which has to hold a single global domctl_lock (!) for
the entire hypercall, and of course must be iterated over every vcpu in the
system for every update.

* VCPUOP_get_runstate_info copies out a vcpu_runstate_info struct, which
contains information on the other runstates.  Additionally,
VCPUOP_register_runstate_memory_area already does something similar to what you
want: it passes a virtual address to Xen, which Xen maps and copies runstate
information into (in update_runstate_area()).

* However, the above assumes a domain of "current->domain": that is, a domain
can call VCPUOP_get_runstate_info on one of its own vcpus, but dom0 cannot call
it to get information about the vcpus of other domains.

* Additionally, VCPUOP_register_runstate_memory_area registers by *virtual
address*; this is actually problematic even for guest kernels looking at their
own vcpus, but it would be completely inappropriate for a dom0 userspace
application, which is what you're looking at.

Your solution is to expose things via the xenforeignmemory interface instead,
modelled after the vmtrace_buf functionality.

Does that all sound right?

I think at a high level that's probably the right way to go.

As you say, my default would be to expose the same sort of information as
VCPUOP_get_runstate_info.  I'd even consider just using vcpu_runstate_info_t.

The other option would be to try to make the page a more general "foreign vcpu
info" page, which we could expand with more information as we find it useful.

In this patch, you're allocating 4k *per vcpu on the entire system* to hold a
single 64-bit value; even if you decide to use vcpu_runstate_info_t, there's
still quite a lot of waste.  Would it make more sense to have this pass back an
array of MFNs designed to be mapped contiguously, with the vcpus' records laid
out as an array?  This seems to be what XENMEM_resource_ioreq_server does.

The advantage of making the structure extensible is that we wouldn't need to
add another interface, and potentially another full page, whenever we wanted to
export more information.  On the other hand, each new piece of functionality
may require adding code to copy things into the page; making it so that such
code is added bit by bit, as it's requested, might be better.

I have some more comments, which I'll give on the 1/2 patch.

 -George






>
> Thanks, Matias.
>
> [1] https://github.com/MatiasVara/xen/tree/feature_stats
>
> Matias Ezequiel Vara Larsen (2):
>  xen/memory : Add stats_table resource type
>  tools/misc: Add xen-stats tool
>
> tools/misc/Makefile         |  5 +++
> tools/misc/xen-stats.c      | 83 +++++++++++++++++++++++++++++++++++++
> xen/common/domain.c         | 42 +++++++++++++++++++
> xen/common/memory.c         | 29 +++++++++++++
> xen/common/sched/core.c     |  5 +++
> xen/include/public/memory.h |  1 +
> xen/include/xen/sched.h     |  5 +++
> 7 files changed, 170 insertions(+)
> create mode 100644 tools/misc/xen-stats.c
>
> --
> 2.25.1
>


--Apple-Mail=_03802B3E-EEA0-4C22-99C1-23D0DD9F3FC1
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKs2/AACgkQshXHp8eE
G+1BHgf9FLZncxfDyyUCcd6iZqcG+/Hse/3pjRVdO425HQHuVzrsxfQ684mDHCN/
0VnNIhK413ME0Ih4npbyLI/xAQsKMiVZwntEfm0ndGXI9mKcn38njE5C64L/pnkV
w5zbDql+FXzWqzSNUrgtCPa35f2wc9nPQ2FrQBpQr+0dIPHqBZvefPNY83/toIqp
Z1zfqnG5hDwZAZg96jpTdZJbM9Mo6KFWN4LmIGiXm4yBRa2dow2Kyb+0OPFTscpl
VVyoN27Xww9hS7Up1Erc9EC4I0oTQXBV7Qmyz4zvVKobM0PpoeGr5bj63dkT3mP9
37pJa5rcp7jEAPIsOdN5ar0dSQ8Cgg==
=puLY
-----END PGP SIGNATURE-----

--Apple-Mail=_03802B3E-EEA0-4C22-99C1-23D0DD9F3FC1--


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 20:39:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 20:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351570.578240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2IkU-0002Xn-TN; Fri, 17 Jun 2022 20:39:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351570.578240; Fri, 17 Jun 2022 20:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2IkU-0002Xg-Ql; Fri, 17 Jun 2022 20:39:22 +0000
Received: by outflank-mailman (input) for mailman id 351570;
 Fri, 17 Jun 2022 20:39:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JiXX=WY=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1o2IkT-0002Xa-Mf
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 20:39:22 +0000
Received: from sonic317-20.consmr.mail.gq1.yahoo.com
 (sonic317-20.consmr.mail.gq1.yahoo.com [98.137.66.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8dd9e7b6-ee7d-11ec-b725-ed86ccbb4733;
 Fri, 17 Jun 2022 22:39:18 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic317.consmr.mail.gq1.yahoo.com with HTTP; Fri, 17 Jun 2022 20:39:15 +0000
Received: by hermes--canary-production-bf1-8bb76d6cf-cdgdn (Yahoo Inc. Hermes
 SMTP Server) with ESMTPA ID da0b3df65b30f311b738c9ef19c10b95; 
 Fri, 17 Jun 2022 20:39:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dd9e7b6-ee7d-11ec-b725-ed86ccbb4733
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <7be7002d-8a6e-b6fb-f926-b8b4cdb0c404@netscape.net>
Date: Fri, 17 Jun 2022 16:39:09 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH] tools/libs/light/libxl_pci.c: explicitly grant access
 to Intel IGD opregion
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Jason Andryuk <jandryuk@gmail.com>, Andrew Cooper <amc96@srcf.net>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>
References: <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz.ref@netscape.net>
 <b62fbc602a629941c1acaad3b93d250a3eba33c0.1647222184.git.brchuckz@netscape.net>
 <YkSQIoYhomhNKpYR@perard.uk.xensource.com>
 <408e5e07-453c-f377-a5b0-c421d002aec5@srcf.net>
 <46a8585e-2a2a-4d12-f221-e57bd157dec6@netscape.net>
 <CAKf6xpths4SX4wq-j4VhnXZnx0DW=468z3=9FYHso=Wy1i_Rsg@mail.gmail.com>
 <da62d06d-fd18-a313-9e69-2a4581e97c1a@netscape.net>
 <CAKf6xptZ9g79MphwYPAGq6ATBtCrW+pCd5+NYJPdFniW+tFzPg@mail.gmail.com>
 <ea5c1606-04d3-c847-643e-d242d8f6ba06@netscape.net>
 <YqyFoPJ8Bv9EnO5N@perard.uk.xensource.com>
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <YqyFoPJ8Bv9EnO5N@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20280 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 2932

On 6/17/22 9:46 AM, Anthony PERARD wrote:
> On Thu, Mar 31, 2022 at 03:44:33PM -0400, Chuck Zmudzinski wrote:
>> On 3/31/22 10:19 AM, Jason Andryuk wrote:
>>> On Thu, Mar 31, 2022 at 10:05 AM Chuck Zmudzinski <brchuckz@netscape.net> wrote:
>>>> That still doesn't answer my question - will the Qemu upstream
>>>> accept the patches that move the hypercalls to the toolstack? If
>>>> not, we have to live with what we have now, which is that the
>>>> hypercalls are done in Qemu.
>>> [...]
>> I know of another reason to check the Qemu upstream version,
>> and that is dealing with deprecated / removed device model
>> options that xl.cfg still uses. I looked at that a few years ago
>> with regard to the deprecated 'usbdevice tablet' Qemu option,
>> but I did not see anything in libxl to distinguish Qemu versions
>> except for upstream vs. traditional. AFAICT, detecting traditional
>> vs. upstream Qemu depends solely on the device_model_version
>> xl.cfg setting. So it might be useful for libxl to add the capability
>> to detect the upstream Qemu version or at least create an xl.cfg
>> setting to inform libxl about the Qemu version.
> Hi,
>
> There's already some code to deal with QEMU's version (QEMU = upstream
> Qemu) in libxl, but that code deals with an already-running QEMU.
> When libxl interacts with QEMU through QMP to set up some more devices,
> QEMU also advertises its version, which we can use on follow-up QMP
> commands.
>
> I think adding passthrough PCI devices is all done via QMP, so we can
> potentially move a hypercall from QEMU to libxl, and tell libxl that,
> depending on which version of QEMU it is talking to, it does or does not
> need to do the hypercall. Also, we could probably add a parameter so that
> QEMU knows whether it has to do the hypercall, and thus newer versions of
> QEMU could also deal with older versions of libxl.
>
> (There may be some examples like that in both QEMU and libxl for live
> migration; dm_state_save_to_fdset() in libxl is a pointer.)
>
> Cheers,
>

Hi Anthony,

Thanks for this information, it is useful because I plan to work on
improved Xen / Qemu interactions to better support features
such as running the device model in a stub domain, in which case
it is probably better to do some hypercalls in libxl or maybe in
hvmloader that are currently done in Qemu.

I also would like to see Xen HVM using Q35 instead of i440fx emulation, which
also requires improved Xen / Qemu interactions.
I know of the patch set for Q35 emulation that was proposed back
in 2018, but I have not tried anything like that yet. That looks like
a complex problem. Has there been any more recent work in that
area? I couldn't find much recent work on that by searching the
Internet. I have quite a bit to learn before I can make contributions
to support Q35 as a replacement for i440fx, but I plan to slowly
work on it. Any suggestions anyone has are welcome.

Best Regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 20:54:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 20:54:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351578.578251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2IzO-00052c-8R; Fri, 17 Jun 2022 20:54:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351578.578251; Fri, 17 Jun 2022 20:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2IzO-00052V-5C; Fri, 17 Jun 2022 20:54:46 +0000
Received: by outflank-mailman (input) for mailman id 351578;
 Fri, 17 Jun 2022 20:54:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vvnF=WY=citrix.com=prvs=160c9be11=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1o2IzN-00052P-5O
 for xen-devel@lists.xenproject.org; Fri, 17 Jun 2022 20:54:45 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b500b304-ee7f-11ec-bd2d-47488cf2e6aa;
 Fri, 17 Jun 2022 22:54:42 +0200 (CEST)
Received: from mail-dm6nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jun 2022 16:54:38 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by MW4PR03MB6990.namprd03.prod.outlook.com (2603:10b6:303:1b9::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Fri, 17 Jun
 2022 20:54:35 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308%4]) with mapi id 15.20.5353.014; Fri, 17 Jun 2022
 20:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b500b304-ee7f-11ec-bd2d-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.59.176
X-IronPort-MID: 74307023
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="asc'?scan'208,217";a="74307023"
From: George Dunlap <George.Dunlap@citrix.com>
To: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Matias Ezequiel Vara Larsen
	<matias.vara@vates.fr>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Dario Faggioli
	<dfaggioli@suse.com>
Subject: Re: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Thread-Topic: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Thread-Index: AQHYafsc/WlHBvFB/kqWUrR4CBI56K1UROwA
Date: Fri, 17 Jun 2022 20:54:34 +0000
Message-ID: <C9B7EF20-595D-4BCB-8545-F35611B718AF@citrix.com>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
In-Reply-To:
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e61f9645-c0fc-40d0-2830-08da50a395db
x-ms-traffictypediagnostic: MW4PR03MB6990:EE_
x-microsoft-antispam-prvs:
 <MW4PR03MB699040B20858E489A2DD88FF99AF9@MW4PR03MB6990.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/signed;
	boundary="Apple-Mail=_75AC0B82-0989-47A8-A553-375D14B97929";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e61f9645-c0fc-40d0-2830-08da50a395db
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jun 2022 20:54:34.9780
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 47CjwblZHUlNZ/WS3gA5UjEtlLPY4z6Om3M8e8Hl6ovYhbZ65va+wx6diNvKh/X1xLQY+HKH4UEYHBM317a58jscm4TLbTOIo67NgXYAoK0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6990

--Apple-Mail=_75AC0B82-0989-47A8-A553-375D14B97929
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_FA80A71B-A1CF-42BD-9B44-701910E9BB1C"


--Apple-Mail=_FA80A71B-A1CF-42BD-9B44-701910E9BB1C
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Preface: It looks like this may be one of your first hypervisor patches, so
thank you!  FYI there's a lot that needs fixing up here; please don't read a
tone of annoyance into it, I'm just trying to tell you what needs fixing in the
most efficient manner. :-)

> On 17 May 2022, at 15:33, Matias Ezequiel Vara Larsen <matiasevara@gmail.com> wrote:
>
> Allow to map vcpu stats using acquire_resource().

This needs a lot more expansion in terms of what it is you're doing in this
patch and why.

>
> Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
> ---
> xen/common/domain.c         | 42 +++++++++++++++++++++++++++++++++++++
> xen/common/memory.c         | 29 +++++++++++++++++++++++++
> xen/common/sched/core.c     |  5 +++++
> xen/include/public/memory.h |  1 +
> xen/include/xen/sched.h     |  5 +++++
> 5 files changed, 82 insertions(+)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 17cc32fde3..ddd9f88874 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -132,6 +132,42 @@ static void vcpu_info_reset(struct vcpu *v)
>     v->vcpu_info_mfn = INVALID_MFN;
> }
>
> +static void stats_free_buffer(struct vcpu * v)
> +{
> +    struct page_info *pg = v->stats.pg;
> +
> +    if ( !pg )
> +        return;
> +
> +    v->stats.va = NULL;
> +
> +    if ( v->stats.va )
> +        unmap_domain_page_global(v->stats.va);
> +
> +    v->stats.va = NULL;

Looks like you meant to delete the first `v->stats.va = NULL` but forgot?  As
written, the pointer is cleared before the `if ( v->stats.va )` check, so the
unmap never happens.

> +
> +    free_domheap_page(pg);

Pretty sure this will crash.

Unfortunately page allocation and freeing is somewhat complicated and requires
a bit of boilerplate.  You can look at the vmtrace_alloc_buffer() code for a
template, but the general sequence you want is as follows:

* On the allocate side:

1. Allocate the page

   pg = alloc_domheap_page(d, MEMF_no_refcount);

This will allocate a page with the PGC_allocated bit set and a single reference
count.  (If you pass a page with PGC_allocated set to free_domheap_page(), it
will crash; which is why I said the above.)

2. Grab a general reference count for the vcpu struct's reference to it, as
well as a writable type count, so that it doesn't get used as a special page:

if ( !get_page_and_type(pg, d, PGT_writable_page) )
{
    put_page_alloc_ref(pg);
    /* failure path */
}

* On the free side, don't call free_domheap_pages() directly.  Rather, drop the
allocation, then drop your own type count, thus:

v->stats.va = NULL;

put_page_alloc_ref(pg);
put_page_and_type(pg);

The issue here is that we can't free the page until all references have
dropped; and the whole point of this exercise is to allow guest userspace in
dom0 to gain a reference to the page.  We can't actually free the page until
*all* references are gone, including the userspace one in dom0.  The put_page()
which brings the reference count to 0 will automatically free the page.


> +}
> +
> +static int stats_alloc_buffer(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct page_info *pg;
> +
> +    pg = alloc_domheap_page(d, MEMF_no_refcount);
> +
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    v->stats.va = __map_domain_page_global(pg);
> +    if ( !v->stats.va )
> +        return -ENOMEM;
> +
> +    v->stats.pg = pg;
> +    clear_page(v->stats.va);
> +    return 0;
> +}

The other thing to say about this is that the memory is being
allocated unconditionally, even if nobody is planning to read it.
The vast majority of Xen users will not be running xcp-rrd, so it
will be pointless overhead.

At a basic level, you want to follow suit with the vmtrace buffers,
which are only allocated if the proper domctl flag is set on domain
creation.  (We could consider instead, or in addition, having a
global Xen command-line parameter which would enable this feature for
all domains.)

> +
> static void vmtrace_free_buffer(struct vcpu *v)
> {
>     const struct domain *d = v->domain;
> @@ -203,6 +239,9 @@ static int vmtrace_alloc_buffer(struct vcpu *v)
>  */
> static int vcpu_teardown(struct vcpu *v)
> {
> +
> +    stats_free_buffer(v);
> +
>     vmtrace_free_buffer(v);
>
>     return 0;
> @@ -269,6 +308,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>     if ( vmtrace_alloc_buffer(v) != 0 )
>         goto fail_wq;
>
> +    if ( stats_alloc_buffer(v) != 0 )
> +        goto fail_wq;
> +
>     if ( arch_vcpu_create(v) != 0 )
>         goto fail_sched;
>
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 297b98a562..39de6d9d05 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1099,6 +1099,10 @@ static unsigned int resource_max_frames(const struct domain *d,
>     case XENMEM_resource_vmtrace_buf:
>         return d->vmtrace_size >> PAGE_SHIFT;
>
> +    // WIP: to figure out the correct size of the resource
> +    case XENMEM_resource_stats_table:
> +        return 1;
> +
>     default:
>         return -EOPNOTSUPP;
>     }
> @@ -1162,6 +1166,28 @@ static int acquire_vmtrace_buf(
>     return nr_frames;
> }
>
> +static int acquire_stats_table(struct domain *d,
> +                                unsigned int id,
> +                                unsigned int frame,
> +                                unsigned int nr_frames,
> +                                xen_pfn_t mfn_list[])
> +{
> +    const struct vcpu *v = domain_vcpu(d, id);
> +    mfn_t mfn;
> +

Maybe I'm paranoid, but I might add an "ASSERT(nr_frames == 1)" here

> +    if ( !v )
> +        return -ENOENT;
> +
> +    if ( !v->stats.pg )
> +        return -EINVAL;
> +
> +    mfn = page_to_mfn(v->stats.pg);
> +    mfn_list[0] = mfn_x(mfn);
> +
> +    printk("acquire_perf_table: id: %d, nr_frames: %d, %p, domainid: %d\n", id, nr_frames, v->stats.pg, d->domain_id);
> +    return 1;
> +}
> +}
> +
> /*
>  * Returns -errno on error, or positive in the range [1, nr_frames] on
>  * success.  Returning less than nr_frames constitutes a request for a
> @@ -1182,6 +1208,9 @@ static int _acquire_resource(
>     case XENMEM_resource_vmtrace_buf:
>         return acquire_vmtrace_buf(d, id, frame, nr_frames, mfn_list);
>
> +    case XENMEM_resource_stats_table:
> +        return acquire_stats_table(d, id, frame, nr_frames, mfn_list);
> +
>     default:
>         return -EOPNOTSUPP;
>     }
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 8f4b1ca10d..2a8b534977 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -264,6 +264,7 @@ static inline void vcpu_runstate_change(
> {
>     s_time_t delta;
>     struct sched_unit *unit = v->sched_unit;
> +    uint64_t * runstate;
>
>     ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
>     if ( v->runstate.state == new_state )
> @@ -287,6 +288,10 @@ static inline void vcpu_runstate_change(
>     }
>
>     v->runstate.state = new_state;
> +
> +    // WIP: use a different interface
> +    runstate = (uint64_t*)v->stats.va;

I think you should look at
xen.git/xen/include/public/hvm/ioreq.h:shared_iopage_t for
inspiration.  Basically, you cast the void pointer to that type, and
then just access `iopage->vcpu_ioreq[vcpuid]`.  Put it in a public
header, and then both the userspace consumer and Xen can use the same
structure.

As I said in my response to the cover letter, I think
vcpu_runstate_info_t would be something to look at and gain
inspiration from.

The other thing to say here is that this is a hot path; we don't want
to be copying lots of information around if it's not going to be
used.  So you should only allocate the buffers if specifically
enabled, and you should only copy the information over if
v->stats.va != NULL.

I think that should be enough to start with; we can nail down more
when you send v1.

Peace,
 -George





From xen-devel-bounces@lists.xenproject.org Fri Jun 17 22:20:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 22:20:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351625.578292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2KKV-0008MV-8Y; Fri, 17 Jun 2022 22:20:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351625.578292; Fri, 17 Jun 2022 22:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2KKV-0008MO-5E; Fri, 17 Jun 2022 22:20:39 +0000
Received: by outflank-mailman (input) for mailman id 351625;
 Fri, 17 Jun 2022 22:20:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2KKT-0008ME-Sm; Fri, 17 Jun 2022 22:20:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2KKT-0004uh-Qt; Fri, 17 Jun 2022 22:20:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2KKT-0001ow-D4; Fri, 17 Jun 2022 22:20:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2KKT-0003Is-Ce; Fri, 17 Jun 2022 22:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Tbsyq07xw4hVoRSuz7CqlP0WDG27kKubw5ncCSMO6ak=; b=ih8DEA53o8ZVhgvu3Gtr7EsAe7
	qv30xtGXwRcvkNBYvvqRMJqrC2tsUNw+WYabzFdtonu+roGEH/lSw6NsHd0ICyDBbW+qb8NEb5e0M
	rONFa+SQS/d412JP6XBVIPL+4BQygXw6JySCqtVtt8ybNuoPzsmcRdM0TynwCLYqtqvc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171218: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=48a23ec6ff2b2a5effe8d3ae5f17fc6b7f35df65
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 22:20:37 +0000

flight 171218 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171218/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 170714
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 170714
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 170714
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 170714
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 170714
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 170714
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 170714
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 170714
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 170714
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 170714
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 170714
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 170714
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 170714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                48a23ec6ff2b2a5effe8d3ae5f17fc6b7f35df65
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   24 days
Failing since        170716  2022-05-24 11:12:06 Z   24 days   57 attempts
Testing same since   171218  2022-06-16 21:43:29 Z    1 days    1 attempts

------------------------------------------------------------
2360 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 278313 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 23:09:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 23:09:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351637.578309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2L5W-0005MR-2d; Fri, 17 Jun 2022 23:09:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351637.578309; Fri, 17 Jun 2022 23:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2L5V-0005MK-Ux; Fri, 17 Jun 2022 23:09:13 +0000
Received: by outflank-mailman (input) for mailman id 351637;
 Fri, 17 Jun 2022 23:09:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2L5V-0005MA-3M; Fri, 17 Jun 2022 23:09:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2L5U-0005ha-Vk; Fri, 17 Jun 2022 23:09:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2L5U-0003W1-HK; Fri, 17 Jun 2022 23:09:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2L5U-0002uy-Ga; Fri, 17 Jun 2022 23:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r3K7I0Q5ty7EXevjDUyJHs7XuwHcUJz//ndK+a31es0=; b=l9/poVyfvqFv9WQLAhnWnuYuLc
	h+Y5dFf4vlmMEkvz/h/uJwNrhxmdvFW+4PxkQaakrYLvR2Xm8Zf2n7i4CCQmaNdFqelzyxo7VLzuv
	/Fe+8IVCaF6EJiWiC0xLxdO0OqLlRcCsju+4qPUGRuWhCv+5EdyffVYPyLT87jS1MhfU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171224-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171224: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
X-Osstest-Versions-That:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 23:09:12 +0000

flight 171224 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171224/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail REGR. vs. 171173

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 171200 pass in 171224
 test-amd64-i386-libvirt-xsm   7 xen-install                fail pass in 171200
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 171200

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 171200 like 171173
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 171200 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 171200 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 171200 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 171200 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 171200 never pass
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171167
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171173
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171173
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
baseline version:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d

Last test of basis   171173  2022-06-15 01:11:00 Z    2 days
Testing same since   171200  2022-06-16 11:41:43 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Borislav Petkov <bp@suse.de>
  Florian Fainelli <f.fainelli@gmail.com>
  Gayatri Kammela <gayatri.kammela@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 352 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 17 23:16:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jun 2022 23:16:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351648.578320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2LCm-0006ri-UI; Fri, 17 Jun 2022 23:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351648.578320; Fri, 17 Jun 2022 23:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2LCm-0006rb-Qa; Fri, 17 Jun 2022 23:16:44 +0000
Received: by outflank-mailman (input) for mailman id 351648;
 Fri, 17 Jun 2022 23:16:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2LCl-0006rR-EB; Fri, 17 Jun 2022 23:16:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2LCl-0005rR-Aa; Fri, 17 Jun 2022 23:16:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2LCk-0003hN-TC; Fri, 17 Jun 2022 23:16:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2LCk-0000LJ-Qe; Fri, 17 Jun 2022 23:16:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FF+Vohx24CQGYI9woFYH35RG/qAGvlqvHK+nSfihUmw=; b=5v5uFFvW7w5x+N7CuWfI9KsyDJ
	SA/mwlA5HGywSpeL8KYjSdFsGjSNU+ilIoEsq8+a+LvT1bgyoA7DNq1Sh1MMEwIUsoAkkEg84WbDb
	Dg+dYiixSyc5/fhsAgI8HlTtk/raydKYcWj599f5hCRSafPOUAA9JKGscdNQRat1skic=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171230: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=812edc95a36b997d674ce4f3a56f4fd01f31904e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jun 2022 23:16:42 +0000

flight 171230 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171230/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              812edc95a36b997d674ce4f3a56f4fd01f31904e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  707 days
Failing since        151818  2020-07-11 04:18:52 Z  706 days  688 attempts
Testing same since   171230  2022-06-17 04:18:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113770 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 04:42:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 04:42:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351663.578334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2QIB-0008Bl-0E; Sat, 18 Jun 2022 04:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351663.578334; Sat, 18 Jun 2022 04:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2QIA-0008Be-Si; Sat, 18 Jun 2022 04:42:38 +0000
Received: by outflank-mailman (input) for mailman id 351663;
 Sat, 18 Jun 2022 04:42:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2QIA-0008BU-FH; Sat, 18 Jun 2022 04:42:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2QIA-00023v-BG; Sat, 18 Jun 2022 04:42:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2QI9-0005uV-UT; Sat, 18 Jun 2022 04:42:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2QI9-0005o8-TV; Sat, 18 Jun 2022 04:42:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ni867ugmwHnnsD0drBQYwXEy9x0ykJpEszeu/K64pWc=; b=2KKSmDNhPUu+6eiI7dCIvO6d7M
	nlD5B8L3/AXIa555P3EfdlK2zQl+4Cl1f8GsUXnhREcTlog/AQ+IksjtkjsHB1siS16aWNJneHItG
	S+sFb71OF/y8NvrHFlXBNV1QAr8wwvXQq0iRo4o2jdTPUtf0lbXGrH4i76F3Ul/9tNsg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171252-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171252: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
X-Osstest-Versions-That:
    xen=3c2a14ea81c77ae7973c1e436a32436a7e6d017b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 04:42:37 +0000

flight 171252 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171252/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 171210

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171210
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171210
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171210
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171210
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171210
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171210
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171210
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171210
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171210
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171210
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171210
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171210
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e
baseline version:
 xen                  3c2a14ea81c77ae7973c1e436a32436a7e6d017b

Last test of basis   171210  2022-06-16 14:36:58 Z    1 days
Testing same since   171252  2022-06-17 13:27:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jiamei Xie <jiamei.xie@arm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Chen <wei.chen@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3c2a14ea81..c9040f25be  c9040f25be317ab2f7647605397d79313e3f303e -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 08:08:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 08:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351748.578479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2TVB-0005VZ-1W; Sat, 18 Jun 2022 08:08:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351748.578479; Sat, 18 Jun 2022 08:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2TVA-0005VS-Td; Sat, 18 Jun 2022 08:08:16 +0000
Received: by outflank-mailman (input) for mailman id 351748;
 Sat, 18 Jun 2022 08:08:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2TV9-0005VI-Tn; Sat, 18 Jun 2022 08:08:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2TV9-00071f-R3; Sat, 18 Jun 2022 08:08:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2TV9-0008F3-86; Sat, 18 Jun 2022 08:08:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2TV9-0005pD-7Y; Sat, 18 Jun 2022 08:08:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZcEqysbezOmRSaFdRxAtTMfi1zRneng5s688t3U14es=; b=vmxp8319Eqy13Ei3ois6MU4/9K
	OtQO1tj5jk+p4i4qtT7aqqFMb0b6S7PFGa5RMnhsOJQ4dLIn13ttA5dzqZdOIZiz11Qb2MEUUDUm0
	I58DRxqsRh1Y7e/6Dl+5jNJqIORkNlK/1r2xwyEQVQooOzgXH8jnp/AdkX8ohWfaIczc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171256-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171256: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-credit1:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a28498b1f9591e12dcbfdf06dc8f54e15926760e
X-Osstest-Versions-That:
    qemuu=9ac873a46963098441be920ef7a2eaf244a3352d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 08:08:15 +0000

flight 171256 qemu-mainline real [real]
flight 171268 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171256/
http://logs.test-lab.xenproject.org/osstest/logs/171268/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install         fail pass in 171268-retest
 test-amd64-amd64-xl-credit1 20 guest-localmigrate/x10 fail pass in 171268-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt     15 migrate-support-check fail in 171268 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171183
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171183
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171183
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171183
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171183
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171183
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171183
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171183
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                a28498b1f9591e12dcbfdf06dc8f54e15926760e
baseline version:
 qemuu                9ac873a46963098441be920ef7a2eaf244a3352d

Last test of basis   171183  2022-06-15 23:40:54 Z    2 days
Failing since        171215  2022-06-16 16:37:03 Z    1 days    2 attempts
Testing same since   171256  2022-06-17 15:58:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  lei he <helei.sig11@bytedance.com>
  Mark Kanda <mark.kanda@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Ni Xun <richardni@tencent.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Parav Pandit <parav@nvidia.com>
  Richard Henderson <richard.henderson@linaro.org>
  Yajun Wu <yajunw@nvidia.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9ac873a469..a28498b1f9  a28498b1f9591e12dcbfdf06dc8f54e15926760e -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 10:48:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 10:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351768.578529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0C-0004m4-1l; Sat, 18 Jun 2022 10:48:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351768.578529; Sat, 18 Jun 2022 10:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0B-0004l4-Su; Sat, 18 Jun 2022 10:48:27 +0000
Received: by outflank-mailman (input) for mailman id 351768;
 Sat, 18 Jun 2022 10:48:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c8TN=WZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2W0A-00049A-HX
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 10:48:26 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a73de85-eef4-11ec-b725-ed86ccbb4733;
 Sat, 18 Jun 2022 12:48:22 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1F9611F9BA;
 Sat, 18 Jun 2022 10:48:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E6E9C13776;
 Sat, 18 Jun 2022 10:48:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id MFkIN3KtrWIXKAAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 18 Jun 2022 10:48:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a73de85-eef4-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655549299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4sY7iEL9wOcvVypFP7Ls5tXz2Bjjv1VYhMj7YTitExc=;
	b=LiwkP9HIcZgOFPLalVxPbDczT1dYWrL9X0XquPtotLqX6wU67ClBSjGPx5sI5rJcNPCGtb
	RaydcL+h4bom+/IG5nROCYbmK+8qvvsQucjspqDH80fdEeRTr4KPaACkNzmXIFiXNskCAh
	YhgsLfEgTcAhyzKACWblSKA34KlLA6k=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 1/3] mini-os: take newest version of arch-x86/hvm/start_info.h
Date: Sat, 18 Jun 2022 12:48:14 +0200
Message-Id: <20220618104816.11527-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220618104816.11527-1-jgross@suse.com>
References: <20220618104816.11527-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update include/xen/arch-x86/hvm/start_info.h to the newest version
from the Xen tree.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 include/xen/arch-x86/hvm/start_info.h | 63 ++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/include/xen/arch-x86/hvm/start_info.h b/include/xen/arch-x86/hvm/start_info.h
index 64841597..50af9ea2 100644
--- a/include/xen/arch-x86/hvm/start_info.h
+++ b/include/xen/arch-x86/hvm/start_info.h
@@ -33,7 +33,7 @@
  *    | magic          | Contains the magic value XEN_HVM_START_MAGIC_VALUE
  *    |                | ("xEn3" with the 0x80 bit of the "E" set).
  *  4 +----------------+
- *    | version        | Version of this structure. Current version is 0. New
+ *    | version        | Version of this structure. Current version is 1. New
  *    |                | versions are guaranteed to be backwards-compatible.
  *  8 +----------------+
  *    | flags          | SIF_xxx flags.
@@ -48,6 +48,15 @@
  * 32 +----------------+
  *    | rsdp_paddr     | Physical address of the RSDP ACPI data structure.
  * 40 +----------------+
+ *    | memmap_paddr   | Physical address of the (optional) memory map. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 48 +----------------+
+ *    | memmap_entries | Number of entries in the memory map table. Zero
+ *    |                | if there is no memory map being provided. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 52 +----------------+
+ *    | reserved       | Version 1 and newer only.
+ * 56 +----------------+
  *
  * The layout of each entry in the module structure is the following:
  *
@@ -62,13 +71,51 @@
  *    | reserved       |
  * 32 +----------------+
  *
+ * The layout of each entry in the memory map table is as follows:
+ *
+ *  0 +----------------+
+ *    | addr           | Base address
+ *  8 +----------------+
+ *    | size           | Size of mapping in bytes
+ * 16 +----------------+
+ *    | type           | Type of mapping as defined between the hypervisor
+ *    |                | and guest. See XEN_HVM_MEMMAP_TYPE_* values below.
+ * 20 +----------------|
+ *    | reserved       |
+ * 24 +----------------+
+ *
  * The address and sizes are always a 64bit little endian unsigned integer.
  *
  * NB: Xen on x86 will always try to place all the data below the 4GiB
  * boundary.
+ *
+ * Version numbers of the hvm_start_info structure have evolved like this:
+ *
+ * Version 0:  Initial implementation.
+ *
+ * Version 1:  Added the memmap_paddr/memmap_entries fields (plus 4 bytes of
+ *             padding) to the end of the hvm_start_info struct. These new
+ *             fields can be used to pass a memory map to the guest. The
+ *             memory map is optional and so guests that understand version 1
+ *             of the structure must check that memmap_entries is non-zero
+ *             before trying to read the memory map.
  */
 #define XEN_HVM_START_MAGIC_VALUE 0x336ec578
 
+/*
+ * The values used in the type field of the memory map table entries are
+ * defined below and match the Address Range Types as defined in the "System
+ * Address Map Interfaces" section of the ACPI Specification. Please refer to
+ * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications
+ */
+#define XEN_HVM_MEMMAP_TYPE_RAM       1
+#define XEN_HVM_MEMMAP_TYPE_RESERVED  2
+#define XEN_HVM_MEMMAP_TYPE_ACPI      3
+#define XEN_HVM_MEMMAP_TYPE_NVS       4
+#define XEN_HVM_MEMMAP_TYPE_UNUSABLE  5
+#define XEN_HVM_MEMMAP_TYPE_DISABLED  6
+#define XEN_HVM_MEMMAP_TYPE_PMEM      7
+
 /*
  * C representation of the x86/HVM start info layout.
  *
@@ -86,6 +133,13 @@ struct hvm_start_info {
     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
                                 /* structure.                                */
+    /* All following fields only present in version 1 and newer */
+    uint64_t memmap_paddr;      /* Physical address of an array of           */
+                                /* hvm_memmap_table_entry.                   */
+    uint32_t memmap_entries;    /* Number of entries in the memmap table.    */
+                                /* Value will be zero if there is no memory  */
+                                /* map being provided.                       */
+    uint32_t reserved;          /* Must be zero.                             */
 };
 
 struct hvm_modlist_entry {
@@ -95,4 +149,11 @@ struct hvm_modlist_entry {
     uint64_t reserved;
 };
 
+struct hvm_memmap_table_entry {
+    uint64_t addr;              /* Base address of the memory region         */
+    uint64_t size;              /* Size of the memory region in bytes        */
+    uint32_t type;              /* Mapping type                              */
+    uint32_t reserved;          /* Must be zero for Version 1.               */
+};
+
 #endif /* __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Jun 18 10:48:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 10:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351765.578516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0B-0004Wd-Dg; Sat, 18 Jun 2022 10:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351765.578516; Sat, 18 Jun 2022 10:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0B-0004Vu-5W; Sat, 18 Jun 2022 10:48:27 +0000
Received: by outflank-mailman (input) for mailman id 351765;
 Sat, 18 Jun 2022 10:48:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c8TN=WZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2W09-000499-65
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 10:48:25 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2aa1b67a-eef4-11ec-bd2d-47488cf2e6aa;
 Sat, 18 Jun 2022 12:48:22 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 887A61FA72;
 Sat, 18 Jun 2022 10:48:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5AB461348B;
 Sat, 18 Jun 2022 10:48:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iJjUFHOtrWIXKAAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 18 Jun 2022 10:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa1b67a-eef4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655549299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O7Ezlwl1sJwrsR+Lshf/i9Q310DW+uMi1RBHydn95TE=;
	b=ppXAcJl8tx1RGIqamUFhG9gpMhtvj6ECxnAMPfg+AW78PChXgHICCgKWoVNCW1yULseeny
	9tT2dAtdH600cuoxTX+s/pFLeEyO2F9xHSedyG/cwkOxkRB16oCMtk/+naorezL3W5cNuY
	tslEcxEQOHT7jMTS20aYMqiY04HM2A0=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 3/3] mini-os: fix number of pages for PVH
Date: Sat, 18 Jun 2022 12:48:16 +0200
Message-Id: <20220618104816.11527-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220618104816.11527-1-jgross@suse.com>
References: <20220618104816.11527-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current allocation obtained from Xen includes the pages allocated
in the MMIO area. Fix the calculation of the highest available RAM
page by subtracting the size of that area.

This requires reading the E820 map before this value is needed.

At the same time add the LAPIC page to the memory map in order to
avoid reusing that PFN for internal purposes.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/mm.c         |  4 +++-
 balloon.c             |  2 +-
 e820.c                | 28 +++++++++++++++++++++-------
 include/e820.h        |  1 +
 include/x86/arch_mm.h |  2 ++
 5 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 41fcee67..37089978 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -114,6 +114,8 @@ void arch_mm_preinit(void *p)
     if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
         e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
                          hsi->memmap_paddr, hsi->memmap_entries);
+    else
+        e820_init_memmap(NULL, 0);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
@@ -124,7 +126,7 @@ void arch_mm_preinit(void *p)
         do_exit();
     }
 
-    last_free_pfn = e820_get_maxpfn(ret);
+    last_free_pfn = e820_get_maxpfn(ret - e820_initial_reserved_pfns);
     balloon_set_nr_pages(ret, last_free_pfn);
 }
 #endif
diff --git a/balloon.c b/balloon.c
index 9dc77c54..779223de 100644
--- a/balloon.c
+++ b/balloon.c
@@ -54,7 +54,7 @@ void get_max_pages(void)
         return;
     }
 
-    nr_max_pages = ret;
+    nr_max_pages = ret - e820_initial_reserved_pfns;
     printk("Maximum memory size: %ld pages\n", nr_max_pages);
 
     nr_max_pfn = e820_get_maxpfn(nr_max_pages);
diff --git a/e820.c b/e820.c
index ad91e00b..c3047336 100644
--- a/e820.c
+++ b/e820.c
@@ -29,6 +29,8 @@
 #include <mini-os/e820.h>
 #include <xen/memory.h>
 
+unsigned int e820_initial_reserved_pfns;
+
 #ifdef CONFIG_E820_TRIVIAL
 struct e820entry e820_map[1] = {
     {
@@ -40,10 +42,6 @@ struct e820entry e820_map[1] = {
 
 unsigned e820_entries = 1;
 
-static void e820_get_memmap(void)
-{
-}
-
 #else
 struct e820entry e820_map[E820_MAX];
 unsigned e820_entries;
@@ -199,6 +197,7 @@ static void e820_sanitize(void)
 {
     int i;
     unsigned long end, start;
+    bool found_lapic = false;
 
     /* Sanitize memory map in current form. */
     e820_process_entries();
@@ -238,8 +237,20 @@ static void e820_sanitize(void)
 
     /* Make remaining temporarily reserved entries permanently reserved. */
     for ( i = 0; i < e820_entries; i++ )
+    {
         if ( e820_map[i].type == E820_TMP_RESERVED )
             e820_map[i].type = E820_RESERVED;
+        if ( e820_map[i].type == E820_RESERVED )
+        {
+            e820_initial_reserved_pfns += e820_map[i].size / PAGE_SIZE;
+            if ( e820_map[i].addr <= LAPIC_ADDRESS &&
+                 e820_map[i].addr + e820_map[i].size > LAPIC_ADDRESS )
+                found_lapic = true;
+        }
+    }
+
+    if ( !found_lapic )
+        e820_insert_entry(LAPIC_ADDRESS, PAGE_SIZE, E820_RESERVED);
 }
 
 static void e820_get_memmap(void)
@@ -264,6 +275,12 @@ void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
 {
     unsigned int i;
 
+    if ( !entry )
+    {
+        e820_get_memmap();
+        return;
+    }
+
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
@@ -365,9 +382,6 @@ unsigned long e820_get_maxpfn(unsigned long pages)
     int i;
     unsigned long pfns = 0, start = 0;
 
-    if ( !e820_entries )
-        e820_get_memmap();
-
     for ( i = 0; i < e820_entries; i++ )
     {
         if ( e820_map[i].type != E820_RAM )
diff --git a/include/e820.h b/include/e820.h
index 5438a7c8..5533894e 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -51,6 +51,7 @@ struct __packed e820entry {
 
 extern struct e820entry e820_map[];
 extern unsigned e820_entries;
+extern unsigned int e820_initial_reserved_pfns;
 
 unsigned long e820_get_maxpfn(unsigned long pages);
 unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
diff --git a/include/x86/arch_mm.h b/include/x86/arch_mm.h
index ffbec5a8..a1b975dc 100644
--- a/include/x86/arch_mm.h
+++ b/include/x86/arch_mm.h
@@ -207,6 +207,8 @@ typedef unsigned long pgentry_t;
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)        (((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
+#define LAPIC_ADDRESS	CONST(0xfee00000)
+
 #ifndef __ASSEMBLY__
 /* Definitions for machine and pseudophysical addresses. */
 #ifdef __i386__
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Jun 18 10:48:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 10:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351764.578509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0B-0004Tl-1z; Sat, 18 Jun 2022 10:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351764.578509; Sat, 18 Jun 2022 10:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W0A-0004Te-Un; Sat, 18 Jun 2022 10:48:26 +0000
Received: by outflank-mailman (input) for mailman id 351764;
 Sat, 18 Jun 2022 10:48:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c8TN=WZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2W09-00049A-0J
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 10:48:25 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a73de96-eef4-11ec-b725-ed86ccbb4733;
 Sat, 18 Jun 2022 12:48:22 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id EA8D81F99B;
 Sat, 18 Jun 2022 10:48:18 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B00241348B;
 Sat, 18 Jun 2022 10:48:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4ZpmKXKtrWIXKAAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 18 Jun 2022 10:48:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a73de96-eef4-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655549298; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=RInIqIonFLW15r0nOyuZItE2ii5IHR7IInXm5oJr6O0=;
	b=nBEPrLOR64DhcXkRffIHR5xzxr3hXwZdB7FIRtYxv+0g2L7x92RESUSUMG1QwoFxZ3WXbm
	xClM03iUTwX439VTp/VyJfEg0mxw46341i7CkZZFnlRPGHXdPoNfGqa7TJYDWsLMti7L09
	TrOyd0he9ZBFY1LP+s3gIxzcfpriY+s=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 0/3] mini-os: some memory map updates for PVH
Date: Sat, 18 Jun 2022 12:48:13 +0200
Message-Id: <20220618104816.11527-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some memory map related changes/fixes for PVH mode:

- Prefer the memory map delivered via start-info over the one obtained
  from the hypervisor. This is a prerequisite for Xenstore-stubdom
  live-update with raising the memory limit.

- Fix a bug related to ballooning in PVH mode: PVH Xenstore-stubdom
  can't read its target memory size from Xenstore, as this would
  introduce a chicken-and-egg problem. The memory size read from the
  hypervisor, on the other hand, includes additional "special" pages
  marked as reserved in the memory map. Those pages need to be
  subtracted from the read size.

Juergen Gross (3):
  mini-os: take newest version of arch-x86/hvm/start_info.h
  mini-os: prefer memory map via start_info for PVH
  mini-os: fix number of pages for PVH

 arch/x86/mm.c                         | 10 ++++-
 balloon.c                             |  2 +-
 e820.c                                | 53 +++++++++++++++++++---
 include/e820.h                        |  5 +++
 include/x86/arch_mm.h                 |  2 +
 include/xen/arch-x86/hvm/start_info.h | 63 ++++++++++++++++++++++++++-
 6 files changed, 125 insertions(+), 10 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Jun 18 10:48:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 10:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351763.578499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W09-0004AZ-QP; Sat, 18 Jun 2022 10:48:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351763.578499; Sat, 18 Jun 2022 10:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2W09-0004A5-Ig; Sat, 18 Jun 2022 10:48:25 +0000
Received: by outflank-mailman (input) for mailman id 351763;
 Sat, 18 Jun 2022 10:48:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c8TN=WZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2W08-000499-Nx
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 10:48:24 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a7d2a1e-eef4-11ec-bd2d-47488cf2e6aa;
 Sat, 18 Jun 2022 12:48:22 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 54F1D1FA6F;
 Sat, 18 Jun 2022 10:48:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 265B31348B;
 Sat, 18 Jun 2022 10:48:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id wDcACHOtrWIXKAAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 18 Jun 2022 10:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a7d2a1e-eef4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655549299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yDDn1mSreaoh9MuJXeryyxgSIBKCF/BQ2diMJ00W4gc=;
	b=tot9UHjcC4L9sx2f56CUcELlvv/IVHFDTxeDcPAnz4Bj/i4fNtv5pPSqKmR/gLKf/HsxTo
	SYkKOWctuqnHNaPikTwIrJkl6w0g4R2sX5N9s2flNReoYoXm8Eab6uozGYNJtw/tBzQ0cq
	ydmFxdoJ7s1l6u7wTb2MAYCS9Ft63oY=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 2/3] mini-os: prefer memory map via start_info for PVH
Date: Sat, 18 Jun 2022 12:48:15 +0200
Message-Id: <20220618104816.11527-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220618104816.11527-1-jgross@suse.com>
References: <20220618104816.11527-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For some time now, a guest started in PVH mode has received the memory
map from Xen via the start_info structure.

Modify the PVH initialization to prefer this memory map over the one
obtained via hypercall, as this will allow adding information to the
memory map for a new kernel when supporting kexec.

In case the start_info structure doesn't contain memory map
information, fall back to the hypercall.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/mm.c  |  6 ++++++
 e820.c         | 25 +++++++++++++++++++++++++
 include/e820.h |  4 ++++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 220c0b4d..41fcee67 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -45,6 +45,7 @@
 #include <mini-os/xmalloc.h>
 #include <mini-os/e820.h>
 #include <xen/memory.h>
+#include <xen/arch-x86/hvm/start_info.h>
 
 #ifdef MM_DEBUG
 #define DEBUG(_f, _a...) \
@@ -108,6 +109,11 @@ void arch_mm_preinit(void *p)
 {
     long ret;
     domid_t domid = DOMID_SELF;
+    struct hvm_start_info *hsi = p;
+
+    if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
+        e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
+                         hsi->memmap_paddr, hsi->memmap_entries);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
diff --git a/e820.c b/e820.c
index 991ed382..ad91e00b 100644
--- a/e820.c
+++ b/e820.c
@@ -54,6 +54,7 @@ static char *e820_types[E820_TYPES] = {
     [E820_ACPI]     = "ACPI",
     [E820_NVS]      = "NVS",
     [E820_UNUSABLE] = "Unusable",
+    [E820_DISABLED] = "Disabled",
     [E820_PMEM]     = "PMEM"
 };
 
@@ -259,6 +260,30 @@ static void e820_get_memmap(void)
     e820_sanitize();
 }
 
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
+{
+    unsigned int i;
+
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_NVS != E820_NVS);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_UNUSABLE != E820_UNUSABLE);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_DISABLED != E820_DISABLED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_PMEM != E820_PMEM);
+
+    for ( i = 0; i < num; i++ )
+    {
+        e820_map[i].addr = entry[i].addr;
+        e820_map[i].size = entry[i].size;
+        e820_map[i].type = entry[i].type;
+    }
+
+    e820_entries = num;
+
+    e820_sanitize();
+}
+
 void arch_print_memmap(void)
 {
     int i;
diff --git a/include/e820.h b/include/e820.h
index aaf2f2ca..5438a7c8 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -26,6 +26,8 @@
 
 #if defined(__arm__) || defined(__aarch64__) || defined(CONFIG_PARAVIRT)
 #define CONFIG_E820_TRIVIAL
+#else
+#include <xen/arch-x86/hvm/start_info.h>
 #endif
 
 /* PC BIOS standard E820 types and structure. */
@@ -34,6 +36,7 @@
 #define E820_ACPI         3
 #define E820_NVS          4
 #define E820_UNUSABLE     5
+#define E820_DISABLED     6
 #define E820_PMEM         7
 #define E820_TYPES        8
 
@@ -54,6 +57,7 @@ unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 #ifndef CONFIG_E820_TRIVIAL
 unsigned long e820_get_reserved_pfns(int pages);
 void e820_put_reserved_pfns(unsigned long start_pfn, int pages);
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num);
 #endif
 
 #endif /*__E820_HEADER*/
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Jun 18 12:13:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 12:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351814.578553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XKX-00071A-Qh; Sat, 18 Jun 2022 12:13:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351814.578553; Sat, 18 Jun 2022 12:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XKX-000713-O1; Sat, 18 Jun 2022 12:13:33 +0000
Received: by outflank-mailman (input) for mailman id 351814;
 Sat, 18 Jun 2022 12:13:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qCNw=WZ=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o2XKW-0006z2-2k
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 12:13:32 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1063b743-ef00-11ec-bd2d-47488cf2e6aa;
 Sat, 18 Jun 2022 14:13:30 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id A697D20157;
 Sat, 18 Jun 2022 14:13:28 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id pdh_QtE20XQS; Sat, 18 Jun 2022 14:13:28 +0200 (CEST)
Received: from begin (cerbere11.aquilenet.fr [185.233.102.190])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 7D6FB20154;
 Sat, 18 Jun 2022 14:13:28 +0200 (CEST)
Received: from samy by begin with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o2XKS-00BaWL-DO;
 Sat, 18 Jun 2022 14:13:28 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1063b743-ef00-11ec-bd2d-47488cf2e6aa
Date: Sat, 18 Jun 2022 14:13:28 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH 3/3] mini-os: fix number of pages for PVH
Message-ID: <20220618121328.54byw5ggucap6x5j@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-4-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220618104816.11527-4-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Hello,

Juergen Gross, on Sat, 18 Jun 2022 at 12:48:16 +0200, wrote:
> @@ -124,7 +126,7 @@ void arch_mm_preinit(void *p)
>          do_exit();
>      }
>  
> -    last_free_pfn = e820_get_maxpfn(ret);
> +    last_free_pfn = e820_get_maxpfn(ret - e820_initial_reserved_pfns);

Mmm, but the reserved pfns could be in the middle of the e820 address
space.

Samuel


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 12:13:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 12:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351815.578565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XKm-0007Lq-3a; Sat, 18 Jun 2022 12:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351815.578565; Sat, 18 Jun 2022 12:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XKl-0007Lg-Vv; Sat, 18 Jun 2022 12:13:47 +0000
Received: by outflank-mailman (input) for mailman id 351815;
 Sat, 18 Jun 2022 12:13:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qCNw=WZ=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o2XKk-0006z2-EE
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 12:13:46 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19b8cc89-ef00-11ec-bd2d-47488cf2e6aa;
 Sat, 18 Jun 2022 14:13:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 1FE0D20157;
 Sat, 18 Jun 2022 14:13:45 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id yy_byW0nzfZw; Sat, 18 Jun 2022 14:13:45 +0200 (CEST)
Received: from begin (cerbere11.aquilenet.fr [185.233.102.190])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 039CC20154;
 Sat, 18 Jun 2022 14:13:45 +0200 (CEST)
Received: from samy by begin with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o2XKi-00BaWW-97;
 Sat, 18 Jun 2022 14:13:44 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19b8cc89-ef00-11ec-bd2d-47488cf2e6aa
Date: Sat, 18 Jun 2022 14:13:44 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH 1/3] mini-os: take newest version of
 arch-x86/hvm/start_info.h
Message-ID: <20220618121344.3nsrtgwvmdhnfvlq@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-2-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220618104816.11527-2-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Sat, 18 Jun 2022 at 12:48:14 +0200, wrote:
> Update include/xen/arch-x86/hvm/start_info.h to the newest version
> from the Xen tree.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  include/xen/arch-x86/hvm/start_info.h | 63 ++++++++++++++++++++++++++-
>  1 file changed, 62 insertions(+), 1 deletion(-)
> 
> diff --git a/include/xen/arch-x86/hvm/start_info.h b/include/xen/arch-x86/hvm/start_info.h
> index 64841597..50af9ea2 100644
> --- a/include/xen/arch-x86/hvm/start_info.h
> +++ b/include/xen/arch-x86/hvm/start_info.h
> @@ -33,7 +33,7 @@
>   *    | magic          | Contains the magic value XEN_HVM_START_MAGIC_VALUE
>   *    |                | ("xEn3" with the 0x80 bit of the "E" set).
>   *  4 +----------------+
> - *    | version        | Version of this structure. Current version is 0. New
> + *    | version        | Version of this structure. Current version is 1. New
>   *    |                | versions are guaranteed to be backwards-compatible.
>   *  8 +----------------+
>   *    | flags          | SIF_xxx flags.
> @@ -48,6 +48,15 @@
>   * 32 +----------------+
>   *    | rsdp_paddr     | Physical address of the RSDP ACPI data structure.
>   * 40 +----------------+
> + *    | memmap_paddr   | Physical address of the (optional) memory map. Only
> + *    |                | present in version 1 and newer of the structure.
> + * 48 +----------------+
> + *    | memmap_entries | Number of entries in the memory map table. Zero
> + *    |                | if there is no memory map being provided. Only
> + *    |                | present in version 1 and newer of the structure.
> + * 52 +----------------+
> + *    | reserved       | Version 1 and newer only.
> + * 56 +----------------+
>   *
>   * The layout of each entry in the module structure is the following:
>   *
> @@ -62,13 +71,51 @@
>   *    | reserved       |
>   * 32 +----------------+
>   *
> + * The layout of each entry in the memory map table is as follows:
> + *
> + *  0 +----------------+
> + *    | addr           | Base address
> + *  8 +----------------+
> + *    | size           | Size of mapping in bytes
> + * 16 +----------------+
> + *    | type           | Type of mapping as defined between the hypervisor
> + *    |                | and guest. See XEN_HVM_MEMMAP_TYPE_* values below.
> + * 20 +----------------|
> + *    | reserved       |
> + * 24 +----------------+
> + *
>   * The address and sizes are always a 64bit little endian unsigned integer.
>   *
>   * NB: Xen on x86 will always try to place all the data below the 4GiB
>   * boundary.
> + *
> + * Version numbers of the hvm_start_info structure have evolved like this:
> + *
> + * Version 0:  Initial implementation.
> + *
> + * Version 1:  Added the memmap_paddr/memmap_entries fields (plus 4 bytes of
> + *             padding) to the end of the hvm_start_info struct. These new
> + *             fields can be used to pass a memory map to the guest. The
> + *             memory map is optional and so guests that understand version 1
> + *             of the structure must check that memmap_entries is non-zero
> + *             before trying to read the memory map.
>   */
>  #define XEN_HVM_START_MAGIC_VALUE 0x336ec578
>  
> +/*
> + * The values used in the type field of the memory map table entries are
> + * defined below and match the Address Range Types as defined in the "System
> + * Address Map Interfaces" section of the ACPI Specification. Please refer to
> + * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications
> + */
> +#define XEN_HVM_MEMMAP_TYPE_RAM       1
> +#define XEN_HVM_MEMMAP_TYPE_RESERVED  2
> +#define XEN_HVM_MEMMAP_TYPE_ACPI      3
> +#define XEN_HVM_MEMMAP_TYPE_NVS       4
> +#define XEN_HVM_MEMMAP_TYPE_UNUSABLE  5
> +#define XEN_HVM_MEMMAP_TYPE_DISABLED  6
> +#define XEN_HVM_MEMMAP_TYPE_PMEM      7
> +
>  /*
>   * C representation of the x86/HVM start info layout.
>   *
> @@ -86,6 +133,13 @@ struct hvm_start_info {
>      uint64_t cmdline_paddr;     /* Physical address of the command line.     */
>      uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
>                                  /* structure.                                */
> +    /* All following fields only present in version 1 and newer */
> +    uint64_t memmap_paddr;      /* Physical address of an array of           */
> +                                /* hvm_memmap_table_entry.                   */
> +    uint32_t memmap_entries;    /* Number of entries in the memmap table.    */
> +                                /* Value will be zero if there is no memory  */
> +                                /* map being provided.                       */
> +    uint32_t reserved;          /* Must be zero.                             */
>  };
>  
>  struct hvm_modlist_entry {
> @@ -95,4 +149,11 @@ struct hvm_modlist_entry {
>      uint64_t reserved;
>  };
>  
> +struct hvm_memmap_table_entry {
> +    uint64_t addr;              /* Base address of the memory region         */
> +    uint64_t size;              /* Size of the memory region in bytes        */
> +    uint32_t type;              /* Mapping type                              */
> +    uint32_t reserved;          /* Must be zero for Version 1.               */
> +};
> +
>  #endif /* __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ */
> -- 
> 2.35.3
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support Inria's Commission d'Évaluation.


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 12:14:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 12:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351829.578585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XLC-00083K-KH; Sat, 18 Jun 2022 12:14:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351829.578585; Sat, 18 Jun 2022 12:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2XLC-000834-DV; Sat, 18 Jun 2022 12:14:14 +0000
Received: by outflank-mailman (input) for mailman id 351829;
 Sat, 18 Jun 2022 12:14:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qCNw=WZ=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o2XLA-0007p5-MM
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 12:14:12 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28f3e525-ef00-11ec-b725-ed86ccbb4733;
 Sat, 18 Jun 2022 14:14:11 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id F36A220157;
 Sat, 18 Jun 2022 14:14:09 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id YauqhsVtHHYl; Sat, 18 Jun 2022 14:14:09 +0200 (CEST)
Received: from begin (cerbere11.aquilenet.fr [185.233.102.190])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id DE9D220154;
 Sat, 18 Jun 2022 14:14:09 +0200 (CEST)
Received: from samy by begin with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o2XL7-00BaWi-S0;
 Sat, 18 Jun 2022 14:14:09 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28f3e525-ef00-11ec-b725-ed86ccbb4733
Date: Sat, 18 Jun 2022 14:14:09 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH 2/3] mini-os: prefer memory map via start_info for PVH
Message-ID: <20220618121409.mopy5vqf3z7gpjed@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-3-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220618104816.11527-3-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Sat, 18 Jun 2022 at 12:48:15 +0200, wrote:
> For some time now, a guest started in PVH mode has received the memory
> map from Xen via the start_info structure.
> 
> Modify the PVH initialization to prefer this memory map over the one
> obtained via the hypercall, as this will allow adding information to the
> memory map for a new kernel when kexec is supported.
> 
> In case the start_info structure doesn't contain memory map information,
> fall back to the hypercall.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  arch/x86/mm.c  |  6 ++++++
>  e820.c         | 25 +++++++++++++++++++++++++
>  include/e820.h |  4 ++++
>  3 files changed, 35 insertions(+)
> 
> diff --git a/arch/x86/mm.c b/arch/x86/mm.c
> index 220c0b4d..41fcee67 100644
> --- a/arch/x86/mm.c
> +++ b/arch/x86/mm.c
> @@ -45,6 +45,7 @@
>  #include <mini-os/xmalloc.h>
>  #include <mini-os/e820.h>
>  #include <xen/memory.h>
> +#include <xen/arch-x86/hvm/start_info.h>
>  
>  #ifdef MM_DEBUG
>  #define DEBUG(_f, _a...) \
> @@ -108,6 +109,11 @@ void arch_mm_preinit(void *p)
>  {
>      long ret;
>      domid_t domid = DOMID_SELF;
> +    struct hvm_start_info *hsi = p;
> +
> +    if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
> +        e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
> +                         hsi->memmap_paddr, hsi->memmap_entries);
>  
>      pt_base = page_table_base;
>      first_free_pfn = PFN_UP(to_phys(&_end));
> diff --git a/e820.c b/e820.c
> index 991ed382..ad91e00b 100644
> --- a/e820.c
> +++ b/e820.c
> @@ -54,6 +54,7 @@ static char *e820_types[E820_TYPES] = {
>      [E820_ACPI]     = "ACPI",
>      [E820_NVS]      = "NVS",
>      [E820_UNUSABLE] = "Unusable",
> +    [E820_DISABLED] = "Disabled",
>      [E820_PMEM]     = "PMEM"
>  };
>  
> @@ -259,6 +260,30 @@ static void e820_get_memmap(void)
>      e820_sanitize();
>  }
>  
> +void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
> +{
> +    unsigned int i;
> +
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_NVS != E820_NVS);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_UNUSABLE != E820_UNUSABLE);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_DISABLED != E820_DISABLED);
> +    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_PMEM != E820_PMEM);
> +
> +    for ( i = 0; i < num; i++ )
> +    {
> +        e820_map[i].addr = entry[i].addr;
> +        e820_map[i].size = entry[i].size;
> +        e820_map[i].type = entry[i].type;
> +    }
> +
> +    e820_entries = num;
> +
> +    e820_sanitize();
> +}
> +
>  void arch_print_memmap(void)
>  {
>      int i;
> diff --git a/include/e820.h b/include/e820.h
> index aaf2f2ca..5438a7c8 100644
> --- a/include/e820.h
> +++ b/include/e820.h
> @@ -26,6 +26,8 @@
>  
>  #if defined(__arm__) || defined(__aarch64__) || defined(CONFIG_PARAVIRT)
>  #define CONFIG_E820_TRIVIAL
> +#else
> +#include <xen/arch-x86/hvm/start_info.h>
>  #endif
>  
>  /* PC BIOS standard E820 types and structure. */
> @@ -34,6 +36,7 @@
>  #define E820_ACPI         3
>  #define E820_NVS          4
>  #define E820_UNUSABLE     5
> +#define E820_DISABLED     6
>  #define E820_PMEM         7
>  #define E820_TYPES        8
>  
> @@ -54,6 +57,7 @@ unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
>  #ifndef CONFIG_E820_TRIVIAL
>  unsigned long e820_get_reserved_pfns(int pages);
>  void e820_put_reserved_pfns(unsigned long start_pfn, int pages);
> +void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num);
>  #endif
>  
>  #endif /*__E820_HEADER*/
> -- 
> 2.35.3
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support Inria's Commission d'Évaluation.


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 12:54:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 12:54:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351847.578595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Xy3-0004EP-Lk; Sat, 18 Jun 2022 12:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351847.578595; Sat, 18 Jun 2022 12:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Xy3-0004EI-HM; Sat, 18 Jun 2022 12:54:23 +0000
Received: by outflank-mailman (input) for mailman id 351847;
 Sat, 18 Jun 2022 12:54:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Xy2-0004E8-RF; Sat, 18 Jun 2022 12:54:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Xy2-0003aU-Nq; Sat, 18 Jun 2022 12:54:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Xy2-00017N-4s; Sat, 18 Jun 2022 12:54:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2Xy2-0007hT-4Q; Sat, 18 Jun 2022 12:54:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9pkQHxsP1M5c3JvImLPysbIh2LCECZlQnAzRz+C0O3A=; b=gZnH96rKf7OHIH6yF5P+NBePJq
	112LSLrq5uExML9gnceYqRQgOPY5u318uzkOos22/LZAzL3hYpOSOkQD5BtKz1+PLdPGele5CL16b
	OZTJhavzy101yZcpcF7XCe+zbnq/pbyQh6i+6OiMVJahHOdvME8x3R+6t7FJalZ+LR4c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171265: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
X-Osstest-Versions-That:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 12:54:22 +0000

flight 171265 linux-5.4 real [real]
flight 171269 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171265/
http://logs.test-lab.xenproject.org/osstest/logs/171269/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 171224 REGR. vs. 171173

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm   7 xen-install      fail in 171224 pass in 171265
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 171224 pass in 171265
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail pass in 171224
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install          fail pass in 171224
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 171224
 test-armhf-armhf-xl-vhd      17 guest-start/debian.repeat  fail pass in 171224

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  14 guest-start         fail in 171224 like 171167
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 171224 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 171224 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171173
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171173
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
baseline version:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d

Last test of basis   171173  2022-06-15 01:11:00 Z    3 days
Testing same since   171200  2022-06-16 11:41:43 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Borislav Petkov <bp@suse.de>
  Florian Fainelli <f.fainelli@gmail.com>
  Gayatri Kammela <gayatri.kammela@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 352 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 13:08:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 13:08:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351863.578606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2YBT-0005xC-1h; Sat, 18 Jun 2022 13:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351863.578606; Sat, 18 Jun 2022 13:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2YBS-0005x5-V5; Sat, 18 Jun 2022 13:08:14 +0000
Received: by outflank-mailman (input) for mailman id 351863;
 Sat, 18 Jun 2022 13:08:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2YBR-0005wv-OX; Sat, 18 Jun 2022 13:08:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2YBR-0003q3-Gj; Sat, 18 Jun 2022 13:08:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2YBR-0001xy-0N; Sat, 18 Jun 2022 13:08:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2YBQ-0000nO-WC; Sat, 18 Jun 2022 13:08:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z6xIwBfpQOh0wyAAV9WzY8gj/am1qY5NX/1LOb0zNdM=; b=JnCAw3tbMhmfnTczAjQWlinQd6
	x9kNBXFTNIA8BEhp/k8vYumSBxGf4jiBudd+ZUF+H/AHMPHYmOIWaIuXcFz+0worr/REkhTNUelw2
	A9cFfwHnmALgP9Va4HgjTkvKY0nJfCThvNSVrgzlmeeoMWNENsWNPH6Qfkj/2SzWEqM4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171266: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-pvops:kernel-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=812edc95a36b997d674ce4f3a56f4fd01f31904e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 13:08:12 +0000

flight 171266 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              812edc95a36b997d674ce4f3a56f4fd01f31904e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  708 days
Failing since        151818  2020-07-11 04:18:52 Z  707 days  689 attempts
Testing same since   171230  2022-06-17 04:18:59 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113770 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 14:07:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 14:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351874.578620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Z6V-0003n9-J4; Sat, 18 Jun 2022 14:07:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351874.578620; Sat, 18 Jun 2022 14:07:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2Z6V-0003n2-FT; Sat, 18 Jun 2022 14:07:11 +0000
Received: by outflank-mailman (input) for mailman id 351874;
 Sat, 18 Jun 2022 14:07:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c8TN=WZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2Z6U-0003mr-2O
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 14:07:10 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0ea9b44-ef0f-11ec-b725-ed86ccbb4733;
 Sat, 18 Jun 2022 16:07:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6AB331F74A;
 Sat, 18 Jun 2022 14:07:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3BABE1342C;
 Sat, 18 Jun 2022 14:07:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hdGqDAzcrWJsYwAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 18 Jun 2022 14:07:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0ea9b44-ef0f-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655561228; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iJ5dO8R/0jN47PrOOM8KMUMIFakJdFFnhXMc95nQrRY=;
	b=WqxxO9gExdamolPAt72JjvvZZjNQOVhU0xeF+4sq5FaXmr71Ju76gYSuRGPq97OPIpj+Sz
	0htcfWGdjqRq76GczJSisAw0ZMobLn7cWdAlTtLfEE3qYpbXTLPTx0PKN71MAaxz0sn3mM
	6tPAD4c6MGbskTWkdxSQjl1VdM6ikpw=
Message-ID: <8815b69d-f687-3b0f-1b9c-6bd273cd3404@suse.com>
Date: Sat, 18 Jun 2022 16:07:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 3/3] mini-os: fix number of pages for PVH
Content-Language: en-US
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-4-jgross@suse.com>
 <20220618121328.54byw5ggucap6x5j@begin>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220618121328.54byw5ggucap6x5j@begin>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------p9xLAeNx0XdfvcYeMBn0FF8F"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------p9xLAeNx0XdfvcYeMBn0FF8F
Content-Type: multipart/mixed; boundary="------------E4wgi72Bwn6iiqNLXv2KDhkQ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org, wl@xen.org
Message-ID: <8815b69d-f687-3b0f-1b9c-6bd273cd3404@suse.com>
Subject: Re: [PATCH 3/3] mini-os: fix number of pages for PVH
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-4-jgross@suse.com>
 <20220618121328.54byw5ggucap6x5j@begin>
In-Reply-To: <20220618121328.54byw5ggucap6x5j@begin>

--------------E4wgi72Bwn6iiqNLXv2KDhkQ
Content-Type: multipart/mixed; boundary="------------KvEEbbqzaVpnHYxCeQAofYnQ"

--------------KvEEbbqzaVpnHYxCeQAofYnQ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.06.22 14:13, Samuel Thibault wrote:
> Hello,
> 
> Juergen Gross, le sam. 18 juin 2022 12:48:16 +0200, a ecrit:
>> @@ -124,7 +126,7 @@ void arch_mm_preinit(void *p)
>>           do_exit();
>>       }
>>  
>> -    last_free_pfn = e820_get_maxpfn(ret);
>> +    last_free_pfn = e820_get_maxpfn(ret - e820_initial_reserved_pfns);
> 
> Mmm, but the reserved pfn could be in the middle of the e820 address
> space.

That doesn't matter.

e820_get_maxpfn(n) will just return the pfn of the n-th RAM pfn it finds
in the E820 map. This should be the last pfn with allocated memory.
Without subtracting the number of reserved pfns (which normally contain
memory that is allocated for the guest, but not usable as RAM), Mini-OS
tries to use RAM beyond its allocation, which fails.


Juergen
--------------KvEEbbqzaVpnHYxCeQAofYnQ
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KvEEbbqzaVpnHYxCeQAofYnQ--

--------------E4wgi72Bwn6iiqNLXv2KDhkQ--

--------------p9xLAeNx0XdfvcYeMBn0FF8F
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKt3AsFAwAAAAAACgkQsN6d1ii/Ey/Z
iwf/YdA1009qcPXsrTxCmMp/3SAyk1katT1ziIVATeduWpDSAKo5f29uSzZ2PZG+LYHh5M/AaM8n
9qTZ5wGiEdCANYMG6cbHkSE6f0b2/1oEJ3dqPrruA6FLcZgy93cpJrR+DETLG/Cadtq3+21lTwfJ
y9JtLftI1ycNzyd08NmZ5Z5E0wZoBSjE6R26iODOJtQICXtYPB9IT4XiwXGJZCe2bYZGCdBopcdr
ejSw3I6+yE3+X2dwZKPep2JHD7GuzEE+cveUf6dSo71F4ZbuEM6IVnWJWDDOnEBjxz7viLLtWtMs
nZjWRHnsjRsDion1jSUCkHYWTdaAbuWJQJYTAoX71A==
=aw8p
-----END PGP SIGNATURE-----

--------------p9xLAeNx0XdfvcYeMBn0FF8F--


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 14:37:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 14:37:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351921.578653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ZZi-0008IV-CY; Sat, 18 Jun 2022 14:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351921.578653; Sat, 18 Jun 2022 14:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ZZi-0008IO-9c; Sat, 18 Jun 2022 14:37:22 +0000
Received: by outflank-mailman (input) for mailman id 351921;
 Sat, 18 Jun 2022 14:37:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ZZg-0008IE-O0; Sat, 18 Jun 2022 14:37:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ZZg-0005Mk-IA; Sat, 18 Jun 2022 14:37:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ZZg-0006nB-42; Sat, 18 Jun 2022 14:37:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2ZZg-0001Se-3X; Sat, 18 Jun 2022 14:37:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oa9cWxbSnHdpUt+MrzG1xIfu92q8Vq7lbjBBodLM8/k=; b=FWhu1fLi5t01lKn0plVGXcE5nK
	/TU2HgQP4df2udAXcmPAjjunEBbS84dvqNw9pWrvHZn4eu+fqu5v2vB8qljV4jum1TmY0faBYvxFw
	LlSq3EMaiGxGLoexmnK+omYrEt/TBgSq5JPk/eI7lDzFxAXzBTeOtK73bscasNI9bgsU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171263: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4b35035bcf80ddb47c0112c4fbd84a63a2836a18
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 14:37:20 +0000

flight 171263 linux-linus real [real]
flight 171270 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171263/
http://logs.test-lab.xenproject.org/osstest/logs/171270/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-freebsd12-amd64 21 guest-start/freebsd.repeat fail in 171270 REGR. vs. 170714

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd12-amd64 19 guest-localmigrate/x10 fail pass in 171270-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4b35035bcf80ddb47c0112c4fbd84a63a2836a18
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   25 days
Failing since        170716  2022-05-24 11:12:06 Z   25 days   58 attempts
Testing same since   171263  2022-06-17 22:23:23 Z    0 days    1 attempts

------------------------------------------------------------
2386 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 281617 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 15:57:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 15:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351934.578664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2aon-00084b-DX; Sat, 18 Jun 2022 15:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351934.578664; Sat, 18 Jun 2022 15:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2aon-00084U-AF; Sat, 18 Jun 2022 15:57:01 +0000
Received: by outflank-mailman (input) for mailman id 351934;
 Sat, 18 Jun 2022 15:57:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qCNw=WZ=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o2aom-00084J-8i
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 15:57:00 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4700c7ea-ef1f-11ec-bd2d-47488cf2e6aa;
 Sat, 18 Jun 2022 17:56:58 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 986BC2015C;
 Sat, 18 Jun 2022 17:56:55 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id KfNSFwHv7YZv; Sat, 18 Jun 2022 17:56:55 +0200 (CEST)
Received: from begin (201.52.205.77.rev.sfr.net [77.205.52.201])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 5385A20003;
 Sat, 18 Jun 2022 17:56:55 +0200 (CEST)
Received: from samy by begin with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o2aog-00Bc2C-Cm;
 Sat, 18 Jun 2022 17:56:54 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4700c7ea-ef1f-11ec-bd2d-47488cf2e6aa
Date: Sat, 18 Jun 2022 17:56:54 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH 3/3] mini-os: fix number of pages for PVH
Message-ID: <20220618155654.kcvodnjcd7khwspl@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-4-jgross@suse.com>
 <20220618121328.54byw5ggucap6x5j@begin>
 <8815b69d-f687-3b0f-1b9c-6bd273cd3404@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8815b69d-f687-3b0f-1b9c-6bd273cd3404@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Sat 18 Jun 2022 16:07:07 +0200, wrote:
> On 18.06.22 14:13, Samuel Thibault wrote:
> > Hello,
> > 
> > Juergen Gross, on Sat 18 Jun 2022 12:48:16 +0200, wrote:
> > > @@ -124,7 +126,7 @@ void arch_mm_preinit(void *p)
> > >           do_exit();
> > >       }
> > > -    last_free_pfn = e820_get_maxpfn(ret);
> > > +    last_free_pfn = e820_get_maxpfn(ret - e820_initial_reserved_pfns);
> > 
> > Mmm, but the reserved pfn could be in the middle of the e820 address
> > space.
> 
> That doesn't matter.
> 
> e820_get_maxpfn(n) will just return the n-th RAM pfn it finds in the
> E820 map.

Yes, but subtracting at this point looks a bit hacky to me.

It seems to me that it'd be better to make e820_get_maxpfn count the
reserved pages itself (while never returning one of their pfns, of
course), rather than having e820_sanitize look at the reserved pages,
store the count somewhere, and hope that all other code will remember to
subtract it before calling e820_get_maxpfn.

I mean something like:

unsigned long e820_get_maxpfn(unsigned long pages)
{
    int i;
    unsigned long pfns = 0, start = 0;

    if ( !e820_entries )
        e820_get_memmap();

    for ( i = 0; i < e820_entries; i++ )
    {
        unsigned long epfns = e820_map[i].size >> PAGE_SHIFT;

        if ( e820_map[i].type == E820_RESERVED )
        {
            /* This counts in the memory reservation, but is not usable. */
            pages -= epfns;
            continue;
        }
        if ( e820_map[i].type != E820_RAM )
            continue;

        start = e820_map[i].addr >> PAGE_SHIFT;
        pfns = epfns;
        if ( pages <= pfns )
            return start + pages;
        pages -= pfns;
    }

    /* Reservation exceeds the available RAM: return the end of the
     * last RAM region. */
    return start + pfns;
}

Samuel


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 16:53:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 16:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351948.578678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2bhY-0006Wn-KP; Sat, 18 Jun 2022 16:53:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351948.578678; Sat, 18 Jun 2022 16:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2bhY-0006Wg-Hm; Sat, 18 Jun 2022 16:53:36 +0000
Received: by outflank-mailman (input) for mailman id 351948;
 Sat, 18 Jun 2022 16:53:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2bhX-0006WW-Hs; Sat, 18 Jun 2022 16:53:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2bhX-0008A3-DY; Sat, 18 Jun 2022 16:53:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2bhW-0006sQ-Qz; Sat, 18 Jun 2022 16:53:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2bhW-0003BC-QY; Sat, 18 Jun 2022 16:53:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AFmQf/XyVsq3kLxj5XDBXW4ppW76FZSrFRPKX4q1vEk=; b=L9RzY3xn8ZM1CdUD+HkJgNm7RC
	B/StlRGsDxEh0zA4tkFnYRZ5NaNVrW6LbaWmw6CNfv8lyYtWzDm9+9tNeaNSWwcTDyFRXoZ2EKqUQ
	z6/SiW9G3R/ysXbbCceH3xoI/YSvDqL0kLHBjYBNd5Z5BDcnthPqzl/H8asb/fuAB8/c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171267: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
X-Osstest-Versions-That:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 16:53:34 +0000

flight 171267 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171267/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-migrupgrade    <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-migrupgrade  6 host-install/src_host(6) broken pass in 171252

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171252
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171252
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171252
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171252
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171252
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171252
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171252
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171252
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171252
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171252
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171252
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171252
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171252
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171267  2022-06-18 04:44:45 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 broken  
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-migrupgrade broken
broken-step test-amd64-amd64-migrupgrade host-install/src_host(6)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 18 17:44:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 17:44:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351958.578690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2cUO-0003PO-GC; Sat, 18 Jun 2022 17:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351958.578690; Sat, 18 Jun 2022 17:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2cUO-0003PH-Bi; Sat, 18 Jun 2022 17:44:04 +0000
Received: by outflank-mailman (input) for mailman id 351958;
 Sat, 18 Jun 2022 17:44:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ejv/=WZ=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o2cUN-0003PB-9M
 for xen-devel@lists.xenproject.org; Sat, 18 Jun 2022 17:44:03 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3cd79f9a-ef2e-11ec-b725-ed86ccbb4733;
 Sat, 18 Jun 2022 19:44:01 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id eo8so10009663edb.0
 for <xen-devel@lists.xenproject.org>; Sat, 18 Jun 2022 10:44:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cd79f9a-ef2e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=yQ5Mt1K3U5MOISQ5+oCw+2BwV/RNzMfHtTPWWuDU2BQ=;
        b=gT3ef+WGvFvLvlb3Xq9/uBAsw9U6YljKL53ZsezVMbyj2ly+jlQtL45faLXLUItIsz
         m+sa/TmaRIby334AQauXdCrD21JVKibbaalZpNG19BkOW1X7aQIa5P+hEGr2Lnu+3IQ2
         TGm59X6KA2geWmSwSjICc3bvf6MkKYi1adVpMKTBLVgCWMHppMzVCXAoWUpsfNBvfo8z
         FDzP0gb7RzqFQSevUsebDvuwOru8yiKM7A+RC7rp2FkvEe+U8OO7lDnPkTolJvtFfkXk
         ZqOpMgr3eVVPDWIqiK7XcMR+Jh7779F3D8i7IuUPY2VDTEg1QFuPyQCOZpjnyNk8t0Uu
         4njw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=yQ5Mt1K3U5MOISQ5+oCw+2BwV/RNzMfHtTPWWuDU2BQ=;
        b=Wcd090PCxBRPWZSZWBI7gnEy/s0Wx+6+H0TUIEZImsePzIEs3YWNBAQMebchW0OyIw
         xi6FEVA3Ehqps7CIjt3nYIKzvSMV9K00cXQDZHENReP5QYY3h3T+zpZJhyyQOj1j3nzo
         sk7M7++x3Z6I9h98lXlLvlODb5oQe2P+MJ7Xcg9Zkc06JHWk9IKdInQvz8O349ji4A+a
         J9Z+UImjAwFDSHSyynvGc5gFWH80EXK4l9llq4wN1QINrpEiC/OI84MkT3oIatC0cprU
         BOqR4r5WGEvXiLv44txdXVuRBQtXM1MKB2ywCsDuR2JTnHOY1FbBrqPxby1RQmSFMOvl
         tmrg==
X-Gm-Message-State: AJIora/hBoenlnxUwdN6SzB0FBAb+FSPJNXyWgy+/ZNEhfMN+C+gKGbN
	F+x4Knf2o/trGWTMmlChMoqLOXzHXiOMCOnYvak=
X-Google-Smtp-Source: AGRyM1vFf2CerkT2dnmgBUnCctBORIZk+/tho9CGJTBPM+8+Bse2qRZq4bcPK4MSWSO9AlhlamlNRD1QWwhiEhBiKlQ=
X-Received: by 2002:a05:6402:51c7:b0:42d:f4ea:c09 with SMTP id
 r7-20020a05640251c700b0042df4ea0c09mr19078490edd.319.1655574240446; Sat, 18
 Jun 2022 10:44:00 -0700 (PDT)
MIME-Version: 1.0
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com> <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
In-Reply-To: <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
From: Dmytro Semenets <dmitry.semenets@gmail.com>
Date: Sat, 18 Jun 2022 20:43:47 +0300
Message-ID: <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Dmytro Semenets <Dmytro_Semenets@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Fri, 17 Jun 2022 at 14:12, Julien Grall <julien@xen.org>:
Hi Julien,
>
> Hi,
>
> On 17/06/2022 10:10, Volodymyr Babchuk wrote:
> > Julien Grall <julien@xen.org> writes:
> >
> >> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
> >>> Hi Julien,
> >>
> >> Hi Volodymyr,
> >>
> >>> Julien Grall <julien@xen.org> writes:
> >>>
> >>>> Hi,
> >>>>
> >>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
> >>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> >>>>> According to the PSCI specification, ARM TF can return DENIED on CPU_OFF.
> >>>>
> >>>> I am confused. The spec is talking about Trusted OS and not
> >>>> firmware. The documentation is also not specific to ARM Trusted
> >>>> Firmware. So did you mean "Trusted OS"?
> >>> It should be "firmware", I believe.
> >>
> >> Hmmm... I couldn't find a reference in the spec suggesting that
> >> CPU_OFF could return DENIED because of the firmware. Do you have a
> >> pointer to the spec?
> >
> > Ah, looks like we are talking about different things. Indeed, CPU_OFF
> > can return DENIED only because of a Trusted OS. But the entity that
> > *returns* the error to the caller is the firmware.
>
> Right, the interesting part is *why* DENIED is returned, not *who*
> returns it.
ARM TF returns DENIED *only* on the platform I have.
We have a dissonance between the spec and the Xen implementation:
DENIED returned by ARM TF, a Trusted OS, or anything else is not a
reason for panic, and we are hitting issues because of this.
machine_restart() behaves more or less correctly (it sometimes reports
a panic but still restarts the machine), but machine_halt() doesn't
work at all.
Moving execution to CPU0 is, to my understanding, a workaround: that
approach would fix machine_restart() but not machine_halt(). The
approach you suggested (spinning all CPUs) will work, but will save
less energy.
> >>>>> Refer to "Arm Power State Coordination Interface (DEN0022D.b)"
> >>>>> section 5.5.2
> >>>>
> >>>> Reading both 5.5.2 and 5.9.1 together, DENIED would be returned when
> >>>> the trusted OS can only run on one core.
> >>>>
> >>>> Some of the trusted OS are migratable. So I think we should first
> >>>> attempt to migrate the CPU. Then if it doesn't work, we should
> >>>> prevent the CPU from going offline.
> >>>>
> >>>> That said, upstream doesn't support cpu offlining (I don't know for
> >>>> your use case). In case of shutdown, it is not necessary to offline
> >>>> the CPU, so we could avoid calling CPU_OFF on all CPUs but
> >>>> one. Something like:
> >>>>
> >>> This is an even better approach, yes. But you mentioned CPU_OFF. Did you
> >>> mean SYSTEM_RESET?
> >>
> >> By CPU_OFF I was referring to the fact that Xen will issue the call
> >> on all CPUs but one. The remaining CPU will issue the command to
> >> reset/shutdown the system.
> >>
> >
> > I just want to clarify: the change you suggested removes the call to
> > stop_cpu() in halt_this_cpu(). So no CPU_OFF will be sent at all.
>
> I was describing the existing behavior.
>
> >
> > All CPUs except one will spin in
> >
> >      while ( 1 )
> >          wfi();
> >
> > while the last CPU will issue SYSTEM_OFF or SYSTEM_RESET.
> >
> > Is this correct?
>
> Yes.
>
> >
> >>>>    void machine_halt(void)
> >>>> @@ -21,10 +23,6 @@ void machine_halt(void)
> >>>>        smp_call_function(halt_this_cpu, NULL, 0);
> >>>>        local_irq_disable();
> >>>>
> >>>> -    /* Wait at most another 10ms for all other CPUs to go offline. */
> >>>> -    while ( (num_online_cpus() > 1) && (timeout-- > 0) )
> >>>> -        mdelay(1);
> >>>> -
> >>>>        /* This is mainly for PSCI-0.2, which does not return if success. */
> >>>>        call_psci_system_off();
> >>>>
> >>>>> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
> >>>>> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> >>>>
> >>>> I don't recall seeing a patch on the ML recently for this. So is this an
> >>>> internal review?
> >>> Yeah, sorry about that. Dmytro is a new member of our team and he is
> >>> not yet familiar with the differences between internal reviews and
> >>> reviews on the ML.
> >>
> >> No worries. I usually classify as internal review anything that was
> >> done privately. This looks to be a public review, although not on
> >> xen-devel.
> >>
> >> I understand that some of the patches are still in PoC stage
> >> and doing the review on your GitHub is a good idea. But for those
> >> that are meant for upstream (e.g. bug fixes, small patches), I would
> >> suggest doing the review on xen-devel directly.
> >
> > It is not always clear if a patch is eligible for upstream. At first
> > we thought that the problem was platform-specific and we weren't sure
> > that we would find a proper upstreamable fix.
>
> You can guess but not be sure until you send it upstream :). In fact,...
>
> > You probably saw that the PR's name quite differs from the final
> > patch. This is because the initial solution was completely different
> > from the final one.
>
> ... even before looking at your PR, this was the first solution I had in
> mind. I am still pondering whether this could be the best approach
> because I have the suspicion there might be some platform out there
> relying on receiving the shutdown request on CPU0.
>
> Anyway, this is so far just theoretical; my proposal should solve your
> problem.
>
> On a separate topic, the community is aiming to support a wide range of
> platforms out-of-the-box. I think platform-specific patches are
> acceptable so long as they are self-contained (to some extent, i.e. if
> you ask to support Xen on RPI3 then I would still probably argue
> against :)) or have a limited impact on the rest of the users (this is
> why we have alternatives in Xen).
>
> My point here is your initial solution may have been the preferred
> approach for upstream. So if you involve the community early, you
> reduce the risk of having to backtrack and/or spending extra time in
> the wrong direction.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 21:29:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 21:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351978.578701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2fzy-0008Nl-KK; Sat, 18 Jun 2022 21:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351978.578701; Sat, 18 Jun 2022 21:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2fzy-0008Ne-HD; Sat, 18 Jun 2022 21:28:54 +0000
Received: by outflank-mailman (input) for mailman id 351978;
 Sat, 18 Jun 2022 21:28:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2fzw-0008NU-Qo; Sat, 18 Jun 2022 21:28:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2fzw-0004Sb-Lr; Sat, 18 Jun 2022 21:28:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2fzw-0008J0-6W; Sat, 18 Jun 2022 21:28:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2fzw-0004gV-64; Sat, 18 Jun 2022 21:28:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NyH4fO+HNTjCv0rNJQB9tkR6MY/ThjFufiRJYtHCXog=; b=yYmUhU4+4CWUhHNZ5TCnv3w1vc
	7OwEa+m1LH8AhjJemtzPmrOPwarr2H2wMR0AwEp6OzwcNf7cnY6lMg00G87WaU5dP+snYokGOR4Ae
	F6HFJhs3PtVUf8brheqq7JxM+nfeQMm1knV8RqD7Ps9/lS5nqQfwyKFo8udzFiUmMxuw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171271: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
X-Osstest-Versions-That:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 21:28:52 +0000

flight 171271 linux-5.4 real [real]
flight 171274 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171271/
http://logs.test-lab.xenproject.org/osstest/logs/171274/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 171224 REGR. vs. 171173

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm   7 xen-install      fail in 171224 pass in 171271
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 171224 pass in 171271
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 171265 pass in 171271
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 171265 pass in 171271
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install          fail pass in 171224
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 171224
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 171265

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  14 guest-start         fail in 171224 like 171167
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 171224 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 171224 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171173
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 171173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171173
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171173
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
baseline version:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d

Last test of basis   171173  2022-06-15 01:11:00 Z    3 days
Testing same since   171200  2022-06-16 11:41:43 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Borislav Petkov <bp@suse.de>
  Florian Fainelli <f.fainelli@gmail.com>
  Gayatri Kammela <gayatri.kammela@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 352 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 18 21:31:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jun 2022 21:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.351987.578712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2g2Z-0001JN-57; Sat, 18 Jun 2022 21:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 351987.578712; Sat, 18 Jun 2022 21:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2g2Z-0001JG-0w; Sat, 18 Jun 2022 21:31:35 +0000
Received: by outflank-mailman (input) for mailman id 351987;
 Sat, 18 Jun 2022 21:31:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2g2Y-0001In-6Y; Sat, 18 Jun 2022 21:31:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2g2Y-0004WB-4n; Sat, 18 Jun 2022 21:31:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2g2X-0008M9-JY; Sat, 18 Jun 2022 21:31:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2g2X-0004ip-J4; Sat, 18 Jun 2022 21:31:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c5Gst5tfpDPK/2IOt6ksABWzWqWKkMBT1ZMscV3gQio=; b=3Vdcs1bdg6oyYNE2HNdrjz+FyV
	6HfwNGn0WNDVxO1qSBjeCYWNYyeP5zem7DvoyweD/idGiUa8kG1XiDwJyp54NYfwMHwa41kbznwiw
	HhQzryXwAyA9pSIkOv3YQwzToSWxf5UkguasA/2bQsLLscbORfgIWb6mRI/77AExY4/g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171273: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4b35035bcf80ddb47c0112c4fbd84a63a2836a18
X-Osstest-Versions-That:
    linux=d6ecaa0024485effd065124fe774de2e22095f2d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jun 2022 21:31:33 +0000

flight 171273 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171273/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 170714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 170714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 170714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 170714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 170714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 170714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 170714
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4b35035bcf80ddb47c0112c4fbd84a63a2836a18
baseline version:
 linux                d6ecaa0024485effd065124fe774de2e22095f2d

Last test of basis   170714  2022-05-24 03:27:44 Z   25 days
Failing since        170716  2022-05-24 11:12:06 Z   25 days   59 attempts
Testing same since   171263  2022-06-17 22:23:23 Z    0 days    2 attempts

------------------------------------------------------------
2386 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d6ecaa002448..4b35035bcf80  4b35035bcf80ddb47c0112c4fbd84a63a2836a18 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 04:29:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 04:29:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352015.578723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2mYY-0005Yc-0W; Sun, 19 Jun 2022 04:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352015.578723; Sun, 19 Jun 2022 04:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2mYX-0005YU-Qk; Sun, 19 Jun 2022 04:29:01 +0000
Received: by outflank-mailman (input) for mailman id 352015;
 Sun, 19 Jun 2022 04:29:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2mYW-0005YJ-3m; Sun, 19 Jun 2022 04:29:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2mYW-00028Z-0X; Sun, 19 Jun 2022 04:29:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2mYV-0001Zw-H4; Sun, 19 Jun 2022 04:28:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2mYV-0007mG-GQ; Sun, 19 Jun 2022 04:28:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vAia/CmkRqpHTpY+rG7y9d/clt2jzUcCAnwVL2qC+N4=; b=xyncSoIRDiZqQMH9Tjmy3tlso9
	wtTOADuFT2p3KBvE8X/MfYln3RS/H00SDaYyg+9ivVEI2i0iuSyPZntDNuxcj5GFUgtpErZQQg+Ut
	RcFX7gv+ued7YTWQgQwk23MlEr+SJKpkbLYRzxcWNcNP0rYVEYIJIoukCmEnSYkNltn8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171275-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171275: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
X-Osstest-Versions-That:
    linux=9d6e67bf50908cc661972969e8f073ec1d1bc97d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jun 2022 04:28:59 +0000

flight 171275 linux-5.4 real [real]
flight 171278 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171275/
http://logs.test-lab.xenproject.org/osstest/logs/171278/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 171265 pass in 171275
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-install fail in 171265 pass in 171275
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 171265 pass in 171275
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 171265 pass in 171275
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 171265
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail pass in 171278-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171173
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171173
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171173
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171173
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
baseline version:
 linux                9d6e67bf50908cc661972969e8f073ec1d1bc97d

Last test of basis   171173  2022-06-15 01:11:00 Z    4 days
Testing same since   171200  2022-06-16 11:41:43 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Borislav Petkov <bp@suse.de>
  Florian Fainelli <f.fainelli@gmail.com>
  Gayatri Kammela <gayatri.kammela@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hulk Robot <hulkrobot@huawei.com>
  Ingo Molnar <mingo@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   9d6e67bf5090..a31bd366116c  a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 05:52:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 05:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352028.578734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2nqn-0006J7-Vc; Sun, 19 Jun 2022 05:51:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352028.578734; Sun, 19 Jun 2022 05:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2nqn-0006J0-Sa; Sun, 19 Jun 2022 05:51:57 +0000
Received: by outflank-mailman (input) for mailman id 352028;
 Sun, 19 Jun 2022 05:51:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2nqm-0006Ip-Rm
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 05:51:56 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec2cf6f0-ef93-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 07:51:54 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0E5A021DFC;
 Sun, 19 Jun 2022 05:51:54 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D3DD113458;
 Sun, 19 Jun 2022 05:51:53 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id XW4pMXm5rmILUAAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 05:51:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec2cf6f0-ef93-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655617914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L/RykbmL5s0TZXBBWPLquIWnkE/8q9vQJ3Ra5JnTDjg=;
	b=R8WLOD5Mn4bPjpJfsD1YbN0Bc44LOmMB0SI1FWSQhUzq6GQYtUpziDemDuaokPt8PqqhCW
	vZJbA6lahumenjpw5aJ1acSlYQqxg9e3IzOPUAlIOQYe/qN5nNyr/j8CjU2i0Sin06NcwC
	e4KvnZ72RS+VOB1WdjAQno6oEZ5B2lg=
Message-ID: <09d88287-a957-b89a-939a-7d39282e7d94@suse.com>
Date: Sun, 19 Jun 2022 07:51:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 3/3] mini-os: fix number of pages for PVH
Content-Language: en-US
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org, wl@xen.org
References: <20220618104816.11527-1-jgross@suse.com>
 <20220618104816.11527-4-jgross@suse.com>
 <20220618121328.54byw5ggucap6x5j@begin>
 <8815b69d-f687-3b0f-1b9c-6bd273cd3404@suse.com>
 <20220618155654.kcvodnjcd7khwspl@begin>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220618155654.kcvodnjcd7khwspl@begin>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------5CgqcIrOffQ7QwdM0BKvXb4p"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------5CgqcIrOffQ7QwdM0BKvXb4p
Content-Type: multipart/mixed; boundary="------------s0gT5wMZ48gjfVaL77yMyCbE";
 protected-headers="v1"

--------------s0gT5wMZ48gjfVaL77yMyCbE
Content-Type: multipart/mixed; boundary="------------2GZMiMbs0NqV6DqwP8JmwmsF"

--------------2GZMiMbs0NqV6DqwP8JmwmsF
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.06.22 17:56, Samuel Thibault wrote:
> Juergen Gross, le sam. 18 juin 2022 16:07:07 +0200, a ecrit:
>> On 18.06.22 14:13, Samuel Thibault wrote:
>>> Hello,
>>>
>>> Juergen Gross, le sam. 18 juin 2022 12:48:16 +0200, a ecrit:
>>>> @@ -124,7 +126,7 @@ void arch_mm_preinit(void *p)
>>>>            do_exit();
>>>>        }
>>>> -    last_free_pfn = e820_get_maxpfn(ret);
>>>> +    last_free_pfn = e820_get_maxpfn(ret - e820_initial_reserved_pfns);
>>>
>>> Mmm, but the reserved pfn could be in the middle of the e820 address
>>> space.
>>
>> That doesn't matter.
>>
>> e820_get_maxpfn(n) will just return the pfn of the n-th RAM pfn it is
>> finding in the E820 map.
>
> Yes, but subtracting at this point looks a bit hacky to me.
>
> It seems to me that it'd be better to make e820_get_maxpfn count by
> itself the reserved pages (but never return its pfn of course), rather
> than having to make e820_sanitize look at the reserved pages, store
> it somewhere, and hope that other code will remember to subtract that
> before calling e820_get_maxpfn.
>
> I mean something like:
>
> unsigned long e820_get_maxpfn(unsigned long pages)
> {
>      int i;
>      unsigned long pfns = 0, start = 0;
>
>      if ( !e820_entries )
>          e820_get_memmap();
>
>      for ( i = 0; i < e820_entries; i++ )
>      {
>          pfns = e820_map[i].size >> PAGE_SHIFT;
>
>          if ( e820_map[i].type == E820_RESERVED )
>          {
>              /* This counts in the memory reservation, but is not usable */
>              pages -= pfns;
>              continue;
>          }
>          if ( e820_map[i].type != E820_RAM )
>              continue;
>
>          start = e820_map[i].addr >> PAGE_SHIFT;
>          if ( pages <= pfns )
>              return start + pages;
>          pages -= pfns;
>      }
>
>      return start + pfns;
> }

This would lead to wrong values of nr_mem_pages. I think the best solution
would be to have functions returning the number of available and max RAM
pages to e820.c. This would address your valid concern, while not leading
to wrong values at the callers side.


Juergen

--------------2GZMiMbs0NqV6DqwP8JmwmsF--

--------------s0gT5wMZ48gjfVaL77yMyCbE--


--------------5CgqcIrOffQ7QwdM0BKvXb4p--


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 06:53:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 06:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352044.578779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ont-0004mn-Ai; Sun, 19 Jun 2022 06:53:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352044.578779; Sun, 19 Jun 2022 06:53:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ont-0004ma-5a; Sun, 19 Jun 2022 06:53:01 +0000
Received: by outflank-mailman (input) for mailman id 352044;
 Sun, 19 Jun 2022 06:52:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2onr-0004F8-Np
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 06:52:59 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72c3744e-ef9c-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 08:52:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 089FE1FD96;
 Sun, 19 Jun 2022 06:52:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BBB2F13427;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id UCh0LMfHrmJzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 06:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72c3744e-ef9c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655621576; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xmLVZAezCgzTgOsivEiAdLkZ5h3o4dxtcy7J5SE1ulo=;
	b=Nki0XgCrhrAtcSGlsxAi7Igkh5HutNGZ1se7+Qjw1BBYbqM1ZY0J1CSIw1nMGG1OpYVOiY
	+m6BU7OVZtvEUC25lBZiXcEUIOPKtCqF4yYh9bgSp+xdmgOgyw9/zMQeHmMmN65rC/rJj9
	FhKCY+x9DOhzuOKuHa//gpUF4UEX9Mw=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 4/4] mini-os: fix bug in ballooning on PVH
Date: Sun, 19 Jun 2022 08:52:53 +0200
Message-Id: <20220619065253.19503-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220619065253.19503-1-jgross@suse.com>
References: <20220619065253.19503-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is a subtle bug in the ballooning code for PVH: in case
ballooning extends memory above a non-RAM area of the memory map, the
wrong pages will be used.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 balloon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/balloon.c b/balloon.c
index 6ad07644..55be8141 100644
--- a/balloon.c
+++ b/balloon.c
@@ -124,7 +124,7 @@ int balloon_up(unsigned long n_pages)
     for ( pfn = 0; pfn < rc; pfn++ )
     {
         arch_pfn_add(start_pfn + pfn, balloon_frames[pfn]);
-        free_page(pfn_to_virt(nr_mem_pages + pfn));
+        free_page(pfn_to_virt(start_pfn + pfn));
     }
 
     nr_mem_pages += rc;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 06:53:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 06:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352046.578794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onu-00057C-SR; Sun, 19 Jun 2022 06:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352046.578794; Sun, 19 Jun 2022 06:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onu-00056A-LX; Sun, 19 Jun 2022 06:53:02 +0000
Received: by outflank-mailman (input) for mailman id 352046;
 Sun, 19 Jun 2022 06:53:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2ont-0004F8-AM
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 06:53:01 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72844e11-ef9c-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 08:52:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1E2FD1F8F9;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E378913427;
 Sun, 19 Jun 2022 06:52:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 1echNsbHrmJzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 06:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72844e11-ef9c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655621575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=4/VBdIlOf0fGkH80DrK+kifFIKli7auGQ8C3iCy/sOs=;
	b=c8BMbjPEFETnXHanJPJbi3acpIGjTIKia2MlnxWPPyqgCIc0Fdz8nER8mBrCn4cTemVsZv
	jAb7Lw1Wbug31VxVKq0ubuPyt3OE63p+MWqYDMklWNBabkKo0Ia+RMRA+h+ufGaHT07nEq
	J+fDDoXHsdbVnROv5wp6tsTyQKQOxkg=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 0/4] mini-os: some memory map updates for PVH
Date: Sun, 19 Jun 2022 08:52:49 +0200
Message-Id: <20220619065253.19503-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some memory map related changes/fixes for PVH mode:

- Prefer the memory map delivered via start-info over the one obtained
  from the hypervisor. This is a prerequisite for Xenstore-stubdom
  live-update with raising the memory limit.

- Fix a bug related to ballooning in PVH mode: PVH Xenstore-stubdom
  can't read its target memory size from Xenstore, as this introduces
  a chicken-and-egg problem. The memory size read from the hypervisor
  OTOH includes additional "special" pages marked as reserved in the
  memory map. Those pages need to be subtracted from the read size.

- Fix a bug in the ballooning code in PVH mode when using memory beyond
  a RAM hole in the memory map.

Changes in V2:
- added patch 4
- addressed comment regarding patch 3

Juergen Gross (4):
  mini-os: take newest version of arch-x86/hvm/start_info.h
  mini-os: prefer memory map via start_info for PVH
  mini-os: fix number of pages for PVH
  mini-os: fix bug in ballooning on PVH

 arch/x86/mm.c                         | 23 ++++----
 balloon.c                             | 18 ++----
 e820.c                                | 83 ++++++++++++++++++++++++---
 include/e820.h                        |  6 ++
 include/x86/arch_mm.h                 |  2 +
 include/xen/arch-x86/hvm/start_info.h | 63 +++++++++++++++++++-
 6 files changed, 163 insertions(+), 32 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 06:53:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 06:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352040.578748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onq-0004FQ-Lf; Sun, 19 Jun 2022 06:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352040.578748; Sun, 19 Jun 2022 06:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onq-0004FJ-J2; Sun, 19 Jun 2022 06:52:58 +0000
Received: by outflank-mailman (input) for mailman id 352040;
 Sun, 19 Jun 2022 06:52:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2onp-0004F7-EF
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 06:52:57 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 728ce77f-ef9c-11ec-bd2d-47488cf2e6aa;
 Sun, 19 Jun 2022 08:52:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 811D61F97A;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5661213A5D;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0LDUE8fHrmJzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 06:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 728ce77f-ef9c-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655621575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sqQB2mee0jCv15NOcWCx6SVOgjnvvSzpgTYbgdXHN/4=;
	b=ie4i4WPFQ6OSeuhCBDRpD7dx5s2n3JIx76lUv+g0jwcY8mwvKpG8rLUm6Am95jytnv96e8
	6Uo4Sv8YMiI0TMsao5RTw7M+vEU7XdeD8fvmcpWra+lOqBUVZOoL2fpFbZfvR4PJVaPnqD
	ZCeZyyxWvVnQ6hGd1aMgySeYJDkhe70=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 2/4] mini-os: prefer memory map via start_info for PVH
Date: Sun, 19 Jun 2022 08:52:51 +0200
Message-Id: <20220619065253.19503-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220619065253.19503-1-jgross@suse.com>
References: <20220619065253.19503-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For some time now, a guest started in PVH mode has been getting the
memory map from Xen via the start_info structure.

Modify the PVH initialization to prefer this memory map over the one
obtained via hypercall, as this will allow adding information to the
memory map for a new kernel when supporting kexec.

In case the start_info structure doesn't contain memory map information,
fall back to the hypercall.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 arch/x86/mm.c  |  6 ++++++
 e820.c         | 25 +++++++++++++++++++++++++
 include/e820.h |  4 ++++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 220c0b4d..41fcee67 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -45,6 +45,7 @@
 #include <mini-os/xmalloc.h>
 #include <mini-os/e820.h>
 #include <xen/memory.h>
+#include <xen/arch-x86/hvm/start_info.h>
 
 #ifdef MM_DEBUG
 #define DEBUG(_f, _a...) \
@@ -108,6 +109,11 @@ void arch_mm_preinit(void *p)
 {
     long ret;
     domid_t domid = DOMID_SELF;
+    struct hvm_start_info *hsi = p;
+
+    if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
+        e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
+                         hsi->memmap_paddr, hsi->memmap_entries);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
diff --git a/e820.c b/e820.c
index 991ed382..ad91e00b 100644
--- a/e820.c
+++ b/e820.c
@@ -54,6 +54,7 @@ static char *e820_types[E820_TYPES] = {
     [E820_ACPI]     = "ACPI",
     [E820_NVS]      = "NVS",
     [E820_UNUSABLE] = "Unusable",
+    [E820_DISABLED] = "Disabled",
     [E820_PMEM]     = "PMEM"
 };
 
@@ -259,6 +260,30 @@ static void e820_get_memmap(void)
     e820_sanitize();
 }
 
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
+{
+    unsigned int i;
+
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_NVS != E820_NVS);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_UNUSABLE != E820_UNUSABLE);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_DISABLED != E820_DISABLED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_PMEM != E820_PMEM);
+
+    for ( i = 0; i < num; i++ )
+    {
+        e820_map[i].addr = entry[i].addr;
+        e820_map[i].size = entry[i].size;
+        e820_map[i].type = entry[i].type;
+    }
+
+    e820_entries = num;
+
+    e820_sanitize();
+}
+
 void arch_print_memmap(void)
 {
     int i;
diff --git a/include/e820.h b/include/e820.h
index aaf2f2ca..5438a7c8 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -26,6 +26,8 @@
 
 #if defined(__arm__) || defined(__aarch64__) || defined(CONFIG_PARAVIRT)
 #define CONFIG_E820_TRIVIAL
+#else
+#include <xen/arch-x86/hvm/start_info.h>
 #endif
 
 /* PC BIOS standard E820 types and structure. */
@@ -34,6 +36,7 @@
 #define E820_ACPI         3
 #define E820_NVS          4
 #define E820_UNUSABLE     5
+#define E820_DISABLED     6
 #define E820_PMEM         7
 #define E820_TYPES        8
 
@@ -54,6 +57,7 @@ unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 #ifndef CONFIG_E820_TRIVIAL
 unsigned long e820_get_reserved_pfns(int pages);
 void e820_put_reserved_pfns(unsigned long start_pfn, int pages);
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num);
 #endif
 
 #endif /*__E820_HEADER*/
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 06:53:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 06:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352047.578804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onw-0005MN-CI; Sun, 19 Jun 2022 06:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352047.578804; Sun, 19 Jun 2022 06:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onw-0005LX-4U; Sun, 19 Jun 2022 06:53:04 +0000
Received: by outflank-mailman (input) for mailman id 352047;
 Sun, 19 Jun 2022 06:53:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2onu-0004F8-AO
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 06:53:02 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 728438db-ef9c-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 08:52:55 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 51F3621D96;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 245AA13427;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id GLKIB8fHrmJzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 06:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 728438db-ef9c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655621575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5vH2NqyFMDYKMsu2Ud2YN8B9pnuLSEdOQPWCQhYC+kA=;
	b=KuNxw/BnF+1yrTWlTm/CJW/I2L7NggaMX43YRO9yxn+bE9DI3/aTgHbVFNVPn8cD/3/LaY
	/w5shJpfcvotjxt3MHfnzrvvBSDNZR76G0uDVhkbksl/UWERUhtVUKDE6/uZV4RgOtngJ0
	5xWBPFo81TjFbKK0WVjGhDBrRAujYgM=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 1/4] mini-os: take newest version of arch-x86/hvm/start_info.h
Date: Sun, 19 Jun 2022 08:52:50 +0200
Message-Id: <20220619065253.19503-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220619065253.19503-1-jgross@suse.com>
References: <20220619065253.19503-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update include/xen/arch-x86/hvm/start_info.h to the newest version
from the Xen tree.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 include/xen/arch-x86/hvm/start_info.h | 63 ++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/include/xen/arch-x86/hvm/start_info.h b/include/xen/arch-x86/hvm/start_info.h
index 64841597..50af9ea2 100644
--- a/include/xen/arch-x86/hvm/start_info.h
+++ b/include/xen/arch-x86/hvm/start_info.h
@@ -33,7 +33,7 @@
  *    | magic          | Contains the magic value XEN_HVM_START_MAGIC_VALUE
  *    |                | ("xEn3" with the 0x80 bit of the "E" set).
  *  4 +----------------+
- *    | version        | Version of this structure. Current version is 0. New
+ *    | version        | Version of this structure. Current version is 1. New
  *    |                | versions are guaranteed to be backwards-compatible.
  *  8 +----------------+
  *    | flags          | SIF_xxx flags.
@@ -48,6 +48,15 @@
  * 32 +----------------+
  *    | rsdp_paddr     | Physical address of the RSDP ACPI data structure.
  * 40 +----------------+
+ *    | memmap_paddr   | Physical address of the (optional) memory map. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 48 +----------------+
+ *    | memmap_entries | Number of entries in the memory map table. Zero
+ *    |                | if there is no memory map being provided. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 52 +----------------+
+ *    | reserved       | Version 1 and newer only.
+ * 56 +----------------+
  *
  * The layout of each entry in the module structure is the following:
  *
@@ -62,13 +71,51 @@
  *    | reserved       |
  * 32 +----------------+
  *
+ * The layout of each entry in the memory map table is as follows:
+ *
+ *  0 +----------------+
+ *    | addr           | Base address
+ *  8 +----------------+
+ *    | size           | Size of mapping in bytes
+ * 16 +----------------+
+ *    | type           | Type of mapping as defined between the hypervisor
+ *    |                | and guest. See XEN_HVM_MEMMAP_TYPE_* values below.
+ * 20 +----------------+
+ *    | reserved       |
+ * 24 +----------------+
+ *
  * The address and sizes are always a 64bit little endian unsigned integer.
  *
  * NB: Xen on x86 will always try to place all the data below the 4GiB
  * boundary.
+ *
+ * Version numbers of the hvm_start_info structure have evolved like this:
+ *
+ * Version 0:  Initial implementation.
+ *
+ * Version 1:  Added the memmap_paddr/memmap_entries fields (plus 4 bytes of
+ *             padding) to the end of the hvm_start_info struct. These new
+ *             fields can be used to pass a memory map to the guest. The
+ *             memory map is optional and so guests that understand version 1
+ *             of the structure must check that memmap_entries is non-zero
+ *             before trying to read the memory map.
  */
 #define XEN_HVM_START_MAGIC_VALUE 0x336ec578
 
+/*
+ * The values used in the type field of the memory map table entries are
+ * defined below and match the Address Range Types as defined in the "System
+ * Address Map Interfaces" section of the ACPI Specification. Please refer to
+ * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications
+ */
+#define XEN_HVM_MEMMAP_TYPE_RAM       1
+#define XEN_HVM_MEMMAP_TYPE_RESERVED  2
+#define XEN_HVM_MEMMAP_TYPE_ACPI      3
+#define XEN_HVM_MEMMAP_TYPE_NVS       4
+#define XEN_HVM_MEMMAP_TYPE_UNUSABLE  5
+#define XEN_HVM_MEMMAP_TYPE_DISABLED  6
+#define XEN_HVM_MEMMAP_TYPE_PMEM      7
+
 /*
  * C representation of the x86/HVM start info layout.
  *
@@ -86,6 +133,13 @@ struct hvm_start_info {
     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
                                 /* structure.                                */
+    /* All following fields only present in version 1 and newer */
+    uint64_t memmap_paddr;      /* Physical address of an array of           */
+                                /* hvm_memmap_table_entry.                   */
+    uint32_t memmap_entries;    /* Number of entries in the memmap table.    */
+                                /* Value will be zero if there is no memory  */
+                                /* map being provided.                       */
+    uint32_t reserved;          /* Must be zero.                             */
 };
 
 struct hvm_modlist_entry {
@@ -95,4 +149,11 @@ struct hvm_modlist_entry {
     uint64_t reserved;
 };
 
+struct hvm_memmap_table_entry {
+    uint64_t addr;              /* Base address of the memory region         */
+    uint64_t size;              /* Size of the memory region in bytes        */
+    uint32_t type;              /* Mapping type                              */
+    uint32_t reserved;          /* Must be zero for Version 1.               */
+};
+
 #endif /* __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 06:53:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 06:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352042.578764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2ons-0004VW-27; Sun, 19 Jun 2022 06:53:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352042.578764; Sun, 19 Jun 2022 06:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2onr-0004V7-SB; Sun, 19 Jun 2022 06:52:59 +0000
Received: by outflank-mailman (input) for mailman id 352042;
 Sun, 19 Jun 2022 06:52:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ii8P=W2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o2onq-0004F8-Or
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 06:52:58 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 729e91ac-ef9c-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 08:52:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B4EF71FD3C;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 890F613427;
 Sun, 19 Jun 2022 06:52:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0N8WIMfHrmJzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 19 Jun 2022 06:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 729e91ac-ef9c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655621575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AOnLOnXpUKuT3ZPKKnEQ4FE2kaPgsf79zVn7fKehw7c=;
	b=knRVAWHAzL0m/Y+Fs5C2n5U+iUHX+9kgJTQhx1NO+HLVnYAh5AoF3wi2Il6tQn7iQWz9ue
	XKQBW76Ryf05GP6VcIeY/BuwscFh6ue/NXJRKpapM9IiZsuNsiLwpuBMPhgp3sjE5moSLa
	3/5eq5KaAtkRC5X4LLBuYaSPL8ok3VA=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 3/4] mini-os: fix number of pages for PVH
Date: Sun, 19 Jun 2022 08:52:52 +0200
Message-Id: <20220619065253.19503-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220619065253.19503-1-jgross@suse.com>
References: <20220619065253.19503-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current allocation obtained from Xen includes the pages allocated
in the MMIO area. Fix the highest available RAM page by subtracting the
size of that area.

This requires reading the E820 map before this value is needed. Add two
functions returning the current and the maximum number of RAM pages,
taking this correction into account.

At the same time add the LAPIC page to the memory map in order to
avoid reusing that PFN for internal purposes.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- make e820_initial_reserved_pfns static (Samuel Thibault)
- add e820_get_current_pages() and e820_get_max_pages()
---
 arch/x86/mm.c         | 17 +++++--------
 balloon.c             | 16 +++---------
 e820.c                | 58 +++++++++++++++++++++++++++++++++++++------
 include/e820.h        |  2 ++
 include/x86/arch_mm.h |  2 ++
 5 files changed, 65 insertions(+), 30 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 41fcee67..cfc978f6 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -107,25 +107,20 @@ desc_ptr idt_ptr =
 
 void arch_mm_preinit(void *p)
 {
-    long ret;
-    domid_t domid = DOMID_SELF;
+    unsigned int pages;
     struct hvm_start_info *hsi = p;
 
     if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
         e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
                          hsi->memmap_paddr, hsi->memmap_entries);
+    else
+        e820_init_memmap(NULL, 0);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
-    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
-    if ( ret < 0 )
-    {
-        xprintk("could not get memory size\n");
-        do_exit();
-    }
-
-    last_free_pfn = e820_get_maxpfn(ret);
-    balloon_set_nr_pages(ret, last_free_pfn);
+    pages = e820_get_current_pages();
+    last_free_pfn = e820_get_maxpfn(pages);
+    balloon_set_nr_pages(pages, last_free_pfn);
 }
 #endif
 
diff --git a/balloon.c b/balloon.c
index 9dc77c54..6ad07644 100644
--- a/balloon.c
+++ b/balloon.c
@@ -44,20 +44,12 @@ void balloon_set_nr_pages(unsigned long pages, unsigned long pfn)
 
 void get_max_pages(void)
 {
-    long ret;
-    domid_t domid = DOMID_SELF;
-
-    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
-    if ( ret < 0 )
+    nr_max_pages = e820_get_max_pages();
+    if ( nr_max_pages )
     {
-        printk("Could not get maximum pfn\n");
-        return;
+        printk("Maximum memory size: %ld pages\n", nr_max_pages);
+        nr_max_pfn = e820_get_maxpfn(nr_max_pages);
     }
-
-    nr_max_pages = ret;
-    printk("Maximum memory size: %ld pages\n", nr_max_pages);
-
-    nr_max_pfn = e820_get_maxpfn(nr_max_pages);
 }
 
 void mm_alloc_bitmap_remap(void)
diff --git a/e820.c b/e820.c
index ad91e00b..48c9eadc 100644
--- a/e820.c
+++ b/e820.c
@@ -29,6 +29,38 @@
 #include <mini-os/e820.h>
 #include <xen/memory.h>
 
+static unsigned int e820_initial_reserved_pfns;
+
+unsigned int e820_get_current_pages(void)
+{
+    domid_t domid = DOMID_SELF;
+    long ret;
+
+    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
+    if ( ret < 0 )
+    {
+        xprintk("could not get memory size\n");
+        do_exit();
+    }
+
+    return ret - e820_initial_reserved_pfns;
+}
+
+unsigned int e820_get_max_pages(void)
+{
+    domid_t domid = DOMID_SELF;
+    long ret;
+
+    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
+    if ( ret < 0 )
+    {
+        printk("Could not get maximum pfn\n");
+        return 0;
+    }
+
+    return ret - e820_initial_reserved_pfns;
+}
+
 #ifdef CONFIG_E820_TRIVIAL
 struct e820entry e820_map[1] = {
     {
@@ -40,10 +72,6 @@ struct e820entry e820_map[1] = {
 
 unsigned e820_entries = 1;
 
-static void e820_get_memmap(void)
-{
-}
-
 #else
 struct e820entry e820_map[E820_MAX];
 unsigned e820_entries;
@@ -199,6 +227,7 @@ static void e820_sanitize(void)
 {
     int i;
     unsigned long end, start;
+    bool found_lapic = false;
 
     /* Sanitize memory map in current form. */
     e820_process_entries();
@@ -238,8 +267,20 @@ static void e820_sanitize(void)
 
     /* Make remaining temporarily reserved entries permanently reserved. */
     for ( i = 0; i < e820_entries; i++ )
+    {
         if ( e820_map[i].type == E820_TMP_RESERVED )
             e820_map[i].type = E820_RESERVED;
+        if ( e820_map[i].type == E820_RESERVED )
+        {
+            e820_initial_reserved_pfns += e820_map[i].size / PAGE_SIZE;
+            if ( e820_map[i].addr <= LAPIC_ADDRESS &&
+                 e820_map[i].addr + e820_map[i].size > LAPIC_ADDRESS )
+                found_lapic = true;
+        }
+    }
+
+    if ( !found_lapic )
+        e820_insert_entry(LAPIC_ADDRESS, PAGE_SIZE, E820_RESERVED);
 }
 
 static void e820_get_memmap(void)
@@ -264,6 +305,12 @@ void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
 {
     unsigned int i;
 
+    if ( !entry )
+    {
+        e820_get_memmap();
+        return;
+    }
+
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
@@ -365,9 +412,6 @@ unsigned long e820_get_maxpfn(unsigned long pages)
     int i;
     unsigned long pfns = 0, start = 0;
 
-    if ( !e820_entries )
-        e820_get_memmap();
-
     for ( i = 0; i < e820_entries; i++ )
     {
         if ( e820_map[i].type != E820_RAM )
diff --git a/include/e820.h b/include/e820.h
index 5438a7c8..6f15fcd2 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -52,6 +52,8 @@ struct __packed e820entry {
 extern struct e820entry e820_map[];
 extern unsigned e820_entries;
 
+unsigned int e820_get_current_pages(void);
+unsigned int e820_get_max_pages(void);
 unsigned long e820_get_maxpfn(unsigned long pages);
 unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 #ifndef CONFIG_E820_TRIVIAL
diff --git a/include/x86/arch_mm.h b/include/x86/arch_mm.h
index ffbec5a8..a1b975dc 100644
--- a/include/x86/arch_mm.h
+++ b/include/x86/arch_mm.h
@@ -207,6 +207,8 @@ typedef unsigned long pgentry_t;
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)        (((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
+#define LAPIC_ADDRESS	CONST(0xfee00000)
+
 #ifndef __ASSEMBLY__
 /* Definitions for machine and pseudophysical addresses. */
 #ifdef __i386__
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 08:26:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 08:26:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352096.578824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qFx-00085A-Qc; Sun, 19 Jun 2022 08:26:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352096.578824; Sun, 19 Jun 2022 08:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qFx-000853-NY; Sun, 19 Jun 2022 08:26:05 +0000
Received: by outflank-mailman (input) for mailman id 352096;
 Sun, 19 Jun 2022 08:26:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qFw-00084t-LU; Sun, 19 Jun 2022 08:26:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qFw-0007G7-Im; Sun, 19 Jun 2022 08:26:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qFw-00083k-5P; Sun, 19 Jun 2022 08:26:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qFw-0005Mq-4w; Sun, 19 Jun 2022 08:26:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oObJxHRF35518hzYmCpZ7kfzAV0Lr/3yDGzX/PO6atI=; b=FJfVil/7WDZYGvw0yUPl+DQPg3
	W4G4iYzuZoZRHdmYUnMXRAw4biC2HJN8oP/l4YOOBXIWM23IXYVc4E7jd836uUcg1r5WvhDB66oy0
	K9AWPfYEhIw/Ccqc4vocfIi0cy29Sk0yor03LSr6JZWDV6MexUtXwtQilR2Wv8QcIlU8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171276: FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-migrupgrade:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
X-Osstest-Versions-That:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jun 2022 08:26:04 +0000

flight 171276 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171276/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-migrupgrade    <job status>                 broken  in 171267

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-migrupgrade 6 host-install/src_host(6) broken in 171267 pass in 171276
 test-armhf-armhf-examine      8 reboot                     fail pass in 171267

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171267
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171267
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171267
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171267
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171267
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171267
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171267
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171267
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171267
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171267
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171267
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171267
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171267
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171276  2022-06-19 01:53:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-migrupgrade broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 09:04:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 09:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352106.578835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qqr-0003u6-R1; Sun, 19 Jun 2022 09:04:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352106.578835; Sun, 19 Jun 2022 09:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qqr-0003tz-O5; Sun, 19 Jun 2022 09:04:13 +0000
Received: by outflank-mailman (input) for mailman id 352106;
 Sun, 19 Jun 2022 09:04:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qqp-0003tp-T8; Sun, 19 Jun 2022 09:04:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qqp-000869-PY; Sun, 19 Jun 2022 09:04:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qqp-0000kY-DH; Sun, 19 Jun 2022 09:04:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2qqp-0008Bs-Co; Sun, 19 Jun 2022 09:04:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yB85iXHD0mo8JrQ+QPcU28/DuIu7U6Wus3qB9WIUFd8=; b=uryqYdIDOTDSUdqSGCaM+zqyyx
	h051HMP3/Ku6dE640uH73vkfFmfAl/hbH4YHfOmvvbmpPO3nkStipI2EPtkcIKvD1RMDxSps6NwAP
	y9arVucn0beoqN0s9KZHBNTmnwCPzOgfQLcGQxm79wV7jJedqGZN2XFvYdoUqv5NOVWA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171279-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171279: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=812edc95a36b997d674ce4f3a56f4fd01f31904e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jun 2022 09:04:11 +0000

flight 171279 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171279/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              812edc95a36b997d674ce4f3a56f4fd01f31904e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  709 days
Failing since        151818  2020-07-11 04:18:52 Z  708 days  690 attempts
Testing same since   171230  2022-06-17 04:18:59 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113770 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 09:05:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 09:05:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352114.578846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qrf-0004T2-9k; Sun, 19 Jun 2022 09:05:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352114.578846; Sun, 19 Jun 2022 09:05:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2qrf-0004Sv-5p; Sun, 19 Jun 2022 09:05:03 +0000
Received: by outflank-mailman (input) for mailman id 352114;
 Sun, 19 Jun 2022 09:05:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o2qrd-0004Sa-HQ
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 09:05:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o2qra-00086j-SR; Sun, 19 Jun 2022 09:04:58 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o2qra-0008JO-Me; Sun, 19 Jun 2022 09:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=bxKO7v/a0v8BuF99PzOVIgzMhXyKCZNB86OFyUACDqM=; b=vt82I5gLLmUBOdfZV1uMN9rOCw
	WuP7scAYEcxXkiK/miJtq+gy+JfFxXHzHDjLz63K9+ISoUuvuoGLuJi2HUafUtvMIgTH1ikFcKd/D
	Vqs9Lul2dbP111CIzltpYKARwAu24e2+Zr42aL7EB8o1PouB1+mU/eH8uQgBejLMIsCY=;
Message-ID: <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org>
Date: Sun, 19 Jun 2022 10:04:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
To: Dmytro Semenets <dmitry.semenets@gmail.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Dmytro Semenets <Dmytro_Semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
 <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
 <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
In-Reply-To: <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 18/06/2022 18:43, Dmytro Semenets wrote:
> Fri, 17 Jun 2022 at 14:12, Julien Grall <julien@xen.org>:
> Hi Julien,
>>
>> Hi,
>>
>> On 17/06/2022 10:10, Volodymyr Babchuk wrote:
>>> Julien Grall <julien@xen.org> writes:
>>>
>>>> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
>>>>> Hi Julien,
>>>>
>>>> Hi Volodymyr,
>>>>
>>>>> Julien Grall <julien@xen.org> writes:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>>>>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>>>>>> According to the PSCI specification, ARM TF can return DENIED on CPU_OFF.
>>>>>>
>>>>>> I am confused. The spec is talking about Trusted OS and not
>>>>>> firmware. The documentation is also not specific to ARM Trusted
>>>>>> Firmware. So did you mean "Trusted OS"?
>>>>> It should be "firmware", I believe.
>>>>
>>>> Hmmm... I couldn't find a reference in the spec suggesting that
>>>> CPU_OFF could return DENIED because of the firmware. Do you have a
>>>> pointer to the spec?
>>>
>>> Ah, it looks like we are talking about different things. Indeed,
>>> CPU_OFF can return DENIED only because of the Trusted OS. But the
>>> entity that *returns* the error to the caller is the firmware.
>>
>> Right, the interesting part is *why* DENIED is returned not *who*
>> returns it.
> ARM TF returns DENIED *only* for the platform I have.
> We have a dissonance between the spec and the Xen implementation,
> because DENIED returned by ARM TF or the Trusted OS or whatever is not
> a reason for panic.

I agree that's not a reason for panic. However, knowing the reason does 
help to figure out the correct approach.

For instance, one could have suggested migrating the trusted OS to 
another pCPU. But this would not have worked for you, because the 
DENIED is not about that.

> And we have issues with this.
> If machine_restart() behaviour is more or less correct (it sometimes
> reports a panic but restarts the machine)

Right...

> but machine_halt() doesn't work at all
... this should also be the case here, because machine_halt() could also 
be called from CPU0. So I am a bit confused about why you say it never 
works.

> Transferring execution to CPU0 is, to my understanding, a workaround;
> this approach will fix machine_restart() but will not fix
> machine_halt().

I would say it is a more specific case of what the spec suggests (see 
below). But it should fix both machine_restart() and machine_halt(), 
because the last CPU running will be CPU0, so Xen would call SYSTEM_* 
rather than CPU_OFF. So I don't understand why you think it will fix 
one but not the other.

In fact, the idea to always run the request from a given CPU is quite 
similar to what the specification suggests (5.10.3 DEN0022D.b):

"
One way in which cores can be placed into a known state is to use calls 
to CPU_OFF on all online cores
except for the last one, which instead uses SYSTEM_OFF. If a UP Trusted 
OS is present, this method
only works if the core that calls SYSTEM_OFF is the one where the 
Trusted OS is resident, as calls to
CPU_OFF on this core return a DENIED error. Any core can call SYSTEM_OFF.
"

For Xen, we would need to detect if the trusted OS is UP and where it is 
running. Then we could always restart/halt from that CPU or CPU0.

> The approach you suggested (spinning all CPUs) will work, but will
> save less energy.

I am not sure I understand the concern about energy here. My 
understanding of the specification is that SYSTEM_OFF will take care 
of switching off the power for all the cores. So at worst, the CPUs 
will spin for a few ms. This would likely be more efficient than a 
call to PSCI CPU_OFF.

This is different from just turning off one CPU (i.e. CPU hot-unplug), 
where the CPU would end up spinning for a very long time. And this is 
why I wasn't OK with conditionally avoiding the panic.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 09:33:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 09:33:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352125.578857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2rJJ-0007sx-Ig; Sun, 19 Jun 2022 09:33:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352125.578857; Sun, 19 Jun 2022 09:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2rJJ-0007sq-F9; Sun, 19 Jun 2022 09:33:37 +0000
Received: by outflank-mailman (input) for mailman id 352125;
 Sun, 19 Jun 2022 09:33:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2rJI-0007sg-Rl; Sun, 19 Jun 2022 09:33:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2rJI-00008n-M4; Sun, 19 Jun 2022 09:33:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o2rJI-0001Ye-6n; Sun, 19 Jun 2022 09:33:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o2rJI-0007QV-6O; Sun, 19 Jun 2022 09:33:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7g+rbHHXY46YEp8FZvbsQbDz805A/D2l6fzOmj0zzy0=; b=URj5WaFJDdvzrJOfYIpYj/7Ebc
	o3ILlhzAexGji7gHGVi1ZzPTOF3jmyNxH4Z/+UJfNHnDzMU2iYM9B5NWj65ig8YOoSPA5JAwxcNP2
	Uup/1+UkyUzFzc8IwBULlQGmRSvMHJlpY5J2zuUjlHyKHDUXcEetMVgLrgqfCXZzM4Qk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171277: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=354c6e071be986a44b956f7b57f1884244431048
X-Osstest-Versions-That:
    linux=4b35035bcf80ddb47c0112c4fbd84a63a2836a18
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jun 2022 09:33:36 +0000

flight 171277 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171277/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 171273

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171273
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171273
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171273
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171273
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171273
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171273
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171273
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171273
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                354c6e071be986a44b956f7b57f1884244431048
baseline version:
 linux                4b35035bcf80ddb47c0112c4fbd84a63a2836a18

Last test of basis   171273  2022-06-18 14:41:59 Z    0 days
Testing same since   171277  2022-06-19 03:11:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Baokun Li <libaokun1@huawei.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Eric Biggers <ebiggers@google.com>
  Jan Kara <jack@suse.cz>
  Linus Torvalds <torvalds@linux-foundation.org>
  Shuqi Zhang <zhangshuqi3@huawei.com>
  Shyam Prasad N <sprasad@microsoft.com>
  Steve French <stfrench@microsoft.com>
  Theodore Ts'o <tytso@mit.edu>
  Wang Jianjian <wangjianjian3@huawei.com>
  Xiang wangx <wangxiang@cdjrlc.com>
  Yang Li <yang.lee@linux.alibaba.com>
  Ye Bin <yebin10@huawei.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   4b35035bcf80..354c6e071be9  354c6e071be986a44b956f7b57f1884244431048 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 10:15:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 10:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352135.578867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2rxf-0003qZ-Pf; Sun, 19 Jun 2022 10:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352135.578867; Sun, 19 Jun 2022 10:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2rxf-0003qS-Mh; Sun, 19 Jun 2022 10:15:19 +0000
Received: by outflank-mailman (input) for mailman id 352135;
 Sun, 19 Jun 2022 10:15:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7L/F=W2=epam.com=prvs=8169932144=oleksandr_andrushchenko@srs-se1.protection.inumbo.net>)
 id 1o2rxd-0003qM-T3
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 10:15:18 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b5edbd6d-efb8-11ec-b725-ed86ccbb4733;
 Sun, 19 Jun 2022 12:15:16 +0200 (CEST)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25J53TRs018067;
 Sun, 19 Jun 2022 10:15:10 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2177.outbound.protection.outlook.com [104.47.17.177])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3gs6ryhvse-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Sun, 19 Jun 2022 10:15:10 +0000
Received: from DU0PR03MB8292.eurprd03.prod.outlook.com (2603:10a6:10:320::10)
 by AM9PR03MB7995.eurprd03.prod.outlook.com (2603:10a6:20b:43e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.20; Sun, 19 Jun
 2022 10:15:06 +0000
Received: from DU0PR03MB8292.eurprd03.prod.outlook.com
 ([fe80::b57f:9009:fba8:a795]) by DU0PR03MB8292.eurprd03.prod.outlook.com
 ([fe80::b57f:9009:fba8:a795%5]) with mapi id 15.20.5353.018; Sun, 19 Jun 2022
 10:15:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5edbd6d-efb8-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>
CC: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
        David Airlie
	<airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
        Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH] drm/xen: Add missing VM_DONTEXPAND flag in mmap callback
Thread-Topic: [PATCH] drm/xen: Add missing VM_DONTEXPAND flag in mmap callback
Thread-Index: AQHYY6v/wU3gEIzWM0yI8demCauBeq1Ww4oA
Date: Sun, 19 Jun 2022 10:15:05 +0000
Message-ID: <ab942c02-cc3d-b90b-ea68-d271b6b04638@epam.com>
References: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b2062dc2-d2c9-4629-3170-08da51dc94ec
x-ms-traffictypediagnostic: AM9PR03MB7995:EE_
x-microsoft-antispam-prvs: 
 <AM9PR03MB7995ECFCBABA0386F96DA341E7B19@AM9PR03MB7995.eurprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <B17FB5A8F01724479FD5954570B95EAD@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DU0PR03MB8292.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b2062dc2-d2c9-4629-3170-08da51dc94ec
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2022 10:15:05.9039
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: oyBCtHlHhulv7CbXO7K3lNNw0qJypoiKb7MqSyOmvSoG7gIlP1136LDpch1cANB6sJJW0IKkPCRdq69AJ21C7vt0kG4U47AvZ7uOEUobTEg3bnCiLKdvRVb9/eNEre4p
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7995
X-Proofpoint-GUID: Q2OCQ0eURb-9P5UTJ9uDJza2jrG6TVy-
X-Proofpoint-ORIG-GUID: Q2OCQ0eURb-9P5UTJ9uDJza2jrG6TVy-
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.883,Hydra:6.0.517,FMLib:17.11.64.514
 definitions=2022-06-19_09,2022-06-17_01,2022-02-23_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 bulkscore=0 clxscore=1011 priorityscore=1501 mlxlogscore=918 adultscore=0
 mlxscore=0 phishscore=0 spamscore=0 suspectscore=0 malwarescore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2204290000 definitions=main-2206190050

SGksIE9sZWtzYW5kciENCg0KT24gMDkuMDUuMjIgMTY6NTEsIE9sZWtzYW5kciBUeXNoY2hlbmtv
IHdyb3RlOg0KPiBGcm9tOiBPbGVrc2FuZHIgVHlzaGNoZW5rbyA8b2xla3NhbmRyX3R5c2hjaGVu
a29AZXBhbS5jb20+DQo+DQo+IFdpdGggWGVuIFBWIERpc3BsYXkgZHJpdmVyIGluIHVzZSB0aGUg
ImV4cGVjdGVkIiBWTV9ET05URVhQQU5EIGZsYWcNCj4gaXMgbm90IHNldCAobmVpdGhlciBleHBs
aWNpdGx5IG5vciBpbXBsaWNpdGx5KSwgc28gdGhlIGRyaXZlciBoaXRzDQo+IHRoZSBjb2RlIHBh
dGggaW4gZHJtX2dlbV9tbWFwX29iaigpIHdoaWNoIHRyaWdnZXJzIHRoZSBXQVJOSU5HLg0KPg0K
PiBTaWduZWQtb2ZmLWJ5OiBPbGVrc2FuZHIgVHlzaGNoZW5rbyA8b2xla3NhbmRyX3R5c2hjaGVu
a29AZXBhbS5jb20+DQpSZXZpZXdlZC1ieTogT2xla3NhbmRyIEFuZHJ1c2hjaGVua28gPG9sZWtz
YW5kcl9hbmRydXNoY2hlbmtvQGVwYW0uY29tPg0KDQo+IC0tLQ0KPiBUaGlzIHBhdGNoIGVsaW1p
bmF0ZXMgYSBXQVJOSU5HIHdoaWNoIG9jY3VycyBkdXJpbmcgcnVubmluZyBhbnkgdXNlciBzcGFj
ZQ0KPiBhcHBsaWNhdGlvbiBvdmVyIGRybSAod2VzdG9uLCBtb2RldGVzdCwgZXRjKSB1c2luZyBQ
ViBEaXNwbGF5IGZyb250ZW5kDQo+IGluIFhlbiBndWVzdCAoaXQgd29ydGggbWVudGlvbmluZyB0
aGUgZnJvbnRlbmQgc3RpbGwgd29ya3MgZGVzcGl0ZSB0aGUgV0FSTklORyk6DQo+DQo+IHJvb3RA
c2FsdmF0b3IteC1oMy00eDJnLXh0LWRvbXU6fiMgbW9kZXRlc3QgLU0geGVuZHJtLWR1IC1zIDMx
OjE5MjB4MTA4MA0KPiAoWEVOKSBjb21tb24vZ3JhbnRfdGFibGUuYzoxODgyOmQydjAgRXhwYW5k
aW5nIGQyIGdyYW50IHRhYmxlIGZyb20gNSB0byA5IGZyYW1lcw0KPiBbICAgMzEuNTY2NzU5XSAt
LS0tLS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0NCj4gWyAgIDMxLjU2NjgxMV0gV0FS
TklORzogQ1BVOiAwIFBJRDogMjM1IGF0IGRyaXZlcnMvZ3B1L2RybS9kcm1fZ2VtLmM6MTA1NSBk
cm1fZ2VtX21tYXBfb2JqKzB4MTZjLzB4MTgwDQo+IFsgICAzMS41NjY4NjRdIE1vZHVsZXMgbGlu
a2VkIGluOg0KPiBbICAgMzEuNTY2ODg2XSBDUFU6IDAgUElEOiAyMzUgQ29tbTogbW9kZXRlc3Qg
Tm90IHRhaW50ZWQgNS4xOC4wLXJjNC15b2N0by1zdGFuZGFyZC0wMDAwOS1nYWJlODdkNzhiYmM5
ICMxDQo+IFsgICAzMS41NjY5MjJdIEhhcmR3YXJlIG5hbWU6IFhFTlZNLTQuMTcgKERUKQ0KPiBb
ICAgMzEuNTY2OTQwXSBwc3RhdGU6IDYwMDAwMDA1IChuWkN2IGRhaWYgLVBBTiAtVUFPIC1UQ08g
LURJVCAtU1NCUyBCVFlQRT0tLSkNCj4gWyAgIDMxLjU2Njk3M10gcGMgOiBkcm1fZ2VtX21tYXBf
b2JqKzB4MTZjLzB4MTgwDQo+IFsgICAzMS41NjcwMDFdIGxyIDogZHJtX2dlbV9tbWFwX29iaisw
eDc4LzB4MTgwDQo+IFsgICAzMS41NjcwMjZdIHNwIDogZmZmZjgwMDAwOWQwM2JiMA0KPiBbICAg
MzEuNTY3MDQ0XSB4Mjk6IGZmZmY4MDAwMDlkMDNiYjAgeDI4OiAwMDAwMDAwMDAwMDAwMDA4IHgy
NzogZmZmZjAwMDFjNDJkNDNjMA0KPiBbICAgMzEuNTY3MDgwXSB4MjY6IGZmZmYwMDAxYzQyZDRj
YzAgeDI1OiAwMDAwMDAwMDAwMDAwN2U5IHgyNDogZmZmZjAwMDFjMDEzNjAwMA0KPiBbICAgMzEu
NTY3MTE2XSB4MjM6IGZmZmYwMDAxYzAzMTAwMDAgeDIyOiBmZmZmMDAwMWM0MDAyYjgwIHgyMTog
MDAwMDAwMDAwMDAwMDAwMA0KPiBbICAgMzEuNTY3MTUwXSB4MjA6IGZmZmYwMDAxYzQyZDQzYzAg
eDE5OiBmZmZmMDAwMWMwMTM3NjAwIHgxODogMDAwMDAwMDAwMDAwMDAwMQ0KPiBbICAgMzEuNTY3
MTg2XSB4MTc6IDAwMDAwMDAwMDAwMDAwMDAgeDE2OiAwMDAwMDAwMDAwMDAwMDAwIHgxNTogMDAw
MDAwMDAwMDAzNWM4MQ0KPiBbICAgMzEuNTY3MjIwXSB4MTQ6IDAwMDAwMDAwMDAwMDAwMDAgeDEz
OiAwMDAwMDAwMDAwMDAwMDAwIHgxMjogMDAwMDAwMDAwMDAwMDAwMA0KPiBbICAgMzEuNTY3MjU4
XSB4MTE6IDAwMDAwMDAwMDAxMDAwMDAgeDEwOiAwMDAwZmZmZjk1ZDY5MDAwIHg5IDogZmZmZjAw
MDFjNDM1YWMzMA0KPiBbICAgMzEuNTY3Mjk0XSB4OCA6IGZmZmY4MDAxZjY1Y2UwMDAgeDcgOiAw
MDAwMDAwMDAwMDAwMDAxIHg2IDogZmZmZjAwMDFjMjRkZTAwMA0KPiBbICAgMzEuNTY3MzI5XSB4
NSA6IGZmZmY4MDAwMDlkMDNhMTAgeDQgOiAwMDAwMDAwMDAwMDAwMDkwIHgzIDogMDAwMDAwMDAx
MDA0NjQwMA0KPiBbICAgMzEuNTY3MzY1XSB4MiA6IDAwMDAwMDAwMDAwMDA3ZTkgeDEgOiA5ZGQ4
Y2I3YzAyYjFiZDAwIHgwIDogMDAwMDAwMDAxMDAwMDBmYg0KPiBbICAgMzEuNTY3NDAxXSBDYWxs
IHRyYWNlOg0KPiBbICAgMzEuNTY3NDE1XSAgZHJtX2dlbV9tbWFwX29iaisweDE2Yy8weDE4MA0K
PiBbICAgMzEuNTY3NDM5XSAgZHJtX2dlbV9tbWFwKzB4MTI4LzB4MjI4DQo+IFsgICAzMS41Njc0
NjBdICBtbWFwX3JlZ2lvbisweDM4NC8weDVhMA0KPiBbICAgMzEuNTY3NDg0XSAgZG9fbW1hcCsw
eDM1NC8weDRmMA0KPiBbICAgMzEuNTY3NTA1XSAgdm1fbW1hcF9wZ29mZisweGRjLzB4MTA4DQo+
IFsgICAzMS41Njc1MjldICBrc3lzX21tYXBfcGdvZmYrMHgxYjgvMHgyMDgNCj4gWyAgIDMxLjU2
NzU1MF0gIF9fYXJtNjRfc3lzX21tYXArMHgzMC8weDQ4DQo+IFsgICAzMS41Njc1NzZdICBpbnZv
a2Vfc3lzY2FsbCsweDQ0LzB4MTA4DQo+IFsgICAzMS41Njc1OTldICBlbDBfc3ZjX2NvbW1vbi5j
b25zdHByb3AuMCsweGNjLzB4ZjANCj4gWyAgIDMxLjU2NzYyOV0gIGRvX2VsMF9zdmMrMHgyNC8w
eDg4DQo+IFsgICAzMS41Njc2NDldICBlbDBfc3ZjKzB4MmMvMHg4OA0KPiBbICAgMzEuNTY3Njg2
XSAgZWwwdF82NF9zeW5jX2hhbmRsZXIrMHhiMC8weGI4DQo+IFsgICAzMS41Njc3MDhdICBlbDB0
XzY0X3N5bmMrMHgxOGMvMHgxOTANCj4gWyAgIDMxLjU2NzczMV0gLS0tWyBlbmQgdHJhY2UgMDAw
MDAwMDAwMDAwMDAwMCBdLS0tDQo+IHNldHRpbmcgbW9kZSAxOTIweDEwODAtNjAuMDBIekBYUjI0
IG9uIGNvbm5lY3RvcnMgMzEsIGNydGMgMzQNCj4gLS0tDQo+ICAgZHJpdmVycy9ncHUvZHJtL3hl
bi94ZW5fZHJtX2Zyb250X2dlbS5jIHwgMiArLQ0KPiAgIDEgZmlsZSBjaGFuZ2VkLCAxIGluc2Vy
dGlvbigrKSwgMSBkZWxldGlvbigtKQ0KPg0KPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9ncHUvZHJt
L3hlbi94ZW5fZHJtX2Zyb250X2dlbS5jIGIvZHJpdmVycy9ncHUvZHJtL3hlbi94ZW5fZHJtX2Zy
b250X2dlbS5jDQo+IGluZGV4IDVhNWJmNGUuLmUzMTU1NGQgMTAwNjQ0DQo+IC0tLSBhL2RyaXZl
cnMvZ3B1L2RybS94ZW4veGVuX2RybV9mcm9udF9nZW0uYw0KPiArKysgYi9kcml2ZXJzL2dwdS9k
cm0veGVuL3hlbl9kcm1fZnJvbnRfZ2VtLmMNCj4gQEAgLTcxLDcgKzcxLDcgQEAgc3RhdGljIGlu
dCB4ZW5fZHJtX2Zyb250X2dlbV9vYmplY3RfbW1hcChzdHJ1Y3QgZHJtX2dlbV9vYmplY3QgKmdl
bV9vYmosDQo+ICAgCSAqIHRoZSB3aG9sZSBidWZmZXIuDQo+ICAgCSAqLw0KPiAgIAl2bWEtPnZt
X2ZsYWdzICY9IH5WTV9QRk5NQVA7DQo+IC0Jdm1hLT52bV9mbGFncyB8PSBWTV9NSVhFRE1BUDsN
Cj4gKwl2bWEtPnZtX2ZsYWdzIHw9IFZNX01JWEVETUFQIHwgVk1fRE9OVEVYUEFORDsNCj4gICAJ
dm1hLT52bV9wZ29mZiA9IDA7DQo+ICAgDQo+ICAgCS8qDQo=


From xen-devel-bounces@lists.xenproject.org Sun Jun 19 12:43:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 12:43:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352155.578889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2uHH-0001u4-4G; Sun, 19 Jun 2022 12:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352155.578889; Sun, 19 Jun 2022 12:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2uHH-0001tx-11; Sun, 19 Jun 2022 12:43:43 +0000
Received: by outflank-mailman (input) for mailman id 352155;
 Sun, 19 Jun 2022 12:43:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fWMG=W2=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o2uHG-0001dY-EM
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 12:43:42 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 72c43af4-efcd-11ec-bd2d-47488cf2e6aa;
 Sun, 19 Jun 2022 14:43:41 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id ej4so7713417edb.7
 for <xen-devel@lists.xenproject.org>; Sun, 19 Jun 2022 05:43:41 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 b9-20020a17090630c900b006feba31171bsm4602659ejb.11.2022.06.19.05.43.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 19 Jun 2022 05:43:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72c43af4-efcd-11ec-bd2d-47488cf2e6aa
X-Received: by 2002:a05:6402:2706:b0:430:6238:78d5 with SMTP id y6-20020a056402270600b00430623878d5mr24049336edd.413.1655642621005;
        Sun, 19 Jun 2022 05:43:41 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [ImageBuilder] [PATCH 2/2] uboot-script-gen: Enable direct mapping of statically allocated memory
Date: Sun, 19 Jun 2022 15:43:16 +0300
Message-Id: <20220619124316.378365-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220619124316.378365-1-burzalodowa@gmail.com>
References: <20220619124316.378365-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Direct mapping for dom0less VMs is disabled by default in Xen and can be
enabled through the 'direct-map' property.
Add a new config parameter, DOMU_DIRECT_MAP, to enable or disable direct
mapping: set it to 1 to enable, or to 0 to disable.
This parameter is optional. Direct mapping is enabled by default for all
dom0less VMs with static memory allocation.

The 'direct-map' property is a boolean property. Boolean properties are true
if present and false if missing.
Add a new data_type 'bool' in function dt_set() to set up a boolean property.
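
For context, a minimal dom0less config fragment using the new parameter could
look like the following; the image names and address ranges are illustrative
only, not taken from a real setup:

```shell
# Hypothetical dom0less ImageBuilder config: two domUs with static
# memory, one direct-mapped (the default) and one not.
XEN="xen"
NUM_DOMUS=2

DOMU_KERNEL[0]="domU0-Image"
DOMU_STATIC_MEM[0]="0x60000000 0x20000000"
DOMU_DIRECT_MAP[0]=1    # optional: 1 is already the default

DOMU_KERNEL[1]="domU1-Image"
DOMU_STATIC_MEM[1]="0x80000000 0x20000000"
DOMU_DIRECT_MAP[1]=0    # the 'direct-map' property is not set
```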

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 README.md                |  4 ++++
 scripts/uboot-script-gen | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/README.md b/README.md
index c52e4b9..17ff206 100644
--- a/README.md
+++ b/README.md
@@ -168,6 +168,10 @@ Where:
   if specified, indicates the host physical address regions
   [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
 
+- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
+  If set to 1, the VM is direct mapped. The default is 1.
+  This is only applicable when DOMU_STATIC_MEM is specified.
+
 - LINUX is optional but specifies the Linux kernel for when Xen is NOT
   used.  To enable this set any LINUX\_\* variables and do NOT set the
   XEN variable.
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index bdc8a6b..e85c6ec 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -27,6 +27,7 @@ function dt_mknode()
 #   hex
 #   str
 #   str_a
+#   bool
 function dt_set()
 {
     local path=$1
@@ -49,6 +50,12 @@ function dt_set()
                 array+=" \"$element\""
             done
             echo "fdt set $path $var $array" >> $UBOOT_SOURCE
+        elif test $data_type = "bool"
+        then
+            if test "$data" -eq 1
+            then
+                echo "fdt set $path $var" >> $UBOOT_SOURCE
+            fi
         else
             echo "fdt set $path $var \"$data\"" >> $UBOOT_SOURCE
         fi
@@ -65,6 +72,12 @@ function dt_set()
         elif test $data_type = "str_a"
         then
             fdtput $FDTEDIT -p -t s $path $var $data
+        elif test $data_type = "bool"
+        then
+            if test "$data" -eq 1
+            then
+                fdtput $FDTEDIT -p $path $var
+            fi
         else
             fdtput $FDTEDIT -p -t s $path $var "$data"
         fi
@@ -206,6 +219,7 @@ function xen_device_tree_editing()
         if test "${DOMU_STATIC_MEM[$i]}"
         then
             add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
+            dt_set "/chosen/domU$i" "direct-map" "bool" "${DOMU_DIRECT_MAP[$i]}"
         fi
         dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
         if test "$DOM0_KERNEL"
@@ -470,6 +484,10 @@ function xen_config()
         then
             DOMU_CMD[$i]="console=ttyAMA0"
         fi
+        if test -z "${DOMU_DIRECT_MAP[$i]}"
+        then
+             DOMU_DIRECT_MAP[$i]=1
+        fi
         i=$(( $i + 1 ))
     done
 }
-- 
2.34.1
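
The boolean branch added to dt_set() above can be sketched in isolation as
follows; dt_set_bool and the temp-file handling are a simplified stand-in for
the real script's interface, used here only to show the presence/absence
semantics:

```shell
# Simplified stand-in for the 'bool' branch of dt_set(): a device tree
# boolean property is true if present and false if absent, so the
# "fdt set" command is emitted only when the config value is 1.
UBOOT_SOURCE=$(mktemp)

dt_set_bool()
{
    local path=$1
    local var=$2
    local data=$3
    if test "$data" -eq 1
    then
        # No value argument: presence alone makes the property true.
        echo "fdt set $path $var" >> "$UBOOT_SOURCE"
    fi
}

dt_set_bool "/chosen/domU0" "direct-map" 1
dt_set_bool "/chosen/domU1" "direct-map" 0

GENERATED=$(cat "$UBOOT_SOURCE")
rm -f "$UBOOT_SOURCE"
echo "$GENERATED"
```

Only the domU0 line ends up in the generated boot script; domU1's
'direct-map' property is simply absent, which device tree semantics read as
false.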



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 12:43:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 12:43:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352154.578878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2uH9-0001dl-RM; Sun, 19 Jun 2022 12:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352154.578878; Sun, 19 Jun 2022 12:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o2uH9-0001de-Oj; Sun, 19 Jun 2022 12:43:35 +0000
Received: by outflank-mailman (input) for mailman id 352154;
 Sun, 19 Jun 2022 12:43:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fWMG=W2=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o2uH7-0001dY-RU
 for xen-devel@lists.xenproject.org; Sun, 19 Jun 2022 12:43:33 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d3521fb-efcd-11ec-bd2d-47488cf2e6aa;
 Sun, 19 Jun 2022 14:43:32 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id h23so16360810ejj.12
 for <xen-devel@lists.xenproject.org>; Sun, 19 Jun 2022 05:43:32 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 b9-20020a17090630c900b006feba31171bsm4602659ejb.11.2022.06.19.05.43.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 19 Jun 2022 05:43:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d3521fb-efcd-11ec-bd2d-47488cf2e6aa
X-Gm-Message-State: AJIora9E7fVXVSoNDV2bS6rudDpOXkQkSN0jBXYQcojZpBdv2vFriNoB
	zT4zR+JHp+F/kNASaMq8sH0jnnSjc+k=
X-Google-Smtp-Source: AGRyM1vSS2u2BGatiyktffiPRe1xvubluqHFA/6ivxwHJPxGXAxGWgJmzCaXYdYVFH9NyzTThezW+w==
X-Received: by 2002:a17:906:74d8:b0:712:b97:f14f with SMTP id z24-20020a17090674d800b007120b97f14fmr16313471ejl.112.1655642611717;
        Sun, 19 Jun 2022 05:43:31 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [ImageBuilder] [PATCH 1/2] uboot-script-gen: Skip dom0 instead of exiting if DOM0_KERNEL is not set
Date: Sun, 19 Jun 2022 15:43:15 +0300
Message-Id: <20220619124316.378365-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the DOM0_KERNEL parameter is not specified and NUM_DOMUS is not 0,
skip any dom0-specific setup instead of failing the script.
This way the script can be used to boot Xen in dom0less mode.
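For illustration, a minimal hypothetical ImageBuilder config for such a
dom0less boot might look like the sketch below; all file names, addresses
and sizes are made up, and only the deliberate absence of DOM0_KERNEL is
the point:

```shell
# Hypothetical dom0less config for uboot-script-gen: DOM0_KERNEL is left
# unset on purpose, so with this patch the generated U-Boot script loads
# only Xen and the domU payloads.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

XEN="xen"                                            # Xen binary (assumed name)
XEN_CMD="console=dtuart dtuart=serial0 bootscrub=0"  # no dom0_* options needed

NUM_DOMUS=1
DOMU_KERNEL[0]="domu-kernel"    # guest kernel image (assumed name)
DOMU_RAMDISK[0]="domu-ramdisk"  # guest initrd (assumed name)
DOMU_MEM[0]="512"

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```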

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 scripts/uboot-script-gen | 60 ++++++++++++++++++++++++++++------------
 1 file changed, 43 insertions(+), 17 deletions(-)

diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 455b4c0..bdc8a6b 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -168,10 +168,15 @@ function xen_device_tree_editing()
     dt_set "/chosen" "#address-cells" "hex" "0x2"
     dt_set "/chosen" "#size-cells" "hex" "0x2"
     dt_set "/chosen" "xen,xen-bootargs" "str" "$XEN_CMD"
-    dt_mknode "/chosen" "dom0"
-    dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage xen,multiboot-module multiboot,module"
-    dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
-    dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
+
+    if test "$DOM0_KERNEL"
+    then
+        dt_mknode "/chosen" "dom0"
+        dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage xen,multiboot-module multiboot,module"
+        dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
+        dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
+    fi
+
     if test "$DOM0_RAMDISK" && test $ramdisk_addr != "-"
     then
         dt_mknode "/chosen" "dom0-ramdisk"
@@ -203,7 +208,10 @@ function xen_device_tree_editing()
             add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
         fi
         dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
-        dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
+        if test "$DOM0_KERNEL"
+        then
+            dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
+        fi
 
         if test "${DOMU_COLORS[$i]}"
         then
@@ -433,6 +441,19 @@ function xen_config()
             DOM0_CMD="$DOM0_CMD root=$root_dev"
         fi
     fi
+    if test -z "$DOM0_KERNEL"
+    then
+        if test "$NUM_DOMUS" -eq "0"
+        then
+            echo "Neither dom0 nor domUs are specified, exiting."
+            exit 1
+        fi
+        echo "Dom0 kernel is not specified, continuing with dom0less setup."
+        unset DOM0_RAMDISK
+        # Remove dom0 specific parameters from the XEN command line.
+        local params=($XEN_CMD)
+        XEN_CMD="${params[@]/dom0*/}"
+    fi
     i=0
     while test $i -lt $NUM_DOMUS
     do
@@ -490,11 +511,13 @@ generate_uboot_images()
 
 xen_file_loading()
 {
-    check_compressed_file_type $DOM0_KERNEL "executable"
-    dom0_kernel_addr=$memaddr
-    load_file $DOM0_KERNEL "dom0_linux"
-    dom0_kernel_size=$filesize
-
+    if test "$DOM0_KERNEL"
+    then
+        check_compressed_file_type $DOM0_KERNEL "executable"
+        dom0_kernel_addr=$memaddr
+        load_file $DOM0_KERNEL "dom0_linux"
+        dom0_kernel_size=$filesize
+    fi
     if test "$DOM0_RAMDISK"
     then
         check_compressed_file_type $DOM0_RAMDISK "cpio archive"
@@ -597,14 +620,16 @@ bitstream_load_and_config()
 
 create_its_file_xen()
 {
-    if test "$ramdisk_addr" != "-"
+    if test "$DOM0_KERNEL"
     then
-        load_files="\"dom0_linux\", \"dom0_ramdisk\""
-    else
-        load_files="\"dom0_linux\""
-    fi
-    # xen below
-    cat >> "$its_file" <<- EOF
+        if test "$ramdisk_addr" != "-"
+        then
+            load_files="\"dom0_linux\", \"dom0_ramdisk\""
+        else
+            load_files="\"dom0_linux\""
+        fi
+        # xen below
+        cat >> "$its_file" <<- EOF
         dom0_linux {
             description = "dom0 linux kernel binary";
             data = /incbin/("$DOM0_KERNEL");
@@ -616,6 +641,7 @@ create_its_file_xen()
             $fit_algo
         };
 	EOF
+    fi
     # domUs
     i=0
     while test $i -lt $NUM_DOMUS
-- 
2.34.1
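The XEN_CMD filtering added to xen_config() in the patch above relies on
bash pattern substitution applied per array element. A minimal standalone
sketch (with a made-up command line) shows the effect; the "dom0*" glob
empties every word that starts with "dom0", and the remaining words are
rejoined with spaces:

```shell
#!/bin/bash
# Sketch of the dom0-parameter stripping used in the patch above.
# Splitting XEN_CMD into an array lets "${params[@]/dom0*/}" apply the
# substitution to each word individually.
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1024M dom0_max_vcpus=1 bootscrub=0"
params=($XEN_CMD)
XEN_CMD="${params[@]/dom0*/}"  # words beginning with dom0 become empty
echo "$XEN_CMD"
```

Note that emptied words leave extra spaces behind, which is harmless in a
command line passed via xen,xen-bootargs.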



From xen-devel-bounces@lists.xenproject.org Sun Jun 19 20:03:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jun 2022 20:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352172.578901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o318r-000223-Su; Sun, 19 Jun 2022 20:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352172.578901; Sun, 19 Jun 2022 20:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o318r-00021w-PQ; Sun, 19 Jun 2022 20:03:29 +0000
Received: by outflank-mailman (input) for mailman id 352172;
 Sun, 19 Jun 2022 20:03:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o318p-00021m-Se; Sun, 19 Jun 2022 20:03:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o318p-0003Bx-OZ; Sun, 19 Jun 2022 20:03:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o318p-0007SG-3T; Sun, 19 Jun 2022 20:03:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o318p-0000Sc-2z; Sun, 19 Jun 2022 20:03:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oGi+JROo9OL5s+2xDPMvfK8aSR9jG8QCSe8/JFzWg4Y=; b=fe254lXtaHmFbKgrE2tkMkLcO2
	CeFIbxutBHe307QlXW2jKbZXt4qvQX/ExRcMBkaAeVW2NcZv9yt8tY4ZMYnMIW4USR2cxmI4Q0pP4
	2qlVTEHnxt0WzeBD9vJSLfvhA1+gCq505d7JgvNZqEPkmaoEGSprdm0Lkllr/afhNHxM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171280: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=05c6ca8512f2722f57743d653bb68cf2a273a55a
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jun 2022 20:03:27 +0000

flight 171280 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171280/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                05c6ca8512f2722f57743d653bb68cf2a273a55a
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    0 days
Testing same since   171280  2022-06-19 15:12:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ian Abbott <abbotti@mev.co.uk>
  Jamie Iles <jamie@jamieiles.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Miaoqian Lin <linmq006@gmail.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 960 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352186.578945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PG-0005Sm-FX; Mon, 20 Jun 2022 02:44:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352186.578945; Mon, 20 Jun 2022 02:44:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PG-0005Sd-Ba; Mon, 20 Jun 2022 02:44:50 +0000
Received: by outflank-mailman (input) for mailman id 352186;
 Mon, 20 Jun 2022 02:44:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PF-0004eN-0q
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id f330157a-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:44:48 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B7CCE1042;
 Sun, 19 Jun 2022 19:44:47 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6F1943F7D7;
 Sun, 19 Jun 2022 19:44:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f330157a-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 3/9] xen: update SUPPORT.md for static allocation
Date: Mon, 20 Jun 2022 10:44:02 +0800
Message-Id: <20220620024408.203797-4-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SUPPORT.md does not explicitly say whether static memory is supported,
so this commit updates SUPPORT.md to list the static allocation feature
as Tech Preview for now.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v7 changes:
- no change
---
v6 changes:
- use domain instead of sub-systems
---
v5 changes:
- new commit
---
 SUPPORT.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 70e98964cb..8e040d1c1e 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -286,6 +286,13 @@ to boot with memory < maxmem.
 
     Status, x86 HVM: Supported
 
+### Static Allocation
+
+Static allocation refers to domains for which memory areas are
+pre-defined by configuration using physical address ranges.
+
+    Status, ARM: Tech Preview
+
 ### Memory Sharing
 
 Allow sharing of identical pages between guests
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352184.578923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37P8-0004tx-Tq; Mon, 20 Jun 2022 02:44:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352184.578923; Mon, 20 Jun 2022 02:44:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37P8-0004tV-Qu; Mon, 20 Jun 2022 02:44:42 +0000
Received: by outflank-mailman (input) for mailman id 352184;
 Mon, 20 Jun 2022 02:44:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37P7-0004eN-H7
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ee8af88a-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:44:40 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E6944113E;
 Sun, 19 Jun 2022 19:44:39 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 251493F7D7;
 Sun, 19 Jun 2022 19:44:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee8af88a-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 1/9] xen/arm: rename PGC_reserved to PGC_static
Date: Mon, 20 Jun 2022 10:44:00 +0800
Message-Id: <20220620024408.203797-2-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

PGC_reserved is ambiguous: the name does not say what the pages are
reserved for. So this commit renames PGC_reserved to PGC_static, which
clearly indicates that the page is reserved for static memory.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v7 changes:
- no change
---
v6 changes:
- rename PGC_staticmem to PGC_static
---
v5 changes:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  6 +++---
 xen/common/page_alloc.c       | 22 +++++++++++-----------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 045a8ba4bb..daef12e740 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -108,9 +108,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
-  /* Page is reserved */
-#define _PGC_reserved     PG_shift(3)
-#define PGC_reserved      PG_mask(1, 3)
+  /* Page is static memory */
+#define _PGC_static    PG_shift(3)
+#define PGC_static     PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 000ae6b972..743e3543fd 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -151,8 +151,8 @@
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
 
-#ifndef PGC_reserved
-#define PGC_reserved 0
+#ifndef PGC_static
+#define PGC_static 0
 #endif
 
 /*
@@ -2286,7 +2286,7 @@ int assign_pages(
 
         for ( i = 0; i < nr; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2346,7 +2346,7 @@ int assign_pages(
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
-            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
+            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
 
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
@@ -2652,8 +2652,8 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
             scrub_one_page(pg);
         }
 
-        /* In case initializing page of static memory, mark it PGC_reserved. */
-        pg[i].count_info |= PGC_reserved;
+        /* In case initializing page of static memory, mark it PGC_static. */
+        pg[i].count_info |= PGC_static;
     }
 }
 
@@ -2682,8 +2682,8 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
 
     for ( i = 0; i < nr_mfns; i++ )
     {
-        /* The page should be reserved and not yet allocated. */
-        if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
+        /* The page should be static and not yet allocated. */
+        if ( pg[i].count_info != (PGC_state_free | PGC_static) )
         {
             printk(XENLOG_ERR
                    "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
@@ -2697,10 +2697,10 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
                                 &tlbflush_timestamp);
 
         /*
-         * Preserve flag PGC_reserved and change page state
+         * Preserve flag PGC_static and change page state
          * to PGC_state_inuse.
          */
-        pg[i].count_info = PGC_reserved | PGC_state_inuse;
+        pg[i].count_info = PGC_static | PGC_state_inuse;
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
@@ -2722,7 +2722,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
 
  out_err:
     while ( i-- )
-        pg[i].count_info = PGC_reserved | PGC_state_free;
+        pg[i].count_info = PGC_static | PGC_state_free;
 
     spin_unlock(&heap_lock);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352185.578933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PF-0005CG-5N; Mon, 20 Jun 2022 02:44:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352185.578933; Mon, 20 Jun 2022 02:44:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PF-0005C6-2F; Mon, 20 Jun 2022 02:44:49 +0000
Received: by outflank-mailman (input) for mailman id 352185;
 Mon, 20 Jun 2022 02:44:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PE-0005AM-9J
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id f10185ef-f042-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 04:44:45 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 15A7B1042;
 Sun, 19 Jun 2022 19:44:44 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4BC8F3F7D7;
 Sun, 19 Jun 2022 19:44:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f10185ef-f042-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 2/9] xen: do not free reserved memory into heap
Date: Mon, 20 Jun 2022 10:44:01 +0800
Message-Id: <20220620024408.203797-3-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Pages used as guest RAM for a static domain shall be reserved to that
domain only. So, to prevent reserved pages from being used for any
other purpose, they shall not be freed back to the heap, even when the
last reference is dropped.

free_staticmem_pages will now be called by free_heap_pages at runtime
when a static domain frees memory, so drop its __init attribute.
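
The diversion described above can be sketched as below. This is a hedged, simplified stand-in, not the actual Xen code: the struct, the flag layout, and the bool return value (the real free_heap_pages() returns void) are illustrative assumptions, and heap_lock handling is elided.

```c
#include <stdbool.h>

#define PGC_static (1u << 3)            /* simplified: page is static memory */

struct page_info {
    unsigned int count_info;
};

/* Counterpart of free_staticmem_pages(): keep PGC_static set so the
 * pages stay parked for their owning domain (locking elided here). */
static void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                                 bool need_scrub)
{
    for ( unsigned long i = 0; i < nr_mfns; i++ )
        pg[i].count_info |= PGC_static;
    (void)need_scrub;
}

/* The runtime check added to free_heap_pages(): static pages never
 * re-enter the buddy allocator. Returns true when diverted (the real
 * function returns void; bool is used here only for illustration). */
static bool free_heap_pages(struct page_info *pg, unsigned int order,
                            bool need_scrub)
{
    if ( pg->count_info & PGC_static )
    {
        free_staticmem_pages(pg, 1UL << order, need_scrub);
        return true;
    }
    /* ... normal buddy free path would run here ... */
    return false;
}
```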

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v7 changes:
- protect free_staticmem_pages with heap_lock to match its reverse function
acquire_staticmem_pages
---
v6 changes:
- adapt to PGC_static
- remove #ifdef around function declaration
---
v5 changes:
- In order to avoid stub functions, we #define PGC_staticmem to non-zero only
when CONFIG_STATIC_MEMORY
- use "unlikely()" around pg->count_info & PGC_staticmem
- remove pointless "if", since mark_page_free() is going to set count_info
to PGC_state_free and by consequence clear PGC_staticmem
- move #define PGC_staticmem 0 to mm.h
---
v4 changes:
- no changes
---
v3 changes:
- fix possible racy issue in free_staticmem_pages()
- introduce a stub free_staticmem_pages() for the !CONFIG_STATIC_MEMORY case
- move the change to free_heap_pages() to cover other potential call sites
- fix the indentation
---
v2 changes:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  4 +++-
 xen/common/page_alloc.c       | 16 +++++++++++++---
 xen/include/xen/mm.h          |  2 --
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index daef12e740..066a869783 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -108,9 +108,11 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
-  /* Page is static memory */
+#ifdef CONFIG_STATIC_MEMORY
+/* Page is static memory */
 #define _PGC_static    PG_shift(3)
 #define PGC_static     PG_mask(1, 3)
+#endif
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 743e3543fd..f27fa90ec4 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1443,6 +1443,13 @@ static void free_heap_pages(
 
     ASSERT(order <= MAX_ORDER);
 
+    if ( unlikely(pg->count_info & PGC_static) )
+    {
+        /* Pages of static memory shall not go back to the heap. */
+        free_staticmem_pages(pg, 1UL << order, need_scrub);
+        return;
+    }
+
     spin_lock(&heap_lock);
 
     for ( i = 0; i < (1 << order); i++ )
@@ -2636,12 +2643,14 @@ struct domain *get_pg_owner(domid_t domid)
 
 #ifdef CONFIG_STATIC_MEMORY
 /* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
-void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
-                                 bool need_scrub)
+void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                          bool need_scrub)
 {
     mfn_t mfn = page_to_mfn(pg);
     unsigned long i;
 
+    spin_lock(&heap_lock);
+
     for ( i = 0; i < nr_mfns; i++ )
     {
         mark_page_free(&pg[i], mfn_add(mfn, i));
@@ -2652,9 +2661,10 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
             scrub_one_page(pg);
         }
 
-        /* In case initializing page of static memory, mark it PGC_static. */
         pg[i].count_info |= PGC_static;
     }
+
+    spin_unlock(&heap_lock);
 }
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 3be754da92..1c4ddb336b 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,13 +85,11 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
-#ifdef CONFIG_STATIC_MEMORY
 /* These functions are for static memory */
 void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
-#endif
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352183.578912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37P7-0004ef-Oj; Mon, 20 Jun 2022 02:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352183.578912; Mon, 20 Jun 2022 02:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37P7-0004eU-K5; Mon, 20 Jun 2022 02:44:41 +0000
Received: by outflank-mailman (input) for mailman id 352183;
 Mon, 20 Jun 2022 02:44:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37P5-0004eN-Lm
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:39 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ec16ec18-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:44:37 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C09981042;
 Sun, 19 Jun 2022 19:44:35 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3CA363F7D7;
 Sun, 19 Jun 2022 19:44:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec16ec18-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 0/9] populate/unpopulate memory when domain on static allocation
Date: Mon, 20 Jun 2022 10:43:59 +0800
Message-Id: <20220620024408.203797-1-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today, when a domain unpopulates memory at runtime, it always hands the
memory back to the heap allocator. This is a problem for a static domain:
pages used as its guest RAM shall always be reserved to that domain only,
and never be used for any other purpose, so they must never go back to the
heap allocator.

This patch series fixes the issue by putting pages on a new list,
resv_page_list, after taking them off the "normal" list when unpopulating
memory, and by retrieving pages from resv_page_list when populating
memory.
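
The park/retrieve bookkeeping described above can be sketched roughly as below. This is a hedged illustration under simplified assumptions: a bare singly linked list stands in for Xen's page_list_* machinery, the struct fields are invented for the sketch, and per-domain locking is elided.

```c
#include <stddef.h>

struct page_info {
    struct page_info *next;             /* stand-in for Xen's page list entry */
};

struct domain {
    struct page_info *resv_page_list;   /* freed static pages are parked here */
};

/* Unpopulate path: park a fully freed static page on the reserve list. */
static void put_static_page(struct domain *d, struct page_info *pg)
{
    pg->next = d->resv_page_list;
    d->resv_page_list = pg;
}

/* Populate path: take a parked page back instead of asking the heap
 * allocator; returns NULL when no reserved page is available. */
static struct page_info *acquire_reserved_page(struct domain *d)
{
    struct page_info *pg = d->resv_page_list;

    if ( pg )
        d->resv_page_list = pg->next;
    return pg;
}
```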

---
v7 changes:
- protect free_staticmem_pages with heap_lock to match its reverse function
acquire_staticmem_pages
- IS_ENABLED(CONFIG_STATIC_MEMORY) would not be needed anymore
- add the page to resv_page_list *after* it has been freed
- remove the lock, since we add the page to resv_page_list after it has
been totally freed
---
v6 changes:
- rename PGC_staticmem to PGC_static
- remove #ifdef around function declaration
- use domain instead of sub-systems
- move non-zero is_domain_using_staticmem() from ARM header to common
header
- move PGC_static !CONFIG_STATIC_MEMORY definition to common header
- drop the lock before returning
---
v5 changes:
- introduce three new commits
- In order to avoid stub functions, we #define PGC_staticmem to non-zero only
when CONFIG_STATIC_MEMORY
- use "unlikely()" around pg->count_info & PGC_staticmem
- remove pointless "if", since mark_page_free() is going to set count_info
to PGC_state_free and by consequence clear PGC_staticmem
- move #define PGC_staticmem 0 to mm.h
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
- extract common codes for assigning pages into a helper assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently. So access to resv_page_list
needs to be protected with a spinlock
---
v4 changes:
- commit message refinement
- miss dropping __init in acquire_domstatic_pages
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
---
v3 changes:
- fix possible racy issue in free_staticmem_pages()
- introduce a stub free_staticmem_pages() for the !CONFIG_STATIC_MEMORY case
- move the change to free_heap_pages() to cover other potential call sites
- change fixed width type uint32_t to unsigned int
- change "flags" to a more descriptive name "cdf"
- change name from "is_domain_static()" to "is_domain_using_staticmem"
- have page_list_del() just once out of the if()
- remove resv_pages counter
- make arch_free_heap_page be an expression, not a compound statement.
- move #ifndef is_domain_using_staticmem to the common header file
- remove #ifdef CONFIG_STATIC_MEMORY-ary
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- let "flags" live in the struct domain. So other arch can take
advantage of it in the future
- change name from "is_domain_on_static_allocation" to "is_domain_static()"
- put reserved pages on resv_page_list after having taken them off
the "normal" list
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
- fix coding style

Penny Zheng (9):
  xen/arm: rename PGC_reserved to PGC_static
  xen: do not free reserved memory into heap
  xen: update SUPPORT.md for static allocation
  xen: do not merge reserved pages in free_heap_pages()
  xen: add field "flags" to cover all internal CDF_XXX
  xen/arm: introduce CDF_staticmem
  xen/arm: unpopulate memory when domain is static
  xen: introduce prepare_staticmem_pages
  xen: retrieve reserved pages on populate_physmap

 SUPPORT.md                        |   7 ++
 xen/arch/arm/domain.c             |   2 -
 xen/arch/arm/domain_build.c       |   5 +-
 xen/arch/arm/include/asm/domain.h |   3 +-
 xen/arch/arm/include/asm/mm.h     |   8 +-
 xen/common/domain.c               |   7 ++
 xen/common/memory.c               |  23 +++++
 xen/common/page_alloc.c           | 155 +++++++++++++++++++++---------
 xen/include/xen/domain.h          |  12 +++
 xen/include/xen/mm.h              |  10 +-
 xen/include/xen/sched.h           |   6 ++
 11 files changed, 182 insertions(+), 56 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352187.578956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PK-0005oO-Ok; Mon, 20 Jun 2022 02:44:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352187.578956; Mon, 20 Jun 2022 02:44:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PK-0005oD-Kg; Mon, 20 Jun 2022 02:44:54 +0000
Received: by outflank-mailman (input) for mailman id 352187;
 Mon, 20 Jun 2022 02:44:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PI-0004eN-Ua
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:52 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id f576b243-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:44:52 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9C2141042;
 Sun, 19 Jun 2022 19:44:51 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1D2723F7D7;
 Sun, 19 Jun 2022 19:44:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f576b243-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 4/9] xen: do not merge reserved pages in free_heap_pages()
Date: Mon, 20 Jun 2022 10:44:03 +0800
Message-Id: <20220620024408.203797-5-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The code in free_heap_pages() tries to merge pages with their
successor/predecessor if the pages are suitably aligned. So if the
reserved pages are right next to pages given to the heap allocator,
free_heap_pages() will merge them and, as a result, accidentally hand
the reserved pages over to the heap allocator.

To avoid this scenario, this commit updates free_heap_pages() to check
whether the predecessor and/or successor has PGC_static set when trying
to merge the about-to-be-freed chunk with it.
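
The added guard amounts to one extra condition in the buddy-merge check, roughly as sketched below. This is a hedged simplification: the struct, the flag values, and the helper can_merge() are invented for illustration (the real code also checks mfn_valid() and the NUMA node inline).

```c
#include <stdbool.h>

#define PGC_state_free (1u << 0)   /* simplified, illustrative flag layout */
#define PGC_static     (1u << 3)

struct page_info {
    unsigned int count_info;
    unsigned int order;            /* stand-in for PFN_ORDER() */
};

/* A candidate buddy may only be coalesced when it is free, of the same
 * order, and not statically reserved. */
static bool can_merge(const struct page_info *buddy, unsigned int order)
{
    if ( !(buddy->count_info & PGC_state_free) )
        return false;                         /* not free */
    if ( buddy->count_info & PGC_static )
        return false;                         /* reserved: never coalesce */
    return buddy->order == order;             /* must be same buddy order */
}
```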

Suggested-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
v7 changes:
- no change
---
v6 changes:
- adapt to PGC_static
---
v5 changes:
- change PGC_reserved to adapt to PGC_staticmem
---
v4 changes:
- no changes
---
v3 changes:
- no changes
---
v2 changes:
- new commit
---
 xen/common/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index f27fa90ec4..d9253df270 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1486,6 +1486,7 @@ static void free_heap_pages(
             /* Merge with predecessor block? */
             if ( !mfn_valid(page_to_mfn(predecessor)) ||
                  !page_state_is(predecessor, free) ||
+                 (predecessor->count_info & PGC_static) ||
                  (PFN_ORDER(predecessor) != order) ||
                  (phys_to_nid(page_to_maddr(predecessor)) != node) )
                 break;
@@ -1509,6 +1510,7 @@ static void free_heap_pages(
             /* Merge with successor block? */
             if ( !mfn_valid(page_to_mfn(successor)) ||
                  !page_state_is(successor, free) ||
+                 (successor->count_info & PGC_static) ||
                  (PFN_ORDER(successor) != order) ||
                  (phys_to_nid(page_to_maddr(successor)) != node) )
                 break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:44:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:44:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352190.578967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PO-0006A6-1e; Mon, 20 Jun 2022 02:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352190.578967; Mon, 20 Jun 2022 02:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PN-00069y-Sz; Mon, 20 Jun 2022 02:44:57 +0000
Received: by outflank-mailman (input) for mailman id 352190;
 Mon, 20 Jun 2022 02:44:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PN-0004eN-HG
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:44:57 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id f8116d11-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:44:56 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0FCA41042;
 Sun, 19 Jun 2022 19:44:56 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 02AFD3F7D7;
 Sun, 19 Jun 2022 19:44:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8116d11-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v7 5/9] xen: add field "flags" to cover all internal CDF_XXX
Date: Mon, 20 Jun 2022 10:44:04 +0800
Message-Id: <20220620024408.203797-6-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With more and more CDF_xxx internal flags being introduced, and to save
space, this commit adds a new field "cdf" to struct domain to store the
CDF_* internal flags directly.

Another new CDF_xxx will be introduced in the next patch.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v7 changes:
- no change
---
v6 changes:
- no change
---
v5 changes:
- no change
---
v4 changes:
- no change
---
v3 changes:
- change fixed width type uint32_t to unsigned int
- change "flags" to a more descriptive name "cdf"
---
v2 changes:
- let "flags" live in struct domain, so other arches can take
advantage of it in the future
- fix coding style
---
 xen/arch/arm/domain.c             | 2 --
 xen/arch/arm/include/asm/domain.h | 3 +--
 xen/common/domain.c               | 3 +++
 xen/include/xen/sched.h           | 3 +++
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8110c1df86..74189d9878 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -709,8 +709,6 @@ int arch_domain_create(struct domain *d,
     ioreq_domain_init(d);
 #endif
 
-    d->arch.directmap = flags & CDF_directmap;
-
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f9..fe7a029ebf 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -29,7 +29,7 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-#define is_domain_direct_mapped(d) (d)->arch.directmap
+#define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 
 /*
  * Is the domain using the host memory layout?
@@ -103,7 +103,6 @@ struct arch_domain
     void *tee;
 #endif
 
-    bool directmap;
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7570eae91a..a3ef991bd1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -567,6 +567,9 @@ struct domain *domain_create(domid_t domid,
     /* Sort out our idea of is_system_domain(). */
     d->domain_id = domid;
 
+    /* Holding CDF_* internal flags. */
+    d->cdf = flags;
+
     /* Debug sanity. */
     ASSERT(is_system_domain(d) ? config == NULL : config != NULL);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 463d41ffb6..5191853c18 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -596,6 +596,9 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
+
+    /* Holding CDF_* constant. Internal flags for domain creation. */
+    unsigned int cdf;
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:45:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352203.578978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PX-0006xJ-Ir; Mon, 20 Jun 2022 02:45:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352203.578978; Mon, 20 Jun 2022 02:45:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37PX-0006x6-EI; Mon, 20 Jun 2022 02:45:07 +0000
Received: by outflank-mailman (input) for mailman id 352203;
 Mon, 20 Jun 2022 02:45:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PV-0005AM-Hb
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:45:05 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id fcc3436e-f042-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 04:45:04 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D260E1042;
 Sun, 19 Jun 2022 19:45:03 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8C4F13F7D7;
 Sun, 19 Jun 2022 19:45:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcc3436e-f042-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Date: Mon, 20 Jun 2022 10:44:06 +0800
Message-Id: <20220620024408.203797-8-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today, when a domain unpopulates memory at runtime, the pages are always
handed back to the heap allocator, which is a problem if the domain is
static.

Pages used as guest RAM for a static domain shall be reserved for that
domain alone and not be used for any other purpose, so they must never go
back to the heap allocator.

This commit puts reserved pages on the new list resv_page_list, only after
having taken them off the "normal" list, once the last reference has been
dropped.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v7 changes:
- Add the page to the resv_page_list *after* it has been freed
---
v6 changes:
- refine in-code comment
- move PGC_static !CONFIG_STATIC_MEMORY definition to common header
---
v5 changes:
- adapt this patch for PGC_staticmem
---
v4 changes:
- no changes
---
v3 changes:
- have page_list_del() just once out of the if()
- remove resv_pages counter
- make arch_free_heap_page be an expression, not a compound statement.
---
v2 changes:
- put reserved pages on resv_page_list after having taken them off
the "normal" list
---
 xen/common/domain.c     | 4 ++++
 xen/common/page_alloc.c | 4 ++++
 xen/include/xen/mm.h    | 9 +++++++++
 xen/include/xen/sched.h | 3 +++
 4 files changed, 20 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index a3ef991bd1..a49574fa24 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -604,6 +604,10 @@ struct domain *domain_create(domid_t domid,
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
+#ifdef CONFIG_STATIC_MEMORY
+    INIT_PAGE_LIST_HEAD(&d->resv_page_list);
+#endif
+
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index d9253df270..7d223087c0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2498,6 +2498,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
         }
 
         free_heap_pages(pg, order, scrub);
+
+        /* Add the page to the resv_page_list *after* it has been freed. */
+        if ( unlikely(pg->count_info & PGC_static) )
+            put_static_pages(d, pg, order);
     }
 
     if ( drop_dom_ref )
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 1c4ddb336b..68a647ceae 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -90,6 +90,15 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
                           bool need_scrub);
 int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
                             unsigned int memflags);
+#ifdef CONFIG_STATIC_MEMORY
+#define put_static_pages(d, page, order) ({                   \
+    unsigned int i;                                           \
+    for ( i = 0; i < (1 << (order)); i++ )                    \
+        page_list_add_tail((page) + i, &(d)->resv_page_list); \
+})
+#else
+#define put_static_pages(d, page, order) ((void)(d), (void)(page), (void)(order))
+#endif
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5191853c18..bd2782b3c5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -381,6 +381,9 @@ struct domain
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
+#ifdef CONFIG_STATIC_MEMORY
+    struct page_list_head resv_page_list; /* linked list */
+#endif
 
     /*
      * This field should only be directly accessed by domain_adjust_tot_pages()
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:45:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352205.578989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Pa-0007Ou-Tl; Mon, 20 Jun 2022 02:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352205.578989; Mon, 20 Jun 2022 02:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Pa-0007Ok-Om; Mon, 20 Jun 2022 02:45:10 +0000
Received: by outflank-mailman (input) for mailman id 352205;
 Mon, 20 Jun 2022 02:45:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PZ-0005AM-02
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:45:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id feeef90b-f042-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 04:45:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 880601042;
 Sun, 19 Jun 2022 19:45:07 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 385883F7D7;
 Sun, 19 Jun 2022 19:45:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feeef90b-f042-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 8/9] xen: introduce prepare_staticmem_pages
Date: Mon, 20 Jun 2022 10:44:07 +0800
Message-Id: <20220620024408.203797-9-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Later, we want to use acquire_domstatic_pages() to populate memory for a
static domain at runtime. However, that would involve a lot of pointless
work (checking mfn_valid(), scrubbing the free part, cleaning the cache...)
given that we already know the pages are valid and belong to the guest.

This commit splits acquire_staticmem_pages() in two and introduces
prepare_staticmem_pages() to bypass all that pointless work.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v7 changes:
- no change
---
v6 changes:
- adapt to PGC_static
---
v5 changes:
- new commit
---
 xen/common/page_alloc.c | 61 ++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7d223087c0..fee396a92d 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2673,26 +2673,13 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     spin_unlock(&heap_lock);
 }
 
-/*
- * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
- * static memory.
- * This function needs to be reworked if used outside of boot.
- */
-static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
-                                                         unsigned long nr_mfns,
-                                                         unsigned int memflags)
+static bool __init prepare_staticmem_pages(struct page_info *pg,
+                                           unsigned long nr_mfns,
+                                           unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
     unsigned long i;
-    struct page_info *pg;
-
-    ASSERT(nr_mfns);
-    for ( i = 0; i < nr_mfns; i++ )
-        if ( !mfn_valid(mfn_add(smfn, i)) )
-            return NULL;
-
-    pg = mfn_to_page(smfn);
 
     spin_lock(&heap_lock);
 
@@ -2703,7 +2690,7 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         {
             printk(XENLOG_ERR
                    "pg[%lu] Static MFN %"PRI_mfn" c=%#lx t=%#x\n",
-                   i, mfn_x(smfn) + i,
+                   i, mfn_x(page_to_mfn(pg)) + i,
                    pg[i].count_info, pg[i].tlbflush_timestamp);
             goto out_err;
         }
@@ -2727,6 +2714,38 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
 
+    return true;
+
+ out_err:
+    while ( i-- )
+        pg[i].count_info = PGC_static | PGC_state_free;
+
+    spin_unlock(&heap_lock);
+
+    return false;
+}
+
+/*
+ * Acquire nr_mfns contiguous reserved pages, starting at #smfn, of
+ * static memory.
+ * This function needs to be reworked if used outside of boot.
+ */
+static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
+                                                         unsigned long nr_mfns,
+                                                         unsigned int memflags)
+{
+    unsigned long i;
+    struct page_info *pg;
+
+    ASSERT(nr_mfns);
+    for ( i = 0; i < nr_mfns; i++ )
+        if ( !mfn_valid(mfn_add(smfn, i)) )
+            return NULL;
+
+    pg = mfn_to_page(smfn);
+    if ( !prepare_staticmem_pages(pg, nr_mfns, memflags) )
+        return NULL;
+
     /*
      * Ensure cache and RAM are consistent for platforms where the guest
      * can control its own visibility of/through the cache.
@@ -2735,14 +2754,6 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
         flush_page_to_ram(mfn_x(smfn) + i, !(memflags & MEMF_no_icache_flush));
 
     return pg;
-
- out_err:
-    while ( i-- )
-        pg[i].count_info = PGC_static | PGC_state_free;
-
-    spin_unlock(&heap_lock);
-
-    return NULL;
 }
 
 /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:45:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352212.579000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Pg-0007w4-8H; Mon, 20 Jun 2022 02:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352212.579000; Mon, 20 Jun 2022 02:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Pg-0007vv-4L; Mon, 20 Jun 2022 02:45:16 +0000
Received: by outflank-mailman (input) for mailman id 352212;
 Mon, 20 Jun 2022 02:45:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37Pf-0005AM-4e
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:45:15 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 011ba567-f043-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 04:45:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3A3AA1042;
 Sun, 19 Jun 2022 19:45:11 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E456F3F7D7;
 Sun, 19 Jun 2022 19:45:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 011ba567-f043-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
Date: Mon, 20 Jun 2022 10:44:08 +0800
Message-Id: <20220620024408.203797-10-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When a static domain populates memory through populate_physmap() at runtime,
it shall retrieve reserved pages from resv_page_list to make sure that
guest RAM is still restricted to the statically configured memory regions.
This commit also introduces a new helper, acquire_reserved_page(), to make
this work.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v7 changes:
- remove the lock, since we add the page to the resv_page_list after it has
been completely freed.
---
v6 changes:
- drop the lock before returning
---
v5 changes:
- extract common codes for assigning pages into a helper assign_domstatic_pages
- refine commit message
- remove stub function acquire_reserved_page
- Alloc/free of memory can happen concurrently, so access to the resv_page_list
needs to be protected with a spinlock
---
v4 changes:
- miss dropping __init in acquire_domstatic_pages
- add the page back to the reserved list in case of error
- remove redundant printk
- refine log message and make it warn level
---
v3 changes:
- move is_domain_using_staticmem to the common header file
- remove #ifdef CONFIG_STATIC_MEMORY-ary
- remove meaningless page_to_mfn(page) in error log
---
v2 changes:
- introduce acquire_reserved_page to retrieve reserved pages from
resv_page_list
- forbid non-zero-order requests in populate_physmap
- let is_domain_static return ((void)(d), false) on x86
---
 xen/common/memory.c     | 23 ++++++++++++++
 xen/common/page_alloc.c | 68 ++++++++++++++++++++++++++++++-----------
 xen/include/xen/mm.h    |  1 +
 3 files changed, 75 insertions(+), 17 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index f2d009843a..cb330ce877 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -245,6 +245,29 @@ static void populate_physmap(struct memop_args *a)
 
                 mfn = _mfn(gpfn);
             }
+            else if ( is_domain_using_staticmem(d) )
+            {
+                /*
+                 * No easy way to guarantee the retrieved pages are contiguous,
+                 * so forbid non-zero-order requests here.
+                 */
+                if ( a->extent_order != 0 )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "Cannot allocate static order-%u pages for static %pd\n",
+                             a->extent_order, d);
+                    goto out;
+                }
+
+                mfn = acquire_reserved_page(d, a->memflags);
+                if ( mfn_eq(mfn, INVALID_MFN) )
+                {
+                    gdprintk(XENLOG_WARNING,
+                             "%pd: failed to retrieve a reserved page\n",
+                             d);
+                    goto out;
+                }
+            }
             else
             {
                 page = alloc_domheap_pages(d, a->extent_order, a->memflags);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index fee396a92d..74628889ea 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2673,9 +2673,8 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
     spin_unlock(&heap_lock);
 }
 
-static bool __init prepare_staticmem_pages(struct page_info *pg,
-                                           unsigned long nr_mfns,
-                                           unsigned int memflags)
+static bool prepare_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
+                                    unsigned int memflags)
 {
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -2756,21 +2755,9 @@ static struct page_info * __init acquire_staticmem_pages(mfn_t smfn,
     return pg;
 }
 
-/*
- * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
- * then assign them to one specific domain #d.
- */
-int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
-                                   unsigned int nr_mfns, unsigned int memflags)
+static int assign_domstatic_pages(struct domain *d, struct page_info *pg,
+                                  unsigned int nr_mfns, unsigned int memflags)
 {
-    struct page_info *pg;
-
-    ASSERT_ALLOC_CONTEXT();
-
-    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
-    if ( !pg )
-        return -ENOENT;
-
     if ( !d || (memflags & (MEMF_no_owner | MEMF_no_refcount)) )
     {
         /*
@@ -2789,6 +2776,53 @@ int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
 
     return 0;
 }
+
+/*
+ * Acquire nr_mfns contiguous pages, starting at #smfn, of static memory,
+ * then assign them to one specific domain #d.
+ */
+int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
+                                   unsigned int nr_mfns, unsigned int memflags)
+{
+    struct page_info *pg;
+
+    ASSERT_ALLOC_CONTEXT();
+
+    pg = acquire_staticmem_pages(smfn, nr_mfns, memflags);
+    if ( !pg )
+        return -ENOENT;
+
+    if ( assign_domstatic_pages(d, pg, nr_mfns, memflags) )
+        return -EINVAL;
+
+    return 0;
+}
+
+/*
+ * Acquire a page from the reserved page list (resv_page_list) when
+ * populating memory for a static domain at runtime.
+ */
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
+{
+    struct page_info *page;
+
+    /* Acquire a page from the reserved page list (resv_page_list). */
+    page = page_list_remove_head(&d->resv_page_list);
+    if ( unlikely(!page) )
+        return INVALID_MFN;
+
+    if ( !prepare_staticmem_pages(page, 1, memflags) )
+        goto fail;
+
+    if ( assign_domstatic_pages(d, page, 1, memflags) )
+        goto fail;
+
+    return page_to_mfn(page);
+
+ fail:
+    page_list_add_tail(page, &d->resv_page_list);
+    return INVALID_MFN;
+}
 #endif
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 68a647ceae..e6803fd8a2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -99,6 +99,7 @@ int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
 #else
 #define put_static_pages(d, page, order) ((void)(d), (void)(page), (void)(order))
 #endif
+mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags);
 
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 02:48:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 02:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352242.579011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Sx-0001Xl-Nx; Mon, 20 Jun 2022 02:48:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352242.579011; Mon, 20 Jun 2022 02:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37Sx-0001Xe-L6; Mon, 20 Jun 2022 02:48:39 +0000
Received: by outflank-mailman (input) for mailman id 352242;
 Mon, 20 Jun 2022 02:48:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o37PS-0004eN-Hq
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 02:45:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id fa8d538b-f042-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 04:45:00 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 328C31042;
 Sun, 19 Jun 2022 19:45:00 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 69F0E3F7D7;
 Sun, 19 Jun 2022 19:44:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa8d538b-f042-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v7 6/9] xen/arm: introduce CDF_staticmem
Date: Mon, 20 Jun 2022 10:44:05 +0800
Message-Id: <20220620024408.203797-7-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620024408.203797-1-Penny.Zheng@arm.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To provide an easy and quick way to find out whether a domain's memory is
statically configured, this commit introduces a new flag, CDF_staticmem, and a
new helper, is_domain_using_staticmem(), to query it.
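The flag-and-helper pattern above can be illustrated with a standalone mock. This is a simplified sketch for illustration only: the CDF_* values and struct domain fields here are stand-ins, not Xen's real definitions (which live in xen/include/xen/domain.h and xen/include/xen/sched.h).

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Standalone mock of the creation-flag scheme used by this patch.
 * These are simplified stand-ins, not Xen's actual headers.
 */
#define CDF_directmap  (1U << 1)  /* should memory be directly mapped? */
#define CDF_staticmem  (1U << 2)  /* is memory statically allocated? */

struct domain {
    unsigned int cdf;  /* creation flags, fixed at domain creation */
};

/* Mirrors the helper added to the common header by this patch. */
#define is_domain_using_staticmem(d) (!!((d)->cdf & CDF_staticmem))
```

When CONFIG_STATIC_MEMORY is disabled, the patch instead defines the helper as `((void)(d), false)`, so static-memory code paths are compiled away in callers without extra #ifdefs.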

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
v7 changes:
- IS_ENABLED(CONFIG_STATIC_MEMORY) is no longer needed
---
v6 changes:
- move non-zero is_domain_using_staticmem() from ARM header to common
header
---
v5 changes:
- guard "is_domain_using_staticmem" under CONFIG_STATIC_MEMORY
- #define is_domain_using_staticmem zero if undefined
---
v4 changes:
- no changes
---
v3 changes:
- change name from "is_domain_static()" to "is_domain_using_staticmem"
---
v2 changes:
- change name from "is_domain_on_static_allocation" to "is_domain_static()"
---
 xen/arch/arm/domain_build.c |  5 ++++-
 xen/include/xen/domain.h    | 12 ++++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..17cd886be8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3287,9 +3287,12 @@ void __init create_domUs(void)
         if ( !dt_device_is_compatible(node, "xen,domain") )
             continue;
 
+        if ( dt_find_property(node, "xen,static-mem", NULL) )
+            flags |= CDF_staticmem;
+
         if ( dt_property_read_bool(node, "direct-map") )
         {
-            if ( !IS_ENABLED(CONFIG_STATIC_MEMORY) || !dt_find_property(node, "xen,static-mem", NULL) )
+            if ( !(flags & CDF_staticmem) )
                 panic("direct-map is not valid for domain %s without static allocation.\n",
                       dt_node_name(node));
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1c3c88a14d..cf34eddf6d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -35,6 +35,18 @@ void arch_get_domain_info(const struct domain *d,
 /* Should domain memory be directly mapped? */
 #define CDF_directmap            (1U << 1)
 #endif
+/* Is domain memory on static allocation? */
+#ifdef CONFIG_STATIC_MEMORY
+#define CDF_staticmem            (1U << 2)
+#else
+#define CDF_staticmem            0
+#endif
+
+#ifdef CONFIG_STATIC_MEMORY
+#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
+#else
+#define is_domain_using_staticmem(d) ((void)(d), false)
+#endif
 
 /*
  * Arch-specifics.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 03:08:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 03:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352274.579021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37lk-0004TX-BO; Mon, 20 Jun 2022 03:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352274.579021; Mon, 20 Jun 2022 03:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o37lk-0004TQ-8G; Mon, 20 Jun 2022 03:08:04 +0000
Received: by outflank-mailman (input) for mailman id 352274;
 Mon, 20 Jun 2022 03:08:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o37li-0004TG-PJ; Mon, 20 Jun 2022 03:08:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o37li-00019H-Ja; Mon, 20 Jun 2022 03:08:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o37li-0003B2-10; Mon, 20 Jun 2022 03:08:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o37lh-0006p3-Uj; Mon, 20 Jun 2022 03:08:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=87WCYXFQxczu0XBRd6KdkjKYJbQQ66lymU6kPUNmuVE=; b=VtsXMAhjIFwt3dC6wAW1XXVcFi
	MKonAMDcz8C7ELZGCsSqQQ+18MgMoNGYqe15kJFLi7AEoZQ7unPuo8oUllLjEDVR/PUTZ/upAmN2g
	WjAkG+5IAzjqPs6scN+2gw4zd18Q8votbkTwhsO8VqePOQr/tOBg9avUVTFBaUWLCh2A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171281-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171281: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:guest-start:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=05c6ca8512f2722f57743d653bb68cf2a273a55a
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 03:08:01 +0000

flight 171281 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171281/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-qcow2  8 xen-boot       fail in 171280 pass in 171281
 test-arm64-arm64-xl-vhd      13 guest-start                fail pass in 171280

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 171280 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 171280 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                05c6ca8512f2722f57743d653bb68cf2a273a55a
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    0 days
Testing same since   171280  2022-06-19 15:12:25 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ian Abbott <abbotti@mev.co.uk>
  Jamie Iles <jamie@jamieiles.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Miaoqian Lin <linmq006@gmail.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 960 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:11:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352301.579055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hM-00016z-4J; Mon, 20 Jun 2022 05:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352301.579055; Mon, 20 Jun 2022 05:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hM-00016s-0m; Mon, 20 Jun 2022 05:11:40 +0000
Received: by outflank-mailman (input) for mailman id 352301;
 Mon, 20 Jun 2022 05:11:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39hK-0000aY-Fj
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:38 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 75d07bec-f057-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 07:11:37 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D6ABF1042;
 Sun, 19 Jun 2022 22:11:36 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1330A3F7D7;
 Sun, 19 Jun 2022 22:11:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75d07bec-f057-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <penny.zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 2/8] xen/arm: allocate static shared memory to the default owner dom_io
Date: Mon, 20 Jun 2022 13:11:08 +0800
Message-Id: <20220620051114.210118-3-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Penny Zheng <penny.zheng@arm.com>

This commit introduces process_shm to handle static shared memory during
domain construction.

DOMID_IO will be the default owner of memory pre-shared among multiple domains
at boot time, when no explicit owner is specified.

This commit only covers allocating static shared memory to dom_io when the
owner domain is not explicitly defined in the device tree. The remaining
cases, including the "borrower" and "explicit owner" code paths, will be
introduced in later patches of this series.
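The ownership rule described above (a pre-shared bank is either not yet allocated, or already owned by dom_io) can be sketched with a standalone mock. The names below mimic Xen's page_info/page_get_owner/dom_io, but the types and the dom_io instance are simplified stand-ins for illustration, not the hypervisor's real definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's types; not the real definitions. */
struct domain { int id; };
struct page_info { struct domain *owner; };

static struct domain dom_io_instance = { .id = -2 };
static struct domain *dom_io = &dom_io_instance;

static struct domain *page_get_owner(const struct page_info *pg)
{
    return pg->owner;
}

/*
 * Mirrors the patch's is_shm_allocated_to_domio() logic: an unowned
 * page means the bank has not been allocated yet. The real code
 * ASSERTs that any existing owner is dom_io; here we simply compare.
 */
static bool is_shm_allocated_to_domio(const struct page_info *pg)
{
    if ( page_get_owner(pg) == NULL )
        return false;
    return page_get_owner(pg) == dom_io;
}
```

The first domain to reference a shared bank triggers the allocation to dom_io; subsequent references see the page already owned and skip re-allocation.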

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- refine in-code comment
---
v4 change:
- no changes
---
v3 change:
- refine in-code comment
---
v2 change:
- instead of introducing a new system domain, reuse the existing dom_io
- make dom_io a non-auto-translated domain, then no need to create P2M
for it
- change dom_io definition and make it wider to support static shm here too
- introduce is_shm_allocated_to_domio to check whether static shm is
allocated yet, instead of using shm_mask bitmap
- add in-code comment
---
 xen/arch/arm/domain_build.c | 132 +++++++++++++++++++++++++++++++++++-
 xen/common/domain.c         |   3 +
 2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..91a5ace851 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -527,6 +527,10 @@ static bool __init append_static_memory_to_bank(struct domain *d,
     return true;
 }
 
+/*
+ * If cell is NULL, pbase and psize should hold valid values.
+ * Otherwise, cell will be populated together with pbase and psize.
+ */
 static mfn_t __init acquire_static_memory_bank(struct domain *d,
                                                const __be32 **cell,
                                                u32 addr_cells, u32 size_cells,
@@ -535,7 +539,8 @@ static mfn_t __init acquire_static_memory_bank(struct domain *d,
     mfn_t smfn;
     int res;
 
-    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
+    if ( cell )
+        device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
     ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));
     if ( PFN_DOWN(*psize) > UINT_MAX )
     {
@@ -759,6 +764,125 @@ static void __init assign_static_memory_11(struct domain *d,
     panic("Failed to assign requested static memory for direct-map domain %pd.",
           d);
 }
+
+#ifdef CONFIG_STATIC_SHM
+/*
+ * This function checks whether the static shared memory region is
+ * already allocated to dom_io.
+ */
+static bool __init is_shm_allocated_to_domio(paddr_t pbase)
+{
+    struct page_info *page;
+
+    page = maddr_to_page(pbase);
+    ASSERT(page);
+
+    if ( page_get_owner(page) == NULL )
+        return false;
+
+    ASSERT(page_get_owner(page) == dom_io);
+    return true;
+}
+
+static mfn_t __init acquire_shared_memory_bank(struct domain *d,
+                                               u32 addr_cells, u32 size_cells,
+                                               paddr_t *pbase, paddr_t *psize)
+{
+    /*
+     * Pages of statically shared memory shall be included
+     * in domain_tot_pages().
+     */
+    d->max_pages += PFN_DOWN(*psize);
+
+    return acquire_static_memory_bank(d, NULL, addr_cells, size_cells,
+                                      pbase, psize);
+
+}
+
+/*
+ * Func allocate_shared_memory is supposed to be only called
+ * from the owner.
+ */
+static int __init allocate_shared_memory(struct domain *d,
+                                         u32 addr_cells, u32 size_cells,
+                                         paddr_t pbase, paddr_t psize)
+{
+    mfn_t smfn;
+
+    dprintk(XENLOG_INFO,
+            "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
+            pbase, pbase + psize);
+
+    smfn = acquire_shared_memory_bank(d, addr_cells, size_cells, &pbase,
+                                      &psize);
+    if ( mfn_eq(smfn, INVALID_MFN) )
+        return -EINVAL;
+
+    /*
+     * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
+     * It sees RAM 1:1 and we do not need to create P2M mapping for it
+     */
+    ASSERT(d == dom_io);
+    return 0;
+}
+
+static int __init process_shm(struct domain *d,
+                              const struct dt_device_node *node)
+{
+    struct dt_device_node *shm_node;
+    int ret = 0;
+    const struct dt_property *prop;
+    const __be32 *cells;
+    u32 shm_id;
+    u32 addr_cells, size_cells;
+    paddr_t gbase, pbase, psize;
+
+    dt_for_each_child_node(node, shm_node)
+    {
+        if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
+            continue;
+
+        if ( !dt_property_read_u32(shm_node, "xen,shm-id", &shm_id) )
+        {
+            printk("Shared memory node does not provide \"xen,shm-id\" property.\n");
+            return -ENOENT;
+        }
+
+        addr_cells = dt_n_addr_cells(shm_node);
+        size_cells = dt_n_size_cells(shm_node);
+        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
+        if ( !prop )
+        {
+            printk("Shared memory node does not provide \"xen,shared-mem\" property.\n");
+            return -ENOENT;
+        }
+        cells = (const __be32 *)prop->value;
+        /* xen,shared-mem = <pbase, psize, gbase>; */
+        device_tree_get_reg(&cells, addr_cells, size_cells, &pbase, &psize);
+        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
+        gbase = dt_read_number(cells, addr_cells);
+
+        /* TODO: Consider owner domain is not the default dom_io. */
+        /*
+         * Per static shared memory region could be shared between multiple
+         * domains.
+         * In case re-allocating the same shared memory region, we check
+         * if it is already allocated to the default owner dom_io before
+         * the actual allocation.
+         */
+        if ( !is_shm_allocated_to_domio(pbase) )
+        {
+            /* Allocate statically shared pages to the default owner dom_io. */
+            ret = allocate_shared_memory(dom_io, addr_cells, size_cells,
+                                         pbase, psize);
+            if ( ret )
+                return ret;
+        }
+    }
+
+    return 0;
+}
+#endif /* CONFIG_STATIC_SHM */
 #else
 static void __init allocate_static_memory(struct domain *d,
                                           struct kernel_info *kinfo,
@@ -3236,6 +3360,12 @@ static int __init construct_domU(struct domain *d,
     else
         assign_static_memory_11(d, &kinfo, node);
 
+#ifdef CONFIG_STATIC_SHM
+    rc = process_shm(d, node);
+    if ( rc < 0 )
+        return rc;
+#endif
+
     /*
      * Base address and irq number are needed when creating vpl011 device
      * tree node in prepare_dtb_domU, so initialization on related variables
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7570eae91a..7070f5a9b9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -780,6 +780,9 @@ void __init setup_system_domains(void)
      * This domain owns I/O pages that are within the range of the page_info
      * array. Mappings occur at the priv of the caller.
      * Quarantined PCI devices will be associated with this domain.
+     *
+     * DOMID_IO is also the default owner of memory pre-shared among multiple
+     * domains at boot time.
      */
     dom_io = domain_create(DOMID_IO, NULL, 0);
     if ( IS_ERR(dom_io) )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:46 2022
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v5 3/8] xen/arm: allocate static shared memory to a specific owner domain
Date: Mon, 20 Jun 2022 13:11:09 +0800
Message-Id: <20220620051114.210118-4-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If the owner property is defined, the owner domain of a static shared memory
region is no longer the default dom_io, but a specific domain.

This commit implements allocating static shared memory to a specific domain
when owner property is defined.

The code path for handling borrower domains will be introduced in the
following commits.
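
The owner-selection logic boils down to a small predicate, sketched here as
standalone C (an assumption-laden simplification: role == NULL models an
absent "role" property, and the dom_io bookkeeping is reduced to a boolean):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Decide whether this shm node should trigger an allocation:
 * - no "role" property: dom_io is the owner, allocate only if the bank
 *   has not already been allocated to dom_io by an earlier node;
 * - role = "owner": this very domain owns the region, so allocate;
 * - role = "borrower": never allocate here (handled in later patches).
 */
static bool should_allocate(const char *role, bool already_owned_by_domio)
{
    bool owner_dom_io = (role == NULL);

    return (owner_dom_io && !already_owned_by_domio) ||
           (!owner_dom_io && strcmp(role, "owner") == 0);
}
```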

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 change:
- no changes
---
v3 change:
- simplify the code since o_gbase is not used if the domain is dom_io
---
v2 change:
- P2M mapping is restricted to normal domain
- in-code comment fix
---
 xen/arch/arm/domain_build.c | 44 +++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 91a5ace851..d4fd64e2bd 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -805,9 +805,11 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
  */
 static int __init allocate_shared_memory(struct domain *d,
                                          u32 addr_cells, u32 size_cells,
-                                         paddr_t pbase, paddr_t psize)
+                                         paddr_t pbase, paddr_t psize,
+                                         paddr_t gbase)
 {
     mfn_t smfn;
+    int ret = 0;
 
     dprintk(XENLOG_INFO,
             "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
@@ -822,8 +824,18 @@ static int __init allocate_shared_memory(struct domain *d,
      * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
      * It sees RAM 1:1 and we do not need to create P2M mapping for it
      */
-    ASSERT(d == dom_io);
-    return 0;
+    if ( d != dom_io )
+    {
+        ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, PFN_DOWN(psize));
+        if ( ret )
+        {
+            printk(XENLOG_ERR
+                   "Failed to map shared memory to %pd.\n", d);
+            return ret;
+        }
+    }
+
+    return ret;
 }
 
 static int __init process_shm(struct domain *d,
@@ -836,6 +848,8 @@ static int __init process_shm(struct domain *d,
     u32 shm_id;
     u32 addr_cells, size_cells;
     paddr_t gbase, pbase, psize;
+    const char *role_str;
+    bool owner_dom_io = true;
 
     dt_for_each_child_node(node, shm_node)
     {
@@ -862,19 +876,27 @@ static int __init process_shm(struct domain *d,
         ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
         gbase = dt_read_number(cells, addr_cells);
 
-        /* TODO: Consider owner domain is not the default dom_io. */
+        /*
+         * "role" property is optional and if it is defined explicitly,
+         * then the owner domain is not the default "dom_io" domain.
+         */
+        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
+            owner_dom_io = false;
+
         /*
          * Per static shared memory region could be shared between multiple
          * domains.
-         * In case re-allocating the same shared memory region, we check
-         * if it is already allocated to the default owner dom_io before
-         * the actual allocation.
+         * So when owner domain is the default dom_io, in case re-allocating
+         * the same shared memory region, we check if it is already allocated
+         * to the default owner dom_io before the actual allocation.
          */
-        if ( !is_shm_allocated_to_domio(pbase) )
+        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
+             (!owner_dom_io && strcmp(role_str, "owner") == 0) )
         {
-            /* Allocate statically shared pages to the default owner dom_io. */
-            ret = allocate_shared_memory(dom_io, addr_cells, size_cells,
-                                         pbase, psize);
+            /* Allocate statically shared pages to the owner domain. */
+            ret = allocate_shared_memory(owner_dom_io ? dom_io : d,
+                                         addr_cells, size_cells,
+                                         pbase, psize, gbase);
             if ( ret )
                 return ret;
         }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:46 2022
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v5 4/8] xen/arm: introduce put_page_nr and get_page_nr
Date: Mon, 20 Jun 2022 13:11:10 +0800
Message-Id: <20220620051114.210118-5-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Later, we will need to take the right number of references, one per borrower
domain, on the owner domain's pages. Since we only have get_page() to
increment the page reference count by 1, a per-page loop would be needed,
which is inefficient and time-consuming.

To avoid that loop, this commit introduces a pair of new helpers,
get_page_nr() and put_page_nr(), to increment/drop the page reference
count by nr in one go.
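
The lock-free loop that takes nr references at once can be sketched outside
Xen as follows (a simplified model: COUNT_MASK stands in for Xen's
PGC_count_mask, a GCC __atomic builtin replaces Xen's cmpxchg(), and the
owner lookup is dropped):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for Xen's PGC_count_mask: low 10 bits hold the count. */
#define COUNT_MASK 0x3ffUL

static unsigned long count_info;   /* models page->count_info for one page */

/* Sketch of the "take nr references in one shot" loop from the patch. */
static bool get_ref_nr(unsigned long nr)
{
    unsigned long x, y = count_info;

    /* Restrict nr to avoid "double" overflow. */
    if ( nr >= COUNT_MASK )
        return false;

    do {
        x = y;
        /*
         * Count == 0: page is not allocated, so no reference can be taken.
         * ((x + nr) & COUNT_MASK) <= nr: the count would wrap past the mask.
         */
        if ( ((x + nr) & COUNT_MASK) <= nr )
            return false;
        /* On failure the builtin reloads the current value into y. */
    } while ( !__atomic_compare_exchange_n(&count_info, &y, x + nr, false,
                                           __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST) );

    return true;
}
```

A single compare-and-swap replaces nr iterations of the get_page() loop,
while keeping the same zero-count and overflow checks.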

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 changes:
- fix the assert about checking overflow to make sure that the right equation
return is at least equal to nr
- simplify the assert about checking the underflow
---
v3 changes:
- check overflow with "n"
- remove spurious change
- bring back the check that we enter the loop only when count_info is
greater than 0
---
v2 change:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  4 ++++
 xen/arch/arm/mm.c             | 42 +++++++++++++++++++++++++++--------
 2 files changed, 37 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 045a8ba4bb..8384eac2c8 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -343,6 +343,10 @@ void free_init_memory(void);
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
                                           unsigned int order);
 
+extern bool get_page_nr(struct page_info *page, const struct domain *domain,
+                        unsigned long nr);
+extern void put_page_nr(struct page_info *page, unsigned long nr);
+
 extern void put_page_type(struct page_info *page);
 static inline void put_page_and_type(struct page_info *page)
 {
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be37176a47..79b7d8de56 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1587,21 +1587,29 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
     return 0;
 }
 
-struct domain *page_get_owner_and_reference(struct page_info *page)
+static struct domain *page_get_owner_and_nr_reference(struct page_info *page,
+                                                      unsigned long nr)
 {
     unsigned long x, y = page->count_info;
     struct domain *owner;
 
+    /* Restrict nr to avoid "double" overflow */
+    if ( nr >= PGC_count_mask )
+    {
+        ASSERT_UNREACHABLE();
+        return NULL;
+    }
+
     do {
         x = y;
         /*
          * Count ==  0: Page is not allocated, so we cannot take a reference.
          * Count == -1: Reference count would wrap, which is invalid.
          */
-        if ( unlikely(((x + 1) & PGC_count_mask) <= 1) )
+        if ( unlikely(((x + nr) & PGC_count_mask) <= nr) )
             return NULL;
     }
-    while ( (y = cmpxchg(&page->count_info, x, x + 1)) != x );
+    while ( (y = cmpxchg(&page->count_info, x, x + nr)) != x );
 
     owner = page_get_owner(page);
     ASSERT(owner);
@@ -1609,14 +1617,19 @@ struct domain *page_get_owner_and_reference(struct page_info *page)
     return owner;
 }
 
-void put_page(struct page_info *page)
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+    return page_get_owner_and_nr_reference(page, 1);
+}
+
+void put_page_nr(struct page_info *page, unsigned long nr)
 {
     unsigned long nx, x, y = page->count_info;
 
     do {
-        ASSERT((y & PGC_count_mask) != 0);
+        ASSERT((y & PGC_count_mask) >= nr);
         x  = y;
-        nx = x - 1;
+        nx = x - nr;
     }
     while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );
 
@@ -1626,19 +1639,30 @@ void put_page(struct page_info *page)
     }
 }
 
-bool get_page(struct page_info *page, const struct domain *domain)
+void put_page(struct page_info *page)
+{
+    put_page_nr(page, 1);
+}
+
+bool get_page_nr(struct page_info *page, const struct domain *domain,
+                 unsigned long nr)
 {
-    const struct domain *owner = page_get_owner_and_reference(page);
+    const struct domain *owner = page_get_owner_and_nr_reference(page, nr);
 
     if ( likely(owner == domain) )
         return true;
 
     if ( owner != NULL )
-        put_page(page);
+        put_page_nr(page, nr);
 
     return false;
 }
 
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    return get_page_nr(page, domain, 1);
+}
+
 /* Common code requires get_page_type and put_page_type.
  * We don't care about typecounts so we just do the minimum to make it
  * happy. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:47 2022
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <penny.zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 1/8] xen/arm: introduce static shared memory
Date: Mon, 20 Jun 2022 13:11:07 +0800
Message-Id: <20220620051114.210118-2-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Penny Zheng <penny.zheng@arm.com>

This patch series introduces a new feature: setting up static shared memory
on a dom0less system through device tree configuration.

This commit parses the shared memory nodes at boot time and reserves them in
bootinfo.reserved_mem to prevent any other use.

This commit also proposes a new Kconfig option, CONFIG_STATIC_SHM, to wrap
static-shm-related code. The option depends on static memory
(CONFIG_STATIC_MEMORY), because later we want to reuse a few helpers guarded
by CONFIG_STATIC_MEMORY, such as acquire_staticmem_pages, for static shared
memory.
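
The reserved-bank bookkeeping performed by process_shm_node() can be
modelled with a short standalone C sketch (simplified assumptions: fixed
array sizes, no device tree parsing, and a plain -1 instead of -ENOSPC):

```c
#include <assert.h>
#include <stdint.h>

#define NR_MEM_BANKS 8

struct membank {
    uint64_t start, size;
    unsigned int nr_shm_domain;   /* how many domains share this bank */
};

struct meminfo {
    unsigned int nr_banks;
    struct membank bank[NR_MEM_BANKS];
};

/*
 * Sketch of the bank bookkeeping: a region shared by several domains is
 * reserved only once, and nr_shm_domain counts the sharing domains.
 */
static int reserve_shm_bank(struct meminfo *mem, uint64_t start, uint64_t size)
{
    unsigned int i;

    for ( i = 0; i < mem->nr_banks; i++ )
        if ( start == mem->bank[i].start && size == mem->bank[i].size )
            break;                /* already reserved by an earlier node */

    if ( i == mem->nr_banks )
    {
        if ( i >= NR_MEM_BANKS )
            return -1;            /* out of bank slots (-ENOSPC in Xen) */
        mem->bank[i].start = start;
        mem->bank[i].size = size;
        mem->nr_banks++;
    }

    mem->bank[i].nr_shm_domain++;
    return 0;
}
```

Two shm nodes describing the same region thus produce one reserved bank with
nr_shm_domain == 2, which later patches use to size the reference count.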

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 change:
- nit fix on doc
---
v3 change:
- make nr_shm_domain unsigned int
---
v2 change:
- document refinement
- remove bitmap and use the iteration to check
- add a new field nr_shm_domain to keep the number of shared domain
---
 docs/misc/arm/device-tree/booting.txt | 120 ++++++++++++++++++++++++++
 xen/arch/arm/Kconfig                  |   6 ++
 xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
 xen/arch/arm/include/asm/setup.h      |   3 +
 4 files changed, 197 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 98253414b8..6467bc5a28 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -378,3 +378,123 @@ device-tree:
 
 This will reserve a 512MB region starting at the host physical address
 0x30000000 to be exclusively used by DomU1.
+
+Static Shared Memory
+====================
+
+The static shared memory device tree nodes allow users to statically set up
+shared memory on a dom0less system, enabling domains to do shm-based
+communication.
+
+- compatible
+
+    "xen,domain-shared-memory-v1"
+
+- xen,shm-id
+
+    An 8-bit integer that represents the unique identifier of the shared memory
+    region. The maximum identifier shall be "xen,shm-id = <0xff>".
+
+- xen,shared-mem
+
+    An array of three values: the base address of the shared memory region in
+    the host physical address space, the size of the region, and the guest
+    physical address used as the target address of the mapping. The number of
+    cells for the host address (and size) is the same as for the guest
+    pseudo-physical address; both are inherited from the parent node.
+
+- role (Optional)
+
+    A string property specifying the ownership of a shared memory region.
+    The value must be one of the following: "owner" or "borrower".
+    A shared memory region can be explicitly backed by one domain, which is
+    called the "owner domain", and all the other domains sharing this
+    region are called "borrower domains".
+    If not specified, the default value is "borrower", and the owner is
+    "dom_shared", a system domain.
+
+As an example:
+
+chosen {
+    #address-cells = <0x1>;
+    #size-cells = <0x1>;
+    xen,xen-bootargs = "console=dtuart dtuart=serial0 bootscrub=0";
+
+    ......
+
+    /* this is for Dom0 */
+    dom0-shared-mem@10000000 {
+        compatible = "xen,domain-shared-memory-v1";
+        role = "owner";
+        xen,shm-id = <0x0>;
+        xen,shared-mem = <0x10000000 0x10000000 0x10000000>;
+    };
+
+    domU1 {
+        compatible = "xen,domain";
+        #address-cells = <0x1>;
+        #size-cells = <0x1>;
+        memory = <0 131072>;
+        cpus = <2>;
+        vpl011;
+
+        /*
+         * shared memory region identified as 0x0(xen,shm-id = <0x0>)
+         * is shared between Dom0 and DomU1.
+         */
+        domU1-shared-mem@10000000 {
+            compatible = "xen,domain-shared-memory-v1";
+            role = "borrower";
+            xen,shm-id = <0x0>;
+            xen,shared-mem = <0x10000000 0x10000000 0x50000000>;
+        };
+
+        /*
+         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
+         * is shared between DomU1 and DomU2.
+         */
+        domU1-shared-mem@50000000 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = <0x1>;
+            xen,shared-mem = <0x50000000 0x20000000 0x60000000>;
+        };
+
+        ......
+
+    };
+
+    domU2 {
+        compatible = "xen,domain";
+        #address-cells = <0x1>;
+        #size-cells = <0x1>;
+        memory = <0 65536>;
+        cpus = <1>;
+
+        /*
+         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
+         * is shared between domU1 and domU2.
+         */
+        domU2-shared-mem@50000000 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = <0x1>;
+            xen,shared-mem = <0x50000000 0x20000000 0x70000000>;
+        };
+
+        ......
+    };
+};
+
+This is an example with two static shared memory regions.
+
+For the static shared memory region identified as 0x0, the host physical
+address range starting at 0x10000000, of size 256MB, will be reserved to be
+shared between Dom0 and DomU1. It will be mapped at 0x10000000 in Dom0's
+guest physical address space, and at 0x50000000 in DomU1's. Dom0 is
+explicitly defined as the owner domain, and DomU1 as the borrower domain.
+
+For the static shared memory region identified as 0x1, the host physical
+address range starting at 0x50000000, of size 512MB, will be reserved to be
+shared between DomU1 and DomU2. It will be mapped at 0x60000000 in DomU1's
+guest physical address space, and at 0x70000000 in DomU2's. DomU1 and
+DomU2 are both borrower domains; the owner is the default owner domain,
+dom_shared.
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index be9eff0141..7321f47c0f 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -139,6 +139,12 @@ config TEE
 
 source "arch/arm/tee/Kconfig"
 
+config STATIC_SHM
+	bool "Statically shared memory on a dom0less system" if UNSUPPORTED
+	depends on STATIC_MEMORY
+	help
+	  This option enables statically shared memory on a dom0less system.
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index ec81a45de9..38dcb05d5d 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -361,6 +361,70 @@ static int __init process_domain_node(const void *fdt, int node,
                                    size_cells, &bootinfo.reserved_mem, true);
 }
 
+#ifdef CONFIG_STATIC_SHM
+static int __init process_shm_node(const void *fdt, int node,
+                                   u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const __be32 *cell;
+    paddr_t paddr, size;
+    struct meminfo *mem = &bootinfo.reserved_mem;
+    unsigned long i;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        printk("fdt: invalid #address-cells or #size-cells for static shared memory node.\n");
+        return -EINVAL;
+    }
+
+    prop = fdt_get_property(fdt, node, "xen,shared-mem", NULL);
+    if ( !prop )
+        return -ENOENT;
+
+    /*
+     * xen,shared-mem = <paddr, size, gaddr>;
+     * Memory region starting from physical address #paddr of #size shall
+     * be mapped to guest physical address #gaddr as static shared memory
+     * region.
+     */
+    cell = (const __be32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells, &paddr, &size);
+    for ( i = 0; i < mem->nr_banks; i++ )
+    {
+        /*
+         * A static shared memory region could be shared between multiple
+         * domains.
+         */
+        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
+            break;
+    }
+
+    if ( i == mem->nr_banks )
+    {
+        if ( i < NR_MEM_BANKS )
+        {
+            /* Static shared memory shall be reserved from any other use. */
+            mem->bank[mem->nr_banks].start = paddr;
+            mem->bank[mem->nr_banks].size = size;
+            mem->bank[mem->nr_banks].xen_domain = true;
+            mem->nr_banks++;
+        }
+        else
+        {
+            printk("Warning: Max number of supported memory regions reached.\n");
+            return -ENOSPC;
+        }
+    }
+    /*
+     * Keep a count of the number of domains sharing this bank, which may
+     * later be used to calculate the per-page reference count.
+     */
+    mem->bank[i].nr_shm_domain++;
+
+    return 0;
+}
+#endif
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -386,6 +450,10 @@ static int __init early_scan_node(const void *fdt,
         process_chosen_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 2 && device_tree_node_compatible(fdt, node, "xen,domain") )
         rc = process_domain_node(fdt, node, name, address_cells, size_cells);
+#ifdef CONFIG_STATIC_SHM
+    else if ( depth <= 3 && device_tree_node_compatible(fdt, node, "xen,domain-shared-memory-v1") )
+        rc = process_shm_node(fdt, node, address_cells, size_cells);
+#endif
 
     if ( rc < 0 )
         printk("fdt: node `%s': parsing failed\n", name);
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 2bb01ecfa8..5063e5d077 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -27,6 +27,9 @@ struct membank {
     paddr_t start;
     paddr_t size;
     bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
+#ifdef CONFIG_STATIC_SHM
+    unsigned int nr_shm_domain;
+#endif
 };
 
 struct meminfo {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352299.579033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hG-0000an-KV; Mon, 20 Jun 2022 05:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352299.579033; Mon, 20 Jun 2022 05:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hG-0000ag-HX; Mon, 20 Jun 2022 05:11:34 +0000
Received: by outflank-mailman (input) for mailman id 352299;
 Mon, 20 Jun 2022 05:11:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39hF-0000aY-EP
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:33 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 71647ee3-f057-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 07:11:30 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 605401042;
 Sun, 19 Jun 2022 22:11:29 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8C97E3F7D7;
 Sun, 19 Jun 2022 22:11:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71647ee3-f057-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/8] static shared memory on dom0less system
Date: Mon, 20 Jun 2022 13:11:06 +0800
Message-Id: <20220620051114.210118-1-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In a safety-critical environment, it is not considered safe to
dynamically change important configurations at runtime. Everything
should be statically defined and statically verified.

In this case, if the system configuration knows a priori that there are
only 2 VMs and they need to communicate over shared memory, it is safer
to pre-configure the shared memory at build time rather than let the VMs
attempt to share memory at runtime. And it is faster too.

Furthermore, on a dom0less system, the legacy ways to build up communication
channels between domains, such as grant tables, are normally absent.

So this patch series introduces a set of static shared memory device tree
nodes to allow users to statically set up shared memory on a dom0less
system, enabling domains to do shm-based communication.

The only way to trigger this static shared memory configuration should
be via device tree, which is at the same level as the XSM rules.

It was inspired by the patch series ["xl/libxl-based shared memory"](
https://marc.info/?l=xen-devel&m=154404821731186).

See the related [design link](
https://lore.kernel.org/all/a50d9fde-1d06-7cda-2779-9eea9e1c0134@xen.org/T/)
for more details.
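As an illustration, a borrower DomU configured under /chosen could carry a
node along the following lines (a sketch only: the node and property names
follow the binding introduced by this series, while the addresses, sizes and
IDs are made up):

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;

        /* share a 512MB region: host 0x40000000, guest 0x50000000 */
        domU1-shared-mem@40000000 {
            compatible = "xen,domain-shared-memory-v1";
            role = "borrower";
            xen,shm-id = <0x2>;
            xen,shared-mem = <0x40000000 0x20000000 0x50000000>;
        };
    };
};
```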

Penny Zheng (8):
  xen/arm: introduce static shared memory
  xen/arm: allocate static shared memory to the default owner dom_io
  xen/arm: allocate static shared memory to a specific owner domain
  xen/arm: introduce put_page_nr and get_page_nr
  xen/arm: Add additional reference to owner domain when the owner is
    allocated
  xen/arm: set up shared memory foreign mapping for borrower domain
  xen/arm: create shared memory nodes in guest device tree
  xen/arm: enable statically shared memory on Dom0

 docs/misc/arm/device-tree/booting.txt | 120 ++++++++
 xen/arch/arm/Kconfig                  |   6 +
 xen/arch/arm/bootfdt.c                |  68 +++++
 xen/arch/arm/domain_build.c           | 378 +++++++++++++++++++++++++-
 xen/arch/arm/include/asm/kernel.h     |   1 +
 xen/arch/arm/include/asm/mm.h         |   4 +
 xen/arch/arm/include/asm/setup.h      |   4 +
 xen/arch/arm/mm.c                     |  42 ++-
 xen/common/domain.c                   |   3 +
 9 files changed, 616 insertions(+), 10 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352304.579088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hV-00025P-4F; Mon, 20 Jun 2022 05:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352304.579088; Mon, 20 Jun 2022 05:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hV-00025A-0a; Mon, 20 Jun 2022 05:11:49 +0000
Received: by outflank-mailman (input) for mailman id 352304;
 Mon, 20 Jun 2022 05:11:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39hT-0000aY-T4
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7b7c2840-f057-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 07:11:46 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6DC871042;
 Sun, 19 Jun 2022 22:11:46 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9DA3C3F7D7;
 Sun, 19 Jun 2022 22:11:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b7c2840-f057-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v5 5/8] xen/arm: Add additional reference to owner domain when the owner is allocated
Date: Mon, 20 Jun 2022 13:11:11 +0800
Message-Id: <20220620051114.210118-6-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A borrower domain will fail to get a page reference on the owner domain
during allocation when the owner is created after the borrower.

So instead, we get and add the right number of references, which is the
number of borrowers, when the owner is allocated.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 changes:
- no change
---
v3 change:
- printk rather than dprintk since it is a serious error
---
v2 change:
- new commit
---
 xen/arch/arm/domain_build.c | 62 +++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d4fd64e2bd..650d18f5ef 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -799,6 +799,34 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 
 }
 
+static int __init acquire_nr_borrower_domain(struct domain *d,
+                                             paddr_t pbase, paddr_t psize,
+                                             unsigned long *nr_borrowers)
+{
+    unsigned long bank;
+
+    /* Iterate reserved memory to find requested shm bank. */
+    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
+        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
+
+        if ( pbase == bank_start && psize == bank_size )
+            break;
+    }
+
+    if ( bank == bootinfo.reserved_mem.nr_banks )
+        return -ENOENT;
+
+    if ( d == dom_io )
+        *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_domain;
+    else
+        /* Exclude the owner domain itself. */
+        *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_domain - 1;
+
+    return 0;
+}
+
 /*
  * Func allocate_shared_memory is supposed to be only called
  * from the owner.
@@ -810,6 +838,8 @@ static int __init allocate_shared_memory(struct domain *d,
 {
     mfn_t smfn;
     int ret = 0;
+    unsigned long nr_pages, nr_borrowers, i;
+    struct page_info *page;
 
     dprintk(XENLOG_INFO,
             "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
@@ -824,6 +854,7 @@ static int __init allocate_shared_memory(struct domain *d,
      * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
      * It sees RAM 1:1 and we do not need to create P2M mapping for it
      */
+    nr_pages = PFN_DOWN(psize);
     if ( d != dom_io )
     {
         ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, PFN_DOWN(psize));
@@ -835,6 +866,37 @@ static int __init allocate_shared_memory(struct domain *d,
         }
     }
 
+    /*
+     * Get the right number of references per page, which is the number of
+     * borrower domains.
+     */
+    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
+    if ( ret )
+        return ret;
+
+    /*
+     * Instead of letting the borrower domain get a page ref, we add as
+     * many additional references as the number of borrowers when the
+     * owner is allocated, since there is a chance that the owner is
+     * created after the borrower.
+     */
+    page = mfn_to_page(smfn);
+    for ( i = 0; i < nr_pages; i++ )
+    {
+        if ( !get_page_nr(page + i, d, nr_borrowers) )
+        {
+            printk(XENLOG_ERR
+                   "Failed to add %lu references to page %"PRI_mfn".\n",
+                   nr_borrowers, mfn_x(smfn) + i);
+            goto fail;
+        }
+    }
+
+    return 0;
+
+ fail:
+    while ( i-- > 0 )
+        put_page_nr(page + i, nr_borrowers);
     return ret;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352306.579099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hY-0002Rv-GW; Mon, 20 Jun 2022 05:11:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352306.579099; Mon, 20 Jun 2022 05:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hY-0002Ri-Cd; Mon, 20 Jun 2022 05:11:52 +0000
Received: by outflank-mailman (input) for mailman id 352306;
 Mon, 20 Jun 2022 05:11:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39hW-0000aY-V7
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7d79a30a-f057-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 07:11:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C36271042;
 Sun, 19 Jun 2022 22:11:49 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id CD15F3F7D7;
 Sun, 19 Jun 2022 22:11:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d79a30a-f057-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v5 6/8] xen/arm: set up shared memory foreign mapping for borrower domain
Date: Mon, 20 Jun 2022 13:11:12 +0800
Message-Id: <20220620051114.210118-7-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit sets up shared memory foreign mapping for borrower domain.

If the owner domain is the default dom_io, all sharing domains are treated
as borrower domains.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 changes:
- no change
---
v3 change:
- use map_regions_p2mt instead
---
v2 change:
- remove guest_physmap_add_shm, since for borrower domain, we only
do P2M foreign memory mapping now.
---
 xen/arch/arm/domain_build.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 650d18f5ef..1584e6c2ce 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -962,6 +962,15 @@ static int __init process_shm(struct domain *d,
             if ( ret )
                 return ret;
         }
+
+        if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
+        {
+            /* Set up P2M foreign mapping for borrower domain. */
+            ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
+                                   _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
+            if ( ret )
+                return ret;
+        }
     }
 
     return 0;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:11:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352315.579110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hb-00031S-RO; Mon, 20 Jun 2022 05:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352315.579110; Mon, 20 Jun 2022 05:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39hb-00031C-N4; Mon, 20 Jun 2022 05:11:55 +0000
Received: by outflank-mailman (input) for mailman id 352315;
 Mon, 20 Jun 2022 05:11:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39hb-0000ky-9q
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:55 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 7f794c26-f057-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 07:11:53 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2BACE1042;
 Sun, 19 Jun 2022 22:11:53 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 307DB3F7D7;
 Sun, 19 Jun 2022 22:11:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f794c26-f057-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v5 7/8] xen/arm: create shared memory nodes in guest device tree
Date: Mon, 20 Jun 2022 13:11:13 +0800
Message-Id: <20220620051114.210118-8-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We expose the shared memory to the domU using the "xen,shared-memory-v1"
reserved-memory binding. See
Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt
in Linux for the corresponding device tree binding.

To save the cost of re-parsing the shared memory device tree configuration
when creating shared memory nodes in the guest device tree, this commit adds
a new field "shm_mem" to store shm-info per domain.

For each shared memory region, a range is exposed under
the /reserved-memory node as a child node. Each range sub-node is
named xen-shmem@<address> and has the following properties:
- compatible:
        compatible = "xen,shared-memory-v1"
- reg:
        the base guest physical address and size of the shared memory region
- xen,id:
        an ID (cell) that identifies the shared memory region.
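For example, the resulting sub-node for a hypothetical 512MB region at guest
address 0x50000000 with ID 0x2 would look roughly like (values illustrative
only):

```dts
reserved-memory {
    #address-cells = <0x2>;
    #size-cells = <0x2>;
    ranges;

    xen-shmem@50000000 {
        compatible = "xen,shared-memory-v1";
        reg = <0x0 0x50000000 0x0 0x20000000>;
        xen,id = <0x2>;
    };
};
```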

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 change:
- no change
---
v3 change:
- move field "shm_mem" to kernel_info
---
v2 change:
- using xzalloc
- shm_id should be uint8_t
- make reg a local variable
- add #address-cells and #size-cells properties
- fix alignment
---
 xen/arch/arm/domain_build.c       | 143 +++++++++++++++++++++++++++++-
 xen/arch/arm/include/asm/kernel.h |   1 +
 xen/arch/arm/include/asm/setup.h  |   1 +
 3 files changed, 143 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 1584e6c2ce..4d62440a0e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -900,7 +900,22 @@ static int __init allocate_shared_memory(struct domain *d,
     return ret;
 }
 
-static int __init process_shm(struct domain *d,
+static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
+                                            paddr_t start, paddr_t size,
+                                            u32 shm_id)
+{
+    if ( (kinfo->shm_mem.nr_banks + 1) > NR_MEM_BANKS )
+        return -ENOMEM;
+
+    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
+    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
+    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id = shm_id;
+    kinfo->shm_mem.nr_banks++;
+
+    return 0;
+}
+
+static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                               const struct dt_device_node *node)
 {
     struct dt_device_node *shm_node;
@@ -971,6 +986,14 @@ static int __init process_shm(struct domain *d,
             if ( ret )
                 return ret;
         }
+
+        /*
+         * Record static shared memory region info for later setting
+         * up shm-node in guest device tree.
+         */
+        ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id);
+        if ( ret )
+            return ret;
     }
 
     return 0;
@@ -1301,6 +1324,117 @@ static int __init make_memory_node(const struct domain *d,
     return res;
 }
 
+#ifdef CONFIG_STATIC_SHM
+static int __init make_shm_memory_node(const struct domain *d,
+                                       void *fdt,
+                                       int addrcells, int sizecells,
+                                       struct meminfo *mem)
+{
+    unsigned long i = 0;
+    int res = 0;
+
+    if ( mem->nr_banks == 0 )
+        return -ENOENT;
+
+    /*
+     * For each shared memory region, a range is exposed under
+     * the /reserved-memory node as a child node. Each range sub-node is
+     * named xen-shmem@<address>.
+     */
+    dt_dprintk("Create xen-shmem node\n");
+
+    for ( ; i < mem->nr_banks; i++ )
+    {
+        uint64_t start = mem->bank[i].start;
+        uint64_t size = mem->bank[i].size;
+        uint8_t shm_id = mem->bank[i].shm_id;
+        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
+        char buf[27];
+        const char compat[] = "xen,shared-memory-v1";
+        __be32 reg[4];
+        __be32 *cells;
+        unsigned int len = (addrcells + sizecells) * sizeof(__be32);
+
+        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
+        res = fdt_begin_node(fdt, buf);
+        if ( res )
+            return res;
+
+        res = fdt_property(fdt, "compatible", compat, sizeof(compat));
+        if ( res )
+            return res;
+
+        cells = reg;
+        dt_child_set_range(&cells, addrcells, sizecells, start, size);
+
+        res = fdt_property(fdt, "reg", reg, len);
+        if ( res )
+            return res;
+
+        dt_dprintk("Shared memory bank %lu: %#"PRIx64"->%#"PRIx64"\n",
+                   i, start, start + size);
+
+        res = fdt_property_cell(fdt, "xen,id", shm_id);
+        if ( res )
+            return res;
+
+        res = fdt_end_node(fdt);
+        if ( res )
+            return res;
+    }
+
+    return res;
+}
+#else
+static int __init make_shm_memory_node(const struct domain *d,
+                                       void *fdt,
+                                       int addrcells, int sizecells,
+                                       struct meminfo *mem)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
+#endif
+
+static int __init make_resv_memory_node(const struct domain *d,
+                                        void *fdt,
+                                        int addrcells, int sizecells,
+                                        struct meminfo *mem)
+{
+    int res = 0;
+    /* Placeholder for reserved-memory\0 */
+    char resvbuf[16] = "reserved-memory";
+
+    if ( mem->nr_banks == 0 )
+        /* No shared memory provided. */
+        return 0;
+
+    dt_dprintk("Create reserved-memory node\n");
+
+    res = fdt_begin_node(fdt, resvbuf);
+    if ( res )
+        return res;
+
+    res = fdt_property(fdt, "ranges", NULL, 0);
+    if ( res )
+        return res;
+
+    res = fdt_property_cell(fdt, "#address-cells", addrcells);
+    if ( res )
+        return res;
+
+    res = fdt_property_cell(fdt, "#size-cells", sizecells);
+    if ( res )
+        return res;
+
+    res = make_shm_memory_node(d, fdt, addrcells, sizecells, mem);
+    if ( res )
+        return res;
+
+    res = fdt_end_node(fdt);
+
+    return res;
+}
+
 static int __init add_ext_regions(unsigned long s, unsigned long e, void *data)
 {
     struct meminfo *ext_regions = data;
@@ -3078,6 +3212,11 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
     if ( ret )
         goto err;
 
+    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
+                                &kinfo->shm_mem);
+    if ( ret )
+        goto err;
+
     /*
      * domain_handle_dtb_bootmodule has to be called before the rest of
      * the device tree is generated because it depends on the value of
@@ -3454,7 +3593,7 @@ static int __init construct_domU(struct domain *d,
         assign_static_memory_11(d, &kinfo, node);
 
 #ifdef CONFIG_STATIC_SHM
-    rc = process_shm(d, node);
+    rc = process_shm(d, &kinfo, node);
     if ( rc < 0 )
         return rc;
 #endif
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index c4dc039b54..2cc506b100 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -19,6 +19,7 @@ struct kernel_info {
     void *fdt; /* flat device tree */
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct meminfo mem;
+    struct meminfo shm_mem;
 
     /* kernel entry point */
     paddr_t entry;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 5063e5d077..7497cc40aa 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -29,6 +29,7 @@ struct membank {
     bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
 #ifdef CONFIG_STATIC_SHM
     unsigned int nr_shm_domain;
+    uint8_t shm_id; /* Identifier of a static shared memory bank. */
 #endif
 };
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:18:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352363.579121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39oN-0005NM-Ly; Mon, 20 Jun 2022 05:18:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352363.579121; Mon, 20 Jun 2022 05:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39oN-0005NF-Gx; Mon, 20 Jun 2022 05:18:55 +0000
Received: by outflank-mailman (input) for mailman id 352363;
 Mon, 20 Jun 2022 05:18:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqE0=W3=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o39he-0000ky-5M
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:11:58 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 814c9547-f057-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 07:11:56 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 401141042;
 Sun, 19 Jun 2022 22:11:56 -0700 (PDT)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8C2D03F7D7;
 Sun, 19 Jun 2022 22:11:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 814c9547-f057-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <penny.zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 8/8] xen/arm: enable statically shared memory on Dom0
Date: Mon, 20 Jun 2022 13:11:14 +0800
Message-Id: <20220620051114.210118-9-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620051114.210118-1-Penny.Zheng@arm.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Penny Zheng <penny.zheng@arm.com>

To add statically shared memory nodes in Dom0, the user shall put the
corresponding static shared memory configuration under the /chosen node.

This commit calls the shm-processing function process_shm from
construct_dom0 to enable statically shared memory on Dom0.
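
For illustration, a sketch of such a /chosen configuration. The node and
property names follow the xen,shared-memory device tree binding introduced
earlier in this series; the addresses and the shm ID below are made-up
example values, not defaults:

```dts
chosen {
        /* Hypothetical static shared memory bank for Dom0. */
        dom0-shared-mem@10000000 {
                compatible = "xen,domain-shared-memory-v1";
                /* Matches the uint8_t shm_id stored in struct membank. */
                xen,shm-id = <0x1>;
                /* host physical address, guest physical address, size */
                xen,shared-mem = <0x10000000 0x10000000 0x10000000>;
        };
};
```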

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v5 change:
- no change
---
v4 change:
- no change
---
v3 change:
- no change
---
v2 change:
- no change
---
 xen/arch/arm/domain_build.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4d62440a0e..b57c60f411 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2658,6 +2658,11 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
             if ( res )
                 return res;
         }
+
+        res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
+                                    &kinfo->shm_mem);
+        if ( res )
+            return res;
     }
 
     res = fdt_end_node(kinfo->fdt);
@@ -3730,6 +3735,9 @@ static int __init construct_dom0(struct domain *d)
 {
     struct kernel_info kinfo = {};
     int rc;
+#ifdef CONFIG_STATIC_SHM
+    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+#endif
 
     /* Sanity! */
     BUG_ON(d->domain_id != 0);
@@ -3764,6 +3772,12 @@ static int __init construct_dom0(struct domain *d)
     allocate_memory_11(d, &kinfo);
     find_gnttab_region(d, &kinfo);
 
+#ifdef CONFIG_STATIC_SHM
+    rc = process_shm(d, &kinfo, chosen);
+    if ( rc < 0 )
+        return rc;
+#endif
+
     /* Map extra GIC MMIO, irqs and other hw stuffs to dom0. */
     rc = gic_map_hwdom_extra_mappings(d);
     if ( rc < 0 )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:22:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352382.579132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39rk-0006sA-9F; Mon, 20 Jun 2022 05:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352382.579132; Mon, 20 Jun 2022 05:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39rk-0006s3-5Z; Mon, 20 Jun 2022 05:22:24 +0000
Received: by outflank-mailman (input) for mailman id 352382;
 Mon, 20 Jun 2022 05:22:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6bZF=W3=leemhuis.info=regressions@srs-se1.protection.inumbo.net>)
 id 1o39ri-0006rx-7c
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:22:22 +0000
Received: from wp530.webpack.hosteurope.de (wp530.webpack.hosteurope.de
 [2a01:488:42:1000:50ed:8234::])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5579259-f058-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 07:22:20 +0200 (CEST)
Received: from [2a02:8108:963f:de38:eca4:7d19:f9a2:22c5]; authenticated
 by wp530.webpack.hosteurope.de running ExIM with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
 id 1o39re-0008H9-Uu; Mon, 20 Jun 2022 07:22:19 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5579259-f058-11ec-b725-ed86ccbb4733
Message-ID: <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>
Date: Mon, 20 Jun 2022 07:22:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Dave Hansen <dave.hansen@linux.intel.com>, Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
 <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
From: Thorsten Leemhuis <regressions@leemhuis.info>
In-Reply-To: <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-bounce-key: webpack.hosteurope.de;regressions@leemhuis.info;1655702540;0c7ec7dc;
X-HE-SMSGID: 1o39re-0008H9-Uu

On 14.06.22 17:09, Juergen Gross wrote:
> On 03.05.22 15:22, Juergen Gross wrote:
>> x86_has_pat_wp() is using a wrong test, as it relies on the normal
>> PAT configuration used by the kernel. In case the PAT MSR has been
>> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
>> false even if the PAT configuration is allowing WP mappings.
>>
>> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   arch/x86/mm/init.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index d8cfce221275..71e182ebced3 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>>   /* Check that the write-protect PAT entry is set for write-protect */
>>   bool x86_has_pat_wp(void)
>>   {
>> -    return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] ==
>> _PAGE_CACHE_MODE_WP;
>> +    return
>> __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
>> +           _PAGE_CACHE_MODE_WP;
>>   }
>>     enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
> 
> x86 maintainers, please consider taking this patch, as it is fixing
> a real bug. Patch 2 of this series can be dropped IMO.

Juergen, can you help me out here, please? Patch 2 AFAICS was supposed to
fix this regression I'm tracking:
https://lore.kernel.org/regressions/YnHK1Z3o99eMXsVK@mail-itl/

Is patch 1 alone enough to fix it? Or is there a different fix for it?
Or is there some other solution to finally fix that regression, which
ideally should have been fixed weeks ago already?

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)

P.S.: As the Linux kernel's regression tracker I deal with a lot of
reports and sometimes miss something important when writing mails like
this. If that's the case here, don't hesitate to tell me in a public
reply, it's in everyone's interest to set the public record straight.


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 05:30:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 05:30:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352391.579143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39zN-0008LY-3X; Mon, 20 Jun 2022 05:30:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352391.579143; Mon, 20 Jun 2022 05:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o39zM-0008LR-VD; Mon, 20 Jun 2022 05:30:16 +0000
Received: by outflank-mailman (input) for mailman id 352391;
 Mon, 20 Jun 2022 05:30:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o39zL-0007Yv-DN
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 05:30:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f83efbb-f05a-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 07:30:14 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AD3641F9A0;
 Mon, 20 Jun 2022 05:30:13 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 55C7513427;
 Mon, 20 Jun 2022 05:30:13 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ahGRE+UFsGKiVwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 05:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f83efbb-f05a-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655703013; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lTKBeEnkA13UMoSCaO9KZJ/EHnwH3pAZboyVMGM77rU=;
	b=QjLziJ7rqxGaGGZCI/l6SBWrV3yWGMzHEqUYWyHzS4tzmyl792yQtcV4wJylgPv01vLTF/
	/fo/C8+r32/Hx5Ni6vdtumj1dCcxONe4Mv8xLthn1o/nuXovfzBh1NANPk9hQsijuroy2q
	OhYKvZHkOE1C00c+bCpc6I5RK2umpNs=
Message-ID: <c5515533-29a9-9e91-5a36-45f00f25b37b@suse.com>
Date: Mon, 20 Jun 2022 07:30:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Thorsten Leemhuis <regressions@leemhuis.info>,
 xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
 <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
 <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
In-Reply-To: <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------TrYTNr10W5n70OBgv21LKjlS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------TrYTNr10W5n70OBgv21LKjlS
Content-Type: multipart/mixed; boundary="------------pw6YLSc6mvGCJfu2mpvQIea8";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Thorsten Leemhuis <regressions@leemhuis.info>,
 xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <c5515533-29a9-9e91-5a36-45f00f25b37b@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
 <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
 <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>
In-Reply-To: <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>

--------------pw6YLSc6mvGCJfu2mpvQIea8
Content-Type: multipart/mixed; boundary="------------h099hHche060m0xNSeZh399g"

--------------h099hHche060m0xNSeZh399g
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.06.22 07:22, Thorsten Leemhuis wrote:
> On 14.06.22 17:09, Juergen Gross wrote:
>> On 03.05.22 15:22, Juergen Gross wrote:
>>> x86_has_pat_wp() is using a wrong test, as it relies on the normal
>>> PAT configuration used by the kernel. In case the PAT MSR has been
>>> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
>>> false even if the PAT configuration is allowing WP mappings.
>>>
>>> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   arch/x86/mm/init.c | 3 ++-
>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>>> index d8cfce221275..71e182ebced3 100644
>>> --- a/arch/x86/mm/init.c
>>> +++ b/arch/x86/mm/init.c
>>> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>>>   /* Check that the write-protect PAT entry is set for write-protect */
>>>   bool x86_has_pat_wp(void)
>>>   {
>>> -    return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] ==
>>> _PAGE_CACHE_MODE_WP;
>>> +    return
>>> __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
>>> +           _PAGE_CACHE_MODE_WP;
>>>   }
>>>     enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
>>
>> x86 maintainers, please consider taking this patch, as it is fixing
>> a real bug. Patch 2 of this series can be dropped IMO.
>
> Juergen, can you help me out here please. Patch 2 afaics was supposed to
> fix this regression I'm tracking:
> https://lore.kernel.org/regressions/YnHK1Z3o99eMXsVK@mail-itl/

No, patch 2 wasn't covering all needed cases.

> Is Patch 1 alone enough to fix it? Or is there a different fix for it?

Patch 1 is fixing a different issue (it is lacking any maintainer
feedback, though).

This patch of Jan should do the job, but it seems to be stuck, too:

https://lore.kernel.org/lkml/9385fa60-fa5d-f559-a137-6608408f88b0@suse.com/

> Or is there some other solution to finally fix that regressions that
> ideally should have been fixed weeks ago already?

I agree it should have been fixed quite some time now, but the x86
maintainers don't seem to be interested in those stuck patches. :-(

Maybe I should take a different approach:

x86 maintainers, please speak up if you NAK (or Ack) any of above two
patches. In case you don't NAK or take the patches, I'm inclined to carry
them via the Xen tree to get the issues fixed.


Juergen
--------------h099hHche060m0xNSeZh399g--

--------------pw6YLSc6mvGCJfu2mpvQIea8--

--------------TrYTNr10W5n70OBgv21LKjlS--


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 06:16:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 06:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352400.579154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3AhT-0004HC-Ij; Mon, 20 Jun 2022 06:15:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352400.579154; Mon, 20 Jun 2022 06:15:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3AhT-0004H5-FW; Mon, 20 Jun 2022 06:15:51 +0000
Received: by outflank-mailman (input) for mailman id 352400;
 Mon, 20 Jun 2022 06:15:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6bZF=W3=leemhuis.info=regressions@srs-se1.protection.inumbo.net>)
 id 1o3AhS-0004Gz-9l
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 06:15:50 +0000
Received: from wp530.webpack.hosteurope.de (wp530.webpack.hosteurope.de
 [2a01:488:42:1000:50ed:8234::])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d09f9e9-f060-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 08:15:48 +0200 (CEST)
Received: from [2a02:8108:963f:de38:eca4:7d19:f9a2:22c5]; authenticated
 by wp530.webpack.hosteurope.de running ExIM with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
 id 1o3AhO-0007Pt-KS; Mon, 20 Jun 2022 08:15:46 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d09f9e9-f060-11ec-bd2d-47488cf2e6aa
Message-ID: <f6359e71-5516-5b04-ca35-6a4870456cec@leemhuis.info>
Date: Mon, 20 Jun 2022 08:15:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Dave Hansen <dave.hansen@linux.intel.com>, Borislav Petkov <bp@alien8.de>
Cc: jbeulich@suse.com, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
 <fb0eadee-1d45-f414-eda4-a87f01eeb57a@suse.com>
 <effc0c6a-9e4d-b503-e4ba-6c8d2da72699@leemhuis.info>
 <c5515533-29a9-9e91-5a36-45f00f25b37b@suse.com>
From: Thorsten Leemhuis <regressions@leemhuis.info>
In-Reply-To: <c5515533-29a9-9e91-5a36-45f00f25b37b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-bounce-key: webpack.hosteurope.de;regressions@leemhuis.info;1655705748;ccabcbb3;
X-HE-SMSGID: 1o3AhO-0007Pt-KS

On 20.06.22 07:30, Juergen Gross wrote:
> On 20.06.22 07:22, Thorsten Leemhuis wrote:
>> On 14.06.22 17:09, Juergen Gross wrote:
>>> On 03.05.22 15:22, Juergen Gross wrote:
>>>> x86_has_pat_wp() is using a wrong test, as it relies on the normal
>>>> PAT configuration used by the kernel. In case the PAT MSR has been
>>>> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
>>>> false even if the PAT configuration is allowing WP mappings.
>>>>
>>>> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    arch/x86/mm/init.c | 3 ++-
>>>>    1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>>>> index d8cfce221275..71e182ebced3 100644
>>>> --- a/arch/x86/mm/init.c
>>>> +++ b/arch/x86/mm/init.c
>>>> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>>>>    /* Check that the write-protect PAT entry is set for
>>>> write-protect */
>>>>    bool x86_has_pat_wp(void)
>>>>    {
>>>> -    return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] ==
>>>> _PAGE_CACHE_MODE_WP;
>>>> +    return
>>>> __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
>>>> +           _PAGE_CACHE_MODE_WP;
>>>>    }
>>>>      enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
>>>
>>> x86 maintainers, please consider taking this patch, as it is fixing
>>> a real bug. Patch 2 of this series can be dropped IMO.
>>
>> Juergen, can you help me out here please. Patch 2 afaics was supposed to
>> fix this regression I'm tracking:
>> https://lore.kernel.org/regressions/YnHK1Z3o99eMXsVK@mail-itl/
> No, patch 2 wasn't covering all needed cases.

Ahh, happens. Thx for the info.

>> Is Patch 1 alone enough to fix it? Or is there a different fix for it?
> Patch 1 is fixing a different issue (it is lacking any maintainer
> feedback, though).
> 
> This patch of Jan should do the job, but it seems to be stuck, too:
> https://lore.kernel.org/lkml/9385fa60-fa5d-f559-a137-6608408f88b0@suse.com/

Ahh. Fun fact: that was on my list of things to prod, too.

>> Or is there some other solution to finally fix that regressions that
>> ideally should have been fixed weeks ago already?
> 
> I agree it should have been fixed quite some time now, but the x86
> maintainers don't seem to be interested in those stuck patches. :-(
> 
> Maybe I should take a different approach:
> 
> x86 maintainers, please speak up if you NAK (or Ack) any of above two
> patches.
> In case you don't NAK or take the patches, I'm inclined to carry them via
> the Xen tree to get the issues fixed.

Yeah, I'd be really glad if we could find a solution for this situation
and get it finally fixed in mainline and backported to stable.

Ciao, Thorsten


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 06:59:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 06:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352409.579164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BNP-00006b-R3; Mon, 20 Jun 2022 06:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352409.579164; Mon, 20 Jun 2022 06:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BNP-00006U-OL; Mon, 20 Jun 2022 06:59:11 +0000
Received: by outflank-mailman (input) for mailman id 352409;
 Mon, 20 Jun 2022 06:59:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3BNO-00006K-1e; Mon, 20 Jun 2022 06:59:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3BNN-0005Zf-Vi; Mon, 20 Jun 2022 06:59:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3BNN-0000pz-9d; Mon, 20 Jun 2022 06:59:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3BNN-0004WO-9C; Mon, 20 Jun 2022 06:59:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/PVE0EPZULRRUH+uMuuJ9/kmXY0lcbWPQKC8EtYewaE=; b=5WygCbGZdaHC1a/3zqFD2EIqpz
	CWpQx+dBmW0Et9HoxZ4XGaSOod5+kYXPG9JtdQSkWUtP00F5W7uJQ25jVVpZyxsDWCT3NyOBrUrhf
	29TwHUe9cUoUO5WH30LFmTBvc7xgXVRZixxNcw5FnQDlj+/zRxIWsPfOLEZBEmjA8cvI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171286-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171286: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e8034b534ab51635b62dca631514bb6305850a5a
X-Osstest-Versions-That:
    ovmf=cc2db6ebfb6d9d85ba4c7b35fba1fa37fffc0bc2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 06:59:09 +0000

flight 171286 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171286/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e8034b534ab51635b62dca631514bb6305850a5a
baseline version:
 ovmf                 cc2db6ebfb6d9d85ba4c7b35fba1fa37fffc0bc2

Last test of basis   171243  2022-06-17 09:43:17 Z    2 days
Testing same since   171286  2022-06-20 05:11:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liu, Zhiguang <Zhiguang.Liu@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cc2db6ebfb..e8034b534a  e8034b534ab51635b62dca631514bb6305850a5a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352418.579176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRH-0001Z9-Bb; Mon, 20 Jun 2022 07:03:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352418.579176; Mon, 20 Jun 2022 07:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRH-0001Yz-8V; Mon, 20 Jun 2022 07:03:11 +0000
Received: by outflank-mailman (input) for mailman id 352418;
 Mon, 20 Jun 2022 07:03:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7+S=W3=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3BRF-0001Yr-Qn
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:03:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 0981acb4-f067-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 09:03:07 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2382C1042;
 Mon, 20 Jun 2022 00:03:07 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.35.125])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5FEC63F5A1;
 Mon, 20 Jun 2022 00:03:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0981acb4-f067-11ec-bd2d-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Date: Mon, 20 Jun 2022 09:02:36 +0200
Message-Id: <20220620070245.77979-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series fixes all the findings for MISRA C 2012 Rule 8.1, reported by
cppcheck 2.7 with the misra addon, for Arm (arm32/arm64, target allyesconfig).
Fixing this rule comes down to replacing the implicit 'unsigned' with the
explicit 'unsigned int' type, as there are no other violations of that rule
in the Xen codebase.

The last three patches contain fixes for files that originated from other
projects such as the Linux kernel or libfdt, so they are not really suitable
for fixing in Xen (they can be dropped). Nevertheless, they serve as an
indication of what should be added to a deviation list.

Some important notes:
Static analyzers are not perfect. Cppcheck generates an internal AST error for
some of the files, resulting in all the checks being skipped. For these files,
one needs to verify manually that there are no findings.

Cppcheck 2.8 was released recently, but it contains a regression bug in the
misra addon, so do not use that version for now.

Michal Orzel (9):
  xen/arm: Use explicitly specified types
  xen/domain: Use explicitly specified types
  xen/common: Use explicitly specified types
  include/xen: Use explicitly specified types
  include/public: Use explicitly specified types
  xsm/flask: Use explicitly specified types
  common/libfdt: Use explicitly specified types
  common/inflate: Use explicitly specified types
  drivers/acpi: Use explicitly specified types

 xen/arch/arm/domain_build.c             |   2 +-
 xen/arch/arm/guestcopy.c                |  13 +-
 xen/arch/arm/include/asm/arm32/bitops.h |   8 +-
 xen/arch/arm/include/asm/fixmap.h       |   4 +-
 xen/arch/arm/include/asm/guest_access.h |   8 +-
 xen/arch/arm/include/asm/mm.h           |   2 +-
 xen/arch/arm/irq.c                      |   2 +-
 xen/arch/arm/kernel.c                   |   2 +-
 xen/arch/arm/mm.c                       |   4 +-
 xen/common/domain.c                     |   2 +-
 xen/common/grant_table.c                |   6 +-
 xen/common/gunzip.c                     |   8 +-
 xen/common/inflate.c                    | 166 ++++++++++++------------
 xen/common/libfdt/fdt_ro.c              |   4 +-
 xen/common/libfdt/fdt_rw.c              |   2 +-
 xen/common/libfdt/fdt_sw.c              |   2 +-
 xen/common/libfdt/fdt_wip.c             |   2 +-
 xen/common/sched/cpupool.c              |   4 +-
 xen/common/trace.c                      |   2 +-
 xen/drivers/acpi/tables/tbfadt.c        |   2 +-
 xen/drivers/acpi/tables/tbutils.c       |   2 +-
 xen/include/public/physdev.h            |   4 +-
 xen/include/public/sysctl.h             |  10 +-
 xen/include/xen/domain.h                |   2 +-
 xen/include/xen/perfc.h                 |   2 +-
 xen/include/xen/sched.h                 |   6 +-
 xen/xsm/flask/ss/avtab.c                |   2 +-
 27 files changed, 137 insertions(+), 136 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352419.579187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRI-0001pF-Mh; Mon, 20 Jun 2022 07:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352419.579187; Mon, 20 Jun 2022 07:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRI-0001p8-JD; Mon, 20 Jun 2022 07:03:12 +0000
Received: by outflank-mailman (input) for mailman id 352419;
 Mon, 20 Jun 2022 07:03:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7+S=W3=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3BRH-0001Yx-7M
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:03:11 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 0a9ec9ec-f067-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:03:09 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 48DD6113E;
 Mon, 20 Jun 2022 00:03:09 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.35.125])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 834493F5A1;
 Mon, 20 Jun 2022 00:03:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a9ec9ec-f067-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/9] xen/arm: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:37 +0200
Message-Id: <20220620070245.77979-2-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the misra addon
by substituting the implicit type 'unsigned' with the explicit 'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/arch/arm/domain_build.c             |  2 +-
 xen/arch/arm/guestcopy.c                | 13 +++++++------
 xen/arch/arm/include/asm/arm32/bitops.h |  8 ++++----
 xen/arch/arm/include/asm/fixmap.h       |  4 ++--
 xen/arch/arm/include/asm/guest_access.h |  8 ++++----
 xen/arch/arm/include/asm/mm.h           |  2 +-
 xen/arch/arm/irq.c                      |  2 +-
 xen/arch/arm/kernel.c                   |  2 +-
 xen/arch/arm/mm.c                       |  4 ++--
 9 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..3fd1186b53 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1007,7 +1007,7 @@ static void __init set_interrupt(gic_interrupt_t interrupt,
  */
 static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
                                           gic_interrupt_t *intr,
-                                          unsigned num_irq)
+                                          unsigned int num_irq)
 {
     int res;
 
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 32681606d8..abb6236e27 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -56,7 +56,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                 copy_info_t info, unsigned int flags)
 {
     /* XXX needs to handle faults */
-    unsigned offset = addr & ~PAGE_MASK;
+    unsigned int offset = addr & ~PAGE_MASK;
 
     BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
     BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
@@ -64,7 +64,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
     while ( len )
     {
         void *p;
-        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        unsigned int size = min(len, (unsigned int)PAGE_SIZE - offset);
         struct page_info *page;
 
         page = translate_get_page(info, addr, flags & COPY_linear,
@@ -106,26 +106,27 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
     return 0;
 }
 
-unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len,
                       GVA_INFO(current), COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
-                                             unsigned len)
+                                             unsigned int len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
                       COPY_to_guest | COPY_flush_dcache | COPY_linear);
 }
 
-unsigned long raw_clear_guest(void *to, unsigned len)
+unsigned long raw_clear_guest(void *to, unsigned int len)
 {
     return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
                       COPY_to_guest | COPY_linear);
 }
 
-unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
+unsigned long raw_copy_from_guest(void *to, const void __user *from,
+                                  unsigned int len)
 {
     return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
                       COPY_from_guest | COPY_linear);
diff --git a/xen/arch/arm/include/asm/arm32/bitops.h b/xen/arch/arm/include/asm/arm32/bitops.h
index 57938a5874..d0309d47c1 100644
--- a/xen/arch/arm/include/asm/arm32/bitops.h
+++ b/xen/arch/arm/include/asm/arm32/bitops.h
@@ -6,17 +6,17 @@
 /*
  * Little endian assembly bitops.  nr = 0 -> byte 0 bit 0.
  */
-extern int _find_first_zero_bit_le(const void * p, unsigned size);
+extern int _find_first_zero_bit_le(const void * p, unsigned int size);
 extern int _find_next_zero_bit_le(const void * p, int size, int offset);
-extern int _find_first_bit_le(const unsigned long *p, unsigned size);
+extern int _find_first_bit_le(const unsigned long *p, unsigned int size);
 extern int _find_next_bit_le(const unsigned long *p, int size, int offset);
 
 /*
  * Big endian assembly bitops.  nr = 0 -> byte 3 bit 0.
  */
-extern int _find_first_zero_bit_be(const void * p, unsigned size);
+extern int _find_first_zero_bit_be(const void * p, unsigned int size);
 extern int _find_next_zero_bit_be(const void * p, int size, int offset);
-extern int _find_first_bit_be(const unsigned long *p, unsigned size);
+extern int _find_first_bit_be(const unsigned long *p, unsigned int size);
 extern int _find_next_bit_be(const unsigned long *p, int size, int offset);
 
 #ifndef __ARMEB__
diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
index 365a2385a0..d0c9a52c8c 100644
--- a/xen/arch/arm/include/asm/fixmap.h
+++ b/xen/arch/arm/include/asm/fixmap.h
@@ -30,9 +30,9 @@
 extern lpae_t xen_fixmap[XEN_PT_LPAE_ENTRIES];
 
 /* Map a page in a fixmap entry */
-extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
+extern void set_fixmap(unsigned int map, mfn_t mfn, unsigned int attributes);
 /* Remove a mapping from a fixmap entry */
-extern void clear_fixmap(unsigned map);
+extern void clear_fixmap(unsigned int map);
 
 #define fix_to_virt(slot) ((void *)FIXMAP_ADDR(slot))
 
diff --git a/xen/arch/arm/include/asm/guest_access.h b/xen/arch/arm/include/asm/guest_access.h
index 53766386d3..4421e43611 100644
--- a/xen/arch/arm/include/asm/guest_access.h
+++ b/xen/arch/arm/include/asm/guest_access.h
@@ -4,11 +4,11 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 
-unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len);
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
-                                             unsigned len);
-unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
-unsigned long raw_clear_guest(void *to, unsigned len);
+                                             unsigned int len);
+unsigned long raw_copy_from_guest(void *to, const void *from, unsigned int len);
+unsigned long raw_clear_guest(void *to, unsigned int len);
 
 /* Copy data to guest physical address, then clean the region. */
 unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 045a8ba4bb..c4bc3cd1e5 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -192,7 +192,7 @@ extern void setup_xenheap_mappings(unsigned long base_mfn, unsigned long nr_mfns
 /* Map a frame table to cover physical addresses ps through pe */
 extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 /* map a physical range in virtual memory */
-void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned attributes);
+void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int attributes);
 
 static inline void __iomem *ioremap_nocache(paddr_t start, size_t len)
 {
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index b761d90c40..b6449c9065 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -610,7 +610,7 @@ void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
     BUG();
 }
 
-static bool irq_validate_new_type(unsigned int curr, unsigned new)
+static bool irq_validate_new_type(unsigned int curr, unsigned int new)
 {
     return (curr == IRQ_TYPE_INVALID || curr == new );
 }
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 25ded1c056..2556a45c38 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -256,7 +256,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
     char *output, *input;
     char magic[2];
     int rc;
-    unsigned kernel_order_out;
+    unsigned int kernel_order_out;
     paddr_t output_size;
     struct page_info *pages;
     mfn_t mfn;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be37176a47..009b8cd9ef 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -352,7 +352,7 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
 }
 
 /* Map a 4k page in a fixmap entry */
-void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
+void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags)
 {
     int res;
 
@@ -361,7 +361,7 @@ void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
 }
 
 /* Remove a mapping from a fixmap entry */
-void clear_fixmap(unsigned map)
+void clear_fixmap(unsigned int map)
 {
     int res;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352421.579198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRJ-00025t-Vu; Mon, 20 Jun 2022 07:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352421.579198; Mon, 20 Jun 2022 07:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRJ-00025k-SC; Mon, 20 Jun 2022 07:03:13 +0000
Received: by outflank-mailman (input) for mailman id 352421;
 Mon, 20 Jun 2022 07:03:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7+S=W3=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3BRJ-0001Yx-77
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:03:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 0c3a8213-f067-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:03:12 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EE8151424;
 Mon, 20 Jun 2022 00:03:11 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.35.125])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8AC823F5A1;
 Mon, 20 Jun 2022 00:03:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c3a8213-f067-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/9] xen/domain: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:38 +0200
Message-Id: <20220620070245.77979-3-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the misra addon
by substituting the implicit type 'unsigned' with the explicit 'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/common/domain.c      | 2 +-
 xen/include/xen/domain.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7570eae91a..57a8515f21 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1446,7 +1446,7 @@ int vcpu_reset(struct vcpu *v)
  * of memory, and it sets a pending event to make sure that a pending
  * event doesn't get missed.
  */
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
+int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
 {
     struct domain *d = v->domain;
     void *mapping;
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1c3c88a14d..628b14b086 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -65,7 +65,7 @@ void cf_check free_pirq_struct(void *);
 int  arch_vcpu_create(struct vcpu *v);
 void arch_vcpu_destroy(struct vcpu *v);
 
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
+int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
 void unmap_vcpu_info(struct vcpu *v);
 
 int arch_domain_create(struct domain *d,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352422.579209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRO-0002S9-8v; Mon, 20 Jun 2022 07:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352422.579209; Mon, 20 Jun 2022 07:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRO-0002Rw-4O; Mon, 20 Jun 2022 07:03:18 +0000
Received: by outflank-mailman (input) for mailman id 352422;
 Mon, 20 Jun 2022 07:03:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7+S=W3=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3BRM-0001Yx-NL
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:03:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 0e30fc89-f067-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:03:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3DD451042;
 Mon, 20 Jun 2022 00:03:15 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.35.125])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6C8553F5A1;
 Mon, 20 Jun 2022 00:03:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e30fc89-f067-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 3/9] xen/common: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:39 +0200
Message-Id: <20220620070245.77979-4-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the misra addon
by substituting the implicit type 'unsigned' with the explicit 'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/common/grant_table.c   | 6 +++---
 xen/common/gunzip.c        | 8 ++++----
 xen/common/sched/cpupool.c | 4 ++--
 xen/common/trace.c         | 2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3918e6de6b..2d110d9f41 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -895,7 +895,7 @@ done:
 static int _set_status(const grant_entry_header_t *shah,
                        grant_status_t *status,
                        struct domain *rd,
-                       unsigned rgt_version,
+                       unsigned int rgt_version,
                        struct active_grant_entry *act,
                        int readonly,
                        int mapflag,
@@ -1763,8 +1763,8 @@ static int
 gnttab_populate_status_frames(struct domain *d, struct grant_table *gt,
                               unsigned int req_nr_frames)
 {
-    unsigned i;
-    unsigned req_status_frames;
+    unsigned int i;
+    unsigned int req_status_frames;
 
     req_status_frames = grant_to_status_frames(req_nr_frames);
 
diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
index b9ecc17e44..244f8d8903 100644
--- a/xen/common/gunzip.c
+++ b/xen/common/gunzip.c
@@ -13,13 +13,13 @@ static memptr __initdata free_mem_end_ptr;
 #define WSIZE           0x80000000
 
 static unsigned char *__initdata inbuf;
-static unsigned __initdata insize;
+static unsigned int __initdata insize;
 
 /* Index of next byte to be processed in inbuf: */
-static unsigned __initdata inptr;
+static unsigned int __initdata inptr;
 
 /* Bytes in output buffer: */
-static unsigned __initdata outcnt;
+static unsigned int __initdata outcnt;
 
 #define OF(args)        args
 
@@ -72,7 +72,7 @@ static __init void flush_window(void)
      * compute the crc.
      */
     unsigned long c = crc;
-    unsigned n;
+    unsigned int n;
     unsigned char *in, ch;
 
     in = window;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a20e3a5fcb..2afe54f54d 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -850,7 +850,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_ADDCPU:
     {
-        unsigned cpu;
+        unsigned int cpu;
         const cpumask_t *cpus;
 
         cpu = op->cpu;
@@ -895,7 +895,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_RMCPU:
     {
-        unsigned cpu;
+        unsigned int cpu;
 
         c = cpupool_get_by_id(op->cpupool_id);
         ret = -ENOENT;
diff --git a/xen/common/trace.c b/xen/common/trace.c
index a7c092fcbb..fb3752ce62 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -834,7 +834,7 @@ void __trace_hypercall(uint32_t event, unsigned long op,
 
 #define APPEND_ARG32(i)                         \
     do {                                        \
-        unsigned i_ = (i);                      \
+        unsigned int i_ = (i);                  \
         *a++ = args[(i_)];                      \
         d.op |= TRC_PV_HYPERCALL_V2_ARG_32(i_); \
     } while( 0 )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:21 2022
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 4/9] include/xen: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:40 +0200
Message-Id: <20220620070245.77979-5-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the MISRA
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/include/xen/perfc.h | 2 +-
 xen/include/xen/sched.h | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/perfc.h b/xen/include/xen/perfc.h
index bb010b0aae..7c5ce537bd 100644
--- a/xen/include/xen/perfc.h
+++ b/xen/include/xen/perfc.h
@@ -49,7 +49,7 @@ enum perfcounter {
 #undef PERFSTATUS
 #undef PERFSTATUS_ARRAY
 
-typedef unsigned perfc_t;
+typedef unsigned int perfc_t;
 #define PRIperfc ""
 
 DECLARE_PER_CPU(perfc_t[NUM_PERFCOUNTERS], perfcounters);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 463d41ffb6..d957b6e11f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -518,9 +518,9 @@ struct domain
 
     /* hvm_print_line() and guest_console_write() logging. */
 #define DOMAIN_PBUF_SIZE 200
-    char       *pbuf;
-    unsigned    pbuf_idx;
-    spinlock_t  pbuf_lock;
+    char         *pbuf;
+    unsigned int  pbuf_idx;
+    spinlock_t    pbuf_lock;
 
     /* OProfile support. */
     struct xenoprof *xenoprof;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:24 2022
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/9] include/public: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:41 +0200
Message-Id: <20220620070245.77979-6-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the MISRA
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Bump sysctl interface version.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/include/public/physdev.h |  4 ++--
 xen/include/public/sysctl.h  | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index d271766ad0..a2ca0ee564 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -211,8 +211,8 @@ struct physdev_manage_pci_ext {
     /* IN */
     uint8_t bus;
     uint8_t devfn;
-    unsigned is_extfn;
-    unsigned is_virtfn;
+    unsigned int is_extfn;
+    unsigned int is_virtfn;
     struct {
         uint8_t bus;
         uint8_t devfn;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index b0a4af8789..a2a762fe46 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -35,7 +35,7 @@
 #include "domctl.h"
 #include "physdev.h"
 
-#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
+#define XEN_SYSCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * Read console content from Xen buffer ring.
@@ -644,18 +644,18 @@ struct xen_sysctl_credit_schedule {
     /* Length of timeslice in milliseconds */
 #define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
 #define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
-    unsigned tslice_ms;
-    unsigned ratelimit_us;
+    unsigned int tslice_ms;
+    unsigned int ratelimit_us;
     /*
      * How long we consider a vCPU to be cache-hot on the
      * CPU where it has run (max 100ms, in microseconds)
     */
 #define XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US (100 * 1000)
-    unsigned vcpu_migr_delay_us;
+    unsigned int vcpu_migr_delay_us;
 };
 
 struct xen_sysctl_credit2_schedule {
-    unsigned ratelimit_us;
+    unsigned int ratelimit_us;
 };
 
 /* XEN_SYSCTL_scheduler_op */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:25 2022
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: [PATCH 6/9] xsm/flask: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:42 +0200
Message-Id: <20220620070245.77979-7-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the MISRA
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/xsm/flask/ss/avtab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/xsm/flask/ss/avtab.c b/xen/xsm/flask/ss/avtab.c
index 017f5183de..9761d028d8 100644
--- a/xen/xsm/flask/ss/avtab.c
+++ b/xen/xsm/flask/ss/avtab.c
@@ -349,7 +349,7 @@ int avtab_read_item(struct avtab *a, void *fp, struct policydb *pol,
     struct avtab_key key;
     struct avtab_datum datum;
     int i, rc;
-    unsigned set;
+    unsigned int set;
 
     memset(&key, 0, sizeof(struct avtab_key));
     memset(&datum, 0, sizeof(struct avtab_datum));
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:27 2022
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 7/9] common/libfdt: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:43 +0200
Message-Id: <20220620070245.77979-8-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the MISRA
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
This patch may not be applicable as these files come from libfdt.
---
 xen/common/libfdt/fdt_ro.c  | 4 ++--
 xen/common/libfdt/fdt_rw.c  | 2 +-
 xen/common/libfdt/fdt_sw.c  | 2 +-
 xen/common/libfdt/fdt_wip.c | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/common/libfdt/fdt_ro.c b/xen/common/libfdt/fdt_ro.c
index 17584da257..0fc4f793fe 100644
--- a/xen/common/libfdt/fdt_ro.c
+++ b/xen/common/libfdt/fdt_ro.c
@@ -53,7 +53,7 @@ const char *fdt_get_string(const void *fdt, int stroffset, int *lenp)
 
 	err = -FDT_ERR_BADOFFSET;
 	absoffset = stroffset + fdt_off_dt_strings(fdt);
-	if (absoffset >= (unsigned)totalsize)
+	if (absoffset >= (unsigned int)totalsize)
 		goto fail;
 	len = totalsize - absoffset;
 
@@ -61,7 +61,7 @@ const char *fdt_get_string(const void *fdt, int stroffset, int *lenp)
 		if (stroffset < 0)
 			goto fail;
 		if (can_assume(LATEST) || fdt_version(fdt) >= 17) {
-			if ((unsigned)stroffset >= fdt_size_dt_strings(fdt))
+			if ((unsigned int)stroffset >= fdt_size_dt_strings(fdt))
 				goto fail;
 			if ((fdt_size_dt_strings(fdt) - stroffset) < len)
 				len = fdt_size_dt_strings(fdt) - stroffset;
diff --git a/xen/common/libfdt/fdt_rw.c b/xen/common/libfdt/fdt_rw.c
index 3621d3651d..1707238ebc 100644
--- a/xen/common/libfdt/fdt_rw.c
+++ b/xen/common/libfdt/fdt_rw.c
@@ -59,7 +59,7 @@ static int fdt_splice_(void *fdt, void *splicepoint, int oldlen, int newlen)
 
 	if ((oldlen < 0) || (soff + oldlen < soff) || (soff + oldlen > dsize))
 		return -FDT_ERR_BADOFFSET;
-	if ((p < (char *)fdt) || (dsize + newlen < (unsigned)oldlen))
+	if ((p < (char *)fdt) || (dsize + newlen < (unsigned int)oldlen))
 		return -FDT_ERR_BADOFFSET;
 	if (dsize - oldlen + newlen > fdt_totalsize(fdt))
 		return -FDT_ERR_NOSPACE;
diff --git a/xen/common/libfdt/fdt_sw.c b/xen/common/libfdt/fdt_sw.c
index 4c569ee7eb..eb694b5dbb 100644
--- a/xen/common/libfdt/fdt_sw.c
+++ b/xen/common/libfdt/fdt_sw.c
@@ -162,7 +162,7 @@ int fdt_resize(void *fdt, void *buf, int bufsize)
 	    headsize + tailsize > fdt_totalsize(fdt))
 		return -FDT_ERR_INTERNAL;
 
-	if ((headsize + tailsize) > (unsigned)bufsize)
+	if ((headsize + tailsize) > (unsigned int)bufsize)
 		return -FDT_ERR_NOSPACE;
 
 	oldtail = (char *)fdt + fdt_totalsize(fdt) - tailsize;
diff --git a/xen/common/libfdt/fdt_wip.c b/xen/common/libfdt/fdt_wip.c
index c2d7566a67..82db674014 100644
--- a/xen/common/libfdt/fdt_wip.c
+++ b/xen/common/libfdt/fdt_wip.c
@@ -23,7 +23,7 @@ int fdt_setprop_inplace_namelen_partial(void *fdt, int nodeoffset,
 	if (!propval)
 		return proplen;
 
-	if ((unsigned)proplen < (len + idx))
+	if ((unsigned int)proplen < (len + idx))
 		return -FDT_ERR_NOSPACE;
 
 	memcpy((char *)propval + idx, val, len);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:30 2022
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 8/9] common/inflate: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:44 +0200
Message-Id: <20220620070245.77979-9-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the MISRA
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
This patch may not be applicable as inflate comes from Linux.
---
 xen/common/inflate.c | 166 +++++++++++++++++++++----------------------
 1 file changed, 83 insertions(+), 83 deletions(-)

diff --git a/xen/common/inflate.c b/xen/common/inflate.c
index 8fa4b96d12..71616ff60c 100644
--- a/xen/common/inflate.c
+++ b/xen/common/inflate.c
@@ -138,7 +138,7 @@ struct huft {
 
 
 /* Function prototypes */
-static int huft_build OF((unsigned *, unsigned, unsigned,
+static int huft_build OF((unsigned int *, unsigned int, unsigned int,
                           const ush *, const ush *, struct huft **, int *));
 static int huft_free OF((struct huft *));
 static int inflate_codes OF((struct huft *, struct huft *, int, int));
@@ -162,20 +162,20 @@ static int inflate OF((void));
 #define flush_output(w) (wp=(w),flush_window())
 
 /* Tables for deflate from PKZIP's appnote.txt. */
-static const unsigned border[] = {    /* Order of the bit length code lengths */
+static const unsigned int border[] = { /* Order of the bit length code lengths */
     16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15};
-static const ush cplens[] = {         /* Copy lengths for literal codes 257..285 */
+static const ush cplens[] = {          /* Copy lengths for literal codes 257..285 */
     3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31,
     35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258, 0, 0};
 /* note: see note #13 above about the 258 in this list. */
-static const ush cplext[] = {         /* Extra bits for literal codes 257..285 */
+static const ush cplext[] = {          /* Extra bits for literal codes 257..285 */
     0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,
     3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0, 99, 99}; /* 99==invalid */
-static const ush cpdist[] = {         /* Copy offsets for distance codes 0..29 */
+static const ush cpdist[] = {          /* Copy offsets for distance codes 0..29 */
     1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193,
     257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145,
     8193, 12289, 16385, 24577};
-static const ush cpdext[] = {         /* Extra bits for distance codes */
+static const ush cpdext[] = {          /* Extra bits for distance codes */
     0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6,
     7, 7, 8, 8, 9, 9, 10, 10, 11, 11,
     12, 12, 13, 13};
@@ -213,7 +213,7 @@ static const ush cpdext[] = {         /* Extra bits for distance codes */
  */
 
 static ulg __initdata bb;                /* bit buffer */
-static unsigned __initdata bk;           /* bits in bit buffer */
+static unsigned int __initdata bk;       /* bits in bit buffer */
 
 static const ush mask_bits[] = {
     0x0000,
@@ -313,13 +313,13 @@ static const int dbits = 6;          /* bits in base distance lookup table */
 #define N_MAX 288       /* maximum number of codes in any set */
 
 
-static unsigned __initdata hufts;      /* track memory usage */
+static unsigned int __initdata hufts;      /* track memory usage */
 
 
 static int __init huft_build(
-    unsigned *b,            /* code lengths in bits (all assumed <= BMAX) */
-    unsigned n,             /* number of codes (assumed <= N_MAX) */
-    unsigned s,             /* number of simple-valued codes (0..s-1) */
+    unsigned int *b,        /* code lengths in bits (all assumed <= BMAX) */
+    unsigned int n,         /* number of codes (assumed <= N_MAX) */
+    unsigned int s,         /* number of simple-valued codes (0..s-1) */
     const ush *d,           /* list of base values for non-simple codes */
     const ush *e,           /* list of extra bits for non-simple codes */
     struct huft **t,        /* result: starting table */
@@ -331,28 +331,28 @@ static int __init huft_build(
    case), two if the input is invalid (all zero length codes or an
    oversubscribed set of lengths), and three if not enough memory. */
 {
-    unsigned a;                   /* counter for codes of length k */
-    unsigned f;                   /* i repeats in table every f entries */
+    unsigned int a;               /* counter for codes of length k */
+    unsigned int f;               /* i repeats in table every f entries */
     int g;                        /* maximum code length */
     int h;                        /* table level */
-    register unsigned i;          /* counter, current code */
-    register unsigned j;          /* counter */
+    register unsigned int i;      /* counter, current code */
+    register unsigned int j;      /* counter */
     register int k;               /* number of bits in current code */
     int l;                        /* bits per table (returned in m) */
-    register unsigned *p;         /* pointer into c[], b[], or v[] */
+    register unsigned int *p;     /* pointer into c[], b[], or v[] */
     register struct huft *q;      /* points to current table */
     struct huft r;                /* table entry for structure assignment */
     register int w;               /* bits before this table == (l * h) */
-    unsigned *xp;                 /* pointer into x */
+    unsigned int *xp;             /* pointer into x */
     int y;                        /* number of dummy codes added */
-    unsigned z;                   /* number of entries in current table */
+    unsigned int z;               /* number of entries in current table */
     struct {
-        unsigned c[BMAX+1];           /* bit length count table */
-        struct huft *u[BMAX];         /* table stack */
-        unsigned v[N_MAX];            /* values in order of bit length */
-        unsigned x[BMAX+1];           /* bit offsets, then code stack */
+        unsigned int c[BMAX+1];   /* bit length count table */
+        struct huft *u[BMAX];     /* table stack */
+        unsigned int v[N_MAX];    /* values in order of bit length */
+        unsigned int x[BMAX+1];   /* bit offsets, then code stack */
     } *stk;
-    unsigned *c, *v, *x;
+    unsigned int *c, *v, *x;
     struct huft **u;
     int ret;
 
@@ -392,13 +392,13 @@ static int __init huft_build(
         if (c[j])
             break;
     k = j;                        /* minimum code length */
-    if ((unsigned)l < j)
+    if ((unsigned int)l < j)
         l = j;
     for (i = BMAX; i; i--)
         if (c[i])
             break;
     g = i;                        /* maximum code length */
-    if ((unsigned)l > i)
+    if ((unsigned int)l > i)
         l = i;
     *m = l;
 
@@ -464,7 +464,7 @@ static int __init huft_build(
                 w += l;                 /* previous table always l bits */
 
                 /* compute minimum size table less than or equal to l bits */
-                z = (z = g - w) > (unsigned)l ? l : z;  /* upper limit on table size */
+                z = (z = g - w) > (unsigned int)l ? l : z;  /* upper limit on table size */
                 if ((f = 1 << (j = k - w)) > a + 1)     /* try a k-w bit table */
                 {                       /* too few codes for k-w bit table */
                     DEBG1("2 ");
@@ -592,13 +592,13 @@ static int __init inflate_codes(
 /* inflate (decompress) the codes in a deflated (compressed) block.
    Return an error code or zero if it all goes ok. */
 {
-    register unsigned e;  /* table entry flag/number of extra bits */
-    unsigned n, d;        /* length and index for copy */
-    unsigned w;           /* current window position */
-    struct huft *t;       /* pointer to table entry */
-    unsigned ml, md;      /* masks for bl and bd bits */
-    register ulg b;       /* bit buffer */
-    register unsigned k;  /* number of bits in bit buffer */
+    register unsigned int e;  /* table entry flag/number of extra bits */
+    unsigned int n, d;        /* length and index for copy */
+    unsigned int w;           /* current window position */
+    struct huft *t;           /* pointer to table entry */
+    unsigned int ml, md;      /* masks for bl and bd bits */
+    register ulg b;           /* bit buffer */
+    register unsigned int k;  /* number of bits in bit buffer */
 
 
     /* make local copies of globals */
@@ -611,15 +611,15 @@ static int __init inflate_codes(
     md = mask_bits[bd];
     for (;;)                      /* do until end of block */
     {
-        NEEDBITS((unsigned)bl)
-            if ((e = (t = tl + ((unsigned)b & ml))->e) > 16)
+        NEEDBITS((unsigned int)bl)
+            if ((e = (t = tl + ((unsigned int)b & ml))->e) > 16)
                 do {
                     if (e == 99)
                         return 1;
                     DUMPBITS(t->b)
                         e -= 16;
                     NEEDBITS(e)
-                        } while ((e = (t = t->v.t + ((unsigned)b & mask_bits[e]))->e) > 16);
+                        } while ((e = (t = t->v.t + ((unsigned int)b & mask_bits[e]))->e) > 16);
         DUMPBITS(t->b)
             if (e == 16)                /* then it's a literal */
             {
@@ -639,22 +639,22 @@ static int __init inflate_codes(
 
                 /* get length of block to copy */
                 NEEDBITS(e)
-                    n = t->v.n + ((unsigned)b & mask_bits[e]);
+                    n = t->v.n + ((unsigned int)b & mask_bits[e]);
                 DUMPBITS(e);
 
                 /* decode distance of block to copy */
-                NEEDBITS((unsigned)bd)
-                    if ((e = (t = td + ((unsigned)b & md))->e) > 16)
+                NEEDBITS((unsigned int)bd)
+                    if ((e = (t = td + ((unsigned int)b & md))->e) > 16)
                         do {
                             if (e == 99)
                                 return 1;
                             DUMPBITS(t->b)
                                 e -= 16;
                             NEEDBITS(e)
-                                } while ((e = (t = t->v.t + ((unsigned)b & mask_bits[e]))->e) > 16);
+                                } while ((e = (t = t->v.t + ((unsigned int)b & mask_bits[e]))->e) > 16);
                 DUMPBITS(t->b)
                     NEEDBITS(e)
-                    d = w - t->v.n - ((unsigned)b & mask_bits[e]);
+                    d = w - t->v.n - ((unsigned int)b & mask_bits[e]);
                 DUMPBITS(e)
                     Tracevv((stderr,"\\[%d,%d]", w-d, n));
 
@@ -701,10 +701,10 @@ static int __init inflate_codes(
 static int __init inflate_stored(void)
 /* "decompress" an inflated type 0 (stored) block. */
 {
-    unsigned n;           /* number of bytes in block */
-    unsigned w;           /* current window position */
-    register ulg b;       /* bit buffer */
-    register unsigned k;  /* number of bits in bit buffer */
+    unsigned int n;           /* number of bytes in block */
+    unsigned int w;           /* current window position */
+    register ulg b;           /* bit buffer */
+    register unsigned int k;  /* number of bits in bit buffer */
 
     DEBG("<stor");
 
@@ -721,10 +721,10 @@ static int __init inflate_stored(void)
 
     /* get the length and its complement */
     NEEDBITS(16)
-        n = ((unsigned)b & 0xffff);
+        n = ((unsigned int)b & 0xffff);
     DUMPBITS(16)
         NEEDBITS(16)
-        if (n != (unsigned)((~b) & 0xffff))
+        if (n != (unsigned int)((~b) & 0xffff))
             return 1;                   /* error in compressed data */
     DUMPBITS(16)
 
@@ -769,7 +769,7 @@ static int noinline __init inflate_fixed(void)
     struct huft *td;      /* distance code table */
     int bl;               /* lookup bits for tl */
     int bd;               /* lookup bits for td */
-    unsigned *l;          /* length list for huft_build */
+    unsigned int *l;      /* length list for huft_build */
 
     DEBG("<fix");
 
@@ -826,21 +826,21 @@ static int noinline __init inflate_fixed(void)
 static int noinline __init inflate_dynamic(void)
 /* decompress an inflated type 2 (dynamic Huffman codes) block. */
 {
-    int i;                /* temporary variables */
-    unsigned j;
-    unsigned l;           /* last length */
-    unsigned m;           /* mask for bit lengths table */
-    unsigned n;           /* number of lengths to get */
-    struct huft *tl;      /* literal/length code table */
-    struct huft *td;      /* distance code table */
-    int bl;               /* lookup bits for tl */
-    int bd;               /* lookup bits for td */
-    unsigned nb;          /* number of bit length codes */
-    unsigned nl;          /* number of literal/length codes */
-    unsigned nd;          /* number of distance codes */
-    unsigned *ll;         /* literal/length and distance code lengths */
-    register ulg b;       /* bit buffer */
-    register unsigned k;  /* number of bits in bit buffer */
+    int i;                    /* temporary variables */
+    unsigned int j;
+    unsigned int l;           /* last length */
+    unsigned int m;           /* mask for bit lengths table */
+    unsigned int n;           /* number of lengths to get */
+    struct huft *tl;          /* literal/length code table */
+    struct huft *td;          /* distance code table */
+    int bl;                   /* lookup bits for tl */
+    int bd;                   /* lookup bits for td */
+    unsigned int nb;          /* number of bit length codes */
+    unsigned int nl;          /* number of literal/length codes */
+    unsigned int nd;          /* number of distance codes */
+    unsigned int *ll;         /* literal/length and distance code lengths */
+    register ulg b;           /* bit buffer */
+    register unsigned int k;  /* number of bits in bit buffer */
     int ret;
 
     DEBG("<dyn");
@@ -861,13 +861,13 @@ static int noinline __init inflate_dynamic(void)
 
     /* read in table lengths */
     NEEDBITS(5)
-        nl = 257 + ((unsigned)b & 0x1f);      /* number of literal/length codes */
+        nl = 257 + ((unsigned int)b & 0x1f);      /* number of literal/length codes */
     DUMPBITS(5)
         NEEDBITS(5)
-        nd = 1 + ((unsigned)b & 0x1f);        /* number of distance codes */
+        nd = 1 + ((unsigned int)b & 0x1f);        /* number of distance codes */
     DUMPBITS(5)
         NEEDBITS(4)
-        nb = 4 + ((unsigned)b & 0xf);         /* number of bit length codes */
+        nb = 4 + ((unsigned int)b & 0xf);         /* number of bit length codes */
     DUMPBITS(4)
 #ifdef PKZIP_BUG_WORKAROUND
         if (nl > 288 || nd > 32)
@@ -885,7 +885,7 @@ static int noinline __init inflate_dynamic(void)
     for (j = 0; j < nb; j++)
     {
         NEEDBITS(3)
-            ll[border[j]] = (unsigned)b & 7;
+            ll[border[j]] = (unsigned int)b & 7;
         DUMPBITS(3)
             }
     for (; j < 19; j++)
@@ -909,10 +909,10 @@ static int noinline __init inflate_dynamic(void)
     n = nl + nd;
     m = mask_bits[bl];
     i = l = 0;
-    while ((unsigned)i < n)
+    while ((unsigned int)i < n)
     {
-        NEEDBITS((unsigned)bl)
-            j = (td = tl + ((unsigned)b & m))->b;
+        NEEDBITS((unsigned int)bl)
+            j = (td = tl + ((unsigned int)b & m))->b;
         DUMPBITS(j)
             j = td->v.n;
         if (j < 16)                 /* length of code in bits (0..15) */
@@ -920,9 +920,9 @@ static int noinline __init inflate_dynamic(void)
         else if (j == 16)           /* repeat last length 3 to 6 times */
         {
             NEEDBITS(2)
-                j = 3 + ((unsigned)b & 3);
+                j = 3 + ((unsigned int)b & 3);
             DUMPBITS(2)
-                if ((unsigned)i + j > n) {
+                if ((unsigned int)i + j > n) {
                     ret = 1;
                     goto out;
                 }
@@ -932,9 +932,9 @@ static int noinline __init inflate_dynamic(void)
         else if (j == 17)           /* 3 to 10 zero length codes */
         {
             NEEDBITS(3)
-                j = 3 + ((unsigned)b & 7);
+                j = 3 + ((unsigned int)b & 7);
             DUMPBITS(3)
-                if ((unsigned)i + j > n) {
+                if ((unsigned int)i + j > n) {
                     ret = 1;
                     goto out;
                 }
@@ -945,9 +945,9 @@ static int noinline __init inflate_dynamic(void)
         else                        /* j == 18: 11 to 138 zero length codes */
         {
             NEEDBITS(7)
-                j = 11 + ((unsigned)b & 0x7f);
+                j = 11 + ((unsigned int)b & 0x7f);
             DUMPBITS(7)
-                if ((unsigned)i + j > n) {
+                if ((unsigned int)i + j > n) {
                     ret = 1;
                     goto out;
                 }
@@ -1033,9 +1033,9 @@ int *e                  /* last block flag */
 )
 /* decompress an inflated block */
 {
-unsigned t;           /* block type */
-register ulg b;       /* bit buffer */
-register unsigned k;  /* number of bits in bit buffer */
+unsigned int t;           /* block type */
+register ulg b;           /* bit buffer */
+register unsigned int k;  /* number of bits in bit buffer */
 
 DEBG("<blk");
 
@@ -1052,7 +1052,7 @@ NEEDBITS(1)
 
     /* read in block type */
     NEEDBITS(2)
-    t = (unsigned)b & 3;
+    t = (unsigned int)b & 3;
     DUMPBITS(2)
 
 
@@ -1084,7 +1084,7 @@ static int __init inflate(void)
 {
     int e;                /* last block flag */
     int r;                /* result code */
-    unsigned h;           /* maximum struct huft's malloc'ed */
+    unsigned int h;       /* maximum struct huft's malloc'ed */
 
     /* initialize window, bit buffer */
     wp = 0;
@@ -1235,8 +1235,8 @@ static int __init gunzip(void)
     (void)NEXTBYTE();  /* Ignore OS type for the moment */
 
     if ((flags & EXTRA_FIELD) != 0) {
-        unsigned len = (unsigned)NEXTBYTE();
-        len |= ((unsigned)NEXTBYTE())<<8;
+        unsigned int len = (unsigned int)NEXTBYTE();
+        len |= ((unsigned int)NEXTBYTE())<<8;
         while (len--) (void)NEXTBYTE();
     }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:03:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352437.579274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRc-0004k8-02; Mon, 20 Jun 2022 07:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352437.579274; Mon, 20 Jun 2022 07:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BRb-0004jS-S9; Mon, 20 Jun 2022 07:03:31 +0000
Received: by outflank-mailman (input) for mailman id 352437;
 Mon, 20 Jun 2022 07:03:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7+S=W3=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3BRa-0001Yx-90
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:03:30 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 16150c3d-f067-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:03:28 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 87ADE1424;
 Mon, 20 Jun 2022 00:03:28 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.35.125])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9AA9E3F5A1;
 Mon, 20 Jun 2022 00:03:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16150c3d-f067-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 9/9] drivers/acpi: Use explicitly specified types
Date: Mon, 20 Jun 2022 09:02:45 +0200
Message-Id: <20220620070245.77979-10-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to MISRA C 2012 Rule 8.1, types shall be explicitly
specified. Fix all the findings reported by cppcheck with the misra
addon by replacing the implicit type 'unsigned' with the explicit
'unsigned int'.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
This patch may not be applicable as these files come from Linux.
---
 xen/drivers/acpi/tables/tbfadt.c  | 2 +-
 xen/drivers/acpi/tables/tbutils.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/acpi/tables/tbfadt.c b/xen/drivers/acpi/tables/tbfadt.c
index f11fd5a900..ad55cd769a 100644
--- a/xen/drivers/acpi/tables/tbfadt.c
+++ b/xen/drivers/acpi/tables/tbfadt.c
@@ -235,7 +235,7 @@ void __init acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 lengt
 		ACPI_WARNING((AE_INFO,
 			      "FADT (revision %u) is longer than ACPI 5.0 version,"
 			      " truncating length %u to %zu",
-			      table->revision, (unsigned)length,
+			      table->revision, (unsigned int)length,
 			      sizeof(struct acpi_table_fadt)));
 	}
 
diff --git a/xen/drivers/acpi/tables/tbutils.c b/xen/drivers/acpi/tables/tbutils.c
index d135a50ff9..ddb7f628c9 100644
--- a/xen/drivers/acpi/tables/tbutils.c
+++ b/xen/drivers/acpi/tables/tbutils.c
@@ -481,7 +481,7 @@ acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags)
 			if (ACPI_FAILURE(status)) {
 				ACPI_WARNING((AE_INFO,
 					      "Truncating %u table entries!",
-					      (unsigned)
+					      (unsigned int)
 					      (acpi_gbl_root_table_list.size -
 					       acpi_gbl_root_table_list.
 					       count)));
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:17:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:17:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352511.579286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bf2-0008Th-AM; Mon, 20 Jun 2022 07:17:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352511.579286; Mon, 20 Jun 2022 07:17:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bf2-0008Ta-65; Mon, 20 Jun 2022 07:17:24 +0000
Received: by outflank-mailman (input) for mailman id 352511;
 Mon, 20 Jun 2022 07:17:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Bf1-0008TQ-6B; Mon, 20 Jun 2022 07:17:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Bf1-0005yU-10; Mon, 20 Jun 2022 07:17:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Bf0-00024t-ML; Mon, 20 Jun 2022 07:17:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Bf0-0001yV-Lv; Mon, 20 Jun 2022 07:17:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yAMKyPKmj63d6nUqvYE4OPKFz91qXV1Ka5sGivQghfg=; b=GwsBKDM2YmhPdmnZKFrzeEEYGk
	at+Iq0LWqym/nuAcqspANy04gX4UkJ8AYY6Rdo4qsoKzNrl2MLKoRboQs3aByBjrrSN6vcsjG4l4r
	fT73Z2AAq+24c2K/eIENeMK0o9xjZz/3SzJ4ZIhN3YnYSLXIu1D4ftJS9O7/nrIiH2xE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171282: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-shadow:xen-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:xen-install:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c8b2d413761af732a0798d8df45ce968732083fe
X-Osstest-Versions-That:
    qemuu=a28498b1f9591e12dcbfdf06dc8f54e15926760e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 07:17:22 +0000

flight 171282 qemu-mainline real [real]
flight 171287 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171282/
http://logs.test-lab.xenproject.org/osstest/logs/171287/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 171256

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       7 xen-install                  fail  like 171256
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171256
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171256
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171256
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171256
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171256
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171256
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171256
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171256
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                c8b2d413761af732a0798d8df45ce968732083fe
baseline version:
 qemuu                a28498b1f9591e12dcbfdf06dc8f54e15926760e

Last test of basis   171256  2022-06-17 15:58:21 Z    2 days
Testing same since   171282  2022-06-20 00:38:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jung-uk Kim <jkim@FreeBSD.org>
  Kyle Evans <kevans@FreeBSD.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stacey Son <sson@FreeBSD.org>
  Warner Losh <imp@bsdimp.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c8b2d413761af732a0798d8df45ce968732083fe
Merge: a28498b1f9 d35020ed00
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Sun Jun 19 13:56:13 2022 -0700

    Merge tag 'bsd-user-syscall-2022q2-pull-request' of ssh://github.com/qemu-bsd-user/qemu-bsd-user into staging
    
    bsd-user: Next round of syscalls
    
    Implement the next round of system calls. These are open, openat, close,
    fdatasync, fsync, close_from, revoke, access, eaccess, faccessat, chdir,
    fchdir, rename, renameat, mkdir, mkdirat, rmdir, __getcwd, dup, dup2, truncate,
    ftruncate, acct and sync. In addition, the helper functions needed for these to
    work are included. With the helper functions, all of these system calls are the
    'obvious' wrapper...
    
    # -----BEGIN PGP SIGNATURE-----
    # Comment: GPGTools - https://gpgtools.org
    #
    # iQIzBAABCgAdFiEEIDX4lLAKo898zeG3bBzRKH2wEQAFAmKvZSwACgkQbBzRKH2w
    # EQCrdxAA0UeXmh/l1znPSrX4lif7Vhe4H5TdmHavGQX0p7B+dMd160SMLfKFJt7J
    # HHXuQZbPFNuwqE5qiFPTcXIFjT5tq2WSjd9ZC/ZexfzBJIICwcUWuWvG2WfCA3fD
    # hth/Ru2fX0vUwoUwvYw7lTPnhb9o52Z1rf5AEFu85E3UjKWEcARHCakm7n8a+Cg+
    # PkF1qZ/qFic+bkBZkZLWyHB5qR2p2sIp+VHwlG1ew39Xim457kynQOoF8etIXc1Y
    # g5PrjePUsVhPR7qm4CFplM4UOyGOOqIykHERppaXKtk2+kP8dp9HWog9Z/IFVOKc
    # z3huDtf03UtmohjdJBYkpCcCzmd2EETRPgkFaVT5ciVGMb3Nom1b2/DOnndpS9qb
    # TdE7J6Ek1vp4Mr386QHzm6AfdoHGZc4tH+SpDQZrsWbnugklYnQd3++GCqj8D2rA
    # LJ8oWInviZP8xWDn5q1sXCNw/lgVup9ZNrMl7TcXmQDZXHSW1tElIAT2PZCebman
    # rSwg/umr7fPOXdIAkLhF77bAt3J3kAzxhuYwHEstB3kRXEJ2VinLMf3BJBrGLnuK
    # kr6kJy6hw7luIT5nUNLrrNtwsAAwEu6S7OSGhEiGaUSIhiER96k/tX2u/KOBtwGC
    # VzIP7vK5V2xYPepyj4tXkVRHkjxxw3s8fYRXf73IsaZ6Avot8pg=
    # =JmJY
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Sun 19 Jun 2022 11:04:28 AM PDT
    # gpg:                using RSA key 2035F894B00AA3CF7CCDE1B76C1CD1287DB01100
    # gpg: Good signature from "Warner Losh <wlosh@netflix.com>" [unknown]
    # gpg:                 aka "Warner Losh <imp@bsdimp.com>" [unknown]
    # gpg:                 aka "Warner Losh <imp@freebsd.org>" [unknown]
    # gpg:                 aka "Warner Losh <imp@village.org>" [unknown]
    # gpg:                 aka "Warner Losh <wlosh@bsdimp.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: 2035 F894 B00A A3CF 7CCD  E1B7 6C1C D128 7DB0 1100
    
    * tag 'bsd-user-syscall-2022q2-pull-request' of ssh://github.com/qemu-bsd-user/qemu-bsd-user:
      bsd-user: Implement acct and sync
      bsd-user: Implement truncate and ftruncate
      bsd-user: Implement dup and dup2
      bsd-user: Implement rmdir and undocumented __getcwd
      bsd-user: Implement mkdir and mkdirat
      bsd-user: Implement link, linkat, unlink and unlinkat
      bsd-user: Implement rename and renameat
      bsd-user: Implement chdir and fchdir
      bsd-user: Implement revoke, access, eaccess and faccessat
      bsd-user: Implement fdatasync, fsync and close_from
      bsd-user: Implement open, openat and close
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
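The "'obvious' wrapper" shape the merge message describes can be sketched as follows. This is a hedged illustration, not the actual bsd-user code: the function name `do_bsd_access` and its exact signature are assumptions. The pattern is simply to call the host syscall and return its result, or the negated host errno on failure, so the emulator can translate the error to the guest's convention.

```c
/* Hedged sketch of the "obvious wrapper" pattern (illustrative name,
 * not the exact bsd-user implementation): invoke the host syscall,
 * return its result on success, or -errno on failure. */
#include <errno.h>
#include <unistd.h>

static long do_bsd_access(const char *path, int mode)
{
    long ret = access(path, mode);

    /* Negate errno so callers see a single signed return channel. */
    return ret < 0 ? -errno : ret;
}
```

For example, `do_bsd_access("/", F_OK)` returns 0, while a nonexistent path yields `-ENOENT`.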

commit d35020ed00b1cb649ccd73ba4f5e918a5cc5363a
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 09:31:18 2022 -0600

    bsd-user: Implement acct and sync
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 4b795b147b4b0eee01f24664f630411dde8ed872
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 09:29:02 2022 -0600

    bsd-user: Implement truncate and ftruncate
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit a15699acafd5cf9e4c414505a4aa36b0f2338ee8
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 09:27:19 2022 -0600

    bsd-user: Implement dup and dup2
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 6af8f76a9f2c7b4d1ac5ba885695d8b6cc7c4dd0
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:23:57 2022 -0600

    bsd-user: Implement rmdir and undocumented __getcwd
    
    Implement rmdir and __getcwd. __getcwd is the undocumented
    back end to getcwd(3).
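To illustrate the relationship: portable code only ever calls the documented getcwd(3) front end, which (on FreeBSD) is built on top of the undocumented __getcwd syscall being implemented here. A minimal sketch, with the helper name `get_cwd_checked` being an illustrative assumption:

```c
/* Hedged sketch: getcwd(3) is the documented libc interface; on FreeBSD
 * it sits atop the undocumented __getcwd syscall. This helper wraps it
 * with the same -errno convention used elsewhere in this series. */
#include <errno.h>
#include <unistd.h>

static int get_cwd_checked(char *buf, size_t len)
{
    if (getcwd(buf, len) == NULL)
        return -errno;   /* e.g. -ERANGE when buf is too small */
    return 0;
}
```

A buffer of one byte cannot hold any absolute path plus its terminator, so `get_cwd_checked(buf, 1)` returns `-ERANGE`.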
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Jung-uk Kim <jkim@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 1ffbd5e7feae239aa2d6d986f086c57a5835720a
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:21:01 2022 -0600

    bsd-user: Implement mkdir and mkdirat
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 2d3b7e01d6ba9f6dcb86782484da42766ef7fef0
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:18:48 2022 -0600

    bsd-user: Implement link, linkat, unlink and unlinkat
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Jung-uk Kim <jkim@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit ab5fd2d969855be6a0355e55d21b51c676f7b1b6
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:16:46 2022 -0600

    bsd-user: Implement rename and renameat
    
    Plus the helper LOCK_PATH2 and UNLOCK_PATH2 macros.
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Jung-uk Kim <jkim@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 390f547ea80d6758099a867669e6429d511d9c88
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:13:34 2022 -0600

    bsd-user: Implement chdir and fchdir
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 65c6c4c893a29cf5c2eef48ab92ef4f04f31576f
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:11:30 2022 -0600

    bsd-user: Implement revoke, access, eaccess and faccessat
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit a2ba6c7b80b6aa1c6e33af778c6a9c8d99c7520e
Author: Warner Losh <imp@bsdimp.com>
Date:   Sun Jun 12 08:07:39 2022 -0600

    bsd-user: Implement fdatasync, fsync and close_from
    
    Implement fdatasync(2), fsync(2) and close_from(2).
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Jung-uk Kim <jkim@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

commit 77d3522b3fb6da9f39ada61fe7c2d0121c10de7f
Author: Warner Losh <imp@bsdimp.com>
Date:   Sat Jun 11 21:32:19 2022 -0600

    bsd-user: Implement open, openat and close
    
    Add the open, openat and close system calls. We need to lock paths, so
    implement that as well.
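One way the path locking mentioned above might look: serialize access to a shared path-translation buffer between a lock and an unlock call. This is a hedged sketch only; the names `lock_path`/`unlock_path`, the buffer size, and the mutex-based design are assumptions, not the actual bsd-user LOCK_PATH/UNLOCK_PATH implementation.

```c
/* Hedged sketch (not the actual bsd-user macros): guard a shared
 * path-translation buffer with a mutex so concurrent guest threads
 * cannot clobber each other's translated path between lock/unlock. */
#include <pthread.h>
#include <string.h>

static pthread_mutex_t path_lock = PTHREAD_MUTEX_INITIALIZER;
static char path_buf[1024];

static const char *lock_path(const char *guest_path)
{
    pthread_mutex_lock(&path_lock);
    strncpy(path_buf, guest_path, sizeof(path_buf) - 1);
    path_buf[sizeof(path_buf) - 1] = '\0';
    return path_buf;
}

static void unlock_path(void)
{
    pthread_mutex_unlock(&path_lock);
}
```

Typical use: obtain the translated path with `lock_path()`, hand it to the host syscall, then release the buffer with `unlock_path()`.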
    
    Signed-off-by: Stacey Son <sson@FreeBSD.org>
    Signed-off-by: Jung-uk Kim <jkim@FreeBSD.org>
    Signed-off-by: Kyle Evans <kevans@FreeBSD.org>
    Signed-off-by: Warner Losh <imp@bsdimp.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352525.579308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BzZ-0002ae-Fn; Mon, 20 Jun 2022 07:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352525.579308; Mon, 20 Jun 2022 07:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3BzZ-0002a6-8f; Mon, 20 Jun 2022 07:38:37 +0000
Received: by outflank-mailman (input) for mailman id 352525;
 Mon, 20 Jun 2022 07:38:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3BzY-0002ZD-BQ
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:36 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc0d334d-f06b-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:38:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BF3C521B85;
 Mon, 20 Jun 2022 07:38:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 91AD1134CA;
 Mon, 20 Jun 2022 07:38:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id nOwwIvgjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc0d334d-f06b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710712; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Nw3aoVhr/WUv7SQwVbjYwYfUJ9Uxj7BJzlvpNqO58Ls=;
	b=R/2lQtS318TyDax/XDzFR0kI36a8m2rfgLF2Lrv4sspyhoEJbmSJe9DwzIy5LJQBRwGpAl
	34tJOPRYQ4mTPmerBXT2tBf/53Vea0FLctnSh8xhYpifZqgY359Q3JB9qvYx97bkaEJMTj
	b8X6CBfwMZPBlDKGvJ3TrvdGAAqIuWo=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 0/8] mini-os: some cleanup patches
Date: Mon, 20 Jun 2022 09:38:12 +0200
Message-Id: <20220620073820.9336-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some cleanups.

No functional change intended; apart from whitespace changes, there are
only minor modifications making the code easier to read.

Juergen Gross (8):
  mini-os: drop xenbus directory
  mini-os: apply coding style to xenbus.c
  mini-os: eliminate console/console.h
  mini-os: rename console/xenbus.c to consfront.c
  mini-os: apply coding style to consfront.c
  mini-os: eliminate console directory
  mini-os: apply coding style to console.c
  mini-os: add mini-os-debug[.gz] to .gitignore

 .gitignore                      |   2 +
 Makefile                        |   9 +-
 console/xenbus.c => consfront.c |  99 ++++---
 console.c                       | 415 ++++++++++++++++++++++++++
 console/console.c               | 177 -----------
 console/console.h               |   2 -
 console/xencons_ring.c          | 238 ---------------
 include/console.h               |   1 +
 xenbus/xenbus.c => xenbus.c     | 510 +++++++++++++++++++-------------
 9 files changed, 778 insertions(+), 675 deletions(-)
 rename console/xenbus.c => consfront.c (78%)
 create mode 100644 console.c
 delete mode 100644 console/console.c
 delete mode 100644 console/console.h
 delete mode 100644 console/xencons_ring.c
 rename xenbus/xenbus.c => xenbus.c (71%)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352530.579336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzb-00035R-MZ; Mon, 20 Jun 2022 07:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352530.579336; Mon, 20 Jun 2022 07:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzb-00032O-H0; Mon, 20 Jun 2022 07:38:39 +0000
Received: by outflank-mailman (input) for mailman id 352530;
 Mon, 20 Jun 2022 07:38:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bza-0002ZE-K0
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:38 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fd8397c4-f06b-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 09:38:34 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 985DB21BD0;
 Mon, 20 Jun 2022 07:38:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6206E13A79;
 Mon, 20 Jun 2022 07:38:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EFuXFvojsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd8397c4-f06b-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710714; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g253R54BCUXErJG0A5bqZZ4ZPguYj/1UWM+5iFTL2Q4=;
	b=NGnw937lMT/LOym5c4FxAJvsiKqbPHFdtXnXWnw+KPZQ2CZdE9erEzVRNC9go0JBFdw03p
	8vE7N10ooM3hfysMkQhuqM6HfgEQpdZa1qMynsfHUvdUYEFL6tdgtW5f0TIK0r9ltl89H3
	Bz9JR4qLhGl9SYSPaB8KTneMQ/jTnSI=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 8/8] mini-os: add mini-os-debug[.gz] to .gitignore
Date: Mon, 20 Jun 2022 09:38:20 +0200
Message-Id: <20220620073820.9336-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

mini-os-debug and mini-os-debug.gz are created when building Mini-OS;
add them to .gitignore.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitignore b/.gitignore
index d57c2bdd..abef46b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -14,3 +14,5 @@ include/list.h
 mini-os
 mini-os.gz
 minios-config.mk
+mini-os-debug
+mini-os-debug.gz
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352529.579334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzb-0002xe-FC; Mon, 20 Jun 2022 07:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352529.579334; Mon, 20 Jun 2022 07:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzb-0002wp-2I; Mon, 20 Jun 2022 07:38:39 +0000
Received: by outflank-mailman (input) for mailman id 352529;
 Mon, 20 Jun 2022 07:38:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bza-0002ZD-BS
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc6d24bb-f06b-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:38:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A9F1A1F383;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7BC73134CA;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id aAbZHPkjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc6d24bb-f06b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hnPzvgMWe8RIRQajkOAJiVhJlYFwWqqQzJSgyQ7guJk=;
	b=CzZNsrKQQhGFss+TpjyunC4m5576B69vSH9hfI+3uRjliaCOAI0gFyhtNBlW0XHgOzp7uT
	9pRjAc041oQJ5rsL3/RqoCmPvC6Yk3vYPH9beMMJHA6MFgwPxlTxqexwQv/xK6KM67Sf1N
	IiAm934tudtI2+QIdwABW85FwF0cA6Q=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 4/8] mini-os: rename console/xenbus.c to consfront.c
Date: Mon, 20 Jun 2022 09:38:16 +0200
Message-Id: <20220620073820.9336-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move console/xenbus.c into the main directory and rename it to
consfront.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Makefile                        | 2 +-
 console/xenbus.c => consfront.c | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename console/xenbus.c => consfront.c (100%)

diff --git a/Makefile b/Makefile
index 16d1f5d6..509d927b 100644
--- a/Makefile
+++ b/Makefile
@@ -37,6 +37,7 @@ TARGET := mini-os
 SUBDIRS := lib xenbus console
 
 src-$(CONFIG_BLKFRONT) += blkfront.c
+src-$(CONFIG_CONSFRONT) += consfront.c
 src-$(CONFIG_TPMFRONT) += tpmfront.c
 src-$(CONFIG_TPM_TIS) += tpm_tis.c
 src-$(CONFIG_TPMBACK) += tpmback.c
@@ -70,7 +71,6 @@ src-$(CONFIG_LIBXS) += lib/xs.c
 
 src-y += console/console.c
 src-y += console/xencons_ring.c
-src-$(CONFIG_CONSFRONT) += console/xenbus.c
 
 # The common mini-os objects to build.
 APP_OBJS :=
diff --git a/console/xenbus.c b/consfront.c
similarity index 100%
rename from console/xenbus.c
rename to consfront.c
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352526.579319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bza-0002sx-MX; Mon, 20 Jun 2022 07:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352526.579319; Mon, 20 Jun 2022 07:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bza-0002sq-J7; Mon, 20 Jun 2022 07:38:38 +0000
Received: by outflank-mailman (input) for mailman id 352526;
 Mon, 20 Jun 2022 07:38:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3BzY-0002ZE-Jq
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:36 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fca02103-f06b-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 09:38:34 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 758BA21BCC;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 48E2F13A79;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0H4eEPkjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fca02103-f06b-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mXoU4RmuES8ycHpl3QBq4ZigOK3MOzKFqQcwP9Rcli4=;
	b=QqRprFy/46avRj+XmmN+NOAl8ov8kRK941KMadJFb9N5OhcdNsIwPfT4lZRBj4mv8wOeXD
	yCLGF+ZwcfA6RIcs0RScMTmzzllsCd+ZkJ+rRr87E7/U9RpLu8RoEiB1Oyko0lWwoF9YfL
	0U88RrGFfrwgRAeaagcxsn8Uvuy4QGk=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 3/8] mini-os: eliminate console/console.h
Date: Mon, 20 Jun 2022 09:38:15 +0200
Message-Id: <20220620073820.9336-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

console/console.h contains only a single prototype. Move that to
include/console.h and remove console/console.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 console/console.h      | 2 --
 console/xenbus.c       | 2 +-
 console/xencons_ring.c | 2 +-
 include/console.h      | 1 +
 4 files changed, 3 insertions(+), 4 deletions(-)
 delete mode 100644 console/console.h

diff --git a/console/console.h b/console/console.h
deleted file mode 100644
index e85147a4..00000000
--- a/console/console.h
+++ /dev/null
@@ -1,2 +0,0 @@
-
-void console_handle_input(evtchn_port_t port, struct pt_regs *regs, void *data);
diff --git a/console/xenbus.c b/console/xenbus.c
index d8950454..73659656 100644
--- a/console/xenbus.c
+++ b/console/xenbus.c
@@ -5,13 +5,13 @@
 #include <mini-os/events.h>
 #include <mini-os/os.h>
 #include <mini-os/lib.h>
+#include <mini-os/console.h>
 #include <mini-os/xenbus.h>
 #include <xen/io/console.h>
 #include <xen/io/protocols.h>
 #include <xen/io/ring.h>
 #include <mini-os/xmalloc.h>
 #include <mini-os/gnttab.h>
-#include "console.h"
 
 void free_consfront(struct consfront_dev *dev)
 {
diff --git a/console/xencons_ring.c b/console/xencons_ring.c
index efedf46b..495f0a19 100644
--- a/console/xencons_ring.c
+++ b/console/xencons_ring.c
@@ -5,6 +5,7 @@
 #include <mini-os/events.h>
 #include <mini-os/os.h>
 #include <mini-os/lib.h>
+#include <mini-os/console.h>
 #include <mini-os/xenbus.h>
 #include <xen/io/console.h>
 #include <xen/io/protocols.h>
@@ -12,7 +13,6 @@
 #include <xen/hvm/params.h>
 #include <mini-os/xmalloc.h>
 #include <mini-os/gnttab.h>
-#include "console.h"
 
 DECLARE_WAIT_QUEUE_HEAD(console_queue);
 
diff --git a/include/console.h b/include/console.h
index e76e4234..d216d247 100644
--- a/include/console.h
+++ b/include/console.h
@@ -98,5 +98,6 @@ void free_consfront(struct consfront_dev *dev);
 extern const struct file_ops console_ops;
 int open_consfront(char *nodename);
 #endif
+void console_handle_input(evtchn_port_t port, struct pt_regs *regs, void *data);
 
 #endif /* _LIB_CONSOLE_H_ */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352533.579369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bze-0003oU-Ko; Mon, 20 Jun 2022 07:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352533.579369; Mon, 20 Jun 2022 07:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bze-0003o4-6C; Mon, 20 Jun 2022 07:38:42 +0000
Received: by outflank-mailman (input) for mailman id 352533;
 Mon, 20 Jun 2022 07:38:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bzc-0002ZD-Bd
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:40 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc0b9888-f06b-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:38:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0014521BB8;
 Mon, 20 Jun 2022 07:38:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C6946134CA;
 Mon, 20 Jun 2022 07:38:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IP0rL/gjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc0b9888-f06b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qUmUwfIlKzy4uayyFfnh4ZN+f1gTP+njO8tkwccWoP4=;
	b=MAKOBlsmDx/DpF2gfo0e6hjRB0chjmo+Fm8UcxMU7vC87txTwbnVPqFXpXstAKKwvpkldD
	b7IAcsjIzqCzfR/z9CJrG7kITaSInvFuo0tp/1bP4wfm7sTOMFAM1FSIEC0317AAUMLMSe
	+TO+qjXbqxbzUXKCrklkUqixCFUPueo=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 1/8] mini-os: drop xenbus directory
Date: Mon, 20 Jun 2022 09:38:13 +0200
Message-Id: <20220620073820.9336-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The xenbus directory contains only a single source file. Move that file
up one level and remove the now-empty xenbus directory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Makefile                    | 3 +--
 xenbus/xenbus.c => xenbus.c | 0
 2 files changed, 1 insertion(+), 2 deletions(-)
 rename xenbus/xenbus.c => xenbus.c (100%)

diff --git a/Makefile b/Makefile
index 9f95d197..16d1f5d6 100644
--- a/Makefile
+++ b/Makefile
@@ -57,6 +57,7 @@ src-y += sched.c
 src-y += shutdown.c
 src-$(CONFIG_TEST) += test.c
 src-$(CONFIG_BALLOON) += balloon.c
+src-$(CONFIG_XENBUS) += xenbus.c
 
 src-y += lib/ctype.c
 src-y += lib/math.c
@@ -67,8 +68,6 @@ src-y += lib/sys.c
 src-y += lib/xmalloc.c
 src-$(CONFIG_LIBXS) += lib/xs.c
 
-src-$(CONFIG_XENBUS) += xenbus/xenbus.c
-
 src-y += console/console.c
 src-y += console/xencons_ring.c
 src-$(CONFIG_CONSFRONT) += console/xenbus.c
diff --git a/xenbus/xenbus.c b/xenbus.c
similarity index 100%
rename from xenbus/xenbus.c
rename to xenbus.c
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352534.579373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzf-0003uA-2Z; Mon, 20 Jun 2022 07:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352534.579373; Mon, 20 Jun 2022 07:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bze-0003sA-Lx; Mon, 20 Jun 2022 07:38:42 +0000
Received: by outflank-mailman (input) for mailman id 352534;
 Mon, 20 Jun 2022 07:38:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bzc-0002ZE-KP
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:40 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fd465502-f06b-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 09:38:34 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5B6ED1F972;
 Mon, 20 Jun 2022 07:38:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 292A1134CA;
 Mon, 20 Jun 2022 07:38:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id AKiuCPojsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd465502-f06b-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710714; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ij/l8vGwMUP3sj1l6oCVo+3lGirXIhChnHnYSjdN7M4=;
	b=iF7oXWL274QxV6P/YGQuE7yJf2FZ0UXUVsAp/+MPXoFykVPZWfIApaoD94IxzdB9eHBASW
	njBlkkiQQWfD4VEiQqxiOKMA4qCWqPyyhvost8/ERAPHX38leEth0ndXibttvUX81T0hpY
	deQXWgsSL9uNG4oH4OkMlGZriS7IVLY=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 7/8] mini-os: apply coding style to console.c
Date: Mon, 20 Jun 2022 09:38:19 +0200
Message-Id: <20220620073820.9336-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make console.c coding-style compliant.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 console.c | 280 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 145 insertions(+), 135 deletions(-)

diff --git a/console.c b/console.c
index 29277eac..5d205c7d 100644
--- a/console.c
+++ b/console.c
@@ -1,14 +1,14 @@
-/* 
+/*
  ****************************************************************************
  * (C) 2006 - Grzegorz Milos - Cambridge University
  ****************************************************************************
  *
  *        File: console.c
  *      Author: Grzegorz Milos
- *     Changes: 
- *              
+ *     Changes:
+ *
  *        Date: Mar 2006
- * 
+ *
  * Environment: Xen Minimal OS
  * Description: Console interface.
  *
@@ -21,19 +21,19 @@
  * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
  * sell copies of the Software, and to permit persons to whom the Software is
  * furnished to do so, subject to the following conditions:
- * 
+ *
  * The above copyright notice and this permission notice shall be included in
  * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  */
- 
+
 #include <mini-os/types.h>
 #include <mini-os/wait.h>
 #include <mini-os/mm.h>
@@ -50,26 +50,24 @@
 #include <xen/io/ring.h>
 #include <xen/hvm/params.h>
 
-/* If console not initialised the printk will be sent to xen serial line 
-   NOTE: you need to enable verbose in xen/Rules.mk for it to work. */
 static struct consfront_dev* xen_console = NULL;
 static int console_initialised = 0;
 
-__attribute__((weak)) void console_input(char * buf, unsigned len)
+__attribute__((weak)) void console_input(char *buf, unsigned int len)
 {
-    if(len > 0)
+    if ( len > 0 )
     {
         /* Just repeat what's written */
         buf[len] = '\0';
         printk("%s", buf);
-        
-        if(buf[len-1] == '\r')
+
+        if ( buf[len - 1] == '\r' )
             printk("\nNo console input handler.\n");
     }
 }
 
 #ifndef HAVE_LIBC
-void xencons_rx(char *buf, unsigned len, struct pt_regs *regs)
+void xencons_rx(char *buf, unsigned int len, struct pt_regs *regs)
 {
     console_input(buf, len);
 }
@@ -80,88 +78,94 @@ void xencons_tx(void)
 }
 #endif
 
-
 void console_print(struct consfront_dev *dev, const char *data, int length)
 {
     char *curr_char, saved_char;
     char copied_str[length+1];
     char *copied_ptr;
     int part_len;
-    int (*ring_send_fn)(struct consfront_dev *dev, const char *data, unsigned length);
+    int (*ring_send_fn)(struct consfront_dev *dev, const char *data,
+                        unsigned int length);
 
-    if(!console_initialised)
+    if ( !console_initialised )
         ring_send_fn = xencons_ring_send_no_notify;
     else
         ring_send_fn = xencons_ring_send;
 
-    if (dev && dev->is_raw) {
+    if ( dev && dev->is_raw )
+    {
         ring_send_fn(dev, data, length);
         return;
     }
 
     copied_ptr = copied_str;
     memcpy(copied_ptr, data, length);
-    for(curr_char = copied_ptr; curr_char < copied_ptr+length-1; curr_char++)
+    for ( curr_char = copied_ptr; curr_char < copied_ptr + length - 1;
+          curr_char++ )
     {
-        if(*curr_char == '\n')
+        if ( *curr_char == '\n' )
         {
             *curr_char = '\r';
-            saved_char = *(curr_char+1);
-            *(curr_char+1) = '\n';
+            saved_char = *(curr_char + 1);
+            *(curr_char + 1) = '\n';
             part_len = curr_char - copied_ptr + 2;
             ring_send_fn(dev, copied_ptr, part_len);
-            *(curr_char+1) = saved_char;
-            copied_ptr = curr_char+1;
+            *(curr_char + 1) = saved_char;
+            copied_ptr = curr_char + 1;
             length -= part_len - 1;
         }
     }
 
-    if (copied_ptr[length-1] == '\n') {
-        copied_ptr[length-1] = '\r';
+    if ( copied_ptr[length - 1] == '\n')
+    {
+        copied_ptr[length - 1] = '\r';
         copied_ptr[length] = '\n';
         length++;
     }
-    
+
     ring_send_fn(dev, copied_ptr, length);
 }
 
 void print(int direct, const char *fmt, va_list args)
 {
     static char __print_buf[1024];
-    
+
     (void)vsnprintf(__print_buf, sizeof(__print_buf), fmt, args);
 
-    if(direct)
+    if ( direct )
     {
-        (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
+        (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf),
+                                    __print_buf);
         return;
-    } else {
-#ifndef CONFIG_USE_XEN_CONSOLE
-    if(!console_initialised)
-#endif    
-            (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
-        
-        console_print(NULL, __print_buf, strlen(__print_buf));
     }
+#ifndef CONFIG_USE_XEN_CONSOLE
+    if ( !console_initialised )
+#endif
+        (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf),
+                                    __print_buf);
+
+    console_print(NULL, __print_buf, strlen(__print_buf));
 }
 
 void printk(const char *fmt, ...)
 {
-    va_list       args;
+    va_list args;
+
     va_start(args, fmt);
     print(0, fmt, args);
-    va_end(args);        
+    va_end(args);
 }
 
 void xprintk(const char *fmt, ...)
 {
-    va_list       args;
+    va_list args;
+
     va_start(args, fmt);
     print(1, fmt, args);
-    va_end(args);        
+    va_end(args);
 }
 void init_console(void)
-{   
+{
     printk("Initialising console ... ");
     xen_console = xencons_ring_init();
     console_initialised = 1;
@@ -186,7 +190,7 @@ DECLARE_WAIT_QUEUE_HEAD(console_queue);
 static struct xencons_interface *console_ring;
 uint32_t console_evtchn;
 
-static struct consfront_dev* resume_xen_console(struct consfront_dev* dev);
+static struct consfront_dev* resume_xen_console(struct consfront_dev *dev);
 
 #ifdef CONFIG_PARAVIRT
 void get_console(void *p)
@@ -201,11 +205,11 @@ void get_console(void *p)
 {
     uint64_t v = -1;
 
-    if (hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v))
+    if ( hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v) )
         BUG();
     console_evtchn = v;
 
-    if (hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &v))
+    if ( hvm_get_parameter(HVM_PARAM_CONSOLE_PFN, &v) )
         BUG();
     console_ring = (struct xencons_interface *)map_frame_virt(v);
 }
@@ -214,7 +218,7 @@ void get_console(void *p)
 static inline void notify_daemon(struct consfront_dev *dev)
 {
     /* Use evtchn: this is called early, before irq is set up. */
-    if (!dev)
+    if ( !dev )
         notify_remote_via_evtchn(console_evtchn);
     else
         notify_remote_via_evtchn(dev->evtchn);
@@ -223,36 +227,38 @@ static inline void notify_daemon(struct consfront_dev *dev)
 static inline struct xencons_interface *xencons_interface(void)
 {
     return console_evtchn ? console_ring : NULL;
-} 
- 
-int xencons_ring_send_no_notify(struct consfront_dev *dev, const char *data, unsigned len)
-{	
+}
+
+int xencons_ring_send_no_notify(struct consfront_dev *dev, const char *data,
+                                unsigned int len)
+{
     int sent = 0;
-	struct xencons_interface *intf;
-	XENCONS_RING_IDX cons, prod;
-
-	if (!dev)
-            intf = xencons_interface();
-        else
-            intf = dev->ring;
-        if (!intf)
-            return sent;
-
-	cons = intf->out_cons;
-	prod = intf->out_prod;
-	mb();
-	BUG_ON((prod - cons) > sizeof(intf->out));
-
-	while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
-		intf->out[MASK_XENCONS_IDX(prod++, intf->out)] = data[sent++];
-
-	wmb();
-	intf->out_prod = prod;
-    
+    struct xencons_interface *intf;
+    XENCONS_RING_IDX cons, prod;
+
+    if ( !dev )
+        intf = xencons_interface();
+    else
+        intf = dev->ring;
+    if ( !intf )
+        return sent;
+
+    cons = intf->out_cons;
+    prod = intf->out_prod;
+    mb();
+    BUG_ON((prod - cons) > sizeof(intf->out));
+
+    while ( (sent < len) && ((prod - cons) < sizeof(intf->out)) )
+        intf->out[MASK_XENCONS_IDX(prod++, intf->out)] = data[sent++];
+
+    wmb();
+    intf->out_prod = prod;
+
     return sent;
 }
 
-int xencons_ring_send(struct consfront_dev *dev, const char *data, unsigned len)
+int xencons_ring_send(struct consfront_dev *dev, const char *data,
+                      unsigned int len)
 {
     int sent;
 
@@ -264,83 +270,85 @@ int xencons_ring_send(struct consfront_dev *dev, const char *data, unsigned len)
 
 void console_handle_input(evtchn_port_t port, struct pt_regs *regs, void *data)
 {
-	struct consfront_dev *dev = (struct consfront_dev *) data;
+    struct consfront_dev *dev = (struct consfront_dev *) data;
 #ifdef HAVE_LIBC
-        struct file *file = dev ? get_file_from_fd(dev->fd) : NULL;
+    struct file *file = dev ? get_file_from_fd(dev->fd) : NULL;
 
-        if ( file )
-            file->read = true;
+    if ( file )
+        file->read = true;
 
-        wake_up(&console_queue);
+    wake_up(&console_queue);
 #else
-	struct xencons_interface *intf = xencons_interface();
-	XENCONS_RING_IDX cons, prod;
+    struct xencons_interface *intf = xencons_interface();
+    XENCONS_RING_IDX cons, prod;
 
-	cons = intf->in_cons;
-	prod = intf->in_prod;
-	mb();
-	BUG_ON((prod - cons) > sizeof(intf->in));
+    cons = intf->in_cons;
+    prod = intf->in_prod;
+    mb();
+    BUG_ON((prod - cons) > sizeof(intf->in));
 
-	while (cons != prod) {
-		xencons_rx(intf->in+MASK_XENCONS_IDX(cons,intf->in), 1, regs);
-		cons++;
-	}
+    while ( cons != prod )
+    {
+        xencons_rx(intf->in + MASK_XENCONS_IDX(cons, intf->in), 1, regs);
+        cons++;
+    }
 
-	mb();
-	intf->in_cons = cons;
+    mb();
+    intf->in_cons = cons;
 
-	notify_daemon(dev);
+    notify_daemon(dev);
 
-	xencons_tx();
+    xencons_tx();
 #endif
 }
 
 #ifdef HAVE_LIBC
 int xencons_ring_avail(struct consfront_dev *dev)
 {
-	struct xencons_interface *intf;
-	XENCONS_RING_IDX cons, prod;
+    struct xencons_interface *intf;
+    XENCONS_RING_IDX cons, prod;
 
-        if (!dev)
-            intf = xencons_interface();
-        else
-            intf = dev->ring;
+    if ( !dev )
+        intf = xencons_interface();
+    else
+        intf = dev->ring;
 
-	cons = intf->in_cons;
-	prod = intf->in_prod;
-	mb();
-	BUG_ON((prod - cons) > sizeof(intf->in));
+    cons = intf->in_cons;
+    prod = intf->in_prod;
+    mb();
+    BUG_ON((prod - cons) > sizeof(intf->in));
 
-        return prod - cons;
+    return prod - cons;
 }
 
-int xencons_ring_recv(struct consfront_dev *dev, char *data, unsigned len)
+int xencons_ring_recv(struct consfront_dev *dev, char *data, unsigned int len)
 {
-	struct xencons_interface *intf;
-	XENCONS_RING_IDX cons, prod;
-        unsigned filled = 0;
+    struct xencons_interface *intf;
+    XENCONS_RING_IDX cons, prod;
+    unsigned int filled = 0;
 
-        if (!dev)
-            intf = xencons_interface();
-        else
-            intf = dev->ring;
+    if ( !dev )
+        intf = xencons_interface();
+    else
+        intf = dev->ring;
 
-	cons = intf->in_cons;
-	prod = intf->in_prod;
-	mb();
-	BUG_ON((prod - cons) > sizeof(intf->in));
+    cons = intf->in_cons;
+    prod = intf->in_prod;
+    mb();
+    BUG_ON((prod - cons) > sizeof(intf->in));
 
-        while (filled < len && cons + filled != prod) {
-                data[filled] = *(intf->in + MASK_XENCONS_IDX(cons + filled, intf->in));
-                filled++;
-	}
+    while ( filled < len && cons + filled != prod )
+    {
+        data[filled] = *(intf->in + MASK_XENCONS_IDX(cons + filled, intf->in));
+        filled++;
+    }
 
-	mb();
-        intf->in_cons = cons + filled;
+    mb();
+    intf->in_cons = cons + filled;
 
-	notify_daemon(dev);
+    notify_daemon(dev);
 
-        return filled;
+    return filled;
 }
 #endif
 
@@ -348,7 +356,7 @@ struct consfront_dev *xencons_ring_init(void)
 {
     struct consfront_dev *dev;
 
-    if (!console_evtchn)
+    if ( !console_evtchn )
         return 0;
 
     dev = malloc(sizeof(struct consfront_dev));
@@ -365,7 +373,7 @@ struct consfront_dev *xencons_ring_init(void)
     return resume_xen_console(dev);
 }
 
-static struct consfront_dev* resume_xen_console(struct consfront_dev* dev)
+static struct consfront_dev *resume_xen_console(struct consfront_dev *dev)
 {
     int err;
 
@@ -373,7 +381,8 @@ static struct consfront_dev* resume_xen_console(struct consfront_dev* dev)
     dev->ring = xencons_interface();
 
     err = bind_evtchn(dev->evtchn, console_handle_input, dev);
-    if (err <= 0) {
+    if ( err <= 0 )
+    {
         printk("XEN console request chn bind failed %i\n", err);
         free(dev);
         return NULL;
@@ -386,15 +395,16 @@ static struct consfront_dev* resume_xen_console(struct consfront_dev* dev)
     return dev;
 }
 
-void xencons_ring_fini(struct consfront_dev* dev)
+void xencons_ring_fini(struct consfront_dev *dev)
 {
-    if (dev)
+    if ( dev )
         mask_evtchn(dev->evtchn);
 }
 
-void xencons_ring_resume(struct consfront_dev* dev)
+void xencons_ring_resume(struct consfront_dev *dev)
 {
-    if (dev) {
+    if ( dev )
+    {
 #if CONFIG_PARAVIRT
         get_console(&start_info);
 #else
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352537.579389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzg-0004Bm-OS; Mon, 20 Jun 2022 07:38:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352537.579389; Mon, 20 Jun 2022 07:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzg-0004Ab-5R; Mon, 20 Jun 2022 07:38:44 +0000
Received: by outflank-mailman (input) for mailman id 352537;
 Mon, 20 Jun 2022 07:38:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bze-0002ZD-CF
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc8da548-f06b-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:38:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DD1BF1F965;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B0F56134CA;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 6IjTKfkjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc8da548-f06b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=f6bMt+f3pFpdaJuWe2kNv32ADnt11PKGb5MGFZlNFfI=;
	b=VNs992xF2rRY/4TMpQS+AdsE1pIJcmbTPoytwq4AOuNVmc26FTTOceKPDsUytjkrUUe5or
	xJCnyoidGJMJbREy2Sg6fuYDJKx2nYd5nER40PXa1lL3dItCmAAGwF60c79bNec8yLZAoD
	k0aoUd+Uj+aBwVu0esc/XZxdkxlsQiA=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 5/8] mini-os: apply coding style to consfront.c
Date: Mon, 20 Jun 2022 09:38:17 +0200
Message-Id: <20220620073820.9336-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make consfront.c coding-style compliant.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 consfront.c | 97 +++++++++++++++++++++++++++++------------------------
 1 file changed, 53 insertions(+), 44 deletions(-)

diff --git a/consfront.c b/consfront.c
index 73659656..dfe6a3f0 100644
--- a/consfront.c
+++ b/consfront.c
@@ -15,26 +15,30 @@
 
 void free_consfront(struct consfront_dev *dev)
 {
-    char* err = NULL;
+    char *err = NULL;
     XenbusState state;
-
     char path[strlen(dev->backend) + strlen("/state") + 1];
     char nodename[strlen(dev->nodename) + strlen("/state") + 1];
 
     snprintf(path, sizeof(path), "%s/state", dev->backend);
     snprintf(nodename, sizeof(nodename), "%s/state", dev->nodename);
 
-    if ((err = xenbus_switch_state(XBT_NIL, nodename, XenbusStateClosing)) != NULL) {
+    if ( (err = xenbus_switch_state(XBT_NIL, nodename, XenbusStateClosing)) !=
+         NULL )
+    {
         printk("free_consfront: error changing state to %d: %s\n",
                 XenbusStateClosing, err);
         goto close;
     }
+
     state = xenbus_read_integer(path);
-    while (err == NULL && state < XenbusStateClosing)
+    while ( err == NULL && state < XenbusStateClosing )
         err = xenbus_wait_for_state_change(path, &state, &dev->events);
     free(err);
 
-    if ((err = xenbus_switch_state(XBT_NIL, nodename, XenbusStateClosed)) != NULL) {
+    if ( (err = xenbus_switch_state(XBT_NIL, nodename, XenbusStateClosed)) !=
+         NULL)
+    {
         printk("free_consfront: error changing state to %d: %s\n",
                 XenbusStateClosed, err);
         goto close;
@@ -59,19 +63,22 @@ close:
 struct consfront_dev *init_consfront(char *_nodename)
 {
     xenbus_transaction_t xbt;
-    char* err = NULL;
-    char* message=NULL;
-    int retry=0;
-    char* msg = NULL;
+    char *err = NULL;
+    char *message = NULL;
+    int retry = 0;
+    char *msg = NULL;
     char nodename[256];
     char path[256];
+    XenbusState state;
     static int consfrontends = 3;
     struct consfront_dev *dev;
     int res;
 
-    if (!_nodename)
-        snprintf(nodename, sizeof(nodename), "device/console/%d", consfrontends);
-    else {
+    if ( !_nodename )
+        snprintf(nodename, sizeof(nodename), "device/console/%d",
+                 consfrontends);
+    else
+    {
         strncpy(nodename, _nodename, sizeof(nodename) - 1);
         nodename[sizeof(nodename) - 1] = 0;
     }
@@ -87,13 +94,13 @@ struct consfront_dev *init_consfront(char *_nodename)
 #endif
 
     snprintf(path, sizeof(path), "%s/backend-id", nodename);
-    if ((res = xenbus_read_integer(path)) < 0) 
+    if ( (res = xenbus_read_integer(path)) < 0 )
         goto error;
     else
         dev->dom = res;
     evtchn_alloc_unbound(dev->dom, console_handle_input, dev, &dev->evtchn);
 
-    dev->ring = (struct xencons_interface *) alloc_page();
+    dev->ring = (struct xencons_interface *)alloc_page();
     memset(dev->ring, 0, PAGE_SIZE);
     dev->ring_ref = gnttab_grant_access(dev->dom, virt_to_mfn(dev->ring), 0);
 
@@ -101,33 +108,36 @@ struct consfront_dev *init_consfront(char *_nodename)
 
 again:
     err = xenbus_transaction_start(&xbt);
-    if (err) {
+    if ( err )
+    {
         printk("starting transaction\n");
         free(err);
     }
 
-    err = xenbus_printf(xbt, nodename, "ring-ref","%u",
-                dev->ring_ref);
-    if (err) {
+    err = xenbus_printf(xbt, nodename, "ring-ref","%u", dev->ring_ref);
+    if ( err )
+    {
         message = "writing ring-ref";
         goto abort_transaction;
     }
-    err = xenbus_printf(xbt, nodename,
-                "port", "%u", dev->evtchn);
-    if (err) {
+    err = xenbus_printf(xbt, nodename, "port", "%u", dev->evtchn);
+    if ( err )
+    {
         message = "writing event-channel";
         goto abort_transaction;
     }
-    err = xenbus_printf(xbt, nodename,
-                "protocol", "%s", XEN_IO_PROTO_ABI_NATIVE);
-    if (err) {
+    err = xenbus_printf(xbt, nodename, "protocol", "%s",
+                        XEN_IO_PROTO_ABI_NATIVE);
+    if ( err )
+    {
         message = "writing protocol";
         goto abort_transaction;
     }
 
     snprintf(path, sizeof(path), "%s/state", nodename);
     err = xenbus_switch_state(xbt, path, XenbusStateConnected);
-    if (err) {
+    if ( err )
+    {
         message = "switching state";
         goto abort_transaction;
     }
@@ -135,8 +145,9 @@ again:
 
     err = xenbus_transaction_end(xbt, 0, &retry);
     free(err);
-    if (retry) {
-            goto again;
+    if ( retry )
+    {
+        goto again;
         printk("completing transaction\n");
     }
 
@@ -149,31 +160,28 @@ abort_transaction:
     goto error;
 
 done:
-
     snprintf(path, sizeof(path), "%s/backend", nodename);
     msg = xenbus_read(XBT_NIL, path, &dev->backend);
-    if (msg) {
+    if ( msg )
+    {
         printk("Error %s when reading the backend path %s\n", msg, path);
         goto error;
     }
 
     printk("backend at %s\n", dev->backend);
+    snprintf(path, sizeof(path), "%s/state", dev->backend);
+
+    free(xenbus_watch_path_token(XBT_NIL, path, path, &dev->events));
+    msg = NULL;
+    state = xenbus_read_integer(path);
+    while ( msg == NULL && state < XenbusStateConnected )
+        msg = xenbus_wait_for_state_change(path, &state, &dev->events);
 
+    if ( msg != NULL || state != XenbusStateConnected )
     {
-        XenbusState state;
-        char path[strlen(dev->backend) + strlen("/state") + 1];
-        snprintf(path, sizeof(path), "%s/state", dev->backend);
-        
-	free(xenbus_watch_path_token(XBT_NIL, path, path, &dev->events));
-        msg = NULL;
-        state = xenbus_read_integer(path);
-        while (msg == NULL && state < XenbusStateConnected)
-            msg = xenbus_wait_for_state_change(path, &state, &dev->events);
-        if (msg != NULL || state != XenbusStateConnected) {
-            printk("backend not available, state=%d\n", state);
-            err = xenbus_unwatch_path_token(XBT_NIL, path, path);
-            goto error;
-        }
+        printk("backend not available, state=%d\n", state);
+        err = xenbus_unwatch_path_token(XBT_NIL, path, path);
+        goto error;
     }
     unmask_evtchn(dev->evtchn);
 
@@ -190,7 +198,8 @@ error:
 
 void fini_consfront(struct consfront_dev *dev)
 {
-    if (dev) free_consfront(dev);
+    if ( dev )
+        free_consfront(dev);
 }
 
 #ifdef HAVE_LIBC
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:45 +0000
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 2/8] mini-os: apply coding style to xenbus.c
Date: Mon, 20 Jun 2022 09:38:14 +0200
Message-Id: <20220620073820.9336-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make xenbus.c coding style compliant.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xenbus.c | 510 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 303 insertions(+), 207 deletions(-)

diff --git a/xenbus.c b/xenbus.c
index b687678f..aa1fe7bf 100644
--- a/xenbus.c
+++ b/xenbus.c
@@ -1,15 +1,15 @@
-/* 
+/*
  ****************************************************************************
  * (C) 2006 - Cambridge University
  ****************************************************************************
  *
  *        File: xenbus.c
- *      Author: Steven Smith (sos22@cam.ac.uk) 
+ *      Author: Steven Smith (sos22@cam.ac.uk)
  *     Changes: Grzegorz Milos (gm281@cam.ac.uk)
  *     Changes: John D. Ramsdell
- *              
+ *
  *        Date: Jun 2006, chages Aug 2005
- * 
+ *
  * Environment: Xen Minimal OS
  * Description: Minimal implementation of xenbus
  *
@@ -32,10 +32,10 @@
 #include <mini-os/semaphore.h>
 
 #define min(x,y) ({                       \
-        typeof(x) tmpx = (x);                 \
-        typeof(y) tmpy = (y);                 \
-        tmpx < tmpy ? tmpx : tmpy;            \
-        })
+    typeof(x) tmpx = (x);                 \
+    typeof(y) tmpy = (y);                 \
+    tmpx < tmpy ? tmpx : tmpy;            \
+    })
 
 #ifdef XENBUS_DEBUG
 #define DEBUG(_f, _a...) \
@@ -56,7 +56,8 @@ static struct watch {
     xenbus_event_queue *events;
     struct watch *next;
 } *watches;
-struct xenbus_req_info 
+
+struct xenbus_req_info
 {
     int in_use:1;
     struct wait_queue_head waitq;
@@ -93,14 +94,12 @@ void get_xenbus(void *p)
 }
 #endif
 
-static void memcpy_from_ring(const void *Ring,
-        void *Dest,
-        int off,
-        int len)
+static void memcpy_from_ring(const void *Ring, void *Dest, int off, int len)
 {
     int c1, c2;
     const char *ring = Ring;
     char *dest = Dest;
+
     c1 = min(len, XENSTORE_RING_SIZE - off);
     c2 = len - c1;
     memcpy(dest, ring + off, c1);
@@ -111,24 +110,28 @@ char **xenbus_wait_for_watch_return(xenbus_event_queue *queue)
 {
     struct xenbus_event *event;
     DEFINE_WAIT(w);
-    if (!queue)
+
+    if ( !queue )
         queue = &xenbus_events;
-    while (!(event = *queue)) {
+    while ( !(event = *queue) )
+    {
         add_waiter(w, xenbus_watch_queue);
         schedule();
     }
     remove_waiter(w, xenbus_watch_queue);
     *queue = event->next;
+
     return &event->path;
 }
 
 void xenbus_wait_for_watch(xenbus_event_queue *queue)
 {
     char **ret;
-    if (!queue)
+
+    if ( !queue )
         queue = &xenbus_events;
     ret = xenbus_wait_for_watch_return(queue);
-    if (ret)
+    if ( ret )
         free(ret);
     else
         printk("unexpected path returned by watch\n");
@@ -137,33 +140,39 @@ void xenbus_wait_for_watch(xenbus_event_queue *queue)
 void xenbus_release_wait_for_watch(xenbus_event_queue *queue)
 {
     struct xenbus_event *event = malloc(sizeof(*event));
+
     event->next = *queue;
     *queue = event;
     wake_up(&xenbus_watch_queue);
 }
 
-char* xenbus_wait_for_value(const char* path, const char* value, xenbus_event_queue *queue)
+char *xenbus_wait_for_value(const char *path, const char *value,
+                            xenbus_event_queue *queue)
 {
-    if (!queue)
+    if ( !queue )
         queue = &xenbus_events;
-    for(;;)
+
+    for( ;; )
     {
         char *res, *msg;
         int r;
 
         msg = xenbus_read(XBT_NIL, path, &res);
-        if(msg) return msg;
+        if ( msg )
+            return msg;
 
         r = strcmp(value,res);
         free(res);
 
-        if(r==0) break;
-        else xenbus_wait_for_watch(queue);
+        if ( r==0 )
+            return NULL;
+
+        xenbus_wait_for_watch(queue);
     }
-    return NULL;
 }
 
-char *xenbus_switch_state(xenbus_transaction_t xbt, const char* path, XenbusState state)
+char *xenbus_switch_state(xenbus_transaction_t xbt, const char *path,
+                          XenbusState state)
 {
     char *current_state;
     char *msg = NULL;
@@ -174,18 +183,22 @@ char *xenbus_switch_state(xenbus_transaction_t xbt, const char* path, XenbusStat
     int retry = 0;
 
     do {
-        if (xbt == XBT_NIL) {
+        if ( xbt == XBT_NIL )
+        {
             msg = xenbus_transaction_start(&xbt);
-            if (msg) goto exit;
+            if ( msg )
+                goto exit;
             xbt_flag = 1;
         }
 
         msg = xenbus_read(xbt, path, &current_state);
-        if (msg) goto exit;
+        if ( msg )
+            goto exit;
 
         rs = (XenbusState) (current_state[0] - '0');
         free(current_state);
-        if (rs == state) {
+        if ( rs == state )
+        {
             msg = NULL;
             goto exit;
         }
@@ -194,37 +207,42 @@ char *xenbus_switch_state(xenbus_transaction_t xbt, const char* path, XenbusStat
         msg = xenbus_write(xbt, path, value);
 
 exit:
-        if (xbt_flag) {
+        if ( xbt_flag )
+        {
             msg2 = xenbus_transaction_end(xbt, 0, &retry);
             xbt = XBT_NIL;
         }
-        if (msg == NULL && msg2 != NULL)
+        if ( msg == NULL && msg2 != NULL )
             msg = msg2;
         else
             free(msg2);
-    } while (retry);
+    } while ( retry );
 
     return msg;
 }
 
-char *xenbus_wait_for_state_change(const char* path, XenbusState *state, xenbus_event_queue *queue)
+char *xenbus_wait_for_state_change(const char *path, XenbusState *state,
+                                   xenbus_event_queue *queue)
 {
-    if (!queue)
+    if ( !queue )
         queue = &xenbus_events;
-    for(;;)
+
+    for( ;; )
     {
         char *res, *msg;
         XenbusState rs;
 
         msg = xenbus_read(XBT_NIL, path, &res);
-        if(msg) return msg;
+        if ( msg )
+            return msg;
 
-        rs = (XenbusState) (res[0] - 48);
+        rs = (XenbusState)(res[0] - 48);
         free(res);
 
-        if (rs == *state)
+        if ( rs == *state )
             xenbus_wait_for_watch(queue);
-        else {
+        else
+        {
             *state = rs;
             break;
         }
@@ -232,14 +250,13 @@ char *xenbus_wait_for_state_change(const char* path, XenbusState *state, xenbus_
     return NULL;
 }
 
-
 static void xenbus_read_data(char *buf, unsigned int len)
 {
     unsigned int off = 0;
     unsigned int prod, cons;
     unsigned int size;
 
-    while (off != len)
+    while ( off != len )
     {
         wait_event(xb_waitq, xenstore_buf->rsp_prod != xenstore_buf->rsp_cons);
 
@@ -255,7 +272,7 @@ static void xenbus_read_data(char *buf, unsigned int len)
         mb();    /* memcpy() and rsp_cons update must not be reordered. */
         xenstore_buf->rsp_cons += size;
         mb();    /* rsp_cons must be visible before we look at rsp_prod. */
-        if (xenstore_buf->rsp_prod - cons >= XENSTORE_RING_SIZE)
+        if ( xenstore_buf->rsp_prod - cons >= XENSTORE_RING_SIZE )
             notify_remote_via_evtchn(xenbus_evtchn);
     }
 }
@@ -265,30 +282,35 @@ static void xenbus_thread_func(void *ign)
     struct xsd_sockmsg msg;
     char *data;
 
-    for (;;) {
+    for ( ;; )
+    {
         xenbus_read_data((char *)&msg, sizeof(msg));
         DEBUG("Msg len %d, %d avail, id %d.\n", msg.len + sizeof(msg),
               xenstore_buf->rsp_prod - xenstore_buf->rsp_cons, msg.req_id);
 
-        if (msg.len > XENSTORE_PAYLOAD_MAX) {
+        if ( msg.len > XENSTORE_PAYLOAD_MAX )
+        {
             printk("Xenstore violates protocol, message longer than allowed.\n");
             return;
         }
 
-        if (msg.type == XS_WATCH_EVENT) {
+        if ( msg.type == XS_WATCH_EVENT )
+        {
             struct xenbus_event *event = malloc(sizeof(*event) + msg.len);
             xenbus_event_queue *events = NULL;
             struct watch *watch;
             char *c;
             int zeroes = 0;
 
-            data = (char*)event + sizeof(*event);
+            data = (char *)event + sizeof(*event);
             xenbus_read_data(data, msg.len);
 
-            for (c = data; c < data + msg.len; c++)
-                if (!*c)
+            for ( c = data; c < data + msg.len; c++ )
+                if ( !*c )
                     zeroes++;
-            if (zeroes != 2) {
+
+            if ( zeroes != 2 )
+            {
                 printk("Xenstore: illegal watch event data\n");
                 free(event);
                 continue;
@@ -297,17 +319,21 @@ static void xenbus_thread_func(void *ign)
             event->path = data;
             event->token = event->path + strlen(event->path) + 1;
 
-            for (watch = watches; watch; watch = watch->next)
-                if (!strcmp(watch->token, event->token)) {
+            for ( watch = watches; watch; watch = watch->next )
+                if ( !strcmp(watch->token, event->token) )
+                {
                     events = watch->events;
                     break;
                 }
 
-            if (events) {
+            if ( events )
+            {
                 event->next = *events;
                 *events = event;
                 wake_up(&xenbus_watch_queue);
-            } else {
+            }
+            else
+            {
                 printk("Xenstore: unexpected watch token %s\n", event->token);
                 free(event);
             }
@@ -319,7 +345,8 @@ static void xenbus_thread_func(void *ign)
         memcpy(data, &msg, sizeof(msg));
         xenbus_read_data(data + sizeof(msg), msg.len);
 
-        if (msg.req_id >= NR_REQS || !req_info[msg.req_id].in_use) {
+        if ( msg.req_id >= NR_REQS || !req_info[msg.req_id].in_use )
+        {
             printk("Xenstore: illegal request id %d\n", msg.req_id);
             free(data);
             continue;
@@ -334,7 +361,7 @@ static void xenbus_thread_func(void *ign)
 }
 
 static void xenbus_evtchn_handler(evtchn_port_t port, struct pt_regs *regs,
-				  void *ign)
+                                  void *ign)
 {
     wake_up(&xb_waitq);
 }
@@ -347,12 +374,15 @@ static DECLARE_WAIT_QUEUE_HEAD(req_wq);
 static void release_xenbus_id(int id)
 {
     BUG_ON(!req_info[id].in_use);
+
     spin_lock(&req_lock);
+
     req_info[id].in_use = 0;
     nr_live_reqs--;
     req_info[id].in_use = 0;
-    if (nr_live_reqs == 0 || nr_live_reqs == NR_REQS - 1)
+    if ( nr_live_reqs == 0 || nr_live_reqs == NR_REQS - 1 )
         wake_up(&req_wq);
+
     spin_unlock(&req_lock);
 }
 
@@ -363,27 +393,27 @@ static int allocate_xenbus_id(void)
     static int probe;
     int o_probe;
 
-    while (1) 
+    while ( 1 )
     {
         spin_lock(&req_lock);
-        if (nr_live_reqs < NR_REQS)
+        if ( nr_live_reqs < NR_REQS )
             break;
         spin_unlock(&req_lock);
-        wait_event(req_wq, (nr_live_reqs < NR_REQS));
+        wait_event(req_wq, nr_live_reqs < NR_REQS);
     }
 
     o_probe = probe;
-    for (;;) 
+    while ( req_info[o_probe].in_use )
     {
-        if (!req_info[o_probe].in_use)
-            break;
         o_probe = (o_probe + 1) % NR_REQS;
         BUG_ON(o_probe == probe);
     }
     nr_live_reqs++;
     req_info[o_probe].in_use = 1;
     probe = (o_probe + 1) % NR_REQS;
+
     spin_unlock(&req_lock);
+
     init_waitqueue_head(&req_info[o_probe].waitq);
 
     return o_probe;
@@ -393,6 +423,7 @@ static int allocate_xenbus_id(void)
 void init_xenbus(void)
 {
     int err;
+
     DEBUG("init_xenbus called.\n");
     create_thread("xenstore", xenbus_thread_func, NULL);
     DEBUG("buf at %p.\n", xenstore_buf);
@@ -408,13 +439,13 @@ void fini_xenbus(void)
 void suspend_xenbus(void)
 {
     /* Check for live requests and wait until they finish */
-    while (1)
+    while ( 1 )
     {
         spin_lock(&req_lock);
-        if (nr_live_reqs == 0)
+        if ( nr_live_reqs == 0 )
             break;
         spin_unlock(&req_lock);
-        wait_event(req_wq, (nr_live_reqs == 0));
+        wait_event(req_wq, nr_live_reqs == 0);
     }
 
     mask_evtchn(xenbus_evtchn);
@@ -436,8 +467,10 @@ void resume_xenbus(int canceled)
 #endif
     unmask_evtchn(xenbus_evtchn);
 
-    if (!canceled) {
-        for (watch = watches; watch; watch = watch->next) {
+    if ( !canceled )
+    {
+        for ( watch = watches; watch; watch = watch->next )
+        {
             req[0].data = watch->path;
             req[0].len = strlen(watch->path) + 1;
             req[1].data = watch->token;
@@ -445,10 +478,12 @@ void resume_xenbus(int canceled)
 
             rep = xenbus_msg_reply(XS_WATCH, XBT_NIL, req, ARRAY_SIZE(req));
             msg = errmsg(rep);
-            if (msg) {
+            if ( msg )
+            {
                 xprintk("error on XS_WATCH: %s\n", msg);
                 free(msg);
-            } else
+            }
+            else
                 free(rep);
         }
     }
@@ -456,12 +491,14 @@ void resume_xenbus(int canceled)
     notify_remote_via_evtchn(xenbus_evtchn);
 }
 
-/* Send data to xenbus.  This can block.  All of the requests are seen
-   by xenbus as if sent atomically.  The header is added
-   automatically, using type %type, req_id %req_id, and trans_id
-   %trans_id. */
+/*
+ * Send data to xenbus.  This can block.  All of the requests are seen
+ * by xenbus as if sent atomically.  The header is added
+ * automatically, using type %type, req_id %req_id, and trans_id
+ * %trans_id.
+ */
 static void xb_write(int type, int req_id, xenbus_transaction_t trans_id,
-		     const struct write_req *req, int nr_reqs)
+                     const struct write_req *req, int nr_reqs)
 {
     XENSTORE_RING_IDX prod;
     int r;
@@ -470,12 +507,12 @@ static void xb_write(int type, int req_id, xenbus_transaction_t trans_id,
     int req_off;
     int total_off;
     int this_chunk;
-    struct xsd_sockmsg m = {.type = type, .req_id = req_id,
-        .tx_id = trans_id };
+    struct xsd_sockmsg m = {.type = type, .req_id = req_id, .tx_id = trans_id };
     struct write_req header_req = { &m, sizeof(m) };
 
-    for (r = 0; r < nr_reqs; r++)
+    for ( r = 0; r < nr_reqs; r++ )
         len += req[r].len;
+
     m.len = len;
     len += sizeof(m);
 
@@ -489,10 +526,10 @@ static void xb_write(int type, int req_id, xenbus_transaction_t trans_id,
     /* Send the message in chunks using free ring space when available. */
     total_off = 0;
     req_off = 0;
-    while (total_off < len)
+    while ( total_off < len )
     {
         prod = xenstore_buf->req_prod;
-        if (prod - xenstore_buf->req_cons >= XENSTORE_RING_SIZE)
+        if ( prod - xenstore_buf->req_cons >= XENSTORE_RING_SIZE )
         {
             /* Send evtchn to notify remote */
             notify_remote_via_evtchn(xenbus_evtchn);
@@ -514,10 +551,10 @@ static void xb_write(int type, int req_id, xenbus_transaction_t trans_id,
         prod += this_chunk;
         req_off += this_chunk;
         total_off += this_chunk;
-        if (req_off == cur_req->len)
+        if ( req_off == cur_req->len )
         {
             req_off = 0;
-            if (cur_req == &header_req)
+            if ( cur_req == &header_req )
                 cur_req = req;
             else
                 cur_req++;
@@ -538,14 +575,13 @@ static void xb_write(int type, int req_id, xenbus_transaction_t trans_id,
     up(&xb_write_sem);
 }
 
-/* Send a mesasge to xenbus, in the same fashion as xb_write, and
-   block waiting for a reply.  The reply is malloced and should be
-   freed by the caller. */
-struct xsd_sockmsg *
-xenbus_msg_reply(int type,
-		 xenbus_transaction_t trans,
-		 struct write_req *io,
-		 int nr_reqs)
+/*
+ * Send a mesasge to xenbus, in the same fashion as xb_write, and
+ * block waiting for a reply.  The reply is malloced and should be
+ * freed by the caller.
+ */
+struct xsd_sockmsg *xenbus_msg_reply(int type, xenbus_transaction_t trans,
+                                     struct write_req *io, int nr_reqs)
 {
     int id;
     DEFINE_WAIT(w);
@@ -563,29 +599,36 @@ xenbus_msg_reply(int type,
     rep = req_info[id].reply;
     BUG_ON(rep->req_id != id);
     release_xenbus_id(id);
+
     return rep;
 }
 
 static char *errmsg(struct xsd_sockmsg *rep)
 {
     char *res;
-    if (!rep) {
-	char msg[] = "No reply";
-	size_t len = strlen(msg) + 1;
-	return memcpy(malloc(len), msg, len);
+
+    if ( !rep )
+    {
+        char msg[] = "No reply";
+        size_t len = strlen(msg) + 1;
+        return memcpy(malloc(len), msg, len);
     }
-    if (rep->type != XS_ERROR)
-	return NULL;
+    if ( rep->type != XS_ERROR )
+        return NULL;
+
     res = malloc(rep->len + 1);
     memcpy(res, rep + 1, rep->len);
     res[rep->len] = 0;
     free(rep);
+
     return res;
 }
 
-/* List the contents of a directory.  Returns a malloc()ed array of
-   pointers to malloc()ed strings.  The array is NULL terminated.  May
-   block. */
+/*
+ * List the contents of a directory.  Returns a malloc()ed array of
+ * pointers to malloc()ed strings.  The array is NULL terminated.  May
+ * block.
+ */
 char *xenbus_ls(xenbus_transaction_t xbt, const char *pre, char ***contents)
 {
     struct xsd_sockmsg *reply, *repmsg;
@@ -595,23 +638,30 @@ char *xenbus_ls(xenbus_transaction_t xbt, const char *pre, char ***contents)
 
     repmsg = xenbus_msg_reply(XS_DIRECTORY, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(repmsg);
-    if (msg) {
-	*contents = NULL;
-	return msg;
+    if ( msg )
+    {
+        *contents = NULL;
+        return msg;
     }
+
     reply = repmsg + 1;
-    for (x = nr_elems = 0; x < repmsg->len; x++)
+    for ( x = nr_elems = 0; x < repmsg->len; x++ )
         nr_elems += (((char *)reply)[x] == 0);
+
     res = malloc(sizeof(res[0]) * (nr_elems + 1));
-    for (x = i = 0; i < nr_elems; i++) {
+    for ( x = i = 0; i < nr_elems; i++ )
+    {
         int l = strlen((char *)reply + x);
+
         res[i] = malloc(l + 1);
         memcpy(res[i], (char *)reply + x, l + 1);
         x += l + 1;
     }
+
     res[i] = NULL;
     free(repmsg);
     *contents = res;
+
     return NULL;
 }
 
@@ -620,49 +670,56 @@ char *xenbus_read(xenbus_transaction_t xbt, const char *path, char **value)
     struct write_req req[] = { {path, strlen(path) + 1} };
     struct xsd_sockmsg *rep;
     char *res, *msg;
+
     rep = xenbus_msg_reply(XS_READ, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(rep);
-    if (msg) {
-	*value = NULL;
-	return msg;
+    if ( msg )
+    {
+        *value = NULL;
+        return msg;
     }
+
     res = malloc(rep->len + 1);
     memcpy(res, rep + 1, rep->len);
     res[rep->len] = 0;
     free(rep);
     *value = res;
+
     return NULL;
 }
 
-char *xenbus_write(xenbus_transaction_t xbt, const char *path, const char *value)
+char *xenbus_write(xenbus_transaction_t xbt, const char *path,
+                   const char *value)
 {
-    struct write_req req[] = { 
-	{path, strlen(path) + 1},
-	{value, strlen(value)},
+    struct write_req req[] = {
+        {path, strlen(path) + 1},
+        {value, strlen(value)},
     };
     struct xsd_sockmsg *rep;
     char *msg;
+
     rep = xenbus_msg_reply(XS_WRITE, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(rep);
-    if (msg) return msg;
+    if ( msg )
+        return msg;
+
     free(rep);
+
     return NULL;
 }
 
-char* xenbus_watch_path_token( xenbus_transaction_t xbt, const char *path, const char *token, xenbus_event_queue *events)
+char* xenbus_watch_path_token(xenbus_transaction_t xbt, const char *path,
+                              const char *token, xenbus_event_queue *events)
 {
     struct xsd_sockmsg *rep;
-
-    struct write_req req[] = { 
+    struct write_req req[] = {
         {path, strlen(path) + 1},
-	{token, strlen(token) + 1},
+        {token, strlen(token) + 1},
     };
-
     struct watch *watch = malloc(sizeof(*watch));
-
     char *msg;
 
-    if (!events)
+    if ( !events )
         events = &xenbus_events;
 
     watch->token = strdup(token);
@@ -674,33 +731,37 @@ char* xenbus_watch_path_token( xenbus_transaction_t xbt, const char *path, const
     rep = xenbus_msg_reply(XS_WATCH, xbt, req, ARRAY_SIZE(req));
 
     msg = errmsg(rep);
-    if (msg) return msg;
+    if ( msg )
+        return msg;
+
     free(rep);
 
     return NULL;
 }
 
-char* xenbus_unwatch_path_token( xenbus_transaction_t xbt, const char *path, const char *token)
+char* xenbus_unwatch_path_token(xenbus_transaction_t xbt, const char *path,
+                                const char *token)
 {
     struct xsd_sockmsg *rep;
-
-    struct write_req req[] = { 
+    struct write_req req[] = {
         {path, strlen(path) + 1},
-	{token, strlen(token) + 1},
+        {token, strlen(token) + 1},
     };
-
     struct watch *watch, **prev;
-
     char *msg;
 
     rep = xenbus_msg_reply(XS_UNWATCH, xbt, req, ARRAY_SIZE(req));
 
     msg = errmsg(rep);
-    if (msg) return msg;
+    if ( msg )
+        return msg;
+
     free(rep);
 
-    for (prev = &watches, watch = *prev; watch; prev = &watch->next, watch = *prev)
-        if (!strcmp(watch->token, token)) {
+    for ( prev = &watches, watch = *prev; watch;
+          prev = &watch->next, watch = *prev)
+        if ( !strcmp(watch->token, token) )
+        {
             free(watch->token);
             free(watch->path);
             *prev = watch->next;
@@ -716,11 +777,14 @@ char *xenbus_rm(xenbus_transaction_t xbt, const char *path)
     struct write_req req[] = { {path, strlen(path) + 1} };
     struct xsd_sockmsg *rep;
     char *msg;
+
     rep = xenbus_msg_reply(XS_RM, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(rep);
-    if (msg)
-	return msg;
+    if ( msg )
+        return msg;
+
     free(rep);
+
     return NULL;
 }
 
@@ -729,59 +793,70 @@ char *xenbus_get_perms(xenbus_transaction_t xbt, const char *path, char **value)
     struct write_req req[] = { {path, strlen(path) + 1} };
     struct xsd_sockmsg *rep;
     char *res, *msg;
+
     rep = xenbus_msg_reply(XS_GET_PERMS, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(rep);
-    if (msg) {
-	*value = NULL;
-	return msg;
+    if ( msg )
+    {
+        *value = NULL;
+        return msg;
     }
+
     res = malloc(rep->len + 1);
     memcpy(res, rep + 1, rep->len);
     res[rep->len] = 0;
     free(rep);
     *value = res;
+
     return NULL;
 }
 
 #define PERM_MAX_SIZE 32
-char *xenbus_set_perms(xenbus_transaction_t xbt, const char *path, domid_t dom, char perm)
+char *xenbus_set_perms(xenbus_transaction_t xbt, const char *path, domid_t dom,
+                       char perm)
 {
     char value[PERM_MAX_SIZE];
-    struct write_req req[] = { 
-	{path, strlen(path) + 1},
-	{value, 0},
+    struct write_req req[] = {
+        {path, strlen(path) + 1},
+        {value, 0},
     };
     struct xsd_sockmsg *rep;
     char *msg;
+
     snprintf(value, PERM_MAX_SIZE, "%c%hu", perm, dom);
     req[1].len = strlen(value) + 1;
     rep = xenbus_msg_reply(XS_SET_PERMS, xbt, req, ARRAY_SIZE(req));
     msg = errmsg(rep);
-    if (msg)
-	return msg;
+    if ( msg )
+        return msg;
+
     free(rep);
+
     return NULL;
 }
 
 char *xenbus_transaction_start(xenbus_transaction_t *xbt)
 {
-    /* xenstored becomes angry if you send a length 0 message, so just
-       shove a nul terminator on the end */
+    /*
+     * xenstored becomes angry if you send a length 0 message, so just
+     * shove a nul terminator on the end
+     */
     struct write_req req = { "", 1};
     struct xsd_sockmsg *rep;
     char *err;
 
     rep = xenbus_msg_reply(XS_TRANSACTION_START, 0, &req, 1);
     err = errmsg(rep);
-    if (err)
-	return err;
+    if ( err )
+        return err;
+
     sscanf((char *)(rep + 1), "%lu", xbt);
     free(rep);
+
     return NULL;
 }
 
-char *
-xenbus_transaction_end(xenbus_transaction_t t, int abort, int *retry)
+char *xenbus_transaction_end(xenbus_transaction_t t, int abort, int *retry)
 {
     struct xsd_sockmsg *rep;
     struct write_req req;
@@ -793,16 +868,19 @@ xenbus_transaction_end(xenbus_transaction_t t, int abort, int *retry)
     req.len = 2;
     rep = xenbus_msg_reply(XS_TRANSACTION_END, t, &req, 1);
     err = errmsg(rep);
-    if (err) {
-	if (!strcmp(err, "EAGAIN")) {
-	    *retry = 1;
-	    free(err);
-	    return NULL;
-	} else {
-	    return err;
-	}
+    if ( err )
+    {
+        if ( !strcmp(err, "EAGAIN") )
+        {
+            *retry = 1;
+            free(err);
+            return NULL;
+        }
+        else
+            return err;
     }
     free(rep);
+
     return NULL;
 }
 
@@ -812,46 +890,54 @@ int xenbus_read_integer(const char *path)
     int t;
 
     res = xenbus_read(XBT_NIL, path, &buf);
-    if (res) {
-	printk("Failed to read %s.\n", path);
-	free(res);
-	return -1;
+    if ( res )
+    {
+        printk("Failed to read %s.\n", path);
+        free(res);
+        return -1;
     }
+
     sscanf(buf, "%d", &t);
     free(buf);
+
     return t;
 }
 
-int xenbus_read_uuid(const char* path, unsigned char uuid[16]) {
-   char * res, *buf;
-   res = xenbus_read(XBT_NIL, path, &buf);
-   if(res) {
-      printk("Failed to read %s.\n", path);
-      free(res);
-      return 0;
-   }
-   if(strlen(buf) != ((2*16)+4) /* 16 hex bytes and 4 hyphens */
-         || sscanf(buf,
-            "%2hhx%2hhx%2hhx%2hhx-"
-            "%2hhx%2hhx-"
-            "%2hhx%2hhx-"
-            "%2hhx%2hhx-"
-            "%2hhx%2hhx%2hhx%2hhx%2hhx%2hhx",
-            uuid, uuid + 1, uuid + 2, uuid + 3,
-            uuid + 4, uuid + 5, uuid + 6, uuid + 7,
-            uuid + 8, uuid + 9, uuid + 10, uuid + 11,
-            uuid + 12, uuid + 13, uuid + 14, uuid + 15) != 16) {
-      printk("Xenbus path %s value %s is not a uuid!\n", path, buf);
-      free(buf);
-      return 0;
-   }
-   free(buf);
-   return 1;
-}
-
-char* xenbus_printf(xenbus_transaction_t xbt,
-                                  const char* node, const char* path,
-                                  const char* fmt, ...)
+int xenbus_read_uuid(const char *path, unsigned char uuid[16])
+{
+    char *res, *buf;
+
+    res = xenbus_read(XBT_NIL, path, &buf);
+    if ( res )
+    {
+       printk("Failed to read %s.\n", path);
+       free(res);
+       return 0;
+    }
+
+    if ( strlen(buf) != ((2 * 16) + 4) /* 16 hex bytes and 4 hyphens */ ||
+         sscanf(buf, "%2hhx%2hhx%2hhx%2hhx-"
+                     "%2hhx%2hhx-"
+                     "%2hhx%2hhx-"
+                     "%2hhx%2hhx-"
+                     "%2hhx%2hhx%2hhx%2hhx%2hhx%2hhx",
+                uuid, uuid + 1, uuid + 2, uuid + 3,
+                uuid + 4, uuid + 5, uuid + 6, uuid + 7,
+                uuid + 8, uuid + 9, uuid + 10, uuid + 11,
+                uuid + 12, uuid + 13, uuid + 14, uuid + 15) != 16)
+    {
+        printk("Xenbus path %s value %s is not a uuid!\n", path, buf);
+        free(buf);
+        return 0;
+    }
+
+    free(buf);
+
+    return 1;
+}
+
+char *xenbus_printf(xenbus_transaction_t xbt, const char* node,
+                    const char* path, const char* fmt, ...)
 {
 #define BUFFER_SIZE 256
     char fullpath[BUFFER_SIZE];
@@ -863,6 +949,7 @@ char* xenbus_printf(xenbus_transaction_t xbt,
     va_start(args, fmt);
     vsprintf(val, fmt, args);
     va_end(args);
+
     return xenbus_write(xbt,fullpath,val);
 }
 
@@ -890,7 +977,7 @@ static void xenbus_debug_msg(const char *msg)
 
     reply = xenbus_msg_reply(XS_DEBUG, 0, req, ARRAY_SIZE(req));
     printk("Got a reply, type %d, id %d, len %d.\n",
-            reply->type, reply->req_id, reply->len);
+           reply->type, reply->req_id, reply->len);
 }
 
 static void do_ls_test(const char *pre)
@@ -900,28 +987,33 @@ static void do_ls_test(const char *pre)
 
     printk("ls %s...\n", pre);
     msg = xenbus_ls(XBT_NIL, pre, &dirs);
-    if (msg) {
-	printk("Error in xenbus ls: %s\n", msg);
-	free(msg);
-	return;
+    if ( msg )
+    {
+        printk("Error in xenbus ls: %s\n", msg);
+        free(msg);
+        return;
     }
-    for (x = 0; dirs[x]; x++) 
+
+    for ( x = 0; dirs[x]; x++ )
     {
         printk("ls %s[%d] -> %s\n", pre, x, dirs[x]);
         free(dirs[x]);
     }
+
     free(dirs);
 }
 
 static void do_read_test(const char *path)
 {
     char *res, *msg;
+
     printk("Read %s...\n", path);
     msg = xenbus_read(XBT_NIL, path, &res);
-    if (msg) {
-	printk("Error in xenbus read: %s\n", msg);
-	free(msg);
-	return;
+    if ( msg )
+    {
+        printk("Error in xenbus read: %s\n", msg);
+        free(msg);
+        return;
     }
     printk("Read %s -> %s.\n", path, res);
     free(res);
@@ -930,27 +1022,31 @@ static void do_read_test(const char *path)
 static void do_write_test(const char *path, const char *val)
 {
     char *msg;
+
     printk("Write %s to %s...\n", val, path);
     msg = xenbus_write(XBT_NIL, path, val);
-    if (msg) {
-	printk("Result %s\n", msg);
-	free(msg);
-    } else {
-	printk("Success.\n");
+    if ( msg )
+    {
+        printk("Result %s\n", msg);
+        free(msg);
     }
+    else
+        printk("Success.\n");
 }
 
 static void do_rm_test(const char *path)
 {
     char *msg;
+
     printk("rm %s...\n", path);
     msg = xenbus_rm(XBT_NIL, path);
-    if (msg) {
-	printk("Result %s\n", msg);
-	free(msg);
-    } else {
-	printk("Success.\n");
+    if ( msg )
+    {
+        printk("Result %s\n", msg);
+        free(msg);
     }
+    else
+        printk("Success.\n");
 }
 
 /* Simple testing thing */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 07:38:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 07:38:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352540.579412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzj-0004pu-Tu; Mon, 20 Jun 2022 07:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352540.579412; Mon, 20 Jun 2022 07:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Bzj-0004n7-Js; Mon, 20 Jun 2022 07:38:47 +0000
Received: by outflank-mailman (input) for mailman id 352540;
 Mon, 20 Jun 2022 07:38:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Bzg-0002ZD-CV
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 07:38:44 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fcb15982-f06b-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 09:38:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2193721BCE;
 Mon, 20 Jun 2022 07:38:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E52F6134CA;
 Mon, 20 Jun 2022 07:38:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OEubNvkjsGI3DAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 07:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcb15982-f06b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655710714; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F2usykdrZz/ETCDtrSzcODBAgM6yVFNOOYprC5PICS8=;
	b=sSC6tQ1TwuIm9F1IbZobt/BlZYlWRWpysSxHKvHzavFfCHHIa5nFk1ZdZElY8a3WwNwMRA
	y9ZrzB/r5gPVTVtZ8kKydPNmy8Ic/jBpOArgr7WXEI28MTsM1sqy5oup9ByadnXzGCtIh0
	ynmiF7xExpf2uig1Lusce6y7AOxIUqA=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 6/8] mini-os: eliminate console directory
Date: Mon, 20 Jun 2022 09:38:18 +0200
Message-Id: <20220620073820.9336-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Merge the two remaining source files in the console directory into
a single one and move it to the main directory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Makefile                            |   4 +-
 console/xencons_ring.c => console.c | 171 ++++++++++++++++++++++++++-
 console/console.c                   | 177 ----------------------------
 3 files changed, 170 insertions(+), 182 deletions(-)
 rename console/xencons_ring.c => console.c (51%)
 delete mode 100644 console/console.c

diff --git a/Makefile b/Makefile
index 509d927b..f3acdd2f 100644
--- a/Makefile
+++ b/Makefile
@@ -41,6 +41,7 @@ src-$(CONFIG_CONSFRONT) += consfront.c
 src-$(CONFIG_TPMFRONT) += tpmfront.c
 src-$(CONFIG_TPM_TIS) += tpm_tis.c
 src-$(CONFIG_TPMBACK) += tpmback.c
+src-y += console.c
 src-y += daytime.c
 src-y += e820.c
 src-y += events.c
@@ -69,9 +70,6 @@ src-y += lib/sys.c
 src-y += lib/xmalloc.c
 src-$(CONFIG_LIBXS) += lib/xs.c
 
-src-y += console/console.c
-src-y += console/xencons_ring.c
-
 # The common mini-os objects to build.
 APP_OBJS :=
 OBJS := $(patsubst %.c,$(OBJ_DIR)/%.o,$(src-y))
diff --git a/console/xencons_ring.c b/console.c
similarity index 51%
rename from console/xencons_ring.c
rename to console.c
index 495f0a19..29277eac 100644
--- a/console/xencons_ring.c
+++ b/console.c
@@ -1,3 +1,39 @@
+/* 
+ ****************************************************************************
+ * (C) 2006 - Grzegorz Milos - Cambridge University
+ ****************************************************************************
+ *
+ *        File: console.c
+ *      Author: Grzegorz Milos
+ *     Changes: 
+ *              
+ *        Date: Mar 2006
+ * 
+ * Environment: Xen Minimal OS
+ * Description: Console interface.
+ *
+ * Handles console I/O. Defines printk.
+ *
+ ****************************************************************************
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ * 
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ * 
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 
+ * DEALINGS IN THE SOFTWARE.
+ */
+ 
 #include <mini-os/types.h>
 #include <mini-os/wait.h>
 #include <mini-os/mm.h>
@@ -7,12 +43,143 @@
 #include <mini-os/lib.h>
 #include <mini-os/console.h>
 #include <mini-os/xenbus.h>
+#include <mini-os/xmalloc.h>
+#include <mini-os/gnttab.h>
 #include <xen/io/console.h>
 #include <xen/io/protocols.h>
 #include <xen/io/ring.h>
 #include <xen/hvm/params.h>
-#include <mini-os/xmalloc.h>
-#include <mini-os/gnttab.h>
+
+/* If console not initialised the printk will be sent to xen serial line 
+   NOTE: you need to enable verbose in xen/Rules.mk for it to work. */
+static struct consfront_dev* xen_console = NULL;
+static int console_initialised = 0;
+
+__attribute__((weak)) void console_input(char * buf, unsigned len)
+{
+    if(len > 0)
+    {
+        /* Just repeat what's written */
+        buf[len] = '\0';
+        printk("%s", buf);
+        
+        if(buf[len-1] == '\r')
+            printk("\nNo console input handler.\n");
+    }
+}
+
+#ifndef HAVE_LIBC
+void xencons_rx(char *buf, unsigned len, struct pt_regs *regs)
+{
+    console_input(buf, len);
+}
+
+void xencons_tx(void)
+{
+    /* Do nothing, handled by _rx */
+}
+#endif
+
+
+void console_print(struct consfront_dev *dev, const char *data, int length)
+{
+    char *curr_char, saved_char;
+    char copied_str[length+1];
+    char *copied_ptr;
+    int part_len;
+    int (*ring_send_fn)(struct consfront_dev *dev, const char *data, unsigned length);
+
+    if(!console_initialised)
+        ring_send_fn = xencons_ring_send_no_notify;
+    else
+        ring_send_fn = xencons_ring_send;
+
+    if (dev && dev->is_raw) {
+        ring_send_fn(dev, data, length);
+        return;
+    }
+
+    copied_ptr = copied_str;
+    memcpy(copied_ptr, data, length);
+    for(curr_char = copied_ptr; curr_char < copied_ptr+length-1; curr_char++)
+    {
+        if(*curr_char == '\n')
+        {
+            *curr_char = '\r';
+            saved_char = *(curr_char+1);
+            *(curr_char+1) = '\n';
+            part_len = curr_char - copied_ptr + 2;
+            ring_send_fn(dev, copied_ptr, part_len);
+            *(curr_char+1) = saved_char;
+            copied_ptr = curr_char+1;
+            length -= part_len - 1;
+        }
+    }
+
+    if (copied_ptr[length-1] == '\n') {
+        copied_ptr[length-1] = '\r';
+        copied_ptr[length] = '\n';
+        length++;
+    }
+    
+    ring_send_fn(dev, copied_ptr, length);
+}
+
+void print(int direct, const char *fmt, va_list args)
+{
+    static char __print_buf[1024];
+    
+    (void)vsnprintf(__print_buf, sizeof(__print_buf), fmt, args);
+
+    if(direct)
+    {
+        (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
+        return;
+    } else {
+#ifndef CONFIG_USE_XEN_CONSOLE
+    if(!console_initialised)
+#endif    
+            (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
+        
+        console_print(NULL, __print_buf, strlen(__print_buf));
+    }
+}
+
+void printk(const char *fmt, ...)
+{
+    va_list       args;
+    va_start(args, fmt);
+    print(0, fmt, args);
+    va_end(args);        
+}
+
+void xprintk(const char *fmt, ...)
+{
+    va_list       args;
+    va_start(args, fmt);
+    print(1, fmt, args);
+    va_end(args);        
+}
+void init_console(void)
+{   
+    printk("Initialising console ... ");
+    xen_console = xencons_ring_init();
+    console_initialised = 1;
+    /* This is also required to notify the daemon */
+    printk("done.\n");
+}
+
+void suspend_console(void)
+{
+    console_initialised = 0;
+    xencons_ring_fini(xen_console);
+}
+
+void resume_console(void)
+{
+    xencons_ring_resume(xen_console);
+    console_initialised = 1;
+}
 
 DECLARE_WAIT_QUEUE_HEAD(console_queue);
 
diff --git a/console/console.c b/console/console.c
deleted file mode 100644
index 68c8435e..00000000
--- a/console/console.c
+++ /dev/null
@@ -1,177 +0,0 @@
-/* 
- ****************************************************************************
- * (C) 2006 - Grzegorz Milos - Cambridge University
- ****************************************************************************
- *
- *        File: console.h
- *      Author: Grzegorz Milos
- *     Changes: 
- *              
- *        Date: Mar 2006
- * 
- * Environment: Xen Minimal OS
- * Description: Console interface.
- *
- * Handles console I/O. Defines printk.
- *
- ****************************************************************************
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to
- * deal in the Software without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- * 
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- * 
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 
- * DEALINGS IN THE SOFTWARE.
- */
- 
-#include <mini-os/types.h>
-#include <mini-os/wait.h>
-#include <mini-os/mm.h>
-#include <mini-os/hypervisor.h>
-#include <mini-os/events.h>
-#include <mini-os/os.h>
-#include <mini-os/lib.h>
-#include <mini-os/xenbus.h>
-#include <xen/io/console.h>
-
-
-/* If console not initialised the printk will be sent to xen serial line 
-   NOTE: you need to enable verbose in xen/Rules.mk for it to work. */
-static struct consfront_dev* xen_console = NULL;
-static int console_initialised = 0;
-
-__attribute__((weak)) void console_input(char * buf, unsigned len)
-{
-    if(len > 0)
-    {
-        /* Just repeat what's written */
-        buf[len] = '\0';
-        printk("%s", buf);
-        
-        if(buf[len-1] == '\r')
-            printk("\nNo console input handler.\n");
-    }
-}
-
-#ifndef HAVE_LIBC
-void xencons_rx(char *buf, unsigned len, struct pt_regs *regs)
-{
-    console_input(buf, len);
-}
-
-void xencons_tx(void)
-{
-    /* Do nothing, handled by _rx */
-}
-#endif
-
-
-void console_print(struct consfront_dev *dev, const char *data, int length)
-{
-    char *curr_char, saved_char;
-    char copied_str[length+1];
-    char *copied_ptr;
-    int part_len;
-    int (*ring_send_fn)(struct consfront_dev *dev, const char *data, unsigned length);
-
-    if(!console_initialised)
-        ring_send_fn = xencons_ring_send_no_notify;
-    else
-        ring_send_fn = xencons_ring_send;
-
-    if (dev && dev->is_raw) {
-        ring_send_fn(dev, data, length);
-        return;
-    }
-
-    copied_ptr = copied_str;
-    memcpy(copied_ptr, data, length);
-    for(curr_char = copied_ptr; curr_char < copied_ptr+length-1; curr_char++)
-    {
-        if(*curr_char == '\n')
-        {
-            *curr_char = '\r';
-            saved_char = *(curr_char+1);
-            *(curr_char+1) = '\n';
-            part_len = curr_char - copied_ptr + 2;
-            ring_send_fn(dev, copied_ptr, part_len);
-            *(curr_char+1) = saved_char;
-            copied_ptr = curr_char+1;
-            length -= part_len - 1;
-        }
-    }
-
-    if (copied_ptr[length-1] == '\n') {
-        copied_ptr[length-1] = '\r';
-        copied_ptr[length] = '\n';
-        length++;
-    }
-    
-    ring_send_fn(dev, copied_ptr, length);
-}
-
-void print(int direct, const char *fmt, va_list args)
-{
-    static char __print_buf[1024];
-    
-    (void)vsnprintf(__print_buf, sizeof(__print_buf), fmt, args);
-
-    if(direct)
-    {
-        (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
-        return;
-    } else {
-#ifndef CONFIG_USE_XEN_CONSOLE
-    if(!console_initialised)
-#endif    
-            (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(__print_buf), __print_buf);
-        
-        console_print(NULL, __print_buf, strlen(__print_buf));
-    }
-}
-
-void printk(const char *fmt, ...)
-{
-    va_list       args;
-    va_start(args, fmt);
-    print(0, fmt, args);
-    va_end(args);        
-}
-
-void xprintk(const char *fmt, ...)
-{
-    va_list       args;
-    va_start(args, fmt);
-    print(1, fmt, args);
-    va_end(args);        
-}
-void init_console(void)
-{   
-    printk("Initialising console ... ");
-    xen_console = xencons_ring_init();
-    console_initialised = 1;
-    /* This is also required to notify the daemon */
-    printk("done.\n");
-}
-
-void suspend_console(void)
-{
-    console_initialised = 0;
-    xencons_ring_fini(xen_console);
-}
-
-void resume_console(void)
-{
-    xencons_ring_resume(xen_console);
-    console_initialised = 1;
-}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 08:56:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 08:56:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352623.579437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3DCn-0007uB-7z; Mon, 20 Jun 2022 08:56:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352623.579437; Mon, 20 Jun 2022 08:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3DCn-0007u4-5P; Mon, 20 Jun 2022 08:56:21 +0000
Received: by outflank-mailman (input) for mailman id 352623;
 Mon, 20 Jun 2022 08:56:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l44P=W3=gmail.com=matiasevara@srs-se1.protection.inumbo.net>)
 id 1o3DCl-0007ty-CN
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 08:56:19 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d88a1e3a-f076-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 10:56:17 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id q9so13655331wrd.8
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jun 2022 01:56:17 -0700 (PDT)
Received: from horizon (lfbn-gre-1-214-221.w90-112.abo.wanadoo.fr.
 [90.112.175.221]) by smtp.gmail.com with ESMTPSA id
 k7-20020a7bc407000000b0039c747a1e8fsm19633449wmi.7.2022.06.20.01.56.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 20 Jun 2022 01:56:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d88a1e3a-f076-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=/CYGdhvh0JTTrmuuZw+EA2MlPBAr2UuAM4GaC7CWKq4=;
        b=OLS5nhf3UrAbhEEIJHfB0Gdoj7Tc47R89wHKqOPLqEDqeP25J7AVKI3+vzEMqjrsDH
         AgIPPoSgFfdBiKU0te1kErNx4wjjOew3aH/GOE7mxxf+zSCSsrn/FKwCtWYf6JVxSs9m
         gkre39OkCIzPO4JWbzuUn4YXhUPTywI6A45DjlNyJTF6nrvIaXjkmE3olHKpug7T3oBn
         tqVabirXbQQtO5N8SF4Zx8uuB9Wi/4hiPh1gF5tynXtIttqlIG2BuNJRbPijtEn0lVQ1
         nGaytsl+pDDPB7sI8kTOj8SKtDTztwZZWDVvixnY5TtKSilpgS80bXZuV5pczxSJzvxB
         /A3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=/CYGdhvh0JTTrmuuZw+EA2MlPBAr2UuAM4GaC7CWKq4=;
        b=7DkU8VUN/sdW9eQLLemYnZ6ox65jMd1TLUzmsaaLy+ALoOCtxjaU0quW5BkUKYD4iE
         WEekfdFA9vNWbI/2C7w7B+lthI4e3IJHEA57LUIvQOwUFx87ERZy9x8p7PPjuJDPFQq0
         +VbUtJHpwEFHAl/L8S2OQB1kvr+OsCLucLhnaUtno4mJlZ3CIweUQeYeubWT5wSN9A9a
         /skFuTlyl7B2dFrPJz9aBPQSAKsHSxmYn0vz1Ulom+IXd+cHNBGHmn4vR5HRQQ3DcF4U
         3kqPmldMxdf5W9ClJq1Qm73EDgiIbDu+U3VF/CladuQW8Vxs1OOf5/AOsYj932SLmRii
         EIPg==
X-Gm-Message-State: AJIora9ivN97ePFRBmhax1d2pR2hb6kXYijDa7Fn+OFjMp8MEJvdQbUA
	l22FOtsgevYWk8fJJ5YWTRc=
X-Google-Smtp-Source: AGRyM1t0QCJGOHa7pghE1mG2q/DLffpIncDXltiFj2wl1li+rotuu0dFTPYyg3kFTQ/ZnP3j25hSkg==
X-Received: by 2002:a05:6000:1a88:b0:218:4e7f:279d with SMTP id f8-20020a0560001a8800b002184e7f279dmr22265148wry.670.1655715377277;
        Mon, 20 Jun 2022 01:56:17 -0700 (PDT)
Date: Mon, 20 Jun 2022 10:56:15 +0200
From: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
	Wei Liu <wl@xen.org>
Subject: Re: [RFC PATCH 2/2] tools/misc: Add xen-stats tool
Message-ID: <20220620085615.GA2039596@horizon>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <e233c4f60c6fe97b93b3adf9affeb0404c554130.1652797713.git.matias.vara@vates.fr>
 <YpX48uwOGVqayb/x@perard.uk.xensource.com>
 <20220603110820.GA193297@horizon>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220603110820.GA193297@horizon>

Hello Anthony, 

On Fri, Jun 03, 2022 at 01:08:20PM +0200, Matias Ezequiel Vara Larsen wrote:
> Hello Anthony and thanks for your comments. I addressed them below:
> 
> On Tue, May 31, 2022 at 12:16:02PM +0100, Anthony PERARD wrote:
> > Hi Matias,
> > 
> > On Tue, May 17, 2022 at 04:33:15PM +0200, Matias Ezequiel Vara Larsen wrote:
> > > Add a demonstration tool that uses the stats_table resource to
> > > query vcpu time for a DomU.
> > > 
> > > Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
> > > ---
> > > diff --git a/tools/misc/Makefile b/tools/misc/Makefile
> > > index 2b683819d4..b510e3aceb 100644
> > > --- a/tools/misc/Makefile
> > > +++ b/tools/misc/Makefile
> > > @@ -135,4 +135,9 @@ xencov: xencov.o
> > >  xen-ucode: xen-ucode.o
> > >  	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
> > >  
> > > +xen-stats.o: CFLAGS += $(CFLAGS_libxenforeginmemory)
> > > +
> > > +xen-stats: xen-stats.o
> > 
> > The tool seems to be only about vcpus; maybe `xen-stats` is a bit too
> > generic. Would `xen-vcpus-stats`, or maybe something with `time` in
> > the name, be better?
> 
> Do you think `xen-vcpus-stats` would be good enough?
> 

I will pick up `xen-vcpus-stats` for v1 if you are not against it.

Thanks,

Matias

> > Also, is it a tool that could be useful enough to be installed by
> > default? Should we at least build it by default so it doesn't rot?
> > (By adding it just to $(TARGETS).)
> 
> I will make this tool build by default in the next version of the patches.
>  
> > > +	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
> > > +
> > >  -include $(DEPS_INCLUDE)
> > > diff --git a/tools/misc/xen-stats.c b/tools/misc/xen-stats.c
> > > new file mode 100644
> > > index 0000000000..5d4a3239cc
> > > --- /dev/null
> > > +++ b/tools/misc/xen-stats.c
> > > @@ -0,0 +1,83 @@
> > > +#include <err.h>
> > > +#include <errno.h>
> > > +#include <error.h>
> > > +#include <stdio.h>
> > > +#include <stdlib.h>
> > > +#include <string.h>
> > > +#include <sys/mman.h>
> > > +#include <signal.h>
> > > +
> > > +#include <xenctrl.h>
> > 
> > It seems overkill to use this header when the tool only uses the
> > xenforeignmemory interface. But I don't know how to replace
> > XC_PAGE_SHIFT, so I guess that's ok.
> > 
> > > +#include <xenforeignmemory.h>
> > > +#include <xen-tools/libs.h>
> > 
> > What do you use these headers for? Are they left over?
> 
> `xenforeignmemory.h` is used for `xenforeignmemory_*` functions.
> `xen-tools/libs.h` is left over so I will remove it in next version.
> 
> > > +static sig_atomic_t interrupted;
> > > +static void close_handler(int signum)
> > > +{
> > > +    interrupted = 1;
> > > +}
> > > +
> > > +int main(int argc, char **argv)
> > > +{
> > > +    xenforeignmemory_handle *fh;
> > > +    xenforeignmemory_resource_handle *res;
> > > +    size_t size;
> > > +    int rc, nr_frames, domid, frec, vcpu;
> > > +    uint64_t * info;
> > > +    struct sigaction act;
> > > +
> > > +    if (argc != 4 ) {
> > > +        fprintf(stderr, "Usage: %s <domid> <vcpu> <period>\n", argv[0]);
> > > +        return 1;
> > > +    }
> > > +
> > > +    // TODO: this depends on the resource
> > > +    nr_frames = 1;
> > > +
> > > +    domid = atoi(argv[1]);
> > > +    frec = atoi(argv[3]);
> > > +    vcpu = atoi(argv[2]);
> > 
> > Can you swap the last two lines? I think it would be better if the order
> > as on the command line.
> 
> Yes, I can.
> 
> > > +
> > > +    act.sa_handler = close_handler;
> > > +    act.sa_flags = 0;
> > > +    sigemptyset(&act.sa_mask);
> > > +    sigaction(SIGHUP,  &act, NULL);
> > > +    sigaction(SIGTERM, &act, NULL);
> > > +    sigaction(SIGINT,  &act, NULL);
> > > +    sigaction(SIGALRM, &act, NULL);
> > > +
> > > +    fh = xenforeignmemory_open(NULL, 0);
> > > +
> > > +    if ( !fh )
> > > +        err(1, "xenforeignmemory_open");
> > > +
> > > +    rc = xenforeignmemory_resource_size(
> > > +        fh, domid, XENMEM_resource_stats_table,
> > > +        vcpu, &size);
> > > +
> > > +    if ( rc )
> > > +        err(1, "    Fail: Get size: %d - %s\n", errno, strerror(errno));
> > 
> > It seems that err() already does print strerror(), and add a "\n", so
> > why print it again? Also, if we have strerror(), what the point of
> > printing "errno"?
> 
> I will remove errno, strerror(errno), and the extra "\n".
> 
> > Also, I'm not sure the extra indentation in the error message is really
> > useful, but that doesn't really matter.
> 
> I will remove the indentation.
> 
> > > +
> > > +    if ( (size >> XC_PAGE_SHIFT) != nr_frames )
> > > +        err(1, "    Fail: Get size: expected %u frames, got %zu\n",
> > > +                    nr_frames, size >> XC_PAGE_SHIFT);
> > 
> > err() prints strerror(errno), so maybe errx() is better here.
> 
> I will use errx().
> 
> Thanks,
>  
> > 
> > Thanks,
> > 
> > -- 
> > Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:39:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352633.579452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Ds2-0003s0-Lb; Mon, 20 Jun 2022 09:38:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352633.579452; Mon, 20 Jun 2022 09:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Ds2-0003rt-Hm; Mon, 20 Jun 2022 09:38:58 +0000
Received: by outflank-mailman (input) for mailman id 352633;
 Mon, 20 Jun 2022 09:38:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Js9C=W3=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1o3Ds1-0003rX-C0
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:38:57 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd0b2979-f07c-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 11:38:55 +0200 (CEST)
Received: from AS8P189CA0037.EURP189.PROD.OUTLOOK.COM (2603:10a6:20b:458::31)
 by VI1PR08MB4286.eurprd08.prod.outlook.com (2603:10a6:803:f6::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Mon, 20 Jun
 2022 09:38:34 +0000
Received: from AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:458:cafe::bd) by AS8P189CA0037.outlook.office365.com
 (2603:10a6:20b:458::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15 via Frontend
 Transport; Mon, 20 Jun 2022 09:38:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT024.mail.protection.outlook.com (10.152.16.175) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Mon, 20 Jun 2022 09:38:33 +0000
Received: ("Tessian outbound 01afcf8ccfad:v120");
 Mon, 20 Jun 2022 09:38:33 +0000
Received: from e4a2a033e4e8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 069EA64F-796B-44F0-8B8B-4398CB417182.1; 
 Mon, 20 Jun 2022 09:38:22 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e4a2a033e4e8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 20 Jun 2022 09:38:22 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by AM9PR08MB6275.eurprd08.prod.outlook.com (2603:10a6:20b:286::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Mon, 20 Jun
 2022 09:38:18 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7%9]) with mapi id 15.20.5353.022; Mon, 20 Jun 2022
 09:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd0b2979-f07c-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=S4k4r8mY/ka34mtlXQmuGov7OMvWRqjuFUuMgNnLT+Qp03tKkrDCW/DRbteH98BEysokMVQmT1cNiT0ws/RT67EXcr5ZGhZXHG8UpxmitzZShB8jSjAAucB9MlBQTAUL9TnguYat9dxjSpZ8yRK/56Y6W0oqn1511R5wwlL6BEGTwhIVRby9n5vaSeMxj9dd5OSO8u47IDgfu1PB8U5uh0tUQYJhg1nBFjJKtUSIeBeYVccNQ96KZ9rTVXWkc35dvMTak8B9L6H1YHnOmz0pNT3eIEuE7VwdyQwa1QlOF5TMThx/71w2SvA6P82Z6HpPTqthcMY7MNuItOhsqtEdEw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6+tmg8qjPFWiP8y7jqtZEugKcyw+bDuA9E8t0SVbl7g=;
 b=G8i2V1WZQofq++av+3pQmoCtv2zE7D5tjW97enlNVoHX7xZj6YFWj6NmQsMQR2sKW+G4K12PhlEJOb5AG8qKE9ccEmd2Q1B483D5+HdsDCjYBb87OSbAOTwf8VC6AKzUTs/rZnrsqrA/+3BQijpGfh24sQy+568SbLE/9xesTcu1RBydPbKoa1ehWk94biuHoEe39IcT32/NTh+uXPJV+SHH3bK1cLFWRt50IOchOdyGZL3zBuRRKW123wHBeRrELyU410kiggvl98OVyircY0Q6YBDr/oe25JvERRnsEOpgjDld6cIO6rNJfkgsDRadaLZmtM5IrbfaCEiyzGPURQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6+tmg8qjPFWiP8y7jqtZEugKcyw+bDuA9E8t0SVbl7g=;
 b=VIY/vzAle5n7KHGfP0eXoNnRsyQHTYwJWBERFjlyyYS/gBU+VPkZs3YzrwK52VCsU10CNEyrCSMcxCNilws+P1sXFnzCFWUNn9fHH8RBX8yqJF1dxm7CdFhAQNhNQcLrDQx/wFv5ig15hB+HafFRCRkqi2xBHak+LJ4P1fX5zaY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9f9afe7d304077a0
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P+xlVf/lQ5kBOHCzd7wmM6GhilqMn+55R1CKmtVuNk6UcTISBOh/i4Cs5Bc63nNCX3OPMrPPZUV93vojPsgNpWrq/Ll+HUQf6oRIiCgjEwUeP83WI+/540T04t48ZNbmMV47JF3yU5n6SSzSqGMfHO6Jt4jNQzTnsz3BtQ0vDfUlrKDfPZGUWiXjxhOsHraulc2++xIQ1WX6N0w82VrB7bVvMqQ5fL5viH3lphA91Gez2dPlCviwDqYGXd3OQkjnKm6BPb3oG06GWFOCQ9AsSN3KYcS6YxXKeSsbG0CDWaOlNroJE4wL8YMS4sGS+bXL3rX57unh86FF58OyktOk6g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6+tmg8qjPFWiP8y7jqtZEugKcyw+bDuA9E8t0SVbl7g=;
 b=CsaYJDhnIbAaf1y6N9zRkkmax+I6MVZoDGUL2MXYQRHQbW33zScY6ggFVUVxALpCiICs7OYT2Rr8Hu0hK7DwZGKhBeh7IPoqbBfI9EkZne5zpx0jRRWOyEL3D7LQ5CilCLJNt5hmgWoOSA3aETiGHpx8Dv7s7XBDiu/qjLiSnQeNp6BJapqs9M5/iy0Ac6CrIEqpIJnr3Zq8S4F0AFwvNFWXgbDxYDFDR0HmOpWKB8CmxzIjeR7z118UmU/bG88mU3nhSzwoBuuyRwVsxdzHP4zKVbg4en4T3vuo3nAA4dIY++tQVmfzzY7fakk2QcMX7Acb0Xffk3wuM67Jkro5jw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6+tmg8qjPFWiP8y7jqtZEugKcyw+bDuA9E8t0SVbl7g=;
 b=VIY/vzAle5n7KHGfP0eXoNnRsyQHTYwJWBERFjlyyYS/gBU+VPkZs3YzrwK52VCsU10CNEyrCSMcxCNilws+P1sXFnzCFWUNn9fHH8RBX8yqJF1dxm7CdFhAQNhNQcLrDQx/wFv5ig15hB+HafFRCRkqi2xBHak+LJ4P1fX5zaY=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Mykyta Poturai <mykyta.poturai@gmail.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "julien@xen.org" <julien@xen.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Topic: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Index:
 AQHYWlIekEFYa4Z+ZkOKg6prrRxnv60EB7eAgALvxoCABl1EgIBEoW2AgAA0fACAAZfdAIAEm8YA
Date: Mon, 20 Jun 2022 09:38:18 +0000
Message-ID: <A53A2C83-BA19-481D-8851-0B0E1A162F4D@arm.com>
References: <029EEEE1-69E1-42A9-90D3-BEC18CD5B7BC@arm.com>
 <20220617111544.205861-1-mykyta_poturai@epam.com>
In-Reply-To: <20220617111544.205861-1-mykyta_poturai@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 12b8e209-f8ec-4a66-0d92-08da52a0a4b6
x-ms-traffictypediagnostic:
	AM9PR08MB6275:EE_|AM5EUR03FT024:EE_|VI1PR08MB4286:EE_
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4286DF60C239F9C14F95B653FCB09@VI1PR08MB4286.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 aelLvI6PA7teJLztPyDbwBVZEZosNlwixYy2sSSfnW4AkP5+krqDVTL9twOOUxYzzv6Y7hrty9CgTyL4t++jAw6zBe7AVg4/rD/9tcTeEo5VoVxrodKIMQz6aExMFYQBqkwEtlBgLbCYDGRWTkXfMVCvo+ptcwv1pRCtTN9uiKRTw/GDQsV+m4YZX3tkW9XY44E46lD+TcGsOspXV/9PO82nAWSlVr41F36nNrxJyv0NClqmIQV5L5J1KXMRaly0HCWEgdy+M29fmyB9tpuplJShvEWqqAP4wCUGtaWvbP2OAXR9N5FL3P11fJlPnso0qX0O4KRFVHSiXTv0sIzE7EnEMPT72f8pRlGMd0HF5ijIGAwmqDobbiUS7YH1A+8OrjPv9qwVUIDuzw0iVUc55yA1zfXTkbWItr0UrpHyQh8cpYuSuqHIlIZaQgwf9BJ27sybwTrn7tmg9h8i3QlbSxdE9FDeu3qc/VSVWmQ4Vp8k+6Rh7iQE/iP+Rrv9JbBTb5HGYrRqw5wnwuLL/ifSLEYPQ+afTCPR0Rkq6Gi19mshtBsWuElEQ0lqFQQzdeX37zZWe3X7XzBCvlhVWc19PpLGnt7E7TZUFL9fk6jD3IzLinzhuT8YGN6A4AKj/YG0axaLKLBVZxrtaTbHopKgxAr001SPLz7Yiq1dtzwz5J8EBMpxrZz6D24xqh1vdimK0YZKyaoanq3659cBeKfZwINCrXT2TBc6YLiODY4RhAM=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(376002)(346002)(39860400002)(396003)(366004)(136003)(186003)(36756003)(38070700005)(8676002)(4326008)(66556008)(66946007)(66476007)(91956017)(5660300002)(64756008)(76116006)(66446008)(33656002)(6506007)(26005)(6512007)(71200400001)(38100700002)(2906002)(122000001)(86362001)(53546011)(54906003)(41300700001)(83380400001)(6486002)(316002)(6916009)(8936002)(478600001)(2616005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <93943F5CCB5E444FB01C7E34214EE9A8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6275
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8fcfc118-0a25-43ae-b512-08da52a09ba8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+8Ghq7U6GQDGJPdtpTZG47nnqnv+zdyLHN6rJRpZWsbaLzhlTVD9SzGD5xciPqMKej3xVDcB33cE5lZvIREpBRcI79yzS69kpTTr0z423r7N88BBLHXS0NSuUTxibiQsbzakZhnPkrlSwKeIEtFj85uo6txEYcnePyobn2480BYWK17fLlt6KIZGYY4U938bLFlewPzn6se6TjK1VhH1fv7t9yzXKrS+ud26NmUd/OiZ1i34qnvJPXmgp5S2dHZdEBj54wFMnIJsapmc763795r/KPEIQurVV7gmfqFRVnHys/V3kFDPwdapnHqB91QnD0UG0hh6UOmS8kmpWY02dVclhwDFZFYJ0l/BEgqmd0I7rm6ZZLY+ZzK++/laJgOz9wkSDk+ikSoDKVHp+AB9aIpnNZK6op5hZ58HOIBFMI7fJTbOF/xeASbVnWAlmTqZOJvtr8frNtdbqH9kqEE2HijmNwXtwORWLreWE5vGV1ogv2yZ8SFYpSiyaAUwySZNl/dyLPfiK2uBA1FMjx9w8b4YPG8JGStEFM8wCXPkpBedWjub6GyhfgPj8Vmr+22vCtih4NxnuBn77u+8hLFe9lEabFvLv3fc1U9koV86t5NcFKci/174K+jeEOBH/rK2LVhQ9rlEv1KilAKfPnoypzCnD1BBgG9wemVl0n4WbGbTRYqlJhjqqo08Be5GSqY0
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(46966006)(36840700001)(40470700004)(70586007)(70206006)(498600001)(4326008)(8676002)(356005)(83380400001)(82310400005)(316002)(6486002)(36860700001)(86362001)(186003)(53546011)(6506007)(336012)(26005)(81166007)(6512007)(5660300002)(40460700003)(54906003)(47076005)(2616005)(8936002)(36756003)(33656002)(6862004)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jun 2022 09:38:33.7047
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 12b8e209-f8ec-4a66-0d92-08da52a0a4b6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4286

Hi Mykyta,

> On 17 Jun 2022, at 12:15 pm, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
> 
>> Hi Mykyta,
>> 
>>> On 16 Jun 2022, at 8:48 am, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
>>> 
>>> Hi Julien, Rahul
>>> I've encountered a similar problem with IMX8 GPU recently. It wasn't probing
>>> properly after the domain reboot.  After some digging, I came to the same
>>> solution as Rahul and found this thread. I also encountered the occasional
>>> "Unexpected global fault, this could be serious" error message when destroying
>>> a domain with an actively-working GPU.
>>> 
>>>> Hmmmm.... Looking at the code, arm_smmu_alloc_smes() doesn't seem to use
>>>> the domain information. So why would it need to be done every time it is assigned?
>>> Indeed after removing the arm_smmu_master_free_smes() call, both reboot and global
>>> fault issues are gone. If I understand correctly, device removing is not yet
>>> supported, so I can't find a proper place for the arm_smmu_master_free_smes() call.
>>> Should we remove the function completely or just left it commented for later or
>>> something else?
>>> 
>>> Rahul, are you still working on this or could I send my patch?
>> 
>> Yes, I have this on my to-do list but I was busy with other work and it got delayed. 
>> 
>> I created another solution for this issue, in which we don’t need to call arm_smmu_master_free_smes() 
>> in arm_smmu_detach_dev() but we can configure the s2cr value to type fault in detach function.
>> 
>> Will call new function arm_smmu_domain_remove_master() in detach function that will revert the changes done 
>> by arm_smmu_domain_add_master()  in attach function.
>> 
>> I don’t have any board to test the patch. If it is okay, Could you please test the patch and let me know the result.
>> 
>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>> index 69511683b4..da3adf8e7f 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>> @@ -1598,21 +1598,6 @@ out_err:
>>        return ret;
>> }
>> 
>> -static void arm_smmu_master_free_smes(struct arm_smmu_master_cfg *cfg)
>> -{
>> -    struct arm_smmu_device *smmu = cfg->smmu;
>> -       int i, idx;
>> -       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
>> -
>> -       spin_lock(&smmu->stream_map_lock);
>> -       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
>> -               if (arm_smmu_free_sme(smmu, idx))
>> -                       arm_smmu_write_sme(smmu, idx);
>> -               cfg->smendx[i] = INVALID_SMENDX;
>> -       }
>> -       spin_unlock(&smmu->stream_map_lock);
>> -}
>> -
>> static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>>                                        struct arm_smmu_master_cfg *cfg)
>> {
>> @@ -1635,6 +1620,20 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>>        return 0;
>> }
>> 
>> +static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
>> +                                     struct arm_smmu_master_cfg *cfg)
>> +{
>> +       struct arm_smmu_device *smmu = smmu_domain->smmu;
>> +       struct arm_smmu_s2cr *s2cr = smmu->s2crs;
>> +       struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
>> +       int i, idx;
>> +
>> +       for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
>> +               s2cr[idx] = s2cr_init_val;
>> +               arm_smmu_write_s2cr(smmu, idx);
>> +       }
>> +}
>> +
>> static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> {
>>        int ret;
>> @@ -1684,10 +1683,11 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> 
>> static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
>> {
>> +       struct arm_smmu_domain *smmu_domain = domain->priv;
>>        struct arm_smmu_master_cfg *cfg = find_smmu_master_cfg(dev);
>> 
>>        if (cfg)
>> -               arm_smmu_master_free_smes(cfg);
>> +               return arm_smmu_domain_remove_master(smmu_domain, cfg);
>> 
>> }
>> 
>> Regards,
>> Rahul
> 
> Hello Rahul,
> 
> For me, this patch fixed the issue with the GPU not probing after domain reboot.

Thanks for testing the patch.
> But not fixed the "Unexpected Global fault" that occasionally happens when destroying
> the domain with an actively working GPU. Although, I am not sure if this issue
> is relevant here.

Can you please if possible share the more details and logs so that I can look if this issue is relevant here ?

Thanks in advance.

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:47:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352642.579463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E0U-0005Na-Ip; Mon, 20 Jun 2022 09:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352642.579463; Mon, 20 Jun 2022 09:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E0U-0005NT-FM; Mon, 20 Jun 2022 09:47:42 +0000
Received: by outflank-mailman (input) for mailman id 352642;
 Mon, 20 Jun 2022 09:47:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E0T-0005NN-Gn
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:47:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E0S-0000jN-Uz; Mon, 20 Jun 2022 09:47:40 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E0S-0000XW-Op; Mon, 20 Jun 2022 09:47:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/NnHuR2O7sfkCDEmvcgwPIix5vP58a2/KArTqE4jTPQ=; b=NzB/M27RkCVGsAMd8oNL7PVEFI
	EaMg5pf9mpbN+wEe+IMyXtD78LfArGYBJ7UVzOTWT8d+Ef2jKALQOyNJqTtqRhix+weFfY2a+82FA
	P81S4AmALm9/MbgQIxABrpe7kzS2450JjIJ0fZNRYF5m3E9a1P08Vc+gDfcfTd9yyLFs=;
Message-ID: <c6164f9a-cb00-2ad6-e831-b23a8e31a0e5@xen.org>
Date: Mon, 20 Jun 2022 10:47:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/9] xen/arm: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-2-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-2-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:48:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352649.579474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E0z-0005uN-Re; Mon, 20 Jun 2022 09:48:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352649.579474; Mon, 20 Jun 2022 09:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E0z-0005uG-O2; Mon, 20 Jun 2022 09:48:13 +0000
Received: by outflank-mailman (input) for mailman id 352649;
 Mon, 20 Jun 2022 09:48:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E0y-0005u2-9A
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:48:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E0x-0000k5-DN; Mon, 20 Jun 2022 09:48:11 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E0x-0000Z0-5R; Mon, 20 Jun 2022 09:48:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/NnHuR2O7sfkCDEmvcgwPIix5vP58a2/KArTqE4jTPQ=; b=qyGPtvf0TZIL7i3yVdiymMaL5d
	HCSaF8C2Ozk6ZmGRjPRJF/1V7NfmgVjw25q+VPYKDDgM43bc/4zoai/ZYApjrbO6Mr2IgMijJySxJ
	dWhkhN2GpuC6jJxpKfWNl4aNKe/TzBm8qEu9zo8l6DgTg/onT2M1uhNc5dpXYR0dAyI0=;
Message-ID: <36f952c4-be3d-87ce-dfb8-a7b68844e9f0@xen.org>
Date: Mon, 20 Jun 2022 10:48:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/9] xen/domain: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-3-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-3-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:49:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352656.579485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E26-0006Ws-4u; Mon, 20 Jun 2022 09:49:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352656.579485; Mon, 20 Jun 2022 09:49:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E26-0006Wl-1R; Mon, 20 Jun 2022 09:49:22 +0000
Received: by outflank-mailman (input) for mailman id 352656;
 Mon, 20 Jun 2022 09:49:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E24-0006WZ-0v
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:49:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E23-0000le-8c; Mon, 20 Jun 2022 09:49:19 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E23-0000eN-2u; Mon, 20 Jun 2022 09:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/NnHuR2O7sfkCDEmvcgwPIix5vP58a2/KArTqE4jTPQ=; b=hB6w45q58bEAL2JTSJ6BP0PMjx
	zSrmtVBZCbxqbZGiv3yubH9Z6qtmhWcpiCylHTjAmpAJskianP6fsxenW5Yq4D2K+L8xG3a7Jf9uZ
	CuCu9CUB61cGuNO07lqhmzGebeAvRIRRNvB5B/r7qAsCBxl8zohcDr1EvfVDpbCIesKY=;
Message-ID: <82c90795-375c-4581-c6c1-9e3c109b653b@xen.org>
Date: Mon, 20 Jun 2022 10:49:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 3/9] xen/common: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-4-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-4-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:51:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352665.579495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E4T-0007x5-H4; Mon, 20 Jun 2022 09:51:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352665.579495; Mon, 20 Jun 2022 09:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E4T-0007wy-EB; Mon, 20 Jun 2022 09:51:49 +0000
Received: by outflank-mailman (input) for mailman id 352665;
 Mon, 20 Jun 2022 09:51:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3E4S-0007ws-Cj
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:51:48 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9961d40c-f07e-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 11:51:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E77221F460;
 Mon, 20 Jun 2022 09:51:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 894D1134CA;
 Mon, 20 Jun 2022 09:51:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id t20nHzJDsGJ4UAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 09:51:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9961d40c-f07e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655718706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QbkHh2/30wlt3Y0QQgK/UbQvi2NWitoaQyMkZN4YfVs=;
	b=Lwjw+RzPBhVFZzAe20wTDUxNZ/24puZONM0Al7g3b4ka/723ZSrAbXeBlpZRp6g29Sb4BH
	BMbeNCGgrF9MxUGgKENiqfh13N1CwkU6ugrwTFa4Cw6yQJeNH+hnVz0HOAh0MxXZK++NfZ
	aIlXQ+M21erRA5nEiQE53qaq1Tuzh0Q=
Message-ID: <beed4899-6bd7-d670-5714-5b620ca6f4b4@suse.com>
Date: Mon, 20 Jun 2022 11:51:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 3/9] xen/common: Use explicitly specified types
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-4-michal.orzel@arm.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220620070245.77979-4-michal.orzel@arm.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------iWRBF21uVxW8pYD7wApW2Rt9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------iWRBF21uVxW8pYD7wApW2Rt9
Content-Type: multipart/mixed; boundary="------------WulNtazfeqdcIt3qbJxAZkha";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>
Message-ID: <beed4899-6bd7-d670-5714-5b620ca6f4b4@suse.com>
Subject: Re: [PATCH 3/9] xen/common: Use explicitly specified types
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-4-michal.orzel@arm.com>
In-Reply-To: <20220620070245.77979-4-michal.orzel@arm.com>

--------------WulNtazfeqdcIt3qbJxAZkha
Content-Type: multipart/mixed; boundary="------------3rtccMc6nj9n3nDzJ86WO9cy"

--------------3rtccMc6nj9n3nDzJ86WO9cy
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjAuMDYuMjIgMDk6MDIsIE1pY2hhbCBPcnplbCB3cm90ZToNCj4gQWNjb3JkaW5nIHRv
IE1JU1JBIEMgMjAxMiBSdWxlIDguMSwgdHlwZXMgc2hhbGwgYmUgZXhwbGljaXRseQ0KPiBz
cGVjaWZpZWQuIEZpeCBhbGwgdGhlIGZpbmRpbmdzIHJlcG9ydGVkIGJ5IGNwcGNoZWNrIHdp
dGggbWlzcmEgYWRkb24NCj4gYnkgc3Vic3RpdHV0aW5nIGltcGxpY2l0IHR5cGUgJ3Vuc2ln
bmVkJyB0byBleHBsaWNpdCAndW5zaWduZWQgaW50Jy4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6
IE1pY2hhbCBPcnplbCA8bWljaGFsLm9yemVsQGFybS5jb20+DQoNClJldmlld2VkLWJ5OiBK
dWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQoNCg0KSnVlcmdlbg0K
--------------3rtccMc6nj9n3nDzJ86WO9cy
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------3rtccMc6nj9n3nDzJ86WO9cy--

--------------WulNtazfeqdcIt3qbJxAZkha--

--------------iWRBF21uVxW8pYD7wApW2Rt9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKwQzIFAwAAAAAACgkQsN6d1ii/Ey/8
ZQf/WzUhVGb27qsrfpFqeclM1wYweP8keOWoV5tQlwBa5ZcbYO6vZqO84ER50objQAjdH1xV29T5
jVVqzTQrAyQDqJ52sRST1Nwe5s7D23ogxQuibUUkqDfGcfVgJn6PM5+y2YoVvr1V3QaPLTv9hfK2
GePZ9Ps/1ijy4V69xBVGODxgJqFZ9YHMUPLywaPgTiUUj6NB/FCnEe6uHHbDRQwBQmkI+nkpJwVm
buKCZySo/bkVXQ0amroSVBq2lHBFGorfiDnEb+rk/CmyAq6y2iC/UY4rq+4xsWydxmXYs+E+S9nz
qcOUs3yC0Y1GHkr7JHTm84K3PQK1a7vcejWb4FDLfg==
=Yb2Z
-----END PGP SIGNATURE-----

--------------iWRBF21uVxW8pYD7wApW2Rt9--


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:53:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:53:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352673.579507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E6A-00005m-Rq; Mon, 20 Jun 2022 09:53:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352673.579507; Mon, 20 Jun 2022 09:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E6A-00005e-Oy; Mon, 20 Jun 2022 09:53:34 +0000
Received: by outflank-mailman (input) for mailman id 352673;
 Mon, 20 Jun 2022 09:53:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E6A-00005Y-1d
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:53:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E69-0000qj-4c; Mon, 20 Jun 2022 09:53:33 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E68-0000iu-Uc; Mon, 20 Jun 2022 09:53:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=+k3EeC0vPzczHHm5EyhICqT6Mx/itTIa3WcU5mtwNeE=; b=HYcWbXo9l9zQSd1OUXAYWC3Jlk
	p1B3VP8dq5ry56G+K4f3MGoJZIko0qrNUYlN21otkkYllBzUA7CVhwEm/oD2FhTBEZeHQKjJgkLC5
	9EYEKQgj9moLqO9N043TyMsGff7DtgckZfi7EaTf3feD63ToWMsr/tEWf7A/mQCtxVXY=;
Message-ID: <af427a56-3023-ce84-af43-1a0a1961a805@xen.org>
Date: Mon, 20 Jun 2022 10:53:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 4/9] include/xen: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-5-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-5-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>   xen/include/xen/perfc.h | 2 +-
>   xen/include/xen/sched.h | 6 +++---
>   2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/xen/perfc.h b/xen/include/xen/perfc.h
> index bb010b0aae..7c5ce537bd 100644
> --- a/xen/include/xen/perfc.h
> +++ b/xen/include/xen/perfc.h
> @@ -49,7 +49,7 @@ enum perfcounter {
>   #undef PERFSTATUS
>   #undef PERFSTATUS_ARRAY
>   
> -typedef unsigned perfc_t;
> +typedef unsigned int perfc_t;
>   #define PRIperfc ""
>   
>   DECLARE_PER_CPU(perfc_t[NUM_PERFCOUNTERS], perfcounters);
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 463d41ffb6..d957b6e11f 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -518,9 +518,9 @@ struct domain
>   
>       /* hvm_print_line() and guest_console_write() logging. */
>   #define DOMAIN_PBUF_SIZE 200
> -    char       *pbuf;
> -    unsigned    pbuf_idx;
> -    spinlock_t  pbuf_lock;
> +    char         *pbuf;
> +    unsigned int  pbuf_idx;
> +    spinlock_t    pbuf_lock;

Looking at "struct domain", we don't often seem to align the field 
names. In fact, this is something I don't particularly like because it 
adds a lot of churn in a patch.

So my preference would be to just change the "unsigned" line.

Other than that:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:55:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:55:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352681.579517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E7a-0000gp-72; Mon, 20 Jun 2022 09:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352681.579517; Mon, 20 Jun 2022 09:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E7a-0000gi-3t; Mon, 20 Jun 2022 09:55:02 +0000
Received: by outflank-mailman (input) for mailman id 352681;
 Mon, 20 Jun 2022 09:55:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E7Y-0000gc-Ne
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:55:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E7X-0000s1-UI; Mon, 20 Jun 2022 09:54:59 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E7X-0000n6-OK; Mon, 20 Jun 2022 09:54:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EACjZ5nszWO2S7RNDVIetvYM8dzOo869XQxUcB9sUPg=; b=NcPscIHZBA+5KplLEy6it8wjuQ
	YBye6Of/hZoTSZFfqAGWE9LcucXX/k0EaMzBtW7du6Yjzyav554dpR8VIJorPa60vubRPHcB5a3v2
	RK8cV86eeIh/ht/7QGF8Sva+GxFoix78niJ2+6uRghMQRFuvPayH25lhMBexZ20xPxwQ=;
Message-ID: <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
Date: Mon, 20 Jun 2022 10:54:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-6-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Bump sysctl interface version.

The sysctl version should only be bumped if the ABI has changed. AFAICT 
switching from "unsigned" to "unsigned int" will not modify it, so I 
don't think this is necessary.

> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>   xen/include/public/physdev.h |  4 ++--
>   xen/include/public/sysctl.h  | 10 +++++-----
>   2 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
> index d271766ad0..a2ca0ee564 100644
> --- a/xen/include/public/physdev.h
> +++ b/xen/include/public/physdev.h
> @@ -211,8 +211,8 @@ struct physdev_manage_pci_ext {
>       /* IN */
>       uint8_t bus;
>       uint8_t devfn;
> -    unsigned is_extfn;
> -    unsigned is_virtfn;
> +    unsigned int is_extfn;
> +    unsigned int is_virtfn;
>       struct {
>           uint8_t bus;
>           uint8_t devfn;
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index b0a4af8789..a2a762fe46 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -35,7 +35,7 @@
>   #include "domctl.h"
>   #include "physdev.h"
>   
> -#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
> +#define XEN_SYSCTL_INTERFACE_VERSION 0x00000015
>   
>   /*
>    * Read console content from Xen buffer ring.
> @@ -644,18 +644,18 @@ struct xen_sysctl_credit_schedule {
>       /* Length of timeslice in milliseconds */
>   #define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
>   #define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
> -    unsigned tslice_ms;
> -    unsigned ratelimit_us;
> +    unsigned int tslice_ms;
> +    unsigned int ratelimit_us;
>       /*
>        * How long we consider a vCPU to be cache-hot on the
>        * CPU where it has run (max 100ms, in microseconds)
>       */
>   #define XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US (100 * 1000)
> -    unsigned vcpu_migr_delay_us;
> +    unsigned int vcpu_migr_delay_us;
>   };
>   
>   struct xen_sysctl_credit2_schedule {
> -    unsigned ratelimit_us;
> +    unsigned int ratelimit_us;
>   };
>   
>   /* XEN_SYSCTL_scheduler_op */

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 09:56:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 09:56:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352688.579528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E8y-0001IN-JO; Mon, 20 Jun 2022 09:56:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352688.579528; Mon, 20 Jun 2022 09:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3E8y-0001IG-Gm; Mon, 20 Jun 2022 09:56:28 +0000
Received: by outflank-mailman (input) for mailman id 352688;
 Mon, 20 Jun 2022 09:56:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3E8w-0001IA-J0
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 09:56:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E8w-0000vQ-A5; Mon, 20 Jun 2022 09:56:26 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[192.168.1.39])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3E8w-00010p-4W; Mon, 20 Jun 2022 09:56:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2lA5V1g0a1NFBGBwYDGatm/v+Kz7kq5qsGUYune1dK4=; b=1a4HttZ/8mGwpS+DCRivYzes7e
	QxY1DgrXzqpPybyO/w2WjWS910g5M2AhqAsxYfjtPVjhtDhCeIFgTPAfVBqNBmMiSfp40ggIogb8F
	KiB/wiHjX3mdMymTKUeNI+MRa8oQhEksJq/ReJjm/83MIwUavThDxP+gnsygVBxoyZhc=;
Message-ID: <9e835e64-0182-26d0-241b-0e7796940e35@xen.org>
Date: Mon, 20 Jun 2022 10:56:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 7/9] common/libfdt: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-8-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620070245.77979-8-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 20/06/2022 08:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with misra addon
> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
> This patch may not be applicable as these files come from libfdt.

The libfdt code is a verbatim copy of the one shipped in DTC. So any 
changes will have to be first accepted there and then the directory 
re-synced.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 10:07:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 10:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352700.579544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EJF-0002xk-RX; Mon, 20 Jun 2022 10:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352700.579544; Mon, 20 Jun 2022 10:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EJF-0002xd-Lv; Mon, 20 Jun 2022 10:07:05 +0000
Received: by outflank-mailman (input) for mailman id 352700;
 Mon, 20 Jun 2022 10:07:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EJE-0002xT-Tk; Mon, 20 Jun 2022 10:07:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EJE-0001C3-RA; Mon, 20 Jun 2022 10:07:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EJE-0002Rj-CE; Mon, 20 Jun 2022 10:07:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EJE-0004yx-Bg; Mon, 20 Jun 2022 10:07:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zPAN4VvxN1NLU10frN9G5X+qaaCTTq2DJxfQtP62+N8=; b=oUv2Q3yAKx6cnT61vgfgxdKjua
	S5sYmTWdctm2LEL6GM9lG595y4c7I4hXGNrs4BV1qEfqDPSrvQQEqPOCj3xNbDsdlXjeBKVZUTmyJ
	syE7T3zID1yHz4V1xlQsN3ZBPCBX277uN3Zx7PE0ekfFe131d8GhdPjbKkpudhjplclo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171285: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=812edc95a36b997d674ce4f3a56f4fd01f31904e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 10:07:04 +0000

flight 171285 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171285/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              812edc95a36b997d674ce4f3a56f4fd01f31904e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  710 days
Failing since        151818  2020-07-11 04:18:52 Z  709 days  691 attempts
Testing same since   171230  2022-06-17 04:18:59 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113770 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 10:07:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 10:07:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352707.579553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EK5-0003Tt-4N; Mon, 20 Jun 2022 10:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352707.579553; Mon, 20 Jun 2022 10:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EK5-0003Tm-1h; Mon, 20 Jun 2022 10:07:57 +0000
Received: by outflank-mailman (input) for mailman id 352707;
 Mon, 20 Jun 2022 10:07:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ook6=W3=citrix.com=prvs=16312ba74=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3EK3-0003Jj-Jm
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 10:07:55 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d86fa552-f080-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 12:07:53 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jun 2022 06:07:50 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6616.namprd03.prod.outlook.com (2603:10b6:a03:389::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Mon, 20 Jun
 2022 10:07:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.021; Mon, 20 Jun 2022
 10:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d86fa552-f080-11ec-bd2d-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 74405531
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="74405531"
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
Thread-Topic: [PATCH 5/9] include/public: Use explicitly specified types
Thread-Index: AQHYhHPXhH2gjwLhME6AOejjpMzR0a1YDrCAgAADlwA=
Date: Mon, 20 Jun 2022 10:07:49 +0000
Message-ID: <72e0814f-bae5-ee08-a0bf-ffbe64e42886@citrix.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
 <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
In-Reply-To: <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 82a3b933-0fe9-4ac7-2de0-08da52a4bb11
x-ms-traffictypediagnostic: SJ0PR03MB6616:EE_
x-microsoft-antispam-prvs:
 <SJ0PR03MB6616287B4B4CBCF176697648BAB09@SJ0PR03MB6616.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8DFE223740ED46409A2EEA5B9D1DA5C7@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 82a3b933-0fe9-4ac7-2de0-08da52a4bb11
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jun 2022 10:07:49.2769
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: r3IB6Inu5OABE2QWPdsDIZOoCYlb8i9tXjsBDw+8OJe3c82qQMQ512lkC0xks6aiL1kdxnwUXsRkPEfezZe1qWEsHqLM5TA6zQB5i1WpkU4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6616

On 20/06/2022 10:54, Julien Grall wrote:
> Hi Michal,
>
> On 20/06/2022 08:02, Michal Orzel wrote:
>> According to MISRA C 2012 Rule 8.1, types shall be explicitly
>> specified. Fix all the findings reported by cppcheck with misra addon
>> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
>>
>> Bump sysctl interface version.
>
> The sysctl version should only be bumped if the ABI has changed.
> AFAICT switching from "unsigned" to "unsigned" will not modify it, so
> I don't think this is necessary.

Yeah. No need to bump the interface version for this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 10:14:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 10:14:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352719.579565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EQL-00053C-0Y; Mon, 20 Jun 2022 10:14:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352719.579565; Mon, 20 Jun 2022 10:14:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3EQK-000535-T5; Mon, 20 Jun 2022 10:14:24 +0000
Received: by outflank-mailman (input) for mailman id 352719;
 Mon, 20 Jun 2022 10:14:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EQJ-00052v-4F; Mon, 20 Jun 2022 10:14:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EQJ-0001KP-1e; Mon, 20 Jun 2022 10:14:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EQI-0002n7-LX; Mon, 20 Jun 2022 10:14:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3EQI-0004jw-L4; Mon, 20 Jun 2022 10:14:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171283: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
X-Osstest-Versions-That:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 10:14:22 +0000

flight 171283 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171283/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot           fail in 171276 pass in 171283
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 171276

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 171276 like 171267
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171276
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171276
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171276
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171276
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171276
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171276
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171276
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171276
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171276
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171276
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171276
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171276
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171283  2022-06-20 01:52:00 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 10:26:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 10:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352729.579576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Ebq-0006ZP-3o; Mon, 20 Jun 2022 10:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352729.579576; Mon, 20 Jun 2022 10:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Ebq-0006ZI-0o; Mon, 20 Jun 2022 10:26:18 +0000
Received: by outflank-mailman (input) for mailman id 352729;
 Mon, 20 Jun 2022 10:26:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hcs/=W3=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1o3Ebj-0006XX-F5
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 10:26:16 +0000
Received: from mail.skyhub.de (mail.skyhub.de [2a01:4f8:190:11c2::b:1457])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61c1a98c-f083-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 12:26:02 +0200 (CEST)
Received: from zn.tnic (p200300ea974657f0329c23fffea6a903.dip0.t-ipconnect.de
 [IPv6:2003:ea:9746:57f0:329c:23ff:fea6:a903])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id D9D7D1EC05ED;
 Mon, 20 Jun 2022 12:26:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61c1a98c-f083-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1655720760;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=q17a9XDpjwTWrwYDkkTaQRPmMqGL4q0bN7aju6sX8fc=;
	b=ot3uDji6ieMC7dhTqMOPLpRfYZ+D/oOGVFqRzqgKnfa0kcNpM/bfgTF0HcgcUwm1ytR3UF
	ZcxLVAfRbqvIk7ehY+JDCg/3PXcH03o7yOuj7hy3MWW6UBKZz+alWlDdoJiftWvYpq7DOu
	W9CoStDuS1uK4u2ui87Jeq2OESCWxlw=
Date: Mon, 20 Jun 2022 12:26:27 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, jbeulich@suse.com,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Message-ID: <YrBLU2C5cJoalnax@zn.tnic>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220503132207.17234-2-jgross@suse.com>

On Tue, May 03, 2022 at 03:22:06PM +0200, Juergen Gross wrote:
> x86_has_pat_wp() is using a wrong test, as it relies on the normal
> PAT configuration used by the kernel. In case the PAT MSR has been
> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
> false even if the PAT configuration is allowing WP mappings.
> 
> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/mm/init.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index d8cfce221275..71e182ebced3 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>  /* Check that the write-protect PAT entry is set for write-protect */
>  bool x86_has_pat_wp(void)
>  {
> -	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
> +	return __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
> +	       _PAGE_CACHE_MODE_WP;

So this code always makes my head spin... especially after vacation but
lemme take a stab:

__pte2cachemode_tbl indices are of type enum page_cache_mode.

What you've done is index with

__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]

which gives uint16_t.

So, if at all, this should do __pte2cm_idx(_PAGE_CACHE_MODE_WP) to index
into it.

But I'm still unclear on the big picture. Looking at Jan's explanation,
there's something about PAT init being skipped due to MTRRs not being
emulated by Xen.... or something to that effect.

So if that's the case, the Xen guest code should init PAT in its own
way, so that the generic code works with this without doing hacks.

But I'm only guessing - this needs a *lot* more elaboration and
explanation why exactly this is needed.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 10:41:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 10:41:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352737.579587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Eqs-0000U5-Gl; Mon, 20 Jun 2022 10:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352737.579587; Mon, 20 Jun 2022 10:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Eqs-0000Ty-CU; Mon, 20 Jun 2022 10:41:50 +0000
Received: by outflank-mailman (input) for mailman id 352737;
 Mon, 20 Jun 2022 10:41:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Eqq-0000Ts-8f
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 10:41:48 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95529117-f085-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 12:41:46 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B480E1FA76;
 Mon, 20 Jun 2022 10:41:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 611B813638;
 Mon, 20 Jun 2022 10:41:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id DbBOFupOsGIxaQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 10:41:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95529117-f085-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655721706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7w41IQsGsxN+8vimErJxT8zUTzeKMC0ET7aZWma418Y=;
	b=NwE3qmXMlXJgI3tmRrz/2SgugzzEqUwikHHh5ypDI1Csjezmrc0Ru8upXPYIunvXBBKcor
	GT5GoXo5NYfsD/l7lfj/CJ9NOWHamfp78xYdZ1CxoS0vzBdTTxrYxMaTcLaSLMwOdSLISg
	ofTs3BoAYofv1aIpiQYZPlo+eUp2vV0=
Message-ID: <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
Date: Mon, 20 Jun 2022 12:41:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, jbeulich@suse.com,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com> <YrBLU2C5cJoalnax@zn.tnic>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
In-Reply-To: <YrBLU2C5cJoalnax@zn.tnic>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------d8k0hMH5UzQW082oVB8YUPtP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------d8k0hMH5UzQW082oVB8YUPtP
Content-Type: multipart/mixed; boundary="------------BSsqO8n9CO6URm6ibKlIXMtW";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, jbeulich@suse.com,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com> <YrBLU2C5cJoalnax@zn.tnic>
In-Reply-To: <YrBLU2C5cJoalnax@zn.tnic>

--------------BSsqO8n9CO6URm6ibKlIXMtW
Content-Type: multipart/mixed; boundary="------------00h0XzlSR6lx9AL9hG09U3QX"

--------------00h0XzlSR6lx9AL9hG09U3QX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.06.22 12:26, Borislav Petkov wrote:
> On Tue, May 03, 2022 at 03:22:06PM +0200, Juergen Gross wrote:
>> x86_has_pat_wp() is using a wrong test, as it relies on the normal
>> PAT configuration used by the kernel. In case the PAT MSR has been
>> setup by another entity (e.g. BIOS or Xen hypervisor) it might return
>> false even if the PAT configuration is allowing WP mappings.
>>
>> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   arch/x86/mm/init.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index d8cfce221275..71e182ebced3 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -80,7 +80,8 @@ static uint8_t __pte2cachemode_tbl[8] = {
>>   /* Check that the write-protect PAT entry is set for write-protect */
>>   bool x86_has_pat_wp(void)
>>   {
>> -	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
>> +	return __pte2cachemode_tbl[__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]] ==
>> +	       _PAGE_CACHE_MODE_WP;
> 
> So this code always makes my head spin... especially after vacation but
> lemme take a stab:
> 
> __pte2cachemode_tbl indices are of type enum page_cache_mode.

Yes.

> What you've done is index with
> 
> __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]
> 
> which gives uint16_t.
> 
> So, if at all, this should do __pte2cm_idx(_PAGE_CACHE_MODE_WP) to index
> into it.

Oh, you are partially right.

It should be __pte2cm_idx(__cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]).

> But I'm still unclear on the big picture. Looking at Jan's explanation,
> there's something about PAT init being skipped due to MTRRs not being
> emulated by Xen.... or something to that effect.

PAT init is being skipped for Xen PV guests, as those can't write the
PAT MSR. They need to cope with the setting the hypervisor has done
(which contains all caching modes, but in a different layout than the
kernel is using normally).

> So if that's the case, the Xen guest code should init PAT in its own
> way, so that the generic code works with this without doing hacks.

Depends on what you mean with "init PAT". If you mean to write the
PAT MSR, then no, this won't work. If you mean to setup the translation
arrays __cachemode2pte_tbl[] and __pte2cachemode_tbl[], then yes, this
is already done.

My patch is only fixing the wrong way querying for WP being supported.

> But I'm only guessing - this needs a *lot* more elaboration and
> explanation why exactly this is needed.

I will correct the code and update the commit message.


Juergen
--------------00h0XzlSR6lx9AL9hG09U3QX--

--------------BSsqO8n9CO6URm6ibKlIXMtW--

--------------d8k0hMH5UzQW082oVB8YUPtP--


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 11:35:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 11:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352751.579607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Fg7-0005lQ-0O; Mon, 20 Jun 2022 11:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352751.579607; Mon, 20 Jun 2022 11:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Fg6-0005lJ-TH; Mon, 20 Jun 2022 11:34:46 +0000
Received: by outflank-mailman (input) for mailman id 352751;
 Mon, 20 Jun 2022 11:34:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Fg5-0005lD-I8
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 11:34:45 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8adfc35-f08c-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 13:34:40 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id ED8AC21BB8;
 Mon, 20 Jun 2022 11:34:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A1FA1134CA;
 Mon, 20 Jun 2022 11:34:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id uKlHJlNbsGJ7BQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 11:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8adfc35-f08c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655724883; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=sFZCDHHdeMfba3WZMR9ZmHr8MgDiELflME+wwT8OZfM=;
	b=KsbBYHBwqPcIaOsBTaoPBqYic3Z0buj6zNzYSIesoD5Tt9+JchJYZcz+7aT63ca7qOBtrc
	MGYys/MoKrmKsaDyr4QD0UbFEZsNB6R1qVrKaVq/bFjExnxdb6kTYcTP1J7usIhLwUfk19
	0cX4iAibdLH1tP4Ppx0TQ6miO2cJHCk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2] x86/pat: fix x86_has_pat_wp()
Date: Mon, 20 Jun 2022 13:34:41 +0200
Message-Id: <20220620113441.23961-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

x86_has_pat_wp() uses the wrong test, as it relies on the kernel's
normal PAT configuration. In case the PAT MSR has been set up by
another entity (e.g. the BIOS or the Xen hypervisor), it might return
false even if the PAT configuration allows WP mappings.

The correct way to test for WP support is:

1. Get the PTE protection bits needed to select WP mode by reading
   __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP] (depending on the PAT MSR
   setting this might return protection bits for a stronger mode, e.g.
   UC-)
2. Translate those bits back into the real cache mode selected by those
   PTE bits by reading __pte2cachemode_tbl[__pte2cm_idx(prot)]
3. Test whether the resulting cache mode is _PAGE_CACHE_MODE_WP
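
The round-trip above can be sketched in a self-contained way with mock
tables. Note these are NOT the kernel's real __cachemode2pte_tbl /
__pte2cachemode_tbl contents, and the real code derives the table index
from PTE bits via __pte2cm_idx(); this model only illustrates why
indexing __pte2cachemode_tbl directly with a cache-mode value gives a
wrong answer once firmware has programmed a non-default PAT layout:

```c
/*
 * Minimal sketch of the lookup round-trip with MOCK table contents.
 * The kernel's real tables and __pte2cm_idx() differ; this only
 * models the indexing logic.
 */
#include <stdint.h>

enum page_cache_mode {
    _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC, _PAGE_CACHE_MODE_UC_MINUS,
    _PAGE_CACHE_MODE_UC, _PAGE_CACHE_MODE_WT, _PAGE_CACHE_MODE_WP,
    _PAGE_CACHE_MODE_NUM
};

/* Mock firmware PAT layout: WP lives at PTE cache-bit index 6, while
 * PTE index 5 (the numeric value of _PAGE_CACHE_MODE_WP) maps to UC. */
static const uint8_t cachemode2pte_idx[_PAGE_CACHE_MODE_NUM] = {
    [_PAGE_CACHE_MODE_WB]       = 0,
    [_PAGE_CACHE_MODE_WC]       = 1,
    [_PAGE_CACHE_MODE_UC_MINUS] = 2,
    [_PAGE_CACHE_MODE_UC]       = 3,
    [_PAGE_CACHE_MODE_WT]       = 4,
    [_PAGE_CACHE_MODE_WP]       = 6,
};

static const uint8_t pte2cachemode[8] = {
    [0] = _PAGE_CACHE_MODE_WB,
    [1] = _PAGE_CACHE_MODE_WC,
    [2] = _PAGE_CACHE_MODE_UC_MINUS,
    [3] = _PAGE_CACHE_MODE_UC,
    [4] = _PAGE_CACHE_MODE_WT,
    [5] = _PAGE_CACHE_MODE_UC,   /* index 5 is NOT WP in this layout */
    [6] = _PAGE_CACHE_MODE_WP,
    [7] = _PAGE_CACHE_MODE_UC,
};

/* Buggy test: indexes the table with a cache-mode value, not PTE bits. */
static int has_pat_wp_old(void)
{
    return pte2cachemode[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
}

/* Fixed test: mode -> PTE cache bits -> mode, then compare. */
static int has_pat_wp_new(void)
{
    uint8_t idx = cachemode2pte_idx[_PAGE_CACHE_MODE_WP];

    return pte2cachemode[idx] == _PAGE_CACHE_MODE_WP;
}
```

With this mock layout the old test returns a false negative (0) even
though WP mappings are available, while the round-trip test returns 1.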

Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- fix indexing into __pte2cachemode_tbl[]
---
 arch/x86/mm/init.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d8cfce221275..914ac5f29fca 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -80,7 +80,9 @@ static uint8_t __pte2cachemode_tbl[8] = {
 /* Check that the write-protect PAT entry is set for write-protect */
 bool x86_has_pat_wp(void)
 {
-	return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
+	uint16_t prot = __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP];
+
+	return __pte2cachemode_tbl[__pte2cm_idx(prot)] == _PAGE_CACHE_MODE_WP;
 }
 
 enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 12:36:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 12:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352762.579618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gda-0003U3-PU; Mon, 20 Jun 2022 12:36:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352762.579618; Mon, 20 Jun 2022 12:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gda-0003Tw-Kx; Mon, 20 Jun 2022 12:36:14 +0000
Received: by outflank-mailman (input) for mailman id 352762;
 Mon, 20 Jun 2022 12:36:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ruGu=W3=citrix.com=prvs=1630bf34c=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o3GdY-0003SO-Mm
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 12:36:13 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8eddac4c-f095-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 14:36:09 +0200 (CEST)
Received: from mail-bn1nam07lp2045.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jun 2022 08:35:46 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5664.namprd03.prod.outlook.com (2603:10b6:a03:28f::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Mon, 20 Jun
 2022 12:35:44 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Mon, 20 Jun 2022
 12:35:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8eddac4c-f095-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655728569;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=RF9cLs8rPB7KP1minkLNrdTlLRnKkcWcJkUGY/AVWGo=;
  b=OqIIX7BSQ6if2yx+mZ1z+Sa9rqVcrbSCg6K+oiGDx2ooyIMAXE8IWCcy
   ybORZXcvebkngJDRQ4ynf+EKv/ETW17Q/7jayOKvkZlPMS7Nbe2mRiKoG
   hEw/KhWsNDh7vpZkrNRUlA7QuGvqM5jUNe2mtDxBFhCXa/EWutlN8W+od
   c=;
X-IronPort-RemoteIP: 104.47.51.45
X-IronPort-MID: 73332866
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,306,1650945600"; 
   d="scan'208";a="73332866"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=edw1k4evUgB/+3rZjxrHSMSaOD6wBirbfHbGxpL5nzTylRBdclrnpmYMXXrZoXkoJE646+DpBGjaAjnfVw1bdex2wvUZu87wkCDNri2SX38I7w29uEHa4gr8EF0uWI9I/dsNAQInI7cgUHC+IKisor7pP6DdZb+OFc8i0CLKt7AWjAGSNOhLU2bjkCkVoM73q/056QJlyi589JajWHoydPyuYAPidTON9rYWrueGTdWbRNaCnuHl+LcXnEl617iHob7o7jGWIr4KSSPymkulk2jpRycWpqz4xOVPIXHpuZjxLg3A65tO27OTdwvMPcKBJFK3dgy4cRQurxUsVpPt+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zqzsEmV6kSo5TqA5YhqsX2vWi4HCPrafXmwpwSoqiJk=;
 b=TKE3FFJURag25E121PbRbsClE+rqLojZQ8URMYmqdiunqP2opsCrkE+KknrmI1JIVhf2b4zSjjiFdmwmhWkjNmRnlhq95jlNdtDgBLdd0oGe1FOH9ieJyZ9SM3Um0C9IezJhPdfkin2p23aDp1ZYbBgG71njopcs9KNt/XhuI/oAgdOT5RRWcsNyYa9suKE3BQLkVExwU/nMXUdS24QrfZwJ4XIUYyjU9dCCr2EUEBSX6awwHs6FnW1d/jUb37jz+zJonSkzji6BhbtP0JO8Qvz5dWN3J4NE+xvGBVezL0Ohk7sNAq3mEvfaGD5rLGUBE93fgLE8qEnKQPmBYz5P+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zqzsEmV6kSo5TqA5YhqsX2vWi4HCPrafXmwpwSoqiJk=;
 b=XsivyPnkjxU4xSEGFjcFyI6ESVKeRPh/ILG2ymQJagHRkunNsfljIvJr9Mw9SJ/mVGSW/Axx/SnVe0Fe91WnTf8RtyLRuIGdTu25dh0fdvkNqBNsaocnYM8cPoKal0/gB3tiKo8/n1WSviApFQZqAc/jNFVwNppwqA7Wz4dzzJU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] tools/include: drop leading underscore from xen_list header
Date: Mon, 20 Jun 2022 14:35:00 +0200
Message-Id: <20220620123500.2866-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP123CA0012.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 708469f1-15a3-4c48-988f-08da52b96523
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5664:EE_
X-Microsoft-Antispam-PRVS:
	<SJ0PR03MB566475AA45992B95445D7D8B8FB09@SJ0PR03MB5664.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	eyfxsx/1dcnybseddKa/LNVBO+wtbRk2wIIY9u7jacgm5TEr6qwqxHZVyGN4Gck7lFv3+Ocw94+9QnsrzZPwsn8T1P773W8EmhdndHDWdTjqmJwnLQh3yLi/YJCEdeLB2xxfPkm53FUvsX1qrtE96Jxrijry2yLutzD2FJYQ9X23LGyJPpoJVz2nraeSAaoGbveeTZ5mDupmvkSP+QrkpilapMvb/+0iadFWDHkecm20NipxLNq3YPt3UXlijtXRD/HKbIPN1FpYF+4VDbSD14zeWl+MIXe4bjUH9hvcyVXKlAf5xtN5Tq45jGaVrsylXjE7qRuUhpReU3KrSWsXgZ+rfUznfqSwL/LZP476RHqlrj74yfXA56I0SQ24mlDxUpbCdO9DJ4WSqW++EhYgvji2FBIy0QwPgrRbltNiQD/Q+hAZaFb4W0BCLTRb8Ung7dYeh8MARXH5hiibUUZFwRsGTtMwQWXZ9J/aCbLsbVL2cPklrVrXMOtIgnCQRWOaRVHVVHCh+ZZ/a3pLHUBWkzJLb7GHHiEGHozPddfaskfj+8nkqr3zG2M/sFykBYcazA9VnEugwDIXZ5dbtPmVhzfjQzAPgRdNZzMMDzndU2OOmIc7U5t1KbG5Y6zVinRpW5wmKZud66aVfHtkbPe+VQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(376002)(136003)(396003)(346002)(39860400002)(8936002)(5660300002)(478600001)(36756003)(82960400001)(2616005)(1076003)(186003)(41300700001)(83380400001)(2906002)(26005)(6512007)(66476007)(66946007)(86362001)(66556008)(54906003)(8676002)(4326008)(6666004)(38100700002)(6506007)(316002)(6486002)(6916009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?R3FGT2tOVnArRXNhaHZjRnVIOGtRY1hZd1FhT2pNTEpJT2tseldYNDUzSmc5?=
 =?utf-8?B?dGdWY0FYaEhTc1VjbysxSzBtNWMwQ0VYU0tGWkxzYXpqaUo2ZloyVEhvdHBU?=
 =?utf-8?B?azFlbDBENEkzeGhPSElZaVF6MTVENWFlZ0g4ZXk1YXRBZ2xQNVNhMUh4MFVr?=
 =?utf-8?B?YWwybTd0dEhFOEFpOVdiMk52SThjSUI0S2J0eVhQRzR0UE9WNGtxM1RNdmdq?=
 =?utf-8?B?QzlxQVNoajRoMDhCTzEza2xJbnVPekljL2JrVTdMZUlaMXZCZW1FK1ZGbGtm?=
 =?utf-8?B?ak9kNllNZFgzMHBGWjBhZ2o5bWc1TkJERlMxSmh5Q01oSmxGVTU4a1JualQy?=
 =?utf-8?B?MkZWQzRWd0MwZEFqQkhsc2RVZXVRNGhkamdkTEFwTUYxR2wxc0dwKzY4RDVS?=
 =?utf-8?B?enRQYXR0QkpjbFNJY2l1RFhlRlBEMlJ4QVl4U1puRWZRcnhZSzE4Y3VyVXBV?=
 =?utf-8?B?TzlYQnNwODZNcGdnb1owUklNSm1KSGhXcWFqbVRzc1d1YWgrNkZSTE13blNN?=
 =?utf-8?B?enpiU243ZS80dXdwV0svRERUSUpqQ050dVdQdXBQcjN0RklxdUNUaFVGNDlu?=
 =?utf-8?B?YlFDd2pFMlR1ZTMyYm91MXFnM29xU21PbFNDUUU4RnRyQjk5SktnUTRBMGpM?=
 =?utf-8?B?WitUdVkvVlMwSitVZzRBalMvaDcvaHFDeGoyb0dOTHM1WEs5aEdab3BFNSs2?=
 =?utf-8?B?OEtoM2lQcXp1Q3BWOGphd0JxWlVtQXllSUNBcnRrb0lqQWtvSVRuZkd0N3E0?=
 =?utf-8?B?RDVIRnpxM21iakJmemE1WlpHRmtFbGJEUEpGbGlnVmhrcVpHRHVvUHJzdkh0?=
 =?utf-8?B?bWlnRlg5UGF3YXFDTjFoNytRT25tZjZrK09ISjAyN2VENUt1Wks3ZzFqd3Z5?=
 =?utf-8?B?d3UzRzhXYjdiWUpwa2l4aUZxM3VjelJERmJKTTZlZERHckpHc1VXTDJPYStS?=
 =?utf-8?B?T0thSm81Q2JPZUpDdnFrVUZXSnV5eFFTaEdVWGRuOVcxTnY0RE5vWkc2Sjdt?=
 =?utf-8?B?eGNGMFd4RHRoVjdhTlkwL21YNlVSOTNpdjhTZUlLdWtKMVJ2MUpnaW9QYmRi?=
 =?utf-8?B?c2tVZE1ybVhDTzlxSkh3VWE1Z1ZzZ2lGekJhZlNnU3ZxYko1VWRRbjkzSEYy?=
 =?utf-8?B?cG1PeHI0Y0tCaFJ2ZTBjRWN5MEdKdFRPVjFpdWFKY3NwZnJoSHViVFpVWFpU?=
 =?utf-8?B?TzNGUnQ1K1V3c01TUHpNd25md2l3d0JybTlmUUpVV1Z6dVNxV2paa2EwdFU4?=
 =?utf-8?B?U3dqOXBhZ25nQW1uTGVEc1RXYm9LckZoZXA3N1FYVGxWZHROa1QxWG9BTzg4?=
 =?utf-8?B?MHExd0QzNTFiU3FSZmpBTmhlQjBIY2pHRWM3d1kzeGF1VjZjdU9qd3RoQWFx?=
 =?utf-8?B?NlY0RlJGaDV5VCs1aEdETVZlRHBjNktFLzRleVFMSWlycjl4OC94VXVDektq?=
 =?utf-8?B?VVFRZ08zWDlhaHRhR1ZpUkFlZ3U5eGxQZlg2NjBtUXo4alFkai95WHVYWjBT?=
 =?utf-8?B?ZkUwVHFnU0VEUGRNSTNmd1VnMTJzaEVpMHFSM1NRaktGK2ViYzhna1ZNTFlL?=
 =?utf-8?B?TFQwQmxyZlZSYTdiVXlYOU5jTnFxbU1Sa3ArMGhraG5tQ21YRFl2Y3F2T1lJ?=
 =?utf-8?B?SW5yWlRJQmpUcjAvejJpK1hEdnRjOFRiYm80ZldTakEzTlNIdGdtVEdIN2xM?=
 =?utf-8?B?bEJzZGVaM1BnVGo3U2dSK1g0dGtDYnFNOGpBdGJ3MXpIOW8xeGd0TkJhYXRM?=
 =?utf-8?B?RklWTkxxWGlVR2R2TjRsTUFabzNxR0dKb0x2ODZMSklhT0sxZVAzSk9IMkdG?=
 =?utf-8?B?TEdLeitNUWFnT1ZNUjhUdWVFNkhXMERVTTlvcEpzKzluL3ViVmxZRVNkb0Ra?=
 =?utf-8?B?Y3E2UUFGYjJteldsRGFQZkkrak5pYlZiS2w3MWd4eDE3WmpuK3l4RVJ0SGdy?=
 =?utf-8?B?c0tYRFF4RmdjSlBEZXF0TXFNcXF6aTVmR1FSdXAyL1ZuNW1hdHF6ZEZPeFJl?=
 =?utf-8?B?b2Rqa2c4NThya3lMNC9ZUmdGbzFEZXliOEVvYjdOZjViR21FV2tkb21sZ0Zj?=
 =?utf-8?B?UE1IaFpUTTlUMFpjdGlMeXlzRHd5N3h6cW41TFFUSlBJZlBBYjVlc3dkNGYz?=
 =?utf-8?B?M1B3VTF1bU5ROGVldkJ6OE5RM0JidHhBUnJPUnhWY0dZMlVaaVVwUGNBamZ0?=
 =?utf-8?B?YUxHREhQY2JFN0cyNkNkeHQrLzBrSkE2RG5pSjNSSTRGVWhzWnh1NVlpalBx?=
 =?utf-8?B?ZnJpbEMwTVZaREhPZlZXS1k0cDJlc1R1T1l5enMvKzBtbEV1KzBaNEdOWUFV?=
 =?utf-8?B?RDhBZDdKQXcyKzRvdlpkOFE0S2tkbmZ0V3BBSEtHdWZPWklUeTBEdz09?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 708469f1-15a3-4c48-988f-08da52b96523
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jun 2022 12:35:44.7948
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QQ1xEmLRdLWH7is87ubUx1SnRQHiBBa4aSQOyVtZCs75dRoQYF7mCjC9oZWIbCOPda0+Qz1lxyjJheC+h+l4yg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5664

A leading underscore is used to indicate auto-generated headers, and
the clean rule's 'rm -f _*.h' will remove those.  _xen_list.h also
uses a leading underscore, but is checked into the repo and as such
must not be removed by the clean rule.

Fix this by dropping the leading underscore, so that the header is not
removed.
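
The hazard can be reproduced in a throwaway directory (the generated
header names below are hypothetical stand-ins; only _xen_list.h
corresponds to the real checked-in header):

```shell
# Sketch of the clean-rule hazard, in a scratch directory.
set -e
dir=$(mktemp -d)
cd "$dir"
touch _paths.h _hypothetical_gen.h   # stand-ins for auto-generated headers
touch _xen_list.h                    # checked in -- should survive a clean
rm -f _*.h                           # the clean rule's glob
test ! -e _xen_list.h && echo "_xen_list.h was deleted by the glob"
```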

Fixes: a03b3552d4 ('libs,tools/include: Clean "clean" targets')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/include/Makefile                    | 4 ++--
 tools/include/libxl.h                     | 2 +-
 tools/include/{_xen_list.h => xen_list.h} | 0
 tools/include/xentoolcore_internal.h      | 2 +-
 tools/libs/evtchn/minios.c                | 2 +-
 tools/libs/light/libxl_qmp.c              | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)
 rename tools/include/{_xen_list.h => xen_list.h} (100%)

diff --git a/tools/include/Makefile b/tools/include/Makefile
index 3a03a0b0fa..b488f7ca9f 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -70,13 +70,13 @@ install: all
 	$(INSTALL_DATA) xen/io/*.h $(DESTDIR)$(includedir)/xen/io
 	$(INSTALL_DATA) xen/sys/*.h $(DESTDIR)$(includedir)/xen/sys
 	$(INSTALL_DATA) xen/xsm/*.h $(DESTDIR)$(includedir)/xen/xsm
-	$(INSTALL_DATA) _xen_list.h $(DESTDIR)$(includedir)
+	$(INSTALL_DATA) xen_list.h $(DESTDIR)$(includedir)
 
 .PHONY: uninstall
 uninstall:
 	echo "[FIXME] uninstall headers"
 	rm -rf $(DESTDIR)$(includedir)/xen
-	rm -f $(DESTDIR)$(includedir)/_xen_list.h
+	rm -f $(DESTDIR)$(includedir)/xen_list.h
 
 .PHONY: clean
 clean:
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 51a9b6cfac..7ce978e83c 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -747,7 +747,7 @@
 typedef struct libxl__ctx libxl_ctx;
 
 #include <libxl_uuid.h>
-#include <_xen_list.h>
+#include <xen_list.h>
 
 /* API compatibility. */
 #ifdef LIBXL_API_VERSION
diff --git a/tools/include/_xen_list.h b/tools/include/xen_list.h
similarity index 100%
rename from tools/include/_xen_list.h
rename to tools/include/xen_list.h
diff --git a/tools/include/xentoolcore_internal.h b/tools/include/xentoolcore_internal.h
index deccefd612..1be014434d 100644
--- a/tools/include/xentoolcore_internal.h
+++ b/tools/include/xentoolcore_internal.h
@@ -27,7 +27,7 @@
 #include <stddef.h>
 
 #include "xentoolcore.h"
-#include "_xen_list.h"
+#include "xen_list.h"
 
 /*---------- active handle registration ----------*/
 
diff --git a/tools/libs/evtchn/minios.c b/tools/libs/evtchn/minios.c
index 8ff46de884..28743cb055 100644
--- a/tools/libs/evtchn/minios.c
+++ b/tools/libs/evtchn/minios.c
@@ -20,7 +20,7 @@
  * Split off from xc_minios.c
  */
 
-#include "_xen_list.h"
+#include "xen_list.h"
 #include <mini-os/types.h>
 #include <mini-os/os.h>
 #include <mini-os/lib.h>
diff --git a/tools/libs/light/libxl_qmp.c b/tools/libs/light/libxl_qmp.c
index 8faa102e4d..6b0cd607d8 100644
--- a/tools/libs/light/libxl_qmp.c
+++ b/tools/libs/light/libxl_qmp.c
@@ -63,7 +63,7 @@
 
 #include <yajl/yajl_gen.h>
 
-#include "_xen_list.h"
+#include "xen_list.h"
 #include "libxl_internal.h"
 
 /* #define DEBUG_RECEIVED */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 12:39:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 12:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352770.579629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gh4-000462-7M; Mon, 20 Jun 2022 12:39:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352770.579629; Mon, 20 Jun 2022 12:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gh4-00045v-4a; Mon, 20 Jun 2022 12:39:50 +0000
Received: by outflank-mailman (input) for mailman id 352770;
 Mon, 20 Jun 2022 12:39:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Gh3-00045n-LZ
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 12:39:49 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 126dd534-f096-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 14:39:48 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 53D981F45E;
 Mon, 20 Jun 2022 12:39:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 287FD13638;
 Mon, 20 Jun 2022 12:39:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OFbvB5RqsGKhJgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 12:39:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 126dd534-f096-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655728788; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xWm9u2b7s7Y01xhLGRZVFcpr/MnMNYQkh0+QD1f4oCM=;
	b=T4Lr0B4YyNAJNMssrX0MlgPDK2D6OfPk9opbHxxboWKaGvVaCCz4e67tqOPDKc5dXgMY1C
	IqJziW7812B5S6QNB4Bljg/4Dok/jspPzCpMbQMuV/UkZhrK4oiRI7lKhQ6lr7etW16oDh
	1XBE8IgIf/w6NqRWg1bcHLQAHabYk2E=
Message-ID: <5038c9b8-272d-e39e-f8af-67455d53458c@suse.com>
Date: Mon, 20 Jun 2022 14:39:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] tools/include: drop leading underscore from xen_list
 header
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20220620123500.2866-1-roger.pau@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220620123500.2866-1-roger.pau@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------95mj9WZPeDmCJRRrQVeFZszP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------95mj9WZPeDmCJRRrQVeFZszP
Content-Type: multipart/mixed; boundary="------------S0dkK3toNtf2kwEI7oJpap84";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <5038c9b8-272d-e39e-f8af-67455d53458c@suse.com>
Subject: Re: [PATCH] tools/include: drop leading underscore from xen_list
 header
References: <20220620123500.2866-1-roger.pau@citrix.com>
In-Reply-To: <20220620123500.2866-1-roger.pau@citrix.com>

--------------S0dkK3toNtf2kwEI7oJpap84
Content-Type: multipart/mixed; boundary="------------Lj0V7s2aL0qVRDjLk0xjLmMu"

--------------Lj0V7s2aL0qVRDjLk0xjLmMu
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.06.22 14:35, Roger Pau Monne wrote:
> A leading underscore is used to indicate auto generated headers, and
> the clean use of 'rm -f _*.h' will remove those.  _xen_list.h also
> uses a leading underscore, but is checked in the repo and as such
> cannot be removed as part of the clean rule.
> 
> Fix this by dropping the leading underscore, so that the header is not
> removed.
> 
> Fixes: a03b3552d4 ('libs,tools/include: Clean "clean" targets')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------Lj0V7s2aL0qVRDjLk0xjLmMu
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Lj0V7s2aL0qVRDjLk0xjLmMu--

--------------S0dkK3toNtf2kwEI7oJpap84--

--------------95mj9WZPeDmCJRRrQVeFZszP
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKwapMFAwAAAAAACgkQsN6d1ii/Ey97
Igf+OChhIgSdGWAqawuK5+pJqY3QyHYUdMfP5wRnpsk4FeHI1X7azLfavoaeh2OSRPA9ZfjhjwNC
DTzBFouAD6OXjppHtHLzkGGTM2fhXrOVwrg7IE0//LA5ZGloX2bLrMWGihOcyX/0+HHl6MIcGs3r
MiBWLaoVB75msxYk1aWTAx7sucOFupvOASHZgDQo1AY1YdzC7nz2A8fCTKNo9zw7qCjiKyhVjSSl
EVO2kAsd+yiw4+Bs8ZxTjdmRX6rHGDx/Ms/7c0UzQU3LTTJ7q+kM4CFypMpdm9YO4SoVOVYuI2XD
emI1TZn9jLXik6GrHwsSS6Vt7RG0s4t4qBLeTELksg==
=LwCX
-----END PGP SIGNATURE-----

--------------95mj9WZPeDmCJRRrQVeFZszP--


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 12:58:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 12:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352781.579640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gyb-0006SW-PT; Mon, 20 Jun 2022 12:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352781.579640; Mon, 20 Jun 2022 12:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Gyb-0006SP-ME; Mon, 20 Jun 2022 12:57:57 +0000
Received: by outflank-mailman (input) for mailman id 352781;
 Mon, 20 Jun 2022 12:57:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Gya-0006SF-Ki; Mon, 20 Jun 2022 12:57:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Gya-0004HY-Cp; Mon, 20 Jun 2022 12:57:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3GyZ-0007FS-WC; Mon, 20 Jun 2022 12:57:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3GyZ-0002S1-Vj; Mon, 20 Jun 2022 12:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lnlvsDjGCvNLqLjpzUfjeOtSRl0avonh68hssNDipdA=; b=V7fTfGjN4OR2SaoUeTU/cb1yLO
	p021Z5Gc9TRFaFu41p+uA0btEWe/+Zyvl7JBt0F3N7uQJmL2oRmHz7/yJ86sW7VE2DocIrS0aoBPB
	rsk9lVlzioi+qga9kKZT44HFzu74scsuuLzwiH8hljLz4KE0hZQtp4uOy5wkxdGB1F7M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171284-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171284: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a111daf0c53ae91e71fd2bfe7497862d14132e3e
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 12:57:55 +0000

flight 171284 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-armhf-armhf-xl-arndale  10 host-ping-check-xen      fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a111daf0c53ae91e71fd2bfe7497862d14132e3e
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    1 days
Failing since        171280  2022-06-19 15:12:25 Z    0 days    3 attempts
Testing same since   171284  2022-06-20 03:14:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ian Abbott <abbotti@mev.co.uk>
  Jamie Iles <jamie@jamieiles.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Miaoqian Lin <linmq006@gmail.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 966 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 13:05:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 13:05:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352793.579651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3H63-00081L-Ow; Mon, 20 Jun 2022 13:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352793.579651; Mon, 20 Jun 2022 13:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3H63-00081E-Ky; Mon, 20 Jun 2022 13:05:39 +0000
Received: by outflank-mailman (input) for mailman id 352793;
 Mon, 20 Jun 2022 13:05:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t3Ah=W3=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o3H61-000818-WB
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 13:05:38 +0000
Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com
 [2607:f8b0:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac9939ff-f099-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 15:05:36 +0200 (CEST)
Received: by mail-pf1-x434.google.com with SMTP id 128so3171396pfv.12
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jun 2022 06:05:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac9939ff-f099-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=5XCq3aw9mDW4vLXTuAFW7Gl3EWmIsDOrQpN5UmWv+fQ=;
        b=Ca8jZKgHlF+F7xawEGDXXVybMDyfFxCJfWeGe5rrwwK+Rr/qm6/8/qwL3VW1LiU64p
         AcKMG7+nf5bJxilwQH5VyjMsQwtFSDhsXeVw854pfdR5VsA5RgAuXoqiVWRKT2m1GH9Z
         ooA3NrC3nbLhKXvBQYIoZD9zkv1QZpWwynStcbJZB20ypW+VqmIUmGdIpS/RuL6udzRw
         aEXqLR4Tpb7tVMv53/KziPMeSL5AQYuRn4p9+SjUlnaNIRMhrr9qO7JduaUMRtfMKoCt
         ftixJOFPZn4qTX2u33HAEpvsG3CLLjZFgc33VphqNWdtGIhEN/nbgOxrXHO4ioXDnd4v
         3r7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=5XCq3aw9mDW4vLXTuAFW7Gl3EWmIsDOrQpN5UmWv+fQ=;
        b=TM3WObaYekd+CrTFIQR8ufpP8Uisxn0hxEaLPrqlM3hwrovPlYwOFeto4tw9B9XnW5
         bzrz2btN7idV5rfPtgdj0zfjFB1phCKs4bdNu7+MHjytFh7w1W5+y4dYPFEoG9LDCVUo
         y2zfuQ3Zg7NI4nMIYntqvx2aRiMD3/EjtaspVS8zQedUA3oIvJIC+ERwi4/0rPGzDmyN
         H+9VapoY3xhKSnvVcG+Zsd5cul/cKFg4Zlqzyf60HOcgGn5NHnrAUfo+vtWNo/U5SYva
         khDBkItTFsCc3Acwi6nSGE2Oq1lt2TNBBs6zAR+/Mer+4Vs76d+v/jyyTww4Mtgn6wQa
         u4Ng==
X-Gm-Message-State: AJIora+881xyzcHll63rGGLzqfBz9nCeXZvQD5Vfli/RenYsPxwkvKKR
	q07LhmsTckA6i2fps/bhdvQMNLIlXylIKIyMttfZkFhlBxI=
X-Google-Smtp-Source: AGRyM1uXBgvYNGyYYiJ7P+RDvneEFsseZqD3e1M3vKtyVOasZKg0d1N4yYI9FygAGJMznCH2nFrZKHs8oifoCavKCk8=
X-Received: by 2002:a05:6a00:18a9:b0:51b:f63b:6f7c with SMTP id
 x41-20020a056a0018a900b0051bf63b6f7cmr24394513pfh.49.1655730335169; Mon, 20
 Jun 2022 06:05:35 -0700 (PDT)
MIME-Version: 1.0
References: <20220609061812.422130-1-jens.wiklander@linaro.org>
 <20220609061812.422130-3-jens.wiklander@linaro.org> <874k0nhvsq.fsf@epam.com>
 <20220616223728.GA71444@jade> <94122e8d-224d-2632-27ad-d56d3a24b367@xen.org>
In-Reply-To: <94122e8d-224d-2632-27ad-d56d3a24b367@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 20 Jun 2022 15:05:24 +0200
Message-ID: <CAHUa44E7takcNtXtLxmrMdDV+hO=86uBpJz7tjp_W26x1mGB-Q@mail.gmail.com>
Subject: Re: [PATCH v2 2/2] xen/arm: add FF-A mediator
To: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

Hi Julien,

On Fri, Jun 17, 2022 at 10:16 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 16/06/2022 23:37, Jens Wiklander wrote:
> > On Tue, Jun 14, 2022 at 07:47:18PM +0000, Volodymyr Babchuk wrote:
> >>
> >> Hello Jens,
> >>
> >> Sorry for late review, I was busy with internal projects.
> >>
> >> This is preliminary review. I gave up at scatter-gather operations. Need
> >> more time to review them properly.
> >
> > No problem, thanks for taking the time.
> >
> >>
> >> One thing that bothers me is that Xen is non-preemptive and there are
> >> plenty of potentially long-running operations.
> >
> > There's room to deal with that in the FF-A specification. These scatter-
> > gather operations are quite complicated, so I started with the minimum. We
> > can address the problem with long-running operations as a future
> > optimization.
>
> I would be OK with deferring this work. However, I think this should be
> written down, as the Xen community will not be able to provide security
> support until we have resolved the known places where a vCPU may hog a
> pCPU longer than necessary.
>
> This reminds me that this series doesn't add a support statement for the
> new subsystem in SUPPORT.md. AFAICT, this should be tech preview for now.

I'll add an entry for this.
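For reference, Xen's SUPPORT.md records each feature as a heading followed by an indented `Status` line, so a tech-preview entry would presumably take roughly this shape (the exact heading wording here is an assumption, not the text that was ultimately committed):

```
### ARM/FF-A Mediator

    Status, Arm64: Tech Preview
```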

Thanks,
Jens


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 13:22:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 13:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352802.579666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3HME-00020X-7Z; Mon, 20 Jun 2022 13:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352802.579666; Mon, 20 Jun 2022 13:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3HME-00020Q-4Q; Mon, 20 Jun 2022 13:22:22 +0000
Received: by outflank-mailman (input) for mailman id 352802;
 Mon, 20 Jun 2022 13:22:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hcs/=W3=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1o3HMC-00020K-2K
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 13:22:20 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01885a76-f09c-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 15:22:17 +0200 (CEST)
Received: from zn.tnic (p200300ea974657f0329c23fffea6a903.dip0.t-ipconnect.de
 [IPv6:2003:ea:9746:57f0:329c:23ff:fea6:a903])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id E81721EC0657;
 Mon, 20 Jun 2022 15:22:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01885a76-f09c-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1655731333;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=LrGpaF007OQqZrjEUL4cOEK+6b5ptHcd1OgrPbOBhQc=;
	b=kWLHioT5UQlaSJoVi5j86oOVwPwb26iLxGk6Lbs+ei26Nq/TNiLbkDhLvIBEcM1dDW7PwE
	REHLuI7rtdbxYZJ11zXspxBMbDlwr0roQw/RaMWMbPlZlmUydz0+9GqupSWNQ5iHJDtzLd
	ZDKnGzQwg1BILxQITGEggBx6lvjZ+IE=
Date: Mon, 20 Jun 2022 15:22:08 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>, Tom Lendacky <thomas.lendacky@amd.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2] x86/pat: fix x86_has_pat_wp()
Message-ID: <YrB0gNtIfCwV+xnE@zn.tnic>
References: <20220620113441.23961-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220620113441.23961-1-jgross@suse.com>

+ Tom.

On Mon, Jun 20, 2022 at 01:34:41PM +0200, Juergen Gross wrote:
> x86_has_pat_wp() is using a wrong test, as it relies on the normal
> PAT configuration used by the kernel. In case the PAT MSR has been
> set up by another entity (e.g. BIOS or Xen hypervisor) it might return
> false even if the PAT configuration allows WP mappings.

... because Xen doesn't allow writing the PAT MSR. Please explain
exactly what happens because we will forget.

> The correct way to test for WP support is:
> 
> 1. Get the PTE protection bits needed to select WP mode by reading
>    __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP] (depending on the PAT MSR
>    setting this might return protection bits for a stronger mode, e.g.
>    UC-)
> 2. Translate those bits back into the real cache mode selected by those
>    PTE bits by reading __pte2cachemode_tbl[__pte2cm_idx(prot)]
> 3. Test for the cache mode to be _PAGE_CACHE_MODE_WP

Yes, this is a good explanation albeit a bit too verbose. You can stick
a shorter version of it as a comment over the function so that we don't
have to swap it all back in next time.

> Fixes: 1f6f655e01ad ("x86/mm: Add a x86_has_pat_wp() helper")

If anything, this should be:

f88a68facd9a ("x86/mm: Extend early_memremap() support with additional attrs")

Also, I'm thinking CC:stable here.

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - fix indexing into __pte2cachemode_tbl[]

Yes, in any case, I see it now. The key aspect being in the comment
above it:

 *   Index into __pte2cachemode_tbl[] are the caching attribute bits of the pte
 *   (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.

which is how one should index into that array.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:28:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352811.579676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JJb-00059q-6W; Mon, 20 Jun 2022 15:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352811.579676; Mon, 20 Jun 2022 15:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JJb-00059j-3r; Mon, 20 Jun 2022 15:27:47 +0000
Received: by outflank-mailman (input) for mailman id 352811;
 Mon, 20 Jun 2022 15:27:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f0pT=W3=intel.com=dave.hansen@srs-se1.protection.inumbo.net>)
 id 1o3JJZ-00059d-Jd
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 15:27:46 +0000
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 83dd4510-f0ad-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 17:27:40 +0200 (CEST)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 20 Jun 2022 08:27:36 -0700
Received: from echeresh-mobl1.amr.corp.intel.com (HELO [10.209.15.145])
 ([10.209.15.145])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 20 Jun 2022 08:27:36 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83dd4510-f0ad-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1655738860; x=1687274860;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=8giL0qZv3yIElRFo4fVuIKMG5Tx+3XAwog7jJ8DWOI8=;
  b=fsMA6M5O2EmDaq5pGmmWI/nwriHtxrXSQO9VZlCo8PJXaFm+qUu00HPL
   Ctu5NI79TH+QTJmPzriPmA+Cg9uq+8K3xfVBmpQPKk/AXqiUNj0FZ/NM1
   x4KnD7thqXc/M+5rOpi2+DB9fV6sbKt08q7I0ciI/qJw9bgLXGS9uKCYt
   D3Ssg3tf9njp1y5RYR1uLQi5rgz0V29yNXMWxGWsX5bKNWmIuGsB2lLJt
   nduTqXVLHwv8knJXS2VDAf83ET7KRO/ad4kAoKxKlL/mV6v+2/SS3x5DF
   pKFnFpDU6ujlMwc2ZjrGuVy4XR1Za6XTnRnvjLnZvgQetT3/qxO2WYlfN
   Q==;
X-IronPort-AV: E=McAfee;i="6400,9594,10384"; a="343915119"
X-IronPort-AV: E=Sophos;i="5.92,207,1650956400"; 
   d="scan'208";a="343915119"
X-IronPort-AV: E=Sophos;i="5.92,207,1650956400"; 
   d="scan'208";a="561998169"
Message-ID: <63ccccac-2aa7-8850-9cd3-a8b7b89e1872@intel.com>
Date: Mon, 20 Jun 2022 08:27:33 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, jbeulich@suse.com,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com> <YrBLU2C5cJoalnax@zn.tnic>
 <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
From: Dave Hansen <dave.hansen@intel.com>
In-Reply-To: <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/20/22 03:41, Juergen Gross wrote:
>> But I'm only guessing - this needs a *lot* more elaboration and
>> explanation why exactly this is needed.
> 
> I will correct the code and update the commit message.

It would also be great to cover the end-user-visible impact of the bug
and the fix.  It _looks_ like it will probably only affect an SEV
system's ability to read some EFI data.  That will presumably be pretty
bad because it ends up reading from an encrypted mapping instead of a
decrypted one.

The

	pr_warn("failed to early memremap...

is (counterintuitively) what is wanted here.

Right?


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:34:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352819.579688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JQL-0006b4-UE; Mon, 20 Jun 2022 15:34:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352819.579688; Mon, 20 Jun 2022 15:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JQL-0006ax-R2; Mon, 20 Jun 2022 15:34:45 +0000
Received: by outflank-mailman (input) for mailman id 352819;
 Mon, 20 Jun 2022 15:34:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hycu=W3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3JQK-0006ar-NU
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 15:34:44 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 81e66a4c-f0ae-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 17:34:43 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 69BD121A37;
 Mon, 20 Jun 2022 15:34:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1145513638;
 Mon, 20 Jun 2022 15:34:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id a+HBApOTsGJ3fwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 20 Jun 2022 15:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81e66a4c-f0ae-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655739283; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ACoOTC2stnarzxryygTxid+tfZOlr5PbEG8RRUFLwi8=;
	b=H3a/JMLM1Dr5ocBo4uC4Ohfb/QWojMoWb6FZRQfCFMKGr9SBY4x6Jp9IT9avM7jfOYh9tE
	cthAzt5hRvfOCtsBsMiSzNg6hslRkSWmp4FPU0XDZXD4IxhDaqG9pWYBnJZ8GJlYu2djbq
	M77Z4uBU6VoCGZp5x0uWx6hCbp9rUH0=
Message-ID: <ddb0cc0d-cefc-4f33-23f8-3a94c7c51a49@suse.com>
Date: Mon, 20 Jun 2022 17:34:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
Content-Language: en-US
To: Dave Hansen <dave.hansen@intel.com>, Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, jbeulich@suse.com,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com> <YrBLU2C5cJoalnax@zn.tnic>
 <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
 <63ccccac-2aa7-8850-9cd3-a8b7b89e1872@intel.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <63ccccac-2aa7-8850-9cd3-a8b7b89e1872@intel.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0we9pLFkJBVTNxTMON0yDeQD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0we9pLFkJBVTNxTMON0yDeQD
Content-Type: multipart/mixed; boundary="------------iPeeSZwKgEsGXEdaV0RlhR0A";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Dave Hansen <dave.hansen@intel.com>, Borislav Petkov <bp@alien8.de>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, jbeulich@suse.com,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <ddb0cc0d-cefc-4f33-23f8-3a94c7c51a49@suse.com>
Subject: Re: [PATCH 1/2] x86/pat: fix x86_has_pat_wp()
References: <20220503132207.17234-1-jgross@suse.com>
 <20220503132207.17234-2-jgross@suse.com> <YrBLU2C5cJoalnax@zn.tnic>
 <1cfde4bf-241f-d94c-ffd7-2a11cf9aa1f2@suse.com>
 <63ccccac-2aa7-8850-9cd3-a8b7b89e1872@intel.com>
In-Reply-To: <63ccccac-2aa7-8850-9cd3-a8b7b89e1872@intel.com>

--------------iPeeSZwKgEsGXEdaV0RlhR0A
Content-Type: multipart/mixed; boundary="------------Ik6WguhLVV1z6SqwyzSmNVps"

--------------Ik6WguhLVV1z6SqwyzSmNVps
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.06.22 17:27, Dave Hansen wrote:
> On 6/20/22 03:41, Juergen Gross wrote:
>>> But I'm only guessing - this needs a *lot* more elaboration and
>>> explanation why exactly this is needed.
>>
>> I will correct the code and update the commit message.
>
> It would also be great to cover the end-user-visible impact of the bug
> and the fix.  It _looks_ like it will probably only affect an SEV
> system's ability to read some EFI data.  That will presumably be pretty
> bad because it ends up reading from an encrypted mapping instead of a
> decrypted one.

Xen doesn't support SEV guests yet. So the only caveat here would be EFI
setting up PAT by itself.

Not sure this is really a real world issue.


Juergen
--------------Ik6WguhLVV1z6SqwyzSmNVps
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Ik6WguhLVV1z6SqwyzSmNVps--

--------------iPeeSZwKgEsGXEdaV0RlhR0A--

--------------0we9pLFkJBVTNxTMON0yDeQD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKwk5IFAwAAAAAACgkQsN6d1ii/Ey8c
oAf/Tm/GXq1f93iDng9MUWcolmpgjyI+fikteB4dq/37A/h/SsPHjcq5sNH3u1wDAOBPGmG0VCjt
1o6GeZZTPbTwbhKzScy5GZEMRvxVi3CtLc4DTNnJdENnYFAraKRftqEX82hBkqWBiW8OxpT7yl7m
bW+zbuE76dGSxAs8uw9jjTjK+f82NVtBpOR2qluCpnoGvOY7MOS3NW6Jeoj6NaITJmosY+iBQa8h
1yMY307n1KqB3Djz4Lwv/sISeS6uxroVIs0aCieke/7lmbnCl48zYL+24kSFWZ17j05LuJNwKYLN
X9Xu8Y9X5Afhs4A5zxjwlrLbF1MWtNIXlExtJ+McgQ==
=OupB
-----END PGP SIGNATURE-----

--------------0we9pLFkJBVTNxTMON0yDeQD--


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:49:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352829.579710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeD-0008Qa-MW; Mon, 20 Jun 2022 15:49:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352829.579710; Mon, 20 Jun 2022 15:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeD-0008QR-Ho; Mon, 20 Jun 2022 15:49:05 +0000
Received: by outflank-mailman (input) for mailman id 352829;
 Mon, 20 Jun 2022 15:49:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWFk=W3=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o3JeC-0008Aw-KJ
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 15:49:04 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 821676ec-f0b0-11ec-b725-ed86ccbb4733;
 Mon, 20 Jun 2022 17:49:03 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id g4so5553113lfv.9
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jun 2022 08:49:03 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 m9-20020a2e9349000000b0024f3d1dae94sm1690149ljh.28.2022.06.20.08.49.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jun 2022 08:49:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 821676ec-f0b0-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=uaCvxlWu7Mr7Lgwgym0opWD7n4MJzmZlV9ZV/7GQCmI=;
        b=KyTptLRvwAoJbveW/XylhauTOn/SLl30O/1KlcewfX0kqwQ81awXSdlGZEeRg2sRr8
         s2Em8EmWKPDJEMIBFjYTQlSweVqYitBd0lvuL/TFsRwzgGRXccVLgFV4gFjVO6LXwLMB
         sOfbyJ/hqQHfxfSUqTgGGraS1/3g21Ph0vx5qN33b19XIAN379ls2kz10/8cL498Qms5
         eUMwNqJfW9rpjl98XD00590Ii9aH+aMOV95DYqfFd0m2prePquvciW9Ttq32cQaUEqVx
         EScyO0IjgopxHBvszRG8BNPJlc5VrXrlJxRMPWNQf09ARrmkzp/qG9DokvWZVN/IeRci
         n1fA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=uaCvxlWu7Mr7Lgwgym0opWD7n4MJzmZlV9ZV/7GQCmI=;
        b=uJu3fYFNj9xXiDf6UqRjI3js03dI9OhJEUgfXUShqjLWY1x3AMVYEDd3kJF5+b3JXp
         KSn/D3Wt5s5aO3K0FX6Nr9ypNxFKjUft587FtxBa4wxkcMFsbMkSZlTKHg3msHTdwbc+
         hyqwEkZTcXYCW9nNTup+0Ey5xi9goQDoDdjJQWzALXsNcRf6PFGYdXuI5k8R2bkM6MCb
         BQawkx32csS/LGlYv8zJhKghktkt3bWemGsLEQZ3kpeAOCLFOrlDktbKfoEp7p0NqtGg
         QCEiBwxAYRYMefXG3kLRO/jzE3EdlcFBo63r8xxG1hf5ZR4D9KwtNkaJGooqQdxpa+1B
         Mh9A==
X-Gm-Message-State: AJIora9NGc1VUOx83GuM0gh3OvFpI/CDnQiVYlU6lIobunIi/LGbh/LO
	rS+FXltzPlUrVoUbm0vGlNWQrk7CVA4=
X-Google-Smtp-Source: AGRyM1vlQXq99U+Nc6bBbpfMbNrm9kXtQw6Mp+ShvMUrY/uyqLzTHnA+d0hntvWHmaMy2ocBQR1jpw==
X-Received: by 2002:a19:f813:0:b0:479:3fd7:ce43 with SMTP id a19-20020a19f813000000b004793fd7ce43mr13897324lff.375.1655740142091;
        Mon, 20 Jun 2022 08:49:02 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH V1 1/2] xen/unpopulated-alloc: Introduce helpers for contiguous allocations
Date: Mon, 20 Jun 2022 18:48:55 +0300
Message-Id: <1655740136-3974-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1655740136-3974-1-git-send-email-olekstysh@gmail.com>
References: <1655740136-3974-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Add the ability to allocate unpopulated contiguous pages suitable for
grant mapping into. This is going to be used by userspace PV backends
for grant mappings in gnttab code (see gnttab_dma_alloc_pages()).

The patch also changes the allocation mechanism for unpopulated pages.
Instead of using page_list and page->zone_device_data to manage pages
hot-plugged in fill_pool() (formerly fill_list()), reuse the genpool
subsystem to do the job for us.

Please note that even for non-contiguous allocations we always try to
allocate a single contiguous chunk in alloc_unpopulated_pages() instead
of allocating memory page by page. Although it leads to less efficient
resource utilization, it is faster. Taking into account that on both
x86 and Arm the unpopulated memory resource is arbitrarily large (it is
not backed by real memory), this is not going to be a problem.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
I am still thinking about how we can optimize free_unpopulated_pages()
to avoid freeing memory page by page for non-contiguous allocations:
1. We could update users to allocate/free contiguous pages even when
   contiguity is not strictly required. But besides the need to alter
   a few places, this also requires having a valid struct device
   pointer in hand (maybe instead of passing *dev we could pass
   max_addr? With that we could drop DMA_BIT_MASK).
2. Almost all users of unpopulated pages (except gnttab_page_cache_shrink()
   in grant-table.c) retain the initially allocated pages[i] array, so it
   is passed to free_unpopulated_pages() absolutely unmodified since
   being allocated.
   We could update free_unpopulated_pages() to always try to free memory
   as a single chunk (after making sure the chunk is in a pool using
   gen_pool_has_addr()) and update gnttab_page_cache_shrink() to not pass
   a pages[i] array with mixed pages in it when dealing with unpopulated
   pages. Unlike option 1, this doesn't require altering other places.

Any thoughts?

Changes RFC -> V1:
   - update commit subject/description
   - rework to avoid code duplication (resolve initial TODO)
   - rename API according to new naming scheme (s/dma/contiguous),
     also rename some local stuff
   - drop the page_list & friends entirely and use unpopulated_pool for all
     (contiguous and non-contiguous) allocations
   - fix build on x86 by inclusion of <linux/dma-mapping.h>
   - introduce is_xen_unpopulated_page()
   - share the implementation for xen_alloc_unpopulated_contiguous_pages()
     and xen_alloc_unpopulated_pages()
---
 drivers/xen/unpopulated-alloc.c | 188 +++++++++++++++++++++++++++++-----------
 include/xen/xen.h               |  20 +++++
 2 files changed, 158 insertions(+), 50 deletions(-)

diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index a39f2d3..3988480d 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/dma-mapping.h>
 #include <linux/errno.h>
+#include <linux/genalloc.h>
 #include <linux/gfp.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
@@ -12,9 +14,8 @@
 #include <xen/page.h>
 #include <xen/xen.h>
 
-static DEFINE_MUTEX(list_lock);
-static struct page *page_list;
-static unsigned int list_count;
+static DEFINE_MUTEX(pool_lock);
+static struct gen_pool *unpopulated_pool;
 
 static struct resource *target_resource;
 
@@ -31,12 +32,12 @@ int __weak __init arch_xen_unpopulated_init(struct resource **res)
 	return 0;
 }
 
-static int fill_list(unsigned int nr_pages)
+static int fill_pool(unsigned int nr_pages)
 {
 	struct dev_pagemap *pgmap;
 	struct resource *res, *tmp_res = NULL;
 	void *vaddr;
-	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
+	unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
 	struct range mhp_range;
 	int ret;
 
@@ -106,6 +107,7 @@ static int fill_list(unsigned int nr_pages)
          * conflict with any devices.
          */
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		unsigned int i;
 		xen_pfn_t pfn = PFN_DOWN(res->start);
 
 		for (i = 0; i < alloc_pages; i++) {
@@ -125,16 +127,17 @@ static int fill_list(unsigned int nr_pages)
 		goto err_memremap;
 	}
 
-	for (i = 0; i < alloc_pages; i++) {
-		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
-
-		pg->zone_device_data = page_list;
-		page_list = pg;
-		list_count++;
+	ret = gen_pool_add_virt(unpopulated_pool, (unsigned long)vaddr, res->start,
+			alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
+	if (ret) {
+		pr_err("Cannot add memory range to the unpopulated pool\n");
+		goto err_pool;
 	}
 
 	return 0;
 
+err_pool:
+	memunmap_pages(pgmap);
 err_memremap:
 	kfree(pgmap);
 err_pgmap:
@@ -149,51 +152,49 @@ static int fill_list(unsigned int nr_pages)
 	return ret;
 }
 
-/**
- * xen_alloc_unpopulated_pages - alloc unpopulated pages
- * @nr_pages: Number of pages
- * @pages: pages returned
- * @return 0 on success, error otherwise
- */
-int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+		bool contiguous)
 {
 	unsigned int i;
 	int ret = 0;
+	void *vaddr;
+	bool filled = false;
 
 	/*
 	 * Fallback to default behavior if we do not have any suitable resource
 	 * to allocate required region from and as the result we won't be able to
 	 * construct pages.
 	 */
-	if (!target_resource)
+	if (!target_resource) {
+		if (contiguous && nr_pages > 1)
+			return -ENODEV;
+
 		return xen_alloc_ballooned_pages(nr_pages, pages);
+	}
+
+	mutex_lock(&pool_lock);
 
-	mutex_lock(&list_lock);
-	if (list_count < nr_pages) {
-		ret = fill_list(nr_pages - list_count);
+	while (!(vaddr = (void *)gen_pool_alloc(unpopulated_pool,
+			nr_pages * PAGE_SIZE))) {
+		if (filled)
+			ret = -ENOMEM;
+		else {
+			ret = fill_pool(nr_pages);
+			filled = true;
+		}
 		if (ret)
 			goto out;
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		struct page *pg = page_list;
-
-		BUG_ON(!pg);
-		page_list = pg->zone_device_data;
-		list_count--;
-		pages[i] = pg;
+		pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
 
 #ifdef CONFIG_XEN_HAVE_PVMMU
 		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+			ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
 			if (ret < 0) {
-				unsigned int j;
-
-				for (j = 0; j <= i; j++) {
-					pages[j]->zone_device_data = page_list;
-					page_list = pages[j];
-					list_count++;
-				}
+				gen_pool_free(unpopulated_pool, (unsigned long)vaddr,
+						nr_pages * PAGE_SIZE);
 				goto out;
 			}
 		}
@@ -201,9 +202,68 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 	}
 
 out:
-	mutex_unlock(&list_lock);
+	mutex_unlock(&pool_lock);
 	return ret;
 }
+
+static bool in_unpopulated_pool(unsigned int nr_pages, struct page *page)
+{
+	if (!target_resource)
+		return false;
+
+	return gen_pool_has_addr(unpopulated_pool,
+			(unsigned long)page_to_virt(page), nr_pages * PAGE_SIZE);
+}
+
+static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
+		bool contiguous)
+{
+	if (!target_resource) {
+		if (contiguous && nr_pages > 1)
+			return;
+
+		xen_free_ballooned_pages(nr_pages, pages);
+		return;
+	}
+
+	mutex_lock(&pool_lock);
+
+	/* XXX Do we need to check the range (gen_pool_has_addr)? */
+	if (contiguous)
+		gen_pool_free(unpopulated_pool, (unsigned long)page_to_virt(pages[0]),
+				nr_pages * PAGE_SIZE);
+	else {
+		unsigned int i;
+
+		for (i = 0; i < nr_pages; i++)
+			gen_pool_free(unpopulated_pool,
+					(unsigned long)page_to_virt(pages[i]), PAGE_SIZE);
+	}
+
+	mutex_unlock(&pool_lock);
+}
+
+/**
+ * is_xen_unpopulated_page - check whether page is unpopulated
+ * @page: page to be checked
+ * @return true if page is unpopulated, false otherwise
+ */
+bool is_xen_unpopulated_page(struct page *page)
+{
+	return in_unpopulated_pool(1, page);
+}
+EXPORT_SYMBOL(is_xen_unpopulated_page);
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	return alloc_unpopulated_pages(nr_pages, pages, false);
+}
 EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
 
 /**
@@ -213,22 +273,40 @@ EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
  */
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 {
-	unsigned int i;
+	free_unpopulated_pages(nr_pages, pages, false);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_pages);
 
-	if (!target_resource) {
-		xen_free_ballooned_pages(nr_pages, pages);
-		return;
-	}
+/**
+ * xen_alloc_unpopulated_contiguous_pages - alloc unpopulated contiguous pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages)
+{
+	/* XXX Handle devices which support 64-bit DMA address only for now */
+	if (dma_get_mask(dev) != DMA_BIT_MASK(64))
+		return -EINVAL;
 
-	mutex_lock(&list_lock);
-	for (i = 0; i < nr_pages; i++) {
-		pages[i]->zone_device_data = page_list;
-		page_list = pages[i];
-		list_count++;
-	}
-	mutex_unlock(&list_lock);
+	return alloc_unpopulated_pages(nr_pages, pages, true);
 }
-EXPORT_SYMBOL(xen_free_unpopulated_pages);
+EXPORT_SYMBOL(xen_alloc_unpopulated_contiguous_pages);
+
+/**
+ * xen_free_unpopulated_contiguous_pages - return unpopulated contiguous pages
+ * @dev: valid struct device pointer
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages)
+{
+	free_unpopulated_pages(nr_pages, pages, true);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_contiguous_pages);
 
 static int __init unpopulated_init(void)
 {
@@ -237,9 +315,19 @@ static int __init unpopulated_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	unpopulated_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!unpopulated_pool) {
+		pr_err("xen:unpopulated: Cannot create unpopulated pool\n");
+		return -ENOMEM;
+	}
+
+	gen_pool_set_algo(unpopulated_pool, gen_pool_best_fit, NULL);
+
 	ret = arch_xen_unpopulated_init(&target_resource);
 	if (ret) {
 		pr_err("xen:unpopulated: Cannot initialize target resource\n");
+		gen_pool_destroy(unpopulated_pool);
+		unpopulated_pool = NULL;
 		target_resource = NULL;
 	}
 
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81..7d396cc 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -60,9 +60,16 @@ static inline void xen_set_restricted_virtio_memory_access(void)
 		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
 }
 
+struct device;
+
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages);
+void xen_free_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages);
+bool is_xen_unpopulated_page(struct page *page);
 #include <linux/ioport.h>
 int arch_xen_unpopulated_init(struct resource **res);
 #else
@@ -77,6 +84,19 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
 {
 	xen_free_ballooned_pages(nr_pages, pages);
 }
+static inline int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages)
+{
+	return -1;
+}
+static inline void xen_free_unpopulated_contiguous_pages(struct device *dev,
+		unsigned int nr_pages, struct page **pages)
+{
+}
+static inline bool is_xen_unpopulated_page(struct page *page)
+{
+	return false;
+}
 #endif
 
 #endif	/* _XEN_XEN_H */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:49:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352828.579699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeC-0008Ar-7y; Mon, 20 Jun 2022 15:49:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352828.579699; Mon, 20 Jun 2022 15:49:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeC-0008Ak-46; Mon, 20 Jun 2022 15:49:04 +0000
Received: by outflank-mailman (input) for mailman id 352828;
 Mon, 20 Jun 2022 15:49:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWFk=W3=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o3JeB-0008Ae-0j
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 15:49:03 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 81716efb-f0b0-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 17:49:01 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id c4so17892101lfj.12
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jun 2022 08:49:01 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 m9-20020a2e9349000000b0024f3d1dae94sm1690149ljh.28.2022.06.20.08.48.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jun 2022 08:49:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81716efb-f0b0-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id;
        bh=BxFmrRIa+vekezXU2ndkJI53IFxQQRuwCL2hgCyh3rI=;
        b=EE5uYpfUwbW+yhCsitbDozwmKkNg4mpG4ZQahK3XlYeTGhDYoqvVpHCoT5xzpURTcc
         cjNo0lfLdmeVJ5IQ69SU5T0l5SyOsyTVRZ/JsJgYtYepPqmK4x7OlqRlJs6A+dxXh09z
         TyqSN8Ltk0iX1VNgQ3wwi4LxxDz9REMLqH6/vXtkx6JVA0buOi5ldRHYWRVIGBavD/oT
         b8au3yNt/IwMuIePkU8N12rbDfxXhc9SDjRLMjVsdttQ2o7IWrpQ6oW1nDpitO/31dHU
         cYm9lqncdKbu0sq+m+mFIPb1hRuVEeWq3M4mVzMtBEQ9S3WQW6hoqAGucaMBRHAv9N8q
         VRjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=BxFmrRIa+vekezXU2ndkJI53IFxQQRuwCL2hgCyh3rI=;
        b=LioC+CWrqYcQgoxErWyFYCjsEPZVjR8CJrrdhOzjFY/RlLtVwZDRteGk3F3Gr15qJ4
         re9VjFOPR+1kaUNANoLdrHorcldX8Odqtc8j1hY5OdBQyiq/fCM5TU8RZ7glFYtgbWgV
         q1bGhKPiQqNnju/5FFX/oEAQ3mTnUPU88j/bhnpVk1La2loeJv/JLDlefATpDnqxej9T
         s6mkxmdHV7KmhUYQfsGSMT7KBkI5CN1JOWekn6sZIhiESStyBQ5sHXTgxxewA48Og22Q
         ioCTNpJwgDdbT+y3nrb5um2iqPAr4cTisGtF/2VK2ahtsE+xTM2Ad0A+It870l1d25y8
         b6tA==
X-Gm-Message-State: AJIora/vqpzbO6NyrDR61BDkXdZ14quDif9eFhAiElY9Mw8his6IYdhU
	lloRjWVDeToh7fioBUVbpoCszeH3n1E=
X-Google-Smtp-Source: AGRyM1sqdWC3IbkXi29u50LPbXr7CiOhdvbGy2bBIGNkubnWBbrYqutLjzCLlsUENGqK/CS2NF49dA==
X-Received: by 2002:a05:6512:228c:b0:479:2fa9:6773 with SMTP id f12-20020a056512228c00b004792fa96773mr13621418lfu.413.1655740140978;
        Mon, 20 Jun 2022 08:49:00 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH V1 0/2] Ability to allocate contiguous (was DMAable) pages using unpopulated-alloc
Date: Mon, 20 Jun 2022 18:48:54 +0300
Message-Id: <1655740136-3974-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

You can find previous discussion at [1].

The purpose of this patch series is to get feedback about supporting the allocation
of contiguous pages by Linux's unpopulated-alloc.

The unpopulated-alloc feature has been enabled on Arm since the extended-regions support
reached upstream. With that (provided, of course, we run a new enough Xen version and Xen was
able to allocate extended regions), we no longer allocate real RAM pages from host memory and
balloon them out (in order to obtain physical memory space to map the guest pages into); we use
unpopulated pages instead. And it seems that all users I have played with on Arm (I mean,
Xen PV and virtio backends) are happy with the pages provided by xen_alloc_unpopulated_pages().
It is worth mentioning that these pages are not contiguous, but this hasn't been an issue so far.

There is one place where we still steal RAM pages: when a user-space Xen PV backend tries
to establish a grant mapping that needs to be backed by a DMA buffer for the sake of zero-copy
(see the dma_alloc*() usage in gnttab_dma_alloc_pages()).

And, if I am not mistaken (there might be pitfalls I am not aware of), we could avoid
wasting real RAM pages in that particular case as well by adding the ability to allocate
unpopulated contiguous pages (which are guaranteed to be contiguous in IPA).
The benefits are quite clear here:
1. Avoid wasting real RAM pages (which reduces the amount of usable CMA memory) for allocating
   physical memory space to map the granted buffer into (which can be quite big if
   we deal with the Xen PV Display driver using multiple Full HD buffers)
2. Avoid superpage shattering in the Xen P2M when establishing the stage-2 mapping for the
   granted buffer
3. Avoid the extra operations needed for the granted buffer to be properly mapped and
   unmapped, such as ballooning in/out real RAM pages
   
The corresponding series is located at [2]. In my Arm64-based environment the series works well.
On x86 it is only build-tested.

Any feedback/help would be highly appreciated.

[1] https://lore.kernel.org/xen-devel/1652810658-27810-1-git-send-email-olekstysh@gmail.com/
[2] https://github.com/otyshchenko1/linux/commits/unpopulated-cma1

Oleksandr Tyshchenko (2):
  xen/unpopulated-alloc: Introduce helpers for contiguous allocations
  xen/grant-table: Use unpopulated contiguous pages instead of real RAM
    ones

 drivers/xen/grant-table.c       |  24 +++++
 drivers/xen/unpopulated-alloc.c | 188 +++++++++++++++++++++++++++++-----------
 include/xen/xen.h               |  20 +++++
 3 files changed, 182 insertions(+), 50 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:49:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352830.579716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeE-0008UC-3h; Mon, 20 Jun 2022 15:49:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352830.579716; Mon, 20 Jun 2022 15:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3JeD-0008Tg-TO; Mon, 20 Jun 2022 15:49:05 +0000
Received: by outflank-mailman (input) for mailman id 352830;
 Mon, 20 Jun 2022 15:49:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWFk=W3=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o3JeC-0008Ae-R8
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 15:49:04 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 82c9eb2a-f0b0-11ec-bd2d-47488cf2e6aa;
 Mon, 20 Jun 2022 17:49:04 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id g12so6313339ljk.11
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jun 2022 08:49:04 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 m9-20020a2e9349000000b0024f3d1dae94sm1690149ljh.28.2022.06.20.08.49.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jun 2022 08:49:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82c9eb2a-f0b0-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=rlaK5EBLGtLc0krm/NL8VjdXnXIvfVQErc2uihOdLNA=;
        b=apvgpzqNUstKOMcGs47PvJ/CJGiFo3SYPo3oiXc1rEQh2ny24O/ZaJoNVHjnfHcBhL
         sE1/1NG3Vs/kIMo8gj3Dk2c1z7sWI1F6EdHVU5RcPgXfz+Rrfks6uLTuEu55gw1yM27b
         nh85POmnWW1OCbeaOgCJ1XfYEXGuLwKCpDlvM5NdDcpG18XiV34MmfpYzn/qxxS3SZdR
         3w/I/cMlbXoVrCAObEEAHpvGvc4hR6QkugR/gTlGww4Geap1+ZxIUXXzj/4cE8cyHnkL
         cIeYuaL8KGwLApG0jMafBYFesH5ABExrJ9c2WvNvoIXzvQwm7mV8bNGgDjR8bq5WUCJ8
         Dg7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=rlaK5EBLGtLc0krm/NL8VjdXnXIvfVQErc2uihOdLNA=;
        b=xflh4LTJLPEwjJcZ8D3zEeDdQPooGO+x6lHibr488MlmJsDtfwig8ApwICJcX4Rg15
         Slq40XFfYfnXBXksxGb3GNx9xs3fPWPQ1iHM4Yd++s65XvJCr0F0zuM/uo6Lf44RZdez
         0PwjNIUpgi77e+3QZmn+MNhrqpANa28q+1TZshlZUZ4eG32Q8Bmmh3usPI8lFj8XFMTs
         KrzD+7j4wlmymIq/EBiPs3aUmY9XS4Z67ZmEEmkOa+JapmludLIP4uM4tXJBMLTKazB6
         rYiibfi1EUPckwF5aU3xSR+rWQ3hyMMKTeZbQREqDgedmMgtY5R8Zv8PT7Ye8LlZiut5
         I21A==
X-Gm-Message-State: AJIora8V2EGrqiB8Z3IkjtvysyWoLS6ta8GETeVWi3cx0NL2xpmtwJUG
	f74Ec97oSnjbBTKKV9CIGohUd/lHQ5U=
X-Google-Smtp-Source: AGRyM1t8jZIIoIQOefBZwbkKIP31jN1TS68e69Tx5q0PTW0PQDRQ78ifLWRGU9zxegOfnvyNtEbWjQ==
X-Received: by 2002:a05:651c:8f:b0:255:8e6e:1980 with SMTP id 15-20020a05651c008f00b002558e6e1980mr11944050ljq.462.1655740143445;
        Mon, 20 Jun 2022 08:49:03 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [PATCH V1 2/2] xen/grant-table: Use unpopulated contiguous pages instead of real RAM ones
Date: Mon, 20 Jun 2022 18:48:56 +0300
Message-Id: <1655740136-3974-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1655740136-3974-1-git-send-email-olekstysh@gmail.com>
References: <1655740136-3974-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Depends on CONFIG_XEN_UNPOPULATED_ALLOC. If enabled, unpopulated
contiguous pages will be allocated for grant mappings instead of
ballooning out real RAM pages.

Also fall back to allocating DMAable pages (ballooning out real RAM
pages) if the allocation of unpopulated contiguous pages fails. Use the
recently introduced is_xen_unpopulated_page() in gnttab_dma_free_pages()
to know which API to use for freeing the pages.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Please note, I haven't yet re-checked the use case where xen-swiotlb
is involved (proposed by Stefano):
https://lore.kernel.org/xen-devel/alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop/
I will re-check that for the next version and add a corresponding comment
in the code.

Changes RFC -> V1:
   - update commit subject/description
   - rework to avoid introducing alternative implementation
     of gnttab_dma_alloc(free)_pages(), use IS_ENABLED()
   - implement a fallback to real RAM pages if we failed to allocate
     unpopulated contiguous pages (resolve initial TODO)
   - update according to the API renaming (s/dma/contiguous)
---
 drivers/xen/grant-table.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 738029d..15e426b 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1047,6 +1047,23 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
 	size_t size;
 	int i, ret;
 
+	if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) {
+		ret = xen_alloc_unpopulated_contiguous_pages(args->dev, args->nr_pages,
+				args->pages);
+		if (ret < 0)
+			goto fallback;
+
+		ret = gnttab_pages_set_private(args->nr_pages, args->pages);
+		if (ret < 0)
+			goto fail;
+
+		args->vaddr = page_to_virt(args->pages[0]);
+		args->dev_bus_addr = page_to_phys(args->pages[0]);
+
+		return ret;
+	}
+
+fallback:
 	size = args->nr_pages << PAGE_SHIFT;
 	if (args->coherent)
 		args->vaddr = dma_alloc_coherent(args->dev, size,
@@ -1103,6 +1120,13 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
 
 	gnttab_pages_clear_private(args->nr_pages, args->pages);
 
+	if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC) &&
+			is_xen_unpopulated_page(args->pages[0])) {
+		xen_free_unpopulated_contiguous_pages(args->dev, args->nr_pages,
+				args->pages);
+		return 0;
+	}
+
 	for (i = 0; i < args->nr_pages; i++)
 		args->frames[i] = page_to_xen_pfn(args->pages[i]);
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Jun 20 15:54:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 15:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352855.579732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Jiy-0002M6-LQ; Mon, 20 Jun 2022 15:54:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352855.579732; Mon, 20 Jun 2022 15:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Jiy-0002Lz-Ha; Mon, 20 Jun 2022 15:54:00 +0000
Received: by outflank-mailman (input) for mailman id 352855;
 Mon, 20 Jun 2022 15:53:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Jix-0002Lo-2X; Mon, 20 Jun 2022 15:53:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Jix-0007aS-0i; Mon, 20 Jun 2022 15:53:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Jiw-0003JB-Da; Mon, 20 Jun 2022 15:53:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Jiw-0002w7-D7; Mon, 20 Jun 2022 15:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=v62P41sVeHeVV3WK8Tw7A3jGrzQ31DIddMIPAR6Vc1k=; b=W6DWcAFeP+C3LnBzEhQiSaBSZ4
	L1W5y+Li/R7NqVuaoQP2JEJffUsJgthrbHnouA9WH+pr6KzxU9VJObrpyi6Vm7TlDK4RcR8GedH1J
	xYfh39B9K4ynVLx1TJ/XgVmaeYI4O8xrUVG+HLAegub65Mf4AuwVArzXa12k0FL6LAeg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171288: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt:xen-install:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c8b2d413761af732a0798d8df45ce968732083fe
X-Osstest-Versions-That:
    qemuu=a28498b1f9591e12dcbfdf06dc8f54e15926760e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 15:53:58 +0000

flight 171288 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171288/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install      fail in 171282 pass in 171288
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 171282
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 171282

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       7 xen-install         fail in 171282 like 171256
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171256
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171256
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171256
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171256
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171256
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171256
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171256
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171256
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                c8b2d413761af732a0798d8df45ce968732083fe
baseline version:
 qemuu                a28498b1f9591e12dcbfdf06dc8f54e15926760e

Last test of basis   171256  2022-06-17 15:58:21 Z    2 days
Testing same since   171282  2022-06-20 00:38:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jung-uk Kim <jkim@FreeBSD.org>
  Kyle Evans <kevans@FreeBSD.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stacey Son <sson@FreeBSD.org>
  Warner Losh <imp@bsdimp.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a28498b1f9..c8b2d41376  c8b2d413761af732a0798d8df45ce968732083fe -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 18:45:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 18:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352878.579746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3MOG-0002Sc-2n; Mon, 20 Jun 2022 18:44:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352878.579746; Mon, 20 Jun 2022 18:44:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3MOF-0002SV-Vy; Mon, 20 Jun 2022 18:44:47 +0000
Received: by outflank-mailman (input) for mailman id 352878;
 Mon, 20 Jun 2022 18:44:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3MOE-0002SL-Hc; Mon, 20 Jun 2022 18:44:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3MOE-0002Zb-DY; Mon, 20 Jun 2022 18:44:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3MOE-0007b4-2l; Mon, 20 Jun 2022 18:44:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3MOE-0006qo-2I; Mon, 20 Jun 2022 18:44:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EzqII6sqVD/v58Zi9lXIjvqujWa89IJGnJlXb+aRH1U=; b=rkda8jANBlLBGzLa1oxnJHoMB1
	WqjfvBAmy5diucL+n88Ia+/y1XB3SPOBrXYgMiO4UH8iMrVY4IfUefbyEnPv+KlnBpO2d7SMkkQDo
	PmpJP/dx7cgnsFYtdWSqx3pCy1vfCHoJhOLK+5UP/A4Q7QMp12dvavmM6LrpQ6t25Q0k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171292-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171292: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
X-Osstest-Versions-That:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 18:44:46 +0000

flight 171292 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171292/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171239  2022-06-17 09:00:26 Z    3 days
Testing same since   171292  2022-06-20 14:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c9040f25be..9d067857d1  9d067857d1ff6805608aac4d9c0ea1c848b2e637 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 20:47:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 20:47:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352889.579757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3OJ3-0005ZW-0c; Mon, 20 Jun 2022 20:47:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352889.579757; Mon, 20 Jun 2022 20:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3OJ2-0005ZP-U5; Mon, 20 Jun 2022 20:47:32 +0000
Received: by outflank-mailman (input) for mailman id 352889;
 Mon, 20 Jun 2022 20:47:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3OJ2-0005ZF-3B; Mon, 20 Jun 2022 20:47:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3OJ1-0004ho-Vb; Mon, 20 Jun 2022 20:47:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3OJ1-0002nS-Gy; Mon, 20 Jun 2022 20:47:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3OJ1-0003kp-BM; Mon, 20 Jun 2022 20:47:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gusyN2ZbsTqhXWUFv97d+r+brc9Bv1ErF2GkJrduzJQ=; b=SWkgR5M0TDmeD11ZUBi9V9zVRN
	iHBWvIoPgDym1d9z8VB82Mw1bqtc3f4rtzig/kFsNJMOlnLFJJZmzprVhHn0p42z/jSHGecXRmKsX
	DQZWxtfY5djRMQzkairgIvUCUQ5ddS6pfXRrCWW9kqkPX7kSNlFKb7UKJg5s3Jx56p9o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171291-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171291: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a111daf0c53ae91e71fd2bfe7497862d14132e3e
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jun 2022 20:47:31 +0000

flight 171291 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a111daf0c53ae91e71fd2bfe7497862d14132e3e
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    1 days
Failing since        171280  2022-06-19 15:12:25 Z    1 days    4 attempts
Testing same since   171284  2022-06-20 03:14:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ian Abbott <abbotti@mev.co.uk>
  Jamie Iles <jamie@jamieiles.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Miaoqian Lin <linmq006@gmail.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 966 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 22:40:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 22:40:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352903.579772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q3j-0007ke-B2; Mon, 20 Jun 2022 22:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352903.579772; Mon, 20 Jun 2022 22:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q3j-0007kP-4q; Mon, 20 Jun 2022 22:39:51 +0000
Received: by outflank-mailman (input) for mailman id 352903;
 Mon, 20 Jun 2022 22:39:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B8m2=W3=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o3Q3i-0007k3-4k
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 22:39:50 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2f02af7-f0e9-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 00:39:48 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 5721E2014F;
 Tue, 21 Jun 2022 00:39:46 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id wv5TOJiU0PRU; Tue, 21 Jun 2022 00:39:46 +0200 (CEST)
Received: from begin.home (anantes-655-1-33-15.w83-195.abo.wanadoo.fr
 [83.195.225.15])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 1259420137;
 Tue, 21 Jun 2022 00:39:45 +0200 (CEST)
Received: from samy by begin.home with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o3Q3d-00B6Ec-JF;
 Tue, 21 Jun 2022 00:39:45 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2f02af7-f0e9-11ec-b725-ed86ccbb4733
Date: Tue, 21 Jun 2022 00:39:45 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH 0/8] mini-os: some cleanup patches
Message-ID: <20220620223945.mjnn3kulnyhi4xfn@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220620073820.9336-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220620073820.9336-1-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Hello,

Juergen Gross, on Mon. 20 Jun 2022 09:38:12 +0200, wrote:
> Do some cleanups.
> 
> No functional change intended; apart from whitespace changes, only minor
> modifications making the code easier to read.
> 
> Juergen Gross (8):
>   mini-os: drop xenbus directory
>   mini-os: apply coding style to xenbus.c
>   mini-os: eliminate console/console.h
>   mini-os: rename console/xenbus.c to consfront.c
>   mini-os: apply coding style to consfront.c
>   mini-os: eliminate console directory
>   mini-os: apply coding style to console.c
>   mini-os: add mini-os-debug[.gz] to .gitignore

For the whole series:

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks!

>  .gitignore                      |   2 +
>  Makefile                        |   9 +-
>  console/xenbus.c => consfront.c |  99 ++++---
>  console.c                       | 415 ++++++++++++++++++++++++++
>  console/console.c               | 177 -----------
>  console/console.h               |   2 -
>  console/xencons_ring.c          | 238 ---------------
>  include/console.h               |   1 +
>  xenbus/xenbus.c => xenbus.c     | 510 +++++++++++++++++++-------------
>  9 files changed, 778 insertions(+), 675 deletions(-)
>  rename console/xenbus.c => consfront.c (78%)
>  create mode 100644 console.c
>  delete mode 100644 console/console.c
>  delete mode 100644 console/console.h
>  delete mode 100644 console/xencons_ring.c
>  rename xenbus/xenbus.c => xenbus.c (71%)
> 
> -- 
> 2.35.3
> 


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 22:44:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 22:44:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352915.579787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q7p-0000qG-4z; Mon, 20 Jun 2022 22:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352915.579787; Mon, 20 Jun 2022 22:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q7o-0000pi-Vi; Mon, 20 Jun 2022 22:44:04 +0000
Received: by outflank-mailman (input) for mailman id 352915;
 Mon, 20 Jun 2022 22:44:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B8m2=W3=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o3Q7n-0000pC-UL
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 22:44:03 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79df0d36-f0ea-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 00:44:02 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id CED232014F;
 Tue, 21 Jun 2022 00:43:58 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id rWJLcV2fRi5A; Tue, 21 Jun 2022 00:43:58 +0200 (CEST)
Received: from begin.home (anantes-655-1-33-15.w83-195.abo.wanadoo.fr
 [83.195.225.15])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id B95C720148;
 Tue, 21 Jun 2022 00:43:58 +0200 (CEST)
Received: from samy by begin.home with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o3Q7j-00B6Jd-Bt;
 Tue, 21 Jun 2022 00:43:59 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79df0d36-f0ea-11ec-bd2d-47488cf2e6aa
Date: Tue, 21 Jun 2022 00:43:59 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH v2 3/4] mini-os: fix number of pages for PVH
Message-ID: <20220620224359.qbpojkdwbxbsfcv3@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220619065253.19503-1-jgross@suse.com>
 <20220619065253.19503-4-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220619065253.19503-4-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Sun. 19 Jun 2022 08:52:52 +0200, wrote:
> When getting the current allocation from Xen, this value includes the
> pages allocated in the MMIO area. Fix the highest available RAM page
> by subtracting the size of that area.
> 
> This requires reading the E820 map before this value is needed. Add two
> functions returning the current and the maximum number of RAM pages,
> taking this correction into account.
> 
> At the same time add the LAPIC page to the memory map in order to
> avoid reusing that PFN for internal purposes.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - make e820_initial_reserved_pfns static (Samuel Thibault)
> - add e820_get_current_pages() and e820_get_max_pages()
> ---

> diff --git a/include/e820.h b/include/e820.h
> index 5438a7c8..6f15fcd2 100644
> --- a/include/e820.h
> +++ b/include/e820.h
> @@ -52,6 +52,8 @@ struct __packed e820entry {
>  extern struct e820entry e820_map[];
>  extern unsigned e820_entries;
>  
> +unsigned int e820_get_current_pages(void);
> +unsigned int e820_get_max_pages(void);

Why an int rather than a long int? Yes, 4 TiB of memory is large for
mini-os, but wouldn't it be better to keep page counts as a long?

Apart from that,

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Samuel


From xen-devel-bounces@lists.xenproject.org Mon Jun 20 22:45:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jun 2022 22:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352924.579801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q94-0001VE-HW; Mon, 20 Jun 2022 22:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352924.579801; Mon, 20 Jun 2022 22:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Q94-0001V7-EV; Mon, 20 Jun 2022 22:45:22 +0000
Received: by outflank-mailman (input) for mailman id 352924;
 Mon, 20 Jun 2022 22:45:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B8m2=W3=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o3Q92-0001T5-Rc
 for xen-devel@lists.xenproject.org; Mon, 20 Jun 2022 22:45:20 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8e4bc60-f0ea-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 00:45:19 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id A705C2014F;
 Tue, 21 Jun 2022 00:45:18 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id x8bigerdlSdi; Tue, 21 Jun 2022 00:45:18 +0200 (CEST)
Received: from begin.home (anantes-655-1-33-15.w83-195.abo.wanadoo.fr
 [83.195.225.15])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 8E1D820148;
 Tue, 21 Jun 2022 00:45:18 +0200 (CEST)
Received: from samy by begin.home with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o3Q90-00B6Ki-5w;
 Tue, 21 Jun 2022 00:45:18 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8e4bc60-f0ea-11ec-bd2d-47488cf2e6aa
Date: Tue, 21 Jun 2022 00:45:18 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH v2 4/4] mini-os: fix bug in ballooning on PVH
Message-ID: <20220620224518.xfm426w5gcu322sh@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20220619065253.19503-1-jgross@suse.com>
 <20220619065253.19503-5-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220619065253.19503-5-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Sun. 19 Jun 2022 08:52:53 +0200, wrote:
> There is a subtle bug in the ballooning code for PVH: in case ballooning
> extends above a non-RAM area of the memory map, the wrong pages will be
> used.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> V2:
> - new patch
> ---
>  balloon.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/balloon.c b/balloon.c
> index 6ad07644..55be8141 100644
> --- a/balloon.c
> +++ b/balloon.c
> @@ -124,7 +124,7 @@ int balloon_up(unsigned long n_pages)
>      for ( pfn = 0; pfn < rc; pfn++ )
>      {
>          arch_pfn_add(start_pfn + pfn, balloon_frames[pfn]);
> -        free_page(pfn_to_virt(nr_mem_pages + pfn));
> +        free_page(pfn_to_virt(start_pfn + pfn));
>      }
>  
>      nr_mem_pages += rc;
> -- 
> 2.35.3
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 02:43:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 02:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352938.579813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Tqj-0006AW-3b; Tue, 21 Jun 2022 02:42:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352938.579813; Tue, 21 Jun 2022 02:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Tqi-0006AO-VJ; Tue, 21 Jun 2022 02:42:40 +0000
Received: by outflank-mailman (input) for mailman id 352938;
 Tue, 21 Jun 2022 02:42:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Tqh-0006AE-GN; Tue, 21 Jun 2022 02:42:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Tqh-0001CV-A3; Tue, 21 Jun 2022 02:42:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Tqg-0001ZH-OU; Tue, 21 Jun 2022 02:42:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Tqg-00067H-MR; Tue, 21 Jun 2022 02:42:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4koJaN4M2GrZXr2+Q/Nd8uMxxjK/82E2gcINQcfg/ro=; b=B57ztRn0SNPe0uTu9RE6yqBM0+
	oPQXrCsoaFRrCZAF7MK5terCtIHNpfQ8KRnWT4pM+xtK+ToAqeyc8nXVnH7fYd3ENF1X87ASI8OzG
	AIVfhCycNn8gmu+1k438QGxPBJHTAgDQoPCMOIOffyi9QIg5SQ/zxjqKT5GwoooWuLTo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171293: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:host-install(5):broken:regression
    xen-unstable:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
X-Osstest-Versions-That:
    xen=c9040f25be317ab2f7647605397d79313e3f303e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 02:42:38 +0000

flight 171293 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171293/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd12-amd64    <job status>                 broken
 test-amd64-amd64-qemuu-freebsd12-amd64 5 host-install(5) broken REGR. vs. 171283
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 171283

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171283
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171283
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171283
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171283
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171283
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171283
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171283
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171283
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171283
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171283  2022-06-20 01:52:00 Z    1 days
Testing same since   171293  2022-06-20 19:09:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       broken  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-qemuu-freebsd12-amd64 broken
broken-step test-amd64-amd64-qemuu-freebsd12-amd64 host-install(5)

Not pushing.

------------------------------------------------------------
commit 9d067857d1ff6805608aac4d9c0ea1c848b2e637
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Mon Jun 20 14:35:00 2022 +0200

    tools/include: drop leading underscore from xen_list header
    
    A leading underscore is used to indicate auto generated headers, and
    the clean use of 'rm -f _*.h' will remove those.  _xen_list.h also
    uses a leading underscore, but is checked in the repo and as such
    cannot be removed as part of the clean rule.
    
    Fix this by dropping the leading underscore, so that the header is not
    removed.
    
    Fixes: a03b3552d4 ('libs,tools/include: Clean "clean" targets')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 03:47:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 03:47:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352950.579824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Urf-0003sQ-1g; Tue, 21 Jun 2022 03:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352950.579824; Tue, 21 Jun 2022 03:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Ure-0003sJ-UH; Tue, 21 Jun 2022 03:47:42 +0000
Received: by outflank-mailman (input) for mailman id 352950;
 Tue, 21 Jun 2022 03:47:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Ure-0003s9-B7; Tue, 21 Jun 2022 03:47:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Ure-0002Hd-7i; Tue, 21 Jun 2022 03:47:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Urd-0003AF-SC; Tue, 21 Jun 2022 03:47:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Urd-0005E7-Rk; Tue, 21 Jun 2022 03:47:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BiiFDRvpd6l6BdEozSo9rhNDwgll50wCgufSRG1MZOM=; b=CwLOZwr14NFFUmbp3ZC5QKCRKh
	14IBvcK1j1I+IsX9+yTlBmvjrXxkTTZxdIfke7YRmYX4p6PtlCLOYgOvtI+IXWCGdzMfp2MkfheVt
	7/03jlWMcr9vDgFJ1OXBsQA0BzN1DQf8A7CcsFrieOYaIikarf27Od6x/D530ITSc84Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171294-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171294: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=78ca55889a549a9a194c6ec666836329b774ab6d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 03:47:41 +0000

flight 171294 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171294/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                78ca55889a549a9a194c6ec666836329b774ab6d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    2 days
Failing since        171280  2022-06-19 15:12:25 Z    1 days    5 attempts
Testing same since   171294  2022-06-20 21:11:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Rientjes <rientjes@google.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1865 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 06:20:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 06:20:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352966.579835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3XEz-0001nE-6Z; Tue, 21 Jun 2022 06:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352966.579835; Tue, 21 Jun 2022 06:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3XEz-0001n7-2u; Tue, 21 Jun 2022 06:19:57 +0000
Received: by outflank-mailman (input) for mailman id 352966;
 Tue, 21 Jun 2022 06:19:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3XEx-0001n1-Ue
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 06:19:56 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a4406eb-f12a-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 08:19:54 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B2D611FD7B;
 Tue, 21 Jun 2022 06:19:53 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 674BB13A88;
 Tue, 21 Jun 2022 06:19:53 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PlLmFwljsWLiRwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 06:19:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a4406eb-f12a-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655792393; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=HHS8XoF+R1u8s5svpxcVHVvoIwH1iD1DpcdKbGMjXis=;
	b=gFgsTTR2/H96nB0WMFb+fAdAZ1GVCP2sjYfUjcs3V42HYABcIVB13/BTn2QqoW2JQlGvi0
	caHnRgncv07t6uA6D0MfU8+uADAjQMjdfaXNSjoEf86bhMjCS73BBpYz0GRuiEam3QT1Fk
	Iaa3Eu78mBxVgowd0ekkJSBKEhAEtrs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] maintainers: add me as reviewer for Mini-OS
Date: Tue, 21 Jun 2022 08:19:52 +0200
Message-Id: <20220621061952.6673-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I have been the main contributor to Mini-OS for several years now.

Add myself as reviewer.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index a0b0d88ea4..8a99526784 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -419,6 +419,7 @@ F:	xen/test/livepatch/*
 MINI-OS
 M:	Samuel Thibault <samuel.thibault@ens-lyon.org>
 R:	Wei Liu <wl@xen.org>
+R:	Juergen Gross <jgross@suse.com>
 S:	Supported
 L:	minios-devel@lists.xenproject.org
 T:	git https://xenbits.xenproject.org/git-http/mini-os.git
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 06:38:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 06:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352975.579846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3XWq-0004A4-Pm; Tue, 21 Jun 2022 06:38:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352975.579846; Tue, 21 Jun 2022 06:38:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3XWq-00049x-Kx; Tue, 21 Jun 2022 06:38:24 +0000
Received: by outflank-mailman (input) for mailman id 352975;
 Tue, 21 Jun 2022 06:38:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Od3y=W4=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1o3XWo-00049r-UZ
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 06:38:22 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdec9bca-f12c-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 08:38:21 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id A46CC20159;
 Tue, 21 Jun 2022 08:38:20 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 3x1XNA8Raift; Tue, 21 Jun 2022 08:38:20 +0200 (CEST)
Received: from begin (nat-inria-interne-52-gw-01-bso.bordeaux.inria.fr
 [194.199.1.52])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 44ECA20151;
 Tue, 21 Jun 2022 08:38:20 +0200 (CEST)
Received: from samy by begin with local (Exim 4.95)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1o3XWl-008pA3-Ti;
 Tue, 21 Jun 2022 08:38:19 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdec9bca-f12c-11ec-bd2d-47488cf2e6aa
Date: Tue, 21 Jun 2022 08:38:19 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] maintainers: add me as reviewer for Mini-OS
Message-ID: <20220621063819.6ksfam6k6yqqdt5j@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220621061952.6673-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220621061952.6673-1-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Tue, 21 Jun 2022 08:19:52 +0200, wrote:
> I have been the main contributor to Mini-OS for several years now.
> 
> Add myself as reviewer.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks!

> ---
>  MAINTAINERS | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a0b0d88ea4..8a99526784 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -419,6 +419,7 @@ F:	xen/test/livepatch/*
>  MINI-OS
>  M:	Samuel Thibault <samuel.thibault@ens-lyon.org>
>  R:	Wei Liu <wl@xen.org>
> +R:	Juergen Gross <jgross@suse.com>
>  S:	Supported
>  L:	minios-devel@lists.xenproject.org
>  T:	git https://xenbits.xenproject.org/git-http/mini-os.git
> -- 
> 2.35.3
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:18:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352986.579861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Y9Z-0008TB-Vh; Tue, 21 Jun 2022 07:18:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352986.579861; Tue, 21 Jun 2022 07:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Y9Z-0008T4-Rz; Tue, 21 Jun 2022 07:18:25 +0000
Received: by outflank-mailman (input) for mailman id 352986;
 Tue, 21 Jun 2022 07:18:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3Y9X-0008R4-SQ
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:18:23 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 541bbbc6-f132-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 09:18:20 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 60DA121EC7;
 Tue, 21 Jun 2022 07:18:20 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3EBBF13638;
 Tue, 21 Jun 2022 07:18:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Min+DbxwsWJ1YQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:18:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 541bbbc6-f132-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655795900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FYzXzyNLzxsDxkplJxEXVmQtR/+WRwMen2LYGkswUAg=;
	b=UedvPFVXUz6Pwwbt3p91AyHRx91X3F5psqWH2mwSrFcUT50qda33Clpq2B4JDw27mDvuvn
	NkaasDvlrfPN1h/ZM+f05QF36AbWikkTQJ0wJcAcjr7TrSDXxr/ZHrcwQLs9UCe5VkmSmU
	36IY33UU5yvCwPFrh1x6C6iKw0tGTl8=
Message-ID: <d45f10c8-236b-3302-5cc5-9aba6dab2dea@suse.com>
Date: Tue, 21 Jun 2022 09:18:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v2 3/4] mini-os: fix number of pages for PVH
Content-Language: en-US
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org, wl@xen.org
References: <20220619065253.19503-1-jgross@suse.com>
 <20220619065253.19503-4-jgross@suse.com>
 <20220620224359.qbpojkdwbxbsfcv3@begin>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220620224359.qbpojkdwbxbsfcv3@begin>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------I43HiciVyFPAPnKWa290gyk6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------I43HiciVyFPAPnKWa290gyk6
Content-Type: multipart/mixed; boundary="------------PzYnehsueDrjJ0UbXmf7UCZQ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org, wl@xen.org
Message-ID: <d45f10c8-236b-3302-5cc5-9aba6dab2dea@suse.com>
Subject: Re: [PATCH v2 3/4] mini-os: fix number of pages for PVH
References: <20220619065253.19503-1-jgross@suse.com>
 <20220619065253.19503-4-jgross@suse.com>
 <20220620224359.qbpojkdwbxbsfcv3@begin>
In-Reply-To: <20220620224359.qbpojkdwbxbsfcv3@begin>

--------------PzYnehsueDrjJ0UbXmf7UCZQ
Content-Type: multipart/mixed; boundary="------------b1dMPg1H1UbvgQvcqzHbdAqC"

--------------b1dMPg1H1UbvgQvcqzHbdAqC
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21.06.22 00:43, Samuel Thibault wrote:
> Juergen Gross, on Sun, 19 Jun 2022 08:52:52 +0200, wrote:
>> When getting the current allocation from Xen, this value includes the
>> pages allocated in the MMIO area. Fix the highest available RAM page
>> by subtracting the size of that area.
>>
>> This requires reading the E820 map before needing this value. Add two
>> functions returning the current and the maximum number of RAM pages
>> taking this correction into account.
>>
>> At the same time add the LAPIC page to the memory map in order to
>> avoid reusing that PFN for internal purposes.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - make e820_initial_reserved_pfns static (Samuel Thibault)
>> - add e820_get_current_pages() and e820_get_max_pages()
>> ---
>
>> diff --git a/include/e820.h b/include/e820.h
>> index 5438a7c8..6f15fcd2 100644
>> --- a/include/e820.h
>> +++ b/include/e820.h
>> @@ -52,6 +52,8 @@ struct __packed e820entry {
>>   extern struct e820entry e820_map[];
>>   extern unsigned e820_entries;
>>
>> +unsigned int e820_get_current_pages(void);
>> +unsigned int e820_get_max_pages(void);
>
> Why an int rather than a long int? Yes 4TiB memory is large for mini-os,
> but better keep numbers of pages a long?

I don't think it matters that much (currently Mini-OS can't support more
than 512GiB of memory), but I can change the functions to unsigned long.

>
> Apart from that,
>
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks,


Juergen
--------------b1dMPg1H1UbvgQvcqzHbdAqC
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------b1dMPg1H1UbvgQvcqzHbdAqC--

--------------PzYnehsueDrjJ0UbXmf7UCZQ--

--------------I43HiciVyFPAPnKWa290gyk6
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKxcLsFAwAAAAAACgkQsN6d1ii/Ey8f
VQf/ZilBm1MDU6FR6KXhb73Zic/eJtTqV1m8kZsrdu0NdvPfOZExXl+9cqYG0DzYrXLBcYLtM5Ik
NWSTJMJTzvrGsjwiu6SJRY05jqYmSz1GNWFa5DP3bdbyRt1tQjJxw6XalYh4Kce4VduS2x9hVezH
AI4iP05C2m+9WDxhSJxbn3AQfOgdwkfCnqx5hjXTn/lBSlWqlQdkgYL3qWpL7ZfOQe3Dzfb7pm5z
2qBywR9lJDWIlAOfw0rv1697tuI2Fy4Ebj2p3AowoEH/F8GYbTz6nI0XLXfOlcrRxP3cuuLQPjvv
HLW+Cq3kxJ/XcRGgbPfMig1MNcaNv2HsXUL8tITOuw==
=cLAS
-----END PGP SIGNATURE-----

--------------I43HiciVyFPAPnKWa290gyk6--


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:23:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352997.579880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEI-0001VC-OL; Tue, 21 Jun 2022 07:23:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352997.579880; Tue, 21 Jun 2022 07:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEI-0001Us-HI; Tue, 21 Jun 2022 07:23:18 +0000
Received: by outflank-mailman (input) for mailman id 352997;
 Tue, 21 Jun 2022 07:23:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3YEI-0001Tv-0r
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:23:18 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 043a01cd-f133-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 09:23:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0C7F11FDD0;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D5A5C13638;
 Tue, 21 Jun 2022 07:23:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oPDsMuNxsWLDYwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:23:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 043a01cd-f133-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655796196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=r8IJi4G0dpuEL3/1eath/8IW/jxqJG8WThQocwvM6XY=;
	b=JbgVOS9Dda7Ko5UNhr3J2zWh2GDx6L1G91NAkHiWRYhsp3Nk6arhiDa+7Wve+E5W7n3Xwm
	DymTMv4BPlA24vH+ZdirA/FbVmmqBhKKZYqSor33oTbcrJGJLla5S6XTtQOf3lIQivU+oc
	7a0w2McbX8/6S7fxWk+vaWUNqTwJgw0=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 0/4] mini-os: some memory map updates for PVH
Date: Tue, 21 Jun 2022 09:23:10 +0200
Message-Id: <20220621072314.16382-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some memory map related changes/fixes for PVH mode:

- Prefer the memory map delivered via start-info over the one obtained
  from the hypervisor. This is a prerequisite for Xenstore-stubdom
  live-update when raising the memory limit.

- Fix a bug related to ballooning in PVH mode: PVH Xenstore-stubdom
  can't read its target memory size from Xenstore, as this would
  introduce a chicken-and-egg problem. The memory size read from the
  hypervisor, on the other hand, includes additional "special" pages
  marked as reserved in the memory map. Those pages need to be
  subtracted from the read size.

- Fix a bug in the ballooning code in PVH mode when using memory beyond
  a RAM hole in the memory map.

Changes in V3:
- minor comment for patch 3 addressed

Changes in V2:
- added patch 4
- addressed comment regarding patch 3

Juergen Gross (4):
  mini-os: take newest version of arch-x86/hvm/start_info.h
  mini-os: prefer memory map via start_info for PVH
  mini-os: fix number of pages for PVH
  mini-os: fix bug in ballooning on PVH

 arch/x86/mm.c                         | 23 ++++----
 balloon.c                             | 18 ++----
 e820.c                                | 83 ++++++++++++++++++++++++---
 include/e820.h                        |  6 ++
 include/x86/arch_mm.h                 |  2 +
 include/xen/arch-x86/hvm/start_info.h | 63 +++++++++++++++++++-
 6 files changed, 163 insertions(+), 32 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:23:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353001.579910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEL-0002AU-UA; Tue, 21 Jun 2022 07:23:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353001.579910; Tue, 21 Jun 2022 07:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEL-00029t-MX; Tue, 21 Jun 2022 07:23:21 +0000
Received: by outflank-mailman (input) for mailman id 353001;
 Tue, 21 Jun 2022 07:23:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3YEK-0001Tv-0z
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:23:20 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 045eec7a-f133-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 09:23:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 4CA7B1FA5D;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 134A113638;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4LFqA+RxsWLDYwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 045eec7a-f133-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655796196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5vH2NqyFMDYKMsu2Ud2YN8B9pnuLSEdOQPWCQhYC+kA=;
	b=Ji4OgDXCs3g7pOm0MGuTzpDGx1npbuTC62kRMt6Gxvr7skYlXuuXUJh/Cm1qLDKrwktTHq
	01x+eS3hEqQc0AymmVZ96+E6bH8EPmy65pIK/8s9A347lkN4ri9T+Lk34VllN6b4xoDtHF
	xOeISvMnLfIO1emAV7Qy7f9FezSFd/s=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 1/4] mini-os: take newest version of arch-x86/hvm/start_info.h
Date: Tue, 21 Jun 2022 09:23:11 +0200
Message-Id: <20220621072314.16382-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220621072314.16382-1-jgross@suse.com>
References: <20220621072314.16382-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update include/xen/arch-x86/hvm/start_info.h to the newest version
from the Xen tree.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 include/xen/arch-x86/hvm/start_info.h | 63 ++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/include/xen/arch-x86/hvm/start_info.h b/include/xen/arch-x86/hvm/start_info.h
index 64841597..50af9ea2 100644
--- a/include/xen/arch-x86/hvm/start_info.h
+++ b/include/xen/arch-x86/hvm/start_info.h
@@ -33,7 +33,7 @@
  *    | magic          | Contains the magic value XEN_HVM_START_MAGIC_VALUE
  *    |                | ("xEn3" with the 0x80 bit of the "E" set).
  *  4 +----------------+
- *    | version        | Version of this structure. Current version is 0. New
+ *    | version        | Version of this structure. Current version is 1. New
  *    |                | versions are guaranteed to be backwards-compatible.
  *  8 +----------------+
  *    | flags          | SIF_xxx flags.
@@ -48,6 +48,15 @@
  * 32 +----------------+
  *    | rsdp_paddr     | Physical address of the RSDP ACPI data structure.
  * 40 +----------------+
+ *    | memmap_paddr   | Physical address of the (optional) memory map. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 48 +----------------+
+ *    | memmap_entries | Number of entries in the memory map table. Zero
+ *    |                | if there is no memory map being provided. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 52 +----------------+
+ *    | reserved       | Version 1 and newer only.
+ * 56 +----------------+
  *
  * The layout of each entry in the module structure is the following:
  *
@@ -62,13 +71,51 @@
  *    | reserved       |
  * 32 +----------------+
  *
+ * The layout of each entry in the memory map table is as follows:
+ *
+ *  0 +----------------+
+ *    | addr           | Base address
+ *  8 +----------------+
+ *    | size           | Size of mapping in bytes
+ * 16 +----------------+
+ *    | type           | Type of mapping as defined between the hypervisor
+ *    |                | and guest. See XEN_HVM_MEMMAP_TYPE_* values below.
+ * 20 +----------------|
+ *    | reserved       |
+ * 24 +----------------+
+ *
  * The address and sizes are always a 64bit little endian unsigned integer.
  *
  * NB: Xen on x86 will always try to place all the data below the 4GiB
  * boundary.
+ *
+ * Version numbers of the hvm_start_info structure have evolved like this:
+ *
+ * Version 0:  Initial implementation.
+ *
+ * Version 1:  Added the memmap_paddr/memmap_entries fields (plus 4 bytes of
+ *             padding) to the end of the hvm_start_info struct. These new
+ *             fields can be used to pass a memory map to the guest. The
+ *             memory map is optional and so guests that understand version 1
+ *             of the structure must check that memmap_entries is non-zero
+ *             before trying to read the memory map.
  */
 #define XEN_HVM_START_MAGIC_VALUE 0x336ec578
 
+/*
+ * The values used in the type field of the memory map table entries are
+ * defined below and match the Address Range Types as defined in the "System
+ * Address Map Interfaces" section of the ACPI Specification. Please refer to
+ * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications
+ */
+#define XEN_HVM_MEMMAP_TYPE_RAM       1
+#define XEN_HVM_MEMMAP_TYPE_RESERVED  2
+#define XEN_HVM_MEMMAP_TYPE_ACPI      3
+#define XEN_HVM_MEMMAP_TYPE_NVS       4
+#define XEN_HVM_MEMMAP_TYPE_UNUSABLE  5
+#define XEN_HVM_MEMMAP_TYPE_DISABLED  6
+#define XEN_HVM_MEMMAP_TYPE_PMEM      7
+
 /*
  * C representation of the x86/HVM start info layout.
  *
@@ -86,6 +133,13 @@ struct hvm_start_info {
     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
                                 /* structure.                                */
+    /* All following fields only present in version 1 and newer */
+    uint64_t memmap_paddr;      /* Physical address of an array of           */
+                                /* hvm_memmap_table_entry.                   */
+    uint32_t memmap_entries;    /* Number of entries in the memmap table.    */
+                                /* Value will be zero if there is no memory  */
+                                /* map being provided.                       */
+    uint32_t reserved;          /* Must be zero.                             */
 };
 
 struct hvm_modlist_entry {
@@ -95,4 +149,11 @@ struct hvm_modlist_entry {
     uint64_t reserved;
 };
 
+struct hvm_memmap_table_entry {
+    uint64_t addr;              /* Base address of the memory region         */
+    uint64_t size;              /* Size of the memory region in bytes        */
+    uint32_t type;              /* Mapping type                              */
+    uint32_t reserved;          /* Must be zero for Version 1.               */
+};
+
 #endif /* __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:23:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353000.579906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEL-000252-C4; Tue, 21 Jun 2022 07:23:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353000.579906; Tue, 21 Jun 2022 07:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEL-00024v-8H; Tue, 21 Jun 2022 07:23:21 +0000
Received: by outflank-mailman (input) for mailman id 353000;
 Tue, 21 Jun 2022 07:23:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3YEJ-0001Tw-E4
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:23:19 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04e57763-f133-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 09:23:17 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D95C81FDE8;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AD8C613A8F;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KJkaKeRxsWLDYwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04e57763-f133-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655796196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iK4xpyO/24q/Du+ofA9DMrmGHYHqE1QdZ+2SGjY3+1E=;
	b=TLm0LLGu8RVX2fv2/nucY1GzNv04PipwpCSX20R8nj8u2d2/GfARxKo3E3i12mBoX3C118
	AJo8fKEsKkVQ/dzJsvG2yVV2e85TyQydpxL8HBcVMQ4Ay2tzCscsIf0usqacKMU8nnkGoz
	/1lCFwCJKeqkZ4qT7cwGYYBMhQVlchE=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 4/4] mini-os: fix bug in ballooning on PVH
Date: Tue, 21 Jun 2022 09:23:14 +0200
Message-Id: <20220621072314.16382-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220621072314.16382-1-jgross@suse.com>
References: <20220621072314.16382-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is a subtle bug in the ballooning code for PVH: in case ballooning
extends above a non-RAM area of the memory map, the wrong pages will be
used.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
V2:
- new patch
---
 balloon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/balloon.c b/balloon.c
index 6ad07644..55be8141 100644
--- a/balloon.c
+++ b/balloon.c
@@ -124,7 +124,7 @@ int balloon_up(unsigned long n_pages)
     for ( pfn = 0; pfn < rc; pfn++ )
     {
         arch_pfn_add(start_pfn + pfn, balloon_frames[pfn]);
-        free_page(pfn_to_virt(nr_mem_pages + pfn));
+        free_page(pfn_to_virt(start_pfn + pfn));
     }
 
     nr_mem_pages += rc;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:23:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.352998.579891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEK-0001no-0Z; Tue, 21 Jun 2022 07:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 352998.579891; Tue, 21 Jun 2022 07:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEJ-0001nh-TQ; Tue, 21 Jun 2022 07:23:19 +0000
Received: by outflank-mailman (input) for mailman id 352998;
 Tue, 21 Jun 2022 07:23:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3YEI-0001Tw-E6
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:23:18 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04ad967a-f133-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 09:23:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7FFF61FDE7;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 467B613AAA;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iLsBEORxsWLDYwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04ad967a-f133-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655796196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sqQB2mee0jCv15NOcWCx6SVOgjnvvSzpgTYbgdXHN/4=;
	b=R8+VFBmuL4/bHanaZhB9KH4TxeG5iLgiaOJ8U3o57yzJGOc0as9jZLbVigjvVcC4+0cbgl
	TSTReEqQZCWcw9xQCgLJi4zzqFL/jptDJ/dUnHi5EIwi3EG1czuqVXbSJiPAVbxf5IjkcV
	2EGmoM4jmIVX9egQC/0YUF+hF4SquTk=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 2/4] mini-os: prefer memory map via start_info for PVH
Date: Tue, 21 Jun 2022 09:23:12 +0200
Message-Id: <20220621072314.16382-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220621072314.16382-1-jgross@suse.com>
References: <20220621072314.16382-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For some time now, a guest started in PVH mode has been receiving the
memory map from Xen via the start_info structure.

Modify the PVH initialization to prefer this memory map over the one
obtained via hypercall, as this will allow adding information to the
memory map for a new kernel when supporting kexec.

In case the start_info structure doesn't contain memory map information,
fall back to the hypercall.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 arch/x86/mm.c  |  6 ++++++
 e820.c         | 25 +++++++++++++++++++++++++
 include/e820.h |  4 ++++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 220c0b4d..41fcee67 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -45,6 +45,7 @@
 #include <mini-os/xmalloc.h>
 #include <mini-os/e820.h>
 #include <xen/memory.h>
+#include <xen/arch-x86/hvm/start_info.h>
 
 #ifdef MM_DEBUG
 #define DEBUG(_f, _a...) \
@@ -108,6 +109,11 @@ void arch_mm_preinit(void *p)
 {
     long ret;
     domid_t domid = DOMID_SELF;
+    struct hvm_start_info *hsi = p;
+
+    if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
+        e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
+                         hsi->memmap_paddr, hsi->memmap_entries);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
diff --git a/e820.c b/e820.c
index 991ed382..ad91e00b 100644
--- a/e820.c
+++ b/e820.c
@@ -54,6 +54,7 @@ static char *e820_types[E820_TYPES] = {
     [E820_ACPI]     = "ACPI",
     [E820_NVS]      = "NVS",
     [E820_UNUSABLE] = "Unusable",
+    [E820_DISABLED] = "Disabled",
     [E820_PMEM]     = "PMEM"
 };
 
@@ -259,6 +260,30 @@ static void e820_get_memmap(void)
     e820_sanitize();
 }
 
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
+{
+    unsigned int i;
+
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_NVS != E820_NVS);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_UNUSABLE != E820_UNUSABLE);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_DISABLED != E820_DISABLED);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_PMEM != E820_PMEM);
+
+    for ( i = 0; i < num; i++ )
+    {
+        e820_map[i].addr = entry[i].addr;
+        e820_map[i].size = entry[i].size;
+        e820_map[i].type = entry[i].type;
+    }
+
+    e820_entries = num;
+
+    e820_sanitize();
+}
+
 void arch_print_memmap(void)
 {
     int i;
diff --git a/include/e820.h b/include/e820.h
index aaf2f2ca..5438a7c8 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -26,6 +26,8 @@
 
 #if defined(__arm__) || defined(__aarch64__) || defined(CONFIG_PARAVIRT)
 #define CONFIG_E820_TRIVIAL
+#else
+#include <xen/arch-x86/hvm/start_info.h>
 #endif
 
 /* PC BIOS standard E820 types and structure. */
@@ -34,6 +36,7 @@
 #define E820_ACPI         3
 #define E820_NVS          4
 #define E820_UNUSABLE     5
+#define E820_DISABLED     6
 #define E820_PMEM         7
 #define E820_TYPES        8
 
@@ -54,6 +57,7 @@ unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 #ifndef CONFIG_E820_TRIVIAL
 unsigned long e820_get_reserved_pfns(int pages);
 void e820_put_reserved_pfns(unsigned long start_pfn, int pages);
+void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num);
 #endif
 
 #endif /*__E820_HEADER*/
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 07:23:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 07:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353004.579936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEO-0002m4-I6; Tue, 21 Jun 2022 07:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353004.579936; Tue, 21 Jun 2022 07:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3YEO-0002lw-EQ; Tue, 21 Jun 2022 07:23:24 +0000
Received: by outflank-mailman (input) for mailman id 353004;
 Tue, 21 Jun 2022 07:23:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3YEM-0001Tw-EV
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 07:23:22 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04c46c7d-f133-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 09:23:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A5F851FDEA;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7AE9E13638;
 Tue, 21 Jun 2022 07:23:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oPOpHORxsWLDYwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 07:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04c46c7d-f133-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655796196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CgVNbVQGUCTaIYP+BHs4gAn3b+qv8h822SUhzQAjGoI=;
	b=gKyS74dTKyHkB9YN22Xr6mm5x9q1uigRxKb8vhOFbIlWet23p//Z/HjjJq2HSZvQBVPUL7
	Z/kfIrftMU9eVVZu249TT9mQ8NGohdfgpu/o2bDRNcCUzgkQ/x7CfbdPxz5TYZQgb4bEZA
	off+3BIq4riF5pNLFLzaK8KEPk3RzlU=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 3/4] mini-os: fix number of pages for PVH
Date: Tue, 21 Jun 2022 09:23:13 +0200
Message-Id: <20220621072314.16382-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220621072314.16382-1-jgross@suse.com>
References: <20220621072314.16382-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When getting the current allocation from Xen, this value includes the
pages allocated in the MMIO area. Fix the highest available RAM page
by subtracting the size of that area.

This requires reading the E820 map before this value is needed. Add two
functions returning the current and the maximum number of RAM pages,
taking this correction into account.

At the same time add the LAPIC page to the memory map in order to
avoid reusing that PFN for internal purposes.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
V2:
- make e820_initial_reserved_pfns static (Samuel Thibault)
- add e820_get_current_pages() and e820_get_max_pages()
V3:
- change return type of e820_get_current_pages() and e820_get_max_pages()
  to unsigned long (Samuel Thibault)
---
 arch/x86/mm.c         | 17 +++++--------
 balloon.c             | 16 +++---------
 e820.c                | 58 +++++++++++++++++++++++++++++++++++++------
 include/e820.h        |  2 ++
 include/x86/arch_mm.h |  2 ++
 5 files changed, 65 insertions(+), 30 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 41fcee67..cfc978f6 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -107,25 +107,20 @@ desc_ptr idt_ptr =
 
 void arch_mm_preinit(void *p)
 {
-    long ret;
-    domid_t domid = DOMID_SELF;
+    unsigned int pages;
     struct hvm_start_info *hsi = p;
 
     if ( hsi->version >= 1 && hsi->memmap_entries > 0 )
         e820_init_memmap((struct hvm_memmap_table_entry *)(unsigned long)
                          hsi->memmap_paddr, hsi->memmap_entries);
+    else
+        e820_init_memmap(NULL, 0);
 
     pt_base = page_table_base;
     first_free_pfn = PFN_UP(to_phys(&_end));
-    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
-    if ( ret < 0 )
-    {
-        xprintk("could not get memory size\n");
-        do_exit();
-    }
-
-    last_free_pfn = e820_get_maxpfn(ret);
-    balloon_set_nr_pages(ret, last_free_pfn);
+    pages = e820_get_current_pages();
+    last_free_pfn = e820_get_maxpfn(pages);
+    balloon_set_nr_pages(pages, last_free_pfn);
 }
 #endif
 
diff --git a/balloon.c b/balloon.c
index 9dc77c54..6ad07644 100644
--- a/balloon.c
+++ b/balloon.c
@@ -44,20 +44,12 @@ void balloon_set_nr_pages(unsigned long pages, unsigned long pfn)
 
 void get_max_pages(void)
 {
-    long ret;
-    domid_t domid = DOMID_SELF;
-
-    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
-    if ( ret < 0 )
+    nr_max_pages = e820_get_max_pages();
+    if ( nr_max_pages )
     {
-        printk("Could not get maximum pfn\n");
-        return;
+        printk("Maximum memory size: %ld pages\n", nr_max_pages);
+        nr_max_pfn = e820_get_maxpfn(nr_max_pages);
     }
-
-    nr_max_pages = ret;
-    printk("Maximum memory size: %ld pages\n", nr_max_pages);
-
-    nr_max_pfn = e820_get_maxpfn(nr_max_pages);
 }
 
 void mm_alloc_bitmap_remap(void)
diff --git a/e820.c b/e820.c
index ad91e00b..49b16878 100644
--- a/e820.c
+++ b/e820.c
@@ -29,6 +29,38 @@
 #include <mini-os/e820.h>
 #include <xen/memory.h>
 
+static unsigned long e820_initial_reserved_pfns;
+
+unsigned long e820_get_current_pages(void)
+{
+    domid_t domid = DOMID_SELF;
+    long ret;
+
+    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
+    if ( ret < 0 )
+    {
+        xprintk("could not get memory size\n");
+        do_exit();
+    }
+
+    return ret - e820_initial_reserved_pfns;
+}
+
+unsigned long e820_get_max_pages(void)
+{
+    domid_t domid = DOMID_SELF;
+    long ret;
+
+    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
+    if ( ret < 0 )
+    {
+        printk("Could not get maximum pfn\n");
+        return 0;
+    }
+
+    return ret - e820_initial_reserved_pfns;
+}
+
 #ifdef CONFIG_E820_TRIVIAL
 struct e820entry e820_map[1] = {
     {
@@ -40,10 +72,6 @@ struct e820entry e820_map[1] = {
 
 unsigned e820_entries = 1;
 
-static void e820_get_memmap(void)
-{
-}
-
 #else
 struct e820entry e820_map[E820_MAX];
 unsigned e820_entries;
@@ -199,6 +227,7 @@ static void e820_sanitize(void)
 {
     int i;
     unsigned long end, start;
+    bool found_lapic = false;
 
     /* Sanitize memory map in current form. */
     e820_process_entries();
@@ -238,8 +267,20 @@ static void e820_sanitize(void)
 
     /* Make remaining temporarily reserved entries permanently reserved. */
     for ( i = 0; i < e820_entries; i++ )
+    {
         if ( e820_map[i].type == E820_TMP_RESERVED )
             e820_map[i].type = E820_RESERVED;
+        if ( e820_map[i].type == E820_RESERVED )
+        {
+            e820_initial_reserved_pfns += e820_map[i].size / PAGE_SIZE;
+            if ( e820_map[i].addr <= LAPIC_ADDRESS &&
+                 e820_map[i].addr + e820_map[i].size > LAPIC_ADDRESS )
+                found_lapic = true;
+        }
+    }
+
+    if ( !found_lapic )
+        e820_insert_entry(LAPIC_ADDRESS, PAGE_SIZE, E820_RESERVED);
 }
 
 static void e820_get_memmap(void)
@@ -264,6 +305,12 @@ void e820_init_memmap(struct hvm_memmap_table_entry *entry, unsigned int num)
 {
     unsigned int i;
 
+    if ( !entry )
+    {
+        e820_get_memmap();
+        return;
+    }
+
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RESERVED != E820_RESERVED);
     BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
@@ -365,9 +412,6 @@ unsigned long e820_get_maxpfn(unsigned long pages)
     int i;
     unsigned long pfns = 0, start = 0;
 
-    if ( !e820_entries )
-        e820_get_memmap();
-
     for ( i = 0; i < e820_entries; i++ )
     {
         if ( e820_map[i].type != E820_RAM )
diff --git a/include/e820.h b/include/e820.h
index 5438a7c8..ffa15aa9 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -52,6 +52,8 @@ struct __packed e820entry {
 extern struct e820entry e820_map[];
 extern unsigned e820_entries;
 
+unsigned long e820_get_current_pages(void);
+unsigned long e820_get_max_pages(void);
 unsigned long e820_get_maxpfn(unsigned long pages);
 unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 #ifndef CONFIG_E820_TRIVIAL
diff --git a/include/x86/arch_mm.h b/include/x86/arch_mm.h
index ffbec5a8..a1b975dc 100644
--- a/include/x86/arch_mm.h
+++ b/include/x86/arch_mm.h
@@ -207,6 +207,8 @@ typedef unsigned long pgentry_t;
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)        (((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
+#define LAPIC_ADDRESS	CONST(0xfee00000)
+
 #ifndef __ASSEMBLY__
 /* Definitions for machine and pseudophysical addresses. */
 #ifdef __i386__
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:20:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:20:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353053.579947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Z7D-0001g4-6P; Tue, 21 Jun 2022 08:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353053.579947; Tue, 21 Jun 2022 08:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3Z7D-0001fM-1h; Tue, 21 Jun 2022 08:20:03 +0000
Received: by outflank-mailman (input) for mailman id 353053;
 Tue, 21 Jun 2022 08:20:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Z7B-0001X1-Kd; Tue, 21 Jun 2022 08:20:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Z7B-00085U-IK; Tue, 21 Jun 2022 08:20:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Z7B-0008IR-80; Tue, 21 Jun 2022 08:20:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3Z7B-0008P2-7Z; Tue, 21 Jun 2022 08:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NYBmr9/U2F5A0lcv8yzFYO4zwhVy2DYL3IcR+8iNIVA=; b=CXgbf1qCjz5IzKchknsBedpASj
	sud9egeh1u43l8oVXlJrejXZjdXApzAqj4Ne4z75MKGiHCeYbfDpPQAJH9elrIo5RHpkTq2Nv8irX
	CptSoqDAnvhu7DCSBhDBzbf8YUoMN5vSMnON7EojlMweq5V69ObWegGzymlRSy2TDiDk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171298-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171298: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cfe165140a7c140c2d2f382113abd6e9ac89ce77
X-Osstest-Versions-That:
    ovmf=e8034b534ab51635b62dca631514bb6305850a5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 08:20:01 +0000

flight 171298 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171298/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cfe165140a7c140c2d2f382113abd6e9ac89ce77
baseline version:
 ovmf                 e8034b534ab51635b62dca631514bb6305850a5a

Last test of basis   171286  2022-06-20 05:11:51 Z    1 days
Testing same since   171298  2022-06-21 04:40:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e8034b534a..cfe165140a  cfe165140a7c140c2d2f382113abd6e9ac89ce77 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:44:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:44:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353064.579962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZUG-0004n0-8N; Tue, 21 Jun 2022 08:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353064.579962; Tue, 21 Jun 2022 08:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZUG-0004mt-5T; Tue, 21 Jun 2022 08:43:52 +0000
Received: by outflank-mailman (input) for mailman id 353064;
 Tue, 21 Jun 2022 08:43:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ORL=W4=citrix.com=prvs=164d1f6c5=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3ZUE-0004kn-DG
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 08:43:50 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42a138f6-f13e-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 10:43:47 +0200 (CEST)
Received: from mail-dm6nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Jun 2022 04:43:43 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5671.namprd03.prod.outlook.com (2603:10b6:510:35::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Tue, 21 Jun
 2022 08:43:42 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Tue, 21 Jun 2022
 08:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42a138f6-f13e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655801027;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=FNrCyn8Mx/7tAkgrwEhkXNSohg3BRnwaYcikUPpOo2Q=;
  b=HRAwPfvKYFOh9e3FZjEBRMixfUJVxJIAXNEfa0RSXpMcUltXSEyMmz/b
   3diLSE72RL50J6tlMv13LWBOwcxVZr5YztbzXl1I902yMpZTlnMNnOc1G
   ZrcNyEfNOoXtL6W8OB7OS+7lQJ/jsZn0MgrZEj99uJsphO0sIRpou61YF
   k=;
X-IronPort-RemoteIP: 104.47.58.108
X-IronPort-MID: 74473539
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,209,1650945600"; 
   d="scan'208";a="74473539"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GVlTNc3qWhMJ2+IVzdSGMtm+jRwTiP3Ejc8RvdNu3bChdB32XTsXNdicvQ6fPD4vcdTfMsJbY9VJEvbT2pr+S6zu3uH2TJfSS2OXPfBLXtH+C4cnSR9nwCkyryhDF7e+wzYmKmVTF3J7nyQwnlXR7888btrJO6d7C+cF053KEG2VPbdrOenQ5KnP9wgzOB/3FqQoJnhzHsnyyUremVnBC+6cjpUAkRF+NSRfmZ1AVRJ1NMelhh4IokZeigqrsFPwTHRnGT7ICgQ9abDit9hR4tUPlR8cnn/NuoE5Bvnr01JrkmC61VgkVyf3I5xW3z7v2yO5c9km4xot2UWxYrQzCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FNrCyn8Mx/7tAkgrwEhkXNSohg3BRnwaYcikUPpOo2Q=;
 b=kVi+WNOn3gHalNykrmMeiGAKWtM6szLfElAaXQjOCbbjT98C3HdO2uBR1O1p1XA7SEqDjPIyYEcI0dIw3zzG3yFSEclwyK1Je5SuukphOP+FlQvEnEztVc4SiS2CKPCn5VQIg3K0SYDTpdWTruoUhVbwwXSnNe7CDTrr5HgJdE6HF0njrbDnmJfYijb9SDwamJfCtcriLstZvLlccRBc0zU6PYBk9pZRx/4JWEOZezen9dadkIMZRYd5hVStFSIG4/9rzuwq1ljTwHFFU4cb0h4h+bvDN6UXNpZNcEKrIkgjeKoCQRRH9ogUioIsGXlUdbrW3we03mTCiadpe5KPeg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FNrCyn8Mx/7tAkgrwEhkXNSohg3BRnwaYcikUPpOo2Q=;
 b=LzSHFxmDC3Iyrd84ZRufriv4EbYuOF0j4Js5MFpMcl3E0kXQrfKDycgxNDqiYHy0Zpao2oQDw6eHuG/s8yuj3toFiNAyysq5Jt3/Urj8Jsa44eZzDR7aobVc4SD6exYpVyU4Nz2DTExx+LzZfZ4hYPcNM3Fe7K0p3L3VLBrBiNo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, "minios-devel@lists.xenproject.org"
	<minios-devel@lists.xenproject.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v3 3/4] mini-os: fix number of pages for PVH
Thread-Topic: [PATCH v3 3/4] mini-os: fix number of pages for PVH
Thread-Index: AQHYhT/O8+kLDI1LXkCy8scmU4sLua1Zi4MA
Date: Tue, 21 Jun 2022 08:43:41 +0000
Message-ID: <ec0d19c7-cc28-3c2a-05a9-0cdecd5b9f56@citrix.com>
References: <20220621072314.16382-1-jgross@suse.com>
 <20220621072314.16382-4-jgross@suse.com>
In-Reply-To: <20220621072314.16382-4-jgross@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 11355091-74e2-4bc9-083c-08da536224f8
x-ms-traffictypediagnostic: PH0PR03MB5671:EE_
x-microsoft-antispam-prvs:
 <PH0PR03MB5671FA9AB2D6F5406502E857BAB39@PH0PR03MB5671.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EB86FB817E85AA4E84C6D11FA8547F2E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 11355091-74e2-4bc9-083c-08da536224f8
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2022 08:43:41.8012
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 8dt0ch4db4iPucSIvA2YiogfpP8YBlVt4v+uCTXsbbnlLSmfSM6+ujIay/ICXUbrG+/7z3Z86WHMVI/HwCZT5Zn9IHUhjRaTVaxsL2J7ozU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5671

On 21/06/2022 08:23, Juergen Gross wrote:
> diff --git a/e820.c b/e820.c
> index ad91e00b..49b16878 100644
> --- a/e820.c
> +++ b/e820.c
> @@ -29,6 +29,38 @@
>  #include <mini-os/e820.h>
>  #include <xen/memory.h>
>  
> +static unsigned long e820_initial_reserved_pfns;
> +
> +unsigned long e820_get_current_pages(void)
> +{
> +    domid_t domid = DOMID_SELF;
> +    long ret;
> +
> +    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
> +    if ( ret < 0 )
> +    {
> +        xprintk("could not get memory size\n");

%ld ret

Also, xprintk() vs ...

> +        do_exit();
> +    }
> +
> +    return ret - e820_initial_reserved_pfns;
> +}
> +
> +unsigned long e820_get_max_pages(void)
> +{
> +    domid_t domid = DOMID_SELF;
> +    long ret;
> +
> +    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
> +    if ( ret < 0 )
> +    {
> +        printk("Could not get maximum pfn\n");

... printk()?

Shouldn't they both be printk()?  Can fix both issues on commit.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:44:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:44:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353067.579973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZUS-00056a-HS; Tue, 21 Jun 2022 08:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353067.579973; Tue, 21 Jun 2022 08:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZUS-00056T-D7; Tue, 21 Jun 2022 08:44:04 +0000
Received: by outflank-mailman (input) for mailman id 353067;
 Tue, 21 Jun 2022 08:44:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L66N=W4=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3ZUR-0004kn-8t
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 08:44:03 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4c2c4069-f13e-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 10:44:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CEE221596;
 Tue, 21 Jun 2022 01:44:00 -0700 (PDT)
Received: from [10.57.35.142] (unknown [10.57.35.142])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 092BF3F5A1;
 Tue, 21 Jun 2022 01:43:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c2c4069-f13e-11ec-b725-ed86ccbb4733
Message-ID: <44179ffe-e3c4-d9ea-80fe-67cf7d946a34@arm.com>
Date: Tue, 21 Jun 2022 10:43:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
 <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 20.06.2022 11:54, Julien Grall wrote:
> Hi Michal,
> 
> On 20/06/2022 08:02, Michal Orzel wrote:
>> According to MISRA C 2012 Rule 8.1, types shall be explicitly
>> specified. Fix all the findings reported by cppcheck with misra addon
>> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
>>
>> Bump sysctl interface version.
> 
> The sysctl version should only be bumped if the ABI has changed. AFAICT switching from "unsigned" to "unsigned int" will not modify it, so I don't think this is necessary.

Sure, I can remove that in v2, but first I'd like to wait at least for the xsm patch to be reviewed.
Also, as these patches are not dependent on each other, do you think it is worth respinning the reviewed ones?

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:46:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353083.579983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZWa-00063z-4y; Tue, 21 Jun 2022 08:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353083.579983; Tue, 21 Jun 2022 08:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZWa-00063s-2F; Tue, 21 Jun 2022 08:46:16 +0000
Received: by outflank-mailman (input) for mailman id 353083;
 Tue, 21 Jun 2022 08:46:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3ZWY-00063R-2n
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 08:46:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3ZWX-00008w-9l; Tue, 21 Jun 2022 08:46:13 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.3.84])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3ZWX-0005MX-2D; Tue, 21 Jun 2022 08:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=HlOat/p4kwftn399ugNMIQWLcsVVBpfn74a44hAsaAw=; b=BVR4enZeTKbdXSDYtX+kyUdRKF
	fFnGHpMfOPjzRyvdXBKD1cXlacSxihIV5148dAccgZ4NqhsFNDsuA1Cbnn+W4sR5idvL5u33GrfXz
	HIFWmMZBTdxCCEJo1AtB8zvECBo5TZWDvI3XkB9MMUvFcftczQtxppgHgontYwcCRUBA=;
Message-ID: <ddf91f3f-74d7-b21d-de40-679d786c137a@xen.org>
Date: Tue, 21 Jun 2022 09:46:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
 <e91f6bd2-271c-12c1-ee7e-bea3d74c8beb@xen.org>
 <44179ffe-e3c4-d9ea-80fe-67cf7d946a34@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <44179ffe-e3c4-d9ea-80fe-67cf7d946a34@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 21/06/2022 09:43, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 20.06.2022 11:54, Julien Grall wrote:
>> Hi Michal,
>>
>> On 20/06/2022 08:02, Michal Orzel wrote:
>>> According to MISRA C 2012 Rule 8.1, types shall be explicitly
>>> specified. Fix all the findings reported by cppcheck with the MISRA addon
>>> by substituting the implicit type 'unsigned' with the explicit 'unsigned int'.
>>>
>>> Bump sysctl interface version.
>>
>> The sysctl version should only be bumped if the ABI has changed. AFAICT switching from "unsigned" to "unsigned int" will not modify it, so I don't think this is necessary.
> 
> Sure, I can remove that in v2, but first I'd like to wait at least for the xsm patch to be reviewed.
> Also, as these patches are not dependent on each other, do you think it is worth respinning the reviewed ones?

I would suggest waiting until you get input on all the patches before 
respinning.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:52:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:52:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353095.579994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZcX-0007Wc-SE; Tue, 21 Jun 2022 08:52:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353095.579994; Tue, 21 Jun 2022 08:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZcX-0007WV-Om; Tue, 21 Jun 2022 08:52:25 +0000
Received: by outflank-mailman (input) for mailman id 353095;
 Tue, 21 Jun 2022 08:52:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VA9I=W4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3ZcW-0007WK-L8
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 08:52:24 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 777d0fac-f13f-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 10:52:23 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C30E221EBA;
 Tue, 21 Jun 2022 08:52:22 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 933F713A88;
 Tue, 21 Jun 2022 08:52:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id D4WbIsaGsWLLEwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 21 Jun 2022 08:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 777d0fac-f13f-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655801542; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ecQl4gx2pJEVJl/2Ry2eGo14Mgun+bU1T/nT7aFLXns=;
	b=daqkP76PTZG8leaqnMXEZLXJTFWb27KGCY1yVxUpbpUGDYgSfq8lP4jm/ru9m7kVmRkP8I
	VUzfuIEVKxPujRfawkEiS7Yj2de4XS3PgDmAOAAyQ8egkrbbJIH77ODX/YJdITYVLlvcd+
	fK7PaRTV9c29nfBk8HdnJt9flx2ywWM=
Message-ID: <ea7d7837-785a-c543-a0c1-ad471c7e6d1c@suse.com>
Date: Tue, 21 Jun 2022 10:52:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "minios-devel@lists.xenproject.org" <minios-devel@lists.xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
 "wl@xen.org" <wl@xen.org>
References: <20220621072314.16382-1-jgross@suse.com>
 <20220621072314.16382-4-jgross@suse.com>
 <ec0d19c7-cc28-3c2a-05a9-0cdecd5b9f56@citrix.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3 3/4] mini-os: fix number of pages for PVH
In-Reply-To: <ec0d19c7-cc28-3c2a-05a9-0cdecd5b9f56@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------4PFqV1eEr0FcLU29tejucyFp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------4PFqV1eEr0FcLU29tejucyFp
Content-Type: multipart/mixed; boundary="------------EuYWG09dcRRmAm985LvvRqMV";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "minios-devel@lists.xenproject.org" <minios-devel@lists.xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
 "wl@xen.org" <wl@xen.org>
Message-ID: <ea7d7837-785a-c543-a0c1-ad471c7e6d1c@suse.com>
Subject: Re: [PATCH v3 3/4] mini-os: fix number of pages for PVH
References: <20220621072314.16382-1-jgross@suse.com>
 <20220621072314.16382-4-jgross@suse.com>
 <ec0d19c7-cc28-3c2a-05a9-0cdecd5b9f56@citrix.com>
In-Reply-To: <ec0d19c7-cc28-3c2a-05a9-0cdecd5b9f56@citrix.com>

--------------EuYWG09dcRRmAm985LvvRqMV
Content-Type: multipart/mixed; boundary="------------IVnimBOUk60PHtLClNttlB6l"

--------------IVnimBOUk60PHtLClNttlB6l
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21.06.22 10:43, Andrew Cooper wrote:
> On 21/06/2022 08:23, Juergen Gross wrote:
>> diff --git a/e820.c b/e820.c
>> index ad91e00b..49b16878 100644
>> --- a/e820.c
>> +++ b/e820.c
>> @@ -29,6 +29,38 @@
>>   #include <mini-os/e820.h>
>>   #include <xen/memory.h>
>>   
>> +static unsigned long e820_initial_reserved_pfns;
>> +
>> +unsigned long e820_get_current_pages(void)
>> +{
>> +    domid_t domid = DOMID_SELF;
>> +    long ret;
>> +
>> +    ret = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
>> +    if ( ret < 0 )
>> +    {
>> +        xprintk("could not get memory size\n");
> 
> %ld ret
> 
> Also, xprintk() vs ...
> 
>> +        do_exit();
>> +    }
>> +
>> +    return ret - e820_initial_reserved_pfns;
>> +}
>> +
>> +unsigned long e820_get_max_pages(void)
>> +{
>> +    domid_t domid = DOMID_SELF;
>> +    long ret;
>> +
>> +    ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
>> +    if ( ret < 0 )
>> +    {
>> +        printk("Could not get maximum pfn\n");
> 
> ... printk()?
> 
> Shouldn't they both be printk()?  Can fix both issues on commit.

e820_get_current_pages() is being called before console initialization,
so it should really use xprintk().

Adding the returned error might be interesting, though (even if an error
is very unlikely).

Note that the error message printing has just been moved from the previous
use case into the new function.


Juergen
--------------IVnimBOUk60PHtLClNttlB6l
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------IVnimBOUk60PHtLClNttlB6l--

--------------EuYWG09dcRRmAm985LvvRqMV--

--------------4PFqV1eEr0FcLU29tejucyFp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKxhsYFAwAAAAAACgkQsN6d1ii/Ey9N
0AgAnwy/5BcDtCtdKZvvrx1UN4UAEDqFkal51Fo47pBHjjPs5sRQgx+7eunLWbsKhwWDz1cz7d+9
o5H5/KxQJ8Nja2xovI5Q7v2JZg0oz/1tnJcxVHs42ZitNhv8vyqwTL7SoC0A+h0dhEBieIVq8LXm
1chIEh0OIjPWDWXUInyWcHXTgkHFIh7UCJ6/fOskeDk49/8I5OlBhGVPo5Mto8fdrITpHzAIsaaV
515tnuuPYRbXn0IUQ6DNkYDzaBW27jRNi1trFj9sTeMhSg31eTJ64Y2YxH5W8IJy1vhdgHH9HXzC
brkb2HO8u3xTbmuYX2M8GAClhPG8P6XPwZmn5dY03w==
=KXTE
-----END PGP SIGNATURE-----

--------------4PFqV1eEr0FcLU29tejucyFp--


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 08:55:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 08:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353105.580009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZfK-0008BE-Ax; Tue, 21 Jun 2022 08:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353105.580009; Tue, 21 Jun 2022 08:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ZfK-0008B7-8P; Tue, 21 Jun 2022 08:55:18 +0000
Received: by outflank-mailman (input) for mailman id 353105;
 Tue, 21 Jun 2022 08:55:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kiuJ=W4=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o3ZfI-0008B1-Kc
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 08:55:16 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de298202-f13f-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 10:55:15 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id g25so25964151ejh.9
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 01:55:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de298202-f13f-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=eOzuKK+tTZIrLrL5q8T8WECZpLq6F+5BgwSjIAXKYcY=;
        b=ar2qyV2hjB+y7/rIEfCgTbTHAoEnZP05QvnVp54XZkOCwK4YZRhipq6ZOqkDcNO9Qj
         /O+iqYrkCCyAPNAsjLLMHjJC2rs5F7YlgzHsB8X3BumGsmvyVfU3gZ3aTYHiiAPkSj7z
         leYR+oNbHFmvITPJFvkZ2iEKGU8RKJOEHkIZglAFGHHIl66T7ropitdfHCYK/7gUPzQf
         5eTDkdpEyEyHVfI7A/EqOOgPPAYV5E89k185t7x5gz+EfkO8use2zw3LRP2j1+H3lot9
         OKWey1daGXt67qNi+svKbpZEJUHEl56b5fOfqC+4qPVeZKTGcc7dtWt002c4N9D37qzp
         287Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=eOzuKK+tTZIrLrL5q8T8WECZpLq6F+5BgwSjIAXKYcY=;
        b=F8NZyvNtP00G1p94NdmNlLzefrGyLufJpZKhX13KhDXMvtQkfcj6Fe872nARtoCYAu
         RQjjK0ibTrbNjQ8QAPF08LjlSH0rnPKfsG4CuDkkr0oNZ7l7un/cXBo2dOHTQYSMK8Aw
         pyCYLSuo6M7jRKK2ufQfTDMRtoSo2e3WPZADKDIHsc25phxBnDcc4vaF0jHFEsYyNPOn
         saj90457ju5wKbGx9aUvdThjkcAPhsS+0AIqEeEambpfgY6V91mU6eQw7r7d9haKD5hS
         FE557O292Ut+ADxz/8ffEyWZQmylZEbJwZG1e+tIqwg5lB6H29UX8ZmRmchsLne+160z
         keaw==
X-Gm-Message-State: AJIora9eBhOBSv6gWm9rzCgOLfHx1dzpSihgT4Wn33lT5RoSF5PdPM/k
	PSeLxDPPYChvKYKoNO1LLMjAh9eejDISVD7T7g8=
X-Google-Smtp-Source: AGRyM1tOzwfjL2XPZP3bV66N7ngEnvmg4kOt4lsywRSoPdamcPnI8VsPGNlbp3PLMk8TOP5osx4+PZX9ImsIowYablY=
X-Received: by 2002:a17:906:5352:b0:712:3916:e92 with SMTP id
 j18-20020a170906535200b0071239160e92mr24351770ejo.756.1655801714982; Tue, 21
 Jun 2022 01:55:14 -0700 (PDT)
MIME-Version: 1.0
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
 <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org> <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
 <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org>
In-Reply-To: <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org>
From: Dmytro Semenets <dmitry.semenets@gmail.com>
Date: Tue, 21 Jun 2022 11:55:02 +0300
Message-ID: <CACM97VV5MO0vmqG01pR7dXg1xU3jptOvjt4S+KS27zD+E66fPw@mail.gmail.com>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Dmytro Semenets <Dmytro_Semenets@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Hi Julien,
> >>
> >> Hi,
> >>
> >> On 17/06/2022 10:10, Volodymyr Babchuk wrote:
> >>> Julien Grall <julien@xen.org> writes:
> >>>
> >>>> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
> >>>>> Hi Julien,
> >>>>
> >>>> Hi Volodymyr,
> >>>>
> >>>>> Julien Grall <julien@xen.org> writes:
> >>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
> >>>>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> >>>>>>> According to the PSCI specification, ARM TF can return DENIED on CPU_OFF.
> >>>>>>
> >>>>>> I am confused. The spec is talking about Trusted OS and not
> >>>>>> firmware. The documentation is also not specific to ARM Trusted
> >>>>>> Firmware. So did you mean "Trusted OS"?
> >>>>> It should be "firmware", I believe.
> >>>>
> >>>> Hmmm... I couldn't find a reference in the spec suggesting that
> >>>> CPU_OFF could return DENIED because of the firmware. Do you have a
> >>>> pointer to the spec?
> >>>
> >>> Ah, looks like we are talking about different things. Indeed, CPU_OFF
> >>> can return DENIED only because of the Trusted OS. But the entity that *returns*
> >>> the error to the caller is the firmware.
> >>
> >> Right, the interesting part is *why* DENIED is returned not *who*
> >> returns it.
> > ARM TF returns DENIED *only* for the platform I have.
> > We have a dissonance between the spec and the Xen implementation because
> > DENIED returned by
> > ARM TF or the Trusted OS or whatever is not a reason for panic.
>
> I agree that's not a reason for panic. However, knowing the reason does
> help to figure out the correct approach.
>
> For instance, one could have suggested migrating the trusted OS to
> another pCPU. But this would not have worked for you because the DENIED
> is not about that.
>
> > And we
> > have issues with this.
> > If machine_restart() behaviour is more or less correct (it sometimes
> > reports a panic but restarts the machine)
>
> Right...
>
> > but machine_halt() doesn't work at all
> ... this should also be the case here because machine_halt() could also
> be called from cpu0. So I am a bit confused why you are saying it never
> works.
If machine_halt() is called on a CPU other than CPU0, it causes a panic and a reboot.
If it is called on CPU0, it also causes a panic, but afterwards the system is neither
powered off nor rebooted. In this state you can still use the Xen console, but
you can't reboot the system.
>
> > Transferring execution to CPU0 is, to my understanding, a workaround, and
> > this approach will fix the
> > machine_restart() function but will not fix machine_halt().
>
> I would say it is a more specific case of what the spec suggests (see
> below). But it should fix both machine_restart() and machine_halt()
> because the last CPU running will be CPU0. So Xen would call SYSTEM_*
> rather than CPU_OFF. So I don't understand why you think it will fix one
> but not the other.
Looks like this is specific to my HW case: SYSTEM_OFF doesn't stop
the whole system.
>
> In fact, the idea to always run the request from a given CPU is quite
> similar to what the specification suggests (5.10.3 DEN0022D.b):
>
> "
> One way in which cores can be placed into a known state is to use calls
> to CPU_OFF on all online cores
> except for the last one, which instead uses SYSTEM_OFF. If a UP Trusted
> OS is present, this method
> only works if the core that calls SYSTEM_OFF is the one where the
> Trusted OS is resident, as calls to
> CPU_OFF on this core return a DENIED error. Any core can call SYSTEM_OFF.
> "
>
> For Xen, we would need to detect if the trusted OS is UP and where it is
> running. Then we could always restart/halt from that CPU or CPU0.
>
> > The approach
> > you suggested (spinning all CPUs) will work but
> > will save less energy.
>
> I am not sure I understand what the concern about energy is here.
>  From my understanding of the specification, SYSTEM_OFF will take care
> of switching off the power for all the cores. So at worst, the CPUs will
> spin for a few ms. This would likely be more efficient than a call to PSCI
> CPU_OFF.
>
> This is different compared to just turning off one CPU (i.e. CPU hot-unplug),
> because the CPU will end up spinning for a very long time. And this is
> why I wasn't OK with conditionally avoiding the panic.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 09:38:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 09:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353117.580021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3aKs-00046M-If; Tue, 21 Jun 2022 09:38:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353117.580021; Tue, 21 Jun 2022 09:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3aKs-00046F-FA; Tue, 21 Jun 2022 09:38:14 +0000
Received: by outflank-mailman (input) for mailman id 353117;
 Tue, 21 Jun 2022 09:38:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUKi=W4=gmail.com=mykyta.poturai@srs-se1.protection.inumbo.net>)
 id 1o3aKq-000468-Vq
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 09:38:13 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd8eaa8a-f145-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 11:38:11 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id pk21so3108101ejb.2
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 02:38:11 -0700 (PDT)
Received: from localhost.localdomain (93.75.52.3.lut.volia.net. [93.75.52.3])
 by smtp.gmail.com with ESMTPSA id
 kw10-20020a170907770a00b00722d8f902f2sm1620719ejc.33.2022.06.21.02.38.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jun 2022 02:38:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd8eaa8a-f145-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=I/LeMdrvMGgzKYeanXHZKatWBJ9+aqT42HX6W2qSCpw=;
        b=jBpR63/3wpSzzE27pLIuuSBWQu/1J56e6KN6MNwxf54mQ1rCCxs/wgug3XNnZN2RJr
         7I5vR2dzXW4uR48+jot21JuT5c3JNu9pXvXi4Vk3596Fa0IhV6px8QL7bfSwDp8TWd2H
         Yw0UgUxx83ksbvcKv5SfAZBVf0uSWnxMVBDIOcQuUx9PjuD+AI9wOOmqQ/aGZhphT8l8
         wkNDENclj1yJ/e8XLM+r6HjOSdtzi/CckvrtcyipgK6WiFInGagupWQ2tyjqtTqzgaz0
         d30LqLTO9vb2qGtXjndVgyAMgmzqsXzNvJFmabDXwgJ6XGXZX+GK14JEL4FCdMP+jXOV
         JGzw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=I/LeMdrvMGgzKYeanXHZKatWBJ9+aqT42HX6W2qSCpw=;
        b=2Bdb67B4yrtvL00D3gUDfzPizB2JFXX8UZd5+WrClEj4lydLUVpWL6lrPK4lnpUirf
         1NEBwEfaa8wvUGTdv9soSP1fhoPaMkv7GzPXgigP4IJNGxOPGadI6+Ew1Igbarlok11L
         wqFkbT4AAX6dCbGd0oILr43RYeuz+Uec9dBjF/o3oRXRkhg2Bpk8weV/HjwZKZSUoB8H
         EohPOyPZisTkOgch501XlKjG+ep9E5A/inSOio2liOlZo+/barkx+hADre8Ik19sucpn
         4Rcgt0LCGm3RPTDIDJzEDPPhdE91Ln7NqfIqQfFlur4RZohixe2GnMrHSl4VKx/cIbZb
         7yvQ==
X-Gm-Message-State: AJIora/vpoSl+CKtSvishdlTdh/gbGDj8HPjqYwW0OsvgUDTtH1B4Byi
	nbi50K57BpeDhXU8PwG/zLM=
X-Google-Smtp-Source: AGRyM1tLKPZU3wDVQzHat1EcARc3YFAm5QQeAx4sGkOS6fsGj1ehEKn6dI2CC1F/1jSJrKWASihqIA==
X-Received: by 2002:a17:907:971b:b0:711:dc09:fde1 with SMTP id jg27-20020a170907971b00b00711dc09fde1mr24627616ejc.749.1655804291075;
        Tue, 21 Jun 2022 02:38:11 -0700 (PDT)
From: Mykyta Poturai <mykyta.poturai@gmail.com>
X-Google-Original-From: Mykyta Poturai <mykyta_poturai@epam.com>
To: rahul.singh@arm.com
Cc: Bertrand.Marquis@arm.com,
	Volodymyr_Babchuk@epam.com,
	julien@xen.org,
	mykyta.poturai@gmail.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device
Date: Tue, 21 Jun 2022 12:38:08 +0300
Message-Id: <20220621093808.597929-1-mykyta_poturai@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <A53A2C83-BA19-481D-8851-0B0E1A162F4D@arm.com>
References: <A53A2C83-BA19-481D-8851-0B0E1A162F4D@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

> Thanks for testing the patch.
>> But it did not fix the "Unexpected Global fault" that occasionally happens when destroying
>> the domain with an actively working GPU. Although, I am not sure if this issue
>> is relevant here.
>
> Can you please share more details and logs, if possible, so that I can check whether this issue is relevant here?

So in my setup I have a board with an IMX8 chip and a 2-core Vivante GPU. The GPU is split between domains:
one core goes to Dom0 and one to DomU.

Steps to trigger this issue:
1. Start DomU
2. Start wayland and glmark2-es2-wayland inside DomU
3. Destroy DomU

Sometimes the domain is destroyed fine, but roughly 1 out of 8 times I get logs like this:

root@dom0:~# xl dest DomU
[12725.412940] xenbr0: port 1(vif8.0) entered disabled state
[12725.671033] xenbr0: port 1(vif8.0) entered disabled state
[12725.689923] device vif8.0 left promiscuous mode
[12725.696736] xenbr0: port 1(vif8.0) entered disabled state
[12725.696989] audit: type=1700 audit(1616594240.068:39): dev=vif8.0 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
(XEN) smmu: /iommu@51400000: Unexpected global fault, this could be serious
(XEN) smmu: /iommu@51400000:    GFSR 0x00000001, GFSYNR0 0x00000004, GFSYNR1 0x00001055, GFSYNR2 0x00000000

My guess is that this happens because the GPU continues to access memory when the context has already been invalidated,
and therefore triggers the "Invalid context fault".

Regards,
Mykyta


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 09:51:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 09:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353126.580032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3aXW-0006Nq-Mm; Tue, 21 Jun 2022 09:51:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353126.580032; Tue, 21 Jun 2022 09:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3aXW-0006Nj-Ji; Tue, 21 Jun 2022 09:51:18 +0000
Received: by outflank-mailman (input) for mailman id 353126;
 Tue, 21 Jun 2022 09:51:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3aXV-0006Nd-DF
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 09:51:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3aXR-0001Hy-7m; Tue, 21 Jun 2022 09:51:13 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.3.84])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3aXR-0008U4-12; Tue, 21 Jun 2022 09:51:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=CjE8sTVPolhZGahjszz09Hro69cY6owMRPhzOg4fLb4=; b=l8yXhzdKqhmYJ4X17lTHehi/mW
	xOg6yMRsvmN/ElGmj9d2jSZmP9g84tjis7ZZkqyisMgN/MVUD25o8xD2qfTD5lBGBpX3VlhUbqjto
	vnKROSAnM2uw/72tz8eqRe/d6kEX6+i3p1DZA00yImg0iVcLEWRdzM79qDODqPMiXrRA=;
Message-ID: <371f195b-291e-e5e0-9e1d-1b2d2fa55a7d@xen.org>
Date: Tue, 21 Jun 2022 10:51:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Dmytro Semenets <dmitry.semenets@gmail.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Dmytro Semenets <Dmytro_Semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
 <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
 <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
 <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org>
 <CACM97VV5MO0vmqG01pR7dXg1xU3jptOvjt4S+KS27zD+E66fPw@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CACM97VV5MO0vmqG01pR7dXg1xU3jptOvjt4S+KS27zD+E66fPw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 21/06/2022 09:55, Dmytro Semenets wrote:
> Hi Julien,

Hi Dmytro,

>>>>
>>>> Hi,
>>>>
>>>> On 17/06/2022 10:10, Volodymyr Babchuk wrote:
>>>>> Julien Grall <julien@xen.org> writes:
>>>>>
>>>>>> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
>>>>>>> Hi Julien,
>>>>>>
>>>>>> Hi Volodymyr,
>>>>>>
>>>>>>> Julien Grall <julien@xen.org> writes:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
>>>>>>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>>>>>>>> According to PSCI specification ARM TF can return DENIED on CPU OFF.
>>>>>>>>
>>>>>>>> I am confused. The spec is talking about Trusted OS and not
>>>>>>>> firmware. The documentation is also not specific to ARM Trusted
>>>>>>>> Firmware. So did you mean "Trusted OS"?
>>>>>>> It should be "firmware", I believe.
>>>>>>
>>>>>> Hmmm... I couldn't find a reference in the spec suggesting that
>>>>>> CPU_OFF could return DENIED because of the firmware. Do you have a
>>>>>> pointer to the spec?
>>>>>
>>>>> Ah, looks like we are talking about different things. Indeed, CPU_OFF
>>>>> can return DENIED only because of Trusted OS. But entity that *returns*
>>>>> the error to a caller is a firmware.
>>>>
>>>> Right, the interesting part is *why* DENIED is returned not *who*
>>>> returns it.
>>> ARM TF returns DENIED *only* for the platform I have.
>>> We have a dissonance between the spec and the Xen implementation,
>>> because DENIED returned by
>>> ARM TF or the Trusted OS or whatever is not a reason to panic.
>>
>> I agree that's not a reason for panic. However, knowing the reason does
>> help to figure out the correct approach.
>>
>> For instance, one could have suggested migrating the trusted OS to
>> another pCPU. But this would not have worked for you because the DENIED
>> is not about that.
>>
>>> And we
>>> have issues with this.
>>> If machine_restart() behaviour is more or less correct (it sometimes
>>> reports a panic but restarts the machine)
>>
>> Right...
>>
>>> but machine_halt() doesn't work at all
>> ... this should also be the case here because machine_halt() could also
>> be called from cpu0. So I am a bit confused why you are saying it never
>> works.
> If machine_halt() is called on a CPU other than CPU0, it causes a panic
> and a reboot. If it is called on CPU0, it also causes a panic, but after
> the system powers off no reboot is issued. In this state you can still
> use the Xen console, but you can't reboot the system.

I am lost. In a previous e-mail you said that PSCI CPU_OFF would return 
DENIED on CPU0. IOW, I understood that for other CPUs, it would succeed.

But here, you are telling me the opposite:

"If it is called on CPU0, it also causes a panic, but after the system
  powers off no reboot is issued".

If machine_halt() is called from CPU0, then CPU_OFF should not be called 
on CPU0. So where is that panic coming from?

>>
>>> Transferring execution to CPU0 is, to my understanding, a workaround,
>>> and this approach will fix the
>>> machine_restart() function but will not fix machine_halt().
>>
>> I would say it is a more specific case of what the spec suggests (see
>> below). But it should fix both machine_restart() and machine_halt()
>> because the last CPU running will be CPU0. So Xen would call SYSTEM_*
>> rather than CPU_OFF. So I don't understand why you think it will fix one
>> but not the other.
> Looks like this is specific to my HW case. SYSTEM_OFF doesn't stop
> the whole system.

Hmmm... All the other CPUs should be off (or spinning with interrupts
disabled), so are you saying that SYSTEM_OFF returns?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 09:55:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 09:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353135.580043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ab7-00072r-Aq; Tue, 21 Jun 2022 09:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353135.580043; Tue, 21 Jun 2022 09:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ab7-00072k-7E; Tue, 21 Jun 2022 09:55:01 +0000
Received: by outflank-mailman (input) for mailman id 353135;
 Tue, 21 Jun 2022 09:54:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ab5-00072Z-Jq; Tue, 21 Jun 2022 09:54:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ab5-0001Kv-HO; Tue, 21 Jun 2022 09:54:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ab5-0003OB-1s; Tue, 21 Jun 2022 09:54:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ab5-0005Zm-1P; Tue, 21 Jun 2022 09:54:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vnjJyEZeDMIdxj7t5MH2XOwCmUEQ2aphsp4pI15lyHU=; b=kEoTeo/z6FUKcMzRDhmMq5uazd
	J9K4KDZi5Tj7GStpx1Rmep9hWz1zOejpgW06Bsby1oOOCUTTyeTdj7CYJreJuC/LscqjxcqDFw1qq
	FSbDtqKk3ZlQ6Em6RtYalxVzdqo3XRecQHBX+nBXVN+MvtUDmmjHNJ/wdjwDNc8U0h2Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171297-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171297: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d428c7f5a70039a444f970145ac0fe9d7a0d6802
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 09:54:59 +0000

flight 171297 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d428c7f5a70039a444f970145ac0fe9d7a0d6802
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  711 days
Failing since        151818  2020-07-11 04:18:52 Z  710 days  692 attempts
Testing same since   171297  2022-06-21 04:18:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113893 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 10:05:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 10:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353149.580054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3alK-0000Ag-AG; Tue, 21 Jun 2022 10:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353149.580054; Tue, 21 Jun 2022 10:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3alK-0000AZ-7D; Tue, 21 Jun 2022 10:05:34 +0000
Received: by outflank-mailman (input) for mailman id 353149;
 Tue, 21 Jun 2022 10:05:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kiuJ=W4=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o3alI-0000AR-FX
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 10:05:32 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aee7d78f-f149-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 12:05:31 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id cf14so8871405edb.8
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 03:05:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aee7d78f-f149-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=PJC2Yr++Ft5B3thwtCw9JLqY/FSgfHiF5Q1iuBYB6EM=;
        b=ma+c6YIug7mb0eXLsx8RXJmc498OwQG3Z3SP5PIGNzNg9pkTlQICnleBczwiPtRYaU
         6sT4J51xDP2Mdkek9hl4pyPq8TXgQJ2g8KsrQGmVGQXvUDKecXHpp7vfVfL0JbDGZeLk
         4wqGqasD3RzduGUmopyujo2qlF8Z8dyzjSC4smyHIQFSHE1UIkzf3Zpa9quYR8n5ov21
         Ex9e4O2BEdqB1zrxBuH3BJXqI2xVfk3vsLFvfHPWNBBDA7nx0+5vO54hN3hc8CqD4NXO
         G4Id8iwKCC303IXmNQA3r8UIKiTc0ZLulFar60ywwjJiwYSG9QKfJIa/B1S5yeK948CQ
         FZOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=PJC2Yr++Ft5B3thwtCw9JLqY/FSgfHiF5Q1iuBYB6EM=;
        b=Q+kJ5wi1KTAzcCLXuKFLcOI2e7u8f9uqdL8JRwSh0GGrSFD1pqzYavNVeVfd2HPXY7
         NOeZmWcVfKVUSSu5iWQFjqUx0wLZA9JeajG8KbgOh4UeG+vRg4opsXqqi/peTCZbaCfE
         7dwHIFs5Q2LX0jIsLH5CXd4Z152DZz8PwlfVUbf+Mpy3/eSmAsG0xFeIm7C4Z2gdi2gO
         bthiqwonaXT/GbLJqp2/eDERtx8O/LNeJC1LfrM1YAVU2DB5XHN05rSqTiRJVfP61xfY
         hTMnsBsZ2/Brk7DfUrE1FhoPYMuEwsRltrW985V0F+ez5EVlRCCCfMIcpJf0Tj+ZHJlg
         yRFQ==
X-Gm-Message-State: AJIora9cmbHYmE3EElV2ATIXBjCP8o10s1AXMmywGw0kYPvcfkeOiiU7
	AzoGcnyz3mZ+e7S9cy4SftbWfAtZKvO1scTKw+0/yg6l4mg8Ag==
X-Google-Smtp-Source: AGRyM1vnyxFLLD2h9Ib9OArObxWsnOWoY4y2KpmSBUyPXcla/92yTcopS6T0gzfDpDaflG3+yewJFdSRuj8/Q46pT3M=
X-Received: by 2002:a05:6402:2687:b0:430:328f:e46b with SMTP id
 w7-20020a056402268700b00430328fe46bmr34467662edd.33.1655805930661; Tue, 21
 Jun 2022 03:05:30 -0700 (PDT)
MIME-Version: 1.0
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
 <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org> <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
 <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org> <CACM97VV5MO0vmqG01pR7dXg1xU3jptOvjt4S+KS27zD+E66fPw@mail.gmail.com>
 <371f195b-291e-e5e0-9e1d-1b2d2fa55a7d@xen.org>
In-Reply-To: <371f195b-291e-e5e0-9e1d-1b2d2fa55a7d@xen.org>
From: Dmytro Semenets <dmitry.semenets@gmail.com>
Date: Tue, 21 Jun 2022 13:05:19 +0300
Message-ID: <CACM97VXqAh4ApnkC_1wuDx38njbNxRwGrrfJhHqKV2x3R1svmA@mail.gmail.com>
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Dmytro Semenets <Dmytro_Semenets@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Hi Julien,
>
> Hi Dmytro,
>
> >>>>
> >>>> Hi,
> >>>>
> >>>> On 17/06/2022 10:10, Volodymyr Babchuk wrote:
> >>>>> Julien Grall <julien@xen.org> writes:
> >>>>>
> >>>>>> On 16/06/2022 19:40, Volodymyr Babchuk wrote:
> >>>>>>> Hi Julien,
> >>>>>>
> >>>>>> Hi Volodymyr,
> >>>>>>
> >>>>>>> Julien Grall <julien@xen.org> writes:
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> On 16/06/2022 14:55, dmitry.semenets@gmail.com wrote:
> >>>>>>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> >>>>>>>>> According to PSCI specification ARM TF can return DENIED on CPU OFF.
> >>>>>>>>
> >>>>>>>> I am confused. The spec is talking about Trusted OS and not
> >>>>>>>> firmware. The documentation is also not specific to ARM Trusted
> >>>>>>>> Firmware. So did you mean "Trusted OS"?
> >>>>>>> It should be "firmware", I believe.
> >>>>>>
> >>>>>> Hmmm... I couldn't find a reference in the spec suggesting that
> >>>>>> CPU_OFF could return DENIED because of the firmware. Do you have a
> >>>>>> pointer to the spec?
> >>>>>
> >>>>> Ah, looks like we are talking about different things. Indeed, CPU_OFF
> >>>>> can return DENIED only because of Trusted OS. But entity that *returns*
> >>>>> the error to a caller is a firmware.
> >>>>
> >>>> Right, the interesting part is *why* DENIED is returned not *who*
> >>>> returns it.
> >>> ARM TF returns DENIED *only* for the platform I have.
> >>> We have a dissonance between the spec and the Xen implementation,
> >>> because DENIED returned by
> >>> ARM TF or the Trusted OS or whatever is not a reason to panic.
> >>
> >> I agree that's not a reason for panic. However, knowing the reason does
> >> help to figure out the correct approach.
> >>
> >> For instance, one could have suggested migrating the trusted OS to
> >> another pCPU. But this would not have worked for you because the DENIED
> >> is not about that.
> >>
> >>> And we
> >>> have issues with this.
> >>> If machine_restart() behaviour is more or less correct (it sometimes
> >>> reports a panic but restarts the machine)
> >>
> >> Right...
> >>
> >>> but machine_halt() doesn't work at all
> >> ... this should also be the case here because machine_halt() could also
> >> be called from cpu0. So I am a bit confused why you are saying it never
> >> works.
> > If machine_halt() is called on a CPU other than CPU0, it causes a panic
> > and a reboot. If it is called on CPU0, it also causes a panic, but after
> > the system powers off no reboot is issued. In this state you can still
> > use the Xen console, but you can't reboot the system.
>
> I am lost. In a previous e-mail you said that PSCI CPU_OFF would return
> DENIED on CPU0. IOW, I understood that for other CPUs, it would succeed.
I'm sorry, I confused you.
Yes, it causes a panic and prints that it will reboot, but the actual
reboot doesn't happen.

>
> But here, you are telling me the opposite:
>
> "If it is called on CPU0, it also causes a panic, but after the system
>   powers off no reboot is issued".
>
> If machine_halt() is called from CPU0, then CPU_OFF should not be called
> on CPU0. So where is that panic coming from?
>
> >>
> >>> Transferring execution to CPU0 is, to my understanding, a workaround,
> >>> and this approach will fix the
> >>> machine_restart() function but will not fix machine_halt().
> >>
> >> I would say it is a more specific case of what the spec suggests (see
> >> below). But it should fix both machine_restart() and machine_halt()
> >> because the last CPU running will be CPU0. So Xen would call SYSTEM_*
> >> rather than CPU_OFF. So I don't understand why you think it will fix one
> >> but not the other.
> > Looks like this is specific to my HW case. SYSTEM_OFF doesn't stop
> > the whole system.
>
> Hmmm... All the other CPUs should be off (or spinning with interrupt
> disabled), so are you saying that SYSTEM_OFF returns?
Yes, SYSTEM_OFF returns on my HW. This is the reason why CPU_OFF for CPU0 ends up being called.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 10:12:00 2022
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH v2.1 1/4] build,include: rework shell script for headers++.chk
Date: Tue, 21 Jun 2022 11:11:28 +0100
Message-ID: <20220621101128.50543-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220614162248.40278-1-anthony.perard@citrix.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The command line generated for headers++.chk by make is quite long,
and in some environments it is too long. This issue has been seen in
a Yocto build environment.

Error messages:
    make[9]: execvp: /bin/sh: Argument list too long
    make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
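
(The limit being hit is the kernel's bound on the combined size of a new
process's arguments and environment; exceeding it makes execvp() fail with
E2BIG, which make reports as "Argument list too long". As a small aside, not
part of the patch, the limit can be queried on a POSIX system:)

```shell
# Maximum combined length, in bytes, of argv[] plus the environment
# passed to exec; command lines longer than this fail with E2BIG.
getconf ARG_MAX
```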

Rework so that the foreach loop is done in shell rather than make,
which greatly reduces the command line size. We also need a way to get
the header prerequisites for some public headers, so we use a shell
"case" statement to do some simple pattern matching: POSIX shell
variables alone don't support associative arrays or names
containing "/".

Also rework headers99.chk as it has a similar implementation, even
though with only two headers to check its command line isn't too long
at the moment.

Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---

Notes:
    v3:
    - add one more pattern to avoid a possible empty body for "case"
    - use $() instead of `` to execute get_prereq()
    - also convert headers99_chk
    - convert some 'tab' to 'space', have only 1 tab at start of line
    
    v2:
    - fix typo in commit message
    - fix out-of-tree build
    
    v1:
    - was sent as a reply to v1 of the series

 xen/include/Makefile | 37 +++++++++++++++++++++++++++++--------
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 617599df7e..510f65c92a 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -141,13 +141,24 @@ cmd_header_chk = \
 quiet_cmd_headers99_chk = CHK     $@
 define cmd_headers99_chk
 	rm -f $@.new; \
-	$(foreach i, $(filter %.h,$^),                                        \
-	    echo "#include "\"$(i)\"                                          \
+	get_prereq() {                                                        \
+	    case $$1 in                                                       \
+	    $(foreach i, $(filter %.h,$^),                                    \
+	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
+	        $(i)$(close)                                                  \
+	        echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
+	                -include $(j).h)";;))                                 \
+	    *) ;;                                                             \
+	    esac;                                                             \
+	};                                                                    \
+	for i in $(filter %.h,$^); do                                         \
+	    echo "#include "\"$$i\"                                           \
 	    | $(CC) -x c -std=c99 -Wall -Werror                               \
 	      -include stdint.h                                               \
-	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include $(j).h) \
+	      $$(get_prereq $$i)                                              \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;) \
+	    || exit $$?; echo $$i >> $@.new;                                  \
+	done;                                                                 \
 	mv $@.new $@
 endef
 
@@ -158,13 +169,23 @@ define cmd_headerscxx_chk
 	    touch $@.new;                                                     \
 	    exit 0;                                                           \
 	fi;                                                                   \
-	$(foreach i, $(filter %.h,$^),                                        \
-	    echo "#include "\"$(i)\"                                          \
+	get_prereq() {                                                        \
+	    case $$1 in                                                       \
+	    $(foreach i, $(filter %.h,$^),                                    \
+	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
+	        $(i)$(close)                                                  \
+	        echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
+	                -include c$(j))";;))                                  \
+	    *) ;;                                                             \
+	    esac;                                                             \
+	};                                                                    \
+	for i in $(filter %.h,$^); do                                         \
+	    echo "#include "\"$$i\"                                           \
 	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
 	      -include stdint.h -include $(srcdir)/public/xen.h               \
-	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
+	      $$(get_prereq $$i)                                              \
 	      -S -o /dev/null -                                               \
-	    || exit $$?; echo $(i) >> $@.new;) \
+	    || exit $$?; echo $$i >> $@.new; done;                            \
 	mv $@.new $@
 endef
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 10:15:41 2022
Date: Tue, 21 Jun 2022 11:15:05 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <qemu-devel@nongnu.org>, <xen-devel@lists.xenproject.org>,
	<qemu-trivial@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH v2] xen/pass-through: don't create needless register group
Message-ID: <YrGaKdx+af+7g2HY@perard.uk.xensource.com>
References: <4d4b58473beb0565876687e9d8a53b76a7cf3fbb.1655490576.git.brchuckz.ref@aol.com>
 <4d4b58473beb0565876687e9d8a53b76a7cf3fbb.1655490576.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <4d4b58473beb0565876687e9d8a53b76a7cf3fbb.1655490576.git.brchuckz@aol.com>

On Fri, Jun 17, 2022 at 03:13:33PM -0400, Chuck Zmudzinski wrote:
> Currently we are creating a register group for the Intel IGD OpRegion
> for every device we pass through, but the XEN_PCI_INTEL_OPREGION
> register group is only valid for an Intel IGD. Add a check to make
> sure the device is an Intel IGD and a check that the administrator has
> enabled gfx_passthru in the xl domain configuration. Require both checks
> to be true before creating the register group. Use the existing
> is_igd_vga_passthrough() function to check for a graphics device from
> any vendor and that the administrator enabled gfx_passthru in the xl
> domain configuration, but further require that the vendor be Intel,
> because only Intel IGD devices have an Intel OpRegion. These are the
> same checks hvmloader and libxl do to determine if the Intel OpRegion
> needs to be mapped into the guest's memory. Also, move the comment
> about trapping 0xfc for the Intel OpRegion where it belongs after
> applying this patch.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 10:43:23 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171295: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 10:43:10 +0000

flight 171295 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171295/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171283
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171283
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171283
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171283
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171283
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171283
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171283
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171283
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171283
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171283
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637
baseline version:
 xen                  c9040f25be317ab2f7647605397d79313e3f303e

Last test of basis   171283  2022-06-20 01:52:00 Z    1 days
Testing same since   171293  2022-06-20 19:09:40 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c9040f25be..9d067857d1  9d067857d1ff6805608aac4d9c0ea1c848b2e637 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 10:45:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 10:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353187.580102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3bNa-0006BL-CA; Tue, 21 Jun 2022 10:45:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353187.580102; Tue, 21 Jun 2022 10:45:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3bNa-0006BE-8u; Tue, 21 Jun 2022 10:45:06 +0000
Received: by outflank-mailman (input) for mailman id 353187;
 Tue, 21 Jun 2022 10:45:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3bNZ-0006B6-CS
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 10:45:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3bNW-0002HP-VX; Tue, 21 Jun 2022 10:45:02 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.3.84])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3bNW-0002x7-Oz; Tue, 21 Jun 2022 10:45:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=s6rh5/3M6+HuWHGc4V6AxQf0zOOLdSD0Xh/ho6qppvA=; b=6NSa/5DBTnZflvSa8A4B3iHErx
	r/r6UKZ3YEqXbbyuF7/JTfMbGt5uIvhhuMEkSHIgIvLsglQN0YDFqXT+TRXORmO7ubz/6SBGEKjpF
	/KZD0j0faW0fdjH5tucRXRXX6rP2WUoUQRh/cX3rkwsxki/EAk2tSDebwroNobvJJIcY=;
Message-ID: <b8c18f8b-7401-bb67-0436-9bd8a03e3652@xen.org>
Date: Tue, 21 Jun 2022 11:45:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Don't call panic if ARM TF cpu off returns DENIED
To: Dmytro Semenets <dmitry.semenets@gmail.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Dmytro Semenets <Dmytro_Semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220616135541.3333760-1-dmitry.semenets@gmail.com>
 <cf7660da-0bde-865e-7c22-a2e21e31fae5@xen.org> <87wndgh2og.fsf@epam.com>
 <67f56cdd-531b-72fc-1257-214d078f6bb6@xen.org> <87pmj7hczg.fsf@epam.com>
 <f260703d-4651-f9e9-3713-9e85a51b1d70@xen.org>
 <CACM97VUukaWoegmNvF4F+tf2tHCyPcjG41CSjjz72V2+Cte4Ew@mail.gmail.com>
 <49ace8c9-8fd6-57a2-e0c8-cfba04c9e151@xen.org>
 <CACM97VV5MO0vmqG01pR7dXg1xU3jptOvjt4S+KS27zD+E66fPw@mail.gmail.com>
 <371f195b-291e-e5e0-9e1d-1b2d2fa55a7d@xen.org>
 <CACM97VXqAh4ApnkC_1wuDx38njbNxRwGrrfJhHqKV2x3R1svmA@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CACM97VXqAh4ApnkC_1wuDx38njbNxRwGrrfJhHqKV2x3R1svmA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 21/06/2022 11:05, Dmytro Semenets wrote:
>>>>> but machine_halt() doesn't work at all
>>>> ... this should also be the case here because machine_halt() could also
>>>> be called from cpu0. So I am a bit confused why you are saying it never
>>>> works.
>>> If machine_halt() is called on a CPU other than CPU0, it causes a panic and a reboot.
>>> If it is called on CPU0, it also causes a panic, but after the system powers
>>> off no reboot is issued. In this state you can still use the Xen console, but
>>> you can't reboot the system.
>>
>> I am lost. In a previous e-mail you said that PSCI CPU_OFF would return
>> DENIED on CPU0. IOW, I understood that for other CPUs, it would succeed.
> I'm sorry I confused you.
> Yes, it causes a panic and prints that it will reboot, but the actual
> reboot doesn't happen.

Ok. That's most likely because of the call to smp_call_function() in 
machine_restart(). It uses cpu_online_map to decide which CPUs to 
send the IPI to.

This will block because some of the CPUs are already off. So they will 
never acknowledge the IPI.

> 
>>
>> But here, you are telling me the opposite:
>>
>> "If it called on a CPU0 it also caused panic but after system power off
>>    and reboot".
>>
>> If machine_halt() is called from CPU0, then CPU_OFF should not be called
>> on CPU0. So where is that panic coming from?
>>
>>>>
>>>>> Transit execution to CPU0 for my understanding is a workaround and
>>>>> this approach will fix
>>>>> machine_restart() function but will not fix machine_halt().
>>>>
>>>> I would say it is a more specific case of what the spec suggests (see
>>>> below). But it should fix both machine_restart() and machine_halt()
>>>> because the last CPU running will be CPU0. So Xen would call SYSTEM_*
>>>> rather than CPU_OFF. So I don't understand why you think it will fix one
>>>> but not the other.
>>> Looks like this is specific to my HW. SYSTEM_OFF doesn't stop
>>> the whole system.
>>
>> Hmmm... All the other CPUs should be off (or spinning with interrupts
>> disabled), so are you saying that SYSTEM_OFF returns?
> Yes. SYSTEM_OFF returns on my HW.

Hmmm... This is not compliant with the specification. Could you check 
why PSCI SYSTEM_OFF is returning?

> This is the reason why CPU_OFF for CPU0 happens.

Right, machine_halt() will call halt_this_cpu(), which in turn will call 
PSCI CPU_OFF.

If you modify halt_this_cpu() to avoid calling PSCI CPU_OFF (as I 
suggested before), then the panic() will never happen. Instead, the CPU 
will execute "wfi" in a loop with interrupts disabled.

To summarize there are two parts to resolve:
   1) Understand why PSCI SYSTEM_OFF returns on your platform
   2) Modify stop_cpu() to not call PSCI CPU_OFF

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 11:28:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 11:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353198.580113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3c3O-00026V-RW; Tue, 21 Jun 2022 11:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353198.580113; Tue, 21 Jun 2022 11:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3c3O-00026O-OM; Tue, 21 Jun 2022 11:28:18 +0000
Received: by outflank-mailman (input) for mailman id 353198;
 Tue, 21 Jun 2022 11:28:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ORL=W4=citrix.com=prvs=164d1f6c5=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3c3M-00025z-2c
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 11:28:16 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b2d1d28-f155-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 13:28:12 +0200 (CEST)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Jun 2022 07:28:08 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6125.namprd03.prod.outlook.com (2603:10b6:5:394::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Tue, 21 Jun
 2022 11:28:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Tue, 21 Jun 2022
 11:27:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b2d1d28-f155-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655810892;
  h=from:to:subject:date:message-id:mime-version;
  bh=k8/VnorPAO5FJU29o8JC9EyiHIKSYO2d+In2riQzAmo=;
  b=DuRjRHeVtksqCWDs9dDOQeCTRDvG3TGrszXMrlzcMLzc1eRFVZhpJCH4
   Ap33+2UD9lzJL4e2QjmU7WVBj2pGzpFqjC5B0BWPB/eZSrP0iaIJ0PIDj
   2wqVTJ9bUEv9aJVhRaK93vkNEvVtr7kQT7YT1xO/7mEnA3rEaXTcuF1pv
   s=;
X-IronPort-RemoteIP: 104.47.70.109
X-IronPort-MID: 74079313
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,209,1650945600"; 
   d="scan'208,104,217";a="74079313"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=D0SGRcsy41tH3CYEscGrLHvTUDNRs828AOL5sTuy7Pe49o2+C6E2vyujLVQFY4UfV7e8OwfK3NhTGHfORJECT0HqJm7xpfVGcU+JX3GWKICwgfKpDFGtvOZPg68LGo94eZbiJ1ftY09ML8kld0jvzbePSLL+HsQFoPadrsTMYo3ZLhiKsZls2qc2x1TzjRWOpEi/AfMe8J4KVHrdA8V/HuWE8+k3Ze4cdv1/Xk48z42O6xvYNnHsG/ahM/HMKroVB4JdomE49EHkN3Y3iqswXC0D9CjYQqB5jW7P6OIKY+nr5WroRzGJEwn4ddvK/++XMaLbEHF3SIgL2ipIoYNaeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=u8556xLAMsr8uHWJ+izrfsE9G6zlPFgFu8jWVd56LJs=;
 b=CYpydZ66i7rVFRrmMafeqcDEi1eG4tNDGMu1cC5AWIG9b2vBz1huERS2UssrTQtW4W8gD1cKTACP97f3b7+mH/4QRl7Xy/JH+vQia1TwZvchn5E4BdjOob/8kvLSG9GUGbe/iTqt2jqslQhbyLtdJKXVwYHdf0qagajQCLfimYIVMmW/95+kiSuc+LUgnmUDJOquShGetK4Ttymi3UsjvUyc03A+6uxUtWEMpzd5zWjnIQE6UHcKVYE3YWoHhzvXaYaujloMvoVId+Gg+LpOA3PwksS2JUQNJYGAj18xOiUgYgPPlBYGDTO8sFdhfEoYS9UfFeciIcS6kRye1j6HTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u8556xLAMsr8uHWJ+izrfsE9G6zlPFgFu8jWVd56LJs=;
 b=QJY5ZgqEe6QMsBgbewoy9mP3H8oNab1hxta67QFWu93fYdn7b7NHGxusETq14OlIA6m9x6JC+40ejgZVdd+1juo6aU/Gv1ge8ggRoRX0WTlERr0/SCdKZ3nY7vVD2s13E+6ZRv92RXh+Zn1a8H/U4mj9TeNbU0j5/aU+lnqTN/s=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Michal Orzel <Michal.Orzel@arm.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Christopher Clark
	<christopher.w.clark@gmail.com>, Daniel Smith <dpsmith@apertussolutions.com>,
	Roger Pau Monne <roger.pau@citrix.com>, George Dunlap
	<George.Dunlap@citrix.com>
Subject: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index: AQHYhWH1+fjNtvukV0adEqGEdTB7sg==
Date: Tue, 21 Jun 2022 11:27:58 +0000
Message-ID: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3a13ca2d-3dab-4b7b-a5ab-08da537917ee
x-ms-traffictypediagnostic: DM4PR03MB6125:EE_
x-microsoft-antispam-prvs:
 <DM4PR03MB612565D2E590B28B3658BEA5BAB39@DM4PR03MB6125.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/mixed;
	boundary="_005_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_"
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a13ca2d-3dab-4b7b-a5ab-08da537917ee
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2022 11:27:58.3854
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 4T+XjlsDX0hDYce3MoJ68YH7bfGsFEDlcGYQcyWl8ckxEwT1LitOlYfJpPcCZBwxWz/Fm6lOagbBI6lawnGIeXgI+XiqGHgozMpx2gUoFQ0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6125

--_005_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_
Content-Type: multipart/alternative;
	boundary="_000_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_"

--_000_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

SGVsbG8sDQoNCkkgdHJpZWQgdG8gaGF2ZSBhIGhhbGYgaG91ciByZXNwaXRlIGZyb20gc2VjdXJp
dHkgYW5kIHB1c2ggZm9yd2FyZCB3aXRoIFhURi1vbi1BUk0sIGJ1dCB0aGUgcmVzdWx0IHdhcyBh
IG1lc3MuDQoNCmh0dHBzOi8vZ2l0aHViLmNvbS9hbmR5aGhwL3h0Zi9jb21taXQvYmM4NmUyZDI3
MWYyMTA3ZGE5YjFjOWJjNTVhMDUwZGJkZjA3YzZjNiBpcyB0aGUgYWJzb2x1dGUgYmFyZSBtaW5p
bXVtIHN0dWIgVk0sIHdoaWNoIGhhcyBhIHpJbWFnZXszMiw2NH0gaGVhZGVyLCBzZXRzIHVwIHRo
ZSBzdGFjaywgbWFrZXMgb25lIENPTlNPTEVJT193cml0ZSBoeXBlcmNhbGwsIGFuZCB0aGVuIGEg
Y2xlYW4gU0NIRURPUF9zaHV0ZG93bi4NCg0KVGhlcmUgYXJlIHNvbWUgYnVnczoNCg0KMSkga2Vy
bmVsX3ppbWFnZTMyX3Byb2JlKCkgcmVqZWN0cyByZWxvY2F0YWJsZSBiaW5hcmllcywgYnV0IGlm
IEkgc2tpcCB0aGUgY2hlY2sgaXQgd29ya3MgZmluZS4NCg0KRnVydGhlcm1vcmUsIGtlcm5lbF96
aW1hZ2U2NF9wcm9iZSgpIGlnbm9yZXMgdGhlIGhlYWRlciBhbmQgYXNzdW1lcyB0aGUgYmluYXJ5
IGlzIHJlbG9jYXRhYmxlLiAgQm90aCBwcm9iZSBmdW5jdGlvbnMgZmFpbCB0byBjaGVjayB0aGUg
ZW5kaWFubmVzcyBtYXJrZXIuDQoNCjIpIEknbSB1c2luZyBxZW11LXN5c3RlbS1hcm0gNC4yLjEg
KERlYmlhbiAxOjQuMi0zdWJ1bnR1Ni4yMSksIHdpdGggc29tZSBwYXJhbWV0ZXJzIGNyaWJiZWQg
ZnJvbSB0aGUgR2l0bGFiIENJIHNtb2tlIHRlc3QsIGJ1dCBjdHh0X3N3aXRjaF90bygpIGV4cGxv
ZGVkIHdpdGggdW5kZWYgb246DQoNCldSSVRFX0NQMzIobi0+YXJjaC5qb3NjciwgSk9TQ1IpOw0K
V1JJVEVfQ1AzMihuLT5hcmNoLmptY3IsIEpNQ1IpOw0KDQpJJ20gbm90IHN1cmUgd2hhdCB0aGVz
ZSBhcmUgKGJleW9uZCBKYXplbGxlIGNvbmYgcmVnaXN0ZXIpLCBidXQgSSBjb21tZW50ZWQgdGhl
bSBvdXQgYW5kIGl0IG1hZGUgZnVydGhlciBwcm9ncmVzcy4gIEkgaGF2ZSBubyBpZGVhIGlmIHRo
aXMgaXMgYSBYZW4gYnVnLCBxZW11IGJ1Zywgb3IgdXNlciBlcnJvciwgYnV0IHNvbWV0aGluZyBp
cyBjbGVhcmx5IHdyb25nIGhlcmUuDQoNCjMpIEZvciB0ZXN0LWFybTY0LXN0dWIsIEkgZ2V0IHRo
aXM6DQoNCihYRU4pIGQwOiBleHRlbmRlZCByZWdpb24gMTogMHg3MDAwMDAwMC0+MHg4MDAwMDAw
MA0KKFhFTikgTG9hZGluZyB6SW1hZ2UgZnJvbSAwMDAwMDAwMDQ4MDAwMDAwIHRvIDAwMDAwMDAw
NTAwMDAwMDAtMDAwMDAwMDA1MDAwMTAxMg0KKFhFTikgTG9hZGluZyBkMCBEVEIgdG8gMHgwMDAw
MDAwMDU4MDAwMDAwLTB4MDAwMDAwMDA1ODAwMWM4NQ0KLi4uDQooWEVOKSAqKiogU2VyaWFsIGlu
cHV0IHRvIERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0KQ0K
KFhFTikgRnJlZWQgMzI0a0IgaW5pdCBtZW1vcnkuDQooWEVOKSAqKiogR290IENPTlNPTEVJT193
cml0ZSAoMTggYnl0ZXMpDQpIZWxsbyBmcm9tIEFSTTY0DQooWEVOKSAqKiogQ09OU09MRUlPX3dy
aXRlIGRvbmUNCihYRU4pIGFyY2gvYXJtL3RyYXBzLmM6MjA1NDpkMHYwIEhTUj0weDAwMDAwMDkz
OWYwMDQ1IHBjPTB4MDAwMDAwNTAwMDAwOTggZ3ZhPTB4ODAwMDJmZmMgZ3BhPTB4MDAwMDAwODAw
MDJmZmMNCnFlbXUtc3lzdGVtLWFhcmNoNjQ6IHRlcm1pbmF0aW5nIG9uIHNpZ25hbCAyDQoNCmku
ZS4gdGhlIENPTlNPTEVJT193cml0ZSBoeXBlcmNhbGwgY29tcGxldGVzIHN1Y2Nlc3NmdWxseSwg
YnV0IGEgdHJhcCBvY2N1cnMgYmVmb3JlIHRoZSBTQ0hFRE9QX3NodXRkb3duIGNvbXBsZXRlcy4g
IFRoZSBmdWxsICh0aW55KSBiaW5hcmllcyBhcmUgYXR0YWNoZWQsIGJ1dCBpdCBzZWVtcyB0byBi
ZSBmYXVsdGluZyBvbjoNCg0KICAgIDQwMDAwMDk4OiAgICBiODFmY2MzZiAgICAgc3RyICAgIHd6
ciwgW3gxLCAjLTRdIQ0KDQp3aGljaCAoSSB0aGluaykgaXMgdGhlIHN0b3JlIG9mIDAgdG8gdGhl
IHN0YWNrIGZvciB0aGUgc2NoZWRvcCBzaHV0ZG93biByZWFzb24uDQoNCjQpIEZvciB0ZXN0LWFy
bTMyLXN0dWIgdW5kZXIgZWl0aGVyIHRoZSAzMmJpdCBvciA2NGJpdCBYZW4sIEkgZ2V0Og0KDQoo
WEVOKSBGcmVlZCAzNDhrQiBpbml0IG1lbW9yeS4NCihYRU4pICoqKiBHb3QgQ09OU09MRUlPX3dy
aXRlICgxOCBieXRlcykNCihYRU4pICoqKiBnb3QgZmF1bHQNCihYRU4pICoqKiBHb3QgU0NIRURP
UF9zaHV0ZG93biwgMA0KKFhFTikgSGFyZHdhcmUgRG9tMCBoYWx0ZWQ6IGhhbHRpbmcgbWFjaGlu
ZQ0KDQp3aGljaCBpcyB3ZWlyZC4gIFRoZSBDT05TT0xFSU9fd3JpdGUgZmFpbHMgdG8gcmVhZCB0
aGUgcGFzc2VkIHBvaW50ZXIsIGRlc3BpdGUgYXBwZWFyaW5nIHRvIGhhdmUgYSBpcC1yZWxhdGl2
ZSBsb2FkIHRvIGZpbmQgdGhlIHN0cmluZywgd2hpbGUgdGhlIFNDSEVET1Bfc2h1dGRvd24gcGFz
c2VzIGl0cyBwYXJhbWV0ZXIgZmluZSAoaXQncyBhIHN0YWNrIHJlbGF0aXZlIGxvYWQpLg0KDQoN
Ck90aGVyIG9ic2VydmF0aW9uczoNCg0KKiBUaGVyZSBpcyBubyBkb2N1bWVudGVkIHZDUFUgc3Rh
cnRpbmcgc3RhdGUuDQoqIFFlbXUgaXMgaW5maW5pdGVseSBlYXNpZXIgdG8gdG8gdXNlIChpLmUu
IG5vIG1lc3Npbmcgd2l0aCBkdGIvZXRjKSBhcyAta2VybmVsIHhlbiAtaW5pdHJkIHRlc3QtJGZv
byB3aXRoIGEgb25lbGluZXIgY2hhbmdlIHRvIHRoZSBkdGIgcGFyc2luZyB0byB0cmVhdCByYW1k
aXNrIGFuZCBubyBrZXJuZWwgYXMgdGhlIGRvbTAga2VybmVsLiAgTWF5YmUgYSBiZXR0ZXIgY2hh
bmdlIHdvdWxkIGJlIHRvIG1vZGlmeSBxZW11IHRvIHVuZGVyc3RhbmQgbXVsdGlwbGUgLWtlcm5l
bCdzLg0KKiBYZW4gY2FuJ3QgbG9hZCBFTEZzLg0KDQpTb21lIG9mIHRoZXNlIGJ1Z3MgbWlnaHQg
YmUgbWluZSwgYnV0IGF0IGEgbWluaW11bSAxIGlzIGEgYnVnIGluIFhlbiBhbmQgbmVlZHMgZml4
aW5nLiAgQW55IGlkZWFzPw0KDQp+QW5kcmV3DQo=


--_000_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_--

--_005_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_
Content-Type: application/octet-stream; name="test-arm64-stub-syms"
Content-Description: test-arm64-stub-syms
Content-Disposition: attachment; filename="test-arm64-stub-syms"; size=72520;
	creation-date="Tue, 21 Jun 2022 11:27:58 GMT";
	modification-date="Tue, 21 Jun 2022 11:27:58 GMT"
Content-ID: <08CE25FEA802FB4B80D873E2E2CC6AD6@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64

f0VMRgIBAQAAAAAAAAAAAAIAtwABAAAAAAAAQAAAAABAAAAAAAAAAIgXAQAAAAAAAAAAAEAAOAAC
AEAADwAOAAEAAAAHAAAAAAABAAAAAAAAAABAAAAAAAAAAEAAAAAAEhAAAAAAAAAAMAAAAAAAAAAA
AQAAAAAAUeV0ZAYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAABQfIAPVAAAAAAAA
AAAAMAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQVJNZAAAAAAWAQBY9f3/
ELUCFsvgAABYH2A2iwcAAJQAAAAUAAAAAAAAAEAAAAAAADAAQAAAAABQAoDSAgAAsP9DANEAAIDS
4QMQqkIAAJEi1AHU4UMAkbADgNJAAIDSP8wfuCLUAdT/QwCRwANf1gAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABIZWxsbyBmcm9tIEFSTTY0CgD3AAAA
BAAAAAAACAEGAAAADNwAAADzAAAAcAAAQAAAAAA4AAAAAAAAAAAAAAACUAAAAD0AAAADQgAAABEA
BC0AAAAFCAfqAAAABQEILgEAAARJAAAABscAAAABCz0AAAAJAwAQAEAAAAAAAkkAAAB7AAAAB0IA
AAD/DwAGAAAAAAENagAAAAkDACAAQAAAAAAI0gAAAAEPcAAAQAAAAAA4AAAAAAAAAAGc7AAAAAlu
cgABEuwAAAABYAlhMAABE+wAAAABUAlhMQABFOwAAAABUQlhMgABFewAAAABUgrjAAAAASHzAAAA
ApF8AAUIBxwBAAAFBAchAQAAAAERASUOEwsDDhsOEQESBxAXAAACAQFJEwETAAADIQBJEy8LAAAE
JgBJEwAABSQACws+CwMOAAAGNAADDjoLOwtJEz8ZAhgAAAchAEkTLwUAAAguAT8ZAw46CzsLJxkR
ARIHQBiXQhkBEwAACTQAAwg6CzsLSRMCGAAACjQAAw46CzsLSRMCGAAAACwAAAACAAAAAAAIAAAA
AABwAABAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEUAAAACAB0AAAAEAfsODQABAQEBAAAA
AQAAAQBtYWluLmMAAAAAAAAJAnAAAEAAAAAAAw8BGSMDdiAoISEhJiIhHSUlAgIAAQFzdGFjawBH
TlUgQzk5IDYuMy4wIDIwMTcwNTE2IC1tYXJjaD1hcm12OC1hIC1tbGl0dGxlLWVuZGlhbiAtbWFi
aT1scDY0IC1nIC1PMyAtc3RkPWdudTk5IC1mbm8tY29tbW9uIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLWZuby1zdHJpY3QtYWxpYXNpbmcgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZu
by1waWMgLWZmcmVlc3RhbmRpbmcAdGVzdF90aXRsZQB0ZXN0X21haW4AbWFpbi5jAHJlYXNvbgBz
aXpldHlwZQAvbG9jYWwveGVuLXRlc3QtZnJhbWV3b3JrLmdpdC90ZXN0cy9zdHViAGxvbmcgdW5z
aWduZWQgaW50AGNoYXIAR0NDOiAoRGViaWFuIDYuMy4wLTE4KSA2LjMuMCAyMDE3MDUxNgAAAAAA
AAAADAAAAP////8BAAR4HgwfABwAAAAAAAAAcAAAQAAAAAA4AAAAAAAAAEMOEEoOAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAQAAAABAAAAAAAAAAAAAAAAAAAAAAAMAAgAAEABAAAAA
AAAAAAAAAAAAAAAAAAMAAwAAEABAAAAAAAAAAAAAAAAAAAAAAAMABAAAIABAAAAAAAAAAAAAAAAA
AAAAAAMABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAMABgAAAAAAAAAAAAAAAAAAAAAAAAAAAAMABwAA
AAAAAAAAAAAAAAAAAAAAAAAAAAMACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMACQAAAAAAAAAAAAAA
AAAAAAAAAAAAAAMACgAAAAAAAAAAAAAAAAAAAAAAAAAAAAMACwAAAAAAAAAAAAAAAAAAAAAAAQAA
AAQA8f8AAAAAAAAAAAAAAAAAAAAAOwAAAAAAAQAAAABAAAAAAAAAAAAAAAAAPgAAAAIAAQBAAABA
AAAAABwAAAAAAAAARgAAAAAAAQAIAABAAAAAAAAAAAAAAAAAOwAAAAAAAQBAAABAAAAAAAAAAAAA
AAAARgAAAAAAAQBgAABAAAAAAAAAAAAAAAAASQAAAAQA8f8AAAAAAAAAAAAAAAAAAAAAOwAAAAAA
AQBwAABAAAAAAAAAAAAAAAAARgAAAAAAAwAAEABAAAAAAAAAAAAAAAAARgAAAAAABAAAIABAAAAA
AAAAAAAAAAAARgAAAAAACwAQAAAAAAAAAAAAAAAAAAAAUAAAABEAAwAAEABAAAAAABIAAAAAAAAA
WwAAABAABAAAMABAAAAAAAAAAAAAAAAAZQAAABEABAAAIABAAAAAAAAQAAAAAAAAawAAABAAAQAA
AABAAAAAAAAAAAAAAAAAcgAAABIAAQBwAABAAAAAADgAAAAAAAAAfAAAABAABAAAMABAAAAAAAAA
AAAAAAAAgQAAABAABAAAIABAAAAAAAAAAAAAAAAAAC9sb2NhbC94ZW4tdGVzdC1mcmFtZXdvcmsu
Z2l0L2FyY2gvYXJtL2FybTY0L2hlYWQtYXJtNjQubwAkeABzdGFydHVwACRkAG1haW4uYwB0ZXN0
X3RpdGxlAF9fZW5kX2JzcwBzdGFjawBfc3RhcnQAdGVzdF9tYWluAF9lbmQAX19zdGFydF9ic3MA
AC5zeW10YWIALnN0cnRhYgAuc2hzdHJ0YWIALnRleHQALmRhdGEALnJvZGF0YQAuYnNzAC5kZWJ1
Z19pbmZvAC5kZWJ1Z19hYmJyZXYALmRlYnVnX2FyYW5nZXMALmRlYnVnX2xpbmUALmRlYnVnX3N0
cgAuY29tbWVudAAuZGVidWdfZnJhbWUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbAAAAAQAAAAYAAAAAAAAAAAAAQAAAAAAA
AAEAAAAAAKgAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAIQAAAAEAAAADAAAAAAAAAAAQ
AEAAAAAAABABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAACcAAAABAAAAAgAA
AAAAAAAAEABAAAAAAAAQAQAAAAAAEgAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAvAAAA
CAAAAAMAAAAAAAAAACAAQAAAAAASEAEAAAAAAAAQAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAA
AAAANAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAEhABAAAAAAD7AAAAAAAAAAAAAAAAAAAAAQAAAAAA
AAAAAAAAAAAAAEAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAA0RAQAAAAAAigAAAAAAAAAAAAAAAAAA
AAEAAAAAAAAAAAAAAAAAAABOAAAAAQAAAAAAAAAAAAAAAAAAAAAAAACXEQEAAAAAADAAAAAAAAAA
AAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAXQAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAxxEBAAAAAABJ
AAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAGkAAAABAAAAMAAAAAAAAAAAAAAAAAAAABAS
AQAAAAAAMwEAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAQAAAAAAAAB0AAAAAQAAADAAAAAAAAAAAAAA
AAAAAABDEwEAAAAAACYAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAAAAAAfQAAAAEAAAAAAAAA
AAAAAAAAAAAAAAAAcBMBAAAAAAAwAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAEAAAAC
AAAAAAAAAAAAAAAAAAAAAAAAAKATAQAAAAAA0AIAAAAAAAANAAAAFwAAAAgAAAAAAAAAGAAAAAAA
AAAJAAAAAwAAAAAAAAAAAAAAAAAAAAAAAABwFgEAAAAAAI0AAAAAAAAAAAAAAAAAAAABAAAAAAAA
AAAAAAAAAAAAEQAAAAMAAAAAAAAAAAAAAAAAAAAAAAAA/RYBAAAAAACKAAAAAAAAAAAAAAAAAAAA
AQAAAAAAAAAAAAAAAAAAAA==

--_005_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_
Content-Type: application/octet-stream; name="test-arm32-stub-syms"
Content-Description: test-arm32-stub-syms
Content-Disposition: attachment; filename="test-arm32-stub-syms"; size=72048;
	creation-date="Tue, 21 Jun 2022 11:27:58 GMT";
	modification-date="Tue, 21 Jun 2022 11:27:58 GMT"
Content-ID: <CC0F58D7392CF549AEBF6E0D79C8B8E9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64

f0VMRgEBAQAAAAAAAAAAAAIAKAABAAAAAAAAQDQAAADwFgEAAAQABTQAIAACACgAEAAPAAEAAAAA
AAEAAAAAQAAAAEASEAAAADAAAAcAAAAAAAEAUeV0ZAAAAAAAAAAAAAAAAAAAAAAAAAAABwAAABAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
[base64-encoded attachment (the test-arm32/arm64 stub binaries referenced in the thread) omitted]

--_005_7f490d75153d7e1db3c05418ff7fdf8fcitrixcom_--


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 12:07:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 12:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353211.580124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3cfK-0006N3-2P; Tue, 21 Jun 2022 12:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353211.580124; Tue, 21 Jun 2022 12:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3cfJ-0006Mw-VD; Tue, 21 Jun 2022 12:07:29 +0000
Received: by outflank-mailman (input) for mailman id 353211;
 Tue, 21 Jun 2022 12:07:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3cfI-0006Mq-83
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 12:07:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3cfH-0003fb-IE; Tue, 21 Jun 2022 12:07:27 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.3.84])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3cfH-0006jO-BE; Tue, 21 Jun 2022 12:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=jKcHp4lsTX841vtWizLuHWuO/Yak1LeyMQO4I+k8GdY=; b=yB1Tc9HKqLxDkmLho+bwsKXpt6
	Q6Ajg+jK1S1IHFyulURd7PqCOZlF+hioby2mYxK2WaEGQocXuK/XBUxNOAuLhnwt4dfLuBhsyCpw3
	dsD/E8gL3A43+s+2lrKdxRDRzPW1BV78Ki/rVwhtGX48ALC/sXF6LkL2nAowDDdQInbk=;
Message-ID: <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
Date: Tue, 21 Jun 2022 13:07:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: XTF-on-ARM: Bugs
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 21/06/2022 12:27, Andrew Cooper wrote:
> Hello,

Hi,

> I tried to have a half hour respite from security and push forward with XTF-on-ARM, but the result was a mess.
> 
> https://github.com/andyhhp/xtf/commit/bc86e2d271f2107da9b1c9bc55a050dbdf07c6c6 is the absolute bare minimum stub VM, which has a zImage{32,64} header, sets up the stack, makes one CONSOLEIO_write hypercall, and then a clean SCHEDOP_shutdown.
> 
> There are some bugs:
> 
> 1) kernel_zimage32_probe() rejects relocatable binaries, but if I skip the check it works fine.

Hmmmm... which check are you referring to?

> 
> Furthermore, kernel_zimage64_probe() ignores the header and assumes the binary is relocatable.

Are you referring to bit 3 "Kernel physical placement"?

> Both probe functions fail to check the endianness marker.

AFAIU the header is little-endian. So it is not clear to me why we 
should check the endianness marker?

> 
> 2) I'm using qemu-system-arm 4.2.1 (Debian 1:4.2-3ubuntu6.21), with some parameters cribbed from the Gitlab CI smoke test, but ctxt_switch_to() exploded with undef on:
> 
> WRITE_CP32(n->arch.joscr, JOSCR);
> WRITE_CP32(n->arch.jmcr, JMCR);
> 
> I'm not sure what these are (beyond Jazelle conf register), but I commented them out and it made further progress.  I have no idea if this is a Xen bug, qemu bug, or user error, but something is clearly wrong here.

I suspect the QEMU version is a bit too old to support 32-bit 
virtualization. Can you try a newer one?

> 
> 3) For test-arm64-stub, I get this:
> 
> (XEN) d0: extended region 1: 0x70000000->0x80000000
> (XEN) Loading zImage from 0000000048000000 to 0000000050000000-0000000050001012
> (XEN) Loading d0 DTB to 0x0000000058000000-0x0000000058001c85
> ...
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 324kB init memory.
> (XEN) *** Got CONSOLEIO_write (18 bytes)
> Hello from ARM64
> (XEN) *** CONSOLEIO_write done
> (XEN) arch/arm/traps.c:2054:d0v0 HSR=0x000000939f0045 pc=0x00000050000098 gva=0x80002ffc gpa=0x00000080002ffc

Looking at the log above, the GPA belongs to neither the kernel, the 
extended region, nor the DTB.

> qemu-system-aarch64: terminating on signal 2
> 
> i.e. the CONSOLEIO_write hypercall completes successfully, but a trap occurs before the SCHEDOP_shutdown completes.  The full (tiny) binaries are attached, but it seems to be faulting on:
> 
>      40000098:    b81fcc3f     str    wzr, [x1, #-4]!
> 
> which (I think) is the store of 0 to the stack for the schedop shutdown reason.

AFAICT the stack is meant to be placed right after the kernel. However, 
the fault above suggests that the value is not even close.

> 
> 4) For test-arm32-stub under either the 32bit or 64bit Xen, I get:
> 
> (XEN) Freed 348kB init memory.
> (XEN) *** Got CONSOLEIO_write (18 bytes)
> (XEN) *** got fault
> (XEN) *** Got SCHEDOP_shutdown, 0

Where are those messages coming from?

> (XEN) Hardware Dom0 halted: halting machine
> 
> which is weird.  The CONSOLEIO_write fails to read the passed pointer, despite appearing to have an ip-relative load to find the string, while the SCHEDOP_shutdown passes its parameter fine (it's a stack-relative load).

 From a brief look, your code is still running with the MMU off and the 
cache "off" (on Armv8, it is more a bypassed cache rather than off).

This means that you ought to be a lot more careful when reading/writing 
values to avoid reading any stale data.

> Other observations:
> 
> * There is no documented vCPU starting state.

See 
https://github.com/torvalds/linux/blob/master/Documentation/arm64/booting.rst.

> * Qemu is infinitely easier to use (i.e. no messing with dtb/etc) as -kernel xen -initrd test-$foo with a oneliner change to the dtb parsing to treat ramdisk and no kernel as the dom0 kernel.  Maybe a better change would be to modify qemu to understand multiple -kernel's.
> * Xen can't load ELFs.

The support was dropped in 2018 because it was bogus and not used:

https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00242.html

Personally, I think that zImage/Image is simple enough that 
re-introducing ELF is not worth it. But I would be OK to consider 
patches if you feel like writing them.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 12:15:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 12:15:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353221.580135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3cn0-0007oO-Tx; Tue, 21 Jun 2022 12:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353221.580135; Tue, 21 Jun 2022 12:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3cn0-0007oH-Pi; Tue, 21 Jun 2022 12:15:26 +0000
Received: by outflank-mailman (input) for mailman id 353221;
 Tue, 21 Jun 2022 12:15:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L66N=W4=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3cmz-0007oB-AJ
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 12:15:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d331485f-f15b-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 14:15:23 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 891FC165C;
 Tue, 21 Jun 2022 05:15:22 -0700 (PDT)
Received: from [10.57.35.142] (unknown [10.57.35.142])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DE25E3F534;
 Tue, 21 Jun 2022 05:15:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d331485f-f15b-11ec-b725-ed86ccbb4733
Message-ID: <c3232cf1-eec1-36a5-ab62-170a1a40a960@arm.com>
Date: Tue, 21 Jun 2022 14:15:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: XTF-on-ARM: Bugs
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 21.06.2022 13:27, Andrew Cooper wrote:
> Hello,
> 
> I tried to have a half hour respite from security and push forward with XTF-on-ARM, but the result was a mess.
> 
> https://github.com/andyhhp/xtf/commit/bc86e2d271f2107da9b1c9bc55a050dbdf07c6c6 is the absolute bare minimum stub VM, which has a zImage{32,64} header, sets up the stack, makes one CONSOLEIO_write hypercall, and then a clean SCHEDOP_shutdown.
> 
> There are some bugs:
> 
> 1) kernel_zimage32_probe() rejects relocatable binaries, but if I skip the check it works fine.
> 
> Furthermore, kernel_zimage64_probe() ignores the header and assumes the binary is relocatable.  Both probe functions fail to check the endianness marker.
> 
> 2) I'm using qemu-system-arm 4.2.1 (Debian 1:4.2-3ubuntu6.21), with some parameters cribbed from the Gitlab CI smoke test, but ctxt_switch_to() exploded with undef on:
> 
> WRITE_CP32(n->arch.joscr, JOSCR);
> WRITE_CP32(n->arch.jmcr, JMCR);
> 
> I'm not sure what these are (beyond Jazelle conf register), but I commented them out and it made further progress.  I have no idea if this is a Xen bug, qemu bug, or user error, but something is clearly wrong here.
> 
> 3) For test-arm64-stub, I get this:
> 
> (XEN) d0: extended region 1: 0x70000000->0x80000000
> (XEN) Loading zImage from 0000000048000000 to 0000000050000000-0000000050001012
> (XEN) Loading d0 DTB to 0x0000000058000000-0x0000000058001c85
> ...
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 324kB init memory.
> (XEN) *** Got CONSOLEIO_write (18 bytes)
> Hello from ARM64
> (XEN) *** CONSOLEIO_write done
> (XEN) arch/arm/traps.c:2054:d0v0 HSR=0x000000939f0045 pc=0x00000050000098 gva=0x80002ffc gpa=0x00000080002ffc
> qemu-system-aarch64: terminating on signal 2
> 
> i.e. the CONSOLEIO_write hypercall completes successfully, but a trap occurs before the SCHEDOP_shutdown completes.  The full (tiny) binaries are attached, but it seems to be faulting on:
> 
>     40000098:    b81fcc3f     str    wzr, [x1, #-4]!
> 
> which (I think) is the store of 0 to the stack for the schedop shutdown reason.
> 
> 4) For test-arm32-stub under either the 32bit or 64bit Xen, I get:
> 
> (XEN) Freed 348kB init memory.
> (XEN) *** Got CONSOLEIO_write (18 bytes)
> (XEN) *** got fault
> (XEN) *** Got SCHEDOP_shutdown, 0
> (XEN) Hardware Dom0 halted: halting machine
> 
> which is weird.  The CONSOLEIO_write fails to read the passed pointer, despite appearing to have an ip-relative load to find the string, while the SCHEDOP_shutdown passes its parameter fine (it's a stack-relative load).
> 
> 
> Other observations:
> 
> * There is no documented vCPU starting state.
> * Qemu is infinitely easier to use (i.e. no messing with dtb/etc) as -kernel xen -initrd test-$foo with a oneliner change to the dtb parsing to treat ramdisk and no kernel as the dom0 kernel.  Maybe a better change would be to modify qemu to understand multiple -kernel's.
> * Xen can't load ELFs.
> 
> Some of these bugs might be mine, but at a minimum 1 is a bug in Xen and needs fixing.  Any ideas?
> 
> ~Andrew

FWICT Xen does not support booting ELF images, so I'm not sure why you want to use relocatable binaries.

Apart from that, I'd suggest using my patches, which are tested and work fine, to avoid duplicating work.
FWICS you are based on some old patches from v1, while the new pull request has been open since March:
https://github.com/andyhhp/xtf/pull/6

This PR contains fixes for findings reported by Julien and Christopher during v1 review.

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 12:33:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 12:33:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353231.580146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3d4c-0001kh-DQ; Tue, 21 Jun 2022 12:33:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353231.580146; Tue, 21 Jun 2022 12:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3d4c-0001kZ-Ad; Tue, 21 Jun 2022 12:33:38 +0000
Received: by outflank-mailman (input) for mailman id 353231;
 Tue, 21 Jun 2022 12:33:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3d4b-0001kQ-Vv; Tue, 21 Jun 2022 12:33:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3d4b-00046D-TU; Tue, 21 Jun 2022 12:33:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3d4b-00075m-Fh; Tue, 21 Jun 2022 12:33:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3d4b-0000aq-FG; Tue, 21 Jun 2022 12:33:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1TYY3AG0lLYIj10M86LiSi9t/zDnC9G1AG4aYjUrvsg=; b=SMKiRZgbTYaU/eCdHomOvc5ait
	VgJwec1V1pRdovkaFpNmiC1suIw0AP2Jl96CrHQ4YzH/GVuNbvZQDLPJ1OeALaNIdxlEjcBYkske7
	g63rrfHyHgM78RetDM/KP6t02kVd/Elaw8+Xf/Nia7OYzg30uJ0e1+M4KKIMwvZTLxOE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171296: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=78ca55889a549a9a194c6ec666836329b774ab6d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 12:33:37 +0000

flight 171296 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                78ca55889a549a9a194c6ec666836329b774ab6d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    2 days
Failing since        171280  2022-06-19 15:12:25 Z    1 days    6 attempts
Testing same since   171294  2022-06-20 21:11:31 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Rientjes <rientjes@google.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1865 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 13:30:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 13:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353243.580157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3dxg-0007jw-Ni; Tue, 21 Jun 2022 13:30:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353243.580157; Tue, 21 Jun 2022 13:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3dxg-0007jp-KF; Tue, 21 Jun 2022 13:30:32 +0000
Received: by outflank-mailman (input) for mailman id 353243;
 Tue, 21 Jun 2022 13:30:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ORL=W4=citrix.com=prvs=164d1f6c5=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3dxf-0007jj-Hq
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 13:30:31 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 502a4a98-f166-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 15:30:29 +0200 (CEST)
Received: from mail-dm6nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Jun 2022 09:30:16 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5721.namprd03.prod.outlook.com (2603:10b6:806:117::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Tue, 21 Jun
 2022 13:30:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Tue, 21 Jun 2022
 13:30:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 502a4a98-f166-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655818229;
  h=from:to:subject:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version;
  bh=pHMzBsE7t20XUGkgBC8VAO2Azuil1gGzLGSC9BTCGpQ=;
  b=N+GEQqhOyXAS69l2GzNvJDRckCgyQ0fzV10Kr8yCy7lX5/Pz2GtPHJQ2
   U28E3UmjQqKoGbDBwsIETUUbEHTrIzeecWZgOOZbOeyLwQGg4Xr0aPPoY
   UTOXg8p5SfCQsMdRTgPtKJVu7EMfPPw0IV7gby/RFs+V6tgi3yqzpJIsa
   g=;
X-IronPort-RemoteIP: 104.47.59.172
X-IronPort-MID: 73919832
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,209,1650945600"; 
   d="scan'208";a="73919832"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pHMzBsE7t20XUGkgBC8VAO2Azuil1gGzLGSC9BTCGpQ=;
 b=IAAz9qmmLFI2Fvm3neAjTAcq4GNouZ9vuhbxfN2AXXKSv7rIfbjbfHnfSEarUKXfmrbTCRbjG1kWqC9+F7k+4MSZQTqjm60pCt/FHO/hwrMVlFMpSOlEYlatcYppNjqA6Z54ipWFTO1scEgOUNP9rD8/PqHMfFoI7bicgJN+024=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, xen-devel <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal Orzel
	<Michal.Orzel@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Christopher Clark
	<christopher.w.clark@gmail.com>, Daniel Smith <dpsmith@apertussolutions.com>,
	Roger Pau Monne <roger.pau@citrix.com>, George Dunlap
	<George.Dunlap@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index: AQHYhWH1+fjNtvukV0adEqGEdTB7sq1ZxCsAgAAXIgA=
Date: Tue, 21 Jun 2022 13:30:12 +0000
Message-ID: <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
In-Reply-To: <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0957a34e-af84-4f8d-78d0-08da538a2b8a
x-ms-traffictypediagnostic: SA2PR03MB5721:EE_
x-microsoft-antispam-prvs:
 <SA2PR03MB5721C3113005F8524E4C0AAFBAB39@SA2PR03MB5721.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 21/06/2022 13:07, Julien Grall wrote:
>
>
> On 21/06/2022 12:27, Andrew Cooper wrote:
>> Hello,
>
> Hi,
>
>> I tried to have a half hour respite from security and push forward
>> with XTF-on-ARM, but the result was a mess.
>>
>> https://github.com/andyhhp/xtf/commit/bc86e2d271f2107da9b1c9bc55a050dbdf07c6c6
>> is the absolute bare minimum stub VM, which has a zImage{32,64}
>> header, sets up the stack, makes one CONSOLEIO_write hypercall, and
>> then a clean SCHEDOP_shutdown.
>>
>> There are some bugs:
>>
>> 1) kernel_zimage32_probe() rejects relocatable binaries, but if I
>> skip the check it works fine.
>
> Hmmmm... which check are you referring to?

if ( (end - start) > size )
    return -EINVAL;

Although now I think about it, the problem is subtly different.

Section Headers:
  [Nr] Name      Type      Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]           NULL      00000000 000000 000000 00      0   0  0
  [ 1] .text     PROGBITS  40000000 010000 000094 00  AX  0   0  4
  [ 2] .data     PROGBITS  40001000 011000 000000 00  WA  0   0  1
  [ 3] .rodata   PROGBITS  40001000 011000 000012 00   A  0   0  4
  [ 4] .bss      NOBITS    40002000 011012 001000 00  WA  0   0  4

end is calculated as 0x3000, which includes the .bss (including the
stack, which is in .bss and page aligned), while the raw binary size is
0x1012 because it stops at the end of .rodata.

>
>>
>> Furthermore, kernel_zimage64_probe() ignores the header and assumes
>> the binary is relocatable.
>
> Are you referring to bit 3 "Kernel physical placement"?

No. This:

/* Currently there is no length in the header, so just use the size */
start = 0;
end = size;

which isn't true even for the v0 header.  The field named text_offset in
Xen's code is start, and res1 is end (or size, for relocatable images).

Kernel placement only pertains to whether the image should be 2M aligned
or not.

>
>> Both probe functions fail to check the endianness marker.
>
> AFAIU the header is little endian. So it is not clear to me why we
> should check the endianness marker?

Not the endianness of the header, the endianness of the image.  Both
headers have a field which ought to be checked for != LE, seeing as Xen
doesn't support big endian domains yet.

>
>>
>> 2) I'm using qemu-system-arm 4.2.1 (Debian 1:4.2-3ubuntu6.21), with
>> some parameters cribbed from the Gitlab CI smoke test, but
>> ctxt_switch_to() exploded with undef on:
>>
>> WRITE_CP32(n->arch.joscr, JOSCR);
>> WRITE_CP32(n->arch.jmcr, JMCR);
>>
>> I'm not sure what these are (beyond Jazelle conf register), but I
>> commented them out and it made further progress.  I have no idea if
>> this is a Xen bug, qemu bug, or user error, but something is clearly
>> wrong here.
>
> I suspect the QEMU version is a bit too old to support 32-bit
> virtualization. Can you try a newer one?

Not easily right now, but (other than these two issues) it appears to
work fine.  The 32bit XTF guest is executing correctly (to a first
approximation) based on the fact that it does make the two expected
hypercalls.

At a guess, it's
https://patchwork.ozlabs.org/project/qemu-devel/patch/20191216110904.30815-25-peter.maydell@linaro.org/

>
>>
>> 3) For test-arm64-stub, I get this:
>>
>> (XEN) d0: extended region 1: 0x70000000->0x80000000
>> (XEN) Loading zImage from 0000000048000000 to
>> 0000000050000000-0000000050001012
>> (XEN) Loading d0 DTB to 0x0000000058000000-0x0000000058001c85
>> ...
>> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch
>> input)
>> (XEN) Freed 324kB init memory.
>> (XEN) *** Got CONSOLEIO_write (18 bytes)
>> Hello from ARM64
>> (XEN) *** CONSOLEIO_write done
>> (XEN) arch/arm/traps.c:2054:d0v0 HSR=0x000000939f0045
>> pc=0x00000050000098 gva=0x80002ffc gpa=0x00000080002ffc
>
> Looking at the log above, the GPA belongs to neither the kernel, the
> extended region, nor the DTB.
>
>> qemu-system-aarch64: terminating on signal 2
>>
>> i.e. the CONSOLEIO_write hypercall completes successfully, but a trap
>> occurs before the SCHEDOP_shutdown completes.  The full (tiny)
>> binaries are attached, but it seems to be faulting on:
>>
>>     40000098:    b81fcc3f     str   wzr, [x1, #-4]!
>>
>> which (I think) is the store of 0 to the stack for the schedop
>> shutdown reason.
>
> AFAICT the stack is meant to be right after the kernel. However, the
> fault above suggests that the value is not even close.
>
>>
>> 4) For test-arm32-stub under either the 32bit or 64bit Xen, I get:
>>
>> (XEN) Freed 348kB init memory.
>> (XEN) *** Got CONSOLEIO_write (18 bytes)
>> (XEN) *** got fault
>> (XEN) *** Got SCHEDOP_shutdown, 0
>
> Where are those messages coming from?

My debugging, so I can see what's going on.  The "got fault" was the
copy_from_user() EFAULT path.

>
>> (XEN) Hardware Dom0 halted: halting machine
>>
>> which is weird.  The CONSOLEIO_write fails to read the passed
>> pointer, despite appearing to have an ip-relative load to find the
>> string, while the SCHEDOP_shutdown passes its parameter fine (it's a
>> stack-relative load).
>
> From a brief look, your code is still running with the MMU off and the
> cache "off" (on armv8, it is more a bypass "cache" rather than off).
>
> This means that you ought to be a lot more careful when
> reading/writing values, to avoid reading any stale data.

There are no relocations/etc, so everything has well defined behaviour
even when the caches are off.

This is the point of the minimum viable binary.  It's to have something
trivial we can develop the build system around.

>
>> Other observations:
>>
>> * There is no documented vCPU starting state.
>
> See
> https://github.com/torvalds/linux/blob/master/Documentation/arm64/booting.rst.

What's it got to do with Xen's vCPU starting state?  Also, that's
clearly not relevant for arm32, even if the implication is "Xen only
speaks the Linux ABI".

It needs to be in docs/ (or public/ at a stretch) and not in the heads
of the maintainers.

>
>> * Qemu is infinitely easier to use (i.e. no messing with dtb/etc) as
>> -kernel xen -initrd test-$foo, with a one-liner change to the dtb
>> parsing to treat "ramdisk and no kernel" as the dom0 kernel.  Maybe a
>> better change would be to modify qemu to understand multiple -kernel's.
>> * Xen can't load ELFs.
>
> The support was dropped in 2018 because it was bogus and not used:
>
> https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00242.html
>
>
> Personally, I think that zImage/Image is simple enough that
> re-introducing ELF is not worth it. But I would be OK to consider
> patches if you feel like writing them.

There is a massive usability improvement from being able to point normal
toolchain tools at the same binary you're trying to load.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 13:50:36 2022
Date: Tue, 21 Jun 2022 15:48:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: committers@xenproject.org
Subject: disabling mercurial repositories
Message-ID: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
MIME-Version: 1.0

Hello,

Last week we had a bit of an emergency when a web crawler started
indexing all our mercurial repositories on xenbits, which caused the
load on xenbits to go beyond what it can handle.

As a temporary solution we decided to remove access to the mercurial
repositories, but the contents there are AFAIK only of historical
interest, so we might consider removing access to the mercurial
repositories completely.  This would however require migrating any
repository we care about to git.

I would like an opinion from the committers, as well as the broader
community, on whether shutting down the mercurial repositories and
migrating whatever we care about is appropriate.  Otherwise we will
need to implement some throttling of mercurial accesses in order to
avoid overloading xenbits.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 13:52:13 2022
From: Rahul Singh <Rahul.Singh@arm.com>
To: Mykyta Poturai <mykyta.poturai@gmail.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, "julien@xen.org" <julien@xen.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Topic: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a
 device
Thread-Index:
 AQHYWlIekEFYa4Z+ZkOKg6prrRxnv60EB7eAgALvxoCABl1EgIBEoW2AgAA0fACAAZfdAIAEm8YAgAGSSQCAAEbogA==
Date: Tue, 21 Jun 2022 13:51:56 +0000
Message-ID: <0A58139F-CA6F-4E18-B44A-2066AEF0C8F6@arm.com>
References: <A53A2C83-BA19-481D-8851-0B0E1A162F4D@arm.com>
 <20220621093808.597929-1-mykyta_poturai@epam.com>
In-Reply-To: <20220621093808.597929-1-mykyta_poturai@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: e31edf60-7189-4e72-68e0-08da538d39e3
x-ms-traffictypediagnostic:
	PA4PR08MB5888:EE_|AM5EUR03FT011:EE_|AM6PR08MB4565:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB456500BD6C9ACD9131F30438FCB39@AM6PR08MB4565.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 iM201TIXXpCc0flE0bAkjANbilEmEU4oqF0BcFsmk2lKF9c8XkwJe5anTCuUGAYn50MpLSmcoc4Tb2SjY7lmJ/giDn86DMD5Rk96ibE2YcXY3zLCe1FaGT8UvGsAowdSTenC/nYAMWae1SI1Cah2OTzLnajIFrkc2AQXZbj9fK54X1VAg/XNwY2FaoArSxDtshE5nq/7n4zArZrdmZeTCDHNdxc323skrgiV22NngObsZpEWAf6fkS1XULCIBqQ+MVh7y+fvnSV/IEtr73QDY2dON7F/Wgd6WZVgqMDMLoMC79dQJNszJiWw0wYkalFxRnYG9WFERhJzsexNNx87ohwjn0KPN/8pf0wIb1cf+hNAIgkJVHJBQh5DFlsiQx4DVdpSMS9W9K2Z/N7Xk7ET8JYfDCgfE3sQdetDQwPGZb5mrXC4znBcVH+0HRrsby9ukfVJ7CeX6D0GWUf7eeB5hV1B6MSs4iU0jl+hpUU3lJtAbv7H4wjNliCQPbSc90PknXL7HxHhkcpWy0fsY6K/JClIIu5R8/xS/WLGH4pFOSWQeSlT2ZJ/QUMpzJkkGFswEq4c4CksML4NBEnIiq/DbCmxlgsF3brOEol+rQoWPvT4bGwCdzM0fbLctXYOC/1+MW1Gyekcl+AtYVEZfVizu/Wk9NE1BGtUIOeUN2xAsuP4OliBDF1v02WLWt6MqOc9MQZRUJeXOPfW7IMqal/novj0yCFX8OWRcRsKgr9LBn0Z45glmFzA3zgwvfkrm3/E
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(396003)(136003)(346002)(376002)(366004)(86362001)(6506007)(2906002)(38100700002)(26005)(122000001)(6512007)(2616005)(41300700001)(83380400001)(38070700005)(186003)(53546011)(66556008)(71200400001)(6916009)(316002)(4326008)(8676002)(8936002)(66446008)(76116006)(33656002)(64756008)(478600001)(36756003)(5660300002)(54906003)(91956017)(6486002)(66946007)(66476007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <68480629F3DA5D428A12240C0DFCFF98@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5888
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	62fb99df-00cf-42de-2e09-08da538d3490
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	85De4jiPDlSg8eRk+bAJ6L1rYlYHfJcAVssgjDVNcANHCZyaD+HlFE9WFm7yGCSyc6BKINHLFl2kNb8CD3zIhfGgGiW8KNaAm42RaVAp//zG4AXw6BA7u5coy4opH37f3be/M5vG0zunHp03eyYcnLBds9nN1PUt5eDq043GR5ZD2Rdpf0kltPwiyJWFDzJotOywYh/OLkKDOGpIFER737SjyjzJXSq949JFivZLhoiUvG921anmy8xOt5q30TUWiRzCpOjRjE2qqZqR87g5AmbX8/U0T9TwHPIsIsRyubYLV3W8ryOob2KXaJFyWyuYG+fQ7q1zTZenVQbYTS0QC4zckmNw3zSZ3hywJwSeJpwf7v0nPKatVjisEVVyfTfaIz8qaXwwpEYcWIrqD1i1OJ6WTwygF0tfGaEdIX6kBsJ2NGJLaKOnZnRhXu3tlVEZIuYV0tdXuNi/HzT7oXUgEzioo+E821oj+OdcRAmaefdwmSqmZlubPvf/VStToaP9FW2cfSkmx9ZGscv4bo1+KB3idW0guEMuS9bGy9muheLgKB2OjzWcAP6JdQ5dHrSA0FgP66th22ib6kMGJISvL/8bHuG1FLezDp3ukm1GT7NYO37fHaeUjpIq+xuEx+Fp1fhEqm/bDaxasNS9Z9YtienNsCnBhkctExXcNqbMfr8uAR+ro3rAFoIsp1YLZz75+UShJrx2iMhP9eDFPXcfhw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(396003)(136003)(346002)(376002)(46966006)(36840700001)(40470700004)(316002)(5660300002)(40480700001)(6512007)(54906003)(86362001)(26005)(6506007)(70586007)(47076005)(36756003)(6486002)(8936002)(478600001)(336012)(2906002)(53546011)(83380400001)(41300700001)(2616005)(186003)(4326008)(33656002)(82310400005)(81166007)(6862004)(82740400003)(70206006)(36860700001)(8676002)(356005)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2022 13:52:05.2420
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e31edf60-7189-4e72-68e0-08da538d39e3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4565

Hi Mykyta,

> On 21 Jun 2022, at 10:38 am, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
> 
>> Thanks for testing the patch.
>>> But not fixed the "Unexpected Global fault" that occasionally happens when destroying
>>> the domain with an actively working GPU. Although, I am not sure if this issue
>>> is relevant here.
>> 
>> Can you please if possible share the more details and logs so that I can look if this issue is relevant here ?
> 
> So in my setup I have a board with IMX8 chip and 2 core Vivante GPU. GPU is split between domains.
> One core goes to Dom0 and one to DomU.
> 
> Steps to trigger this issue:
> 1. Start DomU
> 2. Start wayland and glmark2-es2-wayland inside DomU
> 3. Destroy DomU
> 
> Sometimes it destroys fine but roughly 1 of 8 times I get logs like this:
> 
> root@dom0:~# xl dest DomU
> [12725.412940] xenbr0: port 1(vif8.0) entered disabled state
> [12725.671033] xenbr0: port 1(vif8.0) entered disabled state
> [12725.689923] device vif8.0 left promiscuous mode
> [12725.696736] xenbr0: port 1(vif8.0) entered disabled state
> [12725.696989] audit: type=1700 audit(1616594240.068:39): dev=vif8.0 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
> (XEN) smmu: /iommu@51400000: Unexpected global fault, this could be serious
> (XEN) smmu: /iommu@51400000:    GFSR 0x00000001, GFSYNR0 0x00000004, GFSYNR1 0x00001055, GFSYNR2 0x00000000
> 
> My guess is that this happens because GPU continues to access memory when the context is already invalidated,
> and therefore triggers the "Invalid context fault".

Yes, you are right: in this case the GPU is trying to do a DMA operation after Xen has destroyed the guest and configured the S2CR type value to fault. The solution to this issue is the patch that I shared earlier.

You can try this patch and confirm. This patch will solve both issues.

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 5cacb2dd99..ff1b73d3d8 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1680,6 +1680,10 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	if (!cfg)
 		return -ENODEV;
 
+	ret = arm_smmu_master_alloc_smes(dev);
+	if (ret)
+		return ret;
+
 	return arm_smmu_domain_add_master(smmu_domain, cfg);
 }
 
@@ -2075,7 +2079,7 @@ static int arm_smmu_add_device(struct device *dev)
 	iommu_group_add_device(group, dev);
 	iommu_group_put(group);
 
-	return arm_smmu_master_alloc_smes(dev);
+	return 0;
 }


Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 14:01:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 14:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353270.580190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3eRm-0003uj-OP; Tue, 21 Jun 2022 14:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353270.580190; Tue, 21 Jun 2022 14:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3eRm-0003uc-LV; Tue, 21 Jun 2022 14:01:38 +0000
Received: by outflank-mailman (input) for mailman id 353270;
 Tue, 21 Jun 2022 14:01:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0ORL=W4=citrix.com=prvs=164d1f6c5=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3eRm-0003uW-29
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 14:01:38 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a919ab5d-f16a-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 16:01:36 +0200 (CEST)
Received: from mail-sn1anam02lp2049.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Jun 2022 10:01:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB5059.namprd03.prod.outlook.com (2603:10b6:408:d9::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Tue, 21 Jun
 2022 14:01:25 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Tue, 21 Jun 2022
 14:01:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a919ab5d-f16a-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655820096;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Yta9BEyH8+hwyBurT2yw57GJo+fTM3zHvJT7jvCa2Bo=;
  b=f95t1icOH2BFgl2D4AdZI+bH3dRA7GfaxyXxUh3gMFTBFb7a2RGi9RPW
   D4slj0wb+HCLiWCp67V8doNPG/aD87PFfHnseZ+txf9bgT3UgekaKC95K
   2tbCW5uAjVSV5sJjd5UbdlrAoEPdXwllKrztRWOE/WzhiG3i7L5lB1Mlf
   I=;
X-IronPort-RemoteIP: 104.47.57.49
X-IronPort-MID: 76638111
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:GtPJea1CREv2QPF/jPbD5aRwkn2cJEfYwER7XKvMYLTBsI5bpzNSn
 zcZXT2OM/+Ma2Lzc41/PI2//E8G7JLTn4VjHQdupC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy2IDga++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /1fpLKPcyMnMJTQhe8YYR4IFXFVOupZreqvzXiX6aR/zmXgWl60mbBVKhhzOocVvOFqHWtJ6
 PoUbigXaQyOjP63x7T9TfRwgsMkL4/gO4Z3VnNIlGmFS6p5B82cBfmbjTNb9G5YasRmNPDSf
 ccGLxFoawzNeUZnMVYLEpMu2uyvgxETdhUH9gzO9fNuugA/yiR4z7roEMrJSuCYROhWwkyni
 mvX5DjQV0Ry2Nu3jGDtHmiXrv/Cm2b3VZwfEJW89+V2mxuDy2oLEhoUWFCn5/6jhSaWUNVaL
 k0I5ic0toAi+UqzVN7/Uhak5nmesXYht8F4FuQ77ESWzPPd5Q+cXjIAVmQZNI1gs9IqTzs30
 FPPh8nuGTFkrLySTzSa66uQqjSxfyMSKAfueBM5cOfM2PG7yKlbs/4FZo8L/HKd5jEtJQzN/
 g==
IronPort-HdrOrdr: A9a23:SnYcza29aEQBkB2o4ICH3wqjBEgkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7Ar5PUtLpTnuAsa9qB/nm6KdgrNhWItKPjOW21dARbsKheffKlXbcBEWndQtt5
 uIHZIeNDXxZ2IK8PoT4mODYqodKA/sytHWuQ/cpU0dMz2Dc8tbnmBE4p7wKDwMeOFBb6BJcq
 a01458iBeLX28YVci/DmltZZm4mzWa/KiWGCLvHnQcmXGzsQ8=
X-IronPort-AV: E=Sophos;i="5.92,209,1650945600"; 
   d="scan'208";a="76638111"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Pgzw4b5vTRdvb/qS4XVBdQGFtvI2q23Ajho+QGSibG6J+SrzyhiT79XEklUnhJuumDn+wQnSf82K2sb+R5NXz0WhtqZ7SII8Z046uheEKnPRaX0Q0qS+bE8qKscb8h8X6OUEeV93hCwXFCb2Ce5G1PnbKQBf5Vdvy2IDNE3QuBi4oOC6sNDl929RqI81hI/xJczlD68LpCS9ZPTtYyLlpmipyGT4I4YLP9EuWp3ERJ20J4aox7h1pAeoGWZprgX4ekiaYRAWESEvOO9NtozeAcetDXHqurSVQk165pcLKX2zKJBOv2ZPs5aUeeSXC7Xj8M3QJWawxvEDYwLc83BA+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Yta9BEyH8+hwyBurT2yw57GJo+fTM3zHvJT7jvCa2Bo=;
 b=Vu7ajkoqOX//yWQZA1NnxlNQmE5RUbnW+WWj0iaQGDgpYHUwZTHHJdsdGjnweKVLzOn8g0ta4TIWltbPmqITk88dizGOcXkzFTK+JhcRIOKxdftXIs6JwSUz8LNHHlo3Sxl7oKiPLONRn90vZIBOv60g5qw4JZkyGe4iiG0GfKouyPzr8/DgkcFOyUndB8oXB5vMjnW8JNkSBNGegV2SD0nATA+uGrjXlC3njjRhacU6gOpLfOfIfcdBL5lpOG/yPDNZyZC6dSgQUlpDgTLNAHxcTeb9ibBA7Xx1KtkWXCTvJdvecrqgKCAnrNVFm1siT+4vbLk30Tn1MKZOoXQUSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yta9BEyH8+hwyBurT2yw57GJo+fTM3zHvJT7jvCa2Bo=;
 b=bCcMys6SeCvceBjkV8gDIIeQfD1pV1ujhytPS227b34/c6ejbbQvQkKHWtrUNmcEC2kKv6bLSfdgPJDf30NYWcm2A8cInq5GFuZRgOfZH0CuAxG16ZkvslXhFEFgKpgIdQt48e5d3ShQsLd+9x5FUuaoTPAD7OfKXJC/CSS1OFE=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "committers@xenproject.org" <committers@xenproject.org>
Subject: Re: disabling mercurial repositories
Thread-Topic: disabling mercurial repositories
Thread-Index: AQHYhXXQrdr64fwrwEqfZHy5KYkmJq1Z49yA
Date: Tue, 21 Jun 2022 14:01:24 +0000
Message-ID: <925e715d-df9f-4bb8-616c-389c5c58f479@citrix.com>
References: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
In-Reply-To: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2186810d-58ee-49f2-f9db-08da538e86fc
x-ms-traffictypediagnostic: BN8PR03MB5059:EE_
x-microsoft-antispam-prvs:
 <BN8PR03MB5059433BC7D4A26504A332C5BAB39@BN8PR03MB5059.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 lRcfYD+kuvcAlCZ9VH30YZxi6mH3e195egagnqPYXxM9IUBYRTegRd3e9qYLh2JREpttxR0yGw7CksHrXXNwTnkM3h/hGG0CfPEkctLkdsArayj1Nft8P3DTcaLS8criULexG38AwdGTD90H20WnPb2XUhPBe1wEXz/ix8or6wQcyOGbXx9Rvx8VhxBM2lMzUwrmhmYb+E6eioZ1y7XYQCXQ2nxnpMZvHCD4qDkZWmTc+JbE/Y2qD23yWzVk6uT3ku99d76xQfQs8JhR4QaulrfBZTGJf9vDJYx0iEX0JP0xytusrPbdXPFX/xyydWhc392OWqrsxusH3m5x6Yk0vIUh7BHTWszR5gslQ7N1S3GtdMVG4RCmsG0vTrL+7JBeR5pninBmrrnApz5vPPRajC78A5zFBd4A2azTMpBDVdgQM0oms2HQzHQZzzvfTSz/hIRF8IWYvkSrt/R1BlXa5BwrZV2s8rBbNG07wbEFPcfFVHnkYdX5OoQRapGiRalkKx5uCyFzoOPkKcmqYYa3S62QSaeaztCaR4C3EMi9XsTMQPT4mLHkazHMHZbc+NlkImfn1nG6tdidUIUPzSp5HVvEeAzPgN2+f9bFnLSPlH6H+Mqd/MMEz6G/0LcGo9nG0fhuN3CMmWyE8anvk1HDRr10QgRQqlvzrKDPtBMq0LxAX7pV5RzJfg38FTLSg2q26OyM/i5rTmTGbIq9CubvnhNvaPNUW02a4kVTJqt+2pHAhJD6gUpYtEo57TP9QgWle5Rh0Jq/+/dvLLfmRO0y4A==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(376002)(396003)(39860400002)(136003)(366004)(346002)(91956017)(31686004)(76116006)(66946007)(36756003)(66556008)(7116003)(66476007)(478600001)(2906002)(83380400001)(8676002)(6486002)(38070700005)(4326008)(3480700007)(186003)(64756008)(110136005)(66446008)(71200400001)(8936002)(316002)(82960400001)(5660300002)(86362001)(122000001)(41300700001)(6512007)(38100700002)(53546011)(26005)(2616005)(31696002)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?MHE1M2tqbVBLMXdvZ0s3d2pkSExPaDVnV0NDaGV0dHRCems5TDNBOGgxMzBL?=
 =?utf-8?B?S05TbXVReEw1ZkFZcVhXYWVvdW90czlwZTZ6aWRxdFlkZTRNVmE1Z0ZaMmcv?=
 =?utf-8?B?K3EvVlZMZEJzaDdUM3I2TjZ3WEN1bXZ3MktBUCtTRnhlY21UeXZWQVFKbGkr?=
 =?utf-8?B?dXlzbXBtUlcwckY0TmNtRjZCTXVCNi9xR1FRdXI2OU4zeWpJSGdiWGZ3YnR1?=
 =?utf-8?B?STBSUHl4VDd4QUljaTlmL0o2WlZ0b0RUQllDSVcyaFBPM3ZVT1RwTDZJQWpD?=
 =?utf-8?B?NWdLZ2NYTnpEcXhXdG1SL2VWbmd6Wi93bUM2MklDQWk5WnlIUUZFYzNaWGYz?=
 =?utf-8?B?VFhkM0UzczJ0aHltWjNjZ2JPWW1lbmwwY0svSGR1S3NEU3FVT2p4c0I4cGVC?=
 =?utf-8?B?WlJEbzBPSCtYdVhDNXVVbW9sVjk3ZFloSHZUdXFodS95a0pDZEFjamVSVCsy?=
 =?utf-8?B?NVdoZVhmNWwrZHVqM1dFeWFLZDFPNXdOWXhlL1liL0VwVGZUd1pMWDh3b3lQ?=
 =?utf-8?B?S2p4N1RUZTY3NmphOVoweThtUzlBWFVZd2c1c2wybzdMUFpTQWlJUG1TbjNK?=
 =?utf-8?B?ZS9nT01ML3gwSkhtYnJkOEpYMVIyV1BvUnhKU1IrRXF0TCsxbHU0M0U0NkJU?=
 =?utf-8?B?VG1sV2hFeDlTY25lSTVrZmhIcXQ0U294b0JnUTdQUFdOblNYK28zL2QzTi9S?=
 =?utf-8?B?a1NzdXJFaTdsaVQ1NURzb1FzbHBCcTJZcU9XZUFqMXNmTW54QXBvUUs1NEVS?=
 =?utf-8?B?NW00OE1mdGxDM284cTJWb0JzMjZ1UWtWVHlSTmJzOXB0M3doNTNqZjBqSW40?=
 =?utf-8?B?Ym9SS3lxMzNxZGFlU1plUUwxeis0bjREWm5rMmwwdElBMWpkanpoaXlVV3R0?=
 =?utf-8?B?TGZjM3pSY0hzbzhQTjNuUThUeEEzeiszOXBjWWlMSUJuOXVxK01wYUNqTUtR?=
 =?utf-8?B?OTBhSmU3S1VQdVVuZU9aZUhVb0hHZEkrWkJYRzAyRm1DWExDNU1rN2l3MmJP?=
 =?utf-8?B?d1BhZVVtRkZ5VDlKeEYwa0tYR0Z0aERiaHRxOUJYR2pzVjF1SlFsYUE2UjdK?=
 =?utf-8?B?NGd6SFNTN1RscVJiNWlHZi9lcktLWHRzWFl1RFp6U2RmRzB0OTBWZ01wM3FQ?=
 =?utf-8?B?Q09LZHgySU1qMjA0VkdQOFk1Y0J6Skl4VHQ3MGx2dHdIVVVtY1lSd0s5aE56?=
 =?utf-8?B?VXhWYWZBSGNuTkpqU0t6blloWHQ3ZVQ1dXZiVDMzU2pIRTZGMFl2Unl3OUhJ?=
 =?utf-8?B?eTNpWUQrRzFNd2JuRnYvelpsSi90NFhyY0JUYXB2bXp0QlJnNDZWSVdMVWhr?=
 =?utf-8?B?RFB6TStyczkrdXFBTSs5dlhjMUxlU0lTMmtEQW00NG9TNzFSRmdKY2x2MGdG?=
 =?utf-8?B?QXFyWjUza3g1QmlNaFA2YVZsbHpCTmxkelZrQmpQOU53UHRrTEVIa2xqcDJT?=
 =?utf-8?B?Uzg4VGVxSEprbmxkV05sMHN0ZHpQa2lCWlZFWTVOSkE4ZVhtWE1OUklkNjcr?=
 =?utf-8?B?UWZhOFJOd0c1WHpzd1lMKzFuekk5T2hoV1pkZDZKYnVTMjRGYXVOV0Z3VGxj?=
 =?utf-8?B?LzNGMEM3QlNjWFN3dkNLY3hHMS9QQUxGcmdhOGdmTVBDVm55bndMazNKRWZo?=
 =?utf-8?B?WEZKc2o2bW9YQ0U4WUZQWnF6ZnQ1TCtyVHZTMUpVek5tcStIMXFiVUlacjVY?=
 =?utf-8?B?M1RKZ0JrdEs2bmV5WjNYejV0M1Mwd0tqeXkvU1Q3cHVnTGxoZnBPQzlOZTZu?=
 =?utf-8?B?RE14ZjZTWUJ1eU1mT1JQTjFmMW9GbnpmclVNRjZqR0ZHN0Y1MStJbHJFTEc1?=
 =?utf-8?B?L3VRczJVVTRYUzhWd2VBekZackpkQUJ2bHRhMTh5SkNzRzd4enQyL3YzNkhU?=
 =?utf-8?B?dG9UVFNHSHd2ck95c055UEtnbnFXcktvUCs0MEY3NXBCRTZHcG8xRTdWbE4v?=
 =?utf-8?B?bEZpMDhHQkJKdEY5ZjhFYStZZDlUaHNCbFU3NlRaeXFpclE3RnhmblhBK3JQ?=
 =?utf-8?B?L3lxRWpVMHlMNDZ4RnF2TUF2a1R5NXJPZGdQYlRtaGN5eGFRclRjbVNFY3Jn?=
 =?utf-8?B?dGxLMVZEQnd0djEwU2ptVU5obUpvZXJCWlNqaUlHRUdFc21hSnZCT3F2cHlT?=
 =?utf-8?B?UWFBRmhldXZlVWY3QlRZVk1LSTQ2enZLVkJUMW41ZDMzZVF1RlZ6Q2VnRlNN?=
 =?utf-8?B?bCswZEVaVHBzcEtNVjlJRWtzYWJZaFpKalVrRXlVT2J4TEs3NUR2YlRMYmdm?=
 =?utf-8?B?NjJwTjVCMVhtODh1Mkk0aXoySTJyVGxrZDVTS2syWFN4OU9WcDJBUC9nZ2Yz?=
 =?utf-8?B?Qi8rTkhLRFJqaTZnWDRGR2F2dWpFZ0JMTnlReWdFZHNiZEZIMHRreG9EcE9L?=
 =?utf-8?Q?XLwTs7XyW5GZiRnM=3D?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <3AF80604B63C944C9E67A037633D5E9D@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2186810d-58ee-49f2-f9db-08da538e86fc
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2022 14:01:24.1221
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: COHiJKBBZprEmNQiK4B2RcA3VCYp2+Z9lzLSS/nZUYl+6yPtC+JhJeMIDbBHFcZr4W/S0isb6lY4uyjSkM3n7/444M/iBnIbpmy/w25FuBU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5059

On 21/06/2022 14:48, Roger Pau Monné wrote:
> Hello,
>
> Last week we had a bit of an emergency when a web crawler started
> indexing all our mercurial repositories on xenbits, as caused the load
> on xenbits to go beyond what it can handle.
>
> As a temporary solution we decided to remove access to mercurial
> repositories, but the contents there are AFAIK only for historical
> repositories, so we might consider completely removing access to
> mercurial repositories.  This would however require migrating any
> repository we care about to git.
>
> I would like an opinion from committers as well as the broad community
> whether shutting down mercurial repositories and migrating whatever we
> care about is appropriate.  Otherwise we will need to implement some
> throttling to mercurial accesses in order to avoid overloading
> xenbits.

IIRC, we'd mostly moved off hg onto git before moving to the Linux
Foundation, where git became mandatory.  Hg hasn't been the primary dev
tool for ages, and git has only got more ubiquitous in the meantime.

I'd suggest keeping hgweb disabled for now and see if anyone complains.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 14:05:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 14:05:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353277.580200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3eVT-0004W6-8u; Tue, 21 Jun 2022 14:05:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353277.580200; Tue, 21 Jun 2022 14:05:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3eVT-0004Vt-60; Tue, 21 Jun 2022 14:05:27 +0000
Received: by outflank-mailman (input) for mailman id 353277;
 Tue, 21 Jun 2022 14:05:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o3eVS-0004Vn-C9
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 14:05:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3eVS-0005ia-1K; Tue, 21 Jun 2022 14:05:26 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.3.84])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o3eVR-0003tw-Pu; Tue, 21 Jun 2022 14:05:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=8WPV//DXzD7BGCF2jxKSwFmFoQ5RDYtBd22QB0lwcRM=; b=D4/dRwdToKmxMFBjGwtrH6ChNN
	PNgIcdxmNdYS8FSZb9+QW0YKYgUR/rHQV398ZuDjEaNIk//fEnGh0g6v+mCMj6k2ZNjQClU+JHqda
	o65d6Pxa1/GzX9d7GgRS/e5EuTBVwjRJGVPQ5fh8mQEKySRqpBtWPwA4LzCu1ApRq5cQ=;
Message-ID: <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
Date: Tue, 21 Jun 2022 15:05:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: XTF-on-ARM: Bugs
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 21/06/2022 14:30, Andrew Cooper wrote:
> On 21/06/2022 13:07, Julien Grall wrote:
>> On 21/06/2022 12:27, Andrew Cooper wrote:
>>> Hello,
>>> I tried to have a half hour respite from security and push forward
>>> with XTF-on-ARM, but the result was a mess.
>>>
>>> https://github.com/andyhhp/xtf/commit/bc86e2d271f2107da9b1c9bc55a050dbdf07c6c6
>>> is the absolute bare minimum stub VM, which has a zImage{32,64}
>>> header, sets up the stack, makes one CONSOLEIO_write hypercall, and
>>> then a clean SCHEDOP_shutdown.
>>>
>>> There are some bugs:
>>>
>>> 1) kernel_zimage32_probe() rejects relocatable binaries, but if I
>>> skip the check it works fine.
>>
>> Hmmmm... which check are you referring to?
> 
> if ( (end - start) > size )
>      return -EINVAL;
> 
> Although now I think about it, the problem is subtly different.
> 
> Section Headers:
>    [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
>    [ 0]                   NULL            00000000 000000 000000 00      0   0  0
>    [ 1] .text             PROGBITS        40000000 010000 000094 00  AX  0   0  4
>    [ 2] .data             PROGBITS        40001000 011000 000000 00  WA  0   0  1
>    [ 3] .rodata           PROGBITS        40001000 011000 000012 00   A  0   0  4
>    [ 4] .bss              NOBITS          40002000 011012 001000 00  WA  0   0  4
> 
> end is calculated as 0x3000 which includes the bss (inc stack which is
> bss page aligned), while the raw binary size is 0x1012 because it stops
> at the end of .rodata.

Ok. I agree this is a bug. Can you send a patch?
>>> Furthermore, kernel_zimage64_probe() ignores the header and assumes
>>> the binary is relocatable.
>>
>> Are you referring to bit 3 "Kernel physical placement"?
> 
> No. This:
> 
> /* Currently there is no length in the header, so just use the size */
> start = 0;
> end = size;
> 
> Which isn't true even for the v0 header.  The field named text_offset in
> Xen's code is start, and res1 is end (or size for relocatable).

Hmmm... text_offset is not the start. But I agree that res1 is the 
effective size and should be used instead of the binary size.

>>
>>> Both probe functions fail to check the endianness marker.
>>
>> AFAIU the header is little endian. So it is not clear to me why we
>> should check the endianness marker?
> 
> Not the endianness of the header, the endianness of the image.  Both
> headers have a field which ought to be checked for != LE, seeing
> as Xen doesn't support big endian domains yet.

Aside from potential bugs, a big endian OS should boot on Xen (the PV 
protocol and hypercalls are always little endian).

[...]

>>> (XEN) Hardware Dom0 halted: halting machine
>>>
>>> which is weird.  The CONSOLEIO_write fails to read the passed
>>> pointer, despite appearing to have an ip-relative load to find the
>>> string, while the SCHEDOP_shutdown passes its parameter fine (it's a
>>> stack relative load).
>>
>>  From a brief look, your code is still running with MMU off and Cache
>> "off" (on armv8, it is more a bypass "cache" rather than off).
>>
>> This means that you ought to be a lot more careful when
>> reading/writing values to avoid reading any stale data.
> 
> There are no relocation/etc so everything has well defined behaviour
> even when the caches are off.

The problem is you are writing to the stack and then passing a pointer 
to the stack to Xen. For hypercalls, we mandate the memory to be 
cacheable (see arch-arm.h). So Xen may read a different value than what 
you passed.
>>> Other observations:
>>>
>>> * There is no documented vCPU starting state.
>>
>> See
>> https://github.com/torvalds/linux/blob/master/Documentation/arm64/booting.rst.
> 
> What's it got to do with Xen's vCPU starting state?

Because we are following what the Image format defines. Anything outside 
of it is implementation defined and not something that an OS should rely on.

>  Also, that's
> clearly not relevant for arm32 even if the implication is "Xen only
> speaks the Linux ABI".

The interface exposed to the guest depends on the binary format used. At 
the moment, we are implementing zImage, Image and U-boot. If another were 
added, then the vCPU state would be as defined by that format.

> 
> It needs to be in docs/ (or public at a stretch) and not in the heads of
> the maintainers.

Patches are welcome.

>>
>>> * Qemu is infinitely easier to use (i.e. no messing with dtb/etc)
>>> as -kernel xen -initrd test-$foo with a one-liner change to the dtb
>>> parsing to treat ramdisk and no kernel as the dom0 kernel.  Maybe a
>>> better change would be to modify qemu to understand multiple -kernel's.
>>> * Xen can't load ELFs.
>>
>> The support was dropped in 2018 because it was bogus and not used:
>>
>> https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00242.html
>>
>>
>> Personally, I think that zImage/Image is simple enough that
>> re-introducing ELF is not worth it. But I would be OK to consider
>> patches if you feel like writing them.
> 
> There is a massive usability improvement from being able to point normal
> toolchain tools at the same binary you're trying to load.
Ditto.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 14:28:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 14:28:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353294.580221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3erS-0007BE-CE; Tue, 21 Jun 2022 14:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353294.580221; Tue, 21 Jun 2022 14:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3erS-0007B7-8h; Tue, 21 Jun 2022 14:28:10 +0000
Received: by outflank-mailman (input) for mailman id 353294;
 Tue, 21 Jun 2022 14:28:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTzQ=W4=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1o3erQ-0007B1-Qa
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 14:28:08 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e9b05cd-f16e-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 16:28:07 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id a13so12818316lfr.10
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 07:28:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e9b05cd-f16e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=PCxBILTJ6Zk898i/dve3lPvaQBDYqprK1gvRHYGVzwU=;
        b=iiP+5n1f05rH8piEJi+CE0TSr6c4cwnWwQse1HLsRXdejg8u5MeL3LFySKinCsFj07
         xiLQxtBVkebtNwAIY3KBtbIXJsD9gdPLt4Zn33sTQkIDDtsBad1nr6ruZ7qgrz2V5O80
         c7RXBQewM9mx+akwGxUkjGe3Md7mLKGF+6ek7T3o5x43TrO1hrnyYNBi8hYous96QCx6
         R/cM+CRBrP22Y+cxJPYFke7aY96abSfdLxAD5w8XouToGRCh4GWNQlVB5RVUWdMSTh7c
         9S9GkHmwSxRFqlxwYnWF5gbuSiZX+qr16/UNlLdb6fzBSjgUmlGYU/3mEsBb6m9J5Ei4
         55/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=PCxBILTJ6Zk898i/dve3lPvaQBDYqprK1gvRHYGVzwU=;
        b=UAFvicmRjj38pRwpVdvkdD1MQLspbAoxSOiPn8Fd/zn6tMV7dT4Q975aaRjX2mTSi7
         tLtuecERAlqUJMgIJ6UWRsg1r3FQ5wE9CQ9pTvpKE3o1cwFQ8wGtHFTREmDgJlvxuSBH
         ZTKYu7mZZz2+bhjaUpf0TuUnUJxzSbFUg3xXADOGhtgZgF1HjWRymet9IVHe5ual0xC1
         I4wzfH75/LIyZ+cLD6FHKBocAKlRWJW/Zkl2SyjEycOpM2kzZy2Wyjw2+bJCyIGSpdQr
         KkqzdN4DjzCyUMUSCWt/agMOI4FmvSkc+Zm+AVqDpUL/GxeXh1nyW/A5iWXSjhNb/T0Q
         AKtg==
X-Gm-Message-State: AJIora//1AzUIpqwmWv98I4gSBDEAJR7jonU6FJx65hJqv7MCaXwehnn
	EfqtdgqrU1sOloSifaUbFbiCpnsmVuzpTTSoPb8=
X-Google-Smtp-Source: AGRyM1tU3cBRt/2PW4G1D3qp9GYb+SczjsmtJ9Fm4pT6vj2XGhL9y29toCAjdZazO5Vob9Dr9j4HwVhsBmizVs7FCg0=
X-Received: by 2002:a05:6512:401d:b0:47f:654d:e48d with SMTP id
 br29-20020a056512401d00b0047f654de48dmr9001183lfb.359.1655821687295; Tue, 21
 Jun 2022 07:28:07 -0700 (PDT)
MIME-Version: 1.0
References: <20220620070245.77979-1-michal.orzel@arm.com> <20220620070245.77979-7-michal.orzel@arm.com>
In-Reply-To: <20220620070245.77979-7-michal.orzel@arm.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 21 Jun 2022 10:27:55 -0400
Message-ID: <CAKf6xpvQhq4LRPqnxLnEgedLhz3Zrfdp2daVBTpEm40SWRnz-w@mail.gmail.com>
Subject: Re: [PATCH 6/9] xsm/flask: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun 20, 2022 at 3:03 AM Michal Orzel <michal.orzel@arm.com> wrote:
>
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with the misra
> addon by replacing the implicit type 'unsigned' with the explicit
> 'unsigned int'.
>
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 15:44:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 15:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353304.580232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3g32-0006hK-3H; Tue, 21 Jun 2022 15:44:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353304.580232; Tue, 21 Jun 2022 15:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3g32-0006hD-08; Tue, 21 Jun 2022 15:44:12 +0000
Received: by outflank-mailman (input) for mailman id 353304;
 Tue, 21 Jun 2022 15:44:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wIjP=W4=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o3g30-0006h7-LG
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 15:44:10 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fced05df-f178-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 17:44:08 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id es26so18233706edb.4
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 08:44:08 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 q16-20020a056402033000b0043564320274sm10090705edw.19.2022.06.21.08.44.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jun 2022 08:44:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fced05df-f178-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=IJxvF7GeSYmid6ScjNEEek498j+4eSxJj5t2L79Xl+E=;
        b=ERDoRgYzh09EpTuGJ4Ne37c9ffY1DEjgZeztSETgJYYKUoulwFd9Z7JDcruu7SVVih
         9nvRYg71b6+utykU4GKc4jm0O2fwA6juehVcWlb5INeg6R/jlUcqcG5JjTSlowPzYMSG
         a8J+vQzFYSF9fIA20U7Qyhn03CWC87lj3IqTNiIVxhOEfWQI7GQOdlmB86w/nL5eC7qp
         6FoDFotzt/uhu3v2sqxhBE1bsdE5Yiq26UyDJQ/2SNC+1izotfYHNzsZ1CUqiKv2BEMw
         dtanUBLCMkeAQxXHFmGRbd1sGF04G5Tk8e4OEo6VIrfrHIrGtFhoK/tiCxz6Lv+zUV73
         +Q5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=IJxvF7GeSYmid6ScjNEEek498j+4eSxJj5t2L79Xl+E=;
        b=Db4XTeY5mIORWe/r1eVoCWNd8z3FXRDMWwnR89ZQSPvJVL5CXDqDlI+nQ3zQNYktu3
         8BIYWDMMOfVd/CeOcA9n3dxxlJ46uaPJMisGRv7EiJzHlw67UfxoTYGnrgFn9XVbvuYv
         VPRxoe808f7h1pPaCXMoDKy5GveyF66hX2sI5Jo+43C47To/i+/gNeG4dVmUCViZK8mL
         Vv5aUy7RoxziufJLAZzukCt5dvyPKn7XtMaCBXK7uKRTvU+l1hAJ5GW7QrDBmjyoF23q
         A5Cnfnm35EW+RcP+nTvMpz3EYPLdvGTWBfVjsx047u/UTFa10EPj3EQlksZxPF0z7NIz
         HnTQ==
X-Gm-Message-State: AJIora86C+zDwSxn8dxob3bA6IRsJVP3fsvm+pvIwkXvuRNaCegnmSJK
	evyGfJ4xvqGOXg1pgq1K7yUtBdxStDs=
X-Google-Smtp-Source: AGRyM1vHWdTBeZG1DoXCObuAEzsXuhhg6bJU+EOzlXvuq81EdRlC2Jo74IcX77tViHZm7kJ8kJJE/Q==
X-Received: by 2002:aa7:c881:0:b0:435:5dc4:5832 with SMTP id p1-20020aa7c881000000b004355dc45832mr28151980eds.265.1655826247808;
        Tue, 21 Jun 2022 08:44:07 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/2] xen/arm: vtimer: Fix MISRA C 2012 Rule 8.4 violation
Date: Tue, 21 Jun 2022 18:44:01 +0300
Message-Id: <20220621154402.482857-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Include vtimer.h so that the declarations of vtimer functions with external
linkage are visible before the function definitions.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/arm/vtimer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 5bb5970f58..5f26463354 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -29,6 +29,7 @@
 #include <asm/time.h>
 #include <asm/vgic.h>
 #include <asm/vreg.h>
+#include <asm/vtimer.h>
 #include <asm/regs.h>
 
 /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 15:44:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 15:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353305.580243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3g34-0006wi-BA; Tue, 21 Jun 2022 15:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353305.580243; Tue, 21 Jun 2022 15:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3g34-0006wb-7o; Tue, 21 Jun 2022 15:44:14 +0000
Received: by outflank-mailman (input) for mailman id 353305;
 Tue, 21 Jun 2022 15:44:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wIjP=W4=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o3g32-0006h7-SU
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 15:44:12 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff24ab11-f178-11ec-b725-ed86ccbb4733;
 Tue, 21 Jun 2022 17:44:12 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id o7so28401502eja.1
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 08:44:12 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 q16-20020a056402033000b0043564320274sm10090705edw.19.2022.06.21.08.44.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jun 2022 08:44:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff24ab11-f178-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=LrkB1fU20QvlNrcfiWTsM7um/FmclqwQ8isYFawF1LU=;
        b=bCFA5QY7DfLHY3/8Oe/yq77T5aHUuablw6uGDGGcZ3r856P/ZF6tMbZ1CfHC52XlOW
         U5k2uNbRoghYkzcMabIMSLS8MihBzB72QXiFlIFq7PUNyCPrrGzB5WBmCT7ex/G20bVU
         qeE+rLGaG753EucfFRIGQam/2jHp4TBwn2+tVYVcEn6QcCeNpHuMAigLbEtEn/DGthEm
         Eo+j5zNZsnJ5e9W9GqQTznZ/DxUA/H26jZVWjzzig4npsfhzSCzik2MdQ/Dh78BqqDWP
         JmhBP8osCAODYCk9Cwtqo3P91ty2WB2HOh7F4ym9ArEe+E170ZHJG2HjdijCAOdKO7Qn
         m7sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=LrkB1fU20QvlNrcfiWTsM7um/FmclqwQ8isYFawF1LU=;
        b=GswjVCPL33ITtaZ0AQmD9MScXu6e3kKwF5HzbRWYTjEX/buVVTPr23sgytDOo1oRmP
         kQwpLqcuy6169gWP7opUBK279TomvpAtAZRfxAY5AJQn8GYbtU2kNs0b+h3EcPCl/Lkj
         fzLOY27A5y1GZHksnXwJwlM1jbYKILGJpIUGzDRsxYN+LDUlO04z3pqrr7kS49oTwpuw
         qvWjQdQJcmfDQ8567lubHy61OuCJ4kClPRm/x0SDWCDDVHSkyX+iuTRVhNvJtNU048DH
         XiXCjbYJFKCk7GsJrfwOwdgUdQqEODDchtj5BnkvpvIXHNnfeGSe2t4HjPh0j+aLQheu
         QzAQ==
X-Gm-Message-State: AJIora+OI1ecHhF+9qkupQRMCJw8bxBmHFBg9Repb/1iru9MjhP5mJrd
	ZWjK7saITFqjYVCQujiZn0XdhnAs+us=
X-Google-Smtp-Source: AGRyM1tbIFBPp38itQ0rLLYjoQQrzfd1A81yW71wh8tUtMFxr1jPzTtguP95LUoeyEnWWDyTFkmP8g==
X-Received: by 2002:a17:907:72d6:b0:722:e59a:72f4 with SMTP id du22-20020a17090772d600b00722e59a72f4mr2968623ejc.158.1655826251590;
        Tue, 21 Jun 2022 08:44:11 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/2] xen/arm: vtimer: Remove duplicate header
Date: Tue, 21 Jun 2022 18:44:02 +0300
Message-Id: <20220621154402.482857-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220621154402.482857-1-burzalodowa@gmail.com>
References: <20220621154402.482857-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The header file <asm/regs.h> is already included above and can be removed here.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/arm/vtimer.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 5f26463354..6b78fea77d 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -30,7 +30,6 @@
 #include <asm/vgic.h>
 #include <asm/vreg.h>
 #include <asm/vtimer.h>
-#include <asm/regs.h>
 
 /*
  * Check if regs is allowed access, user_gate is tail end of a
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 21 17:19:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 17:19:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353321.580254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3hX8-00088s-7g; Tue, 21 Jun 2022 17:19:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353321.580254; Tue, 21 Jun 2022 17:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3hX8-00088l-2u; Tue, 21 Jun 2022 17:19:22 +0000
Received: by outflank-mailman (input) for mailman id 353321;
 Tue, 21 Jun 2022 17:19:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3hX6-00088b-Ds; Tue, 21 Jun 2022 17:19:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3hX6-0001ET-61; Tue, 21 Jun 2022 17:19:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3hX5-0005cT-Jd; Tue, 21 Jun 2022 17:19:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3hX5-00035M-JC; Tue, 21 Jun 2022 17:19:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RVPbwFzy6ruly1YCJIN+kDcGC+AUeg4TiyLvsnevhGg=; b=Rd/eboTrJdwwGsSOV3nrAot704
	4iRlDd2sfKQI6JfHm//VsfkmNo5WY+yVuUncwS4XQ7cnvTnQf+kZ4+JkuMTQE32f8BcJFZz4zvmvw
	luI+mYulCVOCdvtFiBAdF0Osa+OheNvkyxFGqRk5ctqiEXpyGvowyGx9Q+UBM8z9gIeU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171299-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171299: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=78ca55889a549a9a194c6ec666836329b774ab6d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 17:19:19 +0000

flight 171299 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171299/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                78ca55889a549a9a194c6ec666836329b774ab6d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    2 days
Failing since        171280  2022-06-19 15:12:25 Z    2 days    7 attempts
Testing same since   171294  2022-06-20 21:11:31 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Rientjes <rientjes@google.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1865 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 21:42:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 21:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353340.580265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ldL-0007wR-A3; Tue, 21 Jun 2022 21:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353340.580265; Tue, 21 Jun 2022 21:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ldL-0007wK-6p; Tue, 21 Jun 2022 21:42:03 +0000
Received: by outflank-mailman (input) for mailman id 353340;
 Tue, 21 Jun 2022 21:42:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w6nJ=W4=gmail.com=christopher.w.clark@srs-se1.protection.inumbo.net>)
 id 1o3ldJ-0007wE-0Y
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 21:42:01 +0000
Received: from mail-ua1-x936.google.com (mail-ua1-x936.google.com
 [2607:f8b0:4864:20::936])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa675749-f1aa-11ec-bd2d-47488cf2e6aa;
 Tue, 21 Jun 2022 23:41:59 +0200 (CEST)
Received: by mail-ua1-x936.google.com with SMTP id n16so3381085ual.12
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jun 2022 14:41:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa675749-f1aa-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=l01jNFqiufAouH1VLt/1v8IOtPSNiuPd8CbkSmAR+i4=;
        b=b3LOBQzuOWW30NhKqXW2uvyWxhGAWoDfpGyZ0ldEUqh2j+SgUpU+uSBSIQtIlBSn9I
         zui7Y3/siSIuYokDQbdBiqDGOb0giTi/qspI3NNu27//Qd9Zxz61YBrPI8to6kCY5s9g
         3eLtHLttMRSHx0Q5485RMnwAV+3UB3YzuHAXueytyo3fFZuMjfVOJiL5oNixNmEiTQYf
         IA+okOE9Vkcy6JgmVTfaIbv9bKRpWCjGjrvYNoPiVOUD2L6Mv1sSDxcsz9KvrsbGyEdL
         L30Rq3FbNNQZ2VCmZxPx3OUR2BDCti8I3JaxePqNJ3mLY20yYVS5qFELCWHo8K4bBNhc
         FPJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=l01jNFqiufAouH1VLt/1v8IOtPSNiuPd8CbkSmAR+i4=;
        b=egfkFWE7bvjHrhNoVJWmrQ5GrbDw8w4MFvyoaTGq3dil7zAdawy748NBVOMdMFRnbR
         24Zk+sXwr+RCoTCcXpgqxjcQNuzcIB7RAfgPj8LK9PalKyP1qOypQCBTTZoZBaN79YLw
         OKsoSrv0VPPKL5g83aUH+TNIjOsssz5OY3MV+ahqnYJQajnFcTaVMske98Gfvum/bkND
         GuWIGdnWCEjpSqY6oV1jzYmiqwotF/LlYtck05yAxg+7UFZRv9uRMUrnAHetVa8J0djF
         8m/KLLyK6le7AtEokC+TTk7UHdKVDBaXs87UcLIS0g/JpYlwPBJGP/nvh6NUNqG9bhTu
         BtBA==
X-Gm-Message-State: AJIora9qTGN8WhrTGbywsFhxJecOQjy8QRP1/Ky9SAXWIhLfN2ZAkZ60
	PxvA1Bco6pPZCTHOP4gNlghRxbaI5ICYIdGdPu8=
X-Google-Smtp-Source: AGRyM1u3O0F9+MT5pI3JMcg1G39K+uCYGX39B7w2LthGCsOksIE9XB4BgFxMMsDhYanSjLeHfx0hZ7D45QSXMR4tjI0=
X-Received: by 2002:ab0:314f:0:b0:379:704d:a076 with SMTP id
 e15-20020ab0314f000000b00379704da076mr149589uam.55.1655847718469; Tue, 21 Jun
 2022 14:41:58 -0700 (PDT)
MIME-Version: 1.0
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org> <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
In-Reply-To: <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 21 Jun 2022 14:41:47 -0700
Message-ID: <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
Subject: Re: XTF-on-ARM: Bugs
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal Orzel <Michal.Orzel@arm.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Daniel Smith <dpsmith@apertussolutions.com>, Roger Pau Monne <roger.pau@citrix.com>, 
	George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jun 21, 2022 at 7:05 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Andrew,
>
> On 21/06/2022 14:30, Andrew Cooper wrote:
> > On 21/06/2022 13:07, Julien Grall wrote:
> >> On 21/06/2022 12:27, Andrew Cooper wrote:
> >>> Hello,
> >>> I tried to have a half hour respite from security and push forward
> >>> with XTF-on-ARM, but the result was a mess.

Hi all - I've been quiet on this front but have actually been plugging
away at it in the meantime, educating myself on the details of the
Arm init code. I have arm32 XTF tests running under qemu, with XTF
result code reporting working via the PV console and the domid read
from a very basic Xenstore, so far with the MMU off, though a bringup
of it is part-implemented. I am following this thread and planning to
continue work on this later this week.

Christopher

> >>> https://github.com/andyhhp/xtf/commit/bc86e2d271f2107da9b1c9bc55a050dbdf07c6c6
> >>> is the absolute bare minimum stub VM, which has a zImage{32,64}
> >>> header, sets up the stack, makes one CONSOLEIO_write hypercall, and
> >>> then a clean SCHEDOP_shutdown.
> >>>
> >>> There are some bugs:
> >>>
> >>> 1) kernel_zimage32_probe() rejects relocatable binaries, but if I
> >>> skip the check it works fine.
> >>
> >> Hmmmm... which check are you referring to?
> >
> > if ( (end - start) > size )
> >      return -EINVAL;
> >
> > Although now I think about it, the problem is subtly different.
> >
> > Section Headers:
> >    [Nr] Name              Type            Addr     Off    Size   ES Flg
> > Lk Inf Al
> >    [ 0]                   NULL            00000000 000000 000000 00
> > 0   0  0
> >    [ 1] .text             PROGBITS        40000000 010000 000094 00  AX
> > 0   0  4
> >    [ 2] .data             PROGBITS        40001000 011000 000000 00  WA
> > 0   0  1
> >    [ 3] .rodata           PROGBITS        40001000 011000 000012 00   A
> > 0   0  4
> >    [ 4] .bss              NOBITS          40002000 011012 001000 00  WA
> > 0   0  4
> >
> > end is calculated as 0x3000 which includes the bss (inc stack which is
> > bss page aligned), while the raw binary size is 0x1012 because it stops
> > at the end of .rodata.
>
> Ok. I agree this is a bug. Can you send a patch?
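For illustration, a minimal standalone model (not Xen's actual code) of the check under discussion: the probe compares the image's in-memory extent (end - start, which includes .bss) against the raw file size, so a valid image with a page-aligned .bss is rejected.

```c
#include <stdint.h>

/* Model of the zImage32 probe's size check: Xen returns -EINVAL when
 * this predicate is true.  end - start is the in-memory footprint
 * (including .bss); file_size is the raw binary size. */
static int size_check_rejects(uint64_t start, uint64_t end,
                              uint64_t file_size)
{
    return (end - start) > file_size;
}
```

With the values from the section dump above (end = 0x3000 including .bss and stack, file size = 0x1012 stopping at the end of .rodata), the predicate is true and the image is wrongly rejected; a fix would need to compare the file size against the loadable portion only.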
> >>> Furthermore, kernel_zimage64_probe() ignores the header and assumes
> >>> the binary is relocatable.
> >>
> >> Are you referring to bit 3 "Kernel physical placement"?
> >
> > No. This:
> >
> > /* Currently there is no length in the header, so just use the size */
> > start = 0;
> > end = size;
> >
> > Which isn't true even for the v0 header.  The field named text_offset in
> > Xen's code is start, and res1 is end (or size for relocatable).
>
> Hmmm... text_offset is not the start. But I agree that res1 is the
> effective size and should be used instead of the binary size.
>
> >>
> >>> Both probe functions fail to check the endianness marker.
> >>
> >> AFAIU the header is little endian. So it is not clear to me why we
> >> should check the endianness marker?
> >
> > Not the endianness of the header, the endianness of the image.  Both
> > headers have a field which ought to be checked for != LE, seeing
> > as Xen doesn't support big endian domains yet.
>
> Aside from potential bugs, a big endian OS should boot on Xen (the PV
> protocol and hypercalls are always little endian).
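To make the header fields above concrete, here is a sketch assuming the arm64 Image header layout documented in Linux's Documentation/arm64/booting.rst (the field Xen's code calls res1 is image_size). The function is illustrative, not a proposed patch; it includes the endianness check Andrew proposes, though whether that rejection is needed is debated just above:

```c
#include <stdint.h>

/* arm64 "Image" header, field names as in Documentation/arm64/booting.rst.
 * Xen's struct names image_size "res1". */
struct arm64_image_header {
    uint32_t code0, code1;
    uint64_t text_offset;   /* image load offset from a 2 MiB aligned base */
    uint64_t image_size;    /* effective image size, including .bss */
    uint64_t flags;         /* bit 0: kernel endianness, 1 = big-endian */
    uint64_t res2, res3, res4;
    uint32_t magic;         /* 0x644d5241, "ARM\x64" */
    uint32_t res5;
};

#define ARM64_IMAGE_MAGIC 0x644d5241U

/* Illustrative probe: use image_size rather than the binary size, and
 * flag big-endian images (bit 0 of flags) rather than silently
 * accepting them. */
static int zimage64_probe_sketch(const struct arm64_image_header *h,
                                 uint64_t file_size, uint64_t *mem_size)
{
    if (h->magic != ARM64_IMAGE_MAGIC)
        return -1;
    if (h->flags & 1)                    /* big-endian kernel */
        return -1;
    /* image_size may legitimately exceed file_size (.bss is not in
     * the file); a v0 header has image_size == 0, so fall back. */
    *mem_size = h->image_size ? h->image_size : file_size;
    return 0;
}
```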
>
> [...]
>
> >>> (XEN) Hardware Dom0 halted: halting machine
> >>>
> >>> which is weird.  The CONSOLEIO_write fails to read the passed
> >>> pointer, despite appearing to have an ip-relative load to find the
> >>> string, while the SCHEDOP_shutdown passes its parameter fine (it's a
> >>> stack relative load).
> >>
> >>  From a brief look, your code is still running with MMU off and Cache
> >> "off" (on armv8, it is more a bypass "cache" rather than off).
> >>
> >> This means that you ought to be a lot more careful when
> >> reading/writing values to avoid reading any stale data.
> >
> > There are no relocations/etc, so everything has well defined behaviour
> > even when the caches are off.
>
> The problem is you are writing to the stack and then passing a pointer
> to the stack to Xen. For hypercalls, we mandate the memory to be
> cacheable (see arch-arm.h). So Xen may read a different value than what
> you passed.
> >>> Other observations:
> >>>
> >>> * There is no documented vCPU starting state.
> >>
> >> See
> >> https://github.com/torvalds/linux/blob/master/Documentation/arm64/booting.rst.
> >
> > What's it got to do with Xen's vCPU starting state?
>
> Because we are following what the Image format defines. Anything outside
> that is implementation defined and not something that an OS should rely on.
>
> >  Also, that's
> > clearly not relevant for arm32 even if the implication is "Xen only
> > speaks the Linux ABI".
>
> The interface exposed to the guest depends on the binary format used. At
> the moment, we are implementing zImage, Image and U-boot. If another were
> added, then the vCPU state would be as defined by the new format.
>
> >
> > It needs to be in docs/ (or public at a stretch) and not in the heads of
> > the maintainers.
>
> Patches are welcome.
>
> >>
> >>> * Qemu is infinitely easier to use (i.e. no messing with dtb/etc)
> >>> as -kernel xen -initrd test-$foo with a oneliner change to the dtb
> >>> parsing to treat a ramdisk with no kernel as the dom0 kernel.  Maybe a
> >>> better change would be to modify qemu to understand multiple -kernel
> >>> options.
> >>> * Xen can't load ELFs.
> >>
> >> The support was dropped in 2018 because it was bogus and not used:
> >>
> >> https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00242.html
> >>
> >>
> >> Personally, I think that zImage/Image is simple enough that
> >> re-introducing ELF is not worth it. But I would be OK to consider
> >> patches if you feel like writing them.
> >
> > There is a massive usability improvement from being able to point normal
> > toolchain tools at the same binary you're trying to load.
> Ditto.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 23:44:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 23:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353359.580287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nXx-0002pB-Ix; Tue, 21 Jun 2022 23:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353359.580287; Tue, 21 Jun 2022 23:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nXx-0002p4-Fg; Tue, 21 Jun 2022 23:44:37 +0000
Received: by outflank-mailman (input) for mailman id 353359;
 Tue, 21 Jun 2022 23:44:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D44s=W4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o3nXv-0002oI-Jv
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 23:44:35 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 189dbd11-f1bc-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 01:44:34 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id B684EB81B80;
 Tue, 21 Jun 2022 23:44:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2B7D1C3411C;
 Tue, 21 Jun 2022 23:44:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 189dbd11-f1bc-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655855069;
	bh=ZwuyshTN2DegwgRqLLCi7PSc5TuAIg0JJ5t8fSiwduw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Te/elY0eIYeQ6kK8TP1l6rWfJcIDwTMeG4yspUyohrVO/OPfbKCnuVtICcfCYumY/
	 6ADf896wDhtN6Sm/ayioi7Z/P88qeol2CIpqZCskQfNUiAnO746MaZHdCRdFnPuQGi
	 E6V+aUtjvnBZSEZu99/U5tz6AQjtb9SaAMcq8CtT9XzRZQ5bZmP2Bcq+89Y0LIu4Pt
	 30vrpBpy60jmpwYHXVm0U77dj26wzQTnYrCEs7lu0zGdY06VQe19POjUeN+J45rRmr
	 Gno/5ivHcNllB74i6F00x6RSNCH2h31QIAl7BnOa2GW1fo5JrQQqjNsfxQSMmS2tol
	 7yCH028YW045g==
Date: Tue, 21 Jun 2022 16:44:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 1/2] xen/arm: vtimer: Fix MISRA C 2012 Rule 8.4
 violation
In-Reply-To: <20220621154402.482857-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206211644210.788376@ubuntu-linux-20-04-desktop>
References: <20220621154402.482857-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 21 Jun 2022, Xenia Ragiadakou wrote:
> Include vtimer.h so that the declarations of vtimer functions with external
> linkage are visible before the function definitions.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/vtimer.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 5bb5970f58..5f26463354 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -29,6 +29,7 @@
>  #include <asm/time.h>
>  #include <asm/vgic.h>
>  #include <asm/vreg.h>
> +#include <asm/vtimer.h>
>  #include <asm/regs.h>
>  
>  /*
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 23:44:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 23:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353356.580276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nXo-0002XS-75; Tue, 21 Jun 2022 23:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353356.580276; Tue, 21 Jun 2022 23:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nXo-0002XL-3O; Tue, 21 Jun 2022 23:44:28 +0000
Received: by outflank-mailman (input) for mailman id 353356;
 Tue, 21 Jun 2022 23:44:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3nXn-0002XB-Bf; Tue, 21 Jun 2022 23:44:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3nXn-0007kd-8y; Tue, 21 Jun 2022 23:44:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3nXm-0003A5-Pz; Tue, 21 Jun 2022 23:44:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3nXm-0001I0-PY; Tue, 21 Jun 2022 23:44:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9+QR/5Y10OhlFbsjXydTlSQYThnGHpaTljTX50bJnYU=; b=ZarO/Y+QtRtbt6jPgkTa1YuBAi
	ih0qfcuk83iIBbkAQ8NMh7pzgYQAXTuttTgfZU4D9Gh/DgijAM0UjzOU41zPjXToLXIWZ8yMYvZPM
	+pOQExemLj8RySJkU+/sfHFHVzwdGmwLGcZ3IJueijDhn1tdPSoXO3JYEm6gv6nm6ecU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171300-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171300: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=78ca55889a549a9a194c6ec666836329b774ab6d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jun 2022 23:44:26 +0000

flight 171300 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171300/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                78ca55889a549a9a194c6ec666836329b774ab6d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    2 days
Failing since        171280  2022-06-19 15:12:25 Z    2 days    8 attempts
Testing same since   171294  2022-06-20 21:11:31 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Rientjes <rientjes@google.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1865 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 21 23:44:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jun 2022 23:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353360.580298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nY6-0003BO-QK; Tue, 21 Jun 2022 23:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353360.580298; Tue, 21 Jun 2022 23:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nY6-0003BD-NB; Tue, 21 Jun 2022 23:44:46 +0000
Received: by outflank-mailman (input) for mailman id 353360;
 Tue, 21 Jun 2022 23:44:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D44s=W4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o3nY5-000393-4i
 for xen-devel@lists.xenproject.org; Tue, 21 Jun 2022 23:44:45 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e424e11-f1bc-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 01:44:43 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id F412D61798;
 Tue, 21 Jun 2022 23:44:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 16FFDC3411C;
 Tue, 21 Jun 2022 23:44:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e424e11-f1bc-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655855079;
	bh=cLNxB1L3NIVS+qsysMNnTABUz7WYMmeObAtsONQsznw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QZfaT/B1Atjx9NckBJy9dPfaKr05DVUqUv62poi6d9cXgitwCJg5lIL8dkTw4IkNg
	 /Zna3t9DsIwljybjdxp/PDPWBrI7unPVomnE/DA3BUG4KYnBTspRT46ascQrVV0vEX
	 8sRM8j2LY3uCiEWOuPCcdsUWo2m2nKMoQ+nrBKh6zCwOcCHCzTTd30d/fcG3Y660gh
	 ZFd7AFn4JK/Ko41RxYtXHKqfzniuWIfdavi2cNPzwD9L342jZh3MAQ5+42bq5IEU2b
	 AEb0B98H4tyCQynYsb8V1QEvT/hXTjeaHek4lYjBiOgJGdrM3Q+FZ5bC2hnqV8RAwQ
	 TMEly1ceEQHuw==
Date: Tue, 21 Jun 2022 16:44:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 2/2] xen/arm: vtimer: Remove duplicate header
In-Reply-To: <20220621154402.482857-2-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206211644320.788376@ubuntu-linux-20-04-desktop>
References: <20220621154402.482857-1-burzalodowa@gmail.com> <20220621154402.482857-2-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 21 Jun 2022, Xenia Ragiadakou wrote:
> The header file <asm/regs.h> is already included above and can be removed here.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/vtimer.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 5f26463354..6b78fea77d 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -30,7 +30,6 @@
>  #include <asm/vgic.h>
>  #include <asm/vreg.h>
>  #include <asm/vtimer.h>
> -#include <asm/regs.h>
>  
>  /*
>   * Check if regs is allowed access, user_gate is tail end of a
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 00:00:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 00:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353384.580309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nnd-0006hr-H1; Wed, 22 Jun 2022 00:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353384.580309; Wed, 22 Jun 2022 00:00:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nnd-0006hk-C2; Wed, 22 Jun 2022 00:00:49 +0000
Received: by outflank-mailman (input) for mailman id 353384;
 Wed, 22 Jun 2022 00:00:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o3nnc-0006he-0B
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 00:00:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5cf6e45c-f1be-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 02:00:45 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 494526171E;
 Wed, 22 Jun 2022 00:00:44 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4ED62C3411C;
 Wed, 22 Jun 2022 00:00:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cf6e45c-f1be-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655856043;
	bh=RAgU+FrfC88PPxduGjhUGQtj1BDX6H2Gh3kLDvWJQdM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MXwdOBKFeuZ+ONTJXJsksDAX4/abQ4kAlMiI40T1r8uLpNvM1wI6EBX6pUS0AbgGA
	 XDSv4TpPcbg3aQ5YKIr4ZcziMEMYJcR/JR4Pb9jIEFhzHARU9BUhDNDiyPzEB1fbGo
	 HG1kk+rPIenZIrRUT5Rc7kqBAOTNuKQfEtvH6N/yZAZaoLzWQM/9fjpeU8ut4XyjyS
	 djUClMe93Wc4Uf5CSHn8jfVBHv94Fb+Mfba5w7eUqM6Vz9cyTuuPeL1L8cDLguCsKa
	 L21J3DFfiuZvQtC7wgR4d9jZhFEiPje6JSLv3564Rn+GHm3DAz4UcD67fwcwLagn89
	 Wx/eEij/GAFXg==
Date: Tue, 21 Jun 2022 17:00:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
In-Reply-To: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop>
References: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 10 Jun 2022, Bertrand Marquis wrote:
> cppcheck MISRA addon can be used to check for non-compliance with some of
> the MISRA standard rules.
> 
> Add a CPPCHECK_MISRA variable that can be set to "y" using make command
> line to generate a cppcheck report including cppcheck misra checks.
> 
> When MISRA checking is enabled, a file with a text description suitable
> for cppcheck misra addon is generated out of Xen documentation file
> which lists the rules followed by Xen (docs/misra/rules.rst).
> 
> By default MISRA checking is turned off.
> 
> While adding cppcheck-misra files to gitignore, also fix the missing /
> for the htmlreport gitignore entry.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Hi Bertrand,

I tried this patch and I am a bit confused by the output
cppcheck-misra.txt file that I get (appended.)

I can see all the rules from docs/misra/rules.rst, as there should be,
together with their one-line summaries, but there are also a bunch
of additional rules not present in docs/misra/rules.rst, starting from
Rule 1.1 all the way to Rule 21.21. Is this expected?

Cheers,

Stefano
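[Editorial note: the generation step the commit message describes — turning the
rule list in docs/misra/rules.rst into a plain rule-texts file that the cppcheck
misra addon can consume — can be sketched roughly as below. This is a
hypothetical helper, not the actual script added by the patch; the input format
shown is an assumed rst list-table layout, and the real Xen documentation may
differ.]

```python
import re

def extract_rule_texts(rst_text):
    """Convert rule entries from a rules.rst-like list-table into the plain
    'Rule X.Y Severity' / '<summary>' line pairs that the cppcheck misra
    addon accepts as a rule-texts file. The assumed input layout is:
      * - Rule 2.1
        - Required
        - All source files shall compile without any compilation errors
    """
    entries = re.findall(
        r"\* - (Rule \d+\.\d+)\n\s*- (\w+)\n\s*- (.+)", rst_text)
    lines = []
    for rule, severity, summary in entries:
        lines.append(f"{rule} {severity}")   # e.g. "Rule 2.1 Required"
        lines.append(summary.strip())        # one-line rule summary
    return "\n".join(lines)

# Minimal example input, mimicking two entries from docs/misra/rules.rst:
rst = """\
* - Rule 2.1
  - Required
  - All source files shall compile without any compilation errors
* - Rule 4.7
  - Required
  - If a function returns error information then that error information shall be tested
"""

print(extract_rule_texts(rst))
```

The resulting file would then be passed to the misra addon (for example via a
`rule-texts` entry in the addon's JSON config), which is what lets cppcheck
print the one-line summaries seen in the appended output above; rules absent
from the file come back as "No description for rule N.M".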


Appendix A Summary of guidelines
Rule 2.1 Required
All source files shall compile without any compilation errors (Misra rule 2.1)
Rule 4.7 Required
If a function returns error information then that error information shall be tested (Misra rule 4.7)
Rule 4.10 Required
Precautions shall be taken in order to prevent the contents of a header file being included more than once (Misra rule 4.10)
Rule 4.14 Required
The validity of values received from external sources shall be checked (Misra rule 4.14)
Rule 1.3 Required
There shall be no occurrence of undefined or critical unspecified behaviour (Misra rule 1.3)
Rule 3.2 Required
Line-splicing shall not be used in // comments (Misra rule 3.2)
Rule 5.1 Required
External identifiers shall be distinct (Misra rule 5.1)
Rule 5.2 Required
Identifiers declared in the same scope and name space shall be distinct (Misra rule 5.2)
Rule 5.3 Required
An identifier declared in an inner scope shall not hide an identifier declared in an outer scope (Misra rule 5.3)
Rule 5.4 Required
Macro identifiers shall be distinct (Misra rule 5.4)
Rule 6.2 Required
Single-bit named bit fields shall not be of a signed type (Misra rule 6.2)
Rule 8.1 Required
Types shall be explicitly specified (Misra rule 8.1)
Rule 8.4 Required
A compatible declaration shall be visible when an object or function with external linkage is defined (Misra rule 8.4)
Rule 8.5 Required
An external object or function shall be declared once in one and only one file (Misra rule 8.5)
Rule 8.6 Required
An identifier with external linkage shall have exactly one external definition (Misra rule 8.6)
Rule 8.8 Required
The static storage class specifier shall be used in all declarations of objects and functions that have internal linkage (Misra rule 8.8)
Rule 8.10 Required
An inline function shall be declared with the static storage class (Misra rule 8.10)
Rule 8.12 Required
Within an enumerator list the value of an implicitly-specified enumeration constant shall be unique (Misra rule 8.12)
Rule 9.1 Mandatory
The value of an object with automatic storage duration shall not be read before it has been set (Misra rule 9.1)
Rule 9.2 Required
The initializer for an aggregate or union shall be enclosed in braces (Misra rule 9.2)
Rule 13.6 Mandatory
The operand of the sizeof operator shall not contain any expression which has potential side effects (Misra rule 13.6)
Rule 14.1 Required
A loop counter shall not have essentially floating type (Misra rule 14.1)
Rule 16.7 Required
A switch-expression shall not have essentially Boolean type (Misra rule 16.7)
Rule 17.3 Mandatory
A function shall not be declared implicitly (Misra rule 17.3)
Rule 17.4 Mandatory
All exit paths from a function with non-void return type shall have an explicit return statement with an expression (Misra rule 17.4)
Rule 20.7 Required
Expressions resulting from the expansion of macro parameters shall be enclosed in parentheses (Misra rule 20.7)
Rule 20.13 Required
A line whose first token is # shall be a valid preprocessing directive (Misra rule 20.13)
Rule 20.14 Required
All #else #elif and #endif preprocessor directives shall reside in the same file as the #if #ifdef or #ifndef directive to which they are related (Misra rule 20.14)
Rule 1.1
No description for rule 1.1
Rule 1.2
No description for rule 1.2
Rule 1.4
No description for rule 1.4
Rule 1.5
No description for rule 1.5
Rule 1.6
No description for rule 1.6
Rule 1.7
No description for rule 1.7
Rule 1.8
No description for rule 1.8
Rule 1.9
No description for rule 1.9
Rule 1.10
No description for rule 1.10
Rule 1.11
No description for rule 1.11
Rule 1.12
No description for rule 1.12
Rule 1.13
No description for rule 1.13
Rule 1.14
No description for rule 1.14
Rule 1.15
No description for rule 1.15
Rule 1.16
No description for rule 1.16
Rule 1.17
No description for rule 1.17
Rule 1.18
No description for rule 1.18
Rule 1.19
No description for rule 1.19
Rule 1.20
No description for rule 1.20
Rule 1.21
No description for rule 1.21
Rule 2.2
No description for rule 2.2
Rule 2.3
No description for rule 2.3
Rule 2.4
No description for rule 2.4
Rule 2.5
No description for rule 2.5
Rule 2.6
No description for rule 2.6
Rule 2.7
No description for rule 2.7
Rule 2.8
No description for rule 2.8
Rule 2.9
No description for rule 2.9
Rule 2.10
No description for rule 2.10
Rule 2.11
No description for rule 2.11
Rule 2.12
No description for rule 2.12
Rule 2.13
No description for rule 2.13
Rule 2.14
No description for rule 2.14
Rule 2.15
No description for rule 2.15
Rule 2.16
No description for rule 2.16
Rule 2.17
No description for rule 2.17
Rule 2.18
No description for rule 2.18
Rule 2.19
No description for rule 2.19
Rule 2.20
No description for rule 2.20
Rule 2.21
No description for rule 2.21
Rule 3.1
No description for rule 3.1
Rule 3.3
No description for rule 3.3
Rule 3.4
No description for rule 3.4
Rule 3.5
No description for rule 3.5
Rule 3.6
No description for rule 3.6
Rule 3.7
No description for rule 3.7
Rule 3.8
No description for rule 3.8
Rule 3.9
No description for rule 3.9
Rule 3.10
No description for rule 3.10
Rule 3.11
No description for rule 3.11
Rule 3.12
No description for rule 3.12
Rule 3.13
No description for rule 3.13
Rule 3.14
No description for rule 3.14
Rule 3.15
No description for rule 3.15
Rule 3.16
No description for rule 3.16
Rule 3.17
No description for rule 3.17
Rule 3.18
No description for rule 3.18
Rule 3.19
No description for rule 3.19
Rule 3.20
No description for rule 3.20
Rule 3.21
No description for rule 3.21
Rule 4.1
No description for rule 4.1
Rule 4.2
No description for rule 4.2
Rule 4.3
No description for rule 4.3
Rule 4.4
No description for rule 4.4
Rule 4.5
No description for rule 4.5
Rule 4.6
No description for rule 4.6
Rule 4.8
No description for rule 4.8
Rule 4.9
No description for rule 4.9
Rule 4.11
No description for rule 4.11
Rule 4.12
No description for rule 4.12
Rule 4.13
No description for rule 4.13
Rule 4.15
No description for rule 4.15
Rule 4.16
No description for rule 4.16
Rule 4.17
No description for rule 4.17
Rule 4.18
No description for rule 4.18
Rule 4.19
No description for rule 4.19
Rule 4.20
No description for rule 4.20
Rule 4.21
No description for rule 4.21
Rule 5.5
No description for rule 5.5
Rule 5.6
No description for rule 5.6
Rule 5.7
No description for rule 5.7
Rule 5.8
No description for rule 5.8
Rule 5.9
No description for rule 5.9
Rule 5.10
No description for rule 5.10
Rule 5.11
No description for rule 5.11
Rule 5.12
No description for rule 5.12
Rule 5.13
No description for rule 5.13
Rule 5.14
No description for rule 5.14
Rule 5.15
No description for rule 5.15
Rule 5.16
No description for rule 5.16
Rule 5.17
No description for rule 5.17
Rule 5.18
No description for rule 5.18
Rule 5.19
No description for rule 5.19
Rule 5.20
No description for rule 5.20
Rule 5.21
No description for rule 5.21
Rule 6.1
No description for rule 6.1
Rule 6.3
No description for rule 6.3
Rule 6.4
No description for rule 6.4
Rule 6.5
No description for rule 6.5
Rule 6.6
No description for rule 6.6
Rule 6.7
No description for rule 6.7
Rule 6.8
No description for rule 6.8
Rule 6.9
No description for rule 6.9
Rule 6.10
No description for rule 6.10
Rule 6.11
No description for rule 6.11
Rule 6.12
No description for rule 6.12
Rule 6.13
No description for rule 6.13
Rule 6.14
No description for rule 6.14
Rule 6.15
No description for rule 6.15
Rule 6.16
No description for rule 6.16
Rule 6.17
No description for rule 6.17
Rule 6.18
No description for rule 6.18
Rule 6.19
No description for rule 6.19
Rule 6.20
No description for rule 6.20
Rule 6.21
No description for rule 6.21
Rule 7.1
No description for rule 7.1
Rule 7.2
No description for rule 7.2
Rule 7.3
No description for rule 7.3
Rule 7.4
No description for rule 7.4
Rule 7.5
No description for rule 7.5
Rule 7.6
No description for rule 7.6
Rule 7.7
No description for rule 7.7
Rule 7.8
No description for rule 7.8
Rule 7.9
No description for rule 7.9
Rule 7.10
No description for rule 7.10
Rule 7.11
No description for rule 7.11
Rule 7.12
No description for rule 7.12
Rule 7.13
No description for rule 7.13
Rule 7.14
No description for rule 7.14
Rule 7.15
No description for rule 7.15
Rule 7.16
No description for rule 7.16
Rule 7.17
No description for rule 7.17
Rule 7.18
No description for rule 7.18
Rule 7.19
No description for rule 7.19
Rule 7.20
No description for rule 7.20
Rule 7.21
No description for rule 7.21
Rule 8.2
No description for rule 8.2
Rule 8.3
No description for rule 8.3
Rule 8.7
No description for rule 8.7
Rule 8.9
No description for rule 8.9
Rule 8.11
No description for rule 8.11
Rule 8.13
No description for rule 8.13
Rule 8.14
No description for rule 8.14
Rule 8.15
No description for rule 8.15
Rule 8.16
No description for rule 8.16
Rule 8.17
No description for rule 8.17
Rule 8.18
No description for rule 8.18
Rule 8.19
No description for rule 8.19
Rule 8.20
No description for rule 8.20
Rule 8.21
No description for rule 8.21
Rule 9.3
No description for rule 9.3
Rule 9.4
No description for rule 9.4
Rule 9.5
No description for rule 9.5
Rule 9.6
No description for rule 9.6
Rule 9.7
No description for rule 9.7
Rule 9.8
No description for rule 9.8
Rule 9.9
No description for rule 9.9
Rule 9.10
No description for rule 9.10
Rule 9.11
No description for rule 9.11
Rule 9.12
No description for rule 9.12
Rule 9.13
No description for rule 9.13
Rule 9.14
No description for rule 9.14
Rule 9.15
No description for rule 9.15
Rule 9.16
No description for rule 9.16
Rule 9.17
No description for rule 9.17
Rule 9.18
No description for rule 9.18
Rule 9.19
No description for rule 9.19
Rule 9.20
No description for rule 9.20
Rule 9.21
No description for rule 9.21
Rule 10.1
No description for rule 10.1
Rule 10.2
No description for rule 10.2
Rule 10.3
No description for rule 10.3
Rule 10.4
No description for rule 10.4
Rule 10.5
No description for rule 10.5
Rule 10.6
No description for rule 10.6
Rule 10.7
No description for rule 10.7
Rule 10.8
No description for rule 10.8
Rule 10.9
No description for rule 10.9
Rule 10.10
No description for rule 10.10
Rule 10.11
No description for rule 10.11
Rule 10.12
No description for rule 10.12
Rule 10.13
No description for rule 10.13
Rule 10.14
No description for rule 10.14
Rule 10.15
No description for rule 10.15
Rule 10.16
No description for rule 10.16
Rule 10.17
No description for rule 10.17
Rule 10.18
No description for rule 10.18
Rule 10.19
No description for rule 10.19
Rule 10.20
No description for rule 10.20
Rule 10.21
No description for rule 10.21
Rule 11.1
No description for rule 11.1
Rule 11.2
No description for rule 11.2
Rule 11.3
No description for rule 11.3
Rule 11.4
No description for rule 11.4
Rule 11.5
No description for rule 11.5
Rule 11.6
No description for rule 11.6
Rule 11.7
No description for rule 11.7
Rule 11.8
No description for rule 11.8
Rule 11.9
No description for rule 11.9
Rule 11.10
No description for rule 11.10
Rule 11.11
No description for rule 11.11
Rule 11.12
No description for rule 11.12
Rule 11.13
No description for rule 11.13
Rule 11.14
No description for rule 11.14
Rule 11.15
No description for rule 11.15
Rule 11.16
No description for rule 11.16
Rule 11.17
No description for rule 11.17
Rule 11.18
No description for rule 11.18
Rule 11.19
No description for rule 11.19
Rule 11.20
No description for rule 11.20
Rule 11.21
No description for rule 11.21
Rule 12.1
No description for rule 12.1
Rule 12.2
No description for rule 12.2
Rule 12.3
No description for rule 12.3
Rule 12.4
No description for rule 12.4
Rule 12.5
No description for rule 12.5
Rule 12.6
No description for rule 12.6
Rule 12.7
No description for rule 12.7
Rule 12.8
No description for rule 12.8
Rule 12.9
No description for rule 12.9
Rule 12.10
No description for rule 12.10
Rule 12.11
No description for rule 12.11
Rule 12.12
No description for rule 12.12
Rule 12.13
No description for rule 12.13
Rule 12.14
No description for rule 12.14
Rule 12.15
No description for rule 12.15
Rule 12.16
No description for rule 12.16
Rule 12.17
No description for rule 12.17
Rule 12.18
No description for rule 12.18
Rule 12.19
No description for rule 12.19
Rule 12.20
No description for rule 12.20
Rule 12.21
No description for rule 12.21
Rule 13.1
No description for rule 13.1
Rule 13.2
No description for rule 13.2
Rule 13.3
No description for rule 13.3
Rule 13.4
No description for rule 13.4
Rule 13.5
No description for rule 13.5
Rule 13.7
No description for rule 13.7
Rule 13.8
No description for rule 13.8
Rule 13.9
No description for rule 13.9
Rule 13.10
No description for rule 13.10
Rule 13.11
No description for rule 13.11
Rule 13.12
No description for rule 13.12
Rule 13.13
No description for rule 13.13
Rule 13.14
No description for rule 13.14
Rule 13.15
No description for rule 13.15
Rule 13.16
No description for rule 13.16
Rule 13.17
No description for rule 13.17
Rule 13.18
No description for rule 13.18
Rule 13.19
No description for rule 13.19
Rule 13.20
No description for rule 13.20
Rule 13.21
No description for rule 13.21
Rule 14.2
No description for rule 14.2
Rule 14.3
No description for rule 14.3
Rule 14.4
No description for rule 14.4
Rule 14.5
No description for rule 14.5
Rule 14.6
No description for rule 14.6
Rule 14.7
No description for rule 14.7
Rule 14.8
No description for rule 14.8
Rule 14.9
No description for rule 14.9
Rule 14.10
No description for rule 14.10
Rule 14.11
No description for rule 14.11
Rule 14.12
No description for rule 14.12
Rule 14.13
No description for rule 14.13
Rule 14.14
No description for rule 14.14
Rule 14.15
No description for rule 14.15
Rule 14.16
No description for rule 14.16
Rule 14.17
No description for rule 14.17
Rule 14.18
No description for rule 14.18
Rule 14.19
No description for rule 14.19
Rule 14.20
No description for rule 14.20
Rule 14.21
No description for rule 14.21
Rule 15.1
No description for rule 15.1
Rule 15.2
No description for rule 15.2
Rule 15.3
No description for rule 15.3
Rule 15.4
No description for rule 15.4
Rule 15.5
No description for rule 15.5
Rule 15.6
No description for rule 15.6
Rule 15.7
No description for rule 15.7
Rule 15.8
No description for rule 15.8
Rule 15.9
No description for rule 15.9
Rule 15.10
No description for rule 15.10
Rule 15.11
No description for rule 15.11
Rule 15.12
No description for rule 15.12
Rule 15.13
No description for rule 15.13
Rule 15.14
No description for rule 15.14
Rule 15.15
No description for rule 15.15
Rule 15.16
No description for rule 15.16
Rule 15.17
No description for rule 15.17
Rule 15.18
No description for rule 15.18
Rule 15.19
No description for rule 15.19
Rule 15.20
No description for rule 15.20
Rule 15.21
No description for rule 15.21
Rule 16.1
No description for rule 16.1
Rule 16.2
No description for rule 16.2
Rule 16.3
No description for rule 16.3
Rule 16.4
No description for rule 16.4
Rule 16.5
No description for rule 16.5
Rule 16.6
No description for rule 16.6
Rule 16.8
No description for rule 16.8
Rule 16.9
No description for rule 16.9
Rule 16.10
No description for rule 16.10
Rule 16.11
No description for rule 16.11
Rule 16.12
No description for rule 16.12
Rule 16.13
No description for rule 16.13
Rule 16.14
No description for rule 16.14
Rule 16.15
No description for rule 16.15
Rule 16.16
No description for rule 16.16
Rule 16.17
No description for rule 16.17
Rule 16.18
No description for rule 16.18
Rule 16.19
No description for rule 16.19
Rule 16.20
No description for rule 16.20
Rule 16.21
No description for rule 16.21
Rule 17.1
No description for rule 17.1
Rule 17.2
No description for rule 17.2
Rule 17.5
No description for rule 17.5
Rule 17.6
No description for rule 17.6
Rule 17.7
No description for rule 17.7
Rule 17.8
No description for rule 17.8
Rule 17.9
No description for rule 17.9
Rule 17.10
No description for rule 17.10
Rule 17.11
No description for rule 17.11
Rule 17.12
No description for rule 17.12
Rule 17.13
No description for rule 17.13
Rule 17.14
No description for rule 17.14
Rule 17.15
No description for rule 17.15
Rule 17.16
No description for rule 17.16
Rule 17.17
No description for rule 17.17
Rule 17.18
No description for rule 17.18
Rule 17.19
No description for rule 17.19
Rule 17.20
No description for rule 17.20
Rule 17.21
No description for rule 17.21
Rule 18.1
No description for rule 18.1
Rule 18.2
No description for rule 18.2
Rule 18.3
No description for rule 18.3
Rule 18.4
No description for rule 18.4
Rule 18.5
No description for rule 18.5
Rule 18.6
No description for rule 18.6
Rule 18.7
No description for rule 18.7
Rule 18.8
No description for rule 18.8
Rule 18.9
No description for rule 18.9
Rule 18.10
No description for rule 18.10
Rule 18.11
No description for rule 18.11
Rule 18.12
No description for rule 18.12
Rule 18.13
No description for rule 18.13
Rule 18.14
No description for rule 18.14
Rule 18.15
No description for rule 18.15
Rule 18.16
No description for rule 18.16
Rule 18.17
No description for rule 18.17
Rule 18.18
No description for rule 18.18
Rule 18.19
No description for rule 18.19
Rule 18.20
No description for rule 18.20
Rule 18.21
No description for rule 18.21
Rule 19.1
No description for rule 19.1
Rule 19.2
No description for rule 19.2
Rule 19.3
No description for rule 19.3
Rule 19.4
No description for rule 19.4
Rule 19.5
No description for rule 19.5
Rule 19.6
No description for rule 19.6
Rule 19.7
No description for rule 19.7
Rule 19.8
No description for rule 19.8
Rule 19.9
No description for rule 19.9
Rule 19.10
No description for rule 19.10
Rule 19.11
No description for rule 19.11
Rule 19.12
No description for rule 19.12
Rule 19.13
No description for rule 19.13
Rule 19.14
No description for rule 19.14
Rule 19.15
No description for rule 19.15
Rule 19.16
No description for rule 19.16
Rule 19.17
No description for rule 19.17
Rule 19.18
No description for rule 19.18
Rule 19.19
No description for rule 19.19
Rule 19.20
No description for rule 19.20
Rule 19.21
No description for rule 19.21
Rule 20.1
No description for rule 20.1
Rule 20.2
No description for rule 20.2
Rule 20.3
No description for rule 20.3
Rule 20.4
No description for rule 20.4
Rule 20.5
No description for rule 20.5
Rule 20.6
No description for rule 20.6
Rule 20.8
No description for rule 20.8
Rule 20.9
No description for rule 20.9
Rule 20.10
No description for rule 20.10
Rule 20.11
No description for rule 20.11
Rule 20.12
No description for rule 20.12
Rule 20.15
No description for rule 20.15
Rule 20.16
No description for rule 20.16
Rule 20.17
No description for rule 20.17
Rule 20.18
No description for rule 20.18
Rule 20.19
No description for rule 20.19
Rule 20.20
No description for rule 20.20
Rule 20.21
No description for rule 20.21
Rule 21.1
No description for rule 21.1
Rule 21.2
No description for rule 21.2
Rule 21.3
No description for rule 21.3
Rule 21.4
No description for rule 21.4
Rule 21.5
No description for rule 21.5
Rule 21.6
No description for rule 21.6
Rule 21.7
No description for rule 21.7
Rule 21.8
No description for rule 21.8
Rule 21.9
No description for rule 21.9
Rule 21.10
No description for rule 21.10
Rule 21.11
No description for rule 21.11
Rule 21.12
No description for rule 21.12
Rule 21.13
No description for rule 21.13
Rule 21.14
No description for rule 21.14
Rule 21.15
No description for rule 21.15
Rule 21.16
No description for rule 21.16
Rule 21.17
No description for rule 21.17
Rule 21.18
No description for rule 21.18
Rule 21.19
No description for rule 21.19
Rule 21.20
No description for rule 21.20
Rule 21.21
No description for rule 21.21
Appendix B



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 00:09:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 00:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353394.580320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nvp-0007Ug-Fb; Wed, 22 Jun 2022 00:09:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353394.580320; Wed, 22 Jun 2022 00:09:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nvp-0007UZ-C6; Wed, 22 Jun 2022 00:09:17 +0000
Received: by outflank-mailman (input) for mailman id 353394;
 Wed, 22 Jun 2022 00:09:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o3nvo-0007UT-7a
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 00:09:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c4ac433-f1bf-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 02:09:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 15941617B7;
 Wed, 22 Jun 2022 00:09:13 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3D857C3411C;
 Wed, 22 Jun 2022 00:09:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c4ac433-f1bf-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655856552;
	bh=jaHk90M96jXcHuKs9BQl+3bER0XQNMlvET5rYb7Ihnc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=VDTSPG5+RA40IDGhckWyUmcEqRBwmcMaGTwHYLhYLqREKHfYaTsGnZ5WOsm3BSY5B
	 kKGXvitclxZpz1iUtF3cy/J12OYEzpbzEFvcOjkDl9vM1wRWToeAGAdGXRrANgyyiZ
	 VZhpgIzQ8tEhSW3arIwPRpxqZ494G01fJl2o5wHO3p94BCwKWWc6lesHrFRsgcDlp/
	 T5MT2CKBzGKXXG0p0C/SEx9ysk8hmGdthf2hjjHc7QLa1/fyUMAZZK169E8LHLn9mM
	 JKTuFBW76Vaj/gnSl1aY93GrZdxLgecU2bvpRuvrm+BwosVWo+a7QszCdlbdv1ukDi
	 vqqmk/HB3O54w==
Date: Tue, 21 Jun 2022 17:09:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [ImageBuilder] [PATCH 1/2] uboot-script-gen: Skip dom0 instead
 of exiting if DOM0_KERNEL is not set
In-Reply-To: <20220619124316.378365-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206211709020.788376@ubuntu-linux-20-04-desktop>
References: <20220619124316.378365-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 19 Jun 2022, Xenia Ragiadakou wrote:
> When the parameter DOM0_KERNEL is not specified and NUM_DOMUS is not 0,
> instead of failing the script, just skip any dom0-specific setup.
> This way the script can be used to boot XEN in dom0less mode.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  scripts/uboot-script-gen | 60 ++++++++++++++++++++++++++++------------
>  1 file changed, 43 insertions(+), 17 deletions(-)
> 
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 455b4c0..bdc8a6b 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -168,10 +168,15 @@ function xen_device_tree_editing()
>      dt_set "/chosen" "#address-cells" "hex" "0x2"
>      dt_set "/chosen" "#size-cells" "hex" "0x2"
>      dt_set "/chosen" "xen,xen-bootargs" "str" "$XEN_CMD"
> -    dt_mknode "/chosen" "dom0"
> -    dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage xen,multiboot-module multiboot,module"
> -    dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
> -    dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
> +
> +    if test "$DOM0_KERNEL"
> +    then
> +        dt_mknode "/chosen" "dom0"
> +        dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage xen,multiboot-module multiboot,module"
> +        dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
> +        dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
> +    fi
> +
>      if test "$DOM0_RAMDISK" && test $ramdisk_addr != "-"
>      then
>          dt_mknode "/chosen" "dom0-ramdisk"
> @@ -203,7 +208,10 @@ function xen_device_tree_editing()
>              add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
>          fi
>          dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
> -        dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
> +        if test "$DOM0_KERNEL"
> +        then
> +            dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
> +        fi
>  
>          if test "${DOMU_COLORS[$i]}"
>          then
> @@ -433,6 +441,19 @@ function xen_config()
>              DOM0_CMD="$DOM0_CMD root=$root_dev"
>          fi
>      fi
> +    if test -z "$DOM0_KERNEL"
> +    then
> +        if test "$NUM_DOMUS" -eq "0"
> +        then
> +            echo "Neither dom0 or domUs are specified, exiting."
> +            exit 1
> +        fi
> +        echo "Dom0 kernel is not specified, continue with dom0less setup."
> +        unset DOM0_RAMDISK
> +        # Remove dom0 specific parameters from the XEN command line.
> +        local params=($XEN_CMD)
> +        XEN_CMD="${params[@]/dom0*/}"
> +    fi
>      i=0
>      while test $i -lt $NUM_DOMUS
>      do
> @@ -490,11 +511,13 @@ generate_uboot_images()
>  
>  xen_file_loading()
>  {
> -    check_compressed_file_type $DOM0_KERNEL "executable"
> -    dom0_kernel_addr=$memaddr
> -    load_file $DOM0_KERNEL "dom0_linux"
> -    dom0_kernel_size=$filesize
> -
> +    if test "$DOM0_KERNEL"
> +    then
> +        check_compressed_file_type $DOM0_KERNEL "executable"
> +        dom0_kernel_addr=$memaddr
> +        load_file $DOM0_KERNEL "dom0_linux"
> +        dom0_kernel_size=$filesize
> +    fi
>      if test "$DOM0_RAMDISK"
>      then
>          check_compressed_file_type $DOM0_RAMDISK "cpio archive"
> @@ -597,14 +620,16 @@ bitstream_load_and_config()
>  
>  create_its_file_xen()
>  {
> -    if test "$ramdisk_addr" != "-"
> +    if test "$DOM0_KERNEL"
>      then
> -        load_files="\"dom0_linux\", \"dom0_ramdisk\""
> -    else
> -        load_files="\"dom0_linux\""
> -    fi
> -    # xen below
> -    cat >> "$its_file" <<- EOF
> +        if test "$ramdisk_addr" != "-"
> +        then
> +            load_files="\"dom0_linux\", \"dom0_ramdisk\""
> +        else
> +            load_files="\"dom0_linux\""
> +        fi
> +        # xen below
> +        cat >> "$its_file" <<- EOF
>          dom0_linux {
>              description = "dom0 linux kernel binary";
>              data = /incbin/("$DOM0_KERNEL");
> @@ -616,6 +641,7 @@ create_its_file_xen()
>              $fit_algo
>          };
>  	EOF
> +    fi
>      # domUs
>      i=0
>      while test $i -lt $NUM_DOMUS
> -- 
> 2.34.1
> 
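[Editorial note: the guard logic added by the patch above can be sketched as a hypothetical standalone snippet — the variable names mirror uboot-script-gen, but this is not the script itself. With DOM0_KERNEL unset, the script no longer exits so long as at least one domU is configured, and falls through to a dom0less setup.]

```shell
# Hypothetical sketch of the new check in xen_config(): only fail when
# there is nothing at all to boot (no dom0 kernel AND no domUs).
DOM0_KERNEL=""
NUM_DOMUS=2
mode="dom0"

if test -z "$DOM0_KERNEL"
then
    if test "$NUM_DOMUS" -eq 0
    then
        echo "Neither dom0 nor domUs are specified, exiting."
        exit 1
    fi
    # Skip dom0-specific setup and continue with a dom0less configuration.
    mode="dom0less"
fi
echo "Boot mode: $mode"
```

With the values above this prints "Boot mode: dom0less"; with NUM_DOMUS=0 it would exit with status 1 instead.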


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 00:12:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 00:12:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353402.580331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nz6-0000SU-Vq; Wed, 22 Jun 2022 00:12:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353402.580331; Wed, 22 Jun 2022 00:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3nz6-0000SN-S9; Wed, 22 Jun 2022 00:12:40 +0000
Received: by outflank-mailman (input) for mailman id 353402;
 Wed, 22 Jun 2022 00:12:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o3nz5-0000SH-Pz
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 00:12:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05a6349d-f1c0-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 02:12:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EB13B61779;
 Wed, 22 Jun 2022 00:12:36 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 240A8C3411C;
 Wed, 22 Jun 2022 00:12:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05a6349d-f1c0-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655856756;
	bh=mBFskbB8TiU8xRYMwmKbJYtZ2QdaEzC4sErsujEwie4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=W49BTjBttx7UO+01zUGRyTTU2BTAWpT26I7y4hUORosHG6ejNqM5hEAzFpHs7LOFy
	 oTZ5mFJPIhK6L1UndGNCPQuLZkrQUXkMLI8R1mIodhwqa7nptIHUU6/+llXJDhvht9
	 kpT0j8pnx4z4b+YX856iku18ftUqdMIU1DxNSZPBPtxEHeQWJqEUfjGQAVVUG1aGn9
	 Sy9GEXiiLOUAi4F4NMaJzPxTPsEj5SGGrkTufbf8+wK7XGTaVOIk1jse8OCpq3as8V
	 o9C8jzQv8cl3uuAkYLWNlSlZ/kx0XSBcVcKIZErfkf7Nbwtavpq39QRKJMK400/E2Q
	 G1uVZwvWTAEfA==
Date: Tue, 21 Jun 2022 17:12:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [ImageBuilder] [PATCH 2/2] uboot-script-gen: Enable direct
 mapping of statically allocated memory
In-Reply-To: <20220619124316.378365-2-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206211712260.788376@ubuntu-linux-20-04-desktop>
References: <20220619124316.378365-1-burzalodowa@gmail.com> <20220619124316.378365-2-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 19 Jun 2022, Xenia Ragiadakou wrote:
> Direct mapping for dom0less VMs is disabled by default in XEN and can be
> enabled through the 'direct-map' property.
> Add a new config parameter DOMU_DIRECT_MAP to be able to enable or disable
> direct mapping, i.e. set to 1 for enabling and 0 for disabling.
> This parameter is optional. Direct mapping is enabled by default for all
> dom0less VMs with static allocation.
> 
> The property 'direct-map' is a boolean property. Boolean properties are true
> if present and false if missing.
> Add a new data_type 'bool' in function dt_set() to setup a boolean property.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  README.md                |  4 ++++
>  scripts/uboot-script-gen | 18 ++++++++++++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/README.md b/README.md
> index c52e4b9..17ff206 100644
> --- a/README.md
> +++ b/README.md
> @@ -168,6 +168,10 @@ Where:
>    if specified, indicates the host physical address regions
>    [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
>  
> +- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
> +  If set to 1, the VM is direct mapped. The default is 1.
> +  This is only applicable when DOMU_STATIC_MEM is specified.
> +
>  - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>    used.  To enable this set any LINUX\_\* variables and do NOT set the
>    XEN variable.
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index bdc8a6b..e85c6ec 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -27,6 +27,7 @@ function dt_mknode()
>  #   hex
>  #   str
>  #   str_a
> +#   bool
>  function dt_set()
>  {
>      local path=$1
> @@ -49,6 +50,12 @@ function dt_set()
>                  array+=" \"$element\""
>              done
>              echo "fdt set $path $var $array" >> $UBOOT_SOURCE
> +        elif test $data_type = "bool"
> +        then
> +            if test "$data" -eq 1
> +            then
> +                echo "fdt set $path $var" >> $UBOOT_SOURCE
> +            fi
>          else
>              echo "fdt set $path $var \"$data\"" >> $UBOOT_SOURCE
>          fi
> @@ -65,6 +72,12 @@ function dt_set()
>          elif test $data_type = "str_a"
>          then
>              fdtput $FDTEDIT -p -t s $path $var $data
> +        elif test $data_type = "bool"
> +        then
> +            if test "$data" -eq 1
> +            then
> +                fdtput $FDTEDIT -p $path $var
> +            fi
>          else
>              fdtput $FDTEDIT -p -t s $path $var "$data"
>          fi
> @@ -206,6 +219,7 @@ function xen_device_tree_editing()
>          if test "${DOMU_STATIC_MEM[$i]}"
>          then
>              add_device_tree_static_mem "/chosen/domU$i" "${DOMU_STATIC_MEM[$i]}"
> +            dt_set "/chosen/domU$i" "direct-map" "bool" "${DOMU_DIRECT_MAP[$i]}"
>          fi
>          dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
>          if test "$DOM0_KERNEL"
> @@ -470,6 +484,10 @@ function xen_config()
>          then
>              DOMU_CMD[$i]="console=ttyAMA0"
>          fi
> +        if test -z "${DOMU_DIRECT_MAP[$i]}"
> +        then
> +             DOMU_DIRECT_MAP[$i]=1
> +        fi
>          i=$(( $i + 1 ))
>      done
>  }
> -- 
> 2.34.1
> 
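[Editorial note: the boolean-property handling added by the patch above can be condensed into a small sketch. `dt_set_bool` here is a hypothetical stand-in for the new "bool" branch of the script's dt_set() function (u-boot script output path only): a device-tree boolean property is true when present and false when absent, so a value of 0 must emit no "fdt set" line at all.]

```shell
# Hypothetical condensed form of dt_set()'s "bool" data_type branch.
UBOOT_SOURCE=$(mktemp)

dt_set_bool() {
    local path=$1 var=$2 data=$3
    if test "$data" -eq 1
    then
        # Presence of the property means "true"; no value is written.
        echo "fdt set $path $var" >> "$UBOOT_SOURCE"
    fi
}

dt_set_bool "/chosen/domU0" "direct-map" 1   # emitted
dt_set_bool "/chosen/domU1" "direct-map" 0   # silently skipped
cat "$UBOOT_SOURCE"
```

Running this prints a single line, `fdt set /chosen/domU0 direct-map`; domU1 gets no property, which the hypervisor reads as direct-map disabled.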


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 01:01:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 01:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353411.580342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3oju-0001tW-Ke; Wed, 22 Jun 2022 01:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353411.580342; Wed, 22 Jun 2022 01:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3oju-0001sZ-Fx; Wed, 22 Jun 2022 01:01:02 +0000
Received: by outflank-mailman (input) for mailman id 353411;
 Wed, 22 Jun 2022 01:01:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ojt-0001SP-7U; Wed, 22 Jun 2022 01:01:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ojt-0001AU-3U; Wed, 22 Jun 2022 01:01:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ojs-00057Y-L8; Wed, 22 Jun 2022 01:01:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3ojs-0003Ar-Ki; Wed, 22 Jun 2022 01:01:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c7BOo5vynBMuwZyc5hd+wS1WaX7cEVbVnQTLppq1VGg=; b=wAHT98/+iyx/Ue5s9m2kKABmpp
	XQZ0fTIMi7OD4xMcFqQT7q2XGWNmOPLh7E9kDsbpabi1LoNkNo/JKaYjI6A1S1F1ADsQZqfIv1CGH
	ib8yAEiZyvIVWjO+am7Rlsl+6p0s+/A4UZS561YhnE/QVVQfrsnQh6qCPhDmoB6gEIIE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171301: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5cdcfd861e3cdb98d3239ba78c97a1a2b13d2a70
X-Osstest-Versions-That:
    qemuu=c8b2d413761af732a0798d8df45ce968732083fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 01:01:00 +0000

flight 171301 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171301/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171288
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171288
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171288
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171288
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171288
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171288
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171288
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171288
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                5cdcfd861e3cdb98d3239ba78c97a1a2b13d2a70
baseline version:
 qemuu                c8b2d413761af732a0798d8df45ce968732083fe

Last test of basis   171288  2022-06-20 07:19:31 Z    1 days
Testing same since   171301  2022-06-21 18:38:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  BALATON Zoltan <balaton@eik.bme.hu>
  Cédric Le Goater <clg@kaod.org>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Frederic Barrat <fbarrat@linux.ibm.com>
  Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
  Matheus Ferst <matheus.ferst@eldorado.org.br>
  Michael S. Tsirkin <mst@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c8b2d41376..5cdcfd861e  5cdcfd861e3cdb98d3239ba78c97a1a2b13d2a70 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 01:24:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 01:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353422.580353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3p6D-0006GY-Mp; Wed, 22 Jun 2022 01:24:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353422.580353; Wed, 22 Jun 2022 01:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3p6D-0006GR-Ji; Wed, 22 Jun 2022 01:24:05 +0000
Received: by outflank-mailman (input) for mailman id 353422;
 Wed, 22 Jun 2022 01:24:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KR9/=W5=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o3p6B-0006GL-F0
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 01:24:03 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fce85ed1-f1c9-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 03:24:01 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 3453F5C0172;
 Tue, 21 Jun 2022 21:23:57 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Tue, 21 Jun 2022 21:23:57 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 21 Jun 2022 21:23:56 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fce85ed1-f1c9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1655861037; x=1655947437; bh=y7SHRRSIRDbqIJG7VRoehYFyw4aX5XSOFLk
	h/fPAzBk=; b=Y0qqXqntAutVJCe1lmwsqYy+8HFFvSEDwpnxRauOVxZV6bl0R62
	iXplt//uBCMZyozPNZn37R8TVEP4gcJ9/PzlO1avv0iUwQJkm1bAbztznk/BW+c/
	tWcyX3/njNgytOemJ5NKVtHQuDh2GRxpb1UcxVPht1TRDv3SZkKXXlZ/eLAUxafu
	TIVjlrm5Q+xhTDrbpgrC6ycVQXUWm8MWQKCvHwwRqYO9jDV/wb/4AxmTEDTj6WuB
	XFfgsDoty0QxV/GIuAWfLdSyKXaVUrZLQ7eaDEfVVsnn8XyBy/G3yIqGGEtAsA4g
	9hLSE8nLUnnQ4BXmQclBgxC3RHriSZI2tDg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1655861037; x=1655947437; bh=y7SHRRSIRDbqI
	JG7VRoehYFyw4aX5XSOFLkh/fPAzBk=; b=rWBDBfNeruRQfDp1HiPQeEfy4Z1vP
	pJ31Pj2ohAIfFUqGuLjF1rND5P6PI4vk5LS0RbpK/sflSrx0d7o4Y6sECbTaObHM
	72d3c1JipKqRgpoEuwDslCJCP+CHPrdUpPs8UjAZFruMceU/FIbpRRFAy4bx4ODy
	I68aMPgEyOxQAUPGMCwUmHCvhqJN4zSNR9eElu9s4EJv694Uk1BLDABF4Nb9N3XM
	S0AFUvofw7oJyAUkwoIg5nDMMg8MlxBmCjnqaiuRwm96Tu0t4PvVTspooRHLEOnf
	NbOqPJPsSl5Lh0dJmQnWZsLsa+dsIfpRDDiUv63xl2vpIMzoP8ORzv2BQ==
X-ME-Sender: <xms:LG-yYrCky2D3OWthiT3XFGE-YiY0y1_vU-nm9lt9mZ_qlyQqIVovUQ>
    <xme:LG-yYhhb4eYIpM5K0-99QCDiSflVfmKlnjeOww_lWngWkVnlMWJNzGs-Du_y4pYTt
    5jQKKN9U06YWLY>
X-ME-Received: <xmr:LG-yYmmxS4dom-FcTiI7sGWdpHNb9M5A7r2kFPGfgUTGC4cDKpH2THe-FPnWS3NWit5FlArybHRN>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefgedggeejucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeekieeljeeutefgieejfffglefhieet
    veffleefudefjeegleejvdelvdefueffveenucffohhmrghinhepkhgvrhhnvghlrdhorh
    hgnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepuggv
    mhhisehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:LW-yYtxLTnxmIzhG5IT7lfCRdlA72TfHqxunph3FS4h1q-Y7O2Lyqw>
    <xmx:LW-yYgS5kTjSOs7KT8S4shdOtfWfIeqa5equ1rLTnQipnQeZN4M-UQ>
    <xmx:LW-yYgb724olnXg5n82nuD0HadK1tkayQWo01ORV29wfv7W7EO1uGg>
    <xmx:LW-yYp4ybHdvX2ClISNDpOdiES5x4Q5gFX7QrWogLtD-Js1pZ6NAFQ>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v7] Preserve the EFI System Resource Table for dom0
Date: Tue, 21 Jun 2022 21:23:39 -0400
Message-Id: <7b2f97eda968d6db368c605ff0350d732554c39b.1655860720.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The EFI System Resource Table (ESRT) is necessary for fwupd to identify
firmware updates to install.  According to the UEFI specification §23.4,
the ESRT shall be stored in memory of type EfiBootServicesData.  However,
memory of type EfiBootServicesData is considered general-purpose memory
by Xen, so the ESRT needs to be moved somewhere where Xen will not
overwrite it.  Copy the ESRT to memory of type EfiRuntimeServicesData,
which Xen will not reuse.  dom0 can use the ESRT if (and only if) it is
in memory of type EfiRuntimeServicesData.

Earlier versions of this patch reserved the memory in which the ESRT was
located.  This created awkward alignment problems, and required either
splitting the E820 table or wasting memory.  It would also have required
a new platform op through which dom0 could learn whether the ESRT is reserved.
By copying the ESRT into EfiRuntimeServicesData memory, the E820 table
does not need to be modified, and dom0 can just check the type of the
memory region containing the ESRT.  The copy is only done if the ESRT is
not already in EfiRuntimeServicesData memory, avoiding memory leaks on
repeated kexec.

See https://lore.kernel.org/xen-devel/20200818184018.GN1679@mail-itl/T/
for details.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 xen/common/efi/boot.c | 133 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index a25e1d29f1..593962c42c 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -39,6 +39,26 @@
   { 0x605dab50, 0xe046, 0x4300, {0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, 0x8b, 0x23} }
 #define APPLE_PROPERTIES_PROTOCOL_GUID \
   { 0x91bd12fe, 0xf6c3, 0x44fb, { 0xa5, 0xb7, 0x51, 0x22, 0xab, 0x30, 0x3a, 0xe0} }
+#define EFI_SYSTEM_RESOURCE_TABLE_GUID    \
+  { 0xb122a263, 0x3661, 0x4f68, {0x99, 0x29, 0x78, 0xf8, 0xb0, 0xd6, 0x21, 0x80} }
+#define EFI_SYSTEM_RESOURCE_TABLE_FIRMWARE_RESOURCE_VERSION 1
+
+typedef struct {
+    EFI_GUID FwClass;
+    UINT32 FwType;
+    UINT32 FwVersion;
+    UINT32 LowestSupportedFwVersion;
+    UINT32 CapsuleFlags;
+    UINT32 LastAttemptVersion;
+    UINT32 LastAttemptStatus;
+} EFI_SYSTEM_RESOURCE_ENTRY;
+
+typedef struct {
+    UINT32 FwResourceCount;
+    UINT32 FwResourceCountMax;
+    UINT64 FwResourceVersion;
+    EFI_SYSTEM_RESOURCE_ENTRY Entries[];
+} EFI_SYSTEM_RESOURCE_TABLE;
 
 typedef EFI_STATUS
 (/* _not_ EFIAPI */ *EFI_SHIM_LOCK_VERIFY) (
@@ -567,6 +587,41 @@ static int __init efi_check_dt_boot(const EFI_LOADED_IMAGE *loaded_image)
 }
 #endif
 
+static UINTN __initdata esrt = EFI_INVALID_TABLE_ADDR;
+
+static size_t __init get_esrt_size(const EFI_MEMORY_DESCRIPTOR *desc)
+{
+    size_t available_len, len;
+    const UINTN physical_start = desc->PhysicalStart;
+    const EFI_SYSTEM_RESOURCE_TABLE *esrt_ptr;
+
+    len = desc->NumberOfPages << EFI_PAGE_SHIFT;
+    if ( esrt == EFI_INVALID_TABLE_ADDR )
+        return 0;
+    if ( physical_start > esrt || esrt - physical_start >= len )
+        return 0;
+    /*
> +     * The specification requires EfiBootServicesData, but also accept
> +     * EfiRuntimeServicesData, which is a more logical choice.
+     */
+    if ( (desc->Type != EfiRuntimeServicesData) &&
+         (desc->Type != EfiBootServicesData) )
+        return 0;
+    available_len = len - (esrt - physical_start);
+    if ( available_len <= offsetof(EFI_SYSTEM_RESOURCE_TABLE, Entries) )
+        return 0;
+    available_len -= offsetof(EFI_SYSTEM_RESOURCE_TABLE, Entries);
+    esrt_ptr = (const EFI_SYSTEM_RESOURCE_TABLE *)esrt;
+    if ( (esrt_ptr->FwResourceVersion !=
+          EFI_SYSTEM_RESOURCE_TABLE_FIRMWARE_RESOURCE_VERSION) ||
+         !esrt_ptr->FwResourceCount )
+        return 0;
+    if ( esrt_ptr->FwResourceCount > available_len / sizeof(esrt_ptr->Entries[0]) )
+        return 0;
+
+    return esrt_ptr->FwResourceCount * sizeof(esrt_ptr->Entries[0]);
+}
+
 /*
  * Include architecture specific implementation here, which references the
  * static globals defined above.
@@ -845,6 +900,8 @@ static UINTN __init efi_find_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop,
     return gop_mode;
 }
 
+static EFI_GUID __initdata esrt_guid = EFI_SYSTEM_RESOURCE_TABLE_GUID;
+
 static void __init efi_tables(void)
 {
     unsigned int i;
@@ -868,6 +925,8 @@ static void __init efi_tables(void)
             efi.smbios = (unsigned long)efi_ct[i].VendorTable;
         if ( match_guid(&smbios3_guid, &efi_ct[i].VendorGuid) )
             efi.smbios3 = (unsigned long)efi_ct[i].VendorTable;
+        if ( match_guid(&esrt_guid, &efi_ct[i].VendorGuid) )
+            esrt = (UINTN)efi_ct[i].VendorTable;
     }
 
 #ifndef CONFIG_ARM /* TODO - disabled until implemented on ARM */
@@ -1051,6 +1110,62 @@ static void __init efi_set_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop, UINTN gop
 #define INVALID_VIRTUAL_ADDRESS (0xBAAADUL << \
                                  (EFI_PAGE_SHIFT + BITS_PER_LONG - 32))
 
+static void __init efi_relocate_esrt(EFI_SYSTEM_TABLE *SystemTable)
+{
+    EFI_STATUS status;
+    UINTN info_size = 0, map_key;
+    unsigned int i;
+    void *memory_map = NULL;
+
+    for (;;) {
+        status = efi_bs->GetMemoryMap(&info_size, memory_map, &map_key,
+                                      &efi_mdesc_size, &mdesc_ver);
+        if ( status == EFI_SUCCESS && memory_map != NULL )
+            break;
+        if ( status == EFI_BUFFER_TOO_SMALL || memory_map == NULL ) {
+            info_size *= 2;
+            if ( memory_map != NULL )
+                efi_bs->FreePool(memory_map);
+            status = efi_bs->AllocatePool(EfiLoaderData, info_size, &memory_map);
+            if ( status == EFI_SUCCESS )
+                continue;
+        }
+        return;
+    }
+
+    /* Try to obtain the ESRT.  Errors are not fatal. */
+    for ( i = 0; i < info_size; i += efi_mdesc_size )
+    {
+        /*
+         * ESRT needs to be moved to memory of type EfiRuntimeServicesData
+         * so that the memory it is in will not be used for other purposes.
+         */
+        void *new_esrt = NULL;
> +        size_t esrt_size = get_esrt_size(memory_map + i);
> +
> +        if ( !esrt_size )
> +            continue;
> +        if ( ((EFI_MEMORY_DESCRIPTOR *)(memory_map + i))->Type ==
+             EfiRuntimeServicesData )
+            return; /* ESRT already safe from reuse */
+        status = efi_bs->AllocatePool(EfiRuntimeServicesData, esrt_size,
+                                      &new_esrt);
+        if ( status == EFI_SUCCESS && new_esrt )
+        {
+            memcpy(new_esrt, (void *)esrt, esrt_size);
+            status = efi_bs->InstallConfigurationTable(&esrt_guid, new_esrt);
+            if ( status != EFI_SUCCESS )
+            {
+                PrintErr(L"Cannot install new ESRT\r\n");
+                efi_bs->FreePool(new_esrt);
+            }
+        }
+        else
+            PrintErr(L"Cannot allocate memory for ESRT\r\n");
+        break;
+    }
+}
+
 static void __init efi_exit_boot(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     EFI_STATUS status;
@@ -1067,6 +1182,13 @@ static void __init efi_exit_boot(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *Syste
     if ( !efi_memmap )
         blexit(L"Unable to allocate memory for EFI memory map");
 
+    status = SystemTable->BootServices->GetMemoryMap(&efi_memmap_size,
+                                                     efi_memmap, &map_key,
+                                                     &efi_mdesc_size,
+                                                     &mdesc_ver);
+    if ( EFI_ERROR(status) )
+        PrintErrMesg(L"Cannot obtain memory map", status);
+
     for ( retry = false; ; retry = true )
     {
         efi_memmap_size = info_size;
@@ -1413,6 +1535,8 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
     if ( gop )
         efi_set_gop_mode(gop, gop_mode);
 
+    efi_relocate_esrt(SystemTable);
+
     efi_exit_boot(ImageHandle, SystemTable);
 
     efi_arch_post_exit_boot(); /* Doesn't return. */
@@ -1753,3 +1877,12 @@ void __init efi_init_memory(void)
     unmap_domain_page(efi_l4t);
 }
 #endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 02:28:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 02:28:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353432.580364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3q5y-0004JU-Go; Wed, 22 Jun 2022 02:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353432.580364; Wed, 22 Jun 2022 02:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3q5y-0004JM-Bh; Wed, 22 Jun 2022 02:27:54 +0000
Received: by outflank-mailman (input) for mailman id 353432;
 Wed, 22 Jun 2022 02:27:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KR9/=W5=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o3q5x-0004JG-4R
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 02:27:53 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e858b309-f1d2-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 04:27:49 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 3190B5C00E2;
 Tue, 21 Jun 2022 22:27:48 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 21 Jun 2022 22:27:48 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 21 Jun 2022 22:27:46 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e858b309-f1d2-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:sender:subject:subject:to:to; s=fm2; t=1655864868; x=
	1655951268; bh=GcU2tFZp/9iFH/MJo6UjIxw9Kv+4l3HrP6DKPjWQj3w=; b=a
	l0Emb6cXDROwzCXUToGQnhRpoR1/J6dwLUF3/5kSptwCsV24PLJHfHf4noNGsoMD
	JwtgENxeuX6dDMf6GnxJ7bpfEog/4/nDObMTHVZ6ZxgnlwpCESdeoCQNiExqEPPQ
	UQWnpUmGenU/sVjMRC5pTFQtw7elwbJNxgU1PsaO5Ce6QmTVl1NIMBKckri3YIXR
	OMO0qSupcgbb5CbzxmoM3UNOwAyZUyfGwwLsKPakFb6ldWgdb7VQ0ORTQMcdM3Ql
	gVgVtAZzxdTIJxMKZkq14N91q7/ZxjzBBTfRch4iLaFJzCj40FHaOLevOvPC4zBi
	JVKKWbUmIOCcqBFEH4+NA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=
	1655864868; x=1655951268; bh=GcU2tFZp/9iFH/MJo6UjIxw9Kv+4l3HrP6D
	KPjWQj3w=; b=feJyVyQ1s+P9ZzcklTIilzR38/QFuvFFfj64g2AKSCgB4pFB2bn
	A+JeCCPEaHVj2YhxKlEmB7tep0LiPX44gqnAJ1/+UB1tc3/lY59Vp2Aje9aU688u
	Q/EgHFkMMIGZB442wEKYsC9qZqRktQZ0ISoTz9rnnimLoSVW6azwhBzLcJgUcGIB
	nH5a0wMYQu4XsC7m9Ebv9peS8QujYT5YTvqVRcrxP1DK0DSgCJVRhGKGAdHS9pCi
	BZPsmHqnbtKls+kNTj5XsXdATI2Nt8grY+wJlS8valcIx/VO6zdgawV+eVtz4gMb
	JzZciDk+eqWeJ4iiYvuOD22jbsGinqsp4Tg==
X-ME-Sender: <xms:I36yYj_vxrerhtOWOwWG46djUE3yh-7SReOl8tos0DHbmF7bciObQg>
    <xme:I36yYvuTsHJk1V6jJtbdPuqD0pGXWL_wm6reglIpAmf4Wt3J88ny7zEMvOulXOr1v
    msuWvc9BlGKyug>
X-ME-Received: <xmr:I36yYhDc5ob-l71sJ6Ep1b7GklCoB5-PAmuyPqTgC36tIePaXduQ5UM7ouIsi16Eo3L1AluuXty5>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefgedgieduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgggfestdekredtredttdenucfhrhhomhepffgvmhhiucfo
    rghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhnghhslh
    grsgdrtghomheqnecuggftrfgrthhtvghrnhepieejgedufeeukeeijedukefgueekvdeg
    iedtudefhfdtffehffeuveefvdfglefgnecuffhomhgrihhnpehgihhthhhusgdrtghomh
    enucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpeguvghm
    ihesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:I36yYvd-pd361OCqBHhIaxJQP1JsJPDj2YoC4whetfLpwQQSXZMbuw>
    <xmx:I36yYoO9gInvnQXHTiyvzV1GhbbGbDWdACX-I0C46BMLqxwLfe9RXg>
    <xmx:I36yYhn_vgVL3gBf6nn9oevYwyF6WQ7JmbdyoMkWQ5M1nzp8NJ93oQ>
    <xmx:JH6yYkCiWutpEP6hBtUCK6K6sMOsaQDo4foIUszzrfwSO359kFb2Fw>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH v4] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Tue, 21 Jun 2022 22:27:26 -0400
Message-Id: <20220622022726.2538-1-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will not prevent further
calls from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.
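As a toy model of the bookkeeping above (hypothetical code, not the kernel
implementation), the per-entry "being removed" flags ensure each index is
submitted for unmap at most once, even when ranges overlap; the `submitted`
counter and `unmap_range` helper here are illustrative stand-ins:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PAGES 8

static int submitted[MAX_PAGES]; /* counts unmap submissions per page */

/* Walk the requested range, skipping "holes" of pages already queued for
 * removal, and mark each page before submitting it exactly once. */
static void unmap_range(bool *being_removed, int offset, int pages)
{
	while (pages) {
		while (pages && being_removed[offset]) {
			offset++;
			pages--;
		}
		int range = 0;
		while (range < pages && !being_removed[offset + range]) {
			being_removed[offset + range] = true;
			range++;
		}
		for (int i = 0; i < range; i++)
			submitted[offset + i]++; /* stand-in for the real unmap call */
		offset += range;
		pages -= range;
	}
}
```

A second, overlapping request then becomes a no-op for the already-marked
pages, which is why late completion of the async unmap cannot cause a
double unmap.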

It is also necessary to guard against reentrancy in gntdev_map_put(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.
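The reentrancy guard can be sketched in userspace C (a hypothetical model
using plain C11 atomics, not the kernel's refcount_t code): the final put
re-arms the count to 1 before invoking the possibly re-entrant unmap path,
so a recursive put returns early and recursion depth is bounded at one.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct map {
	atomic_int users;
	bool freed;
	bool unmap_reenters; /* simulate the unmap path calling put again */
};

static void map_put(struct map *m);

static void unmap_all(struct map *m)
{
	if (m->unmap_reenters) {
		/* The async completion path takes and drops its own reference. */
		atomic_fetch_add(&m->users, 1);
		map_put(m); /* users > 1 here, so this returns early */
	}
}

static void map_put(struct map *m)
{
	if (atomic_fetch_sub(&m->users, 1) != 1)
		return; /* not the last reference */

	/* Last reference: re-arm the count so unmap_all() cannot free us. */
	atomic_store(&m->users, 1);
	unmap_all(m);

	if (atomic_fetch_sub(&m->users, 1) != 1)
		return; /* the unmap path still holds a reference */

	m->freed = true; /* stand-in for gntdev_free_map() */
}
```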

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/xen/gntdev-common.h |   7 ++
 drivers/xen/gntdev.c        | 157 ++++++++++++++++++++++++------------
 2 files changed, 113 insertions(+), 51 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 20d7d059dadb..40ef379c28ab 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -16,6 +16,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
 #include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -56,6 +57,7 @@ struct gntdev_grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
 
@@ -73,6 +75,11 @@ struct gntdev_grant_map {
 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
 	xen_pfn_t *frames;
 #endif
+
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in __unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 59ffea800079..4b56c39f766d 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -60,10 +61,11 @@ module_param(limit, uint, 0644);
 MODULE_PARM_DESC(limit,
 	"Maximum number of grants that may be mapped by one mapping request");
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-			     int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+			      int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 	kvfree(map->unmap_ops);
 	kvfree(map->kmap_ops);
 	kvfree(map->kunmap_ops);
+	kvfree(map->being_removed);
 	kfree(map);
 }
 
@@ -140,10 +143,13 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	add->unmap_ops = kvmalloc_array(count, sizeof(add->unmap_ops[0]),
 					GFP_KERNEL);
 	add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 	if (use_ptemod) {
 		add->kmap_ops   = kvmalloc_array(count, sizeof(add->kmap_ops[0]),
@@ -250,9 +256,36 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 	if (!refcount_dec_and_test(&map->users))
 		return;
 
-	if (map->pages && !use_ptemod)
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.  Do NOT use refcount_inc() here, as it will detect that
+		 * the reference count is zero and WARN().
+		 */
+		refcount_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
 		unmap_grant_pages(map, 0, map->count);
 
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
+
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
 		evtchn_put(map->notify.event);
@@ -283,6 +316,7 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -331,97 +365,116 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 			map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			       int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
-
-	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
-		int pgno = (map->notify.addr >> PAGE_SHIFT);
-		if (pgno >= offset && pgno < offset + pages) {
-			/* No need for kmap, pages are in lowmem */
-			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
-			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
-			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
-		}
-	}
-
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
-
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
+	unsigned int i;
+	struct gntdev_grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
 
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
 		pr_debug("unmap handle=%d st=%d\n",
 			map->unmap_ops[offset+i].handle,
 			map->unmap_ops[offset+i].status);
 		map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
 		if (use_ptemod) {
-			if (map->kunmap_ops[offset+i].status)
-				err = -EINVAL;
+			WARN_ON(map->kunmap_ops[offset+i].status);
 			pr_debug("kunmap handle=%u st=%d\n",
 				 map->kunmap_ops[offset+i].handle,
 				 map->kunmap_ops[offset+i].status);
 			map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
 		}
 	}
-	return err;
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
+
+	/* Release reference taken by __unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			       int pages)
+{
+	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
+		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
+		if (pgno >= offset && pgno < offset + pages) {
+			/* No need for kmap, pages are in lowmem */
+			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
+			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
+			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
+		}
+	}
+
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
+
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			     int pages)
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			      int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages &&
-		       map->unmap_ops[offset].handle == INVALID_GRANT_HANDLE) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset + range].handle ==
-			    INVALID_GRANT_HANDLE)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -473,7 +526,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 	struct gntdev_grant_map *map =
 		container_of(mn, struct gntdev_grant_map, notifier);
 	unsigned long mstart, mend;
-	int err;
 
 	if (!mmu_notifier_range_blockable(range))
 		return false;
@@ -494,10 +546,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			range->start, range->end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 
 	return true;
 }
@@ -985,6 +1036,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	if (use_ptemod && map->vma)
 		goto unlock_out;
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 04:26:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 04:26:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353446.580375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3rwf-0007N7-5o; Wed, 22 Jun 2022 04:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353446.580375; Wed, 22 Jun 2022 04:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3rwf-0007N0-2C; Wed, 22 Jun 2022 04:26:25 +0000
Received: by outflank-mailman (input) for mailman id 353446;
 Wed, 22 Jun 2022 04:26:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3rwd-0007Mp-PK; Wed, 22 Jun 2022 04:26:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3rwd-00037D-KK; Wed, 22 Jun 2022 04:26:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3rwd-00051k-57; Wed, 22 Jun 2022 04:26:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3rwd-00060r-4Q; Wed, 22 Jun 2022 04:26:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BPdm8AiVa/jSzSEIsEKBhE/CR+FMvqQ/hFGKpbE9R+A=; b=AbstYl9qqi3TcI9Im5ymkNOndc
	nluME6WiqE5FtiT412QqrO7IXGA/EQHlhEnIqJ+ZSj5il1tIfwDu4hV0vez+GUY2OAUNaFF0jTuti
	eI4jIGwCw16NMQHNdjrYjjCFjN0CKApOtSASWJIvxdB3grL6bbK/cox13y1QntcG1xXQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171304-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171304: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b97243dea3c95ad923fa4ca190940158209e8384
X-Osstest-Versions-That:
    ovmf=cfe165140a7c140c2d2f382113abd6e9ac89ce77
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 04:26:23 +0000

flight 171304 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171304/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b97243dea3c95ad923fa4ca190940158209e8384
baseline version:
 ovmf                 cfe165140a7c140c2d2f382113abd6e9ac89ce77

Last test of basis   171298  2022-06-21 04:40:36 Z    0 days
Testing same since   171304  2022-06-22 01:40:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Heng Luo <heng.luo@intel.com>
  Luo, Heng <heng.luo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cfe165140a..b97243dea3  b97243dea3c95ad923fa4ca190940158209e8384 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 06:38:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 06:38:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353468.580408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003fX-VS; Wed, 22 Jun 2022 06:38:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353468.580408; Wed, 22 Jun 2022 06:38:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003d0-Mq; Wed, 22 Jun 2022 06:38:48 +0000
Received: by outflank-mailman (input) for mailman id 353468;
 Wed, 22 Jun 2022 06:38:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3u0l-0003Ka-LP
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 06:38:47 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f655841f-f1f5-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 08:38:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2FB3421BD0;
 Wed, 22 Jun 2022 06:38:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BFDE2134A9;
 Wed, 22 Jun 2022 06:38:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ECZ+LfO4smKNUAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 06:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f655841f-f1f5-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655879924; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m9f7VaM7LHywgL8edWFIK8hnsYgPrl1K1+9FwdnrA7I=;
	b=vWn95FYtCaOSmLs/qVgOFeVWZTPG7+811NqLHkzlTvdHZAEM+uKhWNhSIzKroY9Tx4StIW
	9LRziyk8K/BhqzQQjfPycSfV65ys+LzcM05IUG2+GeA5wdikMrBaGOgvb25qF1yd5sYE4g
	biRC2G6v9qRHicsZBZzi4xvaKXdrFeE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Russell King <linux@armlinux.org.uk>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-arm-kernel@lists.infradead.org,
	Viresh Kumar <viresh.kumar@linaro.org>
Subject: [PATCH v3 3/3] xen: don't require virtio with grants for non-PV guests
Date: Wed, 22 Jun 2022 08:38:38 +0200
Message-Id: <20220622063838.8854-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220622063838.8854-1-jgross@suse.com>
References: <20220622063838.8854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
Xen grant mappings") introduced a new requirement for using virtio
devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
feature.

This is an undue requirement for non-PV guests, as those can be operated
with existing backends without any problem, as long as those backends
are running in dom0.

By default, allow virtio devices without grant support for non-PV
guests.

On Arm, require VIRTIO_F_ACCESS_PLATFORM only for devices that are
listed in the device tree as using grants.

Add a new config item to always force use of grants for virtio.
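The resulting selection logic can be summarized with a small sketch
(hypothetical names loosely mirroring the patch, not the kernel code):
PV guests always require restricted memory access, while other guests
require it only when the new force-grant option is enabled.

```c
#include <assert.h>
#include <stdbool.h>

typedef bool (*mem_acc_cb)(void);

static bool require_restricted(void)  { return true;  }
static bool optional_restricted(void) { return false; }

/* Pick which memory-access callback to register, based on guest type
 * and the (illustrative) force-grant configuration flag. */
static mem_acc_cb pick_cb(bool is_pv, bool force_grant)
{
	/* PV guests can't operate virtio devices without grants. */
	if (is_pv || force_grant)
		return require_restricted;
	return optional_restricted;
}
```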

Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- remove command line parameter (Christoph Hellwig)
V3:
- rebase to callback method
---
 arch/arm/xen/enlighten.c     |  4 +++-
 arch/x86/xen/enlighten_hvm.c |  4 +++-
 arch/x86/xen/enlighten_pv.c  |  5 ++++-
 drivers/xen/Kconfig          |  9 +++++++++
 drivers/xen/grant-dma-ops.c  | 10 ++++++++++
 include/xen/xen-ops.h        |  6 ++++++
 include/xen/xen.h            |  8 --------
 7 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 1f9c3ba32833..93c8ccbf2982 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -34,6 +34,7 @@
 #include <linux/timekeeping.h>
 #include <linux/timekeeper_internal.h>
 #include <linux/acpi.h>
+#include <linux/virtio_anchor.h>
 
 #include <linux/mm.h>
 
@@ -443,7 +444,8 @@ static int __init xen_guest_init(void)
 	if (!xen_domain())
 		return 0;
 
-	xen_set_restricted_virtio_memory_access();
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO))
+		virtio_set_mem_acc_cb(xen_virtio_mem_acc);
 
 	if (!acpi_disabled)
 		xen_acpi_guest_init();
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 8b71b1dd7639..28762f800596 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -4,6 +4,7 @@
 #include <linux/cpu.h>
 #include <linux/kexec.h>
 #include <linux/memblock.h>
+#include <linux/virtio_anchor.h>
 
 #include <xen/features.h>
 #include <xen/events.h>
@@ -195,7 +196,8 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_pv_domain())
 		return;
 
-	xen_set_restricted_virtio_memory_access();
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
+		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 
 	init_hvm_pv_info();
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c..5aaae8a77f55 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -31,6 +31,7 @@
 #include <linux/gfp.h>
 #include <linux/edd.h>
 #include <linux/reboot.h>
+#include <linux/virtio_anchor.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -109,7 +110,9 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
 static void __init xen_pv_init_platform(void)
 {
-	xen_set_restricted_virtio_memory_access();
+	/* PV guests can't operate virtio devices without grants. */
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO))
+		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 
 	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
 
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index bfd5f4f706bc..a65bd92121a5 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -355,4 +355,13 @@ config XEN_VIRTIO
 
 	  If in doubt, say n.
 
+config XEN_VIRTIO_FORCE_GRANT
+	bool "Require Xen virtio support to use grants"
+	depends on XEN_VIRTIO
+	help
+	  Require virtio for Xen guests to use grant mappings.
+	  This will avoid the need to give the backend the right to map all
+	  of the guest memory. This will need support on the backend side
+	  (e.g. qemu or kernel, depending on the virtio device types used).
+
 endmenu
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index fc0142484001..8973fc1e9ccc 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -12,6 +12,8 @@
 #include <linux/of.h>
 #include <linux/pfn.h>
 #include <linux/xarray.h>
+#include <linux/virtio_anchor.h>
+#include <linux/virtio.h>
 #include <xen/xen.h>
 #include <xen/xen-ops.h>
 #include <xen/grant_table.h>
@@ -287,6 +289,14 @@ bool xen_is_grant_dma_device(struct device *dev)
 	return has_iommu;
 }
 
+bool xen_virtio_mem_acc(struct virtio_device *dev)
+{
+	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
+		return true;
+
+	return xen_is_grant_dma_device(dev->dev.parent);
+}
+
 void xen_grant_setup_dma_ops(struct device *dev)
 {
 	struct xen_grant_dma_data *data;
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 80546960f8b7..98c399a960a3 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -5,6 +5,7 @@
 #include <linux/percpu.h>
 #include <linux/notifier.h>
 #include <linux/efi.h>
+#include <linux/virtio_anchor.h>
 #include <xen/features.h>
 #include <asm/xen/interface.h>
 #include <xen/interface/vcpu.h>
@@ -217,6 +218,7 @@ static inline void xen_preemptible_hcall_end(void) { }
 #ifdef CONFIG_XEN_GRANT_DMA_OPS
 void xen_grant_setup_dma_ops(struct device *dev);
 bool xen_is_grant_dma_device(struct device *dev);
+bool xen_virtio_mem_acc(struct virtio_device *dev);
 #else
 static inline void xen_grant_setup_dma_ops(struct device *dev)
 {
@@ -225,6 +227,10 @@ static inline bool xen_is_grant_dma_device(struct device *dev)
 {
 	return false;
 }
+static inline bool xen_virtio_mem_acc(struct virtio_device *dev)
+{
+	return false;
+}
 #endif /* CONFIG_XEN_GRANT_DMA_OPS */
 
 #endif /* INCLUDE_XEN_OPS_H */
diff --git a/include/xen/xen.h b/include/xen/xen.h
index ac5a144c6a65..a99bab817523 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
-#include <linux/virtio_anchor.h>
-
-static inline void xen_set_restricted_virtio_memory_access(void)
-{
-	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
-		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
-}
-
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
 int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
 void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 06:38:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 06:38:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353467.580399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003Uu-JM; Wed, 22 Jun 2022 06:38:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353467.580399; Wed, 22 Jun 2022 06:38:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003UX-Bk; Wed, 22 Jun 2022 06:38:48 +0000
Received: by outflank-mailman (input) for mailman id 353467;
 Wed, 22 Jun 2022 06:38:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3u0l-0003Ka-47
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 06:38:47 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5e3d974-f1f5-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 08:38:43 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 848C021BC4;
 Wed, 22 Jun 2022 06:38:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EBDE3134A9;
 Wed, 22 Jun 2022 06:38:42 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id kIg2OPK4smKNUAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 06:38:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5e3d974-f1f5-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655879923; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JrITqNYCJEt3p9uYO0wGQLTJBftwCzF3rWhJ29DtcVs=;
	b=LEA/Lbcr+fb+XPiMkM1yGYxA5BbKwaxYNMfiUx71lus0oa+mBHLcgLZJythOp+jufQvJfE
	rRNiACi9JjKfiX9SUsk2QkkTNLoBd1ZQdhkjglt2Stw5sRmEk+20hiki1/6CwdqGn7qXMU
	hU1vE+Rn+gPHaC8ciSxiI7YuVQLY9xI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-s390@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH v3 1/3] virtio: replace restricted mem access flag with callback
Date: Wed, 22 Jun 2022 08:38:36 +0200
Message-Id: <20220622063838.8854-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220622063838.8854-1-jgross@suse.com>
References: <20220622063838.8854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having a global flag to require restricted memory access
for all virtio devices, introduce a callback which can select that
requirement on a per-device basis.

For convenience, add a common function that always returns true, which
can be used for cases like SEV.

By default, use a callback that always returns false.

As the callback needs to be set already in early init code, add a
virtio anchor which is built in whenever virtio is enabled.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/s390/mm/init.c              |  4 ++--
 arch/x86/mm/mem_encrypt_amd.c    |  4 ++--
 drivers/virtio/Kconfig           |  4 ++++
 drivers/virtio/Makefile          |  1 +
 drivers/virtio/virtio.c          |  4 ++--
 drivers/virtio/virtio_anchor.c   | 18 ++++++++++++++++++
 include/linux/platform-feature.h |  6 +-----
 include/linux/virtio_anchor.h    | 19 +++++++++++++++++++
 include/xen/xen.h                |  4 ++--
 9 files changed, 51 insertions(+), 13 deletions(-)
 create mode 100644 drivers/virtio/virtio_anchor.c
 create mode 100644 include/linux/virtio_anchor.h

diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 6a0ac00d5a42..4a154a084966 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -31,7 +31,6 @@
 #include <linux/cma.h>
 #include <linux/gfp.h>
 #include <linux/dma-direct.h>
-#include <linux/platform-feature.h>
 #include <asm/processor.h>
 #include <linux/uaccess.h>
 #include <asm/pgalloc.h>
@@ -48,6 +47,7 @@
 #include <asm/kasan.h>
 #include <asm/dma-mapping.h>
 #include <asm/uv.h>
+#include <linux/virtio_anchor.h>
 #include <linux/virtio_config.h>
 
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(".bss..swapper_pg_dir");
@@ -175,7 +175,7 @@ static void pv_init(void)
 	if (!is_prot_virt_guest())
 		return;
 
-	platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 
 	/* make sure bounce buffers are shared */
 	swiotlb_init(true, SWIOTLB_FORCE | SWIOTLB_VERBOSE);
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index f6d038e2cd8e..97452688f99f 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -20,8 +20,8 @@
 #include <linux/bitops.h>
 #include <linux/dma-mapping.h>
 #include <linux/virtio_config.h>
+#include <linux/virtio_anchor.h>
 #include <linux/cc_platform.h>
-#include <linux/platform-feature.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -245,7 +245,7 @@ void __init sev_setup_arch(void)
 	swiotlb_adjust_size(size);
 
 	/* Set restricted memory access for virtio. */
-	platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 }
 
 static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index a6dc8b5846fe..ce93966575a1 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -1,6 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0-only
+config VIRTIO_ANCHOR
+	bool
+
 config VIRTIO
 	tristate
+	select VIRTIO_ANCHOR
 	help
 	  This option is selected by any driver which implements the virtio
 	  bus, such as CONFIG_VIRTIO_PCI, CONFIG_VIRTIO_MMIO, CONFIG_RPMSG
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 0a82d0873248..8e98d24917cc 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_VIRTIO) += virtio.o virtio_ring.o
+obj-$(CONFIG_VIRTIO_ANCHOR) += virtio_anchor.o
 obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio_pci_modern_dev.o
 obj-$(CONFIG_VIRTIO_PCI_LIB_LEGACY) += virtio_pci_legacy_dev.o
 obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index 6bace84ae37e..21e753fe1b50 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -2,10 +2,10 @@
 #include <linux/virtio.h>
 #include <linux/spinlock.h>
 #include <linux/virtio_config.h>
+#include <linux/virtio_anchor.h>
 #include <linux/module.h>
 #include <linux/idr.h>
 #include <linux/of.h>
-#include <linux/platform-feature.h>
 #include <uapi/linux/virtio_ids.h>
 
 /* Unique numbering for virtio devices. */
@@ -174,7 +174,7 @@ static int virtio_features_ok(struct virtio_device *dev)
 
 	might_sleep();
 
-	if (platform_has(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS)) {
+	if (virtio_check_mem_acc_cb(dev)) {
 		if (!virtio_has_feature(dev, VIRTIO_F_VERSION_1)) {
 			dev_warn(&dev->dev,
 				 "device must provide VIRTIO_F_VERSION_1\n");
diff --git a/drivers/virtio/virtio_anchor.c b/drivers/virtio/virtio_anchor.c
new file mode 100644
index 000000000000..4d6a5d269b55
--- /dev/null
+++ b/drivers/virtio/virtio_anchor.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/virtio.h>
+#include <linux/virtio_anchor.h>
+
+bool virtio_require_restricted_mem_acc(struct virtio_device *dev)
+{
+	return true;
+}
+EXPORT_SYMBOL_GPL(virtio_require_restricted_mem_acc);
+
+static bool virtio_no_restricted_mem_acc(struct virtio_device *dev)
+{
+	return false;
+}
+
+bool (*virtio_check_mem_acc_cb)(struct virtio_device *dev) =
+	virtio_no_restricted_mem_acc;
+EXPORT_SYMBOL_GPL(virtio_check_mem_acc_cb);
diff --git a/include/linux/platform-feature.h b/include/linux/platform-feature.h
index b2f48be999fa..6ed859928b97 100644
--- a/include/linux/platform-feature.h
+++ b/include/linux/platform-feature.h
@@ -6,11 +6,7 @@
 #include <asm/platform-feature.h>
 
 /* The platform features are starting with the architecture specific ones. */
-
-/* Used to enable platform specific DMA handling for virtio devices. */
-#define PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS	(0 + PLATFORM_ARCH_FEAT_N)
-
-#define PLATFORM_FEAT_N				(1 + PLATFORM_ARCH_FEAT_N)
+#define PLATFORM_FEAT_N				(0 + PLATFORM_ARCH_FEAT_N)
 
 void platform_set(unsigned int feature);
 void platform_clear(unsigned int feature);
diff --git a/include/linux/virtio_anchor.h b/include/linux/virtio_anchor.h
new file mode 100644
index 000000000000..432e6c00b3ca
--- /dev/null
+++ b/include/linux/virtio_anchor.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_VIRTIO_ANCHOR_H
+#define _LINUX_VIRTIO_ANCHOR_H
+
+#ifdef CONFIG_VIRTIO_ANCHOR
+struct virtio_device;
+
+bool virtio_require_restricted_mem_acc(struct virtio_device *dev);
+extern bool (*virtio_check_mem_acc_cb)(struct virtio_device *dev);
+
+static inline void virtio_set_mem_acc_cb(bool (*func)(struct virtio_device *))
+{
+	virtio_check_mem_acc_cb = func;
+}
+#else
+#define virtio_set_mem_acc_cb(func) do { } while (0)
+#endif
+
+#endif /* _LINUX_VIRTIO_ANCHOR_H */
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0780a81e140d..ac5a144c6a65 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,12 +52,12 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
-#include <linux/platform-feature.h>
+#include <linux/virtio_anchor.h>
 
 static inline void xen_set_restricted_virtio_memory_access(void)
 {
 	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
-		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
+		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 }
 
 #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 06:38:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 06:38:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353465.580386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0l-0003L5-S4; Wed, 22 Jun 2022 06:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353465.580386; Wed, 22 Jun 2022 06:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0l-0003Ky-Nr; Wed, 22 Jun 2022 06:38:47 +0000
Received: by outflank-mailman (input) for mailman id 353465;
 Wed, 22 Jun 2022 06:38:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3u0j-0003Kb-Vb
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 06:38:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5df2146-f1f5-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 08:38:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B7C881FA5C;
 Wed, 22 Jun 2022 06:38:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8B0E313AC7;
 Wed, 22 Jun 2022 06:38:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id UEeiIPO4smKNUAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 06:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5df2146-f1f5-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655879923; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+Mw0ys4QNDOfeayffGgMX8B88kpTSJxdLQDajglpGuY=;
	b=LCVs0MbvhQNUhVhcKw0Gx4BjRA6mFWwQEuAQj795WvsjaDx3XJz3qFtwMtTQPA5LbOyk87
	OoFglCNqL6Ouf6H3tU0YIMkaGx7B1rjpMdezz+TLJhEgpKvqmNmNXpBs3CC+xZlzR9/MnE
	T67fj5LW4wb7JP9B0hROZTNq/kZYhU4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH v3 2/3] kernel: remove platform_has() infrastructure
Date: Wed, 22 Jun 2022 08:38:37 +0200
Message-Id: <20220622063838.8854-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220622063838.8854-1-jgross@suse.com>
References: <20220622063838.8854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The only user of the platform_has() infrastructure has been removed,
so remove the whole feature.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS                            |  8 --------
 include/asm-generic/Kbuild             |  1 -
 include/asm-generic/platform-feature.h |  8 --------
 include/linux/platform-feature.h       | 15 --------------
 kernel/Makefile                        |  2 +-
 kernel/platform-feature.c              | 27 --------------------------
 6 files changed, 1 insertion(+), 60 deletions(-)
 delete mode 100644 include/asm-generic/platform-feature.h
 delete mode 100644 include/linux/platform-feature.h
 delete mode 100644 kernel/platform-feature.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 3cf9842d9233..1a800f6becd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15835,14 +15835,6 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/iio/chemical/plantower,pms7003.yaml
 F:	drivers/iio/chemical/pms7003.c
 
-PLATFORM FEATURE INFRASTRUCTURE
-M:	Juergen Gross <jgross@suse.com>
-S:	Maintained
-F:	arch/*/include/asm/platform-feature.h
-F:	include/asm-generic/platform-feature.h
-F:	include/linux/platform-feature.h
-F:	kernel/platform-feature.c
-
 PLDMFW LIBRARY
 M:	Jacob Keller <jacob.e.keller@intel.com>
 S:	Maintained
diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
index 8e47d483b524..302506bbc2a4 100644
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -44,7 +44,6 @@ mandatory-y += msi.h
 mandatory-y += pci.h
 mandatory-y += percpu.h
 mandatory-y += pgalloc.h
-mandatory-y += platform-feature.h
 mandatory-y += preempt.h
 mandatory-y += rwonce.h
 mandatory-y += sections.h
diff --git a/include/asm-generic/platform-feature.h b/include/asm-generic/platform-feature.h
deleted file mode 100644
index 4b0af3d51588..000000000000
--- a/include/asm-generic/platform-feature.h
+++ /dev/null
@@ -1,8 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_GENERIC_PLATFORM_FEATURE_H
-#define _ASM_GENERIC_PLATFORM_FEATURE_H
-
-/* Number of arch specific feature flags. */
-#define PLATFORM_ARCH_FEAT_N	0
-
-#endif /* _ASM_GENERIC_PLATFORM_FEATURE_H */
diff --git a/include/linux/platform-feature.h b/include/linux/platform-feature.h
deleted file mode 100644
index 6ed859928b97..000000000000
--- a/include/linux/platform-feature.h
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _PLATFORM_FEATURE_H
-#define _PLATFORM_FEATURE_H
-
-#include <linux/bitops.h>
-#include <asm/platform-feature.h>
-
-/* The platform features are starting with the architecture specific ones. */
-#define PLATFORM_FEAT_N				(0 + PLATFORM_ARCH_FEAT_N)
-
-void platform_set(unsigned int feature);
-void platform_clear(unsigned int feature);
-bool platform_has(unsigned int feature);
-
-#endif /* _PLATFORM_FEATURE_H */
diff --git a/kernel/Makefile b/kernel/Makefile
index a7e1f49ab2b3..318789c728d3 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -7,7 +7,7 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o softirq.o resource.o \
 	    sysctl.o capability.o ptrace.o user.o \
 	    signal.o sys.o umh.o workqueue.o pid.o task_work.o \
-	    extable.o params.o platform-feature.o \
+	    extable.o params.o \
 	    kthread.o sys_ni.o nsproxy.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o smpboot.o ucount.o regset.o
diff --git a/kernel/platform-feature.c b/kernel/platform-feature.c
deleted file mode 100644
index cb6a6c3e4fed..000000000000
--- a/kernel/platform-feature.c
+++ /dev/null
@@ -1,27 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-
-#include <linux/bitops.h>
-#include <linux/cache.h>
-#include <linux/export.h>
-#include <linux/platform-feature.h>
-
-#define PLATFORM_FEAT_ARRAY_SZ  BITS_TO_LONGS(PLATFORM_FEAT_N)
-static unsigned long __read_mostly platform_features[PLATFORM_FEAT_ARRAY_SZ];
-
-void platform_set(unsigned int feature)
-{
-	set_bit(feature, platform_features);
-}
-EXPORT_SYMBOL_GPL(platform_set);
-
-void platform_clear(unsigned int feature)
-{
-	clear_bit(feature, platform_features);
-}
-EXPORT_SYMBOL_GPL(platform_clear);
-
-bool platform_has(unsigned int feature)
-{
-	return test_bit(feature, platform_features);
-}
-EXPORT_SYMBOL_GPL(platform_has);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 06:38:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 06:38:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353466.580393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003Oa-9f; Wed, 22 Jun 2022 06:38:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353466.580393; Wed, 22 Jun 2022 06:38:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u0m-0003Nm-0I; Wed, 22 Jun 2022 06:38:48 +0000
Received: by outflank-mailman (input) for mailman id 353466;
 Wed, 22 Jun 2022 06:38:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3u0k-0003Ka-J4
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 06:38:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5a50457-f1f5-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 08:38:43 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E416D1FA12;
 Wed, 22 Jun 2022 06:38:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 31A45134A9;
 Wed, 22 Jun 2022 06:38:42 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id gMq2CvK4smKNUAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 06:38:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5a50457-f1f5-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655879922; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=QMRrUcaW7noQ1pULOfC6DTDxjpRydqT5qx5KdCmqN2s=;
	b=F2GAX4tgcaaJK09IKfBFeDbtHvOq+IkDEU7l7PrA3RhoibYo+uttdFz2t+qPZAaQOwDw3A
	fHEDcSKePj6ww5nUqqNiqcxwleIRpxn45rgsUtNEvK5o15XMISBUcJfW8fMTMeto1TiYz1
	6avqbbonITmQT2qBICWookqie0msOhY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-s390@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-arch@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Russell King <linux@armlinux.org.uk>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 0/3] virtio: support requiring restricted access per device
Date: Wed, 22 Jun 2022 08:38:35 +0200
Message-Id: <20220622063838.8854-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of an all-or-nothing approach, add support for requiring
restricted memory access on a per-device basis.

Changes in V3:
- new patches 1 + 2
- basically complete rework of patch 3

Juergen Gross (3):
  virtio: replace restricted mem access flag with callback
  kernel: remove platform_has() infrastructure
  xen: don't require virtio with grants for non-PV guests

 MAINTAINERS                            |  8 --------
 arch/arm/xen/enlighten.c               |  4 +++-
 arch/s390/mm/init.c                    |  4 ++--
 arch/x86/mm/mem_encrypt_amd.c          |  4 ++--
 arch/x86/xen/enlighten_hvm.c           |  4 +++-
 arch/x86/xen/enlighten_pv.c            |  5 ++++-
 drivers/virtio/Kconfig                 |  4 ++++
 drivers/virtio/Makefile                |  1 +
 drivers/virtio/virtio.c                |  4 ++--
 drivers/virtio/virtio_anchor.c         | 18 +++++++++++++++++
 drivers/xen/Kconfig                    |  9 +++++++++
 drivers/xen/grant-dma-ops.c            | 10 ++++++++++
 include/asm-generic/Kbuild             |  1 -
 include/asm-generic/platform-feature.h |  8 --------
 include/linux/platform-feature.h       | 19 ------------------
 include/linux/virtio_anchor.h          | 19 ++++++++++++++++++
 include/xen/xen-ops.h                  |  6 ++++++
 include/xen/xen.h                      |  8 --------
 kernel/Makefile                        |  2 +-
 kernel/platform-feature.c              | 27 --------------------------
 20 files changed, 84 insertions(+), 81 deletions(-)
 create mode 100644 drivers/virtio/virtio_anchor.c
 delete mode 100644 include/asm-generic/platform-feature.h
 delete mode 100644 include/linux/platform-feature.h
 create mode 100644 include/linux/virtio_anchor.h
 delete mode 100644 kernel/platform-feature.c

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 06:42:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 06:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353494.580430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u4O-0006X5-Iw; Wed, 22 Jun 2022 06:42:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353494.580430; Wed, 22 Jun 2022 06:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3u4O-0006Wy-G2; Wed, 22 Jun 2022 06:42:32 +0000
Received: by outflank-mailman (input) for mailman id 353494;
 Wed, 22 Jun 2022 06:42:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3u4N-0006Wo-FV; Wed, 22 Jun 2022 06:42:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3u4N-0005jB-CV; Wed, 22 Jun 2022 06:42:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3u4M-0006jc-QU; Wed, 22 Jun 2022 06:42:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3u4M-0005z2-Q2; Wed, 22 Jun 2022 06:42:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kpPD5n1OnQmZLC4AL4622RpnpQZ+na4htlLk/Vytzxg=; b=owz67hHpAkqucKK/q1dQLWdUYp
	LnpVGLt53f/91+NAPniU/AxXTu+qwFUBCMl0Gvuvif9zcYxD7ZOoQ73jSRS5Mu/1m7jVyNquLTWAA
	JG8jdg0SEmv2pRK6/kJRCP86TALQV3owmN46T6E9MVM6+Wwk2wEfqg4VwoUIOOVIlmPE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171302-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171302: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ca1fdab7fd27eb069df1384b2850dcd0c2bebe8d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 06:42:30 +0000

flight 171302 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171302/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ca1fdab7fd27eb069df1384b2850dcd0c2bebe8d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    3 days
Failing since        171280  2022-06-19 15:12:25 Z    2 days    9 attempts
Testing same since   171302  2022-06-22 00:12:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David Sterba <dsterba@suse.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2201 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 07:24:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 07:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353510.580440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3uin-0002KD-PY; Wed, 22 Jun 2022 07:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353510.580440; Wed, 22 Jun 2022 07:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3uin-0002K6-Mt; Wed, 22 Jun 2022 07:24:17 +0000
Received: by outflank-mailman (input) for mailman id 353510;
 Wed, 22 Jun 2022 07:24:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OJu0=W5=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o3uim-0002K0-Iy
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 07:24:16 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 521184bd-f1fc-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 09:24:15 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 i81-20020a1c3b54000000b0039c76434147so10518647wma.1
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 00:24:15 -0700 (PDT)
Received: from localhost.localdomain ([91.219.254.75])
 by smtp.gmail.com with ESMTPSA id
 k26-20020a7bc31a000000b00397623ff335sm20549059wmj.10.2022.06.22.00.24.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 00:24:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 521184bd-f1fc-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=lBC3vy2Dl55klC9bIEsto3pkIg2TsVYPIBOXZj2Bf9Q=;
        b=Ox4VR8a79ghIIO1n1nSPw5Mbk9cUVc5/gKojikMN6+9ugNvJ0XhGY0PHznM+HdS/n/
         1/wg4DzP+eV0RI2l9WhQqFnsTyGxnY96cGTmkM4fzGBhg3+3Bw6Y07Dz1qWRp4D9wu4y
         CXUen9vbUx0gHzkSnILaCBRI/ZYpdzfE8mXfX1eb/FbXCWeAoTBfN9i3eljN20Olo+t0
         ZYY9zSAS3t3O9WYeWRJFUOQfSkRyf6n+Se83VVxX9qpXrtJjxEy6Vn/ey86n9vTACg6M
         aPNNjFXb2VnLjv1j/PSHt8hkb9FG35ujqelqeNprevA/2IoiHwplkVOI+58ti+OYbXwC
         dN8Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=lBC3vy2Dl55klC9bIEsto3pkIg2TsVYPIBOXZj2Bf9Q=;
        b=dUj8mCS3mS8b11eYPi1E/pPZGU4NmOoD/wqN0cpXG6LeHyqBzEN3lm3JyQiHka45n4
         QuSCczZ9CZCtbkgjbzgAzSMo1nxyysNYz7OEK6IEHkXRrvZd85f11ZuIEEh/TdDyrQ9Y
         UZ5cxl7tLx2VKcAPCotueALMOq2CLqyVl2NHkEU44+fHWt2zdEKN0AJd4FKUOQTVESVk
         74zjkYzTTCV0YLdlxRntS96R4ocrsyW3qg4LvWvVz3ybtQbFG0MsU8e9iGSvaxrBKzOg
         Lfufn7ORLC+/a05mdSZZOVeGaE0t0vkB3tAud7sjcZpvcFcT0j4szv2ejs+BCb9fgvfz
         ZKdg==
X-Gm-Message-State: AJIora9bjGGdVfa985xZw8u8v/tPsmKnvGRdhszT+MK6ncK5Q5EwPQKZ
	nw1rYKj0XRLLq7EB6sj+qEYvxbwFIA8PMQ==
X-Google-Smtp-Source: AGRyM1tsPosM+3pdrvGa63BVwoDhTMKBekdBg1ykUpi8hLhvo9OLQPNLFP9jub9dKkJ0i7g1D+nt+w==
X-Received: by 2002:a05:600c:a182:b0:39e:f33a:a990 with SMTP id id2-20020a05600ca18200b0039ef33aa990mr2235694wmb.59.1655882654411;
        Wed, 22 Jun 2022 00:24:14 -0700 (PDT)
From: dmitry.semenets@gmail.com
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen: arm: Spin up CPU instead of PSCI CPU_OFF
Date: Wed, 22 Jun 2022 10:24:10 +0300
Message-Id: <20220622072410.87346-1-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Spin up the CPU with interrupts disabled instead of using PSCI CPU_OFF
in the halt and reboot procedures. Some platforms can't stop a CPU via
PSCI because the Trusted OS can't migrate execution to another CPU.

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 xen/arch/arm/shutdown.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 3dc6819d56..a9aea19e8e 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -8,7 +8,12 @@
 
 static void noreturn halt_this_cpu(void *arg)
 {
-    stop_cpu();
+    local_irq_disable();
+    /* Make sure the write happens before we sleep forever */
+    dsb(sy);
+    isb();
+    while ( 1 )
+        wfi();
 }
 
 void machine_halt(void)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:04:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:04:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353522.580455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vLa-0007Tq-7h; Wed, 22 Jun 2022 08:04:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353522.580455; Wed, 22 Jun 2022 08:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vLa-0007Tj-4O; Wed, 22 Jun 2022 08:04:22 +0000
Received: by outflank-mailman (input) for mailman id 353522;
 Wed, 22 Jun 2022 08:04:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3vLY-0007Td-GL
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:04:20 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2047.outbound.protection.outlook.com [40.107.21.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea9c8194-f201-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 10:04:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8950.eurprd04.prod.outlook.com (2603:10a6:10:2e1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Wed, 22 Jun
 2022 08:04:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea9c8194-f201-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KK6PNOcJDlgirF3tmBNq8FM7exXp98zNuqpEwA54+oCdNvV4YzHgydquKtyf2dUSDd3pSZ5KTYr9ie5OHPq0Cux4pvcl1TmrQLnnn9yPPmypalOqDiy/3FlhubdDIB/9wDdNAnNR20SVk0FB+LHD4FTbR/A8wVark4DbeNaokLd1ijnpBqlCGpU/Rl0yG7yCKZFzARClzPgbh/+BmAq+ymuxVdv/kRaUP/3BxfQWpfcnVM52xjNDGkCCIPI2vd6QIAyOuP5X/Idl010Uu+kK8EegMkcUWLhOP2fwa5bFoSFPWSlR7qmTXlX2wtgS8rUuD4gr5j1GP/jCOH37yZdyBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PIhfBpWI2qTPNsoFMcYpVGbr+wW25luuICYFqYxKsxo=;
 b=hqIce9Aikaxm8Cz9W+M85WGUNEJ5lm5dbRzuqeR9h+tXMgt2awKenGqbzmHVwAAHxhr494f0096iKrWJ9DUn7ftd7RJxr6r6nLYNmbT2ZP3FBPJnPL38wCEBGy6r4bzC4Tk316I1Zwz+BWrA9sNqDqOIUZ9BR544Os7s3jXd8ccSlN+5U+giSWrTo8sp/5FNVfWyV0diQwxFHA7R68GzGQPXxcQ8kBcS1hj4BZdd7ArpirGErwbWXAsLvV6USRYpaoHBrUxHrYrESHEEXZOQuywIURx3u2tFhbT+DOiOGG1+GX7xGiuMHHW2iFTBsuLKDpj0jrxqrwUS7sUZdIGTPw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PIhfBpWI2qTPNsoFMcYpVGbr+wW25luuICYFqYxKsxo=;
 b=vXBtt3ON5gxBAjrPbC7IesBu476GAwIUb5zjoaYIEkNpjNseUbgL1ByaykSJ5ItqVBSBlMXmJE7eayUmpbEafId0LBGef9zuGTuh6aox5aXX3Px37ZINn/3NqsVebEnQ1XrHG5rlgBROAcm2ZKAHCyiB29XcGTf4fo3Qreo6emzuuCsjh0ipAr7wRxyuDWxAK5J1jT4c43Rgs9Zl0xaxYTl9bvuGeJ61AZBlnZVTwDNUsYOV8+qFTSE9u+I2/7GhJIDPPONSP49F+NHN/hk4rqiL+QTfec4iatq72FdHE6t5XqX8kA0Rf1fRp9NR2l9CAI35pbZtURm536wOGR3mjw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ee15e94-f4a9-69f2-4c57-2e0cc9df8746@suse.com>
Date: Wed, 22 Jun 2022 10:04:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <Yqb9gKUMokLAots7@Air-de-Roger>
 <afa0a9e3-fd35-be38-427e-3389f4c3ca26@suse.com>
 <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
 <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
 <YqsUfH763oSchRdW@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YqsUfH763oSchRdW@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR01CA0072.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::49) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4131a68b-6531-4e90-b7a9-08da5425cdb9
X-MS-TrafficTypeDiagnostic: DU2PR04MB8950:EE_
X-Microsoft-Antispam-PRVS:
	<DU2PR04MB8950350169D43252D12265A8B3B29@DU2PR04MB8950.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4131a68b-6531-4e90-b7a9-08da5425cdb9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 08:04:16.9932
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fr5USLOFkEoOoon9gmBINJV03LB9eDSqt9bGd3/PemXoe/JAYkg1fB+OoTQApXB3NEKjL6VPlAhq6Ud92WdSpg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8950

On 16.06.2022 13:31, Roger Pau Monné wrote:
> On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
>> On 14.06.2022 11:38, Roger Pau Monné wrote:
>>> On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
>>>> On 14.06.2022 10:32, Roger Pau Monné wrote:
>>>>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
>>>>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
>>>>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
>>>>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>>>>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>>>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>>>>>>>>>> likely important to have all the output if the boot fails without
>>>>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>>>>>>>>>> other guests).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>>>>>>>>>> Dom0's kernel messages?
>>>>>>>>>>>>>
>>>>>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
>>>>>>>>>>>>> this request is something that came up internally.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
>>>>>>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
>>>>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>>>>>>>>>>>> triggered) output shouldn't be rate limited either.
>>>>>>>>>>>>
>>>>>>>>>>>> Which would raise the question of why we have log levels for non-guest
>>>>>>>>>>>> messages.
>>>>>>>>>>>
>>>>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>>>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
>>>>>>>>>>> expect not to lose _any_ non-guest log messages with level WARNING or
>>>>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
>>>>>>>>>>> since users might want to filter out DEBUG non-guest messages for
>>>>>>>>>>> example.
>>>>>>>>>>
>>>>>>>>>> It was me who was confused, because of the two log-everything variants
>>>>>>>>>> we have (console and serial). You're right that your change is unrelated
>>>>>>>>>> to log levels. However, when there are e.g. many warnings or when an
>>>>>>>>>> admin has lowered the log level, what you (would) do is effectively
>>>>>>>>>> force sync_console mode transiently (for a subset of messages, but
>>>>>>>>>> that's secondary, especially because the "forced" output would still
>>>>>>>>>> be waiting for earlier output to make it out).
>>>>>>>>>
>>>>>>>>> Right, it would have to wait for any previous output on the buffer to
>>>>>>>>> go out first.  In any case we can guarantee that no more output will
>>>>>>>>> be added to the buffer while Xen waits for it to be flushed.
>>>>>>>>>
>>>>>>>>> So for the hardware domain it might make sense to wait for the TX
>>>>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
>>>>>>>>> the hypercall.  That however could cause issues if guests manage to
>>>>>>>>> keep filling the buffer while the hardware domain is being preempted.
>>>>>>>>>
>>>>>>>>> Alternatively we could always reserve half of the buffer for the
>>>>>>>>> hardware domain, and allow it to be preempted while waiting for space
>>>>>>>>> (since it's guaranteed non-hardware domains won't be able to steal the
>>>>>>>>> allocation from the hardware domain).
>>>>>>>>
>>>>>>>> Getting complicated it seems. I have to admit that I wonder whether we
>>>>>>>> wouldn't be better off leaving the current logic as is.
>>>>>>>
>>>>>>> Another possible solution (more like a band aid) is to increase the
>>>>>>> buffer size from 4 pages to 8 or 16.  That would likely allow it to cope
>>>>>>> fine with the high throughput of boot messages.
>>>>>>
>>>>>> You mean the buffer whose size is controlled by serial_tx_buffer?
>>>>>
>>>>> Yes.
>>>>>
>>>>>> On
>>>>>> large systems one may want to simply make use of the command line
>>>>>> option then; I don't think the built-in default needs changing. Or
>>>>>> if so, then perhaps not statically at build time, but taking into
>>>>>> account system properties (like CPU count).
>>>>>
>>>>> So how about we use:
>>>>>
>>>>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
>>>>
>>>> That would _reduce_ size on small systems, wouldn't it? Originally
>>>> you were after increasing the default size. But if you had meant
>>>> max(), then I'd fear on very large systems this may grow a little
>>>> too large.
>>>
>>> See previous followup about my mistake of using min() instead of
>>> max().
>>>
>>> On a system with 512 CPUs that would be 512KB, I don't think that's a
>>> lot of memory, especially taking into account that a system with 512
>>> CPUs should have a matching amount of memory I would expect.
>>>
>>> It's true however that I very much doubt we would fill a 512K buffer,
>>> so limiting to 64K might be a sensible starting point?
>>
>> Yeah, 64k could be a value to compromise on. What total size of
>> output have you observed to trigger the making of this patch? Xen
>> alone doesn't even manage to fill 16k on most of my systems ...
> 
> I've tried on one of the affected systems now; it's an 8-CPU Kaby Lake
> at 3.5GHz, and it manages to fill the buffer while booting Linux.
> 
> My proposed formula won't fix this use case, so what about just
> bumping the buffer to 32K by default, which does fix it?

As said, suitably explained I could also agree with going to 64k. The
question though is to what extent 32k, 64k, or ...

> Or alternatively use the proposed formula, but adjust the buffer to be
> between [32K,64K].

... this formula would cover a wide range of contemporary systems.
Without such a formula I can't really see what good a bump would do, as then
many people may still find themselves in need of using the command
line option to put in place a larger buffer.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:22:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353534.580466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vcl-0001Sw-T8; Wed, 22 Jun 2022 08:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353534.580466; Wed, 22 Jun 2022 08:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vcl-0001Sp-P8; Wed, 22 Jun 2022 08:22:07 +0000
Received: by outflank-mailman (input) for mailman id 353534;
 Wed, 22 Jun 2022 08:22:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3vcl-0001Sf-Bi; Wed, 22 Jun 2022 08:22:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3vcl-000829-9f; Wed, 22 Jun 2022 08:22:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3vck-000325-Mx; Wed, 22 Jun 2022 08:22:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3vck-0001NM-MU; Wed, 22 Jun 2022 08:22:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zP1/7lktvRnDQHgRrXKBMSblc6DrESU/b96da7Urjk8=; b=H5sqgLu/6mdJgfiwFLVF/DRgFY
	oEJ6+0h42UpyYzHFMM53McIoXrsipASkbOivnTbmD9OoKts/vzBPUa/f1cXSO9BleK576Cc2aNEeJ
	+mj6NY3w73wagYxNkQKqToODJ9QVt/WprxueOeMTUU8aXH9n40mlRRfpAv+mFIbO4b9E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171303-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171303: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f200ff158d5abcb974a6b597a962b6b2fbea2b06
X-Osstest-Versions-That:
    qemuu=5cdcfd861e3cdb98d3239ba78c97a1a2b13d2a70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 08:22:06 +0000

flight 171303 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171303/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171301
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171301
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171301
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171301
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171301
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171301
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171301
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171301
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                f200ff158d5abcb974a6b597a962b6b2fbea2b06
baseline version:
 qemuu                5cdcfd861e3cdb98d3239ba78c97a1a2b13d2a70

Last test of basis   171301  2022-06-21 18:38:24 Z    0 days
Testing same since   171303  2022-06-22 01:08:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bin Meng <bmeng.cn@gmail.com>
  Idan Horowitz <idan.horowitz@gmail.com>
  Matheus Ferst <matheus.ferst@eldorado.org.br>
  Matheus Kowalczuk Ferst <matheus.ferst@eldorado.org.br>
  Nicholas Piggin <npiggin@gmail.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   5cdcfd861e..f200ff158d  f200ff158d5abcb974a6b597a962b6b2fbea2b06 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:31:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:31:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353544.580477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vlc-0002wL-Qi; Wed, 22 Jun 2022 08:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353544.580477; Wed, 22 Jun 2022 08:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vlc-0002wE-MT; Wed, 22 Jun 2022 08:31:16 +0000
Received: by outflank-mailman (input) for mailman id 353544;
 Wed, 22 Jun 2022 08:31:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Os2d=W5=citrix.com=prvs=1655f9567=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1o3vlb-0002w8-UN
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:31:16 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abf18351-f205-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 10:31:13 +0200 (CEST)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 04:31:01 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by BN8PR03MB4914.namprd03.prod.outlook.com (2603:10b6:408:7b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Wed, 22 Jun
 2022 08:31:00 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308%4]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abf18351-f205-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655886673;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=cTB+REf+EfWeu3MSb06QA/Lz0pdfHWUEBAN+bMFudIM=;
  b=Y8FqS3/CHcCqZhUw8Gmmwjjot5Sky20u9MFtmcbalgcMwupfD58YkdfC
   bZ6Ll8FizXLte8DBO/R+sxP9Qgml37pzhxQGqW05p1Nk5dSzIFEduupZB
   vJwfX20lb0dkFntbG/z4MrcuymwxN0+cSAxYAh1+sAW+wb59wNsCsVY3+
   8=;
X-IronPort-RemoteIP: 104.47.70.100
X-IronPort-MID: 76709548
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="asc'?scan'208,217";a="76709548"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XEfB5AvCQnUEUkwXPzPOB0M5oOEYFTnleu6kp2z/vPitKOD7yEx6fzP8Esp0sD9i3VsrMB3CnS237M+GkWOgntvHgmGtisQvP16FxjgcEBASARoH5c+Z32N1yNnSipORBpeKm1s1YvS9EPUZi2I8MMzKvvqg2Wp0ygHt3lYHf0oP+woyphwdwwbjuzA+nm85C0hhuJ9sCJ/+aekYQajQLdU5t41UaZDe22lX0iyv4PIt3YNP86ENeoZLuUhYdgH+mslm6fbqTFCHLH/5jEDDZ1AbIxngCZbYygjMAYKDKg1WYudTKBkd90qwmfLLRhdJtC9ANjA4h9kRKcJGgAJRuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G7N/xLNmQov+fUYvspN+f/o+l50VE6w16Sgm6l9cJlw=;
 b=e6l6nZwl9/2mCHFXD0/UBy+ywIRLjylk6InFXRjF5dqsKyR2/pITanscQjMdUepX/VbBB4pXKMYBK1P1Nd0C5OsnlyuY7oqrckOt1+3vTVzNQ+4fDFXSh1qwVcjchuc1x94MuI8/CkIq1sCedgiL32f9ocdGCOu1kE42EhIOO6ZGvDblC/RSTew0zdqMVSJSkIOyqazkLgSX5andC45GLsAHSB7217wdS7lw3LrErPa8v3c3KWfKdjykYuxdWvnV+DgMsDwK1Oj+myY6+VRUJnIxEb1pzS3rmD44nAE2a+z8YgrB1zyHE5JX9CQVTqns4t91Zw56U2jZqibCGS6j/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G7N/xLNmQov+fUYvspN+f/o+l50VE6w16Sgm6l9cJlw=;
 b=v1UqLfk4af9NmVLlKbLwHVm6I1JbKVA6ZtT2TIGqoIExcLIDqyARsW7lPErohZ+IEwrHE+KJvHdYSbjiGoGItA1B1ADOjEBYhbVgPryfcEyZ/avViNtGE7gBK7g9du9oHJTV0RwM4nFY1dH//HSCdqGLV/OOIKmUR83su53u3FQ=
From: George Dunlap <George.Dunlap@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "committers@xenproject.org"
	<committers@xenproject.org>
Subject: Re: disabling mercurial repositories
Thread-Topic: disabling mercurial repositories
Thread-Index: AQHYhXXQ19SGHBtNG0y4Kw3bcwMbNq1Z490AgAE2A4A=
Date: Wed, 22 Jun 2022 08:30:59 +0000
Message-ID: <BEBA205F-9621-46E0-AC5F-8D6054EC727A@citrix.com>
References: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
 <925e715d-df9f-4bb8-616c-389c5c58f479@citrix.com>
In-Reply-To: <925e715d-df9f-4bb8-616c-389c5c58f479@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cd9239bf-bd14-4dd3-b6fe-08da5429893d
x-ms-traffictypediagnostic: BN8PR03MB4914:EE_
x-microsoft-antispam-prvs:
 <BN8PR03MB4914D3B6AA0AD483C6D6A31199B29@BN8PR03MB4914.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/signed;
	boundary="Apple-Mail=_96CBDE59-F6D9-4B04-952A-2204E57A8D34";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd9239bf-bd14-4dd3-b6fe-08da5429893d
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jun 2022 08:30:59.8793
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: GtLGuNCA5HGJBmbYtPk2M2AHTcUQEMnzciCekywae9GO8DVRVJREKlnArgRBHXbzBjigCNnpp9ACj7DiJstAzCOpZy64DjE1GrvM+BvibZ4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB4914

--Apple-Mail=_96CBDE59-F6D9-4B04-952A-2204E57A8D34
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_75EF4269-EB0C-4811-AB8A-7C420775BBAD"


--Apple-Mail=_75EF4269-EB0C-4811-AB8A-7C420775BBAD
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
	charset=utf-8



> On 21 Jun 2022, at 15:01, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>
> On 21/06/2022 14:48, Roger Pau Monné wrote:
>> Hello,
>>
>> Last week we had a bit of an emergency when a web crawler started
>> indexing all our mercurial repositories on xenbits, which caused the
>> load on xenbits to go beyond what it can handle.
>>
>> As a temporary solution we decided to remove access to the mercurial
>> repositories. The contents there are AFAIK only of historical
>> interest, so we might consider removing access to them permanently.
>> This would however require migrating any repository we care about to
>> git.
>>
>> I would like an opinion from committers as well as the broader
>> community on whether shutting down the mercurial repositories and
>> migrating whatever we care about is appropriate. Otherwise we will
>> need to implement some throttling of mercurial accesses in order to
>> avoid overloading xenbits.
>
> IIRC, we'd mostly moved off hg onto git before moving to the Linux
> Foundation, where git became mandatory.  Hg hasn't been the primary
> dev tool for ages, and git has only become more ubiquitous in the
> meantime.
>
> I'd suggest keeping hgweb disabled for now and seeing if anyone
> complains.

+ 1

 -George
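
[Editorial note: purely as an illustration of the throttling Roger
mentions as the fallback option, a request limiter is commonly built as
a token bucket. Nothing like this is implied to exist on xenbits; all
names below are hypothetical.]

```python
import time


class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical, not xenbits code).

    `rate` tokens are refilled per second, up to `capacity`; each request
    consumes one token and is rejected when the bucket is empty, which caps
    sustained request throughput at `rate` per second while still allowing
    short bursts of up to `capacity` requests.
    """

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)  # start full: an initial burst is allowed
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A front end would call `allow()` once per incoming hgweb request and
return an error (e.g. HTTP 429) when it yields False.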

--Apple-Mail=_75EF4269-EB0C-4811-AB8A-7C420775BBAD--

--Apple-Mail=_96CBDE59-F6D9-4B04-952A-2204E57A8D34
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKy00MACgkQshXHp8eE
G+3Gvgf9FZgxoKQMWPPxROv+xWT8XEoSUNz0LKowajjGSCTycBnud4vC7voc+hQg
iBaeB/RIgFsCJrvBd9ctzs8ugys34AGehc8Q8pwFCMcdOSb15OtPX44j83ayY6oX
q/6WILb5Lkbl2Ftcavchq6ZIz8w41nM55DH1QeGuS/7X2EJewHA+XxjjJsON2TLC
yt3a+lgeHIfjUvkPyZkBEOMwX5xGSaXGU7Myx3kXPd4uI1xrna3o6A/oDrbaSLD/
T2uyXPjqy96nLclMwpZw/YQdWw56PzKkFADsc+CShhvQFH6rizYKbOhrb9SrFozF
SJKyMpM3eND0IJGuf/IkdtKsx8yprw==
=xUyX
-----END PGP SIGNATURE-----

--Apple-Mail=_96CBDE59-F6D9-4B04-952A-2204E57A8D34--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:41:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:41:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353554.580487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vva-0004Td-Tl; Wed, 22 Jun 2022 08:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353554.580487; Wed, 22 Jun 2022 08:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3vva-0004TW-Qz; Wed, 22 Jun 2022 08:41:34 +0000
Received: by outflank-mailman (input) for mailman id 353554;
 Wed, 22 Jun 2022 08:41:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3vvZ-0004TQ-Vj
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:41:33 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr20068.outbound.protection.outlook.com [40.107.2.68])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e1f53f7-f207-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 10:41:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB6332.eurprd04.prod.outlook.com (2603:10a6:10:d1::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 08:41:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:41:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e1f53f7-f207-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M1v1a6o+8crOkW3alnHwdaCgQIrWphjEELEGSt/N06BR85pE3ubIxQMPK6aSOHoqJl2EpiFudoULBTvDMR4QYi1sncNLBT2uJF64pTIdOLw7o3G0MKHvO/6SHmDrLGOpFQdj4BCpeQDFv4rucCSL+WvgWjRkpqUZJDBLJYKGwjV79Bp1lzCeb711iErJ3tYcjkHN48o2nHybqsrQwxjwRf64pvOnx2ch53H54fwOLUwj05i+V1CwiGHeKEc37NrUe4mKDEjpz8DFKm5CTgTq4NSY5LqZj9baTWU5wXQxbPMKZnEt6PbMOFFRj75hAlSpcE0lvWk5gWRD7W8VYTP1PQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WGTYiYmYf7ItaYZrWZ98gkYlE3bj2CRd0KuSig9IKF4=;
 b=Rlwn3sjTTeH4KiPUCMw4CypVdEZe29iGjmwiIk0E0Ajuy74lhmkHcnn7qWQvomra8fCzhZXId/uPpUQPRcKsjarQv1xIxgaJXwSkw7+a41PloMcen/AizppLvJR3/VjAv99ybdhHOKRQf5iIj8axu25Cdpi83dFnsqv4tts90mUiBGRltLlyYhS6geEM7Ek9sFXrLd7QUtdYQrVhn9LroveRCKLNspvuZitpNHOYjBQACAVZ36NacQOmbk7Wu+02K0oXq7jIU9vHgw4SyzCbKSOXgAh+IVsdTMHm89c08HbwLghBapD6drX5+E/yOBpDvpNP9zHr4/FwR36cILEkKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WGTYiYmYf7ItaYZrWZ98gkYlE3bj2CRd0KuSig9IKF4=;
 b=ETBdevqhDUJ6vT3tdaaLCbVskLFoD9dyoP7rEd4nK3YdG/0IPoRzkWGPE6vTOa7MG/Yi8OVSeecqMvyGbgBour6KKdm5V/sMPjIOi97yL5vY+jby6Mvv7730vaau+SbVCrX4LT7MF1EgIqQ+SVNtbWODmucC0oYnBk0t1ElQJ2qn4pWs3oE/4DpVO8YCs53v2v+Qka55ZhJ7+ALp/MRg6YjMgZ+V5QCh/Hc/iKs3PzCJ3tWwpRmfWcQXL2qZutki+wzip7k6qtlGce438hOhsxvde6ldQacBWwBEWeHXaxtWE9+ocEk1aFMFgl9PDfspYHnxiP7pH/zKC15krJUKFQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <aaea6105-e83d-feba-edf3-3d7e26b90769@suse.com>
Date: Wed, 22 Jun 2022 10:41:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: disabling mercurial repositories
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: committers@xenproject.org, xen-devel@lists.xenproject.org
References: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM5PR1001CA0044.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f209d3a-50e2-4a53-c103-08da542b007d
X-MS-TrafficTypeDiagnostic: DBBPR04MB6332:EE_
X-Microsoft-Antispam-PRVS:
	<DBBPR04MB6332AD66715F416952BE4066B3B29@DBBPR04MB6332.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f209d3a-50e2-4a53-c103-08da542b007d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 08:41:29.6477
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: v7rnLGMtsbCbqhQS83EBCivFEOOvR6gieM53rzwefHnWOw+1Rc0y8lq2mxl/bLYR3Bl3Nqc0RvD8zzomqxjs/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB6332

On 21.06.2022 15:48, Roger Pau Monné wrote:
> Last week we had a bit of an emergency when a web crawler started
> indexing all our mercurial repositories on xenbits, which caused the
> load on xenbits to go beyond what it can handle.
> 
> As a temporary solution we decided to remove access to the mercurial
> repositories, but their contents are AFAIK of historical interest
> only, so we might consider removing access to the mercurial
> repositories altogether.  This would however require migrating any
> repository we care about to git.
> 
> I would like an opinion from committers as well as the broader
> community on whether shutting down the mercurial repositories and
> migrating whatever we care about is appropriate.  Otherwise we will
> need to implement some throttling of mercurial accesses in order to
> avoid overloading xenbits.

While I wouldn't strictly mind its being shut off, or the disabling of
hgweb as was suggested in a reply, either would mean for me personally
that it would no longer be easy enough to warrant trying to hunt down
the origin of certain Linux-side aspects in the 2.6.18-xen tree.
Admittedly, my doing so has become increasingly rare over time ...

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:46:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:46:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353563.580499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3w0P-00057L-HP; Wed, 22 Jun 2022 08:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353563.580499; Wed, 22 Jun 2022 08:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3w0P-00057E-Di; Wed, 22 Jun 2022 08:46:33 +0000
Received: by outflank-mailman (input) for mailman id 353563;
 Wed, 22 Jun 2022 08:46:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3w0O-000578-DH
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:46:32 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02on062f.outbound.protection.outlook.com
 [2a01:111:f400:fe05::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cfa050a2-f207-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 10:46:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7661.eurprd04.prod.outlook.com (2603:10a6:102:e3::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 08:46:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:46:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfa050a2-f207-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RDQwKz6et6h36fGijV986VAEdDrKS/IZUuzxyzx8Q28jgWpIqVTbkVWfxxLMAD3HZIi5pqO7p1OKzMeApqhEaSNkQiNYj/IZmOpRvCIbdhxnvnXzg0p7GpNDg9SRLot58G+nVMJ1tEt8PH0OEz1oH38WRNT+7aFOPMatG23eJO9uWe1I7uiNXyMxZ2zjdv9giwwJRTKer8JeCZaA5Rvz/EjHZlEZJdQ3GbikI0h99B2vlJEJ4IvSOMtPk68tnL/R36x6YbGXSSTHeCmK2EYqUMcZs8TpnFBVgSc+j3ZDvKpsmhTc2fuRVlH6VCe7f/soH0pP1eMBuFIMGeLDov9Yug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xfexDT+BrAztnNIMygwZKhS3UlXhMWRKZIm9dzJOBJ8=;
 b=n3M8B2rEi3VTKSwKSm8rX96LNbQFDA1nIup4qP8gStFn4x9ia1zD+vcaY5fUgP81LPIYHZ/qPlaPl1YGDaFjoFAxjV9WlpN0Hi8EBOxM1j9fnH7nZ9lCDbRY5kPFV8XK1YPTgTFMENbzD2+4L3RjfX8mnmky2v0nA32i6Vr0fX6lefv/6SX8paWBtjMTA4xAxM5+LvFKq4+oB2Qua0EHP2vY9OdoGI2JjeN5HF7yXN2aUWuLawtWwTmiHygTUnTbet3CTGal4wZpGXFFSQWsiWu/MW+KshaLnMv3Oz3S0+xUDHEBciqoe7Sqq3gYxjx+TbldI0Q0jdFqPwRsATn5fw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xfexDT+BrAztnNIMygwZKhS3UlXhMWRKZIm9dzJOBJ8=;
 b=Q2S7k9yyAyq+eHXvuWQRrX8nYJwNdxEo8l6sRI0Nvnqvgz6ZPHpvt82fKJw4UwWCoaEW6Zc2ddRjVcZ9XUsXWiklnTozgOpiWjhhx93ywIW9XZHYNEzcU0+m6QnmIWIplSbvY2dE/4Ekq+fAsKrjXhumHMxB4fDBnsF6pzYLa6H/n87OroJgoLEUq7n3++PSVcG1xFamFO46QYyBWOsq3kGkIBqySCNEln6/H9Dhgu//uzReLG6yTNHFWeXnucXkx09Mjk7jAXHHIqp/mxnQ8XcIG9P1tVkVoQTED8dFdFBLLAKsrIO+10COVDLCp5403Un0i1qF1CKx6RRhjVrBZw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5a1eada8-90b6-f998-cb8d-6b0d1b781590@suse.com>
Date: Wed, 22 Jun 2022 10:46:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests
 using legacy SSBD
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220517153127.40276-1-roger.pau@citrix.com>
 <20220517153127.40276-4-roger.pau@citrix.com>
 <AS8PR08MB799195FC7D9949031F33802892AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB799195FC7D9949031F33802892AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8P189CA0040.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:20b:458::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: baabdc2a-615b-4ee0-7bee-08da542bb2ab
X-MS-TrafficTypeDiagnostic: PA4PR04MB7661:EE_
X-Microsoft-Antispam-PRVS:
	<PA4PR04MB7661AE0B1B167F74BDDE240AB3B29@PA4PR04MB7661.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: baabdc2a-615b-4ee0-7bee-08da542bb2ab
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 08:46:28.6131
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iFHUo4GnHSnMNQijSQeNie9yPM78W5eAs0PXLIYaPZhLKTS9Iu/N5ytaubivOsnQgCuqxdwxj0fz1fMQdF5bBg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7661

On 17.06.2022 05:26, Henry Wang wrote:
> It seems that this series [1] has been stale for more than a month,
> even though it appears to be properly reviewed and acked already.
> 
> From what Jan has replied to Roger and Andrew:
> "... this addition the series would now look to be ready to go in,
> I'd like to have some form of confirmation by you, Andrew, that
> you now view this as meeting the comments you gave on an earlier
> version."
> 
> So I guess this can be merged. Sending this as a gentle reminder for
> possible actions from Roger and Andrew. Thanks!

My view here remains as before - I'd prefer to avoid merging this
without at least informal agreement by Andrew.

> Also, I'm not sure why my acked-by for CHANGELOG.md is missing in
> patchwork; just in case, for the change to CHANGELOG.md in patch #3:
> 
> Acked-by: Henry Wang <Henry.Wang@arm.com>

At a guess, that might be because the earlier reply you sent was to
0/3, not 3/3.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 08:47:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 08:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353569.580510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3w10-0005dg-Rp; Wed, 22 Jun 2022 08:47:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353569.580510; Wed, 22 Jun 2022 08:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3w10-0005dZ-Ob; Wed, 22 Jun 2022 08:47:10 +0000
Received: by outflank-mailman (input) for mailman id 353569;
 Wed, 22 Jun 2022 08:47:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPuW=W5=citrix.com=prvs=165a25834=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3w0y-000578-R8
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:47:09 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4cb24b6-f207-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 10:47:07 +0200 (CEST)
Received: from mail-sn1anam02lp2045.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 04:47:02 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5450.namprd03.prod.outlook.com (2603:10b6:806:be::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 08:47:01 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:47:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4cb24b6-f207-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655887627;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=GMfMvkMpQPLVBYrmcJSMiy4wwMLRqvsXWZamxUW258s=;
  b=J5R1gargjo1Krb+sIZAVDl86WJ+R1GXDEDcHRLOa4nXem1sKMZkB7zJI
   WYJEv6EvrUsRoBAQVTIPBhXU/XwAcTScb2PwThjvRbeSi+tLiXpm+actw
   DlXv81GnSr0s7d5YyRWAXsrldVSOjOxJOeKVVb4CBehps2Khembpz6lIj
   k=;
X-IronPort-RemoteIP: 104.47.57.45
X-IronPort-MID: 73487308
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="scan'208";a="73487308"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XrljG9YRm2AexZzdgDQ4ehZSsUiT5XmOUEBzXCpV3ABY/s5lKiRtEeQ+eyO8lkWee6lL39aawESRS9PjP2yhD4jHkPgNKtpKV88/D7sZ2y/BoVh7W/F8u+PIeUj75RvfrBwkx8RYEnRvKS2JYHy2qsFpltUyh3KmQfpZRat2y3Bdr97yb8Nzkxgga1ttd55eGGnigUYPaioMEqu3PQ4arla4MCIXiXS7Ov39QP/jz8fe84cgYj2P9zEtN84EnDnxgxoqGEdC8+DOqem+Rus8mTpgNsjHbkqiQlrUU1tO0a14BZoK5VKmgt7YsGy88pOcW1xD7xpkm7i1CSSj1ry3Rw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GMfMvkMpQPLVBYrmcJSMiy4wwMLRqvsXWZamxUW258s=;
 b=dKx2ikpqN1R/18rhnFlhBYWxyp44ZxGt1u6u7wBtHPwH1zySIcI6wGOrxh1D+fe2rOACzdevX+iu4syYtw369bABvYTrwCAi3Jz3EPGxyxHhjiwukHO0RPJVG/4wq/xUZMuIIyjnjcsZlNwf1v4fAWNaF7GiD1fQybup3QdcTFN05tzP+yG+NxFnTok3z1U94bp10zV1S8qSXgRVAxOauZxgzUKgsMTn2C2L+1NVBkQ+7HUDpHCzld8NACw3k2waWAvfbJ+bgT+kfD4+P7GINaXulsOza7Eqsa459kAjOe3Xzm+WdzIdmdgeYR5FCMP4fLgc701c1w7fAGlLuwqhbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GMfMvkMpQPLVBYrmcJSMiy4wwMLRqvsXWZamxUW258s=;
 b=AGSw2m9zBRnLrx9pegOnGj2ZMAbK5OkOG661Q4NIs1B2F4W62hJOicE9wPoQKBhJAaagviXn3IBr1Rqff6qLAyi1V0DYxqLds3Wbk1m2ji2IGrCb6FCgvFLdWmwgnjLlaSqQkQ/FpvuaO0y0sQpPJLPvxsjlPxANPhNKzVQTQcE=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
CC: "committers@xenproject.org" <committers@xenproject.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: disabling mercurial repositories
Thread-Topic: disabling mercurial repositories
Thread-Index: AQHYhXXQrdr64fwrwEqfZHy5KYkmJq1bHNMAgAABhoA=
Date: Wed, 22 Jun 2022 08:47:00 +0000
Message-ID: <10a33bef-bcef-9ec4-5171-a579019a69f1@citrix.com>
References: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
 <aaea6105-e83d-feba-edf3-3d7e26b90769@suse.com>
In-Reply-To: <aaea6105-e83d-feba-edf3-3d7e26b90769@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 09ac4e29-f9a5-4974-6568-08da542bc5d6
x-ms-traffictypediagnostic: SA0PR03MB5450:EE_
x-microsoft-antispam-prvs:
 <SA0PR03MB5450FE3EB30DF4FDAAF81325BAB29@SA0PR03MB5450.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <67671BD8CEAF504C9F8B36D3939CDF13@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 09ac4e29-f9a5-4974-6568-08da542bc5d6
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jun 2022 08:47:00.5210
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: x/jCfv2buW+J80r5W7B8mIzpld9wxKGFSUr4fXMwgS4KKPLJofLUMrvjeAOTacPD62mF9s5FfWlO6UcwpWBY/yg8EKlDHo2FxwjXjK4pDQ0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5450

On 22/06/2022 09:41, Jan Beulich wrote:
> On 21.06.2022 15:48, Roger Pau Monné wrote:
>> Last week we had a bit of an emergency when a web crawler started
>> indexing all our mercurial repositories on xenbits, which caused the
>> load on xenbits to go beyond what it can handle.
>>
>> As a temporary solution we decided to remove access to mercurial
>> repositories, but the contents there are AFAIK only for historical
>> repositories, so we might consider completely removing access to
>> mercurial repositories.  This would however require migrating any
>> repository we care about to git.
>>
>> I would like an opinion from committers as well as the broad community
>> whether shutting down mercurial repositories and migrating whatever we
>> care about is appropriate.  Otherwise we will need to implement some
>> throttling of mercurial accesses in order to avoid overloading
>> xenbits.
> While I wouldn't strictly mind its shutting off or the disabling of
> hgweb as was suggested in a reply, either would mean to me personally
> that it wouldn't be easy enough anymore to warrant trying to hunt
> down the origin of certain Linux side aspects in the 2.6.18-xen tree.
> Admittedly me doing so has become increasingly rare over time ...

We could convert that into a git repo (probably a branch on an existing
Linux.git to save most of the conversion work) and make it available via
gitweb if it's still useful?

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:00:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353581.580521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wDN-0007H3-4o; Wed, 22 Jun 2022 08:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353581.580521; Wed, 22 Jun 2022 08:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wDN-0007Gw-1q; Wed, 22 Jun 2022 08:59:57 +0000
Received: by outflank-mailman (input) for mailman id 353581;
 Wed, 22 Jun 2022 08:59:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3wDL-0007Gq-Sj
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 08:59:56 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70087.outbound.protection.outlook.com [40.107.7.87])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad46ea0d-f209-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 10:59:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8851.eurprd04.prod.outlook.com (2603:10a6:20b:42e::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 08:59:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 08:59:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad46ea0d-f209-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5b788e1a-d872-e318-1be5-8640fe887b9d@suse.com>
Date: Wed, 22 Jun 2022 10:59:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Content-Language: en-US
To: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
Cc: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <cover.1652797713.git.matias.vara@vates.fr>
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0362.eurprd06.prod.outlook.com
 (2603:10a6:20b:460::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b7b09417-6db9-4261-057e-08da542d8f99
X-MS-TrafficTypeDiagnostic: AS8PR04MB8851:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b7b09417-6db9-4261-057e-08da542d8f99
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 08:59:48.7340
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /UlmdXPGlKeJXwokXCIOGjILTA4xh0lEMZU+OJa6FoVUVyxuIfIV9CYzcy9bg0HPAM+uoXXg0Sg1ynWxIGVmKA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8851

On 17.05.2022 16:33, Matias Ezequiel Vara Larsen wrote:
> @@ -287,6 +288,10 @@ static inline void vcpu_runstate_change(
>      }
>  
>      v->runstate.state = new_state;
> +
> +    // WIP: use a different interface
> +    runstate = (uint64_t*)v->stats.va;
> +    memcpy(runstate, &v->runstate.time[0], sizeof(v->runstate.time[0]));
>  }

One remark on top of what George has said: By exposing this information the
way you do, you allow updating and reading of it to be fully asynchronous.
That way a consumer may fetch inconsistent (partially updated) state (and
this would be even more so if further fields were added). For the data to
be useful, you need to add a mechanism for consumers to know when an update
is in progress, so they can wait and then retry. You'll find a number of
instances of such a mechanism in the code base.
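
Such a mechanism is typically a sequence counter: the producer makes a
version field odd while writing and even when done, and the consumer
retries until it sees the same even value before and after its read.
A minimal user-space sketch of that protocol (field names and layout are
illustrative only, not Xen's actual interface; a real shared-memory
implementation additionally needs memory barriers around the copies):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct shared_stats {
    volatile uint32_t version;     /* odd => update in progress */
    uint64_t runstate_time[4];
};

static void producer_update(struct shared_stats *s, const uint64_t t[4])
{
    s->version++;                  /* now odd: update in progress */
    /* a real implementation needs a write barrier here */
    memcpy(s->runstate_time, t, sizeof(s->runstate_time));
    /* ... and another write barrier here */
    s->version++;                  /* even again: update complete */
}

static void consumer_read(const struct shared_stats *s, uint64_t out[4])
{
    uint32_t v1, v2;
    do {
        v1 = s->version;
        /* a real implementation needs read barriers around the copy */
        memcpy(out, s->runstate_time, sizeof(s->runstate_time));
        v2 = s->version;
    } while (v1 != v2 || (v1 & 1)); /* retry on torn or in-progress read */
}
```

This is the same idea as the kernel's seqlock/seqcount pattern; without
it, a consumer copying multiple fields can observe a mix of old and new
values.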

In general I also have to admit that I'm not sure the exposed data really
qualifies as a "resource", and hence I'm not really convinced of your use
of the resource mapping interface as a vehicle.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:03:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353589.580532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wGo-0000GC-MO; Wed, 22 Jun 2022 09:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353589.580532; Wed, 22 Jun 2022 09:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wGo-0000G5-IP; Wed, 22 Jun 2022 09:03:30 +0000
Received: by outflank-mailman (input) for mailman id 353589;
 Wed, 22 Jun 2022 09:03:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sIgH=W5=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o3wGn-0000Fj-Kp
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:03:29 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2e596dc4-f20a-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 11:03:28 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id w20so26610113lfa.11
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 02:03:28 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 j5-20020a056512344500b0047f81438160sm547727lfr.112.2022.06.22.02.03.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 22 Jun 2022 02:03:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e596dc4-f20a-11ec-b725-ed86ccbb4733
X-Gm-Message-State: AJIora+m8Q8m3kTPwR9FmfnjmxFjoSFV1dpZNfn2JBWR+Rt1C8oaNYcM
	mNgEcXUUkZ03x1eXr2uAvD4=
X-Google-Smtp-Source: AGRyM1sukuX1vlem6ZPvfyVXPKMVb8vOqm4ADCvt+D12LzgOZZdGRu1UVCFtQUeaa8AXFbgvr2AMNg==
X-Received: by 2002:a05:6512:3f6:b0:47f:6dff:dab9 with SMTP id n22-20020a05651203f600b0047f6dffdab9mr1563227lfq.645.1655888607690;
        Wed, 22 Jun 2022 02:03:27 -0700 (PDT)
Subject: Re: [PATCH v3 3/3] xen: don't require virtio with grants for non-PV
 guests
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 linux-arm-kernel@lists.infradead.org, Viresh Kumar <viresh.kumar@linaro.org>
References: <20220622063838.8854-1-jgross@suse.com>
 <20220622063838.8854-4-jgross@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>
Date: Wed, 22 Jun 2022 12:03:25 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20220622063838.8854-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 22.06.22 09:38, Juergen Gross wrote:

Hello Juergen

> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
> Xen grant mappings") introduced a new requirement for using virtio
> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
> feature.
>
> This is an undue requirement for non-PV guests, as those can be operated
> with existing backends without any problem, as long as those backends
> are running in dom0.
>
> Per default allow virtio devices without grant support for non-PV
> guests.
>
> On Arm require VIRTIO_F_ACCESS_PLATFORM for devices having been listed
> in the device tree to use grants.
>
> Add a new config item to always force use of grants for virtio.
>
> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - remove command line parameter (Christoph Hellwig)
> V3:
> - rebase to callback method


Patch looks good, just one NIT ...


> ---
>   arch/arm/xen/enlighten.c     |  4 +++-
>   arch/x86/xen/enlighten_hvm.c |  4 +++-
>   arch/x86/xen/enlighten_pv.c  |  5 ++++-
>   drivers/xen/Kconfig          |  9 +++++++++
>   drivers/xen/grant-dma-ops.c  | 10 ++++++++++
>   include/xen/xen-ops.h        |  6 ++++++
>   include/xen/xen.h            |  8 --------
>   7 files changed, 35 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 1f9c3ba32833..93c8ccbf2982 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -34,6 +34,7 @@
>   #include <linux/timekeeping.h>
>   #include <linux/timekeeper_internal.h>
>   #include <linux/acpi.h>
> +#include <linux/virtio_anchor.h>
>   
>   #include <linux/mm.h>
>   
> @@ -443,7 +444,8 @@ static int __init xen_guest_init(void)
>   	if (!xen_domain())
>   		return 0;
>   
> -	xen_set_restricted_virtio_memory_access();
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO))
> +		virtio_set_mem_acc_cb(xen_virtio_mem_acc);
>   
>   	if (!acpi_disabled)
>   		xen_acpi_guest_init();
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index 8b71b1dd7639..28762f800596 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -4,6 +4,7 @@
>   #include <linux/cpu.h>
>   #include <linux/kexec.h>
>   #include <linux/memblock.h>
> +#include <linux/virtio_anchor.h>
>   
>   #include <xen/features.h>
>   #include <xen/events.h>
> @@ -195,7 +196,8 @@ static void __init xen_hvm_guest_init(void)
>   	if (xen_pv_domain())
>   		return;
>   
> -	xen_set_restricted_virtio_memory_access();
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
> +		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>   
>   	init_hvm_pv_info();
>   
> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
> index e3297b15701c..5aaae8a77f55 100644
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -31,6 +31,7 @@
>   #include <linux/gfp.h>
>   #include <linux/edd.h>
>   #include <linux/reboot.h>
> +#include <linux/virtio_anchor.h>
>   
>   #include <xen/xen.h>
>   #include <xen/events.h>
> @@ -109,7 +110,9 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>   
>   static void __init xen_pv_init_platform(void)
>   {
> -	xen_set_restricted_virtio_memory_access();
> +	/* PV guests can't operate virtio devices without grants. */
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO))
> +		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>   
>   	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>   
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index bfd5f4f706bc..a65bd92121a5 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>   
>   	  If in doubt, say n.
>   
> +config XEN_VIRTIO_FORCE_GRANT
> +	bool "Require Xen virtio support to use grants"
> +	depends on XEN_VIRTIO
> +	help
> +	  Require virtio for Xen guests to use grant mappings.
> +	  This will avoid the need to give the backend the right to map all
> +	  of the guest memory. This will need support on the backend side
> +	  (e.g. qemu or kernel, depending on the virtio device types used).
> +
>   endmenu
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index fc0142484001..8973fc1e9ccc 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -12,6 +12,8 @@
>   #include <linux/of.h>
>   #include <linux/pfn.h>
>   #include <linux/xarray.h>
> +#include <linux/virtio_anchor.h>
> +#include <linux/virtio.h>
>   #include <xen/xen.h>
>   #include <xen/xen-ops.h>
>   #include <xen/grant_table.h>
> @@ -287,6 +289,14 @@ bool xen_is_grant_dma_device(struct device *dev)
>   	return has_iommu;
>   }
>   
> +bool xen_virtio_mem_acc(struct virtio_device *dev)
> +{
> +	if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
> +		return true;
> +
> +	return xen_is_grant_dma_device(dev->dev.parent);
> +}


    ... I am wondering whether it would be better to move this to
xen/xen-ops.h, as grant-dma-ops.c is generic (not only for virtio,
although virtio is the first use case):


diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 8973fc1..fc01424 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -12,8 +12,6 @@
  #include <linux/of.h>
  #include <linux/pfn.h>
  #include <linux/xarray.h>
-#include <linux/virtio_anchor.h>
-#include <linux/virtio.h>
  #include <xen/xen.h>
  #include <xen/xen-ops.h>
  #include <xen/grant_table.h>
@@ -289,14 +287,6 @@ bool xen_is_grant_dma_device(struct device *dev)
         return has_iommu;
  }

-bool xen_virtio_mem_acc(struct virtio_device *dev)
-{
-       if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
-               return true;
-
-       return xen_is_grant_dma_device(dev->dev.parent);
-}
-
  void xen_grant_setup_dma_ops(struct device *dev)
  {
         struct xen_grant_dma_data *data;
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 98c399a..a9ae51b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -6,6 +6,7 @@
  #include <linux/notifier.h>
  #include <linux/efi.h>
  #include <linux/virtio_anchor.h>
+#include <linux/virtio.h>
  #include <xen/features.h>
  #include <asm/xen/interface.h>
  #include <xen/interface/vcpu.h>
@@ -218,7 +219,13 @@ static inline void xen_preemptible_hcall_end(void) { }
  #ifdef CONFIG_XEN_GRANT_DMA_OPS
  void xen_grant_setup_dma_ops(struct device *dev);
  bool xen_is_grant_dma_device(struct device *dev);
-bool xen_virtio_mem_acc(struct virtio_device *dev);
+static inline bool xen_virtio_mem_acc(struct virtio_device *dev)
+{
+       if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
+               return true;
+
+       return xen_is_grant_dma_device(dev->dev.parent);
+}
  #else
  static inline void xen_grant_setup_dma_ops(struct device *dev)
  {


> +
>   void xen_grant_setup_dma_ops(struct device *dev)
>   {
>   	struct xen_grant_dma_data *data;
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 80546960f8b7..98c399a960a3 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -5,6 +5,7 @@
>   #include <linux/percpu.h>
>   #include <linux/notifier.h>
>   #include <linux/efi.h>
> +#include <linux/virtio_anchor.h>
>   #include <xen/features.h>
>   #include <asm/xen/interface.h>
>   #include <xen/interface/vcpu.h>
> @@ -217,6 +218,7 @@ static inline void xen_preemptible_hcall_end(void) { }
>   #ifdef CONFIG_XEN_GRANT_DMA_OPS
>   void xen_grant_setup_dma_ops(struct device *dev);
>   bool xen_is_grant_dma_device(struct device *dev);
> +bool xen_virtio_mem_acc(struct virtio_device *dev);
>   #else
>   static inline void xen_grant_setup_dma_ops(struct device *dev)
>   {
> @@ -225,6 +227,10 @@ static inline bool xen_is_grant_dma_device(struct device *dev)
>   {
>   	return false;
>   }
> +static inline bool xen_virtio_mem_acc(struct virtio_device *dev)
> +{
> +	return false;
> +}
>   #endif /* CONFIG_XEN_GRANT_DMA_OPS */
>   
>   #endif /* INCLUDE_XEN_OPS_H */
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index ac5a144c6a65..a99bab817523 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,14 +52,6 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>   extern u64 xen_saved_max_mem_size;
>   #endif
>   
> -#include <linux/virtio_anchor.h>
> -
> -static inline void xen_set_restricted_virtio_memory_access(void)
> -{
> -	if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain())
> -		virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
> -}
> -
>   #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>   int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>   void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);

-- 
Regards,

Oleksandr Tyshchenko
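
As an aside on the IS_ENABLED() checks the patch relies on: they are plain
C preprocessor, so the dead branch is a compile-time constant that the
compiler drops entirely. A simplified re-creation of the trick from
include/linux/kconfig.h (the real macro also checks the _MODULE variant;
the CONFIG_ definitions below are for illustration only):

```c
#include <assert.h>

/* Kconfig defines CONFIG_FOO as 1 when an option is built in; this
 * macro chain yields 1 for a defined option and 0 otherwise, so
 * `if (IS_ENABLED(CONFIG_FOO))` folds to a constant at compile time. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_XEN_VIRTIO 1
/* CONFIG_XEN_VIRTIO_FORCE_GRANT deliberately left undefined */
```

With CONFIG_XEN_VIRTIO defined as 1, the placeholder expands to `0,` and
the second argument (1) is selected; an undefined option leaves a single
junk token, so the fallback 0 is selected instead.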



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:09:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353599.580543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wMR-0000uq-B4; Wed, 22 Jun 2022 09:09:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353599.580543; Wed, 22 Jun 2022 09:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wMR-0000uj-7A; Wed, 22 Jun 2022 09:09:19 +0000
Received: by outflank-mailman (input) for mailman id 353599;
 Wed, 22 Jun 2022 09:09:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yiMn=W5=citrix.com=prvs=16524ee06=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o3wMP-0000ud-4u
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:09:17 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fbe05460-f20a-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 11:09:14 +0200 (CEST)
Received: from mail-dm6nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 05:09:10 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by BN8PR03MB5124.namprd03.prod.outlook.com (2603:10b6:408:db::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Wed, 22 Jun
 2022 09:09:06 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbe05460-f20a-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655888954;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=c28WG6AR0lcD66mP1PvtxNQndEWSn+n1+OMLxmZNS2Y=;
  b=cIrIPqgQElmFunS3i8tiDf8Zd9BSB07/cybxmOOWmaEAcojLgkCELQtv
   OGoKO0M0+KBgLinyQPABFCcpM7SzAqmPs6q6FzYvE+sngF2U7ca14Yepg
   feJ8w+67OcceKCTEYO83/vustWRhkNLuR5wncgUK2LPfEyr5POvEY3STO
   k=;
X-IronPort-RemoteIP: 104.47.58.105
X-IronPort-MID: 73992485
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="scan'208";a="73992485"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W2OTwPwlYSOyxYUZMM3ou862fvH/R2uq1iZS7krJxi8=;
 b=vf6t3aPYUjjWVEox4YrdHQlR6GOBTvWuifYjcGSTS98Mu0xyVzc5VhkT6nnLEckkiU+ca3W8GWp+pcGK5GVjybT3zBsB6RH7ONOQVCialdKC6CSycYlXfDV77NwYOZz/0N1N7vRntR9eAROBFrsFUV7oJilrsFjDBRRwSmHHl50=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 22 Jun 2022 11:09:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YrLcLpsd8hOcMOGI@Air-de-Roger>
References: <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
 <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
 <YqsUfH763oSchRdW@Air-de-Roger>
 <8ee15e94-f4a9-69f2-4c57-2e0cc9df8746@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8ee15e94-f4a9-69f2-4c57-2e0cc9df8746@suse.com>
X-ClientProxiedBy: PAZP264CA0175.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:236::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3fc0d2b8-422c-4f58-3ce1-08da542edbe1
X-MS-TrafficTypeDiagnostic: BN8PR03MB5124:EE_
X-Microsoft-Antispam-PRVS:
	<BN8PR03MB51249DF5122E555E21C87F068FB29@BN8PR03MB5124.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3fc0d2b8-422c-4f58-3ce1-08da542edbe1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 09:09:06.2265
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8/KtpMwv2ar78DBj7LfCRuYYykFY1yYxlhpA/TkV379S/inKKsOdBgUP6reiMSYPgT9dablixsrwvElra2Bc/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5124

On Wed, Jun 22, 2022 at 10:04:19AM +0200, Jan Beulich wrote:
> On 16.06.2022 13:31, Roger Pau Monné wrote:
> > On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
> >> On 14.06.2022 11:38, Roger Pau Monné wrote:
> >>> On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
> >>>> On 14.06.2022 10:32, Roger Pau Monné wrote:
> >>>>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> >>>>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >>>>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>>>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>>>>>>>>>> likely important to have all the output if the boot fails without
> >>>>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>>>>>>>>>> other guests).
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>>>>>>>>>> Dom0's kernel messages?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work on, so
> >>>>>>>>>>>>> this request is something that came up internally.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>>>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
> >>>>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>>>>>>>> triggered) output shouldn't be rate limited either.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Which would raise the question of why we have log levels for non-guest
> >>>>>>>>>>>> messages.
> >>>>>>>>>>>
> >>>>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>>>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
> >>>>>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
> >>>>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>>>>>>>> since a user might want to filter out DEBUG non-guest messages, for
> >>>>>>>>>>> example.
> >>>>>>>>>>
> >>>>>>>>>> It was me who was confused, because of the two log-everything variants
> >>>>>>>>>> we have (console and serial). You're right that your change is unrelated
> >>>>>>>>>> to log levels. However, when there are e.g. many warnings or when an
> >>>>>>>>>> admin has lowered the log level, what you (would) do is effectively
> >>>>>>>>>> force sync_console mode transiently (for a subset of messages, but
> >>>>>>>>>> that's secondary, especially because the "forced" output would still
> >>>>>>>>>> be waiting for earlier output to make it out).
> >>>>>>>>>
> >>>>>>>>> Right, it would have to wait for any previous output on the buffer to
> >>>>>>>>> go out first.  In any case we can guarantee that no more output will
> >>>>>>>>> be added to the buffer while Xen waits for it to be flushed.
> >>>>>>>>>
> >>>>>>>>> So for the hardware domain it might make sense to wait for the TX
> >>>>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
> >>>>>>>>> the hypercall.  That however could cause issues if guests manage to
> >>>>>>>>> keep filling the buffer while the hardware domain is being preempted.
> >>>>>>>>>
> >>>>>>>>> Alternatively we could always reserve half of the buffer for the
> >>>>>>>>> hardware domain, and allow it to be preempted while waiting for space
> >>>>>>>>> (since it's guaranteed that non-hardware domains won't be able to
> >>>>>>>>> steal the allocation from the hardware domain).
> >>>>>>>>
> >>>>>>>> Getting complicated it seems. I have to admit that I wonder whether we
> >>>>>>>> wouldn't be better off leaving the current logic as is.
> >>>>>>>
> >>>>>>> Another possible solution (more like a band-aid) is to increase the
> >>>>>>> buffer size from 4 pages to 8 or 16.  That would likely allow coping
> >>>>>>> fine with the high throughput of boot messages.
> >>>>>>
> >>>>>> You mean the buffer whose size is controlled by serial_tx_buffer?
> >>>>>
> >>>>> Yes.
> >>>>>
> >>>>>> On
> >>>>>> large systems one may want to simply make use of the command line
> >>>>>> option then; I don't think the built-in default needs changing. Or
> >>>>>> if so, then perhaps not statically at build time, but taking into
> >>>>>> account system properties (like CPU count).
> >>>>>
> >>>>> So how about we use:
> >>>>>
> >>>>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
> >>>>
> >>>> That would _reduce_ size on small systems, wouldn't it? Originally
> >>>> you were after increasing the default size. But if you had meant
> >>>> max(), then I'd fear on very large systems this may grow a little
> >>>> too large.
> >>>
> >>> See previous followup about my mistake of using min() instead of
> >>> max().
> >>>
> >>> On a system with 512 CPUs that would be 512KB; I don't think that's a
> >>> lot of memory, especially taking into account that a system with 512
> >>> CPUs should have a matching amount of memory, I would expect.
> >>>
> >>> It's true however that I very much doubt we would fill a 512K buffer,
> >>> so limiting to 64K might be a sensible starting point?
> >>
> >> Yeah, 64k could be a value to compromise on. What total size of
> >> output have you observed to trigger the making of this patch? Xen
> >> alone doesn't even manage to fill 16k on most of my systems ...
> > 
> > I've tried on one of the affected systems now: it's an 8-CPU Kaby Lake
> > at 3.5GHz, and it manages to fill the buffer while booting Linux.
> > 
> > My proposed formula won't fix this use case, so what about just
> > bumping the buffer to 32K by default, which does fix it?
> 
> As said, suitably explained I could also agree with going to 64k. The
> question though is in how far 32k, 64k, or ...
> 
> > Or alternatively use the proposed formula, but adjust the buffer to be
> > between [32K,64K].
> 
> ... this formula would cover a wide range of contemporary systems.
> Without such I can't really see what good a bump would do, as then
> many people may still find themselves in need of using the command
> line option to put in place a larger buffer.

I'm afraid I don't know how to make progress with this.

The current value is clearly too low for at least one of my systems.
I don't think it's feasible for me to propose a value or formula that
I can confirm will be suitable for all systems, hence I would suggest
increasing the buffer value to 32K as that does fix the problem on
that specific system (without claiming it's a value that would suit
all setups).

I agree that many people could still find themselves in need of using
the command line option, but I can assure you that the new buffer value
would fix the issue on at least one system, which should be enough
justification.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:12:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353610.580554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wPb-0002N2-3L; Wed, 22 Jun 2022 09:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353610.580554; Wed, 22 Jun 2022 09:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wPa-0002Mv-Vr; Wed, 22 Jun 2022 09:12:34 +0000
Received: by outflank-mailman (input) for mailman id 353610;
 Wed, 22 Jun 2022 09:12:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3wPa-0002Mn-1n
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:12:34 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60055.outbound.protection.outlook.com [40.107.6.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72ca7c99-f20b-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 11:12:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB6326.eurprd04.prod.outlook.com (2603:10a6:20b:be::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 09:12:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:12:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72ca7c99-f20b-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zQ0EQNclJ8W/P8unN8Ie9ROq8PAoFEBXZZCngGJrLiM=;
 b=PmKsWOgIUIzl5tjE7Ic6SNyLB+pHgc+O70Kc4D5j6PJHVOk+JHuZpzf2E0cyRRCtuFyeSV6jE4L05qw3LwI9cavrV//IJB+gRW1jy5f8+B1qOjgd7n0NDvmZI6JNK+JCCutiSs51B5XxD7hbBU2ILSpx7iQHUm2iwY7KQz2Ao0Wr55eLpdR/WSXGkoS5e/Yyqvsm1gV9UmvIr9Sy2Hx5BbEoNGaBb2BCmcbfl+6b4gol4P6SHkcVlGNZOdwPuZLYZhkrb46DYvXsAUE32KxNNztR2Qxj6kn3VgwJkiQ4kK9MK5rq8DkvDJ4GFzs3veSr77crjPt69v6Zt7M02Tr/KQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a410600f-0c33-effd-e4b5-43edb1eeb647@suse.com>
Date: Wed, 22 Jun 2022 11:12:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7 6/9] xen/arm: introduce CDF_staticmem
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-7-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220620024408.203797-7-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR10CA0041.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:80::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 92392de2-bddd-416f-3028-08da542f55fd
X-MS-TrafficTypeDiagnostic: AM6PR04MB6326:EE_
X-Microsoft-Antispam-PRVS:
	<AM6PR04MB632614236C11BD8D8BD14973B3B29@AM6PR04MB6326.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 92392de2-bddd-416f-3028-08da542f55fd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 09:12:31.0605
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: euARPPukzRCMzRHpVfkudTgcYxc/lE7JwnYIxz3NSfq8616dwoS6bs1HYQx7fa0VaylaYydwOM22SmO7onRaFQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB6326

On 20.06.2022 04:44, Penny Zheng wrote:
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -35,6 +35,18 @@ void arch_get_domain_info(const struct domain *d,
>  /* Should domain memory be directly mapped? */
>  #define CDF_directmap            (1U << 1)
>  #endif
> +/* Is domain memory on static allocation? */
> +#ifdef CONFIG_STATIC_MEMORY
> +#define CDF_staticmem            (1U << 2)
> +#else
> +#define CDF_staticmem            0
> +#endif

With this do you really need ...

> +#ifdef CONFIG_STATIC_MEMORY
> +#define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> +#else
> +#define is_domain_using_staticmem(d) ((void)(d), false)
> +#endif

... the #ifdef-ary here anymore?
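
For illustration (a sketch, not part of the patch, with a simplified
stand-in for Xen's struct domain): since CDF_staticmem is already defined
as 0 in the !CONFIG_STATIC_MEMORY case, a single unconditional definition
of the predicate folds to a compile-time false there:

```c
#include <stdbool.h>

/* CONFIG_STATIC_MEMORY deliberately left undefined here, modelling the
 * !CONFIG_STATIC_MEMORY build. */
#ifdef CONFIG_STATIC_MEMORY
#define CDF_staticmem            (1U << 2)
#else
#define CDF_staticmem            0
#endif

struct domain {
    unsigned int cdf;   /* simplified stand-in for Xen's struct domain */
};

/* One definition for both configurations: with CDF_staticmem == 0 the
 * expression is a constant false, so no second #ifdef is needed. */
#define is_domain_using_staticmem(d) (((d)->cdf & CDF_staticmem) != 0)
```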

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:24:35 2022
Message-ID: <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
Date: Wed, 22 Jun 2022 11:24:25 +0200
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220620024408.203797-8-Penny.Zheng@arm.com>

On 20.06.2022 04:44, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2498,6 +2498,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>          }
>  
>          free_heap_pages(pg, order, scrub);
> +
> +        /* Add page on the resv_page_list *after* it has been freed. */
> +        if ( unlikely(pg->count_info & PGC_static) )
> +            put_static_pages(d, pg, order);

Unless I'm overlooking something the list addition done there / ...

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -90,6 +90,15 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
>                            bool need_scrub);
>  int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
>                              unsigned int memflags);
> +#ifdef CONFIG_STATIC_MEMORY
> +#define put_static_pages(d, page, order) ({                 \
> +    unsigned int i;                                         \
> +    for ( i = 0; i < (1 << (order)); i++ )                  \
> +        page_list_add_tail((pg) + i, &(d)->resv_page_list); \
> +})

... here isn't guarded by any lock. Feels like we've been there before.
It's not really clear to me why the freeing of staticmem pages needs to
be split like this - if it wasn't split, the list addition would
"naturally" occur with the lock held, I think.

Furthermore, be careful with the local variable name used here. Consider
what would happen with an invocation of

    put_static_pages(d, page, i);

The common approach is to suffix an underscore to the variable name.
Such names are not supposed to be used outside of macro definitions,
and hence there's then no potential for such a conflict.

Finally, I think you mean (1u << (order)), to be on the safe side against
UB if order could ever reach 31. Then again - is "order" as a parameter
needed here in the first place? Wasn't it that staticmem operations are
limited to order-0 regions?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:26:32 2022
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
 <wl@xen.org>
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
Date: Wed, 22 Jun 2022 09:26:07 +0000
Message-ID: <FE2CD795-09AC-4AD0-8F08-8320FE7122C5@arm.com>
References:
 <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop>

Hi Stefano,

> On 22 Jun 2022, at 01:00, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Fri, 10 Jun 2022, Bertrand Marquis wrote:
>> cppcheck MISRA addon can be used to check for non compliance to some of
>> the MISRA standard rules.
>>
>> Add a CPPCHECK_MISRA variable that can be set to "y" using make command
>> line to generate a cppcheck report including cppcheck misra checks.
>>=20
>> When MISRA checking is enabled, a file with a text description suitable
>> for cppcheck misra addon is generated out of Xen documentation file
>> which lists the rules followed by Xen (docs/misra/rules.rst).
>>=20
>> By default MISRA checking is turned off.
>>=20
>> While adding cppcheck-misra files to gitignore, also fix the missing /
>> for htmlreport gitignore
>>=20
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Hi Bertrand,
>
> I tried this patch and I am a bit confused by the output
> cppcheck-misra.txt file that I get (appended).
>
> I can see that there are all the rules from docs/misra/rules.rst as it
> should be, together with the one-line summary, but there are also a bunch
> of additional rules not present in docs/misra/rules.rst, starting from
> Rule 1.1 all the way to Rule 21.21. Is this expected?

To make cppcheck happy, a text needs to be provided for all rules, so the
python script generates a dummy sentence for Misra rules not declared in
rules.rst, to prevent cppcheck warnings. To keep it simple, I just did
that for all main and sub numbers from 1 to 22.

So yes, this is expected.
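
For reference, the gist of such a generator might look like the following Python sketch (hypothetical, not the actual script from the patch; cppcheck's MISRA addon consumes a rule-texts file of alternating "Rule X.Y" and description lines):

```python
def make_rule_texts(declared):
    """Build a cppcheck --rule-texts file body.

    declared: {(major, minor): (category, text)} as parsed from
    docs/misra/rules.rst (names here are illustrative).
    """
    lines = ["Appendix A Summary of guidelines"]

    # Rules Xen actually documents, with their category and summary.
    for (major, minor), (category, text) in declared.items():
        lines.append(f"Rule {major}.{minor} {category}")
        lines.append(text)

    # Dummy entries for every other main/sub number from 1 to 22,
    # purely to silence cppcheck's missing-rule-text warnings.
    for major in range(1, 23):
        for minor in range(1, 23):
            if (major, minor) not in declared:
                lines.append(f"Rule {major}.{minor}")
                lines.append(f"No description for rule {major}.{minor}")

    return "\n".join(lines)
```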

Cheers
Bertrand


>
> Cheers,
>
> Stefano
>
>
> Appendix A Summary of guidelines
> Rule 2.1 Required
> All source files shall compile without any compilation errors (Misra rule 2.1)
> Rule 4.7 Required
> If a function returns error information then that error information shall be tested (Misra rule 4.7)
> Rule 4.10 Required
> Precautions shall be taken in order to prevent the contents of a header file being included more than once (Misra rule 4.10)
> Rule 4.14 Required
> The validity of values received from external sources shall be checked (Misra rule 4.14)
> Rule 1.3 Required
> There shall be no occurrence of undefined or critical unspecified behaviour (Misra rule 1.3)
> Rule 3.2 Required
> Line-splicing shall not be used in // comments (Misra rule 3.2)
> Rule 5.1 Required
> External identifiers shall be distinct (Misra rule 5.1)
> Rule 5.2 Required
> Identifiers declared in the same scope and name space shall be distinct (Misra rule 5.2)
> Rule 5.3 Required
> An identifier declared in an inner scope shall not hide an identifier declared in an outer scope (Misra rule 5.3)
> Rule 5.4 Required
> Macro identifiers shall be distinct (Misra rule 5.4)
> Rule 6.2 Required
> Single-bit named bit fields shall not be of a signed type (Misra rule 6.2)
> Rule 8.1 Required
> Types shall be explicitly specified (Misra rule 8.1)
> Rule 8.4 Required
> A compatible declaration shall be visible when an object or function with external linkage is defined (Misra rule 8.4)
> Rule 8.5 Required
> An external object or function shall be declared once in one and only one file (Misra rule 8.5)
> Rule 8.6 Required
> An identifier with external linkage shall have exactly one external definition (Misra rule 8.6)
> Rule 8.8 Required
> The static storage class specifier shall be used in all declarations of objects and functions that have internal linkage (Misra rule 8.8)
> Rule 8.10 Required
> An inline function shall be declared with the static storage class (Misra rule 8.10)
> Rule 8.12 Required
> Within an enumerator list the value of an implicitly-specified enumeration constant shall be unique (Misra rule 8.12)
> Rule 9.1 Mandatory
> The value of an object with automatic storage duration shall not be read before it has been set (Misra rule 9.1)
> Rule 9.2 Required
> The initializer for an aggregate or union shall be enclosed in braces (Misra rule 9.2)
> Rule 13.6 Mandatory
> The operand of the sizeof operator shall not contain any expression which has potential side effects (Misra rule 13.6)
> Rule 14.1 Required
> A loop counter shall not have essentially floating type (Misra rule 14.1)
> Rule 16.7 Required
> A switch-expression shall not have essentially Boolean type (Misra rule 16.7)
> Rule 17.3 Mandatory
> A function shall not be declared implicitly (Misra rule 17.3)
> Rule 17.4 Mandatory
> All exit paths from a function with non-void return type shall have an explicit return statement with an expression (Misra rule 17.4)
> Rule 20.7 Required
> Expressions resulting from the expansion of macro parameters shall be enclosed in parentheses (Misra rule 20.7)
> Rule 20.13 Required
> A line whose first token is # shall be a valid preprocessing directive (Misra rule 20.13)
> Rule 20.14 Required
> All #else #elif and #endif preprocessor directives shall reside in the same file as the #if #ifdef or #ifndef directive to which they are related (Misra rule 20.14)
> Rule 1.1
> No description for rule 1.1
> Rule 1.2
> No description for rule 1.2
> Rule 1.4
> No description for rule 1.4
> Rule 1.5
> No description for rule 1.5
> Rule 1.6
> No description for rule 1.6
> Rule 1.7
> No description for rule 1.7
> Rule 1.8
> No description for rule 1.8
> Rule 1.9
> No description for rule 1.9
> Rule 1.10
> No description for rule 1.10
> Rule 1.11
> No description for rule 1.11
> Rule 1.12
> No description for rule 1.12
> Rule 1.13
> No description for rule 1.13
> Rule 1.14
> No description for rule 1.14
> Rule 1.15
> No description for rule 1.15
> Rule 1.16
> No description for rule 1.16
> Rule 1.17
> No description for rule 1.17
> Rule 1.18
> No description for rule 1.18
> Rule 1.19
> No description for rule 1.19
> Rule 1.20
> No description for rule 1.20
> Rule 1.21
> No description for rule 1.21
> Rule 2.2
> No description for rule 2.2
> Rule 2.3
> No description for rule 2.3
> Rule 2.4
> No description for rule 2.4
> Rule 2.5
> No description for rule 2.5
> Rule 2.6
> No description for rule 2.6
> Rule 2.7
> No description for rule 2.7
> Rule 2.8
> No description for rule 2.8
> Rule 2.9
> No description for rule 2.9
> Rule 2.10
> No description for rule 2.10
> Rule 2.11
> No description for rule 2.11
> Rule 2.12
> No description for rule 2.12
> Rule 2.13
> No description for rule 2.13
> Rule 2.14
> No description for rule 2.14
> Rule 2.15
> No description for rule 2.15
> Rule 2.16
> No description for rule 2.16
> Rule 2.17
> No description for rule 2.17
> Rule 2.18
> No description for rule 2.18
> Rule 2.19
> No description for rule 2.19
> Rule 2.20
> No description for rule 2.20
> Rule 2.21
> No description for rule 2.21
> Rule 3.1
> No description for rule 3.1
> Rule 3.3
> No description for rule 3.3
> Rule 3.4
> No description for rule 3.4
> Rule 3.5
> No description for rule 3.5
> Rule 3.6
> No description for rule 3.6
> Rule 3.7
> No description for rule 3.7
> Rule 3.8
> No description for rule 3.8
> Rule 3.9
> No description for rule 3.9
> Rule 3.10
> No description for rule 3.10
> Rule 3.11
> No description for rule 3.11
> Rule 3.12
> No description for rule 3.12
> Rule 3.13
> No description for rule 3.13
> Rule 3.14
> No description for rule 3.14
> Rule 3.15
> No description for rule 3.15
> Rule 3.16
> No description for rule 3.16
> Rule 3.17
> No description for rule 3.17
> Rule 3.18
> No description for rule 3.18
> Rule 3.19
> No description for rule 3.19
> Rule 3.20
> No description for rule 3.20
> Rule 3.21
> No description for rule 3.21
> Rule 4.1
> No description for rule 4.1
> Rule 4.2
> No description for rule 4.2
> Rule 4.3
> No description for rule 4.3
> Rule 4.4
> No description for rule 4.4
> Rule 4.5
> No description for rule 4.5
> Rule 4.6
> No description for rule 4.6
> Rule 4.8
> No description for rule 4.8
> Rule 4.9
> No description for rule 4.9
> Rule 4.11
> No description for rule 4.11
> Rule 4.12
> No description for rule 4.12
> Rule 4.13
> No description for rule 4.13
> Rule 4.15
> No description for rule 4.15
> Rule 4.16
> No description for rule 4.16
> Rule 4.17
> No description for rule 4.17
> Rule 4.18
> No description for rule 4.18
> Rule 4.19
> No description for rule 4.19
> Rule 4.20
> No description for rule 4.20
> Rule 4.21
> No description for rule 4.21
> Rule 5.5
> No description for rule 5.5
> Rule 5.6
> No description for rule 5.6
> Rule 5.7
> No description for rule 5.7
> Rule 5.8
> No description for rule 5.8
> Rule 5.9
> No description for rule 5.9
> Rule 5.10
> No description for rule 5.10
> Rule 5.11
> No description for rule 5.11
> Rule 5.12
> No description for rule 5.12
> Rule 5.13
> No description for rule 5.13
> Rule 5.14
> No description for rule 5.14
> Rule 5.15
> No description for rule 5.15
> Rule 5.16
> No description for rule 5.16
> Rule 5.17
> No description for rule 5.17
> Rule 5.18
> No description for rule 5.18
> Rule 5.19
> No description for rule 5.19
> Rule 5.20
> No description for rule 5.20
> Rule 5.21
> No description for rule 5.21
> Rule 6.1
> No description for rule 6.1
> Rule 6.3
> No description for rule 6.3
> Rule 6.4
> No description for rule 6.4
> Rule 6.5
> No description for rule 6.5
> Rule 6.6
> No description for rule 6.6
> Rule 6.7
> No description for rule 6.7
> Rule 6.8
> No description for rule 6.8
> Rule 6.9
> No description for rule 6.9
> Rule 6.10
> No description for rule 6.10
> Rule 6.11
> No description for rule 6.11
> Rule 6.12
> No description for rule 6.12
> Rule 6.13
> No description for rule 6.13
> Rule 6.14
> No description for rule 6.14
> Rule 6.15
> No description for rule 6.15
> Rule 6.16
> No description for rule 6.16
> Rule 6.17
> No description for rule 6.17
> Rule 6.18
> No description for rule 6.18
> Rule 6.19
> No description for rule 6.19
> Rule 6.20
> No description for rule 6.20
> Rule 6.21
> No description for rule 6.21
> Rule 7.1
> No description for rule 7.1
> Rule 7.2
> No description for rule 7.2
> Rule 7.3
> No description for rule 7.3
> Rule 7.4
> No description for rule 7.4
> Rule 7.5
> No description for rule 7.5
> Rule 7.6
> No description for rule 7.6
> Rule 7.7
> No description for rule 7.7
> Rule 7.8
> No description for rule 7.8
> Rule 7.9
> No description for rule 7.9
> Rule 7.10
> No description for rule 7.10
> Rule 7.11
> No description for rule 7.11
> Rule 7.12
> No description for rule 7.12
> Rule 7.13
> No description for rule 7.13
> Rule 7.14
> No description for rule 7.14
> Rule 7.15
> No description for rule 7.15
> Rule 7.16
> No description for rule 7.16
> Rule 7.17
> No description for rule 7.17
> Rule 7.18
> No description for rule 7.18
> Rule 7.19
> No description for rule 7.19
> Rule 7.20
> No description for rule 7.20
> Rule 7.21
> No description for rule 7.21
> Rule 8.2
> No description for rule 8.2
> Rule 8.3
> No description for rule 8.3
> Rule 8.7
> No description for rule 8.7
> Rule 8.9
> No description for rule 8.9
> Rule 8.11
> No description for rule 8.11
> Rule 8.13
> No description for rule 8.13
> Rule 8.14
> No description for rule 8.14
> Rule 8.15
> No description for rule 8.15
> Rule 8.16
> No description for rule 8.16
> Rule 8.17
> No description for rule 8.17
> Rule 8.18
> No description for rule 8.18
> Rule 8.19
> No description for rule 8.19
> Rule 8.20
> No description for rule 8.20
> Rule 8.21
> No description for rule 8.21
> Rule 9.3
> No description for rule 9.3
> Rule 9.4
> No description for rule 9.4
> Rule 9.5
> No description for rule 9.5
> Rule 9.6
> No description for rule 9.6
> Rule 9.7
> No description for rule 9.7
> Rule 9.8
> No description for rule 9.8
> Rule 9.9
> No description for rule 9.9
> Rule 9.10
> No description for rule 9.10
> Rule 9.11
> No description for rule 9.11
> Rule 9.12
> No description for rule 9.12
> Rule 9.13
> No description for rule 9.13
> Rule 9.14
> No description for rule 9.14
> Rule 9.15
> No description for rule 9.15
> Rule 9.16
> No description for rule 9.16
> Rule 9.17
> No description for rule 9.17
> Rule 9.18
> No description for rule 9.18
> Rule 9.19
> No description for rule 9.19
> Rule 9.20
> No description for rule 9.20
> Rule 9.21
> No description for rule 9.21
> Rule 10.1
> No description for rule 10.1
> Rule 10.2
> No description for rule 10.2
> Rule 10.3
> No description for rule 10.3
> Rule 10.4
> No description for rule 10.4
> Rule 10.5
> No description for rule 10.5
> Rule 10.6
> No description for rule 10.6
> Rule 10.7
> No description for rule 10.7
> Rule 10.8
> No description for rule 10.8
> Rule 10.9
> No description for rule 10.9
> Rule 10.10
> No description for rule 10.10
> Rule 10.11
> No description for rule 10.11
> Rule 10.12
> No description for rule 10.12
> Rule 10.13
> No description for rule 10.13
> Rule 10.14
> No description for rule 10.14
> Rule 10.15
> No description for rule 10.15
> Rule 10.16
> No description for rule 10.16
> Rule 10.17
> No description for rule 10.17
> Rule 10.18
> No description for rule 10.18
> Rule 10.19
> No description for rule 10.19
> Rule 10.20
> No description for rule 10.20
> Rule 10.21
> No description for rule 10.21
> Rule 11.1
> No description for rule 11.1
> Rule 11.2
> No description for rule 11.2
> Rule 11.3
> No description for rule 11.3
> Rule 11.4
> No description for rule 11.4
> Rule 11.5
> No description for rule 11.5
> Rule 11.6
> No description for rule 11.6
> Rule 11.7
> No description for rule 11.7
> Rule 11.8
> No description for rule 11.8
> Rule 11.9
> No description for rule 11.9
> Rule 11.10
> No description for rule 11.10
> Rule 11.11
> No description for rule 11.11
> Rule 11.12
> No description for rule 11.12
> Rule 11.13
> No description for rule 11.13
> Rule 11.14
> No description for rule 11.14
> Rule 11.15
> No description for rule 11.15
> Rule 11.16
> No description for rule 11.16
> Rule 11.17
> No description for rule 11.17
> Rule 11.18
> No description for rule 11.18
> Rule 11.19
> No description for rule 11.19
> Rule 11.20
> No description for rule 11.20
> Rule 11.21
> No description for rule 11.21
> Rule 12.1
> No description for rule 12.1
> Rule 12.2
> No description for rule 12.2
> Rule 12.3
> No description for rule 12.3
> Rule 12.4
> No description for rule 12.4
> Rule 12.5
> No description for rule 12.5
> Rule 12.6
> No description for rule 12.6
> Rule 12.7
> No description for rule 12.7
> Rule 12.8
> No description for rule 12.8
> Rule 12.9
> No description for rule 12.9
> Rule 12.10
> No description for rule 12.10
> Rule 12.11
> No description for rule 12.11
> Rule 12.12
> No description for rule 12.12
> Rule 12.13
> No description for rule 12.13
> Rule 12.14
> No description for rule 12.14
> Rule 12.15
> No description for rule 12.15
> Rule 12.16
> No description for rule 12.16
> Rule 12.17
> No description for rule 12.17
> Rule 12.18
> No description for rule 12.18
> Rule 12.19
> No description for rule 12.19
> Rule 12.20
> No description for rule 12.20
> Rule 12.21
> No description for rule 12.21
> Rule 13.1
> No description for rule 13.1
> Rule 13.2
> No description for rule 13.2
> Rule 13.3
> No description for rule 13.3
> Rule 13.4
> No description for rule 13.4
> Rule 13.5
> No description for rule 13.5
> Rule 13.7
> No description for rule 13.7
> Rule 13.8
> No description for rule 13.8
> Rule 13.9
> No description for rule 13.9
> Rule 13.10
> No description for rule 13.10
> Rule 13.11
> No description for rule 13.11
> Rule 13.12
> No description for rule 13.12
> Rule 13.13
> No description for rule 13.13
> Rule 13.14
> No description for rule 13.14
> Rule 13.15
> No description for rule 13.15
> Rule 13.16
> No description for rule 13.16
> Rule 13.17
> No description for rule 13.17
> Rule 13.18
> No description for rule 13.18
> Rule 13.19
> No description for rule 13.19
> Rule 13.20
> No description for rule 13.20
> Rule 13.21
> No description for rule 13.21
> Rule 14.2
> No description for rule 14.2
> Rule 14.3
> No description for rule 14.3
> Rule 14.4
> No description for rule 14.4
> Rule 14.5
> No description for rule 14.5
> Rule 14.6
> No description for rule 14.6
> Rule 14.7
> No description for rule 14.7
> Rule 14.8
> No description for rule 14.8
> Rule 14.9
> No description for rule 14.9
> Rule 14.10
> No description for rule 14.10
> Rule 14.11
> No description for rule 14.11
> Rule 14.12
> No description for rule 14.12
> Rule 14.13
> No description for rule 14.13
> Rule 14.14
> No description for rule 14.14
> Rule 14.15
> No description for rule 14.15
> Rule 14.16
> No description for rule 14.16
> Rule 14.17
> No description for rule 14.17
> Rule 14.18
> No description for rule 14.18
> Rule 14.19
> No description for rule 14.19
> Rule 14.20
> No description for rule 14.20
> Rule 14.21
> No description for rule 14.21
> Rule 15.1
> No description for rule 15.1
> Rule 15.2
> No description for rule 15.2
> Rule 15.3
> No description for rule 15.3
> Rule 15.4
> No description for rule 15.4
> Rule 15.5
> No description for rule 15.5
> Rule 15.6
> No description for rule 15.6
> Rule 15.7
> No description for rule 15.7
> Rule 15.8
> No description for rule 15.8
> Rule 15.9
> No description for rule 15.9
> Rule 15.10
> No description for rule 15.10
> Rule 15.11
> No description for rule 15.11
> Rule 15.12
> No description for rule 15.12
> Rule 15.13
> No description for rule 15.13
> Rule 15.14
> No description for rule 15.14
> Rule 15.15
> No description for rule 15.15
> Rule 15.16
> No description for rule 15.16
> Rule 15.17
> No description for rule 15.17
> Rule 15.18
> No description for rule 15.18
> Rule 15.19
> No description for rule 15.19
> Rule 15.20
> No description for rule 15.20
> Rule 15.21
> No description for rule 15.21
> Rule 16.1
> No description for rule 16.1
> Rule 16.2
> No description for rule 16.2
> Rule 16.3
> No description for rule 16.3
> Rule 16.4
> No description for rule 16.4
> Rule 16.5
> No description for rule 16.5
> Rule 16.6
> No description for rule 16.6
> Rule 16.8
> No description for rule 16.8
> Rule 16.9
> No description for rule 16.9
> Rule 16.10
> No description for rule 16.10
> Rule 16.11
> No description for rule 16.11
> Rule 16.12
> No description for rule 16.12
> Rule 16.13
> No description for rule 16.13
> Rule 16.14
> No description for rule 16.14
> Rule 16.15
> No description for rule 16.15
> Rule 16.16
> No description for rule 16.16
> Rule 16.17
> No description for rule 16.17
> Rule 16.18
> No description for rule 16.18
> Rule 16.19
> No description for rule 16.19
> Rule 16.20
> No description for rule 16.20
> Rule 16.21
> No description for rule 16.21
> Rule 17.1
> No description for rule 17.1
> Rule 17.2
> No description for rule 17.2
> Rule 17.5
> No description for rule 17.5
> Rule 17.6
> No description for rule 17.6
> Rule 17.7
> No description for rule 17.7
> Rule 17.8
> No description for rule 17.8
> Rule 17.9
> No description for rule 17.9
> Rule 17.10
> No description for rule 17.10
> Rule 17.11
> No description for rule 17.11
> Rule 17.12
> No description for rule 17.12
> Rule 17.13
> No description for rule 17.13
> Rule 17.14
> No description for rule 17.14
> Rule 17.15
> No description for rule 17.15
> Rule 17.16
> No description for rule 17.16
> Rule 17.17
> No description for rule 17.17
> Rule 17.18
> No description for rule 17.18
> Rule 17.19
> No description for rule 17.19
> Rule 17.20
> No description for rule 17.20
> Rule 17.21
> No description for rule 17.21
> Rule 18.1
> No description for rule 18.1
> Rule 18.2
> No description for rule 18.2
> Rule 18.3
> No description for rule 18.3
> Rule 18.4
> No description for rule 18.4
> Rule 18.5
> No description for rule 18.5
> Rule 18.6
> No description for rule 18.6
> Rule 18.7
> No description for rule 18.7
> Rule 18.8
> No description for rule 18.8
> Rule 18.9
> No description for rule 18.9
> Rule 18.10
> No description for rule 18.10
> Rule 18.11
> No description for rule 18.11
> Rule 18.12
> No description for rule 18.12
> Rule 18.13
> No description for rule 18.13
> Rule 18.14
> No description for rule 18.14
> Rule 18.15
> No description for rule 18.15
> Rule 18.16
> No description for rule 18.16
> Rule 18.17
> No description for rule 18.17
> Rule 18.18
> No description for rule 18.18
> Rule 18.19
> No description for rule 18.19
> Rule 18.20
> No description for rule 18.20
> Rule 18.21
> No description for rule 18.21
> Rule 19.1
> No description for rule 19.1
> Rule 19.2
> No description for rule 19.2
> Rule 19.3
> No description for rule 19.3
> Rule 19.4
> No description for rule 19.4
> Rule 19.5
> No description for rule 19.5
> Rule 19.6
> No description for rule 19.6
> Rule 19.7
> No description for rule 19.7
> Rule 19.8
> No description for rule 19.8
> Rule 19.9
> No description for rule 19.9
> Rule 19.10
> No description for rule 19.10
> Rule 19.11
> No description for rule 19.11
> Rule 19.12
> No description for rule 19.12
> Rule 19.13
> No description for rule 19.13
> Rule 19.14
> No description for rule 19.14
> Rule 19.15
> No description for rule 19.15
> Rule 19.16
> No description for rule 19.16
> Rule 19.17
> No description for rule 19.17
> Rule 19.18
> No description for rule 19.18
> Rule 19.19
> No description for rule 19.19
> Rule 19.20
> No description for rule 19.20
> Rule 19.21
> No description for rule 19.21
> Rule 20.1
> No description for rule 20.1
> Rule 20.2
> No description for rule 20.2
> Rule 20.3
> No description for rule 20.3
> Rule 20.4
> No description for rule 20.4
> Rule 20.5
> No description for rule 20.5
> Rule 20.6
> No description for rule 20.6
> Rule 20.8
> No description for rule 20.8
> Rule 20.9
> No description for rule 20.9
> Rule 20.10
> No description for rule 20.10
> Rule 20.11
> No description for rule 20.11
> Rule 20.12
> No description for rule 20.12
> Rule 20.15
> No description for rule 20.15
> Rule 20.16
> No description for rule 20.16
> Rule 20.17
> No description for rule 20.17
> Rule 20.18
> No description for rule 20.18
> Rule 20.19
> No description for rule 20.19
> Rule 20.20
> No description for rule 20.20
> Rule 20.21
> No description for rule 20.21
> Rule 21.1
> No description for rule 21.1
> Rule 21.2
> No description for rule 21.2
> Rule 21.3
> No description for rule 21.3
> Rule 21.4
> No description for rule 21.4
> Rule 21.5
> No description for rule 21.5
> Rule 21.6
> No description for rule 21.6
> Rule 21.7
> No description for rule 21.7
> Rule 21.8
> No description for rule 21.8
> Rule 21.9
> No description for rule 21.9
> Rule 21.10
> No description for rule 21.10
> Rule 21.11
> No description for rule 21.11
> Rule 21.12
> No description for rule 21.12
> Rule 21.13
> No description for rule 21.13
> Rule 21.14
> No description for rule 21.14
> Rule 21.15
> No description for rule 21.15
> Rule 21.16
> No description for rule 21.16
> Rule 21.17
> No description for rule 21.17
> Rule 21.18
> No description for rule 21.18
> Rule 21.19
> No description for rule 21.19
> Rule 21.20
> No description for rule 21.20
> Rule 21.21
> No description for rule 21.21
> Appendix B



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:26:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:26:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353628.580587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wd7-0004qp-TX; Wed, 22 Jun 2022 09:26:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353628.580587; Wed, 22 Jun 2022 09:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wd7-0004qg-QK; Wed, 22 Jun 2022 09:26:33 +0000
Received: by outflank-mailman (input) for mailman id 353628;
 Wed, 22 Jun 2022 09:26:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3wd5-0004Zj-OA
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:26:31 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2052.outbound.protection.outlook.com [40.107.20.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 664a7e5a-f20d-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 11:26:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7005.eurprd04.prod.outlook.com (2603:10a6:803:136::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 09:26:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:26:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 664a7e5a-f20d-11ec-bd2d-47488cf2e6aa
Message-ID: <676786ba-68ab-0575-c599-92a1c1eee6a5@suse.com>
Date: Wed, 22 Jun 2022 11:26:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-10-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220620024408.203797-10-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR04CA0032.eurprd04.prod.outlook.com
 (2603:10a6:206:1::45) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b3e83adf-29ed-431b-f6fe-08da543148d1
X-MS-TrafficTypeDiagnostic: VI1PR04MB7005:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b3e83adf-29ed-431b-f6fe-08da543148d1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 09:26:27.9915
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: J/newxIqWraV0zoXqbjGppHnzZJ67nOmgED4o+mALaZj+dd9pOxXPTAmAw+Skkc+UgnIShD8knrjTyl7QHs9dQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7005

On 20.06.2022 04:44, Penny Zheng wrote:
> When a static domain populates memory through populate_physmap at runtime,
> it shall retrieve reserved pages from resv_page_list to make sure that
> guest RAM stays restricted to the statically configured memory regions.
> This commit also introduces a new helper, acquire_reserved_page, to make this work.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v7 changes:
> - remove the lock, since we add the page to resv_page_list only after it
> has been fully freed.

Was this meant to go into another patch? I can't seem to locate the
respective code here, and the remark also doesn't provide enough context
on its own.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:33:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:33:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353646.580598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wjt-0006gG-KZ; Wed, 22 Jun 2022 09:33:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353646.580598; Wed, 22 Jun 2022 09:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3wjt-0006g9-HN; Wed, 22 Jun 2022 09:33:33 +0000
Received: by outflank-mailman (input) for mailman id 353646;
 Wed, 22 Jun 2022 09:33:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3wjs-0006g3-VO
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:33:33 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60056.outbound.protection.outlook.com [40.107.6.56])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 610fb2aa-f20e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 11:33:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8404.eurprd04.prod.outlook.com (2603:10a6:20b:3f8::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 09:33:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 610fb2aa-f20e-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <af4f325c-367d-4b05-fe96-b102b7f7e554@suse.com>
Date: Wed, 22 Jun 2022 11:33:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <YqcuTUJUgXcO3iYE@Air-de-Roger>
 <f0f87e99-282b-6df7-7e57-3a6c73029519@suse.com>
 <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
 <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
 <YqsUfH763oSchRdW@Air-de-Roger>
 <8ee15e94-f4a9-69f2-4c57-2e0cc9df8746@suse.com>
 <YrLcLpsd8hOcMOGI@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YrLcLpsd8hOcMOGI@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR04CA0045.eurprd04.prod.outlook.com
 (2603:10a6:20b:f0::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2829f0d3-1143-4771-cd17-08da5432442f
X-MS-TrafficTypeDiagnostic: AS8PR04MB8404:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2829f0d3-1143-4771-cd17-08da5432442f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 09:33:29.6990
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: V0GLk1UZA35HUYO+FOM4tPW28/7pB+JTTHN70+L8Qqo39wkxHfEXEtSMttncq5Cw5hQWujy4imzTcl8ylO0fhw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8404

On 22.06.2022 11:09, Roger Pau Monné wrote:
> On Wed, Jun 22, 2022 at 10:04:19AM +0200, Jan Beulich wrote:
>> On 16.06.2022 13:31, Roger Pau Monné wrote:
>>> On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
>>>> On 14.06.2022 11:38, Roger Pau Monné wrote:
>>>>> On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
>>>>>> On 14.06.2022 10:32, Roger Pau Monné wrote:
>>>>>>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
>>>>>>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
>>>>>>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
>>>>>>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
>>>>>>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
>>>>>>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
>>>>>>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
>>>>>>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
>>>>>>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
>>>>>>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
>>>>>>>>>>>>>>>>> likely important to have all the output if the boot fails without
>>>>>>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
>>>>>>>>>>>>>>>>> other guests).
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
>>>>>>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
>>>>>>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
>>>>>>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
>>>>>>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
>>>>>>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
>>>>>>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
>>>>>>>>>>>>>>>> Dom0's kernel messages?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work on,
>>>>>>>>>>>>>>> so this request is something that came up internally.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
>>>>>>>>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
>>>>>>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
>>>>>>>>>>>>>>> triggered) output shouldn't be rate limited either.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Which would raise the question of why we have log levels for non-guest
>>>>>>>>>>>>>> messages.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
>>>>>>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
>>>>>>>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
>>>>>>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
>>>>>>>>>>>>> since a user might want to filter out DEBUG non-guest messages, for
>>>>>>>>>>>>> example.
>>>>>>>>>>>>
>>>>>>>>>>>> It was me who was confused, because of the two log-everything variants
>>>>>>>>>>>> we have (console and serial). You're right that your change is unrelated
>>>>>>>>>>>> to log levels. However, when there are e.g. many warnings or when an
>>>>>>>>>>>> admin has lowered the log level, what you (would) do is effectively
>>>>>>>>>>>> force sync_console mode transiently (for a subset of messages, but
>>>>>>>>>>>> that's secondary, especially because the "forced" output would still
>>>>>>>>>>>> be waiting for earlier output to make it out).
>>>>>>>>>>>
>>>>>>>>>>> Right, it would have to wait for any previous output on the buffer to
>>>>>>>>>>> go out first.  In any case we can guarantee that no more output will
>>>>>>>>>>> be added to the buffer while Xen waits for it to be flushed.
>>>>>>>>>>>
>>>>>>>>>>> So for the hardware domain it might make sense to wait for the TX
>>>>>>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
>>>>>>>>>>> the hypercall.  That however could cause issues if guests manage to
>>>>>>>>>>> keep filling the buffer while the hardware domain is being preempted.
>>>>>>>>>>>
>>>>>>>>>>> Alternatively we could always reserve half of the buffer for the
>>>>>>>>>>> hardware domain, and allow it to be preempted while waiting for space
>>>>>>>>>>> (since it's guaranteed non hardware domains won't be able to steal the
>>>>>>>>>>> allocation from the hardware domain).
>>>>>>>>>>
>>>>>>>>>> Getting complicated it seems. I have to admit that I wonder whether we
>>>>>>>>>> wouldn't be better off leaving the current logic as is.
>>>>>>>>>
>>>>>>>>> Another possible solution (more like a band aid) is to increase the
>>>>>>>>> buffer size from 4 pages to 8 or 16.  That would likely allow us
>>>>>>>>> to cope fine with the high throughput of boot messages.
>>>>>>>>
>>>>>>>> You mean the buffer whose size is controlled by serial_tx_buffer?
>>>>>>>
>>>>>>> Yes.
>>>>>>>
>>>>>>>> On
>>>>>>>> large systems one may want to simply make use of the command line
>>>>>>>> option then; I don't think the built-in default needs changing. Or
>>>>>>>> if so, then perhaps not statically at build time, but taking into
>>>>>>>> account system properties (like CPU count).
>>>>>>>
>>>>>>> So how about we use:
>>>>>>>
>>>>>>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
>>>>>>
>>>>>> That would _reduce_ size on small systems, wouldn't it? Originally
>>>>>> you were after increasing the default size. But if you had meant
>>>>>> max(), then I'd fear on very large systems this may grow a little
>>>>>> too large.
>>>>>
>>>>> See previous followup about my mistake of using min() instead of
>>>>> max().
>>>>>
>>>>> On a system with 512 CPUs that would be 512KB; I don't think that's
>>>>> a lot of memory, especially since a system with 512 CPUs can be
>>>>> expected to have a matching amount of memory.
>>>>>
>>>>> It's true however that I very much doubt we would fill a 512K buffer,
>>>>> so limiting to 64K might be a sensible starting point?
>>>>
>>>> Yeah, 64k could be a value to compromise on. What total size of
>>>> output have you observed to trigger the making of this patch? Xen
>>>> alone doesn't even manage to fill 16k on most of my systems ...
>>>
>>> I've tried on one of the affected systems now; it's an 8-CPU Kaby Lake
>>> at 3.5GHz, and it manages to fill the buffer while booting Linux.
>>>
>>> My proposed formula won't fix this use case, so what about just
>>> bumping the buffer to 32K by default, which does fix it?
>>
>> As said, suitably explained I could also agree with going to 64k. The
>> question though is to what extent 32k, 64k, or ...
>>
>>> Or alternatively use the proposed formula, but adjust the buffer to be
>>> between [32K,64K].
>>
>> ... this formula would cover a wide range of contemporary systems.
>> Without such I can't really see what good a bump would do, as then
>> many people may still find themselves in need of using the command
>> line option to put in place a larger buffer.
> 
> I'm afraid I don't know how to make progress with this.
> 
> The current value is clearly too low for at least one of my systems.
> I don't think it's feasible for me to propose a value or formula that
> I can confirm will be suitable for all systems, hence I would suggest
> increasing the buffer value to 32K as that does fix the problem on
> that specific system (without claiming it's a value that would suit
> all setups).
> 
> I agree that many people could still find themselves needing to use
> the command line option, but I can assure you that the new buffer
> value would fix the issue on at least one system, which should be
> enough as a justification.

I'm afraid I view this differently. Dealing with individual systems is
imo not a reason to change a default when there is a command line
option to adjust the value in question, and when, at the same time, the
higher default might cause a waste of resources on at least one other
system. As said before, I'm not going to object to bumping to 32k or
even 64k, provided this has wider benefit and limited downsides. But
with a justification of "this fixes one system" I'm not going to ack
(but also not nak) such a change.

Jan
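[Editor's note: for illustration, the two ideas discussed in this thread
(sizing the TX buffer by CPU count within a 16k..64k range, and reserving
half the buffer for the hardware domain) could be sketched as below. This
is a hypothetical sketch, not Xen's actual implementation; the names
serial_tx_bufsz and can_queue are invented, and the bounds are simply the
values floated in the discussion.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical constants for illustration only. */
#define PAGE_SIZE     4096u
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* max(16k, 1k per CPU rounded up to a page), capped at 64k. */
static unsigned int serial_tx_bufsz(unsigned int ncpus)
{
    unsigned int sz = ROUNDUP(1024u * ncpus, PAGE_SIZE);

    if ( sz < 16384u )
        sz = 16384u;
    if ( sz > 65536u )
        sz = 65536u;

    return sz;
}

/* "Reserve half the buffer for the hardware domain": ordinary domains
 * may only fill the first half, so hwdom output always finds space. */
static bool can_queue(unsigned int *used, unsigned int bufsz,
                      unsigned int len, bool is_hwdom)
{
    unsigned int limit = is_hwdom ? bufsz : bufsz / 2;

    if ( *used + len > limit )
        return false;

    *used += len;
    return true;
}
```

With these bounds, an 8-CPU box keeps the 16k floor, a 48-CPU box gets
48k, and anything past 64 CPUs is clamped to the 64k ceiling.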


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:38:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353655.580609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3woy-0007KK-9D; Wed, 22 Jun 2022 09:38:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353655.580609; Wed, 22 Jun 2022 09:38:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3woy-0007KD-4v; Wed, 22 Jun 2022 09:38:48 +0000
Received: by outflank-mailman (input) for mailman id 353655;
 Wed, 22 Jun 2022 09:38:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3wox-0007K7-Bt
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:38:47 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60062.outbound.protection.outlook.com [40.107.6.62])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c9103d5-f20f-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 11:38:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR0401MB2464.eurprd04.prod.outlook.com (2603:10a6:800:56::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 09:38:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c9103d5-f20f-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1309d21d-795e-5c2c-0e94-06cf5205e2ff@suse.com>
Date: Wed, 22 Jun 2022 11:38:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: disabling mercurial repositories
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Cc: "committers@xenproject.org" <committers@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <YrHMSJg6Rx9ULvr6@Air-de-Roger>
 <aaea6105-e83d-feba-edf3-3d7e26b90769@suse.com>
 <10a33bef-bcef-9ec4-5171-a579019a69f1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <10a33bef-bcef-9ec4-5171-a579019a69f1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0271.eurprd06.prod.outlook.com
 (2603:10a6:20b:45a::25) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e66cd668-4fca-4df2-81ef-08da5432ff3d
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2464:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e66cd668-4fca-4df2-81ef-08da5432ff3d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 09:38:43.5384
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gVOLHbXZ/sD4DDXGTS80A+yjstO6FbvZGcW19hvPjwd1mInXbIGqwKjHt638VPuQ0qpYP+BPqrDnb4WWJxzt/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2464

On 22.06.2022 10:47, Andrew Cooper wrote:
> On 22/06/2022 09:41, Jan Beulich wrote:
>> On 21.06.2022 15:48, Roger Pau Monné wrote:
>>> Last week we had a bit of an emergency when a web crawler started
>>> indexing all our mercurial repositories on xenbits, which caused the
>>> load on xenbits to go beyond what it can handle.
>>>
>>> As a temporary solution we decided to remove access to mercurial
>>> repositories, but AFAIK the contents there are only historical, so we
>>> might consider completely removing access to
>>> mercurial repositories.  This would however require migrating any
>>> repository we care about to git.
>>>
>>> I would like an opinion from committers as well as the broader
>>> community on whether shutting down the mercurial repositories and
>>> migrating whatever we care about is appropriate.  Otherwise we will
>>> need to implement some throttling of mercurial accesses in order to
>>> avoid overloading xenbits.
>> While I wouldn't strictly mind its shutting off or the disabling of
>> hgweb as was suggested in a reply, either would mean to me personally
>> that it wouldn't be easy enough anymore to warrant trying to hunt
>> down the origin of certain Linux side aspects in the 2.6.18-xen tree.
>> Admittedly me doing so has become increasingly rare over time ...
> 
> We could convert that into a git repo (probably a branch on an existing
> Linux.git to save most of the conversion work) and make it available via
> gitweb if it's still useful?

If such a conversion would go cleanly enough, why not.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:58:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353665.580620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3x7q-0001J7-1o; Wed, 22 Jun 2022 09:58:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353665.580620; Wed, 22 Jun 2022 09:58:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3x7p-0001J0-Tx; Wed, 22 Jun 2022 09:58:17 +0000
Received: by outflank-mailman (input) for mailman id 353665;
 Wed, 22 Jun 2022 09:58:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3x7o-0001Iu-PP
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:58:16 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50089.outbound.protection.outlook.com [40.107.5.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d57bb5b6-f211-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 11:58:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7098.eurprd04.prod.outlook.com (2603:10a6:10:fd::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 09:58:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <3066680f-6213-7190-3f2d-d05edae723c1@suse.com>
Date: Wed, 22 Jun 2022 11:58:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] x86/mm: Add an early PGT_validated exit in
 _get_page_type()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20220617104739.7861-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220617104739.7861-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 17.06.2022 12:47, Andrew Cooper wrote:
> This is a continuation of the cleanup and commenting in:
>   9186e96b199e ("x86/pv: Clean up _get_page_type()")
>   8cc5036bc385 ("x86/pv: Fix ABAC cmpxchg() race in _get_page_type()")
> 
> With the re-arranged and newly commented logic, it's far more clear that the
> second half of _get_page_type() only has work to do for page validation.

To be honest, "far more clear" reads as misleading to me: part of the
re-arrangement was to move the early setting of PGT_validated for
PGT_writable pages to later, and without that the stated fact wasn't
entirely true prior to the re-arrangement. How about s/far more/now/ ?

> Introduce an early exit for PGT_validated.  This makes the fastpath marginally
> faster, and simplifies the subsequent logic as it no longer needs to exclude
> the fully validated case.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Preferably with the wording above adjusted:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

> Not that it's relevant, but bloat-o-meter says:
> 
>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-300 (-300)
>   Function                                     old     new   delta
>   _get_page_type                              6618    6318    -300
> 
> which is more impressive than I was expecting.

And I have to admit I'm having trouble seeing why that would be.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 09:59:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 09:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353671.580630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3x8Z-0001pX-At; Wed, 22 Jun 2022 09:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353671.580630; Wed, 22 Jun 2022 09:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3x8Z-0001pQ-7k; Wed, 22 Jun 2022 09:59:03 +0000
Received: by outflank-mailman (input) for mailman id 353671;
 Wed, 22 Jun 2022 09:59:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yiMn=W5=citrix.com=prvs=16524ee06=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o3x8X-0001k6-3M
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 09:59:01 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eede6264-f211-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 11:58:59 +0200 (CEST)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 05:58:55 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CH0PR03MB6004.namprd03.prod.outlook.com (2603:10b6:610:e3::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 09:58:54 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 09:58:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 22 Jun 2022 11:58:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/console: do not drop serial output from the hardware
 domain
Message-ID: <YrLn2tgaAQlnMbIL@Air-de-Roger>
References: <YqgwNu3QSpPcZjnU@Air-de-Roger>
 <69d85d88-4ec1-987c-151f-0d433021fe34@suse.com>
 <YqhHtetipYTG8tuc@Air-de-Roger>
 <72c94980-cbcd-d3b3-7aad-c9db58d9c4a2@suse.com>
 <YqhXFKMlIvkQzVoT@Air-de-Roger>
 <291bb0ee-06d7-af25-79bb-e099c7ff2fe1@suse.com>
 <YqsUfH763oSchRdW@Air-de-Roger>
 <8ee15e94-f4a9-69f2-4c57-2e0cc9df8746@suse.com>
 <YrLcLpsd8hOcMOGI@Air-de-Roger>
 <af4f325c-367d-4b05-fe96-b102b7f7e554@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <af4f325c-367d-4b05-fe96-b102b7f7e554@suse.com>
MIME-Version: 1.0

On Wed, Jun 22, 2022 at 11:33:32AM +0200, Jan Beulich wrote:
> On 22.06.2022 11:09, Roger Pau Monné wrote:
> > On Wed, Jun 22, 2022 at 10:04:19AM +0200, Jan Beulich wrote:
> >> On 16.06.2022 13:31, Roger Pau Monné wrote:
> >>> On Tue, Jun 14, 2022 at 11:45:54AM +0200, Jan Beulich wrote:
> >>>> On 14.06.2022 11:38, Roger Pau Monné wrote:
> >>>>> On Tue, Jun 14, 2022 at 11:13:07AM +0200, Jan Beulich wrote:
> >>>>>> On 14.06.2022 10:32, Roger Pau Monné wrote:
> >>>>>>> On Tue, Jun 14, 2022 at 10:10:03AM +0200, Jan Beulich wrote:
> >>>>>>>> On 14.06.2022 08:52, Roger Pau Monné wrote:
> >>>>>>>>> On Mon, Jun 13, 2022 at 03:56:54PM +0200, Jan Beulich wrote:
> >>>>>>>>>> On 13.06.2022 14:32, Roger Pau Monné wrote:
> >>>>>>>>>>> On Mon, Jun 13, 2022 at 11:18:49AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>> On 13.06.2022 11:04, Roger Pau Monné wrote:
> >>>>>>>>>>>>> On Mon, Jun 13, 2022 at 10:29:43AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>>>> On 13.06.2022 10:21, Roger Pau Monné wrote:
> >>>>>>>>>>>>>>> On Mon, Jun 13, 2022 at 09:30:06AM +0200, Jan Beulich wrote:
> >>>>>>>>>>>>>>>> On 10.06.2022 17:06, Roger Pau Monne wrote:
> >>>>>>>>>>>>>>>>> Prevent dropping console output from the hardware domain, since it's
> >>>>>>>>>>>>>>>>> likely important to have all the output if the boot fails without
> >>>>>>>>>>>>>>>>> having to resort to sync_console (which also affects the output from
> >>>>>>>>>>>>>>>>> other guests).
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Do so by pairing the console_serial_puts() with
> >>>>>>>>>>>>>>>>> serial_{start,end}_log_everything(), so that no output is dropped.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> While I can see the goal, why would Dom0 output be (effectively) more
> >>>>>>>>>>>>>>>> important than Xen's own one (which isn't "forced")? And with this
> >>>>>>>>>>>>>>>> aiming at boot output only, wouldn't you want to stop the overriding
> >>>>>>>>>>>>>>>> once boot has completed (of which, if I'm not mistaken, we don't
> >>>>>>>>>>>>>>>> really have any signal coming from Dom0)? And even during boot I'm
> >>>>>>>>>>>>>>>> not convinced we'd want to let through everything, but perhaps just
> >>>>>>>>>>>>>>>> Dom0's kernel messages?
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I normally use sync_console on all the boxes I'm doing dev work on,
> >>>>>>>>>>>>>>> so this request is something that came up internally.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Didn't realize Xen output wasn't forced; since we already have rate
> >>>>>>>>>>>>>>> limiting based on log levels, I was assuming that non-ratelimited
> >>>>>>>>>>>>>>> messages wouldn't be dropped.  But yes, I agree that Xen (non-guest
> >>>>>>>>>>>>>>> triggered) output shouldn't be rate limited either.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Which would raise the question of why we have log levels for non-guest
> >>>>>>>>>>>>>> messages.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Hm, maybe I'm confused, but I don't see a direct relation between log
> >>>>>>>>>>>>> levels and rate limiting.  If I set log level to WARNING I would
> >>>>>>>>>>>>> expect to not lose _any_ non-guest log messages with level WARNING or
> >>>>>>>>>>>>> above.  It's still useful to have log levels for non-guest messages,
> >>>>>>>>>>>>> since a user might want to filter out DEBUG non-guest messages, for
> >>>>>>>>>>>>> example.
> >>>>>>>>>>>>
> >>>>>>>>>>>> It was me who was confused, because of the two log-everything variants
> >>>>>>>>>>>> we have (console and serial). You're right that your change is unrelated
> >>>>>>>>>>>> to log levels. However, when there are e.g. many warnings or when an
> >>>>>>>>>>>> admin has lowered the log level, what you (would) do is effectively
> >>>>>>>>>>>> force sync_console mode transiently (for a subset of messages, but
> >>>>>>>>>>>> that's secondary, especially because the "forced" output would still
> >>>>>>>>>>>> be waiting for earlier output to make it out).
> >>>>>>>>>>>
> >>>>>>>>>>> Right, it would have to wait for any previous output on the buffer to
> >>>>>>>>>>> go out first.  In any case we can guarantee that no more output will
> >>>>>>>>>>> be added to the buffer while Xen waits for it to be flushed.
> >>>>>>>>>>>
> >>>>>>>>>>> So for the hardware domain it might make sense to wait for the TX
> >>>>>>>>>>> buffers to be half empty (the current tx_quench logic) by preempting
> >>>>>>>>>>> the hypercall.  That however could cause issues if guests manage to
> >>>>>>>>>>> keep filling the buffer while the hardware domain is being preempted.
> >>>>>>>>>>>
> >>>>>>>>>>> Alternatively we could always reserve half of the buffer for the
> >>>>>>>>>>> hardware domain, and allow it to be preempted while waiting for space
> >>>>>>>>>>> (since it's guaranteed non hardware domains won't be able to steal the
> >>>>>>>>>>> allocation from the hardware domain).
> >>>>>>>>>>
> >>>>>>>>>> Getting complicated it seems. I have to admit that I wonder whether we
> >>>>>>>>>> wouldn't be better off leaving the current logic as is.
> >>>>>>>>>
> >>>>>>>>> Another possible solution (more like a band aid) is to increase the
> >>>>>>>>> buffer size from 4 pages to 8 or 16.  That would likely allow it to
> >>>>>>>>> cope fine with the high throughput of boot messages.
> >>>>>>>>
> >>>>>>>> You mean the buffer whose size is controlled by serial_tx_buffer?
> >>>>>>>
> >>>>>>> Yes.
> >>>>>>>
> >>>>>>>> On
> >>>>>>>> large systems one may want to simply make use of the command line
> >>>>>>>> option then; I don't think the built-in default needs changing. Or
> >>>>>>>> if so, then perhaps not statically at build time, but taking into
> >>>>>>>> account system properties (like CPU count).
> >>>>>>>
> >>>>>>> So how about we use:
> >>>>>>>
> >>>>>>> min(16384, ROUNDUP(1024 * num_possible_cpus(), 4096))
> >>>>>>
> >>>>>> That would _reduce_ size on small systems, wouldn't it? Originally
> >>>>>> you were after increasing the default size. But if you had meant
> >>>>>> max(), then I'd fear that on very large systems this may grow a little
> >>>>>> too large.
> >>>>>
> >>>>> See previous followup about my mistake of using min() instead of
> >>>>> max().
> >>>>>
> >>>>> On a system with 512 CPUs that would be 512KB; I don't think that's a
> >>>>> lot of memory, especially taking into account that a system with 512
> >>>>> CPUs should have a matching amount of memory, I would expect.
> >>>>>
> >>>>> It's true however that I very much doubt we would fill a 512K buffer,
> >>>>> so limiting to 64K might be a sensible starting point?
> >>>>
> >>>> Yeah, 64k could be a value to compromise on. What total size of
> >>>> output have you observed that triggered the making of this patch? Xen
> >>>> alone doesn't even manage to fill 16k on most of my systems ...
> >>>
> >>> I've tried on one of the affected systems now; it's an 8 CPU Kaby Lake
> >>> at 3.5GHz, and it manages to fill the buffer while booting Linux.
> >>>
> >>> My proposed formula won't fix this use case, so what about just
> >>> bumping the buffer to 32K by default, which does fix it?
> >>
> >> As said, suitably explained I could also agree with going to 64k. The
> >> question though is to what extent 32k, 64k, or ...
> >>
> >>> Or alternatively use the proposed formula, but adjust the buffer to be
> >>> between [32K,64K].
> >>
> >> ... this formula would cover a wide range of contemporary systems.
> >> Without such I can't really see what good a bump would do, as then
> >> many people may still find themselves in need of using the command
> >> line option to put in place a larger buffer.
> > 
> > I'm afraid I don't know how to make progress with this.
> > 
> > The current value is clearly too low for at least one of my systems.
> > I don't think it's feasible for me to propose a value or formula that
> > I can confirm will be suitable for all systems, hence I would suggest
> > increasing the buffer value to 32K as that does fix the problem on
> > that specific system (without claiming it's a value that would suit
> > all setups).
> > 
> > I agree that many people could still find themselves in need of
> > using the command line option, but I can assure you that the new buffer
> > value would fix the issue on at least one system, which should be enough as
> > a justification.
> 
> I'm afraid I view this differently. Dealing with individual systems is
> imo not a reason to change a default, when there is a command line
> option to adjust the value in question. And when, at the same time,
> the higher default might cause a waste of resources on at least one
> other system. As said before, I'm not going to object to bumping to 32k or
> even 64k, provided this has wider benefit and limited downsides. But
> with a justification of "this fixes one system" I'm not going to ack
> (but also not nak) such a change.

Sorry, I certainly have a different view on this, as I think we should
aim to provide sane defaults that allow for proper functioning of Xen,
unless it turns out those defaults could cause issues on other
systems.  In this case I don't see how bumping the default console
ring from 16K to 32K is going to cause issues elsewhere.

I'll prepare a patch to do that and send it to the list, in case anyone
else would like to Ack it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:04:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 10:04:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353683.580642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xDX-0003Qx-4c; Wed, 22 Jun 2022 10:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353683.580642; Wed, 22 Jun 2022 10:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xDX-0003Qq-1J; Wed, 22 Jun 2022 10:04:11 +0000
Received: by outflank-mailman (input) for mailman id 353683;
 Wed, 22 Jun 2022 10:04:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3xDW-0003Qg-6O; Wed, 22 Jun 2022 10:04:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3xDW-0001Wk-2e; Wed, 22 Jun 2022 10:04:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3xDV-0005gg-GV; Wed, 22 Jun 2022 10:04:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3xDV-0001aV-G2; Wed, 22 Jun 2022 10:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7uuqFRzzok88NCQhgl2P6axWgvYznKwMIr8kXqbHuXk=; b=tppY9uZxliLTGgIYcVn8IPwuCq
	zNIvG7AwH3wq+17C431lARgaxuALAkuwU2SlQNgGOyChHSzo2jpjtdlws19YekX8gBUu+6zxNfWyo
	qn+cwK63xdQDzn8z+/Cr9GaGrQzruT3krbxd0kkEYWvNElTYIkUOL5Np4RcsPl1GdMwI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171306-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171306: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b8a2d9675875cb9cd117025ebd48c9c31c6affb6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 10:04:09 +0000

flight 171306 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171306/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b8a2d9675875cb9cd117025ebd48c9c31c6affb6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  712 days
Failing since        151818  2020-07-11 04:18:52 Z  711 days  693 attempts
Testing same since   171306  2022-06-22 04:21:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 113975 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:05:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 10:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353691.580653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xEz-0003z9-GQ; Wed, 22 Jun 2022 10:05:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353691.580653; Wed, 22 Jun 2022 10:05:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xEz-0003z2-DG; Wed, 22 Jun 2022 10:05:41 +0000
Received: by outflank-mailman (input) for mailman id 353691;
 Wed, 22 Jun 2022 10:05:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Os2d=W5=citrix.com=prvs=1655f9567=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1o3xEy-0003yp-Gk
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 10:05:40 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dce625c7-f212-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 12:05:38 +0200 (CEST)
Received: from mail-dm6nam04lp2045.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 06:05:29 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by SA2PR03MB5691.namprd03.prod.outlook.com (2603:10b6:806:118::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 10:05:25 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::b402:44ba:be8:2308%4]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 10:05:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dce625c7-f212-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655892338;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=p3X8mF44Ye+gNBlaEb2kuL/soYaCDycAKq1cOiOXmUU=;
  b=UhZ/pDQZCwsaoxleVneXBs7AAgJJX1BfRhCge1MrwYSgeOsZbdek8CBn
   hfehidpM4OorOvUqeBTEH5gfIdFy48yjpthV1PmljMa0mNaASDDxcagDS
   iHIlL2tvIpbjHH3iesl6u6NIRFGNbspff0zlabtxLm/MGs0r/Ze6e2ovB
   A=;
X-IronPort-RemoteIP: 104.47.73.45
X-IronPort-MID: 73995798
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="asc'?scan'208";a="73995798"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZH6IcRsEFboGKSxQKjaf37P+W8F0cPxmZTQl7Vy3XKvsIcdjC2PC/qVlNSlWTO7pjeeBgP2hDI4t2NrST/Q5uGOOFBXwuQF8UC9VbT3yg1d2wGEdW8EpvwAt4xnrvj/+O5rxB2LtNhZRc2/Ya3QN/3aKKr61f4XogLuyZlzNc3UD8GN4o/0P8hzQAHUms0hOW4XqrsImwSHwaoPXwOvZYB11nSieqIk+KYaVixZn2650Ry5OQFCg1YweUM7fKIctc3jYoPx/arntD8zlVzBAp1J+N6t9D5/molwyHnLZczXdRsMYxTaA23g6djzW1t71QRnEopoyfy6MLLo5vEHNcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vHATz9a3VeiG49wv5gpmGPDG42GsV2Sg+b6A3HVOMCw=;
 b=OKLYoWPd5bgMQMBIFBXwuxY0H/0JxKUPPJRVR9gsKf1rem4fhT701j8Tfsau4ZC9kycZUFNuNgUJTFSun1h+Dw6UMwKaL3pPPVjtfmI57Yh5o2DdroBX3CbPurleXM8iLP34RJXU6UCUP6cgMEMb2T5DdWpoJeAfea2qX2Y7h+ApAnrse2sTecvKPXoXhJMwVx+2sd7/I5wc4Yu4Dx3TtOhqSp8C3AmKbQKILFxtHFqZUzVew0UG1f8f97+kiig1s+g7Sdy+0CVO+i/bw62uso0qaVKSJPAFaYqq/GXHYNsKsIi5TmiF3pi5dqtSMnleqAZCDKglB6E2e4EuDvspdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vHATz9a3VeiG49wv5gpmGPDG42GsV2Sg+b6A3HVOMCw=;
 b=tnAk9mC8aEbfjlk5EIKEKe0VIBkxHpnhjkksVWrhLqez8gW/kVl0xoKHwINR4+rPkPx0JdQ592Sp5k3fWBVE0gi63sdgwitrlRsxWltFzWpZ2tOG5Zj4BP8d/YhHGOeEv9vtoG3S3FtiagsnQltG4o0OuVSYbp89SlVnklXLNbE=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>, Matias Ezequiel Vara
 Larsen <matias.vara@vates.fr>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Thread-Topic: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Thread-Index: AQHYafsc/WlHBvFB/kqWUrR4CBI56K1bWOUAgAASTQA=
Date: Wed, 22 Jun 2022 10:05:25 +0000
Message-ID: <63208954-3C7A-4C91-97C3-8D6EA21F29C8@citrix.com>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
 <5b788e1a-d872-e318-1be5-8640fe887b9d@suse.com>
In-Reply-To: <5b788e1a-d872-e318-1be5-8640fe887b9d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ae10ce37-17ef-4f60-f84c-08da5436ba4f
x-ms-traffictypediagnostic: SA2PR03MB5691:EE_
x-microsoft-antispam-prvs:
 <SA2PR03MB569142DAEE257F99F002A45999B29@SA2PR03MB5691.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/signed;
	boundary="Apple-Mail=_A27804B2-1101-47D6-8B94-B22A797E32EF";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ae10ce37-17ef-4f60-f84c-08da5436ba4f
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jun 2022 10:05:25.6960
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: KQXlK5nqiM6WmFflNypoL+ah8ARipv5jHsIOOtNNWn3ij66pRE+w0P3yl5DNaf+AZYr+3+JKcGNbMVKvIlACRRkQ3t6qWiOe/xlXyhQneus=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5691

--Apple-Mail=_A27804B2-1101-47D6-8B94-B22A797E32EF
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
	charset=utf-8


> On 22 Jun 2022, at 09:59, Jan Beulich <jbeulich@suse.com> wrote:
>
[snip]
> In general I also have to admit that I'm not sure the exposed data really
> qualifies as a "resource", and hence I'm not really convinced of your use
> of the resource mapping interface as a vehicle.

I’m not sure if I’d call any of the things currently mappable via that interface (ioreq pages, vmcall buffers, etc) “resources”.  I’m not sure why the name was chosen, except perhaps that it was meant to be a more generalized form of “page” or “pages".

The alternate is to try to plumb through a new ad-hoc hypercall.  Andy suggested Matias use this interface specifically to avoid having to do that; and it sounds like he believes the interface was designed specifically for this kind of thing.

 -George

--Apple-Mail=_A27804B2-1101-47D6-8B94-B22A797E32EF
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmKy6WAACgkQshXHp8eE
G+34Tgf9G7VE0lOrDdV0pss0YGd39U8iR4xI4SFYmZMnjxU19iJQk4/Xo3COMALh
bO1fQuJkFducNyapE3EW9DmT0AfzPoUGapJ8+bBxvsRXjmSCErLxTGcvIjSmH3q8
FDFGFa3BWoG5F0/ouGbaw8jU5XjZgM/9b/47/5Gsm7UUg0LcKOAxqTWkwmXJq/9f
H89AvfpCOZGs9thepSmIQ8yziq/tMdyWdkeGZp5ZXX6PScJkyF1mXG1s34AqJS22
jKTVNKoY8eenu6fiqqktKTJkyN7M7Buc8bNbQseE25vuQpc4b3Su03lwS1YcRX43
wmYuEVu5zWj+UktSesSJ8f0GT13LlQ==
=f55s
-----END PGP SIGNATURE-----

--Apple-Mail=_A27804B2-1101-47D6-8B94-B22A797E32EF--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:09:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 10:09:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353701.580663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xIq-0004jB-5N; Wed, 22 Jun 2022 10:09:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353701.580663; Wed, 22 Jun 2022 10:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xIq-0004j4-2c; Wed, 22 Jun 2022 10:09:40 +0000
Received: by outflank-mailman (input) for mailman id 353701;
 Wed, 22 Jun 2022 10:09:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3xIo-0004iw-0p
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 10:09:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70080.outbound.protection.outlook.com [40.107.7.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bcd8536-f213-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 12:09:37 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6304.eurprd04.prod.outlook.com (2603:10a6:803:fd::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 10:09:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 10:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bcd8536-f213-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cgQfUuQNbh1CL1tZp3T09EuchIOYxDmsYw5c8nobc9NeHhiPg7RcrqSPEU/flhW8GPqng/ift5v+3fuI63NWh6zQ1uqqPgl97Qp93SpDGJY+NCRUoVpiq1P2b/hFX05ztYBssUx22z0ur0+VgwhTCnbvxBteO11lnCzoItdsyZXxw63DUdL78HR4lsueqOt10xVvoYh/B/DcviaMgxjSkZp2An04f+J8zNmK5WHUZBQR10/Jz2B/zWkbJ7zrRvyIOvfppYcUecjVf3RKrBOSODpNBW3e5A3DQ7eoDdU3lwmIXpu/eFDcW0c7tgVdEYHSvtHKnZ+Tb/Y1tXayQ8fyOQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=B7RrT7m0OtSEOmtV2jiIOQrTurbm5yswm9owuVzDt9w=;
 b=MPtdaFBVfRXyaslfuOSHXARNajMo0+yDK6DMiGnKfPVuYJIoLk78nUc7d3/LJfYbV27JYQUsfxR35YVxIFZ158Qh1wTpsFoLfMO7jj/52YwBYP49EZrNWYsdc4HKvN1kkUNzaW4divqp0aFe8JpEv0eWkT6GD0hZgTOAmW0c3BzuhSXYBfjeCd/nc/y1R4fjo/Xy9kXA/7xbwjNEf5KhUJmcmFp/L9t09xJjuE4NbKiVZ1gGzkQeYaFWeB59jUpFxpFfANYf5zHMyPQmuxJLlauFU/dOiKytmdoN4qYYjmDuLQJqdauccWlyJRPDLTrQ0cvQKwY4ZMc8zc361XBZYg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B7RrT7m0OtSEOmtV2jiIOQrTurbm5yswm9owuVzDt9w=;
 b=CvWW3j+UDLlUsqyzhdP0DorJwBsCXTBXi8ojx08tXnIRUziPNH4T0aMq2Hz8hseCTwvzyjU4j15NWz7W0aUCk1W3+IeH5WHDSBug8kGOEgi+ijpgAdgaAhBKx4sRn4tyEUx/LY6FubHocVP8oVQHCZvskhe0zX9DBQyHgKuktUIn71G04wPhEdwBa0Ralo9cM37DtpmbFmbSpPSNSiaG7D3N2reTi12SiTloPf/vueCO7+HMS6lGbGxFVBD4oWTrfSnajplFxKTKj/hA6veW40OOf2zZRJibDgD7T3Xg0fxufkRnr+NBrbJ1u1V2c2G1VkzkQbWOWmDtHZFodYkrkQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <18494326-7cb2-3fe4-e5e9-98e2b6260abc@suse.com>
Date: Wed, 22 Jun 2022 12:09:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Content-Language: en-US
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>,
 Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Dario Faggioli <dfaggioli@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
 <5b788e1a-d872-e318-1be5-8640fe887b9d@suse.com>
 <63208954-3C7A-4C91-97C3-8D6EA21F29C8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <63208954-3C7A-4C91-97C3-8D6EA21F29C8@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0415.eurprd06.prod.outlook.com
 (2603:10a6:20b:461::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f28d18f1-b2e1-4e3b-91d0-08da54374dd1
X-MS-TrafficTypeDiagnostic: VI1PR04MB6304:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB630498FFC695F7B998D36BEAB3B29@VI1PR04MB6304.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f28d18f1-b2e1-4e3b-91d0-08da54374dd1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 10:09:33.3267
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ur+kWV1fLnTv8kxE040bnEUnOnBtrg80EnKldPAaOLtTI2rOKWZK11YVgU/4j+RvLHFPX6DwKmKC67aYlaUrxg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6304

On 22.06.2022 12:05, George Dunlap wrote:
>> On 22 Jun 2022, at 09:59, Jan Beulich <jbeulich@suse.com> wrote:
>>
> [snip]
>> In general I also have to admit that I'm not sure the exposed data really
>> qualifies as a "resource", and hence I'm not really convinced of your use
>> of the resource mapping interface as a vehicle.
> 
> I’m not sure if I’d call any of the things currently mappable via that interface (ioreq pages, vmcall buffers, etc) “resources”.  I’m not sure why the name was chosen, except perhaps that it was meant to be a more generalized form of “page” or “pages".
> 
> The alternate is to try to plumb through a new ad-hoc hypercall.  Andy suggested Matias use this interface specifically to avoid having to do that; and it sounds like he believes the interface was designed specifically for this kind of thing.

Okay. I guess I wasn't aware of that suggestion.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:14:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 10:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353710.580675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xN5-00068V-O1; Wed, 22 Jun 2022 10:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353710.580675; Wed, 22 Jun 2022 10:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3xN5-00068O-Js; Wed, 22 Jun 2022 10:14:03 +0000
Received: by outflank-mailman (input) for mailman id 353710;
 Wed, 22 Jun 2022 10:14:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPuW=W5=citrix.com=prvs=165a25834=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o3xN4-00068I-5M
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 10:14:02 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07973933-f214-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 12:14:00 +0200 (CEST)
Received: from mail-dm6nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 06:13:57 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5079.namprd03.prod.outlook.com (2603:10b6:a03:1f2::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 10:13:52 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 10:13:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07973933-f214-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655892840;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=rwNMTYaxCWkAOd/QkHrUWoYX8jFFY1F7nOuXkaZy19g=;
  b=Wh/HoNwGpfhSJl57urlwcLqgLGMhXw7znt5FgmAMTR0+udRxaNrNWAc3
   0Kfx3dRG4gfr4TgOmfgLD78jRAEQLnZHnpYUWmXdcydawqctKrRRzi1ZE
   taCuqt+F+Wd8zagS79XaY7ztlu7OmA9jGPQ4tDO4b1AHz6wbVJ1A/7tkx
   0=;
X-IronPort-RemoteIP: 104.47.73.41
X-IronPort-MID: 73996298
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="scan'208";a="73996298"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IBbhnP1p3zL9n5ICvHDzV2suf7JvsIPeoi1MaBkTIhtujgkDxTnlg5oSL+Z8kacVnNwE4cRXLKt+Y6lnU+5cyIg8qrchOa7jilaZdIVHi6Y8CCl4jOIqvZylWcai2fkG8SFDN1nL59c9Zivs2rxi1pM5RoF6KeOg2ElK1ifF7Y7xQj1Vn9N2ZOUeISF3zf/uTKbW9KrloAmFYw7W7eoE8wJQ/yUWMYC6dcnIcZKMdUUmgatTfuLY2yZhdpGWfZHJx4878f2oB1MrwzwXULPxB5cGFH+IcP1rcygPxwLDNs8zUpI9GDTTN3Y1eN8NyM/0L63HAJ295lC68s0Pj7p8RA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rwNMTYaxCWkAOd/QkHrUWoYX8jFFY1F7nOuXkaZy19g=;
 b=FRB5FXCW38ZsLsuIl2RcSlU6wbDiffCNvTWHba4TO/5cG9dab/LahKxK8/jTXCk+nvp0MA8mOlMeHiex86epLcVkWgLtLwbGFQi1ifxH/3cPhfKroeJQMiMG07LTOeYv0fzJbDIeFF5CitG7l8BKAaQtdNpG4kdho+WfRtCiuhkJ2gi/Edsvcr/9cFM/pvy5+v9IDQWH/97WKUTKO4WAo38ZKsLhhTJNwTctuTLh+0rTVjbmdLRQIcbBRxZcdDET1jwlMZy9WCoPpLxv+DZHochKcRrjRg/Y96wDbVZFP9+v6FvmiDdbDEQnRDnA/XyhQp73qOXMI6LDsPoAJp//+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rwNMTYaxCWkAOd/QkHrUWoYX8jFFY1F7nOuXkaZy19g=;
 b=Rm0nlY9x86VXSe82hBV6BXYXGOTWGaylHRE0obkbKQQ84Wpk/HtZwLM2e+mH29l/nd6x3+43zu/Bm/IOH4D8/hx82nZ9mJE4blT7tz2F3zvhlcibzP087M5870TMbazyfRSO2v2m0D03m6fas/PyaC8a9B2prewpxLdpGGw18KU=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George
 Dunlap <George.Dunlap@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/mm: Add an early PGT_validated exit in
 _get_page_type()
Thread-Topic: [PATCH] x86/mm: Add an early PGT_validated exit in
 _get_page_type()
Thread-Index: AQHYgje+Hnf8qY51yESHaKuTNkI+fa1bOL6AgAAEXQA=
Date: Wed, 22 Jun 2022 10:13:52 +0000
Message-ID: <92649f5b-1454-e3cf-0abb-1da3a0edfa41@citrix.com>
References: <20220617104739.7861-1-andrew.cooper3@citrix.com>
 <3066680f-6213-7190-3f2d-d05edae723c1@suse.com>
In-Reply-To: <3066680f-6213-7190-3f2d-d05edae723c1@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d44b8ca4-45ef-44ca-0720-08da5437e854
x-ms-traffictypediagnostic: BY5PR03MB5079:EE_
x-microsoft-antispam-prvs:
 <BY5PR03MB5079E43801A99F07A2B51C24BAB29@BY5PR03MB5079.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <866D7D898840B642B9CAD4F9DD256D38@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d44b8ca4-45ef-44ca-0720-08da5437e854
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jun 2022 10:13:52.3945
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: JZskd4NmHuhQ07WUXxPTr0r/CayhIxzNARxyCZtSsp1cSy+GcdiDQub7tTHJSJ0D3SfdedLX0jdjJHClEmBWF5RcJySLNXLFiarTnVfpeMs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5079

On 22/06/2022 10:58, Jan Beulich wrote:
> On 17.06.2022 12:47, Andrew Cooper wrote:
>> This is a continuation of the cleanup and commenting in:
>>   9186e96b199e ("x86/pv: Clean up _get_page_type()")
>>   8cc5036bc385 ("x86/pv: Fix ABAC cmpxchg() race in _get_page_type()")
>>
>> With the re-arranged and newly commented logic, it's far more clear that the
>> second half of _get_page_type() only has work to do for page validation.
> To be honest "far more clear" reads misleading to me: Part of the re-
> arrangement was to move later the early setting of PGT_validated for
> PGT_writable pages, without which the stated fact wasn't entirely true
> prior to the re-arrangement. How about s/far more/now/ ?
>
>> Introduce an early exit for PGT_validated.  This makes the fastpath marginally
>> faster, and simplifies the subsequent logic as it no longer needs to exclude
>> the fully validated case.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Preferably with the wording above adjusted:
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Ok.

>> Not that it's relevant, but bloat-o-meter says:
>>
>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-300 (-300)
>>   Function                                         old     new   delta
>>   _get_page_type                                   6618    6318    -300
>>
>> which is more impressive than I was expecting.
> And I have to admit I'm having trouble seeing why that would be.

I was surprised too, but it's deterministic with GCC 11.  I initially
disbelieved it and did full clean rebuilds.

validate_page() is fully inlined, and there were a reasonable number of
jmp.d32 -> jmp.d8 changes, but I very much doubt there are 75 of them.
I can only assume there's a reasonable chunk of logic which succumbs to DCE.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:16:59 2022
Message-ID: <386e765e-8bb8-32a0-9170-11db3978a17a@suse.com>
Date: Wed, 22 Jun 2022 12:16:51 +0200
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220620070245.77979-6-michal.orzel@arm.com>

On 20.06.2022 09:02, Michal Orzel wrote:
> --- a/xen/include/public/physdev.h
> +++ b/xen/include/public/physdev.h
> @@ -211,8 +211,8 @@ struct physdev_manage_pci_ext {
>      /* IN */
>      uint8_t bus;
>      uint8_t devfn;
> -    unsigned is_extfn;
> -    unsigned is_virtfn;
> +    unsigned int is_extfn;
> +    unsigned int is_virtfn;

It is wrong for us to use unsigned (or unsigned int) here and in sysctl.h.
It should be uint32_t instead, and I think this is a great opportunity to
correct that mistake.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:20:37 2022
Subject: Re: [PATCH v3 0/3] virtio: support requiring restricted access per
 device
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-arch@vger.kernel.org
Cc: Heiko Carstens <hca@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>,
 Alexander Gordeev <agordeev@linux.ibm.com>,
 Christian Borntraeger <borntraeger@linux.ibm.com>,
 Sven Schnelle <svens@linux.ibm.com>,
 Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Arnd Bergmann <arnd@arndb.de>, Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 linux-arm-kernel@lists.infradead.org
References: <20220622063838.8854-1-jgross@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7eb66aec-df40-4e12-8211-8a6db4ad6060@gmail.com>
Date: Wed, 22 Jun 2022 13:20:30 +0300
In-Reply-To: <20220622063838.8854-1-jgross@suse.com>


On 22.06.22 09:38, Juergen Gross wrote:

Hello Juergen

> Instead of an all or nothing approach add support for requiring
> restricted memory access per device.
>
> Changes in V3:
> - new patches 1 + 2
> - basically complete rework of patch 3
>
> Juergen Gross (3):
>    virtio: replace restricted mem access flag with callback
>    kernel: remove platform_has() infrastructure
>    xen: don't require virtio with grants for non-PV guests
>
>   MAINTAINERS                            |  8 --------
>   arch/arm/xen/enlighten.c               |  4 +++-
>   arch/s390/mm/init.c                    |  4 ++--
>   arch/x86/mm/mem_encrypt_amd.c          |  4 ++--
>   arch/x86/xen/enlighten_hvm.c           |  4 +++-
>   arch/x86/xen/enlighten_pv.c            |  5 ++++-
>   drivers/virtio/Kconfig                 |  4 ++++
>   drivers/virtio/Makefile                |  1 +
>   drivers/virtio/virtio.c                |  4 ++--
>   drivers/virtio/virtio_anchor.c         | 18 +++++++++++++++++
>   drivers/xen/Kconfig                    |  9 +++++++++
>   drivers/xen/grant-dma-ops.c            | 10 ++++++++++
>   include/asm-generic/Kbuild             |  1 -
>   include/asm-generic/platform-feature.h |  8 --------
>   include/linux/platform-feature.h       | 19 ------------------
>   include/linux/virtio_anchor.h          | 19 ++++++++++++++++++
>   include/xen/xen-ops.h                  |  6 ++++++
>   include/xen/xen.h                      |  8 --------
>   kernel/Makefile                        |  2 +-
>   kernel/platform-feature.c              | 27 --------------------------
>   20 files changed, 84 insertions(+), 81 deletions(-)
>   create mode 100644 drivers/virtio/virtio_anchor.c
>   delete mode 100644 include/asm-generic/platform-feature.h
>   delete mode 100644 include/linux/platform-feature.h
>   create mode 100644 include/linux/virtio_anchor.h
>   delete mode 100644 kernel/platform-feature.c

I have tested the series on an Arm64 guest using the Xen hypervisor and
didn't notice any issues.


I assigned two virtio-mmio devices to the guest:
#1 - grant DMA device (the required DT binding is present, so
xen_is_grant_dma_device() returns true), virtio-mmio modern transport
(the backend offers VIRTIO_F_VERSION_1 and VIRTIO_F_ACCESS_PLATFORM)
#2 - non-grant DMA device (the required DT binding is absent, so
xen_is_grant_dma_device() returns false), virtio-mmio legacy transport
(the backend offers neither flag)


# CONFIG_XEN_VIRTIO is not set

both work, and neither uses grant mappings for virtio


CONFIG_XEN_VIRTIO=y
# CONFIG_XEN_VIRTIO_FORCE_GRANT is not set

both work: #1 uses grant mappings for virtio, #2 does not


CONFIG_XEN_VIRTIO=y
CONFIG_XEN_VIRTIO_FORCE_GRANT=y

only #1 works and uses grant mappings for virtio; #2 is rejected by
validation in virtio_features_ok()


You can add my:
Tested-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> # Arm64 guest using Xen


>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:25:21 2022
Message-ID: <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
Date: Wed, 22 Jun 2022 12:25:10 +0200
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
To: Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220620070245.77979-1-michal.orzel@arm.com>

On 20.06.2022 09:02, Michal Orzel wrote:
> This series fixes all the findings for MISRA C 2012 Rule 8.1, reported by
> cppcheck 2.7 with the misra addon, for Arm (arm32/arm64 - target allyesconfig).
> Fixing this rule comes down to replacing the implicit 'unsigned' with the
> explicit 'unsigned int' type, as there are no other kinds of violations of
> that rule in the Xen codebase.

I'm puzzled, I have to admit. While I agree with all the examples in the
doc, I notice that there's no instance of "signed" or "unsigned" there.
This matches my understanding that "unsigned" and "signed" on their own
(just like "long") are proper types, and hence the omission of "int"
there is not an "omission of an explicit type".

Nevertheless I think we have had the intention to use "unsigned int"
everywhere, but simply for cosmetic / style reasons (while I didn't ever
see anyone request the use of "long int" in place of "long", despite it
also being possible to combine with "double"), so I'm happy to see this
being changed. Just that (for now) I don't buy the justification.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:36:43 2022
Message-ID: <155d8f62-aece-8fb8-8be2-6d21d20926d7@suse.com>
Date: Wed, 22 Jun 2022 12:36:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 9/9] drivers/acpi: Use explicitly specified types
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-10-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
In-Reply-To: <20220620070245.77979-10-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 20.06.2022 09:02, Michal Orzel wrote:
> According to MISRA C 2012 Rule 8.1, types shall be explicitly
> specified. Fix all the findings reported by cppcheck with the misra addon
> by replacing the implicit type 'unsigned' with the explicit 'unsigned int'.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
> This patch may not be applicable as these files come from Linux.

We've diverged quite far, so the Linux origin isn't really a concern
(anymore).

> --- a/xen/drivers/acpi/tables/tbfadt.c
> +++ b/xen/drivers/acpi/tables/tbfadt.c
> @@ -235,7 +235,7 @@ void __init acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 lengt
>  		ACPI_WARNING((AE_INFO,
>  			      "FADT (revision %u) is longer than ACPI 5.0 version,"
>  			      " truncating length %u to %zu",
> -			      table->revision, (unsigned)length,
> +			      table->revision, (unsigned int)length,

Since we generally try to avoid casts where they aren't needed - did you
consider dropping the cast here instead of fiddling with it? As I understand
it, this wouldn't be a problem: we assume sizeof(int) >= 4, and types
narrower than int undergo the default argument promotions to int (which in
turn is generally fine to print with %u). Strictly speaking it would want
to be PRIu32 ...

> --- a/xen/drivers/acpi/tables/tbutils.c
> +++ b/xen/drivers/acpi/tables/tbutils.c
> @@ -481,7 +481,7 @@ acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags)
>  			if (ACPI_FAILURE(status)) {
>  				ACPI_WARNING((AE_INFO,
>  					      "Truncating %u table entries!",
> -					      (unsigned)
> +					      (unsigned int)
>  					      (acpi_gbl_root_table_list.size -
>  					       acpi_gbl_root_table_list.
>  					       count)));

Same here then, except PRIu32 wouldn't be correct to use in this case.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:45:39 2022
Message-ID: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
Date: Wed, 22 Jun 2022 12:45:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
From: Juergen Gross <jgross@suse.com>
Subject: Tentative fix for dom0 boot problem
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------envwsNKROTDCMCw20S0Myj33"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------envwsNKROTDCMCw20S0Myj33
Content-Type: multipart/mixed; boundary="------------3BYPQVRNjCFz0IeZEydB0qyT";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
Subject: Tentative fix for dom0 boot problem

--------------3BYPQVRNjCFz0IeZEydB0qyT
Content-Type: multipart/mixed; boundary="------------lF0NVraN6G0wylrX4pR4NAAH"

--------------lF0NVraN6G0wylrX4pR4NAAH
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Julien,

could you please test the attached patches?


Juergen
--------------lF0NVraN6G0wylrX4pR4NAAH
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-x86-xen-use-clear_bss-for-Xen-PV-guests.patch"
Content-Disposition: attachment;
 filename="0001-x86-xen-use-clear_bss-for-Xen-PV-guests.patch"
Content-Transfer-Encoding: 7bit

From 7d6734e517b65c3702a279fc2c1fff5ba7927ef9 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Wed, 22 Jun 2022 12:19:55 +0200
Subject: [PATCH 1/2] x86/xen: use clear_bss() for Xen PV guests

Instead of clearing the bss area in assembly code, use the clear_bss()
function.

This requires to pass the start_info address as parameter to
xen_start_kernel() in order to avoid the xen_start_info being zeroed
again.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/setup.h |  3 +++
 arch/x86/kernel/head64.c     |  2 +-
 arch/x86/xen/enlighten_pv.c  |  8 ++++++--
 arch/x86/xen/xen-head.S      | 10 +---------
 4 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f8b9ee97a891..f37cbff7354c 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -120,6 +120,9 @@ void *extend_brk(size_t size, size_t align);
 	static char __brk_##name[size]
 
 extern void probe_roms(void);
+
+void clear_bss(void);
+
 #ifdef __i386__
 
 asmlinkage void __init i386_start_kernel(void);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index bd4a34100ed0..e7e233209a8c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -426,7 +426,7 @@ void __init do_early_exception(struct pt_regs *regs, int trapnr)
 
 /* Don't add a printk in there. printk relies on the PDA which is not initialized 
    yet. */
-static void __init clear_bss(void)
+void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c..70fb2ea85e90 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
 extern void early_xen_iret_patch(void);
 
 /* First C function to be called on Xen boot */
-asmlinkage __visible void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 {
 	struct physdev_set_iopl set_iopl;
 	unsigned long initrd_start = 0;
 	int rc;
 
-	if (!xen_start_info)
+	if (!si)
 		return;
 
+	clear_bss();
+
+	xen_start_info = si;
+
 	__text_gen_insn(&early_xen_iret_patch,
 			JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
 			JMP32_INSN_SIZE);
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 3a2cd93bf059..13af6fe453e3 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -48,15 +48,6 @@ SYM_CODE_START(startup_xen)
 	ANNOTATE_NOENDBR
 	cld
 
-	/* Clear .bss */
-	xor %eax,%eax
-	mov $__bss_start, %rdi
-	mov $__bss_stop, %rcx
-	sub %rdi, %rcx
-	shr $3, %rcx
-	rep stosq
-
-	mov %rsi, xen_start_info
 	mov initial_stack(%rip), %rsp
 
 	/* Set up %gs.
@@ -71,6 +62,7 @@ SYM_CODE_START(startup_xen)
 	cdq
 	wrmsr
 
+	mov	%rsi, %rdi
 	call xen_start_kernel
 SYM_CODE_END(startup_xen)
 	__FINIT
-- 
2.35.3
--------------lF0NVraN6G0wylrX4pR4NAAH
Content-Type: text/x-patch; charset=UTF-8;
 name="0002-x86-fix-setup-of-brk-area.patch"
Content-Disposition: attachment;
 filename="0002-x86-fix-setup-of-brk-area.patch"
Content-Transfer-Encoding: 7bit

From ff35bc33c5ee184868d41bf76dcc7322191f9fd3 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Wed, 22 Jun 2022 12:17:47 +0200
Subject: [PATCH 2/2] x86: fix setup of brk area

Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
put the brk area into the .bss segment, causing it not to be cleared
initially. As the brk area is used to allocate early page tables, these
might contain garbage in not explicitly written entries.

Fix that by letting clear_bss() clear the brk area, too.

Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/head64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index e7e233209a8c..6a3cfaf6b72a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -430,6 +430,8 @@ void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }
 
 static unsigned long get_cmd_line_ptr(void)
-- 
2.35.3
--------------lF0NVraN6G0wylrX4pR4NAAH--

--------------3BYPQVRNjCFz0IeZEydB0qyT--

--------------envwsNKROTDCMCw20S0Myj33--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:50:40 2022
Message-ID: <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
Date: Wed, 22 Jun 2022 11:50:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: Tentative fix for dom0 boot problem
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 22/06/2022 11:45, Juergen Gross wrote:
> Julien,

Hi Juergen,

> could you please test the attached patches?

I am getting the following error:

(XEN) d0v0 Unhandled: vec 14, #PF[0003]
(XEN) Pagetable walk from ffffffff84001000:
(XEN)  L4[0x1ff] = 000000046c004067 0000000000004004
(XEN)  L3[0x1fe] = 000000046c003067 0000000000004003
(XEN)  L2[0x020] = 000000046c024067 0000000000004024
(XEN)  L1[0x001] = 001000046c001025 0000000000004001
(XEN) domain_crash_sync called from entry.S: fault at ffff82d040325906 x86_64/entry.S#create_bounce_frame+0x15d/0x177
(XEN) Domain 0 (vcpu#0) crashed on cpu#1:
(XEN) ----[ Xen-4.17-unstable  x86_64  debug=y  Tainted:   C    ]----
(XEN) CPU:    1
(XEN) RIP:    e033:[<ffffffff832a3481>]
(XEN) RFLAGS: 0000000000000206   EM: 1   CONTEXT: pv guest (d0v0)
(XEN) rax: 0000000000000000   rbx: ffffffff84000000   rcx: 000000000002b000
(XEN) rdx: ffffffff84000000   rsi: ffffffff84000000   rdi: ffffffff84001000
(XEN) rbp: 0000000000000000   rsp: ffffffff82a03e60   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000003426e0
(XEN) cr3: 000000046c001000   cr2: ffffffff84001000
(XEN) fsb: 0000000000000000   gsb: ffffffff83271000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff82a03e60:
(XEN)    000000000002b000 0000000000000000 0000000000000003 ffffffff832a3481
(XEN)    000000010000e030 0000000000010006 ffffffff82a03ea8 000000000000e02b
(XEN)    0000000000000000 ffffffff832ae884 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffffffff832a317f 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Hardware Dom0 crashed: rebooting machine in 5 seconds.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 10:57:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 10:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353775.580755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3y2y-0005Zt-RQ; Wed, 22 Jun 2022 10:57:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353775.580755; Wed, 22 Jun 2022 10:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3y2y-0005Zm-Oh; Wed, 22 Jun 2022 10:57:20 +0000
Received: by outflank-mailman (input) for mailman id 353775;
 Wed, 22 Jun 2022 10:57:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4/ZK=W5=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3y2x-0005Zg-0f
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 10:57:19 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 14729281-f21a-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 12:57:17 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 856C613D5;
 Wed, 22 Jun 2022 03:57:16 -0700 (PDT)
Received: from [10.57.38.102] (unknown [10.57.38.102])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B0D983F66F;
 Wed, 22 Jun 2022 03:57:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14729281-f21a-11ec-bd2d-47488cf2e6aa
Message-ID: <8c1a2037-1bdc-cdb8-7c57-5e84448cc1d0@arm.com>
Date: Wed, 22 Jun 2022 12:56:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 5/9] include/public: Use explicitly specified types
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-6-michal.orzel@arm.com>
 <386e765e-8bb8-32a0-9170-11db3978a17a@suse.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <386e765e-8bb8-32a0-9170-11db3978a17a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit



On 22.06.2022 12:16, Jan Beulich wrote:
> On 20.06.2022 09:02, Michal Orzel wrote:
>> --- a/xen/include/public/physdev.h
>> +++ b/xen/include/public/physdev.h
>> @@ -211,8 +211,8 @@ struct physdev_manage_pci_ext {
>>      /* IN */
>>      uint8_t bus;
>>      uint8_t devfn;
>> -    unsigned is_extfn;
>> -    unsigned is_virtfn;
>> +    unsigned int is_extfn;
>> +    unsigned int is_virtfn;
> 
> It is wrong for us to use unsigned (or unsigned int) here and in sysctl.h.
> It should be uint32_t instead, and I think this is a great opportunity to
> correct that mistake.
> 
That is perfectly fine for me to do.

> Jan

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 11:09:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 11:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353785.580767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yEd-00077B-UW; Wed, 22 Jun 2022 11:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353785.580767; Wed, 22 Jun 2022 11:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yEd-000774-RH; Wed, 22 Jun 2022 11:09:23 +0000
Received: by outflank-mailman (input) for mailman id 353785;
 Wed, 22 Jun 2022 11:09:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4/ZK=W5=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3yEc-00076y-Gw
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 11:09:22 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id c33dca3b-f21b-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 13:09:19 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4D71013D5;
 Wed, 22 Jun 2022 04:09:19 -0700 (PDT)
Received: from [10.57.38.102] (unknown [10.57.38.102])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 658DB3F66F;
 Wed, 22 Jun 2022 04:09:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c33dca3b-f21b-11ec-b725-ed86ccbb4733
Message-ID: <b8510774-f000-8b66-ebab-4286567f7747@arm.com>
Date: Wed, 22 Jun 2022 13:09:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 9/9] drivers/acpi: Use explicitly specified types
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-10-michal.orzel@arm.com>
 <155d8f62-aece-8fb8-8be2-6d21d20926d7@suse.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <155d8f62-aece-8fb8-8be2-6d21d20926d7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit



On 22.06.2022 12:36, Jan Beulich wrote:
> On 20.06.2022 09:02, Michal Orzel wrote:
>> According to MISRA C 2012 Rule 8.1, types shall be explicitly
>> specified. Fix all the findings reported by cppcheck with misra addon
>> by substituting implicit type 'unsigned' to explicit 'unsigned int'.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>> This patch may not be applicable as these files come from Linux.
> 
> We've diverged quite far, so the Linux origin isn't really a concern
> (anymore).
> 
Ok.
>> --- a/xen/drivers/acpi/tables/tbfadt.c
>> +++ b/xen/drivers/acpi/tables/tbfadt.c
>> @@ -235,7 +235,7 @@ void __init acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 lengt
>>  		ACPI_WARNING((AE_INFO,
>>  			      "FADT (revision %u) is longer than ACPI 5.0 version,"
>>  			      " truncating length %u to %zu",
>> -			      table->revision, (unsigned)length,
>> +			      table->revision, (unsigned int)length,
> 
> Since we generally try to avoid casts where not needed - did you consider
> dropping the cast here instead of fiddling with it? Aiui this wouldn't be
> a problem since we assume sizeof(int) >= 4 and since types more narrow
> than int would be converted to int (which in turn is generally printing
> okay with %u). Strictly speaking it would want to be PRIu32 ...
> 
The reason I did not consider it was to keep the review easier, by performing only the mechanical change needed to fix rule 8.1. However, I'm fully OK with dropping the cast.

Changing the format specifier to PRIu32 for length would also require using PRIu8 for table->revision, for consistency.

>> --- a/xen/drivers/acpi/tables/tbutils.c
>> +++ b/xen/drivers/acpi/tables/tbutils.c
>> @@ -481,7 +481,7 @@ acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags)
>>  			if (ACPI_FAILURE(status)) {
>>  				ACPI_WARNING((AE_INFO,
>>  					      "Truncating %u table entries!",
>> -					      (unsigned)
>> +					      (unsigned int)
>>  					      (acpi_gbl_root_table_list.size -
>>  					       acpi_gbl_root_table_list.
>>  					       count)));
> 
> Same here then, except PRIu32 wouldn't be correct to use in this case.
> 
Why is that, given that both size and count are of type u32?

> Jan

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 11:19:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 11:19:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353795.580778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yOF-00008z-U6; Wed, 22 Jun 2022 11:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353795.580778; Wed, 22 Jun 2022 11:19:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yOF-00008s-QS; Wed, 22 Jun 2022 11:19:19 +0000
Received: by outflank-mailman (input) for mailman id 353795;
 Wed, 22 Jun 2022 11:19:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3yOE-00008i-VX; Wed, 22 Jun 2022 11:19:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3yOE-0002rb-Rb; Wed, 22 Jun 2022 11:19:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o3yOE-0007cu-Io; Wed, 22 Jun 2022 11:19:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o3yOE-0003pr-IL; Wed, 22 Jun 2022 11:19:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=94ABl8syfLkrv2HwQAik5fc5HJyvgQSN128RCfehTxE=; b=V+0JRBOF1NqQHqLeEmWyKrSqrt
	rzE1iwJ+JZR55O4XvzzOpFQzVkuHlpOssEiyJTZC9tvLgoV0FyqKTM0v+OgrsZ0mfAoKPMupES0Uy
	/MKqDq/hvMuEhoE+4VeitM0iRr2ijqdexna2kEFSC/T8hY53ZuL9VyjM8C36GSFuRmYU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171305-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171305: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
X-Osstest-Versions-That:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 11:19:18 +0000

flight 171305 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171305/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd        7 xen-install                fail pass in 171295

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171295
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171295
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171295
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171295
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171295
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171295
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171295
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171295
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171295
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171295
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171295
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171295
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637
baseline version:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637

Last test of basis   171305  2022-06-22 01:53:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 11:45:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 11:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353806.580789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ynM-0003QE-58; Wed, 22 Jun 2022 11:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353806.580789; Wed, 22 Jun 2022 11:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ynM-0003Q7-1b; Wed, 22 Jun 2022 11:45:16 +0000
Received: by outflank-mailman (input) for mailman id 353806;
 Wed, 22 Jun 2022 11:45:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3ynK-0003Q1-9I
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 11:45:14 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2068.outbound.protection.outlook.com [40.107.22.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c6a258f4-f220-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 13:45:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB6PR0401MB2470.eurprd04.prod.outlook.com (2603:10a6:4:35::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 11:45:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 11:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6a258f4-f220-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ve7+wSibFQnMqErN57DDewXrexvysNbPM/L4AGESjeuZPLSHsO7tQw1A//aKDkXQtlUo50mI/ZeWHv3xBKraY7D3bzyP9yuAHeoKHxmZJ7W0YHrerFtma3yLSAO+1zfuMmy9WsRhMtnabZyHpRqPd8OLeXh+J/rmC2kFhdiRsiF4nK76kLw9D8IIEvws1E/aLnj3xxSvSSXDbKYIlI4SxGXCC5waWJHi/ndDsNWvDcHNuY7lygJoc13ytWK4xIxS4Ow1mTx8EicdxpQ+v9+pRbAHcXvidliiYFJNKvrqs7ovqmdzZnIHDENyOD4eFHxE2mCDS65I7Knew13ZLtORLQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6ruDuU7s8jqWCHuZ9KyMVhPKXPTUir8i63pED/xwDmM=;
 b=CbQ6za2HneH2rwtw/rBCpzKNJUJTk0+3RoQoWODg+44j4o9xeY5c4Zg4t0D49fmkm0sfeINJVJuMqC4ZCYaPL+0xw1nyCENQUiL39JO1wkoRHj46NAMbsRi+FhCabkBuytaP6ccbq5UIdiwP5TuNbjJmKp6alhTnEd6hjUJBoYYL8mGt6Il5LW2V+4mwJw8U25d1TxAypafIxZuBddlFP8UqghO1JJgYRYu3w/DPlTHfboBfuqSnwnrWPXz9TQNNCZ+vY2gcXuIAAoLgAsLQegNyTGalBi3CKbI7DrIW0kJnSqN480Duny4Qw+n/KOOEoq6UALVoUgJ/oKgYgUXevQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6ruDuU7s8jqWCHuZ9KyMVhPKXPTUir8i63pED/xwDmM=;
 b=2uulOCpMS3xP6nwunapSdbG62redbLiG07k3a3nLR68bkSVq2FqDNRrGiJy59VNubpWfAZtKuTKcDTKhkXT5vzWmUj74GgC94zQfXcSA1V8DRkrvsOhLI7FwUR5mElloC9c53A8VeavI9Dp8JBzaLTtYpdE5ql4Oh1WYvSLsbi//rNns/ixLaut2mXmPL4QDqaoDHVjmspUPMFnrRWSmPCc/Svx4eIhXFA9oieY36spfApAA8dDYFHKWq8hwmfsJv7cCd+FfBtNSmuRouoGvcc9vzdlc81BaUoiTU1xDHrytdc+xIXTCGi/hNw4usuOn1gyj6XPBVQWH+uhFyETBLw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fab48596-2efe-a3e7-1531-11527bd72d38@suse.com>
Date: Wed, 22 Jun 2022 13:45:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 9/9] drivers/acpi: Use explicitly specified types
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>
Cc: xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <20220620070245.77979-10-michal.orzel@arm.com>
 <155d8f62-aece-8fb8-8be2-6d21d20926d7@suse.com>
 <b8510774-f000-8b66-ebab-4286567f7747@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b8510774-f000-8b66-ebab-4286567f7747@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0044.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ac586f8d-2667-4bca-1e58-08da5444a8fd
X-MS-TrafficTypeDiagnostic: DB6PR0401MB2470:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0401MB2470A8A9835FEE80D77C9D83B3B29@DB6PR0401MB2470.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ac586f8d-2667-4bca-1e58-08da5444a8fd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 11:45:09.7735
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0NGTmWFe9OAfoMWYHUZeuc3RvzoJV/Hp3q7F7KN543u+Q2gmnFJnsOLw9xa2EX7/Ay2lnuDbPySHBsGHbzi7Vg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0401MB2470

On 22.06.2022 13:09, Michal Orzel wrote:
> On 22.06.2022 12:36, Jan Beulich wrote:
>> On 20.06.2022 09:02, Michal Orzel wrote:
>>> --- a/xen/drivers/acpi/tables/tbutils.c
>>> +++ b/xen/drivers/acpi/tables/tbutils.c
>>> @@ -481,7 +481,7 @@ acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags)
>>>  			if (ACPI_FAILURE(status)) {
>>>  				ACPI_WARNING((AE_INFO,
>>>  					      "Truncating %u table entries!",
>>> -					      (unsigned)
>>> +					      (unsigned int)
>>>  					      (acpi_gbl_root_table_list.size -
>>>  					       acpi_gbl_root_table_list.
>>>  					       count)));
>>
>> Same here then, except PRIu32 wouldn't be correct to use in this case.
>>
> Why is it so given that both size and count are of type u32?

Because the promoted type (i.e. the type of the result of the subtraction)
isn't uint32_t (and will never be). It'll be "unsigned int" when
sizeof(int) == 4 (and in this case it'll happen to alias uint32_t) and
just "int" when sizeof(int) > 4 (not even aliasing int32_t).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 11:49:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 11:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353814.580800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yru-000440-Nn; Wed, 22 Jun 2022 11:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353814.580800; Wed, 22 Jun 2022 11:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3yru-00043t-Jv; Wed, 22 Jun 2022 11:49:58 +0000
Received: by outflank-mailman (input) for mailman id 353814;
 Wed, 22 Jun 2022 11:49:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vZBD=W5=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1o3yrs-00043n-VO
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 11:49:57 +0000
Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com
 [2607:f8b0:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6eba794a-f221-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 13:49:55 +0200 (CEST)
Received: by mail-pf1-x42c.google.com with SMTP id d17so6900706pfq.9
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 04:49:55 -0700 (PDT)
Received: from localhost ([122.172.201.58]) by smtp.gmail.com with ESMTPSA id
 u1-20020a1709026e0100b0016160b3331bsm12552122plk.305.2022.06.22.04.49.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 04:49:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eba794a-f221-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=xTtCO1zrt/IGUKd4FRnY6oWVk5z9FJ/JSc1j6sCiUd4=;
        b=n5TklPf8OeFgLiLBfco2InMdTPlWlmwKj5N5N7es/9LDV7HGfdKe5FC9RvttX7ADH9
         qmXqHUui9kUyhud5UjS1aG17dAs4wbZtG2edYPezh7AGoGtwChAEw/40WM+6qLp1zrZy
         4V0AWIjWsFUvIuG1JjhL2n4J4CsXaxLuvrF0yxRAw5xYFMumsYxkOTamEOserwOg53Y2
         gfTZhxspGroOMjPYKgshBe5W6NDaLB+O1QdpD6eaE5CtKKStMfe6VvdPO6TXMjq+ldDK
         B+1InY3GWc///NG7bl8sV7GxMNdoZxVTH2KoHO2dZ1op7RolOWjELYhcKYhPN7mUKFJu
         +pVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=xTtCO1zrt/IGUKd4FRnY6oWVk5z9FJ/JSc1j6sCiUd4=;
        b=FlUti1XdTXs6cQgb/VStJAyIkx2C24ErDLrYvlle+lNL49kZMs8dKq4TZWA/n/qBLo
         rbPNK9IoCeOkSrQBPXwzqAIsDdwVPZaAKkJpauW9orI5x5P+Ewh1ANVNaAv2JAyinsB1
         6MZWa8/AC8e5eA4ChGxKsRRpwKX40WuianffwDE5V+ElXvDYwhIi2daTgnmSLJsiTGdo
         gmnTjc5h0fQ2hz9Scg1NxffoyrzFhbNRW19u+cN/+JVE1NbYcNcWxK4+pUDUB9w1tqxo
         cd4aInJ2X9Zhfw+LOcQo26ts2G0fj3o3H6WBMi2W0Mg/PZVaB8SqnnonR/wSlUr3UImN
         3n1g==
X-Gm-Message-State: AJIora8iG1S3gBK5DowzoMs3lPQ3SyfBv1K1DadbuBbwSSHp/OIGgIUJ
	AhdJCeCtq9rRgfafIQC0vAswCA==
X-Google-Smtp-Source: AGRyM1v0xeMpziioxlHN7hWY+tq/O/X29xdzMLfqoO7AXopXKS/bsLPcaM754GBBxwsXfe2/KGqE5Q==
X-Received: by 2002:aa7:9a11:0:b0:525:2412:920a with SMTP id w17-20020aa79a11000000b005252412920amr15308016pfj.66.1655898594110;
        Wed, 22 Jun 2022 04:49:54 -0700 (PDT)
Date: Wed, 22 Jun 2022 17:19:50 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Stratos Mailing List <stratos-dev@op-lists.linaro.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Mike Holmes <mike.holmes@linaro.org>, Wei Liu <wl@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: Virtio on Xen with Rust
Message-ID: <20220622114950.lpidph5ugvozhbu5@vireshk-i7>
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7>
 <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>

On 28-04-22, 16:52, Oleksandr Tyshchenko wrote:
> FYI, currently we are working on one feature to restrict memory access
> using Xen grant mappings, based on the xen-grant DMA-mapping layer for Linux [1].
> And there is a working PoC on Arm based on an updated virtio-disk. As for
> libraries, there is a new dependency on the "xengnttab" library. In comparison
> with the Xen foreign mappings model (xenforeignmemory),
> the Xen grant mappings model is a good fit for the Xen security model:
> it is a safe mechanism to share pages between guests.

Hi Oleksandr,

I started getting this stuff into our work and have a few questions.

- IIUC, with this feature the guest will allow the host to access only certain
  parts of the guest memory, which is exactly what we want as well. I looked at
  the updated code in virtio-disk, and you currently don't allow grant table
  mappings along with MAP_IN_ADVANCE; is there any particular reason for that?

- I understand that you currently map on the go: first the virtqueue descriptor
  rings, and then the protocol-specific addresses later on, once virtio requests
  are received from the guest.

  But in our case, vhost-user with a Rust-based hypervisor-agnostic backend, the
  vhost master side can send a number of memory regions for the slave (backend)
  to map, and the backend won't try to map anything apart from those. The
  virtqueue descriptor rings are available at this point and can be sent, but
  not the protocol-specific addresses, which are available only when a virtio
  request comes.

- And so we would like to map everything in advance, and access only the parts
  which we need to, assuming that the guest would just allow those (as the
  addresses are shared by the guest itself).

- Will that just work with the current stuff?

- In Linux's drivers/xen/gntdev.c, we have:

  static unsigned int limit = 64*1024;

  which translates to 256 MB I think, i.e. the max amount of memory we can map
  at once. Will making this 128*1024 allow me to map 512 MB, for example, in a
  single call? Any other changes required?

- When I tried that, I got a few errors which I am still not able to fix:

  The IOCTL_GNTDEV_MAP_GRANT_REF ioctl passed but there were failures after
  that:

  (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x40000 for d1
  (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x40001 for d1

  ...

  (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5fffd for d1
  (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5fffe for d1
  (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5ffff for d1
  gnttab: error: mmap failed: Invalid argument


I am working on Linus's origin/master along with the initial patch from Juergen,
and picked up your Xen patch for the iommu node.

I am still at an early stage of properly testing this stuff; I just wanted to
share the progress, to save myself some of the time debugging this :)

Thanks.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 12:13:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 12:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353824.580810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zEv-0007Mf-Sn; Wed, 22 Jun 2022 12:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353824.580810; Wed, 22 Jun 2022 12:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zEv-0007MY-Q5; Wed, 22 Jun 2022 12:13:45 +0000
Received: by outflank-mailman (input) for mailman id 353824;
 Wed, 22 Jun 2022 12:13:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o3zEu-0007MS-RL
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 12:13:44 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1b0dca7-f224-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 14:13:42 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 747661FD03;
 Wed, 22 Jun 2022 12:13:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 42D3013A5D;
 Wed, 22 Jun 2022 12:13:42 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iA4EDnYHs2JxBwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 12:13:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1b0dca7-f224-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655900022; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Nyz5d+Nfti6ATwCcKW/q97KnzMJtSbCcqDsc1SjYCpo=;
	b=XvpRQe6MZnEPaTJj/QI8pXk5tRINNyXNHyE47OIwPrMC5cFG2eOA7NjxfD4kPtTG282/2M
	yA32C6JfdI5/YLt2dN9N5alEw/c3dh6CnwID5sFXEpjBGI/V5lHNlvDNHl2toyE5Huzi0u
	I8JZlLbPI7fCIcrgz4FFQNEkiiWazHo=
Message-ID: <bc8899d7-0300-8640-57d9-52c2a1bf599c@suse.com>
Date: Wed, 22 Jun 2022 14:13:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
 <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Tentative fix for dom0 boot problem
In-Reply-To: <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------L0WbjTE05TTpPXqsbwm0Ut7e"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------L0WbjTE05TTpPXqsbwm0Ut7e
Content-Type: multipart/mixed; boundary="------------7o6Slg9VvQkxA3AwnxNLmKTP";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bc8899d7-0300-8640-57d9-52c2a1bf599c@suse.com>
Subject: Re: Tentative fix for dom0 boot problem
References: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
 <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
In-Reply-To: <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>

--------------7o6Slg9VvQkxA3AwnxNLmKTP
Content-Type: multipart/mixed; boundary="------------2l6dvOFPu0Bx0QNLZKxZGb0k"

--------------2l6dvOFPu0Bx0QNLZKxZGb0k
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22.06.22 12:50, Julien Grall wrote:
> 
> 
> On 22/06/2022 11:45, Juergen Gross wrote:
>> Julien,
> 
> Hi Juergen,
> 
>> could you please test the attached patches?
> 
> I am getting the following error:
> 
> (XEN) d0v0 Unhandled: vec 14, #PF[0003]
> (XEN) Pagetable walk from ffffffff84001000:
> (XEN)  L4[0x1ff] = 000000046c004067 0000000000004004
> (XEN)  L3[0x1fe] = 000000046c003067 0000000000004003
> (XEN)  L2[0x020] = 000000046c024067 0000000000004024
> (XEN)  L1[0x001] = 001000046c001025 0000000000004001

Hmm, from this data I guess this was a write to a page table.

> (XEN) domain_crash_sync called from entry.S: fault at ffff82d040325906
> x86_64/entry.S#create_bounce_frame+0x15d/0x177
> (XEN) Domain 0 (vcpu#0) crashed on cpu#1:
> (XEN) ----[ Xen-4.17-unstable  x86_64  debug=y  Tainted:   C    ]----
> (XEN) CPU:    1
> (XEN) RIP:    e033:[<ffffffff832a3481>]

Can you please find out the associated statement?

> (XEN) RFLAGS: 0000000000000206   EM: 1   CONTEXT: pv guest (d0v0)
> (XEN) rax: 0000000000000000   rbx: ffffffff84000000   rcx: 000000000002b000
> (XEN) rdx: ffffffff84000000   rsi: ffffffff84000000   rdi: ffffffff84001000
> (XEN) rbp: 0000000000000000   rsp: ffffffff82a03e60   r8:  0000000000000000
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000003426e0
> (XEN) cr3: 000000046c001000   cr2: ffffffff84001000
> (XEN) fsb: 0000000000000000   gsb: ffffffff83271000   gss: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
> (XEN) Guest stack trace from rsp=ffffffff82a03e60:
> (XEN)    000000000002b000 0000000000000000 0000000000000003 ffffffff832a3481
> (XEN)    000000010000e030 0000000000010006 ffffffff82a03ea8 000000000000e02b
> (XEN)    0000000000000000 ffffffff832ae884 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffffffff832a317f 0000000000000000

Further analysis might be easier if you can supply function + displacement for
any text segment addresses on the stack.

BTW, I could boot the kernel with my patches as Dom0 without any problem. OTOH
it booted even without the patches. :-)


Juergen

--------------2l6dvOFPu0Bx0QNLZKxZGb0k--

--------------7o6Slg9VvQkxA3AwnxNLmKTP--

--------------L0WbjTE05TTpPXqsbwm0Ut7e--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 12:32:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 12:32:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353832.580822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zX7-0001HF-Eg; Wed, 22 Jun 2022 12:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353832.580822; Wed, 22 Jun 2022 12:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zX7-0001H8-Ar; Wed, 22 Jun 2022 12:32:33 +0000
Received: by outflank-mailman (input) for mailman id 353832;
 Wed, 22 Jun 2022 12:32:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o3zX5-0001Gj-Hj
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 12:32:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2052.outbound.protection.outlook.com [40.107.21.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 612ac2c3-f227-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 14:32:29 +0200 (CEST)
Received: from DB6PR0402CA0012.eurprd04.prod.outlook.com (2603:10a6:4:91::22)
 by AM0PR08MB3201.eurprd08.prod.outlook.com (2603:10a6:208:59::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.18; Wed, 22 Jun
 2022 12:32:26 +0000
Received: from DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:91:cafe::a6) by DB6PR0402CA0012.outlook.office365.com
 (2603:10a6:4:91::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Wed, 22 Jun 2022 12:32:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT034.mail.protection.outlook.com (100.127.142.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Wed, 22 Jun 2022 12:32:25 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Wed, 22 Jun 2022 12:32:25 +0000
Received: from a97cfa49dd92.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1209F5E3-7F94-4BD0-A9D2-21D0CCFDF735.1; 
 Wed, 22 Jun 2022 12:32:14 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a97cfa49dd92.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 12:32:14 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAXPR08MB6954.eurprd08.prod.outlook.com (2603:10a6:102:1d9::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 22 Jun
 2022 12:32:12 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 12:32:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 612ac2c3-f227-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=dGInT8NxHVJEnvgMU+9mSdUhxb3ZxNS+jpN7bBk0ouRZcSSVSDQ/PiwbjeDD2HfNgxQlZDWpDELWy5B+pur3w8lQLdjCPb4e7Z2iBA1aILuG0+UtFDsKb3Id61azuZ7jRlY+e1nXPxaxzwuv0VYW0f5Dxi7nIi0sSVEKwbrmKIJeMasf9QiAGlfGBP2/lsAqkAWoOb9ushawsuz8Dxh+G10i4cylKDiaCpSM2FsEbLfxmnr7Vxzu38EzZyxyYkxO/ModI8Yq57Z5SrXxnkWGxBoFKYTHMLphkGsLlTzASxBze/z8Kl+JjchmTULp7Z0bOw/CvV+fVN/mZpmpD1Wi3Q==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Fgi6TXcmSbzgQatpsETcKcNUOjoWcdN5aXIF0Cw4eNg=;
 b=Bum95bq355+27efksxIrdDRkU7dqppJL1Yr0Syk9YzAROvtGuUEbOI3Lh2/KxeE5oo75zPj4nKWHWoLjlXyfAjeSWJHpbxR+R5dMN24U6UsV5PcpLwOo2A9bIAKKBwCcCKimUm8uUNhRUuCZmzL2uxRyTIakIkVNmYp6WX5wiliZAKfJDh0m5zkyZs+CovAcD6de8ZmJBi1a7heiY/pQVE35VXbFIJ80TbSKvfmVUOjuh35C53s2+yqKLEZP+/PN+xrVnRW3X6o24v9gOTB5l+ItGVVB6suuR37F6/AOlUYhy2lZHzJ0ItzYxFYtf4cHYJ5xGz3fXq6SeXYr7yIStQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Fgi6TXcmSbzgQatpsETcKcNUOjoWcdN5aXIF0Cw4eNg=;
 b=2L3RfpB77qfuDVxSinipYBvcYO8sbEhMd55JSjSFygLMf9J+vidmm0j2crxA95O3zuXTf/tkZV6U7zTMD+NCibYU4kEUBJP4U7cbt0qeAq9Fg8jZS+YXfjN800Ybfimn/tDsRHrsejn4AzLQmImcNEWt4iIZcIUDy1zFJoKdYkw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0dd5bd4d7044be4c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=of29tEcOHRZFQrdt0v3lDJoqR0P8fPxpbJxcFZ7sayn9yqSPNcA2j0bGkiQqsWqQeWgqNc3nov7/1PAdj+Vfd4cPr61GiABW/rTI57tUnJlZLJf0Oa6jUoYQru7lyhOAYkdd9Ln/77lHptBoIN2FNAcAD3b8XpMHSR8eQN/ME+qn+v7hohXJ2B+NWImX0zl0Csw3ST7wqFBlNaPv/Lb2VydX8lAhI9pldKDzE9/25YiDMBH9EwTFny2KtpEuUzgVwnwKGuW1IzzdkIFCmE2o6NXruxO+Z8DB+l+p+pzGblw+jEYAW5lpJSKN5NyBN0cAz/m+Tam8CDbsLNl+rZuFww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Fgi6TXcmSbzgQatpsETcKcNUOjoWcdN5aXIF0Cw4eNg=;
 b=Tgg/DwxbGPVbYBLNZbNw5ih1qJvQiexkM92ZDp8iJMfuIWg9JC1IVIGLwaOYlwKIagDtANG1INJZ4cIPNSwvzmUJa9aE+8oC4xsByeU20sZyZGwrWG2qTDcMiB0Mg+g4mPqcAIpwarF+k7T5ORtheqo9XCRYpIrSk0m9G5la4KoUgWmxFNRGmCrG+e3i1il0Z7DJ0pdIPSxD5QuVJ2ufwpHph20qypKpxvmxG2lnfITUIPiXX0snevS/qp4eUzoNSH+7bKTSyHSvV2csQLNqHNp0rm0YgWGFA5hfTg+9oevFP0uz82P8yTzFeCxCzSTo5c8e37SiH4Oop/yXsuo0Bg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Fgi6TXcmSbzgQatpsETcKcNUOjoWcdN5aXIF0Cw4eNg=;
 b=2L3RfpB77qfuDVxSinipYBvcYO8sbEhMd55JSjSFygLMf9J+vidmm0j2crxA95O3zuXTf/tkZV6U7zTMD+NCibYU4kEUBJP4U7cbt0qeAq9Fg8jZS+YXfjN800Ybfimn/tDsRHrsejn4AzLQmImcNEWt4iIZcIUDy1zFJoKdYkw=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Christopher Clark <christopher.w.clark@gmail.com>
CC: Julien Grall <julien@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Michal Orzel
	<Michal.Orzel@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Daniel Smith
	<dpsmith@apertussolutions.com>, Roger Pau Monne <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index: AQHYhWH1+fjNtvukV0adEqGEdTB7sq1ZxCsAgAAXIgCAAAnTAIAAf4aAgAD4xwA=
Date: Wed, 22 Jun 2022 12:32:12 +0000
Message-ID: <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
In-Reply-To:
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3693.60.0.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: eb7ba02f-f231-4e69-4afb-08da544b436e
x-ms-traffictypediagnostic:
	PAXPR08MB6954:EE_|DBAEUR03FT034:EE_|AM0PR08MB3201:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB320125EB8D73DD9218AADC739DB29@AM0PR08MB3201.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nndr8OAhU0/+syNHOt4GSvvQx2mGHQhDVDQHaISjyW+EQsyEMHRCinLO5xusnRS1DiSYaFt5Q+FY/NBv1Xee5nLeeq0RUZExDJ4bEp92TfXgxOgNsgYohS4TxJtyC2s5rXVYsxpBhGmbvHEzDV2QYKxoYsoaOSx4MwkX84JR/I3d5NKnzfXQqW0JKRElKSN2+Zcoyho15HOwiZ14mms9H1Ml5hGsp58OBgEtJA7KVD4fBkwikIQNkF2+/KWTGr/0iV7/mVU4mXGTOnH+QhsAWH8F+Awk9u5jGGk2JEM3+I0qB0mDdPLnu7LdbNZbw5zz4bpgRw5TTLtpN/vdI9JQK/Aj25PBKSqkfBRkybzFizfVC33YFxnhFG7zLmTme4ks93G2+53qtSfFvQvqVdzfwZXkz8/FkWt8fYKp64FZxcxDhP8nkFrNxyqgZ85l1moVCNjMAHk1mEN4Nzdi/wFzvOwYe9pi5PjvYTZ9Dt5197+Vknl8EJLCYIZXIXm8Dl39d2nif84BdOmmDW20B7tLQ/Uq1hty+3cRmKfbyCFk2F0iw+PGNohDQgnGafnf44DPKXmaQrFmPk2lPy+mqPuM5WGnLLom2krWikemsgFVrSGweLRbtIIxZX06lZTeNZNF5mq3aa2bw1ER7OsazvsIfrHqbgnON+gp1ZFGR5v4Ms3dqvxhWajxYuXlyfunx0uOmHuE9Bb4KDbSwejajCstJ8KEeIgTUS8csROScDDQVBCdbXSZheP+QH5nRsNB7NuD+TQdyPLFHxZuQi730f/CPP8PtCUwEk3aitmPBLh2uU1deITtYH7qNIxfSwqLSih5MkkECJ7qdh+lXCspyvFgCw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(376002)(346002)(396003)(39860400002)(136003)(5660300002)(4744005)(36756003)(8936002)(478600001)(66946007)(33656002)(6486002)(186003)(2616005)(122000001)(71200400001)(41300700001)(966005)(83380400001)(66446008)(64756008)(66556008)(86362001)(6512007)(26005)(66476007)(38070700005)(2906002)(6916009)(91956017)(4326008)(54906003)(8676002)(76116006)(6506007)(38100700002)(316002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CA3F0DA72BBC484A99C97A01047363B3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6954
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	359dd567-4264-49c9-e76b-08da544b3b97
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0P/d88AiezvuU2yLHksBDR4S0IlLD6qjhGtTJ8fnDac5vAdThxdLu0njWSgzQLgh1ZZyFEa4lulFIyqbly3MRcD7rcdXmHCvdFRF9Aws5DYEJFTYGPMPvwSYkPwOHZ2vI1L8S227GiINtzWxtT07E5C51x3SaGOeAx3DoDEvlh7DbdYMUs6nRMzlxo+0ToZS4pcV8jQvY6i5b38En0ombqfM3Y5MFhBG76fo3Ut6TXcVZvSlge4/M3483g+cAQx6CUiIgWbB+/ttXabRkTngG3/1tt3YcKodLxYXqilfL5y05lbss2ND5b1gRk8ObCZQU8wkC9hSe9uJS34957glzJjeohhQkUi5qvustutTr5pYtRziJNUk3QJXV5cqSUBnYAnVyGRd5tyEI4ZJ8fkMNaRn+FfB8yhBC6uGLHYhVkdNCjAkw4Fc3c5tPUqtG812mBu9caoajCLNaqluwmMzUmNkcoVloyashWj9FjFfmvljhLNLwlFcQr9XWksvDVczMH9FPivcuRpRu4TysYKrnpnh3G1EA5rX++n8FNEtJY6pwMcSkGSuDFwbCKWAWyGr7IRzeF1d1VXUZV4zFAnOOBLqqP2YYMFfKAud3rMeGoc/2pvdDqXCsgRvUHJ/FuLMlbkkdz6v1DNiKDUJkQR7j8YpieI8PaSt9XZLwST2nFFbAT3B1qmCSAws/uWr/BqH1+VBl2nMbplkRT3ry27OhlCdQfmr/DVaLrpB+RNqXLQwW2I7vULUkn5FlK/UbLp9TpNlPsE1/fUH2Vhr8LdXMubuf+Ll51dozXCNM8AV82M=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(346002)(136003)(39860400002)(376002)(396003)(36840700001)(46966006)(40470700004)(966005)(54906003)(6486002)(8936002)(478600001)(2906002)(36756003)(316002)(4744005)(5660300002)(70206006)(4326008)(86362001)(6862004)(33656002)(6512007)(41300700001)(107886003)(8676002)(2616005)(70586007)(40480700001)(83380400001)(40460700003)(82740400003)(47076005)(81166007)(82310400005)(26005)(336012)(186003)(36860700001)(6506007)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 12:32:25.6772
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eb7ba02f-f231-4e69-4afb-08da544b436e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3201

Hi Andrew and Christopher,

I will not dig into the details of the issues you currently have,
but it seems you are trying to redo work we already did
and have been using for quite a while.

We currently maintain the XTF-on-Arm code on GitLab and
recently rebased it on the latest XTF master:
https://gitlab.com/xen-project/people/bmarquis/xtf

If possible, I would suggest starting from there.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 12:35:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 12:35:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353839.580833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zZu-0001r3-SD; Wed, 22 Jun 2022 12:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353839.580833; Wed, 22 Jun 2022 12:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zZu-0001qw-PF; Wed, 22 Jun 2022 12:35:26 +0000
Received: by outflank-mailman (input) for mailman id 353839;
 Wed, 22 Jun 2022 12:35:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4/ZK=W5=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3zZt-0001qo-6V
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 12:35:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id c9076705-f227-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 14:35:23 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C27413D5;
 Wed, 22 Jun 2022 05:35:23 -0700 (PDT)
Received: from [10.57.38.102] (unknown [10.57.38.102])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7065A3F534;
 Wed, 22 Jun 2022 05:35:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9076705-f227-11ec-b725-ed86ccbb4733
Message-ID: <e264e3bf-b436-684a-13a1-be0aa0f6bfbb@arm.com>
Date: Wed, 22 Jun 2022 14:35:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [XEN PATCH v2.1 1/4] build,include: rework shell script for
 headers++.chk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220621101128.50543-1-anthony.perard@citrix.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220621101128.50543-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Anthony,

On 21.06.2022 12:11, Anthony PERARD wrote:
> The command line generated for headers++.chk by make is quite long,
> and in some environments it is too long. This issue has been seen in
> a Yocto build environment.
> 
> Error messages:
>     make[9]: execvp: /bin/sh: Argument list too long
>     make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
> 
> Rework the rule so that the foreach loop runs in shell rather than in
> make, which reduces the command line size by a lot. We also need a way
> to look up the prerequisite headers of some public headers, so we use
> a shell "case" switch to do simple pattern matching; POSIX shell
> variables alone cannot express associative arrays or variable names
> containing "/".
> 
> Also rework headers99.chk, as it has a similar implementation, even
> though with only two headers to check its command line isn't too long
> at the moment.
> 
> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Michal Orzel <michal.orzel@arm.com>
Tested-by: Michal Orzel <michal.orzel@arm.com>
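
The rework described in the quoted commit message can be sketched roughly
as follows (a hypothetical fragment, not the actual Makefile recipe; the
header names and prerequisites below are made up for illustration):

```shell
# Sketch: instead of make's $(foreach ...) expanding every check into one
# huge command line, hand the header list to a single shell loop.  The
# "case" provides the per-header prerequisite lookup that POSIX shell
# cannot express as an associative array.
set -e
headers="public/xen.h public/grant_table.h public/arch-arm.h"
for i in $headers; do
    case $i in                          # simple pattern matching on the path
    public/arch-arm.h) deps="public/xen.h" ;;
    *)                 deps="" ;;
    esac
    echo "check $i deps=${deps:-none}"
done
```

Since the loop body is one fixed-size shell script, the command line no
longer grows with the number of headers being checked.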

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 12:56:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 12:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353852.580844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ztu-0004FX-Lf; Wed, 22 Jun 2022 12:56:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353852.580844; Wed, 22 Jun 2022 12:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3ztu-0004FQ-Ho; Wed, 22 Jun 2022 12:56:06 +0000
Received: by outflank-mailman (input) for mailman id 353852;
 Wed, 22 Jun 2022 12:56:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4/ZK=W5=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o3zts-0004FK-KH
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 12:56:04 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id abe18e63-f22a-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 14:56:03 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8F71613D5;
 Wed, 22 Jun 2022 05:56:02 -0700 (PDT)
Received: from [10.57.38.102] (unknown [10.57.38.102])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 77BC83F534;
 Wed, 22 Jun 2022 05:55:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abe18e63-f22a-11ec-b725-ed86ccbb4733
Message-ID: <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
Date: Wed, 22 Jun 2022 14:55:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Jan,

On 22.06.2022 12:25, Jan Beulich wrote:
> On 20.06.2022 09:02, Michal Orzel wrote:
>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
>> 'unsigned int' type as there are no other violations being part of that rule
>> in the Xen codebase.
> 
> I'm puzzled, I have to admit. While I agree with all the examples in the
> doc, I notice that there's no instance of "signed" or "unsigned" there.
> Which matches my understanding that "unsigned" and "signed" on their own
> (just like "long") are proper types, and hence the omission of "int"
> there is not an "omission of an explicit type".
> 
Cppcheck was chosen as the tool for MISRA checking, and it considers this a violation.
It treats a bare "unsigned" as carrying an implicit "int". You can see the corresponding flag in the cppcheck source code:

"fIsImplicitInt          = (1U << 31),   // Is "int" token implicitly added?"

> Nevertheless I think we have had the intention to use "unsigned int"
> everywhere, but simply for cosmetic / style reasons (while I didn't ever
> see anyone request the use of "long int" in place of "long", despite it
> also being possible to combine with "double"), so I'm happy to see this
> being changed. Just that (for now) I don't buy the justification.
> 
> Jan

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:02:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:02:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353863.580855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zzj-0005mr-Fe; Wed, 22 Jun 2022 13:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353863.580855; Wed, 22 Jun 2022 13:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o3zzj-0005mk-CJ; Wed, 22 Jun 2022 13:02:07 +0000
Received: by outflank-mailman (input) for mailman id 353863;
 Wed, 22 Jun 2022 13:02:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o3zzi-0005me-KX
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:02:06 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130053.outbound.protection.outlook.com [40.107.13.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 836f06e2-f22b-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 15:02:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7597.eurprd04.prod.outlook.com (2603:10a6:102:e0::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Wed, 22 Jun
 2022 13:02:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 13:02:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 836f06e2-f22b-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ME9P2woINjOebr93xuRszgsiubPmbObpWkafEk9ofJLqPSlFFPbWrwxqCoClz1cLp4RAcsNCXMWCaMMMYc5bkZW/nAwVLaWS+cphiScQRIaLYpK9iNhMQr+EG5U31H8lwsMSRqEOOcMx5ENiolq0yk0gJeBsr8pRaJBbbqMgu/Ed6OEC/zSh+15JLTRcjBf7/wJ7/nla5hQQ3xWVjNhXWMyDdeJZqWEjZ3rhWw8g2t/iyiiETBxuiOF8Bkzh38vu5RGq+JF/MMoNSDOM+YPebYALFgNXi7Csf8YKspYkWtjQyIvrdC198jir6JBv1slTDiitGTPM0DJV6aPEAzOS3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RBj2JPc8UpYI6OEVt6nbuwqcrZmOnCzFbEe9Osq+8Hw=;
 b=Hrmbf12meiE3cEm3Qm+hcrmj281qVyHuz1wSq6cRymJUaRGSwPQFoQ1VaCOzfib+wuyXk5n0KIRM9TF74xOBp6hurjKKixWyVJzV4xvhgbot5uZFr0IR5f7BxkBU2S3+Kxh1HUgf5MtroeWxpQpbgT/0T9otYmWXyQ+xL06808NwUdSNO2nKVyI2UFDLuJyMRYMtpnBwwIATpR15v+qRRIISY9vcvFGInUSXYNzOnYKsFwVpAxVeSO/xRJOCG/LGSOnR1IhSOkIBbc6LkYWafk2U28sLxWfVLzVNC4s778Ids64hBhwp2qwXAgRH0bxy2vVqrLSKCX0uybFvnyo9ww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RBj2JPc8UpYI6OEVt6nbuwqcrZmOnCzFbEe9Osq+8Hw=;
 b=MmPqSPWSqAu5cypUxTEnj8QlGU+xWNPqw24AGMMV7Uquk8DElnsy4EmicW1gH8miv0o9josDMT3HX5EjULfvAe9wZ+rqsXmMRll+UDSDFzpKByo1rMkmdrC/zH6/oqB1fbZZnBotYT4qkTeOnVpKn+VxlxHeEHUiWB/6DaUP+cFLbbBVQ1NRRSJAYzFoLSq55rbmpciCfjvTnBjQsb+x0XXbtD0Pa2pLIcJ4wj8/TNnz7BY5GOTkOf0iCGi0MTwO4Qv+JM9DbwSYW4dY7C8MGPyPYnWagYggH8jhC8lxtVBiTv4kTdtpsvDxUWsdyhk2ufReNdtghJwPLXnHQHXcJw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
Date: Wed, 22 Jun 2022 15:01:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR10CA0036.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:89::49) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 435717a5-1e85-4032-9c33-08da544f6624
X-MS-TrafficTypeDiagnostic: PA4PR04MB7597:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<PA4PR04MB7597657455C7BFE8ACEE6C57B3B29@PA4PR04MB7597.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dqA+JvKXJHpMTEQCgAGeEULDcLoXmiGRmuHfE3ObzT8yCu/sW2P8XWQZw95B1eY7YlgFCo9mkVnAa7YFBXMwD1yuSNo9sGad19sb4JPCCKYteHuFC7jvzwhfnzSyKffda9K4v5Qv5YGRKKCw6QQq+2a4E4gEyDGo/bWs1UvluaDV/0egw8dRSb+4HzSZV/qpf+rBGs2CydoK/E6kf8oa8NvK1maF29l/Qj4UvoZ+BqGWzCzNcaFcwxEonSW2zQ+YNt/Jk+a20cpuTpODX8e23HGVoQN80Vp4T00tuLjlEvK67PiEHz34vrF+R/E/oAxyZCfnECjcUJ4itk46lH4nY619ckbnwOzeUuFWA/ozBMuBJwJ2BLcdrNMExSnEL739Udfu4RRlmFfxx8lWHhUMJd226fhxvu7IT1XiU5q4WDoLqVew2v0WZEe/k27FYKze+NyRwgy1Q0q+tj4tTcQFqOODS4gC/lRVYG6Mq25qisy47sMWeGUR2Rz+EaSinZA9uQYfi2rnUjeO9eZtDZTBktvvWyfbGGQ6jQTXrrQKhzs7DVOYlF8Mym6lKzKL4+tIivpQr7BFCKLjtooJ7glNBX/fozWse//N2HNPyoPz6b9tUkJbS4AQOVOIYl8+4jo80lHUSLnm/eapiUQS+gQCRoJg4KbzEBtoWfHt5/NzMBr05FsBubdA1xPojhK94tQpPeN9Du+FI/nNuda/coYfBHGIgCRkZnNK7TQrzS1sCwF+24Onub6iDwR+m4PJnho6Zf7jjgr1X3zrULPfn6pxHSbKfKFtvIJksYvb3Rvv3C4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(396003)(366004)(39860400002)(346002)(136003)(376002)(5660300002)(7416002)(478600001)(6486002)(8936002)(31686004)(2616005)(53546011)(6512007)(38100700002)(26005)(31696002)(6506007)(86362001)(83380400001)(54906003)(186003)(41300700001)(6666004)(316002)(2906002)(36756003)(6916009)(4326008)(8676002)(66946007)(66556008)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?d1V6K2p4RGxRbDNzSmVwbTA1SW91cjI2c1hSQlo3bXdLUjhaOUphMm5vaUxV?=
 =?utf-8?B?aDMyaURVMjhsQk5BWTRIcnBtWDROYmduMW9Ed1hKcXg5RVQxNHM0TmFnRE8r?=
 =?utf-8?B?UDNPdDdaNmFWYjhTQ0xtbmxNdld5ZHZrZEgwczJFWThuWnlRVDVaSXc2N1N0?=
 =?utf-8?B?bEJqbzJwOTh2bWVQbERKNWVsdXRDb3k1RGMwWEVub0NaVnVNYmdnUncwUTZX?=
 =?utf-8?B?ZWhtOGFWb0FYdFczYk83RlRhK2twNHZYTC90d3NZcXBvVEoxSDdPREs3eG9X?=
 =?utf-8?B?REtZU2wyN1lMZ0VhZ2dKVXQrVFduRFM1eEhCUFFpUkxyUG9NL2RNbVFlSitB?=
 =?utf-8?B?YzZNaUZ3blE2SUpzVHNXTWtkaUxXNXJIck5jOWY0Z29ESDdmTzNWZXUrUzdx?=
 =?utf-8?B?a3pRTTdrbkxtR0IvOXdGeC9YNkNwL3Jhd3pXYUtMd3RqcWtDSDl1bkpyeDNW?=
 =?utf-8?B?TElwcDBFb0FvUXNhYjczNHF3QnVFb1BsY0VqR1BKNm4xZ0hUdUNIVUtwRTZq?=
 =?utf-8?B?aE0rZjVSQ1p2dDVqSWNpb2VkZnFHWk5MVkg3UlBadCthNVUyQldhQXlCWDJI?=
 =?utf-8?B?TFhzK2F1L0VyTHdmcFIydzdaNnphNW9xY0grOGRRd09CVGwvd2d2d0I4c2c2?=
 =?utf-8?B?SS9mR0c3eUpUZUlhL2taVjJMVHlUTG90NlE0aEVnMHBTa0w3T1NrUkJvQm4z?=
 =?utf-8?B?N3NxYnAwRzJBMmowcW0wUjBNek1icm52UWdZaVJsMTAzc2hRQncyaWJxRnl3?=
 =?utf-8?B?d2NwOThOdmtuOTJtV1dNNUpQQ3VmZ09GbkRDL0QzZkp4cGpLZUx0clNwWDh1?=
 =?utf-8?B?T2NuM0tobldrSEtaWnFKYW5LbUtia1lUSTFaNVUwb0pLaG9VWU9tSWhRNmlr?=
 =?utf-8?B?NE53MnU3QWZVNlhWcUJSRG4ycGxhcDN0cVBwNVlGSm5BWTFKYW40TlJmQlBY?=
 =?utf-8?B?SzJvSUoyZGVoZVpYT09VanpxeXpaZVhRMlB1b2VXWUpJT0lRTVdsSUtqRk1Z?=
 =?utf-8?B?MU9UYzJhUFlrRzFQYklCbHA3dTh0WSs2Um1pT0grb2NyM3A1WGtrWHRhcE9Q?=
 =?utf-8?B?dENaREQ2UFdTNEpkUjBkTjF1WUcxRU1FelVqSy9ZVHpmbDRsN0NBZTVveUFY?=
 =?utf-8?B?bWI2UXhid1pYbVJHZXpYOFZVRXBnbkdpOUVpZjUvSHFuNmlWVkdwaHkwTG1D?=
 =?utf-8?B?ejlESkx2ODN1ZmtQakRCVVcwL25JR0lmZGk2enBDR2hZYjFpV3hzSXF1dXFK?=
 =?utf-8?B?UjlCVTl6UVV2VGtYOW1aYzBQblVBWXVaa21lbmZZNFJmbWE5Sk02bE9mSWVF?=
 =?utf-8?B?S2pweUpDOXZmMHNJS214UzVueTBsRURiNWNLUDlYOW9YbXZsZHUvOU5oUWlD?=
 =?utf-8?B?ZDFFKzFNWlVSSnh5Q0RKd216ZjhLeXhRUVhBdUxkY3BLaDQ1NjBtcHFVNlRF?=
 =?utf-8?B?ODArc3NadjJBQ0lWSG5tU0NkLy9PT3BocHFVSVdTQlBzTjFvSGlGUlJ2KzFm?=
 =?utf-8?B?UVVMM29McDNrVWJBYUd1QTEwNG1HaU5iR3YwOEg5M3N2L2RoUmpSMTV1TG5F?=
 =?utf-8?B?ckJnS29HejIzbys4ZXdmMmJGQUE2RnNXeU1CbWQyY2dvTHpXNC9zMEpCUm9K?=
 =?utf-8?B?NkJRbU9FMHFqei9oUS9QL1kva2h5M1Y5UzN5Y0xKVWRJMEdlR1hkWXpFYjNF?=
 =?utf-8?B?WnoyMEI5SmxraWVkYkZjYUlrN0JSRWVPeHpsRFZFYnRpdExTaHdDNGRGbmFT?=
 =?utf-8?B?NDFJNUNKKzJYeDZyYzlHTlVtSjE0NGw2bytweW5ZWExocG96SEx5cHRRbVIz?=
 =?utf-8?B?M2RYci9PSWt4ZjdFVHFUaUUraFNHUUhPNlhnaDJPUnl1QUZRMjRtSVczazB0?=
 =?utf-8?B?b2FJQ2pUazFEV0NLcHRvQ3hleGwvUWNWSG1iOTI3RDRraGhrQnBEeWZWSVhJ?=
 =?utf-8?B?MWRwOFloY0w1YUo3ZkxCWWdUN0Vrc1FuMzk5MytHenBibXdYOWJzYXh2N29V?=
 =?utf-8?B?eGFNNnROVXVOSkhtRFU5UWhXUzJTYkR0ZUtvTy9rUkRLVkhZeGN1M29xYmlv?=
 =?utf-8?B?K0hZYzBLQUZBOHpINjNmV0xYTTd5VnVpOU9UUEUwbmZ5RjN5RlRoWUJwY3dz?=
 =?utf-8?B?RGdhaUpjNm9kY1NXeE1oTTFXeWZDK0Rmd1hCcWdNUUlYcGlPWTJUd0EvWDYv?=
 =?utf-8?B?ZEQvWUtabGZ5ZUxwK1FBY3ZzNDJUMkZMR0NIVllraXZ2eXZNUHVpZTdzaFox?=
 =?utf-8?B?K2lEK1N5Z0FYTzVMWkIwOXYxOXZ2UE8zM2VNTW5KYXVMUWVsUG8wTjNkMWs0?=
 =?utf-8?B?REh1Ylh4cFdzNk5GSTNZNFY0MkozT09oQmxSTXhBd0RsaWs2bmJiQT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 435717a5-1e85-4032-9c33-08da544f6624
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 13:02:02.0733
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n5rh2H1EpAvCc3K6ZDke2ju9fdSPAydaSDyKj0VW97CyFSyfM4RIgS3APYdGhGVSF0Amx2JHA5zz7Bscwy6mow==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7597

On 22.06.2022 14:55, Michal Orzel wrote:
> On 22.06.2022 12:25, Jan Beulich wrote:
>> On 20.06.2022 09:02, Michal Orzel wrote:
>>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
>>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
>>> 'unsigned int' type as there are no other violations being part of that rule
>>> in the Xen codebase.
>>
>> I'm puzzled, I have to admit. While I agree with all the examples in the
>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>> Which matches my understanding that "unsigned" and "signed" on their own
>> (just like "long") are proper types, and hence the omission of "int"
>> there is not an "omission of an explicit type".
>>
> Cppcheck was chosen as the tool for MISRA checking, and it considers this a violation.

Which by no means indicates that what the tool points out as a
violation actually is one.

> It treats unsigned as an implicit type. You can see this flag in cppcheck source code:
> 
> "fIsImplicitInt          = (1U << 31),   // Is "int" token implicitly added?"

Neither the name of the variable nor the comment clarifies that this is
about the specific case of "unsigned". As said, there's also the fact
that they don't appear to point out the lack of "int" when seeing plain
"long" (or "long long"). I fully agree that "extern x;" or "const y;"
lack explicit "int".

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:20:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353871.580865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40H8-00082q-Vi; Wed, 22 Jun 2022 13:20:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353871.580865; Wed, 22 Jun 2022 13:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40H8-00082j-Sh; Wed, 22 Jun 2022 13:20:06 +0000
Received: by outflank-mailman (input) for mailman id 353871;
 Wed, 22 Jun 2022 13:20:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o40H7-0007uz-E0
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:20:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o40H7-0004vT-4G; Wed, 22 Jun 2022 13:20:05 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o40H6-0005ms-UO; Wed, 22 Jun 2022 13:20:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=re9m33pb8PdlVNWNCIbk86qkedpewIxKyzmz4q5PK3Y=; b=nsVG5gGoF8vOj2y2m7CZkdwLZy
	kjBKRoqwO3NgxD06ESf82Ebk2P923LYGRVhacxguYE3UVom8sakPxvFP9C0njjUVlJWQuFM8WH66/
	cHJCv4IsDCAm9OXVm0Dx7I7H3tusR+ACTVqFb4cvlf/Za6N5YCC/tVWbiDJmTgByV89g=;
Message-ID: <a9566b42-2360-4d3f-5272-8f69368d50f2@xen.org>
Date: Wed, 22 Jun 2022 14:20:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: Tentative fix for dom0 boot problem
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
 <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
 <bc8899d7-0300-8640-57d9-52c2a1bf599c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <bc8899d7-0300-8640-57d9-52c2a1bf599c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 22/06/2022 13:13, Juergen Gross wrote:
> On 22.06.22 12:50, Julien Grall wrote:
>>
>>
>> On 22/06/2022 11:45, Juergen Gross wrote:
>>> Julien,
>>
>> Hi Juergen,
>>
>>> could you please test the attached patches?
>>
>> I am getting the following error:
>>
>> (XEN) d0v0 Unhandled: vec 14, #PF[0003]
>> (XEN) Pagetable walk from ffffffff84001000:
>> (XEN)  L4[0x1ff] = 000000046c004067 0000000000004004
>> (XEN)  L3[0x1fe] = 000000046c003067 0000000000004003
>> (XEN)  L2[0x020] = 000000046c024067 0000000000004024
>> (XEN)  L1[0x001] = 001000046c001025 0000000000004001
> 
> Hmm, from this data I guess this was a write to a page table.
> 
>> (XEN) domain_crash_sync called from entry.S: fault at ffff82d040325906 x86_64/entry.S#create_bounce_frame+0x15d/0x177
>> (XEN) Domain 0 (vcpu#0) crashed on cpu#1:
>> (XEN) ----[ Xen-4.17-unstable  x86_64  debug=y  Tainted:   C    ]----
>> (XEN) CPU:    1
>> (XEN) RIP:    e033:[<ffffffff832a3481>]
> 
> Can you please find out the associated statement?

arch/x86/kernel/head64.c:433

This is the memset() for __brk_base.

> 
>> (XEN) RFLAGS: 0000000000000206   EM: 1   CONTEXT: pv guest (d0v0)
>> (XEN) rax: 0000000000000000   rbx: ffffffff84000000   rcx: 000000000002b000
>> (XEN) rdx: ffffffff84000000   rsi: ffffffff84000000   rdi: ffffffff84001000
>> (XEN) rbp: 0000000000000000   rsp: ffffffff82a03e60   r8:  0000000000000000
>> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
>> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000003426e0
>> (XEN) cr3: 000000046c001000   cr2: ffffffff84001000
>> (XEN) fsb: 0000000000000000   gsb: ffffffff83271000   gss: 0000000000000000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
>> (XEN) Guest stack trace from rsp=ffffffff82a03e60:
>> (XEN)    000000000002b000 0000000000000000 0000000000000003 ffffffff832a3481
>> (XEN)    000000010000e030 0000000000010006 ffffffff82a03ea8 000000000000e02b
>> (XEN)    0000000000000000 ffffffff832ae884 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 ffffffff832a317f 0000000000000000
> 
> Further analysis might be easier if you can supply function + 
> displacement for
> any text segment addresses on the stack.

ffffffff832ae884: arch/x86/include/asm/text-patching.h:112
ffffffff832a317f: arch/x86/kernel/head64.c:325

> 
> BTW, I could boot the kernel with my patches as Dom0 without any 
> problem. OTOH
> it booted even without the patches. :-)

So I have tried with two different compilers (GCC 7.3.1 and GCC 10.2.1)
and hit the same error with both. This suggests the problem is related
to my .config. You can find it in [1] if you want to reproduce it
yourself.

Cheers,

[1] https://pastebin.com/xityGDN9

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:42:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353888.580893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cr-0002Wn-7K; Wed, 22 Jun 2022 13:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353888.580893; Wed, 22 Jun 2022 13:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cr-0002Wg-2k; Wed, 22 Jun 2022 13:42:33 +0000
Received: by outflank-mailman (input) for mailman id 353888;
 Wed, 22 Jun 2022 13:42:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4T0g=W5=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o40cp-0002HE-Ng
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:42:31 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29aec4b5-f231-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 15:42:30 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id i18so14214089lfu.8
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 06:42:30 -0700 (PDT)
Received: from jade.urgonet (h-79-136-84-253.A175.priv.bahnhof.se.
 [79.136.84.253]) by smtp.gmail.com with ESMTPSA id
 p5-20020ac24ec5000000b0047f666011e4sm1523292lfr.26.2022.06.22.06.42.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 06:42:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29aec4b5-f231-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=K3/VT3Y9oOhER9vVT3TVNVa0siiJyunGdW85AfZUMjo=;
        b=uy2h+Yh/FPR7Ao08uURaLGllmWPGZXKyimqkEJSBrOeoD3osOJ0ytzHni49lp+7Xjq
         W9Z2J8tmjJh+6FkvwGJIGx80HlHnVQB/ChtoU4uDWrDUhjmNyKti4gpqwmwhCHeU1yhI
         L+4L62p8uFnAyJ2YCQB8VZ13uAwcOmT79VgfuJEZo4Y7FNV/5qax8r/LK642CsksR8fU
         dsSHNCT+WH2G0AMjE8c6Jr1LSwALU9c5T8+JwPM8+YBeq8F8LWwsCgcTmQi/W3Lmu6jh
         AlDu8wKTa1wEctbFX21TEUAGG5/sLOhxu9csFNEysS9VH42p9xALaSr2AdpVWygF02uw
         m1ng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=K3/VT3Y9oOhER9vVT3TVNVa0siiJyunGdW85AfZUMjo=;
        b=7pJv8acgpdVqEp87lIZYY3oqYtXrgGbwOhlZa6E/pmm6QasUWQsKKLUaSJvcd08rJR
         xNRIkwrGjBj0Bfr+zvOh+AbRd2fngfQhMbF03xAAY0G/yWkaUJXrv2tgxCeCnaE4/k2P
         ZWipJz90Ob7W37DglMpzQHluhmEKZZxqNFGgA2FocTnJVrxYVhwoBrwWx+5iPImEkNcf
         hq6THdAlR0xM48RSnWxPosjkUNKS0bkK64deaeuTi43Lq2i21LFwh2gXFyCn9wOrufAT
         B/MCv26gKHo4JP+2coBvNHSBia+tnDUHVHrL5jYl2tQQegjeb+T/j0rwHqe32zEaLDFz
         syHg==
X-Gm-Message-State: AJIora/him6UU9DxSXFivaHjTvvBe0ayxnaJheHpQyQKZzkiBxabVcRZ
	c83yzX6EGZsTJ/7jq0tBSZuJt0mCazzOgA==
X-Google-Smtp-Source: AGRyM1szUTxnhQl+IgX6yfjuKDsYGN6dbV57I65NHHvZcFIvRoxeFZsuWUNP8afpuqNLdY22Vd0t8w==
X-Received: by 2002:a05:6512:110f:b0:47f:985c:e010 with SMTP id l15-20020a056512110f00b0047f985ce010mr1828359lfg.390.1655905350212;
        Wed, 22 Jun 2022 06:42:30 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand.Marquis@arm.com,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v4 1/2] xen/arm: smccc: add support for SMCCCv1.2 extended input/output registers
Date: Wed, 22 Jun 2022 15:42:18 +0200
Message-Id: <20220622134219.1596613-2-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220622134219.1596613-1-jens.wiklander@linaro.org>
References: <20220622134219.1596613-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SMCCC v1.2 AArch64 allows x0-x17 to be used as both parameter registers
and result registers for the SMC and HVC instructions.

Arm Firmware Framework for Armv8-A specification makes use of x0-x7 as
parameter and result registers.

Let us add a new interface to support this extended set of input/output
registers.

This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
extended input/output registers") by Sudeep Holla from the Linux kernel.
The SMCCC version reported to the VM is bumped to 1.2 in order to support
handling FF-A messages.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/arm64/asm-offsets.c |  9 +++++++
 xen/arch/arm/arm64/smc.S         | 43 ++++++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/smccc.h | 40 +++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c              |  2 +-
 4 files changed, 93 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
index 280ddb55bfd4..1721e1ed26e1 100644
--- a/xen/arch/arm/arm64/asm-offsets.c
+++ b/xen/arch/arm/arm64/asm-offsets.c
@@ -56,6 +56,15 @@ void __dummy__(void)
    BLANK();
    OFFSET(SMCCC_RES_a0, struct arm_smccc_res, a0);
    OFFSET(SMCCC_RES_a2, struct arm_smccc_res, a2);
+   OFFSET(ARM_SMCCC_1_2_REGS_X0_OFFS, struct arm_smccc_1_2_regs, a0);
+   OFFSET(ARM_SMCCC_1_2_REGS_X2_OFFS, struct arm_smccc_1_2_regs, a2);
+   OFFSET(ARM_SMCCC_1_2_REGS_X4_OFFS, struct arm_smccc_1_2_regs, a4);
+   OFFSET(ARM_SMCCC_1_2_REGS_X6_OFFS, struct arm_smccc_1_2_regs, a6);
+   OFFSET(ARM_SMCCC_1_2_REGS_X8_OFFS, struct arm_smccc_1_2_regs, a8);
+   OFFSET(ARM_SMCCC_1_2_REGS_X10_OFFS, struct arm_smccc_1_2_regs, a10);
+   OFFSET(ARM_SMCCC_1_2_REGS_X12_OFFS, struct arm_smccc_1_2_regs, a12);
+   OFFSET(ARM_SMCCC_1_2_REGS_X14_OFFS, struct arm_smccc_1_2_regs, a14);
+   OFFSET(ARM_SMCCC_1_2_REGS_X16_OFFS, struct arm_smccc_1_2_regs, a16);
 }
 
 /*
diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
index 91bae62dd4d2..c546192e7f2d 100644
--- a/xen/arch/arm/arm64/smc.S
+++ b/xen/arch/arm/arm64/smc.S
@@ -27,3 +27,46 @@ ENTRY(__arm_smccc_1_0_smc)
         stp     x2, x3, [x4, #SMCCC_RES_a2]
 1:
         ret
+
+
+/*
+ * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+ *                        struct arm_smccc_1_2_regs *res)
+ */
+ENTRY(arm_smccc_1_2_smc)
+    /* Save `res` and free a GPR that won't be clobbered */
+    stp     x1, x19, [sp, #-16]!
+
+    /* Ensure `args` won't be clobbered while loading regs in next step */
+    mov	x19, x0
+
+    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
+    ldp	x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
+    ldp	x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
+    ldp	x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
+    ldp	x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
+    ldp	x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
+    ldp	x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
+    ldp	x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
+    ldp	x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
+    ldp	x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
+
+    smc #0
+
+    /* Load the `res` from the stack */
+    ldr	x19, [sp]
+
+    /* Store the registers x0 - x17 into the result structure */
+    stp	x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
+    stp	x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
+    stp	x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
+    stp	x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
+    stp	x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
+    stp	x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
+    stp	x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
+    stp	x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
+    stp	x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
+
+    /* Restore original x19 */
+    ldp     xzr, x19, [sp], #16
+    ret
diff --git a/xen/arch/arm/include/asm/smccc.h b/xen/arch/arm/include/asm/smccc.h
index b3dbeecc90ad..b5e3f67eb34e 100644
--- a/xen/arch/arm/include/asm/smccc.h
+++ b/xen/arch/arm/include/asm/smccc.h
@@ -33,6 +33,7 @@
 
 #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
 #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
+#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
 
 /*
  * This file provides common defines for ARM SMC Calling Convention as
@@ -265,6 +266,45 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
         else                                                    \
             arm_smccc_1_0_smc(__VA_ARGS__);                     \
     } while ( 0 )
+
+/**
+ * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
+ * @a0-a17 argument values from registers 0 to 17
+ */
+struct arm_smccc_1_2_regs {
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long a8;
+    unsigned long a9;
+    unsigned long a10;
+    unsigned long a11;
+    unsigned long a12;
+    unsigned long a13;
+    unsigned long a14;
+    unsigned long a15;
+    unsigned long a16;
+    unsigned long a17;
+};
+
+/**
+ * arm_smccc_1_2_smc() - make SMC calls
+ * @args: arguments passed via struct arm_smccc_1_2_regs
+ * @res: result values via struct arm_smccc_1_2_regs
+ *
+ * This function is used to make SMC calls following SMC Calling Convention
+ * v1.2 or above. The contents of the supplied @args structure are copied
+ * to registers prior to the SMC instruction. The @res structure is
+ * updated with the register contents on return from the SMC
+ * instruction.
+ */
+void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+                       struct arm_smccc_1_2_regs *res);
 #endif /* CONFIG_ARM_64 */
 
 #endif /* __ASSEMBLY__ */
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 676740ef1520..6f90c08a6304 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -93,7 +93,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
     switch ( fid )
     {
     case ARM_SMCCC_VERSION_FID:
-        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
+        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
         return true;
 
     case ARM_SMCCC_ARCH_FEATURES_FID:
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:42:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353887.580881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cp-0002HS-UC; Wed, 22 Jun 2022 13:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353887.580881; Wed, 22 Jun 2022 13:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cp-0002HK-RP; Wed, 22 Jun 2022 13:42:31 +0000
Received: by outflank-mailman (input) for mailman id 353887;
 Wed, 22 Jun 2022 13:42:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4T0g=W5=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o40co-0002HE-Nb
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:42:30 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28cd1c9c-f231-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 15:42:29 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id g12so13348463ljk.11
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 06:42:29 -0700 (PDT)
Received: from jade.urgonet (h-79-136-84-253.A175.priv.bahnhof.se.
 [79.136.84.253]) by smtp.gmail.com with ESMTPSA id
 p5-20020ac24ec5000000b0047f666011e4sm1523292lfr.26.2022.06.22.06.42.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 06:42:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28cd1c9c-f231-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=vhgjqfIBUtzNmowELx4pok3t4Jd74q2jodj8xtz+2fI=;
        b=T+f69ZQ9IFhNkNCoU0TfgfWaA1dETBQEixxLa9e/jpEpbfpNt066Y2DnFwBlx3Yxh8
         vZ+ySqWtYbOgg7lXtQ/6vdyoRkp6b2J9QFwG5A1sE4HzW7DOm255fcV7zlk1rAIRHWsc
         vimBdgsOQwCVNr18JSnRLaqVf90KSfoKIrrIDBc7OrwtHUv/vX+na5LKMYAuwuBTonaC
         IiCZOMXU5HHsJIflcw+K5H1YnvbP3AUTQgQAELset0Id4RrL695dz3uy5tpneTBVKS6x
         uGrxcWnEW3bLfZzPkwdHBHMPXjsb6TwR21BOUrWoWO3LhYHl/L7y9ZiT5XEtZnMrBg5N
         OVGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=vhgjqfIBUtzNmowELx4pok3t4Jd74q2jodj8xtz+2fI=;
        b=64PKWzJ6OLRpm+Po0yGQfBKoANlLx0XhFXjYxOC8NH+ipdiR0VeGNqLmHZ5fsUQEGM
         Tx+zybb5CiHqrd2f++4jzQXhiMNIg0a/eAciUG9lpB37NtYU20jJyq9m00vnvp/t0QVp
         ZK+F5rZVsOFlUO5M4aZTuiRvRqKWSpsSXjleLt7K2WpspqAaA5qsReo6CPwLld26ULRO
         lXoeSGpAgjXueeZqH7CFdUVpxdKij1hPACCgpyHaL/3iUJBImlz7bu4EKJxOkMvajQt4
         9mizsanYXZ7HR9d5nubFDgHMFFlf/0Wbs7aYXM4px94ZtlHxn1Z7WQS4ozFZHtW9zyUK
         xQow==
X-Gm-Message-State: AJIora+q48vJImE/hGBO6AV0Ciu0vl3+dKWr5edF3lhjEQZ1h2Ob4aHF
	8cKSwJZKSgJIw4Ngh31T7dFFPs+Q/rwDuw==
X-Google-Smtp-Source: AGRyM1srHY5Xlsv6oPCz6ZMknz3XrKlsMD05vaIjs29V9Ovi2QkH7JPCX2VekXaEwzELdBGKyts3UQ==
X-Received: by 2002:a2e:6f03:0:b0:25a:74b7:3a59 with SMTP id k3-20020a2e6f03000000b0025a74b73a59mr1920791ljc.390.1655905348518;
        Wed, 22 Jun 2022 06:42:28 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand.Marquis@arm.com,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v4 0/2] Xen FF-A mediator
Date: Wed, 22 Jun 2022 15:42:17 +0200
Message-Id: <20220622134219.1596613-1-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch set adds an FF-A [1] mediator modeled after the TEE mediator
already present in Xen. The FF-A mediator implements the subset of the FF-A
1.1 specification needed to communicate with OP-TEE using FF-A as the
transport mechanism instead of SMC/HVC as with the TEE mediator. It allows
a design in OP-TEE similar to that used with the TEE mediator, where OP-TEE
presents one virtual partition of itself to each guest in Xen.

The FF-A mediator is generic in the sense that it has nothing OP-TEE
specific, except that only the subset needed for OP-TEE is implemented so
far. The hooks needed to inform OP-TEE that a guest is created or destroyed
are part of the FF-A specification.

It should be possible to extend the FF-A mediator to implement a larger
portion of the FF-A 1.1 specification without breaking the way OP-TEE is
communicated with here. So this mediator should be able to support any TEE
or Secure Partition that uses FF-A as transport.

[1] https://developer.arm.com/documentation/den0077/latest

Thanks,
Jens

v2->v3:
* Generates offsets into struct arm_smccc_1_2_regs with asm-offsets.c in
  order to avoid hard-coded offsets in the assembly function
  arm_smccc_1_2_smc()
* Adds an entry in SUPPORT.md on the FF-A status
* Adds a configuration variable "ffa_enabled" to tell if FF-A should be
  enabled for a particular domU guest
* Moves the ffa_frag_list for fragmented memory share requests into
  struct ffa_ctx instead to keep it per guest in order to avoid mixups
  and simplify locking
* Adds a spinlock to struct ffa_ctx for per guest locking
* Addresses style issues and suggestions
* Uses FFA_FEATURES to check that all the needed features are available
  before initializing the mediator
* Rebased on staging as of 2022-06-20
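For reference, the new "ffa_enabled" option is a per-guest defbool read by
xl from the guest configuration (it defaults to disabled; dom0 always has
the mediator enabled). A hypothetical domU config fragment could look like:

```
# Hypothetical xl guest config sketch: enable the FF-A mediator for this
# domU (requires Xen built with CONFIG_FFA; defaults to disabled).
ffa_enabled = 1
```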

v1->v2:
* Rebased on staging to resolve some merge conflicts as requested

Jens Wiklander (2):
  xen/arm: smccc: add support for SMCCCv1.2 extended input/output
    registers
  xen/arm: add FF-A mediator

 SUPPORT.md                        |    7 +
 tools/libs/light/libxl_arm.c      |    3 +
 tools/libs/light/libxl_types.idl  |    1 +
 tools/xl/xl_parse.c               |    3 +
 xen/arch/arm/Kconfig              |   11 +
 xen/arch/arm/Makefile             |    1 +
 xen/arch/arm/arm64/asm-offsets.c  |    9 +
 xen/arch/arm/arm64/smc.S          |   43 +
 xen/arch/arm/domain.c             |   10 +
 xen/arch/arm/domain_build.c       |    1 +
 xen/arch/arm/ffa.c                | 1683 +++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h |    4 +
 xen/arch/arm/include/asm/ffa.h    |   71 ++
 xen/arch/arm/include/asm/smccc.h  |   40 +
 xen/arch/arm/vsmc.c               |   19 +-
 xen/include/public/arch-arm.h     |    2 +
 16 files changed, 1904 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/arch/arm/include/asm/ffa.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:42:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353889.580904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cu-0002o0-II; Wed, 22 Jun 2022 13:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353889.580904; Wed, 22 Jun 2022 13:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40cu-0002nt-EL; Wed, 22 Jun 2022 13:42:36 +0000
Received: by outflank-mailman (input) for mailman id 353889;
 Wed, 22 Jun 2022 13:42:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4T0g=W5=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1o40ct-0002HE-Af
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:42:35 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ae24e6d-f231-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 15:42:32 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id j22so12791733ljg.0
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 06:42:32 -0700 (PDT)
Received: from jade.urgonet (h-79-136-84-253.A175.priv.bahnhof.se.
 [79.136.84.253]) by smtp.gmail.com with ESMTPSA id
 p5-20020ac24ec5000000b0047f666011e4sm1523292lfr.26.2022.06.22.06.42.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 06:42:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ae24e6d-f231-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zJtEm+kpOsGP4ZHT8csw1/rmKU3ebF1l5uRGgggyEtU=;
        b=C89jQ9C/Hn7dshbYBV9yJ0LIoYRdKq7A3jukwvajuCJYT/Xd9m4t5RDsfU3cTtaaKj
         36BV2Z+Cl0+I1SbS7oo82U9iaZBKtMygrSVf/xvz3sJM/MX9V4YVVyhTiCYHtCy8jYE+
         r+uYWHhf7ZW4lGEMyfocZ5VKD+NP2JEq7Dw9YFXjPNXM8SxhR8+DLXbDwDX9gls9p2GV
         8r/DXmvF29tLKaWzPA2euKnl+E3bUynh4qBCsWMZ+7hBAA5Xfu8QfJ42ICEPI/znnseS
         4ge6NftTiPRIlibDTohNFgx9gcZJHBkd4GKNbMvW6sNoTlXdpZ2i9VCpVb3AHyy3O8SB
         GEog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zJtEm+kpOsGP4ZHT8csw1/rmKU3ebF1l5uRGgggyEtU=;
        b=tE2N6afXHJnsAQLBpsqwGAMn3nt/++/QFqFCxnHdGIr91laE0xwVWXfmOgoH3AyTSz
         8xuG44cDR0Gafk1kPc1NbxG0J/eKqOVQ2whxqDhG3rA3WPqlBjDx8cafkbAbC0iILjbJ
         XoNChkoZ1BlwO0qjR+EE/BD2Psu8m93R50WAqZ0vcCB3bKF5njSAdI241HuzmebH6Pgk
         LcmoTWngboqSP/x8hcYkts/Kk+GELMZwyOll7fdLMu98idPvSWJDprcyct4Csd3Wx4uT
         32B17lwg6nUmGdVdyu5eIQf64+PXzB0KvIa98Zigc/XNaNi1qD4TfMG27oMeyOR2Q4HO
         mitg==
X-Gm-Message-State: AJIora98plUePEVABDxMzfgft0jzDSY3Gl7WB7BKPEnZWuPtTEQd8ttO
	3A2P3gog5InLDdamd7eTqdQLbR9yRpZUcw==
X-Google-Smtp-Source: AGRyM1uw9bpwmUh+pv1Xz5uFggx/wG7jQmhFK3+ZsJHmbdDuHk8YBwZuhGH/bD+xHa6q01PV6W7TPQ==
X-Received: by 2002:a05:651c:514:b0:25a:6232:dbd4 with SMTP id o20-20020a05651c051400b0025a6232dbd4mr1956017ljp.5.1655905351586;
        Wed, 22 Jun 2022 06:42:31 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand.Marquis@arm.com,
	Jens Wiklander <jens.wiklander@linaro.org>
Subject: [PATCH v4 2/2] xen/arm: add FF-A mediator
Date: Wed, 22 Jun 2022 15:42:19 +0200
Message-Id: <20220622134219.1596613-3-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220622134219.1596613-1-jens.wiklander@linaro.org>
References: <20220622134219.1596613-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds an FF-A version 1.1 [1] mediator to communicate with a Secure
Partition in the secure world.

The implementation is the bare minimum to be able to communicate with
OP-TEE running as an SPMC at S-EL1.

This is loosely based on the TEE mediator framework and the OP-TEE
mediator.

[1] https://developer.arm.com/documentation/den0077/latest
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 SUPPORT.md                        |    7 +
 tools/libs/light/libxl_arm.c      |    3 +
 tools/libs/light/libxl_types.idl  |    1 +
 tools/xl/xl_parse.c               |    3 +
 xen/arch/arm/Kconfig              |   11 +
 xen/arch/arm/Makefile             |    1 +
 xen/arch/arm/domain.c             |   10 +
 xen/arch/arm/domain_build.c       |    1 +
 xen/arch/arm/ffa.c                | 1683 +++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h |    4 +
 xen/arch/arm/include/asm/ffa.h    |   71 ++
 xen/arch/arm/vsmc.c               |   17 +-
 xen/include/public/arch-arm.h     |    2 +
 13 files changed, 1811 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/arm/ffa.c
 create mode 100644 xen/arch/arm/include/asm/ffa.h

diff --git a/SUPPORT.md b/SUPPORT.md
index 70e98964cbc0..215bb3c9043b 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -785,6 +785,13 @@ that covers the DMA of the device to be passed through.
 
 No support for QEMU backends in a 16K or 64K domain.
 
+### ARM: Firmware Framework for Arm A-profile (FF-A) Mediator
+
+    Status, Arm64: Tech Preview
+
+There are still some code paths where a vCPU may hog a pCPU longer than
+necessary. The FF-A mediator is not yet implemented for Arm32.
+
 ### ARM: Guest Device Tree support
 
     Status: Supported
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de093914..a985609861c7 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -101,6 +101,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    config->arch.ffa_enabled =
+        libxl_defbool_val(d_config->b_info.arch_arm.ffa_enabled);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2a42da2f7d78..bf4544bef399 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -646,6 +646,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("ffa_enabled", libxl_defbool),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index b98c0de378b6..e0e99ed8d2b1 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2746,6 +2746,9 @@ skip_usbdev:
             exit(-ERROR_FAIL);
         }
     }
+    libxl_defbool_setdefault(&b_info->arch_arm.ffa_enabled, false);
+    xlu_cfg_get_defbool(config, "ffa_enabled",
+                        &b_info->arch_arm.ffa_enabled, 0);
 
     parse_vkb_list(config, d_config);
 
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index be9eff014120..e57e1d3757e2 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -139,6 +139,17 @@ config TEE
 
 source "arch/arm/tee/Kconfig"
 
+config FFA
+	bool "Enable FF-A mediator support" if EXPERT
+	default n
+	depends on ARM_64
+	help
+	  This option enables a minimal FF-A mediator. The mediator is
+	  generic as it follows the FF-A specification [1], but it only
+	  implements a small subset of the specification.
+
+	  [1] https://developer.arm.com/documentation/den0077/latest
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index bb7a6151c13c..af0c69f793d4 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -20,6 +20,7 @@ obj-y += domain_build.init.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += efi/
+obj-$(CONFIG_FFA) += ffa.o
 obj-y += gic.o
 obj-y += gic-v2.o
 obj-$(CONFIG_GICV3) += gic-v3.o
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8110c1df8638..a3f00e7e234d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -27,6 +27,7 @@
 #include <asm/cpufeature.h>
 #include <asm/current.h>
 #include <asm/event.h>
+#include <asm/ffa.h>
 #include <asm/gic.h>
 #include <asm/guest_atomics.h>
 #include <asm/irq.h>
@@ -756,6 +757,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = tee_domain_init(d, config->arch.tee_type)) != 0 )
         goto fail;
 
+    if ( (rc = ffa_domain_init(d, config->arch.ffa_enabled)) != 0 )
+        goto fail;
+
     update_domain_wallclock_time(d);
 
     /*
@@ -998,6 +1002,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
 enum {
     PROG_pci = 1,
     PROG_tee,
+    PROG_ffa,
     PROG_xen,
     PROG_page,
     PROG_mapping,
@@ -1043,6 +1048,11 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(tee):
         ret = tee_relinquish_resources(d);
+        if ( ret )
+            return ret;
+
+    PROGRESS(ffa):
+        ret = ffa_relinquish_resources(d);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26da5..d708f76356f7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3450,6 +3450,7 @@ void __init create_dom0(void)
     if ( gic_number_lines() > 992 )
         printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded.\n");
     dom0_cfg.arch.tee_type = tee_get_type();
+    dom0_cfg.arch.ffa_enabled = true;
     dom0_cfg.max_vcpus = dom0_max_vcpus();
 
     if ( iommu_enabled )
diff --git a/xen/arch/arm/ffa.c b/xen/arch/arm/ffa.c
new file mode 100644
index 000000000000..3117ce5cec4d
--- /dev/null
+++ b/xen/arch/arm/ffa.c
@@ -0,0 +1,1683 @@
+/*
+ * xen/arch/arm/ffa.c
+ *
+ * Arm Firmware Framework for ARMv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2022  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <xen/domain_page.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+#include <xen/sizes.h>
+#include <xen/bitops.h>
+
+#include <asm/smccc.h>
+#include <asm/event.h>
+#include <asm/ffa.h>
+#include <asm/regs.h>
+
+/* Error codes */
+#define FFA_RET_OK			0
+#define FFA_RET_NOT_SUPPORTED		-1
+#define FFA_RET_INVALID_PARAMETERS	-2
+#define FFA_RET_NO_MEMORY		-3
+#define FFA_RET_BUSY			-4
+#define FFA_RET_INTERRUPTED		-5
+#define FFA_RET_DENIED			-6
+#define FFA_RET_RETRY			-7
+#define FFA_RET_ABORTED			-8
+
+/* FFA_VERSION helpers */
+#define FFA_VERSION_MAJOR		_AC(1,U)
+#define FFA_VERSION_MAJOR_SHIFT		_AC(16,U)
+#define FFA_VERSION_MAJOR_MASK		_AC(0x7FFF,U)
+#define FFA_VERSION_MINOR		_AC(1,U)
+#define FFA_VERSION_MINOR_SHIFT		_AC(0,U)
+#define FFA_VERSION_MINOR_MASK		_AC(0xFFFF,U)
+#define MAKE_FFA_VERSION(major, minor)	\
+	((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
+	 ((minor) & FFA_VERSION_MINOR_MASK))
+
+#define FFA_MIN_VERSION		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_0		MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_1		MAKE_FFA_VERSION(1, 1)
+#define FFA_MY_VERSION		MAKE_FFA_VERSION(FFA_VERSION_MAJOR, \
+						 FFA_VERSION_MINOR)
+
+
+#define FFA_HANDLE_HYP_FLAG             BIT(63,ULL)
+
+/* Memory attributes: Normal memory, Write-Back cacheable, Inner shareable */
+#define FFA_NORMAL_MEM_REG_ATTR		_AC(0x2f,U)
+
+/* Memory access permissions: Read-write */
+#define FFA_MEM_ACC_RW			_AC(0x2,U)
+
+/* Clear memory before mapping in receiver */
+#define FFA_MEMORY_REGION_FLAG_CLEAR		BIT(0, U)
+/* Relayer may time slice this operation */
+#define FFA_MEMORY_REGION_FLAG_TIME_SLICE	BIT(1, U)
+/* Clear memory after receiver relinquishes it */
+#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH	BIT(2, U)
+
+/* Share memory transaction */
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (_AC(1,U) << 3)
+
+#define FFA_HANDLE_INVALID		_AC(0xffffffffffffffff,ULL)
+
+/* Framework direct request/response */
+#define FFA_MSG_FLAG_FRAMEWORK		BIT(31, U)
+#define FFA_MSG_TYPE_MASK		_AC(0xFF,U)
+#define FFA_MSG_PSCI			_AC(0x0,U)
+#define FFA_MSG_SEND_VM_CREATED		_AC(0x4,U)
+#define FFA_MSG_RESP_VM_CREATED		_AC(0x5,U)
+#define FFA_MSG_SEND_VM_DESTROYED	_AC(0x6,U)
+#define FFA_MSG_RESP_VM_DESTROYED	_AC(0x7,U)
+
+/*
+ * Flags used for the FFA_PARTITION_INFO_GET return message:
+ * BIT(0): Supports receipt of direct requests
+ * BIT(1): Can send direct requests
+ * BIT(2): Can send and receive indirect messages
+ * BIT(3): Supports receipt of notifications
+ * BIT(4-5): Partition ID is a PE endpoint ID
+ */
+#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0,U)
+#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1,U)
+#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2,U)
+#define FFA_PART_PROP_RECV_NOTIF        BIT(3,U)
+#define FFA_PART_PROP_IS_PE_ID          (_AC(0,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_INDEP    (_AC(1,U) << 4)
+#define FFA_PART_PROP_IS_SEPID_DEP      (_AC(2,U) << 4)
+#define FFA_PART_PROP_IS_AUX_ID         (_AC(3,U) << 4)
+#define FFA_PART_PROP_NOTIF_CREATED     BIT(6,U)
+#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7,U)
+#define FFA_PART_PROP_AARCH64_STATE     BIT(8,U)
+
+/* Function IDs */
+#define FFA_ERROR			_AC(0x84000060,U)
+#define FFA_SUCCESS_32			_AC(0x84000061,U)
+#define FFA_SUCCESS_64			_AC(0xC4000061,U)
+#define FFA_INTERRUPT			_AC(0x84000062,U)
+#define FFA_VERSION			_AC(0x84000063,U)
+#define FFA_FEATURES			_AC(0x84000064,U)
+#define FFA_RX_ACQUIRE			_AC(0x84000084,U)
+#define FFA_RX_RELEASE			_AC(0x84000065,U)
+#define FFA_RXTX_MAP_32			_AC(0x84000066,U)
+#define FFA_RXTX_MAP_64			_AC(0xC4000066,U)
+#define FFA_RXTX_UNMAP			_AC(0x84000067,U)
+#define FFA_PARTITION_INFO_GET		_AC(0x84000068,U)
+#define FFA_ID_GET			_AC(0x84000069,U)
+#define FFA_SPM_ID_GET			_AC(0x84000085,U)
+#define FFA_MSG_WAIT			_AC(0x8400006B,U)
+#define FFA_MSG_YIELD			_AC(0x8400006C,U)
+#define FFA_MSG_RUN			_AC(0x8400006D,U)
+#define FFA_MSG_SEND2			_AC(0x84000086,U)
+#define FFA_MSG_SEND_DIRECT_REQ_32	_AC(0x8400006F,U)
+#define FFA_MSG_SEND_DIRECT_REQ_64	_AC(0xC400006F,U)
+#define FFA_MSG_SEND_DIRECT_RESP_32	_AC(0x84000070,U)
+#define FFA_MSG_SEND_DIRECT_RESP_64	_AC(0xC4000070,U)
+#define FFA_MEM_DONATE_32		_AC(0x84000071,U)
+#define FFA_MEM_DONATE_64		_AC(0xC4000071,U)
+#define FFA_MEM_LEND_32			_AC(0x84000072,U)
+#define FFA_MEM_LEND_64			_AC(0xC4000072,U)
+#define FFA_MEM_SHARE_32		_AC(0x84000073,U)
+#define FFA_MEM_SHARE_64		_AC(0xC4000073,U)
+#define FFA_MEM_RETRIEVE_REQ_32		_AC(0x84000074,U)
+#define FFA_MEM_RETRIEVE_REQ_64		_AC(0xC4000074,U)
+#define FFA_MEM_RETRIEVE_RESP		_AC(0x84000075,U)
+#define FFA_MEM_RELINQUISH		_AC(0x84000076,U)
+#define FFA_MEM_RECLAIM			_AC(0x84000077,U)
+#define FFA_MEM_FRAG_RX			_AC(0x8400007A,U)
+#define FFA_MEM_FRAG_TX			_AC(0x8400007B,U)
+#define FFA_MSG_SEND			_AC(0x8400006E,U)
+#define FFA_MSG_POLL			_AC(0x8400006A,U)
+
+/* Partition information descriptor */
+struct ffa_partition_info_1_0 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+};
+
+struct ffa_partition_info_1_1 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+    uint8_t uuid[16];
+};
+
+/* Constituent memory region descriptor */
+struct ffa_address_range {
+    uint64_t address;
+    uint32_t page_count;
+    uint32_t reserved;
+};
+
+/* Composite memory region descriptor */
+struct ffa_mem_region {
+    uint32_t total_page_count;
+    uint32_t address_range_count;
+    uint64_t reserved;
+    struct ffa_address_range address_range_array[];
+};
+
+/* Memory access permissions descriptor */
+struct ffa_mem_access_perm {
+    uint16_t endpoint_id;
+    uint8_t perm;
+    uint8_t flags;
+};
+
+/* Endpoint memory access descriptor */
+struct ffa_mem_access {
+    struct ffa_mem_access_perm access_perm;
+    uint32_t region_offs;
+    uint64_t reserved;
+};
+
+/* Lend, donate or share memory transaction descriptor */
+struct ffa_mem_transaction_1_0 {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t reserved0;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t reserved1;
+    uint32_t mem_access_count;
+    struct ffa_mem_access mem_access_array[];
+};
+
+struct ffa_mem_transaction_1_1 {
+    uint16_t sender_id;
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint64_t global_handle;
+    uint64_t tag;
+    uint32_t mem_access_size;
+    uint32_t mem_access_count;
+    uint32_t mem_access_offs;
+    uint8_t reserved[12];
+};
+
+/*
+ * The parts needed from struct ffa_mem_transaction_1_0 or struct
+ * ffa_mem_transaction_1_1, used to provide an abstraction of the differences
+ * in data structures between versions 1.0 and 1.1. This is just an internal
+ * interface and can be changed without changing any ABI.
+ */
+struct ffa_mem_transaction_x {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t flags;
+    uint8_t mem_access_size;
+    uint8_t mem_access_count;
+    uint16_t mem_access_offs;
+    uint64_t global_handle;
+    uint64_t tag;
+};
+
+/* Endpoint RX/TX descriptor */
+struct ffa_endpoint_rxtx_descriptor_1_0 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_range_count;
+    uint32_t tx_range_count;
+};
+
+struct ffa_endpoint_rxtx_descriptor_1_1 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_region_offs;
+    uint32_t tx_region_offs;
+};
+
+struct ffa_ctx {
+    void *rx;
+    void *tx;
+    struct page_info *rx_pg;
+    struct page_info *tx_pg;
+    unsigned int page_count;
+    uint32_t guest_vers;
+    bool tx_is_mine;
+    bool interrupted;
+    struct list_head frag_list;
+    spinlock_t lock;
+};
+
+struct ffa_shm_mem {
+    struct list_head list;
+    uint16_t sender_id;
+    uint16_t ep_id;     /* endpoint, the one lending */
+    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
+    unsigned int page_count;
+    struct page_info *pages[];
+};
+
+struct mem_frag_state {
+    struct list_head list;
+    struct ffa_shm_mem *shm;
+    uint32_t range_count;
+    unsigned int current_page_idx;
+    unsigned int frag_offset;
+    unsigned int range_offset;
+    uint8_t *buf;
+    unsigned int buf_size;
+    struct ffa_address_range range;
+};
+
+/*
+ * Our rx/tx buffers shared with the SPMC
+ */
+static uint32_t ffa_version;
+static uint16_t *subsr_vm_created;
+static unsigned int subsr_vm_created_count;
+static uint16_t *subsr_vm_destroyed;
+static unsigned int subsr_vm_destroyed_count;
+static void *ffa_rx;
+static void *ffa_tx;
+static unsigned int ffa_page_count;
+static DEFINE_SPINLOCK(ffa_buffer_lock);
+
+static LIST_HEAD(ffa_mem_list);
+static DEFINE_SPINLOCK(ffa_mem_list_lock);
+
+static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
+
+static inline uint64_t reg_pair_to_64(uint32_t reg0, uint32_t reg1)
+{
+    return (uint64_t)reg0 << 32 | reg1;
+}
+
+static inline void reg_pair_from_64(uint32_t *reg0, uint32_t *reg1,
+                                    uint64_t val)
+{
+    *reg0 = val >> 32;
+    *reg1 = val;
+}
+
+static bool ffa_get_version(uint32_t *vers)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_VERSION, .a1 = FFA_MY_VERSION,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
+    {
+        printk(XENLOG_ERR "ffa: FFA_VERSION returned not supported\n");
+        return false;
+    }
+
+    *vers = resp.a0;
+    return true;
+}
+
+static uint32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
+{
+    switch ( resp->a0 )
+    {
+    case FFA_ERROR:
+        if ( resp->a2 )
+            return resp->a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+    case FFA_SUCCESS_64:
+        return FFA_RET_OK;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static uint32_t ffa_features(uint32_t id)
+{
+    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_FEATURES, .a1 = id, };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    return get_ffa_ret_code(&resp);
+}
+
+static bool check_mandatory_feature(uint32_t id)
+{
+    uint32_t ret = ffa_features(id);
+
+    if ( ret )
+        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing\n", id);
+    return !ret;
+}
+
+static uint32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
+                             uint32_t page_count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+#ifdef CONFIG_ARM_64
+        .a0 = FFA_RXTX_MAP_64,
+#endif
+#ifdef CONFIG_ARM_32
+        .a0 = FFA_RXTX_MAP_32,
+#endif
+        .a1 = tx_addr, .a2 = rx_addr,
+        .a3 = page_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return get_ffa_ret_code(&resp);
+}
+
+static uint32_t ffa_rxtx_unmap(uint16_t vm_id)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_RXTX_UNMAP, .a1 = vm_id,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return get_ffa_ret_code(&resp);
+}
+
+static uint32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                       uint32_t w4, uint32_t w5,
+                                       uint32_t *count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_PARTITION_INFO_GET, .a1 = w1, .a2 = w2, .a3 = w3, .a4 = w4,
+        .a5 = w5,
+    };
+    struct arm_smccc_1_2_regs resp;
+    uint32_t ret;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    ret = get_ffa_ret_code(&resp);
+    if ( !ret )
+        *count = resp.a2;
+
+    return ret;
+}
+
+static uint32_t ffa_rx_release(void)
+{
+    const struct arm_smccc_1_2_regs arg = { .a0 = FFA_RX_RELEASE, };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return get_ffa_ret_code(&resp);
+}
+
+static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
+                             register_t addr, uint32_t pg_count,
+                             uint64_t *handle)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_SHARE_32, .a1 = tot_len, .a2 = frag_len, .a3 = addr,
+        .a4 = pg_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    /*
+     * For arm64 we must use 64-bit calling convention if the buffer isn't
+     * passed in our tx buffer.
+     */
+    if ( sizeof(addr) > 4 && addr )
+        arg.a0 = FFA_MEM_SHARE_64;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        *handle = reg_pair_to_64(resp.a3, resp.a2);
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        *handle = reg_pair_to_64(resp.a2, resp.a1);
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
+                               uint16_t sender_id)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_FRAG_TX, .a1 = handle & UINT32_MAX, .a2 = handle >> 32,
+        .a3 = frag_len, .a4 = (uint32_t)sender_id << 16,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static uint32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
+                                uint32_t flags)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_RECLAIM, .a1 = handle_lo, .a2 = handle_hi, .a3 = flags,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return get_ffa_ret_code(&resp);
+}
+
+static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
+                                      uint8_t msg)
+{
+    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
+    int32_t res;
+
+    if ( msg != FFA_MSG_SEND_VM_CREATED && msg != FFA_MSG_SEND_VM_DESTROYED )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( msg == FFA_MSG_SEND_VM_CREATED )
+        exp_resp |= FFA_MSG_RESP_VM_CREATED;
+    else
+        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
+
+    do {
+        const struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
+            .a1 = sp_id,
+            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
+            .a5 = vm_id,
+        };
+        struct arm_smccc_1_2_regs resp;
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
+        {
+            /*
+             * This is an invalid response, likely due to some error in the
+             * implementation of the ABI.
+             */
+            return FFA_RET_INVALID_PARAMETERS;
+        }
+        res = resp.a3;
+    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
+
+    return res;
+}
+
+static uint16_t get_vm_id(struct domain *d)
+{
+    /* +1 since 0 is reserved for the hypervisor in FF-A */
+    return d->domain_id + 1;
+}
+
+static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
+                     register_t v2, register_t v3, register_t v4, register_t v5,
+                     register_t v6, register_t v7)
+{
+    set_user_reg(regs, 0, v0);
+    set_user_reg(regs, 1, v1);
+    set_user_reg(regs, 2, v2);
+    set_user_reg(regs, 3, v3);
+    set_user_reg(regs, 4, v4);
+    set_user_reg(regs, 5, v5);
+    set_user_reg(regs, 6, v6);
+    set_user_reg(regs, 7, v7);
+}
+
+static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
+{
+    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
+}
+
+static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
+                             uint32_t w3)
+{
+    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
+}
+
+static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
+                             uint32_t handle_hi, uint32_t frag_offset,
+                             uint16_t sender_id)
+{
+    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
+             (uint32_t)sender_id << 16, 0, 0, 0);
+}
+
+static void handle_version(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t vers = get_user_reg(regs, 1);
+
+    if ( vers < FFA_VERSION_1_1 )
+        vers = FFA_VERSION_1_0;
+    else
+        vers = FFA_VERSION_1_1;
+
+    ctx->guest_vers = vers;
+    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
+}
+
+static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                                register_t rx_addr, uint32_t page_count)
+{
+    uint32_t ret = FFA_RET_INVALID_PARAMETERS;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct page_info *tx_pg;
+    struct page_info *rx_pg;
+    p2m_type_t t;
+    void *rx;
+    void *tx;
+
+    if ( !smccc_is_conv_64(fid) )
+    {
+        tx_addr &= UINT32_MAX;
+        rx_addr &= UINT32_MAX;
+    }
+
+    /* For now to keep things simple, only deal with a single page */
+    if ( page_count != 1 )
+        return FFA_RET_NOT_SUPPORTED;
+
+    /* Already mapped */
+    if ( ctx->rx )
+        return FFA_RET_DENIED;
+
+    tx_pg = get_page_from_gfn(d, gaddr_to_gfn(tx_addr), &t, P2M_ALLOC);
+    if ( !tx_pg )
+        return FFA_RET_INVALID_PARAMETERS;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_tx_pg;
+
+    rx_pg = get_page_from_gfn(d, gaddr_to_gfn(rx_addr), &t, P2M_ALLOC);
+    if ( !rx_pg )
+        goto err_put_tx_pg;
+    /* Only normal RAM for now */
+    if ( t != p2m_ram_rw )
+        goto err_put_rx_pg;
+
+    tx = __map_domain_page_global(tx_pg);
+    if ( !tx )
+        goto err_put_rx_pg;
+
+    rx = __map_domain_page_global(rx_pg);
+    if ( !rx )
+        goto err_unmap_tx;
+
+    ctx->rx = rx;
+    ctx->tx = tx;
+    ctx->rx_pg = rx_pg;
+    ctx->tx_pg = tx_pg;
+    ctx->page_count = 1;
+    ctx->tx_is_mine = true;
+    return FFA_RET_OK;
+
+err_unmap_tx:
+    unmap_domain_page_global(tx);
+err_put_rx_pg:
+    put_page(rx_pg);
+err_put_tx_pg:
+    put_page(tx_pg);
+    return ret;
+}
+
+static uint32_t handle_rxtx_unmap(void)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t ret;
+
+    if ( !ctx->rx )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    ret = ffa_rxtx_unmap(get_vm_id(d));
+    if ( ret )
+        return ret;
+
+    unmap_domain_page_global(ctx->rx);
+    unmap_domain_page_global(ctx->tx);
+    put_page(ctx->rx_pg);
+    put_page(ctx->tx_pg);
+    ctx->rx = NULL;
+    ctx->tx = NULL;
+    ctx->rx_pg = NULL;
+    ctx->tx_pg = NULL;
+    ctx->page_count = 0;
+    ctx->tx_is_mine = false;
+
+    return FFA_RET_OK;
+}
+
+static uint32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                          uint32_t w4, uint32_t w5,
+                                          uint32_t *count)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    if ( !ffa_page_count )
+        return FFA_RET_DENIED;
+
+    spin_lock(&ctx->lock);
+    if ( !ctx->page_count || !ctx->tx_is_mine )
+        goto out;
+    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
+    if ( ret )
+        goto out;
+    if ( ctx->guest_vers == FFA_VERSION_1_0 )
+    {
+        size_t n;
+        struct ffa_partition_info_1_1 *src = ffa_rx;
+        struct ffa_partition_info_1_0 *dst = ctx->rx;
+
+        for ( n = 0; n < *count; n++ )
+        {
+            dst[n].id = src[n].id;
+            dst[n].execution_context = src[n].execution_context;
+            dst[n].partition_properties = src[n].partition_properties;
+        }
+    }
+    else
+    {
+        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
+
+        memcpy(ctx->rx, ffa_rx, sz);
+    }
+    ffa_rx_release();
+    ctx->tx_is_mine = false;
+out:
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
+static uint32_t handle_rx_release(void)
+{
+    uint32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+
+    spin_lock(&ctx->lock);
+    if ( !ctx->page_count || ctx->tx_is_mine )
+        goto out;
+    ret = FFA_RET_OK;
+    ctx->tx_is_mine = true;
+out:
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
+static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
+    struct arm_smccc_1_2_regs resp = { };
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t src_dst;
+    uint64_t mask;
+
+    if ( smccc_is_conv_64(fid) )
+        mask = UINT64_MAX;
+    else
+        mask = UINT32_MAX;
+
+    src_dst = get_user_reg(regs, 1);
+    if ( (src_dst >> 16) != get_vm_id(d) )
+    {
+        resp.a0 = FFA_ERROR;
+        resp.a2 = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    arg.a1 = src_dst;
+    arg.a2 = get_user_reg(regs, 2) & mask;
+    arg.a3 = get_user_reg(regs, 3) & mask;
+    arg.a4 = get_user_reg(regs, 4) & mask;
+    arg.a5 = get_user_reg(regs, 5) & mask;
+    arg.a6 = get_user_reg(regs, 6) & mask;
+    arg.a7 = get_user_reg(regs, 7) & mask;
+
+    while ( true )
+    {
+        arm_smccc_1_2_smc(&arg, &resp);
+
+        switch ( resp.a0 )
+        {
+        case FFA_INTERRUPT:
+            ctx->interrupted = true;
+            goto out;
+        case FFA_ERROR:
+        case FFA_SUCCESS_32:
+        case FFA_SUCCESS_64:
+        case FFA_MSG_SEND_DIRECT_RESP_32:
+        case FFA_MSG_SEND_DIRECT_RESP_64:
+            goto out;
+        default:
+            /* Bad fid, report back. */
+            memset(&arg, 0, sizeof(arg));
+            arg.a0 = FFA_ERROR;
+            arg.a1 = src_dst;
+            arg.a2 = FFA_RET_NOT_SUPPORTED;
+            continue;
+        }
+    }
+
+out:
+    set_user_reg(regs, 0, resp.a0);
+    set_user_reg(regs, 1, resp.a1 & mask);
+    set_user_reg(regs, 2, resp.a2 & mask);
+    set_user_reg(regs, 3, resp.a3 & mask);
+    set_user_reg(regs, 4, resp.a4 & mask);
+    set_user_reg(regs, 5, resp.a5 & mask);
+    set_user_reg(regs, 6, resp.a6 & mask);
+    set_user_reg(regs, 7, resp.a7 & mask);
+}
+
+static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
+                         struct ffa_address_range *range, uint32_t range_count,
+                         unsigned int start_page_idx,
+                         unsigned int *last_page_idx)
+{
+    unsigned int pg_idx = start_page_idx;
+    unsigned long gfn;
+    unsigned int n;
+    unsigned int m;
+    p2m_type_t t;
+    uint64_t addr;
+
+    for ( n = 0; n < range_count; n++ )
+    {
+        for ( m = 0; m < range[n].page_count; m++ )
+        {
+            if ( pg_idx >= shm->page_count )
+                return FFA_RET_INVALID_PARAMETERS;
+
+            addr = read_atomic(&range[n].address);
+            gfn = gaddr_to_gfn(addr + m * PAGE_SIZE);
+            shm->pages[pg_idx] = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
+            if ( !shm->pages[pg_idx] )
+                return FFA_RET_DENIED;
+            pg_idx++;
+            /* Only normal RAM for now */
+            if ( t != p2m_ram_rw )
+                return FFA_RET_DENIED;
+        }
+    }
+
+    *last_page_idx = pg_idx;
+
+    return FFA_RET_OK;
+}
+
+static void put_shm_pages(struct ffa_shm_mem *shm)
+{
+    unsigned int n;
+
+    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
+    {
+        put_page(shm->pages[n]);
+        shm->pages[n] = NULL;
+    }
+}
+
+static void init_range(struct ffa_address_range *addr_range,
+                       paddr_t pa)
+{
+    memset(addr_range, 0, sizeof(*addr_range));
+    addr_range->address = pa;
+    addr_range->page_count = 1;
+}
+
+static int share_shm(struct ffa_shm_mem *shm)
+{
+    uint32_t max_frag_len = ffa_page_count * PAGE_SIZE;
+    struct ffa_mem_transaction_1_1 *descr = ffa_tx;
+    struct ffa_mem_access *mem_access_array;
+    struct ffa_mem_region *region_descr;
+    struct ffa_address_range *addr_range;
+    paddr_t pa;
+    paddr_t last_pa;
+    unsigned int n;
+    uint32_t frag_len;
+    uint32_t tot_len;
+    int ret;
+    unsigned int range_count;
+    unsigned int range_base;
+    bool first;
+
+    memset(descr, 0, sizeof(*descr));
+    descr->sender_id = shm->sender_id;
+    descr->global_handle = shm->handle;
+    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
+    descr->mem_access_count = 1;
+    descr->mem_access_size = sizeof(*mem_access_array);
+    descr->mem_access_offs = sizeof(*descr);
+    mem_access_array = (void *)(descr + 1);
+    region_descr = (void *)(mem_access_array + 1);
+
+    memset(mem_access_array, 0, sizeof(*mem_access_array));
+    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
+    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
+    mem_access_array[0].region_offs = (vaddr_t)region_descr - (vaddr_t)ffa_tx;
+
+    memset(region_descr, 0, sizeof(*region_descr));
+    region_descr->total_page_count = shm->page_count;
+
+    region_descr->address_range_count = 1;
+    last_pa = page_to_maddr(shm->pages[0]);
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            continue;
+        }
+        region_descr->address_range_count++;
+    }
+
+    tot_len = sizeof(*descr) + sizeof(*mem_access_array) +
+              sizeof(*region_descr) +
+              region_descr->address_range_count * sizeof(*addr_range);
+
+    addr_range = region_descr->address_range_array;
+    frag_len = (vaddr_t)(addr_range + 1) - (vaddr_t)ffa_tx;
+    last_pa = page_to_maddr(shm->pages[0]);
+    init_range(addr_range, last_pa);
+    first = true;
+    range_count = 1;
+    range_base = 0;
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + PAGE_SIZE == pa )
+        {
+            addr_range->page_count++;
+            continue;
+        }
+
+        if ( frag_len == max_frag_len )
+        {
+            if ( first )
+            {
+                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+                first = false;
+            }
+            else
+            {
+                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+            }
+            if ( ret <= 0 )
+                return ret;
+            range_base = range_count;
+            range_count = 0;
+            frag_len = sizeof(*addr_range);
+            addr_range = ffa_tx;
+        }
+        else
+        {
+            frag_len += sizeof(*addr_range);
+            addr_range++;
+        }
+        init_range(addr_range, pa);
+        range_count++;
+    }
+
+    if ( first )
+        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    else
+        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+}
+
+static int read_mem_transaction(uint32_t ffa_vers, void *buf, size_t blen,
+                                struct ffa_mem_transaction_x *trans)
+{
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint32_t count;
+    uint32_t offs;
+    uint32_t size;
+
+    if ( ffa_vers >= FFA_VERSION_1_1 )
+    {
+        struct ffa_mem_transaction_1_1 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = read_atomic(&descr->mem_access_size);
+        offs = read_atomic(&descr->mem_access_offs);
+    }
+    else
+    {
+        struct ffa_mem_transaction_1_0 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = read_atomic(&descr->sender_id);
+        mem_reg_attr = read_atomic(&descr->mem_reg_attr);
+        flags = read_atomic(&descr->flags);
+        trans->global_handle = read_atomic(&descr->global_handle);
+        trans->tag = read_atomic(&descr->tag);
+
+        count = read_atomic(&descr->mem_access_count);
+        size = sizeof(struct ffa_mem_access);
+        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
+    }
+
+    if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
+         count > UINT8_MAX || offs > UINT16_MAX )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Check that the endpoint memory access descriptor array fits */
+    if ( size * count + offs > blen )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    trans->mem_reg_attr = mem_reg_attr;
+    trans->flags = flags;
+    trans->mem_access_size = size;
+    trans->mem_access_count = count;
+    trans->mem_access_offs = offs;
+    return 0;
+}
+
+static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
+                              unsigned int frag_len)
+{
+    struct domain *d = current->domain;
+    unsigned int o = offs;
+    unsigned int l;
+    int ret;
+
+    if ( frag_len < o )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Fill up the first struct ffa_address_range */
+    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
+    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
+    s->range_offset += l;
+    o += l;
+    if ( s->range_offset != sizeof(s->range) )
+        goto out;
+    s->range_offset = 0;
+
+    while ( true )
+    {
+        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
+                            &s->current_page_idx);
+        if ( ret )
+            return ret;
+        if ( s->range_count == 1 )
+            return 0;
+        s->range_count--;
+        if ( frag_len - o < sizeof(s->range) )
+            break;
+        memcpy(&s->range, s->buf + o, sizeof(s->range));
+        o += sizeof(s->range);
+    }
+
+    /* Collect any remaining bytes for the next struct ffa_address_range */
+    s->range_offset = frag_len - o;
+    memcpy(&s->range, s->buf + o, frag_len - o);
+out:
+    s->frag_offset += frag_len;
+    return s->frag_offset;
+}
+
+static void handle_mem_share(struct cpu_user_regs *regs)
+{
+    uint32_t tot_len = get_user_reg(regs, 1);
+    uint32_t frag_len = get_user_reg(regs, 2);
+    uint64_t addr = get_user_reg(regs, 3);
+    uint32_t page_count = get_user_reg(regs, 4);
+    struct ffa_mem_transaction_x trans;
+    struct ffa_mem_access *mem_access;
+    struct ffa_mem_region *region_descr;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    struct ffa_shm_mem *shm = NULL;
+    unsigned int last_page_idx = 0;
+    uint32_t range_count;
+    uint32_t region_offs;
+    int ret = FFA_RET_DENIED;
+    uint32_t handle_hi = 0;
+    uint32_t handle_lo = 0;
+
+    /*
+     * We're only accepting memory transaction descriptors via the rx/tx
+     * buffer.
+     */
+    if ( addr )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* Check that the fragment length doesn't exceed the total length */
+    if ( frag_len > tot_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    spin_lock(&ctx->lock);
+
+    if ( frag_len > ctx->page_count * PAGE_SIZE )
+        goto out_unlock;
+
+    if ( !ffa_page_count )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out_unlock;
+    }
+
+    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
+    if ( ret )
+        goto out_unlock;
+
+    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out;
+    }
+
+    /* Only supports sharing it with one SP for now */
+    if ( trans.mem_access_count != 1 )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    if ( trans.sender_id != get_vm_id(d) )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* Check that it fits in the supplied data */
+    if ( trans.mem_access_offs + trans.mem_access_size > frag_len )
+        goto out_unlock;
+
+    mem_access = (void *)((vaddr_t)ctx->tx + trans.mem_access_offs);
+    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_offs = read_atomic(&mem_access->region_offs);
+    if ( sizeof(*region_descr) + region_offs > frag_len )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_descr = (void *)((vaddr_t)ctx->tx + region_offs);
+    range_count = read_atomic(&region_descr->address_range_count);
+    page_count = read_atomic(&region_descr->total_page_count);
+
+    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
+    if ( !shm )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out;
+    }
+    shm->sender_id = trans.sender_id;
+    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+    shm->page_count = page_count;
+
+    if ( frag_len != tot_len )
+    {
+        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
+
+        if ( !s )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out;
+        }
+        s->shm = shm;
+        s->range_count = range_count;
+        s->buf = ctx->tx;
+        s->buf_size = ffa_page_count * PAGE_SIZE;
+        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
+                                 frag_len);
+        if ( ret <= 0 )
+        {
+            xfree(s);
+            if ( ret < 0 )
+                goto out;
+        }
+        else
+        {
+            shm->handle = next_handle++;
+            reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+            list_add_tail(&s->list, &ctx->frag_list);
+        }
+        goto out_unlock;
+    }
+
+    /* Check that the composite memory region descriptor fits */
+    if ( sizeof(*region_descr) + region_offs +
+         range_count * sizeof(struct ffa_address_range) > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
+                        0, &last_page_idx);
+    if ( ret )
+        goto out;
+    if ( last_page_idx != shm->page_count )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* Note that share_shm() uses our tx buffer */
+    spin_lock(&ffa_buffer_lock);
+    ret = share_shm(shm);
+    spin_unlock(&ffa_buffer_lock);
+    if ( ret )
+        goto out;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_add_tail(&shm->list, &ffa_mem_list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    reg_pair_from_64(&handle_hi, &handle_lo, shm->handle);
+
+out:
+    if ( ret && shm )
+    {
+        put_shm_pages(shm);
+        xfree(shm);
+    }
+out_unlock:
+    spin_unlock(&ctx->lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static struct mem_frag_state *find_frag_state(struct ffa_ctx *ctx,
+                                              uint64_t handle)
+{
+    struct mem_frag_state *s;
+
+    list_for_each_entry(s, &ctx->frag_list, list)
+        if ( s->shm->handle == handle )
+            return s;
+
+    return NULL;
+}
+
+static void handle_mem_frag_tx(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t frag_len = get_user_reg(regs, 3);
+    uint32_t handle_lo = get_user_reg(regs, 1);
+    uint32_t handle_hi = get_user_reg(regs, 2);
+    uint64_t handle = reg_pair_to_64(handle_hi, handle_lo);
+    struct mem_frag_state *s;
+    uint16_t sender_id = 0;
+    int ret;
+
+    spin_lock(&ctx->lock);
+    s = find_frag_state(ctx, handle);
+    if ( !s )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+    sender_id = s->shm->sender_id;
+
+    if ( frag_len > s->buf_size )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = add_mem_share_frag(s, 0, frag_len);
+    if ( ret == 0 )
+    {
+        /* Note that share_shm() uses our tx buffer */
+        spin_lock(&ffa_buffer_lock);
+        ret = share_shm(s->shm);
+        spin_unlock(&ffa_buffer_lock);
+        if ( ret == 0 )
+        {
+            spin_lock(&ffa_mem_list_lock);
+            list_add_tail(&s->shm->list, &ffa_mem_list);
+            spin_unlock(&ffa_mem_list_lock);
+        }
+        else
+        {
+            put_shm_pages(s->shm);
+            xfree(s->shm);
+        }
+        list_del(&s->list);
+        xfree(s);
+    }
+    else if ( ret < 0 )
+    {
+        put_shm_pages(s->shm);
+        xfree(s->shm);
+        list_del(&s->list);
+        xfree(s);
+    }
+out:
+    spin_unlock(&ctx->lock);
+
+    if ( ret > 0 )
+        set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
+    else if ( ret == 0 )
+        set_regs_success(regs, handle_lo, handle_hi);
+    else
+        set_regs_error(regs, ret);
+}
+
+static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
+{
+    struct ffa_shm_mem *shm;
+    uint32_t handle_hi;
+    uint32_t handle_lo;
+    int ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_for_each_entry(shm, &ffa_mem_list, list)
+    {
+        if ( shm->handle == handle )
+            goto found_it;
+    }
+    shm = NULL;
+found_it:
+    spin_unlock(&ffa_mem_list_lock);
+
+    if ( !shm )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    reg_pair_from_64(&handle_hi, &handle_lo, handle);
+    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
+    if ( ret )
+        return ret;
+
+    spin_lock(&ffa_mem_list_lock);
+    list_del(&shm->list);
+    spin_unlock(&ffa_mem_list_lock);
+
+    put_shm_pages(shm);
+    xfree(shm);
+
+    return ret;
+}
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.ffa;
+    uint32_t count;
+    uint32_t e;
+
+    if ( !ctx )
+        return false;
+
+    switch ( fid )
+    {
+    case FFA_VERSION:
+        handle_version(regs);
+        return true;
+    case FFA_ID_GET:
+        set_regs_success(regs, get_vm_id(d), 0);
+        return true;
+    case FFA_RXTX_MAP_32:
+#ifdef CONFIG_ARM_64
+    case FFA_RXTX_MAP_64:
+#endif
+        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
+                            get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_RXTX_UNMAP:
+        e = handle_rxtx_unmap();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_PARTITION_INFO_GET:
+        e = handle_partition_info_get(get_user_reg(regs, 1),
+                                      get_user_reg(regs, 2),
+                                      get_user_reg(regs, 3),
+                                      get_user_reg(regs, 4),
+                                      get_user_reg(regs, 5), &count);
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, count, 0);
+        return true;
+    case FFA_RX_RELEASE:
+        e = handle_rx_release();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MSG_SEND_DIRECT_REQ_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MSG_SEND_DIRECT_REQ_64:
+#endif
+        handle_msg_send_direct_req(regs, fid);
+        return true;
+    case FFA_MEM_SHARE_32:
+#ifdef CONFIG_ARM_64
+    case FFA_MEM_SHARE_64:
+#endif
+        handle_mem_share(regs);
+        return true;
+    case FFA_MEM_RECLAIM:
+        e = handle_mem_reclaim(reg_pair_to_64(get_user_reg(regs, 2),
+                                              get_user_reg(regs, 1)),
+                               get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_MEM_FRAG_TX:
+        handle_mem_frag_tx(regs);
+        return true;
+
+    default:
+        printk(XENLOG_ERR "ffa: unhandled fid 0x%x\n", fid);
+        return false;
+    }
+}
+
+int ffa_domain_init(struct domain *d, bool ffa_enabled)
+{
+    struct ffa_ctx *ctx;
+    unsigned int n;
+    unsigned int m;
+    unsigned int c_pos;
+    int32_t res;
+
+    if ( !ffa_version || !ffa_enabled )
+        return 0;
+
+    ctx = xzalloc(struct ffa_ctx);
+    if ( !ctx )
+        return -ENOMEM;
+
+    for ( n = 0; n < subsr_vm_created_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_created[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_CREATED);
+        if ( res )
+        {
+            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_created[n], res);
+            c_pos = n;
+            goto err;
+        }
+    }
+
+    INIT_LIST_HEAD(&ctx->frag_list);
+
+    d->arch.ffa = ctx;
+
+    return 0;
+
+err:
+    /* Undo any already sent VM created messages */
+    for ( n = 0; n < c_pos; n++ )
+        for ( m = 0; m < subsr_vm_destroyed_count; m++ )
+            if ( subsr_vm_destroyed[m] == subsr_vm_created[n] )
+                ffa_direct_req_send_vm(subsr_vm_destroyed[m], get_vm_id(d),
+                                       FFA_MSG_SEND_VM_DESTROYED);
+    xfree(ctx);
+    return -ENOMEM;
+}
+
+int ffa_relinquish_resources(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.ffa;
+    unsigned int n;
+    int32_t res;
+
+    if ( !ctx )
+        return 0;
+
+    for ( n = 0; n < subsr_vm_destroyed_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subsr_vm_destroyed[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_DESTROYED);
+
+        if ( res )
+            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subsr_vm_destroyed[n], res);
+    }
+
+    XFREE(d->arch.ffa);
+
+    return 0;
+}
+
+static bool __init init_subscribers(void)
+{
+    struct ffa_partition_info_1_1 *fpi;
+    bool ret = false;
+    uint32_t count;
+    uint32_t e;
+    uint32_t n;
+    uint32_t c_pos;
+    uint32_t d_pos;
+
+    if ( ffa_version < FFA_VERSION_1_1 )
+        return true;
+
+    e = ffa_partition_info_get(0, 0, 0, 0, 1, &count);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", (int)e);
+        goto out;
+    }
+
+    fpi = ffa_rx;
+    subsr_vm_created_count = 0;
+    subsr_vm_destroyed_count = 0;
+    for ( n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created_count++;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed_count++;
+    }
+
+    if ( subsr_vm_created_count )
+        subsr_vm_created = xzalloc_array(uint16_t, subsr_vm_created_count);
+    if ( subsr_vm_destroyed_count )
+        subsr_vm_destroyed = xzalloc_array(uint16_t, subsr_vm_destroyed_count);
+    if ( (subsr_vm_created_count && !subsr_vm_created) ||
+         (subsr_vm_destroyed_count && !subsr_vm_destroyed) )
+    {
+        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
+        subsr_vm_created_count = 0;
+        subsr_vm_destroyed_count = 0;
+        XFREE(subsr_vm_created);
+        XFREE(subsr_vm_destroyed);
+        goto out;
+    }
+
+    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subsr_vm_created[c_pos++] = fpi[n].id;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subsr_vm_destroyed[d_pos++] = fpi[n].id;
+    }
+
+    ret = true;
+out:
+    ffa_rx_release();
+    return ret;
+}
+
+static int __init ffa_init(void)
+{
+    uint32_t vers;
+    uint32_t e;
+    unsigned int major_vers;
+    unsigned int minor_vers;
+
+    /*
+     * psci_init_smccc() updates this value with what's reported by EL-3
+     * or secure world.
+     */
+    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
+    {
+        printk(XENLOG_ERR
+               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
+               smccc_ver, ARM_SMCCC_VERSION_1_2);
+        return 0;
+    }
+
+    if ( !ffa_get_version(&vers) )
+        return 0;
+
+    if ( vers < FFA_MIN_VERSION || vers > FFA_MY_VERSION )
+    {
+        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
+        return 0;
+    }
+
+    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
+    minor_vers = vers & FFA_VERSION_MINOR_MASK;
+    printk(XENLOG_INFO "ARM FF-A Mediator version %u.%u\n",
+           FFA_VERSION_MAJOR, FFA_VERSION_MINOR);
+    printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
+           major_vers, minor_vers);
+
+    if ( !check_mandatory_feature(FFA_PARTITION_INFO_GET) ||
+         !check_mandatory_feature(FFA_RX_RELEASE) ||
+#ifdef CONFIG_ARM_64
+         !check_mandatory_feature(FFA_RXTX_MAP_64) ||
+         !check_mandatory_feature(FFA_MEM_SHARE_64) ||
+#endif
+#ifdef CONFIG_ARM_32
+         !check_mandatory_feature(FFA_RXTX_MAP_32) ||
+#endif
+         !check_mandatory_feature(FFA_RXTX_UNMAP) ||
+         !check_mandatory_feature(FFA_MEM_SHARE_32) ||
+         !check_mandatory_feature(FFA_MEM_FRAG_TX) ||
+         !check_mandatory_feature(FFA_MEM_RECLAIM) ||
+         !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
+        return 0;
+
+    ffa_rx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_rx )
+        return 0;
+
+    ffa_tx = alloc_xenheap_pages(0, 0);
+    if ( !ffa_tx )
+        goto err_free_ffa_rx;
+
+    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), 1);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", (int)e);
+        goto err_free_ffa_tx;
+    }
+    ffa_page_count = 1;
+    ffa_version = vers;
+
+    if ( !init_subscribers() )
+        goto err_free_ffa_tx;
+
+    return 0;
+
+err_free_ffa_tx:
+    free_xenheap_pages(ffa_tx, 0);
+    ffa_tx = NULL;
+err_free_ffa_rx:
+    free_xenheap_pages(ffa_rx, 0);
+    ffa_rx = NULL;
+    ffa_page_count = 0;
+    ffa_version = 0;
+    XFREE(subsr_vm_created);
+    subsr_vm_created_count = 0;
+    XFREE(subsr_vm_destroyed);
+    subsr_vm_destroyed_count = 0;
+    return 0;
+}
+
+__initcall(ffa_init);
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f91f..b3dee269bced 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -103,6 +103,10 @@ struct arch_domain
     void *tee;
 #endif
 
+#ifdef CONFIG_FFA
+    void *ffa;
+#endif
+
     bool directmap;
 }  __cacheline_aligned;
 
diff --git a/xen/arch/arm/include/asm/ffa.h b/xen/arch/arm/include/asm/ffa.h
new file mode 100644
index 000000000000..4f4a739345bd
--- /dev/null
+++ b/xen/arch/arm/include/asm/ffa.h
@@ -0,0 +1,71 @@
+/*
+ * xen/arch/arm/include/asm/ffa.h
+ *
+ * Arm Firmware Framework for Armv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2021  Linaro Limited
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __ASM_ARM_FFA_H__
+#define __ASM_ARM_FFA_H__
+
+#include <xen/const.h>
+
+#include <asm/smccc.h>
+#include <asm/types.h>
+
+#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
+#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
+
+static inline bool is_ffa_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
+}
+
+#ifdef CONFIG_FFA
+#define FFA_NR_FUNCS    11
+
+bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid);
+int ffa_domain_init(struct domain *d, bool ffa_enabled);
+int ffa_relinquish_resources(struct domain *d);
+#else
+#define FFA_NR_FUNCS    0
+
+static inline bool ffa_handle_call(struct cpu_user_regs *regs, uint32_t fid)
+{
+    return false;
+}
+
+static inline int ffa_domain_init(struct domain *d, bool ffa_enabled)
+{
+    return 0;
+}
+
+static inline int ffa_relinquish_resources(struct domain *d)
+{
+    return 0;
+}
+#endif
+
+#endif /* __ASM_ARM_FFA_H__ */
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 6f90c08a6304..34586025eff8 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -20,6 +20,7 @@
 #include <public/arch-arm/smccc.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
+#include <asm/ffa.h>
 #include <asm/monitor.h>
 #include <asm/regs.h>
 #include <asm/smccc.h>
@@ -32,7 +33,7 @@
 #define XEN_SMCCC_FUNCTION_COUNT 3
 
 /* Number of functions currently supported by Standard Service Service Calls. */
-#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
+#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
 
 static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
 {
@@ -196,13 +197,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
     return do_vpsci_0_1_call(regs, fid);
 }
 
+static bool is_psci_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn <= 0x1fU;
+}
+
 /* PSCI 0.2 interface and other Standard Secure Calls */
 static bool handle_sssc(struct cpu_user_regs *regs)
 {
     uint32_t fid = (uint32_t)get_user_reg(regs, 0);
 
-    if ( do_vpsci_0_2_call(regs, fid) )
-        return true;
+    if ( is_psci_fid(fid) )
+        return do_vpsci_0_2_call(regs, fid);
+
+    if ( is_ffa_fid(fid) )
+        return ffa_handle_call(regs, fid);
 
     switch ( fid )
     {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index ab05fe12b0de..53f8d44a6a8e 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -318,6 +318,8 @@ struct xen_arch_domainconfig {
     /* IN/OUT */
     uint8_t gic_version;
     /* IN */
+    uint8_t ffa_enabled;
+    /* IN */
     uint16_t tee_type;
     /* IN */
     uint32_t nr_spis;
-- 
2.31.1

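The handle_sssc() change above dispatches on the SMCCC function number: within
the Standard Secure Service range, PSCI owns function numbers 0x00-0x1f and
FF-A owns 0x60-0x86. A standalone sketch of that range check follows; the
constants mirror the patch, but the helper names here are illustrative, not
Xen's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SMCCC encodes the function number in the low 16 bits of the
 * function ID (ARM_SMCCC_FUNC_MASK in Xen). */
#define SMCCC_FUNC_MASK  0xffffU

#define PSCI_FNUM_MAX    0x1fU   /* PSCI: function numbers 0x00-0x1f */
#define FFA_FNUM_MIN     0x60U   /* FF-A: function numbers 0x60-0x86 */
#define FFA_FNUM_MAX     0x86U

/* Mirrors is_psci_fid(): the masked function number is unsigned, so
 * only the upper bound needs checking. */
static bool demo_is_psci_fid(uint32_t fid)
{
    return (fid & SMCCC_FUNC_MASK) <= PSCI_FNUM_MAX;
}

/* Mirrors is_ffa_fid() from asm/ffa.h. */
static bool demo_is_ffa_fid(uint32_t fid)
{
    uint32_t fn = fid & SMCCC_FUNC_MASK;

    return fn >= FFA_FNUM_MIN && fn <= FFA_FNUM_MAX;
}
```

Because the two ranges are disjoint, a caller like handle_sssc() can test them
in sequence and fall through to a switch on the remaining function IDs.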


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:56:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353915.580915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40ps-00053e-0y; Wed, 22 Jun 2022 13:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353915.580915; Wed, 22 Jun 2022 13:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40pr-00053X-UH; Wed, 22 Jun 2022 13:55:59 +0000
Received: by outflank-mailman (input) for mailman id 353915;
 Wed, 22 Jun 2022 13:55:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o40pq-00053R-Q2
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 13:55:58 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80044.outbound.protection.outlook.com [40.107.8.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09fe9f7a-f233-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 15:55:57 +0200 (CEST)
Received: from AS9PR06CA0479.eurprd06.prod.outlook.com (2603:10a6:20b:49a::28)
 by AM8PR08MB5697.eurprd08.prod.outlook.com (2603:10a6:20b:1d7::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 22 Jun
 2022 13:55:53 +0000
Received: from AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:49a:cafe::72) by AS9PR06CA0479.outlook.office365.com
 (2603:10a6:20b:49a::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Wed, 22 Jun 2022 13:55:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT007.mail.protection.outlook.com (10.152.16.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 22 Jun 2022 13:55:52 +0000
Received: ("Tessian outbound ff2e13d26e0f:v120");
 Wed, 22 Jun 2022 13:55:52 +0000
Received: from b5220959366a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2BE71F05-76EF-4A20-B053-711B6C953C62.1; 
 Wed, 22 Jun 2022 13:55:46 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b5220959366a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 13:55:46 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM4PR0802MB2355.eurprd08.prod.outlook.com (2603:10a6:200:60::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 22 Jun
 2022 13:55:44 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 13:55:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09fe9f7a-f233-11ec-b725-ed86ccbb4733
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Michal Orzel <Michal.Orzel@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Thread-Topic: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Thread-Index: AQHYhHPRZL4o3B5WdE6OPtXA3biXiq1bO8sAgAAqE4CAAAG+gIAADwQA
Date: Wed, 22 Jun 2022 13:55:44 +0000
Message-ID: <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
In-Reply-To: <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 5fc8957c-55d6-413d-50d6-08da5456ec03
x-ms-traffictypediagnostic:
	AM4PR0802MB2355:EE_|AM5EUR03FT007:EE_|AM8PR08MB5697:EE_
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB56973E7313AF986B9787DAA09DB29@AM8PR08MB5697.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <ABB91CA44D040343A31882CB17526891@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2355
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e5f529c6-f34d-4f3f-1589-08da5456e6f5
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 13:55:52.9282
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5fc8957c-55d6-413d-50d6-08da5456ec03
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5697

Hi Jan,

> On 22 Jun 2022, at 14:01, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 22.06.2022 14:55, Michal Orzel wrote:
>> On 22.06.2022 12:25, Jan Beulich wrote:
>>> On 20.06.2022 09:02, Michal Orzel wrote:
>>>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
>>>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
>>>> 'unsigned int' type as there are no other violations being part of that
>>>> rule in the Xen codebase.
>>>
>>> I'm puzzled, I have to admit. While I agree with all the examples in the
>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>>> Which matches my understanding that "unsigned" and "signed" on their own
>>> (just like "long") are proper types, and hence the omission of "int"
>>> there is not an "omission of an explicit type".
>>>
>> Cppcheck was chosen as the tool for MISRA checking and it considers this
>> a violation.
>
> Which by no means indicates that something the tool points out as a
> violation actually is one.
>
>> It treats unsigned as an implicit type. You can see this flag in the
>> cppcheck source code:
>>
>> "fIsImplicitInt          = (1U << 31),   // Is "int" token implicitly added?"
>
> Neither the name of the variable nor the comment clarifies that this is about
> the specific case of "unsigned". As said, there's also the fact that they
> don't appear to point out the lack of "int" when seeing plain "long" (or
> "long long"). I fully agree that "extern x;" or "const y;" lack an explicit
> "int".

I am a bit puzzled trying to understand what you actually want here.

Do you suggest that the change is ok, but that you are not ok with it being
flagged as a MISRA fix, even though cppcheck is saying otherwise?

Bertrand
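The point under discussion can be checked mechanically: in C, plain "unsigned"
is a complete type specifier naming the same type as "unsigned int", unlike a
genuinely implicit-int declaration such as "extern x;". A minimal C11 sketch
(the macro and variable names are illustrative):

```c
#include <assert.h>

/* Plain "unsigned" declares the same type as "unsigned int"; as far
 * as the C standard is concerned, no "int" token is missing here. */
unsigned plain_counter;
unsigned int explicit_counter;

/* _Generic selects on the type of its controlling expression; both
 * variables above match the "unsigned int" association, confirming
 * they have exactly that type. */
#define IS_UNSIGNED_INT(x) _Generic((x), unsigned int: 1, default: 0)
```

Whether cppcheck should nevertheless flag such declarations under Rule 8.1 is
then purely a question of the tool's interpretation, not of the type system.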



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 13:58:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 13:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353922.580925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40sT-0005fM-EW; Wed, 22 Jun 2022 13:58:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353922.580925; Wed, 22 Jun 2022 13:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o40sT-0005fF-BZ; Wed, 22 Jun 2022 13:58:41 +0000
Received: by outflank-mailman (input) for mailman id 353922;
 Wed, 22 Jun 2022 13:58:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o40sS-0005f5-8l; Wed, 22 Jun 2022 13:58:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o40sS-0005ZR-72; Wed, 22 Jun 2022 13:58:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o40sR-00039Q-G5; Wed, 22 Jun 2022 13:58:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o40sR-0000Uo-Fd; Wed, 22 Jun 2022 13:58:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R40DEV6hXaQfZm6xj73qBh3A835CV3a4hC1sgMIFxpg=; b=bMBZDo1uSJffxvlI7LMRPf2NBJ
	ZnXbrrgFWrwNt7icPI8mlRgLT4w8zr3zeXXBkfaYXJv+nn41YgrxaNiOhuO4zfcWUO0hSr6Jak3mk
	vyEFS1kyoF0mLu4oZSWqpCN19YcYuakWuKQWxpjKx/5kMay3kAAmP1cKeZTTJMWGrEqQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171308: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=15d93068e3484cb14006e935734a1e6088f228fd
X-Osstest-Versions-That:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 13:58:39 +0000

flight 171308 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171308/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  15d93068e3484cb14006e935734a1e6088f228fd
baseline version:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637

Last test of basis   171292  2022-06-20 14:00:25 Z    1 days
Testing same since   171308  2022-06-22 11:01:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9d067857d1..15d93068e3  15d93068e3484cb14006e935734a1e6088f228fd -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:10:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353934.580937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o414I-00082G-KB; Wed, 22 Jun 2022 14:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353934.580937; Wed, 22 Jun 2022 14:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o414I-000829-Fl; Wed, 22 Jun 2022 14:10:54 +0000
Received: by outflank-mailman (input) for mailman id 353934;
 Wed, 22 Jun 2022 14:10:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o414G-000823-Am
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:10:52 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70079.outbound.protection.outlook.com [40.107.7.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ee40036-f235-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:10:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9359.eurprd04.prod.outlook.com (2603:10a6:20b:4db::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 14:10:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 14:10:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ee40036-f235-11ec-bd2d-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
Date: Wed, 22 Jun 2022 16:10:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Michal Orzel <Michal.Orzel@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8P251CA0017.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:2f2::30) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7197d2f-7f8c-4ce1-ac7c-08da54590111
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 14:10:47.4979
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9359

On 22.06.2022 15:55, Bertrand Marquis wrote:
> Hi Jan,
> 
>> On 22 Jun 2022, at 14:01, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 22.06.2022 14:55, Michal Orzel wrote:
>>> On 22.06.2022 12:25, Jan Beulich wrote:
>>>> On 20.06.2022 09:02, Michal Orzel wrote:
>>>>> This series fixes all the findings for MISRA C 2012 Rule 8.1, reported by
>>>>> cppcheck 2.7 with the misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>>>> Fixing this rule comes down to replacing implicit 'unsigned' with the explicit
>>>>> 'unsigned int' type, as there are no other violations of that rule
>>>>> in the Xen codebase.
>>>>
>>>> I'm puzzled, I have to admit. While I agree with all the examples in the
>>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>>>> Which matches my understanding that "unsigned" and "signed" on their own
>>>> (just like "long") are proper types, and hence the omission of "int"
>>>> there is not an "omission of an explicit type".
>>>>
>>> Cppcheck was chosen as the tool for MISRA checking, and it considers this a violation.
>>
>> Which by no means indicates that the tool pointing out something as a
>> violation actually is one.
>>
>>> It treats unsigned as an implicit type. You can see this flag in cppcheck source code:
>>>
>>> "fIsImplicitInt          = (1U << 31),   // Is "int" token implicitly added?"
>>
>> Neither the name of the variable nor the comment clarify that this is about
>> the specific case of "unsigned". As said there's also the fact that they
>> don't appear to point out the lack of "int" when seeing plain "long" (or
>> "long long"). I fully agree that "extern x;" or "const y;" lack explicit
>> "int".
> 
> I am a bit puzzled trying to understand what you actually want here.
> 
> Are you suggesting the change is ok, but you are not ok with it being flagged
> as a MISRA fix, even though cppcheck is saying otherwise?

First of all I'd like to understand whether what we're talking about here
are actually violations (and if so, why that is). Further actions / requests
depend on the answer.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:22:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:22:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353943.580948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Ex-0001AW-Oo; Wed, 22 Jun 2022 14:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353943.580948; Wed, 22 Jun 2022 14:21:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Ex-0001AP-Lr; Wed, 22 Jun 2022 14:21:55 +0000
Received: by outflank-mailman (input) for mailman id 353943;
 Wed, 22 Jun 2022 14:21:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o41Ew-0001AJ-SF
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:21:54 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a946afe0-f236-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:21:53 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2C05621BF7;
 Wed, 22 Jun 2022 14:21:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0C2AE13A5D;
 Wed, 22 Jun 2022 14:21:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hecZAYAls2I+UgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 14:21:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a946afe0-f236-11ec-bd2d-47488cf2e6aa
Message-ID: <fae2ea1b-a74c-a285-fa8b-def6b27f0809@suse.com>
Date: Wed, 22 Jun 2022 16:21:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <d465abfb-6d44-0739-9959-3e3311dd671c@suse.com>
 <e32a84bf-ad49-da95-4a19-61872c2ff7e0@xen.org>
 <bc8899d7-0300-8640-57d9-52c2a1bf599c@suse.com>
 <a9566b42-2360-4d3f-5272-8f69368d50f2@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Tentative fix for dom0 boot problem
In-Reply-To: <a9566b42-2360-4d3f-5272-8f69368d50f2@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------mqZ7jjNFnoN7juOnYB9eBcKo"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------mqZ7jjNFnoN7juOnYB9eBcKo
Content-Type: multipart/mixed; boundary="------------Jiabdt1bIloXCiW6RjD85VAZ";
 protected-headers="v1"

--------------Jiabdt1bIloXCiW6RjD85VAZ
Content-Type: multipart/mixed; boundary="------------8Yk0kMtIoLifE0XL9448wgDY"

--------------8Yk0kMtIoLifE0XL9448wgDY
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22.06.22 15:20, Julien Grall wrote:
> Hi Juergen,
> 
> On 22/06/2022 13:13, Juergen Gross wrote:
>> On 22.06.22 12:50, Julien Grall wrote:
>>>
>>>
>>> On 22/06/2022 11:45, Juergen Gross wrote:
>>>> Julien,
>>>
>>> Hi Juergen,
>>>
>>>> could you please test the attached patches?
>>>
>>> I am getting the following error:
>>>
>>> (XEN) d0v0 Unhandled: vec 14, #PF[0003]
>>> (XEN) Pagetable walk from ffffffff84001000:
>>> (XEN)  L4[0x1ff] = 000000046c004067 0000000000004004
>>> (XEN)  L3[0x1fe] = 000000046c003067 0000000000004003
>>> (XEN)  L2[0x020] = 000000046c024067 0000000000004024
>>> (XEN)  L1[0x001] = 001000046c001025 0000000000004001
>>
>> Hmm, from this data I guess this was a write to a page table.
>>
>>> (XEN) domain_crash_sync called from entry.S: fault at ffff82d040325906
>>> x86_64/entry.S#create_bounce_frame+0x15d/0x177
>>> (XEN) Domain 0 (vcpu#0) crashed on cpu#1:
>>> (XEN) ----[ Xen-4.17-unstable  x86_64  debug=y  Tainted:   C    ]----
>>> (XEN) CPU:    1
>>> (XEN) RIP:    e033:[<ffffffff832a3481>]
>>
>> Can you please find out the associated statement?
> 
> arch/x86/kernel/head64.c:433
> 
> This is the memset() for __brk_base.

Very weird.

In the kernel's linker script we have:

         __end_of_kernel_reserve = .;

         . = ALIGN(PAGE_SIZE);
         .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
                 __brk_base = .;
                 . += 64 * 1024;         /* 64k alignment slop space */
                 *(.bss..brk)            /* areas brk users have reserved */
                 __brk_limit = .;
         }

         . = ALIGN(PAGE_SIZE);           /* keep VO_INIT_SIZE page aligned */
         _end = .;

So the area to be zeroed should be larger than 64k.

> 
>>
>>> (XEN) RFLAGS: 0000000000000206   EM: 1   CONTEXT: pv guest (d0v0)
>>> (XEN) rax: 0000000000000000   rbx: ffffffff84000000   rcx: 000000000002b000
>>> (XEN) rdx: ffffffff84000000   rsi: ffffffff84000000   rdi: ffffffff84001000

Just guessing I'd say that memset started at ffffffff84000000 and there are
2b000 bytes left to be cleared. A disassembly of clear_bss() would help here.

Anyway it seems as if the memset would run right into the initial page tables
supplied by the hypervisor.

Can you please post the hypervisor's console messages regarding dom0 sizes
and locations?

Could it be that the brk area is the last section relevant for loading the
kernel? In contrast to your .config, I have CONFIG_AMD_MEM_ENCRYPT defined
and this adds the .init.scratch section after the brk area. Maybe the
hypervisor is not adjusting the page table location correctly due to the
NOLOAD attribute of brk?


Juergen
--------------8Yk0kMtIoLifE0XL9448wgDY
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8Yk0kMtIoLifE0XL9448wgDY--

--------------Jiabdt1bIloXCiW6RjD85VAZ--

--------------mqZ7jjNFnoN7juOnYB9eBcKo
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKzJX8FAwAAAAAACgkQsN6d1ii/Ey8W
owf/XiXh2hyQNzbuxnvWTWSy0EzPaRA5Plhmftxlg3dCS13l1lW5A39UsN8ZOwsXIPw5dyfH3ZGU
DwWDzH5ezglkPlAG+cgQLfmqiNpiOB1krc2HtY26iQRgVaVwB4nmJHcMuTKjrAHPzWyLAZcXzK1E
aM8HiasH+MsoRo2AGgu4j6XHd66mApU5rh8VBtyhO8QeNqCMQBWAS35U1u4XXZ2ILZBDWAK8BOQJ
nxMgCyXmpoQ4VvaRsYo63QV8bp5PKAQfa4lMZebrA59pqpzuDiBa4v9zZD7LMx7aBcs5sCy4+tbC
F647jxRhBbe6pMaWwFOseZsFNtTqPk9BJjErdMJCWQ==
=RDI2
-----END PGP SIGNATURE-----

--------------mqZ7jjNFnoN7juOnYB9eBcKo--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:28:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353951.580958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Kr-0001oU-EH; Wed, 22 Jun 2022 14:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353951.580958; Wed, 22 Jun 2022 14:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Kr-0001oN-As; Wed, 22 Jun 2022 14:28:01 +0000
Received: by outflank-mailman (input) for mailman id 353951;
 Wed, 22 Jun 2022 14:28:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o41Kq-0001oF-DY
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:28:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60070.outbound.protection.outlook.com [40.107.6.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 82b64b02-f237-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:27:58 +0200 (CEST)
Received: from AS9PR06CA0353.eurprd06.prod.outlook.com (2603:10a6:20b:466::30)
 by VE1PR08MB5613.eurprd08.prod.outlook.com (2603:10a6:800:1a7::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Wed, 22 Jun
 2022 14:27:54 +0000
Received: from AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:466:cafe::e6) by AS9PR06CA0353.outlook.office365.com
 (2603:10a6:20b:466::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15 via Frontend
 Transport; Wed, 22 Jun 2022 14:27:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT020.mail.protection.outlook.com (10.152.16.116) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 22 Jun 2022 14:27:52 +0000
Received: ("Tessian outbound ff2e13d26e0f:v120");
 Wed, 22 Jun 2022 14:27:52 +0000
Received: from 0334c8926917.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7F685BC6-0D12-4BBC-B54C-499F45A504BC.1; 
 Wed, 22 Jun 2022 14:27:42 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0334c8926917.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 14:27:42 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by VI1PR08MB3117.eurprd08.prod.outlook.com (2603:10a6:803:42::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 14:27:37 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 14:27:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82b64b02-f237-11ec-b725-ed86ccbb4733
X-MS-Exchange-Authentication-Results: spf=temperror (sender IP is
 63.35.35.123) smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=temperror action=none
 header.from=arm.com;
Received-SPF: TempError (protection.outlook.com: error in processing during
 lookup of arm.com: DNS Timeout)
X-CheckRecipientChecked: true
X-CR-MTA-CID: a365c00f1da8da82
X-CR-MTA-TID: 64aa7808
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Michal Orzel <Michal.Orzel@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Thread-Topic: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Thread-Index:
 AQHYhHPRZL4o3B5WdE6OPtXA3biXiq1bO8sAgAAqE4CAAAG+gIAADwQAgAAEMoCAAAS3gA==
Date: Wed, 22 Jun 2022 14:27:37 +0000
Message-ID: <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
In-Reply-To: <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 83d1f982-cbf0-4ef1-dc8d-08da545b6475
x-ms-traffictypediagnostic:
	VI1PR08MB3117:EE_|AM5EUR03FT020:EE_|VE1PR08MB5613:EE_
X-Microsoft-Antispam-PRVS:
	<VE1PR08MB5613DEE4D6322EFD308DF64E9DB29@VE1PR08MB5613.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9ABC2D41EDEA444FAE0854DC39057966@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3117
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	782ab43b-c4e6-46f2-ebea-08da545b5b37
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6WdrR3L/EKd15k7DAvIQaGzF6ihewMge4C5kKR1pRuBdeBAIfZ4Nf9S08ayWCLUeBvxzaHeKH28OyJTlCbGduKB0fRA4GeJD9/PxcmCMbmhd0eL1xYnte/+HDS718SIq0s08f20P7GNuTpFHy0LsPPBgISO9cHeYxqDcIpO6cfBTxNzA8viBwh2EheRiEzmnwElnUFk30b9mdXFT44rhWkC4Ro5feuQ3q3Z24Nd6PDTEF8V7OkkldmCbfN9FLBlKGRb88NflPT2GjewSyXxAGlvnVMPGwhNG3MjzQ5xltOL/eVr0h4liUZEaBzTaZeSO+2sCsmWUnEZ7rb43gtqX/Q7Ah7EiGwYcrsIMtR5Aol4NS5mz3vCOGyM+6EfQPIzbA7cp9Npgp3frbHhlZifmfKCb/+SwBBfzQwHolYWNdh8vzQY7XDlulY77g9kS82SWFM1h+wM9ADMmCPXhgUSH4oC0FRG9lIGXbKPdeLsfvzKuFjFRFRqlDKvWtAzkCQHNSgGHFLEAH71yIy7mLz4AH36vI7J/NKBQAQp/NzJIWrUe5xSEryksbb9BDBns6tgm5Ag2NlSF/Gt23s0LkEaW4Oz5x9sHFykqtNQ3kNLEq0iyLZpJq2HGaMswSGDvoFKn0Ta7xAxeZfDtEsUGRoAXHQ2G+CnqUeHNSrpjCcXG/RYIit8NDlaYMan2zdJopATjPcdgaZ/YMmG5SdXTfn8dgoM20p3SzgsLOtpiBNASB1Tw/QsEySSlddkGzR99AFldQ0vzkZFlJYOuxxInLS+d6R+7XUkKNXG+fuU9dYleNemFcEkcTklV5IfYCu1+YJjWTJGzfq00QeuRbkcT/ZN6dQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ErrorRetry;CAT:NONE;SFS:(13230016)(4636009)(136003)(346002)(39860400002)(396003)(376002)(40470700004)(46966006)(36840700001)(6862004)(54906003)(356005)(6512007)(6486002)(8936002)(82740400003)(40480700001)(70206006)(81166007)(5660300002)(86362001)(33656002)(8676002)(4326008)(70586007)(40460700003)(2616005)(83380400001)(336012)(63350400001)(82310400005)(41300700001)(186003)(63370400001)(36860700001)(36756003)(2906002)(478600001)(26005)(53546011)(316002)(47076005)(6506007)(45980500001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 14:27:52.9680
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 83d1f982-cbf0-4ef1-dc8d-08da545b6475
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5613

Hi,

> On 22 Jun 2022, at 15:10, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 22.06.2022 15:55, Bertrand Marquis wrote:
>> Hi Jan,
>>
>>> On 22 Jun 2022, at 14:01, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 22.06.2022 14:55, Michal Orzel wrote:
>>>> On 22.06.2022 12:25, Jan Beulich wrote:
>>>>> On 20.06.2022 09:02, Michal Orzel wrote:
>>>>>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
>>>>>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>>>>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
>>>>>> 'unsigned int' type as there are no other violations being part of that rule
>>>>>> in the Xen codebase.
>>>>>
>>>>> I'm puzzled, I have to admit. While I agree with all the examples in the
>>>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>>>>> Which matches my understanding that "unsigned" and "signed" on their own
>>>>> (just like "long") are proper types, and hence the omission of "int"
>>>>> there is not an "omission of an explicit type".
>>>>>
>>>> Cppcheck was chosen as a tool for MISRA checking and it is considering it as a violation.
>>>
>>> Which by no means indicates that the tool pointing out something as a
>>> violation actually is one.
>>>
>>>> It treats unsigned as an implicit type. You can see this flag in cppcheck source code:
>>>>
>>>> "fIsImplicitInt = (1U << 31), // Is "int" token implicitly added?"
>>>
>>> Neither the name of the variable nor the comment clarify that this is about
>>> the specific case of "unsigned". As said there's also the fact that they
>>> don't appear to point out the lack of "int" when seeing plain "long" (or
>>> "long long"). I fully agree that "extern x;" or "const y;" lack explicit
>>> "int".
>>
>> I am a bit puzzled here trying to understand what you actually want here.
>>
>> Do you suggest the change is ok but you are not ok with the fact that it is
>> flagged as a MISRA fix even though cppcheck is saying otherwise?
>
> First of all I'd like to understand whether what we're talking about here
> are actually violations (and if so, why that is). Further actions / requests
> depend on the answer.

I would say yes, but this would need to be confirmed by Roberto I guess.
In any case, if we think this is something we want, the change is right,
and cppcheck is saying that it solves a violation, I am wondering what
the point is of discussing this change so extensively.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:35:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:35:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353963.580969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41S3-0003FO-8Q; Wed, 22 Jun 2022 14:35:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353963.580969; Wed, 22 Jun 2022 14:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41S3-0003FH-5h; Wed, 22 Jun 2022 14:35:27 +0000
Received: by outflank-mailman (input) for mailman id 353963;
 Wed, 22 Jun 2022 14:35:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o41S1-0003FB-HJ
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:35:25 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d213ac0-f238-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:35:24 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0027F21BDD;
 Wed, 22 Jun 2022 14:35:23 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6A81F13A5D;
 Wed, 22 Jun 2022 14:35:23 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Qz7nF6sos2LQWQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 14:35:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d213ac0-f238-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655908524; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7jaxA5F5o0Lr/wQVDpMMYy+3VZ5Suu++PzDieNQ2BEs=;
	b=cx2UOedFGBuKGf/JqbMR/YQdhrS1QkliWi5/wBKwfKiY0M1HSoyfpAhX91B0lU2YXc28vx
	qx2UUJeuxzpYgekWgRdgySTx40EW+jCdDhPWFZ5rLHVfL95zKjvB/mSh4C6Rc8etdbqdC6
	RgLR4+iQOjTgfeSJBacW2SYWdjc9MOw=
Message-ID: <0f047970-d9ea-d2fd-3208-db843305e11c@suse.com>
Date: Wed, 22 Jun 2022 16:35:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v3 3/3] xen: don't require virtio with grants for non-PV
 guests
Content-Language: en-US
To: Oleksandr <olekstysh@gmail.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 linux-arm-kernel@lists.infradead.org, Viresh Kumar <viresh.kumar@linaro.org>
References: <20220622063838.8854-1-jgross@suse.com>
 <20220622063838.8854-4-jgross@suse.com>
 <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------HpOX0tVZwVFj2LVpY0Mh7dB0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------HpOX0tVZwVFj2LVpY0Mh7dB0
Content-Type: multipart/mixed; boundary="------------8bYfcCrngUaPIdjCRUqfu1Ha";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksandr <olekstysh@gmail.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 linux-arm-kernel@lists.infradead.org, Viresh Kumar <viresh.kumar@linaro.org>
Message-ID: <0f047970-d9ea-d2fd-3208-db843305e11c@suse.com>
Subject: Re: [PATCH v3 3/3] xen: don't require virtio with grants for non-PV
 guests
References: <20220622063838.8854-1-jgross@suse.com>
 <20220622063838.8854-4-jgross@suse.com>
 <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>
In-Reply-To: <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>

--------------8bYfcCrngUaPIdjCRUqfu1Ha
Content-Type: multipart/mixed; boundary="------------PY7imNEMXQ8rcO310kVlq7S2"

--------------PY7imNEMXQ8rcO310kVlq7S2
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22.06.22 11:03, Oleksandr wrote:
> 
> On 22.06.22 09:38, Juergen Gross wrote:
> 
> Hello Juergen
> 
>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>> Xen grant mappings") introduced a new requirement for using virtio
>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>> feature.
>>
>> This is an undue requirement for non-PV guests, as those can be operated
>> with existing backends without any problem, as long as those backends
>> are running in dom0.
>>
>> Per default allow virtio devices without grant support for non-PV
>> guests.
>>
>> On Arm require VIRTIO_F_ACCESS_PLATFORM for devices having been listed
>> in the device tree to use grants.
>>
>> Add a new config item to always force use of grants for virtio.
>>
>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen
>> grant mappings")
>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - remove command line parameter (Christoph Hellwig)
>> V3:
>> - rebase to callback method
> 
> 
> Patch looks good, just one NIT ...
> 
> 
>> ---
>>   arch/arm/xen/enlighten.c     |  4 +++-
>>   arch/x86/xen/enlighten_hvm.c |  4 +++-
>>   arch/x86/xen/enlighten_pv.c  |  5 ++++-
>>   drivers/xen/Kconfig          |  9 +++++++++
>>   drivers/xen/grant-dma-ops.c  | 10 ++++++++++
>>   include/xen/xen-ops.h        |  6 ++++++
>>   include/xen/xen.h            |  8 --------
>>   7 files changed, 35 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>> index 1f9c3ba32833..93c8ccbf2982 100644
>> --- a/arch/arm/xen/enlighten.c
>> +++ b/arch/arm/xen/enlighten.c
>> @@ -34,6 +34,7 @@
>>   #include <linux/timekeeping.h>
>>   #include <linux/timekeeper_internal.h>
>>   #include <linux/acpi.h>
>> +#include <linux/virtio_anchor.h>
>>   #include <linux/mm.h>
>> @@ -443,7 +444,8 @@ static int __init xen_guest_init(void)
>>       if (!xen_domain())
>>           return 0;
>> -    xen_set_restricted_virtio_memory_access();
>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO))
>> +        virtio_set_mem_acc_cb(xen_virtio_mem_acc);
>>       if (!acpi_disabled)
>>           xen_acpi_guest_init();
>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>> index 8b71b1dd7639..28762f800596 100644
>> --- a/arch/x86/xen/enlighten_hvm.c
>> +++ b/arch/x86/xen/enlighten_hvm.c
>> @@ -4,6 +4,7 @@
>>   #include <linux/cpu.h>
>>   #include <linux/kexec.h>
>>   #include <linux/memblock.h>
>> +#include <linux/virtio_anchor.h>
>>   #include <xen/features.h>
>>   #include <xen/events.h>
>> @@ -195,7 +196,8 @@ static void __init xen_hvm_guest_init(void)
>>       if (xen_pv_domain())
>>           return;
>> -    xen_set_restricted_virtio_memory_access();
>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
>> +        virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>>       init_hvm_pv_info();
>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>> index e3297b15701c..5aaae8a77f55 100644
>> --- a/arch/x86/xen/enlighten_pv.c
>> +++ b/arch/x86/xen/enlighten_pv.c
>> @@ -31,6 +31,7 @@
>>   #include <linux/gfp.h>
>>   #include <linux/edd.h>
>>   #include <linux/reboot.h>
>> +#include <linux/virtio_anchor.h>
>>   #include <xen/xen.h>
>>   #include <xen/events.h>
>> @@ -109,7 +110,9 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>>   static void __init xen_pv_init_platform(void)
>>   {
>> -    xen_set_restricted_virtio_memory_access();
>> +    /* PV guests can't operate virtio devices without grants. */
>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO))
>> +        virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>>       populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>> index bfd5f4f706bc..a65bd92121a5 100644
>> --- a/drivers/xen/Kconfig
>> +++ b/drivers/xen/Kconfig
>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>         If in doubt, say n.
>> +config XEN_VIRTIO_FORCE_GRANT
>> +    bool "Require Xen virtio support to use grants"
>> +    depends on XEN_VIRTIO
>> +    help
>> +      Require virtio for Xen guests to use grant mappings.
>> +      This will avoid the need to give the backend the right to map all
>> +      of the guest memory. This will need support on the backend side
>> +      (e.g. qemu or kernel, depending on the virtio device types used).
>> +
>>   endmenu
>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>> index fc0142484001..8973fc1e9ccc 100644
>> --- a/drivers/xen/grant-dma-ops.c
>> +++ b/drivers/xen/grant-dma-ops.c
>> @@ -12,6 +12,8 @@
>>   #include <linux/of.h>
>>   #include <linux/pfn.h>
>>   #include <linux/xarray.h>
>> +#include <linux/virtio_anchor.h>
>> +#include <linux/virtio.h>
>>   #include <xen/xen.h>
>>   #include <xen/xen-ops.h>
>>   #include <xen/grant_table.h>
>> @@ -287,6 +289,14 @@ bool xen_is_grant_dma_device(struct device *dev)
>>       return has_iommu;
>>   }
>> +bool xen_virtio_mem_acc(struct virtio_device *dev)
>> +{
>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
>> +        return true;
>> +
>> +    return xen_is_grant_dma_device(dev->dev.parent);
>> +}
> 
> 
>     ... I am thinking would it be better to move this to xen/xen-ops.h as
> grant-dma-ops.c is generic (not only for virtio, although the virtio is the
> first use-case)

I dislike using a function marked as inline in a function vector.

We could add another module "xen-virtio" for this purpose, but this seems
to be overkill.

I think we should just leave it here and move it later in case more real
virtio dependent stuff is being added.


Juergen
--------------PY7imNEMXQ8rcO310kVlq7S2
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------PY7imNEMXQ8rcO310kVlq7S2--

--------------8bYfcCrngUaPIdjCRUqfu1Ha--

--------------HpOX0tVZwVFj2LVpY0Mh7dB0
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKzKKoFAwAAAAAACgkQsN6d1ii/Ey/E
NQf+KgQnJgl8+Rf3bpQZznWiqs8nBgnovMYNS8FRot7UII4dT9i7aCL+WJtlIcoQ54pugoxWf1G/
/N0iXjvHnhkP6zOf9zEjPmypDet8v5Gkwzj3k7utn2oPmV8ubgB0g6hVpyb0z4f2mNLzsUNBjWr6
Qc7i9bHDVxIbXtQ9pgfAvg1X4gZT4J1rW7w6rkEbRhMDn+aUZVaX/XcLVJZOwOjcta0Os21a1nBE
QevEDFQeUNt2iK5lWIt7fBZsoIFHKThFeOiSn4lVsYRDBvvqCwlRHP1qFHyolXtKQJCFP9QZSyO4
nuOn19LPKZaC/9vz8CLTZruaRKqooeFjiRYyVxC5cg==
=ABxh
-----END PGP SIGNATURE-----

--------------HpOX0tVZwVFj2LVpY0Mh7dB0--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:38:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:38:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353971.580981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Uo-0003ua-QY; Wed, 22 Jun 2022 14:38:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353971.580981; Wed, 22 Jun 2022 14:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Uo-0003uT-Ns; Wed, 22 Jun 2022 14:38:18 +0000
Received: by outflank-mailman (input) for mailman id 353971;
 Wed, 22 Jun 2022 14:38:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41Un-0003uL-Fw
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:38:17 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id f3387b1f-f238-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:38:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 37CA8D6E;
 Wed, 22 Jun 2022 07:38:15 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B1C1F3F792;
 Wed, 22 Jun 2022 07:38:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3387b1f-f238-11ec-bd2d-47488cf2e6aa
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/8] xen/evtchn: implement static event channel signaling
Date: Wed, 22 Jun 2022 15:37:57 +0100
Message-Id: <cover.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The purpose of this patch series is to add static event channel signaling
support to Xen on Arm based on design doc [1].

This patch series depends on patch series [2], which creates event channels in Xen.

[1] https://lists.xenproject.org/archives/html/xen-devel/2022-05/msg01160.html
[2] https://patchwork.kernel.org/project/xen-devel/list/?series=646289

Rahul Singh (8):
  xen/evtchn: make evtchn_bind_interdomain global
  xen/evtchn: modify evtchn_alloc_unbound to allocate specified port
  xen/evtchn: modify evtchn_bind_interdomain to allocate specified port
  xen/evtchn: modify evtchn_bind_interdomain to pass domain as argument
  xen/evtchn: don't close the static event channel.
  xen/evtchn: don't set notification in evtchn_bind_interdomain()
  xen: introduce xen-evtchn dom0less property
  xen/arm: introduce new xen,enhanced property value

 docs/misc/arm/device-tree/booting.txt |  62 +++++-
 xen/arch/arm/domain_build.c           | 290 +++++++++++++++++++-------
 xen/arch/arm/include/asm/domain.h     |   1 +
 xen/arch/arm/include/asm/kernel.h     |   3 +
 xen/arch/arm/include/asm/setup.h      |   1 +
 xen/arch/arm/setup.c                  |   2 +
 xen/common/event_channel.c            |  68 ++++--
 xen/include/xen/event.h               |   8 +-
 xen/include/xen/sched.h               |   1 +
 9 files changed, 351 insertions(+), 85 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:38:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:38:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353979.580992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41VL-0004Q3-3V; Wed, 22 Jun 2022 14:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353979.580992; Wed, 22 Jun 2022 14:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41VL-0004Pw-0b; Wed, 22 Jun 2022 14:38:51 +0000
Received: by outflank-mailman (input) for mailman id 353979;
 Wed, 22 Jun 2022 14:38:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41VJ-0004Nu-Ub
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:38:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 06943b83-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:38:48 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B8737D6E;
 Wed, 22 Jun 2022 07:38:48 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5333D3F792;
 Wed, 22 Jun 2022 07:38:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06943b83-f239-11ec-b725-ed86ccbb4733
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/8] xen/evtchn: make evtchn_bind_interdomain global
Date: Wed, 22 Jun 2022 15:37:58 +0100
Message-Id: <b8324e47bcbd7feeb992501b22b46f0ede3c2c3d.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Event channel support will be added for dom0less domains to allocate
static event channels. It is necessary to have access to the
evtchn_bind_interdomain function to do that, so make
evtchn_bind_interdomain global and also mark it __must_check.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/common/event_channel.c | 2 +-
 xen/include/xen/event.h    | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e60cd98d75..8cbe9681da 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -347,7 +347,7 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
     evtchn_write_unlock(rchn);
 }
 
-static int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
+int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
     struct evtchn *lchn, *rchn;
     struct domain *ld = current->domain, *rd;
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index f3021fe304..61615ebbe3 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -74,6 +74,9 @@ int evtchn_allocate_port(struct domain *d, unsigned int port);
 /* Allocate a new event channel */
 int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc);
 
+/* Bind an event channel port to interdomain */
+int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind);
+
 /* Unmask a local event-channel port. */
 int evtchn_unmask(unsigned int port);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:39:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353984.581003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Vf-0004vZ-CO; Wed, 22 Jun 2022 14:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353984.581003; Wed, 22 Jun 2022 14:39:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Vf-0004vS-8r; Wed, 22 Jun 2022 14:39:11 +0000
Received: by outflank-mailman (input) for mailman id 353984;
 Wed, 22 Jun 2022 14:39:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41Ve-0003uL-Ax
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:39:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1334c710-f239-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:39:09 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DDD55D6E;
 Wed, 22 Jun 2022 07:39:08 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 64ECE3F792;
 Wed, 22 Jun 2022 07:39:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1334c710-f239-11ec-bd2d-47488cf2e6aa
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/8] xen/evtchn: modify evtchn_alloc_unbound to allocate specified port
Date: Wed, 22 Jun 2022 15:37:59 +0100
Message-Id: <5ea66595248c41a011ac465bfabd7a7a40dcd565.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_alloc_unbound() always allocates the next available port. Static
event channel support for dom0less domains requires allocating a
specified port.

Modify evtchn_alloc_unbound() to accept the port number as an
argument and allocate the specified port if it is available. If the
port number argument is zero, the next available port is allocated.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/domain_build.c |  2 +-
 xen/common/event_channel.c  | 26 +++++++++++++++++++++-----
 xen/include/xen/event.h     |  3 ++-
 3 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..5f97d9d181 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3171,7 +3171,7 @@ static int __init alloc_xenstore_evtchn(struct domain *d)
 
     alloc.dom = d->domain_id;
     alloc.remote_dom = hardware_domain->domain_id;
-    rc = evtchn_alloc_unbound(&alloc);
+    rc = evtchn_alloc_unbound(&alloc, 0);
     if ( rc )
     {
         printk("Failed allocating event channel for domain\n");
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 8cbe9681da..80a88c1544 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -290,11 +290,15 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
     xsm_evtchn_close_post(chn);
 }
 
-int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
+/*
+ * If port is zero get the next free port and allocate. If port is non-zero
+ * allocate the specified port.
+ */
+int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
 {
     struct evtchn *chn;
     struct domain *d;
-    int            port, rc;
+    int            rc;
     domid_t        dom = alloc->dom;
 
     d = rcu_lock_domain_by_any_id(dom);
@@ -303,8 +307,20 @@ int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 
     spin_lock(&d->event_lock);
 
-    if ( (port = get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
+    if ( port != 0 )
+    {
+        if ( (rc = evtchn_allocate_port(d, port)) != 0 )
+            ERROR_EXIT_DOM(rc, d);
+    }
+    else
+    {
+        int alloc_port = get_free_port(d);
+
+        if ( alloc_port < 0 )
+            ERROR_EXIT_DOM(alloc_port, d);
+        port = alloc_port;
+    }
+
     chn = evtchn_from_port(d, port);
 
     rc = xsm_evtchn_unbound(XSM_TARGET, d, chn, alloc->remote_dom);
@@ -1206,7 +1222,7 @@ long cf_check do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_alloc_unbound alloc_unbound;
         if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
             return -EFAULT;
-        rc = evtchn_alloc_unbound(&alloc_unbound);
+        rc = evtchn_alloc_unbound(&alloc_unbound, 0);
         if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 61615ebbe3..48820e393e 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -72,7 +72,8 @@ void evtchn_free(struct domain *d, struct evtchn *chn);
 int evtchn_allocate_port(struct domain *d, unsigned int port);
 
 /* Allocate a new event channel */
-int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc);
+int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc,
+                                      evtchn_port_t port);
 
 /* Bind an event channel port to interdomain */
 int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind);
-- 
2.25.1
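
P.S. The dual-mode port selection above can be illustrated with a
self-contained sketch. This is NOT Xen code: port_in_use, MAX_PORTS and
the helper names are stand-ins mimicking evtchn_allocate_port() /
get_free_port(), only to show the "zero means any port, non-zero means
exactly this port" behaviour.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the per-domain port table (assumption, not Xen's). */
#define MAX_PORTS 16
static bool port_in_use[MAX_PORTS];

/* Mimics evtchn_allocate_port(): claim a specific port if it is free. */
static int allocate_port(unsigned int port)
{
    if ( port >= MAX_PORTS || port_in_use[port] )
        return -1;                 /* an -E* error in the real code */
    port_in_use[port] = true;
    return 0;
}

/* Mimics get_free_port(): lowest free port; port 0 is reserved here so
 * that "port == 0" can mean "any port". */
static int get_free_port(void)
{
    for ( unsigned int p = 1; p < MAX_PORTS; p++ )
        if ( !port_in_use[p] )
        {
            port_in_use[p] = true;
            return (int)p;
        }
    return -1;
}

/* The dual-mode allocation the patch introduces: zero selects the next
 * free port, non-zero demands exactly that port or fails. */
static int alloc_unbound(unsigned int port)
{
    if ( port != 0 )
        return allocate_port(port) == 0 ? (int)port : -1;
    return get_free_port();
}
```

Requesting a specific port twice fails the second time, while a zero
request falls back to the lowest free port, matching the new code path.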



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:39:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353991.581015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Vs-0005MV-Mf; Wed, 22 Jun 2022 14:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353991.581015; Wed, 22 Jun 2022 14:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Vs-0005ML-Gz; Wed, 22 Jun 2022 14:39:24 +0000
Received: by outflank-mailman (input) for mailman id 353991;
 Wed, 22 Jun 2022 14:39:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41Vr-0003uL-29
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:39:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1acf1757-f239-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:39:22 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 96CD0D6E;
 Wed, 22 Jun 2022 07:39:21 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 389E93F792;
 Wed, 22 Jun 2022 07:39:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1acf1757-f239-11ec-bd2d-47488cf2e6aa
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/8] xen/evtchn: modify evtchn_bind_interdomain to allocate specified port
Date: Wed, 22 Jun 2022 15:38:00 +0100
Message-Id: <08fab20e71d280396d7b65397339ad9d9ab96d5c.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_bind_interdomain() always allocates the next available local
port. Static event channel support for dom0less domains requires
allocating a specified port.

Modify evtchn_bind_interdomain() to accept the local port number as an
argument and allocate the specified port if it is available. If the
port number argument is zero, the next available port is allocated.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/common/event_channel.c | 26 +++++++++++++++++++++-----
 xen/include/xen/event.h    |  3 ++-
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 80a88c1544..bf5dc2c8ad 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -363,11 +363,16 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
     evtchn_write_unlock(rchn);
 }
 
-int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
+/*
+ * If lport is zero get the next free port and allocate. If port is non-zero
+ * allocate the specified lport.
+ */
+int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind,
+                            evtchn_port_t lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *ld = current->domain, *rd;
-    int            lport, rc;
+    int            rc;
     evtchn_port_t  rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
 
@@ -387,8 +392,19 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
         spin_lock(&ld->event_lock);
     }
 
-    if ( (lport = get_free_port(ld)) < 0 )
-        ERROR_EXIT(lport);
+    if ( lport != 0 )
+    {
+        if ( (rc = evtchn_allocate_port(ld, lport)) != 0 )
+            ERROR_EXIT_DOM(rc, ld);
+    }
+    else
+    {
+        int alloc_port = get_free_port(ld);
+
+        if ( alloc_port < 0 )
+            ERROR_EXIT_DOM(alloc_port, ld);
+        lport = alloc_port;
+    }
     lchn = evtchn_from_port(ld, lport);
 
     rchn = _evtchn_from_port(rd, rport);
@@ -1232,7 +1248,7 @@ long cf_check do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_bind_interdomain bind_interdomain;
         if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
             return -EFAULT;
-        rc = evtchn_bind_interdomain(&bind_interdomain);
+        rc = evtchn_bind_interdomain(&bind_interdomain, 0);
         if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 48820e393e..6e26879793 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -76,7 +76,8 @@ int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc,
                                       evtchn_port_t port);
 
 /* Bind an event channel port to interdomain */
-int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind);
+int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind,
+                                         evtchn_port_t port);
 
 /* Unmask a local event-channel port. */
 int evtchn_unmask(unsigned int port);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:39:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:39:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.353996.581025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41W6-0005t7-SI; Wed, 22 Jun 2022 14:39:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 353996.581025; Wed, 22 Jun 2022 14:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41W6-0005sy-PB; Wed, 22 Jun 2022 14:39:38 +0000
Received: by outflank-mailman (input) for mailman id 353996;
 Wed, 22 Jun 2022 14:39:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41W5-0004Nu-8j
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:39:37 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 22c34c83-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:39:35 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 05ACAD6E;
 Wed, 22 Jun 2022 07:39:36 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9B2E13F792;
 Wed, 22 Jun 2022 07:39:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22c34c83-f239-11ec-b725-ed86ccbb4733
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 4/8] xen/evtchn: modify evtchn_bind_interdomain to pass domain as argument
Date: Wed, 22 Jun 2022 15:38:01 +0100
Message-Id: <037b30aa5186cff516f8acf17a3a465663a8194a.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_bind_interdomain() finds the local domain from the
"current->domain" pointer.

evtchn_bind_interdomain() will be called from Xen to support static
event channels during domain creation. The "current" pointer is not
valid at that time, so modify evtchn_bind_interdomain() to take the
local domain as an argument.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/common/event_channel.c | 6 +++---
 xen/include/xen/event.h    | 1 +
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index bf5dc2c8ad..84f0055a5a 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -367,11 +367,11 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
  * If lport is zero get the next free port and allocate. If port is non-zero
  * allocate the specified lport.
  */
-int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind,
+int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
                             evtchn_port_t lport)
 {
     struct evtchn *lchn, *rchn;
-    struct domain *ld = current->domain, *rd;
+    struct domain *rd;
     int            rc;
     evtchn_port_t  rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
@@ -1248,7 +1248,7 @@ long cf_check do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_bind_interdomain bind_interdomain;
         if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
             return -EFAULT;
-        rc = evtchn_bind_interdomain(&bind_interdomain, 0);
+        rc = evtchn_bind_interdomain(&bind_interdomain, current->domain, 0);
         if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 6e26879793..8eae9984a9 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -77,6 +77,7 @@ int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc,
 
 /* Bind an event channel port to interdomain */
 int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind,
+                                         struct domain *ld,
                                          evtchn_port_t port);
 
 /* Unmask a local event-channel port. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:39:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354002.581035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41WO-0006VI-4a; Wed, 22 Jun 2022 14:39:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354002.581035; Wed, 22 Jun 2022 14:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41WO-0006VB-1r; Wed, 22 Jun 2022 14:39:56 +0000
Received: by outflank-mailman (input) for mailman id 354002;
 Wed, 22 Jun 2022 14:39:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41WM-0004Nu-OP
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:39:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2cc5fc79-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:39:52 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C111ED6E;
 Wed, 22 Jun 2022 07:39:52 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 482D03F792;
 Wed, 22 Jun 2022 07:39:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cc5fc79-f239-11ec-b725-ed86ccbb4733
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Date: Wed, 22 Jun 2022 15:38:02 +0100
Message-Id: <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A guest can request Xen to close its event channels. Ignore such
requests for static event channels, as static event channels must not
be closed.

Add a new bool field "is_static" to "struct evtchn" to mark an event
channel as static when it is created.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/domain_build.c |  2 +-
 xen/common/event_channel.c  | 15 +++++++++++----
 xen/include/xen/event.h     |  4 ++--
 xen/include/xen/sched.h     |  1 +
 4 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 5f97d9d181..89195b042c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3171,7 +3171,7 @@ static int __init alloc_xenstore_evtchn(struct domain *d)
 
     alloc.dom = d->domain_id;
     alloc.remote_dom = hardware_domain->domain_id;
-    rc = evtchn_alloc_unbound(&alloc, 0);
+    rc = evtchn_alloc_unbound(&alloc, 0, false);
     if ( rc )
     {
         printk("Failed allocating event channel for domain\n");
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 84f0055a5a..cedc98ccaf 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -294,7 +294,8 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
  * If port is zero get the next free port and allocate. If port is non-zero
  * allocate the specified port.
  */
-int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
+int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port,
+                         bool is_static)
 {
     struct evtchn *chn;
     struct domain *d;
@@ -330,6 +331,7 @@ int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
     evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
+    chn->is_static = is_static;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
@@ -368,7 +370,7 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
  * allocate the specified lport.
  */
 int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
-                            evtchn_port_t lport)
+                            evtchn_port_t lport, bool is_static)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
@@ -423,6 +425,7 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
     lchn->state                     = ECS_INTERDOMAIN;
+    lchn->is_static                 = is_static;
     evtchn_port_init(ld, lchn);
     
     rchn->u.interdomain.remote_dom  = ld;
@@ -659,6 +662,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         rc = -EINVAL;
         goto out;
     }
+    /* Guest cannot close a static event channel. */
+    if ( chn1->is_static && guest )
+        goto out;
 
     switch ( chn1->state )
     {
@@ -1238,7 +1244,7 @@ long cf_check do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_alloc_unbound alloc_unbound;
         if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
             return -EFAULT;
-        rc = evtchn_alloc_unbound(&alloc_unbound, 0);
+        rc = evtchn_alloc_unbound(&alloc_unbound, 0, false);
         if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
@@ -1248,7 +1254,8 @@ long cf_check do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_bind_interdomain bind_interdomain;
         if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
             return -EFAULT;
-        rc = evtchn_bind_interdomain(&bind_interdomain, current->domain, 0);
+        rc = evtchn_bind_interdomain(&bind_interdomain, current->domain,
+                                     0, false);
         if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 8eae9984a9..71ad4c5bfd 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -73,12 +73,12 @@ int evtchn_allocate_port(struct domain *d, unsigned int port);
 
 /* Allocate a new event channel */
 int __must_check evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc,
-                                      evtchn_port_t port);
+                                      evtchn_port_t port, bool is_static);
 
 /* Bind an event channel port to interdomain */
 int __must_check evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind,
                                          struct domain *ld,
-                                         evtchn_port_t port);
+                                         evtchn_port_t port, bool is_static);
 
 /* Unmask a local event-channel port. */
 int evtchn_unmask(unsigned int port);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 463d41ffb6..da823c8091 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -119,6 +119,7 @@ struct evtchn
     unsigned char priority;        /* FIFO event channels only. */
     unsigned short notify_vcpu_id; /* VCPU for local delivery notification */
     uint32_t fifo_lastq;           /* Data for identifying last queue. */
+    bool is_static;                /* Static event channels. */
 
 #ifdef CONFIG_XSM
     union {
-- 
2.25.1
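
P.S. The guest-close guard can be sketched stand-alone. This is NOT the
real Xen structure or function: the two-field struct and
evtchn_close_sketch() are assumptions that only model the new
"chn->is_static && guest" check.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for struct evtchn, keeping only what the guard
 * needs (assumption for illustration). */
struct evtchn
{
    bool is_static;   /* set when the channel was created statically */
    bool open;        /* stand-in for the channel's live state */
};

/* A guest-initiated close of a static channel is ignored (mirroring
 * the patch's early "goto out"); Xen itself (guest == false) can still
 * tear the channel down. */
static int evtchn_close_sketch(struct evtchn *chn, bool guest)
{
    if ( chn->is_static && guest )
        return 0;
    chn->open = false;
    return 0;
}
```

A guest request against a static channel leaves it open; the same call
from within Xen closes it.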



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:41:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354010.581046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Xu-0008EH-LN; Wed, 22 Jun 2022 14:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354010.581046; Wed, 22 Jun 2022 14:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41Xu-0008EA-IS; Wed, 22 Jun 2022 14:41:30 +0000
Received: by outflank-mailman (input) for mailman id 354010;
 Wed, 22 Jun 2022 14:41:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o41Xt-0008Dr-A7
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:41:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60047.outbound.protection.outlook.com [40.107.6.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65434f00-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:41:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9347.eurprd04.prod.outlook.com (2603:10a6:10:357::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Wed, 22 Jun
 2022 14:41:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 14:41:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65434f00-f239-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=khNefwpP3pA1IhxOCZEHJgILXRvrqWLTscfxXSnDJUw+DDgOzaKTeh5HiRkPoQecHW6GRKybRo33sk0ye54K0drNyXpwmbwhgyRRtMchwlDmm9JkHZBIZ8qpo1zXsWsviFM0jCfq3GTs4vqfxdaNYcU5N5nyRQCzphszDP2OuRgPGzYJB2Xp88VJV1U+4FVZcCLSyhDrWs/Fw2+I7VRtQg1kLoFJkehg/g6ixAhPmCg/FOR5w6dafUiwA12zbEAowOnCFTMyEPUNYCraNnqzyzFQZuktShdxWZceWXZxQL2nTlbBv2SgF2i4p6vBmricbUy1emKuoZVjiqlsEE27Aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lxnYRklrBNX/a/PCwFncD9PTfIZLtozlem4/33xpRxo=;
 b=LL1/Nhn3geCuUmYVQ06Js1DvBlfblfGN3/4ehSUqcXUXLjfYsNhq4PlwPBSkWsYisbvxD4ygsxJeQPszjUGT27cHOskrVEorv79f1brrCyCOajj3sFLaoOZtOT49XZGOLRIbfURfQRVnWnijzOrlmAJATOl5lOaE2JVIVJiPT/pcD1nXfMa6ezPRN4VF2lYGV2Z6FuPIZEuQvtHNceak4/c3u+QtPBh1vVWBauftV4qOd1gyeokVXdwdRQCR2sByYrWOvtefCdOmQbdSp6QnCC6rXSjHDytp2UJDLyPpR5bWr03ggoui2s9H6Dr9ARhfZIrmi6HxG/IS/BkSBUD36g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lxnYRklrBNX/a/PCwFncD9PTfIZLtozlem4/33xpRxo=;
 b=1Fvmza11M6xGz7QWl6vLXDDVltpL1hI4UOHu+VV3fvKa+rH+Xu9JvUKWxhDbcx21k3MMac3zZC8FhflOOpgq+l+EOgkXHpzsNQo71I9R5bvMrGUB6ORjuHDtF8il70RuwmSjDJ/nuX7StC5I9AT3VNw/neo5IohGzqn2u1KhXZH6V1z7HepFF/UhdTZNh+6uyWlwS9Fcdh9DxWFd2UMznjVlE+8BqWLid3CF7RmuZOy2+2aWl+bTOwJTcF0yNctgzJuEKKiTTo2XpYGJswCvpNz5yottIjw8HqdqLF5X2lW1kRb2OHjQsBZUot/5t6NxYp7KO1iO5LmKbBwbONBCCQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
Date: Wed, 22 Jun 2022 16:41:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Michal Orzel <Michal.Orzel@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
 <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR1001CA0037.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 913aec08-23bd-4559-d225-08da545d47c4
X-MS-TrafficTypeDiagnostic: DU0PR04MB9347:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<DU0PR04MB934730222CCD7ABD9C9568F5B3B29@DU0PR04MB9347.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xZZhuN1gOgUL2X7XVVQGq5r4PNkuam1gCVWYWZwZO929B3FmHJUdUGMvgnr24tCZrUqtlHXs7yb1J/RepGf+JMgMR3+CwEiob7f5N4HTf9vwopLlgEPCanDjqltjFefl+b3/FNOMXwBwy4jAFgk3kSasW0+DiJLCq0jCF9j5fbMEPpgHKdH06cDB+BQ8lIUWEZJPA92+op9zsCmCp71owdLFl5UfIzHKKmU7jpNXIMSiS8x3pPQFQegff9LYG3oaWoNS2hFHV2Fe97D+R9AVuzeHQ07OEg8ybcooIz+/Z8t7G7qe8Gi/EOvRlghOOhEdClakKZrt0wa0ION2UFJ6jFZNsFQLgPGPbE2qHHO5epQNG6h/1fS5EXHgKu+eJT1sAwC5XCPeE6seUM42A6fqO6BDZNZhMz9PbjEIz/J9LGc3LTC1HpilKr+BxGeKZymR0yXAW15HIgiBk9nKCi/uo+vGswp4KaAkEDCMwIrFamBbI8c6UAPpOue8yqqJN7yaJYD2qinC53Ip2iz2L9UOT7rqVsK3CC158jPY5pzmvy7ZzVT5EnDqjOOunpMWmQL4uhFrO2IvX13dqzs7/sYO9t9YTGsDA07W3MRleGSyzpCtATbMOsZJ/lunHOntB5pHfwOeApt2a4/A8Zyd8WTRVrE5q87GfnBzgHGSKnGipx7LvsS9xYSkWAUlYusGxdqzUVIxovtpKwHSiKzpQo8vHKs7wSj+hWzJzobiPXdSfvwEMDf2KtffhKPxDZNNfHf5OVLLQHNztQEd4uvgcfL0cl33yB0J3i1VKDZpY/zvR7E=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(376002)(396003)(136003)(346002)(39860400002)(366004)(316002)(8936002)(38100700002)(7416002)(2616005)(26005)(31696002)(6666004)(6506007)(41300700001)(6916009)(6486002)(86362001)(31686004)(5660300002)(36756003)(478600001)(6512007)(8676002)(66946007)(4326008)(66556008)(53546011)(54906003)(186003)(2906002)(83380400001)(66476007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?blZsbzd0dUZHdmVKKzFVck8xNEZSK3g3SFh0bktXcVpSUHVXNWpxaHRjWDI3?=
 =?utf-8?B?TkJYUS85bUdaR1dNeWVRRUdKYmx3bXR1anBuTEFuRnBpcFZpMlVyQUptbHQ5?=
 =?utf-8?B?NkRVK1A1UDRKUUdrOHB2dXlrSSszQlR2TjBIM2NBM3R5ZUtXRTRlTDVMaFcr?=
 =?utf-8?B?V3kyRnRrRXpKWnlzNGVyMzlxK1Jvb1FpTzNtWTBDRTVUQWduY2p0NHFVWkpS?=
 =?utf-8?B?Ylh0amFoTGlIWXBBejd1V0Zvb0ZXbVlyYVZpdlBoSnpFVGxsMjZjdkMzcmIw?=
 =?utf-8?B?cFAzUzNFRkcwdXVONi9BUFRuejdOcmhEdEQ0akVjMzZCYzdjb3ozWXJkN05W?=
 =?utf-8?B?dkE4QTRjdWRHZ3lsVW11aWZ6a3owdmlqYmpNSmhsZFBZdWx0anowVHNZTkpl?=
 =?utf-8?B?R2swY25HR1VQbll3VTIzY3l2a3NGNEVETUVkZDg0RlpIVEFHQWtYdHp5RzM5?=
 =?utf-8?B?RWY0aEw0UnY1TkkzZWpIQU5vYlZncWNvMlVxaDJIOGVyYStLRklnYkV1WDVn?=
 =?utf-8?B?SlgvakF5Uk1SM09nRlhUangvUjZReTlLSHQ5QmErRS9FT2lHQnNaV3VNK0Nq?=
 =?utf-8?B?dlR1eElpc1dsRU56UExoNXFibWx2bXBYZURkdENqK3dPS0x1b1k5MnZGdVNp?=
 =?utf-8?B?TjdDQmRjckk2S25INklNM0pwR0RDNkhIUVV1bmlsT3ZzUnl4UCtjSGVHMUVv?=
 =?utf-8?B?UDJyc2VVYzRzUFVnaDZqdlMyZEJJM25lMGFwekFVQ0UwRzcrNkVRLzFlWFIr?=
 =?utf-8?B?TTQ3OE93dDRaSEM5V3dZRHhjQnFSb29LMHVQNm9iYURPaWR4UlR1RkNXRGFY?=
 =?utf-8?B?ZnQ2ZmNsYm5yK3VKU2hBNEdyM28weW1LVVV6RVlja3pWVWw5ekIxSDd1QWVh?=
 =?utf-8?B?Ly9CZ1ZIejRBYURZbUE5azhqeStxYk9XNjV5WEt2UU4rNEw2VUdpUjRnaGEw?=
 =?utf-8?B?bzZURlA1enVWYitrL29QemZyMFR3YlBIOGx5bDBpdnZsWGhoeWFRVTBnUHZM?=
 =?utf-8?B?ZE4rbnJYcEFFMngrcXgyNU81bTErajVKWnRFTWxhekkyRlZpQmxGWUtQdm9Q?=
 =?utf-8?B?RWxhNjNGNU5sZ01tc2hHSVdBcGg4R3hQV05hRnlKaU1JVjcxMFpKU0h4Zy96?=
 =?utf-8?B?TkZ5N25QbkNqTWFvZ1BtcDl3T2VzbmsvWDE3RklOaFhMY0phd2VXRUZhd0tS?=
 =?utf-8?B?a1FYQjEyZWtwOHMxYUZ2VXViMzd5c2JuYjdveDNaSlJDMlN6Njg5ZSt0cDBy?=
 =?utf-8?B?WGtMVU4rUUIrU2Q5alUzeUdQcVE3Y2NaWm9kcDJwemdZMndZYkJwV3g2V3Rx?=
 =?utf-8?B?aENKSFJaRmVkV2RjZnJDdW15dHhuZm42TFZyTGlmSHFMWVliOExXMStHMk9X?=
 =?utf-8?B?VzlEUEJ5WWFoUXpYZG9WNFh1MUxyNTljTWs5YkhZYnpyT1pOYjlUaTVGNFZU?=
 =?utf-8?B?VUdmRHhySzFCT3daVThLYzl5RFRQNGhLZXo5dGd0S3BLTDdSdjNCZzQ3YXdS?=
 =?utf-8?B?VGhNMnRzWVA4Unlnd2NmcDlYZGdnRS8vZ1hNSHl0QXNHVFpuU25GRUxDZDhl?=
 =?utf-8?B?bFB3dzRmUERETjVHSFViOTVxUGMyang2REYxdmVIRXNjZkd1THNlOHE4dUtL?=
 =?utf-8?B?Sm9sUVpyUkx3T0tvNHIxYys4K1RwdExkSXlsRmt5UWVNZ3ZPOFdWaVZMdFIw?=
 =?utf-8?B?dUZvVUY4WFNET0JFVlhyaGZlR2Ztd3I2Rno0eVAvOFEzSFJzdG8wVHJtSmxu?=
 =?utf-8?B?L2xiTVNQVENOaDdzbVZlMmpxanNUZEd3a2IveFhwcEhQWmlmUm5lQnY2TzBq?=
 =?utf-8?B?Tmo4aWhMdEtLZ0N1K0I1R1plV1JYRVFPTk5SdmxkUHdjWEFQS1FiMjF5SkRL?=
 =?utf-8?B?SWdUWExvcEZXVUNaNVphc3k2ZFd0VFdxdmx3eFBSQVJlZjgybDg2RjJybzVI?=
 =?utf-8?B?MUc2WlBUTWdwcXplWEE4ZFltcFczckZ6TXd0b1AyTlc2K1N3N1IyZmE3bEVQ?=
 =?utf-8?B?WE1HRmt1MlhWQXNJMzl3WW84cVRqQUtjRm5PRERuYXlHUisyNzArSEI0ZGRq?=
 =?utf-8?B?UVprc3ViNlFJa21XaUhtSVJTMmFlVXl0cDJUcFFyU0x3VDhnaFBuZHdqcVZr?=
 =?utf-8?B?dnNSTllNUWFLYzZYMmdiMmd4TWRvUlNjOFJINkVvdFAxaXdneGRnZCtXMml4?=
 =?utf-8?B?MFA3Umh2UGtFVWQ1UzROdXRKdDhiRWRLNnpFcmVKVmVJSG5BTjJ5RW42cWgw?=
 =?utf-8?B?SjJ0VWtjb1h1WVZieXg2ZStMSHU4UUdPKzA1TVRLVk9qQmpkTVd0SU02V08r?=
 =?utf-8?B?dVExZndvb0Q5KzdKZ2JST05WWnRudVRGcHFUS0x6Y0hsamhWcjVtQT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 913aec08-23bd-4559-d225-08da545d47c4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 14:41:24.1619
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: E+q/gL9G9yD+80hVnatmAh2/A1ZnR08c718yn1qtLXBskATsGufsPbF7Sm40JeVd4c06rjKvUfXyKFnROasU4w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9347

On 22.06.2022 16:27, Bertrand Marquis wrote:
> Hi,
> 
>> On 22 Jun 2022, at 15:10, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 22.06.2022 15:55, Bertrand Marquis wrote:
>>> Hi Jan,
>>>
>>>> On 22 Jun 2022, at 14:01, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 22.06.2022 14:55, Michal Orzel wrote:
>>>>> On 22.06.2022 12:25, Jan Beulich wrote:
>>>>>> On 20.06.2022 09:02, Michal Orzel wrote:
>>>>>>> This series fixes all the findings for MISRA C 2012 Rule 8.1, reported by
>>>>>>> cppcheck 2.7 with the misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>>>>>> Fixing this rule comes down to replacing implicit 'unsigned' with the explicit
>>>>>>> 'unsigned int' type, as there are no other violations of that rule
>>>>>>> in the Xen codebase.
>>>>>>
>>>>>> I'm puzzled, I have to admit. While I agree with all the examples in the
>>>>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>>>>>> Which matches my understanding that "unsigned" and "signed" on their own
>>>>>> (just like "long") are proper types, and hence the omission of "int"
>>>>>> there is not an "omission of an explicit type".
>>>>>>
>>>>> Cppcheck was chosen as the tool for MISRA checking, and it considers this a violation.
>>>>
>>>> Which by no means indicates that the tool pointing out something as a
>>>> violation actually is one.
>>>>
>>>>> It treats unsigned as an implicit type. You can see this flag in the cppcheck source code:
>>>>>
>>>>> "fIsImplicitInt = (1U << 31), // Is "int" token implicitly added?"
>>>>
>>>> Neither the name of the variable nor the comment clarify that this is about
>>>> the specific case of "unsigned". As said there's also the fact that they
>>>> don't appear to point out the lack of "int" when seeing plain "long" (or
>>>> "long long"). I fully agree that "extern x;" or "const y;" lack explicit
>>>> "int".
>>>
>>> I am a bit puzzled here, trying to understand what you actually want.
>>>
>>> Are you suggesting that the change is ok, but you are not ok with it being
>>> flagged as a MISRA fix even though cppcheck says otherwise?
>>
>> First of all I'd like to understand whether what we're talking about here
>> are actually violations (and if so, why that is). Further actions / requests
>> depend on the answer.
> 
> I would say yes, but I guess this would need to be confirmed by Roberto.
> In any case, if we think this is something we want, the change is right,
> and cppcheck says it solves a violation, I wonder what the point is of
> discussing this change so extensively.

Because, imo, a patch shouldn't be committed with a misleading description
if at all possible. Even less so several such patches in one go.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:47:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354024.581058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41di-0000UW-BE; Wed, 22 Jun 2022 14:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354024.581058; Wed, 22 Jun 2022 14:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41di-0000UO-8H; Wed, 22 Jun 2022 14:47:30 +0000
Received: by outflank-mailman (input) for mailman id 354024;
 Wed, 22 Jun 2022 14:47:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kzGk=W5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o41dh-0000UH-2r
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:47:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60042.outbound.protection.outlook.com [40.107.6.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3c639f23-f23a-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:47:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB9015.eurprd04.prod.outlook.com (2603:10a6:20b:40b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Wed, 22 Jun
 2022 14:47:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 14:47:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c639f23-f23a-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OjAMMM3FGYKxG5nNrx+wFgbETpazU/UcD4bDqHrxreEPX0nEGaeycw6pQIPotiF14RX8d6g70q4BQNZPYSb0aXGyKH2V9UvfvURcfcWhbtiejrJYSAgj1w/PeXEuT2Cn6i2sEpgcZyUd4DUCBmCMWiD+7nJulhEpXxW2FIF+zqcn7Kv2VkDtp/TVADxWKP9Pu1lfOhHEGrUy/ADGhyjHZ6Gr+bMSB3awp+oGTSBohzF7G0+DkIFuDU3ZIfZ9ovOWvzn1b+ehl65x/gBfGYxnI2uyQQAIxwJz5brcwXrEnuh44hNyLRlVk0dCBK6O1FuxR1W8N/1TdF4lXg4m9Depmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZqdmGhm9cWS7wDJEELqzqMhbHsqthLU+rE1P1HskFfI=;
 b=PbX3I/fl0hruSTRhiUKGO5mOQQL2MYRJjMgk34THhCyh0zfut8rt7EwFDq0InhDeAykaLtPPHSkyIjEPi2eol5AhYe4YO09meMlpsNRmv8kO286a2UEu33wj6209dkSgw5C8FnUGj4txlSBnxYxrlH0NJ7vNHZqHm9e7Z1CYKpdQwDtkSHVWnsV5mXlqd1M3aM8z4qUisWjN/u+djnSRDkuTW69qNKTs72VyoxFyhRr7xIfqB2366rk5M+QQwB5N546gR7e7Opuvi4IjA+hsaBE9uFT1riuUgti5wchvFuTf3jw7TfpwWjZM/7qCffrAhEZTCIg79FaupMrwa6cbdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZqdmGhm9cWS7wDJEELqzqMhbHsqthLU+rE1P1HskFfI=;
 b=bPfSA9V3gRtLIDFQoSzjXCpl1tCNd/ex6RDYZ7pCmkkrO4HSYekDspevt8nkC21kf9g0BrQNTaJ3FCFjpTau/zAxTOkytAamArTJnfg+l6AFCLzYTku7maFB5VcCo31CMgI70Qt41s7M1OSG8fABZi1Brp6bUllM7wEl1XYWz/PGWxOFon73ohHN0edCIZajAU57c/Hcp5wNaQ6gIHP08GjrC0p7io2gaXAmXihpYazIqH+YlPAUVPznTce/ExGqVqrW4eJvdolxQW+asG2mYVVe1vY07qhG50eqVkL/XFDVLSJWmXyN76BgTE64gtUoywq34pXyj4V7wHLGBtEe4Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <60ee0a19-064f-c8e3-aa65-b8c3d998a32a@suse.com>
Date: Wed, 22 Jun 2022 16:47:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2.1 1/4] build,include: rework shell script for
 headers++.chk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Michal Orzel <michal.orzel@arm.com>
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220621101128.50543-1-anthony.perard@citrix.com>
 <e264e3bf-b436-684a-13a1-be0aa0f6bfbb@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e264e3bf-b436-684a-13a1-be0aa0f6bfbb@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8PR04CA0039.eurprd04.prod.outlook.com
 (2603:10a6:20b:312::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: aa9cc63f-4370-42e6-2cd9-08da545e1fb8
X-MS-TrafficTypeDiagnostic: AM9PR04MB9015:EE_
X-Microsoft-Antispam-PRVS:
	<AM9PR04MB90153DC51B9487D2E4980AEEB3B29@AM9PR04MB9015.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LVfsAu4j0WCSk6f/xDxpIcepIvctfwASHCMYpLYXlvgjTS8sCtLic5HcxswmionKDbe74i+Vm0YnnDWN+S+PTuH9XXCwvBfX/Cw8Qxqpku2spTFMAXZVeTxrU2wFOKMFaI8k1otec8scQkj6pJuHfWFF1Qv+fwckZS/DVKN5nfqIn5Qin5B2mwExf5+QR7uz09ash5VOaBNRbD4Q9/h9IQ89QVoXpC7oyc6liISxKW5KGYgRIzWAyc7sXEdbPVUGaU5syTqgrPomaffe6cW8rG58mMM/r/rd8okz+3uvC0g13cgTFT+xZMxpCSmXIGY2fhAF2HLdRWj5RxHlT/7iqRV0GbM6+qpUV94TWtqzDKGRlK+LLRhiEsqST29YNmrC8Mm3MEAxxQnAaGhbodiNm7r0PG9AeNVeJZ8PybWHBHgenU0rmuSSp0amgta5U1l3nF/3uU5sF6lhIrlZXH3Uag0KIVUnZTmPKbFvAe15qidUiKYwjXeaVrS/hD8C/uXIlaTmvl0+KPD8JS3OJTKgKseUvsuN+lC7iZK7vTuiDRMecp4QvtyahizF7MWK9L0Zqt0j3X7pU4fddiS676G2sGWujEP3R6PDZ0ZpsRMW/I2uOoVw0Ie53QFBGVw2ZOeNEf+kHob2GAqlXd623ySgTTDwmLblr2A7mlX/Msvm/IFbNk3QdvwnzH3l/HSQJcZGn0wxmo5oaZNNXP9ueDnTTWU6UNNESH+4g0g6WPHHKlB7Q6l1dnPA4rZibfEs/SQNFGzP7L5XU9n2D28qWx0JI/OUzc7fdQLYos8jS1hfiI0=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(396003)(376002)(39860400002)(366004)(346002)(136003)(53546011)(2906002)(6512007)(26005)(86362001)(6506007)(31696002)(54906003)(316002)(36756003)(4326008)(38100700002)(31686004)(6916009)(66476007)(66946007)(8676002)(66556008)(8936002)(2616005)(186003)(5660300002)(478600001)(83380400001)(41300700001)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dWxJM1ZjMFp3djdvZEplYkN2U3IwSzlDcjNUYko0bU9uTVB5cW5ROWZKMGtw?=
 =?utf-8?B?YjFzYTVIcldkL2pEZzh4NkE3NkNqYW5hOU91WlRoTlJDOFU3NW5yc0pBMXRP?=
 =?utf-8?B?NU1NL2h3Y1RSbktNZTZuT1ZQQU95SzdVOGNVY0trZ3F3ZjZlUDhsV0lRcFFC?=
 =?utf-8?B?ZEpadnNNVjhkanpKak02YTRscjB3aWIxcjNmZmpqTExESjRsSkJmeVpmQ0pz?=
 =?utf-8?B?R2loSWtWbWxueVVMTVQ3QWViL3VhT3hvOTBXRDUzYXB5R1hBT21PanlLUmti?=
 =?utf-8?B?aDl5UVh4MTEyNExpQkpDRHJURWpiOUVjU0VtaDhkOWJlZTkvbUVSUS9xVk1D?=
 =?utf-8?B?cjFYaUVCZEgybUs5Q2pVdDdmcE9kK1Jld1JvN3NybTVJWnBGK3QwYU9xWWFW?=
 =?utf-8?B?VHBpVWROL2x2b2tDYjkybHBybkd6K2Jnb0g4L21qajI1dm1Xbk16bTFUYXpv?=
 =?utf-8?B?QmVzSnFocTNseU9WL2VHKzFTeHNaa0RSb1R0NjYzU0Z6OGJ4akNPaVpITFN0?=
 =?utf-8?B?My9kdDF6Q0FXeUUvaVRybTA2T3NsSTNtS0JnSnBIRUJFUk13R3FrMEt2Tzdt?=
 =?utf-8?B?NjdMcENwbi8xVWZ3K1p3bUswQW1CcXlock1TUzMrNEV4RDlia01EQ0Z4RmpF?=
 =?utf-8?B?M2pUcWQ5cmdDM255a2s1OWlrNHRwZlMxQkpRQUw5Tm05blFVdjVSSFdFUkhy?=
 =?utf-8?B?eFlHV3U3d1lwMXBFR1RpSjhOcUpRUFVYSWxGZVFReHJMZGVtZFF4MXNvZXk5?=
 =?utf-8?B?QUlYVDl5TTJYVTd4SkdxUXhDaXVjeXlnWXVwc0V6d0FKb2VBbFNZaHM2RytJ?=
 =?utf-8?B?aW1GMC9TbGVZNmw4MkMyTjdVeXpuSEtQbGY2VHRYYXZQbU0rWm5hMW0yTS9r?=
 =?utf-8?B?SGFTd0NsZWs3TCtOZjhFTFZ1MTBTODVsazRPMFdPU2hIV0t1NVZjc0QxcEIr?=
 =?utf-8?B?akZ1MkVjV2xObDdVUXJXKzM1Znh4MlByL2xtMGFJNnJuTzZ4WWhCWXdnS0w1?=
 =?utf-8?B?ZkJISFhNSEtEQ0xqSUZwQ0tLMDVCV1VUdmFEbGtEMnNadWNmelhWOVg4ZGdM?=
 =?utf-8?B?Y1RPQUpnL2hrdmk3K1BWY051MEJ0K1EvQXdNdUJWT3ZMSitXa3VLMjNQWnhU?=
 =?utf-8?B?SXNtaXk0bkJxRWFIaEhpTDZKN3FaNlAvT29VblNFdVBCOE1jRWpEZUNIeTFh?=
 =?utf-8?B?MGRvN1huVjV6S1gzM3VkMUxkMUYvRDFZYkRjcnFRd21qUVNKMlpWajZSaE1q?=
 =?utf-8?B?WkpvVDVHdkJwUGV2SnBUWnhLeWlmU0pnODlEeTRNK1ZoSDFNdlVaWFd3clRQ?=
 =?utf-8?B?MVRuRUsxcEhVZlQ3b2VxN3ZCYTRKWndzMnNSbGV5VUdIa0VOOG01TC9pTVd3?=
 =?utf-8?B?R3ZzN0RhNENHVHFiQUtVUW9qWXBXTWdGRTFwMkp1NVZuUVI4dys1eER5ZkYw?=
 =?utf-8?B?ODkxbTBCVjBNRm1teWRlKzJsODZ0MlVTRkZDN2JLakFiQTJkZzhwYmlhYW8x?=
 =?utf-8?B?ZmRTR3hpRnR1WkFtejhHN0ltU3gvL3ZTSFllVk1VYWVHZWwrbGF0akxrZXFY?=
 =?utf-8?B?QlJPSlFqOEx3NWxVYi9ycWJtaWFXNnJlQUdTQzNTc1VpVzNaVHdXenYzWFU3?=
 =?utf-8?B?RURWWkdYSk9SWW1tbnNyalVGNU1MdTNrbGdsYTlna0xTWHVZSXhLZHptNXR2?=
 =?utf-8?B?a0VlWlU1MWdVNUZJU1N4cE9sK200UnRmbGw2dXdKZFlpbG1qdGoyaUJKSFdN?=
 =?utf-8?B?R3pxWS9ib0MyMUNSMWJKU0RoVlpycGNqVWgwejlNRUFnSC81eUE1bEw2a3pj?=
 =?utf-8?B?VmdPYjJkZTc5NmJBeldPeE9TNUJOdng4Z1htSG1BZlVYZ3MwZWZaRDZaUVFR?=
 =?utf-8?B?OE1PdVhyMjNzYml3M0dhRUJuVFhVMUpvY3cxLzBvbXh5R0JRNHpnU0cxeG43?=
 =?utf-8?B?V1dXRTdRVEFOQTVVU29QNFFUUzNIeDg4YWVoL1Vjcm9WckVPY25kOE9WbTlx?=
 =?utf-8?B?djBYVEk0czdSUFY3WUxaczE4TThtSXBOVHFEQmNwR3JOVUxNV04zZjkzKzdt?=
 =?utf-8?B?YUprc1djbHBPazR3OG0rUW5mL0p6VlZoK21zOStsY0JXTENBZW9pVTRCLys5?=
 =?utf-8?B?Rjg0M20rb1drRVZmVTgvOEpwSHFDQVlCZVQ0OXZwVTd6YW0renNpK2IvNEJG?=
 =?utf-8?B?cVdUS1NFcnhkcE1GWDlNK3FES2w3S3dSNFRUek94dnVIK2tpSk1uOThNMXBk?=
 =?utf-8?B?RW1LS204UkFnL25YK2htamZQVmNtRGVDNGxoSEdEUDFqWkpBYjFiK3pLTVJk?=
 =?utf-8?B?bEoyR2dra0w1c3VscHQyM2oycWRqU2pCRGNuWTRmR2ZxRTY1eGdnUT09?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aa9cc63f-4370-42e6-2cd9-08da545e1fb8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 14:47:26.3262
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1jMoFD15jhrkZ+PCPTi3fDj3nJQoiL/dcVn/wBBQI1KNhjAf2Qq9bI01nSQiu1fbVo5LqWDNtVgTsx9LPdlFqA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB9015

On 22.06.2022 14:35, Michal Orzel wrote:
> Hi Anthony,
> 
> On 21.06.2022 12:11, Anthony PERARD wrote:
>> The command line generated for headers++.chk by make is quite long,
>> and in some environments it is too long. This issue has been seen in
>> the Yocto build environment.
>>
>> Error messages:
>>     make[9]: execvp: /bin/sh: Argument list too long
>>     make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
>>
>> Rework so that we do the foreach loop in shell rather than in make, to
>> greatly reduce the command line size. We also need a way to get header
>> prerequisites for some public headers, so we use a shell "case"
>> statement to be able to do some simple pattern matching. Plain
>> variables in POSIX shell don't allow working with associative arrays or
>> with names containing "/".
>>
>> Also rework headers99.chk as it has a similar implementation, even
>> though with only two headers to check its command line isn't too long
>> at the moment.
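(A hedged sketch of the technique the commit message describes — mapping a header name to its prerequisites with a POSIX "case" statement instead of an associative array; the header names below are illustrative only, not the actual headers++.chk rules:)

```shell
#!/bin/sh
# POSIX shell has no associative arrays, and "/" cannot appear in a
# variable name, so a "case" statement does the header -> prerequisites
# mapping via simple pattern matching.
prereqs_for () {
    case $1 in
        arch-arm.h) echo "xen.h" ;;     # hypothetical: needs xen.h first
        hvm/*)      echo "xen.h" ;;     # pattern match on a "/" path
        *)          echo "" ;;          # default: no extra prerequisites
    esac
}

# The loop runs in shell, keeping make's generated command line short.
for h in xen.h arch-arm.h hvm/params.h; do
    printf 'checking %s (prereqs: %s)\n' "$h" "$(prereqs_for "$h")"
done
```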
>>
>> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
>> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Reviewed-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> Tested-by: Michal Orzel <michal.orzel@arm.com>
> 
> Cheers,
> Michal



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:48:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354034.581069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41eh-00012G-Mr; Wed, 22 Jun 2022 14:48:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354034.581069; Wed, 22 Jun 2022 14:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41eh-000129-IP; Wed, 22 Jun 2022 14:48:31 +0000
Received: by outflank-mailman (input) for mailman id 354034;
 Wed, 22 Jun 2022 14:48:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41X5-0004Nu-3S
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:40:39 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 476aff44-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:40:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 85593D6E;
 Wed, 22 Jun 2022 07:40:37 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7739C3F792;
 Wed, 22 Jun 2022 07:40:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 476aff44-f239-11ec-b725-ed86ccbb4733
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 8/8] xen/arm: introduce new xen,enhanced property value
Date: Wed, 22 Jun 2022 15:38:05 +0100
Message-Id: <67a86c52dfdcd5dd1f56ac3442089012aba09ff6.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce "evtchn", a new value for the "xen,enhanced" dom0less property,
to enable/disable the event-channel interface for dom0less guests.

This configurable option is for domUs only; for dom0 we always set the
corresponding property to true in the Xen code.
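(For illustration, a hypothetical domU node sketch using the new value — the node name and memory/cpus values are made up, not taken from the patch:)

```
/* Hypothetical dom0less domU node under /chosen: "evtchn" exposes only
 * the event-channel interface, while "enabled" exposes the full set of
 * enhanced interfaces. */
domU1 {
    compatible = "xen,domain";
    memory = <0x0 0x20000>;
    cpus = <1>;
    xen,enhanced = "evtchn";
};
```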

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/domain_build.c       | 149 ++++++++++++++++--------------
 xen/arch/arm/include/asm/kernel.h |   3 +
 2 files changed, 82 insertions(+), 70 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8925f0d80c..a1c1ab5877 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1396,85 +1396,92 @@ static int __init make_hypervisor_node(struct domain *d,
     if ( res )
         return res;
 
-    if ( !opt_ext_regions )
-    {
-        printk(XENLOG_INFO "%pd: extended regions support is disabled\n", d);
-        nr_ext_regions = 0;
-    }
-    else if ( is_32bit_domain(d) )
-    {
-        printk(XENLOG_WARNING
-               "%pd: extended regions not supported for 32-bit guests\n", d);
-        nr_ext_regions = 0;
-    }
-    else
+    if ( kinfo->dom0less_enhanced )
     {
-        ext_regions = xzalloc(struct meminfo);
-        if ( !ext_regions )
-            return -ENOMEM;
-
-        if ( is_domain_direct_mapped(d) )
+        if ( !opt_ext_regions )
         {
-            if ( !is_iommu_enabled(d) )
-                res = find_unallocated_memory(kinfo, ext_regions);
-            else
-                res = find_memory_holes(kinfo, ext_regions);
+            printk(XENLOG_INFO
+                   "%pd: extended regions support is disabled\n", d);
+            nr_ext_regions = 0;
         }
-        else
+        else if ( is_32bit_domain(d) )
         {
-            res = find_domU_holes(kinfo, ext_regions);
+            printk(XENLOG_WARNING
+                   "%pd: extended regions not supported for 32-bit guests\n", d);
+            nr_ext_regions = 0;
         }
+        else
+        {
+            ext_regions = xzalloc(struct meminfo);
+            if ( !ext_regions )
+                return -ENOMEM;
 
-        if ( res )
-            printk(XENLOG_WARNING "%pd: failed to allocate extended regions\n",
-                   d);
-        nr_ext_regions = ext_regions->nr_banks;
-    }
+            if ( is_domain_direct_mapped(d) )
+            {
+                if ( !is_iommu_enabled(d) )
+                    res = find_unallocated_memory(kinfo, ext_regions);
+                else
+                    res = find_memory_holes(kinfo, ext_regions);
+            }
+            else
+            {
+                res = find_domU_holes(kinfo, ext_regions);
+            }
 
-    reg = xzalloc_array(__be32, (nr_ext_regions + 1) * (addrcells + sizecells));
-    if ( !reg )
-    {
-        xfree(ext_regions);
-        return -ENOMEM;
-    }
+            if ( res )
+                printk(XENLOG_WARNING
+                       "%pd: failed to allocate extended regions\n", d);
+            nr_ext_regions = ext_regions->nr_banks;
+        }
 
-    /* reg 0 is grant table space */
-    cells = &reg[0];
-    dt_child_set_range(&cells, addrcells, sizecells,
-                       kinfo->gnttab_start, kinfo->gnttab_size);
-    /* reg 1...N are extended regions */
-    for ( i = 0; i < nr_ext_regions; i++ )
-    {
-        u64 start = ext_regions->bank[i].start;
-        u64 size = ext_regions->bank[i].size;
+        reg = xzalloc_array(__be32, (nr_ext_regions + 1) * (addrcells + sizecells));
+        if ( !reg )
+        {
+            xfree(ext_regions);
+            return -ENOMEM;
+        }
 
-        printk("%pd: extended region %d: %#"PRIx64"->%#"PRIx64"\n",
-               d, i, start, start + size);
+        /* reg 0 is grant table space */
+        cells = &reg[0];
+        dt_child_set_range(&cells, addrcells, sizecells,
+                           kinfo->gnttab_start, kinfo->gnttab_size);
+        /* reg 1...N are extended regions */
+        for ( i = 0; i < nr_ext_regions; i++ )
+        {
+            u64 start = ext_regions->bank[i].start;
+            u64 size = ext_regions->bank[i].size;
 
-        dt_child_set_range(&cells, addrcells, sizecells, start, size);
-    }
+            printk("%pd: extended region %d: %#"PRIx64"->%#"PRIx64"\n",
+                   d, i, start, start + size);
 
-    res = fdt_property(fdt, "reg", reg,
-                       dt_cells_to_size(addrcells + sizecells) *
-                       (nr_ext_regions + 1));
-    xfree(ext_regions);
-    xfree(reg);
+            dt_child_set_range(&cells, addrcells, sizecells, start, size);
+        }
 
-    if ( res )
-        return res;
+        res = fdt_property(fdt, "reg", reg,
+                           dt_cells_to_size(addrcells + sizecells) *
+                           (nr_ext_regions + 1));
+        xfree(ext_regions);
+        xfree(reg);
 
-    BUG_ON(d->arch.evtchn_irq == 0);
+        if ( res )
+            return res;
+    }
 
-    /*
-     * Interrupt event channel upcall:
-     *  - Active-low level-sensitive
-     *  - All CPUs
-     *  TODO: Handle properly the cpumask;
-     */
-    set_interrupt(intr, d->arch.evtchn_irq, 0xf, DT_IRQ_TYPE_LEVEL_LOW);
-    res = fdt_property_interrupts(kinfo, &intr, 1);
-    if ( res )
-        return res;
+    if ( kinfo->dom0less_evtchn )
+    {
+        BUG_ON(d->arch.evtchn_irq == 0);
+
+        /*
+         * Interrupt event channel upcall:
+         *  - Active-low level-sensitive
+         *  - All CPUs
+         *  TODO: Handle properly the cpumask;
+         */
+        set_interrupt(intr, d->arch.evtchn_irq, 0xf, DT_IRQ_TYPE_LEVEL_LOW);
+        res = fdt_property_interrupts(kinfo, &intr, 1);
+        if ( res )
+            return res;
+    }
 
     res = fdt_end_node(fdt);
 
@@ -2891,7 +2898,7 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
             goto err;
     }
 
-    if ( kinfo->dom0less_enhanced )
+    if ( kinfo->dom0less_enhanced || kinfo->dom0less_evtchn )
     {
         ret = make_hypervisor_node(d, kinfo, addrcells, sizecells);
         if ( ret )
@@ -3346,11 +3353,11 @@ static int __init construct_domU(struct domain *d,
          rc == -ENODATA ||
          (rc == 0 && !strcmp(dom0less_enhanced, "enabled")) )
     {
-        if ( hardware_domain )
-            kinfo.dom0less_enhanced = true;
-        else
-            panic("Tried to use xen,enhanced without dom0\n");
+        kinfo.dom0less_enhanced = true;
+        kinfo.dom0less_evtchn = true;
     }
+    else if ( rc == 0 && !strcmp(dom0less_enhanced, "evtchn") )
+        kinfo.dom0less_evtchn = true;
 
     if ( vcpu_create(d, 0) == NULL )
         return -ENOMEM;
@@ -3529,6 +3536,8 @@ static int __init construct_dom0(struct domain *d)
 
     kinfo.unassigned_mem = dom0_mem;
     kinfo.d = d;
+    kinfo.dom0less_enhanced = true;
+    kinfo.dom0less_evtchn = true;
 
     rc = kernel_probe(&kinfo, NULL);
     if ( rc < 0 )
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index c4dc039b54..7cff19b997 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -39,6 +39,9 @@ struct kernel_info {
     /* Enable PV drivers */
     bool dom0less_enhanced;
 
+    /* Enable event-channel interface */
+    bool dom0less_evtchn;
+
     /* GIC phandle */
     uint32_t phandle_gic;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:48:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354039.581080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41eo-0001Mk-4M; Wed, 22 Jun 2022 14:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354039.581080; Wed, 22 Jun 2022 14:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41en-0001Mb-WA; Wed, 22 Jun 2022 14:48:37 +0000
Received: by outflank-mailman (input) for mailman id 354039;
 Wed, 22 Jun 2022 14:48:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41WV-0003uL-0k
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:40:03 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 323eee82-f239-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 16:40:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0A0F0D6E;
 Wed, 22 Jun 2022 07:40:01 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A02233F792;
 Wed, 22 Jun 2022 07:39:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 323eee82-f239-11ec-bd2d-47488cf2e6aa
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 6/8] xen/evtchn: don't set notification in evtchn_bind_interdomain()
Date: Wed, 22 Jun 2022 15:38:03 +0100
Message-Id: <0cb096d37f2ac6cb7c5aa04cad7ad5377a0934db.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_bind_interdomain() sets a notification on the local port to
handle lost notifications on the remote unbound port.

Static event channels are created during domain creation, so there is no
need to set the notification: the remote domain is not running yet and
no notification can have been lost.
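The behaviour described above can be sketched as a tiny standalone model (the struct and function names here are illustrative stand-ins, not the real Xen types):

```c
#include <stdbool.h>

/* Illustrative stand-in for a local event channel (not the real layout). */
struct toy_evtchn {
    bool pending;
};

/*
 * Model of the bind path: a notification is conservatively set on the
 * local port only for dynamically bound channels. Static channels are
 * created at domain build time, before any domain runs, so no
 * notification can have been lost and the fix-up is skipped.
 */
static void toy_bind_interdomain(struct toy_evtchn *lchn, bool is_static)
{
    if ( !is_static )
        lchn->pending = true;   /* fix up possibly lost notifications */
}
```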

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/common/event_channel.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cedc98ccaf..420d18b986 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -435,8 +435,13 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
     /*
      * We may have lost notifications on the remote unbound port. Fix that up
      * here by conservatively always setting a notification on the local port.
+     *
+     * There is no need to set the notification if the event channel is
+     * created by Xen: the domains are not running at that point, so no
+     * notification can have been lost.
      */
-    evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
+    if ( !is_static )
+        evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
     double_evtchn_unlock(lchn, rchn);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:48:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354040.581085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41eo-0001Q8-Eo; Wed, 22 Jun 2022 14:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354040.581085; Wed, 22 Jun 2022 14:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41eo-0001PI-8i; Wed, 22 Jun 2022 14:48:38 +0000
Received: by outflank-mailman (input) for mailman id 354040;
 Wed, 22 Jun 2022 14:48:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBTv=W5=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1o41Wh-0004Nu-UO
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:40:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 398cbf7a-f239-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 16:40:13 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5A1F41477;
 Wed, 22 Jun 2022 07:40:14 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 302303F792;
 Wed, 22 Jun 2022 07:40:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 398cbf7a-f239-11ec-b725-ed86ccbb4733
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 7/8] xen: introduce xen-evtchn dom0less property
Date: Wed, 22 Jun 2022 15:38:04 +0100
Message-Id: <f2bc792f8dea59648b011cda4fe7c42929c4e3d7.1655903088.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1655903088.git.rahul.singh@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new sub-node under the /chosen node to establish static event
channel communication between domains on dom0less systems.

An event channel will be created beforehand to allow the domains to
send notifications to each other.
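The binding pairs two sub-nodes via phandles, and each node's second cell must point back at the other. That cross-check can be modeled in isolation (toy types, illustrative only; the real parsing uses Xen's dt_* helpers):

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of an evtchn sub-node: its own phandle and the phandle it
 * links to (the second cell of its "xen,evtchn" property). */
struct toy_node {
    uint32_t phandle;
    uint32_t link;
};

/* Two nodes describe a valid static channel only if each links to the
 * other, mirroring the back-reference check done while parsing. */
static bool toy_links_match(const struct toy_node *a, const struct toy_node *b)
{
    return a->link == b->phandle && b->link == a->phandle;
}
```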

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 docs/misc/arm/device-tree/booting.txt |  62 +++++++++++-
 xen/arch/arm/domain_build.c           | 139 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h     |   1 +
 xen/arch/arm/include/asm/setup.h      |   1 +
 xen/arch/arm/setup.c                  |   2 +
 5 files changed, 204 insertions(+), 1 deletion(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 98253414b8..83e914b505 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -212,7 +212,7 @@ with the following properties:
     enable only selected interfaces.
 
 Under the "xen,domain" compatible node, one or more sub-nodes are present
-for the DomU kernel and ramdisk.
+for the DomU kernel, ramdisk and static event channel.
 
 The kernel sub-node has the following properties:
 
@@ -254,11 +254,42 @@ The ramdisk sub-node has the following properties:
     property because it will be created by the UEFI stub on boot.
     This option is needed only when UEFI boot is used.
 
+The static event channel sub-node has the following properties:
+
+- compatible
+
+    "xen,evtchn-v1"
+
+- xen,evtchn
+
+    The property is a tuple of two numbers
+    (local-evtchn link-to-foreign-evtchn) where:
+
+    local-evtchn is an integer value used to allocate the local port for a
+    domain to send and receive event notifications to/from the remote domain.
+    The maximum supported value is 2^17 for the FIFO ABI and 4096 for the 2L ABI.
+
+    link-to-foreign-evtchn is a single phandle to a remote evtchn to which
+    local-evtchn will be connected.
 
 Example
 =======
 
 chosen {
+
+    module@0 {
+        compatible = "multiboot,kernel", "multiboot,module";
+        xen,uefi-binary = "...";
+        bootargs = "...";
+
+        /* one sub-node per local event channel */
+        ec1: evtchn@1 {
+            compatible = "xen,evtchn-v1";
+            /* local-evtchn link-to-foreign-evtchn */
+            xen,evtchn = <0xa &ec2>;
+        };
+    };
+
     domU1 {
         compatible = "xen,domain";
         #address-cells = <0x2>;
@@ -277,6 +308,23 @@ chosen {
             compatible = "multiboot,ramdisk", "multiboot,module";
             reg = <0x0 0x4b000000 0xffffff>;
         };
+
+        /* one sub-node per local event channel */
+        ec2: evtchn@2 {
+            compatible = "xen,evtchn-v1";
+            /* local-evtchn link-to-foreign-evtchn */
+            xen,evtchn = <0xa &ec1>;
+        };
+
+        ec3: evtchn@3 {
+            compatible = "xen,evtchn-v1";
+            xen,evtchn = <0xb &ec5>;
+        };
+
+        ec4: evtchn@4 {
+            compatible = "xen,evtchn-v1";
+            xen,evtchn = <0xc &ec6>;
+        };
     };
 
     domU2 {
@@ -296,6 +344,18 @@ chosen {
             compatible = "multiboot,ramdisk", "multiboot,module";
             reg = <0x0 0x4d000000 0xffffff>;
         };
+
+        /* one sub-node per local event channel */
+        ec5: evtchn@5 {
+            compatible = "xen,evtchn-v1";
+            /* local-evtchn link-to-foreign-evtchn */
+            xen,evtchn = <0xb &ec3>;
+        };
+
+        ec6: evtchn@6 {
+            compatible = "xen,evtchn-v1";
+            xen,evtchn = <0xd &ec4>;
+        };
     };
 };
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 89195b042c..8925f0d80c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3052,6 +3052,144 @@ void __init evtchn_allocate(struct domain *d)
     d->arch.hvm.params[HVM_PARAM_CALLBACK_IRQ] = val;
 }
 
+static int __init allocate_domain_evtchn(const struct dt_device_node *node)
+{
+    const void *prop = NULL;
+    const __be32 *cell;
+    uint32_t len, domU1_port, domU2_port, remote_phandle;
+    const struct dt_device_node *evtchn_node, *remote_node;
+    struct evtchn_alloc_unbound alloc_unbound;
+    struct evtchn_bind_interdomain bind_interdomain;
+    int rc;
+
+    dt_for_each_child_node(node, evtchn_node)
+    {
+        struct domain *d, *d1 = NULL, *d2 = NULL;
+
+        if ( !dt_device_is_compatible(evtchn_node, "xen,evtchn-v1") )
+            continue;
+
+        prop = dt_get_property(evtchn_node, "xen,evtchn", &len);
+        /* If the property is not found, return without errors */
+        if ( !prop )
+            return 0;
+
+        if ( !len )
+        {
+            printk(XENLOG_ERR "xen,evtchn property cannot be empty.\n");
+            return -EINVAL;
+        }
+
+        cell = (const __be32 *)prop;
+        domU1_port = dt_next_cell(1, &cell);
+        remote_phandle = dt_next_cell(1, &cell);
+
+        remote_node = dt_find_node_by_phandle(remote_phandle);
+        if ( !remote_node )
+        {
+            printk(XENLOG_ERR
+                   "evtchn: could not find remote evtchn phandle\n");
+            return -EINVAL;
+        }
+
+        prop = dt_get_property(remote_node, "xen,evtchn", &len);
+        /* If the property is not found, return without errors */
+        if ( !prop )
+            return 0;
+
+        if ( !len )
+        {
+            printk(XENLOG_ERR "xen,evtchn property cannot be empty.\n");
+            return -EINVAL;
+        }
+
+        cell = (const __be32 *)prop;
+        domU2_port = dt_next_cell(1, &cell);
+        remote_phandle = dt_next_cell(1, &cell);
+
+        if ( evtchn_node->phandle != remote_phandle )
+        {
+            printk(XENLOG_ERR "xen,evtchn property is not setup correctly.\n");
+            return -EINVAL;
+        }
+
+        for_each_domain ( d )
+        {
+            if ( d->arch.node == node )
+            {
+                d1 = d;
+                continue;
+            }
+            if ( d->arch.node == dt_get_parent(remote_node) )
+                d2 = d;
+        }
+
+        if ( d1 == NULL )
+        {
+            if ( dt_device_is_compatible(node, "multiboot,kernel") )
+                d1 = hardware_domain;
+            else
+            {
+                printk(XENLOG_ERR "evtchn: could not find domain\n");
+                return -EINVAL;
+            }
+        }
+
+        if ( d2 == NULL )
+        {
+            if ( dt_device_is_compatible(dt_get_parent(remote_node),
+                                         "multiboot,kernel") )
+                d2 = hardware_domain;
+            else
+            {
+                printk(XENLOG_ERR "evtchn: could not find domain\n");
+                return -EINVAL;
+            }
+        }
+
+        alloc_unbound.dom = d1->domain_id;
+        alloc_unbound.remote_dom = d2->domain_id;
+
+        rc = evtchn_alloc_unbound(&alloc_unbound, domU1_port, true);
+        if ( rc < 0 && rc != -EBUSY )
+        {
+            printk(XENLOG_ERR
+                   "evtchn_alloc_unbound() failure (Error %d)\n", rc);
+            return rc;
+        }
+
+        bind_interdomain.remote_dom  = d1->domain_id;
+        bind_interdomain.remote_port = domU1_port;
+
+        rc = evtchn_bind_interdomain(&bind_interdomain, d2, domU2_port, true);
+        if ( rc < 0 && rc != -EBUSY )
+        {
+            printk(XENLOG_ERR
+                   "evtchn_bind_interdomain() failure (Error %d)\n", rc);
+            return rc;
+        }
+    }
+
+    return 0;
+}
+
+void __init allocate_static_evtchn(void)
+{
+    struct dt_device_node *node;
+    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+
+    BUG_ON(chosen == NULL);
+    dt_for_each_child_node(chosen, node)
+    {
+        if ( dt_device_is_compatible(node, "xen,domain") ||
+             dt_device_is_compatible(node, "multiboot,kernel") )
+        {
+            if ( allocate_domain_evtchn(node) != 0 )
+                panic("Could not set up domains evtchn\n");
+        }
+    }
+}
+
 static void __init find_gnttab_region(struct domain *d,
                                       struct kernel_info *kinfo)
 {
@@ -3358,6 +3496,7 @@ void __init create_domUs(void)
             panic("Error creating domain %s\n", dt_node_name(node));
 
         d->is_console = true;
+        d->arch.node = node;
 
         if ( construct_domU(d, node) != 0 )
             panic("Could not set up domain %s\n", dt_node_name(node));
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f9..7c22cbabcc 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -104,6 +104,7 @@ struct arch_domain
 #endif
 
     bool directmap;
+    struct dt_device_node *node;
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 2bb01ecfa8..bac876e68e 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -106,6 +106,7 @@ int acpi_make_efi_nodes(void *fdt, struct membank tbl_add[]);
 
 void create_domUs(void);
 void create_dom0(void);
+void allocate_static_evtchn(void);
 
 void discard_initial_modules(void);
 void fw_unreserved_regions(paddr_t s, paddr_t e,
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 05d97a1cfb..0936db58b2 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1046,6 +1046,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     if ( acpi_disabled )
         create_domUs();
 
+    allocate_static_evtchn();
+
     /*
      * This needs to be called **before** heap_init_late() so modules
      * will be scrubbed (unless suppressed).
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 14:51:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 14:51:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354078.581102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41hx-0003j7-SQ; Wed, 22 Jun 2022 14:51:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354078.581102; Wed, 22 Jun 2022 14:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41hx-0003j0-Ow; Wed, 22 Jun 2022 14:51:53 +0000
Received: by outflank-mailman (input) for mailman id 354078;
 Wed, 22 Jun 2022 14:51:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o41hv-0003iu-UI
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 14:51:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o41hv-0006dn-HC; Wed, 22 Jun 2022 14:51:51 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o41hv-0002FW-Ar; Wed, 22 Jun 2022 14:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=vGgKW0gVOm12CZhxUySqbYVfpcHSXjAYjivITo6fojQ=; b=qPKs1VaXZyvnzTNjY+ndc+nxNJ
	AsAJqJTEzpcoXueRpsluPLPgbRUyoJ8BXVXrisozm+xG7lQLX4xegSVTLMEsExI8wproiJf5FyaQk
	pQIin2XTe6yYcb7fN49KXJbdj56JLfOfT7V22WM9hUmTUqLiXVEpqzW+QuL7kfgY8MTs=;
Message-ID: <2cdde2eb-33ac-568b-a0ae-b819b7b4161b@xen.org>
Date: Wed, 22 Jun 2022 15:51:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/8] xen/evtchn: modify evtchn_alloc_unbound to allocate
 specified port
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <5ea66595248c41a011ac465bfabd7a7a40dcd565.1655903088.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5ea66595248c41a011ac465bfabd7a7a40dcd565.1655903088.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 22/06/2022 15:37, Rahul Singh wrote:
> evtchn_alloc_unbound() always allocates the next available port. Static
> event channel support for dom0less domains requires allocating a
> specified port.
> 
> Modify the evtchn_alloc_unbound() to accept the port number as an
> argument and allocate the specified port if available. If the port
> number argument is zero, the next available port will be allocated.

I haven't yet fully reviewed this series. But I would like to point out 
that this opens a security hole (which I thought I had mentioned before) 
that could be exploited by a guest at runtime.

You would need [1] or similar in order to fix the issue. I wrote 
"similar" because the patch could potentially be a problem if you allow 
a guest to use FIFO (you may need to allocate a lot of memory to fill 
the hole).

Cheers,

[1] 
https://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=commit;h=2d89486fcf11216331e58a21b367b8a9be1af725

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:05:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354088.581112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41v1-0005Gd-1E; Wed, 22 Jun 2022 15:05:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354088.581112; Wed, 22 Jun 2022 15:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41v0-0005GW-UN; Wed, 22 Jun 2022 15:05:22 +0000
Received: by outflank-mailman (input) for mailman id 354088;
 Wed, 22 Jun 2022 15:05:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o41uy-0005GQ-Rk
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:05:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o41uy-0006sJ-8S; Wed, 22 Jun 2022 15:05:20 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o41uy-0002yF-1P; Wed, 22 Jun 2022 15:05:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=BFCMwyrNPp2qFe9FX3HCPNNL5Vv2AAfhhxTeX/ShkmI=; b=B77U3qPZF7i/8twLir5wK0ZuRg
	MDUE5YPwGC7bWb6JoT9O18DECLB7KFOn0hG716ObwrPkhiHTlyyzXNsEUGgKrvO3eAWauq8VSEGnh
	CW3rASEo2giW1LRHrfzoKD7T5gy4XYdeIJ4Mrfc9YtBEgdElmdqNbULgEWK3I9Mqp37k=;
Message-ID: <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
Date: Wed, 22 Jun 2022 16:05:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 22/06/2022 15:38, Rahul Singh wrote:
A guest can request Xen to close its event channels. Ignore requests
from the guest to close static event channels, as static event channels
should not be closed.

Why do you want to prevent the guest to close static ports? The problem 
I can see is...

[...]

> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 84f0055a5a..cedc98ccaf 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -294,7 +294,8 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
>    * If port is zero get the next free port and allocate. If port is non-zero
>    * allocate the specified port.
>    */
> -int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
> +int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port,
> +                         bool is_static)
>   {
>       struct evtchn *chn;
>       struct domain *d;
> @@ -330,6 +331,7 @@ int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
>       evtchn_write_lock(chn);
>   
>       chn->state = ECS_UNBOUND;
> +    chn->is_static = is_static;
>       if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
>           chn->u.unbound.remote_domid = current->domain->domain_id;
>       evtchn_port_init(d, chn);
> @@ -368,7 +370,7 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
>    * allocate the specified lport.
>    */
>   int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
> -                            evtchn_port_t lport)
> +                            evtchn_port_t lport, bool is_static)
>   {
>       struct evtchn *lchn, *rchn;
>       struct domain *rd;
> @@ -423,6 +425,7 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
>       lchn->u.interdomain.remote_dom  = rd;
>       lchn->u.interdomain.remote_port = rport;
>       lchn->state                     = ECS_INTERDOMAIN;
> +    lchn->is_static                 = is_static;
>       evtchn_port_init(ld, lchn);
>       
>       rchn->u.interdomain.remote_dom  = ld;
> @@ -659,6 +662,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>           rc = -EINVAL;
>           goto out;
>       }
> +    /* Guest cannot close a static event channel. */
> +    if ( chn1->is_static && guest )
> +        goto out;

... at least the interdomain structure stores a pointer to the domain. I am 
a bit concerned that we would end up leaving dangling pointers (such as 
chn->u.interdomain.remote_domain) as evtchn_close() is also used while 
destroying the domain.

Also, AFAICT Xen will return 0 (i.e. success) to the caller. I think 
this is a mistake because we didn't close the port as requested.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:05:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354089.581123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41v8-0005XD-A9; Wed, 22 Jun 2022 15:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354089.581123; Wed, 22 Jun 2022 15:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o41v8-0005X6-7B; Wed, 22 Jun 2022 15:05:30 +0000
Received: by outflank-mailman (input) for mailman id 354089;
 Wed, 22 Jun 2022 15:05:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sIgH=W5=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o41v6-0005Wd-Qu
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:05:28 +0000
Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com
 [2607:f8b0:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bf49d918-f23c-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 17:05:27 +0200 (CEST)
Received: by mail-pg1-x533.google.com with SMTP id e63so14843878pgc.5
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:05:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf49d918-f23c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=m2fEB/71pwCE9eTd59Wtx1+QnU9ZX7ZeAeUZ1Zdu2CY=;
        b=HkBqb8OKZgr8BZAuwu7/uRTiw2h/4d1vkK/FDTWiKwXe3E759U+41JvecM5jDlJaaF
         q1W5c+Iibv6l/bMprfxNt/RCGGwO12DdL5DSO6O2qDN6h+7IB0hRqM6SftRcfAd6demW
         4kIdjdC16RxJqfMMMGP/SY6pLtX2ePobiaqPxNyS8L8ZFoW6j5RLJh71mWmvT+oHis/i
         /wfxtT31zJakpUivUDrpEqeW7OlEXAyKEQje73h7+lazcex4J8aAAU3+Yzb57b/qGRfi
         ra1eXDYVnrZip5Xl9hSFNY9E1+Ngn2hlSjANQrK4iAGGZefOKes4bDffYXeAtbE7LIcN
         hphw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=m2fEB/71pwCE9eTd59Wtx1+QnU9ZX7ZeAeUZ1Zdu2CY=;
        b=lNV6hqyujaTowivGDoNRC0mBPxf/ClgOGCG55vW47PWCqP9NfqQitPYysMw8sOd+55
         0xXJ8zH5JtztFO30KuuZXuubpjMQkjFBmN9DrZrH7Nxt6a0BS1BRXSV02vc8NjZSfSNX
         LXNWwDGS+4UHjvQHaIiqCee+1ATIEaczBLXk6KqZOTAktFzbkKAfiKU0SyasFBsdSeIz
         8msK7RwJZjfcea416NTbyYgUHQTKYNh+ultOtKNhdBfwUCfBOa0xVZiT9f/Xi10dJQ9M
         aJwz3eIRB4fcSJNnuIVLXBQ6a2C0880SXGJu0L/smgPXsD5AYq+56ookNfZ3clmSnhhm
         8wkA==
X-Gm-Message-State: AJIora+pA8v92yGnDkrwTY0mdSBayswB6dQkTwJ0FWDSO9twO/zwI2Pe
	LM+pQKwYjSSo6WTARdi8FYV+cjCAuoZPsQlP9W8=
X-Google-Smtp-Source: AGRyM1uVcd6HHyGLdfbG38FwTiu27y4hzDKVXD9r3fCHwb6GrneqzFQLlpMcEhleSM/1EFdJbM/D4rg8aWR5ILZl4SY=
X-Received: by 2002:aa7:9f9b:0:b0:525:1e0a:a6b4 with SMTP id
 z27-20020aa79f9b000000b005251e0aa6b4mr18385997pfr.5.1655910325447; Wed, 22
 Jun 2022 08:05:25 -0700 (PDT)
MIME-Version: 1.0
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7> <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>
 <20220622114950.lpidph5ugvozhbu5@vireshk-i7>
In-Reply-To: <20220622114950.lpidph5ugvozhbu5@vireshk-i7>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Wed, 22 Jun 2022 18:05:13 +0300
Message-ID: <CAPD2p-kFeC8FygFcbpEbH3CzrAM7Td+G68t9ebOFR4V0w1dpEQ@mail.gmail.com>
Subject: Re: Virtio on Xen with Rust
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Stratos Mailing List <stratos-dev@op-lists.linaro.org>, 
	=?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Stefano Stabellini <stefano.stabellini@xilinx.com>, 
	Mathieu Poirier <mathieu.poirier@linaro.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Mike Holmes <mike.holmes@linaro.org>, Wei Liu <wl@xen.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, Juergen Gross <jgross@suse.com>, 
	Julien Grall <julien@xen.org>
Content-Type: multipart/alternative; boundary="000000000000ba6ecc05e20aaae9"

--000000000000ba6ecc05e20aaae9
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 22, 2022 at 2:49 PM Viresh Kumar <viresh.kumar@linaro.org>
wrote:

> On 28-04-22, 16:52, Oleksandr Tyshchenko wrote:
> > FYI, currently we are working on one feature to restrict memory access
> > using Xen grant mappings based on xen-grant DMA-mapping layer for Linux
> [1].
> > And there is a working PoC on Arm based on an updated virtio-disk. As for
> > libraries, there is a new dependency on "xengnttab" library. In
> comparison
> > with Xen foreign mappings model (xenforeignmemory),
> > the Xen grant mappings model is a good fit into the Xen security model,
> > this is a safe mechanism to share pages between guests.
>
> Hi Oleksandr,
>

Hello Viresh

[sorry for the possible format issues]


>
> I started getting this stuff into our work and have few questions.
>
> - IIUC, with this feature the guest will allow the host to access only
> certain
>   parts of the guest memory, which is exactly what we want as well. I
> looked at
>   the updated code in virtio-disk and you currently don't allow the grant
> table
>   mappings along with MAP_IN_ADVANCE, is there any particular reason for
> that ?
>


MAP_IN_ADVANCE is an optimization that is only applicable if all incoming
addresses are guest physical addresses (gpa) and the backend is allowed to
map arbitrary guest pages using foreign mappings.
It is an option to demonstrate how a trusted backend (running in dom0,
for example) can pre-map guest memory in advance and then only calculate a
host address at runtime, using the incoming gpa as an offset (there are no
xenforeignmemory_map/xenforeignmemory_unmap calls for every request). But
if the guest uses grant mappings for virtio (CONFIG_XEN_VIRTIO=y), all
incoming addresses are grants instead of gpas (even the virtqueue
descriptor ring addresses are grants). Leaving aside the fact that
restricted virtio memory access in the guest means that not all of guest
memory can be accessed, even with guest memory pre-mapped in advance we
would not be able to calculate a host pointer, as we don't know which gpa
a particular grant belongs to.



>
> - I understand that you currently map on the go, the virqueue descriptor
> rings
>   and then the protocol specific addresses later on, once virtio requests
> are
>   received from the guest.
>
>   But in our case, Vhost user with Rust based hypervisor agnostic backend,
> the
>   vhost master side can send a number of memory regions for the slave
> (backend)
>   to map and the backend won't try to map anything apart from that. The
>   virtqueue descriptor rings are available at this point and can be sent,
> but
>   not the protocol specific addresses, which are available only when a
> virtio
>   request comes.


> - And so we would like to map everything in advance, and access only the
> parts
>   which we need to, assuming that the guest would just allow those (as the
>   addresses are shared by the guest itself).
>
> - Will that just work with the current stuff ?
>


I am not sure that I understand this use case.
Well, let's consider the virtio-disk example; it demonstrates three
possible memory mapping modes:
1. All addresses are gpa, map/unmap at runtime using foreign mappings
2. All addresses are gpa, map in advance using foreign mappings
3. All addresses are grants, map/unmap at runtime using grant mappings

If you are asking about a #4 that would imply mapping in advance together
with using grants, then I think no, this won't work with the current stuff.
These are conflicting options: either grants and map at runtime, or gpa
and map in advance.
If there is a wish to optimize when using grants, then "maybe" it is worth
looking into how persistent grants work for the PV block device, for
example (feature-persistent in blkif.h).



>
> - In Linux's drivers/xen/gntdev.c, we have:
>
>   static unsigned int limit = 64*1024;
>
>   which translates to 256MB I think, i.e. the max amount of memory we can
> map at
>   once. Will making this 128*1024 allow me to map 512 MB for example in a
> single
>   call ? Any other changes required ?
>

I am not sure, but I guess the total number is limited by the hypervisor
itself. Could you try increasing gnttab_max_frames in the first place?



>
> - When I tried that, I got few errors which I am still not able to fix:
>
>   The IOCTL_GNTDEV_MAP_GRANT_REF ioctl passed but there were failures after
>   that:
>
>   (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x40000 for d1
>   (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x40001 for d1
>
>   ...
>
>   (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5fffd for d1
>   (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5fffe for d1
>   (XEN) common/grant_table.c:1055:d0v2 Bad ref 0x5ffff for d1
>   gnttab: error: mmap failed: Invalid argument
>
>
> I am working on Linus's origin/master along with the initial patch from
> Juergen,
> picked your Xen patch for iommu node.
>

Yes, this is the correct environment. Please note that Juergen has recently
pushed a new version [1].


>
> I am still at initial stages to properly test this stuff, just wanted to
> share
> the progress to help myself save some of the time debugging this :)
>
> Thanks.
>
> --
> viresh
>

[1] https://lore.kernel.org/xen-devel/20220622063838.8854-1-jgross@suse.com/



-- 
Regards,

Oleksandr Tyshchenko

--000000000000ba6ecc05e20aaae9--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:15:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:15:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354107.581137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o424s-0007Qq-Ff; Wed, 22 Jun 2022 15:15:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354107.581137; Wed, 22 Jun 2022 15:15:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o424s-0007Qj-Cw; Wed, 22 Jun 2022 15:15:34 +0000
Received: by outflank-mailman (input) for mailman id 354107;
 Wed, 22 Jun 2022 15:15:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1Wy=W5=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o424q-0007QM-M1
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:15:32 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28160e9e-f23e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 17:15:31 +0200 (CEST)
Received: by mail-wr1-x42d.google.com with SMTP id g27so17227019wrb.10
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:15:31 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 p11-20020a05600c418b00b00397342e3830sm5069392wmh.0.2022.06.22.08.15.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 08:15:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28160e9e-f23e-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=j2rncxPDfJaPpuQwqHGVJi9q+yld7nhIqqPjjzXvlAA=;
        b=Y6WXWKq6JB6Ih24FFqZpi407L/j+ynMtcODAEudGQsyhpQ4xZs9LcxDArl6nu7KJxD
         cu4V8cHJnMIcqPwlc4faz88K1cKtgBNdKLT4qN4kBVFzKOmTqeQnahP5HtDvMOQ/z4xY
         SYIb1z28tcbZv8AQduxuR6hfKhH575RTnEJ7BRvccnJAt68xzY5MrwjIPdgvEax0g+83
         dTUv5uMwl6DcgLd/+mNaFJBwYVfPaiLfGY+dyZnoOhi2BBTW4b+bg97rWjRBFBq8VJ6/
         S3qjU2d/pMZcWqGCHw5E0yS3QoBNQM0RzaupdZJGovHKuti0yoSsf3/dS0xZmYTaO3Y0
         RvjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=j2rncxPDfJaPpuQwqHGVJi9q+yld7nhIqqPjjzXvlAA=;
        b=niqsYCexZyT8igB3ksvahpbkPvRsYVHeJBlIg+G5ylkVsAGbCJMndluD4nOAeHq5Qh
         WTOkR/4Bpvy5ZXMW+MpgETRrAOk5i+bqi4DGP9FS2I9gxSSmodYVpUB4eBnxaeXb4II9
         PQLzAx15eZ6aV9E6l84R5C0m7p6hwBO+7Dyww+EubFdiieOKw1ZNpT/tjIZELziBE/0/
         kJjXr02GmnY9ITjA4oE1f1IIrjij+zTh7RTWHCRWYhXUA1QbnChsN05avhUlYxPLLFWG
         MA1OY/6mFZrrd9pJcXtxxTPrz9WQMgrJQyIyUk9dF3yvJA2IwyIIF/S9BKHPNuewe7kp
         86bw==
X-Gm-Message-State: AJIora9885cN78tFNODW0vxR88ziLDGzXm07s4aJVNYybUEbcFWIHwa+
	3etEn6HZ36JO8idGMYPY+SVZYzbv2FA=
X-Google-Smtp-Source: AGRyM1vvsdHVyLPFKyjsqyqU9fnen8tkjJvCIEEEYnvOfof3J2aQnIhmRBcGX1tPRtkNxwShGY2+1w==
X-Received: by 2002:a5d:504f:0:b0:21b:a39f:7e6f with SMTP id h15-20020a5d504f000000b0021ba39f7e6fmr1366617wrt.129.1655910930953;
        Wed, 22 Jun 2022 08:15:30 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/3] xen/arm: shutdown: Fix MISRA C 2012 Rule 8.4 violation
Date: Wed, 22 Jun 2022 18:15:12 +0300
Message-Id: <20220622151514.545850-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Include header <xen/shutdown.h> so that the declarations of the functions
machine_halt() and machine_restart(), which have external linkage, are visible
before the function definitions.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/arm/shutdown.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 3dc6819d56..5550f50f61 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -2,6 +2,7 @@
 #include <xen/cpu.h>
 #include <xen/delay.h>
 #include <xen/lib.h>
+#include <xen/shutdown.h>
 #include <xen/smp.h>
 #include <asm/platform.h>
 #include <asm/psci.h>
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:15:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354108.581149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o424y-0007i1-PO; Wed, 22 Jun 2022 15:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354108.581149; Wed, 22 Jun 2022 15:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o424y-0007hs-M4; Wed, 22 Jun 2022 15:15:40 +0000
Received: by outflank-mailman (input) for mailman id 354108;
 Wed, 22 Jun 2022 15:15:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1Wy=W5=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o424x-0007hK-HZ
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:15:39 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c2f0946-f23e-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 17:15:38 +0200 (CEST)
Received: by mail-wm1-x32c.google.com with SMTP id a10so9427087wmj.5
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:15:38 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 p11-20020a05600c418b00b00397342e3830sm5069392wmh.0.2022.06.22.08.15.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 08:15:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c2f0946-f23e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=c+ahs+W5z01z1jHgynxKQadBuWdjH3bP2ezB1yrKqeg=;
        b=T6kQHr09Iy4hMQ9c/qninA2CbkRABuS+Dz4AGPrcZ6sJBG5WVlYEaw1UK5aPmIBLYy
         uLq4cN+eITTPco9HZqerchUMBcjDZL1xtBUF1NfkEgA1MWXKI2hx9IGRmCSUC25962iF
         uHFB2C6Y1T0h0LVNqjqAachojjUUoKi+uw6x7LpGzdaRNNM6DVPfptTzxghGOGCUtMVD
         wnj5/9zA34SDfvzsq/RAfUOJy3KqSMgmqlwx+Uk6RWMfx1/szJeAEg0QoFFcl//a2h0a
         tjWOLq3y+zj2fZ23bR3swivYDfPZW/ZqavB9kYzWz6lpavoObV5+gGUNzoFwieFmqOgc
         PvDQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=c+ahs+W5z01z1jHgynxKQadBuWdjH3bP2ezB1yrKqeg=;
        b=oscj6mSCERckzHEvf4JNmmiUjOQ8Lqq23fgZY5/xkdzvwulrY3gWUhUebRcfFtLo32
         IP2lK7SiMwS4Ltiu+AFjqhS5KGyZ6aSp8l5SwFy66JMgD4KrdIpDd1qvv3B5LR4LgJTT
         xV5x9ooraC+CzrDpy1x30uyNrQrX/VAOwJUzdQU+V4uF0Go6LUszN60Cke8RXQ7lF0Ba
         DyDoS4qEGECh09K6OwhnTfbYaNhSj++A6wQkSP7Wfw4+smqeE2n8LcdP+mKYb9Z5JdvH
         /qmmdzMOhHxKT14I1DApxIRvU05UJ4niINQRIALylwetpi4GfScedSt1R0fCamX1UxKA
         XjZw==
X-Gm-Message-State: AJIora8j9NE/VoYbJmC8QCYncxXGtubLmPZGNtV7JFPK/hTOUvbJi1Wo
	i/tGWgSwTbUt7hV94ky7mzXSzIP4AWE=
X-Google-Smtp-Source: AGRyM1vZsIGFhZMcy0B+6c/CWT9Ry2qKRhhtM10UJFKyMD2j+HGK67wQQk34kh8WegJs1FBgbej7sA==
X-Received: by 2002:a7b:cc8e:0:b0:39c:829d:609b with SMTP id p14-20020a7bcc8e000000b0039c829d609bmr4773932wma.160.1655910937608;
        Wed, 22 Jun 2022 08:15:37 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] xen/lib: list-sort: Fix MISRA C 2012 Rule 8.4 violation
Date: Wed, 22 Jun 2022 18:15:13 +0300
Message-Id: <20220622151514.545850-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220622151514.545850-1-burzalodowa@gmail.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Include header <xen/list_sort.h> so that the declaration of the function
list_sort(), which has external linkage, is visible before the function
definition.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/lib/list-sort.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/lib/list-sort.c b/xen/lib/list-sort.c
index f8d8bbf281..de1af2ef8b 100644
--- a/xen/lib/list-sort.c
+++ b/xen/lib/list-sort.c
@@ -16,6 +16,7 @@
  */
 
 #include <xen/list.h>
+#include <xen/list_sort.h>
 
 #define MAX_LIST_LENGTH_BITS 20
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:15:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:15:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354109.581160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4254-00081z-1u; Wed, 22 Jun 2022 15:15:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354109.581160; Wed, 22 Jun 2022 15:15:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4253-00081s-U4; Wed, 22 Jun 2022 15:15:45 +0000
Received: by outflank-mailman (input) for mailman id 354109;
 Wed, 22 Jun 2022 15:15:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1Wy=W5=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o4251-0007QM-QV
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:15:43 +0000
Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com
 [2a00:1450:4864:20::42f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ed96df5-f23e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 17:15:43 +0200 (CEST)
Received: by mail-wr1-x42f.google.com with SMTP id i10so20114966wrc.0
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:15:43 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 p11-20020a05600c418b00b00397342e3830sm5069392wmh.0.2022.06.22.08.15.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 08:15:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ed96df5-f23e-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=pe0g9vWBoGbc+/wjs3IU5tAcrtZcaGO81zzEoJh9BPY=;
        b=gGw2UQYNssdj+LHFT8xisyLYmdTartswWmM+e/orWJke4chV7yMYBPQ2yt6pvAypaM
         uINsXC/1TNlh58gzYVdoMwgOt5dBg/5k1hCW/ELCy0ROp6jMKar5HWBNgpXbq4dl/4dw
         njA/sZw76GR7NkXrZ3i2zaNWfeSBGXska3K9nEOXBIluXc0A0nAO4f3nbG4oayz+w/lh
         8X8lBh5tTkkFn0ESBF8k1F5YGDmNkd8LNxh7uADYzp3atbEyw8AtxSggAGm5xJgY7i86
         iRTTb79OXwR3cVtBotvITNX6S6O564AyxTG+pTgJYqX2QnRv1xTUYVIXIT6EatiCyihb
         450g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=pe0g9vWBoGbc+/wjs3IU5tAcrtZcaGO81zzEoJh9BPY=;
        b=r2749/l/I+lHgXEytv+vWK06CD4t+yIHIpOTlXdZp5aw19RK3hx2tiyV/HepoBKNfk
         s0x/Bf2D60pTPnyqqgueMlTeYCG0U53y7KpE7RPukDSgnLp/zPwaZzmjq9WW/v2Fdj8Y
         4Z9YXvZJz8c2AigmruLMKP1cPxPh/ITDmozRqFAY/TLubAkbhfXc7rDQBl5vK4dcW358
         c1i6aakERyx7FdXv9t2mm6hM124RMV0wOdYjPaoNe51N+IOltY9bDsppoKgy1cgt1d4F
         9tGQMQFusir3z9rVki3COOIlFlfKS2rfI0LUFdj92ZQtjHuCHhBmWq9BrAYicurUkO+P
         d1sw==
X-Gm-Message-State: AJIora+ojW3TT3T84vfVh9BhkZFi273LoAtN7P05uZfA4PAF1/jM4Pt2
	BchdoEM2LV74pRmpzlDBQ7k2PLViHUg=
X-Google-Smtp-Source: AGRyM1tcEnRk4inWSyUF/ep4dXgy96YZkhbfT5/xXOvK891+8HXFldu7hLcevIUbAXhFmUli2LAaNw==
X-Received: by 2002:a5d:59a6:0:b0:21b:a234:8314 with SMTP id p6-20020a5d59a6000000b0021ba2348314mr2916996wrr.316.1655910942507;
        Wed, 22 Jun 2022 08:15:42 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] xen/common: gunzip: Fix MISRA C 2012 Rule 8.4 violation
Date: Wed, 22 Jun 2022 18:15:14 +0300
Message-Id: <20220622151514.545850-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220622151514.545850-1-burzalodowa@gmail.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Include the header <xen/gunzip.h> so that the declarations of the functions
gzip_check() and perform_gunzip(), which have external linkage, are visible
before the function definitions.
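As a minimal, hypothetical sketch of what Rule 8.4 asks for (illustrative names, not the actual Xen code): a compatible declaration, normally supplied by the shared header, must be in scope before the external-linkage definition so the compiler can diagnose any prototype mismatch.

```c
#include <assert.h>

/* Hypothetical illustration of MISRA C 2012 Rule 8.4: a compatible
 * declaration must be visible before an external-linkage definition.
 * In a real tree the declaration would live in a shared header;
 * here it is inlined to keep the sketch self-contained. */
int gzip_magic_ok(unsigned char b0, unsigned char b1); /* "header" declaration */

/* Definition: because the declaration above is in scope, the
 * compiler verifies that the two prototypes agree. */
int gzip_magic_ok(unsigned char b0, unsigned char b1)
{
    return b0 == 0x1f && b1 == 0x8b; /* gzip magic bytes */
}
```

With the declaration visible, any later change to one prototype but not the other becomes a compile-time error rather than a silent linkage hazard.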

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/common/gunzip.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
index b9ecc17e44..aa16fec4bb 100644
--- a/xen/common/gunzip.c
+++ b/xen/common/gunzip.c
@@ -1,4 +1,5 @@
 #include <xen/errno.h>
+#include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:16:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354124.581170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o425T-0000UN-BU; Wed, 22 Jun 2022 15:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354124.581170; Wed, 22 Jun 2022 15:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o425T-0000UG-8b; Wed, 22 Jun 2022 15:16:11 +0000
Received: by outflank-mailman (input) for mailman id 354124;
 Wed, 22 Jun 2022 15:16:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1Wy=W5=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o425S-0007QM-A0
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:16:10 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3e20e090-f23e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 17:16:08 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id j24so3170909wrb.11
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:16:08 -0700 (PDT)
Received: from uni.. (adsl-190.37.6.169.tellas.gr. [37.6.169.190])
 by smtp.googlemail.com with ESMTPSA id
 r17-20020a05600c35d100b0039c8d181ac6sm28227360wmq.26.2022.06.22.08.16.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 08:16:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e20e090-f23e-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Glmdpk5hXbczqNv+ac/nkmkigm6yLxJMUpuRVh2TjsI=;
        b=oS26zn8SKFQT+EdvV/SiugpAYS4v+7adUzfaWx5rWDjqg1e6SHtKaIcg2D6N1iJaqT
         B/NJhKYEYRffgo0msSSv9zMe3Estxd2p3pd1+7HXv/YDLjVHLEduhG7Z8fsv5vJcQOCP
         kcZ3JmnNHq6zItj/QIQ62Ls+gU2oWOC2FD2gIgm1KfTzq1mT+CaRmd8xA94X4fBGIXHv
         vlKtUB//cHBu8Nd22WGVUSJ9iBgr1DuCHSDRBN2BXCxk31UHw5P1acKd3ooqQP6uHESC
         Qf44atBQysoreFHzm09j6siYaN4St2jWG9bx/w8Fei6K8J0sjmeFH5DpeygaDHmuemU7
         AruA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Glmdpk5hXbczqNv+ac/nkmkigm6yLxJMUpuRVh2TjsI=;
        b=JhDeFkZ9fNyibDxzUYzRjUxMb/O6B03SVj+xWuXm4GYCyC6CiYfwry/39azYS1p0JR
         RZ0/LzLExK9ndReqDiqSTcaoKO0loNr/R+8E0X1OHOoF4datUtUHtD7vdh1yO/DQY248
         Uy79IDs2iaD8nvlZwQWwtB4hBxxEIkY4NROOAjJMcDUNeAne3N5dE/c0Nkb0Yq/u3/da
         RIR9+RszM6vJfoz7JOWrI1jy7EnwXiWlq17OdASAtF8uOe7S2lb3SJBU4iOpjmG2/gKu
         hh/5vbacCzswOlZoOgx3Hv07S0VtUIwK0a8pQuoQ1Bz+uFbK4wDrZrCY5gGtw9uy1zUO
         3OWw==
X-Gm-Message-State: AJIora8Og4iXrZVL3auikJNT13VBviJyU7BZd0Qa5fEaPP+eMej3OwXa
	8ws0NB3aQYgntdEUp87XuZ17e+KHWmw=
X-Google-Smtp-Source: AGRyM1v6gsmmUBYmBcgDGD86yHRTJRC88HCtNFamITn6v2arofRI4+Z95XLvLyRko8Ail9P7TlAI8g==
X-Received: by 2002:a5d:47a1:0:b0:218:423a:de8f with SMTP id 1-20020a5d47a1000000b00218423ade8fmr4008747wrb.420.1655910968031;
        Wed, 22 Jun 2022 08:16:08 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH] xen/common: device_tree: Fix MISRA C 2012 Rule 8.7 violation
Date: Wed, 22 Jun 2022 18:15:57 +0300
Message-Id: <20220622151557.545880-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function __dt_n_size_cells() is referenced only in device_tree.c.
Change the linkage of the function from external to internal by adding
the storage-class specifier static to the function definition.

This patch indirectly resolves a MISRA C 2012 Rule 8.4 violation warning.
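For illustration, a hedged sketch of the Rule 8.7 pattern (hypothetical names, not the Xen implementation): a helper referenced only within one translation unit gets internal linkage via `static`, so it no longer needs an external-linkage declaration at all.

```c
#include <assert.h>

/* Hypothetical sketch of the Rule 8.7 fix: a helper referenced only
 * within this translation unit is given internal linkage by adding
 * 'static', removing it from the global namespace. */
static int size_cells_default(void) /* internal linkage: file-local */
{
    return 1; /* stand-in for a DT_ROOT_NODE_*_CELLS_DEFAULT value */
}

/* Public entry point (external linkage) that uses the helper. */
int size_cells(int has_property, int property_value)
{
    return has_property ? property_value : size_cells_default();
}
```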

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/common/device_tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 0e8798bd24..6c9712ab7b 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -496,7 +496,7 @@ static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent)
     return DT_ROOT_NODE_ADDR_CELLS_DEFAULT;
 }
 
-int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
+static int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
 {
     const __be32 *ip;
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:18:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354137.581182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o427k-0001Mr-PU; Wed, 22 Jun 2022 15:18:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354137.581182; Wed, 22 Jun 2022 15:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o427k-0001Mk-MZ; Wed, 22 Jun 2022 15:18:32 +0000
Received: by outflank-mailman (input) for mailman id 354137;
 Wed, 22 Jun 2022 15:18:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sIgH=W5=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o427j-0001Mc-3b
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:18:31 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 926390d1-f23e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 17:18:30 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id f39so12710138lfv.3
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 08:18:30 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 g4-20020a056512118400b0047f78ad78b7sm924572lfr.219.2022.06.22.08.18.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 22 Jun 2022 08:18:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 926390d1-f23e-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=voDIPs5YJmfo9tUKj6tnw6GFbS+YdFDAG7O9BzN4gXY=;
        b=fqFDZlq0BDYXOWaOyOFapKJILq2Vf1WwkB4Fb03CkE6T9zWG3WBomecBldgCeZ6HI1
         zNnwcQEpXMo4ubP1U6S5habjLO/6siOT3JWFcjcjLzVK7Ejjt1DdwZxFBP9Ldl2rWKrs
         MU9IAbcqPU7aXR09xDaFFCykT1Ip8Hyw7XA2CYvzgdGJnwNvJIYh7nI3qidBLiYoYNYQ
         wAJmQDwD6eo7yXnjqfr6fIRypyDAmnjgXyv4q+26P2A5eMZ1n5I4pY8yjFNFYkDH6zkz
         2fIct55dW8X/TK3N9QP+2aKG+tesiSNkrw5jVPnqgquKXT7UirFZh4HsO7i8K4T+4zUJ
         A64A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=voDIPs5YJmfo9tUKj6tnw6GFbS+YdFDAG7O9BzN4gXY=;
        b=xzsLuWvZx7hC+Yoq4XUStQ93qkeerCm8YcV8/HQuEqKV1WMJqY4zdHM9t0EcBmnk69
         1ILOC4i7x1Q+p4gzEnCdGNDFyvg8GgknquFEY9nGj2wGWcJfaPAHJN+bqHKOFduMuqpa
         aBeEaZRO9NoOQChX+U9LdUk29wvceOhkWhZMnHGVuKPOy7HquFoBu270HKQwQBelW7D6
         GnjmR8pTDuNUd575k5miDHI+FqKC8UJmuJLR7XIEuFs3CUvYQLtix5y1uMq+ze3ORPgX
         PRzQMjuSPhNda+dndQVJOo965yZJcjmKaAFTEmcAVUwiXg8FtCqUOIy5u3xkDG+Clr3B
         fibg==
X-Gm-Message-State: AJIora+VIkmp9PErPFReg8jwXe0HepZ815uRXwGBPW8me7szgjJjBl3o
	1mxuGkndZ6bifX9h6Z5nSbk=
X-Google-Smtp-Source: AGRyM1thyu2aqNX+9ul7EirdEpEty55FnPz/79PlYHYAIOgtoOYzPKH3v6m/DRInPDpfDYJrzQilCw==
X-Received: by 2002:a05:6512:b8d:b0:47f:74f0:729b with SMTP id b13-20020a0565120b8d00b0047f74f0729bmr2421974lfv.403.1655911109435;
        Wed, 22 Jun 2022 08:18:29 -0700 (PDT)
Subject: Re: [PATCH v3 3/3] xen: don't require virtio with grants for non-PV
 guests
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Russell King <linux@armlinux.org.uk>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 linux-arm-kernel@lists.infradead.org, Viresh Kumar <viresh.kumar@linaro.org>
References: <20220622063838.8854-1-jgross@suse.com>
 <20220622063838.8854-4-jgross@suse.com>
 <a8ce8ad3-aa3b-ea87-34cf-6532a272e9d8@gmail.com>
 <0f047970-d9ea-d2fd-3208-db843305e11c@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <c9b6f236-73cd-f677-98a8-26117fb17dee@gmail.com>
Date: Wed, 22 Jun 2022 18:18:26 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0f047970-d9ea-d2fd-3208-db843305e11c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 22.06.22 17:35, Juergen Gross wrote:


Hello Juergen

> On 22.06.22 11:03, Oleksandr wrote:
>>
>> On 22.06.22 09:38, Juergen Gross wrote:
>>
>> Hello Juergen
>>
>>> Commit fa1f57421e0b ("xen/virtio: Enable restricted memory access using
>>> Xen grant mappings") introduced a new requirement for using virtio
>>> devices: the backend now needs to support the VIRTIO_F_ACCESS_PLATFORM
>>> feature.
>>>
>>> This is an undue requirement for non-PV guests, as those can be operated
>>> with existing backends without any problem, as long as those backends
>>> are running in dom0.
>>>
>>> By default, allow virtio devices without grant support for non-PV
>>> guests.
>>>
>>> On Arm require VIRTIO_F_ACCESS_PLATFORM for devices having been listed
>>> in the device tree to use grants.
>>>
>>> Add a new config item to always force use of grants for virtio.
>>>
>>> Fixes: fa1f57421e0b ("xen/virtio: Enable restricted memory access using Xen grant mappings")
>>> Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - remove command line parameter (Christoph Hellwig)
>>> V3:
>>> - rebase to callback method
>>
>>
>> Patch looks good, just one NIT ...
>>
>>
>>> ---
>>>   arch/arm/xen/enlighten.c     |  4 +++-
>>>   arch/x86/xen/enlighten_hvm.c |  4 +++-
>>>   arch/x86/xen/enlighten_pv.c  |  5 ++++-
>>>   drivers/xen/Kconfig          |  9 +++++++++
>>>   drivers/xen/grant-dma-ops.c  | 10 ++++++++++
>>>   include/xen/xen-ops.h        |  6 ++++++
>>>   include/xen/xen.h            |  8 --------
>>>   7 files changed, 35 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>>> index 1f9c3ba32833..93c8ccbf2982 100644
>>> --- a/arch/arm/xen/enlighten.c
>>> +++ b/arch/arm/xen/enlighten.c
>>> @@ -34,6 +34,7 @@
>>>   #include <linux/timekeeping.h>
>>>   #include <linux/timekeeper_internal.h>
>>>   #include <linux/acpi.h>
>>> +#include <linux/virtio_anchor.h>
>>>   #include <linux/mm.h>
>>> @@ -443,7 +444,8 @@ static int __init xen_guest_init(void)
>>>       if (!xen_domain())
>>>           return 0;
>>> -    xen_set_restricted_virtio_memory_access();
>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO))
>>> +        virtio_set_mem_acc_cb(xen_virtio_mem_acc);
>>>       if (!acpi_disabled)
>>>           xen_acpi_guest_init();
>>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>>> index 8b71b1dd7639..28762f800596 100644
>>> --- a/arch/x86/xen/enlighten_hvm.c
>>> +++ b/arch/x86/xen/enlighten_hvm.c
>>> @@ -4,6 +4,7 @@
>>>   #include <linux/cpu.h>
>>>   #include <linux/kexec.h>
>>>   #include <linux/memblock.h>
>>> +#include <linux/virtio_anchor.h>
>>>   #include <xen/features.h>
>>>   #include <xen/events.h>
>>> @@ -195,7 +196,8 @@ static void __init xen_hvm_guest_init(void)
>>>       if (xen_pv_domain())
>>>           return;
>>> -    xen_set_restricted_virtio_memory_access();
>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
>>> +        virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>>>       init_hvm_pv_info();
>>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>>> index e3297b15701c..5aaae8a77f55 100644
>>> --- a/arch/x86/xen/enlighten_pv.c
>>> +++ b/arch/x86/xen/enlighten_pv.c
>>> @@ -31,6 +31,7 @@
>>>   #include <linux/gfp.h>
>>>   #include <linux/edd.h>
>>>   #include <linux/reboot.h>
>>> +#include <linux/virtio_anchor.h>
>>>   #include <xen/xen.h>
>>>   #include <xen/events.h>
>>> @@ -109,7 +110,9 @@ static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>>>   static void __init xen_pv_init_platform(void)
>>>   {
>>> -    xen_set_restricted_virtio_memory_access();
>>> +    /* PV guests can't operate virtio devices without grants. */
>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO))
>>> +        virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
>>>       populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>> index bfd5f4f706bc..a65bd92121a5 100644
>>> --- a/drivers/xen/Kconfig
>>> +++ b/drivers/xen/Kconfig
>>> @@ -355,4 +355,13 @@ config XEN_VIRTIO
>>>         If in doubt, say n.
>>> +config XEN_VIRTIO_FORCE_GRANT
>>> +    bool "Require Xen virtio support to use grants"
>>> +    depends on XEN_VIRTIO
>>> +    help
>>> +      Require virtio for Xen guests to use grant mappings.
>>> +      This will avoid the need to give the backend the right to map all
>>> +      of the guest memory. This will need support on the backend side
>>> +      (e.g. qemu or kernel, depending on the virtio device types used).
>>> +
>>>   endmenu
>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>> index fc0142484001..8973fc1e9ccc 100644
>>> --- a/drivers/xen/grant-dma-ops.c
>>> +++ b/drivers/xen/grant-dma-ops.c
>>> @@ -12,6 +12,8 @@
>>>   #include <linux/of.h>
>>>   #include <linux/pfn.h>
>>>   #include <linux/xarray.h>
>>> +#include <linux/virtio_anchor.h>
>>> +#include <linux/virtio.h>
>>>   #include <xen/xen.h>
>>>   #include <xen/xen-ops.h>
>>>   #include <xen/grant_table.h>
>>> @@ -287,6 +289,14 @@ bool xen_is_grant_dma_device(struct device *dev)
>>>       return has_iommu;
>>>   }
>>> +bool xen_virtio_mem_acc(struct virtio_device *dev)
>>> +{
>>> +    if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT))
>>> +        return true;
>>> +
>>> +    return xen_is_grant_dma_device(dev->dev.parent);
>>> +}
>>
>>
>>     ... I am thinking it would be better to move this to
>> xen/xen-ops.h, as grant-dma-ops.c is generic (not only for virtio,
>> although virtio is the first use case)
>
> I dislike using a function marked as inline in a function vector.
>
> We could add another module "xen-virtio" for this purpose, but this seems
> to be overkill.
>
> I think we should just leave it here and move it later in case more real
> virtio-dependent stuff is added.

I am happy with that explanation.

Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
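
The "callback method" the patch rebases onto can be sketched roughly as below — illustrative names only, not the actual virtio_anchor API; the idea is that platform init registers a policy function instead of every guest type unconditionally forcing restricted access.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch of a memory-access policy callback (names are
 * hypothetical, not the kernel's virtio_anchor interface). */
typedef bool (*mem_acc_cb_t)(void);

static bool mem_acc_default(void)    { return false; } /* non-PV default  */
static bool mem_acc_restricted(void) { return true;  } /* PV / forced grants */

static mem_acc_cb_t mem_acc_cb = mem_acc_default;

/* Called once by platform init code to install its policy. */
void set_mem_acc_cb(mem_acc_cb_t cb) { mem_acc_cb = cb; }

/* Queried by the virtio core when a device is set up. */
bool virtio_requires_restricted_access(void) { return mem_acc_cb(); }
```

This mirrors why an inline function is awkward here: the registered policy must be an addressable function with a stable definition, not an inline helper expanded at each call site.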


>
>
>
> Juergen

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:20:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354149.581192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o429F-0002N4-4J; Wed, 22 Jun 2022 15:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354149.581192; Wed, 22 Jun 2022 15:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o429F-0002MU-1L; Wed, 22 Jun 2022 15:20:05 +0000
Received: by outflank-mailman (input) for mailman id 354149;
 Wed, 22 Jun 2022 15:20:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o429D-00029e-R9
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:20:03 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140057.outbound.protection.outlook.com [40.107.14.57])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c9386124-f23e-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 17:20:02 +0200 (CEST)
Received: from AM0PR01CA0149.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::18) by AM4PR0802MB2369.eurprd08.prod.outlook.com
 (2603:10a6:200:65::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Wed, 22 Jun
 2022 15:20:00 +0000
Received: from VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:aa:cafe::5d) by AM0PR01CA0149.outlook.office365.com
 (2603:10a6:208:aa::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14 via Frontend
 Transport; Wed, 22 Jun 2022 15:20:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT018.mail.protection.outlook.com (10.152.18.135) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Wed, 22 Jun 2022 15:19:59 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Wed, 22 Jun 2022 15:19:58 +0000
Received: from 0936c3ae7147.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E984FE09-717E-44E0-8D26-1B36E1B88F42.1; 
 Wed, 22 Jun 2022 15:19:51 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0936c3ae7147.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 15:19:51 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by HE1PR08MB2777.eurprd08.prod.outlook.com (2603:10a6:7:2e::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5332.13; Wed, 22 Jun
 2022 15:19:40 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 15:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9386124-f23e-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=ePAngA9A79tbH/6FJZ8Y/M++mAYNyItu0RlrWBdPyeRogYMiet08k7ycOlRCUju/wAy0i8ivRMfjqW0SKMxaoWlThvmLtlc7BzfIMdDdTiNtgBU7xa6d1ozZLtVU92ygR7jGiwr5oRvlnI69TG88qQ3hc3Fh1LDrjzUm7axcxr8P8StMyvicJ3pCxsLmWeHA7Us5pzD4aBgSzfNdGs0zgk2RL2t4gZ06PplX4d+ibIKDMk5sz9ktPoiGEhTGTISpKAyBGwD25PLMuPi/XN6Z3oXzc+cv8xgVB/4G0TVnhDnY66ULae9EplApGOEbLIzTw4d9eIy9oWXvXct9UOBwJw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/+Rx8v74VzjBt6a8QSS2J8yQqn3HjTq3wmVdtxONqGI=;
 b=Wr528tyddUGup3UoLcshtyiUnT9FO52RSAp7YM6I0nNjjzERwa/shF4anDnnFgFbyQ18uUuom/2+Xv08UgXVTksi8uTA24Vx0MGHnPxnXScq7x68PAYJ7dnNUbBT3SxPUjQIeS0KOpvlUdLnCfqnGT+50x8xgaUECX+4OLTGk12PEqX8ml3fNnCYOefV/1bcqinaG/RKvWdh9K6xO+kidCbCNwdHw63XmVWL6mZaxZbiXWhAfbCNw7ahxE6RbaXc4aFTpYJ6IecjJCmScuCzz1mldRDKMURm2Yqa32JdeG9U5OuhGPyDuRCjqTYr4HICtTiZqNJ7eTmUy/Ivi/F4gw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/+Rx8v74VzjBt6a8QSS2J8yQqn3HjTq3wmVdtxONqGI=;
 b=KgDd2B6qq0bDHsygif+3/RupqCXK0d2Z3uUK8X7JNlkFfqtpZG9Bvc8u4ViRvTcgxG9Kuq9BYyGLNhWJX2I660Rsi4gp50D3Rd3Lq83u3o5aTc/+3gd+k+19LlW6Xo63Gtj3FgJgOeTW2w7azDYLWO2/jo4y6Udnsk46KWeD/hY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 403cbbded84e815d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cXRkn2E66Ude0nvShXMUTVUiVHt3oDy1h6MW+zzWnRBp/TsE19X7Qe0LmdMN/0OqSozClvBdC7ru5pb4BIx2M1xsH6vhx/u54lwcedeNKPM55gJehPxlYI47/PXHYXiAW2FuusAlSTsmA4O3Ug1az3/vUijB0N8NsLavW/m4UoYj4l0gWkc0cT/B5tYs69QZKabtyoI0TlSktbZJPt5SiFObVJ+FwH55ProNbZPNuhINjL6PXENA0iRKx/bUXLUwYXkejyP5FKhhIY44KToKf2nFbnPzouAx8KS1rKWEcGQKrB0s5J1Q2WRDIIeDNKevuh80F9VJ3Xo/tUJtQScYGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/+Rx8v74VzjBt6a8QSS2J8yQqn3HjTq3wmVdtxONqGI=;
 b=QC3D+q6+w4KH+ZS9JK27Osc2HLrhFxZqHqSYLj+Ykk7IQVGBM6z4sfKDxGpfB5GsLD2MP5Cx9OQvYmqR5zynqhzidNODMamF8n+6dRNm/4OCmzKbD75055JlgRQwPG+hTRd2VTnWPzN+Zvbfr1gKYGc4x7IvZ/6p9IR2vwhBK8gmoDAf0WPvo/90VOccwa7SW/+XHGg1PTmfAetYQoZVDwnWYXhPf4QyrITaKNXS42Fwc5Md4n2JzXn+UWE7qjei7LJ+MvadPuE+pRwUTOA+ZkI1fdxkj2h+WkuQXxwXpLH1n5NPPmPj7YDwCInajRr9I4jNnn1xBdE5lCOadwqG3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/+Rx8v74VzjBt6a8QSS2J8yQqn3HjTq3wmVdtxONqGI=;
 b=KgDd2B6qq0bDHsygif+3/RupqCXK0d2Z3uUK8X7JNlkFfqtpZG9Bvc8u4ViRvTcgxG9Kuq9BYyGLNhWJX2I660Rsi4gp50D3Rd3Lq83u3o5aTc/+3gd+k+19LlW6Xo63Gtj3FgJgOeTW2w7azDYLWO2/jo4y6Udnsk46KWeD/hY=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 1/3] xen/arm: shutdown: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Topic: [PATCH 1/3] xen/arm: shutdown: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Index: AQHYhkrv8P1hRqUvVU6u7dH9Ml6t461bimSA
Date: Wed, 22 Jun 2022 15:19:40 +0000
Message-ID: <50F8F42B-F82B-4F9C-87B4-6090A5BB2B57@arm.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
In-Reply-To: <20220622151514.545850-1-burzalodowa@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: c50cf1e6-8daf-4bb0-70e1-08da5462abdf
x-ms-traffictypediagnostic:
	HE1PR08MB2777:EE_|VE1EUR03FT018:EE_|AM4PR0802MB2369:EE_
X-Microsoft-Antispam-PRVS:
	<AM4PR0802MB2369DE00016D1BAFC760DA3D9DB29@AM4PR0802MB2369.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 eeogQvMYQ0j5vlxBx0NLQ22Yyyej1xuvZEb7N/6czO1AmArGspWkbrL+IDWPKdHY6caqwPuRH0M+8ns86vDRGM8qph0BOZy8nxN6a9Q80Zo/qHINvuieekW0NSwwZy1ob6fF3h0NyFJUWABC+nQ/P/g6CCVS90SF6onSC+5lhOQd3DekkEQpqsSX5/MEJYiLqAD2XtAW1kqUrytdoBUK7HQK+OPzh7qBIitWzz1ZrlW8e43OnZrTkIjB8EP3zaSrczNTnBG9dXHjMFkgbofxX4Q4sGX1kUYVT5Uxupl40R6zo68RyJVd2QMD+dS7ly2Q39rqYDFoKqIejyWAYLw7cabS6ZEGtXLAxCXB7fZ5099W60/Ceiz9YogJFjCRpVKB0Fge3qVqkMx1CowREi5StIE01VE73mfsN0pypQfz2ROc58z59ZzplC/Dyh2bahNwrBRcbRYXAKgyHyQx40RiQdn37o19VkJCQnyO/sve9eQsFaK7HSf7PZO5A9EmEdvPKIxjDJKlrB+nJCFw3+oK0N+irXadot80ItMTx50emZ+cgUpOjwUxBx4Uan2OkKjg3B1UCCHbnHECBED++Yx9EFrK6J6k/kmffM9plvQLQpj2ehWgxSSy9QnEbWKPjVgi1/APUghw8akj9/PkdrB26rD1cUnYSdEFXbNSmbCjFmy3TwGOicW3CAptDxXbE2Yeeu56knXslduG95oi++LS7uDmY1jXgYKuy0AqbZdc2mg0UPh4h3DyjE0kz/YD9E7bfm5/pSKNpb9M6NFR7spMj3RA3jWLYhdRE2DeWOrssSw=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(4744005)(4326008)(66446008)(76116006)(8676002)(5660300002)(66556008)(64756008)(66476007)(6486002)(8936002)(2906002)(66946007)(478600001)(91956017)(71200400001)(316002)(54906003)(6916009)(38100700002)(6512007)(26005)(2616005)(41300700001)(36756003)(33656002)(53546011)(186003)(38070700005)(86362001)(122000001)(6506007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5D1894C01791F04791D9F413697DC797@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR08MB2777
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ee1cec48-0fda-4fa7-8e20-08da5462a0a0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6hX9SLbx1w2Cl5Un1yZNzAx1k6UHqv+G9vYgd5DMQW7yU73wigPXjeT9YWlqkkJLQ8crEBXoPyWQAtCQy+CLkShLiokWtmIVUqG3XzpndffUFSol3GV/+W8bzy7BI/EUJlHczowkSOnJwupMuFkf4f/0blQPJlAe21OU2G0w5ABfCrA5+Al5arws9gCm/b17TyoD8GfW0gzqBPse04Txcuj/PXajtZabXmRuWWpVRUwPF6Rx6jrPqaKkRb4ctsLJVY597s4Lnx9RVe5tAWUgIXUYi1FRwxzyc9Uk2Pg2PRBLg6b0Fx3zgiBTHujZA5btTl7ztWL6MkRCFU3RHyOBFqs3Y3Wxo+ZICBE5jkQKry7cq47b5lfRUVB7UkioERm/Bu3KwErvZ3XFLtmmzrH/3EMIrsLGSZ0cJoQ40xLE27Is9D2mSZ7QUQJZb2y7LcDBuajO8frX56FUbLVxGMqNAzuPLf4ajbq3h/JeZBj4HtJeQzGNjcYckyHulHxz0wwMm7yQjl6XAJ52E4VgniLb0aHkFTQ4haW9QIklkGuwWL8WIUBoT4LI7JISfSpjHa0kZ5YeWortrtW02qlDh5k5Fp38PPbI3+rKOyuJt3RKZxidKZqHDW1Sz2UepVIjyr+y4HKw6RTxCG1h1sItqKvS0ZGyuTdJtgnh7+iKuWyvvv1wJ0k155JFcuaN99ruVVCA8n4cMxfjwAJIg/wvvhdXqtQbAiFKIiz3URv3i60OpRTAikh/I0U5rSndO/DU7ZYJhontDB7/OGIyzEGryKXL1w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(396003)(346002)(376002)(39860400002)(40470700004)(36840700001)(46966006)(4326008)(478600001)(70206006)(316002)(82740400003)(6486002)(81166007)(356005)(36860700001)(70586007)(336012)(186003)(83380400001)(54906003)(6512007)(107886003)(40460700003)(82310400005)(6862004)(26005)(33656002)(53546011)(36756003)(8936002)(40480700001)(5660300002)(8676002)(47076005)(41300700001)(86362001)(4744005)(2616005)(6506007)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 15:19:59.2172
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c50cf1e6-8daf-4bb0-70e1-08da5462abdf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2369

Hi Xenia,

> On 22 Jun 2022, at 16:15, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>
> Include header <xen/shutdown.h> so that the declarations of the functions
> machine_halt() and machine_restart(), which have external linkage, are visible
> before the function definitions.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/shutdown.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
> index 3dc6819d56..5550f50f61 100644
> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -2,6 +2,7 @@
> #include <xen/cpu.h>
> #include <xen/delay.h>
> #include <xen/lib.h>
> +#include <xen/shutdown.h>
> #include <xen/smp.h>
> #include <asm/platform.h>
> #include <asm/psci.h>
> --
> 2.34.1
>
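[Editorial note: all three patches in this series follow one pattern. MISRA C:2012 Rule 8.4 requires that a compatible declaration be visible when a function with external linkage is defined, and including the subsystem's own header supplies that declaration. The sketch below is illustrative only, not the actual Xen code: the function name `example_machine_state` is hypothetical, and the header's contents are inlined so the example stands alone.]

```c
/* Sketch of the MISRA C:2012 Rule 8.4 fix pattern (hypothetical names).
 * In the real patch, the declaration below would come from including the
 * subsystem header, e.g. shutdown.c gains "#include <xen/shutdown.h>". */

/* What the header provides -- the external declaration: */
int example_machine_state(void);

/* The definition: a compatible declaration is now in scope before it,
 * so a MISRA checker no longer reports a Rule 8.4 violation. */
int example_machine_state(void)
{
    return 1; /* stand-in for real halt/restart logic */
}
```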



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:20:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:20:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354157.581204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o429z-0003KS-L6; Wed, 22 Jun 2022 15:20:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354157.581204; Wed, 22 Jun 2022 15:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o429z-0003KL-IF; Wed, 22 Jun 2022 15:20:51 +0000
Received: by outflank-mailman (input) for mailman id 354157;
 Wed, 22 Jun 2022 15:20:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o429z-0003FD-3p
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:20:51 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-eopbgr00044.outbound.protection.outlook.com [40.107.0.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5bc8071-f23e-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 17:20:50 +0200 (CEST)
Received: from AS9PR06CA0367.eurprd06.prod.outlook.com (2603:10a6:20b:460::10)
 by AM4PR08MB2849.eurprd08.prod.outlook.com (2603:10a6:205:5::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Wed, 22 Jun
 2022 15:20:47 +0000
Received: from VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:460:cafe::5e) by AS9PR06CA0367.outlook.office365.com
 (2603:10a6:20b:460::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Wed, 22 Jun 2022 15:20:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT010.mail.protection.outlook.com (10.152.18.113) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Wed, 22 Jun 2022 15:20:47 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Wed, 22 Jun 2022 15:20:46 +0000
Received: from f7333ba6cae3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8E62760A-8540-4E63-8B40-F0712CCBA213.1; 
 Wed, 22 Jun 2022 15:20:40 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f7333ba6cae3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 15:20:40 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAXPR08MB6813.eurprd08.prod.outlook.com (2603:10a6:102:15f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Wed, 22 Jun
 2022 15:20:38 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 15:20:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5bc8071-f23e-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=T1M0S/a5EkQ6O3NlMhHxRuUipc80cNfhk+6YopXS/Y4B8N0Jf47Y+XehCW0iZz2/vaaLtDtrH7z+FHcgaB8Y7IW4d9k05NeE8lkO5qY9jlMxCNVenF6fPXxcyk7s58gg04hVIpA/rxUd7Aoo17DIF9HUazxs3UAPA/KVc8EHA9x7vpUKUACgzydpJ/DRCAU/cAgQMUZwDAZtQxc/jrCtHvaatIBsq4nfDpjs7camHWLg489wWvDvyqae5ZUl0s6nBdMGzmHlSLLVVWYtPVq6ytHY8YEWESBRYFhDP/SxIKt0gshIiOzKDp/k87r9y/qnKhnkQ731uWfiuEyyM2L1Yg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5vFoZcwbqI+NsiNNqicTnfXAr7FgcW/4VWjVFJkbny4=;
 b=No+CCVQMhBlDFh/b2bAZyUWuyvgin31pF3Qdq5+cv6GOnoO55owGWR7eNKQgeE5WKwF6viJwdKchevfCpDDZg9bHb41i0Jwc9RUQZ66HZHgWqm0wBeQs3D/ARZx/y8z3HM7AfoWvPwPSI1kuy/TW/BzuPyQjl1qXdayGJCS9HzU6B3N0/GX5oAVe3J/gwAi0qCHD7wwU7JOQIlAMzHuUjynFocDSwtMrHmL36GRpXos7FXMBEe23sdPS4nd6MxVCxV/lXJeeLJjy/bDj4dcNtnfEtOhRpCWv5qtXiQyPxWCCDI5+hAX31sPyWeHchqre2C7nw7BHak6uTY9FfISxxw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5vFoZcwbqI+NsiNNqicTnfXAr7FgcW/4VWjVFJkbny4=;
 b=0Avhmf1Z8eVggOCwqiRleTQaAtuR1cMCiyVeSE+UIhqsxF9RU9YL1X8IzTAfOMd/y5fe1W01qyIeUvGic19F8qzssbDa4Poie9cqm+2St7cX+3++mwkQq1/kn/yFUcZLFGCH/m4gab3fzkR5eae8Dde5lWFkMkiX7Svt42E5DFc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 34f99f6d10c7afad
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U/wdRF+2A2bQloyCfDJpYY7NDBZSNhesKnQa75rmMTmlEVIkPNYrAJRXSrHiSdt8EET0jLXw4OIS9pMXqjKjvpXjeNI+CGMHATxa3Ie6ikS40g1T501As1lZDVNkkQmEXn8nP8VpLKNvqxR+EQ7It9T9wLqfjFYqa+L5mX7pYhlptawZEohV9ybBmtSnQb3mxHOPLbXoe8k8DgEJEAuzrb/WslFEkEXTnW8Qdgqf+NGg/X0cvNJ0ex8USYQixroapYxBSAToM0QjyS1xwMXRjCfAaKesOoYuM9Agm4Yc3JdqTC6p4EqtlYem74nx0wuNugRc03qHektTfOwn40PV2w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5vFoZcwbqI+NsiNNqicTnfXAr7FgcW/4VWjVFJkbny4=;
 b=M85nIfgdwaTZAhE4veDdRSWLZSzXDYnFk24y4ZDlV/ut/rR7LgJiaOCQGOQsTXdVbQJMDplhargc9cOhGwvZIHLQWqs2kgrR+TnxL0b3/9yrhPa8fVd9SKyj84Mvi3YEQ6v8Q2znAYhk1/CdZ/J4dpTZPFASCQCNd2TdLAHmQNWYBv1sAWBfHBnCi0aKlsAAufboXg9XVfN0IbuEQso0DS+m1H599uWlwfizNnmvPKTr54GU4szu/yUFiXnEws9k5UwoeWOWD3RVMvt958raXRuoz8FJS2lOtWINob1+NXAXu4680DhV8O8PKpfQEjH9lbVvt59UpJv/gMxSXVpUjQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5vFoZcwbqI+NsiNNqicTnfXAr7FgcW/4VWjVFJkbny4=;
 b=0Avhmf1Z8eVggOCwqiRleTQaAtuR1cMCiyVeSE+UIhqsxF9RU9YL1X8IzTAfOMd/y5fe1W01qyIeUvGic19F8qzssbDa4Poie9cqm+2St7cX+3++mwkQq1/kn/yFUcZLFGCH/m4gab3fzkR5eae8Dde5lWFkMkiX7Svt42E5DFc=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] xen/lib: list-sort: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Topic: [PATCH 2/3] xen/lib: list-sort: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Index: AQHYhksCTJ95E1G/9EC0Af/pOiohlq1biqoA
Date: Wed, 22 Jun 2022 15:20:38 +0000
Message-ID: <A8523E50-348E-45E9-B2D9-9C1A7C532E5B@arm.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
 <20220622151514.545850-2-burzalodowa@gmail.com>
In-Reply-To: <20220622151514.545850-2-burzalodowa@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 616a9e32-fbe0-40ea-db8c-08da5462c861
x-ms-traffictypediagnostic:
	PAXPR08MB6813:EE_|VE1EUR03FT010:EE_|AM4PR08MB2849:EE_
X-Microsoft-Antispam-PRVS:
	<AM4PR08MB2849DCCCA55B72725DDCAD6D9DB29@AM4PR08MB2849.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 y3tJh5GS9145yaTu5MWi1x2MMKm0S1fRpdGmr5UUuftLfGtiyH5OtsWs9DBhDwKxQnKIMhw+YPBKngz2NaQV2wdMNwpsoqS9SIyi7B7XcuykNrHHM07Kp2NS16KJvdJkaGVbn66IaeuX5cRLmxdN8iza5RLzGQJ0fMlDLtTsW397QV55N/oMx3r9sw0uZuh4AK42baDCeoPoqkbrcteb/0XkrTfIWzc6PE1G1fTJYgIgc5ItLDX/xbE5u3ZXULJoMDQUy+YTc26kSfGJXnhMtEL7z3MHI2qS0843MymuoLLwXgKfH64VWPzpszYLt7CemyztLHAmZUtbT2rJAX64bYwyuSWAgfRk27cVpj+2XOB6yV9yWnCN+4ZlxEKP7jboU4ZdVDvlGRylO3z+y3q5xVnWCa2jUq/uOjdRCpNM/uqN7rimTiwjfcXHPzjfIPaKPr2uTID43kX8XADAdC/l/m7lfvYWxvAuHrZgBsfLcmDqyammSQ5rKiL7TIhv/xgohHXuMBSaGoD9Wrd3lTEFbHvar9nUh4VOKa/tFkDeeLYy93v5W1wd8GT6Qh8mn+LpP/s+pjqVEhSzaf1mE9qZJ46yDSLYiQEu78mM0MpAwT0Mxj9vi5/TjkrcAtrefFOSmYTlnrlOHFP6klT0u2ztXVkb9tfEY8AJorZ4sJE1jzpcXFewfcM2+VE9DJzonxQpjJRJVy1yxug8CF3pcneKJCtMRz2CBcYwQQCz5f2gEbSrSS0j4G2neTpqam2GTGz1SxlHVR79Mdt9KmdO9pwlijGoEL/+mu4RAG7PcG32VM4=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(346002)(376002)(396003)(366004)(136003)(91956017)(4326008)(66476007)(66446008)(36756003)(38070700005)(66946007)(76116006)(66556008)(186003)(8676002)(71200400001)(64756008)(478600001)(8936002)(6486002)(316002)(6512007)(38100700002)(54906003)(4744005)(5660300002)(122000001)(6916009)(2616005)(41300700001)(6506007)(26005)(86362001)(53546011)(2906002)(33656002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <609AF9D387FD454E8D18B910030570F4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6813
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	eb4033f4-d761-443c-0473-08da5462c375
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/CWzFMXepnqA83PG02LxNbdPDsTVwI7yrfZOSzHrwX/8QPDCVfgws7r8PprsgZ33g88Q1OMQRHNgbQLXM3wi1C5Tik1yswfZhSM2Got9gDVoBEQgoZNYt0yhNdQ2HxrNxhgsJaeM4dyGM7g0pBaVBHknksraHojQdzisdJSjGegCG8Ff3VFCQ9egzBO/dTJNauGsnbz/SppaUrcmq6zLn7XXkQdUwH6noR4vaY7KBWXbnwjaoPc6iDh5qB1YwKQuzsnRkB/07SpPkTDZ1hOIyY2cXgFbCeUapQ8kWxHo9NG6jV2I/qSDcZxHnBtMnmzc8ohe0wMTyzM/7HlUO+MUgTfVXr/CrKmqbgY7m3u6SaOiOfPzxHKvnAlWszOHrK4Qon8myNpr15KVwn0ZcmDIm/WTttFYSlEvQw/pmaKHYE33WiwOXtv5WHRv/k1FL2KgdXqSFiyYqdBw1pegcBrbeb3zpdcVdhb2fH/wy3C4x1n/jJTrr2qK3M0pBnFcI4srONLNtQoXzkNz+wWTn3moAJn2k9xBGWOgWwdkdvg78qlOEVLdh65NRh+F3dTuIbarESGceLW57gus+Ba/g01rfMHA1bzedOJn6CNpuhPFRDtyWtzL9hQIqr/iHnjSzUIDI4xCo7qxVrpGMf8gQb02VW7ntAmr9JFlLC1rXPzBbryCArl7vqgfHJ6KYxZhSyZIHccARSzeeDVT7386hoyhJshUVkBe74RCdQfdEIjHnnATEm09DbAAYN5BxlJq0ppSP4a2wcp6WIhv+4c/5pMxDA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(39860400002)(396003)(376002)(346002)(40470700004)(46966006)(36840700001)(4744005)(6506007)(186003)(5660300002)(2616005)(6512007)(82740400003)(40460700003)(356005)(82310400005)(36860700001)(33656002)(41300700001)(2906002)(86362001)(26005)(53546011)(316002)(70586007)(54906003)(40480700001)(70206006)(83380400001)(81166007)(47076005)(36756003)(478600001)(4326008)(8676002)(336012)(6862004)(8936002)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 15:20:47.0637
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 616a9e32-fbe0-40ea-db8c-08da5462c861
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2849

Hi Xenia,

> On 22 Jun 2022, at 16:15, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>
> Include header <xen/list_sort.h> so that the declaration of the function
> list_sort(), which has external linkage, is visible before the function
> definition.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/lib/list-sort.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/xen/lib/list-sort.c b/xen/lib/list-sort.c
> index f8d8bbf281..de1af2ef8b 100644
> --- a/xen/lib/list-sort.c
> +++ b/xen/lib/list-sort.c
> @@ -16,6 +16,7 @@
>  */
>
> #include <xen/list.h>
> +#include <xen/list_sort.h>
>
> #define MAX_LIST_LENGTH_BITS 20
>=20
> --
> 2.34.1
>
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:21:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:21:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354165.581215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42Av-0003vX-0O; Wed, 22 Jun 2022 15:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354165.581215; Wed, 22 Jun 2022 15:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42Au-0003vQ-Sq; Wed, 22 Jun 2022 15:21:48 +0000
Received: by outflank-mailman (input) for mailman id 354165;
 Wed, 22 Jun 2022 15:21:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o42At-0003FD-Na
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:21:47 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50082.outbound.protection.outlook.com [40.107.5.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07ca1e5e-f23f-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 17:21:47 +0200 (CEST)
Received: from DU2PR04CA0353.eurprd04.prod.outlook.com (2603:10a6:10:2b4::13)
 by PR2PR08MB4908.eurprd08.prod.outlook.com (2603:10a6:101:23::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Wed, 22 Jun
 2022 15:21:44 +0000
Received: from DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b4:cafe::d) by DU2PR04CA0353.outlook.office365.com
 (2603:10a6:10:2b4::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Wed, 22 Jun 2022 15:21:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT017.mail.protection.outlook.com (100.127.142.243) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 22 Jun 2022 15:21:43 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Wed, 22 Jun 2022 15:21:43 +0000
Received: from a73f32a2332f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7F495646-A82F-4499-94F0-B91C49D91E0F.1; 
 Wed, 22 Jun 2022 15:21:37 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a73f32a2332f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 15:21:37 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAXPR08MB6813.eurprd08.prod.outlook.com (2603:10a6:102:15f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Wed, 22 Jun
 2022 15:21:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 15:21:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07ca1e5e-f23f-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=GwYvtfwx8s1mS3OSFwFB/7hcHpOMsTwLHNFlgoboWVkUkvE7koXclo7kyAmmvou3HBQGY9bo1Oxn/BCmgZSaEnbAYu/YSkAKUhp50Wm9gejJ3His/eoMBSG8MZGxuJKcAUyPwOrcYTt/E7T9H2JNZAQtseEJ1X65osdhZr4dbOuHI+pjVlILqcZpQc6CBiaALSmT7sGrEMa+7hgOIIuCS3rZ/sfr3v8EcWn5TiPb7bwrmvUpDw3WYNDfKqL9Ur09eA+FCIy42UOw8htdypQ1caSZZZGHL3yeWGwmgUj48vBxGzY5NinaGJnZHdqRrLvfs/XKnWNVHBQUMxTUvcLV+g==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o6f/osCnMMDoF/xXxvUEBQIfwtEhSDGWQtqTReoF29Y=;
 b=daU6oVrGQr+91L12q54G7V4lNpfYOyNzYPiYQ/2l8PldPIfDo1tLjZ/rGCfcQ8WoIx0ebSw4IfXmeZo2KSyoFmFJ8eCGQZOSVa6L6fxPdzf80dFlidRlS9P1ivhHLmXvYzVILjO5zmo0mKI2npyZFnYSOSilhggmHC63YgSRTc/Cn1D66Bxf9IoGO/KDY40M4AwOvK1yGAyT9njDW/AetmZZ3a/4VWbCjMwQzvjNLQQguqQy3p5dqW9RvnhNrtznNsywASqhJVfp5zUqIAkO+2qcjoc+q5/e3nVu8GrusefGVS7g9m2k9rhnmi2y0tynsoN7UfS1NP/VZnq1IkNJ0A==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o6f/osCnMMDoF/xXxvUEBQIfwtEhSDGWQtqTReoF29Y=;
 b=i06cxzYZw0/z52CFP1UvYPRWWk4UT5pl7l2MJ9lDGwziKaBhavqEc/w7YiY+8YUa2EQ5Pc+4QNal1eM7i/e1l6aoEKKVZivH18bXSR77/7z4R4rFlMhfhJSLvaFxsO1knYpLs3+wefPSohwJc/mGMMN4jr+uhcofuDUblUOFL5M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 78f9555be067c37b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cqpbFbnLwqm6VllR17JbHbq3CRKsDeepNlHmmw+e/vjaQY7dJFbXPgZMj+lI+8hokary/nOKpTv97qNnQirxKu9fSGGDRJO5wTEnGAdhxC6K42WJrqWY/2orziKi64Y8ec5KRbnjzG9E35JNuPoOMCfpfWaxTOxApRgcp9qTw6k8pZYxbS31Fh0vzAitOEZ0Mu9Hl7TiFi+uDSqnqa5qKFyT/A33NaWOppg00H2qPZ0FvKgzW18/2Y/TPb3wsWgpmdUPEXIjvKaFbUNxk4jkOl/thDo92pPkuBqu8d7b+WRMg9RmmlPMEYKb8pIMQCvrtEtp3DoDOWbNMt3h8cbiYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o6f/osCnMMDoF/xXxvUEBQIfwtEhSDGWQtqTReoF29Y=;
 b=eMAKpjkEKAn1jOhQ0npJUaMHNUO7Ac4kmX1PahUGH3jgOeNzsXAzvo4brQTXqX6h1z/fvYW5S7p/atuZ5PYlU+EOU5+Srz9aqEDTtvzNjEqBVPFNqUbp/dcQHHpb5Vev4jRZRo8VPgu1vx2OX23Sa89p5Q6z1+PSxANdgiHQd3NHhr73N8Nn5wojSa1IjbeG1rMJTPN1xQjd5ZJO0JuEjKwmleK67nYf3kkvSfFRNGNcJNdfrwdVqw7JhTl1bWifDHsy1ok3TnhltAT4oOyDf8FnxWhwqd/bxxcFtdNW21c5y+meTfG4dYLL6N3MWoKXwwC/6CH58PvC25Yki94qfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o6f/osCnMMDoF/xXxvUEBQIfwtEhSDGWQtqTReoF29Y=;
 b=i06cxzYZw0/z52CFP1UvYPRWWk4UT5pl7l2MJ9lDGwziKaBhavqEc/w7YiY+8YUa2EQ5Pc+4QNal1eM7i/e1l6aoEKKVZivH18bXSR77/7z4R4rFlMhfhJSLvaFxsO1knYpLs3+wefPSohwJc/mGMMN4jr+uhcofuDUblUOFL5M=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/3] xen/common: gunzip: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Topic: [PATCH 3/3] xen/common: gunzip: Fix MISRA C 2012 Rule 8.4
 violation
Thread-Index: AQHYhkr4CzBVsyJHpUKOeXKFemajka1bivAA
Date: Wed, 22 Jun 2022 15:21:36 +0000
Message-ID: <D8CBC78E-709E-4C04-AF8A-5789719BADDF@arm.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
 <20220622151514.545850-3-burzalodowa@gmail.com>
In-Reply-To: <20220622151514.545850-3-burzalodowa@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 7c576a44-0b36-44bb-fb8b-08da5462ea23
x-ms-traffictypediagnostic:
	PAXPR08MB6813:EE_|DBAEUR03FT017:EE_|PR2PR08MB4908:EE_
X-Microsoft-Antispam-PRVS:
	<PR2PR08MB49081E9230F7F6AAF48462819DB29@PR2PR08MB4908.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Y1R3KKPNXO+MWsRpsvvfDFrhMn+OReIgJxnHjRLYTscLRvqW1pt23qv8rM+ZgSPJcEGFhH/db6nBl37wVPCs2J6AMug6SqUNVVlmP0P0HmZzoZADZhoiNyFneyuP0+vSFgi/j+8YvQma7urhCxu3OiKzADUynCquiQzN44ovNwS2IdZlpA1u68/lSpj3Xjz9+spzh3QQl6tazbmjC5sWAQDVzHnci53RKZhwevPj0mnQd7JHmHWUZiCoe+U1/Tzp0DwA246n4DF/4dnn0jNjr/Yww8mdSqVZnYwOjHFLtw7tSN9YNfcwM1aQxPF9re19rSt+95cXA8z2RJlsY6THavOx93IssGU/Z/AJdXQU3FWod5pMyyq1aWlJzhCd51B003ZOHgp0nat3USVdANk/oRr1B7r/d2clifkzJLdIkk1GWmpBtDEouhReOYnp/yowUDZBBr66J06faC9LwgsmUaDK5bamMV00kRqY9XBcdbhSFAk+eX9gJO2HLccUJqS1JYwlPaKToLQG/d9cPdnq/kUXIL4F/XZsnZ7DAbnKTBKgYq8xDfyCiE88Ufic7Ku4lKlpzzVYnjN4+ssCecNjMeJe+jRS+aIwe33MEaTmr92dlovTAEQaB9W0R2CdntnUNrqXXrRUKW3FY7G1VkdY0CZGnUBPdY286geO9OTZXprHlmrexUzokMANLKVUFZ0kGXlZeyjmf2zYhGyeQGce4QqtEuAd7F46gNVX/wBdmAbknpzTCFUJMyynY59Abhk31nfyNHuGh+nAgtwv4Y+gnyXHWUourD6ZLehdQ2M4wT0=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(346002)(376002)(396003)(366004)(136003)(91956017)(4326008)(66476007)(66446008)(36756003)(38070700005)(66946007)(76116006)(66556008)(186003)(8676002)(71200400001)(64756008)(478600001)(8936002)(6486002)(316002)(6512007)(38100700002)(54906003)(4744005)(5660300002)(122000001)(6916009)(2616005)(41300700001)(6506007)(26005)(86362001)(53546011)(2906002)(33656002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <60900A83FCB8D74882602CEB32A50FAA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6813
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7f1ece56-16d1-45d2-5aee-08da5462e5c6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VWXuNa65a2hOHZwKwRyb/8o0JgHGpQCVmXH8pJARsJtxJL1iI00Vix5Covi6cTwx1qoOXHtltM7+NUcNYmw5xbN4IbnV0XrE3ppUoJh+2a+294wDEofg9N/ae83MRfXgpU2o0LEwp4fBIF+7RFHCQGOQ5VMyf5MplLNmg3THqfjPfCIpHNXBSPnp6DtEk1j1043WbO9+j1gpMLjzcN8MnMv0JlHvtKdIMY+1U1Fi1ejSBko83eRwlqVW154vcUQJdwsStFeJ50t+yjelpoVQGgq+Qw7Yy3oxdl3+RwrMR4iUtt10EZoVovbimrx07vRce1cid6fJ5T8DUCBM/uvENdv+NVJiQkNikaWlPS1mVEt5usYWokl9dEmTTBqFS3RdctV/2hxDmX0B1J8bSZe/pAxF76N6B9MpZP48DTmyPJf4VMmiBtkhjeQii39oDE4B3Wf4HKkFO/r1XW1FqxCRx4r7b8VqDcJoZOAxO08I3WaP35hPTvHM0Fm9yUsX3heOdgYhkVYpY7pXaKv+wdVsZUwTy19kLl6m4OCNdDDeABsEn5zjACmvCz0+lbLDfGux4lCiwgdmK0KzmbkMWQBRXX2Jxe/HG8dcsx+ncXPV0z+pp+52rrHqoktbKj3dAOimtGXrzCkkkAXrhMd1RYn4hT9yRLxRPNHp31RY03bzwdjQdN7sGz3g2iNQqqiWHomLQzs84SlPxjYznPUbe7F1henCTzRzHkCr/dMy/2YPzkl8fgRko99KE4xx3HmIvKSuiSs97ZIPoFpBZlo8Rm1TTw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(346002)(376002)(39860400002)(396003)(136003)(40470700004)(36840700001)(46966006)(6486002)(53546011)(54906003)(86362001)(316002)(478600001)(8936002)(5660300002)(70206006)(4744005)(36756003)(2906002)(8676002)(6512007)(4326008)(26005)(2616005)(70586007)(41300700001)(6862004)(356005)(82310400005)(83380400001)(33656002)(186003)(40460700003)(47076005)(36860700001)(336012)(81166007)(82740400003)(40480700001)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 15:21:43.7945
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7c576a44-0b36-44bb-fb8b-08da5462ea23
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4908

Hi Xenia,

> On 22 Jun 2022, at 16:15, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>
> Include header <xen/gunzip.h> so that the declarations of functions gzip_check()
> and perform_gunzip(), which have external linkage, are visible before the
> function definitions.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> xen/common/gunzip.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
> index b9ecc17e44..aa16fec4bb 100644
> --- a/xen/common/gunzip.c
> +++ b/xen/common/gunzip.c
> @@ -1,4 +1,5 @@
> #include <xen/errno.h>
> +#include <xen/gunzip.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/mm.h>
> --
> 2.34.1
>
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 15:47:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 15:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354176.581225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42Zl-0006Rh-1n; Wed, 22 Jun 2022 15:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354176.581225; Wed, 22 Jun 2022 15:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42Zk-0006Ra-VP; Wed, 22 Jun 2022 15:47:28 +0000
Received: by outflank-mailman (input) for mailman id 354176;
 Wed, 22 Jun 2022 15:47:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5hfk=W5=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1o42Zj-0006RU-Bt
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 15:47:27 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a9b99f4-f242-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 17:47:23 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 644EE5C014D;
 Wed, 22 Jun 2022 11:47:21 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Wed, 22 Jun 2022 11:47:21 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 22 Jun 2022 11:47:19 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a9b99f4-f242-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1655912841; x=
	1655999241; bh=rB9Wm+XmI+yNoro3smRJKMMciqzWIsI+xbYet9Terec=; b=R
	uH01bqTZBjOJsiIHEZ5/30xPXD2EZUSgcChfkSxMVP9GahPn7+g8LHapjW5YOAiT
	fCSYXm0uQAYCeZ1lEgi0deU69YCEQnI3Jt8W5aJI/l/XI/XM/Qwf/PlZpjOlTXax
	QtyhpUQ5QTrexO8iv04okysTc7yArRx9V/E3QYZfj6pMXm1xmHjxbhcwBXTTGQ5O
	7rnELYsguXva/zIjI+pOnFReKqo3M55xvxvo+SMf+PZLXM2qmR0+KcCe8lETcMBW
	M34JSD8eQeYu/jFnkbvopCmeE1b+zFesgMSkVc4Giu/XHpRE4piZhjb4illZxb9E
	ZZ7P+U4MDU38TxEqE57EQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1655912841; x=1655999241; bh=rB9Wm+XmI+yNoro3smRJKMMciqzW
	IsI+xbYet9Terec=; b=SRitMXfE/xaPp3xq9Gm3lYxuSiP9zcIAxPbT4TPpc9pP
	IdHZ1KC+Kec6VqQXsaQXG3lWj2cXst6bt3oq/RSmGahhrA6OC1dqzMFy9ZJvn372
	e4yyeKXesKstaPd/xqye4wzKGIyPF6E9rZmDyhhZrsT6yxEYj+FXQ72+59ecWn8i
	LHcv4w6J/677eKCxE0EQOkyXX2EZTz8wR50KTnn0Juc77lvONidclgwmEEgYW125
	4uhtonnyhZ3D9hq3nZCBbn7JPR1gh7KoQDFIReNkMQS8oVLjjc9YjeBg7g6gZWog
	rJKI6Ev2GIJcjIdfeWdEHx7wPTak/3lhopaLeoa3Rg==
X-ME-Sender: <xms:iDmzYpn3I7cdpknaxCl_fTLBtm_pM3UYVVQxoOyW9M30VgVUpOtnKQ>
    <xme:iDmzYk3-xYGOnV2GFdj7HGwtbVy7K5Hl94CUFKoEEOrfoASs7_IdyeJ6YMYLcP1yc
    G-othuv85gfhg>
X-ME-Received: <xmr:iDmzYvrMIlBecBPfnaMbzPImXiIcC-yiJuQSax5EjLRr_4ULp9LtdDzVdufxyLROmMScoSUrnTW3gNLq6EkBr1MlWjsgMIkLng>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefhedgleegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:iTmzYplOVWrv8J9FZ8rXdrZMGwp2Y19i3suQNbaCxaXXpH8eepqQNg>
    <xmx:iTmzYn2xP0qRv_qOSFHlCmNiTP6VfIir9cyIFsaU2AV-fZ1t7GDg0Q>
    <xmx:iTmzYot8t1ybzH4ILo3tdpWeA-VIYQMfmAqNYTMlthYRWDk7fZDxlg>
    <xmx:iTmzYsm1j4zrLEkTUJnoi6AdkDZVojig3poOx6ihzlghhYul8-FlTQ>
Feedback-ID: i1568416f:Fastmail
Date: Wed, 22 Jun 2022 17:47:15 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Connor Davis <davisc@ainfosec.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Connor Davis <connojdavis@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1 01/10] drivers/char: Add support for Xue USB3 debugger
Message-ID: <YrM5g3dLRJHTIVYt@mail-itl>
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <87c73737fe8ec6d9fe31c844b72b6c979b90c25d.1654612169.git-series.marmarek@invisiblethingslab.com>
 <9c7c11f5-be1e-f0ef-0659-48026675ec1a@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="SrxAhTaFs1FNuynb"
Content-Disposition: inline
In-Reply-To: <9c7c11f5-be1e-f0ef-0659-48026675ec1a@suse.com>


--SrxAhTaFs1FNuynb
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Wed, 22 Jun 2022 17:47:15 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Connor Davis <davisc@ainfosec.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Connor Davis <connojdavis@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1 01/10] drivers/char: Add support for Xue USB3 debugger

On Wed, Jun 15, 2022 at 04:25:54PM +0200, Jan Beulich wrote:
> On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
> > From: Connor Davis <davisc@ainfosec.com>
> > --- /dev/null
> > +++ b/xen/drivers/char/xue.c
> > @@ -0,0 +1,957 @@
> > +/*
> > + * drivers/char/xue.c
> > + *
> > + * Xen port for the xue debugger
>
> Since even here it's not spelled out - may I ask what "xue" actually
> stands for (assuming it's an acronym)?

Honestly, I don't know. That would be a question to Connor.

> > +/* Supported xHC PCI configurations */
> > +#define XUE_XHC_CLASSC 0xC0330ULL
>
> While I'm not meaning to fully review the code in this file (for lack
> of knowledge on xhci), the two ULL suffixes above strike me as odd.
> Is there a specific reason these can't just be U?

I don't think so (that's just how it was in xue.h). Will adjust. The
same response applies to many other remarks.

> > +    /* ...we found it, so parse the BAR and map the registers */
> > +    bar0 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0);
> > +    bar1 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_1);
>
> What if there are multiple?

You mean two 32-bit BARs? I check for that below (refusing to use them).
Anyway, I don't think that's a thing in USB 3.0 controllers.


> > +    /* IO BARs not allowed; BAR must be 64-bit */
> > +    if ( (bar0 & 0x1) != 0 || ((bar0 & 0x6) >> 1) != 2 )

(...)

> > +    memset(xue, 0, sizeof(*xue));
> > +
> > +    xue->dbc_ctx = &ctx;
> > +    xue->dbc_erst = &erst;
> > +    xue->dbc_ering.trb = evt_trb;
> > +    xue->dbc_oring.trb = out_trb;
> > +    xue->dbc_iring.trb = in_trb;
> > +    xue->dbc_owork.buf = wrk_buf;
> > +    xue->dbc_str = str_buf;
>
> Especially the page-sized entities want allocating dynamically here, as
> they won't be needed without the command line option requesting the use
> of this driver.

Are you okay with changing this only in patch 9, where I restructure those
buffers anyway?

> > +    xue_open(xue);
>
> No check of return value?

Good catch, will adjust.

> > +    serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
> > +}
> > +
> > +void xue_uart_dump(void)
> > +{
> > +    struct xue_uart *uart = &xue_uart;
> > +    struct xue *xue = &uart->xue;
> > +
> > +    xue_dump(xue);
> > +}
>
> This function looks to be unused (and lacks a declaration).

It is unused, same as xue_dump(), but is extremely useful when
debugging. Should I put it behind something like #ifdef XUE_DEBUG,
accompanied with a comment about its purpose?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--SrxAhTaFs1FNuynb
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmKzOYMACgkQ24/THMrX
1yzkswf8D7xZb1jn6Bh6JZ9dvZxOMuodXKZ2kU0llv+c9IGGr83rvVHkyVwDl3Nz
ItpqTvVdoOCcXm27aMYZjgkro7t5Y5bgK0woKhHqcA0AnTLtAS4XGAE0pQfRtC6L
vFS41zZvMw2w3ljXxXBEkKbT11fyAC9KtdZ154kiSb/GcPDK3Ym2leodURVj0yor
5raUQ44X8kkxGtGobBgAofF98TV7j8tqDM3BFVQo+MNdokEHzKaqRjtTU6kBBqIc
/x+oa1ao0mMnP8rrTY+IOlNkmE6ENLsEzrnQeDqq0dbAwC9cEr+Z8oqAbhx2LhOQ
RKKo5jo6G1t3dtGzqQ47RcQ7rUU+fw==
=6EwA
-----END PGP SIGNATURE-----

--SrxAhTaFs1FNuynb--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:06:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:06:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354187.581236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42rs-0000xs-Nj; Wed, 22 Jun 2022 16:06:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354187.581236; Wed, 22 Jun 2022 16:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42rs-0000xl-KU; Wed, 22 Jun 2022 16:06:12 +0000
Received: by outflank-mailman (input) for mailman id 354187;
 Wed, 22 Jun 2022 16:06:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o42rq-0000wG-UU
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:06:10 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 376f9331-f245-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 18:06:04 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7A7AC21AD1
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 16:06:03 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 67A54134A9
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 16:06:03 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id GmYCGOs9s2IMCQAAMHmgww
 (envelope-from <jgross@suse.com>)
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 16:06:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 376f9331-f245-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655913963; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type;
	bh=VXTOCtPUj5Vxvf3dGmXU7hcSkOfx3s8UuTvMiX0clR4=;
	b=IlOSTyxJZeRiGshFH/YrRmyJqWbC2Q37oxPcJGI5+IHy0qqkTAV88IMnebsv9eC01rFG6f
	djraa9AQARQPwJiRKDu2wXCqiVX5DYUiV+67hbmwQfs5Kd8ic+OJUTZrGYbjr7sBUdpHHM
	GWSLynK9sBPtCV6L/asf7zxe8PMKjqM=
Message-ID: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
Date: Wed, 22 Jun 2022 18:06:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Juergen Gross <jgross@suse.com>
Subject: Problem loading linux 5.19 as PV dom0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------bSX0hyZEUvPsxqcqd6yQo2HF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------bSX0hyZEUvPsxqcqd6yQo2HF
Content-Type: multipart/mixed; boundary="------------DM5VH0sM33To1uGUngcOySm0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
Subject: Problem loading linux 5.19 as PV dom0

--------------DM5VH0sM33To1uGUngcOySm0
Content-Type: multipart/mixed; boundary="------------hoDaLF9BU1jqQsrbGKsJvIHD"

--------------hoDaLF9BU1jqQsrbGKsJvIHD
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

A Linux kernel 5.19 can only be loaded as dom0 if it has been
built with CONFIG_AMD_MEM_ENCRYPT enabled. This is due to the
fact that otherwise the (relevant) last section of the built
kernel has the NOLOAD flag set (it is still marked with
SHF_ALLOC).

I think at least the hypervisor needs to be changed to support
this layout. Otherwise it will put the initial page tables for
dom0 at the same position as this last section, leading to
early crashes.

A workaround in the kernel would be to always add a small
section at the end which needs to be loaded (like is done
with CONFIG_AMD_MEM_ENCRYPT set), but I don't think we can
put this burden on all guests capable of running in PV
mode.

I haven't yet tested whether unprivileged PV guests are
affected, too.


Juergen
--------------hoDaLF9BU1jqQsrbGKsJvIHD
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------hoDaLF9BU1jqQsrbGKsJvIHD--

--------------DM5VH0sM33To1uGUngcOySm0--

--------------bSX0hyZEUvPsxqcqd6yQo2HF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmKzPeoFAwAAAAAACgkQsN6d1ii/Ey9P
Ewf/RmRogEs423+ApvlZhD6tt14aKwlarLN+TFt5JlTxmUNHQSzxY3GuXI28WlLsHco/7zylTVOe
tMmhn7N142rOk43PkdLH5hGZVLCodGz+c6AZ+2U+QxcMdSFvi0O+GlTAXVTkDiJuxtQu/BqfO65x
KL+FrnYU0789j76uxdynpRtH6fJdAlLRIZQ4peGgHgsg7ia6s1Rj7k3C0oC2C2bvIUClgu/I5n0p
KYXuriRtqEwCKQVeVPaDT7ad6Qk5jAWacUGZZKkE1ou9+XJHrg7kade7Hu5pSsj6EILo7LyxpyvN
kHIc+dr7/ZbD2Z9sQX2Y8xJmFZPNxSLPYZl/tnLn1A==
=GDFh
-----END PGP SIGNATURE-----

--------------bSX0hyZEUvPsxqcqd6yQo2HF--


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:10:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354195.581248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wP-0002Lc-BY; Wed, 22 Jun 2022 16:10:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354195.581248; Wed, 22 Jun 2022 16:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wP-0002LV-7L; Wed, 22 Jun 2022 16:10:53 +0000
Received: by outflank-mailman (input) for mailman id 354195;
 Wed, 22 Jun 2022 16:10:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o42wO-0002LP-O0
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:10:52 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2f09c1f-f245-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 18:10:51 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 66ACA21B8F;
 Wed, 22 Jun 2022 16:10:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1187E134A9;
 Wed, 22 Jun 2022 16:10:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id FVLyAgs/s2JzCwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 16:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2f09c1f-f245-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655914251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=dLNCSuzgjE1irfejW+XQU6PJClEFaQlN1EssbDqEWZE=;
	b=ddEJwwo53MebTNELEsUYrQtqkyRVDUa9qirmRSQ4fMI0zQojNEw4VWpDeZecZb11hNRvVz
	szP8D2+RV5mAugoe6drgBCliA41Ra1zyUftEEF/LYW/i4DzLfS4ALbuxif7BKRZDyEAV/S
	QNKJCamoAb+V8Y0uQ/67w9VaTL3WuBU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH 0/2] x86: fix brk area initialization
Date: Wed, 22 Jun 2022 18:10:46 +0200
Message-Id: <20220622161048.4483-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The brk area needs to be zeroed initially, like the .bss section.

Juergen Gross (2):
  x86/xen: use clear_bss() for Xen PV guests
  x86: fix setup of brk area

 arch/x86/include/asm/setup.h |  3 +++
 arch/x86/kernel/head64.c     |  4 +++-
 arch/x86/xen/enlighten_pv.c  |  8 ++++++--
 arch/x86/xen/xen-head.S      | 10 +---------
 4 files changed, 13 insertions(+), 12 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:10:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:10:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354196.581259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wQ-0002bK-Ia; Wed, 22 Jun 2022 16:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354196.581259; Wed, 22 Jun 2022 16:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wQ-0002bB-FH; Wed, 22 Jun 2022 16:10:54 +0000
Received: by outflank-mailman (input) for mailman id 354196;
 Wed, 22 Jun 2022 16:10:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o42wP-0002LP-HH
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:10:53 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3204999-f245-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 18:10:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C43681F8E0;
 Wed, 22 Jun 2022 16:10:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6E999134A9;
 Wed, 22 Jun 2022 16:10:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 8GO1GQs/s2JzCwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 16:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3204999-f245-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655914251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NZVsJ9w6p52CGnx+whse0Q5jtOMx+D4jAJ62tq6HNaI=;
	b=cd42wzcG9qaMXbWjJjPT6DeqeIeIraiBDdVXY8uFDunVR1bsFtlTkWabujY1sWEZYk3rPh
	g4T4IjxBOE8PIn0qEs6huxZI6p+qO7eNMCH7djqwSi30jRTgnY3Cy3I7CN1UN56ydHzj/A
	48WC5iBIXNOX5SPuKT343/uLBidXZg0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH 1/2] x86/xen: use clear_bss() for Xen PV guests
Date: Wed, 22 Jun 2022 18:10:47 +0200
Message-Id: <20220622161048.4483-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220622161048.4483-1-jgross@suse.com>
References: <20220622161048.4483-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of clearing the bss area in assembly code, use the clear_bss()
function.

This requires passing the start_info address as a parameter to
xen_start_kernel() in order to avoid xen_start_info being zeroed
again.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/setup.h |  3 +++
 arch/x86/kernel/head64.c     |  2 +-
 arch/x86/xen/enlighten_pv.c  |  8 ++++++--
 arch/x86/xen/xen-head.S      | 10 +---------
 4 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f8b9ee97a891..f37cbff7354c 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -120,6 +120,9 @@ void *extend_brk(size_t size, size_t align);
 	static char __brk_##name[size]
 
 extern void probe_roms(void);
+
+void clear_bss(void);
+
 #ifdef __i386__
 
 asmlinkage void __init i386_start_kernel(void);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index bd4a34100ed0..e7e233209a8c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -426,7 +426,7 @@ void __init do_early_exception(struct pt_regs *regs, int trapnr)
 
 /* Don't add a printk in there. printk relies on the PDA which is not initialized 
    yet. */
-static void __init clear_bss(void)
+void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c..70fb2ea85e90 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
 extern void early_xen_iret_patch(void);
 
 /* First C function to be called on Xen boot */
-asmlinkage __visible void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 {
 	struct physdev_set_iopl set_iopl;
 	unsigned long initrd_start = 0;
 	int rc;
 
-	if (!xen_start_info)
+	if (!si)
 		return;
 
+	clear_bss();
+
+	xen_start_info = si;
+
 	__text_gen_insn(&early_xen_iret_patch,
 			JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
 			JMP32_INSN_SIZE);
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 3a2cd93bf059..13af6fe453e3 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -48,15 +48,6 @@ SYM_CODE_START(startup_xen)
 	ANNOTATE_NOENDBR
 	cld
 
-	/* Clear .bss */
-	xor %eax,%eax
-	mov $__bss_start, %rdi
-	mov $__bss_stop, %rcx
-	sub %rdi, %rcx
-	shr $3, %rcx
-	rep stosq
-
-	mov %rsi, xen_start_info
 	mov initial_stack(%rip), %rsp
 
 	/* Set up %gs.
@@ -71,6 +62,7 @@ SYM_CODE_START(startup_xen)
 	cdq
 	wrmsr
 
+	mov	%rsi, %rdi
 	call xen_start_kernel
 SYM_CODE_END(startup_xen)
 	__FINIT
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:10:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354197.581269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wR-0002rr-SK; Wed, 22 Jun 2022 16:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354197.581269; Wed, 22 Jun 2022 16:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o42wR-0002rk-Pf; Wed, 22 Jun 2022 16:10:55 +0000
Received: by outflank-mailman (input) for mailman id 354197;
 Wed, 22 Jun 2022 16:10:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zPYt=W5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o42wQ-0002LP-HP
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:10:54 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e346282f-f245-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 18:10:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0D4FE1F8E1;
 Wed, 22 Jun 2022 16:10:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C084013AC7;
 Wed, 22 Jun 2022 16:10:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id +DmlLQs/s2JzCwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 22 Jun 2022 16:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e346282f-f245-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655914252; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3utwreIcXrsHaZ2aPHJW+ToYT/lkMci89t36LS1rRtk=;
	b=rX2V6UHCEbDokFsQZRoynOuusc4yhrQefn8Slx89yncv7CUS9rWToPJogg9VgkkRvxO5zF
	N7BA+/1Vos8rc9Z+chjfuoz3ATNTNR2xeetaKfYJXvrv6KH3E/uFe1jwy76fiUCBiELIRh
	sXBNDkiFiCrv7gLrbWjiPv+yKlln+t4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH 2/2] x86: fix setup of brk area
Date: Wed, 22 Jun 2022 18:10:48 +0200
Message-Id: <20220622161048.4483-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220622161048.4483-1-jgross@suse.com>
References: <20220622161048.4483-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
put the brk area into the .bss segment, causing it not to be cleared
initially. As the brk area is used to allocate early page tables, any
entries not explicitly written might contain garbage.

This is especially a problem for Xen PV guests, as the hypervisor
validates page tables (checking for writable page tables and for
hypervisor private bits) before accepting them for use. There have
been reports of early PV guest crashes due to illegal page table
contents.

Fix that by letting clear_bss() clear the brk area, too.

Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/head64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index e7e233209a8c..6a3cfaf6b72a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -430,6 +430,8 @@ void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }
 
 static unsigned long get_cmd_line_ptr(void)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:29:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:29:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354225.581281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43E3-00055o-EA; Wed, 22 Jun 2022 16:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354225.581281; Wed, 22 Jun 2022 16:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43E3-00055h-B1; Wed, 22 Jun 2022 16:29:07 +0000
Received: by outflank-mailman (input) for mailman id 354225;
 Wed, 22 Jun 2022 16:29:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rPuW=W5=citrix.com=prvs=165a25834=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o43E1-00055b-Hx
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:29:05 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c612ab8-f248-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 18:29:03 +0200 (CEST)
Received: from mail-dm6nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 Jun 2022 12:28:59 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5695.namprd03.prod.outlook.com (2603:10b6:a03:2d3::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Wed, 22 Jun
 2022 16:28:57 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 16:28:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c612ab8-f248-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655915343;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=GzjJYwLpAwSxwE6bw01UwWSPMT6Rsv3F7iFHgKI0RFA=;
  b=QEK9g/BcnNXBF55pBbE8+lyB3PTPbrM5YUhceZ1UHUnVUs10/1ytOr+f
   F3eTN03R1vUmpmlt75zjXusHTPTGtIa+OMMhqHrBpRmlF4zxblF4g8nOd
   M87T8rldbe47AkOGfY8ZybAGxRp3UB9CXBQeRe7TlV4xngWAEOFnBEW8t
   k=;
X-IronPort-RemoteIP: 104.47.57.172
X-IronPort-MID: 74203292
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,212,1650945600"; 
   d="scan'208";a="74203292"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GzjJYwLpAwSxwE6bw01UwWSPMT6Rsv3F7iFHgKI0RFA=;
 b=pao1QdpMSrw5CtxzwLq7ETjQNJSHzzwX+hdQtQlnPw75Rn1EJ1Z23ETevyijkUar0vUXpthWVJ+wyWEjjFjKibtVqT8x/oEt+c6Hd5JpBv/fNyaw9TSwNikJXdNtHa3HyRrw+tnqpAbp4ta2hcLTF/gc/162x8rPe7ZlLxygffI=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Christopher Clark
	<christopher.w.clark@gmail.com>
CC: Julien Grall <julien@xen.org>, xen-devel <xen-devel@lists.xenproject.org>,
	Michal Orzel <Michal.Orzel@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Daniel Smith <dpsmith@apertussolutions.com>, Roger Pau Monne
	<roger.pau@citrix.com>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index:
 AQHYhWH1+fjNtvukV0adEqGEdTB7sq1ZxCsAgAAXIgCAAAnTAIAAf4aAgAD4xwCAAEImgA==
Date: Wed, 22 Jun 2022 16:28:57 +0000
Message-ID: <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
 <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
In-Reply-To: <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 63aa44ed-a3d1-4ff2-c8a2-08da546c4e6b
x-ms-traffictypediagnostic: SJ0PR03MB5695:EE_
x-microsoft-antispam-prvs:
 <SJ0PR03MB5695CD691AF70FDE1FE8CE87BAB29@SJ0PR03MB5695.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <B464758853244D4595F5AE7FCC1875AA@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 63aa44ed-a3d1-4ff2-c8a2-08da546c4e6b
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jun 2022 16:28:57.4609
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0vyQs3rrva8ZIUUk7MJB7cOfM0NIOLC1QbApecmxBdv4+KHAU2+tFnWxL38J5AiIpjjMdjUDJa9+PR3eN0U4hYlzt8TaaZ9VhPUFtHGBfs0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5695

On 22/06/2022 13:32, Bertrand Marquis wrote:
> Hi Andrew and Christopher,
>
> I will not dig into the details of the issues you currently have
> but it seems you are trying to re-do the work we already did
> and have been using for quite a while.
>
> Currently we maintain the xtf on arm code in gitlab and we
> recently rebased it on the latest xtf master:
> https://gitlab.com/xen-project/people/bmarquis/xtf
>
> If possible I would suggest to start from there.

Sorry to be blunt, but no.  I've requested several times for that series
to be broken down into something which is actually reviewable, and
because that has not been done, I'm doing it at the fastest pace my
other priorities allow.

Notice how 2/3 of the patches in the past year have been bits
specifically carved out of the ARM series, or improvements to prevent
the ARM series introducing technical debt.  Furthermore, you've not
taken the "build ARM in CI" patch that I wrote specifically for you to
be part of the series, and you've got breakages to x86 from rebasing.

At this point, I am not interested in seeing any work which is not
morphing (and mostly pruning) the arm-wip branch down into a set of
clean build system modifications that can bootstrap the
as-minimal-as-I-can-make-it stub.

As it turns out, I've found the arm64 bug (it was a typo in asm), and
the arm32 bug (issue with the compiler flags, affecting all the arm
branches thus far).

When the minimum stub is working and merged, we can then see about
working up to getting the selftest working for arm32/64.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:30:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:30:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354234.581292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43F6-0006St-U2; Wed, 22 Jun 2022 16:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354234.581292; Wed, 22 Jun 2022 16:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43F6-0006Sm-Ow; Wed, 22 Jun 2022 16:30:12 +0000
Received: by outflank-mailman (input) for mailman id 354234;
 Wed, 22 Jun 2022 16:30:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o43F5-0006Sa-RM; Wed, 22 Jun 2022 16:30:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o43F5-0000VY-MV; Wed, 22 Jun 2022 16:30:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o43F5-0004qD-31; Wed, 22 Jun 2022 16:30:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o43F5-0003Ac-1r; Wed, 22 Jun 2022 16:30:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XrWASORIZLts1GB/a18+j0ZDM4wptDjS9o9P5rIVrEM=; b=DrHv26S/JPJfcPzHCMCkd2Se66
	nABYzNaAOCkt04fCbfFIZNBKC/yniozSTaHfjBNxiR7ZfbbAS666EDSs0dK1Le5S4PpNVEOws5C2I
	i5Ag3qshuv9G8YNB0gagSm5rGCKlB0f7r593Q8NBIo2TaU40qk0vkeIWsR6PdWNvF/cw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171307-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171307: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ca1fdab7fd27eb069df1384b2850dcd0c2bebe8d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 16:30:11 +0000

flight 171307 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171307/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ca1fdab7fd27eb069df1384b2850dcd0c2bebe8d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    3 days
Failing since        171280  2022-06-19 15:12:25 Z    3 days   10 attempts
Testing same since   171302  2022-06-22 00:12:23 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David Sterba <dsterba@suse.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2201 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 16:40:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 16:40:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354245.581303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43P2-0007z8-Tb; Wed, 22 Jun 2022 16:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354245.581303; Wed, 22 Jun 2022 16:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o43P2-0007z1-QL; Wed, 22 Jun 2022 16:40:28 +0000
Received: by outflank-mailman (input) for mailman id 354245;
 Wed, 22 Jun 2022 16:40:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UqP7=W5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o43P1-0007yv-Do
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 16:40:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061.outbound.protection.outlook.com [40.107.21.61])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 047fd8b5-f24a-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 18:40:26 +0200 (CEST)
Received: from DB7PR03CA0101.eurprd03.prod.outlook.com (2603:10a6:10:72::42)
 by AM9PR08MB6981.eurprd08.prod.outlook.com (2603:10a6:20b:414::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.13; Wed, 22 Jun
 2022 16:40:23 +0000
Received: from DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::ac) by DB7PR03CA0101.outlook.office365.com
 (2603:10a6:10:72::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Wed, 22 Jun 2022 16:40:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT041.mail.protection.outlook.com (100.127.142.233) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5353.14 via Frontend Transport; Wed, 22 Jun 2022 16:40:23 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Wed, 22 Jun 2022 16:40:23 +0000
Received: from 16ec00be5f89.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 73024C99-0D15-4090-85D8-979CCAA40917.1; 
 Wed, 22 Jun 2022 16:40:16 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 16ec00be5f89.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 22 Jun 2022 16:40:16 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB6PR0801MB1911.eurprd08.prod.outlook.com (2603:10a6:4:74::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.18; Wed, 22 Jun
 2022 16:40:13 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Wed, 22 Jun 2022
 16:40:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 047fd8b5-f24a-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PCebDf01YV/u880QEKJ/QrV9YMFnO0Td/GA8HPgRwc8=;
 b=mVbceSuv6zoOoLDB2rIXkjR7SRfkAzOuAkCPTniT984PUaaquxLNVtiOoXpW3w1E6P/6zuke46lt02+s2FdM0kFtBWB4oJfwfLPmPToC7Kt62elCmFDavnWnaCa7RN4k29sBuzypxb5F6yhVhwmy9lhHHCJMORyZ3cDcX+1a+p8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5533e804f7d6d983
X-CR-MTA-TID: 64aa7808
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PCebDf01YV/u880QEKJ/QrV9YMFnO0Td/GA8HPgRwc8=;
 b=mVbceSuv6zoOoLDB2rIXkjR7SRfkAzOuAkCPTniT984PUaaquxLNVtiOoXpW3w1E6P/6zuke46lt02+s2FdM0kFtBWB4oJfwfLPmPToC7Kt62elCmFDavnWnaCa7RN4k29sBuzypxb5F6yhVhwmy9lhHHCJMORyZ3cDcX+1a+p8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Christopher Clark <christopher.w.clark@gmail.com>, Julien Grall
	<julien@xen.org>, xen-devel <xen-devel@lists.xenproject.org>, Michal Orzel
	<Michal.Orzel@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Daniel Smith
	<dpsmith@apertussolutions.com>, Roger Pau Monne <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index:
 AQHYhWH1+fjNtvukV0adEqGEdTB7sq1ZxCsAgAAXIgCAAAnTAIAAf4aAgAD4xwCAAEImgIAAAyaA
Date: Wed, 22 Jun 2022 16:40:13 +0000
Message-ID: <A06EA6F6-BBB5-4FDC-BEA0-E5C6EB6B445B@arm.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
 <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
 <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
In-Reply-To: <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: c1da04de-6d66-430e-b2ae-08da546de716
x-ms-traffictypediagnostic:
	DB6PR0801MB1911:EE_|DBAEUR03FT041:EE_|AM9PR08MB6981:EE_
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB6981500413CA534C4E166F499DB29@AM9PR08MB6981.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1670565CC8A23848A7FDB70B9CF8251F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1911
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a178714f-4c0b-4876-8df0-08da546de176
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2022 16:40:23.1383
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c1da04de-6d66-430e-b2ae-08da546de716
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6981

Hi Andrew,

> On 22 Jun 2022, at 17:28, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>
> On 22/06/2022 13:32, Bertrand Marquis wrote:
>> Hi Andrew and Christopher,
>>
>> I will not dig into the details of the issues you currently have
>> but it seems you are trying to re-do the work we already did
>> and have been using for quite a while.
>>
>> Currently we maintain the xtf on arm code in gitlab and we
>> recently rebased it on the latest xtf master:
>> https://gitlab.com/xen-project/people/bmarquis/xtf
>>
>> If possible I would suggest starting from there.
>
> Sorry to be blunt, but no.  I've requested several times for that series
> to be broken down into something which is actually reviewable, and
> because that has not been done, I'm doing it at the fastest pace my
> other priorities allow.

You have not requested anything; we have been asking for a year
what we could do to help, without getting any answer.

>
> Notice how 2/3 of the patches in the past year have been bits
> specifically carved out of the ARM series, or improvements to prevent
> the ARM series introducing technical debt.  Furthermore, you've not
> taken the "build ARM in CI" patch that I wrote specifically for you to
> be part of the series, and you've got breakages to x86 from rebasing.

Which patch? Where? There was no communication about anything like that.

>
> At this point, I am not interested in seeing any work which is not
> morphing (and mostly pruning) the arm-wip branch down into a set of
> clean build system modifications that can bootstrap the
> as-minimal-as-I-can-make-it stub.

You cannot expect us to poll all the possible branches that you are creating
and simply rework what we did when you do something on some branch.

We went through what you requested using GitHub, and asked you at almost every
Xen Community Call what we could do to go further, without getting any answer.

You are not interested in us contributing to XTF; that is understood.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 22 17:46:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 17:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354264.581318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44Qd-0005gc-R8; Wed, 22 Jun 2022 17:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354264.581318; Wed, 22 Jun 2022 17:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44Qd-0005gV-N7; Wed, 22 Jun 2022 17:46:11 +0000
Received: by outflank-mailman (input) for mailman id 354264;
 Wed, 22 Jun 2022 17:46:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o44Qc-0005fe-Kz
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 17:46:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44Qc-0001nE-A2; Wed, 22 Jun 2022 17:46:10 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44Qc-0007Y5-13; Wed, 22 Jun 2022 17:46:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=dTFEg76KRkxW2BD+pHmMB9wuW8MUbDPjrtIU13AOqb8=; b=vJ1Px+G6vqulPxR6V4ij8y2LoJ
	JY5KtQwErNmWcGKWdGbzijduhZOSrsBwpYC5spoF+lh1FrnrDCKcwsmOPZ4NCeBfx79vQaAEEDlHv
	f+Kq/S4rZ54YTMoDn1sZNn4AUdWV6X+zOfnq8Wmqf0YdJkZ3jZ3Vw9U8VCbp4E0ItO6s=;
Message-ID: <87b7646c-dbc0-f503-131a-a51aa3bd517f@xen.org>
Date: Wed, 22 Jun 2022 18:46:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 Julien Grall <jgrall@amazon.com>, Bertrand Marquis
 <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094157.95631-1-julien@xen.org>
 <alpine.DEB.2.22.394.2206141731320.1837490@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206141731320.1837490@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 15/06/2022 01:32, Stefano Stabellini wrote:
> On Tue, 14 Jun 2022, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>>
>> Unfortunately, the rule is not followed when initializing the per-CPU
>> IRQs:
>>
>> (XEN) Xen call trace:
>> (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
>> (XEN)    [<00000000>] 00000000 (LR)
>> (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
>> (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
>> (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
>> (XEN)    [<00288fa4>] start_secondary+0x194/0x274
>> (XEN)    [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>>
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
>>
>> Avoid the problem by allocating the per-CPU IRQs while preparing the
>> CPU.
>>
>> This also has the benefit of removing a BUG_ON() in the secondary CPU
>> code.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/include/asm/irq.h |  1 -
>>   xen/arch/arm/irq.c             | 35 +++++++++++++++++++++++++++-------
>>   xen/arch/arm/smpboot.c         |  2 --
>>   3 files changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/irq.h b/xen/arch/arm/include/asm/irq.h
>> index e45d57459899..245f49dcbac5 100644
>> --- a/xen/arch/arm/include/asm/irq.h
>> +++ b/xen/arch/arm/include/asm/irq.h
>> @@ -73,7 +73,6 @@ static inline bool is_lpi(unsigned int irq)
>>   bool is_assignable_irq(unsigned int irq);
>>   
>>   void init_IRQ(void);
>> -void init_secondary_IRQ(void);
>>   
>>   int route_irq_to_guest(struct domain *d, unsigned int virq,
>>                          unsigned int irq, const char *devname);
>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>> index b761d90c4063..56bdcb95335d 100644
>> --- a/xen/arch/arm/irq.c
>> +++ b/xen/arch/arm/irq.c
>> @@ -17,6 +17,7 @@
>>    * GNU General Public License for more details.
>>    */
>>   
>> +#include <xen/cpu.h>
>>   #include <xen/lib.h>
>>   #include <xen/spinlock.h>
>>   #include <xen/irq.h>
>> @@ -100,7 +101,7 @@ static int __init init_irq_data(void)
>>       return 0;
>>   }
>>   
>> -static int init_local_irq_data(void)
>> +static int init_local_irq_data(unsigned int cpu)
>>   {
>>       int irq;
>>   
>> @@ -108,7 +109,7 @@ static int init_local_irq_data(void)
>>   
>>       for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
>>       {
>> -        struct irq_desc *desc = irq_to_desc(irq);
>> +        struct irq_desc *desc = &per_cpu(local_irq_desc, cpu)[irq];
>>           int rc = init_one_irq_desc(desc);
>>   
>>           if ( rc )
>> @@ -131,6 +132,29 @@ static int init_local_irq_data(void)
>>       return 0;
>>   }
>>   
>> +static int cpu_callback(struct notifier_block *nfb, unsigned long action,
>> +                        void *hcpu)
>> +{
>> +    unsigned long cpu = (unsigned long)hcpu;
> 
> unsigned int cpu ?

Hmmm... We seem to have a mix in the code base. I am OK to switch to 
unsigned int.

> 
> The rest looks good
Can this be converted to an ack or review tag?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 17:57:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 17:57:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354274.581333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44bd-0007EN-TL; Wed, 22 Jun 2022 17:57:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354274.581333; Wed, 22 Jun 2022 17:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44bd-0007EG-QN; Wed, 22 Jun 2022 17:57:33 +0000
Received: by outflank-mailman (input) for mailman id 354274;
 Wed, 22 Jun 2022 17:57:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o44bc-0007EA-Bm
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 17:57:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44bb-0001xz-Pn; Wed, 22 Jun 2022 17:57:31 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44bb-00083F-IJ; Wed, 22 Jun 2022 17:57:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=XT/bi/uIzrCH6IecR7ly/L3haTafrz73zG8pzJE4w4c=; b=65EpWa/smI+duGLKubLG/OI8Bu
	KUuthK0lpXjWsOViL+Z4YQr0aNSDCLg/NtfotTrKNORECMKXxSysebSunB4Lr25w4vNZK/oCDCi+n
	xvUTm9nDaBqfkBIpAmVu7Eq/zFIoN2wf3j+KsniqHR8OJw81CMxdMX9LU5y7kLtz6m/M=;
Message-ID: <90b86795-b9a8-a01d-1e92-5e7bcdb1ae7a@xen.org>
Date: Wed, 22 Jun 2022 18:57:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: smpboot: Allocate the CPU sibling/core maps
 while preparing the CPU
To: Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@arm.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 Julien Grall <jgrall@amazon.com>, Bertrand Marquis
 <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094119.94720-1-julien@xen.org>
 <f60bd88a-90bc-60a9-be72-aa533315c55f@arm.com>
 <3ed8e44f-293d-958f-c144-466e16d034e2@xen.org>
 <55f45337-2da1-fe8f-b7a5-272577ed4d50@arm.com>
 <alpine.DEB.2.22.394.2206141723360.1837490@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206141723360.1837490@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 15/06/2022 01:23, Stefano Stabellini wrote:
> On Tue, 14 Jun 2022, Michal Orzel wrote:
>> On 14.06.2022 13:08, Julien Grall wrote:
>>>>> +    unsigned int rc = 0;
>>>> ... here you are setting rc to 0 even though it will be reassigned.
>>>> Furthermore, if rc is used only in the CPU_UP_PREPARE case, why not move the definition there?
>>>
>>> Because I forgot to replace "return NOTIFY_DONE;" with:
>>>
>>> return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
>> That is what I thought.
>> With these fixes you can add my Rb.
> 
> And also my
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks. I have committed this patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 18:19:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 18:19:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354283.581344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44wy-0001IB-LN; Wed, 22 Jun 2022 18:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354283.581344; Wed, 22 Jun 2022 18:19:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o44wy-0001I4-IC; Wed, 22 Jun 2022 18:19:36 +0000
Received: by outflank-mailman (input) for mailman id 354283;
 Wed, 22 Jun 2022 18:19:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o44wy-0001Hi-1V
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 18:19:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44wx-0002QG-O1; Wed, 22 Jun 2022 18:19:35 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.1.223]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o44wx-0000kv-HY; Wed, 22 Jun 2022 18:19:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=sMmLDbagw6mTyFw2CQmeigXWnULd9umiFrBOldJUzQ0=; b=F3D43Qolmqousx6Ky4b52DQ1M1
	Zd9x8ofHU/+NNTE2sc4rcBiWSXSouBfFUSasiPb2ZPniAqJNSGrnhI9wdDklZ+aSdotRXgLwHHVph
	uAZdJMVTlRfshwKQQBJ2gxWgjMLSagVdTRqg5jgB0tLpEtrGytaFdW7krlYRjqg6cl00=;
Message-ID: <20c3fbe7-4e19-8a03-ffcf-8177e00b37cc@xen.org>
Date: Wed, 22 Jun 2022 19:19:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: arm: Spin-up cpu instead PSCI CPU OFF
To: dmitry.semenets@gmail.com, xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220622072410.87346-1-dmitry.semenets@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220622072410.87346-1-dmitry.semenets@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Dmytro,

Title: It seems to suggest we are completely removing PSCI CPU OFF. I
would suggest renaming it to:

xen/arm: Don't use stop_cpu() in halt_this_cpu()

On 22/06/2022 08:24, dmitry.semenets@gmail.com wrote:
> From: Dmytro Semenets <dmytro_semenets@epam.com>
> 
> Use a spinning CPU with disabled interrupts instead of PSCI CPU OFF in the
> halt and reboot procedures. Some platforms can't stop a CPU via PSCI
> because the Trusted OS can't migrate execution to another CPU.

There is some information missing:
  - What's the problem if we don't do that (i.e. Xen will panic())
  - Reference to the spec
  - Why this is fine to not use PSCI off

I would suggest the following commit message:

"
When shutting down (or rebooting) the platform, Xen will call stop_cpu() 
on all the CPUs but one. The last CPU will then request the system to 
shutdown/restart.

On platforms using PSCI, stop_cpu() will call PSCI CPU off. Per the spec 
(section 5.5.2 DEN0022D.b), the call could return DENIED if the Trusted 
OS is resident on the CPU that is about to be turned off.

As Xen doesn't migrate off the trusted OS (which BTW may not be 
migratable), it would be possible to hit the panic().

In the ideal situation, Xen should migrate the trusted OS or make sure 
the CPU off is not called. However, when shutting down (or rebooting) 
the platform, it is pointless to try to turn off all the CPUs (per 
section 5.10.2, it is only required to put the core in a known state).

So solve the problem by open-coding stop_cpu() in halt_this_cpu() and 
not calling PSCI CPU off.
"

I will give an opportunity for you, Bertrand and Stefano to answer 
before committing it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 19:24:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 19:24:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354291.581354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o45xT-00082P-HK; Wed, 22 Jun 2022 19:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354291.581354; Wed, 22 Jun 2022 19:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o45xT-00082I-EA; Wed, 22 Jun 2022 19:24:11 +0000
Received: by outflank-mailman (input) for mailman id 354291;
 Wed, 22 Jun 2022 19:24:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o45xS-00082C-2I
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 19:24:10 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2d7c335-f260-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 21:24:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 3070DB81F2A;
 Wed, 22 Jun 2022 19:24:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 484D7C34114;
 Wed, 22 Jun 2022 19:24:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2d7c335-f260-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655925846;
	bh=WWkDom5U0fFW2T5yf+nx5KbkERxvChYw4S75KNRbsg8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qdnWUYBK3GDaDHms+YmDm72clarUCcfnHkQwoVbmQbWbjWsi7TDd3vMoluReO1r6G
	 P/zOjHAQR+OKmdcwCzW4CO5frXpQ6IdplZUrqjPzl2AjI9rhzNtT+7D/eEVZxCAl0+
	 +sY0DbflpWRxUdGQFeYk5eKgxplj1J6/OsQNgrm2HMZz5hlgA1Ck/YrfbWmkPRFpGT
	 Y8zjmVWLHGfw46tPX2W9jm7PPzr3wsxptdu68Ss4Gi/XiGj4COzn5pzHkgpqnmViQm
	 kcbreUOtO4o14pNa+mWQ8w6TZYNZ/AzDgPc5RreDJWt6dJF2YrtpE+9jDQuC5/7528
	 U+T+yv0KfZa1w==
Date: Wed, 22 Jun 2022 12:23:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: roberto.bagnara@bugseng.com
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Michal Orzel <Michal.Orzel@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
    Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>, 
    Daniel De Graaf <dgdegra@tycho.nsa.gov>, jbeulich@suse.com, 
    "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
In-Reply-To: <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
References: <20220620070245.77979-1-michal.orzel@arm.com> <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com> <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com> <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com> <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com> <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com> <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

+Roberto


Hi Roberto,

A quick question about Rule 8.1.


Michal sent a patch series to fix Xen against Rule 8.1 (here is a link
if you are interested: https://marc.info/?l=xen-devel&m=165570851227125)

Although we all generally agree that the changes are a good thing, there
was a question about the rule itself. Specifically, is the following
actually a violation?

  unsigned x;


Looking through the examples in the MISRA document, I can see various
instances of clearer, more obvious violations such as:

  const x;
  extern x;

but no examples of using "unsigned" without "int". Do you know if it is
considered a violation?


Thanks!

Cheers,

Stefano



On Wed, 22 Jun 2022, Jan Beulich wrote:
> >>>>> On 22.06.2022 12:25, Jan Beulich wrote:
> >>>>>> On 20.06.2022 09:02, Michal Orzel wrote:
> >>>>>>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
> >>>>>>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
> >>>>>>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
> >>>>>>> 'unsigned int' type as there are no other violations being part of that rule
> >>>>>>> in the Xen codebase.
> >>>>>>
> >>>>>> I'm puzzled, I have to admit. While I agree with all the examples in the
> >>>>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
> >>>>>> Which matches my understanding that "unsigned" and "signed" on their own
> >>>>>> (just like "long") are proper types, and hence the omission of "int"
> >>>>>> there is not an "omission of an explicit type".

[...]

> >>>> Neither the name of the variable nor the comment clarify that this is about
> >>>> the specific case of "unsigned". As said there's also the fact that they
> >>>> don't appear to point out the lack of "int" when seeing plain "long" (or
> >>>> "long long"). I fully agree that "extern x;" or "const y;" lack explicit
> >>>> "int".


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 19:29:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 19:29:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354300.581369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o462c-0000HO-6V; Wed, 22 Jun 2022 19:29:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354300.581369; Wed, 22 Jun 2022 19:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o462c-0000HH-2g; Wed, 22 Jun 2022 19:29:30 +0000
Received: by outflank-mailman (input) for mailman id 354300;
 Wed, 22 Jun 2022 19:29:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o462a-0000HB-Kr
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 19:29:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0fa7205-f261-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 21:29:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5D81B603E0;
 Wed, 22 Jun 2022 19:29:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BFFDCC34114;
 Wed, 22 Jun 2022 19:29:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0fa7205-f261-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655926166;
	bh=lEXS5xvDudB/17pOBtKnSM5CJJDIEZjJvjb4/t/prBA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rKgMyXRbNUwrlSxfOtRFbAvdpuoIAphO7cvxMbWuH/7P2ZxThPpnPFasqs2Tso613
	 QG3QqDbm+UMvE6APWnJs2N/9FfYEOycVIfFnQ7eIv7Jc6yal0a3td4rTIDI3U6tQ4A
	 u0mDVnfd1rjxIBoKx9JFM3OVOqfAtL/taVoXJUrBIK6FRKUUvlHdBf8g+Yor8O4uDL
	 YOgPuzrXHCmiK4FmQfl3Zn14Cnf33lb/e2ZmU2GObLTmGYB4J3LWfMM5QtX2ea5qP0
	 nZs9SCxjGuTMHpxzUB+uGdXy2/QqOZKPXJDxSsUmEeTinhGTMBmxU/29Nm0S/5znuz
	 gL5pp/56mNK/w==
Date: Wed, 22 Jun 2022 12:29:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] xen/lib: list-sort: Fix MISRA C 2012 Rule 8.4
 violation
In-Reply-To: <20220622151514.545850-2-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206221229180.2157383@ubuntu-linux-20-04-desktop>
References: <20220622151514.545850-1-burzalodowa@gmail.com> <20220622151514.545850-2-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Xenia Ragiadakou wrote:
> Include header <xen/list_sort.h> so that the declaration of the function
> list_sort(), which has external linkage, is visible before the function
> definition.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/lib/list-sort.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/lib/list-sort.c b/xen/lib/list-sort.c
> index f8d8bbf281..de1af2ef8b 100644
> --- a/xen/lib/list-sort.c
> +++ b/xen/lib/list-sort.c
> @@ -16,6 +16,7 @@
>   */
>  
>  #include <xen/list.h>
> +#include <xen/list_sort.h>
>  
>  #define MAX_LIST_LENGTH_BITS 20
>  
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 19:30:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 19:30:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354307.581380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o463M-0001Zr-EY; Wed, 22 Jun 2022 19:30:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354307.581380; Wed, 22 Jun 2022 19:30:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o463M-0001Zk-BN; Wed, 22 Jun 2022 19:30:16 +0000
Received: by outflank-mailman (input) for mailman id 354307;
 Wed, 22 Jun 2022 19:30:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o463K-0001ZW-Q8
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 19:30:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc5760f2-f261-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 21:30:13 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 62D7760EF4;
 Wed, 22 Jun 2022 19:30:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 64A49C34114;
 Wed, 22 Jun 2022 19:30:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc5760f2-f261-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655926211;
	bh=IwyPm/RSmjo26ngQXhzJwWqIaCH6+ra277GLEFac3pc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NiDRGdQa2V4ZE15+GXShHj+wWnHoWyD9GQDUrfcjapXvbhzDkA4znD07FlNMl3R3I
	 rI4sDaj5yMqHB3l+AUjFkLf8wKSILe52srTyAgZ0j5GSiCBl8A5LOozdpdXwWh37oF
	 ekT0N0KzRKNfq2jFzARPjHbPoM4NLvuJsZNpWXAfFKdlulCbIJBAfAwv8JTY5bxXnk
	 V1TwxC5dcHd508xeKvnoTYNW0NLiq1hsT1h/3X2SlVyb9KwHLxgTuV31/znm+s248F
	 dSezI9VqjPsM5x12k7lzi4QKySfNCrGEWGyWerc9d6zzdWb7/TUTGQy9GBsmabS2qe
	 LBzLQNiWJxkQg==
Date: Wed, 22 Jun 2022 12:30:10 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/3] xen/common: gunzip: Fix MISRA C 2012 Rule 8.4
 violation
In-Reply-To: <20220622151514.545850-3-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206221230040.2157383@ubuntu-linux-20-04-desktop>
References: <20220622151514.545850-1-burzalodowa@gmail.com> <20220622151514.545850-3-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Xenia Ragiadakou wrote:
> Include header <xen/gunzip.h> so that the declarations of functions gzip_check()
> and perform_gunzip(), which have external linkage, are visible before the
> function definitions.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/common/gunzip.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
> index b9ecc17e44..aa16fec4bb 100644
> --- a/xen/common/gunzip.c
> +++ b/xen/common/gunzip.c
> @@ -1,4 +1,5 @@
>  #include <xen/errno.h>
> +#include <xen/gunzip.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/mm.h>
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 19:31:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 19:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354315.581391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o464m-0002Cu-PM; Wed, 22 Jun 2022 19:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354315.581391; Wed, 22 Jun 2022 19:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o464m-0002Cn-M2; Wed, 22 Jun 2022 19:31:44 +0000
Received: by outflank-mailman (input) for mailman id 354315;
 Wed, 22 Jun 2022 19:31:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o464l-0002Cf-O7
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 19:31:43 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f208d641-f261-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 21:31:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 46259B820E2;
 Wed, 22 Jun 2022 19:31:42 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AC4EDC34114;
 Wed, 22 Jun 2022 19:31:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f208d641-f261-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655926300;
	bh=kR5otOXRhzX6FXyHpwbkHtG6jluAE601v5yr7HLWApo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=khZe/C45+TM5lJCvQg+zYYNrH5ySOyLIoGto6EBazLrGsHB6+wqZfCOUilR0f0j2p
	 nESK/ex3dNioiWmjfQDgj/ekvFaoey1n/XyPVpgAPh9MT1c2OEL6MDdXNWc1Phmecf
	 XRDZ7FEHGQ3hzbI39zs/iUJjjHZgB4ZjeIwtV1LwpkxLX9fx3n/g9llI4716btTbKn
	 3H/mkowW6MWiYGjJbVzFLmwCAHt7snujWojISceOZ+PqLscR92w/rSlOzDk2laMQdz
	 l5O8xFqDKIfBuhYzYdCz0YawR6kldgaLloDvQUtvL2pghfeHceTtoKBuCn9hL7oJWr
	 VexMTS0SaoNuQ==
Date: Wed, 22 Jun 2022 12:31:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/common: device_tree: Fix MISRA C 2012 Rule 8.7
 violation
In-Reply-To: <20220622151557.545880-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206221231260.2157383@ubuntu-linux-20-04-desktop>
References: <20220622151557.545880-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Xenia Ragiadakou wrote:
> The function __dt_n_size_cells() is referenced only in device_tree.c.
> Change the linkage of the function from external to internal by adding
> the storage-class specifier static to the function definition.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
> warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/common/device_tree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 0e8798bd24..6c9712ab7b 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -496,7 +496,7 @@ static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent)
>      return DT_ROOT_NODE_ADDR_CELLS_DEFAULT;
>  }
>  
> -int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
> +static int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
>  {
>      const __be32 *ip;
>  
> -- 
> 2.34.1
> 
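The change above is an instance of a general MISRA C 2012 linkage pattern
(Rules 8.7/8.8): a function referenced from only one translation unit should
be given internal linkage with the static storage-class specifier, keeping it
out of the global symbol namespace. A minimal sketch with a hypothetical
helper, not the actual __dt_n_size_cells() logic:

```c
/* Hypothetical helper used only within this translation unit. Making
 * it static (as the patch does for __dt_n_size_cells) gives it internal
 * linkage, so no external declaration is required and no Rule 8.4
 * warning about a missing visible declaration can arise. */
static int dt_cells_or_default(int cells, int default_cells)
{
    /* a non-positive cell count means "use the default", loosely
     * mirroring the DT_ROOT_NODE_*_CELLS_DEFAULT fallbacks */
    return cells > 0 ? cells : default_cells;
}
```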


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 19:49:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 19:49:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354327.581402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o46MI-0003rQ-EY; Wed, 22 Jun 2022 19:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354327.581402; Wed, 22 Jun 2022 19:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o46MI-0003rJ-AJ; Wed, 22 Jun 2022 19:49:50 +0000
Received: by outflank-mailman (input) for mailman id 354327;
 Wed, 22 Jun 2022 19:49:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o46MG-0003r9-UI; Wed, 22 Jun 2022 19:49:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o46MG-0003wP-RT; Wed, 22 Jun 2022 19:49:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o46MG-0000R6-FK; Wed, 22 Jun 2022 19:49:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o46MG-0001TD-Eg; Wed, 22 Jun 2022 19:49:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QQ9TBVc1dFsIH4iCHCiL9hgb55PbfYE6GlswViB7tYI=; b=SXhznsUwuJ/+nkGeQohVVkRNXa
	8JCnoPK3w6Y4736AG9kOvx0Iis2kjRKYs2MD2y4jOCiWLHTOVsEmacngHWG9PwL4cXL+IAT4qtAxk
	1DNLNZGK1595p/ebKC3wtbSdEzHSA0c2veJ+Nz47jD+ilwzGSm1u3hvf4xWdxprRXcso=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171311-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171311: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3930d1791a0657a422d50f4d2e2d2683c36e34b8
X-Osstest-Versions-That:
    ovmf=b97243dea3c95ad923fa4ca190940158209e8384
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 19:49:48 +0000

flight 171311 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171311/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3930d1791a0657a422d50f4d2e2d2683c36e34b8
baseline version:
 ovmf                 b97243dea3c95ad923fa4ca190940158209e8384

Last test of basis   171304  2022-06-22 01:40:36 Z    0 days
Testing same since   171311  2022-06-22 15:11:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b97243dea3..3930d1791a  3930d1791a0657a422d50f4d2e2d2683c36e34b8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 20:45:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 20:45:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354338.581412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o47DT-0001MS-KM; Wed, 22 Jun 2022 20:44:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354338.581412; Wed, 22 Jun 2022 20:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o47DT-0001ML-Ha; Wed, 22 Jun 2022 20:44:47 +0000
Received: by outflank-mailman (input) for mailman id 354338;
 Wed, 22 Jun 2022 20:44:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XHpp=W5=gmail.com=christopher.w.clark@srs-se1.protection.inumbo.net>)
 id 1o47DR-0001MF-P3
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 20:44:45 +0000
Received: from mail-vs1-xe2f.google.com (mail-vs1-xe2f.google.com
 [2607:f8b0:4864:20::e2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 250d5893-f26c-11ec-bd2d-47488cf2e6aa;
 Wed, 22 Jun 2022 22:44:44 +0200 (CEST)
Received: by mail-vs1-xe2f.google.com with SMTP id 184so3187903vsz.2
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 13:44:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 250d5893-f26c-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=8bxbF05ZATqB3gxHyQNYTVLA6bjts9qBhfiHvLZyCy4=;
        b=nbSpEF02yLgLmrv4C2454mTdkwm60casljwOWdzpdWYcUpc2vriavG8TE/AC5I5tTt
         V0nRWLlwmoT3zFmx7HlEfAmhjVk75Cmw8F613n/nBhg9pubDW//jjJ3D0YZcL43Tziwp
         nTl9mG6xsyRV7J9IUcY6OMsHrz1VWXfEH4v92xiTtpzjhjkdXX5qZ1MYfH9A5cOlvNo2
         ip0vMlgbaTZj/XtsR9o0UZlcf8hN4QTZAZEcLtGiBE1CnlVMJveFA6rgL7r8x50DI94/
         zh9M1LFMvhT7VH155Tes/EP1aVEuiNRNT2DfbPqC0wv67oalGJx6fvO3Z29OqwDhCsHa
         R3+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=8bxbF05ZATqB3gxHyQNYTVLA6bjts9qBhfiHvLZyCy4=;
        b=A2hlIDe45Xtid8hFgMS98GO7fMicVthwJwPq3Ofc0KPkSqRu3stjG349nwmW8gJ03u
         zfh5bs2PisFx4jVDlWnLdOZ4TjXytrEzlJivvTiAkSra0P3uCOkeSdalvpsGcWUfXZCE
         9IIvyoIxgq85R+P1uEZ1oxr99arLIO3jAuquoJTgfe4CkdU32rqUQcizM/Rci1SruGfY
         4Z6nFyZykq9OImM+1imdVbmcZY0vZHA0QBcq6bsVWxhJrteWzFjTDb3Y5BSr7sOyehG9
         HOajy4jiApRDJqiczksys9m4MyWAVgU2I+9QtC74I9lb+wpI5CX3eaeFWXdWE9mRKmks
         Tjlg==
X-Gm-Message-State: AJIora+IaUtW3RXBjYu+2PKuf79PlqpAPauAA7MwieQmqJfHqexYVApv
	qfBFJqzuHrKsiVeQTyMR26xZm7qpFIhrY3v2VAI=
X-Google-Smtp-Source: AGRyM1uCuuXQYL1qxO+Lmk1tq/4POG5GRU2gMHpgMBJE89WKBmUYddl0wrMQnDk5qGdMVkqeBfvNvqWXaFnJBtxY9PY=
X-Received: by 2002:a67:e00d:0:b0:354:2ab6:d355 with SMTP id
 c13-20020a67e00d000000b003542ab6d355mr9118327vsl.0.1655930682713; Wed, 22 Jun
 2022 13:44:42 -0700 (PDT)
MIME-Version: 1.0
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org> <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org> <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
 <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com> <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
 <A06EA6F6-BBB5-4FDC-BEA0-E5C6EB6B445B@arm.com>
In-Reply-To: <A06EA6F6-BBB5-4FDC-BEA0-E5C6EB6B445B@arm.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Wed, 22 Jun 2022 13:44:31 -0700
Message-ID: <CACMJ4Gb4CPDP5OmW+D50QCALvVo82rvw_7yO0ze0u5fh6ey_Pw@mail.gmail.com>
Subject: Re: XTF-on-ARM: Bugs
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, Michal Orzel <Michal.Orzel@arm.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Daniel Smith <dpsmith@apertussolutions.com>, Roger Pau Monne <roger.pau@citrix.com>, 
	George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 22, 2022 at 9:40 AM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi Andrew,
>
> > On 22 Jun 2022, at 17:28, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> >
> > On 22/06/2022 13:32, Bertrand Marquis wrote:
> >> Hi Andrew and Christopher,
> >>
> >> I will not dig into the details of the issues you currently have
> >> but it seems you are trying to re-do the work we already did
> >> and have been using for quite a while.

Hi Bertrand - I apologise if it seems that way, and for the pace of
this being slower than you had been expecting. I don't think I have
actually been re-doing it, and I'm grateful that you have made your
team's work available. I am working to get what you need integrated
upstream.

> >> Currently we maintain the xtf on arm code in gitlab and we
> >> recently rebased it on the latest xtf master:
> >> https://gitlab.com/xen-project/people/bmarquis/xtf
> >>
> >> If possible I would suggest to start from there.

Thanks - I will add this to the sources I am working with.

> >
> > Sorry to be blunt, but no.  I've requested several times for that series
> > to be broken down into something which is actually reviewable, and
> > because that has not been done, I'm doing it at the fastest pace my
> > other priorities allow.
>
> You have not requested anything, we have been asking for a year
> what we could do to help without getting any answer.

At Andy's request I had been looking into verifying the minimal
necessary pieces to get the 32-bit Arm platform implementation to
support a minimal stub test, as well as the XTF infrastructure (e.g.
printf, xtf return code reporting) that wasn't present in the posted
work. The aim of that work was to build my familiarity with it and to
inform the judgement involved in ensuring that the initial pieces
merged into XTF have a maintainable structure to support each of the
architectures (and configurations of each) that we need.
It's taken longer than I wanted, and it is clear that there is urgency
to getting 64-bit Arm support integrated.

>
> >
> > Notice how 2/3 of the patches in the past year have been bits
> > specifically carved out of the ARM series, or improvements to prevent
> > the ARM series introducing technical debt.  Furthermore, you've not
> > taken the "build ARM in CI" patch that I wrote specifically for you to
> > be part of the series, and you've got breakages to x86 from rebasing.
>
> Which patch ? Where ? There was no communication on anything like that.
>
> >
> > At this point, I am not interested in seeing any work which is not
> > morphing (and mostly pruning) the arm-wip branch down into a set of
> > clean build system modifications that can bootstrap the
> > as-minimal-as-I-can-make-it stub.
>
> You cannot expect us to poll on all the possible branches that you are creating
> and simply rework what we did when you do something on some branch.
>
> We went through what you requested using GitHub and asked you at almost all
> Xen Community Call what we could do to go further without getting any answer.

I will continue to be reachable via the Community Calls. I will have a
better understanding of what steps are needed next after reviewing the
branch that you have posted.

> You are not interested in us contributing to XTF, this is understood.

No, that's really not the case; your contributions are highly valued.

There's a gap that needs to be closed here between the needs of the
contributors (ie. you guys), who want platform support working and the
ability to make incremental contributions for new tests, and what the
maintainer is looking for: a structure that implements an intended
design, supporting the many target configurations in a coherent
fashion, and introduced in small, concise logical steps. That design
is difficult for contributors to know without documentation for it,
which is again time-consuming to produce. It's compounded by the fact
that this is intricate system software where hardware
platform-specific details are critical for reviewers and contributors
to understand and implement exactly correctly.

So: I'm working on closing the current gap, aiming to make meaningful
progress in the short term, and I will communicate its status to you
more clearly in the coming weeks.
I also think that once the initial platform support is merged, ongoing
contributions will be both easier to produce and easier to review, to
the advantage of all.

thanks

Christopher

>
> Cheers
> Bertrand
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 21:13:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 21:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354346.581424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o47fM-0004cK-Un; Wed, 22 Jun 2022 21:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354346.581424; Wed, 22 Jun 2022 21:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o47fM-0004cD-QB; Wed, 22 Jun 2022 21:13:36 +0000
Received: by outflank-mailman (input) for mailman id 354346;
 Wed, 22 Jun 2022 21:13:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o47fK-0004c3-Ms; Wed, 22 Jun 2022 21:13:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o47fK-0005Sy-KR; Wed, 22 Jun 2022 21:13:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o47fK-0006Ne-9k; Wed, 22 Jun 2022 21:13:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o47fK-0005Tm-9L; Wed, 22 Jun 2022 21:13:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fN0WhAtYr/6/GH98Jsqt9tpPsbRM67sNe97ucjIE0Gw=; b=4SlTx/17iXDoP18PbjAvxtDPu4
	Gcv7YZ7h7lce7ez9/4ngJTDe9pX6vmHcmVXIDncNXLP448nYQigk9WaUrBQVdGHVX5N3/SZpkWFcF
	01GBaa6Aho84S6BQYlQR7N6z9635t1a36fldHBkXuYz8w6ontM6d2qHgv0B4XVB1o0c4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171314-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171314: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
X-Osstest-Versions-That:
    xen=15d93068e3484cb14006e935734a1e6088f228fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 21:13:34 +0000

flight 171314 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171314/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b
baseline version:
 xen                  15d93068e3484cb14006e935734a1e6088f228fd

Last test of basis   171308  2022-06-22 11:01:53 Z    0 days
Testing same since   171314  2022-06-22 18:01:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   15d93068e3..65f684b728  65f684b728f779e170335e9e0cbbf82f7e1c7e5b -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 21:52:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jun 2022 21:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354357.581435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o48H4-0000K5-Sj; Wed, 22 Jun 2022 21:52:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354357.581435; Wed, 22 Jun 2022 21:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o48H4-0000Jy-Pv; Wed, 22 Jun 2022 21:52:34 +0000
Received: by outflank-mailman (input) for mailman id 354357;
 Wed, 22 Jun 2022 21:52:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/BDV=W5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o48H3-0000Js-4W
 for xen-devel@lists.xenproject.org; Wed, 22 Jun 2022 21:52:33 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9bf9ca15-f275-11ec-b725-ed86ccbb4733;
 Wed, 22 Jun 2022 23:52:29 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AC7AC6197F;
 Wed, 22 Jun 2022 21:52:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A5A98C34114;
 Wed, 22 Jun 2022 21:52:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bf9ca15-f275-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1655934747;
	bh=hewWTvLMfxcgGnqzRxJwDBH/CKwoImFi/eNSFiZDLtM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Kt+S2M4R6NdFPbHO48kLATlD5FbGK7ex7LDuaqf4k+kT7LV2LvUXyDIuUZDeqyeky
	 kj77U6bLzID4TMsvQqTzWKYe48XVru5xMJEXEdu301rEPk4UGpTF93iPMm0QP6bGml
	 vUEPy2pDzCTBrEBOmMnop2P/NP9VsZ30mGZclV9Ugr1YdjGG+l3YoMsmdDEOMZS0dy
	 aZckdHxzqVzZEoEY/8aVRUGvJXQSKoqe5As0fHD76ggCmAh/fd1+ZdYjGbic6xX9iM
	 grRWQkF1xf4nNsUe9EgyFuX2tpkWzbo59nTdi9oFTWjqKlWOdBC+Rie5I+/LwPwnFR
	 pUsbjYCdjAqXg==
Date: Wed, 22 Jun 2022 14:52:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
In-Reply-To: <FE2CD795-09AC-4AD0-8F08-8320FE7122C5@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206221445520.2352613@ubuntu-linux-20-04-desktop>
References: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com> <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop> <FE2CD795-09AC-4AD0-8F08-8320FE7122C5@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 22 Jun 2022, at 01:00, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Fri, 10 Jun 2022, Bertrand Marquis wrote:
> >> cppcheck MISRA addon can be used to check for non compliance to some of
> >> the MISRA standard rules.
> >> 
> >> Add a CPPCHECK_MISRA variable that can be set to "y" using make command
> >> line to generate a cppcheck report including cppcheck misra checks.
> >> 
> >> When MISRA checking is enabled, a file with a text description suitable
> >> for cppcheck misra addon is generated out of Xen documentation file
> >> which lists the rules followed by Xen (docs/misra/rules.rst).
> >> 
> >> By default MISRA checking is turned off.
> >> 
> >> While adding cppcheck-misra files to gitignore, also fix the missing /
> >> for htmlreport gitignore
> >> 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > 
> > Hi Bertrand,
> > 
> > I tried this patch and I am a bit confused by the output
> > cppcheck-misra.txt file that I get (appended.)
> > 
> > I can see that there are all the rules from docs/misra/rules.rst as it
> > should be together with the one line summary, but there are also a bunch
> > of additional rules not present in docs/misra/rules.rst. Starting from
> > Rule 1.1 all the way to Rule 21.21. Is this expected?
> 
> To make cppcheck happy I need to give a text for all rules, so the python script generates a dummy sentence for the MISRA rules not declared in our documentation, to prevent cppcheck warnings. To keep it simple, I did this for main and sub numbers 1 to 22.
> 
> So yes this is expected.

No problem about the dummy text sentence. My question was why are all
those additional rules listed?

As you can see below, the first few rules, from 2.1 to 20.14, come
from docs/misra/rules.rst. Why are the other rules afterwards, from
1.1 to 21.21, listed, and where are they coming from?

Is it because all rules need to be listed? And the ones that are enabled
are marked as "Required"?

I take it we couldn't just avoid listing the other rules (the ones not
in docs/misra/rules.rst)?
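The scheme Bertrand describes (real summaries for the rules documented in
docs/misra/rules.rst, dummy text for every other rule number so the addon
stays quiet) can be sketched roughly as follows. The actual generator is a
python script driven by docs/misra/rules.rst; the function names and the
single documented rule below are illustrative only:

```c
#include <stdio.h>

/* Return the documented summary for a rule, or NULL when the rule is
 * not in docs/misra/rules.rst and should get a dummy line instead.
 * Only one real entry is shown here for illustration. */
static const char *documented_text(int major, int minor)
{
    if ( major == 8 && minor == 4 )
        return "A compatible declaration shall be visible when an object "
               "or function with external linkage is defined (Misra rule 8.4)";
    return NULL;
}

/* Emit a rule-texts file in the shape cppcheck's MISRA addon expects
 * (consumed via the addon's --rule-texts option): a description line
 * for every rule number, so the addon never warns that a rule text is
 * missing. */
static void emit_rule_texts(FILE *out)
{
    fprintf(out, "Appendix A Summary of guidelines\n");
    for ( int major = 1; major <= 22; major++ )
        for ( int minor = 1; minor <= 22; minor++ )
        {
            const char *text = documented_text(major, minor);

            fprintf(out, "Rule %d.%d Required\n", major, minor);
            fprintf(out, "%s\n", text ? text
                                      : "Dummy text for undocumented rule");
        }
}
```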


> > Appendix A Summary of guidelines
> > Rule 2.1 Required
> > All source files shall compile without any compilation errors (Misra rule 2.1)
> > Rule 4.7 Required
> > If a function returns error information then that error information shall be tested (Misra rule 4.7)
> > Rule 4.10 Required
> > Precautions shall be taken in order to prevent the contents of a header file being included more than once (Misra rule 4.10)
> > Rule 4.14 Required
> > The validity of values received from external sources shall be checked (Misra rule 4.14)
> > Rule 1.3 Required
> > There shall be no occurrence of undefined or critical unspecified behaviour (Misra rule 1.3)
> > Rule 3.2 Required
> > Line-splicing shall not be used in // comments (Misra rule 3.2)
> > Rule 5.1 Required
> > External identifiers shall be distinct (Misra rule 5.1)
> > Rule 5.2 Required
> > Identifiers declared in the same scope and name space shall be distinct (Misra rule 5.2)
> > Rule 5.3 Required
> > An identifier declared in an inner scope shall not hide an identifier declared in an outer scope (Misra rule 5.3)
> > Rule 5.4 Required
> > Macro identifiers shall be distinct (Misra rule 5.4)
> > Rule 6.2 Required
> > Single-bit named bit fields shall not be of a signed type (Misra rule 6.2)
> > Rule 8.1 Required
> > Types shall be explicitly specified (Misra rule 8.1)
> > Rule 8.4 Required
> > A compatible declaration shall be visible when an object or function with external linkage is defined (Misra rule 8.4)
> > Rule 8.5 Required
> > An external object or function shall be declared once in one and only one file (Misra rule 8.5)
> > Rule 8.6 Required
> > An identifier with external linkage shall have exactly one external definition (Misra rule 8.6)
> > Rule 8.8 Required
> > The static storage class specifier shall be used in all declarations of objects and functions that have internal linkage (Misra rule 8.8)
> > Rule 8.10 Required
> > An inline function shall be declared with the static storage class (Misra rule 8.10)
> > Rule 8.12 Required
> > Within an enumerator list the value of an implicitly-specified enumeration constant shall be unique (Misra rule 8.12)
> > Rule 9.1 Mandatory
> > The value of an object with automatic storage duration shall not be read before it has been set (Misra rule 9.1)
> > Rule 9.2 Required
> > The initializer for an aggregate or union shall be enclosed in braces (Misra rule 9.2)
> > Rule 13.6 Mandatory
> > The operand of the sizeof operator shall not contain any expression which has potential side effects (Misra rule 13.6)
> > Rule 14.1 Required
> > A loop counter shall not have essentially floating type (Misra rule 14.1)
> > Rule 16.7 Required
> > A switch-expression shall not have essentially Boolean type (Misra rule 16.7)
> > Rule 17.3 Mandatory
> > A function shall not be declared implicitly (Misra rule 17.3)
> > Rule 17.4 Mandatory
> > All exit paths from a function with non-void return type shall have an explicit return statement with an expression (Misra rule 17.4)
> > Rule 20.7 Required
> > Expressions resulting from the expansion of macro parameters shall be enclosed in parentheses (Misra rule 20.7)
> > Rule 20.13 Required
> > A line whose first token is # shall be a valid preprocessing directive (Misra rule 20.13)
> > Rule 20.14 Required
> > All #else #elif and #endif preprocessor directives shall reside in the same file as the #if #ifdef or #ifndef directive to which they are related (Misra rule 20.14)
> > Rule 1.1
> > No description for rule 1.1
> > Rule 1.2
> > No description for rule 1.2
> > Rule 1.4
> > No description for rule 1.4
> > Rule 1.5
> > No description for rule 1.5
> > Rule 1.6
> > No description for rule 1.6
> > Rule 1.7
> > No description for rule 1.7
> > Rule 1.8
> > No description for rule 1.8
> > Rule 1.9
> > No description for rule 1.9
> > Rule 1.10
> > No description for rule 1.10
> > Rule 1.11
> > No description for rule 1.11
> > Rule 1.12
> > No description for rule 1.12
> > Rule 1.13
> > No description for rule 1.13
> > Rule 1.14
> > No description for rule 1.14
> > Rule 1.15
> > No description for rule 1.15
> > Rule 1.16
> > No description for rule 1.16
> > Rule 1.17
> > No description for rule 1.17
> > Rule 1.18
> > No description for rule 1.18
> > Rule 1.19
> > No description for rule 1.19
> > Rule 1.20
> > No description for rule 1.20
> > Rule 1.21
> > No description for rule 1.21
> > Rule 2.2
> > No description for rule 2.2
> > Rule 2.3
> > No description for rule 2.3
> > Rule 2.4
> > No description for rule 2.4
> > Rule 2.5
> > No description for rule 2.5
> > Rule 2.6
> > No description for rule 2.6
> > Rule 2.7
> > No description for rule 2.7
> > Rule 2.8
> > No description for rule 2.8
> > Rule 2.9
> > No description for rule 2.9
> > Rule 2.10
> > No description for rule 2.10
> > Rule 2.11
> > No description for rule 2.11
> > Rule 2.12
> > No description for rule 2.12
> > Rule 2.13
> > No description for rule 2.13
> > Rule 2.14
> > No description for rule 2.14
> > Rule 2.15
> > No description for rule 2.15
> > Rule 2.16
> > No description for rule 2.16
> > Rule 2.17
> > No description for rule 2.17
> > Rule 2.18
> > No description for rule 2.18
> > Rule 2.19
> > No description for rule 2.19
> > Rule 2.20
> > No description for rule 2.20
> > Rule 2.21
> > No description for rule 2.21
> > Rule 3.1
> > No description for rule 3.1
> > Rule 3.3
> > No description for rule 3.3
> > Rule 3.4
> > No description for rule 3.4
> > Rule 3.5
> > No description for rule 3.5
> > Rule 3.6
> > No description for rule 3.6
> > Rule 3.7
> > No description for rule 3.7
> > Rule 3.8
> > No description for rule 3.8
> > Rule 3.9
> > No description for rule 3.9
> > Rule 3.10
> > No description for rule 3.10
> > Rule 3.11
> > No description for rule 3.11
> > Rule 3.12
> > No description for rule 3.12
> > Rule 3.13
> > No description for rule 3.13
> > Rule 3.14
> > No description for rule 3.14
> > Rule 3.15
> > No description for rule 3.15
> > Rule 3.16
> > No description for rule 3.16
> > Rule 3.17
> > No description for rule 3.17
> > Rule 3.18
> > No description for rule 3.18
> > Rule 3.19
> > No description for rule 3.19
> > Rule 3.20
> > No description for rule 3.20
> > Rule 3.21
> > No description for rule 3.21
> > Rule 4.1
> > No description for rule 4.1
> > Rule 4.2
> > No description for rule 4.2
> > Rule 4.3
> > No description for rule 4.3
> > Rule 4.4
> > No description for rule 4.4
> > Rule 4.5
> > No description for rule 4.5
> > Rule 4.6
> > No description for rule 4.6
> > Rule 4.8
> > No description for rule 4.8
> > Rule 4.9
> > No description for rule 4.9
> > Rule 4.11
> > No description for rule 4.11
> > Rule 4.12
> > No description for rule 4.12
> > Rule 4.13
> > No description for rule 4.13
> > Rule 4.15
> > No description for rule 4.15
> > Rule 4.16
> > No description for rule 4.16
> > Rule 4.17
> > No description for rule 4.17
> > Rule 4.18
> > No description for rule 4.18
> > Rule 4.19
> > No description for rule 4.19
> > Rule 4.20
> > No description for rule 4.20
> > Rule 4.21
> > No description for rule 4.21
> > Rule 5.5
> > No description for rule 5.5
> > Rule 5.6
> > No description for rule 5.6
> > Rule 5.7
> > No description for rule 5.7
> > Rule 5.8
> > No description for rule 5.8
> > Rule 5.9
> > No description for rule 5.9
> > Rule 5.10
> > No description for rule 5.10
> > Rule 5.11
> > No description for rule 5.11
> > Rule 5.12
> > No description for rule 5.12
> > Rule 5.13
> > No description for rule 5.13
> > Rule 5.14
> > No description for rule 5.14
> > Rule 5.15
> > No description for rule 5.15
> > Rule 5.16
> > No description for rule 5.16
> > Rule 5.17
> > No description for rule 5.17
> > Rule 5.18
> > No description for rule 5.18
> > Rule 5.19
> > No description for rule 5.19
> > Rule 5.20
> > No description for rule 5.20
> > Rule 5.21
> > No description for rule 5.21
> > Rule 6.1
> > No description for rule 6.1
> > Rule 6.3
> > No description for rule 6.3
> > Rule 6.4
> > No description for rule 6.4
> > Rule 6.5
> > No description for rule 6.5
> > Rule 6.6
> > No description for rule 6.6
> > Rule 6.7
> > No description for rule 6.7
> > Rule 6.8
> > No description for rule 6.8
> > Rule 6.9
> > No description for rule 6.9
> > Rule 6.10
> > No description for rule 6.10
> > Rule 6.11
> > No description for rule 6.11
> > Rule 6.12
> > No description for rule 6.12
> > Rule 6.13
> > No description for rule 6.13
> > Rule 6.14
> > No description for rule 6.14
> > Rule 6.15
> > No description for rule 6.15
> > Rule 6.16
> > No description for rule 6.16
> > Rule 6.17
> > No description for rule 6.17
> > Rule 6.18
> > No description for rule 6.18
> > Rule 6.19
> > No description for rule 6.19
> > Rule 6.20
> > No description for rule 6.20
> > Rule 6.21
> > No description for rule 6.21
> > Rule 7.1
> > No description for rule 7.1
> > Rule 7.2
> > No description for rule 7.2
> > Rule 7.3
> > No description for rule 7.3
> > Rule 7.4
> > No description for rule 7.4
> > Rule 7.5
> > No description for rule 7.5
> > Rule 7.6
> > No description for rule 7.6
> > Rule 7.7
> > No description for rule 7.7
> > Rule 7.8
> > No description for rule 7.8
> > Rule 7.9
> > No description for rule 7.9
> > Rule 7.10
> > No description for rule 7.10
> > Rule 7.11
> > No description for rule 7.11
> > Rule 7.12
> > No description for rule 7.12
> > Rule 7.13
> > No description for rule 7.13
> > Rule 7.14
> > No description for rule 7.14
> > Rule 7.15
> > No description for rule 7.15
> > Rule 7.16
> > No description for rule 7.16
> > Rule 7.17
> > No description for rule 7.17
> > Rule 7.18
> > No description for rule 7.18
> > Rule 7.19
> > No description for rule 7.19
> > Rule 7.20
> > No description for rule 7.20
> > Rule 7.21
> > No description for rule 7.21
> > Rule 8.2
> > No description for rule 8.2
> > Rule 8.3
> > No description for rule 8.3
> > Rule 8.7
> > No description for rule 8.7
> > Rule 8.9
> > No description for rule 8.9
> > Rule 8.11
> > No description for rule 8.11
> > Rule 8.13
> > No description for rule 8.13
> > Rule 8.14
> > No description for rule 8.14
> > Rule 8.15
> > No description for rule 8.15
> > Rule 8.16
> > No description for rule 8.16
> > Rule 8.17
> > No description for rule 8.17
> > Rule 8.18
> > No description for rule 8.18
> > Rule 8.19
> > No description for rule 8.19
> > Rule 8.20
> > No description for rule 8.20
> > Rule 8.21
> > No description for rule 8.21
> > Rule 9.3
> > No description for rule 9.3
> > Rule 9.4
> > No description for rule 9.4
> > Rule 9.5
> > No description for rule 9.5
> > Rule 9.6
> > No description for rule 9.6
> > Rule 9.7
> > No description for rule 9.7
> > Rule 9.8
> > No description for rule 9.8
> > Rule 9.9
> > No description for rule 9.9
> > Rule 9.10
> > No description for rule 9.10
> > Rule 9.11
> > No description for rule 9.11
> > Rule 9.12
> > No description for rule 9.12
> > Rule 9.13
> > No description for rule 9.13
> > Rule 9.14
> > No description for rule 9.14
> > Rule 9.15
> > No description for rule 9.15
> > Rule 9.16
> > No description for rule 9.16
> > Rule 9.17
> > No description for rule 9.17
> > Rule 9.18
> > No description for rule 9.18
> > Rule 9.19
> > No description for rule 9.19
> > Rule 9.20
> > No description for rule 9.20
> > Rule 9.21
> > No description for rule 9.21
> > Rule 10.1
> > No description for rule 10.1
> > Rule 10.2
> > No description for rule 10.2
> > Rule 10.3
> > No description for rule 10.3
> > Rule 10.4
> > No description for rule 10.4
> > Rule 10.5
> > No description for rule 10.5
> > Rule 10.6
> > No description for rule 10.6
> > Rule 10.7
> > No description for rule 10.7
> > Rule 10.8
> > No description for rule 10.8
> > Rule 10.9
> > No description for rule 10.9
> > Rule 10.10
> > No description for rule 10.10
> > Rule 10.11
> > No description for rule 10.11
> > Rule 10.12
> > No description for rule 10.12
> > Rule 10.13
> > No description for rule 10.13
> > Rule 10.14
> > No description for rule 10.14
> > Rule 10.15
> > No description for rule 10.15
> > Rule 10.16
> > No description for rule 10.16
> > Rule 10.17
> > No description for rule 10.17
> > Rule 10.18
> > No description for rule 10.18
> > Rule 10.19
> > No description for rule 10.19
> > Rule 10.20
> > No description for rule 10.20
> > Rule 10.21
> > No description for rule 10.21
> > Rule 11.1
> > No description for rule 11.1
> > Rule 11.2
> > No description for rule 11.2
> > Rule 11.3
> > No description for rule 11.3
> > Rule 11.4
> > No description for rule 11.4
> > Rule 11.5
> > No description for rule 11.5
> > Rule 11.6
> > No description for rule 11.6
> > Rule 11.7
> > No description for rule 11.7
> > Rule 11.8
> > No description for rule 11.8
> > Rule 11.9
> > No description for rule 11.9
> > Rule 11.10
> > No description for rule 11.10
> > Rule 11.11
> > No description for rule 11.11
> > Rule 11.12
> > No description for rule 11.12
> > Rule 11.13
> > No description for rule 11.13
> > Rule 11.14
> > No description for rule 11.14
> > Rule 11.15
> > No description for rule 11.15
> > Rule 11.16
> > No description for rule 11.16
> > Rule 11.17
> > No description for rule 11.17
> > Rule 11.18
> > No description for rule 11.18
> > Rule 11.19
> > No description for rule 11.19
> > Rule 11.20
> > No description for rule 11.20
> > Rule 11.21
> > No description for rule 11.21
> > Rule 12.1
> > No description for rule 12.1
> > Rule 12.2
> > No description for rule 12.2
> > Rule 12.3
> > No description for rule 12.3
> > Rule 12.4
> > No description for rule 12.4
> > Rule 12.5
> > No description for rule 12.5
> > Rule 12.6
> > No description for rule 12.6
> > Rule 12.7
> > No description for rule 12.7
> > Rule 12.8
> > No description for rule 12.8
> > Rule 12.9
> > No description for rule 12.9
> > Rule 12.10
> > No description for rule 12.10
> > Rule 12.11
> > No description for rule 12.11
> > Rule 12.12
> > No description for rule 12.12
> > Rule 12.13
> > No description for rule 12.13
> > Rule 12.14
> > No description for rule 12.14
> > Rule 12.15
> > No description for rule 12.15
> > Rule 12.16
> > No description for rule 12.16
> > Rule 12.17
> > No description for rule 12.17
> > Rule 12.18
> > No description for rule 12.18
> > Rule 12.19
> > No description for rule 12.19
> > Rule 12.20
> > No description for rule 12.20
> > Rule 12.21
> > No description for rule 12.21
> > Rule 13.1
> > No description for rule 13.1
> > Rule 13.2
> > No description for rule 13.2
> > Rule 13.3
> > No description for rule 13.3
> > Rule 13.4
> > No description for rule 13.4
> > Rule 13.5
> > No description for rule 13.5
> > Rule 13.7
> > No description for rule 13.7
> > Rule 13.8
> > No description for rule 13.8
> > Rule 13.9
> > No description for rule 13.9
> > Rule 13.10
> > No description for rule 13.10
> > Rule 13.11
> > No description for rule 13.11
> > Rule 13.12
> > No description for rule 13.12
> > Rule 13.13
> > No description for rule 13.13
> > Rule 13.14
> > No description for rule 13.14
> > Rule 13.15
> > No description for rule 13.15
> > Rule 13.16
> > No description for rule 13.16
> > Rule 13.17
> > No description for rule 13.17
> > Rule 13.18
> > No description for rule 13.18
> > Rule 13.19
> > No description for rule 13.19
> > Rule 13.20
> > No description for rule 13.20
> > Rule 13.21
> > No description for rule 13.21
> > Rule 14.2
> > No description for rule 14.2
> > Rule 14.3
> > No description for rule 14.3
> > Rule 14.4
> > No description for rule 14.4
> > Rule 14.5
> > No description for rule 14.5
> > Rule 14.6
> > No description for rule 14.6
> > Rule 14.7
> > No description for rule 14.7
> > Rule 14.8
> > No description for rule 14.8
> > Rule 14.9
> > No description for rule 14.9
> > Rule 14.10
> > No description for rule 14.10
> > Rule 14.11
> > No description for rule 14.11
> > Rule 14.12
> > No description for rule 14.12
> > Rule 14.13
> > No description for rule 14.13
> > Rule 14.14
> > No description for rule 14.14
> > Rule 14.15
> > No description for rule 14.15
> > Rule 14.16
> > No description for rule 14.16
> > Rule 14.17
> > No description for rule 14.17
> > Rule 14.18
> > No description for rule 14.18
> > Rule 14.19
> > No description for rule 14.19
> > Rule 14.20
> > No description for rule 14.20
> > Rule 14.21
> > No description for rule 14.21
> > Rule 15.1
> > No description for rule 15.1
> > Rule 15.2
> > No description for rule 15.2
> > Rule 15.3
> > No description for rule 15.3
> > Rule 15.4
> > No description for rule 15.4
> > Rule 15.5
> > No description for rule 15.5
> > Rule 15.6
> > No description for rule 15.6
> > Rule 15.7
> > No description for rule 15.7
> > Rule 15.8
> > No description for rule 15.8
> > Rule 15.9
> > No description for rule 15.9
> > Rule 15.10
> > No description for rule 15.10
> > Rule 15.11
> > No description for rule 15.11
> > Rule 15.12
> > No description for rule 15.12
> > Rule 15.13
> > No description for rule 15.13
> > Rule 15.14
> > No description for rule 15.14
> > Rule 15.15
> > No description for rule 15.15
> > Rule 15.16
> > No description for rule 15.16
> > Rule 15.17
> > No description for rule 15.17
> > Rule 15.18
> > No description for rule 15.18
> > Rule 15.19
> > No description for rule 15.19
> > Rule 15.20
> > No description for rule 15.20
> > Rule 15.21
> > No description for rule 15.21
> > Rule 16.1
> > No description for rule 16.1
> > Rule 16.2
> > No description for rule 16.2
> > Rule 16.3
> > No description for rule 16.3
> > Rule 16.4
> > No description for rule 16.4
> > Rule 16.5
> > No description for rule 16.5
> > Rule 16.6
> > No description for rule 16.6
> > Rule 16.8
> > No description for rule 16.8
> > Rule 16.9
> > No description for rule 16.9
> > Rule 16.10
> > No description for rule 16.10
> > Rule 16.11
> > No description for rule 16.11
> > Rule 16.12
> > No description for rule 16.12
> > Rule 16.13
> > No description for rule 16.13
> > Rule 16.14
> > No description for rule 16.14
> > Rule 16.15
> > No description for rule 16.15
> > Rule 16.16
> > No description for rule 16.16
> > Rule 16.17
> > No description for rule 16.17
> > Rule 16.18
> > No description for rule 16.18
> > Rule 16.19
> > No description for rule 16.19
> > Rule 16.20
> > No description for rule 16.20
> > Rule 16.21
> > No description for rule 16.21
> > Rule 17.1
> > No description for rule 17.1
> > Rule 17.2
> > No description for rule 17.2
> > Rule 17.5
> > No description for rule 17.5
> > Rule 17.6
> > No description for rule 17.6
> > Rule 17.7
> > No description for rule 17.7
> > Rule 17.8
> > No description for rule 17.8
> > Rule 17.9
> > No description for rule 17.9
> > Rule 17.10
> > No description for rule 17.10
> > Rule 17.11
> > No description for rule 17.11
> > Rule 17.12
> > No description for rule 17.12
> > Rule 17.13
> > No description for rule 17.13
> > Rule 17.14
> > No description for rule 17.14
> > Rule 17.15
> > No description for rule 17.15
> > Rule 17.16
> > No description for rule 17.16
> > Rule 17.17
> > No description for rule 17.17
> > Rule 17.18
> > No description for rule 17.18
> > Rule 17.19
> > No description for rule 17.19
> > Rule 17.20
> > No description for rule 17.20
> > Rule 17.21
> > No description for rule 17.21
> > Rule 18.1
> > No description for rule 18.1
> > Rule 18.2
> > No description for rule 18.2
> > Rule 18.3
> > No description for rule 18.3
> > Rule 18.4
> > No description for rule 18.4
> > Rule 18.5
> > No description for rule 18.5
> > Rule 18.6
> > No description for rule 18.6
> > Rule 18.7
> > No description for rule 18.7
> > Rule 18.8
> > No description for rule 18.8
> > Rule 18.9
> > No description for rule 18.9
> > Rule 18.10
> > No description for rule 18.10
> > Rule 18.11
> > No description for rule 18.11
> > Rule 18.12
> > No description for rule 18.12
> > Rule 18.13
> > No description for rule 18.13
> > Rule 18.14
> > No description for rule 18.14
> > Rule 18.15
> > No description for rule 18.15
> > Rule 18.16
> > No description for rule 18.16
> > Rule 18.17
> > No description for rule 18.17
> > Rule 18.18
> > No description for rule 18.18
> > Rule 18.19
> > No description for rule 18.19
> > Rule 18.20
> > No description for rule 18.20
> > Rule 18.21
> > No description for rule 18.21
> > Rule 19.1
> > No description for rule 19.1
> > Rule 19.2
> > No description for rule 19.2
> > Rule 19.3
> > No description for rule 19.3
> > Rule 19.4
> > No description for rule 19.4
> > Rule 19.5
> > No description for rule 19.5
> > Rule 19.6
> > No description for rule 19.6
> > Rule 19.7
> > No description for rule 19.7
> > Rule 19.8
> > No description for rule 19.8
> > Rule 19.9
> > No description for rule 19.9
> > Rule 19.10
> > No description for rule 19.10
> > Rule 19.11
> > No description for rule 19.11
> > Rule 19.12
> > No description for rule 19.12
> > Rule 19.13
> > No description for rule 19.13
> > Rule 19.14
> > No description for rule 19.14
> > Rule 19.15
> > No description for rule 19.15
> > Rule 19.16
> > No description for rule 19.16
> > Rule 19.17
> > No description for rule 19.17
> > Rule 19.18
> > No description for rule 19.18
> > Rule 19.19
> > No description for rule 19.19
> > Rule 19.20
> > No description for rule 19.20
> > Rule 19.21
> > No description for rule 19.21
> > Rule 20.1
> > No description for rule 20.1
> > Rule 20.2
> > No description for rule 20.2
> > Rule 20.3
> > No description for rule 20.3
> > Rule 20.4
> > No description for rule 20.4
> > Rule 20.5
> > No description for rule 20.5
> > Rule 20.6
> > No description for rule 20.6
> > Rule 20.8
> > No description for rule 20.8
> > Rule 20.9
> > No description for rule 20.9
> > Rule 20.10
> > No description for rule 20.10
> > Rule 20.11
> > No description for rule 20.11
> > Rule 20.12
> > No description for rule 20.12
> > Rule 20.15
> > No description for rule 20.15
> > Rule 20.16
> > No description for rule 20.16
> > Rule 20.17
> > No description for rule 20.17
> > Rule 20.18
> > No description for rule 20.18
> > Rule 20.19
> > No description for rule 20.19
> > Rule 20.20
> > No description for rule 20.20
> > Rule 20.21
> > No description for rule 20.21
> > Rule 21.1
> > No description for rule 21.1
> > Rule 21.2
> > No description for rule 21.2
> > Rule 21.3
> > No description for rule 21.3
> > Rule 21.4
> > No description for rule 21.4
> > Rule 21.5
> > No description for rule 21.5
> > Rule 21.6
> > No description for rule 21.6
> > Rule 21.7
> > No description for rule 21.7
> > Rule 21.8
> > No description for rule 21.8
> > Rule 21.9
> > No description for rule 21.9
> > Rule 21.10
> > No description for rule 21.10
> > Rule 21.11
> > No description for rule 21.11
> > Rule 21.12
> > No description for rule 21.12
> > Rule 21.13
> > No description for rule 21.13
> > Rule 21.14
> > No description for rule 21.14
> > Rule 21.15
> > No description for rule 21.15
> > Rule 21.16
> > No description for rule 21.16
> > Rule 21.17
> > No description for rule 21.17
> > Rule 21.18
> > No description for rule 21.18
> > Rule 21.19
> > No description for rule 21.19
> > Rule 21.20
> > No description for rule 21.20
> > Rule 21.21
> > No description for rule 21.21
> > Appendix B
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 22 22:42:56 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171315-mainreport@xen.org>
Subject: [ovmf test] 171315: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f304308e1cb21846a79fc8e4aa9ffa2cb1db3e4c
X-Osstest-Versions-That:
    ovmf=3930d1791a0657a422d50f4d2e2d2683c36e34b8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jun 2022 22:42:47 +0000

flight 171315 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171315/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f304308e1cb21846a79fc8e4aa9ffa2cb1db3e4c
baseline version:
 ovmf                 3930d1791a0657a422d50f4d2e2d2683c36e34b8

Last test of basis   171311  2022-06-22 15:11:57 Z    0 days
Testing same since   171315  2022-06-22 20:13:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Nicolas Ojeda Leon <ncoleon@amazon.com>
  Thomas Abraham <thomas.abraham@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3930d1791a..f304308e1c  f304308e1cb21846a79fc8e4aa9ffa2cb1db3e4c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 01:14:43 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171309-mainreport@xen.org>
Subject: [linux-5.4 test] 171309: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f0c280af0ec7c79cf043594974206d87c3c46524
X-Osstest-Versions-That:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 01:14:26 +0000

flight 171309 linux-5.4 real [real]
flight 171317 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171309/
http://logs.test-lab.xenproject.org/osstest/logs/171317/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 171275

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-qcow2 13 guest-start       fail pass in 171317-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 171317 like 171275
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 171317 never pass
 test-armhf-armhf-xl-rtds     14 guest-start                  fail  like 171224
 test-armhf-armhf-xl-multivcpu 14 guest-start                  fail like 171271
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 171271
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171275
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171275
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171275
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171275
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171275
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f0c280af0ec7c79cf043594974206d87c3c46524
baseline version:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38

Last test of basis   171275  2022-06-18 21:42:02 Z    4 days
Testing same since   171309  2022-06-22 12:44:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Ivan T. Ivanov" <iivanov@suse.de>
  Aaron Conole <aconole@redhat.com>
  Adam Ford <aford173@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Andre Przywara <andre.przywara@arm.com>
  Andy Chi <andy.chi@canonical.com>
  Andy Lutomirski <luto@kernel.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Baokun Li <libaokun1@huawei.com>
  Bharathi Sreenivas <bharathi.sreenivas@intel.com>
  Brian King <brking@linux.vnet.ibm.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jingwen <chenjingwen6@huawei.com>
  Chen Lin <chen45464546@163.com>
  Chengguang Xu <cgxu519@mykernel.net>
  chengkaitao <pilgrimtao@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe de Dinechin <dinechin@redhat.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  David S. Miller <davem@davemloft.net>
  Davide Caratti <dcaratti@redhat.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dinh Nguyen <dinguyen@kernel.org>
  Dominik Brodowski <linux@dominikbrodowski.net>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Szczurek <grzegorzx.szczurek@intel.com>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  huangwenhui <huangwenhuia@uniontech.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <jsmart2021@gmail.com>
  Jan Varho <jan.varho@gmail.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jedrzej Jagielski <jedrzej.jagielski@intel.com>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Szu <jeremy.szu@canonical.com>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes@sipsolutions.net>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Neuschäfer <j.neuschaefer@gmx.net>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Justin Tee <justin.tee@broadcom.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin Faltesek <mfaltesek@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Matt Turner <mattst88@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Jaron <michalx.jaron@intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
  Nicolai Stange <nstange@suse.de>
  Olof Johansson <olof@lixom.net>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Walmsley <paul.walmsley@sifive.com>
  Petr Machata <petrm@nvidia.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Rob Clark <robdclark@chromium.org>
  Robert Eckelmann <longnoserob@gmail.com>
  Samuel Neves <sneves@dei.uc.pt>
  Sasha Levin <sashal@kernel.org>
  Schspa Shi <schspa@gmail.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Shtylyov <s.shtylyov@omp.ru>
  Shuah Khan <skhan@linuxfoundation.org>
  Slark Xiao <slark_xiao@163.com>
  Stephan Mueller <smueller@chronox.de>
  Stephan Müller <smueller@chronox.de>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Uwe Kleine-König <u.kleine-koenig@penugtronix.de>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Wang Yufen <wangyufen@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Wentao Wang <wwentao@vmware.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Xiaohui Zhang <xiaohuizhang@ruc.edu.cn>
  Yangtao Li <tiny.windzz@gmail.com>
  Yuntao Wang <ytcoode@gmail.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6427 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 01:19:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 01:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354391.581475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4BUy-0002Ga-Nn; Thu, 23 Jun 2022 01:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354391.581475; Thu, 23 Jun 2022 01:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4BUy-0002GT-JN; Thu, 23 Jun 2022 01:19:08 +0000
Received: by outflank-mailman (input) for mailman id 354391;
 Thu, 23 Jun 2022 01:19:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4BUx-0002GJ-Kw; Thu, 23 Jun 2022 01:19:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4BUx-00081O-F2; Thu, 23 Jun 2022 01:19:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4BUx-0007Fk-34; Thu, 23 Jun 2022 01:19:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4BUx-0002YK-2b; Thu, 23 Jun 2022 01:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dMas8BCKsqFiQlq2w/JHxoUZyWz0JDWZq8Boiw7k0hk=; b=VxF0rQiwdXdbQR2RUAaHlARxB1
	UoQfcUeRvxAWBJ/1pO5QgSOitBKukeEcDPB+rlAlem4tqI6owZAGxfxsGJQATfEuNzhpPGVggzNGX
	NbS+DvD/14yzutnTrkz5lzT+R9gGrteui7D6QMD6VM6c86obTTLmqM7ggED1dVB+WJbw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171310: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-i386:guest-saverestore.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=15d93068e3484cb14006e935734a1e6088f228fd
X-Osstest-Versions-That:
    xen=9d067857d1ff6805608aac4d9c0ea1c848b2e637
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 01:19:07 +0000

flight 171310 xen-unstable real [real]
flight 171316 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171310/
http://logs.test-lab.xenproject.org/osstest/logs/171316/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386 18 guest-saverestore.2 fail pass in 171316-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 171316-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd        7 xen-install                  fail  like 171305
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171305
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171305
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171305
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171305
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171305
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171305
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171305
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171305
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171305
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171305
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171305
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171305
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  15d93068e3484cb14006e935734a1e6088f228fd
baseline version:
 xen                  9d067857d1ff6805608aac4d9c0ea1c848b2e637

Last test of basis   171305  2022-06-22 01:53:25 Z    0 days
Testing same since   171310  2022-06-22 14:08:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9d067857d1..15d93068e3  15d93068e3484cb14006e935734a1e6088f228fd -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 03:29:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 03:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354412.581490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4DWm-0006Qu-3v; Thu, 23 Jun 2022 03:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354412.581490; Thu, 23 Jun 2022 03:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4DWl-0006Qn-W8; Thu, 23 Jun 2022 03:29:07 +0000
Received: by outflank-mailman (input) for mailman id 354412;
 Thu, 23 Jun 2022 03:29:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4DWk-0006Qd-Nm; Thu, 23 Jun 2022 03:29:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4DWk-0002Bk-JZ; Thu, 23 Jun 2022 03:29:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4DWk-00030W-16; Thu, 23 Jun 2022 03:29:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4DWk-0007WE-0Z; Thu, 23 Jun 2022 03:29:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DIbyZUOWk9aJN68mGl9B4JY4jKwBMen+ONcEGxCZ0XM=; b=gvwWnv5C2m7fXWBbr7NpkopGM6
	1U1L27Fai5lIy1MG2/SklYG0FZh1YyHh60wzPxsBJ8wA/J7n92e8007B0HOkE5kJrhP8DeF7f6buL
	CbZfnWT9J4Y3DNNFgs2lwDI/jBGR6cz8KUkpD9GkCTyD4mJj293WKPxZeDrRWuV4hSzs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171313-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171313: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3abc3ae553c7ed73365b385b9a4cffc5176aae45
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 03:29:06 +0000

flight 171313 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171313/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3abc3ae553c7ed73365b385b9a4cffc5176aae45
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    4 days
Failing since        171280  2022-06-19 15:12:25 Z    3 days   11 attempts
Testing same since   171313  2022-06-22 16:39:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David Sterba <dsterba@suse.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2327 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 03:41:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 03:41:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354422.581500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Dj7-0000N8-D8; Thu, 23 Jun 2022 03:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354422.581500; Thu, 23 Jun 2022 03:41:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Dj7-0000N1-AP; Thu, 23 Jun 2022 03:41:53 +0000
Received: by outflank-mailman (input) for mailman id 354422;
 Thu, 23 Jun 2022 03:41:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4Dj6-0000Mr-31; Thu, 23 Jun 2022 03:41:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4Dj6-0002Ph-0A; Thu, 23 Jun 2022 03:41:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4Dj5-0003r7-Mo; Thu, 23 Jun 2022 03:41:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4Dj5-0006x6-KA; Thu, 23 Jun 2022 03:41:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+yxEIMHKzYeV/kkk3CC/nS+AzQNpjSgicjrXc6BH9Xw=; b=0xIad3KQTjMuSpAX99sYZzM4Hr
	LHso2Mdgn01kLtONjdX+SQTiXE3oj6B68f+Ha2OmDfyzIa1hWux6l1YA3nNLrcZTbJvRIEha0WHwd
	45JkbsDwnop0G6oQcguo/cA7KPgZ3NOcxaRh4aS0yWkwIeL4pCpH7PNNPjKx2pJBeBCg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171312-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171312: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-raw:guest-destroy:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2b049d2c8dc01de750410f8f1a4eac498c04c723
X-Osstest-Versions-That:
    qemuu=f200ff158d5abcb974a6b597a962b6b2fbea2b06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 03:41:51 +0000

flight 171312 qemu-mainline real [real]
flight 171321 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171312/
http://logs.test-lab.xenproject.org/osstest/logs/171321/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 171321-retest
 test-amd64-i386-libvirt-raw  21 guest-destroy       fail pass in 171321-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171303
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171303
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171303
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171303
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171303
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171303
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171303
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171303
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                2b049d2c8dc01de750410f8f1a4eac498c04c723
baseline version:
 qemuu                f200ff158d5abcb974a6b597a962b6b2fbea2b06

Last test of basis   171303  2022-06-22 01:08:36 Z    1 days
Testing same since   171312  2022-06-22 16:08:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cédric Le Goater <clg@kaod.org>
  Iris Chen <irischenlj@fb.com>
  Jamin Lin <jamin_lin@aspeedtech.com>
  Joe Komlodi <komlodi@google.com>
  Joel Stanley <joel@jms.id.au>
  Klaus Jensen <k.jensen@samsung.com>
  Richard Henderson <richard.henderson@linaro.org>
  Steven Lee <steven_lee@aspeedtech.com>
  Troy Lee <troy_lee@aspeedtech.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f200ff158d..2b049d2c8d  2b049d2c8dc01de750410f8f1a4eac498c04c723 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 04:46:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 04:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354431.581512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ejl-0006ZA-4M; Thu, 23 Jun 2022 04:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354431.581512; Thu, 23 Jun 2022 04:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ejl-0006Z3-1N; Thu, 23 Jun 2022 04:46:37 +0000
Received: by outflank-mailman (input) for mailman id 354431;
 Thu, 23 Jun 2022 04:46:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IJ5T=W6=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o4Eji-0006Yx-SO
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 04:46:35 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140075.outbound.protection.outlook.com [40.107.14.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73add62a-f2af-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 06:46:32 +0200 (CEST)
Received: from DB7PR03CA0081.eurprd03.prod.outlook.com (2603:10a6:10:72::22)
 by AM9PR08MB7031.eurprd08.prod.outlook.com (2603:10a6:20b:41d::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Thu, 23 Jun
 2022 04:46:29 +0000
Received: from DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::b7) by DB7PR03CA0081.outlook.office365.com
 (2603:10a6:10:72::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22 via Frontend
 Transport; Thu, 23 Jun 2022 04:46:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT057.mail.protection.outlook.com (100.127.142.182) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Thu, 23 Jun 2022 04:46:29 +0000
Received: ("Tessian outbound 01afcf8ccfad:v120");
 Thu, 23 Jun 2022 04:46:29 +0000
Received: from 0ba270c09ef5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A4A0B6F9-DF25-42C5-9669-F4E3EFFCE32A.1; 
 Thu, 23 Jun 2022 04:46:22 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0ba270c09ef5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 23 Jun 2022 04:46:22 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB6503.eurprd08.prod.outlook.com (2603:10a6:20b:33b::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.18; Thu, 23 Jun
 2022 04:46:21 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%6]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 04:46:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73add62a-f2af-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=FQbrfXxLKMY1JlIWnjfifoERKeQqDeQt7I/D76sLGBORqgh3VQYD5Fk5RXxnGEtW2GNk4MR8r6OqwYj96QB2cM0vEudR5lGdKIsZqHSguglj7a2IACslzDHkygh/QfXY5C5tlqlS2WxGgST+6Mf0KzdOZjsIRAg3lZUajlo4vBeJOL78K5n/+ZarG3OgsDIdj75e4X+/PKiSb6kJ7mavEXfAS5ezLftlNxJZL2YmGpYbH+L5/VQA6iRr+NXuZ8NwvPqOJmNrhuya3vfdQnciKiZDsuJnbBZK/w0xeU4GgtfKOYLaZtMTSIonztsNoAIt2Kte9/wt8MojazqpxgVguw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Wg+hZlA9SK2d+kWlsuSJo2tzr1ePQelzfvwrOm8G0Mk=;
 b=RrpMfX8RITGKyA5vL1/J2WzeBCD+qIGnS/qDqZk7ASfcL4oOYli71eLxyq9BPjLOlgZuk/X3tKX6FdoM3LAwfx2llyUY0TRaNhACktPlWbFU90yKM/ewLAl+786RV1vnqQkC5Y9JkOedWTFJJAFxFZZg5I4+pD0Bv1vhh2tnrwYgWpNRJdwYew7RLH2G16FHoDBwoalECOkLF+xAuyRDh6aYEr7wBnzPYwza2pVt/u7VIRJxSXaZUM/KDoXlFA0zkGW1YjIBpjoj/9UxcV22Tyxu5r8W0b2cv/8bjvVKfhbFFSaz/VQWyBVyVkw39t91L5qSZZqcdOgoWYWZ43GKJw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wg+hZlA9SK2d+kWlsuSJo2tzr1ePQelzfvwrOm8G0Mk=;
 b=4ApxGQ3+vDrqs4xeQ2OfUWsbXdsD5/Vii8KWLASVYsJBlyzuFVTt9DnW3RzfJpdJZyVkj8WTVF1zyOVu/7Xo3vPC1eYBRXFSDFC/CsC9mCkwqqq0k1UEi2Vfd5jWXwtFcXCDBW+L9N080GaE4HfE2ddZDpy0bcEdU7KQfutMgcg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=encETdpdfYoUFZUpcjNQOqEx6Z6e3W3lp9I8XT0aZeXIpqRxXfEWaF4Fr5gM6sb5QP1aBf6XT+q+VG5bzB94EFgTm37XjYBO3wwUTkqaiyasyMtkne8QUw1RR6Nytw7J9V3Hq4dZB7XmWoLLIwZZ/wGchE3mFMUkuObuVx/RTkPHIvjQItQFYtzgEE1CUKM06gC81KZY/SfBGpc99mIbemGiplVDnN0YkDnrGL/0qh9vWshBAk3t+szxk422PdxCvGxNdDGfHe8405Vj7WW774HtW/JB7Hx+D0z+jtVE6B25dWuDVp0KPEg+ExngFg0e4Z0AOBfGvs6vgFxfs1DD6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Wg+hZlA9SK2d+kWlsuSJo2tzr1ePQelzfvwrOm8G0Mk=;
 b=PTphAjvVbOt5zutmdkXiUn1x8VRepa+nQEecZqvt44FOkkBhl03G8+bRCw2zceQNMCkA/LxqcQtKgMe48YE32u2k9gSLzF6KBn7bskgfSNNJfeVGaI1I9Ch+uuo4+0PIGg8gYdpIs+NCfcCZqK7TTN40TaMpN1KIUCMTxKNksR8j2lr2yHNPpK4k4Xm3aWU4LAGiVrtDByDK4YNB2+4Vny0tlwBbU8hrBJFOClJCIVRwu7b1mZDwkqQkWENqKBPdv3/FpeMNhbpdpikeVctPrvIkrrb8sD/tySoYmsvs2i/ulCxvhom3q1yjvTi9WvFzT++vftfA/CPwHcNVphoMxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wg+hZlA9SK2d+kWlsuSJo2tzr1ePQelzfvwrOm8G0Mk=;
 b=4ApxGQ3+vDrqs4xeQ2OfUWsbXdsD5/Vii8KWLASVYsJBlyzuFVTt9DnW3RzfJpdJZyVkj8WTVF1zyOVu/7Xo3vPC1eYBRXFSDFC/CsC9mCkwqqq0k1UEi2Vfd5jWXwtFcXCDBW+L9N080GaE4HfE2ddZDpy0bcEdU7KQfutMgcg=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
CC: Community Manager <community.manager@xenproject.org>, Wei Liu
	<wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests
 using legacy SSBD
Thread-Topic: [PATCH v6 3/3] amd/msr: implement VIRT_SPEC_CTRL for HVM guests
 using legacy SSBD
Thread-Index: AQHYagNC0CXNzdsfvEia3fuvabBVia1TGtkAgAg6Q4CAAU4UUA==
Date: Thu, 23 Jun 2022 04:46:21 +0000
Message-ID:
 <AS8PR08MB799160D2117E6B6E743F252392B59@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220517153127.40276-1-roger.pau@citrix.com>
 <20220517153127.40276-4-roger.pau@citrix.com>
 <AS8PR08MB799195FC7D9949031F33802892AF9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <5a1eada8-90b6-f998-cb8d-6b0d1b781590@suse.com>
In-Reply-To: <5a1eada8-90b6-f998-cb8d-6b0d1b781590@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9A0278D423FD6B44BB89311EAE58F8EB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: d872314c-6560-4085-6485-08da54d35680
x-ms-traffictypediagnostic:
	AS8PR08MB6503:EE_|DBAEUR03FT057:EE_|AM9PR08MB7031:EE_
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB70313BAFA17FC45A6E19050692B59@AM9PR08MB7031.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 OTqmtrichA+HgdojlQQovJzbshe9XDm82lfQVHx28XrP173cV1bEWB31CK5jPEK9o5KZDVWFssmxW/Qekujf0N3HLNewhLwY44qA/G3ZFKLXf5nDMxIyIZ4BQz3AnfY3kxWZav0WEQveh6WdtJPNNP9kVXTDhAq7n7gJ556ZKEbAsA/I260EM4jBpPI48GsbI18pdIla0H0aqd0BldmJb+lFGH8oswI9DO/fuZb+G7iTLOjiBPVcjZqrQOmpZb94WeoGVlYEK7rmSHmltZtaOJn1d4ZVoaIqlQEHjYSrUd2aKWAkhfQAB+cAz3OCpuKUCtiqUMqHmh5CHJHxZg9RIYqfKnMyKLqn6ZHuoeO3++WdW9wGC2EVD8xHEgjGM9bjt7VLaJeOwltSupAsTSo3ymndErLoYGWxrG2oQfEXi2naKke86P/q7uxPJsoejf8Rktem43mHCCAZupqdBJuYFT7R34+G7mc1dBOhYuj/8YmFAAYtkgYoqFYE3ye1kuzl2e8EHu4gkM/WkNcIoYyHv+qOsxc3oE3WyIKzAA5Pbtu+ZEC9ISitukdOzDKlOmbi+TUd53VuSdbHJSIZjVtQKiCQY709QHAy3u32iR5sL3jvUQp/9tT4OU+rTbVfBE6Yx5gPcn9bvJ0pYBGsb440rruZNZ2PuVx+VlPGI1e7ZX8LXU3JCK+4g3RGUmFyetAKVXytuZCh6VBz4ymTmOd1JTTqNr74GlD1bICJWdvtJbO95RRYDmCsAuc0VRur/3PJaZN2LdZU0fkViKpk5f9ekA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(366004)(39860400002)(346002)(376002)(52536014)(76116006)(5660300002)(66446008)(8676002)(66946007)(4326008)(66556008)(64756008)(478600001)(86362001)(38070700005)(66476007)(38100700002)(83380400001)(41300700001)(6506007)(8936002)(122000001)(7696005)(186003)(110136005)(53546011)(54906003)(33656002)(55016003)(9686003)(316002)(2906002)(26005)(71200400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6503
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1e43acdb-e41f-4ebc-51a5-08da54d351c0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nInjFrgRUh5KTijoGFFuiONBI6jjn2Bfr4Qe/fgOHCziCyaSlSID+xzd0FWpn8J/Vpxt2WKDRE4YjzQbn92fj49T2X3MepikhIN9iBrXAZCTLamOY8w6f3Lypg1zLZNFFB9V8madJfSVV0dv4SUq1RcHtDrSF/40xcJEV8qOjcykDMTbdtkb9DU+Ni/1IjMhoxpeSr+tCXnaQnmkBDVsWV7vmz8qtpr0d8OCL3wxusU0aX/X68NhF2hhwOTo+TuvTwAe6bVSI4FgkMGd8kOQHOxs0NWdYCHTFpvTuIx34fNgZzxutEKrNy+ovKV29BVHvTeBIBUAT2MxfwbBqViZjojIFYwAQevAyefUE86Xz1ufPmLxhzj93nweVtG0lI90CrQN4t5LLBcVkvzZpRnDNI1Tgqf8SwjjmxkavE4GiVulW6+SeVl3gq4X+BuKc63/IyBv45oyQnd6K836U+iX6YaYZ8v/jdTu4kp+7uCDG5n2nivZyiUXnsxgCaywwi1k7clwE5+49L9aXrL5JtO5AAMdTIvr9Y3DSbPHkMvLGfzoTLrfYkJLyqfbQMk3jzslT5CwXn1DQXyJDemrduOKtbNC21K109VhzdJgh6Nc6BaKMzFmlZhyjYQZOGy5/Hz+IzPXlh6R+n43BWDNsEdExzzpgJ3Trcs8YUzsM9fQG+gjq8Jrsw9uMvOlE1R4l+sIXAuQ2jlgp6xnd4otCAeZW4GcnJCDs1wrmY970cG799e4UdCLNk3aYTwsFJ+FlB/wH5Oz5GGhbZgnLnH/WmBXrw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(376002)(136003)(396003)(346002)(39860400002)(40470700004)(46966006)(36840700001)(82310400005)(86362001)(36860700001)(40460700003)(41300700001)(52536014)(5660300002)(33656002)(356005)(82740400003)(2906002)(81166007)(316002)(55016003)(40480700001)(47076005)(54906003)(8936002)(83380400001)(336012)(4326008)(70586007)(70206006)(6506007)(8676002)(53546011)(9686003)(478600001)(110136005)(7696005)(186003)(26005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 04:46:29.2277
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d872314c-6560-4085-6485-08da54d35680
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB7031

SGkgSmFuIChhbmQgQW5kcmV3KSwNCg0KPiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBG
cm9tOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+DQo+IE9uIDE3LjA2LjIwMjIgMDU6
MjYsIEhlbnJ5IFdhbmcgd3JvdGU6DQo+ID4gSXQgc2VlbXMgdGhhdCB0aGlzIHNlcmllcyBbMV0g
aGFzIGJlZW4gc3RhbGUgZm9yIG1vcmUgdGhhbiBhIG1vbnRoIGFuZCBhbHNvDQo+ID4gdGhpcyBz
ZXJpZXMgc2VlbXMgdG8gYmUgcHJvcGVybHkgcmV2aWV3ZWQgYW5kIGFja2VkIGFscmVhZHkuDQo+
ID4NCj4gPiBGcm9tIHdoYXQgSmFuIGhhcyByZXBsaWVkIHRvIFJvZ2VyIGFuZCBBbmRyZXc6DQo+
ID4gIi4uLiB0aGlzIGFkZGl0aW9uIHRoZSBzZXJpZXMgd291bGQgbm93IGxvb2sgdG8gYmUgcmVh
ZHkgdG8gZ28gaW4sDQo+ID4gSSdkIGxpa2UgdG8gaGF2ZSBzb21lIGZvcm0gb2YgY29uZmlybWF0
aW9uIGJ5IHlvdSwgQW5kcmV3LCB0aGF0DQo+ID4geW91IG5vdyB2aWV3IHRoaXMgYXMgbWVldGlu
ZyB0aGUgY29tbWVudHMgeW91IGdhdmUgb24gYW4gZWFybGllcg0KPiA+IHZlcnNpb24uIg0KPiA+
DQo+ID4gU28gSSBndWVzcyB0aGlzIGNhbiBiZSBtZXJnZWQuIFNlbmRpbmcgdGhpcyBhcyBhIGdl
bnRsZSByZW1pbmRlciBmb3INCj4gPiBwb3NzaWJsZSBhY3Rpb25zIGZyb20gUm9nZXIgYW5kIEFu
ZHJldy4gVGhhbmtzIQ0KPiANCj4gTXkgdmlldyBoZXJlIHJlbWFpbnMgYXMgYmVmb3JlIC0gSSdk
IHByZWZlciB0byBhdm9pZCBtZXJnaW5nIHRoaXMNCj4gd2l0aG91dCBhdCBsZWFzdCBpbmZvcm1h
bCBhZ3JlZW1lbnQgYnkgQW5kcmV3Lg0KDQpTdXJlLCB0aGVuIEkgd291bGQgcm91dGUgdGhpcyBl
bWFpbCB0byBBbmRyZXcgKGJ5IGRpcmVjdGx5ICJUbzoiIGhpbSkgc28NCnRoYXQgaGUgY2FuIHRh
a2UgYSBsb29rIHdoZW4gaGUgZ2V0cyBzb21lIGZyZWUgdGltZS4NCg0KPiANCj4gPiBBbHNvLCBu
b3Qgc3VyZSB3aHkgbXkgYWNrZWQtYnkgZm9yIHRoZSBDSEFOR0VMT0cubWQgaXMgbWlzc2luZyBp
bg0KPiA+IHBhdGNod29yaywganVzdCBpbiBjYXNlIC0gZm9yIHRoZSBjaGFuZ2UgaW4gQ0hBTkdF
TE9HLm1kIGluIHBhdGNoIzM6DQo+ID4NCj4gPiBBY2tlZC1ieTogSGVucnkgV2FuZyA8SGVucnku
V2FuZ0Bhcm0uY29tPg0KPiANCj4gQXQgYSBndWVzcyB0aGF0IG1pZ2h0IGJlIGJlY2F1c2UgdGhh
dCBlYXJsaWVyIHJlcGx5IHRoYXQgeW91IGRpZCBzZW5kDQo+IHdhcyB0byAwLzMsIG5vdCAzLzMu
DQoNClllcCB0aGF0IHNob3VsZCBiZSB0aGUgcmVhc29uIC0gbm93IHRoaXMgcGF0Y2ggaW4gcGF0
Y2h3b3JrIGhhcyBteQ0KYWNrZWQtYnkuIFRoYW5rcyBmb3IgdGhlIGluZm9ybWF0aW9uIGFuZCBJ
IHdpbGwga2VlcCB0aGlzIGluIG1pbmQgaW4NCnRoZSBmdXR1cmUgOikNCg0KS2luZCByZWdhcmRz
LA0KSGVucnkNCg0KPiANCj4gSmFuDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 05:48:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 05:48:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354438.581523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Fha-0004Q7-RU; Thu, 23 Jun 2022 05:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354438.581523; Thu, 23 Jun 2022 05:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Fha-0004Q0-N0; Thu, 23 Jun 2022 05:48:26 +0000
Received: by outflank-mailman (input) for mailman id 354438;
 Thu, 23 Jun 2022 05:48:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ozkr=W6=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1o4FhZ-0004Pu-Tv
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 05:48:26 +0000
Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com
 [2607:f8b0:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1786abee-f2b8-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 07:48:23 +0200 (CEST)
Received: by mail-pg1-x52e.google.com with SMTP id 23so11985810pgc.8
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jun 2022 22:48:23 -0700 (PDT)
Received: from localhost ([122.172.201.58]) by smtp.gmail.com with ESMTPSA id
 u1-20020a170903124100b0016188a4005asm13738487plh.122.2022.06.22.22.48.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jun 2022 22:48:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1786abee-f2b8-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=oQ0mXmv2gl8QeyoTRvdPcAfsTMa0a+oWEd2XGS6OBuo=;
        b=a5F9NrGrWx+RS313osnifDyKBs4JGqACKW8amfFoRcqXm8Cp2v5VlgzCRwYY8Ktz6E
         N3BnoDd1aS2ZRa7LtlburQnbZbSXdZtYEd4F9GqoLGP73jW1LWBBr8sw/GOtd/ugkOp9
         uHzjG8wJjp66onz5N13NZFlPzjH4Ax6m893d20JOARbKNt7KP7htmyhJ4tME4djp4gpV
         josyprNm41VKphS3/sMoiIgJXywZK23xyN59xy8lCfrpkzKei4cMhZio7Bdyfii8yrZ4
         zcNhn/P0R37LF1JxDeLqtCIB1QI0as8OjRdBdRKPGLJ79+RsE6ZvbbOoYgiqwsStEqtv
         YhtQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=oQ0mXmv2gl8QeyoTRvdPcAfsTMa0a+oWEd2XGS6OBuo=;
        b=jYie/bvsff/Nl2n4+e3EgxttKHCT30OSQvipvcCPRyxAYjRZ+rIhZoA1DwEznQR4vo
         UOg9tFrrZ81lZEkfkSi+SohncxcKGJGgAOKe0HERJl2IpqTbweF2N22WAZnyyLr5bodE
         K0Gmr87wmh/A+fdrj7jXcQjltSeoE87QXcepue9GXzeLQcpHjRXyPKHKkSm/yT0pXuPQ
         Az/vcSRtKn0eqHSoM7qh55MvXxevb2FdeNbSyfmhCjceToqfyeuNiALL2cWTzzDUDS91
         gPNw6j15HXdzouA2acb4Yi2WFQDvntkJ5Sft+guPiL9Ma9LezujrRGYdStaGAkHANDpS
         gIXw==
X-Gm-Message-State: AJIora9H/f+yRIlUyukZElIJL0lk+IFyeys20zdT7LvhXBlwt9/idaK8
	6c4k3Lvcr9oszv7OZlgOWkpZwQ==
X-Google-Smtp-Source: AGRyM1v86LBI7/a0caOi3nBxJ046p7aGn5C45DDGdpWIOIlvyNq2bR1QKWmse0uG/e19YLtjmxdlfA==
X-Received: by 2002:a63:920f:0:b0:40c:75a2:f91f with SMTP id o15-20020a63920f000000b0040c75a2f91fmr6100322pgd.512.1655963301771;
        Wed, 22 Jun 2022 22:48:21 -0700 (PDT)
Date: Thu, 23 Jun 2022 11:18:19 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Stratos Mailing List <stratos-dev@op-lists.linaro.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Mike Holmes <mike.holmes@linaro.org>, Wei Liu <wl@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: Virtio on Xen with Rust
Message-ID: <20220623054819.do25phfuumnexw73@vireshk-i7>
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7>
 <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>
 <20220622114950.lpidph5ugvozhbu5@vireshk-i7>
 <CAPD2p-kFeC8FygFcbpEbH3CzrAM7Td+G68t9ebOFR4V0w1dpEQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAPD2p-kFeC8FygFcbpEbH3CzrAM7Td+G68t9ebOFR4V0w1dpEQ@mail.gmail.com>

On 22-06-22, 18:05, Oleksandr Tyshchenko wrote:
> Even leaving
> aside the fact that restricted virtio memory access in the guest means that
> not all of guest memory can be accessed, so even having pre-mapped guest
> memory in advance, we are not able to calculate a host pointer as we don't
> know which gpa the particular grant belongs to.

Ahh, I clearly missed that as well. We can't simply convert the
address here on the requests :(

> I am not sure that I understand this use-case.
> Well, let's consider the virtio-disk example, it demonstrates three
> possible memory mapping modes:
> 1. All addresses are gpa, map/unmap at runtime using foreign mappings
> 2. All addresses are gpa, map in advance using foreign mappings
> 3. All addresses are grants, only map/unmap at runtime using grants mappings
> 
> If you are asking about #4 which would imply map in advance together with
> using grants then I think, no. This won't work with the current stuff.
> These are conflicting options: either grants and map at runtime, or gpa and
> map in advance.
> If there is a wish to optimize when using grants then "maybe" it is worth
> looking into how persistent grants work for PV block device for example
> (feature-persistent in blkif.h).
 
I thought #4 might make it work for our setup, but it isn't necessarily
what we need.

The deal is that we want hypervisor-agnostic backends; they won't and
shouldn't know which hypervisor they are running against. So ideally,
no special handling.

To make it work, the simplest solution would be to map everything we
need in advance, when the vhost negotiations happen and memory regions
are passed to the backend. That doesn't necessarily mean mapping the
entire guest, just the regions we need.

With what I have understood about grants until now, I don't think it
will work straight away.

> Yes, this is the correct environment. Please note that Juergen has recently
> pushed new version [1]

Yeah, I am following them up, and will test the one you all agree on :)

Thanks.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:16:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:16:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354444.581534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4H4i-0004i7-8a; Thu, 23 Jun 2022 07:16:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354444.581534; Thu, 23 Jun 2022 07:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4H4i-0004hz-3z; Thu, 23 Jun 2022 07:16:24 +0000
Received: by outflank-mailman (input) for mailman id 354444;
 Thu, 23 Jun 2022 07:16:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4H4g-0004ht-LC
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:16:22 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2058.outbound.protection.outlook.com [40.107.20.58])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6135f368-f2c4-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:16:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8329.eurprd04.prod.outlook.com (2603:10a6:10:24c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 07:16:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:16:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6135f368-f2c4-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nk3kMSdt9SsMHqiK+3P6Kzz0oRoIglibiGBSPLO57TEeqKJUyJ9kE0I8wpyWwazU2xJkwBLOv+M9+R4/dcOoDiHdfMs+KOSTKcaN8C8EEPBubRXPD4HEsYfm+iC7MQvEUJPgLz/HDpIgzfVZ0h8Wkv3UK3gnAZqo8DeIkoJToVPfXoMYYM7zad9ecM6EmoLlUNzdIiT4s7CKLrm6lRrpwx3WwwcxXQp54gCajBtbRSVEsuzLyJtVZ2sAfhOawuTmsN6yJ9qrPNT+J/BK8AHEJoIgt3b+Q7oGclG1kUNmBkA3BPKu2B1t8nJzXwJjLxxpw7H7ZfLfpIbftgayYmNaNg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lDgs1HB882nMw2vbqio/+G1oDHLLiit74yvDM7dk/N0=;
 b=V8520zKTOp3BHiXWm5Jh7gLMeNMksoLg2fzjibTfTYCItrafKrZyinGZfjn8VlJPcPagcko8HRG9ec4fMwd2yXmkdW6omdEyi6I4e11rFHpeZfpV/HxcshCSPQRYboz+L4zfvN/UPnQAvkZpFSBZOkmTJjkuRTeGfZ5navBPMM+4tSkoKGce2esvvdCvZ/vNhR10SBunaWVly5VnZIKqL3UzDpe33U0VOOZu1vKPGenwCDO5nPg/lPYF8J9+EDGhmu5XSwfTxlK8zmd28Nwh+k+SmFw5t8cF38EEHAnFsy5IqFp1gOSW3lh3b39o3lLyWcN+6FycR3QxFl5xO84f+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lDgs1HB882nMw2vbqio/+G1oDHLLiit74yvDM7dk/N0=;
 b=LYrQWJSz0m3eDeiAE7c/pG4wvL4R9mepizO/kqm4w+Ir14+w+/nO3BxIjauofxLeT+y2L5Sikmh0Rn9SVHqflEUpCVpfhaFfq/Z5aCfpoUEHPBeJCa391ciRmPQ9bEO2l1vDDRg7rfTu2MZgIsStlxkhn28zRo9iPzijmkstgCDQGASumepgebDYZz3B2a0xuLPVIfhG4X4oJT+NR7GK8vwipyLCMebWfxjAdEbRcZu0ZJovUZdUo+q4Z2hnSfSF7iljPcesYYmq+FV2dUq4lciLjLZ3/brmu3mpCKvKeHbN7L6qr3DzIkPcFwf/QHADt00FcwxhfEc4+sjAPeb/ng==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0114927f-ac77-83c0-5a58-4eef712b4fc8@suse.com>
Date: Thu, 23 Jun 2022 09:16:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7] Preserve the EFI System Resource Table for dom0
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
References: <7b2f97eda968d6db368c605ff0350d732554c39b.1655860720.git.demi@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7b2f97eda968d6db368c605ff0350d732554c39b.1655860720.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR0301CA0050.eurprd03.prod.outlook.com
 (2603:10a6:20b:469::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 023eab80-4c6a-479a-592d-08da54e8446a
X-MS-TrafficTypeDiagnostic: DB9PR04MB8329:EE_
X-Microsoft-Antispam-PRVS:
	<DB9PR04MB8329938E80498CDE53BB4886B3B59@DB9PR04MB8329.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RSgYxLM0kDjKQgHsbjPoCoxnnq7qDIYi3uYlCybvIU9AFroGTsWzq0yKZWwqz90aQfQ1aTjtIZEVy9xrBnxrBe1AzpVxYSIYg8ZKfj3cPKM5/5m7cB+bhltGojpoHqA+XTH8j0PN+h/CyRliP0zTyFUzLCnXthCbytP1Qfa3zWgODCTEcu5NUL5ZunyPoxws7ytfe15/FoAH88Kw+onM5JEPNEFnsi/3Rf1CTKziAgLfnL0vj1BikjzyjRCDRzLzo600cwK/cTJAwm7EzNwu9PmdN3NeGJaowiN8u/K8kD7Ffh0dMfNwDK83L7eiG0J568j/qbJsdkmmk7Rg3QWpdJBo3SRTyy9dmWILl4kV3ghbd1xVpljrDctJkEk+y2c+oCBnPz7yDAU795ARdqWSFD3vtYQEj6tKfaDcU16NHd8TdVmYGY9xIiyFvLBZZCcfwuLgRmcAEB5Lc00SRreVuxriqrHBneoU3Lc71QU6CbIFzl2SfWBqL2qOiTMAU38Ww8TP74mPB1FzBWQ7ZCCBeVNMnXZ7CKKsVqnHTS87KRQ5LO957MLkRSg32iyJFNKuB/TZboPGl5Isf/1PSpM+i1AA3tCYUbmoBbRYMa+lJL9+s6e4jcVNd8WE6ydjrcnsoi+wldX1cuJZhAY6sacqHWnmtlnistKAm3XSk7Fr8nYImR8bpH+IlvoHiQ3vRm3Z0Wr5jTfYGra9PlgdfKoc87tuiFD/ygvD0K+3QrWr03h1s9k5C7IWM6SEg8qA1N4hvcRZHCHExzu8o3JBumL3Vv2N+vi9/i20o0ToB1X49yM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(376002)(396003)(346002)(136003)(366004)(39860400002)(31696002)(36756003)(86362001)(31686004)(6916009)(186003)(6506007)(26005)(38100700002)(53546011)(6486002)(8676002)(66946007)(4326008)(41300700001)(316002)(5660300002)(66556008)(478600001)(6512007)(8936002)(2616005)(83380400001)(2906002)(66476007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VzNoc2xWS05Fd0xtR0JTNkxCSDB0NEsvNEU1WGJ6c0lKdXZqMENCbTFCdU1T?=
 =?utf-8?B?S3dsWkhxbmVkeWF4VXFkcElkNUxKYlA5WDdOcG1qdit4ODVjeHlBQjN4VHNV?=
 =?utf-8?B?MW16alk0a3dNbE9NZ2lWU25uL2l1ZUhrVjVOOE5xZjBuQ2M4clRLaTV1dW13?=
 =?utf-8?B?SDhnZzFJclk3Q0Z2ZW13bUFDS1RoUVR1ZmV5QXRVdUVRVXFwQ05PaTgzWGlG?=
 =?utf-8?B?cHArMTJQSEY2aS96VDRxbGpkRXk0Y3VvZlR5WWk4Q014aXhuV3dBYWkzeEFs?=
 =?utf-8?B?S2pTYm40UDhMMUpYYnkvVGE4WkxFMjBrNTRza1MyY041eHozdWM1Qkh5WEZi?=
 =?utf-8?B?VDM1d0o5akgyYW5KSyt0WldWeGhYR0lzalJ0QXBab3VURmZLVDYvdWJZY3lP?=
 =?utf-8?B?Y1RWZzd3NnVlQitnYXMvNU1EREZVbER2d1hBWkhYaktwTFRWVCtJWUlnMUxS?=
 =?utf-8?B?azZxb216NlFkdXZlZ0E4WHhjWGczWFhQRzNURy9vaThRS3BOS0IxWXBaZHBN?=
 =?utf-8?B?LzlHM250ZmViT1ZheXFBN3hoQlVNVnFnd1FiU2MycU05OXoxeFBaaDZ2bk1K?=
 =?utf-8?B?WFdQZEN5S1pEMHltaXJzSXNOWmRQZUtQVHhyRTNuanpWbXBJZWNnZGxvOVpH?=
 =?utf-8?B?SFNiSEpnM0lYeUE5L1lZbUkvZnRZN0o1a3JBSjZiY1NMTTdHUXU4enZocEx6?=
 =?utf-8?B?V0tEZnh0dnV5NmVDU1JZOWVDdWE4MGIxaGlJY3dYcXArbTNtZitUL2R1T1FR?=
 =?utf-8?B?NjJPRm1RRis2MGpYL3o2MThhTlBmOGltMlVNV1lnajlBSGNweHV4QktLYUpB?=
 =?utf-8?B?SngxMTA4ZmhkbUZlbnIwbDN3OGszN3FzaGREZXNjaHlRWk5DbUUrVkhsWGI2?=
 =?utf-8?B?TTMwbllPcDl2LzFoOTFyMnhkS1o1MitHWjZya3prbTdvRGgxQ09GcUo4ZjNM?=
 =?utf-8?B?T2tyS0NhbG5lQUgwL2RWMkI2YnhsM0RubWxkdFRqaVhrZDgzNjl5QUdCL3dH?=
 =?utf-8?B?WGE4V0ZwWmdwSnVtTm0zSC9LRDZCTDR1KzU4SlFENURXbTNTeUpUSGlaYjFV?=
 =?utf-8?B?TjI1eVpWQ1VmK2FPVWp3RFFxVmdWVWJOcEpEcC9ZYWNkMkFwSlBncFJURE1v?=
 =?utf-8?B?REJGb05CUlFzTmY5RDRhTDNwb0VCR0tWUG9EMXJsbFB6ZktIUGkzUzZVZkwz?=
 =?utf-8?B?T2ZhUWNaQWN0amc4dE1BY2lpTEp0YUdvcDFTTHVyZS9WcU15dHgyL0hLcHY4?=
 =?utf-8?B?Um83WmVRU2p4WjdmMGk4R3JRU2JteXZ1bVRtUWtaMnR5MjFGMGtBeUkvUGt3?=
 =?utf-8?B?QjBtM1QrL3RGSEhKTkQ3SVhPdjhzLzhMQzhjK3ZMU0dkcFVYUlFXbmVJcWZR?=
 =?utf-8?B?Z1RONVVzUU95R2k1QnZvdnNwcnpYK3ZPa2ROWVZ0bjErZ2dYYXd4M055Tlhl?=
 =?utf-8?B?SlFGZkoxQmxrRXVxR3VOZnRTUkdsQzFCQ3dGS3A0bXY5S1Q2ZUlNYXhEMzU3?=
 =?utf-8?B?aGZxUWhMcHhnanoxZ1lKSXNKdGcyZW5WZTBGRnAydXF6c2h0K3U1VzY2QndT?=
 =?utf-8?B?cWxtUFRmT1FKTHpWL1FQdTQycnhmZVcwTG9yU1ROb3hTY3hEUEFNUlRqdThQ?=
 =?utf-8?B?SGIvbGJWb1ZFRi8yMHphTXdlbTBTbDdZVGZhWFQzRURLRHUvbHRldjQ3STdH?=
 =?utf-8?B?SjVwMDIydE5mSjdDTnh0bk9DN0wwc04rSm5TZW9pTS9ySGp0MEtyVlA2aXJN?=
 =?utf-8?B?WWhZdnFVby9ZV1ZLd1JqZVdvSzNFb0xVV1gzN051aExNVFRiaGZDeGZ6QXF6?=
 =?utf-8?B?UnZxeXVycmxwWjFNdkYvYytFaWJ4TWRNSFNxWjlSdTBDL1g4QlYxcHpwMFQv?=
 =?utf-8?B?eUVST3lLYWdma2c2dGNPdkdOOElQVkpkL3E0TW5DeGk0ZEkreVZvZ1BwVGpE?=
 =?utf-8?B?SzgzUzJJK2ZiQW94REZkYU93S3NValBFRWRISE03b0lHcGhudENaWG93TjRx?=
 =?utf-8?B?c1JsMlhnU2NhR292bkNHOWgrQVk4Ym4xNkpmNmJBWWcrUFpCWC9qNDUwV1Ix?=
 =?utf-8?B?dkgyWEM5TE56cmxCSjZSZlBKSHVPSE9EeWlMWElKTUJLSERKR1ZiY0NWUFRE?=
 =?utf-8?Q?6RQS3AQmQEiTDuM/a27PgECD4?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 023eab80-4c6a-479a-592d-08da54e8446a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:16:18.4562
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W53SCu0ldH3VE3jJ0rTDPvBqr/H9KWavxLZzZu22r4FZS9qv0eBHR3/6XE9syF844aqQx8l0TTUD5VIH5Lri+w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8329

On 22.06.2022 03:23, Demi Marie Obenour wrote:
> @@ -1051,6 +1110,62 @@ static void __init efi_set_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop, UINTN gop
>  #define INVALID_VIRTUAL_ADDRESS (0xBAAADUL << \
>                                   (EFI_PAGE_SHIFT + BITS_PER_LONG - 32))
>  
> +static void __init efi_relocate_esrt(EFI_SYSTEM_TABLE *SystemTable)
> +{
> +    EFI_STATUS status;
> +    UINTN info_size = 0, map_key;
> +    unsigned int i;
> +    void *memory_map = NULL;
> +
> +    for (;;) {

Nit: Style:

    for ( ; ; )
    {

> +        status = efi_bs->GetMemoryMap(&info_size, memory_map, &map_key,
> +                                      &efi_mdesc_size, &mdesc_ver);

Unless you have a reason to (which I don't see), please don't alter
global variables here. 

> +        if ( status == EFI_SUCCESS && memory_map != NULL )
> +            break;
> +        if ( status == EFI_BUFFER_TOO_SMALL || memory_map == NULL ) {

Nit: Brace placement.

> +            info_size *= 2;

Doubling the buffer size seems excessive to me. If you really want it
like that, I think this deserves saying a word in the description. As
said before, the same increment as in efi_exit_boot() would look best
to me, for consistency.
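For illustration, the grow-and-retry pattern being suggested can be sketched in self-contained C. Here get_map() is an invented stand-in for efi_bs->GetMemoryMap(), and the 4096-byte requirement, names, and slack value are all hypothetical; the real code would use the EFI types and status codes.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum status { SUCCESS, BUFFER_TOO_SMALL };

/* Hypothetical stand-in for efi_bs->GetMemoryMap(): pretends the map
 * needs 4096 bytes and reports that size back on failure, as the real
 * service does. */
static enum status get_map(void *buf, size_t *size)
{
    const size_t needed = 4096;

    if ( buf == NULL || *size < needed )
    {
        *size = needed;
        return BUFFER_TOO_SMALL;
    }
    memset(buf, 0, needed);
    return SUCCESS;
}

/* Retry with a small fixed slack on top of the reported size (the map
 * can grow between calls, since allocating the buffer itself may add
 * descriptors), rather than doubling the buffer size each time. */
static void *fetch_map(size_t *size_out, size_t slack)
{
    void *buf = NULL;
    size_t size = 0;

    for ( ; ; )
    {
        enum status s = get_map(buf, &size);

        if ( s == SUCCESS && buf != NULL )
        {
            *size_out = size;
            return buf;
        }
        free(buf);
        buf = NULL;
        if ( s != BUFFER_TOO_SMALL )
            return NULL;
        size += slack;
        buf = malloc(size);
        if ( buf == NULL )
            return NULL;
    }
}
```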

> +            if ( memory_map != NULL )
> +                efi_bs->FreePool(memory_map);
> +            status = efi_bs->AllocatePool(EfiLoaderData, info_size, &memory_map);
> +            if ( status == EFI_SUCCESS )
> +                continue;
> +        }
> +        return;

Perhaps emit a message?

> +    }
> +
> +    /* Try to obtain the ESRT.  Errors are not fatal. */
> +    for ( i = 0; i < info_size; i += efi_mdesc_size )
> +    {
> +        /*
> +         * ESRT needs to be moved to memory of type EfiRuntimeServicesData
> +         * so that the memory it is in will not be used for other purposes.
> +         */
> +        void *new_esrt = NULL;
> +        size_t esrt_size = get_esrt_size(efi_memmap + i);
> +
> +        if ( !esrt_size )
> +            continue;
> +        if ( ((EFI_MEMORY_DESCRIPTOR *)(efi_memmap + i))->Type ==
> +             EfiRuntimeServicesData )
> +            return; /* ESRT already safe from reuse */

"break" here so that ...

> +        status = efi_bs->AllocatePool(EfiRuntimeServicesData, esrt_size,
> +                                      &new_esrt);
> +        if ( status == EFI_SUCCESS && new_esrt )
> +        {
> +            memcpy(new_esrt, (void *)esrt, esrt_size);
> +            status = efi_bs->InstallConfigurationTable(&esrt_guid, new_esrt);
> +            if ( status != EFI_SUCCESS )
> +            {
> +                PrintErr(L"Cannot install new ESRT\r\n");
> +                efi_bs->FreePool(new_esrt);
> +            }
> +        }
> +        else
> +            PrintErr(L"Cannot allocate memory for ESRT\r\n");
> +        break;
> +    }

... you can free the memory map here.
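The control-flow change being asked for, with every exit from the scan funnelled through one cleanup point, can be sketched abstractly. struct desc and the type constants below are invented stand-ins for EFI_MEMORY_DESCRIPTOR and its memory types; only the break/free structure is the point.

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-in for an EFI memory descriptor. */
struct desc {
    int type;      /* RUNTIME_DATA or LOADER_DATA below */
    int has_esrt;  /* does this region contain the ESRT? */
};

enum { RUNTIME_DATA = 1, LOADER_DATA = 2 };

/* Scan the map for the ESRT; return 1 if it needed relocating.  Both
 * the "already safe" and the "relocated" cases leave the loop via
 * break, so the map buffer is freed at exactly one place below. */
static int scan_and_free(struct desc *map, size_t n)
{
    int relocated = 0;
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( !map[i].has_esrt )
            continue;
        if ( map[i].type == RUNTIME_DATA )
            break;          /* already safe from reuse; nothing to do */
        relocated = 1;      /* relocation work would go here */
        break;
    }

    free(map);              /* single cleanup point */
    return relocated;
}
```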

> @@ -1067,6 +1182,13 @@ static void __init efi_exit_boot(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *Syste
>      if ( !efi_memmap )
>          blexit(L"Unable to allocate memory for EFI memory map");
>  
> +    status = SystemTable->BootServices->GetMemoryMap(&efi_memmap_size,
> +                                                     efi_memmap, &map_key,
> +                                                     &efi_mdesc_size,
> +                                                     &mdesc_ver);
> +    if ( EFI_ERROR(status) )
> +        PrintErrMesg(L"Cannot obtain memory map", status);
> +
>      for ( retry = false; ; retry = true )
>      {
>          efi_memmap_size = info_size;

What is this change about? If it really is needed for some reason, I
further don't see why you don't use efi_bs (it cannot be used inside
the loop, but is fine to use ahead of it).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:23:32 2022
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
Date: Thu, 23 Jun 2022 07:23:02 +0000
Message-ID: <99E7CA0A-B87F-40D3-BE15-AA344AFB9855@arm.com>
References:
 <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop>
 <FE2CD795-09AC-4AD0-8F08-8320FE7122C5@arm.com>
 <alpine.DEB.2.22.394.2206221445520.2352613@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206221445520.2352613@ubuntu-linux-20-04-desktop>

Hi Stefano,

> On 22 Jun 2022, at 22:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 22 Jun 2022, Bertrand Marquis wrote:
>> Hi Stefano,
>> 
>>> On 22 Jun 2022, at 01:00, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Fri, 10 Jun 2022, Bertrand Marquis wrote:
>>>> The cppcheck MISRA addon can be used to check for non-compliance with
>>>> some of the MISRA standard rules.
>>>> 
>>>> Add a CPPCHECK_MISRA variable that can be set to "y" on the make
>>>> command line to generate a cppcheck report including cppcheck MISRA
>>>> checks.
>>>> 
>>>> When MISRA checking is enabled, a file with a text description
>>>> suitable for the cppcheck MISRA addon is generated from the Xen
>>>> documentation file which lists the rules followed by Xen
>>>> (docs/misra/rules.rst).
>>>> 
>>>> By default MISRA checking is turned off.
>>>> 
>>>> While adding cppcheck-misra files to gitignore, also fix the missing /
>>>> for the htmlreport gitignore entry.
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> 
>>> Hi Bertrand,
>>> 
>>> I tried this patch and I am a bit confused by the output
>>> cppcheck-misra.txt file that I get (appended.)
>>> 
>>> I can see that all the rules from docs/misra/rules.rst are there, as
>>> they should be, together with the one-line summary, but there is also
>>> a bunch of additional rules not present in docs/misra/rules.rst,
>>> starting from Rule 1.1 all the way to Rule 21.21. Is that expected?
>> 
>> To make cppcheck happy I need to give a text for all rules, so the
>> python script generates a dummy sentence for the MISRA rules not
>> declared in rules.rst, to prevent cppcheck warnings. To keep it simple
>> I just did that for main and sub numbers 1 to 22.
>> 
>> So yes, this is expected.
> 
> No problem about the dummy text sentence. My question was: why are all
> those additional rules listed?
> 
> If you see below, the first few rules, from 2.1 to 20.14, are coming
> from docs/misra/rules.rst. Why are the other rules afterwards, from 1.1
> to 21.21, listed, and where are they coming from?

Those are dummy entries generated by the python script.

> 
> Is it because all rules need to be listed? And the ones that are enabled
> are marked as "Required"?

If a rule is not listed in the file, cppcheck will emit a warning.

> 
> I take it we couldn't just avoid listing the other rules (the ones not
> in docs/misra/rules.rst)?

I could, but then each cppcheck invocation would output a warning for
each rule that has no description in the generated file.
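For context, the rule-texts file consumed by the cppcheck MISRA addon looks roughly like the sketch below: a "Rule X.Y" header line followed by its description. This is an illustrative sketch only; the exact layout and the wording of the dummy entries are whatever the python script emits.

```
Rule 2.1
Required: A project shall not contain unreachable code.
Rule 1.1
No description for this rule in docs/misra/rules.rst.
```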

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:23:35 2022
Message-ID: <e16b3b4b-45f3-a520-0360-c1d59602469b@suse.com>
Date: Thu, 23 Jun 2022 09:23:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v10 0/2] xen: Report and use hardware APIC virtualization
 capabilities
Content-Language: en-US
To: Jane Malalane <jane.malalane@citrix.com>
References: <20220413112111.30675-1-jane.malalane@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220413112111.30675-1-jane.malalane@citrix.com>

On 13.04.2022 13:21, Jane Malalane wrote:
> Jane Malalane (2):
>   xen+tools: Report Interrupt Controller Virtualization capabilities on
>     x86
>   x86/xen: Allow per-domain usage of hardware virtualized APIC
> 
>  docs/man/xl.cfg.5.pod.in              | 15 ++++++++++++++
>  docs/man/xl.conf.5.pod.in             | 12 +++++++++++
>  tools/golang/xenlight/helpers.gen.go  | 16 ++++++++++++++
>  tools/golang/xenlight/types.gen.go    |  4 ++++
>  tools/include/libxl.h                 | 14 +++++++++++++
>  tools/libs/light/libxl.c              |  3 +++
>  tools/libs/light/libxl_arch.h         |  9 ++++++--
>  tools/libs/light/libxl_arm.c          | 14 ++++++++++---
>  tools/libs/light/libxl_create.c       | 22 ++++++++++++--------
>  tools/libs/light/libxl_types.idl      |  4 ++++
>  tools/libs/light/libxl_x86.c          | 39 +++++++++++++++++++++++++++++++++--
>  tools/ocaml/libs/xc/xenctrl.ml        |  7 +++++++
>  tools/ocaml/libs/xc/xenctrl.mli       |  7 +++++++
>  tools/ocaml/libs/xc/xenctrl_stubs.c   | 17 ++++++++++++---
>  tools/xl/xl.c                         |  8 +++++++
>  tools/xl/xl.h                         |  2 ++
>  tools/xl/xl_info.c                    |  6 ++++--
>  tools/xl/xl_parse.c                   | 19 +++++++++++++++++
>  xen/arch/x86/domain.c                 | 29 +++++++++++++++++++++++++-
>  xen/arch/x86/hvm/hvm.c                |  3 +++
>  xen/arch/x86/hvm/vmx/vmcs.c           | 11 ++++++++++
>  xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++--------
>  xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
>  xen/arch/x86/include/asm/hvm/hvm.h    | 10 +++++++++
>  xen/arch/x86/sysctl.c                 |  4 ++++
>  xen/arch/x86/traps.c                  |  5 +++--
>  xen/include/public/arch-x86/xen.h     |  5 +++++
>  xen/include/public/sysctl.h           | 11 +++++++++-
>  28 files changed, 281 insertions(+), 34 deletions(-)
> 

Just FYI: it has been over two months since v10 was posted, and acks are
still missing. You may want to ping the respective maintainers so this
can make progress.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:29:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354465.581570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HGs-0007QN-1B; Thu, 23 Jun 2022 07:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354465.581570; Thu, 23 Jun 2022 07:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HGr-0007QG-UE; Thu, 23 Jun 2022 07:28:57 +0000
Received: by outflank-mailman (input) for mailman id 354465;
 Thu, 23 Jun 2022 07:28:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aspq=W6=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4HGr-0007QA-6l
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:28:57 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70074.outbound.protection.outlook.com [40.107.7.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 23fac311-f2c6-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 09:28:56 +0200 (CEST)
Received: from DB6PR0201CA0009.eurprd02.prod.outlook.com (2603:10a6:4:3f::19)
 by AM0PR08MB3315.eurprd08.prod.outlook.com (2603:10a6:208:5c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 07:28:53 +0000
Received: from DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::18) by DB6PR0201CA0009.outlook.office365.com
 (2603:10a6:4:3f::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Thu, 23 Jun 2022 07:28:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT056.mail.protection.outlook.com (100.127.142.88) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Thu, 23 Jun 2022 07:28:53 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Thu, 23 Jun 2022 07:28:52 +0000
Received: from f7be18b06458.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1920B1DE-94C7-4FB3-B2E6-BBA7A1EDD0C5.1; 
 Thu, 23 Jun 2022 07:28:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f7be18b06458.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 23 Jun 2022 07:28:46 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM7PR08MB5319.eurprd08.prod.outlook.com (2603:10a6:20b:dc::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Thu, 23 Jun
 2022 07:28:44 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:28:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23fac311-f2c6-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=dRP0eEd5zBRGeA9VoQOyn4JlYqTII/0yA/EZpvqLArcLLBI3QCSiKs17ijxqhTU3mRVcyX2L1FWI+Z25nq/y4qizRyH6f6rMSqvqtT6A/tWNusq/e6JtUseZtCwISVKHPNB8lCcgrWgmYgH946OaEvLqTwl0zIi2WGfT9JvLZ9xwOh4gef+6ZnsiwUL/PAiBCAVCi0doJ1zYQv4hY5Yzos2syR8EguhWD0PCRGDne/UTIUnuZaz+L2Nl91kVP09Xech4e+mgth+NbIdGlDq9+AE7HNIPmKqYT/vuXOo9UMKNwNc/Dfdefnx2L5AKVvAJWb0UO+FpQzCdYzZgRr8zSQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q2ynpdqnA17p//ZIIwi5zlfvptLNFe7iUYUZOX7A33k=;
 b=VmJGm20jWb5EppsBCTxwM1b6TntRemP7g9b54OCCVAbLA0DUxwZxev59m/Fph4GH0R0/HrwZq0s3D0fxpwhMMeN6OSyBPU7J2q22LREC4/IYGKfq2gd9lyrYI9r9MIcrrGBdvqyHw0p/7yPePwhO1HRYFmJrTIG0QzdEE8gN55NrwxVE/6cmTq2PxOQQIHqQO/RVfVT9/1V26T7YT+P+B6fv+dIsUkH+wtn/l9BJ5PYa1krfqVY/aKbh1KhyFJN/w5Hl1NIColIgLLNhvfOc+7QgOzXVyisTjPXxnvCyTjafGU+U25YZ0OrMFdH+i6InoGa3zsPBmAa9xSrfTDT/ww==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q2ynpdqnA17p//ZIIwi5zlfvptLNFe7iUYUZOX7A33k=;
 b=mxbiaIchI7buuar3UKA4TFVhFyzNjP/Dhg3qxxZ6MsjC5qcU1n/Fk43ANiKIWv9FudTsA+HeISk+eaIk4zXiLRPzsApbMsYeAJnHiL1IQPRzFDye/W5Z8c/B5kmK7lfhiVrOo9IAdEtlgm8KBjWqmlMMWBXl9gMMVCMCbogto2I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1dbeb9f98070b095
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DDPQBql9p0hQCQt40EB7QwhrqsZEUTKlrHJ8ue7NfwCNMLg7+biek2gndd7vPME/0edqfuuu5BzXq9U4GyTmuRePmKcLQbtuBXJk0LGBBw0FYQyJm+3ranHk/QYFtDj2/Rc+H4iJTU6lNfPCicD9jVepUK1PTGkIV+6t1WfX2yBM5LB9fphse4vD68/v3J23bRJCWr0a++CNUMC1/GVsH1z0/zuONmDzuRH2S76PXIAlvSxTijDAcv9Te9K5w3oeqZZoQ2Zd8bhZL2o+SCEDPvzV+PStDIvSUFYUnuPq27Cj3koyZFdNa7NJ+iz2Ev//OJO4hvHL2HLIzBdMG2CqZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q2ynpdqnA17p//ZIIwi5zlfvptLNFe7iUYUZOX7A33k=;
 b=dvm8klWj9SYmzsJNWpzzclbbHqQOvtpaMOt04CDd8KI7VSSlxL7cTDXSqjrCE6rft202K8tgEJgAieyuqLfXKejkeTeFMPeDPqBFetAF5qHGwAyUjFuQzDk+ASWFxdszc//Nwm7wGaIBeNAv0bTdMKKhZV/RXNi3btne9ef462cUp0ay6BaoIkEEPtY3sOR9jYQB6vK5x9Uwkj0B/2IpBqp56XH+ExIyn6TCyE6fGkDEsSU+3jxIRtDXUhrqseCr4/xLqRMy3lajfHGEgMglwVDDLXBMamSzCk4KpCb304Gyq3Mts6QIIuzXM+D+aBERuG9qEQ4plRGdN1F7UgmGmA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q2ynpdqnA17p//ZIIwi5zlfvptLNFe7iUYUZOX7A33k=;
 b=mxbiaIchI7buuar3UKA4TFVhFyzNjP/Dhg3qxxZ6MsjC5qcU1n/Fk43ANiKIWv9FudTsA+HeISk+eaIk4zXiLRPzsApbMsYeAJnHiL1IQPRzFDye/W5Z8c/B5kmK7lfhiVrOo9IAdEtlgm8KBjWqmlMMWBXl9gMMVCMCbogto2I=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Christopher Clark <christopher.w.clark@gmail.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Michal Orzel
	<Michal.Orzel@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Daniel Smith
	<dpsmith@apertussolutions.com>, Roger Pau Monne <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Thread-Topic: XTF-on-ARM: Bugs
Thread-Index:
 AQHYhWH1+fjNtvukV0adEqGEdTB7sq1ZxCsAgAAXIgCAAAnTAIAAf4aAgAD4xwCAAEImgIAAAyaAgABEQYCAALP9gA==
Date: Thu, 23 Jun 2022 07:28:43 +0000
Message-ID: <D0C78CF7-FF51-42E5-92C2-02C4C71187D6@arm.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
 <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
 <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
 <A06EA6F6-BBB5-4FDC-BEA0-E5C6EB6B445B@arm.com>
 <CACMJ4Gb4CPDP5OmW+D50QCALvVo82rvw_7yO0ze0u5fh6ey_Pw@mail.gmail.com>
In-Reply-To:
 <CACMJ4Gb4CPDP5OmW+D50QCALvVo82rvw_7yO0ze0u5fh6ey_Pw@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: af14352f-bd84-497a-440f-08da54ea0642
x-ms-traffictypediagnostic:
	AM7PR08MB5319:EE_|DBAEUR03FT056:EE_|AM0PR08MB3315:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3315E9E154EB2383EA5B90DA9DB59@AM0PR08MB3315.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 RvgdO/+pFZy3KE0ZpPnAXKmfQXKx3oCdmHMHqrUiC1FQxQqN4t/6GphJS3FyqUsw2MqDZHd3EL7wsUqMQkzdMgqVhG+UV6fXtEzHGSa0PCO0tnihNMrs4XxvAwUTHGJbJVDtlI8kA5tCXgiC4ek589pJL3gE5rM88t5MNlRWD1s8RwrwbdfMVxI4RUTLDGJyvOgl22WHb2kVQRTzCtndTRna1RhzaNzN7DAMPNVwbb9D01GZHTPrnJ2+pGI0vd3UgyuOYwLgc9iKopY06lWY13ovUdOihbWdbkSm57dKsZwNrUQcAEdaNlhJZt5u8asoGvjifcW6VcLnagrbkTcW4wzwdlX0Qt2yciW8oVdHGNBK98vth3w65dRUGZaGcUU0LsIA1sIJaJo++LLZ0H3ZlVLDJnmpU2E/NkPJhHFDe53VuCAk89m3EAgskOFg4qf3cIySJ+ICfgu4OX2MtqlzdeDb3jYsAYjHj5ybVLFk7O6FrORne2crarr+fjotbyFEpLbJ55dQwbtGXim/l/svrgZPykONH/Hw+HieNz9jj3fMN2vhblUXD4T4bQxMmDKxBj7gIRwmOo9DuWWP6OhPdLIO/wrc6StupiMQnLVuZG+s5CnKG3TpdgmL5wqApU9uriFHV5955lRRi3YdrGUAV6dV/IrcgHjaOFcTmc0EHoHwJPmwRhIplfNgJaTwmNyiFTQDBZwX2oKK+ZtTxcSq0o3b1D0vQ8BY6YcyTrdfWbUjDj6NeXDgD5vzedDDtC8oSP5ZSiKq/noERCqIl31YTxrrIs7PrFH1TSHXe7B8C1n4hIRAjE14zidDQZ0lqUhw14+Xpdw6fv4KPmY80cZP9w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(366004)(136003)(39860400002)(376002)(346002)(8676002)(71200400001)(66946007)(4326008)(54906003)(66476007)(66556008)(478600001)(86362001)(76116006)(6916009)(6506007)(36756003)(6512007)(64756008)(91956017)(186003)(83380400001)(2616005)(41300700001)(53546011)(66446008)(5660300002)(6486002)(33656002)(38100700002)(26005)(316002)(122000001)(8936002)(2906002)(966005)(38070700005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1AAFBA371139884A9726D7BACAEAF290@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5319
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	29654a63-6a1e-44e0-b602-08da54ea00ca
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JVkCoEHDUzs0mPApxWeJ0LyFqke9Asf/+GrQBsxpoPPsDZZiz9HCqx8OcJ1fyyIZhpZuPoFHTz4nMdXxn02dLUtDR0BM9YXDBlECZUfDTbi/ucBnPZf2/8lucZNuNPq+BhkvyPV+P0MOhESTrcxIdbgnZPmm0vx0FdECvei5CAY4mgwM5BdZF0e8tdMxWFSe7iV4e/GaS0Uw/RbjHRX/ebkHs0ey0OtTyX5BxRyO5OkT2CqSUmiUMLb5iqprF0vuJ33DmMYKQ/B4OYcO8NBDFycSAhJ0ETgsM0Blu94Wvvjw9dFDWQJwFLWXCEi1Pe2czWAILC9w/iUsmja11veHBSM/ykWyXNTn280gmAOPVjPjOUof7DScHuqu5uoa037rfh4lx85km2a5bA6yBaWjGluIgJGE1PCLaX6JhyxaWXQ2IY7sU1m98ihAuqdPvAnBxI8JnnhMtw7UdqU7CX0juMXhIWg7AgaegZCF17inoGPCymewRmcG2KHaZdeFzz2yrZ2ljVS9klkHibfVrrvM6WhUvxfhXUCw+fHniJs5s6ju/h0NU0U61dvRAOi55EkAYnGysjc14dsIcY6Ts/GFyUXi0x6UEFiICzNpx8pG9PtzlIwaPtkUIxHIde11eJcgIXgSu7QHBs5cJHu/h1caJilHwO8qnty9jlQoEUKkEu0zcSQu/O4qfE/GzJLguNcEx8PLhvJoZpUwCPOcWhcVykrJ9ZOXpfTiTgaghu6dOx6cdEKvd7xfwGAp/O50zXGZc7yQNJHdLVHcgdZ9BGEYRYi6Wup3gKf8OGUdBAavfpQ=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(376002)(39860400002)(396003)(136003)(346002)(46966006)(40470700004)(36840700001)(5660300002)(356005)(6512007)(478600001)(966005)(33656002)(53546011)(8936002)(26005)(6862004)(6506007)(40460700003)(316002)(2906002)(41300700001)(86362001)(6486002)(36860700001)(82740400003)(81166007)(336012)(40480700001)(186003)(82310400005)(107886003)(2616005)(54906003)(70206006)(4326008)(83380400001)(36756003)(8676002)(70586007)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:28:53.0433
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: af14352f-bd84-497a-440f-08da54ea0642
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3315

Hi Christopher,

> On 22 Jun 2022, at 21:44, Christopher Clark <christopher.w.clark@gmail.com> wrote:
>
> On Wed, Jun 22, 2022 at 9:40 AM Bertrand Marquis
> <Bertrand.Marquis@arm.com> wrote:
>>
>> Hi Andrew,
>>
>>> On 22 Jun 2022, at 17:28, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>>>
>>> On 22/06/2022 13:32, Bertrand Marquis wrote:
>>>> Hi Andrew and Christopher,
>>>>
>>>> I will not dig into the details of the issues you currently have
>>>> but it seems you are trying to re-do the work we already did
>>>> and have been using for quite a while.
>
> Hi Bertrand - I apologise if it seems that way, and for the pace of
> this being slower than you had been expecting to see.
> I don't think I have actually been re-doing it and I'm grateful that
> you have made your team's work available. I am working to get what you
> need integrated into the upstream.

We have not been informed of anything, nor asked to review any patch
you are working on.
The only information we have seen is a claim that there is a bug.

>
>>>> Currently we maintain the xtf on arm code in gitlab and we
>>>> recently rebased it on the latest xtf master:
>>>> https://gitlab.com/xen-project/people/bmarquis/xtf
>>>>
>>>> If possible I would suggest starting from there.
>
> Thanks - I will add this to the sources I am working with.

Ok

>
>>>
>>> Sorry to be blunt, but no. I've requested several times for that series
>>> to be broken down into something which is actually reviewable, and
>>> because that has not been done, I'm doing it at the fastest pace my
>>> other priorities allow.
>>
>> You have not requested anything; we have been asking for a year
>> what we could do to help, without getting any answer.
>
> At Andy's request I had been looking into verifying the minimal
> necessary pieces to get the 32-bit Arm platform implementation to
> support a minimal stub test and also the XTF infrastructure (eg.
> printf, xtf return code reporting) that wasn't present in the posted
> work. The aim for doing that work was to build my familiarity with it
> and inform judgement involved in ensuring that the initial pieces that
> are merged into XTF have a maintainable structure to support each of
> the architectures (and configurations of each) that we need.
> It's taken longer than I wanted and it is clear that there is urgency
> to getting 64-bit Arm support integrated.

So you do not want our help, and our code in its current form is not
acceptable.

>
>>
>>>
>>> Notice how 2/3 of the patches in the past year have been bits
>>> specifically carved out of the ARM series, or improvements to prevent
>>> the ARM series introducing technical debt. Furthermore, you've not
>>> taken the "build ARM in CI" patch that I wrote specifically for you to
>>> be part of the series, and you've got breakages to x86 from rebasing.
>>
>> Which patch? Where? There was no communication about anything like that.
>>
>>>
>>> At this point, I am not interested in seeing any work which is not
>>> morphing (and mostly pruning) the arm-wip branch down into a set of
>>> clean build system modifications that can bootstrap the
>>> as-minimal-as-I-can-make-it stub.
>>
>> You cannot expect us to monitor all of the branches that you are creating
>> and simply rework what we did whenever you do something on some branch.
>>
>> We went through what you requested using GitHub, and at almost every
>> Xen Community Call we asked what we could do to go further, without
>> getting any answer.
>
> I will continue to be reachable via the Community Calls. I will have a
> better understanding of what steps are needed next after reviewing the
> branch that you have posted.
>
>> You are not interested in us contributing to XTF; this is understood.
>
> No, that's really not the case; your contributions are highly valued.
>
> There's a gap that needs to be closed here between the needs of the
> contributors (ie. you guys), to have platform support working and
> ability to make incremental contributions for new tests, and what the
> maintainer is looking for: a structure that implements an intended
> design -- that is difficult for contributors to know without having
> documentation for it, which is again time-consuming to produce --
> supporting the many target configurations in a coherent fashion, and
> introduced in small, concise logical steps. It's compounded by the
> fact that this is intricate system software where hardware
> platform-specific details are critical for reviewers and contributors
> to understand and implement exactly correctly.
>
> So: I'm working on closing the current gap, aiming to make meaningful
> progress in the short term and can communicate with you more clearly
> as to the status of that in the coming weeks.
> I also think that once the initial platform support is merged, ongoing
> contributions will be both easier to produce and easier to review, to
> the advantage of all.

We will wait for all of this to be available, and at that stage we will
decide whether to use it or keep our own fork.
The XTF port was done as a base for developing Xen tests; since it is
now being redesigned, and we have no idea how, I will no longer have
resources to assign to work on this.

All this could have been done in the open, discussing the needs and
involving the people who have tried to make this happen in the first place.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:33:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:33:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354472.581580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HKm-0000Rc-KE; Thu, 23 Jun 2022 07:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354472.581580; Thu, 23 Jun 2022 07:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HKm-0000RV-HF; Thu, 23 Jun 2022 07:33:00 +0000
Received: by outflank-mailman (input) for mailman id 354472;
 Thu, 23 Jun 2022 07:32:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4HKk-0000RJ-Se
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:32:58 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2046.outbound.protection.outlook.com [40.107.22.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b3cc18f5-f2c6-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:32:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB6073.eurprd04.prod.outlook.com (2603:10a6:10:c3::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 07:32:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:32:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3cc18f5-f2c6-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E9krQ9/WU5DwNVJjXALtiErkJFB6TrXiC8TDJOiyC/IOxV6cjM+ydUrzzV23Qdndcr5iu57TArpVAW0MxFOemtWkFO0MRfEXjW7gSaF8ZdeFr8MMZJ3R7SiLdCLyy/OkyTPayKcpVRQiaeG/ATyLKVbS1FLT6ERDGuQqbZd5B1We83eJC02rLczHlLg63+gaxbSOKt8J1cBSTjqWEbq0pTWjeyz4G4qpmPFXbm4+HStoK5SLgFSjzPJU4B28B8q5n1Yd5ITmijSB9vtBLtMLdrLoGiH3PR/Xs8cHh5fAZ2tpvpPkkonQMuiPuMUs03lt32OaYlOu2nntZyQJRXTuPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5b3Ww6RDJ86vfpNsdUp4sKDWUbLmUiZLD1rFJ+Awy0Q=;
 b=hdy7k9hxHd1u0CaHsf3MRLH+ZkCUX7BPHHwXqGOb1gT+35zEXuMBeqthj/Y2M3xfD4j7W+Py15eePaH3zxLZlJroLHfvvRsmnozfW//R+nHnJIxi1iTXiyuWNkJICL8v69I/4YmmBKWo0djM2ab8UEPLvIDe6W72LFWFBlHlyfQUbgjr9+hJBeYFUONKn+ItwJOu9GVtjPrQgbCgftM8Nplp67Vh/4lMqJstXZTLAsEi3+KR9fPqxFnEbmOiwvpGMaesDtGo9idULb+A9CjQolDEhdDmKXtuXzTFpCxKd+kW09AmicESAN+u/4crtUPY/ioqxczOwbiMV8FFIe+yPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5b3Ww6RDJ86vfpNsdUp4sKDWUbLmUiZLD1rFJ+Awy0Q=;
 b=aYYlS8/FIukdJRsx3vVnrm6EReMj2f4X6J+UV32TxxhuPA4Qri83OMJjr8OwFguuiVSvkj0GZpUB30qIhkz8fG5FDZN0Pu8TryKr1uUlMUJHboeUBOFrUsX+hvomWHyT8aIRlqtEocRMf+dyuyKtXCkT23KxHJ7KCPuwGVhHW8HIAx0eFKqu6rOPt/QxFAUAfbZsB7uFDNYc7DEcOEEr4H1/bpG7Ji4sWC9tyPNjAgns10x/p3KprMVkKQvP7T3HDYOn7CBKJKOHTKN+YUI6YL7FPHQtU7C1wQT/aFTUiA8FM7WdIqu5KHnX/QoJ7PuD5nhnItBZkMNeRtIDrUxybQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e6c10adc-27a8-2f31-7d84-6aee916c56bf@suse.com>
Date: Thu, 23 Jun 2022 09:32:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>, roberto.bagnara@bugseng.com
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
 <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
 <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
 <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0033.eurprd06.prod.outlook.com
 (2603:10a6:20b:463::25) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5967c2d9-456c-4149-8bc8-08da54ea94e2
X-MS-TrafficTypeDiagnostic: DBBPR04MB6073:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<DBBPR04MB6073E63B445925015E907890B3B59@DBBPR04MB6073.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IDbrkb4cvJIxw9hjpnHXyrxKHnVUcGHFtULvQNBAtbmyKYehpL1vGQVp6DiUxcmcM68NPk23uj4MwElteqQ/UEnCGqzLzNrPUa91pwzAdi7TYMR45deuEyZCLwf8/TPrkIUSku28fvt82sFN7PE65J3ihrwvB9tqWIF/Wd4tFHWKB6Y7cb5OZfTfDT0pJSrdSMaDQtf/wwx8VZ8y/YBsETA0NCOKyGks9mh8/6n6w7YG/KzreTO4GxHMkDXVCh+jVEHhXdzluaAc2bGPISFXuLvXHidlOJEf4kjmHSwOAiUbFZRhPTpVh6OPy3AeO6gsPthCDTmXEQHncciog9+vq4a+Wy5PzTY9lcuTznLKfY5BvFFoEqL1RDDs5cKNU1lwkq+mcyCbMtfBLFM8/cU77dGt/mhZn9TQa08SGEs2UH1sfwsgSVbeePGcoKE1/+j1VDijBc4Pb5edlW/SqJjG7MtW1VV6TKuuegWJvzNyxkifob7jLYxWXsS89UGqwPl/FGyXwyRKzFFT8lWF9DyWtUpM4nE2SIhhyeThtwJFcvZp+ZGmXh8+ID08ZzZrpVNpu8A/b4IeYrckEv9QNo5IkoQrvCIEnyg9Z4XUpqFwMEImfLyw6GIabULCKrFNvHaeusM1leNDw+8hOapSp3LzzNQT6Uv0/FkKr+gxDq2CPsDun8Vaga8Q7JdiVi5vX+EmJ43He3AjVSHJqmHkC4BwH2kYvSUzo8NIKmmkNoPwngbB832gL9tEDfmH+1+IDw0xpabpFxBL3iWw9WjJSAI34B9G5ILc0eyivsrw9/lLaptPFIiaGH53S8OU3oNnPOx5
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(39860400002)(136003)(396003)(366004)(376002)(346002)(66556008)(8676002)(66476007)(316002)(4744005)(86362001)(5660300002)(4326008)(31696002)(31686004)(36756003)(66946007)(54906003)(7416002)(2906002)(8936002)(6512007)(6486002)(478600001)(26005)(966005)(53546011)(38100700002)(6506007)(186003)(41300700001)(2616005)(6666004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QjdXclU5a09oYkdWUHRTUFRpUlc3S2xCS2I1dG1GdTQ5UnIxQnc2aHNkK0Ez?=
 =?utf-8?B?SDAwazluRk9TUy9pdmYzTGltN3Zua2o3M0YvM0wyN25TN3ZFQkZFdUlGSVla?=
 =?utf-8?B?ZldXKzIrdzdNcmFuaE93TW9GU1AxeEVaMjNOYU5qRkduM1lIdEpCWEdqWHlL?=
 =?utf-8?B?UkFuQldPNk1mblBwUW1iQXlrYWZxcDJpSjVIT1B5ejlEMlRBMHJtK1BFK3Js?=
 =?utf-8?B?M0NZV2tiNnUwU3ZXTzZ0bHVzZ3lZOEFnUWNjSUpkTWpwUnVUSjdOaHNBcUVm?=
 =?utf-8?B?WllCUHRQT1dFVFVEN2tqOENwbE1STENxWDVnL2lnVXV3dzgyYjZnSXhUQWtn?=
 =?utf-8?B?Y1I0RWxWOUVIcGdWS2dNaExFUVJBbzBlckdaSEd6aFhSL2R0MGRtdHUvbUtN?=
 =?utf-8?B?eXdxQ2lEQkY5bW80QlVxSm1PdDNEcEk3dmpBYWdoeWRYT25SeHlxNm1SNXZh?=
 =?utf-8?B?YW1UZmhrOWhPTUROWnNiYzJ1Vkd0RThqN0k3dVA3dFYzR1lkWU1BOWVJK2oz?=
 =?utf-8?B?VGNaN3RYdER1UGxLWjh0YWZncW4rcG5XR1dFYlVrQytjNldqVktEQ3k3ZWlG?=
 =?utf-8?B?RkR5L2JYdUhtK28zQjRCdGhxTC9jdmZkZUpJc0szQUVUWGdoejRtNnhpa0hT?=
 =?utf-8?B?VXZsZTJqMkhGRnhwSS9sa0htc0hJOC9HUGdJNG9rSFU2Q3VPbDh1KzZLUk4y?=
 =?utf-8?B?WkJ6UTA4dnpmUGhjZVR5Q0hmQ2I3WWVQTUI1VFhaSFE4ZytiRWRvMENUOVlz?=
 =?utf-8?B?ZkRSSTFaV2V0Sk4xYjNWdjFoNGVZQUhCdHpiTlhQakYxcHRFazRiRVpTalpG?=
 =?utf-8?B?WEJBZzV1NzZvelhLVkNLNHpMWllFS0E2RTFzL0hBcVhBb3pRaWRhUzhUNXNL?=
 =?utf-8?B?QmhERzV3bGpUZDZTWE9VSVVPTENnMXhyWHRoa3ZYUzVyeUVSK3VCTjhocER1?=
 =?utf-8?B?Y2dPSHVIbmdBYmh6akwzUmxpdlpjZUI4VWZkQlhxUzVjL2hxa2ZoankvSUJz?=
 =?utf-8?B?b08rcVF6QnJUalZsNXZsaGtNdVRGaWhLY2JteVRKeGFNa0MyVHpNSEQxVVV0?=
 =?utf-8?B?SHR0QXoveXBYZy9ud2szcjd4aW1ZVTZDcWw3OWU0WHRRSTJQbG52TnFiM3lB?=
 =?utf-8?B?c3ZtSTl0LzlVS1dwNFczZVYvZ044b2hzczBPUk95WGZicURpV3g2Qk9wWkRT?=
 =?utf-8?B?bFd4ZUZOZDVmMTkybnB4VEsyKzQ5N2xrSEt5NzlOWi9mZWhDbHQyOWp6TkUr?=
 =?utf-8?B?N2lhanFUZ29vb0hPTEg4RWFzWHVWQnlRZ2dGbDZnZ3NJeGNhOGZpemZ5cFhX?=
 =?utf-8?B?VmFpaVJxVnhOcjRjOXVjZVhhZEJIUlBIL0cwSkdsaWVsUTFqOXBtNFFoamlB?=
 =?utf-8?B?bHBRNHkxazFLVVJtM0YwOFdhWW9PTXUzc0FRd0x3andCMzRqa0dkamdTYXBS?=
 =?utf-8?B?MVdwejdWKysySExHWWp1NFU0UGcreW9QVExKc205TVlDK0FQRmNxei90KzhT?=
 =?utf-8?B?WWk4QjRCRTFhK01OK3hMWTNCRndTcnNTSW4wZU4rb3VFUmRkeXF4aE9iNkRk?=
 =?utf-8?B?YzBPcG1IbnozQ2I1WFNaWnBzYTE4bnpSKy9kWW5JamRjTjgyS0toNERidTlh?=
 =?utf-8?B?UFZ1aUpwYXkxaXhETUVTbFFnSEFLQk9YeStneXJ3bnFOWE1OMXBabThHQkR5?=
 =?utf-8?B?cVFUZWUxYXZ5N0FrWXU5SUQ3YktmVTErU21ZamFDcjlEekZrckRRNEVRZnhi?=
 =?utf-8?B?Y0lSS0MzZ09PUGcrSjFrbjBMbnZYWVRYSWJ4WkFIQ2JSTUsraUt4bGZpUlQy?=
 =?utf-8?B?R0pSb0xIYUN6WkpENWxqdU1QNDZLeko5cHFRWFg1ZU5GTXd6RGxkN2V1Y0RY?=
 =?utf-8?B?NU42UERpN1pUTUlSYkVpbjArR01DRmlWZWxSeGNza1grNmYzM2ZKL3ZBcXJv?=
 =?utf-8?B?WlFSaENDRUNCb3pya3Z6ZlVZb3k3ZlhXNk5QMkhuUTA0S1lYTnZUUzFIYUMx?=
 =?utf-8?B?V2JGdEtLZnJWSkNWU2tHUkVQeXdnZzF2dStOUjNtSHp3bjFMTkk2V0QwL0Jo?=
 =?utf-8?B?WXE5dG1MU1g3MnVQbWNUdXJNTGYvK2NaRk1sbkloZ3dSd1g3dFowZjMxN2NY?=
 =?utf-8?Q?wE/2HRIOA4vdJQ8+sSl6XDlyH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5967c2d9-456c-4149-8bc8-08da54ea94e2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:32:52.4707
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cyB7iBGc43d1owNBPyzwuQ5zyjU2cpDZ6TWyalEUJOxv0uWVM8iW5oR2Vi4b7KhAm0TU14Mp/CfJZfcilfHekg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB6073

On 22.06.2022 21:23, Stefano Stabellini wrote:
> A quick question about Rule 8.1.
> 
> 
> Michal sent a patch series to fix Xen against Rule 8.1 (here is a link
> if you are interested: https://marc.info/?l=xen-devel&m=165570851227125)
> 
> Although we all generally agree that the changes are a good thing, there
> was a question about the rule itself. Specifically, is the following
> actually a violation?
> 
>   unsigned x;
> 
> 
> Looking through the examples in the MISRA document I can see various
> instances of more confusing and obvious violations such as:
> 
>   const x;
>   extern x;
> 
> but no examples of using "unsigned" without "int". Do you know if it is
> considered a violation?

And if it is, by implication would plain "long" also be a violation?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:37:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354478.581592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HOy-00014r-7P; Thu, 23 Jun 2022 07:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354478.581592; Thu, 23 Jun 2022 07:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HOy-00014k-2h; Thu, 23 Jun 2022 07:37:20 +0000
Received: by outflank-mailman (input) for mailman id 354478;
 Thu, 23 Jun 2022 07:37:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lCxP=W6=bugseng.com=roberto.bagnara@srs-se1.protection.inumbo.net>)
 id 1o4HOw-00014e-GA
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:37:18 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4e3611b2-f2c7-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:37:17 +0200 (CEST)
Received: from [10.0.0.126] (p548cac38.dip0.t-ipconnect.de [84.140.172.56])
 by support.bugseng.com (Postfix) with ESMTPSA id 612B94EE077B;
 Thu, 23 Jun 2022 09:37:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e3611b2-f2c7-11ec-b725-ed86ccbb4733
Message-ID: <8610703e-fd15-bba1-3bb1-cfe038f9b11c@bugseng.com>
Date: Thu, 23 Jun 2022 09:37:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>, roberto.bagnara@bugseng.com
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, jbeulich@suse.com,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
 <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
 <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
 <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
In-Reply-To: <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi there.

Rule 8.1 only applies to C90 code, as all the violating instances are
syntax errors in C99 and later versions of the language.  So,
the following line does not contain a violation of Rule 8.1:

     unsigned x;

It does contain a violation of Directive 4.6, though, whose correct
handling depends on the intention (uint32_t, uint64_t, size_t, ...).

As a side note (still on the theme of the many ways of referring
to a concrete type), Rule 6.1 says not to use plain int for a bitfield,
and to use an explicitly signed or unsigned type instead.
(Note that Directive 4.6 does not apply to bitfield types.)
So

     int field1:2;

is not compliant; the following are compliant:

     signed int   field1:2;
     unsigned int field2:3;

But the following are also compliant, and we much favor
this variant, as spelling out "int" buys nothing
and can even mislead someone into thinking that more bits
are reserved:

     signed   field1:2;
     unsigned field2:3;

I mention this to encourage, as a matter of style, not adding
"int" to bitfield types currently specified as "signed" or "unsigned".

Kind regards,

    Roberto

On 22/06/22 21:23, Stefano Stabellini wrote:
> +Roberto
> 
> 
> Hi Roberto,
> 
> A quick question about Rule 8.1.
> 
> 
> Michal sent a patch series to fix Xen against Rule 8.1 (here is a link
> if you are interested: https://marc.info/?l=xen-devel&m=165570851227125)
> 
> Although we all generally agree that the changes are a good thing, there
> was a question about the rule itself. Specifically, is the following
> actually a violation?
> 
>    unsigned x;
> 
> 
> Looking through the examples in the MISRA document I can see various
> instances of more confusing and obvious violations such as:
> 
>    const x;
>    extern x;
> 
> but no examples of using "unsigned" without "int". Do you know if it is
> considered a violation?
> 
> 
> Thanks!
> 
> Cheers,
> 
> Stefano
> 
> 
> 
> On Wed, 22 Jun 2022, Jan Beulich wrote:
>>>>>>> On 22.06.2022 12:25, Jan Beulich wrote:
>>>>>>>> On 20.06.2022 09:02, Michal Orzel wrote:
>>>>>>>>> This series fixes all the findings for MISRA C 2012 8.1 rule, reported by
>>>>>>>>> cppcheck 2.7 with misra addon, for Arm (arm32/arm64 - target allyesconfig).
>>>>>>>>> Fixing this rule comes down to replacing implicit 'unsigned' with explicit
>>>>>>>>> 'unsigned int' type as there are no other violations being part of that rule
>>>>>>>>> in the Xen codebase.
>>>>>>>>
>>>>>>>> I'm puzzled, I have to admit. While I agree with all the examples in the
>>>>>>>> doc, I notice that there's no instance of "signed" or "unsigned" there.
>>>>>>>> Which matches my understanding that "unsigned" and "signed" on their own
>>>>>>>> (just like "long") are proper types, and hence the omission of "int"
>>>>>>>> there is not an "omission of an explicit type".
> 
> [...]
> 
>>>>>> Neither the name of the variable nor the comment clarify that this is about
>>>>>> the specific case of "unsigned". As said there's also the fact that they
>>>>>> don't appear to point out the lack of "int" when seeing plain "long" (or
>>>>>> "long long"). I fully agree that "extern x;" or "const y;" lack explicit
>>>>>> "int".


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:43:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354486.581603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HV3-0002UX-Ru; Thu, 23 Jun 2022 07:43:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354486.581603; Thu, 23 Jun 2022 07:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HV3-0002UQ-OH; Thu, 23 Jun 2022 07:43:37 +0000
Received: by outflank-mailman (input) for mailman id 354486;
 Thu, 23 Jun 2022 07:43:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4HV2-0002UK-LW
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:43:36 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-eopbgr150040.outbound.protection.outlook.com [40.107.15.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fcbf194-f2c8-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:43:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7806.eurprd04.prod.outlook.com (2603:10a6:102:c9::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 23 Jun
 2022 07:43:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:43:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fcbf194-f2c8-11ec-b725-ed86ccbb4733
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>
Date: Thu, 23 Jun 2022 09:43:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] MAINTAINERS: drop XSM maintainer
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Cc: Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
In-Reply-To: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0521.eurprd06.prod.outlook.com
 (2603:10a6:20b:49d::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8301185d-d56f-4548-dca2-08da54ec1281
X-MS-TrafficTypeDiagnostic: PA4PR04MB7806:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8301185d-d56f-4548-dca2-08da54ec1281
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:43:32.7580
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1t3k9XCOh7qkn0fK9pledgGpV1d+A9M3rcRbevK2Pqxhx5QT4yZ1xcjlqHGDvZEPraRolEtyjyBCq6YnLJjBNA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7806

On 09.06.2022 17:33, Jan Beulich wrote:
> While mail hasn't been bouncing, Daniel has not been responding to patch
> submissions or otherwise interacting with the community for several
> years. Move maintainership to THE REST in kind of an unusual way, with
> the goal to avoid
> - orphaning the component,
> - repeating all THE REST members here,
> - removing the entry altogether.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> We hope this to be transient, with a new maintainer to be established
> rather sooner than later.
> 
> I realize the way I'm expressing this may upset scripts/*_maintainer*.pl,
> so I'd welcome any better alternative suggestion.

Two weeks have passed. May I ask for an ack so this can go in?

Thanks, Jan

> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -648,7 +648,7 @@ F:	xen/common/trace.c
>  F:	xen/include/xen/trace.h
>  
>  XSM/FLASK
> -M:	Daniel De Graaf <dgdegra@tycho.nsa.gov>
> +M:	THE REST (see below)
>  R:	Daniel P. Smith <dpsmith@apertussolutions.com>
>  S:	Supported
>  F:	tools/flask/
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:44:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354494.581625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HW1-00039a-8O; Thu, 23 Jun 2022 07:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354494.581625; Thu, 23 Jun 2022 07:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HW1-00039T-5V; Thu, 23 Jun 2022 07:44:37 +0000
Received: by outflank-mailman (input) for mailman id 354494;
 Thu, 23 Jun 2022 07:44:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0A1Q=W6=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1o4HW0-000394-2t
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:44:36 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 538ea0c9-f2c8-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:44:35 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id y32so31899391lfa.6
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jun 2022 00:44:35 -0700 (PDT)
Received: from localhost.localdomain ([91.219.254.75])
 by smtp.gmail.com with ESMTPSA id
 w24-20020a194918000000b0047255d21190sm2869071lfa.191.2022.06.23.00.44.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 23 Jun 2022 00:44:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 538ea0c9-f2c8-11ec-b725-ed86ccbb4733
X-Received: by 2002:a05:6512:2398:b0:479:24aa:6454 with SMTP id c24-20020a056512239800b0047924aa6454mr4829473lfv.664.1655970274322;
        Thu, 23 Jun 2022 00:44:34 -0700 (PDT)
From: dmitry.semenets@gmail.com
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
Date: Thu, 23 Jun 2022 10:44:28 +0300
Message-Id: <20220623074428.226719-1-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

When shutting down (or rebooting) the platform, Xen will call stop_cpu()
on all the CPUs but one. The last CPU will then request the system to
shutdown/restart.

On platforms using PSCI, stop_cpu() will call PSCI CPU off. Per the spec
(section 5.5.2 DEN0022D.b), the call could return DENIED if the Trusted
OS is resident on the CPU that is about to be turned off.

As Xen doesn't migrate off the trusted OS (which BTW may not be
migratable), it would be possible to hit the panic().

In the ideal situation, Xen should migrate the trusted OS or make sure
the CPU off is not called. However, when shutting down (or rebooting)
the platform, it is pointless to try to turn off all the CPUs (per
section 5.10.2, it is only required to put the core in a known state).

So solve the problem by open-coding stop_cpu() in halt_this_cpu(),
without calling PSCI CPU off.

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 xen/arch/arm/shutdown.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 3dc6819d56..a9aea19e8e 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -8,7 +8,12 @@
 
 static void noreturn halt_this_cpu(void *arg)
 {
-    stop_cpu();
+    local_irq_disable();
+    /* Make sure the write happens before we sleep forever */
+    dsb(sy);
+    isb();
+    while ( 1 )
+        wfi();
 }
 
 void machine_halt(void)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:45:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354501.581637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HWa-0003mR-L5; Thu, 23 Jun 2022 07:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354501.581637; Thu, 23 Jun 2022 07:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HWa-0003mK-Hq; Thu, 23 Jun 2022 07:45:12 +0000
Received: by outflank-mailman (input) for mailman id 354501;
 Thu, 23 Jun 2022 07:45:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4HWZ-0003m8-4M; Thu, 23 Jun 2022 07:45:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4HWZ-0007Or-1t; Thu, 23 Jun 2022 07:45:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4HWY-0004qJ-I0; Thu, 23 Jun 2022 07:45:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4HWY-0000dE-HY; Thu, 23 Jun 2022 07:45:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171323-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171323: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a55abe6c5120bd7614a4c9b2027eeab8d6c3bd54
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 07:45:10 +0000

flight 171323 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171323/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a55abe6c5120bd7614a4c9b2027eeab8d6c3bd54
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  713 days
Failing since        151818  2020-07-11 04:18:52 Z  712 days  694 attempts
Testing same since   171323  2022-06-23 04:21:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114109 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:45:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:45:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354506.581647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HX3-0004Fe-VH; Thu, 23 Jun 2022 07:45:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354506.581647; Thu, 23 Jun 2022 07:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HX3-0004FX-SN; Thu, 23 Jun 2022 07:45:41 +0000
Received: by outflank-mailman (input) for mailman id 354506;
 Thu, 23 Jun 2022 07:45:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4HX1-0003bD-Qg
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:45:40 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-eopbgr30080.outbound.protection.outlook.com [40.107.3.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 796fdf30-f2c8-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 09:45:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by HE1PR0401MB2395.eurprd04.prod.outlook.com (2603:10a6:3:22::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 07:45:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:45:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 796fdf30-f2c8-11ec-bd2d-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c06b39fc-0813-edf4-9c0c-a45f8648b1b3@suse.com>
Date: Thu, 23 Jun 2022 09:45:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v2.1 1/4] build,include: rework shell script for
 headers++.chk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220614162248.40278-1-anthony.perard@citrix.com>
 <20220621101128.50543-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220621101128.50543-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0510.eurprd06.prod.outlook.com
 (2603:10a6:20b:49b::35) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 312a3b2b-7902-4bd6-9388-08da54ec5c0c
X-MS-TrafficTypeDiagnostic: HE1PR0401MB2395:EE_
X-Microsoft-Antispam-PRVS:
	<HE1PR0401MB2395D0E49FADD798677F20CFB3B59@HE1PR0401MB2395.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 312a3b2b-7902-4bd6-9388-08da54ec5c0c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:45:36.1095
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0401MB2395

On 21.06.2022 12:11, Anthony PERARD wrote:
> The command line generated for headers++.chk by make is quite long,
> and in some environments it is too long. This issue has been seen in
> Yocto build environments.
> 
> Error messages:
>     make[9]: execvp: /bin/sh: Argument list too long
>     make[9]: *** [include/Makefile:181: include/headers++.chk] Error 127
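(For reference, the per-exec limit that produces "Argument list too long" (E2BIG) is the kernel's ARG_MAX, which bounds the combined size of argv plus the environment passed to execve(); the exact value is system-dependent and can be queried with getconf:)

```shell
#!/bin/sh
# Print the system's ARG_MAX; a make recipe whose expanded command line
# exceeds this limit fails in execvp() with "Argument list too long".
getconf ARG_MAX
```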
> 
> Rework so that the foreach loop is done in shell rather than in make,
> which reduces the command line size by a lot. We also need a way to
> obtain the header prerequisites of some public headers, so we use a
> shell "case" statement to do simple pattern matching: plain variables
> in POSIX shell can't act as an associative array, and can't have "/"
> in their names.
> 
> Also rework headers99.chk, which has a similar implementation, even
> though with only two headers to check its command line isn't too long
> at the moment.
> 
> Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Fixes: 28e13c7f43 ("build: xen/include: use if_changed")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I have committed this, but strictly speaking imo the R-b should have
been dropped because ...

> ---
> 
> Notes:
>     v3:
>     - add one more pattern to avoid a possible empty body for "case"
>     - use $() instead of `` to execute get_prereq()
>     - also convert headers99_chk
>     - convert some 'tab' to 'space', have only 1 tab at start of line

... at least the added headers99_chk conversion was not a purely
mechanical change.

Jan

>     v2:
>     - fix typo in commit message
>     - fix out-of-tree build
>     
>     v1:
>     - was sent as a reply to v1 of the series
> 
>  xen/include/Makefile | 37 +++++++++++++++++++++++++++++--------
>  1 file changed, 29 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 617599df7e..510f65c92a 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -141,13 +141,24 @@ cmd_header_chk = \
>  quiet_cmd_headers99_chk = CHK     $@
>  define cmd_headers99_chk
>  	rm -f $@.new; \
> -	$(foreach i, $(filter %.h,$^),                                        \
> -	    echo "#include "\"$(i)\"                                          \
> +	get_prereq() {                                                        \
> +	    case $$1 in                                                       \
> +	    $(foreach i, $(filter %.h,$^),                                    \
> +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
> +	        $(i)$(close)                                                  \
> +	        echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
> +	                -include $(j).h)";;))                                 \
> +	    *) ;;                                                             \
> +	    esac;                                                             \
> +	};                                                                    \
> +	for i in $(filter %.h,$^); do                                         \
> +	    echo "#include "\"$$i\"                                           \
>  	    | $(CC) -x c -std=c99 -Wall -Werror                               \
>  	      -include stdint.h                                               \
> -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include $(j).h) \
> +	      $$(get_prereq $$i)                                              \
>  	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;) \
> +	    || exit $$?; echo $$i >> $@.new;                                  \
> +	done;                                                                 \
>  	mv $@.new $@
>  endef
>  
> @@ -158,13 +169,23 @@ define cmd_headerscxx_chk
>  	    touch $@.new;                                                     \
>  	    exit 0;                                                           \
>  	fi;                                                                   \
> -	$(foreach i, $(filter %.h,$^),                                        \
> -	    echo "#include "\"$(i)\"                                          \
> +	get_prereq() {                                                        \
> +	    case $$1 in                                                       \
> +	    $(foreach i, $(filter %.h,$^),                                    \
> +	    $(if $($(patsubst $(srctree)/%,%,$(i))-prereq),                   \
> +	        $(i)$(close)                                                  \
> +	        echo "$(foreach j, $($(patsubst $(srctree)/%,%,$(i))-prereq), \
> +	                -include c$(j))";;))                                  \
> +	    *) ;;                                                             \
> +	    esac;                                                             \
> +	};                                                                    \
> +	for i in $(filter %.h,$^); do                                         \
> +	    echo "#include "\"$$i\"                                           \
>  	    | $(CXX) -x c++ -std=gnu++98 -Wall -Werror -D__XEN_TOOLS__        \
>  	      -include stdint.h -include $(srcdir)/public/xen.h               \
> -	      $(foreach j, $($(patsubst $(srctree)/%,%,$i)-prereq), -include c$(j)) \
> +	      $$(get_prereq $$i)                                              \
>  	      -S -o /dev/null -                                               \
> -	    || exit $$?; echo $(i) >> $@.new;) \
> +	    || exit $$?; echo $$i >> $@.new; done;                            \
>  	mv $@.new $@
>  endef
>  
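A standalone sketch of the case-based get_prereq() idiom the patch generates (the header names and prerequisite lists below are hypothetical, not taken from xen/include/Makefile): since POSIX sh has no associative arrays and "/" can't appear in a variable name, a case statement serves as the lookup table, and the per-header loop runs in shell so make only ever passes one short command line:

```shell
#!/bin/sh
# Map a header path to its extra "-include" flags via a case statement,
# standing in for the associative array that POSIX sh lacks.
get_prereq() {
    case $1 in
    public/grant_table.h)
        echo "-include xen.h";;
    public/io/ring.h)
        echo "-include xen.h -include event_channel.h";;
    *) ;;   # headers with no extra prerequisites
    esac
}

# Loop over the headers in shell (one short command line) instead of
# expanding one compiler invocation per header inside the make recipe.
for i in public/grant_table.h public/io/ring.h public/version.h; do
    echo "$i: $(get_prereq "$i")"
done
```

In the real Makefile the case arms are emitted by $(foreach ...) at recipe-expansion time, so only headers that actually have a `-prereq` variable get an arm; the `*) ;;` arm keeps the generated case body non-empty.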



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:52:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354519.581659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HdF-0005wf-RS; Thu, 23 Jun 2022 07:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354519.581659; Thu, 23 Jun 2022 07:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4HdF-0005wY-Mv; Thu, 23 Jun 2022 07:52:05 +0000
Received: by outflank-mailman (input) for mailman id 354519;
 Thu, 23 Jun 2022 07:52:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4HdE-0005wS-TN
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:52:04 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70087.outbound.protection.outlook.com [40.107.7.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5eed1459-f2c9-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 09:52:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB4719.eurprd04.prod.outlook.com (2603:10a6:803:61::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 23 Jun
 2022 07:52:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:52:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5eed1459-f2c9-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RKpgxoybsXsYqp+YisPaVjpg8KHxJ/RBe/iywsOcEy1Ney0rohQyIoTBhgdVt/qC4yU6+b8i9GbNtVKJd2fEYe1EzYPjcvhRTPo/FcJ5oIK3WWzEMLGSyGINLAAvhuuZX3jZaMb7boB88hE268lmRLARDN5D/q38jRehT+uRPcsIZc6GhRKxojwJlF97wbNXEkcTyjr3PL0wiesm1jX2ZOFaYFymwDARe68SGLtU+2rm4BKe3dKJnwxuVWNvEznPFeAk3xQP9KGkT4P9CTR6jNr1EvuiqzQP+AC4y1jPRdPYun9nMHbUX8Qq1Mz3KCGjfqlAZirRqkLq4F3byNdmiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8u46eZwMH4FPHFe6gHdQDzIJWbmFBrZxoBP7fCjuaII=;
 b=JKF4/pGCihD31viDJ1H8HyqSs0r5f/GHAyAzRt2L6H2/82psRXgULRCIfhrpxp91paoriAwcG9xvtIW1juWFG+J5gwCt9uQtKD3cB6qJ21/3gfsnBUV017AlOoVVRvlfh1I3QWg9I8z4cvNg71y+B5eWwo2a5l++Lyb+wxybD7PSwF8pq1C02Gtk04k9lLUWZfFng5vjdST/AFb/30noggjL8c6BMbWR0WS0uDmuC33mbHsK2XyLhA9YuZIuBq4KuNS02RFaNx5JtxuzZLGIBEJwbsMLodpW+Ck4KMs28mKsFbGfq4Ao6+YjRfqhrS25j2Yp2+a6x9pZZgKTwXN6Ww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8u46eZwMH4FPHFe6gHdQDzIJWbmFBrZxoBP7fCjuaII=;
 b=CXlp+NiMTzz3VUmYyfc+3UhwcGtU2TMBFb4uOq8F2zMkRDwHfV+LOioX131f4LbHdafQWFFAJbyBf9Z3cfwIVX+NLsMEiPOksms/KLc50sCepHEL9n/ImzuhM+qvTrHa6fav10Eks0KEhGiQGPKNEMn0RaD/ldtlW/7Vg/Ub7KtjgrVMEzDOvzJ9cPAyOfzaYrEZluUBkvWaHWgA+jKISL9v6pFglI03W2nGzy+XgBbueWuvghXMo2fYGHh0IQCiH4Tst5rrS8SyGhkzZ5VpWXkTjTXWKSv1Ifgg0GJoAjVvc8TMD4kbsupY9i92Z++PXn+B7qKexryJ7QEgeue+vw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3e86d233-7c9a-cd80-a744-c4bdd42ac85c@suse.com>
Date: Thu, 23 Jun 2022 09:51:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Roberto Bagnara <roberto.bagnara@bugseng.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
 <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
 <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
 <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
 <8610703e-fd15-bba1-3bb1-cfe038f9b11c@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8610703e-fd15-bba1-3bb1-cfe038f9b11c@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6PR10CA0083.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:209:8c::24) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a2032390-0297-4608-4cf0-08da54ed414c
X-MS-TrafficTypeDiagnostic: VI1PR04MB4719:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB471971FC27B8E16F2D760310B3B59@VI1PR04MB4719.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	riHvIoATUlI86KhmQ7Ge172CbOG/hwGY+LgRJaQl5c0rhoMukhZnmqO/4nBvx9ofhpQ27Kwu3uNX+VxBdSe7tIvyTvi91kKR5i8v4bdbQXCkemrqPd1/+NqkEumGhoaotuBDuHNBkxqQ637ldv1j92Q9zIrC++HNJkuAf7gGVJ0IF/y9kXv4SzH7fum+KTxOV6Qr1Wa/g/fcjcCvN8g+Od/xv9QBOBtxJ1zTfChyKxyJwrNXVmS1WrmwUyKPvjbKkvnFvgr5HU6KPxRpZzh4EN4kDudX6VI8cAIf4P5dLqxe7O2l9PgNYzSv/x3XKR7l9e1wGhzrR9bzpRyXxNdF9+EPSNx2qmXtlMiJXGePS6CQgoQ8FJHsttApqb3ovD76SL6s6LyxEs4RwhdeH1o4mTMCE3+zBqfSXfce4f5WX4CgWOSmVZ5OFtKkbIqErYrVoBZJR9UKYN5abdhaVX3tvFpz8am+FsRp4hJaUTbzp8t/gvspxPMGnW1B/n/CuDUDzLJ7D8piMWcqv7nzJvHjk7ffuTFm72Y8jvrjx5JJY48ow3DgTgSDxs7lbo98UBFnT0eq5huOCO+smTA6LkgODubaSGHY8idikYBdwvLsY4Sno0cHCNwm8piUw57CL6zwjfWYcBIpKSfEyvIT9GmyfCOPOduNjbcp3+FJ/ac9+JBXIkeFZhiR6JREpCRamlIzzQAgHOAtgGDEfUy2G8HYepHvmYtcJo01yQIDRXLVtLEZcJxlnFO++9bVf4pA6Wtc6oyca8u1j1QwNoDuTsH+FfQ8psoq9KiV0A77h/++T7Y=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(136003)(376002)(396003)(39860400002)(346002)(2616005)(6916009)(54906003)(8936002)(31696002)(316002)(8676002)(66476007)(4326008)(66946007)(6512007)(38100700002)(41300700001)(86362001)(66556008)(53546011)(6506007)(26005)(6486002)(478600001)(36756003)(31686004)(2906002)(186003)(7416002)(5660300002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YU14THNUcTFHblNIT1U3anNza210RTZCYkxTOGVqOE95OUVjbjJXcjlERTJO?=
 =?utf-8?B?R3A2a295N1drek5iRVoxM0tvVTI5VkFneFNLdzVYV2VtdjdTeFZqVHREWUJw?=
 =?utf-8?B?OGZWRjhJZmVVOEtwaG5ORFNoZ0wyNWQwckZ6UzRVY1p0UThkQzROM1l4Q1pz?=
 =?utf-8?B?MkpnMEhzdDdJT3J0bzc4dlNDODE0OHdoSjJIVjV0UTZmTk5MYXVSa3NhemtD?=
 =?utf-8?B?WW1EMThZbDBYdnVIYW1BQ2pncnZGdzY5MjFYdUhPQkVFcHJ1Rm51NkxBRG1B?=
 =?utf-8?B?a1RqVXdWZFEyMU5OMFVidmZjbWJiZVNsNDVoTThCM3czOXllaGY2azNKSkhl?=
 =?utf-8?B?K1hWQ3NZUHRRUUNCaEQzRWx1ZkRCWjA3eVJFeVBBNU83SCswb29Ec1BuN0R2?=
 =?utf-8?B?bkhWUHlpbFp1Vk8vb0ZzbUJrT3BMWHV1MGNwNWxid3dFN2ZHNlVTQnFqRnl1?=
 =?utf-8?B?V2x4T0UxUm4wMlVuMWRFYW5DSElINEtpY3lmM0VMMHdJYlRFZ0d2YUNGYkxV?=
 =?utf-8?B?S1N6aVBSMnJnR3YvTXF3eEdiMVlDejIvb24zRVcrd2k5blFsblEwWlIwRGt0?=
 =?utf-8?B?aHRkSDBFb1FuTTl3bHlwcEVLSkRlTVp4cnQ1TTR1UXR2Vi9XRjUyWU5yRGx5?=
 =?utf-8?B?TjdDaWwxdkszRmJRYWkvZVcyYjV5YkZ5Z25RT3ZJcEJFb2t6TnF3d2I0NHUv?=
 =?utf-8?B?bnRTMTNHSmNsY2F4ZlNnSFAvOHhXZVVmNTJXVmVEMnNWZjZmZVI5WVpXcFZp?=
 =?utf-8?B?RERCN3ZnQklnLzlEbnBrdUhwbTVIV1lpQ2wrOUQ5Z25XZkgvUkEvZGhPNysr?=
 =?utf-8?B?dnFyWm8yWUZFTXlXTUpDNzJqUlZYUGJJT1JxTi92Y0pEelpRODBVUExiSEp6?=
 =?utf-8?B?NmdHbjZFNmhraUFQSFpCUGtKRWthSFZROWlUekNMclpxZlFJMUt4QUF2YWU2?=
 =?utf-8?B?ekJ4NWxSQy9OdWhiRGE1MnZ0K3lRRkJiZEthcGJvb0xQUUtpYWFOb2E1emY4?=
 =?utf-8?B?bFFHc0xJNGZJN2tnalR2ejJEaXN2OWxJbS9vTDJSZkt3WVJRc1MyQXNsQ3ds?=
 =?utf-8?B?RVd0eUxtSUpoWlJnUndXc1FoUnJ4c0NkQXJyV3NJbUN3QjU4MFhORjkrSW12?=
 =?utf-8?B?elR5YXdPK2pxRmFzOEJZbUlOaW5QSVJvSHRTWkhTekdGZkhHdzlFV1FhNC9x?=
 =?utf-8?B?VHdIcXN5cS9vZGl1a1ovd2FubFF6ZkpPb1dEZmgzdmpCa1ZqVll2N2hMelZH?=
 =?utf-8?B?WDZodExIVGQzbVBWbVl0bWE3b0VJZllpOTVNRUt2ekcyQlJsREpXbjYxODBP?=
 =?utf-8?B?VHpjd1IyMURuTzIwMi9rYi9udDZMMXNsYWZJTklwVnZmWkRBVU1aZURqY2xC?=
 =?utf-8?B?ZThGb0FKdFJhdWJDTGtCVFJWanBBR0ZtY21YekNkU2M2czdudUNlWkpoVGhs?=
 =?utf-8?B?QWNHWDRJMTRzcmpBekY4bE1SZmkyVEU3ZGpuczRsU0FuS3p2RDJrbmtXUW5O?=
 =?utf-8?B?d2xvQ1JxZlpaWlBkOEFMOG9qQzhJVkVmSDNESlF3UUQ4dXVzd2hYODNQNlAv?=
 =?utf-8?B?YkozRkl6aStHNys2Rlc5RGNld2NLc1IvYTdHSm95a0kwNy9ISkQ4VSsyNzI4?=
 =?utf-8?B?bWlKTWZINEZFU0cyczhTTnp0Mi9NclJ3QkNuMnVqMEFtWFpXcXRGTGJLeUtD?=
 =?utf-8?B?aWFVYVo4OWp2Z0Y1L3lvYWJhZnEreGtrUkJzbVFDRk5YcFBLcVFvL0JrNjl3?=
 =?utf-8?B?VzE2dVlla1ZkanRDL0R0cEhGekdHazdBRnRCbUpNNkJ3WXNmdENvSjVBUVJo?=
 =?utf-8?B?aUNpSGE4bkhMTWVISzFPSVB2T3pmY1l4TGRLSldER1FnWWltOVhJQmlUTlRl?=
 =?utf-8?B?NVozeWlEZDl5cmo1SS8xTDVSb1lRc2NPdGE3cUlyMUdBWmNBNE5mQjhGQzJO?=
 =?utf-8?B?bVVaNVB4d01qRGY1dCtEdHdDc2pmMmJTQ2luS3Nod3pEK2IyQ255aHoyVjRq?=
 =?utf-8?B?d2pOeDlMOGFkTVROUVBSQ1BPaVFxOExSeHNkM2NaRHJmYzcrbDU4WmZxK2Zn?=
 =?utf-8?B?Z1p4R1hPUXhaYWkxM2pDTkhQdDAvVCtnRHljQ0cxV1E2bWlXN2ordEhLV1Uz?=
 =?utf-8?Q?luBYBiTxBlnDlz8sdNbrEWnI0?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2032390-0297-4608-4cf0-08da54ed414c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:52:00.7410
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VYD9iPbdBhZBsJs7c7+oTbNK0KAm8BQxKt828/e7ipDjK4iuhCO+EBxIjW8gogecWWmNY1HlO2DqfGXKFaffwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4719

On 23.06.2022 09:37, Roberto Bagnara wrote:
> Rule 8.1 only applies to C90 code, as all the violating instances are
> syntax errors in C99 and later versions of the language.  So,
> the following line does not contain a violation of Rule 8.1:
> 
>      unsigned x;
> 
> It does contain a violation of Directive 4.6, though, whose correct
> handling depends on the intention (uint32_t, uint64_t, size_t, ...).

Interesting - this goes straight against a rule we have set in
./CODING_STYLE. I'm also puzzled by your including size_t in your list
of examples, when the spec doesn't. The sole "goal" of the directive
(which is advisory only anyway) is to be able to determine allocation
size. The size of size_t, however, varies as much as that of short,
int, long, etc.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:55:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:55:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354525.581670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hga-0006XU-CA; Thu, 23 Jun 2022 07:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354525.581670; Thu, 23 Jun 2022 07:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hga-0006XN-9F; Thu, 23 Jun 2022 07:55:32 +0000
Received: by outflank-mailman (input) for mailman id 354525;
 Thu, 23 Jun 2022 07:55:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4HgY-0006XD-Mh
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:55:30 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2041.outbound.protection.outlook.com [40.107.21.41])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d973d83b-f2c9-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 09:55:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7949.eurprd04.prod.outlook.com (2603:10a6:102:cc::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 07:55:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:55:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d973d83b-f2c9-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bZAnf3Wtpq8OuIx/C3sxvcDcWKxGGaeQKKGhKwkbiMFz+GUnGFHjcFsTiXxHOzewSB/b+dteXvhOv3wisRqefYU461g/8Sdfb1yzroUCcHQcSnfziPWUlugoesCjGGemWqy5xUab0vS7BBlGzfeWhXXcwrtNPonW56ApJwGIrVkrJMyQINc2c8/K/vYrfzzAQK+3vve7t2TgoMHLNQpKbuNtFC8/P4g7lGjsG4hbvoPRQqsirbkWHA4vhz8vxna9sfdQL3WWMbeI0Wt2Uwg+J4oOaggV3MjgPJvcTyOZfUyKhbOgXmt6778ZBlEq5XEGDmUdi59Z3/Gatj3yX3ON3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+Bo62bnXYwnaQMPoWppPDTRVPRld4i9FNVCzazkNfGo=;
 b=UBYlr1lpvcWjxYtTI7B+3G0Xi9z86aKIZ+PAjjTnucGV/felE4FbtiFL5KABR7Kq52+ONPdfmNaZpA2Kg0aR8OHl7x5CE/9Q7Svot4Otq74K09S54yQ9IKlnTmLRSBPKwZy8GALMAZTJS1FKHkFmmbQbFtiQdmEqvPk3X797HKT7JMoCjse1xxPmekwo3H27h/sALyrOjCS6FMF+aaBThkZIyfn2r78CG9zDkdEmwl/guGg4rkLc8kil2CvhD6gBjRcAQb6F2hWgMXo0oXp7EWMmUalgQYQwF0Qc/y6Md9o46l+NtajrdwjOp+gVtOFCCyI2eZgY40ZLfajNN/rMpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Bo62bnXYwnaQMPoWppPDTRVPRld4i9FNVCzazkNfGo=;
 b=ZolwweOKXZIA9UhyuV82ToIA9DJHPTnNEtgopvznd4zenGhhS6bDblaS0Zy4Fi2hAzmsvXSRCm4LLSqg+HeNHHQmRc03F74POIgbTdBNAdxK3gLN51WjQifu2hKGfqUEonfratXVrhhzahbh4+bWOe2MlSrUq+KpOmQG5bmjLT5RsAU+vr6Qsn+zL01qIdSSjs36bMi4svMrauQVK75foxRrlT/yWIFLsTwoXdJYtLKfVP6MCWfLq5n2HTO45TOUyWOqE/OOlvGHS/r8CGg39Nw6CxdFJwxI44y7xzjADmYpukknX3IwjAhoLCfJv2yCqgYzfXMQb02/N/ff+QpUAQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
Date: Thu, 23 Jun 2022 09:55:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: Problem loading linux 5.19 as PV dom0
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR05CA0035.eurprd05.prod.outlook.com
 (2603:10a6:20b:489::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1dc014ce-3e86-454a-af20-08da54edbc5f
X-MS-TrafficTypeDiagnostic: PA4PR04MB7949:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<PA4PR04MB7949D0822FEAACA02922590AB3B59@PA4PR04MB7949.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qeDgoj/7nDELIQCRYTLyN/r0RWg4lbUcDRFR9ZBDAwwNihwFwLfEdeDHfQGwaBDO844x96LVimZ3ZTAnQ+5LmXRxVdAj5Kq+92bjFrq5jovrMnjlrNRt6fGpNShFm4XCT6ZP898pYfJ/8T4mTfoXWhB5ZikntAdgTxPtR2unq4oVpHDdYWu5b/o+JV9FBiZOm1ydOgpIXsMyidPfl67yciAwmBv/WqUOQg56kOOGVIVFjl+rQuW2S6LOk1+R3v9Vkvprzh89UpmFiQHrROPfX1hDskaFXLihXf6N+zJX3gxQOI9ij7joDvfRAB4Md9r7TclSzPugXXAmdISHskAgid7HLy7CMW5jQTX4Ue/D0TLcceZsJbFy6hDpVLp4DUlnMbI1/O5tz7x6NVcndE4bL8aPVc0AO1hA+HIWb3LALvFi3P5LL502lEo9HQGphVwrrQeFqfXZ7/feZwZZTFohiDrEfIidtpXHLMbPYBYJdOL9KwFp8Cs5iUPgD4BbgNSh5TPKAcp0NJ2Qoqtjhsc0GtxgtA4+VA9Xb3m5OZ2RO8rUC/kbrqUODQB0XK48/vBKMR7fIAfnjCma4QHdl8Yp6AumzrMY7T26lO8mxORjMZXFSQgsRwnoXr5i+FHCpfbJfyq6shOwQeS5lf0zM6eoAj/FxJTsDl2LM8SGdW14Oe7BcOJPraMJGXZ3SzcQ4NvdMagGd5nTB0MVchsWauXEmttkz8dah/wCc7oh8RnTlT6wXS4tGcER9iuQZUVxguYPraY4gpsneVdBgmI9h1mKOeuqO7xifwTkenHx5ENwrf0=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(39860400002)(346002)(396003)(136003)(376002)(86362001)(31696002)(8936002)(5660300002)(6862004)(66946007)(66556008)(6512007)(26005)(8676002)(2906002)(478600001)(4326008)(316002)(6636002)(186003)(37006003)(31686004)(41300700001)(36756003)(38100700002)(66476007)(2616005)(53546011)(6486002)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QjFZTDNkWVVQQS8zaExENEQ2azA2cGRXVjB6VzVxcll5NHZoM3l1dVIrU2tx?=
 =?utf-8?B?dVdJcGV5bUFJQnNhUkNSdlhRUjhSNlAwK24rRVhMMDZsNXpDK0M0REVldmUr?=
 =?utf-8?B?OFdHNVNrRFR6TDJSYmZZTVFqQ0ZuYmtheUh4QTA3aGlCc1ROZXJzVUJOTHJl?=
 =?utf-8?B?S3d5TVExR1FraHJCaWhCeERhYlhBZmZOaVYxV2N5MWxsV2tkYlRjUXJ3ckJl?=
 =?utf-8?B?c01VKzFaQkFaa0RIR3hqbWoxZ3hWcGllS0VLclp4QjlRdWNYV2s1WGpQOHp1?=
 =?utf-8?B?THVzRllBUnVRN2xncXB0VG51Q2VydGdScGhjK0ovRFJJUm52dUM1bWgyNy9w?=
 =?utf-8?B?bldlYzltNmRINWJNQU0xMGFiNHJ1b25jdERMNWd3c3VvUHl5OWFQWFFyU1Nq?=
 =?utf-8?B?STlsNG5mbjlZcnJ2N3d2VzJ3YzBOU3Rwai9rRmRuR1VzVkJNbU5CLzk3R0ZB?=
 =?utf-8?B?Z014b0EwUkFScHZsNmpIZFNvUlNtOXhBU1kzZndEK0p3czQvdFRLNzRKcHBl?=
 =?utf-8?B?OFFJcllTOE5MUkZ3TTN3SkN6K0g5OS9pV3d0eXhJTmVxTnhkNThEUVFDWGVx?=
 =?utf-8?B?NS9vajRDcThLSTNVVSsxd3dZc2kvb3g5a3RKdTBpYWxOd1ZXdXB2VmRBV2w1?=
 =?utf-8?B?WGhwZkw3amt3bFZVcWlKUnpqbjhhM1ZOKytwRWdnZ3ZIMjVkaXJCQlV5WkNV?=
 =?utf-8?B?VlNtQlgvUnpmL1cyYzh4MFFYRVRXOVlkT2ZZMnVkL1EvUy8yR3NOZUtJYzZp?=
 =?utf-8?B?QWU0YUhUYWRyVlN0SytVaXVGRmwwZEcyTW9SNkZVSzV3NDd2eEswaW5QSGFx?=
 =?utf-8?B?Z0pjdUkzRk1EOFFBNHFLckRqUXBsWVNqcnFMOGhuMWxrdndpSnUvWEloWDlK?=
 =?utf-8?B?OHU4Ry9SeEw1QWRJQVduNFBoU1MwRU84MFJsNGNTcFdMK1ZNUzh3dlFuQjBK?=
 =?utf-8?B?c2ZOZ1F4MHVFM1I1NXFNc1R1OVFqRlNvZVQxcWFnUkZtd1R0MGhlYXpoZDNi?=
 =?utf-8?B?VytxelVuUjlvVUVkYUhhN0ZMK2RyUEpYaVZJYnZVdEJhVnFmcUtmUEN5VVNt?=
 =?utf-8?B?RjBYd0NPQjlpc0Z1SzcyVk9rZDNpcFdZb1Z2UkpwN00yWU5VcGl3bEZrSnhR?=
 =?utf-8?B?NkpRTHBFS09rNXFwVGEzb01yejRhV2NHWFZydFJTU0dNcWc1SDZRYzVyNFcz?=
 =?utf-8?B?RVpwejBiZkU0RUZJM05FQU1yMnVlVFdKOWlOZjcyYi9lc2xvZGFSUHY5MWdP?=
 =?utf-8?B?MFpVbUYxRDFRckdVZm8yR1pFSTljTC9mbldUU3AwQW80RkRVRDZZMG0rNVlp?=
 =?utf-8?B?cUJNTzRwSGt5SDJEWmE3MmMwdUtDQi9OZk9zYXNiZzFvNHRKdWJrUjV5ZWNW?=
 =?utf-8?B?WnZnYTRiMWRKNklpdG9jenVMeTZIS2dURHVDMVVSMHZBbGhYU0tkaGwvSW5R?=
 =?utf-8?B?V2ZRbGxjWENnR0ZsOHNYVGJEM1AzMjcyVXN3TXdIa1UvWDNSRHlXYXh1ZUN2?=
 =?utf-8?B?WFo3R29pNnFzL3kwaTZvQWxxclhqeVFuNDJsZ1VweVhpT1p2NXZSRklacEZj?=
 =?utf-8?B?Y2NudFJBcXF4WHgxdWl2ejU5L0tUU2pLc0c4Z3JFSEFyZFMwbXM4MStUTDhG?=
 =?utf-8?B?WWpibHVIajU2S3IvaWNkazFxU0h2cWVaNzhKZlRDMjRRa2pkc0FYaE1lVks2?=
 =?utf-8?B?VEg1L2U4Q2VhQlZBbGtBWWlkTHN6eE9PenFvQVJlMjhJZ0hCczNzbGh2SmRn?=
 =?utf-8?B?N0hUWVlNbTYyN21uVUxSZS9OOFQ4TjVhaTN6VGlEYnRTaWljNlhjc1dlUHlh?=
 =?utf-8?B?Y0dZaVJ4dS82aElIOUQyRWZjSHg1ZTRQRWs1bmNBalZYNldnSWp5c1g0OGlm?=
 =?utf-8?B?b1c2VlRFSEl1UEJwOUJkVUpydUxNa25sZGsveVBqeG0rNE1CLzhhdk1rcC9t?=
 =?utf-8?B?TDlUell3Q3ZGOHMrbkd1ck5hOHVaRTdZeCtuMmJiZUN0SzhCVVEwa1YzSTFs?=
 =?utf-8?B?TlRETCtTK2pXZXNpUlZjZURUSHROU2VpQnVqQ0NDMmxYVEE5anVzb01JcVAr?=
 =?utf-8?B?cW1Id0IwR2NoRnpzUlBSTE1uSnlBRHZrcS9HQkFaZ3Q5bllMeUVmMHIxL0dJ?=
 =?utf-8?Q?5Y0x/a4s9QtdW6QkEMSm0wfPV?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1dc014ce-3e86-454a-af20-08da54edbc5f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 07:55:27.2434
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uHzlVRPZmaNe8vQX08LxyVnIIZBeUgUHAwi3hH6rEmpdb4evwnYbaOz66Cjf6/fr07L2uqFFjco1Cdr2agiXeA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7949

On 22.06.2022 18:06, Juergen Gross wrote:
> A Linux 5.19 kernel can only be loaded as dom0 if it has been
> built with CONFIG_AMD_MEM_ENCRYPT enabled. Otherwise the
> (relevant) last section of the built kernel has the NOLOAD
> flag set (while still being marked SHF_ALLOC).
> 
> I think at least the hypervisor needs to be changed to support
> this layout. Otherwise it will put the initial page tables for
> dom0 at the same position as this last section, leading to
> early crashes.

Isn't Xen using the bzImage header there, rather than any ELF
one? In which case it would matter how the NOLOAD section is
actually represented in that header. Can you provide a dump (or
binary representation) of both headers?

Jan

> A workaround in the kernel would be to always add a small
> section at the end which needs to be loaded (as is done
> with CONFIG_AMD_MEM_ENCRYPT set), but I don't think we can
> put this burden on all guests capable of running in PV
> mode.
> 
> I haven't tested yet whether unprivileged PV guests are
> affected, too.
> 
> 
> Juergen



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 07:59:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 07:59:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354532.581680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hkc-0007B4-T0; Thu, 23 Jun 2022 07:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354532.581680; Thu, 23 Jun 2022 07:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hkc-0007Ax-Q5; Thu, 23 Jun 2022 07:59:42 +0000
Received: by outflank-mailman (input) for mailman id 354532;
 Thu, 23 Jun 2022 07:59:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4Hkb-0007Ar-8j
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 07:59:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80083.outbound.protection.outlook.com [40.107.8.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ef64db6-f2ca-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 09:59:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB3245.eurprd04.prod.outlook.com (2603:10a6:802:6::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 23 Jun
 2022 07:59:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 07:59:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ef64db6-f2ca-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gfN9sCZyJuOAUmuqlbqO1rM9z293dYujO+u32W6Jzy16Db/OBmtytX0iz2eLu76nZ4kagMMSAJC0Hv3u+yJ3HOd0pKY7qtfKMmcGOtZlBPEgx9L/lyZtu1VqvuyocQHImioZ0E1YOmdeur42fk+BM1ZNo6HAK1E51Zd/5IQHi00zeqjiTj5KJYJb8R9LbOmxW8namBX5zpHvPxFPXFV8UT5WHvkmc7Ai4/FcLuCLx0mjvvYotqMlpJnbWxWVVs/vjPudxWwCuJbpEWG4aYSEjojJSLP/QgUXfNbLHjaVi9L4vULzJ66mZx0ER6jjfjLSoFKvXM77YOmFD4LaLfJaTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Lfg6PW0PChEHGQ4R9t6bnvbaBdnRt0kRd44w7tfK57I=;
 b=BkVrCZ27hG5ztPrIxLe+rL3DxWbUgnogZXfHaQUZ40x2cjsXMIhxo/phHenBslOIxg+5EZgUsn0smKPCiL40vC1iiQHoJ8w8CQxifBtbS9mGg1xR/raeouuAkW3mHbD6a0cIbsGovIO3+844TuTLJq1fEdP06TjM97HhaxP3kTapQQfpR7FetWSa2cIFNux+qmxXyLf7mt8ghtIWOA6hUxg4jBgXSCYAM/gSxsrjOCxNiAb0LoFw+2EQj2Qgkg4FL6kpYYfnY38T6c0vUQrRP/z7lOouMCR0L9zS+B4t78hi3fGgIwWT+AUzo+YvpmnlulOnqqoBkZTkaUcBptdDFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Lfg6PW0PChEHGQ4R9t6bnvbaBdnRt0kRd44w7tfK57I=;
 b=dpWOroPqZlGJ9XHLJBsnb8FigCG2nqiIuA0q62AQiPKZyaZcq1tgJcEh0nLbhc1WlUrg91a+l8MGielMP+S36YaQ9XeuyBgtC6t3Xl2Wwsy55AQOMKgzlOT3fwSJFc9HVVzpTGtSp5qk8mAwvcfu4UevY33BRrhckCb0GRRRNhENgo3zRb3Fx9ohBakFK1yVd0C1rm+o8RUVNo50wQNmgRI8obigV/BH9q9uXrqf30nM8wZuit4+hvLLlj3ZC88ANKqnQe/wrZ1H+tYWOAAYDK44oZoALUP+NRP5FOZD/FtDYYNxYN3S6CFm0u6Sssy/xjFSW/+gdgOVUSx2rzC39w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b85e6530-da44-3287-f3b9-51cd4f3116b7@suse.com>
Date: Thu, 23 Jun 2022 09:59:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] x86/xen: use clear_bss() for Xen PV guests
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220622161048.4483-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR0301CA0021.eurprd03.prod.outlook.com
 (2603:10a6:20b:468::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4c4de446-4088-47f6-93d3-08da54ee51de
X-MS-TrafficTypeDiagnostic: VI1PR04MB3245:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB32453339114C58652510281DB3B59@VI1PR04MB3245.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WY9C0kOXBWkXvSIt25kjMggAC8TEDDukV9llPg3jizHfRgaElI3cx8QyIxef7bQ3SyQEzwCf8Ha/i04XvZxsOr+7Ad3YxSHdZnwgxM+Nq8uhXj/78eM2avmUhx6efbJDUpV+Kf31AAY3f0BEzMJ6lYaOSTTGtbgippNHnJmFz16SMbs3Gaq+YR53gKKsv0Ucds9N2QcN0iAso6SORHKmZARjvMYqkR66qfs2AMPU1j2AqHo+H3h8f30vvZVWSNOy7IkH1ZdzExuAUJWofvkoEPv+A9hfR+dNaB4Z2n2w8J80VZAczvtIL6lxl2pGKULSQcPa//h2qwdhF5V7wtJT+r9eINEbvd+SXQYnRj82Z3nw7Un+xkxBrdKjGo5tr4gOaIVK69bfgy2uYIp2kJFNdG0c/4clUNwBKQg+eUb4VEZNpmWM7kEZDEZbqWgcR/FvVnGX/2H5ZYypk88Tp6oO7m5T6PNI2+eosBFp/XaNSsUgfFSKAs/1UrFKD2MLLMnk7iL6zZHzYGXL3lqJg3SrwkwePnrae0Y+NHUGTAKmnilLP7daV6UueQfGG5LEckpFwjSI1jCuP0UTBK7IGnAarC+uv8utom/zwcy44V14CWZ7iz/lQLxRphotj+t7vjvv1rTtiiwP27wedq/fjRorO/r8DzhMx+Oz9YknKYOvVveQK/U0M+yJPwJSG/MF4usvE2LfFWmEDpcy/bdD5eNFSgm1egI5QzmF13gajyBrgDbZkmBoiGauMnUbibEmNnUKgJjSwVVgkH1UW+4MWX7XB0JHs6TX7SQBX7+KyKpwqJE=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(346002)(396003)(39860400002)(376002)(136003)(366004)(186003)(6862004)(6666004)(36756003)(6486002)(66476007)(8936002)(4326008)(66946007)(66556008)(54906003)(41300700001)(8676002)(6636002)(478600001)(53546011)(37006003)(26005)(2906002)(316002)(31696002)(2616005)(6512007)(31686004)(38100700002)(4744005)(86362001)(5660300002)(83380400001)(6506007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ODhLN1lNYzA1SXJwZEJkdDJVZG40TVpCUjJQaHVmR1RsN2NVdzg4RDNlMkh4?=
 =?utf-8?B?dnNyanB1bkRGeDIrUzRKbUVib3R1bkpCQ09YMlRaM2ttNUtNakpZcDU4S0Iz?=
 =?utf-8?B?Y3A0WU9FWW5wUWlQSEFVbG1tMU9iWEZBM1JTbzFPTDEvNms3T2thbmNLTi9E?=
 =?utf-8?B?VE0xb2c1WnVrQVVxekpWSlhDb0VJSDg4RHpSQTZZdHpPL0wvYU4zR1JmMGNF?=
 =?utf-8?B?R3NoT1NiK2twUXI3MlA4OERLUEtBcVM0MnFBQ0prT3h5a3dHVTdKRjQvSVoz?=
 =?utf-8?B?WFhFbHJjRjdHMk5sRCs5eXFTOGY1RWNuWHVsRC8weWhjQ3ZCb05SSUxtbHo0?=
 =?utf-8?B?cnd3dk5TbU5VWTV5a1dzWXpRd3RtTnVHVFlTOW1wSFgxWjJFcHBoYlViR2dW?=
 =?utf-8?B?Vm84eUxCODNXcXdNRW9PNHg4b05WOTNmL1pFd1pxaDdJQWFiSDdRRmhIRFRB?=
 =?utf-8?B?Qzg1Z2ltS2JyZld4UFd1dGk0enhjUWhvMlpGajJsOEZVcFMzZkVVTS9VQ3dz?=
 =?utf-8?B?ajFHaTQyRkZiS2VQSG1nc3pJa2pGMFlzVGZ3amhES1JQR284VHp2S21lRnI1?=
 =?utf-8?B?V0d4azZRZWVWcG1LWXRLMWZERkYxWkZJODRVcnVlcnIwcytFUGErZTMwMUpr?=

On 22.06.2022 18:10, Juergen Gross wrote:
> Instead of clearing the bss area in assembly code, use the clear_bss()
> function.
> 
> This requires passing the start_info address as a parameter to
> xen_start_kernel() in order to avoid xen_start_info being zeroed
> again.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:02:13 2022
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen: consider alloc-only segments when loading PV dom0 kernel
Date: Thu, 23 Jun 2022 10:02:08 +0200
Message-Id: <20220623080208.2214-1-jgross@suse.com>

When loading the dom0 kernel for PV mode, the first free usable memory
location after the kernel needs to take into account sections which
have the ALLOC flag set, but are not covered by any loadable program
header of the ELF file.

This is a problem e.g. for Linux kernels from 5.19 onwards, as those
can have such a section (NOLOAD in the linker script, but still marked
SHF_ALLOC) at the very end, which must not be overwritten by e.g. the
start_info structure or the initial page tables allocated by the
hypervisor.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/libelf/libelf-loader.c | 33 +++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index 629cc0d3e6..4b0e3ced55 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -467,7 +467,9 @@ do {                                                                \
 void elf_parse_binary(struct elf_binary *elf)
 {
     ELF_HANDLE_DECL(elf_phdr) phdr;
+    ELF_HANDLE_DECL(elf_shdr) shdr;
     uint64_t low = -1, high = 0, paddr, memsz;
+    uint64_t vlow = -1, vhigh = 0, vaddr, voff;
     unsigned i, count;
 
     count = elf_phdr_count(elf);
@@ -480,6 +482,7 @@ void elf_parse_binary(struct elf_binary *elf)
         if ( !elf_phdr_is_loadable(elf, phdr) )
             continue;
         paddr = elf_uval(elf, phdr, p_paddr);
+        vaddr = elf_uval(elf, phdr, p_vaddr);
         memsz = elf_uval(elf, phdr, p_memsz);
         elf_msg(elf, "ELF: phdr: paddr=%#" PRIx64 " memsz=%#" PRIx64 "\n",
                 paddr, memsz);
@@ -487,7 +490,37 @@ void elf_parse_binary(struct elf_binary *elf)
             low = paddr;
         if ( high < paddr + memsz )
             high = paddr + memsz;
+        if ( vlow > vaddr )
+            vlow = vaddr;
+        if ( vhigh < vaddr + memsz )
+            vhigh = vaddr + memsz;
     }
+
+    voff = vhigh - high;
+
+    count = elf_shdr_count(elf);
+    for ( i = 0; i < count; i++ )
+    {
+        shdr = elf_shdr_by_index(elf, i);
+        if ( !elf_access_ok(elf, ELF_HANDLE_PTRVAL(shdr), 1) )
+            /* input has an insane section header count field */
+            break;
+        if ( !(elf_uval(elf, shdr, sh_flags) & SHF_ALLOC) )
+            continue;
+        vaddr = elf_uval(elf, shdr, sh_addr);
+        memsz = elf_uval(elf, shdr, sh_size);
+        if ( vlow > vaddr )
+        {
+            vlow = vaddr;
+            low = vaddr - voff;
+        }
+        if ( vhigh < vaddr + memsz )
+        {
+            vhigh = vaddr + memsz;
+            high = vaddr + memsz - voff;
+        }
+    }
+
     elf->pstart = low;
     elf->pend = high;
     elf_msg(elf, "ELF: memory: %#" PRIx64 " -> %#" PRIx64 "\n",
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:04:21 2022
Date: Thu, 23 Jun 2022 10:04:08 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Daniel de Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] MAINTAINERS: drop XSM maintainer
Message-ID: <YrQeeHcHMhRB3g6C@Air-de-Roger>
References: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
 <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>
In-Reply-To: <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>

On Thu, Jun 23, 2022 at 09:43:30AM +0200, Jan Beulich wrote:
> On 09.06.2022 17:33, Jan Beulich wrote:
> > While mail hasn't been bouncing, Daniel has not been responding to patch
> > submissions or otherwise interacting with the community for several
> > years. Move maintainership to THE REST in kind of an unusual way, with
> > the goal to avoid
> > - orphaning the component,
> > - repeating all THE REST members here,
> > - removing the entry altogether.
> > 
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > We hope this to be transient, with a new maintainer to be established
> > rather sooner than later.
> > 
> > I realize the way I'm expressing this may upset scripts/*_maintainer*.pl,
> > so I'd welcome any better alternative suggestion.
> 
> Two weeks have passed. May I ask for an ack so this can go in?

I don't think mine suffices, but just in case:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:06:54 2022
Message-ID: <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
Date: Thu, 23 Jun 2022 10:06:47 +0200
Subject: Re: Problem loading linux 5.19 as PV dom0
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>

On 23.06.22 09:55, Jan Beulich wrote:
> On 22.06.2022 18:06, Juergen Gross wrote:
>> A Linux kernel 5.19 can only be loaded as dom0, if it has been
>> built with CONFIG_AMD_MEM_ENCRYPT enabled. This is due to the
>> fact that otherwise the (relevant) last section of the built
>> kernel has the NOLOAD flag set (it is still marked with
>> SHF_ALLOC).
>>
>> I think at least the hypervisor needs to be changed to support
>> this layout. Otherwise it will put the initial page tables for
>> dom0 at the same position as this last section, leading to
>> early crashes.
> 
> Isn't Xen using the bzImage header there, rather than any ELF
> one? In which case it would matter how the NOLOAD section is

For a PV kernel? No, I don't think so. Just sent a patch repairing
the issue, BTW.

> actually represented in that header. Can you provide a dump (or
> binary representation) of both headers?

Program Header:
     LOAD off    0x0000000000200000 vaddr 0xffffffff81000000 paddr 0x0000000001000000 align 2**21
          filesz 0x000000000145e114 memsz 0x000000000145e114 flags r-x
     LOAD off    0x0000000001800000 vaddr 0xffffffff82600000 paddr 0x0000000002600000 align 2**21
          filesz 0x00000000006b7000 memsz 0x00000000006b7000 flags rw-
     LOAD off    0x0000000002000000 vaddr 0x0000000000000000 paddr 0x0000000002cb7000 align 2**21
          filesz 0x00000000000312a8 memsz 0x00000000000312a8 flags rw-
     LOAD off    0x00000000020e9000 vaddr 0xffffffff82ce9000 paddr 0x0000000002ce9000 align 2**21
          filesz 0x00000000001fd000 memsz 0x0000000000317000 flags rwx
     NOTE off    0x000000000165df10 vaddr 0xffffffff8245df10 paddr 0x000000000245df10 align 2**2
          filesz 0x0000000000000204 memsz 0x0000000000000204 flags ---


Sections:
Idx Name          Size      VMA               LMA               File off  Algn
...
  30 .smp_locks    00009000  ffffffff82edc000  0000000002edc000  022dc000  2**2
                   CONTENTS, ALLOC, LOAD, READONLY, DATA
  31 .data_nosave  00001000  ffffffff82ee5000  0000000002ee5000  022e5000  2**2
                   CONTENTS, ALLOC, LOAD, DATA
  32 .bss          0011a000  ffffffff82ee6000  0000000002ee6000  022e6000  2**12
                   ALLOC
  33 .brk          00026000  ffffffff83000000  ffffffff83000000  00000000  2**0
                   ALLOC

And the related linker script part:

         __end_of_kernel_reserve = .;

         . = ALIGN(PAGE_SIZE);
         .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
                 __brk_base = .;
                 . += 64 * 1024;         /* 64k alignment slop space */
                 *(.bss..brk)            /* areas brk users have reserved */
                 __brk_limit = .;
         }

         . = ALIGN(PAGE_SIZE);           /* keep VO_INIT_SIZE page aligned */
         _end = .;


Juergen

--------------PJjURniSlnlNfF70ktqO2YfB--

--------------UfVWdtDCmUNP0UQO72LD4IvQ--

--------------UV4NmG4z3h7tmtKl6dIKj0XU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0HxcFAwAAAAAACgkQsN6d1ii/Ey/t
DAf/YQPIu+2TFIXCGJ1Lu7ZBLCZgcPeDZm0wfRGIi8CFV2MG0NsUXIKgOHvJJWYTTPbPGcx4U5Lr
emLZabCgSMj/Tr2ZF0M4rg3qI9M+vgSQ2I6JK0Pl6fuVdkNT3C/xXySL45PVvA5zNcA1s1BvyD65
fbW7uNRqvx8VECKIvMXMxDw7JEOLZcOmQ9hWqETznz9swBxDH/XlrSBvCTBI/yEHaxVLpJHQ0D6a
9fY/B1UpOyTyIGfN47lp8zm1WQ3UivoFSSKLgtoNbOTU3n0CWZzCnLvyyc5Lds0eOaZ6JfbRohoi
hPnknOUlQeAXWr0fg6Yb0+uQBwO9wd5ze7c2A9sBuA==
=LIy9
-----END PGP SIGNATURE-----

--------------UV4NmG4z3h7tmtKl6dIKj0XU--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:09:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:09:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354563.581725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Htu-0002bh-3n; Thu, 23 Jun 2022 08:09:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354563.581725; Thu, 23 Jun 2022 08:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Htu-0002ba-09; Thu, 23 Jun 2022 08:09:18 +0000
Received: by outflank-mailman (input) for mailman id 354563;
 Thu, 23 Jun 2022 08:09:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4Htt-0002bU-OM
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 08:09:17 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2050.outbound.protection.outlook.com [40.107.104.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c66f3f9c-f2cb-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 10:09:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB5462.eurprd04.prod.outlook.com (2603:10a6:20b:95::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 08:09:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 08:09:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c66f3f9c-f2cb-11ec-bd2d-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
Date: Thu, 23 Jun 2022 10:09:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220622161048.4483-3-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0110.eurprd06.prod.outlook.com
 (2603:10a6:20b:465::29) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a0832061-a9d2-4ef4-e581-08da54efa7d1
X-MS-TrafficTypeDiagnostic: AM6PR04MB5462:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0832061-a9d2-4ef4-e581-08da54efa7d1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 08:09:11.7220
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lgGyemEK2+WSMskD2quSzIT+b3OFqPMooCFfkJmhapV6VIVs9TJ3/GsEIss8Lw1tQZN7NhAYp32Tk1va56v6Ow==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB5462

On 22.06.2022 18:10, Juergen Gross wrote:
> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
> put the brk area into the .bss segment, causing it not to be cleared
> initially.

This reads contradictorily: if the area were put in .bss, it would be
cleared. The thing is, it is put in .bss..brk in the object files, while
the linker script places it in .brk (i.e. outside of .bss).

> As the brk area is used to allocate early page tables, these
> might contain garbage in not explicitly written entries.

I'm surprised this lack of zero-initialization didn't cause any issue
outside of PV Xen. Unless, of course, users of the facility were never
meant to assume blank pages coming from there, in which case Xen's use
for early page tables would have been wrong (in not explicitly zeroing
the space first).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:14:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:14:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354570.581735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hz2-00040l-Mk; Thu, 23 Jun 2022 08:14:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354570.581735; Thu, 23 Jun 2022 08:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Hz2-00040e-K6; Thu, 23 Jun 2022 08:14:36 +0000
Received: by outflank-mailman (input) for mailman id 354570;
 Thu, 23 Jun 2022 08:14:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4Hz2-00040Y-44
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 08:14:36 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8467c02f-f2cc-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 10:14:35 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B03DA21CBA;
 Thu, 23 Jun 2022 08:14:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 67C69133A6;
 Thu, 23 Jun 2022 08:14:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id vfnuF+ogtGIRfQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 08:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8467c02f-f2cc-11ec-bd2d-47488cf2e6aa
Message-ID: <594866df-ef56-055f-c13c-64fac5797164@suse.com>
Date: Thu, 23 Jun 2022 10:14:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
 <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------tyKWPK4tg4qidWiOv9WpCwsK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------tyKWPK4tg4qidWiOv9WpCwsK
Content-Type: multipart/mixed; boundary="------------ZqePaME2t2Xr00pBN6b1ShkQ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Message-ID: <594866df-ef56-055f-c13c-64fac5797164@suse.com>
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
 <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
In-Reply-To: <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>

--------------ZqePaME2t2Xr00pBN6b1ShkQ
Content-Type: multipart/mixed; boundary="------------Q7WiS17ExvyNKOn2eZ3hqg4i"

--------------Q7WiS17ExvyNKOn2eZ3hqg4i
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 10:09, Jan Beulich wrote:
> On 22.06.2022 18:10, Juergen Gross wrote:
>> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
>> put the brk area into the .bss segment, causing it not to be cleared
>> initially.
> 
> This reads contradictorily: if the area were put in .bss, it would be
> cleared. The thing is, it is put in .bss..brk in the object files, while
> the linker script places it in .brk (i.e. outside of .bss).

Hmm, yes, this should be reworded.

> 
>> As the brk area is used to allocate early page tables, these
>> might contain garbage in not explicitly written entries.
> 
> I'm surprised this lack of zero-initialization didn't cause any issue
> outside of PV Xen. Unless, of course, users of the facility were never
> meant to assume blank pages coming from there, in which case Xen's use
> for early page tables would have been wrong (in not explicitly zeroing
> the space first).

Fun fact: It's not Xen's use for early page tables, but the kernel's
init code. It is used for bare metal, too.

The use case for initial page tables is the problematic one. Only the
needed page table entries are written by the kernel, so the other ones
keep their initial garbage values. As normally no uninitialized entries
are ever referenced, this will have no real impact.

With Xen, however, ALL entries are being validated, which led to the
early crash of dom0.


Juergen
--------------Q7WiS17ExvyNKOn2eZ3hqg4i--

--------------ZqePaME2t2Xr00pBN6b1ShkQ--

--------------tyKWPK4tg4qidWiOv9WpCwsK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0IOoFAwAAAAAACgkQsN6d1ii/Ey9A
awf/UbFKzpxwA/vQQZx1sPtORlECc5baaQN5NYZJ7TMX450cnX99+a/RTXSvHL3o3K0xGGnDG7xr
o1AmHdBMnWKXrpnukvQ9+UrY1fsCtnYzxSwOlg7yroVMdG69qVihKMJAY7ixfibcwRZERM9bxP9+
2e5kgMhwRagAc0YTnQNtE5tQ2GzIi3aAnQa1MYDn2wgCROJDlMIn6PjEevw0oWF+07VM/yyZtV7m
ZCw6JANjOHN/HmDRhMX9ChuOoElGn4LKlYO+m2N4X/l3U0rIew6BzUL4a8/ZNZtSVFd6ZyN9YyBx
e858wV+nkkPMPt60DezRgzzF2LSDh5I9HGvEqSmp/g==
=z5F0
-----END PGP SIGNATURE-----

--------------tyKWPK4tg4qidWiOv9WpCwsK--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354578.581757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4I9O-0005kg-39; Thu, 23 Jun 2022 08:25:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354578.581757; Thu, 23 Jun 2022 08:25:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4I9O-0005kX-06; Thu, 23 Jun 2022 08:25:18 +0000
Received: by outflank-mailman (input) for mailman id 354578;
 Thu, 23 Jun 2022 08:25:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4I9M-0005Uq-M4
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 08:25:16 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 013765d6-f2ce-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 10:25:15 +0200 (CEST)
Received: from mail-dm6nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 04:25:12 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MN2PR03MB5008.namprd03.prod.outlook.com (2603:10b6:208:1ac::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Thu, 23 Jun
 2022 08:25:11 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 08:25:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 013765d6-f2ce-11ec-bd2d-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.59.169
X-IronPort-MID: 74236204
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,215,1650945600"; 
   d="scan'208";a="74236204"
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/6] x86/Kconfig: add selection of default x2APIC destination mode
Date: Thu, 23 Jun 2022 10:24:23 +0200
Message-Id: <20220623082428.28038-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Allow selecting the default x2APIC destination mode from Kconfig.
Note that the default destination mode remains Logical (Cluster) mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/Kconfig          | 29 +++++++++++++++++++++++++++++
 xen/arch/x86/genapic/x2apic.c |  6 ++++--
 2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 1e31edc99f..f560dc13f4 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -226,6 +226,35 @@ config XEN_ALIGN_2M
 
 endchoice
 
+choice
+	prompt "x2APIC default destination mode"
+	default X2APIC_LOGICAL
+	---help---
+	  Specify default destination mode for x2APIC.
+
+	  If unsure, choose "Logical".
+
+config X2APIC_LOGICAL
+	bool "Logical mode"
+	---help---
+	  Use Logical Destination mode.
+
+	  When using this mode, APICs are addressed using the Logical
+	  Destination mode, which allows optimized IPI sending, but
+	  also reduces the number of vectors available for external
+	  interrupts compared to Physical mode.
+
+config X2APIC_PHYS
+	bool "Physical mode"
+	---help---
+	  Use Physical Destination mode.
+
+	  When using this mode, APICs are addressed using the Physical
+	  Destination mode, which allows all dynamic vectors to be used
+	  on each CPU independently.
+
+endchoice
+
 config GUEST
 	bool
 
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index de5032f202..4b9bbe2f3e 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -228,7 +228,7 @@ static struct notifier_block x2apic_cpu_nfb = {
    .notifier_call = update_clusterinfo
 };
 
-static s8 __initdata x2apic_phys = -1; /* By default we use logical cluster mode. */
+static int8_t __initdata x2apic_phys = -1;
 boolean_param("x2apic_phys", x2apic_phys);
 
 const struct genapic *__init apic_x2apic_probe(void)
@@ -241,7 +241,9 @@ const struct genapic *__init apic_x2apic_probe(void)
          * the usage of the high 16 bits to hold the cluster ID.
          */
         x2apic_phys = !iommu_intremap ||
-                      (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL);
+                      (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL) ||
+                      (IS_ENABLED(CONFIG_X2APIC_PHYS) &&
+                       !(acpi_gbl_FADT.flags & ACPI_FADT_APIC_CLUSTER));
     }
     else if ( !x2apic_phys )
         switch ( iommu_intremap )
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:25:27 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 0/6] x86/irq: switch x2APIC default destination mode
Date: Thu, 23 Jun 2022 10:24:22 +0200
Message-Id: <20220623082428.28038-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hello,

The following series aims to change the default x2APIC destination mode
from Logical to Physical.  This is done in order to cope with boxes that
don't have a huge number of CPUs, but do have a non-trivial number of
PCI devices using MSI(-X).

The default x2APIC destination mode can now be set from Kconfig, and
will default to Physical in order to boot reliably on all boxes.

The remaining patches clean up the interrupt limits reported at boot
and make those values more realistic.

Thanks, Roger.

Roger Pau Monne (6):
  x86/Kconfig: add selection of default x2APIC destination mode
  x86/x2apic: use physical destination mode by default
  x86/setup: init nr_irqs after having detected x2APIC support
  x86/irq: fix setting irq limits
  x86/irq: print nr_irqs as limit on the number of MSI(-X) interrupts
  x86/irq: do not set nr_irqs based on nr_irqs_gsi in APIC mode

 docs/misc/xen-command-line.pandoc |  5 ++---
 xen/arch/x86/Kconfig              | 29 +++++++++++++++++++++++++++++
 xen/arch/x86/genapic/x2apic.c     |  6 ++++--
 xen/arch/x86/include/asm/apic.h   |  2 ++
 xen/arch/x86/io_apic.c            | 10 ----------
 xen/arch/x86/irq.c                | 15 +++++++++++++++
 xen/arch/x86/mpparse.c            |  5 +++++
 7 files changed, 57 insertions(+), 15 deletions(-)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:25:27 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/6] x86/setup: init nr_irqs after having detected x2APIC support
Date: Thu, 23 Jun 2022 10:24:25 +0200
Message-Id: <20220623082428.28038-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

The logic in ioapic_init() that sets the number of vectors available
for external interrupts requires knowing the x2APIC Destination Mode.
As such, move the call to after x2APIC BSP setup.

Do it as part of init_irq_data(), which is called just after x2APIC
BSP init and also makes use of nr_irqs itself.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/io_apic.c | 10 ----------
 xen/arch/x86/irq.c     | 10 ++++++++++
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index c086f40f63..8d4923ba9a 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2653,16 +2653,6 @@ void __init ioapic_init(void)
                max_gsi_irqs, nr_irqs_gsi);
         nr_irqs_gsi = max_gsi_irqs;
     }
-
-    if ( nr_irqs == 0 )
-        nr_irqs = cpu_has_apic ?
-                  max(0U + num_present_cpus() * NR_DYNAMIC_VECTORS,
-                      8 * nr_irqs_gsi) :
-                  nr_irqs_gsi;
-    else if ( nr_irqs < 16 )
-        nr_irqs = 16;
-    printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
-           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
 }
 
 unsigned int arch_hwdom_irqs(domid_t domid)
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index de30ee7779..b51e25f696 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -420,6 +420,16 @@ int __init init_irq_data(void)
     struct irq_desc *desc;
     int irq, vector;
 
+    if ( nr_irqs == 0 )
+        nr_irqs = cpu_has_apic ? max(0U + num_present_cpus() *
+                                     NR_DYNAMIC_VECTORS, 8 * nr_irqs_gsi)
+                               : nr_irqs_gsi;
+    else if ( nr_irqs < 16 )
+        nr_irqs = 16;
+
+    printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
+           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
+
     for ( vector = 0; vector < X86_NR_VECTORS; ++vector )
         this_cpu(vector_irq)[vector] = INT_MIN;
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:27 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/6] x86/x2apic: use physical destination mode by default
Date: Thu, 23 Jun 2022 10:24:24 +0200
Message-Id: <20220623082428.28038-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Using cluster mode by default greatly limits the number of vectors
available, as the vector space is then shared amongst all the CPUs in
the logical cluster.

This can lead to vector shortages on boxes with a modest number of
CPUs but a non-trivial number of devices: there are reports of boxes
with 32 CPUs (2 logical clusters, and thus only 414 dynamic vectors)
that run out of vectors and fail to set up interrupts for dom0.

This could be considered a regression when switching from xAPIC mode,
as xAPIC only supports physical destination mode.

Switch default Kconfig selection to use x2APIC physical mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 docs/misc/xen-command-line.pandoc | 5 ++---
 xen/arch/x86/Kconfig              | 4 ++--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index a92b7d228c..952874c4f4 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2646,11 +2646,10 @@ Permit use of x2apic setup for SMP environments.
 ### x2apic_phys (x86)
 > `= <boolean>`
 
-> Default: `true` if **FADT** mandates physical mode or if interrupt remapping
->          is not available, `false` otherwise.
+> Default: `false` if **FADT** mandates cluster mode, `true` otherwise.
 
 In the case that x2apic is in use, this option switches between physical and
-clustered mode.  The default, given no hint from the **FADT**, is cluster
+clustered mode.  The default, given no hint from the **FADT**, is physical
 mode.
 
 ### xenheap_megabytes (arm32)
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index f560dc13f4..74bfb37db4 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -228,11 +228,11 @@ endchoice
 
 choice
 	prompt "x2APIC default destination mode"
-	default X2APIC_LOGICAL
+	default X2APIC_PHYS
 	---help---
 	  Specify default destination mode for x2APIC.
 
-	  If unsure, choose "Logical".
+	  If unsure, choose "Physical".
 
 config X2APIC_LOGICAL
 	bool "Logical mode"
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:31 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 4/6] x86/irq: fix setting irq limits
Date: Thu, 23 Jun 2022 10:24:26 +0200
Message-Id: <20220623082428.28038-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

The current code to calculate nr_irqs assumes the APIC destination mode
is physical, and hence that all vectors on each possible CPU are
available for use by a different interrupt source. This is not true
when using Logical (Cluster) destination mode, where CPUs in the same
cluster share the vector space.

Fix this by calculating the maximum Cluster ID and using it to derive
the number of clusters in the system. Note the code assumes Cluster
IDs are contiguous; otherwise nr_irqs will be set to a number higher
than the real amount of vectors (still not fatal).

The number of clusters is then used instead of the number of present
CPUs when calculating the value of nr_irqs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/genapic/x2apic.c   |  2 +-
 xen/arch/x86/include/asm/apic.h |  2 ++
 xen/arch/x86/irq.c              | 10 ++++++++--
 xen/arch/x86/mpparse.c          |  5 +++++
 4 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 4b9bbe2f3e..cd1f55cad8 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -228,7 +228,7 @@ static struct notifier_block x2apic_cpu_nfb = {
    .notifier_call = update_clusterinfo
 };
 
-static int8_t __initdata x2apic_phys = -1;
+int8_t __initdata x2apic_phys = -1;
 boolean_param("x2apic_phys", x2apic_phys);
 
 const struct genapic *__init apic_x2apic_probe(void)
diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index 7625c0ecd6..6060628836 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -27,6 +27,8 @@ enum apic_mode {
 extern bool iommu_x2apic_enabled;
 extern u8 apic_verbosity;
 extern bool directed_eoi_enabled;
+extern uint16_t x2apic_max_cluster_id;
+extern int8_t x2apic_phys;
 
 void check_x2apic_preenabled(void);
 void x2apic_bsp_setup(void);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index b51e25f696..b64d18c450 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -421,9 +421,15 @@ int __init init_irq_data(void)
     int irq, vector;
 
     if ( nr_irqs == 0 )
-        nr_irqs = cpu_has_apic ? max(0U + num_present_cpus() *
-                                     NR_DYNAMIC_VECTORS, 8 * nr_irqs_gsi)
+    {
+        unsigned int vec_spaces =
+            (x2apic_enabled && !x2apic_phys) ? x2apic_max_cluster_id + 1
+                                             : num_present_cpus();
+
+        nr_irqs = cpu_has_apic ? max(vec_spaces * NR_DYNAMIC_VECTORS,
+                                     8 * nr_irqs_gsi)
                                : nr_irqs_gsi;
+    }
     else if ( nr_irqs < 16 )
         nr_irqs = 16;
 
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..dc112bffc7 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -131,6 +131,8 @@ static int __init mpf_checksum(unsigned char *mp, int len)
 	return sum & 0xFF;
 }
 
+uint16_t __initdata x2apic_max_cluster_id;
+
 /* Return xen's logical cpu_id of the new added cpu or <0 if error */
 static int MP_processor_info_x(struct mpc_config_processor *m,
 			       u32 apicid, bool hotplug)
@@ -199,6 +201,9 @@ static int MP_processor_info_x(struct mpc_config_processor *m,
 		def_to_bigsmp = true;
 	}
 
+	x2apic_max_cluster_id = max(x2apic_max_cluster_id,
+				    (uint16_t)(apicid >> 4));
+
 	return cpu;
 }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:25:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354583.581802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4I9f-0007Cv-K4; Thu, 23 Jun 2022 08:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354583.581802; Thu, 23 Jun 2022 08:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4I9f-0007CT-F3; Thu, 23 Jun 2022 08:25:35 +0000
Received: by outflank-mailman (input) for mailman id 354583;
 Thu, 23 Jun 2022 08:25:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4I9e-00071z-Q3
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 08:25:34 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ba0c902-f2ce-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 10:25:33 +0200 (CEST)
Received: from mail-mw2nam12lp2047.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 04:25:30 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MN2PR03MB5008.namprd03.prod.outlook.com (2603:10b6:208:1ac::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Thu, 23 Jun
 2022 08:25:29 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 08:25:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ba0c902-f2ce-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655972733;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=MuxBZdCHcFcLAUSqULIUORd90AHHM71XIXawMiighH0=;
  b=Pbb4Or+lR1t+hsU1PGLBIIP5aYQ3zLq6YnAKQImDgB/OqqHZqoVeETWh
   Ejyq51I0GEne7NSCqB2ix9TPMLy49dEZ0MqX5eYjzuhx9FzlHQ7N+I7QU
   cOjBBs+T4ntkCucjaKh/PmOts6PImAJ/HSdQ7ISRwU7mVIof90U8P5ir3
   k=;
X-IronPort-RemoteIP: 104.47.66.47
X-IronPort-MID: 74236222
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,215,1650945600"; 
   d="scan'208";a="74236222"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/6] x86/irq: print nr_irqs as limit on the number of MSI(-X) interrupts
Date: Thu, 23 Jun 2022 10:24:27 +0200
Message-Id: <20220623082428.28038-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0190.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a4::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a46a249c-89e5-4a0a-745f-08da54f1ee5d
X-MS-TrafficTypeDiagnostic: MN2PR03MB5008:EE_
X-Microsoft-Antispam-PRVS:
	<MN2PR03MB5008F9BF9411B98B50D751F58FB59@MN2PR03MB5008.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a46a249c-89e5-4a0a-745f-08da54f1ee5d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 08:25:29.0394
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hkZQeG+pQm43iOjdnZkE2hNNhiI1l3NXvNRz5XIqEXOn5s5No3Gav89F7L/0Moo8x1WspT9EI1lEe5lpCMqBoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5008

Printing nr_irqs minus nr_irqs_gsi is misleading, as GSI interrupts
are not allocated unless requested by the hardware domain. A hardware
domain might use no GSIs at all (or just one, for the ACPI SCI), in
which case (almost) all of nr_irqs is available for MSI(-X) usage.

No functional difference, just affects the printed message.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index b64d18c450..7f75ec8bcc 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -434,7 +434,7 @@ int __init init_irq_data(void)
         nr_irqs = 16;
 
     printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
-           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
+           nr_irqs_gsi, nr_irqs);
 
     for ( vector = 0; vector < X86_NR_VECTORS; ++vector )
         this_cpu(vector_irq)[vector] = INT_MIN;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:28:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 08:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354611.581813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ICh-0000fZ-2T; Thu, 23 Jun 2022 08:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354611.581813; Thu, 23 Jun 2022 08:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ICg-0000fQ-Vc; Thu, 23 Jun 2022 08:28:42 +0000
Received: by outflank-mailman (input) for mailman id 354611;
 Thu, 23 Jun 2022 08:28:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4I9w-0005Uq-MY
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 08:25:52 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 16bbc3bb-f2ce-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 10:25:51 +0200 (CEST)
Received: from mail-dm6nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 04:25:40 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MN2PR03MB5008.namprd03.prod.outlook.com (2603:10b6:208:1ac::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.15; Thu, 23 Jun
 2022 08:25:33 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 08:25:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16bbc3bb-f2ce-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655972751;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=M9sDkqyoWe8u+1pJDvGlTJ+8UF/5g6xliqOyamJnVWI=;
  b=F7fy1z09bxx8iGH5vfE62HRDlKLGLlapEq6BKKPUwGtFow1zHTkD3Dva
   lYal19zr7NMFqYDtY06d1XUJ3cdK9MMEZ+svD1+lCTwiKyIUljW1ENl7C
   ErUQgjeBcW9YX7XQHMKjJ0TbK7CFis4c3jY7WaGifAuGHLv8u5tKFGrnO
   M=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 73580354
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,215,1650945600"; 
   d="scan'208";a="73580354"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 6/6] x86/irq: do not set nr_irqs based on nr_irqs_gsi in APIC mode
Date: Thu, 23 Jun 2022 10:24:28 +0200
Message-Id: <20220623082428.28038-7-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623082428.28038-1-roger.pau@citrix.com>
References: <20220623082428.28038-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0447.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:e::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5f99e338-6a36-44ae-1460-08da54f1f0fe
X-MS-TrafficTypeDiagnostic: MN2PR03MB5008:EE_
X-Microsoft-Antispam-PRVS:
	<MN2PR03MB5008604B2685388A6C5EF0AD8FB59@MN2PR03MB5008.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f99e338-6a36-44ae-1460-08da54f1f0fe
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 08:25:33.5412
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CpGckuVjjZjhPRGfyVuGz6rhYne9hwQlUPbRxKbumSVGuMtfORHXcu3e9k4KYT2mIFqA2CTbwyuEDdzeuLt7GQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5008

When using an APIC, do not set nr_irqs based on a multiple of
nr_irqs_gsi (currently x8); instead derive it exclusively from the
number of available vectors on the system.

There's no point in setting nr_irqs to a value higher than the
available set of vectors, as vector allocation will fail anyway.

Fixes: e99d45da8a ('x86/x2apic: properly implement cluster mode')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 7f75ec8bcc..e3b0bee527 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -426,8 +426,7 @@ int __init init_irq_data(void)
             (x2apic_enabled && !x2apic_phys) ? x2apic_max_cluster_id + 1
                                              : num_present_cpus();
 
-        nr_irqs = cpu_has_apic ? max(vec_spaces * NR_DYNAMIC_VECTORS,
-                                     8 * nr_irqs_gsi)
+        nr_irqs = cpu_has_apic ? vec_spaces * NR_DYNAMIC_VECTORS
                                : nr_irqs_gsi;
     }
     else if ( nr_irqs < 16 )
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:47:38 2022
Message-ID: <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
Date: Thu, 23 Jun 2022 10:47:16 +0200
From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Problem loading linux 5.19 as PV dom0
In-Reply-To: <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
Content-Type: text/plain; charset=UTF-8

On 23.06.2022 10:06, Juergen Gross wrote:
> On 23.06.22 09:55, Jan Beulich wrote:
>> On 22.06.2022 18:06, Juergen Gross wrote:
>>> A Linux kernel 5.19 can only be loaded as dom0, if it has been
>>> built with CONFIG_AMD_MEM_ENCRYPT enabled. This is due to the
>>> fact that otherwise the (relevant) last section of the built
>>> kernel has the NOLOAD flag set (it is still marked with
>>> SHF_ALLOC).
>>>
>>> I think at least the hypervisor needs to be changed to support
>>> this layout. Otherwise it will put the initial page tables for
>>> dom0 at the same position as this last section, leading to
>>> early crashes.
>>
>> Isn't Xen using the bzImage header there, rather than any ELF
>> one? In which case it would matter how the NOLOAD section is
> 
> For a PV kernel? No, I don't think so.

Actually it's a mix (and the same for PV and PVH) - the bzImage
header is parsed to get at the embedded ELF header. XenoLinux was
what would/could be loaded as plain ELF.

>> actually represented in that header. Can you provide a dump (or
>> binary representation) of both headers?
> 
> Program Header:
>      LOAD off    0x0000000000200000 vaddr 0xffffffff81000000 paddr 
> 0x0000000001000000 align 2**21
>           filesz 0x000000000145e114 memsz 0x000000000145e114 flags r-x
>      LOAD off    0x0000000001800000 vaddr 0xffffffff82600000 paddr 
> 0x0000000002600000 align 2**21
>           filesz 0x00000000006b7000 memsz 0x00000000006b7000 flags rw-
>      LOAD off    0x0000000002000000 vaddr 0x0000000000000000 paddr 
> 0x0000000002cb7000 align 2**21
>           filesz 0x00000000000312a8 memsz 0x00000000000312a8 flags rw-
>      LOAD off    0x00000000020e9000 vaddr 0xffffffff82ce9000 paddr 
> 0x0000000002ce9000 align 2**21
>           filesz 0x00000000001fd000 memsz 0x0000000000317000 flags rwx

20e9000 + 317000 = 2400000

>      NOTE off    0x000000000165df10 vaddr 0xffffffff8245df10 paddr 
> 0x000000000245df10 align 2**2
>           filesz 0x0000000000000204 memsz 0x0000000000000204 flags ---
> 
> 
> Sections:
> Idx Name          Size      VMA               LMA               File off  Algn
> ...
>   30 .smp_locks    00009000  ffffffff82edc000  0000000002edc000  022dc000  2**2
>                    CONTENTS, ALLOC, LOAD, READONLY, DATA
>   31 .data_nosave  00001000  ffffffff82ee5000  0000000002ee5000  022e5000  2**2
>                    CONTENTS, ALLOC, LOAD, DATA
>   32 .bss          0011a000  ffffffff82ee6000  0000000002ee6000  022e6000  2**12
>                    ALLOC

2ee6000 + 11a000 = 3000000

>   33 .brk          00026000  ffffffff83000000  ffffffff83000000  00000000  2**0
>                    ALLOC

This space isn't covered by any program header, which in turn may be a
result of its LMA matching its VMA, unlike for all other sections.
Looks like a linker script or linker issue to me: While ...

> And the related linker script part:
> 
>          __end_of_kernel_reserve = .;
> 
>          . = ALIGN(PAGE_SIZE);
>          .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {

... this AT() looks correct to me, I'm uncertain of the use of NOLOAD.
Note that .bss doesn't have NOLOAD, matching the vast majority of the
linker scripts ld itself has.

Jan

>                  __brk_base = .;
>                  . += 64 * 1024;         /* 64k alignment slop space */
>                  *(.bss..brk)            /* areas brk users have reserved */
>                  __brk_limit = .;
>          }
> 
>          . = ALIGN(PAGE_SIZE);           /* keep VO_INIT_SIZE page aligned */
>          _end = .;
> 
> 
> Juergen



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 08:50:24 2022
Message-ID: <bba2828b-0649-3a93-16a4-eb3b35214af0@suse.com>
Date: Thu, 23 Jun 2022 10:50:17 +0200
From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
In-Reply-To: <594866df-ef56-055f-c13c-64fac5797164@suse.com>
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
 <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
 <594866df-ef56-055f-c13c-64fac5797164@suse.com>
Content-Type: text/plain; charset=UTF-8

On 23.06.2022 10:14, Juergen Gross wrote:
> On 23.06.22 10:09, Jan Beulich wrote:
>> On 22.06.2022 18:10, Juergen Gross wrote:
>>> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
>>> put the brk area into the .bss segment, causing it not to be cleared
>>> initially.
>>
>> This reads contradictorily: If the area was put in .bss, it would be
>> cleared. Thing is it is put in .bss..brk in the object files, while
>> the linker script puts it in .brk (i.e. outside of .bss).
> 
> Hmm, yes, this should be reworded.
> 
>>
>>> As the brk area is used to allocate early page tables, these
>>> might contain garbage in not explicitly written entries.
>>
>> I'm surprised this lack of zero-initialization didn't cause any issue
>> outside of PV Xen. Unless of course there never was the intention for
>> users of the facility to assume blank pages coming from there, in
>> which case Xen's use for early page tables would have been wrong (in
>> not explicitly zeroing the space first).
> 
> Fun fact: It's not Xen's use for early page tables, but the kernel's
> init code. It is used for bare metal, too.
> 
> The use case for initial page tables is the problematic one. Only the
> needed page table entries are written by the kernel, so the other ones
> keep their initial garbage values. As normally no uninitialized entries
> are ever referenced, this will have no real impact.

Are you sure there couldn't surface user-mode accessible page table
entries pointing at random pages?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:02:09 2022
Message-ID: <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
Date: Thu, 23 Jun 2022 11:01:59 +0200
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Problem loading linux 5.19 as PV dom0
In-Reply-To: <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
 <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------4UWtt0HYQ0jRSW400tu08J6a"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------4UWtt0HYQ0jRSW400tu08J6a
Content-Type: multipart/mixed; boundary="------------ff80WqD6Reakmw04RyHrebFu";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
Subject: Re: Problem loading linux 5.19 as PV dom0
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
 <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
In-Reply-To: <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>

--------------ff80WqD6Reakmw04RyHrebFu
Content-Type: multipart/mixed; boundary="------------XNAo5TeMVPRvod1rflvzBYyD"

--------------XNAo5TeMVPRvod1rflvzBYyD
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 10:47, Jan Beulich wrote:
> On 23.06.2022 10:06, Juergen Gross wrote:
>> On 23.06.22 09:55, Jan Beulich wrote:
>>> On 22.06.2022 18:06, Juergen Gross wrote:
>>>> A Linux kernel 5.19 can only be loaded as dom0, if it has been
>>>> built with CONFIG_AMD_MEM_ENCRYPT enabled. This is due to the
>>>> fact that otherwise the (relevant) last section of the built
>>>> kernel has the NOLOAD flag set (it is still marked with
>>>> SHF_ALLOC).
>>>>
>>>> I think at least the hypervisor needs to be changed to support
>>>> this layout. Otherwise it will put the initial page tables for
>>>> dom0 at the same position as this last section, leading to
>>>> early crashes.
>>>
>>> Isn't Xen using the bzImage header there, rather than any ELF
>>> one? In which case it would matter how the NOLOAD section is
>>
>> For a PV kernel? No, I don't think so.
> 
> Actually it's a mix (and the same for PV and PVH) - the bzImage
> header is parsed to get at the embedded ELF header. XenoLinux was
> what would/could be loaded as plain ELF.
> 
>>> actually represented in that header. Can you provide a dump (or
>>> binary representation) of both headers?
>>
>> Program Header:
>>       LOAD off    0x0000000000200000 vaddr 0xffffffff81000000 paddr
>> 0x0000000001000000 align 2**21
>>            filesz 0x000000000145e114 memsz 0x000000000145e114 flags r-x
>>       LOAD off    0x0000000001800000 vaddr 0xffffffff82600000 paddr
>> 0x0000000002600000 align 2**21
>>            filesz 0x00000000006b7000 memsz 0x00000000006b7000 flags rw-
>>       LOAD off    0x0000000002000000 vaddr 0x0000000000000000 paddr
>> 0x0000000002cb7000 align 2**21
>>            filesz 0x00000000000312a8 memsz 0x00000000000312a8 flags rw-
>>       LOAD off    0x00000000020e9000 vaddr 0xffffffff82ce9000 paddr
>> 0x0000000002ce9000 align 2**21
>>            filesz 0x00000000001fd000 memsz 0x0000000000317000 flags rwx
> 
> 20e9000 + 317000 = 2400000
> 
>>       NOTE off    0x000000000165df10 vaddr 0xffffffff8245df10 paddr
>> 0x000000000245df10 align 2**2
>>            filesz 0x0000000000000204 memsz 0x0000000000000204 flags ---
>>
>>
>> Sections:
>> Idx Name          Size      VMA               LMA               File off  Algn
>> ...
>>    30 .smp_locks    00009000  ffffffff82edc000  0000000002edc000  022dc000  2**2
>>                     CONTENTS, ALLOC, LOAD, READONLY, DATA
>>    31 .data_nosave  00001000  ffffffff82ee5000  0000000002ee5000  022e5000  2**2
>>                     CONTENTS, ALLOC, LOAD, DATA
>>    32 .bss          0011a000  ffffffff82ee6000  0000000002ee6000  022e6000  2**12
>>                     ALLOC
> 
> 2ee6000 + 11a000 = 3000000
> 
>>    33 .brk          00026000  ffffffff83000000  ffffffff83000000  00000000  2**0
>>                     ALLOC
> 
> This space isn't covered by any program header. Which in turn may be a
> result of its LMA matching its VMA, unlike for all other sections.
> Looks like a linker script or linker issue to me: While ...
> 
>> And the related linker script part:
>>
>>           __end_of_kernel_reserve = .;
>>
>>           . = ALIGN(PAGE_SIZE);
>>           .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
> 
> ... this AT() looks correct to me, I'm uncertain of the use of NOLOAD.
> Note that .bss doesn't have NOLOAD, matching the vast majority of the
> linker scripts ld itself has.

Yeah, but the filesz and memsz values of the .bss related program header
differ a lot (basically by the .bss size plus some alignment), and the
.bss section flags clearly say that its attributes match those of .brk.

I'm not sure why the linker wouldn't add .brk to the same program
header entry as .bss, but maybe that is some .bss special handling.

In the end I think this might be a linker issue, but even in this case
we should really consider handling it, as otherwise we'd just say
"hey, due to a linker problem we don't support Linux 5.19 in PV mode".

In the end we can't control which linker versions are used to link
the kernel.


Juergen
--------------XNAo5TeMVPRvod1rflvzBYyD
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------XNAo5TeMVPRvod1rflvzBYyD--

--------------ff80WqD6Reakmw04RyHrebFu--

--------------4UWtt0HYQ0jRSW400tu08J6a
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0LAcFAwAAAAAACgkQsN6d1ii/Ey/k
xgf6Al255mvkXh6uQzjcAJ81skub5Gj5xpI4v5SGPfA84ybL44dzO9J86PJiiIkSF7tHqJdBYzHu
SJplF5RjVm8omCZiXZgMhwacOii2lngZ5o15FfzWd4Q5kiojEQzUhQRK5FGB0dP/Qp9zXxy0pNdl
k5F2Q/Eny9UQdTqApcm9TVPfqPCf7RkdZG3tK62z4GBfqwQfmtualCyQ8voRqtPBD3nV867kp4CW
vmfVYSTCd8wjTuju/LoZoJ9EjXwvQbs1F8KvHLycc223qUiapGgrTPo95RhEOCPIOlWv9xxiMNG7
GzvQfKJ6QDUOWwG/P5E6NWzxwZMEi1vfcOMk9/L9ig==
=VScz
-----END PGP SIGNATURE-----

--------------4UWtt0HYQ0jRSW400tu08J6a--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:03:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:03:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354647.581856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ik5-0006cV-TS; Thu, 23 Jun 2022 09:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354647.581856; Thu, 23 Jun 2022 09:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ik5-0006cO-QV; Thu, 23 Jun 2022 09:03:13 +0000
Received: by outflank-mailman (input) for mailman id 354647;
 Thu, 23 Jun 2022 09:03:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4Ik4-0006cD-QD
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:03:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4edfb566-f2d3-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:03:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8611E1FD69;
 Thu, 23 Jun 2022 09:03:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3619A13461;
 Thu, 23 Jun 2022 09:03:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id o4YOC08stGJEGQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:03:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4edfb566-f2d3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655974991; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bAKw0xzUEKMoT1Yy+1wdM0M3tDw+GQYafQLrIJQL42w=;
	b=tTNhpU9e6FGUnJ/VrF1WQ8rm0cfR/TwSCKUyU0AG6mAvQUI3kEZJR7TkdTC58AdXSuQbWv
	EKqfrhJUMtbdZUKFHz6jtHbioVfhEOGvkDyNEnLOs6+S53cKuE9SZegThgQiSGOFUunDRJ
	fwChRPACmGdDpT88un9aQVDYpls/MIA=
Message-ID: <41a4a440-36fc-513a-232f-18b781b41926@suse.com>
Date: Thu, 23 Jun 2022 11:03:10 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
 <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
 <594866df-ef56-055f-c13c-64fac5797164@suse.com>
 <bba2828b-0649-3a93-16a4-eb3b35214af0@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <bba2828b-0649-3a93-16a4-eb3b35214af0@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0Kgae878YS1jYcAQMbnGeBB5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0Kgae878YS1jYcAQMbnGeBB5
Content-Type: multipart/mixed; boundary="------------kd5r3yTajMvuOOQvM9ZBotxm";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
Message-ID: <41a4a440-36fc-513a-232f-18b781b41926@suse.com>
Subject: Re: [PATCH 2/2] x86: fix setup of brk area
References: <20220622161048.4483-1-jgross@suse.com>
 <20220622161048.4483-3-jgross@suse.com>
 <b210d5c7-26cc-df6f-fa37-56fe04c8d0dc@suse.com>
 <594866df-ef56-055f-c13c-64fac5797164@suse.com>
 <bba2828b-0649-3a93-16a4-eb3b35214af0@suse.com>
In-Reply-To: <bba2828b-0649-3a93-16a4-eb3b35214af0@suse.com>

--------------kd5r3yTajMvuOOQvM9ZBotxm
Content-Type: multipart/mixed; boundary="------------zvMaJq6FxuLUBrOMsqi6oh8u"

--------------zvMaJq6FxuLUBrOMsqi6oh8u
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 10:50, Jan Beulich wrote:
> On 23.06.2022 10:14, Juergen Gross wrote:
>> On 23.06.22 10:09, Jan Beulich wrote:
>>> On 22.06.2022 18:10, Juergen Gross wrote:
>>>> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
>>>> put the brk area into the .bss segment, causing it not to be cleared
>>>> initially.
>>>
>>> This reads contradictively: If the area was put in .bss, it would be
>>> cleared. Thing is it is put in .bss..brk in the object files, while
>>> the linker script puts it in .brk (i.e. outside of .bss).
>>
>> Hmm, yes, this should be reworded.
>>
>>>
>>>> As the brk area is used to allocate early page tables, these
>>>> might contain garbage in not explicitly written entries.
>>>
>>> I'm surprised this lack of zero-initialization didn't cause any issue
>>> outside of PV Xen. Unless of course there never was the intention for
>>> users of the facility to assume blank pages coming from there, in
>>> which case Xen's use for early page tables would have been wrong (in
>>> not explicitly zeroing the space first).
>>
>> Fun fact: It's not Xen's use for early page tables, but the kernel's
>> init code. It is used for bare metal, too.
>>
>> The use case for initial page tables is the problematic one. Only the
>> needed page table entries are written by the kernel, so the other ones
>> keep their initial garbage values. As normally no uninitialized entries
>> are ever referenced, this will have no real impact.
> 
> Are you sure there couldn't surface user-mode accessible page table
> entries pointing at random pages?

No, I'm not sure this can't happen.


Juergen
--------------zvMaJq6FxuLUBrOMsqi6oh8u--

--------------kd5r3yTajMvuOOQvM9ZBotxm--

--------------0Kgae878YS1jYcAQMbnGeBB5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0LE4FAwAAAAAACgkQsN6d1ii/Ey/j
Qgf+NW0ofa9R6lUtDLFDh4bkvNB842OI9FDH6ntwEmPPNkpeRZDlm+6ERG/pH31mtmz8UQr6MLh8
335F4v4eKtYG7+fvUuzKkMjfvT5HOskrYMwCavpMEDsJ6Ds9bjtaMHSaxoKzA0LRkvqilEpLBXBq
L0q0c37Pxe8+E4nXfRB5uV9/ArA48150fMLIRx0Poc411tlAypyN6kqyeofoUDu4As4dYu/SsRne
SCMUkzrgtIw4ybsINZxHN6YrdsKo3yUaKyMtcEE2ThqpDGkU4oIKpgyhsvNvmDVkzZQ5vojWaEMJ
rzsUOngMtdtpLolRKVSBaQz75wmNdglAChV/cRgHmQ==
=+bXg
-----END PGP SIGNATURE-----

--------------0Kgae878YS1jYcAQMbnGeBB5--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:04:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354653.581867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4IlA-0007BF-7n; Thu, 23 Jun 2022 09:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354653.581867; Thu, 23 Jun 2022 09:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4IlA-0007B6-4r; Thu, 23 Jun 2022 09:04:20 +0000
Received: by outflank-mailman (input) for mailman id 354653;
 Thu, 23 Jun 2022 09:04:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4Il8-0007Au-3c
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:04:18 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2050.outbound.protection.outlook.com [40.107.20.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75a9f758-f2d3-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:04:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7252.eurprd04.prod.outlook.com (2603:10a6:20b:1da::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 23 Jun
 2022 09:04:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:04:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a9f758-f2d3-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OYlwdNt3AD501LUmfD8sDAW9l+EFYTvJWoGHTDT+sYwtfTkfhPEsc4XKg3iGbH/eHLzKRvG5tTfpMVdg750mCtaAilK1ruA1+9+qQdFUPNpy/VRMfrr1msMQlJhR2CSIfidTCqKQiSxS9MlP5yDHs++PKBaFQ4Mx0EdcBLOi4ZZ4RNVs3FUGn9ij/B9dxA5zcIUTeDLYPtiHvOJdOfEqQnMUpes9ssbr77fQOr74Z0eZSNjozYlNz9nlST3chvSbqJmahX/7o617fYdo/+oCJz3XCVWQEUoXP55oQqjozm3an4k5yNyRgLNCUX1EzWo1+06KlmforEcJgrfkuHKGdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lJ4QXewQvgXnvx0V4pxYg7RGzttLNZQvSM0/FQi9rjI=;
 b=VwQlGuOq6gQAtun9brq5ba2dU6/xHsqZXl7IazQvydqahwjziXsqbeYQ3RKwCKHssBUMLNUy7bjs6L72vXMtmjoJF1tX7JgmVK66ShGJEc6U9v4FANQ6Dki4Ems4V0hEHJy6vNqBIvnEMWO22gxnQs0kZxYffzmv0CjEAKlyhGqg7I3/8GeqJnfMFxvzVUXHAVIflyRXDdb8CKapo3bDv5d/AONjRurr8EAsVzQmx09hXBozmNexLp0s3mU2Qhsex8u7jPjUd/Gs5gds/UQBQmiPigZLutz2pZmFwzz3QxUHL32IQ44eW0k1fTo17M9oDKMjimDpFQvYhvwg0WqhoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lJ4QXewQvgXnvx0V4pxYg7RGzttLNZQvSM0/FQi9rjI=;
 b=EoOeGRl7O79o5a0jyi/TrvlM/ahb815VTFI8Wjhte3Xopo6HrbCp/SFPda8WgyTmrnpvIu6uVo94vBspRvzTeWxKBz1bCCdALdQ1XoSwXNTDUhF7nqZmEqguQVfYS3CbkZLW9MxL+jVLIFY/mom66lmWv3kY+ckaIPoAidR2GLs9n1a1C13wbQdbwcEwvtDdVJRtfhUSAklX1iVcirx8l4EsHySds4FzvS8gKC0zhLa+qZxf0fAT9Z75MhfWbL9y7TNzLKc/EpHL1ZVYDpEM8Hhc2F6eO5J1k72ZSwdZUT6qureI7ION36raz4ymG0X5KmGKdFi+XMrWNT3JTGp3Tw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>
Date: Thu, 23 Jun 2022 11:04:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: consider alloc-only segments when loading PV dom0
 kernel
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623080208.2214-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623080208.2214-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM7PR04CA0005.eurprd04.prod.outlook.com
 (2603:10a6:20b:110::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 98724d4c-1f27-47a4-513f-08da54f75836
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:04:14.1206
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FIWFsMtTRba1rw/VfKFKS/B/Or/bWTtegrqWUMGVnlSbEBD/ZT0nqmYONdxsONaiy4/ZLTcQfgEfzjLtLgCiIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7252

On 23.06.2022 10:02, Juergen Gross wrote:
> When loading the dom0 kernel for PV mode, the first free usable memory
> location after the kernel needs to take segments into account, which
> have only the ALLOC flag set, but are not specified to be loaded in
> the program headers of the ELF file.
> 
> This is e.g. a problem for Linux kernels from 5.19 onwards, as those
> can have a final NOLOAD section at the end, which must not be used by
> e.g. the start_info structure or the initial page tables allocated by
> the hypervisor.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/common/libelf/libelf-loader.c | 33 +++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
> index 629cc0d3e6..4b0e3ced55 100644
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -467,7 +467,9 @@ do {                                                                \
>  void elf_parse_binary(struct elf_binary *elf)
>  {
>      ELF_HANDLE_DECL(elf_phdr) phdr;
> +    ELF_HANDLE_DECL(elf_shdr) shdr;
>      uint64_t low = -1, high = 0, paddr, memsz;
> +    uint64_t vlow = -1, vhigh = 0, vaddr, voff;
>      unsigned i, count;
>  
>      count = elf_phdr_count(elf);
> @@ -480,6 +482,7 @@ void elf_parse_binary(struct elf_binary *elf)
>          if ( !elf_phdr_is_loadable(elf, phdr) )
>              continue;
>          paddr = elf_uval(elf, phdr, p_paddr);
> +        vaddr = elf_uval(elf, phdr, p_vaddr);
>          memsz = elf_uval(elf, phdr, p_memsz);
>          elf_msg(elf, "ELF: phdr: paddr=%#" PRIx64 " memsz=%#" PRIx64 "\n",
>                  paddr, memsz);
> @@ -487,7 +490,37 @@ void elf_parse_binary(struct elf_binary *elf)
>              low = paddr;
>          if ( high < paddr + memsz )
>              high = paddr + memsz;
> +        if ( vlow > vaddr )
> +            vlow = vaddr;
> +        if ( vhigh < vaddr + memsz )
> +            vhigh = vaddr + memsz;
>      }
> +
> +    voff = vhigh - high;
> +
> +    count = elf_shdr_count(elf);
> +    for ( i = 0; i < count; i++ )
> +    {
> +        shdr = elf_shdr_by_index(elf, i);
> +        if ( !elf_access_ok(elf, ELF_HANDLE_PTRVAL(shdr), 1) )
> +            /* input has an insane section header count field */
> +            break;
> +        if ( !(elf_uval(elf, shdr, sh_flags) & SHF_ALLOC) )
> +            continue;
> +        vaddr = elf_uval(elf, shdr, sh_addr);
> +        memsz = elf_uval(elf, shdr, sh_size);
> +        if ( vlow > vaddr )
> +        {
> +            vlow = vaddr;
> +            low = vaddr - voff;
> +        }
> +        if ( vhigh < vaddr + memsz )
> +        {
> +            vhigh = vaddr + memsz;
> +            high = vaddr + memsz - voff;
> +        }
> +    }

As said in the reply to your problem report: The set of PHDRs doesn't
cover all sections. For loading one should never need to resort to
parsing section headers - in a loadable binary it is no error if
there's no section table in the first place. (The title is also
misleading, as you really mean sections there, not segments. Afaik
there's no concept of "alloc" for segments, which are what program
headers describe.)

Also: Needing to fix this in the hypervisor would mean that Linux
5.19 and onwards cannot be booted on Xen without whichever fix
backported.

Finally, you changing libelf but referring to only Dom0 in the title
looks inconsistent to me.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:08:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354661.581879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ip6-0007sV-PU; Thu, 23 Jun 2022 09:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354661.581879; Thu, 23 Jun 2022 09:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ip6-0007sO-M9; Thu, 23 Jun 2022 09:08:24 +0000
Received: by outflank-mailman (input) for mailman id 354661;
 Thu, 23 Jun 2022 09:08:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4Ip6-0007sI-4h
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:08:24 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 084cc2e8-f2d4-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 11:08:23 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6AF331FD72;
 Thu, 23 Jun 2022 09:08:22 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3336D13461;
 Thu, 23 Jun 2022 09:08:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id vxIiC4YttGLoGwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:08:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 084cc2e8-f2d4-11ec-b725-ed86ccbb4733
Message-ID: <50942106-0082-e86b-8a2c-b04aaafac444@suse.com>
Date: Thu, 23 Jun 2022 11:08:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen: consider alloc-only segments when loading PV dom0
 kernel
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623080208.2214-1-jgross@suse.com>
 <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------YelCvZP46NNe4HUO0DvZ1EQh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------YelCvZP46NNe4HUO0DvZ1EQh
Content-Type: multipart/mixed; boundary="------------i0YvJ6vbkw0B0PpgA60zkOmo";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <50942106-0082-e86b-8a2c-b04aaafac444@suse.com>
Subject: Re: [PATCH] xen: consider alloc-only segments when loading PV dom0
 kernel
References: <20220623080208.2214-1-jgross@suse.com>
 <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>
In-Reply-To: <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>

--------------i0YvJ6vbkw0B0PpgA60zkOmo
Content-Type: multipart/mixed; boundary="------------JpQED6vh5kLQ3WdDvgXr1l5O"

--------------JpQED6vh5kLQ3WdDvgXr1l5O
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 11:04, Jan Beulich wrote:
> On 23.06.2022 10:02, Juergen Gross wrote:
[...]
> As said in the reply to your problem report: The set of PHDRs doesn't
> cover all sections. For loading one should never need to resort to
> parsing section headers - in a loadable binary it is no error if
> there's no section table in the first place. (The title is also

The problem isn't the loading, but the memory usage after doing the
loading. The hypervisor is placing page tables in a memory region
the kernel has other plans with.

> misleading, as you really mean sections there, not segments. Afaik
> there's no concept of "alloc" for segments, which are what program
> headers describe.)

Sorry, will reword.

> Also: Needing to fix this in the hypervisor would mean that Linux
> 5.19 and onwards cannot be booted on Xen without whichever fix
> backported.

Correct. See my reply to the reply you mentioned above.

> Finally, you changing libelf but referring to only Dom0 in the title
> looks inconsistent to me.

Hmm, yes. Will drop the dom0 aspect.


Juergen
--------------JpQED6vh5kLQ3WdDvgXr1l5O--

--------------i0YvJ6vbkw0B0PpgA60zkOmo--

--------------YelCvZP46NNe4HUO0DvZ1EQh--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:08:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:08:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354662.581890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4IpC-00089x-65; Thu, 23 Jun 2022 09:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354662.581890; Thu, 23 Jun 2022 09:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4IpC-00089o-2u; Thu, 23 Jun 2022 09:08:30 +0000
Received: by outflank-mailman (input) for mailman id 354662;
 Thu, 23 Jun 2022 09:08:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4IpB-000897-1G
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:08:29 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70050.outbound.protection.outlook.com [40.107.7.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0b2bdc12-f2d4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:08:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB7PR04MB4283.eurprd04.prod.outlook.com (2603:10a6:5:27::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 09:08:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:08:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b2bdc12-f2d4-11ec-bd2d-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <03ba839e-4249-b18b-81bf-86b98cb319b5@suse.com>
Date: Thu, 23 Jun 2022 11:08:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: Problem loading linux 5.19 as PV dom0
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
 <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
 <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8P251CA0007.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:2f2::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 353d2e0f-8953-48ad-3dce-08da54f7ed39
X-MS-TrafficTypeDiagnostic: DB7PR04MB4283:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 353d2e0f-8953-48ad-3dce-08da54f7ed39
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:08:24.1359
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rIU3SlBjyXSuiU9q/0IGNtC5Go1E4u5fH9QxyQy85onc4HrSl7Q0T8TwPV7iLFfOK3pkHTC3kf45YmeUP+5IlQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB4283

On 23.06.2022 11:01, Juergen Gross wrote:
> On 23.06.22 10:47, Jan Beulich wrote:
>> On 23.06.2022 10:06, Juergen Gross wrote:
>>> On 23.06.22 09:55, Jan Beulich wrote:
>>>> On 22.06.2022 18:06, Juergen Gross wrote:
>>>>> A Linux kernel 5.19 can only be loaded as dom0, if it has been
>>>>> built with CONFIG_AMD_MEM_ENCRYPT enabled. This is due to the
>>>>> fact that otherwise the (relevant) last section of the built
>>>>> kernel has the NOLOAD flag set (it is still marked with
>>>>> SHF_ALLOC).
>>>>>
>>>>> I think at least the hypervisor needs to be changed to support
>>>>> this layout. Otherwise it will put the initial page tables for
>>>>> dom0 at the same position as this last section, leading to
>>>>> early crashes.
>>>>
>>>> Isn't Xen using the bzImage header there, rather than any ELF
>>>> one? In which case it would matter how the NOLOAD section is
>>>
>>> For a PV kernel? No, I don't think so.
>>
>> Actually it's a mix (and the same for PV and PVH) - the bzImage
>> header is parsed to get at the embedded ELF header. XenoLinux was
>> what would/could be loaded as plain ELF.
>>
>>>> actually represented in that header. Can you provide a dump (or
>>>> binary representation) of both headers?
>>>
>>> Program Header:
>>>       LOAD off    0x0000000000200000 vaddr 0xffffffff81000000 paddr
>>> 0x0000000001000000 align 2**21
>>>            filesz 0x000000000145e114 memsz 0x000000000145e114 flags r-x
>>>       LOAD off    0x0000000001800000 vaddr 0xffffffff82600000 paddr
>>> 0x0000000002600000 align 2**21
>>>            filesz 0x00000000006b7000 memsz 0x00000000006b7000 flags rw-
>>>       LOAD off    0x0000000002000000 vaddr 0x0000000000000000 paddr
>>> 0x0000000002cb7000 align 2**21
>>>            filesz 0x00000000000312a8 memsz 0x00000000000312a8 flags rw-
>>>       LOAD off    0x00000000020e9000 vaddr 0xffffffff82ce9000 paddr
>>> 0x0000000002ce9000 align 2**21
>>>            filesz 0x00000000001fd000 memsz 0x0000000000317000 flags rwx
>>
>> 20e9000 + 317000 = 2400000
>>
>>>       NOTE off    0x000000000165df10 vaddr 0xffffffff8245df10 paddr
>>> 0x000000000245df10 align 2**2
>>>            filesz 0x0000000000000204 memsz 0x0000000000000204 flags ---
>>>
>>>
>>> Sections:
>>> Idx Name          Size      VMA               LMA               File off  Algn
>>> ...
>>>    30 .smp_locks    00009000  ffffffff82edc000  0000000002edc000  022dc000  2**2
>>>                     CONTENTS, ALLOC, LOAD, READONLY, DATA
>>>    31 .data_nosave  00001000  ffffffff82ee5000  0000000002ee5000  022e5000  2**2
>>>                     CONTENTS, ALLOC, LOAD, DATA
>>>    32 .bss          0011a000  ffffffff82ee6000  0000000002ee6000  022e6000  2**12
>>>                     ALLOC
>>
>> 2ee6000 + 11a000 = 3000000
>>
>>>    33 .brk          00026000  ffffffff83000000  ffffffff83000000  00000000  2**0
>>>                     ALLOC
>>
>> This space isn't covered by any program header. Which in turn may be a
>> result of its LMA matching its VMA, unlike for all other sections.
>> Looks like a linker script or linker issue to me: While ...
>>
>>> And the related linker script part:
>>>
>>>           __end_of_kernel_reserve = .;
>>>
>>>           . = ALIGN(PAGE_SIZE);
>>>           .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
>>
>> ... this AT() looks correct to me, I'm uncertain of the use of NOLOAD.
>> Note that .bss doesn't have NOLOAD, matching the vast majority of the
>> linker scripts ld itself has.
> 
> Yeah, but the filesz and memsz values of the .bss related program header
> differ a lot (basically by the .bss size plus some alignment),

That's the very nature of .bss - no data to be loaded from the file.
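To illustrate the point (a minimal sketch, not Xen's actual loader code): a typical ELF loader copies only p_filesz bytes of a PT_LOAD segment from the file and zero-fills the rest of p_memsz, which is exactly how .bss gets memory without occupying file space. The `load_segment` helper and its `dest`/`image` pointers are hypothetical.

```c
#include <elf.h>
#include <string.h>

/*
 * Illustrative sketch (not Xen's actual loader): a PT_LOAD entry with
 * p_memsz > p_filesz has the file-backed part copied and the remainder
 * (.bss) zero-filled.  'dest' points into (hypothetical) guest memory,
 * 'image' at the start of the kernel image in memory.
 */
static void load_segment(unsigned char *dest, const unsigned char *image,
                         const Elf64_Phdr *ph)
{
    memcpy(dest, image + ph->p_offset, ph->p_filesz);   /* file-backed part */
    memset(dest + ph->p_filesz, 0,
           ph->p_memsz - ph->p_filesz);                 /* .bss: zero fill */
}
```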

> and the
> .bss section flags clearly say that its attributes match those of .brk.
> 
> I'm not sure why the linker wouldn't add .brk to the same program
> header entry as .bss, but maybe that is some .bss special handling.

I don't know either, but I suspect this to be an effect of using NOLOAD
(without meaning to decide yet whether it's a wrong use of the
attribute or bad handling of it in ld).

> In the end I think this might be a linker issue, but even in this case
> we should really consider handling it, as otherwise we'd just say
> "hey, due to a linker problem we don't support Linux 5.19 in PV mode".
> 
> In the end we can't control which linker versions are used to link
> the kernel.

Right, but the workaround for such a linker issue (if any) would better
live in Linux 5.19.
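For illustration only (a minimal sketch under assumptions, not the actual hypervisor code): if a loader derives the image end purely from the PT_LOAD entries, a SHF_ALLOC section like .brk that no program header covers lies beyond that end, so anything placed right after it (such as dom0's initial page tables) overlaps the section. The function name is hypothetical.

```c
#include <elf.h>
#include <stddef.h>

/*
 * Minimal sketch (not the actual Xen loader code): derive the end of
 * the loadable image from the program headers alone.  A SHF_ALLOC
 * section such as .brk that is covered by no PT_LOAD entry lies above
 * the address returned here, so data placed right after this "end"
 * would overlap that section.
 */
static Elf64_Addr image_load_end(const Elf64_Phdr *phdr, size_t nr)
{
    Elf64_Addr end = 0;

    for ( size_t i = 0; i < nr; i++ )
        if ( phdr[i].p_type == PT_LOAD &&
             phdr[i].p_paddr + phdr[i].p_memsz > end )
            end = phdr[i].p_paddr + phdr[i].p_memsz;

    return end;
}
```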

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:09:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354672.581901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipr-0000YP-H8; Thu, 23 Jun 2022 09:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354672.581901; Thu, 23 Jun 2022 09:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipr-0000YH-E5; Thu, 23 Jun 2022 09:09:11 +0000
Received: by outflank-mailman (input) for mailman id 354672;
 Thu, 23 Jun 2022 09:09:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4Ipq-0007sI-9I
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:09:10 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 229e1b09-f2d4-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 11:09:08 +0200 (CEST)
Received: from mail-mw2nam12lp2046.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 05:09:02 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5934.namprd03.prod.outlook.com (2603:10b6:a03:2d7::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:09:00 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:09:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229e1b09-f2d4-11ec-b725-ed86ccbb4733
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] console/serial: adjust default TX buffer size
Date: Thu, 23 Jun 2022 11:08:50 +0200
Message-Id: <20220623090852.29622-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hello,

The first patch moves the setting of the default TX buffer size to
Kconfig and shouldn't be controversial; the second patch increases the
buffer to 32K.

Thanks, Roger.

Roger Pau Monne (2):
  console/serial: set the default transmit buffer size in Kconfig
  console/serial: bump buffer from 16K to 32K

 xen/drivers/char/Kconfig  | 10 ++++++++++
 xen/drivers/char/serial.c |  3 ++-
 2 files changed, 12 insertions(+), 1 deletion(-)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:09:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:09:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354674.581912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipt-0000p6-SS; Thu, 23 Jun 2022 09:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354674.581912; Thu, 23 Jun 2022 09:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipt-0000ox-Ow; Thu, 23 Jun 2022 09:09:13 +0000
Received: by outflank-mailman (input) for mailman id 354674;
 Thu, 23 Jun 2022 09:09:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4Ipr-0007sI-NR
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:09:11 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 247becdd-f2d4-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 11:09:10 +0200 (CEST)
Received: from mail-mw2nam12lp2047.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 05:09:08 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5934.namprd03.prod.outlook.com (2603:10b6:a03:2d7::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:09:06 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 247becdd-f2d4-11ec-b725-ed86ccbb4733
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] console/serial: set the default transmit buffer size in Kconfig
Date: Thu, 23 Jun 2022 11:08:51 +0200
Message-Id: <20220623090852.29622-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623090852.29622-1-roger.pau@citrix.com>
References: <20220623090852.29622-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Set the default serial transmit buffer size via Kconfig, and take the
opportunity to convert the variable to read-only after init.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
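The page rounding mentioned in the Kconfig help text amounts to the following, assuming Xen's usual ROUNDUP() definition and 4K pages (both assumptions of this illustration, not part of the patch):

```c
/*
 * Illustration only: the rounding applied to CONFIG_SERIAL_TX_BUFSIZE,
 * assuming Xen's usual ROUNDUP() definition and a 4K PAGE_SIZE.
 * The default of 16384 is already page-aligned; an odd value such as
 * 20000 is rounded up to the next page boundary (20480).
 */
#define PAGE_SIZE 4096u
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))
```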
 xen/drivers/char/Kconfig  | 10 ++++++++++
 xen/drivers/char/serial.c |  3 ++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index e5f7b1d8eb..a349d55f18 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -74,3 +74,13 @@ config HAS_EHCI
 	help
 	  This selects the USB based EHCI debug port to be used as a UART. If
 	  you have an x86 based system with USB, say Y.
+
+config SERIAL_TX_BUFSIZE
+	int "Size of the transmit serial buffer"
+	default 16384
+	help
+	  Controls the default size of the transmit buffer (in bytes) used by
+	  the serial driver.  Note the value provided will be rounded up to
+	  a multiple of PAGE_SIZE.
+
+	  Default value is 16384 (16KB).
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 5ecba0af33..8d375a41e3 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -16,7 +16,8 @@
 /* Never drop characters, even if the async transmit buffer fills. */
 /* #define SERIAL_NEVER_DROP_CHARS 1 */
 
-unsigned int __read_mostly serial_txbufsz = 16384;
+unsigned int __ro_after_init serial_txbufsz = ROUNDUP(CONFIG_SERIAL_TX_BUFSIZE,
+                                                      PAGE_SIZE);
 size_param("serial_tx_buffer", serial_txbufsz);
 
 #define mask_serial_rxbuf_idx(_i) ((_i)&(serial_rxbufsz-1))
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:09:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354675.581923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipy-0001AH-9p; Thu, 23 Jun 2022 09:09:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354675.581923; Thu, 23 Jun 2022 09:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ipy-0001A6-5i; Thu, 23 Jun 2022 09:09:18 +0000
Received: by outflank-mailman (input) for mailman id 354675;
 Thu, 23 Jun 2022 09:09:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uSRO=W6=citrix.com=prvs=166c34e93=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o4Ipw-000897-MM
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:09:16 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26a19b9e-f2d4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:09:15 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 05:09:12 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5934.namprd03.prod.outlook.com (2603:10b6:a03:2d7::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:09:10 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26a19b9e-f2d4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655975355;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=OF2CzC+bWI3f3T0/eu9SANLSdkZxiLCzkxGgWpLBBxI=;
  b=hwq81MrLzia6qQmuCxCOLZIWQjsY8w+47qVGqCMaesdZ0qOWKguAk9HX
   Ev1VeJMM7IRnqp+dK+M88w1D1uehg8jsoOsp9NwOwYWMdnW6Kz6ejqyqF
   61XIFdM1+ZliUD0DC8aQU+Ej5HnGD4V9wxgW8uooP/q1yQjv53y70jGgJ
   o=;
X-IronPort-RemoteIP: 104.47.58.100
X-IronPort-MID: 74238752
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:6I60TazxyyBFzM3BUTF6t+crxyrEfRIJ4+MujC+fZmUNrF6WrkVSy
 2AYX2qDOfqOa2Whf9x+O42x/RtSvJeGm4UwTgU/qCAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX1JZS5LwbZj2NY224LhX2thh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 Npl6oeCTyMYPrH2vr4nTCUCFCdyMoZLweqSSZS/mZT7I0zuVVLJmq0rIGRoeIoS96BwHH1E8
 uEeJHYVdBefiumqwbW9DO5xmsAkK8qtN4Qa0p1i5WiBUbB6HtaeE+OTvYcwMDQY36iiGd7EY
 MUUc3x3ZQnoaBxTIFYHTpk5mY9Eg1GgL2cD+A3I/8Lb5UCN6iFy9ITkG+beRfCvHptut2ikq
 HnZqjGR7hYycYb3JSC+2mKhgKrDkD32XKoWFaak7bh6jVuL3GsRBRYKE1yhrpGRiESzRtZeI
 Ew84Tc1oO4580nDZtvgWxy1plaUsxhaXMBfe8Uh8x2EwKfQ5wefB0AHQyRHZdhgs9U5LRQ10
 neZktWvAiZg2IB5UlqY/7aQ6D+3Zy4cKDZaYTdeFFNdpd7+vIs0kxTDCM55F7K4hcH0Hje2x
 C2WqC85hPMYistjO7iHwG0rSgmE/vDhJjPZLC2MNo55xmuVvLKYWrE=
IronPort-HdrOrdr: A9a23:Vx0PcKjQOC3+o6eAX85nC6/Lm3BQX0h13DAbv31ZSRFFG/FwyP
 rCoB1L73XJYWgqM03I+eruBEBPewK/yXcT2/hqAV7CZnichILMFu1fBOTZslnd8kHFltK1kJ
 0QCpSWa+eAcmSS8/yKhzVQeuxIqLfnzEnrv5an854Ed3AXV0gK1XYdNu/0KDwUeOEQbqBJaa
 Z0q/A37gaISDAyVICWF3MFV+/Mq5nik4/nWwcPA1oC5BOVhT2lxbbmG1zAty1uGA9n8PMHyy
 zoggb57qKsv7WSzQLd7Xba69BzlMH6wtVOKcSQgow+KynqiCyveIN9Mofy9AwdkaWK0hIHgd
 PMqxAvM4Ba7G7QRHi8pV/X1wzpwF8Vmgvf4G7dpUGmjd3yRTo8BcYEr5leaAHl500pu8w5+L
 5X3kqC3qAnQi/orWDY3ZzlRhtqnk27rT4JiugIlUFSVoMYdft4sZEfxkVIC50NdRiKpLzPKN
 MeTf002cwmMW9zNxvizypSKZ2XLzkO9y69MwY/Upf/6UkVoJh7p3FosfD30E1wsa7VcKM0lt
 gsAp4Y6o2mcfVmHZ6VfN1xJ/dfKla9Ni4kY1jiV2gOKsk8SgHwgq+yxokJz8eXX7FN5KcOuf
 36ISFlXCgJCgjTNfE=
X-IronPort-AV: E=Sophos;i="5.92,215,1650945600"; 
   d="scan'208";a="74238752"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oWJ7k2bBkOkTZTd3ye3feMu26XbxTtzJl0seZxBPAE0LPm4CA4Ma/MZ8OL+Y95vj+JPf68zYZzfXss8bL+Ch9KwyI/emSAvIbfhgPUR75Ehw/uvEVLiCU7DQHObOtoU3ZVVQ12sUygsr8/Hzft4IZ9kTZcs31bL0s3/Lb3Shv/LJmzwLoFbHi9AOW5XByeNTYmkkghm6zGqPf8/eHiy0hAx1vV9h+tTr6w3zRUhzIzFn/BG+Trjw46nHy6FDX2yrz22BfIyydnWNYogKWPjSGQiRH1k4D6od4CE6+2HdKeNqyg3ujxVKw65bHw2qQWiqmynLd0XUnS09ef51psYcbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=X2oFt1bNuAp5BzqF/NN2ssNn38HZUPDDFxLmM49o1TQ=;
 b=WwQEPX8CH5YJo9imTWpSHZSkOSu8KgOxTd7zjYri+ZTIe8m9iXVD+KQeYNHrn8hwSRnW3lFuZ8orpE/ht7CagJODoyC4/IamAEfZV8LY8AzPb+iEYOnzRXF6OBGxs67q7u2Anmmz81k2cLuF/XwjWDHf2Sm8AA02taVGPOsMDcsg6iUaKIjTB3s4cNi6TVf8kxhrzPr6OgCdwV+1vcs/urN1ipXQLCup5HQpnbr8GzUMyf1PNTFWA9RZQqgKUV9ShQwmTgak8s9LHaa/DGH0n6mhyh3MEXo62TaVAPZ9A9UOXgT6RjnU2dLyJDmXt7u24HZBZoMNztYgancKww9crQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X2oFt1bNuAp5BzqF/NN2ssNn38HZUPDDFxLmM49o1TQ=;
 b=OAl6eRHqVB7np4WUjXpx7gyT1ZEIlmJAJETClTJp/OmjLHLO/5yadDUJ3j7BlFzvZtZHeJ0pQ6Ygxed+8Qz3dydG4r/bHo3TNLI/kysQeSQtgWzJ20I8Qp+q2R9BIwuTc87c2SJQ9EJApR4zeL4TOk/ALsKAGyOX95qLU4ebE7o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Date: Thu, 23 Jun 2022 11:08:52 +0200
Message-Id: <20220623090852.29622-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220623090852.29622-1-roger.pau@citrix.com>
References: <20220623090852.29622-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0144.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c4::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e418b339-c4a3-4ce6-d492-08da54f808ff
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5934:EE_
X-Microsoft-Antispam-PRVS:
	<SJ0PR03MB5934BC3B61D9E4ED7717A7BD8FB59@SJ0PR03MB5934.namprd03.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DgALAtNb46lUg3x5ZOiFyZbCi9L6mzrVaQd6oCpsrASfGU9y0BTr/pXZ5dLzZguCJrdqBL38CivgXdaZPhoZ6BbBacYCc6/IS8nbv7e07F8OQNZUjrnkuXI0KH0fSCIQBEkLuCKdNqlYd0r7apTfxYkyvukiSG7gXPie1m+Z4FSSfhHO+Yc1FvJzeoNDJVik1qRpvG3JOveBJT8JjTPiJxXTVVkfOM86B90sng8Sr82ATCsVvVGEVl0hsmqvtAQ3ad1gocuOlEH3NsLP/bbioe32hyRYgL5aYyZ/5fTrzHVqwnW2Rsw2C1eh46ymDs7CVyzIvzbI/jzgIbl5+89RDco9fbF252q134IcwXrC7b+DcUgNeqnTuvhiWuR+v1nEuP71rzrZl5ycJnNFcWUJazEKJjbbtyuzse+ISbfAAHsjeXQDF0ZJXjM2grkxSER/6HeIdQzqZHBd+z2cVWx8ykDDtLAOPNaO3f3NLC39cUI8VfA2F1jqFc+pK3SsXkKZN2gVk6mAxIdwc789YbNHB8ztnXAiHvIzqxInVVDnXtFYtkgnStVMfleifVyriqQmjxuvWH8iwE9PMNDQCLsmmwp0+SjDO/HvGrXGXY0DjZtRJ8bNeDhSvlvCXw6SiJSpOdUzS855WHZf8/c5Hf2i+FI9yV4nhOZVJn805ox23ki9KN6aRoPNB9Nz8ru5qnXbGoRw4VIpSmvZJa+B82Jspg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(366004)(39860400002)(346002)(376002)(6506007)(1076003)(8676002)(4326008)(83380400001)(6486002)(66476007)(6512007)(54906003)(86362001)(66556008)(6916009)(2616005)(26005)(66946007)(41300700001)(6666004)(186003)(478600001)(316002)(5660300002)(38100700002)(36756003)(82960400001)(2906002)(8936002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TU16SmV4dmphK1dIQ3JjTENyY2ZUL1g3eXZjNjNESy9yc2lVeGFyZXNlL21j?=
 =?utf-8?B?Sk5GUmV5anVoOTVpK2wwYTBlV0oxaFk5TU96U3VDY2FZN3Q4SGxtZ3V4c3oz?=
 =?utf-8?B?ZWhOYVlpRjA1d3Vhck9jUEpQWFhXNnRJMmxTLytmZ1BOMG90eUdoRWd6OE93?=
 =?utf-8?B?ektJeXk5QnhFQnZkSW5RUGI5eWJTVmx4MkxXT0lIaWpzZG5Sa3kwNkwxc0pk?=
 =?utf-8?B?UXhqRFJ2bjMzT0dQWGl3V0NKdnIzVkVGSEgyVFBiYUxJQTdqNEtqeTJtRG43?=
 =?utf-8?B?UUdvNnpXZFdhc3NxQVlMK1hWcXN6TUlNT1lrR3lpKzAxdmNGc0V5bmZDVkVm?=
 =?utf-8?B?ZnlyWHgvZjNGSUlEU283NVRqbDVScXpOOER2dkF1WHQ2VDZibmxOVHVra1hj?=
 =?utf-8?B?V0ltWUN6Z1lpUGpDK2hWMnAvaHVjYUtDc0N0ZmRYMHhhT3NJbjJTMWVBcUl3?=
 =?utf-8?B?YktGVTZNT1RWamZrTUc3VHZHdnNSUzNsU3ZUSE1xLzRobm1Mb3czV2JDMUpo?=
 =?utf-8?B?K2lzRVNMOHR3akE5QndtSlk5bGgxWHVKV2l2U2MvMnJrWUZPRFNYSVdWM2J4?=
 =?utf-8?B?UEZMS2lramNBdjlBRDd2NmJmVXhjTkZMeGxJNXlYZ1RFRFFDSzNSM3hhTmdW?=
 =?utf-8?B?T0VGbTRod2RyekgzVTZoUmo4a1lzNGRRMW1CeHIwcm8vaUdxekduU2VUalQx?=
 =?utf-8?B?eWNiMUlPSzlHYk1sVlhBT1dPTGRWNjQ3eU9iMFgyM0JSMUVkL2ZsTXpZcmxr?=
 =?utf-8?B?SzNOMUlrTTFIT3RKWHNzdTZ3RnRyeFhwRUJmN2NYS3U5R3NDalBXYjJZcFJE?=
 =?utf-8?B?bmZvMUNuZjMxdWNadTFRZXhvVTFER3VCQm5zbi9NTXQ2QkdRVC9SWXpackFY?=
 =?utf-8?B?dXlmelZzRWJiVWt1WGxmY2l5Y3lua0NaMlBZZzhDQVA4cWVyWTF3ZDZLd2hx?=
 =?utf-8?B?VmFDenJPNmo3ZzZ1dnJSZkxqNDNhU0pKMG8vYXpZZnp6NGYwWGN5QU5UdHdp?=
 =?utf-8?B?cjUrZEZMUWJrZzcxYVVKRzZvcHZJekkrcmU3ajlKRFROWlZ2WVM0SnZzVlhq?=
 =?utf-8?B?R1RWNjJIcHlYcHQvY1pHZ2xCeHFseGJLKzRNU3FvaTVhbWt0QWFMOHpjL3VO?=
 =?utf-8?B?aDFTYWhRTi9EUDNrK0VwOWZkcm95REhwbkhCNnkyQ0hkSFEzZyt4Z2djWWRw?=
 =?utf-8?B?UGp5M08rMTRLclhPQUZRWWpPUmFZMWh5eWErR1BLd0tsTVRUR2pDVDdKL05x?=
 =?utf-8?B?NFBmRitySTZMRGFxbUhsTEdLNVk0cWRsMHhLTWRZeEJmdGhDbHZLZ0pBMkg1?=
 =?utf-8?B?Y2pteHZRMnMxWEpaN25IeUhXcjBpbW83V3p3bWU5Rm1XTVcydmIrMFlMRm5K?=
 =?utf-8?B?M0hjenU3TlE3VFpOcGU0YldnMjk3VmMwdlBGMjdCcjgzRTE5R3ZUczFob2w4?=
 =?utf-8?B?dmNrUE9YekdxL2NZa1k2QlVVcmJmVXNiVEN3bDdOWVNVWjBlNTBPcnMzclBz?=
 =?utf-8?B?WmYxMjFTenFrbkROdHpheWN6NnM1QVhBemE2ekpCc04yUHpkVmtOUmlpa1Zn?=
 =?utf-8?B?amtSTWdHNzBkOWtidllrRFRTenlkVHJYWURjVURCZ0w3Qm9RcVBycUNRWXRN?=
 =?utf-8?B?TnZ5YzVieHVUWWoxcXAvWHVWb2x6Rzd1dS9ZcURDaXJkSG5lMDVqOTVDZmhF?=
 =?utf-8?B?WUJKR2tyYnh6NmlyOEpaRXBnR2ptZ3czWFpqTXF2R0ZNR3BpalByYlAvaU1D?=
 =?utf-8?B?OFFLbmR1Szg1MGZmRTRiTytPYzFTVUU3bEZkcm9vQnNKV09mREtocERvMkM3?=
 =?utf-8?B?ajExMkVCcTVweTh5S2VDUmtJSTNKYzdreEZ4YmR5WE95UlhUc2hmcStyaks5?=
 =?utf-8?B?ZTVESzc3NkdBZXNxeURjeElPMis4amJDQ3laZTMydisyTkZ0b1RSSUNhci9D?=
 =?utf-8?B?ald6OWNjeDJWclhRQ1Q4Rm0zUlZ3TTlGTmM4YUVYMHhQSS9LemgxYTNraWFD?=
 =?utf-8?B?NlRQSldqQVR3UHYvNlpHM2I1OEQ1bDF4OERVT01DY21UMDJ2L1ViZXB1cHd6?=
 =?utf-8?B?M1NLQkFLaXcrWU85d0xoaHNWWDFITFB0aEtDcGEvNUp5V0tWcFYvdHBpT3VW?=
 =?utf-8?B?SlVoL1VWTkkyOExRTGMxcDhxMWdKY3pORUhBQk1uRUdKR1Q2UEtqWllkeWxj?=
 =?utf-8?B?UEE9PQ==?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e418b339-c4a3-4ce6-d492-08da54f808ff
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:09:10.7671
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hIdflgEXJUP2m6xcouL0nUgROLF2Nq1w39Mt/llJSEr1bUaPXUM93imyMXD/9Ex6LN4SZbvGgwhS9fXna64olQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5934

Testing on a Kaby Lake box with 8 CPUs shows the serial transmit
buffer filling up midway through dom0 boot, and thus a non-trivial
chunk of Linux boot messages being dropped.

Increasing the buffer to 32K fixes the issue: Linux boot messages are
no longer dropped.  There is no recorded justification for why 16K was
chosen in the first place, so bumping the default to 32K to cope with
current systems generating output faster seems appropriate, and gives
a better user experience with the provided defaults.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/char/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index a349d55f18..a8ac667ba2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -77,10 +77,10 @@ config HAS_EHCI
 
 config SERIAL_TX_BUFSIZE
 	int "Size of the transmit serial buffer"
-	default 16384
+	default 32768
 	help
 	  Controls the default size of the transmit buffer (in bytes) used by
 	  the serial driver.  Note the value provided will be rounded up to
 	  PAGE_SIZE.
 
-	  Default value is 16384 (16KB).
+	  Default value is 32768 (32KB).
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:14:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:14:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354694.581934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Iv5-0003H6-Vt; Thu, 23 Jun 2022 09:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354694.581934; Thu, 23 Jun 2022 09:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Iv5-0003Gz-Sr; Thu, 23 Jun 2022 09:14:35 +0000
Received: by outflank-mailman (input) for mailman id 354694;
 Thu, 23 Jun 2022 09:14:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4Iv4-0003Gt-DW
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:14:34 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130072.outbound.protection.outlook.com [40.107.13.72])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4f1061a-f2d4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:14:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM5PR0401MB2660.eurprd04.prod.outlook.com (2603:10a6:203:3a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:14:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:14:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4f1061a-f2d4-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Rjac1rTYS+yLGpDq4YaDTYQAGW5iXAmPjZgIY2o+DK5GLofykXr9MXZu0dIzNyasEAU2JYnowN73r7EFzzwHFLIaQANpfOVzRgsvEuNDZlhOIggt6HQ/Dd5ABSdQqE+4yVVy7LyJK/9b/6NV8WyC1LFyvu8wgSrY9U6oswpGPF/L9sjMnCnHGMBxBXTk7+IUYiU7hFSnW/RTm8ytCHL+bMiS5wYJSEasCEaBe9/krq2rdDNCa8EaRB59fIFYJI0SXHiVNDmdNo8fnCWhFyFrRAky6aR0CqdEwkPeqoHfWi6W5FzaBVXZaLO39zDqSwLH6hIHKABKb5tdiG05uijD6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5WdlzlEK2bgdG54qJThScXFFa7IsFPTAKkg7uUyQ+Ew=;
 b=JO8uMfeW30QnyT5dKuQvUYhs1N78PVZVEFLPO+ESGwnyCfsMkwhvO2/0pfMqwDTdIft4F1ZrtbBc598npTeFt/jsxbZl1KbOtTzZkqP2D94F3yu6kqt/V7lwt0RO/+p47JC0v9elw+sFc8RunCWn5Uhtv1Z44JMCh1XHcB+Mc5bYjxZPtpYzfgyObF4KDhsLtOMfXCT1GnHyW8SSZe/C1kTuxXAPf+pBFuODnIU34UTfpekjEFevIo5Kn27p6rDVjKqDbrtWu0HWok0RYcf9S1IL/hdVj14For/GyLp93npGTXB+3sAwc8zwgBff3SMaeZBNTSjGwobNmvOrwP2dyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5WdlzlEK2bgdG54qJThScXFFa7IsFPTAKkg7uUyQ+Ew=;
 b=qyVJ6JDvV79LHuL93cOOmEA2mLQoSPYJ2c5S1ScfbLfkCnYmU42Yzbles9Fj1x9Sg5ci8tq404j0yVtxWgOd2vaZrlT8ZRHkdIFzIJBWtq02ZzZ5DwhKzJ5QMGIHWx+vCW79qCIxEjxIelhQcdahgXLQzZM4B3kcq8d6xLlESwgGveMI2+i3nRZMBOeKoE0hdClwYgCD4wOlv+NJhvPZ+7KsI9WmGodwRFQ2ntiWrarZywxTmcJ9lv3tloy8HMrb4CN9huNQFK0uayMAd6I3e87xH8NcNBMmVIQ6ZjDTiTfSDBaQBWMniGOVQp+FByAUYMOQRTIoUh+n2anSHVc4cg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0daa40bd-db1d-759c-5d02-16634cead932@suse.com>
Date: Thu, 23 Jun 2022 11:14:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: consider alloc-only segments when loading PV dom0
 kernel
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623080208.2214-1-jgross@suse.com>
 <c5961627-1719-dd54-bbcf-c08a826ba14d@suse.com>
 <50942106-0082-e86b-8a2c-b04aaafac444@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <50942106-0082-e86b-8a2c-b04aaafac444@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0074.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8dbefa66-19c0-4f3e-78cc-08da54f8c73b
X-MS-TrafficTypeDiagnostic: AM5PR0401MB2660:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<AM5PR0401MB2660354B223377A5CE29B0EEB3B59@AM5PR0401MB2660.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Ss5kKNcWhCV2JX+wiNLnV+DrvxDwKUjJTMoj954jOHXDI3Q7e1PhggIL4Sb4LRErIo5LpwTqoISiFQGZiKnpBIDqAkwvc0i+69SnMbKm9DrVd1rfuH8uKLFq59zgciZxlDNsX2GbPlWqOdOz6Eg16e90ioM2u9cJkdNWmCNm78SfoPQ18ddT4b7SQf+IU/iy+wrqJFtZcbQPt+d68cN22x7R4TI9sesnrJjMYqzEM7XUTzNwfV40lhrJKynI+lJlnA5gCkATD4TutQ5/BUELI1fZd84XXFoIrxEMxAbuDoGJhmHg/eM8FH4K0orazMYIuqeqbNHkbJRHJAioGDItdxPBwEsLooDLhQK8HYMnNR11poHgFVMQ6ozxE3Xemu1Ttz4cShNZODiog4Nzkb2eZrUMaunizL0tJsobEHKMv3TqXS1xwjU7cpb5g37PbS9LjrUyGZoRjULy86YZgNbJZode6GlOC12UtELqdkKpgBBZ03wsEYv0kZF4yuAUChvsjeqbMc4bBjjK9af8OFeKBwybAtaSi59P/1ZZ7oH2Rpm1DjnxjAowWEnRrCyZsNX9eH+K/4mFNBe9QA/I0MEJrwrw2G2OuHcAIh/UNQJqPP/EvF8XI0SL7hdAC9xVizze4Q+OeJPZZ58cWacf/dI5MzlWghNFw+N7XXq+dITH528Ssl0dn2pR7hY/WEw/PDl+ouDUdPoRTpZNZ2Sztdzzm/bnizuT1TIfOUwlgspp6gY9A1kw+ty7eXQstnuyTEzjCPIi7C0SF2yCeJU5REuEcz6xdMD62Rgz4kYhgvOJ6MY=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(346002)(366004)(396003)(136003)(376002)(39860400002)(8936002)(66946007)(316002)(8676002)(66556008)(66476007)(4326008)(83380400001)(31686004)(5660300002)(2906002)(6636002)(6862004)(36756003)(6486002)(478600001)(54906003)(37006003)(6506007)(53546011)(86362001)(26005)(31696002)(6512007)(186003)(2616005)(38100700002)(41300700001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y1hPaVI3eDE4aTNWWjlMRTdqMWtGN2ltM3M5SERSMmhFNFhhVlk5L3MrYUZt?=
 =?utf-8?B?cmNTTUduYUhWcWtnRU52a1d2UWVnT0ZORDBuRjJHMjdKRTZjUVNCY1gzSVRL?=
 =?utf-8?B?cG1lWDZwU05HMUcwNnlVbU9NVHQ3amEzVHBSTnpDM1pUTVhqZE5kYmlKYW1I?=
 =?utf-8?B?NmlOdFNBejVrZUM4Zk84SnRZVU9ydDkxaU5YOXJqTjJVUngyZjNyN0lqNEVJ?=
 =?utf-8?B?dElkVSsyRS85QVkxVnZFUHpleUFsYlZVZFR2K1JLUmowMmNLVWp1RmkvaVpE?=
 =?utf-8?B?dlovanpSVEFXaTUyMEExS3FxcDdrVWhqdmhhcDhreGJzTndXN20xb3VHMFBY?=
 =?utf-8?B?emhUUUVvQXV6Z2VyeFRIUytraHQ0N1R4Y0hXUUcrSWVJMml4TitmWTJNMmcx?=
 =?utf-8?B?NkhqcTdGSDFPc0dmVklaWlU3QmdZWE92YVUzdDRFaVpGdEU0bXRpR2QwSDl3?=
 =?utf-8?B?K0k1RHAybDh6ZVpEcGRqbXBFMFJUQWFHRnFVV3pDT3g1WDRJQmROdGRMK3N1?=
 =?utf-8?B?bzdETEhrSjF4NDIzbUdFa1BYL3FVWi9kc3pvZmlGZTkxN3BjOE5YZGlvTnVm?=
 =?utf-8?B?b3B3NUJiY25DRTZ0eTlLcGhKaDg3L3VoZ0hQUVc4eXN5S3ZmOWEwL04zSFQ5?=
 =?utf-8?B?UG1qYnhZV0ttNkdobDlPcGFSL1c2UldwbzIxSVlZM1NtL2JtNktGUkNLamtD?=
 =?utf-8?B?MEF3aU80enBORkR1QlZkamRkL0R6RDhET1lsRThoME55SmtqYmlkS2orRnB6?=
 =?utf-8?B?Z2Q4eEhaMEdRcXRWQ200SUQycmt4bm0zY3dRdWkvMUswSDJzd2Eva0JMRncx?=
 =?utf-8?B?eG1TRkthRlByM3hUTkxLbVAvRFVFbUhlSHcrZjVxcWtxMkJiSTFIcmo4a3ZU?=
 =?utf-8?B?eHJTZ2pBMTNYV3FOOGRlbUlvMTkwU2V1a2Z3T01tRkhsYzVMN2dhT2JyTDRP?=
 =?utf-8?B?R3pCY3RYaUpldW01TzFWQVZzbFppU1Vja2lYWjFNTW11cUF4eVdST3libXJv?=
 =?utf-8?B?ckN0N2NCWm1JNDVLZkRaelNaSjBpRjlJMzRnd0F1WEpuYVBreHpkbEtsV2lX?=
 =?utf-8?B?WmV4dndIaTJNZHpFaHRUYUoxWDh0YjZNQ21FZFk0bHI5aFNqMG96bzFDbkNR?=
 =?utf-8?B?eGdjdHdWbWI1OHl2am5RaDR4eUE5VGlPczJyTVdVaGhlbGhzb2x3SUs5Vm9Y?=
 =?utf-8?B?YlRDTXg3elJoQk1QYm1LVmF4eUp4WEhwUS9kbG1TUVZJaHZJcEp6WjZLOHhF?=
 =?utf-8?B?bHBrZUFObFplUWNNWjFheUlGcUxNclFKRnVMVytIZWV0MWdFQkExZnBCQTVT?=
 =?utf-8?B?NXZXamoyR2JGVTZaMGdGR0RBQ2hyUmY2cCs0U0RRaXFVWm1XOWh6eGpCdnAz?=
 =?utf-8?B?dld5RlJIQkNOOSt6bFFKU1FxMUE5b3pOOE9adWREWEhXOE56b3VVaUVvNkxB?=
 =?utf-8?B?dnQ1Ly84TG56b0tjTWUwUG5jdWllQlBub2lRNHNVVlZpRTdwemlKajlvMXFh?=
 =?utf-8?B?OTdNVE5ZVG1hall4WFExSzl1Y0Y4WmdWYUgrVVdHZTRBV1dMdjRJYXpONFJQ?=
 =?utf-8?B?QUppc2J5Y2locTdLaTAvTUl6aERSTVhVVmtrV3JEdnd4cDkvQjBZRGFvNDdV?=
 =?utf-8?B?d2xPMTIyNnVzaWhXbWx4aGxOVmdHQ1prVmI0dkYzYmN6WXVaUlpOVFhQOUJ5?=
 =?utf-8?B?SW1Tc0VPWTRJU21KMUFXK25RdTRIRUF5ZUdWcnRHV3VPSXNSdXlrb2VWNGoy?=
 =?utf-8?B?RXhXTFBsSFFueHhLZTZwQ2gvYklkYkVnK1ptRHo0WDNIU20wOThiSmx3emZr?=
 =?utf-8?B?RjJBY3UvRkcwaWh6ZGZUb0E0V3ZOS045UTJ4WDRnVEF4cE1BeE0rRUVnSHMz?=
 =?utf-8?B?VEt2VGhVNmtHTG1HbG8yRWRWdHpNVjFRQzRlRFJlQXF2ZHEvZlRVUGh2TzNE?=
 =?utf-8?B?MWtNcGNBblU3YnZJVmtLMHptbWhHTThIVkVDSkFQanVFRFhwa2pVeFVvQkRv?=
 =?utf-8?B?RlRpajQzMnBlUXRLZEorWnhMeFZhb09rNlVIeVAxbjl0Rkt4U3lLR1FuRGlB?=
 =?utf-8?B?SzdLeHRXUmZOOEtYSS8wejNBN0twZlRnQjBVKy9QNktwV0UvemJMUHdHTXBn?=
 =?utf-8?Q?S9Vx+h2j+yekERyNOK9AqRC/6?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8dbefa66-19c0-4f3e-78cc-08da54f8c73b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:14:29.9251
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KHyGbrxXHnI5VGd8hvplYxRrNP+NKWlkL8URib1VFTkl8nUVbkO11aufJ1VaiGgprkHwXwgBn4g2dzDDzfJ7tQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0401MB2660

On 23.06.2022 11:08, Juergen Gross wrote:
> On 23.06.22 11:04, Jan Beulich wrote:
>> On 23.06.2022 10:02, Juergen Gross wrote:
>>> When loading the dom0 kernel for PV mode, the first free usable memory
>>> location after the kernel needs to take segments into account, which
>>> have only the ALLOC flag set, but are not specified to be loaded in
>>> the program headers of the ELF file.
>>>
>>> This is e.g. a problem for Linux kernels from 5.19 onwards, as those
>>> can have a final NOLOAD section at the end, which must not be used by
>>> e.g. the start_info structure or the initial page tables allocated by
>>> the hypervisor.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   xen/common/libelf/libelf-loader.c | 33 +++++++++++++++++++++++++++++++
>>>   1 file changed, 33 insertions(+)
>>>
>>> diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
>>> index 629cc0d3e6..4b0e3ced55 100644
>>> --- a/xen/common/libelf/libelf-loader.c
>>> +++ b/xen/common/libelf/libelf-loader.c
>>> @@ -467,7 +467,9 @@ do {                                                                \
>>>   void elf_parse_binary(struct elf_binary *elf)
>>>   {
>>>       ELF_HANDLE_DECL(elf_phdr) phdr;
>>> +    ELF_HANDLE_DECL(elf_shdr) shdr;
>>>       uint64_t low = -1, high = 0, paddr, memsz;
>>> +    uint64_t vlow = -1, vhigh = 0, vaddr, voff;
>>>       unsigned i, count;
>>>   
>>>       count = elf_phdr_count(elf);
>>> @@ -480,6 +482,7 @@ void elf_parse_binary(struct elf_binary *elf)
>>>           if ( !elf_phdr_is_loadable(elf, phdr) )
>>>               continue;
>>>           paddr = elf_uval(elf, phdr, p_paddr);
>>> +        vaddr = elf_uval(elf, phdr, p_vaddr);
>>>           memsz = elf_uval(elf, phdr, p_memsz);
>>>           elf_msg(elf, "ELF: phdr: paddr=%#" PRIx64 " memsz=%#" PRIx64 "\n",
>>>                   paddr, memsz);
>>> @@ -487,7 +490,37 @@ void elf_parse_binary(struct elf_binary *elf)
>>>               low = paddr;
>>>           if ( high < paddr + memsz )
>>>               high = paddr + memsz;
>>> +        if ( vlow > vaddr )
>>> +            vlow = vaddr;
>>> +        if ( vhigh < vaddr + memsz )
>>> +            vhigh = vaddr + memsz;
>>>       }
>>> +
>>> +    voff = vhigh - high;
>>> +
>>> +    count = elf_shdr_count(elf);
>>> +    for ( i = 0; i < count; i++ )
>>> +    {
>>> +        shdr = elf_shdr_by_index(elf, i);
>>> +        if ( !elf_access_ok(elf, ELF_HANDLE_PTRVAL(shdr), 1) )
>>> +            /* input has an insane section header count field */
>>> +            break;
>>> +        if ( !(elf_uval(elf, shdr, sh_flags) & SHF_ALLOC) )
>>> +            continue;
>>> +        vaddr = elf_uval(elf, shdr, sh_addr);
>>> +        memsz = elf_uval(elf, shdr, sh_size);
>>> +        if ( vlow > vaddr )
>>> +        {
>>> +            vlow = vaddr;
>>> +            low = vaddr - voff;
>>> +        }
>>> +        if ( vhigh < vaddr + memsz )
>>> +        {
>>> +            vhigh = vaddr + memsz;
>>> +            high = vaddr + memsz - voff;
>>> +        }
>>> +    }
>>
>> As said in the reply to your problem report: The set of PHDRs doesn't
>> cover all sections. For loading one should never need to resort to
>> parsing section headers - in a loadable binary it is no error if
>> there's no section table in the first place. (The title is also
> 
> The problem isn't the loading, but the memory usage after doing the
> loading. The hypervisor is placing page tables in a memory region
> the kernel has other plans with.

But part of "loading" is to determine the extent of the binary, which
is what the program headers (and only them) ought to describe. Note
also that our "loading" includes correct handling of .bss-style parts
of segments (i.e. their clearing):

static elf_errorstatus elf_load_image(struct elf_binary *elf, elf_ptrval dst, elf_ptrval src, uint64_t filesz, uint64_t memsz)
{
    elf_errorstatus rc;
    if ( filesz > ULONG_MAX || memsz > ULONG_MAX )
        return -1;
    /* We trust the dom0 kernel image completely, so we don't care
     * about overruns etc. here. */
    rc = elf_memcpy(elf->vcpu, ELF_UNSAFE_PTR(dst), ELF_UNSAFE_PTR(src),
                    filesz);
    if ( rc != 0 )
        return -1;
    rc = elf_memcpy(elf->vcpu, ELF_UNSAFE_PTR(dst + filesz), NULL,
                    memsz - filesz);
    if ( rc != 0 )
        return -1;
    return 0;
}

IOW in principle there's no need for the kernel to clear its .bss
(a 2nd time). Provided, of course, the phdrs properly describe the
entire image.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:17:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354699.581944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ixn-0003sv-ED; Thu, 23 Jun 2022 09:17:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354699.581944; Thu, 23 Jun 2022 09:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Ixn-0003so-BH; Thu, 23 Jun 2022 09:17:23 +0000
Received: by outflank-mailman (input) for mailman id 354699;
 Thu, 23 Jun 2022 09:17:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4Ixl-0003sg-Cu
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:17:21 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4898cf99-f2d5-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:17:20 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E772C1FD84;
 Thu, 23 Jun 2022 09:17:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C41A613461;
 Thu, 23 Jun 2022 09:17:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ExVYLp8vtGLGIAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4898cf99-f2d5-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655975839; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MunLjHPNnWTbeY1RX6E4zRAG5HofK+3ZOjdJc3Vj4Fo=;
	b=qsfBFn5eXgks19pMelSmtpynHTCpiYHbmCUHEWv+p+SvuDz1xxQlCO1gGMdkHihrDlJnKE
	/evSEscOMknToG9rcafQwraHn3TpflC+c9tRDAld03eOKuOVnyxOUbA7Z1+zOuq61F2PBb
	8oHDzIpwaBKpbKjkYh8AAU5ALqHSPXQ=
Message-ID: <9eba3e4a-f420-4024-fe5f-607b5b6677b4@suse.com>
Date: Thu, 23 Jun 2022 11:17:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: Problem loading linux 5.19 as PV dom0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
 <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
 <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
 <03ba839e-4249-b18b-81bf-86b98cb319b5@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <03ba839e-4249-b18b-81bf-86b98cb319b5@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Wa9i16KM0SYF2M0tZg1Q0ZvM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Wa9i16KM0SYF2M0tZg1Q0ZvM
Content-Type: multipart/mixed; boundary="------------Fxzc9x3xaB5Kh5ovmqmwFIw3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <9eba3e4a-f420-4024-fe5f-607b5b6677b4@suse.com>
Subject: Re: Problem loading linux 5.19 as PV dom0
References: <5c396832-3102-ff5b-c198-c037ee87d83f@suse.com>
 <922ee651-c211-6e46-7986-6d0f74164e57@suse.com>
 <b74d7347-113b-c608-1346-8f75f1a77cb9@suse.com>
 <53bb13f6-04ec-0ed0-2c19-9c7947654989@suse.com>
 <17124274-05e5-52c4-5505-9de9ad95db55@suse.com>
 <03ba839e-4249-b18b-81bf-86b98cb319b5@suse.com>
In-Reply-To: <03ba839e-4249-b18b-81bf-86b98cb319b5@suse.com>

--------------Fxzc9x3xaB5Kh5ovmqmwFIw3
Content-Type: multipart/mixed; boundary="------------Bm50bso9g0bAJKDrSANDU400"

--------------Bm50bso9g0bAJKDrSANDU400
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjMuMDYuMjIgMTE6MDgsIEphbiBCZXVsaWNoIHdyb3RlOg0KPiBPbiAyMy4wNi4yMDIy
IDExOjAxLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gT24gMjMuMDYuMjIgMTA6NDcsIEph
biBCZXVsaWNoIHdyb3RlOg0KPj4+IE9uIDIzLjA2LjIwMjIgMTA6MDYsIEp1ZXJnZW4gR3Jv
c3Mgd3JvdGU6DQo+Pj4+IE9uIDIzLjA2LjIyIDA5OjU1LCBKYW4gQmV1bGljaCB3cm90ZToN
Cj4+Pj4+IE9uIDIyLjA2LjIwMjIgMTg6MDYsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+Pj4+
Pj4gQSBMaW51eCBrZXJuZWwgNS4xOSBjYW4gb25seSBiZSBsb2FkZWQgYXMgZG9tMCwgaWYg
aXQgaGFzIGJlZW4NCj4+Pj4+PiBidWlsdCB3aXRoIENPTkZJR19BTURfTUVNX0VOQ1JZUFQg
ZW5hYmxlZC4gVGhpcyBpcyBkdWUgdG8gdGhlDQo+Pj4+Pj4gZmFjdCB0aGF0IG90aGVyd2lz
ZSB0aGUgKHJlbGV2YW50KSBsYXN0IHNlY3Rpb24gb2YgdGhlIGJ1aWx0DQo+Pj4+Pj4ga2Vy
bmVsIGhhcyB0aGUgTk9MT0FEIGZsYWcgc2V0IChpdCBpcyBzdGlsbCBtYXJrZWQgd2l0aA0K
Pj4+Pj4+IFNIRl9BTExPQykuDQo+Pj4+Pj4NCj4+Pj4+PiBJIHRoaW5rIGF0IGxlYXN0IHRo
ZSBoeXBlcnZpc29yIG5lZWRzIHRvIGJlIGNoYW5nZWQgdG8gc3VwcG9ydA0KPj4+Pj4+IHRo
aXMgbGF5b3V0LiBPdGhlcndpc2UgaXQgd2lsbCBwdXQgdGhlIGluaXRpYWwgcGFnZSB0YWJs
ZXMgZm9yDQo+Pj4+Pj4gZG9tMCBhdCB0aGUgc2FtZSBwb3NpdGlvbiBhcyB0aGlzIGxhc3Qg
c2VjdGlvbiwgbGVhZGluZyB0bw0KPj4+Pj4+IGVhcmx5IGNyYXNoZXMuDQo+Pj4+Pg0KPj4+
Pj4gSXNuJ3QgWGVuIHVzaW5nIHRoZSBiekltYWdlIGhlYWRlciB0aGVyZSwgcmF0aGVyIHRo
YW4gYW55IEVMRg0KPj4+Pj4gb25lPyBJbiB3aGljaCBjYXNlIGl0IHdvdWxkIG1hdHRlciBo
b3cgdGhlIE5PTE9BRCBzZWN0aW9uIGlzDQo+Pj4+DQo+Pj4+IEZvciBhIFBWIGtlcm5lbD8g
Tm8sIEkgZG9uJ3QgdGhpbmsgc28uDQo+Pj4NCj4+PiBBY3R1YWxseSBpdCdzIGEgbWl4IChh
bmQgdGhlIHNhbWUgZm9yIFBWIGFuZCBQVkgpIC0gdGhlIGJ6SW1hZ2UNCj4+PiBoZWFkZXIg
aXMgcGFyc2VkIHRvIGdldCBhdCB0aGUgZW1iZWRkZWQgRUxGIGhlYWRlci4gWGVub0xpbnV4
IHdhcw0KPj4+IHdoYXQgd291bGQvY291bGQgYmUgbG9hZGVkIGFzIHBsYWluIEVMRi4NCj4+
Pg0KPj4+Pj4gYWN0dWFsbHkgcmVwcmVzZW50ZWQgaW4gdGhhdCBoZWFkZXIuIENhbiB5b3Ug
cHJvdmlkZSBhIGR1bXAgKG9yDQo+Pj4+PiBiaW5hcnkgcmVwcmVzZW50YXRpb24pIG9mIGJv
dGggaGVhZGVycz8NCj4+Pj4NCj4+Pj4gUHJvZ3JhbSBIZWFkZXI6DQo+Pj4+ICAgICAgICBM
T0FEIG9mZiAgICAweDAwMDAwMDAwMDAyMDAwMDAgdmFkZHIgMHhmZmZmZmZmZjgxMDAwMDAw
IHBhZGRyDQo+Pj4+IDB4MDAwMDAwMDAwMTAwMDAwMCBhbGlnbiAyKioyMQ0KPj4+PiAgICAg
ICAgICAgICBmaWxlc3ogMHgwMDAwMDAwMDAxNDVlMTE0IG1lbXN6IDB4MDAwMDAwMDAwMTQ1
ZTExNCBmbGFncyByLXgNCj4+Pj4gICAgICAgIExPQUQgb2ZmICAgIDB4MDAwMDAwMDAwMTgw
MDAwMCB2YWRkciAweGZmZmZmZmZmODI2MDAwMDAgcGFkZHINCj4+Pj4gMHgwMDAwMDAwMDAy
NjAwMDAwIGFsaWduIDIqKjIxDQo+Pj4+ICAgICAgICAgICAgIGZpbGVzeiAweDAwMDAwMDAw
MDA2YjcwMDAgbWVtc3ogMHgwMDAwMDAwMDAwNmI3MDAwIGZsYWdzIHJ3LQ0KPj4+PiAgICAg
ICAgTE9BRCBvZmYgICAgMHgwMDAwMDAwMDAyMDAwMDAwIHZhZGRyIDB4MDAwMDAwMDAwMDAw
MDAwMCBwYWRkcg0KPj4+PiAweDAwMDAwMDAwMDJjYjcwMDAgYWxpZ24gMioqMjENCj4+Pj4g
ICAgICAgICAgICAgZmlsZXN6IDB4MDAwMDAwMDAwMDAzMTJhOCBtZW1zeiAweDAwMDAwMDAw
MDAwMzEyYTggZmxhZ3MgcnctDQo+Pj4+ICAgICAgICBMT0FEIG9mZiAgICAweDAwMDAwMDAw
MDIwZTkwMDAgdmFkZHIgMHhmZmZmZmZmZjgyY2U5MDAwIHBhZGRyDQo+Pj4+IDB4MDAwMDAw
MDAwMmNlOTAwMCBhbGlnbiAyKioyMQ0KPj4+PiAgICAgICAgICAgICBmaWxlc3ogMHgwMDAw
MDAwMDAwMWZkMDAwIG1lbXN6IDB4MDAwMDAwMDAwMDMxNzAwMCBmbGFncyByd3gNCj4+Pg0K
Pj4+IDIwZTkwMDAgKyAzMTcwMDAgPSAyNDAwMDANCj4+Pg0KPj4+PiAgICAgICAgTk9URSBv
ZmYgICAgMHgwMDAwMDAwMDAxNjVkZjEwIHZhZGRyIDB4ZmZmZmZmZmY4MjQ1ZGYxMCBwYWRk
cg0KPj4+PiAweDAwMDAwMDAwMDI0NWRmMTAgYWxpZ24gMioqMg0KPj4+PiAgICAgICAgICAg
ICBmaWxlc3ogMHgwMDAwMDAwMDAwMDAwMjA0IG1lbXN6IDB4MDAwMDAwMDAwMDAwMDIwNCBm
bGFncyAtLS0NCj4+Pj4NCj4+Pj4NCj4+Pj4gU2VjdGlvbnM6DQo+Pj4+IElkeCBOYW1lICAg
ICAgICAgIFNpemUgICAgICBWTUEgICAgICAgICAgICAgICBMTUEgICAgICAgICAgICAgICBG
aWxlIG9mZiAgQWxnbg0KPj4+PiAuLi4NCj4+Pj4gICAgIDMwIC5zbXBfbG9ja3MgICAgMDAw
MDkwMDAgIGZmZmZmZmZmODJlZGMwMDAgIDAwMDAwMDAwMDJlZGMwMDAgIDAyMmRjMDAwICAy
KioyDQo+Pj4+ICAgICAgICAgICAgICAgICAgICAgIENPTlRFTlRTLCBBTExPQywgTE9BRCwg
UkVBRE9OTFksIERBVEENCj4+Pj4gICAgIDMxIC5kYXRhX25vc2F2ZSAgMDAwMDEwMDAgIGZm
ZmZmZmZmODJlZTUwMDAgIDAwMDAwMDAwMDJlZTUwMDAgIDAyMmU1MDAwICAyKioyDQo+Pj4+
ICAgICAgICAgICAgICAgICAgICAgIENPTlRFTlRTLCBBTExPQywgTE9BRCwgREFUQQ0KPj4+
PiAgICAgMzIgLmJzcyAgICAgICAgICAwMDExYTAwMCAgZmZmZmZmZmY4MmVlNjAwMCAgMDAw
MDAwMDAwMmVlNjAwMCAgMDIyZTYwMDAgIDIqKjEyDQo+Pj4+ICAgICAgICAgICAgICAgICAg
ICAgIEFMTE9DDQo+Pj4NCj4+PiAyZWU2MDAwICsgMTFhMDAwID0gMjQwMDAwDQo+Pj4NCj4+
Pj4gICAgIDMzIC5icmsgICAgICAgICAgMDAwMjYwMDAgIGZmZmZmZmZmODMwMDAwMDAgIGZm
ZmZmZmZmODMwMDAwMDAgIDAwMDAwMDAwICAyKiowDQo+Pj4+ICAgICAgICAgICAgICAgICAg
ICAgIEFMTE9DDQo+Pj4NCj4+PiBUaGlzIHNwYWNlIGlzbid0IGNvdmVyZWQgYnkgYW55IHBy
b2dyYW0gaGVhZGVyLiBXaGljaCBpbiB0dXJuIG1heSBiZSBhDQo+Pj4gcmVzdWx0IG9mIGl0
cyBMTUEgbWF0Y2hpbmcgaXRzIFZNQSwgdW5saWtlIGZvciBhbGwgb3RoZXIgc2VjdGlvbnMu
DQo+Pj4gTG9va3MgbGlrZSBhIGxpbmtlciBzY3JpcHQgb3IgbGlua2VyIGlzc3VlIHRvIG1l
OiBXaGlsZSAuLi4NCj4+Pg0KPj4+PiBBbmQgdGhlIHJlbGF0ZWQgbGlua2VyIHNjcmlwdCBw
YXJ0Og0KPj4+Pg0KPj4+PiAgICAgICAgICAgIF9fZW5kX29mX2tlcm5lbF9yZXNlcnZlID0g
LjsNCj4+Pj4NCj4+Pj4gICAgICAgICAgICAuID0gQUxJR04oUEFHRV9TSVpFKTsNCj4+Pj4g
ICAgICAgICAgICAuYnJrIChOT0xPQUQpIDogQVQoQUREUiguYnJrKSAtIExPQURfT0ZGU0VU
KSB7DQo+Pj4NCj4+PiAuLi4gdGhpcyBBVCgpIGxvb2tzIGNvcnJlY3QgdG8gbWUsIEknbSB1
bmNlcnRhaW4gb2YgdGhlIHVzZSBvZiBOT0xPQUQuDQo+Pj4gTm90ZSB0aGF0IC5ic3MgZG9l
c24ndCBoYXZlIE5PTE9BRCwgbWF0Y2hpbmcgdGhlIHZhc3QgbWFqb3JpdHkgb2YgdGhlDQo+
Pj4gbGlua2VyIHNjcmlwdHMgbGQgaXRzZWxmIGhhcy4NCj4+DQo+PiBZZWFoLCBidXQgdGhl
IGZpbGVzeiBhbmQgbWVtc3ogdmFsdWVzIG9mIHRoZSAuYnNzIHJlbGF0ZWQgcHJvZ3JhbSBo
ZWFkZXINCj4+IGRpZmZlciBhIGxvdCAoYmFzaWNhbGx5IGJ5IHRoZSAuYnNzIHNpemUgcGx1
cyBzb21lIGFsaWdubWVudCksDQo+IA0KPiBUaGF0J3MgdGhlIHZlcnkgbmF0dXJlIG9mIC5i
c3MgLSBubyBkYXRhIHRvIGJlIGxvYWRlZCBmcm9tIHRoZSBmaWxlLg0KPiANCj4+IGFuZCB0
aGUNCj4+IC5ic3Mgc2VjdGlvbiBmbGFncyBjbGVhcmx5IHNheSB0aGF0IGl0cyBhdHRyaWJ1
dGVzIG1hdGNoIHRob3NlIG9mIC5icmsuDQo+Pg0KPj4gSSdtIG5vdCBzdXJlIHdoeSB0aGUg
bGlua2VyIHdvdWxkbid0IGFkZCAuYnJrIHRvIHRoZSBzYW1lIHBncm9ncmFtDQo+PiBoZWFk
ZXIgZW50cnkgYXMgLmJzcywgYnV0IG1heWJlIHRoYXQgaXMgc29tZSAuYnNzIHNwZWNpYWwg
aGFuZGxpbmcuDQo+IA0KPiBJIGRvbid0IGtub3cgZWl0aGVyLCBidXQgSSBzdXNwZWN0IHRo
aXMgdG8gYmUgYW4gZWZmZWN0IG9mIHVzaW5nIE5PTE9BRA0KPiAod2l0aG91dCBtZWFuaW5n
IHRvIGRlY2lkZSB5ZXQgd2hldGhlciBpdCdzIGEgd3JvbmcgdXNlIG9mIHRoZQ0KPiBhdHRy
aWJ1dGUgb3IgYmFkIGhhbmRsaW5nIG9mIGl0IGluIGxkKS4NCg0KSSBqdXN0IGRpZCBhIHRl
c3Q6IGRyb3BwaW5nIHRoZSAiKE5PTE9BRCkiIGZvciAuYnJrIGluIHRoZSBsaW5rZXIgc2Ny
aXB0DQp3aWxsIHJlc3VsdCBpbiB0aGUgLmJyayBzZWN0aW9uIHRvIGJlIGluY2x1ZGVkIGlu
IHRoZSBzYW1lIHByb2dyYW0gaGVhZGVyDQphcyB0aGUgLmJzcyBzZWN0aW9uLg0KDQpJJ2xs
IHN0YXJ0IGFuIHVwc3RyZWFtIGRpc2N1c3Npb24gdG8gZHJvcCBpdCAoSSBjb3VsZCBpbWFn
aW5lIHByb2JsZW1zDQplLmcuIHdoZW4gdXNpbmcgZ3J1YiwgYXMgZ3J1YiBtaWdodCBwbGFj
ZSB0aGUgaW5pdHJkIG92ZXJsYXBwaW5nIHRoZSAuYnJrDQpzZWN0aW9uIHdoZW4gbm90IHVz
aW5nIGEgY29tcHJlc3NlZCBrZXJuZWwpLg0KDQoNCkp1ZXJnZW4NCg==
--------------Bm50bso9g0bAJKDrSANDU400
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Bm50bso9g0bAJKDrSANDU400--

--------------Fxzc9x3xaB5Kh5ovmqmwFIw3--

--------------Wa9i16KM0SYF2M0tZg1Q0ZvM
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0L58FAwAAAAAACgkQsN6d1ii/Ey+8
VAf/e0JzVYOJiY8a/KROmbwbVn81vi9tyVvYGpkOp0wVE1/jS8T6sa9qwvRKFV4RhsTllh7kwhl/
T+Kt4W5p87CP+F78eC8WyYgxd1BW+bgLBZx2166E1wQtPACBP2aIZgHs6elodL39khMcrJKcFqEr
vu/dAlDIcHdqQG3cjHR3PhiPl30H4O+B3AsXDaR2zepARzL0jZ7abOtZ9gyr5eKWnyJ5zVAu6DWF
DyXRD3x98Qa1m9dFjCekekA0jDr5wKusGQbge/AKClEqLaus2mhlJY2HiWyDYhp441nKm40tq0cG
j/N/xDpjPZZtdnDQTcK5YLvld4pLIB68jB66VXbMUw==
=KV39
-----END PGP SIGNATURE-----

--------------Wa9i16KM0SYF2M0tZg1Q0ZvM--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:29:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:29:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354710.581956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4J9d-0005SL-NV; Thu, 23 Jun 2022 09:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354710.581956; Thu, 23 Jun 2022 09:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4J9d-0005SE-Jw; Thu, 23 Jun 2022 09:29:37 +0000
Received: by outflank-mailman (input) for mailman id 354710;
 Thu, 23 Jun 2022 09:29:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4J9c-0005S6-L6
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:29:36 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2082.outbound.protection.outlook.com [40.107.22.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe6c8a00-f2d6-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:29:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR0402MB3394.eurprd04.prod.outlook.com (2603:10a6:208:24::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:29:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe6c8a00-f2d6-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ie6WJvauoc0OK47WD/d4tPddpzF+1IDarkUVXnnCRJgkgp5YTzPaMGa2KrPlG1eGKLeIxO4L+JV1QIODRnIENvFWqffk73pSL49csG2AwDDjUC0VKAJdf98CFkmzKrfna+kQerNH4PxC3lBKpa8DsV63IjaqboJbziwDx5liv+Va7JcUAvpZQPuHdc3NHN8w6JARw0M0ClZHbdXEjXo3inrR6hJX54RzA61lW+86FhS2eGFJNnKkcy4EUzz1NgwWl2MIUuG5wS82xGDTmep5I1gj2XmZ3lD6sq7d5YvzLvfLJ34UxUjm48XMzKTXSKYlJ56Dr2ha0L9pB43izzPyEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=p7Zpaic8PdMekytrLer6to1o4bb49ZM8wIG3WKMQn9Q=;
 b=Pn2hvZvRvC5QQQrtlcAT3fgNqjoCdBHQDzTLkLTTuf6h9Wth6DL4wce2lqtn5umm2y1h2BXXC4DxZdjKRa7corYOZ197b3t0PCUP3Dt0AV/r2FRiYzJaj51tAf85fhifuJGYRwSczquxgm8+mMGoht1+rV45njgODKZzdNeh/NoOuH2e6KF1oj3x/ean2w/ii814hExXCIzr6PbjYDw1cwVPolk8mbFWx1N+UUygxRdB5294atsu+Q4/sCnEY21B2qg4MDYDn2yRszwsZ3sa7UF9b3g6SmBRjTcy9I9F+XJOTdDLBx40V59aFXbz4ibo2ihQAWH166TEve30mD6hNQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p7Zpaic8PdMekytrLer6to1o4bb49ZM8wIG3WKMQn9Q=;
 b=xRKDlJ8qni5SbdkM8ufftiCgvxL9pOGdxbC3SXlt+57xZoWDsH6/eIrm4u24hh78PocOdK+p2kYz4xPJRduHyPWYgaxObdAyhedngt0rZBmcwUPKSLQwVYd9OXI7m7auT2TMNbg0/VOSGpfPOFaqfiGJd/kiXCuvAkz3Hye3AbkkH+kly+VxTV0qW2IqVdRZhcrz0s6MjAu9vVziPh4EAcwNFL+8yE5mjBq+9M68NT4qtiK0nV4bqAiTyl42+d9qr7nz+c7DBunXb6BADFWIqcfU7X7KiIgY6QAETOvmn+mQS3/V8ZqIIdq7afw5jsK8Wd5p4tHui5GeRw07DonJ7A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8322303f-021a-b520-d2ad-cf8310573df5@suse.com>
Date: Thu, 23 Jun 2022 11:29:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v1 01/10] drivers/char: Add support for Xue USB3 debugger
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Connor Davis <davisc@ainfosec.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.5d286dc6304969ed7155051e900236947c1b14dc.1654612169.git-series.marmarek@invisiblethingslab.com>
 <87c73737fe8ec6d9fe31c844b72b6c979b90c25d.1654612169.git-series.marmarek@invisiblethingslab.com>
 <9c7c11f5-be1e-f0ef-0659-48026675ec1a@suse.com> <YrM5g3dLRJHTIVYt@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YrM5g3dLRJHTIVYt@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6P192CA0047.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:209:82::24) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: efb2414b-be11-4513-c360-08da54fae1c5
X-MS-TrafficTypeDiagnostic: AM0PR0402MB3394:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR0402MB33944B80F2D25B083E4784B5B3B59@AM0PR0402MB3394.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4Ltf7tlsgKjwyibg/hVU1bFjq2MZs2HLJY5ceI6PO6uMtpZNtdCTU5n1doPAZ100WURqY83GkuB7OwFy2aL0oXD3ZrpolljcooPbI35mifc8kPTXi9Yt+fR4Gcz0ndA5HrxUIOT6DJVe8uckp95aB/6uV8TSeGsujhflNcnMdqVy3TxsmLGeDQw6kpSkMkeaORJEoERervNttRtwjUaDALt7Ud6m39tagA2C6ggcDY8CSU4uREU8EoKmrCZE5TS/0HIKrdIEBgFvis4lr0c9I9ZN9yWHi3FNsX8uhNHXqYydCwSCQV5zytC3UpwntXAqnveWLVyu5zPG7qaEc9q9vhyiYMBSrYuhlTETWCR6fuoQal6hMJuRWsPaYaDY2Zbs8AgtpU5+PfxuQOIjPAUYFjiUuvHmiLVIvyaNlGzbBjcLdQxyi0KMcFVqRgqHSMVTOuZODxLl2jH7cGCkgFVTdd9tg/Ars0JoURitGtggelVeWJf9f2M8NtZJqcqf/gl19c1wsuNq49lTErq8631S3CAhyxb3tTFGIrefvcqk4WaVXgSg0Qw32+rwi/Zqw4JHNEMpkkrb9YA7QdpmAitLOf1P0bmFx8RuRrBnkfBdkP82lUhYYo2xHWA2ovmbA7X8c8ch98UeMVWy3oEh+UUFcYTu4J+6JlXZCKZy9QV4oV2ojddk/BeedaqU4fcvHbPOFIyuoaaSd+/SwQu/VM0VswlnxS/esCUrLvmkWQtsP2THJJcbAWdfyuHbCMRL8k59kFeOZcD1YtZ3nlSXWjaSaNIJxctVj+fYUkMiGeAb2LI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(136003)(39860400002)(366004)(346002)(376002)(396003)(6506007)(83380400001)(53546011)(26005)(6512007)(41300700001)(5660300002)(8936002)(6486002)(478600001)(2906002)(7416002)(186003)(66946007)(86362001)(2616005)(38100700002)(66574015)(6916009)(31696002)(31686004)(4326008)(8676002)(316002)(54906003)(36756003)(66476007)(66556008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UGlicmFSSmRLZmRGdVRqUy9udmNMdi84UFlpUEtMTTlSVzA5SWthYXFrSWF4?=
 =?utf-8?B?bnd2aW1VVDhLcVRiRmdZMGVCWXVrOENXVmFvYU5VLzBwd1ZjMytSek9TSUsr?=
 =?utf-8?B?TFg1SGhaak14WWNQb25BeWFLVi9mK1M3cVkzQmpwWWJodlg5aTZMU3ZwZlZB?=
 =?utf-8?B?eWt2N0RjdFl4WDdkLy9aWFFEZjJXbW4vdnJUcUJuVXFWYXNnYjExdWNYUUhD?=
 =?utf-8?B?RzVNRzBzc1ZaV0ViUVVHb3ZoSlNEd0ZGUDRxVExNcWQwdVVuYzQ5VlYwNWZx?=
 =?utf-8?B?TTVSSHB3QUxMNS9rQi9ieXhGRnB2dEMwTlNhcitNVSsyVDFjYXVQdUlsaFhn?=
 =?utf-8?B?dFVZN1pkZXc2V3pVeEczUWpJaVFOc0ZPYUhOU1A3VTczWFdQSnFxa0xrUTNs?=
 =?utf-8?B?bDZia3c3WE5lb3MydE5rQ1NCcm12UjEvdzBHOUFKN2NKc2w5dTFlbS9VYTg0?=
 =?utf-8?B?YlE0WXcrc0tXVmRMc3RPSXM3bjFLQmF5ZFkyK2ZYc3dyejkyRUVEZWJzcDBw?=
 =?utf-8?B?RjFTb3pMM3VUd2hIemJhbm9lTzF0MElUQUtmZnI2UUJVZElSMDlmNmc5MDZq?=
 =?utf-8?B?V3BYVm83aVpoYkw3dGp4a3lqNDh4cXAzQUdTemtwcno3Q2tnUVlzNmYyVldQ?=
 =?utf-8?B?dlkrZVh0OWhTL1dpT1lYbDFhWUxXL2V4Vk1pUitEUlY2MEJrTXZBYnBhOS9M?=
 =?utf-8?B?TGRxc3pDS2M0UHRnNERKWHA3aG1kWit0MjdEeTRldnovbGpCWlJiTkdtR25O?=
 =?utf-8?B?QmVVNGhITTd5MTVjcmV5eXMwUmJCczZZcERpQjNqNDIyTUdDQ0p0TCt1bFBJ?=
 =?utf-8?B?cVRVNnVPMk9xUk5ZTU5XU0lnOEVXSUttbVluS2R6WFlxb1VXbHVpMmYyWUZO?=
 =?utf-8?B?elBiYXlXeUZ0N3JEcGZabWRqNW1lMEZqeHltekRyZWN6NC9JNW9NWkl1Vm9r?=
 =?utf-8?B?UWRWOWVoaW1NQ3lXQWhLQzM4Q0NqVEFwakp3dGU5Smk0SUp5T2JEdkMvcmZo?=
 =?utf-8?B?NVp6WVljaUhsQ1VpaDJqSnM1d0NscDdXZ1Nkei9vY09PSWRPeHVWbVV5dU9l?=
 =?utf-8?B?R0xtNFVlM2xuV2ltRUsyRkM2Y2R2TFZIQWpYdUxLYTFYNlVRNFpuR1cwV0Yr?=
 =?utf-8?B?UVY2aGQ4b3pxZ0VjRlVRSDVUZG1YZktVc0R3VFpFeXhINXlTUFN4bkROMmoz?=
 =?utf-8?B?ajd0elIvV2QyY3pDNXlMbTdIQXFrN25HejNEOG84dG8xUkNJT3NEVmcwaVRH?=
 =?utf-8?B?Tk9HNmx6Q1JZL1RCSU9PeGZ3WnU5bm9GQ2F6eUlVRVVUQjF3ZldGaFM1ckN0?=
 =?utf-8?B?bXVmTXBTWUZCWDNtaXBENWZuMi9ERllHK0NxTk1BYTlpV2JqeGw4ZWVsMlNU?=
 =?utf-8?B?T1VzUEJya0tmVWVKL0N2TWYxd2poQ1Fxb1NlbG9XWG0rSFRjZVJDb3I1R2Zk?=
 =?utf-8?B?L3U5VGJjaUJTQVdPSU1iWnFibDFMM2YxVTNkNERYWkxNQ3AzSjhnWEt2Nk9N?=
 =?utf-8?B?eldGbG5UbUIzcEpzOW5GbTBiRVhBbUx5bzhIUW41cURBYUI4T0ZCbElNMGE2?=
 =?utf-8?B?d1MxQVpwOGdYRGZIeThXL1hjRk12NnVMQkxtbWtUdTlKNE96NUlDUXpleVFL?=
 =?utf-8?B?SVRxTVU5Z2VyMUFhZTRRUEpRaDVIOGdNellqb3FLVWlpU0xkV3NNN3hCbE5k?=
 =?utf-8?B?eDBEYVFlMHFyeUMyS1lpazNFOTF0QnYwYnpzL2F0Sy80OUlMZFQwS1IxcUUr?=
 =?utf-8?B?Q3dNbit4a0xjb0xyTVlOLzNpTllGZkVOZ29acGliQ3E5UUEvc1FyMzVvaGFx?=
 =?utf-8?B?dEc2akdxOUtCNWVKakxZUzNmNmtJMkp1YzBicmkrSUNnYThYMFJsZjFranVE?=
 =?utf-8?B?YTEvenN6WjF6UXlJUWlBNzN5cGNOTW1lcTFuZjlwN3haeEV0UzRXd2lqYlhO?=
 =?utf-8?B?RW5ybzdyQUcwajQ3N0RENTVJSk5ITlI5elhzS0JaUnpYRlNwTVB4T1dYclJJ?=
 =?utf-8?B?R2o0dm53OWlpTUtaMHJCWHNxUTJabXZEeTFqV3RWN1Y0dk1BVGYzSWVnNGxx?=
 =?utf-8?B?eFFQdytSL2JreXB1UitESzhsVFJwY3NUTlpvYlRDVHdoYmtOcVZqeXZFdm56?=
 =?utf-8?Q?fDFda8aX7mlhP2kWS0JEzN+0v?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: efb2414b-be11-4513-c360-08da54fae1c5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:29:33.4456
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: F+huVZ9XKjCkkYr+vBz9Hnv1FnYogXkjpXYWUumw46vIgVHKbDHhGrs9ztcBTMjEkZXZwAUk+jPVkMUhfJPrVA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0402MB3394

On 22.06.2022 17:47, Marek Marczykowski-Górecki wrote:
> On Wed, Jun 15, 2022 at 04:25:54PM +0200, Jan Beulich wrote:
>> On 07.06.2022 16:30, Marek Marczykowski-Górecki wrote:
>>> +    /* ...we found it, so parse the BAR and map the registers */
>>> +    bar0 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_0);
>>> +    bar1 = pci_conf_read32(xue->sbdf, PCI_BASE_ADDRESS_1);
>>
>> What if there are multiple?
> 
> You mean two 32-bit BARs? I check for that below (refusing to use them).
> Anyway, I don't think that's a thing in USB 3.0 controllers.

No, I mean multiple controllers. When making the remark I didn't know
yet that you'd deal with that in patch 3. A sentence making the
restriction (and its intended resolution) explicit in the description
would help.

>>> +    memset(xue, 0, sizeof(*xue));
>>> +
>>> +    xue->dbc_ctx = &ctx;
>>> +    xue->dbc_erst = &erst;
>>> +    xue->dbc_ering.trb = evt_trb;
>>> +    xue->dbc_oring.trb = out_trb;
>>> +    xue->dbc_iring.trb = in_trb;
>>> +    xue->dbc_owork.buf = wrk_buf;
>>> +    xue->dbc_str = str_buf;
>>
>> Especially the page-sized entities want allocating dynamically here, as
>> they won't be needed without the command line option requesting the use
>> of this driver.
> 
> Are you okay with changing this only in patch 9, where I restructure those
> buffers anyway?

I'm afraid I'll need to make it to patch 9 to answer this question. If
suitably dealt with later, I don't see a fundamental problem, as long
as it's clear then that I will request that this patch be committed in
a batch with that later one, not in isolation.

>>> +    serial_register_uart(SERHND_DBGP, &xue_uart_driver, &xue_uart);
>>> +}
>>> +
>>> +void xue_uart_dump(void)
>>> +{
>>> +    struct xue_uart *uart = &xue_uart;
>>> +    struct xue *xue = &uart->xue;
>>> +
>>> +    xue_dump(xue);
>>> +}
>>
>> This function looks to be unused (and lacks a declaration).
> 
> It is unused, same as xue_dump(), but is extremely useful when
> debugging. Should I put it behind something like #ifdef XUE_DEBUG,
> accompanied with a comment about its purpose?

Yes, please (or any other suitable means to make the functions
disappear from the final binary). The function here then also ought
to be static, I suppose - you're not adding a declaration anywhere
for it to be usable outside of this source file.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:36:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:36:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354717.581966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JGR-0006t7-GK; Thu, 23 Jun 2022 09:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354717.581966; Thu, 23 Jun 2022 09:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JGR-0006t0-DV; Thu, 23 Jun 2022 09:36:39 +0000
Received: by outflank-mailman (input) for mailman id 354717;
 Thu, 23 Jun 2022 09:36:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4JGP-0006sq-TH; Thu, 23 Jun 2022 09:36:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4JGP-0001WZ-Pu; Thu, 23 Jun 2022 09:36:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4JGP-0000F5-AN; Thu, 23 Jun 2022 09:36:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4JGP-00032h-9x; Thu, 23 Jun 2022 09:36:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vEZ60dCjAAfl+W8x+MjbENvCUKwO5ybpclZ4CSGtH4A=; b=goWPqFHHVy1oKdV0jPIvBeJqj/
	6sgS+i91/AFwPri5L5K8chcx6JVPskUFh2vZ51wPByJqTTSdOk77m2mNET6PNPmU68NrbEoeG1xMF
	066oaMzZdIo5olv4I5ongS6oguKzBqBOVOaez7fJbCQh3OmjaNqoqJu7/rdBGoiigsN4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171324: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4bfd668e5edb59092a8e16414b3f6632efdac4f2
X-Osstest-Versions-That:
    ovmf=f304308e1cb21846a79fc8e4aa9ffa2cb1db3e4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 09:36:37 +0000

flight 171324 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171324/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4bfd668e5edb59092a8e16414b3f6632efdac4f2
baseline version:
 ovmf                 f304308e1cb21846a79fc8e4aa9ffa2cb1db3e4c

Last test of basis   171315  2022-06-22 20:13:24 Z    0 days
Testing same since   171324  2022-06-23 06:42:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ray Ni <ray.ni@intel.com>
  Taylor Beebe <t@taylorbeebe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f304308e1c..4bfd668e5e  4bfd668e5edb59092a8e16414b3f6632efdac4f2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:46:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:46:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354728.582000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPk-0000QL-UT; Thu, 23 Jun 2022 09:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354728.582000; Thu, 23 Jun 2022 09:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPk-0000QE-R7; Thu, 23 Jun 2022 09:46:16 +0000
Received: by outflank-mailman (input) for mailman id 354728;
 Thu, 23 Jun 2022 09:46:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4JPj-0008LX-2z
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:46:15 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50b0d5a3-f2d9-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:46:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0B5AB21D14;
 Thu, 23 Jun 2022 09:46:12 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BE492133A6;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EFUOLWM2tGLmLwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:46:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50b0d5a3-f2d9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655977572; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bYI7V/dmnXpWZM2zlRb/pc+ci8hD5Th5E8mJSClVr9Q=;
	b=ibWYLSIpJW3cxc8VSyWjDZrMrjlU3h6E5ttl0nic13fOU+GJYulTpVShiW7JRjZs1Cdb/g
	3Sm4LdhKI2YpIuuRVieTgokm3PWXoIahWFwWWt60PzWC0PDY9EPCX1U7ReIJOg3y7Oi81V
	XzYONG1YZZnLXEbgbiBqj/5espOq4aw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 3/3] x86: fix .brk attribute in linker script
Date: Thu, 23 Jun 2022 11:46:08 +0200
Message-Id: <20220623094608.7294-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220623094608.7294-1-jgross@suse.com>
References: <20220623094608.7294-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
added the "NOLOAD" attribute to the .brk section as a "failsafe"
measure.

Unfortunately this leads to the linker no longer covering the .brk
section in a program header, resulting in the kernel loader not knowing
that the memory for the .brk section must be reserved.

This has led to crashes when loading the kernel as a PV dom0 under Xen,
but other scenarios could hit the same problem (e.g. if an
uncompressed kernel is used and the initrd is placed directly behind
it).

So drop the "NOLOAD" attribute. The resulting ELF file has been verified
to again cover the .brk section with a program header.
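
As a minimal sketch of the loader behavior at stake (illustrative only,
using glibc's <elf.h> types; these helpers do not exist in the kernel):
a loader reserves p_memsz bytes for every PT_LOAD segment and zeroes the
tail beyond p_filesz, but only for memory that some program header
covers -- which the NOLOAD .brk section's memory no longer was:

```c
#include <assert.h>
#include <elf.h>
#include <stdint.h>

/* Illustrative sketch, not kernel code: how much memory a loader must
 * reserve for a segment, and how much of its tail it must zero.  This is
 * how .bss -- and, with this fix, .brk -- gets zeroed memory.  A section
 * covered by no PT_LOAD program header at all gets neither the
 * reservation nor the zeroing. */
static uint64_t bytes_to_reserve(const Elf64_Phdr *ph)
{
	return ph->p_type == PT_LOAD ? ph->p_memsz : 0;
}

static uint64_t bytes_to_zero(const Elf64_Phdr *ph)
{
	return ph->p_type == PT_LOAD ? ph->p_memsz - ph->p_filesz : 0;
}
```

A segment with p_filesz 0x1000 and p_memsz 0x3000 thus gets 0x3000 bytes
reserved and its last 0x2000 bytes zeroed by the loader.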

Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/kernel/vmlinux.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 81aba718ecd5..9487ce8c13ee 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -385,7 +385,7 @@ SECTIONS
 	__end_of_kernel_reserve = .;
 
 	. = ALIGN(PAGE_SIZE);
-	.brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
+	.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
 		__brk_base = .;
 		. += 64 * 1024;		/* 64k alignment slop space */
 		*(.bss..brk)		/* areas brk users have reserved */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:46:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:46:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354729.582006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPl-0000X7-Fc; Thu, 23 Jun 2022 09:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354729.582006; Thu, 23 Jun 2022 09:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPl-0000Up-9F; Thu, 23 Jun 2022 09:46:17 +0000
Received: by outflank-mailman (input) for mailman id 354729;
 Thu, 23 Jun 2022 09:46:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4JPk-0008LX-32
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:46:16 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 508d46e9-f2d9-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:46:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B6B731FD8B;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7291713AB2;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4A6NGmM2tGLmLwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:46:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 508d46e9-f2d9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655977571; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JeJlzEy9S9HO3UBMbEaEgVpv+aw7L6zfG0jvSaFE4ro=;
	b=XTE+RGVoZDWEOCRBYwyFcbohYpKJ0tpbYKgmRUWXHS0+VkUaHPYT85AzuVlQqwIoTHLAGh
	tgsvveadD+BFZE0w4NEj2C0G9PIi/KMBvbG23r1w+/0eAt/LTSkuq9N7QERf5nCB7vjHbS
	0Tqza47Z5amQO5OYJVc/LVLFHU2SiTw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 2/3] x86: fix setup of brk area
Date: Thu, 23 Jun 2022 11:46:07 +0200
Message-Id: <20220623094608.7294-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220623094608.7294-1-jgross@suse.com>
References: <20220623094608.7294-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
put the brk area into the .bss..brk section (placed directly behind
.bss), causing it not to be cleared initially. As the brk area is used
to allocate early page tables, any entries not explicitly written might
contain garbage.

This is especially a problem for Xen PV guests, as the hypervisor will
validate page tables (check for writable page tables and hypervisor
private bits) before accepting them to be used. There have been reports
of early crashes of PV guests due to illegal page table contents.

Fix that by letting clear_bss() clear the brk area, too.
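
The failure mode can be sketched as follows (illustrative only, not
kernel code; in this model the C runtime's zeroing of statics plays the
role that clear_bss() plays in the kernel after this fix):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of why the brk area must start out zeroed: early
 * page tables are carved out of it by a simple bump allocator, and only
 * the entries actually needed are written.  Every entry left untouched
 * keeps whatever the backing memory contained, so without prior clearing
 * the hypervisor would see garbage "present" entries when validating. */
#define PTRS_PER_TABLE 512

static uint64_t brk_area[PTRS_PER_TABLE];	/* room for one page table */
static size_t brk_next;

static uint64_t *alloc_early_pagetable(void)
{
	if (brk_next + PTRS_PER_TABLE > sizeof(brk_area) / sizeof(brk_area[0]))
		return NULL;			/* brk slop space exhausted */
	brk_next += PTRS_PER_TABLE;
	return &brk_area[brk_next - PTRS_PER_TABLE];
}
```

Writing a single entry and reading back a neighbor shows the dependency:
the neighbor reads as 0 (not present) only because the area was cleared.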

Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/head64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index e7e233209a8c..6a3cfaf6b72a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -430,6 +430,8 @@ void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }
 
 static unsigned long get_cmd_line_ptr(void)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:46:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:46:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354727.581989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPj-00009o-MN; Thu, 23 Jun 2022 09:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354727.581989; Thu, 23 Jun 2022 09:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPj-00009h-J5; Thu, 23 Jun 2022 09:46:15 +0000
Received: by outflank-mailman (input) for mailman id 354727;
 Thu, 23 Jun 2022 09:46:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4JPi-0008LX-2t
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:46:14 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50575c74-f2d9-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:46:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6F2CA21CF6;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1B7CF133A6;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id QPVlBWM2tGLmLwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:46:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50575c74-f2d9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655977571; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qOrZ3a0atU/z4XGkVjCM42/CYfuowe8zJtC3IRIryL4=;
	b=CY+aACXb8FTwdvlK2MD+8Ue/Ep+AuaiXD4J2b/dJwMFgM+Cr8HwRI0sA5M0QA78pAqdvsr
	3Wy5n6qTS4x9KZcf/S2ffLR4puT/FnYCgKNVJuftwlxmFmOANRIxWolPgpkpPWtaqQ+hho
	V/uuve6jQ8m5GFkEDaoknN9kv4J73Vk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 1/3] x86/xen: use clear_bss() for Xen PV guests
Date: Thu, 23 Jun 2022 11:46:06 +0200
Message-Id: <20220623094608.7294-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220623094608.7294-1-jgross@suse.com>
References: <20220623094608.7294-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of clearing the bss area in assembly code, use the clear_bss()
function.

This requires passing the start_info address as a parameter to
xen_start_kernel() in order to avoid xen_start_info being zeroed
again.
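
The ordering constraint can be sketched in C (illustrative only; the
names with a _model suffix are hypothetical stand-ins, not kernel code):
xen_start_info itself lives in .bss, so it may only be assigned after
the clearing, which is why the pointer travels as a function argument.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the ordering problem solved by this patch:
 * storing the start_info pointer into a .bss variable before .bss is
 * cleared would lose it, so it is handed over as a parameter and only
 * stored after the clearing. */
struct start_info_model {
	int magic;			/* stand-in for real start_info data */
};

static struct start_info_model *xen_start_info;	/* lives in "bss" */

static void clear_bss_model(void)
{
	xen_start_info = NULL;		/* models zeroing all of .bss */
}

static int start_kernel_model(struct start_info_model *si)
{
	if (!si)
		return -1;

	clear_bss_model();	/* would wipe an earlier xen_start_info store */
	xen_start_info = si;	/* safe: assigned after the clearing */

	return xen_start_info->magic;
}
```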

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 arch/x86/include/asm/setup.h |  3 +++
 arch/x86/kernel/head64.c     |  2 +-
 arch/x86/xen/enlighten_pv.c  |  8 ++++++--
 arch/x86/xen/xen-head.S      | 10 +---------
 4 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f8b9ee97a891..f37cbff7354c 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -120,6 +120,9 @@ void *extend_brk(size_t size, size_t align);
 	static char __brk_##name[size]
 
 extern void probe_roms(void);
+
+void clear_bss(void);
+
 #ifdef __i386__
 
 asmlinkage void __init i386_start_kernel(void);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index bd4a34100ed0..e7e233209a8c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -426,7 +426,7 @@ void __init do_early_exception(struct pt_regs *regs, int trapnr)
 
 /* Don't add a printk in there. printk relies on the PDA which is not initialized 
    yet. */
-static void __init clear_bss(void)
+void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c..70fb2ea85e90 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
 extern void early_xen_iret_patch(void);
 
 /* First C function to be called on Xen boot */
-asmlinkage __visible void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 {
 	struct physdev_set_iopl set_iopl;
 	unsigned long initrd_start = 0;
 	int rc;
 
-	if (!xen_start_info)
+	if (!si)
 		return;
 
+	clear_bss();
+
+	xen_start_info = si;
+
 	__text_gen_insn(&early_xen_iret_patch,
 			JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
 			JMP32_INSN_SIZE);
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 3a2cd93bf059..13af6fe453e3 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -48,15 +48,6 @@ SYM_CODE_START(startup_xen)
 	ANNOTATE_NOENDBR
 	cld
 
-	/* Clear .bss */
-	xor %eax,%eax
-	mov $__bss_start, %rdi
-	mov $__bss_stop, %rcx
-	sub %rdi, %rcx
-	shr $3, %rcx
-	rep stosq
-
-	mov %rsi, xen_start_info
 	mov initial_stack(%rip), %rsp
 
 	/* Set up %gs.
@@ -71,6 +62,7 @@ SYM_CODE_START(startup_xen)
 	cdq
 	wrmsr
 
+	mov	%rsi, %rdi
 	call xen_start_kernel
 SYM_CODE_END(startup_xen)
 	__FINIT
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:46:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:46:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354726.581978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPi-0008Lq-Ge; Thu, 23 Jun 2022 09:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354726.581978; Thu, 23 Jun 2022 09:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JPi-0008Li-Bl; Thu, 23 Jun 2022 09:46:14 +0000
Received: by outflank-mailman (input) for mailman id 354726;
 Thu, 23 Jun 2022 09:46:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4JPh-0008LX-BP
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:46:13 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 503ee856-f2d9-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 11:46:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 14A8F1FD8A;
 Thu, 23 Jun 2022 09:46:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AED21133A6;
 Thu, 23 Jun 2022 09:46:10 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id XCAtKWI2tGLmLwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 09:46:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 503ee856-f2d9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655977571; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=IIvicZRTKlJOBgOc3++ZHzJfG9P22jzec6L0MnajaIo=;
	b=HPHaGgfyCZ84QyQjmgfwZPXBbLogxmzAtQTOZqyOqAI/F39MK0TO4DKhXNYcJSj4jo+eX1
	4LEt8gm8/e7JyZ+Gah81gPbuDHtiUtkXXTogbdVwTdSRavynFnglvqajH+3ROrBcVj3ETO
	NGKDudBRBG152w3cUPz/VLCVqB1q+AE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH v2 0/3] x86: fix brk area initialization
Date: Thu, 23 Jun 2022 11:46:05 +0200
Message-Id: <20220623094608.7294-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The brk area needs to be zeroed initially, like the .bss section.
At the same time its memory should be covered by the ELF program
headers.

Juergen Gross (3):
  x86/xen: use clear_bss() for Xen PV guests
  x86: fix setup of brk area
  x86: fix .brk attribute in linker script

 arch/x86/include/asm/setup.h  |  3 +++
 arch/x86/kernel/head64.c      |  4 +++-
 arch/x86/kernel/vmlinux.lds.S |  2 +-
 arch/x86/xen/enlighten_pv.c   |  8 ++++++--
 arch/x86/xen/xen-head.S       | 10 +---------
 5 files changed, 14 insertions(+), 13 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:49:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354750.582021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JSV-0002Ik-UD; Thu, 23 Jun 2022 09:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354750.582021; Thu, 23 Jun 2022 09:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JSV-0002Ib-RT; Thu, 23 Jun 2022 09:49:07 +0000
Received: by outflank-mailman (input) for mailman id 354750;
 Thu, 23 Jun 2022 09:49:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4JST-0002IJ-Oz
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:49:06 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60088.outbound.protection.outlook.com [40.107.6.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6f6965c-f2d9-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 11:49:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB4385.eurprd04.prod.outlook.com (2603:10a6:208:74::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 09:49:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 09:49:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6f6965c-f2d9-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FI4aQrOgKc7nK9ju4k22Y3uZ06BmeY474C7YqYlxomZeMfBXKelhu9iSsAqRsRSOw2I5JFUboC21k4jkcxdEUU8m6RJI8uzPkYOurI106PkTNJuow5UXuY67jNjXq1BqbebwxFh13U9CyqAMCHjXCmcTRCZmNMyP42bhtwyQHTheb5SsOkTW6Vfn6N5dsMufxYRow9BGlio/310u8fYVUX+bUOZaazZCdseK0DNKlHmovlTk88eF5lc2Sc68Aheus7KmGuAvBnHtePe7y1f2PLWvcR9XKzroEIk4VB5vGOu+MJayL/BmCDjVIQ7TZRF2yTm3joVTxYiH5FFONHYucA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fzPfA4UDnIVpPE6hjVBB4ERwhtRK52DcWyywnHSgmxw=;
 b=OY/eoIpEflmeRhKYx4vEZLW7xa/W590MD6YY/xn5T3he4wX4IA7rEJOiFESjv9U+ey831yu2TbdqRL0nZSPwjOqhn9+4l7n0Vry17ogCNaR9UrZv1AKUd3veHMirRd21Y/4dgkvsM8ghf2u5RdercV9xBhoQStRe3WNXlJFwulQqIJ2L1FHHj9tixd1M5kw4JKGp3HkfbiGmxtP4hZ5MwmYtAaBvgbyRfyqebJnx3Jr0P0ry9LTCAApq0VD+BRNBQ4A8m4M3v/Tff45O/m5ds5o6tqkuw0YGCWQDn6Y/rFr1bMMNfulzjCZfU7c1ZHRN57GPnKLyyizTdRuSw8CcLA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fzPfA4UDnIVpPE6hjVBB4ERwhtRK52DcWyywnHSgmxw=;
 b=SncYYBOQPdU/fJKp8aYIa8CrUC9Qedf2PncNHXVI6COt1TQStyN0TC98A2NsD0jhqjZ+pkyD0/RnynkkbFDZJTfHBj/hL7Nb8hYT7/KI4DnF5T2xNggGhpOHSi2viSYqeXgHT11CF58FHWl5RItGHJrDykMG6P0jxnS0EiRWFmV9Ec8M/i7hc0sO5NIWDO8klLSZ2yrzPQAqMZIUrBnHHtlWVWcw0dsYJzw3I+KQpftMHknXrgnC/AzmnoOvsayilti1OMn2C7G6M7lPolrLJRN3LWWocHiAiReON5VzM09sICaJ8ra4h9LtVfaJfkY104Qe/0zpm2uKM9n+IeN+aQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a968598e-cacc-d762-46b8-579e18f64d12@suse.com>
Date: Thu, 23 Jun 2022 11:49:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] iommu: add preemption support to iommu_{un,}map()
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20220610083248.25800-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220610083248.25800-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS8PR04CA0074.eurprd04.prod.outlook.com
 (2603:10a6:20b:313::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5af40d5d-f98e-463e-f320-08da54fd9a8f
X-MS-TrafficTypeDiagnostic: AM0PR04MB4385:EE_
X-Microsoft-Antispam-PRVS:
	<AM0PR04MB4385CA23BC0C7DF4AA207A3EB3B59@AM0PR04MB4385.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5af40d5d-f98e-463e-f320-08da54fd9a8f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 09:49:02.4336
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WOGNAdqodP/ynBvcSWqqo3DpmDiqd15xkuTdtWwlorORJxI1bKXVnsWqFzdZeu3PN+CKKWF3jgtiZxdZqMyVqQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB4385

On 10.06.2022 10:32, Roger Pau Monne wrote:
> The loop in iommu_{un,}map() can be arbitrarily large, and as such it
> needs to handle preemption.  Introduce a new parameter that allows
> returning the number of pages that have been processed, and whose
> presence also signals whether the function should do preemption
> checks.
> 
> Note that the cleanup done in iommu_map() can now be incomplete if
> preemption has happened, and hence callers would need to take care of
> unmapping the whole range (ie: ranges already mapped by previously
> preempted calls).  So far none of the callers care about having those
> ranges unmapped, so error handling in iommu_memory_setup() and
> arch_iommu_hwdom_init() can be kept as-is.
> 
> Note that iommu_legacy_{un,}map() is left without preemption handling:
> callers of those interfaces are not modified to pass bigger chunks,
> and hence the functions won't be modified: they are legacy and should
> be replaced with iommu_{un,}map() if preemption is required.
> 
> Fixes: f3185c165d ('IOMMU/x86: perform PV Dom0 mappings in batches')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/pv/dom0_build.c        | 15 ++++++++++++---
>  xen/drivers/passthrough/iommu.c     | 26 +++++++++++++++++++-------
>  xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++--
>  xen/include/xen/iommu.h             |  4 ++--
>  4 files changed, 44 insertions(+), 14 deletions(-)

I'm a little confused, I guess: On irc you did, if I'm not mistaken,
say you'd post what you have, but that would be incomplete. Now this
looks pretty complete when leaving aside the fact that the referenced
commit has meanwhile been reverted, and there's also no post-commit-
message remark towards anything else that needs doing. I'd like to
include this change in the next version of my series (ahead of the
previously reverted change), doing the re-basing as necessary. But
for that I first need to understand the state of this change.

> @@ -327,6 +327,12 @@ int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
>          dfn_t dfn = dfn_add(dfn0, i);
>          mfn_t mfn = mfn_add(mfn0, i);
>  
> +        if ( done && !(++j & 0xfffff) && general_preempt_check() )

0xfffff seems rather high to me; I'd be inclined to move down to 0xffff
or even 0xfff.

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -155,10 +155,10 @@ enum
>  
>  int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>                             unsigned long page_count, unsigned int flags,
> -                           unsigned int *flush_flags);
> +                           unsigned int *flush_flags, unsigned long *done);
>  int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
>                               unsigned long page_count,
> -                             unsigned int *flush_flags);
> +                             unsigned int *flush_flags, unsigned long *done);

While I'm okay with adding a 6th parameter to iommu_unmap(), I'm afraid
I don't really like adding a 7th one to iommu_map(). I'd instead be
inclined to overload the return values of both functions, with positive
values indicating "partially done, this many completed". The 6th
parameter of iommu_unmap() would then be a "flags" one, with one bit
identifying whether preemption is to be checked for. Thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:50:51 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Christopher Clark
	<christopher.w.clark@gmail.com>, Julien Grall <julien@xen.org>, xen-devel
	<xen-devel@lists.xenproject.org>, Michal Orzel <Michal.Orzel@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Daniel Smith <dpsmith@apertussolutions.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: XTF-on-ARM: Bugs
Date: Thu, 23 Jun 2022 09:50:34 +0000
Message-ID: <4A1C629B-4842-4047-A180-435BD5972091@citrix.com>
References: <7f490d75-153d-7e1d-b3c0-5418ff7fdf8f@citrix.com>
 <b8f05e22-c30d-d4b2-b725-9db91ee7a09d@xen.org>
 <fd30be68-d1ac-b1bc-b3f1-cff589f338ee@citrix.com>
 <c97de57c-4812-cdfc-f329-cc2e1d950dc7@xen.org>
 <CACMJ4GY+H7P733_-UNgSd7P8+Z4ryeJwVy3QfekMJskkmh9btQ@mail.gmail.com>
 <30BB31A7-F49C-4908-8053-74E31D03BD33@arm.com>
 <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>
In-Reply-To: <36854512-23fe-57dc-3c47-5f996927872b@citrix.com>




> On 22 Jun 2022, at 17:28, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>
> On 22/06/2022 13:32, Bertrand Marquis wrote:
>> Hi Andrew and Christopher,
>>
>> I will not dig into the details of the issues you currently have
>> but it seems you are trying to re-do the work we already did
>> and have been using for quite a while.
>>
>> Currently we maintain the xtf on arm code in gitlab and we
>> recently rebased it on the latest xtf master:
>> https://gitlab.com/xen-project/people/bmarquis/xtf
>>
>> If possible I would suggest to start from there.
>
> Sorry to be blunt, but no.  I've requested several times for that series
> to be broken down into something which is actually reviewable, and
> because that has not been done, I'm doing it at the fastest pace my
> other priorities allow.
>
> Notice how 2/3 of the patches in the past year have been bits
> specifically carved out of the ARM series, or improvements to prevent
> the ARM series introducing technical debt.  Furthermore, you've not
> taken the "build ARM in CI" patch that I wrote specifically for you to
> be part of the series, and you've got breakages to x86 from rebasing.
>
> At this point, I am not interested in seeing any work which is not
> morphing (and mostly pruning) the arm-wip branch down into a set of
> clean build system modifications that can bootstrap the
> as-minimal-as-I-can-make-it stub.

Andy,

You are not in a position to dictate to anyone else what work they will
be doing; particularly if that dictation means, “Do nothing until I can
get a chance to get around to it.”

Bertrand and his team have their own goals and priorities they need to
accomplish, just like you do: they need to get additional XTF tests
working, and they need to be able to share those with partners.  They
already put off that work for over a year waiting for you to get around
to the architectural re-work you had in mind.  It’s not reasonable to
expect them to put off indefinitely their own needs and priorities, any
more than it’s reasonable for them to expect you to drop your security
work and other things you’ve been working on instead of refactoring the
XTF architecture.

Bertrand knew when he made the branch that the more work done on the
branch, the more effort it would take to eventually merge it upstream.
You were told that they were going to create a branch and continue
working on it; and you knew that the longer you delayed the
architectural re-work you had in mind, the further Bertrand’s branch
would drift.  It was more important for you to work on security issues;
it was more important for Bertrand to get additional functionality
implemented and shared.  You and Bertrand have both made your decisions
with the full knowledge of the implications of your choices; there’s no
point in complaining now that the natural consequences of your choices
have in fact come to pass.  And it’s hypocritical to be angry at
Bertrand for having priorities higher than easy merging, when you did
exactly the same thing.

Bertrand’s response to this thread — suggesting that you begin by
testing his branch to see whether the bugs you’re looking at have
already been fixed there — was reasonable, polite, and cooperative.
Yours has not been; and this kind of response isn’t likely to encourage
him to be cooperative in the future.

The sooner you accept that Bertrand’s branch is going to continue to
develop, gaining more features and bugfixes, the more effectively
you’ll be able to begin reducing the diff between the two, such that
things can eventually be merged.

 -George


normal; text-align: start; text-indent: 0px; text-transform: none; =
white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none; float: none; display: inline !important;" =
class=3D"">morphing (and mostly pruning) the arm-wip branch down into a =
set of</span><br style=3D"caret-color: rgb(0, 0, 0); font-family: =
JetBrainsMonoRoman-Thin; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: 400; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""><span style=3D"caret-color: rgb(0, 0, =
0); font-family: JetBrainsMonoRoman-Thin; font-size: 14px; font-style: =
normal; font-variant-caps: normal; font-weight: 400; letter-spacing: =
normal; text-align: start; text-indent: 0px; text-transform: none; =
white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none; float: none; display: inline !important;" =
class=3D"">clean build system modifications that can bootstrap =
the</span><br style=3D"caret-color: rgb(0, 0, 0); font-family: =
JetBrainsMonoRoman-Thin; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: 400; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""><span style=3D"caret-color: rgb(0, 0, =
0); font-family: JetBrainsMonoRoman-Thin; font-size: 14px; font-style: =
normal; font-variant-caps: normal; font-weight: 400; letter-spacing: =
normal; text-align: start; text-indent: 0px; text-transform: none; =
white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none; float: none; display: inline !important;" =
class=3D"">as-minimal-as-I-can-make-it stub.</span><br =
style=3D"caret-color: rgb(0, 0, 0); font-family: =
JetBrainsMonoRoman-Thin; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: 400; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""></div></blockquote><div><br =
class=3D""></div><div>Andy,</div><div><br class=3D""></div><div>You are =
not in a position to dictate to anyone else what work they will be =
doing; particularly if that dictation means, =E2=80=9CDo nothing until I =
can get a chance to get around to it.=E2=80=9D</div><div><br =
class=3D""></div><div>Bertrand and his team have their own goals and =
priorities they need to accomplish, just like you do: they need to get =
additional XTF tests working, and they need to be able to share those =
with partners. &nbsp;They already put off that work for over a year =
waiting for you to get around to the architectural re-work you had in =
mind. &nbsp;It=E2=80=99s not reasonable to expect them to put off =
indefinitely their own needs and priorities, any more than it=E2=80=99s =
reasonable for them to expect you to drop your security work and other =
things you=E2=80=99ve been working on instead of refactoring the XTF =
architecture.</div><div><br class=3D""></div><div>Bertrand knew when he =
made the branch that the more work done on the branch, the more effort =
it would take to eventually merge it upstream. &nbsp;<font =
color=3D"#000000" class=3D"">You were told that they were going to =
create a branch and continue working on it; and you knew that the longer =
you delayed the architectural re-work you had in mind, the further =
Bertrand=E2=80=99s branch would drift. &nbsp;It was more important for =
you to work on security issues; it was more important for Bertrand to =
get additional functionality implemented and shared. =
&nbsp;&nbsp;</font><span style=3D"color: rgb(0, 0, 0);" class=3D"">You =
and Bertrand have both made your decisions with the full knowledge of =
the implications of your choices; there=E2=80=99s no point in =
complaining now that the natural consequences of your choices have in =
fact come to pass. &nbsp;And it=E2=80=99s hypocritical to be angry at =
Bertrand for having priorities higher than easy merging, when you did =
exactly the same thing.</span></div><div><br =
class=3D""></div><div>Bertrands response to this thread =E2=80=94 =
suggesting that you begin by testing his branch to see whether the bugs =
you=E2=80=99re looking at have already been fixed there =E2=80=94 was =
reasonable, polite, and cooperative. &nbsp;Yours has not been; and this =
kind of response isn=E2=80=99t likely to encourage him to be cooperative =
in the future.</div><div><br class=3D""></div><div>The sooner you accept =
that Bertrand's branch is going to continue to develop, gain more =
features and bugfixes, the more effectively you=E2=80=99ll be able to =
begin reducing the diff between the two, such that things can eventually =
be merged.</div><div><br class=3D""></div><div>&nbsp;-George</div><div><br=
 class=3D""></div></div></body></html>=



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:51:52 2022
Message-ID: <117fd526-a241-2f01-47b5-e40e1803124b@suse.com>
Date: Thu, 23 Jun 2022 11:51:42 +0200
Subject: Re: [PATCH v2 1/3] x86/xen: use clear_bss() for Xen PV guests
From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-2-jgross@suse.com>
In-Reply-To: <20220623094608.7294-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0

On 23.06.2022 11:46, Juergen Gross wrote:
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
>  extern void early_xen_iret_patch(void);
>  
>  /* First C function to be called on Xen boot */
> -asmlinkage __visible void __init xen_start_kernel(void)
> +asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
>  {
>  	struct physdev_set_iopl set_iopl;
>  	unsigned long initrd_start = 0;
>  	int rc;
>  
> -	if (!xen_start_info)
> +	if (!si)
>  		return;
>  
> +	clear_bss();

As per subsequent observation, this shouldn't really be needed: The
hypervisor (or tool stack for DomU-s) already does so. While I guess
we want to keep it to be on the safe side, maybe worth a comment?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 09:56:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 09:56:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354775.582055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JZt-0004yn-EE; Thu, 23 Jun 2022 09:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354775.582055; Thu, 23 Jun 2022 09:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4JZt-0004yg-Af; Thu, 23 Jun 2022 09:56:45 +0000
Received: by outflank-mailman (input) for mailman id 354775;
 Thu, 23 Jun 2022 09:56:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vof+=W6=citrix.com=prvs=166aae13d=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1o4JZs-0004ya-CV
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 09:56:44 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c76c63c4-f2da-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 11:56:41 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jun 2022 05:56:36 -0400
Received: from PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 by MN2PR03MB5375.namprd03.prod.outlook.com (2603:10b6:208:1ee::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 09:56:34 +0000
Received: from PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::99b:8d7c:620d:d795]) by PH0PR03MB5669.namprd03.prod.outlook.com
 ([fe80::99b:8d7c:620d:d795%7]) with mapi id 15.20.5373.016; Thu, 23 Jun 2022
 09:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c76c63c4-f2da-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1655978202;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=oiGANlG8K8+pnTlQLCbjfuKOL0MgCQBzhwyREDcWTes=;
  b=XZW3ZKKKFcyRy2AH4NuqmH+Fw7EUFS/aJvgLiBnWJRuMR8Uq1BRUXUKF
   zCIXr65vykgINVvBlg3VdFnve9CGDc7hTakbag44ZsysaJ+yIjYia3I7S
   lFLqaCWDt1brDpIA5ID1xtCmcPhJTnyCJTStLTG6ad7iPshF+m9YQqZeL
   o=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 74262279
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:2mEirajAo5vv67VeBze6+csvX161JhAKZh0ujC45NGQN5FlGYwSy9
 lOraxnFY6jUMyawOYxoOc7lxf41ycWAz9ZmTQBvqCxkFSIT8pebDI/JIxv7MXrNJZGcHRI2s
 p9GNdTJfJ04EXGG/xv3aOe59yZ22fDZS+KmWeeUNn0ZqWOIMMsEoUsLd7kR3t446TTAPz6wh
 D/SnyH+EFT+h2AoYjpLsP7f8R4x4v6ssz9EsFZkNakTt1KDx3cZXc4Tfa2/ESD1E9JedgKYq
 0cv710bEkfxpUpF5gaNy+6jGqEyaueOe1DI0BK6YoD66vR4jnVaPp0TabxNMy+7tx3Tx4ork
 IsX78TsIesUFvakdNo1AkEw/x5WZcWqyJefSZRomZXOp6FuWyKEL8RGVCnaD6VBkgpEKTgmG
 cgjACIMdni+a9eem9pXfAXOavMLd6EHNKtH0p1pIKqw4fwOGfgvSI2SjTNUMatZammj0p8ya
 uJAAQeDYigsbDVGO3wnT4kBxtyQnyfhWj51sA+FnpAotj27IAxZiNABMfLzU/nTHIB/uBbdo
 WjLuWPkHhsdKdqTjyKf9W6hjfPOmiW9X58OELq/9bhhh1j7Km47UUVKEwfk56TpzBfgCrqzK
 GRNksYqhYc/81akQ5/RQhu8qWastR8AQdtAVeY97Wlhz4KLv1zHXDJbH1atbvQ7nf0vFHsXy
 WSAgtjWWDhNvKGVTmmCo+L8QTSafHJ9wXU5TS0OQBYB4tLjiJoulR+JRdFmeIa3k9n0FDfY0
 z2M6i8kiN07ltUX3q+2+VTGhTOEpZXTSAMxoALNUQqN8QdRdIOjIYuy5jDz4flMIYmDR3Gdr
 XMEnI6Y9+lIApaT/ASdTeNIELy36vKtNDzHnUUpD5Qn7y6q+XOoYcZX+j4WDEtxKcMFZT/Ba
 Vfeox9M/4RUOGa2bKhxeMS6DMFC8ET7PdHsV/SRZNweZJF0LVaD5Hs3Ox/W2H3xmk8xl615I
 Y2cbcunEXcdD+Jg0SayQOAel7Qsw0jS2F/ueHwy9Dz/uZL2WZJfYe5dWLdSRojVNJ+5nTg=
IronPort-HdrOrdr: A9a23:XMVzGK6MZlubq5zCHAPXwWSBI+orL9Y04lQ7vn2ZFiY5TiXIra
 qTdaogviMc0AxhI03Jmbi7Scq9qADnhORICOgqTPqftWzd1FdAQ7sSircKrweAJ8S6zJ8k6U
 4CSdkzNDSTNykdsS+S2mDRfLgdKZu8gdmVbIzlvhVQpHRRGsVdBnBCe2Om+yNNJDVuNN4cLt
 6x98BHrz2vdTA8dcKgHEQIWODFupniiI/mSQRuPW9o1CC+yReTrJLqGRmR2RkTFxlVx605zG
 TDmwvloo2+rvCAzAPG3WO71eUWpDKh8KoCOCW/sLlWFtzesHfsWG2nYczHgNkBmpDt1L/tqq
 iKn/5vBbU015qbRBDJnfKk4Xid7N9p0Q6s9bbQuwqcneXpAD09EMZPnoRfb1/Q7Fchpsh11O
 ZR03uerIc/N2KJoMxsj+K4KC2Cu3DE10bKq9RjxkC3kLFuGoN5vMga5gdYAZ0AFCX15MQuF/
 RvFtjV4LJTfUmBZ37Us2FzyJj0N05DVCuuUwwHoIiYwjJWlHd2ww8Rw9EehG4J8NY4R4Nf7+
 rJP6x0nPVFT9MQb6h6GOAdKPHHQVDlUFbJKiafMF7nHKYINzbErIP2+qw84KWwdJkB3PIJ6e
 P8uZNjxBoPkm7VeL2zNcdwg2HwqU2GLEfQ49Ab4YRlsbvhQ7euOTGfSTkV4r6dn8k=
X-IronPort-AV: E=Sophos;i="5.92,215,1650945600"; 
   d="asc'?scan'208,217";a="74262279"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFYqFaYC0GlmsSj+lsaEh5rc5pFx2V4n/u5d9UJtI3cTGnBimWRl+GxR64lf9LLOACbWCtzpnUFUtbga2/c6mhgwfZ+7IrD1Ew+AjM0h5enoIjcDNAN8HN8dRXMGYJjon84F+BGD0XP7VlFJTENwpyMaplrrCqThK8yZyq8+UnjbaVuAWFp+uCQJ5AutOSFZjreQ40a4elfK8INfGYzxNUOoLb1M9Ufkj+YJ8UcyhRQIvm9GZCAbwI91cT4uI4mW1leLg2LUPVk/qZVMqh4dEuNyl9QjNCIaf/9mJMpNrHV1zTNaD/Rvf8bK9A8GWgUz8lutIl7MhAwbJWQL9tjHDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9zhEXelUsWthDYo+ZiKHfkVi4ABIVMevjWQBIFBuhVo=;
 b=BB+NVe3Zm5+eZPRKKYuhwXgnag02rbXoPZ8u4oPRXiJVOy3tMVqJiVpL5fq1tMBgUDt3KFvixhxD7+pOM+Tf3d+GYNvTiS7xCh7g+7VqpZZNE4nADlw8nmUeca9cQwa6agw1curM3jv8KwLDqMBAqhVRk/v4BSmPNfK3wSix+3ydqZNMhEmpmMmbQnWJPZ7Db+lEESjFcSp6sHHJ4c2jxZu34szNSyoWX3QJYBX0qw8ong2fP38Jrt3/809TjFAWnsgVKIEISapjJyLvf649FAbBA1/qvrOqDiYcesgEoLc4H57cUf97yg8I6K4QRt+UgHiIXWHW9HHmpDMukrVx7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9zhEXelUsWthDYo+ZiKHfkVi4ABIVMevjWQBIFBuhVo=;
 b=pNrz4++iC3Wu7APNGLWJRaR93Z1jmYiVa2VUGrYgEDRDTp1KaBreMtly5QdYTh6QXX6o94/UIW3LiaJVrEq1uJYAjBVKJDoQJvHpVP0dHjB2MJZ/fPRFMKul4ct0RR/SOHyCmxrPUNjWN4FkW8o5hvgb9Jsj7pCGs7hCddFm5QA=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Daniel de
 Graaf <dgdegra@tycho.nsa.gov>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Daniel Smith <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] MAINTAINERS: drop XSM maintainer
Thread-Topic: [PATCH] MAINTAINERS: drop XSM maintainer
Thread-Index: AQHYfBZAWPNhrlbPskGUL/+6B0hJq61csbAAgAAlLIA=
Date: Thu, 23 Jun 2022 09:56:34 +0000
Message-ID: <DE900420-7474-45AC-B363-519D1D51020A@citrix.com>
References: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
 <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>
In-Reply-To: <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 68065955-defe-45b2-7bb4-08da54fea7f0
x-ms-traffictypediagnostic: MN2PR03MB5375:EE_
x-microsoft-antispam-prvs:
 <MN2PR03MB5375D90F5F2B956888BFA02499B59@MN2PR03MB5375.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 pAS6aWbHxBqmCSmW6YgdOFqK1MuCCeAJeESbsXaPKhvEYOUeDqN2TPC308684puYROxBVTeN50Xil8iwmEe6iH+FB4Vw6lgCNAJZh4pRMbxTfbT7aztc3YD36tgO+/tS7QFn95Dwt10QKFGdRv8jlw3UNRg+OgOjD96xNQfggmWrT5u/KoUbqzAHVJxdJcrQOMx1ZIccdTURBg7hyw8r3ldzxjreC/uqGGqm+i05D+9qpjSfvnKNhySFMxITuqEgxuc5bJM57MNbKspipMZd2M4vrCydgNeWbxPh8U2P1tG5CDkYXuPyq3NiIR/swByTXEyQu47jvo/TwczCJR0zEKO4WywMPSkNUfFAcbA0AK3ucPVbshfKEpE8kiam1S77G2Qv3e6aKTS9Ld+QcCjCyFS30dJOLuPbvdDlohsyZdt7bCcZdkXQm44AtcmlSLjiFDjMk56d0GL78FOz8z2g7a45gFp0NDBlPRB5G5HjIzApYHuzXSIF181RYWCe+Q0mQd0RnWdShq2eSkpIiS/oJ4DJRjZsH1m+BP4lGGm03PxhH351TuFG2fJ4p/ZJDpBWcUkHkCcgchy+MZNPguNLkekfCCScn11RO7LrDIdFEnaUs9IFburKX4RY5JGg9gcka+HG+nCdENOdNsOt7B0R0oL3T51Ru43te/qBK8lvJN++xoDvq+R38aQXWKfcoLR7jKcYcBsjw2TAuCGLL0bTMIeuniwYkyhJO0zxkmp0UxiY4fbsKEkpOJMyuNr3756U7hrrVfd/7vwVccDV7qZDOrZlSVvryY/eO/psITU8S1c=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(346002)(376002)(39860400002)(396003)(366004)(6506007)(41300700001)(2616005)(186003)(82960400001)(99936003)(26005)(53546011)(38100700002)(6512007)(122000001)(83380400001)(33656002)(8936002)(8676002)(36756003)(2906002)(71200400001)(6916009)(316002)(54906003)(66556008)(64756008)(66476007)(76116006)(86362001)(66946007)(4326008)(5660300002)(6486002)(91956017)(66446008)(38070700005)(478600001)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?QlFEVXNZRWRPSTk5NGVrQmR3dEJXZjcwRm9wUnlKR1hFYml2THJBYm9UR1dy?=
 =?utf-8?B?RXlFbnVzSjIwMGJWL0J5TUluZThCQk9jRjRkTU1QT3dTb3pjQ2xQenhPTUtl?=
Content-Type: multipart/signed;
	boundary="Apple-Mail=_16464673-E05F-4B3D-B777-EAE887EA61DF";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0

--Apple-Mail=_16464673-E05F-4B3D-B777-EAE887EA61DF
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_B412ABD9-1F1F-43CA-9B13-A74C3DDDDDEA"


--Apple-Mail=_B412ABD9-1F1F-43CA-9B13-A74C3DDDDDEA
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8



> On 23 Jun 2022, at 08:43, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 09.06.2022 17:33, Jan Beulich wrote:
>> While mail hasn't been bouncing, Daniel has not been responding to patch
>> submissions or otherwise interacting with the community for several
>> years. Move maintainership to THE REST in kind of an unusual way, with
>> the goal to avoid
>> - orphaning the component,
>> - repeating all THE REST members here,
>> - removing the entry altogether.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> We hope this to be transient, with a new maintainer to be established
>> rather sooner than later.
>> 
>> I realize the way I'm expressing this may upset scripts/*_maintainer*.pl,
>> so I'd welcome any better alternative suggestion.
> 
> Two weeks have passed. May I ask for an ack so this can go in?

I’m happy to give you my Ack re the maintainership change, but I’m not
qualified to comment on whether it will screw up the get_maintainer
script.  Perhaps it would be better to send a v2 patch, proposing Daniel
Smith as a replacement, to avoid the question of the script entirely.

Acked-by: George Dunlap <george.dunlap@citrix.com> # Removing old maintainer

 -George


--Apple-Mail=_B412ABD9-1F1F-43CA-9B13-A74C3DDDDDEA--

--Apple-Mail=_16464673-E05F-4B3D-B777-EAE887EA61DF
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmK0ONEACgkQshXHp8eE
G+2ltggAtzKNPJPMyD31fcj00V6tdpmsd6F7Ubddkx0/FeBm3zy3u75KzZTfEP/6
Cn6lFPrMNkpnslbmazn8aV3drnMN+IlCsfjyEP0kSqaCu9Gy9XHtnoAP4UCowWo5
V92ERDmDDuVqwWHQkA0mpSRNFb8lcGeoqNqi7dT3QqYm9DBMIrAlGjXvtYkES23i
KUEVP6+ZSHZg98Ib+X5leVV45UySYMX0PHBynVvLggzAFO0ij3La1Gn+cKQydIa7
H2FpMKA8crWfBkZ0PI03fuBELbIVjcGcOrDnYV4lD7hHecDz+r/fSMSPPTo197Hx
STkd4sYiIVrmqdzydJu36dHcqIBGtw==
=S2A2
-----END PGP SIGNATURE-----

--Apple-Mail=_16464673-E05F-4B3D-B777-EAE887EA61DF--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:04:38 2022
Date: Thu, 23 Jun 2022 12:04:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Daniel de Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Daniel Smith <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] MAINTAINERS: drop XSM maintainer
Message-ID: <YrQ6pNrr3pbZ2THo@Air-de-Roger>
References: <baa7d303-1fcc-cd59-0872-a930ea43734d@suse.com>
 <16b02586-43a5-0f67-5479-1d7b77aa892e@suse.com>
 <DE900420-7474-45AC-B363-519D1D51020A@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <DE900420-7474-45AC-B363-519D1D51020A@citrix.com>
MIME-Version: 1.0

On Thu, Jun 23, 2022 at 09:56:34AM +0000, George Dunlap wrote:
> 
> 
> > On 23 Jun 2022, at 08:43, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 09.06.2022 17:33, Jan Beulich wrote:
> >> While mail hasn't been bouncing, Daniel has not been responding to patch
> >> submissions or otherwise interacting with the community for several
> >> years. Move maintainership to THE REST in kind of an unusual way, with
> >> the goal to avoid
> >> - orphaning the component,
> >> - repeating all THE REST members here,
> >> - removing the entry altogether.
> >> 
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> We hope this to be transient, with a new maintainer to be established
> >> rather sooner than later.
> >> 
> >> I realize the way I'm expressing this may upset scripts/*_maintainer*.pl,
> >> so I'd welcome any better alternative suggestion.
> > 
> > Two weeks have passed. May I ask for an ack so this can go in?
> 
> I’m happy to give you my Ack re the maintainership change, but I’m not qualified to comment on whether it will screw up the get_maintainer script.  Perhaps it would be better to send a v2 patch, proposing Daniel Smith as a replacement, to avoid the question of the script entirely.

Maybe we should modify the script so that any sections that don't have
a maintainer assigned (no M: entry) fall back to THE REST, if that's
not already the case?
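
The fallback suggested above could be sketched roughly as follows. This
is a hypothetical Python illustration of the idea only, with a
deliberately simplified parser; the real logic would live in the Perl
scripts under scripts/ and the actual MAINTAINERS grammar has more tags
and multi-line headings than handled here.

```python
def parse_maintainers(text):
    """Split a MAINTAINERS-style file into {section title: [(tag, value), ...]}.

    Entry lines look like "M:\tName <email>"; any other non-blank line is
    treated as a section title (a simplification of the real format).
    """
    sections, title, entries = {}, None, []
    for line in text.splitlines():
        if len(line) > 1 and line[1] == ':' and line[0].isupper():
            entries.append((line[0], line[2:].strip()))
        elif line.strip():
            if title is not None:
                sections[title] = entries
            title, entries = line.strip(), []
    if title is not None:
        sections[title] = entries
    return sections

def maintainers_for(sections, title):
    """Return the M: entries for a section; if it has none (orphaned),
    fall back to the maintainers listed under THE REST."""
    ms = [v for t, v in sections.get(title, []) if t == 'M']
    if not ms:
        ms = [v for t, v in sections.get('THE REST', []) if t == 'M']
    return ms
```

With this behaviour, an entry that keeps its S:/F: lines but loses its
M: line would still resolve to THE REST rather than becoming unowned.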

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:08:24 2022
Message-ID: <94cca06c-8439-ec92-3bc7-8f2a69929beb@amd.com>
Date: Thu, 23 Jun 2022 11:01:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [Viryaos-discuss] [ImageBuilder] [PATCH 1/2] uboot-script-gen:
 Skip dom0 instead of exiting if DOM0_KERNEL is not set
From: Ayan Kumar Halder <ayankuma@amd.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220619124316.378365-1-burzalodowa@gmail.com>
 <80dc865b-f053-d9c9-b8d4-efb19abb2e49@amd.com>
In-Reply-To: <80dc865b-f053-d9c9-b8d4-efb19abb2e49@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

(Resending mail, as the previous delivery failed)

On 21/06/2022 11:57, Ayan Kumar Halder wrote:
> Hi Xenia,
>
> Thanks for the change. It looks good.
>
> Some points :-
>
> 1. In README.md, please mention in the 'DOM0_KERNEL' description that it
> is an optional parameter. When the user does not provide it, a dom0less
> configuration is generated.
>
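As a concrete illustration of point 1, a minimal dom0less ImageBuilder config could omit DOM0_KERNEL entirely (the file names, addresses and values below are hypothetical; check README.md for the authoritative parameter list):

```shell
# Hypothetical ImageBuilder config: no DOM0_KERNEL is given, so
# uboot-script-gen should generate a dom0less setup with one domU.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0"

NUM_DOMUS=1
DOMU_KERNEL[0]="Image"
DOMU_RAMDISK[0]="ramdisk.cpio"
DOMU_MEM[0]="512"
```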
> 2. In uboot-script-gen, please add a check like this:
>
> if test -z "$DOM0_KERNEL"
> then
>     if test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_COLORS" ||
>        test "$DOM0_CMD" || test "$DOM0_RAMDISK" || test "$DOM0_ROOTFS"
>     then
>         echo "For a dom0less configuration, DOM0_MEM, DOM0_VCPUS," \
>              "DOM0_COLORS, DOM0_CMD, DOM0_RAMDISK and DOM0_ROOTFS" \
>              "should not be defined"
>         exit 1
>     fi
> fi
>
> The reason is that the user should not be allowed to provide an
> incorrect configuration.
>
> 3. Please test the patch for both dom0 and dom0less configurations, as
> such a change might break something. :)
>
> - Ayan
>
> On 19/06/2022 13:43, Xenia Ragiadakou wrote:
>> When the parameter DOM0_KERNEL is not specified and NUM_DOMUS is not 0,
>> instead of failing the script, just skip any dom0 specific setup.
>> This way the script can be used to boot XEN in dom0less mode.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>> ---
>>   scripts/uboot-script-gen | 60 ++++++++++++++++++++++++++++------------
>>   1 file changed, 43 insertions(+), 17 deletions(-)
>>
>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>> index 455b4c0..bdc8a6b 100755
>> --- a/scripts/uboot-script-gen
>> +++ b/scripts/uboot-script-gen
>> @@ -168,10 +168,15 @@ function xen_device_tree_editing()
>>       dt_set "/chosen" "#address-cells" "hex" "0x2"
>>       dt_set "/chosen" "#size-cells" "hex" "0x2"
>>       dt_set "/chosen" "xen,xen-bootargs" "str" "$XEN_CMD"
>> -    dt_mknode "/chosen" "dom0"
>> -    dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage 
>> xen,multiboot-module multiboot,module"
>> -    dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 
>> $(printf "0x%x" $dom0_kernel_size)"
>> -    dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
>> +
>> +    if test "$DOM0_KERNEL"
>> +    then
>> +        dt_mknode "/chosen" "dom0"
>> +        dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage 
>> xen,multiboot-module multiboot,module"
>> +        dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 
>> $(printf "0x%x" $dom0_kernel_size)"
>> +        dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
>> +    fi
>> +
>>       if test "$DOM0_RAMDISK" && test $ramdisk_addr != "-"
>>       then
>>           dt_mknode "/chosen" "dom0-ramdisk"
>> @@ -203,7 +208,10 @@ function xen_device_tree_editing()
>>               add_device_tree_static_mem "/chosen/domU$i" 
>> "${DOMU_STATIC_MEM[$i]}"
>>           fi
>>           dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
>> -        dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
>> +        if test "$DOM0_KERNEL"
>> +        then
>> +            dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
>> +        fi
>>             if test "${DOMU_COLORS[$i]}"
>>           then
>> @@ -433,6 +441,19 @@ function xen_config()
>>               DOM0_CMD="$DOM0_CMD root=$root_dev"
>>           fi
>>       fi
>> +    if test -z "$DOM0_KERNEL"
>> +    then
>> +        if test "$NUM_DOMUS" -eq "0"
>> +        then
>> +            echo "Neither dom0 or domUs are specified, exiting."
>> +            exit 1
>> +        fi
>> +        echo "Dom0 kernel is not specified, continue with dom0less 
>> setup."
>> +        unset DOM0_RAMDISK
>> +        # Remove dom0 specific parameters from the XEN command line.
>> +        local params=($XEN_CMD)
>> +        XEN_CMD="${params[@]/dom0*/}"
>> +    fi
>>       i=0
>>       while test $i -lt $NUM_DOMUS
>>       do
>> @@ -490,11 +511,13 @@ generate_uboot_images()
>>     xen_file_loading()
>>   {
>> -    check_compressed_file_type $DOM0_KERNEL "executable"
>> -    dom0_kernel_addr=$memaddr
>> -    load_file $DOM0_KERNEL "dom0_linux"
>> -    dom0_kernel_size=$filesize
>> -
>> +    if test "$DOM0_KERNEL"
>> +    then
>> +        check_compressed_file_type $DOM0_KERNEL "executable"
>> +        dom0_kernel_addr=$memaddr
>> +        load_file $DOM0_KERNEL "dom0_linux"
>> +        dom0_kernel_size=$filesize
>> +    fi
>>       if test "$DOM0_RAMDISK"
>>       then
>>           check_compressed_file_type $DOM0_RAMDISK "cpio archive"
>> @@ -597,14 +620,16 @@ bitstream_load_and_config()
>>     create_its_file_xen()
>>   {
>> -    if test "$ramdisk_addr" != "-"
>> +    if test "$DOM0_KERNEL"
>>       then
>> -        load_files="\"dom0_linux\", \"dom0_ramdisk\""
>> -    else
>> -        load_files="\"dom0_linux\""
>> -    fi
>> -    # xen below
>> -    cat >> "$its_file" <<- EOF
>> +        if test "$ramdisk_addr" != "-"
>> +        then
>> +            load_files="\"dom0_linux\", \"dom0_ramdisk\""
>> +        else
>> +            load_files="\"dom0_linux\""
>> +        fi
>> +        # xen below
>> +        cat >> "$its_file" <<- EOF
>>           dom0_linux {
>>               description = "dom0 linux kernel binary";
>>               data = /incbin/("$DOM0_KERNEL");
>> @@ -616,6 +641,7 @@ create_its_file_xen()
>>               $fit_algo
>>           };
>>       EOF
>> +    fi
>>       # domUs
>>       i=0
>>       while test $i -lt $NUM_DOMUS


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:08:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 10:08:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354784.582082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Jl9-0007HA-AB; Thu, 23 Jun 2022 10:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354784.582082; Thu, 23 Jun 2022 10:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Jl9-0007Gb-4I; Thu, 23 Jun 2022 10:08:23 +0000
Received: by outflank-mailman (input) for mailman id 354784;
 Thu, 23 Jun 2022 10:02:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qyAp=W6=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1o4Jfa-0006XJ-LE
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 10:02:38 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2082.outbound.protection.outlook.com [40.107.93.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9bdb8751-f2db-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 12:02:36 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM6PR12MB3114.namprd12.prod.outlook.com (2603:10b6:5:11e::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.18; Thu, 23 Jun
 2022 10:02:34 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::4047:c750:5bc6:19d7]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::4047:c750:5bc6:19d7%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 10:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <797bf441-9d7a-7eb8-4e90-787398acf726@amd.com>
Date: Thu, 23 Jun 2022 11:02:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [Viryaos-discuss] [ImageBuilder] [PATCH 2/2] uboot-script-gen:
 Enable direct mapping of statically allocated memory
From: Ayan Kumar Halder <ayankuma@amd.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220619124316.378365-1-burzalodowa@gmail.com>
 <20220619124316.378365-2-burzalodowa@gmail.com>
 <5cd7ee29-d43a-1302-0a0b-6b4c339a96da@amd.com>
In-Reply-To: <5cd7ee29-d43a-1302-0a0b-6b4c339a96da@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0307.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::31) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0

(Resending mail, as the previous delivery failed)

On 21/06/2022 12:34, Ayan Kumar Halder wrote:
> Hi,
>
> On 19/06/2022 13:43, Xenia Ragiadakou wrote:
>> Direct mapping for dom0less VMs is disabled by default in XEN and can be
>> enabled through the 'direct-map' property.
>> Add a new config parameter DOMU_DIRECT_MAP to enable or disable direct
>> mapping, i.e. set it to 1 to enable and 0 to disable.
>> This parameter is optional. Direct mapping is enabled by default for all
>> dom0less VMs with static allocation.
>>
>> The property 'direct-map' is a boolean property. Boolean properties are
>> true if present and false if missing.
>> Add a new data_type 'bool' in function dt_set() to set up a boolean
>> property.
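The boolean-property convention described above can be sketched like this (emit_bool_prop is a hypothetical helper, not part of uboot-script-gen; it only illustrates that the "fdt set" command is emitted when the flag is 1 and nothing is emitted otherwise):

```shell
#!/bin/sh
# Sketch: a device tree boolean property is true if present and false if
# missing, so we only emit the "fdt set" command when the value is 1.
emit_bool_prop() {
    local path=$1 var=$2 val=$3
    if test "$val" -eq 1
    then
        echo "fdt set $path $var"
    fi
}

emit_bool_prop "/chosen/domU0" "direct-map" 1   # prints: fdt set /chosen/domU0 direct-map
emit_bool_prop "/chosen/domU0" "direct-map" 0   # prints nothing
```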
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>> ---
>>   README.md                |  4 ++++
>>   scripts/uboot-script-gen | 18 ++++++++++++++++++
>>   2 files changed, 22 insertions(+)
>>
>> diff --git a/README.md b/README.md
>> index c52e4b9..17ff206 100644
>> --- a/README.md
>> +++ b/README.md
>> @@ -168,6 +168,10 @@ Where:
>>     if specified, indicates the host physical address regions
>>     [baseaddr, baseaddr + size) to be reserved to the VM for static 
>> allocation.
>>   +- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>> +  If set to 1, the VM is direct mapped. The default is 1.
>> +  This is only applicable when DOMU_STATIC_MEM is specified.
>
> Can't we just use $DOMU_STATIC_MEM to set "direct-map" in the dts?
>
> Is there a valid use-case for static allocation without direct mapping?
> Sorry, it is not very clear to me.
>
> - Ayan
>
>> +
>>   - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>>     used.  To enable this set any LINUX\_\* variables and do NOT set the
>>     XEN variable.
>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>> index bdc8a6b..e85c6ec 100755
>> --- a/scripts/uboot-script-gen
>> +++ b/scripts/uboot-script-gen
>> @@ -27,6 +27,7 @@ function dt_mknode()
>>   #   hex
>>   #   str
>>   #   str_a
>> +#   bool
>>   function dt_set()
>>   {
>>       local path=$1
>> @@ -49,6 +50,12 @@ function dt_set()
>>                   array+=" \"$element\""
>>               done
>>               echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>> +        elif test $data_type = "bool"
>> +        then
>> +            if test "$data" -eq 1
>> +            then
>> +                echo "fdt set $path $var" >> $UBOOT_SOURCE
>> +            fi
>>           else
>>               echo "fdt set $path $var \"$data\"" >> $UBOOT_SOURCE
>>           fi
>> @@ -65,6 +72,12 @@ function dt_set()
>>           elif test $data_type = "str_a"
>>           then
>>               fdtput $FDTEDIT -p -t s $path $var $data
>> +        elif test $data_type = "bool"
>> +        then
>> +            if test "$data" -eq 1
>> +            then
>> +                fdtput $FDTEDIT -p $path $var
>> +            fi
>>           else
>>               fdtput $FDTEDIT -p -t s $path $var "$data"
>>           fi
>> @@ -206,6 +219,7 @@ function xen_device_tree_editing()
>>           if test "${DOMU_STATIC_MEM[$i]}"
>>           then
>>               add_device_tree_static_mem "/chosen/domU$i" 
>> "${DOMU_STATIC_MEM[$i]}"
>> +            dt_set "/chosen/domU$i" "direct-map" "bool" 
>> "${DOMU_DIRECT_MAP[$i]}"
>>           fi
>>           dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
>>           if test "$DOM0_KERNEL"
>> @@ -470,6 +484,10 @@ function xen_config()
>>           then
>>               DOMU_CMD[$i]="console=ttyAMA0"
>>           fi
>> +        if test -z "${DOMU_DIRECT_MAP[$i]}"
>> +        then
>> +             DOMU_DIRECT_MAP[$i]=1
>> +        fi
>>           i=$(( $i + 1 ))
>>       done
>>   }
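With this patch applied, a config fragment could opt a statically allocated domU out of direct mapping like this (the addresses are hypothetical):

```shell
# domU0 gets statically allocated memory but, unlike the default,
# no 'direct-map' property is added to its device tree node.
DOMU_STATIC_MEM[0]="0x40000000 0x20000000"
DOMU_DIRECT_MAP[0]=0
```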


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:24:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 10:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354804.582099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4K0l-0001kk-Q1; Thu, 23 Jun 2022 10:24:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354804.582099; Thu, 23 Jun 2022 10:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4K0l-0001kd-My; Thu, 23 Jun 2022 10:24:31 +0000
Received: by outflank-mailman (input) for mailman id 354804;
 Thu, 23 Jun 2022 10:24:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Gc7=W6=citrix.com=prvs=166b7d494=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4K0k-0001kW-7G
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 10:24:30 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8cb4dad-f2de-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 12:24:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Thu, 23 Jun 2022 11:24:22 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Julien Grall" <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V10.1 1/3] libxl: Add support for Virtio disk
 configuration
Message-ID: <YrQ/VrfzScUVK+PK@perard.uk.xensource.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1655482471-16850-1-git-send-email-olekstysh@gmail.com>

On Fri, Jun 17, 2022 at 07:14:31PM +0300, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch adds basic support for configuring and assisting virtio-mmio
> based virtio-disk backend (emulator) which is intended to run out of
> Qemu and could be run in any domain.
> Although the Virtio block device is quite different from the traditional
> Xen PV block device (vbd) from the toolstack's point of view:
>  - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
>    written to Xenstore is fetched by the frontend currently ("vdev"
>    is not passed to the frontend). But this might need to be revised
>    in the future, so frontend data might be written to Xenstore in order to
>    support hotplugging virtio devices or passing the backend domain id
>    on arch where the device-tree is not available.
>  - the ring-ref/event-channel are not used for the backend<->frontend
>    communication, the proposed IPC for Virtio is IOREQ/DM
> it is still a "block device" and ought to be integrated in existing
> "disk" handling. So, re-use (and adapt) "disk" parsing/configuration
> logic to deal with Virtio devices as well.
> 
> For the immediate purpose and an ability to extend that support for
> other use-cases in future (Qemu, virtio-pci, etc) perform the following
> actions:
> - Add new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
>   that in the configuration
> - Introduce new disk "specification" and "transport" fields to struct
>   libxl_device_disk. Both are written to the Xenstore. The transport
>   field is only used for the specification "virtio" and it assumes
>   only "mmio" value for now.
> - Introduce new "specification" option with "xen" communication
>   protocol being default value.
> - Add new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as current
>   one (LIBXL__DEVICE_KIND_VBD) doesn't fit into Virtio disk model
> 
> An example of domain configuration for Virtio disk:
> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']
> 
> Nothing has changed for default Xen disk configuration.
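For comparison, a guest config could carry a traditional Xen PV disk and a virtio one side by side (the device paths below are hypothetical; the virtio entry mirrors the example above):

```
disk = [ 'phy:/dev/vg0/guest-root, xvda, w',
         'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio' ]
```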
> 
> Please note, this patch is not enough for virtio-disk to work
> on Xen (Arm), as for every Virtio device (including disk) we need
> to allocate Virtio MMIO params (IRQ and memory region) and pass
> them to the backend, also update Guest device-tree. The subsequent
> patch will add these missing bits. For the current patch,
> the default "irq" and "base" are just written to the Xenstore.
> This is not an ideal splitting, but this way we avoid breaking
> the bisectability.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> Changes V10 -> V10.1:
>    - fix small coding style issue in libxl__device_disk_get_path()
>    - drop specification check in main_blockattach() and add
>      required check in libxl__device_disk_setdefault()
>    - update specification check in main_blockdetach()

For this v10.1: Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

BTW, the subject of this updated patch still states "v10" instead of
"v10.1"; hopefully committers can pick the right version.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:32:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 10:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354811.582109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4K7u-0003CL-Hh; Thu, 23 Jun 2022 10:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354811.582109; Thu, 23 Jun 2022 10:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4K7u-0003CE-F0; Thu, 23 Jun 2022 10:31:54 +0000
Received: by outflank-mailman (input) for mailman id 354811;
 Thu, 23 Jun 2022 10:31:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4K7t-0003C1-9d; Thu, 23 Jun 2022 10:31:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4K7t-0002bT-4V; Thu, 23 Jun 2022 10:31:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4K7s-0001jK-GY; Thu, 23 Jun 2022 10:31:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4K7s-0000sF-G9; Thu, 23 Jun 2022 10:31:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+lUhDxLrTKtAHvIyTwEoQwS0AGqMbYZxy8UZw7yhmJ8=; b=5uFLXHOY6/tlTWQqo90bqDwAS7
	lSkr66IcfCHPWQ/Z3WMOoot8U7gbvAUXhSrdWgCwesKo7kiMdNFWHO8sWXz38nraiVMf/1aEgLHx5
	Zg9LMe9txxM8pKzcIxAVynR5FK8FkUA0Ys6fRBjhIJwqOYw90ekSfrWmCc08C//TyPLA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171319-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171319: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
X-Osstest-Versions-That:
    xen=15d93068e3484cb14006e935734a1e6088f228fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 10:31:52 +0000

flight 171319 xen-unstable real [real]
flight 171326 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171319/
http://logs.test-lab.xenproject.org/osstest/logs/171326/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  7 xen-install      fail pass in 171326-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171310
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171310
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171310
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171310
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171310
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171310
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171310
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171310
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171310
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171310
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171310
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171310
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b
baseline version:
 xen                  15d93068e3484cb14006e935734a1e6088f228fd

Last test of basis   171310  2022-06-22 14:08:51 Z    0 days
Testing same since   171319  2022-06-23 01:39:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   15d93068e3..65f684b728  65f684b728f779e170335e9e0cbbf82f7e1c7e5b -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 10:45:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 10:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354821.582128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4KLG-0004mM-UH; Thu, 23 Jun 2022 10:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354821.582128; Thu, 23 Jun 2022 10:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4KLG-0004mF-RW; Thu, 23 Jun 2022 10:45:42 +0000
Received: by outflank-mailman (input) for mailman id 354821;
 Thu, 23 Jun 2022 10:45:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4KLF-0004m5-Im; Thu, 23 Jun 2022 10:45:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4KLF-0002qp-AU; Thu, 23 Jun 2022 10:45:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4KLF-00029n-2P; Thu, 23 Jun 2022 10:45:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4KLF-0005ax-1z; Thu, 23 Jun 2022 10:45:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3X5PC/8gBXBvJnmskpnTfdDAoizPDVqSKDqwnUAaErc=; b=6aMcERfqlJFcjR4bWDw22aLF5D
	WSVmUq3viGGkS3g9BT2EF9rL47O9sFN0C/3rlSi6x8Gl7FQWVwJchG1rn1FmglQKp58o+itt7BTAU
	QO6LUUS3CtEV+FMbXD6U3ls38tIPK7BacwMofLhAgjTCaJtA1hE+H1Ux+SA+T0S0Vabw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171325-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171325: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=61ac7919a6a38a24d26fd1b57a2511beb0724e99
X-Osstest-Versions-That:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 10:45:41 +0000

flight 171325 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171325/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  61ac7919a6a38a24d26fd1b57a2511beb0724e99
baseline version:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b

Last test of basis   171314  2022-06-22 18:01:50 Z    0 days
Testing same since   171325  2022-06-23 08:01:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Michal Orzel <michal.orzel@arm.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   65f684b728..61ac7919a6  61ac7919a6a38a24d26fd1b57a2511beb0724e99 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:24:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354831.582141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Kwc-0000d6-02; Thu, 23 Jun 2022 11:24:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354831.582141; Thu, 23 Jun 2022 11:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Kwb-0000cz-TQ; Thu, 23 Jun 2022 11:24:17 +0000
Received: by outflank-mailman (input) for mailman id 354831;
 Thu, 23 Jun 2022 11:24:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4Kwa-0000ct-O6
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:24:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4Kwa-0003UT-JK; Thu, 23 Jun 2022 11:24:16 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4Kwa-0001It-A7; Thu, 23 Jun 2022 11:24:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=48pd6qT6ElSdCjI12mEb8x+anpsLHh/bkyVkl9bWtvE=; b=S/QWtA
	Pd932QAyJKv2pvGMz5HdLQm+zyKJzWxCuubDwmIHDP/2X472orvJO4sfQlVoEQb7019+m0E8Ci7hy
	YyHnZx8gp1GCLvgHLC1wZUDC6V7jdCoWqw6OJgl/PmI7x3wYVdMxGmyYagPNt1Ku/nS+bwFzYIQHl
	LXLinF78L00=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/xenstored: Harden corrupt()
Date: Thu, 23 Jun 2022 12:24:07 +0100
Message-Id: <20220623112407.13604-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, corrupt() is neither checking for allocation failure
nor freeing the allocated memory.

Harden the code by printing "ENOMEM" if the allocation fails and by
freeing 'str' after its last use.

This is not considered to be a security issue because corrupt() should
only be called when Xenstored thinks the database is corrupted. Note
that the trigger itself (i.e. a guest reliably provoking the call)
would be a security issue.

Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic ability to recover from store")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index fa733e714e9a..b6279bdfe229 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2065,7 +2065,11 @@ void corrupt(struct connection *conn, const char *fmt, ...)
 	va_end(arglist);
 
 	log("corruption detected by connection %i: err %s: %s",
-	    conn ? (int)conn->id : -1, strerror(saved_errno), str);
+	    conn ? (int)conn->id : -1, strerror(saved_errno),
+	    str ? str : "ENOMEM");
+
+	if (str)
+		talloc_free(str);
 
 	check_store();
 }
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:28:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:28:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354837.582153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4L0x-0001HA-HW; Thu, 23 Jun 2022 11:28:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354837.582153; Thu, 23 Jun 2022 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4L0x-0001H3-Ei; Thu, 23 Jun 2022 11:28:47 +0000
Received: by outflank-mailman (input) for mailman id 354837;
 Thu, 23 Jun 2022 11:28:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4L0w-0001Gu-50
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:28:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4096908-f2e7-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 13:28:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6F19D1F994;
 Thu, 23 Jun 2022 11:28:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4127E13461;
 Thu, 23 Jun 2022 11:28:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id dr6EDmxOtGLHZQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 11:28:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4096908-f2e7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655983724; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+00qCXe7hDN9wYDblsGuNcErdjw2EFhuqfdmI5V723k=;
	b=fargDTtVjAniKj7dw4Xco45nMsHOgRggTlqKOYTmRYr9hs4GMa5F9Yh8bTJ+HBMGg4Bv1O
	LhvRRbAM+YyFkLeV3uedpT/BNRnP7Co9+UKUdPubuNTicWxMKpkBEFMtfw54MC5x5VrYwq
	qUWPqr64VwY61Eby+Atr8xh/Z/GDWW0=
Message-ID: <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
Date: Thu, 23 Jun 2022 13:28:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220623112407.13604-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220623112407.13604-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------s46Q8hVdPpF9cwpjVnblTuSM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------s46Q8hVdPpF9cwpjVnblTuSM
Content-Type: multipart/mixed; boundary="------------rWlG0HTXLrMjiqSDd9wti2CU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
References: <20220623112407.13604-1-julien@xen.org>
In-Reply-To: <20220623112407.13604-1-julien@xen.org>

--------------rWlG0HTXLrMjiqSDd9wti2CU
Content-Type: multipart/mixed; boundary="------------cCFzKXoew5MRVpUlnABX4vlf"

--------------cCFzKXoew5MRVpUlnABX4vlf
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 13:24, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, corrupt() is neither checking for allocation failure
> nor freeing the allocated memory.
> 
> Harden the code by printing ENOMEM if the allocation failed and
> free 'str' after the last use.
> 
> This is not considered to be a security issue because corrupt() should
> only be called when Xenstored thinks the database is corrupted. Note
> that the trigger (i.e. a guest reliably provoking the call) would be
> a security issue.
> 
> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic ability to recover from store")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   tools/xenstore/xenstored_core.c | 6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index fa733e714e9a..b6279bdfe229 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2065,7 +2065,11 @@ void corrupt(struct connection *conn, const char *fmt, ...)
>   	va_end(arglist);
>   
>   	log("corruption detected by connection %i: err %s: %s",
> -	    conn ? (int)conn->id : -1, strerror(saved_errno), str);
> +	    conn ? (int)conn->id : -1, strerror(saved_errno),
> +	    str ? str : "ENOMEM");
> +
> +	if (str)

No need for the "if". talloc_free() handles NULL quite fine.

> +		talloc_free(str);
>   
>   	check_store();
>   }

With above fixed:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------cCFzKXoew5MRVpUlnABX4vlf--

--------------rWlG0HTXLrMjiqSDd9wti2CU--

--------------s46Q8hVdPpF9cwpjVnblTuSM--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:35:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354845.582164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4L6p-0002hf-6J; Thu, 23 Jun 2022 11:34:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354845.582164; Thu, 23 Jun 2022 11:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4L6p-0002hY-3U; Thu, 23 Jun 2022 11:34:51 +0000
Received: by outflank-mailman (input) for mailman id 354845;
 Thu, 23 Jun 2022 11:34:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4L6n-0002hS-Id
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:34:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4L6m-0003f2-P5; Thu, 23 Jun 2022 11:34:48 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4L6m-0001mj-HG; Thu, 23 Jun 2022 11:34:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=bK2JAADBh0ne/mDR7Qxe91B6gbBy8QN9GkgXdJZkp40=; b=OW3OLjW1D/8hmyK3Aja3szTgww
	+Hj307KraCgh9djtB/ATBgwbCNx1SWiZvJfTWUlH5Z0XcBRI9GPGTnGkP08Ac5YHf3RzdHD18Povw
	g4gLSrwMB6qUrmKznf1GIZa37soXKwqtT3b4kpcXv0GOlgxi9K/HF+RXvKiHRJkGKkWo=;
Message-ID: <113420b2-816c-1753-6260-04eb0caa4a47@xen.org>
Date: Thu, 23 Jun 2022 12:34:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220623112407.13604-1-julien@xen.org>
 <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 23/06/2022 12:28, Juergen Gross wrote:
> On 23.06.22 13:24, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, corrupt() is neither checking for allocation failure
>> nor freeing the allocated memory.
>>
>> Harden the code by printing ENOMEM if the allocation failed and
>> free 'str' after the last use.
>>
>> This is not considered to be a security issue because corrupt() should
>> only be called when Xenstored thinks the database is corrupted. Note
>> that the trigger (i.e. a guest reliably provoking the call) would be
>> a security issue.
>>
>> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic 
>> ability to recover from store")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   tools/xenstore/xenstored_core.c | 6 +++++-
>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c 
>> b/tools/xenstore/xenstored_core.c
>> index fa733e714e9a..b6279bdfe229 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -2065,7 +2065,11 @@ void corrupt(struct connection *conn, const 
>> char *fmt, ...)
>>       va_end(arglist);
>>       log("corruption detected by connection %i: err %s: %s",
>> -        conn ? (int)conn->id : -1, strerror(saved_errno), str);
>> +        conn ? (int)conn->id : -1, strerror(saved_errno),
>> +        str ? str : "ENOMEM");
>> +
>> +    if (str)
> 
> No need for the "if". talloc_free() handles NULL quite fine.

In my original approach, I wasn't checking "if (str)". When I looked at 
talloc_free(), I noticed it would return -1 (i.e. the memory wasn't 
freed) when str is NULL.

I also couldn't find any example in Xenstored where talloc_free() would 
be called with NULL. So I felt it was not a good idea to add one because 
talloc_free() technically returns a "failure".

That said, I would not strongly argue to keep it. So I am happy to drop 
the check if that's what you want.

> 
>> +        talloc_free(str);
>>       check_store();
>>   }
> 
> With above fixed:
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:47:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:47:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354851.582174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LIv-0004Cx-9l; Thu, 23 Jun 2022 11:47:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354851.582174; Thu, 23 Jun 2022 11:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LIv-0004Cq-6n; Thu, 23 Jun 2022 11:47:21 +0000
Received: by outflank-mailman (input) for mailman id 354851;
 Thu, 23 Jun 2022 11:47:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4LIt-0004Ck-Jn
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:47:19 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60063.outbound.protection.outlook.com [40.107.6.63])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b9e7471-f2ea-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 13:47:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8166.eurprd04.prod.outlook.com (2603:10a6:20b:3fa::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 11:47:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 11:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b9e7471-f2ea-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=llLr61od150z8QLEurreBm5PvY2U6s+PpYcrBXP2UNdUlmLYDq9IClSbFjiwrsGU2leV0RvoYHSlseYR13cUGM1dUty4bpG+kXUs13Ya7Dc0ygEA4okoHKY9EN/lHO/3oXnebglYCnvCy2ZI8T/d9rrDJT6AgiAvG8GIEq9X6kirtVSRqbFBm3zmDjyzSWH7YKYvW8E5gxaIReJUzvhOS1bOLDKiQjeSweOyYj+ojmitvUDOqgIkDEprzrnNI0Z8kM6Ip7FrI3txiDrF5c/2tnyTW2haxqrpVEt2oQJWR0GNWHSQz6z/VEslVN922+RgXSnNRTN3xCzv8Rk5271njA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5BjUup+pWpLmGiOXVtu4EvW2I88EGKM/DoGMVSRMXMA=;
 b=GjnAjKH59BHNuf37Rhkyr0r42i/KOm1sAIImG1RMpYmIt78DXehgsa+wRm/vs+wuGm6aL7mKIQzzZFHMRD8T8gRfKAmcG51A3ZRNdbfGZg+f5tpdAkLPNC9cFReY2P/mO3pXowlNS53JeA8H98SKAlVssu5M5BJrF+nacxnfZgCMGOvpJHgpD6Tl6lhdq1qQQnY2St2SGbZq1d2RHy/bDNCwuCkEcEQWJo4AqeIyun410YUt9xVsXWgtkDdEpu3qsZeF8b7mhNUXPdZZ0iJ5RzVwkyYB4EeYYIVEMoz4BT9XBmlqpIC6RB0g6olaah/Z+wpYpz5bIk62wbBLpYLjxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5BjUup+pWpLmGiOXVtu4EvW2I88EGKM/DoGMVSRMXMA=;
 b=nCNdIhMDiAtJ/huqNaj4ZLfqyrP7NXXIMs2wj2hBXTSg5z2s2iC7oS6DkaDSgsvXlfKnY0BqVz6eQNDBfqbNBxmZ0hWCHBq2TR3tFb3kOAB9Qt9bVFVFXnImxSTaU1pwa3GYcheFSdqLdLtLUuz8KgTqj0qtffk/1UmtJprAbB/2UleZav4crgRCyvrakp/VfLh+uQPzoGKCrwrlyFXgKAF8A3i87ztHpOxswUf9VKkbV6N2ehL49wpDMcZUKl0A2n/ipoDGzOCfO8sUtxN1tM7nPYjEAXlf7fxu5+7uiv4NOBYTgbGJmF6A+AKcLi2EqgeTeMQ7YOFp1stMmVfSZQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e6a07ee6-526f-6e6c-2abb-dc83eef22475@suse.com>
Date: Thu, 23 Jun 2022 13:47:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] x86: annotate entry points with type and size
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <531ab7f7-ce5a-12b2-e7e7-528c26f9ff7f@suse.com>
 <22df182a-762d-711e-5191-d4b628904085@citrix.com>
 <44589528-2655-a949-0fd9-f30b6f2fa09d@suse.com>
In-Reply-To: <44589528-2655-a949-0fd9-f30b6f2fa09d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P193CA0062.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:209:8e::39) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 511405dd-38e0-4c0c-3f6e-08da550e1eaf
X-MS-TrafficTypeDiagnostic: AS8PR04MB8166:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 511405dd-38e0-4c0c-3f6e-08da550e1eaf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 11:47:16.0613
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 067xamdaXtomaJJOSCEp5+mwzvmAK3X5NwUh6gVweDdioq4rDOV/2jEowr8vcTUExXxRzzCmGH7oh9y011T1uw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8166

On 14.04.2022 14:59, Jan Beulich wrote:
> On 14.04.2022 14:49, Andrew Cooper wrote:
>> On 12/04/2022 11:28, Jan Beulich wrote:
>>> Future gas versions will generate minimalistic Dwarf debug info for
>>> items annotated as functions and having their sizes specified [1].
>>> "Borrow" Arm's END() and ENDPROC() to avoid open-coding (and perhaps
>>> typo-ing) the respective directives.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> [1] https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=591cc9fbbfd6d51131c0f1d4a92e7893edcc7a28
>>
>> I'm conflicted by this change.
>>
>> You've clearly changed your mind since you rejected my patch introducing
>> this infrastructure and starting to use it.
> 
> Hmm, to be honest I don't recall me rejecting such work of yours.
> In fact I have always been in favor of properly typing symbols,
> where sensible and possible. I would therefore assume it was more
> the "how" than the "that" which I wasn't happy with. If you have
> a reference to the old thread to hand, I'd be interested in
> looking up what made me oppose back at the time.
> 
>> Given that it is a reoccurring bug with livepatching which has been in
>> need of fixing since 2018, I'd organised some work to port Linux's
>> linkage.h as something more likely to have been acceptable.
> 
> Taking what they've got would likely be fine as well. At least in
> a suitably stripped down manner (looking at their header they may
> have gone a little overboard with this).

Over two months have passed. I wonder whether I misunderstood your
reply: I took it to mean an alternative patch or series would be
posted. In the absence of that, and considering that you said you did
want such annotations anyway, I wonder what stands in the way of these
two patches making it in.

Jan
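
The annotations being discussed boil down to emitting .type and .size
directives for each assembly entry point. A rough sketch of Arm-style
END()/ENDPROC() macros, for illustration only -- the exact definitions
in the patch may differ:

```c
/* Illustrative only -- modeled on Arm's macros, not the actual patch. */
#define END(name)     \
        .size name, . - name   /* record symbol extent */

#define ENDPROC(name) \
        .type name, @function; /* @function on x86; Arm spells it %function */ \
        END(name)

/*
 * Usage after an assembly routine:
 *     ENTRY(some_entry_point)
 *             ...
 *     ENDPROC(some_entry_point)
 *
 * With .type and .size present, newer gas can emit minimal DWARF for the
 * routine, and tooling (e.g. livepatch symbol handling) can see where each
 * function ends.
 */
```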


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:53:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354858.582186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LOO-0005gz-60; Thu, 23 Jun 2022 11:53:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354858.582186; Thu, 23 Jun 2022 11:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LOO-0005gq-1G; Thu, 23 Jun 2022 11:53:00 +0000
Received: by outflank-mailman (input) for mailman id 354858;
 Thu, 23 Jun 2022 11:52:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4LON-0005gk-HJ
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:52:59 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60076.outbound.protection.outlook.com [40.107.6.76])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 050628fc-f2eb-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 13:52:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR0402MB3523.eurprd04.prod.outlook.com (2603:10a6:208:1b::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 11:52:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 11:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 050628fc-f2eb-11ec-b725-ed86ccbb4733
Message-ID: <5d6c927e-7d7c-5754-e7eb-65d1e70f6222@suse.com>
Date: Thu, 23 Jun 2022 13:52:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86/p2m: type checking adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0510.eurprd06.prod.outlook.com
 (2603:10a6:20b:49b::35) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

While the first change is a bug fix (for, admittedly, a case which
apparently hasn't occurred in practice, or else we would have had
bug reports), it already puts in place an instance of what the
second patch proposes for perhaps wider use.

1: make p2m_get_page_from_gfn() handle grant and shared cases better
2: aid the compiler in folding p2m_is_...()

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:53:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354864.582197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LPL-0006Ee-GN; Thu, 23 Jun 2022 11:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354864.582197; Thu, 23 Jun 2022 11:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LPL-0006EX-Cu; Thu, 23 Jun 2022 11:53:59 +0000
Received: by outflank-mailman (input) for mailman id 354864;
 Thu, 23 Jun 2022 11:53:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4LPJ-0006EL-Vj
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:53:57 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2070.outbound.protection.outlook.com [40.107.20.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29649367-f2eb-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 13:53:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR0402MB3523.eurprd04.prod.outlook.com (2603:10a6:208:1b::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 23 Jun
 2022 11:53:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 11:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29649367-f2eb-11ec-bd2d-47488cf2e6aa
Message-ID: <8d5df1f4-74ca-27cc-99f0-7e7a82050de1@suse.com>
Date: Thu, 23 Jun 2022 13:53:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 1/2] x86/p2m: make p2m_get_page_from_gfn() handle grant
 case correctly
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <5d6c927e-7d7c-5754-e7eb-65d1e70f6222@suse.com>
In-Reply-To: <5d6c927e-7d7c-5754-e7eb-65d1e70f6222@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0485.eurprd06.prod.outlook.com
 (2603:10a6:20b:49b::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Grant P2M entries, which are covered by p2m_is_any_ram(), wouldn't pass
the get_page() unless the grant was a local one. These need to take the
same path as foreign entries; only the assertion there is not valid for
local grants, and hence triggering it needs to be avoided.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Using | instead of || helps the compiler fold the two p2m_is_*().
---
v2: The shared case was fine; limit to grant adjustment.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -357,11 +357,11 @@ struct page_info *p2m_get_page_from_gfn(
              && !((q & P2M_UNSHARE) && p2m_is_shared(*t)) )
         {
             page = mfn_to_page(mfn);
-            if ( unlikely(p2m_is_foreign(*t)) )
+            if ( unlikely(p2m_is_foreign(*t) | p2m_is_grant(*t)) )
             {
                 struct domain *fdom = page_get_owner_and_reference(page);
 
-                ASSERT(fdom != p2m->domain);
+                ASSERT(!p2m_is_foreign(*t) || fdom != p2m->domain);
                 if ( fdom == NULL )
                     page = NULL;
             }
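
To make the intended semantics concrete, here is a minimal sketch of the
check established above, using hypothetical, simplified type masks and
plain ints standing in for Xen's real p2m_type_t and struct domain (the
real p2m_to_mask() and P2M_*_TYPES definitions live in Xen's p2m
headers):

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for Xen's p2m type machinery. */
typedef enum {
    p2m_ram_rw,
    p2m_grant_map_rw,
    p2m_map_foreign,
} p2m_type_t;

#define p2m_to_mask(t)    (1U << (t))
#define P2M_GRANT_TYPES   p2m_to_mask(p2m_grant_map_rw)
#define P2M_FOREIGN_TYPES p2m_to_mask(p2m_map_foreign)

#define p2m_is_grant(t)   (p2m_to_mask(t) & P2M_GRANT_TYPES)
#define p2m_is_foreign(t) (p2m_to_mask(t) & P2M_FOREIGN_TYPES)

/*
 * Mirrors the adjusted assertion: a foreign entry must be owned by a
 * domain other than the local one, while a grant entry may be owned
 * either locally or remotely.  Domains are modeled as plain ints.
 */
static int owner_constraint_holds(p2m_type_t t, int self, int owner)
{
    if ( !(p2m_is_foreign(t) | p2m_is_grant(t)) )
        return 1;  /* plain RAM: no owner constraint to check */
    return !p2m_is_foreign(t) || owner != self;
}
```

I.e. the reference-taking path is now shared by grant and foreign
entries, while the assertion only constrains the foreign case.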



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 11:54:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 11:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354869.582208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LQ6-0006my-Og; Thu, 23 Jun 2022 11:54:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354869.582208; Thu, 23 Jun 2022 11:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4LQ6-0006mr-Lf; Thu, 23 Jun 2022 11:54:46 +0000
Received: by outflank-mailman (input) for mailman id 354869;
 Thu, 23 Jun 2022 11:54:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4LQ4-0006eP-WD
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 11:54:45 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60070.outbound.protection.outlook.com [40.107.6.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43f2ea9e-f2eb-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 13:54:41 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB5745.eurprd04.prod.outlook.com (2603:10a6:208:128::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 23 Jun
 2022 11:54:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 11:54:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43f2ea9e-f2eb-11ec-b725-ed86ccbb4733
Message-ID: <7cce89f4-962e-bfbe-7d30-18fea7515bed@suse.com>
Date: Thu, 23 Jun 2022 13:54:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: [PATCH v2 2/2] x86/p2m: aid the compiler in folding p2m_is_...()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <5d6c927e-7d7c-5754-e7eb-65d1e70f6222@suse.com>
In-Reply-To: <5d6c927e-7d7c-5754-e7eb-65d1e70f6222@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P191CA0044.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:209:7f::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

By using | instead of || (or, in the negated form, &&), the chances
increase that the compiler recognizes that both predicates can actually
be folded into an expression requiring just a single branch (by OR-ing
together the respective P2M_*_TYPES constants).
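
As an illustration, with hypothetical, simplified single-bit masks (not
Xen's real P2M_*_TYPES definitions), the fold amounts to:

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for Xen's p2m_to_mask() and the
 * P2M_*_TYPES constants. */
typedef enum {
    p2m_ram_rw,
    p2m_ram_paging_out,
    p2m_populate_on_demand,
    p2m_mmio_direct,
} p2m_type_t;

#define p2m_to_mask(t)   (1U << (t))
#define P2M_RAM_TYPES    p2m_to_mask(p2m_ram_rw)
#define P2M_PAGING_TYPES p2m_to_mask(p2m_ram_paging_out)
#define P2M_POD_TYPES    p2m_to_mask(p2m_populate_on_demand)

#define p2m_is_ram(t)    (p2m_to_mask(t) & P2M_RAM_TYPES)
#define p2m_is_paging(t) (p2m_to_mask(t) & P2M_PAGING_TYPES)
#define p2m_is_pod(t)    (p2m_to_mask(t) & P2M_POD_TYPES)

/*
 * The |-combined predicates are equivalent to a single AND against the
 * OR of the masks, i.e. one test and one branch instead of three:
 */
static unsigned int folded(p2m_type_t t)
{
    return p2m_to_mask(t) &
           (P2M_RAM_TYPES | P2M_PAGING_TYPES | P2M_POD_TYPES);
}
```

With || the three tests form three sequence points, which the compiler
may not combine; with | they are one side-effect-free expression.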

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: The 3-way checks look to be a general problem for gcc, but even in
     some 2-way cases it doesn't manage to fold the expressions. Hence
     it's worth considering going further with this transformation, as
     long as the idea isn't disliked in general.
---
v2: Re-base over change to earlier patch.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -379,7 +379,7 @@ struct page_info *p2m_get_page_from_gfn(
             return page;
 
         /* Error path: not a suitable GFN at all */
-        if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) &&
+        if ( !(p2m_is_ram(*t) | p2m_is_paging(*t) | p2m_is_pod(*t)) &&
              !mem_sharing_is_fork(p2m->domain) )
             return NULL;
     }
@@ -568,7 +568,7 @@ p2m_remove_entry(struct p2m_domain *p2m,
     for ( i = 0; i < (1UL << page_order); ++i )
     {
         p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
-        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        if ( !(p2m_is_hole(t) | p2m_is_special(t) | p2m_is_shared(t)) )
         {
             set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
             paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_x(gfn) + i));
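The intended folding can be illustrated with a small self-contained
sketch; the mask values and predicate macros below are simplified
stand-ins for Xen's real p2m type definitions, not the actual ones:

```c
#include <assert.h>

/* Simplified stand-ins: each p2m "type class" is a bit in a mask, and
 * each p2m_is_*() predicate is a mask test yielding 0 or 1. */
#define P2M_RAM_TYPES    (1u << 0)
#define P2M_PAGING_TYPES (1u << 1)
#define P2M_POD_TYPES    (1u << 2)

#define p2m_is_ram(t)    (((t) & P2M_RAM_TYPES) != 0)
#define p2m_is_paging(t) (((t) & P2M_PAGING_TYPES) != 0)
#define p2m_is_pod(t)    (((t) & P2M_POD_TYPES) != 0)

/* Logical form: three short-circuiting predicates, potentially three
 * branches in the generated code. */
int none_of_logical(unsigned int t)
{
    return !p2m_is_ram(t) && !p2m_is_paging(t) && !p2m_is_pod(t);
}

/* Bitwise form: each predicate evaluates to 0 or 1, so | is equivalent
 * to || here, and the compiler can fold the whole expression into one
 * mask test, i.e.
 *     !(t & (P2M_RAM_TYPES | P2M_PAGING_TYPES | P2M_POD_TYPES)). */
int none_of_bitwise(unsigned int t)
{
    return !(p2m_is_ram(t) | p2m_is_paging(t) | p2m_is_pod(t));
}
```

Both functions agree for every input; the bitwise variant merely gives
the optimizer an easier job (one test and one branch instead of up to
three).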



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:00:11 2022
Message-ID: <5a8a8f01-03cb-b84a-bbca-9c5f6d2ea9cc@suse.com>
Date: Thu, 23 Jun 2022 14:00:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Ping: [PATCH 0/2] x86/P2M: allow 2M superpage use for shadowed guests
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tim Deegan <tim@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
In-Reply-To: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 09.12.2021 12:25, Jan Beulich wrote:
> I did notice this anomaly in the context of IOMMU side work.
> 
> 1: shadow: slightly consolidate sh_unshadow_for_p2m_change()
> 2: P2M: allow 2M superpage use for shadowed guests

This has been pending for over half a year. Anyone?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:03:20 2022
Message-ID: <9150fbde-2cbb-06e8-aefa-658e0940a411@suse.com>
Date: Thu, 23 Jun 2022 14:03:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Ping: [PATCH v2] x86: correct asm() constraints when dealing with
 immediate selector values
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2c04b2aa-d41f-0a2c-6045-2d37a6fac53e@suse.com>
In-Reply-To: <2c04b2aa-d41f-0a2c-6045-2d37a6fac53e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.09.2021 08:26, Jan Beulich wrote:
> asm() constraints need to fit both the intended insn(s) with which the
> respective operands are going to be used and the actual kind of value
> specified. "m" (alone) together with a constant, however, leads
> to gcc saying
> 
> error: memory input <N> is not directly addressable
> 
> while clang complains
> 
> error: invalid lvalue in asm input for constraint 'm'
> 
> And rightly so - in order to access a memory operand, an address needs
> to be specified to the insn. In some cases it might be possible for a
> compiler to synthesize a memory operand holding the requested constant,
> but I think any solution there would have sharp edges.
> 
> If "m" alone doesn't work with constants, it is at best pointless (and
> perhaps misleading or even risky - the compiler might decide to actually
> pick "m" and not try very hard to find a suitable register) to specify
> it alongside "r". And indeed clang does pick "m", oddly enough despite
> its objection to "m" alone. Which means that there the change also
> improves the generated code.
> 
> While there also switch the two operand case to using named operands.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Use named operands in do_double_fault().

This has been pending for over 9 months. May I ask for feedback?

Thanks, Jan

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -736,7 +736,7 @@ void __init detect_zen2_null_seg_behavio
>  	uint64_t base;
>  
>  	wrmsrl(MSR_FS_BASE, 1);
> -	asm volatile ( "mov %0, %%fs" :: "rm" (0) );
> +	asm volatile ( "mov %0, %%fs" :: "r" (0) );
>  	rdmsrl(MSR_FS_BASE, base);
>  
>  	if (base == 0)
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -248,7 +248,8 @@ void do_double_fault(struct cpu_user_reg
>  
>      console_force_unlock();
>  
> -    asm ( "lsll %1, %0" : "=r" (cpu) : "rm" (PER_CPU_SELECTOR) );
> +    asm ( "lsll %[sel], %[limit]" : [limit] "=r" (cpu)
> +                                  : [sel] "r" (PER_CPU_SELECTOR) );
>  
>      /* Find information saved during fault and dump it to the console. */
>      printk("*** DOUBLE FAULT ***\n");



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:06:05 2022
Message-ID: <96c919fe-dba1-d1c4-10e9-b73800f96cea@gmail.com>
Date: Thu, 23 Jun 2022 15:05:54 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [Viryaos-discuss] [ImageBuilder] [PATCH 2/2] uboot-script-gen:
 Enable direct mapping of statically allocated memory
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220619124316.378365-1-burzalodowa@gmail.com>
 <20220619124316.378365-2-burzalodowa@gmail.com>
 <5cd7ee29-d43a-1302-0a0b-6b4c339a96da@amd.com>
 <797bf441-9d7a-7eb8-4e90-787398acf726@amd.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <797bf441-9d7a-7eb8-4e90-787398acf726@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan!

On 6/23/22 13:02, Ayan Kumar Halder wrote:
> (Resending mail, as the previous delivery failed)
>
> On 21/06/2022 12:34, Ayan Kumar Halder wrote:
>> Hi,
>>
>> On 19/06/2022 13:43, Xenia Ragiadakou wrote:
>>> Direct mapping for dom0less VMs is disabled by default in XEN and
>>> can be enabled through the 'direct-map' property.
>>> Add a new config parameter DOMU_DIRECT_MAP to be able to enable or
>>> disable direct mapping, i.e. set to 1 for enabling and to 0 for
>>> disabling. This parameter is optional. Direct mapping is enabled by
>>> default for all dom0less VMs with static allocation.
>>>
>>> The property 'direct-map' is a boolean property. Boolean properties
>>> are true if present and false if missing.
>>> Add a new data_type 'bool' in function dt_set() to setup a boolean
>>> property.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>> ---
>>>   README.md                |  4 ++++
>>>   scripts/uboot-script-gen | 18 ++++++++++++++++++
>>>   2 files changed, 22 insertions(+)
>>>
>>> diff --git a/README.md b/README.md
>>> index c52e4b9..17ff206 100644
>>> --- a/README.md
>>> +++ b/README.md
>>> @@ -168,6 +168,10 @@ Where:
>>>     if specified, indicates the host physical address regions
>>>     [baseaddr, baseaddr + size) to be reserved to the VM for static
>>>     allocation.
>>>
>>> +- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>>> +  If set to 1, the VM is direct mapped. The default is 1.
>>> +  This is only applicable when DOMU_STATIC_MEM is specified.
>>
>> Can't we just use $DOMU_STATIC_MEM to set "direct-map" in dts ?
>>
>> Is there a valid use-case for static allocation without direct 
>> mapping ? Sorry, it is not very clear to me.
>>
Thank you for taking the time to review the patch!

I agree with you that static allocation without direct mapping is not a 
common configuration; that's why, in the script, direct mapping is 
enabled by default.

My reasoning was that, since direct mapping is not enabled by default in 
XEN for all domUs with static allocation but instead requires the 
'direct-map' property to be present in the domU dt node, such a 
configuration is still valid.
I thought that with this parameter it is much easier to set up (and 
test) both configurations.
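The boolean-property handling described above can be sketched in isolation. This is a minimal sketch, not the real script: `dt_set` below is a simplified stand-in for the helper in uboot-script-gen, and `DOMU_DIRECT_MAP_0` is a hypothetical flattened config variable.

```shell
# A device tree boolean property is true if present and false if
# missing, so "enabled" means emitting the property with no value,
# and "disabled" means emitting nothing at all.
dt_set()
{
    path="$1"; prop="$2"; type="$3"; val="$4"
    if test "$type" = "bool"
    then
        # boolean: the property name alone, no value
        echo "fdt set $path $prop"
    else
        echo "fdt set $path $prop \"$val\""
    fi
}

# Hypothetical: direct mapping enabled (the default) for domU 0.
DOMU_DIRECT_MAP_0=1

if test "$DOMU_DIRECT_MAP_0" -eq 1
then
    dt_set "/chosen/domU0" "direct-map" "bool" ""
fi
```

Run under a POSIX shell this prints `fdt set /chosen/domU0 direct-map`; with the flag set to 0 the property is simply omitted, which device tree semantics treat as false.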


Xenia



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:23:22 2022
Message-ID: <2842ffc9-776a-d80b-b7c9-f1a21ed6d517@gmail.com>
Date: Thu, 23 Jun 2022 15:23:11 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [Viryaos-discuss] [ImageBuilder] [PATCH 1/2] uboot-script-gen:
 Skip dom0 instead of exiting if DOM0_KERNEL is not set
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220619124316.378365-1-burzalodowa@gmail.com>
 <80dc865b-f053-d9c9-b8d4-efb19abb2e49@amd.com>
 <94cca06c-8439-ec92-3bc7-8f2a69929beb@amd.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <94cca06c-8439-ec92-3bc7-8f2a69929beb@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/23/22 13:01, Ayan Kumar Halder wrote:
> (Resending mail, as the previous delivery failed)
>
> On 21/06/2022 11:57, Ayan Kumar Halder wrote:
>> 1. In README.md, please mention in the 'DOM0_KERNEL' description that 
>> it is an optional parameter. When the user does not provide it, the 
>> script generates a dom0less configuration.

Sure. This patch has already been merged upstream, so I will address it 
with a follow-up patch.

>>
>> 2. In uboot-script-gen, please provide a check like this:
>>
>> if test -z "$DOM0_KERNEL"
>> then
>>     if test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_COLORS" ||
>>        test "$DOM0_CMD" || test "$DOM0_RAMDISK" || test "$DOM0_ROOTFS"
>>     then
>>         echo "For a dom0less configuration, DOM0_MEM, DOM0_VCPUS," \
>>              "DOM0_COLORS, DOM0_CMD, DOM0_RAMDISK and DOM0_ROOTFS" \
>>              "should not be defined"
>>         exit 1
>>     fi
>> fi
>>
>> The reason is that the user should not be allowed to provide an 
>> incorrect configuration.
>>
Sure. I will address it with a follow-up patch.

>> 3. Please test the patch for both dom0 and dom0less configurations, 
>> as such a change might break something. :)
>>
>> - Ayan
>>
>> On 19/06/2022 13:43, Xenia Ragiadakou wrote:
>>> When the parameter DOM0_KERNEL is not specified and NUM_DOMUS is not 0,
>>> instead of failing the script, just skip any dom0-specific setup.
>>> This way the script can be used to boot XEN in dom0less mode.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>> ---
>>>   scripts/uboot-script-gen | 60 
>>> ++++++++++++++++++++++++++++------------
>>>   1 file changed, 43 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>>> index 455b4c0..bdc8a6b 100755
>>> --- a/scripts/uboot-script-gen
>>> +++ b/scripts/uboot-script-gen
>>> @@ -168,10 +168,15 @@ function xen_device_tree_editing()
>>>       dt_set "/chosen" "#address-cells" "hex" "0x2"
>>>       dt_set "/chosen" "#size-cells" "hex" "0x2"
>>>       dt_set "/chosen" "xen,xen-bootargs" "str" "$XEN_CMD"
>>> -    dt_mknode "/chosen" "dom0"
>>> -    dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage 
>>> xen,multiboot-module multiboot,module"
>>> -    dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 
>>> $(printf "0x%x" $dom0_kernel_size)"
>>> -    dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
>>> +
>>> +    if test "$DOM0_KERNEL"
>>> +    then
>>> +        dt_mknode "/chosen" "dom0"
>>> +        dt_set "/chosen/dom0" "compatible" "str_a" 
>>> "xen,linux-zimage xen,multiboot-module multiboot,module"
>>> +        dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 
>>> 0x0 $(printf "0x%x" $dom0_kernel_size)"
>>> +        dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
>>> +    fi
>>> +
>>>       if test "$DOM0_RAMDISK" && test $ramdisk_addr != "-"
>>>       then
>>>           dt_mknode "/chosen" "dom0-ramdisk"
>>> @@ -203,7 +208,10 @@ function xen_device_tree_editing()
>>>               add_device_tree_static_mem "/chosen/domU$i" 
>>> "${DOMU_STATIC_MEM[$i]}"
>>>           fi
>>>           dt_set "/chosen/domU$i" "vpl011" "hex" "0x1"
>>> -        dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
>>> +        if test "$DOM0_KERNEL"
>>> +        then
>>> +            dt_set "/chosen/domU$i" "xen,enhanced" "str" "enabled"
>>> +        fi
>>>             if test "${DOMU_COLORS[$i]}"
>>>           then
>>> @@ -433,6 +441,19 @@ function xen_config()
>>>               DOM0_CMD="$DOM0_CMD root=$root_dev"
>>>           fi
>>>       fi
>>> +    if test -z "$DOM0_KERNEL"
>>> +    then
>>> +        if test "$NUM_DOMUS" -eq "0"
>>> +        then
>>> +            echo "Neither dom0 or domUs are specified, exiting."
>>> +            exit 1
>>> +        fi
>>> +        echo "Dom0 kernel is not specified, continue with dom0less 
>>> setup."
>>> +        unset DOM0_RAMDISK
>>> +        # Remove dom0 specific parameters from the XEN command line.
>>> +        local params=($XEN_CMD)
>>> +        XEN_CMD="${params[@]/dom0*/}"
>>> +    fi
>>>       i=0
>>>       while test $i -lt $NUM_DOMUS
>>>       do
>>> @@ -490,11 +511,13 @@ generate_uboot_images()
>>>     xen_file_loading()
>>>   {
>>> -    check_compressed_file_type $DOM0_KERNEL "executable"
>>> -    dom0_kernel_addr=$memaddr
>>> -    load_file $DOM0_KERNEL "dom0_linux"
>>> -    dom0_kernel_size=$filesize
>>> -
>>> +    if test "$DOM0_KERNEL"
>>> +    then
>>> +        check_compressed_file_type $DOM0_KERNEL "executable"
>>> +        dom0_kernel_addr=$memaddr
>>> +        load_file $DOM0_KERNEL "dom0_linux"
>>> +        dom0_kernel_size=$filesize
>>> +    fi
>>>       if test "$DOM0_RAMDISK"
>>>       then
>>>           check_compressed_file_type $DOM0_RAMDISK "cpio archive"
>>> @@ -597,14 +620,16 @@ bitstream_load_and_config()
>>>     create_its_file_xen()
>>>   {
>>> -    if test "$ramdisk_addr" != "-"
>>> +    if test "$DOM0_KERNEL"
>>>       then
>>> -        load_files="\"dom0_linux\", \"dom0_ramdisk\""
>>> -    else
>>> -        load_files="\"dom0_linux\""
>>> -    fi
>>> -    # xen below
>>> -    cat >> "$its_file" <<- EOF
>>> +        if test "$ramdisk_addr" != "-"
>>> +        then
>>> +            load_files="\"dom0_linux\", \"dom0_ramdisk\""
>>> +        else
>>> +            load_files="\"dom0_linux\""
>>> +        fi
>>> +        # xen below
>>> +        cat >> "$its_file" <<- EOF
>>>           dom0_linux {
>>>               description = "dom0 linux kernel binary";
>>>               data = /incbin/("$DOM0_KERNEL");
>>> @@ -616,6 +641,7 @@ create_its_file_xen()
>>>               $fit_algo
>>>           };
>>>       EOF
>>> +    fi
>>>       # domUs
>>>       i=0
>>>       while test $i -lt $NUM_DOMUS

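The XEN command line cleanup in the xen_config() hunk above relies on bash pattern substitution over an array. A minimal stand-alone sketch of that trick follows; the sample command line is made up for illustration.

```shell
#!/bin/bash
# Splitting XEN_CMD into an array and applying ${params[@]/dom0*/}
# empties every element that starts with "dom0", e.g. dom0_mem=...
# and dom0_max_vcpus=..., while leaving the other parameters intact.
XEN_CMD="console=dtuart dom0_mem=1G dom0_max_vcpus=2 sched=null"

params=($XEN_CMD)
XEN_CMD="${params[@]/dom0*/}"

# The dom0-specific parameters are gone; the leftover extra whitespace
# is harmless on the Xen command line.
echo "$XEN_CMD"
```

Note this requires bash (arrays and pattern substitution are not POSIX sh), which uboot-script-gen already uses.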

From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:28:16 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/2] x86/P2M: allow 2M superpage use for shadowed
 guests
Date: Thu, 23 Jun 2022 12:28:03 +0000
Message-ID: <F9BEFBCF-F4E3-43C4-9BF9-87E5FA36637E@citrix.com>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
 <5a8a8f01-03cb-b84a-bbca-9c5f6d2ea9cc@suse.com>
In-Reply-To: <5a8a8f01-03cb-b84a-bbca-9c5f6d2ea9cc@suse.com>

> On 23 Jun 2022, at 13:00, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 09.12.2021 12:25, Jan Beulich wrote:
>> I did notice this anomaly in the context of IOMMU side work.
>> 
>> 1: shadow: slightly consolidate sh_unshadow_for_p2m_change()
>> 2: P2M: allow 2M superpage use for shadowed guests
> 
> This has been pending for over half a year. Anyone?

I can put it on a list of things to look at tomorrow.

 -George


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:42:15 2022
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
Thread-Topic: [PATCH] tools/xenstored: Harden corrupt()
Thread-Index: AQHYhvPL6UjRJR/HKUCr4NOCggNq961c2uKAgAAUbIA=
Date: Thu, 23 Jun 2022 12:41:49 +0000
Message-ID: <1b0ea627-f325-b290-4159-57aa50d1f713@citrix.com>
References: <20220623112407.13604-1-julien@xen.org>
 <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
In-Reply-To: <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6921d816-8fdb-4974-ac7b-08da5515bdd2
x-ms-traffictypediagnostic: SJ0PR03MB5870:EE_
x-microsoft-antispam-prvs:
 <SJ0PR03MB58706F14FFCC0319D7561C3FBAB59@SJ0PR03MB5870.namprd03.prod.outlook.com>
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <27B277B80163E94498F7A75232122E0D@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6921d816-8fdb-4974-ac7b-08da5515bdd2
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jun 2022 12:41:49.3252
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: vr0cczENZhlvmeuDJyq7YtQ3iBPZUzIZQaSbtCNmz2YK3xBmXhCwf+QPOAQnhqHhnzZBPDkH398NXSVNINtOCo1NJD9DNBYRj5Vvl+5xsto=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5870

On 23/06/2022 12:28, Juergen Gross wrote:
> On 23.06.22 13:24, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, corrupt() is neither checking for allocation failure
>> nor freeing the allocated memory.
>>
>> Harden the code by printing ENOMEM if the allocation failed and
>> free 'str' after the last use.
>>
>> This is not considered to be a security issue because corrupt() should
>> only be called when Xenstored thinks the database is corrupted. Note
>> that the trigger (i.e. a guest reliably provoking the call) would be
>> a security issue.
>>
>> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic
>> ability to recover from store")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   tools/xenstore/xenstored_core.c | 6 +++++-
>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c
>> b/tools/xenstore/xenstored_core.c
>> index fa733e714e9a..b6279bdfe229 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -2065,7 +2065,11 @@ void corrupt(struct connection *conn, const
>> char *fmt, ...)
>>       va_end(arglist);
>>         log("corruption detected by connection %i: err %s: %s",
>> -        conn ? (int)conn->id : -1, strerror(saved_errno), str);
>> +        conn ? (int)conn->id : -1, strerror(saved_errno),
>> +        str ? str : "ENOMEM");

str ?: "ENOMEM"

seeing as we use this idiom in a lot of places.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:45:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354919.582284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MD8-00074z-H8; Thu, 23 Jun 2022 12:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354919.582284; Thu, 23 Jun 2022 12:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MD8-00074s-EO; Thu, 23 Jun 2022 12:45:26 +0000
Received: by outflank-mailman (input) for mailman id 354919;
 Thu, 23 Jun 2022 12:45:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4MD6-00074m-JY
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 12:45:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4MD5-0004uw-KS; Thu, 23 Jun 2022 12:45:23 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4MD5-0005Rt-EN; Thu, 23 Jun 2022 12:45:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=vOj3x5eG7FKkJAAA0ZKuy5E0QU6ItYiViCuAxsdpCBE=; b=Xr296QODDGxAfE2eNydTLqgghu
	WvxN1akgMcmoK9im5zWxYzrCA4RR1Idj0yq5/RVTTaTgi1gOplOjYdfVwVzPPuWx14pyM+xs6qwL9
	d9K7fIfXY4WMogd1LH5tHsEur0MY7rfQhwjQkTXP8VbGVS4L2mPdlMfjSNNFmdU6yXBY=;
Message-ID: <04a16301-b7d0-3a62-7e71-0cfbfc74d377@xen.org>
Date: Thu, 23 Jun 2022 13:45:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20220623112407.13604-1-julien@xen.org>
 <1168a21d-80be-1a9a-6a7b-7732a7126bf0@suse.com>
 <1b0ea627-f325-b290-4159-57aa50d1f713@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1b0ea627-f325-b290-4159-57aa50d1f713@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 23/06/2022 13:41, Andrew Cooper wrote:
> On 23/06/2022 12:28, Juergen Gross wrote:
>> On 23.06.22 13:24, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> At the moment, corrupt() is neither checking for allocation failure
>>> nor freeing the allocated memory.
>>>
>>> Harden the code by printing ENOMEM if the allocation failed and
>>> free 'str' after the last use.
>>>
>>> This is not considered to be a security issue because corrupt() should
>>> only be called when Xenstored thinks the database is corrupted. Note
>>> that the trigger (i.e. a guest reliably provoking the call) would be
>>> a security issue.
>>>
>>> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic
>>> ability to recover from store")
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ---
>>>    tools/xenstore/xenstored_core.c | 6 +++++-
>>>    1 file changed, 5 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c
>>> b/tools/xenstore/xenstored_core.c
>>> index fa733e714e9a..b6279bdfe229 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -2065,7 +2065,11 @@ void corrupt(struct connection *conn, const
>>> char *fmt, ...)
>>>        va_end(arglist);
>>>          log("corruption detected by connection %i: err %s: %s",
>>> -        conn ? (int)conn->id : -1, strerror(saved_errno), str);
>>> +        conn ? (int)conn->id : -1, strerror(saved_errno),
>>> +        str ? str : "ENOMEM");
> 
> str ?: "ENOMEM"

Sure. I have updated the patch and committed it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:48:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354927.582296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MFn-00082B-9l; Thu, 23 Jun 2022 12:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354927.582296; Thu, 23 Jun 2022 12:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MFn-000824-6j; Thu, 23 Jun 2022 12:48:11 +0000
Received: by outflank-mailman (input) for mailman id 354927;
 Thu, 23 Jun 2022 12:48:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1C=W6=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o4MFm-00081q-4X
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 12:48:10 +0000
Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com
 [2607:f8b0:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb2c6cdf-f2f2-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 14:48:08 +0200 (CEST)
Received: by mail-pg1-x532.google.com with SMTP id 9so758047pgd.7
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jun 2022 05:48:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb2c6cdf-f2f2-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=7QM3IbvgTeKSF1AeqGPT8Oqd/hSxzNXQVDL5ZIkzm6I=;
        b=H68DIjkg3HLdsH9efZ789irYXfiqTASoTolbsj4LiPa7FajORo8fhJMaz5eCZDUOHW
         n3CPHeXQ2Ubn8OZpndaZK4wHf5JYRaP1mxrc8+dFIKKP0AVqZCNyswSIkxWK7okmwiZz
         WMRUgQiwt+m1RDPUZaWcXH86wqb6XsOpGYMozVr2741f/52VyiO1sDWpFMDZ3/SrFoBv
         6uISYCbJ1Kw2c9EaBL1835gghRr8YLY6Ubk8ef6qTN4wUBkK6mxOuMgpZv4Kbq2cWTQf
         H3iEMUVgVxm6a9QhrzNiqEw5x5Y6OUrLTRqx0cNSrv9csiYv5LM6PeeyLJ+l0/I51MG+
         3+Tw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=7QM3IbvgTeKSF1AeqGPT8Oqd/hSxzNXQVDL5ZIkzm6I=;
        b=2s2Y3viJvjw8FoPzBPURynRSfKtZbiO6bHK5UYkANSMEnSxBjVLaC7a1X4+thQEfzj
         MDb9KfEqJkdVGAAsKVSiJFO+OqvK5EOnwnWvAHxiuMKed7alsDihwzrpgvEFEAfvzP9a
         We35N5obZZcqtwqEebV6vDI17Cr4zi9qOMLv4pB9UFB/3RX5eYITrkynXH+x1C/XPOXt
         fGkdqDOdGwOdWLAgm0cBQjNIF/h7pr2otqLAjrQy7584ifq+Exs36CT/nJ//LmSgwtne
         xBmjCe40X1f3ex/JJy8ByngvxPa6aKg9GKY0zohrPbbMab6X6eeYAkI7ivfTIpbtDLsJ
         +6SA==
X-Received: by 2002:a63:3409:0:b0:40c:9736:287 with SMTP id
 b9-20020a633409000000b0040c97360287mr7592789pga.14.1655988487104; Thu, 23 Jun
 2022 05:48:07 -0700 (PDT)
MIME-Version: 1.0
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7> <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>
 <20220622114950.lpidph5ugvozhbu5@vireshk-i7> <CAPD2p-kFeC8FygFcbpEbH3CzrAM7Td+G68t9ebOFR4V0w1dpEQ@mail.gmail.com>
 <20220623054819.do25phfuumnexw73@vireshk-i7>
In-Reply-To: <20220623054819.do25phfuumnexw73@vireshk-i7>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Thu, 23 Jun 2022 15:47:55 +0300
Message-ID: <CAPD2p-=OMDMqdV27E2jTTcE0gx1eiT+9TqLeOVH2u6YfwT_8pg@mail.gmail.com>
Subject: Re: Virtio on Xen with Rust
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Stratos Mailing List <stratos-dev@op-lists.linaro.org>, 
	=?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Stefano Stabellini <stefano.stabellini@xilinx.com>, 
	Mathieu Poirier <mathieu.poirier@linaro.com>, Vincent Guittot <vincent.guittot@linaro.org>, 
	Mike Holmes <mike.holmes@linaro.org>, Wei Liu <wl@xen.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, Juergen Gross <jgross@suse.com>, 
	Julien Grall <julien@xen.org>
Content-Type: multipart/alternative; boundary="00000000000086ab8e05e21cdd24"

--00000000000086ab8e05e21cdd24
Content-Type: text/plain; charset="UTF-8"

Hello Viresh

[sorry for the possible format issues]

On Thu, Jun 23, 2022 at 8:48 AM Viresh Kumar <viresh.kumar@linaro.org>
wrote:

> On 22-06-22, 18:05, Oleksandr Tyshchenko wrote:
> > Even leaving
> > aside the fact that restricted virtio memory access in the guest means
> that
> > not all of guest memory can be accessed, so even having pre-mapped guest
> > memory in advance, we are not able to calculate a host pointer as we
> don't
> > know which gpa the particular grant belongs to.
>
> Ahh, I clearly missed that as well. We can't simply convert the
> address here on the requests :(
>


Exactly. The grant represents the granted guest page, but the backend
doesn't know the guest physical address of that page, and it shouldn't know
it; that is the point.
So the backend can only map granted pages, i.e. those for which the guest
has explicitly called dma_map_*(). Moreover, currently the backend shouldn't
keep them mapped longer than necessary, for example to cache mappings.
Otherwise, when calling dma_unmap_*() the guest will notice that the grant
is still in use by the backend and complain.



>
> > I am not sure that I understand this use-case.
> > Well, let's consider the virtio-disk example, it demonstrates three
> > possible memory mapping modes:
> > 1. All addresses are gpa, map/unmap at runtime using foreign mappings
> > 2. All addresses are gpa, map in advance using foreign mappings
> > 3. All addresses are grants, only map/unmap at runtime using grants
> mappings
> >
> > If you are asking about #4 which would imply map in advance together with
> > using grants then I think, no. This won't work with the current stuff.
> > These are conflicting opinions, either grants and map at runtime or gpa
> and
> > map in advance.
> > If there is a wish to optimize when using grants then "maybe" it is worth
> > looking into how persistent grants work for PV block device for example
> > (feature-persistent in blkif.h).
>
> I thought #4 may make it work for our setup, but it isn't what we need
> necessarily.
>
> The deal is that we want hypervisor agnostic backends, they won't and
> shouldn't know what hypervisor they are running against. So ideally,
> no special handling.
>

I see and agree


>
> To make it work, the simplest of the solutions can be to map all that
> we need in advance, when the vhost negotiations happen and memory
> regions are passed to the backend. It doesn't necessarily mean mapping
> entire guest, but just the regions we need.


> With what I have understood about grants until now, I don't think it
> will work straight away.
>

yes

Below is my understanding, which might be wrong.

I am not sure about x86, as there are some subtleties with its guest modes
(for example, PV guests should always use grants for virtio), but on Arm
(where the guest type is HVM):
1. If you run backend(s) in dom0 which is trusted by default, you don't
necessarily need to use grants for the virtio so you will be able to map
what you need in advance using foreign mappings.
2. If you run backend(s) in another domain *which you trust* and you don't
want to use grants for the virtio, I think, you also will be able to map in
advance using foreign mappings, but for that you will need a security
policy to allow your backend's domain to map arbitrary guest pages.
3. If you run backend(s) in a non-trusted domain, you will have to use grants
for the virtio, so there is no way to map in advance; you can only map at
runtime what was previously granted by the guest and unmap it right after
use.

There is another method to restrict the backend without modifying the guest,
which is CONFIG_DMA_RESTRICTED_POOL in Linux, but this involves a memcpy in
the guest and requires some support in the toolstack to make it work. I
wouldn't suggest it, as the usage of grants for the virtio is better (and
already upstream).

Regarding your previous attempt to map 512MB by using grants, what I
understand from the error message is that Xen complains that the passed
grant ref is bigger than the current number of grant table entries.
Now I am wondering where these 0x40000 - 0x5ffff grant refs (which the
backend tries to map in a single call) come from: were they really granted
previously by the guest and passed to the backend in a single request?
If the answer is yes, then what does gnttab_usage_print_all() say (key 'g'
in the Xen console)? I expect there should be a lot of Xen messages like
"common/grant_table.c:1882:d2v3 Expanding d2 grant table from 28 to 29
frames". Do you see them?



>
> > Yes, this is the correct environment. Please note that Juergen has
> recently
> > pushed new version [1]
>
> Yeah, I am following them up, will test the one you all agree on :)
>
> Thanks.
>
> --
> viresh
>


-- 
Regards,

Oleksandr Tyshchenko

--00000000000086ab8e05e21cdd24--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:54:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354934.582307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MLQ-0002aL-NZ; Thu, 23 Jun 2022 12:54:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354934.582307; Thu, 23 Jun 2022 12:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MLQ-0002aE-Jm; Thu, 23 Jun 2022 12:54:00 +0000
Received: by outflank-mailman (input) for mailman id 354934;
 Thu, 23 Jun 2022 12:53:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4MLP-0002a8-82
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 12:53:59 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10053.outbound.protection.outlook.com [40.107.1.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b726a72-f2f3-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 14:53:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM6PR04MB6598.eurprd04.prod.outlook.com (2603:10a6:20b:fe::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 23 Jun
 2022 12:53:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 12:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b726a72-f2f3-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Mcv18ceQNl32sTf34KHRDEcfxbOjbBVxPy7/ESfq/CI=;
 b=jGTcplUnr2kff6/v5qW/R+lKKr07Cx0E8O3rH1l2sce5L90nbKuUNaNx2YX80mqeQ+Yrc+mg+p3Ssk9rsFcZZMTyww0vTM2hwD/nB5Yx0+f/NDfz8TrEDzZLDL9h6pAgLWLZ/gbGQR5l40IBfNH34HMSOcd/D1YS6U9JpPRqVNRJICKzPPk85HTa3t7P6j5ersJR4YlLFYTEvmu001/+uv2aLwtwgX+XIFdCBCl488XhtBZDq+hTQa0fyumND1hQelq74Yg3ClxOnNSq07y6PVzbV+PZP5H4Fc0KGUW4kihnMyA5/LKA9DgP0KY/+CUmh3LkVXykyWYGXpfff6gtGg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
Date: Thu, 23 Jun 2022 14:53:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Content-Language: en-US
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <jiamei.xie@arm.com>, xen-devel@lists.xenproject.org
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220610055316.2197571-2-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P191CA0065.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:209:7f::42) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 10a66582-7643-43ac-e2d1-08da55176e7e
X-MS-TrafficTypeDiagnostic: AM6PR04MB6598:EE_
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10a66582-7643-43ac-e2d1-08da55176e7e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 12:53:55.4327
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ysFubP2iZx7lDTlQuyrG+2NJB7auD5tbcC8NckCGTzYw9xUfFQtbbu8+3VZCmQCxdzpx+XF01J7T58XQymJWNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR04MB6598

On 10.06.2022 07:53, Wei Chen wrote:
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -1,6 +1,5 @@
>  obj-$(CONFIG_ARM_32) += arm32/
>  obj-$(CONFIG_ARM_64) += arm64/
> -obj-$(CONFIG_ARM_64) += efi/
>  obj-$(CONFIG_ACPI) += acpi/
>  obj-$(CONFIG_HAS_PCI) += pci/
>  ifneq ($(CONFIG_NO_PLAT),y)
> @@ -20,6 +19,7 @@ obj-y += domain.o
>  obj-y += domain_build.init.o
>  obj-y += domctl.o
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> +obj-y += efi/
>  obj-y += gic.o
>  obj-y += gic-v2.o
>  obj-$(CONFIG_GICV3) += gic-v3.o
> --- a/xen/arch/arm/efi/Makefile
> +++ b/xen/arch/arm/efi/Makefile
> @@ -1,4 +1,12 @@
>  include $(srctree)/common/efi/efi-common.mk
>  
> +ifeq ($(CONFIG_ARM_EFI),y)
>  obj-y += $(EFIOBJ-y)
>  obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
> +else
> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
> +# will not be cleaned in "make clean".
> +EFIOBJ-y += stub.o
> +obj-y += stub.o
> +endif

This has caused

ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is to use 4-byte wchar_t; use of wchar_t values across objects may fail

for the 32-bit Arm build that I keep doing every once in a while, with
(if it matters) GNU ld 2.38. I guess you will want to consider building
all of Xen with -fshort-wchar, or avoiding building stub.c with that
option.
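For illustration, the second option could look something like the sketch
below. This assumes a kbuild-style per-object CFLAGS_REMOVE_<object> hook;
Xen's Rules.mk may spell this differently, so treat it as a sketch of the
idea rather than a drop-in change:

```make
# Hypothetical sketch: keep -fshort-wchar for the real EFI objects, but
# drop it for stub.o so the non-EFI Arm build links with the default
# 4-byte wchar_t throughout.
# Assumes a kbuild-style CFLAGS_REMOVE_<object> mechanism is available.
include $(srctree)/common/efi/efi-common.mk

ifeq ($(CONFIG_ARM_EFI),y)
obj-y += $(EFIOBJ-y)
obj-$(CONFIG_ACPI) += efi-dom0.init.o
else
# stub.o is also added to EFIOBJ-y so efi-common.mk's clean-files
# cover the stub.c link in arm/efi.
EFIOBJ-y += stub.o
obj-y += stub.o
CFLAGS_REMOVE_stub.o := -fshort-wchar
endif
```

The alternative Jan mentions first, building all of Xen with -fshort-wchar,
would instead add the flag to the global CFLAGS and keep wchar_t sizes
consistent across every object.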

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:57:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354941.582321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MOS-00041d-NH; Thu, 23 Jun 2022 12:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354941.582321; Thu, 23 Jun 2022 12:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MOS-00041W-Kd; Thu, 23 Jun 2022 12:57:08 +0000
Received: by outflank-mailman (input) for mailman id 354941;
 Thu, 23 Jun 2022 12:57:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4MOQ-00041Q-S4
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 12:57:06 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fba89110-f2f3-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 14:57:05 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5564321B17;
 Thu, 23 Jun 2022 12:57:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1570113461;
 Thu, 23 Jun 2022 12:57:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Jf+GAyFjtGJqGQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 12:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fba89110-f2f3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655989025; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IwQIdTPOL0hcDLLiZO+ZnH1H9mqh4iSUrszqHqXfKJc=;
	b=BNfNzZISdwN6dTOvSVYKGZafp2YfIvseJA0TCW0PiO4whijnaYKYDM8hVcwHRGoZ4QTfVk
	leCP35Oaw2HuejLPN93Q5mccJnMwjK/L6qfssXBkWzjy3y7dtoelT1JpXUNNaRmZXuX2EG
	bIeKBExMFKB5AvuTchIeVoPJ5rAmt6E=
Message-ID: <88ca4ce6-0f8a-0084-9074-7e72a341536b@suse.com>
Date: Thu, 23 Jun 2022 14:57:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v4] xen/gntdev: Avoid blocking in unmap_grant_pages()
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 David Vrabel <david.vrabel@citrix.com>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20220622022726.2538-1-demi@invisiblethingslab.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220622022726.2538-1-demi@invisiblethingslab.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------MjLJ9LQCj5UmEMxFvomyfUd0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------MjLJ9LQCj5UmEMxFvomyfUd0
Content-Type: multipart/mixed; boundary="------------CM7Lbh2cN3VZwePM0uCnSORv";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 David Vrabel <david.vrabel@citrix.com>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Message-ID: <88ca4ce6-0f8a-0084-9074-7e72a341536b@suse.com>
Subject: Re: [PATCH v4] xen/gntdev: Avoid blocking in unmap_grant_pages()
References: <20220622022726.2538-1-demi@invisiblethingslab.com>
In-Reply-To: <20220622022726.2538-1-demi@invisiblethingslab.com>

--------------CM7Lbh2cN3VZwePM0uCnSORv
Content-Type: multipart/mixed; boundary="------------EZrvq5EHS1RGY5UKrUtgHHZI"

--------------EZrvq5EHS1RGY5UKrUtgHHZI
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22.06.22 04:27, Demi Marie Obenour wrote:
> unmap_grant_pages() currently waits for the pages to no longer be used.
> In https://github.com/QubesOS/qubes-issues/issues/7481, this lead to a
> deadlock against i915: i915 was waiting for gntdev's MMU notifier to
> finish, while gntdev was waiting for i915 to free its pages.  I also
> believe this is responsible for various deadlocks I have experienced in
> the past.
> 
> Avoid these problems by making unmap_grant_pages async.  This requires
> making it return void, as any errors will not be available when the
> function returns.  Fortunately, the only use of the return value is a
> WARN_ON(), which can be replaced by a WARN_ON when the error is
> detected.  Additionally, a failed call will not prevent further calls
> from being made, but this is harmless.
> 
> Because unmap_grant_pages is now async, the grant handle will be sent to
> INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
> handle.  Instead, a separate bool array is allocated for this purpose.
> This wastes memory, but stuffing this information in padding bytes is
> too fragile.  Furthermore, it is necessary to grab a reference to the
> map before making the asynchronous call, and release the reference when
> the call returns.
> 
> It is also necessary to guard against reentrancy in gntdev_map_put(),
> and to handle the case where userspace tries to map a mapping whose
> contents have not all been freed yet.
> 
> Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
> Cc: stable@vger.kernel.org
> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------EZrvq5EHS1RGY5UKrUtgHHZI--

--------------CM7Lbh2cN3VZwePM0uCnSORv--


--------------MjLJ9LQCj5UmEMxFvomyfUd0--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:59:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354949.582333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MQL-0004h4-9n; Thu, 23 Jun 2022 12:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354949.582333; Thu, 23 Jun 2022 12:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MQL-0004gx-5R; Thu, 23 Jun 2022 12:59:05 +0000
Received: by outflank-mailman (input) for mailman id 354949;
 Thu, 23 Jun 2022 12:59:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4MQK-0004gm-6i; Thu, 23 Jun 2022 12:59:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4MQK-0005AI-4z; Thu, 23 Jun 2022 12:59:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4MQJ-000139-KV; Thu, 23 Jun 2022 12:59:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4MQJ-0005De-Jj; Thu, 23 Jun 2022 12:59:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5FFHPTQq3zX6z8p02eXT+BIWucymWd8t2/56eg9iFOM=; b=gkep2dTNJjn0uMxQl8TixFIe6V
	dcsizi/yGg7EUtRRlAb5rCmpTIX01MqysHwsfiIWMOrpqI6Valk7XgQg9562Ds345vqUkyGl3AnR4
	nzTqp5KKNorcsrcwiYcFwlJwMgdTrTrWnMyZ4FD/U9Z59NY+GRKbmuYsMOsnopICd83o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171322-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171322: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    linux-linus:test-amd64-coresched-amd64-xl:host-install(5):broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=de5c208d533a46a074eb46ea17f672cc005a7269
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 12:59:03 +0000

flight 171322 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171322/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl  5 host-install(5)       broken REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-arm64-arm64-libvirt-raw 13 guest-start              fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                de5c208d533a46a074eb46ea17f672cc005a7269
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    4 days
Failing since        171280  2022-06-19 15:12:25 Z    3 days   12 attempts
Testing same since   171322  2022-06-23 03:32:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David Sterba <dsterba@suse.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Joel Savitz <jsavitz@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Nico Pache <npache@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>
  Yu Liao <liaoyu15@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-amd64-xl broken
broken-step test-amd64-coresched-amd64-xl host-install(5)

Not pushing.

(No revision log; it would be 2432 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 12:59:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 12:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354956.582344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MR7-0005EO-L3; Thu, 23 Jun 2022 12:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354956.582344; Thu, 23 Jun 2022 12:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MR7-0005EH-Hu; Thu, 23 Jun 2022 12:59:53 +0000
Received: by outflank-mailman (input) for mailman id 354956;
 Thu, 23 Jun 2022 12:59:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRa/=W6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4MR6-0005E7-0M
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 12:59:52 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2083.outbound.protection.outlook.com [40.107.21.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5dfd3c27-f2f4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 14:59:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7960.eurprd04.prod.outlook.com (2603:10a6:20b:2a8::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Thu, 23 Jun
 2022 12:59:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Thu, 23 Jun 2022
 12:59:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5dfd3c27-f2f4-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9IuVPsVWl/8sRCb1phdqUlF63oQpQLnaNXYBxHmCukA=;
 b=hZXyp0uxw3hxgsXhLMz+Ad/aKtf26Y7RRZpmyFe6m0xU45HYAaGE9ZKtsQQPlPxY5xl7+ITKHq9jZ5OZM42EEKDB8yJqkh/VnqOGkoYYYGHAGdzydORrfMJckNZ+hDkMhumc+C9WqF3N7CDAOXtm4XxUCsoAHDW20/Irna/oeBb5vVmp+17baQ3HI9sDyv3R1V2qoSiuxAFb7SHvSNU1QohNKt0NpX5/KVIiytK3cUPFnk7VXJcDBZpJ98SpUrxyYgxZuVev1kmBIJgZQ55+pP1qr8VUAyW2LwykBk5hK52uExdNUbIcj48s4cSjz2Y3pW0/nfXw0YeMPOe1U1OJ1A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d0de3b7b-fdb4-716c-227d-5fee024d8fd9@suse.com>
Date: Thu, 23 Jun 2022 14:59:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20220623112407.13604-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623112407.13604-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR0601CA0057.eurprd06.prod.outlook.com
 (2603:10a6:206::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 48ce1317-e7b6-4593-9040-08da551840cd
X-MS-TrafficTypeDiagnostic: AS8PR04MB7960:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-Microsoft-Antispam-PRVS:
	<AS8PR04MB7960F9A024091CB7ECD6828CB3B59@AS8PR04MB7960.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 48ce1317-e7b6-4593-9040-08da551840cd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 12:59:48.2386
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2UsPh0PEskxRysT+H3md+2D+XWYdOoxPzWCS4R9MBcqT/jBnA661STZq7oG/ZMfaaQ0y1G43CnHiqKoz01Rm8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7960

On 23.06.2022 13:24, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, corrupt() is neither checking for allocation failure
> nor freeing the allocated memory.
> 
> Harden the code by printing ENOMEM if the allocation failed and
> free 'str' after the last use.
> 
> This is not considered to be a security issue because corrupt() should
> only be called when Xenstored thinks the database is corrupted. Note
> that the trigger (i.e. a guest reliably provoking the call) would be
> a security issue.
> 
> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic ability to recover from store")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Is this something which would want queuing for backport?

Jan
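
The hardening pattern the commit message describes (tolerate a failed allocation by printing ENOMEM, and free the buffer after its last use) can be sketched as below. This is a minimal illustration only, not xenstored's actual corrupt(): the function name and message format are hypothetical, and vasprintf() is a GNU extension.

```c
#define _GNU_SOURCE             /* for vasprintf() */
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Format and log a corruption message. Returns 0 on success, -1 if the
 * allocation failed (in which case "ENOMEM" is logged instead).
 */
static int report_corrupt(const char *fmt, ...)
{
    char *str = NULL;
    va_list ap;
    int ret;

    va_start(ap, fmt);
    if (vasprintf(&str, fmt, ap) < 0)
        str = NULL;             /* allocation failed: don't use the buffer */
    va_end(ap);

    /* Print ENOMEM rather than dereferencing a NULL pointer. */
    fprintf(stderr, "corruption detected: %s\n", str ? str : "ENOMEM");

    ret = str ? 0 : -1;
    free(str);                  /* free after last use; free(NULL) is a no-op */
    return ret;
}
```

On the allocation-failure path nothing is leaked and nothing NULL is dereferenced, which is the whole point of the hardening: the error path of an error reporter must itself be safe.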


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:00:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354961.582355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MRY-0006US-3w; Thu, 23 Jun 2022 13:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354961.582355; Thu, 23 Jun 2022 13:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MRY-0006UL-0k; Thu, 23 Jun 2022 13:00:20 +0000
Received: by outflank-mailman (input) for mailman id 354961;
 Thu, 23 Jun 2022 13:00:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4MRW-0005E7-2g
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:00:18 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d8c35ab-f2f4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 15:00:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8046E21D51;
 Thu, 23 Jun 2022 13:00:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3A20B13461;
 Thu, 23 Jun 2022 13:00:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id biyuDOBjtGITGwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 13:00:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d8c35ab-f2f4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655989216; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=F5rp4biCzxhXRUFFWAT3ZfHsc7jcuLns1eT+TYiLgcU=;
	b=NqsE+R3A+G8jfvqHrFsMuoKCkpU3h16m/I3olzV12/gzWJYQnZry4r7vqoqr6NhswyO6h2
	H6QYZ7NaQx9JdHJiyXiAgpf+4bobmSp/YZsZ8Td5v6o2WBBo9q8xRUBq5rv/QmmTDOHPKw
	vTS61WJ9aH0xaigngd/NdcuTQGPeK/E=
Message-ID: <7ec07c28-479a-4fa6-cd9c-dcd0b71e3f42@suse.com>
Date: Thu, 23 Jun 2022 15:00:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20220601195341.28581-1-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------DRCs634Vc3nA0yR6EDITkIC9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------DRCs634Vc3nA0yR6EDITkIC9
Content-Type: multipart/mixed; boundary="------------l05qm07gdYiFOKB7dc7Wllsf";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Message-ID: <7ec07c28-479a-4fa6-cd9c-dcd0b71e3f42@suse.com>
Subject: Re: [PATCH] xen-blkfront: Handle NULL gendisk
References: <20220601195341.28581-1-jandryuk@gmail.com>
In-Reply-To: <20220601195341.28581-1-jandryuk@gmail.com>

--------------l05qm07gdYiFOKB7dc7Wllsf
Content-Type: multipart/mixed; boundary="------------OBLMJDStp9e1aD6PHxmVHL9f"

--------------OBLMJDStp9e1aD6PHxmVHL9f
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 01.06.22 21:53, Jason Andryuk wrote:
> When a VBD is not fully created and then closed, the kernel can have a
> NULL pointer dereference:
> 
> The reproducer is trivial:
> 
> [user@dom0 ~]$ sudo xl block-attach work backend=sys-usb vdev=xvdi target=/dev/sdz
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> 51712 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51712
> 51728 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51728
> 51744 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51744
> 51760 0   241    4     -1     -1       /local/domain/0/backend/vbd/241/51760
> 51840 3   241    3     -1     -1       /local/domain/3/backend/vbd/241/51840
>                   ^ note state, the /dev/sdz doesn't exist in the backend
> 
> [user@dom0 ~]$ sudo xl block-detach work xvdi
> [user@dom0 ~]$ xl block-list work
> Vdev  BE  handle state evt-ch ring-ref BE-path
> work is an invalid domain identifier
> 
> And its console has:
> 
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> PGD 80000000edebb067 P4D 80000000edebb067 PUD edec2067 PMD 0
> Oops: 0000 [#1] PREEMPT SMP PTI
> CPU: 1 PID: 52 Comm: xenwatch Not tainted 5.16.18-2.43.fc32.qubes.x86_64 #1
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Call Trace:
>   <TASK>
>   blkback_changed+0x95/0x137 [xen_blkfront]
>   ? read_reply+0x160/0x160
>   xenwatch_thread+0xc0/0x1a0
>   ? do_wait_intr_irq+0xa0/0xa0
>   kthread+0x16b/0x190
>   ? set_kthread_struct+0x40/0x40
>   ret_from_fork+0x22/0x30
>   </TASK>
> Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore ipt_REJECT nf_reject_ipv4 xt_state xt_conntrack nft_counter nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables nfnetlink intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel xen_netfront pcspkr xen_scsiback target_core_mod xen_netback xen_privcmd xen_gntdev xen_gntalloc xen_blkback xen_evtchn ipmi_devintf ipmi_msghandler fuse bpf_preload ip_tables overlay xen_blkfront
> CR2: 0000000000000050
> ---[ end trace 7bc9597fd06ae89d ]---
> RIP: 0010:blk_mq_stop_hw_queues+0x5/0x40
> Code: 00 48 83 e0 fd 83 c3 01 48 89 85 a8 00 00 00 41 39 5c 24 50 77 c0 5b 5d 41 5c 41 5d c3 c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 <8b> 47 50 85 c0 74 32 41 54 49 89 fc 55 53 31 db 49 8b 44 24 48 48
> RSP: 0018:ffffc90000bcfe98 EFLAGS: 00010293
> RAX: ffffffffc0008370 RBX: 0000000000000005 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000000
> RBP: ffff88800775f000 R08: 0000000000000001 R09: ffff888006e620b8
> R10: ffff888006e620b0 R11: f000000000000000 R12: ffff8880bff39000
> R13: ffff8880bff39000 R14: 0000000000000000 R15: ffff88800604be00
> FS:  0000000000000000(0000) GS:ffff8880f3300000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 00000000e932e002 CR4: 00000000003706e0
> Kernel panic - not syncing: Fatal exception
> Kernel Offset: disabled
> 
> info->rq and info->gd are only set in blkfront_connect(), which is
> called for state 4 (XenbusStateConnected).  Guard against using NULL
> variables in blkfront_closing() to avoid the issue.
> 
> The rest of blkfront_closing looks okay.  If info->nr_rings is 0, then
> for_each_rinfo won't do anything.
> 
> blkfront_remove also needs to check for non-NULL pointers before
> cleaning up the gendisk and request queue.
> 
> Fixes: 05d69d950d9d "xen-blkfront: sanitize the removal state machine"
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Pushed to xen/tip.git for-linus-5.19a


Juergen
--------------OBLMJDStp9e1aD6PHxmVHL9f
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------OBLMJDStp9e1aD6PHxmVHL9f--

--------------l05qm07gdYiFOKB7dc7Wllsf--

--------------DRCs634Vc3nA0yR6EDITkIC9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0Y98FAwAAAAAACgkQsN6d1ii/Ey+B
3Af+OlWAew+/rVw9xQNVq3C6Rxh/irHSZKtoDD1WkEH8nbIsVafo7VGa6jHIZF9V4oNWBk7b2D1Y
gtdrram4DlL3XBaDPpbR9LMY4sVcfJaNomL5p1zmK/fNEXgZcqpNMaeuHxoJTDUYabe38/6LJaYJ
OKMPFu6uyfEWI7z05EDDKLy0VtYy/GtlMn0YzMk1NcR0Oc7s1lIHfXthlNW69ixsEI3dPa9WKdj1
IZzm3IJQRYH00few3/lZNbZdPaanMaP/pZEMlNMPp2yqhCdIEEBotuWmJuTRBi7FK34M37j/cJYp
Xp7Xktepq6jBTUbhXhDS//yeDrcBIqGW+qqUVppZzQ==
=tToZ
-----END PGP SIGNATURE-----

--------------DRCs634Vc3nA0yR6EDITkIC9--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:00:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354966.582366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MS5-0007AI-F3; Thu, 23 Jun 2022 13:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354966.582366; Thu, 23 Jun 2022 13:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MS5-0007A9-BH; Thu, 23 Jun 2022 13:00:53 +0000
Received: by outflank-mailman (input) for mailman id 354966;
 Thu, 23 Jun 2022 13:00:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4MS4-00052x-Qp
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:00:52 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 825a715a-f2f4-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 15:00:51 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B3AC721D4F;
 Thu, 23 Jun 2022 13:00:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8296113461;
 Thu, 23 Jun 2022 13:00:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id l62MHgNktGKCGwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 13:00:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 825a715a-f2f4-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655989251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Ck5k3SHIMHj4mOayuM18vld237QYH8R/jheKuLxLngo=;
	b=iR/KWymyDUl0wteWFD5z+VqaqoKR/KEXT9HYfZa0jrfTLdXqmcOZIcAdTip/Pch6MB8DNp
	eccDUJ9h+3AP02AlmjbNdaLmJ9uZbjudfQSsQO3UIbiYGeT4jPJxuVMAe1JEfa7MZjm45E
	OgthX5NrliqX/RDSrrs6Myluwx5PFo4=
Message-ID: <25176d56-202e-f0cf-3c9b-f0db6d1be3d0@suse.com>
Date: Thu, 23 Jun 2022 15:00:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] x86/xen: Remove undefined behavior in setup_features()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
 linux-kernel@vger.kernel.org, Julien Grall <jgrall@amazon.com>
References: <20220617103037.57828-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220617103037.57828-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------iTXgncNXfr0kJBjHjy6qXvzX"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------iTXgncNXfr0kJBjHjy6qXvzX
Content-Type: multipart/mixed; boundary="------------lu2TEmGJrMdU49PAHw3pr4ov";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
 linux-kernel@vger.kernel.org, Julien Grall <jgrall@amazon.com>
Message-ID: <25176d56-202e-f0cf-3c9b-f0db6d1be3d0@suse.com>
Subject: Re: [PATCH] x86/xen: Remove undefined behavior in setup_features()
References: <20220617103037.57828-1-julien@xen.org>
In-Reply-To: <20220617103037.57828-1-julien@xen.org>

--------------lu2TEmGJrMdU49PAHw3pr4ov
Content-Type: multipart/mixed; boundary="------------ODtknW5UZS5HOi8VeL1t9pS5"

--------------ODtknW5UZS5HOi8VeL1t9pS5
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.06.22 12:30, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> 1 << 31 is undefined. So switch to 1U << 31.
> 
> Fixes: 5ead97c84fa7 ("xen: Core Xen implementation")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Pushed to xen/tip.git for-linus-5.19a


Juergen
--------------ODtknW5UZS5HOi8VeL1t9pS5--

--------------lu2TEmGJrMdU49PAHw3pr4ov--

--------------iTXgncNXfr0kJBjHjy6qXvzX
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0ZAMFAwAAAAAACgkQsN6d1ii/Ey/2
VAf+LYPXvPtignG7simvMluLPvNanox80o6G9g7+/rTLSxGKua6xzT/pvkyqBHBR5JvF+BlXkufM
cwDiZbY3e42dL6TYwdF3v71dE7XciNFWUSPgQl8eUI5+4UIFknL3OTrxUAkgjiBrK9VPDtXnqe55
meuaT41dvWjAk8NlsLxcU7VxsqQmWWvSEj6cFpO9SpI2eoa6rtICpP4a2zEpogfqhY4tjn7fldrr
5F1H+SDIfcUuZ4qEzJoC5GcxCcMhejLvClydbGJHlAg2HTh4lP+5OedT0gyOJaXv0hB5yAy+NXGr
KWbJZZCliRzcLa1cmrTe7Jttpq1g+NRQIXkouzA8eA==
=neYe
-----END PGP SIGNATURE-----

--------------iTXgncNXfr0kJBjHjy6qXvzX--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:01:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:01:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354971.582377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MSa-0007kS-Ne; Thu, 23 Jun 2022 13:01:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354971.582377; Thu, 23 Jun 2022 13:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MSa-0007kL-KL; Thu, 23 Jun 2022 13:01:24 +0000
Received: by outflank-mailman (input) for mailman id 354971;
 Thu, 23 Jun 2022 13:01:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4MSa-0007bG-0C
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:01:24 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 953fbf30-f2f4-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 15:01:23 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1FC221F919;
 Thu, 23 Jun 2022 13:01:23 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DC33313461;
 Thu, 23 Jun 2022 13:01:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 7AR3NCJktGK7GwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 13:01:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 953fbf30-f2f4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655989283; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jRAMK4H9VufbaUiYFMKr2eIlefiWX6ewJxyHep4f/XY=;
	b=JTQ7L2mMh2XWT33x8HDuNc4taDE1H0Q0HrXH1Hg3ifuiwzzqJAUhdf/1zmiHD6lEWMj4vJ
	alRFbRvmrT2kbf9vbgWzyWIBotqFv0t4illQ/3AkUpGMh4dCtl8/34OmwOafJmxWuOl6Pu
	NPAIsX84j9BU/84fdyv9tZsFJ77FcLQ=
Message-ID: <c138b4b1-334e-a025-5e9d-b0268a87c09b@suse.com>
Date: Thu, 23 Jun 2022 15:01:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH] drm/xen: Add missing VM_DONTEXPAND flag in mmap callback
Content-Language: en-US
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>
References: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------6DUnQpAN8lflTtrHV0i7OL6y"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------6DUnQpAN8lflTtrHV0i7OL6y
Content-Type: multipart/mixed; boundary="------------DCDpdEXNrYQUNz07he00L9Y0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>
Message-ID: <c138b4b1-334e-a025-5e9d-b0268a87c09b@suse.com>
Subject: Re: [PATCH] drm/xen: Add missing VM_DONTEXPAND flag in mmap callback
References: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1652104303-5098-1-git-send-email-olekstysh@gmail.com>

--------------DCDpdEXNrYQUNz07he00L9Y0
Content-Type: multipart/mixed; boundary="------------pu2M1o1cMI1x7ICNDMtN7tho"

--------------pu2M1o1cMI1x7ICNDMtN7tho
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 09.05.22 15:51, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> With Xen PV Display driver in use the "expected" VM_DONTEXPAND flag
> is not set (neither explicitly nor implicitly), so the driver hits
> the code path in drm_gem_mmap_obj() which triggers the WARNING.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Pushed to xen/tip.git for-linus-5.19a


Juergen
--------------pu2M1o1cMI1x7ICNDMtN7tho--

--------------DCDpdEXNrYQUNz07he00L9Y0--

--------------6DUnQpAN8lflTtrHV0i7OL6y
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0ZCIFAwAAAAAACgkQsN6d1ii/Ey9x
+Af+MQ2D0no/fEtmPq82OAyMEdS1oBH1e+m0aoX9RZ++somCEdLRNKmnpo9k7Y1w1VY/f23u0SQI
u5a6I78CEE7Gn3fB2Wmx9Pk789Q1Qc8A5VnQx6dAC94NUSb5AuZCbZN/7+PNvNejSuZSlTSJ9u3P
TARbOoIPgCEQDpRJbTBpu177JRQ6+lgQVcHdD0y6X3D2tjV3gXqPECOi5b1D7lypPWCdgbcKtCSE
UqFTnmHVOXL7yBgTOx0yHzqLRm6RLe+DABB0iuhqn6yYwEgIpEjoNCLcq5TkDdHgrnlTRMmKVca1
N9qapKkmnUrAJeogRTufVi8UUo429jTZz2rdYZr1IA==
=xBaj
-----END PGP SIGNATURE-----

--------------6DUnQpAN8lflTtrHV0i7OL6y--


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:03:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354982.582387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MV1-00006I-9l; Thu, 23 Jun 2022 13:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354982.582387; Thu, 23 Jun 2022 13:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MV1-00006B-6G; Thu, 23 Jun 2022 13:03:55 +0000
Received: by outflank-mailman (input) for mailman id 354982;
 Thu, 23 Jun 2022 13:03:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4MV0-000065-A3
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:03:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4MUy-0005Ia-CV; Thu, 23 Jun 2022 13:03:52 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4MUy-0006Tr-4i; Thu, 23 Jun 2022 13:03:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=nc93QevzFFrvQG+OpgidGEp8N0UGZEg6+XCxC1OqZ+0=; b=rF7qsCcno9HEGVRv8+JeoVyjy5
	nm5g15ITUYaz2uQP3R//DARvU7EXE2ce9PMPJTD/jfXkCt/3uMdxfKUFwrRbEYNU6U7KRXysIZ20N
	PaaK6/kMV5M9G+/ItnkirkJWiFslxwGJRAqBRicA+JEoaaGPKKZQ1u+j0r4zMnrpBp9A=;
Message-ID: <de0ae18f-b0b7-c63c-ed03-d0260dfc4c1d@xen.org>
Date: Thu, 23 Jun 2022 14:03:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20220623112407.13604-1-julien@xen.org>
 <d0de3b7b-fdb4-716c-227d-5fee024d8fd9@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d0de3b7b-fdb4-716c-227d-5fee024d8fd9@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 23/06/2022 13:59, Jan Beulich wrote:
> On 23.06.2022 13:24, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, corrupt() is neither checking for allocation failure
>> nor freeing the allocated memory.
>>
>> Harden the code by printing ENOMEM if the allocation failed and
>> free 'str' after the last use.
>>
>> This is not considered to be a security issue because corrupt() should
>> only be called when Xenstored thinks the database is corrupted. Note
>> that the trigger (i.e. a guest reliably provoking the call) would be
>> a security issue.
>>
>> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic ability to recover from store")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Is this something which would want queuing for backport?

I would say yes. There are a couple more Xenstored patches I would 
consider for backporting:

fe9be76d880b tools/xenstore: fix error handling of check_store()
b977929d3646 tools/xenstore: fix hashtable_expand() zeroing new area

Who is taking care of tools backport nowadays?

Cheers,

-- 
Julien Grall
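The hardening described in the patch above (check the allocation, print ENOMEM on failure, free the string after its last use) can be sketched roughly as follows. This is an editor's illustration in plain C, not the actual xenstored code: log_msg(), the message text, and the use of asprintf() are stand-ins for xenstored's own logging and talloc-based allocation.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for xenstored's logging; records the last message. */
static char last_log[128];

static void log_msg(const char *s)
{
    snprintf(last_log, sizeof(last_log), "%s", s);
}

static void corrupt(const char *node)
{
    char *str = NULL;

    /* Check for allocation failure instead of passing a potentially
     * invalid pointer on. */
    if (asprintf(&str, "corruption detected at %s", node) < 0)
        str = NULL;

    /* Print ENOMEM if the allocation failed. */
    log_msg(str ? str : "ENOMEM");

    /* Free 'str' after its last use; free(NULL) is a no-op. */
    free(str);
}
```

The two fixes are independent: the NULL check prevents a crash on allocation failure, and the unconditional free() closes the leak on the success path.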


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:07:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354991.582399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MYT-0000ka-Oj; Thu, 23 Jun 2022 13:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354991.582399; Thu, 23 Jun 2022 13:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MYT-0000kT-L0; Thu, 23 Jun 2022 13:07:29 +0000
Received: by outflank-mailman (input) for mailman id 354991;
 Thu, 23 Jun 2022 13:07:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j28/=W6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4MYR-0000kN-RD
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:07:27 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6df1ac75-f2f5-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 15:07:26 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8471E1F998;
 Thu, 23 Jun 2022 13:07:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1F67C13461;
 Thu, 23 Jun 2022 13:07:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id qJEtBo5ltGLlHgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 23 Jun 2022 13:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6df1ac75-f2f5-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1655989646; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AJnpuwxN/FLeBtqA4SLXcqXAP5iBoZPO1+utrcEpndA=;
	b=HSB/H6Mp44vXYIMdrDPH1JIj3WR+EHHYSlcnSFQzUqMuCesy5BIXfxC+ncHlwchSbVgNcG
	eaNMurm86pGEJFRyrPljePnrR42Km8Jb3D57hx1fG19q90437jgo/7gs/HX126xTdjRzCl
	7R0Xy1BJz8q+VAjtA1mC25IjypVujgo=
Message-ID: <4abe4863-3e24-70bb-5a6d-619f1de30a14@suse.com>
Date: Thu, 23 Jun 2022 15:07:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 0/9] xen: drop hypercall function tables
To: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Christopher Clark <christopher.w.clark@gmail.com>,
 Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
 <dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <06edd55a-86f2-52e3-e275-ee928a956fdf@suse.com>
 <8baf689f-2a20-cf07-6878-9f9459063a25@suse.com>
 <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
Content-Language: en-US
In-Reply-To: <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------t0X2MtyaTQWCihLZQQ9pIgn0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------t0X2MtyaTQWCihLZQQ9pIgn0
Content-Type: multipart/mixed; boundary="------------pqKygV3Bj0X4PXsIN9plUa8L";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Christopher Clark <christopher.w.clark@gmail.com>,
 Dario Faggioli <dfaggioli@suse.com>, Daniel De Graaf
 <dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <4abe4863-3e24-70bb-5a6d-619f1de30a14@suse.com>
Subject: Re: [PATCH v6 0/9] xen: drop hypercall function tables
References: <20220324140139.5899-1-jgross@suse.com>
 <06edd55a-86f2-52e3-e275-ee928a956fdf@suse.com>
 <8baf689f-2a20-cf07-6878-9f9459063a25@suse.com>
 <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>
In-Reply-To: <db7f5c3e-894a-1700-e0a4-5893bd70c205@suse.com>

--------------pqKygV3Bj0X4PXsIN9plUa8L
Content-Type: multipart/mixed; boundary="------------VDZ5cCZUPPjk90TnYMTVITFu"

--------------VDZ5cCZUPPjk90TnYMTVITFu
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.05.22 11:45, Juergen Gross wrote:
> On 04.05.22 09:53, Juergen Gross wrote:
>> On 19.04.22 10:01, Juergen Gross wrote:
>>> On 24.03.22 15:01, Juergen Gross wrote:
>>>> In order to avoid indirect function calls on the hypercall path as
>>>> much as possible this series is removing the hypercall function tables
>>>> and is replacing the hypercall handler calls via the function array
>>>> by automatically generated call macros.
>>>>
>>>> Another by-product of generating the call macros is the automatic
>>>> generating of the hypercall handler prototypes from the same data base
>>>> which is used to generate the macros.
>>>>
>>>> This has the additional advantage of using type safe calls of the
>>>> handlers and to ensure related handler (e.g. PV and HVM ones) share
>>>> the same prototypes.
>>>>
>>>> A very brief performance test (parallel build of the Xen hypervisor
>>>> in a 6 vcpu guest) showed a very slim improvement (less than 1%) of
>>>> the performance with the patches applied. The test was performed using
>>>> a PV and a PVH guest.
>>>
>>> A gentle ping regarding this series.
>>>
>>> I think patch 1 still lacks an Ack from x86 side. Other than that
>>> patches 1, 2 and 4 should be fine to go in, as they are cleanups which
>>> are fine on their own IMHO.
>>>
>>> Andrew, you wanted to get some performance numbers of the series using
>>> the Citrix test environment. Any news on the progress here?
>>
>> And another ping.
>>
>> Andrew, could you please give some feedback regarding performance
>> testing progress?
>
> This is becoming ridiculous. Andrew, I know you are busy, but not reacting
> at all to explicit questions is kind of annoying.

   ____ ___ _   _  ____
|  _ \_ _| \ | |/ ___|
| |_) | ||  \| | |  _
|  __/| || |\  | |_| |
|_|  |___|_| \_|\____|


Juergen
--------------VDZ5cCZUPPjk90TnYMTVITFu--

--------------pqKygV3Bj0X4PXsIN9plUa8L--

--------------t0X2MtyaTQWCihLZQQ9pIgn0
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK0ZY0FAwAAAAAACgkQsN6d1ii/Ey/2
tgf/fZBN+4Q51WyHZXhWandvIXnbVQKzA0H0uFVv4BCHKjrA3qzykpjRq4mEZnUBsa/gksJagYDZ
rH0nekgXu/rxm8rridRiIRWxCMeUv37udtZxJWlELbWPBS+kbfOTD6VamYgfuRR/MY6RrBRa+6dz
kUQzoBEd+tXPNsl5WgRklOoJQuH54qhHymQg52DLyglD7+DZseyW/euL7cZO0nWRNyDzWq/nnonI
E7/Y7XusUwdaJID5TF2ESUsKEcM3KQFkCoTHP0NkujX7cTBoYDY9y5pdb0LE3gQL+dGIt/GBQRjv
PpAXbV9uPx6toUjLfZeF/FU5qHJ4lbyB9nr4qfjMuw==
=5uwC
-----END PGP SIGNATURE-----

--------------t0X2MtyaTQWCihLZQQ9pIgn0--
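The core idea of the series pinged above (replacing an indirect call through a hypercall function table with generated, direct call macros) can be sketched as follows. The handler names, numbers, and the hand-written macro are illustrative only, not Xen's actual generated code.

```c
/* Editor's sketch: old-style dispatch through a table of function
 * pointers (one indirect branch per hypercall) versus a generated
 * macro that expands into direct, type-safe calls. */

typedef long (*hypercall_fn_t)(unsigned long a1);

/* Illustrative handlers, not real Xen hypercall implementations. */
static long do_xen_version(unsigned long cmd) { return 0x0400 + (long)cmd; }
static long do_vcpu_op(unsigned long cmd)     { return (long)cmd * 2; }

/* Old style: indirect call through a function table. */
static const hypercall_fn_t hypercall_table[] = {
    [0] = do_xen_version,
    [1] = do_vcpu_op,
};

static long dispatch_table(unsigned int nr, unsigned long a1)
{
    return hypercall_table[nr](a1);   /* indirect branch */
}

/* New style: a (here hand-written, in Xen auto-generated) macro
 * expands into direct calls, so no function-pointer table is needed. */
#define call_handlers(nr, a1)              \
    ((nr) == 0 ? do_xen_version(a1) :      \
     (nr) == 1 ? do_vcpu_op(a1) : -38L)    /* -38 == -ENOSYS */

static long dispatch_direct(unsigned int nr, unsigned long a1)
{
    return call_handlers(nr, a1);     /* direct calls only */
}
```

Generating the macro and the handler prototypes from one description keeps PV and HVM handlers on identical, compiler-checked signatures, which is the type-safety benefit the cover letter mentions.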


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:09:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:09:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.354996.582410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MaZ-0001Jp-5C; Thu, 23 Jun 2022 13:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 354996.582410; Thu, 23 Jun 2022 13:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4MaZ-0001Ji-1P; Thu, 23 Jun 2022 13:09:39 +0000
Received: by outflank-mailman (input) for mailman id 354996;
 Thu, 23 Jun 2022 13:09:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1C=W6=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o4MaX-0001JY-Np
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 13:09:37 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ba849139-f2f5-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 15:09:36 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id m2so10864298plx.3
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jun 2022 06:09:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba849139-f2f5-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=8ZpsnAi80ADwSdHbB9vVM+jLuyC83SXa1tFmJtVfbe4=;
        b=fpdKsfBQLliRSMz5lKSS/dqP1O2KD4Pp6k8CH9WHVKpO801XloQf7MNpRRsv4WWzFB
         ZSXg4hWnF9rq3jNDPqRBYtUhyrOzhcOg1SaOAIQ632opfP8DO9kFeZRnyvSKpmS87uLg
         YBSvD3UGaDSqNMmzt8+ZmgWuth82P8IfF5FuCU4RS/ikBwkpggyzkR8AywbKuvV35asP
         q79j9EbFs29YsxpZzYgSoj9HyphteUG1nVMRfBhd3NEfvqQwMKc+CbRQgsMaM/BSZAGd
         kOEKDop0DV2a+N0eBjBE2NUBpR2GGVY/S9aRo7OtoNvi0A6iI64n5r8QexL2mAm9xCIb
         joxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=8ZpsnAi80ADwSdHbB9vVM+jLuyC83SXa1tFmJtVfbe4=;
        b=0w6Mf9ML5+EKqn10Vo8UaTjuVN2V1OHaT/5KUt5xu6Wi/H8c9yUeo5b/iVu6ugcTAi
         R4wEjlBtKgGZE8P6Tm3lmcFP3ixUroxCWYplZSsds3GXIeG/9TAspJPSz5V5BHIgIkOd
         Jjoxw8w8Ojq5lxCCqamWesHHQifBbukH0TY38+2Sn3Zpum1QYSttzja60wNQAFiz6h1R
         Q9Ctf0WlVk9lhqsLnN/7plLPrTw0Sp9+nwzTX654DrLoAxrCkuhsM7fwF57rxSC0qtzE
         1E4wJVcYnPnyZhPFSklz7VMXoZWGpwn1l1wNB+cJ7/B0Pe6Zz5TyI1BYnTPNyxjUzhV1
         nL0Q==
X-Gm-Message-State: AJIora/IaAhV4h46fFKI4gsIKh9RlwRD2wb1wj0vPOdq47Pep87IiXEE
	cjHl76hQDCVD32Z277roUcRhIIT6YmIp539S6S4=
X-Google-Smtp-Source: AGRyM1umc+jYrRjqwAPBwtDTi5dQZ6aiWvqjxRi5w6Al/riZWF+BzoEB3goj2Ep9Y2GMpEvum2/w9bnsjcIlS1PGHpU=
X-Received: by 2002:a17:903:247:b0:168:ecca:444 with SMTP id
 j7-20020a170903024700b00168ecca0444mr38327200plh.121.1655989774507; Thu, 23
 Jun 2022 06:09:34 -0700 (PDT)
MIME-Version: 1.0
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com> <YrQ/VrfzScUVK+PK@perard.uk.xensource.com>
In-Reply-To: <YrQ/VrfzScUVK+PK@perard.uk.xensource.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Thu, 23 Jun 2022 16:09:23 +0300
Message-ID: <CAPD2p-=AewSzyvxVwxdNAvrVbRoUdrSu-ZnCjE0A9ZY8UZoDnQ@mail.gmail.com>
Subject: Re: [PATCH V10.1 1/3] libxl: Add support for Virtio disk configuration
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, 
	George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, 
	Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: multipart/alternative; boundary="00000000000042e49305e21d2a0a"

--00000000000042e49305e21d2a0a
Content-Type: text/plain; charset="UTF-8"

Hello Anthony

[sorry for the possible format issues]

On Thu, Jun 23, 2022 at 1:24 PM Anthony PERARD <anthony.perard@citrix.com>
wrote:

> On Fri, Jun 17, 2022 at 07:14:31PM +0300, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > This patch adds basic support for configuring and assisting a virtio-mmio
> > based virtio-disk backend (emulator) which is intended to run outside of
> > QEMU and can be run in any domain.
> > Although the Virtio block device is quite different from traditional
> > Xen PV block device (vbd) from the toolstack's point of view:
> >  - as the frontend is virtio-blk which is not a Xenbus driver, nothing
> >    written to Xenstore is fetched by the frontend currently ("vdev"
> >    is not passed to the frontend). But this might need to be revised
> >    in future, so frontend data might be written to Xenstore in order to
> >    support hotplugging virtio devices or passing the backend domain id
> >    on arch where the device-tree is not available.
> >  - the ring-ref/event-channel are not used for the backend<->frontend
> >    communication, the proposed IPC for Virtio is IOREQ/DM
> > it is still a "block device" and ought to be integrated in existing
> > "disk" handling. So, re-use (and adapt) "disk" parsing/configuration
> > logic to deal with Virtio devices as well.
> >
> > For the immediate purpose and an ability to extend that support for
> > other use-cases in future (Qemu, virtio-pci, etc) perform the following
> > actions:
> > - Add new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
> >   that in the configuration
> > - Introduce new disk "specification" and "transport" fields to struct
> >   libxl_device_disk. Both are written to the Xenstore. The transport
> >   field is only used for the specification "virtio" and it assumes
> >   only "mmio" value for now.
> > - Introduce new "specification" option with "xen" communication
> >   protocol being default value.
> > - Add new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as current
> >   one (LIBXL__DEVICE_KIND_VBD) doesn't fit into Virtio disk model
> >
> > An example of domain configuration for Virtio disk:
> > disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other,
> specification=virtio']
> >
> > Nothing has changed for default Xen disk configuration.
> >
> > Please note, this patch is not enough for virtio-disk to work
> > on Xen (Arm), as for every Virtio device (including disk) we need
> > to allocate Virtio MMIO params (IRQ and memory region) and pass
> > them to the backend, also update Guest device-tree. The subsequent
> > patch will add these missing bits. For the current patch,
> > the default "irq" and "base" are just written to the Xenstore.
> > This is not an ideal splitting, but this way we avoid breaking
> > the bisectability.
> >
> > Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> > ---
> > Changes V10 -> V10.1:
> >    - fix small coding style issue in libxl__device_disk_get_path()
> >    - drop specification check in main_blockattach() and add
> >      required check in libxl__device_disk_setdefault()
> >    - update specification check in main_blockdetach()
>
> For this v10.1: Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>


perfect, thanks!


>
> BTW, the subject of this updated patch still state "v10" instead of
> "v10.1", hopefully committers can pick the right version.
>


Oh, sorry, I was planning to use "v10.1", but I wasn't sure that a commit
with an updated subject would appear properly in the current thread when
sent using git send-email --in-reply-to))


>
> Cheers,
>
> --
> Anthony PERARD
>


-- 
Regards,

Oleksandr Tyshchenko

--00000000000042e49305e21d2a0a--
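The defaulting rules in the patch description above (specification defaults to "xen"; transport is only meaningful for specification "virtio", where only "mmio" is accepted for now) can be sketched as a small standalone C fragment. All names here are hypothetical illustrations, not the actual libxl code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mirror of the described defaulting rules; not libxl code. */
struct disk_cfg {
    const char *specification;   /* "xen" (default) or "virtio" */
    const char *transport;       /* "mmio" for virtio, NULL otherwise */
};

static int disk_setdefaults(struct disk_cfg *d)
{
    if (!d->specification)
        d->specification = "xen";        /* default communication protocol */

    if (strcmp(d->specification, "virtio") == 0) {
        if (!d->transport)
            d->transport = "mmio";       /* only supported value for now */
        if (strcmp(d->transport, "mmio") != 0)
            return -1;                   /* unsupported virtio transport */
    } else if (d->transport) {
        return -1;  /* transport only valid with specification=virtio */
    }

    return 0;
}
```

With these rules, an unadorned `disk = [ 'phy:..., xvda1' ]` keeps the Xen PV protocol, while `specification=virtio` implies transport "mmio".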


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:11:02 2022
Message-ID: <7f241ed2-41ff-c0cf-0aad-ad52440305bb@suse.com>
Date: Thu, 23 Jun 2022 15:10:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] tools/xenstored: Harden corrupt()
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
References: <20220623112407.13604-1-julien@xen.org>
 <d0de3b7b-fdb4-716c-227d-5fee024d8fd9@suse.com>
 <de0ae18f-b0b7-c63c-ed03-d0260dfc4c1d@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <de0ae18f-b0b7-c63c-ed03-d0260dfc4c1d@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.06.2022 15:03, Julien Grall wrote:
> 
> 
> On 23/06/2022 13:59, Jan Beulich wrote:
>> On 23.06.2022 13:24, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> At the moment, corrupt() is neither checking for allocation failure
>>> nor freeing the allocated memory.
>>>
>>> Harden the code by printing ENOMEM if the allocation failed and
>>> free 'str' after the last use.
>>>
>>> This is not considered to be a security issue because corrupt() should
>>> only be called when Xenstored thinks the database is corrupted. Note
>>> that the trigger (i.e. a guest reliably provoking the call) would be
>>> a security issue.
>>>
>>> Fixes: 06d17943f0cd ("Added a basic integrity checker, and some basic ability to recover from store")
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Is this something which would want queuing for backport?
> 
> I would say yes. There are a couple of more Xenstored patches I would 
> consider for backporting:
> 
> fe9be76d880b tools/xenstore: fix error handling of check_store()
> b977929d3646 tools/xenstore: fix hashtable_expand() zeroing new area
> 
> Who is taking care of tools backport nowadays?

I'm trying to, as long as they apply cleanly enough. But I'd prefer if
rather sooner than later I could offload this again. And I'm not
actively looking to spot backporting candidates there (unlike for the
hypervisor, excluding Arm).

Jan
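The hardening pattern Julien's patch describes, tolerating a failed allocation (logging "ENOMEM" instead of dereferencing NULL) and freeing the buffer after its last use, looks roughly like this minimal C sketch. It is illustrative only, not the actual xenstored corrupt() implementation:

```c
#define _GNU_SOURCE
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the hardened pattern: check the allocation result and
 * free the formatted string after its last use. */
static void report_corruption(FILE *out, const char *fmt, ...)
{
    char *str = NULL;
    va_list ap;

    va_start(ap, fmt);
    if (vasprintf(&str, fmt, ap) < 0)
        str = NULL;               /* allocation failed, contents undefined */
    va_end(ap);

    /* Fall back to a static message rather than passing NULL to %s. */
    fprintf(out, "corruption detected by %s\n", str ? str : "ENOMEM");

    free(str);                    /* last use was above; free(NULL) is a no-op */
}
```

The same shape applies with talloc-style allocators: a NULL result switches the log message to "ENOMEM", and the buffer is released once it has been printed.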


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:30:39 2022
Message-ID: <7372aada-d4ad-1547-2e44-acb9a5a62bbd@suse.com>
Date: Thu, 23 Jun 2022 15:30:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] console/serial: set the default transmit buffer size
 in Kconfig
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623090852.29622-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.06.2022 11:08, Roger Pau Monne wrote:
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -74,3 +74,13 @@ config HAS_EHCI
>  	help
>  	  This selects the USB based EHCI debug port to be used as a UART. If
>  	  you have an x86 based system with USB, say Y.
> +
> +config SERIAL_TX_BUFSIZE
> +	int "Size of the transmit serial buffer"
> +	default 16384
> +	help
> +	  Controls the default size of the transmit buffer (in bytes) used by
> +	  the serial driver.  Note the value provided will be rounder up to
> +	  PAGE_SIZE.

I first wanted to point out the spelling mistake (rounded), but I
wonder what good that rounding does and whether this description
isn't really misleading: serial_async_transmit() rounds down to a
power of two. So likely the value here would better be a log-2
one.

> +	  Default value is 16384 (16KB).

Perhaps (16kiB) (albeit the default value would change anyway if
the above is followed)?

Jan
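The mismatch Jan points out can be seen in a few lines of C: rounding up to PAGE_SIZE (what the Kconfig help text claims) and rounding down to a power of two (the behaviour Jan attributes to serial_async_transmit()) diverge for non-power-of-two values. This is a sketch under those assumptions, not the Xen implementation:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Round up to the next multiple of PAGE_SIZE, as the help text claims. */
static uint32_t round_up_page(uint32_t v)
{
    return (v + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Round down to a power of two, as the consumer is said to do. */
static uint32_t round_down_pow2(uint32_t v)
{
    uint32_t p = 1;

    while (p <= v / 2)
        p *= 2;
    return p;
}
```

With the default 16384 the two agree, since it is both a power of two and a page multiple. But a user entering, say, 20000 would be told the value rounds up to 20480 while actually getting 16384, which is why a log-2 Kconfig value would be less surprising.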


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:32:38 2022
Message-ID: <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
Date: Thu, 23 Jun 2022 15:32:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623090852.29622-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM5PR1001CA0003.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:2::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 444c1f40-725b-4af8-041f-08da551cd3d2
X-MS-TrafficTypeDiagnostic: DB6PR0401MB2264:EE_
X-Microsoft-Antispam-PRVS:
	<DB6PR0401MB22643992A31D975A87D47A27B3B59@DB6PR0401MB2264.eurprd04.prod.outlook.com>
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 444c1f40-725b-4af8-041f-08da551cd3d2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jun 2022 13:32:32.8954
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OAG2xd/8L87CWE0B2nV5Wx5RenU9CF4kGmcXnmhac2WvlaSQLuPyaNRe7iFvHcpDzUttE0zlvC7OfqpZ/jxh3A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0401MB2264

On 23.06.2022 11:08, Roger Pau Monne wrote:
> Testing on a Kaby Lake box with 8 CPUs leads to the serial buffer
> being filled halfway through dom0 boot, and thus a non-trivial chunk
> of Linux boot messages is dropped.
> 
> Increasing the buffer to 32K fixes the issue, and Linux boot messages
> are no longer dropped.  There's also no record of why 16K was chosen
> in the first place, so bumping to 32K to cope with current systems
> generating output faster seems appropriate for a better user
> experience with the provided defaults.

Just to record what was part of an earlier discussion: I'm not going
to nak such a change, but I think the justification is insufficient.
On the same basis someone else could come along a few days later and
bump to 64k, then 128k, and so on.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:55:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:55:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355027.582454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4NIT-00020N-8N; Thu, 23 Jun 2022 13:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355027.582454; Thu, 23 Jun 2022 13:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4NIT-00020G-3L; Thu, 23 Jun 2022 13:55:01 +0000
Received: by outflank-mailman (input) for mailman id 355027;
 Thu, 23 Jun 2022 13:55:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4NIS-000206-2p; Thu, 23 Jun 2022 13:55:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4NIR-0006Ag-PP; Thu, 23 Jun 2022 13:54:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4NIQ-0004Hg-Ub; Thu, 23 Jun 2022 13:54:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4NIQ-0008AS-TX; Thu, 23 Jun 2022 13:54:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HCAGmqDN8lelsfGQA/xaoH4eR46TnpRU0dcQ9WmoWi0=; b=1UW0dBdXpXaYSF/P6F/mmzPY85
	2kPTPgq5d7px15/sjQr5D8kjHwr4zgQM6wXxPR9Kxi3nNgWQ+NtoPTrgBGAL2pTOvkJHj6gwwsqkh
	V5OSLc48tzbq2zGrQooirAs41zezT4Yv2gd/Za+FwWdcv+ktR0DwqPwlgk4zJmfocWVE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171320-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171320: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f0c280af0ec7c79cf043594974206d87c3c46524
X-Osstest-Versions-That:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 13:54:58 +0000

flight 171320 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171320/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1  14 guest-start              fail REGR. vs. 171275
 test-armhf-armhf-libvirt-qcow2 13 guest-start            fail REGR. vs. 171275

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 171309

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     14 guest-start         fail in 171309 like 171224
 test-armhf-armhf-xl-multivcpu 14 guest-start        fail in 171309 like 171271
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 171309 like 171271
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 171309 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 171309 never pass
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171224
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171275
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171275
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171275
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 171275
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171275
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171275
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f0c280af0ec7c79cf043594974206d87c3c46524
baseline version:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38

Last test of basis   171275  2022-06-18 21:42:02 Z    4 days
Testing same since   171309  2022-06-22 12:44:47 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Ivan T. Ivanov" <iivanov@suse.de>
  Aaron Conole <aconole@redhat.com>
  Adam Ford <aford173@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Andre Przywara <andre.przywara@arm.com>
  Andy Chi <andy.chi@canonical.com>
  Andy Lutomirski <luto@kernel.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Baokun Li <libaokun1@huawei.com>
  Bharathi Sreenivas <bharathi.sreenivas@intel.com>
  Brian King <brking@linux.vnet.ibm.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jingwen <chenjingwen6@huawei.com>
  Chen Lin <chen45464546@163.com>
  Chengguang Xu <cgxu519@mykernel.net>
  chengkaitao <pilgrimtao@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe de Dinechin <dinechin@redhat.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  David S. Miller <davem@davemloft.net>
  Davide Caratti <dcaratti@redhat.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dinh Nguyen <dinguyen@kernel.org>
  Dominik Brodowski <linux@dominikbrodowski.net>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Szczurek <grzegorzx.szczurek@intel.com>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  huangwenhui <huangwenhuia@uniontech.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <jsmart2021@gmail.com>
  Jan Varho <jan.varho@gmail.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jedrzej Jagielski <jedrzej.jagielski@intel.com>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Szu <jeremy.szu@canonical.com>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes@sipsolutions.net>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Neuschäfer <j.neuschaefer@gmx.net>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Justin Tee <justin.tee@broadcom.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin Faltesek <mfaltesek@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Matt Turner <mattst88@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Jaron <michalx.jaron@intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
  Nicolai Stange <nstange@suse.de>
  Olof Johansson <olof@lixom.net>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Walmsley <paul.walmsley@sifive.com>
  Petr Machata <petrm@nvidia.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Rob Clark <robdclark@chromium.org>
  Robert Eckelmann <longnoserob@gmail.com>
  Samuel Neves <sneves@dei.uc.pt>
  Sasha Levin <sashal@kernel.org>
  Schspa Shi <schspa@gmail.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Shtylyov <s.shtylyov@omp.ru>
  Shuah Khan <skhan@linuxfoundation.org>
  Slark Xiao <slark_xiao@163.com>
  Stephan Mueller <smueller@chronox.de>
  Stephan Müller <smueller@chronox.de>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Uwe Kleine-König <u.kleine-koenig@penugtronix.de>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Wang Yufen <wangyufen@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Wentao Wang <wwentao@vmware.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Xiaohui Zhang <xiaohuizhang@ruc.edu.cn>
  Yangtao Li <tiny.windzz@gmail.com>
  Yuntao Wang <ytcoode@gmail.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6427 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 13:59:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 13:59:54 +0000
Message-ID: <f2add6f9-1573-5236-84ac-4db1ade60ce6@amd.com>
Date: Thu, 23 Jun 2022 14:59:43 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [Viryaos-discuss] [ImageBuilder] [PATCH 2/2] uboot-script-gen:
 Enable direct mapping of statically allocated memory
To: xenia <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220619124316.378365-1-burzalodowa@gmail.com>
 <20220619124316.378365-2-burzalodowa@gmail.com>
 <5cd7ee29-d43a-1302-0a0b-6b4c339a96da@amd.com>
 <797bf441-9d7a-7eb8-4e90-787398acf726@amd.com>
 <96c919fe-dba1-d1c4-10e9-b73800f96cea@gmail.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <96c919fe-dba1-d1c4-10e9-b73800f96cea@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


On 23/06/2022 13:05, xenia wrote:
> Hi Ayan!
Hi Xenia,
>
> On 6/23/22 13:02, Ayan Kumar Halder wrote:
>> (Resending mail, as the previous delivery failed)
>>
>> On 21/06/2022 12:34, Ayan Kumar Halder wrote:
>>> Hi,
>>>
>>> On 19/06/2022 13:43, Xenia Ragiadakou wrote:
>>>> Direct mapping for dom0less VMs is disabled by default in XEN and 
>>>> can be
>>>> enabled through the 'direct-map' property.
>>>> Add a new config parameter DOMU_DIRECT_MAP to be able to enable or 
>>>> disable
>>>> direct mapping, i.e set to 1 for enabling and to 0 for disabling.
>>>> This parameter is optional. Direct mapping is enabled by default 
>>>> for all
>>>> dom0less VMs with static allocation.
>>>>
>>>> The property 'direct-map' is a boolean property. Boolean properties 
>>>> are true
>>>> if present and false if missing.
>>>> Add a new data_type 'bool' in function dt_set() to setup a boolean 
>>>> property.
>>>>
>>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>> ---
>>>>   README.md                |  4 ++++
>>>>   scripts/uboot-script-gen | 18 ++++++++++++++++++
>>>>   2 files changed, 22 insertions(+)
>>>>
>>>> diff --git a/README.md b/README.md
>>>> index c52e4b9..17ff206 100644
>>>> --- a/README.md
>>>> +++ b/README.md
>>>> @@ -168,6 +168,10 @@ Where:
>>>>     if specified, indicates the host physical address regions
>>>>     [baseaddr, baseaddr + size) to be reserved to the VM for static 
>>>> allocation.
>>>>   +- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>>>> +  If set to 1, the VM is direct mapped. The default is 1.
>>>> +  This is only applicable when DOMU_STATIC_MEM is specified.
>>>
>>> Can't we just use $DOMU_STATIC_MEM to set "direct-map" in dts ?
>>>
>>> Is there a valid use-case for static allocation without direct 
>>> mapping ? Sorry, it is not very clear to me.
>>>
> Thank you for taking the time to review the patch!
>
> I agree with you that static allocation without direct mapping is not 
> a common configuration, that's why, in the script, direct mapping is 
> enabled by default.
>
> My reasoning was that, since direct mapping is not enabled by default 
> in XEN for all domUs with static allocation but instead requires the 
> 'direct-map' property to be present in the domU dt node, then such a 
> configuration is still valid.
> I thought that with this parameter it is much easier to setup (and 
> test) both configurations.

Thanks for the explanation. This makes sense to me. :)

In this case, can we remove the below snippet from your patch?

+        if test -z "${DOMU_DIRECT_MAP[$i]}"
+        then
+            DOMU_DIRECT_MAP[$i]=1
+        fi

The reason being: if the user wants the 'direct-map' property set in the 
dts, they can specify it in the config.
Otherwise, by default, direct-map will be false. This is in line with 
Xen's behavior as you described above.
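The semantics described above (emit 'direct-map' only when explicitly
requested, absence meaning false) can be sketched as a stand-alone shell
fragment. This is an editorial illustration, not the actual
uboot-script-gen code: the emit_direct_map helper and the /chosen/domU
node path are invented for the example.

```shell
# Hypothetical sketch, not the real uboot-script-gen code.
# 'direct-map' is a device tree boolean property: true if present,
# false if absent. So the script only needs to emit it when the
# per-domU flag is explicitly "1", and can emit nothing otherwise.
emit_direct_map() {
    # $1 = domU index, $2 = DOMU_DIRECT_MAP value ("1", "0", or empty)
    if test "$2" = "1"; then
        # Node path is illustrative only.
        echo "fdt set /chosen/domU$1 direct-map"
    fi
}

emit_direct_map 0 1   # prints the fdt command: property becomes true
emit_direct_map 1 0   # prints nothing: property absent, hence false
emit_direct_map 2 ""  # unset flag also prints nothing (no default of 1)
```

With the defaulting snippet removed, an unset DOMU_DIRECT_MAP behaves
exactly like an explicit 0, matching Xen's treatment of a missing
'direct-map' property.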

- Ayan

>
>
> Xenia
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 14:47:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 14:47:35 +0000
Message-ID: <2a94902e-bc4f-26d1-b47d-abd4709226de@suse.com>
Date: Thu, 23 Jun 2022 16:47:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/6] x86/Kconfig: add selection of default x2APIC
 destination mode
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623082428.28038-1-roger.pau@citrix.com>
 <20220623082428.28038-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623082428.28038-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 23.06.2022 10:24, Roger Pau Monne wrote:
> Allow selecting the default x2APIC destination mode from Kconfig.
> Note the default destination mode is still Logical (Cluster) mode.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/Kconfig          | 29 +++++++++++++++++++++++++++++
>  xen/arch/x86/genapic/x2apic.c |  6 ++++--
>  2 files changed, 33 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index 1e31edc99f..f560dc13f4 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -226,6 +226,35 @@ config XEN_ALIGN_2M
>  
>  endchoice
>  
> +choice
> +	prompt "x2APIC default destination mode"

What's the point of using "choice" here, and not a single "bool"?

> +	default X2APIC_LOGICAL
> +	---help---

Nit: Please don't use ---help--- anymore - we're trying to phase out its
use as Linux has dropped it altogether (and hence once we update our
Kconfig, we'd like to change as few places as possible), leaving just
"help".

One downside of "choice" (iirc) is that the individual sub-options' help
text is inaccessible from at least the command line version of kconfig.
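For comparison, the single-"bool" shape being suggested could look like
the sketch below. The option name, prompt, and help text are guesses for
illustration only, not code from the patch; absent the option, the
default would remain Logical (Cluster) mode.

```kconfig
config X2APIC_PHYSICAL
	bool "Use x2APIC Physical Destination mode by default"
	default n
	help
	  If set, x2APIC uses Physical Destination mode by default
	  instead of Logical (Cluster) mode, trading the optimized
	  IPI path for more vectors available to external interrupts.
```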

> +	  Specify default destination mode for x2APIC.
> +
> +	  If unsure, choose "Logical".
> +
> +config X2APIC_LOGICAL
> +	bool "Logical mode"
> +	---help---
> +	  Use Logical Destination mode.
> +
> +	  When using this mode APICs are addressed using the Logical
> +	  Destination mode, which allows for optimized IPI sending,
> +	  but also reduces the amount of vectors available for external
> +	  interrupts when compared to physical mode.
> +
> +config X2APIC_PHYS

X2APIC_PHYSICAL (to be in line with X2APIC_LOGICAL)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 14:49:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 14:49:46 +0000
Message-ID: <982b76df-fd88-9551-0702-a7f61bac8b1c@suse.com>
Date: Thu, 23 Jun 2022 16:49:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/6] x86/x2apic: use physical destination mode by default
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220623082428.28038-1-roger.pau@citrix.com>
 <20220623082428.28038-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220623082428.28038-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 23.06.2022 10:24, Roger Pau Monne wrote:
> Using cluster mode by default greatly limits the amount of vectors
> available, as then vector space is shared amongst all the CPUs in the
> logical cluster.
> 
> This can lead to vector shortage issues on boxes with not a huge
> amount of CPUs but with a non-trivial amount of devices, there are
> reports of boxes with 32 CPUs (2 logical clusters, and thus only 414
> dynamic vectors) that run out of vectors and fail to setup interrupts
> for dom0.
> 
> This could be considered as a regression when switching from xAPIC
> mode, as when using xAPIC only physical mode is supported.

When using more than 8 CPUs.

You also don't mention the downside (higher IPI send effort) at all.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 15:10:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 15:10:59 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Topic: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Index: AQHYhkX79rj81mJ150e3nBZ0EA7O6K1bhmuAgAGTuoA=
Date: Thu, 23 Jun 2022 15:10:18 +0000
Message-ID: <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
In-Reply-To: <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <0A57F97B28866D46B1670E1F2A502A95@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Julien,


> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 22/06/2022 15:38, Rahul Singh wrote:
>> Guest can request the Xen to close the event channels. Ignore the
>> request from guest to close the static channels as static event channels
>> should not be closed.
> 
> Why do you want to prevent the guest to close static ports? The problem I can see is...

As a static event channel should be available during the lifetime of the guest we want to prevent
the guest to close the static ports. 

I tested this series to send/receive event notification from the Linux user-space application via "/dev/xen/evtchn” interface and
ioctl ( IOCTL_EVTCHN_*) calls. When we close the "/dev/xen/evtchn” interface Linux event channel
driver will try to close the static event channel also, that why we need this patch to avoid guests to close 
the event channel as we don’t want to close the static event channel.

>  
> [...]
> 
>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>> index 84f0055a5a..cedc98ccaf 100644
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -294,7 +294,8 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
>>   * If port is zero get the next free port and allocate. If port is non-zero
>>   * allocate the specified port.
>>   */
>> -int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
>> +int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port,
>> +                         bool is_static)
>>  {
>>      struct evtchn *chn;
>>      struct domain *d;
>> @@ -330,6 +331,7 @@ int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
>>      evtchn_write_lock(chn);
>>        chn->state = ECS_UNBOUND;
>> +    chn->is_static = is_static;
>>      if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
>>          chn->u.unbound.remote_domid = current->domain->domain_id;
>>      evtchn_port_init(d, chn);
>> @@ -368,7 +370,7 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
>>   * allocate the specified lport.
>>   */
>>  int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
>> -                            evtchn_port_t lport)
>> +                            evtchn_port_t lport, bool is_static)
>>  {
>>      struct evtchn *lchn, *rchn;
>>      struct domain *rd;
>> @@ -423,6 +425,7 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
>>      lchn->u.interdomain.remote_dom  = rd;
>>      lchn->u.interdomain.remote_port = rport;
>>      lchn->state                     = ECS_INTERDOMAIN;
>> +    lchn->is_static                 = is_static;
>>      evtchn_port_init(ld, lchn);
>>            rchn->u.interdomain.remote_dom  = ld;
>> @@ -659,6 +662,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>          rc = -EINVAL;
>>          goto out;
>>      }
>> +    /* Guest cannot close a static event channel. */
>> +    if ( chn1->is_static && guest )
>> +        goto out;
> 
> ... at least the interdomain structure store pointer to the domain. I am a bit concerned that we would end up to leave dangling pointers (such as chn->u.interdomain.remote_domain) as evtchn_close() is also used while destroying the domain.

Let me have a look again if we have to do the cleanup when we destroy the guest and close the static event channel.
> 
> Also, AFAICT Xen will return 0 (i.e. success) to the caller. I think this is a mistake because we didn't close the port as requested.

If we return non-zero to guest (in particular if linux guest), Linux will report the BUG(). Therefore I decided to return 0. 

if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
        BUG();

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 15:30:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 15:30:36 +0000
Message-ID: <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
Date: Thu, 23 Jun 2022 16:30:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 23/06/2022 16:10, Rahul Singh wrote:
> Hi Julien,
> 
> 
>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 22/06/2022 15:38, Rahul Singh wrote:
>>> Guest can request the Xen to close the event channels. Ignore the
>>> request from guest to close the static channels as static event channels
>>> should not be closed.
>>
>> Why do you want to prevent the guest to close static ports? The problem I can see is...
> 
> As a static event channel should be available during the lifetime of the guest we want to prevent
> the guest to close the static ports.
I don't think it is Xen's job to prevent a guest from closing a static port. 
If the guest decides to do it, then it will just break itself, not Xen.

> 
> I tested this series to send/receive event notification from the Linux user-space application via "/dev/xen/evtchn” interface and
> ioctl ( IOCTL_EVTCHN_*) calls. When we close the "/dev/xen/evtchn” interface Linux event channel
> driver will try to close the static event channel also, that why we need this patch to avoid guests to close
> the event channel as we don’t want to close the static event channel.

To me, this reads as Linux should be modified in order to avoid closing 
static event channels. In fact...

>>   
>> [...]
>>
>>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>>> index 84f0055a5a..cedc98ccaf 100644
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -294,7 +294,8 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
>>>    * If port is zero get the next free port and allocate. If port is non-zero
>>>    * allocate the specified port.
>>>    */
>>> -int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
>>> +int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port,
>>> +                         bool is_static)
>>>   {
>>>       struct evtchn *chn;
>>>       struct domain *d;
>>> @@ -330,6 +331,7 @@ int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc, evtchn_port_t port)
>>>       evtchn_write_lock(chn);
>>>         chn->state = ECS_UNBOUND;
>>> +    chn->is_static = is_static;
>>>       if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
>>>           chn->u.unbound.remote_domid = current->domain->domain_id;
>>>       evtchn_port_init(d, chn);
>>> @@ -368,7 +370,7 @@ static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
>>>    * allocate the specified lport.
>>>    */
>>>   int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
>>> -                            evtchn_port_t lport)
>>> +                            evtchn_port_t lport, bool is_static)
>>>   {
>>>       struct evtchn *lchn, *rchn;
>>>       struct domain *rd;
>>> @@ -423,6 +425,7 @@ int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind, struct domain *ld,
>>>       lchn->u.interdomain.remote_dom  = rd;
>>>       lchn->u.interdomain.remote_port = rport;
>>>       lchn->state                     = ECS_INTERDOMAIN;
>>> +    lchn->is_static                 = is_static;
>>>       evtchn_port_init(ld, lchn);
>>>             rchn->u.interdomain.remote_dom  = ld;
>>> @@ -659,6 +662,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>>           rc = -EINVAL;
>>>           goto out;
>>>       }
>>> +    /* Guest cannot close a static event channel. */
>>> +    if ( chn1->is_static && guest )
>>> +        goto out;
>>
>> ... at least the interdomain structure stores a pointer to the domain. I am a bit concerned that we would end up leaving dangling pointers (such as chn->u.interdomain.remote_domain) as evtchn_close() is also used while destroying the domain.
> 
> Let me have another look at whether we have to do the cleanup when we destroy the guest and close the static event channel.
>>
>> Also, AFAICT Xen will return 0 (i.e. success) to the caller. I think this is a mistake because we didn't close the port as requested.
> 
> If we return non-zero to the guest (in particular a Linux guest), Linux will hit the BUG(). Therefore I decided to return 0.

... this shows that we are papering over a bigger problem: Linux is not 
ready for static event channels.

> 
> if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
>          BUG();
The BUG() in Linux is definitely not a reason to lie and claim the port 
was closed.

If you tell that to an OS, it may validly think that it now needs to 
call bind interdomain in order to "re-open" the port. So your Linux will 
already need some information to know that the port is "static".

At which point, you can modify Linux to also prevent the port from being closed.
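
For illustration only (none of the names below exist in Linux or Xen; this is a minimal sketch of the direction suggested above, assuming the guest has some way, e.g. firmware tables, to learn which ports are static):

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical per-port state a guest kernel could keep. */
struct guest_evtchn {
    unsigned int port;
    bool is_static;  /* assumed to be discovered at boot, e.g. from DT */
    bool bound;
};

/*
 * Guest-side close path: refuse to close a static port up front,
 * instead of relying on Xen to silently ignore the request (and
 * then tripping the BUG() on a non-zero return).
 */
static int guest_evtchn_close(struct guest_evtchn *chn)
{
    if (chn->is_static)
        return -EBUSY;   /* the port must outlive this "close" */
    chn->bound = false;  /* a real guest would issue EVTCHNOP_close here */
    return 0;
}
```

With a check like this the guest never issues EVTCHNOP_close for a static port in the first place, so Xen has no need to lie about the result.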

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 15:33:24 2022
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Date: Thu, 23 Jun 2022 17:33:17 +0200
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Message-ID: <85802262-887b-1a69-ec5a-6c91a329f231@suse.com>
In-Reply-To: <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>

On 23.06.2022 17:30, Julien Grall wrote:
> On 23/06/2022 16:10, Rahul Singh wrote:
>>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>> On 22/06/2022 15:38, Rahul Singh wrote:
>>>> A guest can request Xen to close its event channels. Ignore
>>>> requests from the guest to close static event channels, as static
>>>> event channels should not be closed.
>>>
>>> Why do you want to prevent the guest from closing static ports? The problem I can see is...
>>
>> As a static event channel should be available during the lifetime of the guest, we want to prevent
>> the guest from closing the static ports.
> I don't think it is Xen's job to prevent a guest from closing a static port.
> If the guest decides to do it, then it will just break itself, not Xen.

+1, fwiw.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 16:22:19 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171328: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=db3382dd4f468c763512d6bf91c96773395058fb
X-Osstest-Versions-That:
    xen=61ac7919a6a38a24d26fd1b57a2511beb0724e99
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 16:22:00 +0000

flight 171328 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171328/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  db3382dd4f468c763512d6bf91c96773395058fb
baseline version:
 xen                  61ac7919a6a38a24d26fd1b57a2511beb0724e99

Last test of basis   171325  2022-06-23 08:01:47 Z    0 days
Testing same since   171328  2022-06-23 13:01:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   61ac7919a6..db3382dd4f  db3382dd4f468c763512d6bf91c96773395058fb -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 17:50:40 2022
Message-ID: <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
Date: Thu, 23 Jun 2022 18:50:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on
 Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

Sorry for the late reply.

On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> +/*
> + * All accesses to the GFN portion of type_info field should always be
> + * protected by the P2M lock. In case when it is not feasible to satisfy
> + * that requirement (risk of deadlock, lock inversion, etc) it is important
> + * to make sure that all non-protected updates to this field are atomic.

Here you say the non-protected updates should be atomic but...

[...]

> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 7b1f2f4..c94bdaf 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>       spin_lock(&d->page_alloc_lock);
>   
>       /* The incremented type count pins as writable or read-only. */
> -    page->u.inuse.type_info =
> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
> +                                                  : PGT_writable_page) |
> +                                MASK_INSR(1, PGT_count_mask);

... this is not going to be atomic. So I would suggest adding a comment 
explaining why this is fine.

>   
>       page_set_owner(page, d);
>       smp_wmb(); /* install valid domain ptr before updating refcnt. */
> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>       }
>   
>       /* Map at new location. */
> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);

> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);

I would expand the comment above to explain why you need a different 
path for xenheap pages mapped as RAM. AFAICT, this is because we need 
to call page_set_xenheap_gfn().

> +    else
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        p2m_write_lock(p2m);
> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )

Sorry to only notice it now. This check will also change the behavior 
for XENMAPSPACE_shared_info: now we are only allowed to map the shared 
info page once.

I believe this is fine because AFAICT x86 already prevents it. But this 
is probably something that ought to be explained in the already long 
commit message.

My comments are mainly seeking clarification. The code itself looks 
correct to me. I can handle the comments on commit to save you a round 
trip once we agree on them.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:08:57 2022
Message-ID: <42b0d343-a491-877c-3b5c-d9c95872774c@xen.org>
Date: Thu, 23 Jun 2022 19:08:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in
 p2m_remove_mapping()
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
 <1652294845-13980-2-git-send-email-olekstysh@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1652294845-13980-2-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Borrow the x86's check from p2m_remove_page() which was added
> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
> "x86/p2m: don't assert that the passed in MFN matches for a remove"
> and adjust it to the Arm code base.
> 
> Basically, this check is strictly needed for the xenheap pages only
> since there are several non-protected read accesses to our simplified
> xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
> are not protected by the P2M lock).

To me, this reads as if you introduced a bug in patch #1 and are now 
fixing it. So this patch should have come first.

> 
> But it will be a good opportunity to harden the P2M code for *every*
> RAM page, since it is currently possible on Arm to remove any GFN - MFN
> mapping (even with the wrong helpers).

> This can result in
> a few issues when mapping is overridden silently (in particular when
> building dom0).

Hmmm... AFAIU, in such a situation p2m_remove_mapping() wouldn't be 
called. Instead, we would call the mapping helper twice and the 
override would still happen.

> 
> Suggested-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> You can find the corresponding discussion at:
> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/
> 
> Changes V5 -> V6:
>   - new patch
> ---
>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>   1 file changed, 21 insertions(+)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index f87b48e..635e474 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct domain *d,
>                                        mfn_t mfn)
>   {
>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    unsigned long i;
>       int rc;
>   
>       p2m_write_lock(p2m);
> +    for ( i = 0; i < nr; )
One bit I really hate in the x86 code is the lack of in-code 
documentation. It makes it really difficult to understand the logic.

I know this code was taken from x86, but I would like to avoid making 
the same mistake (this code is definitely not trivial). So can we 
document the logic?

The code itself looks good to me.

> +    {
> +        unsigned int cur_order;
> +        p2m_type_t t;
> +        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
> +                                         &cur_order, NULL);
> +
> +        if ( p2m_is_any_ram(t) &&
> +             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), mfn_return)) )
> +        {
> +            rc = -EILSEQ;
> +            goto out;
> +        }
> +
> +        i += (1UL << cur_order) -
> +             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
> +    }
> +
>       rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
>                          p2m_invalid, p2m_access_rwx);
> +
> +out:
>       p2m_write_unlock(p2m);
>   
>       return rc;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:14:42 2022
Message-ID: <83e13fdf-7e9a-9c2d-dd44-bf0b8e1435cc@xen.org>
Date: Thu, 23 Jun 2022 19:14:35 +0100
Subject: Re: [PATCH v2 1/4] tools/xenstore: modify feature bit specification
 in xenstore-ring.txt
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220527072427.20327-2-jgross@suse.com>

Hi Juergen,

On 27/05/2022 08:24, Juergen Gross wrote:
> Instead of specifying the feature bits in xenstore-ring.txt as mask
> values, use bit numbers. This will make the specification easier to read
> when adding more features.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:23:29 2022
Date: Thu, 23 Jun 2022 11:23:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Jan Beulich <jbeulich@suse.com>
cc: Roberto Bagnara <roberto.bagnara@bugseng.com>,
    Bertrand Marquis <Bertrand.Marquis@arm.com>,
    Michal Orzel <Michal.Orzel@arm.com>, Julien Grall <julien@xen.org>,
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
    Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
    Daniel De Graaf <dgdegra@tycho.nsa.gov>,
    "Daniel P. Smith" <dpsmith@apertussolutions.com>,
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
In-Reply-To: <3e86d233-7c9a-cd80-a744-c4bdd42ac85c@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206231117460.2410338@ubuntu-linux-20-04-desktop>

On Thu, 23 Jun 2022, Jan Beulich wrote:
> On 23.06.2022 09:37, Roberto Bagnara wrote:
> > Rule 8.1 only applies to C90 code, as all the violating instances are
> > syntax errors in C99 and later versions of the language.  So,
> > the following line does not contain a violation of Rule 8.1:
> > 
> >      unsigned x;
> > 
> > It does contain a violation of Directive 4.6, though, whose correct
> > handling depends on the intention (uint32_t, uint64_t, size_t, ...).

Hi Roberto,

Thank you very much for the quick reply and very clear answer!


> Interesting - this goes straight against a rule we have set in
> ./CODING_STYLE. I'm also puzzled by you including size_t in your list
> of examples, when the spec doesn't. The sole "goal" of the directive
> (which is advisory only anyway) is to be able to determine allocation
> size. The size of size_t, however, varies as much as those of short,
> int, long, etc. do.

I wouldn't worry about Directive 4.6 for now. We'll talk about it when
we get to it. (Also we already require uint32_t, uint64_t, etc. in all
external interfaces and ABIs which I think is what Dir 4.6 cares about
the most.)

For this series, I suggest keeping the patches because "unsigned int" is
better than "unsigned" from a style perspective, but we need to rephrase
the commit messages because we cannot claim they are fixing Rule 8.1.
Also, thanks to Jan for spotting the misunderstanding!


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:27:55 2022
Message-ID: <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>
Date: Thu, 23 Jun 2022 19:27:49 +0100
Subject: Re: [PATCH v2 2/4] tools/xenstore: add documentation for new
 set/get-feature commands
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220527072427.20327-3-jgross@suse.com>

Hi Juergen,

On 27/05/2022 08:24, Juergen Gross wrote:
> Add documentation for two new Xenstore wire commands SET_FEATURE and
> GET_FEATURE used to set or query the Xenstore features visible in the
> ring page of a given domain.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> Do we need support in the migration protocol for the features?

I would say yes. You want to make sure that the client can be migrated
without losing features between two xenstored instances.

> V2:
> - remove feature bit (Julien Grall)
> - GET_FEATURE without domid will return Xenstore supported features
>    (triggered by Julien Grall)
> ---
>   docs/misc/xenstore.txt | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
> index a3d3da0a5b..00f6969202 100644
> --- a/docs/misc/xenstore.txt
> +++ b/docs/misc/xenstore.txt
> @@ -331,6 +331,20 @@ SET_TARGET		<domid>|<tdomid>|
>   
>   	xenstored prevents the use of SET_TARGET other than by dom0.
>   
> +GET_FEATURE		[<domid>|]		<value>|
> +SET_FEATURE		<domid>|<value>|
> +	Returns or sets the contents of the "feature" field located at
> +	offset 2064 of the Xenstore ring page of the domain specified by
> +	<domid>. <value> is a decimal number being a logical or of the

In the context of migration, I am still a bit concerned that the
features are stored in the ring because the guest could overwrite them.

I would expect the migration code to check that GET_FEATURE <domid>
returns a subset of GET_FEATURE on the target Xenstored. So it can
easily prevent a guest from migrating.

So I think this should be a shadow copy that will be returned instead of 
the contents of the "feature" field.

> +	feature bits as defined in docs/misc/xenstore-ring.txt. Trying
> +	to set a bit for a feature not being supported by the running
> +	Xenstore will be denied. Providing no <domid> with the
> +	GET_FEATURE command will return the features which are supported
> +	by Xenstore.

Do we want to allow modifying the features when the guest is running?

> +
> +	xenstored prevents the use of GET_FEATURE and SET_FEATURE other
> +	than by dom0.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:34:19 2022
Message-ID: <a3eb8018-2e32-e451-7d97-885a5d4fd336@xen.org>
Date: Thu, 23 Jun 2022 19:34:09 +0100
Subject: Re: [PATCH v2 3/4] tools/xenstore: add documentation for new
 set/get-quota commands
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220527072427.20327-4-jgross@suse.com>

Hi Juergen,

On 27/05/2022 08:24, Juergen Gross wrote:
> Add documentation for two new Xenstore wire commands SET_QUOTA and
> GET_QUOTA used to set or query the global Xenstore quota or those of
> a given domain.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> Note that it might be a good idea to add support to the Xenstore
> migration protocol to transfer quota data (global and/or per domain).

I think this is needed because a user may have configured a domain with
quotas above the default. After Live-Update, we would have a short
window where the domain may not function properly.

I think it would be good to document the migration part in this patch.
But if you want to do it separately then:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:40:56 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171327-mainreport@xen.org>
Subject: [xen-unstable test] 171327: tolerable FAIL
X-Osstest-Versions-This:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
X-Osstest-Versions-That:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 18:40:50 +0000

flight 171327 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171327/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install                fail pass in 171319
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore          fail pass in 171319
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 171319

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt     15 migrate-support-check fail in 171319 never pass
 test-amd64-i386-freebsd10-amd64  7 xen-install                fail like 171319
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171319
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171319
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171319
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171319
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171319
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171319
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171319
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171319
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171319
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171319
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171319
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171319
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b
baseline version:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b

Last test of basis   171327  2022-06-23 10:35:21 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:42:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 18:42:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355143.582625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4RmF-0007M9-Ot; Thu, 23 Jun 2022 18:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355143.582625; Thu, 23 Jun 2022 18:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4RmF-0007M2-Ka; Thu, 23 Jun 2022 18:42:03 +0000
Received: by outflank-mailman (input) for mailman id 355143;
 Thu, 23 Jun 2022 18:42:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4RmF-0007Lw-9i
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 18:42:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4RmF-0003OG-1D; Thu, 23 Jun 2022 18:42:03 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4RmE-0002UU-R4; Thu, 23 Jun 2022 18:42:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=BUUnwttK0ft7tr2MeA1AsSA1yujibvMxxt5aLZkaKFY=; b=CBvM5iamDEUt2u4NzPIv3UZefg
	P7iiuyX5Hu3+xsxpdUxtU1bkeEiUumvTmaGS3RmwYIE9e+4OMHxcaSBQyvsIJ1OibPQwTMYKUgKuS
	ujeZ7tSFiRcWowoESE4CPUEFWKZ4+MqG4ryfsnBSvYtHPrJ4QUtI1PA2t8PpJ7i3oIwA=;
Message-ID: <121f2685-014e-0a8c-75e7-e224d469dcf5@xen.org>
Date: Thu, 23 Jun 2022 19:42:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 4/4] tools/xenstore: add documentation for extended
 watch command
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220527072427.20327-5-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 27/05/2022 08:24, Juergen Gross wrote:
> Add documentation for an extension of the WATCH command used to limit
> the scope of watched paths. Additionally, it allows receiving more
> information in the events related to special watches (@introduceDomain
> or @releaseDomain).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> This will probably need an extension of the Xenstore migration
> protocol, too.

That's going to be necessary in order to live-update xenstored.

I would prefer if this were dealt with in this patch, but if you want to do it 
separately then:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:45:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 18:45:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355151.582636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Rpb-0000Dm-6f; Thu, 23 Jun 2022 18:45:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355151.582636; Thu, 23 Jun 2022 18:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Rpb-0000Df-39; Thu, 23 Jun 2022 18:45:31 +0000
Received: by outflank-mailman (input) for mailman id 355151;
 Thu, 23 Jun 2022 18:45:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4RpZ-0000DS-MQ
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 18:45:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4RpZ-0003RI-9B; Thu, 23 Jun 2022 18:45:29 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4RpZ-0002m8-3v; Thu, 23 Jun 2022 18:45:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=68v5wa/HlVOdFYWFpVpw3Kwn3p9rtOYQIlQj2aYg0mY=; b=RXQrG3+9XjLkxjMuZLAlr0KpH+
	rBxKe0nbYW+riStczEXzIDFjDZNV9ce518nFmnEzjgQq0dNmj9QzAiHT9rabTcULZRf6b4YY4S6Ay
	yLLp4qOW3iU7jD1x/6+EHlaIvpWbcISoVrgGH8LhgQJmp1TkCjxQmZonVWDQeNzKFtTA=;
Message-ID: <20ff7145-b561-b080-29ea-08583da140a4@xen.org>
Date: Thu, 23 Jun 2022 19:45:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/common: device_tree: Fix MISRA C 2012 Rule 8.7
 violation
To: Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: xen-devel@lists.xenproject.org
References: <20220622151557.545880-1-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206221231260.2157383@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206221231260.2157383@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 22/06/2022 20:31, Stefano Stabellini wrote:
> On Wed, 22 Jun 2022, Xenia Ragiadakou wrote:
>> The function __dt_n_size_cells() is referenced only in device_tree.c.
>> Change the linkage of the function from external to internal by adding
>> the storage-class specifier static to the function definition.
>>
>> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
>> warning.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Committed.

Cheers,

> 
> 
>> ---
>>   xen/common/device_tree.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>> index 0e8798bd24..6c9712ab7b 100644
>> --- a/xen/common/device_tree.c
>> +++ b/xen/common/device_tree.c
>> @@ -496,7 +496,7 @@ static int __dt_n_addr_cells(const struct dt_device_node *np, bool_t parent)
>>       return DT_ROOT_NODE_ADDR_CELLS_DEFAULT;
>>   }
>>   
>> -int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
>> +static int __dt_n_size_cells(const struct dt_device_node *np, bool_t parent)
>>   {
>>       const __be32 *ip;
>>   
>> -- 
>> 2.34.1
>>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:52:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 18:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355158.582647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Rw1-0002K0-S2; Thu, 23 Jun 2022 18:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355158.582647; Thu, 23 Jun 2022 18:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Rw1-0002Jt-PB; Thu, 23 Jun 2022 18:52:09 +0000
Received: by outflank-mailman (input) for mailman id 355158;
 Thu, 23 Jun 2022 18:52:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4Rw1-0002Jn-3U
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 18:52:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4Rw0-0003Yz-Av; Thu, 23 Jun 2022 18:52:08 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4Rw0-0002xF-2D; Thu, 23 Jun 2022 18:52:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=6HUAoMgu7OAuQziJOh1VskMn2fbI/nwiDwPyLEcVb6s=; b=upNUfFMS7CvBlDA0x6ldEAq2Fs
	XNId6BnGbg1tvPOuuIj3BFXXmjIyH/vFzttXAri+8xPBPldOQ0M2YyKHI3MAw3iVkwqvPigYwdPQZ
	kMYi7kbb6QXRICeckACrfCBfeUUc0YD/bGoiJdpDVqGtCKSfULJDEaLhsayCj0FSZD9w=;
Message-ID: <42ccc891-d56a-8928-c94b-911076ff7e85@xen.org>
Date: Thu, 23 Jun 2022 19:52:06 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/3] xen/arm: shutdown: Fix MISRA C 2012 Rule 8.4
 violation
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220622151514.545850-1-burzalodowa@gmail.com>
 <50F8F42B-F82B-4F9C-87B4-6090A5BB2B57@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <50F8F42B-F82B-4F9C-87B4-6090A5BB2B57@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Xenia,

On 22/06/2022 16:19, Bertrand Marquis wrote:
>> On 22 Jun 2022, at 16:15, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>>
>> Include header <xen/shutdown.h> so that the declarations of the functions
>> machine_halt() and machine_restart(), which have external linkage, are visible
>> before the function definitions.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I couldn't find a cover letter, so I am replying here.

The series is now committed.

In the future, please create a cover letter when you send multiple 
patches (git-format-patch should already do that for you).
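
For reference, the workflow suggested above can be sketched as follows (the
"-2" patch count here is illustrative, not taken from this series; run it
from the branch that contains the patches):

```shell
# Emit the series as numbered patch files, plus an editable
# 0000-cover-letter.patch stub that summarizes the whole series:
git format-patch --cover-letter -2 HEAD
```

Filling in the Subject and blurb of 0000-cover-letter.patch before sending
gives reviewers the series overview being asked for here.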

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 18:56:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 18:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355170.582674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4S0B-0003C7-N3; Thu, 23 Jun 2022 18:56:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355170.582674; Thu, 23 Jun 2022 18:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4S0B-0003C0-Jv; Thu, 23 Jun 2022 18:56:27 +0000
Received: by outflank-mailman (input) for mailman id 355170;
 Thu, 23 Jun 2022 18:56:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4S0A-0003Bu-75
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 18:56:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4S09-0003eS-J3; Thu, 23 Jun 2022 18:56:25 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4S09-0003Il-DM; Thu, 23 Jun 2022 18:56:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=u0VfwaeWhaBIPwVjcu6HcW3tM4iS8ZTgV5gppUHPras=; b=0kRIFaThqTY4jZyXRRe3qUk2RJ
	t+29rI+kDifBX9/FhPddgPVXTh++CTELTGjtWXCJ2aJ9ZkUztUBhqeUWDfCDnYemd7/2s5/sPvKMY
	0BWBS+YTX7+8GgNtE3f48q7//tErjfrv5MW2JSW8Lu6w35hRXA6VImucRREwSxK+UlPw=;
Message-ID: <4b24526f-6816-209e-e64e-7a08381b6002@xen.org>
Date: Thu, 23 Jun 2022 19:56:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] xen/arm: vtimer: Fix MISRA C 2012 Rule 8.4 violation
To: Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220621154402.482857-1-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206211644210.788376@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206211644210.788376@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Xenia,

On 22/06/2022 00:44, Stefano Stabellini wrote:
> On Tue, 21 Jun 2022, Xenia Ragiadakou wrote:
>> Include vtimer.h so that the declarations of vtimer functions with external
>> linkage are visible before the function definitions.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Like your other series, I wasn't able to find a cover letter, so I am 
replying here.

I have now committed it. But in the future, please create a cover letter.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 19:05:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 19:05:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355179.582694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4S95-0005Pf-OI; Thu, 23 Jun 2022 19:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355179.582694; Thu, 23 Jun 2022 19:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4S95-0005PY-In; Thu, 23 Jun 2022 19:05:39 +0000
Received: by outflank-mailman (input) for mailman id 355179;
 Thu, 23 Jun 2022 19:05:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VLWc=W6=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o4S94-0005PS-EV
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 19:05:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75a43b5e-f327-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 21:05:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 11D5760DE4;
 Thu, 23 Jun 2022 19:05:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D9B7EC341C6;
 Thu, 23 Jun 2022 19:05:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a43b5e-f327-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656011133;
	bh=DlaG4hikWmV5GgEEhFKLTExjiONKayjhelSAZ9OX0mg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MeGFBC/IDQs/gYGFQsyiAGqtObMQqYr659vAvp7nny88g0w7tOHjkssgIytGXPc+C
	 xk6osWSsynJdCPNVUy3c8sd7YAyJvimq3VMu7tS1Xj03W2wLYi6jVds/QFyMUbToL1
	 jn1MHFro7tUU6qPXZgvsp6G+DEQGbDW9Pns/DN9RGeghRpK1mqajA/Izr9aLVrZMyA
	 bWITtmWyjKB2/6ZtvSB67uae4qBwgd87RPSQFE/31njvIvzH8kgGrlP/xuZnTtYpTC
	 CKOkpnK0zDQC2CmKsMNnQTIP7qBnTvvu/+Ou3PCI4rVxCXVed0ukDnnKnuE4Qik8aI
	 4/IYy8dsocBSA==
Date: Thu, 23 Jun 2022 12:05:32 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] xen: Add MISRA support to cppcheck make rule
In-Reply-To: <99E7CA0A-B87F-40D3-BE15-AA344AFB9855@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206231204400.2410338@ubuntu-linux-20-04-desktop>
References: <82a29dff7a0da97cc6ad9d247a97372bcf71f17c.1654850751.git.bertrand.marquis@arm.com> <alpine.DEB.2.22.394.2206211658480.788376@ubuntu-linux-20-04-desktop> <FE2CD795-09AC-4AD0-8F08-8320FE7122C5@arm.com> <alpine.DEB.2.22.394.2206221445520.2352613@ubuntu-linux-20-04-desktop>
 <99E7CA0A-B87F-40D3-BE15-AA344AFB9855@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 23 Jun 2022, Bertrand Marquis wrote:
> > On 22 Jun 2022, at 22:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 22 Jun 2022, Bertrand Marquis wrote:
> >> Hi Stefano,
> >> 
> >>> On 22 Jun 2022, at 01:00, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>> 
> >>> On Fri, 10 Jun 2022, Bertrand Marquis wrote:
> >>>> cppcheck MISRA addon can be used to check for non compliance to some of
> >>>> the MISRA standard rules.
> >>>> 
> >>>> Add a CPPCHECK_MISRA variable that can be set to "y" using make command
> >>>> line to generate a cppcheck report including cppcheck misra checks.
> >>>> 
> >>>> When MISRA checking is enabled, a file with a text description suitable
> >>>> for cppcheck misra addon is generated out of Xen documentation file
> >>>> which lists the rules followed by Xen (docs/misra/rules.rst).
> >>>> 
> >>>> By default MISRA checking is turned off.
> >>>> 
> >>>> While adding cppcheck-misra files to gitignore, also fix the missing /
> >>>> for htmlreport gitignore
> >>>> 
> >>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >>> 
> >>> Hi Bertrand,
> >>> 
> >>> I tried this patch and I am a bit confused by the output
> >>> cppcheck-misra.txt file that I get (appended.)
> >>> 
> >>> I can see that there are all the rules from docs/misra/rules.rst as it
> >>> should be together with the one line summary, but there are also a bunch
> >>> of additional rules not present in docs/misra/rules.rst. Starting from
> >>> Rule 1.1 all the way to Rule 21.21. Is that expected?
> >> 
> >> To make cppcheck happy I need to provide a text for all rules, so the python script generates a dummy sentence for MISRA rules not declared in docs/misra/rules.rst, to prevent cppcheck warnings. To keep it simple I just did it for main and sub numbers 1 to 22.
> >> 
> >> So yes this is expected.
> > 
> > No problem about the dummy text sentence. My question was why are all
> > those additional rules listed?
> > 
> > If you see below, the first few rules from 2.1 to 20.14 are coming from
> > docs/misra/rules.rst. Why are the other rules afterward from 1.1 to
> > 21.21 listed and where are they coming from?
> 
> Those are dummy entries generated by the python script.
> 
> > 
> > Is it because all rules need to be listed? And the ones that are enabled
> > are marked as "Required"?
> 
> If a rule is not listed in the file, cppcheck will give a warning.
> 
> > 
> > I take we couldn't just avoid listing the other rules (the ones not in
> > docs/misra/rules.rst)?
> 
> I can, but each cppcheck command will output a warning for every rule that has no description in the generated file.

No, that makes sense. It is to silence a warning. Maybe explain this in
the commit message and add my

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
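
For reference, the rule-texts file the script generates follows the format
cppcheck's misra addon accepts via --rule-texts=. The shape below is
illustrative only (the exact dummy wording and severity lines are the
script's choice, not copied from the real generated file):

```
Rule 2.1
Required
A project shall not contain unreachable code.
Rule 20.14
Required
All #else, #elif and #endif preprocessor directives shall reside in the
same file as the #if, #ifdef or #ifndef directive to which they are related.
Rule 1.1
Dummy text: rule not covered by docs/misra/rules.rst.
```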


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 21:15:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 21:15:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355187.582709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UA2-0008VH-0Y; Thu, 23 Jun 2022 21:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355187.582709; Thu, 23 Jun 2022 21:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UA1-0008VA-Tr; Thu, 23 Jun 2022 21:14:45 +0000
Received: by outflank-mailman (input) for mailman id 355187;
 Thu, 23 Jun 2022 21:14:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lCxP=W6=bugseng.com=roberto.bagnara@srs-se1.protection.inumbo.net>)
 id 1o4UA0-0008V4-Uy
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 21:14:44 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7fc18343-f339-11ec-bd2d-47488cf2e6aa;
 Thu, 23 Jun 2022 23:14:42 +0200 (CEST)
Received: from [192.168.1.14] (unknown [151.63.34.150])
 by support.bugseng.com (Postfix) with ESMTPSA id DBBE44EE0778;
 Thu, 23 Jun 2022 23:14:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fc18343-f339-11ec-bd2d-47488cf2e6aa
Message-ID: <9f315162-f88f-9d96-04a6-480313cd83f1@bugseng.com>
Date: Thu, 23 Jun 2022 23:14:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
Subject: Re: [PATCH 0/9] MISRA C 2012 8.1 rule fixes
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <Michal.Orzel@arm.com>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20220620070245.77979-1-michal.orzel@arm.com>
 <dd016e82-2480-0e1e-6286-18b2f677dd65@suse.com>
 <74ec2158-3d19-3b2c-1e8c-fb5b30267658@arm.com>
 <d91bb4ea-41be-225e-e2fe-1b03aa06c677@suse.com>
 <C45BA6EE-6294-4C6F-ADC4-3DE7C8DA866F@arm.com>
 <68d7fb35-e4c5-e5d2-13a8-9ee1369e8dbe@suse.com>
 <BE80A241-7983-425F-9212-0957E29AA5C7@arm.com>
 <7a8d70e3-c331-426d-fe96-77bd65caade7@suse.com>
 <alpine.DEB.2.22.394.2206221212510.2157383@ubuntu-linux-20-04-desktop>
 <8610703e-fd15-bba1-3bb1-cfe038f9b11c@bugseng.com>
 <3e86d233-7c9a-cd80-a744-c4bdd42ac85c@suse.com>
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
In-Reply-To: <3e86d233-7c9a-cd80-a744-c4bdd42ac85c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


Hi Jan.

I know I will sound pedantic ;-)  but an important fact about
the MISRA standards is that reading the headline alone is almost
never enough.  In the specific case of (advisory) Directive 4.6,
the Rationale says, among other things:

     It might be desirable not to apply this guideline when
     interfacing with The Standard Library or code outside
     the project’s control.

For this reason, size_t is typically set as an exception in the
tool configuration.  To properly deal with the many Standard Library
functions returning int, one can use a typedef named something
like "lib_int_t" to write, e.g.,

   const lib_int_t r = strncmp(...);

The lib_int_t typedef can be used with a suitable tool configuration,
just as I mentioned one would do with size_t.

Kind regards,

    Roberto

On 23/06/22 09:51, Jan Beulich wrote:
> On 23.06.2022 09:37, Roberto Bagnara wrote:
>> Rule 8.1 only applies to C90 code, as all the violating instances are
>> syntax errors in C99 and later versions of the language.  So,
>> the following line does not contain a violation of Rule 8.1:
>>
>>       unsigned x;
>>
>> It does contain a violation of Directive 4.6, though, whose correct
> >> handling depends on the intention (uint32_t, uint64_t, size_t, ...).
> 
> Interesting - this goes straight against a rule we have set in
> ./CODING_STYLE. I'm also puzzled by you including size_t in your list
> of examples, when the spec doesn't. The sole "goal" of the directive
> (which is advisory only anyway) is to be able to determine allocation
> size. size_t size, however, varies as much as short, int, long, etc
> do.
> 
> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 21:34:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 21:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355194.582721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UTK-0003X9-Ia; Thu, 23 Jun 2022 21:34:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355194.582721; Thu, 23 Jun 2022 21:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UTK-0003X2-FZ; Thu, 23 Jun 2022 21:34:42 +0000
Received: by outflank-mailman (input) for mailman id 355194;
 Thu, 23 Jun 2022 21:34:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VLWc=W6=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o4UTJ-0003Ww-5N
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 21:34:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48cd06fc-f33c-11ec-b725-ed86ccbb4733;
 Thu, 23 Jun 2022 23:34:39 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 43DE661E7C;
 Thu, 23 Jun 2022 21:34:38 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 30591C341C0;
 Thu, 23 Jun 2022 21:34:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48cd06fc-f33c-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656020077;
	bh=bUiq+qmbQ/hVPGovzl9M2oLefdg+JaPzX1MK6dK8zeI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HZqYaCgYUg9XUGiq5ubv2P3PI2PQ1TWmaleMvIBCZRlWRry27MAVqgnwhzHpsnaix
	 n7uSvCQOWqUKKgpvnq00moPHZOcx+I5s5b3YZ3ipANYMXTTCimA2UHyuLbBFw6hB4r
	 87i9tsfjGhSS1Am/e0bnpltVO5RBw4ezUoSVkRJ4SPAhy2UqupC8Vssd7pTyyTBMcR
	 wnyhBfdrUKikVQeulNeXz5uHTvZy6f6jbV4sNOpEU6wvW0L+3kQ2VWUCyRfrMOsXE1
	 ROwPooPAUKNbHAk7jCNszCKDb50WDh5M8MV+zjfkTdLcQi7DCSicN1j9vBYWh6yvZe
	 4KbQCPwv2cKpw==
Date: Thu, 23 Jun 2022 14:34:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    Julien Grall <jgrall@amazon.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
In-Reply-To: <87b7646c-dbc0-f503-131a-a51aa3bd517f@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206231434150.2410338@ubuntu-linux-20-04-desktop>
References: <20220614094157.95631-1-julien@xen.org> <alpine.DEB.2.22.394.2206141731320.1837490@ubuntu-linux-20-04-desktop> <87b7646c-dbc0-f503-131a-a51aa3bd517f@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Julien Grall wrote:
> On 15/06/2022 01:32, Stefano Stabellini wrote:
> > On Tue, 14 Jun 2022, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
> > > xmalloc()" extended the checks in _xmalloc() to catch any use of the
> > > helpers from context with interrupts disabled.
> > > 
> > > Unfortunately, the rule is not followed when initializing the per-CPU
> > > IRQs:
> > > 
> > > (XEN) Xen call trace:
> > > (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
> > > (XEN)    [<00000000>] 00000000 (LR)
> > > (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
> > > (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
> > > (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
> > > (XEN)    [<00288fa4>] start_secondary+0x194/0x274
> > > (XEN)    [<40010170>] 40010170
> > > (XEN)
> > > (XEN)
> > > (XEN) ****************************************
> > > (XEN) Panic on CPU 2:
> > > (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus()
> > > <= 1)' failed at common/xmalloc_tlsf.c:601
> > > (XEN) ****************************************
> > > 
> > > This is happening because zalloc_cpumask_var() may allocate memory
> > > if NR_CPUS is > 2 * sizeof(unsigned long).
> > > 
> > > Avoid the problem by allocating the per-CPU IRQs while preparing the
> > > CPU.
> > > 
> > > This also has the benefit of removing a BUG_ON() in the secondary CPU
> > > code.
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > ---
> > >   xen/arch/arm/include/asm/irq.h |  1 -
> > >   xen/arch/arm/irq.c             | 35 +++++++++++++++++++++++++++-------
> > >   xen/arch/arm/smpboot.c         |  2 --
> > >   3 files changed, 28 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/include/asm/irq.h
> > > b/xen/arch/arm/include/asm/irq.h
> > > index e45d57459899..245f49dcbac5 100644
> > > --- a/xen/arch/arm/include/asm/irq.h
> > > +++ b/xen/arch/arm/include/asm/irq.h
> > > @@ -73,7 +73,6 @@ static inline bool is_lpi(unsigned int irq)
> > >   bool is_assignable_irq(unsigned int irq);
> > >     void init_IRQ(void);
> > > -void init_secondary_IRQ(void);
> > >     int route_irq_to_guest(struct domain *d, unsigned int virq,
> > >                          unsigned int irq, const char *devname);
> > > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > > index b761d90c4063..56bdcb95335d 100644
> > > --- a/xen/arch/arm/irq.c
> > > +++ b/xen/arch/arm/irq.c
> > > @@ -17,6 +17,7 @@
> > >    * GNU General Public License for more details.
> > >    */
> > >   +#include <xen/cpu.h>
> > >   #include <xen/lib.h>
> > >   #include <xen/spinlock.h>
> > >   #include <xen/irq.h>
> > > @@ -100,7 +101,7 @@ static int __init init_irq_data(void)
> > >       return 0;
> > >   }
> > >   -static int init_local_irq_data(void)
> > > +static int init_local_irq_data(unsigned int cpu)
> > >   {
> > >       int irq;
> > >   @@ -108,7 +109,7 @@ static int init_local_irq_data(void)
> > >         for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
> > >       {
> > > -        struct irq_desc *desc = irq_to_desc(irq);
> > > +        struct irq_desc *desc = &per_cpu(local_irq_desc, cpu)[irq];
> > >           int rc = init_one_irq_desc(desc);
> > >             if ( rc )
> > > @@ -131,6 +132,29 @@ static int init_local_irq_data(void)
> > >       return 0;
> > >   }
> > >   +static int cpu_callback(struct notifier_block *nfb, unsigned long
> > > action,
> > > +                        void *hcpu)
> > > +{
> > > +    unsigned long cpu = (unsigned long)hcpu;
> > 
> > unsigned int cpu ?
> 
> Hmmm... We seem to have a mix in the code base. I am OK to switch to unsigned
> int.
> 
> > 
> > The rest looks good
> Can this be converted to an ack or review tag?

Yes, add my reviewed-by


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 21:46:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 21:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355201.582732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UeI-0005Vg-Lf; Thu, 23 Jun 2022 21:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355201.582732; Thu, 23 Jun 2022 21:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4UeI-0005VZ-HR; Thu, 23 Jun 2022 21:46:02 +0000
Received: by outflank-mailman (input) for mailman id 355201;
 Thu, 23 Jun 2022 21:46:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4UeH-0005VT-Mw
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 21:46:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4UeF-0006e0-J7; Thu, 23 Jun 2022 21:45:59 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4UeF-0003If-Cx; Thu, 23 Jun 2022 21:45:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=AacVPjNeEi7EN173UT+U8R06OOBWooAOgGR2mpo/RO8=; b=OCVhhaRKjjNFHZbFpmuVLEM7rj
	fVYS5GyllYXXtq57MYJJr94b5gmUs7/G6IED4fafN9NP7TZxK4lkWVJBmCUsFIstPyX5cXIsoRIBb
	oJNr71q72AKSNxvQ32Hi7ke/sCvGMwz6qBbLjuyydFjJmCb9+4c2VNQJ6uC3T2u7lC7Q=;
Message-ID: <b8f364cc-ef22-75bb-8362-c44698d318ff@xen.org>
Date: Thu, 23 Jun 2022 22:45:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20220523194953.70636-1-julien@xen.org>
 <F2040FC0-C040-46F5-8DD0-79EE0E1B3A1E@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/arm: Remove most of the *_VIRT_END defines
In-Reply-To: <F2040FC0-C040-46F5-8DD0-79EE0E1B3A1E@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 24/05/2022 09:05, Bertrand Marquis wrote:
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>> I noticed that a few functions in Xen expect [start, end[. This is risky
>> as we may end up with end < start if the region is defined right at the
>> top of the address space.
>>
>> I haven't yet tackled this issue. But I would at least like to get rid
>> of *_VIRT_END.
>> ---
>> xen/arch/arm/include/asm/config.h | 18 ++++++++----------
>> xen/arch/arm/livepatch.c          |  2 +-
>> xen/arch/arm/mm.c                 | 13 ++++++++-----
>> 3 files changed, 17 insertions(+), 16 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 3e2a55a91058..66db618b34e7 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -111,12 +111,11 @@
>> #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
>>
>> #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
>> -#define BOOT_FDT_SLOT_SIZE     MB(4)
>> -#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
>> +#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
>>
>> #ifdef CONFIG_LIVEPATCH
>> #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
>> -#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
>> +#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
>> #endif
>>
>> #define HYPERVISOR_VIRT_START  XEN_VIRT_START
>> @@ -132,18 +131,18 @@
>> #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>>
>> #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
>> +#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
> 
> This looks a bit odd, any reason not to use MB(768) ?

This is to match how we define FRAMETABLE_SIZE. It is a lot easier to 
figure out how the size was found and match the comment:

  256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
                      space

> If not, then there might be something worth explaining with a comment here.

This is really a matter of taste here. I don't think it has to be 
explained in the commit message.

[...]

>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index be37176a4725..0607c65f95cd 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
>> /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>> static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>> #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
>> -/* xen_dommap == pages used by map_domain_page, these pages contain
>> +/*
>> + * xen_dommap == pages used by map_domain_page, these pages contain
>>   * the second level pagetables which map the domheap region
>> - * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>> + * starting at DOMHEAP_VIRT_START in 2MB chunks.
>> + */
> 
> Please just mention that you also fixed a comment coding style in the commit message.

Sure.

> 
>> static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>> /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
>> static DEFINE_PAGE_TABLE(cpu0_pgtable);
>> @@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
>>      int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
>>      unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
>>
>> -    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
>> +    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
>>          return virt_to_mfn(va);
>>
>>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>> @@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
>>      int rc;
>>
>>      /* destroy the _PAGE_BLOCK mapping */
>> -    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
>> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
>> +                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
>>                               _PAGE_BLOCK);
>>      BUG_ON(rc);
>> }
>> @@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>
>> void *__init arch_vmap_virt_end(void)
>> {
>> -    return (void *)VMAP_VIRT_END;
>> +    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
>> }
>>
>> /*

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 22:07:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 22:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355208.582743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Uz1-0000W3-Dn; Thu, 23 Jun 2022 22:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355208.582743; Thu, 23 Jun 2022 22:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4Uz1-0000Vw-B1; Thu, 23 Jun 2022 22:07:27 +0000
Received: by outflank-mailman (input) for mailman id 355208;
 Thu, 23 Jun 2022 22:07:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VLWc=W6=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o4Uyz-0000Vq-SC
 for xen-devel@lists.xenproject.org; Thu, 23 Jun 2022 22:07:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc0cc03e-f340-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 00:07:24 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2326961EA4;
 Thu, 23 Jun 2022 22:07:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 080B3C341C0;
 Thu, 23 Jun 2022 22:07:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc0cc03e-f340-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656022042;
	bh=LlUv5RLzDJ+mvV5i38k8rTDG/sLFJ6pidfM/gtiGwmc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=luTI+TugVIJOXOmUGGsTq1BBnIBPVbewgC7bPJq/IiviJunuM7SouMt0kPZRu0VsI
	 tvipyY+Jkhed3qR63KJvsugaQ6KeXrkizfVLuCDr5M9B8V5ydUnZzlz6zChQIsgNpg
	 t0y0tCnAeYJ6YtBK38TrSje7/+0iLLdqX+MKaUTLoslQm85uLubHnS1PrDSVfnRLiZ
	 5zsD4XODxojNac/f2uahfrv07Z0aBaTRQ7orwU9RC0DoEHTk/Ob9vUjmNuXCbOee7Q
	 PoQpvgGfWJz0oXIA6qmtnglBBi1DPudghMdcjRi8nZpzhWtb0I+gfA/Zg8T7gXQrs4
	 2epE76b7e86Pg==
Date: Thu, 23 Jun 2022 15:07:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: dmitry.semenets@gmail.com
cc: xen-devel@lists.xenproject.org, Dmytro Semenets <dmytro_semenets@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <20220623074428.226719-1-dmitry.semenets@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
> From: Dmytro Semenets <dmytro_semenets@epam.com>
> 
> When shutting down (or rebooting) the platform, Xen will call stop_cpu()
> on all the CPUs but one. The last CPU will then request the system to
> shutdown/restart.
> 
> On platform using PSCI, stop_cpu() will call PSCI CPU off. Per the spec
> (section 5.5.2 DEN0022D.b), the call could return DENIED if the Trusted
> OS is resident on the CPU that is about to be turned off.
> 
> As Xen doesn't migrate off the trusted OS (which BTW may not be
> migratable), it would be possible to hit the panic().
> 
> In the ideal situation, Xen should migrate the trusted OS or make sure
> the CPU off is not called. However, when shutting down (or rebooting)
> the platform, it is pointless to try to turn off all the CPUs (per
> section 5.10.2, it is only required to put the core in a known state).
> 
> So solve the problem by open-coding stop_cpu() in halt_this_cpu() and
> not calling PSCI CPU off.
> 
> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
> ---
>  xen/arch/arm/shutdown.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
> index 3dc6819d56..a9aea19e8e 100644
> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -8,7 +8,12 @@
>  
>  static void noreturn halt_this_cpu(void *arg)
>  {
> -    stop_cpu();
> +    local_irq_disable();
> +    /* Make sure the write happens before we sleep forever */
> +    dsb(sy);
> +    isb();
> +    while ( 1 )
> +        wfi();
>  }


stop_cpu already has a wfi loop just after the psci call:

    call_psci_cpu_off();

    while ( 1 )
        wfi();


So wouldn't it be better to remove the panic from the implementation of
call_psci_cpu_off?

The reason I am saying this is that stop_cpu is called in a number of
places beyond halt_this_cpu and as far as I can tell any of them could
trigger the panic. (I admit they are unlikely places but still.)


Also the PSCI spec explicitly mentions CPU_OFF as a way to place CPUs in
a "known state" and doesn't offer any other examples. So although what
you wrote in the commit message is correct, using CPU_OFF seems to be
the least error-prone way (in the sense of triggering firmware errors) to
place CPUs in a known state.

So I would simply remove the panic from call_psci_cpu_off, so that we
try CPU_OFF first, and if it doesn't work, we use the WFI loop as
backup. Also we get to fix all the other callers of stop_cpu this way.


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 22:16:09 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171333-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 171333: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 22:15:55 +0000

flight 171333 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171333/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281
baseline version:
 xen                  db3382dd4f468c763512d6bf91c96773395058fb

Last test of basis   171328  2022-06-23 13:01:50 Z    0 days
Testing same since   171333  2022-06-23 19:03:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   db3382dd4f..7c1f724dd9  7c1f724dd95cf627f72c96d310b6b7d487bc2281 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 22:35:19 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171329-mainreport@xen.org>
Subject: [linux-linus test] 171329: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 22:35:11 +0000

flight 171329 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171329/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                de5c208d533a46a074eb46ea17f672cc005a7269
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    4 days
Failing since        171280  2022-06-19 15:12:25 Z    4 days   13 attempts
Testing same since   171322  2022-06-23 03:32:00 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Usyskin <alexander.usyskin@intel.com>
  Ali Saidi <alisaidi@amazon.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David Sterba <dsterba@suse.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Joe Damato <jdamato@fastly.com>
  Joel Savitz <jsavitz@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Nico Pache <npache@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa@kernel.org>
  Yu Liao <liaoyu15@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2432 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 23 23:48:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jun 2022 23:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355234.582776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4WYC-0007cF-J1; Thu, 23 Jun 2022 23:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355234.582776; Thu, 23 Jun 2022 23:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4WYC-0007c8-FS; Thu, 23 Jun 2022 23:47:52 +0000
Received: by outflank-mailman (input) for mailman id 355234;
 Thu, 23 Jun 2022 23:47:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4WYB-0007by-RB; Thu, 23 Jun 2022 23:47:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4WYB-0000KH-MD; Thu, 23 Jun 2022 23:47:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4WYB-00025a-8q; Thu, 23 Jun 2022 23:47:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4WYB-00029d-8P; Thu, 23 Jun 2022 23:47:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zHq+uvTLZjpw3H4+ZTwkGyx9AkeGiQKDU5Inh1EgYLg=; b=rV4HYGZuYShE7RabdtFtVX90kW
	HuLMTUC6s6Jg3jEeG+oj9IyaIr+AynjS5yd01TJvtsFZcJc25tXRpNzGeI+qj5eYYeywX/2fmEJMp
	Geh+I6qjybJbssStrABKZUEgpSdmGZWUmwoDdTartKnys/ZsNwGJyU2lOMxdicqgp07I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171330: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f0c280af0ec7c79cf043594974206d87c3c46524
X-Osstest-Versions-That:
    linux=a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jun 2022 23:47:51 +0000

flight 171330 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171330/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171224
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171275
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171275
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171275
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 171275
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 171275
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171275
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171275
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171275
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171275
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171275
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171275
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 linux                f0c280af0ec7c79cf043594974206d87c3c46524
baseline version:
 linux                a31bd366116cb157fc20d5cdc8a2fd1c0d39ac38

Last test of basis   171275  2022-06-18 21:42:02 Z    5 days
Testing same since   171309  2022-06-22 12:44:47 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Ivan T. Ivanov" <iivanov@suse.de>
  Aaron Conole <aconole@redhat.com>
  Adam Ford <aford173@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Andre Przywara <andre.przywara@arm.com>
  Andy Chi <andy.chi@canonical.com>
  Andy Lutomirski <luto@kernel.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arvind Sankar <nivedita@alum.mit.edu>
  Baokun Li <libaokun1@huawei.com>
  Bharathi Sreenivas <bharathi.sreenivas@intel.com>
  Brian King <brking@linux.vnet.ibm.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jingwen <chenjingwen6@huawei.com>
  Chen Lin <chen45464546@163.com>
  Chengguang Xu <cgxu519@mykernel.net>
  chengkaitao <pilgrimtao@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe de Dinechin <dinechin@redhat.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  David S. Miller <davem@davemloft.net>
  Davide Caratti <dcaratti@redhat.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dinh Nguyen <dinguyen@kernel.org>
  Dominik Brodowski <linux@dominikbrodowski.net>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Szczurek <grzegorzx.szczurek@intel.com>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  huangwenhui <huangwenhuia@uniontech.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <jsmart2021@gmail.com>
  Jan Varho <jan.varho@gmail.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jedrzej Jagielski <jedrzej.jagielski@intel.com>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Szu <jeremy.szu@canonical.com>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes@sipsolutions.net>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Neuschäfer <j.neuschaefer@gmx.net>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Justin Tee <justin.tee@broadcom.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin Faltesek <mfaltesek@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Matt Turner <mattst88@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Jaron <michalx.jaron@intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
  Nicolai Stange <nstange@suse.de>
  Olof Johansson <olof@lixom.net>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Walmsley <paul.walmsley@sifive.com>
  Petr Machata <petrm@nvidia.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Rob Clark <robdclark@chromium.org>
  Robert Eckelmann <longnoserob@gmail.com>
  Samuel Neves <sneves@dei.uc.pt>
  Sasha Levin <sashal@kernel.org>
  Schspa Shi <schspa@gmail.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Shtylyov <s.shtylyov@omp.ru>
  Shuah Khan <skhan@linuxfoundation.org>
  Slark Xiao <slark_xiao@163.com>
  Stephan Mueller <smueller@chronox.de>
  Stephan Müller <smueller@chronox.de>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Uwe Kleine-König <u.kleine-koenig@penugtronix.de>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Wang Yufen <wangyufen@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Wentao Wang <wwentao@vmware.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Xiaohui Zhang <xiaohuizhang@ruc.edu.cn>
  Yangtao Li <tiny.windzz@gmail.com>
  Yuntao Wang <ytcoode@gmail.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a31bd366116c..f0c280af0ec7  f0c280af0ec7c79cf043594974206d87c3c46524 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 04:14:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 04:14:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355243.582790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ahX-0006pz-Dz; Fri, 24 Jun 2022 04:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355243.582790; Fri, 24 Jun 2022 04:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ahX-0006pg-7n; Fri, 24 Jun 2022 04:13:47 +0000
Received: by outflank-mailman (input) for mailman id 355243;
 Fri, 24 Jun 2022 04:13:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YVaO=W7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4ahU-0006pX-Tp
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 04:13:45 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 083e04fd-f374-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 06:13:43 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D7ED1218EC;
 Fri, 24 Jun 2022 04:13:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9CB1F13480;
 Fri, 24 Jun 2022 04:13:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id kZpzJPU5tWKOAQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 24 Jun 2022 04:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 083e04fd-f374-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656044021; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=muCjj7lD4k3cRcTy4dnsz/q9/GO7CSHVPg1bC2m+yuA=;
	b=en9nTNeu4pWMeOlJyax05SkL8lv2p3CQTK5IsSd5+gHWJ0eC+f1BdzDLf01TVP68HwYZik
	8dnvJBzlZH0Gs6r4t5dGdXgo2QoivdAVCJlghIYeOderye/NP3UD5O3euqZsJ6m9Cnumi0
	a9+qwO3QZJKBy7BdU3atiEVBclcZ9og=
Message-ID: <258f4579-7c24-dd98-d4ce-1155b1da8759@suse.com>
Date: Fri, 24 Jun 2022 06:13:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v2 2/4] tools/xenstore: add documentation for new
 set/get-feature commands
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-3-jgross@suse.com>
 <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------03OzHPg0v0BcI5HTtZssUGRd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------03OzHPg0v0BcI5HTtZssUGRd
Content-Type: multipart/mixed; boundary="------------NKk6srx51QLwbjIiLyy7vb0W";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <258f4579-7c24-dd98-d4ce-1155b1da8759@suse.com>
Subject: Re: [PATCH v2 2/4] tools/xenstore: add documentation for new
 set/get-feature commands
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-3-jgross@suse.com>
 <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>
In-Reply-To: <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>

--------------NKk6srx51QLwbjIiLyy7vb0W
Content-Type: multipart/mixed; boundary="------------HmO4llLX2aHQhOoJ3jzTMQ4k"

--------------HmO4llLX2aHQhOoJ3jzTMQ4k
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 20:27, Julien Grall wrote:
> Hi Juergen,
> 
> On 27/05/2022 08:24, Juergen Gross wrote:
>> Add documentation for two new Xenstore wire commands SET_FEATURE and
>> GET_FEATURE used to set or query the Xenstore features visible in the
>> ring page of a given domain.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> Do we need support in the migration protocol for the features?
> 
> I would say yes. You want to make sure that the client can be migrated
> without losing features between two xenstored.

Okay, will add that in V3.

> 
>> V2:
>> - remove feature bit (Julien Grall)
>> - GET_FEATURE without domid will return Xenstore supported features
>>    (triggered by Julien Grall)
>> ---
>>   docs/misc/xenstore.txt | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
>> index a3d3da0a5b..00f6969202 100644
>> --- a/docs/misc/xenstore.txt
>> +++ b/docs/misc/xenstore.txt
>> @@ -331,6 +331,20 @@ SET_TARGET        <domid>|<tdomid>|
>>       xenstored prevents the use of SET_TARGET other than by dom0.
>> +GET_FEATURE        [<domid>|]        <value>|
>> +SET_FEATURE        <domid>|<value>|
>> +    Returns or sets the contents of the "feature" field located at
>> +    offset 2064 of the Xenstore ring page of the domain specified by
>> +    <domid>. <value> is a decimal number being a logical or of the
> 
> In the context of migration, I am still a bit concerned that the
> features are stored in the ring because the guest could overwrite it.
> 
> I would expect the migration code to check that the GET_FEATURE <domid>
> is a subset of GET_FEATURE on the targeted Xenstored. So it can easily
> prevent a guest from migrating.
> 
> So I think this should be a shadow copy that will be returned instead
> of the contents of the "feature" field.

Of course. The value in the ring is meant only for the guest. Xenstore
will have the authoritative value in its private memory.

> 
>> +    feature bits as defined in docs/misc/xenstore-ring.txt. Trying
>> +    to set a bit for a feature not being supported by the running
>> +    Xenstore will be denied. Providing no <domid> with the
>> +    GET_FEATURE command will return the features which are supported
>> +    by Xenstore.
> 
> Do we want to allow modifying the features when the guest is running?

I think we can't remove features, but adding would be fine. For
simplicity it might be better to just deny a modification while the
guest is running.


Juergen
--------------HmO4llLX2aHQhOoJ3jzTMQ4k
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------HmO4llLX2aHQhOoJ3jzTMQ4k--

--------------NKk6srx51QLwbjIiLyy7vb0W--

--------------03OzHPg0v0BcI5HTtZssUGRd
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK1OfUFAwAAAAAACgkQsN6d1ii/Ey8X
+gf/fu2SjZR+JN5YFm/NiKJZbzHDwff8yseYWCMe8EygN6rZJw4CFET5ZTZxB1f3QmdVx6ru4lm5
8uv0lXx0XoNC2HHlLwambcH/+p2h1rhOOwAeL90xpoe5NZpFneblGGE8Vq8DwzwVJIvc3/k/SGZh
NdjMVCMmWMUFUp/8sPBBKcK/w3TpQxqHGScshHR3ihMvByIOCWVWr710BxMZCZJMDKZCnXbw5wHu
pEpGm1LBdVeDNOnZwzUSzdnz9ne4VaG03vmnzioGKkOtwckNfEvamfZMn7pbTLXXJh2GMs27sScM
bfJRoV/zMLS09NXhZcrmfoLZt414TpnE5bL+fstxTw==
=hrKm
-----END PGP SIGNATURE-----

--------------03OzHPg0v0BcI5HTtZssUGRd--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 05:24:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 05:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355251.582801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4bnw-00077F-Lt; Fri, 24 Jun 2022 05:24:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355251.582801; Fri, 24 Jun 2022 05:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4bnw-000778-J8; Fri, 24 Jun 2022 05:24:28 +0000
Received: by outflank-mailman (input) for mailman id 355251;
 Fri, 24 Jun 2022 05:24:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YVaO=W7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4bnv-000772-OR
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 05:24:27 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e9d08d0d-f37d-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 07:24:26 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AA12F1F8BA;
 Fri, 24 Jun 2022 05:24:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 71BDB1348D;
 Fri, 24 Jun 2022 05:24:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id +vlbGolKtWJEGgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 24 Jun 2022 05:24:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9d08d0d-f37d-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656048265; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6TJRhLuJaWdbXlhI1KpE3+ulGYwdGJw1cpvWy/tWWG4=;
	b=CAJbtgYJef3+J+whW99oxubEHtOReOdz7Q3ahIo/J57Z21T9iq03gJPmTADMEMZnGBrYNe
	ExRdkQmSpedF+RXu4nfnvEcjYLm3HKjcI95/WPNlwtvRxEAqzg/bGfESmF084K4a4t261b
	YOO4UjULjEySav/SSHpp9RbBC9FaeQE=
Message-ID: <9fdf6101-7486-67bf-8aa8-c4d2c59991c9@suse.com>
Date: Fri, 24 Jun 2022 07:24:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v2 3/4] tools/xenstore: add documentation for new
 set/get-quota commands
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-4-jgross@suse.com>
 <a3eb8018-2e32-e451-7d97-885a5d4fd336@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <a3eb8018-2e32-e451-7d97-885a5d4fd336@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------lfjNIdFY2Y5z2aXx80Z5atUd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------lfjNIdFY2Y5z2aXx80Z5atUd
Content-Type: multipart/mixed; boundary="------------VOos6BvbRIwtgwx0HstYvEnw";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <9fdf6101-7486-67bf-8aa8-c4d2c59991c9@suse.com>
Subject: Re: [PATCH v2 3/4] tools/xenstore: add documentation for new
 set/get-quota commands
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-4-jgross@suse.com>
 <a3eb8018-2e32-e451-7d97-885a5d4fd336@xen.org>
In-Reply-To: <a3eb8018-2e32-e451-7d97-885a5d4fd336@xen.org>

--------------VOos6BvbRIwtgwx0HstYvEnw
Content-Type: multipart/mixed; boundary="------------02trPqep2ZWvQyYJWiFXGix0"

--------------02trPqep2ZWvQyYJWiFXGix0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 20:34, Julien Grall wrote:
> Hi Juergen,
> 
> On 27/05/2022 08:24, Juergen Gross wrote:
>> Add documentation for two new Xenstore wire commands SET_QUOTA and
>> GET_QUOTA used to set or query the global Xenstore quota or those of
>> a given domain.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> Note that it might be a good idea to add support to the Xenstore
>> migration protocol to transfer quota data (global and/or per domain).
> 
> I think this is needed because a user may have configured a domain with
> quotas above the default. After Live-Update, we would have a short
> window where the domain may not function properly.
> 
> I think it would be good to document the migration part in this patch.
> But if you want to do it separately then:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks.

I think I will add another patch documenting all of the needed migration
stream additions.


Juergen
--------------02trPqep2ZWvQyYJWiFXGix0
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------02trPqep2ZWvQyYJWiFXGix0--

--------------VOos6BvbRIwtgwx0HstYvEnw--

--------------lfjNIdFY2Y5z2aXx80Z5atUd
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK1SokFAwAAAAAACgkQsN6d1ii/Ey+z
Rwf/dMCzPSJtOfRCm/Q8VtJY+fJT9kXeUGr07KrrIzf3T4JF2aoCqBMkUTonVb8qTnSJCaq6dl9D
+OP291+6qNsqDkKldxGFZR+L/A8TQdMnLrimRFpgR6mEm1SUZ8usr0H9xsolKdlxrlhrSfqot85W
L8XBcmrSKWIl8iTTj0lWI0zM3fkZk1oeDE7ziYEPYwWVV5OeTtQsN7BgVwZMyuNHraHrUM+NH8ZM
TWQUVDOMT4O9B/iutpJRgaXYFEzrXzKl7LRfRMTsHh7gP1nZSQvGwjebm6EBxXlRUWoc+Xlsa6Ng
DmemGYebbXmj76TfyR9HjB+jOtH9BAxbCGGZ1YEhbQ==
=gIS/
-----END PGP SIGNATURE-----

--------------lfjNIdFY2Y5z2aXx80Z5atUd--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 05:32:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 05:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355257.582812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4bvL-0000LF-Fj; Fri, 24 Jun 2022 05:32:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355257.582812; Fri, 24 Jun 2022 05:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4bvL-0000L8-Cd; Fri, 24 Jun 2022 05:32:07 +0000
Received: by outflank-mailman (input) for mailman id 355257;
 Fri, 24 Jun 2022 05:32:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zubf=W7=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1o4bvJ-0000Kw-Hq
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 05:32:05 +0000
Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com
 [2607:f8b0:4864:20::42f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa64ed7b-f37e-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 07:32:04 +0200 (CEST)
Received: by mail-pf1-x42f.google.com with SMTP id t21so1643774pfq.1
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jun 2022 22:32:04 -0700 (PDT)
Received: from localhost ([122.172.201.58]) by smtp.gmail.com with ESMTPSA id
 v3-20020a170902e8c300b001616e19537esm763283plg.213.2022.06.23.22.32.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 23 Jun 2022 22:32:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa64ed7b-f37e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=S81W0gB6I+uxS9SB34gDgN1dG87Qew55lSLDRL6fcsk=;
        b=qwbyBADGvJGNSBS/IkB1gTZ6z2tCos26AnXhQmdB9IvZB06Xfgy3Zke432+yeFaoDT
         qcmv14C9wTbKNCNc0VRS1RPnopEtZ6UjKXvBIOa2aa3Wgf/ImNKetuzHYg8ca7c4pNop
         rrobLqkMo16mHb1LoM7BO+lTzRejbWXQq7BIt+fRCPyWRbBZon+KtYQASSriVuXZNU0P
         ejgYGdgUyUyR5ID3Yos+ZAiwkGu/WV+qNsGV4TfrFs0CCz85T/t37DMS6Mi1/VTqTay/
         2+DasTQN1ndvnH7pEHiF54K/ITBany/SONazbr9DqyXNNWXtg3CL7aI2ocH35AZMj9Kk
         uosw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=S81W0gB6I+uxS9SB34gDgN1dG87Qew55lSLDRL6fcsk=;
        b=sZ++yfllCTekx9yiwrrPtVcSlm42EABYwe7DSIgoZoJwESXELKGYNMygGbqS8WiJGT
         kENX9BJrdFUIdXXwC3zvZeS8+jEDIAgeetIba6EetM7T/sF77Z5VVCtORxkSWI5FPnf6
         Ws+LfrgI0CJmaR743reEd5hOesH69QsRLbgoFRnAmOjbAE/orvQgAdHMzdazU4pOfGKP
         mPLVvYK2X+5j9nsSWQrOfAqo2dso8XxTozvMEeUjYjj3Y0JB46bbqkCjpHT/b8jsarfL
         j7EhM/+jou6RS61ke8T4nKJtEWVGRMjZ2Yn3c5tTdSb87Pk7PRnMizvy4ul88/X7TSdR
         hQow==
X-Gm-Message-State: AJIora+SIIKj2ptuHRTkleAFBFkzf6F0Qy1+/lsf5lDlJ1SltHdeFCt5
	Bk4NDUASGw+EeZohiiJMj+elCg==
X-Google-Smtp-Source: AGRyM1uO6xR8FEXNanwcQhwkSsk8h4le+5+n8pD+5b1U2JH8SHLk/lYzY6wYgtkdjvH5/n2HxjHb2w==
X-Received: by 2002:a65:53c8:0:b0:40d:77fc:5f05 with SMTP id z8-20020a6553c8000000b0040d77fc5f05mr2722358pgr.263.1656048722678;
        Thu, 23 Jun 2022 22:32:02 -0700 (PDT)
Date: Fri, 24 Jun 2022 11:02:00 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Stratos Mailing List <stratos-dev@op-lists.linaro.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Mike Holmes <mike.holmes@linaro.org>, Wei Liu <wl@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: Virtio on Xen with Rust
Message-ID: <20220624053200.iwvwvesnu7o52tp7@vireshk-i7>
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7>
 <CAPD2p-ks4ZxWB8YT0pmX1sF_Mu2H+n_SyvdzE8LwVP_k_+Biog@mail.gmail.com>
 <20220622114950.lpidph5ugvozhbu5@vireshk-i7>
 <CAPD2p-kFeC8FygFcbpEbH3CzrAM7Td+G68t9ebOFR4V0w1dpEQ@mail.gmail.com>
 <20220623054819.do25phfuumnexw73@vireshk-i7>
 <CAPD2p-=OMDMqdV27E2jTTcE0gx1eiT+9TqLeOVH2u6YfwT_8pg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAPD2p-=OMDMqdV27E2jTTcE0gx1eiT+9TqLeOVH2u6YfwT_8pg@mail.gmail.com>

On 23-06-22, 15:47, Oleksandr Tyshchenko wrote:
> Below is my understanding, which might be wrong.
> 
> I am not sure about x86, where there are some subtleties with the guest
> modes (for example, PV guests should always use grants for virtio), but
> on Arm (where the guest type is HVM):
> 1. If you run backend(s) in dom0, which is trusted by default, you don't
> necessarily need to use grants for virtio, so you will be able to map
> what you need in advance using foreign mappings.
> 2. If you run backend(s) in another domain *which you trust* and you
> don't want to use grants for virtio, I think you will also be able to
> map in advance using foreign mappings, but for that you will need a
> security policy allowing your backend's domain to map arbitrary guest
> pages.
> 3. If you run backend(s) in a non-trusted domain, you will have to use
> grants for virtio, so there is no way to map in advance; you can only
> map at runtime what was previously granted by the guest, and unmap it
> right after use.
> 
> There is another method to restrict the backend without modifying the
> guest, which is CONFIG_DMA_RESTRICTED_POOL in Linux, but it involves a
> memcpy in the guest and requires some toolstack support to make it
> work. I wouldn't suggest it, as using grants for virtio is better (and
> already upstream).

Yeah, above looks okay.

> Regarding your previous attempt to map 512MB by using grants, what I
> understand from the error message is that Xen complains that the passed
> grant ref is bigger than the current number of grant table entries.
> Now I am wondering where these 0x40000 - 0x5ffff grant refs (which the
> backend tries to map in a single call) come from: were they really
> granted by the guest and passed to the backend in a single request?

I just tried to map everything in one go, as if mapping in advance.
Yeah, the whole idea is faulty :)

The guest never agreed to it.

> If the answer is yes, then what does gnttab_usage_print_all() say (key
> 'g' in the Xen console)? I expect there should be a lot of Xen messages
> like "common/grant_table.c:1882:d2v3 Expanding d2 grant table from 28
> to 29 frames." Do you see them?

I am not sure if there were other messages, but anyway this doesn't
bother me now as the whole thing was wrong to begin with. :)

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 06:40:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 06:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355263.582823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4czJ-00082N-EQ; Fri, 24 Jun 2022 06:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355263.582823; Fri, 24 Jun 2022 06:40:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4czJ-00082G-AC; Fri, 24 Jun 2022 06:40:17 +0000
Received: by outflank-mailman (input) for mailman id 355263;
 Fri, 24 Jun 2022 06:40:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4czH-000823-Rz; Fri, 24 Jun 2022 06:40:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4czH-00079N-Nx; Fri, 24 Jun 2022 06:40:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4czH-0007lN-C6; Fri, 24 Jun 2022 06:40:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4czH-00050T-Bg; Fri, 24 Jun 2022 06:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lBVCMqlwU8r5YpIEwD8ANpVyFJAIVsOupY+fuKI3h28=; b=DV3KlIG9EctBvIlINzMYaOJ1ps
	MTuqszo9UIkkG41FaR762wQkNjZx+w4eR3TGv0cFMvvwCqkEzufMlBlXl2kqaS2YURGTsuDCORpfm
	VFlXUW+8bSzQNSqLufcoIDbMFm/L9xnq5IXe/dtY6shZmop6BVtfSrv+p3dTcW6s+t94=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171334-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171334: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=db3382dd4f468c763512d6bf91c96773395058fb
X-Osstest-Versions-That:
    xen=65f684b728f779e170335e9e0cbbf82f7e1c7e5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 06:40:15 +0000

flight 171334 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171334/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171327
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171327
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171327
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171327
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171327
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171327
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171327
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171327
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171327
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171327
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171327
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171327
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  db3382dd4f468c763512d6bf91c96773395058fb
baseline version:
 xen                  65f684b728f779e170335e9e0cbbf82f7e1c7e5b

Last test of basis   171327  2022-06-23 10:35:21 Z    0 days
Testing same since   171334  2022-06-23 19:07:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   65f684b728..db3382dd4f  db3382dd4f468c763512d6bf91c96773395058fb -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 06:46:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 06:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355277.582849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4d52-0000dS-GE; Fri, 24 Jun 2022 06:46:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355277.582849; Fri, 24 Jun 2022 06:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4d52-0000dG-CC; Fri, 24 Jun 2022 06:46:12 +0000
Received: by outflank-mailman (input) for mailman id 355277;
 Fri, 24 Jun 2022 06:46:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tt/v=W7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4d50-0000cJ-F5
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 06:46:10 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2052.outbound.protection.outlook.com [40.107.22.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 517f1938-f389-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 08:46:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9638.eurprd04.prod.outlook.com (2603:10a6:102:273::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Fri, 24 Jun
 2022 06:46:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 06:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 517f1938-f389-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V8Mj9VMISWfAtQoqGfW002w7ULaSqIRbGkTCtLSDHbftVNWShqEHCniDEGpkr5TOApYegtttguErv9I24ZjhJkcSHXnu9KOPCVN9Ep1lubmbQKdyTRXYYiGlO7oagipMeYunUsKpaqLbafaRJnFnlxzqvs/Io+mluYiQf77xDCDwviAnc0EPH57+ed9q6N4TrYdjQspG4H35Sn8Q6NgWRMabqftDETAALtAlLBuBX4FyfzoJaS7Qwdhg9QVRRQQZeknS5Kx4y5mZLFMAKYbE36TnXhWTcx6GFSToAHZQ5O7xDwapB28txavVKKjhhHLHbJwrxE6ZWZBZ5sPFBMfGSw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eQDrSFZylcM5YAFa8zNM1v4ZA0XXyJhh/B28gevNPj0=;
 b=TDMCy9Xl7ie8wP083yBTl32G9ZAoInqqpY1ngdoq7oIHl0gP/Huca91kk+B4q2pKsHokrv2M3pVQw07T60DM7dou1QIyq4Oe5WM1DC8+ZGiG9Yiw4nhuthGrcBgRpCKAxFw7BTJldE3dZrOxpk6Rz19eWhG+tndQlY9qiifPXOAQAZHSwn+gegjAJduLPNgX9RpkG7dCA0QJr5nM7e9qb0mwbn0ixClq8H/gLnptLUU8zpX1O4ZG665njF1eucTGOQEd8j8JSUSw4WYoWa2j3vgVuNKBGq3v/reTSuOPipuG/9xJYBdxHVoiBHvbwR4BSw4XVYV3R7BeH1gcV9XOcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQDrSFZylcM5YAFa8zNM1v4ZA0XXyJhh/B28gevNPj0=;
 b=5oTSDSz+0G22X6Eu24Ryh/pxtpS39IeWyHH+RIELYiA0hJ6UijtaazpUo+AlUXiqAvrbBisLT9dJnkISdFAWFQCgAHAe9S7UqQNeFucOKI+oB8VReh0IKm+g1JXXe8l9xxX/oJ1VDC3KKifmf/0geOxRGoQ6NJBJimeHB4ri0cwswswBfExCPowm1wdNzxuf+EbsSStRSO1/nngQ99L7PwnVHCp1HMco6NpYfxg/+LtuAY9YZbgP7TnhCxUn8kKt1k0henmRpLguHEMK3LYGEJCsZrksQ6g1VRcR8/PGeN76Odh3qQK6C6jDLvVL+XtVGKsygyzfmS31MwZWyS3VtQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <15da3838-aa8e-57c3-b9e2-6c0a4a639fb0@suse.com>
Date: Fri, 24 Jun 2022 08:45:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on
 Arm
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
 <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR07CA0030.eurprd07.prod.outlook.com
 (2603:10a6:20b:46c::35) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e7390f00-f8af-4eaa-0bdf-08da55ad344e
X-MS-TrafficTypeDiagnostic: PA4PR04MB9638:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GLj38xEMik1KfXZYa9oPT/hT9yQeKkh0QQBXlHO4TsHvgVh21cPMZ9h8QuPeUbpJ1bDN17OqRi9tqw5KZk/AZS2qwnFBEd/UeJlcO4xOJ1X9GWzQVvsZ6KVWEZeUtynrgeUOl14pLg1NWxjGiCSEII+AS7ZSv/eCZW1wvxoN2/1VACpVVz5o4cdkhEmqOMS3szOEYuMwfT2qThE0LB1FqtkiOYlaRbQ3xaUcvZwjt4bm55TIuXfqIIzfwAb0EeWff19MQ7+CCh4G+8GpaTVu9SINy5djaj6R7iwYSEOiSzZ2QBz0S74DQUbdcK0nN632zNas2FOa0n3GeUoZgCROhVau0O1s2zvRL4y/dff9GetDhDlH0gDzftKsW+4iceoaILmgiRtH+KUYgPg+xS2vCp4yUiBWfLmDBCweTNf21+bqZvYn/GtYoY6enQlEzlD+oNHpMvUTNx3B4yJarFTNlUDepoyYK8xmVk/FRj+WgpNhosng9c7zAwJsifuVxU+W+RPtUPj/m+T9kTBRUD5HJmJBj9lTOb5EcelH4hCX35lpt3Tv3+ZVjoj3n3fSqG4DpzlmBGz8cIlPugSF42UChIa6v5KFdU63O5yE2J27XlCVd4QgFa6DTugiMIm6ZxVhYRNs3kwQpbfYFT6/mYvrLNtUMCnNNNksUZu6/TJCNKEbg1GVX6F26h5YfDYrUEozISXVhE5DpFrgqIYGeNQyLmhXrO14N9/lknz9/LCiSWQ075RgSrtEksOZ8n0h1FhfMz991OrD11NCx3JPJaXm13sTE26G8PKxYLnjcxMExn1Q9fJT5QlQR8BLoxAvfO7BdsSI/3Bx4TVaf3ZU94VCGg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(136003)(346002)(396003)(366004)(376002)(39860400002)(6512007)(186003)(41300700001)(2616005)(53546011)(26005)(6486002)(478600001)(6506007)(2906002)(38100700002)(5660300002)(8936002)(31696002)(6666004)(86362001)(66556008)(66946007)(36756003)(316002)(54906003)(8676002)(31686004)(7416002)(83380400001)(110136005)(66476007)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RERRSUdkaDRaMTlOeGR4cS9XQ3I3SnBmQTlvRzJEVDFQVktWZlpZSWltbmlo?=
 =?utf-8?B?NWRyN1NEeFhDT1A3dFdUY1BKWVBwWHB4RkU1UlJSMFI1cTJoUWhDT2l6c3Va?=
 =?utf-8?B?Vk5CSE1mbnFLVTNzYy8wUkdjenhkSEk4Nzlpdm0zM0RRUFhBUWxiVlJOTTlG?=
 =?utf-8?B?NWMrd0dxNlpoZEN5Q2FZaWxJTFl6K2R2ekU2c0U1Wi8zQXZuVjVjNE9NcCtQ?=
 =?utf-8?B?aitLVHlyRG4xNzJ1ZjBnZEQzOTZ0NTM4cXdYa21mU3craC9WYUU2QjkzVFVm?=
 =?utf-8?B?NTE3YXI2d2ZraHdZRzdEWld4NDJsNE42aDErUTQyOVJTUW9ZRDYzaFJxbmFL?=
 =?utf-8?B?Nk9jN3dXdXJWUnJ4UStYeWdONnJOR1FrUVhGaEtheXdBVncveHpQejN6ZFht?=
 =?utf-8?B?Wklva3Z5ekJVWjNjdjQ3VDV4YVNldTNSZ2pFVG1CVW9HZFVPZmp1YVgwY0VY?=
 =?utf-8?B?ZUdNcGU0VW1aNG5Pc01lVlNpazRKSk1SelNKTHowcW1IUG5ZdGNDbkUxM1Bh?=
 =?utf-8?B?YjYzQXdvMGoxTWhWTUhCMVozWUJQa3NLeW83bTc0K0ViTjgyVkJCdkxGVzk5?=
 =?utf-8?B?Z3pDbUl6dDFMelVKWG83UEE4c3BjMlBYQWFJdjhXNGFvUHJIajYzbnhHaVAx?=
 =?utf-8?B?dEpwUWM0SHNsZWI2STcxTGM2bTlvcGFaVmVrYkxvbjJ1aXYwZjBITXpiZEt2?=
 =?utf-8?B?dUhVZWZnRWJKSXUraFRSN1NiV0ZPbm1JeTkraTVUMkV6Q3JTd0htOTdVNzdm?=
 =?utf-8?B?L2VtMll4SnNLOThxNWtxVVFIMHY5c2VXSlhJeVRWRUVXOUFoUnA5M3JHTUJy?=
 =?utf-8?B?VG9MNnlQMXFpdjduSjhMS3NITjRtZ0wwVEtSOU96c0VnS1BNOFk1dEIrN3Ft?=
 =?utf-8?B?Qy9mdGszeW5EN1JlTCszUFdGeFNVT044TldKYjRZbmlQLzBYQUcyNG1WYnEx?=
 =?utf-8?B?S0tGck5xSU1SbWpTajVFSUNzb0pWWEQvK2dxU1dCZUxYN0gyR2tDV0FTNFhx?=
 =?utf-8?B?WXBTWUNISHBUeU5TTFJtUzdqQlBMOFN2N1BiN2k1ZXV0cUxXT0VoL0VUODk4?=
 =?utf-8?B?eUdIOGxYWGk0cVdiZnNaeXJlQjdablJvRlpUL1JzL05uNjBnTmxCTyttTGV0?=
 =?utf-8?B?OFhtaGNOdXZrc1ZWOU1McjMySytvS2srdDdpWmtrM1laLzBRWG5tZk5UY282?=
 =?utf-8?B?dGwrQ01tdTZkOEdwZnd0VFkwektyU0lGWlM5cjdNWWszelMyeXZpTGI2cGt3?=
 =?utf-8?B?QTJmNnF4RVN5Q2NpM0JXOWZqNU9FS1FCd2YrVmtPYWZEK2NYSVFOTnRMUEZZ?=
 =?utf-8?B?dmViM1dxVU1kNEJrNEhaRUE4K0UvRnRlWlZBYzZDOGovTStUWlJyOW55Qkgz?=
 =?utf-8?B?b0REK1h4N0xtZmh0aUw0NWp1Tkt6Undzc0RDMWx0ZWFjVCsxUHVIb0NOaW5C?=
 =?utf-8?B?Nk1BWWhKdEVIMmx2d3A1K2M4OWxudjRGK1ZZQmdWdGRNVzNrbWp5SjN2SWVv?=
 =?utf-8?B?eFNtYTJEd2pCNlZTMGhKY3V0dGtNZUxSNkFmL3ZPdG9GVTVtZ2ZiUy93dHNq?=
 =?utf-8?B?bFBXaVpKMFVpWmUwTC9wbU5NckJpRGU4UDZBc09aSllacllXWGdmdE05aVh5?=
 =?utf-8?B?a2QzeXFxQWl5c0JoL0NpWVFtOTkrT042bUIyYVdLTGI5b1EwZGUxRW5iOXQx?=
 =?utf-8?B?eGpiSHBFYjR2Zkk4TEdaOXNVdGpUQmE5TTk5ZnhMYy9xMXBxZ3RWbkRqUGRH?=
 =?utf-8?B?eWR4MjNkd0VOb2Z5U2lmV1Q5OWtHK1F5ZGlIeURQWngrSHBLMndrOUpjcm1Z?=
 =?utf-8?B?RFMxbnlISFNzVG52Tm1DdDF6dCs3NURRUjZNcDQvRUdmNUloMGFHOUtvYUdQ?=
 =?utf-8?B?cnY1bmFuc0xRd3B3WlBLNTZIWUlLVExxSmwya3BHTzlKUis2d3dmdkVFUGxF?=
 =?utf-8?B?Mk5WMCszM0RMdnpUM3hVUmt3c2ZwZm45Vi93OUNERS9KQnFlVVhvWjd2OWZt?=
 =?utf-8?B?ZFRVZktBVEFmV0dSRThSSThOZ0NDbytOTmxDbXYxaytIdFBQQ1pzRGg4Y0RF?=
 =?utf-8?B?YWxnUTRwUHJVWkRYWWowcS9wVzY0NWJZenBiVzN0ZXZRRHpnd1QwZkJUc0Qz?=
 =?utf-8?Q?Eoj3cmREjED0riPZI1hsFDbfv?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e7390f00-f8af-4eaa-0bdf-08da55ad344e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 06:46:02.4235
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aMfipBHIvtT9HmfC/sOjPPJMNZ/U0cZPpd8pzFOfm0xDW426kkmLiEEZefXKPzPZze8pCTYF2zl4PIpvofWiPA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9638

On 23.06.2022 19:50, Julien Grall wrote:
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>       }
>>   
>>       /* Map at new location. */
>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> 
>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> 
> I would expand the comment above to explain why you need a different 
> path for xenheap mapped as RAM. AFAICT, this is because we need to call 
> page_set_xenheap_gfn().
> 
>> +    else
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        p2m_write_lock(p2m);
>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
> 
> Sorry to only notice it now. This check will also change the behavior 
> for XENMAPSPACE_shared_info. Now, we are only allowed to map the shared 
> info once.
> 
> I believe this is fine because AFAICT x86 already prevents it. But this 
> is probably something that ought to be explained in the already long 
> commit message.

If by "prevent" you mean x86 unmaps the page from its earlier GFN, then
yes. But this means that Arm would better follow that model instead of
returning -EBUSY in this case. Just think of kexec-ing or a boot loader
wanting to map shared info or grant table: There wouldn't necessarily
be an explicit unmap.
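
[Editor's note] The behavioural difference under discussion can be modelled in miniature: returning -EBUSY on a second map request breaks the kexec/bootloader scenario Jan describes, while the x86-style "replace" semantics keep working. All identifiers below are stand-ins for illustration, not actual Xen code:

```c
#include <assert.h>

/* Toy model of the per-page record of where a xenheap page (e.g. shared
 * info) is currently mapped, plus the two policies discussed above.
 * All names here are hypothetical, not real Xen identifiers. */

#define MODEL_INVALID_GFN (~0UL)
#define MODEL_EBUSY       16

static unsigned long model_xenheap_gfn = MODEL_INVALID_GFN;

/* Policy as posted for Arm: only one mapping at a time is allowed. */
static int model_map_refuse(unsigned long gfn)
{
    if (model_xenheap_gfn != MODEL_INVALID_GFN)
        return -MODEL_EBUSY;
    model_xenheap_gfn = gfn;
    return 0;
}

/* x86-style policy Jan suggests following: a re-map implicitly unmaps
 * the page from its earlier GFN and records the new one. */
static int model_map_replace(unsigned long gfn)
{
    model_xenheap_gfn = gfn;  /* old GFN, if any, is dropped */
    return 0;
}

static void model_reset(void)
{
    model_xenheap_gfn = MODEL_INVALID_GFN;
}
```

Under the "refuse" policy, a kexec'd kernel that re-maps shared info without an explicit prior unmap gets -EBUSY; under the "replace" policy the second map simply moves the page.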

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 07:19:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 07:19:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355287.582861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4day-0004Ki-8F; Fri, 24 Jun 2022 07:19:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355287.582861; Fri, 24 Jun 2022 07:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4day-0004Kb-4o; Fri, 24 Jun 2022 07:19:12 +0000
Received: by outflank-mailman (input) for mailman id 355287;
 Fri, 24 Jun 2022 07:19:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SV4E=W7=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o4dav-0004KV-QN
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 07:19:10 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50069.outbound.protection.outlook.com [40.107.5.69])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eebc864b-f38d-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 09:19:07 +0200 (CEST)
Received: from AS8PR04CA0067.eurprd04.prod.outlook.com (2603:10a6:20b:313::12)
 by PR3PR08MB5756.eurprd08.prod.outlook.com (2603:10a6:102:90::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 07:19:01 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:313:cafe::5c) by AS8PR04CA0067.outlook.office365.com
 (2603:10a6:20b:313::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17 via Frontend
 Transport; Fri, 24 Jun 2022 07:19:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 07:19:00 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 24 Jun 2022 07:18:59 +0000
Received: from 70c99ec0491b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1A7A5D83-3CE8-4A4F-9001-0AF40FB6B59E.1; 
 Fri, 24 Jun 2022 07:18:50 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 70c99ec0491b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 07:18:50 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM9PR08MB6196.eurprd08.prod.outlook.com (2603:10a6:20b:283::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Fri, 24 Jun
 2022 07:18:47 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7%5]) with mapi id 15.20.5373.016; Fri, 24 Jun 2022
 07:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eebc864b-f38d-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=EJKkbG9e6olcuomH8M4sCiF1Klw5ap9LexGYbfiUYiDbGpSgkqSPQ1HcHj3z54Tbam79PgzaaO4vIoo6ozjJrAGC3LY+GUgEpJ33OoQgggiQKug5n+pw9t596uMYtyOWXgG/H2h3HthPA6iTzUONzWUnsTETmGnOP7t2fXghQ4hGRuue66qPBnEBbIoTHStnufVwUE5lYgrEqRbl3zzv7wrnW8gPVJiWxkK8x1rxWV/p9RKZsd1vZcT5wYUDfsRumwUQNJbEVsIDiqf4LHuQ+nwz06pV01tNzhC9gfVMg+ks3Gtu58Rw8W1fdFQxQ2IMKyqA7omCZxiW2lOz3VnaiA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VavmEi37OZUroXHyeVtCogpRzg00xa26FE5rlldmerY=;
 b=QRIOSneN2jbUBMLsfeJ2ncyb69SSpZtKmLgKD+/SCtGQW7gudxIRt8JLQAUMyGpAOketbeKz3b6hJ145QVkYulwi5lKNymvXV7f74d2Wen6xhvycw/wP8RRP98liq84N6sz7Xcgqx0I7Y8HM4V8muHfyU+LhWUTwZa6TY4koeAe2lvKZQdes1FLMTMusvBinJq5EVJemiy6FQsv5Dy6PWN+Q9urB863AVL2yqSD1ZSqmnamGARH+MXMsl1C7lSQIGVjNjKSjF0NXRrs8S/yoR/bp3dzSQB99s0DYhTHcnx2SCqJk7CcXfhyzjpgn8XCvW9gr9Ui0RhSwA+LWBe7VvA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VavmEi37OZUroXHyeVtCogpRzg00xa26FE5rlldmerY=;
 b=lUVKlXCJwOjxhNaH3pe7f4tUVmXXAgXsa3zd+c0OMbZhhM/PPPPsErOsNxApxGCNBOlzPXBbmnkqtSAhWKxTTJhKxIxSJyX9tHg6gpLsFoOc7USipx3bcz86nEKa2UltHNA+/7IrbscgVOv19/ZDK9B9eJztqIwbuCmiTymyFIw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IXlx62mNybQkSuKf/W/wse2tYqlQdkoAESv8kkox4Hn7F4lHPveUYTOtAKxh2qO+LPTr4x9KoOpuHmDc1W5xQ66f5Q/c7QbcBQMDNId+Qsuynf6tuWz/LtlQPceM+90iP/FlB8jgGTVM1PDJ7EjzbEQGArzBp/vx+1mg9fb8CA7trqIM8IMPn6azqljMnyeOhmVsmVcveQu/3k0CYwwoulO42F1HylCL78Vu2A5ZP6qzJBUy/S5eCyn+OWzLfx3s+DbuovD79l0zJulhKEcis/MaHpojdf3GPUskeh/n+x5TSCzmGQqX8PWEpXpQyq2u4IhZ5kZPmpJTO+z497g64w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VavmEi37OZUroXHyeVtCogpRzg00xa26FE5rlldmerY=;
 b=jub4dJrmZYKD/nyrVOuJehVhdRkDiEMTpD32+y2KW69p1GP3tPP6SuorljbdFvynO29Pv4g82k6E+SUINHuv1v3TEnhG2O/fP2HLNYxwkicFzHywAUDjffMj7Y9iGVfiNEYhA0wSH1kjAwklATnEOgXPsqABs00ZLohXzTXssY0K8JOaKtzVSKBXCpK5u6yXIGSwvZ0M7DWJe/weVC5/gKO1eX45z7SdxZ5enxvjrO6e5gC878iyMjlq/CUKSqKxocMjA3gfS73z8o/L0jDenxc7ZgpfgxjL5QP5n8BV0Edsm/l0itAyuLVygT3qJ39dORql5et1pB1NSg1LPESjNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VavmEi37OZUroXHyeVtCogpRzg00xa26FE5rlldmerY=;
 b=lUVKlXCJwOjxhNaH3pe7f4tUVmXXAgXsa3zd+c0OMbZhhM/PPPPsErOsNxApxGCNBOlzPXBbmnkqtSAhWKxTTJhKxIxSJyX9tHg6gpLsFoOc7USipx3bcz86nEKa2UltHNA+/7IrbscgVOv19/ZDK9B9eJztqIwbuCmiTymyFIw=
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jiamei Xie
	<Jiamei.Xie@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Topic: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Index: AQHYfI531SRNO5A+8Ei3iOFoJrK02K1dB3iAgAE0OjA=
Date: Fri, 24 Jun 2022 07:18:47 +0000
Message-ID:
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
In-Reply-To: <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1C3B082F10D5EE41B9BCC9E920E906A1.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 52b0e99b-e664-4205-18e6-08da55b1cf40
x-ms-traffictypediagnostic:
	AM9PR08MB6196:EE_|AM5EUR03FT032:EE_|PR3PR08MB5756:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IEHfbsGEAqsycCMPy1tUBGD83ycZFPh9g+ic8RoTMuMxp4YkrcFk56GuVQ4kQc2cpJLIGuEi7cDWUTe3j3yRP1Kqv/OnAcwaSQNBkqoxN6BonjO/riHP+jQ4hjvBWt7mtx2uMzvskeChqizjFv1bRzkjmHRNG93e2sH06DmHovwuZKzB9LIQtLfNpAaSFp+q1wEYkJ9z3uuwPNXiGiigbHciq6wr8MOdOs2ofKR0zJjKuuahzkKVEReFqca2+V9bE+KVgRepJAPFj6kIxMJS8802fNBGbYEansNZG+KCTMjmgDck7o1NE7xGbS4XkBA8R1Wj0nYOhgJEHOAzq3uVRpSI9LX6mJ9uaJ/1ax5HLy91RksGjOy3d2GuCe+zteFY1sW1pJW5ImxdS6WG0iu4pmpjt/B+HfFHLXRGuh2EzOPz8z4bvmTFkwPRKx5EsA+hIYvjW0D83ht08TDe/Dg+d/neuxmTkwDu6c0jehqo9WpKrD0BcbhtubyGclHi7ec6LIGZ1ND5LHN6vynbDiiXkvEN0zlr7YwUANB222PVWfkYzdmRXkE1VybFM8YjC/FDMRm4T3s2g1SiXcHs2/GG7B/M7AixWEh3O3TMUr0VKeUcfMlYtEvNYinv5YnCAsX+woi9LoADCqceGEmWGjNRXTymMbxhfrY5eJfx/ooKuSP9RsQLefKL2X8IG+QOCfnfll8q2MX0hUQlIaRbmm5iFSnrZW0vuAE4Ob4dOy0SmXSjjhhdJnQuowiHKvcwt7e2wtyNXjgx2VmheBO9pwNeudJV5TxKonrA6KlI+BKnbqU=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(366004)(396003)(376002)(39860400002)(346002)(6916009)(8936002)(38100700002)(2906002)(316002)(122000001)(54906003)(55016003)(86362001)(8676002)(66476007)(6506007)(66446008)(66946007)(66556008)(64756008)(7696005)(83380400001)(53546011)(9686003)(76116006)(4326008)(41300700001)(38070700005)(26005)(186003)(52536014)(71200400001)(5660300002)(478600001)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6196
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f1fa8474-2d88-49e4-30a4-08da55b1c787
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NImAuAI2/HoT9X7rCqjmquEBg0JH8ukHy8p7Ke3dFWgtj3G91kMv4KQNVVzEPzRWACFNERIv6SY/iHGk/SQ451WH7WXqiuQoQmPAVDCKF5XPivYU/etFt0nCkfwUSMTkJjb5NZoLh2LvDifaqhhm65nw4ZWKni3qqb+5U5BL8ady++gprvqk0WBEJo93v0I3iHKcP9092eUrQhm2/ore5RBCL/qtpVFELIOU/bIDegqnU7lJqE99CgJcMtsFCD39n2Woj2c3Shw1cERZLmI7f1F4vkKgBnYgKNqpaJ92fkC/8ryH+kMDqDEk3dgPNcdx/mavEHk3FtxeBQEvYXm9iRCM00C4VjkQELoUGi7E3ANFPFgHzkOHryT+gIc/WsDsA+nC2pa0nhOEK6mioS0xMS0obJeHlb/bhc/4mjhvzE0Pz0VOOgOEMyTKJlrUesG0uGpobWc40qPFx4xMim2yWOtdaLT0qiEWXUHAS83KeUua8aB7upGfjVTKrJAfePF7Jgi6W4FQ9TxsFBK58oZjrBqWutgAM1o6OSw2IK4xeEkrgBaae85lLGzb5wvPnJyfv9ufhoVNDcKmUaH5lujoc6xWcSVA7VUA8mfEPHJ2+Gs1rCKjNpvKQQOv1xWQsEN9CSVnqUz6I8xqPpQ9OxHJ+tt16PRfxReBfQh+19mdxM9Eq6kyRKlNvKqUIEik1wqCpIqSCTdDwqRB7N41vxjt3WlgOfCNb/44yl6+YnwJF0IoBUzYnbXzGJhE/FgH614uywirG0K/Sp4Q4tpdDXiwy21ZBiYd9Sfobi/01yFZh/E=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(39860400002)(346002)(376002)(40470700004)(36840700001)(46966006)(83380400001)(47076005)(336012)(41300700001)(186003)(81166007)(82740400003)(356005)(36860700001)(5660300002)(6862004)(8936002)(4326008)(8676002)(52536014)(82310400005)(55016003)(40480700001)(2906002)(478600001)(26005)(9686003)(53546011)(6506007)(7696005)(316002)(40460700003)(70206006)(70586007)(54906003)(86362001)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 07:19:00.0378
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 52b0e99b-e664-4205-18e6-08da55b1cf40
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5756

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 23 June 2022 20:54
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
> 
> On 10.06.2022 07:53, Wei Chen wrote:
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -1,6 +1,5 @@
> >  obj-$(CONFIG_ARM_32) += arm32/
> >  obj-$(CONFIG_ARM_64) += arm64/
> > -obj-$(CONFIG_ARM_64) += efi/
> >  obj-$(CONFIG_ACPI) += acpi/
> >  obj-$(CONFIG_HAS_PCI) += pci/
> >  ifneq ($(CONFIG_NO_PLAT),y)
> > @@ -20,6 +19,7 @@ obj-y += domain.o
> >  obj-y += domain_build.init.o
> >  obj-y += domctl.o
> >  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> > +obj-y += efi/
> >  obj-y += gic.o
> >  obj-y += gic-v2.o
> >  obj-$(CONFIG_GICV3) += gic-v3.o
> > --- a/xen/arch/arm/efi/Makefile
> > +++ b/xen/arch/arm/efi/Makefile
> > @@ -1,4 +1,12 @@
> >  include $(srctree)/common/efi/efi-common.mk
> >
> > +ifeq ($(CONFIG_ARM_EFI),y)
> >  obj-y += $(EFIOBJ-y)
> >  obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
> > +else
> > +# Add stub.o to EFIOBJ-y to re-use the clean-files in
> > +# efi-common.mk. Otherwise the link of stub.c in arm/efi
> > +# will not be cleaned in "make clean".
> > +EFIOBJ-y += stub.o
> > +obj-y += stub.o
> > +endif
> 
> This has caused
> 
> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
> to use 4-byte wchar_t; use of wchar_t values across objects may fail
> 
> for the 32-bit Arm build that I keep doing every once in a while, with
> (if it matters) GNU ld 2.38. I guess you will want to consider building
> all of Xen with -fshort-wchar, or to avoid building stub.c with that
> option.
> 

Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
if Arm maintainers agree.

Cheers,
Wei Chen

> Jan
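
[Editor's note] The two remedies Jan raises for the wchar_t size mismatch could be sketched roughly as follows, using Linux-Kbuild-style syntax for illustration (Xen's build infrastructure at the time may spell these differently; the variable names are assumptions, not verified against the Xen tree):

```make
# Option 1: build all of Xen with -fshort-wchar, so every object agrees
# with the EFI stub code on a 2-byte wchar_t.
CFLAGS += -fshort-wchar

# Option 2: keep the default 4-byte wchar_t everywhere and strip the flag
# only for stub.o, so it matches the rest of the 32-bit Arm build.
CFLAGS_REMOVE_stub.o := -fshort-wchar
```

Either way the point is the same: all objects linked into one image must agree on wchar_t's size, or GNU ld emits the warning quoted above.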


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 08:03:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 08:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355297.582871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4eH6-0002wo-3E; Fri, 24 Jun 2022 08:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355297.582871; Fri, 24 Jun 2022 08:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4eH6-0002wh-0M; Fri, 24 Jun 2022 08:02:44 +0000
Received: by outflank-mailman (input) for mailman id 355297;
 Fri, 24 Jun 2022 08:02:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4eH4-0002wX-NK; Fri, 24 Jun 2022 08:02:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4eH4-0000i8-Jh; Fri, 24 Jun 2022 08:02:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4eH4-0002mA-3o; Fri, 24 Jun 2022 08:02:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4eH4-0000GD-3N; Fri, 24 Jun 2022 08:02:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PmCCIVtjwfheI+/ALh9AVPd5PgnS+7v2Sx7E8bqiiDI=; b=xacg0K0DsXXwp1a1skjr+PdPxD
	6BvgPNGOVutKXLlAFHRlze1x7f3Sr3Uox52oJNWwQajowJdwii4aTgqF3zE1WqqhoextCyFPaZMR7
	aq8f8QNuNPpjss79TGZ1ga1FZVvw0TBJoieUKWHqu2zuAeK+j7aFxmhU5qIYXzFek4Rw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171335: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:debian-di-install:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7db86fe2ed220c196061824e652b94e7a2acbabf
X-Osstest-Versions-That:
    qemuu=2b049d2c8dc01de750410f8f1a4eac498c04c723
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 08:02:42 +0000

flight 171335 qemu-mainline real [real]
flight 171340 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171335/
http://logs.test-lab.xenproject.org/osstest/logs/171340/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 171340-retest
 test-arm64-arm64-libvirt-raw 12 debian-di-install   fail pass in 171340-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 171340 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 171340 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171312
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171312
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171312
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171312
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171312
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171312
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171312
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171312
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                7db86fe2ed220c196061824e652b94e7a2acbabf
baseline version:
 qemuu                2b049d2c8dc01de750410f8f1a4eac498c04c723

Last test of basis   171312  2022-06-22 16:08:30 Z    1 days
Testing same since   171335  2022-06-23 19:07:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Peter Xu <peterx@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   2b049d2c8d..7db86fe2ed  7db86fe2ed220c196061824e652b94e7a2acbabf -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 08:35:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 08:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355307.582883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4en0-0006WA-Qy; Fri, 24 Jun 2022 08:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355307.582883; Fri, 24 Jun 2022 08:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4en0-0006W3-Ne; Fri, 24 Jun 2022 08:35:42 +0000
Received: by outflank-mailman (input) for mailman id 355307;
 Fri, 24 Jun 2022 08:35:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4emy-0006Vx-VN
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 08:35:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4ems-0001Gh-QR; Fri, 24 Jun 2022 08:35:34 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4ems-0003QK-J8; Fri, 24 Jun 2022 08:35:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=yGaL3amccKQDrRZtcxY+n2fxbuVEK+F9l3vY5ksmLt0=; b=6Jfq5eXbnE6qKk9xwixkVsfV6P
	2BKKK+CMacLbHGJmWwn3RY2EDK58Zed8alK1JwbQwwhHdD6XjT1pDPWYOys+iZkhkHb/bNL5nWePr
	3EGEFpI6RD9QTs6Aa4yibYonftYPWq6o4/D1z/zQ2V5S7+Q01/vm/lkHkGPX747ZqaRA=;
Message-ID: <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
Date: Fri, 24 Jun 2022 09:35:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
To: Wei Chen <Wei.Chen@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 24/06/2022 08:18, Wei Chen wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 2022年6月23日 20:54
>> To: Wei Chen <Wei.Chen@arm.com>
>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
>> devel@lists.xenproject.org
>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>>
>> On 10.06.2022 07:53, Wei Chen wrote:
>>> --- a/xen/arch/arm/Makefile
>>> +++ b/xen/arch/arm/Makefile
>>> @@ -1,6 +1,5 @@
>>>   obj-$(CONFIG_ARM_32) += arm32/
>>>   obj-$(CONFIG_ARM_64) += arm64/
>>> -obj-$(CONFIG_ARM_64) += efi/
>>>   obj-$(CONFIG_ACPI) += acpi/
>>>   obj-$(CONFIG_HAS_PCI) += pci/
>>>   ifneq ($(CONFIG_NO_PLAT),y)
>>> @@ -20,6 +19,7 @@ obj-y += domain.o
>>>   obj-y += domain_build.init.o
>>>   obj-y += domctl.o
>>>   obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>> +obj-y += efi/
>>>   obj-y += gic.o
>>>   obj-y += gic-v2.o
>>>   obj-$(CONFIG_GICV3) += gic-v3.o
>>> --- a/xen/arch/arm/efi/Makefile
>>> +++ b/xen/arch/arm/efi/Makefile
>>> @@ -1,4 +1,12 @@
>>>   include $(srctree)/common/efi/efi-common.mk
>>>
>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>   obj-y += $(EFIOBJ-y)
>>>   obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>> +else
>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>> +# will not be cleaned in "make clean".
>>> +EFIOBJ-y += stub.o
>>> +obj-y += stub.o
>>> +endif
>>
>> This has caused
>>
>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
>>
>> for the 32-bit Arm build that I keep doing every once in a while, with
>> (if it matters) GNU ld 2.38. I guess you will want to consider building
>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
>> option.
>>
> 
> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
> if Arm maintainers agree.

Looking at the code, we don't seem to build Xen arm64 with -fshort-wchar 
(aside from the EFI files). So it is not entirely clear why we would want 
to use -fshort-wchar for arm32.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 08:53:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 08:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355313.582894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4f3y-0000aA-Aa; Fri, 24 Jun 2022 08:53:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355313.582894; Fri, 24 Jun 2022 08:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4f3y-0000a3-76; Fri, 24 Jun 2022 08:53:14 +0000
Received: by outflank-mailman (input) for mailman id 355313;
 Fri, 24 Jun 2022 08:53:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4f3w-0000Zw-3X
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 08:53:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4f3v-0001bk-9y; Fri, 24 Jun 2022 08:53:11 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4f3v-00047Q-3S; Fri, 24 Jun 2022 08:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=J3WM92tf+L1bkFnLmWQI2eBUEPRhPlBK1xk0qu5wSgM=; b=mO+BEdpmluWWR3YoX+QFMaI6u3
	jnAym9KD4Nmtb+vEFTtIqoHKXAI2J/vXZdwiL+N5E7I7a1jW2eyXgggePoSxDhpVBkjfA+P4LxD1B
	MwRuOFTbMUUuk/HDh8MOpCw2MQPnpXcGOva46mPEjI4GyJAV1ah4ZGUYc+QHUaGLYfu4=;
Message-ID: <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
Date: Fri, 24 Jun 2022 09:53:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
To: Stefano Stabellini <sstabellini@kernel.org>, dmitry.semenets@gmail.com
Cc: xen-devel@lists.xenproject.org, Dmytro Semenets
 <dmytro_semenets@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
 <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/06/2022 23:07, Stefano Stabellini wrote:
> On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> So wouldn't it be better to remove the panic from the implementation of
> call_psci_cpu_off?

I have asked to keep the panic() in call_psci_cpu_off(). If you remove 
the panic() then we will hide the fact that the CPU was not properly 
turned off and will consume more energy than expected.

The WFI loop is fine when shutting down or rebooting because we know 
this will only happen for a short period of time.

> 
> The reason I am saying this is that stop_cpu is called in a number of
> places beyond halt_this_cpu and as far as I can tell any of them could
> trigger the panic. (I admit they are unlikely places but still.)

This is one of the examples where the CPU will not be stopped for just a 
short period of time. We should deal with those cases differently (e.g. 
by migrating the trusted OS) so we give the CPU every chance to be fully 
powered off.

IMHO, this is a different issue, which is why I didn't ask Dmitry to 
solve it.

> 
> 
> Also the PSCI spec explicitly mentions CPU_OFF as a way to place CPUs in
> a "known state" and doesn't offer any other examples. So although what
> you wrote in the commit message is correct, using CPU_OFF seems to be
> the less error-prone way (in the sense of triggering firmware errors) to
> place CPUs in a known state.

The section you are referring to starts with "One way". This implies 
there are other methods.

FWIW, the spin loop above seems to be how Linux is dealing with the 
shutdown/reboot.

> 
> So I would simply remove the panic from call_psci_cpu_off, so that we
> try CPU_OFF first, and if it doesn't work, we use the WFI loop as
> backup. Also we get to fix all the other callers of stop_cpu this way.
This reads strangely. In the previous paragraph you suggested that 
CPU_OFF is the less error-prone way to place CPUs in a known state. But 
here, you are softening that stance and suggesting falling back to the 
WFI loop.

So to me this indicates that the WFI loop is fine. Otherwise, wouldn't 
the user risk seeing firmware errors (and, BTW, I don't understand what 
sort of firmware errors you are worried about)? If yes, why would that 
be acceptable?

For instance, in Dmitry's situation, CPU0 would always end up in the WFI loop...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355324.582945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM3-00048K-BJ; Fri, 24 Jun 2022 09:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355324.582945; Fri, 24 Jun 2022 09:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM3-00046B-4n; Fri, 24 Jun 2022 09:11:55 +0000
Received: by outflank-mailman (input) for mailman id 355324;
 Fri, 24 Jun 2022 09:11:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM2-0003rG-5P
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM2-0001zq-0K; Fri, 24 Jun 2022 09:11:54 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM1-0005kh-Oj; Fri, 24 Jun 2022 09:11:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=cV3cR00UhtGpPQyq5h3uQP1B9sqMJ894didgXHGUxvQ=; b=mai6zIA1l1ESqFE4rwtevFJLVC
	xmfpQdhD6w5EfM2TB8RsVBVTMqd1cd+ZXCzLEPRu5YFDoBY8Yb+39gFFY6ZJHYq+69gs9N1SjA7/q
	Uln3uPJTY9i9nKVpvFwBRuO0SProqo1UVLCGzRaoTUtzS2iBP7pSyLKs9ge3H/Hvsvpw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 4/7] xen/arm: mm: Add more ASSERT() in {destroy, modify}_xen_mappings()
Date: Fri, 24 Jun 2022 10:11:43 +0100
Message-Id: <20220624091146.35716-5-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Both destroy_xen_mappings() and modify_xen_mappings() take as parameters
a range [start, end[. Both ends should be page aligned.

Add extra ASSERT()s to ensure start and end are page aligned. Take the
opportunity to rename 'v' to 's' to be consistent with the other helper.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/mm.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0607c65f95cd..20733afebce4 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1371,14 +1371,18 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
     return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
 }
 
-int destroy_xen_mappings(unsigned long v, unsigned long e)
+int destroy_xen_mappings(unsigned long s, unsigned long e)
 {
-    ASSERT(v <= e);
-    return xen_pt_update(v, INVALID_MFN, (e - v) >> PAGE_SHIFT, 0);
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
+    ASSERT(s <= e);
+    return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0);
 }
 
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
 {
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
     ASSERT(s <= e);
     return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags);
 }
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355321.582916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM1-0003V3-5C; Fri, 24 Jun 2022 09:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355321.582916; Fri, 24 Jun 2022 09:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM1-0003Uw-2L; Fri, 24 Jun 2022 09:11:53 +0000
Received: by outflank-mailman (input) for mailman id 355321;
 Fri, 24 Jun 2022 09:11:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fLz-0003FT-E3
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLy-0001z8-Sg; Fri, 24 Jun 2022 09:11:50 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLy-0005kh-K0; Fri, 24 Jun 2022 09:11:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=zIQ3x1L00ktbHL79TXnTuWC8BdC4G1K1Ot6H1GVHDBA=; b=60huZJTD4Lg890D6JSZvBA8MBR
	Y5H5mGDXCodXNoDrPQ82EJQaP4qd0gaP6bku3XmfAtVQgTPQkh9PrQcO/nGkUhhZBmMQFIURONjbS
	LXNNxGNyuKiwTYusZ6lvBhvZ/y1kbDvwMgihIp1pfHqbhuZ8bG4MZ2Xp1BaBBby1NyKo=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH 1/7] xen/arm: Remove most of the *_VIRT_END defines
Date: Fri, 24 Jun 2022 10:11:40 +0100
Message-Id: <20220624091146.35716-2-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, *_VIRT_END may point either to the address after the end
or to the last address of the region.

The lack of consistency makes it quite difficult to reason about them.

Furthermore, there is a risk of overflow in the case where the address
points past the end. I am not aware of any actual occurrence, so this is
only a latent bug.

Start to solve the problem by removing all the *_VIRT_END exclusively used
by the Arm code and add *_VIRT_SIZE when it is not present.

Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
for better consistency and use _AT(vaddr_t, ).

Also take the opportunity to fix the coding style of the comment touched
in mm.c.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

I noticed that a few functions in Xen expect [start, end[. This is risky
as we may end up with end < start if the region is defined right at the
top of the address space.

I haven't yet tackled this issue. But I would at least like to get rid
of *_VIRT_END.

This was originally sent separately (let's call it v0).

    Changes in v1:
        - Mention the coding style change.
---
 xen/arch/arm/include/asm/config.h | 18 ++++++++----------
 xen/arch/arm/livepatch.c          |  2 +-
 xen/arch/arm/mm.c                 | 13 ++++++++-----
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 3e2a55a91058..66db618b34e7 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -111,12 +111,11 @@
 #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
 
 #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_SLOT_SIZE     MB(4)
-#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
+#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
 
 #ifdef CONFIG_LIVEPATCH
 #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
-#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
+#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
 #endif
 
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
@@ -132,18 +131,18 @@
 #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
 
 #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
-#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
-#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
+#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
 
-#define VMAP_VIRT_END    XENHEAP_VIRT_START
+#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
 
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
-#define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
+#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
 
 #else /* ARM_64 */
 
@@ -152,12 +151,11 @@
 #define SLOT0_ENTRY_SIZE  SLOT0(1)
 
 #define VMAP_VIRT_START  GB(1)
-#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1))
+#define VMAP_VIRT_SIZE   GB(1)
 
 #define FRAMETABLE_VIRT_START  GB(32)
 #define FRAMETABLE_SIZE        GB(32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define DIRECTMAP_VIRT_START   SLOT0(256)
 #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 75e8adcfd6a1..57abc746e60b 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -175,7 +175,7 @@ void __init arch_livepatch_init(void)
     void *start, *end;
 
     start = (void *)LIVEPATCH_VMAP_START;
-    end = (void *)LIVEPATCH_VMAP_END;
+    end = start + LIVEPATCH_VMAP_SIZE;
 
     vm_init_type(VMAP_XEN, start, end);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be37176a4725..0607c65f95cd 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/* xen_dommap == pages used by map_domain_page, these pages contain
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
  * the second level pagetables which map the domheap region
- * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
 static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
@@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
     int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
     unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
 
-    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
         return virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
@@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
     int rc;
 
     /* destroy the _PAGE_BLOCK mapping */
-    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
+                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
                              _PAGE_BLOCK);
     BUG_ON(rc);
 }
@@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
 
 void *__init arch_vmap_virt_end(void)
 {
-    return (void *)VMAP_VIRT_END;
+    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
 }
 
 /*
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355323.582938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM2-000401-Nk; Fri, 24 Jun 2022 09:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355323.582938; Fri, 24 Jun 2022 09:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM2-0003zE-Kd; Fri, 24 Jun 2022 09:11:54 +0000
Received: by outflank-mailman (input) for mailman id 355323;
 Fri, 24 Jun 2022 09:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM1-0003Vs-6F
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM1-0001zW-07; Fri, 24 Jun 2022 09:11:53 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM0-0005kh-N8; Fri, 24 Jun 2022 09:11:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5cwlf6hvockFhfMMpDgaqjNm0NVrUCzxPEvGvIkErrk=; b=qD5hsIL3+3Q8RH29wjU5tztmx4
	nPtgf6Ftf7Z1SOZAeRXIKfGVVpFUp9DKfYibEjkYWf+79LMfjJtQ9Nf3UTHBzEbEh3J8yiw4P7cTc
	OopuQxZHb4XEAetLy7ctzX2XFZU5YNgIu7sN2bu181R6JCSmpeUHH57MJFdDi6RzpcCY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/7] xen/arm: head: Add missing isb after writing to SCTLR_EL2/HSCTLR
Date: Fri, 24 Jun 2022 10:11:42 +0100
Message-Id: <20220624091146.35716-4-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

A write to SCTLR_EL2/HSCTLR may not be visible until the next context
synchronization event. When initializing the CPU, we want the update to
take effect right away. So add an isb afterwards.

Spec references:
    - AArch64: D13.1.2 ARM DDI 0487
    - AArch32 v8: G8.1.2 ARM DDI 0487
    - AArch32 v7: B5.6.3 ARM DDI 0406C.d
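The resulting sequence on arm64 looks like this (the arm32 version has the same shape, with mcr CP32(r0, HSCTLR) in place of the msr):

```asm
        ldr   x0, =SCTLR_EL2_SET
        msr   SCTLR_EL2, x0
        isb                  /* context synchronization: force the
                              * SCTLR_EL2 update to take effect before
                              * the next instruction executes */
```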

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/arm32/head.S | 1 +
 xen/arch/arm/arm64/head.S | 1 +
 2 files changed, 2 insertions(+)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 77f0a619ca51..98ccf18b51f1 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -353,6 +353,7 @@ cpu_init_done:
 
         ldr   r0, =HSCTLR_SET
         mcr   CP32(r0, HSCTLR)
+        isb
 
         mov   pc, r5                        /* Return address is in r5 */
 ENDPROC(cpu_init)
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 109ae7de0c2b..1babcc65d7c9 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -486,6 +486,7 @@ cpu_init:
 
         ldr   x0, =SCTLR_EL2_SET
         msr   SCTLR_EL2, x0
+        isb
 
         /*
          * Ensure that any exceptions encountered at EL2
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355322.582920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM1-0003XY-Fx; Fri, 24 Jun 2022 09:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355322.582920; Fri, 24 Jun 2022 09:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM1-0003X3-Ao; Fri, 24 Jun 2022 09:11:53 +0000
Received: by outflank-mailman (input) for mailman id 355322;
 Fri, 24 Jun 2022 09:11:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM0-0003JW-51
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLz-0001zI-Ti; Fri, 24 Jun 2022 09:11:51 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLz-0005kh-La; Fri, 24 Jun 2022 09:11:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=axjB04kMHfN9pAzvy+jzI10okTa32BNLwJRPdmioDVM=; b=bXb8SxvJFtPDGp5SOcKBg1Un+d
	KM5InCR6Jw3nm52QKRmf5r2E7AnV0tJ6Cf7M5EkDqXp6fpJiTVg9I2v16gz+XqJe+ii8oJAli2rUK
	YFPwzm5f+f6vPcDe/u/Soo07Hk6KsV78yiFn9i+9evPycJJtrcCm2GP0Bin7Z+4F+5yE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the physical address of a symbol
Date: Fri, 24 Jun 2022 10:11:41 +0100
Message-Id: <20220624091146.35716-3-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Many places in the ARM32 assembly code need to load the physical address
of a symbol. Rather than open-coding the translation, introduce a new
macro that will load the physical address of a symbol.

Lastly, use the new macro to replace all the existing open-coded versions.

Note that most of the comments associated with the changed code have been
removed because the code is now self-explanatory.
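The transformation the macro enables looks like this (taken from the smp_up_cpu hunk; r10 holds the phys-minus-virt offset throughout head.S):

```asm
/* before: open-coded virt -> phys translation */
ldr   r0, =smp_up_cpu
add   r0, r0, r10            /* apply physical offset held in r10 */

/* after: */
load_paddr r0, smp_up_cpu
```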

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/arm32/head.S | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index c837d3054cf9..77f0a619ca51 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -65,6 +65,11 @@
         .endif
 .endm
 
+.macro load_paddr rb, sym
+        ldr   \rb, =\sym
+        add   \rb, \rb, r10
+.endm
+
 /*
  * Common register usage in this file:
  *   r0  -
@@ -157,8 +162,7 @@ past_zImage:
 
         /* Using the DTB in the .dtb section? */
 .ifnes CONFIG_DTB_FILE,""
-        ldr   r8, =_sdtb
-        add   r8, r10                /* r8 := paddr(DTB) */
+        load_paddr r8, _sdtb
 .endif
 
         /* Initialize the UART if earlyprintk has been enabled. */
@@ -208,8 +212,7 @@ GLOBAL(init_secondary)
         mrc   CP32(r1, MPIDR)
         bic   r7, r1, #(~MPIDR_HWID_MASK) /* Mask out flags to get CPU ID */
 
-        ldr   r0, =smp_up_cpu
-        add   r0, r0, r10            /* Apply physical offset */
+        load_paddr r0, smp_up_cpu
         dsb
 2:      ldr   r1, [r0]
         cmp   r1, r7
@@ -376,8 +379,7 @@ ENDPROC(cpu_init)
         and   r1, r1, r2             /* r1 := slot in \tlb */
         lsl   r1, r1, #3             /* r1 := slot offset in \tlb */
 
-        ldr   r4, =\tbl
-        add   r4, r4, r10            /* r4 := paddr(\tlb) */
+        load_paddr r4, \tbl
 
         movw  r2, #PT_PT             /* r2:r3 := right for linear PT */
         orr   r2, r2, r4             /*           + \tlb paddr */
@@ -536,8 +538,7 @@ enable_mmu:
         dsb   nsh
 
         /* Write Xen's PT's paddr into the HTTBR */
-        ldr   r0, =boot_pgtable
-        add   r0, r0, r10            /* r0 := paddr (boot_pagetable) */
+        load_paddr r0, boot_pgtable
         mov   r1, #0                 /* r0:r1 is paddr (boot_pagetable) */
         mcrr  CP64(r0, r1, HTTBR)
         isb
@@ -782,10 +783,8 @@ ENTRY(lookup_processor_type)
  */
 __lookup_processor_type:
         mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
-        ldr   r1, = __proc_info_start
-        add   r1, r1, r10                   /* r1 := paddr of table (start) */
-        ldr   r2, = __proc_info_end
-        add   r2, r2, r10                   /* r2 := paddr of table (end) */
+        load_paddr r1, __proc_info_start
+        load_paddr r2, __proc_info_end
 1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
         and   r4, r0, r3                    /* r4 := our cpu id with mask */
         ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355327.582983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM8-00059M-GX; Fri, 24 Jun 2022 09:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355327.582983; Fri, 24 Jun 2022 09:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM8-000591-5q; Fri, 24 Jun 2022 09:12:00 +0000
Received: by outflank-mailman (input) for mailman id 355327;
 Fri, 24 Jun 2022 09:11:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM5-0004pN-Sv
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM5-00020Z-PN; Fri, 24 Jun 2022 09:11:57 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM5-0005kh-HX; Fri, 24 Jun 2022 09:11:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=zMEWCFw8VlgwSLYa/MG4xNd0JR2sCngDBJS/t/MngS4=; b=i3TFzCyYZR4/ccLzNmn4cD77da
	uk8GGc9hqwx2yP9WgQFCpSRdv0ryucZ7Ix6DYaYQZ4uzVpqZV6pRRAfqT9HiVzxYsur/vsqi8NTpn
	obLTjdJyJ681MtBCeZ4lyv4oidYgE7xay+yzSJiyz6zXkZETsQ5amd3l1deYzFc/Gixs=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 7/7] xen/arm: mm: Reduce the area that xen_second covers
Date: Fri, 24 Jun 2022 10:11:46 +0100
Message-Id: <20220624091146.35716-8-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, xen_second is used to cover the first 2GB of the
virtual address space. With the recent rework of the page-tables,
only the first 1GB region (where Xen resides) is effectively used.

In addition to that, I would like to reshuffle the memory layout, so the
Xen mappings may no longer be in the first 2GB of the virtual address
space.

Therefore, rework xen_second so it only covers the 1GB region where
Xen will reside.

With this change, xen_second no longer covers the xenheap area on arm32.
So we first need to add memory to the boot allocator before setting up
the xenheap mappings.

Take the opportunity to update the comments on top of xen_fixmap and
xen_xenmap.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/mm.c    | 32 +++++++++++---------------------
 xen/arch/arm/setup.c | 13 +++++++++++--
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 74666b2e720a..f87a7c32d07d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -116,17 +116,14 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
 #endif
 
 /* Common pagetable leaves */
-/* Second level page tables.
- *
- * The second-level table is 2 contiguous pages long, and covers all
- * addresses from 0 to 0x7fffffff. Offsets into it are calculated
- * with second_linear_offset(), not second_table_offset().
- */
-static DEFINE_PAGE_TABLES(xen_second, 2);
-/* First level page table used for fixmap */
+/* Second level page table used to cover Xen virtual address space */
+static DEFINE_PAGE_TABLE(xen_second);
+/* Third level page table used for fixmap */
 DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
-/* First level page table used to map Xen itself with the XN bit set
- * as appropriate. */
+/*
+ * Third level page table used to map Xen itself with the XN bit set
+ * as appropriate.
+ */
 static DEFINE_PAGE_TABLE(xen_xenmap);
 
 /* Non-boot CPUs use this to find the correct pagetables. */
@@ -168,7 +165,6 @@ static void __init __maybe_unused build_assertions(void)
     BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
-    BUILD_BUG_ON(second_linear_offset(XEN_VIRT_START) >= XEN_PT_LPAE_ENTRIES);
 #ifdef CONFIG_DOMAIN_PAGE
     BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
 #endif
@@ -482,14 +478,10 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     p = (void *) cpu0_pgtable;
 #endif
 
-    /* Initialise first level entries, to point to second level entries */
-    for ( i = 0; i < 2; i++)
-    {
-        p[i] = pte_of_xenaddr((uintptr_t)(xen_second +
-                                          i * XEN_PT_LPAE_ENTRIES));
-        p[i].pt.table = 1;
-        p[i].pt.xn = 0;
-    }
+    /* Map xen second level page-table */
+    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
@@ -618,8 +610,6 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
 
     /* Record where the xenheap is, for translation routines. */
     xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-    xenheap_mfn_start = _mfn(base_mfn);
-    xenheap_mfn_end = _mfn(base_mfn + nr_mfns);
 }
 #else /* CONFIG_ARM_64 */
 void __init setup_xenheap_mappings(unsigned long base_mfn,
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 31574996f36d..c777cc3adcc7 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -774,11 +774,20 @@ static void __init setup_mm(void)
            opt_xenheap_megabytes ? ", from command-line" : "");
     printk("Dom heap: %lu pages\n", domheap_pages);
 
-    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
+    /*
+     * We need some memory to allocate the page-tables used for the
+     * xenheap mappings. So populate the boot allocator first.
+     *
+     * This requires us to set xenheap_mfn_{start, end} first so the Xenheap
+     * region can be avoided.
+     */
+    xenheap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
+    xenheap_mfn_end = mfn_add(xenheap_mfn_start, xenheap_pages);
 
-    /* Add non-xenheap memory */
     populate_boot_allocator();
 
+    setup_xenheap_mappings(mfn_x(xenheap_mfn_start), xenheap_pages);
+
     /* Frame table covers all of RAM region, including holes */
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355320.582904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fLz-0003Fg-UB; Fri, 24 Jun 2022 09:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355320.582904; Fri, 24 Jun 2022 09:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fLz-0003FZ-RI; Fri, 24 Jun 2022 09:11:51 +0000
Received: by outflank-mailman (input) for mailman id 355320;
 Fri, 24 Jun 2022 09:11:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fLy-0003FN-EM
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLx-0001yy-L3; Fri, 24 Jun 2022 09:11:49 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fLx-0005kh-Bd; Fri, 24 Jun 2022 09:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=LUUTzfKjE4shgj9dPHcr74fVufq6DTQHJU2cvaX2OrE=; b=RIq7QV
	NEy1DfvZqLmWWUHFIGX3CrIQTQEipBgRkJD49+r7U1FMkrptrYmsEqd3QNLkDy2t2988R4DO5qm05
	/qzI1XOUha9PMGmqDBn6wUqTMlz2QqefCQUI+XmEo2PJJcgP0afrUruSoBYnxa9i+xQvSSjk8Vnq9
	zeFEkSEBQe0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/7] xen/arm: mm: Bunch of clean-ups
Date: Fri, 24 Jun 2022 10:11:39 +0100
Message-Id: <20220624091146.35716-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series is a collection of clean-up patches for the MM subsystem
that I have done in preparation for the next revision of "xen/arm: Don't
switch TTBR while the MMU is on" [1].

Cheers,

[1] https://lore.kernel.org/all/20220309112048.17377-1-julien@xen.org/

Julien Grall (7):
  xen/arm: Remove most of the *_VIRT_END defines
  xen/arm32: head.S: Introduce a macro to load the physical address of a
    symbol
  xen/arm: head: Add missing isb after writing to SCTLR_EL2/HSCTLR
  xen/arm: mm: Add more ASSERT() in {destroy, modify}_xen_mappings()
  xen/arm32: mm: Consolidate the domheap mappings initialization
  xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
  xen/arm: mm: Reduce the area that xen_second covers

 xen/arch/arm/Kconfig                |   1 +
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/arm32/head.S           |  24 +--
 xen/arch/arm/arm64/head.S           |   1 +
 xen/arch/arm/domain_page.c          | 193 +++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   8 +
 xen/arch/arm/include/asm/config.h   |  19 +--
 xen/arch/arm/include/asm/lpae.h     |  17 ++
 xen/arch/arm/livepatch.c            |   2 +-
 xen/arch/arm/mm.c                   | 231 ++++------------------------
 xen/arch/arm/setup.c                |  21 ++-
 xen/arch/x86/Kconfig                |   1 +
 xen/arch/x86/include/asm/config.h   |   1 -
 xen/common/Kconfig                  |   3 +
 14 files changed, 297 insertions(+), 226 deletions(-)
 create mode 100644 xen/arch/arm/domain_page.c

-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355325.582960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM4-0004VC-IR; Fri, 24 Jun 2022 09:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355325.582960; Fri, 24 Jun 2022 09:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM4-0004Ug-Dd; Fri, 24 Jun 2022 09:11:56 +0000
Received: by outflank-mailman (input) for mailman id 355325;
 Fri, 24 Jun 2022 09:11:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM3-00048Y-75
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM3-000205-2C; Fri, 24 Jun 2022 09:11:55 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM2-0005kh-QK; Fri, 24 Jun 2022 09:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=qe4fAnsClmSDMh+3jhKIrIzURJjOhtWwCwowArSR7bI=; b=dLbiIYSwg5zh1fKeVRUsS4pdwX
	BM3I/Gf0Q8F3Gyl+esUsc9XqoB1DWEhIumeadx0cqqUtREJRKQEAHo2+DD9R+TIHu3i50Xoi0Sa/6
	5FuyuavKDPZxX+54AGI/KarcHWbw8n2p0OPJsv7wgSCNKe/VaoPAYkEprYTYxOB2cDpo=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 5/7] xen/arm32: mm: Consolidate the domheap mappings initialization
Date: Fri, 24 Jun 2022 10:11:44 +0100
Message-Id: <20220624091146.35716-6-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, the domheap mappings initialization is done separately for
the boot CPU and secondary CPUs. The main difference is for the former
the pages are part of Xen binary whilst for the latter they are
dynamically allocated.

It would be good to have a single helper so it is easier to rework how
the domheap is initialized.

For CPU0, we still need to use pre-allocated pages because the
allocators may themselves use map_domain_page(), so the domheap area has
to be ready first. But we can still delay the initialization to setup_mm().

Introduce a new helper init_domheap_mappings() that will be called
from setup_mm() for the boot CPU and from init_secondary_pagetables()
for secondary CPUs.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/include/asm/arm32/mm.h |  2 +
 xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
 xen/arch/arm/setup.c                |  8 +++
 3 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 6b039d9ceaa2..575373aeb985 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return false;
 }
 
+bool init_domheap_mappings(unsigned int cpu);
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 20733afebce4..995aa1e4480e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -372,6 +372,58 @@ void clear_fixmap(unsigned map)
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has
+ * been allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is set up before the heap is initialized.
+     * So we need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPU's
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
 void *map_domain_page_global(mfn_t mfn)
 {
     return vmap(&mfn, 1);
@@ -633,16 +685,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
         p[i].pt.xn = 0;
     }
 
-#ifdef CONFIG_ARM_32
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)]
-            = pte_of_xenaddr((uintptr_t)(cpu0_dommap +
-                                         i * XEN_PT_LPAE_ENTRIES));
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)].pt.table = 1;
-    }
-#endif
-
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
     {
@@ -686,7 +728,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
-    per_cpu(xen_dommap, 0) = cpu0_dommap;
 #endif
 }
 
@@ -719,39 +760,28 @@ int init_secondary_pagetables(int cpu)
 #else
 int init_secondary_pagetables(int cpu)
 {
-    lpae_t *first, *domheap, pte;
-    int i;
+    lpae_t *first;
 
     first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
-    domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
 
-    if ( domheap == NULL || first == NULL )
+    if ( !first )
     {
-        printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
-        free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
-        free_xenheap_page(first);
+        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
         return -ENOMEM;
     }
 
     /* Initialise root pagetable from root of boot tables */
     memcpy(first, cpu0_pgtable, PAGE_SIZE);
+    per_cpu(xen_pgtable, cpu) = first;
 
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-
-    /* Update the first level mapping to reference the local CPUs
-     * domheap mapping pages. */
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    if ( !init_domheap_mappings(cpu) )
     {
-        pte = mfn_to_xen_entry(virt_to_mfn(domheap + i * XEN_PT_LPAE_ENTRIES),
-                               MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
+        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
+        per_cpu(xen_pgtable, cpu) = NULL;
+        free_xenheap_page(first);
+        return -ENOMEM;
     }
 
-    per_cpu(xen_pgtable, cpu) = first;
-    per_cpu(xen_dommap, cpu) = domheap;
-
     clear_boot_pagetables();
 
     /* Set init_ttbr for this CPU coming up */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 577c54e6fbfa..31574996f36d 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -783,6 +783,14 @@ static void __init setup_mm(void)
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
 
+    /*
+     * The allocators may need to use map_domain_page() (such as for
+     * scrubbing pages). So we need to prepare the domheap area first.
+     */
+    if ( !init_domheap_mappings(smp_processor_id()) )
+        panic("CPU%u: Unable to prepare the domheap page-tables\n",
+              smp_processor_id());
+
     /* Add xenheap memory that was not already added to the boot allocator. */
     init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
                        mfn_to_maddr(xenheap_mfn_end));
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:12:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355326.582970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM6-0004rZ-S2; Fri, 24 Jun 2022 09:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355326.582970; Fri, 24 Jun 2022 09:11:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fM6-0004rJ-Ms; Fri, 24 Jun 2022 09:11:58 +0000
Received: by outflank-mailman (input) for mailman id 355326;
 Fri, 24 Jun 2022 09:11:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fM5-0004jQ-9i
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:11:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM4-00020O-O2; Fri, 24 Jun 2022 09:11:56 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fM4-0005kh-CY; Fri, 24 Jun 2022 09:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=DPd+8w6wNLbTPQr2pZ+YLZt6MUKTetODc5e79UqkbfM=; b=IYyDWXiphD/XQVzx2WK8UQ4s8U
	HYbxivRbU+t9JcRJt7pLSokXfrP/WHkBWAB4ntxk26Q5pL8o8GfXmfk2hi/zLa0kbIXa0vaH7LrrG
	hLIc+CEkj3zZCNmnUXcmxH9xIQBW/sNDVk2zpI1QZjHCJUJNMQv4ea4HIwI52bue+g9Y=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 6/7] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file
Date: Fri, 24 Jun 2022 10:11:45 +0100
Message-Id: <20220624091146.35716-7-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220624091146.35716-1-julien@xen.org>
References: <20220624091146.35716-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The file xen/arch/arm/mm.c has been growing quite a lot. It now contains
various independent parts of the MM subsystem.

One of them is the helpers to map/unmap a page when CONFIG_DOMAIN_PAGE
is enabled (only used by arm32). Move them to a new file,
xen/arch/arm/domain_page.c.

In order to be able to use CONFIG_DOMAIN_PAGE in the Makefile, a new
Kconfig option is introduced that is selected by x86 and arm32.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/Kconfig                |   1 +
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/domain_page.c          | 193 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   6 +
 xen/arch/arm/include/asm/config.h   |   1 -
 xen/arch/arm/include/asm/lpae.h     |  17 +++
 xen/arch/arm/mm.c                   | 198 +---------------------------
 xen/arch/x86/Kconfig                |   1 +
 xen/arch/x86/include/asm/config.h   |   1 -
 xen/common/Kconfig                  |   3 +
 10 files changed, 224 insertions(+), 198 deletions(-)
 create mode 100644 xen/arch/arm/domain_page.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index be9eff014120..eddec5b2e750 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -1,6 +1,7 @@
 config ARM_32
 	def_bool y
 	depends on "$(ARCH)" = "arm32"
+	select DOMAIN_PAGE
 
 config ARM_64
 	def_bool y
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index bb7a6151c13c..4f3a50a7bad8 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -17,6 +17,7 @@ obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
+obj-$(CONFIG_DOMAIN_PAGE) += domain_page.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += efi/
diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
new file mode 100644
index 000000000000..ca7a907b8bb9
--- /dev/null
+++ b/xen/arch/arm/domain_page.c
@@ -0,0 +1,193 @@
+#include <xen/mm.h>
+#include <xen/pmap.h>
+#include <xen/vmap.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+/* cpu0's domheap page tables */
+static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
+
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
+ * the second level pagetables which map the domheap region
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
+static DEFINE_PER_CPU(lpae_t *, xen_dommap);
+
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has been
+ * allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is before the heap is initialized. So we
+     * need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPUs
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
+void *map_domain_page_global(mfn_t mfn)
+{
+    return vmap(&mfn, 1);
+}
+
+void unmap_domain_page_global(const void *va)
+{
+    vunmap(va);
+}
+
+/* Map a page of domheap memory */
+void *map_domain_page(mfn_t mfn)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
+    vaddr_t va;
+    lpae_t pte;
+    int i, slot;
+
+    local_irq_save(flags);
+
+    /* The map is laid out as an open-addressed hash table where each
+     * entry is a 2MB superpage pte.  We use the available bits of each
+     * PTE as a reference count; when the refcount is zero the slot can
+     * be reused. */
+    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
+          i < DOMHEAP_ENTRIES;
+          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
+    {
+        if ( map[slot].pt.avail < 0xf &&
+             map[slot].pt.base == slot_mfn &&
+             map[slot].pt.valid )
+        {
+            /* This slot already points to the right place; reuse it */
+            map[slot].pt.avail++;
+            break;
+        }
+        else if ( map[slot].pt.avail == 0 )
+        {
+            /* Commandeer this 2MB slot */
+            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
+            pte.pt.avail = 1;
+            write_pte(map + slot, pte);
+            break;
+        }
+
+    }
+    /* If the map fills up, the callers have misbehaved. */
+    BUG_ON(i == DOMHEAP_ENTRIES);
+
+#ifndef NDEBUG
+    /* Searching the hash could get slow if the map starts filling up.
+     * Cross that bridge when we come to it */
+    {
+        static int max_tries = 32;
+        if ( i >= max_tries )
+        {
+            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
+            max_tries *= 2;
+        }
+    }
+#endif
+
+    local_irq_restore(flags);
+
+    va = (DOMHEAP_VIRT_START
+          + (slot << SECOND_SHIFT)
+          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
+
+    /*
+     * We may not have flushed this specific subpage at map time,
+     * since we only flush the 4k page not the superpage
+     */
+    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
+
+    return (void *)va;
+}
+
+/* Release a mapping taken with map_domain_page() */
+void unmap_domain_page(const void *va)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+
+    if ( !va )
+        return;
+
+    local_irq_save(flags);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    map[slot].pt.avail--;
+
+    local_irq_restore(flags);
+}
+
+mfn_t domain_page_map_to_mfn(const void *ptr)
+{
+    unsigned long va = (unsigned long)ptr;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
+
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
+        return virt_to_mfn(va);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    return mfn_add(lpae_get_mfn(map[slot]), offset);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 575373aeb985..8bfc906e7178 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -1,6 +1,12 @@
 #ifndef __ARM_ARM32_MM_H__
 #define __ARM_ARM32_MM_H__
 
+#include <xen/percpu.h>
+
+#include <asm/lpae.h>
+
+DECLARE_PER_CPU(lpae_t *, xen_pgtable);
+
 /*
  * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
  * For convenience always return false.
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 66db618b34e7..2fafb9f2283c 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -122,7 +122,6 @@
 
 #ifdef CONFIG_ARM_32
 
-#define CONFIG_DOMAIN_PAGE 1
 #define CONFIG_SEPARATE_XENHEAP 1
 
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index fc19cbd84772..3fdd5d0de28e 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -261,6 +261,23 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
 
+/*
+ * Macros to define page-tables:
+ *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
+ *  in assembly code before BSS is zeroed.
+ *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
+ *  page-tables to be used after BSS is zeroed (typically they are only used
+ *  in C).
+ */
+#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
+lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
+    name[XEN_PT_LPAE_ENTRIES]
+
+#define DEFINE_PAGE_TABLES(name, nr)                    \
+lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
+
+#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
+
 #endif /* __ARM_LPAE_H__ */
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 995aa1e4480e..74666b2e720a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -57,23 +57,6 @@ mm_printk(const char *fmt, ...) {}
     } while (0)
 #endif
 
-/*
- * Macros to define page-tables:
- *  - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
- *  in assembly code before BSS is zeroed.
- *  - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
- *  page-tables to be used after BSS is zeroed (typically they are only used
- *  in C).
- */
-#define DEFINE_BOOT_PAGE_TABLE(name)                                          \
-lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
-    name[XEN_PT_LPAE_ENTRIES]
-
-#define DEFINE_PAGE_TABLES(name, nr)                    \
-lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
-
-#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
-
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
  * to the CPUs own pagetables.
@@ -110,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 /* Main runtime page tables */
 
 /*
- * For arm32 xen_pgtable and xen_dommap are per-PCPU and are allocated before
+ * For arm32 xen_pgtable is per-PCPU and is allocated before
  * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
  *
  * xen_second, xen_fixmap and xen_xenmap are always shared between all
@@ -126,18 +109,10 @@ static DEFINE_PAGE_TABLE(xen_first);
 #define HYP_PT_ROOT_LEVEL 1
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
-static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/*
- * xen_dommap == pages used by map_domain_page, these pages contain
- * the second level pagetables which map the domheap region
- * starting at DOMHEAP_VIRT_START in 2MB chunks.
- */
-static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
-/* cpu0's domheap page tables */
-static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
 #endif
 
 /* Common pagetable leaves */
@@ -371,175 +346,6 @@ void clear_fixmap(unsigned map)
     BUG_ON(res != 0);
 }
 
-#ifdef CONFIG_DOMAIN_PAGE
-/*
- * Prepare the area that will be used to map domheap pages. They are
- * mapped in 2MB chunks, so we need to allocate the page-tables up to
- * the 2nd level.
- *
- * The caller should make sure the root page-table for @cpu has been
- * been allocated.
- */
-bool init_domheap_mappings(unsigned int cpu)
-{
-    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
-    lpae_t *root = per_cpu(xen_pgtable, cpu);
-    unsigned int i, first_idx;
-    lpae_t *domheap;
-    mfn_t mfn;
-
-    ASSERT(root);
-    ASSERT(!per_cpu(xen_dommap, cpu));
-
-    /*
-     * The domheap for cpu0 is before the heap is initialized. So we
-     * need to use pre-allocated pages.
-     */
-    if ( !cpu )
-        domheap = cpu0_dommap;
-    else
-        domheap = alloc_xenheap_pages(order, 0);
-
-    if ( !domheap )
-        return false;
-
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
-
-    /*
-     * Update the first level mapping to reference the local CPUs
-     * domheap mapping pages.
-     */
-    mfn = virt_to_mfn(domheap);
-    first_idx = first_table_offset(DOMHEAP_VIRT_START);
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&root[first_idx + i], pte);
-    }
-
-    per_cpu(xen_dommap, cpu) = domheap;
-
-    return true;
-}
-
-void *map_domain_page_global(mfn_t mfn)
-{
-    return vmap(&mfn, 1);
-}
-
-void unmap_domain_page_global(const void *va)
-{
-    vunmap(va);
-}
-
-/* Map a page of domheap memory */
-void *map_domain_page(mfn_t mfn)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
-    vaddr_t va;
-    lpae_t pte;
-    int i, slot;
-
-    local_irq_save(flags);
-
-    /* The map is laid out as an open-addressed hash table where each
-     * entry is a 2MB superpage pte.  We use the available bits of each
-     * PTE as a reference count; when the refcount is zero the slot can
-     * be reused. */
-    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
-          i < DOMHEAP_ENTRIES;
-          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
-    {
-        if ( map[slot].pt.avail < 0xf &&
-             map[slot].pt.base == slot_mfn &&
-             map[slot].pt.valid )
-        {
-            /* This slot already points to the right place; reuse it */
-            map[slot].pt.avail++;
-            break;
-        }
-        else if ( map[slot].pt.avail == 0 )
-        {
-            /* Commandeer this 2MB slot */
-            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
-            pte.pt.avail = 1;
-            write_pte(map + slot, pte);
-            break;
-        }
-
-    }
-    /* If the map fills up, the callers have misbehaved. */
-    BUG_ON(i == DOMHEAP_ENTRIES);
-
-#ifndef NDEBUG
-    /* Searching the hash could get slow if the map starts filling up.
-     * Cross that bridge when we come to it */
-    {
-        static int max_tries = 32;
-        if ( i >= max_tries )
-        {
-            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
-            max_tries *= 2;
-        }
-    }
-#endif
-
-    local_irq_restore(flags);
-
-    va = (DOMHEAP_VIRT_START
-          + (slot << SECOND_SHIFT)
-          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
-
-    /*
-     * We may not have flushed this specific subpage at map time,
-     * since we only flush the 4k page not the superpage
-     */
-    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
-
-    return (void *)va;
-}
-
-/* Release a mapping taken with map_domain_page() */
-void unmap_domain_page(const void *va)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-
-    if ( !va )
-        return;
-
-    local_irq_save(flags);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    map[slot].pt.avail--;
-
-    local_irq_restore(flags);
-}
-
-mfn_t domain_page_map_to_mfn(const void *ptr)
-{
-    unsigned long va = (unsigned long)ptr;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
-
-    if ( (va >= VMAP_VIRT_START) && ((VMAP_VIRT_START - va) < VMAP_VIRT_SIZE) )
-        return virt_to_mfn(va);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    return mfn_add(lpae_get_mfn(map[slot]), offset);
-}
-#endif
-
 void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 1e31edc99f9d..e440b473b677 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -10,6 +10,7 @@ config X86
 	select ALTERNATIVE_CALL
 	select ARCH_SUPPORTS_INT128
 	select CORE_PARKING
+	select DOMAIN_PAGE
 	select HAS_ALTERNATIVE
 	select HAS_COMPAT
 	select HAS_CPUFREQ
diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index 07bcd158314b..fbc4bb3416bd 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -22,7 +22,6 @@
 #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
 #define CONFIG_DISCONTIGMEM 1
 #define CONFIG_NUMA_EMU 1
-#define CONFIG_DOMAIN_PAGE 1
 
 #define CONFIG_PAGEALLOC_MAX_ORDER (2 * PAGETABLE_ORDER)
 #define CONFIG_DOMU_MAX_ORDER      PAGETABLE_ORDER
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 41a67891bcc8..b308f4aa0ee5 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -11,6 +11,9 @@ config COMPAT
 config CORE_PARKING
 	bool
 
+config DOMAIN_PAGE
+	bool
+
 config GRANT_TABLE
 	bool "Grant table support" if EXPERT
 	default y
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:28:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355373.582993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fbn-0000Xf-Qb; Fri, 24 Jun 2022 09:28:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355373.582993; Fri, 24 Jun 2022 09:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fbn-0000XY-Lc; Fri, 24 Jun 2022 09:28:11 +0000
Received: by outflank-mailman (input) for mailman id 355373;
 Fri, 24 Jun 2022 09:28:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YVaO=W7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4fbm-0000XO-IW
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:28:10 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5d8c341-f39f-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 11:28:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C3F561F747;
 Fri, 24 Jun 2022 09:28:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9E22A13ACA;
 Fri, 24 Jun 2022 09:28:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id rNY9JaiDtWKmCQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 24 Jun 2022 09:28:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5d8c341-f39f-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656062888; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=NT2sb7YMg8LvPipkNj6Pv1Ue4PoLQCX83HM2YlOrZ/o=;
	b=BW+nJlWIaflIr8E3FcM5XhRFnw5/6cTo3t0GCtucY9V7OkTQzT7Xbx5j/e1rS6SOZJyTKy
	mukwRw4SsjDnPlVjc/CYTXTAGUcJF1Rs3qfVa48Gu4FAnvGb3bQD+2IVRWFAMY+eZ7/Exm
	iMbUNPXrBW1atfYuCLRcfKGJ9qYtss8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/init-xenstore-domain: fix memory map for PVH stubdom
Date: Fri, 24 Jun 2022 11:28:06 +0200
Message-Id: <20220624092806.27700-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In case of maxmem != memsize, the E820 map of the PVH stubdom is wrong,
as it is missing the RAM above memsize.

Additionally, the MMIO area should only cover the HVM special pages.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/helpers/init-xenstore-domain.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index b4f3c65a8a..dad8e43c42 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -71,8 +71,9 @@ static int build(xc_interface *xch)
     char cmdline[512];
     int rv, xs_fd;
     struct xc_dom_image *dom = NULL;
-    int limit_kb = (maxmem ? : (memory + 1)) * 1024;
+    int limit_kb = (maxmem ? : memory) * 1024 + X86_HVM_NR_SPECIAL_PAGES * 4;
     uint64_t mem_size = MB(memory);
+    uint64_t max_size = MB(maxmem);
     struct e820entry e820[3];
     struct xen_domctl_createdomain config = {
         .ssidref = SECINITSID_DOMU,
@@ -157,21 +158,24 @@ static int build(xc_interface *xch)
         config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
         config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
         dom->target_pages = mem_size >> XC_PAGE_SHIFT;
-        dom->mmio_size = GB(4) - LAPIC_BASE_ADDRESS;
+        dom->mmio_size = X86_HVM_NR_SPECIAL_PAGES << XC_PAGE_SHIFT;
         dom->lowmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
                           LAPIC_BASE_ADDRESS : mem_size;
         dom->highmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
                            GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;
-        dom->mmio_start = LAPIC_BASE_ADDRESS;
+        dom->mmio_start = (X86_HVM_END_SPECIAL_REGION -
+                           X86_HVM_NR_SPECIAL_PAGES) << XC_PAGE_SHIFT;
         dom->max_vcpus = 1;
         e820[0].addr = 0;
-        e820[0].size = dom->lowmem_end;
+        e820[0].size = (max_size > LAPIC_BASE_ADDRESS) ?
+                       LAPIC_BASE_ADDRESS : max_size;
         e820[0].type = E820_RAM;
-        e820[1].addr = LAPIC_BASE_ADDRESS;
+        e820[1].addr = dom->mmio_start;
         e820[1].size = dom->mmio_size;
         e820[1].type = E820_RESERVED;
         e820[2].addr = GB(4);
-        e820[2].size = dom->highmem_end - GB(4);
+        e820[2].size = (max_size > LAPIC_BASE_ADDRESS) ?
+                       max_size - LAPIC_BASE_ADDRESS : 0;
         e820[2].type = E820_RAM;
     }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:31:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355378.583004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fen-0001xG-8P; Fri, 24 Jun 2022 09:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355378.583004; Fri, 24 Jun 2022 09:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fen-0001x7-3x; Fri, 24 Jun 2022 09:31:17 +0000
Received: by outflank-mailman (input) for mailman id 355378;
 Fri, 24 Jun 2022 09:31:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4fel-0001x1-Gk
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:31:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2049.outbound.protection.outlook.com [40.107.21.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 63e66cbb-f3a0-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 11:31:14 +0200 (CEST)
Received: from DB8PR06CA0025.eurprd06.prod.outlook.com (2603:10a6:10:100::38)
 by AM6PR08MB4343.eurprd08.prod.outlook.com (2603:10a6:20b:ba::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.19; Fri, 24 Jun
 2022 09:31:11 +0000
Received: from DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:100:cafe::f7) by DB8PR06CA0025.outlook.office365.com
 (2603:10a6:10:100::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Fri, 24 Jun 2022 09:31:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT033.mail.protection.outlook.com (100.127.142.251) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 09:31:11 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 24 Jun 2022 09:31:10 +0000
Received: from b6bcc0d5e2ea.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5C0825DB-A121-4970-A3EA-F080F059132C.1; 
 Fri, 24 Jun 2022 09:31:04 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b6bcc0d5e2ea.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 09:31:04 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAXPR08MB7076.eurprd08.prod.outlook.com (2603:10a6:102:202::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Fri, 24 Jun
 2022 09:31:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 09:31:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63e66cbb-f3a0-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=bRBVwSmkSsMfKGqyEu+zzgVgOASaoceZihy1zSb4AcvApCduhXE7191ZO3seHRV9lsaiE4r13Nf8emaKxcxEtgmwkN3K9tdvDKYZCd9twg//vA8/DnF5EG+7MsXMAAGi9Ik3QPF+kPk2R2ZMXsfRX8d/9srrdGwzGII90kDOfSYbG6Jm6NSBJngkaLH0c00u5nSoIlihPC5lpkdBVFPWM0cH8xiW2KpwF5iherouTZPDyUAuxNtmnKGEe/IFr6gWAhrpznJnoS9b7DrKG5fXeeWsytZvKwg8oVOHXBn/n2CPIEVCoyS2ttRbX6XPA5I9P0qoMpALz3aIcLmEbcfVAQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6kylPXZQgI4kO5a5rykSr2a7fzrXNdksnfQQm5rnP+4=;
 b=Uxo1m0ytMf2S9Jfm2F85TH+nVEhuYjeJxpQlSJy8YZRwFxWFyA15/1bnCA7Hj6IumBpYBVoYcLQIy/1/y4xJhQwTbpFKlgJS62PUosJA7s9OxWrmdYPgVdVOUspNKBR+zHUXvs0kT4qxuWFy8QI9ZdJOnrfxSoc+cAd5PL3De9TOIW4Xf6qd97Wsf1jEwkEQ1bG4usZI+g8N2oFWqjjOp47OTQ45az0CcWsdp2Ob8Wzaff8u+rI4nsSLnePCJL3Zv0m1GN/xI8OtwUIz83UR/sdXojFFRWMLJHz16n8jieCCtx/JcnpPW2d7exbyw4ZzaVdoCwABh3/0eXqo5GcG8g==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6kylPXZQgI4kO5a5rykSr2a7fzrXNdksnfQQm5rnP+4=;
 b=T+tRvSEXrYGtasqeCi/JHrrJtDiTNFRO1VuXorUYhkFYBm89mtcWg3YZmsE8g5NC8nA31OJvyMXEuQwypt7GhRFVNQpLfRs+GMfLWq7wBjDnVoVDi0Vie0Kxhy8+OijCcSowDK3YjgLbsK5PzQYeTuhNb1asFiwUQ75W5GjgOdA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: fce0faabb4051509
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XrRXBcIYnq7fPVl8SyGaUiP/ycQuMR/mrEZHM8D2tzTQF8vx1DDyetn/4UJd/ebtdpgMviOsiZI7ZybHYrJmPKM0M875UJtGRu0ZakYTUT0bxaiwOLS7cftgWIdKF0PcDcKMqbUsxJjuJHjyTtJc7ywtg89xKO8fa24DqjBie4SMYb4ueqdcGToMDdZwMQh3fieH37rGVymU90bnjN+RGIUIpEXhv6t7gcnaGDHFSu8H8MXb3hqrBig7IIFqyTnLc5+FVt8KxbR913c5kq+Y2SXsMnCwSmi/6gbbAVVfJbLHKhguusKptJDxvKe0nab/+IcpIwlQf9Mo86xVXZvEjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6kylPXZQgI4kO5a5rykSr2a7fzrXNdksnfQQm5rnP+4=;
 b=jJ8CHj0CITHZLPsl5YNdBekqLtpMVm1Ui3wxTBP/2yTusyD2TyBzsBJa3FAxHyjhp0erIQAel8gzTc6I76liyq5AKZ9aQq7PpWS1ju41ujyLwVGRDDeuCS7p+PpcK71KUz168HBf1FhSsLB4VB19r1lGSCKY8YnlFVq4AoImXWF5rVl2ObwK1hENJRN/IGrbG96PMjeghP2OJ80qL2vePbuVCsZ7wHg8kV4Sane9+VWK2cyWWZi9ijTUKOX+1WeHiBxNxokQYWDdwwEN4eEwOy8352LLO8NILxrZvnJs9Courb0NmJ0g+4pTE2l+x1qQdrIq1Qs0xj42LK48Wmrc5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6kylPXZQgI4kO5a5rykSr2a7fzrXNdksnfQQm5rnP+4=;
 b=T+tRvSEXrYGtasqeCi/JHrrJtDiTNFRO1VuXorUYhkFYBm89mtcWg3YZmsE8g5NC8nA31OJvyMXEuQwypt7GhRFVNQpLfRs+GMfLWq7wBjDnVoVDi0Vie0Kxhy8+OijCcSowDK3YjgLbsK5PzQYeTuhNb1asFiwUQ75W5GjgOdA=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
Thread-Topic: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while
 preparing the CPU
Thread-Index: AQHYf9MHjBymRs/pn0u9upJvLeFm3q1eWpcA
Date: Fri, 24 Jun 2022 09:31:02 +0000
Message-ID: <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
References: <20220614094157.95631-1-julien@xen.org>
In-Reply-To: <20220614094157.95631-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 7e567109-d824-41dc-2ece-08da55c44672
x-ms-traffictypediagnostic:
	PAXPR08MB7076:EE_|DBAEUR03FT033:EE_|AM6PR08MB4343:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IqIenAcXztI5zHSo+vxd1fah9qUE1H9D5mn7NbImHL6uCptGH/NHr3/IvT+SAVGGg/1MbAlywpDyj5kGEIJXZnNoDL/1dS+9lS1+62gKCD9zYtaoAZpFxvofuqt+Tx+lPK136Bw0wpngyw6qeSSvYQHjgopSp3d+m0Uau+A4aGjHwxq3jbqn2T2C+l8mJBWs6kvO1vflPO8kTDO/1iXbndyZkxlhXmSacL2lpywAt6XbyRZTRfLFJRAdbVApZcIkTP5L/kAqDhZHIz+49lsR+ZIGstBMAYqnpGht+1iGDnk512vSQ3imc2ZvtIS50+iyUaSFmFyO+wjg26/U1yqxhbgwonu1+ldTbFCJmDjR3XRoda6N4yQRkMCmILd7nyvOecMZkzS0jasmNjxtmsHeSyvcBC4tibx5zmQH7g43e0gDIDGVtUiLSxpFoecgYLzEo/exZQx+9YlgDOgSO7CaPTkrE439W6W7rm251vJQ6MebTkjb2b/p5RsTwSUDTzVHdXeTc0rxD9RTFB+tDI/VCakebiCUXlL1pImkFH5KZ3owwfBBHzOLVvZaRwN6r1Y3O9vAseF89QC4cjnqIw+8slszGQJc5BQ4uhDmHjUjv6PRU8xCXZcJPnTjwnFYY0QDedSf6KdXIFZJHxSrt5ECeFOouUP5+0oI28OWvV1INcD0/RkKcXowLMRWzR+MNg3xsLg0pfb6FsRdGtUkwQT6p5HXAzjTHsGndegypqOkcz08JkWPD2QhDAlSpdRdIJHSXKrhxdtwp+/x7PXSX7wYgY+eWAp6Sir4T8tcZmqRvSOKbE4GKmQoBb+raLozVI0dtz0eCxMzc+MmgUpfnVQ1Jg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(376002)(39860400002)(136003)(366004)(346002)(122000001)(6916009)(478600001)(66946007)(54906003)(38070700005)(316002)(83380400001)(36756003)(71200400001)(64756008)(6486002)(38100700002)(2906002)(6506007)(91956017)(66446008)(41300700001)(53546011)(4326008)(26005)(66476007)(33656002)(5660300002)(66556008)(2616005)(76116006)(186003)(86362001)(8936002)(6512007)(8676002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E1D307CDC5C53C48BEEEFC2B3FAEB989@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7076
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a313b99c-1e2b-4e02-4ef8-08da55c44171
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YGrQRtrd/GSifW/y/3LT3X29P+a77JNQfQErntC4QQ4TOYA6edCWCCkLJS3GfB7Vx+bOGnAcHsoNJi68BM7AJX5qwzwBHm+3t5YMA5GWEP0IgINy8AoMB1kC5Qhl33nyYDK0nFc2S7QEE6WGeYxzNu8KenPCdIzUYibNEyt5W7w+xWH1oPBpl/YG1O/qq61mtZbU0ZHjhkyoDUvtQUFv+ZAHRqDY7+XhOLZ5dTjYgpzFllwLBHq5QPx2AS9HyzUdylpkSraQgf2FMnpOgjqyzb4OItYLXnJKf+ThJZ1mbjvhT3JfQ2+qsG4sexHuCspg+z2ls0bEajVqUUu97hO9cOe5C8IZhPLD1xHdzu29kHLONWeSFRZlaHXbfV9JvxM7OkJX0tkhHZsSwOJ5KLHbahh3lURrind8JoaykjgKxN/a3LjFvoHYlFU9AJPnsyPTs6eO6q/UwwAWkJxOHbqElASPiFkOfez8rjjHGuSLzUoICTlhF1+LGhwcTmMu7nliH5etH5eby0OmWSahERl2h8T1wQYLo6SZ7Bu0+ex1ZTbyCsqI31f2ul6BIMMkhpcL6VFe8Oc29Q45POI6tajV1wteMuZ/Asb2ZB4tKnNKVExPj7RXvwzgLN36Lg6PFmr+hPWKFgpT/RyxHkbwCAXevfoGYzkRb1TLOi/OxHSUhdhlt8izra3b6sJxUyUrWqxdQgi8iSPSIm7RvxkkBYlcqhUJsx6Ff+B0QnWfnoPQYkMfACnSS23Mm3PNwX2Yan8fD9Ke8tP+1eXDPNVwWQZi4dIXS1jF/cQARLm5d88jJJzcdwk+NiHYHvmg9npzFteK
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(376002)(346002)(396003)(136003)(39860400002)(36840700001)(46966006)(40470700004)(36860700001)(41300700001)(40460700003)(8676002)(8936002)(6862004)(53546011)(54906003)(86362001)(107886003)(5660300002)(40480700001)(478600001)(6486002)(36756003)(70206006)(82740400003)(6512007)(316002)(70586007)(82310400005)(4326008)(33656002)(2616005)(26005)(336012)(83380400001)(356005)(2906002)(6506007)(47076005)(81166007)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 09:31:11.0185
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e567109-d824-41dc-2ece-08da55c44672
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4343

Hi Julien,

[OFFLIST]

> On 14 Jun 2022, at 10:41, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
> xmalloc()" extended the checks in _xmalloc() to catch any use of the
> helpers from context with interrupts disabled.
>
> Unfortunately, the rule is not followed when initializing the per-CPU
> IRQs:
>
> (XEN) Xen call trace:
> (XEN)    [<002389f4>] _xmalloc+0xfc/0x314 (PC)
> (XEN)    [<00000000>] 00000000 (LR)
> (XEN)    [<0021a7c4>] init_one_irq_desc+0x48/0xd0
> (XEN)    [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
> (XEN)    [<00280834>] init_secondary_IRQ+0x10/0x2c
> (XEN)    [<00288fa4>] start_secondary+0x194/0x274
> (XEN)    [<40010170>] 40010170
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 2:
> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
> (XEN) ****************************************
>
> This is happening because zalloc_cpumask_var() may allocate memory
> if NR_CPUS is > 2 * sizeof(unsigned long).
>
> Avoid the problem by allocating the per-CPU IRQs while preparing the
> CPU.
>
> This also has the benefit of removing a BUG_ON() in the secondary CPU
> code.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I still have issues after applying this patch on qemu-arm32:

(XEN) CPU0: Guest atomics will try 1 times before pausing the domain
(XEN) Bringing up CPU1
(XEN) CPU1: Guest atomics will try 1 times before pausing the domain
(XEN) Assertion 'test_bit(_IRQ_DISABLED, &desc->status)' failed at arch/arm/gic.c:124
(XEN) ----[ Xen-4.17-unstable  arm32  debug=y  Not tainted ]----
(XEN) CPU:    1
(XEN) PC:     0026f134 gic_route_irq_to_xen+0xa4/0xb0
(XEN) CPSR:   400001da MODE:Hypervisor
(XEN)      R0: 00000120 R1: 000000a0 R2: 40002538 R3: 00000000
(XEN)      R4: 40004f00 R5: 000000a0 R6: 40002538 R7: 8000015a
(XEN)      R8: 00000000 R9: 40004f14 R10:3fe10000 R11:43fefeec R12:40002ff8
(XEN) HYP: SP: 43fefed4 LR: 0026f0b8
(XEN)
(XEN)   VTCR_EL2: 00000000
(XEN)  VTTBR_EL2: 0000000000000000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)    HCR_EL2: 00000038
(XEN)  TTBR0_EL2: 00000000bfffa000
(XEN)
(XEN)    ESR_EL2: 00000000
(XEN)  HPFAR_EL2: 00000000
(XEN)      HDFAR: 00000000
(XEN)      HIFAR: 00000000
(XEN)
(XEN) Xen stack trace from sp=43fefed4:
(XEN)    00000000 40004f00 00000000 40002538 8000015a 43feff0c 00272a4c 40002538
(XEN)    002a47c4 00000019 00000000 0026ee28 40010000 43feff2c 00272b30 00309298
(XEN)    00000001 0033b248 00000001 00000000 40010000 43feff3c 0026f7ac 00000000
(XEN)    00201828 43feff54 0027ac3c bfffa000 00000000 00000000 00000001 00000000
(XEN)    400100c0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000
(XEN) Xen call trace:
(XEN)    [<0026f134>] gic_route_irq_to_xen+0xa4/0xb0 (PC)
(XEN)    [<0026f0b8>] gic_route_irq_to_xen+0x28/0xb0 (LR)
(XEN)    [<00272a4c>] setup_irq+0x104/0x178
(XEN)    [<00272b30>] request_irq+0x70/0xb4
(XEN)    [<0026f7ac>] init_maintenance_interrupt+0x40/0x5c
(XEN)    [<0027ac3c>] start_secondary+0x1e8/0x270
(XEN)    [<400100c0>] 400100c0

Just wanted to flag this before you push it out.

I will investigate further and come back to you.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:33:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355385.583015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fgx-0002Y7-Kb; Fri, 24 Jun 2022 09:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355385.583015; Fri, 24 Jun 2022 09:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fgx-0002Y0-Hj; Fri, 24 Jun 2022 09:33:31 +0000
Received: by outflank-mailman (input) for mailman id 355385;
 Fri, 24 Jun 2022 09:33:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tt/v=W7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4fgw-0002Xu-4I
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:33:30 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b441a40f-f3a0-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 11:33:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7032.eurprd04.prod.outlook.com (2603:10a6:20b:112::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Fri, 24 Jun
 2022 09:33:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 09:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b441a40f-f3a0-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kLnIQE+Hkcv0YmVP09qxZbjL9S9YarOzjq0Vr5LvY9c2TklhAIsYfxMX9yFqi8Jpq2xEBAGG5fSMW9srWTitm9sbtN9nOTxHpcDh6gwiWt+R2QLiNBfOEblLT8LN5XDhmCOub6FbAYc/vRFEx6jQ5V84RzAZ1lEm8htR5D0Su7ZLPNC3go2+96wHDnDPnBzTVwJR8kcAuUxQ+j1qTnjr9UCC09HSYSAhg8Oy5sM7uABzZ/OEWbMyqN9kigS+iZ0F+WYwMlOF97hRHGqKOXBLtbcmiLvdY3rmE4PdOyuuWAUMH/A0D5Uzofr20Cvh6QJTvAZqS2CTP98v51axBd2sjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=84TV5Ut/v0xxlrRHQx2qSIfYTcRlFbg8oXXqt5D94M4=;
 b=f7Y46Louh3uv3OJPaV/awH9T0ep3eOMrktO6QwpnaJtgPPoJSTDh6i6fZiDXqEyxBCdx1PpNZs0JV/f4sXgCAglcBpHaydLhZohXbBBZ0R8XfJvtM/gyAJML5V9BmoZmb8u0lMdwkeNwERfUfNCvsKYBAn1nnsoT+itR2w2U1UP6XC59+m3wisxctWIx0NLMQ4k/n4AUlj0A10rGtWoOSvZ3yb0BjqQaf0NRMWSXqeh1/kllETVnFyYGT4ZkIcSlgy+JzQOFta6N9gfmoXURPUbiJPbtFkUpkaKyb30KPdN3f0ZYjDrtov5DlC5H7IXWuX1m9+7jD2BuZqjbPNjCUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=84TV5Ut/v0xxlrRHQx2qSIfYTcRlFbg8oXXqt5D94M4=;
 b=hP2lKUAaASCh2B9OAjhOL1KQTGtDiBKeXMUnCffE50R+69eKkdsK4j5MYf7EHt85Pf5RnDfxvk5ItQtijIqQXWyV856zwWLC0vboTGew4pr3HQR2iyGDdb/NpeksFmNflQKjphI3VKEme1TUMZQuof+kyemoU2LDSvHNJ0WgVX4F7rJSpLpQmlgjsI1SfjYvOBuzHaIsXjOJoocBf7xgSGwR7zbmWU+WYL+P8lItPL6omqiSonXMKjJn1w5oqupdFITE7BapHPPpZBWKhngNQ6h55QgURsZ3vJTEg8n8Vdxx07iga5DdnQoLa89oA4eoQMTaGUg0Pou9XXcB6xF/Nw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
Date: Fri, 24 Jun 2022 11:33:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR02CA0013.eurprd02.prod.outlook.com
 (2603:10a6:20b:6e::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f197188-74fe-4d8a-3071-08da55c49715
X-MS-TrafficTypeDiagnostic: AM7PR04MB7032:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zgls53alpNfKJkZykhD2smkEIeeKYWYuYeor9wwDvEHOCtntFTpHngWjVLO0pzclJCp4A5M/E4V8DPnPyf0kvopbGyTS52Urr1I1FWSUHSdj39K5LOQZ3d0Bw6kzh8LmF8F9iWucwVYwor8Ac+7nF2goU1tjZFk53noLbIvbqnK31aEsoP2iA6G8XIOZYqZ7qt0QJaX2CkCceSkiqhagi3Kvip3LvCY5BDO8dBNI8f+/cdIj6OAir/FAtzQiXuYVfbjJ77sKM3vbAPBUuF9JQ9xnWhtkwiKxAZdP202ajQXb0j80X9puLbLMgGW1Q72Xj2Xc+VojlPu2Cny6ftJ8TkU85AqzkvyP2uKNNalnAOd2Vvk33BCEHLaVJ1LzBkjU+dGtrnWyQxWkRqqOePUu5UOo3/ntVheuvpAdSTEwI7URkB5dpJngAkBxOUm6ogspaUI6SlQSgN7pvHPJVrHouoptVl7doMMHBDeOxFsb6nd6nhmQ96y7pJ9OgI2JUGVClz4qTyZB1LGm1KGmatMeQzkSzar8KrbeFxFqdH98J3Z3vZC0Ie/R8dJl3u/Fftny6U67htdJ5b+YXX+lbXf1LU3fV2NdTv+YtcbwnnSyFC7ozd5FbGE1DxSHoXOUCQN3gGC/5ZVTSz/dtk5j5a7fwfjn+znWKFZHCM2pP5Gc1G/jjweheRvvofqs4by30UlVklvh3eT73r3bT7TnHWoviQWwocWJm7T/eGJRwdiLDjW+mjT696MGHAe8YQiBw3FYwONX6Hy/T+bFri1MGhEv+VKNSg06TGFUVwyxfj+8sQ8iPtrgi4sB36XwCdmREg1oVaUOKIOP8o5Gi00iH5UIAQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(376002)(346002)(396003)(39860400002)(366004)(136003)(66476007)(7416002)(5660300002)(8936002)(53546011)(36756003)(31686004)(66556008)(6916009)(8676002)(478600001)(2906002)(4326008)(54906003)(38100700002)(6486002)(31696002)(66946007)(41300700001)(6506007)(83380400001)(2616005)(26005)(6512007)(86362001)(186003)(316002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f197188-74fe-4d8a-3071-08da55c49715
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 09:33:26.4764
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lwPF0g3kSojD4TO77P4KYkZ/3Wt3M/FXY7PrySQXiMyyj+RV8gQTMCqiZYvV6hZPm8lPvZFHDDNbdSIB0YXhpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7032

On 24.06.2022 10:35, Julien Grall wrote:
> On 24/06/2022 08:18, Wei Chen wrote:
>>> -----Original Message-----
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 23 June 2022 20:54
>>> To: Wei Chen <Wei.Chen@arm.com>
>>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
>>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
>>> devel@lists.xenproject.org
>>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>>>
>>> On 10.06.2022 07:53, Wei Chen wrote:
>>>> --- a/xen/arch/arm/Makefile
>>>> +++ b/xen/arch/arm/Makefile
>>>> @@ -1,6 +1,5 @@
>>>>   obj-$(CONFIG_ARM_32) += arm32/
>>>>   obj-$(CONFIG_ARM_64) += arm64/
>>>> -obj-$(CONFIG_ARM_64) += efi/
>>>>   obj-$(CONFIG_ACPI) += acpi/
>>>>   obj-$(CONFIG_HAS_PCI) += pci/
>>>>   ifneq ($(CONFIG_NO_PLAT),y)
>>>> @@ -20,6 +19,7 @@ obj-y += domain.o
>>>>   obj-y += domain_build.init.o
>>>>   obj-y += domctl.o
>>>>   obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>>> +obj-y += efi/
>>>>   obj-y += gic.o
>>>>   obj-y += gic-v2.o
>>>>   obj-$(CONFIG_GICV3) += gic-v3.o
>>>> --- a/xen/arch/arm/efi/Makefile
>>>> +++ b/xen/arch/arm/efi/Makefile
>>>> @@ -1,4 +1,12 @@
>>>>   include $(srctree)/common/efi/efi-common.mk
>>>>
>>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>>   obj-y += $(EFIOBJ-y)
>>>>   obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>>> +else
>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>>> +# will not be cleaned in "make clean".
>>>> +EFIOBJ-y += stub.o
>>>> +obj-y += stub.o
>>>> +endif
>>>
>>> This has caused
>>>
>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
>>>
>>> for the 32-bit Arm build that I keep doing every once in a while, with
>>> (if it matters) GNU ld 2.38. I guess you will want to consider building
>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
>>> option.
>>>
>>
>> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
>> if Arm maintainers agree.
> 
> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar 
> (aside the EFI files). So it is not entirely clear why we would want to 
> use -fshort-wchar for arm32.

We don't use wchar_t outside of EFI code afaict. Hence to all other code
it should be benign whether -fshort-wchar is in use. So the suggestion
to use the flag unilaterally on Arm32 is really just to silence the ld
warning; off the top of my head I can't see anything wrong with using
the option also for Arm64 or even globally. Yet otoh we typically try
not to make changes for environments where they aren't really needed.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:38:24 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:38:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355393.583025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4flf-0003GJ-CI; Fri, 24 Jun 2022 09:38:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355393.583025; Fri, 24 Jun 2022 09:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4flf-0003GC-9K; Fri, 24 Jun 2022 09:38:23 +0000
Received: by outflank-mailman (input) for mailman id 355393;
 Fri, 24 Jun 2022 09:38:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4fld-0003G6-Us
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:38:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6263fd62-f3a1-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 11:38:21 +0200 (CEST)
Received: from AM6P194CA0072.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::49)
 by AM0PR08MB5124.eurprd08.prod.outlook.com (2603:10a6:208:161::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Fri, 24 Jun
 2022 09:38:18 +0000
Received: from VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::78) by AM6P194CA0072.outlook.office365.com
 (2603:10a6:209:84::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17 via Frontend
 Transport; Fri, 24 Jun 2022 09:38:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT036.mail.protection.outlook.com (10.152.19.204) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 09:38:17 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 24 Jun 2022 09:38:17 +0000
Received: from 3cc7c35f207b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1987DA18-1A00-4D6A-8475-735CD44F0CDA.1; 
 Fri, 24 Jun 2022 09:38:10 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3cc7c35f207b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 09:38:10 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAXPR08MB6877.eurprd08.prod.outlook.com (2603:10a6:102:13a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 09:38:08 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 09:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6263fd62-f3a1-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=eUWT0GdGlOPYpO9Mjz0OzZWjc5yAN0D5qh8VF5LqkIMAfNABl6DPr+haJPDr1TZJ+XlUit8i3szmqMu95VKsRzfIzN7XEghhprTObwwF5xGZzjchQNvGuMomB2y4OQyXiomqP122HLR7TCWyUd79yRdrNNkLSqsroB9YmwpF0UAdUxTTVfb7M5t1Q4n3m6QYHC2ABRj+ZKrV1QKGRbnQrUqzHuXc6LKdZ1JkCA35BWn0C9LMXkNvr0gL9VPktVkRc5fnZUvymV9HrS+ZDm9p4K8m3MNBtDnDZnYGkiPn0KkGd2wxVnZ9pujMpCbWdI6AkaXYfdrW7p+bTB1Vl0M1Lw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=47zris70NczLohxHcqqyqI9T/geQc7MaURXI/wB9rt8=;
 b=Kyf7MNpyI8OYYffR2hza0KXMZb1ifPAnacJjET6LQWA2OH44qHyKmB8y16YShpnqc/X5L0xCQLHU7knahxqt5jowm79tcqRS1py6tjMVOnTAOAE0MkQ4MZtUwDcSi66W5VD3qb7ISquVBP62ZPKKgCntBlB4nbPT2Jl4fxTZjL8YBo6trl73VujfLJoBG6B+8Px3J8J0eJonPoQYjyfpJR2zr9TQNVJXIjhqAnVokY3eDY0GChXnXTwQyby6sXB5rw/+nh2RqYuyahBLA5hqWxAKzuF1eeh7cD+udoTwyFZxO8MtYat95O18bVJhYgleKca2aMdyBSo8loV8hgNmfg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=47zris70NczLohxHcqqyqI9T/geQc7MaURXI/wB9rt8=;
 b=3b06ryTAtWNKPeL+2Hh0K4i+CDGdQO8wkDvPYtP/PEDVDZvgv9FHniUihPjxJiWahyMLPypmgHhkES02EckWTPUicZgvTPnyeiVf/70wmxWA5vKiQtWjJ9pOmrhLAZLbn4JHUVlwLkzuXPnqYgNXhiTCaT1f9RWKhDy8CYpoAhk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 737a7835669229de
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UFtkLk+14lpjRxNG+e+ajcWO74L03vnWM69ORzq00Xp2fITe05RwJJRYd58fK4Badus1KX4IaMzcUDbj5KDtmAcJV2LvSuSSzlSubBdwI1XnImchmlvM2VwUsf1iTwj94ahcNw2XQ3NR7EG/4ut1GGWzYDKxM5qsUmEFrmGqnc9dDtn5SmyTf9c/NZi30cR5WPKeV4lqY+dLigaGNuJxcsQbb+zGmMiv5DEoQsJNMm6RXPIEKwB+sGWBTwqNhOcvPeWGQ/lucoWLtI1/7Zvp+5JwZhGgbekg4GBOuazDX9HqJ7okN50Pdi80Nw/DWKoCPYZBAm0y2TW/dDT8joZDIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=47zris70NczLohxHcqqyqI9T/geQc7MaURXI/wB9rt8=;
 b=fEwiQXl8BKfx8v4eiffjRTxe5M62J5c78m+c3nx+m0qXWXXmg4m92cpk51cm5XRusypijHMkGKIheAgD1Vb999OPnaiCjT7D0lTtQDo7ZoxStoTY3QCIAqEpq5xOfzEobIuRJeSR9o8MICfiSHGktf4zFLr1fOrUrId5NZV0Rw4DSIYMhgJAwk7F4a4mbavn5lNB6OCoFPN4EqxjSiMsjspNZmpHi2G+B87dpYnjPO57gQU7HeQeFa+Xo61WKmcUGMDNBCz/6Yg+WEW7H87UGM3EfI9ogLGaxIITJ3LUJcvinjQe2P6oNQ4Mx98/lEQnt+OTgNSq5ng3DS327vmjhQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=47zris70NczLohxHcqqyqI9T/geQc7MaURXI/wB9rt8=;
 b=3b06ryTAtWNKPeL+2Hh0K4i+CDGdQO8wkDvPYtP/PEDVDZvgv9FHniUihPjxJiWahyMLPypmgHhkES02EckWTPUicZgvTPnyeiVf/70wmxWA5vKiQtWjJ9pOmrhLAZLbn4JHUVlwLkzuXPnqYgNXhiTCaT1f9RWKhDy8CYpoAhk=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
Thread-Topic: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while
 preparing the CPU
Thread-Index: AQHYf9MHjBymRs/pn0u9upJvLeFm3q1eWpcAgAAB+wA=
Date: Fri, 24 Jun 2022 09:38:08 +0000
Message-ID: <68A4ADAB-9D74-43F7-886C-1A49485D4E77@arm.com>
References: <20220614094157.95631-1-julien@xen.org>
 <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
In-Reply-To: <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: dafb6ecf-ea50-4ff6-933e-08da55c54497
x-ms-traffictypediagnostic:
	PAXPR08MB6877:EE_|VE1EUR03FT036:EE_|AM0PR08MB5124:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FhwWqTZzUJ7n/M2SjPZIEB1AzJhzIpYqDDqeAfvjzkIYJTCuIvNuLTqeHEJLwbUTi3DrkSXaonkrK2+8rMyTEUPDOCOEfM4ldmaolpsKP4xRgjaFLLZLSVlvp4lZhWHDKs8C7+JBBsubv0dqB6Oj/XMWmIr/QnrArKFc95fPRD9n9MYwLA+tuyG+bn+xn7yDzy/hQin4QqF6bKpzN8wWdKnw7svwhjvW3M2C3GYfddL6Bj+1AUU7jhNZ0yqTXEu1CB7NoZk7zqZ7GQ1uTo4CfUWBYTafJeoBX5te2kx0zrPmO2b+vOGsvm+MxBJNo8leR+N+6Qqw01gqEppqPXRKHoP19nwZSJz+1QJgN3eUIb9IKs116ltSpdqzBWYrkJYLzZyuuO09BVEa4JXSKt5WAPzgU7Wm4Y+JRIIt7tprm7/u1u4m1dVORzpqrzFg02TvLTqOOsKzjCswITjW27uEY+4hGygqdLSLQBORa6DieiVhHe2uRwrxobTfJAfG9/stZGw0mgrxIUUU37KJ0gSuim4ic2a7LuRMWv7qFtWU+uPyYamoiMnecpcKtaP3NI9EoAtrXcBUae/Tv+3XiM3ZrkeieYMqcvxx9wBl/lPP2hjOSY2Vd6acgz1F8GJ4c+la+PtNPJe7WZ7kjfRXawbpK1YeQFZ0pE3PWsHS5a3VuqdFZJOYZeT/QCAsY3PJNEf2tkCIib2LfY+0sqhKkiksDsmmBSp49HM/Bf3aLiLjzh2ej422ZDNGajintjdaokJ3kIdLSSSBJMQk4FvkaNtbiseYioi4dC1K8MEqEkKXfylQHwDt+84CdRPbDHK27fEI1gvPIsj8vWowkIjp2eWWLg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(376002)(366004)(396003)(136003)(346002)(64756008)(66946007)(91956017)(66446008)(76116006)(66476007)(66556008)(2616005)(71200400001)(54906003)(8676002)(4326008)(86362001)(38070700005)(5660300002)(6916009)(122000001)(8936002)(186003)(6506007)(53546011)(83380400001)(38100700002)(6512007)(478600001)(36756003)(41300700001)(33656002)(26005)(6486002)(2906002)(316002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3ED143ECBC438B408AA79B78F9B094DA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6877
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fbecd789-f18f-430e-c1f1-08da55c53f3e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	COpEFGUchJmOGM2cXzr2TyRFCAn2CnQhFb/8yc9naZgt6BmACwvrDAut523Dom90FRwXKEESb5eIBC5VOmJ7kCQnfRELD8UL5nfTj4CGsgP9jbAvtSOo3e2q7Zh2Fi1yE7cA9hogMtEXKt0ziDkm05C0GLOLx352JtTb+m7Cxbm5wqz1ok+HCHPZO1l1n8ytdFQdMe0Qrcc9qqJ43Lu9YgqbDBe3YWYh71l6+tYA4bAndOu2+z/f0wtqV+4wGktL8nKrOSIxukidHhAAzSVgudVjI9L6LHNe8HyrUZntB/zJRGJ+oeFjlnSMMCf+8Nj9lSQ/FapP8W2lfrUyqGAl9jcj3DsUPO3cw2/CcYQzDsmeXxqvxkwzLAwZ0hMkuUQtuIVh70HCf8a73oMDHeg9WgEluNK+A12xtwFGCVYHAYJd+Z9wSQ2xQYxD5T8xYxZs86fh8XgzBBR5uTs+kaXgUOScSFPFNCMjqdjiPVehJ4hMKtbuIgtLpdnRjPgDW5yhd+6GN2F4SQXZZTiljf01u5wbImHHkaPp1jS8nQ5zOe4gLaG9OwpRlSCAfV5z1DiKWtCBNCCImcsArFuduCkPreVzGjYUtoD93pJOhWtWc/LRCahQesMu94YfLkYnECg8/mFpXVlH/3l4rIKgoQIxEI73mb0ZmVyKm82oLHxzTX3oEO0ydSFKnNRjjCTYbOzeYV/LJ+zRW7Or8rHlnXEBJuDI0GulJYLnL9SnoqZVinX/d55AajspbMovYN1y0YlnRIn7yNodsQmUC3h53TKu5KORGYLTG4+6+lKU1Bm60UASa8AwB8f2nZG7n4aCv+xM
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(346002)(396003)(136003)(39860400002)(376002)(40470700004)(46966006)(36840700001)(8936002)(70586007)(70206006)(4326008)(40480700001)(36756003)(33656002)(316002)(81166007)(8676002)(6862004)(356005)(336012)(2616005)(83380400001)(6486002)(54906003)(2906002)(47076005)(40460700003)(6506007)(41300700001)(82740400003)(82310400005)(478600001)(5660300002)(26005)(53546011)(86362001)(6512007)(107886003)(186003)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 09:38:17.2931
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dafb6ecf-ea50-4ff6-933e-08da55c54497
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5124

Sorry, this was supposed to be off-list but nothing wrong in making everybody
aware I guess :-)

Bertrand

> On 24 Jun 2022, at 10:31, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
> Hi Julien,
>
> [OFFLIST]
>
>> On 14 Jun 2022, at 10:41, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>>
>> Unfortunately, the rule is not followed when initializing the per-CPU
>> IRQs:
>>
>> (XEN) Xen call trace:
>> (XEN) [<002389f4>] _xmalloc+0xfc/0x314 (PC)
>> (XEN) [<00000000>] 00000000 (LR)
>> (XEN) [<0021a7c4>] init_one_irq_desc+0x48/0xd0
>> (XEN) [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
>> (XEN) [<00280834>] init_secondary_IRQ+0x10/0x2c
>> (XEN) [<00288fa4>] start_secondary+0x194/0x274
>> (XEN) [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>>
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
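[Editor's note: the allocation condition described in the commit message above can be modelled with a tiny sketch. This is illustrative only, not Xen's actual code; the helper name cpumask_needs_heap() is invented, and the threshold is taken verbatim from the commit message's wording.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Illustrative sketch, not Xen's code: per the commit message,
 * zalloc_cpumask_var() falls back to a heap allocation (_xmalloc())
 * when NR_CPUS exceeds 2 * sizeof(unsigned long); smaller masks can
 * be embedded and need no allocation, so only large-NR_CPUS builds
 * hit the assertion from IRQ-disabled context.
 */
static bool cpumask_needs_heap(size_t nr_cpus)
{
    return nr_cpus > 2 * sizeof(unsigned long);
}
```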
>>
>> Avoid the problem by allocating the per-CPU IRQs while preparing the
>> CPU.
>>
>> This also has the benefit of removing a BUG_ON() in the secondary CPU
>> code.
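[Editor's note: the shape of the fix can be sketched as below. This is a hypothetical illustration, not the actual patch; the enum values and helper names are invented to model where allocation is legal.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model, not Xen's code: the CPU_UP_PREPARE notifier
 * runs on the boot CPU with interrupts enabled, so memory allocation
 * is legal there; start_secondary() runs on the new CPU with
 * interrupts still disabled, where _xmalloc() trips the assertion.
 */
enum cpu_stage { CPU_UP_PREPARE, CPU_STARTING };

static bool irqs_enabled_at(enum cpu_stage stage)
{
    return stage == CPU_UP_PREPARE;
}

/* Mirrors the ASSERT_ALLOC_CONTEXT idea: allocate only with IRQs on. */
static bool alloc_context_ok(enum cpu_stage stage)
{
    return irqs_enabled_at(stage);
}
```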
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> I still have issues after applying this patch on qemu-arm32:
>
> (XEN) CPU0: Guest atomics will try 1 times before pausing the domain
> (XEN) Bringing up CPU1
> (XEN) CPU1: Guest atomics will try 1 times before pausing the domain
> (XEN) Assertion 'test_bit(_IRQ_DISABLED, &desc->status)' failed at arch/arm/gic.c:124
> (XEN) ----[ Xen-4.17-unstable arm32 debug=y Not tainted ]----
> (XEN) CPU: 1
> (XEN) PC: 0026f134 gic_route_irq_to_xen+0xa4/0xb0
> (XEN) CPSR: 400001da MODE:Hypervisor
> (XEN) R0: 00000120 R1: 000000a0 R2: 40002538 R3: 00000000
> (XEN) R4: 40004f00 R5: 000000a0 R6: 40002538 R7: 8000015a
> (XEN) R8: 00000000 R9: 40004f14 R10:3fe10000 R11:43fefeec R12:40002ff8
> (XEN) HYP: SP: 43fefed4 LR: 0026f0b8
> (XEN)
> (XEN) VTCR_EL2: 00000000
> (XEN) VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN) SCTLR_EL2: 30cd187f
> (XEN) HCR_EL2: 00000038
> (XEN) TTBR0_EL2: 00000000bfffa000
> (XEN)
> (XEN) ESR_EL2: 00000000
> (XEN) HPFAR_EL2: 00000000
> (XEN) HDFAR: 00000000
> (XEN) HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=43fefed4:
> (XEN) 00000000 40004f00 00000000 40002538 8000015a 43feff0c 00272a4c 40002538
> (XEN) 002a47c4 00000019 00000000 0026ee28 40010000 43feff2c 00272b30 00309298
> (XEN) 00000001 0033b248 00000001 00000000 40010000 43feff3c 0026f7ac 00000000
> (XEN) 00201828 43feff54 0027ac3c bfffa000 00000000 00000000 00000001 00000000
> (XEN) 400100c0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000
> (XEN) Xen call trace:
> (XEN) [<0026f134>] gic_route_irq_to_xen+0xa4/0xb0 (PC)
> (XEN) [<0026f0b8>] gic_route_irq_to_xen+0x28/0xb0 (LR)
> (XEN) [<00272a4c>] setup_irq+0x104/0x178
> (XEN) [<00272b30>] request_irq+0x70/0xb4
> (XEN) [<0026f7ac>] init_maintenance_interrupt+0x40/0x5c
> (XEN) [<0027ac3c>] start_secondary+0x1e8/0x270
> (XEN) [<400100c0>] 400100c0
>
> Just wanted to flag this before you push it out.
>
> I will investigate further and come back to you.
>
> Cheers
> Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:44:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:44:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355399.583036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4frx-0004hs-2y; Fri, 24 Jun 2022 09:44:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355399.583036; Fri, 24 Jun 2022 09:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4frx-0004hl-0H; Fri, 24 Jun 2022 09:44:53 +0000
Received: by outflank-mailman (input) for mailman id 355399;
 Fri, 24 Jun 2022 09:44:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tt/v=W7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4frv-0004hf-C8
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:44:51 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80089.outbound.protection.outlook.com [40.107.8.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b4fcd9b-f3a2-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 11:44:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8294.eurprd04.prod.outlook.com (2603:10a6:20b:3f7::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 09:43:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 09:43:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b4fcd9b-f3a2-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LRJpMQKZy4KzOYsdMOXdKsLBqjzhBdpHLWoklYDpczn74yqtgAvyux1fEPas3CdtGZpGoB4a9IQTHzqsHSKL54kPGw5WKxmCPKIsnyb27NoE3XoyvYNvEvSJNXeAVJDmnZQXAA/nMGakdIdUI3x+66SvJGaK7jO5I4OWtv8S8VG+KIJXRGIQ5RNIfQyDg66XxI911llUN1BWemex3ufDgJg5Ut7tUOzP5NGQADGyar6sOdO8xOipitd2r7tXunWDGtusGQwY7Ly7UnnDcyBpDPQ4Evtvn8sJUHuJ0BC+nXJS4GpxXcccvY+NQtPKGZmr5GGu/yJ+BtsvnGQZ1jGDDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wwaVRJ85vdouC/gV2Ca8NE5VHgEtpXwjIR8aFaYvWvk=;
 b=CQw8fKB5PuM7K0tRs+8vDOFGMwNvYFgwTuwNtts4dARWkuedKRRuc0SxEXUBgUmrvVmiduqQBc0pV6C5e/fIxlqD7uOle0VZ/4W9S+G3Hlnm2wHBQBvfiZSuUnzDGWF6AqgCWxd4uv8iD1Y2jhNy24GezjldKjim8ProQi0m3loC2PcR60LfBypul1lwOQLU3NEbaSqQzVToOkk49cs+ERpfn+RCDe/beidj9zNEZlf8RVJnORC4TIYCVFrWsAV/vHrHoPSX7sFIq/dEgTOxIQxIkySwwj66E38W2omTlkfjAioHhkvRbLDD6ewVnyLsUV/bc3bizreSEhhzVzGFLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wwaVRJ85vdouC/gV2Ca8NE5VHgEtpXwjIR8aFaYvWvk=;
 b=uo7AK1RrCkQO8lJ3ewr5WP/QIg1uwCzIE7udTN17f8Qi07H844lr25MvDkZD/wfh6gpEJK/F0akgkIPmnkj/zRCOYM7lpULPLuroBwKoRdAddbIrHY5dWccYuKObI9b7iGbbOXidwErVmV01M/EoswhEUnmhfU+46NkVwzz4ZYHgsQZ+htflkByxLvtzzZdRX3lelJ/z4SeiKbjnrgT+1YgttTXS1EJXSFhX+lhsm23GrRNIX0B1M+Cc/ek9Kaymq4dbo/xieRIYrt/Mm7qw6BcIqrha1v+WKQ+NREOwZpqDk3BNkhQOCBZYjGejQcskC54Q4jIBZhKQCQGZTPByLA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c46469ef-6f42-621c-ef4c-3e1e5d6bb0cd@suse.com>
Date: Fri, 24 Jun 2022 11:43:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 6/7] xen/arm: mm: Move domain_{,un}map_* helpers in a
 separate file
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-7-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220624091146.35716-7-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM7PR04CA0009.eurprd04.prod.outlook.com
 (2603:10a6:20b:110::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 908e0daf-c975-482b-f5a7-08da55c60e2a
X-MS-TrafficTypeDiagnostic: AS8PR04MB8294:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VSR6Ezmrae/nDvmfzCE3TWX/QrYyNI+X/uXffqn9z7b8lg3xSrjGBb+uTA3zFQXPnVc8UBj+gPANfbSF6IFtmvUU8fpB+ovnYqH/NcQDSuMPa2iadp8Jptl0TZw5CBUc1dzRR5fe4aqpSFL3/Lh9p1T7dgoo69Fs2IyNdGXM3cXd9x8tGAxIf7SqBqvIX1niqTyGWesby9QfMRMk3m7yr1GLUraI/93sboBb2tOFVBYfR1wVmu9EQ991ep2f7pjK+y//3IKgMOYUPUsZyRQbFAPcmkW8MbsfvO+jDrA8GrugqaYwisj3SEQ4LDXCz+B5umTCygNdoUQshRltnzbOC9fJlsQWpeI438Z8EianWSAKIJAnIjc2GkjkGVfiBAZYjIJoG6DkFGcDwo78/4nNOkIfHHNB9j914e7+WWtWOJvmdptRcK/xG2P7CevdOKojfTaLYn9fV7xvlV9ua6/efV78QbOctnIkt2F+Hwre6aVYb98r9rqF9X+LlCrEniGsZf4erE+FnFRz91+nD8NfB2cRw3KPIowdroVVduXes8kbj5hdTx/veaRNBuNjtE6b6acFPI935k0dbAICX/pdZDvLtuTHL6iQQ+23iyqAt71+sJFs4ObASW4jbvNggB8brhEYLsCILjMsF9JgII/a/0zgclvVPucrTd279nbk86PAdaFYxmBJ7g5k3QK8hVqDj05uin97FTK6lUVW1jsMXSq6/HxTJAWGdbYmMuUA3c7YrniIKy/3nbnOjK4ZSXuckm63SZMr/lE8GgvZ2wKyQ5DEBeqOC7moiElW6RH+U8xSzUeZPr1eiud8lKxdJdLNnPOYLHx7MZyYu9/6bTnU9w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(396003)(39860400002)(376002)(136003)(346002)(8676002)(66946007)(66476007)(66556008)(2616005)(6916009)(31696002)(5660300002)(86362001)(186003)(7416002)(54906003)(4326008)(8936002)(6506007)(53546011)(38100700002)(31686004)(83380400001)(6512007)(36756003)(26005)(41300700001)(2906002)(6486002)(478600001)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L1p1dEhFTnV3YU9aWWJFKzh6czRrUVhvbTMyaXdHZlFHeCtPbDZ2c0JsU0p6?=
 =?utf-8?B?eGszK2RSQWhwNFdHbGFYdGg3Z29DNDgxUDhXQlFDUFZmSnd2WVE2L0xRQTVx?=
 =?utf-8?B?dHJ6eE8vU3hoRWl4a3VuQk9lcWJPeEZnUHNQRzArQXpIaVFhZkhFZjNabzNa?=
 =?utf-8?B?MDg3L3ZrK0ZRZ09jU2xIaHRmU3pJUzZJZTVoZkpCc29CaWJEWFoxRGpLemlB?=
 =?utf-8?B?TC8vajU5b0FWQThyRTlXUTJXdHF0VHQvZ3RESW0wR2Vxalc4ZUxETTJiUHhV?=
 =?utf-8?B?Y0dlVjFTTzBLU2IxQTdHdzNacXZlS1BkYkNYQzlTSkRpbEhHQStaSUpmK3R2?=
 =?utf-8?B?dnVwUm5GU09sc1ZYQ243bEZRQ29lMEhqSE11YVBVRlFrUGRpaTZBSXRROG5R?=
 =?utf-8?B?UytXQkVZVnZYU0N5UzNwTE1UU09RSEg4WGdoT2ZaNTM1bmo2NC80NXJ5T09H?=
 =?utf-8?B?M21LOU0wMHlEZEFmM1NZVnZVbzMzUkZrQnFrMWRsejlmb2ZaMWxYNDJLdnVu?=
 =?utf-8?B?cENvTUdHQjQvczZWeGc0bGd6WVJBeUorUGxQdWNlOFFZR2JNUUkwMlMvcnVs?=
 =?utf-8?B?V04wL3dSMGRyMU4zTnFLQk9KZTRFczVjL0YzRTVzaFdMQ0RLaHlPVXpITkha?=
 =?utf-8?B?YnpmbmkvMWE4YWtJYUViSGVEMFJGWTdPSy9RR1BSN1lKbUhXdncyZjd2ZWNp?=
 =?utf-8?B?WDJUNzB1SE1ObmVNQ3hrQ08vL3Bjb0hRVS9lN290ek9kUXFPc2h4Wkl2TENj?=
 =?utf-8?B?SWZGSDF6WnFySGhDS0lmLzhZUnNrcGxIR0NOSzdaemhjSkNUVEJVc1EvdEti?=
 =?utf-8?B?dEJVczNhN0Q5QWFKVzB3RGxUT3Q3NmJyaFd4VG5VdlJ1V2hqdVIwRzA2VjZL?=
 =?utf-8?B?RUtIVDFPLzUzZkhYSmYvVWxGNkpvdUZSdnhqU2h0MDZLRi8zdExMN2d2M2U5?=
 =?utf-8?B?cXBGMVN2VlpFV1gycndnSmd5b2dCaEl6SURGK3pKcG5wUjJPalBBUFh3L2ZV?=
 =?utf-8?B?TmlxOEo4VWxrVXZ5Y0xpYmdnRit5VER3Tmdyd2xzcVJrWlBxUk1kZDFMaWVF?=
 =?utf-8?B?MVZjbDNLNE9CWkdkNjlyRVBBRE1pK2tqblJVOXBTVm8waHJuR09aU3RzcDlV?=
 =?utf-8?B?OWRDN2dqNm5nRG9kMGgxTzRiVDJPZHd1TW1wajAxTmwxdEZReGtmQm42dUdK?=
 =?utf-8?B?RXdmL1V6c0IwdVIyNkF3UlNHWnMyUG53R0VHWTFtRHJOM2lwa0Nsc3d3ellX?=
 =?utf-8?B?UElHWmxFT05lbGM5d016b2N5RUVNandUaVBYYWowbnZ5UXAxNjBIbUFGVFFC?=
 =?utf-8?B?Qm5tcW5WZWxBUktUWW9YRjFCOThoMGxHYUlzMUt2S1A4dVlWTDR1Y055azRE?=
 =?utf-8?B?YlhuZncrZ3pXUUoxbS91QjhZai9IRzJwaUxJUlBvVUFpZmcxdWVvWDRUaG1w?=
 =?utf-8?B?dUFYTzRyd0pqOGVubkdTRHMzNFBxMFViUDl4bmEvZDNINnZzZml5VG5xb1VO?=
 =?utf-8?B?c0VMUWNVYk1zL05XSXY5bWZmWG83RUdLdkg3MHo0b3ZsclYwQmFTQTJ5RmUw?=
 =?utf-8?B?K014QmVrNUpYR0Y4Zjk3MjZCb3NuM3AxaGhWZ3loVUh1UHNvZlJuRGQ0OHBy?=
 =?utf-8?B?aEFOaVE2SnpMdEtSTXAyYjBWNTVsOEd6RXJQN3lGcWN5R1ZITnEyQmpSNFU2?=
 =?utf-8?B?VDc1anNXV2hCS1c5TnhvY1FGT1Q0WU9mV1ovMXFEYzNTTmRibHBnK0ZvdUNa?=
 =?utf-8?B?cndIZDM2YWUwa09qd1pTam5RamVxMFVOOWxHbnNtd1V4VjJSNWxGMzBhZnY1?=
 =?utf-8?B?TDdPUU96UmNvamQ1ek5jME9JekFMWEpsZksvYWVuQThiUmFBaE1ZblFzOFVD?=
 =?utf-8?B?NDVadFhPWUxPU0ZqQVpEM3o4djhzUlFZQU5XM3RBVHBBc0tqS1grbGNEZGlV?=
 =?utf-8?B?cmlLblYzOXRhTUZqTExVNXNHWHNQVUFBYTJEVlNMaEY4Q2tETU1RbVBwZE5S?=
 =?utf-8?B?bnhhaEMxL1BZQ2lYbmZQNFJweW03Rk9JTlhhNFFCbWtFemRoeHQrQTA2ajV2?=
 =?utf-8?B?ZW9tNC9xS1dVd2Y0Z1VZT3UzSW55Yjh0dEU4SUh6Zkt4Snc0NjdqdWpBZzBT?=
 =?utf-8?Q?348gZZIwVWKg3XAVuB3PVuuGk?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 908e0daf-c975-482b-f5a7-08da55c60e2a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 09:43:55.7022
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DA8cbGsMkFwcqgWWt5PCH6vTwjiLcOaq7KkaEaI8+uR2afuYvoRA6SzjOF0JTsZicb2njTYlSLtSP9PK8WKkUg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8294

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The file xen/arch/arm/mm.c has been growing quite a lot. It now contains
> various independent parts of the MM subsystem.
> 
> One of them is the helpers to map/unmap a page when CONFIG_DOMAIN_PAGE
> is set (only used by arm32). Move them to a new file
> xen/arch/arm/domain_page.c.
> 
> In order to be able to use CONFIG_DOMAIN_PAGE in the Makefile, a new
> Kconfig option is introduced that is selected by x86 and arm32.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

In principle
Acked-by: Jan Beulich <jbeulich@suse.com>

But ...

> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -10,6 +10,7 @@ config X86
>  	select ALTERNATIVE_CALL
>  	select ARCH_SUPPORTS_INT128
>  	select CORE_PARKING
> +	select DOMAIN_PAGE
>  	select HAS_ALTERNATIVE
>  	select HAS_COMPAT
>  	select HAS_CPUFREQ
> diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
> index 07bcd158314b..fbc4bb3416bd 100644
> --- a/xen/arch/x86/include/asm/config.h
> +++ b/xen/arch/x86/include/asm/config.h
> @@ -22,7 +22,6 @@
>  #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
>  #define CONFIG_DISCONTIGMEM 1
>  #define CONFIG_NUMA_EMU 1
> -#define CONFIG_DOMAIN_PAGE 1

... while I realize it has been named this way, I wonder whether ...

> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -11,6 +11,9 @@ config COMPAT
>  config CORE_PARKING
>  	bool
>  
> +config DOMAIN_PAGE
> +	bool

... this isn't a good opportunity to make the name match what it is
about - MAP_DOMAIN_PAGE. See e.g. {clear,copy}_domain_page(), which
aren't under this guard; domain pages in general are a concept we
can't get away without in the first place.
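[Editor's note: concretely, Jan's suggestion amounts to a pattern like the following. A sketch only; MAP_DOMAIN_PAGE is his proposed name, not an existing option.]

```kconfig
# xen/common/Kconfig (sketch): an invisible bool, off by default
config MAP_DOMAIN_PAGE
	bool

# xen/arch/x86/Kconfig (sketch): architectures that need the
# map_domain_page() machinery opt in via select
config X86
	select MAP_DOMAIN_PAGE
```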

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 09:50:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 09:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355406.583048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fx0-00064M-Qc; Fri, 24 Jun 2022 09:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355406.583048; Fri, 24 Jun 2022 09:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4fx0-00064F-MC; Fri, 24 Jun 2022 09:50:06 +0000
Received: by outflank-mailman (input) for mailman id 355406;
 Fri, 24 Jun 2022 09:50:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4fwy-0005om-VR
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 09:50:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fwu-0002iw-Rn; Fri, 24 Jun 2022 09:50:00 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4fwu-0007Q4-Ja; Fri, 24 Jun 2022 09:50:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Pvl+/kS0qKcoT9BUgIyK9M/J7GVAo0VYKWZrIVAjL5E=; b=0IVfsU4a+Z+zah6yzOSS4HSYBS
	kYYPZrYKw2+8JQbJmkCeimWBbldX5zqXAvGtb53LLRnuSr/O/6e778rLlkBMDhO8LGkKpGaNFwq6a
	Uio5nOJOenwdRWo5p1jQ+RGDM69iEqZxJVE1MllRWciiy8wOiGC6Nvhe6Mrnc9dxBWqM=;
Message-ID: <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
Date: Fri, 24 Jun 2022 10:49:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
To: Jan Beulich <jbeulich@suse.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jan,

On 24/06/2022 10:33, Jan Beulich wrote:
> On 24.06.2022 10:35, Julien Grall wrote:
>> On 24/06/2022 08:18, Wei Chen wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 23 June 2022 20:54
>>>> To: Wei Chen <Wei.Chen@arm.com>
>>>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>>>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>>>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
>>>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
>>>> devel@lists.xenproject.org
>>>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>>>>
>>>> On 10.06.2022 07:53, Wei Chen wrote:
>>>>> --- a/xen/arch/arm/Makefile
>>>>> +++ b/xen/arch/arm/Makefile
>>>>> @@ -1,6 +1,5 @@
>>>>>    obj-$(CONFIG_ARM_32) += arm32/
>>>>>    obj-$(CONFIG_ARM_64) += arm64/
>>>>> -obj-$(CONFIG_ARM_64) += efi/
>>>>>    obj-$(CONFIG_ACPI) += acpi/
>>>>>    obj-$(CONFIG_HAS_PCI) += pci/
>>>>>    ifneq ($(CONFIG_NO_PLAT),y)
>>>>> @@ -20,6 +19,7 @@ obj-y += domain.o
>>>>>    obj-y += domain_build.init.o
>>>>>    obj-y += domctl.o
>>>>>    obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>>>> +obj-y += efi/
>>>>>    obj-y += gic.o
>>>>>    obj-y += gic-v2.o
>>>>>    obj-$(CONFIG_GICV3) += gic-v3.o
>>>>> --- a/xen/arch/arm/efi/Makefile
>>>>> +++ b/xen/arch/arm/efi/Makefile
>>>>> @@ -1,4 +1,12 @@
>>>>>    include $(srctree)/common/efi/efi-common.mk
>>>>>
>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>>>    obj-y += $(EFIOBJ-y)
>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>>>> +else
>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>>>> +# will not be cleaned in "make clean".
>>>>> +EFIOBJ-y += stub.o
>>>>> +obj-y += stub.o
>>>>> +endif
>>>>
>>>> This has caused
>>>>
>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
>>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
>>>>
>>>> for the 32-bit Arm build that I keep doing every once in a while, with
>>>> (if it matters) GNU ld 2.38. I guess you will want to consider building
>>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
>>>> option.
>>>>
>>>
>>> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
>>> if Arm maintainers agree.
>>
>> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar
>> (aside the EFI files). So it is not entirely clear why we would want to
>> use -fshort-wchar for arm32.
> 
> We don't use wchar_t outside of EFI code afaict. Hence to all other code
> it should be benign whether -fshort-wchar is in use. So the suggestion
> to use the flag unilaterally on Arm32 is really just to silence the ld
> warning;

Ok. This is odd. Why would ld warn on arm32 but not on other 
architectures? Wei, do you think you can have a look?

> off the top of my head I can't see anything wrong with using
> the option also for Arm64 or even globally. Yet otoh we typically try to
> not make changes for environments where they aren't really needed.

I agree. If we need a workaround, then my preference would be to not 
build stub.c with -fshort-wchar.
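For illustration, one way that workaround could look, assuming the kbuild-style per-object CFLAGS_REMOVE mechanism is available in Xen's build system (a hypothetical sketch, not the actual fix):

```make
# xen/arch/arm/efi/Makefile (sketch)
include $(srctree)/common/efi/efi-common.mk

ifeq ($(CONFIG_ARM_EFI),y)
obj-y += $(EFIOBJ-y)
obj-$(CONFIG_ACPI) += efi-dom0.init.o
else
EFIOBJ-y += stub.o
obj-y += stub.o
# Build the stub without -fshort-wchar so the wchar_t size recorded
# in its build attributes matches the rest of the non-EFI arm32 build
# and ld stops warning about mixed 2-byte/4-byte wchar_t objects.
CFLAGS_REMOVE_stub.o := -fshort-wchar
endif
```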

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 10:01:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 10:01:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355415.583059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4g7w-0007o4-4U; Fri, 24 Jun 2022 10:01:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355415.583059; Fri, 24 Jun 2022 10:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4g7w-0007nx-1T; Fri, 24 Jun 2022 10:01:24 +0000
Received: by outflank-mailman (input) for mailman id 355415;
 Fri, 24 Jun 2022 10:01:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4g7v-0007nr-Gw
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 10:01:23 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140053.outbound.protection.outlook.com [40.107.14.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99958b53-f3a4-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 12:01:22 +0200 (CEST)
Received: from AM0PR01CA0175.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::44) by PR3PR08MB5868.eurprd08.prod.outlook.com
 (2603:10a6:102:81::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Fri, 24 Jun
 2022 10:01:20 +0000
Received: from AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:aa:cafe::8f) by AM0PR01CA0175.outlook.office365.com
 (2603:10a6:208:aa::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Fri, 24 Jun 2022 10:01:20 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT041.mail.protection.outlook.com (10.152.17.186) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 10:01:19 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Fri, 24 Jun 2022 10:01:19 +0000
Received: from 889c47cd83f6.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6D67E294-48D6-4586-BC8B-E2984E999EEE.1; 
 Fri, 24 Jun 2022 10:01:12 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 889c47cd83f6.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 10:01:12 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by VE1PR08MB5197.eurprd08.prod.outlook.com (2603:10a6:803:106::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Fri, 24 Jun
 2022 10:01:10 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 10:01:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99958b53-f3a4-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1cd8ef8b82d7117a
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
Thread-Topic: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while
 preparing the CPU
Thread-Index: AQHYf9MHjBymRs/pn0u9upJvLeFm3q1eWpcAgAAIaoA=
Date: Fri, 24 Jun 2022 10:01:09 +0000
Message-ID: <E71E0034-ED75-4783-9A8B-01D6BBF293A9@arm.com>
References: <20220614094157.95631-1-julien@xen.org>
 <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
In-Reply-To: <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 0f51de09-9abd-45a1-c72e-08da55c87c5a
x-ms-traffictypediagnostic:
	VE1PR08MB5197:EE_|AM5EUR03FT041:EE_|PR3PR08MB5868:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <EE30AF5CDC501D429C6E2824C4C6B660@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5197
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	846bbb39-f8cd-4886-5b38-08da55c8767d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 10:01:19.3865
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f51de09-9abd-45a1-c72e-08da55c87c5a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5868

Hi,

> On 24 Jun 2022, at 10:31, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Julien,
> 
> [OFFLIST]
> 
>> On 14 Jun 2022, at 10:41, Julien Grall <julien@xen.org> wrote:
>> 
>> From: Julien Grall <jgrall@amazon.com>
>> 
>> Commit 5047cd1d5dea "xen/common: Use enhanced ASSERT_ALLOC_CONTEXT in
>> xmalloc()" extended the checks in _xmalloc() to catch any use of the
>> helpers from context with interrupts disabled.
>> 
>> Unfortunately, the rule is not followed when initializing the per-CPU
>> IRQs:
>> 
>> (XEN) Xen call trace:
>> (XEN) [<002389f4>] _xmalloc+0xfc/0x314 (PC)
>> (XEN) [<00000000>] 00000000 (LR)
>> (XEN) [<0021a7c4>] init_one_irq_desc+0x48/0xd0
>> (XEN) [<002807a8>] irq.c#init_local_irq_data+0x48/0xa4
>> (XEN) [<00280834>] init_secondary_IRQ+0x10/0x2c
>> (XEN) [<00288fa4>] start_secondary+0x194/0x274
>> (XEN) [<40010170>] 40010170
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/xmalloc_tlsf.c:601
>> (XEN) ****************************************
>> 
>> This is happening because zalloc_cpumask_var() may allocate memory
>> if NR_CPUS is > 2 * sizeof(unsigned long).
>> 
>> Avoid the problem by allocating the per-CPU IRQs while preparing the
>> CPU.
>> 
>> This also has the benefit of removing a BUG_ON() in the secondary CPU
>> code.
>> 
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> I still have issues after applying this patch on qemu-arm32:
> 
> (XEN) CPU0: Guest atomics will try 1 times before pausing the domain
> (XEN) Bringing up CPU1
> (XEN) CPU1: Guest atomics will try 1 times before pausing the domain
> (XEN) Assertion 'test_bit(_IRQ_DISABLED, &desc->status)' failed at arch/arm/gic.c:124
> (XEN) ----[ Xen-4.17-unstable arm32 debug=y Not tainted ]----
> (XEN) CPU: 1
> (XEN) PC: 0026f134 gic_route_irq_to_xen+0xa4/0xb0
> (XEN) CPSR: 400001da MODE:Hypervisor
> (XEN) R0: 00000120 R1: 000000a0 R2: 40002538 R3: 00000000
> (XEN) R4: 40004f00 R5: 000000a0 R6: 40002538 R7: 8000015a
> (XEN) R8: 00000000 R9: 40004f14 R10:3fe10000 R11:43fefeec R12:40002ff8
> (XEN) HYP: SP: 43fefed4 LR: 0026f0b8
> (XEN)
> (XEN) VTCR_EL2: 00000000
> (XEN) VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN) SCTLR_EL2: 30cd187f
> (XEN) HCR_EL2: 00000038
> (XEN) TTBR0_EL2: 00000000bfffa000
> (XEN)
> (XEN) ESR_EL2: 00000000
> (XEN) HPFAR_EL2: 00000000
> (XEN) HDFAR: 00000000
> (XEN) HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=43fefed4:
> (XEN) 00000000 40004f00 00000000 40002538 8000015a 43feff0c 00272a4c 40002538
> (XEN) 002a47c4 00000019 00000000 0026ee28 40010000 43feff2c 00272b30 00309298
> (XEN) 00000001 0033b248 00000001 00000000 40010000 43feff3c 0026f7ac 00000000
> (XEN) 00201828 43feff54 0027ac3c bfffa000 00000000 00000000 00000001 00000000
> (XEN) 400100c0 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> (XEN) 00000000 00000000 00000000
> (XEN) Xen call trace:
> (XEN) [<0026f134>] gic_route_irq_to_xen+0xa4/0xb0 (PC)
> (XEN) [<0026f0b8>] gic_route_irq_to_xen+0x28/0xb0 (LR)
> (XEN) [<00272a4c>] setup_irq+0x104/0x178
> (XEN) [<00272b30>] request_irq+0x70/0xb4
> (XEN) [<0026f7ac>] init_maintenance_interrupt+0x40/0x5c
> (XEN) [<0027ac3c>] start_secondary+0x1e8/0x270
> (XEN) [<400100c0>] 400100c0
> 
> Just wanted to signal this before you push it out.
> 
> I will investigate deeper and come back to you.

Pwclient did not apply the whole patch on my first run, only the smpboot
part. Re-running it applied the patch correctly and now my tests are
passing, so:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> 
> Cheers
> Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 10:05:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 10:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355421.583070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4gCB-0008PH-Oj; Fri, 24 Jun 2022 10:05:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355421.583070; Fri, 24 Jun 2022 10:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4gCB-0008PA-Kz; Fri, 24 Jun 2022 10:05:47 +0000
Received: by outflank-mailman (input) for mailman id 355421;
 Fri, 24 Jun 2022 10:05:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tt/v=W7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o4gCA-0008P4-NR
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 10:05:46 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2084.outbound.protection.outlook.com [40.107.22.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2415b89a-f3a5-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 12:05:14 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB4083.eurprd04.prod.outlook.com (2603:10a6:208:64::29)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.23; Fri, 24 Jun
 2022 10:05:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 10:05:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2415b89a-f3a5-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>
Date: Fri, 24 Jun 2022 12:05:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0528.eurprd06.prod.outlook.com
 (2603:10a6:20b:49d::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ae456280-ae8f-4175-0409-08da55c919e3
X-MS-TrafficTypeDiagnostic: AM0PR04MB4083:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ae456280-ae8f-4175-0409-08da55c919e3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 10:05:43.8535
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jVBzEiE1NCZkY+Kg0/5jGjWpErTZyw6RYx9EjQoWDEqzmXkoaZZc8w0B9KY7TknQn0lukICKO7ZMOlCXxQlP2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB4083

On 24.06.2022 11:49, Julien Grall wrote:
> Hi Jan,
> 
> On 24/06/2022 10:33, Jan Beulich wrote:
>> On 24.06.2022 10:35, Julien Grall wrote:
>>> On 24/06/2022 08:18, Wei Chen wrote:
>>>>> -----Original Message-----
>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>> Sent: 2022年6月23日 20:54
>>>>> To: Wei Chen <Wei.Chen@arm.com>
>>>>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>>>>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>>>>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>>>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
>>>>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
>>>>> devel@lists.xenproject.org
>>>>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>>>>>
>>>>> On 10.06.2022 07:53, Wei Chen wrote:
>>>>>> --- a/xen/arch/arm/Makefile
>>>>>> +++ b/xen/arch/arm/Makefile
>>>>>> @@ -1,6 +1,5 @@
>>>>>>    obj-$(CONFIG_ARM_32) += arm32/
>>>>>>    obj-$(CONFIG_ARM_64) += arm64/
>>>>>> -obj-$(CONFIG_ARM_64) += efi/
>>>>>>    obj-$(CONFIG_ACPI) += acpi/
>>>>>>    obj-$(CONFIG_HAS_PCI) += pci/
>>>>>>    ifneq ($(CONFIG_NO_PLAT),y)
>>>>>> @@ -20,6 +19,7 @@ obj-y += domain.o
>>>>>>    obj-y += domain_build.init.o
>>>>>>    obj-y += domctl.o
>>>>>>    obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>>>>> +obj-y += efi/
>>>>>>    obj-y += gic.o
>>>>>>    obj-y += gic-v2.o
>>>>>>    obj-$(CONFIG_GICV3) += gic-v3.o
>>>>>> --- a/xen/arch/arm/efi/Makefile
>>>>>> +++ b/xen/arch/arm/efi/Makefile
>>>>>> @@ -1,4 +1,12 @@
>>>>>>    include $(srctree)/common/efi/efi-common.mk
>>>>>>
>>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>>>>    obj-y += $(EFIOBJ-y)
>>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>>>>> +else
>>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>>>>> +# will not be cleaned in "make clean".
>>>>>> +EFIOBJ-y += stub.o
>>>>>> +obj-y += stub.o
>>>>>> +endif
>>>>>
>>>>> This has caused
>>>>>
>>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
>>>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
>>>>>
>>>>> for the 32-bit Arm build that I keep doing every once in a while, with
>>>>> (if it matters) GNU ld 2.38. I guess you will want to consider building
>>>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
>>>>> option.
>>>>>
>>>>
>>>> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
>>>> if Arm maintainers agree.
>>>
>>> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar
>>> (aside the EFI files). So it is not entirely clear why we would want to
>>> use -fshort-wchar for arm32.
>>
>> We don't use wchar_t outside of EFI code afaict. Hence to all other code
>> it should be benign whether -fshort-wchar is in use. So the suggestion
>> to use the flag unilaterally on Arm32 is really just to silence the ld
>> warning;
> 
> Ok. This is odd. Why would ld warn on arm32 but not other arch?

Arm32 embeds ABI information in a note section in each object file.
The mismatch of the wchar_t part of this information is what causes
ld to emit the warning.

>> off the top of my head I can't see anything wrong with using
>> the option also for Arm64 or even globally. Yet otoh we typically try to
>> not make changes for environments where they aren't really needed.
> 
> I agree. If we need a workaround, then my preference would be to not 
> build stub.c with -fshort-wchar.

This would need to be an Arm-special then, as on x86 it needs to be built
this way.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 10:09:11 2022
Message-ID: <bad83568-94c6-6d90-308b-ae9965f54754@suse.com>
Date: Fri, 24 Jun 2022 12:09:00 +0200
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
 <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>
In-Reply-To: <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>

On 24.06.2022 12:05, Jan Beulich wrote:
> On 24.06.2022 11:49, Julien Grall wrote:
>> Hi Jan,
>>
>> On 24/06/2022 10:33, Jan Beulich wrote:
>>> On 24.06.2022 10:35, Julien Grall wrote:
>>>> On 24/06/2022 08:18, Wei Chen wrote:
>>>>>> -----Original Message-----
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 2022年6月23日 20:54
>>>>>> To: Wei Chen <Wei.Chen@arm.com>
>>>>>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>>>>>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>>>>>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>>>>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei
>>>>>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
>>>>>> devel@lists.xenproject.org
>>>>>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>>>>>>
>>>>>> On 10.06.2022 07:53, Wei Chen wrote:
>>>>>>> --- a/xen/arch/arm/Makefile
>>>>>>> +++ b/xen/arch/arm/Makefile
>>>>>>> @@ -1,6 +1,5 @@
>>>>>>>    obj-$(CONFIG_ARM_32) += arm32/
>>>>>>>    obj-$(CONFIG_ARM_64) += arm64/
>>>>>>> -obj-$(CONFIG_ARM_64) += efi/
>>>>>>>    obj-$(CONFIG_ACPI) += acpi/
>>>>>>>    obj-$(CONFIG_HAS_PCI) += pci/
>>>>>>>    ifneq ($(CONFIG_NO_PLAT),y)
>>>>>>> @@ -20,6 +19,7 @@ obj-y += domain.o
>>>>>>>    obj-y += domain_build.init.o
>>>>>>>    obj-y += domctl.o
>>>>>>>    obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>>>>>> +obj-y += efi/
>>>>>>>    obj-y += gic.o
>>>>>>>    obj-y += gic-v2.o
>>>>>>>    obj-$(CONFIG_GICV3) += gic-v3.o
>>>>>>> --- a/xen/arch/arm/efi/Makefile
>>>>>>> +++ b/xen/arch/arm/efi/Makefile
>>>>>>> @@ -1,4 +1,12 @@
>>>>>>>    include $(srctree)/common/efi/efi-common.mk
>>>>>>>
>>>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>>>>>    obj-y += $(EFIOBJ-y)
>>>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>>>>>> +else
>>>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>>>>>> +# will not be cleaned in "make clean".
>>>>>>> +EFIOBJ-y += stub.o
>>>>>>> +obj-y += stub.o
>>>>>>> +endif
>>>>>>
>>>>>> This has caused
>>>>>>
>>>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
>>>>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
>>>>>>
>>>>>> for the 32-bit Arm build that I keep doing every once in a while, with
>>>>>> (if it matters) GNU ld 2.38. I guess you will want to consider building
>>>>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
>>>>>> option.
>>>>>>
>>>>>
>>>>> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
>>>>> if Arm maintainers agree.
>>>>
>>>> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar
>>>> (aside the EFI files). So it is not entirely clear why we would want to
>>>> use -fshort-wchar for arm32.
>>>
>>> We don't use wchar_t outside of EFI code afaict. Hence to all other code
>>> it should be benign whether -fshort-wchar is in use. So the suggestion
>>> to use the flag unilaterally on Arm32 is really just to silence the ld
>>> warning;
>>
>> Ok. This is odd. Why would ld warn on arm32 but not other arch?
> 
> Arm32 embeds ABI information in a note section in each object file.

Or a note-like one (just to avoid possible confusion); I think it's
".ARM.attributes".

Jan

> The mismatch of the wchar_t part of this information is what causes
> ld to emit the warning.
> 
>>> off the top of my head I can't see anything wrong with using
>>> the option also for Arm64 or even globally. Yet otoh we typically try to
>>> not make changes for environments where they aren't really needed.
>>
>> I agree. If we need a workaround, then my preference would be to not 
>> build stub.c with -fshort-wchar.
> 
> This would need to be an Arm-special then, as on x86 it needs to be built
> this way.
> 
> Jan
> 

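The workaround Julien prefers (leaving stub.c without -fshort-wchar on Arm) could be expressed as a per-object flag override along these lines (a sketch; CFLAGS_REMOVE_<obj>.o follows the Linux Kbuild convention, which Xen's build system resembles, and whether Xen honours it is an assumption here):

```make
# xen/arch/arm/efi/Makefile (hypothetical fragment)
# Keep -fshort-wchar for the real EFI objects (UCS-2 strings),
# but drop it for the stub so the 32-bit Arm link does not mix
# 2-byte and 4-byte wchar_t ABI tags in .ARM.attributes.
CFLAGS_REMOVE_stub.o := -fshort-wchar
```

On x86 this override would have to stay out, since there stub.o is linked into the EFI binary and must be built with -fshort-wchar, hence Jan's note that it would be an Arm-special.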


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 10:14:52 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171337-mainreport@xen.org>
Subject: [linux-linus test] 171337: regressions - FAIL
X-Osstest-Versions-This:
    linux=92f20ff72066d8d7e2ffb655c2236259ac9d1c5d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 10:14:37 +0000

flight 171337 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171337/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 build-arm64                   6 xen-build                fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 test-armhf-armhf-libvirt    18 guest-start/debian.repeat fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                92f20ff72066d8d7e2ffb655c2236259ac9d1c5d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    5 days
Failing since        171280  2022-06-19 15:12:25 Z    4 days   14 attempts
Testing same since   171337  2022-06-23 22:42:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Brian Foster <bfoster@redhat.com>
  Carlos Llamas <cmllamas@google.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Hoang Le <hoang.h.le@dektech.com.au>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Savitz <jsavitz@redhat.com>
  John Fastabend <john.fastabend@gmail.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Herring <robh@kernel.org>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  sunliming <sunliming@kylinos.cn>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@fb.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yu Liao <liaoyu15@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5108 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 10:53:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 10:53:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355445.583103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4gwH-0006Im-55; Fri, 24 Jun 2022 10:53:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355445.583103; Fri, 24 Jun 2022 10:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4gwH-0006If-1i; Fri, 24 Jun 2022 10:53:25 +0000
Received: by outflank-mailman (input) for mailman id 355445;
 Fri, 24 Jun 2022 10:53:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CwSF=W7=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1o4gwF-0006IZ-Tj
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 10:53:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id ca66d405-f3ab-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 12:52:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 01AF323A;
 Fri, 24 Jun 2022 03:53:21 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9F0623F792;
 Fri, 24 Jun 2022 03:53:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca66d405-f3ab-11ec-b725-ed86ccbb4733
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] docs/misra: Add instructions for cppcheck
Date: Fri, 24 Jun 2022 11:53:11 +0100
Message-Id: <20220624105311.21057-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

Add instructions on how to build cppcheck, document the version currently
used, and provide an example of using the cppcheck integration to run the
analysis on the Xen codebase.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 docs/misra/cppcheck.txt

diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
new file mode 100644
index 000000000000..4df0488794aa
--- /dev/null
+++ b/docs/misra/cppcheck.txt
@@ -0,0 +1,66 @@
+Cppcheck for Xen static and MISRA analysis
+==========================================
+
+Xen can be analysed for both static analysis problems and MISRA violations
+using cppcheck, an open source tool that produces a report with all the
+findings. Support has been added to the Xen Makefile, so it is very easy to
+use; this document explains how.
+
+The first recommendation is to use exactly the version listed on this page and
+to pass the same options to the build system, so that every Xen developer can
+reproduce the same findings.
+
+Install cppcheck in the system
+==============================
+
+Cppcheck can be retrieved from the GitHub repository or by downloading the
+tarball; the version tested so far is 2.7:
+
+ - https://github.com/danmar/cppcheck/tree/2.7
+ - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
+
+To compile and install it, here is the complete command line:
+
+make MATCHCOMPILER=yes \
+    FILESDIR=/usr/share/cppcheck \
+    CFGDIR=/usr/share/cppcheck/cfg \
+    HAVE_RULES=yes \
+    CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
+    install
+
+This will compile and install cppcheck in /usr/bin; all the cppcheck config
+files and addons will be installed in the /usr/share/cppcheck folder. Please
+modify the FILESDIR and CFGDIR paths if they are not convenient for your
+
+If you don't want to overwrite a cppcheck binary already installed on your
+system, you can omit the "install" target, FILESDIR and CFGDIR; cppcheck will
+just be compiled and the binaries will be available in the build folder.
+If you choose to do that, a later section of this page explains how to use a
+local installation of cppcheck for the Xen analysis.
+
+Dependencies are listed in the Readme.md of the project repository.
+
+Use cppcheck to analyse Xen
+===========================
+
+Using the cppcheck integration is very simple; it requires a few steps:
+
+ 1) Compile Xen
+ 2) Call the cppcheck make target to generate a report in XML format:
+    make CPPCHECK_MISRA=y cppcheck
+ 3) Call the cppcheck-html make target to generate a report in XML and HTML
+    format:
+    make CPPCHECK_MISRA=y cppcheck-html
+
+    In case the cppcheck binaries are not in the PATH, the CPPCHECK and
+    CPPCHECK_HTMLREPORT variables can be overridden with the full path to the
+    binaries:
+
+    make -C xen \
+        CPPCHECK=/path/to/cppcheck \
+        CPPCHECK_HTMLREPORT=/path/to/cppcheck-htmlreport \
+        CPPCHECK_MISRA=y \
+        cppcheck-html
+
+The output goes by default into a folder named cppcheck-htmlreport, but the
+name can be changed by passing it in the CPPCHECK_HTMLREPORT_OUTDIR variable.
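
As a worked example of step 3 above, the override of CPPCHECK and
CPPCHECK_HTMLREPORT can be scripted when using a locally compiled,
non-installed cppcheck. The build directory below is a hypothetical
placeholder, not a path mandated by the patch; substitute your own cppcheck
build tree.

```shell
#!/bin/sh
# Sketch of invoking the cppcheck-html target with a locally compiled
# (non-installed) cppcheck, as described in step 3 of the document.
# CPPCHECK_DIR is a hypothetical example path; point it at the folder
# where you compiled cppcheck 2.7.
CPPCHECK_DIR="$HOME/src/cppcheck-2.7"

# Echo the make invocation rather than executing it, so this sketch is
# self-contained; drop the leading 'echo' to run the real analysis.
echo make -C xen \
    CPPCHECK="$CPPCHECK_DIR/cppcheck" \
    CPPCHECK_HTMLREPORT="$CPPCHECK_DIR/htmlreport/cppcheck-htmlreport" \
    CPPCHECK_MISRA=y \
    cppcheck-html
```

The same pattern applies to the plain cppcheck target; only the final make
target name changes.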
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 11:12:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 11:12:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355452.583114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hEP-0000EB-Lh; Fri, 24 Jun 2022 11:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355452.583114; Fri, 24 Jun 2022 11:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hEP-0000E4-IA; Fri, 24 Jun 2022 11:12:09 +0000
Received: by outflank-mailman (input) for mailman id 355452;
 Fri, 24 Jun 2022 11:12:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4hEO-0000Du-1k; Fri, 24 Jun 2022 11:12:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4hEN-0004J6-Tz; Fri, 24 Jun 2022 11:12:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4hEN-0006k7-Ir; Fri, 24 Jun 2022 11:12:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4hEN-0004sA-IN; Fri, 24 Jun 2022 11:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y1cBkpqGxA5p+x2ojTbBfRVYoYbu9b/D3v5mYoUuVK8=; b=gDjvYII1vk/lFLj16goBflbYVN
	OpCJ+Vyhf2yhLpYUo17x0glNxAOfcBHM2EclxzgHP/Dfe3XE9zmKdNKPAVGcwIaEAOlZBuiOG6rZx
	J2ZSM4LGs9dNJCLMvOngLIoesGvnpTafLxsSpzJcZr3DqVwe+TzAQQSL8/VBUjL9A/ZQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171338: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=a55abe6c5120bd7614a4c9b2027eeab8d6c3bd54
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 11:12:07 +0000

flight 171338 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171338/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a55abe6c5120bd7614a4c9b2027eeab8d6c3bd54
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  714 days
Failing since        151818  2020-07-11 04:18:52 Z  713 days  695 attempts
Testing same since   171323  2022-06-23 04:21:45 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114109 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 11:21:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 11:21:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355460.583125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hN1-0001eZ-H7; Fri, 24 Jun 2022 11:21:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355460.583125; Fri, 24 Jun 2022 11:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hN1-0001eS-Du; Fri, 24 Jun 2022 11:21:03 +0000
Received: by outflank-mailman (input) for mailman id 355460;
 Fri, 24 Jun 2022 11:21:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4hN0-0001eM-8Y
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 11:21:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4hMz-0004SA-Rd; Fri, 24 Jun 2022 11:21:01 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4hMz-0004hA-L1; Fri, 24 Jun 2022 11:21:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=aBYO87BqO1dAbCxIojsFJFI205Xb1pxEQhjAL1OFQGQ=; b=xUbQ3xttbEucwonneS1Y4jEsqf
	7KZFztsCQ0sgnIkj2OVtLLq14Vgy2TcBbvEnbbGJueJ3ziaJRY4y3iblx8+PAs5dwSyL26ul+W22p
	SDRemN4E5pvAaJZ/0NLXu/S/NN8TWnYMjrp1HvfYFSdNv0d9EyQR6kHiBlm+m63XfZps=;
Message-ID: <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
Date: Fri, 24 Jun 2022 12:20:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220624105311.21057-1-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 24/06/2022 11:53, Luca Fancellu wrote:
> Add instructions on how to build cppcheck, the version currently used
> and an example to use the cppcheck integration to run the analysis on
> the Xen codebase
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>   docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 66 insertions(+)
>   create mode 100644 docs/misra/cppcheck.txt
> 
> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
> new file mode 100644
> index 000000000000..4df0488794aa
> --- /dev/null
> +++ b/docs/misra/cppcheck.txt
> @@ -0,0 +1,66 @@
> +Cppcheck for Xen static and MISRA analysis
> +==========================================
> +
> +Xen can be analysed for both static analysis problems and MISRA violation using
> +cppcheck, the open source tool allows the creation of a report with all the
> +findings. Xen has introduced the support in the Makefile so it's very easy to
> +use and in this document we can see how.
> +
> +First recommendation is to use exactly the same version in this page and provide
> +the same option to the build system, so that every Xen developer can reproduce
> +the same findings.

I am not sure I agree. I think it is good that each developer uses their 
own version (so long as it is supported), so they may be able to find 
issues that may not appear with 2.7.

> +
> +Install cppcheck in the system

NIT: s/in/on/ I think.

> +==============================
> +
> +Cppcheck can be retrieved from the github repository or by downloading the
> +tarball, the version tested so far is the 2.7:
> +
> + - https://github.com/danmar/cppcheck/tree/2.7
> + - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
> +
> +To compile and install it, here the complete command line:
> +
> +make MATCHCOMPILER=yes \
> +    FILESDIR=/usr/share/cppcheck \
> +    CFGDIR=/usr/share/cppcheck/cfg \
> +    HAVE_RULES=yes \
> +    CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
> +    install

Let me start by saying that I am not convinced that our documentation 
should explain how to build cppcheck.

But if that's the desire, then I think you ought to explain why we need 
to update CXXFLAGS (I would expect cppcheck to build everywhere without 
specifying additional flags).

Cheers,

-- 
Julien Grall
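
[Editor's note: the fetch-and-build steps described in the quoted patch can
be sketched end-to-end as follows. This is a non-authoritative sketch: the
make variables and flags mirror the quoted patch, the tarball URL comes from
the links in the quoted document, and the final version check is an added
sanity step. The install step typically needs root privileges, and
HAVE_RULES=yes requires the PCRE development headers to be present.]

```shell
# Fetch and unpack the cppcheck 2.7 source tarball (URL from the patch).
curl -LO https://github.com/danmar/cppcheck/archive/2.7.tar.gz
tar xf 2.7.tar.gz
cd cppcheck-2.7

# Build and install with the options from the quoted patch:
#  - MATCHCOMPILER=yes: pre-compile cppcheck's internal match patterns
#    for a faster analysis binary
#  - FILESDIR/CFGDIR: where the installed cppcheck looks for its data
#    and library configuration files
#  - HAVE_RULES=yes: enable regex-based rule support (needs PCRE)
make MATCHCOMPILER=yes \
    FILESDIR=/usr/share/cppcheck \
    CFGDIR=/usr/share/cppcheck/cfg \
    HAVE_RULES=yes \
    CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
    install

# Sanity check: the discussion below notes that only 2.7 is usable,
# so confirm the version actually installed on PATH.
cppcheck --version
```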


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 11:41:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 11:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355466.583135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hga-000425-5X; Fri, 24 Jun 2022 11:41:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355466.583135; Fri, 24 Jun 2022 11:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hga-00041y-2V; Fri, 24 Jun 2022 11:41:16 +0000
Received: by outflank-mailman (input) for mailman id 355466;
 Fri, 24 Jun 2022 11:41:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4hgY-00041s-Rk
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 11:41:15 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr20054.outbound.protection.outlook.com [40.107.2.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c2efe8e-f3b2-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 13:41:13 +0200 (CEST)
Received: from AM7PR03CA0002.eurprd03.prod.outlook.com (2603:10a6:20b:130::12)
 by AS8PR08MB7989.eurprd08.prod.outlook.com (2603:10a6:20b:541::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.22; Fri, 24 Jun
 2022 11:41:10 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::bb) by AM7PR03CA0002.outlook.office365.com
 (2603:10a6:20b:130::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17 via Frontend
 Transport; Fri, 24 Jun 2022 11:41:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 11:41:10 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 24 Jun 2022 11:41:09 +0000
Received: from adf84b4f8c51.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 02AE1CBC-2DF2-4A35-B0FD-C70863F5293F.1; 
 Fri, 24 Jun 2022 11:40:58 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id adf84b4f8c51.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 11:40:58 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DU2PR08MB7254.eurprd08.prod.outlook.com (2603:10a6:10:2d1::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 11:40:56 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5353.022; Fri, 24 Jun 2022
 11:40:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c2efe8e-f3b2-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Mp4kVlEVW8U+2HPzPCMlM4bjrvlp+QXmjmd3gPsKJlsIoyMTuZnkCzRLxejDHDRfNEdu5lyyu+1sNzYovo5dLw6P4DRynu+GtgnYMPXX+8xgFsW0sp7W0RhuAHAD/0ZzASOx/OuNURRUnFP/T9L191uORu0R2nDPnA+b7uyqtMmxHUx1cj/lYthp3OM+5wRISFiFDG1YBhHeJKOAZon+N2eQVlmX5zcPCfuiBuPJpGGTbd4IhJ7iihdOCW011xmAN5tDJrJwutqvnC9+waq3XjGpYmNuYL3467zuPrEv64TbbTBJ0/iCIT1U5Wg4tA4OY6fm4RvM6YLC3eGtubofMw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A0EQT1+W7rPpR3E1TsLQbl3LADF7j3n1GrM6w034SEM=;
 b=SlLIHtb1bQOZuveFAvjFX7FafzXUnG3Ur4q6bH5K2GyOMwo6wPKaInlm498M0eDBwz9Fa2YjpwDlDQ9kPbmmjcibrJCo2AdihydfAPy0+kYtxcHIcL6Swt1d4MWmalvKUbiv23ODw3LzD+BKm4OBGVtiLNfpvBU58pqHyL8TUnNyiEHsNYQ9mKAd/wuImg3mNXrlZO3LzQxFMM48lqx5K4d3s3eQqNUelloMYtOV9b1eyc5BEL5xqs7l5nQE8e/B/z1iQLLBOtUffxVmLjHI1bHFPV5b/bce0FCPX6p6fs+I4iuS7soRpaYaDhK3snQvEtCt3D3V/eBorAsky1aMRw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A0EQT1+W7rPpR3E1TsLQbl3LADF7j3n1GrM6w034SEM=;
 b=o8f6ndPf+Ima6iHu9J7eso9Jsl0ucxW0E/VPZMBLM4FWG7y1P5nZfj0vKrTkOfdDPSj5x3GONl4wpOHOWM9KTxXmhuGbPkY89QmNzDdH9sVLiiXmZE4UZh8VuVWp8wObtc58UwNpsG+KpcKMg+TZjthSufCXVyXJnrH4wDy6JKU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d0bcfe06c11831a0
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFNC9LfqNZyzCuLoHHK4iSJiJnMHVBa4Bc9cW5d+u8jCBCXqtIxQ9vDAn8o2vC0TeycpcsN8JpwUzsKMuviHDok4T2ClBV0xqDIVqrZ3ccHEepfPMa5WkzEXVgw2++ZaB48gtYTxEAZKbL4xRmmvYhT3yKnL1VFqWf4+0hkSoqSUe/zke4cLhs53FIM68QrxBp8mgzZKLiBNjpa+LpOEtIyvAdo+M2lO2jVBPcMWLSyr7W3IoEIZ47t2eSYxnInMknVEvpLvvDzGOaHQ2rguzb8cy9fznNITHeGRkUiYq0Ef3x1NVNDlCzWibPl041AM11N9YZpUZ6+pCM8GJbbTgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A0EQT1+W7rPpR3E1TsLQbl3LADF7j3n1GrM6w034SEM=;
 b=nZBz1fc0NC/WSP022i/Qn3Y0bJVzV7QBFchW2nt6ZqCHm5VyWevj8/VJnabrUFJwo1bpZYhWLl2Pmz44DfpI4qEqDUCZu2/8uCtYpaZYYYTCQwEi0qx0GJOvcOoVsAO/3AJVUe+zPwZuczUAKZufA0jexrr19JpeUPs6e++rVyACl4z3fYXVo/LILoDTyoRLJZT1sGGTG9yc/tyeVJ7e+S7ohj6WvCIjTh30OqLGzUE+X9gcl8nWkJf+eU6Nr4OZdtkBIpOpjwFYeMPrhA/g4Z504+Z4stB0L/03O6hugS5H364l3dlUjXQ3GxkkP4NWLy81myzDhYhmrsG0DZK4vA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A0EQT1+W7rPpR3E1TsLQbl3LADF7j3n1GrM6w034SEM=;
 b=o8f6ndPf+Ima6iHu9J7eso9Jsl0ucxW0E/VPZMBLM4FWG7y1P5nZfj0vKrTkOfdDPSj5x3GONl4wpOHOWM9KTxXmhuGbPkY89QmNzDdH9sVLiiXmZE4UZh8VuVWp8wObtc58UwNpsG+KpcKMg+TZjthSufCXVyXJnrH4wDy6JKU=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index: AQHYh7iovubAjsqNDkKOdqYTdWMQia1eaYSAgAAFkYA=
Date: Fri, 24 Jun 2022 11:40:55 +0000
Message-ID: <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
In-Reply-To: <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: b0649e7a-3ff9-48be-9d4d-08da55d66f2a
x-ms-traffictypediagnostic:
	DU2PR08MB7254:EE_|AM5EUR03FT003:EE_|AS8PR08MB7989:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 n1fnQrLSiaTSjxruHYY+MB/9ZYlJNJCMYp2ViOL2/yOPuTMKYHxIm1zyxZCPQ2Uk2QwIs0e0iE2uGe6etu/ZIBK6zHpcske8odex/7lRnGBacgh5jTihL+VutyMaDT5opFHyeMJkr9TFj80VCpuxGX695J/+BPLyOtmUB0WjfYSNYsYmX/nRUGRBT9SPWb5Qa9LLp33BuTTPE7Y687q7HKgYPY334Tn4WYJ+ZTmjSlUxty9QpdQJFCtX3HBoSqUjb9CrHwMfSg5P7C2qme0S5LMV3KHA2aLPHyQae/D5yR85UecHCiVvC5mFFesMs7BI0v3dnuNLAdGkblDUoq1uZ6+GoWDgJiC+GybV2G/samYcKvk1QJHhIXf7bxG9YqE18H+BSmfqCL+N+a+PE82uu/nxsVV/oV6xiZ88MFklfwxO6IAe3XuePkqd2ezwjZOQ7V7XCwIDe18+aQnFSXDhayJP8k1COh2KOfEAcCFnZTh7pEosEvcxOjbujy7O3C/yMogzySt/8KEg9oG44zXrAT9FuqgGbG1bjUI9Xrr6EFw6TW31v5vpozQudTcLou0g8Yr9SgclsdPuCPL7IgP6D3SXX9gQkqJTXgIouoMYxqOoK2TMMi+FqX1zxzR72Ov/i/0HbFUMNKjeGIetTnyScBf1FBSv8P+wgkTpbHPkjWzp9WbOujLUTUH34AHI2+SCF23X/K2zpYFF9c4sNxY4imlkaeIP2XOhiGmtunAAOrQSIC6zn+Vb8Vpq5GvEdsA/OHjaLuPnNwmfF8K51VFr5qGJv9+xjHiLb0T07p0ONs2Dw4ZQJl9WHijDju7kuo8MYFCrhYf/Yt5giyKswkb6EQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(396003)(366004)(346002)(376002)(136003)(66946007)(64756008)(76116006)(66446008)(71200400001)(66556008)(2616005)(8676002)(4326008)(54906003)(6916009)(66476007)(91956017)(5660300002)(86362001)(186003)(122000001)(8936002)(38070700005)(38100700002)(6512007)(41300700001)(26005)(478600001)(6486002)(36756003)(33656002)(53546011)(316002)(2906002)(6506007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <52EE7D3F9CCC1F4086F0C2A45076F043@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB7254
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d6bbac9a-12d6-42e4-9c46-08da55d6668e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	b6j0ZqclxdnUeD/Cgga7OOe2W3ZZSGdrdaG0OCKM5VzGUqPGyJpqws+pSaolxg5k/ES8gSSXqc0qiFl5SUqFJeY2SDo4m63WKtwlwqfjxV1XCfKBf8Ps1sPpr9dpWpTsvmHGSueGvddnWmZ4mABvIgcLNpd/L+WNQ2dS2c/Ro1tJ3+hFT6xSmi+4EkM76S0J6kTSCLX1MlrsPDWFOK/WueTD+tmk9Z5Ry+5oKDn6c3KYnGwLM1CCjNVT5BKkFtXpJr46ng+aPQ+qpKlAB4+ot1Ec1d7Ct2v864EUWDGEveiotnQSrQRQpjU3ur9BcFM5k6UIVSBrNhMiyaCxEy2mv+txeouRJ1vRVNkiuODYXgKBwYgJ6NGcFSkq/JcMYX8U0zOdFKQDdMFzoyHKKAPiFwzE7C5WjluBaV0/6tEeP6um0phYCkq1cgG4dB0j7MQvb5NZWdl+s2HCnPImiq+2NssV7ud4uzpwZ8QCMZwcTjsO5kmdZd/qjSWNMrGA9hu5p9Ok/p5s1U8p5P0bOyrQ6Rk/KH/oTsOVVs4SArQY211GFioLUeV+jTeZJFR1/GmaQt3heX5aGymycweWpX2nArgjS2oAWym0QyM1eHPcxul+yLOapqLzVPOfe5r2cjzNhCky7BdpZOwMv9gC60Bmtmir6lkBmw8MSKOWXqVh3imwQqCCHiIrFpmwlWpObZcC+lB2GEyfe3mikCH3x2XGNqXCZb7esp3Z3wmj1vUn3MiIhdIIVbaeLWO6XdSdlQ3fSY+h+h0UOy83HtMKJ1niBbHRfMxQjH4xUB2C40YtkwAbmauSFvqibY0mgqaYzI/K
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(376002)(346002)(136003)(396003)(46966006)(40470700004)(36840700001)(336012)(8676002)(4326008)(186003)(2616005)(70586007)(47076005)(82310400005)(6512007)(70206006)(53546011)(26005)(36860700001)(82740400003)(86362001)(40460700003)(8936002)(33656002)(54906003)(36756003)(2906002)(316002)(478600001)(6862004)(356005)(81166007)(6506007)(6486002)(40480700001)(41300700001)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 11:41:10.1991
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b0649e7a-3ff9-48be-9d4d-08da55d66f2a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7989

Hi Julien,

> On 24 Jun 2022, at 12:20, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 24/06/2022 11:53, Luca Fancellu wrote:
>> Add instructions on how to build cppcheck, the version currently used
>> and an example to use the cppcheck integration to run the analysis on
>> the Xen codebase
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 66 insertions(+)
>> create mode 100644 docs/misra/cppcheck.txt
>> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
>> new file mode 100644
>> index 000000000000..4df0488794aa
>> --- /dev/null
>> +++ b/docs/misra/cppcheck.txt
>> @@ -0,0 +1,66 @@
>> +Cppcheck for Xen static and MISRA analysis
>> +==========================================
>> +
>> +Xen can be analysed for both static analysis problems and MISRA violation using
>> +cppcheck, the open source tool allows the creation of a report with all the
>> +findings. Xen has introduced the support in the Makefile so it's very easy to
>> +use and in this document we can see how.
>> +
>> +First recommendation is to use exactly the same version in this page and provide
>> +the same option to the build system, so that every Xen developer can reproduce
>> +the same findings.
> 
> I am not sure I agree. I think it is good that each developer uses their
> own version (so long as it is supported), so they may be able to find
> issues that may not appear with 2.7.

Right now the reality is not that great:
- cppcheck 2.8 has bugs and its MISRA checking is not working
- older versions of cppcheck generate wrong HTML or XML files

So in practice anybody can try another version, but at the moment only
2.7 is usable.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 11:47:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 11:47:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355473.583147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hmP-0004jp-1X; Fri, 24 Jun 2022 11:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355473.583147; Fri, 24 Jun 2022 11:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4hmO-0004ji-TF; Fri, 24 Jun 2022 11:47:16 +0000
Received: by outflank-mailman (input) for mailman id 355473;
 Fri, 24 Jun 2022 11:47:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3IMW=W7=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o4hmO-0004jc-C7
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 11:47:16 +0000
Received: from mail-lf1-x12b.google.com (mail-lf1-x12b.google.com
 [2a00:1450:4864:20::12b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 646fc56a-f3b3-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 13:47:15 +0200 (CEST)
Received: by mail-lf1-x12b.google.com with SMTP id g4so3994240lfv.9
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jun 2022 04:47:15 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 a14-20020a056512200e00b004790b051822sm335325lfb.90.2022.06.24.04.47.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 24 Jun 2022 04:47:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 646fc56a-f3b3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=kmh0ENdASuzPwvNnlicIKQ5TBNpp7RzwJFZQseGlMJM=;
        b=KAsbqYjUfSjDI+JwFiJ2h9nLoByatH7zPnUIWNSKHA7knN6fw21oRYO6JLl2U198M2
         oA9pY+fy5htTDzA1Z4TpGYBB2Elxcpmvxw7ZfhLjEOwevyVlVTdkGSYAZ7yr/lxnP79/
         4F/Q9ZBDDJG+m3yAhYgRV8daUjDw4hS+HdAMfBA+cs+RWjVkKBTS32AwSC2jvWpYMW+4
         r5BIAWyMZKYqaq2jvibeVpm1XLSjuGPx7tvvVrsrJ2rDu8c8xXjL5rNblV8vdPMstA/B
         wHtoo3OEjdcaYXn/D1L1sk1SQ67zpRNDsQPnZeUgir4nenVeDCOCi7ddME63sU8iJFEI
         tiZg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=kmh0ENdASuzPwvNnlicIKQ5TBNpp7RzwJFZQseGlMJM=;
        b=myUAnkpzEKyMMFg9S4RFRsiToK91Wd9ELqP65YuHVvX1/Dolccj3mKmRU9MWi/J4zZ
         WnCO9m1RWYRb10KbsuUbTL4i3RZwnBjMeTMIJc85sttzgGa410Eke2ZWpeFKzcgN834S
         ZETaJPxQl9iLqeLt7Q2j5xKuh8LWeUBuI01lrqhy2DUeDnMs7L3E4YmoGCRRIgfa3zx3
         t5gEoUS6PUeFa05709RqbIlwbW1OlvnigCdUzFzdmm1sEPkHtXXvFsgwiLXwL8EdlJS0
         QRGPqdkoJrmUONvB9bOWVE4D1v3w14rIpu4vRPBIud0m4drodyyxYFS2E7Uusx/Gckfv
         qxXg==
X-Gm-Message-State: AJIora8mOTguF2VL2fQKHkgM41XQn8H3p6zaKZowCbcxcyfqwPfcKg+A
	lN0o1PWwF2Fu3itlzNtKTY8=
X-Google-Smtp-Source: AGRyM1sropTVtnTKfdklaDJvAT2RxLisCuu0kXmcmeJUOpd0YTy9+lrwIs3iEOKPDLaG0yO2tCl++w==
X-Received: by 2002:a05:6512:b8d:b0:47f:74f0:729b with SMTP id b13-20020a0565120b8d00b0047f74f0729bmr8358007lfv.403.1656071234589;
        Fri, 24 Jun 2022 04:47:14 -0700 (PDT)
Subject: Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on
 Arm
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
 <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <834ed066-9e5f-f440-f6b5-78746c0d4163@gmail.com>
Date: Fri, 24 Jun 2022 14:47:12 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 23.06.22 20:50, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien


>
> Sorry for the late reply.


no problem)


>
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
>> +/*
>> + * All accesses to the GFN portion of type_info field should always be
>> + * protected by the P2M lock. In case when it is not feasible to satisfy
>> + * that requirement (risk of deadlock, lock inversion, etc) it is important
>> + * to make sure that all non-protected updates to this field are atomic.
>
> Here you say the non-protected updates should be atomic but...
>
> [...]
>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 7b1f2f4..c94bdaf 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>>       spin_lock(&d->page_alloc_lock);
>>         /* The incremented type count pins as writable or read-only. */
>> -    page->u.inuse.type_info =
>> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
>> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
>> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
>> +                                                  : PGT_writable_page) |
>> +                                MASK_INSR(1, PGT_count_mask);
>
> ... this is not going to be atomic. So I would suggest to add a 
> comment explaining why this is fine.


Yes, I should have added the explanation you gave in V5 of why this is fine.

So I propose the following text; do you agree with adding it?

/*
  * Please note, the update of the type_info field here is not atomic as we
  * use a Read-Modify-Write operation on it. But currently it is fine because
  * the caller of page_set_xenheap_gfn() (which is another place where
  * type_info is updated) would need to acquire a reference on the page.
  * This is only possible after the count_info is updated *and* there is a
  * barrier between the type_info and count_info updates. So there is no
  * immediate need to use cmpxchg() here.
  */


>
>
>>         page_set_owner(page, d);
>>       smp_wmb(); /* install valid domain ptr before updating refcnt. */
>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>       }
>>         /* Map at new location. */
>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>
>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>
> I would expand the comment above to explain why you need a different 
> path for xenheap mapped as RAM. AFAICT, this is because we need to 
> call page_set_xenheap_gfn().


Agreed. I propose the following text; do you agree with it?

/*
  * Map at new location. Here we need to map a xenheap RAM page differently
  * because we need to store the valid GFN and make sure that nothing was
  * mapped before (the stored GFN is invalid).
  */


>
>
>> +    else
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        p2m_write_lock(p2m);
>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
>
> Sorry to only notice it now. This check will also change the behavior 
> for XENMAPSPACE_shared_info. Now, we are only allowed to map the 
> shared info once.
>
> I believe this is fine because AFAICT x86 already prevents it. But 
> this is probably something that ought to be explained in the already 
> long commit message.


Agreed. I propose the following text; do you agree with it?

Please note, this patch changes how the shared_info page (which is a
xenheap RAM page) gets mapped in xenmem_add_to_physmap_one(): now we only
allow the shared_info to be mapped once. Subsequent attempts to map it
will result in -EBUSY; if a legitimate use case appears, we can relax
that behavior.


>
>
> My comments are mainly seeking for clarifications. The code itself 
> looks correct to me. I can handle the comments on commit to save you a 
> round trip once we agree on them.


Thank you, that would be much appreciated.


>
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:02:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:02:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355480.583158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4i1B-00077G-JS; Fri, 24 Jun 2022 12:02:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355480.583158; Fri, 24 Jun 2022 12:02:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4i1B-000779-FP; Fri, 24 Jun 2022 12:02:33 +0000
Received: by outflank-mailman (input) for mailman id 355480;
 Fri, 24 Jun 2022 12:02:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CwSF=W7=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1o4i19-000773-M5
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:02:31 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80048.outbound.protection.outlook.com [40.107.8.48])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 85a77ce8-f3b5-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 14:02:30 +0200 (CEST)
Received: from AS9PR0301CA0036.eurprd03.prod.outlook.com
 (2603:10a6:20b:469::14) by AM8PR08MB5761.eurprd08.prod.outlook.com
 (2603:10a6:20b:1d0::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 12:02:05 +0000
Received: from AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:469:cafe::71) by AS9PR0301CA0036.outlook.office365.com
 (2603:10a6:20b:469::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Fri, 24 Jun 2022 12:02:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT056.mail.protection.outlook.com (10.152.17.224) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 12:02:05 +0000
Received: ("Tessian outbound e40990bc24d7:v120");
 Fri, 24 Jun 2022 12:02:05 +0000
Received: from fc5a2c89d6ef.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8CC24479-B46E-4E48-9BB5-D2C08778F8B0.1; 
 Fri, 24 Jun 2022 12:01:58 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fc5a2c89d6ef.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 12:01:58 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com (2603:10a6:208:103::16)
 by AM6PR08MB3992.eurprd08.prod.outlook.com (2603:10a6:20b:a4::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 12:01:56 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9]) by AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9%5]) with mapi id 15.20.5373.016; Fri, 24 Jun 2022
 12:01:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85a77ce8-f3b5-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=OMMOtuHJE1JkuYl77yrA+O57DDZCTw9z5W+gk2S+w3hTcs1Ll49ThYC0EKJV+mDI0Yz07RPeHsOP+BxQfxtITgyDjMnFR3u9vck7f9QrR0oVgTTMm8l7hJlxic6jG/PsjAGoGVy+nRjAvEJ5SSFRrWpG7+ihGy+vwXFHiyYZTWkgWiI/qJ6WRmKNqGNrBe8OjtWqqFK2lqwgY4xYPHaE6sVUVTa6Sos6qhtuTE7EvTmjvBFja/8XAQ0FKaJIIZgY/pi/yEIHVZuGRPCWUfGQmCPdI8yEBLTQ2ORHJ5QHL4b4I5tVw6HpH7Xm4aS8XZo5KUpo9zjNOZAJIodPAYRJ3Q==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7jd8//NOB/wkSRvn3P02c9SAujR4bW0+bEv8An+j39g=;
 b=Juv1i2nFd8aRyWI3pYYlY3W6N/ritwrj2MJmogBZe05/m2+XLhR1JYfOrmv8/N2AgCytWcjJ8cq3H5stTmg1Qjk0UZBO+Qt45BU2QsploUYdNhLxxDu6kjRJhrMHu2jb3aPJEHLZ74BtvXUjYryqn+vOQWy1ljHT5URLH4KCkgvFYZ/VVNdRIjPcPnGN+zA/x2xz46ku3+ImtkoXliBx3owgAMx4FdIctkr+BmcTQHEFswXzJrPKpooOv9ZtSccCpIahlLcfFOjeptOEl77E+RLfoMydPBELXF9pzDKJ7MS/dmNy8jdIOfBPlVLpiXfhBVj/Xo3mLksvWwtu47W0eQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7jd8//NOB/wkSRvn3P02c9SAujR4bW0+bEv8An+j39g=;
 b=2anxpS1PlrpGlj4Jbw6Y3rL/sbgrorCyV0vrt1khYU8mZ2ulAukfqdjHOxtJIcYRyG+OD7VTiXERqLa1eX8UnS3RG6OqJNlVs3NGbEsrgRkglHlU3GmgLpT+pZ6ovM1nUBiGrwebbf5k/H+phb4eHwN6W8sSnWDhIE3PqCTWfts=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8de32a8313e73a71
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ew5lWkZ9yaH1+6W5g8RrbOG4KEzX+IK8povoTQDWxHhPnmp5LwYALEy4vp804iwt/a0XISdYgyS+8Xg76ZWSKpVgY9Qkb5J1n+SgB3aokD11kxi5+v/Y4OjX+xQh/cuUx1pP8IGxCqG20tp1g8ur4nqonCc8qIYY45pNB5TNM211N0bf/+26rtZdqvw7h8rXTZD3Cf48rloA/8e1YfXw0Nj+lGxj5lthszC1Sh8Erob97MBUpHhBe3yHNr/89AO8W7GInocn1wvylTDlW/A1fsD3PUb87zejvvYmJ8GSJ0Bop9SjP6YayDWLkR7zM2LblhG+cK8GLjosfWttJNksQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7jd8//NOB/wkSRvn3P02c9SAujR4bW0+bEv8An+j39g=;
 b=f0t3lFusfjVdCbPyK2nPwhV+ZDAe4PVKhZfiVcdHxe5alpl/7DVaoltu4SuZ0u0WElQ2HziuxJw0el9ShjC2v3sBQqOalY5V3r19xj6p+o2H+hH12Yw8/3Y1VkfQPzRLD9wIMj234ONY050KV5Qr7iqYVGwcfwvccL4zT3PV2B8r8qFNw3x1nCiU7c9mfIPAfXjp6lq6rzlgAhxUyps98FZspY+OpWZRfC9gP71AYEYiN5SJImlWdBIoNB7A98ltg9naDRy4VNyjOMYCW4iWnveGfKVwDV4dRqpyU+WbuupLTzx/hCV++CmgGFGVKKsjISldUXJiTp8s98aKWBM/bw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7jd8//NOB/wkSRvn3P02c9SAujR4bW0+bEv8An+j39g=;
 b=2anxpS1PlrpGlj4Jbw6Y3rL/sbgrorCyV0vrt1khYU8mZ2ulAukfqdjHOxtJIcYRyG+OD7VTiXERqLa1eX8UnS3RG6OqJNlVs3NGbEsrgRkglHlU3GmgLpT+pZ6ovM1nUBiGrwebbf5k/H+phb4eHwN6W8sSnWDhIE3PqCTWfts=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index: AQHYh7nJHM8+T3fZ/0Gr0RGCgDxGeq1eaYGAgAALcIA=
Date: Fri, 24 Jun 2022 12:01:55 +0000
Message-ID: <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
In-Reply-To: <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: be4f7cda-d4c9-4a29-e266-08da55d95b5c
x-ms-traffictypediagnostic:
	AM6PR08MB3992:EE_|AM5EUR03FT056:EE_|AM8PR08MB5761:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 bsg5as5t+ikukSEt59nZ77hssicLjdoyHoQbz3puEfE07QNaOPXsSaAvMf/oVilT5btPgd9GTZji8UVcQ1RGqXTw2XK4b9zpuLmgSERZNy6VjjLlAR5Y3lGHr5+CpR4LgVoeuV1vVZTQRlbTePbIDiEZ8vIzNiNU1IcpKHz+1GhlGMhCvpWWQKyh8AY9dxv3cyUEGuC7UbM6wyJg+2hEBGwhtLedpJfmGzB7HRCPo0URoDewe3xCEgwQjW7CBSRHSw1MXKmBA6oaTYmOdloST4WR5/y0Zv+TNVB+l/4R8P6VcNdAvvLZ35ZNvTVpS3IYoDE4nWWKeeMLyq+Kc3ITueAY5o6pP1e6mZpgIa9BCLGLvjcwmwiEbSCFFKnVjiHh3nCYzUehQcUfVbqNGyYFRvHtOwuAULsv4PSOVhgJrFutfDHiaSouu4GYj/1su/PGBRhcgjPxR197YLOZEzzFeDfHt3pkfEeeFrICLCy3s/qTrWwHJw7//g80AaPpu3q+iTAXf5Et9BfF4mU1/+52jjlOeN4Zv936Tbh0mCs2WKCI6TmgWqB74m+oHaBBj11Zwvs2bfiyxL6hS2glJuDdffkMHw22jhNX7RFm3XKyxUCeJSPHBBRrkJ1CyF3fz/HBWbTj+xYCl9+0SpDK2ZdXT0T5niREzqC8CzU3NYOjyQX6TQN0YdfWnrTffTqulbQauioB3JDSvc3S7j+Bh365OjYKTHsIgJZnHmVVZNiRncmc78JtgRhycfIVScpiD2Zr1K1v8da+XnpPas/3AMDHmdJYevy3MFFLxlBzftT/fnOWEAV4WeQbQA3KTeSnrdZhu9OOftIzvfZx5P2wpwtCzioDCW7VYCLbN69qm529PeIJnTtuBjRWXid7LLEzdHJiVzDpfzE3qkbhbFc/Zig0MA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3809.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(376002)(136003)(39860400002)(396003)(346002)(41300700001)(26005)(6512007)(86362001)(38070700005)(6486002)(122000001)(38100700002)(2616005)(186003)(33656002)(8676002)(6506007)(66556008)(4326008)(66446008)(66476007)(316002)(76116006)(5660300002)(83380400001)(54906003)(36756003)(8936002)(71200400001)(2906002)(6916009)(64756008)(91956017)(478600001)(66946007)(966005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <EE574BDAF919594C8497C66708B5CD9C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3992
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d5370780-772d-4b08-1abd-08da55d955ae
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	e7uja1bi0+pTDhMgXu2WmiJO82GChobCCRJg1CnrtmqyOaCvOmnuodLOtpprQmTW26OZ0/aWuglqwmnwKb96HnCE1/OM7OWb7n3jC2PBLDz91jBA6E+pnttndvZ0r7efa4uzRYGKVmNllR17jPojCGsJaN4PHrpcbfdgAPuqQQmYgaB1IybLef+w+EfsmM+QvRWcVdcqu8YLZTxw3LXrG5OrrCTevsgr2Le7nbP24zhgpjyp7gSAzcRh4jK/ld2bUnYeQS3BK5G+o6cPkLZIduiEdRm01I6qrbdDC6F5z2L/kAFedlVcRV0FCB+JEGmVj1fjtuNii5HfaL+iNKiJ+VL2fZv3RwuieiKEb5o3x//wd55BfZ0VABgnmRSsJixs1LOG5wlb/3VNmKbFhCQup7Qto0JjRb4Yi8dFmD9kiBpYXTICQ1csqAVEw/ThwUyXG1jCeBISqTUcMQTExImaREBUXlPBumGBqwiLSp9evKXkEC1uvcBW8JhUKkyS5Bwg1t2NswT1n/hlzJdYVME9fdfaa4NXZJ9RrkeFyvyjF33xwVfNDDPcRFwT8kk34SJJB06BwhYlwiFLnfC7mQQmhLSRYa63hDxVHQrhgGHlhTA55yhwaFEoyEddU6wmRRdhwx5AFZoxLEyKpgBMMwsCvpAAMjmf2lH0hJ0WLftPzi0AUcIa+YDcfcumudwpeQRC6U+K6LLvogvAarSzFV21AjLEYUxeJlAzBjpI3HRu3mtyo8KJt9rBPSXry5SObGdf+i/nqVrhkh3mJRbS/OE1Ft+4YKdFVB79iWhBxjsSnFwqa15uaw/i4zP21p/OZbFSi/Hv/+cgEi4h+2Olcp23ukO1PEcRmBN3lS8QYsQiKtLXUXbf7S75QU5o3Cco+8PP
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(346002)(39860400002)(396003)(376002)(136003)(40470700004)(46966006)(36840700001)(5660300002)(86362001)(6862004)(8936002)(82310400005)(36756003)(36860700001)(2906002)(54906003)(70586007)(70206006)(4326008)(8676002)(40460700003)(316002)(33656002)(40480700001)(6486002)(966005)(356005)(82740400003)(6506007)(81166007)(478600001)(336012)(47076005)(186003)(26005)(6512007)(41300700001)(2616005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 12:02:05.4800
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be4f7cda-d4c9-4a29-e266-08da55d95b5c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5761

Hi Julien,

>> +First recommendation is to use exactly the same version in this page and provide
>> +the same option to the build system, so that every Xen developer can reproduce
>> +the same findings.
>
> I am not sure I agree. I think it is good that each developper use their own version (so long it is supported), so they may be able to find issues that may not appear with 2.7.

Yes I understand, but as Bertrand says, other version of this tool doesn’t work quite well. I agree that everyone should use their own version, but for the sake of reproducibility
of the findings, I think we should have a common ground. The community can however propose from time to time to bump the version as long as we can say it works (maybe
crossing the reports between cppcheck, eclair, other proprietary tools).

>
>> +
>> +Install cppcheck in the system
>
> NIT: s/in/on/ I think.

Sure will fix
>
>> +==============================
>> +
>> +Cppcheck can be retrieved from the github repository or by downloading the
>> +tarball, the version tested so far is the 2.7:
>> +
>> + - https://github.com/danmar/cppcheck/tree/2.7
>> + - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
>> +
>> +To compile and install it, here the complete command line:
>> +
>> +make MATCHCOMPILER=yes \
>> + FILESDIR=/usr/share/cppcheck \
>> + CFGDIR=/usr/share/cppcheck/cfg \
>> + HAVE_RULES=yes \
>> + CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
>> + install
>
> Let me start that I am not convinced that our documentation should explain how to build cppcheck.
>
> But if that's desire, then I think you ought to explain why we need to update CXXFLAGS (I would expect cppcheck to build everywhere without specifying additional flags).

Yes you are right, this is the recommended command line for building as in https://github.com/danmar/cppcheck/blob/main/readme.md section GNU make, I can add the source.

My intention when writing this page was to have a common ground between Xen developers, so that if one day someone came up with a fix for something, we are able to reproduce
the finding all together.

Cheers,
Luca

>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:08:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355485.583169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4i6Z-0007kp-6h; Fri, 24 Jun 2022 12:08:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355485.583169; Fri, 24 Jun 2022 12:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4i6Z-0007ki-3n; Fri, 24 Jun 2022 12:08:07 +0000
Received: by outflank-mailman (input) for mailman id 355485;
 Fri, 24 Jun 2022 12:08:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4i6X-0007kc-Io
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:08:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4i6X-0005Gb-7b; Fri, 24 Jun 2022 12:08:05 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4i6X-0006zi-12; Fri, 24 Jun 2022 12:08:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=SydIncRv2k6Y+M8KPuux05mdFu4KZNWWv3IU2bwoSgk=; b=4hoR1+usP+6mat+iUjvDN0v7Ka
	o20/i8aHE4xttvSS9MQkLg5qMIALjRoGzc5mqvU5dLHEq9o7FrOdoADCRLXSGrFY7x96X2xSPgR4W
	KbHADeK2U4Z38eRF/2fYeJZAuJN9x4++0Tm4I0qtnnhFLXQ3kUJDlT9Uaoex1uECDUBc=;
Message-ID: <c304be56-ae9b-121a-007e-1bb5ef06f95b@xen.org>
Date: Fri, 24 Jun 2022 13:08:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Luca Fancellu <Luca.Fancellu@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 24/06/2022 12:40, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

> 
>> On 24 Jun 2022, at 12:20, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 24/06/2022 11:53, Luca Fancellu wrote:
>>> Add instructions on how to build cppcheck, the version currently used
>>> and an example to use the cppcheck integration to run the analysis on
>>> the Xen codebase
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 66 insertions(+)
>>> create mode 100644 docs/misra/cppcheck.txt
>>> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
>>> new file mode 100644
>>> index 000000000000..4df0488794aa
>>> --- /dev/null
>>> +++ b/docs/misra/cppcheck.txt
>>> @@ -0,0 +1,66 @@
>>> +Cppcheck for Xen static and MISRA analysis
>>> +==========================================
>>> +
>>> +Xen can be analysed for both static analysis problems and MISRA violation using
>>> +cppcheck, the open source tool allows the creation of a report with all the
>>> +findings. Xen has introduced the support in the Makefile so it's very easy to
>>> +use and in this document we can see how.
>>> +
>>> +First recommendation is to use exactly the same version in this page and provide
>>> +the same option to the build system, so that every Xen developer can reproduce
>>> +the same findings.
>>
>> I am not sure I agree. I think it is good that each developper use their own version (so long it is supported), so they may be able to find issues that may not appear with 2.7.
> 
> Right now the reality is not that great:
> - 2.8 version of cppcheck has bugs and Misra checking is not working

Can you be more specific about the "bugs"? Is this Xen specific?

Also, what do you mean by MISRA checking is not working? Is this a 
regression or intentional?

> - older versions of cppcheck are generating wrong html or xml files

It's fine to say we don't support cppcheck < 2.7 (we do the same for 
the compiler).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:17:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:17:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355492.583180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iFd-0000mS-2y; Fri, 24 Jun 2022 12:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355492.583180; Fri, 24 Jun 2022 12:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iFc-0000mL-Vo; Fri, 24 Jun 2022 12:17:28 +0000
Received: by outflank-mailman (input) for mailman id 355492;
 Fri, 24 Jun 2022 12:17:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4iFa-0000mF-Tc
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:17:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4iFa-0005Sy-DB; Fri, 24 Jun 2022 12:17:26 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4iFa-0007To-54; Fri, 24 Jun 2022 12:17:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
Date: Fri, 24 Jun 2022 13:17:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 24/06/2022 13:01, Luca Fancellu wrote:
> Hi Julien,

Hi Luca,

> 
>>> +First recommendation is to use exactly the same version in this page and provide
>>> +the same option to the build system, so that every Xen developer can reproduce
>>> +the same findings.
>>
>> I am not sure I agree. I think it is good that each developer uses their own version (so long as it is supported), so they may be able to find issues that may not appear with 2.7.
> 
> Yes I understand, but as Bertrand says, other versions of this tool don’t work quite well.

I have replied to this in Bertrand's e-mail.


> I agree that everyone should use their own version, but for the sake of reproducibility
> of the findings, I think we should have a common ground.

I will reply to this below.

> The community can however propose from time to time to bump the version, as long as we can say it works (maybe by
> cross-checking the reports between cppcheck, Eclair, and other proprietary tools).

This would mean we should de-support 2.7, which sounds wrong if it 
worked before.

> 
>>
>>> +
>>> +Install cppcheck in the system
>>
>> NIT: s/in/on/ I think.
> 
> Sure will fix
>>
>>> +==============================
>>> +
>>> +Cppcheck can be retrieved from the github repository or by downloading the
>>> +tarball, the version tested so far is the 2.7:
>>> +
>>> + - https://github.com/danmar/cppcheck/tree/2.7
>>> + - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
>>> +
>>> +To compile and install it, here the complete command line:
>>> +
>>> +make MATCHCOMPILER=yes \
>>> + FILESDIR=/usr/share/cppcheck \
>>> + CFGDIR=/usr/share/cppcheck/cfg \
>>> + HAVE_RULES=yes \
>>> + CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
>>> + install
>>
>> Let me start that I am not convinced that our documentation should explain how to build cppcheck.
>>
>> But if that's desire, then I think you ought to explain why we need to update CXXFLAGS (I would expect cppcheck to build everywhere without specifying additional flags).
> 
> Yes, you are right; this is the recommended command line for building, as given in the "GNU make" section of https://github.com/danmar/cppcheck/blob/main/readme.md. I can add the source.

I think we should remove the command line and tell the user to read the 
cppcheck README.md.

> 
> My intention when writing this page was to have a common ground between Xen developers, so that if one day someone comes up with a fix for something, we are all able to reproduce
> the finding together.

Well, if someone finds a fix, you want to check it against all supported 
versions, not just the one that warned. Otherwise, you can end up in a 
situation where you silence cppcheck 2.10 (just making up a version) but 
then introduce a warning in cppcheck 2.7.

To me this is no different from other software used to build Xen. We 
don't tell the user that they should always build with GCC x.y.z. 
Instead, we provide a minimum version. This has multiple benefits:
  1) The user doesn't need to rebuild the software and can use the one 
provided by their distribution
  2) Different versions find different (and, most of the time, valid) 
bugs, so we move towards a better codebase.
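
The minimum-version policy described above can be sketched as a small 
shell check. This is purely illustrative: the hard-coded version strings 
and the comparison via `sort -V` are assumptions for the sketch, not 
Xen's actual Makefile logic.

```shell
#!/bin/sh
# Illustrative sketch: gate on a minimum cppcheck version the same way a
# build system gates on a minimum compiler version. The hard-coded
# found_version stands in for the output of `cppcheck --version`.
min_version="2.7"
found_version="2.9"

# `sort -V` orders version strings numerically; if min_version sorts
# first (or equal), found_version satisfies the minimum.
lowest="$(printf '%s\n%s\n' "$min_version" "$found_version" | sort -V | head -n1)"
if [ "$lowest" = "$min_version" ]; then
    echo "cppcheck $found_version OK (>= $min_version)"
else
    echo "cppcheck $found_version too old (need >= $min_version)" >&2
    exit 1
fi
```

With this approach a distribution-provided cppcheck is accepted as long 
as it meets the floor, which is the point being argued here.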

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:18:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355497.583191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iGf-0001K7-E5; Fri, 24 Jun 2022 12:18:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355497.583191; Fri, 24 Jun 2022 12:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iGf-0001K0-9i; Fri, 24 Jun 2022 12:18:33 +0000
Received: by outflank-mailman (input) for mailman id 355497;
 Fri, 24 Jun 2022 12:18:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4iGe-0001Jo-4L
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:18:32 +0000
X-Inumbo-ID: c26bbce3-f3b7-11ec-bd2d-47488cf2e6aa
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index: AQHYh7iovubAjsqNDkKOdqYTdWMQia1eaYSAgAAFkYCAAAeUAIAAAtiA
Date: Fri, 24 Jun 2022 12:18:14 +0000
Message-ID: <F0A0EE87-88CA-4A42-948F-D5CC4B5540DF@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
 <c304be56-ae9b-121a-007e-1bb5ef06f95b@xen.org>
In-Reply-To: <c304be56-ae9b-121a-007e-1bb5ef06f95b@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F45C746F29E66C45A221D3AD39098C66@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4304

Hi Julien,

> On 24 Jun 2022, at 13:08, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 24/06/2022 12:40, Bertrand Marquis wrote:
>> Hi Julien,
> 
> Hi Bertrand,
> 
>>> On 24 Jun 2022, at 12:20, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> On 24/06/2022 11:53, Luca Fancellu wrote:
>>>> Add instructions on how to build cppcheck, the version currently used
>>>> and an example to use the cppcheck integration to run the analysis on
>>>> the Xen codebase
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>> ---
>>>> docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
>>>> 1 file changed, 66 insertions(+)
>>>> create mode 100644 docs/misra/cppcheck.txt
>>>> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
>>>> new file mode 100644
>>>> index 000000000000..4df0488794aa
>>>> --- /dev/null
>>>> +++ b/docs/misra/cppcheck.txt
>>>> @@ -0,0 +1,66 @@
>>>> +Cppcheck for Xen static and MISRA analysis
>>>> +==========================================
>>>> +
>>>> +Xen can be analysed for both static analysis problems and MISRA violation using
>>>> +cppcheck, the open source tool allows the creation of a report with all the
>>>> +findings. Xen has introduced the support in the Makefile so it's very easy to
>>>> +use and in this document we can see how.
>>>> +
>>>> +First recommendation is to use exactly the same version in this page and provide
>>>> +the same option to the build system, so that every Xen developer can reproduce
>>>> +the same findings.
>>> 
>>> I am not sure I agree. I think it is good that each developer uses their own version (so long as it is supported), so they may be able to find issues that may not appear with 2.7.
>> Right now the reality is not that great:
>> - 2.8 version of cppcheck has bugs and Misra checking is not working
> 
> Can you be more specific about the "bugs"? Is this Xen specific?

No, it is not Xen specific (see [1] for more info)

> 
> Also, what do you mean by MISRA checking is not working? Is this a regression or intentional?

It is a regression.

> 
>> - older versions of cppcheck are generating wrong html or xml files
> 
> It's fine to say we don't support cppcheck < 2.7 (we do the same for the compiler).

Ok

[1] https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:22:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355505.583202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iKH-0002pJ-3p; Fri, 24 Jun 2022 12:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355505.583202; Fri, 24 Jun 2022 12:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iKH-0002pC-0O; Fri, 24 Jun 2022 12:22:17 +0000
Received: by outflank-mailman (input) for mailman id 355505;
 Fri, 24 Jun 2022 12:22:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4iKF-0002p6-Aj
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:22:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4iKE-0005Yu-VL; Fri, 24 Jun 2022 12:22:14 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4iKE-0007nu-Oz; Fri, 24 Jun 2022 12:22:14 +0000
Message-ID: <581ae1a2-bf1b-7dca-fa0b-a772ad7c489a@xen.org>
Date: Fri, 24 Jun 2022 13:22:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Luca Fancellu <Luca.Fancellu@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
 <c304be56-ae9b-121a-007e-1bb5ef06f95b@xen.org>
 <F0A0EE87-88CA-4A42-948F-D5CC4B5540DF@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <F0A0EE87-88CA-4A42-948F-D5CC4B5540DF@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/2022 13:18, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

> 
>> On 24 Jun 2022, at 13:08, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 24/06/2022 12:40, Bertrand Marquis wrote:
>>> Hi Julien,
>>
>> Hi Bertrand,
>>
>>>> On 24 Jun 2022, at 12:20, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Luca,
>>>>
>>>> On 24/06/2022 11:53, Luca Fancellu wrote:
>>>>> Add instructions on how to build cppcheck, the version currently used
>>>>> and an example to use the cppcheck integration to run the analysis on
>>>>> the Xen codebase
>>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>>> ---
>>>>> docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++++++
>>>>> 1 file changed, 66 insertions(+)
>>>>> create mode 100644 docs/misra/cppcheck.txt
>>>>> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
>>>>> new file mode 100644
>>>>> index 000000000000..4df0488794aa
>>>>> --- /dev/null
>>>>> +++ b/docs/misra/cppcheck.txt
>>>>> @@ -0,0 +1,66 @@
>>>>> +Cppcheck for Xen static and MISRA analysis
>>>>> +==========================================
>>>>> +
>>>>> +Xen can be analysed for both static analysis problems and MISRA violation using
>>>>> +cppcheck, the open source tool allows the creation of a report with all the
>>>>> +findings. Xen has introduced the support in the Makefile so it's very easy to
>>>>> +use and in this document we can see how.
>>>>> +
>>>>> +First recommendation is to use exactly the same version in this page and provide
>>>>> +the same option to the build system, so that every Xen developer can reproduce
>>>>> +the same findings.
>>>>
>>>> I am not sure I agree. I think it is good that each developer uses their own version (so long as it is supported), so they may be able to find issues that may not appear with 2.7.
>>> Right now the reality is not that great:
>>> - 2.8 version of cppcheck has bugs and Misra checking is not working
>>
>> Can you be more specific about the "bugs"? Is this Xen specific?
> 
> No, it is not Xen specific (see [1] for more info)

Thanks for the information. How about writing something like:

"
The minimum version required for cppcheck is 2.7. Note that, at the time 
of writing (June 2022), version 2.8 is known to be broken [1].
"

[1] 
https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25
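
The suggested wording above could be turned into a quick sanity check by 
a reader. The helper below is purely illustrative: the function name and 
the idea of special-casing 2.8 are assumptions for this sketch, not 
anything in the patch.

```shell
#!/bin/sh
# Illustrative helper matching the suggested wording: require cppcheck
# >= 2.7 and reject the 2.8 release, which was known to be broken at the
# time of writing. The function name is made up for this sketch.
check_cppcheck_version() {
    v="$1"
    case "$v" in
        2.8) echo "cppcheck $v is known to be broken for MISRA checking" >&2
             return 1 ;;
    esac
    # `sort -V` puts the smaller version string first, so if 2.7 sorts
    # first (or equal), the given version meets the minimum.
    lowest="$(printf '%s\n%s\n' "2.7" "$v" | sort -V | head -n1)"
    if [ "$lowest" = "2.7" ]; then
        echo "cppcheck $v OK"
    else
        echo "cppcheck $v too old (need >= 2.7)" >&2
        return 1
    fi
}

check_cppcheck_version "2.7"    # prints: cppcheck 2.7 OK
```

Note that `sort -V` correctly accepts versions like 2.10, which a plain 
string comparison would misorder against 2.7.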

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:26:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355511.583213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iOf-0003Rx-MT; Fri, 24 Jun 2022 12:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355511.583213; Fri, 24 Jun 2022 12:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4iOf-0003Rq-JC; Fri, 24 Jun 2022 12:26:49 +0000
Received: by outflank-mailman (input) for mailman id 355511;
 Fri, 24 Jun 2022 12:26:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5tQ=W7=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o4iOe-0003Rk-8q
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 12:26:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Date: Fri, 24 Jun 2022 12:26:32 +0000
Message-ID: <243D0C5A-6E48-4392-BECE-C3EE74CC4566@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <E5F45AD6-8D0E-447C-9864-6E9F34C1BE0D@arm.com>
 <c304be56-ae9b-121a-007e-1bb5ef06f95b@xen.org>
 <F0A0EE87-88CA-4A42-948F-D5CC4B5540DF@arm.com>
 <581ae1a2-bf1b-7dca-fa0b-a772ad7c489a@xen.org>
In-Reply-To: <581ae1a2-bf1b-7dca-fa0b-a772ad7c489a@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi,

> On 24 Jun 2022, at 13:22, Julien Grall <julien@xen.org> wrote:
>
> On 24/06/2022 13:18, Bertrand Marquis wrote:
>> Hi Julien,
>
> Hi Bertrand,
>
>>> On 24 Jun 2022, at 13:08, Julien Grall <julien@xen.org> wrote:
>>>
>>>
>>>
>>> On 24/06/2022 12:40, Bertrand Marquis wrote:
>>>> Hi Julien,
>>>
>>> Hi Bertrand,
>>>
>>>>> On 24 Jun 2022, at 12:20, Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>> Hi Luca,
>>>>>
>>>>> On 24/06/2022 11:53, Luca Fancellu wrote:
>>>>>> Add instructions on how to build cppcheck, the version currently used
>>>>>> and an example to use the cppcheck integration to run the analysis on
>>>>>> the Xen codebase
>>>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>>>> ---
>>>>>> docs/misra/cppcheck.txt | 66 +++++++++++++++++++++++++++++++++++++
>>>>>> 1 file changed, 66 insertions(+)
>>>>>> create mode 100644 docs/misra/cppcheck.txt
>>>>>> diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
>>>>>> new file mode 100644
>>>>>> index 000000000000..4df0488794aa
>>>>>> --- /dev/null
>>>>>> +++ b/docs/misra/cppcheck.txt
>>>>>> @@ -0,0 +1,66 @@
>>>>>> +Cppcheck for Xen static and MISRA analysis
>>>>>> +==========================================
>>>>>> +
>>>>>> +Xen can be analysed for both static analysis problems and MISRA violation using
>>>>>> +cppcheck, the open source tool allows the creation of a report with all the
>>>>>> +findings. Xen has introduced the support in the Makefile so it's very easy to
>>>>>> +use and in this document we can see how.
>>>>>> +
>>>>>> +First recommendation is to use exactly the same version in this page and provide
>>>>>> +the same option to the build system, so that every Xen developer can reproduce
>>>>>> +the same findings.
>>>>>
>>>>> I am not sure I agree. I think it is good that each developer uses
>>>>> their own version (so long as it is supported), so they may be able to
>>>>> find issues that may not appear with 2.7.
>>>> Right now the reality is not that great:
>>>> - 2.8 version of cppcheck has bugs and Misra checking is not working
>>>
>>> Can you be more specific about "bugs"? Is it Xen specific?
>> No it is not Xen specific (see [1] for more info)
>
> Thanks for the information. How about writing something like:
>
> "
> The minimum version required for cppcheck is 2.7. Note that at the time of
> writing (June 2022), the version 2.8 is known to be broken [1].
> "
>
> [1] https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25
>

This is up to Luca (as it is his patch) but I am ok with that.

Cheers
Bertrand
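For illustration, the "same version for every developer" recommendation above could be enforced with a small guard. The pinned release (2.7) and the breakage of 2.8 come from this thread; the guard itself is a hypothetical sketch, not part of the patch under review:

```python
# Hypothetical sketch (not part of the Xen patch): accept an analysis run
# only when the local cppcheck matches the release the documentation pins
# (2.7 per the thread; 2.8 is known to be broken).  Pure string logic so it
# can be exercised without cppcheck installed; in practice the input would
# be the output of `cppcheck --version`.
PINNED = "2.7"

def version_ok(version_output: str) -> bool:
    """Accept output of the form 'Cppcheck 2.7' only when it matches the pin."""
    parts = version_output.split()
    return len(parts) == 2 and parts[0] == "Cppcheck" and parts[1] == PINNED

print(version_ok("Cppcheck 2.7"))  # True
print(version_ok("Cppcheck 2.8"))  # False: findings would not be reproducible
```

With such a gate, a developer running 2.8 fails fast instead of producing a report whose findings nobody else can reproduce.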



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:42:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:42:42 +0000
Message-ID: <2049c227-9a3d-6e43-01bb-639267b7ddad@suse.com>
Date: Fri, 24 Jun 2022 14:42:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v4] xen/gntdev: Avoid blocking in unmap_grant_pages()
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 David Vrabel <david.vrabel@citrix.com>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20220622022726.2538-1-demi@invisiblethingslab.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220622022726.2538-1-demi@invisiblethingslab.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------gPZ7BfZdz9D3IEOV00gg8qhO"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------gPZ7BfZdz9D3IEOV00gg8qhO
Content-Type: multipart/mixed; boundary="------------87al14GsXIQ9g60d3eHk02fC";
 protected-headers="v1"

--------------87al14GsXIQ9g60d3eHk02fC
Content-Type: multipart/mixed; boundary="------------Bs1GaeD0uKddp5Jn7uvjrlcO"

--------------Bs1GaeD0uKddp5Jn7uvjrlcO
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22.06.22 04:27, Demi Marie Obenour wrote:
> unmap_grant_pages() currently waits for the pages to no longer be used.
> In https://github.com/QubesOS/qubes-issues/issues/7481, this lead to a
> deadlock against i915: i915 was waiting for gntdev's MMU notifier to
> finish, while gntdev was waiting for i915 to free its pages.  I also
> believe this is responsible for various deadlocks I have experienced in
> the past.
>
> Avoid these problems by making unmap_grant_pages async.  This requires
> making it return void, as any errors will not be available when the
> function returns.  Fortunately, the only use of the return value is a
> WARN_ON(), which can be replaced by a WARN_ON when the error is
> detected.  Additionally, a failed call will not prevent further calls
> from being made, but this is harmless.
>
> Because unmap_grant_pages is now async, the grant handle will be sent to
> INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
> handle.  Instead, a separate bool array is allocated for this purpose.
> This wastes memory, but stuffing this information in padding bytes is
> too fragile.  Furthermore, it is necessary to grab a reference to the
> map before making the asynchronous call, and release the reference when
> the call returns.
>
> It is also necessary to guard against reentrancy in gntdev_map_put(),
> and to handle the case where userspace tries to map a mapping whose
> contents have not all been freed yet.
>
> Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
> Cc: stable@vger.kernel.org
> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>

Pushed to xen/tip.git for-linus-5.19a


Juergen
--------------Bs1GaeD0uKddp5Jn7uvjrlcO--

--------------87al14GsXIQ9g60d3eHk02fC--

--------------gPZ7BfZdz9D3IEOV00gg8qhO--
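The commit message above centers on two mechanisms: a per-page flag so a now-asynchronous unmap cannot be issued twice for the same handle, and a reference taken on the map before the async call and released on completion. A minimal sketch of that pattern (names are illustrative; the real code lives in the kernel's gntdev driver, not here):

```python
# Hedged model of the double-unmap guard described in the commit message:
# once unmapping is asynchronous, the grant handle cannot be poisoned early
# enough to stop a racing second unmap, so a separate per-page
# "being removed" flag is kept, and a reference on the map is held until
# the asynchronous completion runs.
class GrantMap:
    def __init__(self, pages: int):
        self.refcount = 1                     # base reference held by the opener
        self.being_removed = [False] * pages  # per-page double-unmap guard

    def queue_unmap(self, page: int) -> bool:
        """Start an async unmap of `page`; reject it if one is already in flight."""
        if self.being_removed[page]:
            return False                      # someone already queued this page
        self.being_removed[page] = True
        self.refcount += 1                    # completion callback owns a reference
        return True

    def unmap_done(self, page: int) -> None:
        """Async completion: release the reference taken in queue_unmap()."""
        self.refcount -= 1

m = GrantMap(pages=4)
assert m.queue_unmap(0)      # first unmap of page 0 proceeds
assert not m.queue_unmap(0)  # racing second unmap is rejected
m.unmap_done(0)
assert m.refcount == 1       # back to the base reference
```

This only models the idea; error reporting, locking, and the reentrancy guard in gntdev_map_put() mentioned above are deliberately out of scope.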


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:46:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:46:13 +0000
MIME-Version: 1.0
References: <1638982784-14390-1-git-send-email-olekstysh@gmail.com>
 <1638982784-14390-2-git-send-email-olekstysh@gmail.com> <YbjANCjAUGe4BAar@perard>
 <bce10079-abd6-c033-6273-ac0ea9f51668@gmail.com> <4c89e55d-4bf1-506e-d620-4a0ff18ef308@suse.com>
 <dc1b70ac-079d-5de8-cb13-6be4944cef0a@gmail.com> <813684b0-df71-c18b-cf4c-106cc286c035@suse.com>
In-Reply-To: <813684b0-df71-c18b-cf4c-106cc286c035@suse.com>
From: George Dunlap <dunlapg@umich.edu>
Date: Fri, 24 Jun 2022 13:45:52 +0100
Message-ID: <CAFLBxZbm0KcLhpqs2tGXgx6-JP+3OtkMEReTaphBC-JHZ3sJDQ@mail.gmail.com>
Subject: Re: [PATCH V6 1/2] libxl: Add support for Virtio disk configuration
To: Juergen Gross <jgross@suse.com>
Cc: Oleksandr <olekstysh@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, 
	Nick Rosbrook <rosbrookn@ainfosec.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, George Dunlap <george.dunlap@citrix.com>
Content-Type: multipart/alternative; boundary="00000000000006517d05e230f4ef"

--00000000000006517d05e230f4ef
Content-Type: text/plain; charset="UTF-8"

On Wed, Dec 15, 2021 at 3:58 PM Juergen Gross <jgross@suse.com> wrote:

> On 15.12.21 16:02, Oleksandr wrote:


> In practice we are having something like the "protocol" already today:
> the disk device name is encoding that ("xvd*" is a Xen PV disk, while
> "sd*" is an emulated SCSI disk, which happens to be presented to the
> guest as "xvd*", too). And this is additional information not
> related to the backendtype.
>
> So we have basically the following configuration items, which are
> orthogonal to each other (some combinations might not make sense,
> but in theory most would be possible):
>
> 1. protocol: emulated (not PV), Xen (like today), virtio
>
> 2. backendtype: phy (blkback), qdisk (qemu), other (e.g. a daemon)
>
> 3. format: raw, qcow, qcow2, vhd, qed
>
> The combination virtio+phy would be equivalent to vhost, BTW. And
> virtio+other might even use vhost-user, depending on the daemon.
>

Sorry to fly in here 7 months after the fact to quibble about something,
but since we're baking something into an external interface, I think it's
worth making sure we get it exactly right.

It seems to me that the current way that "backendtype" is used is to tell
libxl how to set up the connection.  For "phy", it talks to the dom0
blkback driver, and hands it a file or some other physical device.  For
qdisk, it talks to the QEMU which is already associated with the domain:
either the qdisk process started up when the PV/H domain was created, or
the emulator started up when the HVM guest was created.  (Correct me if
I've made a mistake here.)

Given that, "other" is just wrong.  The toolstack needs to know
*specifically* how to drive the thing that's going to be providing the
protocol.  I'm not sure what you're expecting to use in this case, but
presumably if we're adding a third thing (in addition to blkback and QEMU),
then at some point we're going to be adding a fourth thing, and a fifth
thing as well.  What do we call them?  "Other other" and "other other
other"?

If we're planning on having a general interface for these daemons that are
going to be virtio providers, we should come up with a specific name for
them as a class, and use that for the name.

Furthermore, "virtio+phy == vhost" is just wrong: phy means that libxl is
using blkback, and blkback can't speak the virtio protocol.  If we want to
use vhost (or something like it), then it will need its own separate
backendtype.
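To make the three axes concrete, this is roughly how they might surface in
an xl disk configuration (a sketch only: the first two stanzas reflect
today's syntax as I understand it, the paths are made up, and the last
stanza is the form proposed by the patch under discussion, not an agreed
interface):

```
# Xen PV disk via blkback, raw block device (the common case today)
disk = [ 'target=/dev/vg/guest, vdev=xvda, format=raw, backendtype=phy' ]

# Xen PV disk served by qemu (qdisk) from a qcow2 image
disk = [ 'target=/var/lib/xen/images/guest.qcow2, vdev=xvdb, format=qcow2, backendtype=qdisk' ]

# Proposed: virtio disk served by a standalone daemon ("backendtype=other")
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio' ]
```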

 -George

--00000000000006517d05e230f4ef--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 12:59:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 12:59:37 +0000
From: George Dunlap <George.Dunlap@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Juergen
 Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
Date: Fri, 24 Jun 2022 12:59:20 +0000
Message-ID: <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/signed;
	boundary="Apple-Mail=_03AD81FD-8F39-4C72-AF6F-0D4E6C34C573";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0

--Apple-Mail=_03AD81FD-8F39-4C72-AF6F-0D4E6C34C573
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
	charset=utf-8


> On 17 Jun 2022, at 17:14, Oleksandr Tyshchenko <olekstysh@gmail.com> wrote:
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch adds basic support for configuring and assisting virtio-mmio
> based virtio-disk backend (emulator) which is intended to run out of
> Qemu and could be run in any domain.
> Although the Virtio block device is quite different from traditional
> Xen PV block device (vbd) from the toolstack's point of view:
> - as the frontend is virtio-blk which is not a Xenbus driver, nothing
>   written to Xenstore are fetched by the frontend currently ("vdev"
>   is not passed to the frontend). But this might need to be revised
>   in future, so frontend data might be written to Xenstore in order to
>   support hotplugging virtio devices or passing the backend domain id
>   on arch where the device-tree is not available.
> - the ring-ref/event-channel are not used for the backend<->frontend
>   communication, the proposed IPC for Virtio is IOREQ/DM
> it is still a "block device" and ought to be integrated in existing
> "disk" handling. So, re-use (and adapt) "disk" parsing/configuration
> logic to deal with Virtio devices as well.
> 
> For the immediate purpose and an ability to extend that support for
> other use-cases in future (Qemu, virtio-pci, etc) perform the following
> actions:
> - Add new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
>   that in the configuration
> - Introduce new disk "specification" and "transport" fields to struct
>   libxl_device_disk. Both are written to the Xenstore. The transport
>   field is only used for the specification "virtio" and it assumes
>   only "mmio" value for now.
> - Introduce new "specification" option with "xen" communication
>   protocol being default value.
> - Add new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as current
>   one (LIBXL__DEVICE_KIND_VBD) doesn't fit into Virtio disk model
> 
> An example of domain configuration for Virtio disk:
> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']
> 
> Nothing has changed for default Xen disk configuration.
> 
> Please note, this patch is not enough for virtio-disk to work
> on Xen (Arm), as for every Virtio device (including disk) we need
> to allocate Virtio MMIO params (IRQ and memory region) and pass
> them to the backend, also update Guest device-tree. The subsequent
> patch will add these missing bits. For the current patch,
> the default "irq" and "base" are just written to the Xenstore.
> This is not an ideal splitting, but this way we avoid breaking
> the bisectability.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

OK, I am *really* sorry for coming in here at the last minute and
quibbling about things. But this introduces a public interface which
looks really wrong to me.  I've replied to the mail from December where
Juergen proposed the "Other" protocol.

Hopefully this will be a simple matter of finding a better name than
"other".  (Or you guys convincing me that "other" is really the best name
for this value; or even Anthony asserting his right as a maintainer to
overrule my objection if he thinks I'm out of line.)

FWIW the Golang changes look fine.

 -George



--Apple-Mail=_03AD81FD-8F39-4C72-AF6F-0D4E6C34C573--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 13:06:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 13:06:47 +0000
Date: Fri, 24 Jun 2022 15:06:35 +0200
From: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [RFC PATCH 0/2] Add a new acquire resource to query vcpu
 statistics
Message-ID: <20220624130635.GA249540@horizon>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <EEFF4C8C-F26D-47CF-8E5D-5E62BB6579BC@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <EEFF4C8C-F26D-47CF-8E5D-5E62BB6579BC@citrix.com>

Hello George, and thanks for the review! You will find my comments below.

On Fri, Jun 17, 2022 at 07:54:32PM +0000, George Dunlap wrote:
> 
> 
> > On 17 May 2022, at 15:33, Matias Ezequiel Vara Larsen <matiasevara@gmail.com> wrote:
> > 
> > Hello all,
> > 
> > The purpose of this RFC is to get feedback about a new acquire resource that
> > exposes vcpu statistics for a given domain. The current mechanism to get those
> > statistics is by querying the hypervisor. This mechanism relies on a hypercall
> > and holds the domctl spinlock during its execution. When a pv tool like xcp-rrdd
> > periodically samples these counters, it ends up affecting other paths that share
> > that spinlock. By using acquire resources, the pv tool only requires a few
> > hypercalls to set the shared memory region and samples are got without issuing
> > any other hypercall. The original idea has been suggested by Andrew Cooper to
> > which I have been discussing about how to implement the current PoC. You can
> > find the RFC patch series at [1]. The series is rebased on top of stable-4.15.
> > 
> > I am currently a bit blocked on 1) what to expose and 2) how to expose it. For
> > 1), I decided to expose what xcp-rrdd is querying, e.g., XEN_DOMCTL_getvcpuinfo.
> > More precisely, xcp-rrdd gets runstate.time[RUNSTATE_running]. This is a uint64_t
> > counter. However, the time spent in other states may be interesting too.
> > Regarding 2), I am not sure if simply using an array of uint64_t is enough or if
> > a different interface should be exposed. The remaining question is when to get
> > new values. For the moment, I am updating this counter during
> > vcpu_runstate_change().
> > 
> > The current series includes a simple pv tool that shows how this new interface is
> > used. This tool maps the counter and periodically samples it.
> > 
> > Any feedback/help would be appreciated.
> 
> Hey Matias,
> 
> Sorry it’s taken so long to get back to you.  My day-to-day job has shifted away from technical things to community management; this has been on my radar but I never made time to dig into it.
> 
> There are some missing details I’ve had to try to piece together about the situation, so let me sketch things out a bit further and see if I understand the situation:
> 
> * xcp-rrdd currently wants (at minimum) to record runstate.time[RUNSTATE_running] for each vcpu.  Currently that means calling XEN_DOMCTL_getvcpuinfo, which has to hold a single global domctl_lock (!) for the entire hypercall; and of course must be iterated over every vcpu in the system for every update.
> 

For example, in xcp-ng, xcp-rrdd samples all the VCPUs of the system every 5
seconds. xcp-rrdd also issues other queries such as XEN_DOMCTL_getdomaininfo.

Out of curiosity, do you know of any benchmark to measure the impact of this
querying? My guess is that the latency of domctl-based operations would increase
with the number of VCPUs. In such a scenario, the time to query all vcpus grows
with the number of vcpus, so the domctl_lock is held longer. However, this would
only be observable on a large enough system.

> * VCPUOP_get_runstate_info copies out a vcpu_runstate_info struct, which contains information on the other runstates.  Additionally, VCPUOP_register_runstate_memory_area already does something similar to what you want: it passes a virtual address to Xen, which Xen maps, and copies information about the various vcpus into (in update_runstate_area()).
> 
> * However, the above assumes a domain of “current->domain”: that is, a domain can call VCPUOP_get_runstate_info on one of its own vcpus, but dom0 cannot call it to get information about the vcpus of other domains.
> 
> * Additionally, VCPUOP_register_runstate_memory_area registers by *virtual address*; this is actually problematic even for guest kernels looking at their own vcpus; but would be completely inappropriate for a dom0 userspace application, which is what you’re looking at.
> 

I just learned about VCPUOP_register_runstate_memory_area a few days ago. I did
not know that it works only for current->domain. Thanks for pointing it out.

> Your solution is to expose things via the xenforeignmemory interface instead, modelled after the vmtrace_buf functionality.
> 
> Does that all sound right?
> 

That's correct. I used the vmtrace_buf functionality as inspiration.

> I think at a high level that’s probably the right way to go.
> 
> As you say, my default would be to expose similar information as VCPUOP_get_runstate_info.  I’d even consider just using vcpu_runstate_info_t.
> 
> The other option would be to try to make the page a more general “foreign vcpu info” page, which we could expand with more information as we find it useful.
> 
> In this patch, you’re allocating 4k *per vcpu on the entire system* to hold a single 64-bit value; even if you decide to use vcpu_runstate_info_t, there’s still quite a large wastage.  Would it make sense rather to have this pass back an array of MFNs designed to be mapped contiguously, with the vcpus listed as an array? This seems to be what XENMEM_resource_ioreq_server does.
> 
> The advantage of making the structure extensible is that we wouldn’t need to add another interface, and potentially another full page, if we wanted to add more functionality that we wanted to export.  On the other hand, every new functionality that we add may require adding code to copy things into it; making it so that such code is added bit by bit as it’s requested might be better.
> 

The current PoC is indeed wasteful of memory in two senses:
1) the data structures are not well chosen
2) memory is allocated unconditionally

For 1), you propose to use an extensible structure on top of an array of MFNs. I
checked xen.git/xen/include/public/hvm/ioreq.h, which defines the
structure:

struct shared_iopage {
    struct ioreq vcpu_ioreq[1];
};

And then, it accesses it as:

p->vcpu_ioreq[v->vcpu_id];

I could use similar structures; let me sketch them here and then I will write
down a design document. The extensible structures could look like:

struct vcpu_stats {
    uint64_t runstate_running_time;
    /* potentially other runstate-time counters */
};

struct shared_statspage {
    /* potentially other counters, e.g., domain info */
    struct vcpu_stats vcpu_info[1];
};

The shared_statspage structure would be mapped on top of an array of contiguous
MFNs, with the vcpus listed as an array. I think this structure could be
extended to record per-domain counters by defining them just before vcpu_info[1].
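To make the indexing concrete, a consumer of the mapped region might look like
the following sketch. The struct layout and field names are only the
hypothetical ones from above (nr_vcpus is an invented per-domain field), and the
mapping is simulated with malloc() instead of xenforeignmemory:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct vcpu_stats {
    uint64_t runstate_running_time;
};

struct shared_statspage {
    uint64_t nr_vcpus;             /* hypothetical per-domain field */
    struct vcpu_stats vcpu_info[]; /* indexed by vcpu_id, as in shared_iopage */
};

/* In a real pv tool 'p' would come from mapping the stats_table resource. */
static uint64_t sample_running_time(const struct shared_statspage *p,
                                    unsigned int vcpu_id)
{
    return p->vcpu_info[vcpu_id].runstate_running_time;
}
```

Using a flexible array member keeps the per-vcpu entries directly after the
per-domain counters, so the whole region maps as one contiguous object.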

What do you think?

For 2), you propose a domctl flag on domain creation to enable/disable the
allocation and use of these buffers. I think that is the right way to go for the
moment.

There is also a third point, regarding what Jan suggested about how to ensure
that the consumed data is consistent. I do not have an answer for that yet; I
will think about it.
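For what it's worth, one common way to get consistent reads from a page updated
by a single writer is a sequence counter: the writer makes the counter odd while
updating and even when done, and the reader retries until it observes a stable
even value. A sketch only, not an actual Xen interface (the barriers a real
implementation would need are noted in comments):

```c
#include <assert.h>
#include <stdint.h>

struct vcpu_stats_seq {
    volatile uint32_t seq;   /* odd while an update is in progress */
    uint64_t runstate_running_time;
};

/* Writer side (would run in Xen, e.g. from vcpu_runstate_change()). */
static void writer_update(struct vcpu_stats_seq *s, uint64_t val)
{
    s->seq++;                       /* becomes odd: update in progress */
    /* a real implementation needs a write barrier here */
    s->runstate_running_time = val;
    /* ... and another write barrier here */
    s->seq++;                       /* becomes even: update complete */
}

/* Reader side (the pv tool): retry until a stable, even sequence is seen. */
static uint64_t reader_sample(const struct vcpu_stats_seq *s)
{
    uint32_t seq;
    uint64_t val;

    do {
        seq = s->seq;
        /* a real implementation needs a read barrier here */
        val = s->runstate_running_time;
    } while ((seq & 1) || seq != s->seq);

    return val;
}
```

This costs only one extra 32-bit field per entry, so it would fit the
extensible-structure approach without much overhead.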

I will address these points and submit v1 in the next few weeks.   

Thanks, Matias.

> I have some more comments I’ll give on the 1/2 patch.
> 
>  -George
> 
> 
> 
> 
> 
> 
> > 
> > Thanks, Matias.
> > 
> > [1] https://github.com/MatiasVara/xen/tree/feature_stats
> > 
> > Matias Ezequiel Vara Larsen (2):
> >  xen/memory : Add stats_table resource type
> >  tools/misc: Add xen-stats tool
> > 
> > tools/misc/Makefile         |  5 +++
> > tools/misc/xen-stats.c      | 83 +++++++++++++++++++++++++++++++++++++
> > xen/common/domain.c         | 42 +++++++++++++++++++
> > xen/common/memory.c         | 29 +++++++++++++
> > xen/common/sched/core.c     |  5 +++
> > xen/include/public/memory.h |  1 +
> > xen/include/xen/sched.h     |  5 +++
> > 7 files changed, 170 insertions(+)
> > create mode 100644 tools/misc/xen-stats.c
> > 
> > --
> > 2.25.1
> > 
> 




From xen-devel-bounces@lists.xenproject.org Fri Jun 24 13:12:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 13:12:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355566.583286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4j6e-0003MX-T0; Fri, 24 Jun 2022 13:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355566.583286; Fri, 24 Jun 2022 13:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4j6e-0003MQ-PR; Fri, 24 Jun 2022 13:12:16 +0000
Received: by outflank-mailman (input) for mailman id 355566;
 Fri, 24 Jun 2022 13:12:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YVaO=W7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4j6d-0003MH-27
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 13:12:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43705efd-f3bf-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 15:12:13 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5A6811F909;
 Fri, 24 Jun 2022 13:12:13 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 042FC13480;
 Fri, 24 Jun 2022 13:12:12 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ZlBdOyy4tWJMeQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 24 Jun 2022 13:12:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43705efd-f3bf-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656076333; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8evYzFwvo69rzCly7bkSSEquukJj4VadkKuCgUkp5U0=;
	b=L8i/stBbiZn8Np5eet3LS6aHcdtlFDIyYSB3ytyQyvywfp8RyqcpIensUryImtHUYhrDhF
	hP2yWwx5uhbJ+Th2fbJ8Ghn0j0KJCycVPiRHNl+RMoHClq6fO4u4ZLW4IlTIFmCG72gB/U
	+qt6KYxua2aonkiLWaFtvEjsnk8kL8U=
Message-ID: <b05fd5b4-c419-256d-ec56-7916cef74b39@suse.com>
Date: Fri, 24 Jun 2022 15:12:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: George Dunlap <dunlapg@umich.edu>
Cc: Oleksandr <olekstysh@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Nick Rosbrook <rosbrookn@ainfosec.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <1638982784-14390-1-git-send-email-olekstysh@gmail.com>
 <1638982784-14390-2-git-send-email-olekstysh@gmail.com>
 <YbjANCjAUGe4BAar@perard> <bce10079-abd6-c033-6273-ac0ea9f51668@gmail.com>
 <4c89e55d-4bf1-506e-d620-4a0ff18ef308@suse.com>
 <dc1b70ac-079d-5de8-cb13-6be4944cef0a@gmail.com>
 <813684b0-df71-c18b-cf4c-106cc286c035@suse.com>
 <CAFLBxZbm0KcLhpqs2tGXgx6-JP+3OtkMEReTaphBC-JHZ3sJDQ@mail.gmail.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH V6 1/2] libxl: Add support for Virtio disk configuration
In-Reply-To: <CAFLBxZbm0KcLhpqs2tGXgx6-JP+3OtkMEReTaphBC-JHZ3sJDQ@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------2jYlYO08nbwtwG0jJEVYbgvG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------2jYlYO08nbwtwG0jJEVYbgvG
Content-Type: multipart/mixed; boundary="------------Nx6tv1gxiDXRxjqsti0IM9Cr";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: George Dunlap <dunlapg@umich.edu>
Cc: Oleksandr <olekstysh@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Nick Rosbrook <rosbrookn@ainfosec.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 George Dunlap <george.dunlap@citrix.com>
Message-ID: <b05fd5b4-c419-256d-ec56-7916cef74b39@suse.com>
Subject: Re: [PATCH V6 1/2] libxl: Add support for Virtio disk configuration
References: <1638982784-14390-1-git-send-email-olekstysh@gmail.com>
 <1638982784-14390-2-git-send-email-olekstysh@gmail.com>
 <YbjANCjAUGe4BAar@perard> <bce10079-abd6-c033-6273-ac0ea9f51668@gmail.com>
 <4c89e55d-4bf1-506e-d620-4a0ff18ef308@suse.com>
 <dc1b70ac-079d-5de8-cb13-6be4944cef0a@gmail.com>
 <813684b0-df71-c18b-cf4c-106cc286c035@suse.com>
 <CAFLBxZbm0KcLhpqs2tGXgx6-JP+3OtkMEReTaphBC-JHZ3sJDQ@mail.gmail.com>
In-Reply-To: <CAFLBxZbm0KcLhpqs2tGXgx6-JP+3OtkMEReTaphBC-JHZ3sJDQ@mail.gmail.com>

--------------Nx6tv1gxiDXRxjqsti0IM9Cr
Content-Type: multipart/mixed; boundary="------------U8zeXxs5PUPL4lklEHwmAIjr"

--------------U8zeXxs5PUPL4lklEHwmAIjr
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDYuMjIgMTQ6NDUsIEdlb3JnZSBEdW5sYXAgd3JvdGU6DQo+IE9uIFdlZCwgRGVj
IDE1LCAyMDIxIGF0IDM6NTggUE0gSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tIA0K
PiA8bWFpbHRvOmpncm9zc0BzdXNlLmNvbT4+IHdyb3RlOg0KPiANCj4gICAgIE9uIDE1LjEy
LjIxIDE2OjAyLCBPbGVrc2FuZHIgd3JvdGU6IA0KPiANCj4gDQo+ICAgICBJbiBwcmFjdGlj
ZSB3ZSBhcmUgaGF2aW5nIHNvbWV0aGluZyBsaWtlIHRoZSAicHJvdG9jb2wiIGFscmVhZHkg
dG9kYXk6DQo+ICAgICB0aGUgZGlzayBkZXZpY2UgbmFtZSBpcyBlbmNvZGluZyB0aGF0ICgi
eHZkKiIgaXMgYSBYZW4gUFYgZGlzaywgd2hpbGUNCj4gICAgICJzZCoiIGlzIGFuIGVtdWxh
dGVkIFNDU0kgZGlzaywgd2hpY2ggaGFwcGVucyB0byBiZSBwcmVzZW50ZWQgdG8gdGhlDQo+
ICAgICBndWVzdCBhcyAieHZkKiIsIHRvbykuIEFuZCB0aGlzIGlzIGFuIGFkZGl0aW9uYWwg
aW5mb3JtYXRpb24gbm90DQo+ICAgICByZWxhdGVkIHRvIHRoZSBiYWNrZW5kdHlwZS4NCj4g
DQo+ICAgICBTbyB3ZSBoYXZlIGJhc2ljYWxseSB0aGUgZm9sbG93aW5nIGNvbmZpZ3VyYXRp
b24gaXRlbXMsIHdoaWNoIGFyZQ0KPiAgICAgb3J0aG9nb25hbCB0byBlYWNoIG90aGVyIChz
b21lIGNvbWJpbmF0aW9ucyBtaWdodCBub3QgbWFrZSBzZW5zZSwNCj4gICAgIGJ1dCBpbiB0
aGVvcnkgbW9zdCB3b3VsZCBiZSBwb3NzaWJsZSk6DQo+IA0KPiAgICAgMS4gcHJvdG9jb2w6
IGVtdWxhdGVkIChub3QgUFYpLCBYZW4gKGxpa2UgdG9kYXkpLCB2aXJ0aW8NCj4gDQo+ICAg
ICAyLiBiYWNrZW5kdHlwZTogcGh5IChibGtiYWNrKSwgcWRpc2sgKHFlbXUpLCBvdGhlciAo
ZS5nLiBhIGRhZW1vbikNCj4gDQo+ICAgICAzLiBmb3JtYXQ6IHJhdywgcWNvdywgcWNvdzIs
IHZoZCwgcWVkDQo+IA0KPiAgICAgVGhlIGNvbWJpbmF0aW9uIHZpcnRpbytwaHkgd291bGQg
YmUgZXF1aXZhbGVudCB0byB2aG9zdCwgQlRXLiBBbmQNCj4gICAgIHZpcnRpbytvdGhlciBt
aWdodCBldmVuIHVzZSB2aG9zdC11c2VyLCBkZXBlbmRpbmcgb24gdGhlIGRhZW1vbi4NCj4g
DQo+IA0KPiBTb3JyeSB0byBmbHkgaW4gaGVyZSA3IG1vbnRocyBhZnRlciB0aGUgZmFjdCB0
byBxdWliYmxlIGFib3V0IHNvbWV0aGluZywgYnV0IA0KPiBzaW5jZSB3ZSdyZSBiYWtpbmcg
c29tZXRoaW5nIGludG8gYW4gZXh0ZXJuYWwgaW50ZXJmYWNlLCBJIHRoaW5rIGl0J3Mgd29y
dGggDQo+IG1ha2luZyBzdXJlIHdlIGdldCBpdCBleGFjdGx5IHJpZ2h0Lg0KPiANCj4gSXQg
c2VlbXMgdG8gbWUgdGhhdCB0aGUgY3VycmVudCB3YXkgdGhhdCAiYmFja2VuZHR5cGUiIGlz
IHVzZWQgaXMgdG8gdGVsbCBsaWJ4bCANCj4gaG93IHRvIHNldCB1cCB0aGUgY29ubmVjdGlv
bi7CoCBGb3IgInBoeSIsIGl0IHRhbGtzIHRvIHRoZSBkb20wIGJsa2JhY2sgZHJpdmVyLCAN
Cj4gYW5kIGhhbmRzIGl0IGEgZmlsZSBvciBzb21lIG90aGVyIHBoeXNpY2FsIGRldmljZS7C
oCBGb3IgcWRpc2ssIGl0IHRhbGtzIHRvIHRoZSANCj4gUUVNVSB3aGljaCBpcyBhbHJlYWR5
IGFzc29jaWF0ZWQgd2l0aCB0aGUgZG9tYWluOiBlaXRoZXIgdGhlIHFkaXNrIHByb2Nlc3Mg
DQo+IHN0YXJ0ZWQgdXAgd2hlbiB0aGUgUFYvSCBkb21haW4gd2FzIGNyZWF0ZWQsIG9yIHRo
ZSBlbXVsYXRvciBzdGFydGVkIHVwIHdoZW4gdGhlIA0KPiBIVk0gZ3Vlc3Qgd2FzIGNyZWF0
ZWQuwqAgKENvcnJlY3QgbWUgaWYgSSd2ZSBtYWRlIGEgbWlzdGFrZSBoZXJlLikNCj4gDQo+
IEdpdmVuIHRoYXQsICJvdGhlciIgaXMganVzdCB3cm9uZy7CoCBUaGUgdG9vbHN0YWNrIG5l
ZWRzIHRvIGtub3cgKnNwZWNpZmljYWxseSogDQo+IGhvdyB0byBkcml2ZSB0aGUgdGhpbmcg
dGhhdCdzIGdvaW5nIHRvIGJlIHByb3ZpZGluZyB0aGUgcHJvdG9jb2wuwqAgSSdtIG5vdCBz
dXJlIA0KPiB3aGF0IHlvdSdyZSBleHBlY3RpbmcgdG8gdXNlIGluIHRoaXMgY2FzZSwgYnV0
IHByZXN1bWFibHkgaWYgd2UncmUgYWRkaW5nIGEgDQo+IHRoaXJkIHRoaW5nIChpbiBhZGRp
dGlvbiB0byBibGtiYWNrIGFuZCBRRU1VKSwgdGhlbiBhdCBzb21lIHBvaW50IHdlJ3JlIGdv
aW5nIHRvIA0KPiBiZSBhZGRpbmcgYSBmb3VydGggdGhpbmcsIGFuZCBhIGZpZnRoIHRoaW5n
IGFzIHdlbGwuwqAgV2hhdCBkbyB3ZSBjYWxsIHRoZW0/ICANCj4gIk90aGVyIG90aGVyIiBh
bmQgIm90aGVyIG90aGVyIG90aGVyIj8NCg0KVGhlIGlkZWEgd2FzIHRvIGFsbG93IGFuIHVu
c3BlY2lmaWVkIGV4dGVybmFsIGNvbXBvbmVudCB0byBiZSB1c2VkLiBJdA0Kd291bGQgb25s
eSBuZWVkIGluZm9ybWF0aW9uIGF2YWlsYWJsZSBpbiBYZW5zdG9yZSBhbmQgImRvIHRoZSBy
aWdodCB0aGluZyIuDQoNClRoaXMgYWxsb3dzIHRvIGhhdmUgY3VzdG9tIGJhY2tlbmRzIHdp
dGhvdXQgaGF2aW5nIHRvIG1vZGlmeSAobGliKXhsIG9yDQp0aGUgY29uZmlnIHN5bnRheCBp
biBjYXNlIGEgbmV3IG9uZSBpcyBiZWluZyBhZGRlZC4NCg0KSW4gY2FzZSBYZW5zdG9yZSBp
c24ndCBlbm91Z2ggZm9yIHRoZSBuZWVkZWQgaW5mb3JtYXRpb24gb2YgYSBuZXcgYmFja2Vu
ZCwNCmEgbW9yZSBzcGVjaWZpYyBiYWNrZW5kdHlwZSB3b3VsZCBiZSBuZWVkZWQuDQoNCj4g
SWYgd2UncmUgcGxhbm5pbmcgb24gaGF2aW5nIGEgZ2VuZXJhbCBpbnRlcmZhY2UgZm9yIHRo
ZXNlIGRhZW1vbnMgdGhhdCBhcmUgZ29pbmcgDQo+IHRvIGJlIGJlIHZpcnRpbyBwcm92aWRl
cnMsIHdlIHNob3VsZCBjb21lIHVwIHdpdGggYSBzcGVjaWZpYyBuYW1lIGZvciB0aGVtIGFz
IGEgDQo+IGNsYXNzLCBhbmQgdXNlIHRoYXQgZm9yIHRoZSBuYW1lLg0KPiANCj4gRnVydGhl
cm1vcmUsICJ2aXJ0aW8rcGh5ID09IHZob3N0IiBpcyBqdXN0IHdyb25nOiBwaHkgbWVhbnMg
dGhhdCBsaWJ4bCBpcyB1c2luZyANCj4gYmxrYmFjaywgYW5kIGJsa2JhY2sgY2FuJ3Qgc3Bl
YWsgdGhlIHZpcnRpbyBwcm90b2NvbC7CoCBJZiB3ZSB3YW50IHRvIHVzZSB2aG9zdCANCj4g
KG9yIHNvbWV0aGluZyBsaWtlIGl0KSwgdGhlbiBpdCB3aWxsIG5lZWQgaXRzIG93biBzZXBh
cmF0ZSBiYWNrZW5kdHlwZS4NCg0KUmVhbGx5PyBUb2RheSAidmlydGlvIiArICJwaHkiIGlz
bid0IHVzZWQgYW55d2hlcmUsIGFzIGl0IGN1cnJlbnRseSBtYWtlcyBubw0Kc2Vuc2UuIEp1
c3QgYmVjYXVzZSAicGh5IiBoYXMgYmVlbiB1c2VkIGZvciBibGtiYWNrLCBpdCBkb2Vzbid0
IG1lYW4gdGhhdA0KdGhpcyBuZWVkcyB0byBzdGF5IHRoaXMgd2F5LiBXaXRoIHRoZSBuZXcg
c2NoZW1lLCAieGVuK3BoeSIgbWVhbnMgYmxrYmFjaywNCndoaWxlICJ2aXJ0aW8rcGh5IiBp
cyB5ZXQgdW5kZWZpbmVkLiBJZGVudGlmeWluZyAicGh5IiB3aXRoIGEgInBoeXNpY2FsIGRl
dmljZSIsDQppdCB3b3VsZCBtYWtlIHNlbnNlIHRvIHVzZSAidmlydGlvK3BoeSIgZm9yIHZo
b3N0LCBhcyB2aG9zdC1ibG9jayBpcyB0aGUNCmVxdWl2YWxlbnQgdG8gYmxrYmFjayBpbiB0
aGUgdmlydGlvIHdvcmxkLg0KDQoNCkp1ZXJnZW4NCg==
--------------U8zeXxs5PUPL4lklEHwmAIjr
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------U8zeXxs5PUPL4lklEHwmAIjr--

--------------Nx6tv1gxiDXRxjqsti0IM9Cr--

--------------2jYlYO08nbwtwG0jJEVYbgvG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK1uCwFAwAAAAAACgkQsN6d1ii/Ey/b
ugf/TrWywdkIHq2tdD+8Jxo4MeeyyCyaoHz9RIGtDhda05ogY89oI6m/594hNwQkGy63GtuXvu6g
AncJNAPUhb5ZEgClItZlp9/P/HffOsiiOhLg6uM5odwXjwCyiW1JT233H6Y1+9FO6dAkuYM02fNV
D2sKKkQFdZatZH2npsXs20O9tcRlq7rbFloYcFT7pC2dEas9CSgjq0PlD/2kbWyr+N9dbxNaFM0T
mBgqr/5wAzhVkcmbkK1Nfh1V36c3+PevkZCpiwiD66RZkypDaK/lyJ3SNmf9Ph+mKbhIWVHwZX3Q
bSY8vEkTY4LRxL9K+oAQBF5TMEezyzxlNToSkbcQKQ==
=+HS9
-----END PGP SIGNATURE-----

--------------2jYlYO08nbwtwG0jJEVYbgvG--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 13:34:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 13:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355580.583297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4jSB-0005q2-Le; Fri, 24 Jun 2022 13:34:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355580.583297; Fri, 24 Jun 2022 13:34:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4jSB-0005pv-Ii; Fri, 24 Jun 2022 13:34:31 +0000
Received: by outflank-mailman (input) for mailman id 355580;
 Fri, 24 Jun 2022 13:34:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CwSF=W7=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1o4jS9-0005pp-TF
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 13:34:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2067.outbound.protection.outlook.com [40.107.22.67])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ef15a51-f3c2-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 15:34:28 +0200 (CEST)
Received: from AM7PR03CA0001.eurprd03.prod.outlook.com (2603:10a6:20b:130::11)
 by AM0PR08MB4097.eurprd08.prod.outlook.com (2603:10a6:208:132::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.16; Fri, 24 Jun
 2022 13:34:23 +0000
Received: from VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::c6) by AM7PR03CA0001.outlook.office365.com
 (2603:10a6:20b:130::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Fri, 24 Jun 2022 13:34:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT003.mail.protection.outlook.com (10.152.18.108) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Fri, 24 Jun 2022 13:34:22 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Fri, 24 Jun 2022 13:34:22 +0000
Received: from ff3f87bdb656.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AB417532-F3A0-4E0D-9528-17315F964DE8.1; 
 Fri, 24 Jun 2022 13:34:15 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ff3f87bdb656.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 24 Jun 2022 13:34:15 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com (2603:10a6:208:103::16)
 by DBBPR08MB4457.eurprd08.prod.outlook.com (2603:10a6:10:cd::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Fri, 24 Jun
 2022 13:34:14 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9]) by AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9%5]) with mapi id 15.20.5373.016; Fri, 24 Jun 2022
 13:34:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ef15a51-f3c2-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=dKtGKVQMJ5jlTScw8XhuP2vkbzJBljMDmS6KJbc1gviu+PCZkYIOq8J1Ry7IAOLSFN2o9C7NCTNi8AJM8+PDJN4YKWeCR78GyNFvWOjwXijxEsngXQezF9zobhW/BCsR1NfGdum4Xex8o9eMI2TOeCKer3DyYB/0kugGBbvPoSH5A2RxIUx0XKkLFf+5ftQvyT2/yatnpV8oLfYYZLubOkiO2lG7c6V3biUFG0hl7vQxPRYSkMZnjQMxW9XSbYuiy3JUbp57sxwDKLjKGQOV8QVwoCLWlWZq7h6gv389DoHVfPqSCQrOo6dyF4inRH0YbWZgjFn88MSI+BlDYDWquA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t2Qn+jG3krrZu6rweNVUSTRPzv5l48WUrKJdU/Z1VZs=;
 b=VDgsE+S/PEFXLmyeyyTgzJHCS5UuxFDV6F5WnC3Y4mQrMTX2eNKjPHUCue9nN3cltjZPIa3cUQiPjyUZ8xX7XFgwehp12UlAfpCkPzhdzEvgDDsRvTPgiJmHig4FmXFADKnR2+d87S8o8KqgOjMfCpX5NIm2xgrDw+wkdqysrc+OurqJptVjjLVtB1i+i1TfZd+qqSNSb4g79jD6xLqUyT78HBMWCr8doTvG+oveLy8w/P6Hbdro4ha4Jxvp8DI4r2zLEfncKcW3HhXsg+lTCWbWsOT2JJCO0CEvmnGHJbXmis/S6D1MtMqrJSWp6LvyWahbJrCtDFUVpKMLs/pBXA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t2Qn+jG3krrZu6rweNVUSTRPzv5l48WUrKJdU/Z1VZs=;
 b=lr3B0PhsUEgA2SWG4V8FubO3NYBES6yggnHGV8j6csKe/qBKTzVWZ2Cxqz1sbO4lK9yO1xCTH7+UrWjjFJX5lrI3y9HoBdY38uZFKfyhz2GWmEnKCg5z82lZL5sE860aKoQmsLX7l9PLfcF4B+bUHhLysZPbFxb0eCims/UOmf8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6c28ce5c7251c8f4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mBdW4NgddW0ESM8fyd7/VlfeKlEW1mVjijoyKr1Zbj34bAbxi3X3DXKE6y/IvTLD5s2aj/Yd64w5b8ZtCD/emti2MKYMbiiqR8GmY3fF+ebm8mv0eptunWY703SGhjUhBFEgFsRFSO1SYDFUCfRycJi+Vkna+ejyEr7yu/NkFFckbSe6ukddkJcBWCbCJKlRFRDWkUNpgR8zX8o38g4rXMAUPouXuQUzpCDkLrYjVSa22oho/R3lm1wKNPwp/WB1EJgqAL6Tdz6BdOxW0kpHrW12J1MQj6K8vsvn29UErMzqqzDUp8bewg1dZrLT3R7qAuEbVbWSEDdQPwzsaC1oGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t2Qn+jG3krrZu6rweNVUSTRPzv5l48WUrKJdU/Z1VZs=;
 b=XKMQ89nhit8faPTJ9kmq88DAFNCQ6X3iHUHX2ECUCwhrFnIZZH0nc3YculrFeMkEjmhD8egMeg7+foXQJeFPL3FRhY5y9hhM6I1DSreHoyBsM9AgS+Y/9GmGQfMgsjAFI7d0cSZrJaLM8bvZNmeS3I+ph6LKTKwu0t69ucJGAMHBGCSVNpZYrYndVAo8fr5M1D4Ncb6GwglSP8eLbTJgc4512MUAngaed5z3blc3kLtex89Ek28+0SJ7+TXD58pEb6WfckvH3Rt6YFiP9EX5Wa42/rYI+BE5ToBI80rtoa4U52wBjgWLCikYG9WkYU8V10VFrmze1fCAJPYCZNrUrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t2Qn+jG3krrZu6rweNVUSTRPzv5l48WUrKJdU/Z1VZs=;
 b=lr3B0PhsUEgA2SWG4V8FubO3NYBES6yggnHGV8j6csKe/qBKTzVWZ2Cxqz1sbO4lK9yO1xCTH7+UrWjjFJX5lrI3y9HoBdY38uZFKfyhz2GWmEnKCg5z82lZL5sE860aKoQmsLX7l9PLfcF4B+bUHhLysZPbFxb0eCims/UOmf8=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index: AQHYh7nJHM8+T3fZ/0Gr0RGCgDxGeq1eaYGAgAALcICAAARSgIAAFXiA
Date: Fri, 24 Jun 2022 13:34:14 +0000
Message-ID: <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
 <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
In-Reply-To: <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 932c4ab9-ed46-4201-8aa9-08da55e63fed
x-ms-traffictypediagnostic:
	DBBPR08MB4457:EE_|VE1EUR03FT003:EE_|AM0PR08MB4097:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 hfm/VO6iV7s95ZCu30fcN64vRkfweEE5UEkChDbngwFd7DfjoSiPfe7mxCn8DYgNBOQ8r7rI6HasCG6RWubHSE4E+zFJZFsE1aXz2X3uZpC/J7gH408Gqbb8XGYy4Nh9gklpY67jKLp9UiCntytLho0Zx6x1uDRHXLoYFabb4L9HTeHHclv6oqqiuB98CzLoHVGnlAdfkFcjElnhHijeDtMQ9InMIofAgoFKLGWsQ3/DQWhy4k1W636Zw1T14OmpMcYJr+rjjo/6eyynB3GnSgaz2uF/Wo9RFACKkrXxrkScXPjqPE8w4t1WTzbgUX0UDTjrfO99fMqSgUdRl5sfYx/upFZs06T1u0WIvJpHuPAHZm5VVcGSaVqvqcODykag5jJjgDcNIoF/Zx0OuRY3hKRVa5ofuM/K175QKLGM5Yh3grqZ4gs0qaZkv3zZcmCdvMgkfTy5/ofR/E7Josp9btmcsRkEQ+SNTmnwLqsuzwsrQEBVFH10IVtNNcC1CVfsK9hV1Z8sDbIw4n5BWLrZCtI8L+5+cxjwVyx3srREnuAvPmejbcA5YIzCXcjj2hTLDfKmNLLUmGlKKQF4RxkqhP13/rJTqaPqtzOJvQuTCdeeb4pYjiz2Cam+6RI28xDCcJqK6+iD/0Mn3m++kCdmIsCXke5M+p505rfN6yNGmrZ+vn+CiXqIp1PtBmWsUf0Y6uQYxQW9ELrJu7PrdR0DRuJXviNwcFeN4VqMThLAmsxVWq1NsYOmVU0Q9bHoaQazglAvbtB1nsg6NwldyOSJboqsPPLJCowlb5hURCGLvozbHaMnZeg0TTQppto5+/ue9TNyuASkz9SjlbiWcWVcIgJmNodg5CsQVsvrfGO5+5VePlGdGyFxStJbKXE8rnnxuvOsfHZyMnErGgRzyqB3sw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3809.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(396003)(346002)(366004)(136003)(39860400002)(376002)(66556008)(91956017)(54906003)(66446008)(36756003)(38100700002)(8676002)(64756008)(66946007)(66476007)(33656002)(83380400001)(4326008)(86362001)(76116006)(316002)(6916009)(122000001)(186003)(41300700001)(478600001)(71200400001)(38070700005)(2906002)(6506007)(53546011)(2616005)(26005)(8936002)(6486002)(966005)(5660300002)(6512007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <3BF3DF9487D9714A8CCA57DA9745C62E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4457
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ae1ebdf9-6daa-4888-4d92-08da55e63ad3
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2022 13:34:22.8311
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 932c4ab9-ed46-4201-8aa9-08da55e63fed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4097

> On 24 Jun 2022, at 13:17, Julien Grall <julien@xen.org> wrote:
>
> On 24/06/2022 13:01, Luca Fancellu wrote:
>> Hi Julien,
>
> Hi Luca,
>
>>>> +First recommendation is to use exactly the same version in this page
>>>> +and provide the same option to the build system, so that every Xen
>>>> +developer can reproduce the same findings.
>>>
>>> I am not sure I agree. I think it is good that each developer uses
>>> their own version (so long as it is supported), so they may be able to
>>> find issues that may not appear with 2.7.
>> Yes, I understand, but as Bertrand says, other versions of this tool
>> don't work quite well.
>
> I have replied to this on Bertrand's e-mail.
>
>> I agree that everyone should use their own version, but for the sake of
>> reproducibility of the findings, I think we should have a common ground.
>
> I will reply to this below.
>
>> The community can however propose from time to time to bump the version
>> as long as we can say it works (maybe crossing the reports between
>> cppcheck, eclair, other proprietary tools).
>
> This would mean we should de-support 2.7, which sounds wrong if it
> worked before.

Sure, I guess that as long as we don't see regressions from version X to
X+1 we are fine with versions >= X.

>>>
>>>> +
>>>> +Install cppcheck in the system
>>>
>>> NIT: s/in/on/ I think.
>> Sure, will fix.
>>>
>>>> +==============================
>>>> +
>>>> +Cppcheck can be retrieved from the github repository or by
>>>> +downloading the tarball; the version tested so far is 2.7:
>>>> +
>>>> + - https://github.com/danmar/cppcheck/tree/2.7
>>>> + - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
>>>> +
>>>> +To compile and install it, here is the complete command line:
>>>> +
>>>> +make MATCHCOMPILER=yes \
>>>> + FILESDIR=/usr/share/cppcheck \
>>>> + CFGDIR=/usr/share/cppcheck/cfg \
>>>> + HAVE_RULES=yes \
>>>> + CXXFLAGS="-O2 -DNDEBUG -Wall -Wno-sign-compare -Wno-unused-function" \
>>>> + install
>>>
>>> Let me start by saying that I am not convinced that our documentation
>>> should explain how to build cppcheck.
>>>
>>> But if that's desired, then I think you ought to explain why we need
>>> to update CXXFLAGS (I would expect cppcheck to build everywhere
>>> without specifying additional flags).
>> Yes, you are right; this is the recommended command line for building,
>> as in the "GNU make" section of
>> https://github.com/danmar/cppcheck/blob/main/readme.md. I can add the
>> source.
>
> I think we should remove the command line and tell the user to read the
> cppcheck README.md.

Ok, sounds good to me.

>
>> My intention when writing this page was to have a common ground between
>> Xen developers, so that if one day someone came up with a fix for
>> something, we are able to reproduce the finding all together.
> Well, if someone finds a fix, you want to check against all versions,
> not just the one that warns. Otherwise, you can end up in a situation
> where you silence cppcheck 2.10 (just making up a version) but then
> introduce a warning in cppcheck 2.7.
>
> To me this is no different than other software used to build Xen. We
> don't tell the user that they should always build with GCC x.y.z.
> Instead, we provide a minimum version. This has multiple benefits:
> 1) The user doesn't need to rebuild the software and can use the one
> provided by the distributions.
> 2) Different versions find different (most of the time) valid bugs. So
> we are getting towards a better codebase.
>

Ok, I see your point: instead of saying "we use version X.Y", I will say
">=X.Y". Your comment on Bertrand's reply is on this line.

I would keep the section about compiling cppcheck, since many recent
distros don't provide cppcheck >=2.7 yet (and 2.8 is broken), if you
agree with it.

For this one:
>
> Thanks for the information. How about writing something like:
>
> "
> The minimum version required for cppcheck is 2.7. Note that at the time
> of writing (June 2022), version 2.8 is known to be broken [1].
> "
>
> [1] https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25

Sure, I can add it and rephrase that section.

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 14:59:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 14:59:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355587.583308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4km6-0005Ik-47; Fri, 24 Jun 2022 14:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355587.583308; Fri, 24 Jun 2022 14:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4km6-0005Id-17; Fri, 24 Jun 2022 14:59:10 +0000
Received: by outflank-mailman (input) for mailman id 355587;
 Fri, 24 Jun 2022 14:59:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4km5-0005IT-C9; Fri, 24 Jun 2022 14:59:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4km5-0008RB-AG; Fri, 24 Jun 2022 14:59:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4km4-0007NF-R5; Fri, 24 Jun 2022 14:59:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4km4-0000iW-Qh; Fri, 24 Jun 2022 14:59:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NW9SUsZSy53xlMB9rsT3E164MqBItnCV61J6loy3B8w=; b=DY1qEkz2Cl29xY5okatBXZZyhP
	kJszym7wbIizn27x3UeMJ+6PzfkrX0AYzRsh0qs5bb4gAkkWkcUx97xCuEw7EyP+D5f6nIk+uFJvF
	U9ERh8cgdlXWirv2YRyaeZn9Xti6y1ppehU79mZ83RLLyGpq/2UJHK8MY2o05tBv+OX4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171343: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2aee08c0b6bfb32d36bda17ab24645205a74df65
X-Osstest-Versions-That:
    ovmf=4bfd668e5edb59092a8e16414b3f6632efdac4f2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 14:59:08 +0000

flight 171343 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171343/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2aee08c0b6bfb32d36bda17ab24645205a74df65
baseline version:
 ovmf                 4bfd668e5edb59092a8e16414b3f6632efdac4f2

Last test of basis   171324  2022-06-23 06:42:01 Z    1 days
Testing same since   171343  2022-06-24 13:11:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  KasimX Liu <kasimx.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4bfd668e5e..2aee08c0b6  2aee08c0b6bfb32d36bda17ab24645205a74df65 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 15:32:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 15:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355595.583319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lHl-0000wT-NN; Fri, 24 Jun 2022 15:31:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355595.583319; Fri, 24 Jun 2022 15:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lHl-0000wM-JN; Fri, 24 Jun 2022 15:31:53 +0000
Received: by outflank-mailman (input) for mailman id 355595;
 Fri, 24 Jun 2022 15:31:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3IMW=W7=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o4lHk-0000wG-Jn
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 15:31:52 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4b987af-f3d2-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 17:31:51 +0200 (CEST)
Received: by mail-lf1-x12e.google.com with SMTP id a13so5062904lfr.10
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jun 2022 08:31:51 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 c8-20020a056512104800b0047a4f18f825sm414384lfb.74.2022.06.24.08.31.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 24 Jun 2022 08:31:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4b987af-f3d2-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=uOUdcW7R4AY8NQ/GHTwqoGgjVTj1aaOmB/taBQVJKMY=;
        b=g39lau4py5SIMDIqf8Zy5tcEyYu2tzCmb44KnVqBsjmrLEJYnswvm7OXflOiDQ/B2W
         iF3yUKyVsCLMSCVbz//dhVXgQ2yaBa9/haAzrGK3iw/X5GSdlgqzbQEU8Ww8Vxxol6bM
         dty9FFP1sKmQ8tIp26LuajmmKF3VkSFloslCyxZKUbuiN4sKrJbgpglmYEq12cTU/mbx
         9k5x8/IF0PAjbG0VERfXRy8ypfrvyMRW8BMUJQczPSRV9BAlKOqYlcO+W2Mtmjud6ExE
         tJ9tQmStrkcJ1+jaUr2PMFktF1ImPkeCvG9+AbfOeL4UuwPIXQe/A6bUZ7UxksBXfAOl
         LCBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=uOUdcW7R4AY8NQ/GHTwqoGgjVTj1aaOmB/taBQVJKMY=;
        b=dSeFwtpc8SZQR6GstyO+jFA+FtoZ1UiXUgmeFVU8h3CCkisd7YHUY3HBXCcNOPnpYj
         //pItEUcyh+c4jtEf+xdHxPOkL30RYbfUuMJdoqEjW8La4GRaKB00Gmu/N54ztku+eVR
         HOtIArqPuJJmaQer3hHvoZZEf406E6RRhWr0GApHMpXBZ6+XmN6Kqa3rvlOF+l5J404M
         lEqVtcyfSg2D6fHn4NdYjKGttH6FBFye7nYcG7GbaXj1PEHbwnPxJ4U6v0WMmgUrzljP
         Gmo7Kqsv+hN6RJyKvQ7gLevzP6Td0ZRSiXl1beZjTjTdn1WRH72ArfVfvgEe7gC/L3AA
         5ZnA==
X-Gm-Message-State: AJIora98Wxa2oxKTLxXfBL+5wCQCRFXkkZ9AOW8MLl3SsLqabpEa4LKO
	6v8kIQQrpXh1JZAYhrGwgQQ=
X-Google-Smtp-Source: AGRyM1sy0l1hQDqXf+rZiaNjpU7e8OJEfGbBriNPy6sRUWVQALdZ6fnGjw++knJ34bd7kKG8yZ9qkg==
X-Received: by 2002:a05:6512:2356:b0:47f:8756:737b with SMTP id p22-20020a056512235600b0047f8756737bmr9229551lfu.212.1656084710632;
        Fri, 24 Jun 2022 08:31:50 -0700 (PDT)
Subject: Re: [PATCH V6 2/2] xen/arm: Harden the P2M code in
 p2m_remove_mapping()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
 <1652294845-13980-2-git-send-email-olekstysh@gmail.com>
 <42b0d343-a491-877c-3b5c-d9c95872774c@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <94afe35c-627c-8aba-37ce-1d017a2e4e3c@gmail.com>
Date: Fri, 24 Jun 2022 18:31:48 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42b0d343-a491-877c-3b5c-d9c95872774c@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 23.06.22 21:08, Julien Grall wrote:
> Hi Oleksandr,


Hello Julien


>
> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> Borrow the x86's check from p2m_remove_page() which was added
>> by the following commit: c65ea16dbcafbe4fe21693b18f8c2a3c5d14600e
>> "x86/p2m: don't assert that the passed in MFN matches for a remove"
>> and adjust it to the Arm code base.
>>
>> Basically, this check is strictly needed for the xenheap pages only
>> since there are several non-protected read accesses to our simplified
>> xenheap based M2P approach on Arm (most calls to page_get_xenheap_gfn()
>> are not protected by the P2M lock).
>
> To me, this reads as if you introduced a bug in patch #1 and now you are 
> fixing it. So this patch should have been first.


Sounds like yes, I agree. But in that case I propose to rewrite this 
text as follows:


Basically, this check will be strictly needed for the xenheap pages only,
*and* only after applying the subsequent commit which will introduce the
xenheap-based M2P approach on Arm. But it will be a good opportunity to
harden the P2M code for *every* RAM page, since it is currently possible
on Arm to remove any GFN - MFN mapping (even with the wrong helpers).


And ...


>
>>
>> But, it will be a good opportunity to harden the P2M code for *every*
>> RAM pages since it is possible to remove any GFN - MFN mapping
>> currently on Arm (even with the wrong helpers).
>
>> This can result in
>> a few issues when mapping is overridden silently (in particular when
>> building dom0).
>
> Hmmm... AFAIU, in such a situation p2m_remove_mapping() wouldn't be 
> called. Instead, we would call the mapping helper twice and the 
> override would still happen.


    ... drop this one.


>
>
>>
>> Suggested-by: Julien Grall <jgrall@amazon.com>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>> You can find the corresponding discussion at:
>> https://lore.kernel.org/xen-devel/82d8bfe0-cb46-d303-6a60-2324dd76a1f7@xen.org/ 
>>
>>
>> Changes V5 -> V6:
>>   - new patch
>> ---
>>   xen/arch/arm/p2m.c | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index f87b48e..635e474 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1311,11 +1311,32 @@ static inline int p2m_remove_mapping(struct 
>> domain *d,
>>                                        mfn_t mfn)
>>   {
>>       struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    unsigned long i;
>>       int rc;
>>         p2m_write_lock(p2m);
>> +    for ( i = 0; i < nr; )
> One bit I really hate in the x86 code is the lack of in-code 
> documentation. It makes it really difficult to understand the logic.
>
> I know this code was taken from x86, but I would like to avoid making 
> the same mistake (this code is definitely not trivial). So can we 
> document the logic?


ok, I propose the following text right after acquiring the p2m lock:


  /*
   * Before removing the GFN - MFN mapping for any RAM pages, make sure
   * that there is no difference between what is already mapped and what
   * is requested to be unmapped. If the passed mapping doesn't match
   * the existing one, bail out early.
   */


Could you please clarify, do you agree with both?


>
> The code itself looks good to me.

Thanks!


>
>> +    {
>> +        unsigned int cur_order;
>> +        p2m_type_t t;
>> +        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), 
>> &t, NULL,
>> +                                         &cur_order, NULL);
>> +
>> +        if ( p2m_is_any_ram(t) &&
>> +             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), 
>> mfn_return)) )
>> +        {
>> +            rc = -EILSEQ;
>> +            goto out;
>> +        }
>> +
>> +        i += (1UL << cur_order) -
>> +             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
>> +    }
>> +
>>       rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
>>                          p2m_invalid, p2m_access_rwx);
>> +
>> +out:
>>       p2m_write_unlock(p2m);
>>         return rc;
>
> Cheers,
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355608.583406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnd-0006ej-PQ; Fri, 24 Jun 2022 16:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355608.583406; Fri, 24 Jun 2022 16:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnd-0006dn-LP; Fri, 24 Jun 2022 16:04:49 +0000
Received: by outflank-mailman (input) for mailman id 355608;
 Fri, 24 Jun 2022 16:04:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnc-0004qb-0f
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:48 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e372a70-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e372a70-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086686;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Qi8ktRfGxKz6TVNBfNc+szyw4+rNfchyhf2q4Nw7GMs=;
  b=LhGgRGkZtR1KvHYOYtPeJ8I9EBTe2tOk3L7sVfvykV5m0NkLWdoTvdVB
   wdPP8OOo70hhqoulII07BDuzjRjYy9mN2waisjtA6vXcQHcMIh1tUEitD
   mo+fqBZOr7ERPTyMimC3VRqP2PEoykcF5dFCVzfKPOY57VnKvvAX+JGqd
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73702017
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="73702017"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 09/25] tools/xenpaging: Rework makefile
Date: Fri, 24 Jun 2022 17:04:06 +0100
Message-ID: <20220624160422.53457-10-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

- Rename $(SRCS) to $(OBJS-y); we don't need to collect sources.
- Rename $(IBINS) to $(TARGETS).
- Stop cleaning "xen" and the never-set variable $(LIB).

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/xenpaging/Makefile | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index 04743b335c..e2ed9eaa3f 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -5,33 +5,33 @@ CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl) $(
 LDLIBS += $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
 
-POLICY    = default
+POLICY   := default
 
-SRC      :=
-SRCS     += file_ops.c xenpaging.c policy_$(POLICY).c
-SRCS     += pagein.c
+OBJS-y   := file_ops.o
+OBJS-y   += xenpaging.o
+OBJS-y   += policy_$(POLICY).o
+OBJS-y   += pagein.o
 
 CFLAGS   += -Werror
 CFLAGS   += -Wno-unused
 
-OBJS     = $(SRCS:.c=.o)
-IBINS    = xenpaging
+TARGETS := xenpaging
 
-all: $(IBINS)
+all: $(TARGETS)
 
-xenpaging: $(OBJS)
+xenpaging: $(OBJS-y)
 	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS) $(APPEND_LDFLAGS)
 
 install: all
 	$(INSTALL_DIR) -m 0700 $(DESTDIR)$(XEN_PAGING_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-	$(INSTALL_PROG) $(IBINS) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(LIBEXEC_BIN)
 
 uninstall:
-	rm -f $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/, $(IBINS))
+	rm -f $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/, $(TARGETS))
 
 clean:
-	rm -f *.o *~ $(DEPS_RM) xen TAGS $(IBINS) $(LIB)
+	rm -f *.o *~ $(DEPS_RM) TAGS $(TARGETS)
 
 distclean: clean
 
@@ -39,6 +39,6 @@ distclean: clean
 
 .PHONY: TAGS
 TAGS:
-	etags -t $(SRCS) *.h
+	etags -t *.c *.h
 
 -include $(DEPS_INCLUDE)
-- 
Anthony PERARD
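The `-y` suffix on `$(OBJS-y)` follows the Kbuild-style convention: objects
can later be appended conditionally through a config variable. A minimal
sketch of the pattern (the `CONFIG_DEBUG` flag and `debug.o` here are
hypothetical, purely for illustration, not part of this patch):

```make
# Unconditional objects accumulate in OBJS-y.
OBJS-y := file_ops.o
OBJS-y += xenpaging.o

# A config option set to 'y' appends to the same list; anything else
# (e.g. 'n' or unset) expands to an unused variable name and is ignored.
OBJS-$(CONFIG_DEBUG) += debug.o

xenpaging: $(OBJS-y)
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)
```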



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355602.583341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnY-00056L-HX; Fri, 24 Jun 2022 16:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355602.583341; Fri, 24 Jun 2022 16:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnY-00056E-EC; Fri, 24 Jun 2022 16:04:44 +0000
Received: by outflank-mailman (input) for mailman id 355602;
 Fri, 24 Jun 2022 16:04:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnW-0004qb-JA
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:42 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58f3da0c-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58f3da0c-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086679;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ZnL4iDV/JErozwnMFG24hDXAd8++8V6DJ1eS0kuExCY=;
  b=FrCmoyV/K8tCwytg3rOVcwpnvyMFn+iC+fhOriYWG67A3ekQ2unVQpwQ
   J04YA4cNzIQgkPzW1K5XFSEOjd2QRgzoynazS6OWpue+0goTuxp0r3+n7
   mskwmPAaJnvrpbz5EwthKeRc3wV/3lQVSZmSJiBs0ITvDTDUnZi0kALh7
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73701972
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:z/dWV6k4MZZMojTgYLzakiDo5gwYJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIdDT3SPPeLZWL9Ktx3a4mxpkwPusKGm4dhHQRs+H9gRiMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EgLd9IR2NYy24DnWV/V4
 7senuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYCig0H/zXvNwmVxxAUBtiep1s+r/mPi3q2SCT5xWun3rExvxvCAc9PJEC+/YxCmZLn
 RAaAGlTNFbZ3bvwme/lDLk37iggBJCD0Ic3s3d8zTbfHLA+TIrKWani7t5ExjYgwMtJGJ4yY
 uJGMmU3NUWfOXWjPH9UAc4BorqWp0CmfhII9lWki6klwGvqmVkZPL/Fb4OOJ43iqd9utlmcj
 nLL+SL+GB5yHMCezBKV/3TqgfXA9QvgQ54bHrC88v9sgXWQy3YVBRlQUkG0ydGjjVW0QdVYK
 Eo89S8nrKx0/0uuJvHwWxC+qTiZsB8ZR8FdDeQS7xuEwa7ZpQ2eAwAsTCNFadEgnN87Q3otz
 FDht9HmHzt0q5WOVGmQsLyTqFuaNTAOKG4eZQcNVQYf/8T4u4Y3kw7OSdB4VqWyi7XdCTz2h
 jyHsiU6r7ESltIQkbW2+0jdhDChrYSPSRQ6ji3bV3yoxhl0b4mkY8qv81ezxfRKIZudT1KBl
 GMZgMXY5+cLZbmBmyCAT/8ENK247PaCdjvHiBhgGIdJyti20yf9J8YKumg4fRo3dJZfEdP0X
 KPNkUB++b4CJ1+SVqtye8GLF4Ma56rgNMuwA5g4ceFyjohNmB6vpX8zOxbLgjiywCDAgolkZ
 87FLJ/E4WIyTP0+kWHoH7p1PaoDnHhW+I/FeXzsI/1LO5K6bWXdd7oKOUDmggsRvPLd+1W9H
 zqy2qK3J/RjvA7WOHC/HXY7dwxiEJTCLcmeRzZrXuCCOBF6P2oqFuXcx7gsE6Q8wfkLzr+Zp
 y7hAhcHoLYauZEhAV/SApyEQOOHYHqChShjYXxE0aiAgRDPnrpDHI9ALsBqLNHLBcRozOJuT
 ultRvhs9s9nE2ydkxxENMGVhNU7KHyD2FLfVwL4MWNXV8MxGGT0FirMI1KHGN8mVXHs66PTY
 tSIi2vmfHb0b14zUp+INK3+lA3ZULp0sLsaYnYk6+J7IC3EmLWG4QSr5hPrC6ng8Sn++wY=
IronPort-HdrOrdr: A9a23:RPIPMqDsij/SHLvlHemm55DYdb4zR+YMi2TC1yhKJiC9Ffbo8v
 xG/c5rsiMc5wxxZJhNo7290cq7MBHhHPxOgbX5VI3KNGKNhILBFvAH0WKI+VPd8kPFmtK1rZ
 0QEJRDNA==
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="73701972"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, Juergen Gross <jgross@suse.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, George Dunlap <george.dunlap@citrix.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, "David
 Scott" <dave@recoil.org>, Stefano Stabellini <sstabellini@kernel.org>, "Tim
 Deegan" <tim@xen.org>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
	Elena Ufimtseva <elena.ufimtseva@oracle.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
Subject: [XEN PATCH v3 00/25] Toolstack build system improvement, toward non-recursive makefiles
Date: Fri, 24 Jun 2022 17:03:57 +0100
Message-ID: <20220624160422.53457-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.toolstack-build-system-v3

Changes in v3:
- rebased
- several new patches, starting with 13/25 "tools/libs/util: cleanup Makefile"
- introducing macros to deal with linking with in-tree xen libraries
- Add -Werror to CFLAGS for all builds in tools/

Changes in v2:
- one new patch
- other changes described in patch notes

Hi everyone,

I've been looking at reworking the build system we have for the "tools/", and
transforming it into something that suits it better. There are a lot of
dependencies between the different sub-directories, so it would be nice if GNU
make could actually handle them. This is possible with "non-recursive makefiles".

With non-recursive makefiles, make will have to load/include all the makefiles
and thus will have a complete overview of all the dependencies. This will allow
make to build the necessary targets in other directories, and we won't need to
build sub-directories one by one.
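
As a rough sketch of the difference (hypothetical directory and file names,
not the actual Xen tree):

```make
# Recursive style: the parent just shells out, so a build started in
# foo/ cannot see that a library in bar/ is out of date.
SUBDIRS := bar foo
all:
	for d in $(SUBDIRS); do $(MAKE) -C $$d; done

# Non-recursive style: one make instance includes every directory's
# rules, so cross-directory dependencies are ordinary prerequisites.
include bar/rules.mk
include foo/rules.mk
foo/foo: bar/libbar.a
```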

To help with this transformation, I've chosen to go with a recent project
called "subdirmk". It helps deal with the fact that all makefiles will share
the same namespace, it is hooked into autoconf, and it lets us easily run
`make` from any subdirectory. Together, "autoconf" and "subdirmk" will also
help to get closer to being able to do out-of-tree builds of the tools, but
I'm mainly looking to have non-recursive makefiles.

Link to the project:
    https://www.chiark.greenend.org.uk/ucgi/~ian/git/subdirmk.git/

But before getting to the main course, I've got quite a few cleanups and some
changes to the makefiles. I start the patch series with patches that remove
old leftover stuff, then start reworking makefiles. There are some common
changes, like removing the "build" targets in many places, as "all" is the
more common way to spell it and "all" is the default target anyway. There are
other changes related to the conversion to "subdirmk": I start to use the
variable $(TARGETS) in several makefiles; this variable will have a special
meaning in subdirmk, which will build those targets by default.
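
As an illustration of the intended $(TARGETS) convention (the target and
object names here are made up):

```make
# In a subdirectory makefile: list what this directory produces.
# Under subdirmk, these become the directory's default goals.
TARGETS := xenfoo

xenfoo: foo.o bar.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)

.PHONY: all
all: $(TARGETS)
```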

As for the conversion to non-recursive makefiles with subdirmk, I have this WIP
branch. It contains some changes that I'm trying out, some notes, and the
conversion, one Makefile per commit. Cleanups are still needed and some
makefiles aren't converted yet, but it's otherwise mostly done.

    https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.toolstack-build-system-v1-wip-extra

With that branch, you can try something like:
    ./configure; cd tools/xl; make
and `xl` should be built, as well as all the Xen libraries needed.
Also, things like `make clean` or a rebuild should be faster in the whole
tools/ directory.

Cheers,

Anthony PERARD (25):
  tools/console: have one Makefile per program/directory
  tools/debugger/gdbsx: Fix and cleanup makefiles
  tools/examples: cleanup Makefile
  tools/firmware/hvmloader: rework Makefile
  tools/fuzz/libelf: rework makefile
  tools/fuzz/x86_instruction_emulator: rework makefile
  tools/hotplug: cleanup Makefiles
  tools/libfsimage: Cleanup makefiles
  tools/xenpaging: Rework makefile
  tools/xentop: rework makefile
  tools/xentrace: rework Makefile
  .gitignore: Cleanup ignores of tools/libs/*/{headers.chk,*.pc}
  tools/libs/util: cleanup Makefile
  tools/flask/utils: list build targets in $(TARGETS)
  libs/libs.mk: Rename $(LIB) to $(TARGETS)
  libs/libs.mk: Remove the need for $(PKG_CONFIG_INST)
  libs/libs.mk: Rework target headers.chk dependencies
  tools: Introduce $(xenlibs-rpath,..) to replace $(SHDEPS_lib*)
  tools: Introduce $(xenlibs-ldlibs, ) macro
  tools: Introduce $(xenlibs-ldflags, ) macro
  tools/helper: Cleanup Makefile
  tools/console: Use $(xenlibs-ldlibs,)
  tools/helpers: Fix build of xen-init-dom0 with -Werror
  tools: Add -Werror by default to all tools/
  tools: Remove -Werror everywhere else

 tools/configure.ac                            |  1 +
 tools/Makefile                                |  2 +-
 tools/console/Makefile                        | 49 +----------------
 tools/console/client/Makefile                 | 37 +++++++++++++
 tools/console/daemon/Makefile                 | 45 +++++++++++++++
 tools/debugger/gdbsx/Makefile                 | 20 +++----
 tools/debugger/gdbsx/gx/Makefile              | 15 +++--
 tools/debugger/gdbsx/xg/Makefile              | 25 +++------
 tools/debugger/kdd/Makefile                   |  1 -
 tools/examples/Makefile                       | 25 ++-------
 tools/firmware/hvmloader/Makefile             | 16 +++---
 tools/flask/utils/Makefile                    | 11 ++--
 tools/fuzz/cpu-policy/Makefile                |  2 +-
 tools/fuzz/libelf/Makefile                    | 21 ++++---
 tools/fuzz/x86_instruction_emulator/Makefile  | 32 +++++------
 tools/golang/xenlight/Makefile                |  2 +-
 tools/helpers/Makefile                        | 23 ++++----
 tools/hotplug/FreeBSD/Makefile                | 11 +---
 tools/hotplug/Linux/Makefile                  | 16 ++----
 tools/hotplug/Linux/systemd/Makefile          | 16 +++---
 tools/hotplug/NetBSD/Makefile                 |  9 +--
 tools/hotplug/common/Makefile                 | 16 ++----
 tools/libfsimage/common/Makefile              | 11 +---
 tools/libfsimage/ext2fs-lib/Makefile          |  9 ---
 tools/libfsimage/ext2fs/Makefile              |  9 ---
 tools/libfsimage/fat/Makefile                 |  9 ---
 tools/libfsimage/iso9660/Makefile             | 11 ----
 tools/libfsimage/reiserfs/Makefile            |  9 ---
 tools/libfsimage/ufs/Makefile                 |  9 ---
 tools/libfsimage/xfs/Makefile                 |  9 ---
 tools/libfsimage/zfs/Makefile                 |  9 ---
 tools/libs/util/Makefile                      |  3 +-
 tools/misc/Makefile                           |  1 -
 tools/tests/cpu-policy/Makefile               |  2 +-
 tools/tests/depriv/Makefile                   |  2 +-
 tools/tests/resource/Makefile                 |  1 -
 tools/tests/tsx/Makefile                      |  1 -
 tools/tests/xenstore/Makefile                 |  1 -
 tools/xcutils/Makefile                        |  2 -
 tools/xenmon/Makefile                         |  1 -
 tools/xenpaging/Makefile                      | 25 ++++-----
 tools/xenpmd/Makefile                         |  1 -
 tools/xentop/Makefile                         | 23 ++++----
 tools/xentrace/Makefile                       | 21 +++----
 tools/xl/Makefile                             |  2 +-
 tools/Rules.mk                                | 55 ++++++++++++++-----
 tools/debugger/gdbsx/Rules.mk                 |  2 +-
 tools/firmware/Rules.mk                       |  2 -
 tools/libfsimage/Rules.mk                     | 26 +++------
 tools/libfsimage/common.mk                    | 11 ++++
 tools/libs/libs.mk                            | 31 +++++------
 tools/helpers/xen-init-dom0.c                 |  2 +
 tools/ocaml/common.make                       |  2 +-
 .gitignore                                    | 35 ------------
 config/Tools.mk.in                            |  1 +
 tools/configure                               | 26 +++++++++
 tools/console/client/.gitignore               |  1 +
 tools/console/daemon/.gitignore               |  1 +
 tools/fuzz/libelf/.gitignore                  |  2 +
 .../fuzz/x86_instruction_emulator/.gitignore  |  7 +++
 tools/libs/.gitignore                         |  2 +
 tools/xenstore/Makefile.common                |  1 -
 62 files changed, 349 insertions(+), 424 deletions(-)
 create mode 100644 tools/console/client/Makefile
 create mode 100644 tools/console/daemon/Makefile
 create mode 100644 tools/libfsimage/common.mk
 create mode 100644 tools/console/client/.gitignore
 create mode 100644 tools/console/daemon/.gitignore
 create mode 100644 tools/fuzz/libelf/.gitignore
 create mode 100644 tools/fuzz/x86_instruction_emulator/.gitignore

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355610.583427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnf-0007A8-Rv; Fri, 24 Jun 2022 16:04:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355610.583427; Fri, 24 Jun 2022 16:04:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnf-00078r-FL; Fri, 24 Jun 2022 16:04:51 +0000
Received: by outflank-mailman (input) for mailman id 355610;
 Fri, 24 Jun 2022 16:04:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnd-0004qb-R9
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:49 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ef24dfc-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ef24dfc-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086688;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=+tvgdAnTgDFIQbMFRm+czYIkxCdj45NAL3MUbvPerCo=;
  b=GoukUCllQxy0X5MKyNwMM+1CjV/d50HEcX/+wFPWQ7ANjr6Lq1P2gRfS
   WqgkX9PxODoBNd28i6Hji84k9//+UnPIre4M3dMvjnTfx8hbJE1WZs1jf
   QcjVw0lnD1Wt7xpuxOMPd8GJwn8vKQhNbxCdsYbTTOVtWaRZeE+C3i42u
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73702016
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:BBze+a0VFFfRqw9vXPbD5Zpxkn2cJEfYwER7XKvMYLTBsI5bp2cGn
 WAaCGGGOa3bM2TwKdp/bonl9U4C6sCBnddgSQJqpC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy3YDja++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /1uvLqSZiAABZbn2/4WDRoASyVRJpNZreqvzXiX6aR/zmXDenrohf5vEFs3LcsT/eMf7WNmr
 KJCbmpXN1ba2rzwkOnTpupE36zPKOHiOp8fvXdxiynUF/88TbjIQrnQ5M8e1zA17ixLNamFO
 JJDMWMxBPjGSzhxBk03VaJuoLmtg33PTH5ahHuY4oNitgA/yyQuieOwYbI5YOeiWsF9jkue4
 GXc8AzREhwccdCS1zeB2natnfPU2zP2XpoIE7+1/eIsh0ecrlH/EzVPCwH9+6PgzBfjBZQPc
 CT45xbCs4AR/WqJYf7UZCaT42SP4B1EA95/CNMlvVTlJrXv3+qJOoQVZmcfNYJ+75JuGmxCO
 kyhxI2wW2E22FGBYTfEr+rP82vvUcQABTVaDRLoWzfp9DUKTGsbqhvUBuhuH6eu5jEeMWGhm
 mvaxMTSalh6sCLq60lY1Qqe695UjsKVJjPZHy2ONo5f0it3ZZS+e6uj4kXB4PBLIe6xFwfc4
 iBcypHBsLhWUvlhcRBhps1XRNlFAN7VWAAwfHY1R8Vxn9hT0yTLkX9sDMFWex4yb5dslc7Ba
 07PowJBjKJu0I+RRfYvOeqZUp1ypYC5TIiNfq2EP7JmP8kqHCfarX4GWKJl9z20+KTaufpkY
 snznAfFJStyNJmLOxLsFrlEj+N0l3tgrY4RLLiipymaPXOlTCb9Yd843JGmMojVMIvsTN3pz
 uti
IronPort-HdrOrdr: A9a23:jkEzSqAWyXVAdWzlHemU55DYdb4zR+YMi2TC1yhKJyC9Ffbo8f
 xG/c5rrSMc5wxwZJhNo7y90ey7MBbhHP1OkO4s1NWZLWrbUQKTRekIh+bfKn/baknDH4ZmpN
 5dmsNFaeEYY2IUsS+D2njbL+od
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="73702016"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 07/25] tools/hotplug: cleanup Makefiles
Date: Fri, 24 Jun 2022 17:04:04 +0100
Message-ID: <20220624160422.53457-8-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Remove "build" targets.

Use simply expanded variables when recursively expanded variables
aren't needed. (Use ":=" instead of "=".)
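
For reference, the difference between the two flavors (a generic make
example, not taken from this series):

```make
# Recursively expanded ("="): the right-hand side is re-evaluated at
# every use, so later assignments to B are picked up.
A = $(B)
B = one
# $(A) now expands to "one"

# Simply expanded (":="): the right-hand side is evaluated once, at
# the point of assignment.
X := $(Y)
Y = two
# $(X) is still empty
```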

Don't check whether a directory already exists when installing, just
create it.

Fix $(HOTPLUGPATH): it shouldn't have any double quotes.

Some reindentation.

FreeBSD: stop installing "hotplugpath.sh", it is already installed by common/.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/hotplug/FreeBSD/Makefile       | 11 +++--------
 tools/hotplug/Linux/Makefile         | 16 ++++++----------
 tools/hotplug/Linux/systemd/Makefile | 16 +++++++---------
 tools/hotplug/NetBSD/Makefile        |  9 +++------
 tools/hotplug/common/Makefile        | 16 ++++++----------
 5 files changed, 25 insertions(+), 43 deletions(-)

diff --git a/tools/hotplug/FreeBSD/Makefile b/tools/hotplug/FreeBSD/Makefile
index de9928cd86..a6552c9884 100644
--- a/tools/hotplug/FreeBSD/Makefile
+++ b/tools/hotplug/FreeBSD/Makefile
@@ -2,18 +2,15 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 # Xen script dir and scripts to go there.
-XEN_SCRIPTS = vif-bridge block
+XEN_SCRIPTS := vif-bridge block
 
-XEN_SCRIPT_DATA =
+XEN_SCRIPT_DATA :=
 
-XEN_RCD_PROG = rc.d/xencommons rc.d/xendriverdomain
+XEN_RCD_PROG := rc.d/xencommons rc.d/xendriverdomain
 
 .PHONY: all
 all:
 
-.PHONY: build
-build:
-
 .PHONY: install
 install: install-scripts install-rcd
 
@@ -44,12 +41,10 @@ install-rcd:
 	   do \
 	   $(INSTALL_PROG) $$i $(DESTDIR)$(INITD_DIR); \
 	done
-	$(INSTALL_DATA) ../common/hotplugpath.sh $(DESTDIR)$(XEN_SCRIPT_DIR)
 
 .PHONY: uninstall-rcd
 uninstall-rcd:
 	rm -f $(addprefix $(DESTDIR)$(INITD_DIR)/, $(XEN_RCD_PROG))
-	rm -f $(DESTDIR)$(XEN_SCRIPT_DIR)/hotplugpath.sh
 
 .PHONY: clean
 clean:
diff --git a/tools/hotplug/Linux/Makefile b/tools/hotplug/Linux/Makefile
index 0b1d111d7e..9a7b3a3515 100644
--- a/tools/hotplug/Linux/Makefile
+++ b/tools/hotplug/Linux/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 # Xen script dir and scripts to go there.
-XEN_SCRIPTS = vif-bridge
+XEN_SCRIPTS := vif-bridge
 XEN_SCRIPTS += vif-route
 XEN_SCRIPTS += vif-nat
 XEN_SCRIPTS += vif-openvswitch
@@ -22,16 +22,13 @@ XEN_SCRIPTS += launch-xenstore
 
 SUBDIRS-$(CONFIG_SYSTEMD) += systemd
 
-XEN_SCRIPT_DATA = xen-script-common.sh locking.sh logging.sh
+XEN_SCRIPT_DATA := xen-script-common.sh locking.sh logging.sh
 XEN_SCRIPT_DATA += xen-hotplug-common.sh xen-network-common.sh vif-common.sh
 XEN_SCRIPT_DATA += block-common.sh
 
 .PHONY: all
 all: subdirs-all
 
-.PHONY: build
-build:
-
 .PHONY: install
 install: install-initd install-scripts subdirs-install
 
@@ -41,9 +38,9 @@ uninstall: uninstall-initd uninstall-scripts subdirs-uninstall
 # See docs/misc/distro_mapping.txt for INITD_DIR location
 .PHONY: install-initd
 install-initd:
-	[ -d $(DESTDIR)$(INITD_DIR) ] || $(INSTALL_DIR) $(DESTDIR)$(INITD_DIR)
-	[ -d $(DESTDIR)$(SYSCONFIG_DIR) ] || $(INSTALL_DIR) $(DESTDIR)$(SYSCONFIG_DIR)
-	[ -d $(DESTDIR)$(LIBEXEC_BIN) ] || $(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_DIR) $(DESTDIR)$(INITD_DIR)
+	$(INSTALL_DIR) $(DESTDIR)$(SYSCONFIG_DIR)
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
 	$(INSTALL_DATA) init.d/sysconfig.xendomains $(DESTDIR)$(SYSCONFIG_DIR)/xendomains
 	$(INSTALL_DATA) init.d/sysconfig.xencommons $(DESTDIR)$(SYSCONFIG_DIR)/xencommons
 	$(INSTALL_PROG) xendomains $(DESTDIR)$(LIBEXEC_BIN)
@@ -64,8 +61,7 @@ uninstall-initd:
 
 .PHONY: install-scripts
 install-scripts:
-	[ -d $(DESTDIR)$(XEN_SCRIPT_DIR) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_SCRIPT_DIR)
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_SCRIPT_DIR)
 	set -e; for i in $(XEN_SCRIPTS); \
 	    do \
 	    $(INSTALL_PROG) $$i $(DESTDIR)$(XEN_SCRIPT_DIR); \
diff --git a/tools/hotplug/Linux/systemd/Makefile b/tools/hotplug/Linux/systemd/Makefile
index a5d41d86ef..26df2a43b1 100644
--- a/tools/hotplug/Linux/systemd/Makefile
+++ b/tools/hotplug/Linux/systemd/Makefile
@@ -1,12 +1,12 @@
 XEN_ROOT = $(CURDIR)/../../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-XEN_SYSTEMD_MODULES = xen.conf
+XEN_SYSTEMD_MODULES := xen.conf
 
-XEN_SYSTEMD_MOUNT =  proc-xen.mount
+XEN_SYSTEMD_MOUNT := proc-xen.mount
 XEN_SYSTEMD_MOUNT += var-lib-xenstored.mount
 
-XEN_SYSTEMD_SERVICE  = xenstored.service
+XEN_SYSTEMD_SERVICE := xenstored.service
 XEN_SYSTEMD_SERVICE += xenconsoled.service
 XEN_SYSTEMD_SERVICE += xen-qemu-dom0-disk-backend.service
 XEN_SYSTEMD_SERVICE += xendomains.service
@@ -14,7 +14,7 @@ XEN_SYSTEMD_SERVICE += xen-watchdog.service
 XEN_SYSTEMD_SERVICE += xen-init-dom0.service
 XEN_SYSTEMD_SERVICE += xendriverdomain.service
 
-ALL_XEN_SYSTEMD =	$(XEN_SYSTEMD_MODULES)  \
+ALL_XEN_SYSTEMD :=	$(XEN_SYSTEMD_MODULES)  \
 			$(XEN_SYSTEMD_MOUNT)	\
 			$(XEN_SYSTEMD_SERVICE)
 
@@ -30,10 +30,8 @@ distclean: clean
 
 .PHONY: install
 install: $(ALL_XEN_SYSTEMD)
-	[ -d $(DESTDIR)$(XEN_SYSTEMD_DIR) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_SYSTEMD_DIR)
-	[ -d $(DESTDIR)$(XEN_SYSTEMD_MODULES_LOAD) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_SYSTEMD_MODULES_LOAD)
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_SYSTEMD_DIR)
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_SYSTEMD_MODULES_LOAD)
 	$(INSTALL_DATA) *.service $(DESTDIR)$(XEN_SYSTEMD_DIR)
 	$(INSTALL_DATA) *.mount $(DESTDIR)$(XEN_SYSTEMD_DIR)
 	$(INSTALL_DATA) *.conf $(DESTDIR)$(XEN_SYSTEMD_MODULES_LOAD)
@@ -48,5 +46,5 @@ $(XEN_SYSTEMD_MODULES):
 	rm -f $@.tmp
 	for mod in $(LINUX_BACKEND_MODULES) ; do \
 		echo $$mod ; \
-		done > $@.tmp
+	done > $@.tmp
 	$(call move-if-changed,$@.tmp,$@)
diff --git a/tools/hotplug/NetBSD/Makefile b/tools/hotplug/NetBSD/Makefile
index f909ffa367..1cd3db2ccb 100644
--- a/tools/hotplug/NetBSD/Makefile
+++ b/tools/hotplug/NetBSD/Makefile
@@ -2,22 +2,19 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 # Xen script dir and scripts to go there.
-XEN_SCRIPTS =
+XEN_SCRIPTS :=
 XEN_SCRIPTS += locking.sh
 XEN_SCRIPTS += block
 XEN_SCRIPTS += vif-bridge
 XEN_SCRIPTS += vif-ip
 XEN_SCRIPTS += qemu-ifup
 
-XEN_SCRIPT_DATA =
-XEN_RCD_PROG = rc.d/xencommons rc.d/xendomains rc.d/xen-watchdog rc.d/xendriverdomain
+XEN_SCRIPT_DATA :=
+XEN_RCD_PROG := rc.d/xencommons rc.d/xendomains rc.d/xen-watchdog rc.d/xendriverdomain
 
 .PHONY: all
 all:
 
-.PHONY: build
-build:
-
 .PHONY: install
 install: install-scripts install-rcd
 
diff --git a/tools/hotplug/common/Makefile b/tools/hotplug/common/Makefile
index ef48bfacc9..e8a8dbea6c 100644
--- a/tools/hotplug/common/Makefile
+++ b/tools/hotplug/common/Makefile
@@ -1,22 +1,19 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-HOTPLUGPATH="hotplugpath.sh"
+HOTPLUGPATH := hotplugpath.sh
 
 # OS-independent hotplug scripts go in this directory
 
 # Xen scripts to go there.
-XEN_SCRIPTS =
-XEN_SCRIPT_DATA = $(HOTPLUGPATH)
+XEN_SCRIPTS :=
+XEN_SCRIPT_DATA := $(HOTPLUGPATH)
 
 genpath-target = $(call buildmakevars2file,$(HOTPLUGPATH))
 $(eval $(genpath-target))
 
 .PHONY: all
-all: build
-
-.PHONY: build
-build: $(HOTPLUGPATH)
+all: $(HOTPLUGPATH)
 
 .PHONY: install
 install: install-scripts
@@ -25,9 +22,8 @@ install: install-scripts
 uninstall: uninstall-scripts
 
 .PHONY: install-scripts
-install-scripts: build
-	[ -d $(DESTDIR)$(XEN_SCRIPT_DIR) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_SCRIPT_DIR)
+install-scripts: all
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_SCRIPT_DIR)
 	set -e; for i in $(XEN_SCRIPTS); \
 	   do \
 	   $(INSTALL_PROG) $$i $(DESTDIR)$(XEN_SCRIPT_DIR); \
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355609.583411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lne-0006kY-9u; Fri, 24 Jun 2022 16:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355609.583411; Fri, 24 Jun 2022 16:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lne-0006it-3l; Fri, 24 Jun 2022 16:04:50 +0000
Received: by outflank-mailman (input) for mailman id 355609;
 Fri, 24 Jun 2022 16:04:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnd-0004qb-9U
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:49 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5db7fb44-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5db7fb44-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086687;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=vEj0fZWMl+R77ejhJIT+1IS4qvoqXeRC/IzY/VGgI/8=;
  b=Km1O2OJmT93Ntpt7BpOcwoBC/OiFR3GXeu9Pz7A4wDI0dgVr6TFUQg3k
   DoADTNV4rv2/fz6iFTn9sbzUkjcx0E+qCLLAZ0aCC4+0TCtbymuM8Nipq
   LRJ5WAqO0AOGlt89DdrQbJ34KVOnCHjwuVH0V3QHOfdtuMy6c5jCOulx3
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74787670
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:eKgX8KvXsG72mnAmmStQltDTo+fnVB9eMUV32f8akzHdYApBsoF/q
 tZmKWzQb/eJYDajKItzaI2y8BlT6sKEzNBjTVY5+3szEC4V+JbJXdiXEBz9bniYRiHhoOOLz
 Cm8hv3odp1coqr0/0/1WlTZhSAgk/nOHNIQMcacUsxLbVYMpBwJ1FQywYbVvqYy2YLjW13X5
 YuoyyHiEATNNwBcYzp8B52r8HuDjNyq0N/PlgVjDRzjlAa2e0g9VPrzF4noR5fLatA88tqBb
 /TC1NmEElbxpH/BPD8HfoHTKSXmSpaKVeSHZ+E/t6KK2nCurQRquko32WZ1he66RFxlkvgoo
 Oihu6BcRi92H7PTsfswCCJHUDFMGLJEo5b4Al2G5Jn7I03uKxMAwt1rBUAye4YZ5vx2ESdF8
 vlwxDIlN07ZwbjsmfTiF7cq1p9LwMrDZevzvllpyy3ZCvA3B4jOWazQ6fdT3Ssqh9AIFvHbD
 yYcQWUzM0SfPUIXUrsRIKABmLysp3PBST9ntECoqqA2w1fO7CUkhdABN/KKI4fXFK25hH2wu
 Wbu72n/RBYAO7S32TeDt36hmOLLtSf6Q54JUq218OZwh1+ezXBVDwcZPWZXutHg1BT4AYgGb
 RVJpGx+9sDe6XBHUPGifgOniWGp5SUDGMpiNvI4syiy6Y/ttlPx6nc/ctJRVDA3nJZoGGJyj
 QLRwIOB6S9H6+PMFy/EnluAhXbrYHVOczdfDcMRZVFdi+QPtr3fmf4mojxLNKeuxuP4Fjjrq
 9xhhHhv3u5D5SLnOkjSwLwmv95PjsKQJuLNzl+LNl9JFysgDGJfW6Sm6ELA8dFLJ5uDQ1+Ks
 RAswpbDsrhWXMjSyHTVH43h+Y1FAd7faFUwZnY/d6TNChz3oyLzFWyuyGsWyLhV3jYsJmayP
 R67VfJ5755PJnq6BZJKj3aKI51yl8DITI29PtiNN4YmSsUhJWevoXA1DWbNjj+FraTZufxmU
 XttWZ33Vihy5GUO5GfeetrxJpdxl35nmz+MFMulp/lluJLHDEOopX4+GAPmRogEAGms+W05L
 /432xO29ihi
IronPort-HdrOrdr: A9a23:aWuwSq/k9T98F7hChqZuk+AiI+orL9Y04lQ7vn2ZKSY5TiVXra
 CTdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKL+Vbd8kbFh4xgPM
 lbEpSXCLfLfCVHZcSR2njFLz73quP3j5xBho3lvglQpRkBUdAG0+/gYDzraXGfQmN9dPwEPa
 vZ3OVrjRy6d08aa8yqb0N1JdQq97Xw5evbiQdtPW9e1DWz
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74787670"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 08/25] tools/libfsimage: Cleanup makefiles
Date: Fri, 24 Jun 2022 17:04:05 +0100
Message-ID: <20220624160422.53457-9-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Remove the need for the "fs-*" targets by creating a "common.mk" which
has the flags that are common to libfsimage/common/ and the other
libfsimage/*/ directories.

In common.mk, make $(PIC_OBJS) a recursively expanded variable so it
doesn't matter where $(LIB_SRCS-y) is defined, and remove the extra
$(PIC_OBJS) from libfsimage/common/Makefile.
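
Roughly, the deferred expansion works like this (simplified, not the
exact file contents):

```make
# common.mk, included at the top of each subdirectory Makefile:
PIC_OBJS = $(patsubst %.c,%.opic,$(LIB_SRCS-y))  # "=" defers expansion

# a filesystem backend's Makefile, setting its sources afterwards:
LIB_SRCS-y = fsys_ext2fs.c
# $(PIC_OBJS) still expands to "fsys_ext2fs.opic" when a rule uses it
```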

Use a $(TARGETS) variable to list the things to be built; $(TARGETS)
can then be used by the clean target in common.mk.

iso9660/:
    Remove the explicit dependency between fsys_iso9660.c and
    iso9660.h; this is handled automatically by the .*.d dependency files,
    and iso9660.h already exists.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libfsimage/common/Makefile     | 11 +++--------
 tools/libfsimage/ext2fs-lib/Makefile |  9 ---------
 tools/libfsimage/ext2fs/Makefile     |  9 ---------
 tools/libfsimage/fat/Makefile        |  9 ---------
 tools/libfsimage/iso9660/Makefile    | 11 -----------
 tools/libfsimage/reiserfs/Makefile   |  9 ---------
 tools/libfsimage/ufs/Makefile        |  9 ---------
 tools/libfsimage/xfs/Makefile        |  9 ---------
 tools/libfsimage/zfs/Makefile        |  9 ---------
 tools/libfsimage/Rules.mk            | 26 ++++++++------------------
 tools/libfsimage/common.mk           | 11 +++++++++++
 11 files changed, 22 insertions(+), 100 deletions(-)
 create mode 100644 tools/libfsimage/common.mk

diff --git a/tools/libfsimage/common/Makefile b/tools/libfsimage/common/Makefile
index 0c5a34baea..79f8cfd28e 100644
--- a/tools/libfsimage/common/Makefile
+++ b/tools/libfsimage/common/Makefile
@@ -1,5 +1,5 @@
 XEN_ROOT = $(CURDIR)/../../..
-include $(XEN_ROOT)/tools/libfsimage/Rules.mk
+include $(XEN_ROOT)/tools/libfsimage/common.mk
 
 MAJOR := $(shell $(XEN_ROOT)/version.sh $(XEN_ROOT)/xen/Makefile)
 MINOR = 0
@@ -13,12 +13,10 @@ LDFLAGS += $(PTHREAD_LDFLAGS)
 
 LIB_SRCS-y = fsimage.c fsimage_plugin.c fsimage_grub.c
 
-PIC_OBJS := $(patsubst %.c,%.opic,$(LIB_SRCS-y))
-
-LIB = libxenfsimage.so libxenfsimage.so.$(MAJOR) libxenfsimage.so.$(MAJOR).$(MINOR)
+TARGETS = libxenfsimage.so libxenfsimage.so.$(MAJOR) libxenfsimage.so.$(MAJOR).$(MINOR)
 
 .PHONY: all
-all: $(LIB)
+all: $(TARGETS)
 
 .PHONY: install
 install: all
@@ -40,9 +38,6 @@ uninstall:
 	rm -f $(DESTDIR)$(libdir)/libxenfsimage.so.$(MAJOR)
 	rm -f $(DESTDIR)$(libdir)/libxenfsimage.so.$(MAJOR).$(MINOR)
 
-clean distclean::
-	rm -f $(LIB)
-
 libxenfsimage.so: libxenfsimage.so.$(MAJOR)
 	ln -sf $< $@
 libxenfsimage.so.$(MAJOR): libxenfsimage.so.$(MAJOR).$(MINOR)
diff --git a/tools/libfsimage/ext2fs-lib/Makefile b/tools/libfsimage/ext2fs-lib/Makefile
index 431a79068e..b9b560df75 100644
--- a/tools/libfsimage/ext2fs-lib/Makefile
+++ b/tools/libfsimage/ext2fs-lib/Makefile
@@ -9,13 +9,4 @@ FS_LIBDEPS = $(EXTFS_LIBS)
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/ext2fs/Makefile b/tools/libfsimage/ext2fs/Makefile
index c62ae359ac..fe01f98148 100644
--- a/tools/libfsimage/ext2fs/Makefile
+++ b/tools/libfsimage/ext2fs/Makefile
@@ -4,13 +4,4 @@ LIB_SRCS-y = fsys_ext2fs.c
 
 FS = ext2fs
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/fat/Makefile b/tools/libfsimage/fat/Makefile
index 7ee5e7588d..58bcc0751d 100644
--- a/tools/libfsimage/fat/Makefile
+++ b/tools/libfsimage/fat/Makefile
@@ -4,13 +4,4 @@ LIB_SRCS-y = fsys_fat.c
 
 FS = fat
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/iso9660/Makefile b/tools/libfsimage/iso9660/Makefile
index bc86baf2c0..acf3164046 100644
--- a/tools/libfsimage/iso9660/Makefile
+++ b/tools/libfsimage/iso9660/Makefile
@@ -4,15 +4,4 @@ LIB_SRCS-y = fsys_iso9660.c
 
 FS = iso9660
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
-fsys_iso9660.c: iso9660.h
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/reiserfs/Makefile b/tools/libfsimage/reiserfs/Makefile
index 5acfedf25e..42b751e007 100644
--- a/tools/libfsimage/reiserfs/Makefile
+++ b/tools/libfsimage/reiserfs/Makefile
@@ -4,13 +4,4 @@ LIB_SRCS-y = fsys_reiserfs.c
 
 FS = reiserfs
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/ufs/Makefile b/tools/libfsimage/ufs/Makefile
index f32b9178bd..cca4f0a588 100644
--- a/tools/libfsimage/ufs/Makefile
+++ b/tools/libfsimage/ufs/Makefile
@@ -4,13 +4,4 @@ LIB_SRCS-y = fsys_ufs.c
 
 FS = ufs
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/xfs/Makefile b/tools/libfsimage/xfs/Makefile
index 54eeb6e35e..ebac7baf14 100644
--- a/tools/libfsimage/xfs/Makefile
+++ b/tools/libfsimage/xfs/Makefile
@@ -4,13 +4,4 @@ LIB_SRCS-y = fsys_xfs.c
 
 FS = xfs
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/zfs/Makefile b/tools/libfsimage/zfs/Makefile
index 084e5ec08d..434a9c3580 100644
--- a/tools/libfsimage/zfs/Makefile
+++ b/tools/libfsimage/zfs/Makefile
@@ -28,13 +28,4 @@ LIB_SRCS-y = zfs_lzjb.c zfs_sha256.c zfs_fletcher.c fsi_zfs.c fsys_zfs.c
 
 FS = zfs
 
-.PHONY: all
-all: fs-all
-
-.PHONY: install
-install: fs-install
-
-.PHONY: uninstall
-uninstall: fs-uninstall
-
 include $(XEN_ROOT)/tools/libfsimage/Rules.mk
diff --git a/tools/libfsimage/Rules.mk b/tools/libfsimage/Rules.mk
index bb6d42abb4..cf37d6cb0d 100644
--- a/tools/libfsimage/Rules.mk
+++ b/tools/libfsimage/Rules.mk
@@ -1,25 +1,18 @@
-include $(XEN_ROOT)/tools/Rules.mk
-
-CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"$(FSDIR)\"
-CFLAGS += -Werror -D_GNU_SOURCE
-LDFLAGS += -L../common/
-
-PIC_OBJS := $(patsubst %.c,%.opic,$(LIB_SRCS-y))
-
-FSDIR = $(libdir)/xenfsimage
+include $(XEN_ROOT)/tools/libfsimage/common.mk
 
 FSLIB = fsimage.so
+TARGETS += $(FSLIB)
 
-.PHONY: fs-all
-fs-all: $(FSLIB)
+.PHONY: all
+all: $(TARGETS)
 
-.PHONY: fs-install
-fs-install: fs-all
+.PHONY: install
+install: all
 	$(INSTALL_DIR) $(DESTDIR)$(FSDIR)/$(FS)
 	$(INSTALL_PROG) $(FSLIB) $(DESTDIR)$(FSDIR)/$(FS)
 
-.PHONY: fs-uninstall
-fs-uninstall:
+.PHONY: uninstall
+uninstall:
 	rm -f $(addprefix $(DESTDIR)$(FSDIR)/$(FS)/, $(FSLIB))
 	if [ -d $(DESTDIR)$(FSDIR)/$(FS) ]; then \
 		rmdir $(DESTDIR)$(FSDIR)/$(FS); \
@@ -28,7 +21,4 @@ fs-uninstall:
 $(FSLIB): $(PIC_OBJS)
 	$(CC) $(LDFLAGS) $(SHLIB_LDFLAGS) -o $@ $^ -lxenfsimage $(FS_LIBDEPS) $(APPEND_LDFLAGS)
 
-clean distclean::
-	rm -f $(PIC_OBJS) $(FSLIB) $(DEPS_RM)
-
 -include $(DEPS_INCLUDE)
diff --git a/tools/libfsimage/common.mk b/tools/libfsimage/common.mk
new file mode 100644
index 0000000000..77bc957f27
--- /dev/null
+++ b/tools/libfsimage/common.mk
@@ -0,0 +1,11 @@
+include $(XEN_ROOT)/tools/Rules.mk
+
+FSDIR := $(libdir)/xenfsimage
+CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"$(FSDIR)\"
+CFLAGS += -Werror -D_GNU_SOURCE
+LDFLAGS += -L../common/
+
+PIC_OBJS = $(patsubst %.c,%.opic,$(LIB_SRCS-y))
+
+clean distclean::
+	rm -f $(PIC_OBJS) $(TARGETS) $(DEPS_RM)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355606.583379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnb-0005td-1L; Fri, 24 Jun 2022 16:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355606.583379; Fri, 24 Jun 2022 16:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lna-0005sZ-Qh; Fri, 24 Jun 2022 16:04:46 +0000
Received: by outflank-mailman (input) for mailman id 355606;
 Fri, 24 Jun 2022 16:04:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnZ-0004qb-V8
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:46 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d0cdfcc-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d0cdfcc-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086685;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=6zroeQWnVKs2sspcMTPF4FS/ihKUgmPZSZni2g/h+iU=;
  b=YWdVoPFHspWShIHsJGhN168qZLZ4kdsAgWqySBXBPp0Y6N5p8skDNWNo
   9o9twQmccGSc2QZx/1LrpKQQN/wr+Nim9oNrUc81f4NxdOEQPdNdKcrIG
   t3IRfK8wQ8FmjM90pc0hqyLDQgJx9OYSE935R2E74XJLtX143Yic1sAcM
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74208073
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:wU33AKy6aB2z7Iu30k96t+c1xirEfRIJ4+MujC+fZmUNrF6WrkUPz
 mUWWTiAafiPZmCgctojOY2y9x4Ovp7SmoIxHAU4/iAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX1JZS5LwbZj2NY224ThWWthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 Nplv5P3SQg0Aff2l746AwV3Nn1yArVL0eqSSZS/mZT7I0zudnLtx7NlDV0sPJ1e8eFyaY1M3
 aVGcnZXNEnF3r/ohuLgIgVvrp1LwM3DNYUDunZm3HfBAOwvW5zrSKTW/95Imjw3g6iiGN6BO
 5BBOWIwN3wsZTUSK3wmNMgDgd6P2CC4U2R/k0Ow/aUotj27IAtZj+G2bYu9lsaxbd5Ogk+Sq
 2bC/mL4KhIXLtqSzXyC6H3ErvDLtTP2XsQVDrLQ3vx3hFyewEQDBRtQUkG0ydGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0c9haHvA+6QqN4rHJ+AvfDW8BJgOtc/R/6pVwH2Zzk
 AbUwZW5XlSDrYF5V1qX+fCUoi6NYxIad0hSeQAhEQc6+9TK9dRbYg30cjpzLEKkpoSrRG+om
 G3S83hWa6Y71pBSifjilbzTq3f1/8WSEFZojunCdjj9hj6VcrJJcGBBBbLzyf9bZLiUQVCa1
 JTvs5jPtbteZX1hecHkfQnsIF1Kz6zcWNEkqQQzd6TNDhz0k5JZQahe4StlOGBiOdsedDnib
 Sf74F0MuscLbSL1MfcvPOpd7vjGK4C6TbwJsdiEBuein7ArLFPXlM2QTRT4M5/RfLgEzvhkZ
 MbznTeEBncGE6V3pAeLqxMm+eZznEgWnDqLLbiilkjP+efONRa9FOZeWHPTP79R0U9xiFiMm
 zqpH5DRkEs3vSyXSnS/zLP/2nhQfCZiW8yp+pcJHgNBSyI/cFwc5zbq6etJU+RYc259yo8kI
 lnVtpdk9WfC
IronPort-HdrOrdr: A9a23:xC/hIKiMcIgnQgsOgA0XVsd+oHBQXuIji2hC6mlwRA09TySZ//
 rBoB19726TtN9xYgBZpTnuAsm9qB/nmaKdpLNhWItKPzOW31dATrsSjrcKqgeIc0aVm9K1l5
 0QF5SWYOeAdGSS5vya3ODXKbkdKaG8gcKVuds=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74208073"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 05/25] tools/fuzz/libelf: rework makefile
Date: Fri, 24 Jun 2022 17:04:02 +0100
Message-ID: <20220624160422.53457-6-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Rename ELF_LIB_OBJS to LIBELF_OBJS so that it has the same name as in
libs/guest/.

Replace "-I" with "-iquote".

Remove the use of "vpath". It will not work when we convert this
makefile to subdirmk. Instead, create symlinks to the source files.
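
A minimal Makefile sketch of the symlink approach (the source directory
path is illustrative):

```make
SRCDIR := ../../../xen/common/libelf
LIBELF_OBJS := libelf-tools.o libelf-loader.o libelf-dominfo.o

# Instead of "vpath %.c $(SRCDIR)", create a local symlink per source
# file; depending on FORCE keeps the link fresh if the tree moves.
$(LIBELF_OBJS:.o=.c): libelf-%.c: $(SRCDIR)/libelf-%.c FORCE
	ln -nsf $< $@
```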

Since we are creating a new .gitignore for the links, also move the
existing entry to it.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - create a per-directory .gitignore to add new entry and existing one

 tools/fuzz/libelf/Makefile   | 21 ++++++++++-----------
 .gitignore                   |  1 -
 tools/fuzz/libelf/.gitignore |  2 ++
 3 files changed, 12 insertions(+), 12 deletions(-)
 create mode 100644 tools/fuzz/libelf/.gitignore

diff --git a/tools/fuzz/libelf/Makefile b/tools/fuzz/libelf/Makefile
index 9eb30ee40c..9211f75951 100644
--- a/tools/fuzz/libelf/Makefile
+++ b/tools/fuzz/libelf/Makefile
@@ -1,25 +1,24 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-# libelf fuzz target
-vpath %.c ../../../xen/common/libelf
-CFLAGS += -I../../../xen/common/libelf
-ELF_SRCS-y += libelf-tools.c libelf-loader.c libelf-dominfo.c
-ELF_LIB_OBJS := $(patsubst %.c,%.o,$(ELF_SRCS-y))
+LIBELF_OBJS := libelf-tools.o libelf-loader.o libelf-dominfo.o
 
-$(patsubst %.c,%.o,$(ELF_SRCS-y)): CFLAGS += -Wno-pointer-sign
-
-$(ELF_LIB_OBJS): CFLAGS += -DFUZZ_NO_LIBXC $(CFLAGS_xeninclude)
+CFLAGS += -iquote ../../../xen/common/libelf
+$(LIBELF_OBJS): CFLAGS += -Wno-pointer-sign
+$(LIBELF_OBJS): CFLAGS += -DFUZZ_NO_LIBXC $(CFLAGS_xeninclude)
 
 libelf-fuzzer.o: CFLAGS += $(CFLAGS_xeninclude)
 
-libelf.a: libelf-fuzzer.o $(ELF_LIB_OBJS)
+$(LIBELF_OBJS:.o=.c): libelf-%.c: ../../../xen/common/libelf/libelf-%.c FORCE
+	ln -nsf $< $@
+
+libelf.a: libelf-fuzzer.o $(LIBELF_OBJS)
 	$(AR) rc $@ $^
 
 .PHONY: libelf-fuzzer-all
 libelf-fuzzer-all: libelf.a libelf-fuzzer.o
 
-afl-libelf-fuzzer: afl-libelf-fuzzer.o libelf-fuzzer.o $(ELF_LIB_OBJS)
+afl-libelf-fuzzer: afl-libelf-fuzzer.o libelf-fuzzer.o $(LIBELF_OBJS)
 	$(CC) $(CFLAGS) $^ -o $@
 
 # Common targets
@@ -31,7 +30,7 @@ distclean: clean
 
 .PHONY: clean
 clean:
-	rm -f *.o .*.d *.a *-libelf-fuzzer
+	rm -f *.o .*.d *.a *-libelf-fuzzer $(LIBELF_OBJS:.o=.c)
 
 .PHONY: install
 install: all
diff --git a/.gitignore b/.gitignore
index 7cf26051db..6410dfbc72 100644
--- a/.gitignore
+++ b/.gitignore
@@ -195,7 +195,6 @@ tools/flask/utils/flask-loadpolicy
 tools/flask/utils/flask-setenforce
 tools/flask/utils/flask-set-bool
 tools/flask/utils/flask-label-pci
-tools/fuzz/libelf/afl-libelf-fuzzer
 tools/fuzz/x86_instruction_emulator/asm
 tools/fuzz/x86_instruction_emulator/afl-harness
 tools/fuzz/x86_instruction_emulator/afl-harness-cov
diff --git a/tools/fuzz/libelf/.gitignore b/tools/fuzz/libelf/.gitignore
new file mode 100644
index 0000000000..ed634214c9
--- /dev/null
+++ b/tools/fuzz/libelf/.gitignore
@@ -0,0 +1,2 @@
+/afl-libelf-fuzzer
+/libelf-*.c
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355604.583352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnZ-0005Gw-6p; Fri, 24 Jun 2022 16:04:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355604.583352; Fri, 24 Jun 2022 16:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnZ-0005G9-0X; Fri, 24 Jun 2022 16:04:45 +0000
Received: by outflank-mailman (input) for mailman id 355604;
 Fri, 24 Jun 2022 16:04:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnX-0004qb-4T
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:43 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b111485-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b111485-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086681;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=X2aENXxRDTAvXRj798ueb4NLH5aVs5M0JBRhVknZte8=;
  b=ClH0kvo4KWT9e2A7r9j16XV0jbMcL3LF/JMm5WwQvBma3xrDRMGVnMIB
   McTVEtvGsIi/A34gmkNST27LPlkbFX8jq+c2QOV7Q9FsK4IjB139Ikr4M
   vf2/8cxq0pXuak6cpX5vWuz97UdnfvsUVbcvzouNPAKCxXsYLcENqxRM3
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73701974
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:qskwaqq/lrIITC6px2G7gZflUrpeBmIaZRIvgKrLsJaIsI4StFCzt
 garIBnTb6yLYGL3KY9xPo7goEMHv5bQn9UyHgU6ry09EipB8JuZCYyVIHmrMnLJJKUvbq7GA
 +byyDXkBJppJpMJjk71atANlVEliefQAOCU5NfsYkidfyc9IMsaoU8lyrRRbrJA24DjWVvT4
 4+q+aUzBXf+s9JKGjNMg068gEsHUMTa4Fv0aXRnOJinFHeH/5UkJMp3yZOZdhMUcaENdgKOf
 M7RzanRw4/s10xF5uVJMFrMWhZirrb6ZWBig5fNMkSoqkAqSicais7XOBeAAKv+Zvrgc91Zk
 b1wWZKMpQgBN4T0l9s2YQVjEgZVA/BtoJnGIELhvpnGp6HGWyOEL/RGCUg3OcsT+/ptAHEI/
 vsdQNwPRknd3aTsmuv9E7QywJR4RCXoFNp3VnVIxDfFDfEgUNbbTr/D/9Nw1zYsnMFeW/3ZY
 qL1bBIwN0uYOkwQZj/7DroYh9W22WKidgZbrV3MlfMU5nDV9ANIhe2F3N39JYXRGJQ9clyjj
 nnd423zDxUeNdqe4TmI6HShgqnIhyyTcJ0WPK218LhtmlL77m4ODBwbU3OrrP//jVSxM/pPJ
 kpR9icwoKwa8E2wUsK7TxC+uGSDvBMXR5xXCeJSwAOHx7fQ4g2ZLnMZVTMHY9sj3PLaXhRzi
 AXPxYmwQ2Uy7vvFEhpx64t4sxu7EBAaEkQweRQFaiA7vvK7hoAytEzAG4ML/LGOsjHlJd3h6
 2nU8XZm3OhL0p5jO7aTpg6e3W/1znTdZktsv1iMADr4hu9sTNT9D7FE/2Q3+hqpwGyxalCa9
 EYJlMGFhAzlJcHczXfdKAnh8VzA2hpkDNE/qQQ2d3XZ327xk0NPhKgJiN2EGG9nM9wfZRjia
 1LJtAVa6fd7ZSX3M/cvMtvuV5xyksAM8OgJsNiONrKihbAhHDJrAQk0PRLAt4wTuBJEfV4D1
 WezLp/3UCdy5VVPxzuqXeYNuYIWKtQF7TqLH/jTlk3/uZLHPSL9YepVYTOmM7FihIvZ8Vq9z
 jqqH5bTo/mpeLalOXe/HE96BQ1iEEXX8ris+pIOKLLcc1E4cIzjYteIqY4cl0Vet/w9vo/1E
 ruVAye0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:ZnLoSq9EDDm87aa0KI1uk+DeI+orL9Y04lQ7vn2YSXRuHfBw8P
 re+8jztCWE8Qr5N0tApTntAsS9qDbnhPxICOoqTNOftWvd2FdARbsKheCJ/9SjIVyaygc079
 YHT0EUMrPN5DZB4foSmDPIcOod/A==
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="73701974"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 01/25] tools/console: have one Makefile per program/directory
Date: Fri, 24 Jun 2022 17:03:58 +0100
Message-ID: <20220624160422.53457-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The sources of xenconsoled and xenconsole are already separated into
different directories and share nothing in common. Having two
different Makefiles makes it easier to deal with *FLAGS.

Some common changes:
Rename $(BIN) to $(TARGETS); this will be useful later.
Stop removing *.so, *.rpm, and *.a as they aren't created here.
Use $(OBJS-y) to list objects.
Update $(CFLAGS) for the whole directory rather than for a single object.

daemon:
    Remove the need for $(LDLIBS_xenconsoled); use $(LDLIBS) instead.
    Remove the need for $(CONSOLE_CFLAGS-y); use $(CFLAGS-y) instead.

client:
    Remove the unused $(LDLIBS_xenconsole)

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - create per-directory .gitignore

 tools/console/Makefile          | 49 ++------------------------------
 tools/console/client/Makefile   | 39 +++++++++++++++++++++++++
 tools/console/daemon/Makefile   | 50 +++++++++++++++++++++++++++++++++
 .gitignore                      |  2 --
 tools/console/client/.gitignore |  1 +
 tools/console/daemon/.gitignore |  1 +
 6 files changed, 94 insertions(+), 48 deletions(-)
 create mode 100644 tools/console/client/Makefile
 create mode 100644 tools/console/daemon/Makefile
 create mode 100644 tools/console/client/.gitignore
 create mode 100644 tools/console/daemon/.gitignore

diff --git a/tools/console/Makefile b/tools/console/Makefile
index 207c04c9cd..63bd2ac302 100644
--- a/tools/console/Makefile
+++ b/tools/console/Makefile
@@ -1,50 +1,7 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
+SUBDIRS-y := daemon client
 
-CFLAGS  += $(CFLAGS_libxenctrl)
-CFLAGS  += $(CFLAGS_libxenstore)
-LDLIBS += $(LDLIBS_libxenctrl)
-LDLIBS += $(LDLIBS_libxenstore)
-LDLIBS += $(SOCKET_LIBS)
-
-LDLIBS_xenconsoled += $(UTIL_LIBS)
-LDLIBS_xenconsoled += -lrt
-CONSOLE_CFLAGS-$(CONFIG_ARM) = -DCONFIG_ARM
-
-BIN      = xenconsoled xenconsole
-
-.PHONY: all
-all: $(BIN)
-
-.PHONY: clean
-clean:
-	$(RM) *.a *.so *.o *.rpm $(BIN) $(DEPS_RM)
-	$(RM) client/*.o daemon/*.o
-
-.PHONY: distclean
-distclean: clean
-
-daemon/main.o: CFLAGS += -include $(XEN_ROOT)/tools/config.h
-daemon/io.o: CFLAGS += $(CFLAGS_libxenevtchn) $(CFLAGS_libxengnttab) $(CFLAGS_libxenforeignmemory) $(CONSOLE_CFLAGS-y)
-xenconsoled: $(patsubst %.c,%.o,$(wildcard daemon/*.c))
-	$(CC) $(LDFLAGS) $^ -o $@ $(LDLIBS) $(LDLIBS_libxenevtchn) $(LDLIBS_libxengnttab) $(LDLIBS_libxenforeignmemory) $(LDLIBS_xenconsoled) $(APPEND_LDFLAGS)
-
-client/main.o: CFLAGS += -include $(XEN_ROOT)/tools/config.h
-xenconsole: $(patsubst %.c,%.o,$(wildcard client/*.c))
-	$(CC) $(LDFLAGS) $^ -o $@ $(LDLIBS) $(LDLIBS_xenconsole) $(APPEND_LDFLAGS)
-
-.PHONY: install
-install: $(BIN)
-	$(INSTALL_DIR) $(DESTDIR)/$(sbindir)
-	$(INSTALL_PROG) xenconsoled $(DESTDIR)/$(sbindir)
-	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-	$(INSTALL_PROG) xenconsole $(DESTDIR)$(LIBEXEC_BIN)
-
-.PHONY: uninstall
-uninstall:
-	rm -f $(DESTDIR)$(LIBEXEC_BIN)/xenconsole
-	rm -f $(DESTDIR)$(sbindir)/xenconsoled
-
--include $(DEPS_INCLUDE)
+.PHONY: all clean install distclean uninstall
+all clean install distclean uninstall: %: subdirs-%
diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
new file mode 100644
index 0000000000..44176c6d93
--- /dev/null
+++ b/tools/console/client/Makefile
@@ -0,0 +1,39 @@
+XEN_ROOT=$(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+CFLAGS += -Werror
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenstore)
+CFLAGS += -include $(XEN_ROOT)/tools/config.h
+
+LDLIBS += $(LDLIBS_libxenctrl)
+LDLIBS += $(LDLIBS_libxenstore)
+LDLIBS += $(SOCKET_LIBS)
+
+OBJS-y := main.o
+
+TARGETS := xenconsole
+
+.PHONY: all
+all: $(TARGETS)
+
+xenconsole: $(OBJS-y)
+	$(CC) $(LDFLAGS) $^ -o $@ $(LDLIBS) $(APPEND_LDFLAGS)
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) xenconsole $(DESTDIR)$(LIBEXEC_BIN)
+
+.PHONY: uninstall
+uninstall:
+	rm -f $(DESTDIR)$(LIBEXEC_BIN)/xenconsole
+
+.PHONY: clean
+clean:
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
new file mode 100644
index 0000000000..0f004f0b14
--- /dev/null
+++ b/tools/console/daemon/Makefile
@@ -0,0 +1,50 @@
+XEN_ROOT=$(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+CFLAGS += -Werror
+CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenstore)
+CFLAGS += $(CFLAGS_libxenevtchn)
+CFLAGS += $(CFLAGS_libxengnttab)
+CFLAGS += $(CFLAGS_libxenforeignmemory)
+CFLAGS-$(CONFIG_ARM) += -DCONFIG_ARM
+CFLAGS += -include $(XEN_ROOT)/tools/config.h
+
+LDLIBS += $(LDLIBS_libxenctrl)
+LDLIBS += $(LDLIBS_libxenstore)
+LDLIBS += $(LDLIBS_libxenevtchn)
+LDLIBS += $(LDLIBS_libxengnttab)
+LDLIBS += $(LDLIBS_libxenforeignmemory)
+LDLIBS += $(SOCKET_LIBS)
+LDLIBS += $(UTIL_LIBS)
+LDLIBS += -lrt
+
+OBJS-y := main.o
+OBJS-y += io.o
+OBJS-y += utils.o
+
+TARGETS := xenconsoled
+
+.PHONY: all
+all: $(TARGETS)
+
+xenconsoled: $(OBJS-y)
+	$(CC) $(LDFLAGS) $^ -o $@ $(LDLIBS) $(APPEND_LDFLAGS)
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)/$(sbindir)
+	$(INSTALL_PROG) xenconsoled $(DESTDIR)/$(sbindir)
+
+.PHONY: uninstall
+uninstall:
+	rm -f $(DESTDIR)$(sbindir)/xenconsoled
+
+.PHONY: clean
+clean:
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+
+-include $(DEPS_INCLUDE)
diff --git a/.gitignore b/.gitignore
index 18ef56a780..7cf26051db 100644
--- a/.gitignore
+++ b/.gitignore
@@ -160,8 +160,6 @@ tools/libs/util/libxenutil.map
 tools/libs/vchan/headers.chk
 tools/libs/vchan/libxenvchan.map
 tools/libs/vchan/xenvchan.pc
-tools/console/xenconsole
-tools/console/xenconsoled
 tools/debugger/gdb/gdb-6.2.1-linux-i386-xen/*
 tools/debugger/gdb/gdb-6.2.1/*
 tools/debugger/gdb/gdb-6.2.1.tar.bz2
diff --git a/tools/console/client/.gitignore b/tools/console/client/.gitignore
new file mode 100644
index 0000000000..b096a1d841
--- /dev/null
+++ b/tools/console/client/.gitignore
@@ -0,0 +1 @@
+/xenconsole
diff --git a/tools/console/daemon/.gitignore b/tools/console/daemon/.gitignore
new file mode 100644
index 0000000000..55c8f84664
--- /dev/null
+++ b/tools/console/daemon/.gitignore
@@ -0,0 +1 @@
+/xenconsoled
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355607.583395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnc-0006Jb-Im; Fri, 24 Jun 2022 16:04:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355607.583395; Fri, 24 Jun 2022 16:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnc-0006JE-7o; Fri, 24 Jun 2022 16:04:48 +0000
Received: by outflank-mailman (input) for mailman id 355607;
 Fri, 24 Jun 2022 16:04:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnb-0004qb-3R
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:47 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5db7e901-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5db7e901-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086686;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=H6lNZDzSWqxu4KmA0eh73M+dV6/C4L+8X1rFIcfviuI=;
  b=byRBltptZh787kRD6crDvZ13r6JPwg6+bs5aadtzvlhMfWrkmK0HnyBD
   euLMmxEfLmc3rA7soH/MDF+e936/oVTUky6q61JHJfIZw8luYKnLhqo2D
   +Vhb9MXvExFTuyowUzZoyaMuKaYVIc4K2skvfzbQ9qYlHlKCrSvpup9Ew
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74208075
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:eLfogag4YypOK+jahAu2T33DX161YhAKZh0ujC45NGQN5FlHY01je
 htvXG/Va/6PY2OjKNEnbIqwoE4Hv57Wnd8xTFA/rio0E3kb9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CU6jefSLlbFILas1hpZHGeIcw98z0M58wIFqtQw24LhXVnR4
 YmaT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YQ0mea3DhdtHaRYCNHFiEoga9IDfLFHq5KR/z2WeG5ft6/BnDUVwNowE4OdnR2pJ8
 JT0KhhUMErF3bjvhuvmFK883azPL+GyVG8bknhm0THeC+dgWZ3ZSr/GzdRZwC0xloZFGvO2i
 88xNmY1NESYPEAn1lE/VckAmsi2hyLGUzQH80vWgJoY0mLS9VkkuFTqGIWMIYHbLSlPpW6Dv
 X7P9Wn9BhAcNfScxCCD/3bqgfXA9QvkXKoCGbv+8eRl6HWR22gSBRs+RVa95/6jhSaWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8U44gyQzqvf4y6CG3MJCDVGbbQbWNQeHGJwk
 AXTxpWwWGIp4Ob9pW+hGqm8pzz1OScIEjU4anUjHRcqxoXvn5k+p0eaJjp8K5JZnuEZCBmpn
 W3U9HNj3+pD5SIY//7lpA6a2lpAsrCMF1dovVuPAwpJ+ysjPOaYi5qUBU83BBqqBKKQVRG/s
 XcNgKByB8heXMjWxERhrAjgdYxFBspp0xWG2DaD57F7q1yQF4eLJOi8Gg1WKkZzKdojcjT0e
 kLVsg45zMYNYSXyNf4uPN7pU5tCIU3c+TLNDKi8gj1mMvBMmPKvpnkyNSZ8IUi3+KTTrU3PE
 cjCKpv9ZZrrIa9m0CC3V48g7FPf/QhnnTm7bcmil3yPiOPCDFbIGeZtGAbfNYgRsfLbyDg5B
 v4CbqNmPT0EC7agCsQWmKZORW03wY8TX8Go8pILKb/YfGKL2ggJUpfs/F/oQKQ994w9qwsC1
 i3VtpNwoLYnuUD6FA==
IronPort-HdrOrdr: A9a23:WEOs36vjTnPFiW1lnemP18Vr7skDcNV00zEX/kB9WHVpmszxra
 +TdZMgpHjJYVcqKQgdcL+7WZVoLUmwyXcx2/hyAV7AZniDhILLFuFfBOLZqlWKcREWtNQtsJ
 uIG5IObuEYZmIVsS+V2mWF+q4bsbq6zJw=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74208075"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 06/25] tools/fuzz/x86_instruction_emulator: rework makefile
Date: Fri, 24 Jun 2022 17:04:03 +0100
Message-ID: <20220624160422.53457-7-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Rework the dependencies of all objects. We don't need to add
dependencies for headers that $(CC) is capable of generating; we only
need to include $(DEPS_INCLUDE). Some dependencies are still needed so
that make knows to generate symlinks for them.

Remove the use of "vpath" for cpuid.c. While it works fine for now,
vpath will not be usable once we convert this makefile to subdirmk.
Also, "-iquote" is now needed to build "cpuid.o".

Replace "-I." with "-iquote .", so that it applies to double-quote
includes only.

Rather than checking whether a symlink exists, always regenerate it.
That way, if the source tree changes location, the symlink is
updated.
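
The "-n" flag matters here because plain "ln -sf" dereferences an
existing symlink to a directory, while "-n -f" replaces the link
itself. A minimal shell sketch (the scratch paths are illustrative):

```shell
# Set up a scratch area with two candidate target directories.
mkdir -p /tmp/lnsf-demo/old /tmp/lnsf-demo/new
cd /tmp/lnsf-demo

ln -nsf old link   # first run: create the symlink
ln -nsf new link   # source tree "moved": -n -f replaces the link itself,
                   # rather than creating old/new inside the old target
readlink link      # prints "new"
```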

Since we are creating a new .gitignore for the symlinks, also move the
existing entries to it.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - create a new per-directory .gitignore to add the new entry and existing ones

 tools/fuzz/x86_instruction_emulator/Makefile  | 32 ++++++++-----------
 .gitignore                                    |  6 ----
 .../fuzz/x86_instruction_emulator/.gitignore  |  7 ++++
 3 files changed, 21 insertions(+), 24 deletions(-)
 create mode 100644 tools/fuzz/x86_instruction_emulator/.gitignore

diff --git a/tools/fuzz/x86_instruction_emulator/Makefile b/tools/fuzz/x86_instruction_emulator/Makefile
index 1a6dbf94e1..f11437e6a2 100644
--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -8,33 +8,27 @@ else
 x86-insn-fuzz-all:
 endif
 
-# Add libx86 to the build
-vpath %.c $(XEN_ROOT)/xen/lib/x86
+cpuid.c: %: $(XEN_ROOT)/xen/lib/x86/% FORCE
+	ln -nsf $< $@
 
-x86_emulate:
-	[ -L $@ ] || ln -sf $(XEN_ROOT)/xen/arch/x86/$@
+x86_emulate: FORCE
+	ln -nsf $(XEN_ROOT)/xen/arch/x86/$@
 
 x86_emulate/%: x86_emulate ;
 
-x86-emulate.c x86-emulate.h wrappers.c: %:
-	[ -L $* ] || ln -sf $(XEN_ROOT)/tools/tests/x86_emulator/$*
+x86-emulate.c x86-emulate.h wrappers.c: %: $(XEN_ROOT)/tools/tests/x86_emulator/% FORCE
+	ln -nsf $< $@
 
-CFLAGS += $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -I.
+CFLAGS += $(CFLAGS_xeninclude) -D__XEN_TOOLS__ -iquote .
+cpuid.o: CFLAGS += -iquote $(XEN_ROOT)/xen/lib/x86
 
 GCOV_FLAGS := --coverage
 %-cov.o: %.c
 	$(CC) -c $(CFLAGS) $(GCOV_FLAGS) $< -o $@
 
-x86.h := $(addprefix $(XEN_ROOT)/tools/include/xen/asm/,\
-                     x86-vendors.h x86-defns.h msr-index.h) \
-         $(addprefix $(XEN_ROOT)/tools/include/xen/lib/x86/, \
-                     cpuid.h cpuid-autogen.h)
-x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
-
-# x86-emulate.c will be implicit for both
-x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(x86_emulate.h)
-
-fuzz-emul.o fuzz-emulate-cov.o cpuid.o wrappers.o: $(x86_emulate.h)
+x86-emulate.h: x86_emulate/x86_emulate.h
+x86-emulate.o x86-emulate-cov.o: x86-emulate.h x86_emulate/x86_emulate.c
+fuzz-emul.o fuzz-emul-cov.o wrappers.o: x86-emulate.h
 
 x86-insn-fuzzer.a: fuzz-emul.o x86-emulate.o cpuid.o
 	$(AR) rc $@ $^
@@ -51,7 +45,7 @@ all: x86-insn-fuzz-all
 
 .PHONY: distclean
 distclean: clean
-	rm -f x86_emulate x86-emulate.c x86-emulate.h
+	rm -f x86_emulate x86-emulate.c x86-emulate.h wrappers.c cpuid.c
 
 .PHONY: clean
 clean:
@@ -67,3 +61,5 @@ afl: afl-harness
 
 .PHONY: afl-cov
 afl-cov: afl-harness-cov
+
+-include $(DEPS_INCLUDE)
diff --git a/.gitignore b/.gitignore
index 6410dfbc72..8b6886f3fd 100644
--- a/.gitignore
+++ b/.gitignore
@@ -195,12 +195,6 @@ tools/flask/utils/flask-loadpolicy
 tools/flask/utils/flask-setenforce
 tools/flask/utils/flask-set-bool
 tools/flask/utils/flask-label-pci
-tools/fuzz/x86_instruction_emulator/asm
-tools/fuzz/x86_instruction_emulator/afl-harness
-tools/fuzz/x86_instruction_emulator/afl-harness-cov
-tools/fuzz/x86_instruction_emulator/wrappers.c
-tools/fuzz/x86_instruction_emulator/x86_emulate
-tools/fuzz/x86_instruction_emulator/x86-emulate.[ch]
 tools/helpers/init-xenstore-domain
 tools/helpers/xen-init-dom0
 tools/hotplug/common/hotplugpath.sh
diff --git a/tools/fuzz/x86_instruction_emulator/.gitignore b/tools/fuzz/x86_instruction_emulator/.gitignore
new file mode 100644
index 0000000000..65c3cf9702
--- /dev/null
+++ b/tools/fuzz/x86_instruction_emulator/.gitignore
@@ -0,0 +1,7 @@
+/asm
+/afl-harness
+/afl-harness-cov
+/cpuid.c
+/wrappers.c
+/x86_emulate
+/x86-emulate.[ch]
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355603.583347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnY-00059p-RB; Fri, 24 Jun 2022 16:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355603.583347; Fri, 24 Jun 2022 16:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnY-00059Z-Mq; Fri, 24 Jun 2022 16:04:44 +0000
Received: by outflank-mailman (input) for mailman id 355603;
 Fri, 24 Jun 2022 16:04:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnW-0004qc-MM
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:42 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58be9a20-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:04:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58be9a20-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086679;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SYJTJIKxHEFQcMKNpvz4kHz4LT/E8ZyffBaU/35UM+o=;
  b=HmzEjoLDOpRxONKwf00fObo3vaIJ5nB6CgEG5DHe/38eBMK1hMnarIcz
   rh/MIbAT//k992vK1EAAAxeD49R5VToSRF7Dij1YpAchiESf/wDhZYrXf
   BuLLRGwonlTwa4LqTU1j1syPrT6w5ooL5Gn8DYZuCZ8+pQjC/Qqp6K6rh
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74362336
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:nXFssakeiBmCED/TfVVOVYLo5gzbJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJNCGHUMvuIajb2ctEjYYvj8EIF6pHUmINrGQJvqCs9ESMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EgLd9IR2NYy24DnWV/V4
 7senuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYSwEvPfbdmeghaBhKEjlkP/B+opP7Li3q2SCT5xWun3rExvxvCAc9PJEC+/YxCmZLn
 RAaAGlTNFbZ3bvwme/lDLk37iggBJCD0Ic3s3d8zTbfHLA+TIrKWani7t5ExjYgwMtJGJ4yY
 uJGNWExNk+bPXWjPH88A4sHpqCpvEXvbmcAsHexo5E2u3ncmVkZPL/Fb4OOJ43iqd9utlmcj
 nLL+SL+GB5yHN6VxCeB83msrvTShi69U4UXfJWo+/gvjFCNy2g7DBwNSUD9sfS/klS5Wd9UN
 woT4CVGkEQp3BX1FJ+nBUT++SPa+E5HMzZNLwEkwAOLzKmP8geVOlMFXD9Zct57jJUaeTN/g
 zdlgOjV6SxTXKy9ECzAqO/P8GvtaUD5PkdZO3ZaEFJtD83L5dhq00mRFosL/Lud1IWdJN3m/
 9ydQMHSbZ03hNVD6ai09Euvb9mE9smQFV5dCuk6swuYAuJFiG2NPdXABaDzt6ooEWpgZgDpU
 II4s8af9vsSKpqGiTaARu4AdJnwuavbaGWN2AEzR8F+n9hIx5JFVdoIiN2ZDBcBDyr5UWWxP
 B+7Vf15vve/w0dGnYcoOtnsWqzGPIDrFMj/V+C8U+eilqNZLVfdlAk3PBb49zm0zCAEzPFuU
 b/GIJ3EJStLVsxaIM+eGr51PUkDnXtlmws+hPnTknya7FZpTCTEF+5bbATfNb5RAWHtiFy9z
 uuz/vCik313ONASqAGNmWLPBTjm9UQGOK0=
IronPort-HdrOrdr: A9a23:B2ve5Kt8OtcuMScEm3JqsGLs7skDcNV00zEX/kB9WHVpmszxra
 +TdZMgpHjJYVcqKQgdcL+7WZVoLUmwyXcx2/hyAV7AZniDhILLFuFfBOLZqlWKcREWtNQtsJ
 uIG5IObuEYZmIVsS+V2mWF+q4bsbq6zJw=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74362336"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH v3 04/25] tools/firmware/hvmloader: rework Makefile
Date: Fri, 24 Jun 2022 17:04:01 +0100
Message-ID: <20220624160422.53457-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Set up proper dependencies on libacpi so we don't need to run "make
hvmloader" in the "all" target. ("build.o"'s new prerequisite isn't
exactly proper, but a side effect of building the $(DSDT_FILES) is to
generate the "ssdt_*.h" headers needed by "build.o".)

Make use of "-iquote" instead of a plain "-I".

For the "roms.inc" target, use "$(SHELL)" instead of plain "sh", and
use the full path to "mkhex" instead of a relative one. Lastly, add
the "-f" flag to "mv", in case the target already exists.
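
The "$@.new then mv -f" pattern used for roms.inc can be sketched in
plain shell; the file names below are illustrative, not taken from the
patch:

```shell
# Build the new file completely under a temporary name, then replace
# the old one in a single rename; -f avoids any prompt if an existing
# target is unwritable. (Names here are illustrative only.)
out=roms.inc.demo
echo '#ifdef ROM_INCLUDE_ROMBIOS' >  "$out.new"
echo '#endif'                     >> "$out.new"
echo stale > "$out"               # pre-existing target from an earlier run
mv -f "$out.new" "$out"           # replaces it without complaint
```

Readers of roms.inc never see a half-written file, since the rename is
a single operation.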

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/firmware/hvmloader/Makefile | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/tools/firmware/hvmloader/Makefile b/tools/firmware/hvmloader/Makefile
index b754220839..fc20932110 100644
--- a/tools/firmware/hvmloader/Makefile
+++ b/tools/firmware/hvmloader/Makefile
@@ -60,8 +60,7 @@ ROMS += $(ROMBIOS_ROM) $(STDVGA_ROM) $(CIRRUSVGA_ROM)
 endif
 
 .PHONY: all
-all: acpi
-	$(MAKE) hvmloader
+all: hvmloader
 
 .PHONY: acpi
 acpi:
@@ -73,12 +72,15 @@ smbios.o: CFLAGS += -D__SMBIOS_DATE__="\"$(SMBIOS_REL_DATE)\""
 ACPI_PATH = ../../libacpi
 DSDT_FILES = dsdt_anycpu.c dsdt_15cpu.c dsdt_anycpu_qemu_xen.c
 ACPI_OBJS = $(patsubst %.c,%.o,$(DSDT_FILES)) build.o static_tables.o
-$(ACPI_OBJS): CFLAGS += -I. -DLIBACPI_STDUTILS=\"$(CURDIR)/util.h\"
+$(ACPI_OBJS): CFLAGS += -iquote . -DLIBACPI_STDUTILS=\"$(CURDIR)/util.h\"
 CFLAGS += -I$(ACPI_PATH)
 vpath build.c $(ACPI_PATH)
 vpath static_tables.c $(ACPI_PATH)
 OBJS += $(ACPI_OBJS)
 
+$(DSDT_FILES): acpi
+build.o: $(DSDT_FILES)
+
 hvmloader: $(OBJS) hvmloader.lds
 	$(LD) $(LDFLAGS_DIRECT) -N -T hvmloader.lds -o $@ $(OBJS)
 
@@ -87,21 +89,21 @@ roms.inc: $(ROMS)
 
 ifneq ($(ROMBIOS_ROM),)
 	echo "#ifdef ROM_INCLUDE_ROMBIOS" >> $@.new
-	sh ../../misc/mkhex rombios $(ROMBIOS_ROM) >> $@.new
+	$(SHELL) $(XEN_ROOT)/tools/misc/mkhex rombios $(ROMBIOS_ROM) >> $@.new
 	echo "#endif" >> $@.new
 endif
 
 ifneq ($(STDVGA_ROM),)
 	echo "#ifdef ROM_INCLUDE_VGABIOS" >> $@.new
-	sh ../../misc/mkhex vgabios_stdvga $(STDVGA_ROM) >> $@.new
+	$(SHELL) $(XEN_ROOT)/tools/misc/mkhex vgabios_stdvga $(STDVGA_ROM) >> $@.new
 	echo "#endif" >> $@.new
 endif
 ifneq ($(CIRRUSVGA_ROM),)
 	echo "#ifdef ROM_INCLUDE_VGABIOS" >> $@.new
-	sh ../../misc/mkhex vgabios_cirrusvga $(CIRRUSVGA_ROM) >> $@.new
+	$(SHELL) $(XEN_ROOT)/tools/misc/mkhex vgabios_cirrusvga $(CIRRUSVGA_ROM) >> $@.new
 	echo "#endif" >> $@.new
 endif
-	mv $@.new $@
+	mv -f $@.new $@
 
 .PHONY: clean
 clean:
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355601.583329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnX-0004r3-90; Fri, 24 Jun 2022 16:04:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355601.583329; Fri, 24 Jun 2022 16:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnX-0004qt-5f; Fri, 24 Jun 2022 16:04:43 +0000
Received: by outflank-mailman (input) for mailman id 355601;
 Fri, 24 Jun 2022 16:04:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnW-0004qb-11
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:42 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58b2e4b9-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58b2e4b9-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086679;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Itv0TYjJaWvZTLhYwIDcVR+kB+4hcfXSmAszpJaqRIw=;
  b=d5w6S5JcxLmZpjIopIbB+NpkHkp6qA5FpkUn4OhefwStCBeNdRMKK3tz
   T0YLG7xGoRDYHQhQlIIQb4iwa10Md1/N5ke+bZ616oWTSgkJhmH74TO7P
   PzdCNAmZy97juqobN7EGbOVJRCu1+dNesZTKcX5b3JoLyi+M6gyjORwKG
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74384146
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Elena Ufimtseva
	<elena.ufimtseva@oracle.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 02/25] tools/debugger/gdbsx: Fix and cleanup makefiles
Date: Fri, 24 Jun 2022 17:03:59 +0100
Message-ID: <20220624160422.53457-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

gdbsx/:
  - Make use of subdir facility for the "clean" target.
  - No need to remove the *.a files; they aren't in this dir.
  - Avoid calling "distclean" in subdirs, as the "distclean" targets only
    call "clean", and "clean" already runs "clean" in the subdirs.
  - Avoid the need to build "gx_all.a" and "xg_all.a" from the "all"
    recipe by forcing make to check "xg/xg_all.a" and "gx/gx_all.a" for
    updates, by giving them "FORCE" as a prerequisite. Now, when making
    "gdbsx", make recurses even when both *.a files already exist.
  - List the target in $(TARGETS).

gdbsx/*/:
  - Fix dependency on *.h.
  - Remove some dead code.
  - List targets in $(TARGETS).
  - Remove "build" target.
  - Cleanup "clean" targets.
  - Remove comments about the choice of "ar" instead of "ld".
  - Use "$(AR)" instead of plain "ar".
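
The "FORCE" idiom described above can be sketched with a minimal
generated Makefile (the demo names are illustrative; in Xen, FORCE is
provided by the common Rules.mk):

```shell
# A target with FORCE as a prerequisite is re-checked on every run, so
# make always recurses into the sub-make, which then decides whether
# the archive really needs rebuilding. (Demo names are illustrative.)
printf 'FORCE:\n\nsub/all.a: FORCE\n\t@echo recursing\n\t@mkdir -p sub && touch sub/all.a\n' > Makefile.force-demo
make -f Makefile.force-demo sub/all.a    # first run: builds sub/all.a
make -f Makefile.force-demo sub/all.a    # second run: still recurses
```

Because "FORCE" has no recipe and never exists as a file, anything that
lists it as a prerequisite is always considered out of date.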

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - also replace plain "ar" by "$(AR)"

 tools/debugger/gdbsx/Makefile    | 20 ++++++++++----------
 tools/debugger/gdbsx/gx/Makefile | 15 +++++++--------
 tools/debugger/gdbsx/xg/Makefile | 25 +++++++------------------
 3 files changed, 24 insertions(+), 36 deletions(-)

diff --git a/tools/debugger/gdbsx/Makefile b/tools/debugger/gdbsx/Makefile
index 5571450a89..4aaf427c45 100644
--- a/tools/debugger/gdbsx/Makefile
+++ b/tools/debugger/gdbsx/Makefile
@@ -1,20 +1,20 @@
 XEN_ROOT = $(CURDIR)/../../..
 include ./Rules.mk
 
+SUBDIRS-y += gx
+SUBDIRS-y += xg
+
+TARGETS := gdbsx
+
 .PHONY: all
-all:
-	$(MAKE) -C gx
-	$(MAKE) -C xg
-	$(MAKE) gdbsx
+all: $(TARGETS)
 
 .PHONY: clean
-clean:
-	rm -f xg_all.a gx_all.a gdbsx
-	set -e; for d in xg gx; do $(MAKE) -C $$d clean; done
+clean: subdirs-clean
+	rm -f $(TARGETS)
 
 .PHONY: distclean
 distclean: clean
-	set -e; for d in xg gx; do $(MAKE) -C $$d distclean; done
 
 .PHONY: install
 install: all
@@ -28,7 +28,7 @@ uninstall:
 gdbsx: gx/gx_all.a xg/xg_all.a 
 	$(CC) $(LDFLAGS) -o $@ $^
 
-xg/xg_all.a:
+xg/xg_all.a: FORCE
 	$(MAKE) -C xg
-gx/gx_all.a:
+gx/gx_all.a: FORCE
 	$(MAKE) -C gx
diff --git a/tools/debugger/gdbsx/gx/Makefile b/tools/debugger/gdbsx/gx/Makefile
index 3b8467f799..e9859aea9c 100644
--- a/tools/debugger/gdbsx/gx/Makefile
+++ b/tools/debugger/gdbsx/gx/Makefile
@@ -2,21 +2,20 @@ XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
 GX_OBJS := gx_comm.o gx_main.o gx_utils.o gx_local.o
-GX_HDRS := $(wildcard *.h)
+
+TARGETS := gx_all.a
 
 .PHONY: all
-all: gx_all.a
+all: $(TARGETS)
 
 .PHONY: clean
 clean:
-	rm -rf gx_all.a *.o .*.d
+	rm -f *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
 
-#%.o: %.c $(GX_HDRS) Makefile
-#	$(CC) -c $(CFLAGS) -o $@ $<
-
-gx_all.a: $(GX_OBJS) Makefile $(GX_HDRS)
-	ar cr $@ $(GX_OBJS)        # problem with ld using -m32 
+gx_all.a: $(GX_OBJS) Makefile
+	$(AR) cr $@ $(GX_OBJS)
 
+-include $(DEPS_INCLUDE)
diff --git a/tools/debugger/gdbsx/xg/Makefile b/tools/debugger/gdbsx/xg/Makefile
index acdcddf0d5..05325d6d81 100644
--- a/tools/debugger/gdbsx/xg/Makefile
+++ b/tools/debugger/gdbsx/xg/Makefile
@@ -1,35 +1,24 @@
 XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
-XG_HDRS := xg_public.h 
 XG_OBJS := xg_main.o 
 
 CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 
+TARGETS := xg_all.a
 
 .PHONY: all
-all: build
+all: $(TARGETS)
 
-.PHONY: build
-build: xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a
-
-xg_all.a: $(XG_OBJS) Makefile $(XG_HDRS)
-	ar cr $@ $(XG_OBJS)    # problems using -m32 in ld 
-#	$(LD) -b elf32-i386 $(LDFLAGS) -r -o $@ $^
-#	$(CC) -m32 -c -o $@ $^
-
-# xg_main.o: xg_main.c Makefile $(XG_HDRS)
-#$(CC) -c $(CFLAGS) -o $@ $<
-
-# %.o: %.c $(XG_HDRS) Makefile  -- doesn't work as it won't overwrite Rules.mk
-#%.o: %.c       -- doesn't recompile when .c changed
+xg_all.a: $(XG_OBJS) Makefile
+	$(AR) cr $@ $(XG_OBJS)
 
 .PHONY: clean
 clean:
-	rm -rf xen xg_all.a $(XG_OBJS)  .*.d
+	rm -f $(TARGETS) $(XG_OBJS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
+
+-include $(DEPS_INCLUDE)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355605.583374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lna-0005qL-Ma; Fri, 24 Jun 2022 16:04:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355605.583374; Fri, 24 Jun 2022 16:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lna-0005pE-H5; Fri, 24 Jun 2022 16:04:46 +0000
Received: by outflank-mailman (input) for mailman id 355605;
 Fri, 24 Jun 2022 16:04:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnY-0004qb-PE
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b5e355d-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b5e355d-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086683;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=e4dSMew7ThkfUT4VBhxHq8D2hkxwuVvIslum+Yj9ZTA=;
  b=Ks++WWr7Udo9du6BL8xg2CUni19gfCprMdkw0J0mYT+uvZQMZhsIBWdJ
   TlL0j1yaduo+0C2aFNPc3FExEs0wvoTh0OKYJFV4YWs9U2BpbKy5CnHTS
   tkh5wsTig3xsNcjUiXDtI2A6XVygvs/y9RgjIWEyH0utJH4QqKavaCQRM
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74208051
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 03/25] tools/examples: cleanup Makefile
Date: Fri, 24 Jun 2022 17:04:00 +0100
Message-ID: <20220624160422.53457-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Don't check whether a target exists before installing it. For
directories, install doesn't complain, and for files the check would
prevent updating them. Also remove the existing loops and instead
install all files with a single call to $(INSTALL_DATA).

Remove XEN_CONFIGS-y which isn't used.

Remove "build" target.

Add an empty line after the first comment. The comment isn't about
$(XEN_READMES), it is about the makefile as a whole.
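
The install(1) behaviour this relies on can be checked quickly in
shell (the paths below are illustrative):

```shell
# install -d does not complain when the directory already exists, and
# installing over an existing file updates it: exactly what the old
# "[ -e ... ] ||" guards were needlessly (and harmfully) avoiding.
install -d demo-config/xen
install -d demo-config/xen                     # second call: still succeeds
echo v1 > xl.conf.demo
install -m 644 xl.conf.demo demo-config/xen/
echo v2 > xl.conf.demo
install -m 644 xl.conf.demo demo-config/xen/   # updates the installed copy
```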

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - remove existing loops in install targets and use a single call to
      $(INSTALL_DATA) to install multiple files.

 tools/examples/Makefile | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/tools/examples/Makefile b/tools/examples/Makefile
index 14e24f4cb3..c839bf5603 100644
--- a/tools/examples/Makefile
+++ b/tools/examples/Makefile
@@ -2,6 +2,7 @@ XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 # Xen configuration dir and configs to go there.
+
 XEN_READMES = README
 
 XEN_CONFIGS += xlexample.hvm
@@ -10,14 +11,9 @@ XEN_CONFIGS += xlexample.pvhlinux
 XEN_CONFIGS += xl.conf
 XEN_CONFIGS += cpupool
 
-XEN_CONFIGS += $(XEN_CONFIGS-y)
-
 .PHONY: all
 all:
 
-.PHONY: build
-build:
-
 .PHONY: install
 install: all install-readmes install-configs
 
@@ -26,12 +22,8 @@ uninstall: uninstall-readmes uninstall-configs
 
 .PHONY: install-readmes
 install-readmes:
-	[ -d $(DESTDIR)$(XEN_CONFIG_DIR) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)
-	set -e; for i in $(XEN_READMES); \
-	    do [ -e $(DESTDIR)$(XEN_CONFIG_DIR)/$$i ] || \
-	    $(INSTALL_DATA) $$i $(DESTDIR)$(XEN_CONFIG_DIR); \
-	done
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)
+	$(INSTALL_DATA) $(XEN_READMES) $(DESTDIR)$(XEN_CONFIG_DIR)
 
 .PHONY: uninstall-readmes
 uninstall-readmes:
@@ -39,14 +31,9 @@ uninstall-readmes:
 
 .PHONY: install-configs
 install-configs: $(XEN_CONFIGS)
-	[ -d $(DESTDIR)$(XEN_CONFIG_DIR) ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)
-	[ -d $(DESTDIR)$(XEN_CONFIG_DIR)/auto ] || \
-		$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)/auto
-	set -e; for i in $(XEN_CONFIGS); \
-	    do [ -e $(DESTDIR)$(XEN_CONFIG_DIR)/$$i ] || \
-	    $(INSTALL_DATA) $$i $(DESTDIR)$(XEN_CONFIG_DIR); \
-	done
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_CONFIG_DIR)/auto
+	$(INSTALL_DATA) $(XEN_CONFIGS) $(DESTDIR)$(XEN_CONFIG_DIR)
 
 .PHONY: uninstall-configs
 uninstall-configs:
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355611.583438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnh-0007ZA-CV; Fri, 24 Jun 2022 16:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355611.583438; Fri, 24 Jun 2022 16:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnh-0007YT-7K; Fri, 24 Jun 2022 16:04:53 +0000
Received: by outflank-mailman (input) for mailman id 355611;
 Fri, 24 Jun 2022 16:04:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnf-0004qb-Bx
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:51 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f689afe-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f689afe-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086690;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=06doEHilS6AKmsVA0dHPUnjlMaGNmuuqrJF1JZCOe3k=;
  b=M392P2F8vHceH75PhI2aXuB9FTGIEnHWqlm6emfUcPvsJhII8Vy9WvHi
   8sWeEZJ6dVXB9mLG3DSPs6MP9Uh1UoT9nbsamKSH71epL5yB47l1j+scZ
   7kUW6YPk+gy+akZuAUJelKjb3uWM7tZCOgVW0fvz9/18/E4Rz/oPWW2dk
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74384185
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 10/25] tools/xentop: rework makefile
Date: Fri, 24 Jun 2022 17:04:07 +0100
Message-ID: <20220624160422.53457-11-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Add "xentop" to "TARGETS" because this variable will be useful later.

Always define all the targets, even when configured with
--disable-monitor; instead, simply don't visit the subdirectory.
This means xentop/ is no longer visited during "make clean", which is
how most other subdirs in tools/ work.

Also add the missing "xentop" rule. It only works without one because
we still have make's built-in rules and variables, but fix this so we
don't have to rely on them.

Use $(TARGETS) with $(INSTALL_PROG), and thus install into the
directory rather than spelling out the program name.

In the "clean" rule, use $(RM) and remove all "*.o" instead of just
one object.
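
The resulting shape (a TARGETS list, an explicit rule instead of
make's built-in one, and an idempotent clean) can be sketched with a
generated demo Makefile; the names and the trivial "cat" recipe are
illustrative stand-ins:

```shell
# Sketch of the reworked layout: list targets in TARGETS, give each an
# explicit rule instead of relying on built-in rules, and make "clean"
# idempotent with $(RM) (which defaults to "rm -f" in GNU make).
printf 'TARGETS := xentop.demo\nall: $(TARGETS)\nxentop.demo: xentop.demo.in\n\tcat $< > $@\nclean:\n\t$(RM) $(TARGETS)\n' > Makefile.xentop-demo
echo payload > xentop.demo.in
make -f Makefile.xentop-demo all         # explicit rule builds the target
```

Running "make -f Makefile.xentop-demo clean" twice in a row succeeds
both times, since "rm -f" ignores already-missing files.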

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - use $(RM) in clean.
    - remove all *.o instead of just one object in "clean" rule.
    - in the "install" rule, make use of $(TARGETS); install into a dir
      rather than to a specific path, in case there are more targets.

 tools/Makefile        |  2 +-
 tools/xentop/Makefile | 21 +++++++++------------
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 79b4c7e3de..0c1d8b64a4 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -17,7 +17,7 @@ SUBDIRS-$(CONFIG_XCUTILS) += xcutils
 SUBDIRS-$(CONFIG_X86) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
-SUBDIRS-y += xentop
+SUBDIRS-$(XENSTAT_XENTOP) += xentop
 SUBDIRS-y += libfsimage
 SUBDIRS-$(CONFIG_Linux) += vchan
 
diff --git a/tools/xentop/Makefile b/tools/xentop/Makefile
index 0034114684..7bd96f34d5 100644
--- a/tools/xentop/Makefile
+++ b/tools/xentop/Makefile
@@ -13,36 +13,33 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-ifneq ($(XENSTAT_XENTOP),y)
-.PHONY: all install xentop uninstall
-all install xentop uninstall:
-else
-
 CFLAGS += -DGCC_PRINTF -Werror $(CFLAGS_libxenstat)
 LDLIBS += $(LDLIBS_libxenstat) $(CURSES_LIBS) $(TINFO_LIBS) $(SOCKET_LIBS) -lm
 CFLAGS += -DHOST_$(XEN_OS)
 
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
-LDFLAGS += $(APPEND_LDFLAGS)
+
+TARGETS := xentop
 
 .PHONY: all
-all: xentop
+all: $(TARGETS)
+
+xentop: xentop.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS) $(APPEND_LDFLAGS)
 
 .PHONY: install
-install: xentop
+install: all
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
-	$(INSTALL_PROG) xentop $(DESTDIR)$(sbindir)/xentop
+	$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(sbindir)
 
 .PHONY: uninstall
 uninstall:
 	rm -f $(DESTDIR)$(sbindir)/xentop
 
-endif
-
 .PHONY: clean
 clean:
-	rm -f xentop xentop.o $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355612.583451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnj-00081P-Tr; Fri, 24 Jun 2022 16:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355612.583451; Fri, 24 Jun 2022 16:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnj-00080y-Mk; Fri, 24 Jun 2022 16:04:55 +0000
Received: by outflank-mailman (input) for mailman id 355612;
 Fri, 24 Jun 2022 16:04:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lni-0004qc-I8
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:54 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 617cdfd6-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:04:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 617cdfd6-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086693;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=cq0Zb+dmEm/n03sfWJhUW4S7VwgI0ziQPkbqQdTYp0c=;
  b=ecyKfyyNpERVin3i8JpZ9G9/a46aAs6QSPH/5+dxdZJ3dyFBbQFXUmQL
   GhaII6EC9QFMMOXn1TqVyWd9T5Q2JmBt8E588ZX+SKeqG9GmXXGQMyJ3Q
   RldhOumZ+ei3RF02NSa/GM70jDZO60CTUTbZ8hnlPmEifNU/1b3e7HG9l
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74208110
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:xdIZRq9B1i9W19uAfT4hDrUDnX6TJUtcMsCJ2f8bNWPcYEJGY0x3z
 zYZWD2APPaIMGP2ct1/ad6+9hwBucfXy9A2QQs+q3w8E34SpcT7XtnIdU2Y0wF+jyHgoOCLy
 +1EN7Es+ehtFie0Si+Fa+Sn9T8mvU2xbuKU5NTsY0idfic5DnZ74f5fs7Rh2NQw34LlW1nlV
 e7a+KUzBnf0g1aYDUpMg06zgEsHUCPa4W5wUvQWPJinjXeG/5UnJMt3yZKZdhMUdrJ8DO+iL
 9sv+Znilo/vE7XBPfv++lrzWhVirrc/pmFigFIOM0SpqkAqSiDfTs/XnRfTAKtao2zhojx/9
 DlCnbauGD8uJ/bBo/smXAdqTxNzN4Je9YaSdBBTseTLp6HHW37lwvEoB0AqJ4wIvO1wBAmi9
 9RBdmpLNErawbvrnvTrEYGAhex6RCXvFIoZpnFnyyCfFfs8SIrPa67L+cVZzHE7gcUm8fP2O
 JZDMWo2NUyojxtnMxQ5WK1jmv+RmnzRUyJd8g6TlIFo2j2GpOB2+Oe0a4eEEjCQfu1OhVqRr
 G/C+2X/AzkZOcaZxD7D9Wij7sfQmQvrVYRUE6e3ntZoj0eU3Xc7EwANWB2wpvzRol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4Eec39QWMwar8+BuCCy4PSTspQN47sM47QxQ62
 1nPmMnmbQGDq5XMFyjbrO3N62rvZ25FdgfueBPoUyMv/Yjbp5hogynQddl9IvKHg9faATzJl
 mXiQDcFu1kDsSIa//zloA6f2G/0+cihoh0dvVuOAD/8hu9tTMv8PtHztwCGhRpVBNzBJmRtq
 kTojCR3AAomKZiW3BKAT+wWdF1Cz6bUaWaM6bKD8nRIythMx5JAVdoJiN2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPa9S4y9B6mLMoIWPfCdkTNrGgk0PSZ8OEi9+HXAbIllY
 cvLGSpSJSxy5VtbIMqeGL5GjO5DKtEWzmLPX5HrpymaPU6lTCfNE98taQLWBshgtf/siFiFo
 r53aprRoz0CAbKWX8Ui2dNKRbz8BSNgXs6eRg0+XrPrHzeK70l7WqGIn+9+Ktc790mX/8+Rl
 kyAtoZj4AKXrRX6xc+iNhiPtJuHsU5DkE8G
IronPort-HdrOrdr: A9a23:bQzovqOcWgvRKMBcTsejsMiBIKoaSvp037Eqv3ofdfUzSL3+qy
 nOpoVj6faaslcssR0b9OxofZPwI080lqQFhbX5X43DYOCOggLBR+tfBMnZsljd8kXFh4hgPM
 xbHZSWZuedMbEDt7eY3DWF
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74208110"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: [XEN PATCH v3 12/25] .gitignore: Cleanup ignores of tools/libs/*/{headers.chk,*.pc}
Date: Fri, 24 Jun 2022 17:04:09 +0100
Message-ID: <20220624160422.53457-13-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - move new .gitignore entries to the one in tools/libs/

 .gitignore            | 26 --------------------------
 tools/libs/.gitignore |  2 ++
 2 files changed, 2 insertions(+), 26 deletions(-)

diff --git a/.gitignore b/.gitignore
index 8b6886f3fd..1de28c833c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -106,26 +106,8 @@ tools/config.cache
 config/Tools.mk
 config/Stubdom.mk
 config/Docs.mk
-tools/libs/toolcore/headers.chk
-tools/libs/toolcore/xentoolcore.pc
-tools/libs/toollog/headers.chk
-tools/libs/toollog/xentoollog.pc
-tools/libs/evtchn/headers.chk
-tools/libs/evtchn/xenevtchn.pc
-tools/libs/gnttab/headers.chk
-tools/libs/gnttab/xengnttab.pc
-tools/libs/hypfs/headers.chk
-tools/libs/hypfs/xenhypfs.pc
-tools/libs/call/headers.chk
-tools/libs/call/xencall.pc
 tools/libs/ctrl/libxenctrl.map
-tools/libs/ctrl/xencontrol.pc
-tools/libs/foreignmemory/headers.chk
-tools/libs/foreignmemory/xenforeignmemory.pc
-tools/libs/devicemodel/headers.chk
-tools/libs/devicemodel/xendevicemodel.pc
 tools/libs/guest/libxenguest.map
-tools/libs/guest/xenguest.pc
 tools/libs/guest/xc_bitops.h
 tools/libs/guest/xc_core.h
 tools/libs/guest/xc_core_arm.h
@@ -145,21 +127,13 @@ tools/libs/light/testidl.c
 tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
-tools/libs/light/xenlight.pc
-tools/libs/stat/headers.chk
 tools/libs/stat/libxenstat.map
-tools/libs/stat/xenstat.pc
-tools/libs/store/headers.chk
 tools/libs/store/list.h
 tools/libs/store/utils.h
-tools/libs/store/xenstore.pc
 tools/libs/store/xs_lib.c
-tools/libs/util/*.pc
 tools/libs/util/libxlu_cfg_y.output
 tools/libs/util/libxenutil.map
-tools/libs/vchan/headers.chk
 tools/libs/vchan/libxenvchan.map
-tools/libs/vchan/xenvchan.pc
 tools/debugger/gdb/gdb-6.2.1-linux-i386-xen/*
 tools/debugger/gdb/gdb-6.2.1/*
 tools/debugger/gdb/gdb-6.2.1.tar.bz2
diff --git a/tools/libs/.gitignore b/tools/libs/.gitignore
index 4a13126144..1ad7c7f0cb 100644
--- a/tools/libs/.gitignore
+++ b/tools/libs/.gitignore
@@ -1 +1,3 @@
+*/*.pc
+*/headers.chk
 */headers.lst
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:04:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355613.583462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnn-0008SV-7l; Fri, 24 Jun 2022 16:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355613.583462; Fri, 24 Jun 2022 16:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnn-0008SK-2g; Fri, 24 Jun 2022 16:04:59 +0000
Received: by outflank-mailman (input) for mailman id 355613;
 Fri, 24 Jun 2022 16:04:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnl-0004qc-E3
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:57 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 62e59467-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:04:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62e59467-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086696;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=DRGeCKQgwOk/UNXsa1d505aRXj+pmL/UOsklwZQeiFU=;
  b=I+MksVtG6MTNz6Nn1xvQjLW1EDKt6shiRYJJ3yBQW5oUc6sKr0gT/iuT
   4RShp4Xxxs81bXEKpEgd7SZrbamCCAnKIYDEWkcf92/Ix4ctb7cP8//uy
   2PrIpGmvRLhMyeYfvafZTS+/lVKOmWrQ48+BCA9aLOYkye37Iyj8lPWLG
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74787692
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:zKbMeaxgaEDdRUT38vB6t+dvxirEfRIJ4+MujC+fZmUNrF6WrkVVm
 mZOUTuFb/7cM2amc4skPdvg9x8Ovp6Dx9ExTFQ5qCAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX1JZS5LwbZj2NY224ThWWthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 Nplnse5TiYHP4DwmuEvcyJeA3lFO6l+9+qSSZS/mZT7I0zudnLtx7NlDV0sPJ1e8eFyaY1M3
 aVGcnZXNEnF3r/ohuLgIgVvrp1LwM3DNYUDunZm3HfBAOwvW5zrSKTW/95Imjw3g6iiGN6BO
 5VJNmQ+NnwsZTUWHl4uCYhktt23oWvaVGRos3akvIQetj27IAtZj+G2bYu9lsaxbcdahEGDv
 Urd4n/0RBodMbS31j6t4n+qwOjVkkvTSI8UUbG16PNuqFmS3XAITg0bU0Ohpvu0gVL4XMhQQ
 3H44QJ38/J0rhbyCICgAVvo+xZooyLwRfJ7SOQ9yS+M55bW5jS5PW4UFgVHbOQp4ZpeqSMR6
 rOZoz/4LWUx7ePNEi/Fqef8QSCaYnZMczJbDcMQZU5cuoS4/tlu5v7aZow7eJNZmOEZDt0ZL
 9qiiCElz4segscQv0lQ1QCW2mn8znQlo+Nc2+k2Yo5GxlkgDGJdT9b0gWU3FN4ZRGpjcnGPv
 WIfh++V5/0UAJeGmUSlGbtQQunxtq/abGWE3jaD+qXNERz3oxZPmqgAiAyS2W8zappUEdMXS
 BW7VfxtCG97YyLxMP4fj3OZAMU216nwfenYugTvRoMWOPBZLVbflAk3PBL49z29wSAEzPBkU
 b/GIJnEMJrvIfk+pNZAb7xGiuFDK+FX7T67eK0XODz9gOTHOiLKGOxbWLZMB8hghJ65TMzu2
 443H6O3J993CYUSvgG/HVYvEG03
IronPort-HdrOrdr: A9a23:nvL99a078SDYc1CBdUnA6QqjBIokLtp133Aq2lEZdPRUGvb3qy
 nIpoV86faUskdoZJhOo7C90cW7LU80sKQFhLX5Xo3SOzUO2lHYT72KhLGKq1aLdhEWtNQtsZ
 uIG5IOceEYZmIasS+V2maF+q4bsbu6zJw=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74787692"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 13/25] tools/libs/util: cleanup Makefile
Date: Fri, 24 Jun 2022 17:04:10 +0100
Message-ID: <20220624160422.53457-14-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Remove -I. from CFLAGS; it isn't necessary.

Remove $(AUTOSRCS); it isn't used.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/util/Makefile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/libs/util/Makefile b/tools/libs/util/Makefile
index ffe507b379..493d2e00be 100644
--- a/tools/libs/util/Makefile
+++ b/tools/libs/util/Makefile
@@ -11,7 +11,7 @@ OBJS-y += libxlu_pci.o
 
 CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
-CFLAGS += -I. $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenctrl)
 
 CFLAGS += $(PTHREAD_CFLAGS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
@@ -29,7 +29,6 @@ ifeq ($(BISON),)
 endif
 
 AUTOINCS = libxlu_cfg_y.h libxlu_cfg_l.h libxlu_disk_l.h
-AUTOSRCS = libxlu_cfg_y.c libxlu_cfg_l.c
 
 LIBHEADER := libxlutil.h
 PKG_CONFIG_NAME := Xlutil
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:05:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355616.583473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnt-0000lZ-1a; Fri, 24 Jun 2022 16:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355616.583473; Fri, 24 Jun 2022 16:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lns-0000kn-LY; Fri, 24 Jun 2022 16:05:04 +0000
Received: by outflank-mailman (input) for mailman id 355616;
 Fri, 24 Jun 2022 16:05:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnq-0004qc-VI
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:02 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 656edef2-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 656edef2-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086700;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=AvHTHvehrcxwVAzIH4QeC8DG622elqZioP/rCdCMY4Q=;
  b=NEjOe9I9L8eU4Aajhgy25p0mgG6XXIIeGkqV8Ym5XoO7J1gCg41E/YEB
   kv1AchNILcmquAdiG1GW7pQtZizv/4nbBOhlb3cDg1wBVh2giy8vY+TMC
   0qkjJBeVbrVm/eAUhKNDdCIViW0CKVIOygCHXRjBncAa8NHWY7PDvYh0G
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74384212
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:7qkL9q6EJxPxb1XyrvgjrQxRtGDHchMFZxGqfqrLsTDasY5as4F+v
 jNMWGnXOanZa2bweYt0ady29R9VvZ7Qy9BnT1NvrSpjHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG96yE6j8lkf5KkYAL+EnkZqTRMFWFw03qPp8Zj2tQy2YbjX1vX0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurSzZCZ5Hrf1l94YCTZVF39cBfx5w4budC3XXcy7lyUqclPpyvRqSko3IZcZ6qB8BmQmG
 f4wcW5XKErZ3qTvnez9GrIEascLdaEHOKsWvG1gyjfIS+4rW5nZT43B5MNC3Sd2jcdLdRrbT
 5VFMmozNk2aC/FJEltPOr8Bt8C0vX68WSV1p3Kb+ulvxHeGmWSd15CyaYGIK7RmX/59nEmCo
 Xnd13/kGRxcP9uaoRKa9lq8i+mJmjn0MKoCGbv9+vN0jVm7wm0IFAZQRVa9ueO+iEO1R5RYM
 UN8x8Y1hfFsrgrxFIC7BkDm5i7f1vIBZzZOO+4XyVGt0JPb2QPDKWUAEBx5OeMdjeZjEFTGy
 WS1c8PV6S1H6ePIFyrGq+/L/VteKgBOczZcOHZsoR8tpoC6/dpt1k+nosNLSvbdszHjJd3nL
 9lmRgAajq5bs8ME3r7TEbvv02P1/cihouLYC2zqsoOZAuBRPtfNi3SAswSz0Bq5BN/xoqO9l
 HYFgdOCy+sFEIuAkieAKM1UQuz3v67UaWKA2QYwd3XEy9hL0yT7FWy3yGEWGauUGpxcJW+Bj
 LH742u9G6O/zFP1NPQqMupd+uwhzLT6FMSNa804muFmO8ArHCfepXkGTRfJgwjFzRh9+Ylia
 MzzWZv9Uh4n5VFPkWPeqxE1iuRwmEjTBAr7GPjG8vhQ+eDPOifLFehUawXmgyJQxPrsnTg5O
 u13b6OioyizmsWnCsUL2eb/9Ww3EEU=
IronPort-HdrOrdr: A9a23:4t1VPqoJaMCqR9ovrIH8W0EaV5oTeYIsimQD101hICG8cqSj+f
 xG+85rsyMc6QxhIE3I9urhBEDtex/hHNtOkOws1NSZLW7bUQmTXeJfBOLZqlWKcUDDH6xmpM
 NdmsBFeaTN5DNB7PoSjjPWLz9Z+qjkzJyV
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74384212"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 15/25] libs/libs.mk: Rename $(LIB) to $(TARGETS)
Date: Fri, 24 Jun 2022 17:04:12 +0100
Message-ID: <20220624160422.53457-16-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/libs.mk | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 58d8166b09..e02f91f95e 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -23,9 +23,9 @@ LDLIBS += $(foreach lib, $(USELIBS_$(LIBNAME)), $(LDLIBS_libxen$(lib)))
 PIC_OBJS := $(OBJS-y:.o=.opic)
 
 LIB_FILE_NAME = $(FILENAME_$(LIBNAME))
-LIB := lib$(LIB_FILE_NAME).a
+TARGETS := lib$(LIB_FILE_NAME).a
 ifneq ($(nosharedlibs),y)
-LIB += lib$(LIB_FILE_NAME).so
+TARGETS += lib$(LIB_FILE_NAME).so
 endif
 
 PKG_CONFIG ?= $(LIB_FILE_NAME).pc
@@ -55,7 +55,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
-all: headers.chk $(LIB) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
+all: headers.chk $(TARGETS) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
 
 ifneq ($(NO_HEADERS_CHK),y)
 headers.chk:
@@ -124,7 +124,7 @@ TAGS:
 
 .PHONY: clean
 clean::
-	rm -rf $(LIB) *~ $(DEPS_RM) $(OBJS-y) $(PIC_OBJS)
+	rm -rf $(TARGETS) *~ $(DEPS_RM) $(OBJS-y) $(PIC_OBJS)
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
 	rm -f headers.chk headers.lst
 	rm -f $(PKG_CONFIG)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:05:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:05:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355617.583478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnu-0000tH-8z; Fri, 24 Jun 2022 16:05:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355617.583478; Fri, 24 Jun 2022 16:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnt-0000r3-My; Fri, 24 Jun 2022 16:05:05 +0000
Received: by outflank-mailman (input) for mailman id 355617;
 Fri, 24 Jun 2022 16:05:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnr-0004qc-VN
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:03 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67526923-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67526923-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086702;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=nUZOfBqCn5La2Rq50ef3cLNXGuInKFbPM4f0Kz8ZjEw=;
  b=U44QaZfzvIGNLuSYSB9zt5XpFF9/XG2Je3fBcg6RPSi9hqrVzuLsbuEj
   hP31PnuMAcIul5Og0zg3BmPQ+xXFy5ZVwKnIipdo4XhCYrSvGNETgAIEo
   RkuE9qLimGzK7XuursGTezh5a57V0/T2BoHjFDWXfJRQl4gZImPM+LhmR
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74787718
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:Pekj9qzCshs3SyY3t+x6t+dvxirEfRIJ4+MujC+fZmUNrF6WrkVRn
 WoWDWGGOfnbYGrwf9Ena4i18U0HucPRnNAwGQRo+CAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX1JZS5LwbZj2NY224ThWWthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 NplhbKpQwITIpLwg8MCSD5hShpUEIQc5+qSSZS/mZT7I0zudnLtx7NlDV0sPJ1e8eFyaY1M3
 aVGcnZXNEnF3r/ohuLgIgVvrp1LwM3DNYUDunZm3HfBAOwvW5zrSKTW/95Imjw3g6iiGN6BO
 5VJNmU2PHwsZTVEE2YlBq46u9uJi0ftXxlfqHufqek4tj27IAtZj+G2bYu9lsaxbcdahEGDv
 Urd4n/0RBodMbS31j6t4n+qwOjVkkvTSI8UUbG16PNuqFmS3XAITg0bU0Ohpvu0gVL4XMhQQ
 3H44QJ38/J0rhbyCICgAVvo+xZooyLwRfJgPfQw7TuR9ZbPxBmCIVFddRRsQYY54ZpeqSMR6
 rOZoz/4LWUx7ePNEi/Fqef8QSCaYnZMczJbDcMQZU5cuoS4/tlu5v7aZow7eJNZmOEZDt0ZL
 9qiiCElz4segscQv0lQ1QCW2mn8znQlo+Nc2+k2Yo5GxlkgDGJdT9b0gWU3FN4ZRGpjcnGPv
 WIfh++V5/0UAJeGmUSlGbtQQunxtq/abGWE3jaD+qXNERz3oxZPmqgAiAyS2W8zappUEdMXS
 BW7VfxtCG97YyLxMP4fj3OZAMU216nwfenYugTvRoMWOPBZLVbflAk3PBL49z29wSAEzPBkU
 b/GIJnEMJrvIfk+pNZAb7xGiuFDK+FX7T67eK0XODz9gOTHOiLKGOxbWLZMB8hghJ65TMzu2
 443H6O3J993C4USvgG/HVYvEG03
IronPort-HdrOrdr: A9a23:lsjL9q4JE9CmtxjY7QPXwPDXdLJyesId70hD6qhwISY6TiX+rb
 HJoB17726NtN9/YhEdcLy7VJVoBEmskKKdgrNhWotKPjOW21dARbsKheCJrgEIWReOktK1vZ
 0QCpSWY+eQMbEVt6nHCXGDYrQd/OU=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74787718"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 17/25] libs/libs.mk: Rework target headers.chk dependencies
Date: Fri, 24 Jun 2022 17:04:14 +0100
Message-ID: <20220624160422.53457-18-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

There is no need to call the "headers.chk" target when it isn't
wanted, so it never needs to be .PHONY.

Also, there is no longer any reason to separate the prerequisites from
the recipe.
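
The resulting shape can be sketched as a fragment (variable names follow
libs.mk, but the surrounding definitions are assumed):

```make
# When NO_HEADERS_CHK=y, headers.chk is simply not a prerequisite of
# "all", so no empty .PHONY stub is needed to satisfy it.
ifneq ($(NO_HEADERS_CHK),y)
all: headers.chk

# Prerequisites now sit on the rule itself, so headers.chk is a real
# file target that only rebuilds when a listed header changes.
headers.chk: $(LIBHEADERS) $(AUTOINCS)
	for i in $(filter %.h,$^); do \
	    $(CC) -x c -ansi -Wall -Werror -S -o /dev/null $$i || exit 1; \
	    echo $$i; \
	done >$@.new
	mv $@.new $@
endif
```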

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/libs.mk | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 7aee449370..f778a7df82 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -55,22 +55,20 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
-all: headers.chk $(TARGETS) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
+all: $(TARGETS) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
 
 ifneq ($(NO_HEADERS_CHK),y)
-headers.chk:
+all: headers.chk
+
+headers.chk: $(LIBHEADERS) $(AUTOINCS)
 	for i in $(filter %.h,$^); do \
 	    $(CC) -x c -ansi -Wall -Werror $(CFLAGS_xeninclude) \
 	          -S -o /dev/null $$i || exit 1; \
 	    echo $$i; \
 	done >$@.new
 	mv $@.new $@
-else
-.PHONY: headers.chk
 endif
 
-headers.chk: $(LIBHEADERS) $(AUTOINCS)
-
 headers.lst: FORCE
 	@{ set -e; $(foreach h,$(LIBHEADERS),echo $(h);) } > $@.tmp
 	@$(call move-if-changed,$@.tmp,$@)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:05:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:05:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355618.583488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnv-0001FZ-UI; Fri, 24 Jun 2022 16:05:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355618.583488; Fri, 24 Jun 2022 16:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lnv-0001Co-It; Fri, 24 Jun 2022 16:05:07 +0000
Received: by outflank-mailman (input) for mailman id 355618;
 Fri, 24 Jun 2022 16:05:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lns-0004qc-Vh
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:05 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 672b381d-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 672b381d-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086703;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=TsefgdfY0tcRhsq+woEvspQFjwDZdlkGaeHqzm9V7cQ=;
  b=FQAm0jRg7yFdGPSEc6W9jwfvmdria3uINiXArUiY8IebA7wfjFCc+s8q
   qB7FWL548LsWMvf9gghLe3/h0gfT3n+K9sHVdF7ZQMMy0aRdpgG4PtUBA
   r1DFaDX4/vn+fVDsiBmY6ce/6fL1ZDVJUxLFnYBnNQLXxEcoFf0qRsrwp
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 73702048
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:PEA8rKkhkzc3Bt2TbsKbwRro5gzZJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIYW2mAa/iLajDwe9lxO9y/9EICsZKAy9FqQFZo+ys8EyMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EgLd9IR2NYy24DnWV/V4
 7senuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYZAhxBOqcudgkAzpDKRxBF/FF/ebYGC3q2SCT5xWun3rExvxvCAc9PJEC+/YxCmZLn
 RAaAGlTNFbZ3bvwme/lDLk37iggBJCD0Ic3s3d8zTbfHLA+TIrKWani7t5ExjYgwMtJGJ4yY
 uJGMmU3NkycM3WjPH8pFYJmnOi6lkDaWCRGulusuJAVs0zqmVkZPL/Fb4OOJ43iqd9utkSXv
 GXd5EziHwoXcteYzFKt7XaEluLJ2yThV+o6BLC+s/JnnlCX7mgSEwENE0u2p+GjjUyzUM4ZL
 FYbkhfCtoBrqhbtFIOkGUTl/jjU5XbwRua8DcUX51m3jfr13z/JJXM+cwFrNN8j7dAPEGlCO
 kCyoz/5OdB+mOTLFCzFrerM8mPa1Ts9djFbO3JdJecRy5y6+dxo0EqSJjp2OPTt5uAZDw0c1
 NxjQMIWo7wIxfAG2Kyglbwsq2L9/8OZJuLZC+i+Y45E0u+aTNT8D2BQwQKHhcus1a7AJrV7g
 FAKmtKF8McFBoyXmSqGTY0lRe/0ua7dYWSD3QY3QPHNEghBHVb5Jei8BxkuTHqFz+5eIWO5C
 KMtkVk5CGBv0IuCMvYsPtPZ5zUCxqn8D9X1Ps3pgi51SsEpLmevpXg2DWbJhjyFuBV8wMkXZ
 MbAGe7xXClyNEiS5GfvLwvr+eRwnX5WKKK6bc2T8ilLJpLENSDMF+taYQDQBg37hYvdyDjoH
 x9kH5Pi431ivCfWOUE7LaZ7wYg2EEUG
IronPort-HdrOrdr: A9a23:UmlaBajZ+Y2OCDQogC9iuRL1MXBQXtwji2hC6mlwRA09TySZ//
 rAoB19726StN9xYgBYpTnuAsi9qB/nmKKdpLNhX4tKPzOW3FdATrsD0WKK+VSJcEfDH6xmpM
 JdmsBFebvN5DNB4/oSjjPVLz9Z+qjlzJyV
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="73702048"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 16/25] libs/libs.mk: Remove the need for $(PKG_CONFIG_INST)
Date: Fri, 24 Jun 2022 17:04:13 +0100
Message-ID: <20220624160422.53457-17-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We can simply use $(PKG_CONFIG) to set the parameters, and add it to
$(TARGETS) as necessary.
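
The mechanism relied on here is GNU make's target-specific variables: a
variable assignment attached to a target applies only while that target
(and its prerequisites) is being built. A minimal sketch, with a
hypothetical file name and value:

```make
# "example.pc" and "/usr/local" are illustrative, not from the tree.
TARGETS += example.pc

# PKG_CONFIG_PREFIX takes this value only within the example.pc rule,
# so one generic recipe can emit different install-time parameters.
example.pc: PKG_CONFIG_PREFIX = /usr/local
example.pc:
	echo "prefix=$(PKG_CONFIG_PREFIX)" > $@
```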

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/libs.mk | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index e02f91f95e..7aee449370 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -37,10 +37,10 @@ PKG_CONFIG_LIB := $(LIB_FILE_NAME)
 PKG_CONFIG_REQPRIV := $(subst $(space),$(comma),$(strip $(foreach lib,$(patsubst ctrl,control,$(USELIBS_$(LIBNAME))),xen$(lib))))
 
 ifneq ($(CONFIG_LIBXC_MINIOS),y)
-PKG_CONFIG_INST := $(PKG_CONFIG)
-$(PKG_CONFIG_INST): PKG_CONFIG_PREFIX = $(prefix)
-$(PKG_CONFIG_INST): PKG_CONFIG_INCDIR = $(includedir)
-$(PKG_CONFIG_INST): PKG_CONFIG_LIBDIR = $(libdir)
+TARGETS += $(PKG_CONFIG)
+$(PKG_CONFIG): PKG_CONFIG_PREFIX = $(prefix)
+$(PKG_CONFIG): PKG_CONFIG_INCDIR = $(includedir)
+$(PKG_CONFIG): PKG_CONFIG_LIBDIR = $(libdir)
 endif
 
 PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
@@ -55,7 +55,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
-all: headers.chk $(TARGETS) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
+all: headers.chk $(TARGETS) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
 
 ifneq ($(NO_HEADERS_CHK),y)
 headers.chk:
@@ -127,7 +127,6 @@ clean::
 	rm -rf $(TARGETS) *~ $(DEPS_RM) $(OBJS-y) $(PIC_OBJS)
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
 	rm -f headers.chk headers.lst
-	rm -f $(PKG_CONFIG)
 
 .PHONY: distclean
 distclean: clean
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:07:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:07:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355637.583506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqN-0005ex-2s; Fri, 24 Jun 2022 16:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355637.583506; Fri, 24 Jun 2022 16:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqM-0005eq-W6; Fri, 24 Jun 2022 16:07:38 +0000
Received: by outflank-mailman (input) for mailman id 355637;
 Fri, 24 Jun 2022 16:07:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YVaO=W7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o4lqM-0005eU-GT
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:07:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3d1d118-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:07:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C7BB41F8BF;
 Fri, 24 Jun 2022 16:07:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9145D13480;
 Fri, 24 Jun 2022 16:07:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id A5s3IUjhtWJDTQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 24 Jun 2022 16:07:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3d1d118-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656086856; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=8wsGpUN1UHHjvwIixiG/ch3Vuvqb7p7EhnrZz6CUSYE=;
	b=QrRyewZrCg6LT4hn5tCGnl9MAX7vPMxkYVG+zpI1aTS5sbpOUnliLWiA2UdkNiE71nQh3F
	alPLupPQSY/RdqfutXxd7gJ5yMtMyaNlTv7kJp4esqU1y0OimrKaT8lPfHaa4IG/0pOaVk
	QGu25m49rZ6c+Tsp/hxfqdJkFraXFVE=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v5.19-rc4
Date: Fri, 24 Jun 2022 18:07:36 +0200
Message-Id: <20220624160736.14606-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc4-tag

xen: branch for v5.19-rc4

It contains the following fixes:

- A rare deadlock in Qubes-OS between the i915 driver and Xen grant
  unmapping, solved by making the unmapping fully asynchronous
- A bug in the Xen blkfront driver caused by incomplete error handling
- A fix for undefined behavior (shifting a signed int by 31 bits)
- A fix in the Xen drmfront driver avoiding a WARN()

Thanks.

Juergen

 drivers/block/xen-blkfront.c            |  19 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c |   2 +-
 drivers/xen/features.c                  |   2 +-
 drivers/xen/gntdev-common.h             |   7 ++
 drivers/xen/gntdev.c                    | 157 +++++++++++++++++++++-----------
 5 files changed, 127 insertions(+), 60 deletions(-)

Demi Marie Obenour (1):
      xen/gntdev: Avoid blocking in unmap_grant_pages()

Jason Andryuk (1):
      xen-blkfront: Handle NULL gendisk

Julien Grall (1):
      x86/xen: Remove undefined behavior in setup_features()

Oleksandr Tyshchenko (1):
      drm/xen: Add missing VM_DONTEXPAND flag in mmap callback


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355647.583522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqx-0006M6-QZ; Fri, 24 Jun 2022 16:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355647.583522; Fri, 24 Jun 2022 16:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqx-0006KY-HT; Fri, 24 Jun 2022 16:08:15 +0000
Received: by outflank-mailman (input) for mailman id 355647;
 Fri, 24 Jun 2022 16:08:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lo3-0004qc-0f
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:15 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c74b4c8-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c74b4c8-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086712;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=pBOCOfGvGT0Sraw3NBsPSjR52CXZ/ChD1FyPAVw+iMU=;
  b=eDYKIzy4K62rJW4cnEeQ+Bs3qsgnt8VpnEMqYoIpRuTxUy7eJm2vGXf3
   B34+3aABU68hK87miM18p2a8JqD7apd0Wy0GvHHbLHoNKtdYSr+yNZm8K
   docpvwKkT6rHENRN9BkpkTyAqKvprcLYeKpqVpTK0yxbg083SMS+5KgQ5
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74362408
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:xUFXnaMa7d7XakbvrR3sl8FynXyQoLVcMsEvi/4bfWQNrUon3z0Hy
 2cYXm+DaKvbYWCkc9xzYI+x80sCuZDcyNExGwto+SlhQUwRpJueD7x1DKtR0wB+jCHnZBg6h
 ynLQoCYdKjYdleF+lH1dOKJQUBUjclkfJKlYAL/En03FFUMpBsJ00o5wbZn2NIw27BVPivW0
 T/Mi5yHULOa82Yc3lI8s8pvfzs24ZweEBtB1rAPTagjUG32zhH5P7pGTU2FFFPqQ5E8IwKPb
 72rIIdVXI/u10xF5tuNyt4Xe6CRK1LYFVDmZnF+A8BOjvXez8CbP2lS2Pc0MC9qZzu1c99Zy
 95z9qWxVwAQMPPFyOI6Y0ECLCRDIvgTkFPHCSDXXc27ykTHdz3nwul0DVFwNoodkgp1KTgQr
 7pCcmlLN03dwbLtqF64YrAEasALJc/3PIQZqzd4wCvQF/oOSpHfWaTao9Rf2V/cg+gRQa2AP
 ZZBOVKDajyDZTduAU5MAa4ds+eF20TbYRNGkWmK8P9fD2/7k1UqjemF3MDuUsOObdVYmACfv
 G2u13T0BFQWOcKSzRKB82mwnanfkCXjQoUQGbaksPlwjzWuKnc7UUNMEwHh+L/g1xD4C4k3x
 1EoFjQGrqMMt3WqUN7EUUOx8HijjkZGZN9tDLhvgO2S8ZY48zp1F0BdEGMfMId77JBmLdA5/
 gTXxo20XFSDpJXQECvArenM8FteLABPdQc/iTk4oRzpCjUJiKU6lVrxQ9lqC8ZZZfWlSGirk
 1hmQMXT7oj/bPLnNI3hpDgrexr2+vD0ovcdv207pF6N4AJjf5KCbIe181Xd5vsoBN/HEwfZ5
 CldxJTGtL9m4XSxeMqlGr1l8FaBt5643MD02wYzT/HNCRz3k5JcQWygyG4nfxo4Wir1UTTof
 FXSqWts2XOnB1PzNfUfS9voU6wClPG8ffy4BqG8RocfOfBZKV7YlByCkGbNhggBZmB3yvphU
 XpaGO7xZUsn5VNPlmvoHrlBju5wmEjTBwr7HPjG8vhu6pLGDFb9dFvPGAHmgjwRhE9cnDjoz
 g==
IronPort-HdrOrdr: A9a23:jbA+uqnjoAlil2h8e4qYsLG2plbpDfIq3DAbv31ZSRFFG/Fxl6
 iV88jzsiWE7wr5OUtQ4OxoV5PgfZqxz/NICMwqTNWftWrdyQ+VxeNZjbcKqgeIc0aVygce79
 YET0EXMqyXMbEQt6jHCWeDf+rIuOP3k5yVuQ==
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74362408"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 21/25] tools/helper: Cleanup Makefile
Date: Fri, 24 Jun 2022 17:04:18 +0100
Message-ID: <20220624160422.53457-22-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Use $(TARGETS) to collect build targets.
Collect the libraries to link against in $(LDLIBS).
Remove the extra "-f" flag, which is already part of $(RM).

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v3:
    - apply changes to the new "init-dom0less" helper.
    - make use of the new macro $(xenlibs-ldlibs,)

 tools/helpers/Makefile | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index 8d78ab1e90..7c9d671b32 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -5,13 +5,13 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-PROGS += xen-init-dom0
+TARGETS += xen-init-dom0
 ifeq ($(CONFIG_Linux),y)
 ifeq ($(CONFIG_X86),y)
-PROGS += init-xenstore-domain
+TARGETS += init-xenstore-domain
 endif
 ifeq ($(CONFIG_ARM),y)
-PROGS += init-dom0less
+TARGETS += init-dom0less
 endif
 endif
 
@@ -20,6 +20,7 @@ $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenstore)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
+xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,ctrl toollog store light)
 
 INIT_XENSTORE_DOMAIN_OBJS = init-xenstore-domain.o init-dom-json.o
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
@@ -28,6 +29,7 @@ $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenstore)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
+init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,toollog store ctrl guest light)
 
 INIT_DOM0LESS_OBJS = init-dom0less.o init-dom-json.o
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
@@ -35,30 +37,31 @@ $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenstore)
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenevtchn)
+init-dom0less: LDLIBS += $(call xenlibs-ldlibs,ctrl evtchn toollog store light guest foreignmemory)
 
 .PHONY: all
-all: $(PROGS)
+all: $(TARGETS)
 
 xen-init-dom0: $(XEN_INIT_DOM0_OBJS)
-	$(CC) $(LDFLAGS) -o $@ $(XEN_INIT_DOM0_OBJS) $(LDLIBS_libxenctrl) $(LDLIBS_libxentoollog) $(LDLIBS_libxenstore) $(LDLIBS_libxenlight) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(XEN_INIT_DOM0_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 init-xenstore-domain: $(INIT_XENSTORE_DOMAIN_OBJS)
-	$(CC) $(LDFLAGS) -o $@ $(INIT_XENSTORE_DOMAIN_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenlight) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(INIT_XENSTORE_DOMAIN_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 init-dom0less: $(INIT_DOM0LESS_OBJS)
-	$(CC) $(LDFLAGS) -o $@ $(INIT_DOM0LESS_OBJS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenevtchn) $(LDLIBS_libxentoollog) $(LDLIBS_libxenstore) $(LDLIBS_libxenlight) $(LDLIBS_libxenguest) $(LDLIBS_libxenforeignmemory) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(INIT_DOM0LESS_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-	for i in $(PROGS); do $(INSTALL_PROG) $$i $(DESTDIR)$(LIBEXEC_BIN); done
+	for i in $(TARGETS); do $(INSTALL_PROG) $$i $(DESTDIR)$(LIBEXEC_BIN); done
 
 .PHONY: uninstall
 uninstall:
-	for i in $(PROGS); do rm -f $(DESTDIR)$(LIBEXEC_BIN)/$$i; done
+	for i in $(TARGETS); do rm -f $(DESTDIR)$(LIBEXEC_BIN)/$$i; done
 
 .PHONY: clean
 clean:
-	$(RM) -f *.o $(PROGS) $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 distclean: clean
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355646.583517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqx-0006Ju-Ed; Fri, 24 Jun 2022 16:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355646.583517; Fri, 24 Jun 2022 16:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqx-0006Jh-9S; Fri, 24 Jun 2022 16:08:15 +0000
Received: by outflank-mailman (input) for mailman id 355646;
 Fri, 24 Jun 2022 16:08:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnl-0004qb-SO
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:57 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6362446f-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6362446f-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086696;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=UgBJmLW71oTGL7rE/nyRfmo9v19OgU/6PqXrnkS62aw=;
  b=FnlATABxyLu0/K1Qn4PuBXCjH5Ted+500K5KXo1FJWvvRD8uNviOHaeb
   mMsF1VxRoM1yqpnXMeIVAv+bPk3R+LyCTwy0Pq87JP7FlPJFfZipC6iAM
   vsFv0oD/+DYVj2YcNbHsposWmQNr+eNk4FX3RV8BSTx1H6rw1rk5NhuKa
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76935332
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:q8tK2K2q6N2S/JHIoPbD5ddxkn2cJEfYwER7XKvMYLTBsI5bpzYGn
 DAeDW3Ua/6JZ2Wged4iO4W/o0IO65HQzdVhGQc6pC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy3YDja++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /1AhbivRTcsGpb2p+gjCUF5DzAjbJJJreqvzXiX6aR/zmXDenrohf5vEFs3LcsT/eMf7WNmr
 KJCbmpXN1ba2rzwkOnTpupE36zPKOHiOp8fvXdxiynUF/88TbjIQrnQ5M8e1zA17ixLNamAN
 pFEMmE1BPjGSwQXN20JLLc+oPeXrULcfCN0hF6Ip6VitgA/yyQuieOwYbI5YOeiSd1Om0eEp
 krP52njHgwBL9ub1CaE9XS3wOTImEvTR4Y6BLC+sPlwjzW71mEVTREbS1a/if24kVKlHcJSL
 VQO/SgjprR081akJvHxUBG1r2SNlgINUNpXVesh4UeCzbS83uqCLjFaFHgbMoVg7ZJoA2xxv
 rOUoz/3LTFflKKZeXe5zY2roQ3oYQkJPDJTWiBRGGPp/OLfTJEPYgPnF4g+Tvbu04WqSVkc0
 BjR8nFg2ux7YdojkvzioAuZ22/ESo3hFFZd2+nBYo6yAuqVjqaBbpfg11XU5O0owG2xHgjY5
 yhsdyRzAYkz4XCxeM+lGrxl8EmBvartDdElqQcH82Md3zqs4WW/Wotb/StzIkxkWu5dJ2K3O
 BeC4FwNvMcMVJdPUUORS9jpYyjN5fiIKDgYfqqMMoomjmZZLmdrAx2ClWbPhjuwwSDAYIk0O
 IuBcNbEMEv2/Z9PlWLsL89EiOdD7nlnmQv7GMCqpzz6gOH2TCPEFt843K6mM7lRAFWs+16Or
 b6y9qKiln1ibQEJSnOGr9dNcQ9bdiZT6FKfg5U/S9Nv6zFOQAkJY8I9C5t+E2C5t8y5Ttv1w
 0w=
IronPort-HdrOrdr: A9a23:4N5mOqjyY5S2aKl1zz0ybWr4EHBQXtwji2hC6mlwRA09TySZ//
 rAoB19726StN9xYgBYpTnuAsi9qB/nmKKdpLNhX4tKPzOW3FdATrsD0WKK+VSJcEfDH6xmpM
 JdmsBFebvN5DNB4/oSjjPVLz9Z+qjlzJyV
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="76935332"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 14/25] tools/flask/utils: list build targets in $(TARGETS)
Date: Fri, 24 Jun 2022 17:04:11 +0100
Message-ID: <20220624160422.53457-15-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/flask/utils/Makefile | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/flask/utils/Makefile b/tools/flask/utils/Makefile
index db567b13dc..6be134142a 100644
--- a/tools/flask/utils/Makefile
+++ b/tools/flask/utils/Makefile
@@ -4,10 +4,10 @@ include $(XEN_ROOT)/tools/Rules.mk
 CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 
-CLIENTS := flask-loadpolicy flask-setenforce flask-getenforce flask-label-pci flask-get-bool flask-set-bool
+TARGETS := flask-loadpolicy flask-setenforce flask-getenforce flask-label-pci flask-get-bool flask-set-bool
 
 .PHONY: all
-all: $(CLIENTS)
+all: $(TARGETS)
 
 flask-loadpolicy: loadpolicy.o
 	$(CC) $(LDFLAGS) $< $(LDLIBS) $(LDLIBS_libxenctrl) -o $@
@@ -29,7 +29,7 @@ flask-set-bool: set-bool.o
 
 .PHONY: clean
 clean:
-	$(RM) *.o $(CLIENTS) $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
@@ -37,10 +37,10 @@ distclean: clean
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
-	$(INSTALL_PROG) $(CLIENTS) $(DESTDIR)$(sbindir)
+	$(INSTALL_PROG) $(TARGETS) $(DESTDIR)$(sbindir)
 
 .PHONY: uninstall
 uninstall:
-	rm -f $(addprefix $(DESTDIR)$(sbindir)/, $(CLIENTS))
+	rm -f $(addprefix $(DESTDIR)$(sbindir)/, $(TARGETS))
 
 -include $(DEPS_INCLUDE)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355652.583538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqz-0006lC-6O; Fri, 24 Jun 2022 16:08:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355652.583538; Fri, 24 Jun 2022 16:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lqy-0006jo-RX; Fri, 24 Jun 2022 16:08:16 +0000
Received: by outflank-mailman (input) for mailman id 355652;
 Fri, 24 Jun 2022 16:08:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lo5-0004qc-1Q
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:17 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6e40166c-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e40166c-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086713;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=S8/hwL2N26ebpBrr2BTp9NQzqw/JJLwRIQMFuynDLXQ=;
  b=ftg7FbNNJxiLX+Ax7R0/z9W6m6sU52764kHALcCqXgIq05akE6oCxsnl
   qLH04GugxvJm4qH4NLD/Rj85iErbb62IhmvZUFqzJNSk7uFgtPemcVxE2
   6gwtEQfrSOcvp/p8RUT2U7aTVuXkPzyrW39gok9yFhelFyJpBkjnxotbA
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74362409
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:sz7SNqIpVG7lSwGMFE+RmJUlxSXFcZb7ZxGr2PjKsXjdYENS0WYDm
 jcZXzzUMvqMYTemKdB1PYzn809XsMTSn4I2TFZlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA14+IMsdoUg7wbRh3NQ02YLR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 MccrL3tWVskBaSWv+ItAyF3FyhwA6ITrdcrIVDn2SCS50jPcn+qyPRyFkAme4Yf/46bA0kXq
 6ZecmpUKEne2aTmm9pXScE17ignBMDtIIMYvGAm1TzDBOwqaZvCX7/L9ZlT2zJYasVmQqqFO
 ZBFMWsHgBLoP0xhZUoNMrMFxb2rjWn8MCVig2qujP9ii4TU5FMoi+W8WDbPQfSVQe1Fk0Deo
 XjJl0zpDxdfONGBxD6t9nO3mvSJjS79QJgVFrCz6rhtmlL7+4AIIERIDx3h+6D/0xPgHYIEQ
 6AJxsYwhbpj7W32XoXwZBTih3i07iYzX9NeLeJvvWlh1ZHoDxal6nksF2AcNoR96ZdpFVTGx
 XfSwYq3WGUHXKm9DCvEq+zK9W7a1T09dzdqWMMScecSDzAPSqkXhwmHcNtsGbXdYjbdSWCpm
 GDiQMTTatwuYS83O0aTpwmvb8qE/MShc+LMzly/spiZxg14fpW5QIej9ELW6/1NRK7AEATf5
 CBVwpfCtLhRZX1oqMBraL9VdF1Oz6btDdEhqQQ3Q8lJG8qFoRZPgry8EBkhfRw0Y67oiBfiY
 VPJuBM52aK/yECCNPctC6roUpxC5fG5SbzNC6CFBvITM8MZXFLWo0lTibu4gjmFfL4EyvpkZ
 /92sK+EUB4nNEiQ5GDnGb5DjeB6nX5WKKG6bcmT8ilLGIG2PBa9IYrp+nPUBgzlxMtoeDnoz
 us=
IronPort-HdrOrdr: A9a23:az2aT60Y5gbwrYYEgUr6DAqjBLQkLtp133Aq2lEZdPRUGvb2qy
 nIpoV96faUskdpZJhOo7G90cW7LE80sKQFg7X5Xo3SODUO2lHJEGgK1+KLqFfd8m/Fh4tgPM
 9bAs5D4bbLY2SS4/yX3ODBKadC/OW6
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74362409"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 22/25] tools/console: Use $(xenlibs-ldlibs,)
Date: Fri, 24 Jun 2022 17:04:19 +0100
Message-ID: <20220624160422.53457-23-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/console/client/Makefile | 3 +--
 tools/console/daemon/Makefile | 6 +-----
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index 44176c6d93..e2f2554f92 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -6,8 +6,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
-LDLIBS += $(LDLIBS_libxenctrl)
-LDLIBS += $(LDLIBS_libxenstore)
+LDLIBS += $(call xenlibs-ldlibs,ctrl store)
 LDLIBS += $(SOCKET_LIBS)
 
 OBJS-y := main.o
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index 0f004f0b14..99bb33b6a2 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -10,11 +10,7 @@ CFLAGS += $(CFLAGS_libxenforeignmemory)
 CFLAGS-$(CONFIG_ARM) += -DCONFIG_ARM
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
-LDLIBS += $(LDLIBS_libxenctrl)
-LDLIBS += $(LDLIBS_libxenstore)
-LDLIBS += $(LDLIBS_libxenevtchn)
-LDLIBS += $(LDLIBS_libxengnttab)
-LDLIBS += $(LDLIBS_libxenforeignmemory)
+LDLIBS += $(call xenlibs-ldlibs,ctrl store evtchn gnttab foreignmemory)
 LDLIBS += $(SOCKET_LIBS)
 LDLIBS += $(UTIL_LIBS)
 LDLIBS += -lrt
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355662.583550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lr2-0007CW-GD; Fri, 24 Jun 2022 16:08:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355662.583550; Fri, 24 Jun 2022 16:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lr2-0007CA-Bg; Fri, 24 Jun 2022 16:08:20 +0000
Received: by outflank-mailman (input) for mailman id 355662;
 Fri, 24 Jun 2022 16:08:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnv-0004qc-W5
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:08 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68d89563-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68d89563-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086706;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=tyXg2EhFGm7DWQjDWYVLoXwH1ePwX2W5U//FqRQR1+A=;
  b=XQaczMrdsZsx5bP5lEnnKI9U/XMvImfvjCr0/Rjm1e9SInZGwlkKfNx3
   7bP4n+W6f9b3e6DgZ1e6istOZDO7k34WYiwj/YQZ1kzKxTPJiEkYgua1a
   1GEMoH+/MPNFO4vOZWqX9UWUmjFf5QTRhcq5L2YbeY0CByYJSfAO+mPqU
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74208140
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:CNulR6nu2NgbS3LUXnvdqdro5gzZJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIXX27SOPmCZDf9KYp1a43k8E1T6MOBndRgSwM/pH80FCMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EgLd9IR2NYy24DnWV/V4
 7senuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYbCgnGrbR38okDUdfHBtUZL188+/VPi3q2SCT5xWun3rExvxvCAc9PJEC+/YxCmZLn
 RAaAGlTNFbZ3bvwme/lDLk37iggBJCD0Ic3s3d8zTbfHLA+TIrKWani7t5ExjYgwMtJGJ4yY
 uJGNWA3PE2cO3WjPH9NIsIRm9yunUDkLSRhhXy+v/QSz1PqmVkZPL/Fb4OOJ43iqd9utkSXv
 GXd5EziHwoXcteYzFKt7XaEluLJ2yThV+o6BLC+s/JnnlCX7mgSEwENE0u2p+GjjUyzUM4ZL
 FYbkhfCtoBrqhbtFIOkGUTl/jjU5XbwRua8DcUfxw+p0beTzT/HWG8LdBQYeOYt5N0pEGlCO
 kCyoz/5OdB+mOTLFCzFrerM8mPa1Ts9djFbO3JdJecRy5y6+dxo0EqSJjp2OPTt5uAZDw0c1
 NxjQMIWo7wIxfAG2Kyglbwsq2L9/8OZJuLZC+i+Y45E0u+aTNT8D2BQwQKHhcus1a7AJrV7g
 FAKmtKF8McFBoyXmSqGTY0lRe/0ua7dYWSD3QY3QPHNEghBHVb5Jei8BxkuTHqFz+5eIWO5C
 KMtkVk5CGBv0IuCMvYsPtPZ5zUCxqn8D9X1Ps3pgi51SsEpLmevpXg2DWbJhjyFuBV8wMkXZ
 MbAGe7xXClyNEiS5GfvLwvr+eRwnX5WKKK6bc2T8ilLJpLENSDMF+taYQDQBg37hYvdyDjoH
 x9kH5Pi431ivCfWOEE7LaZ7wYg2EEUG
IronPort-HdrOrdr: A9a23:Hc1K86lMkwF9Eu8TzpxlfpyZgoXpDfIU3DAbv31ZSRFFG/Fxl6
 iV8sjzsiWE7gr5OUtQ4exoV5PhfZqxz/JICMwqTNKftWrdyQyVxeNZnOjfKlTbckWUnINgPO
 VbAsxD4bXLfCFHZK3BgTVQfexO/DD+ytHLudvj
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74208140"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 18/25] tools: Introduce $(xenlibs-rpath,..) to replace $(SHDEPS_lib*)
Date: Fri, 24 Jun 2022 17:04:15 +0100
Message-ID: <20220624160422.53457-19-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This patch introduces a new macro, $(xenlibs-dependencies,), to generate
the list of all the Xen libraries that a library is linked against,
directly or indirectly, with each library listed only once. We rely on
the side effect of $(sort ), which removes duplicates.

This is used by another macro, $(xenlibs-rpath,), which replaces
$(SHDEPS_libxen*).

In libs.mk, we no longer need to $(sort ) SHLIB_lib*, as that was only
done to remove duplicates and there are no duplicates anymore.
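
The recursion-plus-$(sort) behaviour can be checked in isolation with a
toy dependency graph (the library names and USELIBS_* values below are
made up for illustration; only the macro body matches the patch):

```shell
# Build a throwaway makefile exercising the recursive dedup macro.
cat > /tmp/dep-demo.mk <<'EOF'
# Hypothetical dependency graph: light uses ctrl and store,
# ctrl uses toollog and call, store uses toollog.
USELIBS_light := ctrl store
USELIBS_ctrl := toollog call
USELIBS_store := toollog
USELIBS_toollog :=
USELIBS_call :=

# Macro body as in the patch: recurse through USELIBS_*, then
# let $(sort ) collapse the duplicate "toollog" entries.
define xenlibs-dependencies
    $(sort $(foreach lib,$(1), \
        $(USELIBS_$(lib)) $(call xenlibs-dependencies,$(USELIBS_$(lib)))))
endef

all:
	@echo "$(strip $(call xenlibs-dependencies,light))"
EOF
make -s -f /tmp/dep-demo.mk
# toollog appears once despite being reachable via both ctrl and store
```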

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/Rules.mk     | 29 ++++++++++++++++-------------
 tools/libs/libs.mk |  2 +-
 2 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 47424935ba..23979ed254 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -61,13 +61,8 @@ endif
 #           public headers. Users of libfoo are therefore transitively
 #           using libbaz's header but not linking against libbaz.
 #
-# SHDEPS_libfoo: Flags for linking recursive dependencies of
-#                libfoo. Must contain SHLIB for every library which
-#                libfoo links against. So must contain both
-#                $(SHLIB_libbar) and $(SHLIB_libbaz).
-#
 # SHLIB_libfoo: Flags for recursively linking against libfoo. Must
-#               contains SHDEPS_libfoo and:
+#               contains $(call xenlibs-rpath,foo) and:
 #                   -Wl,-rpath-link=<directory containing libfoo.so>
 #
 # CFLAGS_libfoo: Flags for compiling against libfoo. Must add the
@@ -79,23 +74,31 @@ endif
 #                libfoo.
 #
 # LDLIBS_libfoo: Flags for linking against libfoo. Must contain
-#                $(SHDEPS_libfoo) and the path to libfoo.so
+#                $(call xenlibs-rpath,foo) and the path to libfoo.so
 #
 # Consumers of libfoo should include $(CFLAGS_libfoo) and
 # $(LDLIBS_libfoo) in their appropriate directories. They should not
 # include any CFLAGS or LDLIBS relating to libbar or libbaz unless
 # they use those libraries directly (not via libfoo) too.
-#
-# Consumers of libfoo should not directly use $(SHDEPS_libfoo) or
-# $(SHLIB_libfoo)
+
+# Give the list of Xen library that the libraries in $(1) are linked against,
+# directly or indirectly.
+define xenlibs-dependencies
+    $(sort $(foreach lib,$(1), \
+        $(USELIBS_$(lib)) $(call xenlibs-dependencies,$(USELIBS_$(lib)))))
+endef
+
+# Flags for linking recursive dependencies of Xen libraries in $(1)
+define xenlibs-rpath
+    $(addprefix -Wl$(comma)-rpath-link=$(XEN_ROOT)/tools/libs/,$(call xenlibs-dependencies,$(1)))
+endef
 
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
  CFLAGS_libxen$(1) = $$(CFLAGS_xeninclude)
- SHDEPS_libxen$(1) = $$(foreach use,$$(USELIBS_$(1)),$$(SHLIB_libxen$$(use)))
- LDLIBS_libxen$(1) = $$(SHDEPS_libxen$(1)) $$(XEN_libxen$(1))/lib$$(FILENAME_$(1))$$(libextension)
- SHLIB_libxen$(1) = $$(SHDEPS_libxen$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
+ SHLIB_libxen$(1) = $$(call xenlibs-rpath,$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
+ LDLIBS_libxen$(1) = $$(call xenlibs-rpath,$(1)) $$(XEN_libxen$(1))/lib$$(FILENAME_$(1))$$(libextension)
 endef
 
 $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index f778a7df82..d7e1274249 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -32,7 +32,7 @@ PKG_CONFIG ?= $(LIB_FILE_NAME).pc
 PKG_CONFIG_NAME ?= Xen$(LIBNAME)
 PKG_CONFIG_DESC ?= The $(PKG_CONFIG_NAME) library for Xen hypervisor
 PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
-PKG_CONFIG_USELIBS := $(sort $(SHLIB_libxen$(LIBNAME)))
+PKG_CONFIG_USELIBS := $(SHLIB_libxen$(LIBNAME))
 PKG_CONFIG_LIB := $(LIB_FILE_NAME)
 PKG_CONFIG_REQPRIV := $(subst $(space),$(comma),$(strip $(foreach lib,$(patsubst ctrl,control,$(USELIBS_$(LIBNAME))),xen$(lib))))
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355682.583561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrF-0007ue-RW; Fri, 24 Jun 2022 16:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355682.583561; Fri, 24 Jun 2022 16:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrF-0007uX-Nw; Fri, 24 Jun 2022 16:08:33 +0000
Received: by outflank-mailman (input) for mailman id 355682;
 Fri, 24 Jun 2022 16:08:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnz-0004qc-0I
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ad30624-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ad30624-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086709;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=RMuLHf57LSFQbWtEVhf+qOkim9vFt3YxGp2RAYL9V7U=;
  b=VRRJ8BjhybhMdwGuj+Q+zknGXs1HssrvVUWjWCZvrShPV3zKMgEXl7Cs
   uebxJAiD/XM1XJB5RZWoga21ts7ADR6wHf5VXIOaslV6a7GNGyER7P1um
   ka5yKewmo3BC7c1pFjD6L7BhUpvOxQY43F8xEf4HQ/TMbzRdjR2WaURwH
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74384231
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Nick
 Rosbrook" <rosbrookn@gmail.com>
Subject: [XEN PATCH v3 20/25] tools: Introduce $(xenlibs-ldflags, ) macro
Date: Fri, 24 Jun 2022 17:04:17 +0100
Message-ID: <20220624160422.53457-21-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This avoids the need to open-code the list of flags required to link
against an in-tree Xen library when using -lxen*.
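The macro itself is Make, but its expansion is easy to sketch: for each
requested library it emits one -L pointing at that library's build
directory, plus one -Wl,-rpath-link per transitive dependency (from
$(xenlibs-rpath,)). A minimal Python mock-up, with a hypothetical tree
location and dependency list:

```python
XEN_ROOT = "/path/to/xen"  # hypothetical; the real value comes from Make

def xenlibs_ldflags(libs, transitive_deps):
    """Mirror $(xenlibs-ldflags,): rpath-link flags for every transitive
    dependency, then one -L per directly requested library."""
    rpath = [f"-Wl,-rpath-link={XEN_ROOT}/tools/libs/{d}"
             for d in transitive_deps]
    ldirs = [f"-L{XEN_ROOT}/tools/libs/{l}" for l in libs]
    return rpath + ldirs

print(" ".join(xenlibs_ldflags(["light", "toollog"], ["ctrl"])))
```

With these flags on the link line, "-lxenlight -lxentoollog" resolves
against the in-tree builds, as in the xenlight Go Makefile change below.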

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/Makefile | 2 +-
 tools/Rules.mk                 | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 64671f246c..00e6d17f2b 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -27,7 +27,7 @@ GOXL_GEN_FILES = types.gen.go helpers.gen.go
 # so that it can find the actual library.
 .PHONY: build
 build: xenlight.go $(GOXL_GEN_FILES)
-	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_libxenlight) -L$(XEN_libxentoollog) $(APPEND_LDFLAGS)" $(GO) build -x
+	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(call xenlibs-ldflags,light toollog) $(APPEND_LDFLAGS)" $(GO) build -x
 
 .PHONY: install
 install: build
diff --git a/tools/Rules.mk b/tools/Rules.mk
index ce77dd2eb1..26958b2948 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -105,6 +105,14 @@ define xenlibs-ldlibs
     $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
 endef
 
+# Provide needed flags for linking an in-tree Xen library by an external
+# project (or when it is necessary to link with "-lxen$(1)" instead of using
+# the full path to the library).
+define xenlibs-ldflags
+    $(call xenlibs-rpath,$(1)) \
+    $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355686.583572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrJ-0008MF-AX; Fri, 24 Jun 2022 16:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355686.583572; Fri, 24 Jun 2022 16:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrJ-0008Ly-6m; Fri, 24 Jun 2022 16:08:37 +0000
Received: by outflank-mailman (input) for mailman id 355686;
 Fri, 24 Jun 2022 16:08:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4loG-0004qc-3F
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:28 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 712368b8-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 712368b8-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086718;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=3uym4CH2Pdfd2UaOFcD0mM6SQZlHE9RUzweK8v6Cy2k=;
  b=KH0hGnwEH9EI3GnVBbxf4Bgm2ySRK/OU98Ok48vZ4U3SI7UskgYu0qnY
   AwmssSlnfBsiziBl8WF73obUgK55ul22mNrdI6EkxIDmfuqJ+gKnEjzfE
   innKX3w9jhuUUD4tqep6lPGnecyzEHY9lXN74H7oNQlzteZxVsJU/drgr
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74362426
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Elena
 Ufimtseva" <elena.ufimtseva@oracle.com>, Tim Deegan <tim@xen.org>, "Daniel De
 Graaf" <dgdegra@tycho.nsa.gov>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Subject: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Date: Fri, 24 Jun 2022 17:04:22 +0100
Message-ID: <20220624160422.53457-26-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The patch "tools: Add -Werror by default to all tools/" added
"-Werror" to CFLAGS in tools/Rules.mk; remove it from every other
makefile, as it is now duplicated.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/console/client/Makefile   | 1 -
 tools/console/daemon/Makefile   | 1 -
 tools/debugger/kdd/Makefile     | 1 -
 tools/flask/utils/Makefile      | 1 -
 tools/fuzz/cpu-policy/Makefile  | 2 +-
 tools/misc/Makefile             | 1 -
 tools/tests/cpu-policy/Makefile | 2 +-
 tools/tests/depriv/Makefile     | 2 +-
 tools/tests/resource/Makefile   | 1 -
 tools/tests/tsx/Makefile        | 1 -
 tools/tests/xenstore/Makefile   | 1 -
 tools/xcutils/Makefile          | 2 --
 tools/xenmon/Makefile           | 1 -
 tools/xenpaging/Makefile        | 1 -
 tools/xenpmd/Makefile           | 1 -
 tools/xentop/Makefile           | 2 +-
 tools/xentrace/Makefile         | 2 --
 tools/xl/Makefile               | 2 +-
 tools/debugger/gdbsx/Rules.mk   | 2 +-
 tools/firmware/Rules.mk         | 2 --
 tools/libfsimage/common.mk      | 2 +-
 tools/libs/libs.mk              | 2 +-
 tools/ocaml/common.make         | 2 +-
 tools/xenstore/Makefile.common  | 1 -
 24 files changed, 9 insertions(+), 27 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index e2f2554f92..62d89fdeb9 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index 99bb33b6a2..9fc3b6711f 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/debugger/kdd/Makefile b/tools/debugger/kdd/Makefile
index 26116949d4..a72ad3b1e0 100644
--- a/tools/debugger/kdd/Makefile
+++ b/tools/debugger/kdd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenctrl)
 CFLAGS  += -DXC_WANT_COMPAT_MAP_FOREIGN_API
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/flask/utils/Makefile b/tools/flask/utils/Makefile
index 6be134142a..88d7edb6b1 100644
--- a/tools/flask/utils/Makefile
+++ b/tools/flask/utils/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 
 TARGETS := flask-loadpolicy flask-setenforce flask-getenforce flask-label-pci flask-get-bool flask-set-bool
diff --git a/tools/fuzz/cpu-policy/Makefile b/tools/fuzz/cpu-policy/Makefile
index 41a2230408..6e7743e0aa 100644
--- a/tools/fuzz/cpu-policy/Makefile
+++ b/tools/fuzz/cpu-policy/Makefile
@@ -17,7 +17,7 @@ install: all
 
 .PHONY: uninstall
 
-CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude) -D__XEN_TOOLS__
 CFLAGS += $(APPEND_CFLAGS) -Og
 
 vpath %.c ../../../xen/lib/x86
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 0e02401227..1c6e1d6a04 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 93af9d76fa..c5b81afc71 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -36,7 +36,7 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/tests/depriv/Makefile b/tools/tests/depriv/Makefile
index 3cba28da25..7d9e3b01bb 100644
--- a/tools/tests/depriv/Makefile
+++ b/tools/tests/depriv/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-declaration-after-statement
+CFLAGS += -Wno-declaration-after-statement
 
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index b3cd70c06d..a5856bf095 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenforeginmemory)
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
index d7d2a5d95e..a4f516b725 100644
--- a/tools/tests/tsx/Makefile
+++ b/tools/tests/tsx/Makefile
@@ -26,7 +26,6 @@ uninstall:
 .PHONY: uninstall
 uninstall:
 
-CFLAGS += -Werror
 CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/xenstore/Makefile b/tools/tests/xenstore/Makefile
index 239e1dce47..202dda0d3c 100644
--- a/tools/tests/xenstore/Makefile
+++ b/tools/tests/xenstore/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/xcutils/Makefile b/tools/xcutils/Makefile
index e40a2c4bfa..3687f6cd8f 100644
--- a/tools/xcutils/Makefile
+++ b/tools/xcutils/Makefile
@@ -13,8 +13,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 TARGETS := readnotes lsevtchn
 
-CFLAGS += -Werror
-
 CFLAGS_readnotes.o  := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
 CFLAGS_lsevtchn.o   := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl)
 
diff --git a/tools/xenmon/Makefile b/tools/xenmon/Makefile
index 3e150b0659..679c4b41a3 100644
--- a/tools/xenmon/Makefile
+++ b/tools/xenmon/Makefile
@@ -13,7 +13,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenevtchn)
 CFLAGS  += $(CFLAGS_libxenctrl)
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index e2ed9eaa3f..835cf2b965 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -12,7 +12,6 @@ OBJS-y   += xenpaging.o
 OBJS-y   += policy_$(POLICY).o
 OBJS-y   += pagein.o
 
-CFLAGS   += -Werror
 CFLAGS   += -Wno-unused
 
 TARGETS := xenpaging
diff --git a/tools/xenpmd/Makefile b/tools/xenpmd/Makefile
index e0d3f06ab2..8da20510b5 100644
--- a/tools/xenpmd/Makefile
+++ b/tools/xenpmd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 
 LDLIBS += $(LDLIBS_libxenstore)
diff --git a/tools/xentop/Makefile b/tools/xentop/Makefile
index 7bd96f34d5..70cc2211c5 100644
--- a/tools/xentop/Makefile
+++ b/tools/xentop/Makefile
@@ -13,7 +13,7 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -DGCC_PRINTF -Werror $(CFLAGS_libxenstat)
+CFLAGS += -DGCC_PRINTF $(CFLAGS_libxenstat)
 LDLIBS += $(LDLIBS_libxenstat) $(CURSES_LIBS) $(TINFO_LIBS) $(SOCKET_LIBS) -lm
 CFLAGS += -DHOST_$(XEN_OS)
 
diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 0995fa9203..b188ba70d6 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -1,8 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
-
 CFLAGS += $(CFLAGS_libxenevtchn)
 CFLAGS += $(CFLAGS_libxenctrl)
 LDLIBS += $(LDLIBS_libxenevtchn)
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index b7f439121a..5f7aa5f46c 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
+CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
 CFLAGS += -fPIC
 
diff --git a/tools/debugger/gdbsx/Rules.mk b/tools/debugger/gdbsx/Rules.mk
index 920f1c87fb..0610db873b 100644
--- a/tools/debugger/gdbsx/Rules.mk
+++ b/tools/debugger/gdbsx/Rules.mk
@@ -1,6 +1,6 @@
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS   += -Werror -Wmissing-prototypes 
+CFLAGS   += -Wmissing-prototypes 
 # (gcc 4.3x and later)   -Wconversion -Wno-sign-conversion
 
 CFLAGS-$(clang) += -Wno-ignored-attributes
diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
index 278cca01e4..d3482c9ec4 100644
--- a/tools/firmware/Rules.mk
+++ b/tools/firmware/Rules.mk
@@ -11,8 +11,6 @@ ifneq ($(debug),y)
 CFLAGS += -DNDEBUG
 endif
 
-CFLAGS += -Werror
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 $(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
diff --git a/tools/libfsimage/common.mk b/tools/libfsimage/common.mk
index 77bc957f27..4fc8c66795 100644
--- a/tools/libfsimage/common.mk
+++ b/tools/libfsimage/common.mk
@@ -2,7 +2,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 FSDIR := $(libdir)/xenfsimage
 CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"$(FSDIR)\"
-CFLAGS += -Werror -D_GNU_SOURCE
+CFLAGS += -D_GNU_SOURCE
 LDFLAGS += -L../common/
 
 PIC_OBJS = $(patsubst %.c,%.opic,$(LIB_SRCS-y))
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 2b8e7a6128..e47fb30ed4 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -14,7 +14,7 @@ MINOR ?= 0
 
 SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
 
-CFLAGS   += -Werror -Wmissing-prototypes
+CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
diff --git a/tools/ocaml/common.make b/tools/ocaml/common.make
index d5478f626f..0c8a597d5b 100644
--- a/tools/ocaml/common.make
+++ b/tools/ocaml/common.make
@@ -9,7 +9,7 @@ OCAMLLEX ?= ocamllex
 OCAMLYACC ?= ocamlyacc
 OCAMLFIND ?= ocamlfind
 
-CFLAGS += -fPIC -Werror -I$(shell ocamlc -where)
+CFLAGS += -fPIC -I$(shell ocamlc -where)
 
 OCAMLOPTFLAG_G := $(shell $(OCAMLOPT) -h 2>&1 | sed -n 's/^  *\(-g\) .*/\1/p')
 OCAMLOPTFLAGS = $(OCAMLOPTFLAG_G) -ccopt "$(LDFLAGS)" -dtypes $(OCAMLINCLUDE) -cc $(CC) -w F -warn-error F
diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index 21b78b0538..ddbac052ac 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -9,7 +9,6 @@ XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_FreeBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_MiniOS) += xenstored_minios.o
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += -I./include
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355702.583583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrO-0000XH-P5; Fri, 24 Jun 2022 16:08:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355702.583583; Fri, 24 Jun 2022 16:08:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrO-0000X2-In; Fri, 24 Jun 2022 16:08:42 +0000
Received: by outflank-mailman (input) for mailman id 355702;
 Fri, 24 Jun 2022 16:08:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4loB-0004qc-2N
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f0614cd-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f0614cd-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086714;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=n/V1L71RdFQfMeSoM2sfB601hpOlEv718nuVnaqP+yY=;
  b=Mxz3IsSDUYEwGIqT8q2T7SLu60oSPbZO0/W90gmvaHxNuCH3E5EyY/Q4
   XcKPFJubZ15ILLAD0xR+iBqyrl7i1edYTL2HPgxvMr0aymcfLf6xKeeJz
   VT1LNCEDDu+UrX+xABzZ944XinAKl4OiBn06w3f6yJV0WhO8ZUE1PaTXK
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74384250
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74384250"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 24/25] tools: Add -Werror by default to all tools/
Date: Fri, 24 Jun 2022 17:04:21 +0100
Message-ID: <20220624160422.53457-25-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

And provide an option to ./configure to disable it.

A follow-up patch will remove -Werror from every other Makefile in
tools/. ("tools: Remove -Werror everywhere else")

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/configure.ac |  1 +
 tools/Rules.mk     |  4 ++++
 config/Tools.mk.in |  1 +
 tools/configure    | 26 ++++++++++++++++++++++++++
 4 files changed, 32 insertions(+)

diff --git a/tools/configure.ac b/tools/configure.ac
index 1094d896fc..06311b99c2 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -79,6 +79,7 @@ m4_include([../m4/header.m4])
 AX_XEN_EXPAND_CONFIG()
 
 # Enable/disable options
+AX_ARG_DEFAULT_ENABLE([werror], [Build tools without -Werror])
 AX_ARG_DEFAULT_DISABLE([rpath], [Build tools with -Wl,-rpath,LIBDIR])
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
 AX_ARG_DEFAULT_ENABLE([monitors], [Disable xenstat and xentop monitoring tools])
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 26958b2948..a165dc4bda 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -133,6 +133,10 @@ endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
+ifeq ($(CONFIG_WERROR),y)
+CFLAGS += -Werror
+endif
+
 ifeq ($(debug),y)
 # Use -Og if available, -O0 otherwise
 dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 6c1a0a676f..d0d460f922 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -1,5 +1,6 @@
 -include $(XEN_ROOT)/config/Paths.mk
 
+CONFIG_WERROR       := @werror@
 CONFIG_RUMP         := @CONFIG_RUMP@
 ifeq ($(CONFIG_RUMP),y)
 XEN_OS              := NetBSDRump
diff --git a/tools/configure b/tools/configure
index a052c186a5..986cb1d933 100755
--- a/tools/configure
+++ b/tools/configure
@@ -716,6 +716,7 @@ ocamltools
 monitors
 githttp
 rpath
+werror
 DEBUG_DIR
 XEN_DUMP_DIR
 XEN_PAGING_DIR
@@ -805,6 +806,7 @@ with_xen_scriptdir
 with_xen_dumpdir
 with_rundir
 with_debugdir
+enable_werror
 enable_rpath
 enable_githttp
 enable_monitors
@@ -1490,6 +1492,7 @@ Optional Features:
   --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
   --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
   --disable-largefile     omit support for large files
+  --disable-werror        Build tools without -Werror (default is ENABLED)
   --enable-rpath          Build tools with -Wl,-rpath,LIBDIR (default is
                           DISABLED)
   --enable-githttp        Download GIT repositories via HTTP (default is
@@ -4111,6 +4114,29 @@ DEBUG_DIR=$debugdir_path
 
 # Enable/disable options
 
+# Check whether --enable-werror was given.
+if test "${enable_werror+set}" = set; then :
+  enableval=$enable_werror;
+fi
+
+
+if test "x$enable_werror" = "xno"; then :
+
+    ax_cv_werror="n"
+
+elif test "x$enable_werror" = "xyes"; then :
+
+    ax_cv_werror="y"
+
+elif test -z $ax_cv_werror; then :
+
+    ax_cv_werror="y"
+
+fi
+werror=$ax_cv_werror
+
+
+
 # Check whether --enable-rpath was given.
 if test "${enable_rpath+set}" = set; then :
   enableval=$enable_rpath;
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:08:49 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:08:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355715.583594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrV-000139-2Z; Fri, 24 Jun 2022 16:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355715.583594; Fri, 24 Jun 2022 16:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrU-00012w-UX; Fri, 24 Jun 2022 16:08:48 +0000
Received: by outflank-mailman (input) for mailman id 355715;
 Fri, 24 Jun 2022 16:08:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnh-0004qb-0t
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:04:53 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6071422d-f3d7-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 18:04:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6071422d-f3d7-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086691;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=DRUZBzwipLHVi3LrT/VXUcevANDycChtXk37Vv5NYiY=;
  b=DMYhPAbbIc7N/O7kItuByodH0U9FFH9dzWv3JiFilWWwddpzcD65hns+
   pJCC4B3+hO+ggq9zGahMlkOAkBLtdaKI0MaBidwZXzhfKKco2ifaOojNs
   1W0ER5u0yU9ClQOq8NkKtDXv8bXB9Tx/pqBi6gAwyqFiYa/JrDlGKHSV2
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74362364
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:8i+vGa/vvvkbrBPzjb/gDrUDyX6TJUtcMsCJ2f8bNWPcYEJGY0x3n
 TYWD2CGMveCN2v1fdknYIji8U8P756DyYAxQANl+Ho8E34SpcT7XtnIdU2Y0wF+jyHgoOCLy
 +1EN7Es+ehtFie0Si+Fa+Sn9T8mvU2xbuKU5NTsY0idfic5DnZ74f5fs7Rh2NQw34LlW1nlV
 e7a+KUzBnf0g1aYDUpMg06zgEsHUCPa4W5wUvQWPJinjXeG/5UnJMt3yZKZdhMUdrJ8DO+iL
 9sv+Znilo/vE7XBPfv++lrzWhVirrc/pmFigFIOM0SpqkAqSiDfTs/XnRfTAKtao2zhojx/9
 DlCnaSUdT8QIbbPo8UmcUEHPx9nH50dpLCSdBBTseTLp6HHW37lwvEoB0AqJ4wIvO1wBAmi9
 9RBdmpLNErawbvrnvTrEYGAhex6RCXvFIoZpnFnyyCfFfs8SIrPa67L+cVZzHE7gcUm8fP2O
 JZCN2A0MkiojxtnZhQTOLcymfaUonDAehx4igikpPI4yj2GpOB2+Oe0a4eEEjCQfu1Kmm6Iq
 2SA+H72ajkKOdraxTeb/3aEgu7UgTi9SI8UDKe/9PNhnBuU3GN7NfENfQLl+7/j0Bf4Ao8Bb
 RxPksYzkUQs3HSPcuHEQAa7m1+/nEZDd+RJMd1htA7Yn8I4/D2l6ng4oi9pMYJ76pdtGGR1h
 jdljPuyW2Ux7eT9pWa1s+7N8GjsYXV9wXoqP3dscOcT3zX0TGjfZDrrR80rLqO6h8ad9drYk
 2HT93hWa1n+YKc2O0SHEbPv2WvESmDhFFJd2+kudjvNAvlFTICkfZe0zlPQ8OxNKo2UJnHY4
 iVaw5DPtb9SVcnS/MBofAnrNOvxjxpiGG20vLKSN8N5q2TFF4CLJ+i8Hw2S1G82a51ZKFcFk
 WfYuB9L5Y87AUZGmZRfOtrrY+xzlPCIPY28Cpj8M4ofCrAsJVTv1Hw/OiatM5XFzRFEfVcXY
 szAL65BzB8yVMxa8dZBb71Mj+Z1mn9vnj27qFKS503P7IdyrUW9Ed8tWGZipMhjhE9YiG05K
 +piCvY=
IronPort-HdrOrdr: A9a23:YDMGN6uB8UFkI1xjn5bcU5m/7skDTtV00zEX/kB9WHVpmszxra
 6TdZMgpHnJYVcqKQkdcL+7WJVoLUmxyXcx2/h1AV7AZniAhILLFvAA0WKK+VSJcEeSygce79
 YFT0EXMqyIMbEQt6fHCWeDfOrIuOP3kpyVuQ==
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74362364"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 11/25] tools/xentrace: rework Makefile
Date: Fri, 24 Jun 2022 17:04:08 +0100
Message-ID: <20220624160422.53457-12-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Remove "build" targets.

Use "$(TARGETS)" to list binary to be built.

Cleanup "clean" rule.

Also drop conditional install of $(BIN) and $(LIBBIN) as those two
variables are now always populated.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v2:
    - fix typo in title
    - drop conditional install of $(BIN) and $(LIBBIN)

 tools/xentrace/Makefile | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 9fb7fc96e7..0995fa9203 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -14,36 +14,31 @@ SBIN     = xentrace xentrace_setsize
 LIBBIN   = xenctx
 SCRIPTS  = xentrace_format
 
-.PHONY: all
-all: build
+TARGETS := $(BIN) $(SBIN) $(LIBBIN)
 
-.PHONY: build
-build: $(BIN) $(SBIN) $(LIBBIN)
+.PHONY: all
+all: $(TARGETS)
 
 .PHONY: install
-install: build
+install: all
 	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-ifneq ($(BIN),)
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
 	$(INSTALL_PROG) $(BIN) $(DESTDIR)$(bindir)
-endif
 	$(INSTALL_PROG) $(SBIN) $(DESTDIR)$(sbindir)
 	$(INSTALL_PYTHON_PROG) $(SCRIPTS) $(DESTDIR)$(bindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
 
 .PHONY: uninstall
 uninstall:
 	rm -f $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/, $(LIBBIN))
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(SCRIPTS))
 	rm -f $(addprefix $(DESTDIR)$(sbindir)/, $(SBIN))
-ifneq ($(BIN),)
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(BIN))
-endif
 
 .PHONY: clean
 clean:
-	$(RM) *.a *.so *.o *.rpm $(BIN) $(SBIN) $(LIBBIN) $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:09:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:09:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355731.583605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrq-0002D0-Dr; Fri, 24 Jun 2022 16:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355731.583605; Fri, 24 Jun 2022 16:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrq-0002Cr-8w; Fri, 24 Jun 2022 16:09:10 +0000
Received: by outflank-mailman (input) for mailman id 355731;
 Fri, 24 Jun 2022 16:09:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lo7-0004qc-1N
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:19 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6db82574-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6db82574-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086714;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=fDQ/vJNJxx9MBaOwB2oL0yR7kNW2eRPokZWwFVGVAsQ=;
  b=PmdY29JVwAOMKBPwTB41fUjBt+YXlePgcHhHTRCL4S3KYnZr00hysaoj
   tefkI2iKlVvbABUKgojQ0ikdJ9qWx/Bebjnytl5q4SIAhUkR9ngBkWrY/
   j+qTrYlVMgxhMFRODRvKyBPPDUQ+6F7KkQGbUJe5BHhx7kEF/fIYTgbDh
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 76935370
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:bCZZ7a32KuCpMNdmffbD5Zpxkn2cJEfYwER7XKvMYLTBsI5bpzUBy
 GpMDW3UbP/cNDT9Lo1xbNy2904G78eGm9VrGgpupC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy3YDja++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /1Vl77tQy4xLJScgdk8QUJePgohI5VJreqvzXiX6aR/zmXDenrohf5vEFs3LcsT/eMf7WNmr
 KJCbmpXN1ba2rzwkOnTpupE36zPKOHiOp8fvXdxiynUF/88TbjIQrnQ5M8e1zA17ixLNamAN
 pFEMmU3BPjGSyNDOVoYMrgmpf+hhXbeXTthtlG3lYNitgA/yyQuieOwYbI5YOeiWsF9jkue4
 GXc8AzREhwccdCS1zeB2natnfPU2zP2XpoIE7+1/eIsh0ecrlH/EzVPCwH9+6PgzBfjBZQPc
 CT45xbCs4Aq1m72RPDlTSTouUOlrEUadvEPDdU1vVTlJrXv3+qJOoQVZmcfNYJ+75JuGmxCO
 kyhxI2wW2E22FGBYTfEr+rP82vvUcQABTVaDRLoWzfp9DUKTGsbqhvUBuhuH6eu5jEeMWGhm
 mvaxMTSalh6sCLq60lY1Qqe695UjsKVJjPZHy2ONo5f0it3ZZS+e6uj4kXB4PBLIe6xFwfc4
 iBcypHBsLhWUvlhcRBhps1XRNlFAN7VWAAwfHY1R8Vxn9hT0yTLkX9sDMFWex4yb5dslc7Ba
 07PowJBjKJu0I+RRfYvOeqZUp1ypYC5TIiNfq2EP7JmP8kqHCfarX4GWKJl9z20+KTaufpkY
 snznAfFJStyNJmLOxLsFrlEj+N0l3tgrY4RLLiipymaPXOlTCb9Yd843JGmNIjVMIvsTN3pz
 uti
IronPort-HdrOrdr: A9a23:y/KiH6F3TRHJsio3pLqE0MeALOsnbusQ8zAXP0AYc3Jom6uj5q
 aTdZUgpGfJYVkqOE3I9ertBEDEewK4yXcX2/h3AV7BZniEhILAFugLhuGO/9SjIVybygc079
 YYT0EUMrzN5DZB4voSmDPIceod/A==
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="76935370"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 23/25] tools/helpers: Fix build of xen-init-dom0 with -Werror
Date: Fri, 24 Jun 2022 17:04:20 +0100
Message-ID: <20220624160422.53457-24-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The prototype of asprintf() is missing without _GNU_SOURCE, which makes
the build fail with -Werror.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/helpers/xen-init-dom0.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index 37eff8868f..764f837887 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -1,3 +1,5 @@
+#define _GNU_SOURCE
+
 #include <stdlib.h>
 #include <stdint.h>
 #include <string.h>
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:09:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355733.583610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrq-0002LK-Ua; Fri, 24 Jun 2022 16:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355733.583610; Fri, 24 Jun 2022 16:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4lrq-0002JR-Pu; Fri, 24 Jun 2022 16:09:10 +0000
Received: by outflank-mailman (input) for mailman id 355733;
 Fri, 24 Jun 2022 16:09:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mLY=W7=citrix.com=prvs=16756bcf7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o4lnx-0004qc-00
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:05:09 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69dc418e-f3d7-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 18:05:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69dc418e-f3d7-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656086706;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=oByBK99qnefbrQHrEHd3FvzvjmDsAemolWu+MnoHJUY=;
  b=KlenXAgY4asI04rH0XzRv/eSvW2hK8KHenlJGMHdZl4LEV7fhnSvsN/T
   8NJYTkVOtKRyX0ldOoWUvwr8MHm5eysTxmZwERs3hjESsCIXZPm0aRTjy
   0pP0UNVidgC0bus0yyXpGwFF5fNhye/tsLgvb05VC8A5NBsn892q8AmHX
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74787734
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:4SRl5KKYYc2C+uY6FE+RiZUlxSXFcZb7ZxGr2PjKsXjdYENS3jEOx
 mdODT/VPK6JNmL3fohya4u/oU8E68SAmNIxGQRlqX01Q3x08seUXt7xwmUcns+xwm8vaGo9s
 q3yv/GZdJhcokf0/0vrav67xZVF/fngqoDUUYYoAQgsA14+IMsdoUg7wbRh3NQ02YLR7z6l4
 rseneWOYDdJ5BYsWo4kw/rrRMRH5amaVJsw5zTSVNgT1LPsvyB94KE3fMldG0DQUIhMdtNWc
 s6YpF2PEsE1yD92Yj+tuu6TnkTn2dc+NyDW4pZdc/DKbhSvOkXee0v0XRYRQR4/ttmHozx+4
 I1AicHrUF53BZ/B28VDSBNSOXllYLITrdcrIVDn2SCS50jPcn+qyPRyFkAme4Yf/46bA0kXq
 6ZecmpUKEne2aTmm9pXScE17ignBMDtIIMYvGAm1TzDBOwqaZvCX7/L9ZlT2zJYasVmQqqBN
 5VGMmYHgBLofiN/axQrGawHnuKTo1jcdxdppWKIjP9ii4TU5FMoi+W8WDbPQfSISt9ShV2wv
 X/d8iLyBRRyHMOb4SqI9DSrnOCntTP2XsceGaO18tZugUaP3SoDBRsOT1y5rPKlzEmkVLpix
 1c8o3R06/JorQryE4e7D0bQTGO4UgA0A9dsTrYjsV+2+KeXwliCDGIuFiUcd4lz3CMpfgDGx
 mNljvuwW2Ex6ObIES3NnluHhWjsYHZIdAfucQdBFFJYuIe7/enfmzqVFr5e/LiJYsoZ8N0a6
 xSDt2AAiroalqbnPI3rrAmc01pASnUkJzPZBzk7vUr/t2uVnKb/O+SVBaHztJ6s1rqxQFibp
 2QjkMOD9u0IBpzlvHXTHbtVRODwuKrdaGK0bbtT838JrWzFF5mLLehtDMxWfh81Yq7ohxezC
 KMshe+hzMAKZyb7BUOGS4mwF94r3cDdKDgRbdiNNoAmSsEoLGevpXgyDWbNjzGFuBV9yskXZ
 MbEGftA+F5HUMyLOhLtHLxDuVLqrwhjrV7uqWfTlUX5iOTCOy/OF9/o8jKmN4gE0U9Nmy2Nm
 /43CidA40w3vDHWCsUPzbMuEA==
IronPort-HdrOrdr: A9a23:Tumu16tlwYBGiZSir3qv+dzt7skDcNV00zEX/kB9WHVpmszxra
 +TdZMgpHjJYVcqKQgdcL+7WZVoLUmwyXcx2/hyAV7AZniDhILLFuFfBOLZqlWKcREWtNQtsJ
 uIG5IObuEYZmIVsS+V2mWF+q4bsbq6zJw=
X-IronPort-AV: E=Sophos;i="5.92,218,1650945600"; 
   d="scan'208";a="74787734"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH v3 19/25] tools: Introduce $(xenlibs-ldlibs, ) macro
Date: Fri, 24 Jun 2022 17:04:16 +0100
Message-ID: <20220624160422.53457-20-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This can be used when linking against multiple in-tree Xen libraries,
and avoids duplicating flags. It can be used instead of multiple
$(LDLIBS_libxen*).

For now, replace the open-coding in libs.mk.

The macro $(xenlibs-libs, ) will be useful later when only the paths to
the libraries are wanted (e.g. for checking for dependencies).

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/Rules.mk     | 16 ++++++++++++++--
 tools/libs/libs.mk |  2 +-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 23979ed254..ce77dd2eb1 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -93,12 +93,24 @@ define xenlibs-rpath
     $(addprefix -Wl$(comma)-rpath-link=$(XEN_ROOT)/tools/libs/,$(call xenlibs-dependencies,$(1)))
 endef
 
+# Provide a path for each library in $(1)
+define xenlibs-libs
+    $(foreach lib,$(1), \
+        $(XEN_ROOT)/tools/libs/$(lib)/lib$(FILENAME_$(lib))$(libextension))
+endef
+
+# Flags for linking against all Xen libraries listed in $(1)
+define xenlibs-ldlibs
+    $(call xenlibs-rpath,$(1)) $(call xenlibs-libs,$(1)) \
+    $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
  CFLAGS_libxen$(1) = $$(CFLAGS_xeninclude)
  SHLIB_libxen$(1) = $$(call xenlibs-rpath,$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
- LDLIBS_libxen$(1) = $$(call xenlibs-rpath,$(1)) $$(XEN_libxen$(1))/lib$$(FILENAME_$(1))$$(libextension)
+ LDLIBS_libxen$(1) = $$(call xenlibs-ldlibs,$(1))
 endef
 
 $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
@@ -108,7 +120,7 @@ $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
 CFLAGS_libxenctrl += -D__XEN_TOOLS__
 
 ifeq ($(CONFIG_Linux),y)
-LDLIBS_libxenstore += -ldl
+xenlibs-ldlibs-store := -ldl
 endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index d7e1274249..2b8e7a6128 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -18,7 +18,7 @@ CFLAGS   += -Werror -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
-LDLIBS += $(foreach lib, $(USELIBS_$(LIBNAME)), $(LDLIBS_libxen$(lib)))
+LDLIBS += $(call xenlibs-ldlibs,$(USELIBS_$(LIBNAME)))
 
 PIC_OBJS := $(OBJS-y:.o=.opic)
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 16:52:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 16:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355769.583627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4mXM-00005e-9Y; Fri, 24 Jun 2022 16:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355769.583627; Fri, 24 Jun 2022 16:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4mXM-00005X-6F; Fri, 24 Jun 2022 16:52:04 +0000
Received: by outflank-mailman (input) for mailman id 355769;
 Fri, 24 Jun 2022 16:52:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4mXK-00005R-IB
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 16:52:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4mXK-00033d-CE; Fri, 24 Jun 2022 16:52:02 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4mXK-0006pZ-3I; Fri, 24 Jun 2022 16:52:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=ESCTga1VeCr6gJ7GjxuK0AjiEvsJA55vwBsy56yrgpY=; b=Qq4k5W
	8fjelYPId5/t+OF6Yh2pA3WkXtJzg4SY+RHUPefvjWPJdMIXObPeBOyi3mpjTOYzsyLJ8Yx+sZY8f
	bvMYW6LCaUlR/FHnUcz6jKr2aveefl1nnBZE5gHH9SHE1eHLGggXD167ZRPNOtOLKMXO0LPOPEERc
	ZRvfxJy5b+0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] public/io: xs_wire: Allow Xenstore to report EPERM
Date: Fri, 24 Jun 2022 17:51:51 +0100
Message-Id: <20220624165151.940-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

C Xenstored uses EPERM when the client is not allowed to change the
owner (see GET_PERMS). However, the xenstore protocol doesn't describe
EPERM, so EINVAL will be sent to the client instead.

When writing tests, it would be useful to differentiate between EINVAL
(e.g. a parsing error) and EPERM (i.e. no permission). So extend
xsd_errors[] to support returning EPERM.

Looking at the previous time xsd_errors was extended (8b2c441a1b), it
was considered safe to add a new error because at least the Linux
driver and libxenstore treat an unknown error code as EINVAL.

This statement doesn't cover other possible OSes, but I am not aware of
any breakage.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/public/io/xs_wire.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
index c1ec7c73e3b1..c23b63cdfeaf 100644
--- a/xen/include/public/io/xs_wire.h
+++ b/xen/include/public/io/xs_wire.h
@@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
 __attribute__((unused))
 #endif
     = {
+    XSD_ERROR(EPERM),
     XSD_ERROR(EINVAL),
     XSD_ERROR(EACCES),
     XSD_ERROR(EEXIST),
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 17:25:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 17:25:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355775.583638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4n3L-0003S6-SF; Fri, 24 Jun 2022 17:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355775.583638; Fri, 24 Jun 2022 17:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4n3L-0003Rz-Ow; Fri, 24 Jun 2022 17:25:07 +0000
Received: by outflank-mailman (input) for mailman id 355775;
 Fri, 24 Jun 2022 17:25:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4n3K-0003Rs-4Y
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 17:25:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4n3J-0003l3-QL; Fri, 24 Jun 2022 17:25:05 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4n3J-0002mc-JR; Fri, 24 Jun 2022 17:25:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=uIQBUHf2B1nNIwlw54Yy0Fu2lC1Ge4jGBpr2Y7H/EGw=; b=XnuUmAUJzS5OqZ07xirKEs97rV
	Fu8h/c3hdfulfyogtbX/r826eiU80r2gjYwvVI6cUJm2H/k9hMZ0ZhRojOLmJzoNPvO3fLpy/sRHx
	9iKzlvuF8WnUZ0sru3veHl2UmIRWWMYHrtVoDr45YUpvWK5lRh9PtkzVhPZ/Y9cNMoM8=;
Message-ID: <81c33c8c-e345-2fe3-32c6-2f80799eefd0@xen.org>
Date: Fri, 24 Jun 2022 18:25:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
 <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
 <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 24/06/2022 14:34, Luca Fancellu wrote:
>> On 24 Jun 2022, at 13:17, Julien Grall <julien@xen.org> wrote:
> I would keep the section about compiling cppcheck since many recent distros don’t provide cppcheck >=2.7 yet (and 2.8 is broken),
> If you agree with it.

It depends on the content of the section. If the content duplicates the 
cppcheck README then no. If this is just to point to the cppcheck 
README, then I am OK with that.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 17:28:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 17:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355780.583649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4n6x-00045n-BC; Fri, 24 Jun 2022 17:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355780.583649; Fri, 24 Jun 2022 17:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4n6x-00045g-89; Fri, 24 Jun 2022 17:28:51 +0000
Received: by outflank-mailman (input) for mailman id 355780;
 Fri, 24 Jun 2022 17:28:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4n6v-00045a-9J
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 17:28:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4n6t-0003pU-AV; Fri, 24 Jun 2022 17:28:47 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4n6t-0002r3-4v; Fri, 24 Jun 2022 17:28:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:Cc:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=Ndb0oprtCRhf+y5+7akjUInJJBj9HPbzJezBpH9A4dw=; b=E96T9SdzHxvQFeDvYvIA+JzBrU
	2Bwwz2PnhjjxIUUDFOaq60qWrO73u+XWQp7VZFNsabETyeFjBkwxd3eah82nRTWVLNJJjVWmuY29n
	l/vV0vM+RbSzHHODPcKk8z9wzpiM/qwwXSx+aJbPWqe0PDrbBR01wr7curXxTd8Vkr9s=;
Message-ID: <7689497b-1977-b30a-5835-587fa266c721@xen.org>
Date: Fri, 24 Jun 2022 18:28:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: Reg. Tee init fail...
To: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>
References: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

(moving the discussion to xen-devel as I think it is more appropriate)

On 24/06/2022 10:53, SK, SivaSangeetha (Siva Sangeetha) wrote:
> [AMD Official Use Only - General]

Not clear what this means.

> 
> Hi Xen team,
> 
> In the TEE driver, we allocate a ring buffer, get its physical address from the __pa() macro, and pass that physical address to the secure processor, which maps and uses it on its side.
> 
> Source: https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
> 
> This works well natively in Dom0 on the target.
> When we boot the same Dom0 kernel with the Xen hypervisor enabled, ring init fails.

Do you have any error message or error code?

> 
> 
> We suspect that the address passed to the secure processor is not the same when Xen is enabled, and that some level of address translation might be required to get the exact physical address.

If you are using Xen upstream, Dom0 will be mapped with IPA == PA. So 
there should be no need for translation.
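To illustrate the point (a pseudocode-style sketch in kernel C, illustrative only -- not the actual tee-dev.c code): with Dom0 direct-mapped, the value __pa() returns is already the host physical address, so the existing flow should keep working unchanged:

```c
/* Illustrative sketch (Linux kernel context, not runnable as-is).
 * With Dom0 mapped IPA == PA, the guest-physical address computed by
 * __pa() equals the host-physical address the secure processor
 * expects, so no extra translation step is needed under Xen. */
ring = alloc_ring_buffer(ring_size);   /* hypothetical allocation helper */
ring_pa = __pa(ring);                  /* guest-physical == host-physical */
send_ring_init_cmd(ring_pa);           /* hypothetical: hand address to PSP */
```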

Can you provide more details on your setup (version of Xen, Linux...)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 17:34:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 17:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355787.583660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nCO-0005Va-VR; Fri, 24 Jun 2022 17:34:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355787.583660; Fri, 24 Jun 2022 17:34:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nCO-0005VT-SN; Fri, 24 Jun 2022 17:34:28 +0000
Received: by outflank-mailman (input) for mailman id 355787;
 Fri, 24 Jun 2022 17:34:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4nCO-0005VN-Ai
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 17:34:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nCO-0003vV-1r; Fri, 24 Jun 2022 17:34:28 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nCN-0003E5-Rl; Fri, 24 Jun 2022 17:34:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=GsUC9d5Wa3NI82Wm9SG0/Kl9J6u9HHEBs422+GYu7gY=; b=S3ErcNhqCedS7+G1NfFVYD7ce7
	WKZIUJjGJw8/nEXMsUAofsSpjflBfERbxDOH4joC2Jya2koAKtLrrC8Bp1UVgDcLpmexc9n36tl2o
	1wSoqceWPYtGOJDCTA5neuu/E9kMdAewKwF14Og/9pvp23ksaS2f1tNjFZ3pPVUWIRSI=;
Message-ID: <7561bb54-a96b-0b18-f953-3ad3babf9f22@xen.org>
Date: Fri, 24 Jun 2022 18:34:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/4] tools/xenstore: add documentation for new
 set/get-feature commands
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220527072427.20327-1-jgross@suse.com>
 <20220527072427.20327-3-jgross@suse.com>
 <4f8f6cf3-3aee-9128-df09-d3957c233c42@xen.org>
 <258f4579-7c24-dd98-d4ce-1155b1da8759@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <258f4579-7c24-dd98-d4ce-1155b1da8759@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 24/06/2022 05:13, Juergen Gross wrote:
> On 23.06.22 20:27, Julien Grall wrote:
>> On 27/05/2022 08:24, Juergen Gross wrote:
>>> Add documentation for two new Xenstore wire commands SET_FEATURE and
>>> GET_FEATURE used to set or query the Xenstore features visible in the
>>> ring page of a given domain.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> Do we need support in the migration protocol for the features?
>>
>> I would say yes. You want to make sure that the client can be migrated
>> without losing features between two xenstored.
>>
>>> V2:
>>> - remove feature bit (Julien Grall)
>>> - GET_FEATURE without domid will return Xenstore supported features
>>>    (triggered by Julien Grall)
>>> ---
>>>   docs/misc/xenstore.txt | 14 ++++++++++++++
>>>   1 file changed, 14 insertions(+)
>>>
>>> diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
>>> index a3d3da0a5b..00f6969202 100644
>>> --- a/docs/misc/xenstore.txt
>>> +++ b/docs/misc/xenstore.txt
>>> @@ -331,6 +331,20 @@ SET_TARGET        <domid>|<tdomid>|
>>>       xenstored prevents the use of SET_TARGET other than by dom0.
>>> +GET_FEATURE        [<domid>|]        <value>|
>>> +SET_FEATURE        <domid>|<value>|
>>> +    Returns or sets the contents of the "feature" field located at
>>> +    offset 2064 of the Xenstore ring page of the domain specified by
>>> +    <domid>. <value> is a decimal number being a logical or of the
>>
>> In the context of migration, I am still a bit concerned that the
>> features are stored in the ring, because the guest could overwrite them.
>>
>> I would expect the migration code to check that GET_FEATURE
>> <domid> returns a subset of GET_FEATURE on the target Xenstored. So it
>> can easily prevent a guest from migrating.
>>
>> So I think this should be a shadow copy that will be returned instead 
>> of the contents of the "feature" field.
> 
> Of course. The value in the ring is meant only for the guest. Xenstore
> will have the authoritative value in its private memory.

Good. I would suggest clarifying this in xenstore.txt, because so far it 
suggests otherwise.

>>> +    feature bits as defined in docs/misc/xenstore-ring.txt. Trying
>>> +    to set a bit for a feature not being supported by the running
>>> +    Xenstore will be denied. Providing no <domid> with the
>>> +    GET_FEATURE command will return the features which are supported
>>> +    by Xenstore.
>>
>> Do we want to allow modifying the features when the guest is running?
> 
> I think we can't remove features, but adding would be fine. For

Agreed that features can't be removed. Regarding adding, I think we 
would need a mechanism to tell the client there is a new feature. That 
would require some work, so...

> simplicity it might be better to just deny a modification while the
> guest is running.

... I agree this is probably best for a first shot. This can be relaxed 
afterwards.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 17:55:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 17:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355793.583670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nWR-0007tm-LY; Fri, 24 Jun 2022 17:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355793.583670; Fri, 24 Jun 2022 17:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nWR-0007tf-Ig; Fri, 24 Jun 2022 17:55:11 +0000
Received: by outflank-mailman (input) for mailman id 355793;
 Fri, 24 Jun 2022 17:55:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4nWQ-0007tZ-BT
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 17:55:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nWP-0004J2-VN; Fri, 24 Jun 2022 17:55:09 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nWP-0004Dn-PS; Fri, 24 Jun 2022 17:55:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=JO00fkty48Zh8fsa4ezsLZvRhS4z1PJ9w1aOv2dQOTE=; b=aDJ3E5qL0YU3H0sUA8+z0fBmd5
	dMdR5+bGDW2TqzVXWz2fZdXgrspWB7Pu1+axj20swOsqXlGNB/056IuV7mmDyxzB6F/goPRLZ6t49
	2k+hhQkzSKQj3cKNaU28LVsu4iSiPF1mo3iabBDrutmkEX6pNBXAqk7o2hu1b3TNS7Lk=;
Message-ID: <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
Date: Fri, 24 Jun 2022 18:55:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> From: Penny Zheng <penny.zheng@arm.com>
> 
> This patch serie introduces a new feature: setting up static

Typo: s/serie/series/

> shared memory on a dom0less system, through device tree configuration.
> 
> This commit parses the shared memory node at boot time and reserves it in
> bootinfo.reserved_mem to avoid other uses.
> 
> This commit proposes a new Kconfig option CONFIG_STATIC_SHM to wrap
> static-shm-related code; this option depends on static memory
> (CONFIG_STATIC_MEMORY). That's because later we want to reuse a few
> helpers guarded by CONFIG_STATIC_MEMORY, like acquire_staticmem_pages,
> on static shared memory.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v5 change:
> - no change
> ---
> v4 change:
> - nit fix on doc
> ---
> v3 change:
> - make nr_shm_domain unsigned int
> ---
> v2 change:
> - document refinement
> - remove bitmap and use the iteration to check
> - add a new field nr_shm_domain to keep the number of shared domain
> ---
>   docs/misc/arm/device-tree/booting.txt | 120 ++++++++++++++++++++++++++
>   xen/arch/arm/Kconfig                  |   6 ++
>   xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
>   xen/arch/arm/include/asm/setup.h      |   3 +
>   4 files changed, 197 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 98253414b8..6467bc5a28 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -378,3 +378,123 @@ device-tree:
>   
>   This will reserve a 512MB region starting at the host physical address
>   0x30000000 to be exclusively used by DomU1.
> +
> +Static Shared Memory
> +====================
> +
> +The static shared memory device tree nodes allow users to statically set up
> +shared memory on dom0less system, enabling domains to do shm-based
> +communication.
> +
> +- compatible
> +
> +    "xen,domain-shared-memory-v1"
> +
> +- xen,shm-id
> +
> +    An 8-bit integer that represents the unique identifier of the shared memory
> +    region. The maximum identifier shall be "xen,shm-id = <0xff>".
> +
> +- xen,shared-mem
> +
> +    An array takes a physical address, which is the base address of the
> +    shared memory region in host physical address space, a size, and a guest
> +    physical address, as the target address of the mapping. The number of cells
> +    for the host address (and size) is the same as the guest pseudo-physical
> +    address and they are inherited from the parent node.

Sorry for jumping into the discussion late. But as this is going to be a 
stable ABI, I would like to make sure the interface is going to be easily 
extendable.

AFAIU, with your proposal the host physical address is mandatory. I 
would expect that some users may want to share memory without caring 
about the exact location in memory. So I think it would be good to make 
it optional in the binding.

I think this wants to be done now because it would be difficult to 
change the binding afterwards (the host physical address is the first 
set of cells).

Xen doesn't need to handle the optional case.
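For illustration, a node following the binding quoted above might look like this (a sketch only; the node name, addresses and sizes are made up, assuming 1-cell addresses and sizes inherited from the parent node):

```dts
domU1 {
    compatible = "xen,domain";
    /* ... other dom0less domain properties ... */

    /* 256MB at host physical 0x40000000, mapped at guest physical
     * 0x70000000, identified as shared region 0x2. */
    domU1-shared-mem@40000000 {
        compatible = "xen,domain-shared-memory-v1";
        xen,shm-id = <0x2>;
        xen,shared-mem = <0x40000000 0x10000000 0x70000000>;
    };
};
```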

[...]

> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index be9eff0141..7321f47c0f 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -139,6 +139,12 @@ config TEE
>   
>   source "arch/arm/tee/Kconfig"
>   
> +config STATIC_SHM
> +	bool "Statically shared memory on a dom0less system" if UNSUPPORTED

You also want to update SUPPORT.md.

> +	depends on STATIC_MEMORY
> +	help
> +	  This option enables statically shared memory on a dom0less system.
> +
>   endmenu
>   
>   menu "ARM errata workaround via the alternative framework"
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index ec81a45de9..38dcb05d5d 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -361,6 +361,70 @@ static int __init process_domain_node(const void *fdt, int node,
>                                      size_cells, &bootinfo.reserved_mem, true);
>   }
>   
> +#ifdef CONFIG_STATIC_SHM
> +static int __init process_shm_node(const void *fdt, int node,
> +                                   u32 address_cells, u32 size_cells)
> +{
> +    const struct fdt_property *prop;
> +    const __be32 *cell;
> +    paddr_t paddr, size;
> +    struct meminfo *mem = &bootinfo.reserved_mem;
> +    unsigned long i;

nr_banks is "unsigned int" so I think this should be "unsigned int" as well.

> +
> +    if ( address_cells < 1 || size_cells < 1 )
> +    {
> +        printk("fdt: invalid #address-cells or #size-cells for static shared memory node.\n");
> +        return -EINVAL;
> +    }
> +
> +    prop = fdt_get_property(fdt, node, "xen,shared-mem", NULL);
> +    if ( !prop )
> +        return -ENOENT;
> +
> +    /*
> +     * xen,shared-mem = <paddr, size, gaddr>;
> +     * Memory region starting from physical address #paddr of #size shall
> +     * be mapped to guest physical address #gaddr as static shared memory
> +     * region.
> +     */
> +    cell = (const __be32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells, &paddr, &size);

Please check the length of the property to confirm it is big enough to 
contain "paddr", "size", and "gaddr".

> +    for ( i = 0; i < mem->nr_banks; i++ )
> +    {
> +        /*
> +         * A static shared memory region could be shared between multiple
> +         * domains.
> +         */
> +        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
> +            break;
> +    }
> +
> +    if ( i == mem->nr_banks )
> +    {
> +        if ( i < NR_MEM_BANKS )
> +        {
> +            /* Static shared memory shall be reserved from any other use. */
> +            mem->bank[mem->nr_banks].start = paddr;
> +            mem->bank[mem->nr_banks].size = size;
> +            mem->bank[mem->nr_banks].xen_domain = true;
> +            mem->nr_banks++;
> +        }
> +        else
> +        {
> +            printk("Warning: Max number of supported memory regions reached.\n");
> +            return -ENOSPC;
> +        }
> +    }
> +    /*
> +     * keep a count of the number of domains, which later may be used to
> +     * calculate the number of the reference count.
> +     */
> +    mem->bank[i].nr_shm_domain++;
> +
> +    return 0;
> +}
> +#endif
> +
>   static int __init early_scan_node(const void *fdt,
>                                     int node, const char *name, int depth,
>                                     u32 address_cells, u32 size_cells,
> @@ -386,6 +450,10 @@ static int __init early_scan_node(const void *fdt,
>           process_chosen_node(fdt, node, name, address_cells, size_cells);
>       else if ( depth == 2 && device_tree_node_compatible(fdt, node, "xen,domain") )
>           rc = process_domain_node(fdt, node, name, address_cells, size_cells);
> +#ifdef CONFIG_STATIC_SHM
> +    else if ( depth <= 3 && device_tree_node_compatible(fdt, node, "xen,domain-shared-memory-v1") )
> +        rc = process_shm_node(fdt, node, address_cells, size_cells);
> +#endif
>   
>       if ( rc < 0 )
>           printk("fdt: node `%s': parsing failed\n", name);
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index 2bb01ecfa8..5063e5d077 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -27,6 +27,9 @@ struct membank {
>       paddr_t start;
>       paddr_t size;
>       bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
> +#ifdef CONFIG_STATIC_SHM
> +    unsigned int nr_shm_domain;
> +#endif
>   };
>   
>   struct meminfo {

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 18:06:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 18:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355799.583681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nhb-00017d-Sd; Fri, 24 Jun 2022 18:06:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355799.583681; Fri, 24 Jun 2022 18:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nhb-00017W-Pu; Fri, 24 Jun 2022 18:06:43 +0000
Received: by outflank-mailman (input) for mailman id 355799;
 Fri, 24 Jun 2022 18:06:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4nha-00017M-I6; Fri, 24 Jun 2022 18:06:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4nha-0004b7-5O; Fri, 24 Jun 2022 18:06:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4nhZ-0004Xs-M4; Fri, 24 Jun 2022 18:06:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4nhZ-0006fz-La; Fri, 24 Jun 2022 18:06:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SqgRw9wMLmBMD7APsEv4dyiPrvH6WTATplxfCIC6kJM=; b=lgYI13Byi/hpWOP6G6R3CS3fmC
	4ipRoner89eJk5w4tHFnfhbEMSoXYo2kRsYWwxexfND5amshgWPvXIbn9Jxa7bIyYaSgYCCWPyCuN
	EVjoIbTOnrLkHs5Vz500PktbVxc1495TOrpGEOb2rxQbOdZTLLLmeVRaQJSeWmOjsS0w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171339: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7c1f724dd95cf627f72c96d310b6b7d487bc2281
X-Osstest-Versions-That:
    xen=db3382dd4f468c763512d6bf91c96773395058fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 18:06:41 +0000

flight 171339 xen-unstable real [real]
flight 171344 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171339/
http://logs.test-lab.xenproject.org/osstest/logs/171344/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 171344-retest
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 171344-retest
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install fail pass in 171344-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop  fail in 171344 like 171334
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171334
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171334
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171334
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171334
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171334
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171334
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171334
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171334
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171334
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171334
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171334
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281
baseline version:
 xen                  db3382dd4f468c763512d6bf91c96773395058fb

Last test of basis   171334  2022-06-23 19:07:26 Z    0 days
Testing same since   171339  2022-06-24 06:43:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   db3382dd4f..7c1f724dd9  7c1f724dd95cf627f72c96d310b6b7d487bc2281 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 18:18:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 18:18:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355814.583717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nsb-0002tb-5K; Fri, 24 Jun 2022 18:18:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355814.583717; Fri, 24 Jun 2022 18:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nsb-0002tU-2B; Fri, 24 Jun 2022 18:18:05 +0000
Received: by outflank-mailman (input) for mailman id 355814;
 Fri, 24 Jun 2022 18:18:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HRPZ=W7=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o4nsZ-0002tO-7i
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 18:18:03 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f9fa3af0-f3e9-11ec-bd2d-47488cf2e6aa;
 Fri, 24 Jun 2022 20:18:00 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 7C9C95C01A0;
 Fri, 24 Jun 2022 14:17:58 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 24 Jun 2022 14:17:58 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 24 Jun 2022 14:17:57 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9fa3af0-f3e9-11ec-bd2d-47488cf2e6aa
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v8] Preserve the EFI System Resource Table for dom0
Date: Fri, 24 Jun 2022 14:17:32 -0400
Message-Id: <7f773ea8d3967fc3dd2a485384a852c006fd82b9.1656093756.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The EFI System Resource Table (ESRT) is necessary for fwupd to identify
firmware updates to install.  According to the UEFI specification §23.4,
the ESRT shall be stored in memory of type EfiBootServicesData.  However,
memory of type EfiBootServicesData is considered general-purpose memory
by Xen, so the ESRT needs to be moved somewhere where Xen will not
overwrite it.  Copy the ESRT to memory of type EfiRuntimeServicesData,
which Xen will not reuse.  dom0 can use the ESRT if (and only if) it is
in memory of type EfiRuntimeServicesData.

Earlier versions of this patch reserved the memory in which the ESRT was
located.  This created awkward alignment problems, and required either
splitting the E820 table or wasting memory.  It also would have required
a new platform op for dom0 to use to indicate if the ESRT is reserved.
By copying the ESRT into EfiRuntimeServicesData memory, the E820 table
does not need to be modified, and dom0 can just check the type of the
memory region containing the ESRT.  The copy is only done if the ESRT is
not already in EfiRuntimeServicesData memory, avoiding memory leaks on
repeated kexec.

See https://lore.kernel.org/xen-devel/20200818184018.GN1679@mail-itl/T/
for details.
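
The dom0-side check described above ("dom0 can just check the type of the
memory region containing the ESRT") can be sketched as follows.  This is an
illustrative model only, not Xen or Linux code: the structures are simplified
stand-ins for EFI_MEMORY_DESCRIPTOR, and esrt_usable() is a hypothetical
helper name.

```c
/* Toy model of the consumer-side check: given a simplified EFI memory
 * map, accept the ESRT only if the region containing it has type
 * EfiRuntimeServicesData.  Field names mirror the UEFI ones, but this
 * is not the real EFI_MEMORY_DESCRIPTOR layout. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum region_type {
    EfiBootServicesData = 4,
    EfiRuntimeServicesData = 6,
};

struct mem_region {
    uint64_t start;           /* physical start of the region */
    uint64_t npages;          /* length in 4 KiB pages */
    enum region_type type;
};

#define PAGE_SHIFT 12

/* Return true iff addr lies in a region of type EfiRuntimeServicesData. */
static bool esrt_usable(const struct mem_region *map, size_t n, uint64_t addr)
{
    for ( size_t i = 0; i < n; i++ )
    {
        uint64_t len = map[i].npages << PAGE_SHIFT;

        if ( addr >= map[i].start && addr - map[i].start < len )
            return map[i].type == EfiRuntimeServicesData;
    }
    return false;   /* address not covered by the map at all */
}
```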

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 xen/common/efi/boot.c | 134 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 134 insertions(+)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index a25e1d29f1..f6f34aa816 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -39,6 +39,26 @@
   { 0x605dab50, 0xe046, 0x4300, {0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, 0x8b, 0x23} }
 #define APPLE_PROPERTIES_PROTOCOL_GUID \
   { 0x91bd12fe, 0xf6c3, 0x44fb, { 0xa5, 0xb7, 0x51, 0x22, 0xab, 0x30, 0x3a, 0xe0} }
+#define EFI_SYSTEM_RESOURCE_TABLE_GUID    \
+  { 0xb122a263, 0x3661, 0x4f68, {0x99, 0x29, 0x78, 0xf8, 0xb0, 0xd6, 0x21, 0x80} }
+#define EFI_SYSTEM_RESOURCE_TABLE_FIRMWARE_RESOURCE_VERSION 1
+
+typedef struct {
+    EFI_GUID FwClass;
+    UINT32 FwType;
+    UINT32 FwVersion;
+    UINT32 LowestSupportedFwVersion;
+    UINT32 CapsuleFlags;
+    UINT32 LastAttemptVersion;
+    UINT32 LastAttemptStatus;
+} EFI_SYSTEM_RESOURCE_ENTRY;
+
+typedef struct {
+    UINT32 FwResourceCount;
+    UINT32 FwResourceCountMax;
+    UINT64 FwResourceVersion;
+    EFI_SYSTEM_RESOURCE_ENTRY Entries[];
+} EFI_SYSTEM_RESOURCE_TABLE;
 
 typedef EFI_STATUS
 (/* _not_ EFIAPI */ *EFI_SHIM_LOCK_VERIFY) (
@@ -567,6 +587,41 @@ static int __init efi_check_dt_boot(const EFI_LOADED_IMAGE *loaded_image)
 }
 #endif
 
+static UINTN __initdata esrt = EFI_INVALID_TABLE_ADDR;
+
+static size_t __init get_esrt_size(const EFI_MEMORY_DESCRIPTOR *desc)
+{
+    size_t available_len, len;
+    const UINTN physical_start = desc->PhysicalStart;
+    const EFI_SYSTEM_RESOURCE_TABLE *esrt_ptr;
+
+    len = desc->NumberOfPages << EFI_PAGE_SHIFT;
+    if ( esrt == EFI_INVALID_TABLE_ADDR )
+        return 0;
+    if ( physical_start > esrt || esrt - physical_start >= len )
+        return 0;
+    /*
+     * The specification requires EfiBootServicesData, but accept
+     * EfiRuntimeServicesData, which is a more logical choice.
+     */
+    if ( (desc->Type != EfiRuntimeServicesData) &&
+         (desc->Type != EfiBootServicesData) )
+        return 0;
+    available_len = len - (esrt - physical_start);
+    if ( available_len <= offsetof(EFI_SYSTEM_RESOURCE_TABLE, Entries) )
+        return 0;
+    available_len -= offsetof(EFI_SYSTEM_RESOURCE_TABLE, Entries);
+    esrt_ptr = (const EFI_SYSTEM_RESOURCE_TABLE *)esrt;
+    if ( (esrt_ptr->FwResourceVersion !=
+          EFI_SYSTEM_RESOURCE_TABLE_FIRMWARE_RESOURCE_VERSION) ||
+         !esrt_ptr->FwResourceCount )
+        return 0;
+    if ( esrt_ptr->FwResourceCount > available_len / sizeof(esrt_ptr->Entries[0]) )
+        return 0;
+
+    return esrt_ptr->FwResourceCount * sizeof(esrt_ptr->Entries[0]);
+}
+
 /*
  * Include architecture specific implementation here, which references the
  * static globals defined above.
@@ -845,6 +900,8 @@ static UINTN __init efi_find_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop,
     return gop_mode;
 }
 
+static EFI_GUID __initdata esrt_guid = EFI_SYSTEM_RESOURCE_TABLE_GUID;
+
 static void __init efi_tables(void)
 {
     unsigned int i;
@@ -868,6 +925,8 @@ static void __init efi_tables(void)
             efi.smbios = (unsigned long)efi_ct[i].VendorTable;
         if ( match_guid(&smbios3_guid, &efi_ct[i].VendorGuid) )
             efi.smbios3 = (unsigned long)efi_ct[i].VendorTable;
+        if ( match_guid(&esrt_guid, &efi_ct[i].VendorGuid) )
+            esrt = (UINTN)efi_ct[i].VendorTable;
     }
 
 #ifndef CONFIG_ARM /* TODO - disabled until implemented on ARM */
@@ -1051,6 +1110,70 @@ static void __init efi_set_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop, UINTN gop
 #define INVALID_VIRTUAL_ADDRESS (0xBAAADUL << \
                                  (EFI_PAGE_SHIFT + BITS_PER_LONG - 32))
 
+static void __init efi_relocate_esrt(EFI_SYSTEM_TABLE *SystemTable)
+{
+    EFI_STATUS status;
+    UINTN info_size = 0, map_key, mdesc_size;
+    void *memory_map = NULL;
+    UINT32 ver;
+    unsigned int i;
+
+    for ( ; ; ) {
+        status = efi_bs->GetMemoryMap(&info_size, memory_map, &map_key,
+                                      &mdesc_size, &ver);
+        if ( status == EFI_SUCCESS && memory_map != NULL )
+            break;
+        if ( status == EFI_BUFFER_TOO_SMALL || memory_map == NULL )
+        {
+            info_size += 8 * efi_mdesc_size;
+            if ( memory_map != NULL )
+                efi_bs->FreePool(memory_map);
+            memory_map = NULL;
+            status = efi_bs->AllocatePool(EfiLoaderData, info_size, &memory_map);
+            if ( status == EFI_SUCCESS )
+                continue;
+            PrintErr(L"Cannot allocate memory to relocate ESRT\r\n");
+        }
+        else
+            PrintErr(L"Cannot obtain memory map to relocate ESRT\r\n");
+        return;
+    }
+
+    /* Try to obtain the ESRT.  Errors are not fatal. */
+    for ( i = 0; i < info_size; i += efi_mdesc_size )
+    {
+        /*
+         * ESRT needs to be moved to memory of type EfiRuntimeServicesData
+         * so that the memory it is in will not be used for other purposes.
+         */
+        void *new_esrt = NULL;
+        size_t esrt_size = get_esrt_size(efi_memmap + i);
+
+        if ( !esrt_size )
+            continue;
+        if ( ((EFI_MEMORY_DESCRIPTOR *)(efi_memmap + i))->Type ==
+             EfiRuntimeServicesData )
+            break; /* ESRT already safe from reuse */
+        status = efi_bs->AllocatePool(EfiRuntimeServicesData, esrt_size,
+                                      &new_esrt);
+        if ( status == EFI_SUCCESS && new_esrt )
+        {
+            memcpy(new_esrt, (void *)esrt, esrt_size);
+            status = efi_bs->InstallConfigurationTable(&esrt_guid, new_esrt);
+            if ( status != EFI_SUCCESS )
+            {
+                PrintErr(L"Cannot install new ESRT\r\n");
+                efi_bs->FreePool(new_esrt);
+            }
+        }
+        else
+            PrintErr(L"Cannot allocate memory for ESRT\r\n");
+        break;
+    }
+
+    efi_bs->FreePool(memory_map);
+}
+
 static void __init efi_exit_boot(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     EFI_STATUS status;
@@ -1413,6 +1536,8 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
     if ( gop )
         efi_set_gop_mode(gop, gop_mode);
 
+    efi_relocate_esrt(SystemTable);
+
     efi_exit_boot(ImageHandle, SystemTable);
 
     efi_arch_post_exit_boot(); /* Doesn't return. */
@@ -1753,3 +1878,12 @@ void __init efi_init_memory(void)
     unmap_domain_page(efi_l4t);
 }
 #endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 18:22:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 18:22:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355822.583728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nws-0004Of-Qr; Fri, 24 Jun 2022 18:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355822.583728; Fri, 24 Jun 2022 18:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4nws-0004OY-NS; Fri, 24 Jun 2022 18:22:30 +0000
Received: by outflank-mailman (input) for mailman id 355822;
 Fri, 24 Jun 2022 18:22:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4nwr-0004OS-HE
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 18:22:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nwr-0004ve-0R; Fri, 24 Jun 2022 18:22:29 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4nwq-0005jp-Pb; Fri, 24 Jun 2022 18:22:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <3b7b32cb-df48-e458-e8a9-f17e86f39c9a@xen.org>
Date: Fri, 24 Jun 2022 19:22:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-3-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> From: Penny Zheng <penny.zheng@arm.com>
> 
> This commit introduces process_shm to cope with static shared memory in
> domain construction.
> 
> DOMID_IO will be the default owner of memory pre-shared among multiple domains
> at boot time, when no explicit owner is specified.

The document in patch #1 suggests the page will be shared with 
dom_shared. But here you say "DOMID_IO".

Which one is correct?

> 
> This commit only considers allocating static shared memory to dom_io
> when owner domain is not explicitly defined in device tree, all the left,
> including the "borrower" code path, the "explicit owner" code path, shall
> be introduced later in the following patches.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v5 change:
> - refine in-code comment
> ---
> v4 change:
> - no changes
> ---
> v3 change:
> - refine in-code comment
> ---
> v2 change:
> - instead of introducing a new system domain, reuse the existing dom_io
> - make dom_io a non-auto-translated domain, then no need to create P2M
> for it
> - change dom_io definition and make it wider to support static shm here too
> - introduce is_shm_allocated_to_domio to check whether static shm is
> allocated yet, instead of using shm_mask bitmap
> - add in-code comment
> ---
>   xen/arch/arm/domain_build.c | 132 +++++++++++++++++++++++++++++++++++-
>   xen/common/domain.c         |   3 +
>   2 files changed, 134 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7ddd16c26d..91a5ace851 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -527,6 +527,10 @@ static bool __init append_static_memory_to_bank(struct domain *d,
>       return true;
>   }
>   
> +/*
> + * If cell is NULL, pbase and psize should hold valid values.
> + * Otherwise, cell will be populated together with pbase and psize.
> + */
>   static mfn_t __init acquire_static_memory_bank(struct domain *d,
>                                                  const __be32 **cell,
>                                                  u32 addr_cells, u32 size_cells,
> @@ -535,7 +539,8 @@ static mfn_t __init acquire_static_memory_bank(struct domain *d,
>       mfn_t smfn;
>       int res;
>   
> -    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
> +    if ( cell )
> +        device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);

I think this is a bit of a hack. To me it sounds like this should be 
moved out to a separate helper. This will also make the interface of 
acquire_shared_memory_bank() less questionable (see below).

As this is v5, I would be OK with a follow-up for this split. But this 
interface of acquire_shared_memory_bank() needs to change.

>       ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));

In the context of your series, who is checking that both psize and pbase 
are suitably aligned?
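
For illustration, the kind of explicit validation being asked about could
look like the sketch below: reject misaligned device-tree input rather than
only ASSERTing on it.  shm_bank_sane() is a hypothetical name, not an
existing Xen helper; IS_ALIGNED is the usual power-of-two alignment test.

```c
/* Validate a device-tree-provided (base, size) pair before use.
 * Bad firmware/DT input should be rejected, not ASSERTed on. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

static bool shm_bank_sane(uint64_t pbase, uint64_t psize)
{
    if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(psize, PAGE_SIZE) )
        return false;          /* misaligned: reject user input */
    if ( psize == 0 || pbase + psize < pbase )
        return false;          /* empty or wrapping range */
    return true;
}
```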

>       if ( PFN_DOWN(*psize) > UINT_MAX )
>       {
> @@ -759,6 +764,125 @@ static void __init assign_static_memory_11(struct domain *d,
>       panic("Failed to assign requested static memory for direct-map domain %pd.",
>             d);
>   }
> +
> +#ifdef CONFIG_STATIC_SHM
> +/*
> + * This function checks whether the static shared memory region is
> + * already allocated to dom_io.
> + */
> +static bool __init is_shm_allocated_to_domio(paddr_t pbase)
> +{
> +    struct page_info *page;
> +
> +    page = maddr_to_page(pbase);
> +    ASSERT(page);

maddr_to_page() can never return NULL. If you want to check a page will 
be valid, then you should use mfn_valid().

However, the ASSERT() implies that the address was suitably checked 
before. But I can't find such check.

> +
> +    if ( page_get_owner(page) == NULL )
> +        return false;
> +
> +    ASSERT(page_get_owner(page) == dom_io);
Could this be hit because of a wrong device-tree? If yes, then this 
should not be an ASSERT() (they are not suitable to check user input).

> +    return true;
> +}
> +
> +static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> +                                               u32 addr_cells, u32 size_cells,
> +                                               paddr_t *pbase, paddr_t *psize)

There is something that doesn't add up in this interface. The use of 
pointers implies that pbase and psize may be modified by the function. So...

> +{
> +    /*
> +     * Pages of statically shared memory shall be included
> +     * in domain_tot_pages().
> +     */
> +    d->max_pages += PFN_DOWN(*psize);

... it sounds a bit strange to use psize here. If psize can't be 
modified, then it should probably not be a pointer.

Also, where do you check that d->max_pages will not overflow?
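
For reference, a self-contained sketch of the kind of guard meant here 
(the type and function names are made up for this example and do not 
match Xen's actual max_pages handling):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative overflow guard: before growing a domain's page budget,
 * verify the addition cannot wrap the counter. */
static bool grow_max_pages(uint32_t *max_pages, uint32_t nr_pages)
{
    if ( nr_pages > UINT32_MAX - *max_pages )
        return false;           /* would overflow: reject the region */

    *max_pages += nr_pages;
    return true;
}
```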

> +
> +    return acquire_static_memory_bank(d, NULL, addr_cells, size_cells,
> +                                      pbase, psize);
> +
> +}
> +
> +/*
> + * Func allocate_shared_memory is supposed to be only called

I am a bit concerned with the word "supposed". Are you implying that it 
may be called by someone that is not the owner? If not, then it should 
be "should".

Also NIT: spell out "func" completely, i.e. "The function".

> + * from the owner.

I read "from the owner" as "current should be the owner". But I guess 
this is not what you mean here. Instead, it looks like you mean "d" is 
the owner. So I would write "d should be the owner of the shared area".

It would be good to have a check/ASSERT confirm this (assuming this is 
easy to write).

> + */
> +static int __init allocate_shared_memory(struct domain *d,
> +                                         u32 addr_cells, u32 size_cells,
> +                                         paddr_t pbase, paddr_t psize)
> +{
> +    mfn_t smfn;
> +
> +    dprintk(XENLOG_INFO,
> +            "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
> +            pbase, pbase + psize);

NIT: I would suggest also printing the domain. This could help to easily 
figure out that 'd' wasn't the owner.

> +
> +    smfn = acquire_shared_memory_bank(d, addr_cells, size_cells, &pbase,
> +                                      &psize);
> +    if ( mfn_eq(smfn, INVALID_MFN) )
> +        return -EINVAL;
> +
> +    /*
> +     * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
> +     * It sees RAM 1:1 and we do not need to create P2M mapping for it
> +     */
> +    ASSERT(d == dom_io);
> +    return 0;
> +}
> +
> +static int __init process_shm(struct domain *d,
> +                              const struct dt_device_node *node)
> +{
> +    struct dt_device_node *shm_node;
> +    int ret = 0;
> +    const struct dt_property *prop;
> +    const __be32 *cells;
> +    u32 shm_id;
> +    u32 addr_cells, size_cells;
> +    paddr_t gbase, pbase, psize;
> +
> +    dt_for_each_child_node(node, shm_node)
> +    {
> +        if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
> +            continue;
> +
> +        if ( !dt_property_read_u32(shm_node, "xen,shm-id", &shm_id) )
> +        {
> +            printk("Shared memory node does not provide \"xen,shm-id\" property.\n");
> +            return -ENOENT;
> +        }
> +
> +        addr_cells = dt_n_addr_cells(shm_node);
> +        size_cells = dt_n_size_cells(shm_node);
> +        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
> +        if ( !prop )
> +        {
> +            printk("Shared memory node does not provide \"xen,shared-mem\" property.\n");
> +            return -ENOENT;
> +        }
> +        cells = (const __be32 *)prop->value;
> +        /* xen,shared-mem = <pbase, psize, gbase>; */
> +        device_tree_get_reg(&cells, addr_cells, size_cells, &pbase, &psize);
> +        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));

See above about what ASSERT()s are for.

> +        gbase = dt_read_number(cells, addr_cells);
> +
> +        /* TODO: Consider owner domain is not the default dom_io. */
> +        /*
> +         * Per static shared memory region could be shared between multiple
> +         * domains.
> +         * In case re-allocating the same shared memory region, we check
> +         * if it is already allocated to the default owner dom_io before
> +         * the actual allocation.
> +         */
> +        if ( !is_shm_allocated_to_domio(pbase) )
> +        {
> +            /* Allocate statically shared pages to the default owner dom_io. */
> +            ret = allocate_shared_memory(dom_io, addr_cells, size_cells,
> +                                         pbase, psize);
> +            if ( ret )
> +                return ret;
> +        }
> +    }
> +
> +    return 0;
> +}
> +#endif /* CONFIG_STATIC_SHM */
>   #else
>   static void __init allocate_static_memory(struct domain *d,
>                                             struct kernel_info *kinfo,
> @@ -3236,6 +3360,12 @@ static int __init construct_domU(struct domain *d,
>       else
>           assign_static_memory_11(d, &kinfo, node);
>   
> +#ifdef CONFIG_STATIC_SHM
> +    rc = process_shm(d, node);
> +    if ( rc < 0 )
> +        return rc;
> +#endif
> +
>       /*
>        * Base address and irq number are needed when creating vpl011 device
>        * tree node in prepare_dtb_domU, so initialization on related variables
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7570eae91a..7070f5a9b9 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -780,6 +780,9 @@ void __init setup_system_domains(void)
>        * This domain owns I/O pages that are within the range of the page_info
>        * array. Mappings occur at the priv of the caller.
>        * Quarantined PCI devices will be associated with this domain.
> +     *
> +     * DOMID_IO is also the default owner of memory pre-shared among multiple
> +     * domains at boot time.
>        */
>       dom_io = domain_create(DOMID_IO, NULL, 0);
>       if ( IS_ERR(dom_io) )

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:07:44 2022
Message-ID: <8cf391b9-02a3-6058-35cb-e0a63b8db854@xen.org>
Date: Fri, 24 Jun 2022 20:07:12 +0100
Subject: Re: [PATCH v5 3/8] xen/arm: allocate static shared memory to a
 specific owner domain
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-4-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-4-Penny.Zheng@arm.com>

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> If owner property is defined, then owner domain of a static shared memory
> region is not the default dom_io anymore, but a specific domain.
> 
> This commit implements allocating static shared memory to a specific domain
> when owner property is defined.
> 
> Coding flow for dealing borrower domain will be introduced later in the
> following commits.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v5 change:
> - no change
> ---
> v4 change:
> - no changes
> ---
> v3 change:
> - simplify the code since o_gbase is not used if the domain is dom_io
> ---
> v2 change:
> - P2M mapping is restricted to normal domain
> - in-code comment fix
> ---
>   xen/arch/arm/domain_build.c | 44 +++++++++++++++++++++++++++----------
>   1 file changed, 33 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 91a5ace851..d4fd64e2bd 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -805,9 +805,11 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>    */
>   static int __init allocate_shared_memory(struct domain *d,
>                                            u32 addr_cells, u32 size_cells,
> -                                         paddr_t pbase, paddr_t psize)
> +                                         paddr_t pbase, paddr_t psize,
> +                                         paddr_t gbase)
>   {
>       mfn_t smfn;
> +    int ret = 0;
>   
>       dprintk(XENLOG_INFO,
>               "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
> @@ -822,8 +824,18 @@ static int __init allocate_shared_memory(struct domain *d,
>        * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
>        * It sees RAM 1:1 and we do not need to create P2M mapping for it
>        */
> -    ASSERT(d == dom_io);
> -    return 0;
> +    if ( d != dom_io )
> +    {
> +        ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, PFN_DOWN(psize));

Coding style: this line is over 80 characters. And...

> +        if ( ret )
> +        {
> +            printk(XENLOG_ERR
> +                   "Failed to map shared memory to %pd.\n", d);

... this line could be merged with the previous one.

> +            return ret;
> +        }
> +    }
> +
> +    return ret;
>   }
>   
>   static int __init process_shm(struct domain *d,
> @@ -836,6 +848,8 @@ static int __init process_shm(struct domain *d,
>       u32 shm_id;
>       u32 addr_cells, size_cells;
>       paddr_t gbase, pbase, psize;
> +    const char *role_str;
> +    bool owner_dom_io = true;

I think it would be best if role_str and owner_dom_io were defined 
within the loop. The same goes for all the other declarations.

>   
>       dt_for_each_child_node(node, shm_node)
>       {
> @@ -862,19 +876,27 @@ static int __init process_shm(struct domain *d,
>           ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
>           gbase = dt_read_number(cells, addr_cells);
>   
> -        /* TODO: Consider owner domain is not the default dom_io. */
> +        /*
> +         * "role" property is optional and if it is defined explicitly,
> +         * then the owner domain is not the default "dom_io" domain.
> +         */
> +        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
> +            owner_dom_io = false
IIUC, the role is per-region. However, owner_dom_io is initialized only 
once, to true, outside the loop. So once a region with a "role" property 
sets it to false, the value will be stale for any following region that 
has no "role" property.

So I think you want to write:

owner_dom_io = !dt_property_read_string(...);

This can also be avoided if you reduce the scope of the variable (it is 
meant to only be used in the loop).
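
A self-contained sketch of the reduced-scope version (the lookup table 
and names here are made up for illustration; read_role() just mimics 
dt_property_read_string() returning 0 on success):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the per-region "role" lookup: returns 0 when the
 * property exists, non-zero when it is absent. */
static int read_role(int region, const char **role)
{
    static const char *const roles[] = { "owner", NULL, "borrower" };

    if ( roles[region] == NULL )
        return -1;                      /* "role" property absent */

    *role = roles[region];
    return 0;
}

static int count_dom_io_owned(int nr_regions)
{
    int i, n = 0;

    for ( i = 0; i < nr_regions; i++ )
    {
        const char *role_str = NULL;
        /* Declared inside the loop and always assigned from the lookup
         * result, so a region without "role" cannot inherit the value
         * left behind by the previous region. */
        bool owner_dom_io = (read_role(i, &role_str) != 0);

        if ( owner_dom_io )
            n++;
    }

    return n;
}
```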

> +
>           /*
>            * Per static shared memory region could be shared between multiple
>            * domains.
> -         * In case re-allocating the same shared memory region, we check
> -         * if it is already allocated to the default owner dom_io before
> -         * the actual allocation.
> +         * So when owner domain is the default dom_io, in case re-allocating
> +         * the same shared memory region, we check if it is already allocated
> +         * to the default owner dom_io before the actual allocation.
>            */
> -        if ( !is_shm_allocated_to_domio(pbase) )
> +        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
> +             (!owner_dom_io && strcmp(role_str, "owner") == 0) )
>           {
> -            /* Allocate statically shared pages to the default owner dom_io. */
> -            ret = allocate_shared_memory(dom_io, addr_cells, size_cells,
> -                                         pbase, psize);
> +            /* Allocate statically shared pages to the owner domain. */
> +            ret = allocate_shared_memory(owner_dom_io ? dom_io : d,
> +                                         addr_cells, size_cells,
> +                                         pbase, psize, gbase);
>               if ( ret )
>                   return ret;
>           }

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:10:25 2022
Message-ID: <e9c2f30b-a9fd-5ef4-b7b1-e8ff54c6175c@xen.org>
Date: Fri, 24 Jun 2022 20:10:19 +0100
Subject: Re: [PATCH v5 4/8] xen/arm: introduce put_page_nr and get_page_nr
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-5-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-5-Penny.Zheng@arm.com>

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> Later, we need to add the right amount of references, which should be
> the number of borrower domains, to the owner domain. Since we only have
> get_page() to increment the page reference by 1, a loop is needed per
> page, which is inefficient and time-consuming.
> 
> To save the loop time, this commit introduces a set of new helpers
> put_page_nr() and get_page_nr() to increment/drop the page reference by nr.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:16:50 2022
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>, "Tim (Xen.org)" <tim@xen.org>
Subject: Re: [PATCH 1/2] x86/shadow: slightly consolidate
 sh_unshadow_for_p2m_change()
Date: Fri, 24 Jun 2022 19:16:33 +0000
Message-ID: <87A27648-D543-4122-A354-A37CC4C4BEA4@citrix.com>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
 <521b39ce-2c2e-967e-ecc7-f66281aee562@suse.com>
In-Reply-To: <521b39ce-2c2e-967e-ecc7-f66281aee562@suse.com>



> On 9 Dec 2021, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
> 
> In preparation for reactivating the presently dead 2M page path of the
> function,
> - also deal with the case of replacing an L1 page table all in one go,
> - pull common checks out of the switch(). This includes extending a
>  _PAGE_PRESENT check to L1 as well, which presumably was deemed
>  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
>  better off being explicit in all cases,
> - replace a p2m_is_ram() check in the 2M case by an explicit
>  _PAGE_PRESENT one, to make more obvious that the subsequent
>  l1e_get_mfn() actually retrieves something that is actually an MFN.

Each of these changes requires careful checking to make sure there
aren't any bugs introduced.  I'd feel much more comfortable giving an
R-b if they were broken out into separate patches.

 -George


--Apple-Mail=_BBAEB1B2-92DA-4E8E-B9D6-67BA3A8C2166
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmK2DYIACgkQshXHp8eE
G+0ROggAt+Aj2MNbIfwEFxdCsSHn0db4K0r4zUR2frFPjEkFH21L6LqfJ+HN80Xw
bfShhmFH7CjiPOQgwzhBi2TwhSMYp7ixF4+Zw/GugguOmMBXxm1zV0Svt6uO/ndh
e3IICa+C0cmhEPUloueV9u0i5IDgrNA2jAP2uxAy+LB7ZWk2xHkjAJj6rf9liqp2
Bvg0zeFdY7rN+2fFELSN3tJ2kTtD6owrpD4KXYfAbmAiJ7oaLRoodY4wOt3pYUw7
LLWjk4WKGx7/ew9gSakLuKj+OrdbM/nq2eT381yE6QtLAbkhvKKVTMLD4TLdlUn1
2yVfaEc8wSH2h+2vCdCirABklHhUyQ==
=IvKB
-----END PGP SIGNATURE-----

--Apple-Mail=_BBAEB1B2-92DA-4E8E-B9D6-67BA3A8C2166--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:18:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:18:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355846.583772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4op3-0004YZ-0Y; Fri, 24 Jun 2022 19:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355846.583772; Fri, 24 Jun 2022 19:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4op2-0004YS-U3; Fri, 24 Jun 2022 19:18:28 +0000
Received: by outflank-mailman (input) for mailman id 355846;
 Fri, 24 Jun 2022 19:18:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4op2-0004Y7-8B
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:18:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4op1-0005yQ-Va; Fri, 24 Jun 2022 19:18:27 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4op1-0000B4-Oh; Fri, 24 Jun 2022 19:18:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=I4Fwegyrc1IZQItWF5i8KJDAzGvkaQ79eiPkaAvvogs=; b=v+/p9wzC/UoVO94s9LOzsFujuC
	+adZaDA4TZ/N+dgQSD74dwvcqAyVKTCHSugEVpf7JHHBUiORzh/w1lDTu3GhsDuRBE1QK3Ct60s6o
	NM2Aal0FKPadFlIJq6CvFZwfBavONAkRd6G85e7Jqxk9yXshbBRZx2C4ygBtJS0Si0xk=;
Message-ID: <3e397ff3-0b67-523c-179a-0a2035b081da@xen.org>
Date: Fri, 24 Jun 2022 20:18:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 5/8] xen/arm: Add additional reference to owner domain
 when the owner is allocated
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-6-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-6-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> Borrower domain will fail to get a page ref using the owner domain
> during allocation, when the owner is created after borrower.
> 
> So here, we decide to get and add the right amount of reference, which
> is the number of borrowers, when the owner is allocated.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v5 change:
> - no change
> ---
> v4 changes:
> - no change
> ---
> v3 change:
> - printk rather than dprintk since it is a serious error
> ---
> v2 change:
> - new commit
> ---
>   xen/arch/arm/domain_build.c | 62 +++++++++++++++++++++++++++++++++++++
>   1 file changed, 62 insertions(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index d4fd64e2bd..650d18f5ef 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -799,6 +799,34 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>   
>   }
>   
> +static int __init acquire_nr_borrower_domain(struct domain *d,
> +                                             paddr_t pbase, paddr_t psize,
> +                                             unsigned long *nr_borrowers)
> +{
> +    unsigned long bank;
> +
> +    /* Iterate reserved memory to find requested shm bank. */
> +    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
> +    {
> +        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
> +        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
> +
> +        if ( pbase == bank_start && psize == bank_size )
> +            break;
> +    }
> +
> +    if ( bank == bootinfo.reserved_mem.nr_banks )
> +        return -ENOENT;
> +
> +    if ( d == dom_io )
> +        *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_domain;
> +    else
> +        /* Exclude the owner domain itself. */
NIT: I think this comment wants to be just above the 'if' and expanded 
to explain why "dom_io" is not included. AFAIU, this is because 
"dom_io" is not described in the Device-Tree, so it would not be taken 
into account in nr_shm_domain.

> +        *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_domain - 1;

TBH, given the use here, I would have considered not incrementing 
nr_shm_domain in the parsing code when the role is owner. This is v5 
now, so I would be OK with the comment above.

But I would suggest considering that as a follow-up.

> +
> +    return 0;
> +}
> +
>   /*
>    * Func allocate_shared_memory is supposed to be only called
>    * from the owner.
> @@ -810,6 +838,8 @@ static int __init allocate_shared_memory(struct domain *d,
>   {
>       mfn_t smfn;
>       int ret = 0;
> +    unsigned long nr_pages, nr_borrowers, i;
> +    struct page_info *page;
>   
>       dprintk(XENLOG_INFO,
>               "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
> @@ -824,6 +854,7 @@ static int __init allocate_shared_memory(struct domain *d,
>        * DOMID_IO is the domain, like DOMID_XEN, that is not auto-translated.
>        * It sees RAM 1:1 and we do not need to create P2M mapping for it
>        */
> +    nr_pages = PFN_DOWN(psize);
>       if ( d != dom_io )
>       {
>           ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, PFN_DOWN(psize));
> @@ -835,6 +866,37 @@ static int __init allocate_shared_memory(struct domain *d,
>           }
>       }
>   
> +    /*
> +     * Get the right amount of references per page, which is the number of
> +     * borrow domains.
> +     */
> +    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
> +    if ( ret )
> +        return ret;
> +
> +    /*
> +     * Instead of let borrower domain get a page ref, we add as many

Typo: s/let/letting/

> +     * additional reference as the number of borrowers when the owner
> +     * is allocated, since there is a chance that owner is created
> +     * after borrower.

What if the borrower is created first? Wouldn't this result in adding 
pages to the P2M without a reference?

If yes, then I think this is worth an explanation.

> +     */
> +    page = mfn_to_page(smfn);

Where do you validate the range [smfn, nr_pages]?

> +    for ( i = 0; i < nr_pages; i++ )
> +    {
> +        if ( !get_page_nr(page + i, d, nr_borrowers) )
> +        {
> +            printk(XENLOG_ERR
> +                   "Failed to add %lu references to page %"PRI_mfn".\n",
> +                   nr_borrowers, mfn_x(smfn) + i);
> +            goto fail;
> +        }
> +    }
> +
> +    return 0;
> +
> + fail:
> +    while ( --i >= 0 )
> +        put_page_nr(page + i, nr_borrowers);
>       return ret;
>   }
>   
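One more observation on the unwind path: `i` is declared `unsigned long`,
so the condition `--i >= 0` is always true and the loop would never
terminate once entered. A minimal standalone sketch (illustrative names,
not the real Xen helpers) of the conventional `while ( i-- )` unwind:

```c
#include <stddef.h>

/* Toy model of "take a reference on each of n items, undo on failure".
 * take/undo here just tweak a counter, standing in for
 * get_page_nr()/put_page_nr(). */
static int take_refs(int *refs, size_t n, size_t fail_at)
{
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( i == fail_at )      /* simulated reference failure */
            goto fail;
        refs[i]++;
    }
    return 0;

 fail:
    /*
     * "while ( --i >= 0 )" would loop forever for unsigned i;
     * "while ( i-- )" visits i-1 .. 0 and then stops.
     */
    while ( i-- )
        refs[i]--;
    return -1;
}
```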

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:25:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355853.583783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ovx-000601-OV; Fri, 24 Jun 2022 19:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355853.583783; Fri, 24 Jun 2022 19:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4ovx-0005zu-Lr; Fri, 24 Jun 2022 19:25:37 +0000
Received: by outflank-mailman (input) for mailman id 355853;
 Fri, 24 Jun 2022 19:25:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4ovw-0005zo-92
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:25:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4ovv-00065i-Tt; Fri, 24 Jun 2022 19:25:35 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4ovv-0000Xr-La; Fri, 24 Jun 2022 19:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rPiSki+h44CFmX+VOegY1CamI283Ny0AJ7r0k2dZW5c=; b=6KNGtnpZtljpOp5AvZiWZfH9w9
	GS9WW2IPfpXTUNEi3Raz8kqtyqkFyzofLjJQb3vww2xapTT744dvNcOpOWOvWnfroQdbn1SGjG6Xc
	ZAeiPEcGxc5PIja9i9Ost2tj1Z3LIQUvfT3VgNemzDFQ6ZXNqEgu5tGUeIIJRMpsIewc=;
Message-ID: <f87c00c5-8253-0c51-4f05-e137d98fc149@xen.org>
Date: Fri, 24 Jun 2022 20:25:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

I have looked at the code and I have further questions about the binding.

On 20/06/2022 06:11, Penny Zheng wrote:
> ---
>   docs/misc/arm/device-tree/booting.txt | 120 ++++++++++++++++++++++++++
>   xen/arch/arm/Kconfig                  |   6 ++
>   xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
>   xen/arch/arm/include/asm/setup.h      |   3 +
>   4 files changed, 197 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 98253414b8..6467bc5a28 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -378,3 +378,123 @@ device-tree:
>   
>   This will reserve a 512MB region starting at the host physical address
>   0x30000000 to be exclusively used by DomU1.
> +
> +Static Shared Memory
> +====================
> +
> +The static shared memory device tree nodes allow users to statically set up
> +shared memory on dom0less system, enabling domains to do shm-based
> +communication.
> +
> +- compatible
> +
> +    "xen,domain-shared-memory-v1"
> +
> +- xen,shm-id
> +
> +    An 8-bit integer that represents the unique identifier of the shared memory
> +    region. The maximum identifier shall be "xen,shm-id = <0xff>".

There is nothing in Xen that will ensure that xen,shm-id will match for 
all the nodes using the same region.

I see you write it to the guest device-tree. However, there is a type 
mismatch: here you use an integer, whereas the guest binding uses a 
string.
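For illustration, a minimal node using just the two properties quoted
above might look like this (the node name, unit address, and value are
hypothetical; the binding defines further properties not shown here):

```
domU1-shared-mem@30000000 {
    compatible = "xen,domain-shared-memory-v1";
    xen,shm-id = <0x2>;
};
```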

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:27:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:27:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355859.583794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4oy3-0006b8-5U; Fri, 24 Jun 2022 19:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355859.583794; Fri, 24 Jun 2022 19:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4oy3-0006b1-21; Fri, 24 Jun 2022 19:27:47 +0000
Received: by outflank-mailman (input) for mailman id 355859;
 Fri, 24 Jun 2022 19:27:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jTea=W7=citrix.com=prvs=167c355c5=George.Dunlap@srs-se1.protection.inumbo.net>)
 id 1o4oy1-0006ar-PL
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:27:45 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b77d4b2e-f3f3-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 21:27:44 +0200 (CEST)
Received: from mail-sn1anam02lp2042.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 Jun 2022 15:27:41 -0400
Received: from CO1PR03MB5665.namprd03.prod.outlook.com (2603:10b6:303:94::6)
 by SA2PR03MB5804.namprd03.prod.outlook.com (2603:10b6:806:fb::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Fri, 24 Jun
 2022 19:27:39 +0000
Received: from CO1PR03MB5665.namprd03.prod.outlook.com
 ([fe80::8037:ee0a:e1bd:bdab]) by CO1PR03MB5665.namprd03.prod.outlook.com
 ([fe80::8037:ee0a:e1bd:bdab%7]) with mapi id 15.20.5373.016; Fri, 24 Jun 2022
 19:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b77d4b2e-f3f3-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656098864;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=VaOXPvTVh3H3j0ZMITtM3LzJ8kx4wmbAeWr20urHe7I=;
  b=Xy3JnZ5XW9NGLHLhQEFBmtrWDj2Am4za4MVWuz6QRz7aROcR8TSS5aP6
   EbgUicj5Z+l726ZaxIroUJM2deH+EovoYW+B0s7Cx0EwlLfh4IkNAJmpZ
   Z5qVcEp/WdS5Dal14obSJaUri8W9IffJ8DzB6REhmWjioE7Guzwd90M1a
   w=;
X-IronPort-RemoteIP: 104.47.57.42
X-IronPort-MID: 74375579
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,220,1650945600"; 
   d="asc'?scan'208";a="74375579"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DfqSO9xs6ohX0JsvXulV4IUzMwy9GtlqtmtlZnbRzmf45nUDhYgsZOGO5KYFVofVQ2hMvsHYxCunwN/Ozx1rrT5M8UuTJLGxg8ySaUbT4V0jdl/oke0rFgOq9RGIMJlpDp+VsgZTa6Na+5p0smSY4EkzYwr7ZTCK9hw7jOK3vsTSF+YpfmAWZjmsykhBp2PUUNBobCYFj4Vm41gDpv1+iCBJ/zL5fzoHKy0T2bU6vfay6L0mxsAKT09t+ExUZ+dUH55xPXTGrjmxj6QKmvyjz0WE3Yk/OBnxtt3UIjSvhVyYcmD3HmHjhmrMciavtdHmTCZcnDYU3kyM6RfvDEsqsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bsrzEX+WHrlrFoJN3ARiAkM3lVJAx9ElXP1C50OvT5w=;
 b=XbXRXGJ82PZNy7vnmyFS17VMVc7Zy+XQ2MssvgJ91x7hL7z+PCW8V87kC8/nk7FtW6MBR77Gol6pKyDcEWQCLFDhLWRMNvzryZOTJ9ovEgXuZMSFuEuJTTLgpywLxguYssedSoy5G8HOT17xWBbToFUYpcXL8NSYslhAREtHgP2D5OflJW6Y4Y8xEIm9ImDKqiDRvnIuJChmfGDmRHIQArPDpZTFzlNF3gn33V4Q0+dM/jxIWVaLqjKgvL/Vkou1tHIG1pJRaPoJ3XpR4ecWwMntswtUykpK2musgs0C0FV5HKof0YVvrRkGMyhU/eLApQhhTkMvgfma/MPHDsBBaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bsrzEX+WHrlrFoJN3ARiAkM3lVJAx9ElXP1C50OvT5w=;
 b=TPGQTgfqOkidiavSb7sGva3d1ElgpSM3eHtMXAyObMx1DQJOQ9djPCeToaLBAvQlrfwYLJpQlcqYh2pI35l8BtY7ANyqioXCszSqIR9Fha07BDWi9Ydr1U9Xa7gUQmiuJl9sr7v3FR9kDrNyTIPh3k2Bl0d5P3oaIDomSOD7cRQ=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>, "Tim (Xen.org)" <tim@xen.org>
Subject: Re: [PATCH 2/2] x86/P2M: allow 2M superpage use for shadowed guests
Thread-Topic: [PATCH 2/2] x86/P2M: allow 2M superpage use for shadowed guests
Thread-Index: AQHX7O/bKX0sOdH9/0aIlntq4rv2Q61gJw0A
Date: Fri, 24 Jun 2022 19:27:39 +0000
Message-ID: <8D91423A-67A6-40B3-A3D7-44711DC41A7E@citrix.com>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
 <7a80d08b-edd7-43c8-a7ce-42eb85d6f3be@suse.com>
In-Reply-To: <7a80d08b-edd7-43c8-a7ce-42eb85d6f3be@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 4ea361ce-18bd-4fda-4b64-08da56179a27
x-ms-traffictypediagnostic: SA2PR03MB5804:EE_
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: multipart/signed;
	boundary="Apple-Mail=_64CD8593-F695-4F63-AE43-137DE7998877";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: CO1PR03MB5665.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4ea361ce-18bd-4fda-4b64-08da56179a27
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Jun 2022 19:27:39.6537
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hIqRf5n4fH7jChdwGnO1cMMDnqcxI+8X/LJkxF2CQl6a66mfqo59npP7Zpr8Vx/g1bjC0ebgJH9n4xH482uwsphfMdOeI5W6mxtow6soWlA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5804

--Apple-Mail=_64CD8593-F695-4F63-AE43-137DE7998877
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8



> On 9 Dec 2021, at 11:27, Jan Beulich <jbeulich@suse.com> wrote:
> 
> For guests in shadow mode the P2M table gets used only by software. The
> only place where it matters whether superpages in the P2M can be dealt
> with is sh_unshadow_for_p2m_change().

It's easy to verify that this patch is doing what it claims to do; but
whether it's correct or not depends on the veracity of this claim here.
Rather than me having to duplicate whatever work you did to come to this
conclusion, can you briefly explain why it's true in a way that I can
easily verify?

e.g., all other accesses to the p2m in the shadow code are via
get_gfn_[something](), which (because it's in the p2m code) handles p2m
superpages correctly?

Everything else looks good here.

> That function has been capable of
> handling them even before commit 0ca1669871f8a ("P2M: check whether hap
> mode is enabled before using 2mb pages") disabled 2M use in this case
> for dubious reasons ("potential errors when hap is disabled").

I'm glad the days of random patches being checked in without comment or
discussion are behind us...

 -George


--Apple-Mail=_64CD8593-F695-4F63-AE43-137DE7998877
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEj3+7SZ4EDefWZFyCshXHp8eEG+0FAmK2ECoACgkQshXHp8eE
G+17BAf/dVhpD+0XMfIz7lGVIVBDcdvueI7VqgPIwruO9QjSBb6hcSs+oW4sBjCZ
Us7FDYOMDTnUpaHrvjQhuyvLMWsMELZxQQJR5DzHHfdqEmQb0RVVJOQiHsSZRPEM
1O6qkA+ryCVSHvi25d96bj81lYUKoYHl0ZiB9Hk4TAXImBRiXcyAaaoMl+FaB75q
IyHHwlP1MABxg3B9MU6DDIIzFSk6kb9aQ1qpqzRlHMOuPKmTtnCfQVvccpf04ZBM
8uj4Q/Zk64GlSp/9W0X+I02zaS6BMfdiSEfmgAPCvILsawyGoN6iIL1MdHnerMb9
n5ui6NGXnMtXXCy4lM2HedyC7WL4fw==
=Kssj
-----END PGP SIGNATURE-----

--Apple-Mail=_64CD8593-F695-4F63-AE43-137DE7998877--


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:30:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:30:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355864.583805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p0Q-0007zL-J8; Fri, 24 Jun 2022 19:30:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355864.583805; Fri, 24 Jun 2022 19:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p0Q-0007zE-GA; Fri, 24 Jun 2022 19:30:14 +0000
Received: by outflank-mailman (input) for mailman id 355864;
 Fri, 24 Jun 2022 19:30:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4p0P-0007z8-PS
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:30:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4p0P-0006As-HM; Fri, 24 Jun 2022 19:30:13 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4p0P-0000gp-Bj; Fri, 24 Jun 2022 19:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=wkyiEZEDmm24IGcBliVvx06/FI5j+95uuo4cUEmfNdE=; b=HE+EJ+AyXi3A4rPVKy2Io2ptW/
	8BT//VPpK4CNnOZmLjtrE5FOEumuBJSRTn8fzVhqjC7QJshs/Ly2RJGiIyjcvttI54GOCahP++Sd3
	ccz0g3uXEHfp78srQqBVPceIycpW+3wlnxbdrku9oEIyk77q1d6O4FTfRohk14yAqrkU=;
Message-ID: <84641d6e-202d-934c-9ea9-bbf090e29bdb@xen.org>
Date: Fri, 24 Jun 2022 20:30:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 7/8] xen/arm: create shared memory nodes in guest
 device tree
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-8-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620051114.210118-8-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 20/06/2022 06:11, Penny Zheng wrote:
> We expose the shared memory to the domU using the "xen,shared-memory-v1"
> reserved-memory binding. See
> Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt
> in Linux for the corresponding device tree binding.
> 
> To save the cost of re-parsing shared memory device tree configuration when
> creating shared memory nodes in guest device tree, this commit adds new field
> "shm_mem" to store shm-info per domain.
> 
> For each shared memory region, a range is exposed under
> the /reserved-memory node as a child node. Each range sub-node is
> named xen-shmem@<address> and has the following properties:
> - compatible:
>          compatible = "xen,shared-memory-v1"
> - reg:
>          the base guest physical address and size of the shared memory region
> - xen,id:
>          a string that identifies the shared memory region.
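
For illustration, the kind of fragment this ends up producing in the guest device tree would look something like the following (a sketch only; the address, size and id values are made up, and note that this patch emits "xen,id" as a cell rather than the string the Linux binding describes):

```dts
/* Hypothetical generated fragment; values are illustrative only. */
reserved-memory {
    #address-cells = <0x2>;
    #size-cells = <0x2>;
    ranges;

    xen-shmem@50000000 {
        compatible = "xen,shared-memory-v1";
        reg = <0x0 0x50000000 0x0 0x200000>; /* 2MB region at 0x50000000 */
        xen,id = <0x1>;
    };
};
```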
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> v5 change:
> - no change
> ---
> v4 change:
> - no change
> ---
> v3 change:
> - move field "shm_mem" to kernel_info
> ---
> v2 change:
> - using xzalloc
> - shm_id should be uint8_t
> - make reg a local variable
> - add #address-cells and #size-cells properties
> - fix alignment
> ---
>   xen/arch/arm/domain_build.c       | 143 +++++++++++++++++++++++++++++-
>   xen/arch/arm/include/asm/kernel.h |   1 +
>   xen/arch/arm/include/asm/setup.h  |   1 +
>   3 files changed, 143 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 1584e6c2ce..4d62440a0e 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -900,7 +900,22 @@ static int __init allocate_shared_memory(struct domain *d,
>       return ret;
>   }
>   
> -static int __init process_shm(struct domain *d,
> +static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
> +                                            paddr_t start, paddr_t size,
> +                                            u32 shm_id)
> +{
> +    if ( (kinfo->shm_mem.nr_banks + 1) > NR_MEM_BANKS )
> +        return -ENOMEM;
> +
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id = shm_id;
> +    kinfo->shm_mem.nr_banks++;
> +
> +    return 0;
> +}
> +
> +static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>                                 const struct dt_device_node *node)
>   {
>       struct dt_device_node *shm_node;
> @@ -971,6 +986,14 @@ static int __init process_shm(struct domain *d,
>               if ( ret )
>                   return ret;
>           }
> +
> +        /*
> +         * Record static shared memory region info for later setting
> +         * up shm-node in guest device tree.
> +         */
> +        ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id);
> +        if ( ret )
> +            return ret;
>       }
>   
>       return 0;
> @@ -1301,6 +1324,117 @@ static int __init make_memory_node(const struct domain *d,
>       return res;
>   }
>   
> +#ifdef CONFIG_STATIC_SHM
> +static int __init make_shm_memory_node(const struct domain *d,
> +                                       void *fdt,
> +                                       int addrcells, int sizecells,
> +                                       struct meminfo *mem)

NIT: AFAICT mem is not changed, so it should be const.

> +{
> +    unsigned long i = 0;

NIT: This should be "unsigned int" to match the type of nr_banks.

> +    int res = 0;
> +
> +    if ( mem->nr_banks == 0 )
> +        return -ENOENT;
> +
> +    /*
> +     * For each shared memory region, a range is exposed under
> +     * the /reserved-memory node as a child node. Each range sub-node is
> +     * named xen-shmem@<address>.
> +     */
> +    dt_dprintk("Create xen-shmem node\n");
> +
> +    for ( ; i < mem->nr_banks; i++ )
> +    {
> +        uint64_t start = mem->bank[i].start;
> +        uint64_t size = mem->bank[i].size;
> +        uint8_t shm_id = mem->bank[i].shm_id;
> +        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
> +        char buf[27];
> +        const char compat[] = "xen,shared-memory-v1";
> +        __be32 reg[4];
> +        __be32 *cells;
> +        unsigned int len = (addrcells + sizecells) * sizeof(__be32);
> +
> +        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
> +        res = fdt_begin_node(fdt, buf);
> +        if ( res )
> +            return res;
> +
> +        res = fdt_property(fdt, "compatible", compat, sizeof(compat));
> +        if ( res )
> +            return res;
> +
> +        cells = reg;
> +        dt_child_set_range(&cells, addrcells, sizecells, start, size);
> +
> +        res = fdt_property(fdt, "reg", reg, len);
> +        if ( res )
> +            return res;
> +
> +        dt_dprintk("Shared memory bank %lu: %#"PRIx64"->%#"PRIx64"\n",
> +                   i, start, start + size);
> +
> +        res = fdt_property_cell(fdt, "xen,id", shm_id);

Looking at the Linux binding, "xen,id" is meant to be a string. But here 
you are writing it as an integer.

Given that the Linux binding is already merged, I think the Xen binding 
should be changed.

> +        if ( res )
> +            return res;
> +
> +        res = fdt_end_node(fdt);
> +        if ( res )
> +            return res;
> +    }
> +
> +    return res;
> +}
> +#else
> +static int __init make_shm_memory_node(const struct domain *d,
> +                                       void *fdt,
> +                                       int addrcells, int sizecells,
> +                                       struct meminfo *mem)
> +{
> +    ASSERT_UNREACHABLE();
> +}
> +#endif
> +
> +static int __init make_resv_memory_node(const struct domain *d,
> +                                        void *fdt,
> +                                        int addrcells, int sizecells,
> +                                        struct meminfo *mem)
> +{
> +    int res = 0;
> +    /* Placeholder for reserved-memory\0 */
> +    char resvbuf[16] = "reserved-memory";
> +
> +    if ( mem->nr_banks == 0 )
> +        /* No shared memory provided. */
> +        return 0;
> +
> +    dt_dprintk("Create reserved-memory node\n");
> +
> +    res = fdt_begin_node(fdt, resvbuf);
> +    if ( res )
> +        return res;
> +
> +    res = fdt_property(fdt, "ranges", NULL, 0);
> +    if ( res )
> +        return res;
> +
> +    res = fdt_property_cell(fdt, "#address-cells", addrcells);
> +    if ( res )
> +        return res;
> +
> +    res = fdt_property_cell(fdt, "#size-cells", sizecells);
> +    if ( res )
> +        return res;
> +
> +    res = make_shm_memory_node(d, fdt, addrcells, sizecells, mem);
> +    if ( res )
> +        return res;
> +
> +    res = fdt_end_node(fdt);
> +
> +    return res;
> +}
> +
>   static int __init add_ext_regions(unsigned long s, unsigned long e, void *data)
>   {
>       struct meminfo *ext_regions = data;
> @@ -3078,6 +3212,11 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>       if ( ret )
>           goto err;
>   
> +    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
> +                                &kinfo->shm_mem);
> +    if ( ret )
> +        goto err;
> +
>       /*
>        * domain_handle_dtb_bootmodule has to be called before the rest of
>        * the device tree is generated because it depends on the value of
> @@ -3454,7 +3593,7 @@ static int __init construct_domU(struct domain *d,
>           assign_static_memory_11(d, &kinfo, node);
>   
>   #ifdef CONFIG_STATIC_SHM
> -    rc = process_shm(d, node);
> +    rc = process_shm(d, &kinfo, node);
>       if ( rc < 0 )
>           return rc;
>   #endif
> diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
> index c4dc039b54..2cc506b100 100644
> --- a/xen/arch/arm/include/asm/kernel.h
> +++ b/xen/arch/arm/include/asm/kernel.h
> @@ -19,6 +19,7 @@ struct kernel_info {
>       void *fdt; /* flat device tree */
>       paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
>       struct meminfo mem;
> +    struct meminfo shm_mem;
>   
>       /* kernel entry point */
>       paddr_t entry;
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index 5063e5d077..7497cc40aa 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -29,6 +29,7 @@ struct membank {
>       bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
>   #ifdef CONFIG_STATIC_SHM
>       unsigned int nr_shm_domain;
> +    uint8_t shm_id; /* Identifier of a static shared memory bank. */

I am not entirely happy that we are defining shm_id for everyone. We are 
at v5, so I am OK for now.

But I would at least like "shm_id" to be defined before nr_shm_domain so 
we re-use the existing hole and avoid increasing the size of the structure.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:35:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355872.583815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p5N-0000GO-9s; Fri, 24 Jun 2022 19:35:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355872.583815; Fri, 24 Jun 2022 19:35:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p5N-0000GG-6y; Fri, 24 Jun 2022 19:35:21 +0000
Received: by outflank-mailman (input) for mailman id 355872;
 Fri, 24 Jun 2022 19:35:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4p5L-0000G4-Ny; Fri, 24 Jun 2022 19:35:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4p5L-0006GC-GM; Fri, 24 Jun 2022 19:35:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4p5K-0006tP-Ug; Fri, 24 Jun 2022 19:35:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4p5K-0002ZE-UG; Fri, 24 Jun 2022 19:35:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0cAWnG6XR21gVv2vsBZD/x1GKfg2Fx3RVXfoJNegMjM=; b=sciu2qgaWkIGMuffDxgxaiF84S
	aUYwRKB20Ne971xdbtmdc6I0bzzo3LF90yp/ze/9Rg4LoNuIvsu8Onk+G+ltvHH8+by75E8C1RuSy
	dovVsoFJkzfNyHDQE2uM37WioFNgkrZuSyOMqIxbUP+tSv8uThxIXMrTuGNFiKnEhv7Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171341: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3a821c52e1a30ecd9a436f2c67cc66b5628c829f
X-Osstest-Versions-That:
    qemuu=7db86fe2ed220c196061824e652b94e7a2acbabf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 19:35:18 +0000

flight 171341 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171341/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171335
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171335
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171335
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171335
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171335
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171335
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171335
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171335
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                3a821c52e1a30ecd9a436f2c67cc66b5628c829f
baseline version:
 qemuu                7db86fe2ed220c196061824e652b94e7a2acbabf

Last test of basis   171335  2022-06-23 19:07:18 Z    1 days
Testing same since   171341  2022-06-24 08:06:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Klaus Jensen <k.jensen@samsung.com>
  Lukasz Maniak <lukasz.maniak@linux.intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Łukasz Gieryk <lukasz.gieryk@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7db86fe2ed..3a821c52e1  3a821c52e1a30ecd9a436f2c67cc66b5628c829f -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:37:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355878.583826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p7D-0000rU-M7; Fri, 24 Jun 2022 19:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355878.583826; Fri, 24 Jun 2022 19:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4p7D-0000rN-Jd; Fri, 24 Jun 2022 19:37:15 +0000
Received: by outflank-mailman (input) for mailman id 355878;
 Fri, 24 Jun 2022 19:37:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dybx=W7=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1o4p7C-0000rH-DR
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:37:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0afd581d-f3f5-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 21:37:12 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4BE2D62221;
 Fri, 24 Jun 2022 19:37:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 1D47FC341C0;
 Fri, 24 Jun 2022 19:37:11 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 EFAE0E737F0; Fri, 24 Jun 2022 19:37:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0afd581d-f3f5-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656099431;
	bh=LgXu9p3qO29gKCNdZSeymheH+7qUMJ29Icfx2fBLIaI=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=toLOayYmotAOrNDg/xUWQyxPcS+xatJ2UCnzkBgv8PBrptkqUySp6mRC++ZGG/scR
	 vDNxfq+lmYkVveB3AeSD4sYkKpChdN0LvzeGv5GMn/aAmKFJF6i6Ft9/lmH6CKdv/O
	 WdURQGUqmdzD7+up1DOCwDtC4audF7YiGFfu7APCSJUJSp/0SCaG9OTNS5br0GEP2r
	 586MMq7Xoct/GXMlFfgdLtDEo+yiXYVe5FdKIqJvZnqv7v5EiQBd7GX8XrgHuHYn2g
	 WMHUeWDL6j8emYWOo6jSxxEUMnL14qVh5e6+/H2hKl4lD+mGQtWaWblE/08FT/pk2A
	 WYlMCW+xEDkSg==
Subject: Re: [GIT PULL] xen: branch for v5.19-rc4
From: pr-tracker-bot@kernel.org
In-Reply-To: <20220624160736.14606-1-jgross@suse.com>
References: <20220624160736.14606-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20220624160736.14606-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc4-tag
X-PR-Tracked-Commit-Id: dbe97cff7dd9f0f75c524afdd55ad46be3d15295
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 2c39d612aa5f34d63d264598692a7e6cd4fb34eb
Message-Id: <165609943097.3020.15887140717397581468.pr-tracker-bot@kernel.org>
Date: Fri, 24 Jun 2022 19:37:10 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Fri, 24 Jun 2022 18:07:36 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.19a-rc4-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/2c39d612aa5f34d63d264598692a7e6cd4fb34eb

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:50:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:50:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355885.583838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pKA-0003C4-SB; Fri, 24 Jun 2022 19:50:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355885.583838; Fri, 24 Jun 2022 19:50:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pKA-0003Bx-PA; Fri, 24 Jun 2022 19:50:38 +0000
Received: by outflank-mailman (input) for mailman id 355885;
 Fri, 24 Jun 2022 19:50:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o4pK9-0003Br-7r
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 19:50:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4pK8-0006aX-OL; Fri, 24 Jun 2022 19:50:36 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.4.76])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o4pK8-0001q6-HS; Fri, 24 Jun 2022 19:50:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tCMYDyGKBEyFr4nbKxbGiqy58oA2gpTKGvWP2roHCNY=; b=fW/kUOzTaAvMyV1nfLCS0pzkbS
	w3Q3UlyZByg8XMWXIlhfEtmNJAdlJN5UpTfGIBV6xf9Oucy3FIbQk+D6/h9OhR6ykgfGXKKTAn7xo
	VDslrcRB3SGrxis5xu/UAWSSSshQegAIvqz6FNpFCfaIt0rGa7evSoD1Y0Jc6N6ySsH0=;
Message-ID: <ae94da35-40d5-f65c-1df5-3ebde3aa86a3@xen.org>
Date: Fri, 24 Jun 2022 20:50:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-10-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220620024408.203797-10-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 20/06/2022 03:44, Penny Zheng wrote:
> When a static domain populates memory through populate_physmap at runtime,
> it shall retrieve reserved pages from resv_page_list to make sure that
> guest RAM is still restricted in statically configured memory regions.
> This commit also introduces a new helper acquire_reserved_page to make it work.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v7 changes:
> - remove the lock, since we add the page to rsv_page_list after it has
> been totally freed.

Hmmm... Adding the page after it has been totally freed doesn't mean you 
can get away without the lock. AFAICT you can still have concurrent 
free/allocate operations that modify the list.

Therefore, if you add/remove pages without holding the lock, you would 
end up corrupting the list.

If you disagree, then please point out which lock (or mechanism) will 
prevent concurrent access.
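
To make the concern concrete: the contended pattern is two paths (free and
allocate) both mutating one shared list. A minimal Python sketch of the safe,
locked version follows — this is illustrative only, not Xen code; the name
resv_page_list mirrors the patch, and `lock` stands in for whatever mechanism
ends up serializing the list:

```python
import threading

class PagePool:
    """Toy model of the patch's resv_page_list: a shared free list that
    the free path and the allocate path both modify concurrently."""

    def __init__(self, pages):
        self.resv_page_list = list(pages)
        # Stands in for whatever lock serializes list access in the hypervisor.
        self.lock = threading.Lock()

    def free_page(self, page):
        # Free path: return a fully freed page to the reserved list.
        with self.lock:
            self.resv_page_list.append(page)

    def acquire_reserved_page(self):
        # populate_physmap path: take a page off the reserved list.
        with self.lock:
            return self.resv_page_list.pop() if self.resv_page_list else None

def churn(pool, rounds):
    # Repeatedly allocate and immediately free, racing the other threads.
    for _ in range(rounds):
        page = pool.acquire_reserved_page()
        if page is not None:
            pool.free_page(page)

pool = PagePool(range(8))
threads = [threading.Thread(target=churn, args=(pool, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, no page is ever lost or duplicated. (In real C code
# without the lock, the list-pointer updates can interleave and corrupt
# the list; CPython's GIL happens to hide that here, so this sketch only
# demonstrates the locking pattern, not the crash itself.)
print(sorted(pool.resv_page_list))
```

The point of the sketch is that "the page is fully freed before being added"
only removes one race (use-after-free of the page contents); it does nothing
for the list-manipulation race between concurrent adders and removers.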

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 19:53:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 19:53:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355891.583849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pMc-0003nY-AM; Fri, 24 Jun 2022 19:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355891.583849; Fri, 24 Jun 2022 19:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pMc-0003nR-6h; Fri, 24 Jun 2022 19:53:10 +0000
Received: by outflank-mailman (input) for mailman id 355891;
 Fri, 24 Jun 2022 19:53:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pMa-0003mu-SW; Fri, 24 Jun 2022 19:53:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pMa-0006ch-Ri; Fri, 24 Jun 2022 19:53:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pMa-0007NV-Ee; Fri, 24 Jun 2022 19:53:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pMa-0005lI-EE; Fri, 24 Jun 2022 19:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lJA6zVmbx6cgQG0FurtmGLsNx/Kh1lC7WWzOE7VC0Is=; b=TI0Wvuaxoksnn0HAOEK847FGSX
	XM05MO6BRi/u2HKqp3Tngzy17Pd11ut0L4tPSgSq5Uc8t/7VF9/S/7Y8XqHDl5F3Y4EsYtCehghjV
	Xzh40cB1NOftYiiFVS8J46BwiKhqKoVkKJ1U8msE5PR+C6vDlpr9vAbFdDuC7qqdWCN8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171345: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=15b25045e6db2c82bc12973ed1629bbaeb3c0a57
X-Osstest-Versions-That:
    ovmf=2aee08c0b6bfb32d36bda17ab24645205a74df65
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 19:53:08 +0000

flight 171345 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171345/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 15b25045e6db2c82bc12973ed1629bbaeb3c0a57
baseline version:
 ovmf                 2aee08c0b6bfb32d36bda17ab24645205a74df65

Last test of basis   171343  2022-06-24 13:11:49 Z    0 days
Testing same since   171345  2022-06-24 18:13:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nicolas Ojeda Leon <ncoleon@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2aee08c0b6..15b25045e6  15b25045e6db2c82bc12973ed1629bbaeb3c0a57 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 20:03:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 20:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355901.583860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pWI-0005SZ-CI; Fri, 24 Jun 2022 20:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355901.583860; Fri, 24 Jun 2022 20:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4pWI-0005SS-95; Fri, 24 Jun 2022 20:03:10 +0000
Received: by outflank-mailman (input) for mailman id 355901;
 Fri, 24 Jun 2022 20:03:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pWH-0005Rz-KL; Fri, 24 Jun 2022 20:03:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pWH-0006sh-FZ; Fri, 24 Jun 2022 20:03:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pWG-0007bQ-Uw; Fri, 24 Jun 2022 20:03:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4pWG-0000lA-UQ; Fri, 24 Jun 2022 20:03:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h81FcNShrW1Td33MeisYRPwkGV86MQS+YQsw+i4uw1Q=; b=CQcmSW6MKChmw04Ld3K5nUXRnx
	R37mu/MxPtK9FmpgLNH7t2jH3KdjE55j/OWruW/OLId/80EFFJrIpZFWAtqMLgVEYvqr1Y4ijr7mK
	LaMWad9/BOTzeXE77Rq4c5VslpMaJECKaNVUgQU+pRjdsqX1fxTS9gsEHmwwZWxyBIL8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171342-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171342: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=92f20ff72066d8d7e2ffb655c2236259ac9d1c5d
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jun 2022 20:03:08 +0000

flight 171342 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171342/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                92f20ff72066d8d7e2ffb655c2236259ac9d1c5d
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    5 days
Failing since        171280  2022-06-19 15:12:25 Z    5 days   15 attempts
Testing same since   171337  2022-06-23 22:42:50 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Brian Foster <bfoster@redhat.com>
  Carlos Llamas <cmllamas@google.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Hoang Le <hoang.h.le@dektech.com.au>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Savitz <jsavitz@redhat.com>
  John Fastabend <john.fastabend@gmail.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Leo Yan <leo.yan@linaro.org>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Herring <robh@kernel.org>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  sunliming <sunliming@kylinos.cn>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@fb.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yu Liao <liaoyu15@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5108 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 24 21:32:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 21:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355909.583871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4quE-0005rG-So; Fri, 24 Jun 2022 21:31:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355909.583871; Fri, 24 Jun 2022 21:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4quE-0005r9-PR; Fri, 24 Jun 2022 21:31:58 +0000
Received: by outflank-mailman (input) for mailman id 355909;
 Fri, 24 Jun 2022 21:31:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+8T=W7=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o4quD-0005r3-K4
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 21:31:57 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11f40d04-f405-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 23:31:55 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id B2E86B82C48;
 Fri, 24 Jun 2022 21:31:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 23222C34114;
 Fri, 24 Jun 2022 21:31:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11f40d04-f405-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656106313;
	bh=W1rSOa14R6RGx69MRWTtq56HNTBQe1M4o61oyaeDxM0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=o08nZowaJ4AAF753cK7cPIsiiuGK6qJSYL2xHLJk40STZYNNnSz1xvvgqj7tHLvfx
	 qEBi14xgO4dS6z6huN2DiOYKR6SfUldyZp+fm219Wf1AjYu4z7DTPQU2DS61aHynCM
	 SdLeJwIA3rCowDZjmONHHk8BKyy7cdUGpbrVtUzuFcIKpvbkQQ6ixgK/dgrQhDy5bs
	 LVVCG7oD3+M3THvxDrINLgM0cNNZa6W+6+JT9E4PdBtNO02XcHfYRioWrteAsTL1HX
	 kp6+1HyfBYaHlTo6WWTTITMoohrxsUETA8UcsWD8Vua94sG2TmqPcPdH7xLBdAxmZv
	 KKPytNcKC3SnQ==
Date: Fri, 24 Jun 2022 14:31:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, dmitry.semenets@gmail.com, 
    xen-devel@lists.xenproject.org, Dmytro Semenets <dmytro_semenets@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com> <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop> <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 24 Jun 2022, Julien Grall wrote:
> On 23/06/2022 23:07, Stefano Stabellini wrote:
> > On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
> > > From: Dmytro Semenets <dmytro_semenets@epam.com>
> > So wouldn't it be better to remove the panic from the implementation of
> > call_psci_cpu_off?
> 
> I have asked to keep the panic() in call_psci_cpu_off(). If you remove the
> panic() then we will hide the fact that the CPU was not properly turned off
> and will consume more energy than expected.
> 
> The WFI loop is fine when shutting down or rebooting because we know this will
> only happen for a short period of time.

Yeah, I don't think we should hide that CPU_OFF failed. I think we
should print a warning. However, given that CPU_OFF can reasonably fail
under normal conditions, returning DENIED when a Trusted OS is present,
I think we should not panic.

If there were a way to distinguish a failure because a Trusted OS is
present (the "normal" failure) from other failures, I would suggest to:
- print a warning if it failed due to a Trusted OS being present
- panic in all other cases

Unfortunately it looks like in all cases the return code is DENIED :-(


Given that, I would not panic and would only print a warning in all
cases. Or we could ASSERT, which at least goes away in !DEBUG builds.


> > The reason I am saying this is that stop_cpu is called in a number of
> > places beyond halt_this_cpu and as far as I can tell any of them could
> > trigger the panic. (I admit they are unlikely places but still.)
> 
> This is one of the examples where the CPU will not be stopped for a short
> period of time. We should deal with them differently (i.e. migrating the
> trusted OS or else) so we give the CPU every chance to be fully powered off.
> 
> IMHO, this is a different issue and hence why I didn't ask Dmitry to solve it.

I see your point now. I was seeing the two things as one.

I think it is true that the WFI loop is likely to work. It is also true
that from a power perspective it makes no difference on power-down or
reboot.  From that point of view this patch is OK.

But even on shutdown/reboot, why not do that as a fallback in case
CPU_OFF didn't work? It is going to work most of the time anyway, so why
change the default for the few cases where it doesn't?

Given that this patch would work, I don't want to insist on this and
will let you decide.


But even if we don't want to remove the panic as part of this patch, I
think we should remove it in a separate patch anyway, at least until
someone investigates and comes up with a strategy for migrating the
TrustedOS as you suggested.


> > Also the PSCI spec explicitly mentions CPU_OFF as a way to place CPUs in
> > a "known state" and doesn't offer any other examples. So although what
> > you wrote in the commit message is correct, using CPU_OFF seems to be
> > the less error-prone way (in the sense of triggering firmware errors) to
> > place CPUs in a known state.
> 
> The section you are referring to starts with "One way". This implies there
> are other methods.
> 
> FWIW, the spin loop above seems to be how Linux deals with
> shutdown/reboot.
> 
> > 
> > So I would simply remove the panic from call_psci_cpu_off, so that we
> > try CPU_OFF first, and if it doesn't work, we use the WFI loop as
> > backup. Also we get to fix all the other callers of stop_cpu this way.
> This reads strangely. In the previous paragraph you suggested that CPU_OFF is
> a less error-prone way to place CPUs in a known state. But here, you are
> softening the stance and suggesting a fallback to the WFI loop.
> 
> So to me this indicates that the WFI loop is fine. Otherwise, wouldn't the
> user risk seeing firmware errors (which, BTW, I don't understand what sort of
> firmware errors you are worried about)? If yes, why would that be acceptable?
> 
> For instance, in Dmitry's situation, CPU0 would always WFI loop...



From xen-devel-bounces@lists.xenproject.org Fri Jun 24 21:57:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jun 2022 21:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355916.583881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4rIV-0008O8-13; Fri, 24 Jun 2022 21:57:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355916.583881; Fri, 24 Jun 2022 21:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4rIU-0008O1-Tq; Fri, 24 Jun 2022 21:57:02 +0000
Received: by outflank-mailman (input) for mailman id 355916;
 Fri, 24 Jun 2022 21:57:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b+8T=W7=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o4rIT-0008Nv-JU
 for xen-devel@lists.xenproject.org; Fri, 24 Jun 2022 21:57:01 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92c0caad-f408-11ec-b725-ed86ccbb4733;
 Fri, 24 Jun 2022 23:57:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 8A809B82B63;
 Fri, 24 Jun 2022 21:56:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E13E6C34114;
 Fri, 24 Jun 2022 21:56:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92c0caad-f408-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656107818;
	bh=nW//reqxFHN8IpxxmzQEx9kqa7PnsGTgPmjdUQkcd0U=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QyI1xn1C5PQDAF2lIeVYl4zH4HmqJHM4qVZbjNzBqfl68uLqnn2KSXXOuG87z2nOs
	 CktYgC3FDXwlJFoTbC1PwEFATgW5LkcpcYCQwZ4upp1/qkLI4suDXTMqkYbxxGb4Ho
	 CeVlBdL5g2Voi/UnQpm4F7X8oGVo+HwGmld7JE7NCJ6RMDnCNeJorz12kqXLHW/Lll
	 O8BlT9PCGeLGelrlIiw3JkVA8SOk8tGlyyWaQylhHUfGG/3ZwfUxIQ/UTrG8s5eyPT
	 m49gBBawv1Jfnrp9c9a934J5/5r7voyw/0dKcdfc/Nrv0GqgpZaiaEdydbp//4/lfL
	 628+HRXrjmukQ==
Date: Fri, 24 Jun 2022 14:56:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org, 
    wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 7/8] xen/arm: create shared memory nodes in guest
 device tree
In-Reply-To: <84641d6e-202d-934c-9ea9-bbf090e29bdb@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206241448040.2410338@ubuntu-linux-20-04-desktop>
References: <20220620051114.210118-1-Penny.Zheng@arm.com> <20220620051114.210118-8-Penny.Zheng@arm.com> <84641d6e-202d-934c-9ea9-bbf090e29bdb@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 24 Jun 2022, Julien Grall wrote:
> On 20/06/2022 06:11, Penny Zheng wrote:
> > We expose the shared memory to the domU using the "xen,shared-memory-v1"
> > reserved-memory binding. See
> > Documentation/devicetree/bindings/reserved-memory/xen,shared-memory.txt
> > in Linux for the corresponding device tree binding.
> > 
> > To save the cost of re-parsing shared memory device tree configuration when
> > creating shared memory nodes in guest device tree, this commit adds new
> > field
> > "shm_mem" to store shm-info per domain.
> > 
> > For each shared memory region, a range is exposed under
> > the /reserved-memory node as a child node. Each range sub-node is
> > named xen-shmem@<address> and has the following properties:
> > - compatible:
> >          compatible = "xen,shared-memory-v1"
> > - reg:
> >          the base guest physical address and size of the shared memory
> > region
> > - xen,id:
> >          a string that identifies the shared memory region.
> > 
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > v5 change:
> > - no change
> > ---
> > v4 change:
> > - no change
> > ---
> > v3 change:
> > - move field "shm_mem" to kernel_info
> > ---
> > v2 change:
> > - using xzalloc
> > - shm_id should be uint8_t
> > - make reg a local variable
> > - add #address-cells and #size-cells properties
> > - fix alignment
> > ---
> >   xen/arch/arm/domain_build.c       | 143 +++++++++++++++++++++++++++++-
> >   xen/arch/arm/include/asm/kernel.h |   1 +
> >   xen/arch/arm/include/asm/setup.h  |   1 +
> >   3 files changed, 143 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 1584e6c2ce..4d62440a0e 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -900,7 +900,22 @@ static int __init allocate_shared_memory(struct domain
> > *d,
> >       return ret;
> >   }
> >   -static int __init process_shm(struct domain *d,
> > +static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
> > +                                            paddr_t start, paddr_t size,
> > +                                            u32 shm_id)
> > +{
> > +    if ( (kinfo->shm_mem.nr_banks + 1) > NR_MEM_BANKS )
> > +        return -ENOMEM;
> > +
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id = shm_id;
> > +    kinfo->shm_mem.nr_banks++;
> > +
> > +    return 0;
> > +}
> > +
> > +static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
> >                                 const struct dt_device_node *node)
> >   {
> >       struct dt_device_node *shm_node;
> > @@ -971,6 +986,14 @@ static int __init process_shm(struct domain *d,
> >               if ( ret )
> >                   return ret;
> >           }
> > +
> > +        /*
> > +         * Record static shared memory region info for later setting
> > +         * up shm-node in guest device tree.
> > +         */
> > +        ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id);
> > +        if ( ret )
> > +            return ret;
> >       }
> >         return 0;
> > @@ -1301,6 +1324,117 @@ static int __init make_memory_node(const struct
> > domain *d,
> >       return res;
> >   }
> >   +#ifdef CONFIG_STATIC_SHM
> > +static int __init make_shm_memory_node(const struct domain *d,
> > +                                       void *fdt,
> > +                                       int addrcells, int sizecells,
> > +                                       struct meminfo *mem)
> 
> NIT: AFAICT mem is not changed, so it should be const.
> 
> > +{
> > +    unsigned long i = 0;
> 
> NIT: This should be "unsigned int" to match the type of nr_banks.
> 
> > +    int res = 0;
> > +
> > +    if ( mem->nr_banks == 0 )
> > +        return -ENOENT;
> > +
> > +    /*
> > +     * For each shared memory region, a range is exposed under
> > +     * the /reserved-memory node as a child node. Each range sub-node is
> > +     * named xen-shmem@<address>.
> > +     */
> > +    dt_dprintk("Create xen-shmem node\n");
> > +
> > +    for ( ; i < mem->nr_banks; i++ )
> > +    {
> > +        uint64_t start = mem->bank[i].start;
> > +        uint64_t size = mem->bank[i].size;
> > +        uint8_t shm_id = mem->bank[i].shm_id;
> > +        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
> > +        char buf[27];
> > +        const char compat[] = "xen,shared-memory-v1";
> > +        __be32 reg[4];
> > +        __be32 *cells;
> > +        unsigned int len = (addrcells + sizecells) * sizeof(__be32);
> > +
> > +        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64,
> > mem->bank[i].start);
> > +        res = fdt_begin_node(fdt, buf);
> > +        if ( res )
> > +            return res;
> > +
> > +        res = fdt_property(fdt, "compatible", compat, sizeof(compat));
> > +        if ( res )
> > +            return res;
> > +
> > +        cells = reg;
> > +        dt_child_set_range(&cells, addrcells, sizecells, start, size);
> > +
> > +        res = fdt_property(fdt, "reg", reg, len);
> > +        if ( res )
> > +            return res;
> > +
> > +        dt_dprintk("Shared memory bank %lu: %#"PRIx64"->%#"PRIx64"\n",
> > +                   i, start, start + size);
> > +
> > +        res = fdt_property_cell(fdt, "xen,id", shm_id);
> 
> Looking at the Linux binding, "xen,id" is meant to be a string. But here you
> are writing it as an integer.

Good catch!


> Given that the Linux binding is already merged, I think the Xen binding should
> be changed.

We would be compliant with both bindings (xen and linux) just by writing
shm_id as a string here, but if it is not too difficult we might as well
harmonize the two bindings and also define xen,shm-id as a string.

On the Xen side, I would suggest putting a clear size limit on it so
that the string is easier to handle.


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 01:18:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 01:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355923.583893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4uQw-0000zc-VA; Sat, 25 Jun 2022 01:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355923.583893; Sat, 25 Jun 2022 01:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4uQw-0000zV-Rf; Sat, 25 Jun 2022 01:17:58 +0000
Received: by outflank-mailman (input) for mailman id 355923;
 Sat, 25 Jun 2022 01:17:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4uQu-0000zL-VE; Sat, 25 Jun 2022 01:17:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4uQu-0002kC-SS; Sat, 25 Jun 2022 01:17:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4uQu-0003L8-B8; Sat, 25 Jun 2022 01:17:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4uQu-0006fj-Ad; Sat, 25 Jun 2022 01:17:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WVGMDhgpay8Nluh5IpeHqkWT0tU8xwYk60X+6ia2QMc=; b=mlrzEgwhVvIdO8yQCUc1ZROrYN
	xxXahvBU5lj490S3AniF2QRQA0d8LvHEZJu2t9nrQQCtvEpWIRVjzi9qOn9XHzKV8U/oIpoXFxMSz
	z8J4SHWf0BPokCO+Q1lYi7Q+qZs1HQLcUpDC52aj8kYtz8inCPab9FMUKInBVGVgQjG4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171346: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8b5669e40f05ff1a1cb865ccc1bdb079b7bfc92c
X-Osstest-Versions-That:
    qemuu=3a821c52e1a30ecd9a436f2c67cc66b5628c829f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 01:17:56 +0000

flight 171346 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171346/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171341
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171341
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171341
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171341
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171341
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171341
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171341
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171341
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                8b5669e40f05ff1a1cb865ccc1bdb079b7bfc92c
baseline version:
 qemuu                3a821c52e1a30ecd9a436f2c67cc66b5628c829f

Last test of basis   171341  2022-06-24 08:06:33 Z    0 days
Testing same since   171346  2022-06-24 19:39:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Helge Deller <deller@gmx.de>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Laurent Vivier <laurent@vivier.eu>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   3a821c52e1..8b5669e40f  8b5669e40f05ff1a1cb865ccc1bdb079b7bfc92c -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 04:10:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 04:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355931.583904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4x7O-0000p9-H5; Sat, 25 Jun 2022 04:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355931.583904; Sat, 25 Jun 2022 04:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o4x7O-0000p0-B8; Sat, 25 Jun 2022 04:09:58 +0000
Received: by outflank-mailman (input) for mailman id 355931;
 Sat, 25 Jun 2022 04:09:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4x7M-0000oq-Qs; Sat, 25 Jun 2022 04:09:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4x7M-0006O4-NT; Sat, 25 Jun 2022 04:09:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o4x7M-0001Bx-BJ; Sat, 25 Jun 2022 04:09:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o4x7M-00078h-9H; Sat, 25 Jun 2022 04:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LAGiFivuajDGh5dSpPW1VwaKHRFDwoc3TzkN2zl1qyw=; b=DMAx9tY8T1/pXTB17gxRqvZxa6
	jADC1mCa+qsiwhAdHI2ChkXETQfOtfRniJGfJys63LoOQLfxFxjHC5OcvIttnpi5Poj8B0gYJ5LLF
	BpRhddOeoHoTlEU/Ud3ve1ruFuNWzijx7NHIXcjj115e+w/zplouzsnQddGgi7x4BlJg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171347-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171347: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=70d605cbeecb408dd884b1f0cd3963eeeaac144c
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 04:09:56 +0000

flight 171347 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171347/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                70d605cbeecb408dd884b1f0cd3963eeeaac144c
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    6 days
Failing since        171280  2022-06-19 15:12:25 Z    5 days   16 attempts
Testing same since   171347  2022-06-24 20:11:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Heiko Stuebner <heiko@sntech.de>
  Hoang Le <hoang.h.le@dektech.com.au>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  John Fastabend <john.fastabend@gmail.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yu Liao <liaoyu15@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7745 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 07:52:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 07:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355941.583915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o50aU-0006Cf-9n; Sat, 25 Jun 2022 07:52:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355941.583915; Sat, 25 Jun 2022 07:52:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o50aU-0006CY-6M; Sat, 25 Jun 2022 07:52:14 +0000
Received: by outflank-mailman (input) for mailman id 355941;
 Sat, 25 Jun 2022 07:52:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50aT-0006CO-QA; Sat, 25 Jun 2022 07:52:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50aT-0002Y7-NY; Sat, 25 Jun 2022 07:52:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50aT-00079U-5y; Sat, 25 Jun 2022 07:52:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o50aT-00023l-5Y; Sat, 25 Jun 2022 07:52:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AEFwXJQ+0UHpM42wDPVVwnRKiYvW2Gr6zKd74rmCdtM=; b=Px/za+fErHA5Hy9ApOIwE7Mp9l
	q/l3ir+LUU/X9Cv0lvmoHwvrztu48vK3QuKyI6I8LPOy6doH2taCwwfH2v+BquqpM1E+AK4afI2LY
	mJkgfBa78irOLW/A1PWvZ9w8/bkmXBOCXjNHC+4dogvuD6G13H3wlHklr5GZto4t9k/Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171348-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171348: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=40d522490714b65e0856444277db6c14c5cc3796
X-Osstest-Versions-That:
    qemuu=8b5669e40f05ff1a1cb865ccc1bdb079b7bfc92c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 07:52:13 +0000

flight 171348 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171348/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171346
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171346
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171346
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171346
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171346
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171346
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171346
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171346
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                40d522490714b65e0856444277db6c14c5cc3796
baseline version:
 qemuu                8b5669e40f05ff1a1cb865ccc1bdb079b7bfc92c

Last test of basis   171346  2022-06-24 19:39:46 Z    0 days
Testing same since   171348  2022-06-25 01:39:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Fabian Ebner <f.ebner@proxmox.com>
  Kevin Wolf <kwolf@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Garzarella <sgarzare@redhat.com>
  Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
  Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
  Xie Yongji <xieyongji@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8b5669e40f..40d5224907  40d522490714b65e0856444277db6c14c5cc3796 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 07:57:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 07:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355948.583926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o50fa-0006pq-UO; Sat, 25 Jun 2022 07:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355948.583926; Sat, 25 Jun 2022 07:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o50fa-0006pj-RF; Sat, 25 Jun 2022 07:57:30 +0000
Received: by outflank-mailman (input) for mailman id 355948;
 Sat, 25 Jun 2022 07:57:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50fZ-0006pZ-W2; Sat, 25 Jun 2022 07:57:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50fZ-0002dQ-T2; Sat, 25 Jun 2022 07:57:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o50fZ-0007SI-E0; Sat, 25 Jun 2022 07:57:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o50fZ-0007Cn-Da; Sat, 25 Jun 2022 07:57:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HxBfIBeis034sqCyUo7NPkGJ1DlgSX+0esxJLGT0Jtk=; b=lBMHR/gKBX4rKHvVHts1o+FtR9
	l6mretR0triOK+hM99rC7PoX9RWOdGWrLnqczEbUvg9ISt+Vn4nl6GY3s17xHssK/VPTLv8cfY50r
	4yEbvBtfBSAigElazQxIdEDiw5MoMuwNzQnh5n+Nh0ZLeNRVsaK+5sfkVurUSsnz/WyQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171351-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171351: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=790f3b214bffb402609ec1c6ceb2317c2b3cabb1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 07:57:29 +0000

flight 171351 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171351/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              790f3b214bffb402609ec1c6ceb2317c2b3cabb1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  715 days
Failing since        151818  2020-07-11 04:18:52 Z  714 days  696 attempts
Testing same since   171351  2022-06-25 04:18:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114220 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 11:03:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 11:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355968.583940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o53Zb-0000P4-2X; Sat, 25 Jun 2022 11:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355968.583940; Sat, 25 Jun 2022 11:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o53Za-0000Ox-US; Sat, 25 Jun 2022 11:03:30 +0000
Received: by outflank-mailman (input) for mailman id 355968;
 Sat, 25 Jun 2022 11:03:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o53ZZ-0000On-Fc; Sat, 25 Jun 2022 11:03:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o53ZZ-0006iZ-AY; Sat, 25 Jun 2022 11:03:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o53ZY-0005V5-Pz; Sat, 25 Jun 2022 11:03:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o53ZY-00085l-PW; Sat, 25 Jun 2022 11:03:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8txc7dCEpZZAbX/Tzy8N6HUSlWRcl2Tf+Bgj9BqGfQU=; b=usck3JMeTc/+KQ5zxjLzmmIXCi
	L66wH/UZHs7uZs4WQ80wS5fYjuKT0NjnYWXAxzfSi+FkFdnsY4RP+n2J5VZFjYzdJLxotQRvin10S
	rjpwN5tfH/YJzYiORjDvckftFb+Kq8h+3x9nsvvIL7H7f4FaF0GekZbSdVyjIlBkYqYI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171349: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7c1f724dd95cf627f72c96d310b6b7d487bc2281
X-Osstest-Versions-That:
    xen=7c1f724dd95cf627f72c96d310b6b7d487bc2281
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 11:03:28 +0000

flight 171349 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171349/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 171339 pass in 171349
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 171339 pass in 171349
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install fail in 171339 pass in 171349
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  7 xen-install fail pass in 171339

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop      fail blocked in 171339
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171339
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171339
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171339
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171339
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171339
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171339
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171339
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171339
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171339
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171339
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171339
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281
baseline version:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281

Last test of basis   171349  2022-06-25 01:51:54 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 25 12:38:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 12:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355984.583951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o553L-0001Cj-Fo; Sat, 25 Jun 2022 12:38:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355984.583951; Sat, 25 Jun 2022 12:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o553L-0001Cc-Cv; Sat, 25 Jun 2022 12:38:19 +0000
Received: by outflank-mailman (input) for mailman id 355984;
 Sat, 25 Jun 2022 12:38:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o553K-0001C3-Hu; Sat, 25 Jun 2022 12:38:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o553K-0008QU-EX; Sat, 25 Jun 2022 12:38:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o553J-00011f-So; Sat, 25 Jun 2022 12:38:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o553J-0003uC-Rx; Sat, 25 Jun 2022 12:38:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RYnY5IlBKtMkfVf0Cde47692REt7rOFWCLvMsjVtxIQ=; b=C61sCbNlovwHtf9PpX6mHq9ROX
	SmIA0I8RWM6WLnkuyZNQM3vVp3dYQn6g3z4468OYSn+d+tIb5ZxoWGPIc0SYoKVDjO2MM1JgxpJJ8
	UAWOA9SHc+kYfLJmy9chOg/8d2PeHpqdQ5bEoTFn9oOvd5R7CucOpdpo11pEKLWtc4lw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171350-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171350: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8c23f235a6a8ae43abea215812eb9d8cf4dd165e
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 12:38:17 +0000

flight 171350 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171350/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8c23f235a6a8ae43abea215812eb9d8cf4dd165e
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    6 days
Failing since        171280  2022-06-19 15:12:25 Z    5 days   17 attempts
Testing same since   171350  2022-06-25 04:14:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Akira Yokosawa <akiyks@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Lamparter <chunkeey@gmail.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Heiko Stuebner <heiko@sntech.de>
  Hoang Le <hoang.h.le@dektech.com.au>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  John Fastabend <john.fastabend@gmail.com>
  Jon <jon.lin@rock-chips.com>
  Jon Lin <jon.lin@rock-chips.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Robert Marko <robimarko@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sai Krishna Potthuri <lakshmi.sai.krishna.potthuri@xilinx.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Sander Vanheule <sander@svanheule.net>
  Sascha Hauer <s.hauer@pengutronix.de>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tom Schwindl <schwindl@posteo.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Liao <liaoyu15@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8312 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 14:32:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 14:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355993.583961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o56pY-0004Wd-Of; Sat, 25 Jun 2022 14:32:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355993.583961; Sat, 25 Jun 2022 14:32:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o56pY-0004WW-M4; Sat, 25 Jun 2022 14:32:12 +0000
Received: by outflank-mailman (input) for mailman id 355993;
 Sat, 25 Jun 2022 14:32:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yMZ6=XA=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o56pX-0004WQ-0R
 for xen-devel@lists.xenproject.org; Sat, 25 Jun 2022 14:32:11 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97c607f7-f493-11ec-b725-ed86ccbb4733;
 Sat, 25 Jun 2022 16:32:08 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id c2so9274843lfk.0
 for <xen-devel@lists.xenproject.org>; Sat, 25 Jun 2022 07:32:08 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 9-20020a2e0509000000b0025538905298sm673389ljf.123.2022.06.25.07.32.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 25 Jun 2022 07:32:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97c607f7-f493-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=x6Fy3/uQW7PV7sjNtq2E/ETGqkCcnt2NEZ3nx6xebvU=;
        b=kPLVc4WgWv9UOcTxcJeUmLBcacmFB0x92i/cnP9qkiTkq+7Pf/73C8eL7bhBjYopcw
         XKZBNsC9qxZ7RD3Tp6thB4lvxugHlPvS/HBc5kj0hAwEe2lF4qiG/p3unTM8SkNAc070
         gpaROcQVHyu2Yy+QUc3sjcimfoXuiTqbvtvR1m6ThVO3FMC/RXXI5Pf4mxinKlm6e/Ug
         iXo8FPQtB02xh21nzg1epj4r+SVBQRxEbKoLuD4Miuik5tHevhNhGHrElp4CP+W/Ph9+
         uuFio5XIytB7w2WP8dAD3/Absd8CuAsmPg04ptZogx9jzhfmIS3q6jhLXihMh/VymPob
         oK8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=x6Fy3/uQW7PV7sjNtq2E/ETGqkCcnt2NEZ3nx6xebvU=;
        b=1we81DD7zffTyueh6Lh6zpjWEqA1TjtD/yljl0LIbhcshYL4gt3KRoh0MpdzD16s1D
         8g6zHMo1okge4wSjMYkM1g8kPMj8FzBVBeog1n04NHMTiHtNnrMZcbk+dHfH+kkM5d+z
         jEu/9vMRQC4hFV2gbpChlsK2fEHAdgaVkRe/WS7lQaUd1h7KhWXojB7sKyechfGUCeXJ
         K7tIXAD6uliRmu4xCNgdO65X9aKgwcegv8cDWbvyFKF49IN+gbtQafMuno0sUj1nxJG7
         ZatcJwMTX15vuSbmXNRRECXb8sWtEwGkgi8KZIcF3twPJPc22brl0REDfKSGNXZbVtIo
         NWCA==
X-Gm-Message-State: AJIora8FgO1cW/IbHolrEejHvEAFntQjGxzpUkEy+Wo7eLrgqXxua9YB
	ItTF0+XdVRZKfFSek0sUqS0=
X-Google-Smtp-Source: AGRyM1twwdIQDqNiLkOdvn2UhiibUoio6MYgplot2RcoWQCCbiR/hzHrRmxnPgBxO721xNiDlsUAMA==
X-Received: by 2002:a05:6512:b19:b0:47d:e576:305f with SMTP id w25-20020a0565120b1900b0047de576305fmr2570362lfu.61.1656167528035;
        Sat, 25 Jun 2022 07:32:08 -0700 (PDT)
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
 <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <02648046-7781-61e5-de93-77342e4d6c16@gmail.com>
Date: Sat, 25 Jun 2022 17:32:06 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 24.06.22 15:59, George Dunlap wrote:

Hello George

>
>> On 17 Jun 2022, at 17:14, Oleksandr Tyshchenko <olekstysh@gmail.com> wrote:
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch adds basic support for configuring and assisting a virtio-mmio
>> based virtio-disk backend (emulator), which is intended to run outside of
>> Qemu and can run in any domain.
>> Although the Virtio block device is quite different from the traditional
>> Xen PV block device (vbd) from the toolstack's point of view:
>> - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
>>    written to Xenstore is fetched by the frontend currently ("vdev"
>>    is not passed to the frontend). But this might need to be revised
>>    in the future, so frontend data might be written to Xenstore in order
>>    to support hotplugging virtio devices or passing the backend domain id
>>    on arches where the device-tree is not available.
>> - the ring-ref/event-channel are not used for backend<->frontend
>>    communication; the proposed IPC for Virtio is IOREQ/DM
>> it is still a "block device" and ought to be integrated into the existing
>> "disk" handling. So, re-use (and adapt) the "disk" parsing/configuration
>> logic to deal with Virtio devices as well.
>>
>> For the immediate purpose and an ability to extend that support for
>> other use-cases in future (Qemu, virtio-pci, etc) perform the following
>> actions:
>> - Add new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
>>   that in the configuration
>> - Introduce new disk "specification" and "transport" fields to struct
>>   libxl_device_disk. Both are written to the Xenstore. The transport
>>   field is only used for the specification "virtio" and it assumes
>>   only "mmio" value for now.
>> - Introduce new "specification" option with "xen" communication
>>   protocol being default value.
>> - Add new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as current
>>   one (LIBXL__DEVICE_KIND_VBD) doesn't fit into Virtio disk model
>>
>> An example of domain configuration for Virtio disk:
>> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']
>>
>> Nothing has changed for default Xen disk configuration.
>>
>> Please note, this patch is not enough for virtio-disk to work
>> on Xen (Arm), as for every Virtio device (including disk) we need
>> to allocate Virtio MMIO params (IRQ and memory region) and pass
>> them to the backend, also update Guest device-tree. The subsequent
>> patch will add these missing bits. For the current patch,
>> the default "irq" and "base" are just written to the Xenstore.
>> This is not an ideal splitting, but this way we avoid breaking
>> the bisectability.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> OK, I am *really* sorry for coming in here at the last minute and quibbling about things.


no problem


>   But this introduces a public interface which looks really wrong to me.  I’ve replied to the mail from December where Juergen proposed the “Other” protocol.
>
> Hopefully this will be a simple matter of finding a better name than “other”.  (Or you guys convincing me that “other” is really the best name for this value; or even Anthony asserting his right as a maintainer to overrule my objection if he thinks I’m out of line.)


I saw your reply to V6 and Juergen's answer. I share Juergen's opinion, 
although I understand your concern. I think this is exactly the 
situation where finding a perfect name (obvious, short, etc.) for the 
backendtype (in our particular case) is really difficult.

Personally, I tend to keep "other", because there is no better 
alternative from my PoV. I might be completely wrong here, but I don't 
think we will have to extend "backendtype" to support other possible 
virtio backend implementations in the foreseeable future:

- when Qemu gains the required support, we will choose qdisk: 
backendtype=qdisk, specification=virtio
- for a possible virtio alternative to blkback, we will choose phy: 
backendtype=phy, specification=virtio

If there is ever a need to distinguish between various implementations, 
we will be able to describe that by using "transport" (mmio, pci, 
xenbus, argo, whatever). Actually, this is why we also introduced 
"specification" and "transport".
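
To make the combinations concrete, here is a sketch of the corresponding 
xl disk stanzas. Only the backendtype=other form is what this patch 
introduces (taken from the patch description); the qdisk and phy lines 
are assumptions about possible future support, not existing functionality:

```
# Introduced by this patch: standalone virtio-disk backend daemon
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio' ]

# Hypothetical future combinations (assumed, not implemented yet):
# QEMU-based virtio-blk backend
# disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=qdisk, specification=virtio' ]
# In-kernel (blkback-like) virtio backend
# disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=phy, specification=virtio' ]
```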

IIRC, there were other suggested names besides "other", namely 
"external" and "daemon". If you think that one of them is better than 
"other", I will be happy to use it.



>
> FWIW the Golang changes look fine.
>
>   -George
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Sat Jun 25 14:45:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 14:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.355999.583972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o572P-00062r-Uj; Sat, 25 Jun 2022 14:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 355999.583972; Sat, 25 Jun 2022 14:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o572P-00062k-S8; Sat, 25 Jun 2022 14:45:29 +0000
Received: by outflank-mailman (input) for mailman id 355999;
 Sat, 25 Jun 2022 14:45:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o572O-00062e-GA
 for xen-devel@lists.xenproject.org; Sat, 25 Jun 2022 14:45:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o572N-0002UD-Oo; Sat, 25 Jun 2022 14:45:27 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o572N-0005Ra-Ge; Sat, 25 Jun 2022 14:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=GeFbfIa+gtCTy2aaanL3LieGAJaOgijYwGr8EpZ6OsY=; b=thgPxKFxPk6ijTzbLpCSom85ul
	Zslm3TK2rj9GgMAq3JcugDMEJJywThtFbnJT+15JcOi1re3YlEHjXSnVdUoiIrT1dV9YdR49DJyrJ
	y5NCOZWTNSTu/sKGXXHmN6kwL+vVkzaXK6IIkkDEZHtfsYkYluHxowgcl9EB4L8ksFzs=;
Message-ID: <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org>
Date: Sat, 25 Jun 2022 15:45:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: dmitry.semenets@gmail.com, xen-devel@lists.xenproject.org,
 Dmytro Semenets <dmytro_semenets@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
 <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
 <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
 <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 24/06/2022 22:31, Stefano Stabellini wrote:
> On Fri, 24 Jun 2022, Julien Grall wrote:
>> On 23/06/2022 23:07, Stefano Stabellini wrote:
>>> On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
>>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
>>> So wouldn't it be better to remove the panic from the implementation of
>>> call_psci_cpu_off?
>>
>> I have asked to keep the panic() in call_psci_cpu_off(). If you remove the
>> panic() then we will hide the fact that the CPU was not properly turned off
>> and will consume more energy than expected.
>>
>> The WFI loop is fine when shutting down or rebooting because we know this will
>> only happen for a short period of time.
> 
> Yeah, I don't think we should hide that CPU_OFF failed. I think we
> should print a warning. However, given that we know CPU_OFF can
> reasonably fail under normal conditions, returning DENIED when a Trusted OS
> is present, I think we should not panic.

We know how to detect this condition (see section 5.9 in DEN0022D.b). So 
I would argue we should fix it properly rather than removing the panic().

> 
> If there was a way to distinguish a failure because a Trusted OS is
> present (the "normal" failure) from other failures, I would suggest to:
> - print a warning if failed due to a Trusted OS being present
> - panic in other cases
> 
> Unfortunately it looks like in all cases the return code is DENIED :-(

I am confused. Per the spec, the only reason CPU_OFF can return DENIED
is that the Trusted OS is resident. So what other cases are you
talking about?

> 
> 
> Given that, I would not panic and only print a warning in all cases. Or
> we could ASSERT, which at least goes away in !DEBUG builds.

ASSERT() is definitely not the way to deal with external input. I could
possibly accept a WARN(), but see above.

>>> The reason I am saying this is that stop_cpu is called in a number of
>>> places beyond halt_this_cpu and as far as I can tell any of them could
>>> trigger the panic. (I admit they are unlikely places but still.)
>>
>> This is one of the examples where the CPU will not be stopped for a short
>> period of time. We should deal with them differently (i.e. migrating the
>> trusted OS or similar) so we give the CPU every chance to be fully powered off.
>>
>> IMHO, this is a different issue and hence why I didn't ask Dmitry to solve it.
> 
> I see your point now. I was seeing the two things as one.
> 
> I think it is true that the WFI loop is likely to work. Also it is true
> that from a power perspective it makes no difference on power down or
> reboot. From that point of view this patch is OK.
> 
> But even on shut-down/reboot, why not do that as a fallback in case
> CPU_OFF didn't work? It is going to work most of the time anyway, why
> change the default for the few cases where it doesn't work?

Because we would not be consistent in how we turn off a CPU on a
system supporting PSCI. I would prefer to use the same method everywhere
so it is easier to reason about.

I am also not sure how you define "most of the time". Yes, it is possible
that the boards we are aware of will not have this issue, but what about
the ones we don't know about?

> 
> Given that this patch would work, I don't want to insist on this and let
> you decide.
> 
> 
> But even if we don't want to remove the panic as part of this patch, I
> think we should remove the panic in a separate patch anyway, at least
> until someone investigates and thinks of a strategy how to migrate the
> TrustedOS as you suggested.

If we accept this patch, then we remove the immediate pain. The other
uses of stop_cpu() are in:
	1) idle_loop(): reachable when turning off a CPU after boot (not
	   supported on Arm)
	2) start_secondary(): only used during boot (CPU hot-unplug is not
	   supported)

Even if it were possible to trigger the panic() in 2), I am not
aware of an immediate issue there. So I think it would be the wrong
approach to remove the panic() first and then investigate.

The advantage of the panic() is that it will remind us that something
needs to be fixed. With a warning (or WARN()), people will tend to ignore it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 14:54:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 14:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356005.583984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o57Ap-0007Wg-T4; Sat, 25 Jun 2022 14:54:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356005.583984; Sat, 25 Jun 2022 14:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o57Ap-0007WZ-PQ; Sat, 25 Jun 2022 14:54:11 +0000
Received: by outflank-mailman (input) for mailman id 356005;
 Sat, 25 Jun 2022 14:54:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o57Ao-0007WT-Tq
 for xen-devel@lists.xenproject.org; Sat, 25 Jun 2022 14:54:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o57Ao-0002d5-BV; Sat, 25 Jun 2022 14:54:10 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o57Ao-0005df-3m; Sat, 25 Jun 2022 14:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mx+q6vjinJPOnrfjfjmTtfqcA4rOHuZtQOZl9+9SNv0=; b=z0qk+llQTP/ohVvvCqQPqB3k0w
	sKbuOy00tjJ5v7z2lrmchBwNfPIjYuORH8UWqPL7+mLCzqmpqSRHeecBbh69L8WsObGHCCXl8ovzl
	YrLqmC9/wiQAlNfEc1LlfknwpXeMKDYsnc3rKmc+by8vm75esIdwwb61U5A9YZgAT2Ww=;
Message-ID: <96825a61-9c97-2ea8-519e-bf70de30237d@xen.org>
Date: Sat, 25 Jun 2022 15:54:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: irq: Initialize the per-CPU IRQs while preparing
 the CPU
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220614094157.95631-1-julien@xen.org>
 <29AAB4EF-6326-41F2-BB51-EED5FFDB26EA@arm.com>
 <E71E0034-ED75-4783-9A8B-01D6BBF293A9@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <E71E0034-ED75-4783-9A8B-01D6BBF293A9@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 24/06/2022 11:01, Bertrand Marquis wrote:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks! I have committed the patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 17:22:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 17:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356013.583999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o59Tz-0005g2-LX; Sat, 25 Jun 2022 17:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356013.583999; Sat, 25 Jun 2022 17:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o59Tz-0005fv-If; Sat, 25 Jun 2022 17:22:07 +0000
Received: by outflank-mailman (input) for mailman id 356013;
 Sat, 25 Jun 2022 17:22:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59Ty-0005fl-30; Sat, 25 Jun 2022 17:22:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59Tx-00062y-Pd; Sat, 25 Jun 2022 17:22:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59Tx-0004Zq-CD; Sat, 25 Jun 2022 17:22:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o59Tx-0008Dv-Bi; Sat, 25 Jun 2022 17:22:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zxwk2ymcE7mf6Be+OtY+lBo3/ii/21m6M+mUvjtY31w=; b=HHZQtPVtWaYTVlxBi+4LUkBmFw
	pKqiyUG5Yk7of7qQIDnBlnWQRCYkCZ4ocJZ/0YEXeg7PHFO9lR933UsJFf9PAbucb3j7mcVokjkVr
	4Jj6tY/8WvdXg686Sd9b1LsnRaar3zLTS0SD2VMHhMpsGcrJDLHaI6F2+1ozIh6vKrfU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171352-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171352: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=23db944f754e99abf814a79a2273b0191d35e4ff
X-Osstest-Versions-That:
    linux=f0c280af0ec7c79cf043594974206d87c3c46524
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 17:22:05 +0000

flight 171352 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171352/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 13 guest-start                 fail like 171320
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171330
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171330
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171330
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171330
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171330
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171330
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171330
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 171330
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171330
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171330
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171330
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171330
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171330
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171330
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                23db944f754e99abf814a79a2273b0191d35e4ff
baseline version:
 linux                f0c280af0ec7c79cf043594974206d87c3c46524

Last test of basis   171330  2022-06-23 14:01:14 Z    2 days
Testing same since   171352  2022-06-25 11:13:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Carstens <hca@linux.ibm.com>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Marian Postevca <posteuca@mutex.one>
  Mike Snitzer <snitzer@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Will Deacon <will@kernel.org>
  Willy Tarreau <w@1wt.eu>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   f0c280af0ec7..23db944f754e  23db944f754e99abf814a79a2273b0191d35e4ff -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 17:45:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 17:45:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356024.584009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o59qz-0008Bo-RE; Sat, 25 Jun 2022 17:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356024.584009; Sat, 25 Jun 2022 17:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o59qz-0008Bh-O9; Sat, 25 Jun 2022 17:45:53 +0000
Received: by outflank-mailman (input) for mailman id 356024;
 Sat, 25 Jun 2022 17:45:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59qy-0008BX-7e; Sat, 25 Jun 2022 17:45:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59qy-0006Rk-6R; Sat, 25 Jun 2022 17:45:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o59qx-0005CU-HV; Sat, 25 Jun 2022 17:45:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o59qx-0006xE-H3; Sat, 25 Jun 2022 17:45:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WBO/vTUhWfX1kVAxOqtcIn9UvSd4uLOUIR+9tTwUo5s=; b=F3yYjSB4JMUAfosVy6vg6Lg3aG
	HEU3Mc4WvGk4jx8fHmWrDnnDHaNB73iZOxiZVA0Ll1ar2KelIE2mecsL1RrggC71ACO4vHINT5Ecp
	KmVWPJELbNzlZ2j1euqc4CBkxJRTIVFI2RzQzZvGFclhNwd/P8q7qYtxkUD7c30S/lIE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171354: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
X-Osstest-Versions-That:
    xen=7c1f724dd95cf627f72c96d310b6b7d487bc2281
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 17:45:51 +0000

flight 171354 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171354/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
baseline version:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281

Last test of basis   171333  2022-06-23 19:03:05 Z    1 days
Testing same since   171354  2022-06-25 15:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7c1f724dd9..0544c4ee4b  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 25 20:54:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jun 2022 20:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356032.584021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Cn2-0001Cm-8F; Sat, 25 Jun 2022 20:54:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356032.584021; Sat, 25 Jun 2022 20:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Cn2-0001Cf-4j; Sat, 25 Jun 2022 20:54:00 +0000
Received: by outflank-mailman (input) for mailman id 356032;
 Sat, 25 Jun 2022 20:53:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Cn1-0001CV-HV; Sat, 25 Jun 2022 20:53:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Cn1-0001t2-DX; Sat, 25 Jun 2022 20:53:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Cn0-0006Pz-P1; Sat, 25 Jun 2022 20:53:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Cn0-0002xn-Oc; Sat, 25 Jun 2022 20:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nl+Z6gW+6dVGD6D4uCsnE614rL5GWJrXukra77sSC/g=; b=iKjZHdYEpTT5WTUi56O5seCFx/
	F5wsxJmwo/XNdgml2Eud8bPIDgN67RzqOrOVWeeMMLAui1ZLU9oOiMM/yOUTetoC1STCwDM18avwE
	Kea7J6Ira1un1HRoEJ9BIwaREa8JDPTtn+e+HtFeNJ12DoEXrEwUojijTvma0JXRxBJs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171353: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8c23f235a6a8ae43abea215812eb9d8cf4dd165e
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jun 2022 20:53:58 +0000

flight 171353 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171353/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8c23f235a6a8ae43abea215812eb9d8cf4dd165e
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    6 days
Failing since        171280  2022-06-19 15:12:25 Z    6 days   18 attempts
Testing same since   171350  2022-06-25 04:14:49 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Akira Yokosawa <akiyks@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Bart Van Assche <bvanassche@acm.org>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Lamparter <chunkeey@gmail.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Heiko Stuebner <heiko@sntech.de>
  Hoang Le <hoang.h.le@dektech.com.au>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  John Fastabend <john.fastabend@gmail.com>
  Jon <jon.lin@rock-chips.com>
  Jon Lin <jon.lin@rock-chips.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Robert Marko <robimarko@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sai Krishna Potthuri <lakshmi.sai.krishna.potthuri@xilinx.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Sander Vanheule <sander@svanheule.net>
  Sascha Hauer <s.hauer@pengutronix.de>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tom Schwindl <schwindl@posteo.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Liao <liaoyu15@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8312 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 02:53:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 02:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356041.584032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5IOU-0000kz-M2; Sun, 26 Jun 2022 02:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356041.584032; Sun, 26 Jun 2022 02:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5IOU-0000ks-Hr; Sun, 26 Jun 2022 02:53:02 +0000
Received: by outflank-mailman (input) for mailman id 356041;
 Sun, 26 Jun 2022 02:53:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5IOT-0000ki-70; Sun, 26 Jun 2022 02:53:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5IOT-0007qF-2m; Sun, 26 Jun 2022 02:53:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5IOS-0002Md-Ij; Sun, 26 Jun 2022 02:53:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5IOS-0007zJ-I5; Sun, 26 Jun 2022 02:53:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tYSFf8RqICF0Ya3vZgvblnYDrmB1XcBnX3kW8LOdoMk=; b=1QbjFe13nveTOTq3HGkQeB4Xs4
	oO/D3TRIzSOXdEr24hsRa8oTXi1ONZEE0g3ivIJz+O/CYViKSBs//q/YuzuxSfSWw91NQFXYRGShq
	lWXvWbXZ8D194b66XD9kLTPaUWaf5p4v5weDiEFtpd2BvsvnQ+e/TsghsOJuRxOQDv5Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171355-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171355: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
X-Osstest-Versions-That:
    xen=7c1f724dd95cf627f72c96d310b6b7d487bc2281
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 02:53:00 +0000

flight 171355 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171355/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 171349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171349
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171349
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171349
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171349
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171349
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171349
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171349
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171349
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171349
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171349
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171349
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171349
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
baseline version:
 xen                  7c1f724dd95cf627f72c96d310b6b7d487bc2281

Last test of basis   171349  2022-06-25 01:51:54 Z    1 days
Testing same since   171355  2022-06-25 18:08:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7c1f724dd9..0544c4ee4b  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526 -> master


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 05:08:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 05:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356050.584047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5KVa-0005fs-DA; Sun, 26 Jun 2022 05:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356050.584047; Sun, 26 Jun 2022 05:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5KVa-0005fl-97; Sun, 26 Jun 2022 05:08:30 +0000
Received: by outflank-mailman (input) for mailman id 356050;
 Sun, 26 Jun 2022 05:08:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5KVZ-0005fb-CH; Sun, 26 Jun 2022 05:08:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5KVZ-0002W8-7F; Sun, 26 Jun 2022 05:08:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5KVY-0008UE-Jt; Sun, 26 Jun 2022 05:08:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5KVY-0008MY-JH; Sun, 26 Jun 2022 05:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WNDHj2LUHNciI4qkVyklrT+msCEg/ITKk9UQSzt2kWI=; b=bOZ+xmWanB1opVOdxGQKfJAB7d
	8wifV/gqYC46cqYSMQANNfprrnQHaG2ik4bW24aHu6C+Ir8+o3aKY8YDTT10saw1FPi+qqE3czv6U
	X1HVQVLKh51C3w0QtKt0m7rSawRzOGeqlPdWbQZDSjXescvPbza6t6623r8DRmKDNYIo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171356-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171356: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0840a7914caa14315a3191178a9f72c742477860
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 05:08:28 +0000

flight 171356 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171356/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0840a7914caa14315a3191178a9f72c742477860
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    7 days
Failing since        171280  2022-06-19 15:12:25 Z    6 days   19 attempts
Testing same since   171356  2022-06-25 21:11:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aashish Sharma <shraash@google.com>
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Akira Yokosawa <akiyks@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Antoniu Miclaus <antoniu.miclaus@analog.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Ballon Shi <ballon.shi@quectel.com>
  Bart Van Assche <bvanassche@acm.org>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlo Lobrano <c.lobrano@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Chevron Li<chevron.li@bayhubtech.com>
  Christian Lamparter <chunkeey@gmail.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Daeho Jeong <daehojeong@google.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Vacura <w36195@motorola.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Eddie Huang <eddie.huang@mediatek.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Stuebner <heiko@sntech.de>
  Hillf Danton <hdanton@sina.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Hongyu Xie <xiehongyu1@kylinos.cn>
  Hongyu Xie <xy521521@gmail.com>
  Huacai Chen <chenhuacai@loongson.cn>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jean-Baptiste Maneyrol <jean-baptiste.maneyrol@tdk.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jialin Zhang <zhangjialin11@huawei.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  John Fastabend <john.fastabend@gmail.com>
  Jon <jon.lin@rock-chips.com>
  Jon Lin <jon.lin@rock-chips.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liam Beguin <liambeguin@gmail.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Lv Ruyi <lv.ruyi@zte.com.cn>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Rosin <peda@axentia.se>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Randy Dunlap <rdunlap@infradead.org>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Robert Marko <robimarko@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sai Krishna Potthuri <lakshmi.sai.krishna.potthuri@xilinx.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Sander Vanheule <sander@svanheule.net>
  Sascha Hauer <s.hauer@pengutronix.de>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Boyd <swboyd@chromium.org>
  Stephen Hemminger <stephen@networkplumber.org>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Tanveer Alam <tanveer1.alam@intel.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tim Crawford <tcrawford@system76.com>
  Tom Schwindl <schwindl@posteo.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Utkarsh Patel <utkarsh.h.patel@intel.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yannick Brosseau <yannick.brosseau@gmail.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Liao <liaoyu15@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 9761 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 09:26:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 09:26:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356062.584057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5OXB-0005jB-9N; Sun, 26 Jun 2022 09:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356062.584057; Sun, 26 Jun 2022 09:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5OXB-0005j4-5y; Sun, 26 Jun 2022 09:26:25 +0000
Received: by outflank-mailman (input) for mailman id 356062;
 Sun, 26 Jun 2022 09:26:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5OX9-0005it-G1; Sun, 26 Jun 2022 09:26:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5OX9-0008TS-Co; Sun, 26 Jun 2022 09:26:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5OX8-0002gy-TS; Sun, 26 Jun 2022 09:26:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5OX8-0000y0-T0; Sun, 26 Jun 2022 09:26:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KYN397LT8cVOafTF3x2dFRcnGZymJPSQrMYBuO7ioG0=; b=o5tQDxJDSxCaBn1W0n03Ocledz
	GeXtCuY2C10YRZgPBd3Dnf2wRcrkYfHbplmfQ2lrOo418C2g4vnfNDkVVmKL774wbd9YgCorvtyFw
	dFPwmr5SE1g1x5NXsnhw8eSqM7FPfEW4gr0AoT0PWcfEN20/tgnFDDJFnzDZaWGRuFVo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171358-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171358: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=790f3b214bffb402609ec1c6ceb2317c2b3cabb1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 09:26:22 +0000

flight 171358 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171358/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              790f3b214bffb402609ec1c6ceb2317c2b3cabb1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  716 days
Failing since        151818  2020-07-11 04:18:52 Z  715 days  697 attempts
Testing same since   171351  2022-06-25 04:18:58 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114220 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 09:47:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 09:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356071.584080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orp-0008Pz-AD; Sun, 26 Jun 2022 09:47:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356071.584080; Sun, 26 Jun 2022 09:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orp-0008Ps-7Y; Sun, 26 Jun 2022 09:47:45 +0000
Received: by outflank-mailman (input) for mailman id 356071;
 Sun, 26 Jun 2022 09:47:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cB6r=XB=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1o5Oro-0008Pg-JA
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 09:47:44 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05b43fd0-f535-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 11:47:43 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id pk21so13245534ejb.2
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 02:47:42 -0700 (PDT)
Received: from localhost.localdomain
 (dynamic-078-055-174-013.78.55.pool.telefonica.de. [78.55.174.13])
 by smtp.gmail.com with ESMTPSA id
 t4-20020a17090605c400b00706242d297fsm3504752ejt.212.2022.06.26.02.47.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 02:47:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05b43fd0-f535-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=8y1YigI7ttxs3tTld6DKvzpRpRRmju3jx/o0m9Wy9os=;
        b=pCD3TQyytOn6s8CgipMMRlMe8ipJcM/lIt7qDuGLUPIXeg+yWx4ZdloptdUVNTKaDd
         sIqtC44UFMxDUCcPpiTrv/DTyFcxvk14BVhRcpSdhEXoaf908w64RBTvxNgIDr4RrQ+A
         qDHr74I4ghmr96AdkqiCMd+wk7vx9jqomIqAHyHMSFhdEvknJijiil+FPfF99ISYxHqD
         dxrbvGTN3OF/WlTnvJ/82NPHYogw5Rh3UyRYTonV0et257btT/LL3q1q00H1Vb6HENGr
         B58hlzz3kx3m7E0zYwSsRIvtuKla1CQeAPPct1v6EAPf4hH3zdy3eMn4TTSbjKGM1GgT
         gqvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=8y1YigI7ttxs3tTld6DKvzpRpRRmju3jx/o0m9Wy9os=;
        b=k6A3N3CZoC7bUtV3EClLFpfGIJjPUs/3XJj+jh938e1yEudPIcSKraRB0dNvFCauJi
         1hZbWxvGLy6oUluWEhUc8GZv2f1n+6E5TS8XYYSD1nFHKAK/b87C9bJqiAehPLX6Tpc4
         j0K5VkHVRxSWNq1FPiahUb46MbR3r0DZ4++F3gIf2zZHUkvI2TMwoea3FEDsU66C2I6J
         HB9gQC/xKeOW7kCrSDpOkf390K1DVY4/kK2wt8w5URhcnFCs2uzCXGrd45ID3iYSrGZ5
         hA6KTYt4rmqiQySra7YEnh/eir0TneYzRi1XZODI7R3Be0xM7RFG72WE7AulY+Th4u3E
         FMag==
X-Gm-Message-State: AJIora9hEx/r1iCVzd2wXYkJDydFkBw9xWVxV+nBI3TbcAKPD+bhcXsZ
	OCpRMudY9iUua+Ladw4DviE=
X-Google-Smtp-Source: AGRyM1v6iTzO4uMHp2vLzEWX53KPy01bIP+vC0hI0N/lRCRouLltezCBvCVoffSlli5cWMnSFE53BA==
X-Received: by 2002:a17:907:968a:b0:722:e508:fc15 with SMTP id hd10-20020a170907968a00b00722e508fc15mr7614907ejc.188.1656236861608;
        Sun, 26 Jun 2022 02:47:41 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 1/2] hw/i386/xen/xen-hvm: Allow for stubbing xen_set_pci_link_route()
Date: Sun, 26 Jun 2022 11:46:55 +0200
Message-Id: <20220626094656.15673-2-shentey@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220626094656.15673-1-shentey@gmail.com>
References: <20220626094656.15673-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The only user of xen_set_pci_link_route() is
xen_piix_pci_write_config_client(), which implements PIIX-specific logic
in the xen namespace. This makes xen-hvm depend on PIIX, which could be
avoided if xen_piix_pci_write_config_client() were implemented in PIIX.
In order to do that, xen_set_pci_link_route() needs to be stubbable,
which this patch addresses.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/xen/xen-hvm.c       | 7 ++++++-
 include/hw/xen/xen.h        | 1 +
 include/hw/xen/xen_common.h | 6 ------
 stubs/xen-hw-stub.c         | 5 +++++
 4 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 0731f70410..204fda7949 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -161,11 +161,16 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
         }
         v &= 0xf;
         if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
-            xen_set_pci_link_route(xen_domid, address + i - PIIX_PIRQCA, v);
+            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
         }
     }
 }
 
+int xen_set_pci_link_route(uint8_t link, uint8_t irq)
+{
+    return xendevicemodel_set_pci_link_route(xen_dmod, xen_domid, link, irq);
+}
+
 int xen_is_pirq_msi(uint32_t msi_data)
 {
     /* If vector is 0, the msi is remapped into a pirq, passed as
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 0f9962b1c1..13bffaef53 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -21,6 +21,7 @@ extern enum xen_mode xen_mode;
 extern bool xen_domid_restrict;
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
+int xen_set_pci_link_route(uint8_t link, uint8_t irq);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 179741ff79..77ce17d8a4 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -316,12 +316,6 @@ static inline int xen_set_pci_intx_level(domid_t domid, uint16_t segment,
                                              device, intx, level);
 }
 
-static inline int xen_set_pci_link_route(domid_t domid, uint8_t link,
-                                         uint8_t irq)
-{
-    return xendevicemodel_set_pci_link_route(xen_dmod, domid, link, irq);
-}
-
 static inline int xen_inject_msi(domid_t domid, uint64_t msi_addr,
                                  uint32_t msi_data)
 {
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 15f3921a76..743967623f 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -23,6 +23,11 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
 {
 }
 
+int xen_set_pci_link_route(uint8_t link, uint8_t irq)
+{
+    return -1;
+}
+
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
 {
 }
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 09:47:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 09:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356072.584091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orq-0000Er-Gg; Sun, 26 Jun 2022 09:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356072.584091; Sun, 26 Jun 2022 09:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orq-0000Ej-Dv; Sun, 26 Jun 2022 09:47:46 +0000
Received: by outflank-mailman (input) for mailman id 356072;
 Sun, 26 Jun 2022 09:47:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cB6r=XB=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1o5Orp-0008Pg-Aa
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 09:47:45 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06577fdf-f535-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 11:47:43 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id u15so13171799ejc.10
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 02:47:43 -0700 (PDT)
Received: from localhost.localdomain
 (dynamic-078-055-174-013.78.55.pool.telefonica.de. [78.55.174.13])
 by smtp.gmail.com with ESMTPSA id
 t4-20020a17090605c400b00706242d297fsm3504752ejt.212.2022.06.26.02.47.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 02:47:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06577fdf-f535-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GE5xBWjtG9WBcqQI8QpdzBg3P6kzmtE86vK5W8A22FE=;
        b=a4377qvEOvu3e1RTfuQHxJNPGsEaFzLBzKYodFBIEb2E1GNijJZcTVwGr5bD9befZ0
         JpJaEMJNHnDTjVdJnoPxuQPvBYdl9U6yXsj9p2qGi79fPrVYpM3QAmOAT2ppnrTn0D3P
         U7JQCpBP09rFtCwCEL0pih2PVJ2uPi3Hbay6sEHcBhdodIs8k+DLz/JFcavfnOjaCg17
         nESFOkVfPHUvWU2OMNEkN4gb5Lus7Eu0GK6ro3W5Ebh/44PCoV4lrM6ld8wJpZmpvSfJ
         jbiSB0EdpEEM1cZpiH8vVpw1KfH+YF5T11FlSFr2nPntX8U9XqUDtbFLv7Q2NZyfB0VR
         /YEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GE5xBWjtG9WBcqQI8QpdzBg3P6kzmtE86vK5W8A22FE=;
        b=rTUrnQJsuzDgxkwP41lEUy+j/0GHyTEx7eYijco+Pud1vzL8ZDHs42qKwCnlYmfHRt
         7s8B59lzn6hYfvOsAX155rs4gVxIuoGCrNyKwOC9VnkR7WRV8/hzG6+wOMO1WiQHlGHr
         Pqc5Rpt2eCw+JJzscVIDhZiOR8HfOFKaluh4nUmQqBZcZNqe31hTegHLjvWVBZ/1GWXO
         Q2FF2hsGz1ThfFjcDL/DIsbvJxSy20JMXKGtaJtEOMvHJnRAbdCm3du2oIBH7ltBFxLS
         Aj6gRgJBGYnr1on03FkVzv3MfU1XEVObt6x89AXmNlGfsHG6kV9FVZ/axb136lk4t7vk
         OeXA==
X-Gm-Message-State: AJIora+nTMfccD7NnbWrAG8rEKRvXIe9JDqRr2ZW719b2gThRlYg0JBg
	cVh6cWRr/LrNEmupO3qSHOU=
X-Google-Smtp-Source: AGRyM1u8/kthGc0kpB9GQ15WfHOOb7ssiu2IAI5tCBcbMUyhhCKcEeCpk68TInH1f8s1P+3vxGqdmg==
X-Received: by 2002:a17:906:b05a:b0:718:cc6b:61e0 with SMTP id bj26-20020a170906b05a00b00718cc6b61e0mr7378413ejb.501.1656236862769;
        Sun, 26 Jun 2022 02:47:42 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 2/2] hw/i386/xen/xen-hvm: Inline xen_piix_pci_write_config_client() and remove it
Date: Sun, 26 Jun 2022 11:46:56 +0200
Message-Id: <20220626094656.15673-3-shentey@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220626094656.15673-1-shentey@gmail.com>
References: <20220626094656.15673-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_piix_pci_write_config_client() is implemented in the Xen subtree and
uses PIIX constants internally, thus creating a direct dependency on
PIIX. Now that xen_set_pci_link_route() is stubbable, the logic of
xen_piix_pci_write_config_client() can be moved into PIIX, which
resolves the dependency.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/xen/xen-hvm.c | 18 ------------------
 hw/isa/piix3.c        | 15 ++++++++++++++-
 include/hw/xen/xen.h  |  1 -
 stubs/xen-hw-stub.c   |  4 ----
 4 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 204fda7949..e4293d6d66 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -15,7 +15,6 @@
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_host.h"
 #include "hw/i386/pc.h"
-#include "hw/southbridge/piix.h"
 #include "hw/irq.h"
 #include "hw/hw.h"
 #include "hw/i386/apic-msidef.h"
@@ -149,23 +148,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
                            irq_num & 3, level);
 }
 
-void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
-{
-    int i;
-
-    /* Scan for updates to PCI link routes (0x60-0x63). */
-    for (i = 0; i < len; i++) {
-        uint8_t v = (val >> (8 * i)) & 0xff;
-        if (v & 0x80) {
-            v = 0;
-        }
-        v &= 0xf;
-        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
-            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
-        }
-    }
-}
-
 int xen_set_pci_link_route(uint8_t link, uint8_t irq)
 {
     return xendevicemodel_set_pci_link_route(xen_dmod, xen_domid, link, irq);
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 6388558f92..48f9ab1096 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -138,7 +138,20 @@ static void piix3_write_config(PCIDevice *dev,
 static void piix3_write_config_xen(PCIDevice *dev,
                                    uint32_t address, uint32_t val, int len)
 {
-    xen_piix_pci_write_config_client(address, val, len);
+    int i;
+
+    /* Scan for updates to PCI link routes (0x60-0x63). */
+    for (i = 0; i < len; i++) {
+        uint8_t v = (val >> (8 * i)) & 0xff;
+        if (v & 0x80) {
+            v = 0;
+        }
+        v &= 0xf;
+        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
+            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
+        }
+    }
+
     piix3_write_config(dev, address, val, len);
 }
 
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 13bffaef53..afdf9c436a 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -23,7 +23,6 @@ extern bool xen_domid_restrict;
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 int xen_set_pci_link_route(uint8_t link, uint8_t irq);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
-void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
 int xen_is_pirq_msi(uint32_t msi_data);
 
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 743967623f..34a22f2ad7 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -19,10 +19,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
-void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
-{
-}
-
 int xen_set_pci_link_route(uint8_t link, uint8_t irq)
 {
     return -1;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 09:47:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 09:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356070.584069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orn-0008AW-3F; Sun, 26 Jun 2022 09:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356070.584069; Sun, 26 Jun 2022 09:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Orm-0008AP-W1; Sun, 26 Jun 2022 09:47:42 +0000
Received: by outflank-mailman (input) for mailman id 356070;
 Sun, 26 Jun 2022 09:47:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cB6r=XB=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1o5Orm-0008AI-LH
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 09:47:42 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 05257ae5-f535-11ec-bd2d-47488cf2e6aa;
 Sun, 26 Jun 2022 11:47:41 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id z19so9140889edb.11
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 02:47:41 -0700 (PDT)
Received: from localhost.localdomain
 (dynamic-078-055-174-013.78.55.pool.telefonica.de. [78.55.174.13])
 by smtp.gmail.com with ESMTPSA id
 t4-20020a17090605c400b00706242d297fsm3504752ejt.212.2022.06.26.02.47.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 02:47:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05257ae5-f535-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=/18Wn59mCVUcj5sOf0xFz2mH+5FELr59xslfCm8MOAc=;
        b=KQFT2epSHelgMWfg3dPUpwaqRsz0pA/6e5/iNbH31U8kKSk7PILo/zpeyNLhXSDyXY
         ksboRyLpVAxrJRZjtAxwI3LHi4Wev6eMXmfCmklaVS9jL5Z0KdOvXRm8kckW8cQo37FC
         YxXRil6dPmLObGen9MY5oVQjsCSzQpEAo2wWpnmmO4mbiRNQBYOWldMYZSjEPl0225FR
         bEtKL5nACHbm2OBsnHlqafqzkrPAhIn/1zznkFdgYP0bcXQdy3zrgOkXKwbBqj6/7JAa
         sV981rtXpApP/c92nYXlgVMSFrNCgMY6+ei3LwETElwYuEt2S19vSnyRJs3hiw/3EKzy
         utPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=/18Wn59mCVUcj5sOf0xFz2mH+5FELr59xslfCm8MOAc=;
        b=IyVyajdPmZHjwfPcKluIrJKuGXjh0JkIPbjnqazA3DytNaKSSaq69IapUjBMjNX3QJ
         iBok9DaNfeher9mPvCA9UzQuXOMczsWDmnEm0lYgdrmHktd9mL6f6irT4XCx8CTQNN0D
         08Je75y1lj330mItFW6oPgtaHaUk0mn2KwBDUrXDDz1bZDYfCR8+TTQVVXTSkRgf2ENo
         /KcbhbAMxb+oijxdNVGVoz9X/G5HXl0DG4UNbtKSm9JP9Naa1vV20IO1vL2ipta8cOJv
         vSfhEoSlLUpp+9y7FqA60RcOxEpEnGPxjVraR1njfmuWt4gNaNCeQBYSz5PtJNtz7qAq
         krew==
X-Gm-Message-State: AJIora912OnXEOuxmRPfaG91QjgUEYYSeDr6qEYESGURqMM5Em1f8DmV
	vu5vFThFTmXat/TU0C8hm8I=
X-Google-Smtp-Source: AGRyM1v/lS+4NjVferS6kUoLylyCQHZ+ljz7kEWCPuWjTusvwodjvL0yfMAzSACL+nodY1l45DSCNg==
X-Received: by 2002:a05:6402:26d5:b0:435:aba2:9495 with SMTP id x21-20020a05640226d500b00435aba29495mr9923552edd.133.1656236860604;
        Sun, 26 Jun 2022 02:47:40 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 0/2] Decouple Xen-HVM from PIIX
Date: Sun, 26 Jun 2022 11:46:54 +0200
Message-Id: <20220626094656.15673-1-shentey@gmail.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

hw/i386/xen/xen-hvm.c contains logic which is PIIX-specific. This makes xen-hvm.c depend on PIIX which can be avoided if PIIX logic was isolated in PIIX itself.

Bernhard Beschow (2):
  hw/i386/xen/xen-hvm: Allow for stubbing xen_set_pci_link_route()
  hw/i386/xen/xen-hvm: Inline xen_piix_pci_write_config_client() and
    remove it

 hw/i386/xen/xen-hvm.c       | 17 ++---------------
 hw/isa/piix3.c              | 15 ++++++++++++++-
 include/hw/xen/xen.h        |  2 +-
 include/hw/xen/xen_common.h |  6 ------
 stubs/xen-hw-stub.c         |  3 ++-
 5 files changed, 19 insertions(+), 24 deletions(-)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 10:18:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 10:18:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356089.584102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5PLA-0004Op-0r; Sun, 26 Jun 2022 10:18:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356089.584102; Sun, 26 Jun 2022 10:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5PL9-0004Oi-TV; Sun, 26 Jun 2022 10:18:03 +0000
Received: by outflank-mailman (input) for mailman id 356089;
 Sun, 26 Jun 2022 10:18:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5PL8-0004OY-Df; Sun, 26 Jun 2022 10:18:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5PL8-0001Bx-AD; Sun, 26 Jun 2022 10:18:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5PL7-0003uQ-N4; Sun, 26 Jun 2022 10:18:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5PL7-00040y-Mb; Sun, 26 Jun 2022 10:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H8twgvfqSJUou0ncuZP5D/YG61mPijEe8x4n+GDTp0E=; b=PxbnY9ckKSj2QawsweWy8ripi5
	OsX1UjXfMqpGuv7M2vYuXki0kVk0Ly0Cuy9/XcJr/i1Dlv/6Fr5WDTgwq+UdnYwCGlxgBvuposKOh
	ddPsXfV2ES+3ODn68EJ3L74ZgDH+gelfCF9e4DhEiK8pRR8cKGC9Fuy8cDWFldu9jvNg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171357-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171357: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
X-Osstest-Versions-That:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 10:18:01 +0000

flight 171357 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171357/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate         fail pass in 171355

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 171355 blocked in 171357
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171355
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171355
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171355
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171355
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171355
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171355
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171355
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171355
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171355
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171355
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171355
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171355
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
baseline version:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526

Last test of basis   171357  2022-06-26 02:56:22 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 12:59:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 12:59:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356098.584113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5RrH-0003DC-Uh; Sun, 26 Jun 2022 12:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356098.584113; Sun, 26 Jun 2022 12:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5RrH-0003D5-RG; Sun, 26 Jun 2022 12:59:23 +0000
Received: by outflank-mailman (input) for mailman id 356098;
 Sun, 26 Jun 2022 12:59:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wW8K=XB=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1o5RrG-0003Cz-P6
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 12:59:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cab17e66-f54f-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 14:59:20 +0200 (CEST)
Received: from mail-ed1-f72.google.com (mail-ed1-f72.google.com
 [209.85.208.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-422-LIumsrBEPoSHYxQ-gQzt9Q-1; Sun, 26 Jun 2022 08:59:17 -0400
Received: by mail-ed1-f72.google.com with SMTP id
 h4-20020a056402280400b00435abcf8f2dso5267399ede.3
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 05:59:17 -0700 (PDT)
Received: from redhat.com ([2.54.171.2]) by smtp.gmail.com with ESMTPSA id
 p24-20020a170906615800b00709343c0017sm3831548ejl.98.2022.06.26.05.59.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 05:59:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cab17e66-f54f-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1656248359;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EWKAhxlYGU/zXw6dAaoTgwOVq0KRBRGkFCQdwQ8ZYEI=;
	b=dGagiqUTsheALfqqEJQJZUQg9Y84U31IsYHFPUSxbkOTf9pM+Usqz59c38fuY0OXegSnoF
	qIuFzAxIw+xlYwXp5ypBNtzT9ykFVdW4k8ZAqMfV5Gc4Wi93qITXZgPei0hhW4OSwRVffp
	0Puzx09wssmxCXf7DI4Oo5TIwzia8iQ=
X-MC-Unique: LIumsrBEPoSHYxQ-gQzt9Q-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=EWKAhxlYGU/zXw6dAaoTgwOVq0KRBRGkFCQdwQ8ZYEI=;
        b=ms+LTXtzszx3BKtHcZ6v7SAXquIQXLpRaOSOaWyx4aOom3dqUJ3NPoLakFDVdX/ALV
         Tx4xYTbx24cNfCXRDNohXz7BMHicZ6pTAFugqKKYd/nGRFZ5Zvzwr3Wuh8pU11YWaFFa
         kKcpxPGx6wM7ehZB3/UnvLzUS3hmpHhEUnaCos+tPXgGwwW1xposZBZM5lIjgCOislwX
         P4uVXRDp0YMZMeNOtPygFkHWnwcshxoojKlckZ7zJ63i4nukIM+yK0DQJBEHRPbhCW0F
         nsHNX+Z8xcE948USU32TDjrkEkxvLXOgwGYmSBJIMLD4dvSyE9pJPfY6YRA3i0EHInab
         vaKw==
X-Gm-Message-State: AJIora9+Y3nMYlUu5Vx9QfVSlHf64d8UDYLpNjT9T6TJyD26uc1HFQ/Q
	BNYCz1K7fc23Pt2kx39Iz6UebRMwygym63wzve1QQ5vSEVIgT1V8iMH5H49SbWb0JqRso5q5lHw
	llEZ0WQhH0mP4/m3HrUKYBkHsvO8=
X-Received: by 2002:a50:fc15:0:b0:435:7897:e8ab with SMTP id i21-20020a50fc15000000b004357897e8abmr10723578edr.17.1656248356417;
        Sun, 26 Jun 2022 05:59:16 -0700 (PDT)
X-Google-Smtp-Source: AGRyM1vWZ8v0/l6cE/oBqSOcuPEcSCQehWDW5ulN3bcCd4MSEDOnZY2sq6sSedgWnQ4Hs6d/v0G4yw==
X-Received: by 2002:a50:fc15:0:b0:435:7897:e8ab with SMTP id i21-20020a50fc15000000b004357897e8abmr10723556edr.17.1656248356202;
        Sun, 26 Jun 2022 05:59:16 -0700 (PDT)
Date: Sun, 26 Jun 2022 08:59:11 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Bernhard Beschow <shentey@gmail.com>
Cc: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH 2/2] hw/i386/xen/xen-hvm: Inline
 xen_piix_pci_write_config_client() and remove it
Message-ID: <20220626085903-mutt-send-email-mst@kernel.org>
References: <20220626094656.15673-1-shentey@gmail.com>
 <20220626094656.15673-3-shentey@gmail.com>
MIME-Version: 1.0
In-Reply-To: <20220626094656.15673-3-shentey@gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sun, Jun 26, 2022 at 11:46:56AM +0200, Bernhard Beschow wrote:
> xen_piix_pci_write_config_client() is implemented in the xen sub tree and
> uses PIIX constants internally, thus creating a direct dependency on
> PIIX. Now that xen_set_pci_link_route() is stubbable, the logic of
> xen_piix_pci_write_config_client() can be moved to PIIX which resolves
> the dependency.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>

Fine by me

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  hw/i386/xen/xen-hvm.c | 18 ------------------
>  hw/isa/piix3.c        | 15 ++++++++++++++-
>  include/hw/xen/xen.h  |  1 -
>  stubs/xen-hw-stub.c   |  4 ----
>  4 files changed, 14 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 204fda7949..e4293d6d66 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -15,7 +15,6 @@
>  #include "hw/pci/pci.h"
>  #include "hw/pci/pci_host.h"
>  #include "hw/i386/pc.h"
> -#include "hw/southbridge/piix.h"
>  #include "hw/irq.h"
>  #include "hw/hw.h"
>  #include "hw/i386/apic-msidef.h"
> @@ -149,23 +148,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>                             irq_num & 3, level);
>  }
>  
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
> -{
> -    int i;
> -
> -    /* Scan for updates to PCI link routes (0x60-0x63). */
> -    for (i = 0; i < len; i++) {
> -        uint8_t v = (val >> (8 * i)) & 0xff;
> -        if (v & 0x80) {
> -            v = 0;
> -        }
> -        v &= 0xf;
> -        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
> -            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
> -        }
> -    }
> -}
> -
>  int xen_set_pci_link_route(uint8_t link, uint8_t irq)
>  {
>      return xendevicemodel_set_pci_link_route(xen_dmod, xen_domid, link, irq);
> diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
> index 6388558f92..48f9ab1096 100644
> --- a/hw/isa/piix3.c
> +++ b/hw/isa/piix3.c
> @@ -138,7 +138,20 @@ static void piix3_write_config(PCIDevice *dev,
>  static void piix3_write_config_xen(PCIDevice *dev,
>                                     uint32_t address, uint32_t val, int len)
>  {
> -    xen_piix_pci_write_config_client(address, val, len);
> +    int i;
> +
> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> +    for (i = 0; i < len; i++) {
> +        uint8_t v = (val >> (8 * i)) & 0xff;
> +        if (v & 0x80) {
> +            v = 0;
> +        }
> +        v &= 0xf;
> +        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
> +            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
> +        }
> +    }
> +
>      piix3_write_config(dev, address, val, len);
>  }
>  
> diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> index 13bffaef53..afdf9c436a 100644
> --- a/include/hw/xen/xen.h
> +++ b/include/hw/xen/xen.h
> @@ -23,7 +23,6 @@ extern bool xen_domid_restrict;
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>  int xen_set_pci_link_route(uint8_t link, uint8_t irq);
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level);
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
>  void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
>  int xen_is_pirq_msi(uint32_t msi_data);
>  
> diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
> index 743967623f..34a22f2ad7 100644
> --- a/stubs/xen-hw-stub.c
> +++ b/stubs/xen-hw-stub.c
> @@ -19,10 +19,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>  {
>  }
>  
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
> -{
> -}
> -
>  int xen_set_pci_link_route(uint8_t link, uint8_t irq)
>  {
>      return -1;
> -- 
> 2.36.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 14:07:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 14:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356105.584123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Suz-0001ui-Sq; Sun, 26 Jun 2022 14:07:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356105.584123; Sun, 26 Jun 2022 14:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5Suz-0001ub-Q1; Sun, 26 Jun 2022 14:07:17 +0000
Received: by outflank-mailman (input) for mailman id 356105;
 Sun, 26 Jun 2022 14:07:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Suz-0001uR-0L; Sun, 26 Jun 2022 14:07:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Suy-0005oU-SV; Sun, 26 Jun 2022 14:07:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Suy-0000xr-9V; Sun, 26 Jun 2022 14:07:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5Suy-0004Zj-8m; Sun, 26 Jun 2022 14:07:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eStnMfqVHbLFTd9rjzaq8EzcX+kl6R4hGbuQnyGyzCc=; b=Lq2PWxVE9DiT7rWhhxXibmT5vr
	NC5ufUqQb3ARi/HpdOaWRYIc2Cdy3VLJCv/JRLGXZ4UL4f1f6RuFju4Vply8K38ZuxDOlhRYssOcX
	GB7HcUET1khi/OC7FkojPTdnc7J+SuYxwA/658c+Vv1IBxm5XsrMrQiGDj7gN6cmt4uA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171359-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171359: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0840a7914caa14315a3191178a9f72c742477860
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 14:07:16 +0000

flight 171359 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171359/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0840a7914caa14315a3191178a9f72c742477860
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    7 days
Failing since        171280  2022-06-19 15:12:25 Z    6 days   20 attempts
Testing same since   171356  2022-06-25 21:11:35 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aashish Sharma <shraash@google.com>
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Akira Yokosawa <akiyks@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Antoniu Miclaus <antoniu.miclaus@analog.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Ballon Shi <ballon.shi@quectel.com>
  Bart Van Assche <bvanassche@acm.org>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlo Lobrano <c.lobrano@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Christian Lamparter <chunkeey@gmail.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Daeho Jeong <daehojeong@google.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Vacura <w36195@motorola.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Eddie Huang <eddie.huang@mediatek.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Stuebner <heiko@sntech.de>
  Hillf Danton <hdanton@sina.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Hongyu Xie <xiehongyu1@kylinos.cn>
  Hongyu Xie <xy521521@gmail.com>
  Huacai Chen <chenhuacai@loongson.cn>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jean-Baptiste Maneyrol <jean-baptiste.maneyrol@tdk.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jialin Zhang <zhangjialin11@huawei.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  John Fastabend <john.fastabend@gmail.com>
  Jon <jon.lin@rock-chips.com>
  Jon Lin <jon.lin@rock-chips.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liam Beguin <liambeguin@gmail.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Lv Ruyi <lv.ruyi@zte.com.cn>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Rosin <peda@axentia.se>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Randy Dunlap <rdunlap@infradead.org>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Robert Marko <robimarko@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sai Krishna Potthuri <lakshmi.sai.krishna.potthuri@xilinx.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Sander Vanheule <sander@svanheule.net>
  Sascha Hauer <s.hauer@pengutronix.de>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Boyd <swboyd@chromium.org>
  Stephen Hemminger <stephen@networkplumber.org>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Tanveer Alam <tanveer1.alam@intel.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tim Crawford <tcrawford@system76.com>
  Tom Schwindl <schwindl@posteo.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Utkarsh Patel <utkarsh.h.patel@intel.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yannick Brosseau <yannick.brosseau@gmail.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Liao <liaoyu15@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 9761 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 18:46:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 18:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356115.584146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5XGc-0004FA-4D; Sun, 26 Jun 2022 18:45:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356115.584146; Sun, 26 Jun 2022 18:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5XGc-0004F3-0w; Sun, 26 Jun 2022 18:45:54 +0000
Received: by outflank-mailman (input) for mailman id 356115;
 Sun, 26 Jun 2022 18:45:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5XGa-0003zd-No
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 18:45:52 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3424f11a-f580-11ec-bd2d-47488cf2e6aa;
 Sun, 26 Jun 2022 20:45:52 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id c13so10178301eds.10
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 11:45:52 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a1709070b0700b00711d8696de9sm4019609ejl.70.2022.06.26.11.45.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 11:45:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3424f11a-f580-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=R57KbytAbDK41N7wZwRgrJuymEgkyCgxDTH+lUlBCWI=;
        b=jPcgV+UVoAzqOXv43SqwA+/iIPN0cjB17rJUVp7km/ULG6YlIBoSDLpN2AY7fNqpDR
         e6DlAzhSm5KXBoRAo41MwtKbQEmOx44IFROUfU407CRxsh3NyrFgJOrKUa59pBCvib9x
         IhJKsamzUG7xSMnAEFBke3W0TdbH+S6sBz9CyFi3Ifu4T+YQakl3bh3VkCu/5gTh1Hl8
         K9OSBJCio52cIbDlRHPFmYTKP6vdGrwWD1EDvhFIZnQkOV3FYSNeR8xIhhVCQK6QQlvq
         a5VghOg+NVTnpAmgWcL7j9ZTIO/izMpfgNLDhrbdAsPa25SXE5Uy6v4rH26S5zitL/wo
         C5vA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=R57KbytAbDK41N7wZwRgrJuymEgkyCgxDTH+lUlBCWI=;
        b=3rllWAfpiHNOmOfchsJD9BPfVTi85AUvmhFZOKAFMymlfQmtGDcztWNunB3Hs3i35O
         gM+/0yR6ilZ7c1olR9S8FgMy7KzzSo4iKSKn+5H125Q7XMB9n86YABNy9zQDHqp0K5HC
         q8ZnFPMr7j/nHmzu+M3jGGs3PYesfhCy/YLlVd4r/0n9zV+wdFebfAMm4L9P6p8wEEhc
         cMXUBRRjbucaj19YhfEt+seDzq8nUabh/lXTciV9sipQPz5m0ry17PcQi6fHLBgZgqQ0
         M40GirRcK5hpF6R+/2kV9u9ej1MGyZC1sFQLayUvRwKFxDrWgTUeuY7ANAC98FBrCGbY
         hQ3A==
X-Gm-Message-State: AJIora+L+tmw32rCL/ajoTj8BD/u3gukGU+tC4Dr/c6pPjna6m60A0on
	I86bkx1zjdnhqoOmbax8ZDtp2Hhf7vw=
X-Google-Smtp-Source: AGRyM1uSUtpJPuatrGKpEoGzr/vWJiHxcXqwI8D/ypG7JgIb1QTc96drf1DCfiaica4jWXtz3amfzA==
X-Received: by 2002:a05:6402:4386:b0:437:6450:b41f with SMTP id o6-20020a056402438600b004376450b41fmr12295437edc.97.1656269151589;
        Sun, 26 Jun 2022 11:45:51 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by default
Date: Sun, 26 Jun 2022 21:45:36 +0300
Message-Id: <20220626184536.666647-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626184536.666647-1-burzalodowa@gmail.com>
References: <20220626184536.666647-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To be in line with Xen, do not enable direct mapping automatically for all
statically allocated domains.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 README.md                | 4 ++--
 scripts/uboot-script-gen | 8 ++------
 2 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index cb15ca5..03e437b 100644
--- a/README.md
+++ b/README.md
@@ -169,8 +169,8 @@ Where:
   if specified, indicates the host physical address regions
   [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
 
-- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
-  If set to 1, the VM is direct mapped. The default is 1.
+- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
+  By default, direct mapping is disabled.
   This is only applicable when DOMU_STATIC_MEM is specified.
 
 - LINUX is optional but specifies the Linux kernel for when Xen is NOT
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 085e29f..66ce6f7 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -52,7 +52,7 @@ function dt_set()
             echo "fdt set $path $var $array" >> $UBOOT_SOURCE
         elif test $data_type = "bool"
         then
-            if test "$data" -eq 1
+            if test "$data" == "1"
             then
                 echo "fdt set $path $var" >> $UBOOT_SOURCE
             fi
@@ -74,7 +74,7 @@ function dt_set()
             fdtput $FDTEDIT -p -t s $path $var $data
         elif test $data_type = "bool"
         then
-            if test "$data" -eq 1
+            if test "$data" == "1"
             then
                 fdtput $FDTEDIT -p $path $var
             fi
@@ -491,10 +491,6 @@ function xen_config()
         then
             DOMU_CMD[$i]="console=ttyAMA0"
         fi
-        if test -z "${DOMU_DIRECT_MAP[$i]}"
-        then
-             DOMU_DIRECT_MAP[$i]=1
-        fi
         i=$(( $i + 1 ))
     done
 }
-- 
2.34.1

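For illustration, a minimal config fragment in the style documented by the README hunk above (DOMU_KERNEL is a hypothetical value here; DOMU_STATIC_MEM and DOMU_DIRECT_MAP are the parameters the patch touches). After this change, direct mapping is opt-in rather than the default for statically allocated domains:

```shell
# Sketch of a domU config fragment, assuming the README.md parameter names.
NUM_DOMUS=1
DOMU_KERNEL[0]="domU-kernel-image"            # hypothetical kernel path
DOMU_STATIC_MEM[0]="0x30000000 0x20000000"    # baseaddr size, statically allocated
DOMU_DIRECT_MAP[0]=1                          # must now be set explicitly;
                                              # before this patch it defaulted to 1
```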


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 18:46:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 18:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356114.584135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5XGa-0003zq-R5; Sun, 26 Jun 2022 18:45:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356114.584135; Sun, 26 Jun 2022 18:45:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5XGa-0003zj-O1; Sun, 26 Jun 2022 18:45:52 +0000
Received: by outflank-mailman (input) for mailman id 356114;
 Sun, 26 Jun 2022 18:45:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5XGZ-0003zd-96
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 18:45:51 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32730784-f580-11ec-bd2d-47488cf2e6aa;
 Sun, 26 Jun 2022 20:45:49 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id c13so10178301eds.10
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 11:45:49 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a1709070b0700b00711d8696de9sm4019609ejl.70.2022.06.26.11.45.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 11:45:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32730784-f580-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=+muHyAkkc+movO68H9ldsLy8902fAEHYYZE1ioXjz5o=;
        b=mlx54YBcrvyEZoxkBVo63xB5IybpvxQ3mTf+IrL3kLWN6fo2fPEXHMhlAoCEzB/lh4
         K0KL4eoLCWvPXVsN97JELNmx3w5zXTvc0exAeBD/i5HOXEiLyZ34nlnoyQ5VKEx7GNl+
         nhjigY6vsH547RfTFRGdv2U37On+tNCp96HvmaUJD6AI/kE8Em9p/KGEJz/m+B0AShlh
         HrRBkVsr2B64omq/1VKogZ5uAV/5S7OZpPZcl1NKyRy5R1KW0UHPxMTEwbY16hPUT7tP
         wkFHtPdCx/r49ynjxMZ9Dx/2AgBmzKlChXwJGBt62J6qjaEqRhiTtdfG7Jg317lvHENb
         Ma7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=+muHyAkkc+movO68H9ldsLy8902fAEHYYZE1ioXjz5o=;
        b=V3aoyZfKNycyy4dCXiJOgJ+gaoSi4rj5Bo0LcrZFtcXbgM9gcWzRTmExXSxVcNQBS7
         7Zlom4RLDLZpTUrJUXdqyRi38wYS5H9nn8UvlBGkRN0By1timiM0MOjit8dDC4dlwXRr
         hr9rQZVSQ+V2p9K7fO/3YPp68K0JCR8NcAEUH9SvfnCIWeOXB+uzTr9YtjMITtSboRSl
         RDI0FzeB2zxCAMgoQvzefDiVnoFYsmwROPi8kfZek/z/+mmpzbubItJFPTWdHBjHWQ4V
         GtRJ28ooPzyIWppbFZfTsp2eRbmfq7q9AM3AJc9mNiR39MoAA1EVHcUXbBjWChYgqCsU
         658Q==
X-Gm-Message-State: AJIora8C0vIuKoEETJazthLAzc3uS86EjppaMdr5GnHP5GqqAuM3iQ4n
	KkQ/TSw4bPZ20bEN1GOvzTR89J09xKk=
X-Google-Smtp-Source: AGRyM1vEyKV05yR6TIXWbiTaZ5SJn0lMZXPCIIDrEPa8bMpbqDn6WxibH6bnE5jxv2+d3uGJlT347Q==
X-Received: by 2002:a05:6402:2987:b0:434:ef34:6617 with SMTP id eq7-20020a056402298700b00434ef346617mr12486710edb.176.1656269148758;
        Sun, 26 Jun 2022 11:45:48 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	viryaos-discuss@lists.sourceforge.net,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: [PATCH 1/2] uboot-script-gen: prevent user mistakes due to DOM0_KERNEL becoming optional
Date: Sun, 26 Jun 2022 21:45:35 +0300
Message-Id: <20220626184536.666647-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Before true dom0less configurations were enabled, the script failed immediately
if the DOM0_KERNEL parameter was not specified. This behaviour has changed, and
the change needs to be communicated to the user.

Mention in README.md that for dom0less configurations, the parameter
DOM0_KERNEL is optional.

If DOM0_KERNEL is not set, check that no other dom0-specific parameters are
specified by the user, and fail the script early with an appropriate error
message if it was invoked with an erroneous configuration.

Change the message "Dom0 kernel is not specified, continue with dom0less setup."
to "Dom0 kernel is not specified, continue with true dom0less setup."
so that it refers more accurately to a dom0less setup without dom0.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 README.md                |  1 +
 scripts/uboot-script-gen | 21 ++++++++++++++-------
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 17ff206..cb15ca5 100644
--- a/README.md
+++ b/README.md
@@ -100,6 +100,7 @@ Where:
   been specified in XEN_PASSTHROUGH_PATHS.
 
 - DOM0_KERNEL specifies the Dom0 kernel file to load.
+  For dom0less configurations, the parameter is optional.
 
 - DOM0_MEM specifies the amount of memory for Dom0 VM in MB. The default
   is 1024. This is only applicable when XEN_CMD is not specified.
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index e85c6ec..085e29f 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -410,6 +410,20 @@ function find_root_dev()
 
 function xen_config()
 {
+    if test -z "$DOM0_KERNEL"
+    then
+        if test "$NUM_DOMUS" -eq "0"
+        then
+            echo "Neither dom0 or domUs are specified, exiting."
+            exit 1
+        elif test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_COLORS" || test "$DOM0_CMD" || test "$DOM0_RAMDISK" || test "$DOM0_ROOTFS"
+        then
+            echo "For dom0less configuration without dom0, no dom0 specific parameters should be specified, exiting."
+            exit 1
+        fi
+        echo "Dom0 kernel is not specified, continue with true dom0less setup."
+    fi
+
     if [ -z "$XEN_CMD" ]
     then
         if [ -z "$DOM0_MEM" ]
@@ -457,13 +471,6 @@ function xen_config()
     fi
     if test -z "$DOM0_KERNEL"
     then
-        if test "$NUM_DOMUS" -eq "0"
-        then
-            echo "Neither dom0 or domUs are specified, exiting."
-            exit 1
-        fi
-        echo "Dom0 kernel is not specified, continue with dom0less setup."
-        unset DOM0_RAMDISK
         # Remove dom0 specific parameters from the XEN command line.
         local params=($XEN_CMD)
         XEN_CMD="${params[@]/dom0*/}"
-- 
2.34.1

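The validation this patch adds can be sketched as a standalone function (this is an illustrative rewrite, not the real uboot-script-gen; only a subset of the dom0 parameters from the diff is checked here): with DOM0_KERNEL unset, any dom0-specific parameter is now rejected up front instead of being silently ignored.

```shell
# Standalone sketch of the early dom0less validation added by this patch.
check_dom0less_config() {
    if test -z "$DOM0_KERNEL"
    then
        if test "$NUM_DOMUS" -eq 0
        then
            echo "Neither dom0 or domUs are specified, exiting."
            return 1
        elif test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_CMD"
        then
            echo "For dom0less configuration without dom0, no dom0 specific parameters should be specified, exiting."
            return 1
        fi
        echo "Dom0 kernel is not specified, continue with true dom0less setup."
    fi
    return 0
}

# A true dom0less config (domUs only, no dom0 parameters) passes the check ...
NUM_DOMUS=2
check_dom0less_config

# ... but mixing in a dom0 parameter without DOM0_KERNEL is rejected early.
DOM0_MEM=1024
check_dom0less_config || echo "configuration rejected"
```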


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 20:17:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 20:17:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356126.584157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5YhG-0005F4-Qy; Sun, 26 Jun 2022 20:17:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356126.584157; Sun, 26 Jun 2022 20:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5YhG-0005Ex-NV; Sun, 26 Jun 2022 20:17:30 +0000
Received: by outflank-mailman (input) for mailman id 356126;
 Sun, 26 Jun 2022 20:17:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5YhF-0005En-Sw; Sun, 26 Jun 2022 20:17:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5YhF-00062s-Nl; Sun, 26 Jun 2022 20:17:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5YhF-0001Md-5h; Sun, 26 Jun 2022 20:17:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5YhF-0004WG-5E; Sun, 26 Jun 2022 20:17:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I34pBH3mCICqYm2qLXwkUWpHfhVbfNgrlD5gHGQVI1k=; b=J+Eb/BZtmPITBy3ENrrTFNifcc
	JwUX30LgcXI0k7t3RKNN6M7zV4AHurPHpTDBiywvz3cAM1Ny2HIAlCLr0bF24a7q0eRCPCjK2EPWs
	oKT3TZZkWmWpVkuZW+8SReOmy8iVsiq9tYF7jXmi6MfB6s1x1halKBXeOAnRyPR4JsPg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171360-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171360: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0840a7914caa14315a3191178a9f72c742477860
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jun 2022 20:17:29 +0000

flight 171360 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171360/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0840a7914caa14315a3191178a9f72c742477860
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    7 days
Failing since        171280  2022-06-19 15:12:25 Z    7 days   21 attempts
Testing same since   171356  2022-06-25 21:11:35 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aashish Sharma <shraash@google.com>
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Akira Yokosawa <akiyks@gmail.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Egorenkov <egorenar@linux.ibm.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Ali Saidi <alisaidi@amazon.com>
  Alistair Popple <apopple@nvidia.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antoine Tenart <atenart@kernel.org>
  Antoniu Miclaus <antoniu.miclaus@analog.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Athira Jajeev <atrajeev@linux.vnet.ibm.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Ballon Shi <ballon.shi@quectel.com>
  Bart Van Assche <bvanassche@acm.org>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian Foster <bfoster@redhat.com>
  Carlo Lobrano <c.lobrano@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Chevron Li<chevron.li@bayhubtech.com>
  Christian Lamparter <chunkeey@gmail.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Ciara Loftus <ciara.loftus@intel.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Daeho Jeong <daehojeong@google.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Vacura <w36195@motorola.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniil Dementev <d.dementev@ispras.ru>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Howells <dhowells@redhat.com>
  David Rientjes <rientjes@google.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Ding Xiang <dingxiang@cmss.chinamobile.com>
  Dmitry Klochkov <kdmitry556@gmail.com>
  Dmitry Osipenko <dmitry.osipenko@collabora.com>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Dylan Yudaken <dylany@fb.com>
  Eddie Huang <eddie.huang@mediatek.com>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evgeniy Baskov <baskov@ispras.ru>
  Filipe Manana <fdmanana@suse.com>
  Florian Westphal <fw@strlen.de>
  Gautam Menghani <gautammenghani201@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  George Shen <george.shen@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Stuebner <heiko@sntech.de>
  Hillf Danton <hdanton@sina.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Hongyu Xie <xiehongyu1@kylinos.cn>
  Hongyu Xie <xy521521@gmail.com>
  Huacai Chen <chenhuacai@loongson.cn>
  huhai <huhai@kylinos.cn>
  Hyeonggon Yoo <42.hyeyoo@gmail.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ian Rogers <irogers@google.com>
  Ivan Vecera <ivecera@redhat.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamie Iles <jamie@jamieiles.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jean-Baptiste Maneyrol <jean-baptiste.maneyrol@tdk.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jialin Zhang <zhangjialin11@huawei.com>
  Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jie2x Zhou <jie2x.zhou@intel.com>
  Jing-Ting Wu <jing-ting.wu@mediatek.com>
  Jiri Olsa <jolsa@kernel.org>
  Joe Damato <jdamato@fastly.com>
  Joel Granados <j.granados@samsung.com>
  Joel Savitz <jsavitz@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  John Fastabend <john.fastabend@gmail.com>
  Jon <jon.lin@rock-chips.com>
  Jon Lin <jon.lin@rock-chips.com>
  Jon Maloy <jmaloy@redhat.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Lemon <jonathan.lemon@gmail.com>
  Jonathan Marek <jonathan@marek.ca>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Joshua Ashton <joshua@froggi.es>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Ken Moffat <zarniwhoop@ntlworld.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kumar Kartikeya Dwivedi <memxor@gmail.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Leo Savernik <l.savernik@aon.at>
  Leo Yan <leo.yan@linaro.org>
  Li Nan <linan122@huawei.com>
  Liam Beguin <liambeguin@gmail.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Lukas Wunner <lukas@wunner.de>
  Lv Ruyi <lv.ruyi@zte.com.cn>
  Maciej Fijalkowski <maciej.fijalkowski@intel.com>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxime Ripard <maxime@cerno.tech>
  Maximilian Luz <luzmaximilian@gmail.com>
  Maya Matuszczyk <maccraft123mc@gmail.com>
  Mengqi Zhang <mengqi.zhang@mediatek.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Petlan <mpetlan@redhat.com>
  Michal Simek <michal.simek@amd.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Mingwei Zhang <mizhang@google.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Chancellor <nathan@kernel.org> # build
  Nico Pache <npache@redhat.com>
  nikitashvets@flyium.com
  Nikos Tsironis <ntsironis@arrikto.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Upton <oupton@google.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Gonda <pgonda@google.com>
  Peter Rosin <peda@axentia.se>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qu Wenruo <wqu@suse.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raghavendra Rao Ananta <rananta@google.com>
  Randy Dunlap <rdunlap@infradead.org>
  Riccardo Paolo Bestetti <pbl@bestov.io>
  Rob Clark <robdclark@chromium.org>
  Rob Clark <robdclark@gmail.com>
  Rob Herring <robh@kernel.org>
  Robert Marko <robimarko@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sai Krishna Potthuri <lakshmi.sai.krishna.potthuri@xilinx.com>
  Samuel Holland <samuel@sholland.org>
  Sandeep Penigalapati <sandeep.penigalapati@intel.com>
  Sander Vanheule <sander@svanheule.net>
  Sascha Hauer <s.hauer@pengutronix.de>
  Saud Farooqui <farooqui_saud@hotmail.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM-14 (x86-64)
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Gorenko <sergeygo@nvidia.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soham Sen <contact@sohamsen.me>
  Song Liu <songliubraving@fb.com>
  Steev Klimaszewski <steev@kali.org>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Boyd <swboyd@chromium.org>
  Stephen Hemminger <stephen@networkplumber.org>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  sunliming <sunliming@kylinos.cn>
  syzbot+3e3f419f4a7816471838@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tali Perry <tali.perry1@gmail.com>
  Tanveer Alam <tanveer1.alam@intel.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tim Crawford <tcrawford@system76.com>
  Tom Schwindl <schwindl@posteo.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Utkarsh Patel <utkarsh.h.patel@intel.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Fedorenko <vadfed@fb.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wentao_Liang <Wentao_Liang_g@163.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Wolfram Sang <wsa@kernel.org>
  Xiang wangx <wangxiang@cdjrlc.com>
  Xiubo Li <xiubli@redhat.com>
  Xu Jia <xujia39@huawei.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yannick Brosseau <yannick.brosseau@gmail.com>
  Ying Xue <ying.xue@windriver.com>
  Yonghong Song <yhs@fb.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Liao <liaoyu15@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 9761 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356137.584189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXp-0003W8-Gp; Sun, 26 Jun 2022 21:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356137.584189; Sun, 26 Jun 2022 21:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXp-0003W1-Dq; Sun, 26 Jun 2022 21:11:49 +0000
Received: by outflank-mailman (input) for mailman id 356137;
 Sun, 26 Jun 2022 21:11:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXo-0002ze-4J
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:48 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 96a317e6-f594-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 23:11:47 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id eo8so10553039edb.0
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:47 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96a317e6-f594-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=nuU5RdK7p3A+qPrUO/Jr3dM+wawCSfpGpfW4baMZN1o=;
        b=WpQxztn4nBsr82cJ9eOZah6UtylOZg7AnA56/KVKxKTt9oh2n1FZWC3Bm3MprrBI5m
         9cNC6/OGiNYKWqZm14RYDQvL8OO5O6swZL1K91SmSlC+eJHvIubQmaS4RmNqAAP80U2g
         CsTG1O68POh/vetLnbSuc+dS1GaV6PzisV1sVOA1PFgFVjOYK/NomIx/VuWyUrwc3ZDa
         BTXw06zoRN8pLFb9aJoVyFbHaAw4INhhLeMpFKv6uyrP7TStYcWisyQbNlxb0NaV1btx
         dw3ZcSaj/tmw4pRP0da5dyeUT2nL72p26qq0sKfqeMV491SnuGoh3I/WUE3wSQHvOezc
         DgQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=nuU5RdK7p3A+qPrUO/Jr3dM+wawCSfpGpfW4baMZN1o=;
        b=t21KKj1KKy436cHUNN+AJNN2Tor0I8hPbB6+k8EeDcCnSy9USmYsuL/VIj3ZiGpE2j
         TUzcYayZxXTUCOvVodehFSm/73bPXPkfk4vxvgSAmIcQ7os+DMInsyIq+/Oz+GUXUbP7
         sm28gSD7Wowwpn9P+mItpoqhw7a+LVGS1GF47tZpf/U5rsfxLxcSHGmoWw+5LGXYnpjx
         +Bg0PXsqv6vjOLoL6oLoHiX9dx0Rk720DQnbn0hYUEQtmuV02pGPBEjvCeUaJOFStpDD
         hmyuvwp9sfKKD9QtPyZ6e8Z706+rylqKYW1mb1L0k/MR7I9i7E66iTwQkN8FBcUC6ozr
         Kljg==
X-Gm-Message-State: AJIora+RjC0SFh2vrHomQBWULY/4DT2jrkLvq9I7C5sLmnETWasccYx9
	I8Q94BUZBYrlySsePVmsjYbcwrGZ7fU=
X-Google-Smtp-Source: AGRyM1uqxPILZfgMHtiSrlaohNVIDdzBMh410mKVJtBS0assx2Gn2edp44afW8udB/i1s3pRO8iozA==
X-Received: by 2002:a05:6402:4496:b0:435:d605:6ff8 with SMTP id er22-20020a056402449600b00435d6056ff8mr12821122edb.357.1656277906805;
        Sun, 26 Jun 2022 14:11:46 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7 violation
Date: Mon, 27 Jun 2022 00:11:28 +0300
Message-Id: <20220626211131.678995-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626211131.678995-1-burzalodowa@gmail.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function vm_event_wake() is referenced only in vm_event.c.
Change the linkage of the function from external to internal by adding
the storage-class specifier static to the function definition.

This patch also indirectly resolves a MISRA C 2012 Rule 8.4 violation warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/common/vm_event.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 0b99a6ea72..ecf49c38a9 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -173,7 +173,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
  * call vm_event_wake() again, ensuring that any blocked vCPUs will get
  * unpaused once all the queued vCPUs have made it through.
  */
-void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
+static void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
 {
     if ( !list_empty(&ved->wq.list) )
         vm_event_wake_queued(d, ved);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356136.584178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXm-0003FF-92; Sun, 26 Jun 2022 21:11:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356136.584178; Sun, 26 Jun 2022 21:11:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXm-0003F8-6J; Sun, 26 Jun 2022 21:11:46 +0000
Received: by outflank-mailman (input) for mailman id 356136;
 Sun, 26 Jun 2022 21:11:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXl-0002ze-MG
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:45 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9534bc44-f594-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 23:11:45 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id ay16so15177748ejb.6
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:45 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9534bc44-f594-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=1/ZGiB26wkc6ln+SQl2Q+9yHj6lnaAaAN9Iu/vPcQ1g=;
        b=c/Dx4jLVI71H6lIglKPncvyiMVNDcz+lkyCneD8pZ0VtZmZafuJUi1+Q5Qt6BHFiKh
         G6vsEExUx4P0si3ydzfXuDuki08c0yI8CnkuJ98DvUKU0uSDi7QkQ+Ppz/9UvJC8yky7
         CCH+ioHJhpJAtINMlqymaglWBHHFLfU9ACjtDm7X+qJ/I1XKsv+Un1I2Lum0s4d+GFDA
         upMuIytPTF7UPt4ZOnCpS6SApPR57tNEVpX0UXC8AmXRfTNz78rYH2upOG5BFXjsGaVR
         Dc7/7J6EBQifZWCxH4EozAUio1gL72jC2uvgu8CKHaRHbbNwp8vIECIXU68d/mZUpjMi
         2elQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=1/ZGiB26wkc6ln+SQl2Q+9yHj6lnaAaAN9Iu/vPcQ1g=;
        b=RFkzZQAoVpcdhbm23Tb0vnwJHc2IxBiDSrAmAlpNiK6suAHzTQugzqO6kNY5bO0Hug
         xAcBP8lDfsD5dSJteRq+AUjFFzGh65w2BkiJm080nTqrVmNergy4ZUo1FmLPoCmSEOlk
         vd6IFWhG1jGqC+tiHiLwVD03XS65KfAF/K3w02cwwSgGkSAEmVrIk98a4HDsGk3WG+L5
         dbM0UWw9zLO1Umd7AokeNtwIJX4LLnBvOMbQfSGBbVfsBLicyX+SCFp+Dy7mAw7/e6fR
         gvRyxcPzl4DbY/hSmva6RSr58yMGXZP/bJ65tYvsBquJRjff+lDo95xupBykNquRMaan
         WMxg==
X-Gm-Message-State: AJIora93uM8t27pGkdjHecPaSH3kUjM6MNm2Z0jzw+Xvxe43SrZHKhJ0
	wMLXrYVLHlRhAGtgQmXYAqmfVi9GcXQ=
X-Google-Smtp-Source: AGRyM1v8+NAGMGdbj+ckdz0wTt/lfqZgApC1gUcOB5R7jZe3PmGOCbeCDS/tOeQiWv+e7nDFBOe4Wg==
X-Received: by 2002:a17:907:c0a:b0:726:22b1:9734 with SMTP id ga10-20020a1709070c0a00b0072622b19734mr9850234ejc.195.1656277904377;
        Sun, 26 Jun 2022 14:11:44 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/5] xen/common: page_alloc: Fix MISRA C 2012 Rule 8.7 violation
Date: Mon, 27 Jun 2022 00:11:27 +0300
Message-Id: <20220626211131.678995-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626211131.678995-1-burzalodowa@gmail.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables page_offlined_list and page_broken_list are referenced only
in page_alloc.c.
Change their linkage from external to internal by adding the storage-class
specifier static to their definitions.

This patch also indirectly resolves a MISRA C 2012 Rule 8.4 violation warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/common/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 000ae6b972..fe0e15429a 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -235,9 +235,9 @@ static unsigned int dma_bitsize;
 integer_param("dma_bits", dma_bitsize);
 
 /* Offlined page list, protected by heap_lock. */
-PAGE_LIST_HEAD(page_offlined_list);
+static PAGE_LIST_HEAD(page_offlined_list);
 /* Broken page list, protected by heap_lock. */
-PAGE_LIST_HEAD(page_broken_list);
+static PAGE_LIST_HEAD(page_broken_list);
 
 /*************************
  * BOOT-TIME ALLOCATOR
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356138.584201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXr-0003o9-R1; Sun, 26 Jun 2022 21:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356138.584201; Sun, 26 Jun 2022 21:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXr-0003nu-Ml; Sun, 26 Jun 2022 21:11:51 +0000
Received: by outflank-mailman (input) for mailman id 356138;
 Sun, 26 Jun 2022 21:11:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXq-0002ze-E8
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:50 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9824f61a-f594-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 23:11:49 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id lw20so15193372ejb.4
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:49 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9824f61a-f594-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Fupjb85vIsrQ5wu4RYb1H4Kw82O2kq4W3orr63txTa4=;
        b=E3Vv7SItkwItycz2Z90J9YWQzUXfs/W4ID/1ICVv59a2BoVVRP07Co36VlfZgOrK8f
         5PUTHUSZOE7B4hIM23fLATd3GtAptz81tYTAAnx8mr8ceJfKMcQY3xjGk63Sx3SUR7Pl
         A+5NSmM6f0/+N16HBAfVqgboTEUBvPo0eqwH1I9d264XKFXH47mTuMhg/U6r33DMxVh6
         CwzKB0O3vvd1HSr/rsOOXp80dJ2QrWo7ywmRXqjSm/bv0dpZmc1tONuYFO6kY3je9484
         d/868zYhwf+ZpbxAvcLT/T1ZeQCDDUu50xPlj86akj6TGa2KeqUvgpSuunXS6Id5zbNm
         msnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Fupjb85vIsrQ5wu4RYb1H4Kw82O2kq4W3orr63txTa4=;
        b=aBdQb4n2Lur1G47qN38vTp4U0LO5NTjIZmsEEcvTzdo6rdeeI9+uOMiCdOYh2iMjjc
         ZeQwcdKMzVuA0gFGMpkLwb7mQXM5AZexz60qETE3bianEiT+1pk4iarPGQ+qeTrwWIKR
         qpiIWY2KSdbbL2mmjphASBHnghs3JgfNMAtAr04rbJ5OyenZOE/speM+i0VlDTib2Wgh
         7FEqzn345TpGwsfEbhpdkn0+x8Dfq08rl+IpF3iDW6VhdM2hudzl2OdF7dBrduxkyS4R
         HmGqk/MWHSSMVVI4l/2yS57xQBjYqLDGgWjkhFUlfVxl4J2ywx9oRXkn60RqfTXH+8oi
         R8EA==
X-Gm-Message-State: AJIora+jHr6PFaIcw6T1i6vHeVpRcpz/LJNSa4WIVElabL+QZTsmNpVm
	QoJ5tCnIHB38R2qQP/pYuTdSw7xrYgk=
X-Google-Smtp-Source: AGRyM1tLzA9SebPtts7XsjFRsbDPlaTmSVFsa+i933FIzXFhm28U+TMrLx5hXNT/Kyhk0u9WGZKyKw==
X-Received: by 2002:a17:906:6a11:b0:726:97b8:51e9 with SMTP id qw17-20020a1709066a1100b0072697b851e9mr4832004ejc.115.1656277909391;
        Sun, 26 Jun 2022 14:11:49 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 3/5] xen/drivers: iommu: Fix MISRA C 2012 Rule 8.7 violation
Date: Mon, 27 Jun 2022 00:11:29 +0300
Message-Id: <20220626211131.678995-4-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626211131.678995-1-burzalodowa@gmail.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variable iommu_crash_disable is referenced only in one translation unit.
Change its linkage from external to internal by adding the storage-class
specifier static to its definition.

This patch also indirectly resolves a MISRA C 2012 Rule 8.4 violation warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/drivers/passthrough/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 75df3aa8dd..77f64e6174 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -29,7 +29,7 @@ bool_t __initdata iommu_enable = 1;
 bool_t __read_mostly iommu_enabled;
 bool_t __read_mostly force_iommu;
 bool_t __read_mostly iommu_verbose;
-bool_t __read_mostly iommu_crash_disable;
+static bool_t __read_mostly iommu_crash_disable;
 
 #define IOMMU_quarantine_none         0 /* aka false */
 #define IOMMU_quarantine_basic        1 /* aka true */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356135.584167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXl-0002zr-1j; Sun, 26 Jun 2022 21:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356135.584167; Sun, 26 Jun 2022 21:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXk-0002zk-Ug; Sun, 26 Jun 2022 21:11:44 +0000
Received: by outflank-mailman (input) for mailman id 356135;
 Sun, 26 Jun 2022 21:11:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXj-0002ze-PW
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:43 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 930c94fe-f594-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 23:11:42 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id lw20so15193372ejb.4
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:41 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 930c94fe-f594-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=9qduxVfcPOGMNJk2aZ/6LiSflO/gNs4M/uoLPaxFuqI=;
        b=l0umNaPa48VuyaWeWwKUHz/ZvTdUaBnP73WaWMnnelaE7bVPBYqmfKys0u1YU+3Uzq
         iBXy9he3/DDh3QzJv6SNdjywm8m9cGW19NNDIUFwbdR0aZr2blGWwibGbjoQPE/B0Fsz
         xNgKHppL4z9T0fDgVAfmcTLhNESHTI9hwQUwI8YT1PzwLch+BPtOclH/cDnp1oPMstes
         9+vq6RZciDyOoX0hXlPX4XDOVIAOZQlvOYpyxtQpUq+omp5ujsgsTpUB6h/dNemYVZi4
         RPoEGzhgBxpZqOj/CvKPe2QcdFAl2BrgrVweB8cif/5EO0hx6iqToStMLkY2T/A5yrhR
         elUw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=9qduxVfcPOGMNJk2aZ/6LiSflO/gNs4M/uoLPaxFuqI=;
        b=l6WyTbnrk/HqeNJOH9rqDX5X/YH6Lr5dhYP7J1UlYfK8qmRDz1u/9MjU240YXMVvL5
         aYtbrSl7Z4xZUOsCVeLOrbNPuxnpHBfcrZbEYjyygVhZYV1Iv4dZD5nS7GZ2qvL6ntc+
         Tfuhhpjz97VZ1W/bSN3NEu0K3D6DvlQ+kAVmIUpgeJ376KvJyOnBsEUq/PYQnBcTICj8
         uJMlaYuJt0EXMSnOwKGdFwwLTeqY2WnsZcoD2+ErtDyvalFxEy+HDjTqmDXrltvciz84
         2L/SQ+Q52TrUMBI5rPk3AIwVqskxwJrQmA1/xoPD3uO89W0JlMpaoTgUUt/oGkZCrse1
         58cg==
X-Gm-Message-State: AJIora8rVl0F1bCPNKKzJQo3jPDqOPrMTzMuEtwP3w8y5S+9dQed+HeT
	+zx67GaV/2f77MHN0+Lmpf0FnZTFB9tPig==
X-Google-Smtp-Source: AGRyM1t9NbO0l9vtidZltnkofT1vz6DNU0R4X5dmgUIGYW1MYJ6Bym85yUDBHtGbpFteqvHRotynnw==
X-Received: by 2002:a17:907:6e17:b0:726:2b3c:d373 with SMTP id sd23-20020a1709076e1700b007262b3cd373mr9520821ejc.357.1656277900662;
        Sun, 26 Jun 2022 14:11:40 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/5] Fix MISRA C 2012 violations
Date: Mon, 27 Jun 2022 00:11:26 +0300
Message-Id: <20220626211131.678995-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Resolve MISRA C 2012 Rule 8.7 violations and, indirectly, the associated Rule 8.4 warnings.

Xenia Ragiadakou (5):
  xen/common: page_alloc: Fix MISRA C 2012 Rule 8.7 violation
  xen/common: vm_event: Fix MISRA C 2012 Rule 8.7 violation
  xen/drivers: iommu: Fix MISRA C 2012 Rule 8.7 violation
  xen/sched: credit: Fix MISRA C 2012 Rule 8.7 violation
  xen/arm64: traps: Fix MISRA C 2012 Rule 8.4 violations

 xen/arch/arm/arm64/traps.c             | 1 +
 xen/arch/arm/include/asm/arm64/traps.h | 2 ++
 xen/common/page_alloc.c                | 4 ++--
 xen/common/sched/credit.c              | 2 +-
 xen/common/vm_event.c                  | 2 +-
 xen/drivers/passthrough/iommu.c        | 2 +-
 6 files changed, 8 insertions(+), 5 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356139.584212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXv-00049Z-52; Sun, 26 Jun 2022 21:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356139.584212; Sun, 26 Jun 2022 21:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXv-00049R-1C; Sun, 26 Jun 2022 21:11:55 +0000
Received: by outflank-mailman (input) for mailman id 356139;
 Sun, 26 Jun 2022 21:11:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXt-00045G-Eu
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:53 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 999fc724-f594-11ec-bd2d-47488cf2e6aa;
 Sun, 26 Jun 2022 23:11:52 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id fd6so10484717edb.5
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:52 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 999fc724-f594-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xj9geaDYiDJeXi2P/hZ4T8+FfXgCNZI+BFN5XSupO2M=;
        b=AOaGcV0vT+0cYRABSoUay9LVQ06yx+0+VUBptJY1W5sEO+rs2OrtIw+McAnr97zO/m
         NUXbavyrIvEqQhreBluaN+AEb/IzXjcsf4FOvGB8b10MvGxOMfqQ0wG3D4Q2tD7+V+VZ
         d277lwcgIdGwFhI1iVPx2S7cC306oVHpLJiSwGdzXYLqjo6fbu/uMFxNP99beGSuuwQQ
         IwLkMXw/PYcUcj0LY+GaETqENcqmC5AvLkX+oz0TTMpRHZsTRz7fyU7C5dJcsL0GoUzR
         m4NHKj2JTW/P65gbfsFqsjfv7lvBrZ9ibfTHZhX0UuS5yUEj6rLO/6IGh3sT5+q11mgt
         z/nQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xj9geaDYiDJeXi2P/hZ4T8+FfXgCNZI+BFN5XSupO2M=;
        b=kto4FgTVuLRE0D3FvAHcIpOIWDPlTKpw8BkY20qi6av0fGzYXOTUtKVEINA6GuIYiw
         LRPnGzk8pokcZNgmbWqNqCXCgrJvFNJw3B22ii8UMOncKp/pwQ3jl2NPvkZk1mgVU9+f
         cSdZRsLfqdcA2iG4yBHLXkK0zRzDt5RZ1zgmWVEdbZU0CSGn+56ortGTwmUncSW9NN0a
         5mzVzkNMLBIretDfmmgt2AFRAuz3q3s3yiD+3fRiIrFmko0J+pDN8F8f7eBKYpYfcX2y
         qS0mcsJE7dXpcorcHkdSBiS/oliuyjXYPDPxLVHjz7AsAIAwG+X66a25FjM4ec/FRmep
         mKjw==
X-Gm-Message-State: AJIora/BubJefBRgQ2UEUWlmZ19xoFY9W4exLPKLSw96u6qaAG5C00ZA
	o+2LQYvp8AVvOyyDBxeYsvgZ++UkETE=
X-Google-Smtp-Source: AGRyM1suuYGZAMFsKza4q0oXYNza4EVXS2SZIk6Rq2XRGJ6H8ZAgjiEKy6qsbFUeaB2/R+gyoXWuDA==
X-Received: by 2002:a05:6402:320f:b0:435:7236:e312 with SMTP id g15-20020a056402320f00b004357236e312mr12957410eda.115.1656277911895;
        Sun, 26 Jun 2022 14:11:51 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 4/5] xen/sched: credit: Fix MISRA C 2012 Rule 8.7 violation
Date: Mon, 27 Jun 2022 00:11:30 +0300
Message-Id: <20220626211131.678995-5-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626211131.678995-1-burzalodowa@gmail.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The per-cpu variable last_tickle_cpu is referenced only in credit.c.
Change its linkage from external to internal by adding the storage-class
specifier static to its definition.

This patch also indirectly resolves a MISRA C 2012 Rule 8.4 violation warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/common/sched/credit.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 4d3bd8cba6..47945c2834 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -348,7 +348,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
 static bool __read_mostly opt_tickle_one_idle = true;
 boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
-DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
+static DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 
 static inline void __runq_tickle(const struct csched_unit *new)
 {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 26 21:11:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jun 2022 21:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356140.584223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXw-0004Rc-Gj; Sun, 26 Jun 2022 21:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356140.584223; Sun, 26 Jun 2022 21:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ZXw-0004QA-BS; Sun, 26 Jun 2022 21:11:56 +0000
Received: by outflank-mailman (input) for mailman id 356140;
 Sun, 26 Jun 2022 21:11:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0i3P=XB=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5ZXv-0002ze-Ou
 for xen-devel@lists.xenproject.org; Sun, 26 Jun 2022 21:11:55 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9b18f4a4-f594-11ec-b725-ed86ccbb4733;
 Sun, 26 Jun 2022 23:11:55 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id g26so15156220ejb.5
 for <xen-devel@lists.xenproject.org>; Sun, 26 Jun 2022 14:11:54 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 cq12-20020a056402220c00b004356b8ad003sm6367556edb.60.2022.06.26.14.11.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 26 Jun 2022 14:11:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b18f4a4-f594-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zO50M0sKicPYbO3MfGNjKmAChEFlsCUJ3Pd09z6d3B8=;
        b=gt44c+Q2MGza2lsgnlUfudnqddhWLluw63hNoC7ABVuhmNz+eMsZ/igzq4PPys7uAt
         0KLpejzAQI1lgnQKGtTswVKzrEUzVLuSh6VVt13Wsw/Q0HIZ/SmXU5tOC09097ga1xGo
         9no9l3d/Yv+YQhM7Q7up8Flsvewa5RTsBERnSMvrIU6ILBrbFBJUo3HGTObRbMqCnSxQ
         4dn0sEkDSl28qqj2kdZqdmR7VPPtKUDy7EqykAndJvZ1u/1iS6f7Jf2ax0ZMmlF5BiFn
         ZXv38QdzmItBDLpCPS8t4Di+xSmb/8UClkgMJT+OaVI79nH7gdGXfAXG0nEhhW+jjxAm
         SbUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zO50M0sKicPYbO3MfGNjKmAChEFlsCUJ3Pd09z6d3B8=;
        b=yIIc0dyclkGiyflFSqp7/pLtx3tNplULoJUQS2jh7Ok19f6k5dMPIwVo6R9oE+of4u
         4X7V53eX/Nw+1JnsQ2m2sDV7RRbfMyALg0WuJNSWmPQOEdrSSeziPhYccw6XlFilBvSA
         8rIrxMAfBI2oXpOchrf2ovctoUmvq3vG03KKR7wgHqnTlLvuvPak+GbbO2KlsnVT3men
         2MTixMb5aYcjLa2lrl27FYE0JMwTecmrCC3ZYEqFRi4r1Eyt+voMbFm8Sf8Il5HhSxJ1
         3PHIkJYVsRPI1uxlYK3GccKLtquTbfbOG4uke/fi+f1kUdhfKeGd6jBd99NV6RwS3Lw3
         eO1Q==
X-Gm-Message-State: AJIora/6JQaUXm+kNX4QpKgsdprR1oAiO9ZX2WeHgU1ftiPiI2Y/Wvcu
	WszIat4Qsim9ns0pGs3mMQLYYCxd3DE=
X-Google-Smtp-Source: AGRyM1sjgtHzgRZhQw1AoPrP9Py9kuXuomYrtZtw3vkLr3eTO0+YKeIo3PRd3Ing0GNcgtRcEXxhFg==
X-Received: by 2002:a17:907:3e18:b0:722:be7e:302c with SMTP id hp24-20020a1709073e1800b00722be7e302cmr9590234ejc.437.1656277914377;
        Sun, 26 Jun 2022 14:11:54 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 5/5] xen/arm64: traps: Fix MISRA C 2012 Rule 8.4 violations
Date: Mon, 27 Jun 2022 00:11:31 +0300
Message-Id: <20220626211131.678995-6-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220626211131.678995-1-burzalodowa@gmail.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a function prototype for do_bad_mode() in <asm/arm64/traps.h> and include
the header <asm/traps.h> in traps.c, so that the declarations of the functions
do_bad_mode() and finalize_instr_emulation(), which have external linkage,
are visible before the function definitions.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
 xen/arch/arm/arm64/traps.c             | 1 +
 xen/arch/arm/include/asm/arm64/traps.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
index 3f8858acec..a995ad7c2c 100644
--- a/xen/arch/arm/arm64/traps.c
+++ b/xen/arch/arm/arm64/traps.c
@@ -22,6 +22,7 @@
 #include <asm/hsr.h>
 #include <asm/system.h>
 #include <asm/processor.h>
+#include <asm/traps.h>
 
 #include <public/xen.h>
 
diff --git a/xen/arch/arm/include/asm/arm64/traps.h b/xen/arch/arm/include/asm/arm64/traps.h
index 2379b578cb..a347cb13d6 100644
--- a/xen/arch/arm/include/asm/arm64/traps.h
+++ b/xen/arch/arm/include/asm/arm64/traps.h
@@ -6,6 +6,8 @@ void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len);
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr);
 
+void do_bad_mode(struct cpu_user_regs *regs, int reason);
+
 #endif /* __ASM_ARM64_TRAPS__ */
 /*
  * Local variables:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 01:38:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 01:38:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356171.584234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5di0-0004Qz-3h; Mon, 27 Jun 2022 01:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356171.584234; Mon, 27 Jun 2022 01:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5dhz-0004Qs-UW; Mon, 27 Jun 2022 01:38:35 +0000
Received: by outflank-mailman (input) for mailman id 356171;
 Mon, 27 Jun 2022 01:38:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5dhx-0004Qi-Q0; Mon, 27 Jun 2022 01:38:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5dhx-0002yE-JT; Mon, 27 Jun 2022 01:38:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5dhw-0000hh-VW; Mon, 27 Jun 2022 01:38:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5dhw-0004zr-Uw; Mon, 27 Jun 2022 01:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=88JV5bsQ9AGT91e+mv5VTw+aUWDf7fDo+jXt+fae138=; b=gIWnLNTgbkxQGmAz79Y5vpBWVY
	hFHGbwqSL/6N7OIlGz6mgnYXgKFZWzCRiRUwqu+/pCdBu60OdQ6f4NlUDQC1ZB2s7s/6lCzMQ+WxV
	SRDtvl7Lj3K8XoMXSFk9TPHybrYoB8Ej7nrAUUQ4IGxgZwogKW+2DlDMc+rIQkWkNDA8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171361-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171361: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=893d1eaa56e8ed8ebf0726556454c9e53c0bf047
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 01:38:32 +0000

flight 171361 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171361/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                893d1eaa56e8ed8ebf0726556454c9e53c0bf047
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    7 days
Failing since        171280  2022-06-19 15:12:25 Z    7 days   22 attempts
Testing same since   171361  2022-06-26 20:42:05 Z    0 days    1 attempts

------------------------------------------------------------
313 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 11404 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 02:57:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 02:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356180.584245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5evj-0004c1-SM; Mon, 27 Jun 2022 02:56:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356180.584245; Mon, 27 Jun 2022 02:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5evj-0004bu-Ol; Mon, 27 Jun 2022 02:56:51 +0000
Received: by outflank-mailman (input) for mailman id 356180;
 Mon, 27 Jun 2022 02:56:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5evi-0004bk-Sq; Mon, 27 Jun 2022 02:56:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5evi-0005BN-QQ; Mon, 27 Jun 2022 02:56:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5evi-0003wX-9x; Mon, 27 Jun 2022 02:56:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5evi-0005L1-9Y; Mon, 27 Jun 2022 02:56:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iT+gaiEKpovfhUZArIc0HBgcE/DrT4ixRTovQnPh/rU=; b=JiDTtG9rl2Uwh3ANhONqZMYbkJ
	OILicl4Hdf9IKwFpLYdYb3ykK4Is3H/61Xpv7KIn/r+dTsEtzH32msx1dX1HfTErxuMkterQEda+r
	Llvuu+rgQTe+2qSKbDNhZO3Nddci1JXOuqmwpgRkweKOqOIvRu1Ho+/67bl7/oaCAvbs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171362: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b600f253b3077943908431cd780dbc1a9ed1bc81
X-Osstest-Versions-That:
    ovmf=15b25045e6db2c82bc12973ed1629bbaeb3c0a57
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 02:56:50 +0000

flight 171362 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171362/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b600f253b3077943908431cd780dbc1a9ed1bc81
baseline version:
 ovmf                 15b25045e6db2c82bc12973ed1629bbaeb3c0a57

Last test of basis   171345  2022-06-24 18:13:04 Z    2 days
Testing same since   171362  2022-06-27 01:10:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <quic_rcran@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   15b25045e6..b600f253b3  b600f253b3077943908431cd780dbc1a9ed1bc81 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 02:59:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 02:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356187.584255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5exq-0005Ei-81; Mon, 27 Jun 2022 02:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356187.584255; Mon, 27 Jun 2022 02:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5exq-0005Eb-57; Mon, 27 Jun 2022 02:59:02 +0000
Received: by outflank-mailman (input) for mailman id 356187;
 Mon, 27 Jun 2022 02:59:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RfAm=XC=arm.com=jiamei.xie@srs-se1.protection.inumbo.net>)
 id 1o5exp-0005ET-BT
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 02:59:01 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 17468c28-f5c5-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 04:58:59 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C2DE223A;
 Sun, 26 Jun 2022 19:58:58 -0700 (PDT)
Received: from a015971.shanghai.arm.com (unknown [10.169.188.104])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 324853F66F;
 Sun, 26 Jun 2022 19:58:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17468c28-f5c5-11ec-b725-ed86ccbb4733
From: Jiamei Xie <jiamei.xie@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Jiamei Xie <jiamei.xie@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <wei.chen@arm.com>
Subject: [PATCH] xen/arm: avoid extra calculations when setting vtimer in context switch
Date: Mon, 27 Jun 2022 10:58:09 +0800
Message-Id: <20220627025809.1985720-1-jiamei.xie@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

virt_timer_save() calculates the new expiry for the vtimer as:
"v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
- boot_count".
In this formula, "cval + offset" might overflow uint64_t. Changing it
to "v->domain->arch.virt_timer_base.offset - boot_count +
v->arch.virt_timer.cval" reduces the possibility of overflow, and
"arch.virt_timer_base.offset - boot_count" is always the same value,
which has already been calculated in domain_vtimer_init(). Introduce a
new field vtimer_offset.nanoseconds in the Arm struct arch_domain to
store this value, so it can be used directly and the extra
calculations can be avoided.
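The effect of the reordering can be illustrated with a standalone
sketch. This is illustrative only: toy_ticks_to_ns() below is a
hypothetical stand-in for Xen's ticks_to_ns() (which performs a
non-linear multiply/shift conversion), and all values are made up:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for ticks_to_ns(): any conversion that is not
 * linear modulo 2^64 (Xen's real one is a 128-bit multiply/shift)
 * makes the order of operations observable once "cval + offset" wraps.
 */
static uint64_t toy_ticks_to_ns(uint64_t ticks)
{
    return ticks / 2;   /* pretend the counter runs at 2 ticks per ns */
}

/* Old order: the sum "cval + offset" may wrap before being converted. */
static uint64_t expiry_old(uint64_t cval, uint64_t offset, uint64_t boot)
{
    return toy_ticks_to_ns(cval + offset - boot);
}

/* New order: convert the constant "offset - boot" part separately. */
static uint64_t expiry_new(uint64_t cval, uint64_t offset, uint64_t boot)
{
    return toy_ticks_to_ns(offset - boot) + toy_ticks_to_ns(cval);
}
```

With small values both orders agree; once "offset" sits near the top of
the 64-bit range, the wrapped intermediate in the old order yields a
bogus (tiny) expiry.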

This patch is inspired by [1].

Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>

[1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg123139.htm
---
 xen/arch/arm/include/asm/domain.h | 4 ++++
 xen/arch/arm/vtimer.c             | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f9..94fe5b6444 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -73,6 +73,10 @@ struct arch_domain
         uint64_t offset;
     } virt_timer_base;
 
+    struct {
+        int64_t nanoseconds;
+    } vtimer_offset;
+
     struct vgic_dist vgic;
 
     struct vuart {
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 6b78fea77d..54161e5fea 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -64,6 +64,7 @@ int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
 {
     d->arch.virt_timer_base.offset = get_cycles();
     d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
+    d->arch.vtimer_offset.nanoseconds = d->time_offset.seconds;
     do_div(d->time_offset.seconds, 1000000000);
 
     config->clock_frequency = timer_dt_clock_frequency;
@@ -144,8 +145,9 @@ void virt_timer_save(struct vcpu *v)
     if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
          !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
     {
-        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
-                  v->domain->arch.virt_timer_base.offset - boot_count));
+        set_timer(&v->arch.virt_timer.timer,
+                  v->domain->arch.vtimer_offset.nanoseconds +
+                  ticks_to_ns(v->arch.virt_timer.cval));
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 05:56:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 05:56:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356195.584266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5hjF-00076L-Qu; Mon, 27 Jun 2022 05:56:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356195.584266; Mon, 27 Jun 2022 05:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5hjF-00076E-Nj; Mon, 27 Jun 2022 05:56:09 +0000
Received: by outflank-mailman (input) for mailman id 356195;
 Mon, 27 Jun 2022 05:56:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5hjE-000764-87; Mon, 27 Jun 2022 05:56:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5hjE-0001sB-3t; Mon, 27 Jun 2022 05:56:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5hjD-00056i-Oe; Mon, 27 Jun 2022 05:56:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5hjD-0005Oa-OB; Mon, 27 Jun 2022 05:56:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TL9HUbyU5g+V9OI6uFeUE51cxL+7K58oAcggR/bT1DI=; b=r5y1ofic7k0oGxBGpPaKHvDbL1
	OD2X0CWnoqHXXOPS4TDLdyfe3Iff2W09fx+Hjnwk+Or9P5SmHp/SmLjNVDJyA+lk0QPOS060HCp6g
	lItrOHxC7PWI/AHiRBin7g/eabMJHokLhBqPhJO8R54dhx1c1UiU7ZyiQmZzYPyDhNXQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171365: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7f4eca4cc2e01d4160ef265f477f9d098d7d33df
X-Osstest-Versions-That:
    ovmf=b600f253b3077943908431cd780dbc1a9ed1bc81
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 05:56:07 +0000

flight 171365 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171365/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7f4eca4cc2e01d4160ef265f477f9d098d7d33df
baseline version:
 ovmf                 b600f253b3077943908431cd780dbc1a9ed1bc81

Last test of basis   171362  2022-06-27 01:10:27 Z    0 days
Testing same since   171365  2022-06-27 02:58:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Chiu <Ian.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b600f253b3..7f4eca4cc2  7f4eca4cc2e01d4160ef265f477f9d098d7d33df -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:23:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 06:23:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356203.584277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5i9d-00027Y-Vn; Mon, 27 Jun 2022 06:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356203.584277; Mon, 27 Jun 2022 06:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5i9d-00027R-Sg; Mon, 27 Jun 2022 06:23:25 +0000
Received: by outflank-mailman (input) for mailman id 356203;
 Mon, 27 Jun 2022 06:23:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5i9c-00027L-JJ
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 06:23:24 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a4c350e1-f5e1-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 08:23:22 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F3CD02B;
 Sun, 26 Jun 2022 23:23:21 -0700 (PDT)
Received: from [10.57.42.186] (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 059B03F5A1;
 Sun, 26 Jun 2022 23:23:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4c350e1-f5e1-11ec-b725-ed86ccbb4733
Message-ID: <f078812a-bdd0-d27b-28ce-62c0c131ecdb@arm.com>
Date: Mon, 27 Jun 2022 08:23:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 1/7] xen/arm: Remove most of the *_VIRT_END defines
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-2-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-2-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, *_VIRT_END may either point to the address after the end
> or the last address of the region.
> 
> The lack of consistency makes it quite difficult to reason about them.
> 
> Furthermore, there is a risk of overflow in the case where the address
> points past the end. I am not aware of any such case, so this is only a
> latent bug.
> 
> Start to solve the problem by removing all the *_VIRT_END exclusively used
> by the Arm code and add *_VIRT_SIZE when it is not present.
> 
> Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
> for better consistency and use _AT(vaddr_t, ).
> 
> Also take the opportunity to fix the coding style of the comment touched
> in mm.c.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
> I noticed that a few functions in Xen expect [start, end[. This is risky
> as we may end up with end < start if the region is defined right at the
> top of the address space.
> 
> I haven't yet tackled this issue. But I would at least like to get rid
> of *_VIRT_END.
> 
> This was originally sent separately (lets call it v0).
> 
>     Changes in v1:
>         - Mention the coding style change.
> ---
>  xen/arch/arm/include/asm/config.h | 18 ++++++++----------
>  xen/arch/arm/livepatch.c          |  2 +-
>  xen/arch/arm/mm.c                 | 13 ++++++++-----
>  3 files changed, 17 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 3e2a55a91058..66db618b34e7 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -111,12 +111,11 @@
>  #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
>  
>  #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
> -#define BOOT_FDT_SLOT_SIZE     MB(4)
> -#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
> +#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
>  
>  #ifdef CONFIG_LIVEPATCH
>  #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
> -#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
> +#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
>  #endif
>  
>  #define HYPERVISOR_VIRT_START  XEN_VIRT_START
> @@ -132,18 +131,18 @@
>  #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>  
>  #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
> +#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
>  
>  #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
> -#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
> -#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
> -#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
> +#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
>  
> -#define VMAP_VIRT_END    XENHEAP_VIRT_START
> +#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
> +#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
>  
>  #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
>  
>  /* Number of domheap pagetable pages required at the second level (2MB mappings) */
> -#define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
> +#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>  
>  #else /* ARM_64 */
>  
> @@ -152,12 +151,11 @@
>  #define SLOT0_ENTRY_SIZE  SLOT0(1)
>  
>  #define VMAP_VIRT_START  GB(1)
> -#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1))
> +#define VMAP_VIRT_SIZE   GB(1)
>  
>  #define FRAMETABLE_VIRT_START  GB(32)
>  #define FRAMETABLE_SIZE        GB(32)
>  #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
> -#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>  
>  #define DIRECTMAP_VIRT_START   SLOT0(256)
>  #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
> diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
> index 75e8adcfd6a1..57abc746e60b 100644
> --- a/xen/arch/arm/livepatch.c
> +++ b/xen/arch/arm/livepatch.c
> @@ -175,7 +175,7 @@ void __init arch_livepatch_init(void)
>      void *start, *end;
>  
>      start = (void *)LIVEPATCH_VMAP_START;
> -    end = (void *)LIVEPATCH_VMAP_END;
> +    end = start + LIVEPATCH_VMAP_SIZE;
>  
>      vm_init_type(VMAP_XEN, start, end);
>  
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index be37176a4725..0607c65f95cd 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
>  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>  #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
> -/* xen_dommap == pages used by map_domain_page, these pages contain
> +/*
> + * xen_dommap == pages used by map_domain_page, these pages contain
>   * the second level pagetables which map the domheap region
> - * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
> + * starting at DOMHEAP_VIRT_START in 2MB chunks.
> + */
>  static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>  /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
>  static DEFINE_PAGE_TABLE(cpu0_pgtable);
> @@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
>      int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
>      unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
>  
> -    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> +    if ( (va >= VMAP_VIRT_START) && ((VMAP_VIRT_START - va) < VMAP_VIRT_SIZE) )
The second condition does not seem to be correct. Instead, the check should look like the following:
if ( (va >= VMAP_VIRT_START) && (va < (VMAP_VIRT_START + VMAP_VIRT_SIZE)) )
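The difference is easy to demonstrate in isolation. A minimal sketch,
using made-up layout constants rather than the real Xen ones:

```c
#include <assert.h>
#include <stdint.h>

/* Made-up layout constants for illustration only. */
static const uint64_t vmap_start = 0x10000000ULL;
static const uint64_t vmap_size  = 0x30000000ULL;

/*
 * The condition as posted: for any va above vmap_start, the unsigned
 * subtraction "vmap_start - va" wraps to a huge value, so the check
 * only ever accepts va == vmap_start.
 */
static int in_vmap_posted(uint64_t va)
{
    return (va >= vmap_start) && ((vmap_start - va) < vmap_size);
}

/* The suggested condition: accepts [vmap_start, vmap_start + vmap_size). */
static int in_vmap_suggested(uint64_t va)
{
    return (va >= vmap_start) && (va < (vmap_start + vmap_size));
}
```

An equivalent single comparison that also cannot overflow is
"(va - vmap_start) < vmap_size": if va is below vmap_start, the
subtraction wraps to a huge value and the test fails.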

>          return virt_to_mfn(va);
>  
>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> @@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
>      int rc;
>  
>      /* destroy the _PAGE_BLOCK mapping */
> -    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
> +                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
>                               _PAGE_BLOCK);
>      BUG_ON(rc);
>  }
> @@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>  
>  void *__init arch_vmap_virt_end(void)
>  {
> -    return (void *)VMAP_VIRT_END;
> +    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
>  }
>  
>  /*

The rest looks good.

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:26:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 06:26:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356209.584289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5iCS-0002ij-EF; Mon, 27 Jun 2022 06:26:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356209.584289; Mon, 27 Jun 2022 06:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5iCS-0002ic-Ad; Mon, 27 Jun 2022 06:26:20 +0000
Received: by outflank-mailman (input) for mailman id 356209;
 Mon, 27 Jun 2022 06:26:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQHX=XC=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o5iCQ-0002iU-V3
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 06:26:19 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80081.outbound.protection.outlook.com [40.107.8.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d2e7e75-f5e2-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 08:26:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7237.eurprd04.prod.outlook.com (2603:10a6:10:1a4::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Mon, 27 Jun
 2022 06:26:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 06:26:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d2e7e75-f5e2-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AqjxW4W7OHz0rYZiHvtdiXbzI8UOalQaBrs/JG1TU2Cw7TrS4Yoqad2AEenmADVAcgrANA+lNv0bVIrgv5Ei5HZwAP7paEo7H0FFLiHwD5+QNZtaS4Czx78V9U/kwNzcYIxvF7Bjpek1bnHGKsjhSrN6/pd8Oa6jGLuE0WqM3DfWsAVXFy0t6WL39YHtWlIRFu/Kyq+uBRFhQWmISWJxHorLpNUPVuDzfzfHHbVNhQEbaVdd1dApU10Bnwt9sETqHnGr6cb6l123xg5n7oqTK8NA6wgYAyZj+RyxYzlTiOuAF+IpEu5U1tuLTAMKWSLQEi/dnpgV+1/wcTQjBiORNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gV1Xq38WoGbki+qQ6AMzWg8y3k6Z0aUHS4+b5ppIr5Y=;
 b=A7KOdUgETT7qpCyf3tOiRtCfU6g3LMZYUdgNF8o+EZdUCCVaYyiUhV/h7xeL5ZcfeqBteQbMPkKi0i5yUL6QYZInLZMIgiTJZm4Nm2TNMDs+J3Bl8amXp+zEeRxML4kZ8SlkUpGPObhEkuoX6GqxjooqHnLgQDUqRLPpIXtHelP/6qx5Q699/0+leiNn8/MdaIuZuEJ1RaXjwI7RHGFOpo7Iy2ytWTulR//Qebp3BycfHTYZ+7iM7f3/AOQNrqAnMNblE2vXfR6C+9t5P4Que/RyHU8W/+9ZMfXCtdGZW4R8KOweeUY6HCQO1j70Xt3sdMmPFP8Db6I/NMC6oswxpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gV1Xq38WoGbki+qQ6AMzWg8y3k6Z0aUHS4+b5ppIr5Y=;
 b=Nv/XjzmY3FtA/hCrzsLRpTedfuAato/Au1NXg1cf0jLC0GUs6W7vBnCrpnxHQ5XbmNkUEq4GWOrsCYFvRSg4qRNE4ZZERhfnb4dkR3jaUgStSaKyLZrCiZ7tENdTlSd2ZHsq27VKv5k8CqqLkW8w+X4W4gnM5XzYb3DY1pDhlMSP+waI9BMKJ50Wl4Eucuwrr+2NdFcRizQF1JwQwtv0HwGMBa/a3fQXAUibreufTKYdT+ai4va4qoerP9FTRH2L3Zj+pAQbyWoSWZ3FwYORQGeY1Hwf0Xcw1lqbbjaWKe6kWmkK+cqddTGnkOPtFs0n3quyrsNcjlCDsznPwgr18w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <883929e5-cebf-101b-7906-47ada79a56f1@suse.com>
Date: Mon, 27 Jun 2022 08:26:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] x86/shadow: slightly consolidate
 sh_unshadow_for_p2m_change()
Content-Language: en-US
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>, "Tim (Xen.org)" <tim@xen.org>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
 <521b39ce-2c2e-967e-ecc7-f66281aee562@suse.com>
 <87A27648-D543-4122-A354-A37CC4C4BEA4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87A27648-D543-4122-A354-A37CC4C4BEA4@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR04CA0171.eurprd04.prod.outlook.com
 (2603:10a6:20b:530::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7eecebd9-4a8e-4f60-421f-08da5805f022
X-MS-TrafficTypeDiagnostic: DBAPR04MB7237:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7eecebd9-4a8e-4f60-421f-08da5805f022
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 06:26:15.4610
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Nj4wyt8UyHad3hrntScOnUpK9X/SqYSKqVXTyplxtuT3FNUmjNegv7GPfCv9uqAsfOG0jHNmKFnmqbwmP3M4hA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7237

On 24.06.2022 21:16, George Dunlap wrote:
> 
> 
>> On 9 Dec 2021, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> In preparation for reactivating the presently dead 2M page path of the
>> function,
>> - also deal with the case of replacing an L1 page table all in one go,
>> - pull common checks out of the switch(). This includes extending a
>>  _PAGE_PRESENT check to L1 as well, which presumably was deemed
>>  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
>>  better off being explicit in all cases,
>> - replace a p2m_is_ram() check in the 2M case by an explicit
>>  _PAGE_PRESENT one, to make it more obvious that the subsequent
>>  l1e_get_mfn() retrieves something that actually is an MFN.
> 
> Each of these changes requires careful checking to make sure there aren’t any bugs introduced.  I’d feel much more comfortable giving an R-b if they were broken out into separate patches.

I'll see what I can do. It has been quite some time, but iirc trying
to do things separately didn't work out very well.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:31:36 2022
Message-ID: <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
Date: Mon, 27 Jun 2022 08:31:13 +0200
Subject: Re: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-3-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-3-julien@xen.org>

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> A lot of places in the ARM32 assembly require loading the physical address
> of a symbol. Rather than open-coding the translation, introduce a new macro
> that will load the physical address of a symbol.
> 
> Lastly, use the new macro to replace all the current open-coded versions.
> 
> Note that most of the comments associated with the changed code have been
> removed because the code is now self-explanatory.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/arm32/head.S | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index c837d3054cf9..77f0a619ca51 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -65,6 +65,11 @@
>          .endif
>  .endm
>  
> +.macro load_paddr rb, sym
> +        ldr   \rb, =\sym
> +        add   \rb, \rb, r10
> +.endm
> +
All the macros in this file have a comment so it'd be useful to follow this convention.
Apart from that:
Reviewed-by: Michal Orzel <michal.orzel@arm.com>

Cheers,
Michal
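
[For illustration, a commented version of the macro along the lines suggested above might look like the following sketch; the comment wording is purely illustrative, and r10 is assumed to hold the phys/virt offset as in the rest of head.S:]

```asm
/*
 * Load the physical address of a symbol into a register.
 *
 * rb:  register in which to return the physical address
 * sym: symbol whose physical address is wanted
 *
 * Assumes r10 holds the offset between the link-time (virtual)
 * and load-time (physical) addresses.
 */
.macro load_paddr rb, sym
        ldr   \rb, =\sym        /* link-time address of \sym */
        add   \rb, \rb, r10     /* relocate to the physical address */
.endm
```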


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:33:11 2022
Message-ID: <64ff8cc6-6c15-7255-e908-36d7bbbd6348@suse.com>
Date: Mon, 27 Jun 2022 08:33:05 +0200
Subject: Re: [PATCH 2/2] x86/P2M: allow 2M superpage use for shadowed guests
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>, "Tim (Xen.org)" <tim@xen.org>
References: <9ae1d130-178a-ba01-b889-f2cf2a403d95@suse.com>
 <7a80d08b-edd7-43c8-a7ce-42eb85d6f3be@suse.com>
 <8D91423A-67A6-40B3-A3D7-44711DC41A7E@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8D91423A-67A6-40B3-A3D7-44711DC41A7E@citrix.com>

On 24.06.2022 21:27, George Dunlap wrote:
> 
> 
>> On 9 Dec 2021, at 11:27, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> For guests in shadow mode the P2M table gets used only by software. The
>> only place where it matters whether superpages in the P2M can be dealt
>> with is sh_unshadow_for_p2m_change().
> 
> It’s easy to verify that this patch is doing what it claims to do; but whether it’s correct or not depends on the veracity of this claim here.  Rather than me having to duplicate whatever work you did to come to this conclusion, can you briefly explain why it’s true in a way that I can easily verify?

Would

"The table is never made accessible by hardware for address translation,
 and the only checks of _PAGE_PSE in P2M entries in shadow code are in
 this function (all others are against guest page table entries)."

look sufficient to you?

> e.g., all other accesses to the p2m in the shadow code are via get_gfn_[something](), which (because it’s in the p2m code) handles p2m superpages correctly?

Well, yes - I don't think I need to reason about generic P2M code being
super-page aware?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:37:00 2022
Message-ID: <d3a1f96d-328a-32c4-993c-7b4cd6cd662b@arm.com>
Date: Mon, 27 Jun 2022 08:36:36 +0200
Subject: Re: [PATCH 3/7] xen/arm: head: Add missing isb after writing to
 SCTLR_EL2/HSCTLR
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-4-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-4-julien@xen.org>

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> A write to SCTLR_EL2/HSCTLR may not be visible until the next context
> synchronization event. When initializing the CPU, we want the update to
> take effect right away, so add an isb afterwards.
> 
> Spec references:
>     - AArch64: D13.1.2 ARM DDI 0406C.d
>     - AArch32 v8: G8.1.2 ARM DDI 0406C.d
>     - AArch32 v7: B5.6.3 ARM DDI 0406C.d
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@arm.com>

Cheers,
Michal
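
[For illustration, the fix under review boils down to a sequence like this sketch (AArch64 variant shown; the register choice and the `SCTLR_SET_BITS` constant are illustrative, not taken from the patch):]

```asm
        ldr   x0, =SCTLR_SET_BITS   /* desired control-register value (name illustrative) */
        msr   SCTLR_EL2, x0         /* the write may not be visible immediately... */
        isb                         /* ...so force a context synchronization event now */
```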


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:45:28 2022
Message-ID: <1dad1b91-beb2-e09b-1417-c59f059765c2@arm.com>
Date: Mon, 27 Jun 2022 08:45:05 +0200
Subject: Re: [PATCH 4/7] xen/arm: mm: Add more ASSERT() in {destroy,
 modify}_xen_mappings()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-5-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-5-julien@xen.org>

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Both destroy_xen_mappings() and modify_xen_mappings() take a range
> [start, end) as parameter. Both bounds should be page aligned.
> 
> Add extra ASSERT() to ensure start and end are page aligned. Take the
> opportunity to rename 'v' to 's' to be consistent with the other helper.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/mm.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0607c65f95cd..20733afebce4 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1371,14 +1371,18 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
>      return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
>  }
>  
> -int destroy_xen_mappings(unsigned long v, unsigned long e)
> +int destroy_xen_mappings(unsigned long s, unsigned long e)
I think the prototype in include/xen/mm.h wants to be updated as well.
x86's mm.c already uses s instead of v, so it is a good opportunity to fix the prototype.

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 06:57:43 2022
Message-ID: <d135e1aa-d0bb-8a7a-cb1d-7dc9387f1f24@suse.com>
Date: Mon, 27 Jun 2022 08:57:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] public/io: xs_wire: Allow Xenstore to report EPERM
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20220624165151.940-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220624165151.940-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2022 18:51, Julien Grall wrote:
> --- a/xen/include/public/io/xs_wire.h
> +++ b/xen/include/public/io/xs_wire.h
> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>  __attribute__((unused))
>  #endif
>      = {
> +    XSD_ERROR(EPERM),
>      XSD_ERROR(EINVAL),
>      XSD_ERROR(EACCES),
>      XSD_ERROR(EEXIST),

Inserting ahead of EINVAL looks to break xenstored_core.c:send_error(),
which - legitimately or not - assumes EINVAL to be first.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:07:33 2022
Message-ID: <58f93ffb-a943-bf7e-52a2-5c72a59bb070@suse.com>
Date: Mon, 27 Jun 2022 09:07:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/5] xen/common: page_alloc: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-2-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220626211131.678995-2-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2022 23:11, Xenia Ragiadakou wrote:
> The variables page_offlined_list and page_broken_list are referenced only
> in page_alloc.c.
> Change their linkage from external to internal by adding the storage-class
> specifier static to their definitions.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:11:25 2022
Message-ID: <61094b37-4075-e362-7fc6-ce28f965bb05@suse.com>
Date: Mon, 27 Jun 2022 09:11:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-3-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220626211131.678995-3-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2022 23:11, Xenia Ragiadakou wrote:
> The function vm_event_wake() is referenced only in vm_event.c.
> Change the linkage of the function from external to internal by adding
> the storage-class specifier static to the function definition.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.

Actually this applies to patch 1 as well - which is slightly confusing, as
the title talks about 8.7, so at first I suspected a typo. May I suggest
adding "also" (which I think could easily be done while committing)?

> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:12:39 2022
Message-ID: <f8fabc99-f986-2612-dd93-464bf7dc1022@suse.com>
Date: Mon, 27 Jun 2022 09:12:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 3/5] xen/drivers: iommu: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-4-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220626211131.678995-4-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 26.06.2022 23:11, Xenia Ragiadakou wrote:
> The variable iommu_crash_disable is referenced only in one translation unit.
> Change its linkage from external to internal by adding the storage-class
> specifier static to its definition.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(with the same remark as on patch 2)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:13:52 2022
Message-ID: <fa2bbcd8-ee27-c9b5-5428-ea07bf9a4734@suse.com>
Date: Mon, 27 Jun 2022 09:13:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 4/5] xen/sched: credit: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-5-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220626211131.678995-5-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 26.06.2022 23:11, Xenia Ragiadakou wrote:
> The per-cpu variable last_tickle_cpu is referenced only in credit.c.
> Change its linkage from external to internal by adding the storage-class
> specifier static to its definition.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(again with the same remark as on patch 2)



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:25:12 2022
Message-ID: <c05cb05f-0344-19d2-4f8f-caa904c374dc@arm.com>
Date: Mon, 27 Jun 2022 09:24:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 5/7] xen/arm32: mm: Consolidate the domheap mappings
 initialization
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-6-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-6-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the domheap mappings initialization is done separately for
> the boot CPU and secondary CPUs. The main difference is for the former
> the pages are part of Xen binary whilst for the latter they are
> dynamically allocated.
> 
> It would be good to have a single helper so it is easier to rework
> how the domheap is initialized.
> 
> For CPU0, we still need to use pre-allocated pages because the
> allocators may use map_domain_page(), so we need to have the domheap
> area ready first. But we can still delay the initialization to setup_mm().
> 
> Introduce a new helper domheap_mapping_init() that will be called
FWICS the function name is init_domheap_mappings.

> from setup_mm() for the boot CPU and from init_secondary_pagetables()
> for secondary CPUs.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/include/asm/arm32/mm.h |  2 +
>  xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
>  xen/arch/arm/setup.c                |  8 +++
>  3 files changed, 71 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
> index 6b039d9ceaa2..575373aeb985 100644
> --- a/xen/arch/arm/include/asm/arm32/mm.h
> +++ b/xen/arch/arm/include/asm/arm32/mm.h
> @@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>      return false;
>  }
>  
> +bool init_domheap_mappings(unsigned int cpu);
> +
>  #endif /* __ARM_ARM32_MM_H__ */
>  
>  /*
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 20733afebce4..995aa1e4480e 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -372,6 +372,58 @@ void clear_fixmap(unsigned map)
>  }
>  
>  #ifdef CONFIG_DOMAIN_PAGE
> +/*
> + * Prepare the area that will be used to map domheap pages. They are
> + * mapped in 2MB chunks, so we need to allocate the page-tables up to
> + * the 2nd level.
> + *
> + * The caller should make sure the root page-table for @cpu has been
> + * been allocated.
Second 'been' not needed.

> + */
> +bool init_domheap_mappings(unsigned int cpu)
> +{
> +    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
> +    lpae_t *root = per_cpu(xen_pgtable, cpu);
> +    unsigned int i, first_idx;
> +    lpae_t *domheap;
> +    mfn_t mfn;
> +
> +    ASSERT(root);
> +    ASSERT(!per_cpu(xen_dommap, cpu));
> +
> +    /*
> +     * The domheap for cpu0 is before the heap is initialized. So we
> +     * need to use pre-allocated pages.
> +     */
> +    if ( !cpu )
> +        domheap = cpu0_dommap;
> +    else
> +        domheap = alloc_xenheap_pages(order, 0);
> +
> +    if ( !domheap )
> +        return false;
> +
> +    /* Ensure the domheap has no stray mappings */
> +    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
> +
> +    /*
> +     * Update the first level mapping to reference the local CPUs
> +     * domheap mapping pages.
> +     */
> +    mfn = virt_to_mfn(domheap);
> +    first_idx = first_table_offset(DOMHEAP_VIRT_START);
> +    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
> +    {
> +        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
I might have missed something, but shouldn't 'i' be multiplied by XEN_PT_LPAE_ENTRIES?

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:25:12 2022
From: Christian Lindig <christian.lindig@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, Juergen Gross <jgross@suse.com>, Daniel De
 Graaf <dgdegra@tycho.nsa.gov>, George Dunlap <George.Dunlap@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	David Scott <dave@recoil.org>, Stefano Stabellini <sstabellini@kernel.org>,
	"Tim (Xen.org)" <tim@xen.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v3 00/25] Toolstack build system improvement, toward
 non-recursive makefiles
Thread-Topic: [XEN PATCH v3 00/25] Toolstack build system improvement, toward
 non-recursive makefiles
Thread-Index: AQHYh+QgqCvAj4yQ00ukgLMeVhoHyK1i3kCA
Date: Mon, 27 Jun 2022 07:25:03 +0000
Message-ID: <3AB6FD34-EAAB-4644-9060-C83DBE617992@citrix.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-1-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3608.120.23.2.7)
Content-Type: multipart/alternative;
	boundary="_000_3AB6FD34EAAB46449060C83DBE617992citrixcom_"
MIME-Version: 1.0

--_000_3AB6FD34EAAB46449060C83DBE617992citrixcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable



On 24 Jun 2022, at 17:03, Anthony PERARD <anthony.perard@citrix.com> wrote:

Changes in v3:
- rebased
- several new patches, starting with 13/25 "tools/libs/util: cleanup Makefile"
- introducing macros to deal with linking with in-tree xen libraries
- Add -Werror to CFLAGS for all builds in tools/

Acked-by: Christian Lindig <christian.lindig@citrix.com>



--_000_3AB6FD34EAAB46449060C83DBE617992citrixcom_--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:38:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 07:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356290.584421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jJv-00089h-RO; Mon, 27 Jun 2022 07:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356290.584421; Mon, 27 Jun 2022 07:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jJv-00089a-OQ; Mon, 27 Jun 2022 07:38:07 +0000
Received: by outflank-mailman (input) for mailman id 356290;
 Mon, 27 Jun 2022 07:38:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQHX=XC=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o5jJt-00089U-Qo
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 07:38:06 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 (mail-eopbgr00068.outbound.protection.outlook.com [40.107.0.68])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1430e5fa-f5ec-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 09:38:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB3200.eurprd04.prod.outlook.com (2603:10a6:802:d::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Mon, 27 Jun
 2022 07:38:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 07:38:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1430e5fa-f5ec-11ec-bd2d-47488cf2e6aa
Message-ID: <d9c96d44-ca77-3f96-7da6-301c88695126@suse.com>
Date: Mon, 27 Jun 2022 09:38:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [linux-linus test] 171361: regressions - FAIL
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <osstest-171361-mainreport@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <osstest-171361-mainreport@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.06.2022 03:38, osstest service owner wrote:
> flight 171361 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/171361/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
>  test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
>  test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
>  test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
>  test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
>  test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
>  test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
>  test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277

Taking one of these as an example:

Jun 26 21:51:47.667404 mapping kernel into physical memory
Jun 26 21:51:47.667425 about to get started...
Jun 26 21:51:47.667435 (XEN) arch/x86/mm.c:2159:d0v0 Bad L1 flags 400000
Jun 26 21:51:47.667448 (XEN) Hardware Dom0 halted: halting machine

This is an attempt to install (modify?) a PTE with _PAGE_GNTTAB set
via normal page table management hypercalls. Considering how early in
the boot process this appears to be, I wonder whether a flag was
introduced in the kernel which aliases _PAGE_GNTTAB (or a use of such
a pre-existing flag on a path which previously was safe from being hit
when running in PV mode under Xen).

I wonder whether the bisector is already having a go at isolating the
offending commit, or whether it has already failed in trying to.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:51:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 07:51:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356296.584432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jXC-00027Y-1U; Mon, 27 Jun 2022 07:51:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356296.584432; Mon, 27 Jun 2022 07:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jXB-00027R-UU; Mon, 27 Jun 2022 07:51:49 +0000
Received: by outflank-mailman (input) for mailman id 356296;
 Mon, 27 Jun 2022 07:51:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5jXA-000270-F4
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 07:51:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id fe6c6032-f5ed-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 09:51:46 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 582151758;
 Mon, 27 Jun 2022 00:51:46 -0700 (PDT)
Received: from [10.57.42.186] (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A861D3F5A1;
 Mon, 27 Jun 2022 00:51:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe6c6032-f5ed-11ec-b725-ed86ccbb4733
Message-ID: <f311e86b-7d37-53f1-d2d5-e31d10654c87@arm.com>
Date: Mon, 27 Jun 2022 09:51:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 7/7] xen/arm: mm: Reduce the area that xen_second covers
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-8-julien@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <20220624091146.35716-8-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Julien,

On 24.06.2022 11:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, xen_second is used to cover the first 2GB of the
> virtual address space. With the recent rework of the page-tables,
> only the first 1GB region (where Xen resides) is effectively used.
> 
> In addition to that, I would like to reshuffle the memory layout.
> So Xen mappings may not be anymore in the first 2GB of the virtual
> address space.
> 
> Therefore, rework xen_second so it only covers the 1GB region where
> Xen will reside.
> 
> With this change, xen_second doesn't cover anymore the xenheap area
> on arm32. So, we first need to add memory to the boot allocator before
> setting up the xenheap mappings.
> 
> Take the opportunity to update the comments on top of xen_fixmap and
> xen_xenmap.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/mm.c    | 32 +++++++++++---------------------
>  xen/arch/arm/setup.c | 13 +++++++++++--
>  2 files changed, 22 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 74666b2e720a..f87a7c32d07d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -116,17 +116,14 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
>  #endif
>  
>  /* Common pagetable leaves */
> -/* Second level page tables.
> - *
> - * The second-level table is 2 contiguous pages long, and covers all
> - * addresses from 0 to 0x7fffffff. Offsets into it are calculated
> - * with second_linear_offset(), not second_table_offset().
> - */
> -static DEFINE_PAGE_TABLES(xen_second, 2);
> -/* First level page table used for fixmap */
> +/* Second level page table used to cover Xen virtual address space */
> +static DEFINE_PAGE_TABLE(xen_second);
> +/* Third level page table used for fixmap */
>  DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
> -/* First level page table used to map Xen itself with the XN bit set
> - * as appropriate. */
> +/*
> + * Third level page table used to map Xen itself with the XN bit set
> + * as appropriate.
> + */
>  static DEFINE_PAGE_TABLE(xen_xenmap);
>  
>  /* Non-boot CPUs use this to find the correct pagetables. */
> @@ -168,7 +165,6 @@ static void __init __maybe_unused build_assertions(void)
>      BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
>  #endif
>      BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
> -    BUILD_BUG_ON(second_linear_offset(XEN_VIRT_START) >= XEN_PT_LPAE_ENTRIES);
Instead of removing this completely, shouldn't you change it to:
BUILD_BUG_ON(second_table_offset(XEN_VIRT_START)); ?

All in all:
Reviewed-by: Michal Orzel <michal.orzel@arm.com>

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 07:54:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 07:54:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356302.584443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jZw-0002hT-FY; Mon, 27 Jun 2022 07:54:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356302.584443; Mon, 27 Jun 2022 07:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jZw-0002hM-Cn; Mon, 27 Jun 2022 07:54:40 +0000
Received: by outflank-mailman (input) for mailman id 356302;
 Mon, 27 Jun 2022 07:54:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kbpj=XC=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o5jZv-0002hG-4Q
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 07:54:39 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 649813e9-f5ee-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 09:54:38 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id cf14so11737440edb.8
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jun 2022 00:54:38 -0700 (PDT)
Received: from [192.168.1.10] (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.gmail.com with ESMTPSA id
 u17-20020a056402111100b0042deea0e961sm7140360edv.67.2022.06.27.00.54.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 27 Jun 2022 00:54:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 649813e9-f5ee-11ec-b725-ed86ccbb4733
Message-ID: <cb50eba7-bbc5-3100-2be3-98587766c240@gmail.com>
Date: Mon, 27 Jun 2022 10:54:35 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-3-burzalodowa@gmail.com>
 <61094b37-4075-e362-7fc6-ce28f965bb05@suse.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <61094b37-4075-e362-7fc6-ce28f965bb05@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 6/27/22 10:11, Jan Beulich wrote:

> On 26.06.2022 23:11, Xenia Ragiadakou wrote:
>> The function vm_event_wake() is referenced only in vm_event.c.
>> Change the linkage of the function from external to internal by adding
>> the storage-class specifier static to the function definition.
>>
>> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> Actually also for patch 1 - this is slightly confusing, as the title
> talks about 8.7. At first I suspected a typo. May I suggest to add
> "also" (which I think could be easily done while committing)?


This is actually a violation of MISRA C 2012 Rule 8.7 (Advisory), which 
states that a function referenced in only one translation unit should 
not be defined with external linkage.
That violation in turn triggers a MISRA C 2012 Rule 8.4 warning, because 
the function is defined with external linkage without a visible 
declaration at the point of definition.
My understanding, though, is that this does not make it a violation of 
MISRA C 2012 Rule 8.4 itself.

>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 08:08:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 08:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356312.584453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jn6-0004ur-UW; Mon, 27 Jun 2022 08:08:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356312.584453; Mon, 27 Jun 2022 08:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5jn6-0004uk-Rl; Mon, 27 Jun 2022 08:08:16 +0000
Received: by outflank-mailman (input) for mailman id 356312;
 Mon, 27 Jun 2022 08:08:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5jn5-0004ua-GY; Mon, 27 Jun 2022 08:08:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5jn5-0005Ng-CS; Mon, 27 Jun 2022 08:08:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5jn4-0002Na-VY; Mon, 27 Jun 2022 08:08:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5jn4-0002iZ-V7; Mon, 27 Jun 2022 08:08:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171363-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171363: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
X-Osstest-Versions-That:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 08:08:14 +0000

flight 171363 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171363/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 171357 pass in 171363
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 171357

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171357
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171357
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171357
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171357
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171357
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171357
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171357
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171357
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171357
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171357
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171357
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171357
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
baseline version:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526

Last test of basis   171363  2022-06-27 01:53:09 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 08:43:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 08:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356321.584464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kLB-0000mw-QT; Mon, 27 Jun 2022 08:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356321.584464; Mon, 27 Jun 2022 08:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kLB-0000mp-NS; Mon, 27 Jun 2022 08:43:29 +0000
Received: by outflank-mailman (input) for mailman id 356321;
 Mon, 27 Jun 2022 08:43:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQHX=XC=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o5kLA-0000mj-HT
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 08:43:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2042.outbound.protection.outlook.com [40.107.21.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 368aefd0-f5f5-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 10:43:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8081.eurprd04.prod.outlook.com (2603:10a6:20b:3e2::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Mon, 27 Jun
 2022 08:43:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 08:43:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 368aefd0-f5f5-11ec-b725-ed86ccbb4733
Message-ID: <a5e958aa-a616-115e-29e2-fe410b708583@suse.com>
Date: Mon, 27 Jun 2022 10:43:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: xenia <burzalodowa@gmail.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-3-burzalodowa@gmail.com>
 <61094b37-4075-e362-7fc6-ce28f965bb05@suse.com>
 <cb50eba7-bbc5-3100-2be3-98587766c240@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cb50eba7-bbc5-3100-2be3-98587766c240@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM6P195CA0041.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:87::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fcc39205-00ee-4d56-c542-08da5819195d
X-MS-TrafficTypeDiagnostic: AM9PR04MB8081:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fcc39205-00ee-4d56-c542-08da5819195d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 08:43:25.0755
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sf0gDh8brhcnEkovij5v7cBcJRSN8/A9VcDEJLXvPX4bdBf96EtNcKP7lra9r//Vtl6wE2aYjcn8qeTrTLTCrw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8081

On 27.06.2022 09:54, xenia wrote:
> On 6/27/22 10:11, Jan Beulich wrote:
>> On 26.06.2022 23:11, Xenia Ragiadakou wrote:
>>> The function vm_event_wake() is referenced only in vm_event.c.
>>> Change the linkage of the function from external to internal by adding
>>> the storage-class specifier static to the function definition.
>>>
>>> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
>> Actually also for patch 1 - this is slightly confusing, as the title
>> talks about 8.7. At first I suspected a typo. May I suggest to add
>> "also" (which I think could be easily done while committing)?
> 
> 
> This is actually a violation of MISRA C 2012 Rule 8.7 (Advisory), which 
> states that functions referenced in only one translation unit should not 
> be defined with external linkage.
> This violation triggers a MISRA C 2012 Rule 8.4 violation warning, 
> because the function is defined with external linkage without a visible 
> declaration at the point of definition.
> I thought that this does not make it a violation of MISRA C 2012 Rule 8.4.

I think this is a violation of both rules. It would be a violation of
only 8.7 if the function had a declaration, but still wasn't used
outside its defining CU.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 08:52:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 08:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356327.584476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kTv-0002I5-Kw; Mon, 27 Jun 2022 08:52:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356327.584476; Mon, 27 Jun 2022 08:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kTv-0002Hy-Hz; Mon, 27 Jun 2022 08:52:31 +0000
Received: by outflank-mailman (input) for mailman id 356327;
 Mon, 27 Jun 2022 08:52:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+OT=XC=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1o5kTt-0002Hs-Bg
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 08:52:29 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 790b0ad2-f5f6-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 10:52:28 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 o16-20020a05600c379000b003a02eaea815so4499343wmr.0
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jun 2022 01:52:28 -0700 (PDT)
Received: from [10.7.237.7] (54-240-197-231.amazon.com. [54.240.197.231])
 by smtp.gmail.com with ESMTPSA id
 h13-20020adff4cd000000b002103aebe8absm9720868wrp.93.2022.06.27.01.52.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 27 Jun 2022 01:52:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 790b0ad2-f5f6-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:reply-to:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=zD66L3mdITq6+N0oV3/QTMI5HSL0Xxy8B+3UI/gTIVM=;
        b=TUSft++HYTitNFoJnWHbmDqufQ4HfFly1XJ4egP6cXvI5z266MLmIl7Ce3bMsU2YMs
         fNcDKlR8JNXt6e+Utbq3VP1DIH+/QT5/SXiBygQRX/wJNourEWlo11KETbNEUk+fQRgG
         fuxJyCkABCAZF04NPfC/cH8urAZj9MCmXLfFobyuncMGq5Ji1wU6gnEjN8YJ2/FhOqdB
         yvWsnyuhH/tY4wPTDyResr7nm3TtptsknA092VQf5QudMjpDP9zxEEuhEy4gRz3dnWmF
         GSTXZfrmujZ38tCF7ScXpFfRoV+e1menEOotT7PheEkKOsjelG44c+Sko/TRCZn8BNTN
         CJqg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:reply-to
         :subject:content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=zD66L3mdITq6+N0oV3/QTMI5HSL0Xxy8B+3UI/gTIVM=;
        b=E4sLbvimD37Ioq6dhFsVWMCgxp9HEbapplOneEsfrLxrkAj9BNEiMoyoNe/zdzrjUM
         Dn3pGaxmMlQ1lu85Ey/cL9QVc9cnLI/joa/oDJzer9WuUiic8FLTmVGRkIwNaVpWrNu/
         Wo4qdakakkyCzffEbA/CXn3+95SVhccdO/c+UnJRxJ+U3Y0XISXjDlFjkmf2Ja8ii+WE
         2UN/7MppONYo6QyOv3FLnij6CNgBNWLqY8hpDmCrugtGD4lxwdBpm3Wt7yNmA1Kp5hfi
         2wo3ler4OzFgOIL6UpuCpoAjIqa868x/xIKLN6/+QBDK9qKq4KQh8CnkNb+91VnJw21D
         trcQ==
X-Gm-Message-State: AJIora+iwPQNrdkHZGf/XtOUV+mmyKHo3+CBCNrpmMLaiQz367mMamwZ
	lQjdywAUouLUri0FZcYSkBw=
X-Google-Smtp-Source: AGRyM1ue2+04Ox+iCpYrG9OKwUuld6DMVA7PofR/8yOuTsxVhCP+SVqFxzUGM+CPo+6IY4qnNykyJg==
X-Received: by 2002:a05:600c:35ce:b0:39c:7dc2:aec0 with SMTP id r14-20020a05600c35ce00b0039c7dc2aec0mr14143744wmq.33.1656319947966;
        Mon, 27 Jun 2022 01:52:27 -0700 (PDT)
Message-ID: <0044de2b-d25e-ba76-64e1-46316e786fb4@gmail.com>
Date: Mon, 27 Jun 2022 09:52:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 1/2] hw/i386/xen/xen-hvm: Allow for stubbing
 xen_set_pci_link_route()
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220626094656.15673-1-shentey@gmail.com>
 <20220626094656.15673-2-shentey@gmail.com>
From: "Durrant, Paul" <xadimgnik@gmail.com>
In-Reply-To: <20220626094656.15673-2-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26/06/2022 10:46, Bernhard Beschow wrote:
> The only user of xen_set_pci_link_route() is
> xen_piix_pci_write_config_client() which implements PIIX-specific logic in
> the xen namespace. This makes xen-hvm depend on PIIX which could be
> avoided if xen_piix_pci_write_config_client() was implemented in PIIX. In
> order to do this, xen_set_pci_link_route() needs to be stubbable which
> this patch addresses.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 08:52:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 08:52:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356330.584487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kUK-0002lA-U7; Mon, 27 Jun 2022 08:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356330.584487; Mon, 27 Jun 2022 08:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kUK-0002l3-R4; Mon, 27 Jun 2022 08:52:56 +0000
Received: by outflank-mailman (input) for mailman id 356330;
 Mon, 27 Jun 2022 08:52:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+OT=XC=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1o5kUJ-0002dY-M5
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 08:52:55 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88e4cbc3-f5f6-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 10:52:55 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id e28so6819891wra.0
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jun 2022 01:52:55 -0700 (PDT)
Received: from [10.7.237.7] (54-240-197-231.amazon.com. [54.240.197.231])
 by smtp.gmail.com with ESMTPSA id
 l1-20020a5d4bc1000000b00219e77e489fsm9633148wrt.17.2022.06.27.01.52.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 27 Jun 2022 01:52:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88e4cbc3-f5f6-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:reply-to:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=YxRvcSf7ptmHEAYJ6Lp/sYOIY/HDDd80u0zELz0O8Rg=;
        b=EVJcXnYxGeVieQhSeOobV7k1U5qEBLh66Tb+4iTzlm7+StkuQ+u3asnIz4U8fiBzTf
         0umbG5MBglrtNKmXjJP97eeuVS/uOVu1ksrs5Cs6uSw3dhB9pUmvQEH3qxEYzOIaNJo5
         fgXXHRZfwU6u3uFTMuvfU6w4mi5IU2cOLcCvSosPuveAEqnoVO6Nld+kYMUXbSuaD9/+
         Q9oSXSH6u8lte9NUpeMucqprHFlnngCCUIQzJY2+aE5oq6kglUm2DxJH8CrwEs7WlqMG
         18hLEMdEfg3z7+vHQqtvJlHsdJ1qcDG0MkqAt+kwDI0LRKkH5quMmB/jEwoDTt7h7uhv
         UQIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:reply-to
         :subject:content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=YxRvcSf7ptmHEAYJ6Lp/sYOIY/HDDd80u0zELz0O8Rg=;
        b=0TgdJC7hDC5HP6OKqndKBsnTegtc924+fizD0e3SvQkuKicQo0JJWZBFEB8hgaaZiF
         NBV7QK6zVCI41fZncJo9CvqTrrb3FbhYiOv6DV4exZsAzP5w94GWvkoymuNyctFCrCAC
         D8KHiBL34/StYA3LbqJpH8xruh5ozulW4MFUj3K39WYDOo8+/WhpSWy0Or27KnLce54I
         4+VaphSiWB5T2iUqHsAc2BzNvpXgkIpUmsu4W5BG+jJoDW+6zk9o8AYN0Ne6p8UGndtJ
         SRgHvA7FodH4s6q2P9Uq448hjw5iH1OVcKMrW5dcAJwJ5VXJ8ETcAGZ15/urJ59ymne+
         WuNw==
X-Gm-Message-State: AJIora9eWaz0fvlfJwNO8Vs2gcUOf//a3un7WY9XYtC++RKRSxIzkxvO
	eD3kmfpHenf9buvVpcvlhE0=
X-Google-Smtp-Source: AGRyM1sakRCV4QFb6DvtU4gnXvhnzZNuVTvM9I0nKjKAn0cOFnvi2cXEqpaAGuHVniGGE9URjY6kog==
X-Received: by 2002:a5d:4589:0:b0:21b:8568:f38e with SMTP id p9-20020a5d4589000000b0021b8568f38emr11039593wrq.165.1656319974592;
        Mon, 27 Jun 2022 01:52:54 -0700 (PDT)
Message-ID: <c4dae18f-5337-ef74-ea9d-0d6f20c9b919@gmail.com>
Date: Mon, 27 Jun 2022 09:52:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 2/2] hw/i386/xen/xen-hvm: Inline
 xen_piix_pci_write_config_client() and remove it
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220626094656.15673-1-shentey@gmail.com>
 <20220626094656.15673-3-shentey@gmail.com>
From: "Durrant, Paul" <xadimgnik@gmail.com>
In-Reply-To: <20220626094656.15673-3-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26/06/2022 10:46, Bernhard Beschow wrote:
> xen_piix_pci_write_config_client() is implemented in the xen sub tree and
> uses PIIX constants internally, thus creating a direct dependency on
> PIIX. Now that xen_set_pci_link_route() is stubbable, the logic of
> xen_piix_pci_write_config_client() can be moved to PIIX which resolves
> the dependency.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:00:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356339.584497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kbR-0004Np-NR; Mon, 27 Jun 2022 09:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356339.584497; Mon, 27 Jun 2022 09:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kbR-0004Ni-Kk; Mon, 27 Jun 2022 09:00:17 +0000
Received: by outflank-mailman (input) for mailman id 356339;
 Mon, 27 Jun 2022 09:00:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5kbQ-0004Nc-6b
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 09:00:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5kbP-0006fj-B8; Mon, 27 Jun 2022 09:00:15 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5kbP-00055g-4Y; Mon, 27 Jun 2022 09:00:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=x66fwp7U5pJGbfGG45GupYS/hRJj/Cb1f0e/GfzQ8G4=; b=VP2p12BDeBJZLPufQzbo+jmHbh
	cLuMAr2FMgTmGeH59pHzVp0CTBXfaIRDfqkMC7WQl9gMgdIDK5hRBVzHQ7q3VQT6oEUJhv2rjbk+F
	VObsSWfD9gXnzqlW7oopHBcJsRTChl+xr1QZ6bVVLDJbw3cfWVrcBA9FpAiRkvU+v9ZM=;
Message-ID: <d4dbc687-a6ec-0532-65ea-dbe949f5f93c@xen.org>
Date: Mon, 27 Jun 2022 10:00:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [linux-linus test] 171361: regressions - FAIL
To: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Cc: osstest service owner <osstest-admin@xenproject.org>,
 Juergen Gross <jgross@suse.com>
References: <osstest-171361-mainreport@xen.org>
 <d9c96d44-ca77-3f96-7da6-301c88695126@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d9c96d44-ca77-3f96-7da6-301c88695126@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 27/06/2022 08:38, Jan Beulich wrote:
> On 27.06.2022 03:38, osstest service owner wrote:
>> flight 171361 linux-linus real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/171361/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
>>   test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
>>   test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
>>   test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
>>   test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
>>   test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
>>   test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
>>   test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
> 
> At the example of this:
> 
> Jun 26 21:51:47.667404 mapping kernel into physical memory
> Jun 26 21:51:47.667425 about to get started...
> Jun 26 21:51:47.667435 (XEN) arch/x86/mm.c:2159:d0v0 Bad L1 flags 400000
> Jun 26 21:51:47.667448 (XEN) Hardware Dom0 halted: halting machine
> 
> This is an attempt to install (modify?) a PTE with _PAGE_GNTTAB set
> via normal page table management hypercalls. Considering how early in
> the boot process this appears to be, I wonder whether a flag was
> introduced in the kernel which aliases _PAGE_GNTTAB (or a use of such
> a pre-existing flag on a path which previously was safe from being hit
> when running in PV mode under Xen).
> 
> I wonder if the bisector is already having a go at isolating the
> offending commit, or whether it had already failed in trying to.

I saw the same issues in my testing. Manual bisection pointed to:

https://lore.kernel.org/all/22d07a44c80d8e8e1e82b9a806ddc8c6bbb2606e.1654759036.git.jpoimboe@kernel.org/

This is meant to be fixed by:

https://lore.kernel.org/xen-devel/20220623094608.7294-1-jgross@suse.com/

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:04:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356344.584508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kf1-00050T-7Z; Mon, 27 Jun 2022 09:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356344.584508; Mon, 27 Jun 2022 09:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kf1-00050M-4s; Mon, 27 Jun 2022 09:03:59 +0000
Received: by outflank-mailman (input) for mailman id 356344;
 Mon, 27 Jun 2022 09:03:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5kez-00050A-Jw; Mon, 27 Jun 2022 09:03:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5kez-0006na-HO; Mon, 27 Jun 2022 09:03:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5key-0005nI-WB; Mon, 27 Jun 2022 09:03:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5key-0001IV-Vk; Mon, 27 Jun 2022 09:03:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Llgg2C3vpi5i2h7jXqAz8WVJkQZD5UaljI7yfuvSuxo=; b=sku020ptkZ1YXxZixPTxSv6BRg
	QN3d9zAOOApOlxiz/JrXsZGPpMSMcw58sVJepJV8/EnEE/i9CM8SuKoGGzGyCFVRbiHhlp407azv2
	N6I0DgzzxyyMuqjy2GF5a48NSXrgYGCiS2npPi0KeNpO77XeFjHPkj6I58Hj7YcpcR5M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171366-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171366: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=790f3b214bffb402609ec1c6ceb2317c2b3cabb1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 09:03:56 +0000

flight 171366 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171366/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              790f3b214bffb402609ec1c6ceb2317c2b3cabb1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  717 days
Failing since        151818  2020-07-11 04:18:52 Z  716 days  698 attempts
Testing same since   171351  2022-06-25 04:18:58 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114220 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:13:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:13:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356354.584520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5knf-0006bF-8S; Mon, 27 Jun 2022 09:12:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356354.584520; Mon, 27 Jun 2022 09:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5knf-0006b8-5c; Mon, 27 Jun 2022 09:12:55 +0000
Received: by outflank-mailman (input) for mailman id 356354;
 Mon, 27 Jun 2022 09:12:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GcZ/=XC=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1o5knc-0006b2-U5
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 09:12:53 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2073.outbound.protection.outlook.com [40.107.243.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 506d10d0-f5f9-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 11:12:49 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM6PR12MB4185.namprd12.prod.outlook.com (2603:10b6:5:216::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Mon, 27 Jun
 2022 09:12:47 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683%4]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 09:12:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 506d10d0-f5f9-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <d20aa14a-0889-958b-15ad-91c4e1fd1578@amd.com>
Date: Mon, 27 Jun 2022 10:12:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [Viryaos-discuss] [PATCH 1/2] uboot-script-gen: prevent user
 mistakes due to DOM0_KERNEL becoming optional
To: Xenia Ragiadakou <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220626184536.666647-1-burzalodowa@gmail.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20220626184536.666647-1-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0106.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c3::9) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 61621b5d-65f0-4814-39dc-08da581d3392
X-MS-TrafficTypeDiagnostic: DM6PR12MB4185:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 61621b5d-65f0-4814-39dc-08da581d3392
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 09:12:47.1050
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4185


On 26/06/2022 19:45, Xenia Ragiadakou wrote:
> Before enabling true dom0less configuration, the script was failing instantly
> if DOM0_KERNEL parameter was not specified. This behaviour has changed and
> this needs to be communicated to the user.
>
> Mention in README.md that for dom0less configurations, the parameter
> DOM0_KERNEL is optional.
>
> If DOM0_KERNEL is not set, check that no other dom0 specific parameters are
> specified by the user. Fail the script early with an appropriate error
> message, if it was invoked with erroneous configuration settings.
>
> Change message "Dom0 kernel is not specified, continue with dom0less setup."
> to "Dom0 kernel is not specified, continue with true dom0less setup."
> to refer more accurately to a dom0less setup without dom0.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
> ---
>   README.md                |  1 +
>   scripts/uboot-script-gen | 21 ++++++++++++++-------
>   2 files changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/README.md b/README.md
> index 17ff206..cb15ca5 100644
> --- a/README.md
> +++ b/README.md
> @@ -100,6 +100,7 @@ Where:
>     been specified in XEN_PASSTHROUGH_PATHS.
>   
>   - DOM0_KERNEL specifies the Dom0 kernel file to load.
> +  For dom0less configurations, the parameter is optional.
>   
>   - DOM0_MEM specifies the amount of memory for Dom0 VM in MB. The default
>     is 1024. This is only applicable when XEN_CMD is not specified.
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index e85c6ec..085e29f 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -410,6 +410,20 @@ function find_root_dev()
>   
>   function xen_config()
>   {
> +    if test -z "$DOM0_KERNEL"
> +    then
> +        if test "$NUM_DOMUS" -eq "0"
> +        then
> +            echo "Neither dom0 or domUs are specified, exiting."
> +            exit 1
> +        elif test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_COLORS" || test "$DOM0_CMD" || test "$DOM0_RAMDISK" || test "$DOM0_ROOTFS"
> +        then
> +            echo "For dom0less configuration without dom0, no dom0 specific parameters should be specified, exiting."
> +            exit 1
> +        fi
> +        echo "Dom0 kernel is not specified, continue with true dom0less setup."
> +    fi
> +
>       if [ -z "$XEN_CMD" ]
>       then
>           if [ -z "$DOM0_MEM" ]
> @@ -457,13 +471,6 @@ function xen_config()
>       fi
>       if test -z "$DOM0_KERNEL"
>       then
> -        if test "$NUM_DOMUS" -eq "0"
> -        then
> -            echo "Neither dom0 or domUs are specified, exiting."
> -            exit 1
> -        fi
> -        echo "Dom0 kernel is not specified, continue with dom0less setup."
> -        unset DOM0_RAMDISK
>           # Remove dom0 specific parameters from the XEN command line.
>           local params=($XEN_CMD)
>           XEN_CMD="${params[@]/dom0*/}"
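The `XEN_CMD` filtering kept at the end of the hunk relies on bash pattern substitution applied across an array. A minimal standalone sketch of that idiom (the `XEN_CMD` value below is made up for illustration, not taken from the patch):

```shell
#!/bin/bash
# Sketch of the dom0-parameter stripping idiom from xen_config().
# This XEN_CMD value is a hypothetical example, not from the patch.
XEN_CMD="console=dhcp dom0_mem=1024M dom0_max_vcpus=2 sync_console=true"

# Split the command line into words, then blank out every word
# matching the pattern dom0* via array pattern substitution.
params=($XEN_CMD)
XEN_CMD="${params[@]/dom0*/}"

# Re-split (unquoted expansion) to collapse the empty words the
# substitution leaves behind.
XEN_CMD="$(echo $XEN_CMD)"
echo "$XEN_CMD"
```

The substitution `${params[@]/dom0*/}` replaces, in each array element, the longest match of `dom0*` with nothing, so whole `dom0_*` options become empty strings while unrelated options survive untouched.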


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:15:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:15:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356359.584531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kpy-0007AM-Me; Mon, 27 Jun 2022 09:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356359.584531; Mon, 27 Jun 2022 09:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5kpy-0007AF-IE; Mon, 27 Jun 2022 09:15:18 +0000
Received: by outflank-mailman (input) for mailman id 356359;
 Mon, 27 Jun 2022 09:15:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GcZ/=XC=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1o5kpy-0007A9-7G
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 09:15:18 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2083.outbound.protection.outlook.com [40.107.243.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a875d8dc-f5f9-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 11:15:16 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM6PR12MB4185.namprd12.prod.outlook.com (2603:10b6:5:216::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Mon, 27 Jun
 2022 09:15:13 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683%4]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 09:15:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a875d8dc-f5f9-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <78353de3-7483-a949-731c-d5a9199e7154@amd.com>
Date: Mon, 27 Jun 2022 10:15:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [Viryaos-discuss] [PATCH 2/2] uboot-script-gen: do not enable
 direct mapping by default
To: Xenia Ragiadakou <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net
References: <20220626184536.666647-1-burzalodowa@gmail.com>
 <20220626184536.666647-2-burzalodowa@gmail.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20220626184536.666647-2-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0


On 26/06/2022 19:45, Xenia Ragiadakou wrote:
> To be in line with Xen, do not enable direct mapping automatically for all
> statically allocated domains.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
> ---
>   README.md                | 4 ++--
>   scripts/uboot-script-gen | 8 ++------
>   2 files changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/README.md b/README.md
> index cb15ca5..03e437b 100644
> --- a/README.md
> +++ b/README.md
> @@ -169,8 +169,8 @@ Where:
>     if specified, indicates the host physical address regions
>     [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
>   
> -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
> -  If set to 1, the VM is direct mapped. The default is 1.
> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
> +  By default, direct mapping is disabled.
>     This is only applicable when DOMU_STATIC_MEM is specified.
>   
>   - LINUX is optional but specifies the Linux kernel for when Xen is NOT
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 085e29f..66ce6f7 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -52,7 +52,7 @@ function dt_set()
>               echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>           elif test $data_type = "bool"
>           then
> -            if test "$data" -eq 1
> +            if test "$data" == "1"
>               then
>                   echo "fdt set $path $var" >> $UBOOT_SOURCE
>               fi
> @@ -74,7 +74,7 @@ function dt_set()
>               fdtput $FDTEDIT -p -t s $path $var $data
>           elif test $data_type = "bool"
>           then
> -            if test "$data" -eq 1
> +            if test "$data" == "1"
>               then
>                   fdtput $FDTEDIT -p $path $var
>               fi
> @@ -491,10 +491,6 @@ function xen_config()
>           then
>               DOMU_CMD[$i]="console=ttyAMA0"
>           fi
> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
> -        then
> -             DOMU_DIRECT_MAP[$i]=1
> -        fi
>           i=$(( $i + 1 ))
>       done
>   }


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:48:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356366.584542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lLk-0002D8-80; Mon, 27 Jun 2022 09:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356366.584542; Mon, 27 Jun 2022 09:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lLk-0002D1-5B; Mon, 27 Jun 2022 09:48:08 +0000
Received: by outflank-mailman (input) for mailman id 356366;
 Mon, 27 Jun 2022 09:48:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5lLi-0002Cv-U2
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 09:48:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lLi-0007qB-FO; Mon, 27 Jun 2022 09:48:06 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lLi-0007Nb-7w; Mon, 27 Jun 2022 09:48:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <1b580210-9aab-106b-c0a7-60fe8726786f@xen.org>
Date: Mon, 27 Jun 2022 10:48:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 1/7] xen/arm: Remove most of the *_VIRT_END defines
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-2-julien@xen.org>
 <f078812a-bdd0-d27b-28ce-62c0c131ecdb@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <f078812a-bdd0-d27b-28ce-62c0c131ecdb@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/06/2022 07:23, Michal Orzel wrote:
> Hi Julien,

Hi,

Thanks for the review.

> On 24.06.2022 11:11, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/include/asm/config.h | 18 ++++++++----------
>>   xen/arch/arm/livepatch.c          |  2 +-
>>   xen/arch/arm/mm.c                 | 13 ++++++++-----
>>   3 files changed, 17 insertions(+), 16 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 3e2a55a91058..66db618b34e7 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -111,12 +111,11 @@
>>   #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
>>   
>>   #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
>> -#define BOOT_FDT_SLOT_SIZE     MB(4)
>> -#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
>> +#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
>>   
>>   #ifdef CONFIG_LIVEPATCH
>>   #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
>> -#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
>> +#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
>>   #endif
>>   
>>   #define HYPERVISOR_VIRT_START  XEN_VIRT_START
>> @@ -132,18 +131,18 @@
>>   #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>>   
>>   #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
>> +#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
>>   
>>   #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
>> -#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
>> -#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
>> -#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
>> +#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
>>   
>> -#define VMAP_VIRT_END    XENHEAP_VIRT_START
>> +#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
>> +#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
>>   
>>   #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
>>   
>>   /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>> -#define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
>> +#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>>   
>>   #else /* ARM_64 */
>>   
>> @@ -152,12 +151,11 @@
>>   #define SLOT0_ENTRY_SIZE  SLOT0(1)
>>   
>>   #define VMAP_VIRT_START  GB(1)
>> -#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1))
>> +#define VMAP_VIRT_SIZE   GB(1)
>>   
>>   #define FRAMETABLE_VIRT_START  GB(32)
>>   #define FRAMETABLE_SIZE        GB(32)
>>   #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>> -#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>>   
>>   #define DIRECTMAP_VIRT_START   SLOT0(256)
>>   #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
>> diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
>> index 75e8adcfd6a1..57abc746e60b 100644
>> --- a/xen/arch/arm/livepatch.c
>> +++ b/xen/arch/arm/livepatch.c
>> @@ -175,7 +175,7 @@ void __init arch_livepatch_init(void)
>>       void *start, *end;
>>   
>>       start = (void *)LIVEPATCH_VMAP_START;
>> -    end = (void *)LIVEPATCH_VMAP_END;
>> +    end = start + LIVEPATCH_VMAP_SIZE;
>>   
>>       vm_init_type(VMAP_XEN, start, end);
>>   
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index be37176a4725..0607c65f95cd 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
>>   /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>>   static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>>   #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
>> -/* xen_dommap == pages used by map_domain_page, these pages contain
>> +/*
>> + * xen_dommap == pages used by map_domain_page, these pages contain
>>    * the second level pagetables which map the domheap region
>> - * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>> + * starting at DOMHEAP_VIRT_START in 2MB chunks.
>> + */
>>   static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>>   /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
>>   static DEFINE_PAGE_TABLE(cpu0_pgtable);
>> @@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
>>       int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
>>       unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
>>   
>> -    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
>> +    if ( (va >= VMAP_VIRT_START) && ((VMAP_VIRT_START - va) < VMAP_VIRT_SIZE) )
> The second condition does not seem to be correct.

Hmm... You are right, it wants to be

((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE)

> Instead, this check should look like the following:
> if ( (va >= VMAP_VIRT_START) && (va < (VMAP_VIRT_START + VMAP_VIRT_SIZE)) )
This check would still be incorrect because, if the VMAP is right at the 
edge of the address space (e.g. ending at 2^32 - 1 on arm32), then 
VMAP_VIRT_START + VMAP_VIRT_SIZE would wrap around to 0.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 09:52:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 09:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356372.584553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lPh-0003j6-T8; Mon, 27 Jun 2022 09:52:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356372.584553; Mon, 27 Jun 2022 09:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lPh-0003iz-Pk; Mon, 27 Jun 2022 09:52:13 +0000
Received: by outflank-mailman (input) for mailman id 356372;
 Mon, 27 Jun 2022 09:52:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5lPg-0003it-L2
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 09:52:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lPf-0007v3-HU; Mon, 27 Jun 2022 09:52:11 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lPf-0007Vt-Bf; Mon, 27 Jun 2022 09:52:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e92b0f0f-d73c-a003-eb0f-15f7d624a75e@xen.org>
Date: Mon, 27 Jun 2022 10:52:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-3-julien@xen.org>
 <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/06/2022 07:31, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 24.06.2022 11:11, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> A lot of places in the ARM32 assembly require loading the physical address
>> of a symbol. Rather than open-coding the translation, introduce a new macro
>> that loads the physical address of a symbol.
>>
>> Lastly, use the new macro to replace all the current open-coded versions.
>>
>> Note that most of the comments associated with the changed code have been
>> removed because the code is now self-explanatory.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/arm32/head.S | 23 +++++++++++------------
>>   1 file changed, 11 insertions(+), 12 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index c837d3054cf9..77f0a619ca51 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -65,6 +65,11 @@
>>           .endif
>>   .endm
>>   
>> +.macro load_paddr rb, sym
>> +        ldr   \rb, =\sym
>> +        add   \rb, \rb, r10
>> +.endm
>> +
> All the macros in this file have a comment so it'd be useful to follow this convention.
This is not really a convention. Most of the macros are non-trivial 
(e.g. they may clobber registers).

The comment I have in mind here would be:

"Load the physical address of \sym in \rb"

I am fairly confident that anyone can understand it from the ".macro" 
line... So I don't feel the comment is necessary.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:00:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356380.584575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lXX-0005YO-UA; Mon, 27 Jun 2022 10:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356380.584575; Mon, 27 Jun 2022 10:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lXX-0005YH-Pk; Mon, 27 Jun 2022 10:00:19 +0000
Received: by outflank-mailman (input) for mailman id 356380;
 Mon, 27 Jun 2022 10:00:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5lXW-0005IT-GH
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:00:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lXU-00088M-Iv; Mon, 27 Jun 2022 10:00:16 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lXU-00085B-BU; Mon, 27 Jun 2022 10:00:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <ce325da1-e6d4-3692-9707-f9bd52bf78c4@xen.org>
Date: Mon, 27 Jun 2022 11:00:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] public/io: xs_wire: Allow Xenstore to report EPERM
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20220624165151.940-1-julien@xen.org>
 <d135e1aa-d0bb-8a7a-cb1d-7dc9387f1f24@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d135e1aa-d0bb-8a7a-cb1d-7dc9387f1f24@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/06/2022 07:57, Jan Beulich wrote:
> On 24.06.2022 18:51, Julien Grall wrote:
>> --- a/xen/include/public/io/xs_wire.h
>> +++ b/xen/include/public/io/xs_wire.h
>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>   __attribute__((unused))
>>   #endif
>>       = {
>> +    XSD_ERROR(EPERM),
>>       XSD_ERROR(EINVAL),
>>       XSD_ERROR(EACCES),
>>       XSD_ERROR(EEXIST),
> 
> Inserting ahead of EINVAL looks to break xenstored_core.c:send_error(),

:(.

> which - legitimately or not - assumes EINVAL to be first.

I am not sure who else is relying on this, so I would consider this to 
be baked into the ABI. I think the minimum is to add a BUILD_BUG_ON() in 
send_error().

I will also move EPERM towards the end (I added it first because EPERM is 1).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:00:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356379.584563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lXW-0005If-LI; Mon, 27 Jun 2022 10:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356379.584563; Mon, 27 Jun 2022 10:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lXW-0005IU-I4; Mon, 27 Jun 2022 10:00:18 +0000
Received: by outflank-mailman (input) for mailman id 356379;
 Mon, 27 Jun 2022 10:00:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5lXV-0005IN-3u
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:00:17 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id f0af5e5d-f5ff-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 12:00:16 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6991D1691;
 Mon, 27 Jun 2022 03:00:14 -0700 (PDT)
Received: from [10.57.42.186] (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 93C813F792;
 Mon, 27 Jun 2022 03:00:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0af5e5d-f5ff-11ec-b725-ed86ccbb4733
Message-ID: <8c8d6a9f-18cd-43e5-0835-68927e7d1bac@arm.com>
Date: Mon, 27 Jun 2022 11:59:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-3-julien@xen.org>
 <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
 <e92b0f0f-d73c-a003-eb0f-15f7d624a75e@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <e92b0f0f-d73c-a003-eb0f-15f7d624a75e@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit



On 27.06.2022 11:52, Julien Grall wrote:
> 
> 
> On 27/06/2022 07:31, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 24.06.2022 11:11, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> A lot of places in the ARM32 assembly require loading the physical address
>>> of a symbol. Rather than open-coding the translation, introduce a new macro
>>> that loads the physical address of a symbol.
>>>
>>> Lastly, use the new macro to replace all the current open-coded versions.
>>>
>>> Note that most of the comments associated with the changed code have been
>>> removed because the code is now self-explanatory.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ---
>>>   xen/arch/arm/arm32/head.S | 23 +++++++++++------------
>>>   1 file changed, 11 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>>> index c837d3054cf9..77f0a619ca51 100644
>>> --- a/xen/arch/arm/arm32/head.S
>>> +++ b/xen/arch/arm/arm32/head.S
>>> @@ -65,6 +65,11 @@
>>>           .endif
>>>   .endm
>>>   +.macro load_paddr rb, sym
>>> +        ldr   \rb, =\sym
>>> +        add   \rb, \rb, r10
>>> +.endm
>>> +
>> All the macros in this file have a comment so it'd be useful to follow this convention.
> This is not really a convention. Most of the macros are non-trivial (e.g. they may clobber registers).
> 
> The comment I have in mind here would be:
> 
> "Load the physical address of \sym in \rb"
> 
> I am fairly confident that anyone can understand it from the ".macro" line... So I don't feel the comment is necessary.
> 
Fair enough, although you did put a comment when introducing load_paddr in the arm64 head.S.
Anyway, I'm OK with not adding it.

> Cheers,
> 


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:02:10 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] x86/ept: fix shattering of special pages
Date: Mon, 27 Jun 2022 12:01:19 +0200
Message-Id: <20220627100119.55363-1-roger.pau@citrix.com>

The current logic in epte_get_entry_emt() will force a split of any
superpage (order greater than zero) that contains a page marked as
special, without checking whether the whole superpage is special.

Fix this by splitting the page only if it's not entirely marked as
special, in order to prevent unneeded superpage shattering.

Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Paul Durrant <paul@xen.org>
---
It would seem weird to me to have a superpage entry in the EPT with
only some of its ranges marked as special.  I guess it's better to be
safe, but I don't see a scenario where we could end up in that
situation.

I've been trying to find a clarification in the original patch
submission about how it's possible to have such a superpage EPT entry,
but haven't been able to find any justification.

It might be nice to expand the commit message to explain why it's
possible to have such mixed superpage entries that would need
splitting.
---
 xen/arch/x86/mm/p2m-ept.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b04ca6dbe8..b4919bad51 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -491,7 +491,7 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
-    unsigned long i;
+    unsigned long i, special_pgs;
 
     *ipat = false;
 
@@ -525,15 +525,17 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
-    for ( i = 0; i < (1ul << order); i++ )
-    {
+    for ( special_pgs = i = 0; i < (1ul << order); i++ )
         if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
-        {
-            if ( order )
-                return -1;
-            *ipat = true;
-            return MTRR_TYPE_WRBACK;
-        }
+            special_pgs++;
+
+    if ( special_pgs )
+    {
+        if ( special_pgs != (1ul << order) )
+            return -1;
+
+        *ipat = true;
+        return MTRR_TYPE_WRBACK;
     }
 
     switch ( type )
-- 
2.36.1
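
The decision the patched loop makes can be illustrated with a small standalone sketch. `classify_range()`, `is_special_mfn()` and the `special[]` bitmap below are hypothetical stand-ins for the EPT code's `is_special_page(mfn_to_page(...))` walk, not Xen code; the sketch only mirrors the three-way outcome of the new logic.

```c
#include <stdbool.h>

/* Hypothetical toy frame space: special[mfn] stands in for
 * is_special_page(mfn_to_page(mfn_add(mfn, i))) in the real code. */
static bool special[16];

static bool is_special_mfn(unsigned long mfn)
{
    return special[mfn];
}

/* Mirrors the decision of the patched epte_get_entry_emt():
 *  -1: the 2^order range mixes special and non-special frames,
 *      so the superpage must be split;
 *   1: the whole range is special (WB memory type with ipat set);
 *   0: no special frame in the range (fall through to MTRR logic). */
static int classify_range(unsigned long mfn, unsigned int order)
{
    unsigned long i, special_pgs = 0;

    for ( i = 0; i < (1ul << order); i++ )
        if ( is_special_mfn(mfn + i) )
            special_pgs++;

    if ( !special_pgs )
        return 0;

    return special_pgs == (1ul << order) ? 1 : -1;
}
```

With the pre-patch logic, any non-zero `special_pgs` on a superpage forced a split; counting first means a fully-special superpage keeps its large mapping.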



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:03:33 2022
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Date: Mon, 27 Jun 2022 10:03:14 +0000
Message-ID:
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
In-Reply-To: <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>

SGkgamFuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSmFuIEJldWxp
Y2ggPGpiZXVsaWNoQHN1c2UuY29tPg0KPiBTZW50OiBXZWRuZXNkYXksIEp1bmUgMjIsIDIwMjIg
NToyNCBQTQ0KPiBUbzogUGVubnkgWmhlbmcgPFBlbm55LlpoZW5nQGFybS5jb20+DQo+IENjOiBX
ZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNvbT47IEFuZHJldyBDb29wZXINCj4gPGFuZHJldy5jb29w
ZXIzQGNpdHJpeC5jb20+OyBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+
Ow0KPiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPjsgU3RlZmFubyBTdGFiZWxsaW5pIDxz
c3RhYmVsbGluaUBrZXJuZWwub3JnPjsNCj4gV2VpIExpdSA8d2xAeGVuLm9yZz47IHhlbi1kZXZl
bEBsaXN0cy54ZW5wcm9qZWN0Lm9yZw0KPiBTdWJqZWN0OiBSZTogW1BBVENIIHY3IDcvOV0geGVu
L2FybTogdW5wb3B1bGF0ZSBtZW1vcnkgd2hlbiBkb21haW4gaXMNCj4gc3RhdGljDQo+IA0KPiBP
biAyMC4wNi4yMDIyIDA0OjQ0LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPiAtLS0gYS94ZW4vY29t
bW9uL3BhZ2VfYWxsb2MuYw0KPiA+ICsrKyBiL3hlbi9jb21tb24vcGFnZV9hbGxvYy5jDQo+ID4g
QEAgLTI0OTgsNiArMjQ5OCwxMCBAQCB2b2lkIGZyZWVfZG9taGVhcF9wYWdlcyhzdHJ1Y3QgcGFn
ZV9pbmZvICpwZywNCj4gdW5zaWduZWQgaW50IG9yZGVyKQ0KPiA+ICAgICAgICAgIH0NCj4gPg0K
PiA+ICAgICAgICAgIGZyZWVfaGVhcF9wYWdlcyhwZywgb3JkZXIsIHNjcnViKTsNCj4gPiArDQo+
ID4gKyAgICAgICAgLyogQWRkIHBhZ2Ugb24gdGhlIHJlc3ZfcGFnZV9saXN0ICphZnRlciogaXQg
aGFzIGJlZW4gZnJlZWQuICovDQo+ID4gKyAgICAgICAgaWYgKCB1bmxpa2VseShwZy0+Y291bnRf
aW5mbyAmIFBHQ19zdGF0aWMpICkNCj4gPiArICAgICAgICAgICAgcHV0X3N0YXRpY19wYWdlcyhk
LCBwZywgb3JkZXIpOw0KPiANCj4gVW5sZXNzIEknbSBvdmVybG9va2luZyBzb21ldGhpbmcgdGhl
IGxpc3QgYWRkaXRpb24gZG9uZSB0aGVyZSAvIC4uLg0KPiANCj4gPiAtLS0gYS94ZW4vaW5jbHVk
ZS94ZW4vbW0uaA0KPiA+ICsrKyBiL3hlbi9pbmNsdWRlL3hlbi9tbS5oDQo+ID4gQEAgLTkwLDYg
KzkwLDE1IEBAIHZvaWQgZnJlZV9zdGF0aWNtZW1fcGFnZXMoc3RydWN0IHBhZ2VfaW5mbyAqcGcs
DQo+IHVuc2lnbmVkIGxvbmcgbnJfbWZucywNCj4gPiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBib29sIG5lZWRfc2NydWIpOyAgaW50DQo+ID4gYWNxdWlyZV9kb21zdGF0aWNfcGFnZXMoc3Ry
dWN0IGRvbWFpbiAqZCwgbWZuX3Qgc21mbiwgdW5zaWduZWQgaW50DQo+IG5yX21mbnMsDQo+ID4g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgbWVtZmxhZ3MpOw0KPiA+
ICsjaWZkZWYgQ09ORklHX1NUQVRJQ19NRU1PUlkNCj4gPiArI2RlZmluZSBwdXRfc3RhdGljX3Bh
Z2VzKGQsIHBhZ2UsIG9yZGVyKSAoeyAgICAgICAgICAgICAgICAgXA0KPiA+ICsgICAgdW5zaWdu
ZWQgaW50IGk7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQo+ID4g
KyAgICBmb3IgKCBpID0gMDsgaSA8ICgxIDw8IChvcmRlcikpOyBpKysgKSAgICAgICAgICAgICAg
ICAgIFwNCj4gPiArICAgICAgICBwYWdlX2xpc3RfYWRkX3RhaWwoKHBnKSArIGksICYoZCktPnJl
c3ZfcGFnZV9saXN0KTsgXA0KPiA+ICt9KQ0KPiANCj4gLi4uIGhlcmUgaXNuJ3QgZ3VhcmRlZCBi
eSBhbnkgbG9jay4gRmVlbHMgbGlrZSB3ZSd2ZSBiZWVuIHRoZXJlIGJlZm9yZS4NCj4gSXQncyBu
b3QgcmVhbGx5IGNsZWFyIHRvIG1lIHdoeSB0aGUgZnJlZWluZyBvZiBzdGF0aWNtZW0gcGFnZXMg
bmVlZHMgdG8gYmUNCj4gc3BsaXQgbGlrZSB0aGlzIC0gaWYgaXQgd2Fzbid0IHNwbGl0LCB0aGUg
bGlzdCBhZGRpdGlvbiB3b3VsZCAibmF0dXJhbGx5IiBvY2N1ciB3aXRoDQo+IHRoZSBsb2NrIGhl
bGQsIEkgdGhpbmsuDQoNClJlbWluZGVkIGJ5IHlvdSBhbmQgSnVsaWVuLCBJIG5lZWQgdG8gYWRk
a lock for operations (free/allocation) on resv_page_list, I'll guard
put_static_pages with d->page_alloc_lock, and bring back the lock in
acquire_reserved_page.

put_static_pages, that is, adding pages to the reserved list, is only for
freeing static pages at runtime. In the static page initialization stage, I
also use free_staticmem_pages, and at that stage, I think the domain has not
been constructed at all. So I prefer that the freeing of staticmem pages is
split into two parts: free_staticmem_pages and put_static_pages.

> 
> Furthermore be careful with the local variable name used here. Consider
> what would happen with an invocation of
> 
>     put_static_pages(d, page, i);
> 
> The common approach is to suffix an underscore to the variable name.
> Such names are not supposed to be used outside of macro definitions, and
> hence there's then no potential for such a conflict.
> 

Understood! I will change "unsigned int i" to "unsigned int i_";

> Finally I think you mean (1u << (order)) to be on the safe side against UB
> if order could ever reach 31. Then again - is "order" as a parameter needed
> here in the first place? Wasn't it that staticmem operations are limited to
> order-0 regions?

Yes, right now the actual usage is limited to order-0. How about I add an
assertion here and remove the order parameter:

        /* Add page on the resv_page_list *after* it has been freed. */
        if ( unlikely(pg->count_info & PGC_static) )
        {
            ASSERT(!order);
            put_static_pages(d, pg);
        }

> Jan
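The locking and assertion scheme discussed in this thread might be sketched as a small userspace model. Note this is only an illustration: struct domain, struct page_info, and the function names mirror the discussion, but the pthread mutex, the PGC_static flag value, and the list handling below are simplified stand-ins, not Xen's actual types or implementation.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct page_info {
    struct page_info *next;       /* simplified list linkage */
    unsigned long count_info;
};

#define PGC_static 0x1UL          /* illustrative flag value */

struct domain {
    pthread_mutex_t page_alloc_lock;  /* stands in for d->page_alloc_lock */
    struct page_info *resv_page_list; /* singly linked for simplicity */
};

/* Add a fully freed page to the domain's reserved list under the lock. */
static void put_static_pages(struct domain *d, struct page_info *pg)
{
    pthread_mutex_lock(&d->page_alloc_lock);
    pg->next = d->resv_page_list;
    d->resv_page_list = pg;
    pthread_mutex_unlock(&d->page_alloc_lock);
}

/* Caller side, as in the quoted free path: staticmem is limited to
 * order-0, so the order parameter collapses to an assertion. */
static void free_path(struct domain *d, struct page_info *pg,
                      unsigned int order)
{
    if ( pg->count_info & PGC_static )
    {
        assert(!order);           /* stands in for ASSERT(!order) */
        put_static_pages(d, pg);
    }
}
```

With the assertion in place, dropping the order parameter from put_static_pages itself becomes safe, since only order-0 pages can ever reach the list.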


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:06:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:06:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356404.584607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ldV-0007jk-8D; Mon, 27 Jun 2022 10:06:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356404.584607; Mon, 27 Jun 2022 10:06:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ldV-0007jd-5c; Mon, 27 Jun 2022 10:06:29 +0000
Received: by outflank-mailman (input) for mailman id 356404;
 Mon, 27 Jun 2022 10:06:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5ldT-0007jV-7U
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:06:27 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cdfbf40b-f600-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 12:06:26 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9D6521FAA2;
 Mon, 27 Jun 2022 10:06:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7B36F13456;
 Mon, 27 Jun 2022 10:06:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 3NLJHCGBuWIeOwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 10:06:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdfbf40b-f600-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656324385; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Lgja+mgTvrj3SSYgWkQBoZdXnV38e5VIwhcxARx23FQ=;
	b=L/3gLPaALErIisJRNroZR+vUYz8I6eou3O1/zNPXkBLxAJTiY4eMTBOf2PiK7153TQewRu
	MZZD0PYV/zUlbWuKIMyZTYc1x5ceUjqDy0zIxcZE0SQ9SBUPfqZBZhvZQmKIFvVYnmC3Yo
	+pHWS/nxSr5yHMAB0K7frdVrFkEfNno=
Message-ID: <66900fb3-c86c-4413-062f-26443a0b9ba7@suse.com>
Date: Mon, 27 Jun 2022 12:06:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] public/io: xs_wire: Allow Xenstore to report EPERM
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20220624165151.940-1-julien@xen.org>
 <d135e1aa-d0bb-8a7a-cb1d-7dc9387f1f24@suse.com>
 <ce325da1-e6d4-3692-9707-f9bd52bf78c4@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <ce325da1-e6d4-3692-9707-f9bd52bf78c4@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------QoMqoaBvCXfUdaXGd0T2uw8g"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------QoMqoaBvCXfUdaXGd0T2uw8g
Content-Type: multipart/mixed; boundary="------------whd0lmGyvKm86Ev7exmMOAnh";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Message-ID: <66900fb3-c86c-4413-062f-26443a0b9ba7@suse.com>
Subject: Re: [PATCH] public/io: xs_wire: Allow Xenstore to report EPERM
References: <20220624165151.940-1-julien@xen.org>
 <d135e1aa-d0bb-8a7a-cb1d-7dc9387f1f24@suse.com>
 <ce325da1-e6d4-3692-9707-f9bd52bf78c4@xen.org>
In-Reply-To: <ce325da1-e6d4-3692-9707-f9bd52bf78c4@xen.org>

--------------whd0lmGyvKm86Ev7exmMOAnh
Content-Type: multipart/mixed; boundary="------------2XP7g0cjMhHeGFZjmXazOJqP"

--------------2XP7g0cjMhHeGFZjmXazOJqP
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

On 27.06.22 12:00, Julien Grall wrote:
> Hi Jan,
> 
> On 27/06/2022 07:57, Jan Beulich wrote:
>> On 24.06.2022 18:51, Julien Grall wrote:
>>> --- a/xen/include/public/io/xs_wire.h
>>> +++ b/xen/include/public/io/xs_wire.h
>>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>>   __attribute__((unused))
>>>   #endif
>>>       = {
>>> +    XSD_ERROR(EPERM),
>>>       XSD_ERROR(EINVAL),
>>>       XSD_ERROR(EACCES),
>>>       XSD_ERROR(EEXIST),
>>
>> Inserting ahead of EINVAL looks to break xenstored_core.c:send_error(),
> 
> :(.
> 
>> which - legitimately or not - assumes EINVAL to be first.
> 
> I am not sure who else is relying on this. So I would consider this to be
> baked into the ABI. I think the minimum is to add a BUILD_BUG_ON() in
> send_error().
> 
> I will also move EPERM towards the end (I added it first because EPERM is 1).

I agree to both plans.


Juergen
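The plan agreed above (append EPERM towards the end of the table, and check the "EINVAL is first" assumption explicitly) might look roughly like the following compilable sketch. This is a trimmed stand-in for the real xsd_errors table in xen/include/public/io/xs_wire.h, and a plain assert() stands in for the BUILD_BUG_ON() proposed for send_error(); the lookup helper is hypothetical.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define XSD_ERROR(x) { x, #x }

struct xsd_errors {
    int errnum;
    const char *errstring;
};

/* Trimmed stand-in for the table in xs_wire.h. */
static struct xsd_errors xsd_errors[] = {
    XSD_ERROR(EINVAL),  /* must stay first: send_error() relies on it */
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
    XSD_ERROR(EPERM),   /* new entry goes towards the end, not ahead */
};

/* Hypothetical lookup, modelled on xenstored's send_error() behaviour:
 * unknown error numbers fall back to entry 0 (EINVAL). */
static const char *error_string(int err)
{
    unsigned int i;

    /* Runtime stand-in for the BUILD_BUG_ON() from the thread: bake the
     * "EINVAL is first" assumption in explicitly. */
    assert(xsd_errors[0].errnum == EINVAL);

    for ( i = 0; i < sizeof(xsd_errors) / sizeof(xsd_errors[0]); i++ )
        if ( xsd_errors[i].errnum == err )
            return xsd_errors[i].errstring;

    return xsd_errors[0].errstring;
}
```

The check makes the ordering dependency fail loudly if someone later inserts an entry ahead of EINVAL again.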
--------------2XP7g0cjMhHeGFZjmXazOJqP
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2XP7g0cjMhHeGFZjmXazOJqP--

--------------whd0lmGyvKm86Ev7exmMOAnh--

--------------QoMqoaBvCXfUdaXGd0T2uw8g
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5gSEFAwAAAAAACgkQsN6d1ii/Ey+k
TAf9HDJccIF+t0V7ZrqNKVmtUoChBNIEOCDQRe4FtL3PJmKleqPAk0QwzepQb/Qmsd2cW83NQXi9
uo8pTZvjSNi3M5wXB2BSFvWpc+v4SMI4EdYv5vd4VM2PUU68OmSiv5h3Dr2kHo32PQxiZ+194VDW
jRU1PVO4MXjdBJvzyoALGCNHEa5liH3bjmEldpiDHgAxsjuKyraKaq8RxM6aUdSfyIveS0dO51kT
vdXbR+dffqnP7nYTAj1CuwqtKOeViyvs8q4M2AO/GygmbZXmT5Tp3bMIJnWspcAA9NQBriyznzgu
7uK4wI5ax+o9MOWfTEVqN6CNs+eAvSYi2PKcgLN29Q==
=7Wpx
-----END PGP SIGNATURE-----

--------------QoMqoaBvCXfUdaXGd0T2uw8g--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:09:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:09:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356409.584619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lgP-0008PS-Mr; Mon, 27 Jun 2022 10:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356409.584619; Mon, 27 Jun 2022 10:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lgP-0008PL-KB; Mon, 27 Jun 2022 10:09:29 +0000
Received: by outflank-mailman (input) for mailman id 356409;
 Mon, 27 Jun 2022 10:09:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5lgP-0008PF-1U
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:09:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lgO-0008S3-Nk; Mon, 27 Jun 2022 10:09:28 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lgO-0008Sk-Gk; Mon, 27 Jun 2022 10:09:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=kQ1uzM4Ezv5rcEN2PYnbIlEciQrawISUpwH0zLWQhas=; b=cApBeOqUEm/SIV+7lZ+qx/0jF9
	tMWGXkMGj8Bt+WhfKbM6SWyKuoif+/bayL2aYHmZnO2NP/hmjaOFQRDJmDDWclZVck2OPRjQClPHm
	EYP7kLEi1iKgInCEf9SrefrYogzXnu36Y9cjts8+c9ZbklFLnKdIHpQmKuEDeBBk2tTg=;
Message-ID: <fbead981-fe36-30fe-12cd-29842a642e47@xen.org>
Date: Mon, 27 Jun 2022 11:09:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-3-julien@xen.org>
 <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
 <e92b0f0f-d73c-a003-eb0f-15f7d624a75e@xen.org>
 <8c8d6a9f-18cd-43e5-0835-68927e7d1bac@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <8c8d6a9f-18cd-43e5-0835-68927e7d1bac@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Michal,

On 27/06/2022 10:59, Michal Orzel wrote:
> 
> 
> On 27.06.2022 11:52, Julien Grall wrote:
>>
>>
>> On 27/06/2022 07:31, Michal Orzel wrote:
>>> Hi Julien,
>>
>> Hi Michal,
>>
>>> On 24.06.2022 11:11, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> A lot of places in the ARM32 assembly require loading the physical address
>>>> of a symbol. Rather than open-coding the translation, introduce a new macro
>>>> that will load the physical address of a symbol.
>>>>
>>>> Lastly, use the new macro to replace all the current open-coded versions.
>>>>
>>>> Note that most of the comments associated with the changed code have been
>>>> removed because the code is now self-explanatory.
>>>>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>> ---
>>>>    xen/arch/arm/arm32/head.S | 23 +++++++++++------------
>>>>    1 file changed, 11 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>>>> index c837d3054cf9..77f0a619ca51 100644
>>>> --- a/xen/arch/arm/arm32/head.S
>>>> +++ b/xen/arch/arm/arm32/head.S
>>>> @@ -65,6 +65,11 @@
>>>>            .endif
>>>>    .endm
>>>>    +.macro load_paddr rb, sym
>>>> +        ldr   \rb, =\sym
>>>> +        add   \rb, \rb, r10
>>>> +.endm
>>>> +
>>> All the macros in this file have a comment so it'd be useful to follow this convention.
>> This is not really a convention. Most of the macros are non-trivial (e.g. they may clobber registers).
>>
>> The comment I have in mind here would be:
>>
>> "Load the physical address of \sym in \rb"
>>
>> I am fairly confident that anyone can understand from the ".macro" line... So I don't feel the comment is necessary.
>>
> Fair enough, although you did put a comment when introducing load_paddr for arm64 head.S.

For better (or worse), the way I code has evolved over the past 5 
years. :) Commenting is one thing that changed: I learnt from other open 
source projects that it is better to comment when it is not clear what 
the function/code is doing.

Anyway, this is easy enough for me to add if either Bertrand or Stefano 
think that it is better to add a comment.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:11:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356415.584630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5liT-0001LD-3O; Mon, 27 Jun 2022 10:11:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356415.584630; Mon, 27 Jun 2022 10:11:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5liS-0001L6-VX; Mon, 27 Jun 2022 10:11:36 +0000
Received: by outflank-mailman (input) for mailman id 356415;
 Mon, 27 Jun 2022 10:11:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nImS=XC=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o5liS-0001L0-Ao
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:11:36 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60071.outbound.protection.outlook.com [40.107.6.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 85f699cf-f601-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 12:11:34 +0200 (CEST)
Received: from DB6P192CA0010.EURP192.PROD.OUTLOOK.COM (2603:10a6:4:b8::20) by
 AM6PR08MB5031.eurprd08.prod.outlook.com (2603:10a6:20b:ed::12) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.18; Mon, 27 Jun 2022 10:11:33 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b8:cafe::e3) by DB6P192CA0010.outlook.office365.com
 (2603:10a6:4:b8::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Mon, 27 Jun 2022 10:11:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Mon, 27 Jun 2022 10:11:32 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Mon, 27 Jun 2022 10:11:32 +0000
Received: from 8c687c20eec8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 730414EB-1C8A-440E-B6EF-2F0BEE7BED13.1; 
 Mon, 27 Jun 2022 10:11:23 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8c687c20eec8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jun 2022 10:11:23 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by GV1PR08MB8035.eurprd08.prod.outlook.com (2603:10a6:150:98::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Mon, 27 Jun
 2022 10:11:20 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Mon, 27 Jun 2022
 10:11:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85f699cf-f601-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=njGKSgIP0WfvPDLQNmVqZTwWt5YymAhtq6I/1PoZd+AejGjBDinr6pocUKmBxa3i6dXnl0qJKoSvnasQjVLv2BFy6kpKSa4U8FAPJhx+A48Qc1XTYeU5u1yVppxSQCIAStEsIDrMKpa+PCzbMIAFEHhGm0o9GpyLs6otzgdj/s9ixpVAecilS/qLCGOX3v7cEggaRk6w18b6XwYWPO/3Z7ieD/CNCoslEGU7ql0uNU6yg6YWm5RwlYrGntVlyva2idHhvi/9JCa7AHM6AEqgUt5zKm7suokVvJzAcpKQOykULZtSD6DG23DD0BcDuOevwTIg2exF90KsZoctQ6gW7w==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vhXiTrf3nYEpPapCSFgPB9m1LKse5UuoW5un3Cv9OwM=;
 b=UVMTA4/P6UCu8iO/JfwOBiDYJI+fPt7DJN84wKAjgAcL5o0BFifmFdKhxPDIVYyKpwl1pZuoNk0Nfk9HDdDGqkWyIjvOH5+UkvCewizj1tnY717AeEUAKU/ovNVPNcsxsQ1piuQxcIUctisHYlgScpiDu4GEWoZJ72fSAfdg5nJ2lev7ZetZtQHIy54z6MQSNHllb69ifUUp3eOdvxTMnj8ZbZ5gIDIcuULtU111U9KBYf/emhBC0/ZBjeldf7oSjDcWFUaD07S4IrmfN8n8PXt3O1L9dyY8ne7aPl+nloKfgyitPcWbyOW7XUNLiN6terXVugAloftm8hZ3mQkwgQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vhXiTrf3nYEpPapCSFgPB9m1LKse5UuoW5un3Cv9OwM=;
 b=s9bxiqPiVS4kVNWMkloeiQ6jrPxgaQRMObQQv+cHSWMdkMHen7O9BXNkwJ7QkGVQTzeuEMZuHHvNSwdRIBbPz/P+uCTHen+2z77QCHNicVEQ2hbsAP7qgtORlspvDgTatevYWIeMnWI/MrvOJztes2FCuHmhWeCqngRtBvIIWdU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VqMllXL2D+ogWPEA4yd2387UifqCYavtWBCkvUBF5y+P8Y6LdwHRMOY236lcyGs3H5gFM5FaNp7ankSD0ZLAhMRcKnB2QfbD3kWQ5JisrF4l8bexxSwj8x79+8+xqGg+8UHrnpGUgb349Cr8LuWRiFDaszYb3Vga+skogVg0ihIFU7br2d7WlomLIt4r0vmMVdNq5vS0NCKtgOX8hvoggp13GEjP2r6xifX/whPqT6S/YRHdtZ39kjhVJuA3oXtkiwuKl4ic6ze3MHux/wuzgUD7isShXUPmk26Sce0Vm1GKGTZaKWGAm5gXJ57+VRKi0Ji74JC6epV3Uw3+Jgy0pA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vhXiTrf3nYEpPapCSFgPB9m1LKse5UuoW5un3Cv9OwM=;
 b=imQNvurhoWysBpKd2F/x79vz3/kp2k7gaZs3m+/LSiBP383LRrmU4Oi/FTP7ut1PmyY3I8Y26lAjmi8Z1EKzJNGehd30QDi9pPN3YftsuDLgG15/kyI4GwFjNZbX2wCaW4JUE310ShfeJ2Z5vJoG0NmmZVqL8DCIivF6JT3V6Th+Fj4SzyAKvMrYHM8SzeVlsXmf/NzmWBp0gKCDRwttKz6IexNMt2w80pK/Hw03zAKMUpCg14GbwkKKOVWdMJ5xmIqpkp2vGgxyZdvWdv1oXXugaEmoRrOFKmCRfkzzPU+LtvCtkmczYZmlIYCxuLOOMUrBlioXv9nMuoVw0E3tGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vhXiTrf3nYEpPapCSFgPB9m1LKse5UuoW5un3Cv9OwM=;
 b=s9bxiqPiVS4kVNWMkloeiQ6jrPxgaQRMObQQv+cHSWMdkMHen7O9BXNkwJ7QkGVQTzeuEMZuHHvNSwdRIBbPz/P+uCTHen+2z77QCHNicVEQ2hbsAP7qgtORlspvDgTatevYWIeMnWI/MrvOJztes2FCuHmhWeCqngRtBvIIWdU=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
Thread-Topic: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
Thread-Index: AQHYhE/PLzgZAZhohEWBkyQ6CNSVbK1e/rYAgAQAb4A=
Date: Mon, 27 Jun 2022 10:11:20 +0000
Message-ID:
 <DU2PR08MB732533C0E687B1A27417E54AF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-10-Penny.Zheng@arm.com>
 <ae94da35-40d5-f65c-1df5-3ebde3aa86a3@xen.org>
In-Reply-To: <ae94da35-40d5-f65c-1df5-3ebde3aa86a3@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A1762B05500C5744B1EC62B571E08221.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: ee218fba-4bd3-4873-2397-08da58256945
x-ms-traffictypediagnostic:
	GV1PR08MB8035:EE_|DBAEUR03FT026:EE_|AM6PR08MB5031:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 GrZz+5uCIX11aKqp9IcEgGGlJ78R50VNUSeRM9TMCbwNiRYvXA9mSD1GD+uGwl0nGjowzpQ7YLZNEyLAuM1GPR+btGhokRA+5lgy1nZikSQlPAKkNh17RGshO6t7TScPtSuOmhN/SAWP8riOJ9pKvIRLywj6WjXIVKfuiIKnAHHDXkmiCW4UZAxqRNZuSnv0QwLxW9Nfx5QHweer/MM04OfqtLkQ4ks8L6eLVxLmEsLuN7w7o81iz4yL0xDuuJYwfwHBZMKDIByJOfZXkNj61mLsAWXbe6wvniRVPsLDAzla9klOM3ZLZzW/GLRiDG6ym9Ht0KcY7dgjzmJyRJU1yQ1yXHBKvUasCWcsmv6BU55qZ3DnS9t1xyPVKCDVnsnQWMjoL2/kULO3G1I9nQINVeDetERwoy+c7aTgdydTh/knNNt2LJtRyVKo2gDOD8sw3Xc7tLbID5q6jVW+BHcs71Lfm9FsJ78GtRM8C9Ei46YgtZevZU363OKITh6+kghmkHWcsdvbqE2NfFxroPNDU1u8NRdX5TAlhKEzRXy0CryMuK2bOrCti8mooN5PLerO4JUXh8X5u0QjqJBt2MFhiW8jRtEzXfu1gQetvIhGLTJi9vZ5yx6WsQKQudFVWnFD60+TVGd1rmjU+gRb9A+ImAZ3fYsGQuToXpyd/bTJEgDjY1DWwXvxkq9SD0mrGv+jZGJ8pZ1aaVWrNnQFijUsGdKq2qYmH7hiSTwBoigamJYm89vyrGIa2gDgWkge+5hwwkVCYiB9jVBDC9WHK9BDCa9BNE6NWqfxY80qfv8+aYV4rxM4rXPHhO0urkUf5x4bFh3mNOUvFZ9JYvGJbaNFTg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(366004)(396003)(376002)(39860400002)(346002)(2906002)(55016003)(86362001)(6506007)(41300700001)(478600001)(76116006)(38100700002)(66446008)(64756008)(71200400001)(26005)(9686003)(66556008)(8936002)(52536014)(66476007)(53546011)(8676002)(4326008)(122000001)(5660300002)(186003)(316002)(66946007)(7696005)(54906003)(33656002)(110136005)(38070700005)(83380400001)(156664002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8035
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3d513610-5538-492d-a279-08da582561f1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	imyjANTnjzMdc455TAchH3CwtyT8XairA0MeRRbHVMXFoQCE1zH77/htgQTxFzltamSMoubkvpH+9i9zFCQWa9nT4kNape3xmKNTNsXMyZ3eL3ISUSHoyPKcMERuZJgdPk6IVAUt7eYzqrkSuaVe7gOVRCbDdrDf+EpZTJtclQ1c28NJW64FoapJJkqpzyb0zsqqrViSY+nrh2buVOXknl2j4rzPW6Brt351RarNRW0kkWC3nQztk5sMvSIrDa1EpgrVqWgcNQ3Ccve8eKcgRD7H885Kz1D4fMKof9H1xgYf2LXQGv4346h8DTNAxQfc9fhSMgY2lGaIFZ1XazrxaBPccPBXhiM1ZqfgJ1u8udK2OHK53ZDzvO13ptgq7mfBeO+Qa5rS2MdTzGqb4g8zKVWkxyZGGzLf3MYcGpI11cZEL3m1r15uWXyDHVc8YaUlydZOSE3ws/s7k77uPSD8KwTGUExkMwxfcYJhVnm1sejRRUFQMQ0HGRnCZNrfpF7maz2Oefczm2Kbvc0L/jFVGxgE7aRAxF4pvfPoICLBDRID3AfGqgadb1BM9ELu5m3ny/5Sn+WMIPuj5Hxf1t+msVx3B5mEd70yq4+E/flsmlM65R29CMjwDJgj9ojZX6rjZnKyaKajn6fRoMAo38dH8sEL5FJe2nsBMmZMIIUaj2+2WgJ7KQLFZUB7DSqgCpbmGO4BBtZUv8V9v5ORXnEDa0xQ7Ek2n+cifqOiTozwcyatDEhRu8DzjxcI93Wj+k78Zpm6Jz2LX7qw8Jxm+Id64nEbQDb7e5PibKkdyOsEQROx/j4gXJNVGQHKDnvHD3LEZQSGjhZjGvDm97RovDZSmg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(39860400002)(346002)(376002)(40470700004)(46966006)(36840700001)(186003)(82310400005)(356005)(55016003)(81166007)(33656002)(8936002)(316002)(40480700001)(5660300002)(47076005)(110136005)(54906003)(336012)(83380400001)(70586007)(26005)(9686003)(86362001)(52536014)(36860700001)(8676002)(4326008)(478600001)(70206006)(41300700001)(82740400003)(2906002)(53546011)(7696005)(40460700003)(6506007)(156664002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 10:11:32.9519
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ee218fba-4bd3-4873-2397-08da58256945
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5031

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 3:51 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on
> populate_physmap
> 
> Hi Penny,
> 
> On 20/06/2022 03:44, Penny Zheng wrote:
> > When a static domain populates memory through populate_physmap at
> > runtime, it shall retrieve reserved pages from resv_page_list to make
> > sure that guest RAM is still restricted to the statically configured
> > memory regions.
> > This commit also introduces a new helper, acquire_reserved_page, to
> > make it work.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> > v7 changes:
> > - remove the lock, since we add the page to rsv_page_list after it has
> > been totally freed.
> 
> Hmmm... Adding the page after it has been totally freed doesn't mean you
> can get away without the lock. AFAICT you can still have concurrent
> free/allocate that could modify the list.
> 
> Therefore if you add/remove pages without the lock, you would end up
> corrupting the list.
> 
> If you disagree, then please point out which lock (or mechanism) will
> prevent concurrent access.
> 

Ok. Combined with the comments on the last series, you actually suggest that
we need to add two locks, right?

One is the lock for concurrent free/allocation on page_info, and we will use
heap_lock for it; one use stays in free_staticmem_pages, the other in its
reverse function prepare_staticmem_pages.

The other is for concurrent free/allocation on resv_page_list, and we will
use d->page_alloc_lock to guard it; one use stays in put_static_page, the
other in the reverse function acquire_reserved_page.

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:19:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356423.584641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lpq-00028h-UW; Mon, 27 Jun 2022 10:19:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356423.584641; Mon, 27 Jun 2022 10:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lpq-00028a-Rl; Mon, 27 Jun 2022 10:19:14 +0000
Received: by outflank-mailman (input) for mailman id 356423;
 Mon, 27 Jun 2022 10:19:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5lpp-00028U-BT
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:19:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lph-0000BO-3e; Mon, 27 Jun 2022 10:19:05 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5lpg-0000Qq-Sy; Mon, 27 Jun 2022 10:19:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Fd7SNqyJ3gUsXFoqNaY/wMGrGZ8jzIRgsGEe0NNTWQk=; b=EScbIo1cJ1Ltt7MgsScCWkr+PP
	wQXWUa1BDlPYwCOErJOV3yIZ3pjGYrUU9PbdYtGcEu7F/N8bOAQTi4pXEtqkZR5unu/JGpXq19dkz
	HycJaDJxirOWbTSn7EYmdx/j7IAmprccIFyGpTWkd15ze8j2tIIUN0Z9qwc3hxkJHSIU=;
Message-ID: <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
Date: Mon, 27 Jun 2022 11:19:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
To: Penny Zheng <Penny.Zheng@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/06/2022 11:03, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
> put_static_pages, that is, adding pages to the reserved list, is only for freeing static
> pages at runtime. In the static page initialization stage, I also use free_staticmem_pages,
> and at that stage the domain has not been constructed at all. So I prefer that
> the freeing of staticmem pages be split into two parts: free_staticmem_pages and
> put_static_pages.

AFAIU, all the pages would have to be allocated via 
acquire_domstatic_pages(). This call requires the domain to be allocated 
and set up for handling memory.

Therefore, I think the split is unnecessary. This would also have the 
advantage of removing one loop. Admittedly, this is not important when the 
order is 0, but it would become a problem for larger orders (you may have to 
pull the struct page_info into the cache multiple times).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:25:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:25:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356430.584652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lvr-0003XW-Jr; Mon, 27 Jun 2022 10:25:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356430.584652; Mon, 27 Jun 2022 10:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lvr-0003XP-H2; Mon, 27 Jun 2022 10:25:27 +0000
Received: by outflank-mailman (input) for mailman id 356430;
 Mon, 27 Jun 2022 10:25:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5lvp-0003XF-LD; Mon, 27 Jun 2022 10:25:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5lvp-0000HU-EO; Mon, 27 Jun 2022 10:25:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5lvo-0000l9-Qh; Mon, 27 Jun 2022 10:25:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5lvo-0001Ew-QG; Mon, 27 Jun 2022 10:25:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xgZdCAFkBeyPaCYXEIUhi4D1FEzRlmNR2bVjDTWYtSU=; b=xTl4ArtIoI41RH5dfOOf4TwnX3
	YZ1Hkmih9eMesyEi/tUqBcdyqz8TXRoXPfacPpWB1LJZnlmDTN4qe2EYWf0audBkadw/9SpWQBkSV
	jl3Hxbnz65wFDog2JCxidWSj25GqZnsWX0t2sBREYlqQevYPds3KR9d4WYc562Fgs2ZM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171364-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171364: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=03c765b0e3b4cb5063276b086c76f7a612856a9a
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 10:25:24 +0000

flight 171364 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171364/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                03c765b0e3b4cb5063276b086c76f7a612856a9a
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    8 days
Failing since        171280  2022-06-19 15:12:25 Z    7 days   23 attempts
Testing same since   171364  2022-06-27 01:55:47 Z    0 days    1 attempts

------------------------------------------------------------
358 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 12715 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:28:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356437.584662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lz8-0004EF-3H; Mon, 27 Jun 2022 10:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356437.584662; Mon, 27 Jun 2022 10:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5lz8-0004E8-0V; Mon, 27 Jun 2022 10:28:50 +0000
Received: by outflank-mailman (input) for mailman id 356437;
 Mon, 27 Jun 2022 10:28:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CVBH=XC=citrix.com=prvs=1703a0240=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o5lz6-0004Dy-Tc
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:28:48 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebeee391-f603-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 12:28:46 +0200 (CEST)
Received: from mail-mw2nam04lp2173.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Jun 2022 06:28:24 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5806.namprd03.prod.outlook.com (2603:10b6:a03:2d2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Mon, 27 Jun
 2022 10:28:21 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 10:28:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebeee391-f603-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656325726;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=0mtFCpSFN/y+j75DWraDGSeaZQnH5YMUcsGF4jT/8zY=;
  b=gEF+eonxIkT5SRYpi1g4flven2cIklvTAJvUgdIc5MuhDycFDzdgN5v9
   rAjJCDu3tlEAXwATVDF0GO2o74jP+gZLWW+89XHnaO/IWz7OmD6HHdnuw
   3n11DQzoqsTulGlx2q5Aw4nwj3d1pC1kI8Mej8kzfH/bvdm8rUsH7thSo
   Y=;
X-IronPort-RemoteIP: 104.47.73.173
X-IronPort-MID: 74906291
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:TNfAlayLDQjVhM1SCx56t+fBxyrEfRIJ4+MujC+fZmUNrF6WrkUAz
 WQWX2qBa/6NajfxLt0lYdm3oE0P75PRmIA1QAJspCAxQypGp/SeCIXCJC8cHc8zwu4v7q5Dx
 59DAjUVBJlsFhcwnj/0bv656yMUOZigHtIQMsadUsxKbVIiGX1JZS5LwbZj2NY224fhWmthh
 PupyyHhEA79s9JLGjp8B5Kr8HuDa9yr5Vv0FnRnDRx6lAe2e0s9VfrzFonoR5fMeaFGH/bSe
 gr25OrRElU1XfsaIojNfr7TKiXmS1NJVOSEoiI+t6OK2nCuqsGuu0qS2TV1hUp/0l20c95NJ
 Npll6KvRCUyMof1se0cCUUAIQtdYYxf0eqSSZS/mZT7I0zuVVLJm6krKX5seIoS96BwHH1E8
 uEeJHYVdBefiumqwbW9DO5xmsAkK8qtN4Qa0p1i5WiBUbB6HtacGOOTuoQwMDQY36iiGd7EY
 MUUc3x3ZQnoaBxTIFYHTpk5mY9Eg1GgL2wA9gjE/MLb5UDc41R+2ZHRFubcZ9+oSPtvw1iqn
 zjZqjGR7hYycYb3JSC+2nCjnOjUhgvgRZkfUra/85ZCkFCVg2AeFhASfV+6uuWizF6zXcpFL
 E4Z8TZoqrI9nGSwVcX0VRC8pH+CvzYfVsBWHul87xuCooLW/gKYC24sXjNHLts8u6ceTzEwy
 kWAmd+vADV1qaCUUlqU7LLSpjS3UQArKmsFaT4BXBEyydDpq4EujTrCVt9mVqWyi7XdGzv93
 jSLpygWnKgIgIgA0KDT1U/DqyKhoN7OVAFdzgfYRGuh6itwYYe3YIru4l/ehcusN66cR1iF+
 X0bwc6X6bhSCYnXzXPWBuIQALuu+vCJdiXGhkJiFIUg8DLr/GO/eYdX43d1I0IB3ts4RAIFq
 XT74Wt5jKK/9lP1BUOrS+pd0/gX8JU=
IronPort-HdrOrdr: A9a23:2yZwta6FYeor3YNlDwPXwS2BI+orL9Y04lQ7vn2ZFiY5TiXIra
 qTdaogviMc6Ax/ZJjvo6HkBEClewKlyXcV2/hpAV7GZmXbUQSTTL2KgbGSoAEIXheOjdK1tp
 0QD5SWaueAamSS5PySiGfYLz9j+qjgzEnBv5ai854Hd3APV0gP1XYaNu7NeXcGPjWuSKBJY6
 a0145inX6NaH4XZsO0Cj0sWPXCncTCkNbDbQQdDxAqxQGShXfwgYSKWiSw71M7aXdi0L0i+W
 /Kn0jQ4biiieiyzlv523XI55pbtdP9wp9oBdCKiOISNjLw4zzYErhJavmnhnQYseuv4FElnJ
 3lpAohBd167zfrcmS8sXLWqnvd+Qdrz0Wn5U6TgHPlr8C8bik9EdB9iYVQdQacw1Y8vflnuZ
 g7kl6xht5yN1ftjS7979/HW1VBjUyvu0cvluYVkjh2TZYeUrlMtoYSlXklWqvoJBiKp7zPLd
 MeQv01vJ1tABKnhjHizyJSKeWXLzgO9kzseDlDhiSXuwIm70yRgXFoh/D3pU1wiq7Ve6M0mN
 gsDZ4Y5Y2mbvVmGJ6VV91xNfefOyjqfS/mFl60DBDOKJwnUki926Ifpo9FrN2XRA==
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="74906291"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E6vmPTIeJwOTEP7W53XZe3nlAmOBj69FX0eR9RmnKWJsSMB8Qk8x9IIwuZf6cvsUk0WQVlIuN99TNV1vbVMkWyqweEZVOghfiTWoSpzyKo04DrnPO3pRp9oDUA4VLUMIGUhBbadIiRAas/sHtooeZCZy0xY53KK+vfiB9YhZEdTHcIpz+vRqeVBmWdgueygKFzWpY9ODvyhRMosFaIhuNEYq4YaHNxtf9mGzPjlQ9rw1jbaOV77m9g4AhQ/NoJNSqb21JvaEttGAOIzhjj/io06ydbemM3iJTzc/ES/SfI4onRs9y4x9IBneUqYFCpds4KTZztTf2hHeTR8Pk4vFaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MxGVPXuC/atwh/Z9xRoEJ7fhFahrpsLfKel2SZ+3ZaE=;
 b=jSOjLWqnFFRylnbQLbYba/9g1FTmG2H47/QvwjvhuCotI5yFprHcdI6bI72kpo+tPrMO2F6jU5dRZ0AUdsN3s07vwGxoD0B9l4d6AmmPZTlP/O5tUuQoKQURVRyJfInZn/8ImRofzGZNOjVYQo3rk6+z/Hm8DM783IXgPJ8LsdChMlRki1RZ6NkKxVhtC7qpiw3U0n/4wZSijRUOLMKX4M6Lt9hFj7Qva4HoxRZCQYnaTr7yWf4QmQNvU2QgPlnPpnxYJ2+TWX5aGCoW9cob6U+lyYL1eoay1+gKv+TTUq8w/fVE9QVW8cHMozZP4wxVzYYYfVIngy1zuDDvWOII8Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MxGVPXuC/atwh/Z9xRoEJ7fhFahrpsLfKel2SZ+3ZaE=;
 b=PDdnljUdekmOpytJMeiXiP1FnYLzo+Lo/cKGSsuQiRC6Fd0irQS3VsrRTT57R6FsW+4C5lSiOdfX5Kf2H7CxNci3CeZlpii9wZdqQlWU+OAFHrEqZ+6X67s4uu/DKJSIgTxdj7Ubkl+PRyX2mLGUN5he228XjGy2PdgpE2naD+w=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 27 Jun 2022 12:28:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Message-ID: <YrmGQj/W7KTzJ1vo@Air-de-Roger>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220324140139.5899-2-jgross@suse.com>
X-ClientProxiedBy: LO2P123CA0049.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c25f0d1f-79c8-40eb-263f-08da5827c227
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5806:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gNXAgqzH+A3pwASiN6ND33KGXaQ4fRBTxBCJlbs5R7JftLn7vGOBgeMjlfsAMjQXK+6iHxH8XLmfwR2WP6RyewHXXqDCPs8tAtAbT50cKau0Zq8qnRKW8e+LCiUTN+vVtt4h11w+nK/WzwNynaaEE6vXtgtidu7m/lE/rtieffoaOLybVNXT2pA6eD9cr0WoBsAXZU+OnuBoClgRPcfRII1FIL17u019NwYG/P06L5muINA3+TmkyNY49+fA2iCTT8eQy9bA0qs4BP+yI9P2xRmNGhzSaPKY0TJCiaUeush96yW0sWKvNhPygnjyoHyzW38YREwTRGvly7MQzg7aQZs2HmnJQ5D0nlwuQs0IumE+ny5FUHrOOybZwfwQyQIav5+VjzoX0tfFNVOh3FE9xYg1XhDitAlgNeO/YiGaRlVUL1qEArk1LiF3rCDRsukZfhzSwwZzHz2U1J6gdR6hHk/yFLTHLUUHLI1snYmq6ZWaG0sszd0ZYd3Y/33kMG5W3OUTa5vRzliCERSHzeDt/tPNCVEgi6XCUTOlA5LfNjYcwciLPbyWs2b+QunR0/Gb3rKFdzQBgvhAhEdSp082lpJWMmSJyl8BQbaMDzVLmJo7qwwWc0Gn1/A2sd5Kz+e6lVNA+hOfGhn3CRmLRIczD5JnP7GzKLDvx2V7hlYw6fDb/muYtjqmCu7VoA2TqRKruTTbkeCpqzJmzEhhKGWu84ypwN17sypEBwj9JdEOjFH8vyzJO7l70/+5+RsZc/yR
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(7916004)(366004)(136003)(376002)(396003)(39860400002)(346002)(4326008)(8676002)(38100700002)(316002)(2906002)(85182001)(6666004)(4744005)(33716001)(5660300002)(6486002)(6512007)(26005)(9686003)(478600001)(82960400001)(8936002)(54906003)(66946007)(66476007)(66556008)(86362001)(6916009)(41300700001)(6506007)(186003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ejNkVDVMeExwaU9KaXpINXZFanI5bWdKdkg5VjBzRVRXVkNIRjhQeEtyUEFu?=
 =?utf-8?B?Q0NMQ0lkNEM1eTc3Q09xM0REQnJENGhySGh3WGI1cnBRTXA4am1ieEM5L3lt?=
 =?utf-8?B?TEFsN0R0dnU5TDY1eW1nQmd4VXRPdnh5M1RyRzY1UXpVNmk5SldnOXd0M1F5?=
 =?utf-8?B?QUZPdTlvdk5nRWFxR0lXa2h6M2xoVGtlSU5YTmxCYVBuU0IyUklXeVJjNXFO?=
 =?utf-8?B?UW9reDFMY1c1cjRHZkVsRm9wSDZNL3dHWHh5RzNWOUc4a1hqSmlvYVVsUkZm?=
 =?utf-8?B?WE9XVWp5NVU2N3Axa2pOOE9WbStpYUtnbUxhQ0R0QzNBa1ErLzdlWkg1YkZR?=
 =?utf-8?B?d1Vqakx4MmpNWnc5cEVVMzNRUE9lK0ttOVhaTW94WGloZVFlNFNmdkhEUGNG?=
 =?utf-8?B?Z1Q2bkkzb29VS2xqK0Z6WTlPdnNzMEJ2WFUwdUZxSWtacHNhTml3TVk0SVRl?=
 =?utf-8?B?eHZoOFRVVjByRU9qdERqeC9QU21IbFh5ZFpGZTQwSVRuZGlGU2Vud3BkUWJm?=
 =?utf-8?B?dVlhY0FnSkxHV2s2TXVvWmUwY2U2ZFNoSkkybEVoYkVoYTJlN0lIdWJOZFdz?=
 =?utf-8?B?a29JMTJvZE12THpjN2crbnJVNGNSSkd3OGU5aCtZMXNrWnVTYXJSSk9FMmw4?=
 =?utf-8?B?VUw0d00ydUdkTUdOZHVtdmh5aFBhaHlPZy9ubHhTSE5ITlNUVFZmNC9UYVVv?=
 =?utf-8?B?VTFXTE9JZUdnSFAyZHpSVDJXMzIxYnRvS3FlVGMzN2ZueWUwQ3paTjNoanU0?=
 =?utf-8?B?cVNLY3BEZzVJRTVhTHhpNXhMOForSGxvRWg5NGZVdElteXplcDNyV2JjUE5j?=
 =?utf-8?B?T2gvRHhiRFk1TndUNktJaFB3bUxZdEVDNXFJcCt5QlZLR3pUOEVxK3NsWlZi?=
 =?utf-8?B?MG1RaDFkYVc1Q083SW5vaFZoNi9QRGIrNmNIYlZ6NWxab2wrSTltTElLWlVB?=
 =?utf-8?B?azV1MG9SWnhmbFY2RjdYTi9zajV1S05NRGRMeHlqS0Z0NU5lRElvcWdlNDFu?=
 =?utf-8?B?WFdBbFZxZnQrRm1LYVJzWUhlZ1ErZ3pFTW1FQUFaS2FMUGFXOTlrQWZXb2JI?=
 =?utf-8?B?Z3VDT0s1eXVXbVdHK1BxRGk1N2RZOStKS3hqeFJleVJjdi9xQ0dEOWpqV1VK?=
 =?utf-8?B?Q09tbStadUlhRTY5dFZEc3dRWjdaUGU4WVRnVlF4YlorZ0x1K1BtNldsNVhZ?=
 =?utf-8?B?Tzd4R0tFanA0N3JIZ3pjL3VQTGtMenVvZTVkbUVKM2YxaHM4ZXVDMFd6eVRa?=
 =?utf-8?B?bHVwVXlhR0JIeTM1aDR6TksvMk1nNityNXNlRFNFdFk4Rng5cEI4Skd4UThV?=
 =?utf-8?B?UnBEazIvYlJnb0h2czdKTlZhR0JqVXFLbWZXRUNEc0JwdWo2c3c1UUZZQ0Ux?=
 =?utf-8?B?SEk2Uk5PY1lJY0IzaGQ4a2Q1UCtSamlvcEg5NGpRQ3hxNUlYQ1N5MEFYNjVZ?=
 =?utf-8?B?TTJGT0NnekFBalNjRWVYRzMwdHl6K0lRVW81Slc4WHc1b3Vac0daSWVQM1p2?=
 =?utf-8?B?dmw3WDNnVkVyUGpxT2c0MDhGN05rMzA0d3pUMXBLdDVqWFczZFFKQzNldTI3?=
 =?utf-8?B?ME4xUjFvajFtdmFVcmU1RmJaWmRQYmNacW9BS3lDa2w4TVRHOWdsWm5WSldT?=
 =?utf-8?B?OUZQbG9tTnhnRWxOd0NmVFFSaTRvckJvQXJuVnE0K0RMQkxVVUVUUXRwVXQr?=
 =?utf-8?B?RkpGMnhtQ1dMVkMyckJvQTFoc3IxZkNpQWRHcHI0aEdKQ3RCNjlCVGZTN3l4?=
 =?utf-8?B?U0VtN3JuYkJaTDdudVpUbEJnaTVRZzRGTndhNi91c0NDTTk0eVlkUndNSjRr?=
 =?utf-8?B?ckNZd0pZYVJURkVRR2lodTRKK1UxVWlEcDNrWnRCZEtjK3Awbmo3bTU5NFJU?=
 =?utf-8?B?ZjJ6empEQnJrTTJ3L2s5VzI4UERjamdHdFZwb2xqVzZiMC9NVjU1dXJCTWN1?=
 =?utf-8?B?cXFyZXpYbXZNb0ZlUzVoUmlzbUFvVjRHcGVKY0d1NE5LTzR6azFxdlV3YUEz?=
 =?utf-8?B?cTB3UFVvRkZCKzI5aWdjYUYyRDZuVVhtL3M2NldFYVVNbzQzV3BzK1B4R0R2?=
 =?utf-8?B?MjJTbUhVYXlubHBFeXVtUnVkU0txb1ZvNXhZT2I5cDg3bDlZVUhXcHJGaEFk?=
 =?utf-8?Q?tW4or4XQveDucvJ9BSLyr8Zzx?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c25f0d1f-79c8-40eb-263f-08da5827c227
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 10:28:21.2691
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /vNOGlPYCK0WRQ/4mn5g9h2VMjQIl8jZxIIylrPdBXBYPdZ1Ivi0/rPZ0VgiQNYd2Kt1XFvnLxrP0Ft5zco9Uw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5806

On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
> The entry point used for the vcpu_op hypercall on Arm is different
> from the one on x86 today, as some of the common sub-ops are not
> supported on Arm. The Arm specific handler filters out the not
> supported sub-ops and then calls the common handler. This leads to the
> weird call hierarchy:
> 
>   do_arm_vcpu_op()
>     do_vcpu_op()
>       arch_do_vcpu_op()
> 
> Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
> arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
> of above calls can be avoided without restricting any potential
> future use of common sub-ops for Arm.

Wouldn't it be more natural to have do_vcpu_op() contain the common
code (AFAICT handlers for
VCPUOP_register_{vcpu_info,runstate_memory_area}) and then everything
else handled by the x86 arch_do_vcpu_op() handler?

I find the common prefix misleading, as not all the VCPUOP hypercalls
are available to all the architectures.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:35:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:35:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356445.584674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5m5x-0005in-VR; Mon, 27 Jun 2022 10:35:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356445.584674; Mon, 27 Jun 2022 10:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5m5x-0005ig-ST; Mon, 27 Jun 2022 10:35:53 +0000
Received: by outflank-mailman (input) for mailman id 356445;
 Mon, 27 Jun 2022 10:35:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5m5x-0005ia-50
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:35:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5m5w-0000Ts-Ra; Mon, 27 Jun 2022 10:35:52 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5m5w-0001I9-KE; Mon, 27 Jun 2022 10:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=qI+p55CjaVTHL4lYOgnO2MlGiOjtAI3/EAiuxJ+Z9dY=; b=wml24ks4kmVuC4zoyWeZpJQFLz
	2xJ3hPX7P0sh5S1plEt5HVwKaoEJ3YeWi8O8p4nFOuWZAAikwcm3TGzQVSxVEYApZt802rylZHpe9
	S9hCf/pYee43u9nfCbLYZ6p8sPc8m+SxtP6tVlRHkkmiOT34Wr9nKQnxL/TZ2BJ01+Go=;
Message-ID: <cbb7a231-0f61-7170-3fc4-d4ae55398f3a@xen.org>
Date: Mon, 27 Jun 2022 11:35:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
To: Jiamei Xie <jiamei.xie@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Chen <wei.chen@arm.com>
References: <20220627025809.1985720-1-jiamei.xie@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220627025809.1985720-1-jiamei.xie@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jiamei,

Title: s/caclulations/calculations/

However, I think the title should mention the overflow rather than the 
extra calculations. The former is more important than the latter.

On 27/06/2022 03:58, Jiamei Xie wrote:
> virt_vtimer_save is calculating the new time for the vtimer in:
> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> - boot_count".
> In this formula, "cval + offset" might cause uint64_t overflow.
> Changing it to "v->domain->arch.virt_timer_base.offset - boot_count +
> v->arch.virt_timer.cval" can reduce the possibility of overflow

This reads strangely to me. We want to remove the overflow completely, 
not just reduce it. The overflow is removed entirely by converting 
"offset - boot_count" to ns upfront.

AFAICT, the commit message doesn't explain that.

> , and
> "arch.virt_timer_base.offset - boot_count" will be always the same,
> which has been caculated in domain_vtimer_init. Introduce a new field
> vtimer_offset.nanoseconds to store this value for arm in struct
> arch_domain, so we can use it directly and extra caclulations can be
> avoided.
> 
> This patch is enlightened from [1].
> 
> Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
> 
> [1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg123139.htm

This link doesn't work. But I would personally remove it from the commit 
message (or place it below a "---" marker) because it doesn't bring 
value (this patch looks like a v2 to me).

> ---
> xen/arch/arm/include/asm/domain.h | 4 ++++
>   xen/arch/arm/vtimer.c             | 6 ++++--
>   2 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index ed63c2b6f9..94fe5b6444 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -73,6 +73,10 @@ struct arch_domain
>           uint64_t offset;
>       } virt_timer_base;
>   
> +    struct {
> +        int64_t nanoseconds;

This should be s_time_t to match the argument of set_timer() and the 
return type of ticks_to_ns().

> +    } vtimer_offset;

Why are you adding a new structure rather than re-using virt_timer_base?

> +
>       struct vgic_dist vgic;
>   
>       struct vuart {
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 6b78fea77d..54161e5fea 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -64,6 +64,7 @@ int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
>   {
>       d->arch.virt_timer_base.offset = get_cycles();
>       d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
> +    d->arch.vtimer_offset.nanoseconds = d->time_offset.seconds;

Hmmm... I find it odd to assign a field "nanoseconds" to "seconds". I 
would suggest re-ordering so that you first set nanoseconds and then 
set seconds.

This will make it more obvious that this is not a mistake, and "seconds" 
will be closer to the do_div() below.

>       do_div(d->time_offset.seconds, 1000000000);
>   
>       config->clock_frequency = timer_dt_clock_frequency;
> @@ -144,8 +145,9 @@ void virt_timer_save(struct vcpu *v)
>       if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
>            !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
>       {
> -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
> -                  v->domain->arch.virt_timer_base.offset - boot_count));
> +        set_timer(&v->arch.virt_timer.timer,
> +                  v->domain->arch.vtimer_offset.nanoseconds +
> +                  ticks_to_ns(v->arch.virt_timer.cval));
>       }
>   }
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:40:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356451.584685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mAe-0007BB-I2; Mon, 27 Jun 2022 10:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356451.584685; Mon, 27 Jun 2022 10:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mAe-0007B4-Ed; Mon, 27 Jun 2022 10:40:44 +0000
Received: by outflank-mailman (input) for mailman id 356451;
 Mon, 27 Jun 2022 10:40:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5mAd-0007Ay-HD
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:40:43 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97bbad31-f605-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 12:40:42 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 07BD01FD86;
 Mon, 27 Jun 2022 10:40:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AB4DE13AB2;
 Mon, 27 Jun 2022 10:40:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EzVUKCmJuWItTAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 10:40:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97bbad31-f605-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656326442; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8MassX0m7CsOVuBuiF1VcGlLshhuv0mVzAeXp8jA6iI=;
	b=hCIptcD/dj6y+Ld2jpc1HXbqw3n5bbw/z8CasbNHOQUx7P11C6t8DsTW7MjFbD30fae+yU
	IH5VRbNWAUNfxAL62/a414JEFUFXwlM/fJOIh/3KBIXQd9OqhQyDeZJuAmndCRncmQ6isr
	I8hnsqWZsa0lFjx2D681DmVsY/a7ar4=
Message-ID: <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
Date: Mon, 27 Jun 2022 12:40:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <YrmGQj/W7KTzJ1vo@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------EuhcmWRLt50hY3I2ZO4IFZsx"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------EuhcmWRLt50hY3I2ZO4IFZsx
Content-Type: multipart/mixed; boundary="------------8IeYO9RI9Ugt00Cokb3WO5Hh";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
Message-ID: <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
In-Reply-To: <YrmGQj/W7KTzJ1vo@Air-de-Roger>

--------------8IeYO9RI9Ugt00Cokb3WO5Hh
Content-Type: multipart/mixed; boundary="------------16tBs4D94Yj7Q1NDYlibgcCH"

--------------16tBs4D94Yj7Q1NDYlibgcCH
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjcuMDYuMjIgMTI6MjgsIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6DQo+IE9uIFRodSwg
TWFyIDI0LCAyMDIyIGF0IDAzOjAxOjMxUE0gKzAxMDAsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6
DQo+PiBUaGUgZW50cnkgcG9pbnQgdXNlZCBmb3IgdGhlIHZjcHVfb3AgaHlwZXJjYWxsIG9u
IEFybSBpcyBkaWZmZXJlbnQNCj4+IGZyb20gdGhlIG9uZSBvbiB4ODYgdG9kYXksIGFzIHNv
bWUgb2YgdGhlIGNvbW1vbiBzdWItb3BzIGFyZSBub3QNCj4+IHN1cHBvcnRlZCBvbiBBcm0u
IFRoZSBBcm0gc3BlY2lmaWMgaGFuZGxlciBmaWx0ZXJzIG91dCB0aGUgbm90DQo+PiBzdXBw
b3J0ZWQgc3ViLW9wcyBhbmQgdGhlbiBjYWxscyB0aGUgY29tbW9uIGhhbmRsZXIuIFRoaXMg
bGVhZHMgdG8gdGhlDQo+PiB3ZWlyZCBjYWxsIGhpZXJhcmNoeToNCj4+DQo+PiAgICBkb19h
cm1fdmNwdV9vcCgpDQo+PiAgICAgIGRvX3ZjcHVfb3AoKQ0KPj4gICAgICAgIGFyY2hfZG9f
dmNwdV9vcCgpDQo+Pg0KPj4gQ2xlYW4gdGhpcyB1cCBieSByZW5hbWluZyBkb192Y3B1X29w
KCkgdG8gY29tbW9uX3ZjcHVfb3AoKSBhbmQNCj4+IGFyY2hfZG9fdmNwdV9vcCgpIGluIGVh
Y2ggYXJjaGl0ZWN0dXJlIHRvIGRvX3ZjcHVfb3AoKS4gVGhpcyB3YXkgb25lDQo+PiBvZiBh
Ym92ZSBjYWxscyBjYW4gYmUgYXZvaWRlZCB3aXRob3V0IHJlc3RyaWN0aW5nIGFueSBwb3Rl
bnRpYWwNCj4+IGZ1dHVyZSB1c2Ugb2YgY29tbW9uIHN1Yi1vcHMgZm9yIEFybS4NCj4gDQo+
IFdvdWxkbid0IGl0IGJlIG1vcmUgbmF0dXJhbCB0byBoYXZlIGRvX3ZjcHVfb3AoKSBjb250
YWluIHRoZSBjb21tb24NCj4gY29kZSAoQUZBSUNUIGhhbmRsZXJzIGZvcg0KPiBWQ1BVT1Bf
cmVnaXN0ZXJfe3ZjcHVfaW5mbyxydW5zdGF0ZV9tZW1vcnlfYXJlYX0pIGFuZCB0aGVuIGV2
ZXJ5dGhpbmcNCj4gZWxzZSBoYW5kbGVkIGJ5IHRoZSB4ODYgYXJjaF9kb192Y3B1X29wKCkg
aGFuZGxlcj8NCj4gDQo+IEkgZmluZCB0aGUgY29tbW9uIHByZWZpeCBtaXNsZWFkaW5nLCBh
cyBub3QgYWxsIHRoZSBWQ1BVT1AgaHlwZXJjYWxscw0KPiBhcmUgYXZhaWxhYmxlIHRvIGFs
bCB0aGUgYXJjaGl0ZWN0dXJlcy4NCg0KVGhpcyB3b3VsZCBlbmQgdXAgaW4gQXJtIHN1ZGRl
bmx5IHN1cHBvcnRpbmcgdGhlIHN1Yi1vcHMgaXQgZG9lc24ndA0KKHdhbnQgdG8pIHN1cHBv
cnQgdG9kYXkuIE90aGVyd2lzZSBpdCB3b3VsZCBtYWtlIG5vIHNlbnNlIHRoYXQgQXJtIGhh
cw0KYSBkZWRpY2F0ZWQgZW50cnkgZm9yIHRoaXMgaHlwZXJjYWxsLg0KDQpUaGUgImNvbW1v
biIganVzdCB3YW50cyB0byBleHByZXNzIHRoYXQgdGhlIGNvZGUgaXMgY29tbW9uLiBJJ20g
b3Blbg0KZm9yIGEgYmV0dGVyIHN1Z2dlc3Rpb24sIHRob3VnaC4gOi0pDQoNCg0KSnVlcmdl
bg0K
--------------16tBs4D94Yj7Q1NDYlibgcCH
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------16tBs4D94Yj7Q1NDYlibgcCH--

--------------8IeYO9RI9Ugt00Cokb3WO5Hh--

--------------EuhcmWRLt50hY3I2ZO4IFZsx
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5iSkFAwAAAAAACgkQsN6d1ii/Ey8f
zAf8Chm6si0OG68QQsYyLisso804agOB3dnlW9kT/3ahToFvv967fRJH8otYCx6DOTqd33z9jugA
TlWxs4JQrwYvJQfU8eCBV85mNii26mvAdW0PlR6+BFz0VWPKsxb1oEKrSNsrSDDiz1yuNOx06MEy
IiiWDVvt+x9fVxwq7FSRmPD81nySzuV1lN9ZjBMUTjHOkuK5YJ5+sSocU9I98cYub/D/sBfNiq01
ob/SDKJrpV2MCVB4DKrb8IywB4rb8suvckUy+Ar9J82rMF9o3BOdJl6SsQWbmB7bM1zh/ygdeWjI
l0JaX5/SHdKFPU0hevVXUhcUSW8gB5yumN63PxP8ag==
=USkM
-----END PGP SIGNATURE-----

--------------EuhcmWRLt50hY3I2ZO4IFZsx--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 10:54:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 10:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356457.584696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mNn-0000KN-NB; Mon, 27 Jun 2022 10:54:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356457.584696; Mon, 27 Jun 2022 10:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mNn-0000KG-KZ; Mon, 27 Jun 2022 10:54:19 +0000
Received: by outflank-mailman (input) for mailman id 356457;
 Mon, 27 Jun 2022 10:54:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5mNl-0000KA-Qf
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 10:54:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5mNl-00011Y-5L; Mon, 27 Jun 2022 10:54:17 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5mNk-00021x-UZ; Mon, 27 Jun 2022 10:54:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=JXXPMSZGzGaXFcv/ptd/B+vPPagSuN1AnCFkgsww1xY=; b=no2OhGIEY69SiLfjnCUdHLdaET
	98SOaU6e4pcCdZQO12ctOYC2sy18Y05lJW5HuOqwqzh+l5AbQZ2aK7vGvfIoN2ekqXauTJ2ovOEjp
	k+kJQCT7W+vXjtyG+Fy4A9iyneNUw6faGz3EhCn3xxiemCF+zZG3sgOSPgh5T2ZPBXzA=;
Message-ID: <724524cd-7cb1-4cb9-e636-7a5ea3d78a71@xen.org>
Date: Mon, 27 Jun 2022 11:54:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on populate_physmap
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-10-Penny.Zheng@arm.com>
 <ae94da35-40d5-f65c-1df5-3ebde3aa86a3@xen.org>
 <DU2PR08MB732533C0E687B1A27417E54AF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DU2PR08MB732533C0E687B1A27417E54AF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 27/06/2022 11:11, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, June 25, 2022 3:51 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Jan Beulich <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>;
>> Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH v7 9/9] xen: retrieve reserved pages on
>> populate_physmap
>>
>> Hi Penny,
>>
>> On 20/06/2022 03:44, Penny Zheng wrote:
>>> When a static domain populates memory through populate_physmap at
>>> runtime, it shall retrieve reserved pages from resv_page_list to make
>>> sure that guest RAM is still restricted to statically configured
>>> memory regions.
>>> This commit also introduces a new helper, acquire_reserved_page, to
>>> make it work.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>> v7 changes:
>>> - remove the lock, since we add the page to rsv_page_list after it has
>>> been totally freed.
>>
>> Hmmm... Adding the page after it has been totally freed doesn't mean you
>> can get away without the lock. AFAICT you can still have concurrent
>> free/allocate that could modify the list.
>>
>> Therefore, if you add/remove pages without holding the lock, you could
>> end up corrupting the list.
>>
>> If you disagree, then please point out which lock (or mechanism) will prevent
>> concurrent access.
>>
> 
> OK. Combined with the comments on the last series, you are suggesting
> that we need to add two locks, right?

We at least need the second lock (i.e. d->page_alloc_lock). The first 
lock may not be necessary if all the static memory is allocated for a 
domain; in that case you can guarantee ordering by adding to the 
resv_page_list.

Unless there is an ordering issue between the locks, I would for now 
suggest keeping both. We can refine this afterwards.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 11:02:50 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 11:02:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356463.584707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mVy-0001ru-IB; Mon, 27 Jun 2022 11:02:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356463.584707; Mon, 27 Jun 2022 11:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mVy-0001rn-En; Mon, 27 Jun 2022 11:02:46 +0000
Received: by outflank-mailman (input) for mailman id 356463;
 Mon, 27 Jun 2022 11:02:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CVBH=XC=citrix.com=prvs=1703a0240=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o5mVw-0001rh-Gi
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 11:02:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a9e0f87b-f608-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 13:02:43 +0200 (CEST)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Jun 2022 07:02:30 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA0PR03MB5404.namprd03.prod.outlook.com (2603:10b6:806:bb::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Mon, 27 Jun
 2022 11:02:28 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 11:02:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9e0f87b-f608-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656327762;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=VhK9pMaLksyDRQaBcn41x0Iuy1G3v/dtS0+GOyxxjTg=;
  b=FI0b6YDmhJlJOgc0POnBZNzxcN5zW6H395kOvMrStTdxXavk/Z3lJTVC
   O3qg/Rg6j1w/o5gdGVxBeqTsup07zVHuQiWQDPmMGvAtuIQwfuH5tJW+p
   yfBlfsFLEwTgeFDJnmwEf/Baq4OqTfUhaknog7tIi6KhF819lWbyhJ2os
   g=;
X-IronPort-RemoteIP: 104.47.55.104
X-IronPort-MID: 74323375
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:rAujdql41RCIsA4aLwxUXP7o5gx3J0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xIdXG6Ab/beZDamfI0nPo6ypEtTvpHczNNjHVRl/389QiMWpZLJC+rCIxarNUt+DCFioGGLT
 Sk6QoOdRCzhZiaE/n9BCpC48T8kk/vgqoPUUIYoAAgoLeNfYHpn2EgLd9IR2NYy24DnWVzV4
 LsenuWEULOb828sWo4rw/rrRCNH5JwebxtB4zTSzdgS1LPvvyF94KA3fMldHFOhKmVgJcaoR
 v6r8V2M1jixEyHBqD+Suu2TnkUiGtY+NOUV45Zcc/DKbhNq/kTe3kunXRa1hIg+ZzihxrhMJ
 NtxWZOYbwUJIrz8uOEncCZiDWY5Y6Js6rPVGC3q2SCT5xWun3rE5dxLVRlzGLJCv+F9DCdJ6
 OASLy0LYlabneWqzbmnS+5qwMM+MM3sO4BZsXZlpd3bJa9+HdafHOOXuJkBhGtYasNmRJ4yY
 +IDbjVidlLYagBnMVYLEpMu2uyvgxETdhUH9AnP/vFovgA/yiQy8fvXb+LNX+CIYuJnwmiTj
 U/i4UXmV0Ry2Nu3jGDtHmiXru3AhyTgQ6oJCaa1sPVthTW71mEVTREbS1a/if24kVKlHcJSL
 VQO/SgjprR081akJvHmRAGxqnOAuh8aWvJTHvc85QXLzbDbiy6bG2wFQzhpeNEg8sgsSlQC3
 FKTg8ngAzAptbSPUG+c7Z+dtzb0Mi8QRUc8YisDQRoA8sPUiog5hRLSTf5uCKewyNbyHFnYw
 TqHsSw/jLU7ltMQ2uOw+lWvqy2ojojESEgy/Aq/dmCq9Ap9YKasYoW67l6d5vFFRLt1VXGEt
 XkA3s2BtuYHCMjVkDTXGb1RWra0+/yCLTvQx0Z1GIUs/Cis/Hjlep1M5DZ5JwFiNcNslSLVX
 XI/cDh5vPd7VEZGp4ctC25tI6zGFZTdKOk=
IronPort-HdrOrdr: A9a23:iipN4668E3xiQj/rfwPXwOjXdLJyesId70hD6qkoc20xTiSZ//
 rCoB1p726RtN93YgBapTngAtj5fZqyz/9ICOUqVotKGTOW2ldAT7sSl7cKoQeBJ8SWzIc06U
 4jSdkcNDSaNzRHZLPBjjVQZOxO/DDoysqVbKzlvhBQpElRGsddBilCe3+mLnE=
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="74323375"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NYEwiaLTdfAia2CMLXQfOu+z3Op3uKYHtSOH6RuFveWLXbphS41+msVWJ+QXTQ7KP3dNQa2W5z7PBcy6ECcgATnQVJG02kUkCEDBTS3YgG+8YZK6Izl5HbyhOTJVE2H25a9f/fGCYtVJYPOdvEuPzOsXtwXlfm6T/iGMaL0hl/FDeQRBBUy4isOT5TQrYxUOE9JHi7s57jKQIKz1G3xZnDND6XJdm2thtim0zpSJ60rxbxFbCSOcvDW6B+nSa01fmEZERfMQnEcW8Qb15IxUAbN8zi4JJn6iX8RSgt1di8Sfuhlh7zTjRofZsSuIdGf5iBVR+MlsSLVBH+8Lk3jFCw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+YJvntpC1FV7apJyfockOE8dvjS/AQW83QBeu9E2bf0=;
 b=FGfoXn5sM7pYnmqK1qXK4kePiKMHxCR9NFJeinEO0dcooGSrr3sj1qqDxh+TheYqer1GhvXqdatIOVraPkpZ/kfiiFXtYQn3U8M8wQtjhBg7BBScRIA+PPw+4GQMH8dyxe1ylJVa1C0dMuO/RmxDkbCG+ZGaTvgzMjVFPtL76xmI/kbeqKiGI7g9fRMou3lq7bJaxamHnrLiQ/mONlQKsXMnGEdcm4Vncm/dpdgJcHrSuTSyV7h1t29pfhr4BIoaiNm2kG3ppnvcg/IfP/IZvmeqo5d547fUh+UlusxwyDsXTAimkeH7/JRE2WBLLpLnItBWVTyQ0d5A+c7Y4VzHsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+YJvntpC1FV7apJyfockOE8dvjS/AQW83QBeu9E2bf0=;
 b=IF4tVFu+KRDknYKrEMIABRQNfOSHAOUFp+whfPny63h101I5uOJpLey1CEQQPe8na9SLBVjDiN9js4Bih4TEAckPlV6v5gGaVCt6/qs019WLNRA38a58F3wtetRi4T+tlQpmwxjQbsWivBkt3b2OjdVD9bKxBhEbZvXk8bGn87I=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 27 Jun 2022 13:02:24 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Message-ID: <YrmOQPQlbAMwULWc@Air-de-Roger>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
 <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
X-ClientProxiedBy: LO4P123CA0404.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ffa06ed1-f095-47d6-e1f0-08da582c862e
X-MS-TrafficTypeDiagnostic: SA0PR03MB5404:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ozRw8ZryN5nDBmyt9aOB00b/EiP116SJTXB1cUmf6XUiA8RzRDPxmXJokKWtHsjz+pwkyIe74rPVX6sxigBZLHdsGjkOkSExTI+KrtsAIbmOmUat2ogYCDP62V6HRFHbaBMLAWj3TTehliDBroo4Yd/+Bs1Plihx6s9QrAaK0ai1zUZhopWzQPUZt9UgP6JWGhbccFU/DCpbpfRPF+bGMIVfnP98Uc9OpohQJiQyH6f80pFGP1r9AW6v8uMewfQZR2r6kAEA/Nmd6DUJBCKO2jUn/a9KD6IY/mslfhdtnLXbgywiqcG3TNyR9HpXWeBXNPNz7E/4E2NN5ycac9GgOYh63a3m+dh0bVWL0XYyZoNKyh8S51iKsfX43fV9XunCbVpNo9gZy3JfSPzbtLDyOr11oqpjZ9ktHTVT/npD6rnFslacelYF/nKVxiR01nkioCmpwdaAoLGfo/Mw3lO4fdxkV01HrLoCsJvsxgYO6NeTHMbTEJX3EuqrkxOs3MfjwY0WS4iwc1CSQF7BdizPMiPKHYmFsmaR10G384x1EB7i9p8Gwytg208PaQEsbEmjbHEu5YTr0zAoukhldZd5qP6JU6xBwCZR69cbsZzfXdBQFhcs+YrzK9gJL1/jc1h8CalgHHVoJj5TjVeFa8qNhD8DJOSQ0tHhzwTaRtzgRL6RKkEh0qI5eNB+jyoKwe3cI3UbrtkDuY2J9jsVXpCyIpaFB0xZiSlfjn10UNRvli/tsFfsONvVMI4OyQwCf19F
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(7916004)(136003)(376002)(346002)(396003)(366004)(39860400002)(4326008)(66946007)(8676002)(5660300002)(38100700002)(6486002)(66476007)(66556008)(85182001)(186003)(478600001)(8936002)(41300700001)(53546011)(6916009)(2906002)(86362001)(316002)(33716001)(26005)(6512007)(6506007)(9686003)(82960400001)(54906003)(6666004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MktKZDdNdVRNTWZ5eFIxRkx4NEFQYllESGlnODViNFA2Q1NXRVB1UU1nbWtZ?=
 =?utf-8?B?MjBIQzRwZ0xrY0gyZ3FZalRiVGR2UDUxaVQ1OXZEdFVvQmVVdk5Ka1ZrV1hS?=
 =?utf-8?B?bVFYcHZYVjFYUnlXZHVCT0tSS21XcWM3a2hnVGEzQURJd3dJVDYzOUo0eVVP?=
 =?utf-8?B?a0U2REF5YkpWTitrdnkrcitwZWFNMXFITVNuTWM3S1g4d2NwbElZdFM1Snl2?=
 =?utf-8?B?S0g5SkNvb29aNkk3cGkrTlRaOE9KaURMZ0Q5RUcyWjBVRkZZemxiUmRTdTEr?=
 =?utf-8?B?dW5GejRvVnlMd0MzdGxsZ1ZKZWNxdG4rRU1LMnBkUk9YakY4c1dpdXphTTVR?=
 =?utf-8?B?R1VZeXJaUXJiUXFpQ1VFRzFqYXpEdE1zMFlJR0QrVDIwVnd6ejdMZkNXVm1y?=
 =?utf-8?B?Ull6Yk9ZMTQ4Y3R5L3hPUHVBZXUwQW5mTW8vTnplalhOY0JQdEdSNUpyOFYw?=
 =?utf-8?B?K1EwbUxUZGc1QXMzTSs3SDREMkdveXowS1pkTjhJL1h1NmJJdUpvYjE1cVg3?=
 =?utf-8?B?NXpPVHA4ZVFYT3hmL1pUUFFMQTc1clIyZkQxL3ptYnhHc0RwQStBZWx4NkVn?=
 =?utf-8?B?dUFObU4yV1FlRHdhYXJlMFRlNG9hemdmV2tEdWh1S0t1c3d6WkZnb1BRbHNw?=
 =?utf-8?B?c3F0Rmc5OFZ2L3BsZlNpVVhnTEc1Y2lYWnUvcEVJcittNzZFYk5WRVE2T1VH?=
 =?utf-8?B?WGZ2VEZoTG5xY2EwbHZaQzB5alVOanpGc1B1Q2Z6M3hTczJPZXcxRjM0YWxM?=
 =?utf-8?B?L211TnJJMVlCWVY0WVdMbkpDQXJiSWdpT2JBUE4zcFQvTHNaUkMvbGd0VEcy?=
 =?utf-8?B?bHVWSThMbUFUcVdTWHhvZWxRU2NSUGNsekxxK2ZldXNyTW1BMHMvNmRFcUhG?=
 =?utf-8?B?cGcvRktBQTRxejkrUWIrRjIwWTlYUXc0YThWWnpWc21nMlg4VzBFVWpNbGhz?=
 =?utf-8?B?WEhmcE1vK2U2OXczQUM1ODRJTkVYOUhrcGV1R05tZmtUaG43a3dRb2daTzIv?=
 =?utf-8?B?UlVnYUZoNVNTTmc0Q2F2STc1bGpoaWl5MDMxd3VJVUtaZ1FkUjMzMHJudGRk?=
 =?utf-8?B?Yk9iSDhRTm16Uno3OUpUR3RpSDdOTlFUS3Qwbit6OW1aTEhlaDUzVk1welB1?=
 =?utf-8?B?MVRJT1NLZHVDSTYrdkZ3T21JcmpTYXdDOVZ5TEZvazFMdmR2NFVHVXNJTzZQ?=
 =?utf-8?B?M2xsR0x4UGRITk5FSHpRb3lNNjNTV056anF2cmtBa1l5UFlWd1c2R1NPNkh0?=
 =?utf-8?B?ZHptemRBN3JWYkcrcnFZYUdCVDdCWGdqNzVQZVVQM0NYTmNWeldKb3oxeXlL?=
 =?utf-8?B?WlhTK1Q1K1pxOFRjdHM2NG9XV0Y3cEtMbjZ4Yk1tR0JYQVZLcTU1Sm01YTY3?=
 =?utf-8?B?UTUvUkJ6QW82NTJUazhKSGswYXdXT2RHUDh5RjhyUGcyaE5MU0JlUlVTM1JG?=
 =?utf-8?B?QnNROXZpMWdTMGZQeEwrcnczRElZeHNDRmdMcFU1cVo2VVFCdVFoSVpUb3Bh?=
 =?utf-8?B?VGJQd0RXZHo5MmN4ODlUT1F5UzF6ZGRFd1cwQnhpMW1LOVBrbVVnMEpxdUFr?=
 =?utf-8?B?RmdNUlNrWnN6SlZmSGVYRmZoVk1FUDUvTnRodzBhd2t5UTBKU0wwTnJ3bkdz?=
 =?utf-8?B?Y1BNRkVyZUROMFJNVE9LZURyNkhac0ttUWg4bDd6WFRWYUVUd3pQYVk2VnBx?=
 =?utf-8?B?WnVGcTdzOEU1N0IwNEl1UWhDcXJLNWRPREdkWlRjOVJRQnNFUkdvdnpneEh4?=
 =?utf-8?B?ajNGdElZK05INzdydWFJbitSVjRYNnpSNlBDTTM3RDdBcDZYT0VzZXI0Z2Nj?=
 =?utf-8?B?VG1xcTlPNFF1M0pOSWZHZXlBM2FabElLblg0b3V5Ri96RmlqMGhnU1huWHRZ?=
 =?utf-8?B?K2x4aC9EY3VsRmY4SGRjTkVsSGdVSUpJc2NUN1BzNGxEbTA3ZGppeFl6YjE5?=
 =?utf-8?B?b2IvdFhGcDNDQTV0dUw3ZEEzZ1JMaU5ER2x4YWpVejZqNXNZUmRlYkhUbE01?=
 =?utf-8?B?MUZtMld6K1JIUTNYbG1PVEpxdS8zZEsySlNERkNxM1RCL2c1dWUxbDhQRVUz?=
 =?utf-8?B?d1NwYURFbG9MT21TTTE3NkdJOVVGalBOcjJ5dkJYU0ZUVnBvdzd6WjdmcDVw?=
 =?utf-8?Q?2BDPZZ3hmgbnjPR+xWuJ9otCJ?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ffa06ed1-f095-47d6-e1f0-08da582c862e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 11:02:28.1245
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OcTwthzGmVA1pYFuz85MCMqkSq2067GMIHu4e/S3nicPf/G+AWZG4aj5dR6p46sXsdONzf7XudjUITSbz0pChw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5404

On Mon, Jun 27, 2022 at 12:40:41PM +0200, Juergen Gross wrote:
> On 27.06.22 12:28, Roger Pau Monné wrote:
> > On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
> > > The entry point used for the vcpu_op hypercall on Arm is different
> > > from the one on x86 today, as some of the common sub-ops are not
> > > supported on Arm. The Arm specific handler filters out the not
> > > supported sub-ops and then calls the common handler. This leads to the
> > > weird call hierarchy:
> > > 
> > >    do_arm_vcpu_op()
> > >      do_vcpu_op()
> > >        arch_do_vcpu_op()
> > > 
> > > Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
> > > arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
> > > of above calls can be avoided without restricting any potential
> > > future use of common sub-ops for Arm.
> > 
> > Wouldn't it be more natural to have do_vcpu_op() contain the common
> > code (AFAICT handlers for
> > VCPUOP_register_{vcpu_info,runstate_memory_area}) and then everything
> > else handled by the x86 arch_do_vcpu_op() handler?
> > 
> > I find the common prefix misleading, as not all the VCPUOP hypercalls
> > are available to all the architectures.
> 
> This would end up with Arm suddenly supporting the sub-ops it doesn't
> (want to) support today. Otherwise it would make no sense for Arm to
> have a dedicated entry for this hypercall.

My preference would be for a common do_vcpu_op() that just contains
handlers for VCPUOP_register_{vcpu_info,runstate_memory_area} and then
an empty arch_ handler for Arm, and everything else moved to the x86
arch_ handler.  That obviously implies some code churn, but results in
a cleaner implementation IMO.

It also has the nice benefit of removing unreachable code from the Arm
build; MISRA C has a rule against unreachable code.
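The split being proposed could look roughly like this. This is a toy sketch, not the actual Xen interface: the sub-op numbers and return values are illustrative, the real functions take a vcpu and a guest handle, and here the arch handler models the Arm side, which would reject everything not handled in common code.

```c
#include <assert.h>
#include <errno.h>

/* Illustrative sub-op numbers (real ones live in Xen's public/vcpu.h). */
#define VCPUOP_register_runstate_memory_area 5
#define VCPUOP_set_singleshot_timer          9  /* x86-only in this sketch */
#define VCPUOP_register_vcpu_info            10

/* Arch hook: empty on Arm (everything unsupported), while the x86
 * version would handle the remaining sub-ops. */
static long arch_do_vcpu_op(int cmd)
{
    (void)cmd;
    return -ENOSYS;
}

/* Common do_vcpu_op() keeps only the sub-ops shared by all ports and
 * punts everything else to the arch hook. */
static long do_vcpu_op(int cmd)
{
    switch ( cmd )
    {
    case VCPUOP_register_vcpu_info:
    case VCPUOP_register_runstate_memory_area:
        return 0;  /* pretend the common handling succeeded */

    default:
        return arch_do_vcpu_op(cmd);
    }
}
```

With this shape there is a single entry point per architecture, no do_arm_vcpu_op() wrapper, and the x86-only cases never get compiled into an Arm build.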

> The "common" just wants to express that the code is common. I'm open
> for a better suggestion, though. :-)

Right, it lives in common/ anyway, so there isn't a much better name.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 11:08:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 11:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356470.584718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mbH-0002ZI-AS; Mon, 27 Jun 2022 11:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356470.584718; Mon, 27 Jun 2022 11:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5mbH-0002ZB-7f; Mon, 27 Jun 2022 11:08:15 +0000
Received: by outflank-mailman (input) for mailman id 356470;
 Mon, 27 Jun 2022 11:08:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5mbG-0002Z5-4f
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 11:08:14 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f8f34fd-f609-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 13:08:13 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 96FBF1F8F7;
 Mon, 27 Jun 2022 11:08:12 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 45A4113AB2;
 Mon, 27 Jun 2022 11:08:12 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id egO9D5yPuWKIWQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 11:08:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f8f34fd-f609-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656328092; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=t65R6Hz/gmeFj2B3stg0eE558saEADtFx1n/YiXzQ6Q=;
	b=nQn5WlWJWns5S0K3BEqPTVA9y9RE4r0cbLNsefZlOsTtpqHMcEMHcJNrW/cePdLzaATPSZ
	R9xOBjEvt5dO1JP8wHzYu58v3iCQhJuHAXdAEiBixgT2PtfdT8SqJ3pNyTHJf5XZFKkXXT
	vXURB6FQjPbFwilnlQcZIN3gucFpb4M=
Message-ID: <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
Date: Mon, 27 Jun 2022 13:08:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
 <YrmOQPQlbAMwULWc@Air-de-Roger>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <YrmOQPQlbAMwULWc@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------pI1MzcG9tFV3Cgp2dtp1BQ3h"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------pI1MzcG9tFV3Cgp2dtp1BQ3h
Content-Type: multipart/mixed; boundary="------------lgnQJJYqFo7P3vXdBMNZ3E4v";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
Message-ID: <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
 <YrmOQPQlbAMwULWc@Air-de-Roger>
In-Reply-To: <YrmOQPQlbAMwULWc@Air-de-Roger>

--------------lgnQJJYqFo7P3vXdBMNZ3E4v
Content-Type: multipart/mixed; boundary="------------NEGoqHbNdO6rVtX0zHb0HpuR"

--------------NEGoqHbNdO6rVtX0zHb0HpuR
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 13:02, Roger Pau Monné wrote:
> On Mon, Jun 27, 2022 at 12:40:41PM +0200, Juergen Gross wrote:
>> On 27.06.22 12:28, Roger Pau Monné wrote:
>>> On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
>>>> The entry point used for the vcpu_op hypercall on Arm is different
>>>> from the one on x86 today, as some of the common sub-ops are not
>>>> supported on Arm. The Arm specific handler filters out the not
>>>> supported sub-ops and then calls the common handler. This leads to the
>>>> weird call hierarchy:
>>>>
>>>>     do_arm_vcpu_op()
>>>>       do_vcpu_op()
>>>>         arch_do_vcpu_op()
>>>>
>>>> Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
>>>> arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
>>>> of above calls can be avoided without restricting any potential
>>>> future use of common sub-ops for Arm.
>>>
>>> Wouldn't it be more natural to have do_vcpu_op() contain the common
>>> code (AFAICT handlers for
>>> VCPUOP_register_{vcpu_info,runstate_memory_area}) and then everything
>>> else handled by the x86 arch_do_vcpu_op() handler?
>>>
>>> I find the common prefix misleading, as not all the VCPUOP hypercalls
>>> are available to all the architectures.
>>
>> This would end up in Arm suddenly supporting the sub-ops it doesn't
>> (want to) support today. Otherwise it would make no sense that Arm has
>> a dedicated entry for this hypercall.
> 
> My preference would be for a common do_vcpu_op() that just contains
> handlers for VCPUOP_register_{vcpu_info,runstate_memory_area} and then
> an empty arch_ handler for Arm, and everything else moved to the x86
> arch_ handler.  That obviously implies some code churn, but results in
> a cleaner implementation IMO.

I'd be fine with that.

I did it in V2 of the (then secret) series, and Jan replied:

   I'm afraid I don't agree with this movement. I could see things getting
   moved that are purely PV (on the assumption that no new PV ports will
   appear), but anything that's also usable by PVH / HVM ought to be usable
   in principle also by Arm or other PV-free ports.


Juergen
--------------NEGoqHbNdO6rVtX0zHb0HpuR
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------NEGoqHbNdO6rVtX0zHb0HpuR--

--------------lgnQJJYqFo7P3vXdBMNZ3E4v--

--------------pI1MzcG9tFV3Cgp2dtp1BQ3h
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5j5sFAwAAAAAACgkQsN6d1ii/Ey8y
/QgAjhokKmFRVhuxRxM88r1cnD0kDqv5jKi29N+NS4sM6AeddH+HcA8fioTO+5yA0gR+K6kc8iob
cZ29UA8qXE5S+oM7VqwPOz76SrjSGF3qaMGyV2rCHp2LIwwS+KMDWgAOFv6rEDVmFF8+B/NlsLlH
mVetTfxxVhn1iTNefHFZMVzuUBpV6DBdJScCOBVLwpXkNFvdvFf77Hh0jvEz+wLBLUjAfS/4IIBE
JQ8EzngxbibcsKauo5UqmSpOOzCjuZMhR1dgqm1+so9X5TMnlZrv7i9dRqnFPSyXKctG00H5qCYY
totM+gQEStl8DCgB438JlX+2ZaoFSm4nMs1cta+8mg==
=V7r8
-----END PGP SIGNATURE-----

--------------pI1MzcG9tFV3Cgp2dtp1BQ3h--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 11:36:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 11:36:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356502.584747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5n2Q-0006vs-0r; Mon, 27 Jun 2022 11:36:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356502.584747; Mon, 27 Jun 2022 11:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5n2P-0006vl-Tr; Mon, 27 Jun 2022 11:36:17 +0000
Received: by outflank-mailman (input) for mailman id 356502;
 Mon, 27 Jun 2022 11:36:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CVBH=XC=citrix.com=prvs=1703a0240=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o5n2N-0006s5-T7
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 11:36:16 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 589502bc-f60d-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 13:36:14 +0200 (CEST)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Jun 2022 07:36:07 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by PH7PR03MB6864.namprd03.prod.outlook.com (2603:10b6:510:155::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Mon, 27 Jun
 2022 11:36:03 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 11:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 589502bc-f60d-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656329773;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=sLnEsSn9N6tQX12uZfjf7qDq+SwWi58qpFw6JowZJ4I=;
  b=OvNNg3EPaw/KDqluYJ/xIPg+X6DDM/9XB8RGZQ4vSJdwUhqOw2q3jg1s
   WseOODdRS+4znvWX9XoZKLiHaQODgWxZoW0UrP1bYNPG+ufz4ZnrN/+Wb
   IiMjh5CnwOl0P664eQE3/vXCbVPkZNg7dqgTnQ9je9h84gXvm33urgg7X
   8=;
X-IronPort-RemoteIP: 104.47.55.176
X-IronPort-MID: 77053822
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="77053822"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gCv09sCFd8/5jgLpNBQ8O4XfExKn3R+6e8vSvf1C4G67eEidnw32iEG0F6BV4yLwNPFk9a5zdoPm/WKPnEH6PYznJiJ4X8EgrtevsbyCR7dNpnm7N1FDBLjcNFoOIJ1ex+y7FFcBJzCSPpEIqs5R48yBGzOfSGp/tIe44Yiz75gjxqp4AjpLBlQ78cj7nKxV/OL7OCIRIUgh3lnv47s5OSf9XTfXQxLDcIWdnKhVjzTr5t6FmfHZrelSfUkWQcCLvwZHeVkpdLbsOJgX3nSSzxUfM4yM26YAJcgxAlTC0eehfnvYhvukq07hOrhI5u86cMd9QDCpjF1ycPbSOnzEuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5j/HxpLkNcop0QKCe8dQZoWa0FXAWdea4btaSz7mTzQ=;
 b=J4vygAGUTlqfGoYj25Y7O69b6jijbkGMU9NzqZo6004kCKQ46CJwn1ApEf0KZ14bFJVTe1j45pq78h9mCF/dRSTQ0Pypz3AUa5LH6AKKYxGBpDT0lw3pCeAPJglJpV9gvTjkW7b1XLMzpvzMxhG9Q8ql6CXMmkMg39KozTgiMSo+gl587dCT8718dTGm//1jwWdOm+9a6KUM4zb36UU/fgXSQHDLo3GRO6tHs/1PeIGGPO1qdXUVJS7bsZBr1xeFjaUt8ZKyAlHoCenrlcc0jHPa9GOw0Mj8aKDAVLF+IeH2ZWO0/o4r7CDx5U0XZ+NndfQ9uNpehnHfd+ToowsVIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5j/HxpLkNcop0QKCe8dQZoWa0FXAWdea4btaSz7mTzQ=;
 b=A1Zz580kJO+5lotFeExeze49/AyjN0Vz+iRnRviOZ4l0GqD+fzYSgJXQlW5jeXHSkmL50IeR1HvHfhtiAhmwVK1TKwSg/eLVwvmMNIqaYXnXhh855M09LOuVe+jTO0Kcg1hcCZRUsj+WKMooxUn/AGTo+uImKz05tdhsBi18qK0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 27 Jun 2022 13:35:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Message-ID: <YrmWHrqwJOnb0iPr@Air-de-Roger>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
 <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
 <YrmOQPQlbAMwULWc@Air-de-Roger>
 <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
X-ClientProxiedBy: LO2P265CA0301.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::25) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b3a2ecd6-8763-4180-4073-08da58313776
X-MS-TrafficTypeDiagnostic: PH7PR03MB6864:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b3a2ecd6-8763-4180-4073-08da58313776
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 11:36:03.5685
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DUwcQKDP9acln9jMo0S2EH+Zqdfs44+IyhaLS+EdP5l0vm7apc6q7htlctzszlcNDZTG5FuKfVID20e94o2zew==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB6864

On Mon, Jun 27, 2022 at 01:08:11PM +0200, Juergen Gross wrote:
> On 27.06.22 13:02, Roger Pau Monné wrote:
> > On Mon, Jun 27, 2022 at 12:40:41PM +0200, Juergen Gross wrote:
> > > On 27.06.22 12:28, Roger Pau Monné wrote:
> > > > On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
> > > > > The entry point used for the vcpu_op hypercall on Arm is different
> > > > > from the one on x86 today, as some of the common sub-ops are not
> > > > > supported on Arm. The Arm specific handler filters out the not
> > > > > supported sub-ops and then calls the common handler. This leads to the
> > > > > weird call hierarchy:
> > > > > 
> > > > >     do_arm_vcpu_op()
> > > > >       do_vcpu_op()
> > > > >         arch_do_vcpu_op()
> > > > > 
> > > > > Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
> > > > > arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
> > > > > of above calls can be avoided without restricting any potential
> > > > > future use of common sub-ops for Arm.
> > > > 
> > > > Wouldn't it be more natural to have do_vcpu_op() contain the common
> > > > code (AFAICT handlers for
> > > > VCPUOP_register_{vcpu_info,runstate_memory_area}) and then everything
> > > > else handled by the x86 arch_do_vcpu_op() handler?
> > > > 
> > > > I find the common prefix misleading, as not all the VCPUOP hypercalls
> > > > are available to all the architectures.
> > > 
> > > This would end up in Arm suddenly supporting the sub-ops it doesn't
> > > (want to) support today. Otherwise it would make no sense that Arm has
> > > a dedicated entry for this hypercall.
> > 
> > My preference would be for a common do_vcpu_op() that just contains
> > handlers for VCPUOP_register_{vcpu_info,runstate_memory_area} and then
> > an empty arch_ handler for Arm, and everything else moved to the x86
> > arch_ handler.  That obviously implies some code churn, but results in
> > a cleaner implementation IMO.
> 
> I'd be fine with that.
> 
> I did it in V2 of the (then secret) series, and Jan replied:
> 
>   I'm afraid I don't agree with this movement. I could see things getting
>   moved that are purely PV (on the assumption that no new PV ports will
>   appear), but anything that's also usable by PVH / HVM ought to be usable
>   in principle also by Arm or other PV-free ports.

I see. My opinion is that when other ports need those functions they
should be pulled out of arch code into common code; until then it just
adds confusion to have them in common code.

I will take a look at the current patch, as we need to make progress
on this.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 11:47:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 11:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356521.584757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nDc-0000CN-17; Mon, 27 Jun 2022 11:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356521.584757; Mon, 27 Jun 2022 11:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nDb-0000CG-Uq; Mon, 27 Jun 2022 11:47:51 +0000
Received: by outflank-mailman (input) for mailman id 356521;
 Mon, 27 Jun 2022 11:47:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5nDZ-0000Br-Vp
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 11:47:50 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f788e1fd-f60e-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 13:47:48 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1988D1FAAD;
 Mon, 27 Jun 2022 11:47:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B505313456;
 Mon, 27 Jun 2022 11:47:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id jRvbKeOYuWK1bAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 11:47:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f788e1fd-f60e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656330468; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jyMJzqUXB10Iev/WE2Ko90Y6Sb+4305PiP8EJBltV1w=;
	b=keIDqK39Pp+8qE6v3vXzsdoPKGq1/oi9N+Lin+CQoN5n2YSjZlQmnqlRToOxJM6wlpnLWZ
	UqrIyPyKVnaPMpISzzK3BRRveWmpVOkPdbkinuwzG/rhUdqcpkfk7cJ27K6eIofPmmyBua
	YPUpkhnhc9oTv5Lm8nG9qse23qxlMXQ=
Message-ID: <7d15d5fe-809f-bd19-4170-68796dd53405@suse.com>
Date: Mon, 27 Jun 2022 13:47:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
 <YrmOQPQlbAMwULWc@Air-de-Roger>
 <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
 <YrmWHrqwJOnb0iPr@Air-de-Roger>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <YrmWHrqwJOnb0iPr@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------CPNxsF0B0C5ouXrFtOX0GkNv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------CPNxsF0B0C5ouXrFtOX0GkNv
Content-Type: multipart/mixed; boundary="------------kuC10A84i6WBSr7405wQgVlb";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
Message-ID: <7d15d5fe-809f-bd19-4170-68796dd53405@suse.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com> <YrmGQj/W7KTzJ1vo@Air-de-Roger>
 <8951e03f-ba63-4524-99e9-c030e273c1d1@suse.com>
 <YrmOQPQlbAMwULWc@Air-de-Roger>
 <e0b54db4-af1a-fb54-e6e5-ef0b71194091@suse.com>
 <YrmWHrqwJOnb0iPr@Air-de-Roger>
In-Reply-To: <YrmWHrqwJOnb0iPr@Air-de-Roger>

--------------kuC10A84i6WBSr7405wQgVlb
Content-Type: multipart/mixed; boundary="------------Nt5OFbIeLT3OKxn0cOc8zqpa"

--------------Nt5OFbIeLT3OKxn0cOc8zqpa
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 13:35, Roger Pau Monné wrote:
> On Mon, Jun 27, 2022 at 01:08:11PM +0200, Juergen Gross wrote:
>> On 27.06.22 13:02, Roger Pau Monné wrote:
>>> On Mon, Jun 27, 2022 at 12:40:41PM +0200, Juergen Gross wrote:
>>>> On 27.06.22 12:28, Roger Pau Monné wrote:
>>>>> On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
>>>>>> The entry point used for the vcpu_op hypercall on Arm is different
>>>>>> from the one on x86 today, as some of the common sub-ops are not
>>>>>> supported on Arm. The Arm specific handler filters out the not
>>>>>> supported sub-ops and then calls the common handler. This leads to the
>>>>>> weird call hierarchy:
>>>>>>
>>>>>>      do_arm_vcpu_op()
>>>>>>        do_vcpu_op()
>>>>>>          arch_do_vcpu_op()
>>>>>>
>>>>>> Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
>>>>>> arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
>>>>>> of above calls can be avoided without restricting any potential
>>>>>> future use of common sub-ops for Arm.
>>>>>
>>>>> Wouldn't it be more natural to have do_vcpu_op() contain the common
>>>>> code (AFAICT handlers for
>>>>> VCPUOP_register_{vcpu_info,runstate_memory_area}) and then everything
>>>>> else handled by the x86 arch_do_vcpu_op() handler?
>>>>>
>>>>> I find the common prefix misleading, as not all the VCPUOP hypercalls
>>>>> are available to all the architectures.
>>>>
>>>> This would end up in Arm suddenly supporting the sub-ops it doesn't
>>>> (want to) support today. Otherwise it would make no sense that Arm has
>>>> a dedicated entry for this hypercall.
>>>
>>> My preference would be for a common do_vcpu_op() that just contains
>>> handlers for VCPUOP_register_{vcpu_info,runstate_memory_area} and then
>>> an empty arch_ handler for Arm, and everything else moved to the x86
>>> arch_ handler.  That obviously implies some code churn, but results in
>>> a cleaner implementation IMO.
>>
>> I'd be fine with that.
>>
>> I did it in V2 of the (then secret) series, and Jan replied:
>>
>>    I'm afraid I don't agree with this movement. I could see things getting
>>    moved that are purely PV (on the assumption that no new PV ports will
>>    appear), but anything that's also usable by PVH / HVM ought to be usable
>>    in principle also by Arm or other PV-free ports.
>
> I see. My opinion is that when other ports need those functions they
> should be pulled out of arch code into common code; until then it just
> adds confusion to have them in common code.
>
> I will take a look at the current patch, as we need to make progress
> on this.

Thanks for that.


Juergen
--------------Nt5OFbIeLT3OKxn0cOc8zqpa
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------Nt5OFbIeLT3OKxn0cOc8zqpa--

--------------kuC10A84i6WBSr7405wQgVlb--

--------------CPNxsF0B0C5ouXrFtOX0GkNv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5mOMFAwAAAAAACgkQsN6d1ii/Ey9b
Wwf/d/eEW7qVGEy0zbxVEXDdgWku74xEhNFzf2kphz0gAtNb1qqFFH93ZH6kH2v0k3R1zE5t/Lzw
CBenQ2dwpW9pHoy0SrHcIgaGi7im3prTFCAMjMbvNZvsjRcjWCDICgnSpcb93woyQSAmQDpkFZ+i
FJM5O/oIVlxdeEeoaJXC7AqnZZyJLxvz8jrP6HCw1t6ZDiEiRYpwhkecAyqCPXjONhiJR8MALcWn
qySZZ4mj7TyF3vXcI+9trNh2IRMJLitIcZutMgvrtJ8PAb6RfzM58YbTeKc8eK+w8aeVTMz4Jhze
o5ubwSy6BXD+4LmkwrGPyOcJBm488MdlrdFyjCYe5g==
=YTKe
-----END PGP SIGNATURE-----

--------------CPNxsF0B0C5ouXrFtOX0GkNv--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 12:37:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 12:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356571.584808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nzb-0007PN-2U; Mon, 27 Jun 2022 12:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356571.584808; Mon, 27 Jun 2022 12:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nza-0007PG-Vs; Mon, 27 Jun 2022 12:37:26 +0000
Received: by outflank-mailman (input) for mailman id 356571;
 Mon, 27 Jun 2022 12:37:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5nzZ-0006ub-Cy
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 12:37:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzZ-0002wr-AH; Mon, 27 Jun 2022 12:37:25 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzZ-0007Ql-1x; Mon, 27 Jun 2022 12:37:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=XVUKUMHdJRQ1Ho4SEl/UvW5XaMba3R7x8iHBIEdv7MQ=; b=cBxr+PMKesl+cgeA7NVj/ZjhnM
	yMapaeUAmT1aPLWjPzz/FQXUImz5I+zHRw/LQnFdfmLIQsABQAf9gQ5L4S2e8LvrkVZl2QbpI9xy+
	e6nJcdbauKwR4Rio2piVqIpHA0j2R5kFcwrE1m1+eKvmusYp9P1SFwH9VFvCJsvDia10=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Julien Grall <jgrall@amazon.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 2/2] public/io: xs_wire: Allow Xenstore to report EPERM
Date: Mon, 27 Jun 2022 13:36:35 +0100
Message-Id: <20220627123635.3416-3-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220627123635.3416-1-julien@xen.org>
References: <20220627123635.3416-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

C Xenstored uses EPERM when the client is not allowed to change
the owner (see GET_PERMS). However, the Xenstore protocol doesn't
describe EPERM, so EINVAL will be sent to the client instead.

When writing tests, it would be useful to differentiate between EINVAL
(e.g. a parsing error) and EPERM (i.e. no permission). So extend
xsd_errors[] to support returning EPERM.

Looking at the previous time xsd_errors was extended (8b2c441a1b), it
was considered safe to add a new error because at least the Linux
driver and libxenstore treat an unknown error code as EINVAL.

This statement doesn't cover other possible OSes; however, I am not
aware of any breakage.
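For illustration, the client-side fallback behaviour relied on above can be sketched as follows. This is a minimal standalone sketch, not the actual libxenstore or Linux driver code; the table shape and XSD_ERROR macro follow xs_wire.h, and xs_error_from_string() is a hypothetical helper name:

```c
#include <errno.h>
#include <string.h>

/* Mirrors the xsd_errors table from xs_wire.h (abridged). */
struct xsd_errors { int errnum; const char *errstring; };
#define XSD_ERROR(x) { x, #x }
static struct xsd_errors xsd_errors[] = {
    XSD_ERROR(EINVAL),
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
    XSD_ERROR(EPERM),
};

/*
 * Map an error string received on the wire back to an errno value.
 * Unknown strings fall back to EINVAL, which is why adding a new
 * entry such as EPERM to the table is safe for existing clients.
 */
static int xs_error_from_string(const char *s)
{
    for (size_t i = 0; i < sizeof(xsd_errors) / sizeof(xsd_errors[0]); i++)
        if ( !strcmp(xsd_errors[i].errstring, s) )
            return xsd_errors[i].errnum;
    return EINVAL;
}
```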

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

Changes in v2:
    - Define EPERM at the end of xsd_errors
---
 xen/include/public/io/xs_wire.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
index dd4c9c9b972d..211770911d9b 100644
--- a/xen/include/public/io/xs_wire.h
+++ b/xen/include/public/io/xs_wire.h
@@ -91,7 +91,8 @@ __attribute__((unused))
     XSD_ERROR(EBUSY),
     XSD_ERROR(EAGAIN),
     XSD_ERROR(EISCONN),
-    XSD_ERROR(E2BIG)
+    XSD_ERROR(E2BIG),
+    XSD_ERROR(EPERM),
 };
 #endif
 
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 12:37:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 12:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356570.584794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nzZ-0006yI-SR; Mon, 27 Jun 2022 12:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356570.584794; Mon, 27 Jun 2022 12:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nzZ-0006xi-Lf; Mon, 27 Jun 2022 12:37:25 +0000
Received: by outflank-mailman (input) for mailman id 356570;
 Mon, 27 Jun 2022 12:37:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5nzY-0006uV-JV
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 12:37:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzY-0002wl-Gr; Mon, 27 Jun 2022 12:37:24 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzY-0007Ql-8U; Mon, 27 Jun 2022 12:37:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=w7gSoXXb34865U21Be1Dz8VUMwYroXFPxp1CGLJXARQ=; b=tEEu3IRfqsl4LeTJgPhUjaaFHy
	57AeI0XGFwjlSBXt8yplYdBLiTcvQt0ogJYRrH84tPd+Z2re7iCsbEpQ7zDwjGM/64MS5sy7PcKgj
	tgSiaZ2uA723Xz2JuutsAm5Br4ztvi9Xt8Ie495kpfqmPtQAYc8FnmeoS8N/QkPQHYow=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Julien Grall <jgrall@amazon.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should always be first in xsd_errors
Date: Mon, 27 Jun 2022 13:36:34 +0100
Message-Id: <20220627123635.3416-2-julien@xen.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220627123635.3416-1-julien@xen.org>
References: <20220627123635.3416-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Some tools (e.g. xenstored) always expect EINVAL to be the first entry
in xsd_errors.

Document this so that one doesn't add a new entry before it by mistake.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

I have tried to add a BUILD_BUG_ON(), but GCC complained that the
value was not a constant expression. I couldn't figure out a way to
make GCC happy.
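As a hedged illustration (not part of the patch, and check_xsd_errors() is a hypothetical name): since an element of a statically initialized array is not a constant expression in C, a compile-time BUILD_BUG_ON() on xsd_errors[0].errnum won't build, but the invariant can still be asserted once at startup:

```c
#include <assert.h>
#include <errno.h>

struct xsd_errors { int errnum; const char *errstring; };
#define XSD_ERROR(x) { x, #x }
static struct xsd_errors xsd_errors[] = {
    XSD_ERROR(EINVAL),   /* must stay first: see the comment in xs_wire.h */
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
};

/* Runtime fallback for the invariant that EINVAL is the first entry. */
static void check_xsd_errors(void)
{
    assert(xsd_errors[0].errnum == EINVAL);
}
```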

Changes in v2:
    - New patch
---
 xen/include/public/io/xs_wire.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
index c1ec7c73e3b1..dd4c9c9b972d 100644
--- a/xen/include/public/io/xs_wire.h
+++ b/xen/include/public/io/xs_wire.h
@@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
 __attribute__((unused))
 #endif
     = {
+    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first entry. */
     XSD_ERROR(EINVAL),
     XSD_ERROR(EACCES),
     XSD_ERROR(EEXIST),
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 12:37:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 12:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356569.584787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nzZ-0006un-HE; Mon, 27 Jun 2022 12:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356569.584787; Mon, 27 Jun 2022 12:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5nzZ-0006uc-De; Mon, 27 Jun 2022 12:37:25 +0000
Received: by outflank-mailman (input) for mailman id 356569;
 Mon, 27 Jun 2022 12:37:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5nzX-0006uP-Rg
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 12:37:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzX-0002we-No; Mon, 27 Jun 2022 12:37:23 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5nzX-0007Ql-F0; Mon, 27 Jun 2022 12:37:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=9QjI8fP+CzasV/5aMjimyJtteFHRu8dJ9BCH5fbuL/k=; b=ZUmqEN
	VmtuNFlm/lmokNSv8MHN1oUbyhopMdjahJ91fI1tD6vp1WNfw+Z1KJNiawrrVZQRhQTF1qo3wNpdR
	IucQ2NeROw4DP2zJBuHFRQXDyqsEPGMCLQgJqplabERpfwcFrLzHLFBHfwaxBykx/Q2BlaSmNeper
	CqWgbQiI+lI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Julien Grall <jgrall@amazon.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 0/2] public/io: xs_wire: Allow Xenstore to report EPERM
Date: Mon, 27 Jun 2022 13:36:33 +0100
Message-Id: <20220627123635.3416-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

This small patch series allows Xenstore to report EPERM (see patch #2).
Patch #1 was introduced to avoid someone else making the mistake of
introducing a new error before EINVAL.

Cheers,

Julien Grall (2):
  public/io: xs_wire: Document that EINVAL should always be first in
    xsd_errors
  public/io: xs_wire: Allow Xenstore to report EPERM

 xen/include/public/io/xs_wire.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:01:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:01:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356587.584820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5oMZ-00033O-Vl; Mon, 27 Jun 2022 13:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356587.584820; Mon, 27 Jun 2022 13:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5oMZ-00033H-RR; Mon, 27 Jun 2022 13:01:11 +0000
Received: by outflank-mailman (input) for mailman id 356587;
 Mon, 27 Jun 2022 13:01:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CVBH=XC=citrix.com=prvs=1703a0240=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o5oMY-00032Q-K4
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:01:10 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 310ad472-f619-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:01:01 +0200 (CEST)
Received: from mail-bn8nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Jun 2022 09:00:50 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CO1PR03MB5842.namprd03.prod.outlook.com (2603:10b6:303:91::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Mon, 27 Jun
 2022 13:00:48 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 13:00:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 310ad472-f619-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656334861;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=OduBG/jg/vB7bQWMij9E7bTxnIdPn0377v2JGq02Xec=;
  b=OdpVgRZg2tOIht8tcYitoa+r7DXAg8qxevhl0kZ9b8I5PUHaTjUDBwrq
   YXj/lObfFdD3vHY8JRXqKRCsh/OF8OhE+VJ/8TTAGmCMTGmuy1FM/i64a
   fw3B3eJmh4gpzdgcGZI+BaMjVXcersCs8ltjf0pu9wUT06UpkZSD79lmC
   A=;
X-IronPort-RemoteIP: 104.47.55.170
X-IronPort-MID: 73826371
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:56jXQK7FOVV3yiyKg0+VVAxRtM7GchMFZxGqfqrLsTDasY5as4F+v
 jcdCD/QO/+NMGukLYhyYdi1/RlV78KGnd83QAtp/isxHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG96yE6j8lkf5KkYAL+EnkZqTRMFWFw03qPp8Zj2tQy2YbjXFvU0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurSaaV80P5bBtN43DQsfGQJPHfVt8ubudC3XXcy7lyUqclPK6tA3VgQcG91d/ex6R2ZT6
 fYfNTYBKAiZgP67y666Te8qgdk/KM7sP8UUvXQIITPxVK56B8ycBfiXo4YHhF/chegXdRraT
 9AeZjd1KgzJfjVEO0sNCYJ4l+Ct7pX6W2ID+AjL/vVui4TV5E8pgZ71CtvuRuGHZ8dlkky+u
 2380U2sV3n2M/Tak1Jp6EmEhODVmjjgcJkPD7D+/flv6HWDy2pWBBAIWF+TpfiillX4S99ZM
 1YT+Cclse417kPDZsLmQxSyrXqAvxgdc9ldCes37EeK0KW8ywSEAmkJSBZRZdpgs9U5LRQg2
 0WVhdrvCXpquaeMVHOG3r6OqHW5Pi19BVEFYSgIXA4U+e7JqYs4jg/MZtt7GavzhdrwcRnyy
 T2XqCk1h50IkNUGka68+DjvnDaEtpXPCAkv6W3/XG2/5wd9TIegbp6v7x7Q6vMoEWqCZlyIv
 XxBkc7O6ukLVMuJjHbUH71LG6y17fGYNjGamURoA5Qq6zWq/TikYJxU5zZ9YkxuN67oZAPUX
 aMagisJjLc7AZdgRfYfj16ZYyjy8ZXdKA==
IronPort-HdrOrdr: A9a23:ioafdqMehqIyR8BcT1r155DYdb4zR+YMi2TDiHoddfUFSKalfp
 6V98jztSWatN/eYgBEpTmlAtj5fZq6z+8P3WBxB8baYOCCggeVxe5ZjbcKrweQeBEWs9Qtr5
 uIEJIOd+EYb2IK6voSiTPQe7hA/DDEytHPuQ639QYQcegAUdAF0+4WMHf4LqUgLzM2eKbRWa
 DskPZvln6FQzA6f867Dn4KU6zqoMDKrovvZVojCwQ84AeDoDu04PqieiLolis2Yndq+/MP4G
 LFmwv26uGKtOy68AbV0yv2445NkNXs59NfDIini9QTKB/rlgG0Db4REoGqjXQQmqWC+VwqmN
 7Dr1MJONly0WrYeiWPrR7ky2DboUMTwk6n7WXdrWrooMT/Sj5/IdFGn5hlfhzQ7FdllM1g0Y
 pQtljp+6Z/PFflpmDQ9tLIXxZlmg6funw5i9MeiHRZTM83dKJRl4oC50lYea1wUR4S0LpXXt
 WGMfuspcq/KTihHjDkVyhUsZaRt00Ib1i7qhNogL3X79BU9EoJvXfwivZv3Evoz6hNOqWs19
 60TJiAq4s+PvP+TZgNcNvpEvHHfVDlcFbrDF+4B2jBOeUuB0/twqSHk4ndotvaM6A18A==
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="73826371"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aUueZEcdHtEtLc7bk1ae9yNJCbW2BmFGQc0nJA0dE7s/OkOlP3wCx/P+CP7AQK6tK9ZtlS2R2l7vUVSlcLPKb/HJmmkW+XWVq/dz4kfVIvG4bZBBaa/OJpSfjXJAFPJSGhZfU+dGixhCop6gKSmRs/g0JeXbf9R9KWYrdzltufbD/eOn9E965Dh720F+YG39luOSWqST7nBJmIiI49oB0OChOxxde+tesR1PMA0o5dRfZ1G/DEoRgR7EOtls8VjjLpbuv1HB971QlKB9HdwwTjpTBUoGo/Nw/mjVrcxK2nQgs9D+eEd2jckCGYc/mV3RNMOPJ85hxbNnJdUbP4wIPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nSXkb4yNA8FmX5diu9PcRef5S45fTEAnQSpEMy9qVRo=;
 b=dGWx8TjAVr6gUYEl1lg9RdB7wjYnZv642KR408HLuOcmo2mvwqECZl6pVK/H7dZbH3k2cKb7u0/qCxiT9gdOZ4o2WI/a9jxYorfwX3YcvHEYCAMxrRga1PuQHVNz10jo1Gy3YUJytNW71HEUJQ2KrRuKw4TEahcmjXTGde90idx/mVteDKvzRozwz+Zas2kVvPBvyKMmDS4wHhPxD4wKZoA7lVEJI3bJNwJCFMGpJIWo1VaX4NIu92IWaqCmx9w2J1vTJP4Xl7YNs5e+BkOqg+zsPMKsUN718LYwDe5fzKVmwBpyOjHRWcwPhTU+AWxJNAgANOz7fotNogQ5n+FGVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nSXkb4yNA8FmX5diu9PcRef5S45fTEAnQSpEMy9qVRo=;
 b=d8zrW6/BZgXqBRT86t0d+g3UZxiQwXZ9CU/ahrib3gdDZxVnmmlUfaYOfv2xkEhjuxSSvafqjLU/WPr2qyB6Go3Dnrdax+jbwLqKw3dEzB3pvfb0Q9aPoHs1Tc4pSOU2Td0wBiyT6SvJbwxuZUpvi08nO7tHl7AN6OlsSL1TFX4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 27 Jun 2022 15:00:43 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v6 1/9] xen: move do_vcpu_op() to arch specific code
Message-ID: <Yrmp+74f8aFhl2V5@Air-de-Roger>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220324140139.5899-2-jgross@suse.com>
X-ClientProxiedBy: LO4P123CA0542.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::7) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6004aef2-0728-4529-d9d2-08da583d0e74
X-MS-TrafficTypeDiagnostic: CO1PR03MB5842:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YeBAGOcBwQ4r/I2YShh9tTgD1K+cNNpEnjfAqTKnhNpSPxXmxH3PxvD819VyeUXAv0yUwvyjhI+LMg2UnEUKwOROeJVEjSQA9OuKwS7rw2ELdliWnyS2owqC36x6t9RgPdMvT0/yAeEVOV2HyEc7bivBiViBo/mdRBByTMjbgS5mk/1HWSWgCAN7TDPy0j0ybqvx3ME3z1cEW8zSkdOBBe42kR+ZM5LiWOV82XKFAVNHpQVFkMgQGIpNcbyDMUrhHVxs5GcMs6RJunAKJZ7u3NPh/XQZeTRiXTl2ixiC+uky9TzqymTrv5KaSC+UvD2amNauUJQLuHkijSwEEzSVjZEGK5aX4IwrxGJM3ugB6h0K0s+O5HPbslcQWF2iqmvUrnenHOJZ/vebbwYTDr38No2p1HwCaXSEinScPA5YT6wceDa1cMD/7f6aAQ19jG7nVDiW07Ly9UwS5LWyPoX03dSl7n7jM5lLj8PHIOTZQpLH2cOCstTUkuJNsQfbBgtBt8ibxPfx72RAqJVc8nwIRhzOHmWJdpuD2RiMwbdS79im54QlMDsePxzV0Gq5KF4n3hlvaioRUbc8rdKCtNXylmDgqTI07NTy91g0rHu2fUj7SzCZoTA4gRjjqQSwT985M3NEZ1QQ9u9tUTiBJ6GxjYoT3t5VJ1dfv+nQyIHwhPIVhlFnW94vQbLIL8y5PSiJa8H4yWbrIazu4l+R3XfrYL9Y98evdCzlHGuIHkqYcQ+jvizRHF0FqWDpG1/ds54XM41R3zK8EQTkJOwKZZz4gOaVgS+zxUiUxca7qzmFQVM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(7916004)(4636009)(39860400002)(396003)(346002)(376002)(136003)(366004)(6916009)(86362001)(26005)(83380400001)(9686003)(38100700002)(6512007)(54906003)(4326008)(85182001)(41300700001)(316002)(2906002)(186003)(6506007)(478600001)(82960400001)(66946007)(6666004)(8676002)(5660300002)(6486002)(8936002)(66556008)(66476007)(33716001)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?S2lpdG4veU40a09hLzcrTi9iNkE2a3BNWTNRKzZ6UndnODhKcmgvemI1N2lG?=
 =?utf-8?B?V3loemNQM3JpTHY3aXRMZHpmZ21xWHpleHd5dmNzaVUrQlVDd3psYXoyL2xB?=
 =?utf-8?B?aFpuZS93dWtqQVpPNFg5enRiODNKWU5qOVYwYXNrZHNDdVlra1JjK2xUNEVw?=
 =?utf-8?B?YU9WUkZGQVY0dXRuc1RQTTFiMTdJcnhCaDZSamVSSzQ3NU9UQ0dKOTRTYUhK?=
 =?utf-8?B?Tk5tMG81VDdidXp3N214OHRhcHNHWVc2bFlqbHd5dzZVL2JaenZSamkwaWNJ?=
 =?utf-8?B?VXVhOFR2TnczSHFqY0FQVUVkSUQ2YWd1MEVLTXpoL25DSmRLaGt5OVoyTDNq?=
 =?utf-8?B?Yng4NGNha05VcHhZSm9vK3BTMXphb0hzTWdIMlNsWjRXUitoWkdRR251UnRa?=
 =?utf-8?B?dkxrVTJWRDFHc0hlRmJCV2tzWmt2MStKM2dqVmlOVy9nU0ZLcnlBVFVNdk9V?=
 =?utf-8?B?TWF6bUZXVVgxUnRjYXRCRHhGL0NFYWh2WTdobksvMnFFNS9lMjYvQWc1S0Ni?=
 =?utf-8?B?SVdPU2NTNGo4VUFwa2ZPSnArSG9ob1lPOFdmck1DL0xEVDZSWWNSRE1RYlJh?=
 =?utf-8?B?NzdlM1RKMlNkMjdTc1NYbWNpb3kvT0JDeDZOazg2ZzlnOHRWNWZGWVpJMEN0?=
 =?utf-8?B?bXYvcHV5Y1ZHRG9HdW1xMGR5cG9kNytMZTlyT0o4RXpsVENxZG5MdVp0a2F3?=
 =?utf-8?B?bGM3eXV4L2hPR2Nhemp2d0pYSDRsR2FNNU9aSkZKMThYSEF5RStRbStIZjVy?=
 =?utf-8?B?NnJpbFdXNlR2cnBNVzAyWnFsd0w2REI3NGpxK1dGMGVHa2dSWTQ4M1NEK204?=
 =?utf-8?B?NlcxMWc4cHpFRDk5bWtmNFpLd21LUGdWQjY0djNTbHF1K3ExUThTREFENm1r?=
 =?utf-8?B?c3J6clEyb3I3WjN2SXRCNDhuSXVBVTJkUWFtT0xxNkNWM3Zhd0ZYckRFVnhE?=
 =?utf-8?B?dXFldm41RGRnV2JaWHh2czJSdzBVc1h4TzltNlROenpXY016TEUrOTFBckdF?=
 =?utf-8?B?Z05zemoreExzSW1GQkpCa2diQUppVmY1Q0NTcG9WdUZ0RlZuVVEramFEUmtH?=
 =?utf-8?B?b1BjemZMVExQQk9qcjBiYzJtTGYzbDdoLzBLZVJSVTBVU1M1dWtENVpKMnBT?=
 =?utf-8?B?dzh4MHdsOTV5QVNhRHhjaVVSMGRVaXNIbGlYZFJwd2lJQjM4Tld0aFEvWCtz?=
 =?utf-8?B?dldwckNoNDQ1aDNEUTdqMVJrNFFQbkhSOTl2ZlY2MVg4bVl1VW5RYnpBSEo0?=
 =?utf-8?B?akNIL0ZXOEFMZjJHYmhvTUw1NXVKNUY0T1ZhWWRlbm1CZDUvQytYRnlITlRv?=
 =?utf-8?B?djcrUXZ6U2s5SjVZYVQ4Ti9WQndmUktaYnB1dURqYTRCUi9QUzBIa0NuM0tr?=
 =?utf-8?B?ejlpTVB3Q3BnVDVXd2VxTE0rTUJRSVlMYUxLZ0FuaXJEWHVtelhUVjdYUGdv?=
 =?utf-8?B?S2RyMjdLK0NXM3ZvVGt6MmpRdFU4VVNGdUdTd01kUjgzVzlOR2kyelRPUDRP?=
 =?utf-8?B?MGxtb1Fvb1pIdVAvYWJsb2s1bTV4NnVIS2JZUitwTHprcmlUa1RDQThub1Br?=
 =?utf-8?B?bXNOSTJhaGhxbmx1VTZwbm9qSGFmTGhxVjZIOEV3UEVKRWhuVmZHWnQ5TGxk?=
 =?utf-8?B?NTFKQjJRaU1uQ1RybjBTa3JJM0JCTEM3K2h6bzZscExkei9HZ1FzYWtLdGFj?=
 =?utf-8?B?K0NwZXRsdWc0SzNaYkpaNTVMZnVib1R6VlRSOEVWUDVLWmFodERNZTE0Y2JU?=
 =?utf-8?B?YUFkWGtWUDRhakkrdGxsL1NESWF4cHJ3bDdRNWhkUnEyZVRUdVBtam4vekt2?=
 =?utf-8?B?cGFvSUIyT2NpT05yQjZhQ2FHNmZaa2JKMGJVZmlxeDlxRFdXNXF6VWQ3UkVP?=
 =?utf-8?B?S2pZcXd3WkdYc0NOeHljdlNYd1lzWVN5ZWFjV1BBNjh1aXFud3lHcGNRaWty?=
 =?utf-8?B?SFBDMnNyV1BkNWxDd0VLMTlDMEFadE1SUW96bndIYy95MzMxL0U1YkIxcUlN?=
 =?utf-8?B?aTFwRzdWVVlCTW9yelhCMlR5dnJOdlowOXlCam5wQ2FGS2pNZ1hSMHUzdi9K?=
 =?utf-8?B?WThhanZRS05TdlNBY1RCbU9LWHYrNnhCemZmQ1M3R2lYcjcwSURlNjFVQk5w?=
 =?utf-8?Q?2BfPoEUarZ66uCGVmiHJuMfO4?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6004aef2-0728-4529-d9d2-08da583d0e74
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 13:00:48.6384
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xyjmj3DHV0hjm0f25IIkOQW59oCkVCSjFI3T8aXqc49Lts3k2ZyxQ7XrLi+hOdWQfDyw+N4mkelwSxAttv5baw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5842

On Thu, Mar 24, 2022 at 03:01:31PM +0100, Juergen Gross wrote:
> The entry point used for the vcpu_op hypercall on Arm is different
> from the one on x86 today, as some of the common sub-ops are not
> supported on Arm. The Arm specific handler filters out the not
> supported sub-ops and then calls the common handler. This leads to the
> weird call hierarchy:
> 
>   do_arm_vcpu_op()
>     do_vcpu_op()
>       arch_do_vcpu_op()
> 
> Clean this up by renaming do_vcpu_op() to common_vcpu_op() and
> arch_do_vcpu_op() in each architecture to do_vcpu_op(). This way one
> of above calls can be avoided without restricting any potential
> future use of common sub-ops for Arm.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

>From an x86-only PoV I prefer the previous arrangement (do_vcpu_op() ->
arch_do_vcpu_op()), but this approach seems to be better for Arm, so
that's a reasonable argument for changing it.

My preference would have been to move the handling of hypercalls not
used by Arm into arch_do_vcpu_op() for x86, but that's also not widely
liked.
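The dispatch shape the patch ends up with can be sketched roughly as below. This is a toy model, not the Xen code: the sub-op numbers and the single int argument are invented for illustration, and real handlers take a vcpuid and a guest handle:

```c
#include <errno.h>

/* Hypothetical sub-op numbers, for illustration only. */
enum { VCPUOP_UP = 0, VCPUOP_ARCH_ONLY = 100 };

/* Handler for the sub-ops shared by all architectures. */
static long common_vcpu_op(int cmd)
{
    switch ( cmd )
    {
    case VCPUOP_UP:
        return 0;
    default:
        return -ENOSYS;
    }
}

/*
 * Per-arch entry point: handle (or filter) arch-specific sub-ops,
 * then fall through to the common handler. With the rename there is
 * no extra do_arm_vcpu_op() wrapper above this.
 */
static long do_vcpu_op(int cmd)
{
    switch ( cmd )
    {
    case VCPUOP_ARCH_ONLY:
        return 0;
    default:
        return common_vcpu_op(cmd);
    }
}
```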

> ---
> V4:
> - don't remove HYPERCALL_ARM()
> V4.1:
> - add missing cf_check (Andrew Cooper)
> V5:
> - use v instead of current (Julien Grall)
> ---
>  xen/arch/arm/domain.c                | 15 ++++++++-------
>  xen/arch/arm/include/asm/hypercall.h |  2 --
>  xen/arch/arm/traps.c                 |  2 +-
>  xen/arch/x86/domain.c                | 12 ++++++++----
>  xen/arch/x86/include/asm/hypercall.h |  2 +-
>  xen/arch/x86/x86_64/domain.c         | 18 +++++++++++++-----
>  xen/common/compat/domain.c           | 15 ++++++---------
>  xen/common/domain.c                  | 12 ++++--------
>  xen/include/xen/hypercall.h          |  2 +-
>  9 files changed, 42 insertions(+), 38 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 8110c1df86..2f8eaab7b5 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -1079,23 +1079,24 @@ void arch_dump_domain_info(struct domain *d)
>  }
>  
>  
> -long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
> +long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> +    struct domain *d = current->domain;
> +    struct vcpu *v;
> +
> +    if ( (v = domain_vcpu(d, vcpuid)) == NULL )
> +        return -ENOENT;

My preference (here and in the x86 code) would be to do the
initialization at definition, and then just check v here, i.e.:

struct vcpu *v = domain_vcpu(d, vcpuid);

if ( !v )
    return -ENOENT;

But that's just my taste.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:09:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356593.584831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5oUl-0003o0-Oh; Mon, 27 Jun 2022 13:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356593.584831; Mon, 27 Jun 2022 13:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5oUl-0003nt-LJ; Mon, 27 Jun 2022 13:09:39 +0000
Received: by outflank-mailman (input) for mailman id 356593;
 Mon, 27 Jun 2022 13:09:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQHX=XC=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o5oUj-0003nl-UA
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:09:38 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80080.outbound.protection.outlook.com [40.107.8.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64e07739-f61a-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:09:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB6057.eurprd04.prod.outlook.com (2603:10a6:10:cd::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Mon, 27 Jun
 2022 13:09:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 13:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64e07739-f61a-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y/eiX3N9aMqNivC5xHCemPFSORhzL5KUZQHdk7e3KrGu0QVeCI6t+u9PC63HlO6XoAiTuQPsioAcYZQdIMDQleme66wtWYkel0X5R91rhOup/w7m1OxMgYl/BSiDO8hcWMSa9vCvVvXyyWVHMSQv5KFWleZi9ZejgPzAmIVZSWK3/ufRDTs3wnzoF1nnp4rnGy4L7MjamHm7bFmGCcD4lBU9OzFt+BhsDFJWNs7CZ1sZC2KsKcnc2sVKywh/zsWVShpBKJr8h2yU1VcrtOaTEJgW8kSsOXHXPXAz1d3Bo0E39jv/IL1G045Qa4Ov2FHQzLu/YOO7UqaRzO6iw8I/HA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NFH4uodbb91hO2SaVLJxX3+sc8ovgRyMkIoKp+qXrYc=;
 b=cNLp+Djuwg2HSeVsqi9pp3YgSggdmwfckr/Vlrs9BelTRb6hCGxr8fgrv0EPk9xv0uPzAg+2MaUMwtG19EqkKHRVBV0yZYZNGj216Ma399HwcW2jtVhpFAHmDqCh8H3pWQhne3eXJAuCXtKM+tD1IJGhb7FthJt6MCYWprl/E3Dr/JISQC0F5exxr+ENawswAiaS3qP+YoOrZBTOr/9dq6Rn/MA9m5pXiPPLwieusUD/LYugiIrJCuLdZO8NQOE8xmSS9gGRUHix+58hbpe4exK55MMqL00DpltyGoyqvIJVFjHvJNap2TT7Wkkl+X/zSk2o6U8F0so7eWgvZO3CTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NFH4uodbb91hO2SaVLJxX3+sc8ovgRyMkIoKp+qXrYc=;
 b=ZYZpkKOsKE/Q1hXSh0HuZ/KkhvOjavbmhnnZoxvx/Gb+sNgIwU2risGTl74Sdqa3rcCmqMP779FUrzjIE9T5tqq4VsJjpuA1Z5hPrZe7g6XM62NHKDPOfIIFAAjl4xWGFQbYkNIyoY/F0A9xX1+xSOJrcRZ48nsWj+WGIji+E7qG9M/w1STTbv8ZFIcfDWGsexFufK63iJhPaiUdaQZ2JzJnOao4E0+CwadSc/q2NH2J6+EEMRa1ZDBHzIOl3dIDEVUUID1ESjOcIW0me9dU5J76RkqIOGaW9AMlucE071+Y2IcBM/HCoq0uuTQHIVQodS6FRlcp6dU4Mo4xVuqZUg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4239996a-c764-1d6c-dd36-81db5cd99c5f@suse.com>
Date: Mon, 27 Jun 2022 15:09:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM7PR03CA0002.eurprd03.prod.outlook.com
 (2603:10a6:20b:130::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 681e67a9-3f9a-4353-788e-08da583e4761
X-MS-TrafficTypeDiagnostic: DBBPR04MB6057:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 681e67a9-3f9a-4353-788e-08da583e4761
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2022 13:09:33.6851
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8/JbYEBMEqQ37d93e4ckqfNIfNY+PW9UF9nD0YM2ZrGOoreVYDvKluJrQVub0CqkeTZmgibMWkzpXCvMubL+wg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB6057

On 27.06.2022 12:03, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Wednesday, June 22, 2022 5:24 PM
>>
>> Furthermore careful with the local variable name used here. Consider what
>> would happen with an invocation of
>>
>>     put_static_pages(d, page, i);
>>
>> The common approach is to suffix an underscore to the variable name.
>> Such names are not supposed to be used outside of macro definitions, and
>> hence there's then no potential for such a conflict.
>>
> 
> Understood!! I will change "unsigned int i" to "unsigned int _i";

Note how I said "suffix", not "prefix".
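[Editorial note: the capture problem described above can be sketched with a toy macro. The `sum_first()` macro and `demo()` helper below are hypothetical illustrations, not Xen code; only the underscore-suffix convention comes from the discussion.]

```c
#include <assert.h>

/* Hypothetical macro with an internal counter.  Suffixing the counter
 * with an underscore ("i_") keeps it from capturing a caller-supplied
 * argument that happens to be named "i". */
#define sum_first(nr, out) do {            \
    unsigned int i_;                       \
    for ( i_ = 0; i_ < (nr); i_++ )        \
        (out) += i_;                       \
} while ( 0 )

/* Had the counter been named plain "i", the call sum_first(i, total)
 * would expand to "for ( i = 0; i < (i); i++ )": the macro-local "i"
 * shadows the caller's, and the loop condition reads the wrong
 * variable.  The suffixed name avoids that capture entirely. */
unsigned int demo(unsigned int i)
{
    unsigned int total = 0;

    sum_first(i, total);   /* safe even though the argument is named i */
    return total;
}
```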

>> Finally I think you mean (1u << (order)) to be on the safe side against UB if
>> order could ever reach 31. Then again - is "order" as a parameter needed
>> here in the first place? Wasn't it that staticmem operations are limited to
>> order-0 regions?
> 
> Yes, right now the actual usage is limited to order-0. How about I add an
> assertion here and remove the order parameter:
> 
>         /* Add page on the resv_page_list *after* it has been freed. */
>         if ( unlikely(pg->count_info & PGC_static) )
>         {
>             ASSERT(!order);
>             put_static_pages(d, pg);
>         }

I don't mind an ASSERT() as long as upper layers indeed guarantee this.
What I'm worried about is that you might assert on user-controlled input.

Jan
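[Editorial note: the undefined-behaviour concern raised above can be illustrated with a minimal sketch. The `nr_pages()` helper is our hypothetical name for illustration, not a Xen function; the point about the literal is from the discussion.]

```c
#include <assert.h>

/* With a plain int literal, "1 << 31" overflows a 32-bit signed int,
 * which is undefined behaviour in C.  Writing the literal as "1u"
 * makes the shift well defined for every order up to 31. */
unsigned int nr_pages(unsigned int order)
{
    return 1u << order;   /* well defined even for order == 31 */
}
```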


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356601.584852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob5-0005Y9-TZ; Mon, 27 Jun 2022 13:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356601.584852; Mon, 27 Jun 2022 13:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob5-0005Y2-OV; Mon, 27 Jun 2022 13:16:11 +0000
Received: by outflank-mailman (input) for mailman id 356601;
 Mon, 27 Jun 2022 13:16:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5ob4-0005HM-DF
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4c4db998-f61b-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 15:16:04 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8B224175A;
 Mon, 27 Jun 2022 06:16:08 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5F6F13F5A1;
 Mon, 27 Jun 2022 06:16:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c4db998-f61b-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/7] xen/arm: Use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:37 +0200
Message-Id: <20220627131543.410971-2-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is done purely for style and consistency reasons, as the former
is used more often than the latter.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/arch/arm/domain_build.c             |  2 +-
 xen/arch/arm/guestcopy.c                | 13 +++++++------
 xen/arch/arm/include/asm/arm32/bitops.h |  8 ++++----
 xen/arch/arm/include/asm/fixmap.h       |  4 ++--
 xen/arch/arm/include/asm/guest_access.h |  8 ++++----
 xen/arch/arm/include/asm/mm.h           |  2 +-
 xen/arch/arm/irq.c                      |  2 +-
 xen/arch/arm/kernel.c                   |  2 +-
 xen/arch/arm/mm.c                       |  4 ++--
 9 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7ddd16c26d..3fd1186b53 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1007,7 +1007,7 @@ static void __init set_interrupt(gic_interrupt_t interrupt,
  */
 static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
                                           gic_interrupt_t *intr,
-                                          unsigned num_irq)
+                                          unsigned int num_irq)
 {
     int res;
 
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 32681606d8..abb6236e27 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -56,7 +56,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                 copy_info_t info, unsigned int flags)
 {
     /* XXX needs to handle faults */
-    unsigned offset = addr & ~PAGE_MASK;
+    unsigned int offset = addr & ~PAGE_MASK;
 
     BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
     BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
@@ -64,7 +64,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
     while ( len )
     {
         void *p;
-        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        unsigned int size = min(len, (unsigned int)PAGE_SIZE - offset);
         struct page_info *page;
 
         page = translate_get_page(info, addr, flags & COPY_linear,
@@ -106,26 +106,27 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
     return 0;
 }
 
-unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len,
                       GVA_INFO(current), COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
-                                             unsigned len)
+                                             unsigned int len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
                       COPY_to_guest | COPY_flush_dcache | COPY_linear);
 }
 
-unsigned long raw_clear_guest(void *to, unsigned len)
+unsigned long raw_clear_guest(void *to, unsigned int len)
 {
     return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
                       COPY_to_guest | COPY_linear);
 }
 
-unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
+unsigned long raw_copy_from_guest(void *to, const void __user *from,
+                                  unsigned int len)
 {
     return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
                       COPY_from_guest | COPY_linear);
diff --git a/xen/arch/arm/include/asm/arm32/bitops.h b/xen/arch/arm/include/asm/arm32/bitops.h
index 57938a5874..d0309d47c1 100644
--- a/xen/arch/arm/include/asm/arm32/bitops.h
+++ b/xen/arch/arm/include/asm/arm32/bitops.h
@@ -6,17 +6,17 @@
 /*
  * Little endian assembly bitops.  nr = 0 -> byte 0 bit 0.
  */
-extern int _find_first_zero_bit_le(const void * p, unsigned size);
+extern int _find_first_zero_bit_le(const void * p, unsigned int size);
 extern int _find_next_zero_bit_le(const void * p, int size, int offset);
-extern int _find_first_bit_le(const unsigned long *p, unsigned size);
+extern int _find_first_bit_le(const unsigned long *p, unsigned int size);
 extern int _find_next_bit_le(const unsigned long *p, int size, int offset);
 
 /*
  * Big endian assembly bitops.  nr = 0 -> byte 3 bit 0.
  */
-extern int _find_first_zero_bit_be(const void * p, unsigned size);
+extern int _find_first_zero_bit_be(const void * p, unsigned int size);
 extern int _find_next_zero_bit_be(const void * p, int size, int offset);
-extern int _find_first_bit_be(const unsigned long *p, unsigned size);
+extern int _find_first_bit_be(const unsigned long *p, unsigned int size);
 extern int _find_next_bit_be(const unsigned long *p, int size, int offset);
 
 #ifndef __ARMEB__
diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
index 365a2385a0..d0c9a52c8c 100644
--- a/xen/arch/arm/include/asm/fixmap.h
+++ b/xen/arch/arm/include/asm/fixmap.h
@@ -30,9 +30,9 @@
 extern lpae_t xen_fixmap[XEN_PT_LPAE_ENTRIES];
 
 /* Map a page in a fixmap entry */
-extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
+extern void set_fixmap(unsigned int map, mfn_t mfn, unsigned int attributes);
 /* Remove a mapping from a fixmap entry */
-extern void clear_fixmap(unsigned map);
+extern void clear_fixmap(unsigned int map);
 
 #define fix_to_virt(slot) ((void *)FIXMAP_ADDR(slot))
 
diff --git a/xen/arch/arm/include/asm/guest_access.h b/xen/arch/arm/include/asm/guest_access.h
index 53766386d3..4421e43611 100644
--- a/xen/arch/arm/include/asm/guest_access.h
+++ b/xen/arch/arm/include/asm/guest_access.h
@@ -4,11 +4,11 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 
-unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len);
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
-                                             unsigned len);
-unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
-unsigned long raw_clear_guest(void *to, unsigned len);
+                                             unsigned int len);
+unsigned long raw_copy_from_guest(void *to, const void *from, unsigned int len);
+unsigned long raw_clear_guest(void *to, unsigned int len);
 
 /* Copy data to guest physical address, then clean the region. */
 unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 045a8ba4bb..c4bc3cd1e5 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -192,7 +192,7 @@ extern void setup_xenheap_mappings(unsigned long base_mfn, unsigned long nr_mfns
 /* Map a frame table to cover physical addresses ps through pe */
 extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 /* map a physical range in virtual memory */
-void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned attributes);
+void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int attributes);
 
 static inline void __iomem *ioremap_nocache(paddr_t start, size_t len)
 {
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 5268c01434..fd0c15fffd 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -631,7 +631,7 @@ void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
     BUG();
 }
 
-static bool irq_validate_new_type(unsigned int curr, unsigned new)
+static bool irq_validate_new_type(unsigned int curr, unsigned int new)
 {
     return (curr == IRQ_TYPE_INVALID || curr == new );
 }
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 25ded1c056..2556a45c38 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -256,7 +256,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
     char *output, *input;
     char magic[2];
     int rc;
-    unsigned kernel_order_out;
+    unsigned int kernel_order_out;
     paddr_t output_size;
     struct page_info *pages;
     mfn_t mfn;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be37176a47..009b8cd9ef 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -352,7 +352,7 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
 }
 
 /* Map a 4k page in a fixmap entry */
-void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
+void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags)
 {
     int res;
 
@@ -361,7 +361,7 @@ void set_fixmap(unsigned map, mfn_t mfn, unsigned int flags)
 }
 
 /* Remove a mapping from a fixmap entry */
-void clear_fixmap(unsigned map)
+void clear_fixmap(unsigned int map)
 {
     int res;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:14 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356600.584842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob4-0005HZ-Jg; Mon, 27 Jun 2022 13:16:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356600.584842; Mon, 27 Jun 2022 13:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob4-0005HS-FY; Mon, 27 Jun 2022 13:16:10 +0000
Received: by outflank-mailman (input) for mailman id 356600;
 Mon, 27 Jun 2022 13:16:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5ob3-0005HM-Gf
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4adb00ea-f61b-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 15:16:02 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 238DD1758;
 Mon, 27 Jun 2022 06:16:06 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 905533F5A1;
 Mon, 27 Jun 2022 06:16:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4adb00ea-f61b-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: [PATCH 0/7] xen/arm: use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:36 +0200
Message-Id: <20220627131543.410971-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is done purely for cosmetic/style reasons (as we tend to use unsigned int
more often than plain unsigned) and in order not to waste the changes made as
part of the [1] series, whose justification was invalid (this is not fixing
MISRA rule 8.1).

Most of these patches have already been reviewed/acked, but because the
commit messages/titles have changed, the tags have been dropped.

[1] https://lore.kernel.org/all/e6c10adc-27a8-2f31-7d84-6aee916c56bf@suse.com/t/


Michal Orzel (7):
  xen/arm: Use unsigned int instead of plain unsigned
  xen/domain: Use unsigned int instead of plain unsigned
  xen/common: Use unsigned int instead of plain unsigned
  include/xen: Use unsigned int instead of plain unsigned
  include/public: Use uint32_t instead of unsigned (int)
  xsm/flask: Use unsigned int instead of plain unsigned
  drivers/acpi: Drop the unneeded casts to unsigned

 xen/arch/arm/domain_build.c             |  2 +-
 xen/arch/arm/guestcopy.c                | 13 +++++++------
 xen/arch/arm/include/asm/arm32/bitops.h |  8 ++++----
 xen/arch/arm/include/asm/fixmap.h       |  4 ++--
 xen/arch/arm/include/asm/guest_access.h |  8 ++++----
 xen/arch/arm/include/asm/mm.h           |  2 +-
 xen/arch/arm/irq.c                      |  2 +-
 xen/arch/arm/kernel.c                   |  2 +-
 xen/arch/arm/mm.c                       |  4 ++--
 xen/common/domain.c                     |  2 +-
 xen/common/grant_table.c                |  6 +++---
 xen/common/gunzip.c                     |  8 ++++----
 xen/common/sched/cpupool.c              |  4 ++--
 xen/common/trace.c                      |  2 +-
 xen/drivers/acpi/tables/tbfadt.c        |  6 +++---
 xen/drivers/acpi/tables/tbutils.c       |  1 -
 xen/include/public/physdev.h            |  4 ++--
 xen/include/public/sysctl.h             | 10 +++++-----
 xen/include/xen/domain.h                |  2 +-
 xen/include/xen/perfc.h                 |  2 +-
 xen/include/xen/sched.h                 |  2 +-
 xen/xsm/flask/ss/avtab.c                |  2 +-
 22 files changed, 48 insertions(+), 48 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356602.584864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob9-0005qw-4P; Mon, 27 Jun 2022 13:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356602.584864; Mon, 27 Jun 2022 13:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ob9-0005qn-0j; Mon, 27 Jun 2022 13:16:15 +0000
Received: by outflank-mailman (input) for mailman id 356602;
 Mon, 27 Jun 2022 13:16:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5ob7-0005lc-Ad
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 50b8281d-f61b-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:16:12 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B9545175A;
 Mon, 27 Jun 2022 06:16:11 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D23983F5A1;
 Mon, 27 Jun 2022 06:16:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50b8281d-f61b-11ec-bd2d-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/7] xen/domain: Use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:38 +0200
Message-Id: <20220627131543.410971-3-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is done purely for style and consistency reasons, as the former
is used more often than the latter.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/common/domain.c      | 2 +-
 xen/include/xen/domain.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7570eae91a..57a8515f21 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1446,7 +1446,7 @@ int vcpu_reset(struct vcpu *v)
  * of memory, and it sets a pending event to make sure that a pending
  * event doesn't get missed.
  */
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
+int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
 {
     struct domain *d = v->domain;
     void *mapping;
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1c3c88a14d..628b14b086 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -65,7 +65,7 @@ void cf_check free_pirq_struct(void *);
 int  arch_vcpu_create(struct vcpu *v);
 void arch_vcpu_destroy(struct vcpu *v);
 
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
+int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
 void unmap_vcpu_info(struct vcpu *v);
 
 int arch_domain_create(struct domain *d,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356603.584875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obD-0006BN-Ei; Mon, 27 Jun 2022 13:16:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356603.584875; Mon, 27 Jun 2022 13:16:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obD-0006BA-BO; Mon, 27 Jun 2022 13:16:19 +0000
Received: by outflank-mailman (input) for mailman id 356603;
 Mon, 27 Jun 2022 13:16:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5obB-0005lc-HH
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:17 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 534a9ce7-f61b-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:16:16 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0A2711758;
 Mon, 27 Jun 2022 06:16:16 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 518FB3F5A1;
 Mon, 27 Jun 2022 06:16:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 534a9ce7-f61b-11ec-bd2d-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 3/7] xen/common: Use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:39 +0200
Message-Id: <20220627131543.410971-4-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is done purely for style and consistency reasons, as the former
is used more often than the latter.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/common/grant_table.c   | 6 +++---
 xen/common/gunzip.c        | 8 ++++----
 xen/common/sched/cpupool.c | 4 ++--
 xen/common/trace.c         | 2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 3918e6de6b..2d110d9f41 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -895,7 +895,7 @@ done:
 static int _set_status(const grant_entry_header_t *shah,
                        grant_status_t *status,
                        struct domain *rd,
-                       unsigned rgt_version,
+                       unsigned int rgt_version,
                        struct active_grant_entry *act,
                        int readonly,
                        int mapflag,
@@ -1763,8 +1763,8 @@ static int
 gnttab_populate_status_frames(struct domain *d, struct grant_table *gt,
                               unsigned int req_nr_frames)
 {
-    unsigned i;
-    unsigned req_status_frames;
+    unsigned int i;
+    unsigned int req_status_frames;
 
     req_status_frames = grant_to_status_frames(req_nr_frames);
 
diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
index aa16fec4bb..71ec5f26be 100644
--- a/xen/common/gunzip.c
+++ b/xen/common/gunzip.c
@@ -14,13 +14,13 @@ static memptr __initdata free_mem_end_ptr;
 #define WSIZE           0x80000000
 
 static unsigned char *__initdata inbuf;
-static unsigned __initdata insize;
+static unsigned int __initdata insize;
 
 /* Index of next byte to be processed in inbuf: */
-static unsigned __initdata inptr;
+static unsigned int __initdata inptr;
 
 /* Bytes in output buffer: */
-static unsigned __initdata outcnt;
+static unsigned int __initdata outcnt;
 
 #define OF(args)        args
 
@@ -73,7 +73,7 @@ static __init void flush_window(void)
      * compute the crc.
      */
     unsigned long c = crc;
-    unsigned n;
+    unsigned int n;
     unsigned char *in, ch;
 
     in = window;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a20e3a5fcb..2afe54f54d 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -850,7 +850,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_ADDCPU:
     {
-        unsigned cpu;
+        unsigned int cpu;
         const cpumask_t *cpus;
 
         cpu = op->cpu;
@@ -895,7 +895,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_RMCPU:
     {
-        unsigned cpu;
+        unsigned int cpu;
 
         c = cpupool_get_by_id(op->cpupool_id);
         ret = -ENOENT;
diff --git a/xen/common/trace.c b/xen/common/trace.c
index a7c092fcbb..fb3752ce62 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -834,7 +834,7 @@ void __trace_hypercall(uint32_t event, unsigned long op,
 
 #define APPEND_ARG32(i)                         \
     do {                                        \
-        unsigned i_ = (i);                      \
+        unsigned int i_ = (i);                  \
         *a++ = args[(i_)];                      \
         d.op |= TRC_PV_HYPERCALL_V2_ARG_32(i_); \
     } while( 0 )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356604.584886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obF-0006UK-PJ; Mon, 27 Jun 2022 13:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356604.584886; Mon, 27 Jun 2022 13:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obF-0006UB-LL; Mon, 27 Jun 2022 13:16:21 +0000
Received: by outflank-mailman (input) for mailman id 356604;
 Mon, 27 Jun 2022 13:16:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5obE-0005HM-S1
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:20 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 52ddb4d1-f61b-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 15:16:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B24A61758;
 Mon, 27 Jun 2022 06:16:19 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6830C3F5A1;
 Mon, 27 Jun 2022 06:16:16 -0700 (PDT)
X-Inumbo-ID: 52ddb4d1-f61b-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 4/7] include/xen: Use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:40 +0200
Message-Id: <20220627131543.410971-5-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is just for style and consistency reasons, as the former is
used more often than the latter.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/include/xen/perfc.h | 2 +-
 xen/include/xen/sched.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/perfc.h b/xen/include/xen/perfc.h
index bb010b0aae..7c5ce537bd 100644
--- a/xen/include/xen/perfc.h
+++ b/xen/include/xen/perfc.h
@@ -49,7 +49,7 @@ enum perfcounter {
 #undef PERFSTATUS
 #undef PERFSTATUS_ARRAY
 
-typedef unsigned perfc_t;
+typedef unsigned int perfc_t;
 #define PRIperfc ""
 
 DECLARE_PER_CPU(perfc_t[NUM_PERFCOUNTERS], perfcounters);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 463d41ffb6..b9515eb497 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -519,7 +519,7 @@ struct domain
     /* hvm_print_line() and guest_console_write() logging. */
 #define DOMAIN_PBUF_SIZE 200
     char       *pbuf;
-    unsigned    pbuf_idx;
+    unsigned int pbuf_idx;
     spinlock_t  pbuf_lock;
 
     /* OProfile support. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356607.584897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obJ-0006tV-2p; Mon, 27 Jun 2022 13:16:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356607.584897; Mon, 27 Jun 2022 13:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obI-0006t2-Vc; Mon, 27 Jun 2022 13:16:24 +0000
Received: by outflank-mailman (input) for mailman id 356607;
 Mon, 27 Jun 2022 13:16:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5obI-0005HM-3q
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:24 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 54c4fbbd-f61b-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 15:16:19 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 000331758;
 Mon, 27 Jun 2022 06:16:22 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 08E843F5A1;
 Mon, 27 Jun 2022 06:16:19 -0700 (PDT)
X-Inumbo-ID: 54c4fbbd-f61b-11ec-b725-ed86ccbb4733
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/7] include/public: Use uint32_t instead of unsigned (int)
Date: Mon, 27 Jun 2022 15:15:41 +0200
Message-Id: <20220627131543.410971-6-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Public interfaces shall make use of types that indicate size and
signedness. Take the opportunity to also modify places where explicit
unsigned int is used.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/include/public/physdev.h |  4 ++--
 xen/include/public/sysctl.h  | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index d271766ad0..f8d1905e30 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -211,8 +211,8 @@ struct physdev_manage_pci_ext {
     /* IN */
     uint8_t bus;
     uint8_t devfn;
-    unsigned is_extfn;
-    unsigned is_virtfn;
+    uint32_t is_extfn;
+    uint32_t is_virtfn;
     struct {
         uint8_t bus;
         uint8_t devfn;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index b0a4af8789..60c8711483 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -624,7 +624,7 @@ struct xen_sysctl_arinc653_schedule {
         /* If a domain has multiple VCPUs, vcpu_id specifies which one
          * this schedule entry applies to. It should be set to 0 if
          * there is only one VCPU for the domain. */
-        unsigned int vcpu_id;
+        uint32_t vcpu_id;
         /* runtime specifies the amount of time that should be allocated
          * to this VCPU per major frame. It is specified in nanoseconds */
         uint64_aligned_t runtime;
@@ -644,18 +644,18 @@ struct xen_sysctl_credit_schedule {
     /* Length of timeslice in milliseconds */
 #define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
 #define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
-    unsigned tslice_ms;
-    unsigned ratelimit_us;
+    uint32_t tslice_ms;
+    uint32_t ratelimit_us;
     /*
      * How long we consider a vCPU to be cache-hot on the
      * CPU where it has run (max 100ms, in microseconds)
     */
 #define XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US (100 * 1000)
-    unsigned vcpu_migr_delay_us;
+    uint32_t vcpu_migr_delay_us;
 };
 
 struct xen_sysctl_credit2_schedule {
-    unsigned ratelimit_us;
+    uint32_t ratelimit_us;
 };
 
 /* XEN_SYSCTL_scheduler_op */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356608.584908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obL-0007JY-Ja; Mon, 27 Jun 2022 13:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356608.584908; Mon, 27 Jun 2022 13:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obL-0007JK-F4; Mon, 27 Jun 2022 13:16:27 +0000
Received: by outflank-mailman (input) for mailman id 356608;
 Mon, 27 Jun 2022 13:16:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5obK-0005lc-3c
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 58936768-f61b-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:16:25 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0B0B11758;
 Mon, 27 Jun 2022 06:16:25 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4501F3F5A1;
 Mon, 27 Jun 2022 06:16:22 -0700 (PDT)
X-Inumbo-ID: 58936768-f61b-11ec-bd2d-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: [PATCH 6/7] xsm/flask: Use unsigned int instead of plain unsigned
Date: Mon, 27 Jun 2022 15:15:42 +0200
Message-Id: <20220627131543.410971-7-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is just for style and consistency reasons, as the former is
used more often than the latter.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/xsm/flask/ss/avtab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/xsm/flask/ss/avtab.c b/xen/xsm/flask/ss/avtab.c
index 017f5183de..9761d028d8 100644
--- a/xen/xsm/flask/ss/avtab.c
+++ b/xen/xsm/flask/ss/avtab.c
@@ -349,7 +349,7 @@ int avtab_read_item(struct avtab *a, void *fp, struct policydb *pol,
     struct avtab_key key;
     struct avtab_datum datum;
     int i, rc;
-    unsigned set;
+    unsigned int set;
 
     memset(&key, 0, sizeof(struct avtab_key));
     memset(&datum, 0, sizeof(struct avtab_datum));
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:16:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356612.584919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obO-0007jV-2X; Mon, 27 Jun 2022 13:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356612.584919; Mon, 27 Jun 2022 13:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5obN-0007jD-QV; Mon, 27 Jun 2022 13:16:29 +0000
Received: by outflank-mailman (input) for mailman id 356612;
 Mon, 27 Jun 2022 13:16:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vXPS=XC=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o5obN-0005lc-47
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:16:29 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 59fdf657-f61b-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 15:16:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 666D9175A;
 Mon, 27 Jun 2022 06:16:27 -0700 (PDT)
Received: from e129167.arm.com (unknown [10.57.42.186])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 697053F5A1;
 Mon, 27 Jun 2022 06:16:25 -0700 (PDT)
X-Inumbo-ID: 59fdf657-f61b-11ec-bd2d-47488cf2e6aa
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 7/7] drivers/acpi: Drop the unneeded casts to unsigned
Date: Mon, 27 Jun 2022 15:15:43 +0200
Message-Id: <20220627131543.410971-8-michal.orzel@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220627131543.410971-1-michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

... and make use of PRIu format specifiers when applicable.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 xen/drivers/acpi/tables/tbfadt.c  | 6 +++---
 xen/drivers/acpi/tables/tbutils.c | 1 -
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/acpi/tables/tbfadt.c b/xen/drivers/acpi/tables/tbfadt.c
index f11fd5a900..d8fcc50dec 100644
--- a/xen/drivers/acpi/tables/tbfadt.c
+++ b/xen/drivers/acpi/tables/tbfadt.c
@@ -233,9 +233,9 @@ void __init acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 lengt
 	 */
 	if (length > sizeof(struct acpi_table_fadt)) {
 		ACPI_WARNING((AE_INFO,
-			      "FADT (revision %u) is longer than ACPI 5.0 version,"
-			      " truncating length %u to %zu",
-			      table->revision, (unsigned)length,
+			      "FADT (revision %"PRIu8") is longer than ACPI 5.0 version,"
+			      " truncating length %"PRIu32" to %zu",
+			      table->revision, length,
 			      sizeof(struct acpi_table_fadt)));
 	}
 
diff --git a/xen/drivers/acpi/tables/tbutils.c b/xen/drivers/acpi/tables/tbutils.c
index d135a50ff9..11412c47de 100644
--- a/xen/drivers/acpi/tables/tbutils.c
+++ b/xen/drivers/acpi/tables/tbutils.c
@@ -481,7 +481,6 @@ acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags)
 			if (ACPI_FAILURE(status)) {
 				ACPI_WARNING((AE_INFO,
 					      "Truncating %u table entries!",
-					      (unsigned)
 					      (acpi_gbl_root_table_list.size -
 					       acpi_gbl_root_table_list.
 					       count)));
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:22:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 13:22:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356658.584934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ogx-0002Ui-O5; Mon, 27 Jun 2022 13:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356658.584934; Mon, 27 Jun 2022 13:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5ogx-0002Ub-KJ; Mon, 27 Jun 2022 13:22:15 +0000
Received: by outflank-mailman (input) for mailman id 356658;
 Mon, 27 Jun 2022 13:22:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQHX=XC=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o5ogw-0002US-0z
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 13:22:14 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130049.outbound.protection.outlook.com [40.107.13.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 254579e4-f61c-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 15:22:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (20.179.234.89) by
 PAXPR04MB8639.eurprd04.prod.outlook.com (10.141.86.83) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.18; Mon, 27 Jun 2022 13:22:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 13:22:11 +0000
X-Inumbo-ID: 254579e4-f61c-11ec-b725-ed86ccbb4733
Message-ID: <11a782c2-9a8f-62fd-8c49-2ab16f5cb4d4@suse.com>
Date: Mon, 27 Jun 2022 15:22:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/7] xen/domain: Use unsigned int instead of plain
 unsigned
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-3-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-3-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.06.2022 15:15, Michal Orzel wrote:
> This is just for the style and consistency reasons as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:23:05 2022
Message-ID: <8595526d-9af9-c892-814f-e7002cc68f6e@suse.com>
Date: Mon, 27 Jun 2022 15:22:58 +0200
Subject: Re: [PATCH 3/7] xen/common: Use unsigned int instead of plain
 unsigned
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-4-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-4-michal.orzel@arm.com>

On 27.06.2022 15:15, Michal Orzel wrote:
> This is just for the style and consistency reasons as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:23:27 2022
Message-ID: <b34ea258-c967-7e6d-9ef5-6c974f252082@suse.com>
Date: Mon, 27 Jun 2022 15:23:22 +0200
Subject: Re: [PATCH 4/7] include/xen: Use unsigned int instead of plain
 unsigned
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-5-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-5-michal.orzel@arm.com>

On 27.06.2022 15:15, Michal Orzel wrote:
> This is just for the style and consistency reasons as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:24:21 2022
Message-ID: <06307ee9-b331-dd07-ee25-d2455f70175c@suse.com>
Date: Mon, 27 Jun 2022 15:24:15 +0200
Subject: Re: [PATCH 5/7] include/public: Use uint32_t instead of unsigned
 (int)
To: Michal Orzel <michal.orzel@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-6-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-6-michal.orzel@arm.com>

On 27.06.2022 15:15, Michal Orzel wrote:
> Public interfaces shall make use of types that indicate size and
> signedness. Take the opportunity to also modify places where explicit
> unsigned int is used.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:25:23 2022
Message-ID: <1757b28e-2b89-1362-0bc1-34583485d8d7@xen.org>
Date: Mon, 27 Jun 2022 14:25:18 +0100
Subject: Re: [PATCH 1/7] xen/arm: Use unsigned int instead of plain unsigned
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-2-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220627131543.410971-2-michal.orzel@arm.com>

Hi,

On 27/06/2022 14:15, Michal Orzel wrote:
> This is just for style and consistency reasons, as the former is
> used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:26:21 2022
Message-ID: <7d71eda4-8695-83fd-d729-5c28070c1539@suse.com>
Date: Mon, 27 Jun 2022 15:26:14 +0200
Subject: Re: [PATCH 6/7] xsm/flask: Use unsigned int instead of plain unsigned
To: Michal Orzel <michal.orzel@arm.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-7-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-7-michal.orzel@arm.com>

On 27.06.2022 15:15, Michal Orzel wrote:
> This is just for style and consistency reasons, as the former is
> used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 13:27:32 2022
Message-ID: <05b5ccb3-c412-96e1-a168-84c108384147@suse.com>
Date: Mon, 27 Jun 2022 15:27:21 +0200
Subject: Re: [PATCH 7/7] drivers/acpi: Drop the unneeded casts to unsigned
To: Michal Orzel <michal.orzel@arm.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-8-michal.orzel@arm.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627131543.410971-8-michal.orzel@arm.com>

On 27.06.2022 15:15, Michal Orzel wrote:
> ... and make use of PRIu format specifiers when applicable.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:00:11 2022
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Michal Orzel <Michal.Orzel@arm.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
Thread-Topic: [PATCH 2/7] xen/arm32: head.S: Introduce a macro to load the
 physical address of a symbol
Thread-Index: AQHYh6p3jlOKzsv2fEeFkG7om0FjI61iz6mAgAA4JICAAAIvAIAAAqUAgABAUoA=
Date: Mon, 27 Jun 2022 13:59:39 +0000
Message-ID: <C40F17BE-748B-467E-BDDA-2A8562C737CC@arm.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-3-julien@xen.org>
 <1218329a-13a3-79b6-6753-c2c9a0c45b2d@arm.com>
 <e92b0f0f-d73c-a003-eb0f-15f7d624a75e@xen.org>
 <8c8d6a9f-18cd-43e5-0835-68927e7d1bac@arm.com>
 <fbead981-fe36-30fe-12cd-29842a642e47@xen.org>
In-Reply-To: <fbead981-fe36-30fe-12cd-29842a642e47@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Julien,

> On 27 Jun 2022, at 11:09, Julien Grall <julien@xen.org> wrote:
> 
> Hi Michal,
> 
> On 27/06/2022 10:59, Michal Orzel wrote:
>> On 27.06.2022 11:52, Julien Grall wrote:
>>> 
>>> 
>>> On 27/06/2022 07:31, Michal Orzel wrote:
>>>> Hi Julien,
>>> 
>>> Hi Michal,
>>> 
>>>> On 24.06.2022 11:11, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>> 
>>>>> A lot of places in the ARM32 assembly require loading the physical
>>>>> address of a symbol. Rather than open-coding the translation,
>>>>> introduce a new macro that will load the physical address of a symbol.
>>>>> 
>>>>> Lastly, use the new macro to replace all the current open-coded
>>>>> versions.
>>>>> 
>>>>> Note that most of the comments associated with the changed code have
>>>>> been removed because the code is now self-explanatory.
>>>>> 
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>> ---
>>>>> xen/arch/arm/arm32/head.S | 23 +++++++++++------------
>>>>> 1 file changed, 11 insertions(+), 12 deletions(-)
>>>>> 
>>>>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>>>>> index c837d3054cf9..77f0a619ca51 100644
>>>>> --- a/xen/arch/arm/arm32/head.S
>>>>> +++ b/xen/arch/arm/arm32/head.S
>>>>> @@ -65,6 +65,11 @@
>>>>> .endif
>>>>> .endm
>>>>> +.macro load_paddr rb, sym
>>>>> +  ldr  \rb, =\sym
>>>>> +  add  \rb, \rb, r10
>>>>> +.endm
>>>>> +
>>>> All the macros in this file have a comment, so it'd be useful to
>>>> follow this convention.
>>> This is not really a convention. Most of the macros are non-trivial
>>> (e.g. they may clobber registers).
>>> 
>>> The comment I have in mind here would be:
>>> 
>>> "Load the physical address of \sym in \rb"
>>> 
>>> I am fairly confident that anyone can understand that from the
>>> ".macro" line... So I don't feel the comment is necessary.
>>> 
>> Fair enough, although you did put a comment when introducing
>> load_paddr for arm64 head.S.
> 
> For better (or worse), my way of coding has evolved over the past 5
> years. :) Commenting is one thing that changed. I learnt from other
> open source projects that it is better to comment when it is not clear
> what the function/code is doing.
> 
> Anyway, this is easy enough for me to add if either Bertrand or
> Stefano thinks it is better to add a comment.

I do not think a comment to explain what is done there is needed, as it
is quite obvious, so:

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand
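For readers skimming the archive, the arithmetic the quoted `load_paddr` macro performs can be sketched in a few lines of Python. The link-time base and physical load address below are made-up values for illustration; the only assumption taken from the patch is that r10 holds the offset to add to a symbol's link-time address:

```python
# Model of:  ldr \rb, =\sym ; add \rb, \rb, r10
# (illustrative values, not taken from the Xen sources)
XEN_LINK_BASE = 0x00200000   # hypothetical link-time (virtual) base
XEN_PHYS_BASE = 0x80000000   # hypothetical physical load address

# r10 is assumed to hold phys_offset = phys_base - link_base
r10 = XEN_PHYS_BASE - XEN_LINK_BASE

def load_paddr(sym_link_addr, phys_offset=r10):
    """Return the physical address of a symbol given its link-time address."""
    # "ldr rb, =sym" resolves to the link-time address; "add rb, rb, r10"
    # then relocates it to where the image actually sits in memory.
    return sym_link_addr + phys_offset

# A symbol linked 0x4000 past the base ends up 0x4000 past the load address.
print(hex(load_paddr(XEN_LINK_BASE + 0x4000)))  # -> 0x80004000
```

The macro thus collapses the two-instruction open-coded pattern the patch removes into one named operation.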



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:00:57 2022
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 3/7] xen/arm: head: Add missing isb after writing to
 SCTLR_EL2/HSCTLR
Thread-Topic: [PATCH 3/7] xen/arm: head: Add missing isb after writing to
 SCTLR_EL2/HSCTLR
Thread-Index: AQHYh6p3WB7MoRj1SkKsSuzOQQLxTq1jTTwA
Date: Mon, 27 Jun 2022 14:00:40 +0000
Message-ID: <ED856863-B44D-44D2-9161-5878AB72F2C2@arm.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-4-julien@xen.org>
In-Reply-To: <20220624091146.35716-4-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Julien,

> On 24 Jun 2022, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> A write to SCTLR_EL2/HSCTLR may not be visible until the next context
> synchronization. When initializing the CPU, we want the update to take
> effect right now. So add an isb afterwards.
> 
> Spec references:
>    - AArch64: D13.1.2 ARM DDI 0487
>    - AArch32 v8: G8.1.2 ARM DDI 0487
>    - AArch32 v7: B5.6.3 ARM DDI 0406C.d
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand
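The ordering rule the patch relies on can be modelled abstractly: a system-register write may stay pending until a context synchronization event (isb) makes it architecturally visible. Below is a toy Python model of that rule; the register name comes from the patch, but the buffering scheme and the example value are illustrative, not actual hardware behaviour:

```python
class ToyCpu:
    """Toy model: sysreg writes only become visible after an isb."""
    def __init__(self):
        self.visible = {}    # architecturally visible register state
        self.pending = {}    # writes not yet guaranteed to be observed

    def msr(self, reg, value):
        # A write to e.g. SCTLR_EL2 may not take effect immediately.
        self.pending[reg] = value

    def mrs(self, reg):
        # Reads see only the visible state (0 models the reset value here).
        return self.visible.get(reg, 0)

    def isb(self):
        # Context synchronization: all prior sysreg writes take effect.
        self.visible.update(self.pending)
        self.pending.clear()

cpu = ToyCpu()
cpu.msr("SCTLR_EL2", 0x30C5183D)   # arbitrary example value
before = cpu.mrs("SCTLR_EL2")      # may still observe the old state
cpu.isb()
after = cpu.mrs("SCTLR_EL2")       # now the write is guaranteed visible
print(hex(before), hex(after))
```

Without the isb, code after the write could still execute under the old translation/cache configuration, which is exactly what the patch guards against during CPU bring-up.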



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:13:34 2022
Message-ID: <4aff12cf-e913-719c-0d1c-653b35c837f7@suse.com>
Date: Mon, 27 Jun 2022 16:13:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 3/7] xen/common: Use unsigned int instead of plain
 unsigned
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-4-michal.orzel@arm.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220627131543.410971-4-michal.orzel@arm.com>

On 27.06.22 15:15, Michal Orzel wrote:
> This is just for style and consistency reasons, as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------z3u0hSAide2lIOanRNM0eLhf--

--------------bRpLVKG1rcAn0cidsyJwzers--

--------------fbw63HCrXqj2pBBLmo3tYmLU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5uwMFAwAAAAAACgkQsN6d1ii/Ey+g
Xwf9GoN9W6RKP2wpxFvbft8YHKTEtRH7Kb/w8ETsIcY7inHUav3olzqQRls9VMtBmIGS71QxguwN
wArisMk06uUfAnX2dZzbnCoj+08ZgETUfkrp/Zj7etMAYrNCwHJsx980R3AygwMIcoNfv8Gu9XyT
BKRDvS+6Z1avc59P10/y0QyImybzY7cdwTtuIKqrt+Oo3GAWQw53GeOZtfJ6RBIy+ZNpUOibderC
qfjwsvP2vtcT5XnDpbrlDC9MUdga93H+MHQqcK8gExX0ps4tjseL9vE7IGdl301xDXz2BVKlqEC2
QlMLXcjKsAhYh06T8k6q5B65MdRuFfVJFR/uDxohsg==
=Hkc5
-----END PGP SIGNATURE-----

--------------fbw63HCrXqj2pBBLmo3tYmLU--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:29:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:29:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356802.585079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5pk8-0008GW-28; Mon, 27 Jun 2022 14:29:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356802.585079; Mon, 27 Jun 2022 14:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5pk7-0008GP-Vn; Mon, 27 Jun 2022 14:29:35 +0000
Received: by outflank-mailman (input) for mailman id 356802;
 Mon, 27 Jun 2022 14:29:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z0SW=XC=citrix.com=prvs=170a910b0=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1o5pk6-0008GJ-E2
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:29:34 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e46d2e4-f625-11ec-bd2d-47488cf2e6aa;
 Mon, 27 Jun 2022 16:29:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e46d2e4-f625-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656340172;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=xaKM+ebYAqyEmZhRkNr800vQvKcTck+EiNxG1DdvZek=;
  b=h6XQbnxiKlr81peYzTH3aqab9dYo8LAm9oZQdsVuVAN+BqdQ1dS3mqEK
   BCuGWznYPLyYz5bHLyduD+LH2bjiwuNl1lWgh+/8jME63/rxxGTZObATu
   3huZGJwwmHNQxe8aaVrxZoZgIvT0//RFVUfCtzpHymFjbKO+glTEKPWyD
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74341223
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:B+rK7a5iNjcBtE950fu0uAxRtAnHchMFZxGqfqrLsTDasY5as4F+v
 jcZX2vSbq6MajP0KdwkPYS2oENSv8TTzN5mGwVkqnhnHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG96yE6j8lkf5KkYAL+EnkZqTRMFWFw03qPp8Zj2tQy2YbjXFvU0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurStDicPEoPzst5HSgNHFHllOLNq2JnYdC3XXcy7lyUqclPpyvRqSko3IZcZ6qB8BmQmG
 f4wcW5XKErZ3qTvnez9GrIEascLdaEHOKsFvWp7izXQAvs8XpnHR43B5MNC3Sd2jcdLdRrbT
 5VFMmY2Nk6bC/FJEkgREq9vzMb1umjUazJjoUOkgZYu7FGGmWSd15CyaYGIK7RmX/59mUKVp
 XnP+WjjNQ0LL9yUyTeD8XWEi/fGmGXwX4d6PLG/8PFugRuBxmUVBzURT1KwpfT/gUm7M/pVL
 FYV4WwptrQo81KwTcjVWAexq3qJ+BUbXrJ4A+A8rQ2A1KfQywKYHXQfCC5MbsQ8s807TiBs0
 UWG9/vrCiZoq6a9Um+G+/GfqjbaETMOMWYIaCsATA0Ey9ruuoc+ilTIVNkLLUKupoSrQ3eqm
 WnM9XVgweVI5SIW60ml1U2AoxSAiKfjdFQs/BrQBnmg8C9ZQpHwMuRE9mPnAeZ8wJexFwfc4
 iBfxpDBvIjiHrnWynXTHbxl8KWBoq/cbWaC2QMH84wJrWzFxpK1QWxHDNiSzm9NO91MRzLma
 VS7Veh5tM4KZyvCgUOajuuM5yUWIUvIT42Nugj8NIYmX3SIXFbvENtSTUCRxXvxt0MnjLsyP
 5yWGe71UytEU/o3kWvoHr9HuVPO+szZ7TqLLa0XMjz9iebODJJrYext3KSyghARs/rf/VS9H
 yd3PMqW0RRPONDDjt3s2ddLdzgidCFjbbiv8pw/XrPSeWJORTB+Y8I9NJt8IuSJaYwOzreWl
 px8M2cFoGfCaYrvc1XSMC85M+OzB/6SbxsTZEQRALph4FB7Ca7H0UvVX8FtFVX73ISPFcJJc
 sQ=
IronPort-HdrOrdr: A9a23:JgFGkaN7oMRkusBcTs+jsMiBIKoaSvp037Eqv3oRdfUzSL3+qy
 nOpoVj6faaskdzZJhNo7+90cq7MBfhHPxOkOss1N6ZNWGM0gbFEGgL1/qF/9SKIU3DH4Bmu5
 uIC5IObeHNMQ==
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="74341223"
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>, Juergen Gross
	<jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Oleksandr
 Tyshchenko" <oleksandr_tyshchenko@epam.com>, Dongli Zhang
	<dongli.zhang@oracle.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH] xen/manage: Use orderly_reboot() to reboot
Date: Mon, 27 Jun 2022 15:28:22 +0100
Message-ID: <20220627142822.3612106-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Currently when the toolstack issues a reboot, it gets translated into a
call to ctrl_alt_del(). But tying reboot to ctrl-alt-del means rebooting
may fail if e.g. the user has masked the ctrl-alt-del.target under
systemd.

A previous attempt to fix this set a flag so that a reboot is forced
whenever ctrl_alt_del() is called. However, that approach doesn't give
userspace the opportunity to block the reboot, or even to do any cleanup
or syncing.

Instead, call orderly_reboot() which will call the "reboot" command,
giving userspace the opportunity to block it or perform the usual reboot
process while being independent of the ctrl-alt-del behaviour. It also
matches what happens in the shutdown case.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
---
 drivers/xen/manage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index 3d5a384d65f7..c16df629907e 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -205,7 +205,7 @@ static void do_poweroff(void)
 static void do_reboot(void)
 {
 	shutting_down = SHUTDOWN_POWEROFF; /* ? */
-	ctrl_alt_del();
+	orderly_reboot();
 }
 
 static struct shutdown_handler shutdown_handlers[] = {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:31:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356807.585090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5pm5-0001H7-Ee; Mon, 27 Jun 2022 14:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356807.585090; Mon, 27 Jun 2022 14:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5pm5-0001H0-Bt; Mon, 27 Jun 2022 14:31:37 +0000
Received: by outflank-mailman (input) for mailman id 356807;
 Mon, 27 Jun 2022 14:31:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5pm4-0001Gu-Fu
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:31:36 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d6295788-f625-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 16:31:31 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D4AA01F9FC;
 Mon, 27 Jun 2022 14:31:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AD7D013456;
 Mon, 27 Jun 2022 14:31:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id zJUYKUa/uWL9PAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 14:31:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6295788-f625-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656340294; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=a0717JwRJTN1P8VXJ8tnpmxFaGUDfHIV5/gpuWRh0kM=;
	b=VLz5jHtAGhX/FS20ycOWZOAPcUDHwR3zGMtV/xvlY8G1D6EkkCcBXyKA8tE/w1Z9uFzo67
	9gdrCg/8AZMrnEJHXKyZH880KfoZqmkJJPNBKmcI88yndcFdpR/FHYhD5Vo7h1KQkGBSKD
	Y4Es1/vmBsJfODLHJfikg6cjIu4dU6U=
Message-ID: <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
Date: Mon, 27 Jun 2022 16:31:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220627123635.3416-2-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------v0BbcrXxybXE5lMIgIAU42u3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------v0BbcrXxybXE5lMIgIAU42u3
Content-Type: multipart/mixed; boundary="------------SQJo3kdLaMyHTGbHpuLA106k";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
Message-ID: <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
In-Reply-To: <20220627123635.3416-2-julien@xen.org>

--------------SQJo3kdLaMyHTGbHpuLA106k
Content-Type: multipart/mixed; boundary="------------rPs4yjCoC15pFnFCTYDwYSxA"

--------------rPs4yjCoC15pFnFCTYDwYSxA
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjcuMDYuMjIgMTQ6MzYsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4gDQo+IFNvbWUgdG9vbHMgKGUuZy4geGVu
c3RvcmVkKSBhbHdheXMgZXhwZWN0IEVJTlZBTCB0byBiZSBmaXJzdCBpbiB4c2RfZXJyb3Jz
Lg0KPiANCj4gRG9jdW1lbnQgaXQgc28sIG9uZSBkb2Vzbid0IGFkZCBhIG5ldyBlbnRyeSBi
ZWZvcmUgaGFuZCBieSBtaXN0YWtlLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdy
YWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4gDQo+IC0tLS0NCj4gDQo+IEkgaGF2ZSB0cmll
ZCB0byBhZGQgYSBCVUlMRF9CVUdfT04oKSBidXQgR0NDIGNvbXBsYWluZWQgdGhhdCB0aGUg
dmFsdWUNCj4gd2FzIG5vdCBhIGNvbnN0YW50LiBJIGNvdWxkbid0IGZpZ3VyZSBvdXQgYSB3
YXkgdG8gbWFrZSBHQ0MgaGFwcHkuDQo+IA0KPiBDaGFuZ2VzIGluIHYyOg0KPiAgICAgIC0g
TmV3IHBhdGNoDQo+IC0tLQ0KPiAgIHhlbi9pbmNsdWRlL3B1YmxpYy9pby94c193aXJlLmgg
fCAxICsNCj4gICAxIGZpbGUgY2hhbmdlZCwgMSBpbnNlcnRpb24oKykNCj4gDQo+IGRpZmYg
LS1naXQgYS94ZW4vaW5jbHVkZS9wdWJsaWMvaW8veHNfd2lyZS5oIGIveGVuL2luY2x1ZGUv
cHVibGljL2lvL3hzX3dpcmUuaA0KPiBpbmRleCBjMWVjN2M3M2UzYjEuLmRkNGM5YzliOTcy
ZCAxMDA2NDQNCj4gLS0tIGEveGVuL2luY2x1ZGUvcHVibGljL2lvL3hzX3dpcmUuaA0KPiAr
KysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvaW8veHNfd2lyZS5oDQo+IEBAIC03Niw2ICs3Niw3
IEBAIHN0YXRpYyBzdHJ1Y3QgeHNkX2Vycm9ycyB4c2RfZXJyb3JzW10NCj4gICBfX2F0dHJp
YnV0ZV9fKCh1bnVzZWQpKQ0KPiAgICNlbmRpZg0KPiAgICAgICA9IHsNCj4gKyAgICAvKiAv
IVwgU29tZSB1c2VycyAoZS5nLiB4ZW5zdG9yZWQpIGV4cGVjdCBFSU5WQUwgdG8gYmUgdGhl
IGZpcnN0IGVudHJ5LiAqLw0KPiAgICAgICBYU0RfRVJST1IoRUlOVkFMKSwNCj4gICAgICAg
WFNEX0VSUk9SKEVBQ0NFUyksDQo+ICAgICAgIFhTRF9FUlJPUihFRVhJU1QpLA0KDQpXaGF0
IGFib3V0IGFub3RoZXIgYXBwcm9hY2gsIGxpa2U6DQoNCi0tLSBhL3Rvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmMNCisrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3Jl
LmMNCkBAIC03NDYsMTYgKzc0NiwxNyBAQCB1bnNpZ25lZCBpbnQgZ2V0X3N0cmluZ3Moc3Ry
dWN0IGJ1ZmZlcmVkX2RhdGEgKmRhdGEsDQogIHN0YXRpYyB2b2lkIHNlbmRfZXJyb3Ioc3Ry
dWN0IGNvbm5lY3Rpb24gKmNvbm4sIGludCBlcnJvcikNCiAgew0KICAgICAgICAgdW5zaWdu
ZWQgaW50IGk7DQorICAgICAgIGNoYXIgKmVyciA9IE5VTEw7DQoNCiAgICAgICAgIGZvciAo
aSA9IDA7IGVycm9yICE9IHhzZF9lcnJvcnNbaV0uZXJybnVtOyBpKyspIHsNCiAgICAgICAg
ICAgICAgICAgaWYgKGkgPT0gQVJSQVlfU0laRSh4c2RfZXJyb3JzKSAtIDEpIHsNCiAgICAg
ICAgICAgICAgICAgICAgICAgICBlcHJpbnRmKCJ4ZW5zdG9yZWQ6IGVycm9yICVpIHVudHJh
bnNsYXRhYmxlIiwgZXJyb3IpOw0KLSAgICAgICAgICAgICAgICAgICAgICAgaSA9IDA7IC8q
IEVJTlZBTCAqLw0KKyAgICAgICAgICAgICAgICAgICAgICAgZXJyID0gIkVJTlZBTCI7DQog
ICAgICAgICAgICAgICAgICAgICAgICAgYnJlYWs7DQogICAgICAgICAgICAgICAgIH0NCiAg
ICAgICAgIH0NCi0gICAgICAgc2VuZF9yZXBseShjb25uLCBYU19FUlJPUiwgeHNkX2Vycm9y
c1tpXS5lcnJzdHJpbmcsDQotICAgICAgICAgICAgICAgICAgICAgICAgIHN0cmxlbih4c2Rf
ZXJyb3JzW2ldLmVycnN0cmluZykgKyAxKTsNCisgICAgICAgZXJyID0gZXJyID8gOiB4c2Rf
ZXJyb3JzW2ldLmVycnN0cmluZzsNCisgICAgICAgc2VuZF9yZXBseShjb25uLCBYU19FUlJP
UiwgZXJyLCBzdHJsZW4oZXJyKSArIDEpOw0KICB9DQoNCiAgdm9pZCBzZW5kX3JlcGx5KHN0
cnVjdCBjb25uZWN0aW9uICpjb25uLCBlbnVtIHhzZF9zb2NrbXNnX3R5cGUgdHlwZSwNCg0K

--------------rPs4yjCoC15pFnFCTYDwYSxA
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------rPs4yjCoC15pFnFCTYDwYSxA--

--------------SQJo3kdLaMyHTGbHpuLA106k--

--------------v0BbcrXxybXE5lMIgIAU42u3
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5v0YFAwAAAAAACgkQsN6d1ii/Ey/V
ywf+IbgX0aCSOCAcAkqsqA7kwdsw57sSMzB1nEBmMKt3R7yu7HJVnWAAqbMYp7LM0l/sQr1erUAn
7HWXNXBOpYZzfL8rRALOorKf0419PxH+y6pllka/+IA1yYpZ0r0qbK35vgIz97vYxOT+sGdSV/TN
M7way2qamhkWIiyw/vbWxl79JFAlW9bSX89LAXEQ+t46s2lFnCrmhB1DUO3aedMIfcd6G/rHqS97
PYqyR/PntLAFWe3GeQoP4C4YMnZ/zcGJWjuoeORRbodfLaTqiYhV8XZQb/524FVLQnZEoNNdlmjT
PZf6Y4/wylGcN7LZHcmRrJdoF3G5Z3vOVurlhxGijQ==
=eNMj
-----END PGP SIGNATURE-----

--------------v0BbcrXxybXE5lMIgIAU42u3--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:49:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356814.585102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q2v-00030e-Vl; Mon, 27 Jun 2022 14:49:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356814.585102; Mon, 27 Jun 2022 14:49:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q2v-00030X-Rb; Mon, 27 Jun 2022 14:49:01 +0000
Received: by outflank-mailman (input) for mailman id 356814;
 Mon, 27 Jun 2022 14:49:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5q2u-00030R-Cv
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:49:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5q2u-0005ah-8s; Mon, 27 Jun 2022 14:49:00 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5q2u-0005hy-3D; Mon, 27 Jun 2022 14:49:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mfYSvZDINBjAxFwi6uDx9hpkJkTCYgdSXiAjyiSkuvU=; b=3peGLiA+zC8uXYwZeZ0yAG3Gd+
	z3my+Ax0bqSTKo0UZWgeGUqiFkDUlwa5P4l6HlXP0bL30UU2MSEGkgcw9Z3iL9YmkjFzs1R//Ih0p
	Gb3zBtGBP9EZaLw6+WwiH2H+nsl1RLaVUOJ2XKqKbD7pSA8nl7ZMkIh+PTtioDTb/Bpo=;
Message-ID: <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
Date: Mon, 27 Jun 2022 15:48:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 27/06/2022 15:31, Juergen Gross wrote:
> On 27.06.22 14:36, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Some tools (e.g. xenstored) always expect EINVAL to be first in 
>> xsd_errors.
>>
>> Document it so, one doesn't add a new entry before hand by mistake.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>> I have tried to add a BUILD_BUG_ON() but GCC complained that the value
>> was not a constant. I couldn't figure out a way to make GCC happy.
>>
>> Changes in v2:
>>      - New patch
>> ---
>>   xen/include/public/io/xs_wire.h | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/xen/include/public/io/xs_wire.h 
>> b/xen/include/public/io/xs_wire.h
>> index c1ec7c73e3b1..dd4c9c9b972d 100644
>> --- a/xen/include/public/io/xs_wire.h
>> +++ b/xen/include/public/io/xs_wire.h
>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>   __attribute__((unused))
>>   #endif
>>       = {
>> +    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first 
>> entry. */
>>       XSD_ERROR(EINVAL),
>>       XSD_ERROR(EACCES),
>>       XSD_ERROR(EEXIST),
> 
> What about another approach, like:

In place of what? I still think we need the comment because this 
assumption is not part of the ABI (AFAICT xs_wire.h is meant to be stable).

At which point, I see limited reason to fix xenstored_core.c.

But I would really have preferred to use a BUILD_BUG_ON() (or similar) so 
we can catch any misuse at build time. Maybe I should write a small program 
that is executed at compile time?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:49:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356817.585113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q3c-0003Us-8f; Mon, 27 Jun 2022 14:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356817.585113; Mon, 27 Jun 2022 14:49:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q3c-0003Ul-5E; Mon, 27 Jun 2022 14:49:44 +0000
Received: by outflank-mailman (input) for mailman id 356817;
 Mon, 27 Jun 2022 14:49:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5q3b-0003UX-84
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:49:43 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5df9b2a7-f628-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 16:49:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 805021F9FC;
 Mon, 27 Jun 2022 14:49:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4CA6213AB2;
 Mon, 27 Jun 2022 14:49:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id b2DuEIXDuWL7RAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 14:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5df9b2a7-f628-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656341381; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/pKS3FRD7md/yjGps2cf8DUF1qB3qBp1ml/VVot9SB4=;
	b=JDE2n/1Pr0DdZx9o9pcnoTi/7qYMnHauI2Cf75QTpFbUyLqfnzMQ52hvsDvb1bnrGsDyo2
	K0hVVl4M/yd9KNzWLUG0c62iqjlj+IZvII2Vtu2CignMezrpo+Rdt0mhZdqOXek6rRnKtw
	68yElBBpishaRKNiCH5Go2+docVu2tg=
Message-ID: <cb96ebcf-46d7-63a8-cb7b-ce274ff7d815@suse.com>
Date: Mon, 27 Jun 2022 16:49:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Ross Lagerwall <ross.lagerwall@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Dongli Zhang <dongli.zhang@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20220627142822.3612106-1-ross.lagerwall@citrix.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
In-Reply-To: <20220627142822.3612106-1-ross.lagerwall@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Q6OdEyPXulR1QIOKe7CfTVZN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Q6OdEyPXulR1QIOKe7CfTVZN
Content-Type: multipart/mixed; boundary="------------0AfsGRMueD1YhcFLMs0qhTyL";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Dongli Zhang <dongli.zhang@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <cb96ebcf-46d7-63a8-cb7b-ce274ff7d815@suse.com>
Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
References: <20220627142822.3612106-1-ross.lagerwall@citrix.com>
In-Reply-To: <20220627142822.3612106-1-ross.lagerwall@citrix.com>

--------------0AfsGRMueD1YhcFLMs0qhTyL
Content-Type: multipart/mixed; boundary="------------uCpcWeqF10wX09TTDmuX3RkV"

--------------uCpcWeqF10wX09TTDmuX3RkV
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 16:28, Ross Lagerwall wrote:
> Currently when the toolstack issues a reboot, it gets translated into a
> call to ctrl_alt_del(). But tying reboot to ctrl-alt-del means rebooting
> may fail if e.g. the user has masked the ctrl-alt-del.target under
> systemd.
> 
> A previous attempt to fix this set the flag to force rebooting when
> ctrl_alt_del() is called.

Sorry, I have problems parsing this sentence.

 > However, this doesn't give userspace the
> opportunity to block rebooting or even do any cleanup or syncing.
> 
> Instead, call orderly_reboot() which will call the "reboot" command,
> giving userspace the opportunity to block it or perform the usual reboot
> process while being independent of the ctrl-alt-del behaviour. It also
> matches what happens in the shutdown case.
> 
> Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> ---
>   drivers/xen/manage.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> index 3d5a384d65f7..c16df629907e 100644
> --- a/drivers/xen/manage.c
> +++ b/drivers/xen/manage.c
> @@ -205,7 +205,7 @@ static void do_poweroff(void)
>   static void do_reboot(void)
>   {
>   	shutting_down = SHUTDOWN_POWEROFF; /* ? */
> -	ctrl_alt_del();
> +	orderly_reboot();
>   }
>   
>   static struct shutdown_handler shutdown_handlers[] = {

The code seems to be fine.

Albeit I wonder whether we shouldn't turn shutting_down into a bool,
as all that seems to be needed is "shutting_down != SHUTDOWN_INVALID"
today. But this could be part of another patch.


Juergen

--------------uCpcWeqF10wX09TTDmuX3RkV--

--------------0AfsGRMueD1YhcFLMs0qhTyL--


--------------Q6OdEyPXulR1QIOKe7CfTVZN--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:50:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356825.585124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q4i-0004xV-L9; Mon, 27 Jun 2022 14:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356825.585124; Mon, 27 Jun 2022 14:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q4i-0004xO-IS; Mon, 27 Jun 2022 14:50:52 +0000
Received: by outflank-mailman (input) for mailman id 356825;
 Mon, 27 Jun 2022 14:50:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5q4g-0004xB-MQ
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:50:50 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86486995-f628-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 16:50:45 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6C98F1F9EB;
 Mon, 27 Jun 2022 14:50:49 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3B2F413AB2;
 Mon, 27 Jun 2022 14:50:49 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id S+ocDcnDuWJ7RQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 14:50:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86486995-f628-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656341449; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=eVRYvn/FfdZC51N4nTWZyS4mKOShDYlGYcsTxLKihvY=;
	b=s18soCF5C7VuO5ztHer2TDoFAFg/XmaEGkvrZxTzBzn943PJe42/Q6mcLKdIZturZgorn8
	v4QglPjqq5ZD6kC/EwE8ziCUmzvdV0AI9aBRzZAljBNBvNWDHF6bQpGF6+J1Bdm2RSWx+p
	1z69Ec5tqjF7qI0Jj6uOgMnK72kQ5d0=
Message-ID: <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
Date: Mon, 27 Jun 2022 16:50:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------S0Zn0279OSf3ZGozI3PWPxwA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------S0Zn0279OSf3ZGozI3PWPxwA
Content-Type: multipart/mixed; boundary="------------YU0EKbw84Hc8q0pvKb6XCjp5";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
Message-ID: <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
In-Reply-To: <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>

--------------YU0EKbw84Hc8q0pvKb6XCjp5
Content-Type: multipart/mixed; boundary="------------2Q0ve0ZPrTlUCUo1NCMLrb9A"

--------------2Q0ve0ZPrTlUCUo1NCMLrb9A
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 16:48, Julien Grall wrote:
> Hi,
> 
> On 27/06/2022 15:31, Juergen Gross wrote:
>> On 27.06.22 14:36, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Some tools (e.g. xenstored) always expect EINVAL to be first in xsd_errors.
>>>
>>> Document it, so one doesn't add a new entry before it by mistake.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ----
>>>
>>> I have tried to add a BUILD_BUG_ON() but GCC complained that the value
>>> was not a constant. I couldn't figure out a way to make GCC happy.
>>>
>>> Changes in v2:
>>>     - New patch
>>> ---
>>>   xen/include/public/io/xs_wire.h | 1 +
>>>   1 file changed, 1 insertion(+)
>>>
>>> diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
>>> index c1ec7c73e3b1..dd4c9c9b972d 100644
>>> --- a/xen/include/public/io/xs_wire.h
>>> +++ b/xen/include/public/io/xs_wire.h
>>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>>   __attribute__((unused))
>>>   #endif
>>>       = {
>>> +    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first entry. */
>>>       XSD_ERROR(EINVAL),
>>>       XSD_ERROR(EACCES),
>>>       XSD_ERROR(EEXIST),
>>
>> What about another approach, like:
> 
> In place of what? I still think we need the comment because this assumption is
> not part of the ABI (AFAICT xs_wire.h is meant to be stable).
> 
> At which point, I see limited reason to fix xenstored_core.c.
> 
> But I would really have preferred to use a BUILD_BUG_ON() (or similar) so we
> can catch any misuse at build time. Maybe I should write a small program that
> is executed at compile time?

My suggestion removes the need for EINVAL being the first entry.


Juergen
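The BUILD_BUG_ON() problem discussed above comes from C itself: an element of a static array initializer is not an integer constant expression, so neither BUILD_BUG_ON() nor _Static_assert() can inspect xsd_errors[0]. A runtime assertion at startup is one possible fallback, sketched below against a simplified model of the table (XSD_ERROR and check_xsd_errors() here mirror the shape of xs_wire.h but are not the real definitions).

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Simplified model of xen/include/public/io/xs_wire.h's error table. */
struct xsd_errors {
    int errnum;
    const char *errstring;
};
#define XSD_ERROR(x) { x, #x }

static struct xsd_errors xsd_errors[] = {
    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first entry. */
    XSD_ERROR(EINVAL),
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
};

/*
 * _Static_assert(xsd_errors[0].errnum == EINVAL, ...) does not compile,
 * because an array element's value is not an integer constant expression.
 * A check run once at init time catches a misordered table at least on
 * the first start of the daemon:
 */
static void check_xsd_errors(void)
{
    assert(xsd_errors[0].errnum == EINVAL);
    assert(strcmp(xsd_errors[0].errstring, "EINVAL") == 0);
}
```

Julien's alternative of a small helper program run at build time would achieve the same effect one step earlier, at the cost of extra build machinery.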

--------------2Q0ve0ZPrTlUCUo1NCMLrb9A--

--------------YU0EKbw84Hc8q0pvKb6XCjp5--


--------------S0Zn0279OSf3ZGozI3PWPxwA--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:52:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356830.585134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q6Z-0005c7-0k; Mon, 27 Jun 2022 14:52:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356830.585134; Mon, 27 Jun 2022 14:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q6Y-0005c0-UO; Mon, 27 Jun 2022 14:52:46 +0000
Received: by outflank-mailman (input) for mailman id 356830;
 Mon, 27 Jun 2022 14:52:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5q6X-0005bl-GE
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 14:52:45 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cac8ba5b-f628-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 16:52:40 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 585871FD82;
 Mon, 27 Jun 2022 14:52:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3393313AB2;
 Mon, 27 Jun 2022 14:52:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id BB5WCzzEuWKKRgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 14:52:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cac8ba5b-f628-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656341564; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=f/608W6b3V+QJOgReAAuAf8lFE7BAi4UQp2sH7G+YrA=;
	b=BSAtwD2ZoTjxliDumlK+yf07TbC1frlCcDyka5q7gkZJcI1ACjgAPXn0RLYpEGZGcOUvN5
	k0PlivRnX263l2mD/Y9T7nW8SBFaS2/ji22XstDARIJkut8suNMCAzEDXaPkkzBRaSyKNv
	H9mcHRC82HuXmc6TShcV1rcJpUGRSn4=
Message-ID: <2d95bb8c-c89d-c5d7-4171-12ba64721480@suse.com>
Date: Mon, 27 Jun 2022 16:52:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 2/2] public/io: xs_wire: Allow Xenstore to report EPERM
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-3-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220627123635.3416-3-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------y5nWgTzwSazHMDaYJwrC0zUl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------y5nWgTzwSazHMDaYJwrC0zUl
Content-Type: multipart/mixed; boundary="------------gr70JMgZa6MzHuIFcIw00xv3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
Message-ID: <2d95bb8c-c89d-c5d7-4171-12ba64721480@suse.com>
Subject: Re: [PATCH v2 2/2] public/io: xs_wire: Allow Xenstore to report EPERM
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-3-julien@xen.org>
In-Reply-To: <20220627123635.3416-3-julien@xen.org>

--------------gr70JMgZa6MzHuIFcIw00xv3
Content-Type: multipart/mixed; boundary="------------xiWKLsm0zzP6Tqy97vZ0a4w2"

--------------xiWKLsm0zzP6Tqy97vZ0a4w2
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 14:36, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> C Xenstored is using EPERM when the client is not allowed to change
> the owner (see GET_PERMS). However, the xenstore protocol doesn't
> describe EPERM, so EINVAL will be sent to the client.
> 
> When writing tests, it would be useful to differentiate between EINVAL
> (e.g. parsing error) and EPERM (i.e. no permission). So extend
> xsd_errors[] to support returning EPERM.
> 
> Looking at the previous time xsd_errors was extended (8b2c441a1b), it was
> considered to be safe to add a new error because at least the Linux driver
> and libxenstore treat an unknown error code as EINVAL.
> 
> This statement doesn't cover other possible OSes, however I am not
> aware of any breakage.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
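The compatibility argument above (clients fall back to EINVAL for an error string they don't recognise, which is why adding EPERM is backwards-safe) can be sketched as a lookup over the table. This is an illustrative helper under assumed, simplified definitions, not libxenstore's or the Linux driver's actual code.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Simplified model of the xs_wire.h error table. */
struct xsd_errors {
    int errnum;
    const char *errstring;
};
#define XSD_ERROR(x) { x, #x }

static struct xsd_errors xsd_errors[] = {
    XSD_ERROR(EINVAL),
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
    XSD_ERROR(EPERM),   /* the entry added by this patch */
};

/* Map an error string received on the wire back to an errno value.
 * Unknown strings fall back to EINVAL, which is what makes extending
 * the table safe for older clients. */
static int xsd_error_to_errno(const char *errstring)
{
    size_t i;

    for (i = 0; i < sizeof(xsd_errors) / sizeof(xsd_errors[0]); i++)
        if (strcmp(errstring, xsd_errors[i].errstring) == 0)
            return xsd_errors[i].errnum;
    return EINVAL;
}
```

An old client without the EPERM entry would simply take the EINVAL fallback path for the new string, matching the behaviour the commit message describes.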

--------------xiWKLsm0zzP6Tqy97vZ0a4w2--

--------------gr70JMgZa6MzHuIFcIw00xv3--


--------------y5nWgTzwSazHMDaYJwrC0zUl--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 14:54:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 14:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356838.585146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q7q-0006C8-Bl; Mon, 27 Jun 2022 14:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356838.585146; Mon, 27 Jun 2022 14:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5q7q-0006C1-8V; Mon, 27 Jun 2022 14:54:06 +0000
Received: by outflank-mailman (input) for mailman id 356838;
 Mon, 27 Jun 2022 14:54:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5q7p-0006Bp-JT; Mon, 27 Jun 2022 14:54:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5q7p-0005h0-GB; Mon, 27 Jun 2022 14:54:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5q7p-0007bm-3f; Mon, 27 Jun 2022 14:54:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5q7p-0005bX-3B; Mon, 27 Jun 2022 14:54:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jO+oT/b81AyVlex2WHjFF7yG5gRvshCkYa97ZFMGx2o=; b=HO3qlzlLN0m6Zi9/1FbWtqqITY
	2ueuXwec+lEeO+HPs7WXcmTOnXIyOWxalZnfoQrq1WGlldvFvEKxIaNOP0EGKr6R+3fmb1QNIs3G1
	vy/7RxwezVI5bu20offsxEI7tLGlmdydLBKlHl7YELd32AJi0DqtPnDR4ztXKW6Y+5tk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171367: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc
X-Osstest-Versions-That:
    qemuu=40d522490714b65e0856444277db6c14c5cc3796
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 14:54:05 +0000

flight 171367 qemu-mainline real [real]
flight 171370 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171367/
http://logs.test-lab.xenproject.org/osstest/logs/171370/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd       22 guest-start.2            fail REGR. vs. 171348

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171348
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171348
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171348
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171348
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171348
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171348
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171348
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171348
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc
baseline version:
 qemuu                40d522490714b65e0856444277db6c14c5cc3796

Last test of basis   171348  2022-06-25 01:39:21 Z    2 days
Testing same since   171367  2022-06-27 06:39:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Helge Deller <deller@gmx.de>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 845 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 15:04:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 15:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356847.585157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qHJ-0007uD-Fz; Mon, 27 Jun 2022 15:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356847.585157; Mon, 27 Jun 2022 15:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qHJ-0007u6-Ca; Mon, 27 Jun 2022 15:03:53 +0000
Received: by outflank-mailman (input) for mailman id 356847;
 Mon, 27 Jun 2022 15:03:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o5qHI-0007u0-5T
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 15:03:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5qHH-0005sQ-Vi; Mon, 27 Jun 2022 15:03:51 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[192.168.2.226]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o5qHH-0006RW-Pi; Mon, 27 Jun 2022 15:03:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NRtOrs33NpR65ofFeatqjuzydiEEvK49qJdVP+UX3Zg=; b=07hyKt8+90juTH6njdrOxLSpkH
	iK0DIN4GQkdQUF9a9kdfQBAZlfNLWhtr0JGXtEuwXmXmitUijMFFeI6H0Cx5DqRjtoIfMaNjKaOXV
	fbgQo904/jjT6nC+9+Gs40IEb/9pIBU7NGLjZgPUn9VYrfS9mH6IHMWRlD8RlhSLMvWA=;
Message-ID: <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>
Date: Mon, 27 Jun 2022 16:03:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
 <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 27/06/2022 15:50, Juergen Gross wrote:
> On 27.06.22 16:48, Julien Grall wrote:
>> Hi,
>>
>> On 27/06/2022 15:31, Juergen Gross wrote:
>>> On 27.06.22 14:36, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Some tools (e.g. xenstored) always expect EINVAL to be first in 
>>>> xsd_errors.
>>>>
>>>> Document it, so one doesn't add a new entry ahead of it by mistake.
>>>>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> ----
>>>>
>>>> I have tried to add a BUILD_BUG_ON() but GCC complained that the value
>>>> was not a constant. I couldn't figure out a way to make GCC happy.
>>>>
>>>> Changes in v2:
>>>>      - New patch
>>>> ---
>>>>   xen/include/public/io/xs_wire.h | 1 +
>>>>   1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/xen/include/public/io/xs_wire.h 
>>>> b/xen/include/public/io/xs_wire.h
>>>> index c1ec7c73e3b1..dd4c9c9b972d 100644
>>>> --- a/xen/include/public/io/xs_wire.h
>>>> +++ b/xen/include/public/io/xs_wire.h
>>>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>>>   __attribute__((unused))
>>>>   #endif
>>>>       = {
>>>> +    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the 
>>>> first entry. */
>>>>       XSD_ERROR(EINVAL),
>>>>       XSD_ERROR(EACCES),
>>>>       XSD_ERROR(EEXIST),
>>>
>>> What about another approach, like:
>>
>> In place of what? I still think we need the comment because this 
>> assumption is not part of the ABI (AFAICT xs_wire.h is meant to be 
>> stable).
>>
>> At which point, I see limited reason to fix xenstored_core.c.
>>
>> But I would really have preferred to use a BUILD_BUG_ON() (or similar) so
>> we can catch any misuse at build time. Maybe I should write a small program
>> that is executed at compile time?
> 
> My suggestion removes the need for EINVAL being the first entry

xsd_errors[] is part of the stable ABI. If Xenstored is already
"misusing" it, then I wouldn't be surprised if other software relies on
this as well.

Therefore, I don't really see how fixing Xenstored would allow us to 
remove this restriction.

The only reason this was spotted is that Jan reviewed C Xenstored.
Without that, it would probably have taken a long time to notice
this change (I don't think there are many errnos used by both Xenstored
and xsd_errors). So I think the risk is not worth the effort.

At least, this is not a patch I would be willing to have my name on 
(either as a signed-off-by or acked-by).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 15:13:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 15:13:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356855.585170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qQV-00012m-BR; Mon, 27 Jun 2022 15:13:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356855.585170; Mon, 27 Jun 2022 15:13:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qQV-00012f-8j; Mon, 27 Jun 2022 15:13:23 +0000
Received: by outflank-mailman (input) for mailman id 356855;
 Mon, 27 Jun 2022 15:13:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5qQU-00012Z-CM
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 15:13:22 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abbb9691-f62b-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 17:13:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 954961FA63;
 Mon, 27 Jun 2022 15:13:20 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6DCF513AB2;
 Mon, 27 Jun 2022 15:13:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id +ixTGRDJuWLUUAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 15:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abbb9691-f62b-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656342800; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=RBbt/XB73h/kq6kSZdzu9YQ6+tvIXQVXwroWuJNZ790=;
	b=BrgNjrAVzwjxBnZ5DOZ6GKwMNA170zBi0MezWo1DM0SvtPtcbJZo/uDaWGnWnUu04OS7RY
	mSn5Kt4TXfzBTyvGY1KHJGrbph5gX7Fy1MRBnRZY+PqqrEGWfIoEzzAqHnOaZMFjoCgejV
	7jfCoQvLf0azoHE884g5Bz95S001uk8=
Message-ID: <c6ed6696-74f1-824f-5f64-f016284e3348@suse.com>
Date: Mon, 27 Jun 2022 17:13:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
 <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
 <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
In-Reply-To: <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------UNfAah1Fnrsewn9BpaI7wW0B"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------UNfAah1Fnrsewn9BpaI7wW0B
Content-Type: multipart/mixed; boundary="------------t8F5Xlm7wXgGnvbo8rW8T2Gp";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
Message-ID: <c6ed6696-74f1-824f-5f64-f016284e3348@suse.com>
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
 <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
 <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>
In-Reply-To: <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>

--------------t8F5Xlm7wXgGnvbo8rW8T2Gp
Content-Type: multipart/mixed; boundary="------------7gcikDgpxpYVrwMGbJkfofhT"

--------------7gcikDgpxpYVrwMGbJkfofhT
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.06.22 17:03, Julien Grall wrote:
> Hi Juergen,
>
> On 27/06/2022 15:50, Juergen Gross wrote:
>> On 27.06.22 16:48, Julien Grall wrote:
>>> Hi,
>>>
>>> On 27/06/2022 15:31, Juergen Gross wrote:
>>>> On 27.06.22 14:36, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> Some tools (e.g. xenstored) always expect EINVAL to be first in xsd_errors.
>>>>>
>>>>> Document it, so one doesn't add a new entry ahead of it by mistake.
>>>>>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> ----
>>>>>
>>>>> I have tried to add a BUILD_BUG_ON() but GCC complained that the value
>>>>> was not a constant. I couldn't figure out a way to make GCC happy.
>>>>>
>>>>> Changes in v2:
>>>>>     - New patch
>>>>> ---
>>>>>   xen/include/public/io/xs_wire.h | 1 +
>>>>>   1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
>>>>> index c1ec7c73e3b1..dd4c9c9b972d 100644
>>>>> --- a/xen/include/public/io/xs_wire.h
>>>>> +++ b/xen/include/public/io/xs_wire.h
>>>>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>>>>   __attribute__((unused))
>>>>>   #endif
>>>>>       = {
>>>>> +    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first
>>>>> entry. */
>>>>>       XSD_ERROR(EINVAL),
>>>>>       XSD_ERROR(EACCES),
>>>>>       XSD_ERROR(EEXIST),
>>>>
>>>> What about another approach, like:
>>>
>>> In place of what? I still think we need the comment because this assumption
>>> is not part of the ABI (AFAICT xs_wire.h is meant to be stable).
>>>
>>> At which point, I see limited reason to fix xenstored_core.c.
>>>
>>> But I would really have preferred to use a BUILD_BUG_ON() (or similar) so we can
>>> catch any misuse at build time. Maybe I should write a small program that is
>>> executed at compile time?
>>
>> My suggestion removes the need for EINVAL being the first entry
>
> xsd_errors[] is part of the stable ABI. If Xenstored is already "misusing" it,
> then I wouldn't be surprised if other software relies on this as well.

Xenstored is the only instance which needs a translation from value to
string, while all other users should need only the opposite direction.
The only other candidate would be oxenstored, but that seems not to use
xsd_errors[].

And in fact libxenstore will just return a plain EINVAL in case it
can't find a translation, while hvmloader will return EIO in that case.

With your reasoning and the hvmloader use case you could argue that
the EIO entry needs to stay at the same position, too.

> Therefore, I don't really see how fixing Xenstored would allow us to remove this
> restriction.
>
> The only reason this was spotted is that Jan reviewed C Xenstored. Without that,
> it would probably have taken a long time to notice
> this change (I don't think there are many errnos used by both Xenstored
> and xsd_errors). So I think the risk is not worth the effort.

I don't see a real risk here, but if there is consensus that the risk should
not be taken, then I'd rather add a comment that new entries are only allowed
to be added at the end of the array.

> At least, this is not a patch I would be willing to have my name on (either as a
> signed-off-by or acked-by).

Fair enough. :-)


Juergen
--------------7gcikDgpxpYVrwMGbJkfofhT
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7gcikDgpxpYVrwMGbJkfofhT--

--------------t8F5Xlm7wXgGnvbo8rW8T2Gp--

--------------UNfAah1Fnrsewn9BpaI7wW0B
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5yRAFAwAAAAAACgkQsN6d1ii/Ey9T
xAf7BzZvvCgCfYom6Zw+I8G84tZhrVLDo2CHsAJnMK1qIcke7JKfaLwwQRxloTXucYPerq3BwzS6
zk5Uo6cKQy+cLpLIFqIG19S8YJFCUiCYab9U8pRN2R48/6sgQQ9At/1JyUyGBvZu9Ylel5kueW+f
WSqUca2toJ6ZDrsL00Yto6/duhlEfEiboh1yxMqZKAhGFMw3kMvuOVF3nzb1P7gA+g2yCR6AO4T9
d/Jtss/HfY+lC1uQoMwZoGg4NBucSjvbAOFDBmUc2lT3hcDGS28cSAJm+loNyRoaPHCU4aGcwvpy
z7hneLHvcey6lXyYw3XMLgwH8BFW4oh6DFPmG5mHDw==
=OGjs
-----END PGP SIGNATURE-----

--------------UNfAah1Fnrsewn9BpaI7wW0B--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 15:30:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 15:30:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356861.585182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qgd-000382-Qf; Mon, 27 Jun 2022 15:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356861.585182; Mon, 27 Jun 2022 15:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qgd-00037K-M1; Mon, 27 Jun 2022 15:30:03 +0000
Received: by outflank-mailman (input) for mailman id 356861;
 Mon, 27 Jun 2022 15:30:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z0SW=XC=citrix.com=prvs=170a910b0=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1o5qgb-0002qH-TK
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 15:30:01 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe2a3901-f62d-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 17:29:55 +0200 (CEST)
Received: from mail-mw2nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Jun 2022 11:29:56 -0400
Received: from PH0PR03MB6382.namprd03.prod.outlook.com (2603:10b6:510:ab::9)
 by DM6PR03MB4956.namprd03.prod.outlook.com (2603:10b6:5:1e6::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Mon, 27 Jun
 2022 15:29:55 +0000
Received: from PH0PR03MB6382.namprd03.prod.outlook.com
 ([fe80::8815:10ec:1816:ec3f]) by PH0PR03MB6382.namprd03.prod.outlook.com
 ([fe80::8815:10ec:1816:ec3f%6]) with mapi id 15.20.5373.018; Mon, 27 Jun 2022
 15:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe2a3901-f62d-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656343799;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=s2wPi/TeYboNn+cjx+EIVAVEvku3I53J2WHzMSAuXps=;
  b=dQhLqrw0e7nAkROzhyUJbMxXNLSq9wdNek3xOxswM79+k946X7rZvX1l
   lNkW2orHstuoDeDSBBMvNYsE+Ezh6AOzDH62xl19PqgiByZmbiWfCRzL4
   eJdklADMYMazN4SnRYhBdn1bPfnvISr5WLTY3R1cIRWMwqKOq38pfawUn
   4=;
X-IronPort-RemoteIP: 104.47.55.101
X-IronPort-MID: 74933334
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:rMC77Kj/np0rh+jMJU3v88GyX161sREKZh0ujC45NGQN5FlHY01je
 htvXTyHOP+CZGrzKNF2PNjk8htQvpTXytIxTgpqrCE3Rikb9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CU6jefSLlbFILas1hpZHGeIcw98z0M58wIFqtQw24LhXVnS4
 YqaT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YSwIZY39tsYXaSZdAQN1PfAFpJqEA2fq5KR/z2WeG5ft69NHKRhueKgnoKNwC2wI8
 uEEIjcQaBzFn/ix3L+wVuhrgIIkMdXvO4Qc/HpnyFk1D95/GcyFH/qMuIIehW9p7ixNNa+2i
 84xcz10d1LEahJCIEgeDJYWl+a0nHjvNTZfrTp5oIJovzmIl1cqjNABNvKJe/vRZIZ2hHyEm
 U7Y2WvHDwoeGYamnG/tHnWEw7WncTnAcIAdDrqj7dZxnUaegGcUDXU+RVa95PW0lEO6c9ZeM
 FAPvDojq7Ao806mRcW7WAe3yFafpQIVUddUF+w86SmOx7DS7gLfAXILJhZGbNElr8YwSSYdy
 k6Sn9jpCDpst5WYUXuYsLyTqFuaNS8TImsDIz0ERA0Ky975qYo3g1TESdMLOKSylNzuXzbr3
 yqNsjM9lp0Ul8cA06j99lfC6w9AvbDMRw8xowDIBGSs61ohYJb/PtTxr1/G8fxHMYCVCEGbu
 2QJkNSf6+ZICoyRkCuKQ6MGG7TBC+u5DQAwSGVHR/EJnwlBMVb6FWyMyFmS/HtUD/s=
IronPort-HdrOrdr: A9a23:IgcJpaH7A4TEEMSHpLqFWJHXdLJyesId70hD6qkvc3Fom52j/f
 xGws5x6fatskdpZJkh8erwW5VoMkmsjaKdhrNhdotKPTOW8FdASbsP0WKM+UyGJ8STzI9gPM
 RbAuJD4b/LfD5HZK/BiWHWferIq+P3kpxA8N2uq0uFOjsaDp2IgT0YNi+rVmlNACVWD5swE5
 SRouBdoSC7RHgRZsOnQlEYQunqvbTw5d7bSC9DIyRixBiFjDuu5rK/OQOfxA0iXzRGxqpn2X
 TZkjb++r6ov5iAu1DhPi7ontprcenau5t+7f+3+4sow/LX+0SVjbFaKvy/VfYO0aSSARgR4Z
 3xSlwbTr9OAjvqDxuISF3WqkTdOX8VmgPfIVP0uwqfneXpAD09EMZPnoRfb1/Q7Fchpsh11O
 ZR03uerIc/N2K2oM3R3am8a/hRrDvBnVMy1eoIy3BPW4oXb7Fc6YQZ4UNOCZ8FWCb38pouHu
 ViBNzVoK8+SyLSU1nJ+m10hNC8VHU6GRmLBkAEp8yOyjBT2HR01VERysATlmoJsJg9V55H7e
 LZNbkArsA5cuYGKaZmQOsRS8q+DWLABRrKLWKJOFziULoKPnrcwqSHkondJNvaC6Dg4KFC5q
 gpCmkoylLaU3ieePGmzdlM7g3HRnm7UHDk1txejqIJyoHBeA==
X-IronPort-AV: E=Sophos;i="5.92,226,1650945600"; 
   d="scan'208";a="74933334"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MhrDtcYVhzRyCMtbMK9fVFCGoxWpAt5LQzDX7JRnFYzCy7LYtszyxH6owgsPkP9NA1SpeGP/JCn6wGLP7ySr48jUUUXwBQdvnsbwgg8v3L50KJbvX4C3Boyu7BLhCXX1ok8Eog62kPr3qAOXYkTv1N3Xe4sgJpWtwSDKACixuswkzjHLnrXL2pvN1+GgrmHaDNAZT0RzdcRTDTmfMX1MoxlStZpSBvVnhcws42ADPkryJXsdMsUTVA4dcZZXgcjU4Rwb7V+4Qu8mBdEYQF5Ul5yKoqq6H6ldduaU5m4fgtwvRgIiPTmjz+hsYMWPspCzEv+nxdGAkwmzKrTXTKrA8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1z7Wy4bj5krUpA7LosAUE0/1btvlK4cd9rZX5jjNJ4Y=;
 b=Cp80QgnGTpkd36JWFVnvEttjXsd7RlpczgOVQ0uShA6LPjtDK1Z7nwlYYbtibO046MYqgiZ3u4MOoipcXBNXZ2v+RN8M+jl/rd+U+TTxYZPG1WtYvK6zCGepr/KKZfotmCMMXySp3sWNpEBgExTyR65QIBtLSgJEDFLq2IUkSIle2dc1gUyZ0F3RsX/a7Qv15Wq/Wf7KWbiQfyZ4kwuTyZpR4wf3Dl9n7VDDzOiLZ3i66wCaW/gdhrFVu6JbVZItZrWcIR42WhDVjp4bJEcC6Gc2zVIXrCYKeIDW72d+L3/pgU4sqOh/Gqev2dlJbnENCHwy1bRdEelfkyjb7kwH3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1z7Wy4bj5krUpA7LosAUE0/1btvlK4cd9rZX5jjNJ4Y=;
 b=MEuOwdZ1ZbEVjqwvO9asepQGUiUpC4nGvD7xFjg3+vigrQPjvn8z3hBezCvsUp9uBP9PzLMO+50/BwBFjcMrf/ZuOfialQpupdCnTMmyUl/2uUVU3B72E9PuHs7IAlphrIj8dhOrS6gdEbGahTcBMgLPSl3PWs/a2doIlp8NYsA=
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Dongli Zhang <dongli.zhang@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
Thread-Topic: [PATCH] xen/manage: Use orderly_reboot() to reboot
Thread-Index: AQHYijk2RX7/puuRFEiy7ev5zhwyTw==
Date: Mon, 27 Jun 2022 15:29:55 +0000
Message-ID:
 <PH0PR03MB6382AA82CA7C475532E6AFE6F0B99@PH0PR03MB6382.namprd03.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 04b2d3d1-c1b1-448a-b76d-08da5851e322
x-ms-traffictypediagnostic: DM6PR03MB4956:EE_
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 hkOcL9Qyqp6RzRMnQvKrjVDcN+fR39QJYM/Rq8S4tPmoApJuM+7fgp4PoA22ZOihGxIO1seaBKd9J7TC3OfC7sZfNsbzAhppArWkPOf9jiJjynkClAyk3NJF/GUvAVy1LouBk3+UmPgQFmjq5e9Ashn6CnFKEDL2CF9zDg/V11wrYYOfNV5fe96aBWzfZwTEyLNwt855N7Eh5stRUxsi7SVd2erDWZUdM7Lq4hj+Mq40atcOmbWWBjjINx0pQ+D7tAZo6T16uU8lvRhhCcu7UCIWusP/eqQjPAVPrNRBFNrb0TXNpo4rjsaEdekST1O4+V1DoFtut0fgg3Z7TwsC72tz3MNSQOupzbwPox/LcpT7FEGDWXJGA+nwJmDb8Kl8pR4gvXVW5XoGpP3OxVU9MtoYQ4F2O2z+2nxVPNVQQHcpH6quC16ehgvPOWacQd/Nk9jNlc9ymDBt6PQ7qCYMYrSUlIWTjqlVhbLYTNfTIKjD/lk0FEhkN4OWXWq9fwowem7ZRmWig03FkEx7p9qEMxnJWfSSpdtn32Xlrfdq9U2sO9v8FDI+LLwpocHGstpI31xzKZW80XTuF92QJV3rwYgA862rWKQyYsUaJzhFBgcPAG+Bk0p+2FAd83DGglTxj7g0FhxrHAzmpgqrnPYTfWBxrbnx4hU4GOqyiAT1tIxvB+AL4Hn+axN9BxxBN5J91rXeBrBxzjeh91z5zAogr1SYZeLxK88B1O8vQex7+xtksdqHdxR9JvVF5jn3vLXDlCs5La9faa/Yv+g5dsNTZiKU4Pc6O1V49sWo6wnZVho=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB6382.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(346002)(396003)(376002)(366004)(136003)(39860400002)(91956017)(9686003)(8676002)(64756008)(82960400001)(55016003)(4326008)(52536014)(5660300002)(66476007)(478600001)(66946007)(8936002)(66556008)(26005)(44832011)(76116006)(83380400001)(66446008)(71200400001)(122000001)(38070700005)(316002)(6506007)(33656002)(110136005)(7696005)(186003)(2906002)(53546011)(54906003)(38100700002)(86362001)(41300700001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-1?Q?QCp6/Ugk6G65kDooi3gYwDOVgoTdMWmo4y/pYiLAzjQfX5wCIcItHnBqdd?=
 =?iso-8859-1?Q?O51YbF5uh9ZO9E4rery2CPhSlePAr3JHEoyzL7hT1wxzxhawB+yUX1dXU9?=
 =?iso-8859-1?Q?pe60E+h9vU5j7TtSXh73Q7sWBEEury5C15mMNQy1mtniB9N3FHFFkV57XK?=
 =?iso-8859-1?Q?Prkct6ZYcUTOY7U2uM+FAYcgeI/1X1pr20+S9AyWI4zeevnp52eBIxC83l?=
 =?iso-8859-1?Q?j7r6nibkfXuj2odmhxf369qe4QEMVl0aprNIa3rGwiJyerbVRfyNfnJXPz?=
 =?iso-8859-1?Q?OdTZJwTM/2k6/7nnu1JsQokbeV1eomS5nBgqGWTkRO9NVHVFakOrTDkBji?=
 =?iso-8859-1?Q?1htPawZ40twnjZDd+xzqFVVeCIgSf8HGzr0gYuXRlSiawhuH3KyhqhVE/J?=
 =?iso-8859-1?Q?DZc+4+xTWXfLHs/orC3Mu9VM3f6C+Z1YCUqizK7S/nufuP/KykWYY8putf?=
 =?iso-8859-1?Q?PxapB9kKUL+itxj1tU18UqFiZ6w0UmSJoygfPF8EiH5wThPFIzktuIQA5b?=
 =?iso-8859-1?Q?9n8/Kkxnk7WgzzICdhjPcwoaFqka3XLJrKe6HwaV4xJI02jyzrLgPN4ska?=
 =?iso-8859-1?Q?ebShdwGlCL1ajtbD9V0cIrFXxsIm/OKCmM1BQbJ2Wso9QzKM9sUVoWaHq6?=
 =?iso-8859-1?Q?YvVHP0SToyUz+W9rqGZ5SQGGFLzhYDpELqZg+TvjekdhSpGPTeO+bWDTE4?=
 =?iso-8859-1?Q?b9W5Sh5g9yFzwDWz3jDbSweFTDVJzjtxo3ITfHUIrY6DAJS6Tp6GeQKaEb?=
 =?iso-8859-1?Q?isepLgUts3Jiz7ZWQ18jTbG3zZUy2uaxCnapv89y6/WrTLRg0ytNsw9Pgz?=
 =?iso-8859-1?Q?51LSSzYcSK1YArpVMU8cGZWc+W4ymCt0T+weKvPW5WytZqfEMR4JFTOqyt?=
 =?iso-8859-1?Q?5q6n+PQS7GGXfru8EcrMsCsgIHBiqzoaJ7ue1c/8pCKL/oG+avEw9eXcNx?=
 =?iso-8859-1?Q?VSbuB+4or5KfYAIbGlQ8cX8X1YbBcLWfPMTCb+b20MMHvry/mIQkNEjjcH?=
 =?iso-8859-1?Q?U6nZszWVEmAAYXst1oDv30nMNVtK8gKQpipliYVhHKlZR+ggI3iE6mA95C?=
 =?iso-8859-1?Q?n4B0Rf1zgyqDZ4rf3NbPMBe+VKd0ET1u4Aa/Fn4E3SBTTUGJdbXmCuoZ7v?=
 =?iso-8859-1?Q?hCH8+uXircNr1nGjZxbz/coGbYtbKqgR7Y0ap8ZkfpB9SBX/jUrfDvXxZh?=
 =?iso-8859-1?Q?JoLEIWpj3iEWXId4SHXSaHZjJOf+elwVQbYzSVOTyb0mEvUZT+2icMbJCA?=
 =?iso-8859-1?Q?tCXwYlUjIcyq0fGOzppa0dzuZwJdXBtQglpdgZ9pSWAqnKhgHmSlg5OuY5?=
 =?iso-8859-1?Q?AoEt0AUIqkphfvMQNkaLkvGu6VRe70+293vqESbcJaqUAzLcAhhjD88ywF?=
 =?iso-8859-1?Q?ueVrirHYVQh4XVGhBUjBehXMfv3ihRskSucL0xZKWP7whz4fpTIDy+RolO?=
 =?iso-8859-1?Q?9cYkoiHNM9tSElNAUVxGaYhboiDhlfq73bwKWdT20inYW5NwjjWhtjY3QX?=
 =?iso-8859-1?Q?LiEgsW2eA7vE2OYbLzj6vKlXWiBFQAI5xO/ixs3FmIbR54uV2GB0LKE1f/?=
 =?iso-8859-1?Q?0wXeh2Cj7CZqeLsYTWRAWmHoNrcsSC01BFj7ZU+P596SOHz5EcnODMBpU3?=
 =?iso-8859-1?Q?I4doHtCx67MwbdYov2ML3ypZOcNWFfoSoI?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB6382.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 04b2d3d1-c1b1-448a-b76d-08da5851e322
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Jun 2022 15:29:55.2020
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: UUztJvjlxRYTnZqh4CfvXqR9K/uzAX3o1zAXO1+ulIjmrvEeTw5WpCJm3eZh9ut5AALSbNwH3NfnuLJSPg1lMudf62s/ppAqiwtloEuh5WE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4956

> From: Juergen Gross
> Sent: Monday, June 27, 2022 3:49 PM
> To: Ross Lagerwall; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini; Oleksandr Tyshchenko; Dongli Zhang; Boris Ostrovsky
> Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
>
> On 27.06.22 16:28, Ross Lagerwall wrote:
> > Currently when the toolstack issues a reboot, it gets translated into a
> > call to ctrl_alt_del(). But tying reboot to ctrl-alt-del means rebooting
> > may fail if e.g. the user has masked the ctrl-alt-del.target under
> > systemd.
> >
> > A previous attempt to fix this set the flag to force rebooting when
> > ctrl_alt_del() is called.
>
> Sorry, I have problems parsing this sentence.

Probably because it is poorly worded... How about this?

A previous attempt to fix this issue made a change that sets the
kernel.ctrl-alt-del sysctl to 1 before ctrl_alt_del() is called.

>
> > However, this doesn't give userspace the
> > opportunity to block rebooting or even do any cleanup or syncing.
> >
> > Instead, call orderly_reboot() which will call the "reboot" command,
> > giving userspace the opportunity to block it or perform the usual reboot
> > process while being independent of the ctrl-alt-del behaviour. It also
> > matches what happens in the shutdown case.
> >
> > Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> > ---
> >   drivers/xen/manage.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> > index 3d5a384d65f7..c16df629907e 100644
> > --- a/drivers/xen/manage.c
> > +++ b/drivers/xen/manage.c
> > @@ -205,7 +205,7 @@ static void do_poweroff(void)
> >   static void do_reboot(void)
> >   {
> >        shutting_down = SHUTDOWN_POWEROFF; /* ? */
> > -     ctrl_alt_del();
> > +     orderly_reboot();
> >   }
> >
> >   static struct shutdown_handler shutdown_handlers[] = {
>
> The code seems to be fine.
>
> Albeit I wonder whether we shouldn't turn shutting_down into a bool,
> as all that seems to be needed is "shutting_down != SHUTDOWN_INVALID"
> today. But this could be part of another patch.
>

Agreed that shutting_down could be a bool but better to change it
in a separate patch.

Thanks,
Ross


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 15:34:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 15:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356867.585193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qkl-0004EN-Hc; Mon, 27 Jun 2022 15:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356867.585193; Mon, 27 Jun 2022 15:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5qkl-0004EG-EU; Mon, 27 Jun 2022 15:34:19 +0000
Received: by outflank-mailman (input) for mailman id 356867;
 Mon, 27 Jun 2022 15:34:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z6rY=XC=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o5qkk-0004EA-Q9
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 15:34:18 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98b58dfe-f62e-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 17:34:13 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1B73B21DB0;
 Mon, 27 Jun 2022 15:34:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id CF1A813AB2;
 Mon, 27 Jun 2022 15:34:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id a24tMfjNuWKtWQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 27 Jun 2022 15:34:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b58dfe-f62e-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656344057; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=C+S45zKi9V9sA4MZmEglV2Ia55SYVUyIjqCZ749k8i8=;
	b=Hj7eOjPjkapZulPPPJxHR57PyT86RqV6elMpGZbiAJ7R6ERxsdnqGK1y+cMRzFTb/YPyFJ
	SBYXCWCybXcmi2oi4tCBKCeuzFmDP7hcnVDshRP3qe9Untm97mK8TO/FOoKSo8ZGSdEnX2
	n5iHXldBIl61uE6/AVPqxSDcvoXEIGQ=
Message-ID: <cb414ba0-d049-51d4-3e44-e52785119ea1@suse.com>
Date: Mon, 27 Jun 2022 17:34:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
Content-Language: en-US
To: Ross Lagerwall <ross.lagerwall@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Dongli Zhang <dongli.zhang@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <PH0PR03MB6382AA82CA7C475532E6AFE6F0B99@PH0PR03MB6382.namprd03.prod.outlook.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <PH0PR03MB6382AA82CA7C475532E6AFE6F0B99@PH0PR03MB6382.namprd03.prod.outlook.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------RtPbTXkN908i7MtQcQ37OQh8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------RtPbTXkN908i7MtQcQ37OQh8
Content-Type: multipart/mixed; boundary="------------B2bve18atiGrvFHMzoO9TShl";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Dongli Zhang <dongli.zhang@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <cb414ba0-d049-51d4-3e44-e52785119ea1@suse.com>
Subject: Re: [PATCH] xen/manage: Use orderly_reboot() to reboot
References: <PH0PR03MB6382AA82CA7C475532E6AFE6F0B99@PH0PR03MB6382.namprd03.prod.outlook.com>
In-Reply-To: <PH0PR03MB6382AA82CA7C475532E6AFE6F0B99@PH0PR03MB6382.namprd03.prod.outlook.com>

--------------B2bve18atiGrvFHMzoO9TShl
Content-Type: multipart/mixed; boundary="------------8YebmerxXxTtcf9Ro2sHewd5"

--------------8YebmerxXxTtcf9Ro2sHewd5
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjcuMDYuMjIgMTc6MjksIFJvc3MgTGFnZXJ3YWxsIHdyb3RlOg0KPj4gRnJvbTogSnVl
cmdlbiBHcm9zcw0KPj4gU2VudDogTW9uZGF5LCBKdW5lIDI3LCAyMDIyIDM6NDkgUE0NCj4+
IFRvOiBSb3NzIExhZ2Vyd2FsbDsgeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnDQo+
PiBDYzogU3RlZmFubyBTdGFiZWxsaW5pOyBPbGVrc2FuZHIgVHlzaGNoZW5rbzsgRG9uZ2xp
IFpoYW5nOyBCb3JpcyBPc3Ryb3Zza3kNCj4+IFN1YmplY3Q6IFJlOiBbUEFUQ0hdIHhlbi9t
YW5hZ2U6IFVzZSBvcmRlcmx5X3JlYm9vdCgpIHRvIHJlYm9vdA0KPj4gICAgICAgICAgICAg
IA0KPj4gICAgICAgICAgICAgDQo+PiBPbiAyNy4wNi4yMiAxNjoyOCwgUm9zcyBMYWdlcndh
bGwgd3JvdGU6DQo+Pj4gQ3VycmVudGx5IHdoZW4gdGhlIHRvb2xzdGFjayBpc3N1ZXMgYSBy
ZWJvb3QsIGl0IGdldHMgdHJhbnNsYXRlZCBpbnRvIGENCj4+PiBjYWxsIHRvIGN0cmxfYWx0
X2RlbCgpLiBCdXQgdHlpbmcgcmVib290IHRvIGN0cmwtYWx0LWRlbCBtZWFucyByZWJvb3Rp
bmcNCj4+PiBtYXkgZmFpbCBpZiBlLmcuIHRoZSB1c2VyIGhhcyBtYXNrZWQgdGhlIGN0cmwt
YWx0LWRlbC50YXJnZXQgdW5kZXINCj4+PiBzeXN0ZW1kLg0KPj4+DQo+Pj4gQSBwcmV2aW91
cyBhdHRlbXB0IHRvIGZpeCB0aGlzIHNldCB0aGUgZmxhZyB0byBmb3JjZSByZWJvb3Rpbmcg
d2hlbg0KPj4+IGN0cmxfYWx0X2RlbCgpIGlzIGNhbGxlZC4NCj4+DQo+PiBTb3JyeSwgSSBo
YXZlIHByb2JsZW1zIHBhcnNpbmcgdGhpcyBzZW50ZW5jZS4NCj4gDQo+IFByb2JhYmx5IGJl
Y2F1c2UgaXQgaXMgcG9vcmx5IHdvcmRlZC4uLiBIb3cgYWJvdXQgdGhpcz8NCj4gDQo+IEEg
cHJldmlvdXMgYXR0ZW1wdCB0byBmaXggdGhpcyBpc3N1ZSBtYWRlIGEgY2hhbmdlIHRoYXQg
c2V0cyB0aGUNCj4ga2VybmVsLmN0cmwtYWx0LWRlbCBzeXNjdGwgdG8gMSBiZWZvcmUgY3Ry
bF9hbHRfZGVsKCkgaXMgY2FsbGVkLg0KDQpZZXMsIG11Y2ggYmV0dGVyLg0KDQpXaXRoIHRo
YXQgKGNhbiBiZSBkb25lIHdoaWxlIGNvbW1pdHRpbmcpOg0KDQpSZXZpZXdlZC1ieTogSnVl
cmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPg0KDQo+IA0KPj4NCj4+PiBIb3dldmVyLCB0
aGlzIGRvZXNuJ3QgZ2l2ZSB1c2Vyc3BhY2UgdGhlDQo+Pj4gb3Bwb3J0dW5pdHkgdG8gYmxv
Y2sgcmVib290aW5nIG9yIGV2ZW4gZG8gYW55IGNsZWFudXAgb3Igc3luY2luZy4NCj4+Pg0K
Pj4+IEluc3RlYWQsIGNhbGwgb3JkZXJseV9yZWJvb3QoKSB3aGljaCB3aWxsIGNhbGwgdGhl
ICJyZWJvb3QiIGNvbW1hbmQsDQo+Pj4gZ2l2aW5nIHVzZXJzcGFjZSB0aGUgb3Bwb3J0dW5p
dHkgdG8gYmxvY2sgaXQgb3IgcGVyZm9ybSB0aGUgdXN1YWwgcmVib290DQo+Pj4gcHJvY2Vz
cyB3aGlsZSBiZWluZyBpbmRlcGVuZGVudCBvZiB0aGUgY3RybC1hbHQtZGVsIGJlaGF2aW91
ci4gSXQgYWxzbw0KPj4+IG1hdGNoZXMgd2hhdCBoYXBwZW5zIGluIHRoZSBzaHV0ZG93biBj
YXNlLg0KPj4+DQo+Pj4gU2lnbmVkLW9mZi1ieTogUm9zcyBMYWdlcndhbGwgPHJvc3MubGFn
ZXJ3YWxsQGNpdHJpeC5jb20+DQo+Pj4gLS0tDQo+Pj4gICAgZHJpdmVycy94ZW4vbWFuYWdl
LmMgfCAyICstDQo+Pj4gICAgMSBmaWxlIGNoYW5nZWQsIDEgaW5zZXJ0aW9uKCspLCAxIGRl
bGV0aW9uKC0pDQo+Pj4NCj4+PiBkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vbWFuYWdlLmMg
Yi9kcml2ZXJzL3hlbi9tYW5hZ2UuYw0KPj4+IGluZGV4IDNkNWEzODRkNjVmNy4uYzE2ZGY2
Mjk5MDdlIDEwMDY0NA0KPj4+IC0tLSBhL2RyaXZlcnMveGVuL21hbmFnZS5jDQo+Pj4gKysr
IGIvZHJpdmVycy94ZW4vbWFuYWdlLmMNCj4+PiBAQCAtMjA1LDcgKzIwNSw3IEBAIHN0YXRp
YyB2b2lkIGRvX3Bvd2Vyb2ZmKHZvaWQpDQo+Pj4gICAgc3RhdGljIHZvaWQgZG9fcmVib290
KHZvaWQpDQo+Pj4gICAgew0KPj4+ICAgICAgICAgc2h1dHRpbmdfZG93biA9IFNIVVRET1dO
X1BPV0VST0ZGOyAvKiA/ICovDQo+Pj4gLSAgICAgY3RybF9hbHRfZGVsKCk7DQo+Pj4gKyAg
ICAgb3JkZXJseV9yZWJvb3QoKTsNCj4+PiAgICB9DQo+Pj4gICAgDQo+Pj4gICAgc3RhdGlj
IHN0cnVjdCBzaHV0ZG93bl9oYW5kbGVyIHNodXRkb3duX2hhbmRsZXJzW10gPSB7DQo+Pg0K
Pj4gVGhlIGNvZGUgc2VlbXMgdG8gYmUgZmluZS4NCj4+DQo+PiBBbGJlaXQgSSB3b25kZXIg
d2hldGhlciB3ZSBzaG91bGRuJ3QgdHVybiBzaHV0dGluZ19kb3duIGludG8gYSBib29sLA0K
Pj4gYXMgYWxsIHRoYXQgc2VlbXMgdG8gYmUgbmVlZGVkIGlzICJzaHV0dGluZ19kb3duICE9
IFNIVVRET1dOX0lOVkFMSUQiDQo+PiB0b2RheS4gQnV0IHRoaXMgY291bGQgYmUgcGFydCBv
ZiBhbm90aGVyIHBhdGNoLg0KPj4NCj4gDQo+IEFncmVlZCB0aGF0IHNodXR0aW5nX2Rvd24g
Y291bGQgYmUgYSBib29sIGJ1dCBiZXR0ZXIgdG8gY2hhbmdlIGl0DQo+IGluIGEgc2VwYXJh
dGUgcGF0Y2guDQoNClllcy4NCg0KDQpKdWVyZ2VuDQoNCg==
--------------8YebmerxXxTtcf9Ro2sHewd5
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8YebmerxXxTtcf9Ro2sHewd5--

--------------B2bve18atiGrvFHMzoO9TShl--

--------------RtPbTXkN908i7MtQcQ37OQh8
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK5zfgFAwAAAAAACgkQsN6d1ii/Ey9O
Kgf/QGzMFkLaIFT/dN0++ABIcfaA9nTzP9SNdO0tUT6Rydf0cRxhJdDUzmDd5JX3Ip14Scyu8rvd
AoxbpdGx6jPmzSDwDwT286lxSeUDNEc5tn4PlBRpfl6YQI2ANegQIxiTQF2A7hf8sa8WGoI+12Rp
jhOG8C1hoDwx2wWD/vcIqPyH7DQqyE240Z2rbg0Tb4eeQSFVoS4RFwawyuHTvw/9L0weVH2ePYXk
64ZN3LPrCA+xe512vV0JRE1C5ecG3CcXMxH6eYUILwG9+mwVWa9OBcavtfIsTRtOSmvsrnLNUORq
GyKP0yYy44rxvb8FhQyv2Ew7tIOo0eQx5z/qE9KRYA==
=3a9r
-----END PGP SIGNATURE-----

--------------RtPbTXkN908i7MtQcQ37OQh8--


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 17:53:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 17:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356876.585209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5sve-0002Qs-KW; Mon, 27 Jun 2022 17:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356876.585209; Mon, 27 Jun 2022 17:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5sve-0002Ql-Hp; Mon, 27 Jun 2022 17:53:42 +0000
Received: by outflank-mailman (input) for mailman id 356876;
 Mon, 27 Jun 2022 17:53:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5svd-0002Qb-HL; Mon, 27 Jun 2022 17:53:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5svd-00019y-DG; Mon, 27 Jun 2022 17:53:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5svc-00024J-UJ; Mon, 27 Jun 2022 17:53:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5svc-0003wL-Tr; Mon, 27 Jun 2022 17:53:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qlHRyDEaXmqrZzmiE3mWHCYHM9L/KWw+gpUq3+lg6Vs=; b=cmZWujelHYeIp3gI3SLEmo8l2R
	rxuGJDliBWhnxaXHh20XAL9pb+59bZRfgkjqCQSpMV5Ha2a4XeyFjafn48CpMYd77jLpoYYqpdoGy
	uxbcUg+Z8rjwymZzFJcIALqkFuhL3INpVabrOh0rd2lFT9dtVH2gdDOtw3wHIyOfYJzQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171368-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171368: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=03c765b0e3b4cb5063276b086c76f7a612856a9a
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 17:53:40 +0000

flight 171368 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171368/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                03c765b0e3b4cb5063276b086c76f7a612856a9a
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    8 days
Failing since        171280  2022-06-19 15:12:25 Z    8 days   24 attempts
Testing same since   171364  2022-06-27 01:55:47 Z    0 days    2 attempts

------------------------------------------------------------
358 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 12715 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 18:10:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 18:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356886.585231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5tBv-0005Mm-Dc; Mon, 27 Jun 2022 18:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356886.585231; Mon, 27 Jun 2022 18:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5tBv-0005Md-At; Mon, 27 Jun 2022 18:10:31 +0000
Received: by outflank-mailman (input) for mailman id 356886;
 Mon, 27 Jun 2022 18:10:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dsg=XC=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o5tBt-0005LG-HS
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 18:10:29 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b76fac3-f644-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 20:10:26 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 1A7B85C01CF;
 Mon, 27 Jun 2022 14:10:26 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Mon, 27 Jun 2022 14:10:26 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 27 Jun 2022 14:10:25 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b76fac3-f644-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1656353426; x=1656439826; bh=lOgO7/MkD/pT4PfK69b91gFfkelWuZztBiA
	L6+qToLw=; b=llieCiZBXLbkXZGOfCR9hyK5iUCGnrXyHEfu4pm/xco/WbiUYfw
	m4YfIlTTed6Oq1/RusdqQeQqRdR9Svjlf1q6bKiCdzKjzfmcwhXpFvlYQfWN6UNP
	TyZyfR9gNAWJbGHjpkeNXVVBU4AzRvkZcyPHKwffO8AYrRTy5iFTtIWjM0AeM6YF
	0xTwOYPsW5LF0sMsN68uMuPvwg1yQwn5r8EDLKElaqEgFUorkUYHBn0ovqHRYzAB
	AJkgsGBd1lJcpL11JqmhRZM3LkSXVG6HahP9CrvKDuZzpdOYsS3HXUUu8+vqqGHV
	4cS2jyzhl/ROSxV7whtHWihNzdyz75yMCgg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1656353426; x=1656439826; bh=lOgO7/MkD/pT4
	PfK69b91gFfkelWuZztBiAL6+qToLw=; b=iO6uNxFtRmfY8kW6yvdkW90T/tkO7
	+5+ihtclmMYmAAibdh624QgqV86qGPc31vcQ/PfX4gOQmT/vZaPg6o/RO0GoCdc5
	EMJSGiKZEAyUQwj28pUlomwmj9ZXmf817sxkQdcGhublCGlu+1gsI2jdvMdv03GU
	AGO7lhW139N3J3GIA9JPNnYxng9E5DPdb8R4zOK95QjGGFb10xfdwkdbw1Oe9gvj
	dC2CWfz4sVr2sNQN5qo7syT8SIEke6vrlHUyg+R9RGR7seCUzoDgtmt9OYuWS08x
	+jGmoKg9Y6D4+MyFV53jVyZQvyYEgJt9L3HvJQTdBAq8omuo8w+W+/QdQ==
X-ME-Sender: <xms:kfK5YiEcRkT70Ov0wytkUB1WneSnm1OuXwsjriKQJCoblcQZ15Fr_w>
    <xme:kfK5YjW8geMgrRWb2aZrW1EuqdYzYmukxFfzdKhvab60PPXlFzRLKKkbBH57BEYPs
    S3e-RgKA9X4ZD0>
X-ME-Received: <xmr:kfK5YsIYkdu1LPZGwMSnNsjBRhh6LBASKdIL4trLEGBbY0xXLOz5TCy39Tyn-RSbOaaPQMWVzRYF>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudeghedguddvgecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeettdegudelkeelveejhefhvdek
    hfevuedvvedtudfhhfdvledtgedtgfekuddugeenucffohhmrghinhepghhithhhuhgsrd
    gtohhmpdhkvghrnhgvlhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:kfK5YsFCwSy7ugcmGGoG21zHIZqaRrYPPB0VZprxYm_-jg70EqJEPg>
    <xmx:kfK5YoWWctQWNbo7KvtdUArwO2noAMNQr2lsvVPXG6hHMKQJRfGYlA>
    <xmx:kfK5YvMwiAH4aA5tyEL9n4ADEBiK0TObHtJMbY1b2NNkaYKY1schFQ>
    <xmx:kvK5YjeFm22CkV5ZMhSBCJ9qJtMCPp9pYXYFeINpFtXljJ0v14f3UA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [PATCH 4.19] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Mon, 27 Jun 2022 14:10:04 -0400
Message-Id: <20220627181006.1954-3-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220627181006.1954-1-demi@invisiblethingslab.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages() async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will not prevent further
calls from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/gntdev-common.h |   8 ++
 drivers/xen/gntdev.c        | 146 +++++++++++++++++++++++++-----------
 2 files changed, 109 insertions(+), 45 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 2f8b949c3eeb..fab6f5a54d5b 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -15,6 +15,8 @@
 #include <linux/mman.h>
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
+#include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -61,6 +63,7 @@ struct gntdev_grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
 
@@ -78,6 +81,11 @@ struct gntdev_grant_map {
 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
 	xen_pfn_t *frames;
 #endif
+
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in __unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e519063e421e..492084814f55 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -64,11 +64,12 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 #define populate_freeable_maps use_ptemod
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-			     int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+			      int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -125,6 +126,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 	kfree(map->unmap_ops);
 	kfree(map->kmap_ops);
 	kfree(map->kunmap_ops);
+	kfree(map->being_removed);
 	kfree(map);
 }
 
@@ -144,12 +146,15 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	add->kmap_ops  = kcalloc(count, sizeof(add->kmap_ops[0]), GFP_KERNEL);
 	add->kunmap_ops = kcalloc(count, sizeof(add->kunmap_ops[0]), GFP_KERNEL);
 	add->pages     = kcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
 	    NULL == add->kmap_ops  ||
 	    NULL == add->kunmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
@@ -245,6 +250,35 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 		return;
 
 	atomic_sub(map->count, &pages_mapped);
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.  Do NOT use refcount_inc() here, as it will detect that
+		 * the reference count is zero and WARN().
+		 */
+		refcount_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
+		unmap_grant_pages(map, 0, map->count);
+
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
 
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
@@ -302,6 +336,7 @@ static int set_grant_ptes_as_special(pte_t *pte, pgtable_t token,
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -350,87 +385,109 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 			map->pages, map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			       int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
+	unsigned int i;
+	struct gntdev_grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
 
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
+		pr_debug("unmap handle=%d st=%d\n",
+			map->unmap_ops[offset+i].handle,
+			map->unmap_ops[offset+i].status);
+		map->unmap_ops[offset+i].handle = -1;
+	}
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
+
+	/* Release reference taken by __unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			       int pages)
+{
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
 		if (pgno >= offset && pgno < offset + pages) {
 			/* No need for kmap, pages are in lowmem */
 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
 		}
 	}
 
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
 
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
-
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
-		pr_debug("unmap handle=%d st=%d\n",
-			map->unmap_ops[offset+i].handle,
-			map->unmap_ops[offset+i].status);
-		map->unmap_ops[offset+i].handle = -1;
-	}
-	return err;
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			     int pages)
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			      int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages && map->unmap_ops[offset].handle == -1) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset+range].handle == -1)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -500,7 +557,6 @@ static int unmap_if_in_range(struct gntdev_grant_map *map,
 			      bool blockable)
 {
 	unsigned long mstart, mend;
-	int err;
 
 	if (!in_range(map, start, end))
 		return 0;
@@ -514,10 +570,9 @@ static int unmap_if_in_range(struct gntdev_grant_map *map,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			start, end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 
 	return 0;
 }
@@ -558,7 +613,6 @@ static void mn_release(struct mmu_notifier *mn,
 {
 	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
 	struct gntdev_grant_map *map;
-	int err;
 
 	mutex_lock(&priv->lock);
 	list_for_each_entry(map, &priv->maps, next) {
@@ -567,8 +621,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	list_for_each_entry(map, &priv->freeable_maps, next) {
 		if (!map->vma)
@@ -576,8 +629,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	mutex_unlock(&priv->lock);
 }
@@ -1113,6 +1165,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	}
 
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 18:10:31 2022
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Mon, 27 Jun 2022 14:10:02 -0400
Message-Id: <20220627181006.1954-1-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can instead be issued at the point where the error is
detected.  Additionally, a failed call will not prevent further calls
from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/gntdev-common.h |   7 ++
 drivers/xen/gntdev.c        | 142 +++++++++++++++++++++++++-----------
 2 files changed, 106 insertions(+), 43 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 20d7d059dadb..40ef379c28ab 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -16,6 +16,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
 #include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -56,6 +57,7 @@ struct gntdev_grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
 
@@ -73,6 +75,11 @@ struct gntdev_grant_map {
 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
 	xen_pfn_t *frames;
 #endif
+
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in __unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 54778aadf618..a631a453eb57 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -60,10 +61,11 @@ module_param(limit, uint, 0644);
 MODULE_PARM_DESC(limit,
 	"Maximum number of grants that may be mapped by one mapping request");
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-			     int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+			      int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 	kvfree(map->unmap_ops);
 	kvfree(map->kmap_ops);
 	kvfree(map->kunmap_ops);
+	kvfree(map->being_removed);
 	kfree(map);
 }
 
@@ -140,12 +143,13 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	add->kunmap_ops = kvcalloc(count,
 				   sizeof(add->kunmap_ops[0]), GFP_KERNEL);
 	add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
-	    NULL == add->map_ops   ||
-	    NULL == add->unmap_ops ||
 	    NULL == add->kmap_ops  ||
 	    NULL == add->kunmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
@@ -240,9 +244,36 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 	if (!refcount_dec_and_test(&map->users))
 		return;
 
-	if (map->pages && !use_ptemod)
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.  Do NOT use refcount_inc() here, as it will detect that
+		 * the reference count is zero and WARN().
+		 */
+		refcount_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
 		unmap_grant_pages(map, 0, map->count);
 
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
+
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
 		evtchn_put(map->notify.event);
@@ -288,6 +319,7 @@ static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data)
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -336,87 +368,109 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 			map->pages, map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			       int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
+	unsigned int i;
+	struct gntdev_grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
 
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
+		pr_debug("unmap handle=%d st=%d\n",
+			map->unmap_ops[offset+i].handle,
+			map->unmap_ops[offset+i].status);
+		map->unmap_ops[offset+i].handle = -1;
+	}
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
+
+	/* Release reference taken by __unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			       int pages)
+{
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
 		if (pgno >= offset && pgno < offset + pages) {
 			/* No need for kmap, pages are in lowmem */
 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
 		}
 	}
 
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
 
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
-
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
-		pr_debug("unmap handle=%d st=%d\n",
-			map->unmap_ops[offset+i].handle,
-			map->unmap_ops[offset+i].status);
-		map->unmap_ops[offset+i].handle = -1;
-	}
-	return err;
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			     int pages)
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			      int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages && map->unmap_ops[offset].handle == -1) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset+range].handle == -1)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -468,7 +522,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 	struct gntdev_grant_map *map =
 		container_of(mn, struct gntdev_grant_map, notifier);
 	unsigned long mstart, mend;
-	int err;
 
 	if (!mmu_notifier_range_blockable(range))
 		return false;
@@ -489,10 +542,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			range->start, range->end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 
 	return true;
 }
@@ -980,6 +1032,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	if (use_ptemod && map->vma)
 		goto unlock_out;
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 18:10:32 2022
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [PATCH 4.14] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Mon, 27 Jun 2022 14:10:05 -0400
Message-Id: <20220627181006.1954-4-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220627181006.1954-1-demi@invisiblethingslab.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will not prevent further calls
from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/gntdev.c | 145 ++++++++++++++++++++++++++++++-------------
 1 file changed, 103 insertions(+), 42 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 7b4ac5505f53..2827015604fb 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -59,6 +59,7 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 #define populate_freeable_maps use_ptemod
 
@@ -94,11 +95,16 @@ struct grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
-static int unmap_grant_pages(struct grant_map *map, int offset, int pages);
+static void unmap_grant_pages(struct grant_map *map, int offset, int pages);
 
 /* ------------------------------------------------------------------ */
 
@@ -129,6 +135,7 @@ static void gntdev_free_map(struct grant_map *map)
 	kfree(map->unmap_ops);
 	kfree(map->kmap_ops);
 	kfree(map->kunmap_ops);
+	kfree(map->being_removed);
 	kfree(map);
 }
 
@@ -147,12 +154,15 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 	add->kmap_ops  = kcalloc(count, sizeof(add->kmap_ops[0]), GFP_KERNEL);
 	add->kunmap_ops = kcalloc(count, sizeof(add->kunmap_ops[0]), GFP_KERNEL);
 	add->pages     = kcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
 	    NULL == add->kmap_ops  ||
 	    NULL == add->kunmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 
 	if (gnttab_alloc_pages(count, add->pages))
@@ -217,6 +227,35 @@ static void gntdev_put_map(struct gntdev_priv *priv, struct grant_map *map)
 		return;
 
 	atomic_sub(map->count, &pages_mapped);
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.  Do NOT use refcount_inc() here, as it will detect that
+		 * the reference count is zero and WARN().
+		 */
+		refcount_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
+		unmap_grant_pages(map, 0, map->count);
+
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
 
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
@@ -274,6 +313,7 @@ static int set_grant_ptes_as_special(pte_t *pte, pgtable_t token,
 
 static int map_grant_pages(struct grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -322,85 +362,107 @@ static int map_grant_pages(struct grant_map *map)
 			map->pages, map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
+	unsigned int i;
+	struct grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
+
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
+		pr_debug("unmap handle=%d st=%d\n",
+			map->unmap_ops[offset+i].handle,
+			map->unmap_ops[offset+i].status);
+		map->unmap_ops[offset+i].handle = -1;
+	}
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
 
+	/* Release reference taken by unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct grant_map *map, int offset, int pages)
+{
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
 		if (pgno >= offset && pgno < offset + pages) {
 			/* No need for kmap, pages are in lowmem */
 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
 		}
 	}
 
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
 
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
-
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
-		pr_debug("unmap handle=%d st=%d\n",
-			map->unmap_ops[offset+i].handle,
-			map->unmap_ops[offset+i].status);
-		map->unmap_ops[offset+i].handle = -1;
-	}
-	return err;
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct grant_map *map, int offset, int pages)
+static void unmap_grant_pages(struct grant_map *map, int offset, int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages && map->unmap_ops[offset].handle == -1) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset+range].handle == -1)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -456,7 +518,6 @@ static void unmap_if_in_range(struct grant_map *map,
 			      unsigned long start, unsigned long end)
 {
 	unsigned long mstart, mend;
-	int err;
 
 	if (!map->vma)
 		return;
@@ -470,10 +531,9 @@ static void unmap_if_in_range(struct grant_map *map,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			start, end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 }
 
 static void mn_invl_range_start(struct mmu_notifier *mn,
@@ -498,7 +558,6 @@ static void mn_release(struct mmu_notifier *mn,
 {
 	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
 	struct grant_map *map;
-	int err;
 
 	mutex_lock(&priv->lock);
 	list_for_each_entry(map, &priv->maps, next) {
@@ -507,8 +566,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	list_for_each_entry(map, &priv->freeable_maps, next) {
 		if (!map->vma)
@@ -516,8 +574,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	mutex_unlock(&priv->lock);
 }
@@ -1006,6 +1063,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	}
 
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 18:10:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 18:10:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356888.585244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5tBw-0005XW-AR; Mon, 27 Jun 2022 18:10:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356888.585244; Mon, 27 Jun 2022 18:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5tBw-0005Vy-0e; Mon, 27 Jun 2022 18:10:32 +0000
Received: by outflank-mailman (input) for mailman id 356888;
 Mon, 27 Jun 2022 18:10:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dsg=XC=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o5tBu-0005LG-A5
 for xen-devel@lists.xenproject.org; Mon, 27 Jun 2022 18:10:30 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69e2f984-f644-11ec-b725-ed86ccbb4733;
 Mon, 27 Jun 2022 20:10:27 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 707205C01AE;
 Mon, 27 Jun 2022 14:10:23 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Mon, 27 Jun 2022 14:10:23 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 27 Jun 2022 14:10:22 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69e2f984-f644-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1656353423; x=1656439823; bh=8/9GPS9d3jK36QieyU12uO0QQ6L+v2RvJxj
	grl9TmnY=; b=rNhC023lGay0R+tLnxWa9T88Fo4fWBSHEdE7qsDmjsrv11M2snn
	VkZIMm3SM/22RRrteievkUYiaMFvfGx34Tv17vq+jQr4eSR4DF47wOPOg3r6INbS
	/uS1yzUuK6iDIIcfRY68KDEktsaYQTg3xcxOSJW7VK0jpaWy0EgRWO8nLQP8SfM1
	SvXRk6OZJKzT3PcybBrlD5ZhcpHt7K6LFXFexPJMIztbK0C9PtBaSyELj9L+R9H1
	l28JgiOl9LtxoAvUd6nIW5lxNrx9uwzKp2EWZKZb8GqhE76bBw9z/m3eOg2iij+3
	TnOLCm9ffoR9TmCcH2T0U1YvDFsDRZ9uzJA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1656353423; x=1656439823; bh=8/9GPS9d3jK36
	QieyU12uO0QQ6L+v2RvJxjgrl9TmnY=; b=NUqfUKEBJGZbruWLVsVyLn4k3xEKo
	3zZLNWwQeosbJznSm4wgS2r3r1ce0xZxIPFI9+QexXurBzeDmyco04bcKfndFSN8
	srBN9AB9QtYpCHYhBc8fW2HE5WW9iBTC2YmPb3sVtRV321mVYqFldIl5RRtA5GfG
	GAzv/pmbVRtcAasm6rQ6Ip4zjWwNgD3IhoLLQJb9a8BVPtsq2MqC2xcIcsnYmpMA
	5tLcTLuidNgTojpnEd5sNgY84kx2OEB0UYRMG13L9F/ObyCHaGxXyX4zi8igbYWV
	k6+FowxDzC3Qz3BttwxVB6VtK7f2r+kNWz0C7Q06Oeo+T0T47fxJpZozg==
X-ME-Sender: <xms:j_K5Yit6uhY-gZLFv4Kzl_fLT6GmdM34aQcGzUotxbI8eaidkqmsgw>
    <xme:j_K5YnfCwmjy0WTCNuWRJsmmrvNMGKFV8vIF-fiXW-AflhKDZKnHhwHTCpvaQTlh4
    HIuxFBOeaZoATw>
X-ME-Received: <xmr:j_K5YtwOuARQuWXar5E9a0ugi333w77rPrYMxTteR24L2oWPhJa4C6JWBWVPVoIA4pGNAl4bf0zU>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudeghedguddvfecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeettdegudelkeelveejhefhvdek
    hfevuedvvedtudfhhfdvledtgedtgfekuddugeenucffohhmrghinhepghhithhhuhgsrd
    gtohhmpdhkvghrnhgvlhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:j_K5YtNnax8ytkZOsZQkrW_CJsyRbHgsPvqPxR-OG3vUkCqXnACg5Q>
    <xmx:j_K5Yi-LysyigsgHy0UefVEltKPhc38ypJIQTWSJRCNYzHEaTkOO2Q>
    <xmx:j_K5YlUZhL1TMMBTQacxhx-qbPgMKMRWVHqsnk_BNeu1al2j-hUPDA>
    <xmx:j_K5Yol6KuxtWa30_9ZBSwpwNV-fIqasfMkM2AO3FqbfDrgYvUnDlw>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [PATCH 5.4] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Mon, 27 Jun 2022 14:10:03 -0400
Message-Id: <20220627181006.1954-2-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220627181006.1954-1-demi@invisiblethingslab.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value is a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will not prevent further calls
from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/gntdev-common.h |   8 ++
 drivers/xen/gntdev.c        | 147 +++++++++++++++++++++++++-----------
 2 files changed, 110 insertions(+), 45 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 2f8b949c3eeb..fab6f5a54d5b 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -15,6 +15,8 @@
 #include <linux/mman.h>
 #include <linux/mmu_notifier.h>
 #include <linux/types.h>
+#include <xen/interface/event_channel.h>
+#include <xen/grant_table.h>
 
 struct gntdev_dmabuf_priv;
 
@@ -61,6 +63,7 @@ struct gntdev_grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
 
@@ -78,6 +81,11 @@ struct gntdev_grant_map {
 	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
 	xen_pfn_t *frames;
 #endif
+
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in __unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e953ea34b3e4..f46479347765 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/highmem.h>
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -62,11 +63,12 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 #define populate_freeable_maps use_ptemod
 
-static int unmap_grant_pages(struct gntdev_grant_map *map,
-			     int offset, int pages);
+static void unmap_grant_pages(struct gntdev_grant_map *map,
+			      int offset, int pages);
 
 static struct miscdevice gntdev_miscdev;
 
@@ -123,6 +125,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 	kfree(map->unmap_ops);
 	kfree(map->kmap_ops);
 	kfree(map->kunmap_ops);
+	kfree(map->being_removed);
 	kfree(map);
 }
 
@@ -142,12 +145,15 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	add->kmap_ops  = kcalloc(count, sizeof(add->kmap_ops[0]), GFP_KERNEL);
 	add->kunmap_ops = kcalloc(count, sizeof(add->kunmap_ops[0]), GFP_KERNEL);
 	add->pages     = kcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
 	    NULL == add->kmap_ops  ||
 	    NULL == add->kunmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
@@ -243,6 +249,35 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 		return;
 
 	atomic_sub(map->count, &pages_mapped);
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.  Do NOT use refcount_inc() here, as it will detect that
+		 * the reference count is zero and WARN().
+		 */
+		refcount_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
+		unmap_grant_pages(map, 0, map->count);
+
+		/* Check if the memory now needs to be freed */
+		if (!refcount_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
 
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
@@ -298,6 +333,7 @@ static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data)
 
 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -346,87 +382,109 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map)
 			map->pages, map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			       int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
+	unsigned int i;
+	struct gntdev_grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
 
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
+		pr_debug("unmap handle=%d st=%d\n",
+			map->unmap_ops[offset+i].handle,
+			map->unmap_ops[offset+i].status);
+		map->unmap_ops[offset+i].handle = -1;
+	}
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
+
+	/* Release reference taken by __unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			       int pages)
+{
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
 		if (pgno >= offset && pgno < offset + pages) {
 			/* No need for kmap, pages are in lowmem */
 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
 		}
 	}
 
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	refcount_inc(&map->users); /* to keep map alive during async call below */
 
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
-
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
-		pr_debug("unmap handle=%d st=%d\n",
-			map->unmap_ops[offset+i].handle,
-			map->unmap_ops[offset+i].status);
-		map->unmap_ops[offset+i].handle = -1;
-	}
-	return err;
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
-			     int pages)
+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+			      int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages && map->unmap_ops[offset].handle == -1) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset+range].handle == -1)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -496,7 +554,6 @@ static int unmap_if_in_range(struct gntdev_grant_map *map,
 			      bool blockable)
 {
 	unsigned long mstart, mend;
-	int err;
 
 	if (!in_range(map, start, end))
 		return 0;
@@ -510,10 +567,9 @@ static int unmap_if_in_range(struct gntdev_grant_map *map,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			start, end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 
 	return 0;
 }
@@ -554,7 +610,6 @@ static void mn_release(struct mmu_notifier *mn,
 {
 	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
 	struct gntdev_grant_map *map;
-	int err;
 
 	mutex_lock(&priv->lock);
 	list_for_each_entry(map, &priv->maps, next) {
@@ -563,8 +618,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	list_for_each_entry(map, &priv->freeable_maps, next) {
 		if (!map->vma)
@@ -572,8 +626,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	mutex_unlock(&priv->lock);
 }
@@ -1102,6 +1155,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	}
 
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 18:10:33 2022
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [PATCH 4.9] xen/gntdev: Avoid blocking in unmap_grant_pages()
Date: Mon, 27 Jun 2022 14:10:06 -0400
Message-Id: <20220627181006.1954-5-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220627181006.1954-1-demi@invisiblethingslab.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream

unmap_grant_pages() currently waits for the pages to no longer be used.
In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
deadlock against i915: i915 was waiting for gntdev's MMU notifier to
finish, while gntdev was waiting for i915 to free its pages.  I also
believe this is responsible for various deadlocks I have experienced in
the past.

Avoid these problems by making unmap_grant_pages() async.  This requires
making it return void, as any errors will not be available when the
function returns.  Fortunately, the only use of the return value was a
WARN_ON(), which can be replaced by a WARN_ON() at the point where the
error is detected.  Additionally, a failed call will no longer prevent
further calls from being made, but this is harmless.

Because unmap_grant_pages is now async, the grant handle will be set to
INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
handle.  Instead, a separate bool array is allocated for this purpose.
This wastes memory, but stuffing this information in padding bytes is
too fragile.  Furthermore, it is necessary to grab a reference to the
map before making the asynchronous call, and release the reference when
the call returns.

It is also necessary to guard against reentrancy in gntdev_put_map(),
and to handle the case where userspace tries to map a mapping whose
contents have not all been freed yet.

Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/gntdev.c | 144 ++++++++++++++++++++++++++++++-------------
 1 file changed, 102 insertions(+), 42 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 69d59102ff1b..2c3248e71e9c 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -57,6 +57,7 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* True in PV mode, false otherwise */
 static int use_ptemod;
 #define populate_freeable_maps use_ptemod
 
@@ -92,11 +93,16 @@ struct grant_map {
 	struct gnttab_unmap_grant_ref *unmap_ops;
 	struct gnttab_map_grant_ref   *kmap_ops;
 	struct gnttab_unmap_grant_ref *kunmap_ops;
+	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
+	/* Number of live grants */
+	atomic_t live_grants;
+	/* Needed to avoid allocation in unmap_grant_pages */
+	struct gntab_unmap_queue_data unmap_data;
 };
 
-static int unmap_grant_pages(struct grant_map *map, int offset, int pages);
+static void unmap_grant_pages(struct grant_map *map, int offset, int pages);
 
 /* ------------------------------------------------------------------ */
 
@@ -127,6 +133,7 @@ static void gntdev_free_map(struct grant_map *map)
 	kfree(map->unmap_ops);
 	kfree(map->kmap_ops);
 	kfree(map->kunmap_ops);
+	kfree(map->being_removed);
 	kfree(map);
 }
 
@@ -145,12 +152,15 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 	add->kmap_ops  = kcalloc(count, sizeof(add->kmap_ops[0]), GFP_KERNEL);
 	add->kunmap_ops = kcalloc(count, sizeof(add->kunmap_ops[0]), GFP_KERNEL);
 	add->pages     = kcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->being_removed =
+		kcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
 	if (NULL == add->grants    ||
 	    NULL == add->map_ops   ||
 	    NULL == add->unmap_ops ||
 	    NULL == add->kmap_ops  ||
 	    NULL == add->kunmap_ops ||
-	    NULL == add->pages)
+	    NULL == add->pages     ||
+	    NULL == add->being_removed)
 		goto err;
 
 	if (gnttab_alloc_pages(count, add->pages))
@@ -215,6 +225,34 @@ static void gntdev_put_map(struct gntdev_priv *priv, struct grant_map *map)
 		return;
 
 	atomic_sub(map->count, &pages_mapped);
+	if (map->pages && !use_ptemod) {
+		/*
+		 * Increment the reference count.  This ensures that the
+		 * subsequent call to unmap_grant_pages() will not wind up
+		 * re-entering itself.  It *can* wind up calling
+		 * gntdev_put_map() recursively, but such calls will be with a
+		 * reference count greater than 1, so they will return before
+		 * this code is reached.  The recursion depth is thus limited to
+		 * 1.
+		 */
+		atomic_set(&map->users, 1);
+
+		/*
+		 * Unmap the grants.  This may or may not be asynchronous, so it
+		 * is possible that the reference count is 1 on return, but it
+		 * could also be greater than 1.
+		 */
+		unmap_grant_pages(map, 0, map->count);
+
+		/* Check if the memory now needs to be freed */
+		if (!atomic_dec_and_test(&map->users))
+			return;
+
+		/*
+		 * All pages have been returned to the hypervisor, so free the
+		 * map.
+		 */
+	}
 
 	if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(map->notify.event);
@@ -272,6 +310,7 @@ static int set_grant_ptes_as_special(pte_t *pte, pgtable_t token,
 
 static int map_grant_pages(struct grant_map *map)
 {
+	size_t alloced = 0;
 	int i, err = 0;
 
 	if (!use_ptemod) {
@@ -320,85 +359,107 @@ static int map_grant_pages(struct grant_map *map)
 			map->pages, map->count);
 
 	for (i = 0; i < map->count; i++) {
-		if (map->map_ops[i].status == GNTST_okay)
+		if (map->map_ops[i].status == GNTST_okay) {
 			map->unmap_ops[i].handle = map->map_ops[i].handle;
-		else if (!err)
+			if (!use_ptemod)
+				alloced++;
+		} else if (!err)
 			err = -EINVAL;
 
 		if (map->flags & GNTMAP_device_map)
 			map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
 
 		if (use_ptemod) {
-			if (map->kmap_ops[i].status == GNTST_okay)
+			if (map->kmap_ops[i].status == GNTST_okay) {
+				if (map->map_ops[i].status == GNTST_okay)
+					alloced++;
 				map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
-			else if (!err)
+			} else if (!err)
 				err = -EINVAL;
 		}
 	}
+	atomic_add(alloced, &map->live_grants);
 	return err;
 }
 
-static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
+static void __unmap_grant_pages_done(int result,
+		struct gntab_unmap_queue_data *data)
 {
-	int i, err = 0;
-	struct gntab_unmap_queue_data unmap_data;
+	unsigned int i;
+	struct grant_map *map = data->data;
+	unsigned int offset = data->unmap_ops - map->unmap_ops;
+
+	for (i = 0; i < data->count; i++) {
+		WARN_ON(map->unmap_ops[offset+i].status);
+		pr_debug("unmap handle=%d st=%d\n",
+			map->unmap_ops[offset+i].handle,
+			map->unmap_ops[offset+i].status);
+		map->unmap_ops[offset+i].handle = -1;
+	}
+	/*
+	 * Decrease the live-grant counter.  This must happen after the loop to
+	 * prevent premature reuse of the grants by gnttab_mmap().
+	 */
+	atomic_sub(data->count, &map->live_grants);
 
+	/* Release reference taken by unmap_grant_pages */
+	gntdev_put_map(NULL, map);
+}
+
+static void __unmap_grant_pages(struct grant_map *map, int offset, int pages)
+{
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
+
 		if (pgno >= offset && pgno < offset + pages) {
 			/* No need for kmap, pages are in lowmem */
 			uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
+
 			tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
 			map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
 		}
 	}
 
-	unmap_data.unmap_ops = map->unmap_ops + offset;
-	unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
-	unmap_data.pages = map->pages + offset;
-	unmap_data.count = pages;
+	map->unmap_data.unmap_ops = map->unmap_ops + offset;
+	map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
+	map->unmap_data.pages = map->pages + offset;
+	map->unmap_data.count = pages;
+	map->unmap_data.done = __unmap_grant_pages_done;
+	map->unmap_data.data = map;
+	atomic_inc(&map->users); /* to keep map alive during async call below */
 
-	err = gnttab_unmap_refs_sync(&unmap_data);
-	if (err)
-		return err;
-
-	for (i = 0; i < pages; i++) {
-		if (map->unmap_ops[offset+i].status)
-			err = -EINVAL;
-		pr_debug("unmap handle=%d st=%d\n",
-			map->unmap_ops[offset+i].handle,
-			map->unmap_ops[offset+i].status);
-		map->unmap_ops[offset+i].handle = -1;
-	}
-	return err;
+	gnttab_unmap_refs_async(&map->unmap_data);
 }
 
-static int unmap_grant_pages(struct grant_map *map, int offset, int pages)
+static void unmap_grant_pages(struct grant_map *map, int offset, int pages)
 {
-	int range, err = 0;
+	int range;
+
+	if (atomic_read(&map->live_grants) == 0)
+		return; /* Nothing to do */
 
 	pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
 
 	/* It is possible the requested range will have a "hole" where we
 	 * already unmapped some of the grants. Only unmap valid ranges.
 	 */
-	while (pages && !err) {
-		while (pages && map->unmap_ops[offset].handle == -1) {
+	while (pages) {
+		while (pages && map->being_removed[offset]) {
 			offset++;
 			pages--;
 		}
 		range = 0;
 		while (range < pages) {
-			if (map->unmap_ops[offset+range].handle == -1)
+			if (map->being_removed[offset + range])
 				break;
+			map->being_removed[offset + range] = true;
 			range++;
 		}
-		err = __unmap_grant_pages(map, offset, range);
+		if (range)
+			__unmap_grant_pages(map, offset, range);
 		offset += range;
 		pages -= range;
 	}
-
-	return err;
 }
 
 /* ------------------------------------------------------------------ */
@@ -454,7 +515,6 @@ static void unmap_if_in_range(struct grant_map *map,
 			      unsigned long start, unsigned long end)
 {
 	unsigned long mstart, mend;
-	int err;
 
 	if (!map->vma)
 		return;
@@ -468,10 +528,9 @@ static void unmap_if_in_range(struct grant_map *map,
 			map->index, map->count,
 			map->vma->vm_start, map->vma->vm_end,
 			start, end, mstart, mend);
-	err = unmap_grant_pages(map,
+	unmap_grant_pages(map,
 				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
 				(mend - mstart) >> PAGE_SHIFT);
-	WARN_ON(err);
 }
 
 static void mn_invl_range_start(struct mmu_notifier *mn,
@@ -503,7 +562,6 @@ static void mn_release(struct mmu_notifier *mn,
 {
 	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
 	struct grant_map *map;
-	int err;
 
 	mutex_lock(&priv->lock);
 	list_for_each_entry(map, &priv->maps, next) {
@@ -512,8 +570,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	list_for_each_entry(map, &priv->freeable_maps, next) {
 		if (!map->vma)
@@ -521,8 +578,7 @@ static void mn_release(struct mmu_notifier *mn,
 		pr_debug("map %d+%d (%lx %lx)\n",
 				map->index, map->count,
 				map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
+		unmap_grant_pages(map, /* offset */ 0, map->count);
 	}
 	mutex_unlock(&priv->lock);
 }
@@ -1012,6 +1068,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	}
 
+	if (atomic_read(&map->live_grants)) {
+		err = -EAGAIN;
+		goto unlock_out;
+	}
 	atomic_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 19:21:34 2022
Message-ID: <c2b68766-9608-5910-7937-b7747ad189e7@apertussolutions.com>
Date: Mon, 27 Jun 2022 15:19:00 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH 6/7] xsm/flask: Use unsigned int instead of plain unsigned
Content-Language: en-US
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Jason Andryuk <jandryuk@gmail.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-7-michal.orzel@arm.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <20220627131543.410971-7-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit


On 6/27/22 09:15, Michal Orzel wrote:
> This is just for the style and consistency reasons as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>  xen/xsm/flask/ss/avtab.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/xsm/flask/ss/avtab.c b/xen/xsm/flask/ss/avtab.c
> index 017f5183de..9761d028d8 100644
> --- a/xen/xsm/flask/ss/avtab.c
> +++ b/xen/xsm/flask/ss/avtab.c
> @@ -349,7 +349,7 @@ int avtab_read_item(struct avtab *a, void *fp, struct policydb *pol,
>      struct avtab_key key;
>      struct avtab_datum datum;
>      int i, rc;
> -    unsigned set;
> +    unsigned int set;
>  
>      memset(&key, 0, sizeof(struct avtab_key));
>      memset(&datum, 0, sizeof(struct avtab_datum));

Is this not v2? Jason gave a Rb on the similar patch if I am not mistaken.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 20:34:55 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 171369: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    xen-4.15-testing:test-amd64-coresched-amd64-xl:host-install(5):broken:regression
    xen-4.15-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:regression
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c
X-Osstest-Versions-That:
    xen=a3faf632606e54437146dbcac2c9bbb89b9a4007
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 20:34:34 +0000

flight 171369 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171369/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl  5 host-install(5)       broken REGR. vs. 171205
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail REGR. vs. 171205

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171205
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171205
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171205
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171205
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171205
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171205
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171205
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171205
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171205
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c
baseline version:
 xen                  a3faf632606e54437146dbcac2c9bbb89b9a4007

Last test of basis   171205  2022-06-16 13:08:03 Z   11 days
Testing same since   171369  2022-06-27 13:37:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-amd64-xl broken
broken-step test-amd64-coresched-amd64-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jun 27 15:16:56 2022 +0200

    update Xen version to 4.15.3
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jun 27 23:05:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jun 2022 23:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356942.585304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5xnA-0004xc-Fe; Mon, 27 Jun 2022 23:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356942.585304; Mon, 27 Jun 2022 23:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o5xnA-0004xV-BV; Mon, 27 Jun 2022 23:05:16 +0000
Received: by outflank-mailman (input) for mailman id 356942;
 Mon, 27 Jun 2022 23:05:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5xn9-0004xL-0J; Mon, 27 Jun 2022 23:05:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5xn8-0007BX-Tg; Mon, 27 Jun 2022 23:05:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o5xn8-0001K5-Fr; Mon, 27 Jun 2022 23:05:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o5xn8-0004g1-ES; Mon, 27 Jun 2022 23:05:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iarwhkO2Z7P6nXN5zWIyISjXA5VfVW1AZ3LtqwbJ+0Y=; b=L6tXNhZE4gr8K5XDaUY1nI45Pg
	5cNFNik23zWQ493sFq3rAydGv1hdkYWQi0Vhgu+dPPgxtVoiZ5s96rw1RQVoR36+UUgplOLRwuSVf
	jVbIatTUoCQkmlpsZe2vSsicm9TS7q5/kv5jEK7qEqmazfHGBVD+0Y2QNmhqRHwHZofc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171372-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171372: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-raw:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc
X-Osstest-Versions-That:
    qemuu=40d522490714b65e0856444277db6c14c5cc3796
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jun 2022 23:05:14 +0000

flight 171372 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171372/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 171367 pass in 171372
 test-amd64-i386-libvirt-raw   8 xen-boot                   fail pass in 171367
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 171367
 test-amd64-amd64-libvirt-vhd 20 guest-start.2              fail pass in 171367

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 171367 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171348
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171348
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171348
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171348
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171348
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171348
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171348
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171348
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc
baseline version:
 qemuu                40d522490714b65e0856444277db6c14c5cc3796

Last test of basis   171348  2022-06-25 01:39:21 Z    2 days
Testing same since   171367  2022-06-27 06:39:45 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Helge Deller <deller@gmx.de>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   40d5224907..097ccbbbaf  097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 02:20:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 02:20:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356950.585314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o60q3-0007gZ-PY; Tue, 28 Jun 2022 02:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356950.585314; Tue, 28 Jun 2022 02:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o60q3-0007gS-MP; Tue, 28 Jun 2022 02:20:27 +0000
Received: by outflank-mailman (input) for mailman id 356950;
 Tue, 28 Jun 2022 02:20:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XMom=XD=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1o60q1-0007gM-Td
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 02:20:26 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd3b7f25-f688-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 04:20:23 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 4E28EB81C0B;
 Tue, 28 Jun 2022 02:20:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E6C61C34115;
 Tue, 28 Jun 2022 02:20:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd3b7f25-f688-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656382821;
	bh=Asz9ew2hFmI18bnYpVbWeSIOuLQ9FU4MX/wfshCkR5w=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Wm5qD/N3uSHISZ4abRUbRUHGlbp6c62EBG4ghbYADH1y0iqvooDAeYFKu/Z8xHajv
	 O/ZOgu6IvwnB6w8oofFgvGBub5Zr6RPiv7OivKBznV4zq/J4jkiV6G7LvJ/X68gN2r
	 qyyg1lE3bsqKR+K/IhuhYkGYqa9dT142qZwXaplHvhr3mxLjdx2RWa/G6eycCF+VKI
	 OD3SSDc2tyubnptvlVVLa4Zb4XIDX9XoYpQFxJ/aqNp4Jl6kZyXactH5c0MlnGdY8v
	 KmAI1V1sln3R00n6ZXBKb42phdZMC1rSbCowJ+57RxcYspLkdt8J1iEFw5CmPQL9up
	 B9cKq3/Ukrs8w==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	airlied@linux.ie,
	daniel@ffwll.ch,
	dri-devel@lists.freedesktop.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 5.18 38/53] drm/xen: Add missing VM_DONTEXPAND flag in mmap callback
Date: Mon, 27 Jun 2022 22:18:24 -0400
Message-Id: <20220628021839.594423-38-sashal@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220628021839.594423-1-sashal@kernel.org>
References: <20220628021839.594423-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

[ Upstream commit ca6969013d13282b42cb5edcc13db731a08e0ad8 ]

With Xen PV Display driver in use the "expected" VM_DONTEXPAND flag
is not set (neither explicitly nor implicitly), so the driver hits
the code path in drm_gem_mmap_obj() which triggers the WARNING.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Link: https://lore.kernel.org/r/1652104303-5098-1-git-send-email-olekstysh@gmail.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 5a5bf4e5b717..e31554d7139f 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -71,7 +71,7 @@ static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
 	 * the whole buffer.
 	 */
 	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
 	vma->vm_pgoff = 0;
 
 	/*
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 02:48:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 02:48:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356956.585326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o61Gw-00021E-V2; Tue, 28 Jun 2022 02:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356956.585326; Tue, 28 Jun 2022 02:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o61Gw-000217-RJ; Tue, 28 Jun 2022 02:48:14 +0000
Received: by outflank-mailman (input) for mailman id 356956;
 Tue, 28 Jun 2022 02:48:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o61Gv-00020x-V9; Tue, 28 Jun 2022 02:48:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o61Gv-0001l9-Sd; Tue, 28 Jun 2022 02:48:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o61Gs-0004xY-Kw; Tue, 28 Jun 2022 02:48:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o61Gs-0001CZ-K6; Tue, 28 Jun 2022 02:48:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2bwPGTPJiGDl/K/JtUTT324LiKpTAPkBGju9TBd5/W0=; b=h5MpHVmSccXlp+nCzE0+/dUNwQ
	yEFzh7dkBegZyFW7m6erLzfUGI77Z9fQWF5JS5ik8izbLlvTEsQxIULDte3pETr5ejOff81+MwiPk
	lMFK0l7+vgRbSGNRSKyxR6+2uinF7sdX3p3DNPqr1nj+2aBhKS14CZRIFv4j6N8e5ym0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171374-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171374: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=941e3e7912696b9fbe3586083a7c2e102cee7a87
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 02:48:10 +0000

flight 171374 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171374/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                941e3e7912696b9fbe3586083a7c2e102cee7a87
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    8 days
Failing since        171280  2022-06-19 15:12:25 Z    8 days   25 attempts
Testing same since   171374  2022-06-27 18:13:03 Z    0 days    1 attempts

------------------------------------------------------------
368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13051 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 06:23:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 06:23:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356966.585341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64d4-0008Co-Bg; Tue, 28 Jun 2022 06:23:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356966.585341; Tue, 28 Jun 2022 06:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64d4-0008Ch-8B; Tue, 28 Jun 2022 06:23:18 +0000
Received: by outflank-mailman (input) for mailman id 356966;
 Tue, 28 Jun 2022 06:23:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o64d2-0008CX-N6; Tue, 28 Jun 2022 06:23:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o64d2-0006Jn-KZ; Tue, 28 Jun 2022 06:23:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o64d2-0002sf-6C; Tue, 28 Jun 2022 06:23:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o64d2-0006F1-5i; Tue, 28 Jun 2022 06:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=edkvog4JcawWQ3HpCiEArQCsuO42ewfhe3lMIG/xERg=; b=cFTfjJx+HmSck1a3IQRs0DXVHJ
	lg1NKkE2ARfNTKI+GUiu2l9oeUE9nLkutCxGzC3EIcjyPD26Vj9ZTJdiDPoYmWpMm1IsO7o2Z053M
	A62CE+dAHYTAZN/HGxnc6Wy8kP8Y/t4rzmGhkJ38/RkxQvW+fJ+rqnblhyrjVPi4BNHo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171375-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 171375: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c
X-Osstest-Versions-That:
    xen=a3faf632606e54437146dbcac2c9bbb89b9a4007
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 06:23:16 +0000

flight 171375 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171375/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171205
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171205
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171205
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171205
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171205
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171205
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171205
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171205
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171205
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171205
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c
baseline version:
 xen                  a3faf632606e54437146dbcac2c9bbb89b9a4007

Last test of basis   171205  2022-06-16 13:08:03 Z   11 days
Testing same since   171369  2022-06-27 13:37:55 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a3faf63260..cc3329fbfb  cc3329fbfbb5af1c4fdb8a7a4e3a87c12264661c -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 06:27:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 06:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356973.585352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64h4-0000Ot-S2; Tue, 28 Jun 2022 06:27:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356973.585352; Tue, 28 Jun 2022 06:27:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64h4-0000Om-Oz; Tue, 28 Jun 2022 06:27:26 +0000
Received: by outflank-mailman (input) for mailman id 356973;
 Tue, 28 Jun 2022 06:27:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VwHr=XD=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o64h3-0000Ob-A5
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 06:27:25 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f39d8c4-f6ab-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 08:27:24 +0200 (CEST)
Received: by mail-wr1-x42d.google.com with SMTP id o4so12149565wrh.3
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jun 2022 23:27:24 -0700 (PDT)
Received: from [192.168.1.10] (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.gmail.com with ESMTPSA id
 b16-20020a5d6350000000b0021b8905e797sm12381193wrw.69.2022.06.27.23.27.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 27 Jun 2022 23:27:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f39d8c4-f6ab-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=fkdfoOl2U+ksDRlxAGj37ooDJZJBKaU2UDmKNiHUqAA=;
        b=L+4xl1muWVXTAdaCLKii9FKdBTvoMN6x7C0DQjwRqSinTfPepvBpeDMJE2YU8G63Vg
         t8BoVUp9WRFt5CQGJ/vKNn8fIod/ZE3Fb6QaresmrqEF7t9PFmJ0POEc51WeHMlk8Zbl
         bMNAPvNwAJFQA8j3NvKp0dtVsBCABpvFt6wo/ek9mrT8AU25mNjrUF5++B6/7BRByK3b
         8SZzOmP0M0chB2lG6HE2hsCFwj8Pn2a+cGSXp5zU2HnLgMS6WIzdJlzGwbV9Wz5DO5MD
         CUJ30jD3UQKY4jjvKL59UoKBB0hYbMviVdQ1kh84/nmYw698z1WYZv9dTWk8mjN5LJNr
         mmyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=fkdfoOl2U+ksDRlxAGj37ooDJZJBKaU2UDmKNiHUqAA=;
        b=4NDG0XvcmwMe28G7EP5pLYNMbtCcy7kKJR6JVtcviDSEDTVJSzUVKPtME+ZUkihPT5
         XSWFfau0ED9u3UPPVboZvonru1hMLqdBg5WWx04+x203WpawQUNzERIFinOp74OdRTjz
         9q7c2fntJZFPSGCz4qN5/3GoafuEtUzPHyIdM/Gsga0x1TRRCejfLNEZKiR0nfssBFR4
         uekbJhYogchoVVmN4LuABVA8HkMCbvA0hjKf9TvrBVRAVpoTdo08EvMWFRCgIY2VB5cw
         mxVOTtxtkhCVTRT27Ex4r1MFIJWZDGMCVCyHNvEavUgUNnVnEzKO1Qrqodx5qNibDcw2
         96Pw==
X-Gm-Message-State: AJIora8+++j6f8QHO/mS4sdtv2xD8jJxUfjHUYaVDvT+f3LX1lS6ABmy
	Fv+0ZCM/R9WU6fqrh6xn/RE=
X-Google-Smtp-Source: AGRyM1sr7Uhz5fberXugJbrfigFE4tpnmrbd7LZMdAQLw+O01vdC+JvGSroilCZgnffEq6RfTUzryw==
X-Received: by 2002:a5d:604a:0:b0:21b:9517:66eb with SMTP id j10-20020a5d604a000000b0021b951766ebmr15750413wrt.494.1656397643446;
        Mon, 27 Jun 2022 23:27:23 -0700 (PDT)
Message-ID: <5495bb68-6e83-a64f-7d55-d2d973733e18@gmail.com>
Date: Tue, 28 Jun 2022 09:27:20 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-3-burzalodowa@gmail.com>
 <61094b37-4075-e362-7fc6-ce28f965bb05@suse.com>
 <cb50eba7-bbc5-3100-2be3-98587766c240@gmail.com>
 <a5e958aa-a616-115e-29e2-fe410b708583@suse.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <a5e958aa-a616-115e-29e2-fe410b708583@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 6/27/22 11:43, Jan Beulich wrote:
> On 27.06.2022 09:54, xenia wrote:
>> On 6/27/22 10:11, Jan Beulich wrote:
>>> On 26.06.2022 23:11, Xenia Ragiadakou wrote:
>>>> The function vm_event_wake() is referenced only in vm_event.c.
>>>> Change the linkage of the function from external to internal by adding
>>>> the storage-class specifier static to the function definition.
>>>>
>>>> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
>>> Actually also for patch 1 - this is slightly confusing, as the title
>>> talks about 8.7. At first I suspected a typo. May I suggest to add
>>> "also" (which I think could be easily done while committing)?
>>
>> This is actually a violation of MISRA C 2012 Rule 8.7 (Advisory), which
>> states that functions referenced in only one translation unit should not
>> be defined with external linkage.
>> This violation triggers a MISRA C 2012 Rule 8.4 violation warning,
>> because the function is defined with external linkage without a visible
>> declaration at the point of definition.
>> I thought that this does not make it a violation of MISRA C 2012 Rule 8.4.
> I think this is a violation of both rules. It would be a violation of
> only 8.7 if the function had a declaration, but still wasn't used
> outside its defining CU.

So you are suggesting to change the line
"This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation 
warning."
to
"Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 
violation warning."

I will wait one more day in case there is input on the last patch, and 
then I will send a v2 with the above change.

Xenia



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 06:28:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 06:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356978.585363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64hx-00013P-6S; Tue, 28 Jun 2022 06:28:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356978.585363; Tue, 28 Jun 2022 06:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64hx-00013I-3Y; Tue, 28 Jun 2022 06:28:21 +0000
Received: by outflank-mailman (input) for mailman id 356978;
 Tue, 28 Jun 2022 06:28:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M1EI=XD=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o64hw-00013A-AZ
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 06:28:20 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 7f87094d-f6ab-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 08:28:18 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E4571152B;
 Mon, 27 Jun 2022 23:28:17 -0700 (PDT)
Received: from [10.57.39.195] (unknown [10.57.39.195])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BCC7C3F5A1;
 Mon, 27 Jun 2022 23:28:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f87094d-f6ab-11ec-bd2d-47488cf2e6aa
Message-ID: <9aca51aa-16f5-53de-46b1-94785faccecb@arm.com>
Date: Tue, 28 Jun 2022 08:28:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 6/7] xsm/flask: Use unsigned int instead of plain unsigned
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Jason Andryuk <jandryuk@gmail.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-7-michal.orzel@arm.com>
 <c2b68766-9608-5910-7937-b7747ad189e7@apertussolutions.com>
From: Michal Orzel <michal.orzel@arm.com>
In-Reply-To: <c2b68766-9608-5910-7937-b7747ad189e7@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Daniel,

On 27.06.2022 21:19, Daniel P. Smith wrote:
> 
> On 6/27/22 09:15, Michal Orzel wrote:
>> This is just for the style and consistency reasons as the former is
>> being used more often than the latter.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>>  xen/xsm/flask/ss/avtab.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/xsm/flask/ss/avtab.c b/xen/xsm/flask/ss/avtab.c
>> index 017f5183de..9761d028d8 100644
>> --- a/xen/xsm/flask/ss/avtab.c
>> +++ b/xen/xsm/flask/ss/avtab.c
>> @@ -349,7 +349,7 @@ int avtab_read_item(struct avtab *a, void *fp, struct policydb *pol,
>>      struct avtab_key key;
>>      struct avtab_datum datum;
>>      int i, rc;
>> -    unsigned set;
>> +    unsigned int set;
>>  
>>      memset(&key, 0, sizeof(struct avtab_key));
>>      memset(&datum, 0, sizeof(struct avtab_datum));
> 
> Is this not v2? Jason gave a Rb on the similar patch if I am not mistaken.
> 
No, it is not. This applies to all the patches in this series, which was pushed as
a new one due to the different justifications/commit titles/commit messages (see the cover letter).

> v/r,
> dps

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 06:35:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 06:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356988.585374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64oo-0002aJ-2a; Tue, 28 Jun 2022 06:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356988.585374; Tue, 28 Jun 2022 06:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o64on-0002aC-VV; Tue, 28 Jun 2022 06:35:25 +0000
Received: by outflank-mailman (input) for mailman id 356988;
 Tue, 28 Jun 2022 06:35:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+rg=XD=arm.com=Jiamei.Xie@srs-se1.protection.inumbo.net>)
 id 1o64om-0002a6-6Z
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 06:35:24 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-eopbgr150050.outbound.protection.outlook.com [40.107.15.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ba6e4f1-f6ac-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 08:35:21 +0200 (CEST)
Received: from DBBPR09CA0041.eurprd09.prod.outlook.com (2603:10a6:10:d4::29)
 by DBBPR08MB6025.eurprd08.prod.outlook.com (2603:10a6:10:203::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 06:35:19 +0000
Received: from DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:d4:cafe::27) by DBBPR09CA0041.outlook.office365.com
 (2603:10a6:10:d4::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Tue, 28 Jun 2022 06:35:19 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT007.mail.protection.outlook.com (100.127.142.161) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 06:35:19 +0000
Received: ("Tessian outbound d3318d0cda7b:v120");
 Tue, 28 Jun 2022 06:35:19 +0000
Received: from ce655a689f97.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 72D7B2CA-A58C-45DF-9651-F7C8071684F8.1; 
 Tue, 28 Jun 2022 06:35:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ce655a689f97.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 06:35:13 +0000
Received: from AS8PR08MB7696.eurprd08.prod.outlook.com (2603:10a6:20b:523::11)
 by HE1PR0801MB1642.eurprd08.prod.outlook.com (2603:10a6:3:86::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 06:35:10 +0000
Received: from AS8PR08MB7696.eurprd08.prod.outlook.com
 ([fe80::69e7:f6d2:15e6:d90]) by AS8PR08MB7696.eurprd08.prod.outlook.com
 ([fe80::69e7:f6d2:15e6:d90%6]) with mapi id 15.20.5373.016; Tue, 28 Jun 2022
 06:35:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ba6e4f1-f6ac-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=CDes4tHhCMeWc4X8urhDz2aoEsMJoKcyZrt4jVZ/mIDj1GxQiNoSWmut7FB91MGyHOk9iQ93yPeelgFCGMBPMYtyP871fAZhIOqLNSFYLpavUz5WC6BTI4BPQWg6gwfoSFRNwFpa1nMMLPshcoQVFNFG2o0+AIsFxDFNhCdAsf6jaYXWugurzg8APdKXeawdt8Y3Peq47j42Le1chpC8XoRVkm37iKk8YxNFX6qwcsIw0JXrgatSQMYnEyiigvTbXOBuoGGQC0Daf6dctnfH9lo2+hHH6C2H5tcFGqjnOUCIEMiHRvDNef7E58E3m0PrPaeN6e0gA0kY62YmARkLCg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UH+cqtypElYfY90Wuyq0LIWrsJrUyePK4fCJonffO00=;
 b=QHdixmv1ESTXoG/zDvOyCWo/TyM+sC8bpqxr5F8l9Bh2wdYLg5wkGJpgPJFt7CvSVXoiPG+osM7wRKsiqtZo4iNfb+tUyhl0sD+bhqiW2gl5fp9yTY/93uHgbaEpgdcb/OLel7UGLWDDtpwFkj/KLrJ7RlHmeWRbIyuhzAcdC7q8Lmcr5/OC0+kcAqUUL52MvLh7MHD2xASzi9mVXSMJIAmc4I/Z6GJinDqYU4b6Ot3c1u8kB1LJsR5ekvPtq3zTEfQnl2UPyphV1bmWgcpEFyspLVrZD3oQzOdcHLpuLMWP6QidWjCe0f5ZfUtq1lW8HhHtllIDf1NkvoU7tBUJPg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UH+cqtypElYfY90Wuyq0LIWrsJrUyePK4fCJonffO00=;
 b=IKLMC2LJRTJWAepXmrb3G4NYoFOOC8nABGgVbsWDbUuA0os0KKVXrZjJY770HR3zBEoVxVo+Q0vL4waKTwXxR6M7Znqu1EUYx5vZ8aBCTup1nCKAb2jHD/2dc38/Hj/A/UA/Rlfw4ro+UDLw6ipYAMqr0teAoIbtVN9+aRD8NOY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=af+ZZ7oV3AJzVHaHlXTSwGhObS4zDiY3qSr57OKdqPZfOC4L5HVvcQQ9bmHBH0Qx2Qzu4aKLAjzbWURKoYyPfuQmhkkYatin6NqutdWkStMLJlXr20gX88LqdfXb7f12DgFkgVzCYD5uiir8D70tc3T8oFnmRnTTUGv4lY6Yi/HROxYObG9kVAs0+k4xPgwEmlzBdnOQtFmRCRsfZZTYgJd8R+pErZ2FjLER8Es2v3xd24Ld2CTNlbQNSyFzYQwpZwASweufCjJMIKYc1v1eXXCscDf8fUNqlPk7QhKOrtEsp7oCpiA7QE7rbyjTsfujm6ou8H3bm7ZXaWa7pI/hag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UH+cqtypElYfY90Wuyq0LIWrsJrUyePK4fCJonffO00=;
 b=PEcaaSZlw2QG+EOJE6aTARl3eSIERmeNFyefugKUS3Mty8pF4hG/KkWmZeUp+yN1jN/Y+Q59MbzxEj3Xi31GyP5EAnnraJJu5aC/UmwUDu5Uuxqp/6zxLKQzEo0pKUQz2hK7awlh2YhRtVqJWpNQH6WKom2p+ACWxySMZuHdRn/pvFNHH91FLLSZBZr8Vt8OSKpcIm81LeNUReArYr29KIZisIiB8lDpO9kCsXky5wCKCCqnIwc1IfjzV1NUVJpzCP/Vq3uIm4e5wDdlCi4T+iCeAaPXpSfMPnWF5P25OdGnkg+jhlvb0rj9O1uP3NsHG9dL7oVF11qOumoGdtDJaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Jiamei Xie <Jiamei.Xie@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>
Subject: RE: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Topic: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Index: AQHYidHeBWnNLqFeFEKI7B/maNA44q1jD7IAgAFN1oA=
Date: Tue, 28 Jun 2022 06:35:09 +0000
Message-ID:
 <AS8PR08MB76964C46AE16A6CEF6DE221892B89@AS8PR08MB7696.eurprd08.prod.outlook.com>
References: <20220627025809.1985720-1-jiamei.xie@arm.com>
 <cbb7a231-0f61-7170-3fc4-d4ae55398f3a@xen.org>
In-Reply-To: <cbb7a231-0f61-7170-3fc4-d4ae55398f3a@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A830DB3F2C79624E866F1B99E7D1065E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 3585c526-3633-4c91-d439-08da58d05ebf
x-ms-traffictypediagnostic:
	HE1PR0801MB1642:EE_|DBAEUR03FT007:EE_|DBBPR08MB6025:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1642
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	85d82fa2-20be-4be8-726e-08da58d05925
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 06:35:19.2418
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3585c526-3633-4c91-d439-08da58d05ebf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6025

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 27 June 2022 18:36
> To: Jiamei Xie <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Wei Chen <Wei.Chen@arm.com>
> Subject: Re: [PATCH] xen/arm: avoid extra caclulations when setting vtimer
> in context switch
> 
> Hi Jiami
> 
> Title: s/caclulations/calculations/
> 
> However, I think the title should mention the overflow rather than the
> extra calculations. The former is more important the latter.
> 
I will change the title to " xen/arm: avoid overflow when setting vtimer in context switch"

> On 27/06/2022 03:58, Jiamei Xie wrote:
> > virt_vtimer_save is calculating the new time for the vtimer in:
> > "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> > - boot_count".
> > In this formula, "cval + offset" might cause uint64_t overflow.
> > Changing it to "v->domain->arch.virt_timer_base.offset - boot_count +
> > v->arch.virt_timer.cval" can reduce the possibility of overflow
> 
> This read strange to me. We want to remove the overflow completely not
> reducing it. The overflow is completely removed by converting the
> "offset - bount_count" to ns upfront.
> 
> AFAICT, the commit message doesn't explain that.
Thanks for pointing out that. How about putting the commit message like the below:
    xen/arm: avoid overflow when setting vtimer in context switch

    virt_vtimer_save is calculating the new time for the vtimer in:
    "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
    - boot_count".
    In this formula, "cval + offset" might cause uint64_t overflow.
    Changing it to "ticks_to_ns(v->domain->arch.virt_timer_base.offset -
    boot_count) + ticks_to_ns(v->arch.virt_timer.cval)" can avoid overflow,
    and "ticks_to_ns(arch.virt_timer_base.offset - boot_count)" will be
    always the same, which has been caculated in domain_vtimer_init.
    Introduce a new field virt_timer_base.nanoseconds to store this value
    for arm in struct arch_domain, so we can use it directly.
> 
> > , and
> > "arch.virt_timer_base.offset - boot_count" will be always the same,
> > which has been caculated in domain_vtimer_init. Introduce a new field
> > vtimer_offset.nanoseconds to store this value for arm in struct
> > arch_domain, so we can use it directly and extra caclulations can be
> > avoided.
> >
> > This patch is enlightened from [1].
> >
> > Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
> >
> > [1] https://www.mail-archive.com/xen-
> devel@lists.xenproject.org/msg123139.htm
> 
> This link doesn't work. But I would personally remove it from the commit
> message (or add ---) because it doesn't bring value (this patch looks
> like a v2 to me).
Sorry, a 'l' is missing at the end of the link.  The link is  https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg123139.html .
I will put it after --- in v3.
> 
> > ---
> > xen/arch/arm/include/asm/domain.h | 4 ++++
> >   xen/arch/arm/vtimer.c             | 6 ++++--
> >   2 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/arch/arm/include/asm/domain.h
> b/xen/arch/arm/include/asm/domain.h
> > index ed63c2b6f9..94fe5b6444 100644
> > --- a/xen/arch/arm/include/asm/domain.h
> > +++ b/xen/arch/arm/include/asm/domain.h
> > @@ -73,6 +73,10 @@ struct arch_domain
> >           uint64_t offset;
> >       } virt_timer_base;
> >
> > +    struct {
> > +        int64_t nanoseconds;
> 
> This should be s_time_t to match the argument of set_timer() and return
> of ticks_to_ns().
> 
> > +    } vtimer_offset;
> 
> Why are you adding a new structure rather than re-using virt_timer_base?
Sure, I'll add this field in virt_timer_base.
     struct {
         uint64_t offset;
         s_time_t nanoseconds;
     } virt_timer_base;
> 
> > +
> >       struct vgic_dist vgic;
> >
> >       struct vuart {
> > diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> > index 6b78fea77d..54161e5fea 100644
> > --- a/xen/arch/arm/vtimer.c
> > +++ b/xen/arch/arm/vtimer.c
> > @@ -64,6 +64,7 @@ int domain_vtimer_init(struct domain *d, struct
> xen_arch_domainconfig *config)
> >   {
> >       d->arch.virt_timer_base.offset = get_cycles();
> >       d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset -
> boot_count);
> > +    d->arch.vtimer_offset.nanoseconds = d->time_offset.seconds;
> 
> Hmmm... I find odd to assign a field "nanoseconds" to "seconds". I would
> suggest to re-order so you first set nanoseconds and then set seconds.
> 
> This will make more obvious that this is not a mistake and "seconds"
> will be closer to the do_div() below.
Is it ok to remove do_div and write like below?
    d->arch.virt_timer_base.nanoseconds =
        ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
    d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds /
                              1000000000;

Best wishes
Jiamei Xie

> 
> >       do_div(d->time_offset.seconds, 1000000000);
> >
> >       config->clock_frequency = timer_dt_clock_frequency;
> > @@ -144,8 +145,9 @@ void virt_timer_save(struct vcpu *v)
> >       if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
> >           !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
> >       {
> > -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v-
> >arch.virt_timer.cval +
> > -                  v->domain->arch.virt_timer_base.offset - boot_count));
> > +        set_timer(&v->arch.virt_timer.timer,
> > +                  v->domain->arch.vtimer_offset.nanoseconds +
> > +                  ticks_to_ns(v->arch.virt_timer.cval));
> >       }
> >   }
> >
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 07:29:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 07:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.356996.585385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o65fJ-0008Ld-3j; Tue, 28 Jun 2022 07:29:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 356996.585385; Tue, 28 Jun 2022 07:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o65fJ-0008LW-0K; Tue, 28 Jun 2022 07:29:41 +0000
Received: by outflank-mailman (input) for mailman id 356996;
 Tue, 28 Jun 2022 07:29:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pAZO=XD=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o65fH-0008LO-IC
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 07:29:39 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70055.outbound.protection.outlook.com [40.107.7.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10c41068-f6b4-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 09:29:38 +0200 (CEST)
Received: from AM7PR02CA0005.eurprd02.prod.outlook.com (2603:10a6:20b:100::15)
 by GV1PR08MB8107.eurprd08.prod.outlook.com (2603:10a6:150:94::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 07:29:31 +0000
Received: from AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::b3) by AM7PR02CA0005.outlook.office365.com
 (2603:10a6:20b:100::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.21 via Frontend
 Transport; Tue, 28 Jun 2022 07:29:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT050.mail.protection.outlook.com (10.152.17.47) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 07:29:30 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Tue, 28 Jun 2022 07:29:30 +0000
Received: from 5d84a693d802.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DBBB24ED-8CAC-41D5-8021-83B239905BFF.1; 
 Tue, 28 Jun 2022 07:29:24 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5d84a693d802.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 07:29:24 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS4PR08MB8244.eurprd08.prod.outlook.com (2603:10a6:20b:51d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 07:29:22 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Tue, 28 Jun 2022
 07:29:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10c41068-f6b4-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=EmT+NiVruQBaT2VZAu24oTlamV+iEIRcEfldtRKoAxcHXnY6K41853kIzOMMkY2UBa/6IaN21dGW8c4441fN+1I6rwv43lexbTkhJqtOXnccRqYBgfGj38C1N7ToaXyRfxSGR+rSHEWUF2fc3QLQoDpszP3zKh+eK1YHcj5PLXAPuL5YJZ35ZXRyMxPx5xaF9GRm/i0skA/rEFrcz586Vp2DadTkOP9zUQ6Ey2q3ufUMJT2PqXQ1e5cJ9wKXhZuD7OxI2/+WjgBWEB7pzFtKnmVyIxHi4QgXGml/ae4qKOwSIHJIYr/dyGRkbNQTHxLqKVmYvAEkjwsJhvXfBRR3vg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6XOUgNQhhdpx8lS4C8GrtRkhs9t6JQRMFzK7XI5APXg=;
 b=RmpmpTxvN2mPxGzfMdh8SnEcAKqaGrwzzaz+nEC0slD+qGSWT2tl3ZGWds+ut1bTz+5NrFZGkuFjBULS1tJdZqMhNKEdc1NhYmuMMAbTEOVg9kVJ6khuqdlJOvFmRP2nqdh3oryZuV/68UEVI+HQJkU0ffDBL3HbBOKCeFXQkclS1OKPNQaBT2T9iOcuHBeRWqf+Pam26+nJaHCZ1cl8zxnFpyrQ9dpTnP996ZDiFXaMzYyDQzQlIzziyDm50l7yxpWB612K3qxaFmOCTrzendcRB39HFEiy2mPKwa3sXANDIbJ202yvhd1prUYXzHov1hlp7MCkm2vKWhAYt7jdsQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6XOUgNQhhdpx8lS4C8GrtRkhs9t6JQRMFzK7XI5APXg=;
 b=o+Y/GpnMs4Qd7DgrURJfwnblmzrx9P6c22BcB4oWokcLChBg+Wfe3qKI1Cc/Tev7bsPYw0n+iATViwEwAut55XOp9B7YG92KieDExe9Vsl8AJwYRW+oNxetiTFlsa1crR0XxGSqqp+TGRh4DECwfX0Lfzj0bkt7wYYc4C8SqB9o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5d6f6f73bb43da19
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CZ5GSMhLuffUNVYJOBkdmgQdV58RP/awwsEn5Sm4+KF3/wSuV27ZRey4XuowGxvHQFZ+HYloxCc+BEU/k1TD3RLRTy87/9JP9wmIlbuXwAA7oYF84GnqnjIEf32PI7mepfaOGw1/4L4iPBLhjOnGu0a2iLN1vI2YSjMXmyzeGp4/oBqCwyHILi/W0epJ7PFe/efbVJQ/SznlH8LCQvyr6ZqY4dy+ZmWeO9PyE/jicSW0/sJweLjSzv0Q2326WqUmXJbYJKDSrT+6KuJ1CmvDxpeWg/UDZ3/iJTklgLjUwovalCXtmFpQqgEPbB6x75PevbSEENGYUU+zRN0Gs2BeZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6XOUgNQhhdpx8lS4C8GrtRkhs9t6JQRMFzK7XI5APXg=;
 b=hv3gxJFUWQ9nz/8Ow9+flKjnHf3cP1D85Y/9Hdg4riZcoh26NpXiJP6D5pzOOgGwA2ys8wJJrzs4SltrBflaYN0AROCV1x+9hC9tnvi+Zagnd58M5iB3uuJW0j15H6nr7SDYatPvj+snPY7ajc0c2sd7lYE8B26aTkdmalY96cGpCCXVuwoTcRXr6t9NoYcAFMY7jgCLW/o5Sc+yaH2BL3p7yxBEYoSvj5/EQER+1H8dn9CoByRlON65nLUZjR+Oi/FQNL+Wa+bx/N9LTmqMFYHnUsrNa3RJRIOlmA79g4ojVdNw1me+PTEbAMKDvTaBBnAn9mREWLGGSlPvcPSKgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jiamei Xie <Jiamei.Xie@arm.com>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei
 Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Topic: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Index: AQHYidHe3XYB0asC1UCP3Ythq1qeCq1jD7IAgAFPFoCAAA8lgA==
Date: Tue, 28 Jun 2022 07:29:21 +0000
Message-ID: <983E3136-4388-4249-90DC-FAFF18BC7724@arm.com>
References: <20220627025809.1985720-1-jiamei.xie@arm.com>
 <cbb7a231-0f61-7170-3fc4-d4ae55398f3a@xen.org>
 <AS8PR08MB76964C46AE16A6CEF6DE221892B89@AS8PR08MB7696.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB76964C46AE16A6CEF6DE221892B89@AS8PR08MB7696.eurprd08.prod.outlook.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 3f7122f0-3385-4bf7-9dcf-08da58d7f0be
x-ms-traffictypediagnostic:
	AS4PR08MB8244:EE_|AM5EUR03FT050:EE_|GV1PR08MB8107:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VRZvSMRcSDy5cB1JXTzKa+sov7V+mRQg/7ZpImwS8vho0WWknTNKggqLMfjGSl/0i1RowccFb59NJij9oVHw1mtY/SUuaGLWIVrvnpIIfV11mrOyKwjQmCuSrD9M+E1WyAISOqPdow0ZBZ6MnadUisSjCr690vL/feUw+RfjdjjYIxJqnadihSSG/pe1HNTWGwHlzBgbNyeG2/U0HokKkYDwbOnevLwYr+u/UdOKoBEjJrSXHKhmtwx/ZcMH77ElQwrEFs2y33rShR+W/172UVhaR7BVwSZZ8rVkKjmaVAeb8YhRMMihNJ56bP3E8YY99z0+dohbxkyxerGBophlaPOtRXVXdmpmLBF409iCNN0aIaGUE1J5r7W68OL+CVAamw3oZSUyFSSwTW8LSQ/P2kFkeAtslr93MtYTL5F3G4LiWL60Z7nQxCIT86SCM8qDz3VjmFrzBT5ZcxbPKYnwKEmLasZE/Sby+fLxwRfmx2a4UVmiT/0k0xIYQ5GkqvVq1P81KwoS9Et/tozeHkowOXIf6/UP0gTjB+yRJU4E2jK1mzuGevY8TP280UL6AH1C3hbfsP/LeU0i+TLhADjGSgO0+lY4gPi6WQoPTUBWQO6yWd1i9pjSFZzs37PosOS0I+EKKh7unmrtG66frJdqkSLhstNCgRP7/9bGkqsPDVsvZCEi7qv/2fZ3LjZaL2WnwOyQSLqaS5CA3r6GA9GKTzZ8NqXXWtUwFQzMygWt14lBG26N2yJLiBIMUN/9JaW1zbPj2gVuqs2qnVsKRPVwcMmr5qWx1+2E9NqyqV3RLSsYbIF0j5L4XTwqQ6QOGq2M1xa02HsqryntZS5TmHJ28HwsCr8NdJmmTV2CAI6MVdJVVt7N1soFyRnheqoponLr3hgwxLk8phJer2gd2mcGNQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(346002)(376002)(136003)(396003)(39860400002)(366004)(6512007)(36756003)(66446008)(53546011)(316002)(76116006)(41300700001)(38100700002)(66556008)(64756008)(8936002)(86362001)(2616005)(6486002)(83380400001)(33656002)(26005)(478600001)(6506007)(6862004)(2906002)(6636002)(54906003)(966005)(8676002)(66946007)(71200400001)(38070700005)(91956017)(5660300002)(122000001)(4326008)(186003)(37006003)(66476007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B61BCA94FDAE2049AF561893C67CD1F4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8244
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	430f288d-6241-48da-ee42-08da58d7eb91
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UVYn7RuhpkLF/k85HNoSF6H3wcGDXBNoOKEl701WlwuIXNlm7g9QzwsrjCcfxZ/OzMcMSXp2IyYWHAohXEvlqQuJM1xMp24slMl4Wo2NNTxx4tbJSVZvt2G/pFRshZe4gDRS+J/HR62hj2xoSxxNNvdj4BBEEVSAM3LSjdqA5OJVjRPrtxdURN7+zbktiD926HLNhz4xK6VeTiDkjJ6HKLRllR8JzdwH+wCTQPUgF71AepBwjlAlJl75LA93b0ygmQyOTMVxv6t4Q9/xTuRoqdL8qhzdPUiX1ISNN9sfd7yOQ9mOcIO+zKwXoKgtbi7umF3i8muxWUjYzWJcbsGPE1Aj+liZconAD/EUJTP6e4p2eExWde8tNKIiHFsyvBKildsZvpe4p/5OJi7IUwiWA7+tFzrLZjGGPz2ctLV8N/W7Biz7xWM1GsM3IQcKnvc9JZJgdqsJ0xN/p+VP7jqd8KvvGH5IUeDNlfPnBWou4+TFNBaWCBPguLcaI8n6Q+8dla8oC/uJraRXUGqy7nPxzC8+KLLJ8g+nNCROaX0X47LtyDBwOv3P92zfVOqAivA6W0KQYTgtw3TpGi2sApqRlQBRFs0dHLK3jQtHLnqLH8rc0GMf4KFAevZjRrX8giDrlEcd3AToIR+u++50dYWJ9Af8DLQN2ZQQZVCUdx0/8d8Hd+aJP0wuKwmYwOtc3mTZRZSxiU2RAf3+0UbtNq6EaudJHJjR9PtHjXb7/tWQaij5BNNcCL2rGPAqUs0EpEe+gjQ+TVFGr7WoEzdW2Po6ZtKKsdJ9XhcP7MUy6htWuqdfnFUoq4w47jMOl5LP0QQYVVwV8JjI9cvTDleRb42/qyTehMTBVm3xWjvwTapP1IE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(396003)(39860400002)(346002)(376002)(36840700001)(46966006)(40470700004)(2906002)(81166007)(83380400001)(82310400005)(82740400003)(47076005)(186003)(54906003)(40480700001)(336012)(36756003)(5660300002)(40460700003)(36860700001)(33656002)(8936002)(70206006)(2616005)(37006003)(70586007)(6862004)(4326008)(41300700001)(6512007)(8676002)(478600001)(6506007)(356005)(6486002)(966005)(6636002)(86362001)(26005)(316002)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 07:29:30.5802
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f7122f0-3385-4bf7-9dcf-08da58d7f0be
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8107

Hi Jiamei,

> On 28 Jun 2022, at 07:35, Jiamei Xie <Jiamei.Xie@arm.com> wrote:
> 
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 2022年6月27日 18:36
>> To: Jiamei Xie <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis
>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>> <Volodymyr_Babchuk@epam.com>; Wei Chen <Wei.Chen@arm.com>
>> Subject: Re: [PATCH] xen/arm: avoid extra caclulations when setting vtimer
>> in context switch
>> 
>> Hi Jiami
>> 
>> Title: s/caclulations/calculations/
>> 
>> However, I think the title should mention the overflow rather than the
>> extra calculations. The former is more important the latter.
>> 
> I will change the title to " xen/arm: avoid overflow when setting vtimer in context switch"
> 
>> On 27/06/2022 03:58, Jiamei Xie wrote:
>>> virt_vtimer_save is calculating the new time for the vtimer in:
>>> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
>>> - boot_count".
>>> In this formula, "cval + offset" might cause uint64_t overflow.
>>> Changing it to "v->domain->arch.virt_timer_base.offset - boot_count +
>>> v->arch.virt_timer.cval" can reduce the possibility of overflow
>> 
>> This read strange to me. We want to remove the overflow completely not
>> reducing it. The overflow is completely removed by converting the
>> "offset - bount_count" to ns upfront.
>> 
>> AFAICT, the commit message doesn't explain that.
> Thanks for pointing out that. How about putting the commit message like the below:
> xen/arm: avoid overflow when setting vtimer in context switch
> 
> virt_vtimer_save is calculating the new time for the vtimer in:
> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> - boot_count".
> In this formula, "cval + offset" might cause uint64_t overflow.
> Changing it to "ticks_to_ns(v->domain->arch.virt_timer_base.offset -
> boot_count) + ticks_to_ns(v->arch.virt_timer.cval)" can avoid overflow,
> and "ticks_to_ns(arch.virt_timer_base.offset - boot_count)" will be
> always the same, which has been caculated in domain_vtimer_init.
> Introduce a new field virt_timer_base.nanoseconds to store this value
> for arm in struct arch_domain, so we can use it directly.
>> 
>>> , and
>>> "arch.virt_timer_base.offset - boot_count" will be always the same,
>>> which has been caculated in domain_vtimer_init. Introduce a new field
>>> vtimer_offset.nanoseconds to store this value for arm in struct
>>> arch_domain, so we can use it directly and extra caclulations can be
>>> avoided.
>>> 
>>> This patch is enlightened from [1].
>>> 
>>> Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
>>> 
>>> [1] https://www.mail-archive.com/xen-
>> devel@lists.xenproject.org/msg123139.htm
>> 
>> This link doesn't work. But I would personally remove it from the commit
>> message (or add ---) because it doesn't bring value (this patch looks
>> like a v2 to me).
> Sorry, a 'l' is missing at the end of the link. The link is https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg123139.html .
> I will put it after --- in v3.
>> 
>>> ---
>>> xen/arch/arm/include/asm/domain.h | 4 ++++
>>> xen/arch/arm/vtimer.c | 6 ++++--
>>> 2 files changed, 8 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/xen/arch/arm/include/asm/domain.h
>> b/xen/arch/arm/include/asm/domain.h
>>> index ed63c2b6f9..94fe5b6444 100644
>>> --- a/xen/arch/arm/include/asm/domain.h
>>> +++ b/xen/arch/arm/include/asm/domain.h
>>> @@ -73,6 +73,10 @@ struct arch_domain
>>> uint64_t offset;
>>> } virt_timer_base;
>>> 
>>> + struct {
>>> + int64_t nanoseconds;
>> 
>> This should be s_time_t to match the argument of set_timer() and return
>> of ticks_to_ns().
>> 
>>> + } vtimer_offset;
>> 
>> Why are you adding a new structure rather than re-using virt_timer_base?
> Sure, I'll add this field in virt_timer_base.
> struct {
> uint64_t offset;
> s_time_t nanoseconds;
> } virt_timer_base;
>> 
>>> +
>>> struct vgic_dist vgic;
>>> 
>>> struct vuart {
>>> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
>>> index 6b78fea77d..54161e5fea 100644
>>> --- a/xen/arch/arm/vtimer.c
>>> +++ b/xen/arch/arm/vtimer.c
>>> @@ -64,6 +64,7 @@ int domain_vtimer_init(struct domain *d, struct
>> xen_arch_domainconfig *config)
>>> {
>>> d->arch.virt_timer_base.offset = get_cycles();
>>> d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset -
>> boot_count);
>>> + d->arch.vtimer_offset.nanoseconds = d->time_offset.seconds;
>> 
>> Hmmm... I find odd to assign a field "nanoseconds" to "seconds". I would
>> suggest to re-order so you first set nanoseconds and then set seconds.
>> 
>> This will make more obvious that this is not a mistake and "seconds"
>> will be closer to the do_div() below.
> Is it ok to remove do_div and write like below?
> d->arch.virt_timer_base.nanoseconds =
> ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
> d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds /
> 1000000000;

The implementation must use do_div to properly handle the division from a
64bit by a 32bit on arm32 otherwise the code will be a lot slower.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 08:18:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 08:18:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357009.585402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66Q2-00064r-Bc; Tue, 28 Jun 2022 08:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357009.585402; Tue, 28 Jun 2022 08:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66Q2-00064k-8m; Tue, 28 Jun 2022 08:17:58 +0000
Received: by outflank-mailman (input) for mailman id 357009;
 Tue, 28 Jun 2022 08:17:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+rg=XD=arm.com=Jiamei.Xie@srs-se1.protection.inumbo.net>)
 id 1o66Q1-00064e-3N
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 08:17:57 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2051.outbound.protection.outlook.com [40.107.20.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cfc9bb61-f6ba-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 10:17:55 +0200 (CEST)
Received: from AM6PR04CA0033.eurprd04.prod.outlook.com (2603:10a6:20b:92::46)
 by VI1PR08MB3118.eurprd08.prod.outlook.com (2603:10a6:803:46::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Tue, 28 Jun
 2022 08:17:53 +0000
Received: from AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::bb) by AM6PR04CA0033.outlook.office365.com
 (2603:10a6:20b:92::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.21 via Frontend
 Transport; Tue, 28 Jun 2022 08:17:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT007.mail.protection.outlook.com (10.152.16.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 08:17:52 +0000
Received: ("Tessian outbound 6f53897bcd4e:v120");
 Tue, 28 Jun 2022 08:17:51 +0000
Received: from 0592b720cbb0.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 91CE5B72-D6BA-4381-BBFE-6050126E76FF.1; 
 Tue, 28 Jun 2022 08:17:41 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0592b720cbb0.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 08:17:41 +0000
Received: from AS8PR08MB7696.eurprd08.prod.outlook.com (2603:10a6:20b:523::11)
 by DB9PR08MB7147.eurprd08.prod.outlook.com (2603:10a6:10:2cb::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 08:17:39 +0000
Received: from AS8PR08MB7696.eurprd08.prod.outlook.com
 ([fe80::69e7:f6d2:15e6:d90]) by AS8PR08MB7696.eurprd08.prod.outlook.com
 ([fe80::69e7:f6d2:15e6:d90%6]) with mapi id 15.20.5373.016; Tue, 28 Jun 2022
 08:17:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfc9bb61-f6ba-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Eb1rlAdfcdNnygh/6uw4WVfZBHV02e53RCPGymL7GIuFcFgDg7Y+XsN9R5IwghDtYyeF4/iYuIxxseyzihL6lTNfO3PJAv03qp5LBfhMDnP5SXbxIK+9AEKoRrfv/gEc3LVV6BrDGeX6AtfPWV7nGXe+aqjJAvNbU8n+C2zXMX81xfMtLFtqWZ3HDn0qxMZOvFjUUtQB4q6yvoXJ9AIm/FPopw76JF53UfgtjSVaSFY1FP1QGxAMNnT65iFz3j0iK7SWf1rrqNXdZW2YCw0TDEn2Vvp97XgvwhG86g/bhhdH5odcO1l/09/Go6AO7EEbdIq5TL+HW13FTOr2DTjmzQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hIAh/tt34y2FHhmXu+jB0X/mDmoFFAYal2qtxwfInFA=;
 b=NKGjR6rLTX2aNcrftGn4pBs1/28EyRvWfha+U5tl7WEHsLnLWXPavEklx3vC8+fkqwxjE9Va1rKo3xUr51+GaqCQgLzJi9HPUB3ko3XnBatMCusvyYNSVIm7nfvzWiLqTiwlS4sybeNYLpG0apsP8ALeY2MyWWGgCEZHLshioph/E55iu3GBSAC4BYPgbc6xjYtpcUHAvgg0UyMN9A/ki3NXCLE1zHYi1BlJX12eeO5akFHYkzyf9YDlbBhDhDTJRYgDaPi8OHSECe2HWHd5dPX1ni8eyB639w86qWXMlrOM2ac5ZKSV+slwVxcpX7MZRlIedmKs7NlmiCfcoi01kw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hIAh/tt34y2FHhmXu+jB0X/mDmoFFAYal2qtxwfInFA=;
 b=KQ0MvQdIHFQcEiWVkTMWVyXW5UZxq3nPhEnuVyhDhl5Rc3A4wWcXUPiObavoByabdYNl/rLZug8RLyzoQ2j8+2nsTm9640leVnLV1S2tH3paFv3S3PlObZcwyYVJnS4GCsYVh5KzL5TuCeTUTvNyK4dz0lfR6mHk9neQgv423IE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ScrLtuE4aY858b6mqgqgZFHe9K96+ko80KEPzk54ZaGZt4UGdCjvs/UKRDz9TeuAlNVY715oO2V+WoRzZjfixJHn44zcDXr0xh2x5AfihUtROLOW5Xysz8dwUZ40OZPdt7/0dleFZZmf3zXBSJEEyLxlk7X8yM/BJd1/mfrP6RrJTp7pxqdITKAF/KzQx2C4hob+aQaBDQRYnydqaoEys2A0qIGf+87X7uIeZqHU98oD3LlmXJ8rzGbJ2LhJIpdlm16FxoQka3eB7hbx6+QZ8UYk6jstvpwchDJN31ov8FAlsaB1n6s19858ND28WZjkVSPtb+LRTNywxt10maya9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hIAh/tt34y2FHhmXu+jB0X/mDmoFFAYal2qtxwfInFA=;
 b=MwFD2M3sE3+80NaOVLsqZ5YmIkYIU+wccA3GmIbovySbahAPwER7xuSwlqAvqyaHguQzTb4wHrzPtB6liWI10sIccidf/L1fKRzofqz63H9LaSjHQr5JLaJSach6XzrSecQ70Pr7wuSgpKIUYWZlPIDoR3GpZ3d/uy9zIgFX6FfibEL8C1FyJie7ZCsGrkvQfvSMHSoeSOSe4CO67SCS0dyu1qMyCikXuf6h6R0GJS2fmNZRRHiE+pkhMAQv4mkJrgX6vl8KaO5iHErEyRegfCFP1nUp4TMaOZUixxIUmEmZHsmGRO/lh7VT/TzKw0UpQI2+bBGoc6NDVcFYOGvSHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hIAh/tt34y2FHhmXu+jB0X/mDmoFFAYal2qtxwfInFA=;
 b=KQ0MvQdIHFQcEiWVkTMWVyXW5UZxq3nPhEnuVyhDhl5Rc3A4wWcXUPiObavoByabdYNl/rLZug8RLyzoQ2j8+2nsTm9640leVnLV1S2tH3paFv3S3PlObZcwyYVJnS4GCsYVh5KzL5TuCeTUTvNyK4dz0lfR6mHk9neQgv423IE=
From: Jiamei Xie <Jiamei.Xie@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei
 Chen <Wei.Chen@arm.com>
Subject: RE: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Topic: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in
 context switch
Thread-Index: AQHYidHeBWnNLqFeFEKI7B/maNA44q1jD7IAgAFN1oCAABBlgIAADOqA
Date: Tue, 28 Jun 2022 08:17:37 +0000
Message-ID:
 <AS8PR08MB7696A4F4FA945FA2F9C8A77E92B89@AS8PR08MB7696.eurprd08.prod.outlook.com>
References: <20220627025809.1985720-1-jiamei.xie@arm.com>
 <cbb7a231-0f61-7170-3fc4-d4ae55398f3a@xen.org>
 <AS8PR08MB76964C46AE16A6CEF6DE221892B89@AS8PR08MB7696.eurprd08.prod.outlook.com>
 <983E3136-4388-4249-90DC-FAFF18BC7724@arm.com>
In-Reply-To: <983E3136-4388-4249-90DC-FAFF18BC7724@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 44CB141685E18A45B70C29B289D97A68.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: a0fe32a0-e3ed-4a29-898d-08da58deb220
x-ms-traffictypediagnostic:
	DB9PR08MB7147:EE_|AM5EUR03FT007:EE_|VI1PR08MB3118:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 WNtAdKiPzzkCOn4ESS/r4/WdYuroqyjah97Pp5QkOPM1sO3dQCQQ1dIEMgKjjcBBVCBPIfKgS2L1opNTcK9r9Lb2NkB+puC8VOdOZaWWQfMfhckjXDISyhe2XdX5rA9YpE0aQItWIMkosYX9fNDC0cuGLCXUPNgC3+ir574ZuVTVIAVoe7ZM78rMBc73EFWHP9iNkwZW5BdMVEdj7efZu82XJh5U8iBBIVGL/J+DaZ5CVEBkhQugQYOHzMlLSHrbSe2f3zuZXRQ4qR4B7g2x6Pa0lEis0gB0zly1ezI4wZBEF1gqhyt0mkuBexNtAosHamEM3Z9N6OlQgyi9aXIly0SC49F6mDmImeqwQWIp4RiST/WJO09tMSFaWUX8bcXI5Az6D/bu726FrGZIA6hRdHMwGoYjzcwQ+FtNxcs0k78CdeXC2QUe/S6olMv6D6a4bFWjcoPU95aSAKxLvi9CTwfvO4y6HEy1Jclo9SgNwSq1O63b2W8QR812E9fZF8oNHjXgbQpX1kfeJHvQleBXKgCGUZeJwc7awEzdobw4Bk73ZhPYinaaYcMcR1Q38GnXH66wB09yIG7+GSuC6Ldz4fUBFdCEL+Ccva8Bk0avJync8gipPYkEctEiaLFgjq3BCLKs+nwQ3ALCBZ997RSQ8iot/CTEJfJfHXMcece0+FGSjZUTElQTMUp4+s7j+XZ+rVwH+g+QcCwEsvZiWhz+HoWP/TuIA83uLu4/1My+xqcBJe531QC9U0yiLBBguQ9Ue3PR2X/7CSMzNQJV2/jHC1W4+6KKQbUBf2PjJT/qDeckmuGFeC5WCLrfBoMD8mTK
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7696.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(136003)(366004)(396003)(346002)(376002)(76116006)(8676002)(66476007)(7696005)(8936002)(66946007)(66446008)(64756008)(83380400001)(26005)(38070700005)(66556008)(84970400001)(4326008)(55016003)(6506007)(186003)(71200400001)(33656002)(9686003)(6636002)(6862004)(478600001)(122000001)(5660300002)(52536014)(966005)(54906003)(316002)(2906002)(86362001)(53546011)(38100700002)(41300700001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7147
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5673388e-9dc6-416b-8286-08da58dea9b1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AF+dw4wQ3ct0nGTbMVAB6nc9Lnp8zaeL8lWqimehvwUqf/ZhhefkAzN9NwyD1TSinVCfeEJSAmpqw+pSI2I91KRGnNOJyO9RBss6uLmDU8A0iPN3a5eApnA3vZsEbOkgunOqc07FSGOZA/bGo30X0jOW2TPqsWfGNqpBIbW0+GXhFWYkoXtxHWN53iYs5ODrvo8XycLOHZ+fB+moCpsaUvXTG47UyzDfFHYHzIFFO34QCMqszBq6yukNyW3cM7EjJRx9Y/OYuKaagUi4vSPKfea3AzyuEaedNAPMsGoBtcSJCkWO2MA0SoJQfw16yq0aClqltbl0DzTX2VIBJh/Pk78ddQmABkxJANdvMtcH4Ui7OhZZYbNlFD19UjAWDpvyeydtJDWs16YML0GOpZjyFrqrHObO0EmwViBOFMHPIbrtdFtl6X3ahr5xC+8wTholeA953vDTBF7IFY89RjMOU4ysj2e0F53Oxz1kgVBdgKj4BaYoCvqsovE7P3C3SC54dygAjc/jTSpoSE7MhAMVuM+Qv+JteHxofiJ0WFsTrypStA3oumQHO8r+A2kOXfSz6N73RbKWm3siMu7D0W2Vi+XBBl77a0uvnp/l0kyRP9TH4MW1LpEen3ljeBC2ZPYDiGSdtQjlAe9EuzIUyGXxLMG0lC19kQqFJejmLuiB71wNeAnfcRhFYLXvXSJVqBRQhFBt8K0iFTF17rEoBWZDAgitKdjeaC7GCW+yAxp9dWv87jE2a+N291YFzF1df+hRJecJbkmlKEQQgMM+T+h7dt8r/+NTHwkdQy0JeZGsBaw5QdH2y3hBoEI7BVSadWvuQIaCgELo239b8l4hOsUBDA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(346002)(39860400002)(376002)(36840700001)(46966006)(40470700004)(33656002)(478600001)(36860700001)(6506007)(186003)(55016003)(84970400001)(41300700001)(26005)(86362001)(40480700001)(966005)(9686003)(336012)(5660300002)(7696005)(47076005)(40460700003)(8936002)(316002)(83380400001)(81166007)(356005)(6636002)(2906002)(82740400003)(54906003)(70586007)(8676002)(53546011)(6862004)(70206006)(52536014)(82310400005)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 08:17:52.0026
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a0fe32a0-e3ed-4a29-898d-08da58deb220
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3118

SGkgQmVydHJhbmQsDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogQmVy
dHJhbmQgTWFycXVpcyA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPg0KPiBTZW50OiAyMDIy5bm0
NuaciDI45pelIDE1OjI5DQo+IFRvOiBKaWFtZWkgWGllIDxKaWFtZWkuWGllQGFybS5jb20+DQo+
IENjOiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPjsgeGVuLWRldmVsQGxpc3RzLnhlbnBy
b2plY3Qub3JnOyBTdGVmYW5vDQo+IFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+
OyBWb2xvZHlteXIgQmFiY2h1aw0KPiA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5jb20+OyBXZWkg
Q2hlbiA8V2VpLkNoZW5AYXJtLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRDSF0geGVuL2FybTog
YXZvaWQgZXh0cmEgY2FjbHVsYXRpb25zIHdoZW4gc2V0dGluZyB2dGltZXINCj4gaW4gY29udGV4
dCBzd2l0Y2gNCj4gDQo+IEhpIEppYW1laSwNCj4gDQo+ID4gT24gMjggSnVuIDIwMjIsIGF0IDA3
OjM1LCBKaWFtZWkgWGllIDxKaWFtZWkuWGllQGFybS5jb20+IHdyb3RlOg0KPiA+DQo+ID4gSGkg
SnVsaWVuLA0KPiA+DQo+ID4+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+ID4+IEZyb206
IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+IFNlbnQ6IDIwMjLlubQ25pyIMjfm
l6UgMTg6MzYNCj4gPj4gVG86IEppYW1laSBYaWUgPEppYW1laS5YaWVAYXJtLmNvbT47IHhlbi1k
ZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZw0KPiA+PiBDYzogU3RlZmFubyBTdGFiZWxsaW5pIDxz
c3RhYmVsbGluaUBrZXJuZWwub3JnPjsgQmVydHJhbmQgTWFycXVpcw0KPiA+PiA8QmVydHJhbmQu
TWFycXVpc0Bhcm0uY29tPjsgVm9sb2R5bXlyIEJhYmNodWsNCj4gPj4gPFZvbG9keW15cl9CYWJj
aHVrQGVwYW0uY29tPjsgV2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+DQo+ID4+IFN1YmplY3Q6
IFJlOiBbUEFUQ0hdIHhlbi9hcm06IGF2b2lkIGV4dHJhIGNhY2x1bGF0aW9ucyB3aGVuIHNldHRp
bmcNCj4gdnRpbWVyDQo+ID4+IGluIGNvbnRleHQgc3dpdGNoDQo+ID4+DQo+ID4+IEhpIEppYW1p
DQo+ID4+DQo+ID4+IFRpdGxlOiBzL2NhY2x1bGF0aW9ucy9jYWxjdWxhdGlvbnMvDQo+ID4+DQo+
ID4+IEhvd2V2ZXIsIEkgdGhpbmsgdGhlIHRpdGxlIHNob3VsZCBtZW50aW9uIHRoZSBvdmVyZmxv
dyByYXRoZXIgdGhhbiB0aGUNCj4gPj4gZXh0cmEgY2FsY3VsYXRpb25zLiBUaGUgZm9ybWVyIGlz
IG1vcmUgaW1wb3J0YW50IHRoZSBsYXR0ZXIuDQo+ID4+DQo+ID4gSSB3aWxsIGNoYW5nZSB0aGUg
dGl0bGUgdG8gIiB4ZW4vYXJtOiBhdm9pZCBvdmVyZmxvdyB3aGVuIHNldHRpbmcgdnRpbWVyIGlu
DQo+IGNvbnRleHQgc3dpdGNoIg0KPiA+DQo+ID4+IE9uIDI3LzA2LzIwMjIgMDM6NTgsIEppYW1l
aSBYaWUgd3JvdGU6DQo+ID4+PiB2aXJ0X3Z0aW1lcl9zYXZlIGlzIGNhbGN1bGF0aW5nIHRoZSBu
ZXcgdGltZSBmb3IgdGhlIHZ0aW1lciBpbjoNCj4gPj4+ICJ2LT5hcmNoLnZpcnRfdGltZXIuY3Zh
bCArIHYtPmRvbWFpbi0+YXJjaC52aXJ0X3RpbWVyX2Jhc2Uub2Zmc2V0DQo+ID4+PiAtIGJvb3Rf
Y291bnQiLg0KPiA+Pj4gSW4gdGhpcyBmb3JtdWxhLCAiY3ZhbCArIG9mZnNldCIgbWlnaHQgY2F1
c2UgdWludDY0X3Qgb3ZlcmZsb3cuDQo+ID4+PiBDaGFuZ2luZyBpdCB0byAidi0+ZG9tYWluLT5h
cmNoLnZpcnRfdGltZXJfYmFzZS5vZmZzZXQgLSBib290X2NvdW50ICsNCj4gPj4+IHYtPmFyY2gu
dmlydF90aW1lci5jdmFsIiBjYW4gcmVkdWNlIHRoZSBwb3NzaWJpbGl0eSBvZiBvdmVyZmxvdw0K
> >>
> >> This reads strange to me. We want to remove the overflow completely, not
> >> reduce it. The overflow is completely removed by converting the
> >> "offset - boot_count" to ns upfront.
> >>
> >> AFAICT, the commit message doesn't explain that.
> > Thanks for pointing that out. How about putting the commit message like
> the below:
> > xen/arm: avoid overflow when setting vtimer in context switch
> >
> > virt_vtimer_save is calculating the new time for the vtimer in:
> > "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> > - boot_count".
> > In this formula, "cval + offset" might cause uint64_t overflow.
> > Changing it to "ticks_to_ns(v->domain->arch.virt_timer_base.offset -
> > boot_count) + ticks_to_ns(v->arch.virt_timer.cval)" can avoid overflow,
> > and "ticks_to_ns(arch.virt_timer_base.offset - boot_count)" will
> > always be the same, which has been calculated in domain_vtimer_init.
> > Introduce a new field virt_timer_base.nanoseconds to store this value
> > for arm in struct arch_domain, so we can use it directly.
> >>
> >>> , and
> >>> "arch.virt_timer_base.offset - boot_count" will always be the same,
> >>> which has been calculated in domain_vtimer_init. Introduce a new field
> >>> vtimer_offset.nanoseconds to store this value for arm in struct
> >>> arch_domain, so we can use it directly and extra calculations can be
> >>> avoided.
> >>>
> >>> This patch is enlightened from [1].
> >>>
> >>> Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
> >>>
> >>> [1] https://www.mail-archive.com/xen-
> >> devel@lists.xenproject.org/msg123139.htm
> >>
> >> This link doesn't work. But I would personally remove it from the commit
> >> message (or add ---) because it doesn't bring value (this patch looks
> >> like a v2 to me).
> > Sorry, an 'l' is missing at the end of the link. The link is https://www.mail-
> archive.com/xen-devel@lists.xenproject.org/msg123139.html .
> > I will put it after --- in v3.
> >>
> >>> ---
> >>> xen/arch/arm/include/asm/domain.h | 4 ++++
> >>> xen/arch/arm/vtimer.c | 6 ++++--
> >>> 2 files changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/include/asm/domain.h
> >> b/xen/arch/arm/include/asm/domain.h
> >>> index ed63c2b6f9..94fe5b6444 100644
> >>> --- a/xen/arch/arm/include/asm/domain.h
> >>> +++ b/xen/arch/arm/include/asm/domain.h
> >>> @@ -73,6 +73,10 @@ struct arch_domain
> >>> uint64_t offset;
> >>> } virt_timer_base;
> >>>
> >>> + struct {
> >>> + int64_t nanoseconds;
> >>
> >> This should be s_time_t to match the argument of set_timer() and return
> >> of ticks_to_ns().
> >>
> >>> + } vtimer_offset;
> >>
> >> Why are you adding a new structure rather than re-using virt_timer_base?
> > Sure, I'll add this field in virt_timer_base.
> > struct {
> > uint64_t offset;
> > s_time_t nanoseconds;
> > } virt_timer_base;
> >>
> >>> +
> >>> struct vgic_dist vgic;
> >>>
> >>> struct vuart {
> >>> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> >>> index 6b78fea77d..54161e5fea 100644
> >>> --- a/xen/arch/arm/vtimer.c
> >>> +++ b/xen/arch/arm/vtimer.c
> >>> @@ -64,6 +64,7 @@ int domain_vtimer_init(struct domain *d, struct
> >> xen_arch_domainconfig *config)
> >>> {
> >>> d->arch.virt_timer_base.offset = get_cycles();
> >>> d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset -
> >> boot_count);
> >>> + d->arch.vtimer_offset.nanoseconds = d->time_offset.seconds;
> >>
> >> Hmmm... I find it odd to assign a field "nanoseconds" to "seconds". I would
> >> suggest re-ordering so you first set nanoseconds and then set seconds.
> >>
> >> This will make it more obvious that this is not a mistake, and "seconds"
> >> will be closer to the do_div() below.
> > Is it ok to remove do_div and write like below?
> > d->arch.virt_timer_base.nanoseconds =
> > ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
> > d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds /
> > 1000000000;
> 
> The implementation must use do_div to properly handle the division of a
> 64-bit value by a 32-bit one on arm32, otherwise the code will be a lot
> slower.

Thanks for your explanation for this. I will keep the do_div. 

Best wishes
Jiamei Xie


> 
> Cheers
> Bertrand
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 08:31:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 08:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357015.585413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66cs-000089-KJ; Tue, 28 Jun 2022 08:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357015.585413; Tue, 28 Jun 2022 08:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66cs-000082-G9; Tue, 28 Jun 2022 08:31:14 +0000
Received: by outflank-mailman (input) for mailman id 357015;
 Tue, 28 Jun 2022 08:31:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Q6TN=XD=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o66cr-00007E-6p
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 08:31:13 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140083.outbound.protection.outlook.com [40.107.14.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa235118-f6bc-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 10:31:11 +0200 (CEST)
Received: from DU2P251CA0004.EURP251.PROD.OUTLOOK.COM (2603:10a6:10:230::14)
 by DU2PR08MB7309.eurprd08.prod.outlook.com (2603:10a6:10:2e4::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Tue, 28 Jun
 2022 08:30:51 +0000
Received: from DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:230:cafe::24) by DU2P251CA0004.outlook.office365.com
 (2603:10a6:10:230::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.21 via Frontend
 Transport; Tue, 28 Jun 2022 08:30:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT052.mail.protection.outlook.com (100.127.142.144) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 08:30:50 +0000
Received: ("Tessian outbound 4ab5a053767b:v120");
 Tue, 28 Jun 2022 08:30:50 +0000
Received: from f35499f39983.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 558FEC84-5E43-45A3-B542-D4B0758B1BC3.1; 
 Tue, 28 Jun 2022 08:30:45 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f35499f39983.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 08:30:45 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM6PR08MB4582.eurprd08.prod.outlook.com (2603:10a6:20b:8f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 08:30:42 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7%5]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 08:30:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa235118-f6bc-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=A5brDs99770jOtsi41dlOwVl9/neVoe0+gDcWhfsae7dd8lhRFkgijmJGwWYFFZLQdyvSdMVHC7WLxskZW7JfmC6+Sl9KnmhCf69O/HsPTo9AL+GD5q4+zQryOznRtiHwNHfzb1cl45pFFIJW7bjTOciqoHVmqoCmZfgCY1FuvJPOOLbpWYWog64Ic40cZ/W4M85MOTiYN6mwkts0RAbcUowelitwoRuZE2YMsTzrubhU8MySc3UcNC7IIm/3p6gc5TShRAGVtsTnFLuPFfxxPME4lCTYKAD/eveOGDtkNlX6jFNZkUpHEshvwOrExe/KEQ+/uf6rdwDHveBoOrrFA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5A6db0l4g8W7LA6weOSLOmOMkU2UVydY50Od/z+f9/c=;
 b=ejPwGhZ/2L5xutvd7lw1kAOSE7l5ZNTokeQEpSQnzIXJQwfBvhJYjbqmAOsrUWTh2H45cf6Qyj2totVlUlLm6y+Gtd8L13tDBsd0ujV7LaW8E9HeZLo6HHtN+auGX25BHjc6IL+5MpKJj/MBt+mc9Q+VUDPq4kqPobtLS2j0nIf8mojeeXZqLEA1/PR/i+htEvpUdN042oispN/QakVDvRXMHB83rTxVdZqE8LGtrUGzGg3oTlp+TVAkgolf2BIUgzhBgmjPtI8tMeX7K+Dh7AGwV5m/LMaK82ombck5tQDFXZrDlt19Xxg3o5b6BO/WfYFSOoWv2bQ38LO/5qWouw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5A6db0l4g8W7LA6weOSLOmOMkU2UVydY50Od/z+f9/c=;
 b=CPu0w1Q4ebO4fQbRrJvK+rxKaky5yTSdgqlHBKvarX57X5OkvtwxHAik441Rk0JDmilMQNycKmKPIdM4DBxrCBE5xajAd57E2KTLNW2NcJ3CdEJhHt3wjoHcVgSzbfnWQzB9it6VkGXhOTpt7nK/oi7xrh0QEvpVqpnDh0/Qh24=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O5jOBJfpDV27clp57rI7rtAHQ3RfSf38/Php/KFGg9ZXGgV49LsLCOf8+J0wCnaUrMAbLcHKDdAUqbGkNO+bkb0yQmiEuC9flwTaq68qmDd+Qc52OmoGZ1Bgqps7n6UvSIUDc+0Q0wgpqzAk/N9R6sUuINI1DT41HZjPryB08zlk4jbu4tisnKcy3JtS2V7MSxeH3CArSOTivZGdPNZzhzWKBueXBGnv4hPm9ZXtDgu36ce0wZx9k05XYXXc4e2W+a/NkhjJvu9n2m7nLaNKWG60WiUY/r2/EGz4k8RbJDVGAwqPDYBo2dmv01rIeoL+u+Pcw2X4/khEz1hKS4WuiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5A6db0l4g8W7LA6weOSLOmOMkU2UVydY50Od/z+f9/c=;
 b=YZNQRJlgZB7qI9efjAiYNKjZv0tcQTDZjsGIhqGzCjb/nFcWoIsnjXSsbYTd9HUN6SSXM0f+KAJqvYs9l6BcMCdzEvVkz4d2Tks/84/CDClPsVyL/RBhIAaNLXYhP/peBefLcC4vfLuLiXJnEn/sbNERCfsRWYHtBorOxKCKysTCU8A1GBNtJhRXTJpImYwZvUbVPJnzE0rGFmlVQvPq9clFUTdJymsTGD8gan6ErZyECTKS2OC8g8aWTSbgU9wzazpKnTdar3wb+NZQOJGlNEWqtzeUVEVFqJ/l+FI/uwOXm0O9xFM1kSu22xV8tZFDz7oazX1PvFXl2I7aSmaQoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Topic: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Index:
 AQHYfI531SRNO5A+8Ei3iOFoJrK02K1dB3iAgAE0OjCAABXrgIAAECwAgAAEoICABjJ8oA==
Date: Tue, 28 Jun 2022 08:30:42 +0000
Message-ID:
 <PAXPR08MB7420FBD6CB737E5819CDADF79EB89@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
In-Reply-To: <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 73BACCE1B4EF4B4EB0A3406A26254B3C.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: bda9c244-fb01-4421-2ae1-08da58e0824a
x-ms-traffictypediagnostic:
	AM6PR08MB4582:EE_|DBAEUR03FT052:EE_|DU2PR08MB7309:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4582
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cec87480-b695-4308-6c6f-08da58e07d87
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 08:30:50.8007
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bda9c244-fb01-4421-2ae1-08da58e0824a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB7309

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 24 June 2022 17:50
> To: Jan Beulich <jbeulich@suse.com>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Jiamei Xie
> <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org; Wei Chen
> <Wei.Chen@arm.com>
> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
> 
> Hi Jan,
> 
> On 24/06/2022 10:33, Jan Beulich wrote:
> > On 24.06.2022 10:35, Julien Grall wrote:
> >> On 24/06/2022 08:18, Wei Chen wrote:
> >>>> -----Original Message-----
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 23 June 2022 20:54
> >>>> To: Wei Chen <Wei.Chen@arm.com>
> >>>> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Julien
> >>>> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> >>>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> >>>> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>;
> Wei
> >>>> Liu <wl@xen.org>; Jiamei Xie <Jiamei.Xie@arm.com>; xen-
> >>>> devel@lists.xenproject.org
> >>>> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
> >>>>
> >>>> On 10.06.2022 07:53, Wei Chen wrote:
> >>>>> --- a/xen/arch/arm/Makefile
> >>>>> +++ b/xen/arch/arm/Makefile
> >>>>> @@ -1,6 +1,5 @@
> >>>>>    obj-$(CONFIG_ARM_32) += arm32/
> >>>>>    obj-$(CONFIG_ARM_64) += arm64/
> >>>>> -obj-$(CONFIG_ARM_64) += efi/
> >>>>>    obj-$(CONFIG_ACPI) += acpi/
> >>>>>    obj-$(CONFIG_HAS_PCI) += pci/
> >>>>>    ifneq ($(CONFIG_NO_PLAT),y)
> >>>>> @@ -20,6 +19,7 @@ obj-y += domain.o
> >>>>>    obj-y += domain_build.init.o
> >>>>>    obj-y += domctl.o
> >>>>>    obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >>>>> +obj-y += efi/
> >>>>>    obj-y += gic.o
> >>>>>    obj-y += gic-v2.o
> >>>>>    obj-$(CONFIG_GICV3) += gic-v3.o
> >>>>> --- a/xen/arch/arm/efi/Makefile
> >>>>> +++ b/xen/arch/arm/efi/Makefile
> >>>>> @@ -1,4 +1,12 @@
> >>>>>    include $(srctree)/common/efi/efi-common.mk
> >>>>>
> >>>>> +ifeq ($(CONFIG_ARM_EFI),y)
> >>>>>    obj-y += $(EFIOBJ-y)
> >>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
> >>>>> +else
> >>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
> >>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
> >>>>> +# will not be cleaned in "make clean".
> >>>>> +EFIOBJ-y += stub.o
> >>>>> +obj-y += stub.o
> >>>>> +endif
> >>>>
> >>>> This has caused
> >>>>
> >>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the
> output is
> >>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
> >>>>
> >>>> for the 32-bit Arm build that I keep doing every once in a while,
> with
> >>>> (if it matters) GNU ld 2.38. I guess you will want to consider
> building
> >>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
> >>>> option.
> >>>>
> >>>
> >>> Thanks for pointing this out. I will try to use -fshort-wchar for
> Arm32,
> >>> if Arm maintainers agree.
> >>
> >> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar
> >> (aside the EFI files). So it is not entirely clear why we would want to
> >> use -fshort-wchar for arm32.
> >
> > We don't use wchar_t outside of EFI code afaict. Hence to all other code
> > it should be benign whether -fshort-wchar is in use. So the suggestion
> > to use the flag unilaterally on Arm32 is really just to silence the ld
> > warning;
> 
> Ok. This is odd. Why would ld warn on arm32 but not other arch? Wei, do
> you think you can have a look?
> 

Jan has already given some answers. I'll have a look and see if there's
a better way.

Cheers,
Wei Chen

> > off the top of my head I can't see anything wrong with using
> > the option also for Arm64 or even globally. Yet otoh we typically try to
> > not make changes for environments where they aren't really needed.
> 
> I agree. If we need a workaround, then my preference would be to not
> build stub.c with -fshort-wchar.
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 08:37:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 08:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357022.585424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66iW-0000qB-EP; Tue, 28 Jun 2022 08:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357022.585424; Tue, 28 Jun 2022 08:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o66iW-0000q4-9s; Tue, 28 Jun 2022 08:37:04 +0000
Received: by outflank-mailman (input) for mailman id 357022;
 Tue, 28 Jun 2022 08:37:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o66iV-0000pu-N3; Tue, 28 Jun 2022 08:37:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o66iV-00020B-Hz; Tue, 28 Jun 2022 08:37:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o66iV-0000JP-38; Tue, 28 Jun 2022 08:37:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o66iV-0002WX-2h; Tue, 28 Jun 2022 08:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mJpvO+g26e/y2rSuzu6zk6Oq0V0R5ZE3Vpgn66hjz0U=; b=Hs01qzCHgIsuP7hjXR7q3vBGXm
	+1o0Mpn1a17vwlNRjTgWNZClhotAGQhn7e24f0r/AhcU3GaYm9euCokwECNeTPZ/ppxuDRyuNYdEu
	7ST0vTOBFTk/pkdefcPAUsb91Fdsl/HIExttRvw4TLAbEeMuwf3WM9dlULFxRtuB69kY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171376-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171376: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=29f6db75667f44f3f01ba5037dacaf9ebd9328da
X-Osstest-Versions-That:
    qemuu=097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 08:37:03 +0000

flight 171376 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171376/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 171372

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171372
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171372
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171372
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171372
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171372
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171372
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171372
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171372
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                29f6db75667f44f3f01ba5037dacaf9ebd9328da
baseline version:
 qemuu                097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc

Last test of basis   171372  2022-06-27 15:08:38 Z    0 days
Testing same since   171376  2022-06-27 23:09:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Graf <agraf@csgraf.de>
  Martin Liska <mliska@suse.cz>
  Martin Liška <mliska@suse.cz>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   097ccbbbaf..29f6db7566  29f6db75667f44f3f01ba5037dacaf9ebd9328da -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 11:31:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 11:31:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357037.585434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69Qg-0002qL-0h; Tue, 28 Jun 2022 11:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357037.585434; Tue, 28 Jun 2022 11:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69Qf-0002qE-UC; Tue, 28 Jun 2022 11:30:49 +0000
Received: by outflank-mailman (input) for mailman id 357037;
 Tue, 28 Jun 2022 11:30:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69Qe-0002q4-Ln; Tue, 28 Jun 2022 11:30:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69Qe-0005KO-IZ; Tue, 28 Jun 2022 11:30:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69Qd-0000Fx-Uw; Tue, 28 Jun 2022 11:30:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o69Qd-0003gR-UT; Tue, 28 Jun 2022 11:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZsFfK0OeBadJGwy3/yAbeDcUQcoqm8Oftd45gOtQGHU=; b=JmOJ1GKrChagIRA2z9ltqtzP78
	ocO1oyyAHx4OvTSQ5iiuiLmpjS6KKywyBRcL4iSHfzLiWoieqLHH8OqlUL8QDNJHoLrGSBXFED05J
	9YiJrH59vVhzZqvXZ+MaSlhVZqtlwsYEjHenpETDCzElnOPIDsUppt2DupNCNJLdTAhE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171377-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171377: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
X-Osstest-Versions-That:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 11:30:47 +0000

flight 171377 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171377/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 171357 pass in 171377
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 171363 pass in 171357
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 171363
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 171363

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 171363 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 171363 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171363
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171363
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171363
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171363
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171363
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171363
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171363
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171363
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171363
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171363
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171363
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171363
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
baseline version:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526

Last test of basis   171377  2022-06-28 01:53:06 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 11:52:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 11:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357046.585445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69lt-0005Wy-Tx; Tue, 28 Jun 2022 11:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357046.585445; Tue, 28 Jun 2022 11:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69lt-0005Wr-RM; Tue, 28 Jun 2022 11:52:45 +0000
Received: by outflank-mailman (input) for mailman id 357046;
 Tue, 28 Jun 2022 11:52:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69lt-0005Wh-5i; Tue, 28 Jun 2022 11:52:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69lt-0005n1-3M; Tue, 28 Jun 2022 11:52:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o69ls-0001fA-JH; Tue, 28 Jun 2022 11:52:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o69ls-0002KN-In; Tue, 28 Jun 2022 11:52:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DGGCRDlrT8uWkYRD1kkfmX3hTzICTODviI0RG8a9+3g=; b=dOrtiKZ4GJ6cANCy1Gppg5mHBI
	M8HsVLeg6wFRBiHsM/hGRVWspdhheLNcds9Z8EHdB943oZ9JqLvV/75032MUmAUXrjIQonN3QDUva
	oPXq5aoZCU1wtJIHr9xP2Je/n4xN2eF+vAr4rfnESzt0E3YsoYwQRW9tgHbAw4rOR6pk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171381-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171381: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=59141288716f8917968d4bb96367b7d08fe5ab8a
X-Osstest-Versions-That:
    ovmf=7f4eca4cc2e01d4160ef265f477f9d098d7d33df
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 11:52:44 +0000

flight 171381 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171381/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 59141288716f8917968d4bb96367b7d08fe5ab8a
baseline version:
 ovmf                 7f4eca4cc2e01d4160ef265f477f9d098d7d33df

Last test of basis   171365  2022-06-27 02:58:42 Z    1 days
Testing same since   171381  2022-06-28 09:41:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Feng, Bob C <bob.c.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7f4eca4cc2..5914128871  59141288716f8917968d4bb96367b7d08fe5ab8a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 12:03:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 12:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357061.585456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69w8-0007Mw-BT; Tue, 28 Jun 2022 12:03:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357061.585456; Tue, 28 Jun 2022 12:03:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o69w8-0007Mp-8t; Tue, 28 Jun 2022 12:03:20 +0000
Received: by outflank-mailman (input) for mailman id 357061;
 Tue, 28 Jun 2022 12:03:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VG4i=XD=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o69w6-0007Mc-Hq
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 12:03:18 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b1778a9-f6da-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 14:03:17 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0C2DD21EDC;
 Tue, 28 Jun 2022 12:03:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A35D5139E9;
 Tue, 28 Jun 2022 12:03:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HhcWJgPuumJVFAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 28 Jun 2022 12:03:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b1778a9-f6da-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656417796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=yVWg7wp4h+Cfn2qEfC6WV/K5Vv+RBe+YiFsHKlpkmZQ=;
	b=hn0c66rsHaEfSrsSHHC75IHTya+X9eM25nyDtKYbte2CMyE/UkS5l2Qk+r7uMe2XaB4x9P
	FyitlS0G+yD3KL+iWvvwRwSZHj4mp6FJMYNS74SzKWDKnaIsRvJJS0jRmErIdE30Vr6sL4
	WYPCi5jahpUkFo1HGGlnS4TmaMnHVBM=
Message-ID: <8e7faa1d-9e72-5b2e-2a70-426d822d05b3@suse.com>
Date: Tue, 28 Jun 2022 14:03:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-2-jgross@suse.com>
 <117fd526-a241-2f01-47b5-e40e1803124b@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 1/3] x86/xen: use clear_bss() for Xen PV guests
In-Reply-To: <117fd526-a241-2f01-47b5-e40e1803124b@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------PKRvAy4qT3iQa0ns18QQvZQy"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------PKRvAy4qT3iQa0ns18QQvZQy
Content-Type: multipart/mixed; boundary="------------80o8n7krF6sw0zfxyPWo7gx5";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Message-ID: <8e7faa1d-9e72-5b2e-2a70-426d822d05b3@suse.com>
Subject: Re: [PATCH v2 1/3] x86/xen: use clear_bss() for Xen PV guests
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-2-jgross@suse.com>
 <117fd526-a241-2f01-47b5-e40e1803124b@suse.com>
In-Reply-To: <117fd526-a241-2f01-47b5-e40e1803124b@suse.com>

--------------80o8n7krF6sw0zfxyPWo7gx5
Content-Type: multipart/mixed; boundary="------------sDLlMP0x9f6jm3WTSHG4Qbmi"

--------------sDLlMP0x9f6jm3WTSHG4Qbmi
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.06.22 11:51, Jan Beulich wrote:
> On 23.06.2022 11:46, Juergen Gross wrote:
>> --- a/arch/x86/xen/enlighten_pv.c
>> +++ b/arch/x86/xen/enlighten_pv.c
>> @@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
>>   extern void early_xen_iret_patch(void);
>>   
>>   /* First C function to be called on Xen boot */
>> -asmlinkage __visible void __init xen_start_kernel(void)
>> +asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
>>   {
>>   	struct physdev_set_iopl set_iopl;
>>   	unsigned long initrd_start = 0;
>>   	int rc;
>>   
>> -	if (!xen_start_info)
>> +	if (!si)
>>   		return;
>>   
>> +	clear_bss();
> 
> As per subsequent observation, this shouldn't really be needed: The
> hypervisor (or tool stack for DomU-s) already does so. While I guess
> we want to keep it to be on the safe side, maybe worth a comment?

Are you sure all possible boot loaders are clearing alloc-only sections?

I'd rather not count on e.g. grub doing this in all cases.


Juergen
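
[Editor's note: a minimal, self-contained sketch of what a clear_bss()-style
routine does, for readers unfamiliar with the idiom under discussion. The
names bss_start/bss_stop and the fake_bss buffer are hypothetical stand-ins
for the kernel's linker-script symbols (__bss_start/__bss_stop), used here
only so the sketch compiles and runs outside a kernel.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the linker-provided section boundaries;
 * simulated with a static buffer pre-filled with "garbage" to mimic a
 * boot loader that did not clear alloc-only sections. */
static unsigned char fake_bss[64] = { 1, 2, 3, 4 };
static unsigned char *bss_start = fake_bss;
static unsigned char *bss_stop  = fake_bss + sizeof(fake_bss);

/* Sketch of a clear_bss()-style routine: zero everything between the
 * two boundary symbols so subsequent C code can rely on static data
 * being zero-initialized, regardless of what the loader left behind. */
static void clear_bss(void)
{
    memset(bss_start, 0, (size_t)(bss_stop - bss_start));
}
```

Calling this as early as possible (before any code reads static data) is
what makes it a safety net against loaders that skip alloc-only sections.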
--------------sDLlMP0x9f6jm3WTSHG4Qbmi
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------sDLlMP0x9f6jm3WTSHG4Qbmi--

--------------80o8n7krF6sw0zfxyPWo7gx5--

--------------PKRvAy4qT3iQa0ns18QQvZQy
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK67gMFAwAAAAAACgkQsN6d1ii/Ey8H
Ugf+OD0yOcSu9qozk0L8XeQDqTSi3n0Bqexn+BBDt0XHm6r4L2T2nvJU+bm4aH5cJDM8yrn6ZBze
cv9u3M/ecd16V2FT4ISCgC769pZX+9BEUD+GGwKsEhCaTiVw67M+Te/HqgqrnMClvQD4WBOBU4a5
AFaJRyBb1kymiMPPW16OlNeGH8a1+5/m6G3Kxz4bvfbhuAJDeusORVKsHEwts77+8wHd7ZRQG6O+
d9OmaoT9tpFr1QSOCfYLx5X4wGnr+G6LFX8955T1JzKkVJ9g275FTBnStHGtnGcXsxFpkMv7ejK9
EQMtmeyo0A+EVQt3QXGr9aRZL5jZR8uw7De/F9rXCA==
=5oig
-----END PGP SIGNATURE-----

--------------PKRvAy4qT3iQa0ns18QQvZQy--


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 12:10:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 12:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357068.585468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6A31-0000Sy-4T; Tue, 28 Jun 2022 12:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357068.585468; Tue, 28 Jun 2022 12:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6A31-0000Sr-1l; Tue, 28 Jun 2022 12:10:27 +0000
Received: by outflank-mailman (input) for mailman id 357068;
 Tue, 28 Jun 2022 12:10:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6A30-0000Sh-3B; Tue, 28 Jun 2022 12:10:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6A2z-00066W-TW; Tue, 28 Jun 2022 12:10:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6A2z-0002D0-Gt; Tue, 28 Jun 2022 12:10:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6A2z-0008CS-GS; Tue, 28 Jun 2022 12:10:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AWOK7DijeG/+HgXT6f9SFAQHEde4wD8mzTfjdaM5fpQ=; b=NUIv007wa5ttBw8wbShdm9Wq3U
	8/wFH5BIYUN9C5Pse1pPHF3YdeV89inKfR5bwIdkbfnHAByF5+T/HdMIV++eCYfkyZyqwRUI2DLh9
	FBk5ohjAjMAAiRR6GwuZBy3/ZrLi0PEkPwXea5AahzrSvyo7B+EYs8crJm//tb4czH9s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171379-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171379: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=e3245696913c4508e2110a4b610f1e31f6054076
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 12:10:25 +0000

flight 171379 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171379/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e3245696913c4508e2110a4b610f1e31f6054076
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  718 days
Failing since        151818  2020-07-11 04:18:52 Z  717 days  699 attempts
Testing same since   171379  2022-06-28 04:20:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114242 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 12:40:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 12:40:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357076.585478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6AVW-0003Ge-Dh; Tue, 28 Jun 2022 12:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357076.585478; Tue, 28 Jun 2022 12:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6AVW-0003GX-Ag; Tue, 28 Jun 2022 12:39:54 +0000
Received: by outflank-mailman (input) for mailman id 357076;
 Tue, 28 Jun 2022 12:39:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XT0n=XD=citrix.com=prvs=171720f04=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6AVV-0003GR-Hb
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 12:39:53 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66060c7a-f6df-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 14:39:51 +0200 (CEST)
Received: from mail-bn1nam07lp2046.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Jun 2022 08:39:47 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA1PR03MB6671.namprd03.prod.outlook.com (2603:10b6:806:1c2::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 12:39:43 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 12:39:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66060c7a-f6df-11ec-bd2d-47488cf2e6aa
Date: Tue, 28 Jun 2022 14:39:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6 01/12] IOMMU/x86: support freeing of pagetables
Message-ID: <Yrr2iUlG43xIsOpD@Air-de-Roger>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
 <24eb0b99-c2c4-08aa-740d-df94d2505599@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <24eb0b99-c2c4-08aa-740d-df94d2505599@suse.com>
X-ClientProxiedBy: LO2P265CA0198.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Thu, Jun 09, 2022 at 12:16:38PM +0200, Jan Beulich wrote:
> For vendor specific code to support superpages we need to be able to
> deal with a superpage mapping replacing an intermediate page table (or
> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
> needed to free individual page tables while a domain is still alive.
> Since the freeing needs to be deferred until after a suitable IOTLB
> flush was performed, released page tables get queued for processing by a
> tasklet.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was considering whether to use a softirq-tasklet instead. This would
> have the benefit of avoiding extra scheduling operations, but come with
> the risk of the freeing happening prematurely because of a
> process_pending_softirqs() somewhere.
> ---
> v6: Extend comment on the use of process_pending_softirqs().
> v5: Fix CPU_UP_PREPARE for BIGMEM. Schedule tasklet in CPU_DOWN_FAILED
>     when list is not empty. Skip all processing in CPU_DEAD when list is
>     empty.
> v4: Change type of iommu_queue_free_pgtable()'s 1st parameter. Re-base.
> v3: Call process_pending_softirqs() from free_queued_pgtables().
> 
> --- a/xen/arch/x86/include/asm/iommu.h
> +++ b/xen/arch/x86/include/asm/iommu.h
> @@ -147,6 +147,7 @@ void iommu_free_domid(domid_t domid, uns
>  int __must_check iommu_free_pgtables(struct domain *d);
>  struct domain_iommu;
>  struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
> +void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
>  
>  #endif /* !__ARCH_X86_IOMMU_H__ */
>  /*
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -12,6 +12,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <xen/cpu.h>
>  #include <xen/sched.h>
>  #include <xen/iocap.h>
>  #include <xen/iommu.h>
> @@ -551,6 +552,103 @@ struct page_info *iommu_alloc_pgtable(st
>      return pg;
>  }
>  
> +/*
> + * Intermediate page tables which get replaced by large pages may only be
> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> + * requiring any locking.)
> + */
> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> +
> +static void free_queued_pgtables(void *arg)

I think this is missing a cf_check attribute?

> +{
> +    struct page_list_head *list = arg;
> +    struct page_info *pg;
> +    unsigned int done = 0;

Might be worth adding an:

ASSERT(list == &this_cpu(free_pgt_list));

To make sure tasklets are never executed on the wrong CPU.

Apart from that:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 12:52:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 12:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357083.585489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ahn-0005hY-Ke; Tue, 28 Jun 2022 12:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357083.585489; Tue, 28 Jun 2022 12:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ahn-0005hR-Hg; Tue, 28 Jun 2022 12:52:35 +0000
Received: by outflank-mailman (input) for mailman id 357083;
 Tue, 28 Jun 2022 12:52:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XT0n=XD=citrix.com=prvs=171720f04=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6Ahl-0005hL-MZ
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 12:52:33 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2bd6b4ab-f6e1-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 14:52:32 +0200 (CEST)
Received: from mail-bn8nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Jun 2022 08:52:20 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by PH0PR03MB6802.namprd03.prod.outlook.com (2603:10b6:510:119::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 12:52:17 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 12:52:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bd6b4ab-f6e1-11ec-bd2d-47488cf2e6aa
Date: Tue, 28 Jun 2022 14:52:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v6 02/12] IOMMU/x86: new command line option to suppress
 use of superpage mappings
Message-ID: <Yrr5fTdO1CAKfIH7@Air-de-Roger>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
 <99086452-43a8-2d93-ab4d-0343a0259259@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <99086452-43a8-2d93-ab4d-0343a0259259@suse.com>
X-ClientProxiedBy: LO4P123CA0562.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:33b::12) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Thu, Jun 09, 2022 at 12:17:23PM +0200, Jan Beulich wrote:
> Before actually enabling their use, provide a means to suppress it in
> case of problems. Note that using the option can also affect the sharing
> of page tables in the VT-d / EPT combination: If EPT would use large
> page mappings but the option is in effect, page table sharing would be
> suppressed (to properly fulfill the admin request).
> 
> Requested-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v6: New.
> 
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1405,7 +1405,7 @@ detection of systems known to misbehave
>  
>  ### iommu
>      = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
> -                sharept, intremap, intpost, crash-disable,
> +                sharept, superpages, intremap, intpost, crash-disable,
>                  snoop, qinval, igfx, amd-iommu-perdev-intremap,
>                  dom0-{passthrough,strict} ]
>  
> @@ -1481,6 +1481,12 @@ boolean (e.g. `iommu=no`) can override t
>  
>      This option is ignored on ARM, and the pagetables are always shared.
>  
> +*   The `superpages` boolean controls whether superpage mappings may be used
> +    in IOMMU page tables.  If using this option is necessary to fix an issue,
> +    please report a bug.
> +
> +    This option is only valid on x86.
> +
>  *   The `intremap` boolean controls the Interrupt Remapping sub-feature, and
>      is active by default on compatible hardware.  On x86 systems, the first
>      generation of IOMMUs only supported DMA remapping, and Interrupt Remapping
> --- a/xen/arch/x86/include/asm/iommu.h
> +++ b/xen/arch/x86/include/asm/iommu.h
> @@ -132,7 +132,7 @@ extern bool untrusted_msi;
>  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>                     const uint8_t gvec);
>  
> -extern bool iommu_non_coherent;
> +extern bool iommu_non_coherent, iommu_superpages;
>  
>  static inline void iommu_sync_cache(const void *addr, unsigned int size)
>  {
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -88,6 +88,8 @@ static int __init cf_check parse_iommu_p
>              iommu_igfx = val;
>          else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
>              iommu_qinval = val;
> +        else if ( (val = parse_boolean("superpages", s, ss)) >= 0 )
> +            iommu_superpages = val;
>  #endif
>          else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
>              iommu_verbose = val;
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -2213,7 +2213,8 @@ static bool __init vtd_ept_page_compatib
>      if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 ) 
>          return false;
>  
> -    return (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
> +    return iommu_superpages &&
> +           (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
>             (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);

Isn't this too strict, in that it requires iommu superpages to be
enabled regardless of whether EPT has superpage support?

Shouldn't this instead be:

return iommu_superpages ? ((ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
                           (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap))
                        : !((ept_has_2mb(ept_cap) && opt_hap_2mb) ||
                            (ept_has_1gb(ept_cap) && opt_hap_1gb));

I think we want to introduce some local variables to store EPT
superpage availability, as the lines are too long.

The rest LGTM.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 12:59:29 2022
Date: Tue, 28 Jun 2022 14:59:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86: correct asm() constraints when dealing with
 immediate selector values
Message-ID: <Yrr7IO3LW4d1MZfR@Air-de-Roger>
References: <2c04b2aa-d41f-0a2c-6045-2d37a6fac53e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2c04b2aa-d41f-0a2c-6045-2d37a6fac53e@suse.com>
MIME-Version: 1.0

On Mon, Sep 13, 2021 at 08:26:21AM +0200, Jan Beulich wrote:
> asm() constraints need to fit both the intended insn(s) with which the
> respective operands are going to be used and the actual kind of value
> specified. "m" (alone) together with a constant, however, leads
> to gcc saying
> 
> error: memory input <N> is not directly addressable
> 
> while clang complains
> 
> error: invalid lvalue in asm input for constraint 'm'
> 
> And rightly so - in order to access a memory operand, an address needs
> to be specified to the insn. In some cases it might be possible for a
> compiler to synthesize a memory operand holding the requested constant,
> but I think any solution there would have sharp edges.
> 
> If "m" alone doesn't work with constants, it is at best pointless (and
> perhaps misleading or even risky - the compiler might decide to actually
> pick "m" and not try very hard to find a suitable register) to specify
> it alongside "r". And indeed clang does pick "m", oddly enough despite
> its objection to "m" alone. Which means there the change also improves
> the generated code.
> 
> While there also switch the two operand case to using named operands.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, and sorry for the delay.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 13:09:06 2022
Date: Tue, 28 Jun 2022 15:08:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] iommu: add preemption support to iommu_{un,}map()
Message-ID: <Yrr9XUxZCrjLFGDL@Air-de-Roger>
References: <20220610083248.25800-1-roger.pau@citrix.com>
 <a968598e-cacc-d762-46b8-579e18f64d12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a968598e-cacc-d762-46b8-579e18f64d12@suse.com>
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 13:08:49.4603
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4A4X/Ngl1XJEY1jtuLuVE3Kd8dScVrkt04FS2nKtmSlf9O3zuJXKuP56EbKx1T0ma6Wcgg2A8PffS7Yz+nBK9Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5861

On Thu, Jun 23, 2022 at 11:49:00AM +0200, Jan Beulich wrote:
> On 10.06.2022 10:32, Roger Pau Monne wrote:
> > The loop in iommu_{un,}map() can be arbitrarily large, and as such it
> > needs to handle preemption.  Introduce a new parameter that allows
> > returning the number of pages that have been processed, and whose
> > presence also signals whether the function should do preemption
> > checks.
> > 
> > Note that the cleanup done in iommu_map() can now be incomplete if
> > preemption has happened, and hence callers would need to take care of
> > unmapping the whole range (ie: ranges already mapped by previously
> > preempted calls).  So far none of the callers care about having those
> > ranges unmapped, so error handling in iommu_memory_setup() and
> > arch_iommu_hwdom_init() can be kept as-is.
> > 
> > Note that iommu_legacy_{un,}map() is left without preemption handling:
> > callers of those interfaces are not modified to pass bigger chunks,
> > and hence the functions won't be changed: they are legacy and should
> > be replaced with iommu_{un,}map() instead if preemption is required.
> > 
> > Fixes: f3185c165d ('IOMMU/x86: perform PV Dom0 mappings in batches')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  xen/arch/x86/pv/dom0_build.c        | 15 ++++++++++++---
> >  xen/drivers/passthrough/iommu.c     | 26 +++++++++++++++++++-------
> >  xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++--
> >  xen/include/xen/iommu.h             |  4 ++--
> >  4 files changed, 44 insertions(+), 14 deletions(-)
> 
> I'm a little confused, I guess: On irc you did, if I'm not mistaken,
> say you'd post what you have, but that would be incomplete. Now this
> looks pretty complete when leaving aside the fact that the referenced
> commit has meanwhile been reverted, and there's also no post-commit-
> message remark towards anything else that needs doing. I'd like to
> include this change in the next version of my series (ahead of the
> previously reverted change), doing the re-basing as necessary. But
> for that I first need to understand the state of this change.

It ended up not being as complicated as I thought at first, and hence
the change seemed correct.  I posted it quickly without realizing that
you had already reverted the original change, so it might still have
sharp edges.

> > @@ -327,6 +327,12 @@ int iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
> >          dfn_t dfn = dfn_add(dfn0, i);
> >          mfn_t mfn = mfn_add(mfn0, i);
> >  
> > +        if ( done && !(++j & 0xfffff) && general_preempt_check() )
> 
> 0xfffff seems rather high to me; I'd be inclined to move down to 0xffff
> or even 0xfff.

That's fine.  I picked this from arch_iommu_hwdom_init().  We might
want to adjust the check there.
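For illustration, the preemption pattern being discussed could be
sketched as below.  This is a minimal stand-alone model, not Xen code:
map_pages(), the -1 error value, and the general_preempt_check() stub
are placeholders, and the 0xfff mask follows your suggested smaller
granularity.

```c
#include <stdbool.h>

/* Hypothetical stand-in for Xen's general_preempt_check(). */
static bool preempt_requested;
static bool general_preempt_check(void) { return preempt_requested; }

/*
 * Process up to page_count pages.  If done is non-NULL, check for
 * preemption every 0x1000 iterations and bail out early, reporting
 * progress through *done.  Returns 0 on full completion, -1 (standing
 * in for -ERESTART) when preempted.
 */
static int map_pages(unsigned long page_count, unsigned long *done)
{
    unsigned long i, j = 0;

    for ( i = 0; i < page_count; i++ )
    {
        if ( done && !(++j & 0xfff) && general_preempt_check() )
        {
            *done = i; /* caller must continue from this offset */
            return -1;
        }
        /* ... actual per-page mapping work would go here ... */
    }

    if ( done )
        *done = i;
    return 0;
}
```

A caller wanting preemption passes a non-NULL done pointer and loops,
advancing its dfn/mfn by *done on each -1 return; legacy callers pass
NULL and keep the old non-preemptible behaviour.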

> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -155,10 +155,10 @@ enum
> >  
> >  int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> >                             unsigned long page_count, unsigned int flags,
> > -                           unsigned int *flush_flags);
> > +                           unsigned int *flush_flags, unsigned long *done);
> >  int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
> >                               unsigned long page_count,
> > -                             unsigned int *flush_flags);
> > +                             unsigned int *flush_flags, unsigned long *done);
> 
> While I'm okay with adding a 6th parameter to iommu_unmap(), I'm afraid
> I don't really like adding a 7th one to iommu_map(). I'd instead be
> inclined to overload the return values of both functions, with positive
> values indicating "partially done, this many completed".

We need to be careful then that the input page count, which is of type
unsigned long, cannot overflow the returned value.

> The 6th
> parameter of iommu_unmap() would then be a "flags" one, with one bit
> identifying whether preemption is to be checked for. Thoughts?

Seems fine, but we might want to do the same for iommu_map() in order
to keep a consistent interface between both?  Not strictly required,
but consistency helps avoid mistakes.
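In case it helps, the return-value overload you describe could look
roughly like this.  Again a hedged sketch, not a proposed patch: the
IOMMUF_preempt flag name, the preempt_at test hook, and unmap_pages()
are made-up placeholders for the shape of the interface.

```c
#define IOMMUF_preempt (1u << 0) /* hypothetical "check preemption" bit */

static unsigned long preempt_at; /* test hook: iteration to preempt at */

/*
 * Return-value overload: 0 = fully done, negative = error, positive =
 * preempted after that many pages.  Note the long return type: with a
 * plain int, a page_count above INT_MAX could not be reported back,
 * which is the overflow concern above.
 */
static long unmap_pages(unsigned long page_count, unsigned int flags)
{
    unsigned long i;

    for ( i = 0; i < page_count; i++ )
    {
        if ( (flags & IOMMUF_preempt) && preempt_at && i == preempt_at )
            return i; /* partially done: i pages completed */
        /* ... per-page unmap work would go here ... */
    }

    return 0;
}
```

Callers not passing IOMMUF_preempt keep today's all-or-nothing
semantics, so the legacy wrappers need no change.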

Are you OK with doing the changes and incorporating them into your
series?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 13:48:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 13:48:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357107.585523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ba2-0004It-Do; Tue, 28 Jun 2022 13:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357107.585523; Tue, 28 Jun 2022 13:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ba2-0004Im-Aj; Tue, 28 Jun 2022 13:48:38 +0000
Received: by outflank-mailman (input) for mailman id 357107;
 Tue, 28 Jun 2022 13:48:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6Ba1-0004Ic-In; Tue, 28 Jun 2022 13:48:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6Ba1-0007zQ-Gu; Tue, 28 Jun 2022 13:48:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6Ba1-0004qI-1x; Tue, 28 Jun 2022 13:48:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6Ba1-0001UU-1T; Tue, 28 Jun 2022 13:48:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cvwMrYBV6vU/41UDV/cXZ1JnPPVfH36xqynTSQQ8agw=; b=jlJjPfO2ALPe1yb01itdjbigCg
	iMqq3gc25QSuZitoYErADF/0ytcDEraElu0Wlj0vy5uG+3D/RIRAxZ16LBWVtVEF6lWFCFvKrk9FI
	6C1uLq4Kb78bDAJV2yoEfRqo0MC/WBneP5x5forBkdPOdoMPo6HzDY0EXEwtXEyldqgA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171378-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171378: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=941e3e7912696b9fbe3586083a7c2e102cee7a87
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 13:48:37 +0000

flight 171378 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171378/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                941e3e7912696b9fbe3586083a7c2e102cee7a87
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    9 days
Failing since        171280  2022-06-19 15:12:25 Z    8 days   26 attempts
Testing same since   171374  2022-06-27 18:13:03 Z    0 days    2 attempts

------------------------------------------------------------
368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13051 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 13:53:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 13:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357114.585534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bex-0005gi-27; Tue, 28 Jun 2022 13:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357114.585534; Tue, 28 Jun 2022 13:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bew-0005gb-VY; Tue, 28 Jun 2022 13:53:42 +0000
Received: by outflank-mailman (input) for mailman id 357114;
 Tue, 28 Jun 2022 13:53:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+uE0=XD=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1o6Bev-0005gT-ST
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 13:53:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2059.outbound.protection.outlook.com [40.107.22.59])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6d4fadf-f6e9-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 15:53:40 +0200 (CEST)
Received: from AM6P193CA0074.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::15)
 by DU0PR08MB7568.eurprd08.prod.outlook.com (2603:10a6:10:31d::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 13:53:37 +0000
Received: from VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::a7) by AM6P193CA0074.outlook.office365.com
 (2603:10a6:209:88::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17 via Frontend
 Transport; Tue, 28 Jun 2022 13:53:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT034.mail.protection.outlook.com (10.152.18.85) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 13:53:36 +0000
Received: ("Tessian outbound 5b5a41c043d3:v120");
 Tue, 28 Jun 2022 13:53:36 +0000
Received: from 358754253f0b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8DAB5619-3306-4E18-B672-5B84316F3CA3.1; 
 Tue, 28 Jun 2022 13:53:29 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 358754253f0b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 13:53:29 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by VI1PR08MB3071.eurprd08.prod.outlook.com (2603:10a6:803:46::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 13:53:27 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7%9]) with mapi id 15.20.5373.016; Tue, 28 Jun 2022
 13:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6d4fadf-f6e9-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=MEQFxpwC4dVe5RcSRvdnyrSkpSANDvVA5Z77cQdb5HI5lUhB3SCBrDvZ54cTBx/0WLftugeyoYUKlheXrIE6o9LmmBW1hih4kKUk57Wb9LihmE0TqadwiKKkxnKoY8xBh5c3VZpwQmitR/95kK8lcVyUWO6sAeTElNcDA8HcQ5FAymdygdxIPDG/+65/enGn2+762YZQlrBfRWPqCpqB+LgPRoV856rugA6FdfEqcbp6/8zooO1UckuoW6A79ho+Fd63Aqj4FZYifKdAHNBIUMvVTvxwCdP5GoKnZe+GimDQV6+8ECIW/Dc0RaY+7kUp6r1/A3iISar4y8quIeSGkQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4QGzdp3NXY9qbSlSCQc4sAq+S4XSDxZRLhPVPH6yvRU=;
 b=T1WNcq96eHC53OsXOJnYS8EUWfEfaAV8MMAiBR/BS0ZdsHmymHPdr/Q5Pjm3eeZqxIfym5QLxm1RJ363CRqM2/U3ItP/y6ShvNtCNl43tOblDOlFhLEAX/lIj6fMfEU0zZYZh/hIGFa9CWiRu1ISk1//GIMYkHOMRGsaqBgnbg20M9CBy7qiLWKURnFDYJGV9wgs97/ci4SPPF7Nxmkgv0xsgKZaYNw5KO0+2Vlt9WxIGmGp7CUST7jkcOfyqJFAI3+zbzUPq5qo88XTX07cLIOH453Lydys1bCSgDxez6VtskUKXhwpTLjdNXWI2DLPJgZS5rL/Mp7kOX9/Dn5phw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4QGzdp3NXY9qbSlSCQc4sAq+S4XSDxZRLhPVPH6yvRU=;
 b=tTxXtXbe78Xf5TQsgVc5GPuWbJkOVBdIoxZkWZ3qgGv7242qN1GB94iMXSoBtsSRsBMUR7lgsrsOFreVvc2XtvC66FIyXYKYUXLOcv6UmGSgnpJBjvBhRvWT45LfVXAG+hqARiejDvmORAnsm4Uh4dvkmJNxWxdCux5A6jK5CB4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4eae6b67f2840145
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n2f1AXVbsZcs2W1QeY3Wt53OH3efSn+qkWwEuPb5eGFOC8hjYfrtdBJV4s9Lx+hWEyAZ0Ce/dgvv7tFxJzHdsSRRg0Rc6XmqyOBUD9pvDsd1NszujLbwS0znIdL5l8+RgoTE2+oCtL9+ASl6+uphYX9Q3hOIRu0BL77tcoF1KRh0LaRDGKvGKur3tarwm4692XjI+0XUp9MhpTUDWdYdMi1Sd9pNdOCBmtfMqzv7ydDN1t9AWvoybA4wDgK//6UQ56niiCBwYAAWaWkQOopQImHwe28pG8pAxZXTUNgpENO9CpBRTFnneyLpZiCYojHYeUmS0ERHbvtZS7tCNSjyXA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4QGzdp3NXY9qbSlSCQc4sAq+S4XSDxZRLhPVPH6yvRU=;
 b=CnCCZGxnnCLwYOBS61Oy5kEU4TgS1J41bSqokJQPF8ByQeM+RUgc0jDtMJ6WFYlTN71XHRgGELds7gKTv2+nXTefbOsqOXZP9tCrJktYejlKMkGI5zwaQgq3Hv7ik1elYo5OR71Mr69k6dhwBnlnHjPC6aCFuLgXCJeTY8LVGTmURdrjqTB8j0dsX5e6mAUVo5K2tSO0W/Pk4MTdpAg/4VtWGqxeuOOo0b706jGJhtNKsBzYrHCI+WM3F+Hj8RDGFCfXvZKjS5hg6TOKeGbWnGq2mO1IH+GGwRqyuczow/acNtHQMzl2ZBRQWoJGTQiQRNtf6FtXOw3SLjIi1snfpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4QGzdp3NXY9qbSlSCQc4sAq+S4XSDxZRLhPVPH6yvRU=;
 b=tTxXtXbe78Xf5TQsgVc5GPuWbJkOVBdIoxZkWZ3qgGv7242qN1GB94iMXSoBtsSRsBMUR7lgsrsOFreVvc2XtvC66FIyXYKYUXLOcv6UmGSgnpJBjvBhRvWT45LfVXAG+hqARiejDvmORAnsm4Uh4dvkmJNxWxdCux5A6jK5CB4=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Topic: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Index: AQHYhkX79rj81mJ150e3nBZ0EA7O6K1bhmuAgAGTuoCAAAWegIAHwJEA
Date: Tue, 28 Jun 2022 13:53:27 +0000
Message-ID: <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
In-Reply-To: <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 95c1cad0-ff16-49f6-9c52-08da590d9914
x-ms-traffictypediagnostic:
	VI1PR08MB3071:EE_|VE1EUR03FT034:EE_|DU0PR08MB7568:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 V3wCnNzfQ6r2OW1IEzRthaJFIE3FqQoK2GgLOD3z/NGRyM+OjgdzJXx5Cb6+QynhlXE0XsIFW44VX4KMLtbNVRnUBOHUrZEwto4l3EeXAzVUZwBVtCVf9BED95XUAvW23CT7NoracU+2YJpePURHLeTTwEd8UWKy0LvD35BUalXMR3zykwb86P2e/h/fLbTS8V3FCRW0wavVQDOJAuwvMIkxMxEUkZ1AaLJQmGzvkc0/kmaGSHQ3Pj+4SenY7FnMD3HnwE3CLBv2LQFe/2Df21uV9sdp+lZ2P6M5sLA/lcoDSnvio2ivWyFDtJCF2uBvpmhpOuB3vbSIOJ1/6VDie+E/G/eJ0SU4x0Yl5aeqbagTrSkNoFExy7+F2lK9qwOeQ5zV2W89zY+BZjkbF93s6yItqGK6lU0oNPTgirlBiPJ8cZaBEvb877kwv3YJrFYYOAflEKvdEdgC9Rjex5qhBCngpQmJvF9rKyXmlKujqXWl+f6VHe11/yIw9sYAj2k8KivIw5/hEEfRd4Jj3hpyKHNblXnjX9fKl0hoT05eeylP7jLt2XfAXrcCt/F0vs3wNDzGcskcbTNnxTaISJERz6G2UOThwNzdAIipp3f511/wSjY0nOED9K05uDzqsyx/XCwYuT/uk+tYmdoNMOgYyWt9mn1jeC5H2C+akduSHOeAEDCSPEXdepJhoAWphDvKtLxLPyLO7tcsiCupmD1KNcF/cha+rJHRfdzdV9LJjECPcVdtxnvrxdwRtm+AmpE7BbFtebVARV3PFn+nmf1KcS762IMZ7QDKYTB9Kc/M6f4jTjslW0Dj0n+bGU0IbzQq65C29LHwvvK0cWN9qG/M9w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(376002)(366004)(39860400002)(396003)(346002)(36756003)(33656002)(5660300002)(4326008)(6506007)(8936002)(71200400001)(64756008)(6486002)(66446008)(66476007)(2906002)(66556008)(4744005)(316002)(86362001)(122000001)(38100700002)(83380400001)(8676002)(478600001)(66946007)(54906003)(91956017)(186003)(76116006)(6916009)(41300700001)(26005)(2616005)(53546011)(6512007)(38070700005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DE286CE7A8CD3A4D97DC4F9208887917@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3071
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d531b075-7e8f-42c7-5e69-08da590d93af
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UhYnew4t0hTvbmqSx2aKMpJM7dGHAbtkTY/okwAE8PGMtyv3f18ul3Q6WJsQc/wkRds0UgmMidET19QBxkW84MgTn8A8+bTw7L+/wwXPt3vADO/uO7KzcvCfIVWRbxY8LV46ORbTM7HH97BxIuMS7zzmn23Ur3hWnnovOFCgYiz/R0aYho9cdCVDQi12FSF7CcptxgJ6irnYmmxwkkbXJuyYexck+dxGr5glugMbDwpxPJuHsxsZ+VNBOdCkMggh0hrnpPL1a34oOsP08JIwEu4LdDeOzYUF5P1EcrRmnkN7/sFXL9VvKZzTLVmfFJMWFs+4rTb1l/Ng+e0KOB6JDeb/boI844Qe6+8ognQG/yhTilLyHPhSIAYBzsjMmqKYItnyaB6F3N13otyvtOEaXyyOn9wo94QOqAL86hQdBPcZ2J9oCJoTz6CWt6BEc7IGzK4+nFz2VTBZs2gOuxXe6OyaHqXEWxr53IFVIgEzgBYWSoJ2qLvqmHEiWHHMvBlRP1NwNBASm12cUIGbS2Uzu2y5LDzp30pFQXgqxeIEC9TX4hQIPqOo0xHjN6rQwHvv5WQoGx+Hrl7Bt2+lWUI0jpFD6jOtgpgsM8Da9Att7dJZxqDPAvfS/iZPQqpsNLjHLvVjESEOPtdmk4r80SbfoaTbSfo6OC/0dIzSrlxUJMTyZY/1Pq9MlsertovLYFlfDis4a+sAS1ZuMFb+ujtfzjg/BGbRjXiEjMyTI79cjrrwnbG3ChP/yxCgUTGOH9CtJGSpsV/Hjo9xOKxt/5uVYrSJRrky4AxHlBxT0GMIz5lMzY+4z7Y5nw2ywSJnawg4
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(346002)(376002)(396003)(39860400002)(136003)(36840700001)(40470700004)(46966006)(36860700001)(8676002)(186003)(47076005)(336012)(5660300002)(83380400001)(33656002)(86362001)(2616005)(82310400005)(70586007)(8936002)(4326008)(40480700001)(70206006)(478600001)(6862004)(41300700001)(356005)(82740400003)(36756003)(6512007)(6506007)(54906003)(6486002)(53546011)(2906002)(40460700003)(26005)(316002)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 13:53:36.2964
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 95c1cad0-ff16-49f6-9c52-08da590d9914
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7568

Hi Julien

> On 23 Jun 2022, at 4:30 pm, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 23/06/2022 16:10, Rahul Singh wrote:
>> Hi Julien,
>>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi,
>>>
>>> On 22/06/2022 15:38, Rahul Singh wrote:
>>>> A guest can request that Xen close its event channels. Ignore requests
>>>> from the guest to close static channels, as static event channels
>>>> should not be closed.
>>>
>>> Why do you want to prevent the guest from closing static ports? The
>>> problem I can see is...
>> As a static event channel should be available during the lifetime of the
>> guest, we want to prevent the guest from closing the static ports.
> I don't think it is Xen's job to prevent a guest from closing a static
> port. If the guest decides to do it, then it will just break itself, not
> Xen.

It is okay for the guest to close a port; in the case of a static event
channel, the port is not allocated by the guest anyway. Xen has nothing to
do to close a static event channel and can just return 0.
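The behaviour agreed above — closing a static channel is a no-op that still
reports success — can be sketched roughly as follows. This is a simplified,
hypothetical model, not Xen's actual `struct evtchn` or close path; the
`is_static` flag and the `evtchn_close()` signature here are illustrative
assumptions only.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of an event-channel entry; the real Xen
 * struct evtchn and its close handler differ.  This only illustrates the
 * behaviour discussed: closing a static channel does nothing, yet succeeds. */
struct evtchn {
    bool is_static;   /* assumed flag, set at domain build time */
    bool open;
};

/* Returns 0 on success, Xen-style.  For a static channel no teardown
 * happens: the port stays allocated for the guest's whole lifetime. */
int evtchn_close(struct evtchn *chn)
{
    if ( chn->is_static )
        return 0;          /* ignore the request, but report success */
    chn->open = false;     /* normal teardown path (details elided) */
    return 0;
}
```

A guest calling close on a static port thus sees the same success code as
for a dynamic port, while the channel itself remains usable.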

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 13:58:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 13:58:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357122.585544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bjn-0006Vf-PU; Tue, 28 Jun 2022 13:58:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357122.585544; Tue, 28 Jun 2022 13:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bjn-0006VY-MM; Tue, 28 Jun 2022 13:58:43 +0000
Received: by outflank-mailman (input) for mailman id 357122;
 Tue, 28 Jun 2022 13:58:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nsoM=XD=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6Bjl-0006VS-Od
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 13:58:41 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140043.outbound.protection.outlook.com [40.107.14.43])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69c254ca-f6ea-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 15:58:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB6PR0402MB2725.eurprd04.prod.outlook.com (2603:10a6:4:95::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 13:58:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 13:58:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69c254ca-f6ea-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UQsRSBnSKztSRaqxGxNp+msaGtSnB1LQs9+u2zxpYpq7BZYytXgMxM+xVmnGrEoe40MIPFiut+blTzX2p9QuF1t7z/KqaBMBGx9eFpFJy4CQrRQEx+OK0Ucik9KwJuHZ0Df6vSA++2vfUxRmXpEbQhJX53p/eLFrf9KXaes8FvigU3xhXV2GL1UpFVHwOjaNxnkyRLocHLR5gpP/4ul158sngYZmIaATUD0tnwfhGGserMwhxAAfvFlAtHkhO6raHloy7NGFvjh20cGi6e741XWwaczK6z98gyubkVgR4gWPqPYBblgcXq98FUyClq4TXHJdUdtlmiBhf2yFDy1wiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HjZFOURuPIESZfi6d8SlnLFqG4BLVKxR6BwdSzb4BBE=;
 b=Uc19MZe4LiKX7bd7U5CKgbHwxpzPoCVlTwRbCz9Ik9O+2LYoUhj5wPF/YG3WBd/VFRYcncZZnxFRcfJD5NoQrP+F+4GVxfIRkNMhSbtBd3tn7Ta/qkjIDHmG37K7sb8LeZZWA5GXCLxWcV90WRKdbDzNZ9WCnb3Vxs5ZoaUPUTN8LQw6h/108aqRJNP4o4vkBrOu5A+I+SDx7smyV86+FKsjwjpsyFhxHergc1UGwWPdSgvE/4lTyhyO89xg0SC2VdfoJNP/EJ3zaMPH5jq18Zku/SVzQVLkmq3GLzYGl21piUraJxn06rnW4b5rKv8Y+JQLzzNWUnQikfUF1Cokzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HjZFOURuPIESZfi6d8SlnLFqG4BLVKxR6BwdSzb4BBE=;
 b=Fm5WXIxyb0lhp1zPIuki7KUwXkdAwN1u/JP8q2iG7em2Dq4Kw9o2x3x+RhG5+2EnVzhYki9qNSlITdF/GbHtiloApdDOeqeGLcjTK6kxFm1E5fROEfKFk6DWpwi2V+yZMu4F4i8L/DFQLn4A4oxeRkqZNGXof9DMv9hsbqxycCRc45VklbdkAQUYU9B0p59ilxO6VmdNIcgdjuUAALwzEG4FyoaTAAkMLTjbSYXHIsG6VJLK8uvOilvd5rX9Tou5gJ4P0/AtCWKh0RhQ4gBhn+IHRhBKqwBvLPw22BORM3YCY5+Ek1LcBXS4092hYFs2JYGllU9boOgsmmXGO2Eo8Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <890322f8-1ff2-aa74-f7dc-435d744f01ab@suse.com>
Date: Tue, 28 Jun 2022 15:58:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] iommu: add preemption support to iommu_{un,}map()
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20220610083248.25800-1-roger.pau@citrix.com>
 <a968598e-cacc-d762-46b8-579e18f64d12@suse.com>
 <Yrr9XUxZCrjLFGDL@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yrr9XUxZCrjLFGDL@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6PR08CA0021.eurprd08.prod.outlook.com
 (2603:10a6:20b:b2::33) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 42e53399-46ce-4f96-e533-08da590e4cc5
X-MS-TrafficTypeDiagnostic: DB6PR0402MB2725:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RvTXHf0UaAgOtPOEDa1ddVkerEjj+rDgWgUPtCaXCcEA1WTywYmXDkAGTi7Y9kCuJd4Jgswi4fAKps+NuZCnNOOuQ/bPBp2xyRkY5ZWJsYe0HUlP+wBplJ3RGg+t5GlBf/zGl+OX+XmQIKseRMb3HKgIlZ4SqH3cNe44F3ykJMc9QKiVgrzLDgd9SZWX9S7YazPiMHh7HK5+UJRXIpdawmjv5CBVwtP9fBV2Gil4iOrg1DFzvs1uWDNVrCGwsL/6ee28uYzVi6k7ZFMxbcb0vrIP/Au39UqDH35ltEXBLiQnFOI6BAUmd51I6smghVW0+MdsdYBEBw8s8PIKbNdN3JUk9wnvz08bEhTn0irp/TKY1Ern73IbUPAvlcRhBHjsx++WOvNith2OyBbg5LbjpFFZwAk9U04GtYacpQgWsKEUCRqkhjMwm8rWWGNB2zMFvyJILif52Mmwgrryop2cOdEIRhSjU+TuqgoUVm+k+0CaAWnHyhP8Z5luEzLbELpPM8PpkbE9vlvhzERtT0q70cQ/doQ9lGd0qYWwoWf6q0v8OWN0N7EOg3sugL+z2s83GtyY5iZ8/9ffEeHx0ueoIFQ+T4/rociI1TGLS8OHDLnMGkXhfGCaooaLSjL0paTxZ0YNz/FaPMWZzEdbVVDrBzGS3clVG5H8JroY4xSqpH0OCusqvcqOjYT4VSFeJ7JnSodAO4wO3pb2SAMTIyAgTDJClxJDz0hsp7QEg0T+Dm+ZsSc8h+sbTx2aJM8pNYBCgzMtL3b8uglVmZ97ERBO8eo6tpnsz5P321TUkTAMvNA1gWcIGjOej7fjBuwPE+OjWZIetazAyutojWmud0520A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(136003)(39860400002)(346002)(366004)(396003)(376002)(478600001)(6486002)(41300700001)(4326008)(36756003)(31696002)(8676002)(26005)(86362001)(6512007)(186003)(83380400001)(38100700002)(31686004)(66946007)(54906003)(5660300002)(2616005)(8936002)(53546011)(6506007)(66476007)(6916009)(66556008)(2906002)(316002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ME9BdlJ1ZktlTUQ3MGVuMmU0S0RON05BNit6VGlnTUR2V0tuYlM3V3lwYndn?=
 =?utf-8?B?Q0ErRmdmMU83RjhvSkVnYXRHcWgrRnRkWGRGMnZBLzFkZWtjaDlNRVE4UVQr?=
 =?utf-8?B?S0t3VFVNNVFSM2wvNHNqZEg1L3d1NzJwaktRZkJaRVBXekhqcG9zamtvUjAw?=
 =?utf-8?B?aWtBMlRJNFFMbndqSU8xdHJGY2NuV1VxRW9VRmx0cXpwS2MzbjIzdUhTTzFi?=
 =?utf-8?B?dk1KVWpGaTZPOU5nbzJ6T09XbEVNbzVETm1ERHkxeWRYczN6SHZyYXFydUdZ?=
 =?utf-8?B?RXF3VGs3RWlhU2paM05rQ2lqcjQ4Vlc3c1R0MEd0SEVTYVVvS2pwVG5GQ01V?=
 =?utf-8?B?NGxsZGZKWWhia0Nub2ljc2Q1M1MrSVViSlNGWFpOZlhlblczNUtlajJCSkZw?=
 =?utf-8?B?SWpIbHNlYmtoaTRuK0pCb2g1VnJqUW9KaTZSZVF2NTF6MkRUeE5ESEN4ZVdj?=
 =?utf-8?B?MlgrclJNR0JHTlI0N015cm5UYW5LcFgrb0Rud0xjSzluN25vS3N3K2xjNHIw?=
 =?utf-8?B?RFNUenpNVTRSVE40TnVDZkZtU2RYR1JFMkhKZW5WZzhmWCtXTkQwbk9TU01i?=
 =?utf-8?B?VUN5Y1NKckcwdGNlQkJpZUdZSXJieklLN3ZvS3hzUXRLZ2t0MW5wN2l5Q0h5?=
 =?utf-8?B?OU82SllNaVJLeGxQY1FTdlk5YzkwOW9FM21CWnNjV1VhcURJMFZlYkV0SE11?=
 =?utf-8?B?VmVCNzJVZm5YUFRiRUowMUs5ZllXUHZqc2NKSmF5UkJSbkE2RSthLzN1Q3pq?=
 =?utf-8?B?OE4ybzRGeEE2YzREdElHWXBEeW1TeVFuSEJuMFZSMkRDd1RuL2hOVVFHSGF1?=
 =?utf-8?B?b2RKWUJTRllZQ3kySUptVVRYZE51SG1WTCt6clR2YlNPMFl2V044dVhLNUxn?=
 =?utf-8?B?ejlJT2xRVnRiVFRJOEpyRlJsU3UzcW9pOXBheXJqbFNYZFpmWGFwMFRxT29J?=
 =?utf-8?B?RWxqRVJza3kxQjgvV0lKWTFFNUtzMWdKT0FYeEFCc25jUy90ZTdFR1h1MFFS?=
 =?utf-8?B?bjhTQ3RaQnV3YVdmN09sckpTUHlubmU5eGIyUXhzSERmQ1ZQNXdycE9NeC81?=
 =?utf-8?B?a1JWVE9GZjk3eEJhdHBhNXNqYmVkNHdJS2M0d0hhdlRUUkZ6ZUdQTlYvU0VL?=
 =?utf-8?B?R1BQeGhueU43NXdaMWx5QjZXd0JlNjR0TEJjYWtrQ2s4RGNPQnpqVTBzQWFL?=
 =?utf-8?B?b0VSanZtYi9rdWRVUElEbG5kU0FCaDhZVnFxYVY4MFBoS21kV2FQaTBOL3dH?=
 =?utf-8?B?TjNEeTVpTThjekpJelM1WnQwY2IxUnZ2bHdIM2NWUUowc1Bwbk5aUFByb2dv?=
 =?utf-8?B?cC96anpzRVdSL3kwK2ROdHZSYklMSWtqNml1TkxBbXVUbjRiMVhacU1LUzNH?=
 =?utf-8?B?Zjc4dHpTdEdxWksrMGlvdlhlN0VjNk9yeFZLNmJMVTF5d05JemVDNk0yb21I?=
 =?utf-8?B?aFVtKzhtQzR0eVBKTTA1MVhOVHRQRWI0ZzZOc25KV1k1dUJGTzJSMXp3a3ZS?=
 =?utf-8?B?TW9UM1MvNnJ1THRjNWdnVERPeWNYYnUrdnlqME11RXZhSDFPNXJ1TDdVekF1?=
 =?utf-8?B?aUdIczZ5bENVWWRUdzJBaWlST25oVlpveFBLZVpZd2pYcVZ0d2lOZkNsdTdk?=
 =?utf-8?B?MUpzQXE0SHhHSW1sZmVrN1BHRVN4Nm1SdDkyMGtpaDU4S09yU1NiZmFIWlBX?=
 =?utf-8?B?NXVlcGhCYWxlb1BiYXVhV1RQd1ZPV29DZ29OaThUc0VWMEFIcHpMZ1E1N2V6?=
 =?utf-8?B?UzBzdll0YWtyRnZyWGd6UDRRQVVuNnpTbVhzMi9Uc09nNE5pWnBDTWlZd3dV?=
 =?utf-8?B?UGo0UjhmSXJiOUJobjREWDJJdjVtWHVXclBIT2orRGE2M2pQb0p2QWdxTXV0?=
 =?utf-8?B?ZWVTMGxCM1NITUZVa1dpZXBqNWpudzc1RDQwYkxmSm5ITG5iVk1pTVBnNFZY?=
 =?utf-8?B?TXpzNUxIdGluZjk2c0pXYnEzcHlBZ0phaFVnbkF4aXA3K1BLNFhmYWpJNWQv?=
 =?utf-8?B?MXhhYWhXbDhHMWZNdGVnZEpZQWUxUTR3N2VNNisweGtXU3Y3emFNVWxFdEpo?=
 =?utf-8?B?RjRBaGt5R1V3aXk2UWxWT1pnM2hIWEdNaEs5blg1bTFQRlJXY2hJVzBHOHdl?=
 =?utf-8?Q?bU3owXtXaEME4dJkliokhYwFZ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 42e53399-46ce-4f96-e533-08da590e4cc5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 13:58:38.0829
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CwF7lPsodvesod1jH4wb/AHO69VQW+L7ZxjROe4QY/eQ9xErB2tmt6UCGqhMP7jQxpa8B8aCikXo01imuctyRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0402MB2725

On 28.06.2022 15:08, Roger Pau Monné wrote:
> On Thu, Jun 23, 2022 at 11:49:00AM +0200, Jan Beulich wrote:
>> On 10.06.2022 10:32, Roger Pau Monne wrote:
>>> --- a/xen/include/xen/iommu.h
>>> +++ b/xen/include/xen/iommu.h
>>> @@ -155,10 +155,10 @@ enum
>>>  
>>>  int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>>>                             unsigned long page_count, unsigned int flags,
>>> -                           unsigned int *flush_flags);
>>> +                           unsigned int *flush_flags, unsigned long *done);
>>>  int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
>>>                               unsigned long page_count,
>>> -                             unsigned int *flush_flags);
>>> +                             unsigned int *flush_flags, unsigned long *done);
>>
>> While I'm okay with adding a 6th parameter to iommu_unmap(), I'm afraid
>> I don't really like adding a 7th one to iommu_map(). I'd instead be
>> inclined to overload the return values of both functions, with positive
>> values indicating "partially done, this many completed".
> 
> We need to be careful then that the returned value cannot be overflowed
> by the input count of pages, which is of type unsigned long.

Of course.

>> The 6th
>> parameter of iommu_unmap() would then be a "flags" one, with one bit
>> identifying whether preemption is to be checked for. Thoughts?
> 
> Seems fine, but we might want to do the same for iommu_unmap() in
> order to keep a consistent interface between both?  Not strictly
> required, but it's always better in order to avoid mistakes.

That was the plan - both functions would then have a "flags" parameter,
replacing unmap()'s order one.
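The return-value overloading being discussed could look roughly like the
sketch below: negative for error, 0 for fully done, positive for "partially
done, this many completed" when a preemption check fires. The function name,
the check interval, and the cap against the unsigned-long/signed-return
overflow are all illustrative assumptions, not Xen's actual code.

```c
#include <limits.h>

/* Illustrative interval at which a real implementation would poll for
 * preemption (Xen's actual policy differs). */
#define PREEMPT_CHECK_INTERVAL 256UL

/* Hypothetical sketch: returns 0 when all page_count pages were mapped,
 * or a positive count of pages completed before preemption was requested.
 * Here preemption is simulated as firing at every check interval. */
long demo_iommu_map(unsigned long page_count)
{
    unsigned long done;

    for ( done = 0; done < page_count; done++ )
    {
        /* ... map one page here ... */

        /* Periodic preemption check; bail out reporting progress made.
         * Capping at LONG_MAX addresses the overflow concern: page_count
         * is unsigned long while the return type is signed. */
        if ( (done + 1) % PREEMPT_CHECK_INTERVAL == 0 &&
             done + 1 < page_count )
            return (done + 1 > LONG_MAX) ? LONG_MAX : (long)(done + 1);
    }

    return 0; /* fully done */
}
```

A caller would loop, advancing its dfn/mfn cursors by the returned count and
retrying, until the function returns 0 or a negative error.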

> Are you OK with doing the changes and incorporating into your series?

Of course. I was merely waiting with doing the integration until having
feedback from you on my questions / remarks. Thanks for that.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 14:11:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 14:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357130.585555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bvh-0000Ve-Sy; Tue, 28 Jun 2022 14:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357130.585555; Tue, 28 Jun 2022 14:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Bvh-0000VX-Py; Tue, 28 Jun 2022 14:11:01 +0000
Received: by outflank-mailman (input) for mailman id 357130;
 Tue, 28 Jun 2022 14:11:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pAZO=XD=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6Bvg-0000VR-Q2
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 14:11:00 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80040.outbound.protection.outlook.com [40.107.8.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 226b7715-f6ec-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 16:10:59 +0200 (CEST)
Received: from AS9P194CA0025.EURP194.PROD.OUTLOOK.COM (2603:10a6:20b:46d::15)
 by AM0PR08MB4595.eurprd08.prod.outlook.com (2603:10a6:208:10c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 14:10:56 +0000
Received: from VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46d:cafe::1f) by AS9P194CA0025.outlook.office365.com
 (2603:10a6:20b:46d::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Tue, 28 Jun 2022 14:10:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT017.mail.protection.outlook.com (10.152.18.90) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 14:10:56 +0000
Received: ("Tessian outbound 514db98d9a19:v121");
 Tue, 28 Jun 2022 14:10:55 +0000
Received: from dc528320ab94.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C2EC5A66-627C-4E60-8946-859FF7F1F546.1; 
 Tue, 28 Jun 2022 14:10:48 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dc528320ab94.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 14:10:48 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DBAPR08MB5848.eurprd08.prod.outlook.com (2603:10a6:10:1b1::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 14:10:47 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Tue, 28 Jun 2022
 14:10:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 226b7715-f6ec-11ec-b725-ed86ccbb4733
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 5/5] xen/arm64: traps: Fix MISRA C 2012 Rule 8.4
 violations
Thread-Topic: [PATCH 5/5] xen/arm64: traps: Fix MISRA C 2012 Rule 8.4
 violations
Thread-Index: AQHYiaFo/vW4vxzYIUWAwsr8WvFDwq1k3nUA
Date: Tue, 28 Jun 2022 14:10:47 +0000
Message-ID: <B2F18894-0287-4B04-9062-2F8AA448F55F@arm.com>
References: <20220626211131.678995-1-burzalodowa@gmail.com>
 <20220626211131.678995-6-burzalodowa@gmail.com>
In-Reply-To: <20220626211131.678995-6-burzalodowa@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <882CDD81444C92448797352E0B476392@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Xenia,

> On 26 Jun 2022, at 22:11, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>
> Add a function prototype for do_bad_mode() in <asm/arm64/traps.h> and include
> header <asm/traps.h> in traps.c, so that the declarations of the functions
> do_bad_mode() and finalize_instr_emulation(), which have external linkage,
> are visible before the function definitions.
>
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/arm64/traps.c             | 1 +
> xen/arch/arm/include/asm/arm64/traps.h | 2 ++
> 2 files changed, 3 insertions(+)
>
> diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
> index 3f8858acec..a995ad7c2c 100644
> --- a/xen/arch/arm/arm64/traps.c
> +++ b/xen/arch/arm/arm64/traps.c
> @@ -22,6 +22,7 @@
> #include <asm/hsr.h>
> #include <asm/system.h>
> #include <asm/processor.h>
> +#include <asm/traps.h>
>
> #include <public/xen.h>
>
> diff --git a/xen/arch/arm/include/asm/arm64/traps.h b/xen/arch/arm/include/asm/arm64/traps.h
> index 2379b578cb..a347cb13d6 100644
> --- a/xen/arch/arm/include/asm/arm64/traps.h
> +++ b/xen/arch/arm/include/asm/arm64/traps.h
> @@ -6,6 +6,8 @@ void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len);
> void do_sysreg(struct cpu_user_regs *regs,
>                const union hsr hsr);
>
> +void do_bad_mode(struct cpu_user_regs *regs, int reason);
> +
> #endif /* __ASM_ARM64_TRAPS__ */
> /*
>  * Local variables:
> --
> 2.34.1
>
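[Editor's note] For readers unfamiliar with the rule being fixed here, below is a minimal sketch, not Xen code and with a hypothetical stand-in name: MISRA C 2012 Rule 8.4 requires that a compatible declaration be visible before the definition of any function with external linkage, which the patch achieves by declaring do_bad_mode() in a header and including that header in the defining .c file.

```c
/*
 * Sketch of the Rule 8.4 pattern. In the real patch, the declaration lives
 * in <asm/arm64/traps.h> and traps.c includes <asm/traps.h>; here the two
 * are inlined into one translation unit. The name is a hypothetical
 * stand-in, not the actual Xen symbol.
 */

/* What the header provides: a prototype visible before the definition. */
void do_bad_mode_demo(int reason);

/* The definition now has a prior compatible declaration in scope, so a
 * checker (or GCC's -Wmissing-prototypes) no longer flags it. */
void do_bad_mode_demo(int reason)
{
    (void)reason; /* the real handler reports the unexpected CPU mode */
}
```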



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 14:24:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 14:24:57 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: Re: [PATCH] xen/arm: Remove most of the *_VIRT_END defines
Thread-Topic: [PATCH] xen/arm: Remove most of the *_VIRT_END defines
Thread-Index: AQHYbt5OQ+9i9MqGEE+kzOfwd/8WE60trFiAgDALKYCAB2BXgA==
Date: Tue, 28 Jun 2022 14:24:35 +0000
Message-ID: <E2A3AE34-D584-4FD6-8101-2668F68A5684@arm.com>
References: <20220523194953.70636-1-julien@xen.org>
 <F2040FC0-C040-46F5-8DD0-79EE0E1B3A1E@arm.com>
 <b8f364cc-ef22-75bb-8362-c44698d318ff@xen.org>
In-Reply-To: <b8f364cc-ef22-75bb-8362-c44698d318ff@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <23304E84DE2F0B42BECD0E890DD8183E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Julien,

> On 23 Jun 2022, at 22:45, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 24/05/2022 09:05, Bertrand Marquis wrote:
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ----
>>>
>>> I noticed that a few functions in Xen expect [start, end[. This is risky
>>> as we may end up with end < start if the region is defined right at the
>>> top of the address space.
>>>
>>> I haven't yet tackled this issue. But I would at least like to get rid
>>> of *_VIRT_END.
>>> ---
>>> xen/arch/arm/include/asm/config.h | 18 ++++++++----------
>>> xen/arch/arm/livepatch.c          |  2 +-
>>> xen/arch/arm/mm.c                 | 13 ++++++++-----
>>> 3 files changed, 17 insertions(+), 16 deletions(-)
>>>=20
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 3e2a55a91058..66db618b34e7 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -111,12 +111,11 @@
>>> #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
>>>
>>> #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
>>> -#define BOOT_FDT_SLOT_SIZE     MB(4)
>>> -#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
>>> +#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
>>>
>>> #ifdef CONFIG_LIVEPATCH
>>> #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
>>> -#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
>>> +#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
>>> #endif
>>>
>>> #define HYPERVISOR_VIRT_START  XEN_VIRT_START
>>> @@ -132,18 +131,18 @@
>>> #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>>>
>>> #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
>>> +#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
>> This looks a bit odd, any reason not to use MB(768)?
>
> This is to match how we define FRAMETABLE_SIZE. It is a lot easier to figure out how the size was found, and it matches the comment:
>
> 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>                     space
>
>> If not then there might be something worth explaining with a comment here.
>
> This is really a matter of taste here. I don't think it has to be explained in the commit message.

You are right, it makes sense with the comment at the beginning of the section in config.h.
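[Editor's note] The arithmetic under discussion is easy to confirm: GB(1) - MB(256) equals MB(768), so the subtraction form is a stylistic choice mirroring the "256M - 1G VMAP" layout comment. A small sketch with re-created helper macros (assumed shapes, not copied from Xen's headers):

```c
#include <stdint.h>

/* Hypothetical re-creations of the MB()/GB() size helpers, named with a
 * DEMO_ prefix to make clear they are not Xen's actual definitions. */
#define DEMO_MB(_mb) ((uint64_t)(_mb) << 20)
#define DEMO_GB(_gb) ((uint64_t)(_gb) << 30)

/* The VMAP size as the patch spells it: end of the 1G region minus the
 * 256M that precedes VMAP_VIRT_START in the layout comment. */
static uint64_t demo_vmap_size(void)
{
    return DEMO_GB(1) - DEMO_MB(256);
}
```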


>
> [...]
>
>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>> index be37176a4725..0607c65f95cd 100644
>>> --- a/xen/arch/arm/mm.c
>>> +++ b/xen/arch/arm/mm.c
>>> @@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
>>> /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>>> static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>>> #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
>>> -/* xen_dommap == pages used by map_domain_page, these pages contain
>>> +/*
>>> + * xen_dommap == pages used by map_domain_page, these pages contain
>>>  * the second level pagetables which map the domheap region
>>> - * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>>> + * starting at DOMHEAP_VIRT_START in 2MB chunks.
>>> + */
>> Please just mention that you also fixed a comment coding style in the commit message.
>
> Sure.
>
>>> static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>>> /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
>>> static DEFINE_PAGE_TABLE(cpu0_pgtable);
>>> @@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
>>>     int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
>>>     unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
>>>
>>> -    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
>>> +    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
>>>         return virt_to_mfn(va);
>>>
>>>     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>>> @@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
>>>     int rc;
>>>
>>>     /* destroy the _PAGE_BLOCK mapping */
>>> -    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
>>> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
>>> +                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
>>>                              _PAGE_BLOCK);
>>>     BUG_ON(rc);
>>> }
>>> @@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>>
>>> void *__init arch_vmap_virt_end(void)
>>> {
>>> -    return (void *)VMAP_VIRT_END;
>>> +    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
>>> }
>>>
>>> /*


Cheers
Bertrand
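[Editor's note] The cover note's concern, that [start, end[ ranges break when a region is defined right at the top of the address space, is the motivation for the `(va - START) < SIZE` form in the patch above. A self-contained sketch (illustrative names, not Xen's):

```c
#include <stdbool.h>
#include <stdint.h>

/* If an exclusive END is computed as START + SIZE and the region touches
 * the top of the address space, END wraps to 0 and "va < END" is always
 * false. The subtraction form avoids computing END at all and stays
 * correct for any placement of the region. */
static bool in_region(uintptr_t va, uintptr_t start, uintptr_t size)
{
    return (va >= start) && ((va - start) < size);
}
```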



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 14:26:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 14:26:37 +0000
Message-ID: <dbdf3bb2-edc6-e622-f17a-8819f6fcb43d@xen.org>
Date: Tue, 28 Jun 2022 15:26:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
 <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 28/06/2022 14:53, Rahul Singh wrote:
> Hi Julien

Hi Rahul,

>> On 23 Jun 2022, at 4:30 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 23/06/2022 16:10, Rahul Singh wrote:
>>> Hi Julien,
>>>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 22/06/2022 15:38, Rahul Singh wrote:
>>>>> Guest can request Xen to close the event channels. Ignore the
>>>>> request from the guest to close the static channels, as static event
>>>>> channels should not be closed.
>>>>
>>>> Why do you want to prevent the guest from closing static ports? The problem I can see is...
>>> As a static event channel should be available during the lifetime of the guest, we want to prevent
>>> the guest from closing the static ports.
>> I don't think it is Xen's job to prevent a guest from closing a static port. If the guest decides to do it, then it will just break itself, not Xen.
> 
> It is okay for the guest to close a port; the port is not allocated by the guest in the case of a static event channel.
As I wrote before, the OS will need to know that the port is statically 
allocated when initializing the port (we don't want to call the 
hypercall to bind the event channel). By extension, the OS should be 
able to know that when closing it, and skip the hypercall.
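The guest-side behaviour described here could be sketched as follows. This is a hedged Python model, not real Xen or Linux code: all names are illustrative (the real hypercall op is EVTCHNOP_close, invoked from C), but it shows how a guest that records the static flag at init time can skip the close hypercall without any special casing in Xen:

```python
# Hedged sketch (not real Xen/Linux code): a guest that records at init
# time whether each port is statically allocated can simply skip the
# close hypercall for those ports.

class EventChannelPort:
    def __init__(self, port, is_static, close_hypercall):
        self.port = port
        self.is_static = is_static            # known when the port is set up
        self._close_hypercall = close_hypercall

    def close(self):
        """Close the port; static ports live for the guest's lifetime."""
        if self.is_static:
            return False                      # no hypercall issued
        self._close_hypercall(self.port)      # stand-in for EVTCHNOP_close
        return True
```

In this model the decision lives entirely in the guest, which is the point being argued: Xen itself would not need to be changed.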

> Xen has nothing to do to close the static event channel and can just return 0.

Xen would not need to be modified if the OS was doing the right thing 
(i.e. not calling close).

So it is still unclear why papering over the issue in Xen is the best 
solution.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 14:40:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 14:40:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357149.585589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6COL-0005Jw-RJ; Tue, 28 Jun 2022 14:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357149.585589; Tue, 28 Jun 2022 14:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6COL-0005Jp-Ni; Tue, 28 Jun 2022 14:40:37 +0000
Received: by outflank-mailman (input) for mailman id 357149;
 Tue, 28 Jun 2022 14:40:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pAZO=XD=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6COK-0005Jj-Bo
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 14:40:36 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80040.outbound.protection.outlook.com [40.107.8.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 44fc3ee7-f6f0-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 16:40:35 +0200 (CEST)
Received: from AS9PR06CA0147.eurprd06.prod.outlook.com (2603:10a6:20b:467::19)
 by AM5PR0801MB1985.eurprd08.prod.outlook.com (2603:10a6:203:4b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Tue, 28 Jun
 2022 14:40:33 +0000
Received: from VE1EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:467:cafe::20) by AS9PR06CA0147.outlook.office365.com
 (2603:10a6:20b:467::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15 via Frontend
 Transport; Tue, 28 Jun 2022 14:40:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT029.mail.protection.outlook.com (10.152.18.107) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 14:40:32 +0000
Received: ("Tessian outbound 8dc5ba215ad1:v121");
 Tue, 28 Jun 2022 14:40:32 +0000
Received: from b88fb6d517e4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 029EBB07-BD59-40F8-A81B-F812010AC5EF.1; 
 Tue, 28 Jun 2022 14:40:25 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b88fb6d517e4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 14:40:25 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB7028.eurprd08.prod.outlook.com (2603:10a6:20b:34f::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Tue, 28 Jun
 2022 14:40:24 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Tue, 28 Jun 2022
 14:40:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44fc3ee7-f6f0-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=hRImLPVAmrzgA90gNPe0G6uZbF77B6qi5oi+d/4MgoqFrfD3k/tS+Alt3djW7xi/rGh589UgpFRf1YUuSVeEcbpUkPqzQZO5cQx3EhMFyw6I3WFl/F6rKw6m65lNbPHfHbGeyJyGtCT1iQyUp953IMjBkZU7xMorkMohObgIS5NCdCYMEshxKY1WElZ/Fs/c5diqJ9A8UTCsFmLvLTf8DDwIJhGhtu305c3mIxpJWbr7s4+K4ObI1TmG6T4RD5KUU5XWfSpwQ+AVzgx3zjJJzyWocnMI8p9kz/f1YXiNxLnDMsKMYX25Zp0TpMAWLeGIx1pqNzzvwy8fl9lCBLc56w==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TBlHUe/V+FMT0djyj1W9U7/vBkn/oW7XVxqCJ9ZEEbM=;
 b=EyrA4J3iDFbLjVuc5lA9DW2r3ygnAfUNEyuN0qV13gbvT/0s6c1RtMjot4ZFrAyfGulrFVihOecWWehWoh0veXnaqA/1bxdbuRxawxUjr607k89BuhQadmIr0D0X76dkS3+8WvKOgHrqgTEjXemDhmtmdApSAHVQsWRNAUoAaC/9FOabIGYL0ZpPe6Lt3S4A70O+p/vxzBJQk9ObsTQPt/DFu6UgUz9QN/UrhcsY8gZcCpbHOPM3DZRfBjoTDLYs9orK+GCKOEDI9btxTdc29se2sRKBjchDagAt5TbOw/Vo4mRWJv/pkvX+0CaB/uGofEUD+WcUI8ZtW4cyvFh3eQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TBlHUe/V+FMT0djyj1W9U7/vBkn/oW7XVxqCJ9ZEEbM=;
 b=4u5fFA9A0knbFWfmBhSZfDDr+5gyOO3WwTP/l27pnAPXkioi7pqsGPbGq9CFGZU9WSblKDO/AHqXE9uIahn9YXNbnP7eprjk9XkzyWuziarE3hXxkv8OHZQIToY4JxaxE75luAzv0grolDlhFo7etH/7S7EMJxpApdckoNv46io=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1a8d2ef52655265b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PwctJcqu/IHoCm7dJ0qTSJe/fk/oR3U+8ipLw8LhBMuNWnUggeHJUzk+spWKYrd0ztYHw/PI+vcV+DB7EtZUfHalphX+gmHbHyoMD+sM1sfZDECqnUpR+OlQ9XreqxxkebsD3lLLt00hiGDkly6sZSl2bw7YqTqZ5e/V1WKrWftRJ26+W1prWOToI66HpcnVpmnSA0816UqY3jB7YUVGZCTz2MUIH6FHt+o3abXDOF8n/w+sG5+96hndDjpuY7l8R/zofqX61tlLU7OQe6nnfDXz+RzS+2OIeviHYcSzZUFjH5sLqO99724UrQiR7FzkVV9IdJU7ROQ4hhuRY1hbGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TBlHUe/V+FMT0djyj1W9U7/vBkn/oW7XVxqCJ9ZEEbM=;
 b=bkCaDzLxQDp16YTYRGyLKuKOEjq/3L7e+TCHKbwuETYlP95OWoMdQKgQKWPDSzyCRydoAQyujZPirhXYKq5/gNyj9X67PaMF9rRhZvYNpmkMs+BzlLv7satLUl9MAJTnk7bdk+yNDUkz2mEPfEBFZXnujZ4oAswGHOBdzJiHn0zafoU8DrKdb16no3bQipLz+Bju9m9a6HpoLbM4xzLP4xZC16EcP9XcZ/52m+YDwK9Ku1mbI0/uv2lXcUy3agBLjAR+11/QVImeSmEzzAY5b7Eww69txgCKAaNpCNLZq4mS8ymL+GtqqWJ9BukfuTLaodwR70EMGTlWfDIfcNvd1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TBlHUe/V+FMT0djyj1W9U7/vBkn/oW7XVxqCJ9ZEEbM=;
 b=4u5fFA9A0knbFWfmBhSZfDDr+5gyOO3WwTP/l27pnAPXkioi7pqsGPbGq9CFGZU9WSblKDO/AHqXE9uIahn9YXNbnP7eprjk9XkzyWuziarE3hXxkv8OHZQIToY4JxaxE75luAzv0grolDlhFo7etH/7S7EMJxpApdckoNv46io=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: xen-devel <xen-devel@lists.xenproject.org>, Hongyan Xia
	<hongyxia@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<jgrall@amazon.com>
Subject: Re: [PATCH 2/2] xen/heap: pass order to free_heap_pages() in heap
 init
Thread-Topic: [PATCH 2/2] xen/heap: pass order to free_heap_pages() in heap
 init
Thread-Index: AQHYe9tJexjmG0CeP0OUFEWOQtMOSq1lAkiA
Date: Tue, 28 Jun 2022 14:40:23 +0000
Message-ID: <B8DAD34C-B156-4DC9-AA80-F1401028DF6F@arm.com>
References: <20220609083039.76667-1-julien@xen.org>
 <20220609083039.76667-3-julien@xen.org>
In-Reply-To: <20220609083039.76667-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 12087391-61a0-4644-73b6-08da591427b0
x-ms-traffictypediagnostic:
	AS8PR08MB7028:EE_|VE1EUR03FT029:EE_|AM5PR0801MB1985:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xo2WSCvFWAFSwTstfgBhfLHmiEo6SWLBgMnR7PZ4JA6L0+4hAuTg1Rt5yfCJYvjXyInaGgnwRPxlyH2Vho6QP+M/49nSpE+VudUSptKZxg5J4b2i86JG3g6E9391VGNJvdbyRAMdJcjZlk3AvMbubLaXeCXh5pt28fqRdsFGIOBj+sJ94dcxeJROPStjHJcxUiPPsd12gLpo1AGnman1Jtexuy7js73ux955l+CAbkmn8atUwa2ZaSXLszzn75+5kg+YR5SExEec2fpto9+8euKHZFIe0WuR0rWLndFywELCTt+6EqS6jlpxpovFIXrKPozMFU9y3bQuOynstIqUFeaiAfVrlmjaW6jEfOauQExWgMrT0bK59hThZ3W2aVBuq1wBUpDOhzbPlXMrkQXmNBRi/IDMHFYQFF5wFbfMyhEe612GaG8oMGSu4FnvE1RG+d3fBpsrm9roKY8yd5Mzm2Jb7Hmc6mx8zoFJO6AeOwDPKZPF3YDHdSKAjxf/D0Xk3Y12U6v8eeYovFTQgOQcgmr8bCt547fcpJcj4BU6UeGPkIIZEIBtPy+1tJc7JjjV2zd54FmS5EfRY/DRVZNGY6cq0tbFsVAPFpulvfMVcAu0udA0N9+MPoL0pfcLDOyRO3oqPNyRRVegCMc4Vr/RqQ7VyjdSVvQLkGqjMhqWmF6aJgzKp61/icpaAlwrUrTS7NAnSNyCEP1TTpFHiWIWBgrjDjPMWFXjWXrChr4LJtVZKtoXUBGza/g0OR+KTEL14fLqtIFcmzY0J/+ANIdTvFW7biEgONeJsqBHFlGh+BqrD6AlCZO9s7AqDEQbg4HjVXZmRAoFXdyroYbKwIN0Rg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(39860400002)(396003)(136003)(376002)(346002)(26005)(53546011)(66556008)(38070700005)(91956017)(122000001)(6512007)(2906002)(76116006)(66476007)(66446008)(8676002)(66946007)(64756008)(316002)(36756003)(6506007)(4326008)(38100700002)(41300700001)(478600001)(71200400001)(8936002)(2616005)(6916009)(6486002)(33656002)(186003)(5660300002)(83380400001)(86362001)(54906003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CB5EA6C300BF244F8EBD452087AC53B2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7028
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	47ad916c-0a78-4937-d191-08da5914227d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WykRmHy033v0b6ysOM39Y80uefAjnrs2X76KhIvkK+669NKQe2CDYhGkfQRpGUPz/TzafZcRqaG0AKn7aMrG9mpScwgVEaEpf1S/ngB6CRSIqDI2CidzOlOmvoq7ZTwWlriUG+8F8V21D2vxnrOXvgzi7dQ7tcVQXgSH8Ku1Tm8fUAIlEKLs64qNS6k0KHMTXxTXLdZMnYQUCd/k2zUN5ET7k9iYj7omovJZvOZm+XPpkknBf+096rX2IqT5VQP5bg6Dve0XnnNnj40gZEtgiR3J2Lp/G4XGyNbITNQ0qHcwpLGVRSQq5Mvtrhea5gWIURdvKd1ZfCKmhsaNH1viY/D1Eas3181jcFVf5BKJVmv3CKNPKkwlncSOLeGIYjaakhWH+N3EDMQBxPJRuDi1f3M867hS87BD6u2HzAaTAIamu+HrRzQPdG8XNphLpacT+rBpqnTvubt1oal9TWoZK3QbZyvQ0/GpznKm3U81lP68HeJHiIebQStwN0JUIl6CgfZ4oPSqRL7BNpgKcZLydgt3R1DiW0b8z1brr9kFwhwSB+E+Mo78M5FXBJHYzC6WuEje7+yPEpshLRQ+FYRYYKHte/Gh/19sU5TGv/SWu73UPDlT6BYfxJVFKqBLCO4rIESYUM15hsTNJWy8F3ZD5u/GkP4IsqIRGnClpaExH0/5W4f/N/RfGeObNRR3ZHXV8ghs1KiMxMK13rwiTDe32GDdYxTZ3PUPkjX90yOVf5Iro7mw0xZNFDAmjEu2iq/2YQNyeGmekAdgjuEEbbxYOeD1EQlh9qMng2trRiLV3Q6FrGx8hAEb2lB2a9nyA+Xm
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(376002)(346002)(136003)(396003)(39860400002)(46966006)(36840700001)(40470700004)(107886003)(36756003)(316002)(8936002)(6862004)(40480700001)(40460700003)(356005)(33656002)(8676002)(83380400001)(47076005)(70586007)(2616005)(4326008)(36860700001)(54906003)(81166007)(2906002)(186003)(478600001)(26005)(70206006)(82740400003)(53546011)(336012)(86362001)(5660300002)(6486002)(41300700001)(6512007)(82310400005)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 14:40:32.5346
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 12087391-61a0-4644-73b6-08da591427b0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1985

Hi Julien,

> On 9 Jun 2022, at 09:30, Julien Grall <julien@xen.org> wrote:
>
> From: Hongyan Xia <hongyxia@amazon.com>
>
> The idea is to split the range into multiple aligned power-of-2 regions
> which only need to call free_heap_pages() once each. We check the least
> significant set bit of the start address and use its bit index as the
> order of this increment. This makes sure that each increment is both
> power-of-2 and properly aligned, which can be safely passed to
> free_heap_pages(). Of course, the order also needs to be sanity checked
> against the upper bound and MAX_ORDER.
>
> Testing on a nested environment on c5.metal with various amounts
> of RAM. Time for end_boot_allocator() to complete:
>            Before         After
>    - 90GB: 1426 ms        166 ms
>    -  8GB:  124 ms         12 ms
>    -  4GB:   60 ms          6 ms
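The splitting scheme in the quoted commit message can be modelled as below. This is a hedged Python sketch of the algorithm only (the real code is C in Xen's page allocator); the MAX_ORDER value and function name are illustrative:

```python
MAX_ORDER = 18  # illustrative; Xen's actual bound depends on configuration

def split_range(start, end):
    """Yield (start, order) chunks covering the frame range [start, end).

    Each chunk is 2**order pages and aligned to its own size, so each
    could be handed to free_heap_pages() in a single call. The order is
    taken from the least significant set bit of the current start, then
    clamped against the remaining size and MAX_ORDER.
    """
    while start < end:
        if start == 0:
            order = MAX_ORDER
        else:
            # bit index of the least significant set bit of start
            order = (start & -start).bit_length() - 1
            order = min(order, MAX_ORDER)
        # sanity-check against the upper bound of the range
        while (1 << order) > end - start:
            order -= 1
        yield start, order
        start += 1 << order
```

For example, under this model the range [0x5, 0x20) splits into the chunks (0x5, 0), (0x6, 1), (0x8, 3), (0x10, 4): each increment is a power of two, aligned to its size, and together they tile the range exactly.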


On an arm64 Neoverse N1 system with 32GB of RAM I have:
- 1180 ms before
- 63 ms after

and my internal tests are passing on arm64.

Great optimisation :-)

(I will do a full review of the code in a second step).

>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 14:52:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 14:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357155.585600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6CZf-0006yz-TH; Tue, 28 Jun 2022 14:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357155.585600; Tue, 28 Jun 2022 14:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6CZf-0006ys-Pg; Tue, 28 Jun 2022 14:52:19 +0000
Received: by outflank-mailman (input) for mailman id 357155;
 Tue, 28 Jun 2022 14:52:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pAZO=XD=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6CZe-0006ym-R7
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 14:52:19 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50067.outbound.protection.outlook.com [40.107.5.67])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e70beaed-f6f1-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 16:52:17 +0200 (CEST)
Received: from DB6PR0501CA0040.eurprd05.prod.outlook.com (2603:10a6:4:67::26)
 by AM6PR08MB4916.eurprd08.prod.outlook.com (2603:10a6:20b:ca::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 14:52:14 +0000
Received: from DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:67:cafe::bb) by DB6PR0501CA0040.outlook.office365.com
 (2603:10a6:4:67::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.21 via Frontend
 Transport; Tue, 28 Jun 2022 14:52:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT007.mail.protection.outlook.com (100.127.142.161) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 14:52:13 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Tue, 28 Jun 2022 14:52:13 +0000
Received: from c4641c7cb94c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 89222CF9-4897-400C-A1DC-E29A9BE96B89.1; 
 Tue, 28 Jun 2022 14:52:02 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c4641c7cb94c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 14:52:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM9PR08MB7290.eurprd08.prod.outlook.com (2603:10a6:20b:435::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun
 2022 14:52:00 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Tue, 28 Jun 2022
 14:52:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e70beaed-f6f1-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=ZgeIupU9VHXKbgfRsbU8TlI6LS5m/uJ6BdO54ztqPZKu/m4zia4MAXJDmtPJl21/MtSVZqCNSfISnPXfnk3dI1RPORB2B1CWpJLmHQq4W061fP6FhKTniAS6yJFkzS3jgGOJMKFkNqz93GYQFGlVp4L/V8K1FfrNn/x28X5FGsUZJ5/aAprKJeDoUqqJoQo5Tkup1C8DDwiPpTafTU4VetF1EMUHVk1oA9Qum7NOoD28rPgPYZYb2m8yseGOR6LIMgEAoUQ7f9KvVIpEpLnvoE1/FSNyyTh977p+5oiKHzvVD/cOFyyqSenZ/APe/fs9fXcyASGQaHi7R5X2s6kqXw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E3Usn+WLduDq2CAWi0URDioj7XgRGUFCPKnDcPy7kPU=;
 b=YWHSQfPKv24+FairVlQE24ApxBSG+maX0aEPT0UcDD6qkXEGeuMfSWSyJJcjCe/9DuE0WtA0GZdfPvisuEo50ueD6z/cprZBDpPXxNLhQAZXhgroi1N8V/Qv3MpxnmYgP2ZT55OiZVeB3bxXTThLfK66qYfADO1Pvc3n8Clv7Dhsr2gCsybew93LNeQWLaPyiGcz7Q7S/pTqV3IB9vreCC2IORVq7WoQTmOuhyRtCKp67d4BNzKjqAXI26QcJBC1nSOgamMZc4HjxmH78+gNFXkn2v4EC9PeToOS2I7fzoEEGTLNbGn13KGXLcCc6RyuYdLMxXW3m0XS5NCEEGMPzA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E3Usn+WLduDq2CAWi0URDioj7XgRGUFCPKnDcPy7kPU=;
 b=QJho5GkSb9LpwtaROMwU7dcf0qOJSF8rPgtBLqCfsYuacExJiqzgDCUhVoYsrRg/0DOWRN8sWXN5dHqq366CaF/17NcwV33xML/p8s0Wjb86GCNW4T86EvpDRfoNpC0qA/D55B3nBJu4z8BEeX+g3Zmhli/7qBdoaq5Z4ufsP8c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4d40ca54d5ac9106
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QemuhnuxxRrkMgUVG4Kuxs4BjfAtOnbAxWtM7pHQy/+yiInQuQYFvx/ThJoFITNLiys1hgsRe/3SclmwwafpfgfDnHEjLoJVTmy3KbUgwEr4XAqqC38ltax36QjfeZdW4O73Cn1vjMdYccaw8FiY1c0t4smLv5yYd/vmClun4vHS49fiyuLA4h+VJO6P0/lQF+IbO94UQeDqvFuXjako08HhIcCgJ5J1g/HrpF85VZdQogN+20WGhL6wTr3PHFPloFdyFfbHOka6XB6Zea/CPiEnsYoO6B9EvYLjZJWEBh5hkTspT6TW4XC0zsPpNVI1XWs7ei78ZYq6PYsLVqOvqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E3Usn+WLduDq2CAWi0URDioj7XgRGUFCPKnDcPy7kPU=;
 b=l7jB00QhdNcOHZ/zitLAmi3uQcLlYv75DP70VaRPNs2BohKFq8BPDxsO1zFIp9lzT6f5laHDCxFFUvMHbXN/MVvJbc7v2cjRIbOcmSBGXN+rt2CiZzpGOkNUC5cpm1z1HjBBVu9R7q7ztMaFByLiJ73EwcLJmVOE7ASRwOIf0mbVItqQJi9BEKqlcIh5WAH92ZGnl3g4Q+HsXZFn3/vzVcG/de3LNX3TFA9N6883ZSIOqJhyFkVIqC0YISpvthg84N0geoceGWHCyY0VC9wU5x0uSq7Z1YLttcGnL6eKYO+0H2hi8luAm+HYE4hxgNNylpkII62JXOyDvZicUebR/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E3Usn+WLduDq2CAWi0URDioj7XgRGUFCPKnDcPy7kPU=;
 b=QJho5GkSb9LpwtaROMwU7dcf0qOJSF8rPgtBLqCfsYuacExJiqzgDCUhVoYsrRg/0DOWRN8sWXN5dHqq366CaF/17NcwV33xML/p8s0Wjb86GCNW4T86EvpDRfoNpC0qA/D55B3nBJu4z8BEeX+g3Zmhli/7qBdoaq5Z4ufsP8c=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Rahul Singh <Rahul.Singh@arm.com>, xen-devel
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Topic: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Thread-Index:
 AQHYhkX7DDZgCoAMsUecvv2wUY+1U61bhmuAgAGTuwCAAAWdgIAHwJOAgAAJOwCAAAcgAA==
Date: Tue, 28 Jun 2022 14:52:00 +0000
Message-ID: <67EA3F72-5F6D-4150-A9BA-EAF92E6C9EA1@arm.com>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
 <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
 <dbdf3bb2-edc6-e622-f17a-8819f6fcb43d@xen.org>
In-Reply-To: <dbdf3bb2-edc6-e622-f17a-8819f6fcb43d@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 33a5013b-9183-4edd-63cf-08da5915c994
x-ms-traffictypediagnostic:
	AM9PR08MB7290:EE_|DBAEUR03FT007:EE_|AM6PR08MB4916:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 eh9E1A+Jag0J21rmNahOV10V9ba0Zb3W9yPLKIipB7GnGg1PQWGpbY9SDYRJNF0b3rtcF9Lc3B7l4GufGn+u1bAG7Mrg7s4OWNAAUC8iXXt4WSJ0FPaeIh1MmAF3RO8oYIlBemaqEZ3KtErUIuvfVA9UF+d69Cb4hgmNb+f1ApJn6c36+k/X1Bz3dmuA7tatt3zkOCwBGRCD8tL0aOWKzXE6LdRmrvF+L3+vbfFzs9zsAYyvFefFGvIcyXSho+sYUG+fvi3Afh7ebyWIPeBIpf6faCsd6Sx6I3N+NYdlFzginvM4HuHyI19JQJAavmrP6JKs6Bn4yFbZrF9U5SluXH5SNwq6AExjbBEsTq8hTTeyM55qNf1mDTEvMGZ2ykOF5ZT9Vp+fVU20ByShRHfQ55o3PACu/ZTsLCnFqU9oC7PPU7AIjO9pi2ToEgkFhFvzhqjiTV0M4F3wz48DN0/QQEkptqgyp8uyTLTxynJD/t2j4x+wOAysrUU8FoHq+rN5K/mCbecMdoUpEDoGaQ2xqrcXNgwrw63J/6YEdvQI44PS8i58GDBH6f/LexwQp7UL4quMbyGRroCPVZWBII1lHqjTJK6XOR8dugFcMq0pz+7e15xs9V+32y+6xMAXFCygVvTuSHpgn7IgGt74bMQ0sLPXIrzZXRLujv71FVSqHYPr5jh76qIUV+WOLiaJbjtPxAVBnb6XSF+WE2bKzru1SxSgRv06eIvIwRkvJseIj78jdV1R7N2wD9L/k9+ViKnfHCj3hsqgstrc1KCgB2AZTFAZbApijoBLbKcK6fZrGVmGU93jkGL5WhR/Rm8Pw8Mx/KhSdl8ekXOhB65KftK3Cg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(366004)(39860400002)(376002)(396003)(346002)(26005)(66476007)(186003)(8676002)(38100700002)(66446008)(41300700001)(6486002)(6512007)(54906003)(5660300002)(2616005)(76116006)(38070700005)(478600001)(4326008)(64756008)(33656002)(6506007)(66556008)(122000001)(36756003)(91956017)(53546011)(8936002)(2906002)(71200400001)(66946007)(316002)(6916009)(86362001)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4DDEEEA0EC8F344585A1D504B92DFF21@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB7290
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7f639868-4f66-4e56-a57e-08da5915c19e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aOnhfDxEsza7wQgdjtyV0iK0t3NCrX36eiYY8vpVCprSrGYZsane7so3+MmnKEPnGUryPInvoOmAIO4+jXQPeZIaHqlIYAqNsCGtgrxz1yvwfEAeSawm6X+iegByuMp8qcF8iD6EMR7YkaO6dT4IKuVvU41Bq/ycwFwcbT3VI4l3FRFvd3tyBviM9QC8I0ActiRHwQP1yCqf6mRFmzqjN4/wShyh3HHHFF3YGfwNpo2dA/D9sOPHgnNs4vtoXFAVcfVaxI7tIi9GcFXm+Uv45svdxYGg40GlvCtlc7SlitwTnEm3WilWn3isnX3VzjQimFFSNwdndi5vvDhItJJaeUHx6t9LO4kMpCtDZqkBJ00GuT0lCrueCPgpvtSjsGZCsJFrOh2Nkn96w+vURhH/AwQDpmwwLlosmdRQzdSpAb0SOfcaZscOPvwcyQLM30F2XuS5ThJvSad+6v70M/dKufQeTPS/WWG2fDuciKy1UyrXQ+mYsitkr1o+Cj2hESeKI4T7+RsaMz6p0LpKfEaWWmwTBjIL6vJb8b9JT0ZR3gmSCrUnFxWOF1XUq3bIutFeeNCzoGZOA3FzNO+eO9CNT4AqXrtWjiSlzRKojVWvAyj68k7UCuOPdC0m9Ij1vdPQLqO6bcu4NjEd9T7Rs+A9P7jQSiJP6j3i0vXEVuxblGI+hz0fcH+f4vkxsVMvFa2tR0nxwQXvA9HVWb/U4A2SmIWB33DQpAO5bpgztinTwVl++sWh4nXRYoyADg449e+4NJFa5gj8aHOMfVG6LdpdjfN0aAalE80Npn/erHoPttjBiCHk7Aom8jfUF4kiFQLu
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(376002)(346002)(39860400002)(396003)(36840700001)(40470700004)(46966006)(40460700003)(33656002)(6512007)(53546011)(336012)(8936002)(6862004)(6486002)(8676002)(2906002)(5660300002)(4326008)(81166007)(41300700001)(356005)(82740400003)(478600001)(86362001)(36860700001)(6506007)(316002)(82310400005)(36756003)(186003)(47076005)(40480700001)(2616005)(26005)(83380400001)(54906003)(70586007)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 14:52:13.7320
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 33a5013b-9183-4edd-63cf-08da5915c994
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4916

Hi Julien,

> On 28 Jun 2022, at 15:26, Julien Grall <julien@xen.org> wrote:
>
> On 28/06/2022 14:53, Rahul Singh wrote:
>> Hi Julien
>
> Hi Rahul,
>
>>> On 23 Jun 2022, at 4:30 pm, Julien Grall <julien@xen.org> wrote:
>>>
>>> On 23/06/2022 16:10, Rahul Singh wrote:
>>>> Hi Julien,
>>>>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 22/06/2022 15:38, Rahul Singh wrote:
>>>>>> A guest can request Xen to close its event channels. Ignore a
>>>>>> request from the guest to close a static channel, as static event
>>>>>> channels should not be closed.
>>>>>
>>>>> Why do you want to prevent the guest from closing static ports? The
>>>>> problem I can see is...
>>>> As a static event channel should be available during the lifetime of
>>>> the guest, we want to prevent the guest from closing the static
>>>> ports.
>>> I don't think it is Xen's job to prevent a guest from closing a static
>>> port. If the guest decides to do it, then it will just break itself
>>> and not Xen.
>> It is okay for the guest to close a port; the port is not allocated by
>> the guest in the case of a static event channel.
> As I wrote before, the OS will need to know that the port is statically
> allocated when initializing the port (we don't want to call the
> hypercall to bind the event channel). By extension, the OS should be
> able to know that when closing it and skip the hypercall.
>
>> Xen has nothing to do to close the static event channel and just
>> returns 0.
>
> Xen would not need to be modified if the OS was doing the right thing
> (i.e. not calling close).
>
> So it is still unclear why papering over the issue in Xen is the best
> solution.

It is not that a static event channel cannot be closed; it is just that
during a close there is nothing for Xen to do: the event channel is
static and hence is never removed, so none of the operations needed for
a non-static one apply (maybe some day some will be, who knows).

Why require the OS to know whether an event channel is static, and
introduce that complexity in guest code, if we can avoid it?

Doing so would require a specific binding in the device tree (not to
mention the issue with ACPI), a new driver in the Linux kernel, etc.,
whereas right now we only need to introduce an extra IOCTL in Linux to
support this feature.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:03:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357162.585611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ckq-0000Fa-28; Tue, 28 Jun 2022 15:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357162.585611; Tue, 28 Jun 2022 15:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ckp-0000FT-VT; Tue, 28 Jun 2022 15:03:51 +0000
Received: by outflank-mailman (input) for mailman id 357162;
 Tue, 28 Jun 2022 15:03:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VwHr=XD=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o6Ckp-0000FN-2O
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:03:51 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8449c901-f6f3-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 17:03:50 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id eq6so17978352edb.6
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jun 2022 08:03:50 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 k14-20020aa7c04e000000b00431962fe5d4sm9785589edo.77.2022.06.28.08.03.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 28 Jun 2022 08:03:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8449c901-f6f3-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hSJtrhUfilQnx5h21KZUt74irrz86sKwXYnsxGTRXgk=;
        b=mGraIyHHNWSGCry3npUmLLDoL6T959Kx2UZvOMuUJYg5IGxABcjqt8jSvK2DbXKBG7
         cqOdKg4ZOUwqSdGo4pf7CbqBneTtrQbX5JM2XkM43ewlWemwve9tEZe53zBkERUFvqX1
         9nFk9ycqVC7i+h8Rm5H6bT8U7Yb6MMseuMwYNgtoGAARmenaQ0CpRUTAiPeepk1fAfii
         2OSbuua1tzfKGeFIb2oBOi+kwgiFxRblMYJJWR9bGLp092PsRcILAfx8v7It9SFxls/q
         IxPT+UN5TVu2VKUf1AayzIBLhCmTrl2iFYQnslrCGIOOn9Z6RIzd8DT/3vOg8taTH1vH
         v2Ag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hSJtrhUfilQnx5h21KZUt74irrz86sKwXYnsxGTRXgk=;
        b=mG662GL+MgORS8eneCwak75KuTh8L4azoNsPnokBJp5fBfTqcDWpacqNStgAm/mS0U
         6jHUhUTHNZXd+yNvMEgtsxjuhuOIcad8630xPH/KH/CqXKNG4ekrKYuwmMfNkc7Jyxsp
         mGh/UbiwmydK5BPDq7zkOJXj883EpiRF8urjYlcDKDTUIl7ZhNw5ysZDSuOKkrOqbpS8
         iA+KMB3wTnz0lePsEJ4NyhAHxD5k1qQJhEFESb/sho+8T8yhU/gJV4CH4blV0XrOiDmm
         gSAMzNV5SdSgaWUTHm4ysmSsWqBkO5XEJ9pHTluV7b23/LQoQslDj30QnWtQ6qfSK26e
         wrBQ==
X-Gm-Message-State: AJIora+afoLXDPHQza/k+w9NDglBM308++hzeX7JA1bAEagxPygd/cAW
	bd3cyyeN6hJsYm1jZHvhYw4ZgtvS6FNwyQ==
X-Google-Smtp-Source: AGRyM1vGiynz/8MYX1qcwwpGeO94p767eEaR2FbJ6+FRN7kSWmnvO6sDs8TuWpPbJI5nDevdTuSV0g==
X-Received: by 2002:a05:6402:31f6:b0:435:5a08:d5e0 with SMTP id dy22-20020a05640231f600b004355a08d5e0mr24026373edb.308.1656428629199;
        Tue, 28 Jun 2022 08:03:49 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 0/5] Fix MISRA C 2012 violations
Date: Tue, 28 Jun 2022 18:03:32 +0300
Message-Id: <20220628150337.8520-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Resolve MISRA C 2012 Rule 8.4 warnings.
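For context, the pattern each patch applies can be sketched in a few lines (hypothetical names, not the actual Xen code): an object or function referenced in only one translation unit is given internal linkage with `static`, satisfying Rule 8.7 and, as a side effect, Rule 8.4's requirement that externally linked objects have a visible declaration.

```c
#include <assert.h>

/*
 * Before the fix this object would have external linkage despite being
 * used only in this file -- a MISRA C 2012 Rule 8.7 violation, and a
 * Rule 8.4 warning because no header declares it.  Adding "static"
 * restricts it to this translation unit and resolves both.
 */
static int counter;  /* internal linkage: visible only in this TU */

static int bump(void)
{
    return ++counter;
}
```

The diffs below are exactly this one-word change applied to the variables and functions named in each commit message.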

Changes in v2:
- In commit messages of patches 1-4, replace the phrase "This patch aims to
  resolve indirectly a MISRA C 2012 Rule 8.4 violation warning." with
  "Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4
  violation warning."
- Add "Reviewed-by" tags.

Xenia Ragiadakou (5):
  xen/common: page_alloc: Fix MISRA C 2012 Rule 8.7 violation
  xen/common: vm_event: Fix MISRA C 2012 Rule 8.7 violation
  xen/drivers: iommu: Fix MISRA C 2012 Rule 8.7 violation
  xen/sched: credit: Fix MISRA C 2012 Rule 8.7 violation
  xen/arm64: traps: Fix MISRA C 2012 Rule 8.4 violations

 xen/arch/arm/arm64/traps.c             | 1 +
 xen/arch/arm/include/asm/arm64/traps.h | 2 ++
 xen/common/page_alloc.c                | 4 ++--
 xen/common/sched/credit.c              | 2 +-
 xen/common/vm_event.c                  | 2 +-
 xen/drivers/passthrough/iommu.c        | 2 +-
 6 files changed, 8 insertions(+), 5 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:03:58 2022
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/5] xen/common: page_alloc: Fix MISRA C 2012 Rule 8.7 violation
Date: Tue, 28 Jun 2022 18:03:33 +0300
Message-Id: <20220628150337.8520-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220628150337.8520-1-burzalodowa@gmail.com>
References: <20220628150337.8520-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables page_offlined_list and page_broken_list are referenced only
in page_alloc.c.
Change their linkage from external to internal by adding the storage-class
specifier static to their definitions.

Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v2:
- replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
  Rule 8.4 violation warning." with "Also, this patch aims to resolve
  indirectly a MISRA C 2012 Rule 8.4 violation warning."

 xen/common/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 000ae6b972..fe0e15429a 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -235,9 +235,9 @@ static unsigned int dma_bitsize;
 integer_param("dma_bits", dma_bitsize);
 
 /* Offlined page list, protected by heap_lock. */
-PAGE_LIST_HEAD(page_offlined_list);
+static PAGE_LIST_HEAD(page_offlined_list);
 /* Broken page list, protected by heap_lock. */
-PAGE_LIST_HEAD(page_broken_list);
+static PAGE_LIST_HEAD(page_broken_list);
 
 /*************************
  * BOOT-TIME ALLOCATOR
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:03:58 2022
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7 violation
Date: Tue, 28 Jun 2022 18:03:34 +0300
Message-Id: <20220628150337.8520-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220628150337.8520-1-burzalodowa@gmail.com>
References: <20220628150337.8520-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function vm_event_wake() is referenced only in vm_event.c.
Change the linkage of the function from external to internal by adding
the storage-class specifier static to the function definition.

Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v2:
- replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
  Rule 8.4 violation warning." with "Also, this patch aims to resolve
  indirectly a MISRA C 2012 Rule 8.4 violation warning."

 xen/common/vm_event.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 0b99a6ea72..ecf49c38a9 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -173,7 +173,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
  * call vm_event_wake() again, ensuring that any blocked vCPUs will get
  * unpaused once all the queued vCPUs have made it through.
  */
-void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
+static void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
 {
     if ( !list_empty(&ved->wq.list) )
         vm_event_wake_queued(d, ved);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:04:00 2022
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 3/5] xen/drivers: iommu: Fix MISRA C 2012 Rule 8.7 violation
Date: Tue, 28 Jun 2022 18:03:35 +0300
Message-Id: <20220628150337.8520-4-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220628150337.8520-1-burzalodowa@gmail.com>
References: <20220628150337.8520-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variable iommu_crash_disable is referenced only in one translation unit.
Change its linkage from external to internal by adding the storage-class
specifier static to its definition.

Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v2:
- replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
  Rule 8.4 violation warning." with "Also, this patch aims to resolve
  indirectly a MISRA C 2012 Rule 8.4 violation warning."

 xen/drivers/passthrough/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 75df3aa8dd..77f64e6174 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -29,7 +29,7 @@ bool_t __initdata iommu_enable = 1;
 bool_t __read_mostly iommu_enabled;
 bool_t __read_mostly force_iommu;
 bool_t __read_mostly iommu_verbose;
-bool_t __read_mostly iommu_crash_disable;
+static bool_t __read_mostly iommu_crash_disable;
 
 #define IOMMU_quarantine_none         0 /* aka false */
 #define IOMMU_quarantine_basic        1 /* aka true */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:04:04 2022
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 4/5] xen/sched: credit: Fix MISRA C 2012 Rule 8.7 violation
Date: Tue, 28 Jun 2022 18:03:36 +0300
Message-Id: <20220628150337.8520-5-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220628150337.8520-1-burzalodowa@gmail.com>
References: <20220628150337.8520-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The per-cpu variable last_tickle_cpu is referenced only in credit.c.
Change its linkage from external to internal by adding the storage-class
specifier static to its definition.

Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
warning.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v2:
- replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
  Rule 8.4 violation warning." with "Also, this patch aims to resolve
  indirectly a MISRA C 2012 Rule 8.4 violation warning."

 xen/common/sched/credit.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 4d3bd8cba6..47945c2834 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -348,7 +348,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
 static bool __read_mostly opt_tickle_one_idle = true;
 boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
-DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
+static DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 
 static inline void __runq_tickle(const struct csched_unit *new)
 {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:04:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357167.585667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cl4-0001is-Kk; Tue, 28 Jun 2022 15:04:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357167.585667; Tue, 28 Jun 2022 15:04:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cl4-0001ie-Dy; Tue, 28 Jun 2022 15:04:06 +0000
Received: by outflank-mailman (input) for mailman id 357167;
 Tue, 28 Jun 2022 15:04:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VwHr=XD=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o6Cl3-0001MY-8x
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:04:05 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8cce3726-f6f3-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 17:04:04 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id ej4so18004090edb.7
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jun 2022 08:04:04 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 k14-20020aa7c04e000000b00431962fe5d4sm9785589edo.77.2022.06.28.08.04.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 28 Jun 2022 08:04:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cce3726-f6f3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=UuYWZQeJKj9O8pOqNcM3ZHmqzZL57A1GnfhK3BOWViQ=;
        b=o2fhUTpFrFf8bN2Q/dR0BNHDWBLCS+GUjUsmVjNwYaSe/T3V2odWsrmzOXQ8O+MWRU
         LzJ6tUofAyJKvfIGGy+JJQQj6dpIUPVft9G8d4y61n0xhcMhGjOm0A6iCKclihdjUChf
         XJajD1692yuJMow1b8wQ8EdNeFf8LZT6wpf00PTkYbn4paD0NSR3z4r9XOKVE+hvQH0s
         VoVWohf+GPYCTlX1/+8ezTIBk/NopRT2Jr+BxbbLj4m51+KP9o00kas5B2SNH3rlVql7
         t62EwCWmQCJ8iSltqFRFY8tYiSw8aEVWogOwq/2U7elDKoAD7PNmI0dkyirBTJYX/reA
         mdkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=UuYWZQeJKj9O8pOqNcM3ZHmqzZL57A1GnfhK3BOWViQ=;
        b=1CdEdASIFH10cykdfqM2vfuV7THIrbGyhyQvPx7oKL4oFcMAETbl8GtmEHBaI/kAyJ
         bE24LtO74kfaKRFFZrP31jdqjCLxc5OqSK9XB12E7vLG+4j8SzXdrNzIPWhpTuoJPshU
         2JquMYSlRG1YyB1fX4loRDHX1YXaErIxLFa+smWzMBDFIdvYQWhrJ0+H6VZ/PCRaBSwQ
         wsLEsKpMzQMI4UkzlhZH/qJqddKiWIszA7uzNHgQ8Ky7q7cWeYBvt10o9bTnZaouePAV
         C3ua67zy+jT/noYPkcdYfU2NWqkjHAqwGjQSmDl9CYA1A99luA8vCE4evEuH3f9nFgWp
         /DEA==
X-Gm-Message-State: AJIora/y8LCM1IMUzWhB0mhraRfbZBwkmI+Zr9gfjPTSTo85OfAUtQXK
	9013h8X+xnKF1E5LTKb3qkefyd4xyBg=
X-Google-Smtp-Source: AGRyM1uBUIqLV8SKYrgSLRFmpLFMEyzHqGF+QMDoPC2VSVa/9V/6McvCD0l3L0wWzy8JRya7KmaXNQ==
X-Received: by 2002:a05:6402:414c:b0:435:1e2a:2c7f with SMTP id x12-20020a056402414c00b004351e2a2c7fmr23926753eda.132.1656428643870;
        Tue, 28 Jun 2022 08:04:03 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 5/5] xen/arm64: traps: Fix MISRA C 2012 Rule 8.4 violations
Date: Tue, 28 Jun 2022 18:03:37 +0300
Message-Id: <20220628150337.8520-6-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220628150337.8520-1-burzalodowa@gmail.com>
References: <20220628150337.8520-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a function prototype for do_bad_mode() in <asm/arm64/traps.h> and include
header <asm/traps.h> in traps.c, so that the declarations of the functions
do_bad_mode() and finalize_instr_emulation(), which have external linkage,
are visible before the function definitions.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v2:
- none

 xen/arch/arm/arm64/traps.c             | 1 +
 xen/arch/arm/include/asm/arm64/traps.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
index 3f8858acec..a995ad7c2c 100644
--- a/xen/arch/arm/arm64/traps.c
+++ b/xen/arch/arm/arm64/traps.c
@@ -22,6 +22,7 @@
 #include <asm/hsr.h>
 #include <asm/system.h>
 #include <asm/processor.h>
+#include <asm/traps.h>
 
 #include <public/xen.h>
 
diff --git a/xen/arch/arm/include/asm/arm64/traps.h b/xen/arch/arm/include/asm/arm64/traps.h
index 2379b578cb..a347cb13d6 100644
--- a/xen/arch/arm/include/asm/arm64/traps.h
+++ b/xen/arch/arm/include/asm/arm64/traps.h
@@ -6,6 +6,8 @@ void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len);
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr);
 
+void do_bad_mode(struct cpu_user_regs *regs, int reason);
+
 #endif /* __ASM_ARM64_TRAPS__ */
 /*
  * Local variables:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:09:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:09:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357198.585676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cpn-0003vp-Cl; Tue, 28 Jun 2022 15:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357198.585676; Tue, 28 Jun 2022 15:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cpn-0003vi-9x; Tue, 28 Jun 2022 15:08:59 +0000
Received: by outflank-mailman (input) for mailman id 357198;
 Tue, 28 Jun 2022 15:08:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VwHr=XD=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o6Cpm-0003vc-Ai
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:08:58 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b7542b4-f6f4-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 17:08:57 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id g26so26400667ejb.5
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jun 2022 08:08:57 -0700 (PDT)
Received: from uni.. (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.googlemail.com with ESMTPSA id
 q10-20020a170906940a00b006fe8bf56f53sm6668274ejx.43.2022.06.28.08.08.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 28 Jun 2022 08:08:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b7542b4-f6f4-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ACpe12rHHvpdrdhy+82lhTBW41nWcQXj5b1t3gYtOfA=;
        b=khYJQ/k0MsWHAgd+ZQevIeQNilPWk0kdo4dtM9JhXHyjdKPlaG4Pe43HZ2+HvFKrdY
         mmU9MWIYV+1+45zuKuT0nuLVXXgu97I9zIgcHo0NiL1THmPAMQrsUnJ8ww/BA6qhMwVw
         Dg6ndBJFcp0pqIc2XUhCq9Lz4ZB4YcSmthx3BcXCm2tBfENrO+yFWmq5WOEyHWm6Nmgo
         Dmqq4Qr2mjSCkAnvP2VklU9XQ4HKYr6lz/oLSvQOMGUW8ib9n6OvBRWGe3aZgdo+Vvmi
         me04ECpzP88d/yWNJS/HKOt3AoClILdyWhBXSqzmnJBsXiXWdezYrUHA2Kxijn5L3Mwh
         lbPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ACpe12rHHvpdrdhy+82lhTBW41nWcQXj5b1t3gYtOfA=;
        b=dH9GIiQAhipKJptmH0/oaTB/a9EGoC8Ny3ErsqaTcupYtNSAKnS1p7N/QrThWnRhUq
         049EA7xFu3Nr9iedMFfkSEVBmHXDQrvJVGDN0xuj9idE0vuqEUbx72qzzyPUT2lfLLSV
         lGhKvvjjF7ybRQywTf84PLaeZsdIYVo1hk/LN0tJJQ1rvRBriiwszectlmm+6dHMGZHa
         yjgHC9X9V1DVCb52xlKv0fucJ+bOKkQ9Ml8nq8t7e1BtPA84IbHcIDtUbhadcuDWCadh
         cGWaywvgmKdWBt065PXfcHcgFkucugMKszKTps/zTaS90xVWti2NGjUP7fpb9byV1s31
         enlA==
X-Gm-Message-State: AJIora+lW0xPvzjMdYoGReqym14rlXBAPGPZGR1Zbo/f5t+x5v+O/GK8
	CPW/4TsLmc71zqSByTxtxhOytquqTiE=
X-Google-Smtp-Source: AGRyM1uPxfWpPy+5aHbKyD8xmSQq4e2Mxji9oMECs2fAmTcKKzFXtl3t0lHG3/Vu0j7rKAVqs/N4tw==
X-Received: by 2002:a17:906:5053:b0:70d:a0cc:b3fd with SMTP id e19-20020a170906505300b0070da0ccb3fdmr18473630ejk.162.1656428936737;
        Tue, 28 Jun 2022 08:08:56 -0700 (PDT)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Rahul Singh <rahul.singh@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Date: Tue, 28 Jun 2022 18:08:51 +0300
Message-Id: <20220628150851.8627-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The expression 1 << 31 produces undefined behaviour because the integer
constant 1 has type (signed) int, and the result of shifting it left by 31
bits is not representable in that type.
Change the type of 1 to unsigned int by adding the U suffix.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---
Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
For GBPA_UPDATE I will submit a patch.

 xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 1e857f915a..f2562acc38 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
 #define CR2_E2H				(1 << 0)
 
 #define ARM_SMMU_GBPA			0x44
-#define GBPA_UPDATE			(1 << 31)
+#define GBPA_UPDATE			(1U << 31)
 #define GBPA_ABORT			(1 << 20)
 
 #define ARM_SMMU_IRQ_CTRL		0x50
@@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
 
 #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
 #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
-#define Q_OVERFLOW_FLAG			(1 << 31)
+#define Q_OVERFLOW_FLAG			(1U << 31)
 #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
 #define Q_ENT(q, p)			((q)->base +			\
 					 Q_IDX(&((q)->llq), p) *	\
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:18:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:18:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357212.585712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cyr-0005o1-H5; Tue, 28 Jun 2022 15:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357212.585712; Tue, 28 Jun 2022 15:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Cyr-0005nu-E3; Tue, 28 Jun 2022 15:18:21 +0000
Received: by outflank-mailman (input) for mailman id 357212;
 Tue, 28 Jun 2022 15:18:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6Cyq-0005no-H2
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:18:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Cyp-0001Yr-Ki; Tue, 28 Jun 2022 15:18:19 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.2.252]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Cyp-00006R-E5; Tue, 28 Jun 2022 15:18:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=crtC6Ga16gTJ62ZJDhD7WuSz1o1voCcNtctTikjsDyc=; b=XU9P1kw27uCG4RveYaH0W7fLpk
	ptkggeVvRn5FxmZ5BvFAR18RbOy2RSzpquEAzaP5gi8V3eSuAzE/4KKF4h4YxKy7MnaoumeuhgE5a
	uaL6sF+lo8RSIgkjsB+HRY9rzif3mSYr6/vGXz4IbIcohQmwkapujK1l6fsNR6XZTALc=;
Message-ID: <289ebc8b-96f2-8764-5b17-680734e473fe@xen.org>
Date: Tue, 28 Jun 2022 16:18:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
 <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
 <dbdf3bb2-edc6-e622-f17a-8819f6fcb43d@xen.org>
 <67EA3F72-5F6D-4150-A9BA-EAF92E6C9EA1@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <67EA3F72-5F6D-4150-A9BA-EAF92E6C9EA1@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 28/06/2022 15:52, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 28 Jun 2022, at 15:26, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 28/06/2022 14:53, Rahul Singh wrote:
>>> Hi Julien
>>
>> Hi Rahul,
>>
>>>> On 23 Jun 2022, at 4:30 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 23/06/2022 16:10, Rahul Singh wrote:
>>>>> Hi Julien,
>>>>>> On 22 Jun 2022, at 4:05 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 22/06/2022 15:38, Rahul Singh wrote:
>>>>>>> Guest can request the Xen to close the event channels. Ignore the
>>>>>>> request from guest to close the static channels as static event channels
>>>>>>> should not be closed.
>>>>>>
>>>>>> Why do you want to prevent the guest to close static ports? The problem I can see is...
>>>>> As a static event channel should be available during the lifetime of the guest we want to prevent
>>>>> the guest to close the static ports.
>>>> I don't think it is Xen job to prevent a guest to close a static port. If the guest decide to do it, then it will just break itself and not Xen.
>>> It is okay for the guest to close a port, port is not allocated by the guest in case of a static event channel.
>> As I wrote before, the OS will need to know that the port is statically allocated when initializing the port (we don't want to call the hypercall to bind the event channel). By extend, the OS should be able to know that when closing it and skip the hypercall.
>>
>>> Xen has nothing to do for close the static event channel and just return 0.
>>
>> Xen would not need to be modified if the OS was doing the right (i.e. no calling close).
>>
>> So it is still unclear why papering over the issue in Xen is the best solution.
> 
> It is not that a static event channel cannot be closed, it is just that during a close there is nothing to do for Xen as the event channel is static and hence is never removed so none of the operations to be done for a non static one are needed (maybe some day some will be, who knows).

I feel there is some disagreement about the meaning of "close" here. In 
the context of event channels, "close" means that the port is marked as 
ECS_FREE.

So I think it is wrong to say that there is nothing to do for "close", 
because after this operation the port will still be "open" (the port 
state will be ECS_INTERDOMAIN).

In fact, to me, a "static" port is the same as if the event channel was 
allocated from the toolstack (for instance this is the case for 
Xenstored). In such case, we are still allowing the guest to close the 
port and then re-opening. So I don't really see why we should diverge here.

> 
> Why requiring the OS to have the knowledge of the fact that an event channel is static or not and introduce some complexity on guest code if we can prevent it ?

I am confused. Your OS already needs to know that this is a static port 
(so it doesn't call the hypercall to "open" the port). So why is it a 
non-issue for "opening" but an issue for "closing"?

> 
> Doing so would need to have a specific binding in device tree (not to mention the issue on ACPI), 

You already need to create a Device-Tree binding to expose the static 
event-channel. So why is this a new problem?

Likewise for ACPI, you already have this issue with your current proposal.

> a new driver in linux kernel, etc where right now we just need to introduce an extra IOCTL in linux to support this feature.

I don't understand why we would need a new driver, etc. Given that you are 
introducing a new IOCTL, you could pass a flag to say "This is a static 
event channel, so don't close it".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:20:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:20:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357217.585722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6D1G-00078f-UW; Tue, 28 Jun 2022 15:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357217.585722; Tue, 28 Jun 2022 15:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6D1G-00078Y-Ro; Tue, 28 Jun 2022 15:20:50 +0000
Received: by outflank-mailman (input) for mailman id 357217;
 Tue, 28 Jun 2022 15:20:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nsoM=XD=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6D1F-00078Q-62
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:20:49 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60058.outbound.protection.outlook.com [40.107.6.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2c4c465-f6f5-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 17:20:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7819.eurprd04.prod.outlook.com (2603:10a6:10:1e9::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 15:20:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 15:20:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2c4c465-f6f5-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j4BTBFzaT0gz9yXJcyiR76DywuRs5NTwKYPVEjwlVaH/6+LW0znUvMEBbAAajWVLQvT75Uf3oLlc17Fr/or9EqZzDZ6IMyK11+3RmeXDg2DI8OMNggxrEj8dckLqid1ruTSGciZQi87CTRkzs1OEt9wMSoWATwFujZgVhXHopJfWleCLcV/hCSJGtE5yTTEXlOZTrPv3FLcekRj9o1OE0jzkEjYAgDiRXQ4OU6+JKtYj0+4J/1MCfeLvl30m6HqBxWxke7jW/4qosFaelPNp9sylJSXi8SUvmw6972XJ9sD+6XMfem/w5US7V5IPzMf8fKWG+KMbpD+9Ne8kAlsvcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=L2euJl6l9bPd9IDyYyw6HOemsHeHwjnRhK079qHDPPo=;
 b=lPLPG2EXnTwNgAwj0UOY36N8i8YLrtAlP5+DWyHlZNbGR9uRVU+jcZErjW5ZsGB6vthJIve1yOZG7TZvGCUBamGaPnkphmP6c2M+zRT9DS6/HD3w7vYQngrSFQTLe/vIjjWQP2THKA63HJnbUzFiXCnLKRvx0O6YIoZRf3/GYO9o0PIfsJMR3N5SNWEWLR2U7piumLJt44e2ICrSQ+BnUDHn/GkB96N7LzNjwCMNlqgY9FFlmP4FoZVkVP4YUKX0Zav1PhPNR40DZWJs2x+xhzYsiWCqcQDBeCNid+zU0ZxF2QaXO98KtY5Oixxy8mWEUJS/shp4QzHAYdyhRSibMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L2euJl6l9bPd9IDyYyw6HOemsHeHwjnRhK079qHDPPo=;
 b=Ao6Qjjux9v8a8hdEei0JM7DRCRNtZ8ubEytoXFLQKzjh771l8rs5FM3F0YhLqCxwhaoT/kj99Sget5GCDOZZhLMknCL2N/UQaxpiMa7n01seF32w+TABPj+xLNDOJXxuWJij/4UlJwSOqZOCFBi/Z2jglXXTYIz/DTXH8lBAaTcpTZvdiK/gb+8Zk+UEGb7Wy2HX6kgogqT5WQD3xBh2gl0iupGpfDaPHiKt/azFqafbeJwr1PjenYYvXg7WIlNNYy4/AlPt8XREIBLKjPI3pmbCaW7UqoPFYXvfxHKBNA5V1O9ZPL4UTLLneSQpK3Aar2zRJZ/F6cxUg7rRFtHgCw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ad7423d7-c8b8-feaf-93b9-5cb02b0a2526@suse.com>
Date: Tue, 28 Jun 2022 17:20:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 5/8] xen/evtchn: don't close the static event channel.
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Bertrand Marquis
 <Bertrand.Marquis@arm.com>, Rahul Singh <Rahul.Singh@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
References: <cover.1655903088.git.rahul.singh@arm.com>
 <91656930b5bfd49e40ff5a9d060d7643e6311f4f.1655903088.git.rahul.singh@arm.com>
 <b64a7980-e51b-417b-4929-94a020c81438@xen.org>
 <7403EAA7-67A4-4A8D-835E-6015463B9016@arm.com>
 <a5cd291d-45b1-baf4-4d0b-907140b38eab@xen.org>
 <DC011AED-F74B-4055-8DEE-6FFC6FC5215C@arm.com>
 <dbdf3bb2-edc6-e622-f17a-8819f6fcb43d@xen.org>
 <67EA3F72-5F6D-4150-A9BA-EAF92E6C9EA1@arm.com>
 <289ebc8b-96f2-8764-5b17-680734e473fe@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <289ebc8b-96f2-8764-5b17-680734e473fe@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS8P251CA0015.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:2f2::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 869ad558-4648-40e2-e8b1-08da5919c53d
X-MS-TrafficTypeDiagnostic: DBBPR04MB7819:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Qkyel6FXdJu7oal0TFOsIgK+gnFwXCOPpZ3gyrlor+o/NzT+jB0RU5ixHm4O6EL62Ku+RsBn+ZovtYI64MbKp34qZ0ua/dZKlsiwYXOitfXe5gZH2bJGaBwLimRtm6FRToxH1LQUZPHru6VFI4e9f8+Igo/dRISdj+QiVCYJUWjRMAbtSKCXuTekCQZ/GPUGnuhYRJKmcQfyS8kPPe39K6rzKCNvH7cKdSh7FyhpgCDXnV9elNGbhDLDZkKytKyalRxqkcFgHuSxGHVjeXdwPO7YDTkF0t0qReTkPoqwby6cnC2RbEDXa74B7MeylOagq/4c26zyolPgZ3ob9WhVmiiQ825icA/UdYSaB9OA7gd/nwg9HBqjwJ2lrBiOzM6hzs51y0karRKH6ukFkZVop3HZFbyW1xrlZJHs73S66VeJ01+FvHuMMgdJofQw1ecBfANXz645XVHx1o/cFl5qEb7HLsQWOBQRCpCORnM86bKRUjIJbJv7tsUEMITqyEbsSjbKqa3k6YY7PztMBzX7H0qC6WHTzLn52teFof+/oRWix3VsAQtTt1w750YAJ4/P4BvWrNgWvdBsWS+MtZR+fMmae3hUWMNueoKFN6LWfw2XI6zBONW7HDvyuAisJwZ2g0Soji7BdVF84CDNBwB2vk4gGap4eyi4SscGZF7YCrGO/V5E30UByCZRF91PP3e5Z57v39/8/HIki0CUoOaYqmWqjudXfBIghF+1Y5ZxOwLvHpf5M9wYEDEHndkDgHOYmfUhj8kMsucD/rQ1FCNb+ZkaJBTmnRrHpLTZs1KRKcyaMtacPl2m7RTFgcQsd6W9i26Ajs/UZJmADqTvsfAlyw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(376002)(346002)(366004)(396003)(39860400002)(136003)(2616005)(86362001)(26005)(31696002)(6512007)(8676002)(54906003)(4744005)(186003)(83380400001)(6506007)(110136005)(36756003)(2906002)(6486002)(66556008)(66946007)(41300700001)(38100700002)(4326008)(31686004)(5660300002)(66476007)(8936002)(478600001)(316002)(53546011)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?aHliYlRteUhlU29YQ1dnYzVHWmlLSkJTNkdqaDkwOUtVeVpVOCtIWEdLTnB0?=
 =?utf-8?B?RGZGTUtpdXVzUE14Wk5iTXRJRkpzMzVVcmlia2Rvc1o3VnBlR0JYMVRKMnZU?=
 =?utf-8?B?eUQwMTdUWkFFenhQSy9ZVm9jQkxzWXlQU2QzU3BiU3p2L2ttUGVrNnRUOW44?=
 =?utf-8?B?dXBnQjluQ2I3UjR1Mk8vdkhrci9adHg1UU5pN0xVaFdOekdZM0dDaVhIUVlo?=
 =?utf-8?B?enF2V1RiSDFGVjdFN0haVDFQSUhMeFdEc0RUa2wvR0EzeEpZSHZ0RzlsQWhr?=
 =?utf-8?B?M29kMWp4WXJPdkNkL3FEbnNCQWNXSUtJNlh6eFE1TGRkbUt1dDlOQU1vNVMy?=
 =?utf-8?B?dlNHbDh2QW5ZbkJoKzdIZzVUb1Q3QTJKN3crOXArMXVhUFh0UmVKcWJUSTFY?=
 =?utf-8?B?NUxKdHZaY3owUG9lYWUvdW1lYUwwOGNrN0E5SXpsSFEySEFnVU5nSXpOdmZM?=
 =?utf-8?B?cVhyYzNnbGJQdDdEZytINUtvdGRiNE93TUhmQ2FWWTdKNUp1VTlUUVJ3V0ZL?=
 =?utf-8?B?Z1pOZ2xKVlBrb2ZZcnVNRm1lVUVwYS9KUHp0c012NFpnZ0FYdUJkSGZ6Wlow?=
 =?utf-8?B?QTNuWXhqeUVJUGVkOGhFSHFBbDc5bnI0UGcwUE56T05LT3ZLdjlCTEwvcCtR?=
 =?utf-8?B?OGo1aEJnemQ1Qk5pTXcwNHF2OGdmSFlhNFRQb25RNlg0Zk0vdjJVZTBUZ0NF?=
 =?utf-8?B?U1JhaTdTOEZVcEFCT0N4aFFXbDVudU5hbFlhTUFEZXZNTDV4U0RrdmhQbW9k?=
 =?utf-8?B?TG00UzBxcVFodHJSN21RdlJ4REpVdGlUS0p3c3ViMXllR283dU9RVXliSkVU?=
 =?utf-8?B?K014bTJnRlRzNzM3SjlYdzAxYXNTUThFSzFtM3o3MFJiVHhNeXFkV1ZRM2o1?=
 =?utf-8?B?b203UHlOekFXa29qNFZwL0M2TDkwbVN2bjgyMVRCOGhHYnJQK1EzL3lkN3k5?=
 =?utf-8?B?eHBBSHBESFlQOWR1OER2ZjBlRUN0OFJDa21wQnV6T0pCVkZJYmV4VlVYWTNY?=
 =?utf-8?B?eWFsZTdIQWZOMHcxV05MTHVSczNMNDJ5OFI4R2RVVGN0LzhMdkJWM2poeFBz?=
 =?utf-8?B?MFpSd2VFK2JsV2RpVENyYzFjbzZPejlWelI3ZDB2aWIwNVd0SUlkMGVzZmc0?=
 =?utf-8?B?bGhzaC9vajY4RDRjOTdkNm9IcGJwZk5Ic0VPTWZEWW53elpkVEJabExITmdz?=
 =?utf-8?B?a2l6eStxRXdUSjJEV1lWc0pJUnJmRUQ5ZStoSDFIZndiN01DSFh4eVExUEl0?=
 =?utf-8?B?Z3g5eHN0ek5hZFVQVUZJRCthMW5EcjlIbmJ5K2txblgvcmdyblIvS3BiMkFl?=
 =?utf-8?B?RVRJMmFBWDVzbXUvTUpOM2cxc2Q1UlV2bGltNVVmSlExcERZS28ybklIUGNq?=
 =?utf-8?B?ZzRmeEdLQktaZXA1SlRFMDY1NDBzS0lMYVIvZmozbnE3cXJSMVZpVm85Uk1t?=
 =?utf-8?B?VzNUQmpMUk5OU1ZFQWhPdXg0VmxwQVZjSVZQdm1TYklwQW50cVlMYllvdXFs?=
 =?utf-8?B?eitsUDB4VElSc0dxVUtkNUVlVWZ5d3FxZ3ROeElNOGFnbHhxenNTa3NaUFhh?=
 =?utf-8?B?Mmg5a2hEd2IzTFF0dXN0WFNIb24zV05SbU5Rc08yVmYycWFIZTNGNjErdEl1?=
 =?utf-8?B?YXpGUjRaZVdOSk5RRnh3YmZIbmpuNU8zSXdvQXpVMGx2Ky9GSkdXS01xL0ZW?=
 =?utf-8?B?Qm9INStpRVluditnbkhNMUliT1U4bkxoTDRCTE44WkxyTzhBUVJ3b01YOHo2?=
 =?utf-8?B?NjdGUFVsb1VSRjlRY0NDRzRBTWswaHJ5R2hFaVdSVHpKeE16ZU5KQVlHRitE?=
 =?utf-8?B?eXRJWEJBUk5FMlFoZnl6Z2RIQmVVa1NQbmtFc2NzbUdvejJGRmtBYnpCVnMv?=
 =?utf-8?B?L3A0bkdZQ2puZWZGSGtKUTBsT2RJdVhpN1VSVERnTXdwYTR6TTF2TkZsd0N0?=
 =?utf-8?B?YXZTTHcrUlUraEtVQUVRaWtWaTZLWkJVUVBNRHdZUXFZcFFJN1IwazdRQ1h0?=
 =?utf-8?B?RWI1QTh1K0tsTU5UNjB3S0JwSXZNamZlM1BuVFZma2Q1QWtPUEJYZm54S1RB?=
 =?utf-8?B?YURtTVVZR0RlUFRxYldaV2ttRHphTmc0VDU1bjBpWWFuY2xyQ045NTB0WW4v?=
 =?utf-8?Q?Us0YUMxUuJR9UOK3cPcGDI9/S?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 869ad558-4648-40e2-e8b1-08da5919c53d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 15:20:44.5975
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jq8faolfTYTN2VlqcebA9p0ViDCOAmLIBhaMyt+AXFXup2oh5XVHjcGeFokCIH9BSMvUVdkmMJLnDzQO06Wq2A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7819

On 28.06.2022 17:18, Julien Grall wrote:
> In fact, to me, a "static" port is the same as if the event channel was
> allocated from the toolstack (for instance this is the case for
> Xenstored). In such a case, we still allow the guest to close the
> port and then re-open it. So I don't really see why we should diverge here.

Fwiw, I agree with Julien's view here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 15:24:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 15:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357224.585734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6D4W-0007mN-Ch; Tue, 28 Jun 2022 15:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357224.585734; Tue, 28 Jun 2022 15:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6D4W-0007mG-9o; Tue, 28 Jun 2022 15:24:12 +0000
Received: by outflank-mailman (input) for mailman id 357224;
 Tue, 28 Jun 2022 15:24:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=97eV=XD=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1o6D4U-0007lr-7Z
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 15:24:10 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10052.outbound.protection.outlook.com [40.107.1.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59ea71a8-f6f6-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 17:24:07 +0200 (CEST)
Received: from AS8PR07CA0014.eurprd07.prod.outlook.com (2603:10a6:20b:451::11)
 by AM9PR08MB5972.eurprd08.prod.outlook.com (2603:10a6:20b:280::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 15:23:59 +0000
Received: from VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:451:cafe::9a) by AS8PR07CA0014.outlook.office365.com
 (2603:10a6:20b:451::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.13 via Frontend
 Transport; Tue, 28 Jun 2022 15:23:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT011.mail.protection.outlook.com (10.152.18.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 15:23:58 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Tue, 28 Jun 2022 15:23:57 +0000
Received: from f07c404ea09e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6937AAAD-C2EA-49EF-B0DF-4CA9C084BC06.1; 
 Tue, 28 Jun 2022 15:23:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f07c404ea09e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 15:23:50 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com (2603:10a6:208:103::16)
 by DB7PR08MB3051.eurprd08.prod.outlook.com (2603:10a6:5:1e::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 15:23:49 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9]) by AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9%5]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 15:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59ea71a8-f6f6-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=PKL0BA4lwDA//0i8Bl91+MQjt/XrDLbkE9QIZxC25WZfEtmPsa8Sf6Kv18pFiwuvBaG5IWYAYGyGW0r39DOLbZ65uKgUtAFAjTNbu4hhufWZqhoueidSustyDAA+HSgUJM3w2JiK7gr41DcajjWdu1GdGSrItirIjH8OEglsHWRCvYOhFa2gHoo/w2ek99FBJ0lnVNeAHIt5havwUg5die1l9vCtE5XnL/AXcENQ+xrrmZPUeqLnaPXvzgZl3sgMV0wi4kAA6y8mrEdsI8ym81vbb28I03pld2qyzdgJfMIkaVDfg5sd2SXiEVB680uobzXYtaeDGtZzurzdBp4vMA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jkAEXHpWhd+TGBrVXj9Cje4WR6fV5sL+sqCK7ejyEIU=;
 b=KRWSOyVNM8PED4jpiyDzaWMhQ8g4TlMQoOXni7cMRioUsRjenr/4TKgDuOSStOs4ul2AJsczUDhMVTJNWl7iI+6RH+lpSRsyZmu1EmYlYYO8nrpO9D1A3tJoZ68YpD+ouLFFZp45frg3dphjU/w8iG0/qHEp+Jk5CKrDlzDaYpXC1BHXCHzNysxv0kMxZRqeh1PJeIiUDj5D7xifoUV5LSZocBWePTKgADyeKb25KoRrZ6vizUSDZUvrGo1MorCFPQr5hvIcudj15Ajng2xeCH4mSWu2om90hvsrHEfPI5/WTzYh4GFNPD1peFqSSn6GgoXVT1smcts5884xG8Ajqw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jkAEXHpWhd+TGBrVXj9Cje4WR6fV5sL+sqCK7ejyEIU=;
 b=AYs8xwuif0tobu/9/V5LijYMfIjYZ3ActRSAu1W3y2aiBJ7DJcBpPFRPUsE+XqXaw5QtILfj1aZiBtB0tDMwKO8dQ+IElyOyJafv32ctVdB3LM1GmA+25VyZIFCohlyFBGm3J3TJm/L3MThJJG0l+yAk10E6mZlqGVa9yWVS1Qc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f472122aade4a4d4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XEP54BOOA75lEMz8S2VZ1JRyCIgEPYyNGKkUL8N5AwYLnjRhO0F4rgrptk6n0O5tWxLSps7IkgjfrQzw8u8FMBWMlZCNrG43AIXRnJtPcXIGvcCSt2D0aLG8D7EXd63pqJk7gpGU65y196oOy4W94rMMC2qZRWJrg7awxRVY6XWAqOpDf2Z0QvDoR5oxnM/U6ojwVp3kVPzRJmm8Phh931oy8xCpvzRse8v+cq+gFE2EexxuArOtXCPDrhhg644LqZD95mIsn0PfjqPQJQk+HdwjHeiBUkUQFQP3sgw46hk9SC0OYQp6HW5Dyy5ZVLqAE3mzgND+sjmlp0QuFbIj6A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jkAEXHpWhd+TGBrVXj9Cje4WR6fV5sL+sqCK7ejyEIU=;
 b=YdFz1Er9iHQNq1NnPD1LZRPKAbnyjIpubslsTWVjvRWKt/NI8FKR6RZnf7QiW/J9nN+ZA1Wagr1roSquaGoYWyqi8ZZY97ODKP6qk2H6WmKYMn9E0+wPDNwVB7qCuiGge9B4eocuuAIhncZ4h7PuoSEKS+tAERr8JvGQY/tplLFSAWEdelVdPKqYVY7qb9aRRK2PuWFxTDE3+H9KXV3323mKbsKD6RrNHBl9+hliQ4vOVjAWBm7v9SW3maL0PhCHeUTBpuPxsnhcpM1JxwwWI52CzAzybYP5nVweaEtuZ2PwW51yEn0+51KeQHX+0Be5mZVzPhnkBbrU4g3A7cIaXQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jkAEXHpWhd+TGBrVXj9Cje4WR6fV5sL+sqCK7ejyEIU=;
 b=AYs8xwuif0tobu/9/V5LijYMfIjYZ3ActRSAu1W3y2aiBJ7DJcBpPFRPUsE+XqXaw5QtILfj1aZiBtB0tDMwKO8dQ+IElyOyJafv32ctVdB3LM1GmA+25VyZIFCohlyFBGm3J3TJm/L3MThJJG0l+yAk10E6mZlqGVa9yWVS1Qc=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index:
 AQHYh7nJHM8+T3fZ/0Gr0RGCgDxGeq1eaYGAgAALcICAAARSgIAAFXiAgABAfoCABidzAA==
Date: Tue, 28 Jun 2022 15:23:49 +0000
Message-ID: <C7643376-EBD0-46C3-B940-D3F6198BD124@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
 <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
 <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
 <81c33c8c-e345-2fe3-32c6-2f80799eefd0@xen.org>
In-Reply-To: <81c33c8c-e345-2fe3-32c6-2f80799eefd0@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 504c5049-85d0-4d05-44e6-08da591a38da
x-ms-traffictypediagnostic:
	DB7PR08MB3051:EE_|VE1EUR03FT011:EE_|AM9PR08MB5972:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <DCE19F31B8451849B9E1CAF03E430121@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3051
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6e2d5fc3-1974-4829-e8a5-08da591a3349
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 15:23:58.3103
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 504c5049-85d0-4d05-44e6-08da591a38da
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5972

> On 24 Jun 2022, at 18:25, Julien Grall <julien@xen.org> wrote:
>
> Hi Luca,
>
> On 24/06/2022 14:34, Luca Fancellu wrote:
>>> On 24 Jun 2022, at 13:17, Julien Grall <julien@xen.org> wrote:
>> I would keep the section about compiling cppcheck since many recent distros
>> don't provide cppcheck >= 2.7 yet (and 2.8 is broken),
>> if you agree with it.
>
> It depends on the content of the section. If the content duplicates the
> cppcheck README then no. If this is just to point to the cppcheck README,
> then I am OK with that.
>

Hi Julien,

Sorry for the late reply; these would be my changes, would you agree on them?

Cppcheck for Xen static and MISRA analysis
==========================================

Xen can be analysed for both static analysis problems and MISRA violations
using cppcheck; the open source tool allows the creation of a report with all
the findings. Xen has introduced support in the Makefile, so it's very easy to
use, and this document explains how.

The minimum version required for cppcheck is 2.7. Note that at the time of
writing (June 2022), version 2.8 is known to be broken [1].

Install cppcheck on the system
==============================

Cppcheck can be retrieved from the github repository or by downloading the
tarball; the version tested so far is 2.7:

 - https://github.com/danmar/cppcheck/tree/2.7
 - https://github.com/danmar/cppcheck/archive/2.7.tar.gz

To compile and install it, the complete command line can be found in readme.md,
section "GNU make"; please add the "install" target to that line and use every
argument as it is in the documentation of the tool, so that every Xen developer
following this page can reproduce the same findings.

This will compile and install cppcheck in /usr/bin, and all the cppcheck config
files and addons will be installed in the /usr/share/cppcheck folder; please
modify that path in FILESDIR and CFGDIR if it's not convenient for your system.

If you don't want to overwrite a cppcheck binary already installed on your
system, you can omit the "install" target, FILESDIR and CFGDIR; cppcheck will
just be compiled and the binaries will be available in the same folder.
If you choose to do that, later in this page it's explained how to use a local
installation of cppcheck for the Xen analysis.

Dependencies are listed in the readme.md of the project repository.

[ leaving "Use cppcheck to analyse Xen" as it is ]
[…]

[1] https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25


Cheers,
Luca
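[Editorial note: the compile-and-install step the draft above describes — cppcheck's "GNU make" line from its readme.md with the "install" target added — might look like the sketch below. The option values are illustrative assumptions, not taken from the draft; check them against your cppcheck 2.7 checkout before use.]

```shell
# Assumed install locations; adjust FILESDIR/CFGDIR for your system as
# the draft suggests.
FILESDIR=/usr/share/cppcheck
CFGDIR=/usr/share/cppcheck
# Hypothetical example of the readme.md "GNU make" line with "install"
# prepended as a target:
CMD="make install MATCHCOMPILER=yes HAVE_RULES=yes FILESDIR=$FILESDIR CFGDIR=$CFGDIR"
# Print rather than run, so the sketch is safe to paste and inspect first:
echo "$CMD"
```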


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 16:05:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 16:05:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357285.585790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Dib-00067G-Qf; Tue, 28 Jun 2022 16:05:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357285.585790; Tue, 28 Jun 2022 16:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Dib-000679-Nm; Tue, 28 Jun 2022 16:05:37 +0000
Received: by outflank-mailman (input) for mailman id 357285;
 Tue, 28 Jun 2022 16:05:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5PJw=XD=suse.com=carnold@srs-se1.protection.inumbo.net>)
 id 1o6Dia-000673-P0
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 16:05:37 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10067.outbound.protection.outlook.com [40.107.1.67])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24560c15-f6fc-11ec-b725-ed86ccbb4733;
 Tue, 28 Jun 2022 18:05:34 +0200 (CEST)
Received: from DB9PR04MB8300.eurprd04.prod.outlook.com (2603:10a6:10:243::22)
 by AS8PR04MB8387.eurprd04.prod.outlook.com (2603:10a6:20b:3f7::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun
 2022 16:05:33 +0000
Received: from DB9PR04MB8300.eurprd04.prod.outlook.com
 ([fe80::95f6:44a9:7525:1a05]) by DB9PR04MB8300.eurprd04.prod.outlook.com
 ([fe80::95f6:44a9:7525:1a05%7]) with mapi id 15.20.5373.018; Tue, 28 Jun 2022
 16:05:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24560c15-f6fc-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TpHs7xj8DinGhJ0+b1NHllpEFoX3myqMxy5zad/m05H6iQQzgBPWITNm1KgI2KXpIQN6vXyK1cprJUbFAik7ZS5e+6T4mMD/IGeJDfeQEuQIDwNvnrqwh0wbdI7u2sNk1B6EYSSN1Surg3DDth/VhPHLQZJk+xHm8pSZ5DxP4/vuExopyT1u4+KLqWvwWPadkBBYYB6vMkQa9Hj8rFmOS5jEwZFQcC7EKSNdhWAY5sajWi05sXZ7HzzBPMnR8xR11x5HwD3kohmFPu3BhGFVkkzc5iIzVDAOGCc/dWqp2ISbwQQI7MtH276CQLU6z6kN4MZaK98uCLs7rvl8fNus3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=n3l8Y3qZsdbZY8qpM11t9AOxhibLw8tbAwdpafDyLAU=;
 b=k1nqMITeDbafmEMhd2eRTh9sI6VC1B3AbeiVf5O+rc830VTb7MxDvwUfbwg2v1CcTFclcwoVzyG+d9KNw0QhWJqRHb2xkr1kHta9KVKEvmdzWViCe8AsYeV5jgn6Dpe5ao+Ye47YUzoLJpOfgIJ6T5eP3jAcftut8deEv1Xiy+rGu8pY2appXgip+lpEZsxr9YsAOORQxCA8ymJmhTVHPtFB15yOZ+DzAFEMBWKYP6RyL3dpcbxV5VaBzkvxBlzxsbNL3i4DwcNMkcejyme/eDW7Mt3TK5BG5bZDBoF/scqhFelMbaMe5Y7IFwkuGWCxqNtc3vt29sjY8wXX4wmIoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=n3l8Y3qZsdbZY8qpM11t9AOxhibLw8tbAwdpafDyLAU=;
 b=w3n+GNK02D1+MRr4bOWI2S0aY9y2gyerACt53xrs+4LWdUEXxM0t3AZLHhExvBPZFe1J/unJOxmxJLqo048hyT+clTGH3kuxIYaVdhR9RwjmRQ/ixN6Z4IQgRAnwHhN+PTZEkOXkLCwTDXp/eFG2n2IW7ZC61cUMfuRPSl2g7//6FpWOVBp5HSNX65m6NJ83xqbeKz3mbQhNM5Ns5jZxYD3rXyDdNrozcKMIr9xvkeT/zW3jisILuImLu5vEW+4IFfZGdBDbU3X++Vi6QPNpPs0j5A+1+MATjJP4W9ol6122N3esry/msk7Oun/uQyC0htyCy2gthCSIUJbNKd84JA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
To: xen-devel@lists.xenproject.org
From: Charles Arnold <carnold@suse.com>
Subject: [PATCH] Fix compilation error with gcc13
Message-ID: <46d17735-e96f-2eee-5d24-fc3d15526c6e@suse.com>
Date: Tue, 28 Jun 2022 10:09:07 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-ClientProxiedBy: AS9PR06CA0737.eurprd06.prod.outlook.com
 (2603:10a6:20b:487::13) To DB9PR04MB8300.eurprd04.prod.outlook.com
 (2603:10a6:10:243::22)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4b7d5919-cd1e-400a-a2b6-08da592007a7
X-MS-TrafficTypeDiagnostic: AS8PR04MB8387:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fX/Pj9labirptvHOUYrIqeiXLaDwW2dXzoaFhw739Cf8qViqvpEBLNyroSA4Uk+0QvOghverRFQmcFwKu4jkRrjtup7OC9AOeKpa6crPu/jH9xT8ZC6HXlQemtNumxPqSt8jvhajYtDBDhwFqrIRdZI113TPYlg2ZhjqLOw+WQCUu3/d5R0o4/LYaoTc+Livhr7c3HW3qm7Q6bWPFpSpJ7e+ccqTRXEwOZzYLBc8MN0sa9w0irHGu/TB3fIUVd+gmE3Jn2bgCQduVbtxPzS4QdV/HfrsLF8M0OkTajc2exKVs/ctMT+/ilY46l18C1E4EYD5qijFqYl7QdrqCNSQ8PODJZQPgNfF0pG939oaiGoBkukKQsUiW1gFAb5yU6rykX97FNEaIBHAKhJfCHWbvPOvDGg05nnth2W/ad1697fiE1s1ieuvJNRv+xsK+nPQpUw4Z4V0vbM9EStUTlsFzk4+N82HS1XHipsz4o20MMPvj49MBeNAtv7g/V6X0raXOp6BEKn3QsE9hPU3L+7aEG2zE3SZPfpzYnJHi7zS12FPaS1155ZP3+VK/OoRARW2ztIc31miqngYLq+vA1x/wT/f9vR7tkQTQvX7pio/wiQrOFzOMtPU/0VhpI+wEQ50VDdmTRKfszWZk1EtY3kq0qRVrXCFatcLWRtkJGZ7Arm4mpwARmq0In81ysepTkZXoN1ezSNOZok7pGzqdpfpSyCjfBBewnBpeQCBTdA0/tgAHHyvOjNgu2iI5RDFZ0mv1xZ6ZX3h2A5b40FkXM61RmFyfvnzA8KL5RdjJDDY+lnmeRlDlfQUBVO53oV+DOvESsJzfzRoNMQXxwiYWivPEA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR04MB8300.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(396003)(136003)(39860400002)(346002)(376002)(8936002)(38100700002)(2906002)(36756003)(186003)(6916009)(316002)(83380400001)(31686004)(2616005)(5660300002)(86362001)(66476007)(66556008)(8676002)(6486002)(31696002)(66946007)(6506007)(26005)(6512007)(6666004)(53546011)(41300700001)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NDBlL3J3OXNoNEd5TlZkaTh2TkVaNmNOeTRpVnAwdHB6QXJidFZ3Z3lvd3VC?=
 =?utf-8?B?SEZXdTV6RTA0dHpTRkQ3L21nWEkwWVh2S0xVRjhhc0lhVnRqYnFxd1F2RXBj?=
 =?utf-8?B?aFlJMzVpZllnRjJOSzZibjI4NkEwK3c4NlVFRjlwZHROL2k4YmtPZkRoTW0x?=
 =?utf-8?B?c1B0YSt3M2I5NUNPaVltNVQrb1FZalpnL1d1NHBFenZTYVRFNUJKWXB5dyth?=
 =?utf-8?B?S2tsdVBvOU1MTTZkV3NLdGJQVGRxTkxpVnU4K3FFZngyK1MyKzRwaGVWcGlK?=
 =?utf-8?B?RjNmanBVaEsrTE10N0pQUG5Nd3cwWXBPc0g3azFJVDZWTkNodlRCbHdEY1VQ?=
 =?utf-8?B?eisyM0NmdEJ1bW00dmQ2VERkM2kyU2xBUGdLQUtDTkhza3ZLNGhyMTkwUThs?=
 =?utf-8?B?RG8ycG5YOU5DSk1UK3hhTU5PSVFvYlJZa1FhMUJ2MVZqeGRDM1lXVC9VMHNM?=
 =?utf-8?B?OURmcy9JUnh4M0ZydldJL2pPS0xUYVdJN3E3cnNSbWYxWkw1YThvY0lsMUdz?=
 =?utf-8?B?K2M2Y1daM1V5Vnc4alNzM0kvYnhSb1M5bnh2ODJlK3JveVBJSWdOemtVY2xZ?=
 =?utf-8?B?OFh3V1ZNSEdkRG5HK2cwSnJRYzRLTWhUdzY1eUNwTEVDNUNuWkdUckJvQW54?=
 =?utf-8?B?bktHaVVXTndrMml2RGNLVEc0NHdDVXJBWnBUMTU5N2RWb2pMMnMxUlNYM3pa?=
 =?utf-8?B?QVhhM1NLSXp6TERBcCt1cVA3UWJrTmpnWEhTVzVNT1IvY3JtUG5zdURyR1lx?=
 =?utf-8?B?eXA4Z0VmSDZEaEJJRlpuRk1zbkxlTzdueWN0MEN6a3k1b042Nng1RmcrMHh3?=
 =?utf-8?B?NjZJOHVOS2ZqcnU3RFViSFVPMUZzdjU4MTZYay9yMXhaRzNLazdsTFZ3Y2JM?=
 =?utf-8?B?NDBlYkhtVjNISDBtb2V0SFZvblBpdklsbHhRajFOblRCRnp3Ym5nZ1l1Sktp?=
 =?utf-8?B?ZGpGMUwrampRSEhRNWthbDJNandiSURpUkxsaVpVV2o5eHY2bWM3SUI5ZWds?=
 =?utf-8?B?VHUzbUd3U0tLTVY3N0w5SWg4V2hNamxocFJkVHUzWTh0MEdnb2JyVXJhSmRE?=
 =?utf-8?B?azRaZzc4NGhLdWYxcnZGYTl3bzdleTNvandLNXFiTHQzODZuQi90R2xhSnA1?=
 =?utf-8?B?WTFJMVU5ZEx1L2FPejJTb2h5cDFVakxRZnppUGpiYmdjVEpSYzFId29RQytr?=
 =?utf-8?B?Y3gvSC9OT09LNlJGL3lmOGRkekJnUkJDcldpMlk5bWFKaFdjQVRZdTNnNFFF?=
 =?utf-8?B?WDI3Vkw3ejB0aERyYUZyUHFQdFFxUkxQcGtNMkw4VUx0SWJFRktaT1hSUUV3?=
 =?utf-8?B?ZXNydG8yZ1Z2aHVuUjdnRTRyUzM3NGRsQW9ZeWxNb25JR3BseXRQcDRmZlNZ?=
 =?utf-8?B?ZWVjY3g0a1l3WE5XTVpUOW9QWmZ1ck5KWUYxcVJqRlVqaUZvSVBTNWJlQmFD?=
 =?utf-8?B?Y3hQN0RxUUp4TFozMEZNckU0NEYySXlKeUxvWCtVNWxDaC9ESmNZSHV0K0hh?=
 =?utf-8?B?aHVJTWRJcS9qZlBiUWNjdm5HUFRIbGIwNDNYOHluMUVNZ3Jod3Zrc1EvUmR6?=
 =?utf-8?B?Zy95Mm5UTlo2ejRyeDFiVURaTDY1RWtNdGZrZVZCQ3NuVG1mRWN2T0V2Wk1j?=
 =?utf-8?B?SG9LR0VNVjZqZzZNUVQwMGpnVE1zdkR1a0pyRU9iaXJZOURDTVlyOXkyeXFK?=
 =?utf-8?B?Q3hTNmFmRkErNzZ6NDMrVjV5M2c5Q2JGYWFvOUw4WlNSTEhoellBdGU4aGJN?=
 =?utf-8?B?TjFYUVFXRW1US1FvTzlMY2UvNDd4c0RMVGJPRVJQelRxeDEvM01oQ0ZXbC91?=
 =?utf-8?B?eDdIeFpubFhaT05wSUxCUWZEY0J6SWUrakh6c1p1NXFsS1Z3ajZxTFE3Q0I4?=
 =?utf-8?B?YzhjUUV2UUlMOFEwbjg3VXNFYTdhUVBWMEFUWnp2dnphU2xjWDVzZEY1WEll?=
 =?utf-8?B?emVWbk9EZDNkUDdxcW5tMnMySkQ3VlljTk5ySlBXUlQ3UElZZk5UK1dEMCtD?=
 =?utf-8?B?OGZjZ0tvdWZ2bzJhZzAvNDRJSWxnRElsWjM0V05qY2xwY2Z4bi94aTRqU3ZK?=
 =?utf-8?B?R3BTOXZodjE1RUdUa2diWnNqM2FscUYwVEx0VytuWTlMQnZKRGk2YVRvYVlJ?=
 =?utf-8?Q?QHHKE78s5tNi8HkVXmk4acrz2?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b7d5919-cd1e-400a-a2b6-08da592007a7
X-MS-Exchange-CrossTenant-AuthSource: DB9PR04MB8300.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 16:05:33.0785
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OUr34xABBGmyb8rXaSr9WHNWFE+fOI8gjGKRPeSppMKo5cKlwUQY390pevG5T3uKTnR6S9UYeBbwDoOD/Bb9CA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8387

 From 359f490021e69220313ca8bd2981bad4fcfea0db Mon Sep 17 00:00:00 2001
From: Charles Arnold <carnold@suse.com>
Date: Tue, 28 Jun 2022 09:55:28 -0600
Subject: Fix compilation error with gcc13.

xc_psr.c:161:5: error: conflicting types for 'xc_psr_cmt_get_data'
due to enum/integer mismatch;

Signed-off-by: Charles Arnold <carnold@suse.com>
---
 tools/include/xenctrl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 5464a68eb2..0c8b4c3aa7 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2520,7 +2520,7 @@ int xc_psr_cmt_get_l3_event_mask(xc_interface *xch, uint32_t *event_mask);
 int xc_psr_cmt_get_l3_cache_size(xc_interface *xch, uint32_t cpu,
                                  uint32_t *l3_cache_size);
 int xc_psr_cmt_get_data(xc_interface *xch, uint32_t rmid, uint32_t cpu,
-                        uint32_t psr_cmt_type, uint64_t *monitor_data,
+                        xc_psr_cmt_type type, uint64_t *monitor_data,
                         uint64_t *tsc);
 int xc_psr_cmt_enabled(xc_interface *xch);

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 16:09:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 16:09:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357310.585801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6DmE-0006sC-AJ; Tue, 28 Jun 2022 16:09:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357310.585801; Tue, 28 Jun 2022 16:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6DmE-0006s5-7d; Tue, 28 Jun 2022 16:09:22 +0000
Received: by outflank-mailman (input) for mailman id 357310;
 Tue, 28 Jun 2022 16:09:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+uE0=XD=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1o6DmD-0006rz-9C
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 16:09:21 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140070.outbound.protection.outlook.com [40.107.14.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa582b99-f6fc-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 18:09:19 +0200 (CEST)
Received: from AM6PR10CA0092.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:8c::33)
 by AS8PR08MB6438.eurprd08.prod.outlook.com (2603:10a6:20b:33e::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Tue, 28 Jun
 2022 16:09:17 +0000
Received: from AM5EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8c:cafe::cb) by AM6PR10CA0092.outlook.office365.com
 (2603:10a6:209:8c::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17 via Frontend
 Transport; Tue, 28 Jun 2022 16:09:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT030.mail.protection.outlook.com (10.152.16.117) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 16:09:16 +0000
Received: ("Tessian outbound ef501234d194:v121");
 Tue, 28 Jun 2022 16:09:16 +0000
Received: from b329bf3226a4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A39BF014-261B-46D2-9DD1-3CC5C920DF37.1; 
 Tue, 28 Jun 2022 16:09:10 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b329bf3226a4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jun 2022 16:09:10 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by GV1PR08MB7876.eurprd08.prod.outlook.com (2603:10a6:150:5f::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Tue, 28 Jun
 2022 16:09:08 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::5cc5:d9b5:e3b0:c8d7%9]) with mapi id 15.20.5373.016; Tue, 28 Jun 2022
 16:09:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa582b99-f6fc-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Yx+eP25QMUs0mcRZYycidaL6jVWKbOCFcBtaQy2TUEE50eRvNaUYPk11O/wiovJxZ2N95deBiT/0BuDJYuW+I3hkkYyd3e6AAfjMoawFf47LJcoB6zibhBF85imI5C8Ss9itAO4U758AI5XxzoxDk7//GC6aZGZDCaSCUVo0ay52FvpGlAMi8C2n7heoAHR0qFTHgYm02IZlLqhLDIGlK8HsCgnAkxf2omvuM6Y9szorVFBtnevy4dPNVIfRun2+XN+uG2TIL4MtCVExkdDz+0ZBWoFWhF5lwkTYQW3B+ZwCV/DIDrSrXDknVbaVKGZ+u7OmzkX++Kf/zZZbifl4Cw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SN2ZAQZXILDaBZsbkmRhI4rnGOcT6dF72uPMXaUaW54=;
 b=dvWd9PqqcyzDTBg1FC/b7tCqeasx9cxeAexJwTfbxHL9JjnAoVafuXjWpV7ydY3F7Ns0RZrvbVmK17E+o+GKWsyxbY5fkGeB0ylMk7Vp51DDucHbJTi6oLCwfxmRN+ISV5+G03eLsh416X3ZXr9DbypgqYyFCr3G3b22i0he0T21V3CYs2NtRmZdO5b3ddyDG2rrL/3BT4MofD3Uwjkh/3SfLFpYZ7SqvKbJIIFYAWSbaOrucSUMiXLV6YP8UXts6chQ8bbPeKFGckHEB1R7vIgD0y3qaAg0+SbwPOXVOR12nRJhQknKh8fLZgqm3tgDoiOF+YzoquO4dWQ/fSd5TA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SN2ZAQZXILDaBZsbkmRhI4rnGOcT6dF72uPMXaUaW54=;
 b=dFAfipd+/agk5BqTs1hV9TekZJeOsedi/gDzF9ij+S7iuE7Qk8FjaJlDLyhhLA62YhcdEsgDc6wAgmGk+F778YlPXGmGTT+PjKJNFzlgUOCv6TXQ9rVRrXh1Yxgv6baWXjcU/66Y0bpmoA2st1sgpJDA+Oiim2PnUKdTErQ/77s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8e948e9ea95dc040
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gGUo87+KZfrjXlPJozYrVlR/hjep1Xy8d+lGanH+GItYZ5kEbnxpJ7R7AYUF/Z7gj4akZ1sCNMm479GB55Kyt+TawVgbaYA6przyUTC45m5bhQ5k1oIO9DcAJehnAmiH6Vn/wqW5jWxgrlwY3JylBRpSAOQE2B956TNWbrD9YYU5b7S59IqMOlg1SRzl2A9qDtswqhUOmQttIUFozfs2h+Nhw3kSxiBcgP7JteRMUgnBxhy4GRzuRqSnOGIWosVqOVgDB1kBhaDPB/GLiuJzaNzMQraT8aiGnDYcI9CBY597d28a6xClxuQB+eiuFqZICJQaLgK3rVx9IQeEjp7FlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SN2ZAQZXILDaBZsbkmRhI4rnGOcT6dF72uPMXaUaW54=;
 b=i8vfpMAWQWk/kTj2mn+5RpvBxS+go1DtZYBb2cjCDX68uL6sQcFqLLxm8hWRJqZn6DlJ96sr4OoqUUFx5MFj5nsE36tq12NsRoyQIFfyyK9srNAognzlBij33mHoaZ1VQHQWFI3J972Xw0hWbYkzG/woBjtCWM3dOOjNX6Fb/c2Sw09fGmpxVGQYRqux2kCALqQgcjP2Qa2rFC+p3/8PEtxXSoeBBxdx9QP2z66K2Rw2ayHn82tua/XaugzMGFtjgo6Np/cWNFAM5z+C+zQfn+nuwOIUGQKO+ENjk/zmdW1aWXz1s+WqnL7X8pSjxO3U6fmXMS6aYy5juFUmkCaL4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SN2ZAQZXILDaBZsbkmRhI4rnGOcT6dF72uPMXaUaW54=;
 b=dFAfipd+/agk5BqTs1hV9TekZJeOsedi/gDzF9ij+S7iuE7Qk8FjaJlDLyhhLA62YhcdEsgDc6wAgmGk+F778YlPXGmGTT+PjKJNFzlgUOCv6TXQ9rVRrXh1Yxgv6baWXjcU/66Y0bpmoA2st1sgpJDA+Oiim2PnUKdTErQ/77s=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Topic: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Index: AQHYiwEFE+FsL2M3X0iEtb0hwEG2QK1k/MeA
Date: Tue, 28 Jun 2022 16:09:07 +0000
Message-ID: <CC6DCBA3-AA80-4D51-AFAF-C15855BBF7B0@arm.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
In-Reply-To: <20220628150851.8627-1-burzalodowa@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: ff78ce39-22da-4ed2-7a60-08da59208cff
x-ms-traffictypediagnostic:
	GV1PR08MB7876:EE_|AM5EUR03FT030:EE_|AS8PR08MB6438:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 XY2/t0hk6RB9+5ceBpeLPvZ2QM2sqm9hzXXmtpXP0cgydo95MdpR+/eh4UpPFnydRt+s7d3+82Su3ufh+tdw/n7Y1npet4dEf42PzGRPDBpwYGtPHgjYDTtda4qgdYX1mKTUaxJ8sf1suddqsVAI8XY74rPTiormvlfSMQkshHmyFaIYtbmtdbvh4OZgphBI2m94rzKD8U41sgAfiHNE6SMcmxikvN88a5W2MGXxk2lklncv3sxCg2HFHia0ops0xj7xUQuiGZsOHQ2Zz7VK2pQaf9MWv47QHDFMhw7jycBK2TkVPY2bGVyTSS7pr2se2rykCdVnX6f+sdHTYTpOvHh5yvVjMq0H//JH+KqSXDmxH48OURLAjsZQJ3LYnb+xwqMcvWSiqSynKl1yKlx9+R6vVchJCxm1+YMaPhyizyDMz05u7ZaqPZRBzKmilyjCEBZQzzyTCz2r6I70sXMBNJMjLp7hKpb5IG72p22u6PMKzzM2LFQpDNwm59Gr3s65hQwZyibF/WhwchMO68C63VM1ERWH8H5R1avODF5M8rwk0yu0E3Hax6IR6L2FGKwmpYdWZbDxsnx7jsSMbdZBxwdee+i30wh8KWRoEd7pbY62jqdGQ2GXTYB8DzX7QCJeJRIVpXtMxITPu+YRqseWk5zhn0mTcqromTqknZ6OGKLkGz24ULRbt7cFjCxRYldt7tOmWu+MwJguZj1Pvu9ielWunpYKjn2lmMS8MKCCxVw6nGAwFpysydRlCGABz2T95Q/P5D/W2xU7jgqV6RUxN00/aHhXbiuZXh1EzXgpzQD4+KVvpGE0r9Awd2eoOJXY4v8qZfOaLOG/lg/tp1P6Ow==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(346002)(39860400002)(376002)(366004)(136003)(396003)(6486002)(6506007)(6512007)(76116006)(41300700001)(71200400001)(26005)(66476007)(66446008)(2616005)(186003)(38100700002)(86362001)(83380400001)(478600001)(33656002)(122000001)(53546011)(36756003)(316002)(54906003)(5660300002)(6916009)(2906002)(66556008)(66946007)(91956017)(8936002)(64756008)(8676002)(38070700005)(4326008)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <ABBE22999FF38B45AB4FC0D036CD8891@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7876
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a9aff0d7-bc77-494e-09ed-08da592087bb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5uhD0yf/QEhdkto+Wf2U6GRELKXxPMoQffCjwnCprvyb3fVkelIVm2mCGIOIu1QZU6katR4WX9EQJlBxgZY7Oq5igA3jnxkerPZyUSoT7ryaRxJv+pfN6m0lyxCHpRinvX3PqDOAFAsvZXdvCKlE5a6jvBdBJ7ksha/C4A+f4qhYp0bfTG0W6FnU4GZVNUGXESh8SFs2m19OBjiwsVBgRri7pkX/QQJSEgU2Pke95QjAd7kdW4Gnbp1VC1bNH0r+0tjlWe1bt+VpFESCBjL9qENhFakPE53P707JXzIeuMfkqfEPvZgTW9zlYRc2iNc/RSqcm7+MC6n3rLQC6COhueu4JA3nTLHxuLNEvyxeILZe7vFFyqKToP0Q8HoEnxOpuR1reZqV+3xB30gxD52MzgCLvhPRibv6tzAPBEc5Ww5Oopl4kh1xTULd4pLAj0s+a54ML7Fq3rlSIDreg7N98oOvVnj9MFs5wZchUbCh4GaV8SzHulNguT+TBbjgFT+SocXRFL7Z1z53YpHNTrx3Tdhd3h+7jADpe3arbapCJsIzsh3DVMKHWK8AGzKpLlL8hhJghcHL7H67bN39y/5evSF8E4yNEZy4ZaomTWkAieJlc+/Osxu1Vp3t4/oHGQR39cg2CxxKzj5eeqjzkNh4AFTriNtdhzGfgvaiDTNP8afqx+51kKHj0lYokfvb2n0mi3A8K8V/0eyxJbhKWIzLzzHNz7rfZhWCtHprA6dWykOI5/pi1HKEaHGPbB7B6+98cp93p6esv6V+X6TDgvOXgNxiKS+NTqNe7xND07dYYxwsD7ELtbHf1+Wm0npTBVLI
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(396003)(346002)(39860400002)(376002)(36840700001)(40470700004)(46966006)(356005)(82740400003)(40460700003)(83380400001)(47076005)(81166007)(54906003)(5660300002)(4326008)(36756003)(186003)(8936002)(107886003)(82310400005)(70586007)(36860700001)(40480700001)(336012)(6862004)(6512007)(2616005)(6486002)(41300700001)(53546011)(6506007)(8676002)(316002)(86362001)(26005)(2906002)(70206006)(478600001)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 16:09:16.5121
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff78ce39-22da-4ed2-7a60-08da59208cff
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6438

Hi Xenia,

> On 28 Jun 2022, at 4:08 pm, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
> 
> The expression 1 << 31 produces undefined behaviour because the type of integer
> constant 1 is (signed) int and the result of shifting 1 by 31 bits is not
> representable in the (signed) int type.
> Change the type of 1 to unsigned int by adding the U suffix.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> ---
> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
> For GBPA_UPDATE I will submit a patch.
> 
> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 1e857f915a..f2562acc38 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
> #define CR2_E2H				(1 << 0)
> 
> #define ARM_SMMU_GBPA			0x44
> -#define GBPA_UPDATE			(1 << 31)
> +#define GBPA_UPDATE			(1U << 31)
> #define GBPA_ABORT			(1 << 20)
> 
> #define ARM_SMMU_IRQ_CTRL		0x50
> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
> 
> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
> -#define Q_OVERFLOW_FLAG			(1 << 31)
> +#define Q_OVERFLOW_FLAG			(1U << 31)
> #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
> #define Q_ENT(q, p)			((q)->base +			\
> 					 Q_IDX(&((q)->llq), p) *	\
> -- 
> 2.34.1
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 16:56:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 16:56:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357324.585812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6EW4-0003v1-TH; Tue, 28 Jun 2022 16:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357324.585812; Tue, 28 Jun 2022 16:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6EW4-0003uu-Pn; Tue, 28 Jun 2022 16:56:44 +0000
Received: by outflank-mailman (input) for mailman id 357324;
 Tue, 28 Jun 2022 16:56:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6EW3-0003uk-MB; Tue, 28 Jun 2022 16:56:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6EW3-0003tk-J3; Tue, 28 Jun 2022 16:56:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6EW3-00016e-4D; Tue, 28 Jun 2022 16:56:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6EW3-0006DM-3k; Tue, 28 Jun 2022 16:56:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=seHI+VxQRqDPPOhVlsQqOb8LKlZNdqfY780lJ82F80k=; b=6rV8YrpmHPoE7Q8LB6oUwlhw62
	8Ms339pfu2c2WJJ6lpORNiBT91+1gEUJAIbmj78WykL3S4wpIMesZyvlKJsH7mOquwAmfMK1t2Pxj
	/fUUwM0woYiMUGEXdVzd8YuD0jAgEE72npI24/2nmCPs5KNZKsae+Zy3n4Fc5t9KEVwg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171380-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171380: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-raw:xen-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2a8835cb45371a1f05c9c5899741d66685290f28
X-Osstest-Versions-That:
    qemuu=29f6db75667f44f3f01ba5037dacaf9ebd9328da
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 16:56:43 +0000

flight 171380 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171380/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-raw   7 xen-install              fail REGR. vs. 171376
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 171376
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 171376

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail blocked in 171376
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171376
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171376
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171376
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171376
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171376
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171376
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                2a8835cb45371a1f05c9c5899741d66685290f28
baseline version:
 qemuu                29f6db75667f44f3f01ba5037dacaf9ebd9328da

Last test of basis   171376  2022-06-27 23:09:50 Z    0 days
Testing same since   171380  2022-06-28 08:41:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  David Hildenbrand <david@redhat.com>
  Jagannathan Raman <jag.raman@oracle.com>
  Kevin Wolf <kwolf@redhat.com>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Michael S. Tsirkin <mst@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1112 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 17:23:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 17:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357335.585826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6EwC-0007Wn-48; Tue, 28 Jun 2022 17:23:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357335.585826; Tue, 28 Jun 2022 17:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6EwC-0007Wg-1M; Tue, 28 Jun 2022 17:23:44 +0000
Received: by outflank-mailman (input) for mailman id 357335;
 Tue, 28 Jun 2022 17:23:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8dqK=XD=gmail.com=mykyta.poturai@srs-se1.protection.inumbo.net>)
 id 1o6EwA-0007Wa-Pz
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 17:23:42 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0dee8506-f707-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 19:23:41 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 t17-20020a1c7711000000b003a0434b0af7so5471467wmi.0
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jun 2022 10:23:41 -0700 (PDT)
Received: from localhost.localdomain ([193.151.14.160])
 by smtp.gmail.com with ESMTPSA id
 23-20020a05600c22d700b003a018e43df2sm141882wmg.34.2022.06.28.10.23.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 28 Jun 2022 10:23:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dee8506-f707-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4tneAeCsW+HKHp2Ra436gYhCNjMs8eKFMJflioWagLg=;
        b=UVJh50jeXi3ynm7vy//IU0gNPod8XYwZACWGV4upusrrtXtJy2qO591IMxoGWkFFbn
         ZPzK1FvgsIDLarA+WO1KvIUzM+mWRsLEuamDrH4f/dw9rpraFkVOaUJu/On3SqtDdAeD
         KZaqX9QmwxuKBbJoQ4oTYaUP6TsY/ixwBWZpoPmqV49Zlq6JvjU9YkK8O9VmNY4sqUDx
         jlft9v8BECI5vA6DUNQ74xWm5Wij+TZecOqSSKqLN4SpG+Y1vr7EQx17RhH51shlLxBS
         RfG3kuB0Ai2bvfpVsSDVxukurgIGnniuZsLQ2NBlJEDleMDszRGGqB7iqg+3F34HPnvC
         8zwA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4tneAeCsW+HKHp2Ra436gYhCNjMs8eKFMJflioWagLg=;
        b=IMrd8ovn9t9Xkgge6/+OV86cUAHm8RjGLCkt0wLi27cSe1vgj7M0BVOuDTMe+8PX7Q
         f41qVKfQ29uC1ps9DgGjzsDcl2DkHpPGWX8/SvgRO4VJVoZ6EoSNPzOTz/0ePxSh0gJo
         kIuvtTCMmdiNv5ZQUkl9X+54eZ3S2ae9xMP5vwYxeyVx6RjwJLcHP2prNRD5XjT9jTg1
         +J3jvCXFkSudB7FxOgQsHkJVC5SKgVZcKbLYZhRY8HTJfFXcbtvroaLTmRJDyUlLzVee
         Tglon6nXBAc4u1RlFMPpX1swsxzfa/2WyZsFPcw2SikejvPY2W2xYBHoYlrfi4XLnnmY
         fMyw==
X-Gm-Message-State: AJIora82mH7RMikMDvV+mHjhb5jYzA7mgsYzYSPl6jUsF0tOMOZIQIwl
	fDj801gMeEmptS9Dw6ZCmpI=
X-Google-Smtp-Source: AGRyM1tmVix0KCegYBu28Sh8t+SjyRyzffMIoJRxb68cLY45x3AhXLViwu3qT6R8ytfmvjQudjptCQ==
X-Received: by 2002:a7b:c242:0:b0:3a0:3ba5:81fd with SMTP id b2-20020a7bc242000000b003a03ba581fdmr752171wmj.47.1656437020892;
        Tue, 28 Jun 2022 10:23:40 -0700 (PDT)
From: Mykyta Poturai <mykyta.poturai@gmail.com>
X-Google-Original-From: Mykyta Poturai <mykyta_poturai@epam.com>
To: rahul.singh@arm.com
Cc: Bertrand.Marquis@arm.com,
	Volodymyr_Babchuk@epam.com,
	julien@xen.org,
	mykyta.poturai@gmail.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device
Date: Tue, 28 Jun 2022 20:23:38 +0300
Message-Id: <20220628172338.1637121-1-mykyta_poturai@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <0A58139F-CA6F-4E18-B44A-2066AEF0C8F6@arm.com>
References: <0A58139F-CA6F-4E18-B44A-2066AEF0C8F6@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

> Hi Mykyta,
> 
>> On 21 Jun 2022, at 10:38 am, Mykyta Poturai <mykyta.poturai@gmail.com> wrote:
>> 
>>> Thanks for testing the patch.
>>>> But it did not fix the "Unexpected global fault" that occasionally happens when destroying
>>>> a domain with an actively working GPU. Although I am not sure if this issue
>>>> is relevant here.
>>> 
>>> Can you please share more details and logs, if possible, so that I can check whether this issue is relevant here?
>> 
>> So in my setup I have a board with an IMX8 chip and a 2-core Vivante GPU. The GPU is split between domains:
>> one core goes to Dom0 and one to DomU.
>> 
>> Steps to trigger this issue:
>> 1. Start DomU
>> 2. Start wayland and glmark2-es2-wayland inside DomU
>> 3. Destroy DomU
>> 
>> Sometimes it destroys fine, but roughly 1 in 8 times I get logs like this:
>> 
>> root@dom0:~# xl destroy DomU
>> [12725.412940] xenbr0: port 1(vif8.0) entered disabled state
>> [12725.671033] xenbr0: port 1(vif8.0) entered disabled state
>> [12725.689923] device vif8.0 left promiscuous mode
>> [12725.696736] xenbr0: port 1(vif8.0) entered disabled state
>> [12725.696989] audit: type=1700 audit(1616594240.068:39): dev=vif8.0 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295
>> (XEN) smmu: /iommu@51400000: Unexpected global fault, this could be serious
>> (XEN) smmu: /iommu@51400000:    GFSR 0x00000001, GFSYNR0 0x00000004, GFSYNR1 0x00001055, GFSYNR2 0x00000000
>> 
>> My guess is that this happens because the GPU continues to access memory after the context has already been invalidated,
>> and therefore triggers the "Invalid context fault".
> 
> Yes, you are right: in this case the GPU is trying to do a DMA operation after Xen has destroyed the guest and configured
> the S2CR type value to fault. The solution to this issue is the patch that I shared earlier.
> 
> You can try this patch and confirm. This patch will solve both issues.
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 5cacb2dd99..ff1b73d3d8 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1680,6 +1680,10 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	if (!cfg)
>  		return -ENODEV;
>  
> +	ret = arm_smmu_master_alloc_smes(dev);
> +	if (ret)
> +		return ret;
> +
>  	return arm_smmu_domain_add_master(smmu_domain, cfg);
>  }
> 
> @@ -2075,7 +2079,7 @@ static int arm_smmu_add_device(struct device *dev)
>  	iommu_group_add_device(group, dev);
>  	iommu_group_put(group);
>  
> -	return arm_smmu_master_alloc_smes(dev);
> +	return 0;
>  }
> 
> 
> Regards,
> Rahul

Hi Rahul,

With this patch I get the same results; here is the error message:

(XEN) smmu: /iommu@51400000: Unexpected global fault, this could be serious
(XEN) smmu: /iommu@51400000:    GFSR 0x00000001, GFSYNR0 0x00000004, GFSYNR1 0x00001055, GFSYNR2 0x00000000

Regards,
Mykyta


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 20:27:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 20:27:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357362.585843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6HnN-0000w2-Cq; Tue, 28 Jun 2022 20:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357362.585843; Tue, 28 Jun 2022 20:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6HnN-0000vv-9v; Tue, 28 Jun 2022 20:26:49 +0000
Received: by outflank-mailman (input) for mailman id 357362;
 Tue, 28 Jun 2022 20:26:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6HnL-0000vl-Ur; Tue, 28 Jun 2022 20:26:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6HnL-000820-TP; Tue, 28 Jun 2022 20:26:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6HnL-0002Ng-HK; Tue, 28 Jun 2022 20:26:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6HnL-0003A1-Gs; Tue, 28 Jun 2022 20:26:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TssW7k8xzvCEpBWcbOmtDyPWH+CWo+rP6mIbRzyz4i8=; b=FsZTqbOhTpIsc8U6bGnR7/PMC4
	HL0FqN5pSCepn79ay/bmpRIJlAglJpQDg0TcGXlplIhQR7NcM/YF5UIsTkL0XTIbw3MANPYZo01Mp
	BmC6fsqBKSQAsjXVxqc+U8KeZa5qlZkbqQK0afvSZxIilk7daFOPSzvPmJGhORFG39i4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171383-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171383: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c99264c6746541ddbfd7afec533e6ad1c8c41a5
X-Osstest-Versions-That:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 20:26:47 +0000

flight 171383 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171383/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c99264c6746541ddbfd7afec533e6ad1c8c41a5
baseline version:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526

Last test of basis   171354  2022-06-25 15:00:28 Z    3 days
Testing same since   171383  2022-06-28 16:01:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Roger Pau Monné <roger.pau@cirtrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0544c4ee4b..8c99264c67  8c99264c6746541ddbfd7afec533e6ad1c8c41a5 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 20:59:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 20:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357370.585853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6IIX-0004aq-Ol; Tue, 28 Jun 2022 20:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357370.585853; Tue, 28 Jun 2022 20:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6IIX-0004aj-Lw; Tue, 28 Jun 2022 20:59:01 +0000
Received: by outflank-mailman (input) for mailman id 357370;
 Tue, 28 Jun 2022 20:59:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vcau=XD=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1o6IIW-0004ad-NH
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 20:59:00 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 217a2ed0-f725-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 22:58:59 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id c65so19301475edf.4
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jun 2022 13:58:59 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-077-013-111-114.77.13.pool.telefonica.de.
 [77.13.111.114]) by smtp.gmail.com with ESMTPSA id
 d20-20020aa7ce14000000b00435d4179bbdsm10331687edv.4.2022.06.28.13.58.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 28 Jun 2022 13:58:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 217a2ed0-f725-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=date:from:to:cc:subject:in-reply-to:references:message-id
         :mime-version:content-transfer-encoding;
        bh=VYyYVaguM9AiFz2TacfiRyojZpDmxjlgMkPeylJcdgQ=;
        b=mF8IcLcvwuMm6Be0+eyVflbaFtOrEUkx2oDWudB0rvhcP3nnoJpFFyJzqlgu1ouxTU
         PEt0871jMzuVjrOrpAC5V1HejZYdYGefXtzUK8HVxefBgJ/HQksV1wAnrdGRx7tnroOo
         0mzGEUhOYLweECH17El+RhxCDSlYQX3PxbrZvsK1Zg+3KLARmzs0yG+FpLjrINLjZVWR
         1YwxKYRoOmUs5Sy5MvGrziqRJi01v8gji08yCwY222RRt41gP27pA+ii6EjXFABzllXL
         0coiCUqWPcr92Td8O5ovc8NYErIo/adThn2kECuTfAwb0xqOVwktjcOrmpq13Kumanzp
         uG6Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:in-reply-to:references
         :message-id:mime-version:content-transfer-encoding;
        bh=VYyYVaguM9AiFz2TacfiRyojZpDmxjlgMkPeylJcdgQ=;
        b=BKn1FS4Gs53NNb1Qe0N+iDg4ck23Zwv+P0vzAVdcKQPEOS6Lcet2Rbn93jcdhhN9Nc
         DkrENmZ6A8oGXVWym1cvgjoemHekGIbE233W3pfG+o6u3jxRSVYwbdSpwAkCKcmweIm+
         jMlp+iruNu3eP94c1hyZqZBtSlbxt6dlorH9IkSi+v8a2TLdqZGyy/8n7sUULz0nmsXz
         /a3gtnV+uqjXNjl+m5lpd0H+ENRdrSQaVcnq9ihLtMJtHM9wxzO0ehDaHcmtSEuMu3oy
         hUztQFE2WwK0Lxq0zcc0AwlMKQJy1VREzuWN+zQEnH1b3DvipmB/QHqT9Wi+jJSsVebJ
         gCeg==
X-Gm-Message-State: AJIora8c+mZuGxArZNL6it+PHhjVX0EBLYRs8RyIwhXKNm7yOkuWzDyQ
	OtLEiZtJQ8iUoa6hjSWlJ6M=
X-Google-Smtp-Source: AGRyM1s3dozt9bvArLV6A7D/jktCc2unZFpP4SoY0M8gCtHnEeirqTD4xhcTN6fkDmV7R6YoFH0EOQ==
X-Received: by 2002:aa7:d38e:0:b0:435:6785:66d1 with SMTP id x14-20020aa7d38e000000b00435678566d1mr25310422edq.393.1656449938552;
        Tue, 28 Jun 2022 13:58:58 -0700 (PDT)
Date: Tue, 28 Jun 2022 20:58:52 +0000
From: B <shentey@gmail.com>
To: qemu-devel@nongnu.org, Laurent Vivier <laurent@vivier.eu>
CC: Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH 0/2] Decouple Xen-HVM from PIIX
In-Reply-To: <20220626094656.15673-1-shentey@gmail.com>
References: <20220626094656.15673-1-shentey@gmail.com>
Message-ID: <D8EF825B-45A2-4DE5-A787-8FE7BE88D2E6@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 26 June 2022 09:46:54 UTC, Bernhard Beschow <shentey@gmail.com> wrote:
>hw/i386/xen/xen-hvm.c contains logic which is PIIX-specific. This makes xen-hvm.c depend on PIIX which can be avoided if PIIX logic was isolated in PIIX itself.
>
>
>
>Bernhard Beschow (2):
>
>  hw/i386/xen/xen-hvm: Allow for stubbing xen_set_pci_link_route()
>
>  hw/i386/xen/xen-hvm: Inline xen_piix_pci_write_config_client() and
>
>    remove it
>
>
>
> hw/i386/xen/xen-hvm.c       | 17 ++---------------
>
> hw/isa/piix3.c              | 15 ++++++++++++++-
>
> include/hw/xen/xen.h        |  2 +-
>
> include/hw/xen/xen_common.h |  6 ------
>
> stubs/xen-hw-stub.c         |  3 ++-
>
> 5 files changed, 19 insertions(+), 24 deletions(-)
>
>-- 
>2.36.1
>
>
>

Hi Laurent,

would you like to queue this as well? Both patches have been reviewed at least once, piix twice. Or would you rather keep the review period open for longer?

Best regards,
Bernhard


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 21:10:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 21:10:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357376.585865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ITC-0006OJ-Sz; Tue, 28 Jun 2022 21:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357376.585865; Tue, 28 Jun 2022 21:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ITC-0006Np-Op; Tue, 28 Jun 2022 21:10:02 +0000
Received: by outflank-mailman (input) for mailman id 357376;
 Tue, 28 Jun 2022 21:10:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F/Gg=XD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6ITA-0006B8-Vu
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 21:10:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9ba430d-f726-11ec-bd2d-47488cf2e6aa;
 Tue, 28 Jun 2022 23:09:59 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7EC7B61865;
 Tue, 28 Jun 2022 21:09:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 59CB1C341C8;
 Tue, 28 Jun 2022 21:09:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9ba430d-f726-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656450595;
	bh=1QJk4e5j0/pz9uIpQ4JWK8nVogVrZMYPM6m/PAoDjys=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nsBw1UXuvcxUmPH3tNOnNFdZmPyjltPcid9fynO3ewZauLE/h0Zh18vREFpG4Nqzu
	 OURDTsKPHFaUrB6xAS/uDBdZlT6nup9UQFWqRhV7bVH6dvrG2vyOgconcovvPAiI8T
	 YjNjQtMDgVUJPw8gTKybzJR8uJnlc6pUBGobQBf2CENvKROG01H7xvF7nKGdEiIvNp
	 05spaUErlHtzYRv6eXBbkgHYR3MmWa6k64+oOwWq4PkEy/1+g6OHMldslx6Cd72eOE
	 kxE6BeyIZl2aTxC9kXzoZny7+GOf5ELCBrK194czS1pUKo3lMpZGRQ7A/j3ZPYBo4U
	 FdTV4tqNYeaEQ==
Date: Tue, 28 Jun 2022 14:09:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: B <shentey@gmail.com>
cc: qemu-devel@nongnu.org, Laurent Vivier <laurent@vivier.eu>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org, 
    Eduardo Habkost <eduardo@habkost.net>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH 0/2] Decouple Xen-HVM from PIIX
In-Reply-To: <D8EF825B-45A2-4DE5-A787-8FE7BE88D2E6@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281408490.247593@ubuntu-linux-20-04-desktop>
References: <20220626094656.15673-1-shentey@gmail.com> <D8EF825B-45A2-4DE5-A787-8FE7BE88D2E6@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 28 Jun 2022, B wrote:
> On 26 June 2022 09:46:54 UTC, Bernhard Beschow <shentey@gmail.com> wrote:
> >hw/i386/xen/xen-hvm.c contains logic which is PIIX-specific. This makes xen-hvm.c depend on PIIX which can be avoided if PIIX logic was isolated in PIIX itself.
> >
> >
> >
> >Bernhard Beschow (2):
> >
> >  hw/i386/xen/xen-hvm: Allow for stubbing xen_set_pci_link_route()
> >
> >  hw/i386/xen/xen-hvm: Inline xen_piix_pci_write_config_client() and
> >
> >    remove it
> >
> >
> >
> > hw/i386/xen/xen-hvm.c       | 17 ++---------------
> >
> > hw/isa/piix3.c              | 15 ++++++++++++++-
> >
> > include/hw/xen/xen.h        |  2 +-
> >
> > include/hw/xen/xen_common.h |  6 ------
> >
> > stubs/xen-hw-stub.c         |  3 ++-
> >
> > 5 files changed, 19 insertions(+), 24 deletions(-)
> >
> >
> >
> >-- >
> >2.36.1
> >
> >
> >
> 
> Hi Laurent,
> 
> would you like to queue this as well? Both patches have been reviewed at least once, piix twice. Or would you rather keep the review period open for longer?
 
Paul reviewed them both -- I don't think we need further reviews.
Laurent could just take them.


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 21:39:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 21:39:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357383.585876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6IvX-0001Kt-6u; Tue, 28 Jun 2022 21:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357383.585876; Tue, 28 Jun 2022 21:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6IvX-0001Km-3J; Tue, 28 Jun 2022 21:39:19 +0000
Received: by outflank-mailman (input) for mailman id 357383;
 Tue, 28 Jun 2022 21:39:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6IvV-0001Kg-7k
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 21:39:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6IvV-00019E-1a; Tue, 28 Jun 2022 21:39:17 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6IvU-0007Ml-SP; Tue, 28 Jun 2022 21:39:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=bBhwfUS3citZT5dS9ckk80b8c7bARwGY50DeZHPPYfI=; b=nCGKdnT6/KQVvzTMY2UoiZRGFm
	eH2U45q7iWmIT1pcx9rO2gUwRm7/YWrvr3ekbfOeuaolmo65AEltirlPAjf/JdPwrWnyjYPQbtI6M
	WkBRuPHe5P4nQYOlxoRRjwG7F9PW5VHk2xaH+LsLdr52fAytbHTtyXABzYOZb5pLIEbQ=;
Message-ID: <b85abae8-3d20-e997-55ea-1cf2fd312eb3@xen.org>
Date: Tue, 28 Jun 2022 22:39:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] public/io: xs_wire: Document that EINVAL should
 always be first in xsd_errors
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-2-julien@xen.org>
 <d0330408-2301-6145-f46b-c3da302a1edb@suse.com>
 <7af3e9ec-59fe-32ce-2a9d-b8dab57d0e9e@xen.org>
 <f7c0d5c1-01da-4dca-42ac-ce17c6109371@suse.com>
 <10c2bc1f-f035-648f-3b9d-7c29007d3527@xen.org>
 <c6ed6696-74f1-824f-5f64-f016284e3348@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <c6ed6696-74f1-824f-5f64-f016284e3348@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 27/06/2022 16:13, Juergen Gross wrote:
> On 27.06.22 17:03, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 27/06/2022 15:50, Juergen Gross wrote:
>>> On 27.06.22 16:48, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 27/06/2022 15:31, Juergen Gross wrote:
>>>>> On 27.06.22 14:36, Julien Grall wrote:
>>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>>
>>>>>> Some tools (e.g. xenstored) always expect EINVAL to be first in 
>>>>>> xsd_errors.
>>>>>>
>>>>>> Document it, so one doesn't add a new entry ahead of it by mistake.
>>>>>>
>>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>>
>>>>>> ----
>>>>>>
>>>>>> I have tried to add a BUILD_BUG_ON() but GCC complained that the value
>>>>>> was not a constant. I couldn't figure out a way to make GCC happy.
>>>>>>
>>>>>> Changes in v2:
>>>>>>      - New patch
>>>>>> ---
>>>>>>   xen/include/public/io/xs_wire.h | 1 +
>>>>>>   1 file changed, 1 insertion(+)
>>>>>>
>>>>>> diff --git a/xen/include/public/io/xs_wire.h 
>>>>>> b/xen/include/public/io/xs_wire.h
>>>>>> index c1ec7c73e3b1..dd4c9c9b972d 100644
>>>>>> --- a/xen/include/public/io/xs_wire.h
>>>>>> +++ b/xen/include/public/io/xs_wire.h
>>>>>> @@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
>>>>>>   __attribute__((unused))
>>>>>>   #endif
>>>>>>       = {
>>>>>> +    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first entry. */
>>>>>>       XSD_ERROR(EINVAL),
>>>>>>       XSD_ERROR(EACCES),
>>>>>>       XSD_ERROR(EEXIST),
>>>>>
>>>>> What about another approach, like:
>>>>
>>>> In place of what? I still think we need the comment because this 
>>>> assumption is not part of the ABI (AFAICT xs_wire.h is meant to be 
>>>> stable).
>>>>
>>>> At which point, I see limited reason to fix xenstored_core.c.
>>>>
>>>> But I would really have preferred to use a BUILD_BUG_ON() (or similar)
>>>> so we can catch any misuse at build time. Maybe I should write a small
>>>> program that is executed at compile time?
>>>
>>> My suggestion removes the need for EINVAL to be the first entry.
>>
>> xsd_errors[] is part of the stable ABI. If Xenstored is already 
>> "misusing" it, then I wouldn't be surprised if other software relies on 
>> this as well.
> 
> Xenstored is the only instance which needs a translation from value to
> string, while all other users should need only the opposite direction.
> The only other candidate would be oxenstored, but that seems not to use
> xsd_errors[].

That's assuming these are the only two implementations of Xenstored 
in existence ;).

> 
> And in fact libxenstore will just return a plain EINVAL in case it
> can't find a translation, while hvmloader will return EIO in that case.
>
> With your reasoning and the hvmloader use case you could argue that
> the EIO entry needs to stay at the same position, too.

I have looked at the hvmloader code. It doesn't seem to expect EIO to be 
at a specific position.

However, I do agree that it is probably best to keep each error at the 
same position.

> 
>> Therefore, I don't really see how fixing Xenstored would allow us to 
>> remove this restriction.
>>
>> The only reason this was spotted is that Jan was reviewing C Xenstored.
>> Without that, it would probably have taken a long time to notice
>> this change (I don't think there are many other errno values used by both
>> Xenstored and xsd_errors). So I think the risk is not worth the effort.
> 
> I don't see a real risk here, but if there is consensus that the risk
> should not be taken, then I'd rather add a comment that new entries are
> only allowed to be added at the end of the array.

I would be fine with mandating that new errors be added at the end of 
the array.
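
For illustration, here is a minimal self-contained sketch of why the first 
entry matters and how the assumption could be enforced by a small check 
program run at build time. This is not the actual xenstored code: 
errnum_to_string() and einval_is_first() are hypothetical names, and the 
struct/macro merely mirror the public definitions in xs_wire.h.

```c
/* Sketch only: struct and macro mirror xen/include/public/io/xs_wire.h;
 * real users would include that header instead of redefining them. */
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct xsd_errors {
    int errnum;
    const char *errstring;
};
#define XSD_ERROR(x) { x, #x }

static struct xsd_errors xsd_errors[] = {
    /* /!\ Some users (e.g. xenstored) expect EINVAL to be the first entry. */
    XSD_ERROR(EINVAL),
    XSD_ERROR(EACCES),
    XSD_ERROR(EEXIST),
};

/* errno -> string translation of the kind implied by the discussion:
 * an unknown value falls back to the first entry, which is why EINVAL
 * must stay first. */
static const char *errnum_to_string(int err)
{
    size_t i;

    for ( i = 0; i < sizeof(xsd_errors) / sizeof(xsd_errors[0]); i++ )
        if ( xsd_errors[i].errnum == err )
            return xsd_errors[i].errstring;

    return xsd_errors[0].errstring; /* fallback: "EINVAL" */
}

/* Returns non-zero iff the layout assumption holds. A tiny program
 * calling this and exiting non-zero on failure could be run as a build
 * step, giving BUILD_BUG_ON()-like protection without requiring the
 * value to be a compile-time constant. */
static int einval_is_first(void)
{
    return xsd_errors[0].errnum == EINVAL &&
           strcmp(xsd_errors[0].errstring, "EINVAL") == 0;
}
```

Failing the build whenever such a check program exits non-zero would catch 
any reordering of the array, which GCC's BUILD_BUG_ON() could not express 
here because the array element is not a constant expression.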

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 21:54:33 2022
From: Dmytro Semenets <dmitry.semenets@gmail.com>
Date: Wed, 29 Jun 2022 00:54:11 +0300
Message-ID: <CACM97VVhQ_cpr59ZbJj4HvxRvCj5h3yZmwsSVcGk7QycX_S24g@mail.gmail.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, 
	Dmytro Semenets <dmytro_semenets@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

Hi Stefano and Julien,
What is the conclusion about this patch?

On Sat, 25 Jun 2022 at 17:45, Julien Grall <julien@xen.org> wrote:
>
> Hi Stefano,
>
> On 24/06/2022 22:31, Stefano Stabellini wrote:
> > On Fri, 24 Jun 2022, Julien Grall wrote:
> >> On 23/06/2022 23:07, Stefano Stabellini wrote:
> >>> On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
> >>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> >>> So wouldn't it be better to remove the panic from the implementation of
> >>> call_psci_cpu_off?
> >>
> >> I have asked to keep the panic() in call_psci_cpu_off(). If you remove
> >> the panic() then we will hide the fact that the CPU was not properly
> >> turned off and will consume more energy than expected.
> >>
> >> The WFI loop is fine when shutting down or rebooting because we know
> >> this will only happen for a short period of time.
> >
> > Yeah, I don't think we should hide that CPU_OFF failed. I think we
> > should print a warning. However, given that we know CPU_OFF can
> > reasonably fail in normal conditions returning DENIED when a Trusted OS
> > is present, then I think we should not panic.
>
> We know how to detect this condition (see section 5.9 in DEN0022D.b). So
> I would argue we should fix it properly rather than removing the panic().
>
> >
> > If there was a way to distinguish a failure because a Trusted OS is
> > present (the "normal" failure) from other failures, I would suggest to:
> > - print a warning if failed due to a Trusted OS being present
> > - panic in other cases
> >
> > Unfortunately it looks like in all cases the return code is DENIED :-(
> I am confused. Per the spec, the only reason CPU_OFF can return DENIED
> is because the Trusted OS is resident. So what other cases are you
> talking about?
>
> >
> >
> > Given that, I would not panic and only print a warning in all cases. Or
> > we could ASSERT which at least goes away in !DEBUG builds.
>
> ASSERT() is definitely not the way to deal with external input. I could
> possibly accept a WARN(), but see above.
>
> >>> The reason I am saying this is that stop_cpu is called in a number of
> >>> places beyond halt_this_cpu and as far as I can tell any of them could
> >>> trigger the panic. (I admit they are unlikely places but still.)
> >>
> >> This is one of the examples where the CPU will not be stopped for a
> >> short period of time. We should deal with them differently (i.e. by
> >> migrating the trusted OS or similar) so we give every chance for the
> >> CPU to be fully powered off.
> >>
> >> IMHO, this is a different issue and hence why I didn't ask Dmitry to
> >> solve it.
> >
> > I see your point now. I was seeing the two things as one.
> >
> > I think it is true that the WFI loop is likely to work. Also it is true
> > that from a power perspective it makes no difference on power-down or
> > reboot. From that point of view this patch is OK.
> >
> > But even on shut-down/reboot, why not do that as a fallback in case
> > CPU_OFF didn't work? It is going to work most of the time anyway, why
> > change the default for the few cases where it doesn't work?
>
> Because we would not be consistent in how we turn off a CPU on a
> system supporting PSCI. I would prefer to use the same method everywhere
> so it is easier to reason about.
>
> I am also not sure how you define "most of the time". Yes it is possible
> that the boards we are aware of will not have this issue, but what about
> the ones we don't know about?
>
> >
> > Given that this patch would work, I don't want to insist on this and let
> > you decide.
> >
> >
> > But even if we don't want to remove the panic as part of this patch, I
> > think we should remove the panic in a separate patch anyway, at least
> > until someone investigates and thinks of a strategy how to migrate the
> > TrustedOS as you suggested.
> If we accept this patch, then we remove the immediate pain. The other
> uses of stop_cpu() are in:
>         1) idle_loop(), this is reachable when turning off a CPU after boot
> (not supported on Arm)
>          2) start_secondary(), this is only used during boot (CPU
> hot-unplug is not supported)
>
> Even if it would be possible to trigger the panic() in 2), I am not
> aware of an immediate issue there. So I think it would be the wrong
> approach to remove the panic() first and then investigate.
>
> The advantage of the panic() is that it will remind us that something
> needs to be fixed. With a warning (or WARN()) people will tend to ignore it.
>
> Cheers,
>
> --
> Julien Grall
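
The policy discussed above, tolerating DENIED only when a Trusted OS is 
resident (detectable per DEN0022 section 5.9) and keeping the panic() for 
any other failure, can be sketched as follows. This is not the actual Xen 
code: classify_cpu_off() is a hypothetical helper, and the return codes 
follow the PSCI specification.

```c
/* Sketch (not the actual Xen code) of the CPU_OFF failure policy
 * discussed above. Return codes per the PSCI specification (DEN0022). */
#define PSCI_SUCCESS          0
#define PSCI_DENIED         (-3)

/* Returns 0 if CPU_OFF succeeded, 1 if the failure may be tolerated
 * with a warning (DENIED because a Trusted OS is resident on this CPU,
 * so the caller falls back to a WFI loop), and -1 if the failure is
 * unexpected and the existing panic() should stay. */
static int classify_cpu_off(int psci_ret, int trusted_os_resident)
{
    if ( psci_ret == PSCI_SUCCESS )
        return 0;

    if ( psci_ret == PSCI_DENIED && trusted_os_resident )
        return 1;  /* warn and fall back to a WFI loop */

    return -1;     /* unexpected failure: keep the panic() */
}
```

Whether a Trusted OS is resident on the calling CPU would be determined 
via the MIGRATE_INFO_TYPE/MIGRATE_INFO_UP_CPU mechanism described in 
DEN0022 section 5.9; that detection is out of scope for this sketch.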


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 21:59:30 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171382-mainreport@xen.org>
Subject: [linux-linus test] 171382: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jun 2022 21:59:26 +0000

flight 171382 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171382/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                941e3e7912696b9fbe3586083a7c2e102cee7a87
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z    9 days
Failing since        171280  2022-06-19 15:12:25 Z    9 days   27 attempts
Testing same since   171374  2022-06-27 18:13:03 Z    1 days    3 attempts

------------------------------------------------------------
368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13051 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 22:24:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 22:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357407.585915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6JdS-0007x2-EZ; Tue, 28 Jun 2022 22:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357407.585915; Tue, 28 Jun 2022 22:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6JdS-0007wv-C0; Tue, 28 Jun 2022 22:24:42 +0000
Received: by outflank-mailman (input) for mailman id 357407;
 Tue, 28 Jun 2022 22:24:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l9jI=XD=vivier.eu=laurent@srs-se1.protection.inumbo.net>)
 id 1o6JdQ-0007wp-Dp
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 22:24:40 +0000
Received: from mout.kundenserver.de (mout.kundenserver.de [212.227.17.10])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 192d91c8-f731-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 00:24:39 +0200 (CEST)
Received: from [192.168.100.1] ([82.142.8.70]) by mrelayeu.kundenserver.de
 (mreue109 [213.165.67.119]) with ESMTPSA (Nemesis) id
 1MvsMz-1np3Kf27eB-00srJY; Wed, 29 Jun 2022 00:24:34 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 192d91c8-f731-11ec-bd2d-47488cf2e6aa
Message-ID: <46b178af-57f6-ade9-dea0-f0482d47fb10@vivier.eu>
Date: Wed, 29 Jun 2022 00:24:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 1/2] hw/i386/xen/xen-hvm: Allow for stubbing
 xen_set_pci_link_route()
Content-Language: fr
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220626094656.15673-1-shentey@gmail.com>
 <20220626094656.15673-2-shentey@gmail.com>
From: Laurent Vivier <laurent@vivier.eu>
In-Reply-To: <20220626094656.15673-2-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Flag: NO

On 26/06/2022 at 11:46, Bernhard Beschow wrote:
> The only user of xen_set_pci_link_route() is
> xen_piix_pci_write_config_client() which implements PIIX-specific logic in
> the xen namespace. This makes xen-hvm depend on PIIX which could be
> avoided if xen_piix_pci_write_config_client() was implemented in PIIX. In
> order to do this, xen_set_pci_link_route() needs to be stubbable which
> this patch addresses.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/i386/xen/xen-hvm.c       | 7 ++++++-
>   include/hw/xen/xen.h        | 1 +
>   include/hw/xen/xen_common.h | 6 ------
>   stubs/xen-hw-stub.c         | 5 +++++
>   4 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 0731f70410..204fda7949 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -161,11 +161,16 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
>           }
>           v &= 0xf;
>           if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
> -            xen_set_pci_link_route(xen_domid, address + i - PIIX_PIRQCA, v);
> +            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
>           }
>       }
>   }
>   
> +int xen_set_pci_link_route(uint8_t link, uint8_t irq)
> +{
> +    return xendevicemodel_set_pci_link_route(xen_dmod, xen_domid, link, irq);
> +}
> +
>   int xen_is_pirq_msi(uint32_t msi_data)
>   {
>       /* If vector is 0, the msi is remapped into a pirq, passed as
> diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> index 0f9962b1c1..13bffaef53 100644
> --- a/include/hw/xen/xen.h
> +++ b/include/hw/xen/xen.h
> @@ -21,6 +21,7 @@ extern enum xen_mode xen_mode;
>   extern bool xen_domid_restrict;
>   
>   int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
> +int xen_set_pci_link_route(uint8_t link, uint8_t irq);
>   void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>   void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
>   void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
> diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
> index 179741ff79..77ce17d8a4 100644
> --- a/include/hw/xen/xen_common.h
> +++ b/include/hw/xen/xen_common.h
> @@ -316,12 +316,6 @@ static inline int xen_set_pci_intx_level(domid_t domid, uint16_t segment,
>                                                device, intx, level);
>   }
>   
> -static inline int xen_set_pci_link_route(domid_t domid, uint8_t link,
> -                                         uint8_t irq)
> -{
> -    return xendevicemodel_set_pci_link_route(xen_dmod, domid, link, irq);
> -}
> -
>   static inline int xen_inject_msi(domid_t domid, uint64_t msi_addr,
>                                    uint32_t msi_data)
>   {
> diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
> index 15f3921a76..743967623f 100644
> --- a/stubs/xen-hw-stub.c
> +++ b/stubs/xen-hw-stub.c
> @@ -23,6 +23,11 @@ void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
>   {
>   }
>   
> +int xen_set_pci_link_route(uint8_t link, uint8_t irq)
> +{
> +    return -1;
> +}
> +
>   void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
>   {
>   }

Applied to my trivial-patches branch.

Thanks,
Laurent
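The byte-wise PIRQ-route scan in the quoted patch can be modeled in isolation. Below is a rough C sketch of that logic, not QEMU code: the function name `scan_pirq_updates` and the output arrays are invented for illustration; only the register window (0x60-0x63), the 0x80 "disabled" bit handling, and the low-nibble mask come from the quoted loop.

```c
#include <assert.h>
#include <stdint.h>

#define PIIX_PIRQCA 0x60
#define PIIX_PIRQCD 0x63

/* Toy model of xen_piix_pci_write_config_client()'s scan: for each byte
 * of a config-space write, treat bit 7 as "route disabled" (forcing the
 * route to 0), keep the low nibble as the IRQ, and record a (link, irq)
 * pair for every byte landing in the PIRQCA..PIRQCD window.
 * Returns the number of routes recorded. */
static int scan_pirq_updates(uint32_t address, uint32_t val, int len,
                             uint8_t links[4], uint8_t irqs[4])
{
    int i, n = 0;

    for (i = 0; i < len; i++) {
        uint8_t v = (val >> (8 * i)) & 0xff;
        if (v & 0x80) {
            v = 0;
        }
        v &= 0xf;
        if (address + i >= PIIX_PIRQCA && address + i <= PIIX_PIRQCD) {
            links[n] = (uint8_t)(address + i - PIIX_PIRQCA);
            irqs[n] = v;
            n++;
        }
    }
    return n;
}
```

A 4-byte write of 0x800B0A09 at offset 0x60 routes links 0..2 to IRQs 9, 10, 11 and disables link 3 (top byte has bit 7 set).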


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 22:25:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 22:25:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357412.585926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Je8-0008QW-Nv; Tue, 28 Jun 2022 22:25:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357412.585926; Tue, 28 Jun 2022 22:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Je8-0008QP-Jv; Tue, 28 Jun 2022 22:25:24 +0000
Received: by outflank-mailman (input) for mailman id 357412;
 Tue, 28 Jun 2022 22:25:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l9jI=XD=vivier.eu=laurent@srs-se1.protection.inumbo.net>)
 id 1o6Je6-0008Gi-PU
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 22:25:22 +0000
Received: from mout.kundenserver.de (mout.kundenserver.de [212.227.17.13])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32c807de-f731-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 00:25:22 +0200 (CEST)
Received: from [192.168.100.1] ([82.142.8.70]) by mrelayeu.kundenserver.de
 (mreue108 [213.165.67.119]) with ESMTPSA (Nemesis) id
 1N7Qt9-1na9591ikn-017o7K; Wed, 29 Jun 2022 00:25:18 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32c807de-f731-11ec-b725-ed86ccbb4733
Message-ID: <894aff5a-d0e9-803e-11c0-103c80290ef5@vivier.eu>
Date: Wed, 29 Jun 2022 00:25:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] hw/i386/xen/xen-hvm: Inline
 xen_piix_pci_write_config_client() and remove it
Content-Language: fr
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-trivial@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20220626094656.15673-1-shentey@gmail.com>
 <20220626094656.15673-3-shentey@gmail.com>
From: Laurent Vivier <laurent@vivier.eu>
In-Reply-To: <20220626094656.15673-3-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Flag: NO

On 26/06/2022 at 11:46, Bernhard Beschow wrote:
> xen_piix_pci_write_config_client() is implemented in the xen sub tree and
> uses PIIX constants internally, thus creating a direct dependency on
> PIIX. Now that xen_set_pci_link_route() is stubbable, the logic of
> xen_piix_pci_write_config_client() can be moved to PIIX which resolves
> the dependency.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/i386/xen/xen-hvm.c | 18 ------------------
>   hw/isa/piix3.c        | 15 ++++++++++++++-
>   include/hw/xen/xen.h  |  1 -
>   stubs/xen-hw-stub.c   |  4 ----
>   4 files changed, 14 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 204fda7949..e4293d6d66 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -15,7 +15,6 @@
>   #include "hw/pci/pci.h"
>   #include "hw/pci/pci_host.h"
>   #include "hw/i386/pc.h"
> -#include "hw/southbridge/piix.h"
>   #include "hw/irq.h"
>   #include "hw/hw.h"
>   #include "hw/i386/apic-msidef.h"
> @@ -149,23 +148,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>                              irq_num & 3, level);
>   }
>   
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
> -{
> -    int i;
> -
> -    /* Scan for updates to PCI link routes (0x60-0x63). */
> -    for (i = 0; i < len; i++) {
> -        uint8_t v = (val >> (8 * i)) & 0xff;
> -        if (v & 0x80) {
> -            v = 0;
> -        }
> -        v &= 0xf;
> -        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
> -            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
> -        }
> -    }
> -}
> -
>   int xen_set_pci_link_route(uint8_t link, uint8_t irq)
>   {
>       return xendevicemodel_set_pci_link_route(xen_dmod, xen_domid, link, irq);
> diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
> index 6388558f92..48f9ab1096 100644
> --- a/hw/isa/piix3.c
> +++ b/hw/isa/piix3.c
> @@ -138,7 +138,20 @@ static void piix3_write_config(PCIDevice *dev,
>   static void piix3_write_config_xen(PCIDevice *dev,
>                                      uint32_t address, uint32_t val, int len)
>   {
> -    xen_piix_pci_write_config_client(address, val, len);
> +    int i;
> +
> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> +    for (i = 0; i < len; i++) {
> +        uint8_t v = (val >> (8 * i)) & 0xff;
> +        if (v & 0x80) {
> +            v = 0;
> +        }
> +        v &= 0xf;
> +        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
> +            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
> +        }
> +    }
> +
>       piix3_write_config(dev, address, val, len);
>   }
>   
> diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> index 13bffaef53..afdf9c436a 100644
> --- a/include/hw/xen/xen.h
> +++ b/include/hw/xen/xen.h
> @@ -23,7 +23,6 @@ extern bool xen_domid_restrict;
>   int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>   int xen_set_pci_link_route(uint8_t link, uint8_t irq);
>   void xen_piix3_set_irq(void *opaque, int irq_num, int level);
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
>   void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
>   int xen_is_pirq_msi(uint32_t msi_data);
>   
> diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
> index 743967623f..34a22f2ad7 100644
> --- a/stubs/xen-hw-stub.c
> +++ b/stubs/xen-hw-stub.c
> @@ -19,10 +19,6 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>   {
>   }
>   
> -void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
> -{
> -}
> -
>   int xen_set_pci_link_route(uint8_t link, uint8_t irq)
>   {
>       return -1;

Applied to my trivial-patches branch.

Thanks,
Laurent



From xen-devel-bounces@lists.xenproject.org Tue Jun 28 22:57:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 22:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357422.585937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6K8T-0003jt-6U; Tue, 28 Jun 2022 22:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357422.585937; Tue, 28 Jun 2022 22:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6K8T-0003jm-3r; Tue, 28 Jun 2022 22:56:45 +0000
Received: by outflank-mailman (input) for mailman id 357422;
 Tue, 28 Jun 2022 22:56:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F/Gg=XD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6K8S-0003jg-1w
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 22:56:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9207b0d7-f735-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 00:56:41 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4B2EC619C5;
 Tue, 28 Jun 2022 22:56:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4BF08C341C8;
 Tue, 28 Jun 2022 22:56:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9207b0d7-f735-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656456998;
	bh=3iabwqUOx1OYkfYDHACVFMEC1AYXGdSImMOLv41aTcg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AyzYQ/wY9gOWY2Pd7SHzuHeenoClQf0XSzzJvjPM8Bwl0AH8VfoN2idxefdApaKA/
	 ap9GFCpp7nVeA0NfykA1KJ4jJSgEegaXwmhKpfyZ/W8CGpJErr3ANUzznIfcF4hgcu
	 mm8fXK83CTOKV+/joka+cSG+ccOmK8yviv9RRyhrS2Dq9g6aoFjEhg9a3H4hUC9z08
	 3jdx2mvDiVANYW+Qi9dMXububSCaKYaTFVDW9lc+pmTYACyx4Jhznp9R4jLbPs1SES
	 TSjxr/KX9Q+V9Q6rh25RjUtroU4jeNH1o9CQjPX95v0+T8wKIxQA+sX/4WGDtSAFau
	 rKCxcd8IKt2ZQ==
Date: Tue, 28 Jun 2022 15:56:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, dmitry.semenets@gmail.com, 
    xen-devel@lists.xenproject.org, Dmytro Semenets <dmytro_semenets@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com> <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop> <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org> <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 25 Jun 2022, Julien Grall wrote:
> On 24/06/2022 22:31, Stefano Stabellini wrote:
> > On Fri, 24 Jun 2022, Julien Grall wrote:
> > > On 23/06/2022 23:07, Stefano Stabellini wrote:
> > > > On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
> > > > > From: Dmytro Semenets <dmytro_semenets@epam.com>
> > > > So wouldn't it be better to remove the panic from the implementation of
> > > > call_psci_cpu_off?
> > > 
> > > I have asked to keep the panic() in call_psci_cpu_off(). If you remove the
> > > panic() then we will hide the fact that the CPU was not properly turned
> > > off
> > > and will consume more energy than expected.
> > > 
> > > The WFI loop is fine when shutting down or rebooting because we know this
> > > will
> > > only happen for a short period of time.
> > 
> > Yeah, I don't think we should hide that CPU_OFF failed. I think we
> > should print a warning. However, given that we know CPU_OFF can
> > reasonably fail in normal conditions returning DENIED when a Trusted OS
> > is present, then I think we should not panic.
> 
> We know how to detect this condition (see section 5.9 in DEN0022D.b). So I
> would argue we should fix it properly rather than removing the panic().
> 
> > 
> > If there was a way to distinguish a failure because a Trusted OS is
> > present (the "normal" failure) from other failures, I would suggest to:
> > - print a warning if failed due to a Trusted OS being present
> > - panic in other cases
> > 
> > Unfortunately it looks like in all cases the return code is DENIED :-(
> I am confused. Per the spec, the only reason CPU_OFF can return DENIED is
> because the Trusted OS is resident. So what other cases are you talking about?
> 
> > 
> > 
> > Given that, I would not panic and only print a warning in all cases. Or
> > we could ASSERT which at least goes away in !DEBUG builds.
> 
> ASSERT() is definitely not the way to deal with external input. I could possibly
> accept a WARN(), but see above.
> 
> > > > The reason I am saying this is that stop_cpu is called in a number of
> > > > places beyond halt_this_cpu and as far as I can tell any of them could
> > > > trigger the panic. (I admit they are unlikely places but still.)
> > > 
> > > This is one of the example where the CPU will not be stopped for a short
> > > period of time. We should deal with them differently (i.e. migrating the
> > > trusted OS or else) so we give all the chance for the CPU to be fully
> > > powered.
> > > 
> > > IMHO, this is a different issue and hence why I didn't ask Dmitry to solve
> > > it.
> > 
> > I see your point now. I was seeing the two things as one.
> > 
> > I think it is true that the WFI loop is likely to work. Also it is true
> > that from a power perspective it makes no difference on power down or
> > reboot.  From that point of view this patch is OK.
> > 
> > But even on shut-down/reboot, why not do that as a fallback in case
> > CPU_OFF didn't work? It is going to work most of the time anyway, why
> > change the default for the few cases where it doesn't work?
> 
> Because we would not be consistent in how we turn off a CPU on a system
> supporting PSCI. I would prefer to use the same method everywhere so it is
> easier to reason about.
> 
> I am also not sure how you define "most of the time". Yes it is possible that
> the boards we are aware of will not have this issue, but how about the ones we
> don't know about?
> 
> > 
> > Given that this patch would work, I don't want to insist on this and let
> > you decide.
> > 
> > 
> > But even if we don't want to remove the panic as part of this patch, I
> > think we should remove the panic in a separate patch anyway, at least
> > until someone investigates and thinks of a strategy how to migrate the
> > TrustedOS as you suggested.
> If we accept this patch, then we remove the immediate pain. The other uses of
> stop_cpu() are in:
> 	1) idle_loop(), this is reachable when turning off a CPU after boot
> (not supported on Arm)
>         2) start_secondary(), this is only used during boot (CPU hot-unplug is
> not supported)
> 
> Even if it would be possible to trigger the panic() in 2), I am not aware of
> an immediate issue there. So I think it would be the wrong approach to remove
> the panic() first and then investigate.
> 
> The advantage of the panic() is it will remind us that something needs to be fixed.
> With a warning (or WARN()) people will tend to ignore it.

I know that this specific code path (cpu off) is probably not super
relevant for what I am about to say, but as we move closer to safety
certifiability we need to get away from using "panic" and BUG_ON as a
reminder that more work is needed to have a fully correct implementation
of something.

I also see your point and agree that ASSERT is not acceptable for
external input, but from my point of view panic is the same (slightly
worse, even, because it doesn't go away in production builds).

The return value of CPU_OFF is "external input" but this patch would
make that problem go away for halt_this_cpu, and the other two call
sites are only relevant during boot.

So, although this is not my preference, I don't want to block this
patch. (I also think it is a lot better to move faster as a project
even with not-ideal implementations.)

Julien if you are going to ack the patch feel free to go ahead.
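The policy being debated above (warn on the one failure the spec allows, panic otherwise) can be sketched as a small decision function. This is a hypothetical illustration, not Xen code: the name `cpu_off_policy`, the enum, and the `trusted_os_resident` flag are invented; the DENIED value and the Trusted OS residency condition are from the PSCI spec (ARM DEN0022, section 5.9) as discussed in the thread.

```c
#include <assert.h>

#define PSCI_SUCCESS 0
#define PSCI_DENIED  (-3)   /* PSCI DENIED return code per ARM DEN0022 */

enum cpu_off_action { CPU_OFF_OK, CPU_OFF_WARN, CPU_OFF_PANIC };

/* Decide how to react to a CPU_OFF return value: succeed silently,
 * warn when DENIED is the expected consequence of a resident Trusted
 * OS, and panic on any other failure. */
static enum cpu_off_action cpu_off_policy(int ret, int trusted_os_resident)
{
    if (ret == PSCI_SUCCESS) {
        return CPU_OFF_OK;
    }
    /* CPU_OFF may legitimately return DENIED when the Trusted OS is
     * resident on this core; that is the "normal" failure. */
    if (ret == PSCI_DENIED && trusted_os_resident) {
        return CPU_OFF_WARN;
    }
    return CPU_OFF_PANIC;   /* anything else is unexpected */
}
```

Detecting residency would use MIGRATE_INFO_TYPE / MIGRATE_INFO_UP_CPU as the spec describes; that part is not modeled here.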


From xen-devel-bounces@lists.xenproject.org Tue Jun 28 22:57:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jun 2022 22:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357424.585948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6K8r-00048o-Er; Tue, 28 Jun 2022 22:57:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357424.585948; Tue, 28 Jun 2022 22:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6K8r-00048h-Bo; Tue, 28 Jun 2022 22:57:09 +0000
Received: by outflank-mailman (input) for mailman id 357424;
 Tue, 28 Jun 2022 22:57:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F/Gg=XD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6K8q-00045Y-0W
 for xen-devel@lists.xenproject.org; Tue, 28 Jun 2022 22:57:08 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1b9eda3-f735-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 00:57:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 63E4FB82051;
 Tue, 28 Jun 2022 22:57:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BBC7EC341C8;
 Tue, 28 Jun 2022 22:57:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1b9eda3-f735-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656457024;
	bh=ONdiUrLFJs9AeLXIFU3Q1r6IfQVzd+KgzQrs23lZIls=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Bs5oiNq4qxuvaMGMqOPYDUh7twQjfBh1RM/peJeBayug5dfEvk1Sf+dhgFPcITTxG
	 zT0dp9TK+mvgkudkd/sHfWvAuw0YxWV6YN7lNMTFIHAcEESaq1P8y+h9Yw/iTGGx85
	 qxEya5buuH/Q+Cy6ZeldVmtI9swxBJrC+SqIbI3uYb9gTVLo5WPU68VdidjMHt+81q
	 uFjlhr9IgMAFniZDBeqAGAP5zMZQc5XU3ZSJzscERyJhQzofWeXcvvd/6asgc61HXE
	 Bu/BgsTv73hNjm15caxjTGjhW5n/O5ZqdBmzO5urSQLENTG4LVjEeb/b6uLcUMtOd2
	 /pyr+jaKopmjw==
Date: Tue, 28 Jun 2022 15:57:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Dmytro Semenets <dmitry.semenets@gmail.com>
cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Dmytro Semenets <dmytro_semenets@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <CACM97VVhQ_cpr59ZbJj4HvxRvCj5h3yZmwsSVcGk7QycX_S24g@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281556420.4389@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com> <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop> <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org> <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org> <CACM97VVhQ_cpr59ZbJj4HvxRvCj5h3yZmwsSVcGk7QycX_S24g@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-241359656-1656457024=:4389"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-241359656-1656457024=:4389
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

I'll let Julien ack (and commit) the patch.


On Wed, 29 Jun 2022, Dmytro Semenets wrote:
> Hi Stefano and Julien,
> What is the conclusion about this patch?
> 
> On Sat, 25 Jun 2022 at 17:45, Julien Grall <julien@xen.org> wrote:
> >
> > Hi Stefano,
> >
> > On 24/06/2022 22:31, Stefano Stabellini wrote:
> > > On Fri, 24 Jun 2022, Julien Grall wrote:
> > >> On 23/06/2022 23:07, Stefano Stabellini wrote:
> > >>> On Thu, 23 Jun 2022, dmitry.semenets@gmail.com wrote:
> > >>>> From: Dmytro Semenets <dmytro_semenets@epam.com>
> > >>> So wouldn't it be better to remove the panic from the implementation of
> > >>> call_psci_cpu_off?
> > >>
> > >> I have asked to keep the panic() in call_psci_cpu_off(). If you remove the
> > >> panic() then we will hide the fact that the CPU was not properly turned off
> > >> and will consume more energy than expected.
> > >>
> > >> The WFI loop is fine when shutting down or rebooting because we know this will
> > >> only happen for a short period of time.
> > >
> > > Yeah, I don't think we should hide that CPU_OFF failed. I think we
> > > should print a warning. However, given that we know CPU_OFF can
> > > reasonably fail under normal conditions, returning DENIED when a Trusted OS
> > > is present, I think we should not panic.
> >
> > We know how to detect this condition (see section 5.9 in DEN0022D.b). So
> > I would argue we should fix it properly rather than removing the panic().
> >
> > >
> > > If there was a way to distinguish a failure because a Trusted OS is
> > > present (the "normal" failure) from other failures, I would suggest to:
> > > - print a warning if failed due to a Trusted OS being present
> > > - panic in other cases
> > >
> > > Unfortunately it looks like in all cases the return code is DENIED :-(
> > I am confused. Per the spec, the only reason CPU_OFF can return DENIED
> > is because the Trusted OS is resident. So what other cases are you
> > talking about?
> >
> > >
> > >
> > > Given that, I would not panic and only print a warning in all cases. Or
> > > we could ASSERT which at least goes away in !DEBUG builds.
> >
> > ASSERT() is definitely not the way to deal with external input. I could
> > possibly accept a WARN(), but see above.
> >
> > >>> The reason I am saying this is that stop_cpu is called in a number of
> > >>> places beyond halt_this_cpu and as far as I can tell any of them could
> > >>> trigger the panic. (I admit they are unlikely places but still.)
> > >>
> > >> This is one of the examples where the CPU will not be stopped for a short
> > >> period of time. We should deal with them differently (i.e. migrating the
> > >> trusted OS or else) so we give all the chance for the CPU to be fully powered.
> > >>
> > >> IMHO, this is a different issue and hence why I didn't ask Dmitry to solve it.
> > >
> > > I see your point now. I was seeing the two things as one.
> > >
> > > I think it is true that the WFI loop is likely to work. Also it is true
> > > that from a power perspective it makes no difference on power down or
> > > reboot.  From that point of view this patch is OK.
> > >
> > > But even on shut-down/reboot, why not do that as a fallback in case
> > > CPU_OFF didn't work? It is going to work most of the time anyway, why
> > > change the default for the few cases where it doesn't work?
> >
> > Because we would not be consistent in how we turn off a CPU on a
> > system supporting PSCI. I would prefer to use the same method everywhere
> > so it is easier to reason about.
> >
> > I am also not sure how you define "most of the time". Yes it is possible
> > that the boards we are aware of will not have this issue, but how about the
> > ones we don't know about?
> >
> > >
> > > Given that this patch would work, I don't want to insist on this and let
> > > you decide.
> > >
> > >
> > > But even if we don't want to remove the panic as part of this patch, I
> > > think we should remove the panic in a separate patch anyway, at least
> > > until someone investigates and thinks of a strategy how to migrate the
> > > TrustedOS as you suggested.
> > If we accept this patch, then we remove the immediate pain. The other
> > uses of stop_cpu() are in:
> >   1) idle_loop(), this is reachable when turning off a CPU after boot
> >      (not supported on Arm)
> >   2) start_secondary(), this is only used during boot (CPU
> >      hot-unplug is not supported)
> >
> > Even if it would be possible to trigger the panic() in 2), I am not
> > aware of an immediate issue there. So I think it would be the wrong
> > approach to remove the panic() first and then investigate.
> >
> > The advantage of the panic() is it will remind us that something needs to
> > be fixed. With a warning (or WARN()) people will tend to ignore it.
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 
--8323329-241359656-1656457024=:4389--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:27:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357436.585964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LXz-000631-Gc; Wed, 29 Jun 2022 00:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357436.585964; Wed, 29 Jun 2022 00:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LXz-00062u-Dx; Wed, 29 Jun 2022 00:27:11 +0000
Received: by outflank-mailman (input) for mailman id 357436;
 Wed, 29 Jun 2022 00:27:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6LXz-00062o-2V
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:27:11 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34e84464-f742-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 02:27:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6282661B7F;
 Wed, 29 Jun 2022 00:27:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 87E9EC341CB;
 Wed, 29 Jun 2022 00:27:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34e84464-f742-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656462425;
	bh=XrB+7s7nUnWRhJKddWbY4OOYoUdoajsrk+NZ+jGW7f8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=d6lKJTudWeJXg6OmnwxdU4Otk72TwAGscBkR4z6C/w+0DHoKGENpE3HqkmjrotX5+
	 sLxxlnu1jvWMP6MnL/FbekW2dsCJwGnby+JjHSpp2rnMQtLHSf7+0YtuklEuUp5/mf
	 OUnCj4jBUzWDoMmhJZ4JzwpwTVdDphIf1QPncXl9cWmAlIzqSkcULvdNzhPJQEXDpb
	 P3PkmkfqJzqCYJdEqGHvC6cqBFwGm9ACASR1ItD1ngWcCwE568ulyxqN8KeZc85yJs
	 EJ4QkvHOyJjEHGgmgaXA+mG62WZUuf9wLsSG4N2Q22fXNAedSjDgymOh2Ji4rSqu15
	 s2lHjP5xsYasw==
Date: Tue, 28 Jun 2022 17:27:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [PATCH 1/2] uboot-script-gen: prevent user mistakes due to
 DOM0_KERNEL becoming optional
In-Reply-To: <20220626184536.666647-1-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281726560.4389@ubuntu-linux-20-04-desktop>
References: <20220626184536.666647-1-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
> Before true dom0less configurations were enabled, the script was failing instantly
> if the DOM0_KERNEL parameter was not specified. This behaviour has changed and
> this needs to be communicated to the user.
> 
> Mention in README.md that for dom0less configurations, the parameter
> DOM0_KERNEL is optional.
> 
> If DOM0_KERNEL is not set, check that no other dom0 specific parameters are
> specified by the user. Fail the script early with an appropriate error
> message, if it was invoked with erroneous configuration settings.
> 
> Change message "Dom0 kernel is not specified, continue with dom0less setup."
> to "Dom0 kernel is not specified, continue with true dom0less setup."
> to refer more accurately to a dom0less setup without dom0.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  README.md                |  1 +
>  scripts/uboot-script-gen | 21 ++++++++++++++-------
>  2 files changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/README.md b/README.md
> index 17ff206..cb15ca5 100644
> --- a/README.md
> +++ b/README.md
> @@ -100,6 +100,7 @@ Where:
>    been specified in XEN_PASSTHROUGH_PATHS.
>  
>  - DOM0_KERNEL specifies the Dom0 kernel file to load.
> +  For dom0less configurations, the parameter is optional.
>  
>  - DOM0_MEM specifies the amount of memory for Dom0 VM in MB. The default
>    is 1024. This is only applicable when XEN_CMD is not specified.
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index e85c6ec..085e29f 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -410,6 +410,20 @@ function find_root_dev()
>  
>  function xen_config()
>  {
> +    if test -z "$DOM0_KERNEL"
> +    then
> +        if test "$NUM_DOMUS" -eq "0"
> +        then
> +            echo "Neither dom0 or domUs are specified, exiting."
> +            exit 1
> +        elif test "$DOM0_MEM" || test "$DOM0_VCPUS" || test "$DOM0_COLORS" || test "$DOM0_CMD" || test "$DOM0_RAMDISK" || test "$DOM0_ROOTFS"
> +        then
> +            echo "For dom0less configuration without dom0, no dom0 specific parameters should be specified, exiting."
> +            exit 1
> +        fi
> +        echo "Dom0 kernel is not specified, continue with true dom0less setup."
> +    fi
> +
>      if [ -z "$XEN_CMD" ]
>      then
>          if [ -z "$DOM0_MEM" ]
> @@ -457,13 +471,6 @@ function xen_config()
>      fi
>      if test -z "$DOM0_KERNEL"
>      then
> -        if test "$NUM_DOMUS" -eq "0"
> -        then
> -            echo "Neither dom0 or domUs are specified, exiting."
> -            exit 1
> -        fi
> -        echo "Dom0 kernel is not specified, continue with dom0less setup."
> -        unset DOM0_RAMDISK
>          # Remove dom0 specific parameters from the XEN command line.
>          local params=($XEN_CMD)
>          XEN_CMD="${params[@]/dom0*/}"
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:28:23 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357440.585976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LZ8-0006gf-S6; Wed, 29 Jun 2022 00:28:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357440.585976; Wed, 29 Jun 2022 00:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LZ8-0006gY-Od; Wed, 29 Jun 2022 00:28:22 +0000
Received: by outflank-mailman (input) for mailman id 357440;
 Wed, 29 Jun 2022 00:28:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6LZ7-0006Vw-R8
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:28:21 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60668675-f742-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 02:28:21 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 30174B8210D;
 Wed, 29 Jun 2022 00:28:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9D816C341C8;
 Wed, 29 Jun 2022 00:28:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60668675-f742-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656462497;
	bh=gdo1im7jZVNUXLq9t5pb61VNaHlbtkv2qPlgFzG1R+U=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NO+Ynh4PoaP70HZhaSJ4fsXVbxPDkiinnev8bSlPeDd4VX4qFPASd/DarmQwMKGKb
	 YT0e6Ppymv/sND8LbFRUrowuEf7dQiOdCjImO/v2rWmk1xYe+TyqrMpotr1zCRnpax
	 WkgJTkHotFUXJJS+kclcCaDKuvRXXVV1kUHAHkHYUJRQooHY6AaK599nPzJr0cwbCA
	 Rd1PMEaWTWph4qRNmzFn8WKEysENAoFZ5pXby5zRON3PkS4YdenZa4KaFHzdAISl1D
	 mZZvJuAIWuwVMjUlhZUU2/mlcP9n3OU9t+cza7L3GMCHOSqSNw3j1mL+il6HqWqAto
	 Tg0+kTdkAeG5g==
Date: Tue, 28 Jun 2022 17:28:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    viryaos-discuss@lists.sourceforge.net
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
In-Reply-To: <20220626184536.666647-2-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
References: <20220626184536.666647-1-burzalodowa@gmail.com> <20220626184536.666647-2-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
> To be in line with Xen, do not enable direct mapping automatically for all
> statically allocated domains.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Actually I don't know about this one. I think it is OK that ImageBuilder
defaults are different from Xen defaults. This is a case where I think
it would be good to enable DOMU_DIRECT_MAP by default when
DOMU_STATIC_MEM is specified.
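
The conditional default suggested above, enabling DOMU_DIRECT_MAP only when
DOMU_STATIC_MEM is set, could be sketched as follows. This hunk is hypothetical
and not part of the posted patch; the variable names follow the ImageBuilder
convention.

```shell
#!/bin/bash
# Hypothetical sketch: default DOMU_DIRECT_MAP[i] to 1 only when
# DOMU_STATIC_MEM[i] is specified, leaving direct mapping disabled
# for domains without static memory regions.
DOMU_STATIC_MEM[0]="0x40000000 0x10000000"  # domU 0 uses static allocation
DOMU_STATIC_MEM[1]=""                       # domU 1 does not

for i in 0 1
do
    if test -n "${DOMU_STATIC_MEM[$i]}" && test -z "${DOMU_DIRECT_MAP[$i]}"
    then
        DOMU_DIRECT_MAP[$i]=1
    fi
done

echo "domU0 direct map: ${DOMU_DIRECT_MAP[0]:-0}"
echo "domU1 direct map: ${DOMU_DIRECT_MAP[1]:-0}"
```

A user-supplied DOMU_DIRECT_MAP[number]=0 would still win, because the
default is only applied when the variable is empty.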


> ---
>  README.md                | 4 ++--
>  scripts/uboot-script-gen | 8 ++------
>  2 files changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/README.md b/README.md
> index cb15ca5..03e437b 100644
> --- a/README.md
> +++ b/README.md
> @@ -169,8 +169,8 @@ Where:
>    if specified, indicates the host physical address regions
>    [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
>  
> -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
> -  If set to 1, the VM is direct mapped. The default is 1.
> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
> +  By default, direct mapping is disabled.
>    This is only applicable when DOMU_STATIC_MEM is specified.
>  
>  - LINUX is optional but specifies the Linux kernel for when Xen is NOT
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 085e29f..66ce6f7 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -52,7 +52,7 @@ function dt_set()
>              echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>          elif test $data_type = "bool"
>          then
> -            if test "$data" -eq 1
> +            if test "$data" == "1"
>              then
>                  echo "fdt set $path $var" >> $UBOOT_SOURCE
>              fi
> @@ -74,7 +74,7 @@ function dt_set()
>              fdtput $FDTEDIT -p -t s $path $var $data
>          elif test $data_type = "bool"
>          then
> -            if test "$data" -eq 1
> +            if test "$data" == "1"
>              then
>                  fdtput $FDTEDIT -p $path $var
>              fi
> @@ -491,10 +491,6 @@ function xen_config()
>          then
>              DOMU_CMD[$i]="console=ttyAMA0"
>          fi
> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
> -        then
> -             DOMU_DIRECT_MAP[$i]=1
> -        fi
>          i=$(( $i + 1 ))
>      done
>  }
> -- 
> 2.34.1
> 
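
A note on the `-eq` to `==` change in the hunks above: with an unset or empty
variable, bash's arithmetic `test` is a runtime error ("integer expression
expected", exit status 2) rather than a false result, whereas the string
comparison fails cleanly with exit status 1. A minimal standalone
demonstration, not part of the patch:

```shell
#!/bin/bash
# With an empty value, "test -eq" errors out (exit 2), while the
# string compare is simply false (exit 1).
data=""
test "$data" -eq 1 2>/dev/null || rc_num=$?
echo "arithmetic compare: exit ${rc_num:-0}"
test "$data" == "1" || rc_str=$?
echo "string compare: exit ${rc_str:-0}"
```

This matters for dt_set() because, after the first patch in the series, a
bool-typed value may legitimately be empty.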


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:32:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:32:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357448.585987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Lcw-000886-9z; Wed, 29 Jun 2022 00:32:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357448.585987; Wed, 29 Jun 2022 00:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Lcw-00087z-7N; Wed, 29 Jun 2022 00:32:18 +0000
Received: by outflank-mailman (input) for mailman id 357448;
 Wed, 29 Jun 2022 00:32:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6Lcu-00087t-Ot
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:32:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec5127a9-f742-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 02:32:15 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 05DAF61BEA;
 Wed, 29 Jun 2022 00:32:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33CEAC341C8;
 Wed, 29 Jun 2022 00:32:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec5127a9-f742-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656462733;
	bh=kV3Xg2ZJrv/1vONNtAHMshda227zmNUQfAmIYro4lBg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Kih+yuhkNHHUaf/e+ZxwGSPlf5B8RNvHcuk/NbC1T+WUbGvqhKBWJAuFXSQOtNcXb
	 Ivq0Zegs1AoKFG95B5vloI6QzlDizV44PGEo6GmviCx9cZq01SB/gzZd6oE7/GeTYI
	 5qQczNTwDDL8b0Xn/cjW2vz//sysWhWwCzT0HHo9ESExMVprnqaJ1tXRz60ywmtzxU
	 EpaKIgbQ+nh7OE6D9DGIg2XcqjOD51ztpcFQXYXZtdrODZ+yzwJVHbXlC473gtSvnH
	 T1HddtvDTJCrXPMk9i4I52Nki+IhIJgtUnpI4OoNsqJOjJx5sfG2nCx0Ug+TJScgGj
	 AoePG2jRwtXWA==
Date: Tue, 28 Jun 2022 17:32:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, Tamas K Lengyel <tamas@tklengyel.com>, 
    Alexandru Isaila <aisaila@bitdefender.com>, 
    Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: Re: [PATCH 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
In-Reply-To: <20220626211131.678995-3-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281732060.4389@ubuntu-linux-20-04-desktop>
References: <20220626211131.678995-1-burzalodowa@gmail.com> <20220626211131.678995-3-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 27 Jun 2022, Xenia Ragiadakou wrote:
> The function vm_event_wake() is referenced only in vm_event.c.
> Change the linkage of the function from external to internal by adding
> the storage-class specifier static to the function definition.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/common/vm_event.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 0b99a6ea72..ecf49c38a9 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -173,7 +173,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
>   * call vm_event_wake() again, ensuring that any blocked vCPUs will get
>   * unpaused once all the queued vCPUs have made it through.
>   */
> -void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
> +static void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
>  {
>      if ( !list_empty(&ved->wq.list) )
>          vm_event_wake_queued(d, ved);
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:33:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:33:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357454.585998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LeL-0000KP-Oy; Wed, 29 Jun 2022 00:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357454.585998; Wed, 29 Jun 2022 00:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LeL-0000KI-Lh; Wed, 29 Jun 2022 00:33:45 +0000
Received: by outflank-mailman (input) for mailman id 357454;
 Wed, 29 Jun 2022 00:33:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6LeJ-0000K8-TU
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:33:43 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 206265ee-f743-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 02:33:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id DB2B8B81FAE;
 Wed, 29 Jun 2022 00:33:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 70EC7C341C8;
 Wed, 29 Jun 2022 00:33:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 206265ee-f743-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656462820;
	bh=kAcag4fVpVdnx2Peoj1azmGRfaFc0FdpCsNg81+VX0s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Vi6mBf2onL33kngnOhk+y7rYXGMtawOpcYvA5rckmKgOc/G7/iigqiiby/c1+ewXU
	 zMPfzeE2uzhQ+3sIhSpLtexwqoCzztYCQJtIQ63+LfpmlpCgl4JGhKw7gVMsc+fX+r
	 Wu5Fh5Y5ukRfRJ0qF0tr4kN9LhWAPKP/jBcfoWKHuw1TeF08wRLWX4bfmboS9Q5rv5
	 zYhIScTb2q8r3xKNqBzRRwILCZdO7Dw6TypBX0IjNs4t8kMArFLoQ/6dHVrIt5X7JA
	 feDetXs90NqFApW/9JCuUrlaTqKgZ2Hc9Vh9XfdDcWK/91DytMlchQKIZYBYnE7Bmb
	 x7snyOjczNtlw==
Date: Tue, 28 Jun 2022 17:33:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [PATCH 4/5] xen/sched: credit: Fix MISRA C 2012 Rule 8.7
 violation
In-Reply-To: <20220626211131.678995-5-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281733290.4389@ubuntu-linux-20-04-desktop>
References: <20220626211131.678995-1-burzalodowa@gmail.com> <20220626211131.678995-5-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 27 Jun 2022, Xenia Ragiadakou wrote:
> The per-cpu variable last_tickle_cpu is referenced only in credit.c.
> Change its linkage from external to internal by adding the storage-class
> specifier static to its definitions.
> 
> This patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/common/sched/credit.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
> index 4d3bd8cba6..47945c2834 100644
> --- a/xen/common/sched/credit.c
> +++ b/xen/common/sched/credit.c
> @@ -348,7 +348,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
>  static bool __read_mostly opt_tickle_one_idle = true;
>  boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
>  
> -DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
> +static DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
>  
>  static inline void __runq_tickle(const struct csched_unit *new)
>  {
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:38:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357460.586009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Lj3-00015n-AH; Wed, 29 Jun 2022 00:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357460.586009; Wed, 29 Jun 2022 00:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Lj3-00015g-7N; Wed, 29 Jun 2022 00:38:37 +0000
Received: by outflank-mailman (input) for mailman id 357460;
 Wed, 29 Jun 2022 00:38:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6Lj1-00015a-2f
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:38:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cdd303e1-f743-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 02:38:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8014561BF4;
 Wed, 29 Jun 2022 00:38:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B2469C341C8;
 Wed, 29 Jun 2022 00:38:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdd303e1-f743-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656463111;
	bh=vo9UkzoEjvMroMudDkrfnchAdK3D7tPQ2c43lJljPqM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Ccj+IRLspfXm6t4nen9+1kCD7x99EauXt3RgNnfj8qDsfYGohf5Iv5QldLwsvD8Lg
	 tb1EhloJ3K54csce6Ol+r+63ULygbDQq/qyfKkejzVjlL+c1I2SPEer4W2O4HYg2U/
	 N4un+MRVtb6cRzlCkNBV/jOMYBAtaJSoWNqy+AVrhuiodZv5/zNEAWVMEhasILnt4T
	 vzbatAiszlEFEaC2CHWa3DbKgTnyg98F8z8lykUbU9qHPjog77F8wKUqHTIbR0FblE
	 bW2sN3WqFK3aMVCIp3DzuDl0CU8At+bYPyEeIAN0P/4/S/D0BDTFBQ7BrETHanKuK/
	 YM24S3dfQQ8rA==
Date: Tue, 28 Jun 2022 17:38:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, Tamas K Lengyel <tamas@tklengyel.com>, 
    Alexandru Isaila <aisaila@bitdefender.com>, 
    Petre Pircalabu <ppircalabu@bitdefender.com>, 
    Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/5] xen/common: vm_event: Fix MISRA C 2012 Rule 8.7
 violation
In-Reply-To: <20220628150337.8520-3-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281738220.4389@ubuntu-linux-20-04-desktop>
References: <20220628150337.8520-1-burzalodowa@gmail.com> <20220628150337.8520-3-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 28 Jun 2022, Xenia Ragiadakou wrote:
> The function vm_event_wake() is referenced only in vm_event.c.
> Change the linkage of the function from external to internal by adding
> the storage-class specifier static to the function definition.
> 
> Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
> warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> Changes in v2:
> - replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
>   Rule 8.4 violation warning." with "Also, this patch aims to resolve
>   indirectly a MISRA C 2012 Rule 8.4 violation warning."
> 
>  xen/common/vm_event.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 0b99a6ea72..ecf49c38a9 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -173,7 +173,7 @@ static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
>   * call vm_event_wake() again, ensuring that any blocked vCPUs will get
>   * unpaused once all the queued vCPUs have made it through.
>   */
> -void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
> +static void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
>  {
>      if ( !list_empty(&ved->wq.list) )
>          vm_event_wake_queued(d, ved);
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:38:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357461.586020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LjG-0001Pw-Jd; Wed, 29 Jun 2022 00:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357461.586020; Wed, 29 Jun 2022 00:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6LjG-0001Pp-G8; Wed, 29 Jun 2022 00:38:50 +0000
Received: by outflank-mailman (input) for mailman id 357461;
 Wed, 29 Jun 2022 00:38:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6LjG-00015a-3T
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:38:50 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d78acded-f743-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 02:38:49 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id BD635B81E04;
 Wed, 29 Jun 2022 00:38:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 343B1C341C8;
 Wed, 29 Jun 2022 00:38:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d78acded-f743-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656463127;
	bh=O0NqIp879v6Y64z8RHuI9sAdYl28wsDo45fN6FatBaI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=m6UVXtsnh9Ckog6KnoTj3Cu5jLMK4mYqbpOBtATE2xhdb7HcN4SHeQmhhBz3sHYo+
	 USCzSQGYleCz+VtXxIuoH8Zkb+8YPFdQwNccen22R2WcPacnZla8L+GASUgqHqplR6
	 1jW55A0zlcuYnnzHLhkoD2EfGtAjOQ3cqqg3gjaVyinQ9Gb/NJcing0P2KUwFBRKF+
	 Cyw6qkFz5/hOWXHRg4NTf/NRVQ9GVPiq6/MvhhQFgsX+OAmnJ0fFP/NnQ8VA78w6TI
	 istSXTS941SDodWWJIYAQGQkjbFvooiYrfwlFAJien4gcdEWB4G/3nP9GgikkhBRhs
	 JZbDrNWPcRS+A==
Date: Tue, 28 Jun 2022 17:38:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Xenia Ragiadakou <burzalodowa@gmail.com>
cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 4/5] xen/sched: credit: Fix MISRA C 2012 Rule 8.7
 violation
In-Reply-To: <20220628150337.8520-5-burzalodowa@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206281738390.4389@ubuntu-linux-20-04-desktop>
References: <20220628150337.8520-1-burzalodowa@gmail.com> <20220628150337.8520-5-burzalodowa@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 28 Jun 2022, Xenia Ragiadakou wrote:
> The per-cpu variable last_tickle_cpu is referenced only in credit.c.
> Change its linkage from external to internal by adding the storage-class
> specifier static to its definition.
> 
> Also, this patch aims to resolve indirectly a MISRA C 2012 Rule 8.4 violation
> warning.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> Changes in v2:
> - replace the phrase "This patch aims to resolve indirectly a MISRA C 2012
>   Rule 8.4 violation warning." with "Also, this patch aims to resolve
>   indirectly a MISRA C 2012 Rule 8.4 violation warning."
> 
>  xen/common/sched/credit.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
> index 4d3bd8cba6..47945c2834 100644
> --- a/xen/common/sched/credit.c
> +++ b/xen/common/sched/credit.c
> @@ -348,7 +348,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
>  static bool __read_mostly opt_tickle_one_idle = true;
>  boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
>  
> -DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
> +static DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
>  
>  static inline void __runq_tickle(const struct csched_unit *new)
>  {
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 00:58:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 00:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357474.586035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6M2D-0004Ik-7Q; Wed, 29 Jun 2022 00:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357474.586035; Wed, 29 Jun 2022 00:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6M2D-0004Id-3n; Wed, 29 Jun 2022 00:58:25 +0000
Received: by outflank-mailman (input) for mailman id 357474;
 Wed, 29 Jun 2022 00:58:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6M2B-0004IX-KI
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 00:58:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91c8bf73-f746-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 02:58:21 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2F3B261AFE;
 Wed, 29 Jun 2022 00:58:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6E470C341C8;
 Wed, 29 Jun 2022 00:58:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91c8bf73-f746-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656464299;
	bh=qzXzyU3Yxwkc5XoKYIl4dSB/mDLg7UqRnK4AC6snhZQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hiakdTfXzYVHnt8rotaaK4/4vf9Jm1h+sYAUrz+zMLPSM5E4rWUFyo8rGX5PESKLl
	 SY2grO9EbqiR2Nh8X1DlSWeFnUzx1cg+HuylAwTsj3sr/+ORIBdOpNsXMOnWmuN6ka
	 e2KJOMM1+ScuOsUPDAC1TZ/o0/sRKF5JLy0VULpTw8abuMurRFrQ2sdxT5B5uo3PDr
	 DXaHPlaEBpEPp5jGAs1Zib5nV/ajkGe5CzC/WOwT8Mrm5V0CYmaFpYEomoQw4Cf14U
	 D6FpTXAQNTrIUIZxZWmpBkwKNqx6a/kMQOdKaXi3gnqvGIHJkYKlz1LMjrtMKpRGng
	 Lgc0rh6qy+uLA==
Date: Tue, 28 Jun 2022 17:58:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Juergen Gross <jgross@suse.com>
cc: xen-devel@lists.xenproject.org, x86@kernel.org, linux-s390@vger.kernel.org, 
    linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
    linux-arch@vger.kernel.org, Heiko Carstens <hca@linux.ibm.com>, 
    Vasily Gorbik <gor@linux.ibm.com>, 
    Alexander Gordeev <agordeev@linux.ibm.com>, 
    Christian Borntraeger <borntraeger@linux.ibm.com>, 
    Sven Schnelle <svens@linux.ibm.com>, 
    Dave Hansen <dave.hansen@linux.intel.com>, 
    Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>, 
    Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
    Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, 
    "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Arnd Bergmann <arnd@arndb.de>, Russell King <linux@armlinux.org.uk>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v3 0/3] virtio: support requiring restricted access per
 device
In-Reply-To: <20220622063838.8854-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2206281758050.4389@ubuntu-linux-20-04-desktop>
References: <20220622063838.8854-1-jgross@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 22 Jun 2022, Juergen Gross wrote:
> Instead of an all-or-nothing approach, add support for requiring
> restricted memory access per device.
> 
> Changes in V3:
> - new patches 1 + 2
> - basically complete rework of patch 3
> 
> Juergen Gross (3):
>   virtio: replace restricted mem access flag with callback
>   kernel: remove platform_has() infrastructure
>   xen: don't require virtio with grants for non-PV guests


On the whole series:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  MAINTAINERS                            |  8 --------
>  arch/arm/xen/enlighten.c               |  4 +++-
>  arch/s390/mm/init.c                    |  4 ++--
>  arch/x86/mm/mem_encrypt_amd.c          |  4 ++--
>  arch/x86/xen/enlighten_hvm.c           |  4 +++-
>  arch/x86/xen/enlighten_pv.c            |  5 ++++-
>  drivers/virtio/Kconfig                 |  4 ++++
>  drivers/virtio/Makefile                |  1 +
>  drivers/virtio/virtio.c                |  4 ++--
>  drivers/virtio/virtio_anchor.c         | 18 +++++++++++++++++
>  drivers/xen/Kconfig                    |  9 +++++++++
>  drivers/xen/grant-dma-ops.c            | 10 ++++++++++
>  include/asm-generic/Kbuild             |  1 -
>  include/asm-generic/platform-feature.h |  8 --------
>  include/linux/platform-feature.h       | 19 ------------------
>  include/linux/virtio_anchor.h          | 19 ++++++++++++++++++
>  include/xen/xen-ops.h                  |  6 ++++++
>  include/xen/xen.h                      |  8 --------
>  kernel/Makefile                        |  2 +-
>  kernel/platform-feature.c              | 27 --------------------------
>  20 files changed, 84 insertions(+), 81 deletions(-)
>  create mode 100644 drivers/virtio/virtio_anchor.c
>  delete mode 100644 include/asm-generic/platform-feature.h
>  delete mode 100644 include/linux/platform-feature.h
>  create mode 100644 include/linux/virtio_anchor.h
>  delete mode 100644 kernel/platform-feature.c
> 
> -- 
> 2.35.3
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 02:23:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 02:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357483.586052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6NLq-0003uM-D7; Wed, 29 Jun 2022 02:22:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357483.586052; Wed, 29 Jun 2022 02:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6NLq-0003uF-8Q; Wed, 29 Jun 2022 02:22:46 +0000
Received: by outflank-mailman (input) for mailman id 357483;
 Wed, 29 Jun 2022 02:22:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6NLp-0003u4-1l; Wed, 29 Jun 2022 02:22:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6NLo-0005NE-Uk; Wed, 29 Jun 2022 02:22:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6NLo-0001j2-Gv; Wed, 29 Jun 2022 02:22:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6NLo-0008Lv-G9; Wed, 29 Jun 2022 02:22:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j1Fb6iWMHvz/bwQjr3RWYqSdbmyYY/eg61ctbkb5Ui4=; b=p+Fq1b/XC8ltSPGvHE2YOzd8rU
	tKi4Uzu6XNBXD2DYuTsSZNAFUQG71ZfQmfgYtiCBZ4kS2ngkoTGSAsEbXVCJ6hhfiIiizf7dVjUOV
	WRFn5JweePaCcKTRiBGfORYDBQTdkqXguS7FGpZbl2dJRuH8nlIYv3TXS6rCb0HqEcVU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171385-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171385: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64-libvirt:libvirt-build:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2a8835cb45371a1f05c9c5899741d66685290f28
X-Osstest-Versions-That:
    qemuu=29f6db75667f44f3f01ba5037dacaf9ebd9328da
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 02:22:44 +0000

flight 171385 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171385/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 171376
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 171376

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171376
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171376
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171376
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171376
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171376
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171376
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                2a8835cb45371a1f05c9c5899741d66685290f28
baseline version:
 qemuu                29f6db75667f44f3f01ba5037dacaf9ebd9328da

Last test of basis   171376  2022-06-27 23:09:50 Z    1 days
Testing same since   171380  2022-06-28 08:41:09 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  David Hildenbrand <david@redhat.com>
  Jagannathan Raman <jag.raman@oracle.com>
  Kevin Wolf <kwolf@redhat.com>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Michael S. Tsirkin <mst@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1112 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 03:12:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 03:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357494.586069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6O7l-0001AF-45; Wed, 29 Jun 2022 03:12:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357494.586069; Wed, 29 Jun 2022 03:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6O7l-0001A8-1H; Wed, 29 Jun 2022 03:12:17 +0000
Received: by outflank-mailman (input) for mailman id 357494;
 Wed, 29 Jun 2022 03:12:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6O7j-0001A2-2m
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 03:12:15 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2053.outbound.protection.outlook.com [40.107.104.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44937ea8-f759-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 05:12:12 +0200 (CEST)
Received: from AM7PR02CA0006.eurprd02.prod.outlook.com (2603:10a6:20b:100::16)
 by AM0PR08MB3412.eurprd08.prod.outlook.com (2603:10a6:208:dc::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 03:12:10 +0000
Received: from VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::45) by AM7PR02CA0006.outlook.office365.com
 (2603:10a6:20b:100::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 03:12:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT056.mail.protection.outlook.com (10.152.19.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 03:12:09 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Wed, 29 Jun 2022 03:12:08 +0000
Received: from e67e5b16c9e1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B945B161-F1C7-49B8-B0B2-530AE90C7BB1.1; 
 Wed, 29 Jun 2022 03:12:03 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e67e5b16c9e1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 03:12:03 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by DB8PR08MB4137.eurprd08.prod.outlook.com (2603:10a6:10:a5::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 03:12:01 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 03:12:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44937ea8-f759-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=jhKD6DaZfbgmGdPnaT1Wv3iAfb8muvii8onRbv+ffL2NxvO055qmiFz5FWOlW0qQxv6TB8ImR3Rg9MMofR0Zoyb0gPD6m+SNfSBIto3n4K7Vgif3GtTTiOL+7UISmd7u7EA41SP4vZJqlqU2pYpmbjLKcnoX+//dDJF83MFbtgF1eoYWmJ0BcAZGT+7ANbt0HB4xqFlzsqE/Iemqkc8SWeguiTelTPmBR4YJkv1ytWKkr/x8yrUXw+OjE2pZW6NnbGzqHJ/LrFhsILL05DYERb1PK4osJChBi7VDn8bucbQdgWYEJpOhPaOW2OcdvR29EPzsK9bhZdS5lEIz4LEMmg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kHbnPXx6i6lC9j4Wm1VPjmIcEPajH8+B1X/q2adN1qI=;
 b=e8GgBkwh9xU1+85nm0E4ZWMsZyLOq+c6CIB0nBaymnRdW5bwDPUlUcU9FkhTguK5zq7cYhHR4ZeFDy9Jan4XaiLoCqTQJ1mT1C/gsM+un8/ayt/7jnvO5WxOvSjzWmZ4GcwoIED7ZCC5k5esytNYc59XTFaQosGm5wqUZ81JZ5pBTvjQZV50uwTaSa8tnvhWP5vGWA9+XC9lLjZCYXEEj69SZrce9ghBM6BhLbUPL+WRm29ZW11suN6T/r5sRjWtn4xAjqFMJXOJKxOL9qsu5Rg2CHXduZA69gKVltmMlXJt5+7SmmXmzu8ZZqqXLLAuR0l348i/iEnJnm1mJ9IgKg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kHbnPXx6i6lC9j4Wm1VPjmIcEPajH8+B1X/q2adN1qI=;
 b=b9Xvd42w5hyuRbBpdygirzqArpBf6puS5hNhg/uxeEsQ8zLErcJME6GalPjU5VT/5E1DqkY5L18CToEKTcZyE/r2+cPKld3AZtKxfT1g5he1j/bqYpuuc5G3j7tFcVZB1uYIrBa71aTbSphj/jHhWSA22gaQIbU52+3jl/Ci+eA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XWFHPXnbGpMVGTzX5cZozjJECZmWrZz7rfP+m6deyZA2a6/UI38jyobjB0EtgoGS0QnImaJfiIzQDU2qnKYFbdsyQy9Fc3uCdDY8sFUPferIOEXoiJAS5jrSHGY1pR9o3eksS67fr2eihUcq21bD1wMJV1GsbgI586GQ2ZSVp4e2M+6iw5f5DcSzW7dOGWcUPJLUUurCvmgUwQLNMm6IXSd/Vu4qb8eLowv8A0ZSoiNdPXZvo0HzfJg7uIstjuZRORXo7MkIYKRoXsjypeMhkyWzcCnUWRk0LgdX2MrG30JRtbTL00NUn43+8HbsL6Z7kahFS16/3D+XgaPbalyfIA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kHbnPXx6i6lC9j4Wm1VPjmIcEPajH8+B1X/q2adN1qI=;
 b=KsLC/iBW/aUhGJOybX+w+T8R9XLuBVBy1NgsBGwbju80SSfZk4Rv2AsmJHykrH+knlJlPJh6D2RkPJ1gZsXI501zAT0I0iVQ6/ZLRFQdMo2rSdockFG7t9J8mqpSAMSfGpt7gJH9Y7bOloR7+8XKmQb0wQnm4hkCO+tDNRfG7k7cg5I9oUJeyU3gY//d/zTFn4JBxl5xPmz8ogh8k5ByQqJ8g+C5H8gwXEsl8KZCHnyq6OVvpeyYmRLHOkoPfTPhc18t/Dh5poq/XPxqlXFxJ0adYhpq3M3F2m+VlyGiwGIwdydxnz3I5/Zij8CA++8f2fGlLzxQKFS8Mp7dntbEeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kHbnPXx6i6lC9j4Wm1VPjmIcEPajH8+B1X/q2adN1qI=;
 b=b9Xvd42w5hyuRbBpdygirzqArpBf6puS5hNhg/uxeEsQ8zLErcJME6GalPjU5VT/5E1DqkY5L18CToEKTcZyE/r2+cPKld3AZtKxfT1g5he1j/bqYpuuc5G3j7tFcVZB1uYIrBa71aTbSphj/jHhWSA22gaQIbU52+3jl/Ci+eA=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Thread-Topic: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Thread-Index: AQHYhE/FRuoXPvvK9EWAAAtCUPIrtq1bKxqAgAfcEECAAA7bAIACoRkg
Date: Wed, 29 Jun 2022 03:12:01 +0000
Message-ID:
 <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
In-Reply-To: <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: CB6DC4AC1B8EF4408F095CBF975E8DCC.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 2ba2c3d3-a77e-472f-6da4-08da597d275a
x-ms-traffictypediagnostic:
	DB8PR08MB4137:EE_|VE1EUR03FT056:EE_|AM0PR08MB3412:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9nSqAlE6gk8wn9r8tkwX/97HQOrUKn7IjzGCXeiMJ6w5BtzTuZpm1arGyLPboPg2CadMBUgPZvsBIPkacBYd6H4938dh+epZr8ny1PFlW/6IrDFxb6stXsqavARGrTKjxsucNUhrfyDW4vYeC0XYiFhwYFFL5VageVlZwpQtEYIH+bu8RereiTbfrCh+zj0seY1V2nih5b02FITmxKKBIg0BPLau6zjQbj8Fvnf9+jvqcuD9oHGyLPIoWUur26up4rai/cqnAl75qxIfYV7mpgvZN4EEIGEwFVWvinYtuETq0WhsgBhVv+itWfd+XKNSnnkzK308Mgm/Xb4IDBFp+qmJO0Wgt4HwiagxlXatbBI4EJ3X6Vu1wOyqcIc00k01E9uKOxVk1PpZE/pOWBpIXF73FMljY269c9Hvv+LF6JW1FP/PwwNCiSSq7+6Snye1jN6ji0VrdBiZUYX3Vg/rGFnqANNwFGbPbbweXAvoKXvykFVKTyFGkNKH1xZg35zprp2b/46L5Tr2oKjiGnYTR9ygbyDN52fwFhpPkiIpxN2THaSNLFWy55hI9+Wdo1C4mHuRHqoaB8zoWZWGiHXFApUqhnzq47OKh/zHGwo0KQyVBc7lBNXPM7syvRMwhuj+t97CDTL9tv6lQ5jgK/Px/hloD9eHdVNPZ12/sVOHv/WvVjgU/YyY/9NaYEiZ1J8FEsmPrwW795sKCtBHwbrXJ5GptuF5u1WGMdXkJ8y9RFpGlpBKu9Vywusq7a0A2MrVf+qEBHUQ7YwWnBS9vV10+OSPuNO096CE34XnqygtfPA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(396003)(136003)(39860400002)(346002)(376002)(38100700002)(122000001)(186003)(8936002)(33656002)(55016003)(2906002)(5660300002)(316002)(83380400001)(110136005)(54906003)(64756008)(86362001)(66476007)(9686003)(38070700005)(52536014)(66556008)(8676002)(66946007)(4326008)(71200400001)(6506007)(7696005)(76116006)(26005)(41300700001)(53546011)(478600001)(66446008);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4137
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	72eb29fb-caad-422a-0683-08da597d22b8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2bF/1I3lAgZ9fqLaM9NdNWS94MHB3vN621RY7U2jZ0SPMt3v6MMhYSvqtWG2FKyIvn6/pmGlrnLWcAppim4wBPqII05AfsxtZzepqRYat0hAUAUWe0DapnmdARSYuqaEPv68aORWvezUYbYcXg1UuivlavmSYUfw5NTNaFPI+ZrqN57v/Cx63F5BkP02tvm2aYQUy4jpc2WeT/SDSzRD2AFxVFcmDeZwAuwsMKCKvXn3lBSxxywzdAaxBMgITh6h4YqfQ+yQb54OFA/Y6MIwtgbzTTK9moBVX8ES0YeiCrHLmNaxzp8w9B/xoxjvEK4P2ZVPscgCxIngYTXUOxuJJxkSqN2/FQuAD+MKL+eeeW7lkGHfK13L7TXbbSBeWvxcf67Sc2wv/QlsYtIqI5z2vqP0ofjOK4uLTRuPrM3JmLl+6PIAAusqNth8IYI2gjz+DBv6/qubiiL7dNwNH8ou8xNCfUE+cHpQTPta093DuxYtg6+nXTjMN70EZ8+703SZLgy8PByKGQUxpC1CFApnBgj1Pm0P1cLTVYgpM8CluR0nJSJ2IxR74wK48B6acNbSBubVYhHhdt209qaFN4eUW5Ji7nQ4KVapBhvSqRYBqDodHy1J3rpovKRy6N96O6WTPMZfi7sH4O9wSx4KhhyjU9RmJRsP/dTxg8IFVO1bPavtQ+MPN0HXoUOEpluZIOZUp9JxjJown9qrxZQntnG6+8BpCDRIQa9YgdNq/NjFgN9PhUAKZ2GOyOGieacPTpI9t6Ff/IexwsvYOqnxBJbV5sV5Pkn56gj+u4yUNdRRXcY+58QQ9oylQ/1v6PAeC0yN
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(396003)(39860400002)(376002)(346002)(136003)(40470700004)(36840700001)(46966006)(336012)(40460700003)(41300700001)(70206006)(26005)(8676002)(47076005)(6506007)(36860700001)(9686003)(110136005)(54906003)(52536014)(33656002)(40480700001)(478600001)(186003)(81166007)(5660300002)(316002)(55016003)(356005)(2906002)(83380400001)(86362001)(70586007)(8936002)(7696005)(4326008)(53546011)(82740400003)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 03:12:09.0960
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ba2c3d3-a77e-472f-6da4-08da597d275a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3412

Hi Julien and Jan

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, June 27, 2022 6:19 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
> static
> 
> 
> 
> On 27/06/2022 11:03, Penny Zheng wrote:
> > Hi jan
> >
> >> -----Original Message-----
> > put_static_pages, that is, adding pages to the reserved list, is only
> > for freeing static pages on runtime. In static page initialization
> > stage, I also use free_statimem_pages, and in which stage, I think the
> > domain has not been constructed at all. So I prefer the freeing of
> > staticmem pages is split into two parts: free_staticmem_pages and
> > put_static_pages
> 
> AFAIU, all the pages would have to be allocated via
> acquire_domstatic_pages(). This call requires the domain to be allocated and
> setup for handling memory.
> 
> Therefore, I think the split is unnecessary. This would also have the
> advantage to remove one loop. Admittly, this is not important when the
> order 0, but it would become a problem for larger order (you may have to
> pull the struct page_info multiple time in the cache).
> 

How about this:
I create a new func free_domstatic_page, and it will be like:
"
static void free_domstatic_page(struct domain *d, struct page_info *page)
{
    unsigned int i;
    bool need_scrub;

    /* NB. May recursively lock from relinquish_memory(). */
    spin_lock_recursive(&d->page_alloc_lock);

    arch_free_heap_page(d, page);

    /*
     * Normally we expect a domain to clear pages before freeing them,
     * if it cares about the secrecy of their contents. However, after
     * a domain has died we assume responsibility for erasure. We do
     * scrub regardless if option scrub_domheap is set.
     */
    need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;

    free_staticmem_pages(page, 1, need_scrub);

    /* Add page on the resv_page_list *after* it has been freed. */
    put_static_page(d, page);

    drop_dom_ref = !domain_adjust_tot_pages(d, -1);

    spin_unlock_recursive(&d->page_alloc_lock);

    if ( drop_dom_ref )
        put_domain(d);
}
"

In free_domheap_pages, we just call free_domstatic_page:

"
@@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)

     ASSERT_ALLOC_CONTEXT();

+    if ( unlikely(pg->count_info & PGC_static) )
+        return free_domstatic_page(d, pg);
+
     if ( unlikely(is_xen_heap_page(pg)) )
     {
         /* NB. May recursively lock from relinquish_memory(). */
@@ -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
"

Then the split could be avoided and we could save the loop as much as possible.
Any suggestion?

> Cheers,
> 
> --
> Julien Grall
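[Archive note: the ordering proposed in the message above — scrub the page if required, only then park it on the domain's reserved-page list, then adjust the page accounting — can be sketched with self-contained stand-in types. Everything below (the tiny struct domain, struct page_info, the singly linked resv_page_list, the simplified scrub condition) is a hypothetical simplification for illustration, not the real Xen definitions, and the locking is omitted.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in types: the real Xen struct domain / struct page_info are far richer. */
struct page_info {
    unsigned char data[16];
    struct page_info *next;          /* link for the reserved-page list */
};

struct domain {
    bool is_dying;
    unsigned int tot_pages;
    struct page_info *resv_page_list;
};

/*
 * Mirror of the proposed ordering: scrub first (here only when the
 * domain is dying, a stand-in for is_dying || scrub_debug ||
 * opt_scrub_domheap), then add the page to resv_page_list *after* it
 * has been "freed", then drop it from the domain's page accounting.
 */
void free_domstatic_page(struct domain *d, struct page_info *page)
{
    bool need_scrub = d->is_dying;

    if ( need_scrub )
        memset(page->data, 0, sizeof(page->data));

    /* Park on the reserved list only after the scrub above. */
    page->next = d->resv_page_list;
    d->resv_page_list = page;

    d->tot_pages -= 1;               /* stand-in for domain_adjust_tot_pages(d, -1) */
}
```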


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 04:05:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 04:05:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357500.586079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6OxJ-0006qX-4x; Wed, 29 Jun 2022 04:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357500.586079; Wed, 29 Jun 2022 04:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6OxJ-0006qQ-11; Wed, 29 Jun 2022 04:05:33 +0000
Received: by outflank-mailman (input) for mailman id 357500;
 Wed, 29 Jun 2022 04:05:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6OxH-0006qG-KP; Wed, 29 Jun 2022 04:05:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6OxH-0007HQ-9L; Wed, 29 Jun 2022 04:05:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6OxH-0006IN-0g; Wed, 29 Jun 2022 04:05:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6OxH-0005ix-0B; Wed, 29 Jun 2022 04:05:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=76p9gtFcaDWPTZ9RwSg4YsNUYZE9pR3PN7pSZUnLWv8=; b=txh/QrgcSQEXQqoqT6IYbGIQtG
	6u8CPI70ROV6e/8B0s3EyBtIqKHKYQ+y9Br44VEYXXRLkPPgvpXHIj/T0MbgGNvWLxSoxS0+e1Vda
	M7dl2o9Ei5N/txqvVb0GmWaQjygFE66YrtPUlgWXFgMc0QcAZOw6bCZYZfTub2jPhFNU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171391: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93aa071f66b78a2abbf134aeb96b02f066e6091d
X-Osstest-Versions-That:
    xen=8c99264c6746541ddbfd7afec533e6ad1c8c41a5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 04:05:31 +0000

flight 171391 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171391/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93aa071f66b78a2abbf134aeb96b02f066e6091d
baseline version:
 xen                  8c99264c6746541ddbfd7afec533e6ad1c8c41a5

Last test of basis   171383  2022-06-28 16:01:41 Z    0 days
Testing same since   171391  2022-06-29 01:01:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c99264c67..93aa071f66  93aa071f66b78a2abbf134aeb96b02f066e6091d -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 05:39:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 05:39:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357512.586100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6QPo-0000HU-47; Wed, 29 Jun 2022 05:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357512.586100; Wed, 29 Jun 2022 05:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6QPo-0000HN-0z; Wed, 29 Jun 2022 05:39:04 +0000
Received: by outflank-mailman (input) for mailman id 357512;
 Wed, 29 Jun 2022 05:39:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6QPm-0000HF-G1
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 05:39:02 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-eopbgr150080.outbound.protection.outlook.com [40.107.15.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5cead2c-f76d-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 07:38:58 +0200 (CEST)
Received: from AS9PR06CA0536.eurprd06.prod.outlook.com (2603:10a6:20b:49d::35)
 by VE1PR08MB4735.eurprd08.prod.outlook.com (2603:10a6:802:a2::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 05:38:55 +0000
Received: from AM5EUR03FT042.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:49d:cafe::77) by AS9PR06CA0536.outlook.office365.com
 (2603:10a6:20b:49d::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 05:38:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT042.mail.protection.outlook.com (10.152.17.168) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 05:38:54 +0000
Received: ("Tessian outbound d5fa056a5959:v121");
 Wed, 29 Jun 2022 05:38:54 +0000
Received: from b70799b39e99.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7BA0A5CA-3B73-4AFA-B613-8789A81B2ED4.1; 
 Wed, 29 Jun 2022 05:38:43 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b70799b39e99.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 05:38:43 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by DB8PR08MB3962.eurprd08.prod.outlook.com (2603:10a6:10:a9::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Wed, 29 Jun
 2022 05:38:41 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 05:38:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5cead2c-f76d-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=heWQcijjwJywmxOwuLzwk1PU8KJXtclFr1MI5k2DtC0b8SZC4klCTzKumX8JOi+I37MeaTCaZfvTss1HElETiZDCySbBbqTWpeM5C9Mv3usYlKlwKngvg+NpA4k+TgBXmrH/lWft3/s2oz+W8TvcHdZqCjS//uPptj7vVBQPV9ScUwKPSCKW2JQUTE3fDRzAl/LTF/QQuZeXi1FZO5cygLyh3NGglI2FHYOWL2Gkh3wueulaM0acT1bFN2naFXEPGdyW9xYrKvKDpPKaKl2TvALRuqtFjXFrvTN+ELRWluK68uKxu7ZJ2zxFgOaBcaELe5qoUyAfY55HM76kKEfJPQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vCno2hPywUnai1Pqs71FD/iWAA1gQivouNCulcsvxDc=;
 b=FIbKtmSKjG8QQufo7kKdn6AeHUYg6GZNiNze8/XSAvmq64vcaHnUxlqu2VrgWkZ2gb6/u/Ccl4i5RjmN3c+EzjYUXPUL4a8zcR1SjCTOh1llkYu9vUNvXfKbIWwLoT2ir8fYE0hHwP9pQVZGvKDH/vkYWRlXkRwcKZwb2Ay6BmIzEe6b4tNryaUhuzHNOtUnhXdLKVmWrgdYI57bh7a3AdQOrufACJ8B6dqlyu+Uu1TkiLsaSKkayH+bMSoNn6JQ7Ef5cmHrOPRFWz/nO58c8buwz704Q9bBamKvNSJQpgK4fOQSUxTRvpxFDQ/tiY9gOQ2DBHC+xvgHF7hDlHb/MQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vCno2hPywUnai1Pqs71FD/iWAA1gQivouNCulcsvxDc=;
 b=T0d90Qk+XSR31sKoIeMYIFOA0pxjxJ6jF0tJ9ve7rUjIKUhn6CXrC768ASB04DmJAiIa7nVhvQATHH6jq29jJb2WO1C7XaxTInOWc7sCs9pksjrsOEj07IdtuJ1K98xSShqjkCIZ4jKWA30Ef5twYMkVaD0SBS5NWWRGKroimeU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fZ61sAo8kcT9lAOQCIjXCbJS4upvPDJthN3yaUVHNGkKC6yaMx3hIck1A/nNlUNCjbQM48yZI3Z0Reag5v/F4P3QFKlEIm4ABdIVVKNrWQ4got5OodMJtCJ+Ul9fYfZ0p0dIa8a/tgrWGEXjPQLewSdwoCnfWQelMeroAFdGl6C5i3K7e6iF8nCO2WQLtUTnY+ge61jZ5U16ilJkc2LACh93diYGNGC9mi4VqkoGCKucuIjBq5k8JGm4Uzo9J5ZSEYFMnJ7Zo8waWSkOq+x1xYdtm2W/FpAWhNssK3YMlWktj3tj4MEW8v9lMPW8dAOmuFBenqkBTHS2sDy7tT7ASQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vCno2hPywUnai1Pqs71FD/iWAA1gQivouNCulcsvxDc=;
 b=VGyRyUhEHdugaOvb0tzl7uePkkGC2s8jWJuGE+IvQ2lyIztqNJMBAgopb3ReH/zxFWCljlUayk/JAvE4GU3mboY6gWVpq5iV1UbgvM3T1XESpoVSi91ojzXSE5gj/47EW5c6lepWvOXA5ElcGAUbsQmqt+Dezili5e3S1AyLe1A1UStI0WVLSjKe5LH2l99syX2AOu//x6OcqOPJa29S8b/T1tR9gTCDRfsxg1YRWACjIBZLB5ZQIWS5mVVRANkM+wt+KQI2SUk/eSCJ04innX98JD3CCV6AK2Q3TngPjzeraTU/xI+tytK7QR8Old5Q1szwodaIt9jWFCIsS7fp5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vCno2hPywUnai1Pqs71FD/iWAA1gQivouNCulcsvxDc=;
 b=T0d90Qk+XSR31sKoIeMYIFOA0pxjxJ6jF0tJ9ve7rUjIKUhn6CXrC768ASB04DmJAiIa7nVhvQATHH6jq29jJb2WO1C7XaxTInOWc7sCs9pksjrsOEj07IdtuJ1K98xSShqjkCIZ4jKWA30Ef5twYMkVaD0SBS5NWWRGKroimeU=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Topic: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Index: AQHYhGRAIMWAAZoglEyYz3pT8Ubq/a1e3kuAgAboQaA=
Date: Wed, 29 Jun 2022 05:38:41 +0000
Message-ID:
 <DU2PR08MB7325C156D4D6D5A2D18E0FF4F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
 <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
In-Reply-To: <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 62B60BE4B712B841B3569517583A2F95.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 716852b4-6572-40cb-385c-08da5991a7c6
x-ms-traffictypediagnostic:
	DB8PR08MB3962:EE_|AM5EUR03FT042:EE_|VE1PR08MB4735:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 p6zzcrqjlxrsD9PD88iPWbJE36PnIc7LNAa4TA4Gts+XQ1EI2nQkDAc6KzqrshgpHDdGM6nbVVpjIhvWIftkiqy5vG2bXns3qYYL+c2Dbo8KcgIN18NlwyNEKpPztZUfDDZAM5C+/I20YMjDcxgraTJnBEipaN+o0PgsRGXLxD5H39vJH0LAMKB2UZKCvDp/BxWQbkb1ihjUF2nlRGI2iuz/EC9oMdUeLTRYI6xIFgMcDjbeG+4Zt/EMgHUbgntQa06Jb3vq/CN9rc1YOiNBBx5KXalUDzfFwT25se+5piyc5byZ3qhEfDtc4WNwkG4M/ICv11i2xdqT8eYWhEpYLsBcKJKMQOqsIXAS85GqE3+baufcMKqORpGr+NTj1F2OdB38wDbUZYSITixzQFO/D3q4OsExMshjSFTh78vqb2of83JkaabVt2W/Nv3vBn8/MpxXcnKeh7lXnMf0A4qrlFo0nFUxmemzcBYjmeTv+duopaoPQzs/WAkglyJQdUUh04gdgotUlQudRaBP1sGUFX93EC5BVLEbCQbH3u2M73ik4ft3EJp9Ffjl/NPFSUIjMLBPPJ8GRNCYIYhUDyTfuvqAA7UpuZNMLjPood6+0vyhh/qtI0SiNa4PmlGY+gN6XNMx4j/VceIK8m2R8iWMTj0GxE+DP0ax5opwg//vGfee4AvzNGGHP6OumyvvljRHQxI9Ya9Xv3E0cYcDcZ+B7E8c2SXsSYpxXvqH2I3MYk48SiE4W3Umv04IdxpBwaBiZBEKyjSYiRKi0NA45vAMM8rJQcMtLImYgq2PaN2N8kE=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(366004)(136003)(346002)(376002)(39860400002)(396003)(5660300002)(38100700002)(86362001)(26005)(122000001)(55016003)(38070700005)(9686003)(71200400001)(8936002)(76116006)(33656002)(8676002)(66946007)(66556008)(66476007)(478600001)(83380400001)(41300700001)(2906002)(110136005)(186003)(52536014)(316002)(66446008)(64756008)(4326008)(6506007)(53546011)(7696005)(54906003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB3962
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7fcd76a6-64aa-4846-be4d-08da59919fcd
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	a6TH2XzflW3Oz8W/6y0wlphVP4sMN0vpeIGyhuUQQfe/pRu/8gIXZAdFG2xSj9KqPTkdoTJwhYz8cl+MVCnytIclQpDctx6FY1ZhBFruisPU3NkUT1+Mbs4KpT9dmcTDQYtJIOcn9iWQfy93qvwLYAjHy9VAXPl2c1OQXFh/qI0CFa2n4cx9cfiixB6EbuyZmeLsUtvPrysp/pRL9ne2aHLe616Cj51n9LvPBrmFhzzcodXhaW/eTjTL4rTEW13+WO2g0AR5J7v3VZS0T7lF20mu2k1URt6AxkMCweL3AwCRKCWheukEXVKLKb/2guyEBedrosDCxL93nUfPMAaTs93zZR6ysbsakQT5ICXx1OzNhCjQTPIoZ02IWEeKUhg7jAdOvRFGcZgMLuEJ69TwfxPZRj6XgkgG/9q8mEUb2MsYx2qGcM9Ie4aAdHMOX+atbriLDbbtq/R/MPrmmw4YEzHcM62VWf32khNeC/WpewMHRFGhZrkeCq1jpj36OCjf/TlAlN8sRktgFvhgI1j7Js+hdpZpc7Sun5XXIm7ieSJfxVNgBgmA8o22dVyxoT43SzSBkx0iITG1eOmmZjdy6ee80FZbqcCpqDkAKQzT8Ubr6h8s13wJZHN5TD0Rvz5jaWossHyuw6xzHnsvdEYS9w50WFdvJSdTHlbCKhXKnGZbBk0UfpvbxyjIK59LpjDZMUcCbmxjLHV1QezkBX3EarsT5U665spMBO7/jgdF8BhII1OWV+QLmWaHfNto75DHQYoPOu9MlglZG4OnLR/2KEbo29qIfRlvBOs5FNDUd45bPDfQDuj0bbHs5MGpGaoV
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(396003)(136003)(376002)(39860400002)(346002)(40470700004)(46966006)(36840700001)(5660300002)(8936002)(40460700003)(86362001)(33656002)(478600001)(52536014)(7696005)(6506007)(26005)(2906002)(9686003)(53546011)(41300700001)(36860700001)(356005)(55016003)(82740400003)(81166007)(82310400005)(47076005)(336012)(107886003)(186003)(40480700001)(83380400001)(70586007)(70206006)(4326008)(54906003)(8676002)(316002)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 05:38:54.5637
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 716852b4-6572-40cb-385c-08da5991a7c6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4735

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 1:55 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
> 
> Hi Penny,
> 
> On 20/06/2022 06:11, Penny Zheng wrote:
> > From: Penny Zheng <penny.zheng@arm.com>
> >
> > This patch serie introduces a new feature: setting up static
> 
> Typo: s/serie/series/
> 
> > shared memory on a dom0less system, through device tree configuration.
> >
> > This commit parses shared memory node at boot-time, and reserve it in
> > bootinfo.reserved_mem to avoid other use.
> >
> > This commits proposes a new Kconfig CONFIG_STATIC_SHM to wrap
> > static-shm-related codes, and this option depends on static memory(
> > CONFIG_STATIC_MEMORY). That's because that later we want to reuse a
> > few helpers, guarded with CONFIG_STATIC_MEMORY, like
> > acquire_staticmem_pages, etc, on static shared memory.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > v5 change:
> > - no change
> > ---
> > v4 change:
> > - nit fix on doc
> > ---
> > v3 change:
> > - make nr_shm_domain unsigned int
> > ---
> > v2 change:
> > - document refinement
> > - remove bitmap and use the iteration to check
> > - add a new field nr_shm_domain to keep the number of shared domain
> > ---
> >   docs/misc/arm/device-tree/booting.txt | 120 ++++++++++++++++++++++++++
> >   xen/arch/arm/Kconfig                  |   6 ++
> >   xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
> >   xen/arch/arm/include/asm/setup.h      |   3 +
> >   4 files changed, 197 insertions(+)
> >
> > diff --git a/docs/misc/arm/device-tree/booting.txt
> > b/docs/misc/arm/device-tree/booting.txt
> > index 98253414b8..6467bc5a28 100644
> > --- a/docs/misc/arm/device-tree/booting.txt
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -378,3 +378,123 @@ device-tree:
> >
> >   This will reserve a 512MB region starting at the host physical address
> >   0x30000000 to be exclusively used by DomU1.
> > +
> > +Static Shared Memory
> > +====================
> > +
> > +The static shared memory device tree nodes allow users to statically
> > +set up shared memory on dom0less system, enabling domains to do
> > +shm-based communication.
> > +
> > +- compatible
> > +
> > +    "xen,domain-shared-memory-v1"
> > +
> > +- xen,shm-id
> > +
> > +    An 8-bit integer that represents the unique identifier of the shared memory
> > +    region. The maximum identifier shall be "xen,shm-id = <0xff>".
> > +
> > +- xen,shared-mem
> > +
> > +    An array takes a physical address, which is the base address of the
> > +    shared memory region in host physical address space, a size, and a guest
> > +    physical address, as the target address of the mapping. The number of cells
> > +    for the host address (and size) is the same as the guest pseudo-physical
> > +    address and they are inherited from the parent node.
> 
> Sorry for jumping into the discussion late. But as this is going to be a
> stable ABI, I would like to make sure the interface is going to be easily
> extendable.
> 
> AFAIU, with your proposal the host physical address is mandatory. I would
> expect that some user may want to share memory but not care about the
> exact location in memory. So I think it would be good to make it optional
> in the binding.
> 
> I think this wants to be done now because it would be difficult to change
> the binding afterwards (the host physical address is the first set of
> cells).
> 
> Xen doesn't need to handle the optional case.
> 

Sure, I'll make "the host physical address" optional here in the binding,
for now with no actual code implementation; I'll add the implementation
later when I have time.

The use case you mentioned is that we let Xen allocate an arbitrary static
shared memory region, so size and guest physical address are still
mandatory, right?

> [...]
> 
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index
> > be9eff0141..7321f47c0f 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -139,6 +139,12 @@ config TEE
> >
> >   source "arch/arm/tee/Kconfig"
> >
> > +config STATIC_SHM
> > +	bool "Statically shared memory on a dom0less system" if UNSUPPORTED
> 
> You also want to update SUPPORT.md.
>

Right.

> > +	depends on STATIC_MEMORY
> > +	help
> > +	  This option enables statically shared memory on a dom0less system.
> > +
> >   endmenu
> >
> >   menu "ARM errata workaround via the alternative framework"
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index
> > ec81a45de9..38dcb05d5d 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -361,6 +361,70 @@ static int __init process_domain_node(const void *fdt, int node,
> >                                      size_cells, &bootinfo.reserved_mem, true);
> >   }
> >
> > +#ifdef CONFIG_STATIC_SHM
> > +static int __init process_shm_node(const void *fdt, int node,
> > +                                   u32 address_cells, u32 size_cells)
> > +{
> > +    const struct fdt_property *prop;
> > +    const __be32 *cell;
> > +    paddr_t paddr, size;
> > +    struct meminfo *mem = &bootinfo.reserved_mem;
> > +    unsigned long i;
> 
> nr_banks is "unsigned int" so I think this should be "unsigned int" as well.
> 

Right.

> > +
> > +    if ( address_cells < 1 || size_cells < 1 )
> > +    {
> > +        printk("fdt: invalid #address-cells or #size-cells for static shared memory node.\n");
> > +        return -EINVAL;
> > +    }
> > +
> > +    prop = fdt_get_property(fdt, node, "xen,shared-mem", NULL);
> > +    if ( !prop )
> > +        return -ENOENT;
> > +
> > +    /*
> > +     * xen,shared-mem = <paddr, size, gaddr>;
> > +     * Memory region starting from physical address #paddr of #size shall
> > +     * be mapped to guest physical address #gaddr as static shared memory
> > +     * region.
> > +     */
> > +    cell = (const __be32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells, &paddr, &size);
> 
> Please check the length of the property to confirm it is big enough to
> contain "paddr", "size", and "gaddr".
> 

Sure, will do.

> > +    for ( i = 0; i < mem->nr_banks; i++ )
> > +    {
> > +        /*
> > +         * A static shared memory region could be shared between multiple
> > +         * domains.
> > +         */
> > +        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
> > +            break;
> > +    }
> > +
> > +    if ( i == mem->nr_banks )
> > +    {
> > +        if ( i < NR_MEM_BANKS )
> > +        {
> > +            /* Static shared memory shall be reserved from any other use. */
> > +            mem->bank[mem->nr_banks].start = paddr;
> > +            mem->bank[mem->nr_banks].size = size;
> > +            mem->bank[mem->nr_banks].xen_domain = true;
> > +            mem->nr_banks++;
> > +        }
> > +        else
> > +        {
> > +            printk("Warning: Max number of supported memory regions reached.\n");
> > +            return -ENOSPC;
> > +        }
> > +    }
> > +    /*
> > +     * keep a count of the number of domains, which later may be used to
> > +     * calculate the number of the reference count.
> > +     */
> > +    mem->bank[i].nr_shm_domain++;
> > +
> > +    return 0;
> > +}
> > +#endif
> > +
> >   static int __init early_scan_node(const void *fdt,
> >                                     int node, const char *name, int depth,
> >                                     u32 address_cells, u32 size_cells,
> > @@ -386,6 +450,10 @@ static int __init early_scan_node(const void *fdt,
> >           process_chosen_node(fdt, node, name, address_cells, size_cells);
> >       else if ( depth == 2 && device_tree_node_compatible(fdt, node, "xen,domain") )
> >           rc = process_domain_node(fdt, node, name, address_cells, size_cells);
> > +#ifdef CONFIG_STATIC_SHM
> > +    else if ( depth <= 3 && device_tree_node_compatible(fdt, node, "xen,domain-shared-memory-v1") )
> > +        rc = process_shm_node(fdt, node, address_cells, size_cells);
> > +#endif
> >
> >       if ( rc < 0 )
> >           printk("fdt: node `%s': parsing failed\n", name);
> > diff --git a/xen/arch/arm/include/asm/setup.h
> > b/xen/arch/arm/include/asm/setup.h
> > index 2bb01ecfa8..5063e5d077 100644
> > --- a/xen/arch/arm/include/asm/setup.h
> > +++ b/xen/arch/arm/include/asm/setup.h
> > @@ -27,6 +27,9 @@ struct membank {
> >       paddr_t start;
> >       paddr_t size;
> >       bool xen_domain; /* whether the memory bank is bound to a Xen domain. */
> > +#ifdef CONFIG_STATIC_SHM
> > +    unsigned int nr_shm_domain;
> > +#endif
> >   };
> >
> >   struct meminfo {
> 
> Cheers,
> 
> --
> Julien Grall
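
For reference, the xen,shared-mem layout discussed in this review can be
pictured with a small device tree sketch. This is purely illustrative: the
node names, shm-id, addresses, and sizes below are invented, not taken from
the series, and it assumes #address-cells = #size-cells = <1> inherited
from the parent node.

    /* Hypothetical dom0less DomU sharing a 256MB region: host physical
     * address 0x50000000, size 0x10000000, mapped at guest physical
     * address 0x60000000, following the <paddr, size, gaddr> cell order
     * described in the patch comment. */
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;

        domU1-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            xen,shm-id = <0x1>;
            xen,shared-mem = <0x50000000 0x10000000 0x60000000>;
        };
    };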


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 05:54:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 05:54:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357520.586114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6QeS-0002kk-K7; Wed, 29 Jun 2022 05:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357520.586114; Wed, 29 Jun 2022 05:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6QeS-0002kY-Gq; Wed, 29 Jun 2022 05:54:12 +0000
Received: by outflank-mailman (input) for mailman id 357520;
 Wed, 29 Jun 2022 05:54:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6QeR-0002kL-8o; Wed, 29 Jun 2022 05:54:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6QeR-0001FL-6w; Wed, 29 Jun 2022 05:54:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6QeQ-0003B6-VS; Wed, 29 Jun 2022 05:54:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6QeQ-0003BI-Ux; Wed, 29 Jun 2022 05:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HOHMWwXzyzUnNeDeyvWcQByTHObdP0CgP8sLJhVp9BU=; b=Un9wnbneeLPW/dn1Nk7LfESZVl
	0xssDGuM8s3uQTrkzEeqbBari8+LqaKel85jDWkxIfXvebx9U7IaT0bddO1BjZySbpDjNj2dCcQco
	p8fw17QxvQhKNCi90vYORJrnJvFhysYyfjfWT1cc+HunMJMNuDT7FQlohU1C6Wvarn+o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171395-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171395: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c13377153f74d66adc83702b4e4ca5e9eadde2fd
X-Osstest-Versions-That:
    ovmf=59141288716f8917968d4bb96367b7d08fe5ab8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 05:54:10 +0000

flight 171395 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171395/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c13377153f74d66adc83702b4e4ca5e9eadde2fd
baseline version:
 ovmf                 59141288716f8917968d4bb96367b7d08fe5ab8a

Last test of basis   171381  2022-06-28 09:41:51 Z    0 days
Testing same since   171395  2022-06-29 03:10:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Miki Shindo <miki.shindo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5914128871..c13377153f  c13377153f74d66adc83702b4e4ca5e9eadde2fd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 05:55:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 05:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357527.586125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Qfu-0003KN-V1; Wed, 29 Jun 2022 05:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357527.586125; Wed, 29 Jun 2022 05:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Qfu-0003KG-Rp; Wed, 29 Jun 2022 05:55:42 +0000
Received: by outflank-mailman (input) for mailman id 357527;
 Wed, 29 Jun 2022 05:55:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6Qft-0003KA-Fn
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 05:55:41 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130071.outbound.protection.outlook.com [40.107.13.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ac8e8ba-f770-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 07:55:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM5PR04MB2963.eurprd04.prod.outlook.com (2603:10a6:206:9::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 05:55:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 05:55:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ac8e8ba-f770-11ec-b725-ed86ccbb4733
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I5OqECYlIjQ0cn/w0Ixy/KkYA5bHdbNKu75wl+UOmoRPMssH1JveW91Xp2rkMzl683p9mKkGyUV8XbfmhAFky50MqQ7sWjYURxRj0l5rW1Eb/q4TAX/5OT6wad1GERkS9J6UdWqn1TIz86AgA9vCQn5Dda0URUhVTkOmHylNMdUshmtZb+HIljaygL9lNm/+N8dZn5/uOv8R9xz0IOGSTzUihQwhs+7bjItFca7Q+ozW+w7B0sDxPD7x0UoMsNxkPkig8ClLDJ1JkYfWd2xa/RKc+4ghpFsSTgrqrWbQRVcQXFAGoZT6degG1GbzT8fAuWr9GvEdnkh+d0rFRdAEDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z25N6SSFNyje+IEz5RjNnP9mb6RXVGvFh2/tUE7zZWk=;
 b=LStCsGjT7uYBzi1Rkj+FF9lKOIKRakgv391yfSwDFhzxa1JYGqIW3mvLiYeeBIq8hPGgZqGkpOIUCb3hRHfhCE/WtSiBUTzEem0YxmGWcXDqk1fZ7EuUCvTs1oFPFzYyN8+7IjMk8KNrUHzvtH4Lx+Xl7RXAdTK5wxCqDC8PCvUnJ+HPxioCPfahqo7+5zC55ih/vZS+hG59y+xqglv2YcvZq42vyqe/aNS5KOufcZPxqZywc0ivtN6GvMIOc2u5KMmRYZObHI+IEbYk2/ZhthD/zjvsrdThAf4gzZB4xov7YlN19RCe71be6fBUcmmNS5/aRzOYvxmqU7nG76jO2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z25N6SSFNyje+IEz5RjNnP9mb6RXVGvFh2/tUE7zZWk=;
 b=bDGBQ96+KQ7ud9TPnblbFiXhql9dewp9EBzQfVHPcxi64VBH6f9R7FTfdTlKjr7kzYJYJCzsoFCphrhfwS2HQfqrj14JEHdXjXIZUwW7y+q3I5p1y5QEeTVqv4qRTTPzcuWKDeeh9JakV5+vUpEq9RH7ZLf6pMJ4AWOslUq+tiEYEKZdhPCvAfred6udz4WAR17hL5ojvAAnB2NMpeUFlwR4S3ds1d6UMG9pC4JSbMc02K7lWFp13WdMCEh5LNd1Bn0jX4zYc4un4tSmXweNRT2RgSELsdjOrYUbBTBlUhPlaHWSUb/aNyFKjKNPalSAGZqburIQ/WRFB3yzBwEnrg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6cfca3ce-219e-f9e4-e30c-40d7a74ea523@suse.com>
Date: Wed, 29 Jun 2022 07:55:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
 <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0003.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 29.06.2022 05:12, Penny Zheng wrote:
> Hi Julien and Jan
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Monday, June 27, 2022 6:19 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
>> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
>> devel@lists.xenproject.org
>> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
>> static
>>
>>
>>
>> On 27/06/2022 11:03, Penny Zheng wrote:
>>> Hi Jan
>>>
>>>> -----Original Message-----
>>> put_static_pages, that is, adding pages to the reserved list, is only
>>> needed when freeing static pages at runtime. In the static page
>>> initialization stage I also use free_staticmem_pages, and at that
>>> stage the domain has not been constructed at all. So I prefer to keep
>>> the freeing of staticmem pages split into two parts:
>>> free_staticmem_pages and put_static_pages
>>
>> AFAIU, all the pages would have to be allocated via
>> acquire_domstatic_pages(). This call requires the domain to be allocated and
>> setup for handling memory.
>>
>> Therefore, I think the split is unnecessary. This would also have the
>> advantage of removing one loop. Admittedly, this is not important when
>> the order is 0, but it would become a problem for larger orders (you
>> may have to pull the struct page_info into the cache multiple times).
>>
> 
> How about this:
> I create a new func free_domstatic_page, and it will be like:
> "
> static void free_domstatic_page(struct domain *d, struct page_info *page)
> {
>     bool need_scrub, drop_dom_ref;
> 
>     /* NB. May recursively lock from relinquish_memory(). */
>     spin_lock_recursive(&d->page_alloc_lock);
> 
>     arch_free_heap_page(d, page);
> 
>     /*
>      * Normally we expect a domain to clear pages before freeing them,
>      * if it cares about the secrecy of their contents. However, after
>      * a domain has died we assume responsibility for erasure. We do
>      * scrub regardless if option scrub_domheap is set.
>      */
>     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
> 
>     free_staticmem_pages(page, 1, need_scrub);
> 
>     /* Add page on the resv_page_list *after* it has been freed. */
>     put_static_page(d, page);
> 
>     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
> 
>     spin_unlock_recursive(&d->page_alloc_lock);
> 
>     if ( drop_dom_ref )
>         put_domain(d);
> }
> "
> 
> In free_domheap_pages, we just call free_domstatic_page:
> 
> "
> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
> 
>      ASSERT_ALLOC_CONTEXT();
> 
> +    if ( unlikely(pg->count_info & PGC_static) )
> +        return free_domstatic_page(d, pg);
> +
>      if ( unlikely(is_xen_heap_page(pg)) )
>      {
>          /* NB. May recursively lock from relinquish_memory(). */
> @@ -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> "
> 
> Then the split could be avoided and we could save the loop as much as possible.
> Any suggestions?

Looks reasonable at first glance (will need to see it in proper
context for a final opinion), provided e.g. Xen heap pages can never
be static.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:04:40 2022
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	qemu-stable@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3] xen/pass-through: don't create needless register group
Date: Wed, 29 Jun 2022 02:04:05 -0400
Message-Id: <c76dff6369ccf2256bd9eed5141da1db767293d2.1656480662.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <c76dff6369ccf2256bd9eed5141da1db767293d2.1656480662.git.brchuckz.ref@aol.com>
Content-Length: 2636

Currently we are creating a register group for the Intel IGD OpRegion
for every device we pass through, but the XEN_PCI_INTEL_OPREGION
register group is only valid for an Intel IGD. Add a check to make
sure the device is an Intel IGD and a check that the administrator has
enabled gfx_passthru in the xl domain configuration. Require both checks
to be true before creating the register group. Use the existing
is_igd_vga_passthrough() function to check for a graphics device from
any vendor and that the administrator enabled gfx_passthru in the xl
domain configuration, but further require that the vendor be Intel,
because only Intel IGD devices have an Intel OpRegion. These are the
same checks hvmloader and libxl do to determine if the Intel OpRegion
needs to be mapped into the guest's memory. Also, move the comment
about trapping 0xfc for the Intel OpRegion to where it belongs after
applying this patch.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: * Move the comment to an appropriate place after applying this patch
    * Mention that the comment is moved in the commit message

v2 addresses the comment by Anthony Perard on the original
version of this patch.

v3: * Add Reviewed-By: Anthony Perard <anthony.perard@citrix.com>
    * Add qemu-stable@nongnu.org to recipients to indicate the patch
      may be suitable for backport to Qemu stable

Thank you, Anthony, for taking the time to review this patch.

 hw/xen/xen_pt_config_init.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index c5c4e943a8..cad4aeba84 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -2031,12 +2031,16 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
             }
         }
 
-        /*
-         * By default we will trap up to 0x40 in the cfg space.
-         * If an intel device is pass through we need to trap 0xfc,
-         * therefore the size should be 0xff.
-         */
         if (xen_pt_emu_reg_grps[i].grp_id == XEN_PCI_INTEL_OPREGION) {
+            if (!is_igd_vga_passthrough(&s->real_device) ||
+                s->real_device.vendor_id != PCI_VENDOR_ID_INTEL) {
+                continue;
+            }
+            /*
+             * By default we will trap up to 0x40 in the cfg space.
+             * If an intel device is pass through we need to trap 0xfc,
+             * therefore the size should be 0xff.
+             */
             reg_grp_offset = XEN_PCI_INTEL_OPREGION;
         }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:05:41 2022
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	qemu-stable@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3] xen/pass-through: merge emulated bits correctly
Date: Wed, 29 Jun 2022 02:05:23 -0400
Message-Id: <5cd07587898cac43bf4b7a52489c380a44cab652.1656480662.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <5cd07587898cac43bf4b7a52489c380a44cab652.1656480662.git.brchuckz.ref@aol.com>
Content-Length: 2855

In xen_pt_config_reg_init(), there is an error in the merging of the
emulated data with the host value. With the current Qemu, instead of
merging the emulated bits with the host bits as defined by emu_mask,
the emulated bits are merged with the host bits as defined by the
inverse of emu_mask. In some cases, depending on the data in the
registers on the host, the way the registers are set up, and the
initial values of the emulated bits, the end result will be that
the register is initialized with the wrong value.

To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
the merge is done correctly.

This correction is needed to resolve Qemu project issue #1061, which
describes the failure of Xen HVM Linux guests to boot in certain
configurations with passed-through PCI devices. In those configurations
this error disables, instead of enabling, the PCI_STATUS_CAP_LIST bit of
the PCI_STATUS register of a passed-through PCI device, which in turn
disables the MSI-X capability of the device in Linux guests, with the
end result that the Linux guest never completes the boot process.

Fixes: 2e87512eccf3
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Edit the commit message to more accurately describe the cause
of the error.

v3: * Add Reviewed-By: Anthony Perard <anthony.perard@citrix.com>
    * Add qemu-stable@nongnu.org to recipients to indicate the patch
      may be suitable for backport to Qemu stable

Thank you, Anthony, for taking the time to review this patch.

 hw/xen/xen_pt_config_init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index cad4aeba84..21839a3c98 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1966,10 +1966,10 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
         if ((data & host_mask) != (val & host_mask)) {
             uint32_t new_val;
 
-            /* Mask out host (including past size). */
-            new_val = val & host_mask;
-            /* Merge emulated ones (excluding the non-emulated ones). */
-            new_val |= data & host_mask;
+            /* Merge the emulated bits (data) with the host bits (val)
+             * and mask out the bits past size to enable restoration
+             * of the proper value for logging below. */
+            new_val = XEN_PT_MERGE_VALUE(val, data, host_mask) & size_mask;
             /* Leave intact host and emulated values past the size - even though
              * we do not care as we write per reg->size granularity, but for the
              * logging below lets have the proper value. */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:09:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 06:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357547.586157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Qsw-0006QT-Mm; Wed, 29 Jun 2022 06:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357547.586157; Wed, 29 Jun 2022 06:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Qsw-0006QM-K2; Wed, 29 Jun 2022 06:09:10 +0000
Received: by outflank-mailman (input) for mailman id 357547;
 Wed, 29 Jun 2022 06:09:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6Qsv-0006QD-4f
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 06:09:09 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fba104b4-f771-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 08:09:07 +0200 (CEST)
Received: from AM6P195CA0015.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:81::28)
 by DB7PR08MB3721.eurprd08.prod.outlook.com (2603:10a6:10:7f::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Wed, 29 Jun
 2022 06:09:04 +0000
Received: from AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:81:cafe::5d) by AM6P195CA0015.outlook.office365.com
 (2603:10a6:209:81::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 06:09:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT038.mail.protection.outlook.com (10.152.17.118) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 06:09:04 +0000
Received: ("Tessian outbound 9336577968ca:v121");
 Wed, 29 Jun 2022 06:09:03 +0000
Received: from adec7aa232ab.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DC212FA6-EE96-4363-9A75-343CFBE0FD4E.1; 
 Wed, 29 Jun 2022 06:08:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id adec7aa232ab.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 06:08:54 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by DB9PR08MB6698.eurprd08.prod.outlook.com (2603:10a6:10:2a2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 06:08:52 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 06:08:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fba104b4-f771-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=gQ1VemotP6HLPmNrrXRzYbcxg/9Iynt+8ErCxUxdGHjJ1nAKt9SEeSzwy25pL9xH3jOP5KTodef5XRAWxjt4xhorr9OynXzUrFqOOAjWPYviNZlH4hDGVoxB3nrNQcom8LNQO7fdIQJM+WM8JupY2za7wJgkDzDz9+DU5+W68Y+SKwFGK4i1q5PNRF1PKWQPjHZy+s10rMirPfCEMNsW4bZ0JrOTXDfb9uBsqq5Owlokpt78SGI3My5Pa5zwMeDyZybN1Zgp93WHcbR0QKJztWd/bYQMcN1C9HqPLmErpRgMDAhzeERJE0abnXXJTjeRoMEOB3HYyQvLGPm+ai47jA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FAELKJo0AWpheEOWeZU4t88zp0j0ouqU4liKC/e+OrU=;
 b=ipnwt0olG2UuY0MiRUx5OW1OXBC7BPpdObaMft1o/4DUjPLFDc6E6PJwt/a0/GHIrKBx8fd7ZqGmPvyM4bsLAbd9RoXBeD+aDadbzJGT/YvFfH2LfK/86YqHHx9p8rUljvG5xfdQxP47Si2waf55tAjceGsiyvfXi62ZFyMb8Xk2scv9Ymw6ppKKj8FxweZd4Otg1jlniKfWGnry4rH/Qa2mU5PdPaesjF0bGLb1BqNpjB6LnI5mWKaPbQDhhyRIx2Fg2RCLqe16KJVZjHBLNsYAneWKVSSSxLQTBlR50VHFpti41BQ9dC2LZqF8SpLwyBb15/Wygyx0WG7jowJLqw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <julien@xen.org>
Subject: RE: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Thread-Topic: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Thread-Index:
 AQHYhE/FRuoXPvvK9EWAAAtCUPIrtq1bKxqAgAfcEECAAA7bAIACoRkggAA58ICAAACkIA==
Date: Wed, 29 Jun 2022 06:08:51 +0000
Message-ID:
 <DU2PR08MB7325B9C7AC3441780E7AEB78F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
 <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <6cfca3ce-219e-f9e4-e30c-40d7a74ea523@suse.com>
In-Reply-To: <6cfca3ce-219e-f9e4-e30c-40d7a74ea523@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 170E402F55E47347B72F05CCC6033C8E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 724968e7-9882-49ef-6760-08da5995de5e
x-ms-traffictypediagnostic:
	DB9PR08MB6698:EE_|AM5EUR03FT038:EE_|DB7PR08MB3721:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6698
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7d03ea7c-5ea2-4790-cc3c-08da5995d6f5
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 06:09:04.1137
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 724968e7-9882-49ef-6760-08da5995de5e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3721

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, June 29, 2022 1:56 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> devel@lists.xenproject.org; Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is
> static
>
> On 29.06.2022 05:12, Penny Zheng wrote:
> > Hi Julien and Jan
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Monday, June 27, 2022 6:19 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich
> >> <jbeulich@suse.com>
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-
> >> devel@lists.xenproject.org
> >> Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain
> >> is static
> >>
> >> On 27/06/2022 11:03, Penny Zheng wrote:
> >>> Hi Jan
> >>>
> >>>> -----Original Message-----
> >>> put_static_pages, that is, adding pages to the reserved list, is
> >>> only for freeing static pages at runtime. In the static page
> >>> initialization stage, I also use free_staticmem_pages, and at that
> >>> stage I think the domain has not been constructed at all. So I
> >>> prefer that the freeing of staticmem pages be split into two parts:
> >>> free_staticmem_pages and put_static_pages
> >>
> >> AFAIU, all the pages would have to be allocated via
> >> acquire_domstatic_pages(). This call requires the domain to be
> >> allocated and setup for handling memory.
> >>
> >> Therefore, I think the split is unnecessary. This would also have the
> >> advantage of removing one loop. Admittedly, this is not important
> >> when the order is 0, but it would become a problem for larger orders
> >> (you may have to pull the struct page_info into the cache multiple
> >> times).
> >>
> >
> > How about this:
> > I create a new func free_domstatic_page, and it will be like:
> > "
> > static void free_domstatic_page(struct domain *d, struct page_info
> > *page)
> > {
> >     bool drop_dom_ref;
> >     bool need_scrub;
> >
> >     /* NB. May recursively lock from relinquish_memory(). */
> >     spin_lock_recursive(&d->page_alloc_lock);
> >
> >     arch_free_heap_page(d, page);
> >
> >     /*
> >      * Normally we expect a domain to clear pages before freeing them,
> >      * if it cares about the secrecy of their contents. However, after
> >      * a domain has died we assume responsibility for erasure. We do
> >      * scrub regardless if option scrub_domheap is set.
> >      */
> >     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
> >
> >     free_staticmem_pages(page, 1, need_scrub);
> >
> >     /* Add page on the resv_page_list *after* it has been freed. */
> >     put_static_page(d, page);
> >
> >     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
> >
> >     spin_unlock_recursive(&d->page_alloc_lock);
> >
> >     if ( drop_dom_ref )
> >         put_domain(d);
> > }
> > "
> >
> > In free_domheap_pages, we just call free_domstatic_page:
> >
> > "
> > @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
> > unsigned int order)
> >
> >      ASSERT_ALLOC_CONTEXT();
> >
> > +    if ( unlikely(pg->count_info & PGC_static) )
> > +        return free_domstatic_page(d, pg);
> > +
> >      if ( unlikely(is_xen_heap_page(pg)) )
> >      {
> >          /* NB. May recursively lock from relinquish_memory(). */ @@
> > -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
> > unsigned long nr_mfns, "
> >
> > Then the split could be avoided and we could save the loop as much as
> > possible.
> > Any suggestion?
>
> Looks reasonable at the first glance (will need to see it in proper
> context for a final opinion), provided e.g. Xen heap pages can never be
> static.

If you would prefer that free_domheap_pages not call free_domstatic_page, then
maybe the check should happen in put_page:
"
@@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)

     if ( unlikely((nx & PGC_count_mask) == 0) )
     {
+        if ( unlikely(page->count_info & PGC_static) )
+            return free_domstatic_page(page);
         free_domheap_page(page);
     }
 }
"
Wdyt now?

>
> Jan
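The free_domstatic_page() sketch in this message takes d->page_alloc_lock via spin_lock_recursive(), noting it "may recursively lock from relinquish_memory()". For readers unfamiliar with that primitive, here is a minimal single-threaded model of what a recursive lock buys: the current holder may re-enter without deadlocking, and only the outermost unlock really releases it. All names are illustrative stand-ins; Xen's real implementation spins on a per-CPU owner field.

```c
#include <assert.h>

/* Toy model of Xen's spin_lock_recursive()/spin_unlock_recursive():
 * the lock remembers which CPU holds it and how deeply, so the holder
 * may re-enter (e.g. a free path reached from relinquish_memory())
 * without deadlocking. */
struct rspinlock_model {
    int owner;          /* -1 when unheld, else the holding CPU id */
    unsigned int count; /* recursion depth */
};

static void spin_lock_recursive_model(struct rspinlock_model *l, int cpu)
{
    if ( l->owner == cpu )
    {
        /* Already ours: just deepen the recursion. */
        l->count++;
        return;
    }
    /* A real implementation would spin here until owner becomes -1. */
    l->owner = cpu;
    l->count = 1;
}

static void spin_unlock_recursive_model(struct rspinlock_model *l)
{
    /* Only the outermost unlock actually releases the lock. */
    if ( --l->count == 0 )
        l->owner = -1;
}
```

A plain (non-recursive) lock would deadlock on the nested acquire, which is why the free path under discussion must use the recursive variant.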


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:19:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 06:19:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357553.586169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6R2f-00080Y-LJ; Wed, 29 Jun 2022 06:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357553.586169; Wed, 29 Jun 2022 06:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6R2f-00080R-I8; Wed, 29 Jun 2022 06:19:13 +0000
Received: by outflank-mailman (input) for mailman id 357553;
 Wed, 29 Jun 2022 06:19:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6R2e-00080L-0I
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 06:19:12 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80040.outbound.protection.outlook.com [40.107.8.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 63809c83-f773-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 08:19:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6772.eurprd04.prod.outlook.com (2603:10a6:208:188::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 06:19:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 06:19:08 +0000
X-Inumbo-ID: 63809c83-f773-11ec-b725-ed86ccbb4733
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7d7aa075-faa0-732f-44ad-3984dcb86e08@suse.com>
Date: Wed, 29 Jun 2022 08:19:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
 <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <6cfca3ce-219e-f9e4-e30c-40d7a74ea523@suse.com>
 <DU2PR08MB7325B9C7AC3441780E7AEB78F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU2PR08MB7325B9C7AC3441780E7AEB78F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0105.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 58466941-5911-4e82-bcf9-08da59974641
X-MS-TrafficTypeDiagnostic: AM0PR04MB6772:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 58466941-5911-4e82-bcf9-08da59974641
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 06:19:08.1399
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8VgGebjfp5JHKMQllF9aic7GQnq6ujU8UySYcbKD9BhtzqXmuBQovtr9zBiQ8R4ozmns4o8DEPjfYAvMRNqy9g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6772

On 29.06.2022 08:08, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Wednesday, June 29, 2022 1:56 PM
>>
>> On 29.06.2022 05:12, Penny Zheng wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Monday, June 27, 2022 6:19 PM
>>>>
>>>> On 27/06/2022 11:03, Penny Zheng wrote:
>>>>>> -----Original Message-----
>>>>> put_static_pages, that is, adding pages to the reserved list, is
>>>>> only for freeing static pages at runtime. In the static page
>>>>> initialization stage, I also use free_staticmem_pages, and at that
>>>>> stage I think the domain has not been constructed at all. So I
>>>>> prefer that the freeing of staticmem pages be split into two parts:
>>>>> free_staticmem_pages and put_static_pages
>>>>
>>>> AFAIU, all the pages would have to be allocated via
>>>> acquire_domstatic_pages(). This call requires the domain to be
>>>> allocated and setup for handling memory.
>>>>
>>>> Therefore, I think the split is unnecessary. This would also have the
>>>> advantage of removing one loop. Admittedly, this is not important
>>>> when the order is 0, but it would become a problem for larger orders
>>>> (you may have to pull the struct page_info into the cache multiple
>>>> times).
>>>>
>>>
>>> How about this:
>>> I create a new func free_domstatic_page, and it will be like:
>>> "
>>> static void free_domstatic_page(struct domain *d, struct page_info
>>> *page)
>>> {
>>>     bool drop_dom_ref;
>>>     bool need_scrub;
>>>
>>>     /* NB. May recursively lock from relinquish_memory(). */
>>>     spin_lock_recursive(&d->page_alloc_lock);
>>>
>>>     arch_free_heap_page(d, page);
>>>
>>>     /*
>>>      * Normally we expect a domain to clear pages before freeing them,
>>>      * if it cares about the secrecy of their contents. However, after
>>>      * a domain has died we assume responsibility for erasure. We do
>>>      * scrub regardless if option scrub_domheap is set.
>>>      */
>>>     need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
>>>
>>>     free_staticmem_pages(page, 1, need_scrub);
>>>
>>>     /* Add page on the resv_page_list *after* it has been freed. */
>>>     put_static_page(d, page);
>>>
>>>     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
>>>
>>>     spin_unlock_recursive(&d->page_alloc_lock);
>>>
>>>     if ( drop_dom_ref )
>>>         put_domain(d);
>>> }
>>> "
>>>
>>> In free_domheap_pages, we just call free_domstatic_page:
>>>
>>> "
>>> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
>>> unsigned int order)
>>>
>>>      ASSERT_ALLOC_CONTEXT();
>>>
>>> +    if ( unlikely(pg->count_info & PGC_static) )
>>> +        return free_domstatic_page(d, pg);
>>> +
>>>      if ( unlikely(is_xen_heap_page(pg)) )
>>>      {
>>>          /* NB. May recursively lock from relinquish_memory(). */ @@
>>> -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
>>> unsigned long nr_mfns, "
>>>
>>> Then the split could be avoided and we could save the loop as much as
>> possible.
>>> Any suggestion?
>>
>> Looks reasonable at the first glance (will need to see it in proper context for a
>> final opinion), provided e.g. Xen heap pages can never be static.
> 
> If you would prefer that free_domheap_pages not call free_domstatic_page, then
> maybe the check should happen in put_page:
> "
> @@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)
> 
>      if ( unlikely((nx & PGC_count_mask) == 0) )
>      {
> +        if ( unlikely(page->count_info & PGC_static) )
> +            return free_domstatic_page(page);
>          free_domheap_page(page);
>      }
>  }
> "
> Wdyt now?

Personally I'd prefer this variant, but we'll have to see what Julien or
the other Arm maintainers think.

Jan
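The put_page() dispatch Jan prefers above can be modelled outside Xen as a plain reference-count drop that branches on a static flag. The sketch below is a standalone illustration, not hypervisor code: PGC_STATIC_MODEL, PGC_COUNT_MASK_MODEL, and the two free functions are simplified stand-ins that merely record which free path was taken.

```c
#include <assert.h>

/* Simplified stand-ins for Xen's PGC_static flag and count mask. */
#define PGC_STATIC_MODEL     (1u << 31)
#define PGC_COUNT_MASK_MODEL 0x00ffffffu

struct page_info_model {
    unsigned int count_info;
    int freed_as;            /* 0 = not freed, 1 = domheap, 2 = static */
};

static void free_domheap_page_model(struct page_info_model *pg)
{
    pg->freed_as = 1;
}

static void free_domstatic_page_model(struct page_info_model *pg)
{
    pg->freed_as = 2;
}

/* Model of the proposed put_page(): when the reference count drops to
 * zero, statically-assigned pages take the static free path instead of
 * the ordinary domheap one. */
static void put_page_model(struct page_info_model *page)
{
    /* Drop one reference; nx is the new count_info value. */
    unsigned int nx = --page->count_info;

    if ( (nx & PGC_COUNT_MASK_MODEL) == 0 )
    {
        if ( page->count_info & PGC_STATIC_MODEL )
        {
            free_domstatic_page_model(page);
            return;
        }
        free_domheap_page_model(page);
    }
}
```

The point of checking the flag at the release-of-last-reference site is that every caller of put_page() automatically gets the right free path, without each call site having to know whether the page is static.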


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:24:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 06:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357560.586179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6R7W-00012I-BP; Wed, 29 Jun 2022 06:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357560.586179; Wed, 29 Jun 2022 06:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6R7W-00012B-8k; Wed, 29 Jun 2022 06:24:14 +0000
Received: by outflank-mailman (input) for mailman id 357560;
 Wed, 29 Jun 2022 06:24:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6R7U-000125-Sk
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 06:24:13 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1611c051-f774-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 08:24:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BA38121D8A;
 Wed, 29 Jun 2022 06:24:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 68DAC132C0;
 Wed, 29 Jun 2022 06:24:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PxgSGAnwu2KhNwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 06:24:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1611c051-f774-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656483849; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=gxNDFpA+MEHs+GBhsYv2MsVZ4P2BDKkYpmkCRCtQG+E=;
	b=EDsyYMeNVCuIJvcnW+ujHZ7vomQ/MPPcN2h4FkRtvoQ4DDa5Grxfi6m5Q8S5jVjKZ4cvRH
	4WfVzFnGgXpuFB0UVW5MaZ7YgArzNYLXkzduWPPhF7CwRbdLyfIYVAojHTABmW//b/Eg6p
	IYMbAvyZ2XNz0ypMab3ZhxHwLUjdUm4=
Message-ID: <5d7de637-3f7a-3e71-5dcf-cbeb1fa08b7b@suse.com>
Date: Wed, 29 Jun 2022 08:24:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 2/9] xen: harmonize return types of hypercall handlers
Content-Language: en-US
To: xen-devel@lists.xenproject.org,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-3-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220324140139.5899-3-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------TJU09KNhsc0TEW3ee4EeP4K0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------TJU09KNhsc0TEW3ee4EeP4K0
Content-Type: multipart/mixed; boundary="------------Vq7pjCuk0pao0M00vm8iXg7L";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <5d7de637-3f7a-3e71-5dcf-cbeb1fa08b7b@suse.com>
Subject: Re: [PATCH v6 2/9] xen: harmonize return types of hypercall handlers
References: <20220324140139.5899-1-jgross@suse.com>
 <20220324140139.5899-3-jgross@suse.com>
In-Reply-To: <20220324140139.5899-3-jgross@suse.com>

--------------Vq7pjCuk0pao0M00vm8iXg7L
Content-Type: multipart/mixed; boundary="------------9bCt7W7A2Z04hok7bQa65kNq"

--------------9bCt7W7A2Z04hok7bQa65kNq
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.03.22 15:01, Juergen Gross wrote:
> Today most hypercall handlers have a return type of long, while the
> compat ones return an int. There are a few exceptions from that rule,
> however.
> 
> Get rid of the exceptions by letting compat handlers always return int
> and others always return long, with the exception of the Arm specific
> physdev_op handler.
> 
> For the compat hvm case use eax instead of rax for the stored result as
> it should have been from the beginning.
> 
> Additionally move some prototypes to include/asm-x86/hypercall.h
> as they are x86 specific. Move the compat_platform_op() prototype to
> the common header.
> 
> Rename paging_domctl_continuation() to do_paging_domctl_cont() and add
> a matching define for the associated hypercall.
> 
> Make do_callback_op() and compat_callback_op() more similar by adding
> the const attribute to compat_callback_op()'s 2nd parameter.
> 
> Change the type of the cmd parameter for [do|compat]_kexec_op() to
> unsigned int, as this is more appropriate for the compat case.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Could I please have some feedback regarding the kexec and argo changes?


Juergen

> ---
> V2:
> - rework platform_op compat handling (Jan Beulich)
> V3:
> - remove include of types.h (Jan Beulich)
> V4:
> - don't move do_physdev_op() (Julien Grall)
> - carve out non style compliant parameter replacements (Julien Grall)
> V6:
> - remove rebase artifact (Jan Beulich)
> ---
>   xen/arch/x86/domctl.c                |  4 ++--
>   xen/arch/x86/hvm/hypercall.c         |  8 ++-----
>   xen/arch/x86/hypercall.c             |  2 +-
>   xen/arch/x86/include/asm/hypercall.h | 31 ++++++++++++++--------------
>   xen/arch/x86/include/asm/paging.h    |  3 ---
>   xen/arch/x86/mm/paging.c             |  3 ++-
>   xen/arch/x86/pv/callback.c           | 14 ++++++-------
>   xen/arch/x86/pv/emul-priv-op.c       |  2 +-
>   xen/arch/x86/pv/hypercall.c          |  5 +----
>   xen/arch/x86/pv/iret.c               |  4 ++--
>   xen/arch/x86/pv/misc-hypercalls.c    | 14 ++++++++-----
>   xen/common/argo.c                    |  6 +++---
>   xen/common/kexec.c                   |  6 +++---
>   xen/include/xen/hypercall.h          | 20 ++++++++----------
>   14 files changed, 58 insertions(+), 64 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index e49f9e91b9..ea7d60ffb6 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -221,8 +221,8 @@ long arch_do_domctl(
>       case XEN_DOMCTL_shadow_op:
>           ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
>           if ( ret == -ERESTART )
> -            return hypercall_create_continuation(__HYPERVISOR_arch_1,
> -                                                 "h", u_domctl);
> +            return hypercall_create_continuation(
> +                       __HYPERVISOR_paging_domctl_cont, "h", u_domctl);
>           copyback = true;
>           break;
>   
> diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> index 62b5349e7d..3a35543997 100644
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -124,8 +124,6 @@ static long cf_check hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>       [ __HYPERVISOR_ ## x ] = { (hypercall_fn_t *) do_ ## x,  \
>                                  (hypercall_fn_t *) compat_ ## x }
>   
> -#define do_arch_1             paging_domctl_continuation
> -
>   static const struct {
>       hypercall_fn_t *native, *compat;
>   } hvm_hypercall_table[] = {
> @@ -158,11 +156,9 @@ static const struct {
>   #ifdef CONFIG_HYPFS
>       HYPERCALL(hypfs_op),
>   #endif
> -    HYPERCALL(arch_1)
> +    HYPERCALL(paging_domctl_cont)
>   };
>   
> -#undef do_arch_1
> -
>   #undef HYPERCALL
>   #undef HVM_CALL
>   #undef COMPAT_CALL
> @@ -300,7 +296,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>   #endif
>   
>           curr->hcall_compat = true;
> -        regs->rax = hvm_hypercall_table[eax].compat(ebx, ecx, edx, esi, edi);
> +        regs->eax = hvm_hypercall_table[eax].compat(ebx, ecx, edx, esi, edi);
>           curr->hcall_compat = false;
>   
>   #ifndef NDEBUG
> diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
> index 2370d31d3f..07e1a45ef5 100644
> --- a/xen/arch/x86/hypercall.c
> +++ b/xen/arch/x86/hypercall.c
> @@ -75,7 +75,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
>       ARGS(dm_op, 3),
>       ARGS(hypfs_op, 5),
>       ARGS(mca, 1),
> -    ARGS(arch_1, 1),
> +    ARGS(paging_domctl_cont, 1),
>   };
>   
>   #undef COMP
> diff --git a/xen/arch/x86/include/asm/hypercall.h b/xen/arch/x86/include/asm/hypercall.h
> index d6daa7e4cb..49973820af 100644
> --- a/xen/arch/x86/include/asm/hypercall.h
> +++ b/xen/arch/x86/include/asm/hypercall.h
> @@ -11,6 +11,8 @@
>   #include <public/arch-x86/xen-mca.h> /* for do_mca */
>   #include <asm/paging.h>
>   
> +#define __HYPERVISOR_paging_domctl_cont __HYPERVISOR_arch_1
> +
>   typedef unsigned long hypercall_fn_t(
>       unsigned long, unsigned long, unsigned long,
>       unsigned long, unsigned long);
> @@ -80,7 +82,7 @@ do_set_debugreg(
>       int reg,
>       unsigned long value);
>   
> -extern unsigned long cf_check
> +extern long cf_check
>   do_get_debugreg(
>       int reg);
>   
> @@ -118,7 +120,7 @@ do_mmuext_op(
>   extern long cf_check do_callback_op(
>       int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg);
>   
> -extern unsigned long cf_check
> +extern long cf_check
>   do_iret(
>       void);
>   
> @@ -133,17 +135,20 @@ do_set_segment_base(
>       unsigned int which,
>       unsigned long base);
>   
> +long cf_check do_nmi_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
> +
> +long cf_check do_xenpmu_op(unsigned int op,
> +                           XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
> +
> +long cf_check do_paging_domctl_cont(
> +    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
> +
>   #ifdef CONFIG_COMPAT
>   
>   #include <compat/arch-x86/xen.h>
>   #include <compat/physdev.h>
>   #include <compat/platform.h>
>   
> -extern int cf_check
> -compat_physdev_op(
> -    int cmd,
> -    XEN_GUEST_HANDLE_PARAM(void) arg);
> -
>   extern int
>   compat_common_vcpu_op(
>       int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
> @@ -154,12 +159,8 @@ extern int cf_check compat_mmuext_op(
>       XEN_GUEST_HANDLE_PARAM(uint) pdone,
>       unsigned int foreigndom);
>   
> -DEFINE_XEN_GUEST_HANDLE(compat_platform_op_t);
> -extern int cf_check compat_platform_op(
> -    XEN_GUEST_HANDLE_PARAM(compat_platform_op_t) u_xenpf_op);
> -
> -extern long cf_check compat_callback_op(
> -    int cmd, XEN_GUEST_HANDLE(void) arg);
> +extern int cf_check compat_callback_op(
> +    int cmd, XEN_GUEST_HANDLE(const_void) arg);
>   
>   extern int cf_check compat_update_va_mapping(
>       unsigned int va, uint32_t lo, uint32_t hi, unsigned int flags);
> @@ -177,12 +178,12 @@ extern int cf_check compat_set_gdt(
>   extern int cf_check compat_update_descriptor(
>       uint32_t pa_lo, uint32_t pa_hi, uint32_t desc_lo, uint32_t desc_hi);
>   
> -extern unsigned int cf_check compat_iret(void);
> +extern int cf_check compat_iret(void);
>   
>   extern int cf_check compat_nmi_op(
>       unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
>   
> -extern long cf_check compat_set_callbacks(
> +extern int cf_check compat_set_callbacks(
>       unsigned long event_selector, unsigned long event_address,
>       unsigned long failsafe_selector, unsigned long failsafe_address);
>   
> diff --git a/xen/arch/x86/include/asm/paging.h b/xen/arch/x86/include/asm/paging.h
> index f0b4efc66e..54c440be65 100644
> --- a/xen/arch/x86/include/asm/paging.h
> +++ b/xen/arch/x86/include/asm/paging.h
> @@ -234,9 +234,6 @@ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
>                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
>                       bool_t resuming);
>   
> -/* Helper hypercall for dealing with continuations. */
> -long cf_check paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
> -
>   /* Call when destroying a vcpu/domain */
>   void paging_vcpu_teardown(struct vcpu *v);
>   int paging_teardown(struct domain *d);
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 1f0b94ad21..a7e2707ecc 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -21,6 +21,7 @@
>   
>   #include <xen/init.h>
>   #include <xen/guest_access.h>
> +#include <xen/hypercall.h>
>   #include <asm/paging.h>
>   #include <asm/shadow.h>
>   #include <asm/p2m.h>
> @@ -759,7 +760,7 @@ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
>           return shadow_domctl(d, sc, u_domctl);
>   }
>   
> -long cf_check paging_domctl_continuation(
> +long cf_check do_paging_domctl_cont(
>       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>   {
>       struct xen_domctl op;
> diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
> index 55148c7f9e..1be9d3f731 100644
> --- a/xen/arch/x86/pv/callback.c
> +++ b/xen/arch/x86/pv/callback.c
> @@ -207,9 +207,9 @@ long cf_check do_set_callbacks(
>   #include <compat/callback.h>
>   #include <compat/nmi.h>
>   
> -static long compat_register_guest_callback(struct compat_callback_register *reg)
> +static int compat_register_guest_callback(struct compat_callback_register *reg)
>   {
> -    long ret = 0;
> +    int ret = 0;
>       struct vcpu *curr = current;
>   
>       fixup_guest_code_selector(curr->domain, reg->address.cs);
> @@ -256,10 +256,10 @@ static long compat_register_guest_callback(struct compat_callback_register *reg)
>       return ret;
>   }
>   
> -static long compat_unregister_guest_callback(
> +static int compat_unregister_guest_callback(
>       struct compat_callback_unregister *unreg)
>   {
> -    long ret;
> +    int ret;
>   
>       switch ( unreg->type )
>       {
> @@ -283,9 +283,9 @@ static long compat_unregister_guest_callback(
>       return ret;
>   }
>   
> -long cf_check compat_callback_op(int cmd, XEN_GUEST_HANDLE(void) arg)
> +int cf_check compat_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
>   {
> -    long ret;
> +    int ret;
>   
>       switch ( cmd )
>       {
> @@ -321,7 +321,7 @@ long cf_check compat_callback_op(int cmd, XEN_GUEST_HANDLE(void) arg)
>       return ret;
>   }
>   
> -long cf_check compat_set_callbacks(
> +int cf_check compat_set_callbacks(
>       unsigned long event_selector, unsigned long event_address,
>       unsigned long failsafe_selector, unsigned long failsafe_address)
>   {
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index 22b10dec2a..5da00e24e4 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -22,13 +22,13 @@
>   #include <xen/domain_page.h>
>   #include <xen/event.h>
>   #include <xen/guest_access.h>
> +#include <xen/hypercall.h>
>   #include <xen/iocap.h>
>   
>   #include <asm/amd.h>
>   #include <asm/debugreg.h>
>   #include <asm/endbr.h>
>   #include <asm/hpet.h>
> -#include <asm/hypercall.h>
>   #include <asm/mc146818rtc.h>
>   #include <asm/pv/domain.h>
>   #include <asm/pv/trace.h>
> diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
> index e8fbee7bbb..fe8dfe9e8f 100644
> --- a/xen/arch/x86/pv/hypercall.c
> +++ b/xen/arch/x86/pv/hypercall.c
> @@ -47,8 +47,6 @@ typedef struct {
>   #define COMPAT_CALL(x) HYPERCALL(x)
>   #endif
>   
> -#define do_arch_1             paging_domctl_continuation
> -
>   static const pv_hypercall_table_t pv_hypercall_table[] = {
>       COMPAT_CALL(set_trap_table),
>       HYPERCALL(mmu_update),
> @@ -109,11 +107,10 @@ static const pv_hypercall_table_t pv_hypercall_table[] = {
>   #endif
>       HYPERCALL(mca),
>   #ifndef CONFIG_PV_SHIM_EXCLUSIVE
> -    HYPERCALL(arch_1),
> +    HYPERCALL(paging_domctl_cont),
>   #endif
>   };
>   
> -#undef do_arch_1
>   #undef COMPAT_CALL
>   #undef HYPERCALL
>   
> diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
> index dd2965d8f0..55eb6a63bd 100644
> --- a/xen/arch/x86/pv/iret.c
> +++ b/xen/arch/x86/pv/iret.c
> @@ -48,7 +48,7 @@ static void async_exception_cleanup(struct vcpu *curr)
>           curr->arch.async_exception_state(trap).old_mask;
>   }
>   
> -unsigned long cf_check do_iret(void)
> +long cf_check do_iret(void)
>   {
>       struct cpu_user_regs *regs = guest_cpu_user_regs();
>       struct iret_context iret_saved;
> @@ -105,7 +105,7 @@ unsigned long cf_check do_iret(void)
>   }
>   
>   #ifdef CONFIG_PV32
> -unsigned int cf_check compat_iret(void)
> +int cf_check compat_iret(void)
>   {
>       struct cpu_user_regs *regs = guest_cpu_user_regs();
>       struct vcpu *v = current;
> diff --git a/xen/arch/x86/pv/misc-hypercalls.c b/xen/arch/x86/pv/misc-hypercalls.c
> index 5649aaab44..635f5a644a 100644
> --- a/xen/arch/x86/pv/misc-hypercalls.c
> +++ b/xen/arch/x86/pv/misc-hypercalls.c
> @@ -28,12 +28,16 @@ long cf_check do_set_debugreg(int reg, unsigned long value)
>       return set_debugreg(current, reg, value);
>   }
>   
> -unsigned long cf_check do_get_debugreg(int reg)
> +long cf_check do_get_debugreg(int reg)
>   {
> -    unsigned long val;
> -    int res = x86emul_read_dr(reg, &val, NULL);
> -
> -    return res == X86EMUL_OKAY ? val : -ENODEV;
> +    /* Avoid implementation defined behavior casting unsigned long to long. */
> +    union {
> +        unsigned long val;
> +        long ret;
> +    } u;
> +    int res = x86emul_read_dr(reg, &u.val, NULL);
> +
> +    return res == X86EMUL_OKAY ? u.ret : -ENODEV;
>   }
>   
>   long cf_check do_fpu_taskswitch(int set)
> diff --git a/xen/common/argo.c b/xen/common/argo.c
> index 297f6d11f0..26a01c2188 100644
> --- a/xen/common/argo.c
> +++ b/xen/common/argo.c
> @@ -2207,13 +2207,13 @@ do_argo_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
>   }
>   
>   #ifdef CONFIG_COMPAT
> -long cf_check
> +int cf_check
>   compat_argo_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
>                  XEN_GUEST_HANDLE_PARAM(void) arg2, unsigned long arg3,
>                  unsigned long arg4)
>   {
>       struct domain *currd = current->domain;
> -    long rc;
> +    int rc;
>       xen_argo_send_addr_t send_addr;
>       xen_argo_iov_t iovs[XEN_ARGO_MAXIOV];
>       compat_argo_iov_t compat_iovs[XEN_ARGO_MAXIOV];
> @@ -2267,7 +2267,7 @@ compat_argo_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
>   
>       rc = sendv(currd, &send_addr.src, &send_addr.dst, iovs, niov, arg4);
>    out:
> -    argo_dprintk("<-compat_argo_op(%u)=%ld\n", cmd, rc);
> +    argo_dprintk("<-compat_argo_op(%u)=%d\n", cmd, rc);
>   
>       return rc;
>   }
> diff --git a/xen/common/kexec.c b/xen/common/kexec.c
> index a2ffb6530c..41669964d2 100644
> --- a/xen/common/kexec.c
> +++ b/xen/common/kexec.c
> @@ -1213,7 +1213,7 @@ static int kexec_status(XEN_GUEST_HANDLE_PARAM(void) uarg)
>       return !!test_bit(bit, &kexec_flags);
>   }
>   
> -static int do_kexec_op_internal(unsigned long op,
> +static int do_kexec_op_internal(unsigned int op,
>                                   XEN_GUEST_HANDLE_PARAM(void) uarg,
>                                   bool_t compat)
>   {
> @@ -1265,13 +1265,13 @@ static int do_kexec_op_internal(unsigned long op,
>       return ret;
>   }
>   
> -long cf_check do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
> +long cf_check do_kexec_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(void) uarg)
>   {
>       return do_kexec_op_internal(op, uarg, 0);
>   }
>   
>   #ifdef CONFIG_COMPAT
> -int cf_check compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
> +int cf_check compat_kexec_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(void) uarg)
>   {
>       return do_kexec_op_internal(op, uarg, 1);
>   }
> diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
> index 81aae7a662..a032ba2b4a 100644
> --- a/xen/include/xen/hypercall.h
> +++ b/xen/include/xen/hypercall.h
> @@ -114,11 +114,6 @@ common_vcpu_op(int cmd,
>       struct vcpu *v,
>       XEN_GUEST_HANDLE_PARAM(void) arg);
>   
> -extern long cf_check
> -do_nmi_op(
> -    unsigned int cmd,
> -    XEN_GUEST_HANDLE_PARAM(void) arg);
> -
>   extern long cf_check
>   do_hvm_op(
>       unsigned long op,
> @@ -126,7 +121,7 @@ do_hvm_op(
>   
>   extern long cf_check
>   do_kexec_op(
> -    unsigned long op,
> +    unsigned int op,
>       XEN_GUEST_HANDLE_PARAM(void) uarg);
>   
>   extern long cf_check
> @@ -145,9 +140,6 @@ extern long cf_check do_argo_op(
>   extern long cf_check
>   do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
>   
> -extern long cf_check
> -do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
> -
>   extern long cf_check
>   do_dm_op(
>       domid_t domid,
> @@ -205,15 +197,21 @@ extern int cf_check compat_xsm_op(
>       XEN_GUEST_HANDLE_PARAM(void) op);
>   
>   extern int cf_check compat_kexec_op(
> -    unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg);
> +    unsigned int op, XEN_GUEST_HANDLE_PARAM(void) uarg);
>   
>   DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
>   extern int cf_check compat_multicall(
>       XEN_GUEST_HANDLE_PARAM(multicall_entry_compat_t) call_list,
>       uint32_t nr_calls);
>   
> +int compat_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
> +
> +typedef struct compat_platform_op compat_platform_op_t;
> +DEFINE_XEN_GUEST_HANDLE(compat_platform_op_t);
> +int compat_platform_op(XEN_GUEST_HANDLE_PARAM(compat_platform_op_t) u_xenpf_op);
> +
>   #ifdef CONFIG_ARGO
> -extern long cf_check compat_argo_op(
> +extern int cf_check compat_argo_op(
>       unsigned int cmd,
>       XEN_GUEST_HANDLE_PARAM(void) arg1,
>       XEN_GUEST_HANDLE_PARAM(void) arg2,

--------------9bCt7W7A2Z04hok7bQa65kNq
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9bCt7W7A2Z04hok7bQa65kNq--

--------------Vq7pjCuk0pao0M00vm8iXg7L--

--------------TJU09KNhsc0TEW3ee4EeP4K0
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK78AkFAwAAAAAACgkQsN6d1ii/Ey8R
awf9EFNVD1n4oGaRmgYd2dPfmRKrFz9+JJn4QSJmKOjkq1+wtzRRlv7Sw/9FaMi56MtVe9yFFEYX
AcJDEs0tPCrOhW2EHURLUV3NVbyiH1jFYRNhlvQeMHQK7jB3MjZPYpGVmtRlnm5k5y14C3HT5Ib8
UKJ2Aiogo9V07BWoMD64vgseUzuvJr9qxN1mrBMYgjDz2w3reUSPoUcBPqWG0DLUN0Rdha301kh9
Mhe7w5XLmK88ztGrUhSNR7OyWCEoOurSUpCxkKXIoZHR1c5i2w5C9XQF+AWbxN02heQSvMoFsUoi
UjYNnLqcpCJmwEpz1r6FKyGbyr3JF3mxNZmZ5stkuw==
=Fn/S
-----END PGP SIGNATURE-----

--------------TJU09KNhsc0TEW3ee4EeP4K0--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:29:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 06:29:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357567.586190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RCI-0001qq-20; Wed, 29 Jun 2022 06:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357567.586190; Wed, 29 Jun 2022 06:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RCH-0001qj-Va; Wed, 29 Jun 2022 06:29:09 +0000
Received: by outflank-mailman (input) for mailman id 357567;
 Wed, 29 Jun 2022 06:29:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6RCH-0001qa-4P
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 06:29:09 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60083.outbound.protection.outlook.com [40.107.6.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c78e3522-f774-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 08:29:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6144.eurprd04.prod.outlook.com (2603:10a6:803:fd::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 06:29:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 06:29:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c78e3522-f774-11ec-bd2d-47488cf2e6aa
Message-ID: <ca75ce19-7fea-68c2-f046-0bc2abb3d5d2@suse.com>
Date: Wed, 29 Jun 2022 08:29:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] Fix compilation error with gcc13
Content-Language: en-US
To: Charles Arnold <carnold@suse.com>
References: <46d17735-e96f-2eee-5d24-fc3d15526c6e@suse.com>
Cc: xen-devel@lists.xenproject.org, Anthony Perard
 <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <46d17735-e96f-2eee-5d24-fc3d15526c6e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

(Cc-ing maintainers / reviewers)

On 28.06.2022 18:09, Charles Arnold wrote:
>  From 359f490021e69220313ca8bd2981bad4fcfea0db Mon Sep 17 00:00:00 2001
> From: Charles Arnold <carnold@suse.com>
> Date: Tue, 28 Jun 2022 09:55:28 -0600
> Subject: Fix compilation error with gcc13.
> 
> xc_psr.c:161:5: error: conflicting types for 'xc_psr_cmt_get_data'
> due to enum/integer mismatch;
> 
> Signed-off-by: Charles Arnold <carnold@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

The subject would benefit from having a "libxc: " prefix, to make clear at
first glance which component is being touched.

> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -2520,7 +2520,7 @@ int xc_psr_cmt_get_l3_event_mask(xc_interface 
> *xch, uint32_t *event_mask);
>   int xc_psr_cmt_get_l3_cache_size(xc_interface *xch, uint32_t cpu,
>                                    uint32_t *l3_cache_size);
>   int xc_psr_cmt_get_data(xc_interface *xch, uint32_t rmid, uint32_t cpu,
> -                        uint32_t psr_cmt_type, uint64_t *monitor_data,
> +                        xc_psr_cmt_type type, uint64_t *monitor_data,
>                           uint64_t *tsc);
>   int xc_psr_cmt_enabled(xc_interface *xch);
> 

The patch looks slightly garbled, reminding me of how things looked
for me before I adjusted TB configuration accordingly. I'd be fine
hand-editing the patch while committing, if no other need arises for
a v2 (and of course once a maintainer ack has been provided).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 06:41:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 06:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357573.586202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RNn-0004DG-2w; Wed, 29 Jun 2022 06:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357573.586202; Wed, 29 Jun 2022 06:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RNm-0004D9-VS; Wed, 29 Jun 2022 06:41:02 +0000
Received: by outflank-mailman (input) for mailman id 357573;
 Wed, 29 Jun 2022 06:41:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8wio=XE=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1o6RNl-0004D3-3Q
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 06:41:01 +0000
Received: from mga18.intel.com (mga18.intel.com [134.134.136.126])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dbbdd54-f776-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 08:40:58 +0200 (CEST)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 28 Jun 2022 23:40:55 -0700
Received: from fmsmsx605.amr.corp.intel.com ([10.18.126.85])
 by orsmga004.jf.intel.com with ESMTP; 28 Jun 2022 23:40:54 -0700
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx605.amr.corp.intel.com (10.18.126.85) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Tue, 28 Jun 2022 23:40:54 -0700
Received: from fmsedg601.ED.cps.intel.com (10.1.192.135) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Tue, 28 Jun 2022 23:40:54 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.177)
 by edgegateway.intel.com (192.55.55.70) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Tue, 28 Jun 2022 23:40:53 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by BY5PR11MB4371.namprd11.prod.outlook.com (2603:10b6:a03:1bf::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 06:40:45 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c%2]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 06:40:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dbbdd54-f776-11ec-bd2d-47488cf2e6aa
From: "Tian, Kevin" <kevin.tian@intel.com>
To: =?utf-8?B?UGF1IE1vbm7DqSwgUm9nZXI=?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: =?utf-8?B?UGF1IE1vbm7DqSwgUm9nZXI=?= <roger.pau@citrix.com>, "Nakajima,
 Jun" <jun.nakajima@intel.com>, "Beulich, Jan" <JBeulich@suse.com>, "Cooper,
 Andrew" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH 5/5] x86/vmx: fix indentation of LBR
Thread-Topic: [PATCH 5/5] x86/vmx: fix indentation of LBR
Thread-Index: AQHYbE70QXaX6+cUdUmsLrVvfVqSp61mLXgQ
Date: Wed, 29 Jun 2022 06:40:45 +0000
Message-ID: <BN9PR11MB52769BAB408AB479E48DB6808CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220520133746.66142-1-roger.pau@citrix.com>
 <20220520133746.66142-6-roger.pau@citrix.com>
In-Reply-To: <20220520133746.66142-6-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: intel.com

> From: Roger Pau Monne
> Sent: Friday, May 20, 2022 9:38 PM
> 
> Properly indent the handling of LBR enable in MSR_IA32_DEBUGCTLMSR
> vmx_msr_write_intercept().
> 
> No functional change.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> Feel free to squash onto the previous patch, did separately to aid the
> readability of the previous change.
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 38 +++++++++++++++++++-------------------
>  1 file changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 3f45ac05c6..ff10b293a4 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3540,31 +3540,31 @@ static int cf_check vmx_msr_write_intercept(
> 
>              if ( lbr->count )
>              {
> -            for ( ; lbr->count; lbr++ )
> -            {
> -                unsigned int i;
> -
> -                for ( i = 0; i < lbr->count; i++ )
> +                for ( ; lbr->count; lbr++ )
>                  {
> -                    int rc = vmx_add_guest_msr(v, lbr->base + i, 0);
> +                    unsigned int i;
> 
> -                    if ( unlikely(rc) )
> +                    for ( i = 0; i < lbr->count; i++ )
>                      {
> -                        gprintk(XENLOG_ERR,
> -                                "Guest load/save list error %d\n", rc);
> -                        domain_crash(v->domain);
> -                        return X86EMUL_OKAY;
> -                    }
> +                        int rc = vmx_add_guest_msr(v, lbr->base + i, 0);
> 
> -                    vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
> +                        if ( unlikely(rc) )
> +                        {
> +                            gprintk(XENLOG_ERR,
> +                                    "Guest load/save list error %d\n", rc);
> +                            domain_crash(v->domain);
> +                            return X86EMUL_OKAY;
> +                        }
> +
> +                        vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
> +                    }
>                  }
> -            }
> 
> -            v->arch.hvm.vmx.lbr_flags |= LBR_MSRS_INSERTED;
> -            if ( lbr_tsx_fixup_needed )
> -                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_TSX;
> -            if ( ler_to_fixup_needed )
> -                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_LER_TO;
> +                v->arch.hvm.vmx.lbr_flags |= LBR_MSRS_INSERTED;
> +                if ( lbr_tsx_fixup_needed )
> +                    v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_TSX;
> +                if ( ler_to_fixup_needed )
> +                    v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_LER_TO;
>              }
>              else
>                  /* No model specific LBRs, ignore DEBUGCTLMSR.LBR. */
> --
> 2.36.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 07:13:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 07:13:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357581.586219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RtD-0007zI-Nh; Wed, 29 Jun 2022 07:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357581.586219; Wed, 29 Jun 2022 07:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6RtD-0007zB-K4; Wed, 29 Jun 2022 07:13:31 +0000
Received: by outflank-mailman (input) for mailman id 357581;
 Wed, 29 Jun 2022 07:13:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6RtB-0007z5-B4
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 07:13:29 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2070.outbound.protection.outlook.com [40.107.20.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7a105f6-f77a-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 09:13:25 +0200 (CEST)
Received: from DB6PR07CA0019.eurprd07.prod.outlook.com (2603:10a6:6:2d::29) by
 AM9PR08MB6306.eurprd08.prod.outlook.com (2603:10a6:20b:2d6::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 07:13:22 +0000
Received: from DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2d:cafe::3f) by DB6PR07CA0019.outlook.office365.com
 (2603:10a6:6:2d::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 07:13:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT010.mail.protection.outlook.com (100.127.142.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 07:13:21 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Wed, 29 Jun 2022 07:13:21 +0000
Received: from a342d768b2e5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 886B99C0-25DC-4389-ACB8-5A4A40479CF8.1; 
 Wed, 29 Jun 2022 07:13:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a342d768b2e5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 07:13:10 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM6PR08MB5173.eurprd08.prod.outlook.com (2603:10a6:20b:e5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 07:13:06 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 07:13:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7a105f6-f77a-11ec-b725-ed86ccbb4733
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: RE: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
Thread-Topic: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
Thread-Index: AQHYhGQ84mRcoOoQ8UiRh2nj8rBdHq1e5e0AgAcJLcA=
Date: Wed, 29 Jun 2022 07:13:05 +0000
Message-ID:
 <DU2PR08MB7325A7C7C50807D7FF6AE280F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-3-Penny.Zheng@arm.com>
 <3b7b32cb-df48-e458-e8a9-f17e86f39c9a@xen.org>
In-Reply-To: <3b7b32cb-df48-e458-e8a9-f17e86f39c9a@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 751C3984BAC2A14F847AC07C6E46847A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: db044164-0601-45ec-6fec-08da599ed962
x-ms-traffictypediagnostic:
	AM6PR08MB5173:EE_|DBAEUR03FT010:EE_|AM9PR08MB6306:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5173
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	732b4cf4-c79e-424a-e006-08da599ed023
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 07:13:21.3002
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: db044164-0601-45ec-6fec-08da599ed962
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6306

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 2:22 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
> default owner dom_io
> 
> Hi Penny,
> 
> On 20/06/2022 06:11, Penny Zheng wrote:
> > From: Penny Zheng <penny.zheng@arm.com>
> >
> > This commit introduces process_shm to cope with static shared memory
> > in domain construction.
> >
> > DOMID_IO will be the default owner of memory pre-shared among
> multiple
> > domains at boot time, when no explicit owner is specified.
> 
> The document in patch #1 suggest the page will be shared with dom_shared.
> But here you say "DOMID_IO".
> 
> Which one is correct?
> 

I’ll fix the documentation; DOMID_IO is the final decision.

> >
> > This commit only considers allocating static shared memory to dom_io
> > when owner domain is not explicitly defined in device tree, all the
> > left, including the "borrower" code path, the "explicit owner" code
> > path, shall be introduced later in the following patches.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > v5 change:
> > - refine in-code comment
> > ---
> > v4 change:
> > - no changes
> > ---
> > v3 change:
> > - refine in-code comment
> > ---
> > v2 change:
> > - instead of introducing a new system domain, reuse the existing
> > dom_io
> > - make dom_io a non-auto-translated domain, then no need to create P2M
> > for it
> > - change dom_io definition and make it wider to support static shm
> > here too
> > - introduce is_shm_allocated_to_domio to check whether static shm is
> > allocated yet, instead of using shm_mask bitmap
> > - add in-code comment
> > ---
> >   xen/arch/arm/domain_build.c | 132
> +++++++++++++++++++++++++++++++++++-
> >   xen/common/domain.c         |   3 +
> >   2 files changed, 134 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 7ddd16c26d..91a5ace851 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -527,6 +527,10 @@ static bool __init
> append_static_memory_to_bank(struct domain *d,
> >       return true;
> >   }
> >
> > +/*
> > + * If cell is NULL, pbase and psize should hold valid values.
> > + * Otherwise, cell will be populated together with pbase and psize.
> > + */
> >   static mfn_t __init acquire_static_memory_bank(struct domain *d,
> >                                                  const __be32 **cell,
> >                                                  u32 addr_cells, u32
> > size_cells, @@ -535,7 +539,8 @@ static mfn_t __init
> acquire_static_memory_bank(struct domain *d,
> >       mfn_t smfn;
> >       int res;
> >
> > -    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
> > +    if ( cell )
> > +        device_tree_get_reg(cell, addr_cells, size_cells, pbase,
> > + psize);
> 
> I think this is a bit of a hack. To me it sounds like this should be moved out to
> a separate helper. This will also make the interface of
> acquire_shared_memory_bank() less questionable (see below).
> 

Ok, I'll try not to reuse acquire_static_memory_bank in
acquire_shared_memory_bank.

> As this is v5, I would be OK with a follow-up for this split. But this interface of
> acuiqre_shared_memory_bank() needs to change.
> 

I'll try to fix it in the next version.

> >       ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize,
> > PAGE_SIZE));
> 
> In the context of your series, who is checking that both psize and pbase are
> suitably aligned?
> 

Actually, the whole parsing process is redundant for the static shared memory:
I've already parsed and checked it earlier in process_shm.

> >       if ( PFN_DOWN(*psize) > UINT_MAX )
> >       {
> > @@ -759,6 +764,125 @@ static void __init assign_static_memory_11(struct
> domain *d,
> >       panic("Failed to assign requested static memory for direct-map
> domain %pd.",
> >             d);
> >   }
> > +
> > +#ifdef CONFIG_STATIC_SHM
> > +/*
> > + * This function checks whether the static shared memory region is
> > + * already allocated to dom_io.
> > + */
> > +static bool __init is_shm_allocated_to_domio(paddr_t pbase) {
> > +    struct page_info *page;
> > +
> > +    page = maddr_to_page(pbase);
> > +    ASSERT(page);
> 
> maddr_to_page() can never return NULL. If you want to check a page will be
> valid, then you should use mfn_valid().
> 
> However, the ASSERT() implies that the address was suitably checked before.
> But I can't find such check.
> 
> > +
> > +    if ( page_get_owner(page) == NULL )
> > +        return false;
> > +
> > +    ASSERT(page_get_owner(page) == dom_io);
> Could this be hit because of a wrong device-tree? If yes, then this should not
> be an ASSERT() (they are not suitable to check user input).
> 

Yes, it can happen; I'll change it to an if condition and output the error.

> > +    return true;
> > +}
> > +
> > +static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> > +                                               u32 addr_cells, u32 size_cells,
> > +                                               paddr_t *pbase,
> > +paddr_t *psize)
> 
> There is something that doesn't add-up in this interface. The use of pointer
> implies that pbase and psize may be modified by the function. So...
> 

Just as you pointed out before, it's a bit hacky to reuse
acquire_static_memory_bank, and the pointers used here are also due to that:
the internal parsing in acquire_static_memory_bank needs pointers to deliver
the results.

I’ll rewrite acquire_shared_memory_bank, and it will look like:
"
static mfn_t __init acquire_shared_memory_bank(struct domain *d,
                                               paddr_t pbase, paddr_t psize)
{
    mfn_t smfn;
    unsigned long nr_pfns;
    int res;

    /*
     * Pages of statically shared memory shall be included
     * in domain_tot_pages().
     */
    nr_pfns = PFN_DOWN(psize);
    if ( d->max_page + nr_pfns > UINT_MAX )
    {
        printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
               d, psize);
        return INVALID_MFN;
    }
    d->max_pages += nr_pfns;

    smfn = maddr_to_mfn(pbase);
    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
    if ( res )
    {
        printk(XENLOG_ERR
               "%pd: failed to acquire static memory: %d.\n", d, res);
        return INVALID_MFN;
    }

    return smfn;
}
"

> > +{
> > +    /*
> > +     * Pages of statically shared memory shall be included
> > +     * in domain_tot_pages().
> > +     */
> > +    d->max_pages += PFN_DOWN(*psize);
> 
> ... it sounds a bit strange to use psize here. If psize, can't be modified than it
> should probably not be a pointer.
> 
> Also, where do you check that d->max_pages will not overflow?
> 

I'll check the overflow as follows:
"
    nr_pfns = PFN_DOWN(psize);
    if ( d->max_page + nr_pfns > UINT_MAX )
    {
        printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
               d, psize);
        return INVALID_MFN;
    }
    d->max_pages += nr_pfns;
"

> > +
> > +    return acquire_static_memory_bank(d, NULL, addr_cells, size_cells,
> > +                                      pbase, psize);
> > +
> > +}
> > +
> > +/*
> > + * Func allocate_shared_memory is supposed to be only called
> 
> I am a bit concerned with the word "supposed". Are you implying that it may
> be called by someone that is not the owner? If not, then it should be
> "should".
> 
> Also NIT: Spell out completely "func". I.e "The function".
> 
> > + * from the owner.
> 
> I read from as "current should be the owner". But I guess this is not what you
> mean here. Instead it looks like you mean "d" is the owner. So I would write
> "d should be the owner of the shared area".
> 
> It would be good to have a check/ASSERT confirm this (assuming this is easy
> to write).
> 

The check is already in the calling path, I guess: only under certain
circumstances could we call allocate_shared_memory.

> > + */
> > +static int __init allocate_shared_memory(struct domain *d,
> > +                                         u32 addr_cells, u32 size_cells,
> > +                                         paddr_t pbase, paddr_t
> > +psize) {
> > +    mfn_t smfn;
> > +
> > +    dprintk(XENLOG_INFO,
> > +            "Allocate static shared memory BANK %#"PRIpaddr"-
> %#"PRIpaddr".\n",
> > +            pbase, pbase + psize);
> 
> NIT: I would suggest to also print the domain. This could help to easily figure
> out that 'd' wasn't the owner.
>

Sure.

> > +
> > +    smfn = acquire_shared_memory_bank(d, addr_cells, size_cells, &pbase,
> > +                                      &psize);
> > +    if ( mfn_eq(smfn, INVALID_MFN) )
> > +        return -EINVAL;
> > +
> > +    /*
> > +     * DOMID_IO is the domain, like DOMID_XEN, that is not auto-
> translated.
> > +     * It sees RAM 1:1 and we do not need to create P2M mapping for it
> > +     */
> > +    ASSERT(d == dom_io);
> > +    return 0;
> > +}
> > +
> > +static int __init process_shm(struct domain *d,
> > +                              const struct dt_device_node *node) {
> > +    struct dt_device_node *shm_node;
> > +    int ret = 0;
> > +    const struct dt_property *prop;
> > +    const __be32 *cells;
> > +    u32 shm_id;
> > +    u32 addr_cells, size_cells;
> > +    paddr_t gbase, pbase, psize;
> > +
> > +    dt_for_each_child_node(node, shm_node)
> > +    {
> > +        if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-
> memory-v1") )
> > +            continue;
> > +
> > +        if ( !dt_property_read_u32(shm_node, "xen,shm-id", &shm_id) )
> > +        {
> > +            printk("Shared memory node does not provide \"xen,shm-id\"
> property.\n");
> > +            return -ENOENT;
> > +        }
> > +
> > +        addr_cells = dt_n_addr_cells(shm_node);
> > +        size_cells = dt_n_size_cells(shm_node);
> > +        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
> > +        if ( !prop )
> > +        {
> > +            printk("Shared memory node does not provide \"xen,shared-
> mem\" property.\n");
> > +            return -ENOENT;
> > +        }
> > +        cells = (const __be32 *)prop->value;
> > +        /* xen,shared-mem = <pbase, psize, gbase>; */
> > +        device_tree_get_reg(&cells, addr_cells, size_cells, &pbase, &psize);
> > +        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize,
> > + PAGE_SIZE));
> 
> See above about what ASSERT()s are for.
> 

Do you think the address is suitably checked here, and is that enough?
If it is enough, I'll change the above ASSERT() to mfn_valid().

> > +        gbase = dt_read_number(cells, addr_cells);
> > +
> > +        /* TODO: Consider owner domain is not the default dom_io. */
> > +        /*
> > +         * Per static shared memory region could be shared between multiple
> > +         * domains.
> > +         * In case re-allocating the same shared memory region, we check
> > +         * if it is already allocated to the default owner dom_io before
> > +         * the actual allocation.
> > +         */
> > +        if ( !is_shm_allocated_to_domio(pbase) )
> > +        {
> > +            /* Allocate statically shared pages to the default owner dom_io. */
> > +            ret = allocate_shared_memory(dom_io, addr_cells, size_cells,
> > +                                         pbase, psize);
> > +            if ( ret )
> > +                return ret;
> > +        }
> > +    }
> > +
> > +    return 0;
> > +}
> > +#endif /* CONFIG_STATIC_SHM */
> >   #else
> >   static void __init allocate_static_memory(struct domain *d,
> >                                             struct kernel_info *kinfo,
> > @@ -3236,6 +3360,12 @@ static int __init construct_domU(struct domain
> *d,
> >       else
> >           assign_static_memory_11(d, &kinfo, node);
> >
> > +#ifdef CONFIG_STATIC_SHM
> > +    rc = process_shm(d, node);
> > +    if ( rc < 0 )
> > +        return rc;
> > +#endif
> > +
> >       /*
> >        * Base address and irq number are needed when creating vpl011
> device
> >        * tree node in prepare_dtb_domU, so initialization on related
> > variables diff --git a/xen/common/domain.c b/xen/common/domain.c
> index
> > 7570eae91a..7070f5a9b9 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -780,6 +780,9 @@ void __init setup_system_domains(void)
> >        * This domain owns I/O pages that are within the range of the
> page_info
> >        * array. Mappings occur at the priv of the caller.
> >        * Quarantined PCI devices will be associated with this domain.
> > +     *
> > +     * DOMID_IO is also the default owner of memory pre-shared among
> multiple
> > +     * domains at boot time.
> >        */
> >       dom_io = domain_create(DOMID_IO, NULL, 0);
> >       if ( IS_ERR(dom_io) )
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 07:24:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 07:24:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357588.586229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6S4E-0001D1-Sm; Wed, 29 Jun 2022 07:24:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357588.586229; Wed, 29 Jun 2022 07:24:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6S4E-0001Cu-Pw; Wed, 29 Jun 2022 07:24:54 +0000
Received: by outflank-mailman (input) for mailman id 357588;
 Wed, 29 Jun 2022 07:24:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DiI7=XE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6S4D-0001Co-Jg
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 07:24:53 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50079.outbound.protection.outlook.com [40.107.5.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 91209e78-f77c-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 09:24:52 +0200 (CEST)
Received: from DB7PR05CA0041.eurprd05.prod.outlook.com (2603:10a6:10:2e::18)
 by AS8PR08MB7919.eurprd08.prod.outlook.com (2603:10a6:20b:53a::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 07:24:50 +0000
Received: from DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::2a) by DB7PR05CA0041.outlook.office365.com
 (2603:10a6:10:2e::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5273.22 via Frontend
 Transport; Wed, 29 Jun 2022 07:24:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT048.mail.protection.outlook.com (100.127.142.200) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 07:24:49 +0000
Received: ("Tessian outbound 4748bc5c2894:v121");
 Wed, 29 Jun 2022 07:24:49 +0000
Received: from f910740833bd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E945B5ED-3366-4A94-9D73-BF3BAD374B50.1; 
 Wed, 29 Jun 2022 07:24:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f910740833bd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 07:24:38 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV1PR08MB8011.eurprd08.prod.outlook.com (2603:10a6:150:99::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 07:24:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 07:24:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91209e78-f77c-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=InAND9r4oiv8xYPczmAnsDVVD/UtO+9ownfAFghBz/sIiywGyES3sMgvAflyEKwD+0R5w3YUag+8WJgYPk3Ty159sfL4Ibp8JP6LsQRwhErwWXHIyEQC3kkLLmfSEwwUoRkmXdNPHXobkcXGMtcM8e7Gbg1uHd0z4y+pliYOYjy+yZj7AIo2swJk+wwh1NI05FHgO76WrLI6y3k1RMKPUqpY3T0jUAg7U+mKvd2AyYenUcUdGPA2RF3V2OtJ/RIW0Zo3Pfl10lo5rKH+JoJyIdjMFq5QlgA/uyWUybOq7wrZiTbOuKjmZdHkPMw0M/CQs0ksHocosIpX4VzlR6FEYQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6EbsGJD4I+ly+24gKzUWrr0eH8sJyGBH/nl4+VcvseE=;
 b=Fk1LAPwKH5ZARubk4sQ760O8z5lJcdzR1lJoiZ0zHb+vtXEE9Byhl4ND2g3gBNkkkhhhLEKnitE3m5rdjQiKTyAJdNIPfX+ctwPD0eWjjfi2fKbJqkzilQS4NsaWznUI6Nr7djkCkDY0nFuNMpiv7G0LVEdHSRteUsl7fHmseu4p7Bhv+8O0dbbrYJjSC4/2LO2/go184dtIeJmV7mssxULJz79VJ/AEI/FXJVQvkEqBW9TO1zi0F7qV9FNpTpWVsfRZ1tyRhCpId1KTwIxfd0FMzL9iOCA22HWkhqPRjB802lIYsyHSJVRASVlJ8fxGUplLS1+McmdY7SLtSC197w==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6EbsGJD4I+ly+24gKzUWrr0eH8sJyGBH/nl4+VcvseE=;
 b=6BFE1i0AtwuuijixyQdfUCQkSKYPKgoPk+dCgQXjS+cJcvD+O+/b0XULP0CKSk5RNvSwMf4CGv1vYVsAtLvBJgZQugpU7HnaufKI+f4Cv+xMMXOLiDiag+PcCZMFyIGBD+KSOKSf0UwXA4USavBxldfULc2ZyMEYWMMRs0t4bug=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9c978156913fce0f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f0UBpEZVbWleMb/WwPGspq4Uk2FoOl/dO9yYMGdg+GJEd+RPFQUJKHxWtpuCVgqMirgzoEFWrx1XskoGJiEF+G05mow6l90ST8vCRIqqemZILyiOVUmcKXqDz5rQeuyhUOqcHhldpQPboeBJA/JqL+a1dc6E6tSYSCRXGBIssk5+BiIJUlbxxmvYeUo77ghXviFfLN+cZx/dtVL0qLSM3dAwgPZUJV6yAlp7nFuRmUS2luR+0MtfwNolPhwbQgIbq8wZULJTtWZFQ5hvoL8r2Jsvz5nvBRCuRV5HzmCBzd/wFsGAw1xkY3q9Q2k2P6eTh7LhUPUZSVawrtzRSMT/9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6EbsGJD4I+ly+24gKzUWrr0eH8sJyGBH/nl4+VcvseE=;
 b=CSqeLoH9jLVjeYqELhp+XiAoukAd0CTE8IkFIp7fVoYiRjlYC/4tQIKMTnBHM3zgpySKx+WXstnW9AUnNzrVVoI2aKFQH0/vlTe80bsKT/CgGluS4A1l8PUozlMrnwIbm8pJm707GtvPipvrQWgjq+CmWwbSN0R05ucqZCPedqL23oJm8SOtcYYZUOBeHF3/oRSUcJ9Xe06/RU/6KIcRydHW/nJB/ElfkAteOf9IILpfRld8LoeAyg9RtK2w0zIvNFIgvWAgdyUFDQAQm04UUTSreswmQ++knZUUlX9fWfZMXHZchy9WGl/jMR5rWLschCnGoEJVHxEHoVRCy+ysJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6EbsGJD4I+ly+24gKzUWrr0eH8sJyGBH/nl4+VcvseE=;
 b=6BFE1i0AtwuuijixyQdfUCQkSKYPKgoPk+dCgQXjS+cJcvD+O+/b0XULP0CKSk5RNvSwMf4CGv1vYVsAtLvBJgZQugpU7HnaufKI+f4Cv+xMMXOLiDiag+PcCZMFyIGBD+KSOKSf0UwXA4USavBxldfULc2ZyMEYWMMRs0t4bug=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh
	<Rahul.Singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Topic: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Index: AQHYiwEFKAxihkpP+kCxp/TNi89h5q1l/I6A
Date: Wed, 29 Jun 2022 07:24:36 +0000
Message-ID: <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
In-Reply-To: <20220628150851.8627-1-burzalodowa@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: e53defb0-9b19-4336-4778-08da59a073b7
x-ms-traffictypediagnostic:
	GV1PR08MB8011:EE_|DBAEUR03FT048:EE_|AS8PR08MB7919:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <B901EAF10719924DB2C13DBF155DE92F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8011
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5edb8ecb-e51e-4238-5aff-08da59a06b9e
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 07:24:49.7220
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e53defb0-9b19-4336-4778-08da59a073b7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7919

Hi Xenia,

> On 28 Jun 2022, at 16:08, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
> 
> The expression 1 << 31 produces undefined behaviour because the type of integer
> constant 1 is (signed) int and the result of shifting 1 by 31 bits is not
> representable in the (signed) int type.
> Change the type of 1 to unsigned int by adding the U suffix.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> ---
> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
> For GBPA_UPDATE I will submit a patch.
> 
> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 1e857f915a..f2562acc38 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
> #define CR2_E2H				(1 << 0)
> 
> #define ARM_SMMU_GBPA			0x44
> -#define GBPA_UPDATE			(1 << 31)
> +#define GBPA_UPDATE			(1U << 31)
> #define GBPA_ABORT			(1 << 20)
> 
> #define ARM_SMMU_IRQ_CTRL		0x50
> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
> 
> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))

It could also make sense to fix those two to be consistent.

> -#define Q_OVERFLOW_FLAG			(1 << 31)
> +#define Q_OVERFLOW_FLAG			(1U << 31)
> #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
> #define Q_ENT(q, p)			((q)->base +			\
> 					 Q_IDX(&((q)->llq), p) *	\

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 07:50:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 07:50:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357594.586241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SSi-0004kN-0s; Wed, 29 Jun 2022 07:50:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357594.586241; Wed, 29 Jun 2022 07:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SSh-0004kG-TY; Wed, 29 Jun 2022 07:50:11 +0000
Received: by outflank-mailman (input) for mailman id 357594;
 Wed, 29 Jun 2022 07:50:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6SSg-0004kA-CS
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 07:50:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70045.outbound.protection.outlook.com [40.107.7.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 186e4e0e-f780-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 09:50:08 +0200 (CEST)
Received: from DB6PR07CA0193.eurprd07.prod.outlook.com (2603:10a6:6:42::23) by
 VI1PR08MB2704.eurprd08.prod.outlook.com (2603:10a6:802:1b::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.18; Wed, 29 Jun 2022 07:50:02 +0000
Received: from DBAEUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::35) by DB6PR07CA0193.outlook.office365.com
 (2603:10a6:6:42::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.7 via Frontend
 Transport; Wed, 29 Jun 2022 07:50:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT013.mail.protection.outlook.com (100.127.142.222) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 07:50:01 +0000
Received: ("Tessian outbound 514db98d9a19:v121");
 Wed, 29 Jun 2022 07:50:01 +0000
Received: from f12120776115.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 008B0CAC-AE6E-4837-8844-479CEC574696.1; 
 Wed, 29 Jun 2022 07:49:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f12120776115.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 07:49:56 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by DB8PR08MB5017.eurprd08.prod.outlook.com (2603:10a6:10:ef::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 07:49:54 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 07:49:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 186e4e0e-f780-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=T5VCm9R5vCuBtUPeNG+XyONOMGS67zs7KSUUosDB8vavHp+gw6tRfILuACfZGHnohCAJAOThRru5loUoJbOTqYWalAUn9lrsqZ/H0tdBcJ01IxxngGzyr3nXhRvEXW5ra9U0sHmI+j/d1YUKq9cERle22YNQPVaFlpdz7CJONs+6uhBB1Z6CTUHWtO9OxfdiIojF+NhshPkBPPeTNdeIYStnYqKlcTOdawygxx0HjwegKz4FnqwXK9AfidafUExl2LDhUfXe5FMLNsDYVQklWl6+du45z77o0zPE8j+gYAaH7jvcnLzbfVBkuxAGXInAlBAPoztW0okhXgWWy5hVig==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zag62M8tH5+CJv1vPt7L0sAa1qAxkF/R0nqvxTkVhss=;
 b=hjGhXvQa+X7nPM7Ul7KFIoATcGyxUipE0pQ12DBfWvhOt8loN8PHPRY+5Yuz6FWFphvOtUH5AFklWcopz+C/j9PO154+oEqjWgDlS/FYmoZi4/S2rqxt8ETElJCpQMtD4+PAxXsdT5W86HF4J1msGnwoQIah3U6drbRgb1lNerE70XSBgednvpokhztHSj0mN9J83t0OqOD8v/FkVXpXjMxvpGfz3ggbyrmZTpG6yN9jDpK0krijiZZ+9F5hPbhMiC65/nZLYIAt56W12neFTqpMLRRxczADkBhx4r1n+VWeZ7bnQg/Qj/E1fcIthOZzbPrZPR/pU9/qzyTZpLDSaw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zag62M8tH5+CJv1vPt7L0sAa1qAxkF/R0nqvxTkVhss=;
 b=gZlQX+4gEelQRw0v2TWKcgaBr7RTGrhjydwGkNaCpCk7Fv89LWi/kkixgBr/mrDZn3nttmD30jESxucUAUeKd2PfOn0pH00eZrKlcGbb126jpd3Mg4beYb/sB3T+FuM1+BaRE1BFU8KQxKh3c1GFsllErKRwSwclIEIDd0WucLY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QLzFw7K/GlOYLe8e0i9iq2JiZVjk/qjKtWvNPtMcJBlYVgJIdaCFRxvMAXavnhk116AZO9Yw+ZeJdi2nPWonSz3V5JD4FhDtx4NuI0HgoGWlxJfSbOE3T2XvqXeVk4mlGki72VgnNcErE3MaphaxW6tM/g+VuoRAeD2l98Hekgr92PbfKYEQwQphmflK2dgRzKj4+SKWx4YPO+EfLZ6B4qBSi5f6IeMgsn1GDQOBLUAvBIjCo4GbsQMpT0MlG/f6DXTZNXXiHAtLurBFSYqRlFGf1zA5dmCOMIQRzptiK5GBdLWj3rBB0LF0t6/bT/HOSz9vU3gwo+kCJarAG5LElA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zag62M8tH5+CJv1vPt7L0sAa1qAxkF/R0nqvxTkVhss=;
 b=bYTg3ThAknuCpXc1WhYzhVUM2mfq0FbAU+AZ2i29ocAx8oSDMo5oOusLWfbVlu7i8x1pstNUMvHixvHQQcYGxYdQvqxeXKtb0r2Jtrd+U4lssVl1c6OYZCZiFo6+coicT7vQGp8ua3Lt0Tmih/OTT0Quzm70D+6JnDcd0oGgxUJjuDzsbdQQx9j7Ya6lX/GE4Oj8AQC7SrZlIo/4tQe+LjiecRRYvIv1kFg2wvLJfe61T3icGEyjpecbKK04+cdwt00S4V2XLmkqRf/DKjVkll9K1CuJHfL6UlPJ08ccuGt3hACsTHJF7Q7TUXm8E7fM/5ThksnblqVhgJ6b1Eqoag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zag62M8tH5+CJv1vPt7L0sAa1qAxkF/R0nqvxTkVhss=;
 b=gZlQX+4gEelQRw0v2TWKcgaBr7RTGrhjydwGkNaCpCk7Fv89LWi/kkixgBr/mrDZn3nttmD30jESxucUAUeKd2PfOn0pH00eZrKlcGbb126jpd3Mg4beYb/sB3T+FuM1+BaRE1BFU8KQxKh3c1GFsllErKRwSwclIEIDd0WucLY=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 3/8] xen/arm: allocate static shared memory to a
 specific owner domain
Thread-Topic: [PATCH v5 3/8] xen/arm: allocate static shared memory to a
 specific owner domain
Thread-Index: AQHYhGQ+VQS5G+xrzUihN2YrNBvaxq1e8m8AgAcdTKA=
Date: Wed, 29 Jun 2022 07:49:54 +0000
Message-ID:
 <DU2PR08MB73257C8DDCC8DB00E62A7D68F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-4-Penny.Zheng@arm.com>
 <8cf391b9-02a3-6058-35cb-e0a63b8db854@xen.org>
In-Reply-To: <8cf391b9-02a3-6058-35cb-e0a63b8db854@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: DC1E737AB0E2D546BC17B082587D6C56.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: aa73bd23-3a72-49e0-8b51-08da59a3f8c7
x-ms-traffictypediagnostic:
	DB8PR08MB5017:EE_|DBAEUR03FT013:EE_|VI1PR08MB2704:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5017
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2ce7ba0d-37e0-4814-9271-08da59a3f480
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 07:50:01.4558
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aa73bd23-3a72-49e0-8b51-08da59a3f8c7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2704

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 3:07 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v5 3/8] xen/arm: allocate static shared memory to a
> specific owner domain
> 
> Hi Penny,
> 
> On 20/06/2022 06:11, Penny Zheng wrote:
> > If owner property is defined, then owner domain of a static shared
> > memory region is not the default dom_io anymore, but a specific domain.
> >
> > This commit implements allocating static shared memory to a specific
> > domain when owner property is defined.
> >
> > Coding flow for dealing borrower domain will be introduced later in
> > the following commits.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > v5 change:
> > - no change
> > ---
> > v4 change:
> > - no changes
> > ---
> > v3 change:
> > - simplify the code since o_gbase is not used if the domain is dom_io
> > ---
> > v2 change:
> > - P2M mapping is restricted to normal domain
> > - in-code comment fix
> > ---
> >   xen/arch/arm/domain_build.c | 44 +++++++++++++++++++++++++++--------
> --
> >   1 file changed, 33 insertions(+), 11 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 91a5ace851..d4fd64e2bd 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -805,9 +805,11 @@ static mfn_t __init
> acquire_shared_memory_bank(struct domain *d,
> >    */
> >   static int __init allocate_shared_memory(struct domain *d,
> >                                            u32 addr_cells, u32 size_cells,
> > -                                         paddr_t pbase, paddr_t psize)
> > +                                         paddr_t pbase, paddr_t psize,
> > +                                         paddr_t gbase)
> >   {
> >       mfn_t smfn;
> > +    int ret = 0;
> >
> >       dprintk(XENLOG_INFO,
> >              "Allocate static shared memory BANK
> > %#"PRIpaddr"-%#"PRIpaddr".\n", @@ -822,8 +824,18 @@ static int __init
> allocate_shared_memory(struct domain *d,
> >        * DOMID_IO is the domain, like DOMID_XEN, that is not auto-
> translated.
> >        * It sees RAM 1:1 and we do not need to create P2M mapping for it
> >        */
> > -    ASSERT(d == dom_io);
> > -    return 0;
> > +    if ( d != dom_io )
> > +    {
> > +        ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn,
> > + PFN_DOWN(psize));
> 
> Coding style: this line is over 80 characters. And...
> 
> > +        if ( ret )
> > +        {
> > +            printk(XENLOG_ERR
> > +                   "Failed to map shared memory to %pd.\n", d);
> 
> ... this line could be merged with the previous one.
> 
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    return ret;
> >   }
> >
> >   static int __init process_shm(struct domain *d, @@ -836,6 +848,8 @@
> > static int __init process_shm(struct domain *d,
> >       u32 shm_id;
> >       u32 addr_cells, size_cells;
> >       paddr_t gbase, pbase, psize;
> > +    const char *role_str;
> > +    bool owner_dom_io = true;
> 
> I think it would be best if role_str and owner_dom_io are defined within the
> loop. Same goes for all the other declarations.
> 
> >
> >       dt_for_each_child_node(node, shm_node)
> >       {
> > @@ -862,19 +876,27 @@ static int __init process_shm(struct domain *d,
> >           ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize,
> PAGE_SIZE));
> >           gbase = dt_read_number(cells, addr_cells);
> >
> > -        /* TODO: Consider owner domain is not the default dom_io. */
> > +        /*
> > +         * "role" property is optional and if it is defined explicitly,
> > +         * then the owner domain is not the default "dom_io" domain.
> > +         */
> > +        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
> > +            owner_dom_io = false
> IIUC, the role is per-region. However, owner_dom_io is first initialized to
> false outside the loop. Therefore, the variable may not be correct on the next
> region.
> 
> So I think you want to write:
> 
> owner_dom_io = !dt_property_read_string(...);
> 
> This can also be avoided if you reduce the scope of the variable (it is meant
> to only be used in the loop).
> 

Yes, it is a bug, thx!!! I'll reduce the scope.

> > +
> >           /*
> >            * Per static shared memory region could be shared between multiple
> >            * domains.
> > -         * In case re-allocating the same shared memory region, we check
> > -         * if it is already allocated to the default owner dom_io before
> > -         * the actual allocation.
> > +         * So when owner domain is the default dom_io, in case re-allocating
> > +         * the same shared memory region, we check if it is already allocated
> > +         * to the default owner dom_io before the actual allocation.
> >           */
> > -        if ( !is_shm_allocated_to_domio(pbase) )
> > +        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
> > +             (!owner_dom_io && strcmp(role_str, "owner") == 0) )
> >           {
> > -          
ICAvKiBBbGxvY2F0ZSBzdGF0aWNhbGx5IHNoYXJlZCBwYWdlcyB0byB0aGUgZGVmYXVsdCBvd25l
ciBkb21faW8uICovDQo+ID4gLSAgICAgICAgICAgIHJldCA9IGFsbG9jYXRlX3NoYXJlZF9tZW1v
cnkoZG9tX2lvLCBhZGRyX2NlbGxzLCBzaXplX2NlbGxzLA0KPiA+IC0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBiYXNlLCBwc2l6ZSk7DQo+ID4gKyAgICAgICAgICAg
IC8qIEFsbG9jYXRlIHN0YXRpY2FsbHkgc2hhcmVkIHBhZ2VzIHRvIHRoZSBvd25lciBkb21haW4u
ICovDQo+ID4gKyAgICAgICAgICAgIHJldCA9IGFsbG9jYXRlX3NoYXJlZF9tZW1vcnkob3duZXJf
ZG9tX2lvID8gZG9tX2lvIDogZCwNCj4gPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBhZGRyX2NlbGxzLCBzaXplX2NlbGxzLA0KPiA+ICsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBiYXNlLCBwc2l6ZSwgZ2Jhc2UpOw0KPiA+ICAgICAg
ICAgICAgICAgaWYgKCByZXQgKQ0KPiA+ICAgICAgICAgICAgICAgICAgIHJldHVybiByZXQ7DQo+
ID4gICAgICAgICAgIH0NCj4gDQo+IENoZWVycywNCj4gDQo+IC0tDQo+IEp1bGllbiBHcmFsbA0K


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 07:57:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 07:57:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357601.586251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SZq-0005UJ-VF; Wed, 29 Jun 2022 07:57:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357601.586251; Wed, 29 Jun 2022 07:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SZq-0005UC-Sd; Wed, 29 Jun 2022 07:57:34 +0000
Received: by outflank-mailman (input) for mailman id 357601;
 Wed, 29 Jun 2022 07:57:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=los1=XE=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1o6SZo-0005U6-Cw
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 07:57:32 +0000
Received: from sonic316-8.consmr.mail.gq1.yahoo.com
 (sonic316-8.consmr.mail.gq1.yahoo.com [98.137.69.32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f06e91f-f781-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 09:57:30 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Wed, 29 Jun 2022 07:57:27 +0000
Received: by hermes--production-bf1-58957fb66f-xc7t4 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 54da85cd042d010097af8a7b4a3a105a; 
 Wed, 29 Jun 2022 07:57:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f06e91f-f781-11ec-bd2d-47488cf2e6aa
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	qemu-stable@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v4] xen/pass-through: merge emulated bits correctly
Date: Wed, 29 Jun 2022 03:57:12 -0400
Message-Id: <5cd07587898cac43bf4b7a52489c380a44cab652.1656480662.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <5cd07587898cac43bf4b7a52489c380a44cab652.1656480662.git.brchuckz.ref@aol.com>
Content-Length: 3063

In xen_pt_config_reg_init(), there is an error in the merging of the
emulated data with the host value: instead of merging the emulated bits
with the host bits as defined by emu_mask, the current QEMU merges the
emulated bits with the host bits as defined by the inverse of emu_mask.
In some cases, depending on the data in the registers on the host, the
way the registers are set up, and the initial values of the emulated
bits, the end result is that the register is initialized with the wrong
value.

To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
the merge is done correctly.

This correction is needed to resolve QEMU project issue #1061, which
describes the failure of Xen HVM Linux guests to boot in certain
configurations with passed-through PCI devices. In those configurations
the error disables, instead of enabling, the PCI_STATUS_CAP_LIST bit of
the PCI_STATUS register of a passed-through PCI device, which in turn
disables the MSI-X capability of the device in the Linux guest, with the
end result that the guest never completes the boot process.

Fixes: 2e87512eccf3 ("xen/pt: Sync up the dev.config and data values")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Edit the commit message to more accurately describe the cause
of the error.

v3: * Add Reviewed-By: Anthony Perard <anthony.perard@citrix.com>
    * Add qemu-stable@nongnu.org to recipients to indicate the patch
      may be suitable for backport to Qemu stable

v4: * Add the subject of the fixed commit to the Fixes: 2e87512eccf3 tag

Thank you, Anthony, for taking the time to review this patch.

Sorry for the extra noise with v4 (I thought the subject of the fixed
commit would be added automatically).

 hw/xen/xen_pt_config_init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index cad4aeba84..21839a3c98 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1966,10 +1966,10 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
         if ((data & host_mask) != (val & host_mask)) {
             uint32_t new_val;
 
-            /* Mask out host (including past size). */
-            new_val = val & host_mask;
-            /* Merge emulated ones (excluding the non-emulated ones). */
-            new_val |= data & host_mask;
+            /* Merge the emulated bits (data) with the host bits (val)
+             * and mask out the bits past size to enable restoration
+             * of the proper value for logging below. */
+            new_val = XEN_PT_MERGE_VALUE(val, data, host_mask) & size_mask;
             /* Leave intact host and emulated values past the size - even though
              * we do not care as we write per reg->size granularity, but for the
              * logging below lets have the proper value. */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:01:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:01:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357612.586263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SdL-0007X1-Rn; Wed, 29 Jun 2022 08:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357612.586263; Wed, 29 Jun 2022 08:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6SdL-0007Wu-MK; Wed, 29 Jun 2022 08:01:11 +0000
Received: by outflank-mailman (input) for mailman id 357612;
 Wed, 29 Jun 2022 08:01:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6SdK-0007W3-LR
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 08:01:10 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-eopbgr80087.outbound.protection.outlook.com [40.107.8.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a275be6c-f781-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 10:01:09 +0200 (CEST)
Received: from DB6PR07CA0191.eurprd07.prod.outlook.com (2603:10a6:6:42::21) by
 HE1PR0802MB2411.eurprd08.prod.outlook.com (2603:10a6:3:db::13) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.18; Wed, 29 Jun 2022 08:01:06 +0000
Received: from DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::6f) by DB6PR07CA0191.outlook.office365.com
 (2603:10a6:6:42::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.7 via Frontend
 Transport; Wed, 29 Jun 2022 08:01:06 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT057.mail.protection.outlook.com (100.127.142.182) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 08:01:05 +0000
Received: ("Tessian outbound 8dc5ba215ad1:v121");
 Wed, 29 Jun 2022 08:01:05 +0000
Received: from 49b17dcaed49.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6D9FA508-8DAF-44B0-A6B5-025FD946DEA6.1; 
 Wed, 29 Jun 2022 08:00:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 49b17dcaed49.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 08:00:59 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by AM5PR0801MB1732.eurprd08.prod.outlook.com (2603:10a6:203:35::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 08:00:58 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 08:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a275be6c-f781-11ec-bd2d-47488cf2e6aa
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 5/8] xen/arm: Add additional reference to owner domain
 when the owner is allocated
Thread-Topic: [PATCH v5 5/8] xen/arm: Add additional reference to owner domain
 when the owner is allocated
Thread-Index: AQHYhGRF6qbL+APV6ECW1Ou9EE75yq1e9ZGAgAcdi0A=
Date: Wed, 29 Jun 2022 08:00:58 +0000
Message-ID:
 <DU2PR08MB732566EA1E988ECDD6A6C172F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-6-Penny.Zheng@arm.com>
 <3e397ff3-0b67-523c-179a-0a2035b081da@xen.org>
In-Reply-To: <3e397ff3-0b67-523c-179a-0a2035b081da@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1E0E0A75A317E34097305742E9FEFB52.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 39e010da-dead-4871-f190-08da59a584d8
x-ms-traffictypediagnostic:
	AM5PR0801MB1732:EE_|DBAEUR03FT057:EE_|HE1PR0802MB2411:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1732
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ecfe3ecb-1551-48b1-d0f6-08da59a5803b
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 08:01:05.9476
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 39e010da-dead-4871-f190-08da59a584d8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2411

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 3:18 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v5 5/8] xen/arm: Add additional reference to owner
> domain when the owner is allocated
> 
> Hi Penny,
> 
> On 20/06/2022 06:11, Penny Zheng wrote:
> > Borrower domain will fail to get a page ref using the owner domain
> > during allocation, when the owner is created after borrower.
> >
> > So here, we decide to get and add the right amount of reference, which
> > is the number of borrowers, when the owner is allocated.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> > v5 change:
> > - no change
> > ---
> > v4 changes:
> > - no change
> > ---
> > v3 change:
> > - printk rather than dprintk since it is a serious error
> > ---
> > v2 change:
> > - new commit
> > ---
> >   xen/arch/arm/domain_build.c | 62
> +++++++++++++++++++++++++++++++++++++
> >   1 file changed, 62 insertions(+)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index d4fd64e2bd..650d18f5ef 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -799,6 +799,34 @@ static mfn_t __init
> > acquire_shared_memory_bank(struct domain *d,
> >
> >   }
> >
> > +static int __init acquire_nr_borrower_domain(struct domain *d,
> > +                                             paddr_t pbase, paddr_t psize,
> > +                                             unsigned long
> > +*nr_borrowers) {
> > +    unsigned long bank;
> > +
> > +    /* Iterate reserved memory to find requested shm bank. */
> > +    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
> > +    {
> > +        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
> > +        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
> > +
> > +        if ( pbase == bank_start && psize == bank_size )
> > +            break;
> > +    }
> > +
> > +    if ( bank == bootinfo.reserved_mem.nr_banks )
> > +        return -ENOENT;
> > +
> > +    if ( d == dom_io )
> > +        *nr_borrowers =
> bootinfo.reserved_mem.bank[bank].nr_shm_domain;
> > +    else
> > +        /* Exclude the owner domain itself. */
> NIT: I think this comment wants to be just above the 'if' and expanded to
> explain why the "dom_io" is not included. AFAIU, this is because "dom_io" is
> not described in the Device-Tree, so it would not be taken into account for
> nr_shm_domain.
> 
> > +        *nr_borrowers =
> > + bootinfo.reserved_mem.bank[bank].nr_shm_domain - 1;
> 
> TBH, given the use here. I would have consider to not increment
> nr_shm_domain if the role was owner in parsing code. This is v5 now, so I
> would be OK with the comment above.
> 
> But I would suggest to consider it as a follow-up.
> 

LTM, it is not a big change, I'll try to include it in the next serie~

> > +
> > +    return 0;
> > +}
> > +
> >   /*
> >    * Func allocate_shared_memory is supposed to be only called
> >    * from the owner.
> > @@ -810,6 +838,8 @@ static int __init allocate_shared_memory(struct
> domain *d,
> >   {
> >       mfn_t smfn;
> >       int ret = 0;
> > +    unsigned long nr_pages, nr_borrowers, i;
> > +    struct page_info *page;
> >
> >       dprintk(XENLOG_INFO,
> >               "Allocate static shared memory BANK
> > %#"PRIpaddr"-%#"PRIpaddr".\n", @@ -824,6 +854,7 @@ static int __init
> allocate_shared_memory(struct domain *d,
> >        * DOMID_IO is the domain, like DOMID_XEN, that is not auto-
> translated.
> >        * It sees RAM 1:1 and we do not need to create P2M mapping for it
> >        */
> > +    nr_pages = PFN_DOWN(psize);
> >       if ( d != dom_io )
> >       {
> >           ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn,
> > PFN_DOWN(psize)); @@ -835,6 +866,37 @@ static int __init
> allocate_shared_memory(struct domain *d,
> >           }
> >       }
> >
> > +    /*
> > +     * Get the right amount of references per page, which is the number of
> > +     * borrow domains.
> > +     */
> > +    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
> > +    if ( ret )
> > +        return ret;
> > +
> > +    /*
> > +     * Instead of let borrower domain get a page ref, we add as many
> 
> Typo: s/let/letting/
> 
> > +     * additional reference as the number of borrowers when the owner
> > +     * is allocated, since there is a chance that owner is created
> > +     * after borrower.
> 
> What if the borrower is created first? Wouldn't this result to add pages in the
> P2M without reference?
> 
> If yes, then I think this is worth an explaination.
> 

Yes, it is intended to be the way you said, and I'll add a comment to explain.

> > +     */
> > +    page = mfn_to_page(smfn);
> 
> Where do you validate the range [smfn, nr_pages]?
> 
> > +    for ( i = 0; i < nr_pages; i++ )
> > +    {
> > +        if ( !get_page_nr(page + i, d, nr_borrowers) )
> > +        {
> > +            printk(XENLOG_ERR
> > +                   "Failed to add %lu references to page %"PRI_mfn".\n",
> > +                   nr_borrowers, mfn_x(smfn) + i);
> > +            goto fail;
> > +        }
> > +    }
> > +
> > +    return 0;
> > +
> > >
ICsgZmFpbDoNCj4gPiArICAgIHdoaWxlICggLS1pID49IDAgKQ0KPiA+ICsgICAgICAgIHB1dF9w
YWdlX25yKHBhZ2UgKyBpLCBucl9ib3Jyb3dlcnMpOw0KPiA+ICAgICAgIHJldHVybiByZXQ7DQo+
ID4gICB9DQo+ID4NCj4gDQo+IENoZWVycywNCj4gDQo+IC0tDQo+IEp1bGllbiBHcmFsbA0K


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:23:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357618.586274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Syv-0001ig-JM; Wed, 29 Jun 2022 08:23:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357618.586274; Wed, 29 Jun 2022 08:23:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Syv-0001iZ-GA; Wed, 29 Jun 2022 08:23:29 +0000
Received: by outflank-mailman (input) for mailman id 357618;
 Wed, 29 Jun 2022 08:23:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6Syu-0001iT-K0
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 08:23:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Syt-0004iJ-Bq; Wed, 29 Jun 2022 08:23:27 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=[10.0.0.187])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Syt-0006DX-5K; Wed, 29 Jun 2022 08:23:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mzau7RHWT9YKHDzGYL/HJDzSwOr7eE/vuFBO/z2bUXc=; b=pU5+SKA1SywQNXMi74on9GffjM
	QW/r8/VVg2fga6nva+xLQ43f/Pn9SYIFhXTKNwFJAmlS+fXybJrhRXG3zX2pymjlFNVvlj/O9hDx7
	iWMqa4RKBgPgtav2Uhl0BY4zuuIs0KceRvDw064UoUJTF/yZIiHPi1dJpftbI2p4kgKg=;
Message-ID: <26a1b208-7192-a64f-ca6d-c144de89ed2c@xen.org>
Date: Wed, 29 Jun 2022 09:23:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: dmitry.semenets@gmail.com, xen-devel@lists.xenproject.org,
 Dmytro Semenets <dmytro_semenets@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
 <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
 <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
 <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org>
 <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 28/06/2022 23:56, Stefano Stabellini wrote:
>> The advantage of the panic() is it will remind us that something needs to be
>> fixed. With a warning (or WARN()) people will tend to ignore it.
> 
> I know that this specific code path (cpu off) is probably not super
> relevant for what I am about to say, but as we move closer to safety
> certifiability we need to get away from using "panic" and BUG_ON as a
> reminder that more work is needed to have a fully correct implementation
> of something.

I don't think we have many places at runtime using BUG_ON()/panic(). 
They are often used because we think Xen would not be able to recover if 
the condition is hit.

I am happy to remove them, but this should not come at the expense of 
introducing other potentially weird bugs.

> 
> I also see your point and agree that ASSERT is not acceptable for
> external input but from my point of view panic is the same (slightly
> worse because it doesn't go away in production builds).

I think it depends on your target. Would you be happy if Xen continued to 
run with a potentially fatal flaw?

> 
> Julien, if you are going to ack the patch, feel free to go ahead.

I will do and commit it.

Cheers,

-- 
Julien Grall
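
The distinction under discussion is between a fatal check (BUG_ON()/panic(), which stops Xen because the condition is considered unrecoverable) and a warning (WARN(), which logs and lets execution continue, possibly with the flaw still present). A userspace sketch of the two behaviours; the `TOY_*` macro names are illustrative, not Xen's actual implementations:

```c
#include <stdio.h>
#include <stdlib.h>

static int warn_count; /* how many warnings have fired so far */

/* WARN-style check: log the condition and keep going. */
#define TOY_WARN_ON(cond)                                             \
    ((cond) ? (warn_count++,                                          \
               fprintf(stderr, "WARNING at %s:%d\n",                  \
                       __FILE__, __LINE__), 1)                        \
            : 0)

/* BUG_ON-style check: the condition is treated as unrecoverable,
 * so execution stops here instead of continuing in a bad state. */
#define TOY_BUG_ON(cond)                                              \
    do {                                                              \
        if ( cond )                                                   \
        {                                                             \
            fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__);    \
            abort();                                                  \
        }                                                             \
    } while ( 0 )

/* With the WARN flavour, the caller reaches the code after the
 * check and carries on; with TOY_BUG_ON it never would. */
static int check_and_continue(int broken)
{
    TOY_WARN_ON(broken);
    return warn_count;
}
```

Calling `check_and_continue(1)` logs a warning and returns normally, which is exactly the trade-off debated above: execution survives, but so may the underlying flaw.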


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:40:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357625.586285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TEr-0003bo-3D; Wed, 29 Jun 2022 08:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357625.586285; Wed, 29 Jun 2022 08:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TEq-0003bh-VP; Wed, 29 Jun 2022 08:39:56 +0000
Received: by outflank-mailman (input) for mailman id 357625;
 Wed, 29 Jun 2022 08:39:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rlzK=XE=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1o6TEp-0003bb-DU
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 08:39:55 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60072.outbound.protection.outlook.com [40.107.6.72])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b247f52-f787-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 10:39:52 +0200 (CEST)
Received: from AM7PR02CA0006.eurprd02.prod.outlook.com (2603:10a6:20b:100::16)
 by AM9PR08MB6193.eurprd08.prod.outlook.com (2603:10a6:20b:282::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Wed, 29 Jun
 2022 08:39:50 +0000
Received: from AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::3f) by AM7PR02CA0006.outlook.office365.com
 (2603:10a6:20b:100::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 08:39:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT048.mail.protection.outlook.com (10.152.17.177) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 08:39:50 +0000
Received: ("Tessian outbound 514db98d9a19:v121");
 Wed, 29 Jun 2022 08:39:49 +0000
Received: from e26d1694e332.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3E9584DE-E30C-455A-AA56-B563E737A172.1; 
 Wed, 29 Jun 2022 08:39:43 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e26d1694e332.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 08:39:43 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com (2603:10a6:10:2e4::7)
 by DB6PR0802MB2486.eurprd08.prod.outlook.com (2603:10a6:4:a0::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 08:39:42 +0000
Received: from DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8]) by DU2PR08MB7325.eurprd08.prod.outlook.com
 ([fe80::50e5:8757:6643:77f8%9]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 08:39:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b247f52-f787-11ec-b725-ed86ccbb4733
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=XaShdd57gP/hjaE+l3f+H7CcnXHEbcRWfUhrFRBSoe3eR4S/3zrxlFL0RkrpLPymHSwK42XerXrHYFr3NT1Z5NzidwiUJNpExgCklDRWk6BV+kfc1i3geyCqaZgY/vGIYrQc7m1lPmizqxVxKu2PggYQgO5TCicGwc8XuT4yNDmLC25dOtFymXBhGeXVzg62/x31e/i0ernlVo/DZbukOw1E2X0hoW9U83VT06jQPW0fB+6k6CpMt20LwWBEwaSxms9ZT6HosREuJsS+sibeKIDcCtB0P6iMkmx8/oIm7on4tXJyt3r0cNmonKxZOzQ1KYDKLlI5/T/Xt/frSBYG2A==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ThyY9lgDddYrBpL/j+Izlw3fl/9L0beLfZ8UUZAG4+s=;
 b=F6n3rpZxj+ZeibgnYD5ZZACtQFMrHCmRAJQ4DtjJ62gLuwUxsC66FejSwoigNKpJ8KNz3W4j6UBWk5XV5IzG3vICEB8wdPguWAe4CY8cW3BdHODjsAl25dGL1eJgYuAOyb1H+yLWKYHZmLZxF/B52ULr8g7UTgmpUbgBGNKMU+4X3FI0A5c8WUy4cCZRs7OVQST8rfWC1PaGFk6QBlujZCR43XDH8rCs86thU+2Jca799YDaKRI1V6pRKD9XP8WN35mUxqcrc/iUrLusXc+EOMYjHCqrLXQ2cD2hwpMAPk1rl8xQy5MUh5ZL2sWjcHIyAy6/OKA8Yxum0sOpInbHPQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ThyY9lgDddYrBpL/j+Izlw3fl/9L0beLfZ8UUZAG4+s=;
 b=ldd50p1vyHx6yfFOzhiYcgs0Jwo4mfQQyEZ7Co/6nO6f27LPgkv/NQaqMM1qM8elj60kaNrmxBJNutqtybdaghlqoFoZtbRJIoMl1l1udMHfu5fO7fF41TRsb9KsD34wQj62QXmm0mKhok1jyd+vSWUQfLEgtaPNqdlWHJCt8k8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fcE0XaGRs+CkJjC1Ec3HjYTFVDU+1x7+b9GgTDyxtRiLXcRWb9CFNRqZEVa+LRWS5sC+W4qkXStaNQi3+YMX3gEWeBqikrRmiXihtkAckZY81n77Osc11kf4Z9XdIC1+x19mKmpXeIbISmsgLJQ8vT2eDY86UWVZ1SvEe1a/oQ3IQao+4PMQU4K8Vbw40w2EajJPi4likPwfA0DBi6eC+G0YXoSJSOV+It2j2Sm9U0yAQFpSdZiJI+r4uA1YOUCgm3U8jLP1mfDbM78EMVKNyp/x1OWjWv0/nLE3pjoaXEPdphwQ+NzI9hLr3zOrG1cSp8mNVBQg/7ylsT2PqrGxyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ThyY9lgDddYrBpL/j+Izlw3fl/9L0beLfZ8UUZAG4+s=;
 b=HRkWV39I74WGOe0WFgibqjBwtaao0S+FeaUghnte2bxds5u6x3fvNGmjcYRiuMoFCbpALcmNkFTywwIsd1lr8vZjfY+pN4SJDLFZDfSIjfglEulXaKPcmOjDOyAWR25BOtNGn1yKJCgmOArXyvw5RHrM/mkP2LjN8c1XmowjCXT17iryXMLbS6gRTYriBICPyQy6JwvDVktgqNeQLossF7WH4Yk+3chIDZcflYJJWfZGPgYqD5oeYsBEh3tOP1PkstP6alprGtD5Ec2GZEQGSXajt/o+RzY3JunnR6rAXXtWJfV9nV1W50/B0jEsnCoaatlQBRoyecrroC8/lFVXRA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ThyY9lgDddYrBpL/j+Izlw3fl/9L0beLfZ8UUZAG4+s=;
 b=ldd50p1vyHx6yfFOzhiYcgs0Jwo4mfQQyEZ7Co/6nO6f27LPgkv/NQaqMM1qM8elj60kaNrmxBJNutqtybdaghlqoFoZtbRJIoMl1l1udMHfu5fO7fF41TRsb9KsD34wQj62QXmm0mKhok1jyd+vSWUQfLEgtaPNqdlWHJCt8k8=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Topic: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Index: AQHYhGRAIMWAAZoglEyYz3pT8Ubq/a1e3kuAgAc4ucA=
Date: Wed, 29 Jun 2022 08:39:42 +0000
Message-ID:
 <DU2PR08MB7325E703004D3BB160C2CF50F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
 <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
In-Reply-To: <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F17094AFF1292A43AB8A85E75C4E9347.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 29a9e31e-7905-482e-1e92-08da59aaee37
x-ms-traffictypediagnostic:
	DB6PR0802MB2486:EE_|AM5EUR03FT048:EE_|AM9PR08MB6193:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 MhDqMMnI7GTXv0jHdVnYxxG1fyfF30Cz1usFQfpJQKWCfkEx8OSy24w8mtcUMVixgJ1+1OlZEo5z41NUKnfvuOajJfGFM/IzihjrvB22AutwUVndwu1cF+6s9R5JqcJrQ/vlvWCXkRm8Tl1jrBgQtuncJzEVfakd7KWKN/ug9t+dwvpZhTI67DyBkPbF6ltPYXXgQ9yUP5vKAyiu5xxlk3Dx4PAGHjLHf2eWS6q0jeaiMr0Tn/Q36R73Z8ATF9RR7uoKf5ZpBTTAA8de0GNBBMVgsHf1CL5q2PJeZ8k52viJpLeAeCgLgVMP0Cwk62XmYfXB5iDyBB/f3uKd2mVGD+Uv4dMiSvWzc75PFBqNICEozVhM5x3VlBP8Clz83wnvUXB0/0BlDYr/r0+pUwnl/npTYynEYYtBIBlnxpEBbkK6QwIx81eI3MWn3nRdLmezrBjH91vCMzUdGOHLo45eX0qeWD/RotgjDH4QUtHhODX+Xl7CJv9ImX3UReVkvOoozcP0QJx+xHYwMsRovYXATpvdOPjAiOy9W0kQys3SOfnqcsvKWVzN64oQhaykggj2NO8uhMnZjlwxwfaa0ZlJ6ZxbF7L3aGf4DDUfrj0zRmtHpTDKeSWVTivZAx5GyBbSAivtAXf1iDH4uh95pBDhNmy2g3b99yPRBYnkFnJjuXiT87UXKTHki32bjaTQKLe7DIm5VT+vTHSx3ia4CDWRKUe2Ajd9MG6Vh42U/oaJBc2K5IQ2uPH+knY39NOiYjCNLHk5aHBXsxw4aXbrPZ0H6KBw5MoRf5nYCmz9KqREDuc=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DU2PR08MB7325.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(366004)(136003)(346002)(396003)(376002)(26005)(9686003)(52536014)(38100700002)(86362001)(5660300002)(8676002)(41300700001)(71200400001)(316002)(2906002)(66556008)(6506007)(38070700005)(8936002)(64756008)(478600001)(186003)(33656002)(122000001)(53546011)(4326008)(66476007)(66446008)(76116006)(66946007)(54906003)(110136005)(7696005)(55016003)(83380400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2486
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b9be7eab-b81c-447c-c611-08da59aae975
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CCShZfTcF95ZNho9Jc6NC52gZS7zNCbivX3n/sg46diSM7XQTIOrByPG59v9vlqNnArZ0se8O8e4qOenpiThP8n6+pBHR4T/6k8wgVIFRsFcu2nl2+YwDDS0xcuH5dWyvNBVgdLrYkz7bzCoXoRP3Ftom9SIAk79pM2Q3bpMwKODgoTgL8eiQpSPV4TS2ff3e1JMrcrN28dMR1h9BODARngYy2tV3Upj9iuPS/gfB85KD3YW3qVr3ChYssMLuEvaFNv19+cWjVqjNwHEcy/4MUNMTXM2J11NH6OjBresGYqO/ZdYlmkbtagSHXDK0ppMHCCu+kuzE7N4Ewyr+XqVjQz5couLPH998ADlS7ExGl9yTzLz6ibRyZQb5hIMRQCt6wF4w89suxHKu0oNqS6GCsO865lkO/eOZD+Nj176Xaorf5azyAQnCWeYrc8ua5CvNZrcIEU+S3sThjUIhZ9bmN2RixttRkaxdaPM3KkJIt+lGU7A4nxZd3GsaOH1ZW451vLi9yfIs0lnw7Oy7LJnXAAsrlsNEpIsk7A0pggyWU+/21IAXNuSwwTUjZ7U8sanQOA5LlfJ2MKChEJontiND0MxY35cq/6YkzKuC3J16nyXgQoLsJCyzliniz7vFkE7627aFfgciY327Ma9Ca2AEgG2chFfGSDOzUhy1RIoOtHduStLEzcnZsjBUl71xhzBbVywFra3w4aqZ/iR4SvEU/d1FCjILMAGIph3VVfNW5dji5lT9cbgYnyoLeLQQ/5yZFcuH8iFqkH1u2Rz6Vwz2ovgIXSkm8L/IYDqp0jFNk1CSGQidIlQKyybgatwDYWo
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230016)(4636009)(136003)(396003)(376002)(346002)(39860400002)(36840700001)(46966006)(40470700004)(41300700001)(83380400001)(107886003)(82310400005)(47076005)(36860700001)(40460700003)(33656002)(186003)(316002)(6506007)(9686003)(110136005)(26005)(55016003)(336012)(7696005)(4326008)(40480700001)(5660300002)(478600001)(70586007)(52536014)(8676002)(356005)(54906003)(53546011)(81166007)(8936002)(82740400003)(2906002)(86362001)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 08:39:50.1311
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 29a9e31e-7905-482e-1e92-08da59aaee37
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6193

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogU2F0dXJkYXksIEp1bmUgMjUsIDIwMjIg
MTo1NSBBTQ0KPiBUbzogUGVubnkgWmhlbmcgPFBlbm55LlpoZW5nQGFybS5jb20+OyB4ZW4tZGV2
ZWxAbGlzdHMueGVucHJvamVjdC5vcmcNCj4gQ2M6IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29t
PjsgU3RlZmFubyBTdGFiZWxsaW5pDQo+IDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPjsgQmVydHJh
bmQgTWFycXVpcyA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsNCj4gVm9sb2R5bXlyIEJhYmNo
dWsgPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENIIHY1
IDEvOF0geGVuL2FybTogaW50cm9kdWNlIHN0YXRpYyBzaGFyZWQgbWVtb3J5DQo+IA0KPiBIaSBQ
ZW5ueSwNCj4gDQo+IE9uIDIwLzA2LzIwMjIgMDY6MTEsIFBlbm55IFpoZW5nIHdyb3RlOg0KPiA+
IEZyb206IFBlbm55IFpoZW5nIDxwZW5ueS56aGVuZ0Bhcm0uY29tPg0KPiA+DQo+ID4gVGhpcyBw
YXRjaCBzZXJpZSBpbnRyb2R1Y2VzIGEgbmV3IGZlYXR1cmU6IHNldHRpbmcgdXAgc3RhdGljDQo+
IA0KPiBUeXBvOiBzL3NlcmllL3Nlcmllcy8NCj4gDQo+ID4gc2hhcmVkIG1lbW9yeSBvbiBhIGRv
bTBsZXNzIHN5c3RlbSwgdGhyb3VnaCBkZXZpY2UgdHJlZSBjb25maWd1cmF0aW9uLg0KPiA+DQo+
ID4gVGhpcyBjb21taXQgcGFyc2VzIHNoYXJlZCBtZW1vcnkgbm9kZSBhdCBib290LXRpbWUsIGFu
ZCByZXNlcnZlIGl0IGluDQo+ID4gYm9vdGluZm8ucmVzZXJ2ZWRfbWVtIHRvIGF2b2lkIG90aGVy
IHVzZS4NCj4gPg0KPiA+IFRoaXMgY29tbWl0cyBwcm9wb3NlcyBhIG5ldyBLY29uZmlnIENPTkZJ
R19TVEFUSUNfU0hNIHRvIHdyYXANCj4gPiBzdGF0aWMtc2htLXJlbGF0ZWQgY29kZXMsIGFuZCB0
aGlzIG9wdGlvbiBkZXBlbmRzIG9uIHN0YXRpYyBtZW1vcnkoDQo+ID4gQ09ORklHX1NUQVRJQ19N
RU1PUlkpLiBUaGF0J3MgYmVjYXVzZSB0aGF0IGxhdGVyIHdlIHdhbnQgdG8gcmV1c2UgYQ0KPiA+
IGZldyBoZWxwZXJzLCBndWFyZGVkIHdpdGggQ09ORklHX1NUQVRJQ19NRU1PUlksIGxpa2UNCj4g
PiBhY3F1aXJlX3N0YXRpY21lbV9wYWdlcywgZXRjLCBvbiBzdGF0aWMgc2hhcmVkIG1lbW9yeS4N
Cj4gPg0KPiA+IFNpZ25lZC1vZmYtYnk6IFBlbm55IFpoZW5nIDxwZW5ueS56aGVuZ0Bhcm0uY29t
Pg0KPiA+IFJldmlld2VkLWJ5OiBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5l
bC5vcmc+DQo+ID4gLS0tDQo+ID4gdjUgY2hhbmdlOg0KPiA+IC0gbm8gY2hhbmdlDQo+ID4gLS0t
DQo+ID4gdjQgY2hhbmdlOg0KPiA+IC0gbml0IGZpeCBvbiBkb2MNCj4gPiAtLS0NCj4gPiB2MyBj
aGFuZ2U6DQo+ID4gLSBtYWtlIG5yX3NobV9kb21haW4gdW5zaWduZWQgaW50DQo+ID4gLS0tDQo+
ID4gdjIgY2hhbmdlOg0KPiA+IC0gZG9jdW1lbnQgcmVmaW5lbWVudA0KPiA+IC0gcmVtb3ZlIGJp
dG1hcCBhbmQgdXNlIHRoZSBpdGVyYXRpb24gdG8gY2hlY2sNCj4gPiAtIGFkZCBhIG5ldyBmaWVs
ZCBucl9zaG1fZG9tYWluIHRvIGtlZXAgdGhlIG51bWJlciBvZiBzaGFyZWQgZG9tYWluDQo+ID4g
LS0tDQo+ID4gICBkb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IHwgMTIwDQo+
ICsrKysrKysrKysrKysrKysrKysrKysrKysrDQo+ID4gICB4ZW4vYXJjaC9hcm0vS2NvbmZpZyAg
ICAgICAgICAgICAgICAgIHwgICA2ICsrDQo+ID4gICB4ZW4vYXJjaC9hcm0vYm9vdGZkdC5jICAg
ICAgICAgICAgICAgIHwgIDY4ICsrKysrKysrKysrKysrKw0KPiA+ICAgeGVuL2FyY2gvYXJtL2lu
Y2x1ZGUvYXNtL3NldHVwLmggICAgICB8ICAgMyArDQo+ID4gICA0IGZpbGVzIGNoYW5nZWQsIDE5
NyBpbnNlcnRpb25zKCspDQo+ID4NCj4gPiBkaWZmIC0tZ2l0IGEvZG9jcy9taXNjL2FybS9kZXZp
Y2UtdHJlZS9ib290aW5nLnR4dA0KPiA+IGIvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290
aW5nLnR4dA0KPiA+IGluZGV4IDk4MjUzNDE0YjguLjY0NjdiYzVhMjggMTAwNjQ0DQo+ID4gLS0t
IGEvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dA0KPiA+ICsrKyBiL2RvY3Mv
bWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQNCj4gPiBAQCAtMzc4LDMgKzM3OCwxMjMg
QEAgZGV2aWNlLXRyZWU6DQo+ID4NCj4gPiAgIFRoaXMgd2lsbCByZXNlcnZlIGEgNTEyTUIgcmVn
aW9uIHN0YXJ0aW5nIGF0IHRoZSBob3N0IHBoeXNpY2FsIGFkZHJlc3MNCj4gPiAgIDB4MzAwMDAw
MDAgdG8gYmUgZXhjbHVzaXZlbHkgdXNlZCBieSBEb21VMS4NCj4gPiArDQo+ID4gK1N0YXRpYyBT
aGFyZWQgTWVtb3J5DQo+ID4gKz09PT09PT09PT09PT09PT09PT09DQo+ID4gKw0KPiA+ICtUaGUg
c3RhdGljIHNoYXJlZCBtZW1vcnkgZGV2aWNlIHRyZWUgbm9kZXMgYWxsb3cgdXNlcnMgdG8gc3Rh
dGljYWxseQ0KPiA+ICtzZXQgdXAgc2hhcmVkIG1lbW9yeSBvbiBkb20wbGVzcyBzeXN0ZW0sIGVu
YWJsaW5nIGRvbWFpbnMgdG8gZG8NCj4gPiArc2htLWJhc2VkIGNvbW11bmljYXRpb24uDQo+ID4g
Kw0KPiA+ICstIGNvbXBhdGlibGUNCj4gPiArDQo+ID4gKyAgICAieGVuLGRvbWFpbi1zaGFyZWQt
bWVtb3J5LXYxIg0KPiA+ICsNCj4gPiArLSB4ZW4sc2htLWlkDQo+ID4gKw0KPiA+ICsgICAgQW4g
OC1iaXQgaW50ZWdlciB0aGF0IHJlcHJlc2VudHMgdGhlIHVuaXF1ZSBpZGVudGlmaWVyIG9mIHRo
ZSBzaGFyZWQNCj4gbWVtb3J5DQo+ID4gKyAgICByZWdpb24uIFRoZSBtYXhpbXVtIGlkZW50aWZp
ZXIgc2hhbGwgYmUgInhlbixzaG0taWQgPSA8MHhmZj4iLg0KPiA+ICsNCj4gPiArLSB4ZW4sc2hh
cmVkLW1lbQ0KPiA+ICsNCj4gPiArICAgIEFuIGFycmF5IHRha2VzIGEgcGh5c2ljYWwgYWRkcmVz
cywgd2hpY2ggaXMgdGhlIGJhc2UgYWRkcmVzcyBvZiB0aGUNCj4gPiArICAgIHNoYXJlZCBtZW1v
cnkgcmVnaW9uIGluIGhvc3QgcGh5c2ljYWwgYWRkcmVzcyBzcGFjZSwgYSBzaXplLCBhbmQgYQ0K
PiBndWVzdA0KPiA+ICsgICAgcGh5c2ljYWwgYWRkcmVzcywgYXMgdGhlIHRhcmdldCBhZGRyZXNz
IG9mIHRoZSBtYXBwaW5nLiBUaGUgbnVtYmVyIG9mDQo+IGNlbGxzDQo+ID4gKyAgICBmb3IgdGhl
IGhvc3QgYWRkcmVzcyAoYW5kIHNpemUpIGlzIHRoZSBzYW1lIGFzIHRoZSBndWVzdCBwc2V1ZG8t
cGh5c2ljYWwNCj4gPiArICAgIGFkZHJlc3MgYW5kIHRoZXkgYXJlIGluaGVyaXRlZCBmcm9tIHRo
ZSBwYXJlbnQgbm9kZS4NCj4gDQo+IFNvcnJ5IGZvciBqdW1wIGluIHRoZSBkaXNjdXNzaW9uIGxh
dGUuIEJ1dCBhcyB0aGlzIGlzIGdvaW5nIHRvIGJlIGEgc3RhYmxlIEFCSSwgSQ0KPiB3b3VsZCB0
byBtYWtlIHN1cmUgdGhlIGludGVyZmFjZSBpcyBnb2luZyB0byBiZSBlYXNpbHkgZXh0ZW5kYWJs
ZS4NCj4gDQo+IEFGQUlVLCB3aXRoIHlvdXIgcHJvcG9zYWwgdGhlIGhvc3QgcGh5c2ljYWwgYWRk
cmVzcyBpcyBtYW5kYXRvcnkuIEkgd291bGQNCj4gZXhwZWN0IHRoYXQgc29tZSB1c2VyIG1heSB3
YW50IHRvIHNoYXJlIG1lbW9yeSBidXQgZG9uJ3QgY2FyZSBhYm91dCB0aGUNCj4gZXhhY3QgbG9j
YXRpb24gaW4gbWVtb3J5LiBTbyBJIHRoaW5rIGl0IHdvdWxkIGJlIGdvb2QgdG8gbWFrZSBpdCBv
cHRpb25hbCBpbg0KPiB0aGUgYmluZGluZy4NCj4gDQo+IEkgdGhpbmsgdGhpcyB3YW50cyB0byBi
ZSBkb25lIG5vdyBiZWNhdXNlIGl0IHdvdWxkIGJlIGRpZmZpY3VsdCB0byBjaGFuZ2UgdGhlDQo+
IGJpbmRpbmcgYWZ0ZXJ3YXJkcyAodGhlIGhvc3QgcGh5c2ljYWwgYWRkcmVzcyBpcyB0aGUgZmly
c3Qgc2V0IG9mIGNlbGxzKS4NCj4gDQo+IFRoZSBYZW4gZG9lc24ndCBuZWVkIHRvIGhhbmRsZSB0
aGUgb3B0aW9uYWwgY2FzZS4NCj4gDQo+IFsuLi5dDQo+IA0KPiA+IGRpZmYgLS1naXQgYS94ZW4v
YXJjaC9hcm0vS2NvbmZpZyBiL3hlbi9hcmNoL2FybS9LY29uZmlnIGluZGV4DQo+ID4gYmU5ZWZm
MDE0MS4uNzMyMWY0N2MwZiAxMDA2NDQNCj4gPiAtLS0gYS94ZW4vYXJjaC9hcm0vS2NvbmZpZw0K
PiA+ICsrKyBiL3hlbi9hcmNoL2FybS9LY29uZmlnDQo+ID4gQEAgLTEzOSw2ICsxMzksMTIgQEAg
Y29uZmlnIFRFRQ0KPiA+DQo+ID4gICBzb3VyY2UgImFyY2gvYXJtL3RlZS9LY29uZmlnIg0KPiA+
DQo+ID4gK2NvbmZpZyBTVEFUSUNfU0hNDQo+ID4gKwlib29sICJTdGF0aWNhbGx5IHNoYXJlZCBt
ZW1vcnkgb24gYSBkb20wbGVzcyBzeXN0ZW0iIGlmDQo+IFVOU1VQUE9SVEVEDQo+IA0KPiBZb3Ug
YWxzbyB3YW50IHRvIHVwZGF0ZSBTVVBQT1JULm1kLg0KPiANCj4gPiArCWRlcGVuZHMgb24gU1RB
VElDX01FTU9SWQ0KPiA+ICsJaGVscA0KPiA+ICsJICBUaGlzIG9wdGlvbiBlbmFibGVzIHN0YXRp
Y2FsbHkgc2hhcmVkIG1lbW9yeSBvbiBhIGRvbTBsZXNzIHN5c3RlbS4NCj4gPiArDQo+ID4gICBl
bmRtZW51DQo+ID4NCj4gPiAgIG1lbnUgIkFSTSBlcnJhdGEgd29ya2Fyb3VuZCB2aWEgdGhlIGFs
dGVybmF0aXZlIGZyYW1ld29yayINCj4gPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL2Jvb3Rm
ZHQuYyBiL3hlbi9hcmNoL2FybS9ib290ZmR0LmMgaW5kZXgNCj4gPiBlYzgxYTQ1ZGU5Li4zOGRj
YjA1ZDVkIDEwMDY0NA0KPiA+IC0tLSBhL3hlbi9hcmNoL2FybS9ib290ZmR0LmMNCj4gPiArKysg
Yi94ZW4vYXJjaC9hcm0vYm9vdGZkdC5jDQo+ID4gQEAgLTM2MSw2ICszNjEsNzAgQEAgc3RhdGlj
IGludCBfX2luaXQgcHJvY2Vzc19kb21haW5fbm9kZShjb25zdCB2b2lkDQo+ICpmZHQsIGludCBu
b2RlLA0KPiA+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzaXplX2NlbGxz
LCAmYm9vdGluZm8ucmVzZXJ2ZWRfbWVtLCB0cnVlKTsNCj4gPiAgIH0NCj4gPg0KPiA+ICsjaWZk
ZWYgQ09ORklHX1NUQVRJQ19TSE0NCj4gPiArc3RhdGljIGludCBfX2luaXQgcHJvY2Vzc19zaG1f
bm9kZShjb25zdCB2b2lkICpmZHQsIGludCBub2RlLA0KPiA+ICsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHUzMiBhZGRyZXNzX2NlbGxzLCB1MzIgc2l6ZV9jZWxscykNCj4gPiAr
ew0KPiA+ICsgICAgY29uc3Qgc3RydWN0IGZkdF9wcm9wZXJ0eSAqcHJvcDsNCj4gPiArICAgIGNv
bnN0IF9fYmUzMiAqY2VsbDsNCj4gPiArICAgIHBhZGRyX3QgcGFkZHIsIHNpemU7DQo+ID4gKyAg
ICBzdHJ1Y3QgbWVtaW5mbyAqbWVtID0gJmJvb3RpbmZvLnJlc2VydmVkX21lbTsNCj4gPiArICAg
IHVuc2lnbmVkIGxvbmcgaTsNCj4gDQo+IG5yX2JhbmtzIGlzICJ1bnNpZ25lZCBpbnQiIHNvIEkg
dGhpbmsgdGhpcyBzaG91bGQgYmUgInVuc2lnbmVkIGludCIgYXMgd2VsbC4NCj4gDQo+ID4gKw0K
PiA+ICsgICAgaWYgKCBhZGRyZXNzX2NlbGxzIDwgMSB8fCBzaXplX2NlbGxzIDwgMSApDQo+ID4g
KyAgICB7DQo+ID4gKyAgICAgICAgcHJpbnRrKCJmZHQ6IGludmFsaWQgI2FkZHJlc3MtY2VsbHMg
b3IgI3NpemUtY2VsbHMgZm9yIHN0YXRpYyBzaGFyZWQNCj4gbWVtb3J5IG5vZGUuXG4iKTsNCj4g
PiArICAgICAgICByZXR1cm4gLUVJTlZBTDsNCj4gPiArICAgIH0NCj4gPiArDQo+ID4gKyAgICBw
cm9wID0gZmR0X2dldF9wcm9wZXJ0eShmZHQsIG5vZGUsICJ4ZW4sc2hhcmVkLW1lbSIsIE5VTEwp
Ow0KPiA+ICsgICAgaWYgKCAhcHJvcCApDQo+ID4gKyAgICAgICAgcmV0dXJuIC1FTk9FTlQ7DQo+
ID4gKw0KPiA+ICsgICAgLyoNCj4gPiArICAgICAqIHhlbixzaGFyZWQtbWVtID0gPHBhZGRyLCBz
aXplLCBnYWRkcj47DQo+ID4gKyAgICAgKiBNZW1vcnkgcmVnaW9uIHN0YXJ0aW5nIGZyb20gcGh5
c2ljYWwgYWRkcmVzcyAjcGFkZHIgb2YgI3NpemUgc2hhbGwNCj4gPiArICAgICAqIGJlIG1hcHBl
ZCB0byBndWVzdCBwaHlzaWNhbCBhZGRyZXNzICNnYWRkciBhcyBzdGF0aWMgc2hhcmVkIG1lbW9y
eQ0KPiA+ICsgICAgICogcmVnaW9uLg0KPiA+ICsgICAgICovDQo+ID4gKyAgICBjZWxsID0gKGNv
bnN0IF9fYmUzMiAqKXByb3AtPmRhdGE7DQo+ID4gKyAgICBkZXZpY2VfdHJlZV9nZXRfcmVnKCZj
ZWxsLCBhZGRyZXNzX2NlbGxzLCBzaXplX2NlbGxzLCAmcGFkZHIsDQo+ID4gKyAmc2l6ZSk7DQo+
IA0KPiBQbGVhc2UgY2hlY2sgdGhlIGxlbiBvZiB0aGUgcHJvcGVydHkgdG8gY29uZmlybSBpcyBp
dCBiaWcgZW5vdWdoIHRvIGNvbnRhaW4NCj4gInBhZGRyIiwgInNpemUiLCBhbmQgImdhZGRyIi4N
Cj4gDQo+ID4gKyAgICBmb3IgKCBpID0gMDsgaSA8IG1lbS0+bnJfYmFua3M7IGkrKyApDQo+ID4g
KyAgICB7DQo+ID4gKyAgICAgICAgLyoNCj4gPiArICAgICAgICAgKiBBIHN0YXRpYyBzaGFyZWQg
bWVtb3J5IHJlZ2lvbiBjb3VsZCBiZSBzaGFyZWQgYmV0d2VlbiBtdWx0aXBsZQ0KPiA+ICsgICAg
ICAgICAqIGRvbWFpbnMuDQo+ID4gKyAgICAgICAgICovDQo+ID4gKyAgICAgICAgaWYgKCBwYWRk
ciA9PSBtZW0tPmJhbmtbaV0uc3RhcnQgJiYgc2l6ZSA9PSBtZW0tPmJhbmtbaV0uc2l6ZSApDQo+
ID4gKyAgICAgICAgICAgIGJyZWFrOw0KDQpNYXliZSBJIG5lZWQgdG8gYWRkIGEgY2hlY2sgb24g
c2htLWlkOg0KIg0KICAgICAgICAvKg0KICAgICAgICAgKiBBIHN0YXRpYyBzaGFyZWQgbWVtb3J5
 region could be shared between multiple
         * domains.
         */
        if ( strcmp(shm_id, mem->bank[i].shm_id) == 0 )
        {
            if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
                break;
            else
            {
                printk("Warning: xen,shm-id %s does not match for all the nodes using the same region.\n",
                       shm_id);
                return -EINVAL;
            }
        }
"
Wdyt?

> > +    }
> > +
> > +    if ( i == mem->nr_banks )
> > +    {
> > +        if ( i < NR_MEM_BANKS )
> > +        {
> > +            /* Static shared memory shall be reserved from any other use. */
> > +            mem->bank[mem->nr_banks].start = paddr;
> > +            mem->bank[mem->nr_banks].size = size;
> > +            mem->bank[mem->nr_banks].xen_domain = true;
> > +            mem->nr_banks++;
> > +        }
> > +        else
> > +        {
> > +            printk("Warning: Max number of supported memory regions
> reached.\n");
> > +            return -ENOSPC;
> > +        }
> > +    }
> > +    /*
> > +     * keep a count of the number of domains, which later may be used to
> > +     * calculate the number of the reference count.
> > +     */
> > +    mem->bank[i].nr_shm_domain++;
> > +
> > +    return 0;
> > +}
> > +#endif
> > +
> >   static int __init early_scan_node(const void *fdt,
> >                                     int node, const char *name, int depth,
> >                                     u32 address_cells, u32 size_cells,
> > @@ -386,6 +450,10 @@ static int __init early_scan_node(const void *fdt,
> >           process_chosen_node(fdt, node, name, address_cells, size_cells);
> >       else if ( depth == 2 && device_tree_node_compatible(fdt, node,
> "xen,domain") )
> >           rc = process_domain_node(fdt, node, name, address_cells,
> > size_cells);
> > +#ifdef CONFIG_STATIC_SHM
> > +    else if ( depth <= 3 && device_tree_node_compatible(fdt, node,
> "xen,domain-shared-memory-v1") )
> > +        rc = process_shm_node(fdt, node, address_cells, size_cells);
> > +#endif
> >
> >       if ( rc < 0 )
> >           printk("fdt: node `%s': parsing failed\n", name); diff --git
> > a/xen/arch/arm/include/asm/setup.h
> b/xen/arch/arm/include/asm/setup.h
> > index 2bb01ecfa8..5063e5d077 100644
> > --- a/xen/arch/arm/include/asm/setup.h
> > +++ b/xen/arch/arm/include/asm/setup.h
> > @@ -27,6 +27,9 @@ struct membank {
> >       paddr_t start;
> >       paddr_t size;
> >       bool xen_domain; /* whether the memory bank is bound to a Xen
> > domain. */
> > +#ifdef CONFIG_STATIC_SHM
> > +    unsigned int nr_shm_domain;
> > +#endif
> >   };
> >
> >   struct meminfo {
> 
> Cheers,
> 
> --
> Julien Grall
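[Editor's sketch] The duplicate-region check discussed in this exchange can be illustrated as a small, self-contained C program. Everything below (`MAX_SHM_BANKS`, `struct shm_bank`, `register_shm_bank`, the plain array bookkeeping) is hypothetical illustration of the idea, not Xen's actual code, limits or API:

```c
#include <stdio.h>
#include <string.h>

#define MAX_SHM_BANKS 8   /* stand-in for the real bank limit */

struct shm_bank {
    char shm_id[32];
    unsigned long start, size;
    unsigned int nr_shm_domain;
};

static struct shm_bank banks[MAX_SHM_BANKS];
static unsigned int nr_banks;

/*
 * Register one shared-memory region for a domain.  If a bank with the
 * same shm-id already exists it must describe exactly the same
 * [start, start + size) range; otherwise a new bank is appended.
 * Returns 0 on success, -1 on mismatch or overflow.
 */
static int register_shm_bank(const char *shm_id,
                             unsigned long start, unsigned long size)
{
    unsigned int i;

    for ( i = 0; i < nr_banks; i++ )
    {
        if ( strcmp(shm_id, banks[i].shm_id) == 0 )
        {
            if ( start == banks[i].start && size == banks[i].size )
                break;   /* same region, shared by another domain */
            fprintf(stderr,
                    "shm-id %s does not match for all nodes using the region\n",
                    shm_id);
            return -1;
        }
    }

    if ( i == nr_banks )
    {
        if ( i >= MAX_SHM_BANKS )
            return -1;   /* no space left for a new bank */
        snprintf(banks[i].shm_id, sizeof(banks[i].shm_id), "%s", shm_id);
        banks[i].start = start;
        banks[i].size = size;
        nr_banks++;
    }

    /* one more domain maps this bank */
    banks[i].nr_shm_domain++;
    return 0;
}
```

Registering the same id twice with an identical range succeeds and bumps the per-bank domain count; the same id with a different range is rejected, which is the consistency check Julien is asking for.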


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:40:17 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:40:17 +0000
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Topic: [PATCH v5 1/8] xen/arm: introduce static shared memory
Thread-Index: AQHYhGRAIMWAAZoglEyYz3pT8Ubq/a1e94+AgAcdS3A=
Date: Wed, 29 Jun 2022 08:40:01 +0000
Message-ID:
 <DU2PR08MB7325D305AC5EC1881A57EFCCF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
 <f87c00c5-8253-0c51-4f05-e137d98fc149@xen.org>
In-Reply-To: <f87c00c5-8253-0c51-4f05-e137d98fc149@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, June 25, 2022 3:26 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
> 
> Hi Penny,
> 
> I have looked at the code and I have further questions about the binding.
> 
> On 20/06/2022 06:11, Penny Zheng wrote:
> > ---
> >   docs/misc/arm/device-tree/booting.txt | 120
> ++++++++++++++++++++++++++++++
> >   xen/arch/arm/Kconfig                  |   6 ++
> >   xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
> >   xen/arch/arm/include/asm/setup.h      |   3 +
> >   4 files changed, 197 insertions(+)
> >
> > diff --git a/docs/misc/arm/device-tree/booting.txt
> > b/docs/misc/arm/device-tree/booting.txt
> > index 98253414b8..6467bc5a28 100644
> > --- a/docs/misc/arm/device-tree/booting.txt
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -378,3 +378,123 @@ device-tree:
> >
> >   This will reserve a 512MB region starting at the host physical address
> >   0x30000000 to be exclusively used by DomU1.
> > +
> > +Static Shared Memory
> > +====================
> > +
> > +The static shared memory device tree nodes allow users to statically
> > +set up shared memory on dom0less system, enabling domains to do
> > +shm-based communication.
> > +
> > +- compatible
> > +
> > +    "xen,domain-shared-memory-v1"
> > +
> > +- xen,shm-id
> > +
> > +    An 8-bit integer that represents the unique identifier of the shared
> memory
> > +    region. The maximum identifier shall be "xen,shm-id = <0xff>".
> 
> There is nothing in Xen that will ensure that xen,shm-id will match for all the
> nodes using the same region.
> 

True, we actually do not use this field, adding it here to just be aligned with Linux.
I could add a check in the very beginning when we parse the device tree.
I'll give more details to explain in which code locates.

> I see you write it to the guest device-tree. However there is a mismatch of the
> type: here you use an integer whereas the guest binding is using a string.
> 
> Cheers,
> 
> --
> Julien Grall
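[Editor's sketch] To make the binding under discussion concrete, here is a hypothetical dom0less device-tree fragment. The node name, addresses, sizes and the `<0x2>` identifier are invented for illustration, and the `xen,shared-mem` triplet (host address, guest address, size) is shown as I understand the series' binding; the authoritative description is docs/misc/arm/device-tree/booting.txt in the patch.

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;

        /* Share 1MB at host physical 0x50000000 under identifier 0x2. */
        domU1-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            xen,shm-id = <0x2>;
            xen,shared-mem = <0x50000000 0x50000000 0x100000>;
        };
    };
};
```

A second domain node using the same `xen,shm-id` would have to describe the same host region, which is exactly the cross-node consistency Xen would need to verify at parse time.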


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:41:54 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:41:54 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Pau Monné, Roger" <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "Pau Monné, Roger" <roger.pau@citrix.com>, "Nakajima,
 Jun" <jun.nakajima@intel.com>, "Beulich, Jan" <JBeulich@suse.com>, "Cooper,
 Andrew" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Subject: RE: [PATCH] x86/ept: fix shattering of special pages
Thread-Topic: [PATCH] x86/ept: fix shattering of special pages
Thread-Index: AQHYig0A8+iuyC//Bk2PKQQEd7+3W61mDnbA
Date: Wed, 29 Jun 2022 08:41:43 +0000
Message-ID: <BN9PR11MB527685F117AAA3C1716A41EF8CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220627100119.55363-1-roger.pau@citrix.com>
In-Reply-To: <20220627100119.55363-1-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Monday, June 27, 2022 6:01 PM
> 
> The current logic in epte_get_entry_emt() will split any page marked
> as special with order greater than zero, without checking whether the
> super page is all special.
> 
> Fix this by only splitting the page only if it's not all marked as
> special, in order to prevent unneeded super page shuttering.
> 
> Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Paul Durrant <paul@xen.org>
> ---
> It would seem weird to me to have a super page entry in EPT with
> ranges marked as special and not the full page.  I guess it's better
> to be safe, but I don't see an scenario where we could end up in that
> situation.
> 
> I've been trying to find a clarification in the original patch
> submission about how it's possible to have such super page EPT entry,
> but haven't been able to find any justification.
> 
> Might be nice to expand the commit message as to why it's possible to
> have such mixed super page entries that would need splitting.

Here is what I dig out.

When reviewing v1 of adding special page check, Jan suggested
that APIC access page was also covered hence the old logic for APIC
access page can be removed. [1]

Then when reviewing v2 he found that the order check in removed
logic was not carried to the new check on special page. [2]

The original order check in old APIC access logic came from:

commit 126018f2acd5416434747423e61a4690108b9dc9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri May 2 10:48:48 2014 +0200

    x86/EPT: consider page order when checking for APIC MFN

    This was overlooked in 3d90d6e6 ("x86/EPT: split super pages upon
    mismatching memory types").

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Kevin Tian <kevin.tian@intel.com>
    Reviewed-by: Tim Deegan <tim@xen.org>

I suppose Jan may actually find such mixed super page entry case
to bring this fix in.

Not sure whether Jan still remember the history.

[1] https://lists.xenproject.org/archiv
ZXMvaHRtbC94ZW4tZGV2ZWwvMjAyMC0wNy9tc2cwMTY0OC5odG1sDQpbMl0gaHR0cHM6Ly9sb3Jl
Lmtlcm5lbC5vcmcvYWxsL2E0ODU2YzMzLThiYjAtNGFmYS1jYzcxLTNhZjRjMjI5YmMyN0BzdXNl
LmNvbS8NCg0KPiAtLS0NCj4gIHhlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgfCAyMCArKysrKysr
KysrKy0tLS0tLS0tLQ0KPiAgMSBmaWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDkgZGVs
ZXRpb25zKC0pDQo+IA0KPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYyBi
L3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMNCj4gaW5kZXggYjA0Y2E2ZGJlOC4uYjQ5MTliYWQ1
MSAxMDA2NDQNCj4gLS0tIGEveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYw0KPiArKysgYi94ZW4v
YXJjaC94ODYvbW0vcDJtLWVwdC5jDQo+IEBAIC00OTEsNyArNDkxLDcgQEAgaW50IGVwdGVfZ2V0
X2VudHJ5X2VtdChzdHJ1Y3QgZG9tYWluICpkLCBnZm5fdCBnZm4sDQo+IG1mbl90IG1mbiwNCj4g
IHsNCj4gICAgICBpbnQgZ210cnJfbXR5cGUsIGhtdHJyX210eXBlOw0KPiAgICAgIHN0cnVjdCB2
Y3B1ICp2ID0gY3VycmVudDsNCj4gLSAgICB1bnNpZ25lZCBsb25nIGk7DQo+ICsgICAgdW5zaWdu
ZWQgbG9uZyBpLCBzcGVjaWFsX3BnczsNCj4gDQo+ICAgICAgKmlwYXQgPSBmYWxzZTsNCj4gDQo+
IEBAIC01MjUsMTUgKzUyNSwxNyBAQCBpbnQgZXB0ZV9nZXRfZW50cnlfZW10KHN0cnVjdCBkb21h
aW4gKmQsIGdmbl90DQo+IGdmbiwgbWZuX3QgbWZuLA0KPiAgICAgICAgICByZXR1cm4gTVRSUl9U
WVBFX1dSQkFDSzsNCj4gICAgICB9DQo+IA0KPiAtICAgIGZvciAoIGkgPSAwOyBpIDwgKDF1bCA8
PCBvcmRlcik7IGkrKyApDQo+IC0gICAgew0KPiArICAgIGZvciAoIHNwZWNpYWxfcGdzID0gaSA9
IDA7IGkgPCAoMXVsIDw8IG9yZGVyKTsgaSsrICkNCj4gICAgICAgICAgaWYgKCBpc19zcGVjaWFs
X3BhZ2UobWZuX3RvX3BhZ2UobWZuX2FkZChtZm4sIGkpKSkgKQ0KPiAtICAgICAgICB7DQo+IC0g
ICAgICAgICAgICBpZiAoIG9yZGVyICkNCj4gLSAgICAgICAgICAgICAgICByZXR1cm4gLTE7DQo+
IC0gICAgICAgICAgICAqaXBhdCA9IHRydWU7DQo+IC0gICAgICAgICAgICByZXR1cm4gTVRSUl9U
WVBFX1dSQkFDSzsNCj4gLSAgICAgICAgfQ0KPiArICAgICAgICAgICAgc3BlY2lhbF9wZ3MrKzsN
Cj4gKw0KPiArICAgIGlmICggc3BlY2lhbF9wZ3MgKQ0KPiArICAgIHsNCj4gKyAgICAgICAgaWYg
KCBzcGVjaWFsX3BncyAhPSAoMXVsIDw8IG9yZGVyKSApDQo+ICsgICAgICAgICAgICByZXR1cm4g
LTE7DQo+ICsNCj4gKyAgICAgICAgKmlwYXQgPSB0cnVlOw0KPiArICAgICAgICByZXR1cm4gTVRS
Ul9UWVBFX1dSQkFDSzsNCg0KRGlkIHlvdSBhY3R1YWxseSBvYnNlcnZlIGFuIGltcGFjdCB3L28g
dGhpcyBmaXg/IA0KDQo+ICAgICAgfQ0KPiANCj4gICAgICBzd2l0Y2ggKCB0eXBlICkNCj4gLS0N
Cj4gMi4zNi4xDQoNCg==
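The decision the patch introduces can be exercised in isolation. Below is a minimal sketch of that logic, not Xen code: is_special() and check_range() are hypothetical stand-ins (backed here by a test bitmap instead of Xen's is_special_page()/mfn_to_page()), showing the three outcomes for a 2^order range: all special (keep the superpage and set ipat), mixed (must split), or none special (fall through to MTRR handling).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for Xen's is_special_page(): a test bitmap
 * records which pages in the range are "special". */
static const unsigned char *special_map;

static bool is_special(unsigned long idx)
{
    return special_map[idx] != 0;
}

/* Mirrors the fixed loop in epte_get_entry_emt(): count the special
 * pages across the whole 2^order range, then decide once at the end.
 * Returns  1: all pages special -> keep the superpage, set *ipat
 *         -1: mixed             -> caller must split the superpage
 *          0: none special      -> fall through to MTRR handling */
static int check_range(unsigned int order, bool *ipat)
{
    unsigned long i, special_pgs;

    *ipat = false;

    for ( special_pgs = i = 0; i < (1ul << order); i++ )
        if ( is_special(i) )
            special_pgs++;

    if ( !special_pgs )
        return 0;

    if ( special_pgs != (1ul << order) )
        return -1;

    *ipat = true;
    return 1;
}
```

Under the pre-fix logic, the all-special case with order > 0 would also have returned -1 and forced an unnecessary split; counting first makes that case keep the superpage.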


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:46:33 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357657.586366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TLC-0006nj-W1; Wed, 29 Jun 2022 08:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357657.586366; Wed, 29 Jun 2022 08:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TLC-0006nc-SX; Wed, 29 Jun 2022 08:46:30 +0000
Received: by outflank-mailman (input) for mailman id 357657;
 Wed, 29 Jun 2022 08:46:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6TLB-0006nS-Sq; Wed, 29 Jun 2022 08:46:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6TLB-0005TW-QO; Wed, 29 Jun 2022 08:46:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6TLB-0001rk-Fc; Wed, 29 Jun 2022 08:46:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6TLB-00076n-Cs; Wed, 29 Jun 2022 08:46:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9d5rCHxXU4X/dofkm+rFHNX3DkJLM8KznsP+oo39Oas=; b=uaQ0yu9DB7TWrKcVugioth3rUl
	uGm0ORgRklrr3zkxKvTWvSLaiAOtqDBkePGz3VN8KrIgAJ7QiJa0sC3ucZ7oSpSlY1zOZT1Cww9iu
	wdYGSnoo9UJWEfvnNn3n+iLIT3+HeuLNtbfsh5HB2Fung6enFvis0IHmV56n8tIPVB4E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171387-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171387: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8c99264c6746541ddbfd7afec533e6ad1c8c41a5
X-Osstest-Versions-That:
    xen=0544c4ee4b48f7e2715e69ff3e73c3d5545b0526
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 08:46:29 +0000

flight 171387 xen-unstable real [real]
flight 171401 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171387/
http://logs.test-lab.xenproject.org/osstest/logs/171401/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate fail pass in 171401-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171377
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171377
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171377
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171377
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171377
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171377
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171377
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171377
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171377
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171377
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171377
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171377
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8c99264c6746541ddbfd7afec533e6ad1c8c41a5
baseline version:
 xen                  0544c4ee4b48f7e2715e69ff3e73c3d5545b0526

Last test of basis   171377  2022-06-28 01:53:06 Z    1 days
Testing same since   171387  2022-06-28 20:36:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Roger Pau Monné <roger.pau@cirtrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0544c4ee4b..8c99264c67  8c99264c6746541ddbfd7afec533e6ad1c8c41a5 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:47:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357665.586376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TM2-0007Ox-GD; Wed, 29 Jun 2022 08:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357665.586376; Wed, 29 Jun 2022 08:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TM2-0007Oq-DD; Wed, 29 Jun 2022 08:47:22 +0000
Received: by outflank-mailman (input) for mailman id 357665;
 Wed, 29 Jun 2022 08:47:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JXcD=XE=citrix.com=prvs=172c93792=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6TM1-0007Cn-Gg
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 08:47:21 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 14f34472-f788-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 10:47:20 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 04:47:08 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MW4PR03MB6651.namprd03.prod.outlook.com (2603:10b6:303:12e::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Wed, 29 Jun
 2022 08:47:06 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 08:47:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 29 Jun 2022 10:47:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/6] x86/Kconfig: add selection of default x2APIC
 destination mode
Message-ID: <YrwRhZUl/0wYhwBm@Air-de-Roger>
References: <20220623082428.28038-1-roger.pau@citrix.com>
 <20220623082428.28038-2-roger.pau@citrix.com>
 <2a94902e-bc4f-26d1-b47d-abd4709226de@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2a94902e-bc4f-26d1-b47d-abd4709226de@suse.com>
MIME-Version: 1.0

On Thu, Jun 23, 2022 at 04:47:22PM +0200, Jan Beulich wrote:
> On 23.06.2022 10:24, Roger Pau Monne wrote:
> > Allow selecting the default x2APIC destination mode from Kconfig.
> > Note the default destination mode is still Logical (Cluster) mode.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  xen/arch/x86/Kconfig          | 29 +++++++++++++++++++++++++++++
> >  xen/arch/x86/genapic/x2apic.c |  6 ++++--
> >  2 files changed, 33 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> > index 1e31edc99f..f560dc13f4 100644
> > --- a/xen/arch/x86/Kconfig
> > +++ b/xen/arch/x86/Kconfig
> > @@ -226,6 +226,35 @@ config XEN_ALIGN_2M
> >  
> >  endchoice
> >  
> > +choice
> > +	prompt "x2APIC default destination mode"
> 
> What's the point of using "choice" here, and not a single "bool"?

I think "choice" better reflects the purpose of the option: it selects
between two different modes.  It could be expressed with a bool, but I
think that's less clear.

> > +	default X2APIC_LOGICAL
> > +	---help---
> 
> Nit: Please don't use ---help--- anymore - we're trying to phase out its
> use as Linux has dropped it altogether (and hence once we update our
> Kconfig, we'd like to change as few places as possible), leaving just
> "help".
> 
> One downside of "choice" (iirc) is that the individual sub-options' help
> text is inaccessible from at least the command line version of kconfig.

Hm, I usually use menuconfig when I want to poke at option help texts.

I guess I could introduce a single X2APIC_PHYSICAL bool that defaults
to false, with help text noting that the destination mode is otherwise
logical.
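Such a single-bool variant might look like the following Kconfig sketch
(illustrative only, not the posted patch; the option name and help
wording were still under discussion):

```
config X2APIC_PHYSICAL
	bool "Use Physical Destination mode for x2APIC"
	default n
	help
	  When enabled, the default x2APIC destination mode is Physical;
	  otherwise Logical (Cluster) Destination mode is used.
```

Note this also uses plain "help" rather than "---help---", per the
request above.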

> > +	  Specify default destination mode for x2APIC.
> > +
> > +	  If unsure, choose "Logical".
> > +
> > +config X2APIC_LOGICAL
> > +	bool "Logical mode"
> > +	---help---
> > +	  Use Logical Destination mode.
> > +
> > +	  When using this mode APICs are addressed using the Logical
> > +	  Destination mode, which allows for optimized IPI sending,
> > +	  but also reduces the amount of vectors available for external
> > +	  interrupts when compared to physical mode.
> > +
> > +config X2APIC_PHYS
> 
> X2APIC_PHYSICAL (to be in line with X2APIC_LOGICAL)?

Right, I was about to expand it, but considered PHYS clear enough (as
opposed to using LOG or LOGIC); I will expand it in the next version.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:56:03 2022
Message-ID: <1b28f8b2-2153-61f6-515f-b2ed880f84b6@gmail.com>
Date: Wed, 29 Jun 2022 11:55:40 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
 <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 6/29/22 10:24, Bertrand Marquis wrote:
> Hi Xenia,
>
>> On 28 Jun 2022, at 16:08, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>>
>> The expression 1 << 31 produces undefined behaviour because the type of integer
>> constant 1 is (signed) int and the result of shifting 1 by 31 bits is not
>> representable in the (signed) int type.
>> Change the type of 1 to unsigned int by adding the U suffix.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>> ---
>> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
>> For GBPA_UPDATE I will submit a patch.
>>
>> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index 1e857f915a..f2562acc38 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>> #define CR2_E2H				(1 << 0)
>>
>> #define ARM_SMMU_GBPA			0x44
>> -#define GBPA_UPDATE			(1 << 31)
>> +#define GBPA_UPDATE			(1U << 31)
>> #define GBPA_ABORT			(1 << 20)
>>
>> #define ARM_SMMU_IRQ_CTRL		0x50
>> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>
>> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
>> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
> Could also make sense to fix those 2 to be coherent.
According to the spec, the maximum value that max_n_shift can take is 19.
Hence, 1 << (llq)->max_n_shift cannot produce undefined behavior.

Personally, I have no problem submitting another patch that adds U/UL 
suffixes to all shifted integer constants in the file :) but ...
It seems that this driver has been ported from Linux, and this file 
still uses the Linux coding style, probably because deviations from the 
original code would reduce its maintainability.
Adding a U suffix to those two might be considered an unjustified 
deviation.
>> -#define Q_OVERFLOW_FLAG			(1 << 31)
>> +#define Q_OVERFLOW_FLAG			(1U << 31)
>> #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
>> #define Q_ENT(q, p)			((q)->base +			\
>> 					 Q_IDX(&((q)->llq), p) *	\
> Cheers
> Bertrand
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:56:12 2022
From: "Tian, Kevin" <kevin.tian@intel.com>
To: =?utf-8?B?UGF1IE1vbm7DqSwgUm9nZXI=?= <roger.pau@citrix.com>, "Beulich,
 Jan" <JBeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	Wei Liu <wl@xen.org>
Subject: RE: [PATCH v6 02/12] IOMMU/x86: new command line option to suppress
 use of superpage mappings
Date: Wed, 29 Jun 2022 08:56:02 +0000
Message-ID: <BN9PR11MB52768C93686842D9391000478CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <e873e30c-7a04-a8a3-2fe5-0dda30e654fe@suse.com>
 <99086452-43a8-2d93-ab4d-0343a0259259@suse.com>
 <Yrr5fTdO1CAKfIH7@Air-de-Roger>
In-Reply-To: <Yrr5fTdO1CAKfIH7@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 14c5bd58-fb27-4c12-ccb6-08da59ad31b0
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2022 08:56:02.3918
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: SJKfBEUT2xnsLpRBsjXU3F5lNfrzjux+xXZCDCsGkh6LJ88EjtYA+J2ALptpQ2P5rCKrF4eKu+OGi1NURJiNWQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR11MB3200
X-OriginatorOrg: intel.com

> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: Tuesday, June 28, 2022 8:52 PM
>
> On Thu, Jun 09, 2022 at 12:17:23PM +0200, Jan Beulich wrote:
> > Before actually enabling their use, provide a means to suppress it in
> > case of problems. Note that using the option can also affect the sharing
> > of page tables in the VT-d / EPT combination: If EPT would use large
> > page mappings but the option is in effect, page table sharing would be
> > suppressed (to properly fulfill the admin request).
> >
> > Requested-by: Roger Pau Monné <roger.pau@citrix.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > v6: New.
> >
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -1405,7 +1405,7 @@ detection of systems known to misbehave
> >
> >  ### iommu
> >      = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
> > -                sharept, intremap, intpost, crash-disable,
> > +                sharept, superpages, intremap, intpost, crash-disable,
> >                  snoop, qinval, igfx, amd-iommu-perdev-intremap,
> >                  dom0-{passthrough,strict} ]
> >
> > @@ -1481,6 +1481,12 @@ boolean (e.g. `iommu=no`) can override t
> >
> >      This option is ignored on ARM, and the pagetables are always shared.
> >
> > +*   The `superpages` boolean controls whether superpage mappings may be used
> > +    in IOMMU page tables.  If using this option is necessary to fix an issue,
> > +    please report a bug.
> > +
> > +    This option is only valid on x86.
> > +
> >  *   The `intremap` boolean controls the Interrupt Remapping sub-feature, and
> >      is active by default on compatible hardware.  On x86 systems, the first
> >      generation of IOMMUs only supported DMA remapping, and Interrupt Remapping
> > --- a/xen/arch/x86/include/asm/iommu.h
> > +++ b/xen/arch/x86/include/asm/iommu.h
> > @@ -132,7 +132,7 @@ extern bool untrusted_msi;
> >  int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
> >                     const uint8_t gvec);
> >
> > -extern bool iommu_non_coherent;
> > +extern bool iommu_non_coherent, iommu_superpages;
> >
> >  static inline void iommu_sync_cache(const void *addr, unsigned int size)
> >  {
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -88,6 +88,8 @@ static int __init cf_check parse_iommu_p
> >              iommu_igfx = val;
> >          else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
> >              iommu_qinval = val;
> > +        else if ( (val = parse_boolean("superpages", s, ss)) >= 0 )
> > +            iommu_superpages = val;
> >  #endif
> >          else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
> >              iommu_verbose = val;
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -2213,7 +2213,8 @@ static bool __init vtd_ept_page_compatib
> >      if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 )
> >          return false;
> >
> > -    return (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
> > +    return iommu_superpages &&
> > +           (ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
> >             (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
>
> Isn't this too strict in requesting iommu superpages to be enabled
> regardless of whether EPT has superpage support?
>
> Shouldn't this instead be:
>
> return iommu_superpages ? ((ept_has_2mb(ept_cap) && opt_hap_2mb) <= cap_sps_2mb(vtd_cap) &&
>                            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap))
>                         : !((ept_has_2mb(ept_cap) && opt_hap_2mb) ||
>                             (ept_has_1gb(ept_cap) && opt_hap_1gb));
>
> I think we want to introduce some local variables to store EPT
> superpage availability, as the lines are too long.
>

Or, to pair with the EPT-side checks:

    return (ept_has_2mb(ept_cap) && opt_hap_2mb) <=
           (cap_sps_2mb(vtd_cap) && iommu_superpages) &&
           (ept_has_1gb(ept_cap) && opt_hap_1gb) <=
           (cap_sps_1gb(vtd_cap) && iommu_superpages);


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 08:59:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 08:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357683.586410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TXs-0001vf-5k; Wed, 29 Jun 2022 08:59:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357683.586410; Wed, 29 Jun 2022 08:59:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6TXs-0001vY-2t; Wed, 29 Jun 2022 08:59:36 +0000
Received: by outflank-mailman (input) for mailman id 357683;
 Wed, 29 Jun 2022 08:59:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cYaO=XE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1o6TXr-0001vQ-4u
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 08:59:35 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr140054.outbound.protection.outlook.com [40.107.14.54])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb3761d1-f789-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 10:59:33 +0200 (CEST)
Received: from AM6PR01CA0068.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::45) by DB8PR08MB4058.eurprd08.prod.outlook.com
 (2603:10a6:10:aa::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 08:59:29 +0000
Received: from AM5EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::a9) by AM6PR01CA0068.outlook.office365.com
 (2603:10a6:20b:e0::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Wed, 29 Jun 2022 08:59:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT049.mail.protection.outlook.com (10.152.17.130) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 08:59:27 +0000
Received: ("Tessian outbound 8dc5ba215ad1:v121");
 Wed, 29 Jun 2022 08:59:27 +0000
Received: from b532e1bb6bbf.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9775BA5D-E988-418B-BDB4-B5ED63C4E14D.1; 
 Wed, 29 Jun 2022 08:59:20 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b532e1bb6bbf.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 08:59:20 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com (2603:10a6:208:103::16)
 by DB9PR08MB7495.eurprd08.prod.outlook.com (2603:10a6:10:36c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Wed, 29 Jun
 2022 08:59:19 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9]) by AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9%5]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 08:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb3761d1-f789-11ec-b725-ed86ccbb4733
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Elena
 Ufimtseva <elena.ufimtseva@oracle.com>, Tim Deegan <tim@xen.org>, Daniel De
 Graaf <dgdegra@tycho.nsa.gov>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Thread-Topic: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Thread-Index: AQHYh+S87hLhz2jfzU+bF+lpBjZnJ61mHT4A
Date: Wed, 29 Jun 2022 08:59:18 +0000
Message-ID: <BF28045C-0848-4B5A-8DAB-57192C7B4A18@arm.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-26-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-26-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 44d65c40-93d0-4fc4-a1e7-08da59adac2a
x-ms-traffictypediagnostic:
	DB9PR08MB7495:EE_|AM5EUR03FT049:EE_|DB8PR08MB4058:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1730392772B33246925DA04F7E6912DF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7495
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5d159193-05de-4453-1127-08da59ada6ce
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 08:59:27.8098
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 44d65c40-93d0-4fc4-a1e7-08da59adac2a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4058


+ CC: Stefano Stabellini

> On 24 Jun 2022, at 17:04, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> Patch "tools: Add -Werror by default to all tools/" has added
> "-Werror" to CFLAGS in tools/Rules.mk; remove it from every other
> makefile as it is now duplicated.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Hi Anthony,

I will try to review the series when I manage to have some time; in the meantime I can say the whole
series builds fine in my Yocto setup on arm64 and x86_64. I've also tried the tool stack to
create/destroy/console guests, and no problems so far.

The only problem I have is building for arm32 because, I think, this patch does a great job and it
discovers a problem here:

arm-poky-linux-gnueabi-gcc -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a15 --sysroot=/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/recipe-sysroot -marm -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs -Werror -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP -MF .init-dom0less.o.d -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a15 -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0=/usr/src/debug/xen-tools/4.17+git1-r0 -fdebug-prefix-map=/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0=/usr/src/debug/xen-tools/4.17+git1-r0 -fdebug-prefix-map=/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/recipe-sysroot= -fdebug-prefix-map=/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/recipe-sysroot-native= -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -D__XEN_TOOLS__ -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -D__XEN_TOOLS__ -I/data_sdc1/lucfan01/test_kirkstone_xen/build/xtp-qemu-arm32/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/xen-tools/4.17+git1-r0/local-xen/xen/tools/helpers/../../tools/include -c -o init-dom0less.o init-dom0less.c
init-dom0less.c: In function 'create_xenstore':
init-dom0less.c:141:53: error: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'uint64_t' {aka 'long long unsigned int'} [-Werror=format=]
  141 |     rc = snprintf(max_memkb_str, STR_MAX_LENGTH, "%lu", info->max_memkb);
      |                                                   ~~^   ~~~~~~~~~~~~~~~
      |                                                     |   |
      |                                                     |   uint64_t {aka long long unsigned int}
      |                                                     long unsigned int
      |                                                     %llu
init-dom0less.c:144:56: error: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'uint64_t' {aka 'long long unsigned int'} [-Werror=format=]
  144 |     rc = snprintf(target_memkb_str, STR_MAX_LENGTH, "%lu", info->current_memkb);
      |                                                      ~~^   ~~~~~~~~~~~~~~~~~~~
      |                                                        |   |
      |                                                        |   uint64_t {aka long long unsigned int}
      |                                                        long unsigned int
      |                                                        %llu

It won't be too difficult to fix; if I have time I will do it, otherwise it's fine for me if someone else wants to do it.

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 09:03:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 09:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357691.586421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Tbx-0003NZ-Oe; Wed, 29 Jun 2022 09:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357691.586421; Wed, 29 Jun 2022 09:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Tbx-0003NS-KS; Wed, 29 Jun 2022 09:03:49 +0000
Received: by outflank-mailman (input) for mailman id 357691;
 Wed, 29 Jun 2022 09:03:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DiI7=XE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6Tbw-0003NM-T5
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 09:03:49 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2079.outbound.protection.outlook.com [40.107.22.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 62fee4ef-f78a-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 11:03:48 +0200 (CEST)
Received: from AM6PR08CA0012.eurprd08.prod.outlook.com (2603:10a6:20b:b2::24)
 by AM6PR08MB4935.eurprd08.prod.outlook.com (2603:10a6:20b:d5::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 09:03:39 +0000
Received: from AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::7f) by AM6PR08CA0012.outlook.office365.com
 (2603:10a6:20b:b2::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Wed, 29 Jun 2022 09:03:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT056.mail.protection.outlook.com (10.152.17.224) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 09:03:39 +0000
Received: ("Tessian outbound 4748bc5c2894:v121");
 Wed, 29 Jun 2022 09:03:38 +0000
Received: from 10fdb726acf2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 582CE289-96DC-47C7-8267-42205B524180.1; 
 Wed, 29 Jun 2022 09:03:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 10fdb726acf2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 09:03:28 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB8PR08MB4201.eurprd08.prod.outlook.com (2603:10a6:10:a3::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 09:03:26 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 09:03:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62fee4ef-f78a-11ec-b725-ed86ccbb4733
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: xenia <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh
	<Rahul.Singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Topic: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Index: AQHYiwEFKAxihkpP+kCxp/TNi89h5q1l/I6AgAAZcwCAAAIrgA==
Date: Wed, 29 Jun 2022 09:03:25 +0000
Message-ID: <D8C0B798-C736-45CC-A723-1535131F1323@arm.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
 <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
 <1b28f8b2-2153-61f6-515f-b2ed880f84b6@gmail.com>
In-Reply-To: <1b28f8b2-2153-61f6-515f-b2ed880f84b6@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C5D0ACE7525C3D4888508C7C4E58DCE2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Xenia,

> On 29 Jun 2022, at 09:55, xenia <burzalodowa@gmail.com> wrote:
>
> Hi Bertrand,
>
> On 6/29/22 10:24, Bertrand Marquis wrote:
>> Hi Xenia,
>>
>>> On 28 Jun 2022, at 16:08, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>>>
>>> The expression 1 << 31 produces undefined behaviour because the type of integer
>>> constant 1 is (signed) int and the result of shifting 1 by 31 bits is not
>>> representable in the (signed) int type.
>>> Change the type of 1 to unsigned int by adding the U suffix.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>> ---
>>> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
>>> For GBPA_UPDATE I will submit a patch.
>>>
>>> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>> index 1e857f915a..f2562acc38 100644
>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>> #define CR2_E2H				(1 << 0)
>>>
>>> #define ARM_SMMU_GBPA			0x44
>>> -#define GBPA_UPDATE			(1 << 31)
>>> +#define GBPA_UPDATE			(1U << 31)
>>> #define GBPA_ABORT			(1 << 20)
>>>
>>> #define ARM_SMMU_IRQ_CTRL		0x50
>>> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>>
>>> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
>>> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
>> Could also make sense to fix those 2 to be coherent.
> According to the spec, the maximum value that max_n_shift can take is 19.
> Hence, 1 << (llq)->max_n_shift cannot produce undefined behavior.

I agree with that, but my preference would be to not rely on deep analysis of the code for those kinds of cases and to use 1U whenever possible.

What do other maintainers think about this?

>
> Personally, I have no problem submitting another patch that adds U/UL suffixes to all shifted integer constants in the file :) but ...
> It seems that this driver has been ported from Linux and this file still uses the Linux coding style, probably because deviations would reduce its maintainability.
> Adding a U suffix to those two might be considered an unjustified deviation.

At this stage the SMMU code is starting to deviate a lot from Linux, so it will not be possible to update it easily to sync with the latest Linux code.
The decision to make is: are we fixing it, or should we fully skip this file on the grounds that it comes from Linux and should not be fixed?
We started a discussion on this during the FuSa meeting, but we need to come to a conclusion per file.

On this one I would say that keeping it with the Linux code style and close to the Linux code is not needed.
Rahul, Julien, Stefano: what do you think here?

Cheers
Bertrand

>>> -#define Q_OVERFLOW_FLAG			(1 << 31)
>>> +#define Q_OVERFLOW_FLAG			(1U << 31)
>>> #define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
>>> #define Q_ENT(q, p)			((q)->base +			\
>>> 					 Q_IDX(&((q)->llq), p) *	\
>> Cheers
>> Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 09:10:48 2022
Date: Wed, 29 Jun 2022 11:10:30 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: "Tian, Kevin" <kevin.tian@intel.com>,
	"Beulich, Jan" <JBeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>,
	"Cooper, Andrew" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] x86/ept: fix shattering of special pages
Message-ID: <YrwXBtvpIl81GhQ7@Air-de-Roger>
References: <20220627100119.55363-1-roger.pau@citrix.com>
 <BN9PR11MB527685F117AAA3C1716A41EF8CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <BN9PR11MB527685F117AAA3C1716A41EF8CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
MIME-Version: 1.0

On Wed, Jun 29, 2022 at 08:41:43AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: Monday, June 27, 2022 6:01 PM
> > 
> > The current logic in epte_get_entry_emt() will split any page marked
> > as special with order greater than zero, without checking whether the
> > super page is all special.
> > 
> > Fix this by splitting the page only if it's not all marked as
> > special, in order to prevent unneeded super page shattering.
> > 
> > Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Cc: Paul Durrant <paul@xen.org>
> > ---
> > It would seem weird to me to have a super page entry in EPT with
> > ranges marked as special and not the full page.  I guess it's better
> > to be safe, but I don't see a scenario where we could end up in that
> > situation.
> > 
> > I've been trying to find a clarification in the original patch
> > submission about how it's possible to have such super page EPT entry,
> > but haven't been able to find any justification.
> > 
> > Might be nice to expand the commit message as to why it's possible to
> > have such mixed super page entries that would need splitting.
> 
> Here is what I dug out.
> 
> When reviewing v1 of adding the special page check, Jan suggested
> that the APIC access page was also covered, hence the old logic for the
> APIC access page could be removed. [1]

But the APIC access page is always added using set_mmio_p2m_entry(),
which will unconditionally create an entry for it in the EPT, hence
there's no explicit need to check for its presence inside higher
order pages.

> Then when reviewing v2 he found that the order check in the removed
> logic was not carried over to the new check on special pages. [2]
> 
> The original order check in old APIC access logic came from:
> 
> commit 126018f2acd5416434747423e61a4690108b9dc9
> Author: Jan Beulich <jbeulich@suse.com>
> Date:   Fri May 2 10:48:48 2014 +0200
> 
>     x86/EPT: consider page order when checking for APIC MFN
> 
>     This was overlooked in 3d90d6e6 ("x86/EPT: split super pages upon
>     mismatching memory types").
> 
>     Signed-off-by: Jan Beulich <jbeulich@suse.com>
>     Acked-by: Kevin Tian <kevin.tian@intel.com>
>     Reviewed-by: Tim Deegan <tim@xen.org>
> 
> I suppose Jan may actually have found such a mixed super page entry
> case, which is what brought this fix in.

Hm, I guess theoretically it could be possible for contiguous entries
to be coalesced (if we ever implement that for p2m page tables), and
hence result in super pages being created from smaller entries.

In that case it would be less clear to assert that special pages
cannot be coalesced with other contiguous entries.

> Not sure whether Jan still remembers the history.
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg01648.html
> [2] https://lore.kernel.org/all/a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com/
> 
> > ---
> >  xen/arch/x86/mm/p2m-ept.c | 20 +++++++++++---------
> >  1 file changed, 11 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> > index b04ca6dbe8..b4919bad51 100644
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -491,7 +491,7 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn,
> > mfn_t mfn,
> >  {
> >      int gmtrr_mtype, hmtrr_mtype;
> >      struct vcpu *v = current;
> > -    unsigned long i;
> > +    unsigned long i, special_pgs;
> > 
> >      *ipat = false;
> > 
> > @@ -525,15 +525,17 @@ int epte_get_entry_emt(struct domain *d, gfn_t
> > gfn, mfn_t mfn,
> >          return MTRR_TYPE_WRBACK;
> >      }
> > 
> > -    for ( i = 0; i < (1ul << order); i++ )
> > -    {
> > +    for ( special_pgs = i = 0; i < (1ul << order); i++ )
> >          if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
> > -        {
> > -            if ( order )
> > -                return -1;
> > -            *ipat = true;
> > -            return MTRR_TYPE_WRBACK;
> > -        }
> > +            special_pgs++;
> > +
> > +    if ( special_pgs )
> > +    {
> > +        if ( special_pgs != (1ul << order) )
> > +            return -1;
> > +
> > +        *ipat = true;
> > +        return MTRR_TYPE_WRBACK;
> 
> Did you actually observe an impact w/o this fix? 

Yes, the original change has caused a performance regression in some
GPU pass-through workloads for XenServer.  I think it's reasonable to
avoid super page shattering if the resulting pages would all end up
using ipat and the WRBACK cache attribute, as there's no reason for
the split in the first place.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 09:37:44 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171389: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=941e3e7912696b9fbe3586083a7c2e102cee7a87
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 09:37:35 +0000

flight 171389 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171389/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                941e3e7912696b9fbe3586083a7c2e102cee7a87
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z   10 days
Failing since        171280  2022-06-19 15:12:25 Z    9 days   28 attempts
Testing same since   171374  2022-06-27 18:13:03 Z    1 days    4 attempts

------------------------------------------------------------
368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13051 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 09:44:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 09:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357714.586460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UFJ-0000qb-P8; Wed, 29 Jun 2022 09:44:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357714.586460; Wed, 29 Jun 2022 09:44:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UFJ-0000qU-M6; Wed, 29 Jun 2022 09:44:29 +0000
Received: by outflank-mailman (input) for mailman id 357714;
 Wed, 29 Jun 2022 09:44:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=39p8=XE=gmail.com=matiasevara@srs-se1.protection.inumbo.net>)
 id 1o6UFI-0000qO-AU
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 09:44:28 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 109cf6ef-f790-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 11:44:27 +0200 (CEST)
Received: by mail-wr1-x42d.google.com with SMTP id v9so5958241wrp.7
 for <xen-devel@lists.xenproject.org>; Wed, 29 Jun 2022 02:44:26 -0700 (PDT)
Received: from horizon ([2a01:e0a:19f:35f0:dde5:d55a:20f5:7ef5])
 by smtp.gmail.com with ESMTPSA id
 bk20-20020a0560001d9400b0021b8b998ca5sm15144659wrb.107.2022.06.29.02.44.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 29 Jun 2022 02:44:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 109cf6ef-f790-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=AlQeOmPpUQJBjKBz2Qp4KLbJ9+pcVkvjnIwLu7K9TMg=;
        b=gazambA0KHXnEeskYh5wqWbiGZrhGgzegB+YS3PJB0WPWdhhYJF/yr0z9owwyFxK15
         DyzBdL6EB7DHAbpbSfURhoqPp290OP6A9gN5C1/V8oPZqD/2guK1s2sM7pOJ0W5rHa1h
         n9HYhCEyBMpdYz9xNLZ8sukgXua8ZDXZgEv8pGZ6l7VRNFzh9b5fDYgDaVO4lD5o1ERA
         wrX3yn72HB0VrZXDfDGd0Ief/j/O+l6yF4U8epQionAoQfN2UxmbPTP8i6yetDj1QPTs
         B7sYH1/A0B49zC0o7pgA6BRNOlzHCYKgyR3Qm/EFlfHwTUQ/xAYTcJqMqxDtbOZwGMZN
         D+6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=AlQeOmPpUQJBjKBz2Qp4KLbJ9+pcVkvjnIwLu7K9TMg=;
        b=v84BVTo0P1n+EoHJORjkcJSUlJTkVaWMwbL2zb/q1UOzpCv9/66LcmQXAW326aO2vS
         3XvaXMX/cYcIUeFYlcnEl1/cKaz/pa8/0o4pVm0bxdFzwiWXhUtZqjEVBa9daYOFWqN1
         IEzyXEzf1SyldiEOXWB/5deE0shJEU51DJ79bgm7G55ZV9MNh5vwqtkAQV3cFi9WoEiW
         Wpzf6f09NKQlsvHo+AJ+LgMQ0In+MbL2BbQIv1HrszcAXL/enRcjT4SNxEhyuUH2NUyp
         kCi6PeYjfxTtQ1pfniEYw/4/yz63HDqE+yQsmR3bt8a/I0jzZJGeyB1PnqT9JSxaV7/4
         D2FQ==
X-Gm-Message-State: AJIora8ZAbEvLo0ML8vIFiN1tiTyS0zSxw3e29KV2sos5dohEILPh0T0
	pX/KTPu8I3Jpej8AEg04U6g=
X-Google-Smtp-Source: AGRyM1sIhUDszCCqY8InATbzKyazV7qIW99Vsxrxj6D6r5NJIZAZusPVchlaw56Dyu14wuO3rI244w==
X-Received: by 2002:a5d:5261:0:b0:21b:8740:382b with SMTP id l1-20020a5d5261000000b0021b8740382bmr2010526wrc.143.1656495866102;
        Wed, 29 Jun 2022 02:44:26 -0700 (PDT)
Date: Wed, 29 Jun 2022 11:44:23 +0200
From: Matias Ezequiel Vara Larsen <matiasevara@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Matias Ezequiel Vara Larsen <matias.vara@vates.fr>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [RFC PATCH 1/2] xen/memory : Add stats_table resource type
Message-ID: <20220629094423.GA250661@horizon>
References: <cover.1652797713.git.matias.vara@vates.fr>
 <d0afb6657b1e78df4857ad7bcc875982e9c022b4.1652797713.git.matias.vara@vates.fr>
 <C9B7EF20-595D-4BCB-8545-F35611B718AF@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <C9B7EF20-595D-4BCB-8545-F35611B718AF@citrix.com>

On Fri, Jun 17, 2022 at 08:54:34PM +0000, George Dunlap wrote:
> Preface: It looks like this may be one of your first hypervisor patches.  So thank you!  FYI there’s a lot that needs fixing up here; please don’t read a tone of annoyance into it, I’m just trying to tell you what needs fixing in the most efficient manner. :-)
> 

Thanks for the comments :) I have addressed some of them already in the response to the
cover letter.

> > On 17 May 2022, at 15:33, Matias Ezequiel Vara Larsen <matiasevara@gmail.com> wrote:
> > 
> > Allow to map vcpu stats using acquire_resource().
> 
> This needs a lot more expansion in terms of what it is that you’re doing in this patch and why.
> 

Got it. I'll improve the commit message in v1.

> > 
> > Signed-off-by: Matias Ezequiel Vara Larsen <matias.vara@vates.fr>
> > ---
> > xen/common/domain.c         | 42 +++++++++++++++++++++++++++++++++++++
> > xen/common/memory.c         | 29 +++++++++++++++++++++++++
> > xen/common/sched/core.c     |  5 +++++
> > xen/include/public/memory.h |  1 +
> > xen/include/xen/sched.h     |  5 +++++
> > 5 files changed, 82 insertions(+)
> > 
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 17cc32fde3..ddd9f88874 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -132,6 +132,42 @@ static void vcpu_info_reset(struct vcpu *v)
> >     v->vcpu_info_mfn = INVALID_MFN;
> > }
> > 
> > +static void stats_free_buffer(struct vcpu * v)
> > +{
> > +    struct page_info *pg = v->stats.pg;
> > +
> > +    if ( !pg )
> > +        return;
> > +
> > +    v->stats.va = NULL;
> > +
> > +    if ( v->stats.va )
> > +        unmap_domain_page_global(v->stats.va);
> > +
> > +    v->stats.va = NULL;
> 
> Looks like you meant to delete the first `v->stats.va = NULL` but forgot?
> 

Apologies for this; I completely missed it.

> > +
> > +    free_domheap_page(pg);
> 
> Pretty sure this will crash.
> 
> Unfortunately page allocation and freeing is somewhat complicated and requires a bit of boilerplate.  You can look at the vmtrace_alloc_buffer() code for a template, but the general sequence you want is as follows:
> 
> * On the allocate side:
> 
> 1. Allocate the page
> 
>    pg = alloc_domheap_page(d, MEMF_no_refcount);
> 
> This will allocate a page with the PGC_allocated bit set and a single reference count.  (If you pass a page with PGC_allocated set to free_domheap_page(), it will crash; which is why I said the above.)
> 
> 2.  Grab a general reference count for the vcpu struct’s reference to it; as well as a writable type count, so that it doesn’t get used as a special page
> 
> if ( !get_page_and_type(pg, d, PGT_writable_page) ) {
>   put_page_alloc_ref(pg);
>   /* failure path */
> }
> 
> * On the free side, don’t call free_domheap_pages() directly.  Rather, drop the allocation, then drop your own type count, thus:
> 
> v->stats.va = NULL;
> 
> put_page_alloc_ref(pg);
> put_page_and_type(pg);
> 
> The issue here is that we can’t free the page until all references have dropped; and the whole point of this exercise is to allow guest userspace in dom0 to gain a reference to the page.  We can’t actually free the page until *all* references are gone, including the userspace one in dom0.  The put_page() which brings the reference count to 0 will automatically free the page.
> 

Thanks for the explanation. For some reason, this is not crashing for me. I will
reimplement the allocation and release paths, and then update the documentation
that I proposed at
https://lists.xenproject.org/archives/html/xen-devel/2022-05/msg01963.html. The
idea of that reference document is to guide someone who would like to export a
new resource by relying on the acquire-resource interface.

> 
> > +}
> > +
> > +static int stats_alloc_buffer(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct page_info *pg;
> > +
> > +    pg = alloc_domheap_page(d, MEMF_no_refcount);
> > +
> > +    if ( !pg )
> > +        return -ENOMEM;
> > +
> > +    v->stats.va = __map_domain_page_global(pg);
> > +    if ( !v->stats.va )
> > +        return -ENOMEM;
> > +
> > +    v->stats.pg = pg;
> > +    clear_page(v->stats.va);
> > +    return 0;
> > +}
> 
> The other thing to say about this is that the memory is being allocated unconditionally, even if nobody is planning to read it.  The vast majority of Xen users will not be running xcp-rrd, so it will be pointless overheard.
> 
> At a basic level, you want to follow suit with the vmtrace buffers, which are only allocated if the proper domctl flag is set on domain creation.  (We could consider instead, or in addition, having a global Xen command-line parameter which would enable this feature for all domains.)
>

I agree. I will add a domctl flag on domain creation to enable the allocation of
these buffers.
 
> > +
> > static void vmtrace_free_buffer(struct vcpu *v)
> > {
> >     const struct domain *d = v->domain;
> > @@ -203,6 +239,9 @@ static int vmtrace_alloc_buffer(struct vcpu *v)
> >  */
> > static int vcpu_teardown(struct vcpu *v)
> > {
> > +
> > +    stats_free_buffer(v);
> > +
> >     vmtrace_free_buffer(v);
> > 
> >     return 0;
> > @@ -269,6 +308,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
> >     if ( vmtrace_alloc_buffer(v) != 0 )
> >         goto fail_wq;
> > 
> > +    if ( stats_alloc_buffer(v) != 0 )
> > +        goto fail_wq;
> > +
> >     if ( arch_vcpu_create(v) != 0 )
> >         goto fail_sched;
> > 
> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> > index 297b98a562..39de6d9d05 100644
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -1099,6 +1099,10 @@ static unsigned int resource_max_frames(const struct domain *d,
> >     case XENMEM_resource_vmtrace_buf:
> >         return d->vmtrace_size >> PAGE_SHIFT;
> > 
> > +    // WIP: to figure out the correct size of the resource
> > +    case XENMEM_resource_stats_table:
> > +        return 1;
> > +
> >     default:
> >         return -EOPNOTSUPP;
> >     }
> > @@ -1162,6 +1166,28 @@ static int acquire_vmtrace_buf(
> >     return nr_frames;
> > }
> > 
> > +static int acquire_stats_table(struct domain *d,
> > +                                unsigned int id,
> > +                                unsigned int frame,
> > +                                unsigned int nr_frames,
> > +                                xen_pfn_t mfn_list[])
> > +{
> > +    const struct vcpu *v = domain_vcpu(d, id);
> > +    mfn_t mfn;
> > +
> 
> Maybe I’m paranoid, but I might add an “ASSERT(nr_frames == 1)” here
>

Thanks, I will have this in mind in v1.
 
> > +    if ( !v )
> > +        return -ENOENT;
> > +
> > +    if ( !v->stats.pg )
> > +        return -EINVAL;
> > +
> > +    mfn = page_to_mfn(v->stats.pg);
> > +    mfn_list[0] = mfn_x(mfn);
> > +
> > +    printk("acquire_perf_table: id: %d, nr_frames: %d, %p, domainid: %d\n", id, nr_frames, v->stats.pg, d->domain_id);
> > +    return 1;
> > +}
> > +
> > /*
> >  * Returns -errno on error, or positive in the range [1, nr_frames] on
> >  * success.  Returning less than nr_frames contitutes a request for a
> > @@ -1182,6 +1208,9 @@ static int _acquire_resource(
> >     case XENMEM_resource_vmtrace_buf:
> >         return acquire_vmtrace_buf(d, id, frame, nr_frames, mfn_list);
> > 
> > +    case XENMEM_resource_stats_table:
> > +        return acquire_stats_table(d, id, frame, nr_frames, mfn_list);
> > +
> >     default:
> >         return -EOPNOTSUPP;
> >     }
> > diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> > index 8f4b1ca10d..2a8b534977 100644
> > --- a/xen/common/sched/core.c
> > +++ b/xen/common/sched/core.c
> > @@ -264,6 +264,7 @@ static inline void vcpu_runstate_change(
> > {
> >     s_time_t delta;
> >     struct sched_unit *unit = v->sched_unit;
> > +    uint64_t * runstate;
> > 
> >     ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
> >     if ( v->runstate.state == new_state )
> > @@ -287,6 +288,10 @@ static inline void vcpu_runstate_change(
> >     }
> > 
> >     v->runstate.state = new_state;
> > +
> > +    // WIP: use a different interface
> > +    runstate = (uint64_t*)v->stats.va;
> 
> I think you should look at xen.git/xen/include/public/hvm/ioreq.h:shared_iopage_t for inspiration.  Basically, you cast the void pointer to that type, and then just access `iopage->vcpu_ioreq[vcpuid]`.   Put it in a public header, and then both the userspace consumer and Xen can use the same structure.
> 

Thanks for pointing it out. I've addressed this comment in the response to the cover
letter.

> As I said in my response to the cover letter, I think vcpu_runstate_info_t would be something to look at and gain inspiration from.
> 
> The other thing to say here is that this is a hot path; we don’t want to be copying lots of information around if it’s not going to be used.  So you should only allocate the buffers if specifically enabled, and you should only copy the information over if v->stats.va != NULL.
> 
> I think that should be enough to start with; we can nail down more when you send v1.
> 

Thanks for the comments, I will tackle them in v1.

Matias

> Peace,
>  -George
> 




From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:16:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:16:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357720.586471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ujh-0004Y3-8t; Wed, 29 Jun 2022 10:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357720.586471; Wed, 29 Jun 2022 10:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ujh-0004Xw-4c; Wed, 29 Jun 2022 10:15:53 +0000
Received: by outflank-mailman (input) for mailman id 357720;
 Wed, 29 Jun 2022 10:15:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Ujf-0004Xq-D8
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:15:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72758b42-f794-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:15:49 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8F42122049;
 Wed, 29 Jun 2022 10:15:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 455F8133D1;
 Wed, 29 Jun 2022 10:15:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KB68DlQmvGKLGwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72758b42-f794-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656497748; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qlyXEotVs3KGLoWj7lEpW6RoUlD6dJzdc722NlwYUNk=;
	b=hwrstUrn51gitQvIuaTucP1RQg5OVCii3wZ1Zonq0Ognj6qzCOKvT2weiU2pzJIHgti7ak
	yUIv4ZEYnEeDA1Zb4gXOQuzaZVlNwOBTfI0tVvMtihYqXF9jSTwbzJU8EgkNB/HVAqZnQy
	pq9y7EZOQzT+VI2aOI+OBqjWe2IMq14=
Message-ID: <dbb48b14-9581-f85c-3fa0-c5b2dc5bea02@suse.com>
Date: Wed, 29 Jun 2022 12:15:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 12/25] .gitignore: Cleanup ignores of
 tools/libs/*/{headers.chk,*.pc}
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-13-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-13-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------YLGlkFnuz0UDWxoULsWYUIPd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------YLGlkFnuz0UDWxoULsWYUIPd
Content-Type: multipart/mixed; boundary="------------uBU5ZohxcQO3ZEBTgVDgNIyy";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <dbb48b14-9581-f85c-3fa0-c5b2dc5bea02@suse.com>
Subject: Re: [XEN PATCH v3 12/25] .gitignore: Cleanup ignores of
 tools/libs/*/{headers.chk,*.pc}
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-13-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-13-anthony.perard@citrix.com>

--------------uBU5ZohxcQO3ZEBTgVDgNIyy
Content-Type: multipart/mixed; boundary="------------cIKsxlVeS7SFnbNRTX8s4uls"

--------------cIKsxlVeS7SFnbNRTX8s4uls
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.22 18:04, Anthony PERARD wrote:
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------cIKsxlVeS7SFnbNRTX8s4uls--

--------------uBU5ZohxcQO3ZEBTgVDgNIyy--

--------------YLGlkFnuz0UDWxoULsWYUIPd--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:16:25 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:16:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357723.586482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UkD-000515-Gj; Wed, 29 Jun 2022 10:16:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357723.586482; Wed, 29 Jun 2022 10:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UkD-00050y-Dh; Wed, 29 Jun 2022 10:16:25 +0000
Received: by outflank-mailman (input) for mailman id 357723;
 Wed, 29 Jun 2022 10:16:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6UkB-00050f-Kh
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:16:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6UkA-0008CO-RL; Wed, 29 Jun 2022 10:16:22 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6UkA-0003J7-Kq; Wed, 29 Jun 2022 10:16:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=CDX5dW/kN9Ma7LhWIm4Lj0I6P9fRWx6ltP0lA23bHAg=; b=DfkMqrvwQ+kF5pMdbgkWR1Q5/2
	fCNtxYESVyphuBy6pS6In7JEScskgbDlaqB/4v2Wl4iMki6ZQqu3DUOE8V33cBrW1+sOr7it8aU0m
	mCAAmmRYnXCeD3w452jv5FcnQ/hitLouIm+njw0eFZChOwu/IltTn8IVx1F4nHotn3pQ=;
Message-ID: <57926957-8bdd-d62b-55fb-47874dc51cdb@xen.org>
Date: Wed, 29 Jun 2022 11:16:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
 <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
 <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
 <81c33c8c-e345-2fe3-32c6-2f80799eefd0@xen.org>
 <C7643376-EBD0-46C3-B940-D3F6198BD124@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <C7643376-EBD0-46C3-B940-D3F6198BD124@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 28/06/2022 16:23, Luca Fancellu wrote:
>> On 24 Jun 2022, at 18:25, Julien Grall <julien@xen.org> wrote:
>> On 24/06/2022 14:34, Luca Fancellu wrote:
>>>> On 24 Jun 2022, at 13:17, Julien Grall <julien@xen.org> wrote:
> Sorry for the late reply, this would be my changes, would you agree on them?

They LGTM.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:17:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357730.586493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UlU-0005fm-QK; Wed, 29 Jun 2022 10:17:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357730.586493; Wed, 29 Jun 2022 10:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UlU-0005ff-NS; Wed, 29 Jun 2022 10:17:44 +0000
Received: by outflank-mailman (input) for mailman id 357730;
 Wed, 29 Jun 2022 10:17:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cYaO=XE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1o6UlS-0005fS-Fn
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:17:42 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60081.outbound.protection.outlook.com [40.107.6.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b52b0d36-f794-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:17:41 +0200 (CEST)
Received: from AM5PR0601CA0049.eurprd06.prod.outlook.com (2603:10a6:206::14)
 by PR2PR08MB4906.eurprd08.prod.outlook.com (2603:10a6:101:26::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 10:17:38 +0000
Received: from AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::b9) by AM5PR0601CA0049.outlook.office365.com
 (2603:10a6:206::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend
 Transport; Wed, 29 Jun 2022 10:17:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT054.mail.protection.outlook.com (10.152.16.212) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 10:17:37 +0000
Received: ("Tessian outbound 8dc5ba215ad1:v121");
 Wed, 29 Jun 2022 10:17:37 +0000
Received: from 05c6941b7347.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BDDF4AB1-E719-4A1B-BE35-2BEB0FC891A3.1; 
 Wed, 29 Jun 2022 10:17:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 05c6941b7347.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 10:17:26 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com (2603:10a6:208:103::16)
 by AM0PR08MB5121.eurprd08.prod.outlook.com (2603:10a6:208:159::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 10:17:23 +0000
Received: from AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9]) by AM0PR08MB3809.eurprd08.prod.outlook.com
 ([fe80::4ca:af1b:4380:abf9%5]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 10:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b52b0d36-f794-11ec-b725-ed86ccbb4733
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Topic: [PATCH] docs/misra: Add instructions for cppcheck
Thread-Index:
 AQHYh7nJHM8+T3fZ/0Gr0RGCgDxGeq1eaYGAgAALcICAAARSgIAAFXiAgABAfoCABidzAIABPG0AgAAASgA=
Date: Wed, 29 Jun 2022 10:17:23 +0000
Message-ID: <B0D23F89-5368-4B21-946F-318A35743CB3@arm.com>
References: <20220624105311.21057-1-luca.fancellu@arm.com>
 <692d09fa-5513-132a-6b5b-4bc62e46a443@xen.org>
 <15F23829-3693-47CC-A9D6-3D7A3B44EB64@arm.com>
 <88bd7017-e2b3-59f3-a68a-25db9e53136d@xen.org>
 <CA8DFF26-3D7F-4CDA-9EDC-E173203B2A51@arm.com>
 <81c33c8c-e345-2fe3-32c6-2f80799eefd0@xen.org>
 <C7643376-EBD0-46C3-B940-D3F6198BD124@arm.com>
 <57926957-8bdd-d62b-55fb-47874dc51cdb@xen.org>
In-Reply-To: <57926957-8bdd-d62b-55fb-47874dc51cdb@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.80.82.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: ff744ac0-9a22-43ae-2450-08da59b89773
x-ms-traffictypediagnostic:
	AM0PR08MB5121:EE_|AM5EUR03FT054:EE_|PR2PR08MB4906:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2E80F210F494584394C2D087010AD989@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5121
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7245242d-4640-409c-f260-08da59b88f18
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 10:17:37.5502
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff744ac0-9a22-43ae-2450-08da59b89773
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4906



> On 29 Jun 2022, at 11:16, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 28/06/2022 16:23, Luca Fancellu wrote:
>>> On 24 Jun 2022, at 18:25, Julien Grall <julien@xen.org> wrote:
>>> On 24/06/2022 14:34, Luca Fancellu wrote:
>>>>> On 24 Jun 2022, at 13:17, Julien Grall <julien@xen.org> wrote:
>> Sorry for the late reply, this would be my changes, would you agree on them?
> 
> They LGTM.

Thanks, I will send V2 soon.

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:17:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:17:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357732.586504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ulj-00063r-7F; Wed, 29 Jun 2022 10:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357732.586504; Wed, 29 Jun 2022 10:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ulj-00063i-3w; Wed, 29 Jun 2022 10:17:59 +0000
Received: by outflank-mailman (input) for mailman id 357732;
 Wed, 29 Jun 2022 10:17:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6Ulh-000625-OL
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:17:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Ulh-0008En-H8; Wed, 29 Jun 2022 10:17:57 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6Ulh-0003KL-BQ; Wed, 29 Jun 2022 10:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tCNZoEBeaXiQQORTk17dQGs+E6WZpOg1vOmlhRfXa+A=; b=E4Wkgbj9pJCpCjBn+qTiKVwFS8
	ccsmOMzkMsKRSzfBQbI3XCOjEtT9sp3mrwrzMMceuq/Nkf9V8qdOlo0TGETyWaDdyM6bsvaHVcbEu
	4/5IxKhnzRQ8FcrjqJ/hwOfjYNp06CbulTKhFVRdyS8wn2jZ7+LupoUAbyDYadY6ORMA=;
Message-ID: <9172fc95-0939-3680-94cf-b991c46d0918@xen.org>
Date: Wed, 29 Jun 2022 11:17:55 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-2-Penny.Zheng@arm.com>
 <45a41132-1520-a894-a9eb-6688c79a660d@xen.org>
 <DU2PR08MB7325C156D4D6D5A2D18E0FF4F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DU2PR08MB7325C156D4D6D5A2D18E0FF4F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 29/06/2022 06:38, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, June 25, 2022 1:55 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v5 1/8] xen/arm: introduce static shared memory
>>
>> Hi Penny,
>>
>> On 20/06/2022 06:11, Penny Zheng wrote:
>>> From: Penny Zheng <penny.zheng@arm.com>
>>>
>>> This patch serie introduces a new feature: setting up static
>>
>> Typo: s/serie/series/
>>
>>> shared memory on a dom0less system, through device tree configuration.
>>>
>>> This commit parses shared memory node at boot-time, and reserve it in
>>> bootinfo.reserved_mem to avoid other use.
>>>
>>> This commits proposes a new Kconfig CONFIG_STATIC_SHM to wrap
>>> static-shm-related codes, and this option depends on static memory(
>>> CONFIG_STATIC_MEMORY). That's because that later we want to reuse a
>>> few helpers, guarded with CONFIG_STATIC_MEMORY, like
>>> acquire_staticmem_pages, etc, on static shared memory.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>> ---
>>> v5 change:
>>> - no change
>>> ---
>>> v4 change:
>>> - nit fix on doc
>>> ---
>>> v3 change:
>>> - make nr_shm_domain unsigned int
>>> ---
>>> v2 change:
>>> - document refinement
>>> - remove bitmap and use the iteration to check
>>> - add a new field nr_shm_domain to keep the number of shared domain
>>> ---
>>>    docs/misc/arm/device-tree/booting.txt | 120
>> ++++++++++++++++++++++++++
>>>    xen/arch/arm/Kconfig                  |   6 ++
>>>    xen/arch/arm/bootfdt.c                |  68 +++++++++++++++
>>>    xen/arch/arm/include/asm/setup.h      |   3 +
>>>    4 files changed, 197 insertions(+)
>>>
>>> diff --git a/docs/misc/arm/device-tree/booting.txt
>>> b/docs/misc/arm/device-tree/booting.txt
>>> index 98253414b8..6467bc5a28 100644
>>> --- a/docs/misc/arm/device-tree/booting.txt
>>> +++ b/docs/misc/arm/device-tree/booting.txt
>>> @@ -378,3 +378,123 @@ device-tree:
>>>
>>>    This will reserve a 512MB region starting at the host physical address
>>>    0x30000000 to be exclusively used by DomU1.
>>> +
>>> +Static Shared Memory
>>> +====================
>>> +
>>> +The static shared memory device tree nodes allow users to statically
>>> +set up shared memory on dom0less system, enabling domains to do
>>> +shm-based communication.
>>> +
>>> +- compatible
>>> +
>>> +    "xen,domain-shared-memory-v1"
>>> +
>>> +- xen,shm-id
>>> +
>>> +    An 8-bit integer that represents the unique identifier of the shared
>> memory
>>> +    region. The maximum identifier shall be "xen,shm-id = <0xff>".
>>> +
>>> +- xen,shared-mem
>>> +
>>> +    An array takes a physical address, which is the base address of the
>>> +    shared memory region in host physical address space, a size, and a
>> guest
>>> +    physical address, as the target address of the mapping. The number of
>> cells
>>> +    for the host address (and size) is the same as the guest pseudo-physical
>>> +    address and they are inherited from the parent node.
>>
>> Sorry for jumping into the discussion late. But as this is going to be a stable ABI, I
>> would like to make sure the interface is easily extendable.
>>
>> AFAIU, with your proposal the host physical address is mandatory. I would
>> expect that some users may want to share memory but not care about the
>> exact location in memory. So I think it would be good to make it optional in
>> the binding.
>>
>> I think this wants to be done now because it would be difficult to change the
>> binding afterwards (the host physical address is the first set of cells).
>>
>> Xen doesn't need to handle the optional case.
>>
> 
> Sure, I'll make "the host physical address" optional in the binding for now,
> without the actual code implementation; I'll add that later when I have time.
> 
> The use case you mentioned is that we let Xen allocate an arbitrary static shared
> memory region, so the size and guest physical address are still mandatory, right?

That's correct.
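
For illustration, a node following the binding described above might look like
the sketch below. This is not taken from the patch: the node names, shm-id and
all addresses/sizes are hypothetical, and the cell order follows the prose
description in this patch version ("a physical address, ... a size, and a
guest physical address"):

    /* Hypothetical sketch; names and values are illustrative only. */
    domU1 {
            compatible = "xen,domain";
            #address-cells = <0x2>;
            #size-cells = <0x2>;
            /* ... other dom0less domain properties ... */

            domU1-shared-mem@30000000 {
                    compatible = "xen,domain-shared-memory-v1";
                    xen,shm-id = <0x1>;
                    /* host physical address, size, guest physical address.
                     * The leading host address cells could be omitted once
                     * they become optional, as discussed above. */
                    xen,shared-mem = <0x0 0x30000000 0x0 0x20000000
                                      0x0 0x80000000>;
            };
    };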

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:18:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357740.586515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UmG-0006uR-Hy; Wed, 29 Jun 2022 10:18:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357740.586515; Wed, 29 Jun 2022 10:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6UmG-0006uF-F9; Wed, 29 Jun 2022 10:18:32 +0000
Received: by outflank-mailman (input) for mailman id 357740;
 Wed, 29 Jun 2022 10:18:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6UmE-0005fS-TA
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:18:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d283695b-f794-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:18:30 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 01F0422057;
 Wed, 29 Jun 2022 10:18:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D589E133D1;
 Wed, 29 Jun 2022 10:18:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hWleMvUmvGKlHAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:18:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d283695b-f794-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656497910; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Fqq5ZtmFZsfKexKA77Ajx9AwmFNW4TjK54RY27OzxrA=;
	b=rY+bCZASUMrABZL1dZWBa+U4oa9t1SsbAUSC217mzygy+VdbdaIJQE4cImJU3j8Z1oHbBl
	Oj+5vyg5iC1l5Rycjo/8c0TkfdC1fPb70fxsmFJqogQSMO8jUPu6e0QHbjWdOREqLYVb/E
	X8CkZbKwCA/K4yeFJVQDekFJ+gtSWRI=
Message-ID: <fca7f825-39a6-ed5d-52b7-6fd5a4799607@suse.com>
Date: Wed, 29 Jun 2022 12:18:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 13/25] tools/libs/util: cleanup Makefile
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-14-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-14-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------m4b0S0xoGYG6kx7OSY7wQ43i"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------m4b0S0xoGYG6kx7OSY7wQ43i
Content-Type: multipart/mixed; boundary="------------LqCLT7AgQIJvokbt04AxihNL";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <fca7f825-39a6-ed5d-52b7-6fd5a4799607@suse.com>
Subject: Re: [XEN PATCH v3 13/25] tools/libs/util: cleanup Makefile
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-14-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-14-anthony.perard@citrix.com>

--------------LqCLT7AgQIJvokbt04AxihNL
Content-Type: multipart/mixed; boundary="------------6CMlah0P3DztrhqPnCnHwAC9"

--------------6CMlah0P3DztrhqPnCnHwAC9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDYuMjIgMTg6MDQsIEFudGhvbnkgUEVSQVJEIHdyb3RlOg0KPiBSZW1vdmUgLUku
IGZyb20gQ0ZMQUdTLCBpdCBpc24ndCBuZWNlc3NhcnkuDQo+IA0KPiBSZW1vdmVkICQoQVVU
T1NSQ1MpLCBpdCBpc24ndCB1c2VkLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogQW50aG9ueSBQ
RVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJpeC5jb20+DQoNClJldmlld2VkLWJ5OiBKdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQoNCg0KSnVlcmdlbg0K
--------------6CMlah0P3DztrhqPnCnHwAC9
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6CMlah0P3DztrhqPnCnHwAC9--

--------------LqCLT7AgQIJvokbt04AxihNL--

--------------m4b0S0xoGYG6kx7OSY7wQ43i
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8JvUFAwAAAAAACgkQsN6d1ii/Ey/P
3gf/f+LpHthZWUtcavx/x2Eua9n+jQQ+hmp1ajFVzk1cCjvkv9DCXLL0QUXuAdQo9S8stfhApxSg
tr93HWh1Rla3viJx1AiQYTmAR2HxPo7gSw+rOA/NJz3efLMiMiznKoeItmqAW8AQnS79wM5c0U0+
gmmXKHYt9pmAzthNlg1Fe70ospTcuV5uq4vtsSAPYDVolaLeCAI4mX3EnGOIOrlBsqpXtAUjXCZ8
B1Y568fkYyErDc/p/upXLP3aC/sJhfoyZO6oWlIyqqi2afRIsHqk/CF5rvJIdLcJ6GXR0tyUE4o/
dRuxCc3AJjHmjPPweiRiWPL9oDQ0KlriG3cwBM4Ftw==
=YljK
-----END PGP SIGNATURE-----

--------------m4b0S0xoGYG6kx7OSY7wQ43i--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:19:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:19:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357746.586525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Une-0007c6-SJ; Wed, 29 Jun 2022 10:19:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357746.586525; Wed, 29 Jun 2022 10:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Une-0007bz-Pl; Wed, 29 Jun 2022 10:19:58 +0000
Received: by outflank-mailman (input) for mailman id 357746;
 Wed, 29 Jun 2022 10:19:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Und-0007bo-Hy
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:19:57 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 060320d0-f795-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:19:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 605A51FEE8;
 Wed, 29 Jun 2022 10:19:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 41D1C133D1;
 Wed, 29 Jun 2022 10:19:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id u3C1DkwnvGI4HQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 060320d0-f795-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656497996; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=m7WduauO9Mq6OSzkDfleBPgkhI2aSccQXxxgG4buZR8=;
	b=NE/k2TOFn/6e2b9ZXgzyi54vmqgoZR2N5tmpWTudS+M2bz5xi3H4M3pwpwwnwxbuSsEkFP
	57zwAasmb8eXeq08h4XsRnqcp23Y+rS1oy3LdJUFbq2TlasAzoBtm/M4LkCOSMI5exQ6TK
	B/83jpT66Gul9WPFL46CgReH1QQuGn0=
Message-ID: <7ccc0f96-c09b-c4f4-a673-0a5002100df8@suse.com>
Date: Wed, 29 Jun 2022 12:19:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 15/25] libs/libs.mk: Rename $(LIB) to $(TARGETS)
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-16-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-16-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------MIzt6yJE0anIVbkIC0qb9HUV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------MIzt6yJE0anIVbkIC0qb9HUV
Content-Type: multipart/mixed; boundary="------------0vBSRWAgE109fnJZMryNggsk";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <7ccc0f96-c09b-c4f4-a673-0a5002100df8@suse.com>
Subject: Re: [XEN PATCH v3 15/25] libs/libs.mk: Rename $(LIB) to $(TARGETS)
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-16-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-16-anthony.perard@citrix.com>

--------------0vBSRWAgE109fnJZMryNggsk
Content-Type: multipart/mixed; boundary="------------KGA7jkl7wo2n0SnZyaWZX8o1"

--------------KGA7jkl7wo2n0SnZyaWZX8o1
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDYuMjIgMTg6MDQsIEFudGhvbnkgUEVSQVJEIHdyb3RlOg0KPiBTaWduZWQtb2Zm
LWJ5OiBBbnRob255IFBFUkFSRCA8YW50aG9ueS5wZXJhcmRAY2l0cml4LmNvbT4NCg0KUmV2
aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4NCg0KDQpKdWVyZ2Vu
DQo=
--------------KGA7jkl7wo2n0SnZyaWZX8o1
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KGA7jkl7wo2n0SnZyaWZX8o1--

--------------0vBSRWAgE109fnJZMryNggsk--

--------------MIzt6yJE0anIVbkIC0qb9HUV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8J0sFAwAAAAAACgkQsN6d1ii/Ey/i
FQf/axNy+a1THxEHqStoivFB+zHQvxoQLcGHqZzS1grM7HLemA7oC4d4RnuyNU2JSNur8UQ+2ipo
ldD84/YgsCjzoaxwCebhJqtAOFJF+VqmYm8MdFTqEehi7B38Maa55hcekJ2TS2SZqpYQQrNSLWli
nEH5k/KxHbH9N1W6kfPY0icEIHZ7cEZecUi5NN/FzGwws6pZSUAUboGqIpIvXfMhXoRXNKPxnf5X
j8bT7bkhEAXXErMNoEanNKrNGAA6XzOoMyIiwEcuLFFbWd5jtwGHk3DAMtM8CnVxP3c+4UN6+lS0
BrpRXNGBXFwj109qdAtrWwxkHE64XkcqkbVoDWAB/g==
=UVUC
-----END PGP SIGNATURE-----

--------------MIzt6yJE0anIVbkIC0qb9HUV--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:22:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357756.586537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Upq-0000at-AR; Wed, 29 Jun 2022 10:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357756.586537; Wed, 29 Jun 2022 10:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Upq-0000am-7U; Wed, 29 Jun 2022 10:22:14 +0000
Received: by outflank-mailman (input) for mailman id 357756;
 Wed, 29 Jun 2022 10:22:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Upo-0000ae-Ua
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:22:12 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56b1c377-f795-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:22:12 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 876DA22061;
 Wed, 29 Jun 2022 10:22:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 68043133D1;
 Wed, 29 Jun 2022 10:22:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /9TKF9MnvGKLHgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:22:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56b1c377-f795-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656498131; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7SIfEwfoqyOteOiCxudJa/3wxg5EnZPiOAnLIj9MdeA=;
	b=swQRgiAJNeFwbK5s4L0QveptEVdeLmvD3p28yGOL4FWfUz5SEPBDqZPP0JLSFmDzkpiexy
	u3r4ijlyGyitSfkxlPvvxjr7YPtpvsLdFfp73cBWXI508u1Nnd42n02oTD5MzOSUrcNnkN
	S7wZLnYzzcsoMxmyQtGuqjm+o6wZaSQ=
Message-ID: <721ac9e1-116a-1b74-7257-ce0ff5b0dfac@suse.com>
Date: Wed, 29 Jun 2022 12:22:10 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 16/25] libs/libs.mk: Remove the need for
 $(PKG_CONFIG_INST)
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-17-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-17-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------5SLnLgS10Q31SQbw2lP0Q59A"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------5SLnLgS10Q31SQbw2lP0Q59A
Content-Type: multipart/mixed; boundary="------------rxuQGZYwzeBDEXWwxp01vRGY";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <721ac9e1-116a-1b74-7257-ce0ff5b0dfac@suse.com>
Subject: Re: [XEN PATCH v3 16/25] libs/libs.mk: Remove the need for
 $(PKG_CONFIG_INST)
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-17-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-17-anthony.perard@citrix.com>

--------------rxuQGZYwzeBDEXWwxp01vRGY
Content-Type: multipart/mixed; boundary="------------31I0aYDAAwcPFv1KXh3q7Saa"

--------------31I0aYDAAwcPFv1KXh3q7Saa
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDYuMjIgMTg6MDQsIEFudGhvbnkgUEVSQVJEIHdyb3RlOg0KPiBXZSBjYW4gc2lt
cGx5IHVzZSAkKFBLR19DT05GSUcpIHRvIHNldCB0aGUgcGFyYW1ldGVycywgYW5kIGFkZCBp
dCB0bw0KPiAkKFRBUkdFVFMpIGFzIG5lY2Vzc2FyeS4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6
IEFudGhvbnkgUEVSQVJEIDxhbnRob255LnBlcmFyZEBjaXRyaXguY29tPg0KDQpSZXZpZXdl
ZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPg0KDQoNCkp1ZXJnZW4NCg==

--------------31I0aYDAAwcPFv1KXh3q7Saa
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------31I0aYDAAwcPFv1KXh3q7Saa--

--------------rxuQGZYwzeBDEXWwxp01vRGY--

--------------5SLnLgS10Q31SQbw2lP0Q59A
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8J9MFAwAAAAAACgkQsN6d1ii/Ey+L
6AgAmluXb+12RC0WclWh/Meliqx2LfzYxaD/FhieYUjfwy3JpjMYmBsqc/GidprUvBgdsAQuYnrt
jy9gBt0np5xN+GrKVFxzS09BSUOSMGAwy/gly7wi3BwbQ3nhG2xRS3rKxr838UGyvbVC7xqoHhnA
O4LdFDbFD3Q9pXXTx7MJAxgL9rIlUkLzdvh5cMhVsmc41KhTi5FO7y1kW7/DmAcSmdJrdBjPZ3vv
/v6Zs6oXTBXBPdc3NtOxYUfPqFFSYv01Aplv9hwlEmdSEcF44sJhdjyWr1YA13PjUBFQ6hyO0XrC
BtQplSNmNJTGFuTAgqPce9bke/ge4CmPgbUF6vfT2A==
=KLG5
-----END PGP SIGNATURE-----

--------------5SLnLgS10Q31SQbw2lP0Q59A--
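
The pattern acked above, generating the pkg-config file through an ordinary rule added to $(TARGETS) instead of a dedicated $(PKG_CONFIG_INST) step, can be sketched roughly as follows (file and variable names are illustrative, not the actual libs.mk contents):

```make
# Illustrative sketch only: the .pc file is just another target, so no
# separate $(PKG_CONFIG_INST) variable or install hook is needed.
PKG_CONFIG := xenfoo.pc            # hypothetical pkg-config file name
TARGETS += $(PKG_CONFIG)

$(PKG_CONFIG): $(PKG_CONFIG).in
	sed -e 's|@prefix@|$(prefix)|g' $< > $@
```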


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:23:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357763.586547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ur2-0001Eb-Q5; Wed, 29 Jun 2022 10:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357763.586547; Wed, 29 Jun 2022 10:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Ur2-0001EU-NH; Wed, 29 Jun 2022 10:23:28 +0000
Received: by outflank-mailman (input) for mailman id 357763;
 Wed, 29 Jun 2022 10:23:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Ur2-0001EL-5K
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:23:28 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8395a331-f795-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 12:23:27 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 18EA12206C;
 Wed, 29 Jun 2022 10:23:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EE3FE133D1;
 Wed, 29 Jun 2022 10:23:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ViezNx4ovGIpHwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:23:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8395a331-f795-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656498207; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zIpFvp8gnwid1VpzoRX0Y3WSiR1K0gBpoUdliXfMNSc=;
	b=oTDDT4RqF2ZX73RSYrw0kE/Lku2V0CS/tBMe6pQQtC47hKrT6RUmrqXUOrVUkLaf0iiM/a
	DGNEaWZkyK3C5ebp+SkOoinJhwQD0jXumpMvgN5/wZosAuKuOoJe9V7KFjECX1RT/r0EpC
	wWly6xlmks6S32GQispCVet6Fo3C4yU=
Message-ID: <b6fba5ef-61f8-9834-137b-3e1e5430c966@suse.com>
Date: Wed, 29 Jun 2022 12:23:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 17/25] libs/libs.mk: Rework target headers.chk
 dependencies
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-18-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-18-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------DW9IugJB9Dv2Zji5LsXbO0EF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------DW9IugJB9Dv2Zji5LsXbO0EF
Content-Type: multipart/mixed; boundary="------------TX5pue2pTq9fYKtI3h4Odo4P";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <b6fba5ef-61f8-9834-137b-3e1e5430c966@suse.com>
Subject: Re: [XEN PATCH v3 17/25] libs/libs.mk: Rework target headers.chk
 dependencies
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-18-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-18-anthony.perard@citrix.com>

--------------TX5pue2pTq9fYKtI3h4Odo4P
Content-Type: multipart/mixed; boundary="------------qMeKs0CAgg4XykMNEI5bkOKb"

--------------qMeKs0CAgg4XykMNEI5bkOKb
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.22 18:04, Anthony PERARD wrote:
> There is no need to call the "headers.chk" target when it isn't
> wanted, so it never need to be .PHONY.
> 
> Also, there is no more reason to separate the prerequisites from the
> recipe.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------qMeKs0CAgg4XykMNEI5bkOKb--

--------------TX5pue2pTq9fYKtI3h4Odo4P--

--------------DW9IugJB9Dv2Zji5LsXbO0EF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8KB4FAwAAAAAACgkQsN6d1ii/Ey99
Hgf+MfCO03Eepn88mEdDvTOXubLOU6ocm491TJLrS+bEVDd0uaeivkaTI/13+B2BDtAHjJjfxxCe
5x7j2qBqXRJ4qNQKa8UNJURpfkS3VbMsHNpfPPefHHiOvCK6/z1l6qZ/f8G0eoe8Dfto/CAGmKNv
dh+VfsFAFHkoStlNb7hCvtfeVWK+HrDbaxTBaD7UuXN0QmyXyhmmPCqIzMakdooOXrxWmIn0c85A
thwGtkuDEGAmhKGFSHJyl+BSb1Rxx3Zba9UqGpn4T0Sx/qF+zxQjCtSdMcajVypp4VA4ghGTVILQ
s99EusDE0mHXXhprPfTU1nJP/WfZ4b8+wtHUJ2dYZQ==
=+AHL
-----END PGP SIGNATURE-----

--------------DW9IugJB9Dv2Zji5LsXbO0EF--
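
The rework acked above, making headers.chk a regular file target with its prerequisites listed on the rule itself, can be sketched roughly like this (the header list and recipe are illustrative, not the actual libs.mk rule):

```make
# Illustrative sketch only: headers.chk is an ordinary file target, so
# make rebuilds it only when a header changes and it never needs to be
# .PHONY; prerequisites and recipe live in one rule.
headers.chk: $(wildcard include/*.h)
	for i in $^; do \
	    $(CC) -x c -ansi -Wall -Werror -include stdint.h \
	        -S -o /dev/null $$i || exit 1; \
	done
	touch $@
```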


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:26:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:26:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357769.586559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Utb-0001pf-78; Wed, 29 Jun 2022 10:26:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357769.586559; Wed, 29 Jun 2022 10:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Utb-0001pY-4N; Wed, 29 Jun 2022 10:26:07 +0000
Received: by outflank-mailman (input) for mailman id 357769;
 Wed, 29 Jun 2022 10:26:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Uta-0001pS-1X
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:26:06 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1a0644b-f795-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 12:26:05 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D5D7021F90;
 Wed, 29 Jun 2022 10:26:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B656B133D1;
 Wed, 29 Jun 2022 10:26:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4yYXK7wovGJGIAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:26:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1a0644b-f795-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656498364; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NQeCfMHyw+TPb/Ff8NyKyc14By/yE0uYswCVx9GkER8=;
	b=uFX0PJKb3qtmbmZHOg58WhyYsYeKctbVkQFr77y8gL3FzbDtOYuMFISXH8BNnn30kEVG/0
	9h4GaIGtmEyfmCwME2qlG6z5icClgOU+TvmgL3monBE+1rpwR9VZhH7qRWDAfkHTjiRc+y
	xuq0IQuL8HxW64PIOOS/PRSj7h+/lHk=
Message-ID: <91e01772-1d23-760b-4cea-3d0ed7a62237@suse.com>
Date: Wed, 29 Jun 2022 12:26:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 18/25] tools: Introduce $(xenlibs-rpath,..) to
 replace $(SHDEPS_lib*)
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-19-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-19-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------tw60pFjxzWpjI2bzB05Nkr7n"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------tw60pFjxzWpjI2bzB05Nkr7n
Content-Type: multipart/mixed; boundary="------------Q80JA0qdgJ3Zb3EjL3K5uGnZ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <91e01772-1d23-760b-4cea-3d0ed7a62237@suse.com>
Subject: Re: [XEN PATCH v3 18/25] tools: Introduce $(xenlibs-rpath,..) to
 replace $(SHDEPS_lib*)
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-19-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-19-anthony.perard@citrix.com>

--------------Q80JA0qdgJ3Zb3EjL3K5uGnZ
Content-Type: multipart/mixed; boundary="------------8y1CJZWShjfV195fNCqonXfm"

--------------8y1CJZWShjfV195fNCqonXfm
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.22 18:04, Anthony PERARD wrote:
> This patch introduce a new macro $(xenlibs-dependencies,) to generate
> a list of all the xen library that a library is list against, and they
> are listed only once. We use the side effect of $(sort ) which remove
> duplicates.
> 
> This is used by another macro $(xenlibs-rpath,) which is to replace
> $(SHDEPS_libxen*).
> 
> In libs.mk, we don't need to $(sort ) SHLIB_lib* anymore as this was used
> to remove duplicates and they are no more duplicates.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------8y1CJZWShjfV195fNCqonXfm--

--------------Q80JA0qdgJ3Zb3EjL3K5uGnZ--

--------------tw60pFjxzWpjI2bzB05Nkr7n
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8KLwFAwAAAAAACgkQsN6d1ii/Ey/Q
QAf8D/+9D8RL/GIfrgULbINOPlRX2SOLjbmtR/6gvItV91V0jayTvOfMrctQEvJjCV9FaRcLku1t
pEmhyarC85sx4lUDYquiQ2iihFvUMAvfEKSjHRM02ai5TB4J8HGuLG3RBx167kYTF0zrdvDHet5M
VrllKQGO42vgirTYgWe9DLvq7sLt+am/UbAmYMPqjPejTkzLUYd3EtnnWvTgezCp/HyCFZV9l21Q
BqvqbEj1VI9a40HXfmbEro/UEH59jK0a1dZBVaPVp42F9+J7DWDExdlF08L3vQ57orzHhsZfJ5Al
uU+ILKJgkibHKE7oAT8VHtXMUTqkFsgDctC/AdfcEA==
=GJgn
-----END PGP SIGNATURE-----

--------------tw60pFjxzWpjI2bzB05Nkr7n--
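
The macros acked above rely on GNU Make's $(sort), which orders a word list and, as a documented side effect, removes duplicates. A rough sketch of the idea (the macro names follow the patch subjects; the bodies are illustrative, not the actual tools/Rules.mk definitions):

```make
comma := ,

# Illustrative: recursively collect every Xen library that $(1) links
# against; $(sort) drops the duplicates the recursion produces.
define xenlibs-dependencies
    $(sort $(foreach lib,$(USELIBS_$(1)), \
        $(lib) $(call xenlibs-dependencies,$(lib))))
endef

# Emit one -rpath-link per dependency, duplicates already removed.
xenlibs-rpath = $(addprefix -Wl$(comma)-rpath-link=$(XEN_ROOT)/tools/libs/, \
    $(call xenlibs-dependencies,$(1)))
```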


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:29:38 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:29:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357775.586569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Uwx-0002bJ-Ld; Wed, 29 Jun 2022 10:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357775.586569; Wed, 29 Jun 2022 10:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Uwx-0002bC-Im; Wed, 29 Jun 2022 10:29:35 +0000
Received: by outflank-mailman (input) for mailman id 357775;
 Wed, 29 Jun 2022 10:29:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Uww-0002b6-97
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:29:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5d68e7c9-f796-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:29:32 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 01E641FEF9;
 Wed, 29 Jun 2022 10:29:33 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D7481132C0;
 Wed, 29 Jun 2022 10:29:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iNU2M4wpvGKUIQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:29:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d68e7c9-f796-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656498573; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=eiIzUnp80qj7odxnRHFL40W1UZOBVwV1YdzoZJs4SZs=;
	b=hlsRi9gv44N+ZGp/b4w8rsC1455WCuHOZMQwbZKLIDSx5oujn65TuBH68q+vOWC2DjAfg4
	sU1cWVZD/vdVk8raG06qgakcIGOS654Ogwav+n/eG3c8ebFWsxydjYa3J/zPOZKOLEPJdK
	ALu0p8S9Z5ItcfM3VZR+T2rCX+8EmV8=
Message-ID: <ec3487df-bf54-cf99-71c3-bddf296fa1bb@suse.com>
Date: Wed, 29 Jun 2022 12:29:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 19/25] tools: Introduce $(xenlibs-ldlibs, ) macro
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-20-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-20-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ofYjV57i3RDAhuvTKJuvPH49"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ofYjV57i3RDAhuvTKJuvPH49
Content-Type: multipart/mixed; boundary="------------jcZwaMb0vRvGbj93Acdx15XK";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
Message-ID: <ec3487df-bf54-cf99-71c3-bddf296fa1bb@suse.com>
Subject: Re: [XEN PATCH v3 19/25] tools: Introduce $(xenlibs-ldlibs, ) macro
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-20-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-20-anthony.perard@citrix.com>

--------------jcZwaMb0vRvGbj93Acdx15XK
Content-Type: multipart/mixed; boundary="------------TD8v8nF6KcHcVTopEY9IoxmX"

--------------TD8v8nF6KcHcVTopEY9IoxmX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.22 18:04, Anthony PERARD wrote:
> This can be used when linking against multiple in-tree Xen libraries,
> and avoid duplicated flags. It can be used instead of multiple
> $(LDLIBS_libxen*).
> 
> For now, replace the open-coding in libs.mk.
> 
> The macro $(xenlibs-libs, ) will be useful later when only the path to
> the libraries is wanted (e.g. for checking for dependencies).
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------TD8v8nF6KcHcVTopEY9IoxmX--

--------------jcZwaMb0vRvGbj93Acdx15XK--

--------------ofYjV57i3RDAhuvTKJuvPH49
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8KYwFAwAAAAAACgkQsN6d1ii/Ey+B
Fwf+PgyPF986RcKTLU2DcbJGhVd6WyEqwHvTcE5tXMPMuu+WqM7oFjHi1wm1UD0IJYXyl4tyS28D
QTyK6OisF3uyQSb8VzKvDfjfogm6dBLm5rsTz1vWKfoAQdVoUY/UpEUX0d3UkrKG4JGnX7pAd5Vu
Pbl0g8qwt8UCFXcu/hWrsL8Jj34CBPOQekyRVAMRiAoDaYeG6dwH4JdoZR59lSe9kaByWGk5eD/E
OozC3HIqglMdDhjgEuP350hPK5TbIP6zKSa5KKrPBqVw5yfLEponUak2wgSfC76Jdaqb+M/pB0XA
dEvUkB3wiSr8RuIU/hPAHysAmn9P467SkDoH2tQg9g==
=D6jH
-----END PGP SIGNATURE-----

--------------ofYjV57i3RDAhuvTKJuvPH49--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:31:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357780.586581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Uym-0003yF-27; Wed, 29 Jun 2022 10:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357780.586581; Wed, 29 Jun 2022 10:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Uyl-0003y8-VI; Wed, 29 Jun 2022 10:31:27 +0000
Received: by outflank-mailman (input) for mailman id 357780;
 Wed, 29 Jun 2022 10:31:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m/oR=XE=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6Uyk-0003y2-OQ
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:31:26 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0d49c0f-f796-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 12:31:25 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9FE4A22072;
 Wed, 29 Jun 2022 10:31:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4EC50132C0;
 Wed, 29 Jun 2022 10:31:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id y9fVEf0pvGIpIwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 29 Jun 2022 10:31:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0d49c0f-f796-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656498685; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=87C0rkajzJrJcpRcxfDhoQfkWwe0sArB5dJQ5FClOkc=;
	b=l2zJlHBoIW4Z50IlImGnrhNYpBps7ytCfYh0ErYT/YwM6hnPbaSX0oaNkF4P32+s1ob/Wa
	94wBWEbiKhkVK1oYDla16Co9BqmJ/8yryje3zT8kGTbLusn7/07uHPOLr/UdacS9/sSlYh
	/5dVf0kDx7fBwqoX4YRSaIPeqBVHdiA=
Message-ID: <dee9190d-635b-9af2-0cb0-35fd0b4e41cd@suse.com>
Date: Wed, 29 Jun 2022 12:31:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>,
 Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-26-anthony.perard@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220624160422.53457-26-anthony.perard@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------xLyClJFsxMZWXDWgCCG0gtaF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------xLyClJFsxMZWXDWgCCG0gtaF
Content-Type: multipart/mixed; boundary="------------I3mZZ3DK0leXqMo1v70lpkRH";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>,
 Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Message-ID: <dee9190d-635b-9af2-0cb0-35fd0b4e41cd@suse.com>
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-26-anthony.perard@citrix.com>
In-Reply-To: <20220624160422.53457-26-anthony.perard@citrix.com>

--------------I3mZZ3DK0leXqMo1v70lpkRH
Content-Type: multipart/mixed; boundary="------------gOTZ0Enpy3C5r3h1g7Q5nWZq"

--------------gOTZ0Enpy3C5r3h1g7Q5nWZq
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.22 18:04, Anthony PERARD wrote:
> Patch "tools: Add -Werror by default to all tools/" has added
> "-Werror" to CFLAGS in tools/Rules.mk, remove it from every other
> makefile as it is now duplicated.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------gOTZ0Enpy3C5r3h1g7Q5nWZq--

--------------I3mZZ3DK0leXqMo1v70lpkRH--

--------------xLyClJFsxMZWXDWgCCG0gtaF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK8KfwFAwAAAAAACgkQsN6d1ii/Ey9b
zwf+PQwh5cnIl+lV9wiwi34S+20YVP4Dt43vu0knCGNxMs/8YCRewJVNvSHr8DEYZ5uewC972yKo
8XYYpSwiwBJt+5iEn327gqR6b22IZr91xiGtw9AeyQdksrWEcXCXPU+q/BDw1cQqsImmk7vd3Djy
KgAMj1xePSfd0kdSgL1ezFFqiueGgraem7JWRgbQ27cFkFTFzTzgIpa2jgZnOj/IWi9gCklFJrT/
y1Z+ReYwspa3GVcDn9TiQI2Wa7HoFe+KgRsSD+M6UGJR9pQxThuFyI12mr6aMQpOrn60KBdkcJEw
Y0JnAxxkQTzH7lpqEuj+/p15lRKsIgArglswBwlzQg==
=L0mL
-----END PGP SIGNATURE-----

--------------xLyClJFsxMZWXDWgCCG0gtaF--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:34:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357788.586591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6V1z-0004eA-M2; Wed, 29 Jun 2022 10:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357788.586591; Wed, 29 Jun 2022 10:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6V1z-0004e3-J8; Wed, 29 Jun 2022 10:34:47 +0000
Received: by outflank-mailman (input) for mailman id 357788;
 Wed, 29 Jun 2022 10:34:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6V1y-0004dx-Mt
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:34:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6V1y-00009U-3Z; Wed, 29 Jun 2022 10:34:46 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6V1x-00047W-TP; Wed, 29 Jun 2022 10:34:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=SFUA7YNICG+IslhnVSZ93t4ry5H9zkAIIe/R5MN3gks=; b=P1K6Ry2j3UVxhrbXi0CXrKXHlt
	bGWeaKKgLk03ArXtjFcab/2BLwXbnuoJKpA+tuAaYhbm1T3A0Pcs24xZrH8OXwDslZLRkgWsp/4ya
	4cw3VGkmUMue7lKj0WIPBf49VkxsoeIKdhkntN2zWSTS4WBQgPAxikmxj5lG1h0UMy48=;
Message-ID: <5a49381c-c69d-88dc-1bba-783241dbfe23@xen.org>
Date: Wed, 29 Jun 2022 11:34:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
 default owner dom_io
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20220620051114.210118-1-Penny.Zheng@arm.com>
 <20220620051114.210118-3-Penny.Zheng@arm.com>
 <3b7b32cb-df48-e458-e8a9-f17e86f39c9a@xen.org>
 <DU2PR08MB7325A7C7C50807D7FF6AE280F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DU2PR08MB7325A7C7C50807D7FF6AE280F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 29/06/2022 08:13, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, June 25, 2022 2:22 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
>> Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH v5 2/8] xen/arm: allocate static shared memory to the
>> default owner dom_io
>>
>> Hi Penny,
>>
>> On 20/06/2022 06:11, Penny Zheng wrote:
>>> From: Penny Zheng <penny.zheng@arm.com>
>>>
>>> This commit introduces process_shm to cope with static shared memory
>>> in domain construction.
>>>
>>> DOMID_IO will be the default owner of memory pre-shared among multiple
>>> domains at boot time, when no explicit owner is specified.
>>
>> The document in patch #1 suggests the page will be shared with dom_shared.
>> But here you say "DOMID_IO".
>>
>> Which one is correct?
>>
> 
> I'll fix the documentation; DOMID_IO is the final decision.
> 
>>>
>>> This commit only considers allocating static shared memory to dom_io
>>> when the owner domain is not explicitly defined in the device tree; all
>>> the rest, including the "borrower" code path and the "explicit owner"
>>> code path, will be introduced in the following patches.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>> ---
>>> v5 change:
>>> - refine in-code comment
>>> ---
>>> v4 change:
>>> - no changes
>>> ---
>>> v3 change:
>>> - refine in-code comment
>>> ---
>>> v2 change:
>>> - instead of introducing a new system domain, reuse the existing
>>> dom_io
>>> - make dom_io a non-auto-translated domain, then no need to create P2M
>>> for it
>>> - change dom_io definition and make it wider to support static shm
>>> here too
>>> - introduce is_shm_allocated_to_domio to check whether static shm is
>>> allocated yet, instead of using shm_mask bitmap
>>> - add in-code comment
>>> ---
>>>    xen/arch/arm/domain_build.c | 132 +++++++++++++++++++++++++++++++++++-
>>>    xen/common/domain.c         |   3 +
>>>    2 files changed, 134 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 7ddd16c26d..91a5ace851 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -527,6 +527,10 @@ static bool __init append_static_memory_to_bank(struct domain *d,
>>>        return true;
>>>    }
>>>
>>> +/*
>>> + * If cell is NULL, pbase and psize should hold valid values.
>>> + * Otherwise, cell will be populated together with pbase and psize.
>>> + */
>>>    static mfn_t __init acquire_static_memory_bank(struct domain *d,
>>>                                                   const __be32 **cell,
>>>                                                   u32 addr_cells, u32 size_cells,
>>> @@ -535,7 +539,8 @@ static mfn_t __init acquire_static_memory_bank(struct domain *d,
>>>        mfn_t smfn;
>>>        int res;
>>>
>>> -    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
>>> +    if ( cell )
>>> +        device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
>>
>> I think this is a bit of a hack. To me it sounds like this should be moved out to
>> a separate helper. This will also make the interface of
>> acquire_shared_memory_bank() less questionable (see below).
>>
> 
> OK, I'll try not to reuse acquire_static_memory_bank in
> acquire_shared_memory_bank.

I am OK with that so long as it doesn't involve too much duplication.

>>>        ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));
>>
>> In the context of your series, who is checking that both psize and pbase are
>> suitably aligned?
>>
> 
> Actually, the whole parsing process is redundant for the static shared memory.
> I've already parsed it and checked it before in process_shm.

I looked at process_shm() and couldn't find any code that would check 
pbase and psize are suitably aligned (ASSERT() doesn't count).

> 
>>> +    return true;
>>> +}
>>> +
>>> +static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>>> +                                               u32 addr_cells, u32 size_cells,
>>> +                                               paddr_t *pbase, paddr_t *psize)
>>
>> There is something that doesn't add-up in this interface. The use of pointer
>> implies that pbase and psize may be modified by the function. So...
>>
> 
> Just like you pointed out before, it's a bit hacky to reuse acquire_static_memory_bank,
> and the pointer parameters here follow from that: the internal parsing in
> acquire_static_memory_bank needs pointers to deliver its results.
> 
> I'll rewrite acquire_shared_memory_bank, and it will look like:
> "
> static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>                                                 paddr_t pbase, paddr_t psize)
> {
>      mfn_t smfn;
>      unsigned long nr_pfns;
>      int res;
> 
>      /*
>       * Pages of statically shared memory shall be included
>       * in domain_tot_pages().
>       */
>      nr_pfns = PFN_DOWN(psize);
>      if ( d->max_pages + nr_pfns > UINT_MAX )

On Arm32, this check can never trigger: a 32-bit unsigned value is 
always below UINT_MAX, so the addition simply wraps. On arm64, you might 
get away with it because nr_pfns is unsigned long (so I think the type 
promotion works). But this is fragile.

I would suggest to use the following check:

(UINT_MAX - d->max_pages) < nr_pfns
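As an aside for readers of the archive, the difference between the two forms of the check can be demonstrated in plain C. This is a standalone sketch with hypothetical names (`max_pages` and `nr_pfns` are local stand-ins, not the Xen fields):

```c
#include <limits.h>

/* Broken form: with two 32-bit unsigned operands the sum wraps modulo
 * UINT_MAX + 1, so the result can never compare greater than UINT_MAX. */
static int overflows_naive(unsigned int max_pages, unsigned int nr_pfns)
{
    return max_pages + nr_pfns > UINT_MAX;
}

/* Safe form: rearranged so no intermediate result can wrap. */
static int overflows_safe(unsigned int max_pages, unsigned int nr_pfns)
{
    return (UINT_MAX - max_pages) < nr_pfns;
}
```

With max_pages = UINT_MAX - 10 and nr_pfns = 100, the naive form returns 0 (the wrap goes undetected) while the safe form returns 1. When nr_pfns is a 64-bit unsigned long, integer promotion rescues the naive form, which is why it can appear to work on arm64 yet silently fail on Arm32.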

>      {
>          printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
>                 d, psize);
>          return INVALID_MFN;
>      }
>      d->max_pages += nr_pfns;
> 
>      smfn = maddr_to_mfn(pbase);
>      res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
>      if ( res )
>      {
>          printk(XENLOG_ERR
>                 "%pd: failed to acquire static memory: %d.\n", d, res);
>          return INVALID_MFN;
>      }
> 
>      return smfn;
> }
> "
> 
>>> +{
>>> +    /*
>>> +     * Pages of statically shared memory shall be included
>>> +     * in domain_tot_pages().
>>> +     */
>>> +    d->max_pages += PFN_DOWN(*psize);
>>
>> ... it sounds a bit strange to use psize here. If psize can't be modified, then it
>> should probably not be a pointer.
>>
>> Also, where do you check that d->max_pages will not overflow?
>>
> 
> I'll check the overflow as follows:

See above about the check.

> "
>      nr_pfns = PFN_DOWN(psize);
>      if ( d->max_pages + nr_pfns > UINT_MAX )
>      {
>          printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
>                 d, psize);
>          return INVALID_MFN;
>      }
>      d->max_pages += nr_pfns;
> "
> 
>>> +
>>> +    return acquire_static_memory_bank(d, NULL, addr_cells, size_cells,
>>> +                                      pbase, psize);
>>> +
>>> +}
>>> +
>>> +/*
>>> + * Func allocate_shared_memory is supposed to be only called
>>
>> I am a bit concerned with the word "supposed". Are you implying that it may
>> be called by someone that is not the owner? If not, then it should be
>> "should".
>>
>> Also NIT: Spell out completely "func". I.e "The function".
>>
>>> + * from the owner.
>>
>> I read "from" as "current should be the owner". But I guess this is not what you
>> mean here. Instead it looks like you mean "d" is the owner. So I would write
>> "d should be the owner of the shared area".
>>
>> It would be good to have a check/ASSERT confirm this (assuming this is easy
>> to write).
>>
> 
> The check is already in the calling path, I guess...

Can you please confirm it?

[...]

>>> +        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
>>> +        if ( !prop )
>>> +        {
>>> +            printk("Shared memory node does not provide \"xen,shared-mem\" property.\n");
>>> +            return -ENOENT;
>>> +        }
>>> +        cells = (const __be32 *)prop->value;
>>> +        /* xen,shared-mem = <pbase, psize, gbase>; */
>>> +        device_tree_get_reg(&cells, addr_cells, size_cells, &pbase, &psize);
>>> +        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
>>
>> See above about what ASSERT()s are for.
>>
> 
> Do you think address was suitably checked here, is it enough?

As I wrote before, ASSERT() should not be used to check user input: such 
checks need to happen in both debug and production builds.
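To make the distinction concrete, here is a minimal sketch in plain C (not Xen code — the page-size macro and function name are hypothetical) of validating untrusted input with an ordinary runtime check, while reserving assert() for internal invariants:

```c
#include <assert.h>

#define SHM_PAGE_SIZE 4096u

/* Hypothetical parse step for an address/size pair read from an untrusted
 * device tree: misaligned values are rejected with a runtime check, which
 * stays active in production builds. */
static int parse_shm_region(unsigned long long pbase, unsigned long long psize)
{
    if ( (pbase & (SHM_PAGE_SIZE - 1)) || (psize & (SHM_PAGE_SIZE - 1)) )
        return -1;                       /* user input: runtime check */

    /* assert() (like Xen's ASSERT()) compiles to nothing when NDEBUG is
     * defined, so it may only restate invariants already guaranteed by
     * the check above, never enforce them. */
    assert(!(pbase & (SHM_PAGE_SIZE - 1)) && !(psize & (SHM_PAGE_SIZE - 1)));
    return 0;
}
```

In a release build the assert() vanishes, but the function still rejects a misaligned region, which is exactly the property an ASSERT()-only check lacks.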

> If it is enough, I'll modify above ASSERT() to mfn_valid()

It is not clear what ASSERT() you are referring to.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:43:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357795.586603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VAW-0006Ei-Gs; Wed, 29 Jun 2022 10:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357795.586603; Wed, 29 Jun 2022 10:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VAW-0006Eb-Dp; Wed, 29 Jun 2022 10:43:36 +0000
Received: by outflank-mailman (input) for mailman id 357795;
 Wed, 29 Jun 2022 10:43:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iMtf=XE=citrix.com=prvs=172cd6ca3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1o6VAV-0006EV-96
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:43:35 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 516c6f0b-f798-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:43:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 516c6f0b-f798-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656499413;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=RLWqZnvkENPRN1tooOkPezJsvJ8KPMIm6oJBRyshOmI=;
  b=FR90wqeRhjVM6j8jrPfToU6SEf5f3ChFyTd+O2MMAhk7OzfZ0rAoP+SF
   tq/FNFwwhLlxB3rJmc+Hq/4re+U1yDhap5ZXt2zezXWvN2AnVa05EevKc
   Chf4jDH7MIsfp7WA1Vb+IYRaqOfU+iGdY+HlevofB7weQTnoV9K7Q36xN
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74008342
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:fe4wN665fXRuE//MMqHhngxRtHPHchMFZxGqfqrLsTDasY5as4F+v
 mMZUW/Sbq6PZGH3LYwiO4W/9klTupTSy4AyQFZsrC4xHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG96yE6j8lkf5KkYAL+EnkZqTRMFWFw03qPp8Zj2tQy2YbjUlvU0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurSNSCIuEfyQxt1ESgR6LjNDJrZL/uTYdC3XXcy7lyUqclPpyvRqSko3IZcZ6qB8BmQmG
 f4wcW5XKErZ3qTvnez9GrIEascLdaEHOKsWvG1gyjfIS+4rW5nZT43B5MNC3Sd2jcdLdRrbT
 5VFMWI/N0iaC/FJElEJMZc8mtuYv1e8UW1mhE/Eq5MzxWeGmWSd15CyaYGIK7RmX/59jkue4
 27L4Wn9KhUbL8CEjyqI9Gq2ge3Clj+9X5gdfJW6/PN3hFyYxkQIFQYbE1C8pJGEZlWWAowFb
 RZOo2x38PZ0pBfDosTBswOQnX+huTFNB4RpSvQnyjyf97HP7gDCGT1RJtJeU+DKpPPaVBRzi
 ALXxIOxWmA32FGGYSnDr+nJ9FteLQBQdDZfPnFcEGPp9vG5+OkOYgTzosGP+UJfpvn8AnnOz
 j+Dt0DSbJ1D3JdQh81XEb0q6g9AR6QlrSZvv207pkr/smtEiHeNPuREE2Tz4/daN5q+RVKcp
 nUCkMX2xLlQUM/RyHXUHr5dQe3BCxO53Nv02A8H834Jp1yQF4OLJ9gMsFmS2m8zWir7RdMZS
 BCK4l4AjHOiFHCrcbV2c+qMNije9oC5TY6NfqmNNrJmO8EtHCfarXoGTRPBgAjQfL0EzPhX1
 WGzKp78Ux73yM1PkVKLegvq+eRwmHxumT+DFM6TItbO+eP2WUN5gIwtaDOmBt3VJovdyOkJ2
 76z7/e39ig=
IronPort-HdrOrdr: A9a23:V6dkr6AZeM8K4YrlHems55DYdb4zR+YMi2TC1yhKJyC9Vvbo8/
 xG/c5rsCMc5wx9ZJhNo7y90ey7MBThHP1OkOss1NWZPDUO0VHAROoJ0WKh+UyCJ8SXzJ866U
 4KSclD4bPLYmRHsQ==
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="74008342"
Date: Wed, 29 Jun 2022 11:43:01 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Jane Malalane <jane.malalane@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v10 0/2] xen: Report and use hardware APIC virtualization
 capabilities
Message-ID: <Yrwste7T5DSeazjh@perard.uk.xensource.com>
References: <20220413112111.30675-1-jane.malalane@citrix.com>
 <e16b3b4b-45f3-a520-0360-c1d59602469b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <e16b3b4b-45f3-a520-0360-c1d59602469b@suse.com>

On Thu, Jun 23, 2022 at 09:23:27AM +0200, Jan Beulich wrote:
> On 13.04.2022 13:21, Jane Malalane wrote:
> > Jane Malalane (2):
> >   xen+tools: Report Interrupt Controller Virtualization capabilities on
> >     x86
> >   x86/xen: Allow per-domain usage of hardware virtualized APIC
> > 
> >  docs/man/xl.cfg.5.pod.in              | 15 ++++++++++++++
> >  docs/man/xl.conf.5.pod.in             | 12 +++++++++++
> >  tools/golang/xenlight/helpers.gen.go  | 16 ++++++++++++++
> >  tools/golang/xenlight/types.gen.go    |  4 ++++
> >  tools/include/libxl.h                 | 14 +++++++++++++
> >  tools/libs/light/libxl.c              |  3 +++
> >  tools/libs/light/libxl_arch.h         |  9 ++++++--
> >  tools/libs/light/libxl_arm.c          | 14 ++++++++++---
> >  tools/libs/light/libxl_create.c       | 22 ++++++++++++--------
> >  tools/libs/light/libxl_types.idl      |  4 ++++
> >  tools/libs/light/libxl_x86.c          | 39 +++++++++++++++++++++++++++++++++--
> >  tools/ocaml/libs/xc/xenctrl.ml        |  7 +++++++
> >  tools/ocaml/libs/xc/xenctrl.mli       |  7 +++++++
> >  tools/ocaml/libs/xc/xenctrl_stubs.c   | 17 ++++++++++++---
> >  tools/xl/xl.c                         |  8 +++++++
> >  tools/xl/xl.h                         |  2 ++
> >  tools/xl/xl_info.c                    |  6 ++++--
> >  tools/xl/xl_parse.c                   | 19 +++++++++++++++++
> >  xen/arch/x86/domain.c                 | 29 +++++++++++++++++++++++++-
> >  xen/arch/x86/hvm/hvm.c                |  3 +++
> >  xen/arch/x86/hvm/vmx/vmcs.c           | 11 ++++++++++
> >  xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++--------
> >  xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
> >  xen/arch/x86/include/asm/hvm/hvm.h    | 10 +++++++++
> >  xen/arch/x86/sysctl.c                 |  4 ++++
> >  xen/arch/x86/traps.c                  |  5 +++--
> >  xen/include/public/arch-x86/xen.h     |  5 +++++
> >  xen/include/public/sysctl.h           | 11 +++++++++-
> >  28 files changed, 281 insertions(+), 34 deletions(-)
> > 
> 
> Just FYI: It's been over two months that v10 has been pending. There
> are still missing acks. You may want to ping the respective maintainers
> for this to make progress.

Hi Jan,

Are you looking for an ack for the "docs/man" changes? If so, I guess
I'll have to make it more explicit next time that a review for "tools"
also means a review of the changes in their respective man pages.

Or are you looking for an ack for the "golang" changes? Those changes are
automatically generated by a tool already in our repository.

Or is it an "ocaml" ack for the first patch? Unfortunately, the
maintainers haven't been CCed; I guess that could be an issue.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:46:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:46:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357800.586614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VDd-0006qi-Vd; Wed, 29 Jun 2022 10:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357800.586614; Wed, 29 Jun 2022 10:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VDd-0006qb-So; Wed, 29 Jun 2022 10:46:49 +0000
Received: by outflank-mailman (input) for mailman id 357800;
 Wed, 29 Jun 2022 10:46:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fVKq=XE=arm.com=michal.orzel@srs-se1.protection.inumbo.net>)
 id 1o6VDc-0006qR-Dp
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:46:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id c556acd6-f798-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 12:46:46 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5C83E152B;
 Wed, 29 Jun 2022 03:46:46 -0700 (PDT)
Received: from [10.57.39.201] (unknown [10.57.39.201])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1480A3F792;
 Wed, 29 Jun 2022 03:46:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c556acd6-f798-11ec-b725-ed86ccbb4733
Message-ID: <4ee1fbaf-9d31-d28e-cb8d-f330c6a1923f@arm.com>
Date: Wed, 29 Jun 2022 12:46:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 3/7] xen/common: Use unsigned int instead of plain
 unsigned
Content-Language: en-US
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Dario Faggioli <dfaggioli@suse.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-4-michal.orzel@arm.com>
In-Reply-To: <20220627131543.410971-4-michal.orzel@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2022 15:15, Michal Orzel wrote:
> This is just for style and consistency reasons, as the former is
> being used more often than the latter.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

It looks like this change was forgotten when merging other patches from the series.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:49:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357808.586625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VG8-0007Zn-FB; Wed, 29 Jun 2022 10:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357808.586625; Wed, 29 Jun 2022 10:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VG8-0007Zg-B0; Wed, 29 Jun 2022 10:49:24 +0000
Received: by outflank-mailman (input) for mailman id 357808;
 Wed, 29 Jun 2022 10:49:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6VG6-0007ZI-F5
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:49:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6VG2-0000Ta-Kv; Wed, 29 Jun 2022 10:49:18 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6VG2-0004n1-EH; Wed, 29 Jun 2022 10:49:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=XwTHFAoYDeCCa3r+hwh3qwwzGindoFtN6Pb11qN4SJg=; b=FZbp0QBQleoSQqN+dIg4nCmmcu
	lZ8+QfcSzs0I159Le36qSuOqm5nTP6zRExMLAThkRU8KzgNX4KjZSdmEXHiRrk9sPsfmgFuJnYtPe
	mILYHylEAE3ne5Xcmfe2nbEobMMhIG25c76GjjMSAFPfFB0xU1HKONOY6O5UbIc0NvQc=;
Message-ID: <6c7e87df-7633-6cba-8ff2-2adc3059f698@xen.org>
Date: Wed, 29 Jun 2022 11:49:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v7 7/9] xen/arm: unpopulate memory when domain is static
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <Penny.Zheng@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220620024408.203797-1-Penny.Zheng@arm.com>
 <20220620024408.203797-8-Penny.Zheng@arm.com>
 <5ac0e46d-2100-331e-b4d2-8fc715973b71@suse.com>
 <DU2PR08MB73255B2995B4692B5D46252FF7B99@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <380b2610-fe2f-6246-e6a4-f0dd8295d488@xen.org>
 <DU2PR08MB732507EFB0CC4FEAA4872B3AF7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <6cfca3ce-219e-f9e4-e30c-40d7a74ea523@suse.com>
 <DU2PR08MB7325B9C7AC3441780E7AEB78F7BB9@DU2PR08MB7325.eurprd08.prod.outlook.com>
 <7d7aa075-faa0-732f-44ad-3984dcb86e08@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7d7aa075-faa0-732f-44ad-3984dcb86e08@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 29/06/2022 07:19, Jan Beulich wrote:
> On 29.06.2022 08:08, Penny Zheng wrote:
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: Wednesday, June 29, 2022 1:56 PM
>>>
>>> On 29.06.2022 05:12, Penny Zheng wrote:
>>>>> From: Julien Grall <julien@xen.org>
>>>>> Sent: Monday, June 27, 2022 6:19 PM
>>>>>
>>>>> On 27/06/2022 11:03, Penny Zheng wrote:
>>>>>>> -----Original Message-----
>>>>>> put_static_pages, that is, adding pages to the reserved list, is
>>>>>> only for freeing static pages at runtime. In the static page
>>>>>> initialization stage I also use free_staticmem_pages, and at that
>>>>>> stage the domain has not been constructed at all. So I prefer
>>>>>> that the freeing of staticmem pages be split into two parts:
>>>>>> free_staticmem_pages and put_static_pages
>>>>>
>>>>> AFAIU, all the pages would have to be allocated via
>>>>> acquire_domstatic_pages(). This call requires the domain to be
>>>>> allocated and setup for handling memory.
>>>>>
>>>>> Therefore, I think the split is unnecessary. This would also have the
>>>>> advantage of removing one loop. Admittedly, this is not important when
>>>>> the order is 0, but it would become a problem for larger orders (you may
>>>>> have to pull the struct page_info into the cache multiple times).
>>>>>
>>>>
>>>> How about this:
>>>> I create a new func free_domstatic_page, and it will be like:
>>>> "
>>>> static void free_domstatic_page(struct domain *d, struct page_info
>>>> *page) {
>>>>      bool need_scrub, drop_dom_ref;
>>>>
>>>>      /* NB. May recursively lock from relinquish_memory(). */
>>>>      spin_lock_recursive(&d->page_alloc_lock);
>>>>
>>>>      arch_free_heap_page(d, page);
>>>>
>>>>      /*
>>>>       * Normally we expect a domain to clear pages before freeing them,
>>>>       * if it cares about the secrecy of their contents. However, after
>>>>       * a domain has died we assume responsibility for erasure. We do
>>>>       * scrub regardless if option scrub_domheap is set.
>>>>       */
>>>>      need_scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
>>>>
>>>>      free_staticmem_pages(page, 1, need_scrub);
>>>>
>>>>      /* Add page on the resv_page_list *after* it has been freed. */
>>>>      put_static_page(d, page);
>>>>
>>>>      drop_dom_ref = !domain_adjust_tot_pages(d, -1);
>>>>
>>>>      spin_unlock_recursive(&d->page_alloc_lock);
>>>>
>>>>      if ( drop_dom_ref )
>>>>          put_domain(d);
>>>> }
>>>> "
>>>>
>>>> In free_domheap_pages, we just call free_domstatic_page:
>>>>
>>>> "
>>>> @@ -2430,6 +2430,9 @@ void free_domheap_pages(struct page_info *pg,
>>>> unsigned int order)
>>>>
>>>>       ASSERT_ALLOC_CONTEXT();
>>>>
>>>> +    if ( unlikely(pg->count_info & PGC_static) )
>>>> +        return free_domstatic_page(d, pg);
>>>> +
>>>>       if ( unlikely(is_xen_heap_page(pg)) )
>>>>       {
>>>>           /* NB. May recursively lock from relinquish_memory(). */
>>>> @@ -2673,6 +2676,38 @@ void free_staticmem_pages(struct page_info *pg,
>>>>                                unsigned long nr_mfns,
>>>> "
>>>>
>>>> Then the split could be avoided and we could save the loop as much as
>>> possible.
>>>> Any suggestion?
>>>
>>> Looks reasonable at the first glance (will need to see it in proper context for a
>>> final opinion), provided e.g. Xen heap pages can never be static.
>>
>> If you'd rather not have free_domheap_pages call free_domstatic_page, then maybe
>> the check should happen in put_page
>> "
>> @@ -1622,6 +1622,8 @@ void put_page(struct page_info *page)
>>
>>       if ( unlikely((nx & PGC_count_mask) == 0) )
>>       {
>> +        if ( unlikely(page->count_info & PGC_static) )

At a first glance, this would likely need to be tested against 'nx'.

>> +            free_domstatic_page(page);
>>           free_domheap_page(page);
>>       }
>>   }
>> "
>> Wdyt now?
> 
> Personally I'd prefer this variant, but we'll have to see what Julien or
> the other Arm maintainers think.

I think this is fine so long as we are not expecting more places where 
free_domheap_page() may need to be replaced with free_domstatic_page().

I can't think of any at the moment, but I would like Penny to confirm 
what Arm plans to do with static memory.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 10:52:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 10:52:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357814.586636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VJF-0000Xa-Sv; Wed, 29 Jun 2022 10:52:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357814.586636; Wed, 29 Jun 2022 10:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VJF-0000XT-Pk; Wed, 29 Jun 2022 10:52:37 +0000
Received: by outflank-mailman (input) for mailman id 357814;
 Wed, 29 Jun 2022 10:52:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6VJF-0000XG-0p
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 10:52:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6VJE-0000Xp-0c; Wed, 29 Jun 2022 10:52:36 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6VJD-0004t5-Qb; Wed, 29 Jun 2022 10:52:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=kCFwRr7yce9vJdVsRE5LV2dkcIbYhc6BbKQwIfSdxuI=; b=ETZWNmUL4jZxxRRrEYn3bd03c2
	WuzryDlhU86NjmAoGmxX8DwlJ0SBSlGPkqJD9l336+CGncR8G4ceYEDPUjbcxISEhOKK0/CwafzDo
	78o/x2qU2HpbCA6UhgYylMZwnYfL3WHti3f9BrOTepk+Ur4t/r3E5eUn3pzF7uw+bYSM=;
Message-ID: <fdaa154e-95f1-6f80-6f27-f94aaaf1f77b@xen.org>
Date: Wed, 29 Jun 2022 11:52:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 3/7] xen/common: Use unsigned int instead of plain
 unsigned
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>
References: <20220627131543.410971-1-michal.orzel@arm.com>
 <20220627131543.410971-4-michal.orzel@arm.com>
 <4ee1fbaf-9d31-d28e-cb8d-f330c6a1923f@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <4ee1fbaf-9d31-d28e-cb8d-f330c6a1923f@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 29/06/2022 11:46, Michal Orzel wrote:
> On 27.06.2022 15:15, Michal Orzel wrote:
>> This is just for style and consistency reasons, as the former is
>> being used more often than the latter.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> 
> It looks like this change was forgotten when merging other patches from the series.

I noticed the same and was going to commit it last night. However, it is
technically missing an ack/review for trace.c (this is maintained by
George).

The change is small and likely not controversial, so I guess we could do
without George's review. That said, I would like to give him a chance to
answer (I will commit it on Friday if there is no answer).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 11:09:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 11:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357825.586653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VZB-0002S9-Ey; Wed, 29 Jun 2022 11:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357825.586653; Wed, 29 Jun 2022 11:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6VZB-0002S2-C2; Wed, 29 Jun 2022 11:09:05 +0000
Received: by outflank-mailman (input) for mailman id 357825;
 Wed, 29 Jun 2022 11:09:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6VZA-0002Rw-5A
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 11:09:04 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e233b831-f79b-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 13:09:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6700.eurprd04.prod.outlook.com (2603:10a6:10:109::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 11:09:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 11:09:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e233b831-f79b-11ec-bd2d-47488cf2e6aa
Message-ID: <3bb4c5d4-325a-d14f-038b-7206b3b6b29f@suse.com>
Date: Wed, 29 Jun 2022 13:09:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v10 0/2] xen: Report and use hardware APIC virtualization
 capabilities
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Jane Malalane <jane.malalane@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20220413112111.30675-1-jane.malalane@citrix.com>
 <e16b3b4b-45f3-a520-0360-c1d59602469b@suse.com>
 <Yrwste7T5DSeazjh@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yrwste7T5DSeazjh@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR06CA0497.eurprd06.prod.outlook.com
 (2603:10a6:20b:49b::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 29.06.2022 12:43, Anthony PERARD wrote:
> On Thu, Jun 23, 2022 at 09:23:27AM +0200, Jan Beulich wrote:
>> On 13.04.2022 13:21, Jane Malalane wrote:
>>> Jane Malalane (2):
>>>   xen+tools: Report Interrupt Controller Virtualization capabilities on
>>>     x86
>>>   x86/xen: Allow per-domain usage of hardware virtualized APIC
>>>
>>>  docs/man/xl.cfg.5.pod.in              | 15 ++++++++++++++
>>>  docs/man/xl.conf.5.pod.in             | 12 +++++++++++
>>>  tools/golang/xenlight/helpers.gen.go  | 16 ++++++++++++++
>>>  tools/golang/xenlight/types.gen.go    |  4 ++++
>>>  tools/include/libxl.h                 | 14 +++++++++++++
>>>  tools/libs/light/libxl.c              |  3 +++
>>>  tools/libs/light/libxl_arch.h         |  9 ++++++--
>>>  tools/libs/light/libxl_arm.c          | 14 ++++++++++---
>>>  tools/libs/light/libxl_create.c       | 22 ++++++++++++--------
>>>  tools/libs/light/libxl_types.idl      |  4 ++++
>>>  tools/libs/light/libxl_x86.c          | 39 +++++++++++++++++++++++++++++++++--
>>>  tools/ocaml/libs/xc/xenctrl.ml        |  7 +++++++
>>>  tools/ocaml/libs/xc/xenctrl.mli       |  7 +++++++
>>>  tools/ocaml/libs/xc/xenctrl_stubs.c   | 17 ++++++++++++---
>>>  tools/xl/xl.c                         |  8 +++++++
>>>  tools/xl/xl.h                         |  2 ++
>>>  tools/xl/xl_info.c                    |  6 ++++--
>>>  tools/xl/xl_parse.c                   | 19 +++++++++++++++++
>>>  xen/arch/x86/domain.c                 | 29 +++++++++++++++++++++++++-
>>>  xen/arch/x86/hvm/hvm.c                |  3 +++
>>>  xen/arch/x86/hvm/vmx/vmcs.c           | 11 ++++++++++
>>>  xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++--------
>>>  xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
>>>  xen/arch/x86/include/asm/hvm/hvm.h    | 10 +++++++++
>>>  xen/arch/x86/sysctl.c                 |  4 ++++
>>>  xen/arch/x86/traps.c                  |  5 +++--
>>>  xen/include/public/arch-x86/xen.h     |  5 +++++
>>>  xen/include/public/sysctl.h           | 11 +++++++++-
>>>  28 files changed, 281 insertions(+), 34 deletions(-)
>>>
>>
>> Just FYI: It's been over two months that v10 has been pending. There
>> are still missing acks. You may want to ping the respective maintainers
>> for this to make progress.
> 
> Are you looking for an ack for the "docs/man" changes? If so, I guess
> I'll have to make it more explicit next time that a review for "tools"
> also means a review of the changes in their respective man pages.

No, the docs changes (being clearly tools docs) are fine.

> Or are you looking for an ack for the "golang" changes? Those changes are
> automatically generated by a tool already in our repository.

Indeed it's Go (where I think an ack is still required, no matter
if the changes are generated ones [which I wasn't even aware of, I
have to confess]) and ...

> Or is it an "ocaml" ack for the first patch? Unfortunately, the
> maintainers haven't been CCed; I guess that could be an issue.

... OCaml which I was after.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 11:44:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 11:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357834.586664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6W7P-0006w7-5d; Wed, 29 Jun 2022 11:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357834.586664; Wed, 29 Jun 2022 11:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6W7P-0006w0-2g; Wed, 29 Jun 2022 11:44:27 +0000
Received: by outflank-mailman (input) for mailman id 357834;
 Wed, 29 Jun 2022 11:44:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6W7O-0006vq-6E; Wed, 29 Jun 2022 11:44:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6W7O-0001XB-2H; Wed, 29 Jun 2022 11:44:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6W7N-0005SI-N4; Wed, 29 Jun 2022 11:44:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6W7N-0000hd-Ma; Wed, 29 Jun 2022 11:44:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3jdRpCKtqqvJCn/Qyd4tKoAg+4+3SQRozVoRKPfkdHk=; b=rJpgSNh7yWaxS8oroaXphaob3P
	IpkwiT32O0s7LLRmEAo5wUfjBwfv+ZJZfI+sncfgzy0EvPYE6WcJXzlrPqGKfFXOOvSESxL10ni+p
	kk+UkE2qEK4182rD2ICuZC0Lp9c7/USNPzUt48/8isgVq4Cfa7CrIWjQT3zBdoCFpz2g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171396: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=0dd1fdae2e9f8a3e2ca8cd4c2aa5f475146d9623
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 11:44:25 +0000

flight 171396 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171396/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              0dd1fdae2e9f8a3e2ca8cd4c2aa5f475146d9623
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  719 days
Failing since        151818  2020-07-11 04:18:52 Z  718 days  700 attempts
Testing same since   171396  2022-06-29 04:20:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114276 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 12:20:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 12:20:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357843.586675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6WgX-0003E5-9u; Wed, 29 Jun 2022 12:20:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357843.586675; Wed, 29 Jun 2022 12:20:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6WgX-0003Dy-5K; Wed, 29 Jun 2022 12:20:45 +0000
Received: by outflank-mailman (input) for mailman id 357843;
 Wed, 29 Jun 2022 12:20:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6WgV-0003Do-PA; Wed, 29 Jun 2022 12:20:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6WgV-0002DM-IJ; Wed, 29 Jun 2022 12:20:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6WgV-0007kt-4T; Wed, 29 Jun 2022 12:20:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6WgV-000421-41; Wed, 29 Jun 2022 12:20:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iY4Hqqv5He+jmUUixsaqS9sqE6lgDR2tqVR/beOoSlg=; b=gVtrOM5tHhgbFk7icFbPjnSsby
	26n9qtfE13AR2vtl2GTTGqRQzwSMSnuT/TweXMYG6fsPb+EKkbqG4f5SvSP+bPwmH+KHgEYfW/uhm
	GC488PJArGHsi4tRxqo1grQIDRU6H3cWTn/avn2sdzFVaV6aWVvkNoL3z1gIOuUfJoVo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171393-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171393: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2a8835cb45371a1f05c9c5899741d66685290f28
X-Osstest-Versions-That:
    qemuu=29f6db75667f44f3f01ba5037dacaf9ebd9328da
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 12:20:43 +0000

flight 171393 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171393/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171376
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171376
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171376
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171376
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171376
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171376
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171376
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                2a8835cb45371a1f05c9c5899741d66685290f28
baseline version:
 qemuu                29f6db75667f44f3f01ba5037dacaf9ebd9328da

Last test of basis   171376  2022-06-27 23:09:50 Z    1 days
Testing same since   171380  2022-06-28 08:41:09 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  David Hildenbrand <david@redhat.com>
  Jagannathan Raman <jag.raman@oracle.com>
  Kevin Wolf <kwolf@redhat.com>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Michael S. Tsirkin <mst@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   29f6db7566..2a8835cb45  2a8835cb45371a1f05c9c5899741d66685290f28 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 12:50:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 12:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357853.586689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6X8h-000674-JJ; Wed, 29 Jun 2022 12:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357853.586689; Wed, 29 Jun 2022 12:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6X8h-00066x-FR; Wed, 29 Jun 2022 12:49:51 +0000
Received: by outflank-mailman (input) for mailman id 357853;
 Wed, 29 Jun 2022 12:49:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cYaO=XE=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1o6X8g-00066r-Iy
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 12:49:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id f4847bba-f7a9-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 14:49:49 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 76CC5152B;
 Wed, 29 Jun 2022 05:49:46 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7CECE3F66F;
 Wed, 29 Jun 2022 05:49:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4847bba-f7a9-11ec-b725-ed86ccbb4733
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/helpers: fix snprintf argument in init-dom0less.c
Date: Wed, 29 Jun 2022 13:49:38 +0100
Message-Id: <20220629124938.26498-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

Fix the snprintf format specifiers in init-dom0less.c: two calls print
libxl_dominfo struct members that are uint64_t, so change "%lu" to
"%"PRIu64 so the values are handled correctly when building for both
arm32 and arm64.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 tools/helpers/init-dom0less.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/helpers/init-dom0less.c b/tools/helpers/init-dom0less.c
index 4c90dd6a0c8f..fee93459c4b9 100644
--- a/tools/helpers/init-dom0less.c
+++ b/tools/helpers/init-dom0less.c
@@ -5,6 +5,7 @@
 #include <stdlib.h>
 #include <sys/mman.h>
 #include <sys/time.h>
+#include <inttypes.h>
 #include <xenstore.h>
 #include <xenctrl.h>
 #include <xenguest.h>
@@ -138,10 +139,10 @@ static int create_xenstore(struct xs_handle *xsh,
                   "vm/" LIBXL_UUID_FMT, LIBXL_UUID_BYTES(uuid));
     if (rc < 0 || rc >= STR_MAX_LENGTH)
         return rc;
-    rc = snprintf(max_memkb_str, STR_MAX_LENGTH, "%lu", info->max_memkb);
+    rc = snprintf(max_memkb_str, STR_MAX_LENGTH, "%"PRIu64, info->max_memkb);
     if (rc < 0 || rc >= STR_MAX_LENGTH)
         return rc;
-    rc = snprintf(target_memkb_str, STR_MAX_LENGTH, "%lu", info->current_memkb);
+    rc = snprintf(target_memkb_str, STR_MAX_LENGTH, "%"PRIu64, info->current_memkb);
     if (rc < 0 || rc >= STR_MAX_LENGTH)
         return rc;
     rc = snprintf(ring_ref_str, STR_MAX_LENGTH, "%lld",
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 12:55:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 12:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357860.586699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6XEJ-0007W1-76; Wed, 29 Jun 2022 12:55:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357860.586699; Wed, 29 Jun 2022 12:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6XEJ-0007Vu-3q; Wed, 29 Jun 2022 12:55:39 +0000
Received: by outflank-mailman (input) for mailman id 357860;
 Wed, 29 Jun 2022 12:55:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cYaO=XE=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1o6XEH-0007Vo-RC
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 12:55:37 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id c45e385f-f7aa-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 14:55:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 043F1152B;
 Wed, 29 Jun 2022 05:55:35 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9811A3F792;
 Wed, 29 Jun 2022 05:55:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c45e385f-f7aa-11ec-bd2d-47488cf2e6aa
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] docs/misra: Add instructions for cppcheck
Date: Wed, 29 Jun 2022 13:55:26 +0100
Message-Id: <20220629125526.28190-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

Add instructions on how to build cppcheck, document the version currently
used, and give an example of using the cppcheck integration to run the
analysis on the Xen codebase.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
- typo fixes, removed build command line, rephrasing (Julien)
---
 docs/misra/cppcheck.txt | 64 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 docs/misra/cppcheck.txt

diff --git a/docs/misra/cppcheck.txt b/docs/misra/cppcheck.txt
new file mode 100644
index 000000000000..25d8c3050b72
--- /dev/null
+++ b/docs/misra/cppcheck.txt
@@ -0,0 +1,64 @@
+Cppcheck for Xen static and MISRA analysis
+==========================================
+
+Xen can be analysed for both static analysis problems and MISRA violations
+using cppcheck, an open source tool that produces a report with all the
+findings. The Xen Makefile integrates this support, making it easy to use;
+this document describes how.
+
+The minimum cppcheck version required is 2.7. Note that at the time of
+writing (June 2022), version 2.8 is known to be broken [1].
+
+Install cppcheck on the system
+==============================
+
+Cppcheck can be retrieved from the GitHub repository or by downloading the
+tarball; the version tested so far is 2.7:
+
+ - https://github.com/danmar/cppcheck/tree/2.7
+ - https://github.com/danmar/cppcheck/archive/2.7.tar.gz
+
+To compile and install it, use the complete command line found in readme.md,
+section "GNU make", adding the "install" target to that line and keeping
+every argument as given in the tool's documentation, so that every Xen
+developer following this page can reproduce the same findings.
+
+This will compile and install cppcheck in /usr/bin; all the cppcheck config
+files and addons will be installed in the /usr/share/cppcheck folder. Please
+modify that path in FILESDIR if it's not convenient for your system.
+
+If you don't want to overwrite a cppcheck binary already installed on your
+system, you can omit the "install" target and FILESDIR; cppcheck will just
+be compiled and the binaries will be available in the same folder.
+If you choose to do that, this page explains below how to use a local
+installation of cppcheck for the Xen analysis.
+
+Dependencies are listed in the readme.md of the project repository.
+
+Use cppcheck to analyse Xen
+===========================
+
+Using the cppcheck integration is simple, requiring only a few steps:
+
+ 1) Compile Xen.
+ 2) Call the cppcheck make target to generate a report in XML format:
+    make CPPCHECK_MISRA=y cppcheck
+ 3) Call the cppcheck-html make target to generate a report in XML and HTML
+    formats:
+    make CPPCHECK_MISRA=y cppcheck-html
+
+    In case the cppcheck binaries are not in the PATH, CPPCHECK and
+    CPPCHECK_HTMLREPORT variables can be overridden with the full path to the
+    binaries:
+
+    make -C xen \
+        CPPCHECK=/path/to/cppcheck \
+        CPPCHECK_HTMLREPORT=/path/to/cppcheck-htmlreport \
+        CPPCHECK_MISRA=y \
+        cppcheck-html
+
+The output is by default in a folder named cppcheck-htmlreport, but the name
+can be changed by passing it in the CPPCHECK_HTMLREPORT_OUTDIR variable.
+
+
+[1] https://sourceforge.net/p/cppcheck/discussion/general/thread/bfc3ab6c41/?limit=25
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 13:57:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 13:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357867.586711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6YBW-0005r4-OX; Wed, 29 Jun 2022 13:56:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357867.586711; Wed, 29 Jun 2022 13:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6YBW-0005qx-Lx; Wed, 29 Jun 2022 13:56:50 +0000
Received: by outflank-mailman (input) for mailman id 357867;
 Wed, 29 Jun 2022 13:56:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gFtW=XE=citrix.com=prvs=17228c8f8=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1o6YBV-0005qp-CG
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 13:56:49 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4fda81d5-f7b3-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 15:56:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fda81d5-f7b3-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656511006;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=aenH0fD8UqySU1/PrIlq9KObEtNkts4LZGVHnoFy7D0=;
  b=Nv+/ciWBUlXMI1VJ+2vAX9T12wPwB3S32VUt4drNHpaIytBy9miV6+3a
   QEFZ0iWfPSubMyve7uXyg6UbsqG+tJT4xfLgwVBgu2F915dDoy+bLaufw
   C4UdwcmIuZscVIraB2r9zh37qPexBVGZDO0fJhNMZ2IosFzzKMo60Sdpf
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74715955
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:Pfb6u6864Zi+E2EIzk6wDrUDPn+TJUtcMsCJ2f8bNWPcYEJGY0x3m
 mEWCGzQbPuKN2D2c9h0aoi2pkIG75CEztE3HlZrrSE8E34SpcT7XtnIdU2Y0wF+jyHgoOCLy
 +1EN7Es+ehtFie0Si+Fa+Sn9T8mvU2xbuKU5NTsY0idfic5DnZ74f5fs7Rh2NQw34LoW1rlV
 e7a+KUzBnf0g1aYDUpMg06zgEsHUCPa4W5wUvQWPJinjXeG/5UnJMt3yZKZdhMUdrJ8DO+iL
 9sv+Znilo/vE7XBPfv++lrzWhVirrc/pmFigFIOM0SpqkAqSiDfTs/XnRfTAKtao2zhojx/9
 DlCnZa0eQlqJbLNo8tHazsDEg1XIasa8paSdBBTseTLp6HHW37lwvEoB0AqJ4wIvO1wBAmi9
 9RBdmpLNErawbvrnvTrEYGAhex6RCXvFKEWvHwm6DjdBPIvR53rSKTW/95Imjw3g6iiGN6BO
 5VANGsyMXwsZTVzKHIuBdVnt96xrUGucRtciVeyn7c4tj27IAtZj+G2bYu9lsaxbdVYmAOUq
 3zL+0z9AwoGL5qPxDyd6HWui+TT2yThV+ov+KaQr6AwxgfJnypKVUNQBQDTTeSFZlCWUdZvJ
 Q8P5SsVgvIK1heqYvDhWUGyiSvR1vIDYOa8A9HW+SnUlPeKuFbBWTRcJtJSQId47ZFrHFTGw
 nfMxoq0XmI37dV5XFrHrt+pQSWO1T/5xIPoTQsNVkM77tbqu+nfZTqfH484QMZZYjAYcAwcI
 gxmTwBk3t3/deZRi82GEanv2lpAXKThQA8v/RnwVWm49A5/b4PNT9X2tAaHsa8Zct3JEwXpU
 J04dy62tbFm4XalxESwrBglRun1t55pzhWG6bKQI3XR32v0oCPyFWyhyDp/OF1oIq45RNMdW
 2eK4Vk5zMYKZBOCNPYrC6rsWp9C5fWxSrzYugX8M4Mmjm5ZL1fXokmDpCe4ggjQraTbufthY
 8fALJz8UCZy5GYO5GPeetrxGIQDnkgWrV4/j7ihlHxLDZL2iKapdIo4
IronPort-HdrOrdr: A9a23:xngPe6wJQZAVgN3regZqKrPwIr1zdoMgy1knxilNoRw8SK2lfq
 eV7ZImPH7P+VEssR4b6LO90cW7Lk80lqQFhbX5X43SPjUO0VHAROoJgOffKlXbalTDH4VmtZ
 uIHZIRNDSJNykesfrH
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="74715955"
From: Jane Malalane <jane.malalane@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jane Malalane <jane.malalane@citrix.com>
Subject: [PATCH RESEND v10 0/2] xen: Report and use hardware APIC virtualization capabilities
Date: Wed, 29 Jun 2022 14:55:32 +0100
Message-ID: <20220629135534.19923-1-jane.malalane@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Jane Malalane (2):
  xen+tools: Report Interrupt Controller Virtualization capabilities on
    x86
  x86/xen: Allow per-domain usage of hardware virtualized APIC

 docs/man/xl.cfg.5.pod.in              | 15 ++++++++++++++
 docs/man/xl.conf.5.pod.in             | 12 +++++++++++
 tools/golang/xenlight/helpers.gen.go  | 16 ++++++++++++++
 tools/golang/xenlight/types.gen.go    |  4 ++++
 tools/include/libxl.h                 | 14 +++++++++++++
 tools/libs/light/libxl.c              |  3 +++
 tools/libs/light/libxl_arch.h         |  9 ++++++--
 tools/libs/light/libxl_arm.c          | 14 ++++++++++---
 tools/libs/light/libxl_create.c       | 22 ++++++++++++--------
 tools/libs/light/libxl_types.idl      |  4 ++++
 tools/libs/light/libxl_x86.c          | 39 +++++++++++++++++++++++++++++++++--
 tools/ocaml/libs/xc/xenctrl.ml        |  7 +++++++
 tools/ocaml/libs/xc/xenctrl.mli       |  7 +++++++
 tools/ocaml/libs/xc/xenctrl_stubs.c   | 17 ++++++++++++---
 tools/xl/xl.c                         |  8 +++++++
 tools/xl/xl.h                         |  2 ++
 tools/xl/xl_info.c                    |  6 ++++--
 tools/xl/xl_parse.c                   | 19 +++++++++++++++++
 xen/arch/x86/domain.c                 | 29 +++++++++++++++++++++++++-
 xen/arch/x86/hvm/hvm.c                |  3 +++
 xen/arch/x86/hvm/vmx/vmcs.c           | 11 ++++++++++
 xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++--------
 xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
 xen/arch/x86/include/asm/hvm/hvm.h    | 10 +++++++++
 xen/arch/x86/sysctl.c                 |  4 ++++
 xen/arch/x86/traps.c                  |  5 +++--
 xen/include/public/arch-x86/xen.h     |  5 +++++
 xen/include/public/sysctl.h           | 11 +++++++++-
 28 files changed, 281 insertions(+), 34 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 13:57:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 13:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357868.586721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6YBo-0006Bm-4Y; Wed, 29 Jun 2022 13:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357868.586721; Wed, 29 Jun 2022 13:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6YBo-0006Bd-14; Wed, 29 Jun 2022 13:57:08 +0000
Received: by outflank-mailman (input) for mailman id 357868;
 Wed, 29 Jun 2022 13:57:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gFtW=XE=citrix.com=prvs=17228c8f8=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1o6YBn-0006Am-66
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 13:57:07 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a49a132-f7b3-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 15:57:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a49a132-f7b3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656511024;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ymS8YJryKhEVbGpFZD0/rfTANegGjTxcrvPr4PO6Bfs=;
  b=TShD1bo1ZmeSUyRU3tcx9xcNGciyO8wIRS2/fZgOTDbvbWVL6VpR9Rmd
   SXve/iWOy+84MulkxiknY8NiilqyMq1SHo8WfFa7HanQUjdWw8m5aMz8t
   sS3n87BZu/gpHlFJHzuzkMmEgq1ZJkK4KhdH9dGj1toX9j4hA0inJI44a
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 77263211
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:HrAyOK0UaR9FyJazb/bD5TBxkn2cJEfYwER7XKvMYLTBsI5bpzcBz
 zAYCmuAaKqDYDT9et0nO4ixo0IGscKEyoc2SlZppC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy0IDga++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /0QiZGoVxooHJbxhekdcysISBNlIah/reqvzXiX6aR/zmXDenrohf5vEFs3LcsT/eMf7WNmr
 KJCbmpXN1ba2rzwkOnTpupE36zPKOHJNYUS/FRpyTjdBPAraZvCX7/L9ZlT2zJYasVmQqmEO
 ZFDMGMHgBLoWiAQEVxJJIIHzPr1rELWXQZRpFW5jP9ii4TU5FMoi+W8WDbPQfSISt9ShV2wv
 X/d8iLyBRRyHN6VxCeB83msrvTShi69U4UXfJW0/+BnqEeezWsSDFsRT1TTifq0lE+4Hc5eI
 ko8+ywyoKx0/0uuJvH/Qhv+pneHtxwdXtN4Eusm5QXLwa3Riy6DAXMOVDlGa9oOu8o/RDhs3
 ViM9/v5CDoqvLCLRHa18raPsSj0KSUTNXUFZyIPUU0C+daLiJ43pgLCSJBkCqHdpsbuBTj6z
 jSOrS4/r7Yel8gG0+O851+vqzCxopnESCYl6wORWXiqhj6Vf6b8OdbuswKCq68dcsDJFTFto
 UToheDD0O0WE4yMthewXegNPIP5vMSeLDjl1AsH84Yay9i9x5KyVdkOvW8ldB82bp5slSzBO
 xGK514IjHNHFD7zNPIsPdrsYyg/5fK4fekJQMw4eTanjnJZUAactB9jakeLt4wGuBh9yPpvU
 Xt3nCvFMJr7NUiE5GDvLwvl+eV3rh3SPEuKLXwB8zyp0KCFeFmeQqofPV2FY4gRtf3Z/lmJq
 o8BbJvXl32ztdEShAGOoOb/ynhaRUXX+Lis85AHHgJ9ClAO9J4d5w/5nup6Jt0Nc1V9nebU5
 HCtMnJlJK7ErSSfc22iMyk7AJu2BMoXhS9qZkQEYAf3s0XPlK7ytc/zgbNsJel5nAGipNYpJ
 8Q4lzKoWKoVGm+aoWRAPfEQbuVKLXyWuO5HBAL9CBBXQnKqb1ahFgPMFuc3yBQzMw==
IronPort-HdrOrdr: A9a23:RYZsTK2MGiNgHgcIIaxvMwqjBIokLtp133Aq2lEZdPRUGvb3qy
 nIpoVj6faUskd2ZJhOo7C90cW7LU80sKQFhLX5Xo3SOzUO2lHYT72KhLGKq1aLdhEWtNQtsZ
 uIG5IOcOEYZmIasS+V2maF+q4bsbu6zJw=
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="77263211"
From: Jane Malalane <jane.malalane@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jane Malalane <jane.malalane@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>, "David
 Scott" <dave@recoil.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller Virtualization capabilities on x86
Date: Wed, 29 Jun 2022 14:55:33 +0100
Message-ID: <20220629135534.19923-2-jane.malalane@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220629135534.19923-1-jane.malalane@citrix.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Add XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC and
XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC to report accelerated xAPIC and
x2APIC on x86 hardware. This is so that xAPIC and x2APIC virtualization
can subsequently be enabled on a per-domain basis.
No such features are currently implemented on AMD hardware.

HW-assisted xAPIC virtualization will be reported if the hardware, at a
minimum, supports virtualize_apic_accesses, as this feature alone means
that an access to the APIC page will cause an APIC-access VM exit. An
APIC-access VM exit provides a VMM with information about the access
causing the VM exit, unlike a regular EPT fault, thus simplifying some
internal handling.

HW-assisted x2APIC virtualization will be reported if the hardware
supports virtualize_x2apic_mode and at least one of apic_reg_virt or
virtual_intr_delivery. This also means that sysctl follows the
conditionals in vmx_vlapic_msr_changed().

For that purpose, also add an arch-specific "capabilities" parameter
to struct xen_sysctl_physinfo.

Note that this interface is intended to be compatible with AMD so that
AVIC support can be introduced in a future patch. Unlike Intel that
has multiple controls for APIC Virtualization, AMD has one global
'AVIC Enable' control bit, so fine-graining of APIC virtualization
control cannot be done on a common interface.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
CC: George Dunlap <george.dunlap@citrix.com>
CC: Nick Rosbrook <rosbrookn@gmail.com>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

v10:
 * Make assisted_x{2}apic_available conditional upon _vmx_cpu_up()

v9:
 * Move assisted_x{2}apic_available to vmx_vmcs_init() so they get
   declared at boot time, after vmx_secondary_exec_control is set

v8:
 * Improve commit message

v7:
 * Make sure assisted_x{2}apic_available evaluates to false, to ensure
   Xen builds, when !CONFIG_HVM
 * Fix coding style issues

v6:
 * Limit abi check to x86
 * Fix coding style issue

v5:
 * Have assisted_xapic_available solely depend on
   cpu_has_vmx_virtualize_apic_accesses and assisted_x2apic_available
   depend on cpu_has_vmx_virtualize_x2apic_mode and
   cpu_has_vmx_apic_reg_virt OR cpu_has_vmx_virtual_intr_delivery

v4:
 * Fallback to the original v2/v1 conditions for setting
   assisted_xapic_available and assisted_x2apic_available so that in
   the future APIC virtualization can be exposed on AMD hardware
   since fine-graining of "AVIC" is not supported, i.e., AMD solely
   uses "AVIC Enable". This also means that sysctl mimics what's
   exposed in CPUID

v3:
 * Define XEN_SYSCTL_PHYSCAP_ARCH_MAX for ABI checking and actually
   set "arch_capabilities", via a call to c_bitmap_to_ocaml_list()
 * Have assisted_x2apic_available only depend on
   cpu_has_vmx_virtualize_x2apic_mode

v2:
 * Use one macro LIBXL_HAVE_PHYSINFO_ASSISTED_APIC instead of two
 * Pass xcphysinfo as a pointer in libxl__arch_get_physinfo
 * Set assisted_x{2}apic_available to be conditional upon "bsp" and
   annotate it with __ro_after_init
 * Change XEN_SYSCTL_PHYSCAP_ARCH_ASSISTED_X{2}APIC to
   _X86_ASSISTED_X{2}APIC
 * Keep XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X{2}APIC contained within
   sysctl.h
 * Fix padding introduced in struct xen_sysctl_physinfo and bump
   XEN_SYSCTL_INTERFACE_VERSION
---
 tools/golang/xenlight/helpers.gen.go |  4 ++++
 tools/golang/xenlight/types.gen.go   |  2 ++
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl.c             |  3 +++
 tools/libs/light/libxl_arch.h        |  4 ++++
 tools/libs/light/libxl_arm.c         |  5 +++++
 tools/libs/light/libxl_types.idl     |  2 ++
 tools/libs/light/libxl_x86.c         | 11 +++++++++++
 tools/ocaml/libs/xc/xenctrl.ml       |  5 +++++
 tools/ocaml/libs/xc/xenctrl.mli      |  5 +++++
 tools/ocaml/libs/xc/xenctrl_stubs.c  | 15 +++++++++++++--
 tools/xl/xl_info.c                   |  6 ++++--
 xen/arch/x86/hvm/hvm.c               |  3 +++
 xen/arch/x86/hvm/vmx/vmcs.c          |  7 +++++++
 xen/arch/x86/include/asm/hvm/hvm.h   |  5 +++++
 xen/arch/x86/sysctl.c                |  4 ++++
 xen/include/public/sysctl.h          | 11 ++++++++++-
 17 files changed, 94 insertions(+), 5 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index b746ff1081..dd4e6c9f14 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3373,6 +3373,8 @@ x.CapVmtrace = bool(xc.cap_vmtrace)
 x.CapVpmu = bool(xc.cap_vpmu)
 x.CapGnttabV1 = bool(xc.cap_gnttab_v1)
 x.CapGnttabV2 = bool(xc.cap_gnttab_v2)
+x.CapAssistedXapic = bool(xc.cap_assisted_xapic)
+x.CapAssistedX2Apic = bool(xc.cap_assisted_x2apic)
 
  return nil}
 
@@ -3407,6 +3409,8 @@ xc.cap_vmtrace = C.bool(x.CapVmtrace)
 xc.cap_vpmu = C.bool(x.CapVpmu)
 xc.cap_gnttab_v1 = C.bool(x.CapGnttabV1)
 xc.cap_gnttab_v2 = C.bool(x.CapGnttabV2)
+xc.cap_assisted_xapic = C.bool(x.CapAssistedXapic)
+xc.cap_assisted_x2apic = C.bool(x.CapAssistedX2Apic)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b1e84d5258..87be46c745 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1014,6 +1014,8 @@ CapVmtrace bool
 CapVpmu bool
 CapGnttabV1 bool
 CapGnttabV2 bool
+CapAssistedXapic bool
+CapAssistedX2Apic bool
 }
 
 type Connectorinfo struct {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 7ce978e83c..364d852278 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -528,6 +528,13 @@
 #define LIBXL_HAVE_MAX_GRANT_VERSION 1
 
 /*
+ * LIBXL_HAVE_PHYSINFO_ASSISTED_APIC indicates that libxl_physinfo has
+ * cap_assisted_xapic and cap_assisted_x2apic fields, which indicate
+ * the availability of x{2}APIC hardware assisted virtualization.
+ */
+#define LIBXL_HAVE_PHYSINFO_ASSISTED_APIC 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f..6d699951e2 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -15,6 +15,7 @@
 #include "libxl_osdeps.h"
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 
 int libxl_ctx_alloc(libxl_ctx **pctx, int version,
                     unsigned flags, xentoollog_logger * lg)
@@ -410,6 +411,8 @@ int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
     physinfo->cap_gnttab_v2 =
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v2);
 
+    libxl__arch_get_physinfo(physinfo, &xcphysinfo);
+
     GC_FREE;
     return 0;
 }
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 1522ecb97f..207ceac6a1 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -86,6 +86,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              uint64_t *out);
 
 _hidden
+void libxl__arch_get_physinfo(libxl_physinfo *physinfo,
+                              const xc_physinfo_t *xcphysinfo);
+
+_hidden
 void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0939..39fdca1b49 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1431,6 +1431,11 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+void libxl__arch_get_physinfo(libxl_physinfo *physinfo,
+                              const xc_physinfo_t *xcphysinfo)
+{
+}
+
 void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src)
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2a42da2f7d..42ac6c357b 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -1068,6 +1068,8 @@ libxl_physinfo = Struct("physinfo", [
     ("cap_vpmu", bool),
     ("cap_gnttab_v1", bool),
     ("cap_gnttab_v2", bool),
+    ("cap_assisted_xapic", bool),
+    ("cap_assisted_x2apic", bool),
     ], dir=DIR_OUT)
 
 libxl_connectorinfo = Struct("connectorinfo", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 1feadebb18..e0a06ecfe3 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -866,6 +866,17 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+void libxl__arch_get_physinfo(libxl_physinfo *physinfo,
+                              const xc_physinfo_t *xcphysinfo)
+{
+    physinfo->cap_assisted_xapic =
+        !!(xcphysinfo->arch_capabilities &
+           XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC);
+    physinfo->cap_assisted_x2apic =
+        !!(xcphysinfo->arch_capabilities &
+           XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC);
+}
+
 void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src)
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 8eab6f60eb..7152394fce 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -128,6 +128,10 @@ type physinfo_cap_flag =
 	| CAP_Gnttab_v1
 	| CAP_Gnttab_v2
 
+type physinfo_arch_cap_flag =
+	| CAP_X86_ASSISTED_XAPIC
+	| CAP_X86_ASSISTED_X2APIC
+
 type physinfo =
 {
 	threads_per_core : int;
@@ -141,6 +145,7 @@ type physinfo =
 	(* XXX hw_cap *)
 	capabilities     : physinfo_cap_flag list;
 	max_nr_cpus      : int;
+	arch_capabilities : physinfo_arch_cap_flag list;
 }
 
 type version =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index d3014a2708..bb5bf5207d 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -113,6 +113,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
+type physinfo_arch_cap_flag =
+  | CAP_X86_ASSISTED_XAPIC
+  | CAP_X86_ASSISTED_X2APIC
+
 type physinfo = {
   threads_per_core : int;
   cores_per_socket : int;
@@ -124,6 +128,7 @@ type physinfo = {
   scrub_pages      : nativeint;
   capabilities     : physinfo_cap_flag list;
   max_nr_cpus      : int; (** compile-time max possible number of nr_cpus *)
+  arch_capabilities : physinfo_arch_cap_flag list;
 }
 type version = { major : int; minor : int; extra : string; }
 type compile_info = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 513ee142d2..e56484590e 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -718,7 +718,7 @@ CAMLprim value stub_xc_send_debug_keys(value xch, value keys)
 CAMLprim value stub_xc_physinfo(value xch)
 {
 	CAMLparam1(xch);
-	CAMLlocal2(physinfo, cap_list);
+	CAMLlocal3(physinfo, cap_list, arch_cap_list);
 	xc_physinfo_t c_physinfo;
 	int r;
 
@@ -737,7 +737,7 @@ CAMLprim value stub_xc_physinfo(value xch)
 		/* ! XEN_SYSCTL_PHYSCAP_ XEN_SYSCTL_PHYSCAP_MAX max */
 		(c_physinfo.capabilities);
 
-	physinfo = caml_alloc_tuple(10);
+	physinfo = caml_alloc_tuple(11);
 	Store_field(physinfo, 0, Val_int(c_physinfo.threads_per_core));
 	Store_field(physinfo, 1, Val_int(c_physinfo.cores_per_socket));
 	Store_field(physinfo, 2, Val_int(c_physinfo.nr_cpus));
@@ -749,6 +749,17 @@ CAMLprim value stub_xc_physinfo(value xch)
 	Store_field(physinfo, 8, cap_list);
 	Store_field(physinfo, 9, Val_int(c_physinfo.max_cpu_id + 1));
 
+#if defined(__i386__) || defined(__x86_64__)
+	/*
+	 * arch_capabilities: physinfo_arch_cap_flag list;
+	 */
+	arch_cap_list = c_bitmap_to_ocaml_list
+		/* ! physinfo_arch_cap_flag CAP_ none */
+		/* ! XEN_SYSCTL_PHYSCAP_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
+		(c_physinfo.arch_capabilities);
+	Store_field(physinfo, 10, arch_cap_list);
+#endif
+
 	CAMLreturn(physinfo);
 }
 
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 712b7638b0..3205270754 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -210,7 +210,7 @@ static void output_physinfo(void)
          info.hw_cap[4], info.hw_cap[5], info.hw_cap[6], info.hw_cap[7]
         );
 
-    maybe_printf("virt_caps              :%s%s%s%s%s%s%s%s%s%s%s\n",
+    maybe_printf("virt_caps              :%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
          info.cap_pv ? " pv" : "",
          info.cap_hvm ? " hvm" : "",
          info.cap_hvm && info.cap_hvm_directio ? " hvm_directio" : "",
@@ -221,7 +221,9 @@ static void output_physinfo(void)
          info.cap_vmtrace ? " vmtrace" : "",
          info.cap_vpmu ? " vpmu" : "",
          info.cap_gnttab_v1 ? " gnttab-v1" : "",
-         info.cap_gnttab_v2 ? " gnttab-v2" : ""
+         info.cap_gnttab_v2 ? " gnttab-v2" : "",
+         info.cap_assisted_xapic ? " assisted_xapic" : "",
+         info.cap_assisted_x2apic ? " assisted_x2apic" : ""
         );
 
     vinfo = libxl_get_version_info(ctx);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b16fb4cd8..0a32a948db 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -117,6 +117,9 @@ static const char __initconst warning_hvm_fep[] =
 static bool_t __initdata opt_altp2m_enabled = 0;
 boolean_param("altp2m", opt_altp2m_enabled);
 
+bool __ro_after_init assisted_xapic_available;
+bool __ro_after_init assisted_x2apic_available;
+
 static int cf_check cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 56fed2db03..7329622dd4 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -2146,7 +2146,14 @@ int __init vmx_vmcs_init(void)
     ret = _vmx_cpu_up(true);
 
     if ( !ret )
+    {
+        /* Check whether hardware supports accelerated xapic and x2apic. */
+        assisted_xapic_available = cpu_has_vmx_virtualize_apic_accesses;
+        assisted_x2apic_available = cpu_has_vmx_virtualize_x2apic_mode &&
+                                    (cpu_has_vmx_apic_reg_virt ||
+                                     cpu_has_vmx_virtual_intr_delivery);
         register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+    }
 
     return ret;
 }
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index caaeacabc7..8d162b2c99 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -388,6 +388,9 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value);
 #define hvm_tsc_scaling_ratio(d) \
     ((d)->arch.hvm.tsc_scaling_ratio)
 
+extern bool assisted_xapic_available;
+extern bool assisted_x2apic_available;
+
 #define hvm_get_guest_time(v) hvm_get_guest_time_fixed(v, 0)
 
 #define hvm_paging_enabled(v) \
@@ -901,6 +904,8 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 #define hvm_tsc_scaling_supported false
 #define hap_has_1gb false
 #define hap_has_2mb false
+#define assisted_xapic_available false
+#define assisted_x2apic_available false
 
 #define hvm_paging_enabled(v) ((void)(v), false)
 #define hvm_wp_enabled(v) ((void)(v), false)
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index f82abc2488..716525f72f 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -135,6 +135,10 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hap;
     if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_shadow;
+    if ( assisted_xapic_available )
+        pi->arch_capabilities |= XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC;
+    if ( assisted_x2apic_available )
+        pi->arch_capabilities |= XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC;
 }
 
 long arch_do_sysctl(
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 60c8711483..fefc17c288 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -35,7 +35,7 @@
 #include "domctl.h"
 #include "physdev.h"
 
-#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
+#define XEN_SYSCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * Read console content from Xen buffer ring.
@@ -111,6 +111,13 @@ struct xen_sysctl_tbuf_op {
 /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
 #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
 
+/* The platform supports x{2}apic hardware assisted emulation. */
+#define XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC  (1u << 0)
+#define XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC (1u << 1)
+
+/* Max XEN_SYSCTL_PHYSCAP_X86_* constant. Used for ABI checking. */
+#define XEN_SYSCTL_PHYSCAP_X86_MAX XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC
+
 struct xen_sysctl_physinfo {
     uint32_t threads_per_core;
     uint32_t cores_per_socket;
@@ -120,6 +127,8 @@ struct xen_sysctl_physinfo {
     uint32_t max_node_id; /* Largest possible node ID on this host */
     uint32_t cpu_khz;
     uint32_t capabilities;/* XEN_SYSCTL_PHYSCAP_??? */
+    uint32_t arch_capabilities;/* XEN_SYSCTL_PHYSCAP_{X86,ARM,...}_??? */
+    uint32_t pad;
     uint64_aligned_t total_pages;
     uint64_aligned_t free_pages;
     uint64_aligned_t scrub_pages;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 13:57:10 2022
From: Jane Malalane <jane.malalane@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jane Malalane <jane.malalane@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH RESEND v10 2/2] x86/xen: Allow per-domain usage of hardware virtualized APIC
Date: Wed, 29 Jun 2022 14:55:34 +0100
Message-ID: <20220629135534.19923-3-jane.malalane@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220629135534.19923-1-jane.malalane@citrix.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Introduce a new x86-specific per-domain creation flag to select
whether hardware assisted virtualization should be used for
x{2}APIC.

A per-domain option is added to xl in order to select the usage of
x{2}APIC hardware assisted virtualization, as well as a global
configuration option.

Having all APIC interaction exit to Xen for emulation is slow and can
induce significant overhead. Hardware can speed up x{2}APIC by decoding
the APIC access and delivering a VM exit with a more specific exit
reason than a regular EPT fault, or by avoiding a VM exit altogether.

On the other hand, being able to disable x{2}APIC hardware assisted
virtualization can be useful for testing and debugging purposes.

Note:

- vmx_install_vlapic_mapping doesn't require modifications regardless
of whether the guest has "Virtualize APIC accesses" enabled or not,
i.e., setting the APIC_ACCESS_ADDR VMCS field is fine so long as
virtualize_apic_accesses is supported by the CPU.

- Neither the per-domain nor the global assisted_x{2}apic options are
part of the migration stream, unless explicitly set in the respective
configuration files. Default settings of assisted_x{2}apic chosen
internally by the toolstack, based on host capabilities at create
time, are not migrated.
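As an illustration, the per-domain knobs introduced here could be exercised from a guest config fragment like the following (the guest name and type are invented for the example; only the assisted_* keys come from this patch):

```
# Hypothetical guest config: force-disable assisted xAPIC for
# debugging, while leaving assisted x2APIC at the xl.conf/host default.
name = "debug-guest"
type = "hvm"
assisted_xapic = 0
```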

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Nick Rosbrook <rosbrookn@gmail.com>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

v10:
 * Improve commit message note on migration

v9:
 * Fix style issues
 * Fix exit() logic for assisted_x{2}apic parsing
 * Add and use XEN_X86_MISC_FLAGS_MAX for ABI checking instead of
   using XEN_X86_ASSISTED_X2APIC directly
 * Expand commit message to mention migration is safe

v8:
 * Widen assisted_x{2}apic parsing to PVH guests in
   parse_config_data()

v7:
 * Fix void return in libxl__arch_domain_build_info_setdefault
 * Fix style issues
 * Use EINVAL when rejecting assisted_x{2}apic for PV guests and
   ENODEV otherwise, when assisted_x{2}apic isn't supported
 * Define has_assisted_x{2}apic macros for when !CONFIG_HVM
 * Replace "EPT" fault reference with "p2m" fault since the former is
   Intel-specific

v6:
 * Use ENODEV instead of EINVAL when rejecting assisted_x{2}apic
   for PV guests
 * Move has_assisted_x{2}apic macros out of an Intel specific header
 * Remove references to Intel specific features in documentation

v5:
 * Revert v4 changes in vmx_vlapic_msr_changed(), preserving the use of
   the has_assisted_x{2}apic macros
 * Following changes in assisted_x{2}apic_available definitions in
   patch 1, retighten conditionals for setting
   XEN_HVM_CPUID_APIC_ACCESS_VIRT and XEN_HVM_CPUID_X2APIC_VIRT in
   cpuid_hypervisor_leaves()

v4:
 * Add has_assisted_x{2}apic macros and use them where appropriate
 * Replace CPU checks with per-domain assisted_x{2}apic control
   options in vmx_vlapic_msr_changed() and cpuid_hypervisor_leaves(),
   following edits to assisted_x{2}apic_available definitions in
   patch 1
   Note: new assisted_x{2}apic_available definitions make later
   cpu_has_vmx_apic_reg_virt and cpu_has_vmx_virtual_intr_delivery
   checks redundant in vmx_vlapic_msr_changed()

v3:
 * Change info in xl.cfg to better express reality and fix
   capitalization of x{2}apic
 * Move "physinfo" variable definition to the beginning of
   libxl__domain_build_info_setdefault()
 * Reposition brackets in if statement to match libxl coding style
 * Shorten logic in libxl__arch_domain_build_info_setdefault()
 * Correct dprintk message in arch_sanitise_domain_config()
 * Make appropriate changes in vmx_vlapic_msr_changed() and
   cpuid_hypervisor_leaves() for amended "assisted_x2apic" bit
 * Remove unneeded parentheses

v2:
 * Add a LIBXL_HAVE_ASSISTED_APIC macro
 * Pass xcphysinfo as a pointer in libxl__arch_get_physinfo
 * Add a return statement in now "int"
   libxl__arch_domain_build_info_setdefault()
 * Preserve libxl__arch_domain_build_info_setdefault 's location in
   libxl_create.c
 * Correct x{2}apic default setting logic in
   libxl__arch_domain_prepare_config()
 * Correct logic for parsing assisted_x{2}apic host/guest options in
   xl_parse.c and initialize them to -1 in xl.c
 * Use guest options directly in vmx_vlapic_msr_changed
 * Fix indentation of bool assisted_x{2}apic in struct hvm_domain
 * Add a change in xenctrl_stubs.c to pass xenctrl ABI checks
---
 docs/man/xl.cfg.5.pod.in              | 15 +++++++++++++++
 docs/man/xl.conf.5.pod.in             | 12 ++++++++++++
 tools/golang/xenlight/helpers.gen.go  | 12 ++++++++++++
 tools/golang/xenlight/types.gen.go    |  2 ++
 tools/include/libxl.h                 |  7 +++++++
 tools/libs/light/libxl_arch.h         |  5 +++--
 tools/libs/light/libxl_arm.c          |  9 ++++++---
 tools/libs/light/libxl_create.c       | 22 +++++++++++++---------
 tools/libs/light/libxl_types.idl      |  2 ++
 tools/libs/light/libxl_x86.c          | 28 ++++++++++++++++++++++++++--
 tools/ocaml/libs/xc/xenctrl.ml        |  2 ++
 tools/ocaml/libs/xc/xenctrl.mli       |  2 ++
 tools/ocaml/libs/xc/xenctrl_stubs.c   |  2 +-
 tools/xl/xl.c                         |  8 ++++++++
 tools/xl/xl.h                         |  2 ++
 tools/xl/xl_parse.c                   | 19 +++++++++++++++++++
 xen/arch/x86/domain.c                 | 29 ++++++++++++++++++++++++++++-
 xen/arch/x86/hvm/vmx/vmcs.c           |  4 ++++
 xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++---------
 xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
 xen/arch/x86/include/asm/hvm/hvm.h    |  5 +++++
 xen/arch/x86/traps.c                  |  5 +++--
 xen/include/public/arch-x86/xen.h     |  5 +++++
 23 files changed, 187 insertions(+), 29 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b98d161398..6d98d73d76 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1862,6 +1862,21 @@ firmware tables when using certain older guest Operating
 Systems. These tables have been superseded by newer constructs within
 the ACPI tables.
 
+=item B<assisted_xapic=BOOLEAN>
+
+B<(x86 only)> Enables or disables hardware assisted virtualization for
+xAPIC. With this option enabled, a memory-mapped APIC access will be
+decoded by hardware and either issue a more specific VM exit than just
+a p2m fault, or altogether avoid a VM exit. The
+default is settable via L<xl.conf(5)>.
+
+=item B<assisted_x2apic=BOOLEAN>
+
+B<(x86 only)> Enables or disables hardware assisted virtualization for
+x2APIC. With this option enabled, certain accesses to MSR APIC
+registers will avoid a VM exit into the hypervisor. The default is
+settable via L<xl.conf(5)>.
+
 =item B<nx=BOOLEAN>
 
 B<(x86 only)> Hides or exposes the No-eXecute capability. This allows a guest
diff --git a/docs/man/xl.conf.5.pod.in b/docs/man/xl.conf.5.pod.in
index df20c08137..95d136d1ea 100644
--- a/docs/man/xl.conf.5.pod.in
+++ b/docs/man/xl.conf.5.pod.in
@@ -107,6 +107,18 @@ Sets the default value for the C<max_grant_version> domain config value.
 
 Default: maximum grant version supported by the hypervisor.
 
+=item B<assisted_xapic=BOOLEAN>
+
+If enabled, domains will use xAPIC hardware assisted virtualization by default.
+
+Default: enabled if supported.
+
+=item B<assisted_x2apic=BOOLEAN>
+
+If enabled, domains will use x2APIC hardware assisted virtualization by default.
+
+Default: enabled if supported.
+
 =item B<vif.default.script="PATH">
 
 Configures the default hotplug script used by virtual network devices.
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index dd4e6c9f14..dece545ee0 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1120,6 +1120,12 @@ x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
+if err := x.ArchX86.AssistedXapic.fromC(&xc.arch_x86.assisted_xapic);err != nil {
+return fmt.Errorf("converting field ArchX86.AssistedXapic: %v", err)
+}
+if err := x.ArchX86.AssistedX2Apic.fromC(&xc.arch_x86.assisted_x2apic);err != nil {
+return fmt.Errorf("converting field ArchX86.AssistedX2Apic: %v", err)
+}
 x.Altp2M = Altp2MMode(xc.altp2m)
 x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
 if err := x.Vpmu.fromC(&xc.vpmu);err != nil {
@@ -1605,6 +1611,12 @@ xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
+if err := x.ArchX86.AssistedXapic.toC(&xc.arch_x86.assisted_xapic); err != nil {
+return fmt.Errorf("converting field ArchX86.AssistedXapic: %v", err)
+}
+if err := x.ArchX86.AssistedX2Apic.toC(&xc.arch_x86.assisted_x2apic); err != nil {
+return fmt.Errorf("converting field ArchX86.AssistedX2Apic: %v", err)
+}
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
 xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
 if err := x.Vpmu.toC(&xc.vpmu); err != nil {
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 87be46c745..253c9ad93d 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -520,6 +520,8 @@ Vuart VuartType
 }
 ArchX86 struct {
 MsrRelaxed Defbool
+AssistedXapic Defbool
+AssistedX2Apic Defbool
 }
 Altp2M Altp2MMode
 VmtraceBufKb int
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 364d852278..7910c458e3 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -535,6 +535,13 @@
 #define LIBXL_HAVE_PHYSINFO_ASSISTED_APIC 1
 
 /*
+ * LIBXL_HAVE_ASSISTED_APIC indicates that libxl_domain_build_info has
+ * assisted_xapic and assisted_x2apic fields for enabling hardware
+ * assisted virtualization for x{2}apic per domain.
+ */
+#define LIBXL_HAVE_ASSISTED_APIC 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 207ceac6a1..03b89929e6 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -71,8 +71,9 @@ void libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
                                                libxl_domain_create_info *c_info);
 
 _hidden
-void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
-                                              libxl_domain_build_info *b_info);
+int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
+                                             libxl_domain_build_info *b_info,
+                                             const libxl_physinfo *physinfo);
 
 _hidden
 int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 39fdca1b49..7dee2afd4b 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1384,14 +1384,15 @@ void libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
     }
 }
 
-void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
-                                              libxl_domain_build_info *b_info)
+int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
+                                             libxl_domain_build_info *b_info,
+                                             const libxl_physinfo *physinfo)
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
-        return;
+        return 0;
 
     LOG(DEBUG, "Converting build_info to PVH");
 
@@ -1399,6 +1400,8 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     memset(&b_info->u, '\0', sizeof(b_info->u));
     b_info->type = LIBXL_DOMAIN_TYPE_INVALID;
     libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
+
+    return 0;
 }
 
 int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 2339f09e95..b9dd2deedf 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -75,6 +75,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
                                         libxl_domain_build_info *b_info)
 {
     int i, rc;
+    libxl_physinfo info;
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_HVM &&
         b_info->type != LIBXL_DOMAIN_TYPE_PV &&
@@ -264,7 +265,18 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     if (!b_info->event_channels)
         b_info->event_channels = 1023;
 
-    libxl__arch_domain_build_info_setdefault(gc, b_info);
+    rc = libxl_get_physinfo(CTX, &info);
+    if (rc) {
+        LOG(ERROR, "failed to get hypervisor info");
+        return rc;
+    }
+
+    rc = libxl__arch_domain_build_info_setdefault(gc, b_info, &info);
+    if (rc) {
+        LOG(ERROR, "unable to set domain arch build info defaults");
+        return rc;
+    }
+
     libxl_defbool_setdefault(&b_info->dm_restrict, false);
 
     if (b_info->iommu_memkb == LIBXL_MEMKB_DEFAULT)
@@ -457,14 +469,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     }
 
     if (b_info->max_grant_version == LIBXL_MAX_GRANT_DEFAULT) {
-        libxl_physinfo info;
-
-        rc = libxl_get_physinfo(CTX, &info);
-        if (rc) {
-            LOG(ERROR, "failed to get hypervisor info");
-            return rc;
-        }
-
         if (info.cap_gnttab_v2)
             b_info->max_grant_version = 2;
         else if (info.cap_gnttab_v1)
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 42ac6c357b..db5eb0a0b3 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -648,6 +648,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                ("vuart", libxl_vuart_type),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
+                               ("assisted_xapic", libxl_defbool),
+                               ("assisted_x2apic", libxl_defbool),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is
     # supported by x86 HVM and ARM support is planned.
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e0a06ecfe3..7c5ee74443 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -23,6 +23,15 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
     if (libxl_defbool_val(d_config->b_info.arch_x86.msr_relaxed))
         config->arch.misc_flags |= XEN_X86_MSR_RELAXED;
 
+    if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV)
+    {
+        if (libxl_defbool_val(d_config->b_info.arch_x86.assisted_xapic))
+            config->arch.misc_flags |= XEN_X86_ASSISTED_XAPIC;
+
+        if (libxl_defbool_val(d_config->b_info.arch_x86.assisted_x2apic))
+            config->arch.misc_flags |= XEN_X86_ASSISTED_X2APIC;
+    }
+
     return 0;
 }
 
@@ -819,11 +828,26 @@ void libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
 {
 }
 
-void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
-                                              libxl_domain_build_info *b_info)
+int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
+                                             libxl_domain_build_info *b_info,
+                                             const libxl_physinfo *physinfo)
 {
     libxl_defbool_setdefault(&b_info->acpi, true);
     libxl_defbool_setdefault(&b_info->arch_x86.msr_relaxed, false);
+
+    if (b_info->type != LIBXL_DOMAIN_TYPE_PV) {
+        libxl_defbool_setdefault(&b_info->arch_x86.assisted_xapic,
+                                 physinfo->cap_assisted_xapic);
+        libxl_defbool_setdefault(&b_info->arch_x86.assisted_x2apic,
+                                 physinfo->cap_assisted_x2apic);
+    }
+    else if (!libxl_defbool_is_default(b_info->arch_x86.assisted_xapic) ||
+             !libxl_defbool_is_default(b_info->arch_x86.assisted_x2apic)) {
+        LOG(ERROR, "Interrupt Controller Virtualization not supported for PV");
+        return ERROR_INVAL;
+    }
+
+    return 0;
 }
 
 int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 7152394fce..2836abb110 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -50,6 +50,8 @@ type x86_arch_emulation_flags =
 
 type x86_arch_misc_flags =
 	| X86_MSR_RELAXED
+	| X86_ASSISTED_XAPIC
+	| X86_ASSISTED_X2APIC
 
 type xen_x86_arch_domainconfig =
 {
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index bb5bf5207d..4dc4779ad2 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -44,6 +44,8 @@ type x86_arch_emulation_flags =
 
 type x86_arch_misc_flags =
   | X86_MSR_RELAXED
+  | X86_ASSISTED_XAPIC
+  | X86_ASSISTED_X2APIC
 
 type xen_x86_arch_domainconfig = {
   emulation_flags: x86_arch_emulation_flags list;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e56484590e..39b8034f2a 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -244,7 +244,7 @@ CAMLprim value stub_xc_domain_create(value xch, value wanted_domid, value config
 
 		cfg.arch.misc_flags = ocaml_list_to_c_bitmap
 			/* ! x86_arch_misc_flags X86_ none */
-			/* ! XEN_X86_ XEN_X86_MSR_RELAXED all */
+			/* ! XEN_X86_ XEN_X86_MISC_FLAGS_MAX max */
 			(VAL_MISC_FLAGS);
 
 #undef VAL_MISC_FLAGS
diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index 2d1ec18ea3..31eb223309 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -57,6 +57,8 @@ int max_grant_frames = -1;
 int max_maptrack_frames = -1;
 int max_grant_version = LIBXL_MAX_GRANT_DEFAULT;
 libxl_domid domid_policy = INVALID_DOMID;
+int assisted_xapic = -1;
+int assisted_x2apic = -1;
 
 xentoollog_level minmsglevel = minmsglevel_default;
 
@@ -201,6 +203,12 @@ static void parse_global_config(const char *configfile,
     if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
         claim_mode = l;
 
+    if (!xlu_cfg_get_long (config, "assisted_xapic", &l, 0))
+        assisted_xapic = l;
+
+    if (!xlu_cfg_get_long (config, "assisted_x2apic", &l, 0))
+        assisted_x2apic = l;
+
     xlu_cfg_replace_string (config, "remus.default.netbufscript",
         &default_remus_netbufscript, 0);
     xlu_cfg_replace_string (config, "colo.default.proxyscript",
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index c5c4bedbdd..528deb3feb 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -286,6 +286,8 @@ extern libxl_bitmap global_vm_affinity_mask;
 extern libxl_bitmap global_hvm_affinity_mask;
 extern libxl_bitmap global_pv_affinity_mask;
 extern libxl_domid domid_policy;
+extern int assisted_xapic;
+extern int assisted_x2apic;
 
 enum output_format {
     OUTPUT_FORMAT_JSON,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index b98c0de378..6080f8154d 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2761,6 +2761,25 @@ skip_usbdev:
 
     xlu_cfg_get_defbool(config, "vpmu", &b_info->vpmu, 0);
 
+    if (b_info->type != LIBXL_DOMAIN_TYPE_PV) {
+        e = xlu_cfg_get_long(config, "assisted_xapic", &l , 0);
+        if (!e)
+            libxl_defbool_set(&b_info->arch_x86.assisted_xapic, l);
+        else if (e != ESRCH)
+            exit(1);
+        else if (assisted_xapic != -1) /* use global default if present */
+            libxl_defbool_set(&b_info->arch_x86.assisted_xapic, assisted_xapic);
+
+        e = xlu_cfg_get_long(config, "assisted_x2apic", &l, 0);
+        if (!e)
+            libxl_defbool_set(&b_info->arch_x86.assisted_x2apic, l);
+        else if (e != ESRCH)
+            exit(1);
+        else if (assisted_x2apic != -1) /* use global default if present */
+            libxl_defbool_set(&b_info->arch_x86.assisted_x2apic,
+                              assisted_x2apic);
+    }
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 0d2944fe14..bc8f0b51ff 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -50,6 +50,7 @@
 #include <asm/cpuidle.h>
 #include <asm/mpspec.h>
 #include <asm/ldt.h>
+#include <asm/hvm/domain.h>
 #include <asm/hvm/hvm.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/support.h>
@@ -619,6 +620,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     bool hvm = config->flags & XEN_DOMCTL_CDF_hvm;
     bool hap = config->flags & XEN_DOMCTL_CDF_hap;
     bool nested_virt = config->flags & XEN_DOMCTL_CDF_nested_virt;
+    bool assisted_xapic = config->arch.misc_flags & XEN_X86_ASSISTED_XAPIC;
+    bool assisted_x2apic = config->arch.misc_flags & XEN_X86_ASSISTED_X2APIC;
     unsigned int max_vcpus;
 
     if ( hvm ? !hvm_enabled : !IS_ENABLED(CONFIG_PV) )
@@ -685,13 +688,31 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         }
     }
 
-    if ( config->arch.misc_flags & ~XEN_X86_MSR_RELAXED )
+    if ( config->arch.misc_flags & ~(XEN_X86_MSR_RELAXED |
+                                     XEN_X86_ASSISTED_XAPIC |
+                                     XEN_X86_ASSISTED_X2APIC) )
     {
         dprintk(XENLOG_INFO, "Invalid arch misc flags %#x\n",
                 config->arch.misc_flags);
         return -EINVAL;
     }
 
+    if ( (assisted_xapic || assisted_x2apic) && !hvm )
+    {
+        dprintk(XENLOG_INFO,
+                "Interrupt Controller Virtualization not supported for PV\n");
+        return -EINVAL;
+    }
+
+    if ( (assisted_xapic && !assisted_xapic_available) ||
+         (assisted_x2apic && !assisted_x2apic_available) )
+    {
+        dprintk(XENLOG_INFO,
+                "Hardware assisted x%sAPIC requested but not available\n",
+                assisted_xapic && !assisted_xapic_available ? "" : "2");
+        return -ENODEV;
+    }
+
     return 0;
 }
 
@@ -864,6 +885,12 @@ int arch_domain_create(struct domain *d,
 
     d->arch.msr_relaxed = config->arch.misc_flags & XEN_X86_MSR_RELAXED;
 
+    d->arch.hvm.assisted_xapic =
+        config->arch.misc_flags & XEN_X86_ASSISTED_XAPIC;
+
+    d->arch.hvm.assisted_x2apic =
+        config->arch.misc_flags & XEN_X86_ASSISTED_X2APIC;
+
     spec_ctrl_init_domain(d);
 
     return 0;
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 7329622dd4..683c650d77 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1134,6 +1134,10 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(PLE_WINDOW, ple_window);
     }
 
+    if ( !has_assisted_xapic(d) )
+        v->arch.hvm.vmx.secondary_exec_control &=
+            ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+
     if ( cpu_has_vmx_secondary_exec_control )
         __vmwrite(SECONDARY_VM_EXEC_CONTROL,
                   v->arch.hvm.vmx.secondary_exec_control);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f08a00dcbb..47554cc004 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3376,16 +3376,11 @@ static void vmx_install_vlapic_mapping(struct vcpu *v)
 
 void vmx_vlapic_msr_changed(struct vcpu *v)
 {
-    int virtualize_x2apic_mode;
     struct vlapic *vlapic = vcpu_vlapic(v);
     unsigned int msr;
 
-    virtualize_x2apic_mode = ( (cpu_has_vmx_apic_reg_virt ||
-                                cpu_has_vmx_virtual_intr_delivery) &&
-                               cpu_has_vmx_virtualize_x2apic_mode );
-
-    if ( !cpu_has_vmx_virtualize_apic_accesses &&
-         !virtualize_x2apic_mode )
+    if ( !has_assisted_xapic(v->domain) &&
+         !has_assisted_x2apic(v->domain) )
         return;
 
     vmx_vmcs_enter(v);
@@ -3395,7 +3390,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
     if ( !vlapic_hw_disabled(vlapic) &&
          (vlapic_base_address(vlapic) == APIC_DEFAULT_PHYS_BASE) )
     {
-        if ( virtualize_x2apic_mode && vlapic_x2apic_mode(vlapic) )
+        if ( has_assisted_x2apic(v->domain) && vlapic_x2apic_mode(vlapic) )
         {
             v->arch.hvm.vmx.secondary_exec_control |=
                 SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
@@ -3416,7 +3411,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
                 vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
             }
         }
-        else
+        else if ( has_assisted_xapic(v->domain) )
             v->arch.hvm.vmx.secondary_exec_control |=
                 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
     }
diff --git a/xen/arch/x86/include/asm/hvm/domain.h b/xen/arch/x86/include/asm/hvm/domain.h
index 698455444e..92bf53483c 100644
--- a/xen/arch/x86/include/asm/hvm/domain.h
+++ b/xen/arch/x86/include/asm/hvm/domain.h
@@ -117,6 +117,12 @@ struct hvm_domain {
 
     bool                   is_s3_suspended;
 
+    /* xAPIC hardware assisted virtualization. */
+    bool                   assisted_xapic;
+
+    /* x2APIC hardware assisted virtualization. */
+    bool                   assisted_x2apic;
+
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
 
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 8d162b2c99..03096f31ef 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -391,6 +391,9 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value);
 extern bool assisted_xapic_available;
 extern bool assisted_x2apic_available;
 
+#define has_assisted_xapic(d) ((d)->arch.hvm.assisted_xapic)
+#define has_assisted_x2apic(d) ((d)->arch.hvm.assisted_x2apic)
+
 #define hvm_get_guest_time(v) hvm_get_guest_time_fixed(v, 0)
 
 #define hvm_paging_enabled(v) \
@@ -907,6 +910,8 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 #define assisted_xapic_available false
 #define assisted_x2apic_available false
 
+#define has_assisted_xapic(d) ((void)(d), false)
+#define has_assisted_x2apic(d) ((void)(d), false)
 #define hvm_paging_enabled(v) ((void)(v), false)
 #define hvm_wp_enabled(v) ((void)(v), false)
 #define hvm_pcid_enabled(v) ((void)(v), false)
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bb3dfcc90f..cabebf4f5b 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1117,7 +1117,8 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         if ( !is_hvm_domain(d) || subleaf != 0 )
             break;
 
-        if ( cpu_has_vmx_apic_reg_virt )
+        if ( cpu_has_vmx_apic_reg_virt &&
+             has_assisted_xapic(d) )
             res->a |= XEN_HVM_CPUID_APIC_ACCESS_VIRT;
 
         /*
@@ -1126,7 +1127,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
          * and wrmsr in the guest will run without VMEXITs (see
          * vmx_vlapic_msr_changed()).
          */
-        if ( cpu_has_vmx_virtualize_x2apic_mode &&
+        if ( has_assisted_x2apic(d) &&
              cpu_has_vmx_apic_reg_virt &&
              cpu_has_vmx_virtual_intr_delivery )
             res->a |= XEN_HVM_CPUID_X2APIC_VIRT;
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 7acd94c8eb..58a1e87ee9 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -317,9 +317,14 @@ struct xen_arch_domainconfig {
  * doesn't allow the guest to read or write to the underlying MSR.
  */
 #define XEN_X86_MSR_RELAXED (1u << 0)
+#define XEN_X86_ASSISTED_XAPIC (1u << 1)
+#define XEN_X86_ASSISTED_X2APIC (1u << 2)
     uint32_t misc_flags;
 };
 
+/* Max  XEN_X86_* constant. Used for ABI checking. */
+#define XEN_X86_MISC_FLAGS_MAX XEN_X86_ASSISTED_X2APIC
+
 /* Location of online VCPU bitmap. */
 #define XEN_ACPI_CPU_MAP             0xaf00
 #define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 14:10:14 2022
Message-ID: <2592493b-4339-3e54-8acf-585dcf90be96@suse.com>
Date: Wed, 29 Jun 2022 16:10:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, regressions@lists.linux.dev
References: <20220623094608.7294-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 0/3] x86: fix brk area initialization
In-Reply-To: <20220623094608.7294-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 23.06.22 11:46, Juergen Gross wrote:
> The brk area needs to be zeroed initially, like the .bss section.
> At the same time its memory should be covered by the ELF program
> headers.
> 
> Juergen Gross (3):
>    x86/xen: use clear_bss() for Xen PV guests
>    x86: fix setup of brk area
>    x86: fix .brk attribute in linker script
> 
>   arch/x86/include/asm/setup.h  |  3 +++
>   arch/x86/kernel/head64.c      |  4 +++-
>   arch/x86/kernel/vmlinux.lds.S |  2 +-
>   arch/x86/xen/enlighten_pv.c   |  8 ++++++--
>   arch/x86/xen/xen-head.S       | 10 +---------
>   5 files changed, 14 insertions(+), 13 deletions(-)
> 

Could I please have some feedback? This series is fixing a major
regression regarding running as Xen PV guest (depending on kernel
configuration system will crash very early).

#regzbot ^introduced e32683c6f7d2


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 14:10:51 2022
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DiI7=XE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6YP2-0001mM-Mn
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 14:10:49 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-eopbgr150083.outbound.protection.outlook.com [40.107.15.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 461c17cc-f7b5-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 16:10:48 +0200 (CEST)
Received: from DB9PR01CA0029.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:1d8::34) by PAXPR08MB7394.eurprd08.prod.outlook.com
 (2603:10a6:102:2bc::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 14:10:45 +0000
Received: from DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d8:cafe::a5) by DB9PR01CA0029.outlook.office365.com
 (2603:10a6:10:1d8::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Wed, 29 Jun 2022 14:10:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT007.mail.protection.outlook.com (100.127.142.161) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Wed, 29 Jun 2022 14:10:44 +0000
Received: ("Tessian outbound 3c5325c30453:v121");
 Wed, 29 Jun 2022 14:10:44 +0000
Received: from 5dd2a8c47862.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 550DCC43-EDF1-46BC-B5DA-FEE3ACA28835.1; 
 Wed, 29 Jun 2022 14:10:32 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5dd2a8c47862.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jun 2022 14:10:32 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB6808.eurprd08.prod.outlook.com (2603:10a6:20b:39c::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 14:10:30 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Wed, 29 Jun 2022
 14:10:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: xenia <burzalodowa@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh
	<Rahul.Singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Topic: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
Thread-Index: AQHYiwEFKAxihkpP+kCxp/TNi89h5q1l/I6AgAAZcwCAAAIrgIAAVcuA
Date: Wed, 29 Jun 2022 14:10:30 +0000
Message-ID: <E0AD2430-78DB-454B-9D76-EB2E24E80E1F@arm.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
 <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
 <1b28f8b2-2153-61f6-515f-b2ed880f84b6@gmail.com>
 <D8C0B798-C736-45CC-A723-1535131F1323@arm.com>
In-Reply-To: <D8C0B798-C736-45CC-A723-1535131F1323@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <6E5C4AB19BD7934883A79462AC368A79@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi,

In fact the patch was committed before we started this discussion, as Rahul's
R-b was enough.
But I would still be interested in other maintainers' views on this.

> On 29 Jun 2022, at 10:03, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Xenia,
> 
>> On 29 Jun 2022, at 09:55, xenia <burzalodowa@gmail.com> wrote:
>> 
>> Hi Bertrand,
>> 
>> On 6/29/22 10:24, Bertrand Marquis wrote:
>>> Hi Xenia,
>>> 
>>>> On 28 Jun 2022, at 16:08, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>>>> 
>>>> The expression 1 << 31 produces undefined behaviour because the type of
>>>> the integer constant 1 is (signed) int and the result of shifting 1 by
>>>> 31 bits is not representable in the (signed) int type.
>>>> Change the type of 1 to unsigned int by adding the U suffix.
>>>> 
>>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>> ---
>>>> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
>>>> For GBPA_UPDATE I will submit a patch.
>>>> 
>>>> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>> 
>>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>>> index 1e857f915a..f2562acc38 100644
>>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>>> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>>> #define CR2_E2H				(1 << 0)
>>>> 
>>>> #define ARM_SMMU_GBPA			0x44
>>>> -#define GBPA_UPDATE			(1 << 31)
>>>> +#define GBPA_UPDATE			(1U << 31)
>>>> #define GBPA_ABORT			(1 << 20)
>>>> 
>>>> #define ARM_SMMU_IRQ_CTRL		0x50
>>>> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>>> 
>>>> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
>>>> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
>>> It could also make sense to fix those two, for consistency.
>> According to the spec, the maximum value that max_n_shift can take is 19.
>> Hence, 1 << (llq)->max_n_shift cannot produce undefined behaviour.
> 
> I agree with that, but my preference would be to not rely on deep analysis
> of the code for these kinds of cases, and to use 1U whenever possible.
> 
> What do other maintainers think about this?
> 
>> 
>> Personally, I have no problem submitting another patch that adds U/UL
>> suffixes to all shifted integer constants in the file :) but ...
>> It seems that this driver was ported from Linux and that this file still
>> uses the Linux coding style, probably because deviations would reduce its
>> maintainability.
>> Adding a U suffix to those two might be considered an unjustified deviation.
> 
> At this stage the SMMU code is deviating a lot from Linux, so it will not
> be possible to update it easily to sync with the latest Linux code.
> The decision should be: are we fixing this file, or do we fully skip it on
> the grounds that it comes from Linux and should not be fixed?
> We started a discussion on this during the FuSa meeting, but we need to
> come to a conclusion per file.
> 
> For this one I would say that keeping it in the Linux code style and close
> to the Linux version is not needed.
> Rahul, Julien, Stefano: what do you think here?
> 
> Cheers
> Bertrand

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 14:27:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 14:27:04 +0000
Message-ID: <efb34738-49a9-fa4f-900a-fd530ff835ce@suse.com>
Date: Wed, 29 Jun 2022 16:26:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Content-Language: en-US
To: Jane Malalane <jane.malalane@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
 <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
 David Scott <dave@recoil.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-2-jane.malalane@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220629135534.19923-2-jane.malalane@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 29.06.2022 15:55, Jane Malalane wrote:
> Add XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC and
> XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC to report accelerated xAPIC and
> x2APIC, on x86 hardware. This is so that xAPIC and x2APIC virtualization
> can subsequently be enabled on a per-domain basis.
> No such features are currently implemented on AMD hardware.
> 
> HW assisted xAPIC virtualization will be reported if HW, at the
> minimum, supports virtualize_apic_accesses as this feature alone means
> that an access to the APIC page will cause an APIC-access VM exit. An
> APIC-access VM exit provides a VMM with information about the access
> causing the VM exit, unlike a regular EPT fault, thus simplifying some
> internal handling.
> 
> HW assisted x2APIC virtualization will be reported if HW supports
> virtualize_x2apic_mode and, at least, either apic_reg_virt or
> virtual_intr_delivery. This also means that
> sysctl follows the conditionals in vmx_vlapic_msr_changed().
> 
> For that purpose, also add an arch-specific "capabilities" parameter
> to struct xen_sysctl_physinfo.
> 
> Note that this interface is intended to be compatible with AMD so that
> AVIC support can be introduced in a future patch. Unlike Intel that
> has multiple controls for APIC Virtualization, AMD has one global
> 'AVIC Enable' control bit, so fine-graining of APIC virtualization
> control cannot be done on a common interface.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
> Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Could you please clarify whether you did drop Kevin's R-b (which, a
little unhelpfully, he provided in reply to v9 a week after you had
posted v10) because of ...

> v10:
>  * Make assisted_x{2}apic_available conditional upon _vmx_cpu_up()

... this, requiring him to re-offer the tag? Until told otherwise I
will assume so.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 14:31:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 14:31:31 +0000
Message-ID: <48eac477-43a3-fd1b-f019-a8e3e239602d@suse.com>
Date: Wed, 29 Jun 2022 16:31:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v10 0/2] xen: Report and use hardware APIC virtualization
 capabilities
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Jane Malalane <jane.malalane@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20220413112111.30675-1-jane.malalane@citrix.com>
 <e16b3b4b-45f3-a520-0360-c1d59602469b@suse.com>
 <Yrwste7T5DSeazjh@perard.uk.xensource.com>
 <3bb4c5d4-325a-d14f-038b-7206b3b6b29f@suse.com>
In-Reply-To: <3bb4c5d4-325a-d14f-038b-7206b3b6b29f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VzVVT0U2Snk1NGs3V3pMRlM4dUdyaVpvNUxaeVZLRXAxK2N5LzBXcGFmLy93?=
 =?utf-8?B?ZGNJRzF2Y0d0YTBKMDlVWDJLWHo5MHg2eHpNMFJsYktwNlBlNnpSM0NZRjhE?=
 =?utf-8?B?SjUySjRiRGlEZXNIVkdXdmFRRkNLWXY5Y2FubmtuanIxZzd5cXRyVFVacmhG?=
 =?utf-8?B?NGhRS1RXb2VyR3hJWmpFRUZIdU9CNE1XY3FxVkJxamdOS1M3YWplTzhzdlJY?=
 =?utf-8?B?NUV5Y1FNOXZLYjZKU2xudnVVbEFnMDBXWW1NZ1JpNzQ0c05kVlJMUG1JV0ZH?=
 =?utf-8?B?dG0yeGZaeXBqSjNwYUZ4Mk1wdlltQm9Ja3BSMXd5R3Jmc3QyUFQxdFUrZld3?=
 =?utf-8?B?VlRIYlJOSEQ1YjMzL2FJN2k2a1haMlMxT3d0U3VIR3JyYmxPaHNrRU1oR2lp?=
 =?utf-8?B?Z0pjQUY4a2JGR05ETXl4SUdvdytBcGkwbG8vMW5BOU1QOG1NVkxPSVpySFBR?=
 =?utf-8?B?QlhCUU9lenNvN1lIMlhmZHAwTmVSZ3czQ2dFWDRNQ0NCUG1DQkMwZ3MyS3VM?=
 =?utf-8?B?NXQ2MVNsR0R1eVNoRDZqc05nL0hJeVNCazllUlJmZzE0K3ArbVg5aHh3R3lQ?=
 =?utf-8?B?azVWSHkyWTBSKzY0OWUyaEYxeFNLRXJPRlpROHFGbTN4RTg1Sm9iVkM0L3d3?=
 =?utf-8?B?cjdkWTIrTUdndTM3L3dWT0tLaGNaeGVTcml4WDJVMkMwa04wMnRtSGg5OVRj?=
 =?utf-8?B?VldBdUNLc1FJdnpBUUo0UzlxeFhlOE9YcHNmbTBHd25MaXg5cmtPd0N3QTU3?=
 =?utf-8?B?M2RhZjRZZ3hSQ2JlV2ZMMkpTOG9zalpjR3N6a0d1QzdnQkE3U08rUHQyN1pB?=
 =?utf-8?B?bWNGcngycVhCNUliWUcyMndGRFBrbUFsdk5xQUZUa1NZNHpsQVNyRENMWFJW?=
 =?utf-8?B?a1lZOWhkVmh0cTlJUFEyQzhZL2dOU0kvbXZaQnlBM25xWDdBZjRCbldzRlJD?=
 =?utf-8?B?amJIUHlieU9nV2lmRGpJQmZOMEZXVEd6MnhRa2FNMVA2TmpDOEFHRU83WU1r?=
 =?utf-8?B?QVhQb2d1bisweGFvWHZ0Rm5MSlFLUjNoSW1WWGhwd1BWaUJtazFTMFFtNHh6?=
 =?utf-8?B?RVo5UlZRT0ZjT2lXSm9tZW13L2QwbFA4bk9Qa0pPQ3JDdm42UElIWjZlUGU2?=
 =?utf-8?B?VHgzdk9vOHVFMnFXamMxNFl6N1k5dUp2cTRuZnBtZG1zcVpLSUJyVWtiYXdO?=
 =?utf-8?B?NTBhdFFSUFN5Y080Vll4c29jV2dsaFVzSWlySzdOd2JJYkM3bCtrUENZNENT?=
 =?utf-8?B?NkRQVE5jSUhhaDh6bVFTdVJqd3NOYzRvdTFSaW5leUM2dk8yUUoxRzBLNGJF?=
 =?utf-8?B?dy9Eb1V4ZENpV0sxUTZEQUkwM2MwYlJJWjBHSldEV2cyZkxXMW5BYi9YbXdm?=
 =?utf-8?B?M2trYjRxdmZNcWx2SVNJSDgxUWxYMmdKWnp2V3lYUitvK1NLZ1pLRWpOU0J5?=
 =?utf-8?B?bWlGb1RhM2Y2STBqUnNPbkJRaXphR2RPanBNbFp2M1RJcjAxdjhTY0lHZ1c4?=
 =?utf-8?B?bWVJVmN1cnZSOWRpNzdOdDVra2c3c3pWY1p4N0RqZ2NrVUtVbEZBSTE5V0tM?=
 =?utf-8?B?aElTOUIrRXFqbVFRSVRVNlZneDM1NGxCZ0w4V1pia3UweCs5MlEwMkxmMXJH?=
 =?utf-8?B?aytSVitmWUd6VHFqbTFOV2pNYitvSUd4MzBZa2h2ZEoyaktQb0x1L1lPSStm?=
 =?utf-8?B?MFhFYjQ2c0Fjb0RBNDFjcGc0QXk3dHNQNEdjRW12c1VmVjNPUWRZa1puV3Fj?=
 =?utf-8?B?aFdaTko4Y3kxKzFTK0ZQQmlGUURjcUJ6NXFJWkpyekR6ek1YMW9xNW9zSEtI?=
 =?utf-8?B?SE1JTjBSOGVTVDdHcU5tSjRZUjJaMjFvc1IxbHAvc0ZZTWtoVy9YekhSbTk1?=
 =?utf-8?B?elJmRzNJcFZRY2NnbWtOSHBJdzZBekVNZER4V3d4ekI2bGNoOUhUdXZEMEJm?=
 =?utf-8?B?RXZqNFc4dnIrSGdDOTlWMkFYY1lYZkY2cmNWZUZqUVlsRzBCNUlZUjE3Mk8w?=
 =?utf-8?B?TTlJUENrdDVUbmxPbjBjSDJuTXpIZG1CcWRIVXQ2eDE4bmp2a24wekVqc1Vw?=
 =?utf-8?B?ZlpPS0E2cXFtYTZUUGVWUWRNcnBTOFMzTnorQmgxM2pDZTQzeUVJSWtsNGlz?=
 =?utf-8?Q?R8GLdIdPB6akyP28mZHS3NLfh?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d4b130b-6812-491c-76c0-08da59dc0be0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 14:31:25.4533
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1t7YLydCRX6U0iylE34ANYL60fjy2amWjkyW+kWhPeiXzQfq7WMBNc+xXF2OEsUlFfnrnhL+B/xBFXjc0FX1LQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5075

On 29.06.2022 13:09, Jan Beulich wrote:
> On 29.06.2022 12:43, Anthony PERARD wrote:
>> On Thu, Jun 23, 2022 at 09:23:27AM +0200, Jan Beulich wrote:
>>> On 13.04.2022 13:21, Jane Malalane wrote:
>>>> Jane Malalane (2):
>>>>   xen+tools: Report Interrupt Controller Virtualization capabilities on
>>>>     x86
>>>>   x86/xen: Allow per-domain usage of hardware virtualized APIC
>>>>
>>>>  docs/man/xl.cfg.5.pod.in              | 15 ++++++++++++++
>>>>  docs/man/xl.conf.5.pod.in             | 12 +++++++++++
>>>>  tools/golang/xenlight/helpers.gen.go  | 16 ++++++++++++++
>>>>  tools/golang/xenlight/types.gen.go    |  4 ++++
>>>>  tools/include/libxl.h                 | 14 +++++++++++++
>>>>  tools/libs/light/libxl.c              |  3 +++
>>>>  tools/libs/light/libxl_arch.h         |  9 ++++++--
>>>>  tools/libs/light/libxl_arm.c          | 14 ++++++++++---
>>>>  tools/libs/light/libxl_create.c       | 22 ++++++++++++--------
>>>>  tools/libs/light/libxl_types.idl      |  4 ++++
>>>>  tools/libs/light/libxl_x86.c          | 39 +++++++++++++++++++++++++++++++++--
>>>>  tools/ocaml/libs/xc/xenctrl.ml        |  7 +++++++
>>>>  tools/ocaml/libs/xc/xenctrl.mli       |  7 +++++++
>>>>  tools/ocaml/libs/xc/xenctrl_stubs.c   | 17 ++++++++++++---
>>>>  tools/xl/xl.c                         |  8 +++++++
>>>>  tools/xl/xl.h                         |  2 ++
>>>>  tools/xl/xl_info.c                    |  6 ++++--
>>>>  tools/xl/xl_parse.c                   | 19 +++++++++++++++++
>>>>  xen/arch/x86/domain.c                 | 29 +++++++++++++++++++++++++-
>>>>  xen/arch/x86/hvm/hvm.c                |  3 +++
>>>>  xen/arch/x86/hvm/vmx/vmcs.c           | 11 ++++++++++
>>>>  xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++--------
>>>>  xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
>>>>  xen/arch/x86/include/asm/hvm/hvm.h    | 10 +++++++++
>>>>  xen/arch/x86/sysctl.c                 |  4 ++++
>>>>  xen/arch/x86/traps.c                  |  5 +++--
>>>>  xen/include/public/arch-x86/xen.h     |  5 +++++
>>>>  xen/include/public/sysctl.h           | 11 +++++++++-
>>>>  28 files changed, 281 insertions(+), 34 deletions(-)
>>>>
>>>
>>> Just FYI: It's been over two months that v10 has been pending. There
>>> are still missing acks. You may want to ping the respective maintainers
>>> for this to make progress.
>>
>> Are you looking for an ack for the "docs/man" changes? If so, I guess
>> I'll have to make it more explicit next time that a review for "tools"
>> also means a review of the changes in their respective man pages.
> 
> No, the docs changes (being clearly tools docs) are fine.
> 
>> Or are you looking for an ack for the "golang" changes? Those changes are
>> automatically generated by a tool already in our repository.
> 
> Indeed it's Go (where I think an ack is still required, no matter
> if the changes are generated ones [which I wasn't even aware of, I
> have to confess]) and ...
> 
>> Or is it an "ocaml" ack for the first patch? Unfortunately, the
>> maintainers haven't been CCed, I guess that could be an issue.
> 
> ... OCaml which I was after.

Oh, and actually for at least patch 2 a VMX ack is also needed. For patch 1
I've sent a separate reply to the resent v10.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 14:50:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 14:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357914.586788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Z0t-00071Z-Jl; Wed, 29 Jun 2022 14:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357914.586788; Wed, 29 Jun 2022 14:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6Z0t-00071S-GS; Wed, 29 Jun 2022 14:49:55 +0000
Received: by outflank-mailman (input) for mailman id 357914;
 Wed, 29 Jun 2022 14:49:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MoWB=XE=citrix.com=prvs=17238af8c=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1o6Z0s-00071M-0d
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 14:49:54 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ba11f5d8-f7ba-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 16:49:51 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 10:49:43 -0400
Received: from MW4PR03MB6539.namprd03.prod.outlook.com (2603:10b6:303:126::9)
 by BN9PR03MB6202.namprd03.prod.outlook.com (2603:10b6:408:11f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 14:49:41 +0000
Received: from MW4PR03MB6539.namprd03.prod.outlook.com
 ([fe80::e0da:b315:76f5:37f9]) by MW4PR03MB6539.namprd03.prod.outlook.com
 ([fe80::e0da:b315:76f5:37f9%7]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 14:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba11f5d8-f7ba-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656514191;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=m5mzK4t9lf3CHP3TXYfn6EiZjrD8031eyiDfHP4IN0M=;
  b=ZOdp3VqkAC9SvipUsjbTroY2W7BYQhi9vO/Gd58+/3SnZpfqo5RuhVjc
   I8AMhf4A/3oRB/SLTUfmXcv/dQt2SsEe3BpTfPwq34RkFw7mAKUk31F88
   +CMVF0Zfp6eObGw+j7kVjJCBgcWu1LtTTQI+rby5NyE0AYHYSnEz2bA7E
   E=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 74536291
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208,217";a="74536291"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=m5mzK4t9lf3CHP3TXYfn6EiZjrD8031eyiDfHP4IN0M=;
 b=kDSwWoLUt6C450QrYH9qY4Ni+iuzv2Cw2TQjEza3V2ozNO8q635Zqpfbrbye1SXy7zU48U/e6Ydfql6yG398x6W+8LQfv3q9ux//i9A/ugidqP1D1TKQi14I05nR6nfpiLkjHYz0lToOW6ClSCFLsvuYpjTUR1IQIjqPLYWbrVg=
From: Christian Lindig <christian.lindig@citrix.com>
To: Jane Malalane <Jane.Malalane@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, David Scott <dave@recoil.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: Re: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Topic: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Index: AQHYi8AZUiaK8c/2rk6L3U58j6bHXK1md2qA
Date: Wed, 29 Jun 2022 14:49:40 +0000
Message-ID: <29B192D0-DD84-45D8-9D1E-83C004F015DA@citrix.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-2-jane.malalane@citrix.com>
In-Reply-To: <20220629135534.19923-2-jane.malalane@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.7)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 50252ab9-5959-4008-2bfd-08da59de98f6
x-ms-traffictypediagnostic: BN9PR03MB6202:EE_
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: multipart/alternative;
	boundary="_000_29B192D0DD8445D89D1E83C004F015DAcitrixcom_"
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MW4PR03MB6539.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 50252ab9-5959-4008-2bfd-08da59de98f6
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2022 14:49:40.9915
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 9f7tELw32oGPHdmES/9ZDGArl5pJegT5gN1egdVhA0grvEjB5n/yq3LH790dN7YC+UXay5Y0lWZK1XBEbIfxVmUnXx0fT91IBAAqAknznTA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6202

--_000_29B192D0DD8445D89D1E83C004F015DAcitrixcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit


On 29 Jun 2022, at 14:55, Jane Malalane <jane.malalane@citrix.com<mailto:jane.malalane@citrix.com>> wrote:

+ physinfo = caml_alloc_tuple(11);
Store_field(physinfo, 0, Val_int(c_physinfo.threads_per_core));
Store_field(physinfo, 1, Val_int(c_physinfo.cores_per_socket));
Store_field(physinfo, 2, Val_int(c_physinfo.nr_cpus));
@@ -749,6 +749,17 @@ CAMLprim value stub_xc_physinfo(value xch)
Store_field(physinfo, 8, cap_list);
Store_field(physinfo, 9, Val_int(c_physinfo.max_cpu_id + 1));

+#if defined(__i386__) || defined(__x86_64__)
+ /*
+  * arch_capabilities: physinfo_arch_cap_flag list;
+  */
+ arch_cap_list = c_bitmap_to_ocaml_list
+ /* ! physinfo_arch_cap_flag CAP_ none */
+ /* ! XEN_SYSCTL_PHYSCAP_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
+ (c_physinfo.arch_capabilities);
+ Store_field(physinfo, 10, arch_cap_list);
+#endif
+
CAMLreturn(physinfo);
}

Is this extending the tuple but only defining a value on x86? Does this not lead to undefined fields on other architectures?

type physinfo = {
  threads_per_core : int;
  cores_per_socket : int;
@@ -124,6 +128,7 @@ type physinfo = {
  scrub_pages      : nativeint;
  capabilities     : physinfo_cap_flag list;
  max_nr_cpus      : int; (** compile-time max possible number of nr_cpus *)
+  arch_capabilities : physinfo_arch_cap_flag list;
}
type version = { major : int; minor : int; extra : string

Here the record is extended but it looks to me like the new field is kept undefined on non-x86 architectures. If the field still has a defined content, it would be good to explain why.

— C

YWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQt
dHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsg
LXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZs
b2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPlN0b3JlX2Zp
ZWxkKHBoeXNpbmZvLA0KIDksIFZhbF9pbnQoY19waHlzaW5mby5tYXhfY3B1X2lkICsgMSkpOzwv
c3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBI
ZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlh
bnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9y
bWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06
IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRl
eHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0K
PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0
aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNh
cHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsg
dGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25l
OyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0
cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFu
IHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNh
OyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6
IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4
dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3
aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9r
ZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5
OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPisjaWYNCiBkZWZpbmVkKF9faTM4Nl9fKSB8
fCBkZWZpbmVkKF9feDg2XzY0X18pPC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigw
LCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0
eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3Jt
YWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVu
dDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1z
cGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0
aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAw
LCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxl
OiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7
IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDog
MHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFj
aW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9u
OiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0i
Ij4rPC9zcGFuPjxzcGFuIGNsYXNzPSJBcHBsZS10YWItc3BhbiIgc3R5bGU9ImNhcmV0LWNvbG9y
OiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsg
Zm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdo
dDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4
dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBwcmU7IHdv
cmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVj
b3JhdGlvbjogbm9uZTsiPg0KPC9zcGFuPjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAs
IDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5
bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1h
bDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50
OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNw
YWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRp
b246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNz
PSIiPi8qPC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1m
YW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZv
bnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFj
aW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRy
YW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13
ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xh
c3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1p
bHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQt
dmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5n
OiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5z
Zm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJr
aXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9hdDog
bm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj4rPC9zcGFuPjxzcGFu
IGNsYXNzPSJBcHBsZS10YWItc3BhbiIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0
ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsg
dGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBwcmU7IHdvcmQtc3BhY2luZzogMHB4
OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsi
Pg0KPC9zcGFuPjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZh
bWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9u
dC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNp
bmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJh
bnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdl
YmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0
OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPjxzcGFuIGNsYXNz
PSJBcHBsZS1jb252ZXJ0ZWQtc3BhY2UiPiZuYnNwOzwvc3Bhbj4qDQogYXJjaF9jYXBhYmlsaXRp
ZXM6IHBoeXNpbmZvX2FyY2hfY2FwX2ZsYWcgbGlzdDs8L3NwYW4+PGJyIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7
IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9y
bWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1jb2xv
cjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7
IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWln
aHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRl
eHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFs
OyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0
LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFu
dDsiIGNsYXNzPSIiPis8L3NwYW4+PHNwYW4gY2xhc3M9IkFwcGxlLXRhYi1zcGFuIiBzdHlsZT0i
Y2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246
IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3Bh
Y2U6IHByZTsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBw
eDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyI+DQo8L3NwYW4+PHNwYW4gc3R5bGU9ImNhcmV0LWNv
bG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJw
eDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdl
aWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsg
dGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3Jt
YWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRl
eHQtZGVjb3JhdGlvbjogbm9uZTsgZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0
YW50OyIgY2xhc3M9IiI+PHNwYW4gY2xhc3M9IkFwcGxlLWNvbnZlcnRlZC1zcGFjZSI+Jm5ic3A7
PC9zcGFuPiovPC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9u
dC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7
IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1z
cGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0
LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7
IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIg
Y2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1m
YW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZv
bnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFj
aW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRy
YW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13
ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9h
dDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj4rPC9zcGFuPjxz
cGFuIGNsYXNzPSJBcHBsZS10YWItc3BhbiIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwg
MCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTog
bm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBs
ZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBw
eDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBwcmU7IHdvcmQtc3BhY2luZzog
MHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9u
ZTsiPg0KPC9zcGFuPjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250
LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsg
Zm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNw
YWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQt
dHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsg
LXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZs
b2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPmFyY2hfY2Fw
X2xpc3QNCiA9IGNfYml0bWFwX3RvX29jYW1sX2xpc3Q8L3NwYW4+PGJyIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7
IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9y
bWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1jb2xv
cjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7
IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWln
aHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRl
eHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFs
OyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0
LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFu
dDsiIGNsYXNzPSIiPis8L3NwYW4+PHNwYW4gY2xhc3M9IkFwcGxlLXRhYi1zcGFuIiBzdHlsZT0i
Y2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246
IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3Bh
Y2U6IHByZTsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBw
eDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyI+DQo8L3NwYW4+PHNwYW4gY2xhc3M9IkFwcGxlLXRh
Yi1zcGFuIiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhl
bHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFu
dC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3Jt
YWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTog
bm9uZTsgd2hpdGUtc3BhY2U6IHByZTsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1z
dHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyI+PC9zcGFuPjxzcGFuIHN0
eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBm
b250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5v
cm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1h
bGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0
ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13
aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBp
bmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPi8qDQogISBwaHlzaW5mb19hcmNoX2NhcF9mbGFn
IENBUF8gbm9uZSAqLzwvc3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0
ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsg
dGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzog
MHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9u
ZTsiIGNsYXNzPSIiPg0KPHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZv
bnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFs
OyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXIt
c3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4
dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4
OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsg
ZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIgY2xhc3M9IiI+Kzwvc3Bh
bj48c3BhbiBjbGFzcz0iQXBwbGUtdGFiLXNwYW4iIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAs
IDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5
bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1h
bDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50
OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogcHJlOyB3b3JkLXNwYWNp
bmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246
IG5vbmU7Ij4NCjwvc3Bhbj48c3BhbiBjbGFzcz0iQXBwbGUtdGFiLXNwYW4iIHN0eWxlPSJjYXJl
dC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6
IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9u
dC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3Rh
cnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTog
cHJlOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7Ij48L3NwYW4+PHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiBy
Z2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9u
dC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDog
bm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1p
bmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdv
cmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVj
b3JhdGlvbjogbm9uZTsgZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIg
Y2xhc3M9IiI+LyoNCiAhIFhFTl9TWVNDVExfUEhZU0NBUF8gWEVOX1NZU0NUTF9QSFlTQ0FQX1g4
Nl9NQVggbWF4ICovPC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsg
Zm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3Jt
YWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRl
ci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0
ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAw
cHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25l
OyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9u
dC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7
IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1z
cGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0
LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7
IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBm
bG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj4rPC9zcGFu
PjxzcGFuIGNsYXNzPSJBcHBsZS10YWItc3BhbiIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwg
MCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHls
ZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFs
OyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6
IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBwcmU7IHdvcmQtc3BhY2lu
ZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjog
bm9uZTsiPg0KPC9zcGFuPjxzcGFuIGNsYXNzPSJBcHBsZS10YWItc3BhbiIgc3R5bGU9ImNhcmV0
LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTog
MTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250
LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFy
dDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBw
cmU7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRl
eHQtZGVjb3JhdGlvbjogbm9uZTsiPjwvc3Bhbj48c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJn
YigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250
LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBu
b3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWlu
ZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29y
ZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNv
cmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBj
bGFzcz0iIj4oY19waHlzaW5mby5hcmNoX2NhcGFiaWxpdGllcyk7PC9zcGFuPjxiciBzdHlsZT0i
Y2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246
IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3Bh
Y2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6
IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2Fy
ZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXpl
OiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZv
bnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0
YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6
IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBw
eDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFp
bXBvcnRhbnQ7IiBjbGFzcz0iIj4rPC9zcGFuPjxzcGFuIGNsYXNzPSJBcHBsZS10YWItc3BhbiIg
c3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7
IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczog
bm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0
LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdo
aXRlLXNwYWNlOiBwcmU7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdp
ZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiPg0KPC9zcGFuPjxzcGFuIHN0eWxlPSJj
YXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNp
emU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsg
Zm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjog
c3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFj
ZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDog
MHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUg
IWltcG9ydGFudDsiIGNsYXNzPSIiPlN0b3JlX2ZpZWxkKHBoeXNpbmZvLA0KIDEwLCBhcmNoX2Nh
cF9saXN0KTs8L3NwYW4+PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250
LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsg
Zm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNw
YWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQt
dHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsg
LXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBj
bGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZh
bWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9u
dC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNp
bmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJh
bnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdl
YmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0
OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPisjZW5kaWY8L3Nw
YW4+PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVs
dmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50
LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1h
bDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBu
b25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0
LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxz
cGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0
aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNh
cHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsg
dGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25l
OyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0
cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNw
bGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPis8L3NwYW4+PGJyIHN0eWxlPSJjYXJl
dC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6
IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9u
dC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3Rh
cnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTog
bm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4
OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIGNsYXNzPSJBcHBsZS10
YWItc3BhbiIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBI
ZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlh
bnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9y
bWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06
IG5vbmU7IHdoaXRlLXNwYWNlOiBwcmU7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQt
c3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiPjwvc3Bhbj48c3BhbiBz
dHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsg
Zm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBu
b3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQt
YWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hp
dGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Ut
d2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTog
aW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj5DQU1McmV0dXJuKHBoeXNpbmZvKTs8L3NwYW4+
PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0
aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNh
cHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsg
dGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25l
OyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0
cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFu
IHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNh
OyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6
IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4
dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3
aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9r
ZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5
OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPn08L3NwYW4+PC9kaXY+DQo8L2Jsb2NrcXVv
dGU+DQo8L2Rpdj4NCjxiciBjbGFzcz0iIj4NCjxkaXYgY2xhc3M9IiI+SSB0aGlzIGV4dGVuZGlu
ZyB0aGUgdHVwbGUgYnV0IG9ubHkgZGVmaW5pbmcgYSB2YWx1ZSBvbiB4ODY/IERvZXMgdGhpcyBu
b3QgbGVhZCB0byB1bmRlZmluZWQgZmllbGRzIG9uIG90aGVyIGFyY2hpdGVjdHVyZXM/Jm5ic3A7
PC9kaXY+DQo8ZGl2IGNsYXNzPSIiPjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0KPGRpdiBjbGFzcz0i
Ij4NCjxibG9ja3F1b3RlIHR5cGU9ImNpdGUiIGNsYXNzPSIiPnR5cGUgcGh5c2luZm8gPSB7PGJy
IGNsYXNzPSIiPg0KJm5ic3A7Jm5ic3A7dGhyZWFkc19wZXJfY29yZSA6IGludDs8YnIgY2xhc3M9
IiI+DQombmJzcDsmbmJzcDtjb3Jlc19wZXJfc29ja2V0IDogaW50OzxiciBjbGFzcz0iIj4NCkBA
IC0xMjQsNiArMTI4LDcgQEAgdHlwZSBwaHlzaW5mbyA9IHs8YnIgY2xhc3M9IiI+DQombmJzcDsm
bmJzcDtzY3J1Yl9wYWdlcyAmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDs6IG5hdGl2ZWlu
dDs8YnIgY2xhc3M9IiI+DQombmJzcDsmbmJzcDtjYXBhYmlsaXRpZXMgJm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7OiBwaHlzaW5mb19jYXBfZmxhZyBsaXN0OzxiciBjbGFzcz0iIj4NCiZuYnNwOyZu
YnNwO21heF9ucl9jcHVzICZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOzogaW50OyAoKiog
Y29tcGlsZS10aW1lIG1heCBwb3NzaWJsZSBudW1iZXIgb2YgbnJfY3B1cyAqKTxiciBjbGFzcz0i
Ij4NCisgJm5ic3A7YXJjaF9jYXBhYmlsaXRpZXMgOiBwaHlzaW5mb19hcmNoX2NhcF9mbGFnIGxp
c3Q7PGJyIGNsYXNzPSIiPg0KfTxiciBjbGFzcz0iIj4NCnR5cGUgdmVyc2lvbiA9IHsgbWFqb3Ig
OiBpbnQ7IG1pbm9yIDogaW50OyBleHRyYSA6IHN0cmluZzwvYmxvY2txdW90ZT4NCjxiciBjbGFz
cz0iIj4NCjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj5IZXJlIHRoZSByZWNvcmQgaXMgZXh0ZW5kZWQg
YnV0IGl0IGxvb2tzIHRvIG1lIGxpa2UgdGhlIG5ldyBmaWVsZCBpcyBrZXB0IHVuZGVmaW5lZCBv
biBub24geDg2IGFyY2hpdGVjdHVyZXMuIElmIHRoZSBmaWVsZCBzdGlsbCBoYXMgYSBkZWZpbmVk
IGNvbnRlbnQsIGl0IHdvdWxkIGJlIGdvb2QgdG8gZXhwbGFpbiB3aHkuPC9kaXY+DQo8ZGl2IGNs
YXNzPSIiPjxiciBjbGFzcz0iIj4NCjwvZGl2Pg0KPGRpdiBjbGFzcz0iIj7igJQgQzwvZGl2Pg0K
PC9ib2R5Pg0KPC9odG1sPg0K
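As an aside on the review concern, the pattern being questioned (a record slot stored only under an x86 guard) can be sketched in plain C. Everything below is an illustrative assumption, not the real OCaml bindings: the struct, the function, and the placeholder values are invented. The point is the explicit `#else` branch that stores a neutral value, the analogue of OCaml's empty list, so the field is defined on every architecture:

```c
#include <stddef.h>

/* Simplified model of the concern: if the stub only assigns the new
 * arch_capabilities slot inside an x86-only #if, other architectures
 * would leave it unset.  The struct below stands in for the OCaml
 * record; arch_capabilities stands in for the cap-flag list. */
struct physinfo_model {
    int nr_cpus;
    const int *arch_capabilities;
};

static void fill_physinfo(struct physinfo_model *p, int nr_cpus)
{
    p->nr_cpus = nr_cpus;
#if defined(__i386__) || defined(__x86_64__)
    static const int x86_caps[] = { 1, 2 };   /* pretend decoded flags */
    p->arch_capabilities = x86_caps;
#else
    p->arch_capabilities = NULL;   /* explicit "empty list" elsewhere */
#endif
}
```

With the `#else` branch present, a consumer on any architecture sees a well-defined (possibly empty) value instead of whatever happened to be in the slot.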

--_000_29B192D0DD8445D89D1E83C004F015DAcitrixcom_--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 15:02:42 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 15:02:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357922.586799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZD0-00016J-Ri; Wed, 29 Jun 2022 15:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357922.586799; Wed, 29 Jun 2022 15:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZD0-00016C-OU; Wed, 29 Jun 2022 15:02:26 +0000
Received: by outflank-mailman (input) for mailman id 357922;
 Wed, 29 Jun 2022 15:02:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6ZCz-00015n-22
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 15:02:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6ZCy-0005Fd-Hi; Wed, 29 Jun 2022 15:02:24 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6ZCy-0000fQ-8n; Wed, 29 Jun 2022 15:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=lywU+qnXAnogvgk9qTspWsxzTHhKuRc8BOUBuQExzps=; b=RpKT3YElscYWah6+nFgyzkKAIq
	EeBT4wTb5jFmXKpwHgpGwgtJDNg8hKmHFM0ujNqioqRZAhqoJG9dinLdD2AEC79MzpIN+pvs/+7Uz
	jBxJ21ZBVzrgLHsK2bs1mQl0dWmBpZB2wLVJAbsaUkJqDnwEEsg5ERjzFWRbFa4xGCQo=;
Message-ID: <1e53136a-0675-6a0f-e06c-6ffb390d9399@xen.org>
Date: Wed, 29 Jun 2022 16:02:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH] xen/arm: smmu-v3: Fix MISRA C 2012 Rule 1.3 violations
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, xenia <burzalodowa@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220628150851.8627-1-burzalodowa@gmail.com>
 <BF0AB23A-DB4B-4DA3-9E4C-C15FAD360247@arm.com>
 <1b28f8b2-2153-61f6-515f-b2ed880f84b6@gmail.com>
 <D8C0B798-C736-45CC-A723-1535131F1323@arm.com>
 <E0AD2430-78DB-454B-9D76-EB2E24E80E1F@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <E0AD2430-78DB-454B-9D76-EB2E24E80E1F@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 29/06/2022 15:10, Bertrand Marquis wrote:
> Hi,

Hi Bertrand,

> In fact the patch was committed before we started this discussion as Rahul R-b was enough.

It was probably merged a bit too fast. When there are multiple 
maintainers responsible for the code, I tend to prefer to wait a bit 
just in case there are extra comments.

> But I would still be interested to have other maintainers view on this.

Rahul and you are the official maintainers for that code. I think 
Stefano and I are only CCed because we maintain the Arm code 
(get_maintainers.pl doesn't automatically remove maintainers from the 
upper directory).

> 
>> On 29 Jun 2022, at 10:03, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>>
>> Hi Xenia,
>>
>>> On 29 Jun 2022, at 09:55, xenia <burzalodowa@gmail.com> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On 6/29/22 10:24, Bertrand Marquis wrote:
>>>> Hi Xenia,
>>>>
>>>>> On 28 Jun 2022, at 16:08, Xenia Ragiadakou <burzalodowa@gmail.com> wrote:
>>>>>
>>>>> The expression 1 << 31 produces undefined behaviour because the type of integer
>>>>> constant 1 is (signed) int and the result of shifting 1 by 31 bits is not
>>>>> representable in the (signed) int type.
>>>>> Change the type of 1 to unsigned int by adding the U suffix.
>>>>>
>>>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>>> ---
>>>>> Q_OVERFLOW_FLAG has already been fixed in upstream kernel code.
>>>>> For GBPA_UPDATE I will submit a patch.
>>>>>
>>>>> xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>>>> index 1e857f915a..f2562acc38 100644
>>>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>>>> @@ -338,7 +338,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>>>> #define CR2_E2H				(1 << 0)
>>>>>
>>>>> #define ARM_SMMU_GBPA			0x44
>>>>> -#define GBPA_UPDATE			(1 << 31)
>>>>> +#define GBPA_UPDATE			(1U << 31)
>>>>> #define GBPA_ABORT			(1 << 20)
>>>>>
>>>>> #define ARM_SMMU_IRQ_CTRL		0x50
>>>>> @@ -410,7 +410,7 @@ static int platform_get_irq_byname_optional(struct device *dev,
>>>>>
>>>>> #define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
>>>>> #define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
>>>> Could also make sense to fix those 2 to be coherent.
>>> According to the spec, the maximum value that max_n_shift can take is 19.
>>> Hence, 1 << (llq)->max_n_shift cannot produce undefined behavior.
>>
>> I agree with that but my preference would be to not rely on deep analysis on the code for those kind of cases and use 1U whenever possible.
>>
>> What other maintainers think on this ?

In general, I prefer if we use 1U (or 1UL/1ULL) so it is clearer that 
the code is correct and consistent (I always find it odd when we use 
1U << 31 but 1 << 20).

In this case, even if you use 1U, it is not really clear whether 
max_n_shift can be greater than 31. That said, I would not suggest 
using ULL unless it is strictly necessary.

>>
>>>
>>> Personally, I have no problem to submit another patch that adds U/UL suffixes to all shifted integer constants in the file :) but ...
>>> It seems that this driver has been ported from linux and this file still uses linux coding style, probably because deviations will reduce its maintainability.
>>> Adding a U suffix to those two might be considered an unjustified deviation.

This can be solved by sending a patch to Linux as well. This will also 
help Linux reduce the number of MISRA violations (I guess SMMUv3 will 
be part of the safety certification scope).

>>
>> At this stage the SMMU code is starting to deviate a lot from Linux so it will not be possible to update it easily to sync with latest linux code.
>> And the decision should be are we fixing it or should we fully skip this file saying that it is coming from linux and should not be fixed.
>> We started to have a discussion during the FuSa meeting on this but we need to come up with a conclusion per file.
>>
>> On this one I would say keeping it with linux code style and near from the linux one is not needed.

I will address both points separately.

In general, we don't want to mix coding styles within a file. As the 
file started with the Linux style, we should keep it that way. 
Otherwise, we will end up with a mix and it will be difficult for 
contributors to know how to write new code. That said, I would not 
necessarily consider using "1 << ..." part of the coding style we want 
to keep.

This brings me to the second part (i.e. keeping the code near Linux). 
Linux has a much larger user/developer base than us. Therefore, there 
are more chances for them to find bugs and fix them. By diverging, we 
could end up adding new bugs and also increasing the maintenance 
burden.

I have tried both ways with the SMMUv{1,2} driver. The first attempt 
was to fully adapt the code to Xen (coding style...). But this made it 
difficult to keep track of bugs. A few months later, we removed it 
completely and then tried to stay as close as possible to Linux. Since 
then, Linux has reworked the IOMMU subsystem, so the port needs to be 
adapted. It is more difficult, but likely less so than if we had our 
own port.

So overall, I think it was beneficial for our version of the SMMU{v1, 
v2} drivers to be close to Linux. I haven't looked closely enough at 
the SMMUv3 driver to state whether we should stay close or not.

>> Rahul, Julien, Stefano: what do you think here ?

Rahul and you are the maintainers. I can share my preference/experience, 
but I think this is your call on how you want to maintain the driver.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 15:16:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 15:16:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357928.586810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZQk-0002mM-3r; Wed, 29 Jun 2022 15:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357928.586810; Wed, 29 Jun 2022 15:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZQk-0002mF-01; Wed, 29 Jun 2022 15:16:38 +0000
Received: by outflank-mailman (input) for mailman id 357928;
 Wed, 29 Jun 2022 15:16:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gFtW=XE=citrix.com=prvs=17228c8f8=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1o6ZQj-0002m9-1P
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 15:16:37 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75f979e8-f7be-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 17:16:35 +0200 (CEST)
Received: from mail-bn8nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 11:16:32 -0400
Received: from DM5PR03MB3386.namprd03.prod.outlook.com (2603:10b6:4:46::36) by
 SJ0PR03MB6615.namprd03.prod.outlook.com (2603:10b6:a03:388::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 15:16:30 +0000
Received: from DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::298d:4044:f235:c782]) by DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::298d:4044:f235:c782%6]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 15:16:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75f979e8-f7be-11ec-bd2d-47488cf2e6aa
Content-Type: multipart/mixed;
	boundary="_000_3e4443ecde14b02b99f2e6b63b06ca41citrixcom_"
From: Jane Malalane <Jane.Malalane@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Kevin Tian <kevin.tian@intel.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Topic: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Index: AQHYi8AZy145ktbIbk+rkdBDuBblea1mcQoAgAAN3AA=
Date: Wed, 29 Jun 2022 15:16:30 +0000
Message-ID: <3e4443ec-de14-b02b-99f2-e6b63b06ca41@citrix.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-2-jane.malalane@citrix.com>
 <efb34738-49a9-fa4f-900a-fd530ff835ce@suse.com>
In-Reply-To: <efb34738-49a9-fa4f-900a-fd530ff835ce@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator: <3e4443ec-de14-b02b-99f2-e6b63b06ca41@citrix.com>
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM5PR03MB3386.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5bdf5a77-9b24-4012-b7a1-08da59e2581b
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2022 15:16:30.1413
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fCZj2rtBWWLVVYc6hsNm2oH/BIw1utTpiowX9x3zVagX6RnCxGNQj6GmiMD7Tw7txardQAScrxddfcMqODbE3ONuNYoyg85S4EdECvPu27w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6615

--_000_3e4443ecde14b02b99f2e6b63b06ca41citrixcom_
Content-Type: text/plain; charset="utf-8"
Content-ID: <DE96ECC9425C1C4181FFC0C4C78185F9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit

On 29/06/2022 15:26, Jan Beulich wrote:
> On 29.06.2022 15:55, Jane Malalane wrote:
>> Add XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC and
>> XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC to report accelerated xAPIC and
>> x2APIC, on x86 hardware. This is so that xAPIC and x2APIC virtualization
>> can subsequently be enabled on a per-domain basis.
>> No such features are currently implemented on AMD hardware.
>>
>> HW assisted xAPIC virtualization will be reported if HW, at the
>> minimum, supports virtualize_apic_accesses as this feature alone means
>> that an access to the APIC page will cause an APIC-access VM exit. An
>> APIC-access VM exit provides a VMM with information about the access
>> causing the VM exit, unlike a regular EPT fault, thus simplifying some
>> internal handling.
>>
>> HW assisted x2APIC virtualization will be reported if HW supports
>> virtualize_x2apic_mode and, at least, either apic_reg_virt or
>> virtual_intr_delivery. This also means that
>> sysctl follows the conditionals in vmx_vlapic_msr_changed().
>>
>> For that purpose, also add an arch-specific "capabilities" parameter
>> to struct xen_sysctl_physinfo.
>>
>> Note that this interface is intended to be compatible with AMD so that
>> AVIC support can be introduced in a future patch. Unlike Intel that
>> has multiple controls for APIC Virtualization, AMD has one global
>> 'AVIC Enable' control bit, so fine-graining of APIC virtualization
>> control cannot be done on a common interface.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
>> Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Could you please clarify whether you did drop Kevin's R-b (which, a
> little unhelpfully, he provided in reply to v9 a week after you had
> posted v10) because of ...
> 
>> v10:
>>   * Make assisted_x{2}apic_available conditional upon _vmx_cpu_up()
> 
> ... this, requiring him to re-offer the tag? Until told otherwise I
> will assume so.

It wasn't intentional but yes, that is right. There was a change, albeit 
minor, in vmx from v9 to v10 so I do require Kevin Tian or Jun Nakajima 
to review it.

Thank you,

Jane.

LQBtAGkAYwByAG8AcwBvAGYAdAAtAGEAbgB0AGkAcwBwAGEAbQAAAAAAAQAAAA4AAABCAEMATAA6
ADAAOwAAAAAAHwAAgIYDAgAAAAAAwAAAAAAAAEYBAAAARAAAAHgALQBtAGkAYwByAG8AcwBvAGYA
dAAtAGEAbgB0AGkAcwBwAGEAbQAtAG0AZQBzAHMAYQBnAGUALQBpAG4AZgBvAAAAAQAAALIGAAAr
AFMANABjAGcAcwBiAEIARAAyAEIAcwArAE8AMwA4ADAANwBiAGIAUwBxAFEAOAB4AEkAdAB3AFAA
bQBNAGsALwB0AGoAcwBhADUAbwB0AHcANQBQAGIARgA1ADEAVgBDAE8AYgBSAEQAUABLACsAWABB
AGcAaQA4AEIAaQBoAHIAbwBsADYAMgBYAE0AUQAwAHkALwBnAFcAcQBnADcAVgBGAFYAQwBmAGMA
bgBTAFgATgB6AG4AMQBoAGgALwBtAGcAUwAxAFcAZQAvAGEAZgA3AE8AVABzAFAAQwBrAHMAVQBN
AEcAMgBsADMATQBRAGEAcQBzAGgAcAAxAHoAegBGAEcATwBtAHQAcQBSAGUAKwBxADAASAA3AEQA
agBZAE4AZgA4ADAAbQBDAFQASgB2AG4ATwBiADkAYQB0AEsAVQBPAGIAWABwAHkAcQBwAHEAawBw
AHoAVQBUAEsAcwBqADgAQwAyAGoASwB4AGEAMwBMADQAKwBrAEoARwBUADAAZwBrAEkAUABYAFkA
awBKAGEANwAvAFYAbgA3AE4AZQBJAHEAVgBvADQAagBUAEMAbQB6AHkAagA0AG8AUwB1AHYASgBT
AFoAWgA1AEsANwBDAEkAdgBQAEQAdQAyADQAOQBsAHoARgBxADkATABFAEIAQwBLADQASgBkADUA
dQBEAG0ANABxAFIAbABWAE0AMgB0AHQAbABzAGsATQBuAGYAdQB5ADIAYQBtAE0AMAA0ADcAegBp
ACsATwBiAHIAdQBOAHIAaQBoAEgALwA5AHEAbQBFAGEAVQBSAFEAQgB0AFMAcABsAGQASgBMAFUA
QQBGAFIAYwBYAHYAQwByAC8AdwBEAGUAeABDADAAZQBRAFIANgBPAFMANgBaAFYAdwBRADUAeABa
AFUANQBTAHMAagBsAEkARwAxADUANQBTAFIAaABXADAAVgB3AFkAMQBRAHAAbQBHAHUAZgBPAEoA
OQB0AC8AcQBhADgASABFAHUAWgB1AGkAUgBtADMAUAB5AGgAegBDAHoANgBOAEsAUwA3AGQAMQA4
AFkARAAwAHIAZABUAEoAdwA0AHEARABiAHoAbABFADEAUgBsADEAZgB4AFoAZQBlAEoAawBpAEwA
RQBRAFoAYgB4ADIAbwBVAFYAegBnAGsANQB5AFMAeAB1AFUAagBUADQAZQBMAGgAVgByAG8AVABt
AGsAdQBUAHgAbQA5AEUAWgByADMAVgB1ADYAcgBCAGEAQgB4ADEASABCAEQAKwAwAGMAbABlAEkA
cwBDAEMASgA1ADYAWABTAEsAMAB3ADUAaABqAFEAOAAzAE8AVQA1AGgANgBJADQAQgBaAGwAcQA3
AFUAWQAwADIAawByAC8AWABQAFoAMgBBAFMAVABWAFQARABmAHYAWABzAFoAWABMAEoAVQBjAHMA
LwBVADMAUABqAEcAUwBnAG0AbgBPAGkAUABEAFAAYQA0AFkAVwBjAGUAawA0AEMAOABuAFgARAB4
AEsASQBzADcAagBpAGsATQBqAE8AaQA4AHAARgBTAEIAUgB2AEMASQBNAEMAdQBBAG4AbABRAGEA
UwBkAFUAbgBDAGoARwBEAEEAaAB6AHIAOABQAC8AWgA0AGQAeABKAHoALwBjADYAVwBHADgAaABv
AEMAagAwAHMARwBGAEwAUAA5AGEAeQBMAFoAZABDAG4AYQBWAGsAcQArAFoAeQAxAGMAOAA3AEgA
WQBpADIAbwB0AG0AdgBwAGQAbABnAE8ASwBBAEMANAByADYAbAB5AEoARwA0AHUAMgBxADkARAB0
AGsANABzAHIAcgBuAE8AbABsADkAeAB4AFQAcABJAEkAUAAxAFQARgA2ADcAbwBHAGYAeAB2ADIA
dQBUAFIAWQA1AFIAdQBuACsAaABiAFAANwBoAFoAVgBCAEYAOQA5AGkAaQBlADkAZQBOAG0ASwBn
AEkAKwBzAGwAOABzAHUAMgBjAGQASAByAHYAQgBNADMAVwBqAGYAWQBhAHUAMQBtAGUAMwBiAGMA
NQBXAHoAKwBuAG0AZwByAHYAWgBWAEEASABMAGIAeAB0ACsAdABGAGQARQBZAEgASQBRAE0ASgBY
AEUAcQBJAGUAeABtAHQAUwAvAEMAWAB1AFIANgAxADIASwBJADgAUgBEAGoAZwB1AGoATwB0AHQA
awBRAE0AVwAxAEQAYgBBAFkAVwBNAGkAUgBEAEYANAA4AFAATABjADYAYgBSAHoARABkAFEAPQA9
AAAAAAAfAACAhgMCAAAAAADAAAAAAAAARgEAAAA4AAAAeAAtAGYAbwByAGUAZgByAG8AbgB0AC0A
YQBuAHQAaQBzAHAAYQBtAC0AcgBlAHAAbwByAHQAAAABAAAAjgQAAEMASQBQADoAMgA1ADUALgAy
ADUANQAuADIANQA1AC4AMgA1ADUAOwBDAFQAUgBZADoAOwBMAEEATgBHADoAZQBuADsAUwBDAEwA
OgAxADsAUwBSAFYAOgA7AEkAUABWADoATgBMAEkAOwBTAEYAVgA6AE4AUwBQAE0AOwBIADoARABN
ADUAUABSADAAMwBNAEIAMwAzADgANgAuAG4AYQBtAHAAcgBkADAAMwAuAHAAcgBvAGQALgBvAHUA
dABsAG8AbwBrAC4AYwBvAG0AOwBQAFQAUgA6ADsAQwBBAFQAOgBOAE8ATgBFADsAUwBGAFMAOgAo
ADEAMwAyADMAMAAwADEANgApACgANAA2ADMANgAwADAAOQApACgAMwA2ADYAMAAwADQAKQAoADMA
NAA2ADAAMAAyACkAKAAxADMANgAwADAAMwApACgAMwA3ADYAMAAwADIAKQAoADMAOQA2ADAAMAAz
ACkAKAAzADkAOAA2ADAANAAwADAAMAAwADIAKQAoADIANgAxADYAMAAwADUAKQAoADUAMwA1ADQA
NgAwADEAMQApACgANgA0ADcANQA2ADAAMAA4ACkAKAA0ADEAMwAwADAANwAwADAAMAAwADEAKQAo
ADIANgAwADAANQApACgAMwA4ADEAMAAwADcAMAAwADAAMAAyACkAKAA4ADYAMwA2ADIAMAAwADEA
KQAoADEAMgAyADAAMAAwADAAMAAxACkAKAA4ADMAMwA4ADAANAAwADAAMAAwADEAKQAoADYANQAx
ADIAMAAwADcAKQAoADcAMQAyADAAMAA0ADAAMAAwADAAMQApACgAOAAyADkANgAwADQAMAAwADAA
MAAxACkAKAA2ADUAMAA2ADAAMAA3ACkAKAA1ADYANgAwADMAMAAwADAAMAAyACkAKAAyADkAMAA2
ADAAMAAyACkAKAA3ADYAMQAxADYAMAAwADYAKQAoADQAMwAyADYAMAAwADgAKQAoADUANAA5ADAA
NgAwADAAMwApACgAOAA2ADcANgAwADAAMgApACgAMQA4ADYAMAAwADMAKQAoADYANgA5ADQANgAw
ADAANwApACgAMwAxADYAOAA2ADAAMAA0ACkAKAA2ADYANQA1ADYAMAAwADgAKQAoADYANgA0ADQA
NgAwADAAOAApACgAOAA5ADMANgAwADAAMgApACgAMQAxADAAMQAzADYAMAAwADUAKQAoADYANgA0
ADcANgAwADAANwApACgAOQAxADkANQA2ADAAMQA3ACkAKAAzADEANgA5ADYAMAAwADIAKQAoADcA
NAAxADYAMAAwADIAKQAoADQANwA4ADYAMAAwADAAMAAxACkAKAA2ADQAOAA2ADAAMAAyACkAKAAz
ADgAMAA3ADAANwAwADAAMAAwADUAKQAoADMAMQA2ADAAMAAyACkAKAAzADYANwA1ADYAMAAwADMA
KQAoADQANQA5ADgAMAA1ADAAMAAwADAAMQApADsARABJAFIAOgBPAFUAVAA7AFMARgBQADoAMQAx
ADAAMQA7AAAAAAAfAACAhgMCAAAAAADAAAAAAAAARgEAAABcAAAAeAAtAG0AcwAtAGUAeABjAGgA
YQBuAGcAZQAtAGEAbgB0AGkAcwBwAGEAbQAtAG0AZQBzAHMAYQBnAGUAZABhAHQAYQAtAGMAaAB1
AG4AawBjAG8AdQBuAHQAAAABAAAABAAAADEAAAAfAACAhgMCAAAAAADAAAAAAAAARgEAAABKAAAA
eAAtAG0AcwAtAGUAeABjAGgAYQBuAGcAZQAtAGEAbgB0AGkAcwBwAGEAbQAtAG0AZQBzAHMAYQBn
AGUAZABhAHQAYQAtADAAAAAAAAEAAACyDAAANwBIAGwAeQBrAHMAdgB1AFQAcgB6ADkAeABSAHQA
TQBRADkAdwBZAHoARABZAHUAOAB2AFUAWgA1ADQANQB5AFcAcQBPAGcAWQBsADgAbQBZAG4AawBT
AFEANQBaAGEARwA2AEgAWABVAFAAZQBtAFoAagA2AGQAUwBkAGYAWABEAGUAUAAvAEkAUwBEAFMA
dQA1AFEATAA1ADMAegB3AHMAQQBiAEoAKwByAGQAcAB2AHoAVAA3AGkAQQBXAFkAcgA2AFkANAAr
AFgAZwB6AHoAeQBoAFgAWgBvAGEAVgBoAGEAUQBjADMAeABaAC8AYgBpAGEAdwBNADYAQgAzAHAA
eABxADUASABUAGcARwBKAGcAcQBqAEIATAA5AEQAQwBNADQAcgBsAEcAUwB4AGQAVwAxADIATgBt
AGMARwBkAGUAbgA1AFEAYwBQAHcAYQB1ADAARQBMAEEAMgBRAHcAdQB5AFYAVgBGAGMAagBwAEwA
TABBAGgAUwBqAGYAUABUAFEAVABwACsAegBNAGcAbABSAE0AUAB6AHYAOQB4AEgASQA4AFoANQBp
ADgAVwBhAG8AaQBZAGUAeAA0AEIARABZAGkANABUAFkAbgA2AEIASwBWAHgASABZAGgAWgBBAGYA
dABoAEMAQQBMAFkAdgBBADYAYwBJAGUAcQBZAEEAcABnADYAeQBRACsAbgBDAGMAMwBCAEwANABP
AEYAVABuAGoASABkAGYAbwBSAHYANAAzAGMAZQBoAFYAOAAvAHMANwB5AEwANABRAHIATQBZAEwA
UgBBAHIAcABPADAAcABMAHgANgBhADYAdABrAGIAcAAxAHUAUwBFAGkAOABuAEUAUAB5AGQANgBL
AFkARAA3AEsAVQAxAFYAWQBTAHgAKwBkAHQAVgBYAHAAegBVAFAAOQB2ADEAVgBrAHQAQQBFAEsA
MgA5AHYAbABkAHgAUQA4ADYAYgBYAGYAbwAvAEcATgBGAEMAeQBYAFgAeQBXAGsATQBvAHAASwBC
AE8AeQAwAFEAbQA3AFoAQgBxAFEAbgBZAFUASwBMAHEAWAA5AGwAVQBuAG4AbQBBAGUANABkAGgA
YQBCADcAegBIAEwAYwB3AE8AVABZAEwAMgBMAFoAMABIAC8ARABGAE8AWQBoAEcAdQAwAHkAdAB5
ACsAYwBlAFEAbQBaAHYAcQA2AEMAUwBIAFoAVgA3AGsAdwBJADkANwBqACsAVQAyAFgAZQBKAFoA
QwByADYAOQA4AGUANwBHAEMAWQBNAHkAUwBzACsAOABHAHMATABzAHIAVABvAFoAUQBVAHMAYgBR
AHIATQBLAGMAVgBzAHMATwA3AHYAeABDAEUAMgBxAHgAWQBCAGsASQB0AC8AZgBvAGYAUgBJAEEA
cwB3AEcAMgByAC8ATAAzAFMATABMAFMAbwBQAG4AcABKAGIAYgA1ADMATwAyAHoAeQB1AHQAVAA1
AG8AMgA3AGUASQA0ADkAcQBtAGQATwBLADcAbwBGADIAZgBBAFcAOQBvAHEATAB3ADMAdQBtAEkA
awA5AEcAMwBpAGYAdgBIADIANAA3ADEAYwBmAHUAMABpAE0AUQBoADIANwBYAE8AQQAzAFEAWQBw
AGMARgBNAHgAWABsADUAYgBGAEkAdwB3AEoALwBuAFcAeQBPAEoATABwAHQANQBRAFEAWgBNAEUA
QQBEADUANwBCAGsAaQB3AFMAVgBjADkAcgBsAHAAagBDAFoASwArADMATwByAG4AdABZAEgARQBj
AGQAMgBMAHcASAA2AFAAVgBjAEgAUwB6ADIAZwBhAFQAVgBGAE8AaABFAHAAMgA1AFkAMwBGAHEA
MQBwADUASQBiAGgAYwA3AFgASwBHAE8AMwBKAFIAdwA2AG4AUABBAGgANABHAGkARABXADkARAA2
AFQANwBEADAAUABIADUAVABkAFMAbQBaAGsATQBhAE8AWQBMADgAVgBCAEYAcwBlAEYAUwB6AEkA
dABGAHMAMwBLADAAZwA1AGoARABoAEYAaABlAHMANABlAHMAZABIAHYAQQB6AHAAdABFAHMAbABY
AFgAUgBmAEwANQBaAEEAaAAvAEQAVABVAFUAUwBQAGEASQA0AGwATQBZAEIANQB5AG0AQwAyACsA
YQBqAHEAdQA5AFoAZgAvAEkAdQBRAGMAMABTAHIARQA4AGIAZQBBAG0AcABHAGQAbwBJAFYAUwBr
ADQAZwA5AGYAMwBuAGQAMgB3AGsAWQBDADIAagBpAGUAYQBwAHMAMwBUADEAagA3AHkAUAByAGYA
NAAzAHoANgBUADIAbABqADgARABzAHAAWABSAGYAdgB2ADAATQBqAEUAZgA1AEoAWgBGADYAYQBq
AGMAMQArAGQAQwBMAGUASgB2ADAAYgBvAHIARgBvADYANQBrAFIAeABOAFQAawA1AHQAOQBOAGoA
VABVAEIAMgA4AFkAQQAxADUAYgBLAHMAVwAyAE4AbgBIAEcAdwB3AFEAVQBGADUAbgB3AFgALwB0
AG8AWgBMAG4AVgBvAGcAMABPAFIAZwArAEYAOQBxAEUAKwByAFQANQBnAHMAKwB2AG4ANQB6ADQA
KwBUAE4ASgAvAFMAdgBXAEMATwBrAEgAQgBwAHcAYgBXAFQAdABCAHQAMgBFADQARQBQAEoAcQB4
AFgARQBhAFEAYQA5AC8ALwBEAFgAdgBEAGYATQBWAEEASQBTAGIASgBNAHoAZABPAE4AQQBoAFgA
WQAwAEkAaQBXAFkAeABqAGwATABIAHQAUwBpAFgAYQBpAGwAdwA1AEcAUABIAHQANAA3ADcAegAv
AEsAegA2AHAATQBOAC8AVABWAHAAWAAzADMAZgBOAHoAVAA1AFQAVgA2AEEATQA1ADgAUQBXAFkA
ZQAwADUAcABCAFcAMwBXAEkANABvAGMAMwBuADQAWQBSAEgAcQBRADEAaABCAFQALwB4AHEAegBG
ADQAZQBlAHUAaQBvAGgAZQBJAFQATwBVAFgAWgBXAFYAawBnAGsAbgBlAC8AbQBEAE4ARABrADkA
ZwBhAFcATQBlADYAYwBKAEYANgB6AHQANQByAFYAdwBsAG8AdABmACsAOQBoAG0ARABpAGQAagA0
AHIANwA3AFUAYwBjAGIAcgBBAFgAOQBSAHMAYwBDAE4ARwBrAE8AQQA1AEIAMQBHAGIANAA4ADYA
NgB4AEEAagBkAEcAdgBPAFUAYwBMAE4ARgBMAFkATQA5AEMAMgB3ADIAQQBjAEUAMQA2AEUAUwBO
ADUAUQBGADIAdQA2AHMATAA4AHcAZgBSAFIAegBKAFcAWgBuADEASAB2AHEAawA5AEwASQAvAGoA
OQBHAHEASwB6AGMAMwBrAFUAMgA2AFkAeABEAFYANQBKAGcAaABjAHYASgA5ADQAWQBsAGkASgBZ
ADYAbwBQAFQAQQBtAHYAKwBQAGcAQwBpAFgAYwBpAHYAcABVAFcALwBYAE4AYwBsADAAeABBADYA
KwBzAFMAQgBHAGwAMABsAEYANQBEAFUAYQBYAEQAegBoAGgAWgBqAE4AdQBHAE0AWAArAHcAeQAx
AC8AMgAzAEMAUQA3ADEATwBTAFkARgA2ADEANQBEAFMAVQBlAC8AYgBBAFcAKwBwAFQAdgBWAEwA
SwBzADIASwBFAE8AWgBWAGkAQwBqADcASABJAGIAYQAvADAAYQArAEIAdQAvAEgAeQBxADIARwAw
ADQAcAAxAHYAdgBxADAARwA2AEgAZgBrAG8AcAA3AHQAdABFAHEARABSAEQAcABkAEoASgBIADQA
NgAzAEEAcAA2AG8AbwBsAEUAOQBGAEQAaQBmAFIAWgBmAEIAOABGAEoAcQBBAEgAcwB2AG4ALwA1
AGIAdQBTAEIAOQBZADgAagBwAFUATwB2AGkAaAA0AHkAdAB6AC8AVQBiADkAQwBIAFYAbABiADUA
SAB4AEEAdgBWAFQAeQBOAFYAMgB3AGcAKwBjAFIAQgArAGoAbQBWAGIAQQBYAGkAKwA3AGQAWABW
AEkAOABDAEwAOQBTAGoAWQBsAFYAbwBOAGsANgBkAGYANwB2AGEASAAvAEsAcwBBADgAeAAvAEgA
RwB6ACsAbQBYAGkARgBrAFcAagBmAEMAagBaAE0AUABPAGgAcwBqAGQAMAB3ACsAUABqAE8AUQA4
AHYAWABZACsAQgBYAHQASABEAG4AYQAwAEwARwB3AFYAdgBhAGQAcgBCAGwAcwBtADIARQBnAGYA
NQBOAEwANABEAHgAKwBUADYAUABnAD0APQAAAAAAx5w=

--_000_3e4443ecde14b02b99f2e6b63b06ca41citrixcom_--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 15:23:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 15:23:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357936.586821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZX8-0004OD-Td; Wed, 29 Jun 2022 15:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357936.586821; Wed, 29 Jun 2022 15:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ZX8-0004O6-Qt; Wed, 29 Jun 2022 15:23:14 +0000
Received: by outflank-mailman (input) for mailman id 357936;
 Wed, 29 Jun 2022 15:23:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JXcD=XE=citrix.com=prvs=172c93792=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6ZX7-0004O0-5w
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 15:23:13 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61f532a5-f7bf-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 17:23:11 +0200 (CEST)
Received: from mail-dm6nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 11:23:07 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SA2PR03MB5722.namprd03.prod.outlook.com (2603:10b6:806:110::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 15:23:05 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 15:23:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61f532a5-f7bf-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656516191;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=aRMQn6lIowK23J0Wg4x0oNyRCGZJYIOiBGp/kDgX2+A=;
  b=e05WLUfDbRs+K10fuoNXCp8HO2sm4ZON5n7GmELO37EufQ6PMsK5u94P
   hJrjH8ieGShcvO7vNg0MG8jCydfjhvYAG3O5r/gabojl1tV2wL0UwVLm6
   07nM5OjHfyU4/X1RO9VmZL6e3NIAaqTrkNc8kN1ZIFPIepVplXzM1Hwwj
   I=;
X-IronPort-RemoteIP: 104.47.57.173
X-IronPort-MID: 74697971
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:QjKLIK6v9b2t8UK/dg5uTAxRtCfGchMFZxGqfqrLsTDasY5as4F+v
 mZMDDvUPPzbYjGmeowgOo/lpkwAu5OHnNc3HAVr/H9nHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG96yE6j8lkf5KkYAL+EnkZqTRMFWFw03qPp8Zj2tQy2YbjUlvU0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurSJEDsjY47gnNgSEAICNCB1F/J9qa3IdC3XXcy7lyUqclPK6tA3VQQaGNNd/ex6R2ZT6
 fYfNTYBKAiZgP67y666Te8qgdk/KM7sP8UUvXQIITPxVK56B8ycBfiVo4YHh1/chegXdRraT
 9AeZjd1KgzJfjVEO0sNCYJ4l+Ct7pX6W2IE9gPK9PVui4TV5DJf1JjiYITWQcWlQJRy2X+Eu
 VvtvGusV3n2M/Tak1Jp6EmEhOXCgCf6U4I6D6Cj+7hhh1j77nweDlgaWEW2pdG9i1WiQJRPJ
 koM4C0soKMuskuxQbHVXQC8oXOClg4RXZxXCeJSwBqW1qPe7gKdB24FZj1MctorsIkxXzNC/
 kCNt8PkA3poqrL9dJ6G3rKdrDf3NS1LK2YHPHYAVVFcvIKlp5wvhBXSSNolCLSyktD+BTD3x
 XaNsTQ6gLIQy8UM0s1X4Gz6vt5lnbCRJiZd2+kddjv9hu+lTOZJv7CV1GU=
IronPort-HdrOrdr: A9a23:539hBqB1jAxYBrjlHehJsceALOsnbusQ8zAXPh9KJCC9I/bzqy
 nxpp8mPH/P5wr5lktQ/OxoHJPwOU80lKQFmLX5WI3PYOCIgguVxe1ZnOjfKnjbalbDH41mpN
 tdmspFebrN5DFB5K6VgTVQUexQpuVvmJrY+Ns2pE0dKT2CBZsQjTuQXW2gYzdLrUR9dO0EPa
 vZwvACiyureHwRYMj+Ln4ZX9Lbr9mOsJ79exYJCzMu9QHL1FqTmffHOind+i1bfyJEwL8k/2
 SAuwvl5p+7u/X+7hPHzWfc47lfhdOk4NpeA86njNQTN1zX+3CVTbUkf4fHkCE+oemp5lpvuN
 7Qoy04N8A20H/VdnHdm2qe5yDQlBIVr1Pyw16RhnXu5ebjQighNsZHjYVFNjPE9ksJprhHoe
 929lPck6ASIQLLnSz76dSNfQptjFCIrX0rlvNWp2BDULEZdKRaoeUkjQlo+a87bW3HAb0cYa
 dT5Jm23ocWTbraVQGTgoBX+q3hYpxpdS32AnTruaSuoktrdT5CvgglLfck7wk9HaIGOuZ5Dt
 v/Q9VVfZF1P7orhPFGdZM8aPryLFDxajTxF0/XCWjbNcg8SgLwQtjMkf0I2N0=
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="74697971"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Oavj1XIaD3FQyLpcSyNo8xmU0j7+4nBhHiz8JB5OA5zaojGwR84hAkwIHgt4dNsUTbLcna1Zh3+pO8LRdatGm6Qw/HQMaJB1fUorrSfmk6HpuD/BmiGCakXR4w8alFNwEizapQCM+4MH3GeVSb0YpeisDqUNqrO4Em439Nl0zlqBKpxzoqYcioYDBdLWaCfsHI/8t5QYHXFnPTPd5lo0DYTBlKPnn5Xv59Yp3PBqFObk1j/Jt5Izoh+fngroTWpuCE5IIgH4y4Z7nykIuts/+HSg4whSWwqNhYm9if1NR+3u1/nyJHUcD00gimlM170NFc3f9RYcvvIDhDnp0k5arw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qotMkl5OwEnIimmwqNF0LmAhZev6AVNiaN6J2nWxWMw=;
 b=oRZ6F+YXqp51+arjMcIhkiRoCV7lK09/DOKpzKN4hnjGq6amA0s+6NXwaF9iATqNbWzH37SvlvTsCPOd2NB4QRzJXFBgsB836gPRsgdNlX1z72MFA1gNzM6Uow1Gm1V6Q6H58edEAM/tKdl3SU+BTUgg34ADtrUxX1Uyx8zbBCAZKvQgBRYV46P6p0ldYIhuQFypt/cZX+LfMrKek6Lc/38R68sFIl2XrrptXEiRMYwcRqc2MWwPwumkMMK5RYka91b+FmAdcNhQaqsojlMYHoZYopE8c2wxQl/n2NsSzfcyI4fKMOuPcrtU5EJ5tWXoMNXJOQoDMwL2yVsYNfL8nA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qotMkl5OwEnIimmwqNF0LmAhZev6AVNiaN6J2nWxWMw=;
 b=mns5juI2NQAUZ5oJ02UTjyo2PH8oCkBnTQcqLYSm3vVzbJfyzBEUzRaoyYckeE5cbOPJ9k3OD0+s4zG6/prVwTpBKTMafd1lmm8OSlP9yRutfhVlrarMQRBmaJBwEm+M+KP59WxA0IPDTIkBJx+fQtUikM8Rn/XYrffHrtzEYpQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 29 Jun 2022 17:23:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Message-ID: <YrxuVPMb990xYKi9@Air-de-Roger>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
 <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
X-ClientProxiedBy: LO4P123CA0463.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 424ed0b8-873d-403e-ec01-08da59e343b2
X-MS-TrafficTypeDiagnostic: SA2PR03MB5722:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OB2kVP9MZ0nwLUSEAhTsSgrvIV3S3FYk+5poVidQJ7AxHfLk+dYkNvmdL7wWBnEGc1AmVu5V8SF/7DEicd1OcqvJ1+7iaX4ENYa1cmnRczrSR1ZV0GFqW8ouw7NteHQwbKa2vxobSoQRy4+zHV+frCIYGTmYv9HgoCsKPASAmRTMgfBD438QW5zHVQRZpt/roeD4BBFPFBNDEqAEfnfbntNVkFq5zJzRxulNgSRQ3IF+nluGGpHYvsD2tFFOuqaMCgA+HxacPsJByphWSoMN1+HM9uhKiifAtd1EGJwHSprdJXvsIMIRKD5kr3xwFoDt/JYjbeyfYmQ6giHIETLtNhwJWJjkBF4G9s/y5BnbjIc++oZz+wSQXWnzY0YWkeM6bp2ze86R7VfZHtNTv9bigv4wmCLp07sycsziiWbxKKJtRDiRuXBk+Um1FMwhSwj/+q0SgppGr/u4Qk0zaXpVdC2UCJ3hXCbmeLxwIEbrNFD7RZXYXK+A8I/rtk66iCeitY+R8OUHiSuyQvTzRw1e2SQyHwQTbedlnOqCaM/CzsPaWfoO/b9o5OlYzjz4CyGdMneHogS9eom+cjV7Ft+cQJ4+UKQSJEAk6hgH7Q8M1AYXzpsVUdgHQILYP29pO07hmiXq3HrlPaNhCnVg6c8/atvX+EexUdDLM3hb+grYFPjez4JLGIaEZZBzXm0XAwUIt9YDOv5sKJ/g00caEWNVxFAy+LzlPijXW6IuDrWuYkc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(7916004)(4636009)(396003)(346002)(39860400002)(376002)(366004)(136003)(5660300002)(54906003)(82960400001)(186003)(66556008)(66946007)(2906002)(41300700001)(83380400001)(8936002)(6666004)(85182001)(6512007)(33716001)(6486002)(316002)(86362001)(9686003)(6916009)(53546011)(6506007)(478600001)(66476007)(4326008)(38100700002)(8676002)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: [opaque base64 data elided]
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 424ed0b8-873d-403e-ec01-08da59e343b2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 15:23:05.6781
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s9k8p1ZsqqZNHwVs6SSEh1dvil2pVZ01O7p9ZrIAllN3L6OT1jmf4OdQDp3XNlr4/zwpIAW4LEid3GWjNboCbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5722

On Thu, Jun 23, 2022 at 03:32:30PM +0200, Jan Beulich wrote:
> On 23.06.2022 11:08, Roger Pau Monne wrote:
> > Testing on a Kaby Lake box with 8 CPUs shows the serial buffer
> > filling up halfway through dom0 boot, and thus a non-trivial chunk
> > of Linux boot messages is dropped.
> > 
> > Increasing the buffer to 32K fixes the issue, and Linux boot
> > messages are no longer dropped.  There's also no recorded
> > justification for why 16K was chosen, so bumping to 32K to cope
> > with current systems generating output faster seems appropriate,
> > and gives a better user experience with the provided defaults.
> 
> Just to record what was part of an earlier discussion: I'm not going
> to nak such a change, but I think the justification is insufficient:
> On this same basis someone else could come a few days later and bump
> to 64k, then 128k, etc.

Indeed, and that would be fine IMO.  We should aim to provide defaults
that work well for most situations, and I don't see any drawback to
increasing the default buffer size from 16kiB to 32kiB.  I would even
be fine with increasing it to 128kiB if some use case required that,
albeit I have a hard time seeing how we could fill a buffer of that
size.

If I may ask, what kind of justification would you consider sufficient
for an increase to the default buffer size?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 15:54:20 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 15:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357942.586832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6a18-00082s-D5; Wed, 29 Jun 2022 15:54:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357942.586832; Wed, 29 Jun 2022 15:54:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6a18-00082l-9L; Wed, 29 Jun 2022 15:54:14 +0000
Received: by outflank-mailman (input) for mailman id 357942;
 Wed, 29 Jun 2022 15:54:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gFtW=XE=citrix.com=prvs=17228c8f8=Jane.Malalane@srs-se1.protection.inumbo.net>)
 id 1o6a17-00082f-4K
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 15:54:13 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b644e214-f7c3-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 17:54:11 +0200 (CEST)
Received: from mail-sn1anam02lp2048.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 11:54:00 -0400
Received: from DM5PR03MB3386.namprd03.prod.outlook.com (2603:10b6:4:46::36) by
 BLAPR03MB5539.namprd03.prod.outlook.com (2603:10b6:208:29a::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 15:53:58 +0000
Received: from DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::298d:4044:f235:c782]) by DM5PR03MB3386.namprd03.prod.outlook.com
 ([fe80::298d:4044:f235:c782%6]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 15:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b644e214-f7c3-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656518051;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=dOJWYZSZwxLQ1p14xgUczEAKCz098QDeQcy3Z1ip4dg=;
  b=SO1pK2NEO2T80t1XKlLTsFLvV8Ky0u+fSZAHVa1cgbSt4YgVwxgTHaTe
   MSzDC86zK6D2M1GL5PwvyUK/C8aX4njy7ffceMdiYH1VGAKwJbhsAF0x0
   1uLkjKWBrs8AT8lfw3r/pV+kDft+jXe/E+9+m9kVBQm2RT9O+Cqu/47I3
   s=;
X-IronPort-RemoteIP: 104.47.57.48
X-IronPort-MID: 74036694
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:7CrCDa0/cDW6y6cqfvbD5VRwkn2cJEfYwER7XKvMYLTBsI5bp2FUn
 GAWXTuBOP+DMWb1KIp2OYW39koP65Hcx9U1SVA5pC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy0IDga++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /0UrL+gbwEvBJf0lfwUfEZRMjNgIZRJreqvzXiX6aR/zmXgWl60mbBVKhhzOocVvOFqHWtJ6
 PoUbigXaQyOjP63x7T9TfRwgsMkL4/gO4Z3VnNIlGmFS6p5B82cBfmajTNb9G5YasRmP//Ya
 ow8YD5maB3GbjVEO0sNCYJ4l+Ct7pX6W2IE8AnL+/tri4TV5E9o/7jOMMHFRoKhW8gLw12bj
 zjox12sV3n2M/Tak1Jp6EmEluLJ2C/2Ro8WPLm57eJxxk2ewHQJDx8bXkf9puO24makXMlVM
 UsT+SwGoq079UjtRd74NzWnpFaUsxhaXMBfe8U44gyQzqvf4y6CG3MJCDVGbbQOttIyRDEs/
 k+EmZXuHzMHmKaOVXuX+7OQrDWzESsYN2kPYWkDVwRty9vsuoYolTrUU81uVqWyi7XdFTjuz
 hiQoSM5hrFVitQEv4254FaBhTuvr5rISwcd5wPLU2bj5QR8DKamapKp7x7H7P9GBIefUlSF+
 nMDnqCjAPsmCJiMkGmWRrwEGrisv6yBKGeE3Q4pGIQ9/TOw/XLlZZpX/Dx1OEZuNIADZCPtZ
 0jQ/whW4fe/IUeXUEO+WKrpY+xC8EQqPY2Nuiz8BjaWXqVMSQ==
IronPort-HdrOrdr: A9a23:msYiv6xQUBlSVBy/DxzGKrPxjuskLtp133Aq2lEZdPULSKGlfp
 GV9sjziyWetN9IYgBapTiBUJPwIk81bfZOkMQs1MSZLXPbUQyTXc1fBOrZsnfd8kjFmtK1up
 0QFJSWZOeQMbE+t7eD3ODaKadu/DDkytHPuQ629R4EIm9XguNbnn5E422gYy9LrXx9dP4E/e
 2nl696TlSbGUg/X4CePD0oTuLDr9rEmNbNehgdHSMq7wGIkHeB9KP6OwLw5GZebxp/hZMZtU
From: Jane Malalane <Jane.Malalane@citrix.com>
To: Christian Lindig <christian.lindig@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, David Scott <dave@recoil.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: Re: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Date: Wed, 29 Jun 2022 15:53:58 +0000
Message-ID: <0840f806-cd45-45c3-e6cc-cc3ed8b0bb43@citrix.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-2-jane.malalane@citrix.com>
 <29B192D0-DD84-45D8-9D1E-83C004F015DA@citrix.com>
In-Reply-To: <29B192D0-DD84-45D8-9D1E-83C004F015DA@citrix.com>

On 29/06/2022 15:49, Christian Lindig wrote:
>
> On 29 Jun 2022, at 14:55, Jane Malalane <jane.malalane@citrix.com> wrote:
>
> + physinfo = caml_alloc_tuple(11);
> Store_field(physinfo, 0, Val_int(c_physinfo.threads_per_core));
> Store_field(physinfo, 1, Val_int(c_physinfo.cores_per_socket));
> Store_field(physinfo, 2, Val_int(c_physinfo.nr_cpus));
> @@ -749,6 +749,17 @@ CAMLprim value stub_xc_physinfo(value xch)
> Store_field(physinfo, 8, cap_list);
> Store_field(physinfo, 9, Val_int(c_physinfo.max_cpu_id + 1));
>
> +#if defined(__i386__) || defined(__x86_64__)
> + /*
> +  * arch_capabilities: physinfo_arch_cap_flag list;
> +  */
> + arch_cap_list = c_bitmap_to_ocaml_list
> + /* ! physinfo_arch_cap_flag CAP_ none */
> + /* ! XEN_SYSCTL_PHYSCAP_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
> + (c_physinfo.arch_capabilities);
> + Store_field(physinfo, 10, arch_cap_list);
> +#endif
> +
> CAMLreturn(physinfo);
> }
>
> Is this extending the tuple but only defining a value on x86? Does this not lead to undefined fields on other architectures?

You're right, it's missing a definition, I will send a new version -
will just give some time for more eventual comments from others on the
series overall.

Thank you,

Jane
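
One way to give field 10 a definition on every architecture is to store the empty list in the non-x86 branch. The hunk below is a hypothetical sketch of such a fix, not the actual resend; `Val_emptylist` is the OCaml C runtime's representation of `[]` (from caml/mlvalues.h):

```diff
 #if defined(__i386__) || defined(__x86_64__)
     arch_cap_list = c_bitmap_to_ocaml_list
     /* ! physinfo_arch_cap_flag CAP_ none */
     /* ! XEN_SYSCTL_PHYSCAP_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
     (c_physinfo.arch_capabilities);
+#else
+    /* No arch-specific capabilities reported on this architecture. */
+    arch_cap_list = Val_emptylist;
 #endif
     Store_field(physinfo, 10, arch_cap_list);
```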


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 16:03:54 2022
Message-ID: <b4740e9b-3586-04c3-454a-5d60bae2cc55@suse.com>
Date: Wed, 29 Jun 2022 18:03:34 +0200
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
 <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
 <YrxuVPMb990xYKi9@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <YrxuVPMb990xYKi9@Air-de-Roger>

On 29.06.2022 17:23, Roger Pau Monné wrote:
> On Thu, Jun 23, 2022 at 03:32:30PM +0200, Jan Beulich wrote:
>> On 23.06.2022 11:08, Roger Pau Monne wrote:
>>> Testing on a Kaby Lake box with 8 CPUs leads to the serial buffer
>>> being filled halfway during dom0 boot, and thus a non-trivial chunk of
>>> Linux boot messages are dropped.
>>>
>>> Increasing the buffer to 32K does fix the issue and Linux boot
>>> messages are no longer dropped.  There's no justification either on
>>> why 16K was chosen, and hence bumping to 32K in order to cope with
>>> current systems generating output faster does seem appropriate to have
>>> a better user experience with the provided defaults.
>>
>> Just to record what was part of an earlier discussion: I'm not going
>> to nak such a change, but I think the justification is insufficient:
>> On this same basis someone else could come a few days later and bump
>> to 64k, then 128k, etc.
> 
> Indeed, and that would be fine IMO.  We should aim to provide defaults
> that work fine for most situations, and here I don't see what drawback
> it has to increase the default buffer size from 16kiB to 32kiB, and
> I would be fine with increasing to 128kiB if that's required for some
> use case, albeit I have a hard time seeing how we could fill that
> buffer.
> 
> If I can ask, what kind of justification you would see fit for
> granting an increase to the default buffer size?

Making plausible that for a majority of contemporary systems the buffer
is not large enough would be one aspect. But then there's imo always
going to be an issue: What if non-Linux Dom0 would be far more chatty?
What if Linux, down the road, was made less verbose (by default)? What
if people expect large enough a buffer to also suffice when Linux runs
in e.g. ignore_loglevel mode? We simply can't fit all use cases and at
the same time also not go overboard with the default size. That's why
there's a way to control this via a command line option.

Jan
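
[The command line option referred to above is Xen's `serial_tx_buffer` boot parameter, which overrides the built-in default transmit buffer size. A sketch of its use, assuming a GRUB2 multiboot2 entry; the console options shown are illustrative, and the size suffix follows Xen's generic size parsing:]

```
multiboot2 /boot/xen.gz console=com1 com1=115200,8n1 serial_tx_buffer=64k
```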


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 16:19:38 2022
Date: Wed, 29 Jun 2022 18:19:16 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Message-ID: <Yrx7hDevqMgvtMRR@Air-de-Roger>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
 <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
 <YrxuVPMb990xYKi9@Air-de-Roger>
 <b4740e9b-3586-04c3-454a-5d60bae2cc55@suse.com>
In-Reply-To: <b4740e9b-3586-04c3-454a-5d60bae2cc55@suse.com>
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 16:19:21.3753
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JIpTDnlWaMBeQ2efxpd3gfC1y+mJdo41TEkkDV7sVt19XY1eCa2WGUnSaf0ozWU2MeVcWu7z96p/HSkQcI0+HQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5243

On Wed, Jun 29, 2022 at 06:03:34PM +0200, Jan Beulich wrote:
> On 29.06.2022 17:23, Roger Pau Monné wrote:
> > On Thu, Jun 23, 2022 at 03:32:30PM +0200, Jan Beulich wrote:
> >> On 23.06.2022 11:08, Roger Pau Monne wrote:
> >>> Testing on a Kaby Lake box with 8 CPUs leads to the serial buffer
> >>> being filled halfway during dom0 boot, and thus a non-trivial chunk of
> >>> Linux boot messages are dropped.
> >>>
> >>> Increasing the buffer to 32K does fix the issue and Linux boot
> >>> messages are no longer dropped.  There's no justification either on
> >>> why 16K was chosen, and hence bumping to 32K in order to cope with
> >>> current systems generating output faster does seem appropriate to have
> >>> a better user experience with the provided defaults.
> >>
> >> Just to record what was part of an earlier discussion: I'm not going
> >> to nak such a change, but I think the justification is insufficient:
> >> On this same basis someone else could come a few days later and bump
> >> to 64k, then 128k, etc.
> > 
> > Indeed, and that would be fine IMO.  We should aim to provide defaults
> > that work fine for most situations, and here I don't see what drawback
> > it has to increase the default buffer size from 16kiB to 32kiB, and
> > I would be fine with increasing to 128kiB if that's required for some
> > use case, albeit I have a hard time seeing how we could fill that
> > buffer.
> > 
> > If I can ask, what kind of justification you would see fit for
> > granting an increase to the default buffer size?
> 
> Making plausible that for a majority of contemporary systems the buffer
> is not large enough would be one aspect. But then there's imo always
> going to be an issue: What if non-Linux Dom0 would be far more chatty?
> What if Linux, down the road, was made less verbose (by default)? What
> if people expect large enough a buffer to also suffice when Linux runs
> in e.g. ignore_loglevel mode? We simply can't fit all use cases and at
> the same time also not go overboard with the default size. That's why
> there's a way to control this via command line option.

Maybe I should clarify that the current buffer size is not enough on
this system even with the default Linux log level.  I think we can
expect someone who changes the default Linux log level to also
consider changing the Xen buffer size.  OTOH when using the default
Linux log level the default Xen serial buffer should be enough.

I haven't tested FreeBSD on that system; the other systems I have
seem to be fine when booting FreeBSD with the default Xen serial
buffer size.

I think we should exercise caution if someone proposed increasing to
1M for example, but I don't see why there's so much controversy over
going from 16K to 32K: it's IMO a reasonable value, and it has proven
to prevent serial log loss when using the default Linux log level.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 16:21:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 16:21:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357960.586865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6aRK-0004yH-3t; Wed, 29 Jun 2022 16:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357960.586865; Wed, 29 Jun 2022 16:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6aRK-0004yA-0b; Wed, 29 Jun 2022 16:21:18 +0000
Received: by outflank-mailman (input) for mailman id 357960;
 Wed, 29 Jun 2022 16:21:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6aRJ-0004y4-8d
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 16:21:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2055.outbound.protection.outlook.com [40.107.20.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7feeaac7-f7c7-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 18:21:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR0402MB3380.eurprd04.prod.outlook.com (2603:10a6:208:19::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 16:21:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 16:21:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7feeaac7-f7c7-11ec-bd2d-47488cf2e6aa
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <22a70226-71a5-6caf-46b9-52956f6e9a6b@suse.com>
Date: Wed, 29 Jun 2022 18:21:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] x86/ept: fix shattering of special pages
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20220627100119.55363-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220627100119.55363-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0052.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 163978a9-1c5e-4ada-2bc9-08da59eb6333
X-MS-TrafficTypeDiagnostic: AM0PR0402MB3380:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 163978a9-1c5e-4ada-2bc9-08da59eb6333
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 16:21:14.4080
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JqZn9xcZ7aOjVzYW9OcLg+L1Dab39QQSSRpHORqq7lFx7Kjz4+waLS/Pu9zv3+eaF6i+UWLUll4rWVXhNImIUQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0402MB3380

On 27.06.2022 12:01, Roger Pau Monne wrote:
> The current logic in epte_get_entry_emt() will split any page marked
> as special with order greater than zero, without checking whether the
> super page is all special.
> 
> Fix this by splitting the page only if it's not all marked as
> special, in order to prevent unneeded super page shattering.
> 
> Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Paul Durrant <paul@xen.org>
> ---
> It would seem weird to me to have a super page entry in EPT with
> ranges marked as special and not the full page.  I guess it's better
> to be safe, but I don't see a scenario where we could end up in that
> situation.
> 
> I've been trying to find a clarification in the original patch
> submission about how it's possible to have such super page EPT entry,
> but haven't been able to find any justification.

I think the loop there was added "just in case", whereas in reality
it was only single special pages that would ever be mapped. So ...

> Might be nice to expand the commit message as to why it's possible to
> have such mixed super page entries that would need splitting.

... there may not be anything to add. I don't mind this being re-done,
hence
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Yet I'm not sure I understand what case this actually fixes (and
hence why you added a Fixes: tag) - are there cases where
non-order-0 special pages are mapped in reality?

As to the Fixes: tag - I think we mean to follow Linux there and
hence want 12-digit hashes to be specified. See also
docs/process/sending-patches.pandoc. (But no need to resend just
for this.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 16:30:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 16:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357967.586876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6aa9-0006Z0-2e; Wed, 29 Jun 2022 16:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357967.586876; Wed, 29 Jun 2022 16:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6aa8-0006Yt-Un; Wed, 29 Jun 2022 16:30:24 +0000
Received: by outflank-mailman (input) for mailman id 357967;
 Wed, 29 Jun 2022 16:30:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NFaf=XE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6aa6-0006Yn-V0
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 16:30:22 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2087.outbound.protection.outlook.com [40.107.22.87])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c43f2dc5-f7c8-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 18:30:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7541.eurprd04.prod.outlook.com (2603:10a6:20b:29a::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 16:30:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Wed, 29 Jun 2022
 16:30:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c43f2dc5-f7c8-11ec-b725-ed86ccbb4733
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <59fe1b28-b173-e497-5b8a-5e0bb6d946b6@suse.com>
Date: Wed, 29 Jun 2022 18:30:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
 <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
 <YrxuVPMb990xYKi9@Air-de-Roger>
 <b4740e9b-3586-04c3-454a-5d60bae2cc55@suse.com>
 <Yrx7hDevqMgvtMRR@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Yrx7hDevqMgvtMRR@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0006.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4adb5191-b865-4475-fd70-08da59eca84d
X-MS-TrafficTypeDiagnostic: AS8PR04MB7541:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4adb5191-b865-4475-fd70-08da59eca84d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 16:30:19.8590
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7amy+NkL+41gxjkpuHp8TVoePvHLbBt5f0Ok/22rd81TUw2X3AIJOGsYQnhy4g5LprIr+ZtCMfair1EDjdi3fw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7541

On 29.06.2022 18:19, Roger Pau Monné wrote:
> On Wed, Jun 29, 2022 at 06:03:34PM +0200, Jan Beulich wrote:
>> On 29.06.2022 17:23, Roger Pau Monné wrote:
>>> On Thu, Jun 23, 2022 at 03:32:30PM +0200, Jan Beulich wrote:
>>>> On 23.06.2022 11:08, Roger Pau Monne wrote:
>>>>> Testing on a Kaby Lake box with 8 CPUs leads to the serial buffer
>>>>> being filled halfway during dom0 boot, and thus a non-trivial chunk of
>>>>> Linux boot messages are dropped.
>>>>>
>>>>> Increasing the buffer to 32K does fix the issue and Linux boot
>>>>> messages are no longer dropped.  There's no justification either on
>>>>> why 16K was chosen, and hence bumping to 32K in order to cope with
>>>>> current systems generating output faster does seem appropriate to have
>>>>> a better user experience with the provided defaults.
>>>>
>>>> Just to record what was part of an earlier discussion: I'm not going
>>>> to nak such a change, but I think the justification is insufficient:
>>>> On this same basis someone else could come a few days later and bump
>>>> to 64k, then 128k, etc.
>>>
>>> Indeed, and that would be fine IMO.  We should aim to provide defaults
>>> that work fine for most situations, and here I don't see what drawback
>>> it has to increase the default buffer size from 16kiB to 32kiB, and
>>> I would be fine with increasing to 128kiB if that's required for some
>>> use case, albeit I have a hard time seeing how we could fill that
>>> buffer.
>>>
>>> If I can ask, what kind of justification you would see fit for
>>> granting an increase to the default buffer size?
>>
>> Making plausible that for a majority of contemporary systems the buffer
>> is not large enough would be one aspect. But then there's imo always
>> going to be an issue: What if non-Linux Dom0 would be far more chatty?
>> What if Linux, down the road, was made less verbose (by default)? What
>> if people expect large enough a buffer to also suffice when Linux runs
>> in e.g. ignore_loglevel mode? We simply can't fit all use cases and at
>> the same time also not go overboard with the default size. That's why
>> there's a way to control this via command line option.
> 
> Maybe I should clarify that the current buffer size is not enough on
> this system even with the default Linux log level.  I think we can
> expect someone who changes the default Linux log level to also
> consider changing the Xen buffer size.  OTOH when using the default
> Linux log level the default Xen serial buffer should be enough.
> 
> I haven't tested FreeBSD on that system; the other systems I have
> seem to be fine when booting FreeBSD with the default Xen serial
> buffer size.
> 
> I think we should exercise caution if someone proposed increasing to
> 1M for example, but I don't see why there's so much controversy over
> going from 16K to 32K: it's IMO a reasonable value, and it has proven
> to prevent serial log loss when using the default Linux log level.

But see - that's exactly my point. Where do you draw the line between
"we should accept" and "exercise caution"? Is it 256k? Or 512k?

Certainly I'm also aware of the common "memory is cheap" attitude,
but that doesn't make me like it.  That's both because I started out
with 64k of memory in total (or maybe even less; it's too long ago to
recall), and because of my general observation that everything only
ever growing (fast) makes many things slower than they would need to
be.

As to controversy - I did make clear before that I don't mean to nak
the change. But given my views, you'll need to find someone else to
ack it despite being aware of my opinion.

Jan
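[For readers following the thread: the command line option referred to
above is, to the best of my knowledge, Xen's serial_tx_buffer parameter,
which defaults to 16kiB. A hedged GRUB configuration sketch — the console
settings shown are placeholders, only the buffer-size option is the point:]

```shell
# /etc/default/grub (illustrative values; adjust console/com1 to your setup)
# Bump Xen's per-UART transmit buffer from the 16KiB default to 32KiB:
GRUB_CMDLINE_XEN_DEFAULT="console=com1 com1=115200,8n1 serial_tx_buffer=32k"
# Then regenerate the grub config, e.g.: update-grub
```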


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:02:06 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357974.586887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6b4f-0001vb-JX; Wed, 29 Jun 2022 17:01:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357974.586887; Wed, 29 Jun 2022 17:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6b4f-0001vU-GV; Wed, 29 Jun 2022 17:01:57 +0000
Received: by outflank-mailman (input) for mailman id 357974;
 Wed, 29 Jun 2022 17:01:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M2+n=XE=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o6b4e-0001vO-Q3
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:01:56 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2dec736f-f7cd-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 19:01:55 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id ay16so33852850ejb.6
 for <xen-devel@lists.xenproject.org>; Wed, 29 Jun 2022 10:01:55 -0700 (PDT)
Received: from [192.168.1.10] (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.gmail.com with ESMTPSA id
 fd5-20020a056402388500b00436f3107bdasm10688424edb.38.2022.06.29.10.01.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 29 Jun 2022 10:01:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2dec736f-f7cd-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=+Vd8lywTbQEL1LZx5JFpNcyJV94TE2CwbAKyE4V1pTc=;
        b=FtHb6P631xYi8c4peq8t7LqoYLzgy4qDtqyC80vqfMJFNDpFwMHKH9Z6nOv+otBek+
         sNGMbmErhMRIBJ5RXIeMO7N3TjQ5/et2Z+bpTVuC5NtstKAOSUdp1hLUkzGW2kE57jQa
         ebHoNRPtiqc23s/3rPtMNYZBYD4RZlG51ejKv8VK4F41GYt/3XoLL+YXreEFcEZtI8AV
         sFWOgsC3Yotv6laU0J3dlQFouIGcey2Xnfz4zeouBj3Vw7eGouKHvqE1diV8cNiMeZba
         huzcoUMKTHq0zAhz7lQC4OucogW9dlJf8E0SeitLslsXoS24IUxbOx+8xbLSfqxA8rNE
         iAmQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=+Vd8lywTbQEL1LZx5JFpNcyJV94TE2CwbAKyE4V1pTc=;
        b=v3FgtNCgw9SdMeQ63VDlNCSjkE8KHtdOpVVfSaQA1h3qLGP66Kox00H86hP5oBHv60
         CL6Zgj2UvFNs/zdCFb8YNAGH0T3Ywklt+WuHcjc6rTWprx4g9tcyWUH3id1iDFcGjGqv
         ZhizmYFYUHgp65VnFKnOTJ2v003/TfyAuuTSWjkTvNcb2T0BzpotUnvTxAen+UdEqdxI
         L/IEKaJ8ZHbLb3/cHK2V+e9FTyGa8avOd8UHJJMmIbdNiE/MRyrmHBhW5uFcseXg+W9L
         RBlUaccMdJBeb6hhpwo/cgKhbAAdFf97DADbov/YDVmU47mcz27pKb941H+QabdlwCoR
         Fr9g==
X-Gm-Message-State: AJIora8TY/E3UmjLRu3/hrNnwsYEdvI8m6W3Oz2qSwocdBlwZrkBFILB
	C9+aDttzEh5zQ5nbEAlBJ/U=
X-Google-Smtp-Source: AGRyM1vsMeIxmAkw/U+lfMrlAnsr3lW5Y7VOTSMtc5+e3TOgTMouGYEa6IDhb+4rq0nRfGKj1x5nSg==
X-Received: by 2002:a17:907:1620:b0:726:c0d8:7578 with SMTP id hb32-20020a170907162000b00726c0d87578mr4203309ejc.587.1656522115005;
        Wed, 29 Jun 2022 10:01:55 -0700 (PDT)
Message-ID: <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
Date: Wed, 29 Jun 2022 20:01:52 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, viryaos-discuss@lists.sourceforge.net,
 Ayan Kumar Halder <ayankuma@amd.com>
References: <20220626184536.666647-1-burzalodowa@gmail.com>
 <20220626184536.666647-2-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 6/29/22 03:28, Stefano Stabellini wrote:
> On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
>> To be in line with Xen, do not enable direct mapping automatically for all
>> statically allocated domains.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Actually I don't know about this one. I think it is OK that ImageBuilder
> defaults are different from Xen defaults. This is a case where I think
> it would be good to enable DOMU_DIRECT_MAP by default when
> DOMU_STATIC_MEM is specified.
Just realized that I forgot to add the [ImageBuilder] tag to the patches.
Sorry about that.

I'm cc'ing Ayan, since he suggested the change.
I have no strong preference on the default value.

Xenia

>> ---
>>   README.md                | 4 ++--
>>   scripts/uboot-script-gen | 8 ++------
>>   2 files changed, 4 insertions(+), 8 deletions(-)
>>
>> diff --git a/README.md b/README.md
>> index cb15ca5..03e437b 100644
>> --- a/README.md
>> +++ b/README.md
>> @@ -169,8 +169,8 @@ Where:
>>     if specified, indicates the host physical address regions
>>     [baseaddr, baseaddr + size) to be reserved to the VM for static allocation.
>>   
>> -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>> -  If set to 1, the VM is direct mapped. The default is 1.
>> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
>> +  By default, direct mapping is disabled.
>>     This is only applicable when DOMU_STATIC_MEM is specified.
>>   
>>   - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>> index 085e29f..66ce6f7 100755
>> --- a/scripts/uboot-script-gen
>> +++ b/scripts/uboot-script-gen
>> @@ -52,7 +52,7 @@ function dt_set()
>>               echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>>           elif test $data_type = "bool"
>>           then
>> -            if test "$data" -eq 1
>> +            if test "$data" == "1"
>>               then
>>                   echo "fdt set $path $var" >> $UBOOT_SOURCE
>>               fi
>> @@ -74,7 +74,7 @@ function dt_set()
>>               fdtput $FDTEDIT -p -t s $path $var $data
>>           elif test $data_type = "bool"
>>           then
>> -            if test "$data" -eq 1
>> +            if test "$data" == "1"
>>               then
>>                   fdtput $FDTEDIT -p $path $var
>>               fi
>> @@ -491,10 +491,6 @@ function xen_config()
>>           then
>>               DOMU_CMD[$i]="console=ttyAMA0"
>>           fi
>> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
>> -        then
>> -             DOMU_DIRECT_MAP[$i]=1
>> -        fi
>>           i=$(( $i + 1 ))
>>       done
>>   }
>> -- 
>> 2.34.1
>>
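A side note on the `-eq` to `==` hunks above: with the default assignment
removed, DOMU_DIRECT_MAP[$i] may now be empty, and test's numeric -eq
errors out on an empty operand, while a string comparison simply
evaluates to false. A minimal bash sketch of the difference (variable
name is illustrative):

```shell
#!/usr/bin/env bash
# Why the patch switches from a numeric to a string comparison:
data=""   # e.g. DOMU_DIRECT_MAP[number] left unset in the config

# Numeric test: "integer expression expected" error on an empty value.
if test "$data" -eq 1 2>/dev/null; then echo "direct map"; fi

# String test: an empty value is simply "not 1", i.e. the new default.
if test "$data" == "1"; then echo "direct map"; else echo "no direct map"; fi
```

Note that `test ... == ...` is a bashism; POSIX test uses a single `=`,
but uboot-script-gen is a bash script, so that's fine here.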


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:07:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357980.586898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bA5-0002Z8-8r; Wed, 29 Jun 2022 17:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357980.586898; Wed, 29 Jun 2022 17:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bA5-0002Z1-4F; Wed, 29 Jun 2022 17:07:33 +0000
Received: by outflank-mailman (input) for mailman id 357980;
 Wed, 29 Jun 2022 17:07:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=los1=XE=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1o6bA3-0002Yv-GP
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:07:32 +0000
Received: from sonic312-24.consmr.mail.gq1.yahoo.com
 (sonic312-24.consmr.mail.gq1.yahoo.com [98.137.69.205])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3d27184-f7cd-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 19:07:28 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic312.consmr.mail.gq1.yahoo.com with HTTP; Wed, 29 Jun 2022 17:07:26 +0000
Received: by hermes--production-ne1-7864dcfd54-q4948 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID b6ab6cb55138359f47e14faeedf2220c; 
 Wed, 29 Jun 2022 17:07:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3d27184-f7cd-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1656522446; bh=KGleZCrK1ory7vZwxUmrZsedjLqWV8c6qrWiB0U6mow=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=mr80KzqVqatPdlmBEV9f0VtMfWq45miGJA2WQ0/YScGHp0WLX/IsRongAc7X6/4dJiKxYsWXgMVeN84sK+Ok8qx4ORAroEVDYBObI4y7u6QLp8emcy/is0xcnocST6dFC26JbsG1s4OnYL0DnCy4xiuqzkW95rIekWcrrKI1ayaxt/ybV6aEC5Po0y3i7Ostl+hVuKP/WUc04py+PMOP2ddUvmQjGfu3ulMwtXl0T9yFHy4s1FxQgqFYoGj+p010zGJ9/her2PjCQ1+Q0o/qafIU7/kRqggl7dCrQ7zzH3P8tVyco/xeIjnkYZdntLmt74hemEva/nM/QdUsEEuNTA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1656522446; bh=vADSH42mt0SrfQIwqiqm56R6UvOcvPatZYtPdEu/k6D=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=D5V4w92Ebw+iFrzLrQn38NPfeAMwbCaSTip/5373qeg5aSoVrGsQPB2mWWzSoSngAmBKnAGDSPafqUpPGmPap+8ozN7nbSR1orUxuycow+zG/+tEk29BqCAe4hj2MzxXjnD3GPYphwOIEICM22CKEOKhOqBlLpj+IqAj7DM5wWJjcrHbiWRubhUk+8QCJLlI5vnPyjXkZUR2y0bfQ/ylSsAWsLPfiecDsHqTaTrKiTAuvPTCAKcoXmwMUZAbEHpziJ1QDkDUMXCPPSomiCHYYLjrlWm4cSUvI/+VYrY11LuoMpCxgMnPd/QW48aeWl5A+zy8sYy0jPiRRjmDpbfGJQ==
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	qemu-trivial@nongnu.org,
	qemu-stable@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v5] xen/pass-through: merge emulated bits correctly
Date: Wed, 29 Jun 2022 13:07:12 -0400
Message-Id: <e4392535d8e5266063dc5461d0f1d301e3dd5951.1656522217.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.36.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <e4392535d8e5266063dc5461d0f1d301e3dd5951.1656522217.git.brchuckz.ref@aol.com>
Content-Length: 3280

In xen_pt_config_reg_init(), there is an error in the merging of the
emulated data with the host value. With the current Qemu, instead of
merging the emulated bits with the host bits as defined by emu_mask,
the emulated bits are merged with the host bits as defined by the
inverse of emu_mask. In some cases, depending on the data in the
registers on the host, the way the registers are set up, and the
initial values of the emulated bits, the end result will be that
the register is initialized with the wrong value.

To correct this error, use the XEN_PT_MERGE_VALUE macro to help ensure
the merge is done correctly.

This correction is needed to resolve Qemu project issue #1061, which
describes the failure of Xen HVM Linux guests to boot in certain
configurations with passed through PCI devices. In those cases this
error disables, instead of enables, the PCI_STATUS_CAP_LIST bit of the
PCI_STATUS register of a passed through PCI device. That in turn
disables the MSI-X capability of the device in Linux guests, with the
end result that the Linux guest never completes the boot process.

Fixes: 2e87512eccf3 ("xen/pt: Sync up the dev.config and data values")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1061
Buglink: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=988333

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Edit the commit message to more accurately describe the cause
of the error.

v3: * Add Reviewed-By: Anthony Perard <anthony.perard@citrix.com>
    * Add qemu-stable@nongnu.org to recipients to indicate the patch
      may be suitable for backport to Qemu stable

v4: * Add the fixed commit's subject to the Fixes: 2e87512eccf3 tag

Sorry for the extra noise with v4 (I thought the fixed commit's subject
would be added automatically).

v5: * Coding style fix: move block comment leading /* and trailing */
      to separate lines

Again, sorry for the noise, but the style of the comment was wrong
before v5.

Thank you, Anthony, again, for taking the time to review this patch.

 hw/xen/xen_pt_config_init.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index cad4aeba84..4758514ddf 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1965,11 +1965,12 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
 
         if ((data & host_mask) != (val & host_mask)) {
             uint32_t new_val;
-
-            /* Mask out host (including past size). */
-            new_val = val & host_mask;
-            /* Merge emulated ones (excluding the non-emulated ones). */
-            new_val |= data & host_mask;
+            /*
+             * Merge the emulated bits (data) with the host bits (val)
+             * and mask out the bits past size to enable restoration
+             * of the proper value for logging below.
+             */
+            new_val = XEN_PT_MERGE_VALUE(val, data, host_mask) & size_mask;
             /* Leave intact host and emulated values past the size - even though
              * we do not care as we write per reg->size granularity, but for the
              * logging below lets have the proper value. */
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:15:09 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:15:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357986.586909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bHL-00048V-Vv; Wed, 29 Jun 2022 17:15:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357986.586909; Wed, 29 Jun 2022 17:15:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bHL-00048O-ST; Wed, 29 Jun 2022 17:15:03 +0000
Received: by outflank-mailman (input) for mailman id 357986;
 Wed, 29 Jun 2022 17:15:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fe+6=XE=kernel.org=jpoimboe@srs-se1.protection.inumbo.net>)
 id 1o6bHK-00048I-S5
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:15:02 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0286b230-f7cf-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 19:15:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id A5268B82424;
 Wed, 29 Jun 2022 17:15:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0932AC341CA;
 Wed, 29 Jun 2022 17:14:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0286b230-f7cf-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656522899;
	bh=nRYAO0YhqCfW4n6wXNQYaEZksNx77Z2cvSAAwv2BTXw=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=mmoROso54H2ZVFwQUwvRDiFyIa1hsR2lG/3lAZaePvTzKruFHeRsiGi4UC7PkEO8l
	 qWc1TuTWvJNfGt8g2jUYIoiEpgw1IVFqpfuGxCEmq/KihSpGPzrNcam7ATayIKyxmW
	 f84Z6R9fLtvGpaBEBiC5vfSbFRpz2dLWFplsUScFoVQLHclhAH0r2DGjW5wk0y5V0C
	 9aUICt/P0uNFQcTVjcc5lpqsp1Ao41rJTVA8vpojbxmL/unJ8H/UQaDityIYrRDKh7
	 LPIOQxETfTJbjRenZ5zm+rwcV+1EmrEfYCPDrqPdM/NXyS98sXiPnIlRoc+1bN6puO
	 vZ9EP/kQpZRvQ==
Date: Wed, 29 Jun 2022 10:14:57 -0700
From: Josh Poimboeuf <jpoimboe@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 2/3] x86: fix setup of brk area
Message-ID: <20220629171457.amdsrgaxady55hds@treble>
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-3-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220623094608.7294-3-jgross@suse.com>

Hi Juergen,

It helps to actually Cc the person who broke it ;-)

On Thu, Jun 23, 2022 at 11:46:07AM +0200, Juergen Gross wrote:
> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
> put the brk area into the .bss..brk section (placed directly behind
> .bss),

Hm? It didn't actually do that.

For individual translation units, it did rename the section from
".brk_reservation" to ".bss..brk".  But then during linking it's still
placed in .brk in vmlinux, just like before.

> causing it not to be cleared initially. As the brk area is used
> to allocate early page tables, these might contain garbage in not
> explicitly written entries.
> 
> This is especially a problem for Xen PV guests, as the hypervisor will
> validate page tables (check for writable page tables and hypervisor
> private bits) before accepting them to be used. There have been reports
> of early crashes of PV guests due to illegal page table contents.
> 
> Fix that by letting clear_bss() clear the brk area, too.

While it does make sense to clear the brk area, I don't understand how
my patch broke this.  How was it getting cleared before?

-- 
Josh


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:18:39 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357992.586920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bKn-0004uD-Ge; Wed, 29 Jun 2022 17:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357992.586920; Wed, 29 Jun 2022 17:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bKn-0004u6-DF; Wed, 29 Jun 2022 17:18:37 +0000
Received: by outflank-mailman (input) for mailman id 357992;
 Wed, 29 Jun 2022 17:18:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fe+6=XE=kernel.org=jpoimboe@srs-se1.protection.inumbo.net>)
 id 1o6bKl-0004u0-KU
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:18:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80765d17-f7cf-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 19:18:34 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id CF21A61E4F;
 Wed, 29 Jun 2022 17:18:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E2C95C34114;
 Wed, 29 Jun 2022 17:18:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80765d17-f7cf-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656523111;
	bh=cElJT+fqYXMLdJ0O9skHbTQ8acmcYY3npP0Idut/ntE=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=ti985LV+4onMHrTrVh1q2DMliiCN0ww7NfP/MJct+nuqmClkkjSIm86ULTpKL6g59
	 j52/wek/dOmKMWSktBjEyg1c3kKfFSnPTazCVlvOWhTqyvfGgp3YWMqiwQVEherWIx
	 3rqDa3Vl8HJobmeSU2QbQa1aLy6JIFBt1L9QOc+Hg4uEotfZ7mm8fA+CgaOVzQtRju
	 ltcPDaOpAuvwEcnqabUllBaMIDjNmYW9vzyVao4gVbX8MBdN0LZKtvwywxC/uRQ7af
	 pWZwK075jfREZIJpdmS6LNLO0ojr7i5JUu0L0Ncws5c/OqsGKW84vkr92vx8zhwQOE
	 vyBGKuZ/nMtIw==
Date: Wed, 29 Jun 2022 10:18:29 -0700
From: Josh Poimboeuf <jpoimboe@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 3/3] x86: fix .brk attribute in linker script
Message-ID: <20220629171829.shotpln44nzgo2eu@treble>
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-4-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220623094608.7294-4-jgross@suse.com>

On Thu, Jun 23, 2022 at 11:46:08AM +0200, Juergen Gross wrote:
> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
> added the "NOLOAD" attribute to the .brk section as a "failsafe"
> measure.
> 
> Unfortunately this leads to the linker no longer covering the .brk
> section in a program header, resulting in the kernel loader not knowing
> that the memory for the .brk section must be reserved.
> 
> This has led to crashes when loading the kernel as PV dom0 under Xen,
> but other scenarios could be hit by the same problem (e.g. in case an
> uncompressed kernel is used and the initrd is placed directly behind
> it).
> 
> So drop the "NOLOAD" attribute. This has been verified to correctly
> cover the .brk section by a program header of the resulting ELF file.
> 
> Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>

-- 
Josh
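[For context on the hunk under review: the change is a single output
section attribute in the x86 kernel linker script. A schematic fragment,
simplified from memory rather than copied from vmlinux.lds.S:]

```
/* Before: NOLOAD keeps .brk out of any PT_LOAD program header, so the
 * loader has no idea the memory must be reserved. */
.brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) { ... }

/* After: without NOLOAD, .brk is covered by a program header and the
 * kernel loader reserves its memory, so nothing (e.g. an initrd placed
 * directly behind an uncompressed kernel) can land on top of it. */
.brk : AT(ADDR(.brk) - LOAD_OFFSET) { ... }
```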


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:18:40 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.357993.586931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bKq-0005AY-OM; Wed, 29 Jun 2022 17:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 357993.586931; Wed, 29 Jun 2022 17:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bKq-0005AN-KY; Wed, 29 Jun 2022 17:18:40 +0000
Received: by outflank-mailman (input) for mailman id 357993;
 Wed, 29 Jun 2022 17:18:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BAEK=XE=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1o6bKp-00059P-1r
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:18:40 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 82679e6f-f7cf-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 19:18:37 +0200 (CEST)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25TFECoD005839;
 Wed, 29 Jun 2022 17:17:59 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gwry0hy3v-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 29 Jun 2022 17:17:59 +0000
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 25TH6vFh025070; Wed, 29 Jun 2022 17:17:57 GMT
Received: from nam02-bn1-obe.outbound.protection.outlook.com
 (mail-bn1nam07lp2043.outbound.protection.outlook.com [104.47.51.43])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com with ESMTP id
 3gwrt9aqhf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 29 Jun 2022 17:17:57 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by SA1PR10MB5782.namprd10.prod.outlook.com (2603:10b6:806:23f::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Wed, 29 Jun
 2022 17:17:56 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::e48c:bcc0:fff3:eac6]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::e48c:bcc0:fff3:eac6%9]) with mapi id 15.20.5373.023; Wed, 29 Jun 2022
 17:17:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82679e6f-f7cf-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=message-id : date :
 subject : to : cc : references : from : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2021-07-09;
 bh=HDQXZKqaw0A5rOXZ5xAzN6hKpLvNQis5UWbnEKxVxZw=;
 b=aew7oMYw8UYQxuQP9vwkGh61JZ0MRisL1VL784RGbd948A1ABjQDSfZSXqMCcHQSaCWy
 VFI3xz8mekImpvBU4FPNmuSuoZL3oGaTP5bYVH2tU+ifk5/WaPpCrJ/dcq5IjQUez6bc
 MZKnEspIKEmfo1XjYLiImYumE8wa+RTjWxILWkeCjq+VDB6zPs/H1MA1npnxb6h8mmRz
 QkjOBYxRwk4U3Bs5eXBKTVci2NgNFtOPpd+dbj4CSGQkD5iNy2e71La9N5r12bzq3qCf
 +8GNfQOnh4PLYG7Wy8j1IAv6rtBS4FvO7EEjGfKkzYINsMy22saH5F61Tf7VIbKK9LEE Yw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BoK3l5LLRsC6XB+uUop6+cG4hT2RecshHNsyYQcMDxPZAPLeBVm1/hqpJ1jnev+/YPQ6TzMHBHCkvo422ujrGO5ISb5/57K61SlfGUT0eX0qgyUsjzY1l4Ycrx+dz1DkcqyZ/Lx/gv9AqRsgflfRVETA1mRhxXxECHWMMcqqg+7JKV8B+8lql9i65gWl6ISf1CoNZPtwI0MHQ6ZPWlpfeayzG0AvxOn0pblN/YDu8Bx8d2E9d1GQGwGHMQU0F5ihqBrFo01j7uAvUQwIDTIaWdfcCzvprK163agkWNtL+/fUYeP3XuXx8hHqv5/eGMCz0Wur1QYgO8kcJW6AjOqUSw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HDQXZKqaw0A5rOXZ5xAzN6hKpLvNQis5UWbnEKxVxZw=;
 b=CCaD8LpBunRfMcRYCQlcOnN4gg2q/sWGOfQnVPQw4znLd93n92qsYfQ+XKtg4gehq4OwIOhsR6uK0Z7djU75qbYSd3uTBu7iFwQ4FQftXaA17dt58DI3dYrNqrk7tjsHT4hsiJUg3VBYTatEiP/nn8F8z4ja6M/so/Wer9/XjnFcr6ZfesW5nou7Zc92RFR+UVchrfc2ScyXrp2QglzbjTczY8d75aPuSBwa5OnB0S16dJf/4VQbQxmlY9LTnIDHlhfRhmadgaLFvqF4VDLjScUxxlmDgk/nZXP2JgMf9iVMYGLvKDoTAS6af3a0xyu8rCVM8wcEpBsE+nBF1PEuHA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HDQXZKqaw0A5rOXZ5xAzN6hKpLvNQis5UWbnEKxVxZw=;
 b=kW4EKSs3MKT9an4GhRGYmQuKW/OrJ1rp/o9kspif6t9FvbtDPv/iUkoTJXaPQZ4AACLL90FJ7/ue1GmSG1M+rwzyznDSE61Un34kWcdFgWEwfW/tcNTjKyJoakH1ZBPLou9Ucaldxc12djb0y28Kh1hBIWhcItHy8tIIMzEOCdE=
Message-ID: <b77201c2-3fb2-feba-af40-22955d5fc19c@oracle.com>
Date: Wed, 29 Jun 2022 13:17:52 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH v2 0/3] x86: fix brk area initialization
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H. Peter Anvin" <hpa@zytor.com>, regressions@lists.linux.dev
References: <20220623094608.7294-1-jgross@suse.com>
 <2592493b-4339-3e54-8acf-585dcf90be96@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
In-Reply-To: <2592493b-4339-3e54-8acf-585dcf90be96@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SA0PR11CA0079.namprd11.prod.outlook.com
 (2603:10b6:806:d2::24) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8142f033-27fe-4086-cc36-08da59f34ed0
X-MS-TrafficTypeDiagnostic: SA1PR10MB5782:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8142f033-27fe-4086-cc36-08da59f34ed0
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 17:17:56.2140
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8mpVUgwrhAF5N2zwOvMxNSOGVxlC7TdHAL+otVui8ou/agtZXZxBsoA0THOj6WPY4PynFBSYWFREx21KL5QA3YXbOhDC1rDcFP3XKn5HePM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR10MB5782
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.517,18.0.883
 definitions=2022-06-29_18:2022-06-28,2022-06-29 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 bulkscore=0 mlxlogscore=999
 suspectscore=0 adultscore=0 phishscore=0 malwarescore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2204290000
 definitions=main-2206290062
X-Proofpoint-GUID: PQ7eudLs0E5Y6lzhYKoodq8nEbFhaK8D
X-Proofpoint-ORIG-GUID: PQ7eudLs0E5Y6lzhYKoodq8nEbFhaK8D


On 6/29/22 10:10 AM, Juergen Gross wrote:
> On 23.06.22 11:46, Juergen Gross wrote:
>> The brk area needs to be zeroed initially, like the .bss section.
>> At the same time its memory should be covered by the ELF program
>> headers.
>>
>> Juergen Gross (3):
>>    x86/xen: use clear_bss() for Xen PV guests
>>    x86: fix setup of brk area
>>    x86: fix .brk attribute in linker script
>>
>>   arch/x86/include/asm/setup.h  |  3 +++
>>   arch/x86/kernel/head64.c      |  4 +++-
>>   arch/x86/kernel/vmlinux.lds.S |  2 +-
>>   arch/x86/xen/enlighten_pv.c   |  8 ++++++--
>>   arch/x86/xen/xen-head.S       | 10 +---------
>>   5 files changed, 14 insertions(+), 13 deletions(-)
>>
>
> Could I please have some feedback? This series fixes a major
> regression when running as a Xen PV guest (depending on the kernel
> configuration, the system will crash very early).
>
> #regzbot ^introduced e32683c6f7d2
>


I don't think you need this for the Xen bits, as Jan has already reviewed it, but in case you do:


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:19:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358001.586942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bLK-0005yA-48; Wed, 29 Jun 2022 17:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358001.586942; Wed, 29 Jun 2022 17:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bLK-0005xH-0j; Wed, 29 Jun 2022 17:19:10 +0000
Received: by outflank-mailman (input) for mailman id 358001;
 Wed, 29 Jun 2022 17:19:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6bLI-0004u0-HI
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:19:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 940fc1b4-f7cf-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 19:19:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4CCD961E66;
 Wed, 29 Jun 2022 17:19:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 58995C34114;
 Wed, 29 Jun 2022 17:19:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 940fc1b4-f7cf-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656523144;
	bh=ZNfIjq9KinlkCHq9f5ckm9fc4RWVx0w9mg779R11+NQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IIBMIL86/VS45bdLqOXlLD02vNI+3H7dxPvshT9RcIC710ph5QkdAVd7/KVBtOeN6
	 MQTgwF/OUnl2bhJNJTFe8tWI4EOS8haLZq48ougAHY+7NBKIrtjTAuCd9Jqm3avtpW
	 nvSulNLWHYUisw44BBzQBmaO0axn9h361fZ9LhOx8Q5i+DOvWZg88l39nQRTVQFTHo
	 cb179YCLZSBV5zU5S/a87kJGyR5wmDPxQjj6Z9i2z/FM6DlXFeufDtpM4I7mo1WMUC
	 DIFuBwJAoFKsfcaVdh5YA3vC7cFJ1Fjb50UzUTRJelCnBOWhRm8f6IpnNL3rtT9oTx
	 0o5jnXMyZb2xA==
Date: Wed, 29 Jun 2022 10:19:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, dmitry.semenets@gmail.com, 
    xen-devel@lists.xenproject.org, Dmytro Semenets <dmytro_semenets@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <26a1b208-7192-a64f-ca6d-c144de89ed2c@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206291014000.4389@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com> <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop> <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org> <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org> <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop> <26a1b208-7192-a64f-ca6d-c144de89ed2c@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 29 Jun 2022, Julien Grall wrote:
> On 28/06/2022 23:56, Stefano Stabellini wrote:
> > > The advantage of the panic() is that it will remind us that something
> > > needs to be fixed.
> > > With a warning (or WARN()) people will tend to ignore it.
> > 
> > I know that this specific code path (cpu off) is probably not super
> > relevant for what I am about to say, but as we move closer to safety
> > certifiability we need to get away from using "panic" and BUG_ON as a
> > reminder that more work is needed to have a fully correct implementation
> > of something.
> 
> I don't think we have many places at runtime using BUG_ON()/panic(). They are
> often used because we think Xen would not be able to recover if the condition
> is hit.
> 
> I am happy to remove them, but this should not come at the expense of
> introducing other potentially weird bugs.
> 
> > 
> > I also see your point and agree that ASSERT is not acceptable for
> > external input but from my point of view panic is the same (slightly
> > worse because it doesn't go away in production builds).
> 
> I think it depends on your target. Would you be happy if Xen continues to run
> with a potentially fatal flaw?

Actually, this is an excellent question. I don't know what the expected
behavior is from a safety perspective in case of serious errors: how the
error should be reported, and whether continuing or not is recommended.
I'll try to find out more information.


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:22:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:22:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358011.586953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bOi-0007Ul-JQ; Wed, 29 Jun 2022 17:22:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358011.586953; Wed, 29 Jun 2022 17:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bOi-0007Ue-Gc; Wed, 29 Jun 2022 17:22:40 +0000
Received: by outflank-mailman (input) for mailman id 358011;
 Wed, 29 Jun 2022 17:22:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6bOh-0007UY-KM
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 17:22:39 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13109c5d-f7d0-11ec-b725-ed86ccbb4733;
 Wed, 29 Jun 2022 19:22:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id F03B7B821D1;
 Wed, 29 Jun 2022 17:22:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 19A7EC34114;
 Wed, 29 Jun 2022 17:22:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13109c5d-f7d0-11ec-b725-ed86ccbb4733
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656523356;
	bh=3petwhcBcc+OmqawYboWpq3pveWZ6qvP5TZ10HLPmxY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bqzt8SHWvTPw8li1Kcd0KwKHHYKb4SNFmL90kp2GDasgx5Aw98xZQ0Nwhitiibf5n
	 cXrgvaWXj29kpEEnIA+wOLBXHrrjBHo+PiHhJ23RmkkJA9XySHKkNvalnOL358PpjH
	 cEf8CX06LDQRJudndvICC4CVeGyVBhMFd5sscO92spJ4P1JV6YYxRdzpi40Oyg2u2E
	 95KcaY3O0IiwXyCTo6RKRNLU5waBjUZDbFno2EhV8ufecaaSww1TpZTKCpNLojXUbV
	 TX0/vLwBRe1zSpE2IO4GtuvVAfsR5383BbUK7AapEjJtw0dvHf6djfqY2+SOjOL1+a
	 P5j9hRUVwbRow==
Date: Wed, 29 Jun 2022 10:22:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <Luca.Fancellu@arm.com>
cc: Anthony PERARD <anthony.perard@citrix.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, 
    Elena Ufimtseva <elena.ufimtseva@oracle.com>, Tim Deegan <tim@xen.org>, 
    Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
    "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Juergen Gross <jgross@suse.com>, 
    Christian Lindig <christian.lindig@citrix.com>, 
    David Scott <dave@recoil.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
In-Reply-To: <BF28045C-0848-4B5A-8DAB-57192C7B4A18@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206291019550.4389@ubuntu-linux-20-04-desktop>
References: <20220624160422.53457-1-anthony.perard@citrix.com> <20220624160422.53457-26-anthony.perard@citrix.com> <BF28045C-0848-4B5A-8DAB-57192C7B4A18@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1667198580-1656523357=:4389"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1667198580-1656523357=:4389
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 29 Jun 2022, Luca Fancellu wrote:
> + CC: Stefano Stabellini
> 
> > On 24 Jun 2022, at 17:04, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > 
> > Patch "tools: Add -Werror by default to all tools/" has added
> > "-Werror" to CFLAGS in tools/Rules.mk; remove it from all other
> > makefiles, as it is now duplicated.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Hi Anthony,
> 
> I will try to review the series when I manage to have some time. In the meantime I can say the whole
> series builds fine in my Yocto setup on arm64 and x86_64; I’ve also tried the tool stack to
> create/destroy/console guests, and no problems so far.
> 
> The only problem I have is building for arm32 because, I think, this patch does a great job and
> uncovers a problem there:

That reminds me that we only have arm32 Xen hypervisor builds in
gitlab-ci; we don't have any arm32 Xen tools builds. I'll add it to my
TODO, but if someone (not necessarily Luca) has some spare time it could
be a nice project. It could be done with Yocto by adding a Yocto build
container to automation/build/.
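The deduplication described in the quoted commit message can be pictured with a minimal make fragment. The file layout and variable contents here are illustrative only, not the exact tools/Rules.mk contents:

```make
# tools/Rules.mk (shared): add -Werror once, centrally.
CFLAGS += -Werror

# An individual tool's Makefile then only includes the shared rules and
# no longer repeats -Werror itself:
#
#   include $(XEN_ROOT)/tools/Rules.mk
#   CFLAGS += -Wall        # tool-specific flags only
```

Centralizing the flag is what exposed the arm32 build warnings mentioned above: makefiles that previously lacked their own -Werror now inherit it from Rules.mk, so latent warnings become build failures.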
--8323329-1667198580-1656523357=:4389--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 17:53:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 17:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358019.586964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6bs0-0002jT-1D; Wed, 29 Jun 2022 17:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358019.586964; Wed, 29 Jun 2022 17:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6brz-0002jM-TN; Wed, 29 Jun 2022 17:52:55 +0000
Received: by outflank-mailman (input) for mailman id 358019;
 Wed, 29 Jun 2022 17:52:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6bry-0002jC-BN; Wed, 29 Jun 2022 17:52:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6bry-0000Qb-7t; Wed, 29 Jun 2022 17:52:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6brx-00031j-TZ; Wed, 29 Jun 2022 17:52:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6brx-0003jA-T8; Wed, 29 Jun 2022 17:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2VYgzqQifJbc+R1M58YmfIHNz29j35r6Pe9v3TffcNk=; b=rExktqHzTSVWFzdI0SKyYV+cLL
	qyyCU/DKGc6yc2jbtq0Ws8XtMTnhq+6JoeFMVpeHbmerXncM389exVHcuuu/6tXdbe433iu5I0MoL
	vTyFNS3RMBza+0+kJk8L8/c1Yn7QtiaIaWGbfIw7+nJyUh+g6gRrQw4RLIRu4aqTso6k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171400-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171400: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9ef3ad40a81ff6b8b65ed870588b230f38812f2a
X-Osstest-Versions-That:
    linux=23db944f754e99abf814a79a2273b0191d35e4ff
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 17:52:53 +0000

flight 171400 linux-5.4 real [real]
flight 171406 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171400/
http://logs.test-lab.xenproject.org/osstest/logs/171406/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 14 guest-start             fail REGR. vs. 171352

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-thunderx  8 xen-boot            fail pass in 171406-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 171352
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 171406 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 171406 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171352
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171352
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171352
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171352
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 171352
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171352
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171352
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171352
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171352
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9ef3ad40a81ff6b8b65ed870588b230f38812f2a
baseline version:
 linux                23db944f754e99abf814a79a2273b0191d35e4ff

Last test of basis   171352  2022-06-25 11:13:17 Z    4 days
Testing same since   171400  2022-06-29 07:11:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrew Donnellan <ajd@linux.ibm.com>
  Antoine Tenart <atenart@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Ballon Shi <ballon.shi@quectel.com>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Carlo Lobrano <c.lobrano@gmail.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Helge Deller <deller@gmx.de>
  huhai <huhai@kylinos.cn>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Halasa <khalasa@piap.pl>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lucas Stach <l.stach@pengutronix.de>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marc Dionne <marc.dionne@auristor.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nikos Tsironis <ntsironis@arrikto.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Rob Clark <robdclark@chromium.org>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Sumit Dubey2 <Sumit.Dubey2@ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1766 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 18:02:03 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 18:02:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358028.586975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6c0l-0004UD-4F; Wed, 29 Jun 2022 18:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358028.586975; Wed, 29 Jun 2022 18:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6c0k-0004U6-VO; Wed, 29 Jun 2022 18:01:58 +0000
Received: by outflank-mailman (input) for mailman id 358028;
 Wed, 29 Jun 2022 18:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6c0k-0004U0-4h
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 18:01:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6c0j-0000fq-UE; Wed, 29 Jun 2022 18:01:57 +0000
Received: from [54.239.6.187] (helo=[192.168.9.41])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6c0j-00062L-Mc; Wed, 29 Jun 2022 18:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=7VoP880SRoJXEVg0j7FHhUEnCYZABffhkjsN2aIfDVA=; b=Pmf102J2sSI+sYr89ALuS9FIss
	iU+FK3/KiEc0MsJRtfvAYnySwaX28lsi32OcxF+N5LrRNd2ImuCx4mMCxQsECahIaqa4xRlB11dFY
	DzzPzxB7piwiYTrr7KsoAoGje/JH3TtYfFin2FGFP5EAWjZIZ39XGrurCDeNUitww8jo=;
Message-ID: <ef8b540c-d2c2-c999-d3fe-08fc88665ad9@xen.org>
Date: Wed, 29 Jun 2022 19:01:55 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: R: R: Crash when using xencov
To: Carmine Cesarano <c.cesarano@hotmail.it>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <AM7PR10MB355942D32F58FF02379C398CF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <87d0667b-2b85-f006-ea3c-6f557b2bdc8e@xen.org>
 <AM7PR10MB355972A68A222CB9FBAC43D4F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <daa12b90-da87-d463-24c4-a13fba330f1d@xen.org>
 <AM7PR10MB35593AA7F46B4D4A0BBD9841F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559BB8CB733902773B1AD6AF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559A1984F6B53CEFB4FECC7F8B89@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM7PR10MB3559A1984F6B53CEFB4FECC7F8B89@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

(moving the discussion to xen-devel)

On 28/06/2022 16:32, Carmine Cesarano wrote:
> Hi,

Hello,

Please refrain from top-posting and/or posting images on the ML. If you 
need to share an image, then please upload it somewhere else.

> I made two further attempts, first by compiling xen and xen-tools with gcc 10 and second with gcc 7, getting the same problem.
> 
> By running “xencov reset”, with both compilers, the line of code associated with the crash is:

Discussing with Andrew Cooper on IRC, it looks like the problem is that 
Xen and GCC disagree on the format. There are newer formats that Xen 
doesn't understand.

If you are interested in supporting GCOV on your setup, then I would 
suggest looking at the documentation and/or at what Linux is doing for 
newer compilers.
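
To make the version mismatch concrete, here is a small Python sketch (not 
from this thread; the sample values are illustrative). GCC stamps its gcov 
output with a magic word and a format-version word, and the same versioning 
idea applies to the in-memory gcov_info records that Xen's 
common/coverage/gcc_4_7.c consumes, so a consumer built for an older 
encoding rejects or mis-parses newer ones:

```python
import struct

GCNO_MAGIC = 0x67636E6F  # ASCII "gcno" (compile-time notes file)
GCDA_MAGIC = 0x67636461  # ASCII "gcda" (runtime counter data file)

def parse_gcov_header(data: bytes):
    """Return (file kind, version string) from a .gcno/.gcda header.

    The first two 32-bit words are a magic and a format-version word;
    the file's byte order is detected from which way the magic reads.
    """
    magic, version = struct.unpack_from("<II", data, 0)
    if magic not in (GCNO_MAGIC, GCDA_MAGIC):
        # Magic did not match: try the other byte order.
        magic, version = struct.unpack_from(">II", data, 0)
    kind = {GCNO_MAGIC: "gcno", GCDA_MAGIC: "gcda"}.get(magic, "unknown")
    # The version word holds four ASCII characters; the exact encoding
    # varies between GCC releases, which is the source of the mismatch.
    return kind, version.to_bytes(4, "big").decode("ascii", "replace")
```

A consumer that only recognises particular version strings will bail out 
(or, worse, misinterpret the layout) when handed output from a newer GCC, 
which matches the behaviour seen here.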

> 
>    *   /xen/xen/common/coverage/gcc_4_7.c:123
> By running “xencov read”, I get two different behaviors with the two compilers:
> 
>    *   /xen/xen/common/coverage/gcc_4_7.c:165   [GCC 11]
>    *   /xen/xen/common/coverage/gcov.c:131          [GCC 7]
> 
> Attached are the logs captured with a serial port.
> 
> Cheers,
> 
> Carmine Cesarano
> From: Julien Grall<mailto:julien@xen.org>
> Sent: Monday, 27 June 2022 14:42
> To: Carmine Cesarano<mailto:c.cesarano@hotmail.it>
> Subject: Re: R: Crash when using xencov
> 
> Hello,
> 
> You seem to have removed xen-users from the CC list. Please keep it in
> CC unless the discussion needs to be private.
> 
> Also, please avoid top-posting.
> 
> On 27/06/2022 13:36, Carmine Cesarano wrote:
>> Yes, i mean stable-4.16. Below the logs after running "xencov reset". The situation for "xencov read" is similar.
>>
>> (XEN) ----[ Xen-4.16.2-pre  x86_64  debug=y gcov=y  Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d040257bd2>] gcov_info_reset+0x87/0xa9
>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
>> (XEN) rax: 0000000000000000   rbx: ffff82d04056bdc0   rcx: 00000000000c000b
>> (XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff82d04056bdc0
>> (XEN) rbp: ffff83023a7e7cb0   rsp: ffff83023a7e7c88   r8:  7fffffffffffffff
>> (XEN) r9:  deadbeefdeadf00d   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000001   r13: ffff82d04056be28   r14: 0000000000000000
>> (XEN) r15: ffff82d04056bdc0   cr0: 0000000080050033   cr4: 0000000000172620
>> (XEN) cr3: 000000017ea0b000   cr2: 0000000000000000
>> (XEN) fsb: 00007fc0fb0ca200   gsb: ffff88807b400000   gss: 0000000000000000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen code around <ffff82d040257bd2> (gcov_info_reset+0x87/0xa9):
>> (XEN)  1d 44 89 f0 49 8b 57 70 <4c> 8b 24 c2 49 83 c4 18 48 83 05 a6 81 4c 00 01
>> (XEN) Xen stack trace from rsp=ffff83023a7e7c88:
>> (XEN)    ffff82d04056bdc0 0000000000000001 ffff82d04070f180 0000000000000001
>> (XEN)    0000000000000000 ffff83023a7e7cc8 ffff82d040257a6a ffff83023a7e7db0
>> (XEN)    ffff83023a7e7ce8 ffff82d040257547 ffff83023a7e7fff ffff83023a7e7fff
>> (XEN)    ffff83023a7e7e58 ffff82d040255d5f ffff83023a7e7d68 ffff82d0403b5e8b
>> (XEN)    000000000017d5b2 0000000000000000 ffff83023a6b5000 0000000000000000
>> (XEN)    00007fc0fb348010 800000017ea0e127 0000000000000202 ffff82d040399fd8
>> (XEN)    0000000000005a40 ffff83023a7e7d68 0000000000000206 ffff82e002fab640
>> (XEN)    ffff83023a7e7e58 ffff82d0403bb29d ffff83023a69f000 000000003a7e7fff
>> (XEN)    000000017ea0f067 0000000000000000 000000000017d5b2 000000000017d5b2
>> (XEN)    0000001400000014 0000000000000002 ffffffffffffffff 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 ffff83023a7e7ef8 0000000000000001 ffff83023a69f000
>> (XEN)    deadbeefdeadf00d ffff82d04025579d ffff83023a7e7ee8 ffff82d040387f62
>> (XEN)    00007fc0fb348010 deadbeefdeadf00d deadbeefdeadf00d deadbeefdeadf00d
>> (XEN)    deadbeefdeadf00d ffff83023a7e7fff ffff82d0403b2c99 ffff83023a7e7eb8
>> (XEN)    ffff82d04038214c ffff83023a69f000 ffff83023a7e7ed8 ffff83023a69f000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    00007cfdc58180e7 ffff82d0404392ae 0000000000000000 ffff88800f484c00
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d040257bd2>] R gcov_info_reset+0x87/0xa9
> 
> Thanks! There are multiple versions of gcov_info_reset() in the tree.
> The one used will depend on the compiler you are using.
> 
> Can you use addr2line (or gdb) to find out the file and line of code
> associated with the crash?
> 
> For addr2line you could do:
> 
>     addr2line -e xen-syms 0xffff82d040257bd2

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 18:45:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 18:45:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358036.587000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ch8-00013v-Mg; Wed, 29 Jun 2022 18:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358036.587000; Wed, 29 Jun 2022 18:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ch8-00013o-Jp; Wed, 29 Jun 2022 18:45:46 +0000
Received: by outflank-mailman (input) for mailman id 358036;
 Wed, 29 Jun 2022 18:45:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6b1=XE=citrix.com=prvs=172711fe8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o6ch6-0000mL-SD
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 18:45:44 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad1bcc4c-f7db-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 20:45:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad1bcc4c-f7db-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656528343;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=hZZKKx5wLOQQOp1PoQBnMdpXG5SbkLDIEV13BnQaatU=;
  b=HrlqaoZDyhdg+N6BC1sIu9MvlyMQlxWnGtIOFLnHKaJ9PMJoFrWmT2UO
   HB+9Bv6z5MnxyksKgs6wso77UgX0eRr4imFqxgVrDLK/svAXVV1JJ2WOl
   Yefhj201e/C3Oj5dYH5VOrEP1GhnFVN0sA9KEdB/6GZKdHXv8fET6WkGt
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74746992
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:3VTefatbpT3L9DzYjk7fsfh2UOfnVAleMUV32f8akzHdYApBsoF/q
 tZmKWDTbPaMNjT8Kd5wYd/gpE8B6MPXm4VmGwBr+3o0FC5A+JbJXdiXEBz9bniYRiHhoOOLz
 Cm8hv3odp1coqr0/0/1WlTZhSAgk/nOHNIQMcacUsxLbVYMpBwJ1FQywYbVvqYy2YLjW13X6
 IuryyHiEATNNwBcYzp8B52r8HuDjNyq0N/PlgVjDRzjlAa2e0g9VPrzF4noR5fLatA88tqBb
 /TC1NmEElbxpH/BPD8HfoHTKSXmSpaKVeSHZ+E/t6KK2nCurQRquko32WZ1he66RFxlkvgoo
 Oihu6BcRi8TbqORu6MbUiB6Chp+ZqB5943jDkGG5Jn7I03uKxMAwt1rBUAye4YZ5vx2ESdF8
 vlwxDIlN07ZwbjsmfTiF7cq1p9LwMrDZevzvllJyz3DAOlgapfEW6jQvvdT3Ssqh9AIFvHbD
 yYcQWUzM0ieMkwVUrsRIKwSxuGvnnr9TzdB9HGv/6A7wlD50DUkhdABN/KKI4fXFK25hH2wu
 Wbu72n/RBYAO7S36xCI73atje/nhj7gVcQZE7jQ3u5nhhify3IeDDUSVECnur+ph0imQdVdJ
 kcIvC00osAPGFeDF4enGUfi+Tjd40BaC4E4//AGBB+l8PraviXeAGk9bCd6aIcri8AEYRMT7
 wrc9z/2PgCDoIF5WFrEqOrK8GrjZXRMRYMRTXRaFFVYurEPtKl210uSFYg7TcZZm/WvQVnNL
 ya2QD/Sbln5peoCzO2F8F/OmFpATbCZH1dutm07so9Ihz6VhbJJhKTysDA3Fd4acO6koqCp5
 RDoYfS24uEUFo2qnyeQWugLF7zBz6/bbWOA3Qc2QcJxqmjFF5ufkWZ4uWAWyKBBYq45lcLBO
 heP6Wu9GrcJVJdVUUOHS93oUJl7pUQRPd/kSurVfrJzX3SFTyfepHsGTRfJhwjFyRFw+Ylia
 MzzWZv9Uh4n5VFPkWPeqxE1iud7mEjTBAr7GPjG8vhQ+eHENCLEGetdbQDmgyJQxPrsnTg5O
 u13b6OioyizmsWnCsUL2eb/9Ww3EEU=
IronPort-HdrOrdr: A9a23:S0kXN6GqN+pPxrBSpLqE5MeALOsnbusQ8zAXP0AYc3Jom6uj5q
 eTdZUgpHvJYVkqOE3I9ertBEDiewK4yXcW2/hzAV7KZmCP0wHEEGgL1/qF/9SKIUzDH4Bmup
 uIC5IOauHNMQ==
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="74746992"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] x86/spec-ctrl: Only adjust MSR_SPEC_CTRL for idle with legacy IBRS
Date: Wed, 29 Jun 2022 19:45:07 +0100
Message-ID: <20220629184508.15956-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220629184508.15956-1-andrew.cooper3@citrix.com>
References: <20220629184508.15956-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Back at the time of the original Spectre-v2 fixes, it was recommended to clear
MSR_SPEC_CTRL when going idle.  This is because of the side effects on the
sibling thread caused by the microcode IBRS and STIBP implementations which
were retrofitted to existing CPUs.

However, there are no relevant cross-thread impacts for the hardware
IBRS/STIBP implementations, so this logic should not be used on Intel CPUs
supporting eIBRS, or any AMD CPUs; doing so only adds unnecessary latency to
the idle path.

Furthermore, there's no point playing with MSR_SPEC_CTRL in the idle paths if
SMT is disabled for other reasons.

Fixes: 8d03080d2a33 ("x86/spec-ctrl: Cease using thunk=lfence on AMD")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 xen/arch/x86/include/asm/spec_ctrl.h   |  5 +++--
 xen/arch/x86/spec_ctrl.c               | 10 ++++++++--
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index bd45a144ee78..493d338a085e 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -33,7 +33,7 @@ XEN_CPUFEATURE(SC_MSR_HVM,        X86_SYNTH(17)) /* MSR_SPEC_CTRL used by Xen fo
 XEN_CPUFEATURE(SC_RSB_PV,         X86_SYNTH(18)) /* RSB overwrite needed for PV */
 XEN_CPUFEATURE(SC_RSB_HVM,        X86_SYNTH(19)) /* RSB overwrite needed for HVM */
 XEN_CPUFEATURE(XEN_SELFSNOOP,     X86_SYNTH(20)) /* SELFSNOOP gets used by Xen itself */
-XEN_CPUFEATURE(SC_MSR_IDLE,       X86_SYNTH(21)) /* (SC_MSR_PV || SC_MSR_HVM) && default_xen_spec_ctrl */
+XEN_CPUFEATURE(SC_MSR_IDLE,       X86_SYNTH(21)) /* Clear MSR_SPEC_CTRL on idle */
 XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 /* Bits 23,24 unused. */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
diff --git a/xen/arch/x86/include/asm/spec_ctrl.h b/xen/arch/x86/include/asm/spec_ctrl.h
index 751355f471f4..7e83e0179fb9 100644
--- a/xen/arch/x86/include/asm/spec_ctrl.h
+++ b/xen/arch/x86/include/asm/spec_ctrl.h
@@ -78,7 +78,8 @@ static always_inline void spec_ctrl_enter_idle(struct cpu_info *info)
     uint32_t val = 0;
 
     /*
-     * Branch Target Injection:
+     * It is recommended in some cases to clear MSR_SPEC_CTRL when going idle,
+     * to avoid impacting sibling threads.
      *
      * Latch the new shadow value, then enable shadowing, then update the MSR.
      * There are no SMP issues here; only local processor ordering concerns.
@@ -114,7 +115,7 @@ static always_inline void spec_ctrl_exit_idle(struct cpu_info *info)
     uint32_t val = info->xen_spec_ctrl;
 
     /*
-     * Branch Target Injection:
+     * Restore MSR_SPEC_CTRL on exit from idle.
      *
      * Disable shadowing before updating the MSR.  There are no SMP issues
      * here; only local processor ordering concerns.
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 1f275ad1fb5d..57f4fcb21398 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -1151,8 +1151,14 @@ void __init init_speculation_mitigations(void)
     /* (Re)init BSP state now that default_spec_ctrl_flags has been calculated. */
     init_shadow_spec_ctrl_state();
 
-    /* If Xen is using any MSR_SPEC_CTRL settings, adjust the idle path. */
-    if ( default_xen_spec_ctrl )
+    /*
+     * For microcoded IBRS only (i.e. Intel, pre eIBRS), it is recommended to
+     * clear MSR_SPEC_CTRL before going idle, to avoid impacting sibling
+     * threads.  Activate this if SMT is enabled, and Xen is using a non-zero
+     * MSR_SPEC_CTRL setting.
+     */
+    if ( boot_cpu_has(X86_FEATURE_IBRSB) && !(caps & ARCH_CAPS_IBRS_ALL) &&
+         hw_smt_enabled && default_xen_spec_ctrl )
         setup_force_cpu_cap(X86_FEATURE_SC_MSR_IDLE);
 
     xpti_init_default(caps);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 18:45:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 18:45:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358037.587011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ch9-0001Jx-Vk; Wed, 29 Jun 2022 18:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358037.587011; Wed, 29 Jun 2022 18:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ch9-0001Jq-S4; Wed, 29 Jun 2022 18:45:47 +0000
Received: by outflank-mailman (input) for mailman id 358037;
 Wed, 29 Jun 2022 18:45:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6b1=XE=citrix.com=prvs=172711fe8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o6ch8-0000mL-DY
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 18:45:46 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aef34e82-f7db-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 20:45:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aef34e82-f7db-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656528344;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=qX8FoVKbQduGMZIO5wy6Y3hMntj7bH2XpUmWXTKnZfE=;
  b=Xl5F6iBj9dzNNlqAo6mdyWaT5evlDbIxLRVpzaw+u/5K0k5q0WInr4Ye
   RcCZVyDKhjnPdliG7nRvVIhlGfbHOnG6X/3Lm3goSVp6z2wh145cIXK9A
   QcHRnvfSR6dc1WX1lgGoddOpMQqcYv+8HDA9EwxorbDkKBrx/guRNbFKB
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 74746993
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:LnFTLKhXFEXqwBCtgDC0rNzsX161PxAKZh0ujC45NGQN5FlHY01je
 htvUTrQPKmDamf9fNx2YI+y90sEsZ7RzoJgTwJtpC03FCMb9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CU6jefSLlbFILas1hpZHGeIcw98z0M58wIFqtQw24LhXVnc4
 YqaT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YT15MPDv3+IcajtBKCElMJJJypvoGEHq5KR/z2WeG5ft6/BnDUVwNowE4OdnR2pJ8
 JT0KhhUMErF3bjvhuvmFK883azPL+GyVG8bklhmwSvUErANRpfbTr+RzdRZwC0xloZFGvO2i
 88xNmYwMEqRMkYn1lE/CMIlo8GYpkHDLQZBqljK9PoI42Hc5VkkuFTqGIWMIYHbLSlPpW6Ho
 krW8mK/BQsVXPS94zeY9nOnhsfUgDj2HokVEdWQ5vNsxVGe2GEXIBkXTkeg5+m0jFakXNBSI
 FBS/TAhxZXe72TyEIO7BUfh5ifZ4FhMALK8DtHW9im3mqSJwEGfB1EmVwVBM9EZu/0SagUTg
 wrhc8zSOdB/jFGEYSvDq+nJ9GLuZXF9wXwqPnFdE1ZcizX3iMRq10+UEI4+eEKgpoetcQwc1
 Qxmu8TXa187qccQn5u28lnc695HjsiYF1Vljuk7s4/M0++YWGJGT9bxgbQjxawcRLt1t3HY1
 JT+p+CQ7foVEbaGnzGXTeMGEdmBvqjYbmGA2AcxRMl8q1xBHkJPm6gJsVmSw285WvvohBezO
 BOD0e+vzMU70ISWgV9fPNvqVpVCIVnIHtX5TPHEBudzjmxKXFbfpklGPBfIt0i0yRREufxuY
 v+zLJfzZUv2/Iw6lVJasc9Gie91rs3/rEuOLa3GI+OPiuDOOC/FFe9YazNjrIkRtcu5nekcy
 P4HX+Pi9vmVeLSWjvX/mWLLEW03EA==
IronPort-HdrOrdr: A9a23:fA9MuajhUOLgsfO5aHKh7Ri6RHBQXtYji2hC6mlwRA09TySZ//
 rBoB19726StN9xYgBFpTnuAsm9qB/nmaKdgrNhWItKPjOW21dARbsKheCJrgEIcxeOkNK1vp
 0AT0ERMrLN5CBB/KTH3DU=
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="74746993"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] x86/spec-ctrl: Knobs for STIBP and PSFD, and follow hardware STIBP hint
Date: Wed, 29 Jun 2022 19:45:08 +0100
Message-ID: <20220629184508.15956-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220629184508.15956-1-andrew.cooper3@citrix.com>
References: <20220629184508.15956-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

STIBP and PSFD are slightly weird bits, because they're both implied by other
bits in MSR_SPEC_CTRL.  Add fine grain controls for them, and take the
implications into account when setting IBRS/SSBD.

Rearrange the IBPB text/variables/logic to keep all the MSR_SPEC_CTRL bits
together, for consistency.

However, AMD have a hardware hint CPUID bit recommending that STIBP be set
unilaterally.  This is advertised on Zen3, so follow the recommendation.
Furthermore, in such cases, set STIBP behind the guest's back for now.  This
has negligible overhead for the guest, but saves a WRMSR on vmentry.  This is
the only default change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Tweak comments/logic per suggestion.
 * Also set STIBP behind the guest's back to improve the vmentry path.
---
 docs/misc/xen-command-line.pandoc | 21 ++++++++++---
 xen/arch/x86/hvm/svm/vmcb.c       |  9 ++++++
 xen/arch/x86/spec_ctrl.c          | 65 +++++++++++++++++++++++++++++++++------
 3 files changed, 81 insertions(+), 14 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index a92b7d228cae..da18172e50c5 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2258,8 +2258,9 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
 
 ### spec-ctrl (x86)
 > `= List of [ <bool>, xen=<bool>, {pv,hvm,msr-sc,rsb,md-clear}=<bool>,
->              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,eager-fpu,
->              l1d-flush,branch-harden,srb-lock,unpriv-mmio}=<bool> ]`
+>              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,psfd,
+>              eager-fpu,l1d-flush,branch-harden,srb-lock,
+>              unpriv-mmio}=<bool> ]`
 
 Controls for speculative execution sidechannel mitigations.  By default, Xen
 will pick the most appropriate mitigations based on compiled in support,
@@ -2309,9 +2310,10 @@ On hardware supporting IBRS (Indirect Branch Restricted Speculation), the
 If Xen is not using IBRS itself, functionality is still set up so IBRS can be
 virtualised for guests.
 
-On hardware supporting IBPB (Indirect Branch Prediction Barrier), the `ibpb=`
-option can be used to force (the default) or prevent Xen from issuing branch
-prediction barriers on vcpu context switches.
+On hardware supporting STIBP (Single Thread Indirect Branch Predictors), the
+`stibp=` option can be used to force or prevent Xen using the feature itself.
+By default, Xen will use STIBP when IBRS is in use (IBRS implies STIBP), and
+when hardware hints recommend using it as a blanket setting.
 
 On hardware supporting SSBD (Speculative Store Bypass Disable), the `ssbd=`
 option can be used to force or prevent Xen using the feature itself.  On AMD
@@ -2319,6 +2321,15 @@ hardware, this is a global option applied at boot, and not virtualised for
 guest use.  On Intel hardware, the feature is virtualised for guests,
 independently of Xen's choice of setting.
 
+On hardware supporting PSFD (Predictive Store Forwarding Disable), the `psfd=`
+option can be used to force or prevent Xen using the feature itself.  By
+default, Xen will not use PSFD.  PSFD is implied by SSBD, and SSBD is off by
+default.
+
+On hardware supporting IBPB (Indirect Branch Prediction Barrier), the `ibpb=`
+option can be used to force (the default) or prevent Xen from issuing branch
+prediction barriers on vcpu context switches.
+
 On all hardware, the `eager-fpu=` option can be used to force or prevent Xen
 from using fully eager FPU context switches.  This is currently implemented as
 a global control.  By default, Xen will choose to use fully eager context
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 958309657799..0fc57dfd71cf 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -29,6 +29,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/svmdebug.h>
+#include <asm/spec_ctrl.h>
 
 struct vmcb_struct *alloc_vmcb(void)
 {
@@ -176,6 +177,14 @@ static int construct_vmcb(struct vcpu *v)
             vmcb->_pause_filter_thresh = SVM_PAUSETHRESH_INIT;
     }
 
+    /*
+     * When default_xen_spec_ctrl is simply SPEC_CTRL_STIBP, default this
+     * behind the back of the VM too.  Our SMT topology isn't accurate, the
+     * overhead is negligible, and doing this saves a WRMSR on the vmentry path.
+     */
+    if ( default_xen_spec_ctrl == SPEC_CTRL_STIBP )
+        v->arch.msrs->spec_ctrl.raw = SPEC_CTRL_STIBP;
+
     return 0;
 }
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 57f4fcb21398..efed24933d91 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -48,9 +48,13 @@ static enum ind_thunk {
     THUNK_LFENCE,
     THUNK_JMP,
 } opt_thunk __initdata = THUNK_DEFAULT;
+
 static int8_t __initdata opt_ibrs = -1;
+int8_t __initdata opt_stibp = -1;
+bool __read_mostly opt_ssbd;
+int8_t __initdata opt_psfd = -1;
+
 bool __read_mostly opt_ibpb = true;
-bool __read_mostly opt_ssbd = false;
 int8_t __read_mostly opt_eager_fpu = -1;
 int8_t __read_mostly opt_l1d_flush = -1;
 static bool __initdata opt_branch_harden = true;
@@ -172,12 +176,20 @@ static int __init cf_check parse_spec_ctrl(const char *s)
             else
                 rc = -EINVAL;
         }
+
+        /* Bits in MSR_SPEC_CTRL. */
         else if ( (val = parse_boolean("ibrs", s, ss)) >= 0 )
             opt_ibrs = val;
-        else if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
-            opt_ibpb = val;
+        else if ( (val = parse_boolean("stibp", s, ss)) >= 0 )
+            opt_stibp = val;
         else if ( (val = parse_boolean("ssbd", s, ss)) >= 0 )
             opt_ssbd = val;
+        else if ( (val = parse_boolean("psfd", s, ss)) >= 0 )
+            opt_psfd = val;
+
+        /* Misc settings. */
+        else if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
+            opt_ibpb = val;
         else if ( (val = parse_boolean("eager-fpu", s, ss)) >= 0 )
             opt_eager_fpu = val;
         else if ( (val = parse_boolean("l1d-flush", s, ss)) >= 0 )
@@ -376,7 +388,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s, Other:%s%s%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s%s, Other:%s%s%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -390,6 +402,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (!boot_cpu_has(X86_FEATURE_SSBD) &&
             !boot_cpu_has(X86_FEATURE_AMD_SSBD))     ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           (!boot_cpu_has(X86_FEATURE_PSFD) &&
+            !boot_cpu_has(X86_FEATURE_INTEL_PSFD))   ? "" :
+           (default_xen_spec_ctrl & SPEC_CTRL_PSFD)  ? " PSFD+" : " PSFD-",
            !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
            (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            !cpu_has_srbds_ctrl                       ? "" :
@@ -980,10 +995,7 @@ void __init init_speculation_mitigations(void)
         if ( !has_spec_ctrl )
             printk(XENLOG_WARNING "?!? CET active, but no MSR_SPEC_CTRL?\n");
         else if ( opt_ibrs == -1 )
-        {
             opt_ibrs = ibrs = true;
-            default_xen_spec_ctrl |= SPEC_CTRL_IBRS | SPEC_CTRL_STIBP;
-        }
 
         if ( opt_thunk == THUNK_DEFAULT || opt_thunk == THUNK_RETPOLINE )
             thunk = THUNK_JMP;
@@ -1087,14 +1099,49 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
-    /* If we have IBRS available, see whether we should use it. */
+    /* Figure out default_xen_spec_ctrl. */
     if ( has_spec_ctrl && ibrs )
+    {
+        /* IBRS implies STIBP.  */
+        if ( opt_stibp == -1 )
+            opt_stibp = 1;
+
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
+    }
+
+    /*
+     * Use STIBP by default if the hardware hint is set.  Otherwise, leave it
+     * off as it has a severe performance penalty on pre-eIBRS Intel hardware
+     * where it was retrofitted in microcode.
+     */
+    if ( opt_stibp == -1 )
+        opt_stibp = !!boot_cpu_has(X86_FEATURE_STIBP_ALWAYS);
+
+    if ( opt_stibp && (boot_cpu_has(X86_FEATURE_STIBP) ||
+                       boot_cpu_has(X86_FEATURE_AMD_STIBP)) )
+        default_xen_spec_ctrl |= SPEC_CTRL_STIBP;
 
-    /* If we have SSBD available, see whether we should use it. */
     if ( opt_ssbd && (boot_cpu_has(X86_FEATURE_SSBD) ||
                       boot_cpu_has(X86_FEATURE_AMD_SSBD)) )
+    {
+        /* SSBD implies PSFD */
+        if ( opt_psfd == -1 )
+            opt_psfd = 1;
+
         default_xen_spec_ctrl |= SPEC_CTRL_SSBD;
+    }
+
+    /*
+     * Don't use PSFD by default.  AMD designed the predictor to
+     * auto-clear on privilege change.  PSFD is implied by SSBD, which is
+     * off by default.
+     */
+    if ( opt_psfd == -1 )
+        opt_psfd = 0;
+
+    if ( opt_psfd && (boot_cpu_has(X86_FEATURE_PSFD) ||
+                      boot_cpu_has(X86_FEATURE_INTEL_PSFD)) )
+        default_xen_spec_ctrl |= SPEC_CTRL_PSFD;
 
     /*
      * PV guests can create RSB entries for any linear address they control,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 18:45:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 18:45:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358035.586988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6cgy-0000mY-Dr; Wed, 29 Jun 2022 18:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358035.586988; Wed, 29 Jun 2022 18:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6cgy-0000mR-BB; Wed, 29 Jun 2022 18:45:36 +0000
Received: by outflank-mailman (input) for mailman id 358035;
 Wed, 29 Jun 2022 18:45:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6b1=XE=citrix.com=prvs=172711fe8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o6cgx-0000mL-4o
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 18:45:35 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a67b003b-f7db-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 20:45:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a67b003b-f7db-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656528332;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=bZC+c6jVkNdg+OpYTgJbDfKY8xiUZ80gFTUBVJE8k2k=;
  b=Ly5XpRfng0n66PWXCyEhXjX/1qk/P42S5aGYXA99vKU6r8SeX+xRS9Oh
   q/IZAN8STrrpknkz3tg914fjd1Al7SIr9hXMO6OFUOpbaLCV3+QlrHC4q
   kH+VZf+3MO2URL++XsWkUfHcywOiQH8AsFxcEBH6hN5jG+5PzVOV2dgjg
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 75147939
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,231,1650945600"; 
   d="scan'208";a="75147939"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] x86/spec-ctrl: Minor fixes
Date: Wed, 29 Jun 2022 19:45:06 +0100
Message-ID: <20220629184508.15956-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Patch 2 has been posted before, but ages ago, and has been rebased over
subsequent XSAs.  Patch 1 is new.

Andrew Cooper (2):
  x86/spec-ctrl: Only adjust MSR_SPEC_CTRL for idle with legacy IBRS
  x86/spec-ctrl: Knobs for STIBP and PSFD, and follow hardware STIBP hint

 docs/misc/xen-command-line.pandoc      | 21 +++++++---
 xen/arch/x86/hvm/svm/vmcb.c            |  9 ++++
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 xen/arch/x86/include/asm/spec_ctrl.h   |  5 ++-
 xen/arch/x86/spec_ctrl.c               | 75 +++++++++++++++++++++++++++++-----
 5 files changed, 93 insertions(+), 19 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jun 29 19:02:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 19:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358054.587021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6cxc-0004dw-KV; Wed, 29 Jun 2022 19:02:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358054.587021; Wed, 29 Jun 2022 19:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6cxc-0004dp-HB; Wed, 29 Jun 2022 19:02:48 +0000
Received: by outflank-mailman (input) for mailman id 358054;
 Wed, 29 Jun 2022 19:02:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Omrp=XE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1o6cxa-0004dj-RY
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 19:02:47 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2040.outbound.protection.outlook.com [40.107.236.40])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e814f6d-f7de-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 21:02:44 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN2PR12MB3213.namprd12.prod.outlook.com (2603:10b6:208:af::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5353.14; Wed, 29 Jun
 2022 19:02:41 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683%4]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 19:02:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e814f6d-f7de-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4JGreh3MVBlu7GqnDF7jol/Jz//foxmoiPKAeC7DaPk=;
 b=qZMvPozvhDQdgWBKK1JrErT5AR0rf/bsY7Smd83YqOX+XwWXS6z92YpCaFsiC3mMdjdjzs8gx+toZ/Sx+y+ADxaOBllTgCncRnbXWRl9WMNGb7v4GArk2FIWS1V85SrF/XaqKeEi2mFbTpi/997tpZrCtjp+6jHJE2n2qP+p3Vc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <cafc1602-6f5f-3238-801d-29c13ed37f50@amd.com>
Date: Wed, 29 Jun 2022 20:02:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
To: xenia <burzalodowa@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, viryaos-discuss@lists.sourceforge.net
References: <20220626184536.666647-1-burzalodowa@gmail.com>
 <20220626184536.666647-2-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
 <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P123CA0007.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:a6::19) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4dbb6d1f-1523-4c13-5463-08da5a01effa
X-MS-TrafficTypeDiagnostic: MN2PR12MB3213:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4dbb6d1f-1523-4c13-5463-08da5a01effa
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 19:02:39.6629
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DZ1Kq5CIzUT/CrHm+7XB0s8j5ontfvWzuPxojisrYN4Vi5vd932RaMh3AS0j/g9sIQL4t4s54U+EEO/V+8a1oQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB3213

Hi Stefano/Xenia,

On 29/06/2022 18:01, xenia wrote:
> Hi Stefano,
>
> On 6/29/22 03:28, Stefano Stabellini wrote:
>> On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
>>> To be in line with Xen, do not enable direct mapping automatically 
>>> for all
>>> statically allocated domains.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>> Actually I don't know about this one. I think it is OK that ImageBuilder
>> defaults are different from Xen defaults. This is a case where I think
>> it would be good to enable DOMU_DIRECT_MAP by default when
>> DOMU_STATIC_MEM is specified.
> Just realized that I forgot to add [ImageBuilder] tag to the patches. 
> Sorry about that.

@Stefano, why do you want ImageBuilder's behaviour to differ from 
Xen's? Is there a use-case where this helps?

- Ayan

>
> I cc Ayan, since the change was suggested by him.
> I have no strong preference on the default value.
>
> Xenia
>
>>> ---
>>>   README.md                | 4 ++--
>>>   scripts/uboot-script-gen | 8 ++------
>>>   2 files changed, 4 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/README.md b/README.md
>>> index cb15ca5..03e437b 100644
>>> --- a/README.md
>>> +++ b/README.md
>>> @@ -169,8 +169,8 @@ Where:
>>>     if specified, indicates the host physical address regions
>>>     [baseaddr, baseaddr + size) to be reserved to the VM for static 
>>> allocation.
>>>   -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>>> -  If set to 1, the VM is direct mapped. The default is 1.
>>> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
>>> +  By default, direct mapping is disabled.
>>>     This is only applicable when DOMU_STATIC_MEM is specified.
>>>     - LINUX is optional but specifies the Linux kernel for when Xen 
>>> is NOT
>>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>>> index 085e29f..66ce6f7 100755
>>> --- a/scripts/uboot-script-gen
>>> +++ b/scripts/uboot-script-gen
>>> @@ -52,7 +52,7 @@ function dt_set()
>>>               echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>>>           elif test $data_type = "bool"
>>>           then
>>> -            if test "$data" -eq 1
>>> +            if test "$data" == "1"
>>>               then
>>>                   echo "fdt set $path $var" >> $UBOOT_SOURCE
>>>               fi
>>> @@ -74,7 +74,7 @@ function dt_set()
>>>               fdtput $FDTEDIT -p -t s $path $var $data
>>>           elif test $data_type = "bool"
>>>           then
>>> -            if test "$data" -eq 1
>>> +            if test "$data" == "1"
>>>               then
>>>                   fdtput $FDTEDIT -p $path $var
>>>               fi
>>> @@ -491,10 +491,6 @@ function xen_config()
>>>           then
>>>               DOMU_CMD[$i]="console=ttyAMA0"
>>>           fi
>>> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
>>> -        then
>>> -             DOMU_DIRECT_MAP[$i]=1
>>> -        fi
>>>           i=$(( $i + 1 ))
>>>       done
>>>   }
>>> -- 
>>> 2.34.1
>>>
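(As an aside on the `-eq` to `==` hunks in the patch above: with
DOMU_DIRECT_MAP left unset, as it now is by default, the old numeric
test emits an "integer expression expected" error, whereas a string
comparison simply evaluates to false. A quick illustration, using a
throwaway `data` variable rather than the real script:)

```shell
# Illustrative only: compare the old numeric test with the new string
# test when the variable is empty/unset.
data=""
err=$(test "$data" -eq 1 2>&1) || true   # old form: errors out
if test "$data" = "1"; then str=match; else str=nomatch; fi  # new form: cleanly false
echo "numeric-error:${err:+yes} string:$str"
```

This prints `numeric-error:yes string:nomatch`. (Note `test`'s portable
string operator is `=`; bash also accepts the `==` spelling used in the
patch.)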


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 20:04:11 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 20:04:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358060.587033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6duj-00037N-5K; Wed, 29 Jun 2022 20:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358060.587033; Wed, 29 Jun 2022 20:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6duj-00037G-0y; Wed, 29 Jun 2022 20:03:53 +0000
Received: by outflank-mailman (input) for mailman id 358060;
 Wed, 29 Jun 2022 20:03:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6duh-00037A-69
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 20:03:51 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 969de4bb-f7e6-11ec-bd2d-47488cf2e6aa;
 Wed, 29 Jun 2022 22:03:49 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id CA8E6620D2;
 Wed, 29 Jun 2022 20:03:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A553C34114;
 Wed, 29 Jun 2022 20:03:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 969de4bb-f7e6-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656533027;
	bh=welFEjVoKls0lDEjJTFKkpCLJrP+fIwuTGzM9lJhdbM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jQ/6N+RcquSq/VinnIGfZma81pTSCvwbpUyOr556DOn5aJWN1CaR/xbqPHFqKndhr
	 xS+Gm7p1WTaV5Apk3YAx11sIpGBT+dYtYuiA6UVtucnsGIzXasxe0YXFNgHeaBtfo8
	 wx9Y4/0r7JRXwjsAAo/vkdQ2Z5tBYf8MDjDTgl9+qRhaWabqgSyvnpola5Enain/rM
	 1JUKe78dMS97GYN5rSoXa72e6xr6ST1NHpYWw09RtvV/lOHRQIjznpcw+9uft7Pa1W
	 DbWX7HcspjO+zpFQo3c8jWahfNyW4mIOePVDmMp18IL60OgrvjWDveQ4HFvx4dnPpH
	 9n3tpnysFhM4A==
Date: Wed, 29 Jun 2022 13:03:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, jgross@suse.com, 
    boris.ostrovsky@oracle.com
Subject: Re: Reg. Tee init fail...
In-Reply-To: <7689497b-1977-b30a-5835-587fa266c721@xen.org>
Message-ID: <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
References: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com> <7689497b-1977-b30a-5835-587fa266c721@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Adding Juergen and Boris because this is a Linux/x86 issue.


As you can see from this Linux driver:
https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132

Linux as dom0 on x86 is trying to communicate with firmware (TEE). Linux
calls __pa to pass a physical address to the firmware. However, under
Xen __pa returns a "fake" (pseudo-physical) address, not an MFN. I
imagine that a quick workaround would be to call "virt_to_machine"
instead of "__pa" in tee-dev.c.

Normally, if this were a device, the "right fix" would be to use
swiotlb-xen's xen_swiotlb_map_page to get back a real physical address.

However, xen_swiotlb_map_page is meant to be used as part of the dma_ops
API and takes a struct device *dev as input parameter. Maybe
xen_swiotlb_map_page can be used for tee-dev as well?


Basically tee-dev would need to call dma_map_page before passing
addresses to firmware, and dma_unmap_page when it is done. E.g.:


  cmd_buffer = dma_map_page(dev, virt_to_page(cmd),
                            cmd & ~PAGE_MASK,
                            ring_size,
                            DMA_TO_DEVICE);


Juergen, Boris,
what do you think?



On Fri, 24 Jun 2022, Julien Grall wrote:
> Hi,
> 
> (moving the discussion to xen-devel as I think it is more appropriate)
> 
> On 24/06/2022 10:53, SK, SivaSangeetha (Siva Sangeetha) wrote:
> > [AMD Official Use Only - General]
> 
> Not clear what this means.
> 
> > 
> > Hi Xen team,
> > 
> > In TEE driver, We allocate a ring buffer, get its physical address from
> > __pa() macro, pass the physical address to secure processor for mapping it
> > and using in secure processor side.
> > 
> > Source:
> > https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
> > 
> > This works good natively in Dom0 on the target.
> > When we boot the same Dom0 kernel, with Xen hypervisor enabled, ring init
> > fails.
> 
> Do you have any error message or error code?
> 
> > 
> > 
> > We suspect that the address passed to secure processor, is not same when xen
> > is enabled, and when xen is enabled, some level of address translation might
> > be required to get exact physical address.
> 
> If you are using Xen upstream, Dom0 will be mapped with IPA == PA. So there
> should be no need for translation.
> 
> Can you provide more details on your setup (version of Xen, Linux...)?
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 20:29:08 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 20:29:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358068.587050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6eIw-00060C-8B; Wed, 29 Jun 2022 20:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358068.587050; Wed, 29 Jun 2022 20:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6eIw-000605-4A; Wed, 29 Jun 2022 20:28:54 +0000
Received: by outflank-mailman (input) for mailman id 358068;
 Wed, 29 Jun 2022 20:28:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6n86=XE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6eIu-0005zx-6K
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 20:28:52 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 155ecde8-f7ea-11ec-bdce-3d151da133c5;
 Wed, 29 Jun 2022 22:28:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id CDC83B82604;
 Wed, 29 Jun 2022 20:28:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BAD12C34114;
 Wed, 29 Jun 2022 20:28:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 155ecde8-f7ea-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656534527;
	bh=wTwvtie98c9AFO1fnYU065UEv2WSdG55d4PR8TL9RbM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=g5ID/EY6Dnms9WiYNPfH5ING/dj2bbc2Ct5ygGawRhuTUTA8r1nbwN96JiGpqGEIs
	 v8AJCbC5UZUhdnhFC6bFk/vvVr1zP7HCXFNtN56OdNi0uF8nOVot93/EFnZtPUYdcZ
	 nodATMFURM1H8EMaa8x704vDMhAQPlr+vaep5/SNYp6uwgfVNi4WnFPzhVA4IrxeaP
	 ht3j/c/SaGhi34gwpo2COY12tA8JsxaRxVgj0SMqyOCKBnUDpnibiFKlEIDdx4Uerm
	 wDtDVHaBuGYZM12sKg/hhutzyFmmIdYckVpROYfctiN/z57Hbn6+iAxRU43IulOIfp
	 LGaijoANm5wQw==
Date: Wed, 29 Jun 2022 13:28:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayankuma@amd.com>
cc: xenia <burzalodowa@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, viryaos-discuss@lists.sourceforge.net
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
In-Reply-To: <cafc1602-6f5f-3238-801d-29c13ed37f50@amd.com>
Message-ID: <alpine.DEB.2.22.394.2206291323470.4389@ubuntu-linux-20-04-desktop>
References: <20220626184536.666647-1-burzalodowa@gmail.com> <20220626184536.666647-2-burzalodowa@gmail.com> <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop> <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
 <cafc1602-6f5f-3238-801d-29c13ed37f50@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1972016207-1656534527=:4389"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1972016207-1656534527=:4389
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 29 Jun 2022, Ayan Kumar Halder wrote:
> Hi Stefano/Xenia,
> 
> On 29/06/2022 18:01, xenia wrote:
> > Hi Stefano,
> > 
> > On 6/29/22 03:28, Stefano Stabellini wrote:
> > > On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
> > > > To be in line with Xen, do not enable direct mapping automatically
> > > > for all statically allocated domains.
> > > > 
> > > > Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> > > Actually I don't know about this one. I think it is OK that ImageBuilder
> > > defaults are different from Xen defaults. This is a case where I think
> > > it would be good to enable DOMU_DIRECT_MAP by default when
> > > DOMU_STATIC_MEM is specified.
> > Just realized that I forgot to add [ImageBuilder] tag to the patches. Sorry
> > about that.
> 
> @Stefano, why do you want ImageBuilder's behaviour to differ from Xen's?
> Is there a use case where that helps?

As background, ImageBuilder is meant to be very simple to use,
especially for the most common configurations. In fact, I think
ImageBuilder doesn't necessarily have to support all the options that
Xen offers, only the most common and important ones.

If someone wants an esoteric option, they can always edit the generated
boot.source and make any necessary changes. I make sure to explain that
editing boot.source is always a possibility in all the talks I have
given about ImageBuilder.

Now to answer the specific question. I am positive that the most common
configuration for people who want static memory is to have direct_map.
That is because the two go hand in hand in configurations where the
IOMMU is not used. So I think that from an ImageBuilder perspective
direct_map should default to enabled when static memory is requested. It
can always be disabled, either by setting DOMU_DIRECT_MAP to 0 or by
editing boot.source.
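That default could look something like the following sketch (illustrative
only; the real uboot-script-gen tracks these settings in per-domU bash
arrays, collapsed here into plain variables):

```shell
# Sketch: enable direct mapping by default only when static memory is
# requested, instead of unconditionally (values are illustrative).
DOMU_STATIC_MEM="0x30000000 0x10000000"   # example static region
DOMU_DIRECT_MAP=""                        # left unset by the user
if test -n "$DOMU_STATIC_MEM" && test -z "$DOMU_DIRECT_MAP"
then
    DOMU_DIRECT_MAP=1
fi
echo "DOMU_DIRECT_MAP=$DOMU_DIRECT_MAP"
```

This prints `DOMU_DIRECT_MAP=1`; with DOMU_STATIC_MEM empty, the default
would stay unset and direct mapping would remain disabled.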


> > I cc Ayan, since the change was suggested by him.
> > I have no strong preference on the default value.
> > 
> > Xenia
> > 
> > > > ---
> > > >   README.md                | 4 ++--
> > > >   scripts/uboot-script-gen | 8 ++------
> > > >   2 files changed, 4 insertions(+), 8 deletions(-)
> > > > 
> > > > diff --git a/README.md b/README.md
> > > > index cb15ca5..03e437b 100644
> > > > --- a/README.md
> > > > +++ b/README.md
> > > > @@ -169,8 +169,8 @@ Where:
> > > >     if specified, indicates the host physical address regions
> > > >     [baseaddr, baseaddr + size) to be reserved to the VM for static
> > > > allocation.
> > > >   -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
> > > > -  If set to 1, the VM is direct mapped. The default is 1.
> > > > +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
> > > > +  By default, direct mapping is disabled.
> > > >     This is only applicable when DOMU_STATIC_MEM is specified.
> > > >     - LINUX is optional but specifies the Linux kernel for when Xen is
> > > > NOT
> > > > diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> > > > index 085e29f..66ce6f7 100755
> > > > --- a/scripts/uboot-script-gen
> > > > +++ b/scripts/uboot-script-gen
> > > > @@ -52,7 +52,7 @@ function dt_set()
> > > >               echo "fdt set $path $var $array" >> $UBOOT_SOURCE
> > > >           elif test $data_type = "bool"
> > > >           then
> > > > -            if test "$data" -eq 1
> > > > +            if test "$data" == "1"
> > > >               then
> > > >                   echo "fdt set $path $var" >> $UBOOT_SOURCE
> > > >               fi
> > > > @@ -74,7 +74,7 @@ function dt_set()
> > > >               fdtput $FDTEDIT -p -t s $path $var $data
> > > >           elif test $data_type = "bool"
> > > >           then
> > > > -            if test "$data" -eq 1
> > > > +            if test "$data" == "1"
> > > >               then
> > > >                   fdtput $FDTEDIT -p $path $var
> > > >               fi
> > > > @@ -491,10 +491,6 @@ function xen_config()
> > > >           then
> > > >               DOMU_CMD[$i]="console=ttyAMA0"
> > > >           fi
> > > > -        if test -z "${DOMU_DIRECT_MAP[$i]}"
> > > > -        then
> > > > -             DOMU_DIRECT_MAP[$i]=1
> > > > -        fi
> > > >           i=$(( $i + 1 ))
> > > >       done
> > > >   }
> > > > -- 
> > > > 2.34.1
> > > > 
> 
--8323329-1972016207-1656534527=:4389--


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 20:58:19 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 20:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358075.587064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6elJ-0001IA-Hr; Wed, 29 Jun 2022 20:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358075.587064; Wed, 29 Jun 2022 20:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6elJ-0001I3-Ed; Wed, 29 Jun 2022 20:58:13 +0000
Received: by outflank-mailman (input) for mailman id 358075;
 Wed, 29 Jun 2022 20:58:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6elI-0001Hs-3A; Wed, 29 Jun 2022 20:58:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6elH-0003sO-WA; Wed, 29 Jun 2022 20:58:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6elH-0003bw-Dc; Wed, 29 Jun 2022 20:58:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6elH-0008Lh-DC; Wed, 29 Jun 2022 20:58:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a0xCkNrH2CI5TOdNJ4QRZcTgh3sf+8R4ClOhUKR9GdE=; b=nDoG+4WFKLCJBYjKIfokzPcOQp
	tPsowBWdYOw0hrplSahRjKuuKZ8hAYrEDmrUhwH2j64JtCF+d0c7MT63sDmoFxMYCVKeqBjLDjlQ9
	c4RxM+dXGssaR0Uy5NMn2amjUeUTCrSmvl9tFfc5/Fec+Sk2YlhPHhwCx3h3wegoAE60=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171402-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171402: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-pvshim:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93aa071f66b78a2abbf134aeb96b02f066e6091d
X-Osstest-Versions-That:
    xen=8c99264c6746541ddbfd7afec533e6ad1c8c41a5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 20:58:11 +0000

flight 171402 xen-unstable real [real]
flight 171409 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171402/
http://logs.test-lab.xenproject.org/osstest/logs/171409/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-pvshim    13 debian-fixup        fail pass in 171409-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start          fail in 171409 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171387
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171387
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171387
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171387
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171387
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171387
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171387
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171387
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171387
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171387
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171387
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171387
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  93aa071f66b78a2abbf134aeb96b02f066e6091d
baseline version:
 xen                  8c99264c6746541ddbfd7afec533e6ad1c8c41a5

Last test of basis   171387  2022-06-28 20:36:55 Z    1 days
Testing same since   171402  2022-06-29 08:48:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c99264c67..93aa071f66  93aa071f66b78a2abbf134aeb96b02f066e6091d -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 21:02:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 21:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358085.587076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6epo-0002q5-9c; Wed, 29 Jun 2022 21:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358085.587076; Wed, 29 Jun 2022 21:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6epo-0002py-6b; Wed, 29 Jun 2022 21:02:52 +0000
Received: by outflank-mailman (input) for mailman id 358085;
 Wed, 29 Jun 2022 21:02:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6epm-0002po-Tt; Wed, 29 Jun 2022 21:02:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6epm-0003zf-Qo; Wed, 29 Jun 2022 21:02:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6epm-00042X-FZ; Wed, 29 Jun 2022 21:02:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6epm-000185-F5; Wed, 29 Jun 2022 21:02:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OsIegG6iAi9b3PEuwyxdubbUb3uOYotnvMXGcDa3sBI=; b=M/ndUQIW0aCptjPlI4zreT77nG
	tdOxpN2oLmzoklXzfcSGwzPGQ/TzBJnIRx9LVnM4MmEYLPpqm+yt70FR1NqmdcR7TBG+2cGNAlVSU
	Dv4sNd0OgrPDIwQWa+rmEojk/adX6EfxI8veb2ibIpoty0T5R+RZ65zk9bZWQBONAOYM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171404-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171404: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-stop:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=941e3e7912696b9fbe3586083a7c2e102cee7a87
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jun 2022 21:02:50 +0000

flight 171404 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171404/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-stop                 fail pass in 171389

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                941e3e7912696b9fbe3586083a7c2e102cee7a87
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z   10 days
Failing since        171280  2022-06-19 15:12:25 Z   10 days   29 attempts
Testing same since   171374  2022-06-27 18:13:03 Z    2 days    5 attempts

------------------------------------------------------------
368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13051 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 22:46:22 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 22:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358093.587087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6gRh-0005OL-Qt; Wed, 29 Jun 2022 22:46:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358093.587087; Wed, 29 Jun 2022 22:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6gRh-0005OE-MF; Wed, 29 Jun 2022 22:46:05 +0000
Received: by outflank-mailman (input) for mailman id 358093;
 Wed, 29 Jun 2022 22:46:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BAEK=XE=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1o6gRe-0005O8-J3
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 22:46:03 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3e9e42d4-f7fd-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 00:46:00 +0200 (CEST)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 25TM4BT5024089;
 Wed, 29 Jun 2022 22:45:44 GMT
Received: from phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta03.appoci.oracle.com [138.1.37.129])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3gws52jm58-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 29 Jun 2022 22:45:44 +0000
Received: from pps.filterd
 (phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com (8.16.1.2/8.16.1.2)
 with SMTP id 25TMfWTR004641; Wed, 29 Jun 2022 22:45:43 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2108.outbound.protection.outlook.com [104.47.70.108])
 by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com with ESMTP id
 3gwrt37kjg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 29 Jun 2022 22:45:43 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BN6PR10MB1347.namprd10.prod.outlook.com (2603:10b6:404:41::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Wed, 29 Jun
 2022 22:45:40 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::e48c:bcc0:fff3:eac6]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::e48c:bcc0:fff3:eac6%9]) with mapi id 15.20.5373.023; Wed, 29 Jun 2022
 22:45:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e9e42d4-f7fd-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=message-id : date :
 subject : to : cc : references : from : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2021-07-09;
 bh=FTiecvSj+tUzFAuHeAb2GCtWdBW97SIGtEyKoxIX3n4=;
 b=JclqjCYW9UwyfFe0CXrj2YoTuajAw/KYhpijRrdk0TqrFMEHbPKlGbOYhZXnkpRG9b9y
 gOSDtn4eo7sHoYPcZIusewb03pA97RTyYd0C101xTB1zj1J1kDK6PS3g7hAuWmqiGmeO
 9yfuitUAIPj9xVOI6twx7ynWqBYbhpSMYasZH58V+AUVwd0x0uLbu+BCR2CfyagCJ07t
 BbfQucPUt1ZOc7Z3/wSoVkS/8R+1/cP4FS0DBVs+8dD1UmNL+YI+5o7glkY0oYwmILUo
 BzsnxExV6hfLoT5lBKEJ4ItcIzTctkbrwWp9lBH+qZERxv/Vrb8Q6C1ay+w0hVivW/PF rg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UJWJZdFP71BykFxUOOo1B6MwMXhktLxH7ZfBGf2CicHvpn83G89EKXLTRJTPuA+YkjBWsRRqWb9hN8ls5ThagZFAM20KtXLjMWQg06m0lq6bFPNkdLwaoce6z1ENp/AJao3KmF6G+eYH5a7HEMZItUd58khWQYAYBM8QuGxhMQZMx8Qq53KplgcZ1crFwpS+SVte+U2cqe4LnyTUDyvEWPnqBWl4OytbgPEYleiDCfpxLtmYqpJL3/Q7dFf86BSfsP6Pkj7v9n2c7WuJclbBpaQAhCanpaxvmyXFDQr9vi2jmI3ZDr8SmQ8lohtImSKHZvj+l4WipbNpIc8p5JHU6Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FTiecvSj+tUzFAuHeAb2GCtWdBW97SIGtEyKoxIX3n4=;
 b=fJkXDWnqF4/x0w0gI+UOUipuZHtbgNMWvIPR8dX5aiJI1fqSn7gzgNOrYgLGoBTPB/ZNvzu04VDq+eljnQ7XgqaTmaLNcy54CJHE0BdJ5qc+Cm68GVQmeqRjuoUm/2ZpzUik4QQryrSTUoEVK1N02Qztw0mMnlDei/mIx4BjF7HLF5qBoMlyVbstd+ATxcygyRpQdUy3ByIJSTTsQOTECUucyG43VrC0O7paMQD/nm0ZlepYS5iYj9OyiXzVUSz84D0RThJTmPczY9TpJLuC1RraAaHpCk290GPM2gC3bJxSPVxpB+9IWBoDY7lUSgjVkpzBZIkekZdiVXfnmRJ3bg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FTiecvSj+tUzFAuHeAb2GCtWdBW97SIGtEyKoxIX3n4=;
 b=xcHFPOTXIsINOM1eoL8WIwVTXiojBBPKGMS2qWP4Haimvhy8l3eDK3BXx4zZzyX4r237Jot2GoXTrbYjn153QFTrQqtcktvoDzRRBJaSd+byfzCVd0E9AGh85Gs5zQdpZh2uZutEWXlXqoCvi1Wnx9BmcLpdfqzD6cVW3PVmsyM=
Message-ID: <873138b7-f70b-2010-217d-7d32042c801e@oracle.com>
Date: Wed, 29 Jun 2022 18:45:37 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: Reg. Tee init fail...
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, jgross@suse.com
References: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
 <7689497b-1977-b30a-5835-587fa266c721@xen.org>
 <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
In-Reply-To: <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: DM6PR03CA0024.namprd03.prod.outlook.com
 (2603:10b6:5:40::37) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a2cf9358-f540-45b5-80d9-08da5a2117b3
X-MS-TrafficTypeDiagnostic: BN6PR10MB1347:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	nKrDDBnnbrhhI0Js374PotaZyGT9DrXpvXX5xdxk2Gbp9YCZkH/jjFtkXpusS/zp4l/CfCIGkKmMzG5r52iEbP6c7gbmJstzoHgdu38p17BvDdlNMnTdc2JEPeVGx7hh7aoLgK4vLldx0cuUJOYGmdxp37sE3wiJBGXQ2xf1oj9RQ/kPggwtKKz7LUGUg+EBnCXRoDrNk/BAzf+VU/Ga7D/fYuAWC0GWOQ093m1W2PVRbjKeGy6nfIgM9LyvBV5r9Scj5hWN9QLMRt+/bi9PcXwRiipt1KT/ij+zQMpt1AcUNcKP3Z/pkeIZ4AM/zXYTZLnS1g6zpurxnfu41IeRiW4v+VW82SO5VUibBESYrUYd/kd+5lAWNQ6356DgFK0IcJFDXUXNdVbS8mzkpN5MDEf9f3lh1BPYj9BqbDa6GQIpnqd3gJDgXIeoBG4h8vjMyEutuFVAnd5KwuzdJV6fdjWuc+KU7OyXRUG8dIL3l8XRIHYSl4nxeCvWU2eHSJPV51dIcdxJtVKHVmNH0bxzrY9ffzApHJb948Rf87DiRB3qvPxzrlrHKI0uc/2wRiuYp4auuhJ9++OuUUNIjTetnW+EYyIXJr1CKq1GXaH46vXAcypkgZBCvNRlHLNCCi/9Jwh2YvObOKBrzr9TC8bmsEDq7dnf/H5Wb9zG4q9lutJ63u/pDxmFn+OdSa/lboUz3JqkEyySzFPPm+dqK/4VI+ynqWJwpWDzHDSTZLbZnBuhs8YwSN/BG/S7LTB2FxyV1oAugxw9ioos6vljb4zknxyUdd9OoAgY2ZkDUuGTUmhvtSVjfSrJxgzdtRBAGt8o7EaJ+PsZJ00L9j6Qy1QljQuf+uYBZn9SkabaH6ZWaKI=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(366004)(396003)(39860400002)(376002)(136003)(346002)(54906003)(6506007)(53546011)(5660300002)(6512007)(186003)(26005)(966005)(110136005)(6486002)(44832011)(41300700001)(2616005)(316002)(6666004)(66476007)(4326008)(8676002)(66556008)(31686004)(8936002)(2906002)(66946007)(83380400001)(478600001)(38100700002)(36756003)(31696002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?VWVCOHQ5Zk9hc0JETExEdUVZUGZ0VEJJUFN0QU1Zajh3THRLWXhNcEpTelZ2?=
 =?utf-8?B?eEg1S01ydmJyY05GeGRmSmk1bWpYZlE5WDAydGhjem80Sm1yaFNpWis2NzFm?=
 =?utf-8?B?SHZEYVZocTRsKzBEOXRLQVVlZERpY01TSWcwR0RCYjhvZDdVYnJ4TXJyd2Rj?=
 =?utf-8?B?YTZ1NzZUOTF5N1Y0cXhlVWViZFVuM3gzdVNnVWpyK2V4RFdWNXA1NERuQWpv?=
 =?utf-8?B?Zm50T3BHUTh6aUpRMUFCQmpsWlFiNWhCS2xFOEVMdmI5T2NTVUgyMTFVeGU5?=
 =?utf-8?B?Ty93R1VjUnRGZTlldEZ4a3pOZjJsNVdRTGJhb3g2ekZ1dUVjUUViZjlIejhl?=
 =?utf-8?B?aW5PNzEzWFlnMHdKRHlab3preXQrUHI5S0pqQ0VlZVd0SzBGaGp0TjIrYVly?=
 =?utf-8?B?OFoyS0g5bElqL25OT3N0UnJkcXRwMldEYjUzVEhhVXJnRlcraE90TU1sVlBn?=
 =?utf-8?B?dFJkMlQreElFTkJTS3NxdTV6SHJtOVh4RWhRdUNxdEVOSExYUkFTQWt2cWhJ?=
 =?utf-8?B?KzYvK1VxYklxZzhDTUdzVlIvOVhZc1Z0dWwyQ3dacUhVMDNnaWRzajF5M2Ry?=
 =?utf-8?B?akhwTlpZVlQ2YnZYeUpMVnIrSThKRlp6bUxHdENmWGR4QzFLWDJhKytpTEFN?=
 =?utf-8?B?a1hrVlVkZjVHdUxTK3QzdTRScnE0dm8vVFVWSXAxcUpVTnlETG5BUGtxYkdQ?=
 =?utf-8?B?cmlKNWJNSUhtY29pMnJxd2MvcndrTmloK05qL3BRc3hZZnhUeGYxbzhyOW9B?=
 =?utf-8?B?TGUvMG02Y3VFaVRnNGk4NW1TVVI4d00zK0ZvZ1Zpa01PQTEyVEdHNUVIbm9s?=
 =?utf-8?B?aWRjM1M0SE50WEpqMlBGQ2lCQU1EWVZkakRRTjNIMVRLRlJsZURtcng4NWRy?=
 =?utf-8?B?VTR0a1loS2pLY1IxSDFFd2NwZDZCc0MrMnowMnA1Z3VBVnFqNFFsNUFxb291?=
 =?utf-8?B?L24zWVhXYXlNUW4xZitoMU9UTEVsbkd4YStkTWY1NnVTRTk4TU9RQ0xHMnpV?=
 =?utf-8?B?MXN1OVllRnVVYzNLc2k2WUg3L0R0Qk0zUWE5TjFXeUhEZGFkMXVUNVZ1MTZx?=
 =?utf-8?B?aVJyQm5CNkY3dndaQTY4alRlTk9mQ2FScG1NeXRrM2pHOHFrSWNQVVdoTDBh?=
 =?utf-8?B?VmdlNm5RZjk0LzFSY2J1dnY5ZXo0NG1VSlB5OVJOVlQwOHFGc3BNamRGVDZU?=
 =?utf-8?B?ajZUWlhCbUQ5aExwaGYycktadjZBYkgyOTFWQlpEMXRjZ3daQzczQThUK3cx?=
 =?utf-8?B?d0kzNUlSY1hPdGxaQnZZdmhNczBCVnd0R1Y1enR3Nk1sTmVzbm1ZR1VJdDI1?=
 =?utf-8?B?czY3akdCaEM5UGVBMXRBRFRia3JOdlp2S1RtVUc0LzRnRVpydm4yYW9Zc1dw?=
 =?utf-8?B?QldpWVBPaVQxV0lQN29YZURXZVFIQ3BWNUpPcFloQm1OSWUyOFlxUzVGRDZT?=
 =?utf-8?B?TmN2aXIwYVY3eWNNdnltS2phWEw5ZGtUVGw1SjRxRXpqV1BvQWN3TnFQSGF1?=
 =?utf-8?B?UlhDMVArc3ZmUE13dHIvdFJId3EwSE8vZ0pXNDlraFB3NFU3S2JZZm5FbVNu?=
 =?utf-8?B?UW5nVENUQy9kNXdDOFdxRXFJcTl4SGNDQXBIN3pYRjFpSXVMUGtRSjY1bWEw?=
 =?utf-8?B?SzBlWHFtWnRWV3BDRXdUb2l0aHdCQkg3KzlZRkhmMll5dkdIRnA4Q28vRFBB?=
 =?utf-8?B?Zzl2N2h4dDNPOHJGbWxnenJuZEx4TDZYbzlOMjREQmNuTUU0OEFGcTdwNHJn?=
 =?utf-8?B?T0luaThCQ05oVmNOVnR6a1diWDZ4R1BvLy9NL1dpUC9Na1JyRW1DK3pBU0lR?=
 =?utf-8?B?ZGZMMHMrM0ZJZGM2UkpYM1B2bGIxS3VscUtJSXkwR3UycWMrcG42dWlJeDVq?=
 =?utf-8?B?SmoxVUkvOEhCWHpBYWt1NVR1a2tMbmo5dTF4WjIrUlI3UmlkSzE1Y3FjWDdU?=
 =?utf-8?B?TXFTbGx3enpUZ3R1dVMzVVdwL1kzaUd3aUI2S0p1NFo3Ymh3QXIvUjd6dzZ4?=
 =?utf-8?B?b2R5VnZqWDdMdit4TzlFQVU3Y0J1dklYZmJwQ1JxSWpZaHNoVlZTYXdIY1VJ?=
 =?utf-8?B?c0pZbkNiYmZJNTFFeVYyaTU1YTIvLzUwU3ZiNS9zdXY4UG1uVldab3JJbzE2?=
 =?utf-8?B?blJTRG5nMHZKWFVJeFJVRHlRRTNxWmQ2ejJQVTBvNG5GeWpXT2RFcGhxM2E3?=
 =?utf-8?B?Y0E9PQ==?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2cf9358-f540-45b5-80d9-08da5a2117b3
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jun 2022 22:45:40.6594
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iQw5nJknr4xRIS9+4MZH2YGihiObUWPL+B3bmQfJC4lr1LUWXGU8PL2YzwC6/IHY3tw1xmjH1X7BwHIJpHP5SimxJsLH5XONsRmVtDfY9cQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR10MB1347
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.517,18.0.883
 definitions=2022-06-29_22:2022-06-28,2022-06-29 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 spamscore=0 bulkscore=0
 phishscore=0 mlxlogscore=999 adultscore=0 mlxscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2204290000
 definitions=main-2206290078
X-Proofpoint-ORIG-GUID: f8tuUtkJT46HmCqR44Pc5axgAGEPOdjD
X-Proofpoint-GUID: f8tuUtkJT46HmCqR44Pc5axgAGEPOdjD


On 6/29/22 4:03 PM, Stefano Stabellini wrote:
> Adding Juergen and Boris because this is a Linux/x86 issue.
>
>
> As you can see from this Linux driver:
> https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
>
> Linux as dom0 on x86 is trying to communicate with firmware (TEE). Linux
> is calling __pa to pass a physical address to firmware. However, __pa
> returns a "fake" (pseudo-physical) address, not an mfn. I imagine that a
> quick workaround would be to call "virt_to_machine" instead of "__pa" in
> tee-dev.c.


It's probably worth a try, but it seems we may need to OR the result with
the C-bit (i.e. sme_me_mask). Or, for testing purposes, run with TSME on;
I think the C-bit is not set then.
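Loosely, combining the two suggestions might look like the fragment below. This is an untested sketch, not a patch: virt_to_machine and __sme_set are existing Linux helpers, but whether this is the right fix for tee-dev is exactly what is under discussion here; "cmd" stands in for the ring buffer pointer in tee-dev.c.

```c
/* Hypothetical, untested sketch: take the real machine address under
 * Xen instead of the pseudo-physical one returned by __pa(), then set
 * the SME encryption bit the way native code does via __sme_set(),
 * which expands to OR-ing with sme_me_mask. */
phys_addr_t maddr = virt_to_machine(cmd).maddr;

maddr = __sme_set(maddr);   /* i.e. maddr | sme_me_mask */
```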


-boris


> Normally, if this was a device, the "right fix" would be to use
> swiotlb-xen:xen_swiotlb_map_page to get back a real physical address.
>
> However, xen_swiotlb_map_page is meant to be used as part of the dma_ops
> API and takes a struct device *dev as input parameter. Maybe
> xen_swiotlb_map_page can be used for tee-dev as well?
>
>
> Basically tee-dev would need to call dma_map_page before passing
> addresses to firmware, and dma_unmap_page when it is done. E.g.:
>
>
>    cmd_buffer = dma_map_page(dev, virt_to_page(cmd),
>                              cmd & ~PAGE_MASK,
>                              ring_size,
>                              DMA_TO_DEVICE);
>
>
> Juergen, Boris,
> what do you think?
>
>
>
> On Fri, 24 Jun 2022, Julien Grall wrote:
>> Hi,
>>
>> (moving the discussion to xen-devel as I think it is more appropriate)
>>
>> On 24/06/2022 10:53, SK, SivaSangeetha (Siva Sangeetha) wrote:
>>> [AMD Official Use Only - General]
>> Not clear what this means.
>>
>>> Hi Xen team,
>>>
>>> In the TEE driver, we allocate a ring buffer, get its physical address
>>> from the __pa() macro, and pass that physical address to the secure
>>> processor, which maps and uses it on its side.
>>>
>>> Source:
>>> https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
>>>
>>> This works well natively in Dom0 on the target.
>>> When we boot the same Dom0 kernel with the Xen hypervisor enabled, ring
>>> init fails.
>> Do you have any error message or error code?
>>
>>>
>>> We suspect that the address passed to the secure processor is not the
>>> same when Xen is enabled, and that some level of address translation
>>> might be required to get the exact physical address.
>> If you are using Xen upstream, Dom0 will be mapped with IPA == PA. So there
>> should be no need for translation.
>>
>> Can you provide more details on your setup (version of Xen, Linux...)?
>>
>> Cheers,
>>
>> -- 
>> Julien Grall
>>
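For reference, the full map/unmap pairing from the dma_map_page suggestion quoted above might look like the following. This is a hypothetical, untested sketch: "dev", "cmd" and "ring_size" stand in for the tee-dev device and ring buffer, and the error handling is illustrative only.

```c
/* Untested sketch of the suggestion in this thread: go through the DMA
 * API so that, when running under Xen, swiotlb-xen can hand back a real
 * machine address for the ring buffer. */
dma_addr_t cmd_buffer;

cmd_buffer = dma_map_page(dev, virt_to_page(cmd),
                          offset_in_page(cmd), ring_size,
                          DMA_TO_DEVICE);
if (dma_mapping_error(dev, cmd_buffer))
        return -ENOMEM;

/* ... pass cmd_buffer to the secure processor, wait for completion ... */

dma_unmap_page(dev, cmd_buffer, ring_size, DMA_TO_DEVICE);
```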


From xen-devel-bounces@lists.xenproject.org Wed Jun 29 22:53:45 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jun 2022 22:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358099.587098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6gZ3-00078g-Ps; Wed, 29 Jun 2022 22:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358099.587098; Wed, 29 Jun 2022 22:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6gZ3-00078Z-Mi; Wed, 29 Jun 2022 22:53:41 +0000
Received: by outflank-mailman (input) for mailman id 358099;
 Wed, 29 Jun 2022 22:53:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6b1=XE=citrix.com=prvs=172711fe8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1o6gZ2-000778-SE
 for xen-devel@lists.xenproject.org; Wed, 29 Jun 2022 22:53:40 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ef9962a-f7fe-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 00:53:38 +0200 (CEST)
Received: from mail-co1nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Jun 2022 18:53:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SN4PR03MB6701.namprd03.prod.outlook.com (2603:10b6:806:21d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Wed, 29 Jun
 2022 22:53:27 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::bd46:feab:b3:4a5c%4]) with mapi id 15.20.5373.018; Wed, 29 Jun 2022
 22:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ef9962a-f7fe-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656543218;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=gJAuH/OQ8b3fMw7jbs23x5gQzTwuFwTBW2cB8pQSdUg=;
  b=Jd9UU9/m9K9E/9HtmcbBHUVEYnFqmS5bIZw23nV6dTWZSDanAodcRTKg
   YwWiu7lrJBgYiM2wMUmhjq9fvpdJPSnzF2bQNuolsvFrMhln7RY4EcQRJ
   VGwq7HwdNA6LDZEwyCXplkgjPCj9e+SJPDwcTTMT1JqtK71hX4sVby3p5
   4=;
X-IronPort-RemoteIP: 104.47.56.177
X-IronPort-MID: 77309708
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:rlk37a3puJ3bi9fWvPbD5Rlwkn2cJEfYwER7XKvMYLTBsI5bpzNVn
 TFKX2qGPvjeZ2L2L4p/aNm+oUsEuMeBmt9kHgdrpC1hF35El5HIVI+TRqvS04J+DSFhoGZPt
 Zh2hgzodZhsJpPkjk7xdOCn9xGQ7InQLlbGILes1htZGEk1Ek/NtTo5w7Rj2tAy0IDja++wk
 YiaT/P3aQfNNwFcagr424rbwP+4lK2v0N+wlgVWicFj5DcypVFMZH4sDfjZw0/DaptVBoaHq
 9Prl9lVyI97EyAFUbtJmp6jGqEDryW70QKm0hK6UID66vROS7BbPg/W+5PwZG8O4whlkeydx
 /1vtd+VeQpzApfOt8oWbQlDVAZsYIFvreqvzXiX6aR/zmXgWl61mrBFKxhzOocVvOFqHWtJ6
 PoUbigXaQyOjP63x7T9TfRwgsMkL4/gO4Z3VnNIlGmFS6p5B82dBfyVure03x9p7ixKNd/Ya
 9AUdnxEaxPYbgcUElwWFIg/jKGjgXyXnzhw9w3O/ftouzi7IApZ4bP/Fd7VS9C2ddxOsU3ft
 Ej23m/gK0RPXDCY4X/fmp62vcfNgCf6VYQ6BLC+sPlwjzW7/W0NASYfU1S2rOW5gwiFePpWL
 kBS8S0rxYAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxLnMfUjdLZdgitck3bT8nz
 FmEm5XuHzMHmLeYU26H/7GY6za7IzEILHQqbDUBCwAC5rHLnoY3iR7eS8d5J4S8hNb1BDLYz
 iiDqW41gLB7sCIQ/6Cy/FSCiDX1oJHMF1cx/l+OAT3j6R5lbom4YYDu8ULc8ftLMIeeSB+Go
 WQAnM+dqusJCPlhiRCwfQnEJ5nxj97tDdEWqQcH80UJn9h1x0OeQA==
IronPort-HdrOrdr: A9a23:yLCba62KwqM3IrC03YlbOAqjBRFyeYIsimQD101hICG9Lfb0qy
 n+pp4mPEHP4wr5AEtQ4uxpOMG7MBDhHQYc2/hdAV7QZnidhILOFvAv0WKC+UyrJ8SazIJgPM
 hbAs9D4bHLbGSSyPyKmDVQcOxQj+VvkprY49s2pk0FJW4FV0gj1XYBNu/xKDwVeOAyP+tcKH
 Pq3Lsjm9PPQxQqR/X+IkNAc/nIptXNmp6jSwUBHQQb5A6Hii7twKLmEjCDty1uEg9n8PMHyy
 zoggb57qKsv7WQ0RnHzVLe6JxQhZ/I1sZDPsqRkcIYQw+cyjpAJb4RGIFqjgpF5d1H22xa1O
 UkZC1QePib3kmhPF1dZyGdnTUIngxeskMKgmXo/EcL6faJOA7STfAxy76xOyGplXbJ9rtHod
 129nPcuJxNARzamiPho9DOShFxj0Kx5WEviOgJkhVkIMIjgZJq3PsiFXluYeE9NTO/7JpiHP
 hlDcna6voTeVSGb2rBtm0qxNC3RHw8EhqPX0BH46WuonNrtWE8y1FdyN0Un38G+p54Q55Y5/
 7cOqAtkL1VVMcZYa90Ge9ES8qqDW7GRw7KLQupUBzaPbBCP2iIp4/84b0z6u3vcJsUzIEqkJ
 CES19cvX5aQTObNSRP5uw/zvngehTMYd228LAu23FQgMyOeJP7dSueVVspj8ys5/0CH8yzYY
 fABK5r
X-IronPort-AV: E=Sophos;i="5.92,232,1650945600"; 
   d="scan'208";a="77309708"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CvWhKNfsJ+0l52ih3pSUTG+5637+NP0TQk38nzKiluBuYTKARHAPa52AnUaUH3z/9m40Q3tquxh5vQ/TI0cWKbhLctwNNFyOasveGklNhArdVhpCsgCmZ271YeSG8/PXP8+0u6eiI0M6EuwVYdCxVG4yK51QDFsoM6K1HqMB44EP3F7W/fCaAgU5YZbdJ0iHzOkDY+APczmdVvhmFRu7VzkN7a2o1HGYIX9SeWkQXGm7O6CNlXoDOh9unrSwxoeharg4rbidpOpPH4mi2Cb/vyOCDNQT91VafUVXtsYF2jHYBMqPUd1PWEhE8VLykHrfePEXV4o/mkrCEtZ7EBSrcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gJAuH/OQ8b3fMw7jbs23x5gQzTwuFwTBW2cB8pQSdUg=;
 b=dN4XbXc2r+xa6ZTP91kr4OzK7BBMetz3hug/DHOxedA2/5+p1CKIF3WLzAu7PXf0mmBzFFmEzcxRKWHepO5Pt8QainvGtcRgjXeMagSmu+GlwkbkIu/XbwWQqOvxZsSCKkQM7ITfghjdx3FB30FH+6JRmEtxnClvsa0iUhR2DhF7H8kSPz92U/+QIMPcwdaySvuztjAXegFLB5OTk1eOXmU/O5qlHmucUQGBYYZa/M9/h7XWWkv5Rfit6Q1wZ7JRlXge0Vzpy5djyd5zuQX1f37QpWbCDWm9xN2dpX/SdRXxv5Erxt/EjUmE0rriE/3iNvqAzcJcSDSfmtGfVq3/sw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gJAuH/OQ8b3fMw7jbs23x5gQzTwuFwTBW2cB8pQSdUg=;
 b=kG1uqpRfALfKIC05FtN30zmZqYogFTF017YeN408Hyqlcd4J3ulSkrfFrc3ZwQYMxemOEERZbErG5hMCEV5PNXEEgHAuXLiwbOJJqcdwJYhVgoQ6JeBIaJtTKTKTyOUv100d2a+TS8b1KAfg9DWRH5NUz4e+tkpu5M3J0DbSBGs=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, "SK, SivaSangeetha (Siva Sangeetha)"
	<SivaSangeetha.SK@amd.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Reg. Tee init fail...
Thread-Topic: Reg. Tee init fail...
Thread-Index: AQHYh+/iOCuX3wO5yE+jj9xk6AFSM61nBjSA
Date: Wed, 29 Jun 2022 22:53:26 +0000
Message-ID: <40929dd3-f2bb-b20d-48e1-1e23417a1d7a@citrix.com>
References:
 <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
 <7689497b-1977-b30a-5835-587fa266c721@xen.org>
In-Reply-To: <7689497b-1977-b30a-5835-587fa266c721@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 543bd88f-437a-407f-85b2-08da5a222dc5
x-ms-traffictypediagnostic: SN4PR03MB6701:EE_
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 YrqDBnxaMOJdfn/dE0VYrOdNQa4U5aOniEiRDIldm5OAZbRyG+pIR8Ut7jvV65agz5shau3AIRZZAN/SbEc5lC2Uz8ISMUFdhtrws2xLNRmYGUTx7/sorWyym/g2CRQ/AGkcGQHOfEZytDQQ/17aOW0oK64+PvVMkoe6EvTqU6+siQ0HdGcICF/tMR4y0W+0C2hpAAlpH4t6yEpP7Fqt3LhKjiwomCg7g8gur9TApAVV6rgpdLP0qkCMktviUMzZeLnRX4gdbQhzFHVPiHaqBwtWyjs4u0vaGs+j/f4TOV4vyMFpGa+15GZV53ez+5VoUJFq3ydJEWvMpIA6lcI27n62eRIgQPNKx6iULn8N0vt60SsbaNFGtx4U107wMMlGric9EXre0bf3gm3EvLOF5oHOOI3/UorzPlDtfhmDR9/T92l8aY4m3aIMWYFbLQuvS/6eCugANFSOQEqctICYWlhuZGM95O0LvmQhiCnkEQa9MgXlqYheZcuHFlX9unyuR7v3Kk3ddXg5LwdzVYp25FKUcImnQU19uNdk+1mmeVAp6Nw9DQGpkZzOanrCZKDz+TZHrRtZgCXtfb6tAhpbM6GtmDD8kHLCbsidDGsp4tKzq34UxP3KHY/a8043HkNHpL5PrF12clNYKzC0oSP3TPV8NWQEAKTavUaQPXxj/ApsqEHE9KnBt5ssePFjbFSBtgVMbDqokMTi3AReX/+vQLP3GrElf6QDEWo8LCLfN4gN/2lcwYvTfPsr975tkeGMRvOJzKz9zu7xJxSUO8sdmCZK86ZBwOpOftgBjim7XKBxBGJtUYBWOjpPY+VKYbU8KjLPzSftXuze2++h52KpBLF2GGDxT5YZo5KBMlF+R2E=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(136003)(366004)(346002)(396003)(39860400002)(376002)(4326008)(36756003)(5660300002)(4744005)(31686004)(71200400001)(478600001)(122000001)(8936002)(64756008)(38100700002)(316002)(6486002)(66446008)(2906002)(86362001)(76116006)(91956017)(110136005)(53546011)(6506007)(55236004)(66946007)(2616005)(8676002)(54906003)(41300700001)(186003)(26005)(38070700005)(66476007)(66556008)(6512007)(45080400002)(31696002)(82960400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?clNKT01zYk5aNGh1ZFJ3a0o3M3gvRVh6S1V3ZEhUWEhHU3I0Vnp6cjZuYk1P?=
 =?utf-8?B?aEN0YjFGalBtSFFGRUZkM3JkU2FhZlkxT1NkQW1MQitZREI5SXdOUjFnaFVs?=
 =?utf-8?B?cEdNcm9oOUkxSWM1OENNT2RENGlKWU5PZ3NpQXJOVlRwbU1takE1K1hHNllF?=
 =?utf-8?B?NjVpVUo1MzJCc3ErcXNDdlhvMDhOWEpEb1EvckVreU5YNnRFVXJERlpLa3VD?=
 =?utf-8?B?OUxGSk1oZU8yb3B4YjZvRVMyUzI4UWN5MjZvN2NYbVorekUzeWhuUEpFaW1y?=
 =?utf-8?B?Z24ydGgwZDlPTjhOMTY5aTB5cFhMRk5vOGlsQ0Uxelk4QjJLcGFvV0FBbVhQ?=
 =?utf-8?B?NDhaWVMybWx2UWJTd2hockMweWxNRkkyVWdmNlZ5ODBJKy95Q3k4dTR2Y04z?=
 =?utf-8?B?MWFMdEtVdXZoTHRNUG5oWDN3aFJteEdrRHlPL2pBQmFCUmU4NDRBR2tJeTMy?=
 =?utf-8?B?Z2ZjYStqbXFCdUxsS2JYV2J3Z2Q1T2tlbk9qVGQ4eEV1OEhUckV2NkZiY2JT?=
 =?utf-8?B?RWsyS2VYbmdUUDZ0WDFHSjNWRjZuQTVubk9wSi82SzVtM1FMQXRmK1BTRk96?=
 =?utf-8?B?cnEzdFYxTE1EbkVNT2RLUHpINGhKeXl6RUUxNnVWT1dDSEdtOVRpOWFmN2ND?=
 =?utf-8?B?cFRNL01PcVhvbWgrNDVWZ2M5U0dWLytrUmQ2TzlFMUlNUGd4NldJWDdmcmhn?=
 =?utf-8?B?alBXQndEM21FeFc3ODJXSFA2NWw5bDQ0U3dYTkxCam9YYkdDL0JndXA5TmNk?=
 =?utf-8?B?OEVRVnFac09XeHUwMEJHK0R1ekx6MFQ5endwNEU4NXFCdXhGVlRGQXBxNFVX?=
 =?utf-8?B?WnVNR3pPR0FCK0hiZUhXY3I0UWpTanBYaW4xNHkvRzlCUVkyeDNyWlFVNUFR?=
 =?utf-8?B?amsrVitYT3dJM3lKQWM0cVc3TzNsaWo4cjdFU0VOMmxxVTVwTjFwL0c5RHgx?=
 =?utf-8?B?OGRFNStzaDZONll2a0h6SFFqTHJBSUo3clJmWkp3QTR4Q3BLQkdIYlpLcyt1?=
 =?utf-8?B?MFcvcGE0N2l6TVNXbk9uNndNeU5mWXdRdlFUMXBsRUJ4NStxL1F5bmE1WWtn?=
 =?utf-8?B?VWhhRUdmZFkvNjVhYmN5Z043Y21YL3E2M2R2U0tWNU9Fd01yTVVtSks4RVFK?=
 =?utf-8?B?bkp5MjEwZlBpcmZscTFoNWdhaEFNMDRvMitDdURjQy9tZzFoTGI5eUNmblNS?=
 =?utf-8?B?RWhlTkhpbVBNTmJWdWVqd1B4SkROd0pyMXIyRjdWQ1Jjd05QVXFwWGdaK2Zh?=
 =?utf-8?B?WXNneXgrZzVNOHBCQXB1cHZhYnQrVlA5U05iTlhYS2pmQTl0UDZEVmUrUVdU?=
 =?utf-8?B?YnNueEc5eHcraGYyQmo1UXRVNTQzSlFqdXFYdlFBNUljNWVKQ1NNSzA2R1NS?=
 =?utf-8?B?dkhlSW1ySUhVU3ZQMlB1VzBwWUptcGQrT0dXbFR2c0dUam9iekZzUjNQNUF3?=
 =?utf-8?B?VFBIRnh0bmJ2TmxWTWx3ci9EN0RkYis4ZFc1bTRqZm51Vy9oaU9uOWNzdG9x?=
 =?utf-8?B?cjhSSFZpeHgzZys4UXE3UUNpRnVyZXd6VUFYbkIrTHpkelJRYURhekRxKzBW?=
 =?utf-8?B?ZG05SW9wekMweng5ZkY2RWlETnVRc25sRWV5bENzaTAvNkZtOTFxR0srUE1F?=
 =?utf-8?B?d0JnSVhoYTdBREdIMml3eTM5Tm9BbW9kbHBsMnFPRHpMR3JlZEJNcDAzM2pE?=
 =?utf-8?B?ZDlmOVkzQ25zQUlqM3g5YTdCUGF1cmorNnQvVjVJODRSL2lBeHN5U1BBQmRx?=
 =?utf-8?B?aXNGdVlvKzVkUXNuM0RRODZpZ3YxMWYxWXRteXNCeXhCaWRVNnVQNUR6eTJj?=
 =?utf-8?B?WFlHT0dxajNLWU9RRFIybmlvSWMrYVQ5a1hPMEhwakwrdjdBNENVMDljUmhD?=
 =?utf-8?B?V0g2R0Fzb2lLbzNOTzl5MkNEZGtqRUg1b1pxS2lqeThmWFRDVU9SV3pRWkpD?=
 =?utf-8?B?VzJSQlFLQkNvbzFwZzdyeTFyU3ZaMnlZNEcxWUc3RGUvN1NOS0ozRzVFTWZY?=
 =?utf-8?B?SXVaQ2gxdDNSK05WUUVLY2lScnA3NllHUHJ4MllCU25zVUt3OHdxWnlQOTR2?=
 =?utf-8?B?aGdzQi9xc2xVN1lWeW9DejgrY25ZTEFRV2ZBbFIvUXFrL2gxL281VTF6V0dr?=
 =?utf-8?Q?BWzVfhi9TulAZNAV3cnraxJJU?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <408FCE0E942008488DCB1C798AF3801C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 543bd88f-437a-407f-85b2-08da5a222dc5
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2022 22:53:26.9185
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: U1iCl5ImSIwyBAAWuutpZ5+TBg/NRNmqj0TZr85JXtlvv6ghL3N4CTa8TWZH3Vf5emwzdiI0tV5pjhHxiDBiNQm6fqCBa36Ht9ZF9cjjAcg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN4PR03MB6701

On 24/06/2022 18:28, Julien Grall wrote:
> Hi,
>
> (moving the discussion to xen-devel as I think it is more appropriate)
>
> On 24/06/2022 10:53, SK, SivaSangeetha (Siva Sangeetha) wrote:
>> [AMD Official Use Only - General]
>
> Not clear what this means.

It's an Office365 thing automatically inserted post-send. Some people,
depending on organisational policy, cannot opt out of it.

It's irritating, but best just ignored.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 01:07:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 01:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358109.587126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ie9-0003e7-CT; Thu, 30 Jun 2022 01:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358109.587126; Thu, 30 Jun 2022 01:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ie9-0003dy-8S; Thu, 30 Jun 2022 01:07:05 +0000
Received: by outflank-mailman (input) for mailman id 358109;
 Thu, 30 Jun 2022 01:07:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXfy=XF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6ie7-0003d1-Pr
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 01:07:03 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f23e8e8a-f810-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 03:07:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 34F31B827C1;
 Thu, 30 Jun 2022 01:07:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9650AC34114;
 Thu, 30 Jun 2022 01:06:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f23e8e8a-f810-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656551219;
	bh=ADS86hmhn+G273gfWNc4aN140CYL60KTh6QbXWJGJvo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DavTASOoI9td7fwL7mHKPagtz5xyIlnQedTtDOaOvZ6ndzgqlwpx11PlN4c9HIIHS
	 xNIV80kui868Wif0OVHp/10ajvogB/Jdeed2JymhxXSHb7EXGK1kqCLWLt65t6IYHV
	 0XdUWVCaZSaAvoauiiMPJWJUotNp4Ne8Y2m2Mh2upGDQ5qHqcSsgT+jf/ZNyugLeOZ
	 5+ovvrqmfV6JZI1xaHAUOZ4YGo7blBs3TUfNG3hg7yDr1fMiBX7SJFikGkNZpq6Do+
	 c4sOsSMVcL5IZOKt1wYNIrlTA6N50dzEE6yipuBa91sxbm0riZJNH38Dv+J4n9Faxr
	 z3hAb6Xw4oG9g==
Date: Wed, 29 Jun 2022 18:06:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH V1 2/2] xen/grant-table: Use unpopulated contiguous pages
 instead of real RAM ones
In-Reply-To: <1655740136-3974-3-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206291756110.4389@ubuntu-linux-20-04-desktop>
References: <1655740136-3974-1-git-send-email-olekstysh@gmail.com> <1655740136-3974-3-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 20 Jun 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This depends on CONFIG_XEN_UNPOPULATED_ALLOC. If it is enabled,
> unpopulated contiguous pages will be allocated for grant mappings
> instead of ballooning out real RAM pages.
> 
> Also fall back to allocating DMA-able pages (i.e. ballooning out real
> RAM pages) if we fail to allocate unpopulated contiguous pages. Use the
> recently introduced is_xen_unpopulated_page() in gnttab_dma_free_pages()
> to know which API to use for freeing pages.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> Please note, I haven't yet re-checked the use case where xen-swiotlb
> is involved (proposed by Stefano):
> https://lore.kernel.org/xen-devel/alpine.DEB.2.22.394.2206031348230.2783803@ubuntu-linux-20-04-desktop/
> I will re-check that for the next version and add a corresponding
> comment in the code.

Great. The patch looks good so far.


> Changes RFC -> V1:
>    - update commit subject/description
>    - rework to avoid introducing alternative implementation
>      of gnttab_dma_alloc(free)_pages(), use IS_ENABLED()
>    - implement a fallback to real RAM pages if we failed to allocate
>      unpopulated contiguous pages (resolve initial TODO)
>    - update according to the API renaming (s/dma/contiguous)
> ---
>  drivers/xen/grant-table.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 738029d..15e426b 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1047,6 +1047,23 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
>  	size_t size;
>  	int i, ret;
>  
> +	if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC)) {
> +		ret = xen_alloc_unpopulated_contiguous_pages(args->dev, args->nr_pages,
> +				args->pages);
> +		if (ret < 0)
> +			goto fallback;
> +
> +		ret = gnttab_pages_set_private(args->nr_pages, args->pages);
> +		if (ret < 0)
> +			goto fail;
> +
> +		args->vaddr = page_to_virt(args->pages[0]);
> +		args->dev_bus_addr = page_to_phys(args->pages[0]);
> +
> +		return ret;
> +	}
> +
> +fallback:
>  	size = args->nr_pages << PAGE_SHIFT;
>  	if (args->coherent)
>  		args->vaddr = dma_alloc_coherent(args->dev, size,
> @@ -1103,6 +1120,13 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
>  
>  	gnttab_pages_clear_private(args->nr_pages, args->pages);
>  
> +	if (IS_ENABLED(CONFIG_XEN_UNPOPULATED_ALLOC) &&
> +			is_xen_unpopulated_page(args->pages[0])) {
> +		xen_free_unpopulated_contiguous_pages(args->dev, args->nr_pages,
> +				args->pages);
> +		return 0;
> +	}
> +
>  	for (i = 0; i < args->nr_pages; i++)
>  		args->frames[i] = page_to_xen_pfn(args->pages[i]);
>  
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 01:07:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 01:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358108.587115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ie2-0003Nf-3H; Thu, 30 Jun 2022 01:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358108.587115; Thu, 30 Jun 2022 01:06:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ie1-0003NY-Vv; Thu, 30 Jun 2022 01:06:57 +0000
Received: by outflank-mailman (input) for mailman id 358108;
 Thu, 30 Jun 2022 01:06:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXfy=XF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o6ie1-0003NQ-2X
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 01:06:57 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id edf4832d-f810-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 03:06:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 597AF61F29;
 Thu, 30 Jun 2022 01:06:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6359DC34114;
 Thu, 30 Jun 2022 01:06:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: edf4832d-f810-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656551212;
	bh=rf8ydyMF9jB1/wdlniuphKYaGwrKeatmW07T0hp5ssY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Jwu502o8WWiI46qeXhszzTv9Irueag1IhJCd7UH2r5tgWCxBbtreJrGSBaDO1GbPJ
	 5+2TPNq6teIjJyRPYx7VknbWw1hIFe7IhKNUOzh9S84bmgIHXSeeoaMf3JfY7Q9/jn
	 VuWxUbcomUea8k+NbJYxH2tBIRrYxwXf/JnicRNHzaEP2uz4nXrEAR5N4o5BXalamZ
	 K1/m/RLqkkcLNioqoYaAAe2DcOn2w+nkzL27ir9b4TV9kF8b6ElbTdDj1opCKyH7pA
	 9SFXdPn8gsO2sHuLbybM45lrx02S18Nt+LZVQ/L7xUBNe96+qssFmF5fNkiVeDa2qc
	 duCw//hfcU9UQ==
Date: Wed, 29 Jun 2022 18:06:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [PATCH V1 1/2] xen/unpopulated-alloc: Introduce helpers for
 contiguous allocations
In-Reply-To: <1655740136-3974-2-git-send-email-olekstysh@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2206291733500.4389@ubuntu-linux-20-04-desktop>
References: <1655740136-3974-1-git-send-email-olekstysh@gmail.com> <1655740136-3974-2-git-send-email-olekstysh@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 20 Jun 2022, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Add the ability to allocate unpopulated contiguous pages suitable for
> grant mapping into. This is going to be used by userspace PV backends
> for grant mappings in the gnttab code (see gnttab_dma_alloc_pages()).
> 
> This patch also changes the allocation mechanism for unpopulated pages.
> Instead of using page_list and page->zone_device_data to manage the
> pages hot-plugged in fill_pool() (formerly fill_list()), reuse the
> genpool subsystem to do the job for us.
> 
> Please note that even for non-contiguous allocations we always try to
> allocate a single contiguous chunk in alloc_unpopulated_pages()
> instead of allocating memory page-by-page. Although this leads to less
> efficient resource utilization, it is faster. Taking into account that
> on both x86 and Arm the unpopulated memory resource is arbitrarily
> large (it is not backed by real memory), this is not going to be a
> problem.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> I am still thinking about how we can optimize free_unpopulated_pages()
> to avoid freeing memory page-by-page for non-contiguous allocations:
> 1. We could update users to allocate/free contiguous pages even when
>    contiguity is not strictly required. But besides the need to alter
>    a few places, this also requires having a valid struct device
>    pointer in hand (maybe instead of passing *dev we could pass
>    max_addr? With that we could drop DMA_BIT_MASK).
> 2. Almost all users of unpopulated pages (except gnttab_page_cache_shrink()
>    in grant-table.c) retain the initially allocated pages[i] array, so
>    it is passed to free_unpopulated_pages() unmodified since being
>    allocated.
>    We could update free_unpopulated_pages() to always try to free memory
>    as a single chunk (after first making sure the chunk is in the pool
>    using gen_pool_has_addr()) and update gnttab_page_cache_shrink() to
>    not pass a pages[i] array with mixed pages in it when dealing with
>    unpopulated pages. This doesn't require altering other places.
> 
> Any thoughts?
 
I think it would be better to change the callers, because that would be
an obvious, explicit change, compared to trying to be smarter in
free_unpopulated_pages().


> Changes RFC -> V1:
>    - update commit subject/description
>    - rework to avoid code duplication (resolve initial TODO)
>    - rename API according to new naming scheme (s/dma/contiguous),
>      also rename some local stuff
>    - drop the page_list & friends entirely and use unpopulated_pool for all
>      (contiguous and non-contiguous) allocations
>    - fix build on x86 by inclusion of <linux/dma-mapping.h>
>    - introduce is_xen_unpopulated_page()
>    - share the implementation for xen_alloc_unpopulated_contiguous_pages()
>      and xen_alloc_unpopulated_pages()
> ---
>  drivers/xen/unpopulated-alloc.c | 188 +++++++++++++++++++++++++++++-----------
>  include/xen/xen.h               |  20 +++++
>  2 files changed, 158 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
> index a39f2d3..3988480d 100644
> --- a/drivers/xen/unpopulated-alloc.c
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -1,5 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
> +#include <linux/dma-mapping.h>
>  #include <linux/errno.h>
> +#include <linux/genalloc.h>
>  #include <linux/gfp.h>
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
> @@ -12,9 +14,8 @@
>  #include <xen/page.h>
>  #include <xen/xen.h>
>  
> -static DEFINE_MUTEX(list_lock);
> -static struct page *page_list;
> -static unsigned int list_count;
> +static DEFINE_MUTEX(pool_lock);
> +static struct gen_pool *unpopulated_pool;
>  
>  static struct resource *target_resource;
>  
> @@ -31,12 +32,12 @@ int __weak __init arch_xen_unpopulated_init(struct resource **res)
>  	return 0;
>  }
>  
> -static int fill_list(unsigned int nr_pages)
> +static int fill_pool(unsigned int nr_pages)
>  {
>  	struct dev_pagemap *pgmap;
>  	struct resource *res, *tmp_res = NULL;
>  	void *vaddr;
> -	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> +	unsigned int alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>  	struct range mhp_range;
>  	int ret;
>  
> @@ -106,6 +107,7 @@ static int fill_list(unsigned int nr_pages)
>           * conflict with any devices.
>           */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +		unsigned int i;
>  		xen_pfn_t pfn = PFN_DOWN(res->start);
>  
>  		for (i = 0; i < alloc_pages; i++) {
> @@ -125,16 +127,17 @@ static int fill_list(unsigned int nr_pages)
>  		goto err_memremap;
>  	}
>  
> -	for (i = 0; i < alloc_pages; i++) {
> -		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> -
> -		pg->zone_device_data = page_list;
> -		page_list = pg;
> -		list_count++;
> +	ret = gen_pool_add_virt(unpopulated_pool, (unsigned long)vaddr, res->start,
> +			alloc_pages * PAGE_SIZE, NUMA_NO_NODE);
> +	if (ret) {
> +		pr_err("Cannot add memory range to the unpopulated pool\n");
> +		goto err_pool;
>  	}
>  
>  	return 0;
>  
> +err_pool:
> +	memunmap_pages(pgmap);
>  err_memremap:
>  	kfree(pgmap);
>  err_pgmap:
> @@ -149,51 +152,49 @@ static int fill_list(unsigned int nr_pages)
>  	return ret;
>  }
>  
> -/**
> - * xen_alloc_unpopulated_pages - alloc unpopulated pages
> - * @nr_pages: Number of pages
> - * @pages: pages returned
> - * @return 0 on success, error otherwise
> - */
> -int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +static int alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages,
> +		bool contiguous)
>  {
>  	unsigned int i;
>  	int ret = 0;
> +	void *vaddr;
> +	bool filled = false;
>  
>  	/*
>  	 * Fallback to default behavior if we do not have any suitable resource
>  	 * to allocate required region from and as the result we won't be able to
>  	 * construct pages.
>  	 */
> -	if (!target_resource)
> +	if (!target_resource) {
> +		if (contiguous && nr_pages > 1)
> +			return -ENODEV;

unpopulated_init() fails and returns an error if !target_resource. Does
the latest Linux continue booting if unpopulated_init(), an
early_initcall, fails?

If not, then there is no point in this check, because we could never
reach it.


>  		return xen_alloc_ballooned_pages(nr_pages, pages);
> +	}
> +
> +	mutex_lock(&pool_lock);
>  
> -	mutex_lock(&list_lock);
> -	if (list_count < nr_pages) {
> -		ret = fill_list(nr_pages - list_count);
> +	while (!(vaddr = (void *)gen_pool_alloc(unpopulated_pool,
> +			nr_pages * PAGE_SIZE))) {
> +		if (filled)
> +			ret = -ENOMEM;
> +		else {
> +			ret = fill_pool(nr_pages);
> +			filled = true;
> +		}
>  		if (ret)
>  			goto out;
>  	}
>  
>  	for (i = 0; i < nr_pages; i++) {
> -		struct page *pg = page_list;
> -
> -		BUG_ON(!pg);
> -		page_list = pg->zone_device_data;
> -		list_count--;
> -		pages[i] = pg;
> +		pages[i] = virt_to_page(vaddr + PAGE_SIZE * i);
>  
>  #ifdef CONFIG_XEN_HAVE_PVMMU
>  		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
> +			ret = xen_alloc_p2m_entry(page_to_pfn(pages[i]));
>  			if (ret < 0) {
> -				unsigned int j;
> -
> -				for (j = 0; j <= i; j++) {
> -					pages[j]->zone_device_data = page_list;
> -					page_list = pages[j];
> -					list_count++;
> -				}
> +				gen_pool_free(unpopulated_pool, (unsigned long)vaddr,
> +						nr_pages * PAGE_SIZE);
>  				goto out;
>  			}
>  		}
> @@ -201,9 +202,68 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>  	}
>  
>  out:
> -	mutex_unlock(&list_lock);
> +	mutex_unlock(&pool_lock);
>  	return ret;
>  }
> +
> +static bool in_unpopulated_pool(unsigned int nr_pages, struct page *page)
> +{
> +	if (!target_resource)
> +		return false;
> +
> +	return gen_pool_has_addr(unpopulated_pool,
> +			(unsigned long)page_to_virt(page), nr_pages * PAGE_SIZE);
> +}
> +
> +static void free_unpopulated_pages(unsigned int nr_pages, struct page **pages,
> +		bool contiguous)
> +{
> +	if (!target_resource) {
> +		if (contiguous && nr_pages > 1)
> +			return;
> +
> +		xen_free_ballooned_pages(nr_pages, pages);
> +		return;
> +	}
> +
> +	mutex_lock(&pool_lock);
> +
> +	/* XXX Do we need to check the range (gen_pool_has_addr)? */
> +	if (contiguous)
> +		gen_pool_free(unpopulated_pool, (unsigned long)page_to_virt(pages[0]),
> +				nr_pages * PAGE_SIZE);
> +	else {
> +		unsigned int i;
> +
> +		for (i = 0; i < nr_pages; i++)
> +			gen_pool_free(unpopulated_pool,
> +					(unsigned long)page_to_virt(pages[i]), PAGE_SIZE);
> +	}
> +
> +	mutex_unlock(&pool_lock);
> +}
> +
> +/**
> + * is_xen_unpopulated_page - check whether page is unpopulated
> + * @page: page to be checked
> + * @return true if page is unpopulated, false otherwise
> + */
> +bool is_xen_unpopulated_page(struct page *page)
> +{
> +	return in_unpopulated_pool(1, page);
> +}
> +EXPORT_SYMBOL(is_xen_unpopulated_page);
> +
> +/**
> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +	return alloc_unpopulated_pages(nr_pages, pages, false);
> +}
>  EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>  
>  /**
> @@ -213,22 +273,40 @@ EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>   */
>  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>  {
> -	unsigned int i;
> +	free_unpopulated_pages(nr_pages, pages, false);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_pages);
>  
> -	if (!target_resource) {
> -		xen_free_ballooned_pages(nr_pages, pages);
> -		return;
> -	}
> +/**
> + * xen_alloc_unpopulated_contiguous_pages - alloc unpopulated contiguous pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +	/* XXX Handle devices which support 64-bit DMA address only for now */
> +	if (dma_get_mask(dev) != DMA_BIT_MASK(64))
> +		return -EINVAL;
>  
> -	mutex_lock(&list_lock);
> -	for (i = 0; i < nr_pages; i++) {
> -		pages[i]->zone_device_data = page_list;
> -		page_list = pages[i];
> -		list_count++;
> -	}
> -	mutex_unlock(&list_lock);
> +	return alloc_unpopulated_pages(nr_pages, pages, true);
>  }
> -EXPORT_SYMBOL(xen_free_unpopulated_pages);
> +EXPORT_SYMBOL(xen_alloc_unpopulated_contiguous_pages);
> +
> +/**
> + * xen_free_unpopulated_contiguous_pages - return unpopulated contiguous pages
> + * @dev: valid struct device pointer
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +	free_unpopulated_pages(nr_pages, pages, true);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_contiguous_pages);
>  
>  static int __init unpopulated_init(void)
>  {
> @@ -237,9 +315,19 @@ static int __init unpopulated_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
>  
> +	unpopulated_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> +	if (!unpopulated_pool) {
> +		pr_err("xen:unpopulated: Cannot create unpopulated pool\n");
> +		return -ENOMEM;
> +	}
> +
> +	gen_pool_set_algo(unpopulated_pool, gen_pool_best_fit, NULL);
> +
>  	ret = arch_xen_unpopulated_init(&target_resource);
>  	if (ret) {
>  		pr_err("xen:unpopulated: Cannot initialize target resource\n");
> +		gen_pool_destroy(unpopulated_pool);
> +		unpopulated_pool = NULL;
>  		target_resource = NULL;
>  	}
>  
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 0780a81..7d396cc 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -60,9 +60,16 @@ static inline void xen_set_restricted_virtio_memory_access(void)
>  		platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS);
>  }
>  
> +struct device;
> +
>  #ifdef CONFIG_XEN_UNPOPULATED_ALLOC
>  int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
>  void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages);
> +void xen_free_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages);
> +bool is_xen_unpopulated_page(struct page *page);
>  #include <linux/ioport.h>
>  int arch_xen_unpopulated_init(struct resource **res);
>  #else
> @@ -77,6 +84,19 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
>  {
>  	xen_free_ballooned_pages(nr_pages, pages);
>  }
> +static inline int xen_alloc_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +	return -1;
> +}
> +static inline void xen_free_unpopulated_contiguous_pages(struct device *dev,
> +		unsigned int nr_pages, struct page **pages)
> +{
> +}
> +static inline bool is_xen_unpopulated_page(struct page *page)
> +{
> +	return false;
> +}
>  #endif
>  
>  #endif	/* _XEN_XEN_H */
> -- 
> 2.7.4
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 01:55:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 01:55:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358121.587137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jP1-0001Jv-7D; Thu, 30 Jun 2022 01:55:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358121.587137; Thu, 30 Jun 2022 01:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jP1-0001Jn-28; Thu, 30 Jun 2022 01:55:31 +0000
Received: by outflank-mailman (input) for mailman id 358121;
 Thu, 30 Jun 2022 01:55:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IfmQ=XF=arm.com=jiamei.xie@srs-se1.protection.inumbo.net>)
 id 1o6jOz-0001Jh-ME
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 01:55:29 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id b67370fe-f817-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 03:55:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D89261480;
 Wed, 29 Jun 2022 18:55:26 -0700 (PDT)
Received: from a015971.shanghai.arm.com (unknown [10.169.188.104])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3DCE73F5A1;
 Wed, 29 Jun 2022 18:55:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b67370fe-f817-11ec-bdce-3d151da133c5
From: Jiamei Xie <jiamei.xie@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Jiamei Xie <jiamei.xie@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <wei.chen@arm.com>
Subject: [PATCH v3] xen/arm: avoid overflow when setting vtimer in context switch
Date: Thu, 30 Jun 2022 09:53:37 +0800
Message-Id: <20220630015336.3040355-1-jiamei.xie@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

virt_timer_save computes the new expiry time for the vtimer as:
"v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
- boot_count".
In this formula, "cval + offset" might cause a uint64_t overflow.
Changing it to "ticks_to_ns(v->domain->arch.virt_timer_base.offset -
boot_count) + ticks_to_ns(v->arch.virt_timer.cval)" avoids the
overflow, and "ticks_to_ns(virt_timer_base.offset - boot_count)" is
always the same value, which has already been calculated in
domain_vtimer_init. Introduce a new field, virt_timer_base.nanoseconds,
in Arm's struct arch_domain to store this value, so we can use it
directly.

Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
Change-Id: Ib80cee51eaf844661e6f92154a0339ad2a652f9b
---
was "xen/arm: avoid vtimer flip-flop transition in context switch".
v3 changes:
-re-write commit message
-store nanoseconds in virt_timer_base instead of adding a new structure
-assign to nanoseconds first, then seconds
---
 xen/arch/arm/include/asm/domain.h | 1 +
 xen/arch/arm/vtimer.c             | 9 ++++++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index ed63c2b6f9..cd9ce19b4b 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -71,6 +71,7 @@ struct arch_domain
 
     struct {
         uint64_t offset;
+        s_time_t nanoseconds;
     } virt_timer_base;
 
     struct vgic_dist vgic;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 6b78fea77d..aeaea78e4c 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -63,7 +63,9 @@ static void virt_timer_expired(void *data)
 int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
 {
     d->arch.virt_timer_base.offset = get_cycles();
-    d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
+    d->arch.virt_timer_base.nanoseconds =
+        ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
+    d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds;
     do_div(d->time_offset.seconds, 1000000000);
 
     config->clock_frequency = timer_dt_clock_frequency;
@@ -144,8 +146,9 @@ void virt_timer_save(struct vcpu *v)
     if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
          !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
     {
-        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
-                  v->domain->arch.virt_timer_base.offset - boot_count));
+        set_timer(&v->arch.virt_timer.timer,
+                  v->domain->arch.virt_timer_base.nanoseconds +
+                  ticks_to_ns(v->arch.virt_timer.cval));
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:00:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358128.587148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jTb-0003FV-OQ; Thu, 30 Jun 2022 02:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358128.587148; Thu, 30 Jun 2022 02:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jTb-0003FO-L1; Thu, 30 Jun 2022 02:00:15 +0000
Received: by outflank-mailman (input) for mailman id 358128;
 Thu, 30 Jun 2022 02:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6jTb-0003FE-5m; Thu, 30 Jun 2022 02:00:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6jTb-00085n-1Z; Thu, 30 Jun 2022 02:00:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6jTa-0006P4-Fb; Thu, 30 Jun 2022 02:00:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6jTa-0007KG-F9; Thu, 30 Jun 2022 02:00:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0vGBhu42pXvJqWR+hVkNw3UGb85VThLJiLxJfIn1uQs=; b=hZdFo+ffcLjAbJ0WVMYH0XeRIb
	6bmTHEz/3BcY+OeHe38nXNEPkPKr6e2BpnQSAszoBoWSuJEbqHs97+6MVWYC24fjFBJu9ZbOcoYPL
	9GmvlwY81rj2naGotkcCcucdYrSaN/qhmNRTstLn6ikOKXAb2ik5tydwM80g7OU0SaC8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171408-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171408: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9ef3ad40a81ff6b8b65ed870588b230f38812f2a
X-Osstest-Versions-That:
    linux=23db944f754e99abf814a79a2273b0191d35e4ff
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 02:00:14 +0000

flight 171408 linux-5.4 real [real]
flight 171414 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171408/
http://logs.test-lab.xenproject.org/osstest/logs/171414/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail REGR. vs. 171352

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-thunderx  8 xen-boot         fail in 171400 pass in 171408
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 171400 pass in 171408
 test-armhf-armhf-xl-credit1  14 guest-start                fail pass in 171400
 test-armhf-armhf-xl-vhd      13 guest-start                fail pass in 171400
 test-armhf-armhf-libvirt-raw 17 guest-start/debian.repeat  fail pass in 171400

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 171352
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 171400 like 171352
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 171400 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 171400 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 171400 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 171400 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171352
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171352
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171352
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171352
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171352
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171352
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171352
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9ef3ad40a81ff6b8b65ed870588b230f38812f2a
baseline version:
 linux                23db944f754e99abf814a79a2273b0191d35e4ff

Last test of basis   171352  2022-06-25 11:13:17 Z    4 days
Testing same since   171400  2022-06-29 07:11:34 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrew Donnellan <ajd@linux.ibm.com>
  Antoine Tenart <atenart@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Ballon Shi <ballon.shi@quectel.com>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Carlo Lobrano <c.lobrano@gmail.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Helge Deller <deller@gmx.de>
  huhai <huhai@kylinos.cn>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Halasa <khalasa@piap.pl>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lucas Stach <l.stach@pengutronix.de>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marc Dionne <marc.dionne@auristor.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nikos Tsironis <ntsironis@arrikto.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Rob Clark <robdclark@chromium.org>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Sumit Dubey2 <Sumit.Dubey2@ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1766 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:04:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:04:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358137.587158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jXk-0003wW-F5; Thu, 30 Jun 2022 02:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358137.587158; Thu, 30 Jun 2022 02:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jXk-0003wP-C7; Thu, 30 Jun 2022 02:04:32 +0000
Received: by outflank-mailman (input) for mailman id 358137;
 Thu, 30 Jun 2022 02:04:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M0xB=XF=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1o6jXj-0003wJ-3T
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:04:31 +0000
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7912a7d-f818-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 04:04:28 +0200 (CEST)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Jun 2022 19:04:25 -0700
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by orsmga002.jf.intel.com with ESMTP; 29 Jun 2022 19:04:24 -0700
Received: from fmsmsx612.amr.corp.intel.com (10.18.126.92) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Wed, 29 Jun 2022 19:04:23 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx612.amr.corp.intel.com (10.18.126.92) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Wed, 29 Jun 2022 19:04:23 -0700
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (104.47.57.176)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Wed, 29 Jun 2022 19:04:23 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by MN2PR11MB4711.namprd11.prod.outlook.com (2603:10b6:208:24e::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 02:04:21 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c%2]) with mapi id 15.20.5373.018; Thu, 30 Jun 2022
 02:04:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7912a7d-f818-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1656554668; x=1688090668;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=M5U6Uo0IVW3vHmxN8E2RMKF6mrAdlDY0/XN+ZTiqiTw=;
  b=IMz6iaEvivGiq4DZ9fPZZZai7g3X8OqIi9n6wBACcWk+u4Nr7qSemZS/
   X+eUyosi6691czID7Lzlwrrnfz0pECLmJkV9akasKJKkAkD2RpdCyabRX
   yM+yASxq8A/OAJwKphHe4rsjEcy4XfZT82O8OUBG/g0jnrtxoB19resIx
   gZDEp5YVoNuc7V3lTdC8n6bQ9KAJWgMhgSllWX31HdvzACiJuwtR49C1Q
   n4r3Ic+AHXntAiT+wTQXezNKZnASSV6V6WX4NLvNV039riEnpXoGSwsf2
   6waboJSOdFCKN+AwoNeM7RFXvzIhlOUckC0jeS9xXpQ0o91jVZOrF+JBQ
   w==;
X-IronPort-AV: E=McAfee;i="6400,9594,10393"; a="346207514"
X-IronPort-AV: E=Sophos;i="5.92,232,1650956400"; 
   d="scan'208";a="346207514"
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.92,232,1650956400"; 
   d="scan'208";a="590974461"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DwPRf4OF9OuP5uxHU+KGP7cPycOdkt2kp2x6H/AS2NUhjGq4aG2h1QZ5cBSRvHDNoUE7eJ7YJB8xcJfA7xO7gPimUsQ6kE8k4E+OYW1Go7PfLw866hAjMz95MnVAcJUAwjF4DOmMJJ3uTM6vItsHf0GuohX1hPvNCpES5lWTyVvNimbwQbTLC7dw+oSnEKnf6wzM2boH+rV2DgiBsSVDc+Oa4q+Ilh9huQfjyTWj3HYjM8asQnx7OBusmCoMAP8nDo7wWRw/ypqmabMdvv6JKvcYkIHMgTeLaLHdRCBnHSdV3CyFBg3IHEgQw4+abs2GK/j66LYPwJcVyjGr8QyiMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M5U6Uo0IVW3vHmxN8E2RMKF6mrAdlDY0/XN+ZTiqiTw=;
 b=RxI6ZKYsJNEgDcZtCeKcxq2qg/aQibH1NiOaaSJuRjYxNGR2262nsQmnWlJQYTuZAQep5ZrdoSXJUzahvPtWDPj2+9Mj44/F2oBb9i1SacPjua07F+ZSeAdxiHD1lbis82g+ia2ysjT9Xl1ZRAM0ub/Ie+kBC0ecWV9MfO+NbhMrhWury92KhwjcK3pl08JXBocHgZjP17gTLpNH6z4JA4F0PjwaMYXJ6Vzx6q8nuvFzngLj6L0mA3nGgZFvqoZSkqSWDVTarm2S0fxpQJ9aXUNNH7gzddxqsRpd3e0h7lfM8UZKFzFlK2tRMyduVxBil4a/7R79nm8xQFKwiow7Ow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Pau Monné, Roger" <roger.pau@citrix.com>, "Beulich,
 Jan" <JBeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>, "Cooper, Andrew"
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Wei
 Liu" <wl@xen.org>, Paul Durrant <paul@xen.org>
Subject: RE: [PATCH] x86/ept: fix shattering of special pages
Thread-Topic: [PATCH] x86/ept: fix shattering of special pages
Thread-Index: AQHYig0A8+iuyC//Bk2PKQQEd7+3W61mDnbAgAANmACAARrLMA==
Date: Thu, 30 Jun 2022 02:04:21 +0000
Message-ID: <BN9PR11MB5276E5D3B9EBC700729E7F808CBA9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220627100119.55363-1-roger.pau@citrix.com>
 <BN9PR11MB527685F117AAA3C1716A41EF8CBB9@BN9PR11MB5276.namprd11.prod.outlook.com>
 <YrwXBtvpIl81GhQ7@Air-de-Roger>
In-Reply-To: <YrwXBtvpIl81GhQ7@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6dd24497-22e9-401e-fcbc-08da5a3cd929
x-ms-traffictypediagnostic: MN2PR11MB4711:EE_
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dd24497-22e9-401e-fcbc-08da5a3cd929
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2022 02:04:21.4135
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: M5EZB8ArfR5TVTp5szxnaga0W4f7ofmX/S50tN/i6p4pWYh3vS1l/Nw/+PrnP10Iq96SEa/Jj+icCRspLp0pmg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR11MB4711
X-OriginatorOrg: intel.com

> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: Wednesday, June 29, 2022 5:11 PM
> 
> On Wed, Jun 29, 2022 at 08:41:43AM +0000, Tian, Kevin wrote:
> > > From: Roger Pau Monne <roger.pau@citrix.com>
> > > Sent: Monday, June 27, 2022 6:01 PM
> > >
> > > The current logic in epte_get_entry_emt() will split any page marked
> > > as special with order greater than zero, without checking whether the
> > > super page is all special.
> > >
> > > Fix this by only splitting the page only if it's not all marked as
> > > special, in order to prevent unneeded super page shuttering.
> > >
> > > Fixes: ca24b2ffdb ('x86/hvm: set 'ipat' in EPT for special pages')
> > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > ---
> > > Cc: Paul Durrant <paul@xen.org>
> > > ---
> > > It would seem weird to me to have a super page entry in EPT with
> > > ranges marked as special and not the full page.  I guess it's better
> > > to be safe, but I don't see an scenario where we could end up in that
> > > situation.
> > >
> > > I've been trying to find a clarification in the original patch
> > > submission about how it's possible to have such super page EPT entry,
> > > but haven't been able to find any justification.
> > >
> > > Might be nice to expand the commit message as to why it's possible to
> > > have such mixed super page entries that would need splitting.
> >
> > Here is what I dig out.
> >
> > When reviewing v1 of adding special page check, Jan suggested
> > that APIC access page was also covered hence the old logic for APIC
> > access page can be removed. [1]
> 
> But the APIC access page is always added using set_mmio_p2m_entry(),
> which will unconditionally create an entry for it in the EPT, hence
> there's no explicit need to check for it's presence inside of higher
> order pages.
> 
> > Then when reviewing v2 he found that the order check in removed
> > logic was not carried to the new check on special page. [2]
> >
> > The original order check in old APIC access logic came from:
> >
> > commit 126018f2acd5416434747423e61a4690108b9dc9
> > Author: Jan Beulich <jbeulich@suse.com>
> > Date:   Fri May 2 10:48:48 2014 +0200
> >
> >     x86/EPT: consider page order when checking for APIC MFN
> >
> >     This was overlooked in 3d90d6e6 ("x86/EPT: split super pages upon
> >     mismatching memory types").
> >
> >     Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >     Acked-by: Kevin Tian <kevin.tian@intel.com>
> >     Reviewed-by: Tim Deegan <tim@xen.org>
> >
> > I suppose Jan may actually find such mixed super page entry case
> > to bring this fix in.
> 
> Hm, I guess theoretically it could be possible for contiguous entries
> to be coalesced (if we ever implement that for p2m page tables), and
> hence result in super pages being created from smaller entries.
> 
> It that case it would be less clear to assert that special pages
> cannot be coalesced with other contiguous entries.

With Jan's confirmation I'm fine with this change too. Just below...

> >
> > Did you actually observe an impact w/o this fix?
> 
> Yes, the original change has caused a performance regression in some
> GPU pass through workloads for XenServer.  I think it's reasonable to
> avoid super page shattering if the resulting pages would all end up
> using ipat and WRBACK cache attribute, as there's no reason for the
> split in the first case.
> 

... I'd appreciate mentioning the regression case in the commit msg.

With that,

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:15:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:15:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358143.587170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jiO-0005Yz-Dh; Thu, 30 Jun 2022 02:15:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358143.587170; Thu, 30 Jun 2022 02:15:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jiO-0005Ys-Ao; Thu, 30 Jun 2022 02:15:32 +0000
Received: by outflank-mailman (input) for mailman id 358143;
 Thu, 30 Jun 2022 02:15:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M0xB=XF=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1o6jiM-0005Ym-Kc
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:15:30 +0000
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8171f9a7-f81a-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 04:15:28 +0200 (CEST)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Jun 2022 19:15:26 -0700
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by FMSMGA003.fm.intel.com with ESMTP; 29 Jun 2022 19:15:26 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Wed, 29 Jun 2022 19:15:25 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Wed, 29 Jun 2022 19:15:25 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Wed, 29 Jun 2022 19:15:25 -0700
Received: from NAM02-DM3-obe.outbound.protection.outlook.com (104.47.56.44) by
 edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Wed, 29 Jun 2022 19:15:25 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by SN6PR11MB2941.namprd11.prod.outlook.com (2603:10b6:805:dc::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 02:15:22 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c%2]) with mapi id 15.20.5373.018; Thu, 30 Jun 2022
 02:15:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8171f9a7-f81a-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1656555328; x=1688091328;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Q67O8NPS7kbCJUjPjQIzJxZ73OAVeYTuFXN0uy/HgJM=;
  b=PKbHR1XB6o6HEk8PqW/chwO39GlVtl4UhBJ/YuvKoo/u6McFYe1v2n0A
   foKqMKY31FXRxniugOQnauEHoZDhwQU5WsB5P47K3aNGZ97sTHuW8gFf4
   XPI2LLju0BsCbviAElV4YRdIhdqJppWEMfNFFOAh/r6dARzDkdnjJrhiG
   M7+D8iQl8BSteSdUm/p/4eMJ7hzddEWs98fdKEK4uCDo9Sgz1+9ITcqWy
   G0f6mjlOM31XQataxBQzMIm13CYH1ZSmPeDZAILp73+pljjEMHm803T8D
   ejo9jqOYKU3dErt0cFcLiHjwKHFsCSv5Po1isqtDaM8eOELjl3vKY5QDk
   A==;
X-IronPort-AV: E=McAfee;i="6400,9594,10393"; a="368532552"
X-IronPort-AV: E=Sophos;i="5.92,232,1650956400"; 
   d="scan'208";a="368532552"
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.92,232,1650956400"; 
   d="scan'208";a="680774086"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EFmhK/zJHz4z592XKIqSIFoFNHi/KJYRziI6r8wuHgcHiRHcre3zsbpvPS5dI4WdQ41E40i4DC9Xa4cKMSbVHIl1IdJM3mzOow1d5ptaKvsgzBAaRLK7fxZJS03bowENR9YJzux3P5rTlHY9+qQzVUPXD6WPWtjBH3v76uz+yU4TTjaKH2YsKDDgt1JNipDw3UgpTO3Z0oeFL8h7Sxm5/Rqkf76aa6FDhYJSjm//iv64oSlpGygk5GVONpe5X5HSN4g/j86LraoH2lKH9AL1OA/bN9otgzSzW/WeT5fYqYQwcFbbxHWHxiq6uju0gL1FsaBGm6zj7l7KVVuQNZln0g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q67O8NPS7kbCJUjPjQIzJxZ73OAVeYTuFXN0uy/HgJM=;
 b=E3LWBC48pWs15gXTDoYOg2H7UJVKrlJCUPLNi4yKD+9200v1P6mCQyZA3D0JpF1+GeIvdmJQ3EJaThUwuGfzaK1l1hAjAGWhr/SovCQifzxMRJetXKCmE0ckQakgRsoa99THB79DurFw4DyksNz6yNMcJ+YZtm5Nrra9ox36fBs/auu7CLFSs2nORDXl+GbBUNkilVKrjb7SvicTVotJH8O2HKB+RqCgofXxobxVHQf1Q/bYqtoC8X3MNeK4C+pqq/6x0EmGzLdVFOVsqXV/7p1nV+vsR52YsbN1feOwf/FAskmUoVArq/0qyEtIiFHFThaLYeUhYqZW1dXEoKre1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jane Malalane <Jane.Malalane@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Wei Liu <wl@xen.org>, Anthony Perard
	<anthony.perard@citrix.com>, "Cooper, Andrew" <andrew.cooper3@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	"Gross, Jurgen" <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?utf-8?B?UGF1IE1vbm7DqSwgUm9nZXI=?= <roger.pau@citrix.com>, "Nakajima, Jun"
	<jun.nakajima@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Topic: [PATCH RESEND v10 1/2] xen+tools: Report Interrupt Controller
 Virtualization capabilities on x86
Thread-Index: AQHYi8Ai6ihqZv9P00+2HZAdh00G3K1mcQoAgAAN4ACAALfGIA==
Date: Thu, 30 Jun 2022 02:15:22 +0000
Message-ID: <BN9PR11MB527622D32D748EC5668C25DD8CBA9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-2-jane.malalane@citrix.com>
 <efb34738-49a9-fa4f-900a-fd530ff835ce@suse.com>
 <3e4443ec-de14-b02b-99f2-e6b63b06ca41@citrix.com>
In-Reply-To: <3e4443ec-de14-b02b-99f2-e6b63b06ca41@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 376484bb-e541-408a-451b-08da5a3e62ea
x-ms-traffictypediagnostic: SN6PR11MB2941:EE_
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 376484bb-e541-408a-451b-08da5a3e62ea
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2022 02:15:22.0183
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: dEQ7zk5SP9eLQg6U8STb4fZ33TTmUi0rFd2qPIc7bjaqmk3hBs7aObsQ8D7EXKyvoT7mmYhly61/ZlT3Jr7H0A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR11MB2941
X-OriginatorOrg: intel.com

> From: Jane Malalane <Jane.Malalane@citrix.com>
> Sent: Wednesday, June 29, 2022 11:17 PM
> 
> On 29/06/2022 15:26, Jan Beulich wrote:
> > On 29.06.2022 15:55, Jane Malalane wrote:
> >> Add XEN_SYSCTL_PHYSCAP_X86_ASSISTED_XAPIC and
> >> XEN_SYSCTL_PHYSCAP_X86_ASSISTED_X2APIC to report accelerated xAPIC
> and
> >> x2APIC, on x86 hardware. This is so that xAPIC and x2APIC virtualization
> >> can subsequently be enabled on a per-domain basis.
> >> No such features are currently implemented on AMD hardware.
> >>
> >> HW assisted xAPIC virtualization will be reported if HW, at the
> >> minimum, supports virtualize_apic_accesses as this feature alone means
> >> that an access to the APIC page will cause an APIC-access VM exit. An
> >> APIC-access VM exit provides a VMM with information about the access
> >> causing the VM exit, unlike a regular EPT fault, thus simplifying some
> >> internal handling.
> >>
> >> HW assisted x2APIC virtualization will be reported if HW supports
> >> virtualize_x2apic_mode and, at least, either apic_reg_virt or
> >> virtual_intr_delivery. This also means that
> >> sysctl follows the conditionals in vmx_vlapic_msr_changed().
> >>
> >> For that purpose, also add an arch-specific "capabilities" parameter
> >> to struct xen_sysctl_physinfo.
> >>
> >> Note that this interface is intended to be compatible with AMD so that
> >> AVIC support can be introduced in a future patch. Unlike Intel that
> >> has multiple controls for APIC Virtualization, AMD has one global
> >> 'AVIC Enable' control bit, so fine-graining of APIC virtualization
> >> control cannot be done on a common interface.
> >>
> >> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
> >> Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> >
> > Could you please clarify whether you did drop Kevin's R-b (which, a
> > little unhelpfully, he provided in reply to v9 a week after you had
> > posted v10) because of ...
> >
> >> v10:
> >>   * Make assisted_x{2}apic_available conditional upon _vmx_cpu_up()
> >
> > ... this, requiring him to re-offer the tag? Until told otherwise I
> > will assume so.
> 
> It wasn't intentional but yes, that is right. There was a change, albeit
> minor, in vmx from v9 to v10 so I do require Kevin Tian or Jun Nakajima
> to review it.
> 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:21:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358150.587180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jo8-0007Dg-6O; Thu, 30 Jun 2022 02:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358150.587180; Thu, 30 Jun 2022 02:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jo8-0007DZ-3Z; Thu, 30 Jun 2022 02:21:28 +0000
Received: by outflank-mailman (input) for mailman id 358150;
 Thu, 30 Jun 2022 02:21:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nqik=XF=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1o6jo6-0007DT-8p
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:21:26 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 554a0329-f81b-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 04:21:23 +0200 (CEST)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1656555678871523.7411028246486;
 Wed, 29 Jun 2022 19:21:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 554a0329-f81b-11ec-bdce-3d151da133c5
ARC-Seal: i=1; a=rsa-sha256; t=1656555681; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=UNFO2ZOVvqZ0pSRPMmB3nXjctdA+4dbwvaDW3lRhINC3GpGlbVGf+BMIyKtFTV9Ml5QH7qv7VbJKYQN8oPzuZUGfFJ8a8LiMXFZi1QSpiwkpmUDTnrLKoj66arx4dHeaKJS6gvzTSPcBt+JX82G8yM11jpK0UqH1pMFVUQm276I=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1656555681; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=AJRi5oMb1/abWwiTsdt47QqKiOm5/WYP7pHFMGJCSio=; 
	b=jSRIg/WoiBqyl3MylTF9GhLf91LQCP2KiJO34EIam2D/lS2P1ciJMVnxBS7EkcqOgthyCynLB/zI63wETkpMkV3KRQXdtjq+2KH7KPLugAXKAnE3xeS2MUgitcFfYDQkZFgRJUtoNR5P5Rzt8WY7w+yuPuaa0WYWWVeScrdp9FE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1656555681;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:MIME-Version:Content-Type:Content-Transfer-Encoding:Reply-To;
	bh=AJRi5oMb1/abWwiTsdt47QqKiOm5/WYP7pHFMGJCSio=;
	b=HsqqUoe/4KHEm1IhkKExHxQUXFy9KCIhRXf4i9BguShj5YF6qpIWI+XZVBAPdUeM
	DdyjJwwTBumLmMSJn9OE5mhK0pNp26wF1zEQpxnYuTg9k60dNKI+3RQshpuNxHDDQvW
	IUu/rUgNaK6JLhVTTXWJnspb3+WFtD4dtR5svAp8=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	scott.davis@starlab.io,
	jandryuk@gmail.com,
	christopher.clark@starlab.io
Subject: [PATCH v9 0/3] Adds starting the idle domain privileged
Date: Wed, 29 Jun 2022 22:21:07 -0400
Message-Id: <20220630022110.31555-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

This series makes it so that the idle domain is started privileged under the
default policy, which the SILO policy inherits, and under the flask policy. It
then introduces a new one-way XSM hook, xsm_set_system_active(), that is hooked
by an XSM policy to transition the idle domain to its running privilege level.

Patch 3 is an important one, as first it addresses the issue raised under an
RFC late last year by Jason Andryuk regarding the awkward entanglement of
flask_domain_alloc_security() and flask_domain_create(). Second, it helps
articulate why it is that the hypervisor should go through the access control
checks, even when it is doing the action itself. The issue at hand is not that
the hypervisor could be influenced to go around these checks. The issue is
that these checks provide a configurable way to express the execution flow
that the hypervisor should enforce. Specifically, with this change it is now
possible for the owner of a dom0less or hyperlaunch system to express a policy
where the hypervisor will enforce that no dom0 will be constructed, regardless
of what boot construction details were provided to it. Likewise, an owner that
does not want dom0less or hyperlaunch to be used can enforce that the hypervisor
will only construct a dom0 domain. This can all be accomplished without the
need to rebuild the hypervisor with these features enabled or disabled.

Changes in v9:
- added missing R-b/T-b tags to patch 1
- corrected the flask policy macro in patch 2 to allow domain create
- added patch 3 to address allowing the hypervisor to create more than one domain

Changes in v8:
- adjusted panic messages in arm and x86 setup.c to be less than 80cols
- fixed comment line that went over 80col
- added line in patch #1 commit message to clarify the need is for domain
  creation

Changes in v7:
- adjusted error message in default and flask xsm_set_system_active hooks
- merged panic messages in arm and x86 setup.c to a single line

Changes in v6:
- readded the setting of is_privileged in flask_set_system_active()
- clarified comment on is_privileged in flask_set_system_active()
- added ASSERT on is_privileged and self_sid in flask_set_system_active()
- fixed err code returned on Arm for xsm_set_system_active() panic message

Changes in v5:
- dropped setting is_privileged in flask_set_system_active()
- added err code returned by xsm_set_system_active() to panic message

Changes in v4:
- reworded patch 1 commit messaged
- fixed whitespace to coding style
- fixed comment to coding style

Changes in v3:
- renamed *_transition_running() to *_set_system_active()
- changed the XSM hook set_system_active() from void to int return
- added ASSERT check for the expected privilege level each XSM policy expected
- replaced a check against is_privileged in each arch with checking the return
  value from the call to xsm_set_system_active()

Changes in v2:
- renamed flask_domain_runtime_security() to flask_transition_running()
- added the missed assignment of self_sid

Daniel P. Smith (3):
  xsm: create idle domain privileged and demote after setup
  flask: implement xsm_set_system_active
  xsm: refactor flask sid alloc and domain check

 tools/flask/policy/modules/dom0.te     |  3 ++
 tools/flask/policy/modules/domU.te     |  3 ++
 tools/flask/policy/modules/xen.if      |  7 +++
 tools/flask/policy/modules/xen.te      |  1 +
 tools/flask/policy/policy/initial_sids |  1 +
 xen/arch/arm/setup.c                   |  3 ++
 xen/arch/x86/setup.c                   |  4 ++
 xen/common/sched/core.c                |  7 ++-
 xen/include/xsm/dummy.h                | 17 +++++++
 xen/include/xsm/xsm.h                  |  6 +++
 xen/xsm/dummy.c                        |  1 +
 xen/xsm/flask/hooks.c                  | 66 ++++++++++++++++++++------
 xen/xsm/flask/policy/initial_sids      |  1 +
 13 files changed, 104 insertions(+), 16 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:21:32 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358151.587192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6joC-0007UE-Fn; Thu, 30 Jun 2022 02:21:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358151.587192; Thu, 30 Jun 2022 02:21:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6joC-0007U7-B8; Thu, 30 Jun 2022 02:21:32 +0000
Received: by outflank-mailman (input) for mailman id 358151;
 Thu, 30 Jun 2022 02:21:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nqik=XF=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1o6joB-0007DT-Vd
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:21:32 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5987cd99-f81b-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 04:21:30 +0200 (CEST)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1656555681174357.5298548276269;
 Wed, 29 Jun 2022 19:21:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5987cd99-f81b-11ec-bdce-3d151da133c5
ARC-Seal: i=1; a=rsa-sha256; t=1656555683; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=KkZjrAWhyCzxuaTfk4HyOQcABlZH9V2u2VGeNfUtgSBEPaiW8EcX25QVk+aThyW9bcgg3bJqrM95slVa9TIxdA4h+yGvABsf5rTdS/k22BZQLN2GL90Gzdtwy9/U2MuPUzn9sbVwTZPi39AjjdcwEZpgv4QXQOXCjwozdBvFwbw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1656555683; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=Y1u1KN5rb+4xb8wageG1qE7t6SXe6Jl3PmiVFY61+kE=; 
	b=Zn8YMEOXaaK+TNIthX1/WdUBSMd7zWqTnMRnbG1IGteneXA8OGbhNdOL9NTDN/Nf2Gy13oA4JkaVf03ejogINWyH479y/V9pPQmPUIai1km+MjHkC0YYMLxa9scilQzKOPwhGWD/KJRfPh7p6rZJ2bBiIhbICD35UUoPRRGsdDw=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1656555683;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding:Reply-To;
	bh=Y1u1KN5rb+4xb8wageG1qE7t6SXe6Jl3PmiVFY61+kE=;
	b=CDFj8biAa4oX+RJ4/e8TqpgpEsW22yRnV8rI/aB9bOpWv1qSh0wC7xxo1sEJfX76
	5CJOpU4FvTl4JNGrYX6Yswzl6qAaIJr8hTww9bPjv1nCyoy2+AHp67JNaL3OpfDPZ9G
	51UOBOTGIiXMDRkpj3Xs8gI2OrqMKnFrBz+WfYA8=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io,
	jandryuk@gmail.com,
	christopher.clark@starlab.io,
	Luca Fancellu <luca.fancellu@arm.com>,
	Julien Grall <jgrall@amazon.com>,
	Rahul Singh <rahul.singh@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v9 1/3] xsm: create idle domain privileged and demote after setup
Date: Wed, 29 Jun 2022 22:21:08 -0400
Message-Id: <20220630022110.31555-2-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220630022110.31555-1-dpsmith@apertussolutions.com>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

There are new capabilities, dom0less and hyperlaunch, that introduce internal
hypervisor logic which needs to make resource allocation calls that are
protected by XSM access checks. These allocations are required when dom0less
and hyperlaunch construct the initial domain(s). This creates an issue, as a
subset of the hypervisor code is executed under a system domain, the idle
domain, which is represented by a per-CPU non-privileged struct domain. To
enable these new capabilities to function correctly but in a controlled
manner, this commit changes the idle system domain to be created as a
privileged domain under the default policy and demoted before transitioning to
running. A new XSM hook, xsm_set_system_active(), is introduced to allow each
XSM policy type to demote the idle domain appropriately for that policy type.
In the case of SILO, it inherits the default policy's hook for
xsm_set_system_active().

For flask, a stub is added to ensure that the flask policy system will function
correctly with this patch until flask is extended with support for starting the
idle domain privileged and properly demoting it on the call to
xsm_set_system_active().

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com> # arm
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/setup.c    |  3 +++
 xen/arch/x86/setup.c    |  4 ++++
 xen/common/sched/core.c |  7 ++++++-
 xen/include/xsm/dummy.h | 17 +++++++++++++++++
 xen/include/xsm/xsm.h   |  6 ++++++
 xen/xsm/dummy.c         |  1 +
 xen/xsm/flask/hooks.c   | 23 +++++++++++++++++++++++
 7 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 577c54e6fb..85ff956ec2 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1063,6 +1063,9 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Hide UART from DOM0 if we're using it */
     serial_endboot();
 
+    if ( (rc = xsm_set_system_active()) != 0 )
+        panic("xsm: unable to switch to SYSTEM_ACTIVE privilege: %d\n", rc);
+
     system_state = SYS_STATE_active;
 
     for_each_domain( d )
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 53a73010e0..f08b07b8de 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -619,6 +619,10 @@ static void noreturn init_done(void)
 {
     void *va;
     unsigned long start, end;
+    int err;
+
+    if ( (err = xsm_set_system_active()) != 0 )
+        panic("xsm: unable to switch to SYSTEM_ACTIVE privilege: %d\n", err);
 
     system_state = SYS_STATE_active;
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 8c73489654..250207038e 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3033,7 +3033,12 @@ void __init scheduler_init(void)
         sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
     }
 
-    idle_domain = domain_create(DOMID_IDLE, NULL, 0);
+    /*
+     * The idle domain is created privileged to ensure unrestricted access during
+     * setup and will be demoted by xsm_set_system_active() when setup is
+     * complete.
+     */
+    idle_domain = domain_create(DOMID_IDLE, NULL, CDF_privileged);
     BUG_ON(IS_ERR(idle_domain));
     BUG_ON(nr_cpu_ids > ARRAY_SIZE(idle_vcpu));
     idle_domain->vcpu = idle_vcpu;
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 58afc1d589..77f27e7163 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -101,6 +101,23 @@ static always_inline int xsm_default_action(
     }
 }
 
+static XSM_INLINE int cf_check xsm_set_system_active(void)
+{
+    struct domain *d = current->domain;
+
+    ASSERT(d->is_privileged);
+
+    if ( d->domain_id != DOMID_IDLE )
+    {
+        printk("%s: should only be called by idle domain\n", __func__);
+        return -EPERM;
+    }
+
+    d->is_privileged = false;
+
+    return 0;
+}
+
 static XSM_INLINE void cf_check xsm_security_domaininfo(
     struct domain *d, struct xen_domctl_getdomaininfo *info)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 3e2b7fe3db..8dad03fd3d 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -52,6 +52,7 @@ typedef enum xsm_default xsm_default_t;
  * !!! WARNING !!!
  */
 struct xsm_ops {
+    int (*set_system_active)(void);
     void (*security_domaininfo)(struct domain *d,
                                 struct xen_domctl_getdomaininfo *info);
     int (*domain_create)(struct domain *d, uint32_t ssidref);
@@ -208,6 +209,11 @@ extern struct xsm_ops xsm_ops;
 
 #ifndef XSM_NO_WRAPPERS
 
+static inline int xsm_set_system_active(void)
+{
+    return alternative_call(xsm_ops.set_system_active);
+}
+
 static inline void xsm_security_domaininfo(
     struct domain *d, struct xen_domctl_getdomaininfo *info)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 8c044ef615..e6ffa948f7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -14,6 +14,7 @@
 #include <xsm/dummy.h>
 
 static const struct xsm_ops __initconst_cf_clobber dummy_ops = {
+    .set_system_active             = xsm_set_system_active,
     .security_domaininfo           = xsm_security_domaininfo,
     .domain_create                 = xsm_domain_create,
     .getdomaininfo                 = xsm_getdomaininfo,
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 6ffafc2f44..c97c44f803 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -191,6 +191,28 @@ static int cf_check flask_domain_alloc_security(struct domain *d)
     return 0;
 }
 
+static int cf_check flask_set_system_active(void)
+{
+    struct domain *d = current->domain;
+
+    ASSERT(d->is_privileged);
+
+    if ( d->domain_id != DOMID_IDLE )
+    {
+        printk("%s: should only be called by idle domain\n", __func__);
+        return -EPERM;
+    }
+
+    /*
+     * While is_privileged has no significant meaning under flask, set it to
+     * false as it is used not only for privilege checks but also to identify
+     * whether a domain is the control domain.
+     */
+    d->is_privileged = false;
+
+    return 0;
+}
+
 static void cf_check flask_domain_free_security(struct domain *d)
 {
     struct domain_security_struct *dsec = d->ssid;
@@ -1774,6 +1796,7 @@ static int cf_check flask_argo_send(
 #endif
 
 static const struct xsm_ops __initconst_cf_clobber flask_ops = {
+    .set_system_active = flask_set_system_active,
     .security_domaininfo = flask_security_domaininfo,
     .domain_create = flask_domain_create,
     .getdomaininfo = flask_getdomaininfo,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:22:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:22:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358159.587203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6joi-0008G5-OK; Thu, 30 Jun 2022 02:22:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358159.587203; Thu, 30 Jun 2022 02:22:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6joi-0008Ep-Jo; Thu, 30 Jun 2022 02:22:04 +0000
Received: by outflank-mailman (input) for mailman id 358159;
 Thu, 30 Jun 2022 02:22:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nqik=XF=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1o6joh-0008BT-A4
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:22:03 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6beadad3-f81b-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 04:22:01 +0200 (CEST)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1656555682878122.91886337918402;
 Wed, 29 Jun 2022 19:21:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6beadad3-f81b-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; t=1656555685; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=beWn45PMRfvxFhMizYRNdnezsGJT5qkQk2GeVtvsU4doaI4qGTEH92PNUfVkzgOCP19hPXkZ4neQMnQqWwEE8FLs0RR7xC2oaJg7LxODwyXeg+CcF6gamkP4pPaH+pAAIgiiSh6/J7MtBzgmLcKA8a2d7YUA6VnR3jO+t/Q21JE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1656555685; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=lgQtLIGn/s1ADo/YMy42iciFpzQ0Han+37xft97wtw4=; 
	b=mOUUOw+SCuiGAlKPfFqV32hrboan+vKN9hsj6xxFGVIUd+D7YhhvypbiNE4dDZZXYQcYIiUOqT+kmOhpP6SXQ1npX7VteJmtHNK8jHfn6PNZv0cio8LW21vQRy0SVlfn9rLaYEuBJdQWP0U7Ngqv5g7bfoiuFltRQS5CtSYk5Z8=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1656555685;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=lgQtLIGn/s1ADo/YMy42iciFpzQ0Han+37xft97wtw4=;
	b=jqaEV/m+/820i+9oBGJS2fZd+R8jU6KNKf8cUovdFCAei597pVP1g19wEyIXtEi6
	ffYNQ4uC280IlUL9/Umjye0SLXunFLGxj8yW3RjcJ3P7DuH1Y0t1Q3TzUmlAo5JY+oh
	5lKx4GYFNIf/NjuCoxxTtzu8pruzoTJ2u0vXe1cE=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io,
	jandryuk@gmail.com,
	christopher.clark@starlab.io,
	Luca Fancellu <luca.fancellu@arm.com>,
	Rahul Singh <rahul.singh@arm.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v9 2/3] flask: implement xsm_set_system_active
Date: Wed, 29 Jun 2022 22:21:09 -0400
Message-Id: <20220630022110.31555-3-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220630022110.31555-1-dpsmith@apertussolutions.com>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

This commit implements full support for starting the idle domain privileged by
introducing a new flask label, xenboot_t, with which the idle domain is labeled
at creation. It then implements the XSM hook xsm_set_system_active() to relabel
the idle domain with the existing xen_t flask label.

In the reference flask policy a new macro, xen_build_domain(target), is
introduced for creating policies for dom0less/hyperlaunch, allowing the
hypervisor to create and assign the necessary resources for domain
construction.
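
As an illustration (assuming the standard m4 expansion of the macro defined in
xen.if below), a call such as xen_build_domain(domU_t) expands to roughly:

```
allow xenboot_t domU_t:domain create;
allow xenboot_t domU_t_channel:event create;
```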

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>
---
 tools/flask/policy/modules/xen.if      | 7 +++++++
 tools/flask/policy/modules/xen.te      | 1 +
 tools/flask/policy/policy/initial_sids | 1 +
 xen/xsm/flask/hooks.c                  | 9 ++++++++-
 xen/xsm/flask/policy/initial_sids      | 1 +
 5 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 5e2aa472b6..424daab6a0 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -62,6 +62,13 @@ define(`create_domain_common', `
 			setparam altp2mhvm altp2mhvm_op dm };
 ')
 
+# xen_build_domain(target)
+#   Allow a domain to be created at boot by the hypervisor
+define(`xen_build_domain', `
+	allow xenboot_t $1:domain create;
+	allow xenboot_t $1_channel:event create;
+')
+
 # create_domain(priv, target)
 #   Allow a domain to be created directly
 define(`create_domain', `
diff --git a/tools/flask/policy/modules/xen.te b/tools/flask/policy/modules/xen.te
index 3dbf93d2b8..de98206fdd 100644
--- a/tools/flask/policy/modules/xen.te
+++ b/tools/flask/policy/modules/xen.te
@@ -24,6 +24,7 @@ attribute mls_priv;
 ################################################################################
 
 # The hypervisor itself
+type xenboot_t, xen_type, mls_priv;
 type xen_t, xen_type, mls_priv;
 
 # Domain 0
diff --git a/tools/flask/policy/policy/initial_sids b/tools/flask/policy/policy/initial_sids
index 6b7b7eff21..ec729d3ba3 100644
--- a/tools/flask/policy/policy/initial_sids
+++ b/tools/flask/policy/policy/initial_sids
@@ -2,6 +2,7 @@
 # objects created before the policy is loaded or for objects that do not have a
 # label defined in some other manner.
 
+sid xenboot gen_context(system_u:system_r:xenboot_t,s0)
 sid xen gen_context(system_u:system_r:xen_t,s0)
 sid dom0 gen_context(system_u:system_r:dom0_t,s0)
 sid domxen gen_context(system_u:system_r:domxen_t,s0)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index c97c44f803..8c9cd0f297 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -173,7 +173,7 @@ static int cf_check flask_domain_alloc_security(struct domain *d)
     switch ( d->domain_id )
     {
     case DOMID_IDLE:
-        dsec->sid = SECINITSID_XEN;
+        dsec->sid = SECINITSID_XENBOOT;
         break;
     case DOMID_XEN:
         dsec->sid = SECINITSID_DOMXEN;
@@ -193,9 +193,14 @@ static int cf_check flask_domain_alloc_security(struct domain *d)
 
 static int cf_check flask_set_system_active(void)
 {
+    struct domain_security_struct *dsec;
     struct domain *d = current->domain;
 
+    dsec = d->ssid;
+
     ASSERT(d->is_privileged);
+    ASSERT(dsec->sid == SECINITSID_XENBOOT);
+    ASSERT(dsec->self_sid == SECINITSID_XENBOOT);
 
     if ( d->domain_id != DOMID_IDLE )
     {
@@ -210,6 +215,8 @@ static int cf_check flask_set_system_active(void)
      */
     d->is_privileged = false;
 
+    dsec->self_sid = dsec->sid = SECINITSID_XEN;
+
     return 0;
 }
 
diff --git a/xen/xsm/flask/policy/initial_sids b/xen/xsm/flask/policy/initial_sids
index 7eca70d339..e8b55b8368 100644
--- a/xen/xsm/flask/policy/initial_sids
+++ b/xen/xsm/flask/policy/initial_sids
@@ -3,6 +3,7 @@
 #
 # Define initial security identifiers 
 #
+sid xenboot
 sid xen
 sid dom0
 sid domio
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 02:22:27 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 02:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358162.587213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jp5-0000JA-0m; Thu, 30 Jun 2022 02:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358162.587213; Thu, 30 Jun 2022 02:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6jp4-0000J3-U4; Thu, 30 Jun 2022 02:22:26 +0000
Received: by outflank-mailman (input) for mailman id 358162;
 Thu, 30 Jun 2022 02:22:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nqik=XF=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1o6jp3-0008BT-FC
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 02:22:25 +0000
Received: from sender4-of-o51.zoho.com (sender4-of-o51.zoho.com
 [136.143.188.51]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79b04c7c-f81b-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 04:22:24 +0200 (CEST)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1656555684266753.5376810279763;
 Wed, 29 Jun 2022 19:21:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79b04c7c-f81b-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; t=1656555685; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=mYop27cKJl+Hmae0mmbw30oqw52xmdc7fV5P5iUOIyOt8fZiB3e7AEI34J+ZuOOhDSE4+gfFgiGzZZnujk01tQQSLEBvkMjwNhZngt6hx2Hvhc5n+rXkt8ZTANbiiD6baa+RwzuD9X8JOhl4zt6glPVRpm5Y9BGoQmH7XXuzoBU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1656555685; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=tlQWv3Rz2gRTz+BpNK92sdZYks2PMVMJJttWyeIdrUk=; 
	b=Jnb6FbLXL0hEpkIFCRP+1kxfAE85YAgZCzWa2i90pzYWf1BulCvO9egCf7WZylXUmT5tuMmyVSy0kmN5ZD/MZCkgKvjyXvIZEhqaQ2UHh+KG+UWWnRc1X4YTP+qhcJsF7IbKr5UIgFtmJCBpgtVnJOpUGdyqoho53Nk0p7Emx6A=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1656555685;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding:Reply-To;
	bh=tlQWv3Rz2gRTz+BpNK92sdZYks2PMVMJJttWyeIdrUk=;
	b=n9weZOBj6+lDaTHCecVjlcFGekNIr0LkNffiAlFm7i5Pw0k76kRQM89psgp4IRDz
	NS7ZiJJsmrahuUXDZ6S5hVT+NULP5yM3OrbQbguLlvj8CV8yswTOaiWb5YfcIR81sw5
	/9ad6XlXHtQ9zekJumjJY2dvKhQYySQNA/U4HzP0=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io,
	jandryuk@gmail.com,
	christopher.clark@starlab.io,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
Date: Wed, 29 Jun 2022 22:21:10 -0400
Message-Id: <20220630022110.31555-4-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220630022110.31555-1-dpsmith@apertussolutions.com>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

The function flask_domain_alloc_security() is where a default sid should be
assigned to a domain under construction. For reasons unknown, the initial
domain would be assigned unlabeled_t and then fixed up under
flask_domain_create().  With the introduction of xenboot_t it is now possible
to distinguish when the hypervisor is in the boot state.

This commit corrects this by checking in flask_domain_alloc_security() whether
the hypervisor is running under the xenboot_t context. If it is, the domain's
is_privileged field is inspected to select the appropriate default label,
dom0_t or domU_t, for the domain. The logic in flask_domain_create() is
changed to allow the incoming sid to override the default label.

The base policy was adjusted to allow the idle domain under the xenboot_t
context to be able to construct domains of both types, dom0 and domU.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 tools/flask/policy/modules/dom0.te |  3 +++
 tools/flask/policy/modules/domU.te |  3 +++
 xen/xsm/flask/hooks.c              | 34 ++++++++++++++++++------------
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 0a63ce15b6..2022bb9636 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -75,3 +75,6 @@ admin_device(dom0_t, ioport_t)
 admin_device(dom0_t, iomem_t)
 
 domain_comms(dom0_t, dom0_t)
+
+# Allow the hypervisor to build domains of type dom0_t
+xen_build_domain(dom0_t)
diff --git a/tools/flask/policy/modules/domU.te b/tools/flask/policy/modules/domU.te
index b77df29d56..73fc90c3c6 100644
--- a/tools/flask/policy/modules/domU.te
+++ b/tools/flask/policy/modules/domU.te
@@ -13,6 +13,9 @@ domain_comms(domU_t, domU_t)
 migrate_domain_out(dom0_t, domU_t)
 domain_self_comms(domU_t)
 
+# Allow the hypervisor to build domains of type domU_t
+xen_build_domain(domU_t)
+
 # Device model for domU_t.  You can define distinct types for device models for
 # domains of other types, or add more make_device_model lines for this type.
 declare_domain(dm_dom_t)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 8c9cd0f297..caa0ae7d4c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -182,7 +182,15 @@ static int cf_check flask_domain_alloc_security(struct domain *d)
         dsec->sid = SECINITSID_DOMIO;
         break;
     default:
-        dsec->sid = SECINITSID_UNLABELED;
+        if ( domain_sid(current->domain) == SECINITSID_XENBOOT )
+        {
+            if ( d->is_privileged )
+                dsec->sid = SECINITSID_DOM0;
+            else
+                dsec->sid = SECINITSID_DOMU;
+        }
+        else
+            dsec->sid = SECINITSID_UNLABELED;
     }
 
     dsec->self_sid = dsec->sid;
@@ -548,23 +556,21 @@ static int cf_check flask_domain_create(struct domain *d, uint32_t ssidref)
 {
     int rc;
     struct domain_security_struct *dsec = d->ssid;
-    static int dom0_created = 0;
 
-    if ( is_idle_domain(current->domain) && !dom0_created )
-    {
-        dsec->sid = SECINITSID_DOM0;
-        dom0_created = 1;
-    }
-    else
+    /*
+     * If the domain has not already been labeled, or a valid new label is
+     * provided, use the provided label; otherwise keep the existing label.
+     */
+    if ( dsec->sid == SECINITSID_UNLABELED || ssidref > 0 )
     {
-        rc = avc_current_has_perm(ssidref, SECCLASS_DOMAIN,
-                          DOMAIN__CREATE, NULL);
-        if ( rc )
-            return rc;
-
         dsec->sid = ssidref;
+        dsec->self_sid = dsec->sid;
     }
-    dsec->self_sid = dsec->sid;
+
+    rc = avc_current_has_perm(dsec->sid, SECCLASS_DOMAIN,
+                              DOMAIN__CREATE, NULL);
+    if ( rc )
+        return rc;
 
     rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
                                  &dsec->self_sid);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 03:25:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 03:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358176.587225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6koG-0007UW-Kq; Thu, 30 Jun 2022 03:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358176.587225; Thu, 30 Jun 2022 03:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6koG-0007UP-HF; Thu, 30 Jun 2022 03:25:40 +0000
Received: by outflank-mailman (input) for mailman id 358176;
 Thu, 30 Jun 2022 03:25:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M0xB=XF=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1o6koE-0007UJ-Fz
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 03:25:39 +0000
Received: from mga17.intel.com (mga17.intel.com [192.55.52.151])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4bc7eb83-f824-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 05:25:33 +0200 (CEST)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Jun 2022 20:25:30 -0700
Received: from fmsmsx604.amr.corp.intel.com ([10.18.126.84])
 by orsmga004.jf.intel.com with ESMTP; 29 Jun 2022 20:25:30 -0700
Received: from fmsmsx606.amr.corp.intel.com (10.18.126.86) by
 fmsmsx604.amr.corp.intel.com (10.18.126.84) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27; Wed, 29 Jun 2022 20:25:29 -0700
Received: from fmsedg601.ED.cps.intel.com (10.1.192.135) by
 fmsmsx606.amr.corp.intel.com (10.18.126.86) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2308.27 via Frontend Transport; Wed, 29 Jun 2022 20:25:29 -0700
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.106)
 by edgegateway.intel.com (192.55.55.70) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2308.27; Wed, 29 Jun 2022 20:25:29 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by DS7PR11MB6294.namprd11.prod.outlook.com (2603:10b6:8:96::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 03:25:27 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::8435:5a99:1e28:b38c%2]) with mapi id 15.20.5373.018; Thu, 30 Jun 2022
 03:25:27 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jane Malalane <jane.malalane@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>,
	"Gross, Jurgen" <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Beulich, Jan"
	<JBeulich@suse.com>, "Cooper, Andrew" <andrew.cooper3@citrix.com>,
	"Pau Monné, Roger" <roger.pau@citrix.com>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH RESEND v10 2/2] x86/xen: Allow per-domain usage of
 hardware virtualized APIC
Date: Thu, 30 Jun 2022 03:25:27 +0000
Message-ID: <BN9PR11MB5276AC94021EA92C539D5F078CBA9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-3-jane.malalane@citrix.com>
In-Reply-To: <20220629135534.19923-3-jane.malalane@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

> From: Jane Malalane <jane.malalane@citrix.com>
> Sent: Wednesday, June 29, 2022 9:56 PM
>
> Introduce a new per-domain creation x86 specific flag to
> select whether hardware assisted virtualization should be used for
> x{2}APIC.
>
> A per-domain option is added to xl in order to select the usage of
> x{2}APIC hardware assisted virtualization, as well as a global
> configuration option.
>
> Having all APIC interaction exit to Xen for emulation is slow and can
> induce much overhead. Hardware can speed up x{2}APIC by decoding the
> APIC access and providing a VM exit with a more specific exit reason
> than a regular EPT fault or by altogether avoiding a VM exit.

Above is obvious and could be removed.

I think the key is just the next paragraph for why we
want this per-domain control.

Apart from that:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

>
> On the other hand, being able to disable x{2}APIC hardware assisted
> virtualization can be useful for testing and debugging purposes.
>
> Note:
>
> - vmx_install_vlapic_mapping doesn't require modifications regardless
> of whether the guest has "Virtualize APIC accesses" enabled or not,
> i.e., setting the APIC_ACCESS_ADDR VMCS field is fine so long as
> virtualize_apic_accesses is supported by the CPU.
>
> - Both per-domain and global assisted_x{2}apic options are not part of
> the migration stream, unless explicitly set in the respective
> configuration files. Default settings of assisted_x{2}apic done
> internally by the toolstack, based on host capabilities at create
> time, are not migrated.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jane Malalane <jane.malalane@citrix.com>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> Reviewed-by: "Roger Pau Monné" <roger.pau@citrix.com>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> CC: Wei Liu <wl@xen.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: George Dunlap <george.dunlap@citrix.com>
> CC: Nick Rosbrook <rosbrookn@gmail.com>
> CC: Juergen Gross <jgross@suse.com>
> CC: Christian Lindig <christian.lindig@citrix.com>
> CC: David Scott <dave@recoil.org>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: "Roger Pau Monné" <roger.pau@citrix.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
>
> v10:
>  * Improve commit message note on migration
>
> v9:
>  * Fix style issues
>  * Fix exit() logic for assisted_x{2}apic parsing
>  * Add and use XEN_X86_MISC_FLAGS_MAX for ABI checking instead of
>    using XEN_X86_ASSISTED_X2APIC directly
>  * Expand commit message to mention migration is safe
>
> v8:
>  * Widen assisted_x{2}apic parsing to PVH guests in
>    parse_config_data()
>
> v7:
>  * Fix void return in libxl__arch_domain_build_info_setdefault
>  * Fix style issues
>  * Use EINVAL when rejecting assisted_x{2}apic for PV guests and
>    ENODEV otherwise, when assisted_x{2}apic isn't supported
>  * Define has_assisted_x{2}apic macros for when !CONFIG_HVM
>  * Replace "EPT" fault reference with "p2m" fault since the former is
>    Intel-specific
>
> v6:
>  * Use ENODEV instead of EINVAL when rejecting assisted_x{2}apic
>    for PV guests
>  * Move has_assisted_x{2}apic macros out of an Intel specific header
>  * Remove references to Intel specific features in documentation
>
> v5:
>  * Revert v4 changes in vmx_vlapic_msr_changed(), preserving the use of
>    the has_assisted_x{2}apic macros
>  * Following changes in assisted_x{2}apic_available definitions in
>    patch 1, retighten conditionals for setting
>    XEN_HVM_CPUID_APIC_ACCESS_VIRT and XEN_HVM_CPUID_X2APIC_VIRT
> in
>    cpuid_hypervisor_leaves()
>
> v4:
>  * Add has_assisted_x{2}apic macros and use them where appropriate
>  * Replace CPU checks with per-domain assisted_x{2}apic control
>    options in vmx_vlapic_msr_changed() and cpuid_hypervisor_leaves(),
>    following edits to assisted_x{2}apic_available definitions in
>    patch 1
>    Note: new assisted_x{2}apic_available definitions make later
>    cpu_has_vmx_apic_reg_virt and cpu_has_vmx_virtual_intr_delivery
>    checks redundant in vmx_vlapic_msr_changed()
>
> v3:
>  * Change info in xl.cfg to better express reality and fix
>    capitalization of x{2}apic
>  * Move "physinfo" variable definition to the beggining of
>    libxl__domain_build_info_setdefault()
>  * Reposition brackets in if statement to match libxl coding style
>  * Shorten logic in libxl__arch_domain_build_info_setdefault()
>  * Correct dprintk message in arch_sanitise_domain_config()
>  * Make appropriate changes in vmx_vlapic_msr_changed() and
>    cpuid_hypervisor_leaves() for amended "assisted_x2apic" bit
>  * Remove unneeded parantheses
>
> v2:
>  * Add a LIBXL_HAVE_ASSISTED_APIC macro
>  * Pass xcpyshinfo as a pointer in libxl__arch_get_physinfo
>  * Add a return statement in now "int"
>    libxl__arch_domain_build_info_setdefault()
>  * Preserve libxl__arch_domain_build_info_setdefault 's location in
>    libxl_create.c
>  * Correct x{2}apic default setting logic in
>    libxl__arch_domain_prepare_config()
>  * Correct logic for parsing assisted_x{2}apic host/guest options in
>    xl_parse.c and initialize them to -1 in xl.c
>  * Use guest options directly in vmx_vlapic_msr_changed
>  * Fix indentation of bool assisted_x{2}apic in struct hvm_domain
>  * Add a change in xenctrl_stubs.c to pass xenctrl ABI checks
> ---
>  docs/man/xl.cfg.5.pod.in              | 15 +++++++++++++++
>  docs/man/xl.conf.5.pod.in             | 12 ++++++++++++
>  tools/golang/xenlight/helpers.gen.go  | 12 ++++++++++++
>  tools/golang/xenlight/types.gen.go    |  2 ++
>  tools/include/libxl.h                 |  7 +++++++
>  tools/libs/light/libxl_arch.h         |  5 +++--
>  tools/libs/light/libxl_arm.c          |  9 ++++++---
>  tools/libs/light/libxl_create.c       | 22 +++++++++++++---------
>  tools/libs/light/libxl_types.idl      |  2 ++
>  tools/libs/light/libxl_x86.c          | 28 ++++++++++++++++++++++++--
>  tools/ocaml/libs/xc/xenctrl.ml        |  2 ++
>  tools/ocaml/libs/xc/xenctrl.mli       |  2 ++
>  tools/ocaml/libs/xc/xenctrl_stubs.c   |  2 +-
>  tools/xl/xl.c                         |  8 ++++++++
>  tools/xl/xl.h                         |  2 ++
>  tools/xl/xl_parse.c                   | 19 +++++++++++++++++++
>  xen/arch/x86/domain.c                 | 29 ++++++++++++++++++++++++++++-
>  xen/arch/x86/hvm/vmx/vmcs.c           |  4 ++++
>  xen/arch/x86/hvm/vmx/vmx.c            | 13 ++++---------
>  xen/arch/x86/include/asm/hvm/domain.h |  6 ++++++
>  xen/arch/x86/include/asm/hvm/hvm.h    |  5 +++++
>  xen/arch/x86/traps.c                  |  5 +++--
>  xen/include/public/arch-x86/xen.h     |  5 +++++
>  23 files changed, 187 insertions(+), 29 deletions(-)
>
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index b98d161398..6d98d73d76 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1862,6 +1862,21 @@ firmware tables when using certain older guest
> Operating
>  Systems. These tables have been superseded by newer constructs within
>  the ACPI tables.
>
> +=item B<assisted_xapic=BOOLEAN>
> +
> +B<(x86 only)> Enables or disables hardware assisted virtualization for
> +xAPIC. With this option enabled, a memory-mapped APIC access will be
> +decoded by hardware and either issue a more specific VM exit than just
> +a p2m fault, or altogether avoid a VM exit. The
> +default is settable via L<xl.conf(5)>.
> +
> +=item B<assisted_x2apic=BOOLEAN>
> +
> +B<(x86 only)> Enables or disables hardware assisted virtualization for
> +x2APIC. With this option enabled, certain accesses to MSR APIC
> +registers will avoid a VM exit into the hypervisor. The default is
> +settable via L<xl.conf(5)>.
> +
>  =item B<nx=BOOLEAN>
>
>  B<(x86 only)> Hides or exposes the No-eXecute capability. This allows a
> guest
> diff --git a/docs/man/xl.conf.5.pod.in b/docs/man/xl.conf.5.pod.in
> index df20c08137..95d136d1ea 100644
> --- a/docs/man/xl.conf.5.pod.in
> +++ b/docs/man/xl.conf.5.pod.in
> @@ -107,6 +107,18 @@ Sets the default value for the C<max_grant_version>
> domain config value.
>
>  Default: maximum grant version supported by the hypervisor.
>
> +=item B<assisted_xapic=BOOLEAN>
> +
> +If enabled, domains will use xAPIC hardware assisted virtualization by
> default.
> +
> +Default: enabled if supported.
> +
> +=item B<assisted_x2apic=BOOLEAN>
> +
> +If enabled, domains will use x2APIC hardware assisted virtualization by
> default.
> +
> +Default: enabled if supported.
> +
>  =item B<vif.default.script="PATH">
>
>  Configures the default hotplug script used by virtual network devices.
> diff --git a/tools/golang/xenlight/helpers.gen.go
> b/tools/golang/xenlight/helpers.gen.go
> index dd4e6c9f14..dece545ee0 100644
> --- a/tools/golang/xenlight/helpers.gen.go
> +++ b/tools/golang/xenlight/helpers.gen.go
> @@ -1120,6 +1120,12 @@ x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
>  if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
>  return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
>  }
> +if err := x.ArchX86.AssistedXapic.fromC(&xc.arch_x86.assisted_xapic);err !=
> nil {
> +return fmt.Errorf("converting field ArchX86.AssistedXapic: %v", err)
> +}
> +if err :=
> x.ArchX86.AssistedX2Apic.fromC(&xc.arch_x86.assisted_x2apic);err != nil {
> +return fmt.Errorf("converting field ArchX86.AssistedX2Apic: %v", err)
> +}
>  x.Altp2M = Altp2MMode(xc.altp2m)
>  x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
>  if err := x.Vpmu.fromC(&xc.vpmu);err != nil {
> @@ -1605,6 +1611,12 @@ xc.arch_arm.vuart =
> C.libxl_vuart_type(x.ArchArm.Vuart)
>  if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
>  return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
>  }
> +if err := x.ArchX86.AssistedXapic.toC(&xc.arch_x86.assisted_xapic); err != nil
> {
> +return fmt.Errorf("converting field ArchX86.AssistedXapic: %v", err)
> +}
> +if err := x.ArchX86.AssistedX2Apic.toC(&xc.arch_x86.assisted_x2apic); err !=
> nil {
> +return fmt.Errorf("converting field ArchX86.AssistedX2Apic: %v", err)
> +}
>  xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
>  xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
>  if err := x.Vpmu.toC(&xc.vpmu); err != nil {
> diff --git a/tools/golang/xenlight/types.gen.go
> b/tools/golang/xenlight/types.gen.go
> index 87be46c745..253c9ad93d 100644
> --- a/tools/golang/xenlight/types.gen.go
> +++ b/tools/golang/xenlight/types.gen.go
> @@ -520,6 +520,8 @@ Vuart VuartType
>  }
>  ArchX86 struct {
>  MsrRelaxed Defbool
> +AssistedXapic Defbool
> +AssistedX2Apic Defbool
>  }
>  Altp2M Altp2MMode
>  VmtraceBufKb int
> diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> index 364d852278..7910c458e3 100644
> --- a/tools/include/libxl.h
> +++ b/tools/include/libxl.h
> @@ -535,6 +535,13 @@
>  #define LIBXL_HAVE_PHYSINFO_ASSISTED_APIC 1
>
>  /*
> + * LIBXL_HAVE_ASSISTED_APIC indicates that libxl_domain_build_info has
> + * assisted_xapic and assisted_x2apic fields for enabling hardware
> + * assisted virtualization for x{2}apic per domain.
> + */
> +#define LIBXL_HAVE_ASSISTED_APIC 1
> +
> +/*
>   * libxl ABI compatibility
>   *
>   * The only guarantee which libxl makes regarding ABI compatibility
> diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
> index 207ceac6a1..03b89929e6 100644
> --- a/tools/libs/light/libxl_arch.h
> +++ b/tools/libs/light/libxl_arch.h
> @@ -71,8 +71,9 @@ void
> libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
>                                                 libxl_domain_create_info *c_info);
>
>  _hidden
> -void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> -                                              libxl_domain_build_info *b_info);
> +int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> +                                             libxl_domain_build_info *b_info,
> +                                             const libxl_physinfo *physinfo);
>
>  _hidden
>  int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index 39fdca1b49..7dee2afd4b 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -1384,14 +1384,15 @@ void
> libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
>      }
>  }
>
> -void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> -                                              libxl_domain_build_info *b_info)
> +int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> +                                             libxl_domain_build_info *b_info,
> +                                              const libxl_physinfo *physinfo)
>  {
>      /* ACPI is disabled by default */
>      libxl_defbool_setdefault(&b_info->acpi, false);
>
>      if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
> -        return;
> +        return 0;
>
>      LOG(DEBUG, "Converting build_info to PVH");
>
> @@ -1399,6 +1400,8 @@ void
> libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      memset(&b_info->u, '\0', sizeof(b_info->u));
>      b_info->type = LIBXL_DOMAIN_TYPE_INVALID;
>      libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
> +
> +    return 0;
>  }
>
>  int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
> diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> index 2339f09e95..b9dd2deedf 100644
> --- a/tools/libs/light/libxl_create.c
> +++ b/tools/libs/light/libxl_create.c
> @@ -75,6 +75,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>                                          libxl_domain_build_info *b_info)
>  {
>      int i, rc;
> +    libxl_physinfo info;
>
>      if (b_info->type != LIBXL_DOMAIN_TYPE_HVM &&
>          b_info->type != LIBXL_DOMAIN_TYPE_PV &&
> @@ -264,7 +265,18 @@ int libxl__domain_build_info_setdefault(libxl__gc
> *gc,
>      if (!b_info->event_channels)
>          b_info->event_channels = 1023;
>
> -    libxl__arch_domain_build_info_setdefault(gc, b_info);
> +    rc = libxl_get_physinfo(CTX, &info);
> +    if (rc) {
> +        LOG(ERROR, "failed to get hypervisor info");
> +        return rc;
> +    }
> +
> +    rc = libxl__arch_domain_build_info_setdefault(gc, b_info, &info);
> +    if (rc) {
> +        LOG(ERROR, "unable to set domain arch build info defaults");
> +        return rc;
> +    }
> +
>      libxl_defbool_setdefault(&b_info->dm_restrict, false);
>
>      if (b_info->iommu_memkb == LIBXL_MEMKB_DEFAULT)
> @@ -457,14 +469,6 @@ int libxl__domain_build_info_setdefault(libxl__gc
> *gc,
>      }
>
>      if (b_info->max_grant_version == LIBXL_MAX_GRANT_DEFAULT) {
> -        libxl_physinfo info;
> -
> -        rc = libxl_get_physinfo(CTX, &info);
> -        if (rc) {
> -            LOG(ERROR, "failed to get hypervisor info");
> -            return rc;
> -        }
> -
>          if (info.cap_gnttab_v2)
>              b_info->max_grant_version = 2;
>          else if (info.cap_gnttab_v1)
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index 42ac6c357b..db5eb0a0b3 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -648,6 +648,8 @@ libxl_domain_build_info =
> Struct("domain_build_info",[
>                                ("vuart", libxl_vuart_type),
>                                ])),
>      ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
> +                               ("assisted_xapic", libxl_defbool),
> +                               ("assisted_x2apic", libxl_defbool),
>                                ])),
>      # Alternate p2m is not bound to any architecture or guest type, as it is
>      # supported by x86 HVM and ARM support is planned.
> diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
> index e0a06ecfe3..7c5ee74443 100644
> --- a/tools/libs/light/libxl_x86.c
> +++ b/tools/libs/light/libxl_x86.c
> @@ -23,6 +23,15 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>      if (libxl_defbool_val(d_config->b_info.arch_x86.msr_relaxed))
>          config->arch.misc_flags |= XEN_X86_MSR_RELAXED;
>
> +    if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV)
> +    {
> +        if (libxl_defbool_val(d_config->b_info.arch_x86.assisted_xapic))
> +            config->arch.misc_flags |= XEN_X86_ASSISTED_XAPIC;
> +
> +        if (libxl_defbool_val(d_config->b_info.arch_x86.assisted_x2apic))
> +            config->arch.misc_flags |= XEN_X86_ASSISTED_X2APIC;
> +    }
> +
>      return 0;
>  }
>
> @@ -819,11 +828,26 @@ void
> libxl__arch_domain_create_info_setdefault(libxl__gc *gc,
>  {
>  }
>
> -void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> -                                              libxl_domain_build_info *b_info)
> +int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> +                                             libxl_domain_build_info *b_info,
> +                                              const libxl_physinfo *physinfo)
>  {
>      libxl_defbool_setdefault(&b_info->acpi, true);
>      libxl_defbool_setdefault(&b_info->arch_x86.msr_relaxed, false);
> +
> +    if (b_info->type != LIBXL_DOMAIN_TYPE_PV) {
> +        libxl_defbool_setdefault(&b_info->arch_x86.assisted_xapic,
> +                                 physinfo->cap_assisted_xapic);
> +        libxl_defbool_setdefault(&b_info->arch_x86.assisted_x2apic,
> +                                 physinfo->cap_assisted_x2apic);
> +    }
> +    else if (!libxl_defbool_is_default(b_info->arch_x86.assisted_xapic) ||
> +             !libxl_defbool_is_default(b_info->arch_x86.assisted_x2apic)) {
> +        LOG(ERROR, "Interrupt Controller Virtualization not supported for PV");
> +        return ERROR_INVAL;
> +    }
> +
> +    return 0;
>  }
>
>  int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
DQo+IGRpZmYgLS1naXQgYS90b29scy9vY2FtbC9saWJzL3hjL3hlbmN0cmwubWwgYi90b29scy9v
Y2FtbC9saWJzL3hjL3hlbmN0cmwubWwNCj4gaW5kZXggNzE1MjM5NGZjZS4uMjgzNmFiYjExMCAx
MDA2NDQNCj4gLS0tIGEvdG9vbHMvb2NhbWwvbGlicy94Yy94ZW5jdHJsLm1sDQo+ICsrKyBiL3Rv
b2xzL29jYW1sL2xpYnMveGMveGVuY3RybC5tbA0KPiBAQCAtNTAsNiArNTAsOCBAQCB0eXBlIHg4
Nl9hcmNoX2VtdWxhdGlvbl9mbGFncyA9DQo+IA0KPiAgdHlwZSB4ODZfYXJjaF9taXNjX2ZsYWdz
ID0NCj4gIAl8IFg4Nl9NU1JfUkVMQVhFRA0KPiArCXwgWDg2X0FTU0lTVEVEX1hBUElDDQo+ICsJ
fCBYODZfQVNTSVNURURfWDJBUElDDQo+IA0KPiAgdHlwZSB4ZW5feDg2X2FyY2hfZG9tYWluY29u
ZmlnID0NCj4gIHsNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL29jYW1sL2xpYnMveGMveGVuY3RybC5t
bGkgYi90b29scy9vY2FtbC9saWJzL3hjL3hlbmN0cmwubWxpDQo+IGluZGV4IGJiNWJmNTIwN2Qu
LjRkYzQ3NzlhZDIgMTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL29jYW1sL2xpYnMveGMveGVuY3RybC5t
bGkNCj4gKysrIGIvdG9vbHMvb2NhbWwvbGlicy94Yy94ZW5jdHJsLm1saQ0KPiBAQCAtNDQsNiAr
NDQsOCBAQCB0eXBlIHg4Nl9hcmNoX2VtdWxhdGlvbl9mbGFncyA9DQo+IA0KPiAgdHlwZSB4ODZf
YXJjaF9taXNjX2ZsYWdzID0NCj4gICAgfCBYODZfTVNSX1JFTEFYRUQNCj4gKyAgfCBYODZfQVNT
SVNURURfWEFQSUMNCj4gKyAgfCBYODZfQVNTSVNURURfWDJBUElDDQo+IA0KPiAgdHlwZSB4ZW5f
eDg2X2FyY2hfZG9tYWluY29uZmlnID0gew0KPiAgICBlbXVsYXRpb25fZmxhZ3M6IHg4Nl9hcmNo
X2VtdWxhdGlvbl9mbGFncyBsaXN0Ow0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMvb2NhbWwvbGlicy94
Yy94ZW5jdHJsX3N0dWJzLmMNCj4gYi90b29scy9vY2FtbC9saWJzL3hjL3hlbmN0cmxfc3R1YnMu
Yw0KPiBpbmRleCBlNTY0ODQ1OTBlLi4zOWI4MDM0ZjJhIDEwMDY0NA0KPiAtLS0gYS90b29scy9v
Y2FtbC9saWJzL3hjL3hlbmN0cmxfc3R1YnMuYw0KPiArKysgYi90b29scy9vY2FtbC9saWJzL3hj
L3hlbmN0cmxfc3R1YnMuYw0KPiBAQCAtMjQ0LDcgKzI0NCw3IEBAIENBTUxwcmltIHZhbHVlIHN0
dWJfeGNfZG9tYWluX2NyZWF0ZSh2YWx1ZSB4Y2gsDQo+IHZhbHVlIHdhbnRlZF9kb21pZCwgdmFs
dWUgY29uZmlnDQo+IA0KPiAgCQljZmcuYXJjaC5taXNjX2ZsYWdzID0gb2NhbWxfbGlzdF90b19j
X2JpdG1hcA0KPiAgCQkJLyogISB4ODZfYXJjaF9taXNjX2ZsYWdzIFg4Nl8gbm9uZSAqLw0KPiAt
CQkJLyogISBYRU5fWDg2XyBYRU5fWDg2X01TUl9SRUxBWEVEIGFsbCAqLw0KPiArCQkJLyogISBY
RU5fWDg2XyBYRU5fWDg2X01JU0NfRkxBR1NfTUFYIG1heCAqLw0KPiAgCQkJKFZBTF9NSVNDX0ZM
QUdTKTsNCj4gDQo+ICAjdW5kZWYgVkFMX01JU0NfRkxBR1MNCj4gZGlmZiAtLWdpdCBhL3Rvb2xz
L3hsL3hsLmMgYi90b29scy94bC94bC5jDQo+IGluZGV4IDJkMWVjMThlYTMuLjMxZWIyMjMzMDkg
MTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL3hsL3hsLmMNCj4gKysrIGIvdG9vbHMveGwveGwuYw0KPiBA
QCAtNTcsNiArNTcsOCBAQCBpbnQgbWF4X2dyYW50X2ZyYW1lcyA9IC0xOw0KPiAgaW50IG1heF9t
YXB0cmFja19mcmFtZXMgPSAtMTsNCj4gIGludCBtYXhfZ3JhbnRfdmVyc2lvbiA9IExJQlhMX01B
WF9HUkFOVF9ERUZBVUxUOw0KPiAgbGlieGxfZG9taWQgZG9taWRfcG9saWN5ID0gSU5WQUxJRF9E
T01JRDsNCj4gK2ludCBhc3Npc3RlZF94YXBpYyA9IC0xOw0KPiAraW50IGFzc2lzdGVkX3gyYXBp
YyA9IC0xOw0KPiANCj4gIHhlbnRvb2xsb2dfbGV2ZWwgbWlubXNnbGV2ZWwgPSBtaW5tc2dsZXZl
bF9kZWZhdWx0Ow0KPiANCj4gQEAgLTIwMSw2ICsyMDMsMTIgQEAgc3RhdGljIHZvaWQgcGFyc2Vf
Z2xvYmFsX2NvbmZpZyhjb25zdCBjaGFyDQo+ICpjb25maWdmaWxlLA0KPiAgICAgIGlmICgheGx1
X2NmZ19nZXRfbG9uZyAoY29uZmlnLCAiY2xhaW1fbW9kZSIsICZsLCAwKSkNCj4gICAgICAgICAg
Y2xhaW1fbW9kZSA9IGw7DQo+IA0KPiArICAgIGlmICgheGx1X2NmZ19nZXRfbG9uZyAoY29uZmln
LCAiYXNzaXN0ZWRfeGFwaWMiLCAmbCwgMCkpDQo+ICsgICAgICAgIGFzc2lzdGVkX3hhcGljID0g
bDsNCj4gKw0KPiArICAgIGlmICgheGx1X2NmZ19nZXRfbG9uZyAoY29uZmlnLCAiYXNzaXN0ZWRf
eDJhcGljIiwgJmwsIDApKQ0KPiArICAgICAgICBhc3Npc3RlZF94MmFwaWMgPSBsOw0KPiArDQo+
ICAgICAgeGx1X2NmZ19yZXBsYWNlX3N0cmluZyAoY29uZmlnLCAicmVtdXMuZGVmYXVsdC5uZXRi
dWZzY3JpcHQiLA0KPiAgICAgICAgICAmZGVmYXVsdF9yZW11c19uZXRidWZzY3JpcHQsIDApOw0K
PiAgICAgIHhsdV9jZmdfcmVwbGFjZV9zdHJpbmcgKGNvbmZpZywgImNvbG8uZGVmYXVsdC5wcm94
eXNjcmlwdCIsDQo+IGRpZmYgLS1naXQgYS90b29scy94bC94bC5oIGIvdG9vbHMveGwveGwuaA0K
PiBpbmRleCBjNWM0YmVkYmRkLi41MjhkZWIzZmViIDEwMDY0NA0KPiAtLS0gYS90b29scy94bC94
bC5oDQo+ICsrKyBiL3Rvb2xzL3hsL3hsLmgNCj4gQEAgLTI4Niw2ICsyODYsOCBAQCBleHRlcm4g
bGlieGxfYml0bWFwIGdsb2JhbF92bV9hZmZpbml0eV9tYXNrOw0KPiAgZXh0ZXJuIGxpYnhsX2Jp
dG1hcCBnbG9iYWxfaHZtX2FmZmluaXR5X21hc2s7DQo+ICBleHRlcm4gbGlieGxfYml0bWFwIGds
b2JhbF9wdl9hZmZpbml0eV9tYXNrOw0KPiAgZXh0ZXJuIGxpYnhsX2RvbWlkIGRvbWlkX3BvbGlj
eTsNCj4gK2V4dGVybiBpbnQgYXNzaXN0ZWRfeGFwaWM7DQo+ICtleHRlcm4gaW50IGFzc2lzdGVk
X3gyYXBpYzsNCj4gDQo+ICBlbnVtIG91dHB1dF9mb3JtYXQgew0KPiAgICAgIE9VVFBVVF9GT1JN
QVRfSlNPTiwNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hsL3hsX3BhcnNlLmMgYi90b29scy94bC94
bF9wYXJzZS5jDQo+IGluZGV4IGI5OGMwZGUzNzguLjYwODBmODE1NGQgMTAwNjQ0DQo+IC0tLSBh
L3Rvb2xzL3hsL3hsX3BhcnNlLmMNCj4gKysrIGIvdG9vbHMveGwveGxfcGFyc2UuYw0KPiBAQCAt
Mjc2MSw2ICsyNzYxLDI1IEBAIHNraXBfdXNiZGV2Og0KPiANCj4gICAgICB4bHVfY2ZnX2dldF9k
ZWZib29sKGNvbmZpZywgInZwbXUiLCAmYl9pbmZvLT52cG11LCAwKTsNCj4gDQo+ICsgICAgaWYg
KGJfaW5mby0+dHlwZSAhPSBMSUJYTF9ET01BSU5fVFlQRV9QVikgew0KPiArICAgICAgICBlID0g
eGx1X2NmZ19nZXRfbG9uZyhjb25maWcsICJhc3Npc3RlZF94YXBpYyIsICZsICwgMCk7DQo+ICsg
ICAgICAgIGlmICghZSkNCj4gKyAgICAgICAgICAgIGxpYnhsX2RlZmJvb2xfc2V0KCZiX2luZm8t
PmFyY2hfeDg2LmFzc2lzdGVkX3hhcGljLCBsKTsNCj4gKyAgICAgICAgZWxzZSBpZiAoZSAhPSBF
U1JDSCkNCj4gKyAgICAgICAgICAgIGV4aXQoMSk7DQo+ICsgICAgICAgIGVsc2UgaWYgKGFzc2lz
dGVkX3hhcGljICE9IC0xKSAvKiB1c2UgZ2xvYmFsIGRlZmF1bHQgaWYgcHJlc2VudCAqLw0KPiAr
ICAgICAgICAgICAgbGlieGxfZGVmYm9vbF9zZXQoJmJfaW5mby0+YXJjaF94ODYuYXNzaXN0ZWRf
eGFwaWMsIGFzc2lzdGVkX3hhcGljKTsNCj4gKw0KPiArICAgICAgICBlID0geGx1X2NmZ19nZXRf
bG9uZyhjb25maWcsICJhc3Npc3RlZF94MmFwaWMiLCAmbCwgMCk7DQo+ICsgICAgICAgIGlmICgh
ZSkNCj4gKyAgICAgICAgICAgIGxpYnhsX2RlZmJvb2xfc2V0KCZiX2luZm8tPmFyY2hfeDg2LmFz
c2lzdGVkX3gyYXBpYywgbCk7DQo+ICsgICAgICAgIGVsc2UgaWYgKGUgIT0gRVNSQ0gpDQo+ICsg
ICAgICAgICAgICBleGl0KDEpOw0KPiArICAgICAgICBlbHNlIGlmIChhc3Npc3RlZF94MmFwaWMg
IT0gLTEpIC8qIHVzZSBnbG9iYWwgZGVmYXVsdCBpZiBwcmVzZW50ICovDQo+ICsgICAgICAgICAg
ICBsaWJ4bF9kZWZib29sX3NldCgmYl9pbmZvLT5hcmNoX3g4Ni5hc3Npc3RlZF94MmFwaWMsDQo+
ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhc3Npc3RlZF94MmFwaWMpOw0KPiArICAg
IH0NCj4gKw0KPiAgICAgIHhsdV9jZmdfZGVzdHJveShjb25maWcpOw0KPiAgfQ0KPiANCj4gZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9kb21haW4uYyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYw0K
PiBpbmRleCAwZDI5NDRmZTE0Li5iYzhmMGI1MWZmIDEwMDY0NA0KPiAtLS0gYS94ZW4vYXJjaC94
ODYvZG9tYWluLmMNCj4gKysrIGIveGVuL2FyY2gveDg2L2RvbWFpbi5jDQo+IEBAIC01MCw2ICs1
MCw3IEBADQo+ICAjaW5jbHVkZSA8YXNtL2NwdWlkbGUuaD4NCj4gICNpbmNsdWRlIDxhc20vbXBz
cGVjLmg+DQo+ICAjaW5jbHVkZSA8YXNtL2xkdC5oPg0KPiArI2luY2x1ZGUgPGFzbS9odm0vZG9t
YWluLmg+DQo+ICAjaW5jbHVkZSA8YXNtL2h2bS9odm0uaD4NCj4gICNpbmNsdWRlIDxhc20vaHZt
L25lc3RlZGh2bS5oPg0KPiAgI2luY2x1ZGUgPGFzbS9odm0vc3VwcG9ydC5oPg0KPiBAQCAtNjE5
LDYgKzYyMCw4IEBAIGludCBhcmNoX3Nhbml0aXNlX2RvbWFpbl9jb25maWcoc3RydWN0DQo+IHhl
bl9kb21jdGxfY3JlYXRlZG9tYWluICpjb25maWcpDQo+ICAgICAgYm9vbCBodm0gPSBjb25maWct
PmZsYWdzICYgWEVOX0RPTUNUTF9DREZfaHZtOw0KPiAgICAgIGJvb2wgaGFwID0gY29uZmlnLT5m
bGFncyAmIFhFTl9ET01DVExfQ0RGX2hhcDsNCj4gICAgICBib29sIG5lc3RlZF92aXJ0ID0gY29u
ZmlnLT5mbGFncyAmIFhFTl9ET01DVExfQ0RGX25lc3RlZF92aXJ0Ow0KPiArICAgIGJvb2wgYXNz
aXN0ZWRfeGFwaWMgPSBjb25maWctPmFyY2gubWlzY19mbGFncyAmDQo+IFhFTl9YODZfQVNTSVNU
RURfWEFQSUM7DQo+ICsgICAgYm9vbCBhc3Npc3RlZF94MmFwaWMgPSBjb25maWctPmFyY2gubWlz
Y19mbGFncyAmDQo+IFhFTl9YODZfQVNTSVNURURfWDJBUElDOw0KPiAgICAgIHVuc2lnbmVkIGlu
dCBtYXhfdmNwdXM7DQo+IA0KPiAgICAgIGlmICggaHZtID8gIWh2bV9lbmFibGVkIDogIUlTX0VO
QUJMRUQoQ09ORklHX1BWKSApDQo+IEBAIC02ODUsMTMgKzY4OCwzMSBAQCBpbnQgYXJjaF9zYW5p
dGlzZV9kb21haW5fY29uZmlnKHN0cnVjdA0KPiB4ZW5fZG9tY3RsX2NyZWF0ZWRvbWFpbiAqY29u
ZmlnKQ0KPiAgICAgICAgICB9DQo+ICAgICAgfQ0KPiANCj4gLSAgICBpZiAoIGNvbmZpZy0+YXJj
aC5taXNjX2ZsYWdzICYgflhFTl9YODZfTVNSX1JFTEFYRUQgKQ0KPiArICAgIGlmICggY29uZmln
LT5hcmNoLm1pc2NfZmxhZ3MgJiB+KFhFTl9YODZfTVNSX1JFTEFYRUQgfA0KPiArICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIFhFTl9YODZfQVNTSVNURURfWEFQSUMgfA0KPiAr
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFhFTl9YODZfQVNTSVNURURfWDJB
UElDKSApDQo+ICAgICAgew0KPiAgICAgICAgICBkcHJpbnRrKFhFTkxPR19JTkZPLCAiSW52YWxp
ZCBhcmNoIG1pc2MgZmxhZ3MgJSN4XG4iLA0KPiAgICAgICAgICAgICAgICAgIGNvbmZpZy0+YXJj
aC5taXNjX2ZsYWdzKTsNCj4gICAgICAgICAgcmV0dXJuIC1FSU5WQUw7DQo+ICAgICAgfQ0KPiAN
Cj4gKyAgICBpZiAoIChhc3Npc3RlZF94YXBpYyB8fCBhc3Npc3RlZF94MmFwaWMpICYmICFodm0g
KQ0KPiArICAgIHsNCj4gKyAgICAgICAgZHByaW50ayhYRU5MT0dfSU5GTywNCj4gKyAgICAgICAg
ICAgICAgICAiSW50ZXJydXB0IENvbnRyb2xsZXIgVmlydHVhbGl6YXRpb24gbm90IHN1cHBvcnRl
ZCBmb3IgUFZcbiIpOw0KPiArICAgICAgICByZXR1cm4gLUVJTlZBTDsNCj4gKyAgICB9DQo+ICsN
Cj4gKyAgICBpZiAoIChhc3Npc3RlZF94YXBpYyAmJiAhYXNzaXN0ZWRfeGFwaWNfYXZhaWxhYmxl
KSB8fA0KPiArICAgICAgICAgKGFzc2lzdGVkX3gyYXBpYyAmJiAhYXNzaXN0ZWRfeDJhcGljX2F2
YWlsYWJsZSkgKQ0KPiArICAgIHsNCj4gKyAgICAgICAgZHByaW50ayhYRU5MT0dfSU5GTywNCj4g
KyAgICAgICAgICAgICAgICAiSGFyZHdhcmUgYXNzaXN0ZWQgeCVzQVBJQyByZXF1ZXN0ZWQgYnV0
IG5vdCBhdmFpbGFibGVcbiIsDQo+ICsgICAgICAgICAgICAgICAgYXNzaXN0ZWRfeGFwaWMgJiYg
IWFzc2lzdGVkX3hhcGljX2F2YWlsYWJsZSA/ICIiIDogIjIiKTsNCj4gKyAgICAgICAgcmV0dXJu
IC1FTk9ERVY7DQo+ICsgICAgfQ0KPiArDQo+ICAgICAgcmV0dXJuIDA7DQo+ICB9DQo+IA0KPiBA
QCAtODY0LDYgKzg4NSwxMiBAQCBpbnQgYXJjaF9kb21haW5fY3JlYXRlKHN0cnVjdCBkb21haW4g
KmQsDQo+IA0KPiAgICAgIGQtPmFyY2gubXNyX3JlbGF4ZWQgPSBjb25maWctPmFyY2gubWlzY19m
bGFncyAmIFhFTl9YODZfTVNSX1JFTEFYRUQ7DQo+IA0KPiArICAgIGQtPmFyY2guaHZtLmFzc2lz
dGVkX3hhcGljID0NCj4gKyAgICAgICAgY29uZmlnLT5hcmNoLm1pc2NfZmxhZ3MgJiBYRU5fWDg2
X0FTU0lTVEVEX1hBUElDOw0KPiArDQo+ICsgICAgZC0+YXJjaC5odm0uYXNzaXN0ZWRfeDJhcGlj
ID0NCj4gKyAgICAgICAgY29uZmlnLT5hcmNoLm1pc2NfZmxhZ3MgJiBYRU5fWDg2X0FTU0lTVEVE
X1gyQVBJQzsNCj4gKw0KPiAgICAgIHNwZWNfY3RybF9pbml0X2RvbWFpbihkKTsNCj4gDQo+ICAg
ICAgcmV0dXJuIDA7DQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaHZtL3ZteC92bWNzLmMg
Yi94ZW4vYXJjaC94ODYvaHZtL3ZteC92bWNzLmMNCj4gaW5kZXggNzMyOTYyMmRkNC4uNjgzYzY1
MGQ3NyAxMDA2NDQNCj4gLS0tIGEveGVuL2FyY2gveDg2L2h2bS92bXgvdm1jcy5jDQo+ICsrKyBi
L3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZtY3MuYw0KPiBAQCAtMTEzNCw2ICsxMTM0LDEwIEBAIHN0
YXRpYyBpbnQgY29uc3RydWN0X3ZtY3Moc3RydWN0IHZjcHUgKnYpDQo+ICAgICAgICAgIF9fdm13
cml0ZShQTEVfV0lORE9XLCBwbGVfd2luZG93KTsNCj4gICAgICB9DQo+IA0KPiArICAgIGlmICgg
IWhhc19hc3Npc3RlZF94YXBpYyhkKSApDQo+ICsgICAgICAgIHYtPmFyY2guaHZtLnZteC5zZWNv
bmRhcnlfZXhlY19jb250cm9sICY9DQo+ICsgICAgICAgICAgICB+U0VDT05EQVJZX0VYRUNfVklS
VFVBTElaRV9BUElDX0FDQ0VTU0VTOw0KPiArDQo+ICAgICAgaWYgKCBjcHVfaGFzX3ZteF9zZWNv
bmRhcnlfZXhlY19jb250cm9sICkNCj4gICAgICAgICAgX192bXdyaXRlKFNFQ09OREFSWV9WTV9F
WEVDX0NPTlRST0wsDQo+ICAgICAgICAgICAgICAgICAgICB2LT5hcmNoLmh2bS52bXguc2Vjb25k
YXJ5X2V4ZWNfY29udHJvbCk7DQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaHZtL3ZteC92
bXguYyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZteC5jDQo+IGluZGV4IGYwOGEwMGRjYmIuLjQ3
NTU0Y2MwMDQgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZteC5jDQo+ICsr
KyBiL3hlbi9hcmNoL3g4Ni9odm0vdm14L3ZteC5jDQo+IEBAIC0zMzc2LDE2ICszMzc2LDExIEBA
IHN0YXRpYyB2b2lkIHZteF9pbnN0YWxsX3ZsYXBpY19tYXBwaW5nKHN0cnVjdA0KPiB2Y3B1ICp2
KQ0KPiANCj4gIHZvaWQgdm14X3ZsYXBpY19tc3JfY2hhbmdlZChzdHJ1Y3QgdmNwdSAqdikNCj4g
IHsNCj4gLSAgICBpbnQgdmlydHVhbGl6ZV94MmFwaWNfbW9kZTsNCj4gICAgICBzdHJ1Y3Qgdmxh
cGljICp2bGFwaWMgPSB2Y3B1X3ZsYXBpYyh2KTsNCj4gICAgICB1bnNpZ25lZCBpbnQgbXNyOw0K
PiANCj4gLSAgICB2aXJ0dWFsaXplX3gyYXBpY19tb2RlID0gKCAoY3B1X2hhc192bXhfYXBpY19y
ZWdfdmlydCB8fA0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjcHVfaGFzX3Zt
eF92aXJ0dWFsX2ludHJfZGVsaXZlcnkpICYmDQo+IC0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgY3B1X2hhc192bXhfdmlydHVhbGl6ZV94MmFwaWNfbW9kZSApOw0KPiAtDQo+IC0gICAg
aWYgKCAhY3B1X2hhc192bXhfdmlydHVhbGl6ZV9hcGljX2FjY2Vzc2VzICYmDQo+IC0gICAgICAg
ICAhdmlydHVhbGl6ZV94MmFwaWNfbW9kZSApDQo+ICsgICAgaWYgKCAhaGFzX2Fzc2lzdGVkX3hh
cGljKHYtPmRvbWFpbikgJiYNCj4gKyAgICAgICAgICFoYXNfYXNzaXN0ZWRfeDJhcGljKHYtPmRv
bWFpbikgKQ0KPiAgICAgICAgICByZXR1cm47DQo+IA0KPiAgICAgIHZteF92bWNzX2VudGVyKHYp
Ow0KPiBAQCAtMzM5NSw3ICszMzkwLDcgQEAgdm9pZCB2bXhfdmxhcGljX21zcl9jaGFuZ2VkKHN0
cnVjdCB2Y3B1ICp2KQ0KPiAgICAgIGlmICggIXZsYXBpY19od19kaXNhYmxlZCh2bGFwaWMpICYm
DQo+ICAgICAgICAgICAodmxhcGljX2Jhc2VfYWRkcmVzcyh2bGFwaWMpID09IEFQSUNfREVGQVVM
VF9QSFlTX0JBU0UpICkNCj4gICAgICB7DQo+IC0gICAgICAgIGlmICggdmlydHVhbGl6ZV94MmFw
aWNfbW9kZSAmJiB2bGFwaWNfeDJhcGljX21vZGUodmxhcGljKSApDQo+ICsgICAgICAgIGlmICgg
aGFzX2Fzc2lzdGVkX3gyYXBpYyh2LT5kb21haW4pICYmIHZsYXBpY194MmFwaWNfbW9kZSh2bGFw
aWMpICkNCj4gICAgICAgICAgew0KPiAgICAgICAgICAgICAgdi0+YXJjaC5odm0udm14LnNlY29u
ZGFyeV9leGVjX2NvbnRyb2wgfD0NCj4gICAgICAgICAgICAgICAgICBTRUNPTkRBUllfRVhFQ19W
SVJUVUFMSVpFX1gyQVBJQ19NT0RFOw0KPiBAQCAtMzQxNiw3ICszNDExLDcgQEAgdm9pZCB2bXhf
dmxhcGljX21zcl9jaGFuZ2VkKHN0cnVjdCB2Y3B1ICp2KQ0KPiAgICAgICAgICAgICAgICAgIHZt
eF9jbGVhcl9tc3JfaW50ZXJjZXB0KHYsIE1TUl9YMkFQSUNfU0VMRiwgVk1YX01TUl9XKTsNCj4g
ICAgICAgICAgICAgIH0NCj4gICAgICAgICAgfQ0KPiAtICAgICAgICBlbHNlDQo+ICsgICAgICAg
IGVsc2UgaWYgKCBoYXNfYXNzaXN0ZWRfeGFwaWModi0+ZG9tYWluKSApDQo+ICAgICAgICAgICAg
ICB2LT5hcmNoLmh2bS52bXguc2Vjb25kYXJ5X2V4ZWNfY29udHJvbCB8PQ0KPiAgICAgICAgICAg
ICAgICAgIFNFQ09OREFSWV9FWEVDX1ZJUlRVQUxJWkVfQVBJQ19BQ0NFU1NFUzsNCj4gICAgICB9
DQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vaHZtL2RvbWFpbi5oDQo+
IGIveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2h2bS9kb21haW4uaA0KPiBpbmRleCA2OTg0NTU0
NDRlLi45MmJmNTM0ODNjIDEwMDY0NA0KPiAtLS0gYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20v
aHZtL2RvbWFpbi5oDQo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9odm0vZG9tYWlu
LmgNCj4gQEAgLTExNyw2ICsxMTcsMTIgQEAgc3RydWN0IGh2bV9kb21haW4gew0KPiANCj4gICAg
ICBib29sICAgICAgICAgICAgICAgICAgIGlzX3MzX3N1c3BlbmRlZDsNCj4gDQo+ICsgICAgLyog
eEFQSUMgaGFyZHdhcmUgYXNzaXN0ZWQgdmlydHVhbGl6YXRpb24uICovDQo+ICsgICAgYm9vbCAg
ICAgICAgICAgICAgICAgICBhc3Npc3RlZF94YXBpYzsNCj4gKw0KPiArICAgIC8qIHgyQVBJQyBo
YXJkd2FyZSBhc3Npc3RlZCB2aXJ0dWFsaXphdGlvbi4gKi8NCj4gKyAgICBib29sICAgICAgICAg
ICAgICAgICAgIGFzc2lzdGVkX3gyYXBpYzsNCj4gKw0KPiAgICAgIC8qIGh5cGVydmlzb3IgaW50
ZXJjZXB0ZWQgbXNpeCB0YWJsZSAqLw0KPiAgICAgIHN0cnVjdCBsaXN0X2hlYWQgICAgICAgbXNp
eHRibF9saXN0Ow0KPiANCj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9o
dm0vaHZtLmgNCj4gYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vaHZtL2h2bS5oDQo+IGluZGV4
IDhkMTYyYjJjOTkuLjAzMDk2ZjMxZWYgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9pbmNs
dWRlL2FzbS9odm0vaHZtLmgNCj4gKysrIGIveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL2h2bS9o
dm0uaA0KPiBAQCAtMzkxLDYgKzM5MSw5IEBAIGludCBodm1fZ2V0X3BhcmFtKHN0cnVjdCBkb21h
aW4gKmQsIHVpbnQzMl90IGluZGV4LA0KPiB1aW50NjRfdCAqdmFsdWUpOw0KPiAgZXh0ZXJuIGJv
b2wgYXNzaXN0ZWRfeGFwaWNfYXZhaWxhYmxlOw0KPiAgZXh0ZXJuIGJvb2wgYXNzaXN0ZWRfeDJh
cGljX2F2YWlsYWJsZTsNCj4gDQo+ICsjZGVmaW5lIGhhc19hc3Npc3RlZF94YXBpYyhkKSAoKGQp
LT5hcmNoLmh2bS5hc3Npc3RlZF94YXBpYykNCj4gKyNkZWZpbmUgaGFzX2Fzc2lzdGVkX3gyYXBp
YyhkKSAoKGQpLT5hcmNoLmh2bS5hc3Npc3RlZF94MmFwaWMpDQo+ICsNCj4gICNkZWZpbmUgaHZt
X2dldF9ndWVzdF90aW1lKHYpIGh2bV9nZXRfZ3Vlc3RfdGltZV9maXhlZCh2LCAwKQ0KPiANCj4g
ICNkZWZpbmUgaHZtX3BhZ2luZ19lbmFibGVkKHYpIFwNCj4gQEAgLTkwNyw2ICs5MTAsOCBAQCBz
dGF0aWMgaW5saW5lIHZvaWQgaHZtX3NldF9yZWcoc3RydWN0IHZjcHUgKnYsDQo+IHVuc2lnbmVk
IGludCByZWcsIHVpbnQ2NF90IHZhbCkNCj4gICNkZWZpbmUgYXNzaXN0ZWRfeGFwaWNfYXZhaWxh
YmxlIGZhbHNlDQo+ICAjZGVmaW5lIGFzc2lzdGVkX3gyYXBpY19hdmFpbGFibGUgZmFsc2UNCj4g
DQo+ICsjZGVmaW5lIGhhc19hc3Npc3RlZF94YXBpYyhkKSAoKHZvaWQpKGQpLCBmYWxzZSkNCj4g
KyNkZWZpbmUgaGFzX2Fzc2lzdGVkX3gyYXBpYyhkKSAoKHZvaWQpKGQpLCBmYWxzZSkNCj4gICNk
ZWZpbmUgaHZtX3BhZ2luZ19lbmFibGVkKHYpICgodm9pZCkodiksIGZhbHNlKQ0KPiAgI2RlZmlu
ZSBodm1fd3BfZW5hYmxlZCh2KSAoKHZvaWQpKHYpLCBmYWxzZSkNCj4gICNkZWZpbmUgaHZtX3Bj
aWRfZW5hYmxlZCh2KSAoKHZvaWQpKHYpLCBmYWxzZSkNCj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNo
L3g4Ni90cmFwcy5jIGIveGVuL2FyY2gveDg2L3RyYXBzLmMNCj4gaW5kZXggYmIzZGZjYzkwZi4u
Y2FiZWJmNGY1YiAxMDA2NDQNCj4gLS0tIGEveGVuL2FyY2gveDg2L3RyYXBzLmMNCj4gKysrIGIv
eGVuL2FyY2gveDg2L3RyYXBzLmMNCj4gQEAgLTExMTcsNyArMTExNyw4IEBAIHZvaWQgY3B1aWRf
aHlwZXJ2aXNvcl9sZWF2ZXMoY29uc3Qgc3RydWN0IHZjcHUgKnYsDQo+IHVpbnQzMl90IGxlYWYs
DQo+ICAgICAgICAgIGlmICggIWlzX2h2bV9kb21haW4oZCkgfHwgc3VibGVhZiAhPSAwICkNCj4g
ICAgICAgICAgICAgIGJyZWFrOw0KPiANCj4gLSAgICAgICAgaWYgKCBjcHVfaGFzX3ZteF9hcGlj
X3JlZ192aXJ0ICkNCj4gKyAgICAgICAgaWYgKCBjcHVfaGFzX3ZteF9hcGljX3JlZ192aXJ0ICYm
DQo+ICsgICAgICAgICAgICAgaGFzX2Fzc2lzdGVkX3hhcGljKGQpICkNCj4gICAgICAgICAgICAg
IHJlcy0+YSB8PSBYRU5fSFZNX0NQVUlEX0FQSUNfQUNDRVNTX1ZJUlQ7DQo+IA0KPiAgICAgICAg
ICAvKg0KPiBAQCAtMTEyNiw3ICsxMTI3LDcgQEAgdm9pZCBjcHVpZF9oeXBlcnZpc29yX2xlYXZl
cyhjb25zdCBzdHJ1Y3QgdmNwdSAqdiwNCj4gdWludDMyX3QgbGVhZiwNCj4gICAgICAgICAgICog
YW5kIHdybXNyIGluIHRoZSBndWVzdCB3aWxsIHJ1biB3aXRob3V0IFZNRVhJVHMgKHNlZQ0KPiAg
ICAgICAgICAgKiB2bXhfdmxhcGljX21zcl9jaGFuZ2VkKCkpLg0KPiAgICAgICAgICAgKi8NCj4g
LSAgICAgICAgaWYgKCBjcHVfaGFzX3ZteF92aXJ0dWFsaXplX3gyYXBpY19tb2RlICYmDQo+ICsg
ICAgICAgIGlmICggaGFzX2Fzc2lzdGVkX3gyYXBpYyhkKSAmJg0KPiAgICAgICAgICAgICAgIGNw
dV9oYXNfdm14X2FwaWNfcmVnX3ZpcnQgJiYNCj4gICAgICAgICAgICAgICBjcHVfaGFzX3ZteF92
aXJ0dWFsX2ludHJfZGVsaXZlcnkgKQ0KPiAgICAgICAgICAgICAgcmVzLT5hIHw9IFhFTl9IVk1f
Q1BVSURfWDJBUElDX1ZJUlQ7DQo+IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9wdWJsaWMvYXJj
aC14ODYveGVuLmggYi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC0NCj4geDg2L3hlbi5oDQo+IGlu
ZGV4IDdhY2Q5NGM4ZWIuLjU4YTFlODdlZTkgMTAwNjQ0DQo+IC0tLSBhL3hlbi9pbmNsdWRlL3B1
YmxpYy9hcmNoLXg4Ni94ZW4uaA0KPiArKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC14ODYv
eGVuLmgNCj4gQEAgLTMxNyw5ICszMTcsMTQgQEAgc3RydWN0IHhlbl9hcmNoX2RvbWFpbmNvbmZp
ZyB7DQo+ICAgKiBkb2Vzbid0IGFsbG93IHRoZSBndWVzdCB0byByZWFkIG9yIHdyaXRlIHRvIHRo
ZSB1bmRlcmx5aW5nIE1TUi4NCj4gICAqLw0KPiAgI2RlZmluZSBYRU5fWDg2X01TUl9SRUxBWEVE
ICgxdSA8PCAwKQ0KPiArI2RlZmluZSBYRU5fWDg2X0FTU0lTVEVEX1hBUElDICgxdSA8PCAxKQ0K
PiArI2RlZmluZSBYRU5fWDg2X0FTU0lTVEVEX1gyQVBJQyAoMXUgPDwgMikNCj4gICAgICB1aW50
MzJfdCBtaXNjX2ZsYWdzOw0KPiAgfTsNCj4gDQo+ICsvKiBNYXggIFhFTl9YODZfKiBjb25zdGFu
dC4gVXNlZCBmb3IgQUJJIGNoZWNraW5nLiAqLw0KPiArI2RlZmluZSBYRU5fWDg2X01JU0NfRkxB
R1NfTUFYIFhFTl9YODZfQVNTSVNURURfWDJBUElDDQo+ICsNCj4gIC8qIExvY2F0aW9uIG9mIG9u
bGluZSBWQ1BVIGJpdG1hcC4gKi8NCj4gICNkZWZpbmUgWEVOX0FDUElfQ1BVX01BUCAgICAgICAg
ICAgICAweGFmMDANCj4gICNkZWZpbmUgWEVOX0FDUElfQ1BVX01BUF9MRU4gICAgICAgICAoKEhW
TV9NQVhfVkNQVVMgKyA3KSAvIDgpDQo+IC0tDQo+IDIuMTEuMA0KDQo=
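[Editorial note: the quoted patch above adds two xl options (assisted_xapic / assisted_x2apic) that default to the host's capabilities for non-PV guests. Assuming the series is applied, a guest could opt in explicitly with a config fragment along these lines; the values shown are illustrative, not taken from the patch:]

```
# Illustrative xl guest config fragment (HVM only; PV guests reject these
# options per the setdefault logic in the patch).
type = "hvm"
assisted_xapic = 1
assisted_x2apic = 1
```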


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 04:16:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 04:16:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "jgross@suse.com" <jgross@suse.com>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, "Pandeshwara
 krishna, Mythri" <Mythri.Pandeshwarakrishna@amd.com>, "Rangasamy, Devaraj"
	<Devaraj.Rangasamy@amd.com>, "Thomas, Rijo-john" <Rijo-john.Thomas@amd.com>
Subject: RE: Reg. Tee init fail...
Thread-Topic: Reg. Tee init fail...
Thread-Index: AdiHr2So4ZRGcIepR0+LrE4BlT2RXAAQHZuAAQDerQAAD6nlcA==
Date: Thu, 30 Jun 2022 03:32:36 +0000
Message-ID:
 <DM4PR12MB520060A696B62EFBA3E5E96680BA9@DM4PR12MB5200.namprd12.prod.outlook.com>
References:
 <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
 <7689497b-1977-b30a-5835-587fa266c721@xen.org>
 <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
Accept-Language: en-US
Content-Language: en-US

[AMD Official Use Only - General]

+team

-----Original Message-----
From: Stefano Stabellini <sstabellini@kernel.org>
Sent: Thursday, June 30, 2022 1:34 AM
To: Julien Grall <julien@xen.org>
Cc: SK, SivaSangeetha (Siva Sangeetha) <SivaSangeetha.SK@amd.com>; xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis <bertrand.marquis@arm.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; jgross@suse.com; boris.ostrovsky@oracle.com
Subject: Re: Reg. Tee init fail...

Adding Juergen and Boris because this is a Linux/x86 issue.


As you can see from this Linux driver:
https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132

Linux as dom0 on x86 is trying to communicate with firmware (TEE). Linux is
calling __pa to pass a physical address to firmware. However, __pa returns
a "fake" address not an mfn. I imagine that a quick workaround would be to
call "virt_to_machine" instead of "__pa" in tee-dev.c.

Normally, if this was a device, the "right fix" would be to use
swiotlb-xen:xen_swiotlb_map_page to get back a real physical address.

However, xen_swiotlb_map_page is meant to be used as part of the dma_ops
API and takes a struct device *dev as input parameter. Maybe
xen_swiotlb_map_page can be used for tee-dev as well?


Basically tee-dev would need to call dma_map_page before passing addresses
to firmware, and dma_unmap_page when it is done. E.g.:


  cmd_buffer = dma_map_page(dev, virt_to_page(cmd),
                            cmd & ~PAGE_MASK,
                            ring_size,
                            DMA_TO_DEVICE);
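
[Editorial note: a slightly fuller sketch of this suggestion, in kernel
context and therefore not standalone/runnable; tee_map_ring and its
parameters are hypothetical names, not the driver's actual API:]

```
/* Hedged sketch: wrapping the TEE ring buffer with the DMA API instead of
 * __pa().  dma_map_page() goes through dma_ops, so under Xen the
 * swiotlb-xen backend can return a real machine (bus) address rather than
 * the pseudo-physical address __pa() would give. */
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int tee_map_ring(struct device *dev, void *cmd, size_t ring_size,
                        dma_addr_t *out)
{
	dma_addr_t addr;

	addr = dma_map_page(dev, virt_to_page(cmd),
			    offset_in_page(cmd), ring_size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	*out = addr;
	return 0;
}

/* Teardown counterpart, once firmware is done with the ring:
 *     dma_unmap_page(dev, addr, ring_size, DMA_TO_DEVICE);
 */
```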


Juergen, Boris,
what do you think?



On Fri, 24 Jun 2022, Julien Grall wrote:
> Hi,
>
> (moving the discussion to xen-devel as I think it is more appropriate)
>
> On 24/06/2022 10:53, SK, SivaSangeetha (Siva Sangeetha) wrote:
> > [AMD Official Use Only - General]
>
> Not clear what this means.
>
> >
> > Hi Xen team,
> >
> > In the TEE driver, we allocate a ring buffer, get its physical address
> > from the __pa() macro, and pass that physical address to the secure
> > processor, which maps it and uses it on the secure side.
> >
> > Source:
> > https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
> >
> > This works well natively in Dom0 on the target. When we boot the same
> > Dom0 kernel with the Xen hypervisor enabled, ring init fails.
>
> Do you have any error message or error code?
>
> >
> >
> > We suspect that the address passed to the secure processor is not the
> > same when Xen is enabled, and that some level of address translation
> > might be required to get the exact physical address.
>
> If you are using Xen upstream, Dom0 will be mapped with IPA == PA, so
> there should be no need for translation.
>
> Can you provide more details on your setup (version of Xen, Linux...)?
>
> Cheers,
>
> --
> Julien Grall
>


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 05:21:26 2022
From: Jiamei Xie <Jiamei.Xie@arm.com>
To: Jiamei Xie <Jiamei.Xie@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Wei Chen <Wei.Chen@arm.com>
Subject: RE: [PATCH v3] xen/arm: avoid overflow when setting vtimer in context
 switch
Thread-Topic: [PATCH v3] xen/arm: avoid overflow when setting vtimer in
 context switch
Thread-Index: AQHYjCSE7P53YpeE00S5Qtvr6kIMUK1naUjw
Date: Thu, 30 Jun 2022 05:20:39 +0000
Message-ID:
 <AS8PR08MB769671FD067374347EF976CE92BA9@AS8PR08MB7696.eurprd08.prod.outlook.com>
References: <20220630015336.3040355-1-jiamei.xie@arm.com>
In-Reply-To: <20220630015336.3040355-1-jiamei.xie@arm.com>
Accept-Language: en-US
Content-Language: en-US

Hi,

> -----Original Message-----
> From: Jiamei Xie <jiamei.xie@arm.com>
> Sent: 30 June 2022 9:54
> To: xen-devel@lists.xenproject.org
> Cc: Jiamei Xie <Jiamei.Xie@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Wei Chen <Wei.Chen@arm.com>
> Subject: [PATCH v3] xen/arm: avoid overflow when setting vtimer in context
> switch
>
> virt_vtimer_save is calculating the new time for the vtimer in:
> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset
> - boot_count".
> In this formula, "cval + offset" might cause uint64_t overflow.
> Changing it to "ticks_to_ns(v->domain->arch.virt_timer_base.offset -
> boot_count) + ticks_to_ns(v->arch.virt_timer.cval)" can avoid overflow,
> and "ticks_to_ns(arch.virt_timer_base.offset - boot_count)" will always
> be the same, which has been calculated in domain_vtimer_init.
> Introduce a new field virt_timer_base.nanoseconds to store this value
> for arm in struct arch_domain, so we can use it directly.
>
> Signed-off-by: Jiamei Xie <jiamei.xie@arm.com>
> Change-Id: Ib80cee51eaf844661e6f92154a0339ad2a652f9b

I am sorry, I forgot to remove the Change-Id.

> ---
> was "xen/arm: avoid vtimer flip-flop transition in context switch".
> v3 changes:
> -re-write commit message
> -store nanoseconds in virt_timer_base instead of adding a new structure
> -assign to nanoseconds first, then seconds
> ---
>  xen/arch/arm/include/asm/domain.h | 1 +
>  xen/arch/arm/vtimer.c             | 9 ++++++---
>  2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index ed63c2b6f9..cd9ce19b4b 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -71,6 +71,7 @@ struct arch_domain
>
>      struct {
>          uint64_t offset;
> +        s_time_t nanoseconds;
>      } virt_timer_base;
>
>      struct vgic_dist vgic;
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 6b78fea77d..aeaea78e4c 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -63,7 +63,9 @@ static void virt_timer_expired(void *data)
>  int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
>  {
>      d->arch.virt_timer_base.offset = get_cycles();
> -    d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
> +    d->arch.virt_timer_base.nanoseconds =
> +        ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
> +    d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds;
>      do_div(d->time_offset.seconds, 1000000000);
>
>      config->clock_frequency = timer_dt_clock_frequency;
> @@ -144,8 +146,9 @@ void virt_timer_save(struct vcpu *v)
>      if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
>           !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
>      {
> -        set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
> -                  v->domain->arch.virt_timer_base.offset - boot_count));
> +        set_timer(&v->arch.virt_timer.timer,
> +                  v->domain->arch.virt_timer_base.nanoseconds +
> +                  ticks_to_ns(v->arch.virt_timer.cval));
>      }
>  }
>
> --
> 2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 05:24:09 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171411-mainreport@xen.org>
Subject: [linux-linus test] 171411: regressions - FAIL
X-Osstest-Versions-This:
    linux=d9b2ba67917c18822c6a09af41c32fa161f1606b
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 05:24:07 +0000

flight 171411 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171411/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d9b2ba67917c18822c6a09af41c32fa161f1606b
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z   11 days
Failing since        171280  2022-06-19 15:12:25 Z   10 days   30 attempts
Testing same since   171411  2022-06-29 21:11:38 Z    0 days    1 attempts

------------------------------------------------------------
375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13469 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 05:31:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 05:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358207.587275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6mmE-0007H9-Rz; Thu, 30 Jun 2022 05:31:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358207.587275; Thu, 30 Jun 2022 05:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6mmE-0007H2-OB; Thu, 30 Jun 2022 05:31:42 +0000
Received: by outflank-mailman (input) for mailman id 358207;
 Thu, 30 Jun 2022 05:31:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6mmD-0007Gw-Hm
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 05:31:41 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eaedd5f7-f835-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 07:31:40 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A91F821BE5;
 Thu, 30 Jun 2022 05:31:39 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5BDBF13AB2;
 Thu, 30 Jun 2022 05:31:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id zqzdFDs1vWKECgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 05:31:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaedd5f7-f835-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656567099; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ix2+59s8gFJZTK0LG6X0PrJ7Mwqo3WfIkEHQFE0T3MQ=;
	b=m0NNiCeHw9tKwUYI676C0aCpAeCw3JzW1mnFhjsVIDAndK8qd+ZfxXoc0DaJZRsm7u4Oma
	/jDD1ZgybZn1B9fEpTRV+yKCewzpZU6dFTQEhMKkYhkEmOWdaQp2obHiuxYQyfnYfT92dg
	Q+yaDZrkqORIY5gQMcZQKw7A3u1ceHM=
Message-ID: <60bf0e8a-1b58-4df4-fdcf-bcfeedd64e77@suse.com>
Date: Thu, 30 Jun 2022 07:31:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: Reg. Tee init fail...
Content-Language: en-US
To: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 "Pandeshwara krishna, Mythri" <Mythri.Pandeshwarakrishna@amd.com>,
 "Rangasamy, Devaraj" <Devaraj.Rangasamy@amd.com>,
 "Thomas, Rijo-john" <Rijo-john.Thomas@amd.com>
References: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
 <7689497b-1977-b30a-5835-587fa266c721@xen.org>
 <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
 <DM4PR12MB520060A696B62EFBA3E5E96680BA9@DM4PR12MB5200.namprd12.prod.outlook.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <DM4PR12MB520060A696B62EFBA3E5E96680BA9@DM4PR12MB5200.namprd12.prod.outlook.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------zabseNiaGNi0VHpSQNVLpwLK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------zabseNiaGNi0VHpSQNVLpwLK
Content-Type: multipart/mixed; boundary="------------7tnZdij7de3Bg8TOPOP4rBto";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "SK, SivaSangeetha (Siva Sangeetha)" <SivaSangeetha.SK@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 "Pandeshwara krishna, Mythri" <Mythri.Pandeshwarakrishna@amd.com>,
 "Rangasamy, Devaraj" <Devaraj.Rangasamy@amd.com>,
 "Thomas, Rijo-john" <Rijo-john.Thomas@amd.com>
Message-ID: <60bf0e8a-1b58-4df4-fdcf-bcfeedd64e77@suse.com>
Subject: Re: Reg. Tee init fail...
References: <DM4PR12MB5200C7C38770E07B5946424A80B49@DM4PR12MB5200.namprd12.prod.outlook.com>
 <7689497b-1977-b30a-5835-587fa266c721@xen.org>
 <alpine.DEB.2.22.394.2206291251240.4389@ubuntu-linux-20-04-desktop>
 <DM4PR12MB520060A696B62EFBA3E5E96680BA9@DM4PR12MB5200.namprd12.prod.outlook.com>
In-Reply-To: <DM4PR12MB520060A696B62EFBA3E5E96680BA9@DM4PR12MB5200.namprd12.prod.outlook.com>

--------------7tnZdij7de3Bg8TOPOP4rBto
Content-Type: multipart/mixed; boundary="------------ktqdLnGgMOFD2Tl6i6okk0GL"

--------------ktqdLnGgMOFD2Tl6i6okk0GL
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.06.22 05:32, SK, SivaSangeetha (Siva Sangeetha) wrote:
> [AMD Official Use Only - General]
> 
> +team
> 
> -----Original Message-----
> From: Stefano Stabellini <sstabellini@kernel.org>
> Sent: Thursday, June 30, 2022 1:34 AM
> To: Julien Grall <julien@xen.org>
> Cc: SK, SivaSangeetha (Siva Sangeetha) <SivaSangeetha.SK@amd.com>; xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis <bertrand.marquis@arm.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; jgross@suse.com; boris.ostrovsky@oracle.com
> Subject: Re: Reg. Tee init fail...
> 
> Adding Juergen and Boris because this is a Linux/x86 issue.
> 
> 
> As you can see from this Linux driver:
> https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/tee-dev.c#L132
> 
> Linux as dom0 on x86 is trying to communicate with firmware (TEE). Linux is calling __pa to pass a physical address to firmware. However, __pa returns a "fake" address not an mfn. I imagine that a quick workaround would be to call "virt_to_machine" instead of "__pa" in tee-dev.c.
> 
> Normally, if this was a device, the "right fix" would be to use swiotlb-xen:xen_swiotlb_map_page to get back a real physical address.
> 
> However, xen_swiotlb_map_page is meant to be used as part of the dma_ops API and takes a struct device *dev as input parameter. Maybe xen_swiotlb_map_page can be used for tee-dev as well?
> 
> 
> Basically tee-dev would need to call dma_map_page before passing addresses to firmware, and dma_unmap_page when it is done. E.g.:
> 
> 
>    cmd_buffer = dma_map_page(dev, virt_to_page(cmd),
>                              cmd & ~PAGE_MASK,
>                              ring_size,
>                              DMA_TO_DEVICE);
> 
> 
> Juergen, Boris,
> what do you think?

Yes, I think using the DMA interface is the correct way to handle that.

BTW, I did a similar fix for the dcdbas driver recently:

https://lore.kernel.org/r/20220318150950.16843-1-jgross@suse.com


Juergen
--------------ktqdLnGgMOFD2Tl6i6okk0GL--

--------------7tnZdij7de3Bg8TOPOP4rBto--

--------------zabseNiaGNi0VHpSQNVLpwLK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK9NToFAwAAAAAACgkQsN6d1ii/Ey+l
7wgAmfZT7hc0WOdtSL7xUjwztPYLhFS9uBUK6ZA0sXbxbobLw4L5kkrro1XLulyAFGGjWsERLxoU
ugUaBn4GJWpNZo0RRV5sjVwbIdFNtuboe0j6TKYP7aOSmyxJgeTF1958rusl36bdX39/VLNoOzY4
V90ga/VJKqcHuaoOL8xoaQ7nBZ7Wt2O/7M/mEs3LcG9yEAVIeQVcWEqoLTf36N+QPSYAxvvrHSax
zYROokIDvftzXJHJqq7s22nG+WZtmFIPs0bZPBMNhNB+4kxDh+kUQFeFD251pbO5lgthfUd6NK6d
5tU2I0yRmjWsNbircU5m0jeKLKHPzJtm/ef6LJO7CQ==
=38U3
-----END PGP SIGNATURE-----

--------------zabseNiaGNi0VHpSQNVLpwLK--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 06:03:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 06:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358215.587289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6nGo-0002gz-7h; Thu, 30 Jun 2022 06:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358215.587289; Thu, 30 Jun 2022 06:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6nGo-0002gs-4b; Thu, 30 Jun 2022 06:03:18 +0000
Received: by outflank-mailman (input) for mailman id 358215;
 Thu, 30 Jun 2022 06:03:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bdU=XF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6nGm-0002gm-IT
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 06:03:16 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 (mail-eopbgr50069.outbound.protection.outlook.com [40.107.5.69])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 541b1d78-f83a-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 08:03:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8970.eurprd04.prod.outlook.com (2603:10a6:20b:409::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15; Thu, 30 Jun
 2022 06:03:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 06:03:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 541b1d78-f83a-11ec-bdce-3d151da133c5
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NQ+ZQsaKUCnyWJitt8fBJYWrciTqli5VYE8zrk8JLJlJuCvBHr9nYi2ELUHmroy0MBBqxDsOOsWJF82Hazs5akVAQBQhy1nO1SW1SiXGgfNEEm0aziNM0pjHSXIYDrh16XICdLhPwHsAKKvuK7V3jlFLicZEabupCkkElazE2namfHeFjAJmCUZf7Xapc/GH+ewR+68yFRbRHJ3c9vgHbOWCwTF5SypqrwHrhebGCEIemvITFHjvFmXs7G9EJw0+2PNxJ7dEjdBvP/utX+J0RR+2z5J/aNmWJP38Iv89We1No+v+v0fIAuZcDO3sQMk7SIZclii3cJ1PN0Qn9u4oKQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QJrvZkswdQNrWXwXbUANst9wYDwN/4fVAzNYJsY/MDY=;
 b=ipQHAQgeQdMGCloLr4xKG0jGbocAFR3CMufObnK9QandUpjOGzunk0BV4Kze+1GgmVpW8D2/DI6g/0o6q1VKAEViFd/P+tFdEju4X4iF1Y7XAtiK0jKIcODiccWx0rh4rekvQ4+iu0BRStLUIiAlWK1JwyuleNDQJhqsqT7s1Ikh/UyxpSRYpw24LIsWGKoNXVTBpB3H8a6tffFJT7DcwzUusR7iFy9tP9ZtXkP0g7ArRs3wxTOJ+1rFmNwieLSL8QqhHdAy7FeOfDaAaWM5JTrslnTsas9KVCnpr5eYbcvbwjoR9it8ga0XoTNm1kk+Q5H6inz5b6672oRTOw9hmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QJrvZkswdQNrWXwXbUANst9wYDwN/4fVAzNYJsY/MDY=;
 b=E5/yUWvLowa+FoRXz92Exmemahb45RJq1t8llIlBA2703c5p7VrlusBnnVd8cfqk3XvErnrUCiryiBjJTxyKYhft63RNRwdugwRtnQLvfoRJG1Xpn/rC++tbstGqfMauBMTgA8oLJOZzAQBxvAE7XfGvYgSXmga+RtmCjAOvvLF1qRDHGsDWfa/6qxIcPT3MzmZ5SkEnMC1qcW5D4AEn+zZ2z7f+Sz4Pdbe9DrRpbNeqarP6/nlk7/7w5mGQ966/DObMi2s1L1C75rX2pxWXAjsSmo3Cp+zNH3blmS4j+k6AycszS0U5amiQXRa25c+QHgTvblv8/imXmsrcAh82UQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a878aadf-5bc9-184b-d3f8-4e43ffc64262@suse.com>
Date: Thu, 30 Jun 2022 08:03:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH RESEND v10 2/2] x86/xen: Allow per-domain usage of
 hardware virtualized APIC
Content-Language: en-US
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
 <rosbrookn@gmail.com>, "Gross, Jurgen" <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, "Cooper, Andrew" <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Pau_Monn=c3=a9=2c_Roger?= <roger.pau@citrix.com>,
 "Nakajima, Jun" <jun.nakajima@intel.com>,
 Jane Malalane <jane.malalane@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20220629135534.19923-1-jane.malalane@citrix.com>
 <20220629135534.19923-3-jane.malalane@citrix.com>
 <BN9PR11MB5276AC94021EA92C539D5F078CBA9@BN9PR11MB5276.namprd11.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BN9PR11MB5276AC94021EA92C539D5F078CBA9@BN9PR11MB5276.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dbf2cafc-814d-4283-498d-08da5a5e3706
X-MS-TrafficTypeDiagnostic: AM9PR04MB8970:EE_
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OGZJT3Z4V0xwZEFPUDZEdnRnd2VZWDd1RWp5YmxITnpWSDFOVWdWc1d6eHJJ?=
 =?utf-8?B?YTI0ZS9QQldndmlEV3ZrKzh4dDlQb3JFUzUwMnFmd0tuVzczaXFYUGRhbjJp?=
 =?utf-8?B?RWZkbFgyVHR4M2lndVM4VGZJZ1JSQmFEemtMR1cxb3BVZ0I2ekRNMEhaY1pq?=
 =?utf-8?B?Q0cvTDVSK0lnaGJOZ0E1bjVrNEN0aVpRMG9tck9DQkd4RnEvajRNR1VESkQr?=
 =?utf-8?B?cytDNHJnWlZ4SmpaNjZNVHMyYWZESDNNRm55ODBKT1dhcmEzVFJaalBvVGR1?=
 =?utf-8?B?VVlxSnRHWmJYeVJGTElsQ3dPUlRrZlAwcTNjeEJSbnZiUnRkUHE0VGprWWV4?=
 =?utf-8?B?SjZIZHpJYk9jTmF5Rm02MlZVK1lzb1kzNkw1VkNHeklPL2pTQ1YrM096QzV2?=
 =?utf-8?B?aHczdDJvZGxCMlcvSUFNdGJzMVhhVTFkK1FIU1ArUmRiMGpRdUEvYzlReGZr?=
 =?utf-8?B?UW9QcEtEMlVVUWQrS2VLOTdOdjJ6ZnZRWXpnMWp4aTBuaUdUcUdvUGlwVzFX?=
 =?utf-8?B?ZE9HWndwMS9BazlCaHB0QXV6S3l0QjVMRERmYUxkSXk0dDV0SGNMRlBnRHZ5?=
 =?utf-8?B?UnZjVmNrZkFBTHdwZ09idTVPdXpDUm81Y2pQNUs0L1pYMHdsbTI0ZkZJbTMv?=
 =?utf-8?B?NnJzQW5BeU5xZ0dIRU1mbWtTRFRwMElJVDV6ZU12d1Zha0tEVWR2UlNrWU44?=
 =?utf-8?B?enArenFCQS8zWW00YjBJRmltWTVtMTVKSXZwbjRpMDJKdDEyQWR1ZW55ZXJ2?=
 =?utf-8?B?S2poa1ZLUlM2eGx5VXZKbUVaWWtGOTJXa1ZXc09ZREVnc3JZU0NQUjYxYURa?=
 =?utf-8?B?VnVRM2c4cDE3YXFhdG13Y2JldHg5UDdHdXNDQWZldXQvM0pzWkdoWHZVUlVJ?=
 =?utf-8?B?YjFZZVdmL2VKRFZSdEpSR2d2am84STQwam13UU9WQUQ5SzBVOTNUTXhpYXdp?=
 =?utf-8?B?MUphS2lTbFN2cWZuU1Vwd0M3TnNYY3FRS0d3N0NkZnVwSkFlcy9ZanhqRTNa?=
 =?utf-8?B?YVhIdSt0VXZCRDNKSkdHdUFVUlZNait5em5vRVA3RWk1eS9rYlNob3p0RkE1?=
 =?utf-8?B?a1VTODBFSDhrWlc1TC9TOWpjZGZjTW5sSDE3MkNVNFRGREh5TFNpTVNwYVhv?=
 =?utf-8?B?NE1reGgwWWdNeHdJWFNUZ3o0c3kyUE84SXJnWURUTHNRV3RxNnJ2emVjdnZM?=
 =?utf-8?B?QmJFZjV3Zmx1REtVWU5tSmw0TS9aZmZmUFBxUlhoTHB0V0VsYU9oc3FFVWda?=
 =?utf-8?B?aldjVHpsTUtkZFV4M3NoRFBmcGJDR2pPMkppTjlhZkZyd1dRd1ZiZ0hxalZG?=
 =?utf-8?B?QiswQ2VteGJLWWVOWWF1V21obkc0NGVVL3JHQmhnaEJ1ODYyeGVzWmZISVJx?=
 =?utf-8?B?KzBlNDZ5dDAzZE41ZVgxSE4zVVg2SXJRdGlEMWtaMnBSVW5ERnFrOVcyaDJ2?=
 =?utf-8?B?YnJxS0U2UXVPTS9xbUxnNFJ5Z2VMOW45bEhpenBWZjZ1T2RxdWR1RVZLdlN0?=
 =?utf-8?B?UVBPRmFDRUFQS3RUUHI3VXROaWpwNm1XVmRPQUtFYnQ0RVJYKzJuejZ0M295?=
 =?utf-8?B?VXFyanErWDBvWi9pMXdlbHp0Y2ZqNy9SNVhXVHNzUmdnT0RVNTdET2F6M21L?=
 =?utf-8?B?cHA2VVVXRFpTbzIxWlI5SkQ4TDJTMWs1WDBGSTU1dFVkem9odzdDdTc5a0dj?=
 =?utf-8?B?OUMxWEd4MVc3bmtMcGZya1pzMzM0NGRFeFVxV0EwdnZGWitPNDRaUHB1eVJD?=
 =?utf-8?B?aG9qL2U0RUt6T1BuVmd4U3MydGE3eUpPVzBOTFFid2x3ekFSOHluQlVINXZj?=
 =?utf-8?B?TUt1WjdNbHZ6TEQwbE9LVmNabUR6SFBsNGRmanJqQm1BcFozY1lFdyt1MHVN?=
 =?utf-8?B?TlgyRmh0Z21xZ2t2NmlaMFRRWlhUcXQrbjBqMlRpeHlxQVNtaDdrbkNlVFRX?=
 =?utf-8?B?VFZpYU1JSWpGY3YvMERxWUhYeEtLc1Z2S203Z0tXRXpLQWhqQTEvNW9xZmls?=
 =?utf-8?B?VFM5U0ZMQ1FWQUU0QkUyS1RlSnA2ZG5XSEkzR1BtNzBPb05GdGFwRzRxenBu?=
 =?utf-8?B?SXd1YWlhM2NBOTZpWGltVkVzMDAxYVpnOEc5a0tNeHM0a3IvWWlZaEVPaGJl?=
 =?utf-8?Q?x6mYwv/gCLMM0Kt5pB0y3zSa9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dbf2cafc-814d-4283-498d-08da5a5e3706
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 06:03:12.5899
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WQU50DxIWAADHDbu1OJjOHJ2HZkN+gFyqy4wsWXLz5ay+QvGfl24V06qBy3jCybfdOCWKku1aMBDgH3CVSO33A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8970

On 30.06.2022 05:25, Tian, Kevin wrote:
>> From: Jane Malalane <jane.malalane@citrix.com>
>> Sent: Wednesday, June 29, 2022 9:56 PM
>>
>> Introduce a new per-domain creation x86 specific flag to
>> select whether hardware assisted virtualization should be used for
>> x{2}APIC.
>>
>> A per-domain option is added to xl in order to select the usage of
>> x{2}APIC hardware assisted virtualization, as well as a global
>> configuration option.
>>
>> Having all APIC interaction exit to Xen for emulation is slow and can
>> induce much overhead. Hardware can speed up x{2}APIC by decoding the
>> APIC access and providing a VM exit with a more specific exit reason
>> than a regular EPT fault or by altogether avoiding a VM exit.
> 
> Above is obvious and could be removed. 
> 
> I think the key is just the next paragraph for why we
> want this per-domain control.

Indeed, but the paragraph above sets the context. It might be possible
to shorten it, but ...

> Apart from that:
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 
>>
>> On the other hand, being able to disable x{2}APIC hardware assisted
>> virtualization can be useful for testing and debugging purposes.

... I think it is desirable for this sentence to start with "Otoh" or
the like.

Jan
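[For context, the per-domain control discussed above is set in the xl domain configuration. The option names below follow the patch series under review; treat them as illustrative of the interface rather than authoritative — consult xl.cfg(5) of the merged series for the exact spelling.]

```
# Per-domain toggles for hardware-assisted x{2}APIC virtualization
# (illustrative option names; a matching global default exists in xl.conf)
assisted_xapic = 1
assisted_x2apic = 1
```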


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 06:14:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 06:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358222.587303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6nRM-0004Kw-8l; Thu, 30 Jun 2022 06:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358222.587303; Thu, 30 Jun 2022 06:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6nRM-0004Kp-5b; Thu, 30 Jun 2022 06:14:12 +0000
Received: by outflank-mailman (input) for mailman id 358222;
 Thu, 30 Jun 2022 06:14:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bdU=XF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6nRL-0004Kj-2N
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 06:14:11 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-eopbgr60083.outbound.protection.outlook.com [40.107.6.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da4da19c-f83b-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 08:14:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9240.eurprd04.prod.outlook.com (2603:10a6:20b:4c4::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15; Thu, 30 Jun
 2022 06:14:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 06:14:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da4da19c-f83b-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cWCcEnPR1WDh5J8401R6ouTVmKN6p5RL2wISrZ5VgFc0aFopT3EixClNHAV4V9n78MONB8Vtlwk3l3pEfqWqSCDF/XEhc9vrggW9OxqDtM2NJa5jouIuJNfYLJyHY6TYCWqwiMh083+jMGSvYnPRYdVRWb0ZkLhbfMMH1xxZck+XOnGsSSfKrEzdeVEbnxMlByhkh4Yo5Cbj89UUJW6DlKybdCl6nPUk4ybYL1M4L/KG5kwrq40tSJbFdECzncorpWVHG0N8kSIP3ssF3I3GP3jFjmoGTuJumpuBI6jeFvLaSmnQgMRUEH9Va783CpcLlrW2lbMUIu4TbQEVyw8ciw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DOsIoU12XOqWwOCjoOU3p78EJ96++2DYfJgKQaIIg3I=;
 b=m6Qf+MaCeFaSWcNOcViUCj7JyAzrvWUF1Z9M7vLC/GeLuvKHJgO82XcjEU6vqP3XGFxJJNorUWDndpV/WHt0JxqwsOaZFGK3k/ZDQD+ai/4jEnj68KFDyBR0Kor2CeTY5246ii24RxcOkEa56STde4Ws1Cmox1UOHF20YtvWBNUQBRP+CrdHNrlOV3dZRZSsIqW8LMTf+JbKf4TZVluDftuH5fkSeq7Io/AfQV9tmOlv+K4dxb7QmOIFDNSJom44WlbVu5iMMG0iA+1//TXtiA+CJgFr7Z8S0v0z1BFyD8IaRJAZEjjhGr5yal68AOUxUvbOzCuwB5w/YXsjDdtHDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DOsIoU12XOqWwOCjoOU3p78EJ96++2DYfJgKQaIIg3I=;
 b=PCmxKSTGBQaIlh6zBDut0NWpyphuXkFHlzq1Rp1oqWGYSEueApoBqeDHtoamzSm6YaHoPd4rsfk81PXpx0ZhI3Vs02ZQ+1Psh638LKrRaAISjIeNQ0vBGqRztRfS4suQMFpfTjG27G0li3XDWnI9UvoxtraTaVT0t4FKaswJZm5Z840aTBHGWeHfKrATII8rl/APBWu7HR6ko6zGvFdZyAnxUS0Q0VtitncMQyBCV1xgj4pWZQDEfUwYTXCaft7hjWAWQUtpkt/qLoio0OFCqxay9P7KbYhATL4Z+yXKhCx+Vq0mgMvAJ2ND+iokf11V9Zr4Fte0cerft5hWlt2E1w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9833c7cb-d71d-10a3-f74f-3caf46db3cb4@suse.com>
Date: Thu, 30 Jun 2022 08:14:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: scott.davis@starlab.io, jandryuk@gmail.com, christopher.clark@starlab.io,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-4-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220630022110.31555-4-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR06CA0382.eurprd06.prod.outlook.com
 (2603:10a6:20b:460::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f41f92d-2f27-4183-6f6e-08da5a5fbd4c
X-MS-TrafficTypeDiagnostic: AS1PR04MB9240:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f41f92d-2f27-4183-6f6e-08da5a5fbd4c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 06:14:07.2361
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JuL0mfLY161sG51ibLkQ5vonkdM/3wWYg4IIldLLAfdf51sPjTqstLKI0rlHFTObOvsGAiUXMp61s+Oo+IdJSg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9240

Just two nits - while the change looks plausible, I'm afraid I'm
not qualified to properly review it.

On 30.06.2022 04:21, Daniel P. Smith wrote:
> The function flask_domain_alloc_security() is where a default sid should be
> assigned to a domain under construction. For reasons unknown, the initial
> domain would be assigned unlabeled_t and then fixed up under
> flask_domain_create().  With the introduction of xenboot_t it is now possible
> to distinguish when the hypervisor is in the boot state.
> 
> This commit looks to correct this by using a check to see if the hypervisor is
> under the xenboot_t context in flask_domain_alloc_security(). If it is, then it

While (or maybe because) I'm not a native speaker, the use of "looks"
reads ambiguous to me. I think you mean it in the sense of e.g. "aims",
but at first I read it in the sense of "seems", which made me think
you're not certain whether it actually does.

> will inspect the domain's is_privileged field, and select the appropriate
> default label, dom0_t or domU_t, for the domain. The logic for
> flask_domain_create() was changed to allow the incoming sid to override the
> default label.
> 
> The base policy was adjusted to allow the idle domain under the xenboot_t
> context to be able to construct domains of both types, dom0 and domU.
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> ---
>  tools/flask/policy/modules/dom0.te |  3 +++
>  tools/flask/policy/modules/domU.te |  3 +++
>  xen/xsm/flask/hooks.c              | 34 ++++++++++++++++++------------
>  3 files changed, 26 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
> index 0a63ce15b6..2022bb9636 100644
> --- a/tools/flask/policy/modules/dom0.te
> +++ b/tools/flask/policy/modules/dom0.te
> @@ -75,3 +75,6 @@ admin_device(dom0_t, ioport_t)
>  admin_device(dom0_t, iomem_t)
>  
>  domain_comms(dom0_t, dom0_t)
> +
> +# Allow they hypervisor to build domains of type dom0_t

Since it repeats ...

> +xen_build_domain(dom0_t)
> diff --git a/tools/flask/policy/modules/domU.te b/tools/flask/policy/modules/domU.te
> index b77df29d56..73fc90c3c6 100644
> --- a/tools/flask/policy/modules/domU.te
> +++ b/tools/flask/policy/modules/domU.te
> @@ -13,6 +13,9 @@ domain_comms(domU_t, domU_t)
>  migrate_domain_out(dom0_t, domU_t)
>  domain_self_comms(domU_t)
>  
> +# Allow they hypervisor to build domains of type domU_t
> +xen_build_domain(domU_t)

... here - s/they/the/ in both places?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 06:55:48 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 06:55:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358229.587314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6o5E-0000n3-Hv; Thu, 30 Jun 2022 06:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358229.587314; Thu, 30 Jun 2022 06:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6o5E-0000mw-EW; Thu, 30 Jun 2022 06:55:24 +0000
Received: by outflank-mailman (input) for mailman id 358229;
 Thu, 30 Jun 2022 06:55:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6o5C-0000mq-QH
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 06:55:22 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ba8dd14-f841-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 08:55:21 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 92BEF21BDB;
 Thu, 30 Jun 2022 06:55:20 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4EE7E139E9;
 Thu, 30 Jun 2022 06:55:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Qm21EdhIvWLOIwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 06:55:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ba8dd14-f841-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656572120; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=RF/ULgtrScEhpqsg3bxhvLTrzwOjtLWFPehDqVa0fCo=;
	b=qq83zVvbHqhf+2CanqC0MRm5gJI/WePJIe12XbvBwcuw9wk1WwaPqLeD/tA3pEa6gusFtr
	gj43vYHOqjqvp0vGKLMxs+mn0w839YteDNMsZl18jnaA8klwoI4yL0+uukZHil887VODhg
	6AMiCA9pJBpBnSAZgG2sS8cRNXqpK84=
Message-ID: <1df188eb-4a31-75a3-146b-b930002c1e68@suse.com>
Date: Thu, 30 Jun 2022 08:55:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-3-jgross@suse.com>
 <20220629171457.amdsrgaxady55hds@treble>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 2/3] x86: fix setup of brk area
In-Reply-To: <20220629171457.amdsrgaxady55hds@treble>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------As0kU50ZH0ob1d2LlJUV8Ybw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------As0kU50ZH0ob1d2LlJUV8Ybw
Content-Type: multipart/mixed; boundary="------------PAESI0u0tYlmqcip1P1ThpgM";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <1df188eb-4a31-75a3-146b-b930002c1e68@suse.com>
Subject: Re: [PATCH v2 2/3] x86: fix setup of brk area
References: <20220623094608.7294-1-jgross@suse.com>
 <20220623094608.7294-3-jgross@suse.com>
 <20220629171457.amdsrgaxady55hds@treble>
In-Reply-To: <20220629171457.amdsrgaxady55hds@treble>

--------------PAESI0u0tYlmqcip1P1ThpgM
Content-Type: multipart/mixed; boundary="------------2e5h808bXWIRiOw5ZHQ6qSLB"

--------------2e5h808bXWIRiOw5ZHQ6qSLB
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 29.06.22 19:14, Josh Poimboeuf wrote:
> Hi Juergen,
> 
> It helps to actually Cc the person who broke it ;-)
> 
> On Thu, Jun 23, 2022 at 11:46:07AM +0200, Juergen Gross wrote:
>> Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
>> put the brk area into the .bss..brk section (placed directly behind
>> .bss),
> 
> Hm? It didn't actually do that.
> 
> For individual translation units, it did rename the section from
> ".brk_reservation" to ".bss..brk".  But then during linking it's still
> placed in .brk in vmlinux, just like before.

Sorry, I misread the patch commit message and was fooled by the fact that
bisection clearly pointed at this patch to have introduced the problem.

I only discovered later that the main issue was the added "NOLOAD"
attribute.

>> causing it not to be cleared initially. As the brk area is used
>> to allocate early page tables, these might contain garbage in not
>> explicitly written entries.
>>
>> This is especially a problem for Xen PV guests, as the hypervisor will
>> validate page tables (check for writable page tables and hypervisor
>> private bits) before accepting them to be used. There have been reports
>> of early crashes of PV guests due to illegal page table contents.
>>
>> Fix that by letting clear_bss() clear the brk area, too.
> 
> While it does make sense to clear the brk area, I don't understand how
> my patch broke this.  How was it getting cleared before?

It seemed to have worked by chance. The Xen hypervisor is clearing all
alloc-only sections when loading a kernel (this will "fix" the dom0
case reliably together with patch 3 of this series).

Grub might do the clearing, too (for the PV domU case), but I haven't
verified that by code inspection.

I'll drop the "Fixes:" tag.


Juergen
--------------2e5h808bXWIRiOw5ZHQ6qSLB--

--------------PAESI0u0tYlmqcip1P1ThpgM--

--------------As0kU50ZH0ob1d2LlJUV8Ybw
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK9SNcFAwAAAAAACgkQsN6d1ii/Ey+G
UQf+JnCTpsTiEn78EDDRe0nYXqWPAXGLTBCbA3hsMUyKSO/TpzMqrUXJzxVqxbNK0IMOfinQftuu
ybZFa2awaHzu9Gk3fBKvRt1rNGvTXd5XAgmgnOIVDhw1jxsHYcarXsN7cuIMhnrIqUAABArcdZ5K
fciHlP1RtnWwudWzye/kaU6JhFFU/geizDiIF5Owlzg28cpQX9blC/e4kl9bZUVoEgEjCa7Tfsey
FuCqz7MqAcnU1Rg2l7oqsKm/eAGeQ0RMGLuitFLyUerM9PRXy9KB7CqmIsA4Vq0xVmW+zduRqLj/
IRKJ1o96rAEIauwQyFV/aDKPIw51lLsGG4fdIKLVUA==
=uW1p
-----END PGP SIGNATURE-----

--------------As0kU50ZH0ob1d2LlJUV8Ybw--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:14:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358235.587325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oNz-0003Yw-6M; Thu, 30 Jun 2022 07:14:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358235.587325; Thu, 30 Jun 2022 07:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oNz-0003Yp-3E; Thu, 30 Jun 2022 07:14:47 +0000
Received: by outflank-mailman (input) for mailman id 358235;
 Thu, 30 Jun 2022 07:14:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6oNx-0003Yc-Pt
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:14:45 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 512bc2b1-f844-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 09:14:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3549C1F910;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id CC6F913A5C;
 Thu, 30 Jun 2022 07:14:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id C4qFMGNNvWKvKgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 07:14:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 512bc2b1-f844-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656573284; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=w8JobtSvpzNoUpwJIImS2sf/s9H6qdugYtbCVXRzcT0=;
	b=JFb6EH83ripqdvoD961Q5CiVgMh8sDno1KZ5b6pIW5+KAGWd8xoxWWwBT1Of6BlatnmRl4
	vBiV4qmXv1K8NCPocoxFmfwWwn7DfVydWr2H4jUJLimfFQt4AVBxKY1VELUuG0TTjmjfD0
	oTfF+bbPZeE2ZmvbSfI4g1OX4OExjtY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: jpoimboe@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [PATCH v3 0/3] x86: fix brk area initialization
Date: Thu, 30 Jun 2022 09:14:38 +0200
Message-Id: <20220630071441.28576-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The brk area needs to be zeroed initially, just like the .bss section.
At the same time its memory should be covered by the ELF program
headers, so the kernel loader knows to reserve it.

Juergen Gross (3):
  x86/xen: use clear_bss() for Xen PV guests
  x86: clear .brk area at early boot
  x86: fix .brk attribute in linker script

 arch/x86/include/asm/setup.h  |  3 +++
 arch/x86/kernel/head64.c      |  4 +++-
 arch/x86/kernel/vmlinux.lds.S |  2 +-
 arch/x86/xen/enlighten_pv.c   |  8 ++++++--
 arch/x86/xen/xen-head.S       | 10 +---------
 5 files changed, 14 insertions(+), 13 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:14:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358237.587347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oO0-00043j-Lm; Thu, 30 Jun 2022 07:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358237.587347; Thu, 30 Jun 2022 07:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oO0-00043c-HJ; Thu, 30 Jun 2022 07:14:48 +0000
Received: by outflank-mailman (input) for mailman id 358237;
 Thu, 30 Jun 2022 07:14:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6oNz-0003Yo-Ct
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:14:47 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5176ab13-f844-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 09:14:46 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E19551F92C;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9846913A5C;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iOHWI2RNvWKvKgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 07:14:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5176ab13-f844-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656573284; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VcbWEwweM6uAYrYgVJNLQYhtOesCwLF7IPn5AMOMTe4=;
	b=poOAGXG5HMP5JL8mmIzmAsRosA8/pUtqIF93iUlTdXo4zyLPaI9yGkWJpDn5zOobZq5yir
	pZhLC+TwmME1b+6nuKw+Oi8n+V80EsJhCCqPzYP1W3WpbjEbVlIYIITcJ3/u8M4o0oM0VJ
	BN1I9NQpuGSXsF+OWK/QvRFvPhtR/hM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: jpoimboe@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 2/3] x86: clear .brk area at early boot
Date: Thu, 30 Jun 2022 09:14:40 +0200
Message-Id: <20220630071441.28576-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220630071441.28576-1-jgross@suse.com>
References: <20220630071441.28576-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The .brk section has the same properties as .bss: it is an alloc-only
section and should be cleared before being used.

Not doing so is especially a problem for Xen PV guests, as the
hypervisor will validate page tables (checking for writable page table
pages and hypervisor-private bits) before accepting them for use.

Make sure .brk is initially zero by letting clear_bss() clear the brk
area, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/head64.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index e7e233209a8c..6a3cfaf6b72a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -430,6 +430,8 @@ void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
+	memset(__brk_base, 0,
+	       (unsigned long) __brk_limit - (unsigned long) __brk_base);
 }
 
 static unsigned long get_cmd_line_ptr(void)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:14:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358236.587332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oNz-0003cR-HK; Thu, 30 Jun 2022 07:14:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358236.587332; Thu, 30 Jun 2022 07:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oNz-0003br-AJ; Thu, 30 Jun 2022 07:14:47 +0000
Received: by outflank-mailman (input) for mailman id 358236;
 Thu, 30 Jun 2022 07:14:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6oNy-0003Yc-IU
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:14:46 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51504260-f844-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 09:14:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8F8B821C17;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3CAD313A5C;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id sFx8DWRNvWKvKgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 07:14:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51504260-f844-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656573284; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=38TnYLyTGlkUm6ORupS6PivJsuoTeOfCzm7BgT7zdQQ=;
	b=fyg68Z2J2j/iqSbIIg7jvar8VY2mi3IdtR6AgHni//Ufu8+ErfvlFGZo2Eq/0tnuD+zeyT
	FqU6rt5f7JUdr6NOd6EOq6RqPNAQu1WUQqEZVZQSbx0adbJEQmuk1UNvLZ7DWUe715qXoj
	7dvpO8HOTIXQHHGaIWs3x963H4sf+f8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: jpoimboe@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 1/3] x86/xen: use clear_bss() for Xen PV guests
Date: Thu, 30 Jun 2022 09:14:39 +0200
Message-Id: <20220630071441.28576-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220630071441.28576-1-jgross@suse.com>
References: <20220630071441.28576-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of clearing the bss area in assembly code, use the clear_bss()
function.

This requires passing the start_info address as a parameter to
xen_start_kernel(), in order to avoid the already-set xen_start_info
being zeroed again.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 arch/x86/include/asm/setup.h |  3 +++
 arch/x86/kernel/head64.c     |  2 +-
 arch/x86/xen/enlighten_pv.c  |  8 ++++++--
 arch/x86/xen/xen-head.S      | 10 +---------
 4 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f8b9ee97a891..f37cbff7354c 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -120,6 +120,9 @@ void *extend_brk(size_t size, size_t align);
 	static char __brk_##name[size]
 
 extern void probe_roms(void);
+
+void clear_bss(void);
+
 #ifdef __i386__
 
 asmlinkage void __init i386_start_kernel(void);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index bd4a34100ed0..e7e233209a8c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -426,7 +426,7 @@ void __init do_early_exception(struct pt_regs *regs, int trapnr)
 
 /* Don't add a printk in there. printk relies on the PDA which is not initialized 
    yet. */
-static void __init clear_bss(void)
+void __init clear_bss(void)
 {
 	memset(__bss_start, 0,
 	       (unsigned long) __bss_stop - (unsigned long) __bss_start);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index e3297b15701c..70fb2ea85e90 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
 extern void early_xen_iret_patch(void);
 
 /* First C function to be called on Xen boot */
-asmlinkage __visible void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 {
 	struct physdev_set_iopl set_iopl;
 	unsigned long initrd_start = 0;
 	int rc;
 
-	if (!xen_start_info)
+	if (!si)
 		return;
 
+	clear_bss();
+
+	xen_start_info = si;
+
 	__text_gen_insn(&early_xen_iret_patch,
 			JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
 			JMP32_INSN_SIZE);
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 3a2cd93bf059..13af6fe453e3 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -48,15 +48,6 @@ SYM_CODE_START(startup_xen)
 	ANNOTATE_NOENDBR
 	cld
 
-	/* Clear .bss */
-	xor %eax,%eax
-	mov $__bss_start, %rdi
-	mov $__bss_stop, %rcx
-	sub %rdi, %rcx
-	shr $3, %rcx
-	rep stosq
-
-	mov %rsi, xen_start_info
 	mov initial_stack(%rip), %rsp
 
 	/* Set up %gs.
@@ -71,6 +62,7 @@ SYM_CODE_START(startup_xen)
 	cdq
 	wrmsr
 
+	mov	%rsi, %rdi
 	call xen_start_kernel
 SYM_CODE_END(startup_xen)
 	__FINIT
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:14:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358238.587358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oO1-0004Ji-Tp; Thu, 30 Jun 2022 07:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358238.587358; Thu, 30 Jun 2022 07:14:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oO1-0004JX-Qm; Thu, 30 Jun 2022 07:14:49 +0000
Received: by outflank-mailman (input) for mailman id 358238;
 Thu, 30 Jun 2022 07:14:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6oO0-0003Yo-66
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:14:48 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51b166cf-f844-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 09:14:46 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3F32A21C36;
 Thu, 30 Jun 2022 07:14:45 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EA13713A5C;
 Thu, 30 Jun 2022 07:14:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 8Id4N2RNvWKvKgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 07:14:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51b166cf-f844-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656573285; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dCgMupKWsFTgcQQygfN0SR5QkcimXyNIOsA3rDiUr5Q=;
	b=kiJN9aVoEGPb5m8eSiQoOTlZK2EQTATiQvG06ICc4EspsoI4rKTMRoaB0xF5U1ryWN9IfU
	ZD942x/ZcThxVp3QaBx//I6RK0725O4ZA+6HIOiu6s0DmxhCkTOEB4nobwmczq6EHrrKLe
	clRlM9l3SfiOE9GBBHsJFfuJf/7b1oE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: jpoimboe@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v3 3/3] x86: fix .brk attribute in linker script
Date: Thu, 30 Jun 2022 09:14:41 +0200
Message-Id: <20220630071441.28576-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220630071441.28576-1-jgross@suse.com>
References: <20220630071441.28576-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
added the "NOLOAD" attribute to the .brk section as a "failsafe"
measure.

Unfortunately this results in the linker no longer covering the .brk
section by a program header, so the kernel loader has no way of knowing
that the memory for the .brk section must be reserved.

This has led to crashes when loading the kernel as PV dom0 under Xen,
but other scenarios could hit the same problem (e.g. when an
uncompressed kernel is used and the initrd is placed directly after
it).

So drop the "NOLOAD" attribute. The resulting ELF file has been
verified to cover the .brk section with a program header.

Fixes: e32683c6f7d2 ("x86/mm: Fix RESERVE_BRK() for older binutils")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
V2:
- new patch
---
 arch/x86/kernel/vmlinux.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 81aba718ecd5..9487ce8c13ee 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -385,7 +385,7 @@ SECTIONS
 	__end_of_kernel_reserve = .;
 
 	. = ALIGN(PAGE_SIZE);
-	.brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
+	.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
 		__brk_base = .;
 		. += 64 * 1024;		/* 64k alignment slop space */
 		*(.bss..brk)		/* areas brk users have reserved */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:16:57 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:16:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358257.587369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oQ4-0005ui-Fn; Thu, 30 Jun 2022 07:16:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358257.587369; Thu, 30 Jun 2022 07:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6oQ4-0005ub-Cf; Thu, 30 Jun 2022 07:16:56 +0000
Received: by outflank-mailman (input) for mailman id 358257;
 Thu, 30 Jun 2022 07:16:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUi7=XF=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6oQ2-0005uL-JI
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:16:54 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130082.outbound.protection.outlook.com [40.107.13.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9d777525-f844-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 09:16:52 +0200 (CEST)
Received: from AS9PR05CA0011.eurprd05.prod.outlook.com (2603:10a6:20b:488::19)
 by AM0PR08MB5233.eurprd08.prod.outlook.com (2603:10a6:208:164::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 30 Jun
 2022 07:16:50 +0000
Received: from AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:488:cafe::ee) by AS9PR05CA0011.outlook.office365.com
 (2603:10a6:20b:488::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15 via Frontend
 Transport; Thu, 30 Jun 2022 07:16:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT062.mail.protection.outlook.com (10.152.17.120) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5395.14 via Frontend Transport; Thu, 30 Jun 2022 07:16:49 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Thu, 30 Jun 2022 07:16:48 +0000
Received: from 604353bbef16.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 15593C6D-2B07-4200-96A7-059B1A54A9C0.1; 
 Thu, 30 Jun 2022 07:16:42 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 604353bbef16.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jun 2022 07:16:42 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS4PR08MB8000.eurprd08.prod.outlook.com (2603:10a6:20b:583::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 07:16:40 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Thu, 30 Jun 2022
 07:16:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d777525-f844-11ec-bdce-3d151da133c5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=RXp8sCqtZxGs8+wadxtNiD9ZS7zxsndBX/lSTkI2oH8CiigKruoVlrZ1wUud4UiwSU+kw0ziCu9DU5PyUoQ5LVyN33v6hjxZ3OJ+rWNZfnMiDThNlxWHJUkSsswmDff3lczKtNE7Y/5Eb8kd1oqLFFqth7zAxIRRp1mO8UiTmXQn2HmQ4aqGLEA1Ts60zuMD7WPcLFCHLXOfyKu5c+7Qjh9EIuD+mimCGgq+8dbZu66q4Sjeq8G71LuH234Op/wLriLlgp6dAFsY5Loe7LkRB/C/FLKyZpgDEktnhlvHraSkPaOjZMtFWw5Gb0v39tyyGT0O09HDbJ91/1czv+/64w==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9Fd6wSu+qKNtqeVHQ1WoRks56L1+aBjZnNjDttmTySE=;
 b=maJCFoH5wHyQPJL0kr2Y21tyzgIxtuKH15Rx5AjdCRhBsWKRJFAMFHvFZ22dCS/J5fGxwwy2LxgIC0Oi0/vijrrDIw717bKJqza0KGagSsol7C4kohyWbH3Z30PHAe5/TT8XPKrRn2ekxuoFIH8tRP9vzZuDPmx47haaLajNqKp4ngG5x+iN8qQULajjRz/MpDRkXBmmgZrlCkJonDrTVCCg3zf9wh9mG+e49Q0E2UkMqmtJVMoTcfn9L91w0HuKedxjIEzFvc5xSufNE7xzBjNRf1hh8ZE0XfygdHxLNAl/RpEV0f+ExI9x05ehD6Jpy22JM72dU0EMiO9zMoaMCw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9Fd6wSu+qKNtqeVHQ1WoRks56L1+aBjZnNjDttmTySE=;
 b=5hp5Ru5qObwo1k02L8MncXJHq7yIAjjue+6KlDW8rugEUmrehxLalF3P+92Hawc+L4q2T9Fy0/ozM9/7dI9K9Hbru0xnnMgcpzrEMSpJ9d4GQYDAUVc2esXYpmIZrLTETQOP2GSxSs9IpIE/3oGM5wK8afdy/EMmTwtDXcViCuk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8a149a9a940af274
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DDWnN9wlTMiLOX5MIldeFxT0OXFMtwIS/AT31nx+IIb6cgdaQ3uWNZR6jyPAyZpN9uiJMeR4NTHGC8P6t+wVjOiCHNi8t1j0WgBCkEtwVYgSedvg4BBop5dCzi6+pvG22slkt7N0kbIPRwgy0eNqjJSP8wO3uBKrqWMuS6eAuh1Om4DimeMlr5bUuzZTXsOkyframNP6RAnLDrTa8+cDB8Azhc0Avrr7ULpv5RnbFtGnYE7O6vD7avXb/zS/ydfQuuXP2zL2zKzwLcN98P3SE4PAF+y8PJSmcUrE1RZc/G9gMz0nAYfBKgKD+HOSf9yVkY/OysLAcz/KgaFr4BI1Qw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9Fd6wSu+qKNtqeVHQ1WoRks56L1+aBjZnNjDttmTySE=;
 b=j8OzeNtse7fTrC2nARrGQf1M5gVnYZ2AWaPOe0+QDSY99NIxpNoCqgK7BxeC4721vFmEsjXGVPfNYT8bIRtQ381uaC1G8tvcdZvguLgGkVHEWde9FXLhe1CUykX1a6/ud4o5WvtpC+RhnO+3zwfFV21akokTVgfOv3FeK5Cmpk/67G5pIpfN0nHTwVDskNgXpmG/YAwJLioh0JOjgfOKLzCWqiiPBokIgwbsnSb4+3SPgcS7aaR58EEBimoxsBGKRFDA4TQNA4MVKQheIP4RkhvpXKNlP80ZaUrn6lmBMAnMEtKZPyR5IDWnQySLXShlXycR5vKLeCqcEQaFrEM4cA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9Fd6wSu+qKNtqeVHQ1WoRks56L1+aBjZnNjDttmTySE=;
 b=5hp5Ru5qObwo1k02L8MncXJHq7yIAjjue+6KlDW8rugEUmrehxLalF3P+92Hawc+L4q2T9Fy0/ozM9/7dI9K9Hbru0xnnMgcpzrEMSpJ9d4GQYDAUVc2esXYpmIZrLTETQOP2GSxSs9IpIE/3oGM5wK8afdy/EMmTwtDXcViCuk=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "dmitry.semenets@gmail.com"
	<dmitry.semenets@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Dmytro Semenets <dmytro_semenets@epam.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
Thread-Topic: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
Thread-Index:
 AQHYhtUZsflIoxNljUat3Yz6tc+m161djY6AgAC0bgCAANP9AIABIMQAgAVAMgCAAJ5ngIAAlakAgADqBgA=
Date: Thu, 30 Jun 2022 07:16:40 +0000
Message-ID: <14736B47-2F17-4684-9162-17C3E55F8D15@arm.com>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
 <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop>
 <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org>
 <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org>
 <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop>
 <26a1b208-7192-a64f-ca6d-c144de89ed2c@xen.org>
 <alpine.DEB.2.22.394.2206291014000.4389@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206291014000.4389@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: f6c38b62-a43a-4a82-d9c7-08da5a687ff3
x-ms-traffictypediagnostic:
	AS4PR08MB8000:EE_|AM5EUR03FT062:EE_|AM0PR08MB5233:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <6D41FFE32C169944A18E6BC5E9D26D23@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8000
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6a8832b7-3088-41c2-14fc-08da5a687a5f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 07:16:49.5507
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f6c38b62-a43a-4a82-d9c7-08da5a687ff3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5233

Hi,

> On 29 Jun 2022, at 18:19, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 29 Jun 2022, Julien Grall wrote:
>> On 28/06/2022 23:56, Stefano Stabellini wrote:
>>>> The advantage of the panic() is it will remind us that something needs
>>>> to be fixed.
>>>> With a warning (or WARN()) people will tend to ignore it.
>>> 
>>> I know that this specific code path (cpu off) is probably not super
>>> relevant for what I am about to say, but as we move closer to safety
>>> certifiability we need to get away from using "panic" and BUG_ON as a
>>> reminder that more work is needed to have a fully correct implementation
>>> of something.
>> 
>> I don't think we have many places at runtime using BUG_ON()/panic(). They
>> are often used because we think Xen would not be able to recover if the
>> condition is hit.
>> 
>> I am happy to remove them, but this should not be at the expense of
>> introducing other potential weird bugs.
>> 
>>> 
>>> I also see your point and agree that ASSERT is not acceptable for
>>> external input but from my point of view panic is the same (slightly
>>> worse because it doesn't go away in production builds).
>> 
>> I think it depends on your target. Would you be happy if Xen continued to
>> run with a potentially fatal flaw?
> 
> Actually, this is an excellent question. I don't know what the expected
> behavior is from a safety perspective in case of serious errors: how the
> error should be reported, and whether or not continuing is recommended.
> I'll try to find out more information.

I think there are two answers to this:
- as far as possible, those cases must be avoided: either it is demonstrated
  that they are impossible (and the check removed), or the system is turned
  into a failsafe mode so that actions can be taken (usually a reboot after
  saving some data);
- in some cases this can be robustness code (more for security).

I think that in our case, if we know we are ending up with an unstable
system, we should:
- stop the guest responsible (if a guest is the origin), or return an error
  to the guest and cancel the operation if suitable;
- panic if the cause is internal or dom0.

A warning saying that something unsupported was done, while execution
continues with unexpected behaviour, is for sure not acceptable.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:34:13 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358267.587380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ogh-0000BN-0D; Thu, 30 Jun 2022 07:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358267.587380; Thu, 30 Jun 2022 07:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ogg-0000BG-Sq; Thu, 30 Jun 2022 07:34:06 +0000
Received: by outflank-mailman (input) for mailman id 358267;
 Thu, 30 Jun 2022 07:34:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUi7=XF=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1o6ogf-0000B9-Fb
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:34:05 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10054.outbound.protection.outlook.com [40.107.1.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 048713cc-f847-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 09:34:04 +0200 (CEST)
Received: from DB6PR0402CA0023.eurprd04.prod.outlook.com (2603:10a6:4:91::33)
 by AM0PR08MB3090.eurprd08.prod.outlook.com (2603:10a6:208:56::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 30 Jun
 2022 07:33:58 +0000
Received: from DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:91:cafe::d5) by DB6PR0402CA0023.outlook.office365.com
 (2603:10a6:4:91::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15 via Frontend
 Transport; Thu, 30 Jun 2022 07:33:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT038.mail.protection.outlook.com (100.127.143.23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5395.14 via Frontend Transport; Thu, 30 Jun 2022 07:33:58 +0000
Received: ("Tessian outbound 879f4da7a6e9:v121");
 Thu, 30 Jun 2022 07:33:57 +0000
Received: from ba888a6f0445.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 08BA23F1-919A-479C-8A30-D837A175677D.1; 
 Thu, 30 Jun 2022 07:33:51 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ba888a6f0445.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jun 2022 07:33:51 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PA4PR08MB5952.eurprd08.prod.outlook.com (2603:10a6:102:e9::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15; Thu, 30 Jun
 2022 07:33:42 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::d9f0:12ef:bfa3:3adb%5]) with mapi id 15.20.5373.017; Thu, 30 Jun 2022
 07:33:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 048713cc-f847-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d1b58a814aeea5fc
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>, Wei
 Liu <wl@xen.org>, Elena Ufimtseva <elena.ufimtseva@oracle.com>, Tim Deegan
	<tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Thread-Topic: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
Thread-Index: AQHYh+S8oo1uKED1kUSSvOK+OMp/V61mHT4AgACMngCAAO3MAA==
Date: Thu, 30 Jun 2022 07:33:42 +0000
Message-ID: <6CA16D2D-8AD6-415A-A98D-00B27F91C4DA@arm.com>
References: <20220624160422.53457-1-anthony.perard@citrix.com>
 <20220624160422.53457-26-anthony.perard@citrix.com>
 <BF28045C-0848-4B5A-8DAB-57192C7B4A18@arm.com>
 <alpine.DEB.2.22.394.2206291019550.4389@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2206291019550.4389@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.100.31)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 2ceff935-a88f-4540-18dc-08da5a6ae4f3
x-ms-traffictypediagnostic:
	PA4PR08MB5952:EE_|DBAEUR03FT038:EE_|AM0PR08MB3090:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <E210679A06529340AB47E3F2E92B10DA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5952
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8e48a946-f92d-46cb-0fb2-08da5a6adbac
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 07:33:58.0516
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ceff935-a88f-4540-18dc-08da5a6ae4f3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3090

Hi Stefano,

> On 29 Jun 2022, at 18:22, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 29 Jun 2022, Luca Fancellu wrote:
>> + CC: Stefano Stabellini
>> 
>>> On 24 Jun 2022, at 17:04, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> 
>>> Patch "tools: Add -Werror by default to all tools/" has added
>>> "-Werror" to CFLAGS in tools/Rules.mk; remove it from every other
>>> makefile as it is now duplicated.
>>> 
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> 
>> Hi Anthony,
>> 
>> I will try to review the series when I manage to have some time. In the
>> meantime I can say the whole series builds fine in my Yocto setup on
>> arm64 and x86_64; I've also tried the tool stack to create/destroy/console
>> guests and no problem so far.
>> 
>> The only problem I have is building for arm32 because, I think, this
>> patch does a great job and it discovers a problem here:
> 
> That reminds me that we only have arm32 Xen hypervisor builds in
> gitlab-ci, we don't have any arm32 Xen tools builds. I'll add it to my
> TODO but if someone (not necessarily Luca) has some spare time it could
> be a nice project. It could be done with Yocto by adding a Yocto build
> container to automation/build/.

We now have a way to build and run Xen for arm32 on QEMU using Yocto.
We are using this internally and will soon also test Xen with guests on
arm32 using it.

I am upstreaming to meta-virtualisation all the fixes needed for that, so
it should be fairly straightforward to reproduce this in a Yocto build in
a container.

Please tell me what you need and I will try to provide you a set of
scripts or instructions to reproduce that on gitlab.

Cheers
Bertrand
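
For context, the duplication this patch removes can be illustrated with a
sketch. The Rules.mk line comes from the patch description; the per-directory
path and fragment below are hypothetical, invented for illustration:

```make
# tools/Rules.mk -- the earlier patch in the series adds the flag once,
# for every tools/ subdirectory whose makefile includes this file:
CFLAGS += -Werror

# tools/example/Makefile -- hypothetical per-directory makefile; lines
# like the following are what this patch deletes, since they now merely
# duplicate the Rules.mk default:
# CFLAGS += -Werror
```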


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 07:53:21 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 07:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358274.587391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ozD-0002z9-Nc; Thu, 30 Jun 2022 07:53:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358274.587391; Thu, 30 Jun 2022 07:53:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ozD-0002z2-KM; Thu, 30 Jun 2022 07:53:15 +0000
Received: by outflank-mailman (input) for mailman id 358274;
 Thu, 30 Jun 2022 07:50:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PGlj=XF=nfschina.com=jiaming@srs-se1.protection.inumbo.net>)
 id 1o6oww-0002vj-5O
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 07:50:54 +0000
Received: from mail.nfschina.com (unknown
 [2400:dd01:100f:2:72e2:84ff:fe10:5f45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5bbd0596-f849-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 09:50:50 +0200 (CEST)
Received: from localhost (unknown [127.0.0.1])
 by mail.nfschina.com (Postfix) with ESMTP id 739951E80D11;
 Thu, 30 Jun 2022 15:49:28 +0800 (CST)
Received: from mail.nfschina.com ([127.0.0.1])
 by localhost (mail.nfschina.com [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id ELo56DrHke8u; Thu, 30 Jun 2022 15:49:25 +0800 (CST)
Received: from localhost.localdomain (unknown [180.167.10.98])
 (Authenticated sender: jiaming@nfschina.com)
 by mail.nfschina.com (Postfix) with ESMTPA id 36A4A1E80CB6;
 Thu, 30 Jun 2022 15:49:25 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bbd0596-f849-11ec-bdce-3d151da133c5
X-Virus-Scanned: amavisd-new at test.com
From: Zhang Jiaming <jiaming@nfschina.com>
To: jgross@suse.com,
	sstabellini@kernel.org,
	oleksandr_tyshchenko@epam.com
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	liqiong@nfschina.com,
	renyu@nfschina.com,
	Zhang Jiaming <jiaming@nfschina.com>
Subject: [PATCH] xen: Fix spelling mistake
Date: Thu, 30 Jun 2022 15:50:27 +0800
Message-Id: <20220630075027.68833-1-jiaming@nfschina.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Change 'maped' to 'mapped'.
Change 'unmaped' to 'unmapped'.

Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
---
 drivers/xen/xen-front-pgdir-shbuf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/xen-front-pgdir-shbuf.c b/drivers/xen/xen-front-pgdir-shbuf.c
index bef8d72a6ca6..5c0b5cb5b419 100644
--- a/drivers/xen/xen-front-pgdir-shbuf.c
+++ b/drivers/xen/xen-front-pgdir-shbuf.c
@@ -89,7 +89,7 @@ EXPORT_SYMBOL_GPL(xen_front_pgdir_shbuf_get_dir_start);
  * shared by the frontend itself) or map the provided granted
  * references onto the backing storage (buf->pages).
  *
- * \param buf shared buffer which grants to be maped.
+ * \param buf shared buffer which grants to be mapped.
  * \return zero on success or a negative number on failure.
  */
 int xen_front_pgdir_shbuf_map(struct xen_front_pgdir_shbuf *buf)
@@ -110,7 +110,7 @@ EXPORT_SYMBOL_GPL(xen_front_pgdir_shbuf_map);
  * shared by the frontend itself) or unmap the provided granted
  * references.
  *
- * \param buf shared buffer which grants to be unmaped.
+ * \param buf shared buffer which grants to be unmapped.
  * \return zero on success or a negative number on failure.
  */
 int xen_front_pgdir_shbuf_unmap(struct xen_front_pgdir_shbuf *buf)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:08:18 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358285.587402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pDW-0005F1-92; Thu, 30 Jun 2022 08:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358285.587402; Thu, 30 Jun 2022 08:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pDW-0005Eu-5j; Thu, 30 Jun 2022 08:08:02 +0000
Received: by outflank-mailman (input) for mailman id 358285;
 Thu, 30 Jun 2022 08:08:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pDU-0005Ek-3R
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:08:00 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfa89246-f84b-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 10:07:57 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:07:51 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by PH0PR03MB5894.namprd03.prod.outlook.com (2603:10b6:510:31::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 08:07:49 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfa89246-f84b-11ec-bd2d-47488cf2e6aa
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 74095269
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="74095269"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iAa0q+5/J80Aj9tAkXE5qd3vId3UNCQzme56w87RJGYHpvMDd3cgOGLODUX4UATEO72ZA2Co1loLHHEOJZOaOHmzV7BtN/F4VRnZisjvdijk5DLxRpLg0Xmzub720DQa3USWdDbo5OcAh3600FVhNgN3abgrmRi52/6dyi5EjA6PuGGZaYhpr/AcmDTbaa4Gu+rFCidiAd0DIKXTIfALU62q5K23oZO8jXYlniegBMDCx7vy7UafqOCp9hqMo6hU0XG3AR2N/spZMzz3IC253nXtjSXnp9f2DmKVUkJqzpiuwUDY+NJk3WACDIsqPwQvOcPmp81uywL/cP1/X+LPKA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/cv2DunFRSB8qTBkbROka8eJZHPCMPRgJF6AjxqMOXg=;
 b=jPL2/LsrtBv8mEmkLLU6wXsn9QbKI1BDTNDNfX9tL7gnQOPwdTOtQL7sy+wHvEtqmulgKzs9M7uCLn4aKHGcwSG1xwOp+DPquGMJioucrUcHzGDwJ23pOyzvLgRL8c1FZSByd8uO2ZYRmtTu1vWVZzNdHzeuAbiVRcX31BD2Gz3gxusG0D1LUZOOlFTp9z9l0feixbYh0nsNuNfV3/o/kZJDEDTmiNYZ8Q5cYG/Bsg1fjHkT0UXpKQ5VGU7NMVnzQmjjtFA4az1XqzMwbAFbpalzrwI0/yz6K8im6J+/SjJX8OwvSwOBj6P4olCPh0cfrOwkhfUWE+OGzxx4ZE7DyA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/cv2DunFRSB8qTBkbROka8eJZHPCMPRgJF6AjxqMOXg=;
 b=GH81edFulI9BmVgSFjuZwDeqVGUzyEUZ0SN8MxJfoRrFrVogGiPjxe4PeKswJ2JIDZnJcqWIECNxM4H43Hue21n9oYtle6CWRvlbBkgw2djIVj87lrSppw0AcgQjNS8lLXe1OzxsRlzzHRviYpcUK3qowuphgZY4j8rVzDNBtzA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 30 Jun 2022 10:07:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] console/serial: bump buffer from 16K to 32K
Message-ID: <Yr1Z0eK2ub/Ad/kU@Air-de-Roger>
References: <20220623090852.29622-1-roger.pau@citrix.com>
 <20220623090852.29622-3-roger.pau@citrix.com>
 <e45d8dcf-fd0a-6875-a887-5c0dafcc4543@suse.com>
 <YrxuVPMb990xYKi9@Air-de-Roger>
 <b4740e9b-3586-04c3-454a-5d60bae2cc55@suse.com>
 <Yrx7hDevqMgvtMRR@Air-de-Roger>
 <59fe1b28-b173-e497-5b8a-5e0bb6d946b6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <59fe1b28-b173-e497-5b8a-5e0bb6d946b6@suse.com>
X-ClientProxiedBy: LO4P123CA0179.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18a::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a2fedcc7-5f22-4082-9510-08da5a6f9f89
X-MS-TrafficTypeDiagnostic: PH0PR03MB5894:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2fedcc7-5f22-4082-9510-08da5a6f9f89
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:07:49.2841
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: crIH7NFdFlJDOfN0oS0boXpm9WKfgNKNEcKCyQBh6HiuxjZLrZWtu7OdNBUf4EMXDC4x2igcH5Dj+YgwdG5/rA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5894

On Wed, Jun 29, 2022 at 06:30:18PM +0200, Jan Beulich wrote:
> On 29.06.2022 18:19, Roger Pau Monné wrote:
> > On Wed, Jun 29, 2022 at 06:03:34PM +0200, Jan Beulich wrote:
> >> On 29.06.2022 17:23, Roger Pau Monné wrote:
> >>> On Thu, Jun 23, 2022 at 03:32:30PM +0200, Jan Beulich wrote:
> >>>> On 23.06.2022 11:08, Roger Pau Monne wrote:
> >>>>> Testing on a Kaby Lake box with 8 CPUs leads to the serial buffer
> >>>>> being filled halfway during dom0 boot, and thus a non-trivial chunk of
> >>>>> Linux boot messages are dropped.
> >>>>>
> >>>>> Increasing the buffer to 32K does fix the issue and Linux boot
> >>>>> messages are no longer dropped.  There's no justification either on
> >>>>> why 16K was chosen, and hence bumping to 32K in order to cope with
> >>>>> current systems generating output faster does seem appropriate to have
> >>>>> a better user experience with the provided defaults.
> >>>>
> >>>> Just to record what was part of an earlier discussion: I'm not going
> >>>> to nak such a change, but I think the justification is insufficient:
> >>>> On this same basis someone else could come a few days later and bump
> >>>> to 64k, then 128k, etc.
> >>>
> >>> Indeed, and that would be fine IMO.  We should aim to provide defaults
> >>> that work fine for most situations, and here I don't see what drawback
> >>> it has to increase the default buffer size from 16kiB to 32kiB, and
> >>> I would be fine with increasing to 128kiB if that's required for some
> >>> use case, albeit I have a hard time seeing how we could fill that
> >>> buffer.
> >>>
> >>> If I can ask, what kind of justification you would see fit for
> >>> granting an increase to the default buffer size?
> >>
> >> Making plausible that for a majority of contemporary systems the buffer
> >> is not large enough would be one aspect. But then there's imo always
> >> going to be an issue: What if non-Linux Dom0 would be far more chatty?
> >> What if Linux, down the road, was made less verbose (by default)? What
> >> if people expect large enough a buffer to also suffice when Linux runs
> >> in e.g. ignore_loglevel mode? We simply can't fit all use cases and at
> >> the same time also not go overboard with the default size. That's why
> >> there's a way to control this via command line option.
> > 
> > Maybe I should clarify that the current buffer size is not enough on
> > this system with the default Linux log level. I think we can expect
> > someone that changes the default Linux log level to also consider
> > changing the Xen buffer size.  OTOH when using the default Linux log
> > level the default Xen serial buffer should be enough.
> > 
> > I haven't tested with FreeBSD on that system, other systems I have
> > seem to be fine when booting FreeBSD with the default Xen serial
> > buffer size.
> > 
> > I think we should exercise caution if someone proposed to increase to
> > 1M for example, but I don't see why so much controversy for going from
> > 16K to 32K, it's IMO a reasonable value and has proven to prevent
> > serial log loss when using the default Linux log level.
> 
> But see - that's exactly my point. Where do you draw the line between
> "we should accept" and "exercise caution"? Is it 256k? Or 512k?

Hard to tell, it would depend on the justification/use case for needing
those buffer sizes.

To be fair, 16K seems equally arbitrary to me; I tried to backtrack to
the commit where it was introduced, but I haven't been able to find any
justification.

I think we can at least agree that making the buffer size configurable
from Kconfig is a desirable move.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:09:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358290.587414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pF8-0005zB-Mp; Thu, 30 Jun 2022 08:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358290.587414; Thu, 30 Jun 2022 08:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pF8-0005z4-Hk; Thu, 30 Jun 2022 08:09:42 +0000
Received: by outflank-mailman (input) for mailman id 358290;
 Thu, 30 Jun 2022 08:09:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6pF8-0005yu-39
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:09:42 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc4322e8-f84b-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 10:09:38 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8D74721C97;
 Thu, 30 Jun 2022 08:09:40 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5712E13A5C;
 Thu, 30 Jun 2022 08:09:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id AN/ZE0RavWLZQAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 08:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc4322e8-f84b-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656576580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=gSJC+aj+BfNql4oU2JcGgOEE5XeUn40IH1rlwgZea4s=;
	b=X3GjqDbnVuh6BBl2wjlm4icDKQ5igyudH0AprJMQaGGeIDCbo5gWJp9fOH5T3q9Fb0YL4A
	3eUccMozS8f1EJ1ONjN66sZ/MxGnz3ibWhPTHdEVi3WzzCI58o9XQJXFxQ1sHg5dg6Thqw
	XLsbCA5fVJ0wYmkfo5RBBxBRDYK3SSg=
Message-ID: <9e2205f6-3d7d-fcdd-26ed-a8f7c9f1c814@suse.com>
Date: Thu, 30 Jun 2022 10:09:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH] xen: Fix spelling mistake
Content-Language: en-US
To: Zhang Jiaming <jiaming@nfschina.com>, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 liqiong@nfschina.com, renyu@nfschina.com
References: <20220630075027.68833-1-jiaming@nfschina.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20220630075027.68833-1-jiaming@nfschina.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------fNCBxfoPNxIBxv4WQdbzuGpD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------fNCBxfoPNxIBxv4WQdbzuGpD
Content-Type: multipart/mixed; boundary="------------8QciTfUEMB4lsjlv1xBOvLsO";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Zhang Jiaming <jiaming@nfschina.com>, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 liqiong@nfschina.com, renyu@nfschina.com
Message-ID: <9e2205f6-3d7d-fcdd-26ed-a8f7c9f1c814@suse.com>
Subject: Re: [PATCH] xen: Fix spelling mistake
References: <20220630075027.68833-1-jiaming@nfschina.com>
In-Reply-To: <20220630075027.68833-1-jiaming@nfschina.com>

--------------8QciTfUEMB4lsjlv1xBOvLsO
Content-Type: multipart/mixed; boundary="------------vkyqrtAevLXUTiJzuBfVX8AP"

--------------vkyqrtAevLXUTiJzuBfVX8AP
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMzAuMDYuMjIgMDk6NTAsIFpoYW5nIEppYW1pbmcgd3JvdGU6DQo+IENoYW5nZSAnbWFw
ZWQnIHRvICdtYXBwZWQnLg0KPiBDaGFuZ2UgJ3VubWFwZWQnIHRvICd1bm1hcHBlZCcuDQo+
IA0KPiBTaWduZWQtb2ZmLWJ5OiBaaGFuZyBKaWFtaW5nIDxqaWFtaW5nQG5mc2NoaW5hLmNv
bT4NCg0KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4NCg0K
DQpKdWVyZ2VuDQo=
--------------vkyqrtAevLXUTiJzuBfVX8AP
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------vkyqrtAevLXUTiJzuBfVX8AP--

--------------8QciTfUEMB4lsjlv1xBOvLsO--

--------------fNCBxfoPNxIBxv4WQdbzuGpD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK9WkMFAwAAAAAACgkQsN6d1ii/Ey99
dQf/cPF94WiMiaR5qS+vQEg+DHt5IjHajB+kRTARuHdYhhTDiS4ZkDn/oR1xtnJzMW0HqzPjcmQa
aHB+N2cdEcH1PFyhxTJUPTEObGH3sZksqRzxltq99N/aZ9sNifoTjHkDx1FFwFPs0b+CM6UX+5HN
4rXt8Rxf2Hdnw6N5jsLI3ZjpoGZ7tCA720ntqJ8AMiDfqxZX/Uizi3yNCJLduum9gu58xKx1ams2
ugzMLZBIIPg82Nh4pGSL2GnNztIUNkLJD63Z7upSqeZCUAjlnM9hwLIHK4V8v7ljXYWxjCZLz6Aq
WYEqwhm3eO1QLs14H3iVj8YXm4Vn5NDVh4lxbmo0Zg==
=d3MF
-----END PGP SIGNATURE-----

--------------fNCBxfoPNxIBxv4WQdbzuGpD--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:23:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358297.587423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSr-0008Ug-SS; Thu, 30 Jun 2022 08:23:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358297.587423; Thu, 30 Jun 2022 08:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSr-0008UZ-Pp; Thu, 30 Jun 2022 08:23:53 +0000
Received: by outflank-mailman (input) for mailman id 358297;
 Thu, 30 Jun 2022 08:23:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pSq-0008UT-Je
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:23:52 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7a9ee47-f84d-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 10:23:50 +0200 (CEST)
Received: from mail-mw2nam12lp2046.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:23:45 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5421.namprd03.prod.outlook.com (2603:10b6:a03:289::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 08:23:44 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:23:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7a9ee47-f84d-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656577430;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=yN7e6p77YPxwkl+F7ksxrii0BUbd6C7Ul9rN241wp30=;
  b=LnaXjXdL7c+0IDI9OZgIVTUhMQ9O6EmnGox6w8P3ighaozt1geSsznKC
   JWQzYwIbN4YQ4VY2RAdUxCJZTPkYKPjfgq7d6tKFoqtfNucbWUJ5dfSrd
   vpuVuX9hR4TS+1OqXUmP9xMiTehg9HH477IAmiTqW1OyUbQclzHXZxCJR
   4=;
X-IronPort-RemoteIP: 104.47.66.46
X-IronPort-MID: 74602641
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="74602641"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CNyDJaSwusRiqLYwr/ckPAAQq4X9qtlMT2NWT7UThTERtX75hkLCR62ELO7sFZtqWdRDWouUeXsfJzMmPEq5yS7hr/0yLrAcx90DuGAVhp2CCqOkcV792mSaY6+d2Cz69BqmNa1utE6hb+/pCQNSAGN8kTnAd490+50uCTJ5wSOTSoynVZgqAZGiL1DOZq56n66gSItm0QJ0VPA4u+jAz+f4RlE2vAiNxFl/teHttZu7ipL8i+MQC38Wp2TfxWaDSYDeQVsRIdUFFSlwD9pnExqvaz/yBR7X4ClLCT8UcZswdu3wPJ14DZzjdo/KTlHNIlmPHz9jCBC+XGzEPlF1hA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qjxY6oKOPhEUU0pls0z99JS4H5ntUS179yrFkFE+DfM=;
 b=jg6t8KFSvwVL1AD6Q8+2HI+IHMsCoMzMocD2T9cGqNjPEUuuhs/tas+tIXifw8dxR8sMTXncDpmGXT3Bcq3GahzORfgtwmvzq4E0LQOaLOfVLP88nWnaJDkFeJEywarol7lb0csaSSKDINdrkE3XKn4l3F+iphge04tcXUcRI4mi0HUiHPn/F/ztbYSM94YpZVtt5zaqYI/rgK+s1tO6DHPnU1lvCxMSePXvnu7icrCTC6wvGYY2CqMFg1fVJMTGdgedMqu6KjI/TyupY4em/307p8Vth4KwZRF/L/34J0781Oi/4FNedCCAzHvoTEJaDwOsQDl/+axx8pMP9/2j3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qjxY6oKOPhEUU0pls0z99JS4H5ntUS179yrFkFE+DfM=;
 b=Wf/DvpCW23uNWiTlYTPJq03+nlS+g2w8rfihVRmFf3wEb57R+s6m+wAMHqWN7HUBwML46rUssO/1+LQaDysPPuo3ibvqZFBEU4C31LAoM+sKvIFshfkRsBWue6yImoMq/nz8WsTKQJ6cua8HtYtkDIazgDPiE/PmMxIPeQm0GzE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] console/serial: adjust default TX buffer size
Date: Thu, 30 Jun 2022 10:23:28 +0200
Message-Id: <20220630082330.82706-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0135.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0eff8f1f-f652-411d-29cd-08da5a71d897
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5421:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0eff8f1f-f652-411d-29cd-08da5a71d897
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:23:43.9342
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5421

Hello,

The first patch moves the setting of the default TX buffer size to
Kconfig and shouldn't be controversial; the second patch increases the
buffer to 32K.  Jan doesn't feel comfortable Acking patch 2, so someone
else will have to review and consider it; see:

https://lore.kernel.org/xen-devel/59fe1b28-b173-e497-5b8a-5e0bb6d946b6@suse.com/

Thanks, Roger.

Roger Pau Monne (2):
  console/serial: set the default transmit buffer size in Kconfig
  console/serial: bump buffer from 16K to 32K

 xen/drivers/char/Kconfig  | 10 ++++++++++
 xen/drivers/char/serial.c |  2 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:23:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358298.587435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSv-0000KI-8B; Thu, 30 Jun 2022 08:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358298.587435; Thu, 30 Jun 2022 08:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSv-0000KB-5F; Thu, 30 Jun 2022 08:23:57 +0000
Received: by outflank-mailman (input) for mailman id 358298;
 Thu, 30 Jun 2022 08:23:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pSt-0000Hf-6u
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:23:55 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7988413-f84d-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 10:23:51 +0200 (CEST)
Received: from mail-mw2nam12lp2048.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:23:50 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5421.namprd03.prod.outlook.com (2603:10b6:a03:289::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 08:23:49 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7988413-f84d-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656577433;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=LiQtz8zgdXUTY6GP7spjzbqn2+XZTbP5Ey9lVx8mnXY=;
  b=h84DVu3Fki86Wa5Y2yoK6xA2biwKEvtM1g7INMlrGsDjpp0xFsgIlE2f
   ub32ubf2fdcJZ3kKYLJ/Yjq01EgZ3kmq7LEZi1PVjOP22h7G+bksVx/QR
   fZMQb+HVLk/SZjdKhZnp2SfTQMqu6eOv1pUFsf941/+Ium5O2eWM8W14Q
   g=;
X-IronPort-RemoteIP: 104.47.66.48
X-IronPort-MID: 74792292
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="74792292"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] console/serial: set the default transmit buffer size in Kconfig
Date: Thu, 30 Jun 2022 10:23:29 +0200
Message-Id: <20220630082330.82706-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630082330.82706-1-roger.pau@citrix.com>
References: <20220630082330.82706-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0191.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a::35) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2698f678-b027-4984-cf69-08da5a71dbce
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5421:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2698f678-b027-4984-cf69-08da5a71dbce
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:23:49.3632
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5421

Take the opportunity to convert the variable to read-only after init.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Fix help message about rounded boundary, do not round up the
   default value (will be done at runtime).
 - Use kiB instead of KB.
---
 xen/drivers/char/Kconfig  | 10 ++++++++++
 xen/drivers/char/serial.c |  2 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index e5f7b1d8eb..dec58bc993 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -74,3 +74,13 @@ config HAS_EHCI
 	help
 	  This selects the USB based EHCI debug port to be used as a UART. If
 	  you have an x86 based system with USB, say Y.
+
+config SERIAL_TX_BUFSIZE
+	int "Size of the transmit serial buffer"
+	default 16384
+	help
+	  Controls the default size of the transmit buffer (in bytes) used by
+	  the serial driver.  Note the value provided will be rounded down to
+	  the nearest power of 2.
+
+	  Default value is 16384 (16kiB).
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 5ecba0af33..f6c944bd30 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -16,7 +16,7 @@
 /* Never drop characters, even if the async transmit buffer fills. */
 /* #define SERIAL_NEVER_DROP_CHARS 1 */
 
-unsigned int __read_mostly serial_txbufsz = 16384;
+unsigned int __ro_after_init serial_txbufsz = CONFIG_SERIAL_TX_BUFSIZE;
 size_param("serial_tx_buffer", serial_txbufsz);
 
 #define mask_serial_rxbuf_idx(_i) ((_i)&(serial_rxbufsz-1))
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:24:01 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:24:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358299.587446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSz-0000d7-JM; Thu, 30 Jun 2022 08:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358299.587446; Thu, 30 Jun 2022 08:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pSz-0000d0-Ec; Thu, 30 Jun 2022 08:24:01 +0000
Received: by outflank-mailman (input) for mailman id 358299;
 Thu, 30 Jun 2022 08:23:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pSx-0008UT-4L
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:23:59 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fbd90a7f-f84d-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 10:23:57 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:23:55 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB5421.namprd03.prod.outlook.com (2603:10b6:a03:289::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 08:23:53 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:23:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbd90a7f-f84d-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656577437;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=z2tFs+C+vki8dV3dWic/vRVXguDBXQHxTduVL37e4ng=;
  b=B9QfIwgz4iXIrZYjwJDTdAApz6JR3XMH0NL9dZ54RM/b2ruQpatb4xLK
   PmnVNefIxmbZCDBRBbBUgHLR8h5i1CMswCntnm3NyTzUydhb5YTsAiFSu
   JVI5ny4Ru1mKhcTmTKotmZytKPJ37IF3AtU6nAgqQZq+AZeS1BN7tqI5L
   A=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 75193007
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="75193007"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] console/serial: bump buffer from 16K to 32K
Date: Thu, 30 Jun 2022 10:23:30 +0200
Message-Id: <20220630082330.82706-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630082330.82706-1-roger.pau@citrix.com>
References: <20220630082330.82706-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0555.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:33b::8) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9969ede9-eb63-4543-25e8-08da5a71de61
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5421:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9969ede9-eb63-4543-25e8-08da5a71de61
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:23:53.6777
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5421

Testing on a Kaby Lake box with 8 CPUs shows the serial buffer filling
up halfway through dom0 boot, and thus a non-trivial chunk of the
Linux boot messages is dropped.

Increasing the buffer to 32K fixes the issue: Linux boot messages are
no longer dropped.  There's also no record of why 16K was originally
chosen, so bumping the default to 32K to cope with current systems
generating output faster seems appropriate and gives a better user
experience with the provided defaults.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/char/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index dec58bc993..294b3509c7 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -77,10 +77,10 @@ config HAS_EHCI
 
 config SERIAL_TX_BUFSIZE
 	int "Size of the transmit serial buffer"
-	default 16384
+	default 32768
 	help
 	  Controls the default size of the transmit buffer (in bytes) used by
 	  the serial driver.  Note the value provided will be rounded down to
 	  the nearest power of 2.
 
-	  Default value is 16384 (16kiB).
+	  Default value is 32768 (32kiB).
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:36:37 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:36:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358316.587457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pf6-0002yo-MW; Thu, 30 Jun 2022 08:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358316.587457; Thu, 30 Jun 2022 08:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pf6-0002yh-Jl; Thu, 30 Jun 2022 08:36:32 +0000
Received: by outflank-mailman (input) for mailman id 358316;
 Thu, 30 Jun 2022 08:36:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6pf5-0002yX-LD; Thu, 30 Jun 2022 08:36:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6pf5-0007sV-FC; Thu, 30 Jun 2022 08:36:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6pf4-0006as-US; Thu, 30 Jun 2022 08:36:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6pf4-0003GH-Tx; Thu, 30 Jun 2022 08:36:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AS2SdKLlkNzUIknTI7V65MpprXSdZA0In9JCsmDF3Ik=; b=7IxoIm7ikn1IPBbbnMysw4zwGe
	Y3VoYezZ/2cz/9ZwKJYx7/2SK+9ci3UycK3tRF2DRc59YfoT0zkPN0LWsXNH3ugs7Xj1WG7Mer08A
	tL3cliAHwGnyZ79Tt+gZCaqT1KTLbpOvsa/guP0N4xudEeUhK8KOJjpdaaLWIwqIogsA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171418-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171418: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=21e6ef752239c3c840bc31745e14b391bf9c4691
X-Osstest-Versions-That:
    ovmf=c13377153f74d66adc83702b4e4ca5e9eadde2fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 08:36:30 +0000

flight 171418 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171418/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 21e6ef752239c3c840bc31745e14b391bf9c4691
baseline version:
 ovmf                 c13377153f74d66adc83702b4e4ca5e9eadde2fd

Last test of basis   171395  2022-06-29 03:10:27 Z    1 days
Testing same since   171418  2022-06-30 04:14:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c13377153f..21e6ef7522  21e6ef752239c3c840bc31745e14b391bf9c4691 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:41:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:41:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358326.587471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pjr-0004eF-Fy; Thu, 30 Jun 2022 08:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358326.587471; Thu, 30 Jun 2022 08:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pjr-0004e8-Ce; Thu, 30 Jun 2022 08:41:27 +0000
Received: by outflank-mailman (input) for mailman id 358326;
 Thu, 30 Jun 2022 08:41:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+N5j=XF=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1o6pjq-0004e2-Em
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:41:26 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2067.outbound.protection.outlook.com [40.107.21.67])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ab72044-f850-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 10:41:21 +0200 (CEST)
Received: from AM5PR1001CA0072.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::49) by PAXPR08MB7549.eurprd08.prod.outlook.com
 (2603:10a6:102:24c::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 08:41:22 +0000
Received: from AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:15:cafe::f2) by AM5PR1001CA0072.outlook.office365.com
 (2603:10a6:206:15::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14 via Frontend
 Transport; Thu, 30 Jun 2022 08:41:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT061.mail.protection.outlook.com (10.152.16.247) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5373.15 via Frontend Transport; Thu, 30 Jun 2022 08:41:21 +0000
Received: ("Tessian outbound 4748bc5c2894:v121");
 Thu, 30 Jun 2022 08:41:21 +0000
Received: from bd76e660bb4f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B836C286-7A1E-4A4F-8AEE-536469E6503E.1; 
 Thu, 30 Jun 2022 08:41:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bd76e660bb4f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jun 2022 08:41:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VI1PR08MB2799.eurprd08.prod.outlook.com (2603:10a6:802:19::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 08:40:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::502f:a77a:aba1:f3ee%6]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 08:40:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ab72044-f850-11ec-bdce-3d151da133c5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=GpQarhKfY1OzrZ3cXaKd7JxmKEfMQkD7CTby4oYN/HD5UlK32Fp4ilhvBA/2XgfxVd5LG6s+b18VanOh9wL7aqsSIWeMVyHoBln5dxC+4VMhsthQyq9IS/dfPF57XBWrmmfkxApVevxWwmzIzPtm2EK9ntBgeIM3RDLp7YAdGsAX6JXuG/QBBo8hWoxBE4IOWktYiG4zSVjQOWPHx6Ni6js6SmHo/lm3bQzmkHrjYwiQ6WCYQoqOCLv9BSGB4GBF1oyH7Gc8F8euJb/3C8S4kYOMwtDD5fOWsIySJ62B+pp/2H0bIoFQ7cp/FM8iQ5e5T8mrVGYzbBrHtVjMkDdAng==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ezCmuNKMIB3MTY9SF7b2uXP+7DTkfHwrHZflJsoVeYI=;
 b=IZz72PQKV8BiVU4cQlBeHbZdLjrIhif2D4IE6tv/KDgk5FqLRUrs1WfHHnbKw2q4SwF+T3U//a6wbCxjPnIf1z3ZElG+n3Ijp6lpYV9S+0U9Kk3iGjycuf0ZcAfMFXE0wVVz6r3prqmpJpmb/gYrgnoZ/TxNkPxPukxRgjSeRsquUoA0N+qcTkJei2opR2KE12/bmkDSGySY5IyXAS7Nctc74KzBPa0CQlb6sux4JAj49YVpRpZ3+MoWYKKzV1xZjD/RzW7chR+LYeHOsZOuMZVsnO4U0e3cNYPa404Zbg6NTSFTAR81lSUtOR215zoWIqxhsQOcNil62Ja2GRITCA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ezCmuNKMIB3MTY9SF7b2uXP+7DTkfHwrHZflJsoVeYI=;
 b=AJZ/xzLYxpy/Xdwjn57vxvbqMnERIw171l1zTZnx/Vqyst1RYNHCtrWAdPWS5abGvxTeKU4+/F3yzbZ97rYKM/phjESSAAtJUCTtU5ME2UC30xOFCgV00Qy6XNtO9vnedPVOXCZGLD0Mk/KWLgeBMkHLSisfL40c8EKmnrrWzEQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XG7h+y5VnsSryCdIw0dnwdjKDFcXQcyWn2y7QUdHknUfB4rvAVFScW+FX+avcH+H/MVcERmI38sKFcSsbJ4WU/fYsdnTdZ8ykclCH5CNBasFjrw+OWpN+WZPWPUHSI2HxnMPJrWRh167QjCUuaP9GnIZddRboRfrCmRKH4mcUM0BsQ1GZB765OiQkX+SsWIg/iUK7sb5Qe4dMOQm98WCYVaN6y9Jqm/FGyp6CEy4IUde5NhooIhC+6LX7v7Cjc1CfrZ+pPb5cyJiW0OyKQAH8EoaNACgFdfKljYRVlD4Fc9Ys02icEIMQSXZx578JuoFaai57aAjtgTO8X6/wQDYmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ezCmuNKMIB3MTY9SF7b2uXP+7DTkfHwrHZflJsoVeYI=;
 b=lACmGeF549uPm+leoTMLAr8oFzZkT0dLaYAxGoXY0W+lF64yslIj3jbymfakn1xrZJs8Uz1qTGvnSchUHGQG0A91jdO+YU9yZFFAgUal0XOYTzTJ+IQxr8Eb0SDCT4/bVmv/LjpfGefFvjjefmPfych+/3VZzYie/gFnfMN9pa6rg+Y/Plnm38IFQHmH1xRIndAFLxW6LhosWE7in0cRGgtX4+zjegYGgKdvo/8eVKay0qAnARzaCDKlWkZ7pcRP549/KOQjVHpV1P9F3J1Gf3loVqCKW84729V9C7XNUvU4yAuyKewcNGpfpY4blWyzwgC3g4VCEu7Hh5Pv2neskQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ezCmuNKMIB3MTY9SF7b2uXP+7DTkfHwrHZflJsoVeYI=;
 b=AJZ/xzLYxpy/Xdwjn57vxvbqMnERIw171l1zTZnx/Vqyst1RYNHCtrWAdPWS5abGvxTeKU4+/F3yzbZ97rYKM/phjESSAAtJUCTtU5ME2UC30xOFCgV00Qy6XNtO9vnedPVOXCZGLD0Mk/KWLgeBMkHLSisfL40c8EKmnrrWzEQ=
From: Henry Wang <Henry.Wang@arm.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "scott.davis@starlab.io" <scott.davis@starlab.io>, "jandryuk@gmail.com"
	<jandryuk@gmail.com>, "christopher.clark@starlab.io"
	<christopher.clark@starlab.io>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: RE: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
Thread-Topic: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
Thread-Index: AQHYjChOQ8kNa3hXJ0OsgkVBHq90Ka1nneog
Date: Thu, 30 Jun 2022 08:40:55 +0000
Message-ID:
 <AS8PR08MB7991EF8AA64EAB4754513A7892BA9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-4-dpsmith@apertussolutions.com>
In-Reply-To: <20220630022110.31555-4-dpsmith@apertussolutions.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9726E471F8D307449217EB6E7E959FD2.1
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 4aa5af78-f2a7-4379-0a18-08da5a744f2a
x-ms-traffictypediagnostic:
	VI1PR08MB2799:EE_|AM5EUR03FT061:EE_|PAXPR08MB7549:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2799
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ad49176d-da9c-4a1f-850f-08da5a743f9e
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:41:21.6582
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4aa5af78-f2a7-4379-0a18-08da5a744f2a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7549

Hi Daniel,

> -----Original Message-----
> Subject: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
>
> The function flask_domain_alloc_security() is where a default sid should be
> assigned to a domain under construction. For reasons unknown, the initial
> domain would be assigned unlabeled_t and then fixed up under
> flask_domain_create().  With the introduction of xenboot_t it is now possible
> to distinguish when the hypervisor is in the boot state.
>
> This commit looks to correct this by using a check to see if the hypervisor is
> under the xenboot_t context in flask_domain_alloc_security(). If it is, then it
> will inspect the domain's is_privileged field, and select the appropriate
> default label, dom0_t or domU_t, for the domain. The logic for
> flask_domain_create() was changed to allow the incoming sid to override the
> default label.
>
> The base policy was adjusted to allow the idle domain under the xenboot_t
> context to be able to construct domains of both types, dom0 and domU.
>
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>

Same as what Jan has said, I don't think I am qualified to properly review
the series, but I did run a compile and runtime test on Arm64 platform with
the xsm and flask enabled and it looks like everything is fine.

Hence (also for the whole series):
Tested-by: Henry Wang <Henry.Wang@arm.com>

> ---
>  tools/flask/policy/modules/dom0.te |  3 +++
>  tools/flask/policy/modules/domU.te |  3 +++
>  xen/xsm/flask/hooks.c              | 34 ++++++++++++++++++------------
>  3 files changed, 26 insertions(+), 14 deletions(-)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:52:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:52:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358332.587482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6puO-0006Iw-GX; Thu, 30 Jun 2022 08:52:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358332.587482; Thu, 30 Jun 2022 08:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6puO-0006Ip-C3; Thu, 30 Jun 2022 08:52:20 +0000
Received: by outflank-mailman (input) for mailman id 358332;
 Thu, 30 Jun 2022 08:52:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bdU=XF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6puN-0006Ij-BB
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:52:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70074.outbound.protection.outlook.com [40.107.7.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f121ad0e-f851-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 10:52:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB6PR04MB3110.eurprd04.prod.outlook.com (2603:10a6:6:c::14) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5395.15; Thu, 30 Jun 2022 08:52:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 08:52:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f121ad0e-f851-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S8fuCvmvRK3sPGXeU76+1Y3QzuiJ1bfOdJJncs7oYEsJV46nHHCDyeAYKtrF3XpVfqoq6lF3DsA8Z//SAVGYjKXWC0fs50CcCl4+pQHmHnbmMgNcU9WszrAMjBfmmFOQ9iBVb2k8zJe0tsivtBznv1xKUdZ079P6IcHZw43EsyUA1VISZN7pHL9Aj+VV/allwgShPXmtPvIc3oH9UM6Zf6vgAOJePU9pJExl+97BY3fW0LzSBVfwSqa4Ioa0UyZ7LJzHi83IC69e9CxS1B56GHT+iCXk/ZqunX7srnttaOof2sLo6amcK8tAIne2BMa2vpOz48P2lO/FTmqYwE7qbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2nitpe6UruRN2EQBY2H41fFc84WNrY1b1bKtQkwan04=;
 b=GPT2M7i6Kmx0mOR4+Ua6T5URJoGgA5eBN0EnJnPg+TwlYPf/sXuFDYO6wFpHjuWnzIhe6WuvrkU28RJPB/7vcgl9mMHzntypMyTSerPfCrbk4fq/gZMSvO8havhsycaf6GU7CIl/kkAZr1yVE1nfJmPzuYXvGRsQnejUa69XkvojtY/ZhEo0P//OnhkPWEbAwjzW1rIqJzO7y7TEa68RAisEJC9Kso4h3tjcwqt+tYHiDZxAfnPFNzy2Fb18QQwjhDe5+x4dPysO9pAK6CWD3nzUjkRTvJknC9GgQzmDsBFDcbQFXynIeBMA47atkYW7ZW5yEIMiP4bWjR2HOk9Swg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2nitpe6UruRN2EQBY2H41fFc84WNrY1b1bKtQkwan04=;
 b=c3Ofpu7UlH4AKn+j/bBJQMzsFjwx1ga8l94LbYQXPa73+7p9sxc+mkIl3QVj+vbeEXdAMgS+pW3jV0CrJhv4ySzhcVNbVcboXI/X/N4+NR4e0EETzhPAKp7gLdDzmQQ6h1jfVz7fC4rgWmEDyoyF8eb6I5vgS+k7pDwUDgOHjBaI+iNJPhKpA80N3c75x0SMJ/OOPSPhumOTmUM8FuNuUtX6RDMJYzjZEa7GuWq4PWUIb2wCWVTS11V3eDLukS2vHPhoJ+SsF3W+3bvrV6TxJB7J9ft7Q6BvsGgRdqRxfzRk1G2r5qvhIuLTEq9k2Sup6mTSiG2WzVLvwwbxh0WFdg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f8e8c90a-bbd9-d85d-876f-04de2617c000@suse.com>
Date: Thu, 30 Jun 2022 10:52:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v2 1/2] console/serial: set the default transmit buffer
 size in Kconfig
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20220630082330.82706-1-roger.pau@citrix.com>
 <20220630082330.82706-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20220630082330.82706-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS8PR04CA0086.eurprd04.prod.outlook.com
 (2603:10a6:20b:313::31) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3bec33fc-17e7-49f6-26e9-08da5a75d411
X-MS-TrafficTypeDiagnostic: DB6PR04MB3110:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bec33fc-17e7-49f6-26e9-08da5a75d411
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:52:14.3944
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +ktGiLJT7dmxAAiSZC+QYz7v52lBDBm/I5PSzrDggg7dFdYmK7NHaSgQPNUPyuo0XuiowMd4P2GDZ/aElVmOjQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR04MB3110

On 30.06.2022 10:23, Roger Pau Monne wrote:
> Take the opportunity to convert the variable to read-only after init.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:20 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/6] x86/Kconfig: add option for default x2APIC destination mode
Date: Thu, 30 Jun 2022 10:54:34 +0200
Message-Id: <20220630085439.83193-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Allow setting the default x2APIC destination mode from Kconfig to
Physical.

Note the default destination mode is still Logical (Cluster) mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Use a boolean rather than a choice.
 - Expand to X2APIC_PHYSICAL.
---
TBH I wasn't sure whether to keep X2APIC_PHYSICAL or X2APIC_LOGICAL as
the Kconfig option, went with X2APIC_PHYSICAL because that's the
define the code was already using.
---
 xen/arch/x86/Kconfig          | 18 ++++++++++++++++++
 xen/arch/x86/genapic/x2apic.c |  6 ++++--
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 1e31edc99f..6bed72b791 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -226,6 +226,24 @@ config XEN_ALIGN_2M
 
 endchoice
 
+config X2APIC_PHYSICAL
+	bool "x2APIC Physical Destination mode"
+	help
+	  Use x2APIC Physical Destination mode by default when available.
+
+	  When using this mode, APICs are addressed using the Physical
+	  Destination mode, which allows using all dynamic vectors on each
+	  CPU independently.
+
+	  Physical Destination has the benefit of having more vectors
+	  available for external interrupts, but it also makes the delivery
+	  of multi-destination inter-processor interrupts (IPIs) slightly
+	  slower than Logical Destination mode.
+
+	  When this option is not selected, the mode defaults to Logical Destination.
+
+	  If unsure, say N.
+
 config GUEST
 	bool
 
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index de5032f202..7dfc793514 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -228,7 +228,7 @@ static struct notifier_block x2apic_cpu_nfb = {
    .notifier_call = update_clusterinfo
 };
 
-static s8 __initdata x2apic_phys = -1; /* By default we use logical cluster mode. */
+static int8_t __initdata x2apic_phys = -1;
 boolean_param("x2apic_phys", x2apic_phys);
 
 const struct genapic *__init apic_x2apic_probe(void)
@@ -241,7 +241,9 @@ const struct genapic *__init apic_x2apic_probe(void)
          * the usage of the high 16 bits to hold the cluster ID.
          */
         x2apic_phys = !iommu_intremap ||
-                      (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL);
+                      (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL) ||
+                      (IS_ENABLED(CONFIG_X2APIC_PHYSICAL) &&
+                       !(acpi_gbl_FADT.flags & ACPI_FADT_APIC_CLUSTER));
     }
     else if ( !x2apic_phys )
         switch ( iommu_intremap )
-- 
2.36.1
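For illustration only, the decision made by the hunk above can be modeled as a small standalone function. This is a sketch, not Xen code: the `ACPI_FADT_*` bit values below are placeholders (the real ACPI FADT flag values differ), and the name `x2apic_defaults_to_phys` is invented for this example.

```c
#include <stdbool.h>

/* Placeholder flag bits -- the real ACPI FADT flag values differ. */
#define ACPI_FADT_APIC_PHYSICAL (1u << 0)
#define ACPI_FADT_APIC_CLUSTER  (1u << 1)

/*
 * Sketch of the default-mode decision when no "x2apic_phys" command line
 * option was given: physical destination mode is chosen when interrupt
 * remapping is unavailable, when firmware requests physical mode, or when
 * CONFIG_X2APIC_PHYSICAL is set and firmware did not explicitly request
 * cluster mode.
 */
bool x2apic_defaults_to_phys(bool iommu_intremap, unsigned int fadt_flags,
                             bool config_x2apic_physical)
{
    return !iommu_intremap ||
           (fadt_flags & ACPI_FADT_APIC_PHYSICAL) ||
           (config_x2apic_physical &&
            !(fadt_flags & ACPI_FADT_APIC_CLUSTER));
}
```

Note how a firmware request for cluster mode still wins over the new Kconfig default, while the Kconfig option only changes the outcome when firmware expresses no preference.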



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:26 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 0/6] x86/irq: switch x2APIC default destination mode
Date: Thu, 30 Jun 2022 10:54:33 +0200
Message-Id: <20220630085439.83193-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello,

The following series aims to change the default x2APIC destination mode
from Logical to Physical.  This is done in order to cope with boxes that
don't have a huge number of CPUs, but do have a non-trivial number of
PCI devices using MSI(-X).

The default x2APIC destination mode can now be set from Kconfig, and
will default to Physical in order to boot reliably on all boxes.

The remaining patches clean up the interrupt limits reported at boot
and make those values more realistic.

Thanks, Roger.

Roger Pau Monne (6):
  x86/Kconfig: add option for default x2APIC destination mode
  x86/x2apic: use physical destination mode by default
  x86/setup: init nr_irqs after having detected x2APIC support
  x86/irq: fix setting irq limits
  x86/irq: print nr_irqs as limit on the number of MSI(-X) interrupts
  x86/irq: do not set nr_irqs based on nr_irqs_gsi in APIC mode

 docs/misc/xen-command-line.pandoc |  5 ++---
 xen/arch/x86/Kconfig              | 19 +++++++++++++++++++
 xen/arch/x86/genapic/x2apic.c     |  6 ++++--
 xen/arch/x86/include/asm/apic.h   |  2 ++
 xen/arch/x86/io_apic.c            | 10 ----------
 xen/arch/x86/irq.c                | 15 +++++++++++++++
 xen/arch/x86/mpparse.c            |  5 +++++
 7 files changed, 47 insertions(+), 15 deletions(-)

-- 
2.36.1
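As a usage sketch (assuming patch 1 of the series is applied; this is not part of the series itself): the build-time default would be chosen via the new Kconfig symbol, while the existing `x2apic_phys` boolean command line option, kept by the patch via `boolean_param()`, continues to override it at runtime.

```
# Build-time default, example .config fragment:
CONFIG_X2APIC_PHYSICAL=y

# Runtime override on the Xen command line, taking precedence over the
# Kconfig default:
#   x2apic_phys=<boolean>
```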



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:28 2022
 7Qoy04N8A20H/VdnHdm2qZ5yDQlBIVr1Pyw16RhnXu5ebjQighNsZHjYVFNjPE9ksJprhHoe
 529lPck6ASIQLLnSz76dSNfQptjFCIrX0rlvNWp2BDULEZdKRaoeUkjQ5o+a87bWzHAb0cYa
 hT5Jm23ocXTbraVQGSgoBX+q3iYpxpdS32AXTruaSuokprdT5CvgklLfck7wY9HaIGOud5Dt
 v/Q9RVfcl1P6krhIJGdZM8qJiMexvwaCOJFl6uCnLaM4xCE07xivfMkcYIDaeRCdc18Kc=
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="77339796"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dSMoxkF3zOGw/GzKnKsnPvMfTFmWk6AIBQiHb6slTcR+MOAZ6GpiT56q/aSOZdd1gqA/W5qdr/qvE06oH3ogfNdVGrUrOHh0vLe/yMfnoyz2r/ZeS8D6eEqUKpJkIwZbA8QjD8CwBp+bp/oWP0r5KlY/6iEhERPWErLPmfBwjLOKA9IMb81Q61Qf2D15htgoD9DWmoZaGD/80r/NNo/d4dTU63NA03KHHN/gVuty1fZ1EVhWfGRcredjkOKTFx+W9dErBzzyYFiEeQ+hkkH9pU0Dkq+Rl+NC07ZYdggrDlZOPXgYwbnEVNoyJcjANffcH5GMk2HAuv7NFAoYFYoCbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G83kxdx8/BCNbuvV4mxQ2qXbGd7+MsuOADIfqBBoRC4=;
 b=WotL2ZfM596tjpLIzABB1VpH8uUdLImvKKgWbLlRi9dBMKVjIVHgokpHbCiJB+CAP2oJG1VJ9YLhPf6XzYPxm/GFWLvf+Vg2itfn4OwO+3E+m2Gh6KY31RHmXDx746Ci6dCM+szz706+KEawm81u4mygYUVh2W/u4QHVDk6UdzcFJCsjgFawnajyvAISO5pevEZyyBcjSpvCWKqejZCK54mNQ/3fY+kZnVF4mFQIQPssHH36GDBcL6ZxXQU+NMlCmPAub5QVu5BWSJIeyueh+iHHBCiC1QnCb2+lyZ2Glq0ndjKv0GDtn9CmxtRaePSYpDaJpDVunP5BfzFMRjOFcw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G83kxdx8/BCNbuvV4mxQ2qXbGd7+MsuOADIfqBBoRC4=;
 b=lAunFh61UA7E/yb4GbSXWdoU3Bv4Js3YvynySD+b7qKduwOxuufxlu/7nC9MTMamkUHL7RQYimellLRu13CUUxNT28bO9oWqISy1mY5PMc0L/NwaqHZN01WqchnD4OQUOp17Lzuzb7Mp5uUerucroiyB9QYteipDF/fXjoHHUvE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/6] x86/x2apic: use physical destination mode by default
Date: Thu, 30 Jun 2022 10:54:35 +0200
Message-Id: <20220630085439.83193-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0108.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e3e8a997-0a74-4b5b-91d8-08da5a76659b
X-MS-TrafficTypeDiagnostic: PH0PR03MB6977:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XKHfTAs0PE7JQ/IadQsaRqq6k3/5MK1UlAoaQTthlZiRZ11xZ8V2jnusdl582L+h9LPd/fFwcqV77E+THNiFrTTo5jYmUldktrhsQra+eW3NrhPZH0JWS+2mEfzt0yEQWZER1TqcIgzF6nkFPWxggncKZIRIhaLpBpjka227+Q49pTdoME8xZ39hmTLWpzP8OSodeds6zGJh21kwNiIvRmR1FiZ4nromi6CqusbBwxCnmNMGWkbRhTrhstLYNmXbe+N3HSC3C4zlJL7q0UThL8B/8Y880BZoqK5B77Rqxi8Cj07ipKjFHLwM70i0hfJvj+il/rM1YbhsRiTzGPPTphO1H5DeXUxoUPL+VvB9H/DPscPOLONbURgXOu0edIbOVflaEh08kesEZNhu9EcKk9+Kdbv+ByFVd90MkFm/BDkt6rbCN+y3n3X3QiDFT9jx9otuql1y2GES/9g8n98z4/sdFoWcjZgiw6xReKn0o9le1sDQ5lzDPWPUqGbToWfvHfMMwW0VtomUPlnuT2zNpZ7jMsg/GjGkFpULxIUWlUIWFF7zIQxJVRCMQkfyBenP5d+SzU9/D2fomtgFuzG/cOEWUVSat65jinHiQXm/L+zaUUi4QRdznwvulvZ4z3Ql3kn+jwIlb5V3RavjbyZ0wN2veKr9bKZLiKAUpQiBIcAhjU4RqikXgAtd7tGvVoXqrj6AAM09LtWa7K5JVK8wxV2bJZoRF1DtMlFwFwshUBFRU0wR+4pj9zdeDAUh+4VHU4OE57qHw4YxKUxsgui6Fg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(366004)(376002)(396003)(346002)(136003)(316002)(38100700002)(66556008)(66476007)(8676002)(4326008)(66946007)(478600001)(2616005)(186003)(6486002)(1076003)(6666004)(41300700001)(6506007)(26005)(6512007)(82960400001)(54906003)(6916009)(83380400001)(2906002)(5660300002)(86362001)(8936002)(36756003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WEl3dTZHZ0MyZTVKdzA3cWxIcEdzbXNLY1VRNEd2M2dRNURlNW13VjB6NDd5?=
 =?utf-8?B?VEkvS1hJbG9iYmJWSmJrM0VoK21xSW9WZ0lueGcydm5VK3pPVUpKWlNwanlB?=
 =?utf-8?B?VFlKcUhZaVlHcHovZmYxUDIwZHFYbVAzdjcwWXhoRGhzR3hsazdkMm51dFVF?=
 =?utf-8?B?ZWVVaHVyeXlFemZzZmZpYWZsNEhJMHowTGJjWDZ2MWtRWENZRVNEekJYR2M1?=
 =?utf-8?B?ZFdKRTJEOWp6dWxJMGVwS0RuTHJlYU1na29PbXAvbzlZdVpUTmc3R0ZRaThx?=
 =?utf-8?B?azMrTEluRHdPMk02OFlQZjhOS3ZwMUFxaWVCcDRUbG9uZHNVcTc0ZkFnVFR2?=
 =?utf-8?B?VWdIdVpsVXFVRFVJUklwcEk1T2lEZjh6VWltOGswVzgvS0lyaURKVGZSViti?=
 =?utf-8?B?emRyall4L1plUWpLYlZUZkpLSjBZd0psY0JXNWdRRlZ2MmNWdkdGQ09BTVNG?=
 =?utf-8?B?Zm1UamNJZ3huQ3RsUzVpS0FjREg2NXl0RFRLMjRPNHBDaDBiQzk1aEtMUm8y?=
 =?utf-8?B?VGhBZUJaaFdUcW9pZHlpUVBHV2RSUFpjb3pIRUUwK1lSdHNiMDZaaXgrdWp6?=
 =?utf-8?B?ZjhITWdwMmx4bVFyVHlVdFdSNWpIR043VHlWZnpJVC90SFcxOTBKMEJYZGgw?=
 =?utf-8?B?eFZJaXoxYlVKWFkxVzUxTGNQdmErcDdvanVFT3BQOFgyczFWTHVydXRGZUZ0?=
 =?utf-8?B?ZGFCMnR4VzE5c0lhbXdTaFBPZkFNeGZkdzlhM0JVYVJ4Z2ZVaUpLSFVySExJ?=
 =?utf-8?B?MnNuejNWR21GZDhHZ3hGUzdEeXZEMEFnKzhzNXpLcDhQOVZ2OG5LNm9TQ3Z6?=
 =?utf-8?B?MUZkRnBVSXNRNjkycjg1aGNNSUJYUEM0dHZCQ2hrYlNrTnA1NGxSeUl6RHE1?=
 =?utf-8?B?MGN3VUYyVmRvcUViWmpjdGpzZktzRi9OT3NwWldaNlRyQlMxU0FPVnhmRU14?=
 =?utf-8?B?aEhoM0R6b1htQUF6aXBqSzlPeU9YS00yMHZ5Z2VZUnEzSWFGRithZ3FBNGFZ?=
 =?utf-8?B?WGsvN1NqTERpSjV6eHIrSTI2emJ1aHB1Y1JiWDNVZ3E4M2VaZnV4WEExdkRO?=
 =?utf-8?B?SWJBVVZrRjdnMEthSjJTaUxRYzNKR3NSVitPY09iWFNpMitBSFpQV1J0OElh?=
 =?utf-8?B?TkwraGZJbXllcXV1UHQ1ZTByakdzUTNoRHhnQURnN0FhSWc0VXBEUW5ORWJD?=
 =?utf-8?B?YUpobjAxUU5jZGZpRFVXclNPa3M1b3Bsck43TjJHa1NzNzlQWTFiTWtRd1Nm?=
 =?utf-8?B?U0swWnpGbWdaMHBkVFZycDN5QVNsVFpFeTkvcjRGTENvTUxtNG85bnRTVEpL?=
 =?utf-8?B?WDBNeDJQMDlIMzRTRk5TcC9Xa1MwUFVjZjEzL0JmVjNKR21EVHVxaDljZk5X?=
 =?utf-8?B?Z3BEKzNvZ0UvK2VoMWIrQ29WSXpHQVc4UDhucG5KMktnR2pQb3JPTDlGVGRS?=
 =?utf-8?B?ODg5TEs5ZHc3aVNSRGdUYWxRNWhTa2EzeXV3RTJCa3c3WCtPckE4WE1uekF4?=
 =?utf-8?B?Wm44TzViT1NyVHNYZzY2dVBYZ01xK2kzWGQ5R3BURFBtMG1zNWMvY0JCVC9W?=
 =?utf-8?B?bEZtR1RTY1NxUmVHc1R5RHdLb3ZIOFVwdWhpV1VJcDNUQUZVbCttY21SMWdy?=
 =?utf-8?B?d1RVVUlRQndVb2xXWGpGT3c2cFhxKzhsRDV5RUFSWjIyTmtPUEhIalJjUXlV?=
 =?utf-8?B?eHRJODdRbWwvcDFSYVpja2lYdm5GMkowTlIrRGxJRjNVNnI3T2cwVGNJZXVh?=
 =?utf-8?B?RzhSQ0ZWdGFKOHAwUWR4VmYrUEViOVBsM3lMeENGMEkzdjB2MlorSURMUkRi?=
 =?utf-8?B?bCtCWUZwQnNydTlOSVlhUWg5Z2NTc1BlL0JMaTBveUNNd2ZZeTZNZ3UwVHNL?=
 =?utf-8?B?NVI0OWFJa1NENDNxeGNSZ3FDV3JNekNydWVUY1NZRjNYSy9nZE5paFFjQThw?=
 =?utf-8?B?ZGd2bHdBM29udEF3blh0NGY5U0xINUlyS2xXK29seTRZaHlxVFRRWkg2Nkh4?=
 =?utf-8?B?K3V0N2tYNzF0TXNiTGplVnh6cW1CR2kzNzJPeldjRk1aZ2dEL2N4N1c4eHIx?=
 =?utf-8?B?RWpSa2d5S2RKVGYrMzcxQXRiWW1TVE9yRjV6Wk8ybG02WW16V0FTOGI2djVZ?=
 =?utf-8?B?NS80RnVKVGl2S0lOd1UxNUtEalBXT2lORTladVVPb2NnTjNQWW8rc0R3TlJR?=
 =?utf-8?B?MVE9PQ==?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e3e8a997-0a74-4b5b-91d8-08da5a76659b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:56:18.5087
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IdD5NV2ZuhSqfwae47NhMSyNCZd/w/vIqLr/Ns4BDhzEEtMmL+IZHKmEJJcGoM0JhGGZiY9MBdihmQ/bMKx4Gg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6977

Using cluster mode by default greatly limits the number of vectors
available, since the vector space is then shared amongst all the CPUs
in a logical cluster.

This can lead to vector shortages on boxes with a modest number of
CPUs but a non-trivial number of devices.  There are reports of boxes
with 32 CPUs (2 logical clusters, and thus only 414 dynamic vectors)
that run out of vectors and fail to set up interrupts for dom0.
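
The arithmetic behind that report can be sketched as follows.  This is
a hedged back-of-the-envelope illustration, not Xen source: the
per-domain dynamic vector count of 207 is an assumption chosen so the
numbers match the 414-vector figure above (Xen's actual constant is
NR_DYNAMIC_VECTORS), and `available_vectors` is a hypothetical helper.

```python
DYN_VECTORS_PER_DOMAIN = 207  # assumed dynamic vectors per allocation domain

def available_vectors(cpus, mode):
    """Estimate the dynamic vector pool for a given destination mode."""
    if mode == "cluster":
        # x2APIC logical clusters hold up to 16 CPUs, all sharing one pool.
        clusters = (cpus + 15) // 16
        return clusters * DYN_VECTORS_PER_DOMAIN
    # Physical mode: each CPU has its own vector space.
    return cpus * DYN_VECTORS_PER_DOMAIN

print(available_vectors(32, "cluster"))   # 2 clusters -> 414 vectors
print(available_vectors(32, "physical"))  # 32 CPUs -> 6624 vectors
```

Under these assumptions, physical mode multiplies the available pool by
the cluster size, which is why the shortage disappears.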

This could be considered a regression when switching from xAPIC, as
with xAPIC physical mode is the default when the system has more than
8 CPUs.

Note that using Physical Destination mode is less efficient than
Logical mode when sending IPIs to multiple CPUs, as in Logical mode
IPIs to CPUs in the same cluster can be batched together.

Switch default Kconfig selection to use x2APIC physical destination
mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Expand commit message.
---
 docs/misc/xen-command-line.pandoc | 5 ++---
 xen/arch/x86/Kconfig              | 3 ++-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index a92b7d228c..952874c4f4 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2646,11 +2646,10 @@ Permit use of x2apic setup for SMP environments.
 ### x2apic_phys (x86)
 > `= <boolean>`
 
-> Default: `true` if **FADT** mandates physical mode or if interrupt remapping
->          is not available, `false` otherwise.
+> Default: `false` if **FADT** mandates cluster mode, `true` otherwise.
 
 In the case that x2apic is in use, this option switches between physical and
-clustered mode.  The default, given no hint from the **FADT**, is cluster
+clustered mode.  The default, given no hint from the **FADT**, is physical
 mode.
 
 ### xenheap_megabytes (arm32)
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6bed72b791..09441761d1 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -228,6 +228,7 @@ endchoice
 
 config X2APIC_PHYSICAL
 	bool "x2APIC Physical Destination mode"
+	default y
 	help
 	  Use x2APIC Physical Destination mode by default when available.
 
@@ -242,7 +243,7 @@ config X2APIC_PHYSICAL
 
 	  The mode when this option is not selected is Logical Destination.
 
-	  If unsure, say N.
+	  If unsure, say Y.
 
 config GUEST
 	bool
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:30 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358341.587526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pyP-0007ns-Uq; Thu, 30 Jun 2022 08:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358341.587526; Thu, 30 Jun 2022 08:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pyP-0007nf-Q3; Thu, 30 Jun 2022 08:56:29 +0000
Received: by outflank-mailman (input) for mailman id 358341;
 Thu, 30 Jun 2022 08:56:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pyO-0007Q3-6b
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:56:28 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84e8486b-f852-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 10:56:24 +0200 (CEST)
Received: from mail-sn1anam02lp2043.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:56:25 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB6699.namprd03.prod.outlook.com (2603:10b6:a03:402::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 08:56:23 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:56:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84e8486b-f852-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656579387;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=NjywiNvb5PqoOaAfnLLHWPA+CBvHvcnWE3RraIAqnd4=;
  b=fEfZ/7yXPSzmFTILuGV/HUMKiLQavf4y5bvGVlhnVlzqItCy0Ai3aocT
   a7AsCcn9CGfMolho/YhumxxQKQsZml4rsKYH7WbXXPNuGYK/+qt276xdA
   LFSbZwzNrD0GhtUab6AnK/Jzk1CMyrs7KCl+BDNJWLBPV+umrZ7cGlC2W
   k=;
X-IronPort-RemoteIP: 104.47.57.43
X-IronPort-MID: 77339801
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:fRhvy6AfV+KWZBVW/13iw5YqxClBgxIJ4kV8jS/XYbTApDIi0GEPy
 DYXWGjVbqqMYGqkLt8lbIrjphgFscPdy9dnQQY4rX1jcSlH+JHPbTi7wuYcHM8wwunrFh8PA
 xA2M4GYRCwMZiaA4E/raNANlFEkvU2ybuOU5NXsZ2YgH2eIdA970Ug5w7Bi2tYx6TSEK1jlV
 e3a8pW31GCNg1aYAkpMg05UgEoy1BhakGpwUm0WPZinjneH/5UmJMt3yZWKB2n5WuFp8tuSH
 I4v+l0bElTxpH/BAvv9+lryn9ZjrrT6ZWBigVIOM0Sub4QrSoXfHc/XOdJFAXq7hQllkPhpm
 Php7oTqZTsTL6Pwp+FEbgNmFgFxaPguFL/veRBTsOS15mieKT7X5awrC0s7e4oF5uxwHGdCs
 +QCLywAZQyCgOTwx6+nTu5rhYIoK8yD0IE34yk8i22GS6t5B8yYK0nJzYYwMDMYnMdBEOyYf
 8MEQTFucA7Bc1tEPVJ/5JcWw7jz3SivK2QwRFS9/Ic1xXDKzCJIirmyLcTrZtDJbul0txPNz
 o7B1yGjav0AD/SPxDzA/n+yi+vnmSLgRJlUBLC+7uRtglCY2ioUEhJ+fVmxrOS9i0W+c8lCM
 EFS8S0rxYAt8GS7Q9+7WAe3yENopTYZUttUVvY8sQiLw6+MuQKBXDBYFXhGdcAss9IwSXoyz
 FiVktj1BDtp9rqIVXaa8bTSpjS3UcQIEVI/ieY/ZVNty7HeTEsb1Xojkv4L/HaJs+DI
IronPort-HdrOrdr: A9a23:AgQ66KnS8M7KiqQ7Iyc+l14+1HzpDfO3imdD5ihNYBxZY6Wkfp
 +V8cjzhCWftN9OYhodcLC7V5Voj0msl6KdhrNhR4tKPTOWw1dASbsP0WKM+UyFJ8STzI5gPO
 JbAtFD4b7LfCdHZLjBkW6F+r8bqbHokZxAx92ut0uFJTsaF52IhD0JbzpzfHcGJzWvUvECZe
 ehD4d81kydUEVSSv7+KmgOXuDFqdGOvJX6YSQeDxpizAWVlzun5JPzDhDdh34lInty6IZn1V
 KAvx3y562lvf3+4hjA11XL55ATvNf60NNMCOGFl8BQADTxjQSDYphnRtS5zXkIidDqzGxvvM
 jHoh8mMcg2w3TNflutqR+o4AXk2CZG0Q6W9XaoxV/Y5eDpTjMzDMRMwahDdAHC1kYmtNZglI
 pWwmOwrfNsfF/9tRW4w+KNewBhl0Kyr3Znu/UUlWZjXYwXb6IUhZAD/XlSDIwLEEvBmc0a+d
 FVfY/hDcttABKnhyizhBgu/DXsZAV4Iv6+eDlMhiTPuAIm30yQzCMjtb4idzk7hdAAoqJ/lp
 X525RT5c9zp/AtHNJA7Z86MK2K40z2MGbx2TGpUCPaPZBCHU7xgLjKx5hwzN2WWfUzvegPcd
 L6IRhliVI=
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="77339801"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PrMfaK7Ad9O6gzvsf9L2fcDv2WVujNAFf4RvRLzS2knmlPSSK5WiVHIoEojUvdkTsJriakhvSdtqQBiBv9Q4AG5TwenNn316t5tBN6RIDVPOycdmSzVJ56N4Cj3OK1uLGROenHL/XTPtQ9bTxF9NPhW4zLOe8IQlRjnkCrIcxvqdG4hS4slyPFQw0EjHFBQqlug+ZMcIwV0ff5APB42t3tiaHBnj3iu4iFstopWi96BCNzkpMmVdPpvVdBKNg5dqGk4BQ1KXPQxhdn8Rd5wUbQB/xGO5emBn1T9e3+T9bouF/L1FYpQdJXM6V4BhsJgseA7ovTo+9ZZEGtrGhE+H1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=euzhKLQqaFSC2Ezhbo0Ek1xKzO4ctm4oHFntZFUVdLc=;
 b=PciyX1Ek9nbuXzy+a3j2UOQBO1ZzksZrBnXeq8ttMqJxcRdO1C7ZEFF7aQKBzTlNFQqXONgsPqkEjAY58i8yna+/JPNYeLH6zekYavw/zjOu3EQDsvQVB5tK/U7YOrAAAdBwfE73lAVoZFTZ0rvht2PQ5LUYZOfCpcqlGU0nQVhRkgXO4xdXEpTelQt2X1PZ5Oa4nqbC9enb7TH+33fZEye9ScPPsUGCNAbOJgphcanfLSu1xzJqdYj0KDGGGl5cYjaHGOASFOjIUtPkIiJ8z8PEol0UGnueXpdYoX6t4bqvSJdBFTWPmEn3Y4BTXsn1g5gRan1bzCQIYvKscGy/Gg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=euzhKLQqaFSC2Ezhbo0Ek1xKzO4ctm4oHFntZFUVdLc=;
 b=L2ocW23meYFZSAp6ftsOyGan/Znxubb5V1uSTDj1F1+BFiEQvyxFjTznxPidYUBs9JsOyCEcdUTA8rRvO6q3PI14htrYBpqyeKWf05P/QWBOVkMNJgC/B72pBr8qjktnon//OJd0ygGROZAy+biDjuoFfPF6YAynk+HyUJvjzwc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/6] x86/setup: init nr_irqs after having detected x2APIC support
Date: Thu, 30 Jun 2022 10:54:36 +0200
Message-Id: <20220630085439.83193-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0228.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7dba2b51-ada5-4b01-9b55-08da5a76688d
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6699:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RSmskpFO60T4doHraeENvE95tJImskdhT8bBtHK1O8Ni6XRameIIMYCXebbf+IJRP5mbA7K0Hc2Z9Nbd47xfvs2HG68HezL71n2Q3Bikjezt+HRVcnU1VljZv5tjvzcGvYVQkmupZ3nZaMuNCLUmHg+23ETjLg7FlxSc+SgedARQfPXN7UcyiVnplplAwLbx9VcwKcS4/q4+lNhG/e1QVkZr1gZ60p6UQj16j1Qx+Dl7jo3IWrvDoCOM67z4tHznHJPV3yBcDlszd83TMrr3hUfdVy7mV3HYE6/2prKVkGP+bGN1Kz1h8YTCr0i3gl41NtwqV7msXeoK/L8+oSSm56cZ8iDjdi1JsMMhqsugsXuK57xL+uxP1ph5UX+6UNnzuM3LIbrJbPwL6hLNmljZtWJAsON56oFTLrG7B/MFxB7JZ7HcaGbKDdShLG8y4hmxhfJKAknCsok5tItJpL3pogTY3r4QOH8hTFTtuD75oSJ/3e06nAzpMmfYwcXih9V8ISkrHe1uD8uJgK//JkVKwHBEwKeLlbxAsRzbavnasGpTlE9Hv//RGBCHTbSVa4pPOs02ir2qlvBJI7J8cSCyUPZvZli/VyLjwF3+ZchliyzwDkRLrsCAIYG6m0ND5Nef60WUowcrsKEZmpevt+JGbkgQ6WIH5dERv4h/2S9TNtstgFKA4NPxmNVd5gIWtSDhZPjBSs8qMKwYO15gX09hiN9BpVkohUDOjIt+Pk5sWdxPz5TYSkeZDr6kVGRt31Dt
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(376002)(396003)(366004)(346002)(136003)(186003)(36756003)(316002)(2906002)(6506007)(86362001)(54906003)(41300700001)(2616005)(6666004)(38100700002)(6916009)(66556008)(66476007)(26005)(66946007)(4326008)(5660300002)(8936002)(8676002)(6512007)(1076003)(478600001)(6486002)(82960400001)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VHc1T0xMTS8yckdIT0RnQUg0QU9JNFM1VzRyN0d2UTUwVzhNaFllWkVSei85?=
 =?utf-8?B?YmF6NXB4cTJqRnZUdFN5N2dxTDRWQVVBdElRbW8wMEN6c3JBRXdiZlprcW1W?=
 =?utf-8?B?OTlFRXgzNFBsRWV5bXdQQWd2b01SWVFheDJzYURwM0tLam04d1I0YzBvdld4?=
 =?utf-8?B?Y3RqWGt3VzhIb2ZDbndpWDhZYml2QWdmS1RiRmFkaW5ML1BacXdkTFJHVzJJ?=
 =?utf-8?B?RytZbTdRdXQvOWIxOUNsTzR2RXZ3WEt5V1kxcU80cTgraHFPVDBqaW55bTRH?=
 =?utf-8?B?RHFyU0NqbFZHL0kwNklJSldWSWJBSnFTckdDTzY3VDJ3aG01LzllYlFGUTVo?=
 =?utf-8?B?NERPaVYzZUUvQnhqSUp3d3cvVWZnTlpDNXdobHl5MnhaU0lIVWdXenptcUlt?=
 =?utf-8?B?aGplbW9UcXBEQTBkWDZNYkhGNW4ydnMxYWR5WkR3emRENERWS2NXVldnQjRo?=
 =?utf-8?B?WVlQSHJSM2J5QUlNcjg3WkJVWkQrNUpHNDBYckRHazJJb3hVQXd1aTJ0aFJq?=
 =?utf-8?B?c2s0TUluUC93T1FLaTY2UXJiMndRSUtTaEp4T2o1M3R4bUpWcDBWOHJLNllC?=
 =?utf-8?B?MCtuaS9UUVdmZDJva1hqUVN1TzNIK2ZzNStSR0NUcG1FVXVFZGVsWDQ2dXRE?=
 =?utf-8?B?dUlLaGFrWGRnUk0xSzRMS0x6YmxlTGxyOURuTGZVbWFtWkZsUzFoc3Q0VUh0?=
 =?utf-8?B?RXljbEU5K3Q1dlYyZmFYS2k4UG5XSG9JZU4zWnhIZVY1bmljSWtmQVJ2ZjVn?=
 =?utf-8?B?amFwZVM0MkJRSmF1V2RxelM3ckFPVEhqdXU5anF6YWNSU0pSbVV0cU9Cc21T?=
 =?utf-8?B?cnlncmpwNXZFUmVBOTUwZVJna3NoclA1UU9jcExkU1A0OCsydFpRaXhQUWJT?=
 =?utf-8?B?aVJQeGg0ZzJ3amVqSXp0WEEzeFh5M2w5bitOVVlyei9SYi95N0tUc3ZPeWtv?=
 =?utf-8?B?ME8xaTdCL2ZQMi9uSjBsT0hsemQ3SlRWSDdEUGFnaTI2aW9IS2FtaVMwN2Zr?=
 =?utf-8?B?a0FRalhmYm4zWDRJa1o2MHg4VzVBZEpCck9haUZxYUFIUENHRWYvWExpYnd6?=
 =?utf-8?B?WHkzY0hPRmV5SUlpNEhaQkZjNlZtSkxNR1BBK2FsbzBsU0MxSm15UkgvTDlp?=
 =?utf-8?B?aVh2bkhwQlZ5SnFUNVVHWXlhaWZlcGhlaTdJdG44OXVHTVgzckxQTmNPaGs3?=
 =?utf-8?B?OUlSci9OK1d5T09OT0llVUZmTENPczMzeC9EVHVjM0Zzbk50ZlRYZTJyeTFJ?=
 =?utf-8?B?WUVlTkk0Um1iRzlyZGpGMkF0ZXkwTUh0SEVia3RlU0ExWXlNb3ZmY2Fqc1h1?=
 =?utf-8?B?YXUxL2c2TGJaeXFTMkQrR0szWXkxMEllOVNMbUxVajIySGlmcnMxemFTQkFT?=
 =?utf-8?B?OW16KzI0WFlzKzlrbHU3anBCN0o0Vk95Z0JPY0srbE1UVnI5bjd3OGsvcXVa?=
 =?utf-8?B?T2l0eldOL0dWTXkxUllyRi9iZFh2eXdSL0Q4a3oyNW0yTEczUVFpbnpPbXhw?=
 =?utf-8?B?bTdPcHAzWElJL1JCTjhlanJ1dHBpUVBlcnp3RWlCQTVVR1M3eHRTcENBa0xP?=
 =?utf-8?B?djk5clk3Vm9OTU1OeDZLY00wbmoybDJWQ2Z4Z3hRMkdJV0xjWGJYYzdOdjND?=
 =?utf-8?B?YlpXTGlmc3BHcW9xZWVlWXhvN0dTUHpGZmFRd0dwNmFQZU0wRnhlZ0tuVGRH?=
 =?utf-8?B?MXp1TlBqWXp2LzloRzRUWUhwUWhoZjQ5eFFmbDF3MlZCMnBvS0VHeE9tRUNx?=
 =?utf-8?B?cDAxSDROUml5VFVJZCtoNit4OUJHTWIwYUV1UHZ5T1F3aWJqeDdTcXZwMEI4?=
 =?utf-8?B?Q1N4TlhXMFp3VHFUYS9UbGp4STBLN3FrT2srZC9mQWtNbTRQenNkQWxBZjY3?=
 =?utf-8?B?R0U4bk44dWVZM2VDODJZeW8yZVA3MTlEQTZPb2had293eFhpMFN6cHBxR0JV?=
 =?utf-8?B?T1lnQzVKZ3g4RFlxZkNpUU04Nm1XTXA3c0tDT2dmbzg0WUNYdUNKRG9xdWhK?=
 =?utf-8?B?VFUwOERBV1hWREJ0NFFabVcvbkVLV21LcHZXSUhyN29BTU9MQ0JuTUt6Sk1k?=
 =?utf-8?B?TlUrVjlwNUVicDhtaldpMHJZMnptU1NnZXVOeTB5QWxLU0dVWWM2VEV2MHhH?=
 =?utf-8?B?NFhRMmppQ3dYTDZ0ZHJOSE41a1R6N3FLdkNmK0pEbzU5VmNkSlF5TTVjUUtr?=
 =?utf-8?B?K2c9PQ==?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7dba2b51-ada5-4b01-9b55-08da5a76688d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:56:23.5128
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sIvN9FeLTEE6th6IPoflK8bGNPAY17NURLEe+yewHvio0Qt01cnzbl/iCGcf67UDZ0djsQULcW3MKbqm8x5vng==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6699

Logic in ioapic_init() that sets the number of available vectors for
external interrupts requires knowing the x2APIC destination mode.  As
such, move the call to after x2APIC BSP setup.

Do it as part of init_irq_data(), which is called just after x2APIC
BSP init and also makes use of nr_irqs itself.

No functional change intended.
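
For reference, the relocated sizing logic amounts to the following.
This is a hedged Python rendering of the C hunk in the diff, not
Xen code itself; the default of 207 for NR_DYNAMIC_VECTORS is an
assumption for illustration, and `compute_nr_irqs` is a hypothetical
name.

```python
def compute_nr_irqs(nr_irqs, cpu_has_apic, num_present_cpus, nr_irqs_gsi,
                    NR_DYNAMIC_VECTORS=207):
    """Mirror of the nr_irqs sizing moved into init_irq_data()."""
    if nr_irqs == 0:
        if cpu_has_apic:
            # Scale with present CPUs, but never below 8x the GSI count.
            nr_irqs = max(num_present_cpus * NR_DYNAMIC_VECTORS,
                          8 * nr_irqs_gsi)
        else:
            nr_irqs = nr_irqs_gsi
    elif nr_irqs < 16:
        # A command-line override is clamped to a minimum of 16.
        nr_irqs = 16
    return nr_irqs

print(compute_nr_irqs(0, True, 32, 48))   # APIC present: 32 * 207 = 6624
print(compute_nr_irqs(5, True, 32, 48))   # too-small override clamped to 16
```

Because the result depends on the per-CPU dynamic vector count, it can
only be computed once the x2APIC destination mode is known, which is
the ordering the patch establishes.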

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/io_apic.c | 10 ----------
 xen/arch/x86/irq.c     | 10 ++++++++++
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index c086f40f63..8d4923ba9a 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2653,16 +2653,6 @@ void __init ioapic_init(void)
                max_gsi_irqs, nr_irqs_gsi);
         nr_irqs_gsi = max_gsi_irqs;
     }
-
-    if ( nr_irqs == 0 )
-        nr_irqs = cpu_has_apic ?
-                  max(0U + num_present_cpus() * NR_DYNAMIC_VECTORS,
-                      8 * nr_irqs_gsi) :
-                  nr_irqs_gsi;
-    else if ( nr_irqs < 16 )
-        nr_irqs = 16;
-    printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
-           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
 }
 
 unsigned int arch_hwdom_irqs(domid_t domid)
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index de30ee7779..b51e25f696 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -420,6 +420,16 @@ int __init init_irq_data(void)
     struct irq_desc *desc;
     int irq, vector;
 
+    if ( nr_irqs == 0 )
+        nr_irqs = cpu_has_apic ? max(0U + num_present_cpus() *
+                                     NR_DYNAMIC_VECTORS, 8 * nr_irqs_gsi)
+                               : nr_irqs_gsi;
+    else if ( nr_irqs < 16 )
+        nr_irqs = 16;
+
+    printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
+           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
+
     for ( vector = 0; vector < X86_NR_VECTORS; ++vector )
         this_cpu(vector_irq)[vector] = INT_MIN;
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:36 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 08:56:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358345.587537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pyW-0008Hn-Ec; Thu, 30 Jun 2022 08:56:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358345.587537; Thu, 30 Jun 2022 08:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6pyW-0008HR-Az; Thu, 30 Jun 2022 08:56:36 +0000
Received: by outflank-mailman (input) for mailman id 358345;
 Thu, 30 Jun 2022 08:56:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6pyV-0007Q3-AU
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 08:56:35 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 880a05ab-f852-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 10:56:30 +0200 (CEST)
Received: from mail-sn1anam02lp2049.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 04:56:31 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by SJ0PR03MB6699.namprd03.prod.outlook.com (2603:10b6:a03:402::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 08:56:29 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 08:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 4/6] x86/irq: fix setting irq limits
Date: Thu, 30 Jun 2022 10:54:37 +0200
Message-Id: <20220630085439.83193-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

The current code to calculate nr_irqs assumes the APIC destination mode
is physical, so that all vectors on each possible CPU are available for
use by different interrupt sources. This is not true when using logical
(cluster) destination mode, where CPUs in the same cluster share the
vector space.

Fix this by calculating the maximum cluster ID and using it to derive
the number of clusters in the system. Note the code assumes cluster IDs
are contiguous; otherwise nr_irqs will be set to a number higher than
the real amount of available vectors (still not fatal).

The number of clusters is then used instead of the number of present
CPUs when calculating the value of nr_irqs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/genapic/x2apic.c   |  2 +-
 xen/arch/x86/include/asm/apic.h |  2 ++
 xen/arch/x86/irq.c              | 10 ++++++++--
 xen/arch/x86/mpparse.c          |  5 +++++
 4 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 7dfc793514..ad95564e90 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -228,7 +228,7 @@ static struct notifier_block x2apic_cpu_nfb = {
    .notifier_call = update_clusterinfo
 };
 
-static int8_t __initdata x2apic_phys = -1;
+int8_t __initdata x2apic_phys = -1;
 boolean_param("x2apic_phys", x2apic_phys);
 
 const struct genapic *__init apic_x2apic_probe(void)
diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index 7625c0ecd6..6060628836 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -27,6 +27,8 @@ enum apic_mode {
 extern bool iommu_x2apic_enabled;
 extern u8 apic_verbosity;
 extern bool directed_eoi_enabled;
+extern uint16_t x2apic_max_cluster_id;
+extern int8_t x2apic_phys;
 
 void check_x2apic_preenabled(void);
 void x2apic_bsp_setup(void);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index b51e25f696..b64d18c450 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -421,9 +421,15 @@ int __init init_irq_data(void)
     int irq, vector;
 
     if ( nr_irqs == 0 )
-        nr_irqs = cpu_has_apic ? max(0U + num_present_cpus() *
-                                     NR_DYNAMIC_VECTORS, 8 * nr_irqs_gsi)
+    {
+        unsigned int vec_spaces =
+            (x2apic_enabled && !x2apic_phys) ? x2apic_max_cluster_id + 1
+                                             : num_present_cpus();
+
+        nr_irqs = cpu_has_apic ? max(vec_spaces * NR_DYNAMIC_VECTORS,
+                                     8 * nr_irqs_gsi)
                                : nr_irqs_gsi;
+    }
     else if ( nr_irqs < 16 )
         nr_irqs = 16;
 
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..dc112bffc7 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -131,6 +131,8 @@ static int __init mpf_checksum(unsigned char *mp, int len)
 	return sum & 0xFF;
 }
 
+uint16_t __initdata x2apic_max_cluster_id;
+
 /* Return xen's logical cpu_id of the new added cpu or <0 if error */
 static int MP_processor_info_x(struct mpc_config_processor *m,
 			       u32 apicid, bool hotplug)
@@ -199,6 +201,9 @@ static int MP_processor_info_x(struct mpc_config_processor *m,
 		def_to_bigsmp = true;
 	}
 
+	x2apic_max_cluster_id = max(x2apic_max_cluster_id,
+				    (uint16_t)(apicid >> 4));
+
 	return cpu;
 }
 
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:41 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 5/6] x86/irq: print nr_irqs as limit on the number of MSI(-X) interrupts
Date: Thu, 30 Jun 2022 10:54:38 +0200
Message-Id: <20220630085439.83193-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Using nr_irqs minus nr_irqs_gsi as the MSI(-X) limit is misleading, as
GSI interrupts are not allocated unless requested by the hardware
domain, so a hardware domain might use no GSIs at all (or just one for
the ACPI SCI), and hence (almost) all of nr_irqs will be available for
MSI(-X) usage.

No functional difference, just affects the printed message.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index b64d18c450..7f75ec8bcc 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -434,7 +434,7 @@ int __init init_irq_data(void)
         nr_irqs = 16;
 
     printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
-           nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
+           nr_irqs_gsi, nr_irqs);
 
     for ( vector = 0; vector < X86_NR_VECTORS; ++vector )
         this_cpu(vector_irq)[vector] = INT_MIN;
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 08:56:46 2022
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 6/6] x86/irq: do not set nr_irqs based on nr_irqs_gsi in APIC mode
Date: Thu, 30 Jun 2022 10:54:39 +0200
Message-Id: <20220630085439.83193-7-roger.pau@citrix.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220630085439.83193-1-roger.pau@citrix.com>
References: <20220630085439.83193-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0609.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:314::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 895b7875-2d61-4f85-5a55-08da5a7671e8
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6699:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BTzb38rTcOX68Xk4ctm8mkJYqsVAjffjmVPggNes2IlFsnovPzmbPEV7cOzu3yhZaijCHMG9APi5OGDG5+sSfMJPrHClxfkJIHIlfQ/SUJp4lKr3nVVyxQDoTDDh066J67nuhyiz1Rp8ThsMAPfGALtbccXPEOXkEJewGpBqUlDriVqBg/nEnA7Tg8Vuz5NUAFGJYmATR/vixkGQhMa21JlcRsdmiN5hjTFO23+aXL6HshvJpjvdtIDY7lwq61hj5oktXYWrSaR7+H5YXo5RqtvPj+0Zs3akWCvmT5puJ68d+9gqfR9i7LHgFEnApN6vCL0tQXgo4ZR4dI36x7SfHCG/rMquxeBooeNTyXP66Wgqto0FcJG4C2wzeZulRIGbETq72vUeTiqoK+T5t5jqu9QHFBomNsiDp9XB0Ao20/kC+O3WnDgaCd0xBkKthfnbtBKWAZLNf7zdtIo/TxM4yDIUwurpHpXjfw+qIAPg6OBgXGRwQR1ajwYHEi6VfemnzyLYc2EE413ZAfgmwHlK6jPHqijVvX/fvQQ9CQAB9/PHzA4rLLc1h9/ZCFZg1G3DO8bLayszdhKnGbWboaknJuSjGGyjbFzRZJc8o/MePeR7M3G8tXrDFowbUrl/Brau9CO2K0udIB2PAtuoeDMb/lbhwE1T2torbd9JlNF1biE2qCEH8+EOjDPB+3pxm/f/Q9BhLiQbmfIpKsxgIv11WXNGd53gOQF5ba5IdTIpJMaXIgTFb8Mcg2qi928UbKL45AiUdooVr5Jhe631J2wXlg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(376002)(396003)(366004)(346002)(136003)(186003)(36756003)(316002)(2906002)(6506007)(86362001)(54906003)(41300700001)(2616005)(4744005)(38100700002)(6916009)(66556008)(66476007)(26005)(66946007)(4326008)(5660300002)(8936002)(8676002)(6512007)(1076003)(478600001)(6486002)(82960400001)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UmFZN2QxUkVuNkQ3eWJhTXFpejZSb0tCM0c2dUFKY2VwZWZybTFPZHdBNndl?=
 =?utf-8?B?ajc4UTA0K3BIL1lxaldMdE5WdkxEdTd5UWNnb2xoNzZOcFVNMDhzMjdhUUR3?=
 =?utf-8?B?UUNIOG14TUo3aW00Sit3TmMxMzQ5SjhiQzVCSmFEdndtc0l0Z3pybWtod1JP?=
 =?utf-8?B?UWRVLzNIMGhBM2QzNGlrSy9zeEt1dVhsSCtVU3JsVmwxQTU4ZldwWWpMaWEw?=
 =?utf-8?B?QmJyQjJ4ZWlrbXRNUUdCYSt1dXlhN3lyQWNORCtTT0xJOHIwdGhqZzZnVkFI?=
 =?utf-8?B?TElpYlFMRFdoaVYzTkduZGhqRGM0b2lTVCs5S0JtWGk4Q3kxR0x4VnlzT3dE?=
 =?utf-8?B?V2grQklZTldubWtadkJvSTB3YjRYTklsL0UxMTVjVUUxaGdoS09sendGcmhh?=
 =?utf-8?B?WFpvd3F5S3Q2THA5MnlOSWVWeU1ic1BURnZ3ZDQxSzVJdUJYeUFwcEhYbHFL?=
 =?utf-8?B?NmJkV0RhSkFMOWZHTlh4Y1MzUVY1R21DNkNNazFGSjZEZS9UUTl6NllMc1dN?=
 =?utf-8?B?cUxjWWo3UzhYS0hmanZiQURweXpEM2VmOTlNSEY1TEVscXkyem85cXJicjQz?=
 =?utf-8?B?ZzlBVzlGWUZNQWFjMnBrQ1JDdXN5U2p6a05JazN2YU5ITXduUDJSZEpONXBq?=
 =?utf-8?B?L05pc1JVcDcrOEZxZ3I5c1lsRUZXQ0lxekVoWHJLa2ZtZW5IT2c3aTdrNHZQ?=
 =?utf-8?B?RHhOWWp1Tm1ZNkNjTGtKOUl6cnJnNDA1d2FDbkRqNCtTaXJFUDVUSU1xQmZs?=
 =?utf-8?B?L2lLRGhGOTlkVkF1bzRTeDZFSm5VRkppN2FBZzZtSEs2UGUyWFJOQ3ZBUDRF?=
 =?utf-8?B?dy94RXQwNFNMZDd4bmpDcEtzbnpXUnhSNFRmZ3ZxbWY4b00xMnZSUnIwcVk4?=
 =?utf-8?B?VXNsUXpwblVOTjZ4MWplVENmRk82RmtXQkdXcWJtYy9id3pOWlltN0RiWE9W?=
 =?utf-8?B?WGJleXJ6amhENUVnRDllM2FYRVUxeldMWUpHSU1UbzZRVno5T2Q3MVgzRHEz?=
 =?utf-8?B?VTJVTThWcFg0SitXdXRGSDE0M0N3TVNrT0Jkem9qbGtrNXd4ZDZyQU9rOHlx?=
 =?utf-8?B?b1ZvclJDMEJOeGljeFFweFFNWUdSQ1JpK1A0S3o1SEk3Z1h6Zk1iNTNmaEZT?=
 =?utf-8?B?U0pKR0hYM0F5UVNKUjQwUE90ZGNYd2NmN0xWOFNIalY2YVhqc1ZQcXpQbU0v?=
 =?utf-8?B?cHBEWFJZazZMZTdncnU0WmQ5TVYwQnJ1L21BNlpyeWg5d2tPc2lVS2JwL3Fs?=
 =?utf-8?B?ZTEwdGNJSCs0bHdxbmxUdDBwRnpIS01nTGNxSXAvUEVKWjNpalJnU2draTM0?=
 =?utf-8?B?c0xZcTErbEFYYUxqcnh0eGNkYTEzQlhycDZjRlFIejhnbHR0cEp1MlFFdXRF?=
 =?utf-8?B?YUJZV1RNUTZEck56N1Q5OU5GNnhHVlhrZ3c2TmJMVVpQSCs4UUJKemd6VHht?=
 =?utf-8?B?SGhYTU12U28xQWtodTJISzdnRCtTOWYreGVwdzI2UFRVbzJJeVFFaDJ3bWc4?=
 =?utf-8?B?UkoyaExSR2hENmVpT2lEZkN4Uk5YOHFTajM5VjdlZkNTN0tNQ2hYN3dCWHQz?=
 =?utf-8?B?TmFFd09Nci8zSWVlekF5eXdCNkdOQWU4VTE1V3BmbVZkeFBVbm85YUt1OXhF?=
 =?utf-8?B?dW9FNnFJUGRORTNyZGVIejBEWHFHSS9lT1lJTEJoRm5HQmIxSjdpT3lqSVhh?=
 =?utf-8?B?bzhLdWw3WkdWRndodi9FcDV0ZjYxekh0NU4vTjAyb0x1MVJFc09KY1JaU2NJ?=
 =?utf-8?B?K0JNWms5c3lOSjh5TGlhK096R2NwRzJMSFBOWUR4WDM4U2JtK1NtN3VKUWJZ?=
 =?utf-8?B?VWF4b3lIRkpoVFdidWZpSURGQzVDbkw3akkzZXhKZWsrRzN0TmhoVlFTbUdl?=
 =?utf-8?B?T3hHdlZlMlI4NmpHT041QlhBN29FaEhhOEdkWjBXLzlxbHNqdXFhQStWbkNQ?=
 =?utf-8?B?b3ZuODFvRG1EZVN1eVFLOGdtbVI5R05RTzFMSDZXOUtvTzFBQUlqanFmMHNN?=
 =?utf-8?B?U2VsRGRaQVNRNWVQTTZXQzFUR1BxeFZSOWxHMjduSGs0QzUxaU5DdFZ5ZEVT?=
 =?utf-8?B?LzVON0c5UDdpYmdKbVlSNnVUd3F5WDNleTdSaWVWZnI5VEJwWUY3UFIxak9a?=
 =?utf-8?B?b3NrNzJJODZFN0tEbTFhQUt2YWFBa2pLN1UyMXBaSDlpc0tpbnpWNVExS2Vj?=
 =?utf-8?B?RVE9PQ==?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 895b7875-2d61-4f85-5a55-08da5a7671e8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 08:56:39.2071
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jGEhCFpryh4CfPmJ8Kiiii/VdSf6JS0dnhrhwdQUMUuV5lTluoLh62viBUNY+WVwk0ta306MdAMgytpgBhMuPg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6699

When using an APIC, do not set nr_irqs based on a multiple of nr_irqs_gsi
(currently x8); instead set it exclusively based on the number of
available vectors on the system.

There's no point in setting nr_irqs to a value higher than the
available set of vectors, as vector allocation will fail anyway.

Fixes: e99d45da8a ('x86/x2apic: properly implement cluster mode')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 7f75ec8bcc..e3b0bee527 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -426,8 +426,7 @@ int __init init_irq_data(void)
             (x2apic_enabled && !x2apic_phys) ? x2apic_max_cluster_id + 1
                                              : num_present_cpus();
 
-        nr_irqs = cpu_has_apic ? max(vec_spaces * NR_DYNAMIC_VECTORS,
-                                     8 * nr_irqs_gsi)
+        nr_irqs = cpu_has_apic ? vec_spaces * NR_DYNAMIC_VECTORS
                                : nr_irqs_gsi;
     }
     else if ( nr_irqs < 16 )
-- 
2.36.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 09:00:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 09:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358383.587570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6q2a-0003mD-Vw; Thu, 30 Jun 2022 09:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358383.587570; Thu, 30 Jun 2022 09:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6q2a-0003m6-TA; Thu, 30 Jun 2022 09:00:48 +0000
Received: by outflank-mailman (input) for mailman id 358383;
 Thu, 30 Jun 2022 09:00:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6q2a-0003lw-5r; Thu, 30 Jun 2022 09:00:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6q2a-0008NA-2P; Thu, 30 Jun 2022 09:00:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6q2Z-00081C-L3; Thu, 30 Jun 2022 09:00:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6q2Z-0002fa-Kd; Thu, 30 Jun 2022 09:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NTzWJQhx61NzQdj/lKPvGgIS3r6zHg1AlSGLcrZItDU=; b=12q+mjqWmKIT0UsF1XHvvJpeuQ
	MH6XS7FyWMbAwZ8J+SWDpbwqURDwZTdICcxtr10Sy4pde40n4wHfafcMBT0andRlvdo4ObEZQMp3K
	bNYXtOXsFaWDe39UZo4ZOdG1tucA88a0gMde1xeg2eVu8Eht7Ckujl4aYfCEPKkw3dGk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171419-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 171419: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=170eccd03c3a1a17eaa3b1b2afd830a21b0f61db
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 09:00:47 +0000

flight 171419 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171419/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              170eccd03c3a1a17eaa3b1b2afd830a21b0f61db
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  720 days
Failing since        151818  2020-07-11 04:18:52 Z  719 days  701 attempts
Testing same since   171419  2022-06-30 04:19:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Amneesh Singh <natto@weirdnatto.in>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brad Laue <brad@brad-x.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Kirbach <christian.kirbach@gmail.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Christophe Fergeau <cfergeau@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Didik Supriadi <didiksupriadi41@gmail.com>
  dinglimin <dinglimin@cmss.chinamobile.com>
  Divya Garg <divya.garg@nutanix.com>
  Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Emilio Herrera <ehespinosa57@gmail.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fabiano Fidêncio <fabiano@fidencio.org>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Florian Schmidt <flosch@nutanix.com>
  Franck Ridel <fridel@protonmail.com>
  Gavi Teitz <gavi@nvidia.com>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Haonan Wang <hnwanga1@gmail.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Ian Wienand <iwienand@redhat.com>
  Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
  Ivan Teterevkov <ivan.teterevkov@nutanix.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  jason lee <ppark5237@gmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jing Qi <jinqi@redhat.com>
  Jinsheng Zhang <zhangjl02@inspur.com>
  Jiri Denemark <jdenemar@redhat.com>
  Joachim Falk <joachim.falk@gmx.de>
  John Ferlan <jferlan@redhat.com>
  John Levon <john.levon@nutanix.com>
  John Levon <levon@movementarian.org>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Justin Gatzen <justin.gatzen@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kim InSoo <simmon@nplob.com>
  Koichi Murase <myoga.murase@gmail.com>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Lee Yarwood <lyarwood@redhat.com>
  Lei Yang <yanglei209@huawei.com>
  Lena Voytek <lena.voytek@canonical.com>
  Liang Yan <lyan@digitalocean.com>
  Liang Yan <lyan@digtalocean.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Liu Yiding <liuyd.fnst@fujitsu.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  luzhipeng <luzhipeng@cestc.cn>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Mielke <mark.mielke@gmail.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Martin Pitt <mpitt@debian.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matej Cepl <mcepl@cepl.eu>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Goodhart <c@chromakode.com>
  Maxim Nestratov <mnestratov@virtuozzo.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Moteen Shah <codeguy.moteen@gmail.com>
  Moteen Shah <moteenshah.02@gmail.com>
  Muha Aliss <muhaaliss@gmail.com>
  Nathan <nathan95@live.it>
  Neal Gompa <ngompa13@gmail.com>
  Nick Chevsky <nchevsky@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nicolas Lécureuil <neoclust@mageia.org>
  Nicolas Lécureuil <nicolas.lecureuil@siveo.net>
  Nikolay Shirokovskiy <nikolay.shirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@openvz.org>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Niteesh Dubey <niteesh@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Or Ozeri <oro@il.ibm.com>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peng Liang <tcx4c70@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Praveen K Paladugu <prapal@linux.microsoft.com>
  Prerna Saxena <prerna.saxena@nutanix.com>
  Richard W.M. Jones <rjones@redhat.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Robin Lee <cheeselee@fedoraproject.org>
  Rohit Kumar <rohit.kumar3@nutanix.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Davis <scott.davis@starlab.io>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Sergey A <sw@atrus.ru>
  Sergey A. <sw@atrus.ru>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  shenjiatong <yshxxsjt715@gmail.com>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Chopin <chopin.simon@gmail.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Simon Rowe <simon.rowe@nutanix.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tom Wieczorek <tom@bibbu.net>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tu Qiang <tu.qiang35@zte.com.cn>
  Tuguoyi <tu.guoyi@h3c.com>
  tuqiang <tu.qiang35@zte.com.cn>
  Vasiliy Ulyanov <vulyanov@suse.de>
  Victor Toso <victortoso@redhat.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Vinayak Kale <vkale@nvidia.com>
  Vineeth Pillai <viremana@linux.microsoft.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu@kernel.org>
  Wei-Chen Chen <weicche@microsoft.com>
  William Douglas <william.douglas@intel.com>
  Xu Chao <xu.chao6@zte.com.cn>
  Yalan Zhang <yalzhang@redhat.com>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Fei <yangfei85@huawei.com>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yasuhiko Kamata <belphegor@belbel.or.jp>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
  zhangjl02 <zhangjl02@inspur.com>
  zhanglei <zhanglei@smartx.com>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>
  Дамјан Георгиевски <gdamjan@gmail.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 114302 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 09:08:41 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 09:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358391.587581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6q9v-0004gY-So; Thu, 30 Jun 2022 09:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358391.587581; Thu, 30 Jun 2022 09:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6q9v-0004gR-Pz; Thu, 30 Jun 2022 09:08:23 +0000
Received: by outflank-mailman (input) for mailman id 358391;
 Thu, 30 Jun 2022 09:07:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=89Zk=XF=hotmail.it=c.cesarano@srs-se1.protection.inumbo.net>)
 id 1o6q8t-0004RI-57
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 09:07:19 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-oln040092065079.outbound.protection.outlook.com [40.92.65.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0a04e59c-f854-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 11:07:17 +0200 (CEST)
Received: from AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:142::21)
 by AM6PR10MB2248.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:45::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 09:07:16 +0000
Received: from AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::110a:162b:f3a6:2e47]) by AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::110a:162b:f3a6:2e47%5]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 09:07:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a04e59c-f854-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DcC73G1hxEdQtbdR3jOsYUbFmNIy4JhC+rMk/eFUla1+wLMNyaBD+3YYTyPIEeoU1CrMKbiR8MKvgx96VfrwA/+IAyYtNJuiAskrMprXqZDkgnAX/UtHZ7zYLqZFlH9addZ6iytuyLgZdB4+I/f9byuDym1nEquqm1pty65xaizufQgqfJE0tdu0gZaOjDaAJmDCW2VgBRy9LI0UozCtVSk4Dn7+AsQQeXC0OPtuPzaDm5YdEsmAOoyjVBmOrZGWXstV+pawDd2/kPpIm8G++hnW2xUwLkvHSBEOlkyq4Js+sAr6eYliEBNw6gp+dvVKlDkQSBGWikwEmCRD/G8D5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uLRwKeMT9BR/oaega17Tb7kpao5jwz099J/KmMl1ndM=;
 b=MVrQ+mfCwIx+V+t4MQs7stzxZhWrcQGE3dGOeIBeEKrvCt3BjCzVWp1g+5Ukkta9QaJTMQTYXNcEPQWABJqmiOw1aTjgZn3KAkRv6BUI9MZGaVvkVd7E9E6Mts+yx8dsPmcsK/+o7oA267yTXT6ekmWER0WCdHyfPLkAnueHvvpQQMtyYZ3jAFk4RVPu8ce0AazXcinYmA+LhN6GjleNk5zEUiWwy0jm/LLchxGB9EyrHadK9RKYEE+Zh89qZ11xR0imgHzJTXWXIhQq014q3qo17hG+fVvTeq3bZXJ8aE2L5j30QTDNXiri8yTCYLbnhi05Emlgfz+xwq1cy7NyIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
From: Carmine Cesarano <c.cesarano@hotmail.it>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: R: R: R: Crash when using xencov
Thread-Topic: R: R: Crash when using xencov
Thread-Index:
 AQHYihZvzMoBU/8d5kCvfMcwSUXjJK1jJQIAgAALHhCAAAKEgIAAD46vgAAFV6WAAXJk3oAB9oiAgAD5uEk=
Date: Thu, 30 Jun 2022 09:07:15 +0000
Message-ID:
 <AM7PR10MB355902159111F3CC8C6CDB94F8BA9@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
References:
 <AM7PR10MB355942D32F58FF02379C398CF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <87d0667b-2b85-f006-ea3c-6f557b2bdc8e@xen.org>
 <AM7PR10MB355972A68A222CB9FBAC43D4F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <daa12b90-da87-d463-24c4-a13fba330f1d@xen.org>
 <AM7PR10MB35593AA7F46B4D4A0BBD9841F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559BB8CB733902773B1AD6AF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559A1984F6B53CEFB4FECC7F8B89@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <ef8b540c-d2c2-c999-d3fe-08fc88665ad9@xen.org>
In-Reply-To: <ef8b540c-d2c2-c999-d3fe-08fc88665ad9@xen.org>
Accept-Language: it-IT, en-US
Content-Language: it-IT
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-tmn: [iTK+epqU2QfAYA6Mkoi0h3n+vH7dbsgj]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 9f94accf-2d70-472a-bd18-08da5a77ed87
x-ms-traffictypediagnostic: AM6PR10MB2248:EE_
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 beOG8WPenjwBmEkBB9m1sKnDLbLfjuWWa3lKJvdRdPjDjweFomYQIKdcAYmP/J23tEqNgz1K2hAtTdc8r8uCjdOZYc3K2VoogMZe75oz55lRsm5OwQS27dtQ8virnHbmplS3hGbMYw/u5wOZYJ215lksyd3UlxDnQ5B/vGUkDA95fhoPCKGJkLEtfYS1puxpW2528N1MVACEwN60VGmXZZLyuK9YVwlACz3dJ/+vmQFF5MOa0tVmR2IVamHJkYNXGTXnG95GLASCPQxCipliFCCbPE7ewoegXwO47DQO6YEHiW0yXcLwgky17DjhqedV5JkAM2q9+5JJlTsyB3lBcj4bF0UbdDtrOgVdks+8mzVYjOJW+T04GigKkdiaYhs/1QH9KqpDkpjyYT1ydxwj4juaB9Wj6mTaxt76HxUEskd1rndqQv+NYPt+W1EfJ+qURDxM+Djrg2Y4DHGN+zbQ9D2urUXt3P211487sseb8XzRPaiOrNjbw3JSmVeRKlw6MSwaygN32tKX69z1h49RfIXpcqOE3hiUageigmJc9TTUFVwO8G7LqlSdNjYqoNFIiNrzVj2aseP5MTI2eIy+vuA07W8nPMT8jfIMwT2Ac8Mwipg0Is73JbdsIVAyyproKs2RTfdjywfhBGgqaW3OVQ==
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?Windows-1252?Q?qIAuYqXM7uAlErqRDnE1dG0lbFTgHO3RPEZf27Y3M0eqYKliiQOlHkG7?=
 =?Windows-1252?Q?QRxcMHXQjZ9W24JGbYbGUdrMK+ijKdxUAYN5u6cMpjp7yzxr1TXt5kwN?=
 =?Windows-1252?Q?x6owHsUlOe1RHa4hPQ9TRXmSlu/qa98TmWo+X3msRcOjC/rLCFGwWzpM?=
 =?Windows-1252?Q?a3tI05M/aJD9/uKGvNgIbF4JhYpUvCctC3I9HoHOhO1HDjUJe4uGHdHn?=
 =?Windows-1252?Q?CmpoWf9TD6+kk3+3A/GPteSDsqb+m1yQ0VyUomXVerHl1hajH2OFr2cV?=
 =?Windows-1252?Q?j1qMq26+8KtBG191hQljwOSeqRRIfte8V1s4Z/6R1cxQVbjy1Xoi9zyq?=
 =?Windows-1252?Q?ivcYjqjTA3XHIkNtg431+/C1ZfxcJMsAh20udxnU1jUL9Jbrn5sFgZnO?=
 =?Windows-1252?Q?FN42GZMPJXZBo5H9MBGztefjAiqXd+hnOEIvvvRfBJvruky4x/tzYZRc?=
 =?Windows-1252?Q?iGl5mCIw8Yg47ACh1LXosxMMnQQ7MsCmHuQuuMZ5YCmZlaWPemd0vqXn?=
 =?Windows-1252?Q?ehOGOyW/nn2NIJKif8BavQxXB+OoHlh/fm6wZpddMgZF5USL/NE4LAeH?=
 =?Windows-1252?Q?1MSQXfQiIiV17FhvO0WqUjrbS7rBFJ4TH8c0ZurV1duNSbyv4eSsuXB2?=
 =?Windows-1252?Q?oUsSCF7Ol0bpBtTd6jI9aFlEwaf5lenB/5WXN1oDxlynzhQht4Lj6OTg?=
 =?Windows-1252?Q?uRq2hlBKQz5dP33LmQWKCortStk7ByGvmmrPsEnJojuhDqWsrVuG5VkQ?=
 =?Windows-1252?Q?85QMyEKSxHhGSdLPOQYUHpyse0/bKqr4v8m2A1HBXReVeqJ/pZaQyCDE?=
 =?Windows-1252?Q?5sBMvChPL8wRKUIB5Kfiy40qSy9bAdUvrBETZeCt3rxEZjME/hzQmvUT?=
 =?Windows-1252?Q?W3PjwqNqjKV617CbqclWI51fT34xG0Aw634mSjeiSEyhLD/QE/Sz8fEN?=
 =?Windows-1252?Q?dMEXDU14Ht+vnNxkOm7I8oWKh3pTdzwHrd7MMgjm5GzzrWCJ0x32yed/?=
 =?Windows-1252?Q?odFZK4WhtM0hhxkbF1BpWsBnVFFTaBSlv+0BIFGr1LNtpBchz1FdAOPw?=
 =?Windows-1252?Q?ZEVcjgIMrSsHT5N2wWTpqipqa8J+AkJ3Eyxu4Y5N/r3YgZtfBnTq4Uid?=
 =?Windows-1252?Q?ybm7w2AZP64nNTNtmmFKd4FrZR0sQrtNaIUGpntVWFXsNqZwB3Rp2fZ4?=
 =?Windows-1252?Q?htp1gAIBcQXKgA2oPuxA7wfJ1LLvJAMABYYqiB2QssGsj4V4qKBCtojA?=
 =?Windows-1252?Q?4fF8nKQ9GUGpDKbRK4z1cX8bzRx8HtwSJBQG54J2H5aRHqvy2SAQ+Sq0?=
 =?Windows-1252?Q?DZ6bud7vL1NL4tM7N9CG0iRToyBUNswIVBbccI1630ZYXb33t8P03d+V?=
 =?Windows-1252?Q?0amjVJgg+hqP1yUrN6p5rm3usp168g6vyUU=3D?=
Content-Type: multipart/alternative;
	boundary="_000_AM7PR10MB355902159111F3CC8C6CDB94F8BA9AM7PR10MB3559EURP_"
MIME-Version: 1.0
X-OriginatorOrg: sct-15-20-4755-11-msonline-outlook-6b909.templateTenant
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: 9f94accf-2d70-472a-bd18-08da5a77ed87
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2022 09:07:15.8425
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR10MB2248

--_000_AM7PR10MB355902159111F3CC8C6CDB94F8BA9AM7PR10MB3559EURP_
Content-Type: text/plain; charset="Windows-1252"
Content-Transfer-Encoding: quoted-printable

Hello,
Sorry for the images on the ML.
If I wanted to change my setup instead, is there a tested and working version
of gcc for the xencov features on Xen stable-4.16?
(The documentation says GCC 3.4 or later.)

Cheers,

Carmine Cesarano

From: Julien Grall<mailto:julien@xen.org>
Sent: Wednesday, 29 June 2022 20:02
To: Carmine Cesarano<mailto:c.cesarano@hotmail.it>
Cc: xen-devel@lists.xenproject.org<mailto:xen-devel@lists.xenproject.org>; Andrew Cooper<mailto:andrew.cooper3@citrix.com>
Subject: Re: R: R: Crash when using xencov

(moving the discussion to xen-devel)

On 28/06/2022 16:32, Carmine Cesarano wrote:
> Hi,

Hello,

Please refrain from top-posting and/or posting images on the ML. If you need
to share an image, then please upload it somewhere else.

> I made two further attempts, first by compiling xen and xen-tools with gcc 10 and second with gcc 7, getting the same problem.
>
> By running “xencov reset”, with both compilers, the line of code associated with the crash is:

Discussing this with Andrew Cooper on IRC, it looks like the problem is
that Xen and GCC disagree on the format. There are newer formats that
Xen doesn't understand.

If you are interested in supporting GCOV on your setup, then I would
suggest looking at the documentation and/or at what Linux is doing
for newer compilers.

>
>    *   /xen/xen/common/coverage/gcc_4_7.c:123
> By running “xencov read”, I get two different behaviors with the two compilers:
>
>    *   /xen/xen/common/coverage/gcc_4_7.c:165   [GCC 11]
>    *   /xen/xen/common/coverage/gcov.c:131          [GCC 7]
>
> Attached are the logs captured with a serial port.
>
> Cheers,
>
> Carmine Cesarano
> From: Julien Grall<mailto:julien@xen.org>
> Sent: Monday, 27 June 2022 14:42
> To: Carmine Cesarano<mailto:c.cesarano@hotmail.it>
> Subject: Re: R: Crash when using xencov
>
> Hello,
>
> You seem to have removed xen-users from the CC list. Please keep it in
> CC unless the discussion needs to be private.
>
> Also, please avoid top-posting.
>
> On 27/06/2022 13:36, Carmine Cesarano wrote:
>> Yes, I mean stable-4.16. Below are the logs after running "xencov reset". The situation for "xencov read" is similar.
>>
>> (XEN) ----[ Xen-4.16.2-pre  x86_64  debug=y gcov=y  Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d040257bd2>] gcov_info_reset+0x87/0xa9
>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
>> (XEN) rax: 0000000000000000   rbx: ffff82d04056bdc0   rcx: 00000000000c000b
>> (XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff82d04056bdc0
>> (XEN) rbp: ffff83023a7e7cb0   rsp: ffff83023a7e7c88   r8:  7fffffffffffffff
>> (XEN) r9:  deadbeefdeadf00d   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000001   r13: ffff82d04056be28   r14: 0000000000000000
>> (XEN) r15: ffff82d04056bdc0   cr0: 0000000080050033   cr4: 0000000000172620
>> (XEN) cr3: 000000017ea0b000   cr2: 0000000000000000
>> (XEN) fsb: 00007fc0fb0ca200   gsb: ffff88807b400000   gss: 0000000000000000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen code around <ffff82d040257bd2> (gcov_info_reset+0x87/0xa9):
>> (XEN)  1d 44 89 f0 49 8b 57 70 <4c> 8b 24 c2 49 83 c4 18 48 83 05 a6 81 4c 00 01
>> (XEN) Xen stack trace from rsp=ffff83023a7e7c88:
>> (XEN)    ffff82d04056bdc0 0000000000000001 ffff82d04070f180 0000000000000001
>> (XEN)    0000000000000000 ffff83023a7e7cc8 ffff82d040257a6a ffff83023a7e7db0
>> (XEN)    ffff83023a7e7ce8 ffff82d040257547 ffff83023a7e7fff ffff83023a7e7fff
>> (XEN)    ffff83023a7e7e58 ffff82d040255d5f ffff83023a7e7d68 ffff82d0403b5e8b
>> (XEN)    000000000017d5b2 0000000000000000 ffff83023a6b5000 0000000000000000
>> (XEN)    00007fc0fb348010 800000017ea0e127 0000000000000202 ffff82d040399fd8
>> (XEN)    0000000000005a40 ffff83023a7e7d68 0000000000000206 ffff82e002fab640
>> (XEN)    ffff83023a7e7e58 ffff82d0403bb29d ffff83023a69f000 000000003a7e7fff
>> (XEN)    000000017ea0f067 0000000000000000 000000000017d5b2 000000000017d5b2
>> (XEN)    0000001400000014 0000000000000002 ffffffffffffffff 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 ffff83023a7e7ef8 0000000000000001 ffff83023a69f000
>> (XEN)    deadbeefdeadf00d ffff82d04025579d ffff83023a7e7ee8 ffff82d040387f62
>> (XEN)    00007fc0fb348010 deadbeefdeadf00d deadbeefdeadf00d deadbeefdeadf00d
>> (XEN)    deadbeefdeadf00d ffff83023a7e7fff ffff82d0403b2c99 ffff83023a7e7eb8
>> (XEN)    ffff82d04038214c ffff83023a69f000 ffff83023a7e7ed8 ffff83023a69f000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    00007cfdc58180e7 ffff82d0404392ae 0000000000000000 ffff88800f484c00
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d040257bd2>] R gcov_info_reset+0x87/0xa9
>
> Thanks! There are multiple versions of gcov_info_reset() in the tree.
> The one used will depend on the compiler you are using.
>
> Can you use addr2line (or gdb) to find out the file and line of code
> associated with the crash?
>
> For addr2line you could do:
>
>     addr2line -e xen-syms 0xffff82d040257bd2

Cheers,

--
Julien Grall


--_000_AM7PR10MB355902159111F3CC8C6CDB94F8BA9AM7PR10MB3559EURP_--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 09:24:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 09:24:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358400.587591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6qPf-0007GF-Cp; Thu, 30 Jun 2022 09:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358400.587591; Thu, 30 Jun 2022 09:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6qPf-0007G8-A6; Thu, 30 Jun 2022 09:24:39 +0000
Received: by outflank-mailman (input) for mailman id 358400;
 Thu, 30 Jun 2022 09:24:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6qPd-0007G0-ST
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 09:24:37 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 728becfc-f856-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 11:24:33 +0200 (CEST)
Received: from mail-dm6nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 05:24:32 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CH2PR03MB5367.namprd03.prod.outlook.com (2603:10b6:610:9c::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.15; Thu, 30 Jun
 2022 09:24:30 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 09:24:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 728becfc-f856-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656581076;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=0Bls/tYCdxwkuazXrMU28N6vQGXNDC9LIy8Re5WjAs4=;
  b=aDK3dcPVdcTPwiuRQ2J3pIlFmwSfvBx2Ttd4hDmor+wGGqHNqkEA+gt8
   u9Giv7TuGGiRVSE7MRyFWNwQ4QK3QPL8+7dR4GKYLCZCl2MftpBRzJ7JD
   vk0zESIel1F087nZUTeDX2EBQYWkoSVi3V8U2gO+q3g7gOPan3lm9hRDs
   Q=;
X-IronPort-RemoteIP: 104.47.58.109
X-IronPort-MID: 75197129
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:2NmS0ag48lP2E24C/R1A0dzTX161RBAKZh0ujC45NGQN5FlHY01je
 htvWWvTO6yKN2ajKYwlYIzko0MFsZ/cyoNqTANr+C09Fy4b9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CU6jefSLlbFILas1hpZHGeIcw98z0M58wIFqtQw24LhXVjV4
 YqaT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YRh0PJPwkvwFaChFOAs9JPVg6Jn/PUHq5KR/z2WeG5ft69NHKRhueKE9pKNwC2wI8
 uEEIjcQaBzFn/ix3L+wVuhrgIIkMdXvO4Qc/HpnyFk1D95/GcyFH/qMuI8ehW9h7ixNNa+2i
 84xcz1gYQ6GexRSElwWFIg/jKGjgXyXnzhw9w/M9PVuuTm7IApZiqO9EfHHa82zWOJS3QWRr
 UPsoXvjK0RPXDCY4X/fmp62vcfDhTj+WZ4SPLSg++R2nUaIwWgOFBwRU0D9qv684mauVtQaJ
 0EK9y4Gqakp6FftXtT7Rwe/onOPolgbQdU4O9M97AaB26/F+TGzD2IPTiNCQNE+vcpwTjsvv
 neWm/v5CDopt6eaIVqG/bCIsXW+MDYUNkcZeSYeSQIPpdjkyKkxhxTDVMd+E4a6i9T0HXf7x
 DXihCM+nbQIkckT16ihu1vDiiivjoPVRxQx7w+RX2XNxgdkb4fjaYWu4lXf6etoJZycCFKGu
 RAsmceE5eQKJZiInT6KRqMGG7TBz+yMMCDYx0VuGZYh3z23/jioeoU4yCplOE5jP8IAeDnoS
 EzeowVc4NlUJnTCRa1qZ4O8Dew6wK6mEs7qPtjeY8BSeJF3eEmC9Tt3eE+L92n3lQ4nlqRXE
 ZWRfNuoDH0aIb961zfwTOAYuZcnyCkxymLUQZHT1Am83PyVY3v9YbsKPFaBdOkR8LKPoAKT9
 c1WccSN1X1ivPbWZyDW9csfKA4MJH1iXZTu8ZUPJ6iEPxZsH3wnB7nJ27Q9dod5nqNT0ODV4
 nW6XUwew1367ZHaFTi3hrlYQOuHdf5CQbgTZETA4X7AN6AfXLuS
IronPort-HdrOrdr: A9a23:ISQf/q8HXXsBi+ZlW3Juk+FDdb1zdoMgy1knxilNoENuH/Bwxv
 rFoB1E73TJYVYqN03IV+rwXZVoZUmsjaKdhrNhRotKPTOWwVdASbsP0WKM+V3d8kHFh41gPO
 JbAtJD4b7LfCdHZKTBkW6F+r8bqbHokZxAx92uqUuFJTsaF52IhD0JbjpzfHcGJjWvUvECZe
 ehD4d81kydUEVSSv7+KmgOXuDFqdGOvJX6YSQeDxpizAWVlzun5JPzDhDdh34lInty6IZn1V
 KAvx3y562lvf3+4hjA11XL55ATvNf60NNMCOGFl8BQADTxjQSDYphnRtS5zXkIidDqzGxvvM
 jHoh8mMcg2w3TNflutqR+o4AXk2CZG0Q6W9XaoxV/Y5eDpTjMzDMRMwahDdAHC1kYmtNZglI
 pWwmOwrfNsfF/9tRW4w+KNewBhl0Kyr3Znu/UUlWZjXYwXb6IUhZAD/XlSDIwLEEvBmc0a+d
 FVfY/hDcttABKnhyizhBgu/DXsZAV4Iv6+eDlMhiTPuAIm30yQzCMjtb4idzk7hdAAoqJ/lp
 X525RT5c9zp/AtHNJA7cc6ML+K4z/2MGXxGVPXB2jbP4c6HF+Ig6LLwdwOlZKXkdozvdAPpK
 g=
X-IronPort-AV: E=Sophos;i="5.92,233,1650945600"; 
   d="scan'208";a="75197129"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aLhNguCvSlg4xYLP4nd7kJxirzUf6qFhPFm6ktvmRWCq65yx6vR0d0ZWffZHDpP7CegLeAklP5wc+0tEsw8N3RQhpLd5wWAOhaU535QqB0PifZZjua+ZNiRLmV5Q+vqQ14yVuKW+qGXt5IYOYy2SOUOwHUIMnSfTSOnUAC0dVDp1VP4YlmhvckxxSThHPfhJfKoa2b/lNNOLLW0ton/0V834mgjsbzIfGnjNYCnE+CaaDlZ+zExo1w5M/pz5Si63k4xpUqjiVWlAgJzrxtPhqHQN8twwJ0SeM4XAmxSvRuNqbeN+ExYMEuGhOHFXr8nwvgBph/Mm3wB+4PJOmyqMeQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GbJ8Qf4Cz0NR+IFV7ooSMyOSpWKGmIw2FnsULdrGdfY=;
 b=i42+m2FJ9kFrRJCEwo5nLDtZ7E11RPFahejk+IpomR0tPChUa5baO1mpglcKQlpCnmqhqnR956h9yKlYx5CoHEtghuG5RAGgvW/3BRLnzmJxTBWCTMTb1bO71lHOrnLwAwlxuvSvv1zqaiVnB9ukmRmdjCUxyJPc9RFQXhfnUoYh32FUHBZW4ZfcWSg/dtMgab5mJn0C9OHpkD/NrbufIINNawwpsvNnpnMBG+K2okVP8ddb1Gh78fQ8cG5PVkPdY6PhhZXOfQJ3mXABe1wJJjsSp5/HEdQ/3QuglAR5Lnxk5phZ23JPGUevkc9Ikd2eW5kHc1NSD1jLsY+dnrMNqA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GbJ8Qf4Cz0NR+IFV7ooSMyOSpWKGmIw2FnsULdrGdfY=;
 b=YSY7DcczzEKwMcGFpOr9clx3M7gX+gAR3GyxYjX/Vv69h4hna4Hnzh8TyU7Jij78tFXHmOy71P130B/W3jY6Hqjz5XpIKunAAebS6hAjVB5gffDlyC8a8yJWNtsnvshX8PGmlvd/i65cct5pKAaGO5jpOcLS+XNzhnHFI8zDY8o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 30 Jun 2022 11:24:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: xen-devel@lists.xenproject.org,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>, scott.davis@starlab.io, jandryuk@gmail.com,
	christopher.clark@starlab.io, Luca Fancellu <luca.fancellu@arm.com>,
	Julien Grall <jgrall@amazon.com>, Rahul Singh <rahul.singh@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH v9 1/3] xsm: create idle domain privileged and demote
 after setup
Message-ID: <Yr1ryXy/mMi6tJzY@Air-de-Roger>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-2-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220630022110.31555-2-dpsmith@apertussolutions.com>
X-ClientProxiedBy: LO4P123CA0152.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3b79d0b1-ec70-451c-7f65-08da5a7a562b
X-MS-TrafficTypeDiagnostic: CH2PR03MB5367:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oTKT7/ueMNraWJ1qD+1VwsOqe5FKxqJbqejaPZ6/8FDQWzNt0yfRd5MBUYK4wdYyz32WDQmlmM4tm7OX7ZbflKUUrwX6imGljByAlpFqzpkV3F7eJCuEYHGjp7U1QOR09TaP2mq/E9QDiDMjAmHyjM3irAfE9E1c9B2t+utIp9Zfl1w+3H+LH1vlqo2KWji6COuqdrDLIN4LwrLHFeA7MeygoHyp1oUVkpmVGphF2D6ACX+tiffwTYwdLQwyqsgAifpT3qvR21uIVf5UPS2jzbE2DpSYERXfqaP0uq9tpYeCXY3UYeXFIFJBOCkk3yX0ITKyYXB1y9lHvqSqCJ8gfJSsXWm+UmIR1Px5NpEg4m4wUl/7A6w1zgHYakNfFPLnv7Rh8x7Zcxvyz9bnNK1OgHAnHZ/T4nxGZggGHwm6g135eb0DAxm6eu78pZuQ2khnX01JUC1cBV/oTSeCjbqqPbXhP9yfu+yC3P9bXa5Kt3K7uTGCV2awNP8dTxnrhB0v5pvNgmNnZrO4PyJtm23Ug+DZtiEIGV5Y/9UY8Ip/BRTHl65kVkhJ1a5hOZ8/AUHsGAwEsuuewo03aPCl1bxOUGZFNlaJbzH328uGdZy6C4uCpx+J31TmriPmZOfLa0cvABGVmoX8YbnoC1h0S4TQUkdx4kUq+HnXOavxWhTWyQ9+2YD8PXlq/n6uHGZVQ9QzE5LFPscghPlTeFHB4CkKkRHIdGLeE5qnFG5P+Gx1V3TmlgHh/ErrE+3yBT+V0/O0za1Ldrgs/VT+xQNAR8hr5g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(7916004)(376002)(136003)(39860400002)(366004)(346002)(396003)(66556008)(85182001)(6916009)(54906003)(83380400001)(66476007)(316002)(26005)(8676002)(478600001)(7416002)(41300700001)(38100700002)(5660300002)(4326008)(33716001)(6666004)(6512007)(66946007)(2906002)(9686003)(6486002)(86362001)(8936002)(186003)(82960400001)(6506007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?K0xNNXJZU0dwOWNTV0Z4Y3M4eHZFUVQybmtkZmFqZ3E5RzdsZ1lTQ3BlY0Ez?=
 =?utf-8?B?RC95V0ZRK0p0T3NyMWlhMDdJbExsMEkrUSs3M1YvUktNWUh3ejVrRkhBMHVj?=
 =?utf-8?B?VEZOWm9sV2hJQzlYbi9NMDhBTTMwVFQ2S29hVW9RWDR2TmJoL1A4MDQwOUE1?=
 =?utf-8?B?WW13K2IxK0hlU3ZBVFJEMUNzeCtUbi9qNnBFTll6eWlrRVhZbURtZ1k4d2tD?=
 =?utf-8?B?Z0YwaElLTEVDOHJVWUVWNFRvUDFKYXozL1RiZmRJYjQvcTUyYmtHMXFOQXgz?=
 =?utf-8?B?YUp2T3ByRlpUUkRCYnJoYW9McEVJQ29OOFpzeFlEd2lpRld0dWgxSkh1bExB?=
 =?utf-8?B?NnBhNDNZYTVIbWRZc3ZpNXlibWlFMm0wek1MN2k5WUVBbUV0TW5KSEQwOXR1?=
 =?utf-8?B?d3lmdE1IMmJib3dXa04za1FHcU9IVUVvWkd4Wms0bkVBdTQwbzBGOXREcm5N?=
 =?utf-8?B?NDdNV01HTjJYVmgxekFFaHJyTVNod0JSd0xnVjJlRCtuWlhlZ0FNVVZCSURt?=
 =?utf-8?B?T1NTN2NHVk42VHErWWtyVGV4UHpaSEh3OVY5VjFzRFRibGNGWi9keDlVaWRF?=
 =?utf-8?B?MGhwOW5ocEpEWER5UGc2Ym5zaVp3YU50ZG1VdjNWUTJTSW84RkRISEtyZFds?=
 =?utf-8?B?S2UxNTVhclBEYkFVRG9PZ2IxM0JvcHlJNzQwR0E0TWdNOEYvb1hkSUV4cTVX?=
 =?utf-8?B?WHZVVmRSWGxPQ3pleGpYbjNNVjluZ1k5UlJvSHJ3MU5zbnh2anVvZHZBaEx3?=
 =?utf-8?B?SWFTWTZUV2tBc0ZlVXBTR25zNjAwWFdid2x5UkRBNW80YVoxaXVIc3pOWkx1?=
 =?utf-8?B?Mk5wL3BrdTZ1eFRJdHcvZlVKVUFTcUdPdEx2a252MUgyVzAwc3pMNHg4czgz?=
 =?utf-8?B?S1VrdVNSU2xJWDRGVVkrUlZ2NEtRdStuMHNiNjd0TUdaOUdaZkg2clJSUkE2?=
 =?utf-8?B?UEJtMUhNaG54R3psdVdTbmJPZW1iVnlSZVdUVGoyWExqNjJzRVZXMG5QT3pO?=
 =?utf-8?B?blhKV3J5cllNVDErZjNnMkRBODRRL2FIa0pBUk1uVVY5dnZ3Q3p3ZVRZczBl?=
 =?utf-8?B?TDN1Rk44TDk4SURpZlZlb01WbHNxb0JkeU15TjEwRGFIK2pmNjlQZGxvSmFO?=
 =?utf-8?B?czZQdzl1Yk1HYUcxbXRtVFZYYjRBZ0p0ZHp1M1l3TzVJYS9ndWR2aXVaRUta?=
 =?utf-8?B?YXpaWm5US0xyU3BIWWVIOXlkNVM3cy9XaEpoSVB4VTFmbm9TekZ3L2tXbjg5?=
 =?utf-8?B?b1VQdjFQUEh0bmYwWklQdGdKcGlYUitPSjFNZ1hLTEZrejhmS2pZVlc4VzZq?=
 =?utf-8?B?ZjhsV3BpQlR3b09Ub0doakxsLzJnY0FveVhwQXphbDhMNy9uQTYzSGxOR0o5?=
 =?utf-8?B?UHNraVg1QjNyTkFON3pseG12RmxPRmJFSnVleUdRYmlxbUZ1ZGhzMTRTT1Rr?=
 =?utf-8?B?cGJyU0JmUEQzMUtDa25jRDQzL2RvQi96cGwxY25XMURvTkRpcW1UT3Vjb2o0?=
 =?utf-8?B?MWdYWm1XTHlSRVRJRnpad1kwbTFiQStlZng3TmJCSnE0VGNZMzgyYk1HZTRC?=
 =?utf-8?B?TEIyZ2F3L2EvSThYaGloamRVZGJoSy8yUWkwNVhDR0dYTVpJMFl3emNETmhV?=
 =?utf-8?B?aElNZTI2WHhGMllPQjJaQmk1M0NqWDU2S050TXovS3FwYm5Remx0b3JDY3Fq?=
 =?utf-8?B?d1lkSFpja0lPTGJmQ2N6KzArak5JWDd0UnFtNnRxT1JHZDlKOWUzNDlGM09E?=
 =?utf-8?B?WUQzY2xBYkdPcXpzM3BGeEJkYW96d0krZGFhTmlSZjVNMk5iWGloSjJqSFlk?=
 =?utf-8?B?RWsrTklzaVlOb09KaGlxYmtPanNPUVBPTkQvcG9iTzZrVFNoQkR6NmRDNHkx?=
 =?utf-8?B?cmw4S2xMVmI1N2k3K1B1dUFRdjJBc1NWQkxHVkNic09oZ3FnalRja2xaMWFM?=
 =?utf-8?B?bUVpWDdxSGNYOVRBUVFmaG1zeURwREpZdE5VdnVzNkhiN0J4Q1lMWDQxNUxm?=
 =?utf-8?B?dHJHWG9ZcG82Mzg4U3BXMTdqZ1FBZE9WKzgyZDFHTWI0d3V0SWM2dUZXRnFw?=
 =?utf-8?B?TFJUQUVKODdGRGhhaFFUQWN6WElQWjc4a0swWUU1NE5VVE5tdFFaOTlZdGxG?=
 =?utf-8?B?SVdpa092QkRLTGcrdS93SWNrc3Q1ZHkwZ0xzOVFrY0RudHVmZGZ3N3RLa2Ri?=
 =?utf-8?B?cHc9PQ==?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3b79d0b1-ec70-451c-7f65-08da5a7a562b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 09:24:30.6578
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FlIdRmDOGbp3QiKdGDEhk9SX3/0F02s8bZRH27pRRoO6AmviMjv97gbp8qCDvqyTV14TxfH93+cHlI8HunmtNQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5367

On Wed, Jun 29, 2022 at 10:21:08PM -0400, Daniel P. Smith wrote:
> There are new capabilities, dom0less and hyperlaunch, that introduce internal
> hypervisor logic which needs to make resource allocation calls protected by
> XSM access checks. These resource allocations are necessary for dom0less and
> hyperlaunch when they are constructing the initial domain(s). This creates an
> issue, as a subset of the hypervisor code is executed under a system domain,
> the idle domain, which is represented by a per-CPU non-privileged struct
> domain. To enable these new capabilities to function correctly but in a
> controlled manner, this commit changes the idle system domain to be created
> as a privileged domain under the default policy and demoted before
> transitioning to running. A new XSM hook, xsm_set_system_active(), is
> introduced to allow each XSM policy type to demote the idle domain
> appropriately for that policy type. In the case of SILO, it inherits the
> default policy's hook for xsm_set_system_active().
> 
> For flask, a stub is added to ensure that the flask policy system will function
> correctly with this patch until flask is extended with support for starting the
> idle domain privileged and properly demoting it on the call to
> xsm_set_system_active().
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Acked-by: Julien Grall <jgrall@amazon.com> # arm
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
> Tested-by: Rahul Singh <rahul.singh@arm.com>

LGTM:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 10:29:55 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 10:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358411.587603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6rQa-0006M9-6z; Thu, 30 Jun 2022 10:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358411.587603; Thu, 30 Jun 2022 10:29:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6rQa-0006M2-3a; Thu, 30 Jun 2022 10:29:40 +0000
Received: by outflank-mailman (input) for mailman id 358411;
 Thu, 30 Jun 2022 10:29:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6rQZ-0006Ls-4G; Thu, 30 Jun 2022 10:29:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6rQZ-0001cm-1o; Thu, 30 Jun 2022 10:29:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6rQY-0002bR-PP; Thu, 30 Jun 2022 10:29:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6rQY-0002RY-Ox; Thu, 30 Jun 2022 10:29:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OilCHsaRdY/Ab6NzMUV6RSFMgTXWll1q/tapaILSY08=; b=K2u8jAgFRdk5yKmuhhA3SUvcjy
	2AtUG6cvcQEbkc2P8VDyLShQG4wWu/jZod4Gmc+imwcbT5jpueEei/THEqyTsISAcXXXaulAGFwgr
	sIG9cijC8G92EhyqM24QC/CU1DaJ1TSOumPApSnsc0KmTXDH5rdtJA+Vy9jfETyU78t8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171422-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 171422: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8d0564deafc90df8531b086a483707cfcfac2b54
X-Osstest-Versions-That:
    ovmf=21e6ef752239c3c840bc31745e14b391bf9c4691
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 10:29:38 +0000

flight 171422 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171422/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8d0564deafc90df8531b086a483707cfcfac2b54
baseline version:
 ovmf                 21e6ef752239c3c840bc31745e14b391bf9c4691

Last test of basis   171418  2022-06-30 04:14:58 Z    0 days
Testing same since   171422  2022-06-30 08:41:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   21e6ef7522..8d0564deaf  8d0564deafc90df8531b086a483707cfcfac2b54 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 11:26:00 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 11:26:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358419.587614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sIy-0004mD-Db; Thu, 30 Jun 2022 11:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358419.587614; Thu, 30 Jun 2022 11:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sIy-0004m6-9Z; Thu, 30 Jun 2022 11:25:52 +0000
Received: by outflank-mailman (input) for mailman id 358419;
 Thu, 30 Jun 2022 11:25:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Dt9=XF=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o6sIw-0004m0-Rv
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 11:25:51 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10076.outbound.protection.outlook.com [40.107.1.76])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64359e21-f867-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 13:25:49 +0200 (CEST)
Received: from AS9PR04CA0164.eurprd04.prod.outlook.com (2603:10a6:20b:530::16)
 by AM4PR0802MB2274.eurprd08.prod.outlook.com (2603:10a6:200:64::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 11:25:39 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:530:cafe::1c) by AS9PR04CA0164.outlook.office365.com
 (2603:10a6:20b:530::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15 via Frontend
 Transport; Thu, 30 Jun 2022 11:25:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5395.14 via Frontend Transport; Thu, 30 Jun 2022 11:25:38 +0000
Received: ("Tessian outbound 3c5325c30453:v121");
 Thu, 30 Jun 2022 11:25:38 +0000
Received: from 656c5815b905.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 765F273E-F4CD-48F0-9FE9-2041FFCB02A0.1; 
 Thu, 30 Jun 2022 11:25:28 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 656c5815b905.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jun 2022 11:25:28 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by DB8PR08MB5049.eurprd08.prod.outlook.com (2603:10a6:10:ee::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 11:25:25 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 11:25:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64359e21-f867-11ec-bdce-3d151da133c5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=fV+RD0mTyidwY0KNyEZ/5gIWwIOp37XXx9L9DhDqjicYAunW0/+7GEJdFZJypjNl6Nfe4z8fPjGefxyVYxgTVTt4GecVdg3CNyhoNOT1Mrz0WjTq90ctdUuMguI7hirr1lp7wLd2K101R8Y4O/TPNudbL6Lg1wsVnBgEmot8tgUNTxVbkf+kLEwzgiYmyFWYVedfeauuM97ep5G071+2z2Fdhrpjk/4Itiuffef5jqi1tMUQ7vgIQP/amnbwHYIyaWAJrdd1+jsV+4KsZ/BBQUU/j3/6l+QJL3vDwL0WVkU83FdXFCQgtYq4qB0Tr0xz/75a3RrofX2a3mBAAUMjbg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r7sulLYxdbe2qy8THT+F2713o+VbmhO9hgGZDbFgaso=;
 b=aaCbPLIQsHT1QV92kZuIcbpjmfIg30fzVp4HNj+4VQCB664ZojASZKxZNs7sBsIKXDiQs6zX++xxeftiXZiNp4tzQcSss0n6eUFQeSBV+r3CY+fVPtdD+25mWTkJtia8hlKhe6P1P0S4nZuPbdNcgO8+nFQha9kdQ5PaGr99SDxEOZSMNtal78UNvCtqhIn9IfzOwmiJA2lAnBPBV4trpC772ez5/nxeRzCQXVdI+VNC7l9DVKm3FJzPSOHBEJsjuyRdzU4By2X3b89QuAUey39uwzxpCA24RnX2Z5JAV0sZK0BC4Rx3KDzn43VxrChCJoozY5NzfvvQIcmUfa+1LQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r7sulLYxdbe2qy8THT+F2713o+VbmhO9hgGZDbFgaso=;
 b=XduvIBaZPd214gUcuMI3TGQUgMPK1BPgdUyltewdm9DWwIPdd8sxpqzsX3PKopAZU2PH6M8nTWmyuXJXdf2WOaIsZDKuP9JbVuhueu3wdmK9QJJyTci+eoHlk5+scKJEOXLjbZJ0N9Pk3iQTeUcxAmm33tDWlFcj8OXThXG3+NA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fOwwvLC1H5MkrTq/CXxjnt/mIKmYpSGE3au8c/1Tr6HkrN8E3wu60RFazVCFQ8ska/8/ekwpYmCviA0R+2Q1ssOW/4SLlMF18MOgAOdqz9b4szGd2T5nE+hoJkmk+aZRlz033kPhvKOLf5OpUXIkcyUUzuKz4HjnGhEuyzXyTMb72UmV/QmvX5+WRraXcBEzBfXv0Kjz4g9i0Wi8wNtsA/EQNbpE00FTylX/aFcMXN4CK1OXZ4F9STrFDGyk2li7ZdsParr/Mlw6KrFRgKAcR3iHgcyoxeNKFlC5/3YR7pJQpDdQS75Khi2B5Yd2jcSbHpeUk3/RMwh2qi3r0Xg1rQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r7sulLYxdbe2qy8THT+F2713o+VbmhO9hgGZDbFgaso=;
 b=jlr0ltmVZ3UccUpYJDnupMqP6xzbMhL+S1n+znaKuVqDKioFFdERHdB7ryEDmpWxbAAKw4+NAa4Rv5UhxWq4dA3AoSI7j9sqQuwpKJH+6I57/hRGf2en2L6CacJdiRSpyXm6xO6DUAOjGJHTTgXIyPTaXrfw+2lgIoafEmumfg6ksv7SK2qkRCU2GMuVORB9rpQb8EdIZg+JePORzrchP6ZYjk5Dezp5zJpBZUTgFBHPdvu2n8x0JUw4yaOcdWDUSk+e1uhvDI/+40k+St1uWLD3KPaxYhYBnrcCzRDFvDsl/kQTMdm0CK3E3CPimCCCApxkbLXQEU/505VXUgZfzg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r7sulLYxdbe2qy8THT+F2713o+VbmhO9hgGZDbFgaso=;
 b=XduvIBaZPd214gUcuMI3TGQUgMPK1BPgdUyltewdm9DWwIPdd8sxpqzsX3PKopAZU2PH6M8nTWmyuXJXdf2WOaIsZDKuP9JbVuhueu3wdmK9QJJyTci+eoHlk5+scKJEOXLjbZJ0N9Pk3iQTeUcxAmm33tDWlFcj8OXThXG3+NA=
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Topic: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Index:
 AQHYfI531SRNO5A+8Ei3iOFoJrK02K1dB3iAgAE0OjCAABXrgIAAECwAgAAEoICAAARlgIAAAO0AgAmAA0A=
Date: Thu, 30 Jun 2022 11:25:25 +0000
Message-ID:
 <PAXPR08MB7420AD8092F0FBA43C359DD19EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
 <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>
 <bad83568-94c6-6d90-308b-ae9965f54754@suse.com>
In-Reply-To: <bad83568-94c6-6d90-308b-ae9965f54754@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0A3B52357D6C6345BCAA9EF61D942FFA.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: e52ff7b6-484d-4795-2382-08da5a8b4267
x-ms-traffictypediagnostic:
	DB8PR08MB5049:EE_|AM5EUR03FT004:EE_|AM4PR0802MB2274:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5049
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4473a348-4aab-453d-30bc-08da5a8b3a50
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 11:25:38.6574
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e52ff7b6-484d-4795-2382-08da5a8b4267
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2274

Hi Julien and Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 24 June 2022 18:09
> To: Julien Grall <julien@xen.org>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Jiamei Xie
> <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org; Wei Chen
> <Wei.Chen@arm.com>
> Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
>
> On 24.06.2022 12:05, Jan Beulich wrote:
> > On 24.06.2022 11:49, Julien Grall wrote:
> >> Hi Jan,
> >>
>
> >>>>>>> --- a/xen/arch/arm/efi/Makefile
> >>>>>>> +++ b/xen/arch/arm/efi/Makefile
> >>>>>>> @@ -1,4 +1,12 @@
> >>>>>>>    include $(srctree)/common/efi/efi-common.mk
> >>>>>>>
> >>>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
> >>>>>>>    obj-y += $(EFIOBJ-y)
> >>>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
> >>>>>>> +else
> >>>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
> >>>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
> >>>>>>> +# will not be cleaned in "make clean".
> >>>>>>> +EFIOBJ-y += stub.o
> >>>>>>> +obj-y += stub.o
> >>>>>>> +endif
> >>>>>>
> >>>>>> This has caused
> >>>>>>
> >>>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the output is
> >>>>>> to use 4-byte wchar_t; use of wchar_t values across objects may fail
> >>>>>>
> >>>>>> for the 32-bit Arm build that I keep doing every once in a while, with
> >>>>>> (if it matters) GNU ld 2.38. I guess you will want to consider building
> >>>>>> all of Xen with -fshort-wchar, or to avoid building stub.c with that
> >>>>>> option.
> >>>>>>
> >>>>>
> >>>>> Thanks for pointing this out. I will try to use -fshort-wchar for Arm32,
> >>>>> if the Arm maintainers agree.
> >>>>
> >>>> Looking at the code we don't seem to build Xen arm64 with -fshort-wchar
> >>>> (aside the EFI files). So it is not entirely clear why we would want to
> >>>> use -fshort-wchar for arm32.
> >>>
> >>> We don't use wchar_t outside of EFI code afaict. Hence to all other code
> >>> it should be benign whether -fshort-wchar is in use. So the suggestion
> >>> to use the flag unilaterally on Arm32 is really just to silence the ld
> >>> warning;
> >>
> >> Ok. This is odd. Why would ld warn on arm32 but not on other arches?
> >
> > Arm32 embeds ABI information in a note section in each object file.
>
> Or a note-like one (just to avoid possible confusion); I think it's
> ".ARM.attributes".
>
> Jan
>
> > The mismatch of the wchar_t part of this information is what causes
> > ld to emit the warning.
> >
> >>> off the top of my head I can't see anything wrong with using
> >>> the option also for Arm64 or even globally. Yet otoh we typically try to
> >>> not make changes for environments where they aren't really needed.
> >>
> >> I agree. If we need a workaround, then my preference would be to not
> >> build stub.c with -fshort-wchar.
> >
> > This would need to be an Arm-special then, as on x86 it needs to be built
> > this way.

I have taken a look into this warning. It occurs because the
"-fshort-wchar" flag causes GCC to generate code that is not binary
compatible with code generated without that flag. The reason the
warning hasn't been triggered on Arm64 is that we don't use any wchar
in Arm64 code. We are also not using wchar in Arm32 code, but Arm32
embeds ABI information in the ".ARM.attributes" section. This section
stores some object file attributes, like the ABI version, CPU
architecture, etc., and the wchar size is recorded there as well, in
"Tag_ABI_PCS_wchar_t". Tag_ABI_PCS_wchar_t is 2 for object files built
with "-fshort-wchar", but 4 for object files built without it. The
Arm32 GNU ld checks this tag and emits the above warning when it finds
object files with different Tag_ABI_PCS_wchar_t values.

As gnu-efi-3.0 uses the GCC option "-fshort-wchar" to force wchar to
use short integers (2 bytes) instead of integers (4 bytes), we can't
remove this option from x86 and Arm64, because they need to interact
with EFI firmware. So I have two options:
1. Remove "-fshort-wchar" from efi-common.mk and add it back in the
   x86 and arm64 EFI Makefiles.
2. Add "-no-wchar-size-warning" to Arm32's linker flags.

I personally prefer option #1, because Arm32 doesn't need to interact
with EFI firmware; all it requires are some stub functions. And
"-no-wchar-size-warning" may hide warnings we should be aware of in
the future.

Cheers,
Wei Chen

> >
> > Jan
> >


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 11:28:59 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 11:28:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358425.587625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sLy-0005cQ-U0; Thu, 30 Jun 2022 11:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358425.587625; Thu, 30 Jun 2022 11:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sLy-0005cJ-RA; Thu, 30 Jun 2022 11:28:58 +0000
Received: by outflank-mailman (input) for mailman id 358425;
 Thu, 30 Jun 2022 11:28:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sqd6=XF=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1o6sLx-0005cB-QD
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 11:28:57 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062.outbound.protection.outlook.com [40.107.223.62])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d3a9ff1c-f867-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 13:28:56 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM6PR12MB4332.namprd12.prod.outlook.com (2603:10b6:5:21e::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Thu, 30 Jun
 2022 11:28:53 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::6d20:93ce:c4d6:f683%4]) with mapi id 15.20.5373.018; Thu, 30 Jun 2022
 11:28:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3a9ff1c-f867-11ec-bd2d-47488cf2e6aa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cMHzTQHkx05JTezr7ua8QAivFGX+g3ryJhqZFmpcFR2kq/FPgE537f3qpvRrAxNSyofp15vBya+E0zZBC0k5+n2Q5Rt/Mi/o9kHw7+58CAcjw8HN+e3ry6O9UTX8u3+i5CHXaU5x2o5q6fm8t8zrzaHHHsD1mHRWuiix+ufWzNVBdu6aluCSL4ueOIFz4DEopBeXdp6xmOPRCbHoWr3lPCI/dbByZCYdA3TaaVUrbGrW0KypEGoCrodOInISNpsbCfbYRCardpUBR+CfcSoTp03dUtKfZ6NpOXzOryMi4FmPupTSDvXeMZo0HBfl/b8ySsCPNlxrUSWs+BY85BUvqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+Vj7T/Wb9689p5EhEAfOVENGEodFALw/hDr1aTqW//Q=;
 b=klADr37/9Gw6wHBomABvQUluB3No/TWJI99LxaFCH3K4Ae9CJPNn5UJ94e0YwKXSa5XpdaP7CgE8ZmNF4xJ9t1zDk2sRsgtaEozm+GRW/cRZhG3ZhALO/yQPBhClEGWcEMsL3Fi8adOE1v9zkjas9bqm4BgbiBqkd/0KPQ3qTqXfZi5S1vafmc/Mw43kO5X8HkriqAghHJ1Zmh0jhFzZiBkQNF8Xt4WHkX7vRvalkV2rfG97vijZ+sN9YcRCgOEX1bQ382+SkHnWof/OinU6yDbEMzydvItOMvDljwQyQbbtbwVYZNOGEdWVGFX4aiwiIm/fYDCrdm+FSc3ttr64/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Vj7T/Wb9689p5EhEAfOVENGEodFALw/hDr1aTqW//Q=;
 b=W+Mzvl8ZF1KA6XSgOy5PnREYWPO5Fl2ZN2+EoYyxtmrA9scxkrobEXw/+XjgQkaPBsdEhM37pzvEf3Oh9E69cwafUARp3Maju6gEjiUKl2kPqgUxr7nesG9obYkIrxQL+a3pthpMwyh/R8YUIkyE1IkzF2OveI27cL/tXUSUL4I=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <d2057409-6557-ec71-ab68-e74dc9aafe66@amd.com>
Date: Thu, 30 Jun 2022 12:28:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xenia <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org,
 viryaos-discuss@lists.sourceforge.net
References: <20220626184536.666647-1-burzalodowa@gmail.com>
 <20220626184536.666647-2-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
 <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
 <cafc1602-6f5f-3238-801d-29c13ed37f50@amd.com>
 <alpine.DEB.2.22.394.2206291323470.4389@ubuntu-linux-20-04-desktop>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2206291323470.4389@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0413.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::22) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 37d94577-9e52-410b-b7cd-08da5a8bb62c
X-MS-TrafficTypeDiagnostic: DM6PR12MB4332:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37d94577-9e52-410b-b7cd-08da5a8bb62c
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 11:28:53.1090
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RhxGxaT6eA4+3lUfH9kOCjVYs/BV2g1Kc1xkSj4o82S+qwgBzytNFAUxAY7hOFVNb+cMihe3Y28HCTcY7PN0ng==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4332


On 29/06/2022 21:28, Stefano Stabellini wrote:
> On Wed, 29 Jun 2022, Ayan Kumar Halder wrote:
>> Hi Stefano/Xenia,
>>
>> On 29/06/2022 18:01, xenia wrote:
>>> Hi Stefano,
>>>
>>> On 6/29/22 03:28, Stefano Stabellini wrote:
>>>> On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
>>>>> To be in line with Xen, do not enable direct mapping automatically
>>>>> for all statically allocated domains.
>>>>>
>>>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>> Actually I don't know about this one. I think it is OK that ImageBuilder
>>>> defaults are different from Xen defaults. This is a case where I think
>>>> it would be good to enable DOMU_DIRECT_MAP by default when
>>>> DOMU_STATIC_MEM is specified.
>>> Just realized that I forgot to add [ImageBuilder] tag to the patches. Sorry
>>> about that.
>> @Stefano, why do you wish ImageBuilder's behaviour to differ from Xen's?
>> Is there a use case where this helps?
> As background, ImageBuilder is meant to be very simple to use, especially
> for the most common configurations. In fact, I think ImageBuilder
> doesn't necessarily have to support all the options that Xen offers,
> only the most common and important ones.
>
> If someone wants an esoteric option, they can always edit the generated
> boot.source and make any necessary changes. I make sure to explain that
> editing boot.source is always a possibility in all the talks I gave
> about ImageBuilder.
>
> Now to answer the specific question. I am positive that the most common
> configuration for people who want static memory is to have direct_map.
> That is because the two go hand-in-hand in configurations where the IOMMU
> is not used. So I think that from an ImageBuilder perspective direct_map
> should default to enabled when static memory is requested. It can always
> be disabled, either by setting DOMU_DIRECT_MAP to 0 or by editing boot.source.
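For reference, the defaulting rule described above can be sketched in shell, reduced to a single hypothetical domU (the real uboot-script-gen loops over every domU and uses array variables; the memory region below is an invented example):

```shell
# Hypothetical, simplified sketch of the default discussed above; not
# the actual uboot-script-gen code. If static memory is requested and
# the user expressed no preference, enable direct mapping; an explicit
# DOMU_DIRECT_MAP value is always honoured.
DOMU_STATIC_MEM="0x30000000 0x40000000"  # assumed example region
DOMU_DIRECT_MAP=""                       # left unset by the user

if test -n "$DOMU_STATIC_MEM" && test -z "$DOMU_DIRECT_MAP"
then
    DOMU_DIRECT_MAP=1
fi
echo "DOMU_DIRECT_MAP=$DOMU_DIRECT_MAP"
```

Either way, the generated boot.source can still be edited by hand to override the result.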

Many thanks for the explanation. This makes sense.

I think this patch can be dropped then.

Xenia, apologies for suggesting this in the first place.

- Ayan

>
>
>>> I cc Ayan, since the change was suggested by him.
>>> I have no strong preference on the default value.
>>>
>>> Xenia
>>>
>>>>> ---
>>>>>    README.md                | 4 ++--
>>>>>    scripts/uboot-script-gen | 8 ++------
>>>>>    2 files changed, 4 insertions(+), 8 deletions(-)
>>>>>
>>>>> diff --git a/README.md b/README.md
>>>>> index cb15ca5..03e437b 100644
>>>>> --- a/README.md
>>>>> +++ b/README.md
>>>>> @@ -169,8 +169,8 @@ Where:
>>>>>      if specified, indicates the host physical address regions
>>>>>      [baseaddr, baseaddr + size) to be reserved to the VM for static
>>>>> allocation.
>>>>>    -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>>>>> -  If set to 1, the VM is direct mapped. The default is 1.
>>>>> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
>>>>> +  By default, direct mapping is disabled.
>>>>>      This is only applicable when DOMU_STATIC_MEM is specified.
>>>>>      - LINUX is optional but specifies the Linux kernel for when Xen is
>>>>> NOT
>>>>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>>>>> index 085e29f..66ce6f7 100755
>>>>> --- a/scripts/uboot-script-gen
>>>>> +++ b/scripts/uboot-script-gen
>>>>> @@ -52,7 +52,7 @@ function dt_set()
>>>>>                echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>>>>>            elif test $data_type = "bool"
>>>>>            then
>>>>> -            if test "$data" -eq 1
>>>>> +            if test "$data" == "1"
>>>>>                then
>>>>>                    echo "fdt set $path $var" >> $UBOOT_SOURCE
>>>>>                fi
>>>>> @@ -74,7 +74,7 @@ function dt_set()
>>>>>                fdtput $FDTEDIT -p -t s $path $var $data
>>>>>            elif test $data_type = "bool"
>>>>>            then
>>>>> -            if test "$data" -eq 1
>>>>> +            if test "$data" == "1"
>>>>>                then
>>>>>                    fdtput $FDTEDIT -p $path $var
>>>>>                fi
>>>>> @@ -491,10 +491,6 @@ function xen_config()
>>>>>            then
>>>>>                DOMU_CMD[$i]="console=ttyAMA0"
>>>>>            fi
>>>>> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
>>>>> -        then
>>>>> -             DOMU_DIRECT_MAP[$i]=1
>>>>> -        fi
>>>>>            i=$(( $i + 1 ))
>>>>>        done
>>>>>    }
>>>>> -- 
>>>>> 2.34.1
>>>>>
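As an aside on the `-eq`-to-string-comparison hunks quoted above: with an empty or unset variable, the numeric test makes `test` fail with an error, while the string form simply evaluates to false. A minimal demonstration:

```shell
# Demonstrates why comparing the bool value as a string is more robust
# than "test ... -eq 1" in dt_set(): an empty value makes the numeric
# comparison error out, while the string comparison is simply false.
data=""

if test "$data" = "1"
then
    result="true"
else
    result="false"
fi
echo "string comparison with empty data: $result"

# The numeric form prints an error such as "integer expression expected"
# (suppressed here) and fails, so the fallback branch is taken:
test "$data" -eq 1 2>/dev/null && numeric="true" || numeric="error-or-false"
echo "numeric comparison with empty data: $numeric"
```

Note that POSIX `test` spells the string operator `=`; bash's builtin also accepts the `==` form used in the patch.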


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 11:34:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 11:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358431.587636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sR1-00071n-HQ; Thu, 30 Jun 2022 11:34:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358431.587636; Thu, 30 Jun 2022 11:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sR1-00071g-Ed; Thu, 30 Jun 2022 11:34:11 +0000
Received: by outflank-mailman (input) for mailman id 358431;
 Thu, 30 Jun 2022 11:34:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xIEj=XF=linuxfoundation.org=gregkh@srs-se1.protection.inumbo.net>)
 id 1o6sQz-00071X-HP
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 11:34:09 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8cff0b32-f868-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 13:34:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 5A954B82A38;
 Thu, 30 Jun 2022 11:34:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B8395C34115;
 Thu, 30 Jun 2022 11:34:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cff0b32-f868-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1656588845;
	bh=y/AkczZu+t5L5QS5HmrGxEEOZZGTi1cFRhvgGwdbbLY=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Qmg64MZn4FHh18uQWV6rRCe66knAFc1oY1fGEQzGn712mLfl0ROVycbbNXMHzHvRE
	 MFxzjNM4zEBjQ1LYcJ27YupCdzXULyW26LOoR+89hp0WyuqCpZtxlSb8ZEvpERkHOS
	 p5wIiaR8x/xOeSlz1dkr3si+QRXTeavd7RI3LzlE=
Date: Thu, 30 Jun 2022 13:34:02 +0200
From: Greg KH <gregkh@linuxfoundation.org>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
Message-ID: <Yr2KKpWSiuzOQr7v@kroah.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220627181006.1954-1-demi@invisiblethingslab.com>

On Mon, Jun 27, 2022 at 02:10:02PM -0400, Demi Marie Obenour wrote:
> commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream
> 
> unmap_grant_pages() currently waits for the pages to no longer be used.
> In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
> deadlock against i915: i915 was waiting for gntdev's MMU notifier to
> finish, while gntdev was waiting for i915 to free its pages.  I also
> believe this is responsible for various deadlocks I have experienced in
> the past.
> 
> Avoid these problems by making unmap_grant_pages async.  This requires
> making it return void, as any errors will not be available when the
> function returns.  Fortunately, the only use of the return value is a
> WARN_ON(), which can be replaced by a WARN_ON when the error is
> detected.  Additionally, a failed call will not prevent further calls
> from being made, but this is harmless.
> 
> Because unmap_grant_pages is now async, the grant handle will be set to
> INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
> handle.  Instead, a separate bool array is allocated for this purpose.
> This wastes memory, but stuffing this information in padding bytes is
> too fragile.  Furthermore, it is necessary to grab a reference to the
> map before making the asynchronous call, and release the reference when
> the call returns.
> 
> It is also necessary to guard against reentrancy in gntdev_map_put(),
> and to handle the case where userspace tries to map a mapping whose
> contents have not all been freed yet.
> 
> Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
> Cc: stable@vger.kernel.org
> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/xen/gntdev-common.h |   7 ++
>  drivers/xen/gntdev.c        | 142 +++++++++++++++++++++++++-----------
>  2 files changed, 106 insertions(+), 43 deletions(-)

All now queued up, thanks.

greg k-h


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 11:37:56 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 11:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358437.587647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sUd-0007eK-0n; Thu, 30 Jun 2022 11:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358437.587647; Thu, 30 Jun 2022 11:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6sUc-0007eD-U9; Thu, 30 Jun 2022 11:37:54 +0000
Received: by outflank-mailman (input) for mailman id 358437;
 Thu, 30 Jun 2022 11:37:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Dt9=XF=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1o6sUb-0007e0-Nd
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 11:37:53 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2088.outbound.protection.outlook.com [40.107.22.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1316124b-f869-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 13:37:51 +0200 (CEST)
Received: from DB6PR0202CA0023.eurprd02.prod.outlook.com (2603:10a6:4:29::33)
 by AM6PR08MB3317.eurprd08.prod.outlook.com (2603:10a6:209:42::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 11:37:50 +0000
Received: from DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:29:cafe::f1) by DB6PR0202CA0023.outlook.office365.com
 (2603:10a6:4:29::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15 via Frontend
 Transport; Thu, 30 Jun 2022 11:37:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT052.mail.protection.outlook.com (100.127.142.144) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5395.14 via Frontend Transport; Thu, 30 Jun 2022 11:37:49 +0000
Received: ("Tessian outbound 3c5325c30453:v121");
 Thu, 30 Jun 2022 11:37:49 +0000
Received: from 030a6fbb61c8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 90FABC9F-4E55-4B7F-9D24-EE7D0AA9AD02.1; 
 Thu, 30 Jun 2022 11:37:39 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 030a6fbb61c8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jun 2022 11:37:39 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM6PR08MB4294.eurprd08.prod.outlook.com (2603:10a6:20b:bd::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15; Thu, 30 Jun
 2022 11:37:30 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::980a:f741:c4e1:82f7%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 11:37:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1316124b-f869-11ec-bdce-3d151da133c5
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=FMV3bvJyxHOvI/j+OAtJgmUmXMUiVntQ/nWnX/beSb0T5SQ1yqFxldt02jrwjlm0wRRFh3HgKlM+LF+Am8hcfuW3XDRnxLL5Qg5PGzv1RO9autKuU90eTOBF+kej0xkunQmIStsPLxyA+8BURgA8iPxKMp5A4vjfDobq2UW+svCHP8+/+uE+PRjJthX5MWRvt6jjiTVkPaX03WIoWEMh9nzsi7RaHoX+0lpF5jVXa+KHDZS5Ne+7OCQEtNLQOuOzLrs0gIHg4ayubTzBj+3EPhWz+DVMBxieagBhY7E7q19AubsBZZhu8JWUtFwZ5PFQPt4A0Ji5RLyhrLstP/Dxhw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dqKhNuIZQ6zBXQGpSjQFM/oaOx3c/cm8Z3kwmRIWRvk=;
 b=Z+VQx1FuVEGLCQpVSjVSEzfe3NoZ9H2QPa6ucBAWJeA+Eb0UKwAKv5+6KsJfBNZvDi4zrj/H+vogo1FzWdVDd2MyAumaH+CJBgdLRmFJUyIel/051LzncyIRKQgfdxxLc84Pl1W8cKU/YzgekCi8eev8SyPJlWUFoM11ad//Dd0tawsokEuiGzAqbaPBRhbD8CQAWQqTsAn3Z8MYOHJYyWLgcp6EM6xZLJrRIss/MdA1DwGQLjPdulLSIpdafAxngeHJVweOFtEwOQrp3SeVS2OKmyzndHWI1LMMxwrgecz69kiBkUKjsyXS5f+9LT13SQZ/uf6+AVnLpV6ExDx7uQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=armh.onmicrosoft.com; arc=pass (0
 oda=1 ltdi=1 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dqKhNuIZQ6zBXQGpSjQFM/oaOx3c/cm8Z3kwmRIWRvk=;
 b=U3zCxmEDXIl35fmRb72yQokkLQgDdZs82PrFiKtA4/Auq+It6LNvwtWtc5UX2x+8h0Ipn3fKfWnRdJXhwEDZwUMWKovZz7mepn8sLqHLpPFXkZhRBzCaz7PGzxVvAJOHjneYpLSGi9Rl6IC9yzB3Oq2jpmk8kIT33S3doENFjz0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wb0luYfs4h1o+3l0PpnPcWAaBqQEiy09B7GwItlWwNRJYKbqxbPSfnjB6qjy71MVXC/1GaCyFQmVDNCtB9kCzboQTN1OTyDBKdHWQNiRjOKUuinN+ehG0qSi7LRjaHMhl0EFYNV/vNg5ervL60VfvahrYmFxFSXtGBNi9V0B5Y1Ff0uPgsL4H80sAVwHyEpgcUj76YwBfqedfLVWCscpNETBlyM+du9K7NKrlADsn36K/COFfYqRoWvGQ8hQDgJiVvdS8zKu1F6JfCOeVK4bq3NHOZ+9sOI3elDfTHFtcYVb4EhP0+WRedtLMHtCsnjHglilyFF9WWq3EbweIqRA7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dqKhNuIZQ6zBXQGpSjQFM/oaOx3c/cm8Z3kwmRIWRvk=;
 b=kDWQJ5JRJE7Jmptr5U/+QJlYa42kmW0yrD2kgrMipsb3+yH+FsWVHc+j6acsZsJylotlYekooJNeeX2CPawHF4CJnTE1aazZrzfroWuvO8NZqG19lPdHsOsLsIhr0CkO9+/5+SC6hioD+mc2FdLw0cpLEry03nu5phZYjbusDVJk9Tz1GMTIcEYPzvOzidX/Rj004EczW92FjN4sP5BwsabkQ/527Y0TD/hlnXQ4Z2z77ASs+IHmzjHjE1BhsV5HUskYruMzl6UU/Sym4FUBwTKQOKtmSxqPrS3ieCOYiVTbSaPSPoVRWl2Yf29EnOoxYJeJ19wMkIp1SEDQFYGQIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dqKhNuIZQ6zBXQGpSjQFM/oaOx3c/cm8Z3kwmRIWRvk=;
 b=U3zCxmEDXIl35fmRb72yQokkLQgDdZs82PrFiKtA4/Auq+It6LNvwtWtc5UX2x+8h0Ipn3fKfWnRdJXhwEDZwUMWKovZz7mepn8sLqHLpPFXkZhRBzCaz7PGzxVvAJOHjneYpLSGi9Rl6IC9yzB3Oq2jpmk8kIT33S3doENFjz0=
From: Wei Chen <Wei.Chen@arm.com>
To: Wei Chen <Wei.Chen@arm.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Topic: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Thread-Index:
 AQHYfI531SRNO5A+8Ei3iOFoJrK02K1dB3iAgAE0OjCAABXrgIAAECwAgAAEoICAAARlgIAAAO0AgAmAA0CAAAZFYA==
Date: Thu, 30 Jun 2022 11:37:30 +0000
Message-ID:
 <PAXPR08MB7420EC44AB703BAE155ACC4B9EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
 <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>
 <bad83568-94c6-6d90-308b-ae9965f54754@suse.com>
 <PAXPR08MB7420AD8092F0FBA43C359DD19EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
In-Reply-To:
 <PAXPR08MB7420AD8092F0FBA43C359DD19EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 92C7280CC74E574B8FD88B6E710F3087.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-MS-Office365-Filtering-Correlation-Id: 6064b6c5-c0e6-4bbb-136c-08da5a8cf62a
x-ms-traffictypediagnostic:
	AM6PR08MB4294:EE_|DBAEUR03FT052:EE_|AM6PR08MB3317:EE_
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4294
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	391a769d-dddb-47bd-1587-08da5a8cea5d
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 11:37:49.8064
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6064b6c5-c0e6-4bbb-136c-08da5a8cf62a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3317


> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Wei
> Chen
> Sent: 30 June 2022 19:25
> To: Jan Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Jiamei Xie
> <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org
> Subject: RE: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
> 
> Hi Julien and Jan,
> 
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > Sent: 24 June 2022 18:09
> > To: Julien Grall <julien@xen.org>
> > Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Bertrand
> > Marquis <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> > <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> > Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Jiamei Xie
> > <Jiamei.Xie@arm.com>; xen-devel@lists.xenproject.org; Wei Chen
> > <Wei.Chen@arm.com>
> > Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
> >
> > On 24.06.2022 12:05, Jan Beulich wrote:
> > > On 24.06.2022 11:49, Julien Grall wrote:
> > >> Hi Jan,
> > >>
> >
> > >>>>>>> --- a/xen/arch/arm/efi/Makefile
> > >>>>>>> +++ b/xen/arch/arm/efi/Makefile
> > >>>>>>> @@ -1,4 +1,12 @@
> > >>>>>>>    include $(srctree)/common/efi/efi-common.mk
> > >>>>>>>
> > >>>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
> > >>>>>>>    obj-y += $(EFIOBJ-y)
> > >>>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
> > >>>>>>> +else
> > >>>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
> > >>>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
> > >>>>>>> +# will not be cleaned in "make clean".
> > >>>>>>> +EFIOBJ-y += stub.o
> > >>>>>>> +obj-y += stub.o
> > >>>>>>> +endif
> > >>>>>>
> > >>>>>> This has caused
> > >>>>>>
> > >>>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the
> > >>>>>> output is to use 4-byte wchar_t; use of wchar_t values across
> > >>>>>> objects may fail
> > >>>>>>
> > >>>>>> for the 32-bit Arm build that I keep doing every once in a while,
> > >>>>>> with (if it matters) GNU ld 2.38. I guess you will want to consider
> > >>>>>> building all of Xen with -fshort-wchar, or to avoid building stub.c
> > >>>>>> with that option.
> > >>>>>>
> > >>>>>
> > >>>>> Thanks for pointing this out. I will try to use -fshort-wchar for
> > >>>>> Arm32, if Arm maintainers agree.
> > >>>>
> > >>>> Looking at the code we don't seem to build Xen arm64 with
> > >>>> -fshort-wchar (aside the EFI files). So it is not entirely clear why
> > >>>> we would want to use -fshort-wchar for arm32.
> > >>>
> > >>> We don't use wchar_t outside of EFI code afaict. Hence to all other
> > >>> code it should be benign whether -fshort-wchar is in use. So the
> > >>> suggestion to use the flag unilaterally on Arm32 is really just to
> > >>> silence the ld warning;
> > >>
> > >> Ok. This is odd. Why would ld warn on arm32 but not other arch?
> > >
> > > Arm32 embeds ABI information in a note section in each object file.
> >
> > Or a note-like one (just to avoid possible confusion); I think it's
> > ".ARM.attributes".
> >
> > Jan
> >
> > > The mismatch of the wchar_t part of this information is what causes
> > > ld to emit the warning.
> > >
> > >>> off the top of my head I can't see anything wrong with using
> > >>> the option also for Arm64 or even globally. Yet otoh we typically
> > >>> try to not make changes for environments where they aren't really
> > >>> needed.
> > >>
> > >> I agree. If we need a workaround, then my preference would be to not
> > >> build stub.c with -fshort-wchar.
> > >
> > > This would need to be an Arm-special then, as on x86 it needs to be
> > > built this way.
> 
> I have taken a look into this warning:
> This is because the "-fshort-wchar" flag causes GCC to generate
> code that is not binary compatible with code generated without
> that flag. The reason this warning hasn't been triggered on Arm64 is
> that we don't use any wchar in Arm64 code. We are also not
> using wchar in Arm32 code, but Arm32 will embed ABI information
> in the ".ARM.attributes" section. This section stores some object
> file attributes, like ABI version, CPU arch, etc. The wchar
> size is described in this section by "Tag_ABI_PCS_wchar_t" too.
> Tag_ABI_PCS_wchar_t is 2 for object files built with "-fshort-wchar",
> but 4 for object files built without it. The Arm32 GNU
> ld will check this tag, and emit the above warning when it finds
> that the object files have different Tag_ABI_PCS_wchar_t values.
> 
> gnu-efi-3.0 uses the GCC option "-fshort-wchar" to force wchar
> to use short integers (2 bytes) instead of integers (4 bytes).
> We can't remove this option from x86 and Arm64, because they need
> to interact with EFI firmware. So I have two options:
> 1. Remove "-fshort-wchar" from efi-common.mk and add it back in
>    x86 and arm64's EFI Makefile
> 2. Add "-no-wchar-size-warning" to Arm32's linker flags
> 

A 3rd option would be similar to what the Linux kernel did:
Kbuild: use -fshort-wchar globally
https://patchwork.kernel.org/project/linux-kbuild/patch/20170726133655.2137437-1-arnd@arndb.de/


> I personally prefer option #1, because Arm32 doesn't need to interact
> with EFI firmware; all it requires are some stub functions. And
> "-no-wchar-size-warning" may hide some warnings we should be aware of in
> future.
> 
> Cheers,
> Wei Chen
> 
> > >
> > > Jan
> > >


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 11:46:16 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 11:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358445.587658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6scd-0000t5-0A; Thu, 30 Jun 2022 11:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358445.587658; Thu, 30 Jun 2022 11:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6scc-0000sy-S0; Thu, 30 Jun 2022 11:46:10 +0000
Received: by outflank-mailman (input) for mailman id 358445;
 Thu, 30 Jun 2022 11:46:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8cNV=XF=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1o6scc-0000ss-BI
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 11:46:10 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b67e9e9-f86a-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 13:46:09 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id g26so38460656ejb.5
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jun 2022 04:46:09 -0700 (PDT)
Received: from [192.168.1.10] (adsl-146.37.6.170.tellas.gr. [37.6.170.146])
 by smtp.gmail.com with ESMTPSA id
 ba29-20020a0564021add00b00435a62d35b5sm12970888edb.45.2022.06.30.04.46.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 30 Jun 2022 04:46:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b67e9e9-f86a-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=message-id:date:mime-version:user-agent:subject:content-language:to
         :cc:references:from:in-reply-to:content-transfer-encoding;
        bh=ZKlio/AzJ6PJL+hQCWsbnuy/GPadpi3W60W3AP2+H8A=;
        b=RKj0sHHY2C+uCXtO8Qj+yRPowIxiu22aiiiZ2tOmNnliEul8TQvB2Q7WwqX9ZFRJD0
         OnL3TdSMoESaqj8SMqtobdf6+FnrzxxeryMp6X1ipnt8Bdqm6MTsKUmb9N5Plkbf0LGv
         0ffwSoiwvmcxpIMU26Qn25AwYgykcb+CG+mC8sfQqlEDtNTVIgQb6coxmxVFJ8Twowt5
         RIsGC55L6WUtvkeg9FgOKz5gJgmkdpoJBFwtTggcNxG3bV1WVCPOjNiKRCiwO4xZTeil
         MPt25KUkZfn/IkWhhPpq6Wzte/JuW2omQc8Uupf9V7i2rrBiVv2qpUXinlqNghiwDZt7
         4nRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:message-id:date:mime-version:user-agent:subject
         :content-language:to:cc:references:from:in-reply-to
         :content-transfer-encoding;
        bh=ZKlio/AzJ6PJL+hQCWsbnuy/GPadpi3W60W3AP2+H8A=;
        b=7kx3Dlzz4uL7T2Sln+Q3FMO+LeRUQ+rQF/kBobIrGAR3X/oUfTz16k2hAluyhnNKu3
         oHZ6PguIePye17Lu/VW376HiONeyZGfTV6ltJlF9EhU7pzZjKnHK0YjR/lrsuhSDmvw2
         Q3/M72TL4iukHTykmipmq30juzksF5X2N689vdeNxd+xJt/oDyaYJXCYu7ELhRKokVWc
         wZ5cF0gzhs96z129CKeB9mWz3xVLfm0mzIh8xiJiuYLysO8x/AaGqipY+vszMxuDu7mP
         1cuSfAfpatyISPU6hPG0vPLx6SxCXvO1KOxl7p/gvMiE1CsKdQMDHs85gGIL4AfFKPPM
         qqYQ==
X-Gm-Message-State: AJIora9o2AndMv34G2xTMiTkwYSFFBpr+4t3b+GWOCuIflrbEjsxO0eO
	f+/qvBT7pXHRXX2x+nmtbH4=
X-Google-Smtp-Source: AGRyM1vWodVPizm0Aj6tgU/N7E06+lBncILbxFulU2MeNOjWycOPURyqrso66jaxNnTVaPmMxXlTjA==
X-Received: by 2002:a17:907:16a2:b0:726:abbc:69bf with SMTP id hc34-20020a17090716a200b00726abbc69bfmr8326891ejc.363.1656589568578;
        Thu, 30 Jun 2022 04:46:08 -0700 (PDT)
Message-ID: <da2b825d-9612-acb5-8069-1ec5b210e4d6@gmail.com>
Date: Thu, 30 Jun 2022 14:46:06 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.1
Subject: Re: [PATCH 2/2] uboot-script-gen: do not enable direct mapping by
 default
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, viryaos-discuss@lists.sourceforge.net
References: <20220626184536.666647-1-burzalodowa@gmail.com>
 <20220626184536.666647-2-burzalodowa@gmail.com>
 <alpine.DEB.2.22.394.2206281727080.4389@ubuntu-linux-20-04-desktop>
 <22476413-14da-21cd-eb02-15165bfe602a@gmail.com>
 <cafc1602-6f5f-3238-801d-29c13ed37f50@amd.com>
 <alpine.DEB.2.22.394.2206291323470.4389@ubuntu-linux-20-04-desktop>
 <d2057409-6557-ec71-ab68-e74dc9aafe66@amd.com>
From: xenia <burzalodowa@gmail.com>
In-Reply-To: <d2057409-6557-ec71-ab68-e74dc9aafe66@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 6/30/22 14:28, Ayan Kumar Halder wrote:
>
> On 29/06/2022 21:28, Stefano Stabellini wrote:
>> On Wed, 29 Jun 2022, Ayan Kumar Halder wrote:
>>> Hi Stefano/Xenia,
>>>
>>> On 29/06/2022 18:01, xenia wrote:
>>>> Hi Stefano,
>>>>
>>>> On 6/29/22 03:28, Stefano Stabellini wrote:
>>>>> On Sun, 26 Jun 2022, Xenia Ragiadakou wrote:
>>>>>> To be in line with Xen, do not enable direct mapping automatically
>>>>>> for all statically allocated domains.
>>>>>>
>>>>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>>> Actually I don't know about this one. I think it is OK that
>>>>> ImageBuilder defaults are different from Xen defaults. This is a case
>>>>> where I think it would be good to enable DOMU_DIRECT_MAP by default
>>>>> when DOMU_STATIC_MEM is specified.
>>>> Just realized that I forgot to add the [ImageBuilder] tag to the
>>>> patches. Sorry about that.
>>> @Stefano, why do you wish ImageBuilder's behaviour to differ from
>>> Xen's? Is there any use-case that this helps?
>> As background, ImageBuilder is meant to be very simple to use, especially
>> for the most common configurations. In fact, I think ImageBuilder
>> doesn't necessarily have to support all the options that Xen offers,
>> only the most common and important ones.
>>
>> If someone wants an esoteric option, they can always edit the generated
>> boot.source and make any necessary changes. I make sure to explain that
>> editing boot.source is always a possibility in all the talks I gave
>> about ImageBuilder.
>>
>> Now to answer the specific question. I am positive that the most common
>> configuration for people that want static memory is to have direct_map.
>> That is because the two go hand-in-hand in configurations where the IOMMU
>> is not used. So I think that from an ImageBuilder perspective direct_map
>> should default to enabled when static memory is requested. It can always
>> be disabled, either by setting DOMU_DIRECT_MAP or by editing boot.source.
>
> Many thanks for the explanation. This makes sense.
>
> I think this patch can be dropped then.
>
> Xenia, apologies for suggesting that you do this in the first place.

No worries, it's all part of the process :)

> - Ayan
>
>>
>>>> I cc Ayan, since the change was suggested by him.
>>>> I have no strong preference on the default value.
>>>>
>>>> Xenia
>>>>
>>>>>> ---
>>>>>>    README.md                | 4 ++--
>>>>>>    scripts/uboot-script-gen | 8 ++------
>>>>>>    2 files changed, 4 insertions(+), 8 deletions(-)
>>>>>>
>>>>>> diff --git a/README.md b/README.md
>>>>>> index cb15ca5..03e437b 100644
>>>>>> --- a/README.md
>>>>>> +++ b/README.md
>>>>>> @@ -169,8 +169,8 @@ Where:
>>>>>>      if specified, indicates the host physical address regions
>>>>>>      [baseaddr, baseaddr + size) to be reserved to the VM for static
>>>>>> allocation.
>>>>>>    -- DOMU_DIRECT_MAP[number] can be set to 1 or 0.
>>>>>> -  If set to 1, the VM is direct mapped. The default is 1.
>>>>>> +- DOMU_DIRECT_MAP[number] if set to 1, enables direct mapping.
>>>>>> +  By default, direct mapping is disabled.
>>>>>>      This is only applicable when DOMU_STATIC_MEM is specified.
>>>>>>      - LINUX is optional but specifies the Linux kernel for when 
>>>>>> Xen is
>>>>>> NOT
>>>>>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>>>>>> index 085e29f..66ce6f7 100755
>>>>>> --- a/scripts/uboot-script-gen
>>>>>> +++ b/scripts/uboot-script-gen
>>>>>> @@ -52,7 +52,7 @@ function dt_set()
>>>>>>                echo "fdt set $path $var $array" >> $UBOOT_SOURCE
>>>>>>            elif test $data_type = "bool"
>>>>>>            then
>>>>>> -            if test "$data" -eq 1
>>>>>> +            if test "$data" == "1"
>>>>>>                then
>>>>>>                    echo "fdt set $path $var" >> $UBOOT_SOURCE
>>>>>>                fi
>>>>>> @@ -74,7 +74,7 @@ function dt_set()
>>>>>>                fdtput $FDTEDIT -p -t s $path $var $data
>>>>>>            elif test $data_type = "bool"
>>>>>>            then
>>>>>> -            if test "$data" -eq 1
>>>>>> +            if test "$data" == "1"
>>>>>>                then
>>>>>>                    fdtput $FDTEDIT -p $path $var
>>>>>>                fi
>>>>>> @@ -491,10 +491,6 @@ function xen_config()
>>>>>>            then
>>>>>>                DOMU_CMD[$i]="console=ttyAMA0"
>>>>>>            fi
>>>>>> -        if test -z "${DOMU_DIRECT_MAP[$i]}"
>>>>>> -        then
>>>>>> -             DOMU_DIRECT_MAP[$i]=1
>>>>>> -        fi
>>>>>>            i=$(( $i + 1 ))
>>>>>>        done
>>>>>>    }
>>>>>> -- 
>>>>>> 2.34.1
>>>>>>
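The `-eq` to string-comparison change in dt_set above guards against empty or unset DOMU_DIRECT_MAP values. A small illustrative sketch, not part of the patch, of the difference (the patch uses bash's `==`, which behaves the same as POSIX `=` here):

```shell
# With an empty variable, test's numeric comparison errors out, while the
# string comparison is simply false. This is why dt_set now compares
# "$data" as a string.
data=""
test "$data" -eq 1 2>/dev/null
echo "numeric comparison exit status: $?"   # non-zero error status (typically 2)
test "$data" = "1"
echo "string comparison exit status: $?"    # 1 (plain false, no error)
```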


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:14:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358452.587669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t3Z-0004iJ-Cq; Thu, 30 Jun 2022 12:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358452.587669; Thu, 30 Jun 2022 12:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t3Z-0004iC-9l; Thu, 30 Jun 2022 12:14:01 +0000
Received: by outflank-mailman (input) for mailman id 358452;
 Thu, 30 Jun 2022 12:13:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t3X-0004i2-CT; Thu, 30 Jun 2022 12:13:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t3X-0003TU-99; Thu, 30 Jun 2022 12:13:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t3W-0005qc-Kc; Thu, 30 Jun 2022 12:13:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t3W-0007xD-KD; Thu, 30 Jun 2022 12:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fsjgXkucqWZkJp9QlnH4hiX1XR0EHmP/k7xZ2hn/jaE=; b=VbwZPsqV3cQeVgxqeSR4k1/ca8
	nf3LO3lgXuEOPFHxvyjQ9Na7OfzmrKXFWQYBvpHVtsXH5LJPev5+o+iyLZq88bTMsfh/KtzrWaYIr
	RMRIzp9YzUReGQhRvaE8DvFqNq06d4+9Qj8O6Pb+rL5ztw/+VcWeP607noE4LHxQbRQw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171415-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 171415: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93aa071f66b78a2abbf134aeb96b02f066e6091d
X-Osstest-Versions-That:
    xen=93aa071f66b78a2abbf134aeb96b02f066e6091d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 12:13:58 +0000

flight 171415 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171415/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171402
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171402
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171402
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171402
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171402
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171402
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171402
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171402
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171402
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171402
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171402
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171402
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  93aa071f66b78a2abbf134aeb96b02f066e6091d
baseline version:
 xen                  93aa071f66b78a2abbf134aeb96b02f066e6091d

Last test of basis   171415  2022-06-30 01:51:58 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:18:29 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358460.587680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t7t-0005W2-0E; Thu, 30 Jun 2022 12:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358460.587680; Thu, 30 Jun 2022 12:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t7s-0005Vv-Sr; Thu, 30 Jun 2022 12:18:28 +0000
Received: by outflank-mailman (input) for mailman id 358460;
 Thu, 30 Jun 2022 12:18:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=npq1=XF=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o6t7r-0005Vm-DR
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 12:18:27 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bdfee3fc-f86e-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 14:18:26 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id s1so27012438wra.9
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jun 2022 05:18:26 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 l1-20020a5d4bc1000000b00219e77e489fsm18853629wrt.17.2022.06.30.05.18.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 30 Jun 2022 05:18:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdfee3fc-f86e-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=RJleqi+0HQx/3xnKLvwls6z60YY3TaUIzcUE6+U77y8=;
        b=OmuHmcd1ZxhtDYkmBcqIAggqomsRRxwGXztdtl41dv1prJbWl6kj1/6gwM4XIX9l2S
         kWQXLVGM2uJ3CyzO7LTavdLI1nIdzN929Pawg9FDhsljF37OCIwKlJuzlxl7siurCO+3
         EmhrVOKoZL3r1M8yfvm3oHxb3praMkQzJfW+bm/3Br40+6xi91i26nImttONa64aHrOx
         9B7uf1tAGMMJPRtStpqseCrokJNld7cUTPVhZc4Vd0EELnKdprbST6tFrjixySVin8ib
         YKYBMjHdNceFU5xLwHp+p1jjYLsQfRCbmWK2AwOvFC9HSLoigNoAG/KvHL8tE6XlXlrm
         qx7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=RJleqi+0HQx/3xnKLvwls6z60YY3TaUIzcUE6+U77y8=;
        b=sFkU93aSl6o+aL5ktlclFZvX6Mw2mVoXxs9o+ERlbPjOHwHP7SVclfa8CvWZRXKn9E
         N4ottvWzMO5Oe+g23IaDP5nJeGibGUqqC5Vl2olzNv6mpLV87A+3eK50lOnrvR8ZfXra
         Ed54Wk23lNIqPoOT3TToYVQDvr27kUdlZzh3IPa4GEBE5QMZVZLG5SCOHIz+h1kJgph5
         sxPmjAEuuFKLPzgFziS8AEzNq/0RVyn2sSvUcQQ6oBlipM/+CLBpY6Z1Z9ljkwGEdBSq
         kXrPn10jZKEaBBJxKQ+aoGn7c+NeUxxUWEMLKSsdJZmifkn702aUTX2DJb8x4TTUKrgb
         eF1A==
X-Gm-Message-State: AJIora+aEj1Q9J42pZWFyCYPbq9QwMBfv5dKKT+vKaQZjH9mUOam0BBC
	pc2zPvmFv8fDQPzPqveCKU4=
X-Google-Smtp-Source: AGRyM1uttJtJkk1/eUL0NOxY5BSV0GCjSTKh0LzgBEhl68Jj2Rh1AKX/7hm+pqeOberDXo1cHVt67w==
X-Received: by 2002:a5d:688e:0:b0:21b:9d51:25d2 with SMTP id h14-20020a5d688e000000b0021b9d5125d2mr8404703wru.286.1656591505569;
        Thu, 30 Jun 2022 05:18:25 -0700 (PDT)
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
From: Oleksandr <olekstysh@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
 <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
 <02648046-7781-61e5-de93-77342e4d6c16@gmail.com>
Message-ID: <36d4c786-9fb7-4b30-1a4d-171f92cc84d7@gmail.com>
Date: Thu, 30 Jun 2022 15:18:23 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <02648046-7781-61e5-de93-77342e4d6c16@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Dear all.


On 25.06.22 17:32, Oleksandr wrote:
>
> On 24.06.22 15:59, George Dunlap wrote:
>
> Hello George
>
>>
>>> On 17 Jun 2022, at 17:14, Oleksandr Tyshchenko <olekstysh@gmail.com> 
>>> wrote:
>>>
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
> >>> This patch adds basic support for configuring and assisting a
> >>> virtio-mmio based virtio-disk backend (emulator), which is intended
> >>> to run outside of Qemu and can run in any domain.
> >>> Although the Virtio block device is quite different from the
> >>> traditional Xen PV block device (vbd) from the toolstack's point of view:
> >>> - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
> >>>    written to Xenstore is fetched by the frontend currently ("vdev"
> >>>    is not passed to the frontend). But this might need to be revised
> >>>    in the future, so frontend data might be written to Xenstore in
> >>>    order to support hotplugging virtio devices or passing the backend
> >>>    domain id on arches where the device-tree is not available.
> >>> - the ring-ref/event-channel are not used for the backend<->frontend
> >>>    communication; the proposed IPC for Virtio is IOREQ/DM
> >>> it is still a "block device" and ought to be integrated into the
> >>> existing "disk" handling. So, re-use (and adapt) the "disk"
> >>> parsing/configuration logic to deal with Virtio devices as well.
>>>
> >>> For the immediate purpose, and for the ability to extend that support
> >>> to other use-cases in the future (Qemu, virtio-pci, etc.), perform the
> >>> following actions:
> >>> - Add a new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
> >>>   that in the configuration
> >>> - Introduce new disk "specification" and "transport" fields in struct
> >>>   libxl_device_disk. Both are written to Xenstore. The transport
> >>>   field is only used with the "virtio" specification, and it only
> >>>   assumes the value "mmio" for now.
> >>> - Introduce a new "specification" option, with the "xen" communication
> >>>   protocol as the default value.
> >>> - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
> >>>   current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
> >>>   model
>>>
>>> An example of domain configuration for Virtio disk:
>>> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, 
>>> specification=virtio']
>>>
>>> Nothing has changed for default Xen disk configuration.
>>>
> >>> Please note, this patch is not enough for virtio-disk to work
> >>> on Xen (Arm), as for every Virtio device (including disk) we need
> >>> to allocate Virtio MMIO params (IRQ and memory region), pass
> >>> them to the backend, and also update the guest device-tree. The
> >>> subsequent patch will add these missing bits. For the current patch,
> >>> the default "irq" and "base" are just written to Xenstore.
> >>> This is not an ideal split, but this way we avoid breaking
> >>> bisectability.
>>>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> OK, I am *really* sorry for coming in here at the last minute and 
>> quibbling about things.
>
>
> no problem
>
>
>>   But this introduces a public interface which looks really wrong to 
>> me.  I’ve replied to the mail from December where Juergen proposed 
>> the “Other” protocol.
>>
>> Hopefully this will be a simple matter of finding a better name than 
>> “other”.  (Or you guys convincing me that “other” is really the best 
>> name for this value; or even Anthony asserting his right as a 
>> maintainer to overrule my objection if he thinks I’m out of line.)
>
>
> I saw your reply to V6 and Juergen's answer. I share Juergen's opinion,
> and I also understand your concern. I think this is exactly the
> situation where finding a perfect name (obvious, short, etc.) for the
> backendtype (in our particular case) is really difficult.
>
> Personally, I tend to keep "other", because there is no better
> alternative from my PoV. Also, I might be completely wrong here, but I
> don't think we will have to extend "backendtype" to support other
> possible virtio backend implementations in the foreseeable future:
>
> - when Qemu gains the required support, we will choose qdisk:
> backendtype qdisk, specification virtio
> - for a possible virtio alternative of blkback, we will choose phy:
> backendtype phy, specification virtio
>
> If there is a need to support various implementations, we will be able
> to describe that by using "transport" (mmio, pci, xenbus, argo,
> whatever).
> Actually, this is why we also introduced "specification" and "transport".
>
> IIRC, there were other suggested names besides "other", namely
> "external" and "daemon". If you think that one of them is better than
> "other", I will be happy to use it.


Could we please make a decision on this?

If "other" is too ambiguous, then maybe we could choose "daemon" to
describe arbitrary user-level backends. Any thoughts?
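
For reference, the combinations discussed above boil down to the following
xl disk stanzas. Only the "backendtype=other" form is what this series
implements; the qdisk and phy variants are hypothetical future options
mentioned in the thread, not existing functionality:

```
# current series: standalone virtio-disk backend daemon (outside Qemu)
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio' ]

# hypothetical: Qemu-based virtio backend, once Qemu gains the support
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=qdisk, specification=virtio' ]

# hypothetical: in-kernel virtio alternative of blkback
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=phy, specification=virtio' ]
```

Whatever name replaces "other" would only change that one token; the
specification/transport split stays the same.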



>
>
>
>>
>> FWIW the Golang changes look fine.


Thanks.


>>
>>   -George
>>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:19:10 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358465.587691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t8V-00064q-DC; Thu, 30 Jun 2022 12:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358465.587691; Thu, 30 Jun 2022 12:19:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6t8V-00064j-9Y; Thu, 30 Jun 2022 12:19:07 +0000
Received: by outflank-mailman (input) for mailman id 358465;
 Thu, 30 Jun 2022 12:19:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t8U-00064V-Io; Thu, 30 Jun 2022 12:19:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t8U-0003ZY-CI; Thu, 30 Jun 2022 12:19:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t8T-0005zI-Sq; Thu, 30 Jun 2022 12:19:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6t8T-0008QK-SR; Thu, 30 Jun 2022 12:19:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eHEaF+bYmSNj6PlBkALoD82Qv18kAiXJBFoIdg3XeUE=; b=23E3G4uopjxnUuY/viHCD35Ymr
	Wwa7kr3L4FMIwYQomggKANjdNcncAleAxWZWW1WNU3hND8EEnfxR0FzRcbtrCpC+uYHWQ2bbnasXJ
	wIp7ByzJwxCKK26H7Ui/rmjkpTfvqeOSsPCbu6Zv+FYPF2QG9y7G1mLGD+9Nhfpmnsf0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171412-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171412: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl:host-ping-check-xen:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f96d4e0f60073963a5c64844271ecfee8dd87abc
X-Osstest-Versions-That:
    qemuu=2a8835cb45371a1f05c9c5899741d66685290f28
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 12:19:05 +0000

flight 171412 qemu-mainline real [real]
flight 171423 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171412/
http://logs.test-lab.xenproject.org/osstest/logs/171423/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd       22 guest-start.2  fail in 171423 REGR. vs. 171393

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 171423-retest
 test-armhf-armhf-xl          10 host-ping-check-xen fail pass in 171423-retest
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail pass in 171423-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 171423-retest
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 171423-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 171423 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 171423 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171393
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171393
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171393
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171393
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171393
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171393
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171393
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171393
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                f96d4e0f60073963a5c64844271ecfee8dd87abc
baseline version:
 qemuu                2a8835cb45371a1f05c9c5899741d66685290f28

Last test of basis   171393  2022-06-29 02:25:02 Z    1 days
Testing same since   171412  2022-06-29 23:38:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Denis V. Lunev <den@openvz.org>
  Richard Henderson <richard.henderson@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f96d4e0f60073963a5c64844271ecfee8dd87abc
Merge: 2a8835cb45 1b8f777673
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Jun 29 21:35:27 2022 +0530

    Merge tag 'pull-block-2022-06-14-v2' of https://gitlab.com/vsementsov/qemu into staging
    
    Block jobs & NBD patches
    
    v2: - add arguments to QEMUMachine constructor in test, to make it work
          on arm in gitlab pipeline
        - use bdrv_inc_in_flight() / bdrv_dec_in_flight() instead of direct
          manipulation with bs->in_flight
    
    - add new options for copy-before-write filter
    - new trace points for NBD
    - prefer unsigned type for some 'in_flight' fields
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQIzBAABCgAdFiEEi5wmzbL9FHyIDoahVh8kwfGfefsFAmK8BqkACgkQVh8kwfGf
    # efuiMw/+P9FFLfGFSjVP+LYeT0Ah6kN1ypCMQzIk3Qq/J6qgMZhtRqpQoZOfIFQL
    # U9fGmEtQZ7gvEMD/nJToL6uOYlnQfPxDcA6GrRwWWE3rcFiPK4J0q2LlnPLn4QaU
    # W/qag5l/QnZfLlj/iV6neWOEvqdnvY1L8IS+T8xV6N0iBYlwgMC/6FGshQwehzcV
    # T5F1qPGB0vjFDjf92LFPEsvsFFHjHIVPwOyJMvF64QtSk57utikq/la9PI/yA9AH
    # Ll4mNQuZKx6rSI5wE6b21jc8iOUvaoHdPSEDQZfNILSdgGdiKvFwE51y+baGnIAD
    # TpjxfG59q0jyGxMjQVxMRSFaxAC4+Mqi82diSPv4xbiUdsE4byJH0oENn4z7+wAv
    # EvjuU9yx4FfHHltoUNwfn2pv00o/ELdZIoBNmW36rPxSGZMvfVfRtuBL7XWNUFbW
    # Fr4BwsjC4KfIxb16QTBGhXVv6grxdlwoU9N23npdi0YpW1ftZzXGDa85+gINQ807
    # eK/gP/OtYPwIql0bgmLiuaRNzC9psmQOO6vQbdvd/e4BEVWkxiez37+e+zFMStmL
    # OAL8rS6jckUoxVZjCYFEWg97XOobLUIhQxt9Fwh2omMDGKTwv861ghUAivxSWs93
    # IRNxfwqNPxnpDDXjnK1ayZgU08IL98AUYVKcPN1y8JvEhB4Hr1k=
    # =ndKk
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Wed 29 Jun 2022 01:30:41 PM +0530
    # gpg:                using RSA key 8B9C26CDB2FD147C880E86A1561F24C1F19F79FB
    # gpg: Good signature from "Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>" [unknown]
    # gpg:                 aka "Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>" [unknown]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: 8B9C 26CD B2FD 147C 880E  86A1 561F 24C1 F19F 79FB
    
    * tag 'pull-block-2022-06-14-v2' of https://gitlab.com/vsementsov/qemu:
      block: use 'unsigned' for in_flight field on driver state
      nbd: trace long NBD operations
      iotests: copy-before-write: add cases for cbw-timeout option
      block/copy-before-write: implement cbw-timeout option
      block/block-copy: block_copy(): add timeout_ns parameter
      util: add qemu-co-timeout
      iotests: add copy-before-write: on-cbw-error tests
      block/copy-before-write: add on-cbw-error open parameter
      block/copy-before-write: refactor option parsing
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 1b8f777673985af366de099ad4e41d334b36fb12
Author: Denis V. Lunev <den@openvz.org>
Date:   Mon May 30 12:39:57 2022 +0200

    block: use 'unsigned' for in_flight field on driver state
    
    This patch makes the in_flight field 'unsigned' for BDRVNBDState and
    MirrorBlockJob. This matches the definition of the field on BDS
    and is generically correct: we should never get a negative value here.
    
    Signed-off-by: Denis V. Lunev <den@openvz.org>
    CC: John Snow <jsnow@redhat.com>
    CC: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
    CC: Kevin Wolf <kwolf@redhat.com>
    CC: Hanna Reitz <hreitz@redhat.com>
    CC: Eric Blake <eblake@redhat.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
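
As a hedged illustration of the type-choice argument above (this is not the QEMU code; the counter and names are invented for the example), a signed in-flight counter lets an unbalanced decrement slip by as a quietly negative count, whereas an unsigned one wraps to a glaringly wrong value:

```python
import ctypes

# Illustrative only, not the QEMU patch: an in-flight request counter can
# never meaningfully be negative, so 'unsigned' matches the intent (and the
# existing BDS field). With a signed C int, an unbalanced decrement just
# goes negative; with an unsigned int, the underflow wraps and is obvious.
signed_count = ctypes.c_int(0)
signed_count.value -= 1              # a bug slips by as a negative count
unsigned_count = ctypes.c_uint(0)
unsigned_count.value -= 1            # wraps to UINT_MAX, clearly broken
print(signed_count.value, unsigned_count.value)
```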

commit 8bb100c9e2dc1fe0e33283b0c43252dbaf4eb71b
Author: Denis V. Lunev <den@openvz.org>
Date:   Mon May 30 12:39:29 2022 +0200

    nbd: trace long NBD operations
    
    At the moment there are two sources of lengthy operations, if so
    configured:
    * opening a connection, which may retry internally, and
    * reconnecting an already-opened connection.
    These operations can be quite lengthy and cumbersome to catch, so it is
    natural to add trace points for them.
    
    This patch is based on the original downstream work done by Vladimir.
    
    Signed-off-by: Denis V. Lunev <den@openvz.org>
    CC: Eric Blake <eblake@redhat.com>
    CC: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
    CC: Kevin Wolf <kwolf@redhat.com>
    CC: Hanna Reitz <hreitz@redhat.com>
    CC: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

commit 9d05a87b77a63ed5505c59f5e8e6c5ca4f2c04d3
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:26 2022 +0300

    iotests: copy-before-write: add cases for cbw-timeout option
    
    Add two simple test-cases: timeout failure with
    break-snapshot-on-cbw-error behavior and similar with
    break-guest-write-on-cbw-error behavior.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

commit 6db7fd1ca980f8dd2fd082f13613166e170afd05
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:25 2022 +0300

    block/copy-before-write: implement cbw-timeout option
    
    In some scenarios, when a copy-before-write operation lasts too long,
    it's better to cancel it.
    
    Most usefully, the new option can be combined with
    on-cbw-error=break-snapshot: this way, if a cbw operation takes too
    long, we just cancel the backup process rather than disturb the guest.
    
    Note the tricky point of the implementation: we keep an additional
    reference in bs->in_flight for the duration of the block_copy operation
    even if it has timed out. Background "cancelled" block_copy operations
    will finish at some point and will want to access state, so we must take
    care not to free that state in .bdrv_close() too early.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
      [vsementsov: use bdrv_inc_in_flight()/bdrv_dec_in_flight() instead of
       direct manipulation on bs->in_flight]
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

commit 15df6e698719505570f8532772c2b08cb278a45a
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:24 2022 +0300

    block/block-copy: block_copy(): add timeout_ns parameter
    
    Add the possibility of limiting a block_copy() call in time. To be used
    in the next commit.
    
    As a timed-out block_copy() call will continue in the background anyway
    (we can't immediately cancel an I/O operation), it's also important to
    give the user the possibility of passing a callback, to perform
    additional actions when the block-copy call finishes.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

commit e1878eb5f0d93a67deb46aaeea898cf4824a759a
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:23 2022 +0300

    util: add qemu-co-timeout
    
    Add a new API for making a time-limited call of a coroutine.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
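
The idea of a time-limited coroutine call can be sketched in Python's asyncio (the names and API below are illustrative, not QEMU's): wait for an operation only up to a deadline; on timeout the operation keeps running in the background, and its state must stay alive until it really finishes, much as the cbw-timeout commit keeps bs->in_flight elevated for a timed-out block_copy():

```python
import asyncio

async def block_copy(duration):
    """Stand-in for a long-running copy operation."""
    await asyncio.sleep(duration)
    return "copied"

async def co_timeout(coro, timeout):
    """Run coro with a time limit; on timeout it continues in the background."""
    task = asyncio.ensure_future(coro)
    try:
        # shield() protects the task from the cancellation wait_for() issues,
        # so a timed-out operation keeps running instead of being aborted
        return await asyncio.wait_for(asyncio.shield(task), timeout), task
    except asyncio.TimeoutError:
        return "timed-out", task

async def main():
    fast, _ = await co_timeout(block_copy(0.01), timeout=1.0)   # in time
    slow, bg = await co_timeout(block_copy(0.3), timeout=0.05)  # times out
    late = await bg          # the "cancelled" copy still finishes later
    return fast, slow, late

results = asyncio.run(main())
print(results)
```

Note the caller must keep a handle on the background task (here `bg`) and not tear down shared state until it completes, which is exactly the subtlety called out in the cbw-timeout commit.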

commit dd3e97dfbe199fa277869d127884071100a426e5
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:22 2022 +0300

    iotests: add copy-before-write: on-cbw-error tests
    
    Add tests for new option of copy-before-write filter: on-cbw-error.
    
    Note that we use QEMUMachine instead of the VM class, because in a
    further commit we'll want to use throttling, which doesn't work with the
    -accel qtest used by VM.
    
    We also touch pylintrc so as not to break iotest 297.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
      [vsementsov: add arguments to QEMUMachine constructor]
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

commit f1bb39a8a5b6d486faa1a51a7f28c577155642c9
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:21 2022 +0300

    block/copy-before-write: add on-cbw-error open parameter
    
    Currently, behavior on copy-before-write operation failure is simple:
    report error to the guest.
    
    Let's implement an alternative behavior: break the whole
    copy-before-write process (and the corresponding backup job or NBD
    client) but keep the guest working. This is needed when we consider
    guest stability more important.
    
    The implementation is simple: on copy-before-write failure we set
    s->snapshot_ret and continue guest operations. s->snapshot_ret being
    set will cause all further snapshot API requests to fail. Note that
    in-flight snapshot-API requests may still succeed: we wait for them on
    the BREAK_SNAPSHOT-failure path in cbw_do_copy_before_write().
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
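
The two error policies described above can be sketched as a toy state machine (a hedged illustration; the class, method names, and errno handling are invented for the example, not QEMU's code):

```python
# Illustrative sketch of the two on-cbw-error policies: either report the
# copy-before-write failure to the guest, or break the snapshot/backup
# process while keeping the guest running.
BREAK_GUEST_WRITE = "break-guest-write"
BREAK_SNAPSHOT = "break-snapshot"
EIO = -5

class CbwFilter:
    def __init__(self, policy):
        self.policy = policy
        self.snapshot_error = 0          # plays the role of s->snapshot_ret

    def guest_write(self, cbw_ok=True):
        """Return 0 if the guest write succeeds, a negative errno otherwise."""
        if not cbw_ok:
            if self.policy == BREAK_GUEST_WRITE:
                return EIO               # error is reported to the guest
            self.snapshot_error = EIO    # break the backup, not the guest
        return 0

    def snapshot_read(self):
        """Snapshot (backup) reads fail once the CBW process is broken."""
        return self.snapshot_error

f = CbwFilter(BREAK_SNAPSHOT)
assert f.guest_write(cbw_ok=False) == 0   # guest keeps working
assert f.snapshot_read() == EIO           # but the backup is broken
g = CbwFilter(BREAK_GUEST_WRITE)
assert g.guest_write(cbw_ok=False) == EIO # error goes straight to the guest
```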

commit 79ef0cebb5694411e7452f0cf15c4bd170c7f2d6
Author: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Date:   Thu Apr 7 16:27:20 2022 +0300

    block/copy-before-write: refactor option parsing
    
    We are going to add one more option, of enum type. Let's refactor option
    parsing so that we can simply work with a BlockdevOptionsCbw object.
    
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
    Reviewed-by: Hanna Reitz <hreitz@redhat.com>
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:24:35 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358476.587704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6tDj-0007Zb-2S; Thu, 30 Jun 2022 12:24:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358476.587704; Thu, 30 Jun 2022 12:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6tDi-0007ZU-W0; Thu, 30 Jun 2022 12:24:30 +0000
Received: by outflank-mailman (input) for mailman id 358476;
 Thu, 30 Jun 2022 12:24:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6bdU=XF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1o6tDh-0007ZO-Oy
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 12:24:29 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr130074.outbound.protection.outlook.com [40.107.13.74])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 95cc5bd5-f86f-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 14:24:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6732.eurprd04.prod.outlook.com (2603:10a6:10:10e::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 12:24:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::dfa:a64a:432f:e26b%7]) with mapi id 15.20.5395.014; Thu, 30 Jun 2022
 12:24:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95cc5bd5-f86f-11ec-bdce-3d151da133c5
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X9/JAEAppMlaFbbTl47xL6sYU8o17zYXoTXD0+i+D6iaqCDCGhthsDFvq2FIBVjAlzhzen78Y1adFkO5J6I0Sr05pwa3xs8psVcXoFFLAfPQDi6Y2mezaTY8ZWpbzV93B0AMAm7xYWVwZQ9XFwdgnvxspoDze+R6WTaeZiSb+g/YY7ZzTLgSqoPuEZVQiY1Im3h6WsXEADv8VEk3PyMUxB62+dPI5Uy7LiVjKd6Lq7j1CMfpChVFsTwxDbESfAoVXN35C9vR1Yqe0xR2a1CYL4uq4CH2rt1TsmEIbd5fcnvQZUi1fHf+p6UmBqg0Q0NbhGAC6ZwzdL7SnAKtPQCCag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JpaL2K1pu2wSSMdea8KXMD5azt8WoYd/UmJdDdI8xW4=;
 b=HT89mmdRFDarb4CMYSX3DJhtf0iM+Q0RDtxX26/UnEhM5I4dSaOs2Jg5NpgCKqTrD7CZljEid8vxfnYhSIV2SyJURLl4pEpoa+7yd2mlSbJcgM8M30yqrWnsrWnPgA20nfDwa+/2AqG1NOj7EJj1HEJ+/Ml/vYtKIMsLevKuHD1LBGvyMilMZ/M1CUBxqboevaCWBix6AtHQqHcaaEM8hKSqBB/wkrY1OA0N5+b/qOluiZF9BS6INEfdzRtpMs4EqOmZWIbSDoMEI/ptKk9PH6RtXeCacyObGSsun6y50UHIHA2BcA/N3/d7nEFsJXthcDicdqte7SknQte6Q0ITKQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JpaL2K1pu2wSSMdea8KXMD5azt8WoYd/UmJdDdI8xW4=;
 b=3P9RY4QQ016ycNGlWpAPineWhFzYufo5kK7Gz35oku0Qkhrwz2f/eGq58BEWoWT/QgIIp1MfZz4bW4ksXZyH1aatxw+tmb6OH+02/HCah4DgsnKT7gpeDeLZjWBAV4GGnbygXU4lHzKWMJo/DCevhfXHy7vSBn1j5HV5rtiGWl4Tf+rfuB4ILDIk4puX2WnNOwYQysKlDsLMj71u7QrFnzX0dM7uofj5i7eDfhPPskBgeHLdRYo9tDu7DdVXpxxu7M0QKLJbUqJV90w9CKloBpop1alBY2XnketHPOF4e7Z86zFpMz+M5NPu3eMNm3n2dAx/kYL+lzENDYIj7RbC5w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8d23c11b-38c6-8356-cd28-6550e91220b1@suse.com>
Date: Thu, 30 Jun 2022 14:24:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: R: R: R: Crash when using xencov
Content-Language: en-US
To: Carmine Cesarano <c.cesarano@hotmail.it>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Julien Grall <julien@xen.org>
References: <AM7PR10MB355942D32F58FF02379C398CF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <87d0667b-2b85-f006-ea3c-6f557b2bdc8e@xen.org>
 <AM7PR10MB355972A68A222CB9FBAC43D4F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <daa12b90-da87-d463-24c4-a13fba330f1d@xen.org>
 <AM7PR10MB35593AA7F46B4D4A0BBD9841F8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559BB8CB733902773B1AD6AF8B99@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <AM7PR10MB3559A1984F6B53CEFB4FECC7F8B89@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
 <ef8b540c-d2c2-c999-d3fe-08fc88665ad9@xen.org>
 <AM7PR10MB355902159111F3CC8C6CDB94F8BA9@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AM7PR10MB355902159111F3CC8C6CDB94F8BA9@AM7PR10MB3559.EURPRD10.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM6P195CA0043.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:209:87::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cc0e7c60-70e4-44aa-495b-08da5a9378c6
X-MS-TrafficTypeDiagnostic: DB8PR04MB6732:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cc0e7c60-70e4-44aa-495b-08da5a9378c6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 12:24:26.0862
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1VdexkeQsANpQuoAqz6+i2/cyHrH9E7VzilNwwG/0ktEJwj/MWQHdJY6aWtz8aF/PfWTfFoVwr99WXCRuvr2QQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6732

On 30.06.2022 11:07, Carmine Cesarano wrote:
> Sorry for the images on the ML.

Please note how Julien had also asked you to not top-post.

> If I wanted to change my setup instead, is there a tested and working version of gcc for xencov features on xen stable-4.16?
> (I read GCC 3.4 or later in the documentation).

Judging from the sources someone had tested with gcc7, but earlier you
said you see the same issue there as you had seen with gcc10. I assume
you did make sure to do a full, clean re-build when switching compiler
versions.

No update was made after gcc 7, so it's not entirely unexpected for
things to not work anymore with newer versions.

Jan

> From: Julien Grall<mailto:julien@xen.org>
> Sent: Wednesday, 29 June 2022 20:02
> To: Carmine Cesarano<mailto:c.cesarano@hotmail.it>
> Cc: xen-devel@lists.xenproject.org<mailto:xen-devel@lists.xenproject.org>; Andrew Cooper<mailto:andrew.cooper3@citrix.com>
> Subject: Re: R: R: Crash when using xencov
> 
> (moving the discussion to xen-devel)
> 
> On 28/06/2022 16:32, Carmine Cesarano wrote:
>> Hi,
> 
> Hello,
> 
> Please refrain from top-posting and/or posting images on the ML. If you need to
> share an image, then please upload it somewhere else.
> 
>> I made two further attempts, first by compiling xen and xen-tools with gcc 10 and second with gcc 7, getting the same problem.
>>
>> By running “xencov reset” with both compilers, the line of code associated with the crash is:
> 
> Discussing with Andrew Cooper on IRC, it looks like the problem is
> that Xen and GCC disagree on the format. There are newer formats that
> Xen doesn't understand.
> 
> If you are interested in supporting GCOV on your setup, then I would
> suggest looking at the documentation and/or at what Linux is doing
> for newer compilers.
> 
>>
>>    *   /xen/xen/common/coverage/gcc_4_7.c:123
>> By running “xencov read”, I get two different behaviors with the two compilers:
>>
>>    *   /xen/xen/common/coverage/gcc_4_7.c:165   [GCC 11]
>>    *   /xen/xen/common/coverage/gcov.c:131          [GCC 7]
>>
>> Attached are the logs captured with a serial port.
>>
>> Cheers,
>>
>> Carmine Cesarano
>> From: Julien Grall<mailto:julien@xen.org>
>> Sent: Monday, 27 June 2022 14:42
>> To: Carmine Cesarano<mailto:c.cesarano@hotmail.it>
>> Subject: Re: R: Crash when using xencov
>>
>> Hello,
>>
>> You seem to have removed xen-users from the CC list. Please keep it in
>> CC unless the discussion needs to be private.
>>
>> Also, please avoid top-posting.
>>
>> On 27/06/2022 13:36, Carmine Cesarano wrote:
>>> Yes, I mean stable-4.16. Below are the logs after running "xencov reset". The situation for "xencov read" is similar.
>>>
>>> (XEN) ----[ Xen-4.16.2-pre  x86_64  debug=y gcov=y  Not tainted ]----
>>> (XEN) CPU:    0
>>> (XEN) RIP:    e008:[<ffff82d040257bd2>] gcov_info_reset+0x87/0xa9
>>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
>>> (XEN) rax: 0000000000000000   rbx: ffff82d04056bdc0   rcx: 00000000000c000b
>>> (XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff82d04056bdc0
>>> (XEN) rbp: ffff83023a7e7cb0   rsp: ffff83023a7e7c88   r8:  7fffffffffffffff
>>> (XEN) r9:  deadbeefdeadf00d   r10: 0000000000000000   r11: 0000000000000000
>>> (XEN) r12: 0000000000000001   r13: ffff82d04056be28   r14: 0000000000000000
>>> (XEN) r15: ffff82d04056bdc0   cr0: 0000000080050033   cr4: 0000000000172620
>>> (XEN) cr3: 000000017ea0b000   cr2: 0000000000000000
>>> (XEN) fsb: 00007fc0fb0ca200   gsb: ffff88807b400000   gss: 0000000000000000
>>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>>> (XEN) Xen code around <ffff82d040257bd2> (gcov_info_reset+0x87/0xa9):
>>> (XEN)  1d 44 89 f0 49 8b 57 70 <4c> 8b 24 c2 49 83 c4 18 48 83 05 a6 81 4c 00 01
>>> (XEN) Xen stack trace from rsp=ffff83023a7e7c88:
>>> (XEN)    ffff82d04056bdc0 0000000000000001 ffff82d04070f180 0000000000000001
>>> (XEN)    0000000000000000 ffff83023a7e7cc8 ffff82d040257a6a ffff83023a7e7db0
>>> (XEN)    ffff83023a7e7ce8 ffff82d040257547 ffff83023a7e7fff ffff83023a7e7fff
>>> (XEN)    ffff83023a7e7e58 ffff82d040255d5f ffff83023a7e7d68 ffff82d0403b5e8b
>>> (XEN)    000000000017d5b2 0000000000000000 ffff83023a6b5000 0000000000000000
>>> (XEN)    00007fc0fb348010 800000017ea0e127 0000000000000202 ffff82d040399fd8
>>> (XEN)    0000000000005a40 ffff83023a7e7d68 0000000000000206 ffff82e002fab640
>>> (XEN)    ffff83023a7e7e58 ffff82d0403bb29d ffff83023a69f000 000000003a7e7fff
>>> (XEN)    000000017ea0f067 0000000000000000 000000000017d5b2 000000000017d5b2
>>> (XEN)    0000001400000014 0000000000000002 ffffffffffffffff 0000000000000000
>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>> (XEN)    0000000000000000 ffff83023a7e7ef8 0000000000000001 ffff83023a69f000
>>> (XEN)    deadbeefdeadf00d ffff82d04025579d ffff83023a7e7ee8 ffff82d040387f62
>>> (XEN)    00007fc0fb348010 deadbeefdeadf00d deadbeefdeadf00d deadbeefdeadf00d
>>> (XEN)    deadbeefdeadf00d ffff83023a7e7fff ffff82d0403b2c99 ffff83023a7e7eb8
>>> (XEN)    ffff82d04038214c ffff83023a69f000 ffff83023a7e7ed8 ffff83023a69f000
>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>> (XEN)    00007cfdc58180e7 ffff82d0404392ae 0000000000000000 ffff88800f484c00
>>> (XEN) Xen call trace:
>>> (XEN)    [<ffff82d040257bd2>] R gcov_info_reset+0x87/0xa9
>>
>> Thanks! There are multiple versions of gcov_info_reset() in the tree.
>> The one used will depend on the compiler you are using.
>>
>> Can you use addr2line (or gdb) to find out the file and line of code
>> associated with the crash?
>>
>> For addr2line you could do:
>>
>>     addr2line -e xen-syms 0xffff82d040257bd2
> 
> Cheers,
> 
> --
> Julien Grall
> 
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:24:46 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:24:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <43cafa48-1cef-ad0f-654e-5296cff15018@suse.com>
Date: Thu, 30 Jun 2022 14:24:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Oleksandr <olekstysh@gmail.com>, George Dunlap
 <George.Dunlap@citrix.com>, Anthony Perard <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
 <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
 <02648046-7781-61e5-de93-77342e4d6c16@gmail.com>
 <36d4c786-9fb7-4b30-1a4d-171f92cc84d7@gmail.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
In-Reply-To: <36d4c786-9fb7-4b30-1a4d-171f92cc84d7@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------or0G17J0FHgXlHhgEmNHKuk0"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------or0G17J0FHgXlHhgEmNHKuk0
Content-Type: multipart/mixed; boundary="------------SoJtpuNNnpGoH97jtfddOXYV";
 protected-headers="v1"

--------------SoJtpuNNnpGoH97jtfddOXYV
Content-Type: multipart/mixed; boundary="------------eyobLM2s4wBBTle5dihHL9n8"

--------------eyobLM2s4wBBTle5dihHL9n8
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.06.22 14:18, Oleksandr wrote:
> 
> Dear all.
> 
> 
> On 25.06.22 17:32, Oleksandr wrote:
>>
>> On 24.06.22 15:59, George Dunlap wrote:
>>
>> Hello George
>>
>>>
>>>> On 17 Jun 2022, at 17:14, Oleksandr Tyshchenko <olekstysh@gmail.com> wrote:
>>>>
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> This patch adds basic support for configuring and assisting virtio-mmio
>>>> based virtio-disk backend (emulator) which is intended to run out of
>>>> Qemu and could be run in any domain.
>>>> Although the Virtio block device is quite different from traditional
>>>> Xen PV block device (vbd) from the toolstack's point of view:
>>>> - as the frontend is virtio-blk which is not a Xenbus driver, nothing
>>>>   written to Xenstore are fetched by the frontend currently ("vdev"
>>>>   is not passed to the frontend). But this might need to be revised
>>>>   in future, so frontend data might be written to Xenstore in order to
>>>>   support hotplugging virtio devices or passing the backend domain id
>>>>   on arch where the device-tree is not available.
>>>> - the ring-ref/event-channel are not used for the backend<->frontend
>>>>   communication, the proposed IPC for Virtio is IOREQ/DM
>>>> it is still a "block device" and ought to be integrated in existing
>>>> "disk" handling. So, re-use (and adapt) "disk" parsing/configuration
>>>> logic to deal with Virtio devices as well.
>>>>
>>>> For the immediate purpose and an ability to extend that support for
>>>> other use-cases in future (Qemu, virtio-pci, etc) perform the following
>>>> actions:
>>>> - Add new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
>>>>   that in the configuration
>>>> - Introduce new disk "specification" and "transport" fields to struct
>>>>   libxl_device_disk. Both are written to the Xenstore. The transport
>>>>   field is only used for the specification "virtio" and it assumes
>>>>   only "mmio" value for now.
>>>> - Introduce new "specification" option with "xen" communication
>>>>   protocol being default value.
>>>> - Add new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as current
>>>>   one (LIBXL__DEVICE_KIND_VBD) doesn't fit into Virtio disk model
>>>>
>>>> An example of domain configuration for Virtio disk:
>>>> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio']
>>>>
>>>> Nothing has changed for default Xen disk configuration.
>>>>
>>>> Please note, this patch is not enough for virtio-disk to work
>>>> on Xen (Arm), as for every Virtio device (including disk) we need
>>>> to allocate Virtio MMIO params (IRQ and memory region) and pass
>>>> them to the backend, also update Guest device-tree. The subsequent
>>>> patch will add these missing bits. For the current patch,
>>>> the default "irq" and "base" are just written to the Xenstore.
>>>> This is not an ideal splitting, but this way we avoid breaking
>>>> the bisectability.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> OK, I am *really* sorry for coming in here at the last minute and quibbling
>>> about things.
>>
>>
>> no problem
>>
>>
>>> But this introduces a public interface which looks really wrong to me.
>>> I've replied to the mail from December where Juergen proposed the "Other"
>>> protocol.
>>>
>>> Hopefully this will be a simple matter of finding a better name than
>>> "other". (Or you guys convincing me that "other" is really the best name for
>>> this value; or even Anthony asserting his right as a maintainer to overrule
>>> my objection if he thinks I'm out of line.)
>>
>>
>> I saw your reply to V6 and Juergen's answer. I share Juergen's opinion as well
>> as I understand your concern. I think this is exactly the situation when
>> finding a perfect name (obvious, short, etc) for the backendtype (in our
>> particular case) is really difficult.
>>
>> Personally I tend to leave "other", because there is no better alternative
>> from my PoV. Also I might be completely wrong here, but I don't think we will
>> have to extend the "backendtype" for supporting other possible virtio backend
>> implementations in the foreseeable future:
>>
>> - when Qemu gains the required support we will choose qdisk: backendtype qdisk
>> specification virtio
>> - for the possible virtio alternative of the blkback we will choose phy:
>> backendtype phy specification virtio
>>
>> If there will be a need to keep various implementations, we will be able to
>> describe that by using "transport" (mmio, pci, xenbus, argo, whatever).
>> Actually this is why we also introduced "specification" and "transport".
>>
>> IIRC, there were other (suggested?) names except "other" which are "external"
>> and "daemon". If you think that one of them is better than "other", I will be
>> happy to use it.
> 
> 
> Could we please make a decision on this?
> 
> If "other" is not unambiguous, then maybe we could choose "daemon" to describe
> arbitrary user-level backends, any thought?

IMO this would exclude other cases, like special kernel drivers.

Maybe "standalone"? "only-relying-on-xenstore-data" would be a bit
exaggerated, while conveying the idea quite nicely.


Juergen
--------------eyobLM2s4wBBTle5dihHL9n8--

--------------SoJtpuNNnpGoH97jtfddOXYV--


--------------or0G17J0FHgXlHhgEmNHKuk0--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 12:36:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 12:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <1d488f5d-bfd3-a816-dab1-515f49a57f67@suse.com>
Date: Thu, 30 Jun 2022 14:36:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Subject: Re: [PATCH v6 1/8] xen: reuse x86 EFI stub functions for Arm
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jiamei Xie <Jiamei.Xie@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>
References: <20220610055316.2197571-1-wei.chen@arm.com>
 <20220610055316.2197571-2-wei.chen@arm.com>
 <05dadcda-505d-d46a-776a-bb29b8915815@suse.com>
 <PAXPR08MB74205A192C0E6E2E4BDD64BB9EB49@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <8e44e765-c47f-4480-ee44-704ea13a170d@xen.org>
 <cd5d728c-a21e-780e-3b79-0cfb163eb824@suse.com>
 <a6844d62-c1aa-a29f-56ba-3556bc1d4dac@xen.org>
 <6e91d7d0-78d2-2eec-3b14-9aea00b2a028@suse.com>
 <bad83568-94c6-6d90-308b-ae9965f54754@suse.com>
 <PAXPR08MB7420AD8092F0FBA43C359DD19EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <PAXPR08MB7420AD8092F0FBA43C359DD19EBA9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 30.06.2022 13:25, Wei Chen wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 24 June 2022 18:09
>>
>> On 24.06.2022 12:05, Jan Beulich wrote:
>>> On 24.06.2022 11:49, Julien Grall wrote:
>>>>>>>>> --- a/xen/arch/arm/efi/Makefile
>>>>>>>>> +++ b/xen/arch/arm/efi/Makefile
>>>>>>>>> @@ -1,4 +1,12 @@
>>>>>>>>>    include $(srctree)/common/efi/efi-common.mk
>>>>>>>>>
>>>>>>>>> +ifeq ($(CONFIG_ARM_EFI),y)
>>>>>>>>>    obj-y += $(EFIOBJ-y)
>>>>>>>>>    obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
>>>>>>>>> +else
>>>>>>>>> +# Add stub.o to EFIOBJ-y to re-use the clean-files in
>>>>>>>>> +# efi-common.mk. Otherwise the link of stub.c in arm/efi
>>>>>>>>> +# will not be cleaned in "make clean".
>>>>>>>>> +EFIOBJ-y += stub.o
>>>>>>>>> +obj-y += stub.o
>>>>>>>>> +endif
>>>>>>>>
>>>>>>>> This has caused
>>>>>>>>
>>>>>>>> ld: warning: arch/arm/efi/built_in.o uses 2-byte wchar_t yet the
>>>>>>>> output is to use 4-byte wchar_t; use of wchar_t values across
>>>>>>>> objects may fail
>>>>>>>>
>>>>>>>> for the 32-bit Arm build that I keep doing every once in a while,
>>>>>>>> with (if it matters) GNU ld 2.38. I guess you will want to consider
>>>>>>>> building all of Xen with -fshort-wchar, or to avoid building stub.c
>>>>>>>> with that option.
>>>>>>>>
>>>>>>>
>>>>>>> Thanks for pointing this out. I will try to use -fshort-wchar for
>>>>>>> Arm32, if Arm maintainers agree.
>>>>>>
>>>>>> Looking at the code we don't seem to build Xen arm64 with
>>>>>> -fshort-wchar (aside the EFI files). So it is not entirely clear why
>>>>>> we would want to use -fshort-wchar for arm32.
>>>>>
>>>>> We don't use wchar_t outside of EFI code afaict. Hence to all other
>>>>> code it should be benign whether -fshort-wchar is in use. So the
>>>>> suggestion to use the flag unilaterally on Arm32 is really just to
>>>>> silence the ld warning;
>>>>
>>>> Ok. This is odd. Why would ld warn on arm32 but not on other
>>>> architectures?
>>>
>>> Arm32 embeds ABI information in a note section in each object file.
>>
>> Or a note-like one (just to avoid possible confusion); I think it's
>> ".ARM.attributes".
>>
>> Jan
>>
>>> The mismatch of the wchar_t part of this information is what causes
>>> ld to emit the warning.
>>>
>>>>> off the top of my head I can't see anything wrong with using
>>>>> the option also for Arm64 or even globally. Yet otoh we typically try
>>>>> to not make changes for environments where they aren't really needed.
>>>>
>>>> I agree. If we need a workaround, then my preference would be to not
>>>> build stub.c with -fshort-wchar.
>>>
>>> This would need to be an Arm-special then, as on x86 it needs to be
>>> built this way.
> 
> I have taken a look into this warning:
> It occurs because the "-fshort-wchar" flag causes GCC to generate
> code that is not binary compatible with code generated without
> that flag. The warning hasn't been triggered on Arm64 because we
> don't use any wchar_t in Arm64 code.

I don't think that's quite right - you actually say below that we
do use it there when interacting with UEFI. There's no warning
there solely because the information isn't embedded in the object
files there, from all I can tell.

> We are also not
> using wchar_t in Arm32 code, but Arm32 embeds ABI information
> in the ".ARM.attributes" section. This section stores object
> file attributes such as the ABI version and CPU architecture,
> and the wchar_t size is recorded there too, by the
> "Tag_ABI_PCS_wchar_t" tag. Tag_ABI_PCS_wchar_t is 2 for object
> files built with "-fshort-wchar" and 4 for object files built
> without it. The Arm32 GNU ld checks this tag and emits the above
> warning when it finds object files with different
> Tag_ABI_PCS_wchar_t values.
> 
> gnu-efi-3.0 uses the GCC option "-fshort-wchar" to force wchar_t
> to be a short integer (2 bytes) instead of an integer (4 bytes).
> We can't remove this option from x86 and Arm64, because they need
> to interact with EFI firmware. So I have two options:
> 1. Remove "-fshort-wchar" from efi-common.mk and add it back in
>    the x86 and Arm64 EFI Makefiles
> 2. Add "-no-wchar-size-warning" to Arm32's linker flags
> 
> I personally prefer option #1, because Arm32 doesn't need to interact
> with EFI firmware; all it requires are some stub functions. And
> "-no-wchar-size-warning" may hide warnings we should be aware of in
> the future.
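
Option #1 could be sketched roughly as below. This is a hypothetical, untested fragment: the exact variable used for per-directory CFLAGS in the Xen build may differ, and the file names are taken from the thread rather than verified against the tree.

```make
# common/efi/efi-common.mk: stop adding the flag for every architecture.
# (removed) CFLAGS-y += -fshort-wchar

# xen/arch/x86/efi/Makefile and the arm64 part of xen/arch/arm/efi/Makefile:
# re-add the flag only where Xen actually talks to EFI firmware, so the
# Arm32 stub.o is built with the default 4-byte wchar_t and no longer
# mismatches the rest of the hypervisor.
CFLAGS-y += -fshort-wchar
```

As Jan notes below, the ideally cleaner variant would keep the flag in the common fragment and have Arm32 suppress its addition instead.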

I don't mind #1, but I think your subsequently proposed #3 would be
the first thing to try. There may be caveats, so if that doesn't work
out I'd suggest falling back to #1. Albeit ideally the flag setting
wouldn't be moved back (it _is_ a common EFI thing, after all), but
rather Arm32 arranging for its addition to be suppressed.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 13:16:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 13:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358497.587738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6u2G-0006LI-P8; Thu, 30 Jun 2022 13:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358497.587738; Thu, 30 Jun 2022 13:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6u2G-0006LB-M1; Thu, 30 Jun 2022 13:16:44 +0000
Received: by outflank-mailman (input) for mailman id 358497;
 Thu, 30 Jun 2022 13:16:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gQTH=XF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1o6u2F-0006L3-IU
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 13:16:43 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dcdecb08-f876-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 15:16:33 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A566D21D34;
 Thu, 30 Jun 2022 13:16:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 80CF7139E9;
 Thu, 30 Jun 2022 13:16:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id lIvwHTmivWL5SQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 30 Jun 2022 13:16:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcdecb08-f876-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1656595001; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pNe559UvQnooprBZ6Fnd5/T/TxJuQKzcXs6v8CV/2no=;
	b=EU2/n1QH/oqY66T4suniblgB7/5IElnc2M1B8thaOhDHO9R6u6aj+eGHmuPdt4ezFIYFCO
	OFIVOeCaTHdJb0Qa0yy0xeq8xduYHwJxws+Fq9THBRdNjtC2R8GrLkvSUkXadrUE9PedNM
	ZMK39dmCMqLqhROfbLzyzk8QXrgTToQ=
Message-ID: <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>
Date: Thu, 30 Jun 2022 15:16:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.10.0
Content-Language: en-US
To: Greg KH <gregkh@linuxfoundation.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: stable@vger.kernel.org,
 Xen developer discussion <xen-devel@lists.xenproject.org>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
 <Yr2KKpWSiuzOQr7v@kroah.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
In-Reply-To: <Yr2KKpWSiuzOQr7v@kroah.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------JsDTrELbzCD9OasTk9RasDaR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------JsDTrELbzCD9OasTk9RasDaR
Content-Type: multipart/mixed; boundary="------------9pcFxfFI7ecXR7Cq6zP7LG0J";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Greg KH <gregkh@linuxfoundation.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: stable@vger.kernel.org,
 Xen developer discussion <xen-devel@lists.xenproject.org>
Message-ID: <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
 <Yr2KKpWSiuzOQr7v@kroah.com>
In-Reply-To: <Yr2KKpWSiuzOQr7v@kroah.com>

--------------9pcFxfFI7ecXR7Cq6zP7LG0J
Content-Type: multipart/mixed; boundary="------------EALeCf4fgxcjdvskUURL5ywM"

--------------EALeCf4fgxcjdvskUURL5ywM
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.06.22 13:34, Greg KH wrote:
> On Mon, Jun 27, 2022 at 02:10:02PM -0400, Demi Marie Obenour wrote:
>> commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream
>>
>> unmap_grant_pages() currently waits for the pages to no longer be used.
>> In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
>> deadlock against i915: i915 was waiting for gntdev's MMU notifier to
>> finish, while gntdev was waiting for i915 to free its pages.  I also
>> believe this is responsible for various deadlocks I have experienced in
>> the past.
>>
>> Avoid these problems by making unmap_grant_pages async.  This requires
>> making it return void, as any errors will not be available when the
>> function returns.  Fortunately, the only use of the return value is a
>> WARN_ON(), which can be replaced by a WARN_ON when the error is
>> detected.  Additionally, a failed call will not prevent further calls
>> from being made, but this is harmless.
>>
>> Because unmap_grant_pages is now async, the grant handle will be set to
>> INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
>> handle.  Instead, a separate bool array is allocated for this purpose.
>> This wastes memory, but stuffing this information in padding bytes is
>> too fragile.  Furthermore, it is necessary to grab a reference to the
>> map before making the asynchronous call, and release the reference when
>> the call returns.
>>
>> It is also necessary to guard against reentrancy in gntdev_map_put(),
>> and to handle the case where userspace tries to map a mapping whose
>> contents have not all been freed yet.
>>
>> Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>> Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/xen/gntdev-common.h |   7 ++
>>   drivers/xen/gntdev.c        | 142 +++++++++++++++++++++++++-----------
>>   2 files changed, 106 insertions(+), 43 deletions(-)
> 
> All now queued up, thanks.

Sorry, but I think at least the version for 5.10 is fishy, as it removes
the tests for successful allocations of add->map_ops and add->unmap_ops.

I need to do a thorough review of the patches (the "Reviewed-by:" tag in
the patches is the one for the upstream patch).

Greg, can you please wait for my explicit "okay" for the backports?


Juergen
--------------EALeCf4fgxcjdvskUURL5ywM
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------EALeCf4fgxcjdvskUURL5ywM--

--------------9pcFxfFI7ecXR7Cq6zP7LG0J--

--------------JsDTrELbzCD9OasTk9RasDaR
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmK9ojkFAwAAAAAACgkQsN6d1ii/Ey9r
RAf/fK4KwxbWJNm57k+LFn5HNb01E0xInEBhjUbv8MCde5ew4svEPKceJGv7hkLCxIuAp7uX0nYf
lGOJr3EXoAecxqyP6d7IywEqwjiUaiaiUKPnknphVIwVRqRDuhklNo/ILJJy/yf7zNxsrh/zMzko
JtK27WDMIbW+BT7sM7DjG66gwB+dKsROPTco6xxdUVZynr3roLG1r+oMTTo/m5fcbMmQGuwDCJHt
xCtKKWSMlBLMcwcE3pb89LONbhEmqP09qwx+s/yZOr5CfXaXLA7JZCyP/YZ44wZazegze6F2tGn8
1hUN9N4NtQWwM9kP+GnCL4Bv9GiBI9VmB1Yd4lI0yQ==
=rv91
-----END PGP SIGNATURE-----

--------------JsDTrELbzCD9OasTk9RasDaR--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 13:31:44 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 13:31:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358503.587748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6uGi-0000Xi-0n; Thu, 30 Jun 2022 13:31:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358503.587748; Thu, 30 Jun 2022 13:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6uGh-0000Xb-UO; Thu, 30 Jun 2022 13:31:39 +0000
Received: by outflank-mailman (input) for mailman id 358503;
 Thu, 30 Jun 2022 13:31:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xIEj=XF=linuxfoundation.org=gregkh@srs-se1.protection.inumbo.net>)
 id 1o6uGg-0000XF-Vt
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 13:31:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f6787e3e-f878-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 15:31:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 342F761EA2;
 Thu, 30 Jun 2022 13:31:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F501C34115;
 Thu, 30 Jun 2022 13:31:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6787e3e-f878-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1656595895;
	bh=HnYgzoY2wpz25syeP/HHn4cFSw8pcDffdk1+i7c9erA=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=WGugVafS/pr4OIuJad3zigQVZAlqgDOEqz69h1dDwdzvHWbVsFaZHMf4EgDPMrTlM
	 ZmdHlaVXws/afeUK2G+MkzHts8RqBEYSwAxM7g5djaVi2tCd9kXjHWrb8C6RhGvgSi
	 90/XUTu7wdBLB6MgsB5/wtv6IHfxZWS8pxBWuVc4=
Date: Thu, 30 Jun 2022 15:31:32 +0200
From: Greg KH <gregkh@linuxfoundation.org>
To: Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
Message-ID: <Yr2ltPwSM7srft78@kroah.com>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
 <Yr2KKpWSiuzOQr7v@kroah.com>
 <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>

On Thu, Jun 30, 2022 at 03:16:41PM +0200, Juergen Gross wrote:
> On 30.06.22 13:34, Greg KH wrote:
> > On Mon, Jun 27, 2022 at 02:10:02PM -0400, Demi Marie Obenour wrote:
> > > commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream
> > > 
> > > unmap_grant_pages() currently waits for the pages to no longer be used.
> > > In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
> > > deadlock against i915: i915 was waiting for gntdev's MMU notifier to
> > > finish, while gntdev was waiting for i915 to free its pages.  I also
> > > believe this is responsible for various deadlocks I have experienced in
> > > the past.
> > > 
> > > Avoid these problems by making unmap_grant_pages async.  This requires
> > > making it return void, as any errors will not be available when the
> > > function returns.  Fortunately, the only use of the return value is a
> > > WARN_ON(), which can be replaced by a WARN_ON when the error is
> > > detected.  Additionally, a failed call will not prevent further calls
> > > from being made, but this is harmless.
> > > 
> > > Because unmap_grant_pages is now async, the grant handle will be set to
> > > INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
> > > handle.  Instead, a separate bool array is allocated for this purpose.
> > > This wastes memory, but stuffing this information in padding bytes is
> > > too fragile.  Furthermore, it is necessary to grab a reference to the
> > > map before making the asynchronous call, and release the reference when
> > > the call returns.
> > > 
> > > It is also necessary to guard against reentrancy in gntdev_map_put(),
> > > and to handle the case where userspace tries to map a mapping whose
> > > contents have not all been freed yet.
> > > 
> > > Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> > > Reviewed-by: Juergen Gross <jgross@suse.com>
> > > Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > ---
> > >   drivers/xen/gntdev-common.h |   7 ++
> > >   drivers/xen/gntdev.c        | 142 +++++++++++++++++++++++++-----------
> > >   2 files changed, 106 insertions(+), 43 deletions(-)
> > 
> > All now queued up, thanks.
> 
> Sorry, but I think at least the version for 5.10 is fishy, as it removes
> the tests for successful allocations of add->map_ops and add->unmap_ops.
> 
> I need to do a thorough review of the patches (the "Reviewed-by:" tag in
> the patches is the one for the upstream patch).
> 
> Greg, can you please wait for my explicit "okay" for the backports?

Ok, I'll go drop them all from the queues now.

thanks,

greg k-h
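
The double-unmap guard described in the commit message above can be sketched in plain C. This is a hypothetical userspace model, not the kernel patch itself; the struct layout and the `being_removed` name follow the commit's description, and the real code would queue an asynchronous grant unmap where the comment indicates:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: once unmapping is asynchronous, the handle entries can no
 * longer be overwritten with an invalid-handle marker early enough to
 * stop a second unmap of the same page, so a separate per-page flag
 * records which entries already have an unmap in flight. */
struct grant_map {
    int count;
    int *handles;          /* grant handles, one per page */
    bool *being_removed;   /* set once an async unmap was issued */
};

/* Returns how many unmaps were actually issued; pages whose unmap is
 * already in flight are skipped, so repeated calls are harmless. */
static int issue_unmaps(struct grant_map *map, int offset, int pages)
{
    int issued = 0;
    for (int i = offset; i < offset + pages; i++) {
        if (map->being_removed[i])
            continue;          /* unmap already pending for this page */
        map->being_removed[i] = true;
        issued++;              /* real code would queue the async unmap here */
    }
    return issued;
}
```

A caller that might race (for example an MMU notifier and the file release path) can then invoke `issue_unmaps()` twice without unmapping any page more than once.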


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:05:04 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 14:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358513.587779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004hB-Pu; Thu, 30 Jun 2022 14:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358513.587779; Thu, 30 Jun 2022 14:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004fP-Jc; Thu, 30 Jun 2022 14:04:53 +0000
Received: by outflank-mailman (input) for mailman id 358513;
 Thu, 30 Jun 2022 14:04:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IH8=XF=oss.nxp.com=andrei.cherechesu@srs-se1.protection.inumbo.net>)
 id 1o6umK-0004O6-9U
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 14:04:20 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8398080f-f87d-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 16:04:10 +0200 (CEST)
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com (2603:10a6:803:5a::13)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 14:04:16 +0000
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5]) by VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5%6]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 14:04:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8398080f-f87d-11ec-bdce-3d151da133c5
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N28HQzeyfEkoXNgIyh6g3bG4X1wzaLamMZwAcohnVXQ76Q/PKm2TxVyE8p9EpQPr3hhSudVf2+RWX9J1U7cZ96DSdVO21i5R8Jc19xbTBGZvmaTL4qJK/pbNMC1YfxyDHTKDdCyNsqu5oWpqp8OeWNz33V5b0L+olVnT+3o0eHc1DrP2aGopaU2NzL6DA9xmq7QAYTcLLPWho/QU6gaZmBdsDhHZfMolcIywrZefEGbqsAzGbv69NaWZBLzgQRDhixk/HpKWZRZUc8xndf1L+KsX6L3LKaoi/7bKLJbCmlrZF1l7GSdkHzJz9ARXi7p0O//TRFAeYV4DbZshQAhgiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=slRq7QSM4ixKn2x0tNbd2VGZDBfK2EupasNiNNU+x2c=;
 b=nkBLdNbmNoyQ3V0LM2eNHDA8otlqnWNCEGLxyU09082a7HtXThmCqyQTilYT+WarAbuJbupVsN1yqDGSeg83bvJAinKmxm1XeIkJgp9Gv4buU4EMWFMANG8bHNiodEPpU3nJZT7oUPAKaqzFQ63dlbJbqm3bZkYpuaCQtQ2ZI2VU3zHaRZx9ffW69VbL1EXR4xb4sK9h3pJ+4nMxcLAATF3+OF3Iu+84eTIhyCJYSyoL9rUW4V38hgQV/rqB0zHUVQQFXCn+0i7uD5o7qBRwRVeDg2QxuA6UJ5dv8xQNX6hgMmUy5VGRHSwnRXsCsWuTcXvZ5vDOj0XGeLBR/eqTeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oss.nxp.com; dmarc=pass action=none header.from=oss.nxp.com;
 dkim=pass header.d=oss.nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=NXP1.onmicrosoft.com;
 s=selector2-NXP1-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=slRq7QSM4ixKn2x0tNbd2VGZDBfK2EupasNiNNU+x2c=;
 b=KqRcy4Z6BEh5Ei+OsiD3dQNyV5WK6l2weeqhYIlP4UknJm5B1EF5KFj3mwvj30nSkBg4fRGl20g5BQdsRVXWa/fuEOdcy7HaTRt+99SATXSA/V8BkHQwgzPPxioGLQokrzsLelZyfVCt1aY4OU7a8R7Vg2nVTmz3D29SsCymI5Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=oss.nxp.com;
From: "Andrei Cherechesu (OSS)" <andrei.cherechesu@oss.nxp.com>
To: xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net,
	sstabellini@kernel.org,
	Andrei Cherechesu <andrei.cherechesu@nxp.com>
Subject: [ImageBuilder][PATCH v2 3/4] uboot-script-gen: Enable appending extra commands to boot script
Date: Thu, 30 Jun 2022 17:00:28 +0300
Message-Id: <20220630140028.3227385-4-andrei.cherechesu@oss.nxp.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
References: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR03CA0089.eurprd03.prod.outlook.com
 (2603:10a6:208:69::30) To VI1PR04MB5056.eurprd04.prod.outlook.com
 (2603:10a6:803:5a::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3acbee7f-13ba-46e9-6d00-08da5aa16b77
X-MS-TrafficTypeDiagnostic: DU0PR04MB9322:EE_
X-MS-Exchange-SharedMailbox-RoutingAgent-Processed: True
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8hu0YU/b5vU/Au3Dw7O9H3DsUtx/yMPpoUQUu7/TS2HaVc6vxDmJVL2EbGtmNSglIRVSTJmr4WfBp8KbpOYuFs7SmHJ7J3EeDzxFnHxcOcaVW3QVBKltR9fvH0JHaErKhRpIYJU/7voixjiTZhr5b7gWM58OkTQd2uXznnt7bYAEAxAbl3qkRxJZA9GxOsX5biCKWn6RbpQUBLSZBJ5ABa6KiUwj6kqcEA18wppzAWT+l7E9GYJls9+nRLmQH6+albhqB4vE1bEE7JSUvIeaSPQVOk7Ydj/qJRgzzQ4Y4w4+LfkqMIhtrHc3wu0lkKlq35t8I/s3u7dpYJHT3FrkqiSHGJsmSDC1B5I6em2eLQZKj4E/cDRqFRQTSP7eKCDYfOjHkfaXq5i8tP+bWvVcOXOuDZenil25vLW5q7NYffBVrH5Y00ANE9vd5HbQ28ALfblZTPyKAzp/wHT7lOfP+m/GBQrDw+Zx3FSbtVb72S0KLb/AWGWeHfxz1EiDPES1X+CuNbrDpCDzQ68TqYOpDj1oc6RkLrZHXWy/WXYqOwlcrC0Wr0YbMxTfoK31gav4TCT+91TZKE6aPbk3aoUSxFsUzhMdAT8T+jbRUUCOJ3nf4kTiAmZqjCS+mipVlPNa9LdqkK4g7SmxJl2on3cm+D4MktG65DjuzDrnnHY2XQUNMO0ktLVlaBXhfsI2MlE06HGoph+bJVLnrY/5JZWX5SJuFF6uUFU6/5PNM5Vikayba2jh95V/NC21B2zIWz7hQzQxK1lGMNe3AIGyO2VCOzXRSEFYf7wCQ8kl7mvFWvmhaj4w1Lkg4c23pLy20TZ2
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5056.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230016)(4636009)(346002)(366004)(136003)(396003)(39860400002)(376002)(478600001)(41300700001)(5660300002)(6666004)(186003)(6486002)(6512007)(52116002)(1076003)(86362001)(26005)(8936002)(2906002)(2616005)(66946007)(4326008)(38100700002)(8676002)(6506007)(316002)(66476007)(66556008)(38350700002)(83380400001)(6916009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?tOqvql/MD4ejQ4kO7mo54xgh9ZR7tORA7n1HzA+7Kjzk+vWBcQopMhSvOPsZ?=
X-OriginatorOrg: oss.nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3acbee7f-13ba-46e9-6d00-08da5aa16b77
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5056.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 14:04:16.7132
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

From: Andrei Cherechesu <andrei.cherechesu@nxp.com>

Added the "-a" parameter, which stands for the APPEND_EXTRA_CMDS
option. It enables the user to specify the path to a text file that
contains, one per line, u-boot commands to be added to the generated
script as "fixups", before the boot command.

The file specified via the "-a" parameter is copied as-is into the
generated script.
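
The effect can be sketched outside the script itself; this is an
illustrative example (file names and the fixup commands are made up,
not part of the patch), emulating how the fixups file is appended
verbatim to the generated boot script source:

```shell
#!/bin/sh
set -e

# Hypothetical extra-commands file, one u-boot command per line
# (quoted heredoc so ${fdt_addr} is kept literal for u-boot)
cat > extra_cmds.txt <<'EOF'
fdt addr ${fdt_addr}
fdt resize 1024
EOF

# Stand-in for the generated $UBOOT_SOURCE
echo 'setenv fdt_high 0xffffffffffffffff' > boot.source

# The patch copies the file as-is into the script, before the boot command
cat extra_cmds.txt >> boot.source

cat boot.source
```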

Signed-off-by: Andrei Cherechesu <andrei.cherechesu@nxp.com>
---
 scripts/uboot-script-gen | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index f8d2fb0..444c65a 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -416,6 +416,10 @@ function check_file_type()
     elif [ "$type" = "Device Tree Blob" ]
     then
         type="Device Tree Blob\|data"
+
+    elif [ "$type" = "text" ]
+    then
+        type="ASCII text"
     fi
 
     file -L $filename | grep "$type" &> /dev/null
@@ -973,7 +977,7 @@ function print_help
 {
     script=`basename "$0"`
     echo "usage:"
-    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f] [-p PREPEND_PATH] [-s]"
+    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f] [-p PREPEND_PATH] [-s] [-a APPEND_EXTRA_CMDS]"
     echo "	$script -h"
     echo "where:"
     echo "	CONFIG_FILE - configuration file"
@@ -991,6 +995,7 @@ function print_help
     echo "	-f - enable generating a FIT image"
     echo "	PREPEND_PATH - path to be prepended to file names to match deploy location within rootfs"
     echo "	-s - enable dynamic loading of binaries by storing their addresses and sizes u-boot env variables"
+    echo "	APPEND_EXTRA_CMDS - absolute path to file containing extra u-boot cmds (fixups) to be run before booting"
     echo "	-h - prints out the help message and exits "
     echo "Defaults:"
     echo "	CONFIG_FILE=$cfg_file, UBOOT_TYPE=\"LOAD_CMD\" env var, DIRECTORY=$uboot_dir"
@@ -998,7 +1003,7 @@ function print_help
     echo "	$script -c ../config -d ./build42 -t \"scsi load 1:1\""
 }
 
-while getopts ":c:t:d:ho:k:u:fp:s" opt; do
+while getopts ":c:t:d:ho:k:u:fp:sa:" opt; do
     case ${opt} in
     t )
         case $OPTARG in
@@ -1043,6 +1048,9 @@ while getopts ":c:t:d:ho:k:u:fp:s" opt; do
     s )
         dynamic_loading_opt=y
         ;;
+    a )
+        extra_cmds_file=$OPTARG
+        ;;
     h )
         print_help
         exit 0
@@ -1235,6 +1243,13 @@ load_file $DEVICE_TREE "host_fdt"
 bitstream_load_and_config  # bitstream is loaded last but used first
 device_tree_editing $device_tree_addr
 
+# append extra u-boot commands (fixups) to script before boot command
+if test "$extra_cmds_file"
+then
+    check_file_type "$extra_cmds_file" "text"
+    cat $extra_cmds_file >> $UBOOT_SOURCE
+fi
+
 # disable device tree reloation
 echo "setenv fdt_high 0xffffffffffffffff" >> $UBOOT_SOURCE
 
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:05:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 14:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358511.587765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004V7-6b; Thu, 30 Jun 2022 14:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358511.587765; Thu, 30 Jun 2022 14:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004UL-25; Thu, 30 Jun 2022 14:04:53 +0000
Received: by outflank-mailman (input) for mailman id 358511;
 Thu, 30 Jun 2022 14:04:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IH8=XF=oss.nxp.com=andrei.cherechesu@srs-se1.protection.inumbo.net>)
 id 1o6umJ-0004O6-3c
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 14:04:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 821adbf6-f87d-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 16:04:08 +0200 (CEST)
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com (2603:10a6:803:5a::13)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 14:04:14 +0000
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5]) by VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5%6]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 14:04:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 821adbf6-f87d-11ec-bdce-3d151da133c5
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oss.nxp.com; dmarc=pass action=none header.from=oss.nxp.com;
 dkim=pass header.d=oss.nxp.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=oss.nxp.com;
From: "Andrei Cherechesu (OSS)" <andrei.cherechesu@oss.nxp.com>
To: xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net,
	sstabellini@kernel.org,
	Andrei Cherechesu <andrei.cherechesu@nxp.com>
Subject: [ImageBuilder][PATCH v2 1/4] scripts: Add support for prepending path to file names
Date: Thu, 30 Jun 2022 17:00:26 +0300
Message-Id: <20220630140028.3227385-2-andrei.cherechesu@oss.nxp.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
References: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR03CA0089.eurprd03.prod.outlook.com
 (2603:10a6:208:69::30) To VI1PR04MB5056.eurprd04.prod.outlook.com
 (2603:10a6:803:5a::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 933f6104-b5ca-4a51-6275-08da5aa169be
X-MS-TrafficTypeDiagnostic: DU0PR04MB9322:EE_
X-MS-Exchange-SharedMailbox-RoutingAgent-Processed: True
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oss.nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 933f6104-b5ca-4a51-6275-08da5aa169be
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5056.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 14:04:13.8853
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

From: Andrei Cherechesu <andrei.cherechesu@nxp.com>

Added support for prepending a path to file names in the final
generated u-boot script, for the use case where the files live in a
separate folder that can be accessed with a given $LOAD_CMD.

For example, "fatload mmc 0:2" can be the LOAD_CMD while the files
need to be loaded from the /boot folder within the 2nd partition,
not from the root ("/"). By specifying the "-p <path>" parameter when
running the script, a path such as "/boot" is automatically prepended
to the generated u-boot commands that load the files into the board's
memory.

Also added support to the disk_image script, to enable generating
a FAT partition with the binaries deployed in a custom folder
within it when the "-p" parameter is specified.
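
The optional prefix relies on bash's ${var:+word} expansion, which the
patch uses as ${prepend_path:+$prepend_path/}: it expands to "word"
only when the variable is set and non-empty, so no stray "/" appears
when -p is not given. A small standalone sketch (the load command and
file name are illustrative):

```shell
#!/bin/sh
relative_path="xen"

# Without -p: prepend_path is empty, so the prefix expands to nothing
prepend_path=""
echo "fatload mmc 0:2 0x80000 ${prepend_path:+$prepend_path/}$relative_path"
# -> fatload mmc 0:2 0x80000 xen

# With -p boot: the prefix expands to "boot/"
prepend_path="boot"
echo "fatload mmc 0:2 0x80000 ${prepend_path:+$prepend_path/}$relative_path"
# -> fatload mmc 0:2 0x80000 boot/xen
```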

Signed-off-by: Andrei Cherechesu <andrei.cherechesu@nxp.com>
---
 scripts/disk_image       | 37 +++++++++++++++++++++++--------------
 scripts/uboot-script-gen | 12 ++++++++----
 2 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/scripts/disk_image b/scripts/disk_image
index 12fb06b..97e798f 100755
--- a/scripts/disk_image
+++ b/scripts/disk_image
@@ -539,7 +539,7 @@ function write_rootfs()
 function print_help
 {
     echo "usage:"
-    echo "	$0 -c CONFIG_FILE -d UBOOT_DIRECTORY -t UBOOT_TYPE <-w WORK_DIRECTORY> <-s SLACK> <-a> -o IMG_FILE"
+    echo "	$0 -c CONFIG_FILE -d UBOOT_DIRECTORY -t UBOOT_TYPE <-w WORK_DIRECTORY> <-s SLACK> <-a> -o IMG_FILE <-p PREPEND_PATH>"
     echo "	$0 -h"
     echo "where:"
     echo "	-c CONFIG_FILE - configuration file"
@@ -553,6 +553,7 @@ function print_help
     echo "	-s SLACK - free MB to add to each partition, default 128"
     echo "	-a specifies that the size of IMG_FILE has to be aligned to the nearest power of two"
     echo "	-o IMG_FILE - the output img file "
+    echo "	-p PREPEND_PATH - path to be prepended to file names to customize deploy location within rootfs"
     echo "Example:"
     echo "	$0 -c ../config -d ./build42 -w tmp -o disk.img"
 }
@@ -564,7 +565,7 @@ then
     exit 1
 fi
 
-while getopts ":w:d:c:t:s:o:ah" opt
+while getopts ":w:d:c:t:s:o:ahp:" opt
 do
     case ${opt} in
     t )
@@ -606,6 +607,9 @@ do
     a )
         ALIGN=1
         ;;
+    p )
+        PREPEND_PATH="$OPTARG"
+        ;;
     h )
         print_help
         exit 0
@@ -828,56 +832,61 @@ mount /dev/mapper/diskimage1 $DESTDIR/part/disk1
 
 # only copy over files that were counted for the partition size
 cd "$UBOOT_OUT"
-cp --parents "$DOM0_KERNEL" "${DESTDIR_ABS}/part/disk1/"
-cp --parents "$DEVICE_TREE" "${DESTDIR_ABS}/part/disk1/"
-cp --parents "$UBOOT_SCRIPT" "${DESTDIR_ABS}/part/disk1/"
+if [ -n "$PREPEND_PATH" ]
+then
+    mkdir -p "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
+fi
+
+cp --parents "$DOM0_KERNEL" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
+cp --parents "$DEVICE_TREE" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
+cp --parents "$UBOOT_SCRIPT" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
 
 if test "${DOM0_RAMDISK}"
 then
-    cp --parents "$DOM0_RAMDISK" "${DESTDIR_ABS}/part/disk1/"
+    cp --parents "$DOM0_RAMDISK" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
 fi
 if test "$NUM_DT_OVERLAY" && test "$NUM_DT_OVERLAY" -gt 0
 then
     i=0
     while test $i -lt "$NUM_DT_OVERLAY"
     do
-        cp --parents "${DT_OVERLAY[$i]}" "${DESTDIR_ABS}/part/disk1/"
+        cp --parents "${DT_OVERLAY[$i]}" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
         i=$(( $i + 1 ))
     done
 fi
 if test "${UBOOT_SOURCE}"
 then
-    cp --parents "$UBOOT_SOURCE" "${DESTDIR_ABS}/part/disk1/"
+    cp --parents "$UBOOT_SOURCE" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
 fi
 if test "${XEN}"
 then
-    cp --parents "$XEN" "${DESTDIR_ABS}/part/disk1/"
+    cp --parents "$XEN" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
 fi
 if test "$NUM_BOOT_AUX_FILE" && test "$NUM_BOOT_AUX_FILE" -gt 0
 then
     i=0
     while test $i -lt "$NUM_BOOT_AUX_FILE"
     do
-        cp --parents "${BOOT_AUX_FILE[$i]}" "${DESTDIR_ABS}/part/disk1/"
+        cp --parents "${BOOT_AUX_FILE[$i]}" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
         i=$(( $i + 1 ))
     done
 fi
 if test "${BITSTREAM}"
 then
-    cp --parents "$BITSTREAM" "${DESTDIR_ABS}/part/disk1/"
+    cp --parents "$BITSTREAM" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
 fi
 
 i=0
 while test $i -lt $NUM_DOMUS
 do
-    cp --parents "${DOMU_KERNEL[$i]}" "${DESTDIR_ABS}/part/disk1/"
+    cp --parents "${DOMU_KERNEL[$i]}" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
     if test "${DOMU_RAMDISK[$i]}"
     then
-        cp --parents "${DOMU_RAMDISK[$i]}" "${DESTDIR_ABS}/part/disk1/"
+        cp --parents "${DOMU_RAMDISK[$i]}" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
     fi
     if test "${DOMU_PASSTHROUGH_DTB[$i]}"
     then
-        cp --parents "${DOMU_PASSTHROUGH_DTB[$i]}" "${DESTDIR_ABS}/part/disk1/"
+        cp --parents "${DOMU_PASSTHROUGH_DTB[$i]}" "${DESTDIR_ABS}/part/disk1/${PREPEND_PATH}"
     fi
     i=$(( $i + 1 ))
 done
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 085e29f..8f08cd6 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -316,7 +316,7 @@ function load_file()
     then
         echo "imxtract \$fit_addr $fit_scr_name $memaddr" >> $UBOOT_SOURCE
     else
-        echo "$LOAD_CMD $memaddr $relative_path" >> $UBOOT_SOURCE
+        echo "$LOAD_CMD $memaddr ${prepend_path:+$prepend_path/}$relative_path" >> $UBOOT_SOURCE
     fi
     add_size $filename
 }
@@ -891,7 +891,7 @@ function print_help
 {
     script=`basename "$0"`
     echo "usage:"
-    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f]"
+    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f] [-p PREPEND_PATH]"
     echo "	$script -h"
     echo "where:"
     echo "	CONFIG_FILE - configuration file"
@@ -907,6 +907,7 @@ function print_help
     echo "	HINT - the file name of the crt and key file minus the suffix (ex, hint.crt and hint.key)"
     echo "	U-BOOT_DTB - u-boot control dtb so that the public key gets added to it"
     echo "	-f - enable generating a FIT image"
+    echo "	PREPEND_PATH - path to be prepended to file names to match deploy location within rootfs"
     echo "	-h - prints out the help message and exits "
     echo "Defaults:"
     echo "	CONFIG_FILE=$cfg_file, UBOOT_TYPE=\"LOAD_CMD\" env var, DIRECTORY=$uboot_dir"
@@ -914,7 +915,7 @@ function print_help
     echo "	$script -c ../config -d ./build42 -t \"scsi load 1:1\""
 }
 
-while getopts ":c:t:d:ho:k:u:f" opt; do
+while getopts ":c:t:d:ho:k:u:fp:" opt; do
     case ${opt} in
     t )
         case $OPTARG in
@@ -953,6 +954,9 @@ while getopts ":c:t:d:ho:k:u:f" opt; do
     f )
         fit_opt=y
         ;;
+    p )
+        prepend_path="$OPTARG"
+        ;;
     h )
         print_help
         exit 0
@@ -1179,5 +1183,5 @@ then
     echo "$LOAD_CMD $fit_addr $FIT; source $fit_addr:boot_scr"
 else
     echo "Generated uboot script $UBOOT_SCRIPT, to be loaded at address $uboot_addr:"
-    echo "$LOAD_CMD $uboot_addr $UBOOT_SCRIPT; source $uboot_addr"
+    echo "$LOAD_CMD $uboot_addr ${prepend_path:+$prepend_path/}$UBOOT_SCRIPT; source $uboot_addr"
 fi
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:05:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 14:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358515.587784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ums-0004pm-3o; Thu, 30 Jun 2022 14:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358515.587784; Thu, 30 Jun 2022 14:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004nh-Ti; Thu, 30 Jun 2022 14:04:53 +0000
Received: by outflank-mailman (input) for mailman id 358515;
 Thu, 30 Jun 2022 14:04:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IH8=XF=oss.nxp.com=andrei.cherechesu@srs-se1.protection.inumbo.net>)
 id 1o6umL-0004O6-5X
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 14:04:21 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70057.outbound.protection.outlook.com [40.107.7.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 845c74a2-f87d-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 16:04:12 +0200 (CEST)
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com (2603:10a6:803:5a::13)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 14:04:18 +0000
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5]) by VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5%6]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 14:04:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 845c74a2-f87d-11ec-bdce-3d151da133c5
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oss.nxp.com; dmarc=pass action=none header.from=oss.nxp.com;
 dkim=pass header.d=oss.nxp.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=oss.nxp.com;
From: "Andrei Cherechesu (OSS)" <andrei.cherechesu@oss.nxp.com>
To: xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net,
	sstabellini@kernel.org,
	Andrei Cherechesu <andrei.cherechesu@nxp.com>
Subject: [ImageBuilder][PATCH v2 4/4] uboot-script-gen: Enable not adding boot command to script
Date: Thu, 30 Jun 2022 17:00:29 +0300
Message-Id: <20220630140028.3227385-5-andrei.cherechesu@oss.nxp.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
References: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR03CA0089.eurprd03.prod.outlook.com
 (2603:10a6:208:69::30) To VI1PR04MB5056.eurprd04.prod.outlook.com
 (2603:10a6:803:5a::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 91ed6371-c465-42ee-1f5a-08da5aa16c93
X-MS-TrafficTypeDiagnostic: DU0PR04MB9322:EE_
X-MS-Exchange-SharedMailbox-RoutingAgent-Processed: True
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oss.nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 91ed6371-c465-42ee-1f5a-08da5aa16c93
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5056.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 14:04:18.5412
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

From: Andrei Cherechesu <andrei.cherechesu@nxp.com>

If the "BOOT_CMD" variable is set to "none" in the config file, the
boot command (e.g. "booti") will not be added to the generated
script, to allow the user to customize the u-boot env or the
device tree after executing the script commands and before
actually booting.

Added commands to store the addresses where the Xen image and the
device-tree file are loaded, in the 'host_kernel_addr' and
'host_fdt_addr' variables, when the boot command is skipped and the
"-s" parameter is not used.

The `booti` command can then be executed as part of u-boot's 'bootcmd'
variable, which should contain:
	1. fetching the generated u-boot script
	2. executing the script
	3. running the `booti ${host_kernel_addr} - ${host_fdt_addr}` command
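
Wired into u-boot, the three steps could look along these lines (the
load command, partition, addresses, and script name are illustrative,
not taken from the patch):

```
setenv bootcmd 'fatload mmc 0:2 0xC00000 boot.scr; source 0xC00000; booti ${host_kernel_addr} - ${host_fdt_addr}'
saveenv
```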

Signed-off-by: Andrei Cherechesu <andrei.cherechesu@nxp.com>
---
 README.md                |  5 ++++-
 scripts/uboot-script-gen | 16 +++++++++++++---
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index cb15ca5..b1a9b9d 100644
--- a/README.md
+++ b/README.md
@@ -80,7 +80,10 @@ Where:
 
 - BOOT_CMD specifies the u-boot command used to boot the binaries.
   By default, it is 'booti'. The acceptable values are 'booti', 'bootm'
-  and 'bootefi'.
+  'bootefi', and 'none'. If the value is 'none', the BOOT_CMD is not
+  added to the boot script, and the addresses of the Xen binary and
+  the DTB are stored in the 'host_kernel_addr' and 'host_fdt_addr'
+  u-boot env variables respectively, to be used manually when booting.
 
 - DEVICE_TREE specifies the DTB file to load.
 
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 444c65a..994369c 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -966,7 +966,7 @@ function check_depends()
 
 function check_boot_cmd()
 {
-    if ! [[ " bootm booti bootefi " =~ " ${BOOT_CMD}" ]]
+    if ! [[ " bootm booti bootefi none " =~ " ${BOOT_CMD}" ]]
     then
         echo "\"BOOT_CMD=$BOOT_CMD\" is not valid"
         exit 1
@@ -1255,9 +1255,19 @@ echo "setenv fdt_high 0xffffffffffffffff" >> $UBOOT_SOURCE
 
 if test "$dynamic_loading_opt"
 then
-    echo "$BOOT_CMD \${host_kernel_addr} - \${host_fdt_addr}" >> $UBOOT_SOURCE
+    if [ "$BOOT_CMD" != "none" ]
+    then
+        echo "$BOOT_CMD \${host_kernel_addr} - \${host_fdt_addr}" >> $UBOOT_SOURCE
+    fi
 else
-    echo "$BOOT_CMD $kernel_addr - $device_tree_addr" >> $UBOOT_SOURCE
+    if [ "$BOOT_CMD" != "none" ]
+    then
+        echo "$BOOT_CMD $kernel_addr - $device_tree_addr" >> $UBOOT_SOURCE
+    else
+        # skip boot command but store load addresses to be used later
+        echo "setenv host_kernel_addr $kernel_addr" >> $UBOOT_SOURCE
+        echo "setenv host_fdt_addr $device_tree_addr" >> $UBOOT_SOURCE
+    fi
 fi
 
 if test "$FIT"
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:05:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 14:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358512.587772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004br-GV; Thu, 30 Jun 2022 14:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358512.587772; Thu, 30 Jun 2022 14:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umr-0004Zn-B5; Thu, 30 Jun 2022 14:04:53 +0000
Received: by outflank-mailman (input) for mailman id 358512;
 Thu, 30 Jun 2022 14:04:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IH8=XF=oss.nxp.com=andrei.cherechesu@srs-se1.protection.inumbo.net>)
 id 1o6umJ-0004O6-OP
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 14:04:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8319805a-f87d-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 16:04:09 +0200 (CEST)
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com (2603:10a6:803:5a::13)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 14:04:15 +0000
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5]) by VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5%6]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 14:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8319805a-f87d-11ec-bdce-3d151da133c5
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oss.nxp.com; dmarc=pass action=none header.from=oss.nxp.com;
 dkim=pass header.d=oss.nxp.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=oss.nxp.com;
From: "Andrei Cherechesu (OSS)" <andrei.cherechesu@oss.nxp.com>
To: xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net,
	sstabellini@kernel.org,
	Andrei Cherechesu <andrei.cherechesu@nxp.com>
Subject: [ImageBuilder][PATCH v2 2/4] uboot-script-gen: Dynamically compute addr and size when loading binaries
Date: Thu, 30 Jun 2022 17:00:27 +0300
Message-Id: <20220630140028.3227385-3-andrei.cherechesu@oss.nxp.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
References: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR03CA0089.eurprd03.prod.outlook.com
 (2603:10a6:208:69::30) To VI1PR04MB5056.eurprd04.prod.outlook.com
 (2603:10a6:803:5a::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d9c75e5c-1b4c-4582-c8fa-08da5aa16aac
X-MS-TrafficTypeDiagnostic: DU0PR04MB9322:EE_
X-MS-Exchange-SharedMailbox-RoutingAgent-Processed: True
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oss.nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d9c75e5c-1b4c-4582-c8fa-08da5aa16aac
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5056.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 14:04:15.3851
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VoCm1R/4Tv579CrJBDiUZo5DW0flBwP9NdKGmmPYGSSNxoNrEVX9HE67SqbzQrhxC7/QLlMjZYXhpPlxVqRrd8x7fGFj7biLyLNlyleLIuI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

From: Andrei Cherechesu <andrei.cherechesu@nxp.com>

Previously, the script precomputed the sizes of the loaded binaries
and the addresses where they are loaded before generating the boot
script, and the sizes and addresses that need to be provided to Xen
via /chosen were hardcoded in the boot script.

Added an option, enabled via the "-s" parameter, to use the ${filesize}
variable in u-boot, which is set automatically after a *load command.
The value of ${filesize} is now stored in a u-boot env variable whose
name corresponds to the binary that was just loaded. These newly set
variables are then used when setting the /chosen node, instead of the
hardcoded values.

Also, the loading addresses for the files are dynamically computed
and aligned to 0x200000 using the `setexpr` u-boot command. With this
option enabled, no addresses are hardcoded inside the boot script;
everything is derived from the MEMORY_START parameter and each
binary's size.
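
The 2 MiB round-up that the generated `setexpr` commands perform at boot
time can be sketched in plain bash (the `offset` value comes from the
script; the sample `curr_addr` and `filesize` values are made up for
illustration):

```shell
offset=$((2*1024*1024))                               # 0x200000 alignment unit
padding_mask=$(printf "0x%X" $((offset - 1)))         # 0x1FFFFF
padding_mask_inv=$(printf "0x%X" $((~(offset - 1))))  # two's-complement inverse

curr_addr=0x80080000   # hypothetical load address of the current binary
filesize=0x1200        # hypothetical size reported by u-boot after a *load

# Mirrors the three setexpr steps: add the size, add the mask, then
# mask off the low bits to land on the next 2 MiB boundary.
next=$(( (curr_addr + filesize + padding_mask) & padding_mask_inv ))
next_hex=$(printf "0x%X" "$next")
echo "$next_hex"       # next 2 MiB-aligned load address
```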

If the "-s" parameter is not used, the script does not store the
binaries' sizes and addresses in variables and uses the precomputed
ones when advertising them in the /chosen node.
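
With "-s", the generated u-boot script then contains sequences roughly
like the following (the load command and binary name are hypothetical;
the variable names and setexpr steps follow the patch):

```
setenv curr_addr 80080000
setenv padding_mask 0x1FFFFF
setenv padding_mask_inv 0xFFFFFFFFFFE00000
load mmc 0:1 ${curr_addr} dom0_linux
setenv dom0_linux_addr ${curr_addr}
setenv dom0_linux_size ${filesize}
setexpr curr_addr ${curr_addr} + ${filesize}
setexpr curr_addr ${curr_addr} + ${padding_mask}
setexpr curr_addr ${curr_addr} & ${padding_mask_inv}
```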

Signed-off-by: Andrei Cherechesu <andrei.cherechesu@nxp.com>
---
 scripts/uboot-script-gen | 136 ++++++++++++++++++++++++++++++++-------
 1 file changed, 114 insertions(+), 22 deletions(-)

diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 8f08cd6..f8d2fb0 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -4,6 +4,9 @@ offset=$((2*1024*1024))
 filesize=0
 prog_req=(mkimage file fdtput mktemp awk)
 
+padding_mask=`printf "0x%X\n" $(($offset - 1))`
+padding_mask_inv=`printf "0x%X\n" $((~$padding_mask))`
+
 function cleanup_and_return_err()
 {
     rm -f $UBOOT_SOURCE $UBOOT_SCRIPT
@@ -91,10 +94,18 @@ function add_device_tree_kernel()
     local size=$3
     local bootargs=$4
 
-    dt_mknode "$path" "module$addr"
-    dt_set "$path/module$addr" "compatible" "str_a" "multiboot,kernel multiboot,module"
-    dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
-    dt_set "$path/module$addr" "bootargs" "str" "$bootargs"
+    if test "$dynamic_loading_opt"
+    then
+        dt_mknode "$path" "module\${"$addr"}"
+        dt_set "$path/module\${"$addr"}" "compatible" "str_a" "multiboot,kernel multiboot,module"
+        dt_set "$path/module\${"$addr"}" "reg" "hex"  "0x0 0x\${"$addr"} 0x0 0x\${"$size"}"
+        dt_set "$path/module\${"$addr"}" "bootargs" "str" "$bootargs"
+    else
+        dt_mknode "$path" "module$addr"
+        dt_set "$path/module$addr" "compatible" "str_a" "multiboot,kernel multiboot,module"
+        dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
+        dt_set "$path/module$addr" "bootargs" "str" "$bootargs"
+    fi
 }
 
 
@@ -104,9 +115,16 @@ function add_device_tree_ramdisk()
     local addr=$2
     local size=$3
 
-    dt_mknode "$path"  "module$addr"
-    dt_set "$path/module$addr" "compatible" "str_a" "multiboot,ramdisk multiboot,module"
-    dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
+    if test "$dynamic_loading_opt"
+    then
+        dt_mknode "$path" "module\${"$addr"}"
+        dt_set "$path/module\${"$addr"}" "compatible" "str_a" "multiboot,ramdisk multiboot,module"
+        dt_set "$path/module\${"$addr"}" "reg" "hex"  "0x0 0x\${"$addr"} 0x0 0x\${"$size"}"
+    else
+        dt_mknode "$path" "module$addr"
+        dt_set "$path/module$addr" "compatible" "str_a" "multiboot,ramdisk multiboot,module"
+        dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
+    fi
 }
 
 
@@ -116,9 +134,16 @@ function add_device_tree_passthrough()
     local addr=$2
     local size=$3
 
-    dt_mknode "$path"  "module$addr"
-    dt_set "$path/module$addr" "compatible" "str_a" "multiboot,device-tree multiboot,module"
-    dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
+    if test "$dynamic_loading_opt"
+    then
+        dt_mknode "$path" "module\${"$addr"}"
+        dt_set "$path/module\${"$addr"}" "compatible" "str_a" "multiboot,device-tree multiboot,module"
+        dt_set "$path/module\${"$addr"}" "reg" "hex"  "0x0 0x\${"$addr"} 0x0 0x\${"$size"}"
+    else
+        dt_mknode "$path" "module$addr"
+        dt_set "$path/module$addr" "compatible" "str_a" "multiboot,device-tree multiboot,module"
+        dt_set "$path/module$addr" "reg" "hex"  "0x0 $addr 0x0 $(printf "0x%x" $size)"
+    fi
 }
 
 function add_device_tree_mem()
@@ -186,7 +211,12 @@ function xen_device_tree_editing()
     then
         dt_mknode "/chosen" "dom0"
         dt_set "/chosen/dom0" "compatible" "str_a" "xen,linux-zimage xen,multiboot-module multiboot,module"
-        dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
+        if test "$dynamic_loading_opt"
+        then
+            dt_set "/chosen/dom0" "reg" "hex" "0x0 0x\${dom0_linux_addr} 0x0 0x\${dom0_linux_size}"
+        else
+            dt_set "/chosen/dom0" "reg" "hex" "0x0 $dom0_kernel_addr 0x0 $(printf "0x%x" $dom0_kernel_size)"
+        fi
         dt_set "/chosen" "xen,dom0-bootargs" "str" "$DOM0_CMD"
     fi
 
@@ -194,7 +224,12 @@ function xen_device_tree_editing()
     then
         dt_mknode "/chosen" "dom0-ramdisk"
         dt_set "/chosen/dom0-ramdisk" "compatible" "str_a" "xen,linux-initrd xen,multiboot-module multiboot,module"
-        dt_set "/chosen/dom0-ramdisk" "reg" "hex" "0x0 $ramdisk_addr 0x0 $(printf "0x%x" $ramdisk_size)"
+        if test "$dynamic_loading_opt"
+        then
+            dt_set "/chosen/dom0-ramdisk" "reg" "hex" "0x0 0x\${dom0_ramdisk_addr} 0x0 0x\${dom0_ramdisk_size}"
+        else
+            dt_set "/chosen/dom0-ramdisk" "reg" "hex" "0x0 $ramdisk_addr 0x0 $(printf "0x%x" $ramdisk_size)"
+        fi
     fi
 
     i=0
@@ -241,14 +276,29 @@ function xen_device_tree_editing()
             dt_set "/chosen/domU$i" "colors" "hex" "$(printf "0x%x" $bitcolors)"
         fi
 
-        add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
+        if test "$dynamic_loading_opt"
+        then
+            add_device_tree_kernel "/chosen/domU$i" "domU${i}_kernel_addr" "domU${i}_kernel_size" "${DOMU_CMD[$i]}"
+        else
+            add_device_tree_kernel "/chosen/domU$i" ${domU_kernel_addr[$i]} ${domU_kernel_size[$i]} "${DOMU_CMD[$i]}"
+        fi
         if test "${domU_ramdisk_addr[$i]}"
         then
-            add_device_tree_ramdisk "/chosen/domU$i" ${domU_ramdisk_addr[$i]} ${domU_ramdisk_size[$i]}
+            if test "$dynamic_loading_opt"
+            then
+                add_device_tree_ramdisk "/chosen/domU$i" "domU${i}_ramdisk_addr" "domU${i}_ramdisk_size"
+            else
+                add_device_tree_ramdisk "/chosen/domU$i" ${domU_ramdisk_addr[$i]} ${domU_ramdisk_size[$i]}
+            fi
         fi
         if test "${domU_passthrough_dtb_addr[$i]}"
         then
-            add_device_tree_passthrough "/chosen/domU$i" ${domU_passthrough_dtb_addr[$i]} ${domU_passthrough_dtb_size[$i]}
+            if test "$dynamic_loading_opt"
+            then
+                add_device_tree_passthrough "/chosen/domU$i" "domU${i}_fdt_addr" "domU${i}_fdt_size"
+            else
+                add_device_tree_passthrough "/chosen/domU$i" ${domU_passthrough_dtb_addr[$i]} ${domU_passthrough_dtb_size[$i]}
+            fi
         fi
         i=$(( $i + 1 ))
     done
@@ -271,7 +321,12 @@ function device_tree_editing()
 
     if test $UBOOT_SOURCE
     then
-        echo "fdt addr $device_tree_addr" >> $UBOOT_SOURCE
+        if test $dynamic_loading_opt
+        then
+            echo "fdt addr \${host_fdt_addr}" >> $UBOOT_SOURCE
+        else
+            echo "fdt addr $device_tree_addr" >> $UBOOT_SOURCE
+        fi
         echo "fdt resize 1024" >> $UBOOT_SOURCE
 
         if test $NUM_DT_OVERLAY && test $NUM_DT_OVERLAY -gt 0
@@ -306,7 +361,7 @@ function add_size()
 function load_file()
 {
     local filename=$1
-    local fit_scr_name=$2
+    local binary_name=$2
 
     local absolute_path="$(realpath --no-symlinks $filename)"
     local base="$(realpath $PWD)"/
@@ -314,11 +369,30 @@ function load_file()
 
     if test "$FIT"
     then
-        echo "imxtract \$fit_addr $fit_scr_name $memaddr" >> $UBOOT_SOURCE
+        echo "imxtract \$fit_addr $binary_name $memaddr" >> $UBOOT_SOURCE
     else
-        echo "$LOAD_CMD $memaddr ${prepend_path:+$prepend_path/}$relative_path" >> $UBOOT_SOURCE
+        if test "$dynamic_loading_opt"
+        then
+            echo "$LOAD_CMD \${curr_addr} ${prepend_path:+$prepend_path/}$relative_path" >> $UBOOT_SOURCE
+        else
+            echo "$LOAD_CMD $memaddr ${prepend_path:+$prepend_path/}$relative_path" >> $UBOOT_SOURCE
+        fi
     fi
     add_size $filename
+
+    if test "$dynamic_loading_opt" && test ! "$FIT"
+    then
+        # Store each binary's load addr and size
+        local binary_name_addr="${binary_name}_addr"
+        local binary_name_size="${binary_name}_size"
+        echo "setenv $binary_name_addr \${curr_addr}" >> $UBOOT_SOURCE
+        echo "setenv $binary_name_size \${filesize}" >> $UBOOT_SOURCE
+
+        # Compute load addr for next binary dynamically
+        echo "setexpr curr_addr \${curr_addr} \+ \${filesize}" >> $UBOOT_SOURCE
+        echo "setexpr curr_addr \${curr_addr} \+ \${padding_mask}" >> $UBOOT_SOURCE
+        echo "setexpr curr_addr \${curr_addr} \& \${padding_mask_inv}" >> $UBOOT_SOURCE
+    fi
 }
 
 function check_file_type()
@@ -536,6 +610,14 @@ generate_uboot_images()
 
 xen_file_loading()
 {
+    if test "$dynamic_loading_opt"
+    then
+        local curr_addr=`printf "%x\n" $memaddr`
+        echo "setenv curr_addr $curr_addr" >> $UBOOT_SOURCE
+        echo "setenv padding_mask $padding_mask" >> $UBOOT_SOURCE
+        echo "setenv padding_mask_inv $padding_mask_inv" >> $UBOOT_SOURCE
+    fi
+
     if test "$DOM0_KERNEL"
     then
         check_compressed_file_type $DOM0_KERNEL "executable"
@@ -891,7 +973,7 @@ function print_help
 {
     script=`basename "$0"`
     echo "usage:"
-    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f] [-p PREPEND_PATH]"
+    echo "	$script -c CONFIG_FILE -d DIRECTORY [-t LOAD_CMD] [-o FILE] [-k KEY_DIR/HINT [-u U-BOOT_DTB]] [-e] [-f] [-p PREPEND_PATH] [-s]"
     echo "	$script -h"
     echo "where:"
     echo "	CONFIG_FILE - configuration file"
@@ -908,6 +990,7 @@ function print_help
     echo "	U-BOOT_DTB - u-boot control dtb so that the public key gets added to it"
     echo "	-f - enable generating a FIT image"
     echo "	PREPEND_PATH - path to be prepended to file names to match deploy location within rootfs"
+    echo "	-s - enable dynamic loading of binaries by storing their addresses and sizes in u-boot env variables"
     echo "	-h - prints out the help message and exits "
     echo "Defaults:"
     echo "	CONFIG_FILE=$cfg_file, UBOOT_TYPE=\"LOAD_CMD\" env var, DIRECTORY=$uboot_dir"
@@ -915,7 +998,7 @@ function print_help
     echo "	$script -c ../config -d ./build42 -t \"scsi load 1:1\""
 }
 
-while getopts ":c:t:d:ho:k:u:fp:" opt; do
+while getopts ":c:t:d:ho:k:u:fp:s" opt; do
     case ${opt} in
     t )
         case $OPTARG in
@@ -957,6 +1040,9 @@ while getopts ":c:t:d:ho:k:u:fp:" opt; do
     p )
         prepend_path="$OPTARG"
         ;;
+    s )
+        dynamic_loading_opt=y
+        ;;
     h )
         print_help
         exit 0
@@ -1151,7 +1237,13 @@ device_tree_editing $device_tree_addr
 
# disable device tree relocation
 echo "setenv fdt_high 0xffffffffffffffff" >> $UBOOT_SOURCE
-echo "$BOOT_CMD $kernel_addr - $device_tree_addr" >> $UBOOT_SOURCE
+
+if test "$dynamic_loading_opt"
+then
+    echo "$BOOT_CMD \${host_kernel_addr} - \${host_fdt_addr}" >> $UBOOT_SOURCE
+else
+    echo "$BOOT_CMD $kernel_addr - $device_tree_addr" >> $UBOOT_SOURCE
+fi
 
 if test "$FIT"
 then
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:05:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 14:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358509.587760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umq-0004RL-TG; Thu, 30 Jun 2022 14:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358509.587760; Thu, 30 Jun 2022 14:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6umq-0004RE-QK; Thu, 30 Jun 2022 14:04:52 +0000
Received: by outflank-mailman (input) for mailman id 358509;
 Thu, 30 Jun 2022 14:04:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2IH8=XF=oss.nxp.com=andrei.cherechesu@srs-se1.protection.inumbo.net>)
 id 1o6umA-0004NQ-Ar
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 14:04:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr70071.outbound.protection.outlook.com [40.107.7.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 81abc6d4-f87d-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 16:04:07 +0200 (CEST)
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com (2603:10a6:803:5a::13)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Thu, 30 Jun
 2022 14:04:05 +0000
Received: from VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5]) by VI1PR04MB5056.eurprd04.prod.outlook.com
 ([fe80::1549:6f15:1949:f1a5%6]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 14:04:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81abc6d4-f87d-11ec-bd2d-47488cf2e6aa
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oss.nxp.com; dmarc=pass action=none header.from=oss.nxp.com;
 dkim=pass header.d=oss.nxp.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=oss.nxp.com;
From: "Andrei Cherechesu (OSS)" <andrei.cherechesu@oss.nxp.com>
To: xen-devel@lists.xenproject.org
Cc: viryaos-discuss@lists.sourceforge.net,
	sstabellini@kernel.org,
	Andrei Cherechesu <andrei.cherechesu@nxp.com>
Subject: [ImageBuilder][PATCH v2 0/4] Add extra ImageBuilder features
Date: Thu, 30 Jun 2022 17:00:25 +0300
Message-Id: <20220630140028.3227385-1-andrei.cherechesu@oss.nxp.com>
X-Mailer: git-send-email 2.35.1
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR03CA0089.eurprd03.prod.outlook.com
 (2603:10a6:208:69::30) To VI1PR04MB5056.eurprd04.prod.outlook.com
 (2603:10a6:803:5a::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 06357625-9ac4-4090-90c2-08da5aa164a1
X-MS-TrafficTypeDiagnostic: DU0PR04MB9322:EE_
X-MS-Exchange-SharedMailbox-RoutingAgent-Processed: True
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oss.nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06357625-9ac4-4090-90c2-08da5aa164a1
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5056.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 14:04:05.2765
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JnusoL/p/6ppnQEww/rqQrvdbjsyehoxHsNo9OjWWtqlllzZoTgQ3KFION/49IPHwPqqWPgMl3PE/aV/eWtN5QGElzR2+uy8TiGkElK+LX8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

From: Andrei Cherechesu <andrei.cherechesu@nxp.com>

Hello,

Sorry for the late re-submission of these patches; I had some
internal company work to take care of. I have incorporated the
changes suggested by Stefano S. and Ayan K. H. in the discussion
of the first version of the series.

Changes in v2:
 - Dropped the patch which added support for DOMU_DIRECT_MAP and
DOMU_STATIC_MEM from the set, since it was already submitted by
someone else
 - For PATCH 1/4: Added the PREPEND_PATH option in disk_image script
as well
 - For PATCH 2/4: Added support for dynamically computing the load
address of each loaded binary and storing it, along with the
binary's size, in per-binary variables. The generated script no
longer contains any hardcoded addresses.
 - For PATCH 3/4: Rebased
 - For PATCH 4/4: Skip adding the boot command to the script when
BOOT_CMD is set to "none", instead of passing a parameter to the script.


Andrei Cherechesu (4):
  scripts: Add support for prepending path to file names
  uboot-script-gen: Dynamically compute addr and size when loading
    binaries
  uboot-script-gen: Enable appending extra commands to boot script
  uboot-script-gen: Enable not adding boot command to script

 README.md                |   5 +-
 scripts/disk_image       |  37 +++++----
 scripts/uboot-script-gen | 169 +++++++++++++++++++++++++++++++++------
 3 files changed, 172 insertions(+), 39 deletions(-)

-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:11:55 2022
Message-ID: <d4f12e64-4db7-5c47-9412-70adfd245807@apertussolutions.com>
Date: Thu, 30 Jun 2022 10:09:27 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: scott.davis@starlab.io, jandryuk@gmail.com, christopher.clark@starlab.io,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-4-dpsmith@apertussolutions.com>
 <9833c7cb-d71d-10a3-f74f-3caf46db3cb4@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
In-Reply-To: <9833c7cb-d71d-10a3-f74f-3caf46db3cb4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/30/22 02:14, Jan Beulich wrote:
> Just two nits - while the change looks plausible, I'm afraid I'm
> not qualified to properly review it.
> 
> On 30.06.2022 04:21, Daniel P. Smith wrote:
>> The function flask_domain_alloc_security() is where a default sid should be
>> assigned to a domain under construction. For reasons unknown, the initial
>> domain would be assigned unlabeled_t and then fixed up under
>> flask_domain_create().  With the introduction of xenboot_t it is now possible
>> to distinguish when the hypervisor is in the boot state.
>>
>> This commit looks to correct this by using a check to see if the hypervisor is
>> under the xenboot_t context in flask_domain_alloc_security(). If it is, then it
> 
> While (or maybe because) I'm not a native speaker, the use of "looks"
> reads ambiguous to me. I think you mean it in the sense of e.g. "aims",
> but at first I read it in the sense of "seems", which made me think
> you're not certain whether it actually does.

Apologies; "look to"/"looks to" is an American idiom, used here in
the sense of "expected to happen"[1]. I will reword to provide a
more concise version of this statement.

[1] https://idioms.thefreedictionary.com/look+to

>> will inspect the domain's is_privileged field, and select the appropriate
>> default label, dom0_t or domU_t, for the domain. The logic for
>> flask_domain_create() was changed to allow the incoming sid to override the
>> default label.
>>
>> The base policy was adjusted to allow the idle domain under the xenboot_t
>> context to be able to construct domains of both types, dom0 and domU.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> ---
>>  tools/flask/policy/modules/dom0.te |  3 +++
>>  tools/flask/policy/modules/domU.te |  3 +++
>>  xen/xsm/flask/hooks.c              | 34 ++++++++++++++++++------------
>>  3 files changed, 26 insertions(+), 14 deletions(-)
>>
>> diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
>> index 0a63ce15b6..2022bb9636 100644
>> --- a/tools/flask/policy/modules/dom0.te
>> +++ b/tools/flask/policy/modules/dom0.te
>> @@ -75,3 +75,6 @@ admin_device(dom0_t, ioport_t)
>>  admin_device(dom0_t, iomem_t)
>>  
>>  domain_comms(dom0_t, dom0_t)
>> +
>> +# Allow they hypervisor to build domains of type dom0_t
> 
> Since it repeats ...

Ack.

>> +xen_build_domain(dom0_t)
>> diff --git a/tools/flask/policy/modules/domU.te b/tools/flask/policy/modules/domU.te
>> index b77df29d56..73fc90c3c6 100644
>> --- a/tools/flask/policy/modules/domU.te
>> +++ b/tools/flask/policy/modules/domU.te
>> @@ -13,6 +13,9 @@ domain_comms(domU_t, domU_t)
>>  migrate_domain_out(dom0_t, domU_t)
>>  domain_self_comms(domU_t)
>>  
>> +# Allow they hypervisor to build domains of type domU_t
>> +xen_build_domain(domU_t)
> 
> ... here - s/they/the/ in both places?

Ack.

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:12:43 2022
Message-ID: <e6aacf59-5752-c8f6-ff5b-4755b3f1de98@apertussolutions.com>
Date: Thu, 30 Jun 2022 10:10:26 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v9 1/3] xsm: create idle domain privileged and demote
 after setup
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Liu <wl@xen.org>,
 scott.davis@starlab.io, jandryuk@gmail.com, christopher.clark@starlab.io,
 Luca Fancellu <luca.fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-2-dpsmith@apertussolutions.com>
 <Yr1ryXy/mMi6tJzY@Air-de-Roger>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <Yr1ryXy/mMi6tJzY@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External


On 6/30/22 05:24, Roger Pau Monné wrote:
> On Wed, Jun 29, 2022 at 10:21:08PM -0400, Daniel P. Smith wrote:
>> There are new capabilities, dom0less and hyperlaunch, that introduce internal
>> hypervisor logic, which needs to make resource allocation calls that are
>> protected by XSM access checks. The need for these resource allocations are
>> necessary for dom0less and hyperlaunch when they are constructing the initial
>> domain(s).  This creates an issue as a subset of the hypervisor code is
>> executed under a system domain, the idle domain, that is represented by a
>> per-CPU non-privileged struct domain. To enable these new capabilities to
>> function correctly but in a controlled manner, this commit changes the idle
>> system domain to be created as a privileged domain under the default policy and
>> demoted before transitioning to running. A new XSM hook,
>> xsm_set_system_active(), is introduced to allow each XSM policy type to demote
>> the idle domain appropriately for that policy type. In the case of SILO, it
>> inherits the default policy's hook for xsm_set_system_active().
>>
>> For flask, a stub is added to ensure that flask policy system will function
>> correctly with this patch until flask is extended with support for starting the
>> idle domain privileged and properly demoting it on the call to
>> xsm_set_system_active().
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
>> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
>> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
>> Tested-by: Rahul Singh <rahul.singh@arm.com>
> 
> LGTM:
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks, Roger.

Thank you.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:13:35 2022
Message-ID: <25edcdb7-f863-51ca-812d-2f744f9de52e@apertussolutions.com>
Date: Thu, 30 Jun 2022 10:11:22 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.9.0
Subject: Re: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "scott.davis@starlab.io" <scott.davis@starlab.io>,
 "jandryuk@gmail.com" <jandryuk@gmail.com>,
 "christopher.clark@starlab.io" <christopher.clark@starlab.io>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
 <20220630022110.31555-4-dpsmith@apertussolutions.com>
 <AS8PR08MB7991EF8AA64EAB4754513A7892BA9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <AS8PR08MB7991EF8AA64EAB4754513A7892BA9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/30/22 04:40, Henry Wang wrote:
> Hi Daniel,
> 
>> -----Original Message-----
>> Subject: [PATCH v9 3/3] xsm: refactor flask sid alloc and domain check
>>
>> The function flask_domain_alloc_security() is where a default sid should be
>> assigned to a domain under construction. For reasons unknown, the initial
>> domain would be assigned unlabeled_t and then fixed up under
>> flask_domain_create().  With the introduction of xenboot_t it is now possible
>> to distinguish when the hypervisor is in the boot state.
>>
>> This commit looks to correct this by using a check to see if the hypervisor is
>> under the xenboot_t context in flask_domain_alloc_security(). If it is, then it
>> will inspect the domain's is_privileged field, and select the appropriate
>> default label, dom0_t or domU_t, for the domain. The logic for
>> flask_domain_create() was changed to allow the incoming sid to override the
>> default label.
>>
>> The base policy was adjusted to allow the idle domain under the xenboot_t
>> context to be able to construct domains of both types, dom0 and domU.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> 
> Same as what Jan has said, I don't think I am qualified to properly review
> the series, but I did run a compile and runtime test on an Arm64 platform
> with XSM and FLASK enabled, and it looks like everything is fine.

Ack.

> Hence (also for the whole series):
> Tested-by: Henry Wang <Henry.Wang@arm.com>

Thank you.

>> ---
>>  tools/flask/policy/modules/dom0.te |  3 +++
>>  tools/flask/policy/modules/domU.te |  3 +++
>>  xen/xsm/flask/hooks.c              | 34 ++++++++++++++++++------------
>>  3 files changed, 26 insertions(+), 14 deletions(-)
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 14:29:39 2022
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 171416: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9ef3ad40a81ff6b8b65ed870588b230f38812f2a
X-Osstest-Versions-That:
    linux=23db944f754e99abf814a79a2273b0191d35e4ff
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 14:29:31 +0000

flight 171416 linux-5.4 real [real]
flight 171426 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171416/
http://logs.test-lab.xenproject.org/osstest/logs/171426/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 171408 REGR. vs. 171352

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1  14 guest-start      fail in 171408 pass in 171416
 test-armhf-armhf-xl-vhd      13 guest-start      fail in 171408 pass in 171416
 test-armhf-armhf-libvirt-raw 17 guest-start/debian.repeat fail in 171408 pass in 171416
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  7 xen-install fail pass in 171408
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 171408

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 171352
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 171408 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 171408 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171352
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 171352
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171352
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171352
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171352
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 171352
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 171352
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 171352
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171352
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 171352
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 171352
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171352
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9ef3ad40a81ff6b8b65ed870588b230f38812f2a
baseline version:
 linux                23db944f754e99abf814a79a2273b0191d35e4ff

Last test of basis   171352  2022-06-25 11:13:17 Z    5 days
Testing same since   171400  2022-06-29 07:11:34 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
  Andrew Donnellan <ajd@linux.ibm.com>
  Antoine Tenart <atenart@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Ballon Shi <ballon.shi@quectel.com>
  Bartosz Golaszewski <brgl@bgdev.pl>
  Baruch Siach <baruch@tkos.co.il>
  Carlo Lobrano <c.lobrano@gmail.com>
  Chevron Li <chevron.li@bayhubtech.com>
  Chevron Li<chevron.li@bayhubtech.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Curtis Taylor <cutaylor-pub@yahoo.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Rokosov <ddrokosov@sberdevices.ru>
  Edward Wu <edwardwu@realtek.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Genjian Zhang <zhanggenjian@kylinos.cn>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Helge Deller <deller@gmx.de>
  huhai <huhai@kylinos.cn>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maxwell <jmaxwell37@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Halasa <khalasa@piap.pl>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lucas Stach <l.stach@pengutronix.de>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marc Dionne <marc.dionne@auristor.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Miaoqian Lin <linmq006@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nikos Tsironis <ntsironis@arrikto.com>
  Olivier Moysan <olivier.moysan@foss.st.com>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Rob Clark <robdclark@chromium.org>
  Ron Economos <re@w6rz.net>
  Rosemarie O'Riorden <roriorden@redhat.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Sumit Dubey2 <Sumit.Dubey2@ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tim Crawford <tcrawford@system76.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Xu Yang <xu.yang_2@nxp.com>
  Yonglin Tan <yonglin.tan@outlook.com>
  Zheyu Ma <zheyuma97@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1766 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 16:35:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 16:35:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358603.587871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6x8L-0001Fl-Fq; Thu, 30 Jun 2022 16:35:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358603.587871; Thu, 30 Jun 2022 16:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6x8L-0001Fe-CC; Thu, 30 Jun 2022 16:35:13 +0000
Received: by outflank-mailman (input) for mailman id 358603;
 Thu, 30 Jun 2022 16:35:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6x8J-0001FY-TK
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 16:35:11 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9afe375e-f892-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 18:35:10 +0200 (CEST)
Received: from mail-bn8nam12lp2173.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 12:35:04 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by DM6PR03MB5370.namprd03.prod.outlook.com (2603:10b6:5:249::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.15; Thu, 30 Jun
 2022 16:35:02 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 16:35:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9afe375e-f892-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656606910;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=X+eLrljL64bpjt76Hwz9sQQv29FUkblb4AZb55DcfGI=;
  b=R9y3WVGd/qsMN5mp3WeGX0cdrMshS8VyA6Il8oAg8d+j2NczSiDmjSSl
   oNxYZPvVX1jOteRBFGW/KNXDLYcgtPaEsMscv/QQEVm/reBIHPLCbgaBj
   CISIhR/0GpGtvAxFwd1f1AbUMb7IbqWfK8shfwijKDU33U9VmLaRMT2Dh
   8=;
X-IronPort-RemoteIP: 104.47.55.173
X-IronPort-MID: 77384588
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,234,1650945600"; 
   d="scan'208";a="77384588"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EC8QX6v46FoxLNJJ68ax0EsV9P8VoD2CwDV2BxqZvtquGdlhtx4S4f0ylz0GCWBnZ2G767vLHcBQcEJZpqCjqPSvwhGWYWKtj/plI7SIKSMd5E45pCh/doRTinzppV6QaasvLJGK90pgPECcq42dRAD/bRiPELH1m0AUWyV27Q2RmX4AVTV8n0ck/7v5NSL7uvzeRQiGKMY/7+NySOszLOcy7+MZsRhMl3LsaKvYNntoBXrJdhYHqKCromuPa7eIkdE8KZepeOUJSXVi6QSJG0s+/aomCmaKVJO4mgNT5yv+I35ajabxEIciHOWgxOcn+bZqZKT/H++Ks+zY5ZstEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oX3URqPsbJ7y/iaNrL25B7p6kAIWUNzEoNnUrhhwRG8=;
 b=VPO5AEl7M9pWP6UfA66+ZeVqreWI8of5TutfyQMxIHAQTOdma0gEGQKwMU8VjwsjBdZ95wQtj7sdwk5lszFGzbfBV1Oaa1hy2ky24ZL/wl//HZVo1EWzpViHe9C4QECHInOhCyqInXuv5PXFYz4dE9dw0OwskOB2zJR+pnfEMVzZf5brgIkUjKk1t/JaK4cu+H3Tgrlfpc2vd5loNKGf5wcGkdhSi/fhx0KBvHpVhjlyHDbW3Biu+PjxaMGEX5xtwJuDegGEsWt4z2hwu35EweQc4Mz75aCRgXWV6UlrBXrRcLoITsbrpIiSS0qb1ZFz+x/3Pdh3kjdpVwmmIN90nQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oX3URqPsbJ7y/iaNrL25B7p6kAIWUNzEoNnUrhhwRG8=;
 b=GNmMoqEXNMk4AUOMGdlTFYaTjtTUpUqzxxUpljvuRf5Fnjy8Y71XYov/geUJzh6LsxfkTlNGS8Rcw0klo1IbBWINiJxuFXArBTvkHe0xAR1JTG9FBMIxN3SdT+WAIrE4a1/jj1OFgxOK0aOuESbpk200tjDRYRUUoYQb6/rNo3Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2] x86/ept: fix shattering of special pages
Date: Thu, 30 Jun 2022 18:34:49 +0200
Message-Id: <20220630163449.18714-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.37.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0348.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 758738ce-329b-4c36-bd2b-08da5ab67b13
X-MS-TrafficTypeDiagnostic: DM6PR03MB5370:EE_
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 758738ce-329b-4c36-bd2b-08da5ab67b13
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 16:35:02.4128
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GbneLPYMsstoEqJvWgUGpjAk1QTEIJ93PTKXObv8v6hkFfUivRTtL4M2+mR/O0P62y3Gs3//iFBdsGNVkEO84A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5370

The current logic in epte_get_entry_emt() will split any mapping with
order greater than zero that contains a page marked as special, without
checking whether the whole super page is special.

Fix this by splitting the page only if it's not all marked as special,
in order to prevent unneeded super page shattering.

The unconditional shattering of special super pages has caused a
performance regression in some XenServer GPU passthrough workloads.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Paul Durrant <paul@xen.org>
---
 xen/arch/x86/mm/p2m-ept.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b04ca6dbe8..b4919bad51 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -491,7 +491,7 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
-    unsigned long i;
+    unsigned long i, special_pgs;
 
     *ipat = false;
 
@@ -525,15 +525,17 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
-    for ( i = 0; i < (1ul << order); i++ )
-    {
+    for ( special_pgs = i = 0; i < (1ul << order); i++ )
         if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
-        {
-            if ( order )
-                return -1;
-            *ipat = true;
-            return MTRR_TYPE_WRBACK;
-        }
+            special_pgs++;
+
+    if ( special_pgs )
+    {
+        if ( special_pgs != (1ul << order) )
+            return -1;
+
+        *ipat = true;
+        return MTRR_TYPE_WRBACK;
     }
 
     switch ( type )
-- 
2.37.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 16:40:02 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 16:40:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358609.587883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xD0-0002EV-A0; Thu, 30 Jun 2022 16:40:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358609.587883; Thu, 30 Jun 2022 16:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xD0-0002EO-3b; Thu, 30 Jun 2022 16:40:02 +0000
Received: by outflank-mailman (input) for mailman id 358609;
 Thu, 30 Jun 2022 16:40:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6xCy-00028B-Qy
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 16:40:00 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 473098f8-f893-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 18:39:59 +0200 (CEST)
Received: from mail-mw2nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 12:39:57 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by MW4PR03MB6539.namprd03.prod.outlook.com (2603:10b6:303:126::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16; Thu, 30 Jun
 2022 16:39:55 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 16:39:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 473098f8-f893-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656607199;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=5R6mRy+Zdw9vyz7I7XPM2XxjevfYuDk0+bpDss0blLA=;
  b=M73CucYKe5zVZoS6VP1xKvDhuWRQzO0bmLq5d6oMwhaKft4oV/ua3l+f
   wVk2FtXuIFfFAuxYtzCIop5VO2rxuqvBANULkC8THVMJmEAoCP8o7wkOH
   YOgWxw7pQB/XtnkrA0HQWhYW4M/8U0Rw0Jg2HfPiwwzmSKq/dLxOiNct9
   Y=;
X-IronPort-RemoteIP: 104.47.55.106
X-IronPort-MID: 77384986
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,234,1650945600"; 
   d="scan'208";a="77384986"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jxO9Ygnf9M4rb8QYwVU4tIDrWtfPzfXh7kDAALXIYAyOQdoOHMkRtAhTOJkEOybKuP6hmjF3/MAGckrmCn2qaR7k/GGuVKvsWYe9Il10+ySVEoJFnBzu8qI1EI6Mta1AlkNmHVhQ2EX43g5WcfcPKAZXBBzdfgfLICf/Nowcl+Z1IpOwWBSJKMlBqjDNSEs+RyU8l9BDC/nihxkToBJuUmlCurKMt2FNPCdNQfhTM4tz56Hg4105xxmxEkkJC0O2Q9qFry+F9+/1KKnmWSCOw1ju7s802jhdh7FEkIfNC/NZ9j6FYdifHGJtzVjAG0tzZSa89JLhh2ZZ0aXGISStqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9JDgQkCJ8TUJba+XuzjE+dL45nQdMlCa1F+//oCj2HA=;
 b=U2munDzErxkySHN7k1Wuk+y1T/X6agMYEQvHrR5EJh+wV8Cli76C9+fc/2rY2Q9oh4Ygrnd/dHDzK1i09FOH8eIqBhNAAlep1DWEjSglMlqRP0O+BevDs/hntdQSVubpzdEqzV6RPRbgQM9Oys5wMkucRLBa5wEsDvApV1TICGp1xjjyQcMbgK7ziocuFyYel08BtEH2MNsmQ6X83EIjL/lqledj9eeZRIRqpZGPFxvHl9mRfJcFta8TE0fWvyAiRu4tIZZnDj4ptiyG3U/zmJl6FxVJ3A/SGh8VFEUxO08al6bbwaNDvM4mxc/pGLbSUoTU+SNuy1nILKwjCpNeKQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9JDgQkCJ8TUJba+XuzjE+dL45nQdMlCa1F+//oCj2HA=;
 b=ciZAaI4thGBoz2pzUBTPbRS3eJ/4PhHW4njaHkOw52EAMfX1z6uWA6HMQGsQ/dxsZ+6mGtJWIadRJrkll4+1E6RrR9TGpqqMq0+OE6GvTan7cPHdiHH+wI/etzbQ8iJ2IPeL9nzYOUJnYxAYokyEp+aGneRHKSXc3qDrHBKn1Ok=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 30 Jun 2022 18:39:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/2] x86/spec-ctrl: Only adjust MSR_SPEC_CTRL for idle
 with legacy IBRS
Message-ID: <Yr3R2M5P4saCVz5l@MacBook-Air-de-Roger.local>
References: <20220629184508.15956-1-andrew.cooper3@citrix.com>
 <20220629184508.15956-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220629184508.15956-2-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO2P265CA0299.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f71a4988-3cde-436d-b0ba-08da5ab729f9
X-MS-TrafficTypeDiagnostic: MW4PR03MB6539:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f71a4988-3cde-436d-b0ba-08da5ab729f9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 16:39:55.7198
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A64Dta6xQap1xz+eCuMI1G8clH3r28+iyRFeVjy3m5jjdKiJVEwqLxxF4oGWUtIWumb8jNKeyIedKtVQIiCW9g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6539

On Wed, Jun 29, 2022 at 07:45:07PM +0100, Andrew Cooper wrote:
> Back at the time of the original Spectre-v2 fixes, it was recommended to clear
> MSR_SPEC_CTRL when going idle.  This is because of the side effects on the
> sibling thread caused by the microcode IBRS and STIBP implementations which
> were retrofitted to existing CPUs.
> 
> However, there are no relevant cross-thread impacts for the hardware
> IBRS/STIBP implementations, so this logic should not be used on Intel CPUs
> supporting eIBRS, or any AMD CPUs; doing so only adds unnecessary latency to
> the idle path.
> 
> Furthermore, there's no point playing with MSR_SPEC_CTRL in the idle paths if
> SMT is disabled for other reasons.
> 
> Fixes: 8d03080d2a33 ("x86/spec-ctrl: Cease using thunk=lfence on AMD")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 16:40:51 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 16:40:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358615.587893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xDk-0003Qf-IN; Thu, 30 Jun 2022 16:40:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358615.587893; Thu, 30 Jun 2022 16:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xDk-0003QY-FU; Thu, 30 Jun 2022 16:40:48 +0000
Received: by outflank-mailman (input) for mailman id 358615;
 Thu, 30 Jun 2022 16:40:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPik=XF=citrix.com=prvs=1738a98a4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1o6xDk-0003QS-2S
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 16:40:48 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 635e97f0-f893-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 18:40:46 +0200 (CEST)
Received: from mail-mw2nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 Jun 2022 12:40:44 -0400
Received: from DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
 by CH0PR03MB6114.namprd03.prod.outlook.com (2603:10b6:610:b9::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5395.14; Thu, 30 Jun
 2022 16:40:41 +0000
Received: from DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534]) by DS7PR03MB5608.namprd03.prod.outlook.com
 ([fe80::40af:d5f4:95eb:d534%7]) with mapi id 15.20.5395.015; Thu, 30 Jun 2022
 16:40:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 635e97f0-f893-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1656607246;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=TXxDLEZP8qVY5qtFkA+2S04SAHfniynB8YILSvOI9Vk=;
  b=RZNnQWB5m5e5KcgeneS0GqL4KNJo61usjwF5H++a2ujUcgVAQRqjD1z4
   HZwfm0EJaEeYe9xOg5fD1skAZvQ6JHR9ebcdwj82IamSUHaXPnMFtT25Z
   svNPdx1lHs+XP7PCSOUrCVF0HMBMupgPs6nDCqbDLcpY7lk8opV+O15rd
   g=;
X-IronPort-RemoteIP: 104.47.55.103
X-IronPort-MID: 77385043
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.92,234,1650945600"; 
   d="scan'208";a="77385043"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IqsAPW1LKZQbYwkzBM7R/EIuIIxNsLV8LjvZ/P8wvVQSKbgtxHoC48ieKd/ws+OYJVXGNr12hqHMsGwIvWEj76nsTWcZd9SQfTUcuG7LqfRcleladwI6zAyxKBH9gnuC7eNGpm0eYiNACiadj3pRwQ8IPp6H/11Yg0Q//S5QGSAGED0LIBmD6G+DM4fTkqevC9l+oRlVYDy7yRYjyxqNu/xgDqZghY3EL4TbXg2aQwSQQZvLa2CETpn1P/U/X+kwSOse92BRq9HJNQNCQfZhvgAQsEg++PrxHmFl+y5ovgUXGJapUuD0sKb1jW2dKkknTuZnFpGvC2UWY2yOKj4/gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cKS1QyhNxme7N0Fa1uomyyNeEeioVy2n4cZCZD/R160=;
 b=QEwvNrMvs74jVKFtU9pEivfcc18zwp19qMsq+Ol4r4i1Wia3RNvo/5r7qIb4VFLQI5r33ueWJDcSmvjEgnZZYveR/GQ5e73dQS4JYqrD2sCJzUGiG4g/HCoAzq5SR04VJn674q/nznkE/Tf9WbdDYo7Aq0LPR+o1yMOFJFf6z6XL6+N2VZfHVnhdcEYTSwtVEWaYeQBT5Jmq1Qxj6MFlBUhzkQMxQcL4k7CXe1NLaYXfLAMkxZSCLHnznSOzNRDIsrB4e8SH/zYayCa5kw/0HcI/3/11tVwK6kwXMKl0NqO92zjbq6DFxSdbap/gPkw3Z8Wxq7ehCR/gs3k8vGJNrA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cKS1QyhNxme7N0Fa1uomyyNeEeioVy2n4cZCZD/R160=;
 b=hRhoTTlWtpHP2WuJIZyz9mE6ebfTarc3xbxO93NZ6rS9/z9BBDxmuqtjSUQ4JhOmhkUUwF4spqIMkzZOcn8p0nyp4lw3PNHGB8VAOI0Kcig4Hyb6QXus6iGFfi54hQW6zpbaDU9HMmCwzI9fhCon2NLwTfEmENLOkhiVPDH/84M=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 30 Jun 2022 18:40:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/2] x86/spec-ctrl: Knobs for STIBP and PSFD, and
 follow hardware STIBP hint
Message-ID: <Yr3SBViu6mJbdWdZ@MacBook-Air-de-Roger.local>
References: <20220629184508.15956-1-andrew.cooper3@citrix.com>
 <20220629184508.15956-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20220629184508.15956-3-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO2P265CA0020.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 07df8fea-eda2-4100-f3b9-08da5ab7454d
X-MS-TrafficTypeDiagnostic: CH0PR03MB6114:EE_
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 07df8fea-eda2-4100-f3b9-08da5ab7454d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2022 16:40:41.5691
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qFvh+nYlUC1bmkJo+mK6TWkidJvzpP1YHGN20AMOwixw8YZH+H83VhDRhHBOLlBcJiuXNPCYWexu07LXGVqxsQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6114

On Wed, Jun 29, 2022 at 07:45:08PM +0100, Andrew Cooper wrote:
> STIBP and PSFD are slightly weird bits, because they're both implied by other
> bits in MSR_SPEC_CTRL.  Add fine grain controls for them, and take the
> implications into account when setting IBRS/SSBD.
> 
> Rearrange the IBPB text/variables/logic to keep all the MSR_SPEC_CTRL bits
> together, for consistency.
> 
> However, AMD have a hardware hint CPUID bit recommending that STIBP be set
> unilaterally.  This is advertised on Zen3, so follow the recommendation.
> Furthermore, in such cases, set STIBP behind the guest's back for now.  This
> has negligible overhead for the guest, but saves a WRMSR on vmentry.  This is
> the only default change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 16:54:43 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 16:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358622.587904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xR6-00059u-PE; Thu, 30 Jun 2022 16:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358622.587904; Thu, 30 Jun 2022 16:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6xR6-00059n-MF; Thu, 30 Jun 2022 16:54:36 +0000
Received: by outflank-mailman (input) for mailman id 358622;
 Thu, 30 Jun 2022 16:54:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YJdO=XF=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1o6xR4-00059h-NM
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 16:54:35 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e9968d0-f895-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 18:54:31 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 1199D3200A98;
 Thu, 30 Jun 2022 12:54:27 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Thu, 30 Jun 2022 12:54:28 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 30 Jun 2022 12:54:26 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e9968d0-f895-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1656608067; x=
	1656694467; bh=PZ8y2SwjdybeYE5ENCPmmaaRYCWssikjiQNKImz8H0Q=; b=W
	B+IGWwmOeWWrmLajypGH9ua5TQlFYQnRJfR2hMe/I67Ke0BNSTVuYE3Z7KLPdp/J
	ZZ/bV4pOk/eFixVsBvQvOAO4KvmiCyvgju2S1s5tjlTv3qQKmZwEDacJwI4EohI9
	fhDhA0qbmD8qX2DHBThx6+dUXrcycHKEYLyJWnpkNq2q8CEeTwl68FyPTpVdiyOl
	DcmOrKNHASJtZOZXuHafgopGBgXOkhl9BAaVUIF3BEMgcam5wvIjo6NCXmynXvNP
	d2zhVsjptTQYqy0ZUcgYG95bXZi8KCZL/BrYNBwtWvNUxHlisDFvvYPRNq9RgvAU
	PLda5bFNMeLM5bWiq6wWQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1656608067; x=1656694467; bh=PZ8y2SwjdybeYE5ENCPmmaaRYCWs
	sikjiQNKImz8H0Q=; b=RImnPROhbhyLe3jkqRzkBOjro5OiuIirdkHqLdF3BDGv
	VeSK0UA92HHQs0VL/TXKuXaujNyUmdg8MOdi19WCbFoCmuGdNPjTYv9remNwgurv
	QtuuBP+9k2sKTzA0mxNTeYZDArwdsp4c1KlBdwjB2AJGV7DKjHl5ZV17aCSsDpaw
	XunoVpAlGheSLlnIUuXt7tCHK2ikC9noG3ErI+jhaXVNMgr47bhuN2sBHUJPsqf1
	sZepebWiHrS8BzYinOgmmjKf21K7+dNfiYvnpYxmlNgXnyscayOyco9OVJikRxPw
	LGk5H+UV1Ff/O3MKMdmr8spRNRJuuYfZ2EanzF2W1w==
X-ME-Sender: <xms:Q9W9YmIhm0RcWi9PNc2UXpn9EGjRNisis_R7HxlmayWWsVv2cFcA7g>
    <xme:Q9W9YuJeWaBWUR0wzNQjw2vHqNVZP_yOPK1eTVpCwB979vu7un1-VeZNsOGIh7Fxy
    dgpPik5LRpuCtg>
X-ME-Received: <xmr:Q9W9YmvKBP1VGD6BSlDBUrCh9v_O3D3tkOO_7WlKx8PKe-ocRgzrNCkPAZ6R>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudehuddguddthecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttddvnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeeugefgkeelffetkedtieekfffh
    udeutddufefgieeiuefgledvheegvdefteevtdenucffohhmrghinhepghhithhhuhgsrd
    gtohhmpdhkvghrnhgvlhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:Q9W9Yra9bbJxz5PRgtapApQg3i6E3xcw07QOGe-SKjCdJZ4gFL4fUA>
    <xmx:Q9W9YtYON4UApqWc2dNnjZN9U7wQGDuNco3O1xiZvUR9uxZ5q_MOmg>
    <xmx:Q9W9YnAx9PHoTymym5gWQ-23QMpVz5q5GQjtaDh9RDpIaVx7aE3zaw>
    <xmx:Q9W9YsyJnrmXxR4L3pmDt2OCKvym_nZmYqmdVbBwnuGwgUpIkOfuuw>
Feedback-ID: iac594737:Fastmail
Date: Thu, 30 Jun 2022 12:54:02 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>, Greg KH <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()
Message-ID: <Yr3VQaM0NBcIV2Kl@itl-email>
Mail-Followup-To: Demi Marie Obenour <demi@invisiblethingslab.com>,
	Juergen Gross <jgross@suse.com>,
	Greg KH <gregkh@linuxfoundation.org>, stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>
References: <20220627181006.1954-1-demi@invisiblethingslab.com>
 <Yr2KKpWSiuzOQr7v@kroah.com>
 <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="v8yNNVwv2qS6pEHy"
Content-Disposition: inline
In-Reply-To: <5136812e-e296-4acb-cafd-f189c4013ed3@suse.com>


--v8yNNVwv2qS6pEHy
Content-Type: text/plain; protected-headers=v1; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 30 Jun 2022 12:54:02 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>, Greg KH <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 5.10] xen/gntdev: Avoid blocking in unmap_grant_pages()

On Thu, Jun 30, 2022 at 03:16:41PM +0200, Juergen Gross wrote:
> On 30.06.22 13:34, Greg KH wrote:
> > On Mon, Jun 27, 2022 at 02:10:02PM -0400, Demi Marie Obenour wrote:
> > > commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream
> > >
> > > unmap_grant_pages() currently waits for the pages to no longer be used.
> > > In https://github.com/QubesOS/qubes-issues/issues/7481, this led to a
> > > deadlock against i915: i915 was waiting for gntdev's MMU notifier to
> > > finish, while gntdev was waiting for i915 to free its pages.  I also
> > > believe this is responsible for various deadlocks I have experienced in
> > > the past.
> > >
> > > Avoid these problems by making unmap_grant_pages async.  This requires
> > > making it return void, as any errors will not be available when the
> > > function returns.  Fortunately, the only use of the return value is a
> > > WARN_ON(), which can be replaced by a WARN_ON when the error is
> > > detected.  Additionally, a failed call will not prevent further calls
> > > from being made, but this is harmless.
> > >
> > > Because unmap_grant_pages is now async, the grant handle will be set to
> > > INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same
> > > handle.  Instead, a separate bool array is allocated for this purpose.
> > > This wastes memory, but stuffing this information in padding bytes is
> > > too fragile.  Furthermore, it is necessary to grab a reference to the
> > > map before making the asynchronous call, and release the reference when
> > > the call returns.
> > >
> > > It is also necessary to guard against reentrancy in gntdev_map_put(),
> > > and to handle the case where userspace tries to map a mapping whose
> > > contents have not all been freed yet.
> > >
> > > Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use")
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> > > Reviewed-by: Juergen Gross <jgross@suse.com>
> > > Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > ---
> > >   drivers/xen/gntdev-common.h |   7 ++
> > >   drivers/xen/gntdev.c        | 142 +++++++++++++++++++++++++-----------
> > >   2 files changed, 106 insertions(+), 43 deletions(-)
> >
> > All now queued up, thanks.
>
> Sorry, but I think at least the version for 5.10 is fishy, as it removes
> the tests for successful allocations of add->map_ops and add->unmap_ops.

That is definitely a bug; I will send another version (without your
Reviewed-by).

> I need to do a thorough review of the patches (the "Reviewed-by:" tag in
> the patches is the one for the upstream patch).

Yeah, that was my fault, sorry.

> Greg, can you please wait for my explicit "okay" for the backports?

Confirming that these patches do need review before they can be applied.
Juergen, would you mind making sure that the upstream patch was also
correct for 5.15 and 5.18?  It applied cleanly, but that is no guarantee
of correctness.
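
[Editorial note: the deferred-unmap scheme described in the quoted commit
message (queue the unmap instead of blocking, take a reference on the map
before queuing and drop it when the worker finishes, and track
already-unmapped handles in a separate bool array) can be sketched as the
following user-space analogue.  All names here -- grant_map, map_put,
unmap_grant_pages_async, NPAGES -- are illustrative assumptions, not the
actual gntdev code.]

```c
/* Hypothetical user-space sketch of the deferred-unmap pattern:
 * not the real gntdev implementation. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NPAGES 4

struct grant_map {
    atomic_int refcount;
    bool unmapped[NPAGES];   /* separate array: handles are cleared too late */
    atomic_int freed;        /* set once the last reference is dropped */
};

static void map_put(struct grant_map *map)
{
    /* Free only on the final put, so a queued worker keeps the map alive. */
    if (atomic_fetch_sub(&map->refcount, 1) == 1)
        atomic_store(&map->freed, 1);
}

/* Worker: performs the actual unmap, then drops the queued reference. */
static void *unmap_worker(void *arg)
{
    struct grant_map *map = arg;

    for (int i = 0; i < NPAGES; i++) {
        if (!map->unmapped[i])   /* skip handles already unmapped */
            map->unmapped[i] = true;
    }
    map_put(map);                /* release the reference taken at queue time */
    return NULL;
}

/* Async unmap: returns void, since any error would only surface later. */
static void unmap_grant_pages_async(struct grant_map *map, pthread_t *t)
{
    atomic_fetch_add(&map->refcount, 1);   /* keep map alive for the worker */
    pthread_create(t, NULL, unmap_worker, map);
}
```

The caller queues the unmap, and the map is only freed after both the
caller's reference and the worker's reference have been put, which mirrors
why the real patch must guard gntdev_map_put() against racing puts.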
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--v8yNNVwv2qS6pEHy
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmK91UAACgkQsoi1X/+c
IsEvbhAAoIh6HQPIqGcjNkAeHjeZvnp4b9AHyj3CipQPoMRwmIKuJULrQcOsSeIi
gCuGuYudZkaM6OpX1AbRSD3g6TTxLIBatb6juCYOtFwDdA8Ujjh4DtKrSocRlUJX
GG5t05QJJAf59FPo6YjKRCr5WpOseqI9a2QkQyq4mF9xxNh3ngUHGnF3RXSJfYHe
vPBIqICFcvKNw+K/rfBNttGMohMdLYtizUf3SaXpleujAXrORFcKwxdNwWO2WRiE
rBJk9KXVDN05blJXYj7HWboDFq7oZ32pL6S5XJBS+8c1diFsajLK2bbO0znWZM7e
uc5ubK9PZ+wE6aqo/f2Hr5SFzZfz6dL1mQKSOJmPXcwullmhQGsiOwJZchrXnioW
OjONnxOYPHki2ml5MTNo9CQddZBJFT5JkyJaWkAED5c0++0bmQ1YXWP4N8l4ae7A
wve63nv5EjnKI2hFHmo3Tsq1eenpyyytk3NxJr0tdqpF9rdTMHV60Fftt4wGjnqG
FycnOfp7RRCiEeRluzV+Z16T9jDrqvt4fVJ0cLhG6/Fc2NVXEx95V5HbjQlsv2Cu
Rq1eLE/uHy3789bWIAAkyNANRkMKoLuws42XDZKrRd6BVfoOReT24MQPAU5KPAA1
mQG4mRyr1ELyCZRhAoiLDlCWOklRX1ZI0RmhPpgUK3EnamIBk8s=
=KbAB
-----END PGP SIGNATURE-----

--v8yNNVwv2qS6pEHy--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:11:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:11:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358653.587935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ydF-0006Fg-QD; Thu, 30 Jun 2022 18:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358653.587935; Thu, 30 Jun 2022 18:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6ydF-0006FZ-NJ; Thu, 30 Jun 2022 18:11:13 +0000
Received: by outflank-mailman (input) for mailman id 358653;
 Thu, 30 Jun 2022 18:11:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6ydF-0006FP-7K; Thu, 30 Jun 2022 18:11:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6ydF-00024o-2Y; Thu, 30 Jun 2022 18:11:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o6ydE-0007c8-KV; Thu, 30 Jun 2022 18:11:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o6ydE-0000Vt-K3; Thu, 30 Jun 2022 18:11:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kL292L8TPzVkCkYdRxQ5iWHJchQ2EZuZ7A3ggXYfi9k=; b=G52gKFI3ofmmIZzbSI5A3xnZOb
	MspA926Dmka8lFzr6SX4l7IRIefgAO1JzDUVma9B9+EQpSSMyUiqMc36oy7UaXR5401Qb3Waq5JXm
	+CaOlRmPuofWhGNmCfXZRa1cZy91qpXxbJOk7AjiVa2WnbPplugNdnY4AHqlkmghriVk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 171420: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d9b2ba67917c18822c6a09af41c32fa161f1606b
X-Osstest-Versions-That:
    linux=354c6e071be986a44b956f7b57f1884244431048
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 18:11:12 +0000

flight 171420 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171420/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 171277
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 171277
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 171277
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 171277
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 171277
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 171277
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 171277
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 171277
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 171277
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 171277
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 171277
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 171277
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 171277

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 171277

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171277
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171277
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d9b2ba67917c18822c6a09af41c32fa161f1606b
baseline version:
 linux                354c6e071be986a44b956f7b57f1884244431048

Last test of basis   171277  2022-06-19 03:11:35 Z   11 days
Failing since        171280  2022-06-19 15:12:25 Z   11 days   31 attempts
Testing same since   171411  2022-06-29 21:11:38 Z    0 days    2 attempts

------------------------------------------------------------
375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 13469 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:34:15 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:34:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358661.587946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6yzR-0000YM-M7; Thu, 30 Jun 2022 18:34:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358661.587946; Thu, 30 Jun 2022 18:34:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6yzR-0000YF-JK; Thu, 30 Jun 2022 18:34:09 +0000
Received: by outflank-mailman (input) for mailman id 358661;
 Thu, 30 Jun 2022 18:34:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=npq1=XF=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1o6yzQ-0000Y6-BP
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 18:34:08 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3963f72f-f8a3-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 20:34:07 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id q9so28560879wrd.8
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jun 2022 11:34:06 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 v7-20020a1cac07000000b003a04e6410e0sm3547501wme.33.2022.06.30.11.34.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 30 Jun 2022 11:34:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3963f72f-f8a3-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=AAXy/XHOZeSXcRBmzswgvDjoK+f4VJC4EZbfuI3w6L8=;
        b=OnZoTn4GCmFDccrv2kPHTtFhi/oD7HHPYsoKffPcTlQO2i9DhbF4PGpyVCJVrf7lAD
         7alDNjRKjTtNSXw8MGu7/hSpBel+41qmH2FO6doSJ0L5iI/wRP7GZlIVJ5SZQg/Hz7gu
         c4ifK+Lj/61dolnYdrOPyvmmL7Qh66JM0mut1r4IO5OYCyrmjWezwb/vKH3ouy28Inx+
         xCtNC3dPaUxEgmf/QKIKjYEwVmCCwGIjFDWcO4yKSUUj6mTyyczrQqbvvlohRM1BDPxt
         w8JbnC9u1R3yZ61Z7M0B/gryfSHsdjuK6RS1YZ0EPsPcQmabn7zLE96WQLz5dN1HO1zw
         wAIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=AAXy/XHOZeSXcRBmzswgvDjoK+f4VJC4EZbfuI3w6L8=;
        b=yGWtq69geHPcP9YHPn/pywGxDt11NsrUoLyrdiB1u7wId4uSOVNCt57lUDdew2GDRu
         ShIxqPc1Odl11UEC5QRIn35T5cD1iFg4QOhUnE8gNxiP97m+kXwMpg7U89BItVntQKxL
         fwR6WKYFw5QaNZ2J/L+354/4/aliVh8fpKFbFUpFiV9WM4V3e96MoqThLaUL1OCDmCJ5
         7c/+2tjA3p7sCxEhrNqv2swDgD5blm2bzI7g7ULewQNO6zcj41iQ//03Yr03VnS0JoDI
         Z8TCy9EC7qJs2THOHrvS2RXN+flnMm4DGuwgiwmjFPNl8RX+KvXaXaDQsIxa/VlA/NQU
         QjHQ==
X-Gm-Message-State: AJIora/mJUgQPirfpdR4TFEcegLS7lekQl5h4sfS7cei2biGz7tXtaVK
	w7hzVUmbXAHHuav1mK4w7ZY=
X-Google-Smtp-Source: AGRyM1tAmrdCGNWuUr7wnzbtwRj8SrnNLRKkmCEkX7CCpGg/OIJUd7f9oCyqANdXlDWvVSLN2CzaUg==
X-Received: by 2002:a05:6000:1446:b0:21d:2245:ab65 with SMTP id v6-20020a056000144600b0021d2245ab65mr9722012wrx.315.1656614046302;
        Thu, 30 Jun 2022 11:34:06 -0700 (PDT)
Subject: Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Wei Liu <wl@xen.org>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <62903b8e-6c20-600e-8283-5a3e3b853a18@gmail.com>
 <1655482471-16850-1-git-send-email-olekstysh@gmail.com>
 <9A36692A-8095-4C76-A69B-FBAB221A365C@citrix.com>
 <02648046-7781-61e5-de93-77342e4d6c16@gmail.com>
 <36d4c786-9fb7-4b30-1a4d-171f92cc84d7@gmail.com>
 <43cafa48-1cef-ad0f-654e-5296cff15018@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <742954bc-4546-6f99-4aec-c3c3e5ceb551@gmail.com>
Date: Thu, 30 Jun 2022 21:34:04 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <43cafa48-1cef-ad0f-654e-5296cff15018@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 30.06.22 15:24, Juergen Gross wrote:


Hello Juergen

> On 30.06.22 14:18, Oleksandr wrote:
>>
>> Dear all.
>>
>>
>> On 25.06.22 17:32, Oleksandr wrote:
>>>
>>> On 24.06.22 15:59, George Dunlap wrote:
>>>
>>> Hello George
>>>
>>>>
>>>>> On 17 Jun 2022, at 17:14, Oleksandr Tyshchenko 
>>>>> <olekstysh@gmail.com> wrote:
>>>>>
>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>
>>>>> This patch adds basic support for configuring and assisting a
>>>>> virtio-mmio based virtio-disk backend (emulator), which is intended
>>>>> to run outside of QEMU and can be run in any domain.
>>>>> Although the Virtio block device is quite different from the
>>>>> traditional Xen PV block device (vbd) from the toolstack's point
>>>>> of view:
>>>>> - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
>>>>>   written to Xenstore is fetched by the frontend currently ("vdev"
>>>>>   is not passed to the frontend). But this might need to be revised
>>>>>   in the future, so frontend data might be written to Xenstore in
>>>>>   order to support hotplugging virtio devices or passing the backend
>>>>>   domain id on architectures where the device-tree is not available.
>>>>> - the ring-ref/event-channel are not used for the backend<->frontend
>>>>>   communication; the proposed IPC for Virtio is IOREQ/DM
>>>>> it is still a "block device" and ought to be integrated into the
>>>>> existing "disk" handling. So, re-use (and adapt) the "disk"
>>>>> parsing/configuration logic to deal with Virtio devices as well.
>>>>>
>>>>> For the immediate purpose, and for the ability to extend that support
>>>>> to other use-cases in the future (QEMU, virtio-pci, etc.), perform
>>>>> the following actions:
>>>>> - Add a new disk backend type (LIBXL_DISK_BACKEND_OTHER) and reflect
>>>>>   that in the configuration
>>>>> - Introduce new disk "specification" and "transport" fields in struct
>>>>>   libxl_device_disk. Both are written to Xenstore. The transport
>>>>>   field is only used for the specification "virtio" and it assumes
>>>>>   only the "mmio" value for now.
>>>>> - Introduce a new "specification" option with the "xen" communication
>>>>>   protocol being the default value.
>>>>> - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as the
>>>>>   current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
>>>>>   model.
>>>>>
>>>>> An example of domain configuration for Virtio disk:
>>>>> disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, 
>>>>> specification=virtio']
>>>>>
>>>>> Nothing has changed for default Xen disk configuration.
>>>>>
>>>>> Please note, this patch is not enough for virtio-disk to work
>>>>> on Xen (Arm), as for every Virtio device (including disk) we need
>>>>> to allocate Virtio MMIO params (IRQ and memory region), pass
>>>>> them to the backend, and update the guest device-tree. The subsequent
>>>>> patch will add these missing bits. For the current patch,
>>>>> the default "irq" and "base" are just written to Xenstore.
>>>>> This is not an ideal split, but this way we avoid breaking
>>>>> bisectability.
>>>>>
>>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> OK, I am *really* sorry for coming in here at the last minute and 
>>>> quibbling about things.
>>>
>>>
>>> no problem
>>>
>>>
>>>>   But this introduces a public interface which looks really wrong 
>>>> to me.  I’ve replied to the mail from December where Juergen 
>>>> proposed the “Other” protocol.
>>>>
>>>> Hopefully this will be a simple matter of finding a better name 
>>>> than “other”.  (Or you guys convincing me that “other” is really 
>>>> the best name for this value; or even Anthony asserting his right 
>>>> as a maintainer to overrule my objection if he thinks I’m out of 
>>>> line.)
>>>
>>>
>>> I saw your reply to V6 and Juergen's answer. I share Juergen's 
>>> opinion, and I also understand your concern. I think this is 
>>> exactly the situation where finding a perfect name (obvious, short, 
>>> etc.) for the backendtype (in our particular case) is really difficult.
>>>
>>> Personally, I tend to keep "other", because there is no better 
>>> alternative from my PoV. I might be completely wrong here, but 
>>> I don't think we will have to extend "backendtype" to support 
>>> other possible virtio backend implementations in the 
>>> foreseeable future:
>>>
>>> - when QEMU gains the required support, we will choose qdisk: 
>>>   backendtype qdisk, specification virtio
>>> - for a possible virtio alternative to blkback, we will choose phy: 
>>>   backendtype phy, specification virtio
>>>
>>> If there is ever a need to distinguish various implementations, we 
>>> will be able to describe that by using "transport" (mmio, pci, 
>>> xenbus, argo, whatever).
>>> Actually, this is why we also introduced "specification" and 
>>> "transport".
>>>
>>> IIRC, there were other suggested names besides "other", namely 
>>> "external" and "daemon". If you think that one of them is better 
>>> than "other", I will be happy to use it.
>>
>>
>> Could we please make a decision on this?
>>
>> If "other" is too ambiguous, then maybe we could choose "daemon" to 
>> describe arbitrary user-level backends, any thoughts?
>
> IMO this would exclude other cases, like special kernel drivers.


I got it.


>
>
> Maybe "standalone"? "only-relying-on-xenstore-data" would be a bit
> exaggerated, while conveying the idea quite nicely.


"standalone" sounds good to me, thank you. I will wait a little bit for 
other opinions before making changes.


>
>
> Juergen

-- 
Regards,

Oleksandr Tyshchenko
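
For reference, the backendtype/specification combinations discussed in this
thread would correspond to xl disk stanzas roughly like the following. Only
the backendtype=other form is introduced by this series; the qdisk and phy
virtio variants are speculative sketches of the future cases mentioned above:

```
# This series: standalone (out-of-QEMU) virtio-disk backend
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio' ]

# Speculative future: QEMU-provided virtio backend
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=qdisk, specification=virtio' ]

# Speculative future: in-kernel (blkback-like) virtio backend
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=phy, specification=virtio' ]
```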



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:37:05 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:37:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358667.587957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z2G-0001EW-8b; Thu, 30 Jun 2022 18:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358667.587957; Thu, 30 Jun 2022 18:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z2G-0001EP-4w; Thu, 30 Jun 2022 18:37:04 +0000
Received: by outflank-mailman (input) for mailman id 358667;
 Thu, 30 Jun 2022 18:37:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6z2E-0001EJ-CM
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 18:37:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z2E-0002Tg-7b; Thu, 30 Jun 2022 18:37:02 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z2D-0000ey-Tp; Thu, 30 Jun 2022 18:37:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=EfathmZv19nq32cxJTKgih5SUnMMa3ZNjaInUHYRRhE=; b=dkuK7T
	PrQjsr8TrzB4r/UwFeeG3A1Pk+OTM+OVUpgm3zFqTfTz2+dnWuIUUDYRmPbXG5UAoAQyg4jSwuGjc
	KOMNjPE0PZmrkIqI/7nx14yZFJ26FnH1OSBswI6I07cnO28vYMSJzcpolp16YTXv0VE18JmWLqdkt
	sDOYk76pZgQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Julien Grall <jgrall@amazon.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3] public/io: xs_wire: Document that new errors should be added at the end
Date: Thu, 30 Jun 2022 19:36:55 +0100
Message-Id: <20220630183655.44391-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Some tools (e.g. xenstored) always expect EINVAL to be first in xsd_errors.

To be conservative, mandate that new errors should be added at the end
of the array.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v3:
    - Mandate that new errors should be added at the end.

Changes in v2:
    - New patch
---
 xen/include/public/io/xs_wire.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/public/io/xs_wire.h b/xen/include/public/io/xs_wire.h
index c1ec7c73e3b1..a4d954cb05aa 100644
--- a/xen/include/public/io/xs_wire.h
+++ b/xen/include/public/io/xs_wire.h
@@ -76,6 +76,7 @@ static struct xsd_errors xsd_errors[]
 __attribute__((unused))
 #endif
     = {
+    /* /!\ New errors should be added at the end of the array. */
     XSD_ERROR(EINVAL),
     XSD_ERROR(EACCES),
     XSD_ERROR(EEXIST),
-- 
2.32.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:38:52 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358673.587968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z3z-00020C-Jh; Thu, 30 Jun 2022 18:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358673.587968; Thu, 30 Jun 2022 18:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z3z-000205-FR; Thu, 30 Jun 2022 18:38:51 +0000
Received: by outflank-mailman (input) for mailman id 358673;
 Thu, 30 Jun 2022 18:38:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6z3y-0001zz-A3
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 18:38:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z3y-0002VE-7n; Thu, 30 Jun 2022 18:38:50 +0000
Received: from [54.239.6.182] (helo=[10.95.73.91])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z3y-0000jw-1t; Thu, 30 Jun 2022 18:38:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OQpsZkzuNqW0jNbsgmu7rgznVeDDHQswnpLellJCMNg=; b=5O7HEOxJkS8ieLo8C7ru4cp8MO
	2O0VGE4829t78LgT/toCUbbhcDadcj7xvLLf7Z0GttOVqo0hXVfgyVT/OaarjflL3FEzkKkJ/5YNl
	lIjxh5JXkris3iMOKcose2TQapLRxIf5vwAW1jTHtDp7jtaNi0le5qgpwNA0pRUabtGc=;
Message-ID: <62b8dcc6-f2f0-7821-4402-9c4637a121c9@xen.org>
Date: Thu, 30 Jun 2022 19:38:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.11.0
Subject: Re: [PATCH v2 2/2] public/io: xs_wire: Allow Xenstore to report EPERM
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, Julien Grall <jgrall@amazon.com>
References: <20220627123635.3416-1-julien@xen.org>
 <20220627123635.3416-3-julien@xen.org>
 <2d95bb8c-c89d-c5d7-4171-12ba64721480@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <2d95bb8c-c89d-c5d7-4171-12ba64721480@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 27/06/2022 15:52, Juergen Gross wrote:
> On 27.06.22 14:36, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> C Xenstored is using EPERM when the client is not allowed to change
>> the owner (see GET_PERMS). However, the xenstore protocol doesn't
>> describe EPERM so EINVAL will be sent to the client.
>>
>> When writing test, it would be useful to differentiate between EINVAL
>> (e.g. parsing error) and EPERM (i.e. no permission). So extend
>> xsd_errors[] to support return EPERM.
>>
>> Looking at previous time xsd_errors was extended (8b2c441a1b), it was
>> considered to be safe to add a new error because at least Linux driver
>> and libxenstore treat an unknown error code as EINVAL.
>>
>> This statement doesn't cover other possible OSes, however I am not
>> aware of any breakage.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks. I have committed this patch and respun the first one.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:41:12 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358679.587979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z6C-0003MC-VK; Thu, 30 Jun 2022 18:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358679.587979; Thu, 30 Jun 2022 18:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z6C-0003M5-SE; Thu, 30 Jun 2022 18:41:08 +0000
Received: by outflank-mailman (input) for mailman id 358679;
 Thu, 30 Jun 2022 18:41:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6z6B-0003Lz-3k
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 18:41:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z6A-0002Zr-Nv; Thu, 30 Jun 2022 18:41:06 +0000
Received: from [54.239.6.182] (helo=[10.95.73.91])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z6A-0000nz-I2; Thu, 30 Jun 2022 18:41:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=oIjU8+M+d9QrrwfgrJY3gAoCaOODXXLEHfwbb9ZHaeA=; b=jYVnmmV0+PMkg5j/Jk4z12YNka
	DGlnHq3OM70iE13q7Gw+hs5ZBzF//oNF5JUAcNOaaC6MovBsx1gFqI3ndGrEcU0J4kUlnwPDEQCTj
	zz8Z5mhj7EKAqSuDyYafXsDqKEDVUFUYWxiA70Y4iOYDIHOtX1kX/sZX8qG7dv4saw3E=;
Message-ID: <d4c92197-2839-5daa-9bb3-32fa91ade7a0@xen.org>
Date: Thu, 30 Jun 2022 19:41:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.11.0
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
To: dmitry.semenets@gmail.com, xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220623074428.226719-1-dmitry.semenets@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Dmitry,

On 23/06/2022 08:44, dmitry.semenets@gmail.com wrote:
> From: Dmytro Semenets <dmytro_semenets@epam.com>
> 
> When shutting down (or rebooting) the platform, Xen will call stop_cpu()
> on all the CPUs but one. The last CPU will then request the system to
> shutdown/restart.
> 
> On platforms using PSCI, stop_cpu() will call PSCI CPU off. Per the spec
> (section 5.5.2 DEN0022D.b), the call could return DENIED if the Trusted
> OS is resident on the CPU that is about to be turned off.
> 
> As Xen doesn't migrate off the trusted OS (which BTW may not be
> migratable), it would be possible to hit the panic().
> 
> In the ideal situation, Xen should migrate the trusted OS or make sure
> the CPU off is not called. However, when shutting down (or rebooting)
> the platform, it is pointless to try to turn off all the CPUs (per
> section 5.10.2, it is only required to put the core in a known state).
> 
> So solve the problem by open-coding stop_cpu() in halt_this_cpu() and
> not calling PSCI CPU off.
> 
> Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>

Acked-by: Julien Grall <jgrall@amazon.com>

And committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 18:43:58 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 18:43:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358685.587989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z8u-0003z5-CS; Thu, 30 Jun 2022 18:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358685.587989; Thu, 30 Jun 2022 18:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o6z8u-0003yy-9n; Thu, 30 Jun 2022 18:43:56 +0000
Received: by outflank-mailman (input) for mailman id 358685;
 Thu, 30 Jun 2022 18:43:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o6z8t-0003ys-6N
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 18:43:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z8s-0002c4-OR; Thu, 30 Jun 2022 18:43:54 +0000
Received: from [54.239.6.182] (helo=[10.95.73.91])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o6z8s-0000sL-Hd; Thu, 30 Jun 2022 18:43:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gNfG5otd9B61VwaiFWzcbfMKVZ9WnkihdbAY+PXm4vs=; b=4Kiy9/kz+y2E6fCDfWo9gH3uSF
	Sb+Ut3jp3Nxqo9mwZwPaVShcICtz367jaDpkxc3np/FS7+siQB4iMLgYrHJsYzR9ps90ykqkNlbvZ
	OJMsB5elp4nYZpMU8WNGG1zsAstbWcVj2Ny1zPnjx7uHw4s8qxEkqNlTci8uKIHyb7ZQ=;
Message-ID: <137599b8-3088-733c-ec27-093b49673a19@xen.org>
Date: Thu, 30 Jun 2022 19:43:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.11.0
Subject: Re: [PATCH v2] docs/misra: Add instructions for cppcheck
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20220629125526.28190-1-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20220629125526.28190-1-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 29/06/2022 13:55, Luca Fancellu wrote:
> Add instructions on how to build cppcheck, document the version
> currently used, and give an example of using the cppcheck integration
> to run the analysis on the Xen codebase.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
> - typo fixes, removed build command line, rephrasing (Julien)
> ---
>   docs/misra/cppcheck.txt | 64 +++++++++++++++++++++++++++++++++++++++++

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 20:56:28 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 20:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358697.588009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71Cp-0001o9-Qc; Thu, 30 Jun 2022 20:56:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358697.588009; Thu, 30 Jun 2022 20:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71Cp-0001o2-Lf; Thu, 30 Jun 2022 20:56:07 +0000
Received: by outflank-mailman (input) for mailman id 358697;
 Thu, 30 Jun 2022 20:56:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o71Co-0001ns-7s; Thu, 30 Jun 2022 20:56:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o71Co-0004zf-4V; Thu, 30 Jun 2022 20:56:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o71Cn-00055T-Kq; Thu, 30 Jun 2022 20:56:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o71Cn-0000Lx-KN; Thu, 30 Jun 2022 20:56:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1OOXTFzbw9vLVxS6J8cfc83qcpurZ3DgcSSOp6dmF1E=; b=YLFvs8F5Igc4NOBmZ6tYuDNWLa
	lPP+5k3llUfOmj8nJvBW+dKLgHHhSxhwK6P/XSOocJLo5tl0OGlDo1o4AGKKydzFjJwP2H+lVklLC
	yX6dcncYXExC/SIlb04W6pbqJAIFpmN3+ren9zbJ9PPofdbtct3E+p/26TaPJoulsYcU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171428-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 171428: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=60d1adfa18793f4ddb70c8e901562d8d3e9be3dc
X-Osstest-Versions-That:
    xen=93aa071f66b78a2abbf134aeb96b02f066e6091d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 20:56:05 +0000

flight 171428 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/171428/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  60d1adfa18793f4ddb70c8e901562d8d3e9be3dc
baseline version:
 xen                  93aa071f66b78a2abbf134aeb96b02f066e6091d

Last test of basis   171391  2022-06-29 01:01:48 Z    1 days
Testing same since   171428  2022-06-30 18:01:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93aa071f66..60d1adfa18  60d1adfa18793f4ddb70c8e901562d8d3e9be3dc -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 21:03:34 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 21:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358705.588020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71Jx-0003TC-I7; Thu, 30 Jun 2022 21:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358705.588020; Thu, 30 Jun 2022 21:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71Jx-0003T5-Dk; Thu, 30 Jun 2022 21:03:29 +0000
Received: by outflank-mailman (input) for mailman id 358705;
 Thu, 30 Jun 2022 21:03:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXfy=XF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o71Jw-0003Sw-44
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 21:03:28 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 150a8efe-f8b8-11ec-bdce-3d151da133c5;
 Thu, 30 Jun 2022 23:03:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id AE37BB82D3B;
 Thu, 30 Jun 2022 21:03:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D1C71C341C7;
 Thu, 30 Jun 2022 21:03:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 150a8efe-f8b8-11ec-bdce-3d151da133c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656623003;
	bh=NpgX41PYo5zn4S50qz5WRQOVMLgSVQH9jPYuQuHQiuU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=grEfmkO2WilLd1mTwB1B8FFq8FpqEtIrqrCpzsoxPBad69yojONdgIIw0fiu+FAFo
	 KSKWzn81uXReJsZOVKJ8pg6Q0rX48A6Y4ED+rH3pRPyxXbg207a/nft+MXzyxoenbq
	 U1Zq7E/PSaYdPfR5WMKv7RC47Wl4faqJATwqzT8yGuMRc928kDUnoA6Oe1naaKgPGU
	 HoK447R/ndH1T1gW8TM/KZaJ41UzHhbDoV7JoPE4pn+HkGSbOvf0J6WQwycZSYQNNu
	 +Htimb3AVBqzjX/TomaCzrlM8bgdwj3lIeHXvTh+nrzRAgdetrZnV0dgP3Am0oJBWm
	 I0d0yUR5wMSDw==
Date: Thu, 30 Jun 2022 14:03:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Luca Fancellu <Luca.Fancellu@arm.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, 
    Elena Ufimtseva <elena.ufimtseva@oracle.com>, Tim Deegan <tim@xen.org>, 
    Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
    "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Juergen Gross <jgross@suse.com>, 
    Christian Lindig <christian.lindig@citrix.com>, 
    David Scott <dave@recoil.org>
Subject: Re: [XEN PATCH v3 25/25] tools: Remove -Werror everywhere else
In-Reply-To: <6CA16D2D-8AD6-415A-A98D-00B27F91C4DA@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206301348380.4389@ubuntu-linux-20-04-desktop>
References: <20220624160422.53457-1-anthony.perard@citrix.com> <20220624160422.53457-26-anthony.perard@citrix.com> <BF28045C-0848-4B5A-8DAB-57192C7B4A18@arm.com> <alpine.DEB.2.22.394.2206291019550.4389@ubuntu-linux-20-04-desktop>
 <6CA16D2D-8AD6-415A-A98D-00B27F91C4DA@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-516487629-1656622670=:4389"
Content-ID: <alpine.DEB.2.22.394.2206301358060.4389@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-516487629-1656622670=:4389
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2206301358061.4389@ubuntu-linux-20-04-desktop>

On Thu, 30 Jun 2022, Bertrand Marquis wrote:
> > On 29 Jun 2022, at 18:22, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 29 Jun 2022, Luca Fancellu wrote:
> >> + CC: Stefano Stabellini
> >> 
> >>> On 24 Jun 2022, at 17:04, Anthony PERARD <anthony.perard@citrix.com> wrote:
> >>> 
> >>> Patch "tools: Add -Werror by default to all tools/" has added
> >>> "-Werror" to CFLAGS in tools/Rules.mk; remove it from every other
> >>> makefile as it is now duplicated.
> >>> 
> >>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> >> 
> >> Hi Anthony,
> >> 
> >> I will try to review the series when I manage to have some time. In the meantime I can say the whole
> >> series builds fine in my Yocto setup on arm64 and x86_64. I have also tried the tool stack to
> >> create/destroy/console guests with no problems so far.
> >> 
> >> The only problem I have is building for arm32 because, I think, this patch does a great job and it
> >> discovers a problem here:
> > 
> > That reminds me that we only have arm32 Xen hypervisor builds in
> > gitlab-ci, we don't have any arm32 Xen tools builds. I'll add it to my
> > TODO but if someone (not necessarily Luca) has some spare time it could
> > be a nice project. It could be done with Yocto by adding a Yocto build
> > container to automation/build/.
> 
> We now have a way to build and run Xen for arm32 on QEMU using Yocto.
> We are using this internally and will also soon test Xen with guests on arm32 this way.
> 
> I am upstreaming all the fixes needed for that to meta-virtualization, so it should be fairly straightforward to reproduce this with a Yocto build in a container.
> 
> Please tell me what you need and I will try to provide a set of scripts or instructions to reproduce that on gitlab.

That would be great!

We need two things:

- a Yocto build container
- a build script


The build container would be something like:
automation/build/debian/unstable-arm64v8.dockerfile. It is a Dockerfile
to create a container with Yocto and all required dependencies. It could
be based on Debian arm64. The build container is only built once and
pushed to the gitlab registry, but it is executed every time a gitlab
pipeline is started.

We probably want the meta layers to be pulled as part of the build
container build (git clone from the Dockerfile), because otherwise we
would end up git cloning them every time we run a gitlab-ci pipeline,
slowing everything down.
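For illustration, a minimal sketch of what such a Dockerfile could look like. This is untested and every detail here (base image, package list, layer branches, clone location) is an assumption, not part of the existing automation:

```dockerfile
# Hypothetical automation/build sketch for a Yocto arm32 container
FROM arm64v8/debian:bullseye
ENV DEBIAN_FRONTEND=noninteractive

# Typical host dependencies for a Yocto build (list is an assumption)
RUN apt-get update && apt-get install -y \
    gawk wget git diffstat unzip texinfo gcc build-essential \
    chrpath socat cpio python3 python3-pip xz-utils file locales && \
    rm -rf /var/lib/apt/lists/*
RUN locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8

# Pull the meta layers once, at container build time, so pipelines
# don't re-clone them on every run (branch names are assumptions)
RUN mkdir /yocto && cd /yocto && \
    git clone -b kirkstone git://git.yoctoproject.org/poky && \
    git clone -b kirkstone git://git.yoctoproject.org/meta-virtualization
```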


The build script is the script executed in the build container for every
pipeline.

Imagine you have a container "yocto-arm32"; basically we want to do:

# docker run yocto-arm32 build.script

where build.script is the script that actually triggers the Xen build
and produces the binary output.

The current build script is automation/scripts/build; it is used for all
build containers (all of them, from Debian to Fedora and Alpine) but it
is probably not suitable for Yocto. It simply calls
./configure; make; make install, which is more appropriate for normal distros.

I imagine that the build script for Yocto would call bitbake.
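As a sketch of what that could look like (untested; the script name, build directory, machine and recipe names are all assumptions):

```sh
#!/bin/bash
# Hypothetical automation/scripts/build-yocto sketch.
set -e

cd /yocto
# Standard Yocto environment setup, then add the virtualization layer
source poky/oe-init-build-env build
bitbake-layers add-layer ../meta-virtualization

# Point the build at an arm32 QEMU machine and the revision under test
# (MACHINE value and SRCREV override are assumptions)
echo 'MACHINE = "qemuarm"' >> conf/local.conf
echo "SRCREV:pn-xen = \"${CI_COMMIT_SHA}\"" >> conf/local.conf

bitbake xen
```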


With the build container Dockerfile and the build script it becomes
very simple to add Yocto arm32 to gitlab-ci.

I realize that the actual build could be done on either arm64 or x86.
Currently the arm32 hypervisor-only cross-build is done on x86; see
automation/build/debian/unstable-arm32-gcc.dockerfile. Either way is OK.
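Putting the two pieces together, the gitlab-ci job could be a fragment along these lines (job name, registry path, and runner tag are assumptions, just following the pattern of the existing build jobs):

```yaml
# Hypothetical .gitlab-ci.yml fragment
yocto-qemuarm-arm32:
  stage: build
  image: registry.gitlab.com/xen-project/xen/yocto-arm32:latest
  script:
    - ./automation/scripts/build-yocto 2>&1 | tee build.log
  artifacts:
    paths:
      - build.log
  tags:
    - arm64
```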
--8323329-516487629-1656622670=:4389--


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 21:14:26 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 21:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358711.588031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71UT-00056w-Gv; Thu, 30 Jun 2022 21:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358711.588031; Thu, 30 Jun 2022 21:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o71UT-00056p-E6; Thu, 30 Jun 2022 21:14:21 +0000
Received: by outflank-mailman (input) for mailman id 358711;
 Thu, 30 Jun 2022 21:14:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXfy=XF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o71UR-00056j-SY
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 21:14:19 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a5f3514-f8b9-11ec-bd2d-47488cf2e6aa;
 Thu, 30 Jun 2022 23:14:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id D8857B82D04;
 Thu, 30 Jun 2022 21:14:17 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33E10C34115;
 Thu, 30 Jun 2022 21:14:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a5f3514-f8b9-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656623656;
	bh=VaAsnbF/EFCNrpyEEoBZj7IbJxjprPHTrNMhmwvL5o0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DMZPmMGkAXgUZJpWafB5CO0Q4LvcpIqN8atvkiKaJzgNXYcB6jky4BWJPaTeQ2Te+
	 TevfuylY0LmNCqAWTUBZ2zZrjJ8RPNyNbQrqFJdaMyVFy0I/qQhSY05GXTYYxmCD/i
	 T53I73xFLlzE9idfO/fdpipC9HYmNMsjRV3Y4Sha6v9m9EZK90Z5HfV3wT+CSQm6uO
	 pY7fqKQuluHnbz8Q3/0zlluSslv3xwbzhlZsBEf09mGjb17B6mdj6pCNHv/IV5twcL
	 AarIOBqL/Q9++tuqwYPijmxtu+Y7MS43KToZTtooX6/UP8nB3c5nSmJwZPhHISDUDs
	 JqT1/GeECz+hg==
Date: Thu, 30 Jun 2022 14:14:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "dmitry.semenets@gmail.com" <dmitry.semenets@gmail.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Dmytro Semenets <dmytro_semenets@epam.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen: arm: Don't use stop_cpu() in halt_this_cpu()
In-Reply-To: <14736B47-2F17-4684-9162-17C3E55F8D15@arm.com>
Message-ID: <alpine.DEB.2.22.394.2206301404410.4389@ubuntu-linux-20-04-desktop>
References: <20220623074428.226719-1-dmitry.semenets@gmail.com> <alpine.DEB.2.22.394.2206231457250.2410338@ubuntu-linux-20-04-desktop> <e60a4e68-ed00-6cc7-31ca-64bcfc4bbdc5@xen.org> <alpine.DEB.2.22.394.2206241414420.2410338@ubuntu-linux-20-04-desktop>
 <5c986703-c932-3c7d-3756-2b885bb96e42@xen.org> <alpine.DEB.2.22.394.2206281538320.4389@ubuntu-linux-20-04-desktop> <26a1b208-7192-a64f-ca6d-c144de89ed2c@xen.org> <alpine.DEB.2.22.394.2206291014000.4389@ubuntu-linux-20-04-desktop>
 <14736B47-2F17-4684-9162-17C3E55F8D15@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 30 Jun 2022, Bertrand Marquis wrote:
> > On 29 Jun 2022, at 18:19, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Wed, 29 Jun 2022, Julien Grall wrote:
> >> On 28/06/2022 23:56, Stefano Stabellini wrote:
> >>>> The advantage of the panic() is that it will remind us that something
> >>>> needs to be fixed.
> >>>> With a warning (or WARN()) people will tend to ignore it.
> >>> 
> >>> I know that this specific code path (cpu off) is probably not super
> >>> relevant for what I am about to say, but as we move closer to safety
> >>> certifiability we need to get away from using "panic" and BUG_ON as a
> >>> reminder that more work is needed to have a fully correct implementation
> >>> of something.
> >> 
> >> I don't think we have many places at runtime using BUG_ON()/panic(). They are
> >> often used because we think Xen would not be able to recover if the condition
> >> is hit.
> >> 
> >> I am happy to remove them, but this should not be at the expense to introduce
> >> other potential weird bugs.
> >> 
> >>> 
> >>> I also see your point and agree that ASSERT is not acceptable for
> >>> external input but from my point of view panic is the same (slightly
> >>> worse because it doesn't go away in production builds).
> >> 
> >> I think it depends on your target. Would you be happy if Xen continue to run
> >> with potentially a fatal flaw?
> > 
> > Actually, this is an excellent question. I don't know what is the
> > expected behavior from a safety perspective in case of serious errors.
> > How the error should be reported and whether continuing or not is
> > recommended. I'll try to find out more information.
> 
> I think there are 2 answers to this:
> - as much as possible: those cases must be avoided, and it must be demonstrated that they are impossible (and hence removed), or they must turn the system into a failsafe mode so that actions can be handled (usually a reboot after saving some data)
> - in some cases this can be robustness code (more for security)
> 
> I think in our case, if we know that we are ending up in a state where the system is unstable, we should:
> - stop the guest responsible for this (if a guest is the origin) or return an error to the guest and cancel the operation if suitable
> - panic if this is internal or dom0
> 
> A warning saying that something unsupported was done, resulting in unexpected behaviour, is for sure not acceptable.


Let's say that we demonstrate that a problematic case is impossible: can
we still have a panic in the code? For instance:

ret = firmware_call();
if ( ret )
    panic("Unexpected firmware call failure\n");

We know ret is always zero unless firmware is buggy or not
spec-compliant. Can the panic() still be present?


And/or do we need to replace all instances of "panic" with going into
"failsafe mode", which saves state and reboots, so it is not actually so
dissimilar from panic?


In case of guest-initiated unexpected errors we already try to crash the
guest responsible rather than the entire system, because it is also a
matter of security (possible DoS). That is clear.

So it is the other kinds of unexpected errors, mostly due to unexpected
hardware or firmware behavior, or to Xen finding itself in a state of a
state machine that should be impossible, for which we don't have a clear
way to proceed.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 21:50:07 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 21:50:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358724.588042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o722y-0000lQ-Cz; Thu, 30 Jun 2022 21:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358724.588042; Thu, 30 Jun 2022 21:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o722y-0000lJ-9x; Thu, 30 Jun 2022 21:50:00 +0000
Received: by outflank-mailman (input) for mailman id 358724;
 Thu, 30 Jun 2022 21:49:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o722x-0000lD-PE
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 21:49:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o722r-0005ts-69; Thu, 30 Jun 2022 21:49:53 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o722q-0001iP-Up; Thu, 30 Jun 2022 21:49:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=Nxr2urabtnXbbACzZJT+wojuonK0+VBifsLRBxz7Fak=; b=xhA6tkMkfLH4DNSUJ5v06uMEy6
	TLrCBCxXDohfw28Tfn3zozLu77yE3El2ndoR2lQGvtgkLsGG/PMTiLtTkzEZEACJMQOptrlFJkDUq
	9X3Z36V0tAyA2+ew4Nkn7lWOgw4+l2VnqNe4p9e7fsuyRDdwpYLCaznBVBt0ryazv0+U=;
Message-ID: <ef5bf7df-ad8a-e420-0fc4-d8f0a0e0f2fc@xen.org>
Date: Thu, 30 Jun 2022 22:49:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
To: Jan Beulich <jbeulich@suse.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <1652294845-13980-1-git-send-email-olekstysh@gmail.com>
 <632404c3-b285-753d-6644-bccbc17d42c0@xen.org>
 <15da3838-aa8e-57c3-b9e2-6c0a4a639fb0@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH V6 1/2] xen/gnttab: Store frame GFN in struct page_info on
 Arm
In-Reply-To: <15da3838-aa8e-57c3-b9e2-6c0a4a639fb0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 24/06/2022 07:45, Jan Beulich wrote:
> On 23.06.2022 19:50, Julien Grall wrote:
>> On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
>>> @@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
>>>        }
>>>    
>>>        /* Map at new location. */
>>> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>>> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
>>> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
>>
>> I would expand the comment above to explain why you need a different
>> path for xenheap mapped as RAM. AFAICT, this is because we need to call
>> page_set_xenheap_gfn().
>>
>>> +    else
>>> +    {
>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        p2m_write_lock(p2m);
>>> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
>>
>> Sorry to only notice it now. This check will also change the behavior
>> for XENMAPSPACE_shared_info. Now, we are only allowed to map the shared
>> info once.
>>
>> I believe this is fine because AFAICT x86 already prevents it. But this
>> is probably something that ought to be explained in the already long
>> commit message.
> 
> If by "prevent" you mean x86 unmaps the page from its earlier GFN, then
> yes. But this means that Arm would better follow that model instead of
> returning -EBUSY in this case. Just think of kexec-ing or a boot loader
> wanting to map shared info or grant table: There wouldn't necessarily
> be an explicit unmap.

I spent some time thinking about this. There is a potentially big issue 
with implicitly unmapping the page from its earlier GFN. Imagine the boot 
loader decided to map the page in place of RAM.

If the boot loader didn't unmap the page, then when the OS maps it again, 
we would have a hole in the RAM. The OS may not know that, and it may end 
up using the page as RAM (and crash).

The problem is the same for kexec and AFAIU that's why we need to use 
soft-reset when kexec-ing.

So overall, I think we should prevent the implicit unmap. It would help 
to enforce that the bootloader (or any other component) cleans up behind 
itself (i.e. unmaps the page and repopulates if necessary).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 22:35:47 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 22:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358730.588053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o72ky-0006RR-MY; Thu, 30 Jun 2022 22:35:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358730.588053; Thu, 30 Jun 2022 22:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o72ky-0006RK-IK; Thu, 30 Jun 2022 22:35:28 +0000
Received: by outflank-mailman (input) for mailman id 358730;
 Thu, 30 Jun 2022 22:35:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXfy=XF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1o72kx-0006RE-VT
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 22:35:27 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef3ab96c-f8c4-11ec-bd2d-47488cf2e6aa;
 Fri, 01 Jul 2022 00:35:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 818F162450;
 Thu, 30 Jun 2022 22:35:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2BB49C34115;
 Thu, 30 Jun 2022 22:35:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef3ab96c-f8c4-11ec-bd2d-47488cf2e6aa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1656628523;
	bh=FqKiq49jYP+9mSdTnz792yh57Uv7qw8s2Q2nrVlanUo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Jqj8QyAgBUZgZEK7b932YKE9QudogoLcm7OZTEMkAvRWHkaw1YllqQjPg/VjJfASt
	 EL+6F2/ImNjitB7yudzm8ZxNDWblLV0l1kAAETb6Lpp79+3KC0xcole9WJ3QmNSl8V
	 LJPVtWB3mllCbswBDmMxaTMejMaE5uA2vCMdwsLzgkxtnNm5Q9sgbcy/4qtRe9FgBK
	 RPzfI9ILUsFmGXkCCfBg9LHHRAZ4opHQZVGOvC9SRfYedMg6mp49+th3Z5u1Zt05Vj
	 iqj6JhGbamfblRuA783fRLfPLEjW0Tro2r2qbojuJYzCWcEvWMgrUxybGnWYnCGxyK
	 2ZcjHBI2AspAA==
Date: Thu, 30 Jun 2022 15:35:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
cc: xen-devel@lists.xenproject.org, scott.davis@starlab.io, jandryuk@gmail.com, 
    christopher.clark@starlab.io, sstabellini@kernel.org, 
    dgdegra@tycho.nsa.gov, jbeulich@suse.com, julien@xen.org, 
    george.dunlap@citrix.com, andrew.cooper3@citrix.com, dfaggioli@suse.com
Subject: Re: [PATCH v9 0/3] Adds starting the idle domain privileged
In-Reply-To: <20220630022110.31555-1-dpsmith@apertussolutions.com>
Message-ID: <alpine.DEB.2.22.394.2206301529570.4389@ubuntu-linux-20-04-desktop>
References: <20220630022110.31555-1-dpsmith@apertussolutions.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 29 Jun 2022, Daniel P. Smith wrote:
> This series makes it so that the idle domain is started privileged under the
> default policy, which the SILO policy inherits, and under the flask policy. It
> then introduces a new one-way XSM hook, xsm_transition_running, that is hooked
> by an XSM policy to transition the idle domain to its running privilege level.
> 
> Patch 3 is an important one, as first it addresses the issue raised under an
> RFC late last year by Jason Andryuk regarding the awkward entanglement of
> flask_domain_alloc_security() and flask_domain_create(). Second, it helps
> articulate why it is that the hypervisor should go through the access control
> checks, even when it is doing the action itself. The issue at hand is not that
> the hypervisor could be influenced to go around these checks. The issue is that these
> checks provide a configurable way to express the execution flow that the
> hypervisor should enforce. Specifically with this change, it is now possible
> for an owner of a dom0less or hyperlaunch system to express a policy where the
> hypervisor will enforce that no dom0 will be constructed, regardless of what
> boot construction details were provided to it. Likewise, an owner that does not
> want to see dom0less or hyperlaunch used can enforce that the hypervisor
> will only construct a dom0 domain. This can all be accomplished without the
> need to rebuild the hypervisor with these features enabled or disabled.


It looks like this patch series is fully acked except:
- in theory we need an ack from Daniel for flask
- there is a very small change to sched that would need an ack from
  George/Dario


I think we should commit the series in a couple of days unless someone
spots an issue with it. Let me know if you have any concerns with this.

The minimal request to improve the in-code comment by Jan could be done
on commit.

Note that committing this series would also have the benefit of
unblocking the ARM gitlab-ci pipeline.


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 23:09:53 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 23:09:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358736.588064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o73IA-00023q-9b; Thu, 30 Jun 2022 23:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358736.588064; Thu, 30 Jun 2022 23:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o73IA-00023j-6d; Thu, 30 Jun 2022 23:09:46 +0000
Received: by outflank-mailman (input) for mailman id 358736;
 Thu, 30 Jun 2022 23:09:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1o73I9-00023N-1R
 for xen-devel@lists.xenproject.org; Thu, 30 Jun 2022 23:09:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o73I8-0007GC-PL; Thu, 30 Jun 2022 23:09:44 +0000
Received: from home.octic.net ([81.187.162.82] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1o73I8-0005iM-Ju; Thu, 30 Jun 2022 23:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=Sa4s+uNznVEw+rdyxP8ELatB26V7a3kZnd7VQ97RpNY=; b=HpifP0sgPr4COVFnGgbkZBpfXp
	4XbuJBQL75t3+WntrXRUkuYkpSzffv9KG8WJTz1KZfqGh06UyRkZfBsUxFQZ55quZr+lbR54j+f9e
	gdmqqYUzsAxLaOWU7KI80Jo8z1nhmdQCmnlGgd7ObN40z8xd6y8PnnQOD6z4q9u9TjKw=;
Message-ID: <d76ea85c-cb3b-ab03-ac8f-f146722389f1@xen.org>
Date: Fri, 1 Jul 2022 00:09:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0)
 Gecko/20100101 Thunderbird/91.10.0
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220624091146.35716-1-julien@xen.org>
 <20220624091146.35716-6-julien@xen.org>
 <c05cb05f-0344-19d2-4f8f-caa904c374dc@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 5/7] xen/arm32: mm: Consolidate the domheap mappings
 initialization
In-Reply-To: <c05cb05f-0344-19d2-4f8f-caa904c374dc@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 27/06/2022 08:24, Michal Orzel wrote:
> On 24.06.2022 11:11, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, the domheap mappings initialization is done separately for
>> the boot CPU and secondary CPUs. The main difference is for the former
>> the pages are part of Xen binary whilst for the latter they are
>> dynamically allocated.
>>
>> It would be good to have a single helper so it is easier to rework
>> how the domheap is initialized.
>>
>> For CPU0, we still need to use pre-allocated pages because the
>> allocators may use domain_map_page(), so we need to have the domheap
>> area ready first. But we can still delay the initialization to setup_mm().
>>
>> Introduce a new helper domheap_mapping_init() that will be called
> FWICS the function name is init_domheap_mappings.

I will update it.

> 
>> from setup_mm() for the boot CPU and from init_secondary_pagetables()
>> for secondary CPUs.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/include/asm/arm32/mm.h |  2 +
>>   xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
>>   xen/arch/arm/setup.c                |  8 +++
>>   3 files changed, 71 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
>> index 6b039d9ceaa2..575373aeb985 100644
>> --- a/xen/arch/arm/include/asm/arm32/mm.h
>> +++ b/xen/arch/arm/include/asm/arm32/mm.h
>> @@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>>       return false;
>>   }
>>   
>> +bool init_domheap_mappings(unsigned int cpu);
>> +
>>   #endif /* __ARM_ARM32_MM_H__ */
>>   
>>   /*
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 20733afebce4..995aa1e4480e 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -372,6 +372,58 @@ void clear_fixmap(unsigned map)
>>   }
>>   
>>   #ifdef CONFIG_DOMAIN_PAGE
>> +/*
>> + * Prepare the area that will be used to map domheap pages. They are
>> + * mapped in 2MB chunks, so we need to allocate the page-tables up to
>> + * the 2nd level.
>> + *
>> + * The caller should make sure the root page-table for @cpu has been
>> + * been allocated.
> Second 'been' not needed.

I will drop it.

> 
>> + */
>> +bool init_domheap_mappings(unsigned int cpu)
>> +{
>> +    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
>> +    lpae_t *root = per_cpu(xen_pgtable, cpu);
>> +    unsigned int i, first_idx;
>> +    lpae_t *domheap;
>> +    mfn_t mfn;
>> +
>> +    ASSERT(root);
>> +    ASSERT(!per_cpu(xen_dommap, cpu));
>> +
>> +    /*
>> +     * The domheap for cpu0 is set up before the heap is initialized. So we
>> +     * need to use pre-allocated pages.
>> +     */
>> +    if ( !cpu )
>> +        domheap = cpu0_dommap;
>> +    else
>> +        domheap = alloc_xenheap_pages(order, 0);
>> +
>> +    if ( !domheap )
>> +        return false;
>> +
>> +    /* Ensure the domheap has no stray mappings */
>> +    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
>> +
>> +    /*
>> +     * Update the first level mapping to reference the local CPUs
>> +     * domheap mapping pages.
>> +     */
>> +    mfn = virt_to_mfn(domheap);
>> +    first_idx = first_table_offset(DOMHEAP_VIRT_START);
>> +    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
>> +    {
>> +        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
> I might have missed sth but shouldn't 'i' be multiplied by XEN_PT_LPAE_ENTRIES?
Each table takes one page. As we are dealing with MFNs, we only need
to increment by 1 each time.

The reason the previous code was multiplying by XEN_PT_LPAE_ENTRIES is
that it was using a virtual address whose type was lpae_t and then
calling virt_to_mfn().

But this is pointless as the domheap page-tables are, so far, both
virtually and physically contiguous.
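To make the arithmetic concrete, here is a small sketch (not Xen code;
the toy virt_to_mfn() translation, base addresses, and page count are
illustrative assumptions). Each second-level table is exactly one page
(512 lpae_t entries of 8 bytes = 4096 bytes), so stepping a lpae_t
pointer by XEN_PT_LPAE_ENTRIES and translating lands on the same frame
as translating once and adding 1 to the MFN, provided the tables are
both virtually and physically contiguous:

```python
# Sketch: why "mfn + i" matches the old pointer arithmetic when the
# domheap page-tables are virtually AND physically contiguous.

PAGE_SIZE = 4096
LPAE_ENTRY_SIZE = 8                                   # sizeof(lpae_t)
XEN_PT_LPAE_ENTRIES = PAGE_SIZE // LPAE_ENTRY_SIZE    # 512 entries/page

def virt_to_mfn(va, phys_base=0x8000_0000, virt_base=0x4000_0000):
    """Toy translation: a single linear mapping, so contiguity holds."""
    return (phys_base + (va - virt_base)) // PAGE_SIZE

domheap_va = 0x4000_0000      # hypothetical base of the table array

for i in range(4):            # e.g. DOMHEAP_SECOND_PAGES tables
    # Old approach: advance a lpae_t pointer by one whole table per
    # iteration, then translate the resulting virtual address.
    old_mfn = virt_to_mfn(domheap_va +
                          i * XEN_PT_LPAE_ENTRIES * LPAE_ENTRY_SIZE)
    # New approach: translate once, then add i frames.
    new_mfn = virt_to_mfn(domheap_va) + i
    assert old_mfn == new_mfn
```

If the tables were only virtually contiguous, the old pointer-based
translation would still be correct while the "mfn + i" shortcut would
not, which is why the contiguity assumption matters.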

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 30 23:29:31 2022
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jun 2022 23:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.358742.588074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o73bB-0004ke-T8; Thu, 30 Jun 2022 23:29:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 358742.588074; Thu, 30 Jun 2022 23:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1o73bB-0004kX-QO; Thu, 30 Jun 2022 23:29:25 +0000
Received: by outflank-mailman (input) for mailman id 358742;
 Thu, 30 Jun 2022 23:29:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o73b9-0004kN-R6; Thu, 30 Jun 2022 23:29:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o73b9-0007b9-Of; Thu, 30 Jun 2022 23:29:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1o73b9-0002v0-CK; Thu, 30 Jun 2022 23:29:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1o73b9-0008S7-7X; Thu, 30 Jun 2022 23:29:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pa5t3laHQwIrVGt2RuLpLNqlKTO3fmwNF6XPsHzhClQ=; b=4ovc4g33zmXJSQuUeIB4jiavtQ
	w6LymO07mJIdjdSskWgvwj7nA/kBIT9s6Fk3KWrdPG4f1A7UDWnIYylvZoZJWfpw4Oo8jP37+sfT4
	mU6l1gxx1EW5xi7dK3A6eplcEPonF2uNQbafyH+kG4EdDL0kG/hl1/D1cftvyY936ckw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-171424-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 171424: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=621745c4f349ac09b72706c46febee983abca916
X-Osstest-Versions-That:
    qemuu=2a8835cb45371a1f05c9c5899741d66685290f28
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jun 2022 23:29:23 +0000

flight 171424 qemu-mainline real [real]
flight 171432 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/171424/
http://logs.test-lab.xenproject.org/osstest/logs/171432/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail pass in 171432-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 171393
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 171393
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 171393
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 171393
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 171393
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 171393
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 171393
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 171393
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                621745c4f349ac09b72706c46febee983abca916
baseline version:
 qemuu                2a8835cb45371a1f05c9c5899741d66685290f28

Last test of basis   171393  2022-06-29 02:25:02 Z    1 days
Failing since        171412  2022-06-29 23:38:23 Z    0 days    2 attempts
Testing same since   171424  2022-06-30 12:22:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bernhard Beschow <shentey@gmail.com>
  Denis V. Lunev <den@openvz.org>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Guo Zhi <qtxuning1999@sjtu.edu.cn>
  Jason Wang <jasowang@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lev Kujawski <lkujaw@member.fsf.org>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   2a8835cb45..621745c4f3  621745c4f349ac09b72706c46febee983abca916 -> upstream-tested


